Columns: repo_name (string, 6-77 chars); path (string, 8-215 chars); license (string, 15 classes); cells (list); types (list)
zhouqifanbdh/liupengyuan.github.io
chapter2/homework/computer/4-19/201611680283.ipynb
mit
[ "练习一:自己定义一个reverse(s)函数,功能返回字符串s的倒序字符串。", "def reverse(s):\n print(s[0:len(s)])\n print(s[len(s)-1:0:-1]+s[0])\n \ns=input('请输入一个字符串,以回车结束:')\nreverse(s)", "练习二:写函数,根据给定符号和行数,打印相应直角三角形,等腰三角形及其他形式的三角形。", "def printzj(fuhao,k):\n for i in range(1,k+1):\n print(fuhao*i)\n\ndef printdy(fuhao,k):\n for i in range(1,k+1):\n if i==1:\n print(' '*(k-1) + fuhao)\n elif i%2==0:\n print(' '*(k-i) + fuhao*(i-1)*2+fuhao)\n else:\n print(' '*(k-i)+fuhao*(i-1)*2+fuhao)\n \n\ndef printdyzj(fuhao,k):\n for i in range(1,k+1):\n print(fuhao*i)\n \nk=int(input('请输入所需行数:')) \nfuhao='.'\nprintzj(fuhao,k)\nprint('-'*10+'分割线'+'-'*10)\nprintdy(fuhao,k)\nprint('-'*10+'分割线'+'-'*10)\nprintdyzj(fuhao,k)", "练习三:将任务4中的英语名词单数变复数的函数,尽可能的考虑多种情况,重新进行实现。", "def n_change(a):\n k=len(a) \n if a[k-1:k] in ['o','s','x']:\n print (a,'es',sep='')\n elif a[k-2:k] in ['ch','sh']:\n print(a,'es',sep='')\n elif a[k-1:k]=='y' and a[k-2:k-1] in ['a','e','i','o','u']:\n print(a,'s',sep='')\n elif a[k-1:k]=='y':\n print(a[:-1],'ies',sep='')\n else:\n print(a,'s',sep='')\n\na=input('请输入一个单词:')\nn_change(a) ", "练习四:写函数,根据给定符号,上底、下底、高,打印各种梯形。\n练习五:写函数,根据给定符号,打印各种菱形。\n练习六:与本小节任务基本相同,但要求打印回文字符倒三角形。" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
limpapud/data_science_tutorials_projects
DataScience_Tutorials/AZ/Pandas_DF_Creation.ipynb
mit
[ "Pandas: DataFreym yaratmağın müxtəlif üsulları\nİlk öncə pandas-da verilənlərin 2D əndazəli forma (sadə dil ilə \"cədvəl\") daxilində saxlanma vasitəsi və forması olan Dataframe yaratmaqdan başlayaq. Bu dərs tam olaraq müxtəlif mənbələrdən və formalarda əldə edilmiş məlumatı Datafreymə çevirmək yolları haqqında bəhs edəcək.\nPandas ilə iş aparmaq üçün modulu əvvəlcə import edək:", "import pandas as pd", "Datafreymin əsasına yerləşəcək verilənlər mənbə üzrə üzrə daxili və xarici formalara bölünür:\nPython daxili mənbələr:\nSiyahı: sətr əsaslı:\n\nİlk misal cədvəl daxilində dəyərlərin ard-arda, sətr-bə-sətr, siyahı daxilində DataFreymə daxil edilməsidir.\nSQL ilə tanış olan oxuyuculara bu dil daxilində İNSERT əmrin yada sala bilər.\nMəsələn, eyni sorğu SQL vasitəsi ilə aşağıda qeyd olunmuş formada icra oluna bilər\nİlk öncə kortejlərdən ibarət siyahı yaradırıq və onu email_list_lst adlı dəyişənə təhkim edirik.", "email_list_lst=[('Omar','Bayramov','omarbayramov@hotmail.com',1),\n ('Ali','Aliyev','alialiyev@example.com',0),\n ('Dmitry','Vladimirov','v.dmitry@koala.kl',1),\n ('Donald','Trump','grabthat@pussycatdolls.com',1),\n ('Rashid','Maniyev','rashid.maniyev@exponential.az',1),\n ]", "Növbəti email_list_lst_cln dəyşəninə isə sütun adlarından ibarət siyahı təhkim edirik.", "email_list_lst_cln=['f_name','l_name','email_adrs','a_status',]", "Nəhayət, DataFrame-nin \"from_records\" funksiyasına email_list_lst və email_list_lst_cln dəyərlərini ötürüb email_list_lst dəyərlərindən email_list_lst_cln sütunları ilə cədvəl yaradırıq və sonra cədvəli əks etdiririk.", "df=pd.DataFrame.from_records(email_list_lst, columns=email_list_lst_cln)\ndf", "Siyahı: sütun əsaslı:\nƏvvəlki misaldan fərqli olaraq bu dəfə məlumatı sütun-sütun qəbul edib cədvələ ötürən yanaşmadan istifadə olunacaq. Bunun üçün kortej siyahısından istifadə olunacaq ki hər bir kortej özü-özlüyündə sütun adın əks edən sətrdən və həmin sətrdə yerləşən dəyərlər siyahısından ibarətdir.", "email_list_lst=[('f_name', ['Omar', 'Ali', 'Dmitry', 'Donald', 'Rashid',]),\n ('l_name', ['Bayramov', 'Aliyev', 'Vladimirov', 'Trump', 'Maniyev',]),\n ('email_adrs', ['omarbayramov@hotmail.com', 'alialiyev@example.com', 'v.dmitry@koala.kl', 'grabthat@pussycatdolls.com', 'rashid.maniyev@exponential.az',]),\n ('a_status', [1, 0, 1, 1, 1,]),\n ]\ndf = pd.DataFrame.from_items(email_list_lst)\ndf", "Lüğət: yazı əsaslı\nNövbəti misal mənim ən çox tərcihə etdiyim (əlbəttə ki çox vaxt verilənlər istədiyimiz formada olmur, və bizim nəyə üstünlük verdiyimiz onları heç marağlandırmır) üsula keçirik. Bu üsula üstünlük verməyimin səbəbi çox sadədir: bundan əvvəl və bundan sonra qeyd olunmuş yollardan istifadə edərkən məlumatın əldə edilməsi zamanı və ya təmizləmə zamanı \"NaN\" dəyəri almadan bəzi məlumatlar pozula bilər ki bu da sütunun və ya yazının sürüşməsinə gətirə bilər ki o zaman analiz ya qismən çətinləşə, və ya ümumiyyətlə verilənlərin qatışması üzündan mənasın itirə bilər. ( Danışdığım problem ilə tanış olmaq üçün 2017/08 tarixində çalışdığım məlumat əldə edilməsi və analizi işimə baxa bilərsiniz.). Amma bu dəfə hər bir dəyər üzrə hansı sütuna aid olması açıq şəkildə qeyd olunur ki, qeyd olunmadığı halda avtomatik \"NaN\" olaraq qeyd olunur. 
Nəticədə əlavə rutin təmizləmə işi aparmağa ehtiyac olmur, olmayan dəyərləri isə araşdırmadan ya yığışdırmaq və ya digər metodlar ilə verilənlər ilə doldurmaq olur.\n\nSözügedən misala yaxından baxaq:", "email_list=[{\n 'f_name' : 'Omar', \n 'l_name': 'Bayramov', \n 'email_adrs' : 'omarbayramov@hotmail.com', \n 'a_status' : 1\n },\n {'f_name' : 'Ali', 'l_name': 'Aliyev', 'email_adrs':'alialiyev@example.com', 'a_status' : 0},\n {'f_name': 'Dmitry', 'l_name': 'Vladimirov', 'email_adrs':'v.dmitry@koala.kl', 'a_status':1},\n {'f_name': 'Donald', 'l_name': 'Trump', 'email_adrs':'grabthat@pussycatdolls.com', 'a_status':1},\n {'f_name': 'Rashid', 'l_name': 'Maniyev', 'email_adrs':'rashid.maniyev@exponential.az', 'a_status':1},\n ]\n\ndf=pd.DataFrame(email_list,)\ndf", "Burada gördüyünüz kimi məlumat DataFrame daxilinə keçsədə, sütunlar istədiyimiz ardıcıllıqda yox, əlifba ardıcıllığı üzrə sıralanmışdır. Bu məqamı aradan qaldırmaq üçün ya yuxarıda DataFrame yaradan zamandaki kimi əvvəlcədən column parametri vasitəsi ilə sütun adların və ardıcıllığın qeyd etməli, və ya sonradan aşaqda qeyd olunmuş əmr ilə sütun yerlərin dəyşməliyik.", "df=df[['f_name','l_name','email_adrs','a_status',]]\ndf", "Lüğət: sütun əsaslı\nBu misal yuxarıda üzərindən keçdiyimiz \"Siyahı:sütun əsaslı\"-ya çox oxşayır. Fərq dəyərlərin bu dəfə siyahı şəkilində lüğət açarı olaraq qeyd olunmasıdır.", "email_list_dct={'f_name': ['Omar', 'Ali', 'Dmitry', 'Donald', 'Rashid',],\n 'l_name': ['Bayramov', 'Aliyev', 'Vladimirov', 'Trump', 'Maniyev',],\n 'email_adrs': ['omarbayramov@hotmail.com', 'alialiyev@example.com', 'v.dmitry@koala.kl', 'grabthat@pussycatdolls.com', 'rashid.maniyev@exponential.az',],\n 'a_status': [1, 0, 1, 1, 1,],\n }", "Cədvəli yaradaq və sütunların yerlərin dəyişək:", "df = pd.DataFrame.from_dict(email_list_dct)\ndf=df[['f_name','l_name','email_adrs','a_status',]]\ndf", "Python xarici mənbələr:\nStandart, Python-un daxili verilən strukturlarından başqa Pandas fayl sistemi, Məlumat bazarı və digər mənbələrdən verilənləri əldə edib cədvəl qurmağa imkan yaradır.\nExcel fayl\nCədvəli yaratmaq üçün pandas-ın read_excel funksiyasına Excel faylına işarələyən fayl sistemi yolu, fayl internetdə yerləşən zaman isə URL qeyd etmək bəsdir. Əgər faylda bir neçə səhifə varsa, və ya məhz olaraq müəyyən səhifədə yerləşən məlumatı əldə etmək lazımdırsa o zaman sheet_name parametrinə səhifə adın ötürmək ilə məlumatı cədvələ çevirmək olur.", "df = pd.read_excel('https://raw.githubusercontent.com/limpapud/datasets/master/Tutorial_datasets/excel_to_dataframe.xlsx',\n sheet_name='data_for_ttrl')\ndf", "CSV\nYuxarıdaki funksiyadaki kimi ilk öncə .csv fayla yol, amma sonra sətr daxilində dəyərləri bir birindən ayıran işarə delimiter parameterinə ötürülməlidir. Ötürülmədikdə standart olaraq vergülü qəbul olunur.", "df = pd.read_csv('https://raw.githubusercontent.com/limpapud/datasets/master/Tutorial_datasets/csv_to_dataframe.csv', \n delimiter=',')\ndf", "JSON\nJson faylından verilənləri qəbul etmək üçün URL və ya fayl sistemində fayla yol tələb olunur. 
Json faylı misalı aşağıda qeyd olunub.\nDiqqət ilə baxdığınız halda özünüz üçün json faylın Lüğət: yazı əsaslı datafreym yaratma metodunda istifadə etdiyimiz dəyər təyinatından heç fərqi olmadığını görmüş oldunuz.", "df = pd.read_json('https://raw.githubusercontent.com/limpapud/datasets/master/Tutorial_datasets/json_to_dataframe.json')\ndf = df[['f_name','l_name','email_adrs','a_status',]]\ndf", "SQL\nVə son olaraq SQLite fayl məlumat bazasından məlumat sorğulayaq və datafreymə yerləşdirək. İlk öncə işimiz üçün tələb olunan modulları import edək.", "import sqlalchemy\nfrom sqlalchemy import create_engine\nimport sqlite3", "Sorğulama üçün engine yaradaq və məlumat bazası faylına yolu göstərək.", "engine = create_engine('sqlite:///C:/Users/omarbayramov/Documents/GitHub/datasets/Tutorial_datasets/sql_to_dataframe.db')", "Qoşulma yaradıb, məlumat bazasında yerləşən emails cədvəlindən bütün sətrləri sorğulayaq.", "con=engine.connect()\na=con.execute('SELECT * FROM emails')", "Məlumat sorğulandıqdan sonra fetchall funksiyası vasitəsi ilə sətrləri \"oxuyub\" data dəyişkəninə təhkim edək və sonda MB bağlantısın bağlayaq.", "data=a.fetchall()\na.close()\ndata", "Əldə olunan məlumatın strukturu tanış qəlir mi? Diqqət ilə baxsanız ilk tanış olduğumuz Siyahı: sətr əsaslı məlumat strukturun tanıyarsınız. Artıq tanış olduğumuz proseduru icra edərək cədvəl qurmaq qaldı:", "df=pd.DataFrame(data, columns=['f_name','l_name','email_adrs','a_status',])\ndf", "Jurnalın sonu\nJurnalın sonuna çatdınız. Oxuduğunuz üçün təşəkkürlər. Bu məqalə periodik olaraq yenilənəcək və əlavələr qəbul edəcək. Sizin əlavəniz, təklifiniz, iradınız olduğu halda GitHub vasitəsi ilə \"İssue\" yaradaraq və ya aşağıda qeyd olunmuş əlaqə vasitələri ilə fikrinizi bildirə bilərsiniz.\nƏlaqə\nMüəllif ilə əlaqə omarbayramov@hotmail.com elektron ünvan üzərindən aparıla bilər. Əlavə olaraq sosial şəbəkə və digər saytlara linklər əlavə olunur.\nFacebook\nWordpress Blog\nLinkedIn" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
quasars100/Resonance_testing_scripts
python_tutorials/WHFast.ipynb
gpl-3.0
[ "WHFast tutorial\nThis tutorial is an introduction to the python interface of WHFast, a fast and unbiased symplectic Wisdom-Holman integrator. The method is described in Rein & Tamayo (2015).\nThis tutorial assumes that you have already installed REBOUND.\nFirst WHFast integration\nYou can enter all the commands below into a file and execute it all at once, or open an interactive shell).\nFirst, we need to import the REBOUND module (make sure have have enabled the virtual environment if you used it to install REBOUND).", "import rebound", "Next, let's add some particles. We'll work in units in which $G=1$ (see below on how to set $G$ to another value). The first particle we add is the central object. We place it at rest at the origin and use the convention of setting the mass of the central object $M_*$ to 1:", "rebound.add(m=1.)", "Let's look at the particle we just added:", "print(rebound.particles[0])", "The output tells us that the mass of the particle is 1 and all coordinates are zero. \nThe next particle we're adding is a planet. We'll use Cartesian coordinates to initialize it. Any coordinate that we do not specify in the rebound.add() command is assumed to be 0. We place our planet on a circular orbit at $a=1$ and give it a mass of $10^{-3}$ times that of the central star.", "rebound.add(m=1e-3, x=1., vy=1.)", "Instead of initializing the particle with Cartesian coordinates, we can also use orbital elements. By default, REBOUND will use Jacobi coordinates, i.e. REBOUND assumes the orbital elements describe the particle's orbit around the centre of mass of all particles added previously. Our second planet will have a mass of $10^{-3}$, a semimajoraxis of $a=2$ and an eccentricity of $e=0.1$:", "rebound.add(m=1e-3, a=2., e=0.1)", "Now that we have added two more particles, let's have a quick look at what's \"in REBOUND\" by using", "rebound.status()", "Next, let's tell REBOUND which integrator (WHFast, of course!) and timestep we want to use. In our system of units, an orbit at $a=1$ has the orbital period of $T_{\\rm orb} =2\\pi \\sqrt{\\frac{GM}{a}}= 2\\pi$. \nSo a reasonable timestep to start with would be $dt=10^{-3}$.", "rebound.integrator = \"whfast\"\nrebound.dt = 1e-3 ", "whfast referrs to the 2nd order symplectic integrator WHFast described by Rein & Tamayo (2015). By default 11th order symplectic correctors are used. \nWe are now ready to start the integration. Let's integrate for one orbit, i.e. until $t=2\\pi$. Because we use a fixed timestep, rebound would have to change it to integrate exactly up to $2\\pi$. Changing a timestep in a symplectic integrator is a bad idea, so we'll tell rebound to don't worry about the exact_finish_time.", "rebound.integrate(6.28318530717959, exact_finish_time=0) # 6.28318530717959 is 2*pi", "Once again, let's look at what REBOUND's status is", "rebound.status()", "As you can see the time has advanced to $t=2\\pi$ and the positions and velocities of all particles have changed. If you want to post-process the particle data, you can access it in the following way:", "particles = rebound.particles\nfor p in particles:\n print(p.x, p.y, p.vx, p.vy)", "The particles object is an array of pointers to the particles. This means you can call particles = rebound.particles before the integration and the contents of particles will be updated after the integration. 
If you add or remove particles, you'll need to call rebound.particles again.\nVisualization with matplotlib\nInstead of just printing boring numbers at the end of the simulation, let's visualize the orbit using matplotlib (you'll need to install numpy and matplotlib to run this example, see above).\nWe'll use the same particles as above. As the particles are already in memory, we don't need to add them again. Let us plot the position of the inner planet at 100 steps during its orbit. First, we'll import numpy and create an array of times for which we want to have an output (here, from $T_{\\rm orb}$ to $2 T_{\\rm orb}$ (we have already advanced the simulation time to $t=2\\pi$).", "import numpy as np\ntorb = 2.*np.pi\nNoutputs = 100\ntimes = np.linspace(torb, 2.*torb, Noutputs)\nx = np.zeros(Noutputs)\ny = np.zeros(Noutputs)", "Next, we'll step through the simulation. Rebound will integrate up to time. Depending on the timestep, it might overshoot slightly. If you want to have the outputs at exactly the time you specify you can set the exactTime=1 flag in the integrate function. However, note that changing the timestep in a symplectic integrator could have negative impacts on its properties.", "for i,time in enumerate(times):\n rebound.integrate(time, exact_finish_time=0)\n x[i] = particles[1].x\n y[i] = particles[1].y", "Let's plot the orbit using matplotlib.", "%matplotlib inline\nimport matplotlib.pyplot as plt\nfig = plt.figure(figsize=(5,5))\nax = plt.subplot(111)\nax.set_xlim([-2,2])\nax.set_ylim([-2,2])\nplt.plot(x, y);", "Hurray! It worked. The orbit looks like it should, it's an almost perfect circle. There are small perturbations though, induced by the outer planet. Let's integrate a bit longer to see them.", "Noutputs = 1000\ntimes = np.linspace(2.*torb, 20.*torb, Noutputs)\nx = np.zeros(Noutputs)\ny = np.zeros(Noutputs)\nfor i,time in enumerate(times):\n rebound.integrate(time, exact_finish_time=0)\n x[i] = particles[1].x\n y[i] = particles[1].y\n \nfig = plt.figure(figsize=(5,5))\nax = plt.subplot(111)\nax.set_xlim([-2,2])\nax.set_ylim([-2,2])\nplt.plot(x, y);", "Oops! This doesn't look like what we expected to see (small perturbations to an almost circluar orbit). What you see here is the barycenter slowly drifting. Some integration packages require that the simulation be carried out in a particular frame, but WHFast provides extra flexibility by working in any inertial frame. If you recall how we added the particles, the Sun was at the origin and at rest, and then we added the planets. This means that the center of mass, or barycenter, will have a small velocity, which results in the observed drift. There are multiple ways we can get the plot we want to.\n1. We can calculate only relative positions.\n2. We can add the particles in the barycentric frame.\n3. We can let REBOUND transform the particle coordinates to the bayrcentric frame for us.\nLet's use the third option (next time you run a simulation, you probably want to do that at the beginning).", "rebound.move_to_com()", "So let's try this again. Let's integrate for a bit longer this time.", "times = np.linspace(20.*torb, 1000.*torb, Noutputs)\nfor i,time in enumerate(times):\n rebound.integrate(time, exact_finish_time=0)\n x[i] = particles[1].x\n y[i] = particles[1].y\n \nfig = plt.figure(figsize=(5,5))\nax = plt.subplot(111)\nax.set_xlim([-1.5,1.5])\nax.set_ylim([-1.5,1.5])\nplt.scatter(x, y, marker='.', color='k', s=1.2);", "That looks much more like it. 
Let us finally plot the orbital elements as a function of time.", "times = np.linspace(1000.*torb, 9000.*torb, Noutputs)\na = np.zeros(Noutputs)\ne = np.zeros(Noutputs)\nfor i,time in enumerate(times):\n rebound.integrate(time, exact_finish_time=0)\n orbits = rebound.calculate_orbits()\n a[i] = orbits[1].a\n e[i] = orbits[1].e\n \nfig = plt.figure(figsize=(15,5))\n\nax = plt.subplot(121)\nax.set_xlabel(\"time\")\nax.set_ylabel(\"semi-major axis\")\nplt.plot(times, a);\n\nax = plt.subplot(122)\nax.set_xlabel(\"time\")\nax.set_ylabel(\"eccentricity\")\nplt.plot(times, e);", "The semimajor axis seems to almost stay constant, whereas the eccentricity undergoes an oscillation. Thus, one might conclude the planets interact only secularly, i.e. there are no large resonant terms.\nAdvanced settings of WHFast\nYou can set various attributes to change the default behaviour of WHFast depending on the problem you're interested in.\nSymplectic correctors\nYou can change the order of the symplectic correctors in WHFast. The default is 11. If you simply want to turn off symplectic correctors alltogether, you can just choose the whfast-nocor integrator:", "rebound.integrator = \"whfast-nocor\"", "You can also set the order of the symplectic corrector explicitly:", "rebound.integrator = \"whfast\"\nrebound.integrator_whfast_corrector = 7", "You can choose between 0 (no correctors), 3, 5, 7 and 11 (default).\nKeeping particle data synchronized\nBy default, REBOUND will only synchronized particle data at the end of the integration, i.e. if you call rebound.integrate(100.), it will assume you don't need to access the particle data between now and $t=100$. There are a few instances where you might want to change that. \nOne example is MEGNO. Whenever you calculate MEGNO or the Lyapunov exponent, REBOUND needs to have the velocities and positions synchronized at the end of the timestep (to calculate the dot product between them). Thus, if you initialize MEGNO with", "rebound.init_megno(1e-16)", "you implicitly force REBOUND to keep the particle coordinates synchronized. This will slow it down and might reduce its accuracy. You can also manually force REBOUND to keep the particles synchronized at the end of every timestep by integrating with the synchronize_each_timestep flag set to 1:", "rebound.integrate(10., synchronize_each_timestep=1)", "In either case, you can change particle data between subsequent calls to integrate:", "rebound.integrate(20.)\nrebound.particles[0].m = 1.1 # Sudden increase of particle's mass\nrebound.integrate(30.)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mattilyra/gensim
docs/notebooks/Poincare Tutorial.ipynb
lgpl-2.1
[ "Tutorial on Poincaré Embeddings\nThis notebook discusses the basic ideas and use-cases for Poincaré embeddings and demonstrates what kind of operations can be done with them. For more comprehensive technical details and results, this blog post may be a more appropriate resource.\n1. Introduction\n1.1 Concept and use-case\nPoincaré embeddings are a method to learn vector representations of nodes in a graph. The input data is of the form of a list of relations (edges) between nodes, and the model tries to learn representations such that the vectors for the nodes accurately represent the distances between them.\nThe learnt embeddings capture notions of both hierarchy and similarity - similarity by placing connected nodes close to each other and unconnected nodes far from each other; hierarchy by placing nodes lower in the hierarchy farther from the origin, i.e. with higher norms.\nThe paper uses this model to learn embeddings of nodes in the WordNet noun hierarchy, and evaluates these on 3 tasks - reconstruction, link prediction and lexical entailment, which are described in the section on evaluation. We have compared the results of our Poincaré model implementation on these tasks to other open-source implementations and the results mentioned in the paper.\nThe paper also describes a variant of the Poincaré model to learn embeddings of nodes in a symmetric graph, unlike the WordNet noun hierarchy, which is directed and asymmetric. The datasets used in the paper for this model are scientific collaboration networks, in which the nodes are researchers and an edge represents that the two researchers have co-authored a paper.\nThis variant has not been implemented yet, and is therefore not a part of our tutorial and experiments.\n1.2 Motivation\nThe main innovation here is that these embeddings are learnt in hyperbolic space, as opposed to the commonly used Euclidean space. The reason behind this is that hyperbolic space is more suitable for capturing any hierarchical information inherently present in the graph. Embedding nodes into a Euclidean space while preserving the distance between the nodes usually requires a very high number of dimensions. A simple illustration of this can be seen below - \n\nHere, the positions of nodes represent the positions of their vectors in 2-D euclidean space. Ideally, the distances between the vectors for nodes (A, D) should be the same as that between (D, H) and as that between H and its child nodes. Similarly, all the child nodes of H must be equally far away from node A. It becomes progressively hard to accurately preserve these distances in Euclidean space as the degree and depth of the tree grows larger. Hierarchical structures may also have cross-connections (effectively a directed graph), making this harder.\nThere is no representation of this simple tree in 2-dimensional Euclidean space which can reflect these distances correctly. This can be solved by adding more dimensions, but this becomes computationally infeasible as the number of required dimensions grows exponentially. \nHyperbolic space is a metric space in which distances aren't straight lines - they are curves, and this allows such tree-like hierarchical structures to have a representation that captures the distances more accurately even in low dimensions.\n2. 
Training the embedding", "% cd ../..\n\n%load_ext autoreload \n%autoreload 2\n\nimport os\nimport logging\nimport numpy as np\n\nfrom gensim.models.poincare import PoincareModel, PoincareKeyedVectors, PoincareRelations\n\nlogging.basicConfig(level=logging.INFO)\n\npoincare_directory = os.path.join(os.getcwd(), 'docs', 'notebooks', 'poincare')\ndata_directory = os.path.join(poincare_directory, 'data')\nwordnet_mammal_file = os.path.join(data_directory, 'wordnet_mammal_hypernyms.tsv')", "The model can be initialized using an iterable of relations, where a relation is simply a pair of nodes -", "model = PoincareModel(train_data=[('node.1', 'node.2'), ('node.2', 'node.3')])", "The model can also be initialized from a csv-like file containing one relation per line. The module provides a convenience class PoincareRelations to do so.", "relations = PoincareRelations(file_path=wordnet_mammal_file, delimiter='\\t')\nmodel = PoincareModel(train_data=relations)", "Note that the above only initializes the model and does not begin training. To train the model -", "model = PoincareModel(train_data=relations, size=2, burn_in=0)\nmodel.train(epochs=1, print_every=500)", "The same model can be trained further on more epochs in case the user decides that the model hasn't converged yet.", "model.train(epochs=1, print_every=500)", "The model can be saved and loaded using two different methods -", "# Saves the entire PoincareModel instance, the loaded model can be trained further\nmodel.save('/tmp/test_model')\nPoincareModel.load('/tmp/test_model')\n\n# Saves only the vectors from the PoincareModel instance, in the commonly used word2vec format\nmodel.kv.save_word2vec_format('/tmp/test_vectors')\nPoincareKeyedVectors.load_word2vec_format('/tmp/test_vectors')", "3. What the embedding can be used for", "# Load an example model\nmodels_directory = os.path.join(poincare_directory, 'models')\ntest_model_path = os.path.join(models_directory, 'gensim_model_batch_size_10_burn_in_0_epochs_50_neg_20_dim_50')\nmodel = PoincareModel.load(test_model_path)", "The learnt representations can be used to perform various kinds of useful operations. This section is split into two - some simple operations that are directly mentioned in the paper, as well as some experimental operations that are hinted at, and might require more work to refine.\nThe models that are used in this section have been trained on the transitive closure of the WordNet hypernym graph. The transitive closure is the list of all the direct and indirect hypernyms in the WordNet graph. 
An example of a direct hypernym is (seat.n.03, furniture.n.01) while an example of an indirect hypernym is (seat.n.03, physical_entity.n.01).\n3.1 Simple operations\nAll the following operations are based simply on the notion of distance between two nodes in hyperbolic space.", "# Distance between any two nodes\nmodel.kv.distance('plant.n.02', 'tree.n.01')\n\nmodel.kv.distance('plant.n.02', 'animal.n.01')\n\n# Nodes most similar to a given input node\nmodel.kv.most_similar('electricity.n.01')\n\nmodel.kv.most_similar('man.n.01')\n\n# Nodes closer to node 1 than node 2 is from node 1\nmodel.kv.nodes_closer_than('dog.n.01', 'carnivore.n.01')\n\n# Rank of distance of node 2 from node 1 in relation to distances of all nodes from node 1\nmodel.kv.rank('dog.n.01', 'carnivore.n.01')\n\n# Finding Poincare distance between input vectors\nvector_1 = np.random.uniform(size=(100,))\nvector_2 = np.random.uniform(size=(100,))\nvectors_multiple = np.random.uniform(size=(5, 100))\n\n# Distance between vector_1 and vector_2\nprint(PoincareKeyedVectors.vector_distance(vector_1, vector_2))\n# Distance between vector_1 and each vector in vectors_multiple\nprint(PoincareKeyedVectors.vector_distance_batch(vector_1, vectors_multiple))", "3.2 Experimental operations\nThese operations are based on the notion that the norm of a vector represents its hierarchical position. Leaf nodes typically tend to have the highest norms, and as we move up the hierarchy, the norm decreases, with the root node being close to the center (or origin).", "# Closest child node\nmodel.kv.closest_child('person.n.01')\n\n# Closest parent node\nmodel.kv.closest_parent('person.n.01')\n\n# Position in hierarchy - lower values represent that the node is higher in the hierarchy\nprint(model.kv.norm('person.n.01'))\nprint(model.kv.norm('teacher.n.01'))\n\n# Difference in hierarchy between the first node and the second node\n# Positive values indicate the first node is higher in the hierarchy\nprint(model.kv.difference_in_hierarchy('person.n.01', 'teacher.n.01'))\n\n# One possible descendant chain\nmodel.kv.descendants('mammal.n.01')\n\n# One possible ancestor chain\nmodel.kv.ancestors('dog.n.01')", "Note that the chains are not symmetric - while descending to the closest child recursively, starting with mammal, the closest child of carnivore is dog, however, while ascending from dog to the closest parent, the closest parent to dog is canine. \nThis is despite the fact that Poincaré distance is symmetric (like any distance in a metric space). The asymmetry stems from the fact that even if node Y is the closest node to node X amongst all nodes with a higher norm (lower in the hierarchy) than X, node X may not be the closest node to node Y amongst all the nodes with a lower norm (higher in the hierarchy) than Y.\n4. Useful Links\n\nOriginal paper by Facebook AI Research\nBlog post describing technical challenges in implementation\nDetailed evaluation notebook to reproduce results" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
CLEpy/CLEpy-MotM
Kivy/Kivy.ipynb
mit
[ "Kivy\nhttps://kivy.org\nNative UI Framework\n\nOpen source\nNo commercial version\nHardware accelerated\nPythonic\nCross platform (macOS, iOS, Android, Linux, Windows)\nHas automated tests\n\nOf Note\n\nKivy is simple\nNot for making native-looking apps\nFocus on touchscreen apps and custom UI\nNot a replacement for Qt or Tkinter\n\nMarkup + Code\nUI declared in a pythonic markup using Kv Language\nLooks like this:\n```\n:kivy 1.9\nBoxLayout:\n orientation: 'vertical'\nLabel:\n text: 'Label'\nButton:\n text: 'Button'\n```\nInstallation\n```bash\nbrew install sdl2 sdl2_image sdl2_ttf sdl2_mixer gstreamer\nexport USE_OSX_FRAMEWORKS=0 \npip install --upgrade Cython==0.25.2\npip install kivy\n```\n...or use Kivy.app\nHello, world", "%%bash\ncat hello.kv\n\n%%bash\ncat hello.py\n\n%%bash\npython hello.py > /dev/null 2>&1", "Layouts\nAnchorLayout, BoxLayout, FloatLayout, RelativeLayout, GridLayout, PageLayout, ScatterLayout, StackLayout", "%%bash\ncat boxlayout.kv\n\n%%bash\npython boxlayout.py > /dev/null 2>&1", "Widgets", "%%bash\ncat widgets.kv\n\n%%bash\npython widgets.py > /dev/null 2>&1", "Data binding", "%%bash\ncat databinding.kv\n\n%%bash\npython databinding.py > /dev/null 2>&1", "Testing\n\n\nKivy itself is tested \n\n\nFor non-UI, unittesting + mocks work as expected\n\n\nKivy provides mechanisms for running GL tests (screencap comparisons, yuck)\n\n\nFor BDD-style testing, kvaut (shameless plug): https://github.com/garyjohnson/kvaut" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
turbomanage/training-data-analyst
courses/machine_learning/deepdive/05_artandscience/labs/d_customestimator_linear.ipynb
apache-2.0
[ "Custom Estimator\nLearning Objectives:\n * Use a custom estimator of the Estimator class in TensorFlow to predict median housing price\nThe data is based on 1990 census data from California. This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively.\n<p>\nLet's use a set of features to predict house value.\n\n## Set Up\nIn this first cell, we'll load the necessary libraries.", "import math\nimport shutil\nimport numpy as np\nimport pandas as pd\nimport tensorflow as tf\n\ntf.logging.set_verbosity(tf.logging.INFO)\npd.options.display.max_rows = 10\npd.options.display.float_format = '{:.1f}'.format", "Next, we'll load our data set.", "df = pd.read_csv(\"https://storage.googleapis.com/ml_universities/california_housing_train.csv\", sep = \",\")", "Examine the data\nIt's a good idea to get to know your data a little bit before you work with it.\nWe'll print out a quick summary of a few useful statistics on each column.\nThis will include things like mean, standard deviation, max, min, and various quantiles.", "df.head()\n\ndf.describe()", "This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively. Let's create a different, more appropriate feature. Because we are predicing the price of a single house, we should try to make all our features correspond to a single house as well", "df['num_rooms'] = df['total_rooms'] / df['households']\ndf['num_bedrooms'] = df['total_bedrooms'] / df['households']\ndf['persons_per_house'] = df['population'] / df['households']\ndf.describe()\n\ndf.drop(['total_rooms', 'total_bedrooms', 'population', 'households'], axis = 1, inplace = True)\ndf.describe()", "Build a custom estimator linear regressor\nIn this exercise, we'll be trying to predict median_house_value. It will be our label. 
We'll use the remaining columns as our input features.\nTo train our model, we'll use the Estimator API and create a custom estimator for linear regression.\nNote that we don't actually need a custom estimator for linear regression since there is a canned estimator for it, however we're keeping it simple so you can practice creating a custom estimator function.", "# Define feature columns\nfeature_columns = {\n colname : tf.feature_column.numeric_column(colname) \\\n for colname in ['housing_median_age','median_income','num_rooms','num_bedrooms','persons_per_house']\n}\n# Bucketize lat, lon so it's not so high-res; California is mostly N-S, so more lats than lons\nfeature_columns['longitude'] = tf.feature_column.bucketized_column(tf.feature_column.numeric_column('longitude'), np.linspace(-124.3, -114.3, 5).tolist())\nfeature_columns['latitude'] = tf.feature_column.bucketized_column(tf.feature_column.numeric_column('latitude'), np.linspace(32.5, 42, 10).tolist())\n\n# Split into train and eval and create input functions\nmsk = np.random.rand(len(df)) < 0.8\ntraindf = df[msk]\nevaldf = df[~msk]\n\nSCALE = 100000\nBATCH_SIZE=128\ntrain_input_fn = tf.estimator.inputs.pandas_input_fn(x = traindf[list(feature_columns.keys())],\n y = traindf[\"median_house_value\"] / SCALE,\n num_epochs = None,\n batch_size = BATCH_SIZE,\n shuffle = True)\neval_input_fn = tf.estimator.inputs.pandas_input_fn(x = evaldf[list(feature_columns.keys())],\n y = evaldf[\"median_house_value\"] / SCALE, # note the scaling\n num_epochs = 1, \n batch_size = len(evaldf), \n shuffle=False)\n\n# Create the custom estimator\ndef custom_estimator(features, labels, mode, params): \n # 0. Extract data from feature columns\n input_layer = tf.feature_column.input_layer(features, params['feature_columns'])\n \n # 1. Define Model Architecture\n predictions = tf.layers.dense(input_layer,1,activation=None)\n \n # 2. Loss function, training/eval ops\n if mode == tf.estimator.ModeKeys.TRAIN or mode == tf.estimator.ModeKeys.EVAL:\n labels = tf.expand_dims(tf.cast(labels, dtype=tf.float32), -1)\n loss = tf.losses.mean_squared_error(labels, predictions)\n optimizer = tf.train.FtrlOptimizer(learning_rate=0.2)\n train_op = optimizer.minimize(\n loss = loss,\n global_step = tf.train.get_global_step())\n eval_metric_ops = {\n \"rmse\": tf.metrics.root_mean_squared_error(labels*SCALE, predictions*SCALE)\n }\n else:\n loss = None\n train_op = None\n eval_metric_ops = None\n \n # 3. Create predictions\n predictions_dict = #TODO: create predictions dictionary\n \n # 4. Create export outputs\n export_outputs = #TODO: create export_outputs dictionary\n \n # 5. 
Return EstimatorSpec\n return tf.estimator.EstimatorSpec(\n mode = mode,\n predictions = predictions_dict,\n loss = loss,\n train_op = train_op,\n eval_metric_ops = eval_metric_ops,\n export_outputs = export_outputs)\n\n# Create serving input function\ndef serving_input_fn():\n feature_placeholders = {\n colname : tf.placeholder(tf.float32, [None]) for colname in 'housing_median_age,median_income,num_rooms,num_bedrooms,persons_per_house'.split(',')\n }\n feature_placeholders['longitude'] = tf.placeholder(tf.float32, [None])\n feature_placeholders['latitude'] = tf.placeholder(tf.float32, [None])\n \n features = {\n key: tf.expand_dims(tensor, -1)\n for key, tensor in feature_placeholders.items()\n }\n \n return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)\n\n# Create custom estimator's train and evaluate function\ndef train_and_evaluate(output_dir):\n estimator = # TODO: Add estimator, make sure to add params={'feature_columns': list(feature_columns.values())} as an argument\n \n train_spec = tf.estimator.TrainSpec(\n input_fn = train_input_fn,\n max_steps = 1000)\n exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)\n eval_spec = tf.estimator.EvalSpec(\n input_fn = eval_input_fn,\n steps = None,\n exporters = exporter)\n tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)\n\n#Run Training\nOUTDIR = 'custom_estimator_trained_model'\nshutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time\ntrain_and_evaluate(OUTDIR)", "Challenge Excercise\nModify the custom_estimator function to be a neural network with one hidden layer, instead of a linear regressor" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
IsacLira/data-science-cookbook
2017/05-naive-bayes/resp_naive_bayes_moesio.ipynb
mit
[ "Naive Bayes - Trabalho\nQuestão 1\nImplemente um classifacor Naive Bayes para o problema de predizer a qualidade de um carro. Para este fim, utilizaremos um conjunto de dados referente a qualidade de carros, disponível no UCI. Este dataset de carros possui as seguintes features e classe:\n Attributos \n1. buying: vhigh, high, med, low\n2. maint: vhigh, high, med, low\n3. doors: 2, 3, 4, 5, more\n4. persons: 2, 4, more\n5. lug_boot: small, med, big\n6. safety: low, med, high\n Classes \n1. unacc, acc, good, vgood\nQuestão 2\nCrie uma versão de sua implementação usando as funções disponíveis na biblioteca SciKitLearn para o Naive Bayes (veja aqui) \nQuestão 3\nAnalise a acurácia dos dois algoritmos e discuta a sua solução.\nQuestão 1", "# Libraries\nimport numpy as np\nimport pandas as pd\nfrom sklearn.naive_bayes import MultinomialNB, GaussianNB\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.metrics import accuracy_score, classification_report\n\n# Creating a class for the Naive Bayes\nclass NaiveBayes:\n def __init__(self):\n ''' Default Constructor '''\n self.lEncoder = LabelEncoder()\n self.X = None; self.y = None\n self.classProb = None; self.likeTable = {}\n \n def separateByClass(self):\n ''' This functions separates all the dataset indexing dictionaries by the classes '''\n separated = {}\n for i in range(len(self.y)):\n if (self.y[i] not in separated):\n separated[self.y[i]] = []\n separated[self.y[i]].append(self.X[i])\n return separated\n \n def makeLikeTable(self):\n ''' This functions counts the occurences of each attribute based on the classes\n and construct the Likelihood table (in this case a Dictionary) calculating all\n the propers probabilities '''\n separateClass = self.separateByClass()\n classSizes = [len(separateClass[i]) for i in separateClass.keys()] \n self.classProb = np.array(classSizes) / sum(classSizes)\n \n self.likeTable = {}\n for label in separateClass.keys():\n auxiliar = np.column_stack(separateClass[label])\n for attribute,idx in zip(auxiliar, range(len(auxiliar))):\n counts = np.asarray(np.unique(attribute, return_counts=True)).T\n for i in range(4):\n self.likeTable[(label, idx, i)] = 0\n for count_it in counts:\n self.likeTable[(label, idx, count_it[0])] = count_it[1] / len(separateClass[label])\n\n def calculateProbability(self, inputVector):\n ''' Utilizes the maximum likelihood estimation to calculate the probabilty of\n each row in inputVector belongs to each possible class '''\n separateClass = self.separateByClass()\n \n probabilities = {}\n for label,_ in separateClass.items():\n probabilities[label] = self.classProb[label]\n for i in range(len(inputVector)):\n probabilities[label] *= self.likeTable[label, i, inputVector[i]]\n \n return probabilities\n\n def fit(self, X_train, y_train):\n ''' Assign the training data and calls the Likelihood Table creator '''\n self.X = X_train\n self.y = y_train\n \n self.makeLikeTable()\n \n def predict(self, inputArray):\n ''' Return a list of predictions for each row in inputArray correspondent to\n the label of the class with the maximum probability '''\n predictions = []\n for row in inputArray:\n probabilities = self.calculateProbability(row)\n predictions.append(max(probabilities, key=probabilities.get))\n return predictions", "Questão 2", "# Carregando o Dataset\ndata = pd.read_csv(\"car.data\", header=None)\n\n# Transformando as variáveis categóricas em valores discretos contáveis\n# Apesar de diminuir a interpretação dos 
atributos, essa medida facilita bastante a vida dos métodos de contagem\nfor i in range(0, data.shape[1]):\n data.iloc[:,i] = LabelEncoder().fit_transform(data.iloc[:,i])\n\n# Separação do Conjunto de Treino e Conjunto de Teste (80%/20%)\nX_train, X_test, y_train, y_test = train_test_split(data.iloc[:,:-1], data.iloc[:,-1], test_size=0.2)\n\n# Implementação do Naive Bayes Multinomial do SKLearn\n# O Naive Bayes Multinomial é a implementação do SKLearn que funciona para variáveis discretas,\n# ao invés de utilizar o modelo da função gaussiana (a classe GaussianNB faz dessa forma)\nclf = MultinomialNB()\nclf.fit(X_train.values, y_train.values)\n\ny_pred = clf.predict(X_test.values)\n\n# Impressão dos Resultados\nprint(\"Multinomial Naive Bayes (SKlearn Version)\")\nprint(\"Total Accuracy: {}%\".format(accuracy_score(y_true=y_test, y_pred=y_pred)))\n\nprint(\"\\nClassification Report:\")\nprint(classification_report(y_true=y_test, y_pred=y_pred, target_names=[\"unacc\", \"acc\", \"good\", \"vgood\"]))", "Questão 3", "# Utilização da minha função própria de Naive Bayes para o mesmo conjunto de dados\nnBayes = NaiveBayes()\nnBayes.fit(X_train.values, y_train.values)\ny_pred = nBayes.predict(X_test.values)\n\n# Impressão dos Resultados\nprint(\"Naive Bayes (My Version :D)\")\nprint(\"Total Accuracy: {}%\".format(accuracy_score(y_true=y_test, y_pred=y_pred)))\n\nprint(\"\\nClassification Report:\")\nprint(classification_report(y_true=y_test, y_pred=y_pred, target_names=[\"unacc\", \"acc\", \"good\", \"vgood\"]))", "Resultados\nPodemos notar que o algoritmo possui um resultado relativamente bom. Apesar de tomar certas suposições bem rígidas, alguns casos realmente caem sobre os casos contemplados pelo Naive Bayes, e a classificação se mostra proveitosa. \nTodavia, é perceptível que esse algoritmo <b>depende bastante da frequência relativa de cada class.</b> Se olharmos na equação acima, podemos perceber que a probabilidade de maximum likelihood é proporcional à probabilidade marginal das classes. Por esse motivo, classes que possuem uma frequência muito maior que as outras terão uma probabilidade maior, e irão influenciar mais no resultado da probabilidade final. Por esse motivo, podemos ver que as classes com menos exemplares (\"acc\" e \"vgood\") acabam não tendo uma performance muito boa. Esse fato é ainda mais perceptível no Multinomial NaiveBayes, implementado pelo SKLearn. Por utilizar diferentes métodos de contagem, ele possui um melhor custo computacional, mas acaba subestimando as probabilidades das classes menos frequentes e, por consequência, vemos que as predições caem praticamente todas nas duas classes mais frequentes (\"unacc\" e \"good\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/documentai-notebooks
general/async_form_parser.ipynb
apache-2.0
[ "Document AI Form Parser (async)\nThis notebook shows you how to analyze a set pdfs using the Google Cloud DocumentAI API asynchronously.", "# Install necessary Python libraries and restart your kernel after.\n!pip install -r ../requirements.txt\n\nfrom google.cloud import documentai_v1beta3 as documentai\nfrom google.cloud import storage\n\nimport os\nimport re\nimport pandas as pd", "Set your Processor Variables", "# TODO(developer): Fill these variables with your values before running the sample\nPROJECT_ID = \"YOUR_PROJECT_ID_HERE\"\nLOCATION = \"us\" # Format is 'us' or 'eu'\nPROCESSOR_ID = \"PROCESSOR_ID\" # Create processor in Cloud Console\n\nGCS_INPUT_BUCKET = 'cloud-samples-data'\nGCS_INPUT_PREFIX = 'documentai/async_forms/'\nGCS_OUTPUT_URI = 'YOUR-OUTPUT-BUCKET'\nGCS_OUTPUT_URI_PREFIX = 'TEST'\nTIMEOUT = 300", "The following code calls the synchronous API and parses the form fields and values.", "def process_document_sample():\n # Instantiates a client\n client_options = {\"api_endpoint\": \"{}-documentai.googleapis.com\".format(LOCATION)}\n client = documentai.DocumentProcessorServiceClient(client_options=client_options)\n storage_client = storage.Client()\n \n blobs = storage_client.list_blobs(GCS_INPUT_BUCKET, prefix=GCS_INPUT_PREFIX)\n document_configs = []\n print(\"Input Files:\")\n for blob in blobs:\n if \".pdf\" in blob.name:\n source = \"gs://{bucket}/{name}\".format(bucket = GCS_INPUT_BUCKET, name = blob.name)\n print(source)\n document_config = {\"gcs_uri\": source, \"mime_type\": \"application/pdf\"}\n document_configs.append(document_config)\n \n gcs_documents = documentai.GcsDocuments(\n documents=document_configs\n )\n \n input_config = documentai.BatchDocumentsInputConfig(gcs_documents=gcs_documents)\n\n destination_uri = f\"{GCS_OUTPUT_URI}/{GCS_OUTPUT_URI_PREFIX}/\"\n\n # Where to write results\n output_config = documentai.DocumentOutputConfig(\n gcs_output_config={\"gcs_uri\": destination_uri}\n )\n\n # The full resource name of the processor, e.g.:\n # projects/project-id/locations/location/processor/processor-id\n # You must create new processors in the Cloud Console first.\n name = f\"projects/{PROJECT_ID}/locations/{LOCATION}/processors/{PROCESSOR_ID}\"\n request = documentai.types.document_processor_service.BatchProcessRequest(\n name=name,\n input_documents=input_config,\n document_output_config=output_config,\n )\n\n operation = client.batch_process_documents(request)\n \n # Wait for the operation to finish\n operation.result(timeout=TIMEOUT)\n\n # Results are written to GCS. 
Use a regex to find\n # output files\n match = re.match(r\"gs://([^/]+)/(.+)\", destination_uri)\n output_bucket = match.group(1)\n prefix = match.group(2)\n\n bucket = storage_client.get_bucket(output_bucket)\n blob_list = list(bucket.list_blobs(prefix=prefix))\n\n for i, blob in enumerate(blob_list):\n # If JSON file, download the contents of this blob as a bytes object.\n if \".json\" in blob.name:\n blob_as_bytes = blob.download_as_string()\n print(\"downloaded\")\n\n document = documentai.types.Document.from_json(blob_as_bytes)\n print(f\"Fetched file {i + 1}\")\n\n # For a full list of Document object attributes, please reference this page:\n # https://cloud.google.com/document-ai/docs/reference/rpc/google.cloud.documentai.v1beta3#document\n document_pages = document.pages\n keys = []\n keysConf = []\n values = []\n valuesConf = []\n \n # Grab each key/value pair and their corresponding confidence scores.\n for page in document_pages:\n for form_field in page.form_fields:\n fieldName=get_text(form_field.field_name,document)\n keys.append(fieldName.replace(':', ''))\n nameConfidence = round(form_field.field_name.confidence,4)\n keysConf.append(nameConfidence)\n fieldValue = get_text(form_field.field_value,document)\n values.append(fieldValue.replace(':', ''))\n valueConfidence = round(form_field.field_value.confidence,4)\n valuesConf.append(valueConfidence)\n \n # Create a Pandas Dataframe to print the values in tabular format. \n df = pd.DataFrame({'Key': keys, 'Key Conf': keysConf, 'Value': values, 'Value Conf': valuesConf})\n display(df)\n \n else:\n print(f\"Skipping non-supported file type {blob.name}\")\n \n# Extract shards from the text field\ndef get_text(doc_element: dict, document: dict):\n \"\"\"\n Document AI identifies form fields by their offsets\n in document text. This function converts offsets\n to text snippets.\n \"\"\"\n response = \"\"\n # If a text segment spans several lines, it will\n # be stored in different text segments.\n for segment in doc_element.text_anchor.text_segments:\n start_index = (\n int(segment.start_index)\n if segment in doc_element.text_anchor.text_segments\n else 0\n )\n end_index = int(segment.end_index)\n response += document.text[start_index:end_index]\n return response\n\ndoc = process_document_sample()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
kirichoi/tellurium
examples/notebooks/core/tellurium_export.ipynb
apache-2.0
[ "Back to the main Index\nSBML\nGiven a RoadRunner instance, you can get an SBML representation of the current state of the model using getCurrentSBML. You can also get the initial SBML from when the model was loaded using getSBML. Finally, exportToSBML can be used to export the current model state to a file.", "from __future__ import print_function\nimport tellurium as te\nte.setDefaultPlottingEngine('matplotlib')\n%matplotlib inline\nimport tempfile\n\n# load model\nr = te.loada('S1 -> S2; k1*S1; k1 = 0.1; S1 = 10')\n# file for export\nf_sbml = tempfile.NamedTemporaryFile(suffix=\".xml\")\n\n# export current model state\nr.exportToSBML(f_sbml.name)\n\n# to export the initial state when the model was loaded\n# set the current argument to False\nr.exportToSBML(f_sbml.name, current=False)\n\n# The string representations of the current model are available via\nstr_sbml = r.getCurrentSBML()\n\n# and of the initial state when the model was loaded via\nstr_sbml = r.getSBML()\nprint(str_sbml)", "Antimony\nSimilar to the SBML functions above, you can also use the functions getCurrentAntimony and exportToAntimony to get or export the current Antimony representation.", "import tellurium as te\nimport tempfile\n\n# load model\nr = te.loada('S1 -> S2; k1*S1; k1 = 0.1; S1 = 10')\n# file for export\nf_antimony = tempfile.NamedTemporaryFile(suffix=\".txt\")\n\n# export current model state\nr.exportToAntimony(f_antimony.name)\n\n# to export the initial state when the model was loaded\n# set the current argument to False\nr.exportToAntimony(f_antimony.name, current=False)\n\n# The string representations of the current model are available via\nstr_antimony = r.getCurrentAntimony()\n\n# and of the initial state when the model was loaded via\nstr_antimony = r.getAntimony()\nprint(str_antimony)", "CellML\nTellurium also has functions for exporting the current model state to CellML. These functionalities rely on using Antimony to perform the conversion.", "import tellurium as te\nimport tempfile\n\n# load model\nr = te.loada('S1 -> S2; k1*S1; k1 = 0.1; S1 = 10')\n# file for export\nf_cellml = tempfile.NamedTemporaryFile(suffix=\".cellml\")\n\n# export current model state\nr.exportToCellML(f_cellml.name)\n\n# to export the initial state when the model was loaded\n# set the current argument to False\nr.exportToCellML(f_cellml.name, current=False)\n\n# The string representations of the current model are available via\nstr_cellml = r.getCurrentCellML()\n\n# and of the initial state when the model was loaded via\nstr_cellml = r.getCellML()\nprint(str_cellml)", "Matlab\nTo export the current model state to MATLAB, use getCurrentMatlab.", "import tellurium as te\nimport tempfile\n\n# load model\nr = te.loada('S1 -> S2; k1*S1; k1 = 0.1; S1 = 10')\n# file for export\nf_matlab = tempfile.NamedTemporaryFile(suffix=\".m\")\n\n# export current model state\nr.exportToMatlab(f_matlab.name)\n\n# to export the initial state when the model was loaded\n# set the current argument to False\nr.exportToMatlab(f_matlab.name, current=False)\n\n# The string representations of the current model are available via\nstr_matlab = r.getCurrentMatlab()\n\n# and of the initial state when the model was loaded via\nstr_matlab = r.getMatlab()\nprint(str_matlab)", "Using Antimony Directly\nThe above examples rely on Antimony as in intermediary between formats. You can use this functionality directly using e.g. antimony.getCellMLString. 
A comprehensive set of functions can be found in the Antimony API documentation.", "import antimony\nantimony.loadAntimonyString('''S1 -> S2; k1*S1; k1 = 0.1; S1 = 10''')\nant_str = antimony.getCellMLString(antimony.getMainModuleName())\nprint(ant_str)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
davidhamann/python-fmrest
examples/performing_scripts.ipynb
mit
[ "# hide ssl warnings for this test.\nimport requests\nrequests.packages.urllib3.disable_warnings()", "Performing scripts with python-fmrest\nThis is a short example on how to perform scripts with python-fmrest.\nImport the module", "import fmrest", "Create the server instance", "fms = fmrest.Server('https://10.211.55.15',\n user='admin',\n password='admin',\n database='Contacts',\n layout='Demo',\n verify_ssl=False\n )", "Login\nThe login method obtains the access token.", "fms.login()", "Setup scripts\nYou can setup scripts to run prerequest, presort, and after the action and sorting are executed. The script setup is passed to a python-fmrest method as an object that contains the types of script executions, followed by a list containing the script name and parameter.", "scripts={\n 'prerequest': ['name_of_script_to_run_prerequest', 'script_parameter'],\n 'presort': ['name_of_script_to_run_presort', None], # parameter can also be None\n 'after': ['name_of_script_to_run_after_actions', '1234'], #FMSDAPI expects all parameters to be string\n}", "You only need to specify the scripts you actually want to execute. So if you only have an after action, just build a scripts object with only the 'after' key.\nCall a standard method\nScripts are always executed as part of a standard request to the server. These requests are the usual find(), create_record(), delete_record(), edit_record(), get_record() methods the Server class exposes to you.\nLet's make a find and then execute a script. The script being called contains an error on purpose, so that we can later read out the error number.", "fms.find(\n query=[{'name': 'David'}],\n scripts={\n 'after': ['testScriptWithError', None],\n }\n)", "Get the last script error and result\nVia the last_script_result property, you can access both last error and script result for all scripts that were called.", "fms.last_script_result", "We see that we had 3 as last error, and our script result was '1'. The FMS Data API only returns strings, but error numbers are automatically converted to integers, for convenience. The script result, however, will always be a string or None, even if you exit your script in FM with a number or boolean.\nAnother example\nLet's do another call, this time with a script that takes a parameter and does not have any errors.\nIt will exit with Exit Script[ Get(ScriptParameter) ], so essentially give us back what we feed in.", "fms.find(\n query=[{'name': 'David'}],\n scripts={\n 'prerequest': ['demoScript (id)', 'abc-1234'],\n }\n)", "... and here is the result (error 0 means no error):", "fms.last_script_result" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dvro/sf-open-data-analysis
notebooks/2016-11-01-dvro-feature-selection.ipynb
mit
[ "Feature Selection", "%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nimport bokeh\nfrom bokeh.io import output_notebook\noutput_notebook()\n\nimport os\n\nDATA_STREETLIGHT_CASES_URL = 'https://data.sfgov.org/api/views/c53t-rr3f/rows.json?accessType=DOWNLOAD'\nDATA_STREETLIGHT_CASES_LOCAL = 'DATA_STREETLIGHT_CASES.json'\n\ndata_path = DATA_STREETLIGHT_CASES_URL\nif os.path.isfile(DATA_STREETLIGHT_CASES_LOCAL):\n data_path = DATA_STREETLIGHT_CASES_LOCAL\n\nimport urllib, json\n\ndef _load_data(url):\n response = urllib.urlopen(url)\n raw_data = json.loads(response.read())\n columns = [col['name'] for col in raw_data['meta']['view']['columns']]\n rows = raw_data['data']\n return pd.DataFrame(data=rows, columns=columns)\n\ndf = _load_data(data_path)\n\ndf.columns = [col.lower().replace(' ', '_') for col in df.columns]\n\ndf['opened'] = pd.to_datetime(df.opened)\ndf['opened_dayofweek'] = df.opened.dt.dayofweek\ndf['opened_month'] = df.opened.dt.month\ndf['opened_year'] = df.opened.dt.year\ndf['opened_dayofmonth'] = df.opened.dt.day\ndf['opened_weekend'] = df.opened_dayofweek >= 5\n\ndf['closed'] = pd.to_datetime(df.closed)\ndf['closed_dayofweek'] = df.closed.dt.dayofweek\ndf['closed_month'] = df.closed.dt.month\ndf['closed_year'] = df.closed.dt.year\ndf['closed_dayofmonth'] = df.closed.dt.day\ndf['closed_weekend'] = df.closed_dayofweek >= 5\n\ndf['delta'] = (df.closed - df.opened).dt.days\ndf['is_open'] = pd.isnull(df.closed)\ndf['target'] = df.delta <= 2\n\nfrom geopy.distance import vincenty\n\ndf['latitude'] = df.point.apply(lambda e: float(e[1]))\ndf['longitude'] = df.point.apply(lambda e: float(e[2]))\n\nmin_lat, max_lat = min(df.latitude), max(df.latitude)\nmin_lng, max_lng = min(df.longitude), max(df.longitude)\n\ndef grid(lat, lng):\n x = vincenty((lat, min_lng), (lat, lng)).miles\n y = vincenty((min_lat, lng), (lat, lng)).miles\n return x, y\n\nxy = [grid(lat, lng) for lat, lng in zip(df.latitude.values, df.longitude.values)]\n\ndf['loc_x'] = np.array(xy)[:,0]\ndf['loc_y'] = np.array(xy)[:,1]\n\ndummies = pd.get_dummies(df.neighborhood.str.replace(' ', '_').str.lower(), prefix='neigh_', drop_first=False)\ndf[dummies.columns] = dummies\ndel df['neighborhood']\n\ndummies = pd.get_dummies(df.category.str.replace(' ', '_').str.lower(), prefix='cat_', drop_first=False)\ndf[dummies.columns] = dummies\ndel df['category']\n\ndummies = pd.get_dummies(df.source.str.replace(' ', '_').str.lower(), prefix='source_', drop_first=False)\ndf[dummies.columns] = dummies\ndel df['source']\n\ndf['status'] = df.status == 'Closed'\n\ndel df['sid']\ndel df['id']\ndel df['position']\ndel df['created_at']\ndel df['created_meta']\ndel df['updated_at']\ndel df['updated_meta']\ndel df['meta']\ndel df['caseid']\ndel df['address']\ndel df['responsible_agency']\ndel df['request_details']\ndel df['request_type']\ndel df['status']\ndel df['updated']\ndel df['supervisor_district']\ndel df['point']\n\ndf = df.sort_values(by='opened', ascending=True)\n\ndel df['opened']\ndel df['closed']\ndel df['closed_dayofweek']\ndel df['closed_month']\ndel df['closed_year']\ndel df['closed_dayofmonth']\ndel df['closed_weekend']\ndel df['delta']\ndel df['is_open']\n\n# deleting opened_year because there is only 2012 and 2013, which are not relevant for future classifications\ndel df['opened_year']\n\ndf = df.dropna()\n\ncolumns = list(df.columns)\ncolumns.remove('target')\ncolumns.append('target')\ndf = df[columns]\nfeature_columns = columns[:-1]", "Data 
Split\nIdealy, we'd perform stratified 5x4 fold cross validation, however, given the timeframe, we'll stick with a single split. We'll use an old chunck of data as training, a more recent as validation, and finally, the most recent data as test set.\nDon't worry, we'll use K-fold cross-validation in the next notebook\nSince the data we want to predict is in the future, we'll use the first 60% as training, and the following 20% as validation and 20% test.", "l1 = int(df.shape[0]*0.6)\nl2 = int(df.shape[0]*0.8)\n\ndf_tra = df.loc[range(0,l1)]\ndf_val = df.loc[range(l1,l2)]\ndf_tst = df.loc[range(l2, df.shape[0])]\n\ndf_tra.shape, df_val.shape, df_tst.shape, df.shape", "checking data distribution, we see that this is a good split (considering the porportion of targets)", "fig, axs = plt.subplots(1,3, sharex=True, sharey=True, figsize=(12,3))\naxs[0].hist(df_tra.target, bins=2)\naxs[1].hist(df_val.target, bins=2)\naxs[2].hist(df_tst.target, bins=2)\naxs[0].set_title('Training')\naxs[1].set_title('Validation')\naxs[2].set_title('Test')\n\nX_tra = df_tra.drop(labels=['target'], axis=1, inplace=False).values\ny_tra = df_tra.target.values\n\nX_val = df_val.drop(labels=['target'], axis=1, inplace=False).values\ny_val = df_val.target.values\n\nX_tst = df_tst.drop(labels=['target'], axis=1, inplace=False).values\ny_tst = df_tst.target.values", "Normalization\nFor the sake of simplicity, we will use the 0-1 range normalization:\n$ x_i = \\dfrac{x_i - min(x_i)}{max(x_i) - min(x_i)}$\nThis is allowed because we do not have that many 'outliers' in our features.\nThe Alpha-Trimmed normalization or Standard Scaler normalization would be more appropriate if we introduced other (interesting) features such as:\n- Average cases/week in the neighborhood.\n- Number of cases in the last X days in that neighborhood.", "from sklearn.preprocessing import MinMaxScaler\n\nnormalizer = MinMaxScaler().fit(X_tra)\nX_tra = normalizer.transform(X_tra)\nX_val = normalizer.transform(X_val)\nX_tst = normalizer.transform(X_tst)", "Feature Importance\nVariance Threshold", "from sklearn.feature_selection import VarianceThreshold\n\nprint X_tra.shape\nthreshold=(.999 * (1 - .999))\nsel = VarianceThreshold(threshold=threshold)\nX_tra = sel.fit(X_tra).transform(X_tra)\nX_val = sel.transform(X_val)\nX_tst = sel.transform(X_tst)\nprint X_tra.shape\n\nremoved_features_1 = np.array(columns)[np.where(sel.variances_ < threshold)]\nselected_features_1 = np.array(feature_columns)[np.where(sel.variances_ >= threshold)]\n\nprint 'removed_features'\nprint removed_features_1", "Correlation", "plt.figure(figsize=(12,8))\nsns.heatmap(df.corr('pearson'))", "features loc_x and loc_y are too correlated with latitude and longitude, respectively, for this reason, we'll delete lox_x and loc_y.", "del df['loc_x']\ndel df['loc_y']", "Feature Importance (Trees)\nThis can be done using sklearn.feature_selection.SelectFromModel, however, we do it by ourselves in order to get a better visualization of the process.", "from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier\n\ndef feature_importance(X, y, feat_names, forest='random_forest', plot=False, print_=False):\n # Build a forest and compute the feature importances\n if forest == 'random_forest':\n forest = RandomForestClassifier(n_estimators=200, random_state=0)\n elif forest == 'extra_trees':\n forest = ExtraTreesClassifier(n_estimators=200, random_state=0)\n\n forest.fit(X, y)\n importances = forest.feature_importances_\n sd = np.std([tree.feature_importances_ for tree in 
forest.estimators_], axis=0)\n mn = np.mean([tree.feature_importances_ for tree in forest.estimators_], axis=0)\n indices = np.argsort(importances)[::-1]\n\n # Print the feature ranking\n\n if print_:\n print(\"Feature ranking:\")\n for f in range(X.shape[1]):\n print(\"%d. feature %d (%f) %s\" % (f + 1, indices[f], importances[indices[f]], feat_names[indices[f]]))\n\n if plot:\n \n plt.figure(figsize=(16,3))\n plt.title(\"Feature importances\")\n plt.bar(range(len(importances)), importances[indices],\n color=\"r\", yerr=sd[indices], align=\"center\")\n plt.xticks(range(len(importances)), indices)\n plt.xlim([-1, len(indices)])\n plt.show()\n \n return indices, importances\n\nindices, importances = feature_importance(X_tra, y_tra, selected_features_1, plot=True, forest='random_forest')\n\nindices, importances = feature_importance(X_tra, y_tra, selected_features_1, plot=True, forest='extra_trees')\n\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.metrics import roc_auc_score\n \nscores = []\nfor i in range(1,len(indices)):\n mask = indices[:i]\n clf = RandomForestClassifier(n_estimators=100)\n clf.fit(X_tra[:,mask], y_tra)\n score = roc_auc_score(y_val, clf.predict_proba(X_val[:,mask])[:,1])\n scores.append(score)\n \nplt.plot(np.arange(len(scores)), scores)\nplt.xlabel(\"# Features\")\nplt.ylabel(\"AUC\")\n\nmax_index = np.argmax(scores)\nsel_index = 18", "Based on this results, we'll select the N features", "selected_features_2 = np.array(selected_features_1)[indices[:sel_index]]\n\nselected_features_2", "It seems like the location and category features are more important than the date related features.\nOn the date related features, the system also selected opened_dayofweek and opened_dayofmonth.\nTest Set", "from sklearn.metrics import roc_curve, auc\n\ndef find_cutoff(y_true, y_pred):\n fpr, tpr, threshold = roc_curve(y_true, y_pred)\n i = np.arange(len(tpr)) \n roc = pd.DataFrame({'tf' : pd.Series(tpr-(1-fpr), index=i), 'threshold' : pd.Series(threshold, index=i)})\n roc_t = roc.ix[(roc.tf-0).abs().argsort()[:1]]\n return list(roc_t['threshold'])[0]\n\nfrom sklearn.feature_selection import SelectFromModel, SelectKBest\nfrom sklearn.pipeline import Pipeline\n\ndef __feature_importance(X, y):\n forest = RandomForestClassifier(n_estimators=200, random_state=0)\n forest.fit(X, y)\n return forest.feature_importances_\n\npipe = Pipeline([\n ('normalizer', MinMaxScaler()),\n ('selection_threshold', VarianceThreshold(threshold=(.999 * (1 - .999)))),\n ('selection_kbest', SelectKBest(__feature_importance, k=31)),\n ('classifier', RandomForestClassifier(n_estimators=100))])\n\npipe.fit(X_tra, y_tra)\ny_proba = pipe.predict_proba(X_tst)\ncutoff = find_cutoff(y_tst, y_proba[:,1])\n\nfrom sklearn.metrics import roc_curve, auc\nfpr, tpr, thresh = roc_curve(y_tst, y_proba[:,1])\nauc_roc = auc(fpr, tpr)\n\nprint 'cuttoff {:.4f}'.format(cutoff)\nplt.title('ROC Curve')\nplt.plot(fpr, tpr, 'b',\nlabel='AUC = %0.2f'% auc_roc)\nplt.legend(loc='lower right')\nplt.plot([0,1],[0,1],'r--')\nplt.xlim([-0.1,1.2])\nplt.ylim([-0.1,1.2])\nplt.ylabel('True Positive Rate')\nplt.xlabel('False Positive Rate')\nplt.show()\n\nfrom sklearn.metrics import classification_report\n\nprint classification_report(y_tst, y_proba[:,1] >= cutoff)\n\nimport sqlite3\nfrom sqlalchemy import create_engine\n\nSQL_ENGINE = create_engine('sqlite:///streetlight_cases.db')\ndf.to_sql('new_data', SQL_ENGINE, if_exists='replace', index=False)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
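The feature-selection notebook above ends by wiring MinMaxScaler, VarianceThreshold, a forest-importance SelectKBest and a RandomForestClassifier into a single Pipeline. Below is a minimal, self-contained sketch of that same idea; synthetic data from make_classification stands in for the SF streetlight-cases download, so the toy arrays and the chosen k are illustrative assumptions, not the notebook's own values.

```python
# Illustrative sketch only: synthetic data replaces the streetlight-cases frame,
# and k=15 is an assumed value rather than the notebook's k=31.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, VarianceThreshold
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

X_toy, y_toy = make_classification(n_samples=2000, n_features=30,
                                   n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X_toy, y_toy, test_size=0.2,
                                          stratify=y_toy, random_state=0)

def forest_importance(X, y):
    # Score features by random-forest importance, as the notebook's helper does
    forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    return forest.feature_importances_

pipe = make_pipeline(
    MinMaxScaler(),                                   # 0-1 range normalization
    VarianceThreshold(threshold=.999 * (1 - .999)),   # drop near-constant columns
    SelectKBest(forest_importance, k=15),             # keep the top-k features
    RandomForestClassifier(n_estimators=100, random_state=0),
)
pipe.fit(X_tr, y_tr)
print('AUC:', roc_auc_score(y_te, pipe.predict_proba(X_te)[:, 1]))
```

Keeping the scaler, selector and classifier inside one pipeline means the selection step is re-fit on training data only, which is what keeps any later cross-validation honest.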
jamesfolberth/NGC_STEM_camp_AWS
notebooks/20Q/loadMechanicalTurkData.ipynb
bsd-3-clause
[ "Loads the mechanical Turk data\nRun this script to load the data. Your job after loading the data is to make a 20 questions style game (see www.20q.net )\nRead in the list of movies\nThere were 250 movies in the list, but we only used the 149 movies that were made in 1980 or later", "# Read in the list of 250 movies, making sure to remove commas from their names\n# (actually, if it has commas, it will be read in as different fields)\nimport csv\nmovies = []\nwith open('movies.csv','r') as csvfile:\n myreader = csv.reader(csvfile)\n for index, row in enumerate(myreader):\n movies.append( ' '.join(row) ) # the join() call merges all fields\n# We might like to split this into two tasks, one for movies pre-1980 and one for post-1980, \nimport re # used for \"regular-expressions\", a method of searching strings\ncutoffYear = 1980\noldMovies = []\nnewMovies = []\nfor mv in movies:\n sp = re.split(r'[()]',mv)\n #print sp # output looks like: ['Kill Bill: Vol. 2 ', '2004', '']\n year = int(sp[1])\n if year < cutoffYear:\n oldMovies.append( mv )\n else:\n newMovies.append( mv )\nprint(\"Found\", len(newMovies), \"new movies (after 1980) and\", len(oldMovies), \"old movies\")\n# and for simplicity, let's just rename \"newMovies\" to \"movies\"\nmovies = newMovies\n\n# Make a dictionary that will help us convert movie titles to numbers\nMovie2index = {}\nfor ind, mv in enumerate(movies):\n Movie2index[mv] = ind\n# sample usage:\nprint('The movie ', movies[3],' has index', Movie2index[movies[3]])", "Read in the list of questions\nThere were 60 questions but due to a copy-paste error, there were some duplicates, so we only have 44 unique questions", "# Read in the list of 60 questions\nAllQuestions = []\nwith open('questions60.csv', 'r') as csvfile:\n myreader = csv.reader(csvfile)\n for row in myreader:\n # the rstrip() removes blanks\n AllQuestions.append( row[0].rstrip() )\nprint('Found', len(AllQuestions), 'questions')\nquestions = list(set(AllQuestions))\nprint('Found', len(questions), 'unique questions')\n\n# As we did for movies, make a dictionary to convert questions to numbers\nQuestion2index = {}\nfor index,quest in enumerate( questions ):\n Question2index[quest] = index\n# sample usage:\nprint('The question ', questions[40],' has index', Question2index[questions[40]])", "Read in the training data\nThe columns of X correspond to questions, and rows correspond to more data. The rows of y are the movie indices. The values of X are 1, -1 or 0 (see YesNoDict for encoding)", "YesNoDict = { \"Yes\": 1, \"No\": -1, \"Unsure\": 0, \"\": 0 }\n# load from csv files\nX = []\ny = []\nwith open('MechanicalTurkResults_149movies_X.csv','r') as csvfile:\n myreader = csv.reader(csvfile)\n for row in myreader:\n X.append( list(map(int,row)) )\nwith open('MechanicalTurkResults_149movies_y.csv','r') as csvfile:\n myreader = csv.reader(csvfile)\n for row in myreader:\n y = list(map(int,row))", "Your turn: train a decision tree classifier", "from sklearn import tree\n# the rest is up to you", "Use the trained classifier to play a 20 questions game\nYou can see the list of movies we trained on here: https://docs.google.com/spreadsheets/d/1-849aPzi8Su_c5HwwDFERrogXjvSaZFfp_y9MHeO1IA/edit?usp=sharing\nYou may want to use from sklearn.tree import _tree and 'tree.DecisionTreeClassifier' with commands like tree_.children_left[node], tree_.value[node], tree_.feature[node], and `tree_.threshold[node]'.", "# up to you" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
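The 20-questions notebook above leaves the classifier and the game loop as an exercise ("the rest is up to you"). One possible completion is sketched below; it assumes the X, y, questions and movies objects built earlier in that notebook and uses the sklearn.tree._tree attributes the notebook hints at (children_left, feature, threshold, value).

```python
# Sketch of one possible completion; X, y, questions and movies are the objects
# built earlier in the notebook, not redefined here.
import numpy as np
from sklearn import tree
from sklearn.tree import _tree

clf = tree.DecisionTreeClassifier(random_state=0)
clf.fit(X, y)

def play_20q(clf, questions, movies):
    """Walk the fitted tree, asking the question stored at each split node."""
    tree_ = clf.tree_
    node = 0
    while tree_.children_left[node] != _tree.TREE_LEAF:
        answer = input(questions[tree_.feature[node]] + ' (Yes/No/Unsure): ')
        value = {'Yes': 1, 'No': -1}.get(answer, 0)   # same encoding as YesNoDict
        if value <= tree_.threshold[node]:
            node = tree_.children_left[node]
        else:
            node = tree_.children_right[node]
    guess = clf.classes_[np.argmax(tree_.value[node])]
    print('My guess:', movies[guess])

# play_20q(clf, questions, movies)
```

Because the answers were encoded as 1/-1/0, comparing the encoded reply against each node's threshold reproduces the split the fitted tree learned.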
AllenDowney/ModSimPy
examples/kitten.ipynb
mit
[ "Kittens\nModeling and Simulation in Python\nCopyright 2021 Allen Downey\nLicense: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International", "# download modsim.py if necessary\n\nfrom os.path import basename, exists\n\ndef download(url):\n filename = basename(url)\n if not exists(filename):\n from urllib.request import urlretrieve\n local, _ = urlretrieve(url, filename)\n print('Downloaded ' + local)\n \ndownload('https://github.com/AllenDowney/ModSimPy/raw/master/' +\n 'modsim.py')\n\n# import functions from modsim\n\nfrom modsim import *", "If you have used the Internet, you have probably seen videos of kittens unrolling toilet paper.\nAnd you might have wondered how long it would take a standard kitten to unroll 47 m of paper, the length of a standard roll.\nThe interactions of the kitten and the paper rolls are complex. To keep things simple, let's assume that the kitten pulls down on the free end of the roll with constant force. And let's neglect the friction between the roll and the axle.\nThis diagram shows the paper roll with the force applied by the kitten, $F$, the lever arm of the force around the axis of rotation, $r$, and the resulting torque, $\\tau$.\n\nAssuming that the force applied by the kitten is 0.002 N, how long would it take to unroll a standard roll of toilet paper?\nWe'll use the same parameters as in Chapter 24:", "Rmin = 0.02 # m\nRmax = 0.055 # m\nMcore = 15e-3 # kg\nMroll = 215e-3 # kg\nL = 47 # m\ntension = 0.002 # N", "Rmin and Rmax are the minimum and maximum radius of the roll, respectively.\nMcore is the weight of the core (the cardboard tube at the center) and Mroll is the total weight of the paper.\nL is the unrolled length of the paper.\ntension is the force the kitten applies by pulling on the loose end of the roll (I chose this value because it yields reasonable results).\nIn Chapter 24 we defined $k$ to be the constant that relates a change in the radius of the roll to a change in the rotation of the roll:\n$$dr = k~d\\theta$$ \nAnd we derived the equation for $k$ in terms of $R_{min}$, $R_{max}$, and $L$. \n$$k = \\frac{1}{2L} (R_{max}^2 - R_{min}^2)$$\nSo we can compute k like this:", "k = (Rmax**2 - Rmin**2) / 2 / L \nk ", "Moment of Inertia\nTo compute angular acceleration, we'll need the moment of inertia for the roll.\nAt http://modsimpy.com/moment you can find moments of inertia for\nsimple geometric shapes. 
I'll model the core as a \"thin cylindrical shell\", and the paper roll as a \"thick-walled cylindrical tube with open ends\".\nThe moment of inertia for a thin shell is just $m r^2$, where $m$ is the mass and $r$ is the radius of the shell.\nFor a thick-walled tube the moment of inertia is\n$$I = \\frac{\\pi \\rho h}{2} (r_2^4 - r_1^4)$$ \nwhere $\\rho$ is the density of the material, $h$ is the height of the tube (if we think of the roll oriented vertically), $r_2$ is the outer diameter, and $r_1$ is the inner diameter.\nSince the outer diameter changes as the kitten unrolls the paper, we\nhave to compute the moment of inertia, at each point in time, as a\nfunction of the current radius, r, like this:", "def moment_of_inertia(r):\n \"\"\"Moment of inertia for a roll of toilet paper.\n \n r: current radius of roll in meters\n \n returns: moment of inertia in kg m**2\n \"\"\" \n Icore = Mcore * Rmin**2 \n Iroll = np.pi * rho_h / 2 * (r**4 - Rmin**4)\n return Icore + Iroll", "Icore is the moment of inertia of the core; Iroll is the moment of inertia of the paper.\nrho_h is the density of the paper in terms of mass per unit of area. \nTo compute rho_h, we compute the area of the complete roll like this:", "area = np.pi * (Rmax**2 - Rmin**2)\narea", "And divide the mass of the roll by that area.", "rho_h = Mroll / area\nrho_h", "As an example, here's the moment of inertia for the complete roll.", "moment_of_inertia(Rmax)", "As r decreases, so does I. Here's the moment of inertia when the roll is empty.", "moment_of_inertia(Rmin)", "The way $I$ changes over time might be more of a problem than I have made it seem. In the same way that $F = m a$ only applies when $m$ is constant, $\\tau = I \\alpha$ only applies when $I$ is constant. When $I$ varies, we usually have to use a more general version of Newton's law. However, I believe that in this example, mass and moment of inertia vary together in a way that makes the simple approach work out.\nA friend of mine who is a physicist is not convinced; nevertheless, let's proceed on the assumption that I am right.\nSimulation\nThe state variables we'll use are\n\n\ntheta, the total rotation of the roll in radians, \n\n\nomega, angular velocity in rad / s,\n\n\nr, the radius of the roll, and\n\n\ny, the length of the unrolled paper.\n\n\nHere's a State object with the initial conditions.", "init = State(theta=0, omega=0, y=0, r=Rmax)\ninit", "And here's a System object with the starting conditions and t_end.", "system = System(init=init, t_end=120)", "You can take it from here.\nExercise:\nWrite a slope function we can use to simulate this system. Test it with the initial conditions. The results should be approximately\n0.0, 0.294, 0.0, 0.0", "# Solution goes here\n\n# Solution goes here", "Exercise: Write an event function that stops the simulation when y equals L, that is, when the entire roll is unrolled. Test your function with the initial conditions.", "# Solution goes here\n\n# Solution goes here", "Now run the simulation.", "# Solution goes here", "And check the results.", "results.tail()", "The final value of theta should be about 200 rotations, the same as in Chapter 24.\nThe final value of omega should be about 63 rad/s, which is about 10 revolutions per second. 
That's pretty fast, but it might be plausible.\nThe final value of y should be L, which is 47 m.\nThe final value of r should be Rmin, which is 0.02 m.\nAnd the total unrolling time should be about 76 seconds, which seems plausible.\nThe following cells plot the results.\ntheta increases slowly at first, then accelerates.", "results.theta.plot(color='C0', label='theta')\ndecorate(xlabel='Time (s)',\n ylabel='Angle (rad)')", "Angular velocity, omega, increases almost linearly at first, as constant force yields almost constant torque. Then, as the radius decreases, the lever arm decreases, yielding lower torque, but moment of inertia decreases even more, yielding higher angular acceleration.", "results.omega.plot(color='C2', label='omega')\n\ndecorate(xlabel='Time (s)',\n ylabel='Angular velocity (rad/s)')", "y increases slowly and then accelerates.", "results.y.plot(color='C1', label='y')\n\ndecorate(xlabel='Time (s)',\n ylabel='Length (m)')", "r decreases slowly, then accelerates.", "results.r.plot(color='C4', label='r')\n\ndecorate(xlabel='Time (s)',\n ylabel='Radius (m)')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
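For the kitten exercise above, one slope/event pair consistent with the chapter's setup (torque = r times tension, angular acceleration = torque / moment_of_inertia(r), dy = r dθ and dr = -k dθ while unrolling) is sketched below. At the initial conditions it returns approximately (0.0, 0.294, 0.0, 0.0), matching the values quoted in the notebook; the solver call is commented out because it only makes sense inside that notebook's session.

```python
# Sketch of one possible solution; tension, k, L, moment_of_inertia and system
# come from the notebook above.
def slope_func(t, state, system):
    theta, omega, y, r = state
    torque = r * tension                   # lever arm times the kitten's pull
    alpha = torque / moment_of_inertia(r)  # angular acceleration
    dydt = r * omega                       # paper leaves the roll at rim speed
    drdt = -k * omega                      # radius shrinks as the roll unwinds
    return omega, alpha, dydt, drdt

def event_func(t, state, system):
    theta, omega, y, r = state
    return L - y                           # crosses zero when the roll is empty

# slope_func(0, system.init, system)       # roughly (0.0, 0.294, 0.0, 0.0)
# results, details = run_solve_ivp(system, slope_func, events=event_func)
```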
cehbrecht/demo-notebooks
wps-cfchecker.ipynb
apache-2.0
[ "Init WPS with cfchecker proceses\n\nhummingbird caps url: https://bovec.dkrz.de/ows/proxy/hummingbird?version=1.0.0&request=GetCapabilities&service=WPS\nusing twitcher access tokens: http://twitcher.readthedocs.io/en/latest/tutorial.html", "from owslib.wps import WebProcessingService\ntoken = 'a890731658ac4f1ba93a62598d2f2645'\nheaders = {'Access-Token': token}\nwps = WebProcessingService(\"https://bovec.dkrz.de/ows/proxy/hummingbird\", verify=False, headers=headers)\n", "Show available processes", "for process in wps.processes:\n print process.identifier,\":\", process.title", "Show details about qa_cfchecker process", "process = wps.describeprocess(identifier='qa_cfchecker')\nfor inp in process.dataInputs:\n print inp.identifier, \":\", inp.title, \":\", inp.dataType", "Check file available on http service", "inputs = [('dataset', 'http://bovec.dkrz.de:8090/wpsoutputs/hummingbird/output-b9855b08-42d8-11e6-b10f-abe4891050e3.nc')]\nexecution = wps.execute(identifier='qa_cfchecker', inputs=inputs, output='output', async=False)\nprint execution.status\n\n\nfor out in execution.processOutputs:\n print out.title, out.reference", "Prepare local file to send to service\nTo send a local file with the request the file needs to be base64 encoded.", "from owslib.wps import ComplexDataInput\nimport base64\nfp = open(\"/home/pingu/tmp/input2.nc\", 'r')\ntext = fp.read()\nfp.close()\nencoded = base64.b64encode(text)\ncontent = ComplexDataInput(encoded)\ninputs = [ ('dataset', content) ]\n\nexecution = wps.execute(identifier='qa_cfchecker', inputs=inputs, output='output', async=False)\nprint execution.status\n\nfor out in execution.processOutputs:\n print out.title, out.reference" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
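The base64 step in the WPS notebook above is written for Python 2, where a file opened in text mode yields a str that base64.b64encode accepts. Under Python 3 the same step takes bytes in and gives bytes out, so a version-tolerant sketch of just that cell (same file path as in the notebook) would be:

```python
import base64

# Read the NetCDF file as bytes and decode the encoded result back to str,
# mirroring the string the original Python 2 cell produced.
with open("/home/pingu/tmp/input2.nc", "rb") as fp:
    encoded = base64.b64encode(fp.read()).decode("ascii")
```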
mohanprasath/Course-Work
coursera/python_for_data_science/3.3_Functions.ipynb
gpl-3.0
[ "<a href=\"http://cocl.us/topNotebooksPython101Coursera\"><img src = \"https://ibm.box.com/shared/static/yfe6h4az47ktg2mm9h05wby2n7e8kei3.png\" width = 750, align = \"center\"></a>\n<a href=\"https://www.bigdatauniversity.com\"><img src = \"https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png\" width = 300, align = \"center\"></a>\n<h1 align=center><font size = 5>WRITING YOUR OWN FUNCTIONS IN PYTHON</font></h1>\n\nTable of Contents\n<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n<ol>\n\n<li><a href=\"#ref1\">What is a Function?</a></li>\n<li><a href=\"#ref3\">Using if/else statements in functions</a></li>\n<li><a href=\"#ref4\">Setting default argument values in your custom functions</a></li>\n<li><a href=\"#ref6\">Global and local variables</a></li>\n<li><a href=\"#ref7\">Scope of a Variable </a></li>\n\n</ol>\n<br>\n<p></p>\nEstimated Time Needed: <strong>40 min</strong>\n</div>\n\n<hr>\n\n<hr>\n\n<a id='ref1'></a>\n<center><h2>Defining a Function</h2></center>\nA function is a reusable block of code which performs operations specified in the function. They let you break down tasks and allow you to reuse your code in different programs.\nThere are two types of functions :\n\nPre-defined functions\nUser defined functions\n\n<h3>What is a Function?</h3>\n\nYou can define functions to provide the required functionality. Here are simple rules to define a function in Python:\n- Functions blocks begin def followed by the function name and parentheses ().\n- There are input parameters or arguments that should be placed within these parentheses. \n- You can also define parameters inside these parentheses.\n- There is a body within every function that starts with a colon (:) and is indented.\n- You can also place documentation before the body \n- The statement return exits a function, optionally passing back a value \nAn example of a function that adds on to the parameter a prints and returns the output as b:", "def add(a):\n \"\"\"\n add 1 to a\n \"\"\"\n b=a+1; \n print(a, \"if you add one\" ,b)\n \n return(b)", "The figure below illustrates the terminology: \n<a ><img src = \"https://ibm.box.com/shared/static/wsl6jcfld2c3171ob19vjr5chw9gyxrc.png\" width = 500, align = \"center\"></a>\n\n<h4 align=center> \n A labeled function\n </h4>\n\nWe can obtain help about a function :", "help(add)", "We can call the function:", "add(1)", "If we call the function with a new input we get a new result:", "add(2)", "We can create different functions. For example, we can create a function that multiplies two numbers. The numbers will be represented by the variables a and b:", "def Mult(a,b):\n c=a*b\n return(c)", "The same function can be used for different data types. For example, we can multiply two integers:", "Mult(2,3)", "Two Floats:", "Mult(10,3.14)", "We can even replicate a string by multiplying with an integer:", "Mult(2,\"Michael Jackson \")", "Come up with a function that divides the first input by the second input:", "def divide_values(a, b):\n return a / b", "<div align=\"right\">\n<a href=\"#q1\" class=\"btn btn-default\" data-toggle=\"collapse\">Click here for the solution</a>\n\n</div>\n<div id=\"q1\" class=\"collapse\">\n```\ndef div(a,b):\n return(a/b)\n```\n</div>\n\n<h3>Variables </h3>\n\nThe input to a function is called a formal parameter.\nA variable that is declared inside a function is called a local variable. The parameter only exists within the function (i.e. the point where the function starts and stops). 
\nA variable that is declared outside a function definition is a global variable, and its value is accessible and modifiable throughout the program. We will discuss more about global variables at the end of the lab.", "#Function Definition \ndef square(a):\n \"\"\"Square the input and add one \n \"\"\"\n #Local variable \n b=1\n c=a*a+b;\n print(a, \"if you square +1 \",c) \n return(c)\n", "The labels are displayed in the figure: \n<a ><img src = \"https://ibm.box.com/shared/static/gpfa525nnfwxt5rhrvd3o6i8rp2iwsai.png\" width = 500, align = \"center\"></a>\n\n<h4 align=center> \n Figure 2: A function with labeled variables \n </h4>\n\nWe can call the function with an input of 3:", "#Initializes Global variable \n\nx=3\n#Makes function call and return function a y\nz=square(x)\nz", "We can call the function with an input of 2 in a different manner:", "square(2)", "If there is no return statement, the function returns None. The following two functions are equivalent:", "def MJ():\n print('Michael Jackson')\n \ndef MJ1():\n print('Michael Jackson')\n return(None)\n\nMJ()\n\nMJ1()", "Printing the function after a call reveals a None is the default return statement:", "print(MJ())\nprint(MJ1())", "Create a function con that concatenates two strings using the addition operation:\n:", "def con(a,b):\n return(a+b)", "<div align=\"right\">\n<a href=\"#q2\" class=\"btn btn-default\" data-toggle=\"collapse\">Click here for the solution</a>\n\n</div>\n<div id=\"q2\" class=\"collapse\">\n```\ndef div(a,b):\n return(a+b)\n```\n</div>\n\nCan the same function be used to add to integers or strings?", "print(con(1, 2))\nprint(con('1', '2'))", "<div align=\"right\">\n<a href=\"#q3\" class=\"btn btn-default\" data-toggle=\"collapse\">Click here for the solution</a>\n\n</div>\n<div id=\"q3\" class=\"collapse\">\n```\nyes,for example: \ncon(2,2)\n```\n</div>\n\nCan the same function be used to concentrate a list or tuple?", "print(con([1], [2]))", "<div align=\"right\">\n<a href=\"#q4\" class=\"btn btn-default\" data-toggle=\"collapse\">Click here for the solution</a>\n\n</div>\n<div id=\"q4\" class=\"collapse\">\n```\nyes,for example: \ncon(['a',1],['b',1])\n```\n</div>\n\n<h3><b>Pre-defined functions</b></h3>\n\nThere are many pre-defined functions in Python, so let's start with the simple ones.\nThe print() function:", "album_ratings = [10.0,8.5,9.5,7.0,7.0,9.5,9.0,9.5] \nprint(album_ratings)", "The sum() function adds all the elements in a list or tuple:", "sum(album_ratings)", "The length function returns the length of a list or tuple:", "len(album_ratings)", "<div class=\"alert alert-success alertsuccess\" style=\"margin-top: 20px\">\n<h4> [Tip] How do I learn more about the pre-defined functions in Python? </h4>\n<p></p>\nWe will be introducing a variety of **pre-defined functions** to you as you learn more about Python. There are just too many functions, so there's no way we can teach them all in one sitting. But if you'd like to take a quick peek, here's a short reference card for some of the commonly-used pre-defined functions: \nhttp://www.astro.up.pt/~sousasag/Python_For_Astronomers/Python_qr.pdf\n</div>\n\n<h3>Functions Makes Things Simple </h3>\n\nConsider the two lines of code in Block 1 and Block 2: the procedure for each block is identical. 
The only thing that is different is the variable names and values.\nBlock 1:", "a1=4;\nb1=5;\nc1=a1+b1+2*a1*b1-1\nif(c1<0):\n c1=0; \nelse:\n c1=5;\nc1 ", "Block 2:", "a2=0;\nb2=0;\nc2=a2+b2+2*a2*b2-1\nif(c2<0):\n c2=0; \nelse:\n c2=5;\nc2 ", "We can replace the lines of code with a function. A function combines many instructions into a single line of code. Once a function is defined, it can be used repeatedly. You can invoke the same function many times in your program. You can save your function and use it in another program or use someone else’s function. The lines of code in code block 1 and code block 2 can be replaced by the following function:", "def Equation(a,b):\n c=a+b+2*a*b-1\n if(c<0):\n c=0\n \n else:\n c=5\n return(c) \n", "This function takes two inputs, a and b, then applies several operations to return c. \nWe simply define the function, replace the instructions with the function, and input the new values of a1,b1 and a2,b2 as inputs. The entire process is demonstrated in the figure: \n<a ><img src = \"https://ibm.box.com/shared/static/efn4rii75bgytjdb5c8ek6uezch7yaxq.gif\" width = 1100, align = \"center\"></a>\n\n<h4 align=center> \n Example of a function used to replace redundant lines of code \n </h4>\n\nCode Blocks 1 and Block 2 can now be replaced with code Block 3 and code Block 4.\nBlock 3:", "a1=4;\nb1=5;\nc1=Equation(a1,b1)\nc1", "Block 4:", "a2=0;\nb2=0;\nc2=Equation(a2,b2)\nc2", "<hr>\n\n<a id='ref3'></a>\n<center><h2>Using if/else statements and loops in functions</h2></center>\nThe return() function is particularly useful if you have any IF statements in the function, when you want your output to be dependent on some condition:", "def type_of_album(artist,album,year_released):\n if year_released > 1980:\n print(artist,album,year_released)\n return \"Modern\"\n else:\n print(artist,album,year_released)\n return \"Oldie\"\n \nx = type_of_album(\"Michael Jackson\",\"Thriller\",1980)\nprint(x)\n", "We can use a loop in a function. For example, we can print out each element in a list:", "def PrintList(the_list):\n for element in the_list:\n print(element)\n\nPrintList(['1',1,'the man',\"abc\"])", "<hr>\n\n<a id='ref4'></a>\n<center><h2>Setting default argument values in your custom functions</h2></center>\nYou can set a default value for arguments in your function. For example, in the isGoodRating() function, what if we wanted to create a threshold for what we consider to be a good rating? Perhaps by default, we should have a default rating of 4:", "def isGoodRating(rating=4): \n if(rating < 7):\n print(\"this album sucks it's rating is\",rating)\n \n else:\n print(\"this album is good its rating is\",rating)\n", "<hr>", "isGoodRating()\nisGoodRating(10)\n ", "<a id='ref6'></a>\n<center><h2>Global variables</h2></center>\n<br>\nSo far, we've been creating variables within functions, but we have not discussed variables outside the function. These are called global variables. \n<br>\nLet's try to see what printer1 returns:", "artist = \"Michael Jackson\"\ndef printer1(artist):\n internal_var = artist\n print(artist,\"is an artist\")\n \nprinter1(artist)\n", "If we print internal_var we get an error. \nWe got a Name Error: name 'internal_var' is not defined. Why? \nIt's because all the variables we create in the function is a local variable, meaning that the variable assignment does not persist outside the function. 
\nBut there is a way to create global variables from within a function as follows:", "artist = \"Michael Jackson\"\n\ndef printer(artist):\n global internal_var \n internal_var= \"Whitney Houston\"\n print(artist,\"is an artist\")\n\nprinter(artist) \nprinter(internal_var)\n", "<a id='ref7'></a>\n<center><h2>Scope of a Variable</h2></center>\n<hr>\n\nThe scope of a variable is the part of that program where that variable is accessible. Variables that are declared outside of all function definitions, such as the myFavouriteBand variable in the code shown here, are accessible from anywhere within the program. As a result, such variables are said to have global scope, and are known as global variables. \n myFavouriteBand is a global variable, so it is accessible from within the getBandRating function, and we can use it to determine a band's rating. We can also use it outside of the function, such as when we pass it to the print function to display it:", "myFavouriteBand = \"AC/DC\"\n\ndef getBandRating(bandname):\n if bandname == myFavouriteBand:\n return 10.0\n else:\n return 0.0\n\nprint(\"AC/DC's rating is:\", getBandRating(\"AC/DC\"))\nprint(\"Deep Purple's rating is:\",getBandRating(\"Deep Purple\"))\nprint(\"My favourite band is:\", myFavouriteBand)", "Take a look at this modified version of our code. Now the myFavouriteBand variable is defined within the getBandRating function. A variable that is defined within a function is said to be a local variable of that function. That means that it is only accessible from within the function in which it is defined. Our getBandRating function will still work, because myFavouriteBand is still defined within the function. However, we can no longer print myFavouriteBand outside our function, because it is a local variable of our getBandRating function; it is only defined within the getBandRating function:", "def getBandRating(bandname):\n myFavouriteBand = \"AC/DC\"\n if bandname == myFavouriteBand:\n return 10.0\n else:\n return 0.0\n\nprint(\"AC/DC's rating is: \", getBandRating(\"AC/DC\"))\nprint(\"Deep Purple's rating is: \", getBandRating(\"Deep Purple\"))\nprint(\"My favourite band is\", myFavouriteBand)", "Finally, take a look at this example. We now have two myFavouriteBand variable definitions. The first one of these has a global scope, and the second of them is a local variable within the getBandRating function. Within the getBandRating function, the local variable takes precedence. Deep Purple will receive a rating of 10.0 when passed to the getBandRating function. However, outside of the getBandRating function, the getBandRating s local variable is not defined, so the myFavouriteBand variable we print is the global variable, which has a value of AC/DC:", "myFavouriteBand = \"AC/DC\"\n\ndef getBandRating(bandname):\n myFavouriteBand = \"Deep Purple\"\n if bandname == myFavouriteBand:\n return 10.0\n else:\n return 0.0\n\nprint(\"AC/DC's rating is:\",getBandRating(\"AC/DC\"))\nprint(\"Deep Purple's rating is: \",getBandRating(\"Deep Purple\"))\nprint(\"My favourite band is:\",myFavouriteBand)", "<a href=\"http://cocl.us/bottemNotebooksPython101Coursera\"><img src = \"https://ibm.box.com/shared/static/irypdxea2q4th88zu1o1tsd06dya10go.png\" width = 750, align = \"center\"></a>\nAbout the Authors:\nJoseph Santarcangelo has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. 
Joseph has been working for IBM since he completed his PhD.\nJames Reeve is a Software Engineering intern at IBM.\n<hr>\nCopyright &copy; 2017 cognitiveclass.ai. This notebook and its source code are released under the terms of the MIT License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
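The closing sections of the functions lab above walk through local shadowing, the global statement and unchanged globals one example at a time; the short sketch below puts the three cases side by side (the band names are placeholders chosen here for illustration):

```python
band = "AC/DC"              # global variable

def rate_local():
    band = "Deep Purple"    # local variable shadows the global one
    return band

def rate_global():
    global band             # opt in to rebinding the global variable
    band = "Queen"
    return band

print(rate_local(), band)   # Deep Purple AC/DC  -> the global is untouched
print(rate_global(), band)  # Queen Queen        -> the global was reassigned
```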
computational-class/cjc2016
code/09.04-Feature-Engineering.ipynb
mit
[ "Feature Engineering\n<!--BOOK_INFORMATION-->\n\nThis notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.\nThe text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!\n<!--NAVIGATION-->\n< Hyperparameters and Model Validation | Contents | In Depth: Naive Bayes Classification >\nnumerical data in a tidy, [n_samples, n_features] format VS. Real world. \nFeature engineering taking whatever information you have about your problem and turning it into numbers that you can use to build your feature matrix.\nIn this section, we will cover a few common examples of feature engineering tasks: \n- features for representing categorical data, \n- features for representing text, and \n- features for representing images.\n- derived features for increasing model complexity\n- imputation of missing data.\nOften this process is known as vectorization\n- as it involves converting arbitrary data into well-behaved vectors.\nCategorical Features\nOne common type of non-numerical data is categorical data.\nHousing prices, \n- \"price\" and \"rooms\"\n- \"neighborhood\" information.", "data = [\n {'price': 850000, 'rooms': 4, 'neighborhood': 'Queen Anne'},\n {'price': 700000, 'rooms': 3, 'neighborhood': 'Fremont'},\n {'price': 650000, 'rooms': 3, 'neighborhood': 'Wallingford'},\n {'price': 600000, 'rooms': 2, 'neighborhood': 'Fremont'}\n]", "You might be tempted to encode this data with a straightforward numerical mapping:", "{'Queen Anne': 1, 'Fremont': 2, 'Wallingford': 3};\n# It turns out that this is not generally a useful approach", "A fundamental assumption: numerical features reflect algebraic quantities.\n\nQueen Anne < Fremont < Wallingford\nWallingford - Queen Anne = Fremont\n\nIt does not make much sense.\nOne-hot encoding (Dummy coding) effectively creates extra columns indicating the presence or absence of a category with a value of 1 or 0, respectively.\n- When your data comes as a list of dictionaries\n - Scikit-Learn's DictVectorizer will do this for you:", "from sklearn.feature_extraction import DictVectorizer\nvec = DictVectorizer(sparse=False, dtype=int )\nvec.fit_transform(data)", "Notice\n\nthe 'neighborhood' column has been expanded into three separate columns (why not four?)\nrepresenting the three neighborhood labels, and that each row has a 1 in the column associated with its neighborhood.\n\nTo see the meaning of each column, you can inspect the feature names:", "vec.get_feature_names()", "There is one clear disadvantage of this approach: \n- if your category has many possible values, this can greatly increase the size of your dataset.\n - However, because the encoded data contains mostly zeros, a sparse output can be a very efficient solution:", "vec = DictVectorizer(sparse=True, dtype=int)\nvec.fit_transform(data)", "Many (though not yet all) of the Scikit-Learn estimators accept such sparse inputs when fitting and evaluating models. 
\ntwo additional tools that Scikit-Learn includes to support this type of encoding:\n- sklearn.preprocessing.OneHotEncoder\n- sklearn.feature_extraction.FeatureHasher \nText Features\nAnother common need in feature engineering is to convert text to a set of representative numerical values.\nMost automatic mining of social media data relies on some form of encoding the text as numbers.\n- One of the simplest methods of encoding data is by word counts: \n - you take each snippet of text, count the occurrences of each word within it, and put the results in a table.\nFor example, consider the following set of three phrases:", "sample = ['problem of evil',\n 'evil queen',\n 'horizon problem']", "For a vectorization of this data based on word count, we could construct a column representing the word \"problem,\" the word \"evil,\" the word \"horizon,\" and so on.\nWhile doing this by hand would be possible, the tedium can be avoided by using Scikit-Learn's CountVectorizer:", "from sklearn.feature_extraction.text import CountVectorizer\n\nvec = CountVectorizer()\nX = vec.fit_transform(sample)\nX", "The result is a sparse matrix recording the number of times each word appears; \nit is easier to inspect if we convert this to a DataFrame with labeled columns:", "import pandas as pd\npd.DataFrame(X.toarray(), columns=vec.get_feature_names())", "Problem: The raw word counts put too much weight on words that appear very frequently.\nterm frequency-inverse document frequency (TF–IDF) weights the word counts by a measure of how often they appear in the documents.\nThe syntax for computing these features is similar to the previous example:", "from sklearn.feature_extraction.text import TfidfVectorizer\nvec = TfidfVectorizer()\nX = vec.fit_transform(sample)\npd.DataFrame(X.toarray(), columns=vec.get_feature_names())", "For an example of using TF-IDF in a classification problem, see In Depth: Naive Bayes Classification.\nImage Features\nThe simplest approach is what we used for the digits data in Introducing Scikit-Learn: simply using the pixel values themselves.\n- But depending on the application, such approaches may not be optimal.\n- A comprehensive summary of feature extraction techniques for images in the Scikit-Image project.\nFor one example of using Scikit-Learn and Scikit-Image together, see Feature Engineering: Working with Images.\nDerived Features\nAnother useful type of feature is one that is mathematically derived from some input features.\nWe saw an example of this in Hyperparameters and Model Validation when we constructed polynomial features from our input data.\nTo convert a linear regression into a polynomial regression \n- not by changing the model\n- but by transforming the input!\n - basis function regression, and is explored further in In Depth: Linear Regression.\nFor example, this data clearly cannot be well described by a straight line:", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.array([1, 2, 3, 4, 5])\ny = np.array([4, 2, 1, 3, 7])\nplt.scatter(x, y);", "Still, we can fit a line to the data using LinearRegression and get the optimal result:", "from sklearn.linear_model import LinearRegression\nX = x[:, np.newaxis]\nmodel = LinearRegression().fit(X, y)\nyfit = model.predict(X)\nplt.scatter(x, y)\nplt.plot(x, yfit);", "We need a more sophisticated model to describe the relationship between $x$ and $y$.\n- One approach to this is to transform the data, \n - adding extra columns of features to drive more flexibility in the model.\nFor example, 
we can add polynomial features to the data this way:", "from sklearn.preprocessing import PolynomialFeatures\npoly = PolynomialFeatures(degree=3, include_bias=False)\nX2 = poly.fit_transform(X)\nprint(X2)", "The derived feature matrix has one column representing $x$, and a second column representing $x^2$, and a third column representing $x^3$.\nComputing a linear regression on this expanded input gives a much closer fit to our data:", "model = LinearRegression().fit(X2, y)\nyfit = model.predict(X2)\nplt.scatter(x, y)\nplt.plot(x, yfit);", "This idea of improving a model not by changing the model, but by transforming the inputs, is fundamental to many of the more powerful machine learning methods.\n\n\nWe explore this idea further in In Depth: Linear Regression in the context of basis function regression.\n\n\nMore generally, this is one motivational path to the powerful set of techniques known as kernel methods, which we will explore in In-Depth: Support Vector Machines.\n\n\nImputation of Missing Data\nAnother common need in feature engineering is handling of missing data.\n\nHandling Missing Data\nNaN value is used to mark missing values.\n\n\n\nFor example, we might have a dataset that looks like this:", "from numpy import nan\nX = np.array([[ nan, 0, 3 ],\n [ 3, 7, 9 ],\n [ 3, 5, 2 ],\n [ 4, nan, 6 ],\n [ 8, 8, 1 ]])\ny = np.array([14, 16, -1, 8, -5])", "When applying a typical machine learning model to such data, we will need to first replace such missing data with some appropriate fill value.\nThis is known as imputation of missing values\n- simple method, e.g., replacing missing values with the mean of the column\n- sophisticated method, e.g., using matrix completion or a robust model to handle such data\n - It tends to be very application-specific, and we won't dive into them here.\nFor a baseline imputation approach, using the mean, median, or most frequent value, Scikit-Learn provides the Imputer class:", "from sklearn.preprocessing import Imputer\nimp = Imputer(strategy='mean')\nX2 = imp.fit_transform(X)\nX2", "We see that in the resulting data, the two missing values have been replaced with the mean of the remaining values in the column. 
\nThis imputed data can then be fed directly into, for example, a LinearRegression estimator:", "model = LinearRegression().fit(X2, y)\nmodel.predict(X2)", "Feature Pipelines\nWith any of the preceding examples, it can quickly become tedious to do the transformations by hand, especially if you wish to string together multiple steps.\nFor example, we might want a processing pipeline that looks something like this:\n\nImpute missing values using the mean\nTransform features to quadratic\nFit a linear regression\n\nTo streamline this type of processing pipeline, Scikit-Learn provides a Pipeline object, which can be used as follows:", "from sklearn.pipeline import make_pipeline\n\nmodel = make_pipeline(Imputer(strategy='mean'),\n PolynomialFeatures(degree=2),\n LinearRegression())", "This pipeline looks and acts like a standard Scikit-Learn object, and will apply all the specified steps to any input data.", "model.fit(X, y) # X with missing values, from above\nprint(y)\nprint(model.predict(X))", "All the steps of the model are applied automatically.\nNotice that for the simplicity of this demonstration, we've applied the model to the data it was trained on; \n- this is why it was able to perfectly predict the result (refer back to Hyperparameters and Model Validation for further discussion of this).\nFor some examples of Scikit-Learn pipelines in action, see the following section on naive Bayes classification, as well as In Depth: Linear Regression, and In-Depth: Support Vector Machines.\n<!--NAVIGATION-->\n< Hyperparameters and Model Validation | Contents | In Depth: Naive Bayes Classification >" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
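The imputation and pipeline cells in the feature-engineering notebook above rely on sklearn.preprocessing.Imputer, which has since been removed from scikit-learn. A version-tolerant sketch of the same final pipeline, assuming a recent scikit-learn where sklearn.impute.SimpleImputer plays that role, looks like this:

```python
import numpy as np
from sklearn.impute import SimpleImputer            # replaces the removed Imputer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

X = np.array([[np.nan, 0, 3],
              [3, 7, 9],
              [3, 5, 2],
              [4, np.nan, 6],
              [8, 8, 1]])
y = np.array([14, 16, -1, 8, -5])

model = make_pipeline(SimpleImputer(strategy='mean'),
                      PolynomialFeatures(degree=2),
                      LinearRegression())
model.fit(X, y)
print(y)
print(model.predict(X))
```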
atlury/deep-opencl
DL0110EN/3.3.2lab_predicting _MNIST_using_Softmax.ipynb
lgpl-3.0
[ "<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n <a href=\"http://cocl.us/pytorch_link_top\"><img src = \"http://cocl.us/Pytorch_top\" width = 950, align = \"center\"></a>\n\n<img src = \"https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png\" width = 200, align = \"center\">\n\n\n<h1 align=center><font size = 5>Softmax Classifer </font></h1> \n\n\n# Table of Contents\nIn this lab, you will use a single layer Softmax to classify handwritten digits from the MNIST database.\n\n<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n<li><a href=\"#ref0\">Helper functions</a></li>\n\n<li><a href=\"#ref1\">Prepare Data</a></li>\n<li><a href=\"#ref2\">Softmax Classifier</a></li>\n<li><a href=\"#ref3\">Define Softmax, Criterion Function, Optimizer, and Train the Model</a></li>\n<li><a href=\"#ref4\">Analyze Results</a></li>\n\n<br>\n<p></p>\nEstimated Time Needed: <strong>25 min</strong>\n</div>\n\n<hr>\n\n<a id=\"ref0\"></a>\n<h2 align=center>Helper functions </h2>", "!conda install -y torchvision\nimport torch \nimport torch.nn as nn\nimport torchvision.transforms as transforms\nimport torchvision.datasets as dsets\nimport matplotlib.pylab as plt\nimport numpy as np", "Use the following function to plot out the parameters of the Softmax function:", "def PlotParameters(model): \n W=model.state_dict() ['linear.weight'].data\n w_min=W.min().item()\n w_max=W.max().item()\n fig, axes = plt.subplots(2, 5)\n fig.subplots_adjust(hspace=0.01, wspace=0.1)\n for i,ax in enumerate(axes.flat):\n if i<10:\n # Set the label for the sub-plot.\n ax.set_xlabel( \"class: {0}\".format(i))\n\n # Plot the image.\n ax.imshow(W[i,:].view(28,28), vmin=w_min, vmax=w_max, cmap='seismic')\n\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "Use the following function to visualize the data:", "def show_data(data_sample):\n\n plt.imshow(data_sample[0].numpy().reshape(28,28),cmap='gray')\n #print(data_sample[1].item())\n plt.title('y= '+ str(data_sample[1].item()))", "<a id=\"ref1\"></a>\n<h2 align=center>Prepare Data </h2>\n\nLoad the training dataset by setting the parameters <code>train</code> to <code>True</code> and convert it to a tensor by placing a transform object in the argument <code>transform</code>.", "train_dataset=dsets.MNIST(root='./data', train=True, download=True, transform=transforms.ToTensor())\ntrain_dataset", "Load the testing dataset by setting the parameters train <code>False</code> and convert it to a tensor by placing a transform object in the argument <code>transform</code>.", "validation_dataset=dsets.MNIST(root='./data', train=False, download=True, transform=transforms.ToTensor())\nvalidation_dataset", "You can see that the data type is long:", "train_dataset[0][1].type()", "Data Visualization\nEach element in the rectangular tensor corresponds to a number that represents a pixel intensity as demonstrated by the following image:\n<img src = \"https://ibm.box.com/shared/static/7024mnculm8w2oh0080y71cpa48cib2k.png\" width = 550, align = \"center\"></a>\nPrint out the third label:", "train_dataset[3][1]", "Plot the 3rd sample:", "show_data(train_dataset[3])", "You see that it is a 1. 
Now, plot the second sample:", "show_data(train_dataset[2])", "<a id=\"ref3\"></a>\nBuild a Softmax Classifer\nBuild a Softmax classifier class:", "class SoftMax(nn.Module):\n def __init__(self,input_size,output_size):\n super(SoftMax,self).__init__()\n self.linear=nn.Linear(input_size,output_size)\n def forward(self,x):\n z=self.linear(x)\n return z", "The Softmax function requires vector inputs. Note that the vector shape is 28x28.", "train_dataset[0][0].shape", "Flatten the tensor as shown in this image: \n<img src = \"https://ibm.box.com/shared/static/0cjl5inks3d8ay0sckgywowc3hw2j1sa.gif\" width = 550, align = \"center\"></a> \nThe size of the tensor is now 784.\n<img src = \"https://ibm.box.com/shared/static/lhezcvgm82gtdewooueopxp98ztq2pbv.png\" width = 550, align = \"center\"></a>\nSet the input size and output size:", "input_dim=28*28\noutput_dim=10\ninput_dim", "<a id=\"ref3\"></a>\n<h2> Define the Softmax Classifier, Criterion Function, Optimizer, and Train the Model</h2>", "model=SoftMax(input_dim,output_dim)\nmodel", "View the size of the model parameters:", "print('W:',list(model.parameters())[0].size())\nprint('b',list(model.parameters())[1].size())", "You can cover the model parameters for each class to a rectangular grid: \n<a> <img src = \"https://ibm.box.com/shared/static/9cuuwsvhwygbgoogmg464oht1o8ubkg2.gif\" width = 550, align = \"center\"></a> \nPlot the model parameters for each class:", "PlotParameters(model)", "Loss function:", "criterion=nn.CrossEntropyLoss()", "Optimizer class:", "learning_rate=0.1\noptimizer=torch.optim.SGD(model.parameters(), lr=learning_rate)", "Define the dataset loader:", "\ntrain_loader=torch.utils.data.DataLoader(dataset=train_dataset,batch_size=100)\nvalidation_loader=torch.utils.data.DataLoader(dataset=validation_dataset,batch_size=5000)", "Train the model and determine validation accuracy (should take a few minutes):", "n_epochs=10\nloss_list=[]\naccuracy_list=[]\nN_test=len(validation_dataset)\n#n_epochs\nfor epoch in range(n_epochs):\n for x, y in train_loader:\n \n\n #clear gradient \n optimizer.zero_grad()\n #make a prediction \n z=model(x.view(-1,28*28))\n # calculate loss \n loss=criterion(z,y)\n # calculate gradients of parameters \n loss.backward()\n # update parameters \n optimizer.step()\n \n \n \n correct=0\n #perform a prediction on the validation data \n for x_test, y_test in validation_loader:\n\n z=model(x_test.view(-1,28*28))\n _,yhat=torch.max(z.data,1)\n\n correct+=(yhat==y_test).sum().item()\n \n \n accuracy=correct/N_test\n\n accuracy_list.append(accuracy)\n \n loss_list.append(loss.data)\n accuracy_list.append(accuracy)", "<a id=\"ref3\"></a>\n<h2 align=center>Analyze Results</h2>\n\nPlot the loss and accuracy on the validation data:", "fig, ax1 = plt.subplots()\ncolor = 'tab:red'\nax1.plot(loss_list,color=color)\nax1.set_xlabel('epoch',color=color)\nax1.set_ylabel('total loss',color=color)\nax1.tick_params(axis='y', color=color)\n \nax2 = ax1.twinx() \ncolor = 'tab:blue'\nax2.set_ylabel('accuracy', color=color) \nax2.plot( accuracy_list, color=color)\nax2.tick_params(axis='y', labelcolor=color)\nfig.tight_layout()", "View the results of the parameters for each class after the training. 
You can see that they look like the corresponding numbers.", "PlotParameters(model)", "Plot the first five misclassified samples:", "count=0\nfor x,y in validation_dataset:\n\n z=model(x.reshape(-1,28*28))\n _,yhat=torch.max(z,1)\n if yhat!=y:\n show_data((x,y))\n\n plt.show()\n print(\"yhat:\",yhat)\n count+=1\n if count>=5:\n break \n ", "About the Authors:\nJoseph Santarcangelo has a PhD in Electrical Engineering. His research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. \nOther contributors: Michelle Carey, Mavis Zhou \n<hr>\n\nCopyright &copy; 2018 cognitiveclass.ai. This notebook and its source code are released under the terms of the MIT License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
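One detail worth flagging in the Softmax notebook above: the validation pass runs inside the training loop without torch.no_grad(), so autograd tracks computations that are never used. A sketch of just the evaluation step, reusing that notebook's model, validation_loader and N_test, is:

```python
# Sketch of the evaluation step only; model, validation_loader and N_test
# are the objects defined in the notebook above.
import torch

def evaluate(model, loader, n_samples):
    correct = 0
    with torch.no_grad():                    # skip autograd bookkeeping
        for x_test, y_test in loader:
            z = model(x_test.view(-1, 28 * 28))
            _, yhat = torch.max(z, 1)
            correct += (yhat == y_test).sum().item()
    return correct / n_samples

# accuracy = evaluate(model, validation_loader, N_test)
```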
batfish/pybatfish
jupyter_notebooks/Introduction to Route Analysis.ipynb
apache-2.0
[ "Introduction to Route Computation and Analysis using Batfish\nNetwork engineers routinely need to validate routing and forwarding in the network. They often do that by connecting to multiple network devices and executing a series of show route commands. This distributed debugging is highly complex even in a moderately-sized network. Batfish makes this task extremely simple by providing an easy-to-query, centralized view of routing tables in the network. \nIn this notebook, we will look at how you can extract routing information from Batfish.\nCheck out a video demo of this notebook here.", "# Import packages\n%run startup.py\nbf = Session(host=\"localhost\")", "Initializing the Network and Snapshot\nSNAPSHOT_PATH below can be updated to point to a custom snapshot directory, see the Batfish instructions for how to package data for analysis.<br>\nMore example networks are available in the networks folder of the Batfish repository.", "# Initialize a network and snapshot\nNETWORK_NAME = \"example_network\"\nSNAPSHOT_NAME = \"example_snapshot\"\n\nSNAPSHOT_PATH = \"networks/example\"\n\nbf.set_network(NETWORK_NAME)\nbf.init_snapshot(SNAPSHOT_PATH, name=SNAPSHOT_NAME, overwrite=True)", "The network snapshot that we initialized above is illustrated below. You can download/view devices' configuration files here.\n\nAll of the information we will show you in this notebook is dynamically computed by Batfish based on the configuration files for the network devices.\nView Routing Tables for ALL devices and ALL VRFs\nBatfish makes all routing tables in the network easily accessible. Let's take a look at how you can retrieve the specific information you want.", "# Get routing tables for all nodes and VRFs\nroutes_all = bf.q.routes().answer().frame()", "We are not going to print this table as it has a large number of entries. \nView Routing Tables for default VRF on AS1 border routers\nThere are 2 ways that we can get the desired subset of data: \nOption 1) Only request that information from Batfish by passing in parameters into the routes() question. 
This is useful to do when you need to reduce the amount of data being returned, but is limited to regex filtering based on VRF, Node, Protocol and Network.\nOption 2) Filter the output of the routes() question using the Pandas APIs.", "?bf.q.routes\n\n# Get the routing table for the 'default' VRF on border routers of as1\n# using BF parameters\nroutes_as1border = bf.q.routes(nodes=\"/as1border/\", vrfs=\"default\").answer().frame()\n\n# Get the routing table for the 'default' VRF on border routers of as1\n# using Pandas filtering\nroutes_as1border = routes_all[(routes_all['Node'].str.contains('as1border')) & (routes_all['VRF'] == 'default')]\nroutes_as1border", "View BGP learnt routes for default VRF on AS1 border routers", "# Getting BGP routes in the routing table for the 'default' VRF on border routers of as1\n# using BF parameters\nroutes_as1border_bgp = bf.q.routes(nodes=\"/as1border/\", vrfs=\"default\", protocols=\"bgp\").answer().frame()\n\n# Geting BGP routes in the routing table for the 'default' VRF on border routers of as1\n# using Pandas filtering\nroutes_as1border_bgp = routes_all[(routes_all['Node'].str.contains('as1border')) & (routes_all['VRF'] == 'default') & (routes_all['Protocol'] == 'bgp')]\nroutes_as1border_bgp", "View BGP learnt routes for ALL VRFs on ALL routers with Metric >=50\nWe cannot pass in metric as a parameter to Batfish, so this task is best handled with the Pandas API.", "routes_filtered = routes_all[(routes_all['Protocol'] == 'bgp') & (routes_all['Metric'] >= 50)]\nroutes_filtered", "View the routing entries for network 1.0.2.0/24 on ALL routers in ALL VRFs", "# grab the route table entry for network 1.0.2.0/24 from all routers in all VRFs\n# using BF parameters\nroutes_filtered = bf.q.routes(network=\"1.0.2.0/24\").answer().frame()\n\n# grab the route table entry for network 1.0.2.0/24 from all routers in all VRFs\n# using Pandas filtering\nroutes_filtered = routes_all[routes_all['Network'] == \"1.0.2.0/24\"]\nroutes_filtered", "Using Panda's filtering it is easy to retrieve the list of nodes which have the network in the routing table for at least 1 VRF. This type of processing should always be done using the Pandas APIs.", "# Get the list of nodes that have the network 1.0.2.0/24 in at least 1 VRF\n# the .unique function removes duplicate entries that would have been returned if the network was in multiple VRFs on a node or there were\n# multiple route entries for the network (ECMP)\nprint(sorted(routes_filtered[\"Node\"].unique()))", "Now we will retrieve the list of nodes that do NOT have this prefix in their routing table. This is easy to do with the Pandas groupby and filter functions.", "# Group all routes by Node and filter for those that don't have '1.0.2.0/24'\nroutes_filtered = routes_all.groupby('Node').filter(lambda x: all(x['Network'] != '1.0.2.0/24'))\n\n# Get the unique node names and sort the list\nprint(sorted(routes_filtered[\"Node\"].unique()))", "The only devices that do not have a route to 1.0.2.0/24 are the 2 hosts in the snapshot. This is expected, as they should just have a default route. Let's verify that.", "routes_all[routes_all['Node'].str.contains('host')]", "With Batfish and Pandas you can easily retrieve the exact information you are looking for from the routing tables on ANY or ALL network devices for ANY or ALL VRFs.\nThis concludes the notebook. 
To recap, in this notebook we covered the foundational tasks for route analysis:\n\nHow to get routes at all nodes in the network or only at a subset of them\nHow to retrieve routing entries that match a specific protocol or metric\nHow to find which nodes have an entry for a prefix or which ones do not\n\nWe hope you found this notebook useful and informative. Future notebooks will dive into more advanced topics like path analysis, debugging ACLs and firewall rules, validating routing policy, etc., so stay tuned! \nWant to know more?\nReach out to us through Slack or GitHub to learn more, or send feedback." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jorisvandenbossche/DS-python-data-analysis
notebooks/python_recap/05-numpy.ipynb
bsd-3-clause
[ "Python the basics: numpy\n\nDS Data manipulation, analysis and visualization in Python\nMay/June, 2021\n© 2021, Joris Van den Bossche and Stijn Van Hoey (&#106;&#111;&#114;&#105;&#115;&#118;&#97;&#110;&#100;&#101;&#110;&#98;&#111;&#115;&#115;&#99;&#104;&#101;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;, &#115;&#116;&#105;&#106;&#110;&#118;&#97;&#110;&#104;&#111;&#101;&#121;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;). Licensed under CC BY 4.0 Creative Commons\n\n\n\nThis notebook is largely based on material of the Python Scientific Lecture Notes (https://scipy-lectures.github.io/), adapted with some exercises.\n\nNumpy - multidimensional data arrays\nIntroduction\nNumPy is the fundamental package for scientific computing with Python. It contains among other things:\n\na powerful N-dimensional array/vector/matrix object\nsophisticated (broadcasting) functions\nfunction implementation in C/Fortran assuring good performance if vectorized\ntools for integrating C/C++ and Fortran code\nuseful linear algebra, Fourier transform, and random number capabilities\n\nAlso known as array oriented computing. The recommended convention to import numpy is:", "import numpy as np", "In the numpy package the terminology used for vectors, matrices and higher-dimensional data sets is array. Let's already load some other modules too.", "import matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_style('darkgrid')", "Showcases\nRoll the dice\nYou like to play boardgames, but you want to better know you're chances of rolling a certain combination with 2 dices:", "def mydices(throws):\n \"\"\"\n Function to create the distrrbution of the sum of two dices.\n \n Parameters\n ----------\n throws : int\n Number of throws with the dices\n \"\"\"\n stone1 = np.random.uniform(1, 6, throws) \n stone2 = np.random.uniform(1, 6, throws) \n total = stone1 + stone2\n return plt.hist(total, bins=20) # We use matplotlib to show a histogram\n\nmydices(100) # test this out with multiple options", "Cartesian2Polar\nConsider a random 10x2 matrix representing cartesian coordinates, how to convert them to polar coordinates", "# random numbers (X, Y in 2 columns)\nZ = np.random.random((10,2))\nX, Y = Z[:,0], Z[:,1]\n\n# distance\nR = np.sqrt(X**2 + Y**2)\n# angle\nT = np.arctan2(Y, X) # Array of angles in radians\nTdegree = T*180/(np.pi) # If you like degrees more\n\n# NEXT PART (now for illustration)\n#plot the cartesian coordinates\nplt.figure(figsize=(14, 6))\nax1 = plt.subplot(121)\nax1.plot(Z[:,0], Z[:,1], 'o')\nax1.set_title(\"Cartesian\")\n#plot the polar coorsidnates\nax2 = plt.subplot(122, polar=True)\nax2.plot(T, R, 'o')\nax2.set_title(\"Polar\")", "Speed\nMemory-efficient container that provides fast numerical operations:", "L = range(1000)\n%timeit [i**2 for i in L]\n\na = np.arange(1000)\n%timeit a**2\n\n#More information about array?\nnp.array?", "Creating numpy arrays\nThere are a number of ways to initialize new numpy arrays, for example from\n\na Python list or tuples\nusing functions that are dedicated to generating numpy arrays, such as arange, linspace, etc.\nreading data from files\n\nFrom lists\nFor example, to create new vector and matrix arrays from Python lists we can use the numpy.array function.", "# a vector: the argument to the array function is a Python list\nV = np.array([1, 2, 3, 4])\nV\n\n# a matrix: the argument to the array function is a nested Python list\nM = np.array([[1, 2], [3, 4]])\nM", "The v and M objects are both of the type ndarray that the numpy module 
provides.", "type(V), type(M)", "The difference between the v and M arrays is only their shapes. We can get information about the shape of an array by using the ndarray.shape property.", "V.shape\n\nM.shape", "The number of elements in the array is available through the ndarray.size property:", "M.size", "Equivalently, we could use the function numpy.shape and numpy.size", "np.shape(M)\n\nnp.size(M)", "Using the dtype (data type) property of an ndarray, we can see what type the data of an array has (always fixed for each array, cfr. Matlab):", "M.dtype", "We get an error if we try to assign a value of the wrong type to an element in a numpy array:", "#M[0,0] = \"hello\" #uncomment this cell\n\nf = np.array(['Bonjour', 'Hello', 'Hallo',])\nf", "If we want, we can explicitly define the type of the array data when we create it, using the dtype keyword argument:", "M = np.array([[1, 2], [3, 4]], dtype=complex) #np.float64, np.float, np.int64\n\nprint(M, '\\n', M.dtype)", "Since Numpy arrays are statically typed, the type of an array does not change once created. But we can explicitly cast an array of some type to another using the astype functions (see also the similar asarray function). This always create a new array of new type:", "M = np.array([[1, 2], [3, 4]], dtype=float)\nM2 = M.astype(int)\nM2", "Common type that can be used with dtype are: int, float, complex, bool, object, etc.\nWe can also explicitly define the bit size of the data types, for example: int64, int16, float64, float128, complex128.\nHigher order is also possible:", "C = np.array([[[1], [2]], [[3], [4]]])\nprint(C.shape)\nC\n\nC.ndim # number of dimensions", "Using array-generating functions\nFor larger arrays it is inpractical to initialize the data manually, using explicit python lists. Instead we can use one of the many functions in numpy that generates arrays of different forms. Some of the more common are:\narange", "# create a range\nx = np.arange(0, 10, 1) # arguments: start, stop, step\nx\n\nx = np.arange(-1, 1, 0.1)\nx", "linspace and logspace", "# using linspace, both end points ARE included\nnp.linspace(0, 10, 25)\n\nnp.logspace(0, 10, 10, base=np.e)\n\nplt.plot(np.logspace(0, 10, 10, base=np.e), np.random.random(10), 'o')\nplt.xscale('log')", "random data", "# uniform random numbers in [0,1]\nnp.random.rand(5,5)\n\n# standard normal distributed random numbers\nnp.random.randn(5,5)", "zeros and ones", "np.zeros((3,3))\n\nnp.ones((3,3))", "<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Create a vector with values ranging from 10 to 49 with steps of 1\n</div>", "np.arange(10, 50, 1)", "<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Create a 3x3 identity matrix (look into docs!)\n</div>", "np.identity(3)\n\nnp.eye(3)", "<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Create a 3x3x3 array with random values\n</div>", "np.random.random((3, 3, 3))", "File I/O\nNumpy is capable of reading and writing text and binary formats. However, since most data-sources are providing information in a format with headings, different dtypes,... we will use for reading/writing of textfiles the power of Pandas.\nComma-separated values (CSV)\nWriting to a csvfile with numpy is done with the savetxt-command:", "a = np.random.random(40).reshape((20, 2))\nnp.savetxt(\"random-matrix.csv\", a, delimiter=\",\")", "To read data from such file into Numpy arrays we can use the numpy.genfromtxt function. 
For example,", "a2 = np.genfromtxt(\"random-matrix.csv\", delimiter=',')\na2", "Numpy's native file format\nUseful when storing and reading back numpy array data, since binary. Use the functions numpy.save and numpy.load:", "np.save(\"random-matrix.npy\", a)\n\n!file random-matrix.npy\n\nnp.load(\"random-matrix.npy\")", "Manipulating arrays\nIndexing\n<center>MATLAB-USERS:<br> PYTHON STARTS AT 0!\nWe can index elements in an array using the square bracket and indices:", "V\n\n# V is a vector, and has only one dimension, taking one index\nV[0]\n\nV[-1:] #-2, -2:,...\n\n# a is a matrix, or a 2 dimensional array, taking two indices \n# the first dimension corresponds to rows, the second to columns.\na[1, 1]", "If we omit an index of a multidimensional array it returns the whole row (or, in general, a N-1 dimensional array)", "a[1]", "The same thing can be achieved with using : instead of an index:", "a[1, :] # row 1\n\na[:, 1] # column 1", "We can assign new values to elements in an array using indexing:", "a[0, 0] = 1\na[:, 1] = -1\na", "Index slicing\nIndex slicing is the technical name for the syntax M[lower:upper:step] to extract part of an array:", "A = np.array([1, 2, 3, 4, 5])\nA\n\nA[1:3]", "Array slices are mutable: if they are assigned a new value the original array from which the slice was extracted is modified:", "A[1:3] = [-2,-3]\n\nA", "We can omit any of the three parameters in M[lower:upper:step]:", "A[::] # lower, upper, step all take the default values\n\nA[::2] # step is 2, lower and upper defaults to the beginning and end of the array\n\nA[:3] # first three elements\n\nA[3:] # elements from index 3\n\nA[-3:] # the last three elements", "<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Create a null vector of size 10 and adapt it in order to make the fifth element a value 1\n</div>", "vec = np.zeros(10)\nvec[4] = 1.\nvec", "Fancy indexing\nFancy indexing is the name for when an array or list is used in-place of an index:", "a = np.arange(0, 100, 10)\na[[2, 3, 2, 4, 2]]", "In more dimensions:", "A = np.arange(25).reshape(5,5)\nA\n\nrow_indices = [1, 2, 3]\nA[row_indices]\n\ncol_indices = [1, 2, -1] # remember, index -1 means the last element\nA[row_indices, col_indices]", "We can also index masks: If the index mask is an Numpy array of with data type bool, then an element is selected (True) or not (False) depending on the value of the index mask at the position each element:", "B = np.array([n for n in range(5)]) #range is pure python => Exercise: Make this shorter with pur numpy\nB\n\nrow_mask = np.array([True, False, True, False, False])\nB[row_mask]\n\n# same thing\nrow_mask = np.array([1,0,1,0,0], dtype=bool)\nB[row_mask]", "This feature is very useful to conditionally select elements from an array, using for example comparison operators:", "AR = np.random.randint(0, 20, 15)\nAR\n\nAR%3 == 0\n\nextract_from_AR = AR[AR%3 == 0]\nextract_from_AR\n\nx = np.arange(0, 10, 0.5)\nx\n\nmask = (5 < x) * (x < 7.5) # We actually multiply two masks here (boolean 0 and 1 values)\nmask\n\nx[mask]", "<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Swap the first two rows of the 2-D array `A`?\n</div>", "A = np.arange(25).reshape(5,5)\nA\n\n#SWAP\nA[[0, 1]] = A[[1, 0]]\nA", "<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Change all even numbers of `AR` into zero-values.\n</div>", "AR = np.random.randint(0, 20, 15)\nAR\n\nAR[AR%2==0] = 0.\nAR", "<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Change all even positions of matrix `AR` into zero-values\n</div>", 
"AR = np.random.randint(1, 20, 15)\nAR\n\nAR[1::2] = 0\nAR", "Some more extraction functions\nwhere function to know the indices of something", "x = np.arange(0, 10, 0.5)\nnp.where(x>5.)", "With the diag function we can also extract the diagonal and subdiagonals of an array:", "np.diag(A)", "The take function is similar to fancy indexing described above:", "x.take([1, 5])", "Linear algebra\nVectorizing code is the key to writing efficient numerical calculation with Python/Numpy. That means that as much as possible of a program should be formulated in terms of matrix and vector operations.\nScalar-array operations\nWe can use the usual arithmetic operators to multiply, add, subtract, and divide arrays with scalar numbers.", "v1 = np.arange(0, 5)\n\nv1 * 2\n\nv1 + 2\n\nA = np.arange(25).reshape(5,5)\nA * 2\n\nnp.sin(A) #np.log(A), np.arctan,...", "Element-wise array-array operations\nWhen we add, subtract, multiply and divide arrays with each other, the default behaviour is element-wise operations:", "A * A # element-wise multiplication\n\nv1 * v1", "If we multiply arrays with compatible shapes, we get an element-wise multiplication of each row:", "A.shape, v1.shape\n\nA * v1", "Consider the speed difference with pure python:", "a = np.arange(10000)\n%timeit a + 1 \n\nl = range(10000)\n%timeit [i+1 for i in l] \n\n#logical operators:\na1 = np.arange(0, 5, 1)\na2 = np.arange(5, 0, -1)\na1>a2 # >, <=,...\n\n# cfr. \nnp.all(a1>a2) # any", "Basic operations on numpy arrays (addition, etc.) are elementwise. Nevertheless, It’s also possible to do operations on arrays of different sizes if Numpy can transform these arrays so that they all have the same size: this conversion is called broadcasting.", "A, v1\n\nA*v1\n\nx, y = np.arange(5), np.arange(5).reshape((5, 1)) # a row and a column array\n\ndistance = np.sqrt(x ** 2 + y ** 2)\ndistance\n\n#let's put this in a figure:\nplt.pcolor(distance) \nplt.colorbar() ", "Matrix algebra\nWhat about matrix mutiplication? There are two ways. We can either use the dot function, which applies a matrix-matrix, matrix-vector, or inner vector multiplication to its two arguments:", "np.dot(A, A)\n\nnp.dot(A, v1) #check the difference with A*v1 !!\n\nnp.dot(v1, v1)", "Alternatively, we can cast the array objects to the type matrix. This changes the behavior of the standard arithmetic operators +, -, * to use matrix algebra. You can also get inverse of matrices, determinant,... \nWe won't go deeper here on pure matrix calculation, but for more information, check the related functions: inner, outer, cross, kron, tensordot. Try for example help(kron).\nCalculations\nOften it is useful to store datasets in Numpy arrays. 
Numpy provides a number of functions to calculate statistics of datasets in arrays.", "a = np.random.random(40)", "Different frequently used operations can be done:", "print ('Mean value is', np.mean(a))\nprint ('Median value is', np.median(a))\nprint ('Std is', np.std(a))\nprint ('Variance is', np.var(a))\nprint ('Min is', a.min())\nprint ('Element of minimum value is', a.argmin())\nprint ('Max is', a.max())\nprint ('Sum is', np.sum(a))\nprint ('Prod', np.prod(a))\nprint ('Cumsum is', np.cumsum(a)[-1])\nprint ('CumProd of 5 first elements is', np.cumprod(a)[4])\nprint ('Unique values in this array are:', np.unique(np.random.randint(1,6,10)))\nprint ('85% Percentile value is: ', np.percentile(a, 85))\n\na = np.random.random(40)\nprint(a.argsort())\na.sort() #sorts in place!\nprint(a.argsort())", "Calculations with higher-dimensional data\nWhen functions such as min, max, etc., is applied to a multidimensional arrays, it is sometimes useful to apply the calculation to the entire array, and sometimes only on a row or column basis. Using the axis argument we can specify how these functions should behave:", "m = np.random.rand(3,3)\nm\n\n# global max\nm.max()\n\n# max in each column\nm.max(axis=0)\n\n# max in each row\nm.max(axis=1)", "Many other functions and methods in the array and matrix classes accept the same (optional) axis keyword argument.\n<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Rescale the 5x5 matrix `Z` to values between 0 and 1:\n</div>", "Z = np.random.uniform(5.0, 15.0, (5,5))\nZ\n\n# RESCALE:\n(Z - Z.min())/(Z.max() - Z.min())", "Reshaping, resizing and stacking arrays\nThe shape of an Numpy array can be modified without copying the underlaying data, which makes it a fast operation even for large arrays.", "A = np.arange(25).reshape(5,5)\nn, m = A.shape\nB = A.reshape((1,n*m))\nB", "We can also use the function flatten to make a higher-dimensional array into a vector. But this function create a copy of the data (see next)", "B = A.flatten()\nB", "Stacking and repeating arrays\nUsing function repeat, tile, vstack, hstack, and concatenate we can create larger vectors and matrices from smaller ones:\ntile and repeat", "a = np.array([[1, 2], [3, 4]])\n\n# repeat each element 3 times\nnp.repeat(a, 3)\n\n# tile the matrix 3 times \nnp.tile(a, 3)", "concatenate", "b = np.array([[5, 6]])\n\nnp.concatenate((a, b), axis=0)\n\nnp.concatenate((a, b.T), axis=1)", "hstack and vstack", "np.vstack((a,b))\n\nnp.hstack((a,b.T))", "IMPORTANT!: View and Copy\nTo achieve high performance, assignments in Python usually do not copy the underlaying objects. 
This is important for example when objects are passed between functions, to avoid an excessive amount of memory copying when it is not necessary (techincal term: pass by reference).", "A = np.array([[1, 2], [3, 4]])\n\nA\n\n# now B is referring to the same array data as A \nB = A \n\n# changing B affects A\nB[0,0] = 10\n\nB\n\nA", "If we want to avoid this behavior, so that when we get a new completely independent object B copied from A, then we need to do a so-called \"deep copy\" using the function copy:", "B = np.copy(A)\n\n# now, if we modify B, A is not affected\nB[0,0] = -5\n\nB\n\nA", "Also reshape function just takes a view:", "arr = np.arange(8)\narr_view = arr.reshape(2, 4)\n\nprint('Before\\n', arr_view)\narr[0] = 1000\nprint('After\\n', arr_view)\n\narr.flatten()[2] = 10 #Flatten creates a copy!\n\narr", "Using arrays in conditions\nWhen using arrays in conditions in for example if statements and other boolean expressions, one need to use one of any or all, which requires that any or all elements in the array evalutes to True:", "M\n\nif (M > 5).any():\n print(\"at least one element in M is larger than 5\")\nelse:\n print(\"no element in M is larger than 5\")\n\nif (M > 5).all():\n print(\"all elements in M are larger than 5\")\nelse:\n print(\"all elements in M are not larger than 5\")", "Some extra applications:\nPolynomial fit", "b_data = np.genfromtxt(\"./data/bogota_part_dataset.csv\", skip_header=3, delimiter=',')\nplt.scatter(b_data[:,2], b_data[:,3])\n\nx, y = b_data[:,1], b_data[:,3] \nt = np.polyfit(x, y, 2) # fit a 2nd degree polynomial to the data, result is x**2 + 2x + 3\nt\n\nx.sort()\nplt.plot(x, y, 'o')\nplt.plot(x, t[0]*x**2 + t[1]*x + t[2], '-')", "---------------------_\n<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Make a fourth order fit between the fourth and fifth column of `b_data`\n</div>", "x, y = b_data[:,3], b_data[:,4] \nt = np.polyfit(x, y, 4) # fit a 2nd degree polynomial to the data, result is x**2 + 2x + 3\nt\nx.sort()\nplt.plot(x, y, 'o')\nplt.plot(x, t[0]*x**4 + t[1]*x**3 + t[2]*x**2 + t[3]*x +t[4], '-')", "-------------------__\nHowever, when doing some kind of regression, we would like to have more information about the fit characterstics automatically. Statsmodels is a library that provides this functionality, we will later come back to this type of regression problem.\nMoving average function", "def moving_average(a, n=3) :\n ret = np.cumsum(a, dtype=float)\n ret[n:] = ret[n:] - ret[:-n]\n return ret[n - 1:] / n\n\nprint(moving_average(b_data , n=3))", "However, the latter fuction implementation is something we would expect from a good data-analysis library to be implemented already. \nThe perfect timing for Python Pandas!\nREMEMBER!\n\n\nKnow how to create arrays : array, arange, ones, zeros,....\n\n\nKnow the shape of the array with array.shape, then use slicing to obtain different views of the array: array[::2], etc. Adjust the shape of the array using reshape or flatten it.\n\n\nObtain a subset of the elements of an array and/or modify their values with masks\n\n\nKnow miscellaneous operations on arrays, such as finding the mean or max (array.max(), array.mean()). No need to retain everything, but have the reflex to search in the documentation (online docs, help(), lookfor())!!\n\n\nFor advanced use: master the indexing with arrays of integers, as well as broadcasting. 
Know more Numpy functions to handle various array operations.\n\n\nFurther reading\n\nhttp://numpy.scipy.org\nhttp://scipy.org/Tentative_NumPy_Tutorial\nhttp://scipy.org/NumPy_for_Matlab_Users - A Numpy guide for MATLAB users.\nhttp://wiki.scipy.org/Numpy_Example_List\nhttp://wiki.scipy.org/Cookbook\n\nAcknowledgments and Material\n\nJ.R. Johansson (robert@riken.jp) http://dml.riken.jp/~rob/\nhttp://scipy-lectures.github.io/intro/numpy/index.html\nhttp://www.labri.fr/perso/nrougier/teaching/numpy.100/index.html" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/hub
examples/colab/yamnet.ipynb
apache-2.0
[ "Copyright 2020 The TensorFlow Hub Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");", "#@title Copyright 2020 The TensorFlow Hub Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================", "<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/hub/tutorials/yamnet\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/yamnet.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/hub/blob/master/examples/colab/yamnet.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/yamnet.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n <td>\n <a href=\"https://tfhub.dev/google/yamnet/1\"><img src=\"https://www.tensorflow.org/images/hub_logo_32px.png\" />See TF Hub model</a>\n </td>\n</table>\n\nSound classification with YAMNet\nYAMNet is a deep net that predicts 521 audio event classes from the AudioSet-YouTube corpus it was trained on. 
It employs the\nMobilenet_v1 depthwise-separable\nconvolution architecture.", "import tensorflow as tf\nimport tensorflow_hub as hub\nimport numpy as np\nimport csv\n\nimport matplotlib.pyplot as plt\nfrom IPython.display import Audio\nfrom scipy.io import wavfile", "Load the Model from TensorFlow Hub.\nNote: to read the documentation just follow the model's url", "# Load the model.\nmodel = hub.load('https://tfhub.dev/google/yamnet/1')", "The labels file will be loaded from the models assets and is present at model.class_map_path().\nYou will load it on the class_names variable.", "# Find the name of the class with the top score when mean-aggregated across frames.\ndef class_names_from_csv(class_map_csv_text):\n \"\"\"Returns list of class names corresponding to score vector.\"\"\"\n class_names = []\n with tf.io.gfile.GFile(class_map_csv_text) as csvfile:\n reader = csv.DictReader(csvfile)\n for row in reader:\n class_names.append(row['display_name'])\n\n return class_names\n\nclass_map_path = model.class_map_path().numpy()\nclass_names = class_names_from_csv(class_map_path)", "Add a method to verify and convert a loaded audio is on the proper sample_rate (16K), otherwise it would affect the model's results.", "def ensure_sample_rate(original_sample_rate, waveform,\n desired_sample_rate=16000):\n \"\"\"Resample waveform if required.\"\"\"\n if original_sample_rate != desired_sample_rate:\n desired_length = int(round(float(len(waveform)) /\n original_sample_rate * desired_sample_rate))\n waveform = scipy.signal.resample(waveform, desired_length)\n return desired_sample_rate, waveform", "Downloading and preparing the sound file\nHere you will download a wav file and listen to it.\nIf you have a file already available, just upload it to colab and use it instead.\nNote: The expected audio file should be a mono wav file at 16kHz sample rate.", "!curl -O https://storage.googleapis.com/audioset/speech_whistling2.wav\n\n!curl -O https://storage.googleapis.com/audioset/miaow_16k.wav\n\n# wav_file_name = 'speech_whistling2.wav'\nwav_file_name = 'miaow_16k.wav'\nsample_rate, wav_data = wavfile.read(wav_file_name, 'rb')\nsample_rate, wav_data = ensure_sample_rate(sample_rate, wav_data)\n\n# Show some basic information about the audio.\nduration = len(wav_data)/sample_rate\nprint(f'Sample rate: {sample_rate} Hz')\nprint(f'Total duration: {duration:.2f}s')\nprint(f'Size of the input: {len(wav_data)}')\n\n# Listening to the wav file.\nAudio(wav_data, rate=sample_rate)", "The wav_data needs to be normalized to values in [-1.0, 1.0] (as stated in the model's documentation).", "waveform = wav_data / tf.int16.max", "Executing the Model\nNow the easy part: using the data already prepared, you just call the model and get the: scores, embedding and the spectrogram.\nThe score is the main result you will use.\nThe spectrogram you will use to do some visualizations later.", "# Run the model, check the output.\nscores, embeddings, spectrogram = model(waveform)\n\nscores_np = scores.numpy()\nspectrogram_np = spectrogram.numpy()\ninfered_class = class_names[scores_np.mean(axis=0).argmax()]\nprint(f'The main sound is: {infered_class}')", "Visualization\nYAMNet also returns some additional information that we can use for visualization.\nLet's take a look on the Waveform, spectrogram and the top classes inferred.", "plt.figure(figsize=(10, 6))\n\n# Plot the waveform.\nplt.subplot(3, 1, 1)\nplt.plot(waveform)\nplt.xlim([0, len(waveform)])\n\n# Plot the log-mel spectrogram (returned by the model).\nplt.subplot(3, 1, 
2)\nplt.imshow(spectrogram_np.T, aspect='auto', interpolation='nearest', origin='lower')\n\n# Plot and label the model output scores for the top-scoring classes.\nmean_scores = np.mean(scores, axis=0)\ntop_n = 10\ntop_class_indices = np.argsort(mean_scores)[::-1][:top_n]\nplt.subplot(3, 1, 3)\nplt.imshow(scores_np[:, top_class_indices].T, aspect='auto', interpolation='nearest', cmap='gray_r')\n\n# patch_padding = (PATCH_WINDOW_SECONDS / 2) / PATCH_HOP_SECONDS\n# values from the model documentation\npatch_padding = (0.025 / 2) / 0.01\nplt.xlim([-patch_padding-0.5, scores.shape[0] + patch_padding-0.5])\n# Label the top_N classes.\nyticks = range(0, top_n, 1)\nplt.yticks(yticks, [class_names[top_class_indices[x]] for x in yticks])\n_ = plt.ylim(-0.5 + np.array([top_n, 0]))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
celiasmith/syde556
SYDE 556 Lecture 5 Dynamics.ipynb
gpl-2.0
[ "SYDE 556/750: Simulating Neurobiological Systems\nAccompanying Readings: Chapter 8\nADMIN STUFF: Next assignment, project proposal due dates\nDynamics\n\nEverything we've looked at so far has been feedforward\nThere's some pattern of activity in one group of neurons representing $x$\nWe want that to cause some pattern of activity in another group of neurons to represent $y=f(x)$\nThese can be chained together to make more complex systems $z=h(f(x)+g(y))$\n\n\nWhat about recurrent networks?\nWhat happens when we connect a neural group back to itself?\n\n\n\n<img src=\"files/lecture5/recnet1.png\">\nRecurrent functions\n\nWhat if we do exactly what we've done so far in the past, but instead of connecting one group of neurons to another, we just connect it back to itself\nInstead of $y=f(x)$\nWe get $x=f(x)$ (???)\n\n\n\nAs written, this is clearly non-sensical\n\nFor example, if we do $f(x)=x+1$ then we'd have $x=x+1$, or $x-x=1$, or $0=1$\n\n\n\nBut don't forget about time\n\nWhat if it was $x_{t+1} = f(x_t)$\nWhich makes more sense because we're talking about a real physical system\nThis is a lot like a differential equation\nWhat would happen if we built this?\n\n\n\nTry it out\n\nLet's try implementing this kind of circuit\nStart with $x_{t+1}=x_t+1$", "%pylab inline\n\nimport nengo\n\nmodel = nengo.Network()\n\nwith model:\n ensA = nengo.Ensemble(100, dimensions=1)\n \n def feedback(x):\n return x+1\n \n conn = nengo.Connection(ensA, ensA, function=feedback, synapse = 0.1)\n\n ensA_p = nengo.Probe(ensA, synapse=.01)\n \nsim = nengo.Simulator(model)\nsim.run(.5)\n\nplot(sim.trange(), sim.data[ensA_p])\nylim(-1.5,1.5);", "That sort of makes sense\n$x$ increases quickly, then hits an upper bound\n\n\nHow quickly?\nWhat parameters of the system affect this?\n\n\n\nWhat are the precise dynamics?\n\n\nWhat about $f(x)=-x$?", "with model:\n def feedback(x):\n return -x\n \n conn.function = feedback\n\nsim = nengo.Simulator(model)\nsim.run(.5)\n\nplot(sim.trange(), sim.data[ensA_p])\nylim(-1.5,1.5);", "That also makes sense. 
What if we nudge it away from zero?", "from nengo.utils.functions import piecewise\n\nwith model:\n stim = nengo.Node(piecewise({0:1, .2:-1, .4:0}))\n nengo.Connection(stim, ensA)\n \nsim = nengo.Simulator(model)\nsim.run(.6)\n\nplot(sim.trange(), sim.data[ensA_p])\nylim(-1.5,1.5);", "With an input of 1, $x=0.5$\nWith an input of -1, $x=-0.5$\nWith an input of 0, it goes back to $x=0$\n\nDoes this make sense?\n\nWhy / why not?\nAnd why that particular timing/curvature?\n\n\n\nWhat about $f(x)=x^2$?", "with model:\n stim.output = piecewise({.1:.2, .2:.4, .5:0})\n def feedback(x):\n return x*x\n \n conn.function = feedback\n\nsim = nengo.Simulator(model)\nsim.run(.6)\n\nplot(sim.trange(), sim.data[ensA_p])\nylim(-1.5,1.5); ", "Well that's weird\nStable at $x=0$ with no input \nStable at .2 \nUnstable at .4, shoots up high\nSomething very strange happens around $x=1$ when the input is turned off (why decay if $f(x) = x^2$?)\n\n\nWhy is this happening?\n\nMaking sense of dynamics\n\nLet's go back to something simple\nJust a single feed-forward neural population\nEncode $x$ into current, compute spikes, decode filtered spikes into $\\hat{x}$\n\n\nInstead of a constant input, let's change the input\nChange it suddenly from zero to one to get a sense of what's happening with changes", "import nengo\nfrom nengo.utils.functions import piecewise\n\nmodel = nengo.Network(seed=4)\n\nwith model:\n stim = nengo.Node(piecewise({.3:1}))\n ensA = nengo.Ensemble(100, dimensions=1)\n \n def feedback(x):\n return x\n \n nengo.Connection(stim, ensA)\n #conn = nengo.Connection(ensA, ensA, function=feedback)\n\n stim_p = nengo.Probe(stim)\n ensA_p = nengo.Probe(ensA, synapse=0.01)\n \nsim = nengo.Simulator(model)\nsim.run(1)\n\nplot(sim.trange(), sim.data[ensA_p], label=\"$\\hat{x}$\")\nplot(sim.trange(), sim.data[stim_p], label=\"$x$\")\nlegend()\nylim(-.2,1.5);", "This was supposed to compute $f(x)=x$\nFor a constant input, that works\nBut we get something else when there's a change in the input\n\n\nWhat is this difference?\nWhat affects it?", "with model:\n ensA_p = nengo.Probe(ensA, synapse=0.03)\n \nsim = nengo.Simulator(model)\nsim.run(1)\n\nplot(sim.trange(), sim.data[ensA_p], label=\"$\\hat{x}$\")\nplot(sim.trange(), sim.data[stim_p], label=\"$x$\")\nlegend()\nylim(-.2,1.5);", "The time constant of the post-synaptic filter\nWe're not getting $f(x)=x$\nInstead we're getting $f(x(t))=x(t)*h(t)$", "tau = 0.03\nwith model:\n ensA_p = nengo.Probe(ensA, synapse=tau)\n\nsim = nengo.Simulator(model)\nsim.run(1)\n\nstim_filt = nengo.Lowpass(tau).filt(sim.data[stim_p], dt=sim.dt)\n\nplot(sim.trange(), sim.data[ensA_p], label=\"$\\hat{x}$\")\nplot(sim.trange(), sim.data[stim_p], label=\"$x$\")\nplot(sim.trange(), stim_filt, label=\"$h(t)*x(t)$\")\nlegend()\nylim(-.2,1.5);", "So there are dynamics and filtering going on, since there is always a synaptic filter on a connection\nWhy isn't it exactly the same?\nRecurrent connections are dynamic as well (i.e. 
passing past information to future state of the population)\nLet's take a look more carefully\n\nRecurrent connections\n\nSo a connection actually approximates $f(x(t))*h(t)$\n\nSo what does a recurrent connection do?\n\nAlso $x(t) = f(x(t))*h(t)$\n\n\n\nwhere $$\nh(t) = \\begin{cases}\n e^{-t/\\tau} &\\mbox{if } t > 0 \\ \n 0 &\\mbox{otherwise} \n \\end{cases}\n$$\n\n\nHow can we work with this?\n\n\nGeneral rule of thumb: convolutions are annoying, so let's get rid of them\n\nWe could do a Fourier transform\n$X(\\omega)=F(\\omega)H(\\omega)$\nBut, since we are studying the response of a system (rather than a continuous signal), there's a more general and appropriate transform that makes life even easier:\nLaplace transform (it is more general because $s = a + j\\omega$)\nThe Laplace transform of our equations are:\n$X(s)=F(s)H(s)$\n$H(s)={1 \\over {1+s\\tau}}$\nRearranging:\n\n$X(s)=F(s){1 \\over {1+s\\tau}}$\n$X(s)(1+s\\tau) = F(s)$\n$X(s) + X(s)s\\tau = F(s)$\n$sX(s) = {1 \\over \\tau} (F(s)-X(s))$\n\nConvert back into the time domain (inverse Laplace):\n\n${dx \\over dt} = {1 \\over \\tau} (f(x(t))-x(t))$\nDynamics\n\n\nThis says that if we introduce a recurrent connection, we end up implementing a differential equation\n\n\nSo what happened with $f(x)=x+1$?\n\n$\\dot{x} = {1 \\over \\tau} (x+1-x)$\n$\\dot{x} = {1 \\over \\tau}$\n\n\nWhat about $f(x)=-x$?\n$\\dot{x} = {1 \\over \\tau} (-x-x)$\n$\\dot{x} = {-2x \\over \\tau}$\nConsistent with figures above, so at inputs of $\\pm 1$ get to $0 = 2x\\pm 1$, $x=\\pm .5$ \n\n\nAnd $f(x)=x^2$? \n$\\dot{x} = {1 \\over \\tau} (x^2-x)$\nConsistent with figure, at input of .2, $0=x^2-x+.2=(x-.72)(x-.27)$, for input of .4 you get imaginary solutions.\nFor 0 input, x = 0,1 ... what if we get it over 1 before turning off input?\n\n\n\nSynthesis\n\nWhat if there's some differential equation we really want to implement?\nWe want $\\dot{x} = f(x)$\nSo we do a recurrent connection of $f'(x)=\\tau f(x)+x$\nThe resulting model will end up implementing $\\dot{x} = {1 \\over \\tau} (\\tau f(x)+x-x)=f(x)$\n\n\n\nInputs\n\n\nWhat happens if there's an input as well?\n\nWe'll call the input $u$ from another population, and it is also computing some function $g(u)$\n$x(t) = f(x(t))h(t)+g(u(t))h(t)$\n\n\n\nFollow the same derivation steps\n\n$\\dot{x} = {1 \\over \\tau} (f(x)-x + g(u))$\n\n\n\nSo if you have some input that you want added to $\\dot{x}$, you need to scale it by $\\tau$\n\n\nThis lets us do any differential equation of the form $\\dot{x}=f(x)+g(u)$\n\n\nA derivation\nLinear systems\n\nLet's take a step back and look at just linear systems\nThe book shows that we can implement any equation of the form\n\n$\\dot{x}(t) = A x(t) + B u(t)$\n\nWhere $A$ and $x$ are a matrix and vector -- giving a standard control theoretic structure\n<img src=\"files/lecture5/control_sys.png\" width=\"600\">\n\nOur goal is to convert this to a structure which has $h(t)$ as the transfer function instead of the standard $\\int$\n<img src=\"files/lecture5/control_sysh.png\" width=\"600\">\n\n\nUsing Laplace on the standard form gives:\n\n\n$sX(s) = A X(s) + B U(s)$\n\nLaplace on the 'neural control' form gives (as before where $F(s) = A'X(s) + B'U(s)$):\n\n$X(s) = {1 \\over {1 + s\\tau}} (A'X(s) + B'U(s))$\n$X(s) + \\tau sX(s) = (A'X(s) + B'U(s))$\n$sX(s) = {1 \\over \\tau} (A'X(s) + B'U(s) - X(s))$\n$sX(s) = {1 \\over \\tau} ((A' - I) X(s) + B'U(s))$\n\nMaking the 'standard' and 'neural' equations equal to one another, we find that for any system with a given A and B, 
the A' and B' of the equivalent neural system are given by:\n\n$A' = \\tau A + I$ and\n$B' = \\tau B$\n\n\nwhere $I$ is the identity matrix\n\n\nThis is nice because lots of engineers think of the systems they build in these terms (i.e. as linear control systems).\n\n\nNonlinear systems\n\nIn fact, these same steps can be taken to account for nonlinear control systems as well:\n\n$\\dot{x}(t) = f(x(t),u(t),t)$\n\nFor a neural system with transfer function $h(t)$:\n\n$X(s) = H(s)F'(X(s),U(s),s)$\n$X(s) = {1 \\over {1 + s\\tau}} F'(X(s),U(s),s)$\n$sX(s) = {1 \\over \\tau} (F'(X(s),U(s),s) - X(s))$\n\nThis gives the general result (slightly more general than what we saw earlier):\n\n$F'(X(s),U(s),s) = \\tau(F(X(s),U(s),s)) + X(s)$\nApplications\nEye control\n\nPart of the brainstem called the nuclei prepositus hypoglossi\nInput is eye velocity $v$\nOutput is eye position $x$\n\n$\\dot{x}=v$\n\nThis is an integrator ($x$ is the integral of $v$)\n\n\n\nIt's a linear system, so, to get it in the standard control form $\\dot{x}=Ax+Bu$ we have:\n\n$A=0$\n$B=1$\n\n\nSo that means we need $A'=\\tau 0 + I = 1$ and $B'=\\tau 1 = \\tau$\n<img src=\"files/lecture5/eye_sys.png\" width=\"400\">", "import nengo\nfrom nengo.utils.functions import piecewise\nfrom nengo.utils.ensemble import tuning_curves\n\ntau = 0.01\n\nmodel = nengo.Network('Eye control', seed=8)\n\nwith model:\n stim = nengo.Node(piecewise({.3:1, .6:0 }))\n velocity = nengo.Ensemble(100, dimensions=1)\n position = nengo.Ensemble(20, dimensions=1)\n \n def feedback(x):\n return 1*x\n \n conn = nengo.Connection(stim, velocity)\n conn = nengo.Connection(velocity, position, transform=tau, synapse=tau)\n conn = nengo.Connection(position, position, function=feedback, synapse=tau)\n\n stim_p = nengo.Probe(stim)\n position_p = nengo.Probe(position, synapse=.01)\n velocity_p = nengo.Probe(velocity, synapse=.01)\n \nsim = nengo.Simulator(model)\nsim.run(1)\n\nx, A = tuning_curves(position, sim)\nplot(x,A)\n\nfigure()\nplot(sim.trange(), sim.data[stim_p], label = \"stim\")\nplot(sim.trange(), sim.data[position_p], label = \"position\")\nplot(sim.trange(), sim.data[velocity_p], label = \"velocity\")\nlegend(loc=\"best\");", "That's pretty good... 
the area under the input is about equal to the magnitude of the output.\nBut, in order to be a perfect integrator, we'd need exactly $x=1\\times x$\nWe won't get exactly that\nNeural implementations are always approximations\n\n\nTwo forms of error:\n$E_{distortion}$, the decoding error\n$E_{noise}$, the random noise error\n\n\nWhat will they do?\n\nDistortion error\n<img src=\"files/lecture5/integrator_error.png\">\n\nWhat affects this?", "import nengo\nfrom nengo.dists import Uniform\nfrom nengo.utils.ensemble import tuning_curves\n\nmodel = nengo.Network(label='Neurons')\nwith model:\n neurons = nengo.Ensemble(100, dimensions=1, max_rates=Uniform(100,200))\n\n connection = nengo.Connection(neurons, neurons)\n \nsim = nengo.Simulator(model)\n\nd = sim.data[connection].weights.T\nx, A = tuning_curves(neurons, sim)\nxhat = numpy.dot(A, d) \n\nx, A = tuning_curves(neurons, sim)\nplot(x,A)\n\nfigure()\nplot(x, xhat-x)\naxhline(0, color='k')\nxlabel('$x$')\nylabel('$\\hat{x}-x$');", "We can think of the distortion error as introducing a bunch of local attractors into the representation\nAny 'downward' x-crossing will be a stable point ('upwards' is unstable).\nThere will be a tendency to drift towards one of these even if the input is zero.\n\n\n\nNoise error\n\nWhat will random noise do?\nPush the representation back and forth\nWhat if it is small?\nWhat if it is large?\n\n\nWhat will changing the post-synaptic time constant $\\tau$ do?\nHow does that interact with noise?\n\n\n\nReal neural integrators\n\nBut real eyes aren't perfect integrators\nIf you get someone to look at someting, then turn off the lights but tell them to keep looking in the same direction, their eye will drift back to centre (with about 70s time constant)\nHow do we implement that?\n\n\n\n$\\dot{x}=-{1 \\over \\tau_c}x + v$\n\n\n$\\tau_c$ is the time constant of that return to centre\n\n\n$A'=\\tau {-1 \\over \\tau_c}+1$\n\n$B' = \\tau$", "import nengo\nfrom nengo.utils.functions import piecewise\n\ntau = 0.1\ntau_c = 2.0\n\nmodel = nengo.Network('Eye control', seed=5)\n\nwith model:\n stim = nengo.Node(piecewise({.3:1, .6:0 }))\n velocity = nengo.Ensemble(100, dimensions=1)\n position = nengo.Ensemble(200, dimensions=1)\n \n def feedback(x):\n return (-tau/tau_c + 1)*x\n \n conn = nengo.Connection(stim, velocity)\n conn = nengo.Connection(velocity, position, transform=tau, synapse=tau)\n conn = nengo.Connection(position, position, function=feedback, synapse=tau)\n\n stim_p = nengo.Probe(stim)\n position_p = nengo.Probe(position, synapse=.01)\n velocity_p = nengo.Probe(velocity, synapse=.01)\n \nsim = nengo.Simulator(model)\nsim.run(5)\n\nplot(sim.trange(), sim.data[stim_p], label = \"stim\")\nplot(sim.trange(), sim.data[position_p], label = \"position\")\nplot(sim.trange(), sim.data[velocity_p], label = \"velocity\")\nlegend(loc=\"best\");", "That also looks right. Note that as $\\tau_c \\rightarrow \\infty$ this will approach the integrator.\nHumans (a) and Goldfish (b)\nHumans have more neurons doing this than goldfish (~1000 vs ~40)\nThey also have slower decay (70 s vs. 10 s).\nWhy do these fit together?\n\n<img src=\"files/lecture5/integrator_decay.png\">\nControlled Integrator\n\nWhat if we want an integrator where we can adjust the decay on-the-fly?\nSeparate input telling us what the decay constant $d$ should be\n\n$\\dot{x} = -d x + v$\n\n\nSo there are two inputs: $v$ and $d$\n\n\nThis is no longer in the standard $Ax + Bu$ form. 
Sort of...\n\nLet $A = -d(t)$, so it's not a matrix\nBut it is of the more general form: ${dx \\over dt}=f(x)+g(u)$\n\n\n\nWe need to compute a nonlinear function of an input ($d$) and the state variable ($x$)\n\n\nHow can we do this?\n\n\nGoing to 2D so we can compute the nonlinear function\n\nLet's have the state variable be $[x, d]$\n\n\n\n<img src=\"files/lecture5/controlled_integrator.png\" width = \"600\">", "import nengo\nfrom nengo.utils.functions import piecewise\n\ntau = 0.1\n\nmodel = nengo.Network('Controlled integrator', seed=1)\n\nwith model:\n vel = nengo.Node(piecewise({.2:1.5, .5:0 }))\n dec = nengo.Node(piecewise({.7:.2, .9:0 }))\n \n velocity = nengo.Ensemble(100, dimensions=1)\n decay = nengo.Ensemble(100, dimensions=1)\n position = nengo.Ensemble(400, dimensions=2)\n \n def feedback(x):\n return -x[1]*x[0]+x[0], 0\n \n conn = nengo.Connection(vel, velocity)\n conn = nengo.Connection(dec, decay)\n conn = nengo.Connection(velocity, position[0], transform=tau, synapse=tau)\n conn = nengo.Connection(decay, position[1], synapse=0.01)\n conn = nengo.Connection(position, position, function=feedback, synapse=tau)\n\n position_p = nengo.Probe(position, synapse=.01)\n velocity_p = nengo.Probe(velocity, synapse=.01)\n decay_p = nengo.Probe(decay, synapse=.01)\n \nsim = nengo.Simulator(model)\nsim.run(1)\n\nplot(sim.trange(), sim.data[decay_p])\nlineObjects = plot(sim.trange(), sim.data[position_p])\nplot(sim.trange(), sim.data[velocity_p])\nlegend(('decay','position','decay','velocity'),loc=\"best\");\n\nfrom nengo_gui.ipython import IPythonViz\nIPythonViz(model, \"configs/controlled_integrator.py.cfg\")", "Other fun functions\n\nOscillator\n$F = -kx = m \\ddot{x}$ let $\\omega = \\sqrt{\\frac{k}{m}}$\n$\\frac{d}{dt} \\begin{bmatrix}\n\\omega x \\\n\\dot{x}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n0 & \\omega \\\n-\\omega & 0\n\\end{bmatrix}\n\\begin{bmatrix}\nx_0 \\\nx_1\n\\end{bmatrix}$\nTherefore, with the above, $\\dot{x}=[x_1, -x_0]$", "import nengo\n\nmodel = nengo.Network('Oscillator')\n\nfreq = -.5\n\nwith model:\n stim = nengo.Node(lambda t: [.5,.5] if t<.02 else [0,0])\n \n osc = nengo.Ensemble(200, dimensions=2)\n \n def feedback(x):\n return x[0]+freq*x[1], -freq*x[0]+x[1]\n \n nengo.Connection(osc, osc, function=feedback, synapse=.01)\n nengo.Connection(stim, osc)\n \n osc_p = nengo.Probe(osc, synapse=.01)\n \nsim = nengo.Simulator(model)\nsim.run(.5)\n\nfigure(figsize=(12,4))\nsubplot(1,2,1)\nplot(sim.trange(), sim.data[osc_p]);\nxlabel('Time (s)')\nylabel('State value')\n \nsubplot(1,2,2)\nplot(sim.data[osc_p][:,0],sim.data[osc_p][:,1])\nxlabel('$x_0$')\nylabel('$x_1$');\n\nfrom nengo_gui.ipython import IPythonViz\nIPythonViz(model, \"configs/oscillator.py.cfg\")", "Lorenz Attractor (a chaotic attractor)\n\n$\\dot{x}=[10x_1-10x_0, -x_0 x_2-x_1, x_0 x_1 - {8 \\over 3}(x_2+28)-28]$", "import nengo\n\nmodel = nengo.Network('Lorenz Attractor', seed=3)\n\ntau = 0.1\nsigma = 10\nbeta = 8.0/3\nrho = 28\n\ndef feedback(x):\n dx0 = -sigma * x[0] + sigma * x[1]\n dx1 = -x[0] * x[2] - x[1]\n dx2 = x[0] * x[1] - beta * (x[2] + rho) - rho\n return [dx0 * tau + x[0], \n dx1 * tau + x[1], \n dx2 * tau + x[2]]\n\nwith model:\n lorenz = nengo.Ensemble(2000, dimensions=3, radius=60)\n \n nengo.Connection(lorenz, lorenz, function=feedback, synapse=tau)\n \n lorenz_p = nengo.Probe(lorenz, synapse=tau)\n \nsim = nengo.Simulator(model)\nsim.run(14)\n\nfigure(figsize=(12,4))\nsubplot(1,2,1)\nplot(sim.trange(), sim.data[lorenz_p]);\nxlabel('Time (s)')\nylabel('State value')\n 
\nsubplot(1,2,2)\nplot(sim.data[lorenz_p][:,0],sim.data[lorenz_p][:,1])\nxlabel('$x_0$') \nylabel('$x_1$');\n\nfrom nengo_gui.ipython import IPythonViz\nIPythonViz(model, \"configs/lorenz.py.cfg\")", "Note: This is not the original Lorenz attractor. \nThe original is $\\dot{x}=[10x_1-10x_0, x_0 (28-x_2)-x_1, x_0 x_1 - {8 \\over 3}(x_2)]$\nWhy change it to $\\dot{x}=[10x_1-10x_0, -x_0 x_2-x_1, x_0 x_1 - {8 \\over 3}(x_2+28)-28]$?\nWhat's being changed here?\n\n\n\nOscillators with different paths\n\nSince we can implement any function, we're not limited to linear oscillators\nWhat about a \"square\" oscillator?\nInstead of the value going in a circle, it traces out a square\n\n\n\n$$\n{\\dot{x}} = \\begin{cases}\n [r, 0] &\\mbox{if } |x_1|>|x_0| \\wedge x_1>0 \\ \n [-r, 0] &\\mbox{if } |x_1|>|x_0| \\wedge x_1<0 \\ \n [0, -r] &\\mbox{if } |x_1|<|x_0| \\wedge x_0>0 \\ \n [0, r] &\\mbox{if } |x_1|<|x_0| \\wedge x_0<0 \\ \n \\end{cases}\n$$", "import nengo\n\nmodel = nengo.Network('Square Oscillator')\n\ntau = 0.02\nr=6\n\ndef feedback(x): \n if abs(x[1])>abs(x[0]):\n if x[1]>0: dx=[r, 0]\n else: dx=[-r, 0]\n else:\n if x[0]>0: dx=[0, -r]\n else: dx=[0, r]\n return [tau*dx[0]+x[0], tau*dx[1]+x[1]] \n\nwith model:\n stim = nengo.Node(lambda t: [.5,.5] if t<.02 else [0,0])\n \n square_osc = nengo.Ensemble(1000, dimensions=2)\n \n nengo.Connection(square_osc, square_osc, function=feedback, synapse=tau)\n nengo.Connection(stim, square_osc)\n \n square_osc_p = nengo.Probe(square_osc, synapse=tau)\n \nsim = nengo.Simulator(model)\nsim.run(2)\n\nfigure(figsize=(12,4))\nsubplot(1,2,1)\nplot(sim.trange(), sim.data[square_osc_p]);\nxlabel('Time (s)')\nylabel('State value')\n \nsubplot(1,2,2)\nplot(sim.data[square_osc_p][:,0],sim.data[square_osc_p][:,1])\nxlabel('$x_0$')\nylabel('$x_1$');\n\nfrom nengo_gui.ipython import IPythonViz\nIPythonViz(model) #do config", "Does this do what you expect?\n\nHow is it affected by:\n\nNumber of neurons?\nPost-synaptic time constant?\nDecoding filter time constant?\nSpeed of oscillation (r)?\n\n\n\nWhat about this shape?", "import nengo\n\nmodel = nengo.Network('Heart Oscillator')\n\ntau = 0.02\nr=4\n\ndef feedback(x): \n return [-tau*r*x[1]+x[0], tau*r*x[0]+x[1]]\n\ndef heart_shape(x):\n theta = np.arctan2(x[1], x[0])\n r = 2 - 2 * np.sin(theta) + np.sin(theta)*np.sqrt(np.abs(np.cos(theta)))/(np.sin(theta)+1.4)\n return -r*np.cos(theta), r*np.sin(theta)\n\nwith model:\n stim = nengo.Node(lambda t: [.5,.5] if t<.02 else [0,0])\n \n heart_osc = nengo.Ensemble(1000, dimensions=2)\n heart = nengo.Ensemble(100, dimensions=2, radius=4)\n \n nengo.Connection(stim, heart_osc)\n nengo.Connection(heart_osc, heart_osc, function=feedback, synapse=tau)\n nengo.Connection(heart_osc, heart, function=heart_shape, synapse=tau)\n \n heart_p = nengo.Probe(heart, synapse=tau)\n \nsim = nengo.Simulator(model)\nsim.run(4)\n\nfigure(figsize=(12,4))\nsubplot(1,2,1)\nplot(sim.trange(), sim.data[heart_p]);\nxlabel('Time (s)')\nylabel('State value')\n \nsubplot(1,2,2)\nplot(sim.data[heart_p][:,0],sim.data[heart_p][:,1])\nxlabel('$x_0$')\nylabel('$x_1$');\n\nfrom nengo_gui.ipython import IPythonViz\nIPythonViz(model) #do config", "We are doing things differently here\nThe actual $x$ value is a normal circle oscillator\nThe heart shape is a function of $x$\nBut that's just a different decoder\n\n\nWould it be possible to do an oscillator where $x$ followed this shape?\nHow could we tell them apart in terms of neural behaviour?\n\n\n\nControlled Oscillator\n\n\nChange the frequency of the 
oscillator on-the-fly\n\n\n$\\dot{x}=[x_1 x_2, -x_0 x_2]$", "import nengo\nfrom nengo.utils.functions import piecewise\n\nmodel = nengo.Network('Controlled Oscillator')\n\ntau = 0.1\nfreq = 20\n\ndef feedback(x):\n return x[1]*x[2]*freq*tau+1.1*x[0], -x[0]*x[2]*freq*tau+1.1*x[1], 0\n\nwith model:\n stim = nengo.Node(lambda t: [20,20] if t<.02 else [0,0])\n freq_ctrl = nengo.Node(piecewise({0:1, 2:.5, 6:-1}))\n \n ctrl_osc = nengo.Ensemble(500, dimensions=3)\n \n nengo.Connection(ctrl_osc, ctrl_osc, function=feedback, synapse=tau)\n nengo.Connection(stim, ctrl_osc[0:2])\n nengo.Connection(freq_ctrl, ctrl_osc[2])\n \n ctrl_osc_p = nengo.Probe(ctrl_osc, synapse=0.01)\n \nsim = nengo.Simulator(model)\nsim.run(8)\n\nfigure(figsize=(12,4))\nsubplot(1,2,1)\nplot(sim.trange(), sim.data[ctrl_osc_p]);\nxlabel('Time (s)')\nylabel('State value')\n \nsubplot(1,2,2)\nplot(sim.data[ctrl_osc_p][:,0],sim.data[ctrl_osc_p][:,1])\nxlabel('$x_0$')\nylabel('$x_1$');\n\nfrom nengo_gui.ipython import IPythonViz\nIPythonViz(model, \"configs/controlled_oscillator.py.cfg\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Bio204-class/bio204-notebooks
2016-04-04-ANOVA-as-sumofsquares-decomposition.ipynb
cc0-1.0
[ "%matplotlib inline\n\nfrom collections import namedtuple\nimport numpy as np\nimport scipy.stats as stats\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sbn\n\n# set some seaborn aesthetics\nsbn.set_palette(\"Set1\")\n\n# initialize random seed for reproducibility\nnp.random.seed(20160404)", "Sums of squares functions\nLet's start by writing a set of functions for calculating sums of squared deviations from the mean (also called \"sums of squared differences\"), or \"sums-of-squares\" for short.", "def sum_squares_total(groups):\n \"\"\"Calculate total sum of squares ignoring groups.\n \n groups should be a sequence of np.arrays representing the samples asssigned \n to their respective groups.\n \"\"\"\n allobs = np.ravel(groups) # np.ravel collapses arrays or lists into a single list\n grandmean = np.mean(allobs)\n return np.sum((allobs - grandmean)**2)\n \n\ndef sum_squares_between(groups):\n \"\"\"Between group sum of squares\"\"\"\n ns = np.array([len(g) for g in groups])\n grandmean = np.mean(np.ravel(groups))\n groupmeans = np.array([np.mean(g) for g in groups])\n return np.sum(ns * (groupmeans - grandmean)**2)\n\n\ndef sum_squares_within(groups):\n \"\"\"Within group sum of squares\"\"\"\n groupmeans = np.array([np.mean(g) for g in groups])\n group_sumsquares = []\n for i in range(len(groups)):\n groupi = np.asarray(groups[i])\n groupmeani = groupmeans[i]\n group_sumsquares.append(np.sum((groupi - groupmeani)**2))\n return np.sum(group_sumsquares)\n \n \ndef degrees_freedom(groups):\n \"\"\"Calculate the \"\"\"\n N = len(np.ravel(groups))\n k = len(groups)\n return (k-1, N - k, N - 1)\n\ndef ANOVA_oneway(groups):\n index = ['BtwGroup', 'WithinGroup', 'Total']\n cols = ['df', 'SumSquares','MS','F','pval']\n \n df = degrees_freedom(groups)\n ss = sum_squares_between(groups), sum_squares_within(groups), sum_squares_total(groups)\n ms = ss[0]/df[0], ss[1]/df[1], \"\"\n F = ms[0]/ms[1], \"\", \"\"\n pval = stats.f.sf(F[0], df[0], df[1]), \"\", \"\"\n \n tbl = pd.DataFrame(index=index, columns=cols)\n tbl.index.name = 'Source'\n tbl.df = df\n tbl.SumSquares = ss\n tbl.MS = ms\n tbl.F = F\n tbl.pval = pval\n \n return tbl\n \ndef ANOVA_R2(anovatbl):\n SSwin = anovatbl.SumSquares[1]\n SStot = anovatbl.SumSquares[2]\n return (1.0 - (SSwin/SStot))", "Simulate ANOVA under the null hypothesis of no difference in group means", "## simulate one way ANOVA under the null hypothesis of no \n## difference in group means\n\ngroupmeans = [0, 0, 0, 0]\nk = len(groupmeans) # number of groups\ngroupstds = [1] * k # standard deviations equal across groups\nn = 25 # sample size\n\n# generate samples\nsamples = [stats.norm.rvs(loc=i, scale=j, size = n) for (i,j) in zip(groupmeans,groupstds)]\nallobs = np.concatenate(samples)", "Draw a figure to illustrate the within group distributions and the total distribution.", "sbn.set_palette(\"deep\")\n\nbins = np.linspace(-3, 3, 10)\nfig, axes = plt.subplots(1, 5, figsize=(20,5), sharex=True, sharey=True)\n\nfor i, sample in enumerate(samples):\n axes[i].hist(sample, bins=bins, histtype='stepfilled', \n linewidth=1.5, label='sample {}'.format(i+1))\n axes[i].legend()\n\naxes[-1].hist(allobs, bins=bins, histtype='stepfilled', linewidth=2, label='all data')\naxes[-1].legend()\naxes[-1].set_title(\"All data combined\", fontsize=20)\n\naxes[0].set_ylabel(\"Frequency\", fontsize=20)\naxes[2].set_xlabel(\"X\", fontsize=20)\n\nfig.tight_layout()\npass", "Calculate the sums of squares", "SSbtw = sum_squares_between(samples)\nSSwin = 
sum_squares_within(samples)\nSStot = sum_squares_total(samples)\n\nprint(\"SS between:\", SSbtw)\nprint(\"SS within:\", SSwin)\nprint(\"SS total:\", SStot)", "Generate ANOVA table:", "ANOVA_oneway(samples)", "Simulating ANOVA under $H_A$", "groupmeans = [0, 0, -1, 1]\nk = len(groupmeans) # number of groups\ngroupstds = [1] * k # standard deviations equal across groups\nn = 25 # sample size\n\n# generate samples\nsamples = [stats.norm.rvs(loc=i, scale=j, size = n) for (i,j) in zip(groupmeans,groupstds)]\nallobs = np.concatenate(samples)\n\nsbn.set_palette(\"deep\")\n\nbins = np.linspace(-3, 3, 10)\nfig, axes = plt.subplots(1, 5, figsize=(20,5), sharex=True, sharey=True)\n\nfor i, sample in enumerate(samples):\n axes[i].hist(sample, bins=bins, histtype='stepfilled', \n linewidth=1.5, label='sample {}'.format(i+1))\n axes[i].legend()\n\naxes[-1].hist(allobs, bins=bins, histtype='stepfilled', linewidth=2, label='all data')\naxes[-1].legend()\naxes[-1].set_title(\"All data combined\", fontsize=20)\n\naxes[0].set_ylabel(\"Frequency\", fontsize=20)\naxes[2].set_xlabel(\"X\", fontsize=20)\n\nfig.tight_layout()\npass\n\nSSbtw = sum_squares_between(samples)\nSSwin = sum_squares_within(samples)\nSStot = sum_squares_total(samples)\n\nprint(\"SS between:\", SSbtw)\nprint(\"SS within:\", SSwin)\nprint(\"SS total:\", SStot)\n\ntbl = ANOVA_oneway(samples)\ntbl", "Calculate the value of $R^2$ for our ANOVA model:", "ANOVA_R2(tbl)", "scipy.stats has an f_oneway function for calculating the f statistic and assocated p-value for a one-way ANOVA. Let's compare oure result above to the implementation in scipy.stats", "# f_oneway expects the samples to be passed in as individual arguments\n# this *samples notation \"unpacks\" the list of samples, treating each as \n# an argument to the function.\n# see: https://docs.python.org/3/tutorial/controlflow.html#unpacking-argument-lists\n\nstats.f_oneway(*samples)", "Anderson's Iris Data revisited", "irisurl = \"https://raw.githubusercontent.com/Bio204-class/bio204-datasets/master/iris.csv\"\niris = pd.read_csv(irisurl)\n\niris.head()\n\nsbn.distplot(iris[\"Sepal.Length\"])\npass\n\nsbn.violinplot(x=\"Species\", y=\"Sepal.Length\", data=iris)\npass\n\nsetosa = iris[iris.Species =='setosa']\nversicolor = iris[iris.Species=='versicolor']\nvirginica = iris[iris.Species == 'virginica']\n\nANOVA_oneway([setosa['Sepal.Length'], versicolor['Sepal.Length'],\n virginica['Sepal.Length']])" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ThunderShiviah/code_guild
interactive-coding-challenges/arrays_strings/reverse_string/reverse_string_challenge-Copy1.ipynb
mit
[ "<small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>\nChallenge Notebook\nProblem: Implement a function to reverse a string (a list of characters), in-place.\n\nConstraints\nTest Cases\nAlgorithm\nCode\nUnit Test\nSolution Notebook\n\nConstraints\n\nCan I assume the string is ASCII?\nYes\nNote: Unicode strings could require special handling depending on your language\n\n\nSince we need to do this in-place, it seems we cannot use the slice operator or the reversed function?\nCorrect\n\n\nSince Python string are immutable, can I use a list of characters instead?\nYes\n\n\n\nTest Cases\n\nNone -> None\n[''] -> ['']\n['f', 'o', 'o', ' ', 'b', 'a', 'r'] -> ['r', 'a', 'b', ' ', 'o', 'o', 'f']\n\nAlgorithm\nRefer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.\nCode", "def list_of_chars(list_chars):\n # TODO: Implement me\n if li\n return list_chars[::-1]\n ", "Unit Test\nThe following unit test is expected to fail until you solve the challenge.", "# %load test_reverse_string.py\nfrom nose.tools import assert_equal\n\n\nclass TestReverse(object):\n\n def test_reverse(self):\n assert_equal(list_of_chars(None), None)\n assert_equal(list_of_chars(['']), [''])\n assert_equal(list_of_chars(\n ['f', 'o', 'o', ' ', 'b', 'a', 'r']),\n ['r', 'a', 'b', ' ', 'o', 'o', 'f'])\n print('Success: test_reverse')\n\n\ndef main():\n test = TestReverse()\n test.test_reverse()\n\n\nif __name__ == '__main__':\n main()", "Solution Notebook\nReview the Solution Notebook for a discussion on algorithms and code solutions." ]
[ "markdown", "code", "markdown", "code", "markdown" ]
bgroveben/python3_machine_learning_projects
kaggle_restaurant_revenue_prediction/kaggle_restaurant_revenue_prediction.ipynb
mit
[ "Kaggle Restaurant Revenue Prediction\nPredict annual restaurant sales based on objective measurements\nImport libraries and data; explore the data\nLet's begin by importing the Python libraries and data that we'll need:", "import pandas as pd\nimport numpy as np\nimport sklearn\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom IPython.display import display\n%matplotlib inline\n\ntrain_data = pd.read_csv(\"train.csv\")\ntrain_data = train_data.drop('Id', axis=1)\ntest_data = pd.read_csv(\"test.csv\")\ntest_data = test_data.drop('Id', axis=1)", "Now for a bit of exploratory data analysis so we can get to know our data:", "display(train_data[:10])\n\ndisplay(test_data[:10])\n\ntrain_data.describe()\n\ntest_data.describe()\n\ntrain_data.head()\n\ntrain_data.tail()\n\ntrain_data.sample(5)\n\ntrain_data.keys()\n\ntest_data.keys()\n\ntest_data.keys()", "Plot the data", "feature_columns = train_data[['P1', 'P2', 'P3', 'P4',\n 'P5', 'P6', 'P7', 'P8', 'P9', 'P10', 'P11', 'P12', 'P13', 'P14', 'P15',\n 'P16', 'P17', 'P18', 'P19', 'P20', 'P21', 'P22', 'P23', 'P24', 'P25',\n 'P26', 'P27', 'P28', 'P29', 'P30', 'P31', 'P32', 'P33', 'P34', 'P35',\n 'P36', 'P37']]\nfeature_columns.plot.box(figsize=(20, 20))", "I'm sure there are more creative and informative ways to plot the data, but for now it's time to move on.\nMassage, munge, preprocess, and visualize the data", "# Cribbed from https://www.kaggle.com/ani310/restaurant-revenue-prediction/restaurant-revenue\n# Format the data so that dates are easier to work with.\n# Create a column that contains data about the number of days the restaurant has been open.\n# Remove the column that has the restaurant's opening date.\n\ntrain_data['Open Date'] = pd.to_datetime(train_data['Open Date'], format='%m/%d/%Y')\ntest_data['Open Date'] = pd.to_datetime(test_data['Open Date'], format='%m/%d/%Y')\n\ntrain_data['OpenDays'] = \"\"\ntest_data['OpenDays'] = \"\"\n\ndate_last_train = pd.DataFrame({'Date':np.repeat(['01/01/2015'], [len(train_data)])})\ndate_last_test = pd.DataFrame({'Date':np.repeat(['01/01/2015'], [len(test_data)])})\n\ndate_last_train['Date'] = pd.to_datetime(date_last_train['Date'], format='%m/%d/%Y')\ndate_last_test['Date'] = pd.to_datetime(date_last_test['Date'], format='%m/%d/%Y')\n\ntrain_data['OpenDays'] = date_last_train['Date'] - train_data['Open Date']\ntest_data['OpenDays'] = date_last_test['Date'] - test_data['Open Date']\n\ntrain_data['OpenDays'] = train_data['OpenDays'].astype('timedelta64[D]').astype(int)\ntest_data['OpenDays'] = test_data['OpenDays'].astype('timedelta64[D]').astype(int)\n\ntrain_data = train_data.drop('Open Date', axis=1)\ntest_data = test_data.drop('Open Date', axis=1)\n\n# Compare the revenue generated by the restaurants in Big Cities vs Other:\ncity_perc = train_data [[\"City Group\", \"revenue\"]].groupby(['City Group'], as_index=False).mean()\nsns.barplot(x='City Group', y='revenue', data=city_perc)\nplt.title(\"Revenue by city size\")\n\n# Convert data from 'City Group' and create columns of indicator variables for 'Big Cities' or 'Other':\ncity_group_dummy = pd.get_dummies(train_data['City Group'])\ntrain_data = train_data.join(city_group_dummy)\ncity_group_dummy_test = pd.get_dummies(test_data['City Group'])\ntest_data = test_data.join(city_group_dummy_test)\n\ntrain_data = train_data.drop('City Group', axis=1)\ntest_data = test_data.drop('City Group', axis=1)\n\n# Create scatterplot showing how long a restaurant has been open impacts revenue.\n# This will also show any 
outliers.\nplt.scatter(train_data['OpenDays'], train_data['revenue'])\nplt.xlabel(\"Days Open\")\nplt.ylabel(\"Revenue\")\nplt.title(\"Restaurant revenue by location age\")", "Find the relevant features", "from sklearn.feature_selection import SelectFromModel\n# from sklearn.linear_model import LassoCV\nfrom sklearn.ensemble import ExtraTreesClassifier\n\nX_train = train_data.iloc[:, 2:]\ny = train_data['revenue']\nprint(\"X_train.shape: {}\".format(X_train.shape))\nclf = ExtraTreesClassifier()\nclf = clf.fit(X_train, y)\nprint(\"clf.feature_.importances_: \\n{}\".format(clf.feature_importances_))\nmodel = SelectFromModel(clf, prefit=True)\nprint(model)\nX_train_new = model.transform(X_train)\nprint(\"X_train_new.shape: {}\".format(X_train_new.shape))\nX_train_new = pd.DataFrame(X_train_new)\nprint(X_train_new[:5])", "Try various machine learning algorithms\nLet's try on some algorithms and see how they fit and predict:\nsklearn.ensemble.RandomForestRegressor\nAlso cribbed from Kaggle.", "from sklearn.ensemble import RandomForestRegressor\n\n# Tweak seaborn visualizations and adapt to Jupyter notebooks:\nsns.set_context(\"notebook\", font_scale=1.1)\nsns.set_style(\"ticks\")\n\n# Make dataframes for train and test:\nX_train = pd.DataFrame({'OpenDaysLog':train_data['OpenDays'].apply(np.log),\n 'Big Cities':train_data['Big Cities'], 'Other':train_data['Other'],\n 'P2':train_data['P2'], 'P8':train_data['P8'], 'P22':train_data['P22'],\n 'P24':train_data['P24'], 'P28':train_data['P28'], 'P26':train_data['P26']})\n\ny_train = train_data['revenue'].apply(np.log)\n\nX_test = pd.DataFrame({'OpenDaysLog':test_data['OpenDays'].apply(np.log),\n 'Big Cities':test_data['Big Cities'], 'Other':test_data['Other'],\n 'P2':test_data['P2'], 'P8':test_data['P8'], 'P22':test_data['P22'],\n 'P24':test_data['P24'], 'P28':test_data['P28'], 'P26':test_data['P26']})\n\n# Time to build the models and make some predictions:\nfrom sklearn import linear_model\n\ncls = RandomForestRegressor(n_estimators=150)\ncls.fit(X_train, y_train)\npred = cls.predict(X_test)\npred = np.exp(pred)\npred\n\ncls.score(X_train, y_train)", "How to format the data for the Kaggle contest submission based on the sampleSubmission.csv file:", "test_data = pd.read_csv(\"test.csv\")\nsubmission = pd.DataFrame({\n \"Id\": test_data[\"Id\"],\n \"Prediction\": pred\n})\n# submission.to_csv('RandomForestSimple.csv', header=True, index=False)\n", "sklearn.neighbors.KNeighborsRegressor", "from sklearn.neighbors import KNeighborsRegressor\n\n# Use dataframes from sklearn.ensemble.RandomForestRegressor example above.\n\nknn_cls = KNeighborsRegressor(n_neighbors=2)\nknn_cls.fit(X_train, y_train)\nknn_pred = knn_cls.predict(X_test)\nknn_pred = np.exp(knn_pred)\nknn_cls.score(X_train, y_train)", "sklearn.linear_model.LinearRegression", "from sklearn.linear_model import LinearRegression\n\n# Use dataframes from sklearn.ensemble.RandomForestRegressor example above.\n\nlr_cls = LinearRegression()\nlr_cls.fit(X_train, y_train)\nlr_pred = lr_cls.predict(X_test)\nlr_pred = np.exp(lr_pred)\nlr_cls.score(X_train, y_train)", "sklearn.neural_network.MLPClassifier", "from sklearn.neural_network import MLPRegressor\n\nmlp_cls = MLPRegressor(solver='lbfgs')\nmlp_cls.fit(X_train, y_train)\nmlp_pred = mlp_cls.predict(X_test)\nmlp_pred = np.exp(mlp_pred)\nmlp_cls.score(X_train, y_train)", "Restaurant Revenue Prediction Kaggle solution\nThis next exercise is from a blog post (linked above) by Bikash Agrawal.", "import datetime\n%pylab inline\n\nfrom sklearn.model_selection 
import LeaveOneOut\nfrom sklearn.grid_search import GridSearchCV, RandomizedSearchCV\nfrom sklearn.metrics import mean_squared_error, mean_absolute_error\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.ensemble import ExtraTreesClassifier \n\n# Regressors considered:\nfrom sklearn.svm import SVR\nfrom sklearn.neighbors import KNeighborsRegressor\n# Regressor chosen by the author for final submission:\nfrom sklearn.linear_model import Ridge\n\n# Kaggle added ~311.5 \"fake\" data points to the test for each real data point.\n# Dividing by this number gives more accurate counts of the \"real\" data in the test set.\nFAKE_DATA_RATIO = 311.5\n# Set a random seed:\nSEED = 0\n\n# Read in the data provided by Kaggle:\ntrain = pd.read_csv('train.csv', index_col=0, parse_dates=[1])\ntest = pd.read_csv('test.csv', index_col=0, parse_dates=[1])\nprint(\"Training data dimensions: \\n{}\".format(train.shape))\nprint(\"Test data dimensions: \\n{}\".format(test.shape))", "Concatenate the train and test data together into a single dataframe to pre-process and featurize both consistently:", "df = pd.concat((test, train), ignore_index=True)\ndf.describe()\n\n# Convert date strings to \"days open\" numerical value:\ndf[\"Open Date\"] = df[\"Open Date\"].apply(pd.to_datetime)\nlast_date = df[\"Open Date\"].max()\n# Create a datetime delta object:\ndf[\"Open Date\"] = last_date - df[\"Open Date\"]\n# Convert the delta object to an int:\ndf[\"Open Date\"] = df[\"Open Date\"].dt.days + 1\n\n# Scale \"days since opening\" so that the marginal impact decreases over time.\n# This and the similar log transform of City Count below are the modifications\n# that were not in the official competition submission.\ndf[\"Log Days Opened\"] = df[\"Open Date\"].apply(np.log)\ndf = df.drop([\"Open Date\"], axis=1)\n\n# Resize plots:\npylab.rcParams['figure.figsize'] = (8, 6)\ndf[[\"Log Days Opened\", \"revenue\"]].plot(x=\"Log Days Opened\", y=\"revenue\",\n kind='scatter', title=\"Log (Days Opened) vs Revenue\")\n\n# There is a certain set of columns that are either all zero or all non-zero.\n# We have added a feature to mark this -- the 'zeros' feature will be 17 for\n# these rows and 0 or 1 for the rows which are rarely or never zero.\n# Here are the features with the notable zero behavior:\nzero_cols = ['P14', 'P15', 'P16', 'P17', 'P18', 'P24', 'P25', 'P26', 'P27',\n 'P30', 'P31', 'P32', 'P33', 'P34', 'P35', 'P36', 'P37']\n\n# We make a feature that holds this count of zero columns in the above list:\ndf['zeros'] = (df[zero_cols] == 0).sum(1)\n\npylab.rcParams['figure.figsize'] = (20, 8)\nfig, axs = plt.subplots(1,2)\nfig.suptitle(\"Distribution of new Zeros features:\", fontsize=18)\n# There is only one row with a zero count between 0 and 17 in the training set:\ndf['zeros'].ix[pd.notnull(df.revenue)].value_counts().plot(\n title=\"Training Set\", kind='bar', ax=axs[0])\n# In the test set, however, there are many rows with an intermediate count of zeros.\n# This is probably an artifact of how the fake test data was generated, and might\n# indicate that conditional dependence between columns was not preserved.\ndf['zeros'].ix[pd.isnull(df.revenue)].value_counts().plot(\n title=\"Test Set\", kind='bar', ax=axs[1], color='red')\n\n# Here we convert two categorical variables, \"Restaurant Type\", and \"City\n# Group (Size)\" to dummy variables:\npylab.rcParams['figure.figsize'] = (6, 4)\n\n# The two categories of City Group both appear very frequently:\ntrain[\"City Group\"].value_counts().plot(\n title=\"City 
Group Distribution in the Training Set\", kind='bar')\n\n# Two of the four Restaurant Types (DT and MB) are very rare:\ntrain[\"Type\"].value_counts().plot(\n title=\"Restaurant Type Distribution in the Training Set\", kind='bar')\n\n(test[\"Type\"].value_counts() / FAKE_DATA_RATIO).plot(\n title=\"Approximate Restaurant Type Distribution in True Test Set\",\n kind='bar', color='red')\n\ndf = df.join(pd.get_dummies(df['City Group'], prefix=\"CG\"))\ndf = df.join(pd.get_dummies(df['Type'], prefix=\"T\"))\n# Since only n-1 columns are needed to binarize n categories, drop one\n# of the new columns and drop the original columns.\n# In addition, drop the rare restaurant types.\ndf = df.drop([\"City Group\", \"Type\", \"CG_Other\", \"T_MB\", \"T_DT\"], axis=1)\nprint(df.shape)\ndf.describe(include='all')\n\n# Replace city names with the count of their frequency in the training +\n# estimated frequency in the test set.\ncity_counts = (test[\"City\"].value_counts() /\n FAKE_DATA_RATIO).add(train[\"City\"].value_counts(), fill_value=0)\ndf[\"City\"] = df[\"City\"].replace(city_counts)\nprint(\"Some example estimated counts of restaurants per city: \\n{}\".format(\n city_counts.head()))\n\n# Take the natural logarithm of city count so that the marginal effect decreases:\ndf[\"Log City Count\"] = df[\"City\"].apply(np.log)\ndf = df.drop([\"City\"], axis=1)\n\n# The last vertical spread of points below are restaurants in Istanbul.\npylab.rcParams['figure.figsize'] = (8, 6)\ndf[[\"Log City Count\", \"revenue\"]].plot(x=\"Log City Count\", y=\"revenue\",\n kind='scatter', title=\"Log City Count vs Revenue\")", "Now is the time for us to impute values for the rare restaurant types (DT and MB).\nInstead of trying to predict with values that appear only 1 or 0 times in the training set, we will replace them with one of the other commonly appearing categories by fitting a model that predicts which common category they \"should\" be.", "# tofit are the rows in the training set that belong to one of the common restaurant types:\ntofit = df.ix[((df.T_FC==1) | (df.T_IL==1)) & (pd.notnull(df.revenue))]\n# tofill are rows in either train or test that belong to one of the rare types:\ntofill = df.ix[((df.T_FC==0) & (df.T_IL==0))]\nprint(\"Type training set shape: \\n{}\".format(tofit.shape))\nprint(\"Data to impute: \\n{}\".format(tofill.shape))\n\n# Restaurants with type FC are labeled 1, those with type IL are labeled 0.\ny = tofit.T_FC\n\n# Drop the label columns and revenue (which is not in the test set):\nX = tofit.drop([\"T_FC\", \"T_IL\", \"revenue\"], axis=1)", "Here we can define and train a model to impute restaurant type.\nThe grid below just has a range of values that the author has found to work well with random forest type models (of which ExtraTrees is one).", "model_grid = {'max_depth': [None, 8], 'min_samples_split': [4,9,16],\n 'min_samples_leaf': [1,4], 'max_features': ['sqrt', 0.5, None]}\ntype_model = ExtraTreesClassifier(n_estimators=25, random_state=SEED)\n\ngrid = RandomizedSearchCV(type_model, model_grid, n_iter=10, cv=5, scoring=\"roc_auc\")\ngrid.fit(X, y)\n\nprint(\"Best parameters for Type Model: \\n{}\".format(grid.best_params_))\n\ntype_model.set_params(**grid.best_params_)\ntype_model.fit(X, y)\n\nimputations = type_model.predict(tofill.drop([\"T_FC\", \"T_IL\", \"revenue\"], axis=1))\ndf.loc[(df.T_FC==0) & (df.T_IL==0), \"T_FC\"] = imputations\ndf = df.drop([\"T_IL\"], axis=1)\ndf[:7]\n\nprint(\"% labeled FC in the training set: \\n{}\".format(df.T_FC.mean()))\nprint(\"% of 
imputed values labeled FC: \\n{}\".format(np.mean(imputations)))", "Now we can binarize the \"P\" columns with dummy variables:", "print(\"Pre-binarizing columns: {}\".format(len(df.columns)))\nfor col in df.columns:\n if col[0] == 'P':\n print(col, len(df[col].unique()), \"Unique Values\")\n df = df.join(pd.get_dummies(df[col], prefix=col))\n df = df.drop([col, df.columns[-1]], axis=1)\nprint(\"Post-binarizing columns: {}\".format(len(df.columns)))", "To finish up our data preprocessing, we need to scale all input features to between 0 and 1 (this is especially important for KNN or SVM(SVR) models.\nHowever, we don't want to scale the output, so we'll temporarily 'drop' it.", "min_max_scaler = MinMaxScaler()\nrev = df.revenue\ndf = df.drop(['revenue'], axis=1)\ndf = pd.DataFrame(data=min_max_scaler.fit_transform(df), columns=df.columns, index=df.index)\ndf = df.join(rev)\n# Now that preprocessing is finished, let's have a look at the data before modeling with it:\ndf.describe()\n\n# Recover the original train and test rows based on revenue (which is null for test rows)\ntrain = df.ix[pd.notnull(df.revenue)]\ntest = df.ix[pd.isnull(df.revenue)].drop(['revenue'], axis=1)\n\n# Scale revenue by sqrt.\n# The reason is to decrease the influence of the few very large revenue values.\ny = train.revenue.apply(np.sqrt)\nX = train.drop([\"revenue\"], axis=1)", "Now we can define and train a Ridge Regression model.\nThe author tested others from the sklearn library, including SVR, RandomForest, K-nearest Neighbors, but found that Ridge consistently gave the strongest leaderboard results.\nOne takeaway -- when the training data is small, simplest is often best.", "model_grid = [{'normalize': [True, False], 'alpha': np.logspace(0,10)}]\nmodel = Ridge()\n# Use a grid search and leave-one-out CV on the train set to find the best regularization parameter to use.\ngrid = GridSearchCV(model, model_grid, scoring='neg_mean_squared_error') \ngrid.fit(X, y)\nprint(\"Best parameters set found on development set: \\n{}\".format(\n grid.best_params_))\n\n# Retrain model on the full training set using the best parameters found in the last step:\nmodel.set_params(**grid.best_params_)\nmodel.fit(X, y)\n\n# Predict on the test set using the trained model:\nsubmission = pd.DataFrame(columns=['Prediction'], index=test.index,\n data=model.predict(test))\n# Convert back to revenue from sqrt(revenue):\nsubmission.Prediction = submission.Prediction.apply(np.square)\nsubmission.Prediction[:7]", "So, now we're ready for our final submission to Kaggle:", "# Add required column name for Kaggle's submission parser:\nsubmission.index.name='Id'\n# Write out the submission:\n# submission.to_csv(\"TFI_Ridge.csv\")\n# Quick sanity check on the submission:\nsubmission.describe().astype(int)\n\n# Revenue from training set for comparison:\ntrain[['revenue']].describe().astype(int)", "One last quick comparison.\nNote the x-axis scale change: the predictions are more conservative and tend to be closer to the mean than the real revenues.\nThis is pretty standard behavior when using RMSE -- there are big penalties for being very wrong, so the model will tend towards more moderate predictions.", "train[['revenue']].plot(kind='kde', title=\"Training Set Revenue Distribution\")\nsubmission.columns = [\"predicted revenue\"]\nsubmission.plot(kind='kde', title=\"Prediction Revenue Distribution\", color='red')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
junhwanjang/DataSchool
Lecture/05. 기초 선형 대수 1 - 행렬의 정의와 연산/3) NumPy 연산.ipynb
mit
[ "NumPy 연산\n벡터화 연산\nNumPy는 코드를 간단하게 만들고 계산 속도를 빠르게 하기 위한 벡터화 연산(vectorized operation)을 지원한다. 벡터화 연산이란 반복문(loop)을 사용하지 않고 선형 대수의 벡터 혹은 행렬 연산과 유사한 코드를 사용하는 것을 말한다.\n예를 들어 다음과 같은 연산을 해야 한다고 하자.\n$$ \nx = \\begin{bmatrix}1 \\ 2 \\ 3 \\ \\vdots \\ 100 \\end{bmatrix}, \\;\\;\\;\\;\ny = \\begin{bmatrix}101 \\ 102 \\ 103 \\ \\vdots \\ 200 \\end{bmatrix},\n$$\n$$z = x + y = \\begin{bmatrix}1+101 \\ 2+102 \\ 3+103 \\ \\vdots \\ 100+200 \\end{bmatrix}= \\begin{bmatrix}102 \\ 104 \\ 106 \\ \\vdots \\ 300 \\end{bmatrix}\n$$\n만약 NumPy의 벡터화 연산을 사용하지 않는다면 이 연산은 루프를 활용하여 다음과 같이 코딩해야 한다.", "x = np.arange(1, 101)\nx\n\ny = np.arange(101, 201)\ny\n\n%%time\nz = np.zeros_like(x)\nfor i, (xi, yi) in enumerate(zip(x, y)):\n z[i] = xi + yi\n\nz", "그러나 NumPy는 벡터화 연산을 지원하므로 다음과 같이 덧셈 연산 하나로 끝난다. 위에서 보인 선형 대수의 벡터 기호를 사용한 연산과 코드가 완전히 동일하다.", "%%time\nz = x + y\n\nz", "연산 속도도 벡터화 연산이 훨씬 빠른 것을 볼 수 있다.\nElement-Wise 연산\nNumPy의 벡터화 연산은 같은 위치의 원소끼리 연산하는 element-wise 연산이다. NumPy의 ndarray를 선형 대수의 벡터나 행렬이라고 했을 때 덧셈, 뺄셈은 NumPy 연산과 일치한다\n스칼라와 벡터의 곱도 마찬가지로 선형 대수에서 사용하는 식과 NumPy 코드가 일치한다.", "x = np.arange(10)\nx\n\na = 100\na * x", "NumPy 곱셉의 경우에는 행렬의 곱, 즉 내적(inner product, dot product)의 정의와 다르다. 따라서 이 경우에는 별도로 dot이라는 명령 혹은 메서드를 사용해야 한다.", "x = np.arange(10)\ny = np.arange(10)\nx * y\n\nnp.dot(x, y)\n\nx.dot(y)", "비교 연산도 마찬가지로 element-wise 연산이다. 따라서 벡터 혹은 행렬 전체의 원소가 모두 같아야 하는 선형 대수의 비교 연산과는 다르다.", "a = np.array([1, 2, 3, 4])\nb = np.array([4, 2, 2, 4])\n\na == b\n\na >= b", "만약 배열 전체를 비교하고 싶다면 array_equal 명령을 사용한다.", "a = np.array([1, 2, 3, 4])\nb = np.array([4, 2, 2, 4])\nc = np.array([1, 2, 3, 4])\n\nnp.array_equal(a, b)\n\nnp.array_equal(a, c)", "만약 NumPy 에서 제공하는 지수 함수, 로그 함수 등의 수학 함수를 사용하면 element-wise 벡터화 연산을 지원한다.", "a = np.arange(5)\na\n\nnp.exp(a)\n\n10**a\n\nnp.log(a)\n\nnp.log10(a)", "만약 NumPy에서 제공하는 함수를 사용하지 않으면 벡터화 연산은 불가능하다.", "import math\na = [1, 2, 3]\nmath.exp(a)", "브로드캐스팅\n선형 대수의 행렬 덧셈 혹은 뺄셈을 하려면 두 행렬의 크기가 같아야 한다. 그러나 NumPy에서는 서로 다른 크기를 가진 두 ndarray 배열의 사칙 연산도 지원한다. 이 기능을 브로드캐스팅(broadcasting)이라고 하는데 크기가 작은 배열을 자동으로 반복 확장하여 크기가 큰 배열에 맞추는 방벙이다.\n예를 들어 다음과 같이 벡터와 스칼라를 더하는 경우를 생각하자. 선형 대수에서는 이러한 연산이 불가능하다.\n$$ \nx = \\begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \\end{bmatrix}, \\;\\;\\;\\; \nx + 1 = \\begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \\end{bmatrix} + 1 = ?\n$$\n그러나 NumPy는 브로드캐스팅 기능을 사용하여 스칼라를 벡터와 같은 크기로 확장시켜서 덧셈 계산을 한다.\n$$ \n\\begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \\end{bmatrix} \\overset{\\text{numpy}}+ 1 = \n\\begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \\end{bmatrix} + \\begin{bmatrix}1 \\ 1 \\ 1 \\ 1 \\ 1 \\end{bmatrix} = \n\\begin{bmatrix}1 \\ 2 \\ 3 \\ 4 \\ 5 \\end{bmatrix}\n$$", "x = np.arange(5)\ny = np.ones_like(x)\nx + y\n\nx + 1", "브로드캐스팅은 더 차원이 높은 경우에도 적용된다. 다음 그림을 참조하라.\n<img src=\"https://datascienceschool.net/upfiles/dbd3775c3b914d4e8c6bbbb342246b6a.png\" style=\"width: 60%; margin: 0 auto 0 auto;\">", "a = np.tile(np.arange(0, 40, 10), (3, 1)).T\na\n\nb = np.array([0, 1, 2])\nb\n\na + b\n\na = np.arange(0, 40, 10)[:, np.newaxis]\na\n\na + b", "차원 축소 연산\nndarray의 하나의 행에 있는 원소를 하나의 데이터 집합으로 보고 평균을 구하면 각 행에 대해 하나의 숫자가 나오게 된다. 예를 들어 10x5 크기의 2차원 배열에 대해 행-평균을 구하면 10개의 숫자를 가진 1차원 벡터가 나오게 된다. 
이러한 연산을 차원 축소(dimension reduction) 연산이라고 한다.\nndarray 는 다음과 같은 차원 축소 연산 명령 혹은 메서드를 지원한다.\n\n최대/최소: min, max, argmin, argmax\n통계: sum, mean, median, std, var\n불리언: all, any", "x = np.array([1, 2, 3, 4])\nx\n\nnp.sum(x)\n\nx.sum()\n\nx = np.array([1, 3, 2])\n\nx.min()\n\nx.max()\n\nx.argmin() # index of minimum\n\nx.argmax() # index of maximum\n\nx = np.array([1, 2, 3, 1])\n\nx.mean()\n\nnp.median(x)\n\nnp.all([True, True, False])\n\nnp.any([True, True, False])\n\na = np.zeros((100, 100), dtype=np.int)\na\n\nnp.any(a != 0)\n\nnp.all(a == a)\n\na = np.array([1, 2, 3, 2])\nb = np.array([2, 2, 3, 2])\nc = np.array([6, 4, 4, 5])\n\n((a <= b) & (b <= c)).all()", "연산의 대상이 2차원 이상인 경우에는 어느 차원으로 계산을 할 지를 axis 인수를 사용하여 지시한다. axis=0인 경우는 행 연산, axis=1인 경우는 열 연산 등으로 사용한다. 디폴트 값은 0이다.\n<img src=\"https://datascienceschool.net/upfiles/edfaf93a7f124f359343d1dcfe7f29fc.png\", style=\"margin: 0 auto 0 auto;\">", "x = np.array([[1, 1], [2, 2]])\nx\n\nx.sum()\n\nx.sum(axis=0) # columns (first dimension)\n\nx.sum(axis=1) # rows (second dimension)\n\ny = np.array([[1, 2, 3], [5, 6, 1]])\nnp.median(y, axis=-1) # last axis", "정렬\nsort 명령이나 메서드를 사용하여 배열 안의 원소를 크기에 따라 정렬하여 새로운 배열을 만들 수도 있다. 2차원 이상인 경우에는 마찬가지로 axis 인수를 사용하여 방향을 결정한다.", "a = np.array([[4, 3, 5], [1, 2, 1]])\na\n\nnp.sort(a)\n\nnp.sort(a, axis=1)", "sort 메서드는 해당 객체의 자료 자체가 변화하는 in-place 메서드이므로 사용할 때 주의를 기울여야 한다.", "a.sort(axis=1)\na", "만약 자료를 정렬하는 것이 아니라 순서만 알고 싶다면 argsort 명령을 사용한다.", "a = np.array([4, 3, 1, 2])\nj = np.argsort(a)\nj\n\na[j]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
shareactorIO/pipeline
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/HvassLabsTutorials/04_Save_Restore.ipynb
apache-2.0
[ "TensorFlow Tutorial #04\nSave & Restore\nby Magnus Erik Hvass Pedersen\n/ GitHub / Videos on YouTube\nIntroduction\nThis tutorial demonstrates how to save and restore the variables of a Neural Network. During optimization we save the variables of the neural network whenever its classification accuracy has improved on the validation-set. The optimization is aborted when there has been no improvement for 1000 iterations. We then reload the variables that performed best on the validation-set.\nThis strategy is called Early Stopping. It is used to avoid overfitting of the neural network. This occurs when the neural network is being trained for too long so it starts to learn the noise of the training-set, which causes the neural network to mis-classify new images.\nOverfitting is not really a problem for the neural network used in this tutorial on the MNIST data-set for recognizing hand-written digits. But this tutorial demonstrates the idea of Early Stopping.\nThis builds on the previous tutorials, so you should have a basic understanding of TensorFlow and the add-on package Pretty Tensor. A lot of the source-code and text in this tutorial is similar to the previous tutorials and may be read quickly if you have recently read the previous tutorials.\nFlowchart\nThe following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. The network has two convolutional layers and two fully-connected layers, with the last layer being used for the final classification of the input images. See Tutorial #02 for a more detailed description of this network and convolution in general.", "from IPython.display import Image\nImage('images/02_network_flowchart.png')", "Imports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport numpy as np\nfrom sklearn.metrics import confusion_matrix\nimport time\nfrom datetime import timedelta\nimport math\nimport os\n\n# Use PrettyTensor to simplify Neural Network construction.\nimport prettytensor as pt", "This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:", "tf.__version__", "Load Data\nThe MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.", "from tensorflow.examples.tutorials.mnist import input_data\ndata = input_data.read_data_sets('data/MNIST/', one_hot=True)", "The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.", "print(\"Size of:\")\nprint(\"- Training-set:\\t\\t{}\".format(len(data.train.labels)))\nprint(\"- Test-set:\\t\\t{}\".format(len(data.test.labels)))\nprint(\"- Validation-set:\\t{}\".format(len(data.validation.labels)))", "The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test- and validation-sets, so we calculate them now.", "data.test.cls = np.argmax(data.test.labels, axis=1)\ndata.validation.cls = np.argmax(data.validation.labels, axis=1)", "Data Dimensions\nThe data dimensions are used in several places in the source-code below. 
They are defined once so we can use these variables instead of numbers throughout the source-code below.", "# We know that MNIST images are 28 pixels in each dimension.\nimg_size = 28\n\n# Images are stored in one-dimensional arrays of this length.\nimg_size_flat = img_size * img_size\n\n# Tuple with height and width of images used to reshape arrays.\nimg_shape = (img_size, img_size)\n\n# Number of colour channels for the images: 1 channel for gray-scale.\nnum_channels = 1\n\n# Number of classes, one class for each of 10 digits.\nnum_classes = 10", "Helper-function for plotting images\nFunction used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.", "def plot_images(images, cls_true, cls_pred=None):\n assert len(images) == len(cls_true) == 9\n \n # Create figure with 3x3 sub-plots.\n fig, axes = plt.subplots(3, 3)\n fig.subplots_adjust(hspace=0.3, wspace=0.3)\n\n for i, ax in enumerate(axes.flat):\n # Plot image.\n ax.imshow(images[i].reshape(img_shape), cmap='binary')\n\n # Show true and predicted classes.\n if cls_pred is None:\n xlabel = \"True: {0}\".format(cls_true[i])\n else:\n xlabel = \"True: {0}, Pred: {1}\".format(cls_true[i], cls_pred[i])\n\n # Show the classes as the label on the x-axis.\n ax.set_xlabel(xlabel)\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "Plot a few images to see if data is correct", "# Get the first images from the test-set.\nimages = data.test.images[0:9]\n\n# Get the true classes for those images.\ncls_true = data.test.cls[0:9]\n\n# Plot the images and labels using our helper-function above.\nplot_images(images=images, cls_true=cls_true)", "TensorFlow Graph\nThe entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.\nTensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.\nTensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.\nA TensorFlow graph consists of the following parts which will be detailed below:\n\nPlaceholder variables used for inputting data to the graph.\nVariables that are going to be optimized so as to make the convolutional network perform better.\nThe mathematical formulas for the convolutional network.\nA loss measure that can be used to guide the optimization of the variables.\nAn optimization method which updates the variables.\n\nIn addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial.\nPlaceholder variables\nPlaceholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. 
We call this feeding the placeholder variables and it is demonstrated further below.\nFirst we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat.", "x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')", "The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:", "x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])", "Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.", "y_true = tf.placeholder(tf.float32, shape=[None, 10], name='y_true')", "We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.", "y_true_cls = tf.argmax(y_true, dimension=1)", "Neural Network\nThis section implements the Convolutional Neural Network using Pretty Tensor, which is much simpler than a direct implementation in TensorFlow, see Tutorial #03.\nThe basic idea is to wrap the input tensor x_image in a Pretty Tensor object which has helper-functions for adding new computational layers so as to create an entire neural network. Pretty Tensor takes care of the variable allocation, etc.", "x_pretty = pt.wrap(x_image)", "Now that we have wrapped the input image in a Pretty Tensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.\nNote that pt.defaults_scope(activation_fn=tf.nn.relu) makes activation_fn=tf.nn.relu an argument for each of the layers constructed inside the with-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The defaults_scope makes it easy to change arguments for all of the layers.", "with pt.defaults_scope(activation_fn=tf.nn.relu):\n y_pred, loss = x_pretty.\\\n conv2d(kernel=5, depth=16, name='layer_conv1').\\\n max_pool(kernel=2, stride=2).\\\n conv2d(kernel=5, depth=36, name='layer_conv2').\\\n max_pool(kernel=2, stride=2).\\\n flatten().\\\n fully_connected(size=128, name='layer_fc1').\\\n softmax_classifier(class_count=10, labels=y_true)", "Getting the Weights\nFurther below, we want to plot the weights of the neural network. When the network is constructed using Pretty Tensor, all the variables of the layers are created indirectly by Pretty Tensor. We therefore have to retrieve the variables from TensorFlow.\nWe used the names layer_conv1 and layer_conv2 for the two convolutional layers. These are also called variable scopes (not to be confused with defaults_scope as described above). 
Pretty Tensor automatically gives names to the variables it creates for each layer, so we can retrieve the weights for a layer using the layer's scope-name and the variable-name.\nThe implementation is somewhat awkward because we have to use the TensorFlow function get_variable() which was designed for another purpose; either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.", "def get_weights_variable(layer_name):\n # Retrieve an existing variable named 'weights' in the scope\n # with the given layer_name.\n # This is awkward because the TensorFlow function was\n # really intended for another purpose.\n\n with tf.variable_scope(layer_name, reuse=True):\n variable = tf.get_variable('weights')\n\n return variable", "Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like: contents = session.run(weights_conv1) as demonstrated further below.", "weights_conv1 = get_weights_variable(layer_name='layer_conv1')\nweights_conv2 = get_weights_variable(layer_name='layer_conv2')", "Optimization Method\nPretty Tensor gave us the predicted class-label (y_pred) as well as a loss-measure that must be minimized, so as to improve the ability of the neural network to classify the input images.\nIt is unclear from the documentation for Pretty Tensor whether the loss-measure is cross-entropy or something else. But we now use the AdamOptimizer to minimize the loss.\nNote that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.", "optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)", "Performance Measures\nWe need a few more performance measures to display the progress to the user.\nFirst we calculate the predicted class number from the output of the neural network y_pred, which is a vector with 10 elements. The class number is the index of the largest element.", "y_pred_cls = tf.argmax(y_pred, dimension=1)", "Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.", "correct_prediction = tf.equal(y_pred_cls, y_true_cls)", "The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.", "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))", "Saver\nIn order to save the variables of the neural network, we now create a so-called Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. 
Nothing is actually saved at this point, which will be done further below in the optimize()-function.", "saver = tf.train.Saver()", "The saved files are often called checkpoints because they may be written at regular intervals during optimization.\nThis is the directory used for saving and retrieving the data.", "save_dir = 'checkpoints/'", "Create the directory if it does not exist.", "if not os.path.exists(save_dir):\n os.makedirs(save_dir)", "This is the path for the checkpoint-file.", "save_path = save_dir + 'best_validation'", "TensorFlow Run\nCreate TensorFlow session\nOnce the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.", "session = tf.Session()", "Initialize variables\nThe variables for weights and biases must be initialized before we start optimizing them. We make a simple wrapper-function for this, because we will call it again below.", "def init_variables():\n session.run(tf.initialize_all_variables())", "Execute the function now to initialize the variables.", "init_variables()", "Helper-function to perform optimization iterations\nThere are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.\nIf your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.", "train_batch_size = 64", "The classification accuracy for the validation-set will be calculated for every 100 iterations of the optimization function below. The optimization will be stopped if the validation accuracy has not been improved in 1000 iterations. We need a few variables to keep track of this.", "# Best validation accuracy seen so far.\nbest_validation_accuracy = 0.0\n\n# Iteration-number for last improvement to validation accuracy.\nlast_improvement = 0\n\n# Stop optimization if no improvement found in this many iterations.\nrequire_improvement = 1000", "Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. 
The progress is printed every 100 iterations where the validation accuracy is also calculated and saved to a file if it is an improvement.", "# Counter for total number of iterations performed so far.\ntotal_iterations = 0\n\ndef optimize(num_iterations):\n # Ensure we update the global variables rather than local copies.\n global total_iterations\n global best_validation_accuracy\n global last_improvement\n\n # Start-time used for printing time-usage below.\n start_time = time.time()\n\n for i in range(num_iterations):\n\n # Increase the total number of iterations performed.\n # It is easier to update it in each iteration because\n # we need this number several times in the following.\n total_iterations += 1\n\n # Get a batch of training examples.\n # x_batch now holds a batch of images and\n # y_true_batch are the true labels for those images.\n x_batch, y_true_batch = data.train.next_batch(train_batch_size)\n\n # Put the batch into a dict with the proper names\n # for placeholder variables in the TensorFlow graph.\n feed_dict_train = {x: x_batch,\n y_true: y_true_batch}\n\n # Run the optimizer using this batch of training data.\n # TensorFlow assigns the variables in feed_dict_train\n # to the placeholder variables and then runs the optimizer.\n session.run(optimizer, feed_dict=feed_dict_train)\n\n # Print status every 100 iterations and after last iteration.\n if (total_iterations % 100 == 0) or (i == (num_iterations - 1)):\n\n # Calculate the accuracy on the training-batch.\n acc_train = session.run(accuracy, feed_dict=feed_dict_train)\n\n # Calculate the accuracy on the validation-set.\n # The function returns 2 values but we only need the first.\n acc_validation, _ = validation_accuracy()\n\n # If validation accuracy is an improvement over best-known.\n if acc_validation > best_validation_accuracy:\n # Update the best-known validation accuracy.\n best_validation_accuracy = acc_validation\n \n # Set the iteration for the last improvement to current.\n last_improvement = total_iterations\n\n # Save all variables of the TensorFlow graph to file.\n saver.save(sess=session, save_path=save_path)\n\n # A string to be printed below, shows improvement found.\n improved_str = '*'\n else:\n # An empty string to be printed below.\n # Shows that no improvement was found.\n improved_str = ''\n \n # Status-message for printing.\n msg = \"Iter: {0:>6}, Train-Batch Accuracy: {1:>6.1%}, Validation Acc: {2:>6.1%} {3}\"\n\n # Print it.\n print(msg.format(i + 1, acc_train, acc_validation, improved_str))\n\n # If no improvement found in the required number of iterations.\n if total_iterations - last_improvement > require_improvement:\n print(\"No improvement found in a while, stopping optimization.\")\n\n # Break out from the for-loop.\n break\n\n # Ending time.\n end_time = time.time()\n\n # Difference between start and end-times.\n time_dif = end_time - start_time\n\n # Print the time-usage.\n print(\"Time usage: \" + str(timedelta(seconds=int(round(time_dif)))))", "Helper-function to plot example errors\nFunction for plotting examples of images from the test-set that have been mis-classified.", "def plot_example_errors(cls_pred, correct):\n # This function is called from print_test_accuracy() below.\n\n # cls_pred is an array of the predicted class-number for\n # all images in the test-set.\n\n # correct is a boolean array whether the predicted class\n # is equal to the true class for each image in the test-set.\n\n # Negate the boolean array.\n incorrect = (correct == False)\n \n # Get the images 
from the test-set that have been\n # incorrectly classified.\n images = data.test.images[incorrect]\n \n # Get the predicted classes for those images.\n cls_pred = cls_pred[incorrect]\n\n # Get the true classes for those images.\n cls_true = data.test.cls[incorrect]\n \n # Plot the first 9 images.\n plot_images(images=images[0:9],\n cls_true=cls_true[0:9],\n cls_pred=cls_pred[0:9])", "Helper-function to plot confusion matrix", "def plot_confusion_matrix(cls_pred):\n # This is called from print_test_accuracy() below.\n\n # cls_pred is an array of the predicted class-number for\n # all images in the test-set.\n\n # Get the true classifications for the test-set.\n cls_true = data.test.cls\n \n # Get the confusion matrix using sklearn.\n cm = confusion_matrix(y_true=cls_true,\n y_pred=cls_pred)\n\n # Print the confusion matrix as text.\n print(cm)\n\n # Plot the confusion matrix as an image.\n plt.matshow(cm)\n\n # Make various adjustments to the plot.\n plt.colorbar()\n tick_marks = np.arange(num_classes)\n plt.xticks(tick_marks, range(num_classes))\n plt.yticks(tick_marks, range(num_classes))\n plt.xlabel('Predicted')\n plt.ylabel('True')\n\n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "Helper-functions for calculating classifications\nThis function calculates the predicted classes of images and also returns a boolean array whether the classification of each image is correct.\nThe calculation is done in batches because it might use too much RAM otherwise. If your computer crashes then you can try and lower the batch-size.", "# Split the data-set in batches of this size to limit RAM usage.\nbatch_size = 256\n\ndef predict_cls(images, labels, cls_true):\n # Number of images.\n num_images = len(images)\n\n # Allocate an array for the predicted classes which\n # will be calculated in batches and filled into this array.\n cls_pred = np.zeros(shape=num_images, dtype=np.int)\n\n # Now calculate the predicted classes for the batches.\n # We will just iterate through all the batches.\n # There might be a more clever and Pythonic way of doing this.\n\n # The starting index for the next batch is denoted i.\n i = 0\n\n while i < num_images:\n # The ending index for the next batch is denoted j.\n j = min(i + batch_size, num_images)\n\n # Create a feed-dict with the images and labels\n # between index i and j.\n feed_dict = {x: images[i:j, :],\n y_true: labels[i:j, :]}\n\n # Calculate the predicted class using TensorFlow.\n cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)\n\n # Set the start-index for the next batch to the\n # end-index of the current batch.\n i = j\n\n # Create a boolean array whether each image is correctly classified.\n correct = (cls_true == cls_pred)\n\n return correct, cls_pred", "Calculate the predicted class for the test-set.", "def predict_cls_test():\n return predict_cls(images = data.test.images,\n labels = data.test.labels,\n cls_true = data.test.cls)", "Calculate the predicted class for the validation-set.", "def predict_cls_validation():\n return predict_cls(images = data.validation.images,\n labels = data.validation.labels,\n cls_true = data.validation.cls)", "Helper-functions for the classification accuracy\nThis function calculates the classification accuracy given a boolean array whether each image was correctly classified. E.g. 
cls_accuracy([True, True, False, False, False]) = 2/5 = 0.4", "def cls_accuracy(correct):\n # Calculate the number of correctly classified images.\n # When summing a boolean array, False means 0 and True means 1.\n correct_sum = correct.sum()\n\n # Classification accuracy is the number of correctly classified\n # images divided by the total number of images in the test-set.\n acc = float(correct_sum) / len(correct)\n\n return acc, correct_sum", "Calculate the classification accuracy on the validation-set.", "def validation_accuracy():\n # Get the array of booleans whether the classifications are correct\n # for the validation-set.\n # The function returns two values but we only need the first.\n correct, _ = predict_cls_validation()\n \n # Calculate the classification accuracy and return it.\n return cls_accuracy(correct)", "Helper-function for showing the performance\nFunction for printing the classification accuracy on the test-set.\nIt takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.", "def print_test_accuracy(show_example_errors=False,\n show_confusion_matrix=False):\n\n # For all the images in the test-set,\n # calculate the predicted classes and whether they are correct.\n correct, cls_pred = predict_cls_test()\n\n # Classification accuracy and the number of correct classifications.\n acc, num_correct = cls_accuracy(correct)\n \n # Number of images being classified.\n num_images = len(correct)\n\n # Print the accuracy.\n msg = \"Accuracy on Test-Set: {0:.1%} ({1} / {2})\"\n print(msg.format(acc, num_correct, num_images))\n\n # Plot some examples of mis-classifications, if desired.\n if show_example_errors:\n print(\"Example errors:\")\n plot_example_errors(cls_pred=cls_pred, correct=correct)\n\n # Plot the confusion matrix, if desired.\n if show_confusion_matrix:\n print(\"Confusion Matrix:\")\n plot_confusion_matrix(cls_pred=cls_pred)", "Helper-function for plotting convolutional weights", "def plot_conv_weights(weights, input_channel=0):\n # Assume weights are TensorFlow ops for 4-dim variables\n # e.g. weights_conv1 or weights_conv2.\n\n # Retrieve the values of the weight-variables from TensorFlow.\n # A feed-dict is not necessary because nothing is calculated.\n w = session.run(weights)\n\n # Print mean and standard deviation.\n print(\"Mean: {0:.5f}, Stdev: {1:.5f}\".format(w.mean(), w.std()))\n \n # Get the lowest and highest values for the weights.\n # This is used to correct the colour intensity across\n # the images so they can be compared with each other.\n w_min = np.min(w)\n w_max = np.max(w)\n\n # Number of filters used in the conv. layer.\n num_filters = w.shape[3]\n\n # Number of grids to plot.\n # Rounded-up, square-root of the number of filters.\n num_grids = math.ceil(math.sqrt(num_filters))\n \n # Create figure with a grid of sub-plots.\n fig, axes = plt.subplots(num_grids, num_grids)\n\n # Plot all the filter-weights.\n for i, ax in enumerate(axes.flat):\n # Only plot the valid filter-weights.\n if i<num_filters:\n # Get the weights for the i'th filter of the input channel.\n # The format of this 4-dim tensor is determined by the\n # TensorFlow API. 
See Tutorial #02 for more details.\n img = w[:, :, input_channel, i]\n\n # Plot image.\n ax.imshow(img, vmin=w_min, vmax=w_max,\n interpolation='nearest', cmap='seismic')\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "Performance before any optimization\nThe accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.", "print_test_accuracy()", "The convolutional weights are random, but it can be difficult to see any difference from the optimized weights that are shown below. The mean and standard deviation is shown so we can see whether there is a difference.", "plot_conv_weights(weights=weights_conv1)", "Perform 10,000 optimization iterations\nWe now perform 10,000 optimization iterations and abort the optimization if no improvement is found on the validation-set in 1000 iterations.\nAn asterisk * is shown if the classification accuracy on the validation-set is an improvement.", "optimize(num_iterations=10000)\n\nprint_test_accuracy(show_example_errors=True,\n show_confusion_matrix=True)", "The convolutional weights have now been optimized. Compare these to the random weights shown above. They appear to be almost identical. In fact, I first thought there was a bug in the program because the weights look identical before and after optimization.\nBut try and save the images and compare them side-by-side (you can just right-click the image to save it). You will notice very small differences before and after optimization.\nThe mean and standard deviation has also changed slightly, so the optimized weights must be different.", "plot_conv_weights(weights=weights_conv1)", "Initialize Variables Again\nRe-initialize all the variables of the neural network with random values.", "init_variables()", "This means the neural network classifies the images completely randomly again, so the classification accuracy is very poor because it is like random guesses.", "print_test_accuracy()", "The convolutional weights should now be different from the weights shown above.", "plot_conv_weights(weights=weights_conv1)", "Restore Best Variables\nRe-load all the variables that were saved to file during optimization.", "saver.restore(sess=session, save_path=save_path)", "The classification accuracy is high again when using the variables that were previously saved.\nNote that the classification accuracy may be slightly higher or lower than that reported above, because the variables in the file were chosen to maximize the classification accuracy on the validation-set, but the optimization actually continued for another 1000 iterations after saving those variables, so we are reporting the results for two slightly different sets of variables. 
Sometimes this leads to slightly better or worse performance on the test-set.", "print_test_accuracy(show_example_errors=True,\n show_confusion_matrix=True)", "The convolutional weights should be nearly identical to those shown above, although not completely identical because the weights shown above had 1000 optimization iterations more.", "plot_conv_weights(weights=weights_conv1)", "Close TensorFlow Session\nWe are now done using TensorFlow, so we close the session to release its resources.", "# This has been commented out in case you want to modify and experiment\n# with the Notebook without having to restart it.\n# session.close()", "Conclusion\nThis tutorial showed how to save and retrieve the variables of a neural network in TensorFlow. This can be used in different ways. For example, if you want to use a neural network for recognizing images then you only have to train the network once and you can then deploy the finished network on other computers.\nAnother use of checkpoints is if you have a very large neural network and data-set, then you may want to save checkpoints at regular intervals in case the computer crashes, so you can continue the optimization at a recent checkpoint instead of having to restart the optimization from the beginning.\nThis tutorial also showed how to use the validation-set for so-called Early Stopping, where the optimization was aborted if it did not regularly improve the validation error. This is useful if the neural network starts to overfit and learn the noise of the training-set; although it was not really an issue with the convolutional network and MNIST data-set used in this tutorial.\nAn interesting observation was that the convolutional weights (or filters) changed very little from the optimization, even though the performance of the network went from random guesses to near-perfect classification. It seems strange that the random weights were almost good enough. Why do you think this happens?\nExercises\nThese are a few suggestions for exercises that may help improve your skills with TensorFlow. It is important to get hands-on experience with TensorFlow in order to learn how to use it properly.\nYou may want to backup this Notebook before making any changes.\n\nOptimization is stopped after 1000 iterations without improvement. Is this enough? Can you think of a better way to do Early Stopping? Try and implement it.\nIf the checkpoint file already exists then load it instead of doing the optimization.\nSave a new checkpoint for every 100 optimization iterations. Retrieve the latest using saver.latest_checkpoint(). Why would you want to save multiple checkpionts instead of just the most recent?\nTry and change the neural network, e.g. by adding another layer. What happens when you reload the variables from a different network?\nPlot the weights for the 2nd convolutional layer before and after optimization using the function plot_conv_weights(). 
Are they almost identical as well?\nWhy do you think the optimized convolutional weights are almost the same as the random initialization?\nRemake the program yourself without looking too much at this source-code.\nExplain to a friend how the program works.\n\nLicense (MIT)\nCopyright (c) 2016 by Magnus Erik Hvass Pedersen\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
codehacken/CodingProwess
Python/leetcode/easy/strings-arrays.ipynb
mit
[ "String Problems\nhttps://leetcode.com/explore/challenge/card/30-day-leetcoding-challenge/528/week-1/3283/\nQ.1. Given a non-empty array of integers, every element appears twice except for one. Find that single one.", "from typing import List\n\n\"\"\"\nFor O(1) space complexity use math operation or XOR.\na^a = 0\na^0 = a\na^b^c = a^a^b = 0^b = b\n\"\"\"\nclass Solution(object):\n def singleNumber(self, nums: List[int]) -> int:\n \"\"\"\n :type nums: List[int]\n :rtype: int\n \"\"\"\n idx = {}\n for i in range(len(nums)):\n if nums[i] not in idx:\n idx[nums[i]] = 1\n else:\n idx[nums[i]] += 1\n \n for k in idx.keys():\n if idx[k] == 1:\n return k\n\nprint(Solution().singleNumber([4,1,2,1,2]))", "Q.2. Write an algorithm to determine if a number n is \"happy\".\nA happy number is a number defined by the following process: Starting with any positive integer, replace the number by the sum of the squares of its digits, and repeat the process until the number equals 1 (where it will stay), or it loops endlessly in a cycle which does not include 1. Those numbers for which this process ends in 1 are happy numbers.\nReturn True if n is a happy number, and False if not.", "class Solution(object):\n def ifHappy(self, n: int) -> bool:\n \"\"\"\n :type n: int\n :rtype: bool\n \"\"\"\n l = 0\n while (n != 1):\n add = 0\n for i in str(n):\n add += int(i) ** 2\n n = add\n l += 1\n if l > 100:\n return False\n return True\n\nprint(Solution().ifHappy(19))", "Q.3. Given an integer array nums, find the contiguous subarray (containing at least one number) which has the largest sum and return its sum.", "class Solution(object):\n def maxSubArray(self, nums: List[int]) -> int:\n \"\"\"\n :type nums: List[int]\n :rtype int\n \"\"\"\n # Special case is when all values in num are negative.\n if max(nums) < 0:\n return max(nums)\n \n max_sum = 0; curr = 0\n for i in range(len(nums)): \n if curr + nums[i] > 0:\n curr = curr + nums[i]\n else:\n curr = 0 # Reset the sum.\n \n if curr > max_sum:\n max_sum = curr\n \n return max_sum\n \n\nprint(Solution().maxSubArray([-2,1,-3,4,-1,2,1,-5,4]))", "Q.4. Given an array nums, write a function to move all 0's to the end of it while maintaining the relative order of the non-zero elements.", "class Solution(object):\n def moveZeroes(self, nums: List[int]) -> None:\n \"\"\"\n :type nums: List[int]\n :rtype None\n Perform inplace ordering.\n \n Method: Apply a form of insert sort that moves each non-negative value\n to its right place in the list.\n \"\"\"\n for i in range(len(nums)):\n if nums[i] != 0:\n j = i\n while j > 0 and nums[j - 1] == 0:\n nums[j], nums[j-1] = nums[j-1], nums[j]\n j -= 1\n\nnums = [0,1,0,3,12]\nSolution().moveZeroes(nums)\nprint(nums)", "Q.5. Say you have an array prices for which the ith element is the price of a given stock on day i. Design an algorithm to find the maximum profit. You may complete as many transactions as you like (i.e., buy one and sell one share of the stock multiple times).\nNote: You may not engage in multiple transactions at the same time (i.e., you must sell the stock before you buy again).", "class Solution(object):\n def maxProfit(self, prices: List[int]) -> int:\n \"\"\"\n :type prices: List[int]\n :rtype: int\n \n Maximum Profit is the cumulation of all positive differences.\n \"\"\"\n profit = 0\n for i in range(1, len(prices)):\n diff = prices[i] - prices[i-1]\n if diff > 0:\n profit += diff\n \n return profit\n\nprint(Solution().maxProfit([7,6,4,3,1]))", "Q.6. 
Given an array of strings, group anagrams together.", "class Solution(object):\n def groupAnagrams(self, strs: List[str]) -> List[List[str]]:\n \"\"\"\n :type strs: List[str]\n :rtype: List[List[str]]\n \n Method: Build a dictionary of words creating a bag of characters representation.\n Generate a has for that representation and add words with a similar hash.\n \"\"\"\n words = {}\n \n # Build a dictionary of words.\n for word in strs:\n boc_vec = [0 for i in range(26)]\n for char in word:\n boc_vec[ord(char) - 97] += 1\n\n # Check if the representation if present in the dict.\n hval = hash(tuple(boc_vec))\n if hval not in words:\n words[hval] = [word]\n else:\n words[hval].append(word)\n \n # Once, the dictionary is built, generate list.\n fin = []\n for key in words.keys():\n fin.append(words[key])\n \n return fin\n \n\nprint(Solution().groupAnagrams([\"eat\", \"tea\", \"tan\", \"ate\", \"nat\", \"bat\"]))", "Q.7. Given an integer array arr, count how many elements x there are, such that x + 1 is also in arr. If there're duplicates in arr, count them seperately.", "class Solution(object):\n def countElements(self, arr: List[int]) -> int:\n \"\"\"\n :type arr: List[int]\n :rtype: int\n \n Method: Build a dictionary of all numbers in the list and then separately\n verify if (n+1) number exists in the dictionary for every n.\n \"\"\"\n nums = {}\n for n in arr:\n if n not in nums:\n nums[n] = 1\n \n cnt = 0\n for n in arr:\n if n+1 in nums:\n cnt += 1\n \n return cnt\n\nprint(Solution().countElements([1,3,2,3,5,0]))\n\n if root.left is None and root.right is None:\n return 0\n \n def get_longest_path(root):\n if root.left is None and root.right is None:\n return 0\n elif root.left is None:\n return 1 + get_longest_path(root.right)\n elif root.right is None:\n return 1 + get_longest_path(root.left)\n else:\n return max(1 + get_longest_path(root.left), 1 + get_longest_path(root.right))\n \n return get_longest_path(root.left) + get_longest_path(root.right)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
marcinofulus/teaching
ML_SS2017/zajecia_MJ_21.4.2017.ipynb
gpl-3.0
[ "%matplotlib inline\nimport os\nimport tensorflow as tf\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom IPython.display import clear_output\n\ndane = \"/DATA/shared/datasets/cifar10/cifar10_test.tfrecord\"\ndane_train = \"/DATA/shared/datasets/cifar10/cifar10_train.tfrecord\"", "Input pipeline\nZastosowano tu następującą strategie:\n\ndo trenowania używamy kolejki zarządzanej przez tensorflow\nto testowania, pobieramy z kolejki dane w postaci tablic numpy i przekazujemy je do tensorflow z użyciem feed_dict\n\nnet", "def read_data(filename_queue):\n reader = tf.TFRecordReader()\n _, se = reader.read(filename_queue)\n f = tf.parse_single_example(se,features={'image/encoded':tf.FixedLenFeature([],tf.string),\n 'image/class/label':tf.FixedLenFeature([],tf.int64),\n 'image/height':tf.FixedLenFeature([],tf.int64),\n 'image/width':tf.FixedLenFeature([],tf.int64)})\n image = tf.image.decode_png(f['image/encoded'],channels=3)\n image.set_shape( (32,32,3) ) \n return image,f['image/class/label']\n\ntf.reset_default_graph()\n\nfq = tf.train.string_input_producer([dane_train])\nimage_data, label = read_data(filename_queue=fq)\n\nbatch_size = 128\nimages, sparse_labels = tf.train.shuffle_batch( [image_data,label],batch_size=batch_size,\n num_threads=2,\n capacity=1000+3*batch_size,\n min_after_dequeue=1000\n )\nimages = (tf.cast(images,tf.float32)-128.0)/33.0", "test queue", "fq_test = tf.train.string_input_producer([dane])\ntest_image_data, test_label = read_data(filename_queue=fq_test)\nbatch_size = 128\ntest_images, test_sparse_labels = tf.train.batch( [test_image_data,test_label],batch_size=batch_size,\n num_threads=2,\n capacity=1000+3*batch_size,\n )\ntest_images = (tf.cast(test_images,tf.float32)-128.0)/33.0\n\nnet = tf.contrib.layers.conv2d( images, 32, 3, padding='VALID')\nnet = tf.contrib.layers.max_pool2d( net, 2, 2, padding='VALID')\n\nnet = tf.contrib.layers.conv2d( net, 32, 3, padding='VALID')\nnet = tf.contrib.layers.max_pool2d( net, 2, 2, padding='VALID')\n\nnet = tf.contrib.layers.conv2d( net, 32, 3, padding='VALID')\nnet = tf.contrib.layers.max_pool2d( net, 2, 2, padding='VALID')\n\nnet = tf.contrib.layers.fully_connected(tf.reshape(net,[-1,2*2*32]), 32)\nnet = tf.contrib.layers.fully_connected(net, 10, activation_fn=None)\nlogits = net\n\nxent = tf.losses.sparse_softmax_cross_entropy(sparse_labels,net)\nloss = tf.reduce_mean( xent)\n\nopt = tf.train.GradientDescentOptimizer(learning_rate=0.01)\ntrain_op = opt.minimize(loss)\n\nconfig = tf.ConfigProto()\nconfig.gpu_options.allow_growth = True\n\nsess = tf.InteractiveSession(config=config)\ntf.global_variables_initializer().run()\ncoord = tf.train.Coordinator()\nthreads = tf.train.start_queue_runners(sess=sess,coord=coord)\n\n!ls cifar_convet.ckpt*\n\nglobal_step = 0\n\nif global_step>0:\n saver=tf.train.Saver()\n saver.restore(sess,'cifar_convet.ckpt-%d'%global_step)\n\n\n%%time\nlvals = []\n\n\nfor i in range(global_step,global_step+200000):\n l, _ = sess.run([loss,train_op])\n if i%10==0: \n clear_output(wait=True)\n print(l,i+1)\n if i%100==0:\n Images,Labels = sess.run([test_images,test_sparse_labels])\n predicted = np.argmax(sess.run(logits,feed_dict={images: Images}),axis=1)\n r_test = np.sum(predicted==Labels)/Labels.size\n \n Images,Labels = sess.run([images,sparse_labels])\n predicted = np.argmax(sess.run(logits,feed_dict={images: Images}),axis=1)\n r = np.sum(predicted==Labels)/Labels.size\n lvals.append([i,l,r,r_test])\n\nglobal_step = i+1\n\nglobal_step\n\nlvals = 
np.array(lvals)\nplt.plot(lvals[:,0],lvals[:,1])\n\nplt.plot(lvals[:,0],lvals[:,3])\nplt.plot(lvals[:,0],lvals[:,2])\n\nsaver.restore(sess,'cifar_convet.ckpt')\n\nsess.run(test_sparse_labels).shape\n\nsess.run(test_sparse_labels).shape\n\nlabel2txt = [\"airplane\",\n\"automobile\",\n\"bird\",\n\"cat\",\n\"deer\",\n\"dog\",\n\"frog\",\n\"horse\",\n\"ship\",\n\"truck\" ]\n", "Testing\nMożemy wykorzystać feed_dict by wykonać graf operacji na danych testowych.", "Images,Labels = sess.run([test_images,test_sparse_labels])\n\npredicted = np.argmax(sess.run(logits,feed_dict={images: Images}),axis=1)\nnp.sum(predicted==Labels)/Labels.size\n\nfor ith in range(5):\n    print (label2txt[Labels[ith]],(label2txt[predicted[ith]]))\n    plt.imshow((Images[ith]*33+128).astype(np.uint8))\n    plt.show()\n\n%%time\nl_lst =[]\nfor i in range(1):\n    Images,Labels = sess.run([test_images,test_sparse_labels])\n    predicted = np.argmax(sess.run(logits,feed_dict={images: Images}),axis=1)\n    rlst = np.sum(predicted==Labels)/Labels.size\nprint(rlst)\n\nsaver = tf.train.Saver()\n\nsaver.save(sess,'cifar_convet.ckpt',global_step=global_step)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
PreetinderKalsi/Investigate-A-Dataset
Preetinder Kalsi - Udacity Project 4 - Investigate a Dataset.ipynb
gpl-3.0
[ "No Show Appointments Analysis\n<a id='intro'></a>\nIntroduction\nPurpose To perform a Data analysis on a sample Dataset of No-show Appointments\nThis Dataset contains the records of the patients with various types of diseases who booked appointments and did not showed up on their appointment Day.\nQuestions\nWhat factors made people to miss their Appointments ?\n\n\nHow many Female and male of different Age Group in the Dataset missed the Appointments ?\n\n\nDid Age, regardless of age_group and sex, determine the patients missing the Appointments ?\n\n\nDid women and children preferred to attend their appointments ?\n\n\nDid the Scholarship of the patients helped in the attendence of their appointments?\n\n\n<a id='wrangling'></a>\nData Wrangling\nData Description\n\nGender Gender\nage Age\nage_group Age Group\npeople_showed_up Patients who attended or missed their appointment (0 = Missed; 1 = Attended)\nscholarship Medical Scholarship", "# Render plots inline\n%matplotlib inline\n\n\n# Import Libraries\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n# Set style for all graphs\nsns.set(style=\"whitegrid\")\n\n\n# Read in the Dataset, creat dataframe\nappointment_data = pd.read_csv('noshow.csv')\n\n# Print the first few records to review data and format\nappointment_data.head()\n\n# Print the Last few records to review data and format\nappointment_data.tail()", "Note PatientId column have exponential values in it.\nNote No-show displays No if the patient visited and Yes if the Patient did not visited.\nData Cleanup\nFrom the data description and questions to answer, I've determined that some of the dataset columns are not necessary for the analysis process and will therefore be removed. This will help to process the Data Analysis Faster.\n\nPatientId\nScheduledDay\nSms_received\nAppointmentID\nAppointmentDay\n\ni'll take a 3 step approach to data cleanup\n\nIdentify and remove duplicate entries\nRemove unnecessary columns\nFix missing and data format issues\n\nStep 1 - Removing Duplicate entries\nConcluded that no duplicates entries exists, based on the tests below", "# Identify and remove duplicate entries\nappointment_data_duplicates = appointment_data.duplicated()\nprint 'Number of duplicate entries is/are {}'.format(appointment_data_duplicates.sum())\n\n# Lets make sure that this is working\nduplication_test = appointment_data.duplicated('Age').head()\nprint 'Number of entries with duplicate age in top entries are {}'.format(duplication_test.sum())\nappointment_data.head()", "Step 2 - Remove unnecessary columns\nColumns(PatientId, ScheduledDay, Sms_received, AppointmentID, AppointmentDay) removed", "# Create new dataset without unwanted columns\nclean_appointment_data = appointment_data.drop(['PatientId','ScheduledDay','SMS_received','AppointmentID','AppointmentDay'], axis=1)\nclean_appointment_data.head()", "Step 3 - Fix any missing or data format issues\nConcluded that there is no missing data", "# Calculate number of missing values\nclean_appointment_data.isnull().sum()\n\n# Taking a look at the datatypes\nclean_appointment_data.info()", "Data Exploration And Visualization", "# Looking at some typical descriptive statistics\nclean_appointment_data.describe()\n\n# Age minimum at -1.0 looks a bit weird so give a closer look\nclean_appointment_data[clean_appointment_data['Age'] == -1]\n\n# Fixing the negative value and creating a new column named Fixed_Age.\nclean_appointment_data['Fixed_Age'] = 
clean_appointment_data['Age'].abs()\n\n# Checking whether the negative value is still there or is it removed and changed into a positive value.\nclean_appointment_data[clean_appointment_data['Fixed_Age'] == -1]", "The Fixed_Age column is created in order to replace the negative value available in the Age column. The newly created Fixed_Age column will help in the proper calculation in the further questions and the results will be perfect and clear too. The negative value (-1) is changed in to a positive value (1) by using the .abd() function.", "# Create AgeGroups for further Analysis\n'''bins = [0, 25, 50, 75, 100, 120]\ngroup_names = ['0-25', '25-50', '50-75', '75-100', '100-120']\nclean_appointment_data['age-group'] = pd.cut(clean_appointment_data['Fixed_Age'], bins, labels=group_names)\nclean_appointment_data.head()'''\nclean_appointment_data['Age_rounded'] = np.round(clean_appointment_data['Fixed_Age'], -1)\n\n\ncategories_dict = {0: '0-5',\n 10: '5-15',\n 20: '15-25',\n 30 : '25-35',\n 40 : '35-45',\n 50 : '45-55',\n 60: '55-65',\n 70 : '65-75',\n 80 : '75-85',\n 90: '85-95',\n 100: '95-105',\n 120: '105-115'}\n\nclean_appointment_data['age_group'] = clean_appointment_data['Age_rounded'].map(categories_dict)\n\nclean_appointment_data['age_group']", "Creation and Addition of Age_Group in the data set will help in the Q1 - How many Female and male of different Age Group in the Dataset missed the Appointments ?", "# Simplifying the analysis by Fixing Yes and No issue in the No-show \n# The issue is that in the No-show No means that the person visited at the time of their appointment and Yes means that they did not visited.\n# First I will change Yes to 0 and No to 1 so that there is no confusion\nclean_appointment_data['people_showed_up'] = clean_appointment_data['No-show'].replace(['Yes', 'No'], [0, 1])\nclean_appointment_data\n\n# Taking a look at the age of people who showed up and those who missed the appointment\nyoungest_to_showup = clean_appointment_data[clean_appointment_data['people_showed_up'] == True]['Fixed_Age'].min()\nyoungest_to_miss = clean_appointment_data[clean_appointment_data['people_showed_up'] == False]['Fixed_Age'].min()\noldest_to_showup = clean_appointment_data[clean_appointment_data['people_showed_up'] == True]['Fixed_Age'].max()\noldest_to_miss = clean_appointment_data[clean_appointment_data['people_showed_up'] == False]['Fixed_Age'].max()\n\nprint 'Youngest to Show up: {} \\nYoungest to Miss: {} \\nOldest to Show Up: {} \\nOldest to Miss: {}'.format(\nyoungest_to_showup, youngest_to_miss, oldest_to_showup, oldest_to_miss)", "Question 1\nHow many Female and male of different Age Group in the Dataset missed the Appointments ?", "# Returns the percentage of male and female who visited the \n# hospital on their appointment day with their Age\ndef people_visited(age_group, gender):\n grouped_by_total = clean_appointment_data.groupby(['age_group', 'Gender']).size()[age_group,gender].astype('float')\n grouped_by_visiting_gender = \\\n clean_appointment_data.groupby(['age_group', 'people_showed_up', 'Gender']).size()[age_group,1,gender].astype('float')\n visited_gender_pct = (grouped_by_visiting_gender / grouped_by_total * 100).round(2)\n \n return visited_gender_pct\n\n# Get the actual numbers grouped by Age, No-show, Gender\ngroupedby_visitors = clean_appointment_data.groupby(['age_group','people_showed_up','Gender']).size()\n\n# Print - Grouped by Age Group, Patients showing up on thier appointments and Gender\nprint groupedby_visitors\nprint '0-5 - Female 
Appointment Attendence: {}%'.format(people_visited('0-5','F'))\nprint '0-5 - Male Appointment Attendence: {}%'.format(people_visited('0-5','M'))\nprint '5-15 - Female Appointment Attendence: {}%'.format(people_visited('5-15','F'))\nprint '5-15 - Male Appointment Attendence: {}%'.format(people_visited('5-15','M'))\nprint '15-25 - Female Appointment Attendence: {}%'.format(people_visited('15-25','F'))\nprint '15-25 - Male Appointment Attendence: {}%'.format(people_visited('15-25','M'))\nprint '25-35 - Female Appointment Attendence: {}%'.format(people_visited('25-35','F'))\nprint '25-35 - Male Appointment Attendence: {}%'.format(people_visited('25-35','M'))\nprint '35-45 - Female Appointment Attendence: {}%'.format(people_visited('35-45','F'))\nprint '35-45 - Male Appointment Attendence: {}%'.format(people_visited('35-45','M'))\nprint '45-55 - Female Appointment Attendence: {}%'.format(people_visited('45-55','F'))\nprint '45-55 - Male Appointment Attendence: {}%'.format(people_visited('45-55','M'))\nprint '55-65 - Female Appointment Attendence: {}%'.format(people_visited('55-65','F'))\nprint '55-65 - Male Appointment Attendence: {}%'.format(people_visited('55-65','M'))\nprint '65-75 - Female Appointment Attendence: {}%'.format(people_visited('65-75','F'))\nprint '65-75 - Male Appointment Attendence: {}%'.format(people_visited('65-75','M'))\nprint '75-85 - Female Appointment Attendence: {}%'.format(people_visited('75-85','F'))\nprint '75-85 - Male Appointment Attendence: {}%'.format(people_visited('75-85','M'))\nprint '85-95 - Female Appointment Attendence: {}%'.format(people_visited('85-95','F'))\nprint '85-95 - Male Appointment Attendence: {}%'.format(people_visited('85-95','M'))\nprint '95-105 - Female Appointment Attendence: {}%'.format(people_visited('95-105','F'))\nprint '95-105 - Male Appointment Attendence: {}%'.format(people_visited('95-105','M'))\nprint '105-115 - Female Appointment Attendence: {}%'.format(people_visited('105-115','F'))\n\n\n# Graph - Grouped by class, survival and sex\ng = sns.factorplot(x=\"Gender\", y=\"people_showed_up\", col=\"age_group\", data=clean_appointment_data, \n saturation=4, kind=\"bar\", ci=None, size=12, aspect=.35)\n\n# Fix up the labels\n(g.set_axis_labels('', 'People Visited')\n .set_xticklabels([\"Men\", \"Women\"], fontsize = 30)\n .set_titles(\"Age Group {col_name}\")\n .set(ylim=(0, 1))\n .despine(left=True, bottom=True))\n\n", "The graph above shows the number of people who attended their appointment and those who did not attended their appointments acccording to the Gender of the people having the appointment in the hospital.\n- According to the graph above, women are more concious about their health regardless of the age group.", "# Graph - Actual count of passengers by survival, group and sex\ng = sns.factorplot('people_showed_up', col='Gender', hue='age_group', data=clean_appointment_data, kind='count', size=15, aspect=.6)\n\n# Fix up the labels\n(g.set_axis_labels('People Who Attended', 'No. of Appointment')\n .set_xticklabels([\"False\", \"True\"], fontsize=20)\n .set_titles('{col_name}')\n)\n\ntitles = ['Men', 'Women']\nfor ax, title in zip(g.axes.flat, titles):\n ax.set_title(title)\n", "The graph above shows the number of people who attended their appointment and those who did not attended their appointments. 
\n- False denotes that the people did not attended the appointments.\n- True denotes that the people did attended the appointments.\nThe graphs is categorized according to the Age Group.\nBased on the raw numbers it would appear that the age group of 65-75 is the most health cautious Age Group because they have the highest percentage of Appointment attendence followed by the Age Group of 55-65 which is just about 1% less than the 65-75 Age Group in the Appointment Attendence.\nThe Age group with the least percentage of Appointment Attendence is 15-25.\nNote 105-115 is not the least percentage age group because the number of patients in that Age group are too low. So, the comparision is not possible.\nQuestion 2\nDid Age, regardless of Gender, determine the patients missing the Appointments ?", "# Find the total number of people who showed up and those who missed their appointments\nnumber_showed_up = clean_appointment_data[clean_appointment_data['people_showed_up'] == True]['people_showed_up'].count()\nnumber_missed = clean_appointment_data[clean_appointment_data['people_showed_up'] == False]['people_showed_up'].count()\n\n# Find the average number of people who showed up and those who missed their appointments\nmean_age_showed_up = clean_appointment_data[clean_appointment_data['people_showed_up'] == True]['Age'].mean()\nmean_age_missed = clean_appointment_data[clean_appointment_data['people_showed_up'] == False]['Age'].mean() \n \n# Displaying a few Totals\nprint 'Total number of People Who Showed Up {} \\n\\\nTotal number of People who missed the appointment {} \\n\\\nMean age of people who Showed up {} \\n\\\nMean age of people who missed the appointment {} \\n\\\nOldest to show up {} \\n\\\nOldest to miss the appointment {}' \\\n.format(number_showed_up, number_missed, np.round(mean_age_showed_up), \n np.round(mean_age_missed), oldest_to_showup, oldest_to_miss)\n\n# Graph of age of passengers across sex of those who survived\ng = sns.factorplot(x=\"people_showed_up\", y=\"Fixed_Age\", hue='Gender', data=clean_appointment_data, kind=\"box\", size=7, aspect=.8)\n\n# Fixing the labels\n(g.set_axis_labels('Appointment Attendence', 'Age of Patients')\n .set_xticklabels([\"False\", \"True\"])\n)", "Based on the boxplot and the calculated data above, it would appear that:\n\nRegardless of the Gender, age was not a deciding factor in the appointment attendence rate of the Patients\nThe number of female who attended the appointment as well as who missed the appointment is more than the number of male\n\nQuestion 3\nDid women and children preferred to attend their appointments ?\nAssumption: With 'child' not classified in the data, I'll need to assume a cutoff point. 
Therefore, I'll be using today's standard of under 18 as those to be considered as a child vs adult.", "# Create Category and Categorize people\nclean_appointment_data.loc[\n ((clean_appointment_data['Gender'] == 'F') &\n (clean_appointment_data['Age'] >= 18)),\n 'Category'] = 'Woman'\n\nclean_appointment_data.loc[\n ((clean_appointment_data['Gender'] == 'M') &\n (clean_appointment_data['Age'] >= 18)),\n 'Category'] = 'Man'\n\nclean_appointment_data.loc[\n (clean_appointment_data['Age'] < 18),\n 'Category'] = 'Child'\n\n# Get the totals grouped by Men, Women and Children\nprint clean_appointment_data.groupby(['Category', 'people_showed_up']).size()\n\n# Graph - Comapre the number of Men, Women and Children who showed up on their appointments\ng = sns.factorplot('people_showed_up', col='Category', data=clean_appointment_data, kind='count', size=7, aspect=0.8)\n\n# Fix up the labels\n(g.set_axis_labels('Appointment Attendence', 'No. of Patients')\n .set_xticklabels(['False', 'True'])\n)\n\ntitles = ['Women', 'Men', 'Children']\nfor ax, title in zip(g.axes.flat, titles):\n ax.set_title(title)", "Based on the calculated data and the Graphs, it would appear that:\n- The appointment attendence of the women is significantly higher than that of the men and children\n- The number of Men and children who attended the appointment is almost the same, the difference between the number of men and children is about :- 967\nQuestion 4\nDid the Scholarship of the patients helped in the attendence of their appointments?", "# Determine the number of Man, Woman and Children who had scholarship\nman_with_scholarship = clean_appointment_data.loc[\n (clean_appointment_data['Category'] == 'Man') &\n (clean_appointment_data['Scholarship'] == 1)]\n\nman_without_scholarship = clean_appointment_data.loc[\n (clean_appointment_data['Category'] == 'Man') &\n (clean_appointment_data['Scholarship'] == 0)]\n\nwoman_with_scholarship = clean_appointment_data.loc[\n (clean_appointment_data['Category'] == 'Woman') &\n (clean_appointment_data['Scholarship'] == 1)]\n\nwoman_without_scholarship = clean_appointment_data.loc[\n (clean_appointment_data['Category'] == 'Woman') &\n (clean_appointment_data['Scholarship'] == 0)]\n\nchildren_with_scholarship = clean_appointment_data.loc[\n (clean_appointment_data['Category'] == 'Child') &\n (clean_appointment_data['Scholarship'] == 1)]\n\nchildren_without_scholarship = clean_appointment_data.loc[\n (clean_appointment_data['Category'] == 'Child') &\n (clean_appointment_data['Scholarship'] == 0)]\n\n# Graph - Compare how many man, woman and children with or without scholarship attended thier apoointments\ng = sns.factorplot('Scholarship', col='Category', data=clean_appointment_data, kind='count', size=8, aspect=0.3)\n\n# Fix up the labels\n(g.set_axis_labels('Scholarship', 'No of Patients')\n .set_xticklabels(['Missed', 'Attended'])\n)\n\ntitles = ['Women', 'Men', 'Children']\nfor ax, title in zip(g.axes.flat, titles):\n ax.set_title(title)", "According to the Bar graph above :- \n- The number of people with scholarship did not affected the number of people visiting the hospital on their appointment. \n- Women with Scholarship attended the appointments the most followed by Children. 
Men visited the hospital on their appointments the least.\nThe conclusion is that the Scholarship did not encouraged the number of people attending their appointments regardless of their age or gender.", "# Determine the Total Number of Men, Women and Children with Scholarship\ntotal_male_with_scholarship = clean_appointment_data.loc[\n (clean_appointment_data['Category'] == 'Man') &\n (clean_appointment_data['Scholarship'] < 2)]\n\ntotal_female_with_scholarship = clean_appointment_data.loc[\n (clean_appointment_data['Category'] == 'Woman') &\n (clean_appointment_data['Scholarship'] < 2)]\n\ntotal_child_with_scholarship = clean_appointment_data.loc[\n (clean_appointment_data['Category'] == 'Child') &\n (clean_appointment_data['Scholarship'] < 2)]\n\ntotal_man_with_scholarship = total_male_with_scholarship.Scholarship.count()\ntotal_woman_with_scholarship = total_female_with_scholarship.Scholarship.count()\ntotal_children_with_scholarship = total_child_with_scholarship.Scholarship.count()\n\n# Determine the number of Men, Women and Children with scholarship who Attended the Appointments\nman_with_scholarship_attendence = man_with_scholarship.Scholarship.count()\nwoman_with_scholarship_attendence = woman_with_scholarship.Scholarship.sum()\nchildren_with_scholarship_attendence = children_with_scholarship.Scholarship.sum()\n\n# Determine the Percentage of Men, Women and Children with Scholarship who Attended or Missed the Appointments\npct_man_with_scholarship_attendence = ((float(man_with_scholarship_attendence)/total_man_with_scholarship)*100)\npct_man_with_scholarship_attendence = np.round(pct_man_with_scholarship_attendence,2)\n\npct_woman_with_scholarship_attendence = ((float(woman_with_scholarship_attendence)/total_woman_with_scholarship)*100)\npct_woman_with_scholarship_attendence = np.round(pct_woman_with_scholarship_attendence,2)\n\npct_children_with_scholarship_attendence = ((float(children_with_scholarship_attendence)/total_children_with_scholarship)*100)\npct_children_with_scholarship_attendence = np.round(pct_children_with_scholarship_attendence,2)\n\n# Determine the Average Age of Men, Women and Children with Scholarship who Attended or Missed the Appointments\nman_with_scholarship_avg_age = np.round(man_with_scholarship.Age.mean())\nwoman_with_scholarship_avg_age = np.round(woman_with_scholarship.Age.mean())\nchildren_with_scholarship_avg_age = np.round(children_with_scholarship.Age.mean())\n\n# Display Results\nprint '1. Total number of Men with Scholarship: {}\\n\\\n2. Total number of Women with Scholarship: {}\\n\\\n3. Total number of Children with Scholarship: {}\\n\\\n4. Men with Scholarship who attended the Appointment: {}\\n\\\n5. Women with Scholarship who attended the Appointment: {}\\n\\\n6. Children with Scholarship who attended the Appointment: {}\\n\\\n7. Men with Scholarship who missed the Appointment: {}\\n\\\n8. Women with Scholarship who missed the Appointment: {}\\n\\\n9. Children with Scholarship who missed the Appointment: {}\\n\\\n10. Percentage of Men with Scholarship who attended the Appointment: {}%\\n\\\n11. Percentage of Women with Scholarship who attended the Appointment: {}%\\n\\\n12. Percentage of Children with Scholarship who attended the Appointment: {}%\\n\\\n13. Average Age of Men with Scholarship who attended the Appointment: {}\\n\\\n14. Average Age of Women with Scholarship who attended the Appointment: {}\\n\\\n15. 
Average Age of Children with Scholarship who attended the Appointment: {}'\\\n.format(total_man_with_scholarship, total_woman_with_scholarship, total_children_with_scholarship,\n man_with_scholarship_attendence, woman_with_scholarship_attendence, children_with_scholarship_attendence,\n total_man_with_scholarship-man_with_scholarship_attendence, total_woman_with_scholarship-woman_with_scholarship_attendence,\n total_children_with_scholarship-children_with_scholarship_attendence,\n pct_man_with_scholarship_attendence, pct_woman_with_scholarship_attendence, pct_children_with_scholarship_attendence,\n man_with_scholarship_avg_age, woman_with_scholarship_avg_age, children_with_scholarship_avg_age)", "Based on the Data Analysis above , it would appear that the percentage of women with scholarship i.e 12.31% is the highest among the percentage of men and children i.e 2.64%, 11.18% repectively. \nThe differnce between the women with schorlarship attending the appointments is very high about 9.67% \nHowever, in the average age the men have the highest age in i.e 43 Years whereas the average age of women and children attending the appointment is 39 Years and 9 Years repectively.\nConclusion\nThe results of the data analysis, would appear that Female are more health cautious whereas the health of Men and Children is neglected as they may not be taking their health seriously. Age did not seem to be a major factor. While Men neglected their health the most by skipping thier appointment dates with the Hospital.\nTherefore, The Gender is the most important factor in this Data Anaylsis.\nHowever, there were many values that were not included in the Dataset and which could helped the Data Analysis to be better. The Dataset could have included the Distance of the patient's Neighbourhood from the Hospital. It could have helped to analyse the data from the neighbourhood point of view also i.e the distance a patient have to travel in order to attend their appointments.\nThe Age values included a negative value which created problem in Analysing the Dataset so the negative value was changed into positive value by using the abs() function in order to Analyse the Data efficiently.\nResources\nN/A" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
anukarsh1/deep-learning-coursera
Improving Deep Neural networks- Hyperparameter Tuning - Regularization and Optimization/Gradient Checking.ipynb
mit
[ "Gradient Checking\nWelcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking. \nYou are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker. \nBut backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, \"Give me a proof that your backpropagation is actually working!\" To give this reassurance, you are going to use \"gradient checking\".\nLet's do it!", "# Packages\nimport numpy as np\nfrom testCases import *\nfrom gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector", "1) How does gradient checking work?\nBackpropagation computes the gradients $\\frac{\\partial J}{\\partial \\theta}$, where $\\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.\nBecause forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\\frac{\\partial J}{\\partial \\theta}$. \nLet's look back at the definition of a derivative (or gradient):\n$$ \\frac{\\partial J}{\\partial \\theta} = \\lim_{\\varepsilon \\to 0} \\frac{J(\\theta + \\varepsilon) - J(\\theta - \\varepsilon)}{2 \\varepsilon} \\tag{1}$$\nIf you're not familiar with the \"$\\displaystyle \\lim_{\\varepsilon \\to 0}$\" notation, it's just a way of saying \"when $\\varepsilon$ is really really small.\"\nWe know the following:\n\n$\\frac{\\partial J}{\\partial \\theta}$ is what you want to make sure you're computing correctly. \nYou can compute $J(\\theta + \\varepsilon)$ and $J(\\theta - \\varepsilon)$ (in the case that $\\theta$ is a real number), since you're confident your implementation for $J$ is correct. \n\nLets use equation (1) and a small value for $\\varepsilon$ to convince your CEO that your code for computing $\\frac{\\partial J}{\\partial \\theta}$ is correct!\n2) 1-dimensional gradient checking\nConsider a 1D linear function $J(\\theta) = \\theta x$. The model contains only a single real-valued parameter $\\theta$, and takes $x$ as input.\nYou will implement code to compute $J(.)$ and its derivative $\\frac{\\partial J}{\\partial \\theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct. \n<img src=\"images/1Dgrad_kiank.png\" style=\"width:600px;height:250px;\">\n<caption><center> <u> Figure 1 </u>: 1D linear model<br> </center></caption>\nThe diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ (\"forward propagation\"). Then compute the derivative $\\frac{\\partial J}{\\partial \\theta}$ (\"backward propagation\"). \nExercise: implement \"forward propagation\" and \"backward propagation\" for this simple function. 
I.e., compute both $J(.)$ (\"forward propagation\") and its derivative with respect to $\\theta$ (\"backward propagation\"), in two separate functions.", "# GRADED FUNCTION: forward_propagation\n\ndef forward_propagation(x, theta):\n \"\"\"\n Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)\n \n Arguments:\n x -- a real-valued input\n theta -- our parameter, a real number as well\n \n Returns:\n J -- the value of function J, computed using the formula J(theta) = theta * x\n \"\"\"\n \n ### START CODE HERE ### (approx. 1 line)\n J = np.dot(theta, x)\n ### END CODE HERE ###\n \n return J\n\nx, theta = 2, 4\nJ = forward_propagation(x, theta)\nprint (\"J = \" + str(J))", "Expected Output:\n<table style=>\n <tr>\n <td> ** J ** </td>\n <td> 8</td>\n </tr>\n</table>\n\nExercise: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\\theta) = \\theta x$ with respect to $\\theta$. To save you from doing the calculus, you should get $dtheta = \\frac { \\partial J }{ \\partial \\theta} = x$.", "# GRADED FUNCTION: backward_propagation\n\ndef backward_propagation(x, theta):\n \"\"\"\n Computes the derivative of J with respect to theta (see Figure 1).\n \n Arguments:\n x -- a real-valued input\n theta -- our parameter, a real number as well\n \n Returns:\n dtheta -- the gradient of the cost with respect to theta\n \"\"\"\n \n ### START CODE HERE ### (approx. 1 line)\n dtheta = x\n ### END CODE HERE ###\n \n return dtheta\n\nx, theta = 2, 4\ndtheta = backward_propagation(x, theta)\nprint (\"dtheta = \" + str(dtheta))", "Expected Output:\n<table>\n <tr>\n <td> ** dtheta ** </td>\n <td> 2 </td>\n </tr>\n</table>\n\nExercise: To show that the backward_propagation() function is correctly computing the gradient $\\frac{\\partial J}{\\partial \\theta}$, let's implement gradient checking.\nInstructions:\n- First compute \"gradapprox\" using the formula above (1) and a small value of $\\varepsilon$. Here are the Steps to follow:\n 1. $\\theta^{+} = \\theta + \\varepsilon$\n 2. $\\theta^{-} = \\theta - \\varepsilon$\n 3. $J^{+} = J(\\theta^{+})$\n 4. $J^{-} = J(\\theta^{-})$\n 5. $gradapprox = \\frac{J^{+} - J^{-}}{2 \\varepsilon}$\n- Then compute the gradient using backward propagation, and store the result in a variable \"grad\"\n- Finally, compute the relative difference between \"gradapprox\" and the \"grad\" using the following formula:\n$$ difference = \\frac {\\mid\\mid grad - gradapprox \\mid\\mid_2}{\\mid\\mid grad \\mid\\mid_2 + \\mid\\mid gradapprox \\mid\\mid_2} \\tag{2}$$\nYou will need 3 Steps to compute this formula:\n - 1'. compute the numerator using np.linalg.norm(...)\n - 2'. compute the denominator. You will need to call np.linalg.norm(...) twice.\n - 3'. divide them.\n- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation.", "# GRADED FUNCTION: gradient_check\n\ndef gradient_check(x, theta, epsilon=1e-7):\n \"\"\"\n Implement the backward propagation presented in Figure 1.\n \n Arguments:\n x -- a real-valued input\n theta -- our parameter, a real number as well\n epsilon -- tiny shift to the input to compute approximated gradient with formula(1)\n \n Returns:\n difference -- difference (2) between the approximated gradient and the backward propagation gradient\n \"\"\"\n \n # Compute gradapprox using left side of formula (1). 
epsilon is small enough, you don't need to worry about the limit.\n ### START CODE HERE ### (approx. 5 lines)\n thetaplus = theta + epsilon # Step 1\n thetaminus = theta - epsilon # Step 2\n J_plus = forward_propagation(x, thetaplus) # Step 3\n J_minus = forward_propagation(x, thetaminus) # Step 4\n gradapprox = (J_plus - J_minus) / (2 * epsilon) # Step 5\n ### END CODE HERE ###\n \n # Check if gradapprox is close enough to the output of backward_propagation()\n ### START CODE HERE ### (approx. 1 line)\n grad = backward_propagation(x, theta)\n ### END CODE HERE ###\n \n ### START CODE HERE ### (approx. 1 line)\n numerator = np.linalg.norm(grad - gradapprox) # Step 1'\n denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'\n difference = numerator / denominator # Step 3'\n ### END CODE HERE ###\n \n if difference < 1e-7:\n print(\"The gradient is correct!\")\n else:\n print(\"The gradient is wrong!\")\n \n return difference\n\nx, theta = 2, 4\ndifference = gradient_check(x, theta)\nprint(\"difference = \" + str(difference))", "Expected Output:\nThe gradient is correct!\n<table>\n <tr>\n <td> ** difference ** </td>\n <td> 2.9193358103083e-10 </td>\n </tr>\n</table>\n\nCongrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in backward_propagation(). \nNow, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it!\n3) N-dimensional gradient checking\nThe following figure describes the forward and backward propagation of your fraud detection model.\n<img src=\"images/NDgrad_kiank.png\" style=\"width:600px;height:400px;\">\n<caption><center> <u> Figure 2 </u>: deep neural network<br>LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID</center></caption>\nLet's look at your implementations for forward propagation and backward propagation.", "def forward_propagation_n(X, Y, parameters):\n \"\"\"\n Implements the forward propagation (and computes the cost) presented in Figure 3.\n \n Arguments:\n X -- training set for m examples\n Y -- labels for m examples \n parameters -- python dictionary containing your parameters \"W1\", \"b1\", \"W2\", \"b2\", \"W3\", \"b3\":\n W1 -- weight matrix of shape (5, 4)\n b1 -- bias vector of shape (5, 1)\n W2 -- weight matrix of shape (3, 5)\n b2 -- bias vector of shape (3, 1)\n W3 -- weight matrix of shape (1, 3)\n b3 -- bias vector of shape (1, 1)\n \n Returns:\n cost -- the cost function (logistic cost for one example)\n \"\"\"\n \n # retrieve parameters\n m = X.shape[1]\n W1 = parameters[\"W1\"]\n b1 = parameters[\"b1\"]\n W2 = parameters[\"W2\"]\n b2 = parameters[\"b2\"]\n W3 = parameters[\"W3\"]\n b3 = parameters[\"b3\"]\n\n # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID\n Z1 = np.dot(W1, X) + b1\n A1 = relu(Z1)\n Z2 = np.dot(W2, A1) + b2\n A2 = relu(Z2)\n Z3 = np.dot(W3, A2) + b3\n A3 = sigmoid(Z3)\n\n # Cost\n logprobs = np.multiply(-np.log(A3), Y) + np.multiply(-np.log(1 - A3), 1 - Y)\n cost = 1. 
/ m * np.sum(logprobs)\n \n cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)\n \n return cost, cache", "Now, run backward propagation.", "def backward_propagation_n(X, Y, cache):\n \"\"\"\n Implement the backward propagation presented in figure 2.\n \n Arguments:\n X -- input datapoint, of shape (input size, 1)\n Y -- true \"label\"\n cache -- cache output from forward_propagation_n()\n \n Returns:\n gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.\n \"\"\"\n \n m = X.shape[1]\n (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache\n \n dZ3 = A3 - Y\n dW3 = 1. / m * np.dot(dZ3, A2.T)\n db3 = 1. / m * np.sum(dZ3, axis=1, keepdims=True)\n \n dA2 = np.dot(W3.T, dZ3)\n dZ2 = np.multiply(dA2, np.int64(A2 > 0))\n dW2 = 1. / m * np.dot(dZ2, A1.T) * 2 # Should not multiply by 2\n db2 = 1. / m * np.sum(dZ2, axis=1, keepdims=True)\n \n dA1 = np.dot(W2.T, dZ2)\n dZ1 = np.multiply(dA1, np.int64(A1 > 0))\n dW1 = 1. / m * np.dot(dZ1, X.T)\n db1 = 4. / m * np.sum(dZ1, axis=1, keepdims=True) # Should not multiply by 4\n \n gradients = {\"dZ3\": dZ3, \"dW3\": dW3, \"db3\": db3,\n \"dA2\": dA2, \"dZ2\": dZ2, \"dW2\": dW2, \"db2\": db2,\n \"dA1\": dA1, \"dZ1\": dZ1, \"dW1\": dW1, \"db1\": db1}\n \n return gradients", "You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.\nHow does gradient checking work?.\nAs in 1) and 2), you want to compare \"gradapprox\" to the gradient computed by backpropagation. The formula is still:\n$$ \\frac{\\partial J}{\\partial \\theta} = \\lim_{\\varepsilon \\to 0} \\frac{J(\\theta + \\varepsilon) - J(\\theta - \\varepsilon)}{2 \\varepsilon} \\tag{1}$$\nHowever, $\\theta$ is not a scalar anymore. It is a dictionary called \"parameters\". We implemented a function \"dictionary_to_vector()\" for you. It converts the \"parameters\" dictionary into a vector called \"values\", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them.\nThe inverse function is \"vector_to_dictionary\" which outputs back the \"parameters\" dictionary.\n<img src=\"images/dictionary_to_vector.png\" style=\"width:600px;height:400px;\">\n<caption><center> <u> Figure 2 </u>: dictionary_to_vector() and vector_to_dictionary()<br> You will need these functions in gradient_check_n()</center></caption>\nWe have also converted the \"gradients\" dictionary into a vector \"grad\" using gradients_to_vector(). You don't need to worry about that.\nExercise: Implement gradient_check_n().\nInstructions: Here is pseudo-code that will help you implement the gradient check.\nFor each i in num_parameters:\n- To compute J_plus[i]:\n 1. Set $\\theta^{+}$ to np.copy(parameters_values)\n 2. Set $\\theta^{+}_i$ to $\\theta^{+}_i + \\varepsilon$\n 3. Calculate $J^{+}_i$ using to forward_propagation_n(x, y, vector_to_dictionary($\\theta^{+}$ )). \n- To compute J_minus[i]: do the same thing with $\\theta^{-}$\n- Compute $gradapprox[i] = \\frac{J^{+}_i - J^{-}_i}{2 \\varepsilon}$\nThus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to parameter_values[i]. You can now compare this gradapprox vector to the gradients vector from backpropagation. 
Just like for the 1D case (Steps 1', 2', 3'), compute: \n$$ difference = \\frac {\\| grad - gradapprox \\|_2}{\\| grad \\|_2 + \\| gradapprox \\|_2 } \\tag{3}$$", "# GRADED FUNCTION: gradient_check_n\n\ndef gradient_check_n(parameters, gradients, X, Y, epsilon=1e-7):\n \"\"\"\n Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n\n \n Arguments:\n parameters -- python dictionary containing your parameters \"W1\", \"b1\", \"W2\", \"b2\", \"W3\", \"b3\":\n grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters. \n x -- input datapoint, of shape (input size, 1)\n y -- true \"label\"\n epsilon -- tiny shift to the input to compute approximated gradient with formula(1)\n \n Returns:\n difference -- difference (2) between the approximated gradient and the backward propagation gradient\n \"\"\"\n \n # Set-up variables\n parameters_values, _ = dictionary_to_vector(parameters)\n grad = gradients_to_vector(gradients)\n num_parameters = parameters_values.shape[0]\n J_plus = np.zeros((num_parameters, 1))\n J_minus = np.zeros((num_parameters, 1))\n gradapprox = np.zeros((num_parameters, 1))\n \n # Compute gradapprox\n for i in range(num_parameters):\n \n # Compute J_plus[i]. Inputs: \"parameters_values, epsilon\". Output = \"J_plus[i]\".\n # \"_\" is used because the function you have to outputs two parameters but we only care about the first one\n ### START CODE HERE ### (approx. 3 lines)\n thetaplus = np.copy(parameters_values) # Step 1\n thetaplus[i][0] = thetaplus[i][0] + epsilon # Step 2\n J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus)) # Step 3\n ### END CODE HERE ###\n \n # Compute J_minus[i]. Inputs: \"parameters_values, epsilon\". Output = \"J_minus[i]\".\n ### START CODE HERE ### (approx. 3 lines)\n thetaminus = np.copy(parameters_values) # Step 1\n thetaminus[i][0] = thetaminus[i][0] - epsilon # Step 2 \n J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus)) # Step 3\n ### END CODE HERE ###\n \n # Compute gradapprox[i]\n ### START CODE HERE ### (approx. 1 line)\n gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon)\n ### END CODE HERE ###\n \n # Compare gradapprox to backward propagation gradients by computing difference.\n ### START CODE HERE ### (approx. 1 line)\n numerator = np.linalg.norm(grad - gradapprox) # Step 1'\n denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'\n difference = numerator / denominator # Step 3'\n ### END CODE HERE ###\n\n if difference > 1e-7:\n print(\"\\033[93m\" + \"There is a mistake in the backward propagation! difference = \" + str(difference) + \"\\033[0m\")\n else:\n print(\"\\033[92m\" + \"Your backward propagation works perfectly fine! difference = \" + str(difference) + \"\\033[0m\")\n \n return difference\n\nX, Y, parameters = gradient_check_n_test_case()\n\ncost, cache = forward_propagation_n(X, Y, parameters)\ngradients = backward_propagation_n(X, Y, cache)\ndifference = gradient_check_n(parameters, gradients, X, Y)", "Expected output:\n<table>\n <tr>\n <td> ** There is a mistake in the backward propagation!** </td>\n <td> difference = 0.285093156781 </td>\n </tr>\n</table>\n\nIt seems that there were errors in the backward_propagation_n code we gave you! Good that you've implemented the gradient check. Go back to backward_propagation and try to find/correct the errors (Hint: check dW2 and db1). Rerun the gradient check when you think you've fixed it. 
Remember you'll need to re-execute the cell defining backward_propagation_n() if you modify the code. \nCan you get gradient check to declare your derivative computation correct? Even though this part of the assignment isn't graded, we strongly urge you to try to find the bug and re-run gradient check until you're convinced backprop is now correctly implemented. \nNote \n- Gradient Checking is slow! Approximating the gradient with $\\frac{\\partial J}{\\partial \\theta} \\approx \\frac{J(\\theta + \\varepsilon) - J(\\theta - \\varepsilon)}{2 \\varepsilon}$ is computationally costly. For this reason, we don't run gradient checking at every iteration during training. Just a few times to check if the gradient is correct. \n- Gradient Checking, at least as we've presented it, doesn't work with dropout. You would usually run the gradient check algorithm without dropout to make sure your backprop is correct, then add dropout. \nCongrats, you can be confident that your deep learning model for fraud detection is working correctly! You can even use this to convince your CEO. :) \n<font color='blue'>\nWhat you should remember from this notebook:\n- Gradient checking verifies closeness between the gradients from backpropagation and the numerical approximation of the gradient (computed using forward propagation).\n- Gradient checking is slow, so we don't run it in every iteration of training. You would usually run it only to make sure your code is correct, then turn it off and use backprop for the actual learning process." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
littlewizardLI/Udacity-ML-nanodegrees
Project0-titanic_survival_exploration/titanic_survival_exploration.ipynb
apache-2.0
[ "机器学习工程师纳米学位\n入门\n项目 0: 预测泰坦尼克号乘客生还率\n1912年,泰坦尼克号在第一次航行中就与冰山相撞沉没,导致了大部分乘客和船员身亡。在这个入门项目中,我们将探索部分泰坦尼克号旅客名单,来确定哪些特征可以最好地预测一个人是否会生还。为了完成这个项目,你将需要实现几个基于条件的预测并回答下面的问题。我们将根据代码的完成度和对问题的解答来对你提交的项目的进行评估。 \n\n提示:这样的文字将会指导你如何使用 iPython Notebook 来完成项目。\n\n点击这里查看本文件的英文版本。\n开始\n当我们开始处理泰坦尼克号乘客数据时,会先导入我们需要的功能模块以及将数据加载到 pandas DataFrame。运行下面区域中的代码加载数据,并使用 .head() 函数显示前几项乘客数据。 \n\n提示:你可以通过单击代码区域,然后使用键盘快捷键 Shift+Enter 或 Shift+ Return 来运行代码。或者在选择代码后使用播放(run cell)按钮执行代码。像这样的 MarkDown 文本可以通过双击编辑,并使用这些相同的快捷键保存。Markdown 允许你编写易读的纯文本并且可以转换为 HTML。", "import numpy as np\nimport pandas as pd\n\n# RMS Titanic data visualization code \n# 数据可视化代码\nfrom titanic_visualizations import survival_stats\nfrom IPython.display import display\n%matplotlib inline\n\n# Load the dataset \n# 加载数据集\nin_file = 'titanic_data.csv'\nfull_data = pd.read_csv(in_file)\n\n# Print the first few entries of the RMS Titanic data \n# 显示数据列表中的前几项乘客数据\ndisplay(full_data.head())", "从泰坦尼克号的数据样本中,我们可以看到船上每位旅客的特征\n\nSurvived:是否存活(0代表否,1代表是)\nPclass:社会阶级(1代表上层阶级,2代表中层阶级,3代表底层阶级)\nName:船上乘客的名字\nSex:船上乘客的性别\nAge:船上乘客的年龄(可能存在 NaN)\nSibSp:乘客在船上的兄弟姐妹和配偶的数量\nParch:乘客在船上的父母以及小孩的数量\nTicket:乘客船票的编号\nFare:乘客为船票支付的费用\nCabin:乘客所在船舱的编号(可能存在 NaN)\nEmbarked:乘客上船的港口(C 代表从 Cherbourg 登船,Q 代表从 Queenstown 登船,S 代表从 Southampton 登船)\n\n因为我们感兴趣的是每个乘客或船员是否在事故中活了下来。可以将 Survived 这一特征从这个数据集移除,并且用一个单独的变量 outcomes 来存储。它也做为我们要预测的目标。\n运行该代码,从数据集中移除 Survived 这个特征,并将它存储在变量 outcomes 中。", "# Store the 'Survived' feature in a new variable and remove it from the dataset \n# 从数据集中移除 'Survived' 这个特征,并将它存储在一个新的变量中。\noutcomes = full_data['Survived']\ndata = full_data.drop('Survived', axis = 1)\n\n# Show the new dataset with 'Survived' removed\n# 显示已移除 'Survived' 特征的数据集\ndisplay(data.head())", "这个例子展示了如何将泰坦尼克号的 Survived 数据从 DataFrame 移除。注意到 data(乘客数据)和 outcomes (是否存活)现在已经匹配好。这意味着对于任何乘客的 data.loc[i] 都有对应的存活的结果 outcome[i]。\n为了验证我们预测的结果,我们需要一个标准来给我们的预测打分。因为我们最感兴趣的是我们预测的准确率,既正确预测乘客存活的比例。运行下面的代码来创建我们的 accuracy_score 函数以对前五名乘客的预测来做测试。\n思考题:从第六个乘客算起,如果我们预测他们全部都存活,你觉得我们预测的准确率是多少?", "def accuracy_score(truth, pred):\n \"\"\" Returns accuracy score for input truth and predictions. \"\"\"\n \n # Ensure that the number of predictions matches number of outcomes\n # 确保预测的数量与结果的数量一致\n if len(truth) == len(pred): \n \n # Calculate and return the accuracy as a percent\n # 计算预测准确率(百分比)\n return \"Predictions have an accuracy of {:.2f}%.\".format((truth == pred).mean()*100)\n \n else:\n return \"Number of predictions does not match number of outcomes!\"\n \n# Test the 'accuracy_score' function\n# 测试 'accuracy_score' 函数\npredictions = pd.Series(np.ones(5, dtype = int))\nprint accuracy_score(outcomes[:5], predictions)", "提示:如果你保存 iPython Notebook,代码运行的输出也将被保存。但是,一旦你重新打开项目,你的工作区将会被重置。请确保每次都从上次离开的地方运行代码来重新生成变量和函数。\n\n预测\n如果我们要预测泰坦尼克号上的乘客是否存活,但是我们又对他们一无所知,那么最好的预测就是船上的人无一幸免。这是因为,我们可以假定当船沉没的时候大多数乘客都遇难了。下面的 predictions_0 函数就预测船上的乘客全部遇难。", "def predictions_0(data):\n \"\"\" Model with no features. Always predicts a passenger did not survive. 
\"\"\"\n\n predictions = []\n for _, passenger in data.iterrows():\n \n # Predict the survival of 'passenger'\n # 预测 'passenger' 的生还率\n predictions.append(0)\n \n # Return our predictions\n # 返回预测结果\n return pd.Series(predictions)\n\n# Make the predictions\n# 进行预测\npredictions = predictions_0(data)", "问题1\n对比真实的泰坦尼克号的数据,如果我们做一个所有乘客都没有存活的预测,你认为这个预测的准确率能达到多少?\n提示:运行下面的代码来查看预测的准确率。", "print accuracy_score(outcomes, predictions)", "回答: Predictions have an accuracy of 61.62%\n\n我们可以使用 survival_stats 函数来看看 Sex 这一特征对乘客的存活率有多大影响。这个函数定义在名为 titanic_visualizations.py 的 Python 脚本文件中,我们的项目提供了这个文件。传递给函数的前两个参数分别是泰坦尼克号的乘客数据和乘客的 生还结果。第三个参数表明我们会依据哪个特征来绘制图形。\n运行下面的代码绘制出依据乘客性别计算存活率的柱形图。", "survival_stats(data, outcomes, 'Sex')", "观察泰坦尼克号上乘客存活的数据统计,我们可以发现大部分男性乘客在船沉没的时候都遇难了。相反的,大部分女性乘客都在事故中生还。让我们在先前推断的基础上继续创建:如果乘客是男性,那么我们就预测他们遇难;如果乘客是女性,那么我们预测他们在事故中活了下来。\n将下面的代码补充完整,让函数可以进行正确预测。 \n提示:您可以用访问 dictionary(字典)的方法来访问船上乘客的每个特征对应的值。例如, passenger['Sex'] 返回乘客的性别。", "def predictions_1(data):\n \"\"\" Model with one feature: \n - Predict a passenger survived if they are female. \"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n \n # Remove the 'pass' statement below \n # 移除下方的 'pass' 声明\n # and write your prediction conditions here\n # 输入你自己的预测条件\n if passenger['Sex'] == 'female':\n predictions.append(1)\n else:\n predictions.append(0)\n \n # Return our predictions\n # 返回预测结果\n return pd.Series(predictions)\n\n# Make the predictions\n# 进行预测\npredictions = predictions_1(data)", "问题2\n当我们预测船上女性乘客全部存活,而剩下的人全部遇难,那么我们预测的准确率会达到多少?\n提示:运行下面的代码来查看我们预测的准确率。", "print accuracy_score(outcomes, predictions)", "回答: Predictions have an accuracy of 78.68%.\n\n仅仅使用乘客性别(Sex)这一特征,我们预测的准确性就有了明显的提高。现在再看一下使用额外的特征能否更进一步提升我们的预测准确度。例如,综合考虑所有在泰坦尼克号上的男性乘客:我们是否找到这些乘客中的一个子集,他们的存活概率较高。让我们再次使用 survival_stats 函数来看看每位男性乘客的年龄(Age)。这一次,我们将使用第四个参数来限定柱形图中只有男性乘客。\n运行下面这段代码,把男性基于年龄的生存结果绘制出来。", "survival_stats(data, outcomes, 'Age', [\"Sex == 'male'\"])", "仔细观察泰坦尼克号存活的数据统计,在船沉没的时候,大部分小于10岁的男孩都活着,而大多数10岁以上的男性都随着船的沉没而遇难。让我们继续在先前预测的基础上构建:如果乘客是女性,那么我们就预测她们全部存活;如果乘客是男性并且小于10岁,我们也会预测他们全部存活;所有其它我们就预测他们都没有幸存。 \n将下面缺失的代码补充完整,让我们的函数可以实现预测。\n提示: 您可以用之前 predictions_1 的代码作为开始来修改代码,实现新的预测函数。", "def predictions_2(data):\n \"\"\" Model with two features: \n - Predict a passenger survived if they are female.\n - Predict a passenger survived if they are male and younger than 10. \"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n \n # Remove the 'pass' statement below \n # 移除下方的 'pass' 声明\n # and write your prediction conditions here\n # 输入你自己的预测条件\n if passenger['Sex'] == 'female':\n predictions.append(1)\n elif passenger['Age'] < 10:\n predictions.append(1)\n else :\n predictions.append(0)\n \n # Return our predictions\n # 返回预测结果\n return pd.Series(predictions)\n\n# Make the predictions\n# 进行预测\npredictions = predictions_2(data)", "问题3\n当预测所有女性以及小于10岁的男性都存活的时候,预测的准确率会达到多少?\n提示:运行下面的代码来查看预测的准确率。", "print accuracy_score(outcomes, predictions)", "回答: Predictions have an accuracy of 79.35%.\n\n添加年龄(Age)特征与性别(Sex)的结合比单独使用性别(Sex)也提高了不少准确度。现在该你来做预测了:找到一系列的特征和条件来对数据进行划分,使得预测结果提高到80%以上。这可能需要多个特性和多个层次的条件语句才会成功。你可以在不同的条件下多次使用相同的特征。Pclass,Sex,Age,SibSp 和 Parch 是建议尝试使用的特征。 \n使用 survival_stats 函数来观测泰坦尼克号上乘客存活的数据统计。\n提示: 要使用多个过滤条件,把每一个条件放在一个列表里作为最后一个参数传递进去。例如: [\"Sex == 'male'\", \"Age &lt; 18\"]", "survival_stats(data, outcomes, 'Pclass')", "当查看和研究了图形化的泰坦尼克号上乘客的数据统计后,请补全下面这段代码中缺失的部分,使得函数可以返回你的预测。 \n在到达最终的预测模型前请确保记录你尝试过的各种特征和条件。 \n提示: 您可以用之前 predictions_2 的代码作为开始来修改代码,实现新的预测函数。", "def predictions_3(data):\n \"\"\" Model with multiple features. 
Makes a prediction with an accuracy of at least 80%. \"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n \n if passenger['Sex'] == 'female':\n if passenger['Pclass'] == 3 and passenger['Age'] > 40:\n predictions.append(0)\n else:\n predictions.append(1)\n elif passenger['Age'] < 10:\n predictions.append(1)\n elif passenger['Pclass'] < 2 and passenger['Age'] < 18:\n predictions.append(1)\n else:\n predictions.append(0)\n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_3(data)", "结论\n请描述你实现80%准确度的预测模型所经历的步骤。您观察过哪些特征?某些特性是否比其他特征更有帮助?你用了什么条件来预测生还结果?你最终的预测的准确率是多少?\n提示:运行下面的代码来查看你的预测准确度。", "print accuracy_score(outcomes, predictions)", "回答: Predictions have an accuracy of 80.36%.\n1. 首先,我看来前面那个把男女作为特征的柱状图,然后发现如果只是女性就标记为生存很不合理,因为大概有80多女性没有活下来。所以我第一步就是分析没有活下来的那些女性的特征。我首先试了Pclass.然后结果很明显,Pclass为1和2的基本生存下来了,阶级3的大概是50%生存下来。然后我再分析阶级3的为什么没活下来的,那部分人的特征,然后我选择了Age,然后发现40岁以上的基本都没活下来。所以就得出第一个判断结果。 然后年龄小于10的前面已经提过了。最后一个我选了Pclass,因为这个也是比较合理的,看来结果阶级1和2的都是50%左右,阶级3的太低我就不考虑。然后再考虑在这两个阶级中影响比较大的特征是什么,我就以'Pclass < 3' 作为过滤器查看其他特征的表现,最后选了Age < 18的\n2. 的确有些特征是比其他特征更有帮助,比如性别,年龄,阶级就是帮助比较大的\n3. 最终预测结果是80.36%\n结论\n经过了数次对数据的探索和分类,你创建了一个预测泰坦尼克号乘客存活率的有用的算法。在这个项目中你手动地实现了一个简单的机器学习模型——决策树(decision tree)。决策树每次按照一个特征把数据分割成越来越小的群组(被称为 nodes)。每次数据的一个子集被分出来,如果分割结果的子集中的数据比之前更同质(包含近似的标签),我们的预测也就更加准确。电脑来帮助我们做这件事会比手动做更彻底,更精确。这个链接提供了另一个使用决策树做机器学习入门的例子。 \n决策树是许多监督学习算法中的一种。在监督学习中,我们关心的是使用数据的特征并根据数据的结果标签进行预测或建模。也就是说,每一组数据都有一个真正的结果值,不论是像泰坦尼克号生存数据集一样的标签,或者是连续的房价预测。\n问题5\n想象一个真实世界中应用监督学习的场景,你期望预测的结果是什么?举出两个在这个场景中能够帮助你进行预测的数据集中的特征。\n回答: 成年人身高,特征:性别,区域等\n\n注意: 当你写完了所有的代码,并且回答了所有的问题。你就可以把你的 iPython Notebook 导出成 HTML 文件。你可以在菜单栏,这样导出File -> Download as -> HTML (.html) 把这个 HTML 和这个 iPython notebook 一起做为你的作业提交。\n\n\n翻译:毛礼建 | 校译:黄强 | 审译:曹晨巍" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
NuGrid/NuPyCEE
regression_tests/temp/RTS_plot_functions.ipynb
bsd-3-clause
[ "Regression test suite: Test of all plotting functions\nPlotting functions are called in the order as they appear in the code.\nEach field calls first the function with default input and then with user-specified input.\nYou can find the documentation <a href=\"doc/sygma.html\">here</a>.\n$\\odot$ Plotting functions tests", "#from imp import *\n#s=load_source('sygma','/home/nugrid/nugrid/SYGMA/SYGMA_online/SYGMA_dev/sygma.py')\n#import mpld3\n#mpld3.enable_notebook()\nimport sygma as s\nreload(s)\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n\ns1=s.sygma(iniZ=0.02,dt=1e7,tend=2e7)", "plot_yield_input\nTo plot the yield data", "s1.plot_yield_input() #[1,3,5,12][Fe/H]\ns1.plot_yield_input(fig=2,xaxis='mini',yaxis='[Fe/H]',iniZ=0.0001,masses=[1,3,12,25],marker='s',color='r',shape='-')\ns1.plot_yield_input(fig=3,xaxis='[C/H]',yaxis='[Fe/H]',iniZ=0.0001,masses=[1,3,12,25],marker='x',color='b',shape='--')", "The following commands plot the ISM metallicity in spectroscopic notation.\ns1.plot_mass", "s1.plot_mass()\ns1.plot_mass(specie='N',shape='--',marker='x')\n\n#s1.plot_mass_multi()\n#s1.plot_mass_multi(fig=1,specie=['C','N'],ylims=[],source='all',norm=False,label=[],shape=['-','--'],marker=['o','D'],color=['r','b'],markevery=20)\n#plt.legend()", "s1.plot_massfrac", "s1.plot_massfrac()\ns1.plot_massfrac(yaxis='He-4',shape='--',marker='x')", "s1.plot_spectro", "s1.plot_spectro()\ns1.plot_spectro(yaxis='[O/Fe]',marker='x',shape='--')", "s1.plot_totmasses", "s1.plot_totmasses()\ns1.plot_totmasses(source='agb',shape='--',marker='x')\ns1.plot_totmasses(mass='stars',shape=':',marker='^')", "Test of SNIa and SNII rate plots", "import sygma as s\nreload(s)\ns1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',imf_bdys=[1,30],special_timesteps=-1,hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')\n\n#s1.plot_sn_distr(rate=True,label1='SN1a, rate',label2='SNII, rate',marker1='o',marker2='s')\ns1.plot_sn_distr(fig=4,rate=False,label1='SN1a, number',label2='SNII number',marker1='d',marker2='p')\n##plt.xlim(1e6,1e10)\n#plt.ylabel('Number/Rate')\ns1.plot_sn_distr()\ns1.plot_sn_distr(fig=5,rate=True,rate_only='',xaxis='time',label1='SN1a',label2='SN2',shape1=':',shape2='--',marker1='o',marker2='s',color1='k',color2='b',markevery=20)", "One point at the beginning for only 1 starburst", "#s1=s.sygma(iolevel=0,mgal=1e11,dt=1e6,tend=1.3e10,imf_type='salpeter',imf_bdys=[1,30],special_timesteps=-1,iniZ=-1,hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')\n\n#s1.plot_sn_distr(rate=True,label1='SN1a, rate',label2='SNII, rate',marker1='o',marker2='s')\n#s1.plot_sn_distr(rate=False,label1='SN1a, number',label2='SNII number',marker1='d',marker2='p')\n#plt.xlim(1e6,1e10)\n#plt.ylabel('Number/Rate')\n\n#s1=s.sygma(iniZ=0.0001,dt=1e9,tend=2e9)\n#s2=s.sygma(iniZ=0.02)#,dt=1e7,tend=2e9)\nreload(s)\ns1=s.sygma(iolevel=0,iniZ=0.02,dt=1e8,tend=1e9) #standart not workign\n#s2=s.sygma(iniZ=0.02,dt=1e8,tend=1e10)\n", "plot_mass_range_contributions", 
"s1.plot_mass_range_contributions()\ns1.plot_mass_range_contributions(fig=7,specie='O',rebin=0.5,label='',shape='-',marker='o',color='b',markevery=20,extralabel=False,log=False)\n\n#s1.plot_mass_range_contributions(fig=7,specie='O',prodfac=True,rebin=0.5,label='',shape='-',marker='o',color='r',markevery=20,extralabel=False,log=False)\n", "Tests with two starbursts", "import sygma as s\nreload(s)\nssp1=s.sygma(iolevel=0,dt=1e8,mgal=1e11,starbursts=[0.1,0.1],tend=1e9,special_timesteps=-1,imf_type='kroupa',imf_bdys=[0.1,100],sn1a_on=False,hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt', sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')\n\nssp1.plot_star_formation_rate()\nssp1.plot_star_formation_rate(fig=6,marker='o',shape=':')\n\nssp1.plot_mass_range_contributions(fig=7,specie='H',prodfac=False,rebin=-1,time=-1,label='Total burst',shape='-',marker='o',color='r',markevery=20,extralabel=False,log=False)\nssp1.plot_mass_range_contributions(fig=7,specie='H',prodfac=False,rebin=-1,time=1e8,label='Burst at 1e8',shape='-',marker='o',color='b',markevery=20,extralabel=False,log=False)", "write_evol_table", "#s1.write_evol_table(elements=['H','He','C'])\ns1.write_evol_table(elements=['H'],isotopes=['H-1'],table_name='gce_table.txt',interact=False)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
DHBern/Tools-and-Techniques
lessons/09 Natural language processing with NLTK.ipynb
gpl-3.0
[ "Introduction to NLTK\nNLTK is the Natural Language Toolkit, a fairly large Python library for doing many sorts of linguistic analysis of text. NLTK comes with a selection of sample texts that we'll use to day, to get yourself familiar with what sorts of analysis you can do.\nTo run this notebook you will need the nltk, matplotlib, and tkinter modules. If you are new to Python and programming, the best way to have these is to make sure you are using the Anaconda Python distribution, which includes all of these and a whole host of other useful libraries. You can check whether you have the libraries by running the following commands in a Terminal or Powershell window:\npython -c 'import nltk'\npython -c 'import matplotlib'\npython -c 'import tkinter'\n\nIf you don't have NLTK, you can install it using the pip command (or possibly pip3 if you're on a Mac) as usual.\npip install nltk\n\nIf you don't have Matplotlib or TkInter, and don't want to download Anaconda, you will be able to follow along with most but not all of this notebook.\nOnce all this package installation work is done, you can run\npython -c 'import nltk; nltk.download()'\n\nor, if you are on a Mac with Python 3.4 installed via the standard Python installer:\npython3 -c 'import nltk; nltk.download()'\n\nand use the dialog that appears to download the 'book' package.\nExamining features of a text\nWe will start by loading the example texts in the 'book' package that we just downloaded.", "from nltk.book import *", "This import statement reads the book samples, which include nine sentences and nine book-length texts. It has also helpfully put each of these texts into a variable for us, from sent1 to sent9 and text1 to text9.", "print(sent1)\nprint(sent3)\nprint(sent5)", "Let's look at the texts now.", "print(text6)\nprint(text6.name)\nprint(\"This text has %d words\" % len(text6.tokens))\nprint(\"The first hundred words are:\", \" \".join( text6.tokens[:100] ))", "Each of these texts is an nltk.text.Text object, and has methods to let you see what the text contains. But you can also treat it as a plain old list!", "print(text5[0])\nprint(text3[0:11])\nprint(text4[0:51])", "We can do simple concordancing, printing the context for each use of a word throughout the text:", "text6.concordance( \"swallow\" )", "The default is to show no more than 25 results for any given word, but we can change that.", "text6.concordance('Arthur', lines=37)", "We can adjust the amount of context we show in our concordance:", "text6.concordance('Arthur', width=100)", "...or get the number of times any individual word appears in the text.", "word_to_count = \"KNIGHT\"\nprint(\"The word %s appears %d times.\" % ( word_to_count, text6.count( word_to_count ) ))", "We can generate a vocabulary for the text, and use the vocabulary to find the most frequent words as well as the ones that appear only once (a.k.a. the hapaxes.)", "t6_vocab = text6.vocab()\nt6_words = list(t6_vocab.keys())\nprint(\"The text has %d different words\" % ( len( t6_words ) ))\nprint(\"Some arbitrary 50 of these are:\", t6_words[:50])\nprint(\"The most frequent 50 words are:\", t6_vocab.most_common(50))\nprint(\"The word swallow appears %d times\" % ( t6_vocab['swallow'] ))\nprint(\"The text has %d words that appear only once\" % ( len( t6_vocab.hapaxes() ) ))\nprint(\"Some arbitrary 100 of these are:\", t6_vocab.hapaxes()[:100])", "You've now seen two methods for getting the number of times a word appears in a text: t6.count(word) and t6_vocab[word]. 
These are in fact identical, and the following bit of code is just to prove that. An assert statement is used to test whether something is true - if it ever isn't true, the code will throw up an error! This is a basic building block for writing tests for your code.", "print(\"Here we assert something that is true.\")\nfor w in t6_words:\n assert text6.count( w ) == t6_vocab[w]\n \nprint(\"See, that worked! Now we will assert something that is false, and we will get an error.\")\nfor w in t6_words:\n assert w.lower() == w", "We can try and find interesting words in the text, such as words of a minimum length (the longer a word, the less common it probably is) that occur more than once or twice...", "# With a list comprehension\nlong_words = [ w for w in t6_words if len( w ) > 5 and t6_vocab[w] > 3 ]\n\n# The long way, with a for loop. This is identical to the above.\nlong_words = []\nfor w in t6_words:\n if( len ( w ) > 5 and t6_vocab[w] > 3 ):\n long_words.append( w )\n\nprint(\"The reasonably frequent long words in the text are:\", long_words)", "And we can look for pairs of words that go together more often than chance would suggest.", "print(\"\\nUp to twenty collocations\")\ntext6.collocations()\n\nprint(\"\\nUp to fifty collocations\")\ntext6.collocations(num=50)\n\nprint(\"\\nCollocations that might have one word in between\")\ntext6.collocations(window_size=3)", "NLTK can also provide us with a few simple graph visualizations, when we have matplotlib installed. To make this work in iPython, we need the following magic line. If you are running in PyCharm, then you do not need this line - it will throw an error if you try to use it!", "%pylab --no-import-all inline", "The vocabulary we get from the .vocab() method is something called a \"frequency distribution\", which means it's a giant tally of each unique word and the number of times that word appears in the text. We can also make a frequency distribution of other features, such as \"each possible word length and the number of times a word that length is used\". Let's do that and plot it.", "word_length_dist = FreqDist( [ len(w) for w in t6_vocab.keys() ] )\nword_length_dist.plot()", "We can plot where in the text a word occurs, and compare it to other words, with a dispersion plot. For example, the following dispersion plots show respectively (among other things) that the words 'coconut' and 'swallow' almost always appear in the same part of the Holy Grail text, and that Willoughby and Lucy do not appear in Sense and Sensibility until some time after the beginning of the book.", "text6.dispersion_plot([\"coconut\", \"swallow\", \"KNIGHT\", \"witch\", \"ARTHUR\"])\n\ntext2.dispersion_plot([\"Elinor\", \"Marianne\", \"Edward\", \"Willoughby\", \"Lucy\"])", "We can go a little crazy with text statistics. This block of code computes the average word length for each text, as well as a measure known as the \"lexical diversity\" that measures how much word re-use there is in a text.", "def print_text_stats( thetext ):\n # Average word length\n awl = sum([len(w) for w in thetext]) / len( thetext ) \n ld = len( thetext ) / len( thetext.vocab() )\n print(\"%.2f\\t%.2f\\t%s\" % ( awl, ld, thetext.name ))\n \nall_texts = [ text1, text2, text3, text4, text5, text6, text7, text8, text9 ]\nprint(\"Wlen\\tLdiv\\tTitle\")\nfor t in all_texts:\n print_text_stats( t )\n", "A text of your own\nSo far we have been using the sample texts, but we can also use any text that we have lying around on our computer. 
The easiest sort of text to read in is plaintext, not PDF or HTML or anything else. Once we have made the text into an NLTK text with the Text() function, we can use all the same methods on it as we did for the sample texts above.", "from nltk import word_tokenize\n\n# You can read the file this way:\nf = open('alice.txt', encoding='utf-8')\nraw = f.read()\nf.close()\n\n# or you can read it this way.\nwith open('alice.txt', encoding='utf-8') as f:\n raw = f.read()\n\n# Use NLTK to break the text up into words, and put the result into a \n# Text object.\nalice = Text( word_tokenize( raw ) )\nalice.name = \"Alice's Adventures in Wonderland\"\nprint(alice.name)\nalice.concordance( \"cat\" )\nprint_text_stats( alice )\n", "Using text corpora\nNLTK comes with several pre-existing corpora of texts, some of which are the main body of text used for certain sorts of linguistic research. Using a corpus of texts, as opposed to an individual text, brings us a few more features.", "from nltk.corpus import gutenberg\n\nprint(gutenberg.fileids())\nparadise_lost = Text( gutenberg.words( \"milton-paradise.txt\" ) )\nparadise_lost", "Paradise Lost is now a Text object, just like the ones we have worked on before. But we accessed it through the NLTK corpus reader, which means that we get some extra bits of functionality:", "print(\"Length of text is:\", len( gutenberg.raw( \"milton-paradise.txt\" )))\nprint(\"Number of words is:\", len( gutenberg.words( \"milton-paradise.txt\" )))\nassert( len( gutenberg.words( \"milton-paradise.txt\" )) == len( paradise_lost ))\nprint(\"Number of sentences is:\", len( gutenberg.sents( \"milton-paradise.txt\" )))\nprint(\"Number of paragraphs is:\", len( gutenberg.paras( \"milton-paradise.txt\" )))", "We can also make our own corpus if we have our own collection of files, e.g. the Federalist Papers from last week. But we have to pay attention to how those files are arranged! In this case, if you look in the text file, the paragraphs are set apart with 'hanging indentation' - all the lines", "from nltk.corpus import PlaintextCorpusReader\nfrom nltk.corpus.reader.util import read_regexp_block\n\n# Define how paragraphs look in our text files.\ndef read_hanging_block( stream ):\n return read_regexp_block( stream, \"^[A-Za-z]\" )\n\ncorpus_root = 'federalist'\nfile_pattern = 'federalist_.*\\.txt'\nfederalist = PlaintextCorpusReader( corpus_root, file_pattern, para_block_reader=read_hanging_block )\nprint(\"List of texts in corpus:\", federalist.fileids())\nprint(\"\\nHere is the fourth paragraph of the first text:\")\nprint(federalist.paras(\"federalist_1.txt\")[3])", "And just like before, from this corpus we can make individual Text objects, on which we can use the methods we have seen above.", "fed1 = Text( federalist.words( \"federalist_1.txt\" ))\nprint(\"The first Federalist Paper has the following word collocations:\")\nfed1.collocations()\nprint(\"\\n...and the following most frequent words.\")\nfed1.vocab().most_common(50)", "Filtering out stopwords\nIn linguistics, stopwords or function words are words that are so frequent in a particular language that they say little to nothing about the meaning of a text. You can make your own list of stopwords, but NLTK also provides a list for each of several common languages. 
These sets of stopwords are provided as another corpus.", "from nltk.corpus import stopwords\nprint(\"We have stopword lists for the following languages:\")\nprint(stopwords.fileids())\nprint(\"\\nThese are the NLTK-provided stopwords for the German language:\")\nprint(\", \".join( stopwords.words('german') ))", "So reading in the stopword list, we can use it to filter out vocabulary we don't want to see. Let's look at our 50 most frequent words in Holy Grail again.", "print(\"The most frequent words are: \")\nprint([word[0] for word in t6_vocab.most_common(50)])\n\nf1_most_frequent = [ w[0] for w in t6_vocab.most_common() if w[0].lower() not in stopwords.words('english') ]\nprint(\"\\nThe most frequent interesting words are: \", \" \".join( f1_most_frequent[:50] ))", "Maybe we should get rid of punctuation and all-caps words too...", "import re\n\ndef is_interesting( w ):\n if( w.lower() in stopwords.words('english') ):\n return False\n if( w.isupper() ):\n return False\n return w.isalpha()\n\nf1_most_frequent = [ w[0] for w in t6_vocab.most_common() if is_interesting( w[0] ) ]\nprint(\"The most frequent interesting words are: \", \" \".join( f1_most_frequent[:50] ))", "Getting word stems\nQuite frequently we might want to treat different forms of a word - e.g. 'make / makes / made / making' - as the same word. A common way to do this is to find the stem of the word and use that in your analysis, in place of the word itself. There are several different approaches that can be takenNone of them are perfect, and quite frequently linguists will write their own stemmers.\nLet's chop out a paragraph of Alice in Wonderland to play with.", "my_text = alice[305:549]\nprint(\" \". join( my_text ))\nprint(len( set( my_text )), \"words\")", "NLTK comes with a few different stemming algorithms; we can also use WordNet (a system for analyzing semantic relationships between words) to look for the lemma form of each word and \"stem\" it that way. Here are some results.", "from nltk import PorterStemmer, LancasterStemmer, WordNetLemmatizer\n\nporter = PorterStemmer()\nlanc = LancasterStemmer()\nwnl = WordNetLemmatizer()\n\nporterlist = [porter.stem(w) for w in my_text]\nprint(\" \".join( porterlist ))\nprint(len( set( porterlist )), \"Porter stems\")\nlanclist = [lanc.stem(w) for w in my_text]\nprint(\" \".join( lanclist ))\nprint(len( set( lanclist )), \"Lancaster stems\")\nwnllist = [ wnl.lemmatize(w) for w in my_text ]\nprint(\" \".join( wnllist ))\nprint(len( set( wnllist )), \"Wordnet lemmata\")\n", "Part-of-speech tagging\nThis is where corpus linguistics starts to get interesting. In order to analyze a text computationally, it is useful to know its syntactic structure - what words are nouns, what are verbs, and so on? 
This can be done (again, imperfectly) by using part-of-speech tagging.", "from nltk import pos_tag\n\nprint(pos_tag(my_text))", "NLTK part-of-speech tags (simplified tagset)\n| Tag | Meaning | Examples |\n|-----|--------------------|--------------------------------------|\n| JJ | adjective | new, good, high, special, big, local |\n| RB | adverb | really, already, still, early, now |\n| CC | conjunction | and, or, but, if, while, although |\n| DT | determiner | the, a, some, most, every, no |\n| EX | existential | there, there's |\n| FW | foreign word | dolce, ersatz, esprit, quo, maitre |\n| MD | modal verb | will, can, would, may, must, should |\n| NN | noun | year, home, costs, time, education |\n| NNP | proper noun | Alison, Africa, April, Washington |\n| NUM | number | twenty-four, fourth, 1991, 14:24 |\n| PRO | pronoun | he, their, her, its, my, I, us |\n| IN | preposition | on, of, at, with, by, into, under |\n| TO | the word to | to |\n| UH | interjection | ah, bang, ha, whee, hmpf, oops |\n| VB | verb | is, has, get, do, make, see, run |\n| VBD | past tense | said, took, told, made, asked |\n| VBG | present participle | making, going, playing, working |\n| VN | past participle | given, taken, begun, sung |\n| WRB | wh determiner | who, which, when, what, where, how |\nAutomated tagging is pretty good, but not perfect. There are other taggers out there, such as the Brill tagger and the TreeTagger, but these aren't set up to run 'out of the box' and, with TreeTagger in particular, you will have to download extra software.\nSome of the bigger corpora in NLTK come pre-tagged; this is a useful way to train a tagger that uses machine-learning methods (such as Brill), and a good way to test any new tagging method that is developed. This is also the data from which our knowledge of how language is used comes from. (At least, English and some other major Western languages.)", "from nltk.corpus import brown\n\nprint(brown.tagged_words()[:25])\nprint(brown.tagged_words(tagset='universal')[:25])", "We can even do a frequency plot of the different parts of speech in the corpus (if we have matplotlib installed!)", "tagged_word_fd = FreqDist([ w[1] for w in brown.tagged_words(tagset='universal') ])\ntagged_word_fd.plot()", "Named-entity recognition\nAs well as the parts of speech of individual words, it is useful to be able to analyze the structure of an entire sentence. This generally involves breaking the sentence up into its component phrases, otherwise known as chunking. \nNot going to cover chunking here as there is no out-of-the-box chunker for NLTK! You are expected to define the grammar (or at least some approximation of the grammar), and once you have done that then it becomes possible.\nBut one application of chunking is named-entity recognition - parsing a sentence to identify the named people, places, and organizations therein. This is more difficult than it looks, e.g. \"Yankee\", \"May\", \"North\".\nHere's how to do it. We will use the example sentences that were loaded in sent1 through sent9 to try it out. Notice the difference (in iPython only!) between printing the result and just looking at the result - if you try to show the graph for more than one sentence at a time then you'll be waiting a long time. So don't try it.", "from nltk import ne_chunk\n\ntagged_text = pos_tag(sent2)\nner_text = ne_chunk( tagged_text )\nprint(ner_text)\nner_text", "Here is a function that takes the result of ne_chunk (the plain-text form, not the graph form!) 
and spits out only the named entities that were found.", "def list_named_entities( tree ):\n try:\n tree.label()\n except AttributeError:\n return\n if( tree.label() != \"S\" ):\n print(tree)\n else:\n for child in tree:\n list_named_entities( child )\n \nlist_named_entities( ner_text )", "And there you have it - an introductory tour of what is probably the best-available code toolkit for natural language processing. If this sort of thing interests you, then there is an entire book-length tutorial about it:\nhttp://www.nltk.org/book/\nHave fun!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
merryjman/astronomy
templateGraphing.ipynb
gpl-3.0
[ "Data Analysis Template\nThis notebook is a template for data analysis and includes some useful code for calculations and plotting.", "# import software packages\nimport pandas as pd\nimport numpy as np\n%matplotlib inline\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\ninline_rc = dict(mpl.rcParams)", "Raw data\nThis is an example of making a data table.", "# enter column labels and raw data (with same # of values)\ntable1 = pd.DataFrame.from_items([\n ('column1', [0,1,2,3]), \n ('column2', [0,2,4,6])\n ])\n# display data table\ntable1", "Plotting", "# Uncomment the next line to make your graphs look like xkcd.com\n#plt.xkcd()\n\n# to make normal-looking plots again execute:\n#mpl.rcParams.update(inline_rc)\n\n# set variables = data['column label']\nx = table1['column1']\ny = table1['column2']\n\n# this makes a scatterplot of the data\n# plt.scatter(x values, y values)\nplt.scatter(x, y)\nplt.title(\"?\")\nplt.xlabel(\"?\")\nplt.ylabel(\"?\")\nplt.autoscale(tight=True)\n\n# calculate a trendline equation\n# np.polyfit( x values, y values, polynomial order)\ntrend1 = np.polyfit(x, y, 1)\n\n# plot trendline\n# plt.plot(x values, y values, other parameters)\nplt.plot(x, np.poly1d(trend1)(x), label='trendline')\nplt.legend(loc='upper left')\n\n# display the trendline's coefficients (slope, y-int)\ntrend1", "Do calculations with the data", "# create a new empty column\ntable1['column3'] = ''\ntable1", "Here's an example of calculating the difference between the values in column 2:", "# np.diff() calculates the difference between a value and the one after it\nz = np.diff(x)\n\n# fill column 3 with values from the formula (z) above:\ntable1['column3'] = pd.DataFrame.from_items([('', z)])\n\n# display the data table\ntable1\n\n# NaN and Inf values cause problems with math and plotting.\n# Make a new table using only selected rows and columns\ntable2 = table1.loc[0:2,['column1', 'column2', 'column3']] # this keeps rows 0 through 2\ntable2\n\n# set new variables to plot\nx2 = table2['column1']\ny2 = table2['column3']", "Now you can copy the code above to plot your new data table.", "# code for plotting table2 can go here" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/asl-ml-immersion
notebooks/launching_into_ml/solutions/supplemental/decision_trees_and_random_Forests_in_Python.ipynb
apache-2.0
[ "Decision Trees and Random Forests in Python\nLearning Objectives\n\nExplore and analyze data using a Pairplot\nTrain a single Decision Tree\nPredict and evaluate the Decision Tree\nCompare the Decision Tree model to a Random Forest\n\nIntroduction\nIn this lab, you explore and analyze data using a Pairplot, train a single Decision Tree, predict and evaluate the Decision Tree, and compare the Decision Tree model to a Random Forest. Recall that the Decision Tree algorithm belongs to the family of supervised learning algorithms. Unlike other supervised learning algorithms, the decision tree algorithm can be used for solving both regression and classification problems too. Simply, the goal of using a Decision Tree is to create a training model that can use to predict the class or value of the target variable by learning simple decision rules inferred from prior data(training data).\nEach learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.\nLoad necessary libraries\nWe will start by importing the necessary libraries for this lab.", "import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\n%matplotlib inline", "Get the Data", "df = pd.read_csv(\"../kyphosis.csv\")\n\ndf.head()", "Exploratory Data Analysis\nWe'll just check out a simple pairplot for this small dataset.", "# TODO 1\nsns.pairplot(df, hue=\"Kyphosis\", palette=\"Set1\")", "Train Test Split\nLet's split up the data into a training set and a test set!", "from sklearn.model_selection import train_test_split\n\nX = df.drop(\"Kyphosis\", axis=1)\ny = df[\"Kyphosis\"]\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30)", "Decision Trees\nWe'll start just by training a single decision tree.", "from sklearn.tree import DecisionTreeClassifier\n\ndtree = DecisionTreeClassifier()\n\n# TODO 2\ndtree.fit(X_train, y_train)", "Prediction and Evaluation\nLet's evaluate our decision tree.", "predictions = dtree.predict(X_test)\n\nfrom sklearn.metrics import classification_report, confusion_matrix\n\n# TODO 3a\nprint(classification_report(y_test, predictions))\n\n# TODO 3b\nprint(confusion_matrix(y_test, predictions))", "Tree Visualization\nScikit learn actually has some built-in visualization capabilities for decision trees, you won't use this often and it requires you to install the pydot library, but here is an example of what it looks like and the code to execute this:", "import pydot\nfrom IPython.display import Image\nfrom six import StringIO\nfrom sklearn.tree import export_graphviz\n\nfeatures = list(df.columns[1:])\nfeatures\n\ndot_data = StringIO()\nexport_graphviz(\n dtree, out_file=dot_data, feature_names=features, filled=True, rounded=True\n)\n\ngraph = pydot.graph_from_dot_data(dot_data.getvalue())\nImage(graph[0].create_png())", "Random Forests\nNow let's compare the decision tree model to a random forest.", "from sklearn.ensemble import RandomForestClassifier\n\nrfc = RandomForestClassifier(n_estimators=100)\nrfc.fit(X_train, y_train)\n\nrfc_pred = rfc.predict(X_test)\n\n# TODO 4a\nprint(confusion_matrix(y_test, rfc_pred))\n\n# TODO 4b\nprint(classification_report(y_test, rfc_pred))", "Copyright 2021 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rhiever/sklearn-benchmarks
Clean HPCC Data.ipynb
mit
[ "First read the 'sklearn-benchmark-data.tsv.gz' file", "import pandas as pd\n\nbenchmark_data = pd.read_csv('sklearn-benchmark-data.tsv.gz', sep='\\t')\nbenchmark_data.head()\nbenchmark_data.rename(columns={'heart-c':'Dataset_Name',\n 'GradientBoostingClassifier':'Method_Name',\n 'loss=exponential,learning_rate=10.0,n_estimators=100,max_depth=3,max_features=sqrt,warm_start=True':\n 'Parameters',\n '0.723684210526':'Test_Score'},inplace=True)", "Get all the method names", "methodNames_list = benchmark_data['Method_Name'].unique().tolist()\n#methodNames_list", "Store the data method wise separately", "methodWiseData = {}\nfor name in methodNames_list:\n methodWiseData[name] = benchmark_data[(benchmark_data.Method_Name == name)]", "Save the data method wise to a folder in tsv.gz format", "#for i in names_list:\n# print(d[i])\nimport os\nif not os.path.isdir('newBenchmark_results'):\n os.mkdir('newBenchmark_results')\n\ngb = methodWiseData['GradientBoostingClassifier']\ngb.to_pickle('newBenchmark_results/GradientBoostingClassifier_results.tsv.gz')\n\nmethod_data = pd.read_pickle('newBenchmark_results/GradientBoostingClassifier-benchmark_results.tsv.gz')\nmethod_data", "Split the parameters into dfifferent columns;\nmake sure to set the no of coumns according to different methods", "method_param = pd.DataFrame(method_data.Parameters.str.split(',').tolist(),\n columns = ['Param1','Param2','Param3'])\nmethod_param\n\nmethod_data1 = method_data.drop('Parameters', 1) #delete the Paameters column from the original dataframe\nidx = method_param.index.get_values() #get the index of the parameter dataframe \n#idx\nmethod_data2 = method_data1.set_index(idx) #set the index of method dataframe same as parameter dataframe\n#kneighbor_data2\nresult = pd.concat([method_data2, method_param], axis = 1) #finally add the parameter columns to get the result (desired format)\n#result", "Save this result in the HPCC_benchmark_results folder", "import os\nif not os.path.isdir('HPCC_benchmark_results'):\n os.mkdir('HPCC_benchmark_results')\nresult.to_pickle('HPCC_benchmark_results/GradientBoostingClassifier-hpcc_results.tsv.gz')\n\ndata = pd.read_pickle('HPCC_benchmark_results/GradientBoostingClassifier-hpcc_results.tsv.gz')\ndata" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jdvelasq/ingenieria-economica
2016-03/IE-02-representacion-flujos-y-tasas.ipynb
mit
[ "Representación de flujos de caja y tasas de interés\nNotas de clase sobre ingeniería economica avanzada usando Python\nJuan David Velásquez Henao\njdvelasq@unal.edu.co \nUniversidad Nacional de Colombia, Sede Medellín\nFacultad de Minas\nMedellín, Colombia \nSoftware utilizado\n\nEste es un documento interactivo escrito como un notebook de Jupyter , en el cual se presenta un tutorial sobre finanzas corporativas usando Python. Los notebooks de Jupyter permiten incoporar simultáneamente código, texto, gráficos y ecuaciones. El código presentado en este notebook puede ejecutarse en los sistemas operativos Linux y OS X. \nHaga click aquí para obtener instrucciones detalladas sobre como instalar Jupyter en Windows y Mac OS X.\nDescargue la última versión de este documento a su disco duro; luego, carguelo y ejecutelo en línea en Try Jupyter!\n\nContenido\n\nBibliografía \n\n\n[1] SAS/ETS 14.1 User's Guide, 2015. \n[2] hp 12c platinum financial calculator. User's guide. \n[3] HP Business Consultant II Owner's manual.\n[4] C.S. Park and G.P. Sharp-Bette. Advanced Engineering Economics. John Wiley & Sons, Inc., 1990.", "import cashflows as cf", "Conversión de tasas de interés\nInterés anticipado e interés vencido\nContenido\n<img src=\"images/antxven.png\" width=500>\nInterés vencido: se paga al final del periodo. \n$$F=P(1+r)$$\nInterés anticipado: se paga al inicio del periodo (antes de su causación). En este caso surge una paradoja, que el interés se puede reinvertir a la misma tasa de interés (anticipadamente):\n$$F = P + Pr_ + Pr_^2 + ...$$\nLa suma infinita anterior puede reescribirse como:\n$$F=P \\frac{1}{1 - r_*}$$\nIgualando las dos ecuaciones anteriores:\n$$\\frac{1}{1 - r_}=1+r ~,~~ r=\\frac{r_}{1 - r_} ~,~~ r_=\\frac{r}{1 + r}$$ \nInterés efectivo anual: \n$$r_a=\\left[1+\\left (\\frac{r_}{1-r_}\\right)\\right]^n-1$$\ncashflow tiene las funciones nom2eff() y eff2nom() para realizar las conversiones entre nominal (anticipado y vencido) y efectivo respectivamente.\nEjemplo.-- Se solicita un prestamo a un año con un interés anticipado del 20%. ¿Determine el interés efectivo pagado por el dinero?", "0.2 / (1 - 0.2)", "Ejemplo.-- Si se desea obtener una tasa efectiva anual del 36%, ¿cuánto se deberá cobrar en forma anticipada anual para obenerla?", "0.36 / (1 + 0.36)", "Interés nominal e interés efectivo\nContenido\nInterés nominal (r): expresado sobre una base anual para un número M de periodos de pago en el año. \nInterés efectivo por periodo de pago ($i$): representa el interés real para cada periodo de pago en el año. \nInterés efectivo anual ($i_a$): interés real para un periodo único de pago de un año. \n$$ i= \\frac{r}{M},$$\n$$i_\\alpha = \\left( \\displaystyle 1 + \\frac{r}{M}\\right)^M - 1 $$\nEjemplo.-- Se está considerando abrir una cuenta de ahorros en uno de tres bancos. 
Cuál banco tienen la tasa de interés más favorable?\n\n\nBanco #1: 6.72% anual, compuesto semestralmente\n\n\nBanco #1: 6.70% anual, compuesto trimestralmente.\n\n\nBanco #2: 6.65% anual, compuesto mensualmente.", "cf.iconv(nrate = 6.72, pyr = 2) ## Banco 1\n\ncf.iconv(nrate = 6.70, pyr = 4) ## Banco 2 -- mejor opción \n\ncf.iconv(nrate = 6.65, pyr = 12) ## Banco 3\n\n## Otra forma\ncf.iconv(nrate = [6.72, 6.79, 6.65], pyr = [2, 4, 12])", "Ejemplo.-- Convierta una tasa del 12% anual compuesto semestralmente a anual compuesto mensualmente.", "erate, _ = cf.iconv(nrate = 12.0, pyr = 2) ## efectiva por año \nerate\n\nnrate, _ = cf.iconv(erate = erate, pyr = 12) ## nominal compuesta mensualmente\nnrate", "Ejemplo.-- Sea un interés nominal del 12% capitalizado mensualmente. Calcule:\n\nTasa efectiva mensual\nTasa efectiva trimestral\nTasa efectiva anual\n\n<img src=\"images/tasa-nominal-efectiva.png\" width=600>", "## tasa efectiva mensual\n0.12 / 12\n\n## tasa efectiva trimestral\nerate, _ = cf.iconv(nrate = 3 * 0.12 / 12, pyr = 3)\nerate\n\n## tasa efectiva anual\nerate, _ = cf.iconv(nrate = 12.0, pyr = 12)\nerate", "Nomenclatura\nContenido\n<img src=\"images/nomenclatura.png\" width=600>\n\nEjercicios\nEjercicio.-- Cuál es la tasa efectiva anual equivalente a 15% N.A.M.V. (nominal anual mes vencido)?\nEjercicio.-- Cuál es la tasa efectiva anual equivalente a 23% N.A.T.A. (nominal anual trimestre anticipado)?\nEjercicio.-- Sea un interés nominal del 39.29% capitalizado mensualmente a cuánto equivale en términos semestrales? (R/ 6.15%)\nEjercicio.-- ¿Cuál es el valor futuro de \\$ 609 dentro de 2 años a una tasa del 2% NATV? \nEjercicio.-- ¿Cuál es el valor presente de un pago único de \\$ 890 recibido dentro de 6 años a una tasa de 2.7% NATA? \nEjercicio.-- ¿Qué cantidad de dinero se poseerá después de prestar \\$ 2300 al 27% NAMA durante 3 años?\nEjercicio.-- ¿Cuál es la tasa efectiva semestral equivalente al 14% NMTA?\nEjercicio.-- ¿Cuánto dinero mensual se debe empezar a abonar hoy si se desea reunir \\$ 28700 al final de 5 años y los ahorros rentan el 16%?\nEjercicio.-- Se decide ahorrar mensualmente \\$ 900 los cuales depositará al principio de cada mes en una entidad financiera que paga un interés del 30%. ¿Cuánto habrá acumulado al cabo de 2 años?\n\nRepresentación de tasas de interés usando cashflow\nEn la modelación financiera es común tener que representar tasas de interés que cambian en el tiempo. La librería cashflow permite realizar esta tarea. En la tabla que se presenta a continuación, la columna n indica para que periodos se aplica el valor correspondiente de la tasa (columna rate). Por definición, la tasa para el periodo 0 siempre es 0.", "cf.nominal_rate(const_value=10, start=(2000, 0), nper=8, pyr=4) \n\ncf.nominal_rate(const_value=10, start=(2000, 0), nper=8, pyr=6) \n\nspec = ((2000, 3), 10)\ncf.nominal_rate(const_value=1, start=(2000, 0), nper=8, pyr=4, spec=spec) \n\nspec = [(3, 10), (6, 20)]\ncf.nominal_rate(const_value=1, start=(2000, 0), nper=8, pyr=4, spec=spec) \n\ncf.nominal_rate(const_value=[10, 20]*10, pyr=4) ", "Ejemplo.-- Se va a tomar un crédito a 48 meses. La tasa inicial es del 3% y aumenta un punto cada año. 
Represente la tasa de interés.", "cf.nominal_rate(const_value = 3, \n start = (2000, 0),\n nper = 48, \n pyr = 12, \n spec= [(12, 4), # tasa para el año 2\n (24, 5), # tasa para el año 3\n (36, 6)]) # tasa para el año 4) \n\nx = cf.nominal_rate(const_value = 3, \n start = (2000, 0),\n nper = 48, \n pyr = 12, \n spec= [(12, 4), # tasa para el año 2\n (24, 5), # tasa para el año 3\n (36, 6)]) # tasa para el año 4) \nx[5] = 100\nx", "Representación de flujos genéricos de caja\ncashflow también permite la representación de flujos de efectivo en forma similar (pero no igual) a las tasas de interés, pero en este caso las tuplas (time, value) representan valores puntuales en el tiempo.", "cf.cashflow(const_value=1, # valor constante\n start=(2000, 0), # (periodo mayor, periodo menor)\n nper=8, # número total de periodos\n pyr=4) # número de periodos por año\n\n## un valor puntual puede ser introducido mediante una tupla\nspec = ((2000, 3), 10) # ((major, minor), value)\ncf.cashflow(const_value=1, start=(2000, 0), nper=8, pyr=4, spec=spec) \n\ncf.cashflow(const_value=1, start=(2000, 0), nper=8, pyr=4, spec=((2000, 3), 10))\n\nspec = [((2000, 3), 10), ((2001, 3), 10)]\ncf.cashflow(const_value=1, start=(2000, 0), nper=8, pyr=4, spec=spec) \n\nspec = (3, 10)\ncf.cashflow(const_value=1, start=(2000, 0), nper=8, pyr=4, spec=spec) \n\ncf.cashflow(const_value=1, start=(2000, 0), nper=8, pyr=4, spec=(3, 10)) \n\nspec = [(3, 10), (7, 10)]\ncf.cashflow(const_value=1, start=(2000, 0), nper=8, pyr=4, spec=spec) \n\ncf.cashflow(const_value=[10]*10, pyr=4) \n\ncf.cashflow(const_value=[-10]*4) \n\n## un flujo de caja es un objeto que puede guardarse \n## en una variable para usarse después\nx = cf.cashflow(const_value=[0, 1, 2, 3], pyr=4)\nx[3] = 10\nx \n\n## es posible alterar y acceder a valores individuales \n## para cada periodo de tiempo usando []\nx[3]\n\nx[(0, 3)] = 0\n\nx\n\nx[(0,2)] \n\nabs(cf.cashflow(const_value=[-10]*4, pyr=4)) \n\ncf.cashflow(const_value=[1]*4, pyr=4) + cf.cashflow(const_value=[2]*4, pyr=4)\n\ncf.cashflow(const_value=[6]*4, pyr=4) // cf.cashflow(const_value=[4]*4, pyr=4) \n\nx = cf.cashflow( const_value=[2]*4, pyr=4)\nx += cf.cashflow( const_value=[3]*4, pyr=4)\nx \n\nx = cf.cashflow( const_value=[6]*4, pyr=4)\nx //= cf.cashflow( const_value=[4]*4, pyr=4)\nx\n\nx = cf.cashflow( const_value=[2]*4, pyr=4)\nx *= cf.cashflow( const_value=[3]*4, pyr=4)\nx \n\nx = cf.cashflow( const_value=[6]*4, pyr=4)\nx -= cf.cashflow( const_value=[4]*4, pyr=4)\nx\n\ncf.cashflow( const_value=[2]*4, pyr=4) * cf.cashflow( const_value=[3]*4, pyr=4)\n\ncf.cashflow( const_value=[6]*4, pyr=4) - cf.cashflow( const_value=[4]*4, pyr=4)\n\ncf.cashflow( const_value=[6]*4, pyr=4).tolist()\n\ncflo = cf.cashflow(const_value=[-10, 5, 0, 20] * 3, pyr=4)\ncf.cfloplot(cflo)", "En algunos casos es necesario introducir patrones de flujo más complejos.", "cf.cashflow(const_value=[0, 1, 2, 2, 4, 5, 6, 7, 8])\n\n## para 5 <= t < 10, el valor es $ 100, y 0 en el resto de los casos\ncf.cashflow(const_value=0, nper=15, pyr=1, spec=[(t,100) for t in range(5,10)]) \n\n## un flujo escalonado\na = [(t, 100) for t in range( 1, 5)]\nb = [(t, 150) for t in range( 6, 10)]\nc = [(t, 200) for t in range(11, 13)]\ncf.cashflow(const_value=0, nper=20, pyr=1, spec=a + b + c)\n\n## flujo con gradiente geométrico (aumento del 5% por periodo)\ncf.cashflow(const_value=0, nper=20, pyr=1, spec=[(t, 100 * 1.05 ** (t-5)) for t in range(5,10)])", "Contenido" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
opesci/devito
examples/seismic/abc_methods/04_habc.ipynb
mit
[ "4 - Hybdrid Absorbing Boundary Condition (HABC)\n4.1 - Introduction\nIn this notebook we describe absorbing boundary conditions and their use combined with the Hybdrid Absorbing Boundary Condition (HABC). The common points to the previous notebooks <a href=\"01_introduction.ipynb\">Introduction to Acoustic Problem</a>, <a href=\"02_damping.ipynb\">Damping</a> and <a href=\"03_pml.ipynb\">PML</a> will be used here, with brief descriptions.\n4.2 - Absorbing Boundary Conditions\nWe initially describe absorbing boundary conditions, the so called A1 and A2 Clayton's conditions and\n the scheme from Higdon. These methods can be used as pure boundary conditions, designed to reduce reflections,\n or as part of the Hybrid Absorbing Boundary Condition, in which they are combined with an absorption layer in a manner to be described ahead. \nIn the presentation of these boundary conditions we initially consider the wave equation to be solved on\n the spatial domain $\\Omega=\\left[x_{I},x_{F}\\right] \\times\\left[z_{I},z_{F}\\right]$ as show in the figure bellow. More details about the equation and domain definition can be found in the <a href=\"01_introduction.ipynb\">Introduction to Acoustic Problem</a> notebook. \n<img src='domain1.png' width=500>\n4.2.1 - Clayton's A1 Boundary Condition\nClayton's A1 boundary condition is based on a one way wave equation (OWWE). This simple condition\n is such that outgoing waves normal to the border would leave without reflection. At the $\\partial \\Omega_1$ part of the boundary\n we have, \n\n$\\displaystyle\\frac{\\partial u(x,z,t)}{\\partial t}-c(x,z)\\displaystyle\\frac{\\partial u(x,z,t)}{\\partial x}=0.$\n\nwhile at $\\partial \\Omega_3$ the condition is\n\n$\\displaystyle\\frac{\\partial u(x,z,t)}{\\partial t}+c(x,z)\\displaystyle\\frac{\\partial u(x,z,t)}{\\partial x}=0.$\n\nand at $\\partial \\Omega_2$\n\n$\\displaystyle\\frac{\\partial u(x,z,t)}{\\partial t}+c(x,z)\\displaystyle\\frac{\\partial u(x,z,t)}{\\partial z}=0.$\n\n4.2.2 - Clayton's A2 Boundary Condition\nThe A2 boundary condition aims to impose a boundary condition that would make outgoing waves leave the domain without being reflected. 
This condition is approximated (using a Padé approximation in the wave dispersion relation) by the following equation to be imposed on the boundary part $\\partial \\Omega_1$\n\n$\\displaystyle\\frac{\\partial^{2} u(x,z,t)}{\\partial t^{2}}+c(x,z)\\displaystyle\\frac{\\partial^{2} u(x,z,t)}{\\partial x \\partial t}+\\frac{c^2(x,z)}{2}\\displaystyle\\frac{\\partial^{2} u(x,z,t)}{\\partial z^{2}}=0.$\n\nAt $\\partial \\Omega_3$ we have\n\n$\\displaystyle\\frac{\\partial^{2} u(x,z,t)}{\\partial t^{2}}-c(x,z)\\displaystyle\\frac{\\partial^{2} u(x,z,t)}{\\partial z \\partial t}+\\frac{c^2(x,z)}{2}\\displaystyle\\frac{\\partial^{2} u(x,z,t)}{\\partial x^{2}}=0.$\n\nwhile at $\\partial \\Omega_2$ the condition is\n\n$\\displaystyle\\frac{\\partial^{2} u(x,z,t)}{\\partial t^{2}}-c(x,z)\\displaystyle\\frac{\\partial^{2} u(x,z,t)}{\\partial x \\partial t}+\\frac{c^2(x,z)}{2}\\displaystyle\\frac{\\partial^{2} u(x,z,t)}{\\partial z^{2}}=0.$\n\nAt the corner points the condition is \n\n$\\displaystyle\\frac{\\sqrt{2}\\partial u(x,z,t)}{\\partial t}+c(x,z)\\left(\\displaystyle\\frac{\\partial u(x,z,t)}{\\partial x}+\\displaystyle\\frac{\\partial u(x,z,t)}{\\partial z}\\right)=0.$\n\n4.2.3 - Higdon Boundary Condition\nThe Higdon Boundary condition of order p is given at $\\partial \\Omega_1$ and $\\partial \\Omega_3$n by:\n\n$\\Pi_{j=1}^p(\\cos(\\alpha_j)\\left(\\displaystyle\\frac{\\partial }{\\partial t}-c(x,z)\\displaystyle\\frac{\\partial }{\\partial x}\\right)u(x,z,t)=0.$\n\nand at $\\partial \\Omega_2$\n\n$\\Pi_{j=1}^p(\\cos(\\alpha_j)\\left(\\displaystyle\\frac{\\partial}{\\partial t}-c(x,z)\\displaystyle\\frac{\\partial}{\\partial z}\\right)u(x,z,t)=0.$\n\nThis method would make that outgoing waves with angle of incidence at the boundary equal to $\\alpha_j$ would\n present no reflection. The method we use in this notebook employs order 2 ($p=2$) and angles $0$ and $\\pi/4$.\nObservation: There are similarities between Clayton's A2 and the Higdon condition. If one chooses $p=2$ and\n both angles equal to zero in Higdon's method, this leads to the condition:\n $ u_{tt}-2cu_{xt}+c^2u_{xx}=0$. But, using the wave equation, we have that $c^2u_{xx}=u_{tt}-c^2u_{zz}$. Replacing this relation in the previous equation, we get: $2u_{tt}-2cu_{xt}-c^2u_{zz}=0$ which is Clayton's A2\n boundary condition. In this sence, Higdon's method would generalize Clayton's scheme. But the discretization of\n both methods are quite different, since in Higdon's scheme the boundary operators are unidirectional, while\n in Clayton's A2 not.\n4.3 - Acoustic Problem with HABC\nIn the hybrid absorption boundary condition (HABC) scheme we will also extend the spatial domain as $\\Omega=\\left[x_{I}-L,x_{F}+L\\right] \\times\\left[z_{I},z_{F}+L\\right]$.\nWe added to the target domain $\\Omega_{0}=\\left[x_{I},x_{F}\\right]\\times\\left[z_{I},z_{F}\\right]$ an extension zone, of length $L$ in both ends of the direction $x$ and at the end of the domain in the direction $z$, as represented in the figure bellow.\n<img src='domain2.png' width=500>\nThe difference with respect to previous schemes, is that this extended region will now be considered as the union of several gradual extensions. As represented in the next figure, we define a region $A_M=\\Omega_{0}$. 
The regions $A_k, k=M-1,\\cdots,1$ will be defined as the previous region $A_{k+1}$ to which we add one extra grid line to the left,\n right and bottom sides of it, such that the final region $A_1=\\Omega$ (we thus have $M=L+1$).\n<img src='region1.png' width=500>\nWe now consider the temporal evolution\n of the solution of the HABC method. Suppose that $u(x,z,t-1)$ is the solution at a given instant $t-1$ in all the \n extended $\\Omega$ domain. We update it to instant $t$, using one of the absorbing boundary conditions described in the previous section (A1, A2 or Higdon) producing a preliminar new function $u(x,z,t)$. Now, call $u_{1}(x,z,t)$ the solution at instant $t$ constructed in the extended region, by applying the same absorbing boundary condition at the border of each of the domains $A_k,k=1,..,M$. The HABC solution will be constructed as a convex combination of $u(x,z,t)$ and $u_{1}(x,z,t)$:\n\n$u(x,z,t) = (1-\\omega)u(x,z,t)+\\omega u_{1}(x,z,t)$.\n\nThe function $u_{1}(x,z,t)$ is defined (and used) only in the extension of the domain. The function $w$ is a \nweight function growing from zero at the boundary $\\partial\\Omega_{0}$ to one at $\\partial\\Omega$. The particular weight function to be used could vary linearly, as when the scheme was first proposed by Liu and Sen. But HABC produces better results with a non-linear weight function to be described ahead.\nThe wave equation employed here will be the same as in the previous notebooks, with same velocity model, source term and initial conditions.\n4.3.1 The weight function $\\omega$\nOne can choose a linear weight function as \n\\begin{equation}\n\\omega_{k} = \\displaystyle\\frac{M-k}{M};\n\\end{equation}\nor preferably a non linear\n\\begin{equation}\n\\omega_{k}=\\left{ \\begin{array}{ll}\n1, & \\textrm{if $1\\leq k \\leq P+1$,} \\ \\left(\\displaystyle\\frac{M-k}{M-P}\\right)^{\\alpha} , & \\textrm{if $P+2 \\leq k \\leq M-1.$} \\ 0 , & \\textrm{if $k=M$.}\\end{array}\\right.\n\\label{eq:elo8}\n\\end{equation} \nIn general we take $P=2$ and we choose $\\alpha$ as follows:\n\n$\\alpha = 1.5 + 0.07(npt-P)$, in the case of A1 and A2;\n$\\alpha = 1.0 + 0.15(npt-P)$, in the case of Higdon.\n\nThe value npt designates the number of discrete points that define the length of the blue band in the direction $x$ and/or $z$.\n4.4 - Finite Difference Operators and Discretization of Spatial and Temporal Domains\nWe employ the same methods as in the previous notebooks. \n4.5 - Standard Problem\nRedeeming the Standard Problem definitions discussed on the notebook <a href=\"01_introduction.ipynb\">Introduction to Acoustic Problem</a> we have that:\n\n$x_{I}$ = 0.0 Km;\n$x_{F}$ = 1.0 Km = 1000 m;\n$z_{I}$ = 0.0 Km;\n$z_{F}$ = 1.0 Km = 1000 m;\n\nThe spatial discretization parameters are given by:\n- $\\Delta x$ = 0.01 km = 10m;\n- $\\Delta z$ = 0.01 km = 10m;\nLet's consider a $I$ the time domain with the following limitations:\n\n$t_{I}$ = 0 s = 0 ms;\n$t_{F}$ = 1 s = 1000 ms;\n\nThe temporal discretization parameters are given by:\n\n$\\Delta t$ $\\approx$ 0.0016 s = 1.6 ms;\n$NT$ = 626.\n\nThe source term, velocity model and positioning of receivers will be as in the previous notebooks.\n4.6 - Numerical Simulations\nFor the numerical simulations of this notebook we use several of the notebook codes presented in <a href=\"02_damping.ipynb\">Damping</a> e <a href=\"03_pml.ipynb\">PML</a>. 
The new features will be described in more detail.\nSo, we import the following Python and Devito packages:", "# NBVAL_IGNORE_OUTPUT\n\nimport numpy as np\nimport matplotlib.pyplot as plot\nimport math as mt\nimport matplotlib.ticker as mticker \nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\nfrom matplotlib import cm", "From Devito's library of examples we import the following structures:", "# NBVAL_IGNORE_OUTPUT\n\n%matplotlib inline\nfrom examples.seismic import TimeAxis\nfrom examples.seismic import RickerSource\nfrom examples.seismic import Receiver\nfrom examples.seismic import plot_velocity\nfrom devito import SubDomain, Grid, NODE, TimeFunction, Function, Eq, solve, Operator", "The mesh parameters that we choose define the domain $\\Omega_{0}$ plus the absorption region. For this, we use the following data:", "nptx = 101\nnptz = 101\nx0 = 0.\nx1 = 1000. \ncompx = x1-x0\nz0 = 0.\nz1 = 1000.\ncompz = z1-z0;\nhxv = (x1-x0)/(nptx-1)\nhzv = (z1-z0)/(nptz-1)", "As we saw previously, HABC has three approach possibilities (A1, A2 and Higdon) and two types of weights (linear and non-linear). So, we insert two control variables. The variable called habctype chooses the type of HABC approach and is such that:\n\nhabctype=1 is equivalent to choosing A1;\nhabctype=2 is equivalent to choosing A2;\nhabctype=3 is equivalent to choosing Higdon;\n\nRegarding the weights, we will introduce the variable habcw that chooses the type of weight and is such that:\n\nhabcw=1 is equivalent to linear weight;\nhabcw=2 is equivalent to non-linear weights;\n\nIn this way, we make the following choices:", "habctype = 3\nhabcw = 2", "The number of points of the absorption layer in the directions $x$ and $z$ are given, respectively, by:", "npmlx = 20\nnpmlz = 20", "The lengths $L_{x}$ and $L_{z}$ are given, respectively, by:", "lx = npmlx*hxv\nlz = npmlz*hzv", "For the construction of the grid we have:", "nptx = nptx + 2*npmlx\nnptz = nptz + 1*npmlz\nx0 = x0 - hxv*npmlx\nx1 = x1 + hxv*npmlx\ncompx = x1-x0\nz0 = z0\nz1 = z1 + hzv*npmlz\ncompz = z1-z0\norigin = (x0,z0)\nextent = (compx,compz)\nshape = (nptx,nptz)\nspacing = (hxv,hzv)", "As in the case of the acoustic equation with Damping and in the acoustic equation with PML, we can define specific regions in our domain, since the solution $u_{1}(x,z,t)$ is only calculated in the blue region. We will soon follow a similar scheme for creating subdomains as was done on notebooks <a href=\"02_damping.ipynb\">Damping</a> and <a href=\"03_pml.ipynb\">PML</a>.\nFirst, we define a region corresponding to the entire domain, naming this region as d0. 
In the language of subdomains d0 it is written as:", "class d0domain(SubDomain):\n name = 'd0'\n def define(self, dimensions):\n x, z = dimensions\n return {x: x, z: z}\nd0_domain = d0domain()", "The blue region will be built with 3 divisions:\n\nd1 represents the left range in the direction x, where the pairs $(x,z)$ satisfy: $x\\in{0,npmlx}$ and $z\\in{0,nptz}$;\nd2 represents the rigth range in the direction x, where the pairs $(x,z)$ satisfy: $x\\in{nptx-npmlx,nptx}$ and $z\\in{0,nptz}$;\nd3 represents the left range in the direction y, where the pairs $(x,z)$ satisfy: $x\\in{npmlx,nptx-npmlx}$ and $z\\in{nptz-npmlz,nptz}$;\n\nThus, the regions d1, d2 and d3 aare described as follows in the language of subdomains:", "class d1domain(SubDomain):\n name = 'd1'\n def define(self, dimensions):\n x, z = dimensions\n return {x: ('left',npmlx), z: z}\nd1_domain = d1domain()\n\nclass d2domain(SubDomain):\n name = 'd2'\n def define(self, dimensions):\n x, z = dimensions\n return {x: ('right',npmlx), z: z}\nd2_domain = d2domain()\n\nclass d3domain(SubDomain):\n name = 'd3'\n def define(self, dimensions):\n x, z = dimensions\n if((habctype==3)&(habcw==1)):\n return {x: x, z: ('right',npmlz)}\n else:\n return {x: ('middle', npmlx, npmlx), z: ('right',npmlz)}\nd3_domain = d3domain()", "The figure below represents the division of domains that we did previously:\n<img src='domain3.png' width=500>\nAfter we defining the spatial parameters and constructing the subdomains, we then generate the spatial grid and set the velocity field:", "grid = Grid(origin=origin, extent=extent, shape=shape, subdomains=(d0_domain,d1_domain,d2_domain,d3_domain), dtype=np.float64)\n\nv0 = np.zeros((nptx,nptz)) \nX0 = np.linspace(x0,x1,nptx)\nZ0 = np.linspace(z0,z1,nptz)\n \nx10 = x0+lx\nx11 = x1-lx\n \nz10 = z0\nz11 = z1 - lz\n\nxm = 0.5*(x10+x11)\nzm = 0.5*(z10+z11)\n \npxm = 0\npzm = 0\n \nfor i in range(0,nptx):\n if(X0[i]==xm): pxm = i\n \nfor j in range(0,nptz):\n if(Z0[j]==zm): pzm = j\n \np0 = 0 \np1 = pzm\np2 = nptz\n \nv0[0:nptx,p0:p1] = 1.5\nv0[0:nptx,p1:p2] = 2.5", "Previously we introduce the local variables x10,x11,z10,z11,xm,zm,pxm and pzm that help us to create a specific velocity field, where we consider the whole domain (including the absorpion region). Below we include a routine to plot the velocity field.", "def graph2dvel(vel):\n plot.figure()\n plot.figure(figsize=(16,8))\n fscale = 1/10**(3)\n scale = np.amax(vel[npmlx:-npmlx,0:-npmlz])\n extent = [fscale*(x0+lx),fscale*(x1-lx), fscale*(z1-lz), fscale*(z0)]\n fig = plot.imshow(np.transpose(vel[npmlx:-npmlx,0:-npmlz]), vmin=0.,vmax=scale, cmap=cm.seismic, extent=extent)\n plot.gca().xaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))\n plot.gca().yaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))\n plot.title('Velocity Profile')\n plot.grid()\n ax = plot.gca()\n divider = make_axes_locatable(ax)\n cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\n cbar = plot.colorbar(fig, cax=cax, format='%.2e')\n cbar.set_label('Velocity [km/s]')\n plot.show()", "Below we include the plot of velocity field.", "# NBVAL_IGNORE_OUTPUT\n\ngraph2dvel(v0)", "Time parameters are defined and constructed by the following sequence of commands:", "t0 = 0.\ntn = 1000. 
\nCFL = 0.4\nvmax = np.amax(v0) \ndtmax = np.float64((min(hxv,hzv)*CFL)/(vmax))\nntmax = int((tn-t0)/dtmax)+1\ndt0 = np.float64((tn-t0)/ntmax)", "With the temporal parameters, we generate the time properties with TimeAxis as follows:", "time_range = TimeAxis(start=t0,stop=tn,num=ntmax+1)\nnt = time_range.num - 1", "The symbolic values associated with the spatial and temporal grids that are used in the composition of the equations are given by:", "(hx,hz) = grid.spacing_map \n(x, z) = grid.dimensions \nt = grid.stepping_dim\ndt = grid.stepping_dim.spacing", "We set the Ricker source:", "f0 = 0.01\nnsource = 1\nxposf = 0.5*(compx-2*npmlx*hxv)\nzposf = hzv\n\nsrc = RickerSource(name='src',grid=grid,f0=f0,npoint=nsource,time_range=time_range,staggered=NODE,dtype=np.float64)\nsrc.coordinates.data[:, 0] = xposf\nsrc.coordinates.data[:, 1] = zposf", "Below we include the plot of Ricker source.", "# NBVAL_IGNORE_OUTPUT\n\nsrc.show()", "We set the receivers:", "nrec = nptx\nnxpos = np.linspace(x0,x1,nrec)\nnzpos = hzv\n\nrec = Receiver(name='rec',grid=grid,npoint=nrec,time_range=time_range,staggered=NODE,dtype=np.float64)\nrec.coordinates.data[:, 0] = nxpos\nrec.coordinates.data[:, 1] = nzpos", "The displacement field u and the velocity vel are allocated:", "u = TimeFunction(name=\"u\",grid=grid,time_order=2,space_order=2,staggered=NODE,dtype=np.float64)\n\nvel = Function(name=\"vel\",grid=grid,space_order=2,staggered=NODE,dtype=np.float64)\nvel.data[:,:] = v0[:,:]", "We include the source term as src_term using the following command:", "src_term = src.inject(field=u.forward,expr=src*dt**2*vel**2)", "The Receivers are again called rec_term:", "rec_term = rec.interpolate(expr=u)", "The next step is to generate the $\\omega$ weights, which are selected using the habcw variable. Our construction approach will be in two steps: in a first step we build local vectors weightsx and weightsz that represent the weights in the directions $x$ and $z$, respectively. In a second step, with the weightsx and weightsz vectors, we distribute them in two global arrays called Mweightsx and Mweightsz that represent the distribution of these weights along the grid in the directions $x$ and $z$ respectively. 
The generateweights function below perform the operations listed previously:", "def generateweights():\n \n weightsx = np.zeros(npmlx)\n weightsz = np.zeros(npmlz)\n Mweightsx = np.zeros((nptx,nptz))\n Mweightsz = np.zeros((nptx,nptz))\n \n if(habcw==1):\n \n for i in range(0,npmlx):\n weightsx[i] = (npmlx-i)/(npmlx)\n \n for i in range(0,npmlz):\n weightsz[i] = (npmlz-i)/(npmlz)\n \n if(habcw==2):\n \n mx = 2\n mz = 2\n \n if(habctype==3):\n \n alphax = 1.0 + 0.15*(npmlx-mx) \n alphaz = 1.0 + 0.15*(npmlz-mz)\n \n else:\n \n alphax = 1.5 + 0.07*(npmlx-mx) \n alphaz = 1.5 + 0.07*(npmlz-mz)\n \n for i in range(0,npmlx):\n \n if(0<=i<=(mx)):\n weightsx[i] = 1\n elif((mx+1)<=i<=npmlx-1):\n weightsx[i] = ((npmlx-i)/(npmlx-mx))**(alphax)\n else:\n weightsx[i] = 0\n \n for i in range(0,npmlz):\n \n if(0<=i<=(mz)):\n weightsz[i] = 1\n elif((mz+1)<=i<=npmlz-1):\n weightsz[i] = ((npmlz-i)/(npmlz-mz))**(alphaz)\n else:\n weightsz[i] = 0\n \n for k in range(0,npmlx):\n \n ai = k\n af = nptx - k - 1 \n bi = 0\n bf = nptz - k\n Mweightsx[ai,bi:bf] = weightsx[k]\n Mweightsx[af,bi:bf] = weightsx[k]\n \n for k in range(0,npmlz):\n \n ai = k\n af = nptx - k \n bf = nptz - k - 1 \n Mweightsz[ai:af,bf] = weightsz[k]\n \n return Mweightsx,Mweightsz", "Once the generateweights function has been created, we execute it with the following command:", "Mweightsx,Mweightsz = generateweights();", "Below we include a routine to plot the weight fields.", "def graph2dweight(D): \n plot.figure()\n plot.figure(figsize=(16,8))\n fscale = 1/10**(-3)\n fscale = 10**(-3)\n scale = np.amax(D)\n extent = [fscale*x0,fscale*x1, fscale*z1, fscale*z0]\n fig = plot.imshow(np.transpose(D), vmin=0.,vmax=scale, cmap=cm.seismic, extent=extent)\n plot.gca().xaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))\n plot.gca().yaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))\n plot.title('Weight Function')\n plot.grid()\n ax = plot.gca()\n divider = make_axes_locatable(ax)\n cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\n cbar = plot.colorbar(fig, cax=cax, format='%.2e')\n cbar.set_label('Weights')\n plot.show()", "Below we include the plot of weights field in $x$ direction.", "# NBVAL_IGNORE_OUTPUT\n\ngraph2dweight(Mweightsx)", "Below we include the plot of weights field in $z$ direction.", "# NBVAL_IGNORE_OUTPUT\n\ngraph2dweight(Mweightsz)", "Next we create the fields for the weight arrays weightsx and weightsz:", "weightsx = Function(name=\"weightsx\",grid=grid,space_order=2,staggered=NODE,dtype=np.float64)\nweightsx.data[:,:] = Mweightsx[:,:]\n\nweightsz = Function(name=\"weightsz\",grid=grid,space_order=2,staggered=NODE,dtype=np.float64)\nweightsz.data[:,:] = Mweightsz[:,:]", "For the discretization of the A2 and Higdon's boundary conditions (to calculate $u_{1}(x,z,t)$) we need information from three time levels, namely $u(x,z,t-1)$, $u (x,z,t)$ and $u(x,z,t+1)$. 
So it is convenient to create the three fields:", "u1 = Function(name=\"u1\" ,grid=grid,space_order=2,staggered=NODE,dtype=np.float64)\nu2 = Function(name=\"u2\" ,grid=grid,space_order=2,staggered=NODE,dtype=np.float64)\nu3 = Function(name=\"u3\" ,grid=grid,space_order=2,staggered=NODE,dtype=np.float64)", "We will assign to each of them the three time solutions described previously, that is,\n\nu1(x,z) = u(x,z,t-1);\nu2(x,z) = u(x,z,t);\nu3(x,z) = u(x,z,t+1);\n\nThese three assignments can be represented by the stencil01 given by:", "stencil01 = [Eq(u1,u.backward),Eq(u2,u),Eq(u3,u.forward)]", "An update of the term u3(x,z) will be necessary after updating u(x,z,t+1) in the direction $x$, so that we can continue to apply the HABC method. This update is given by stencil02 defined as:", "stencil02 = [Eq(u3,u.forward)]", "For the acoustic equation with HABC without the source term we need in $\\Omega$ \n\neq1 = u.dt2 - vel0 * vel0 * u.laplace;\n\nSo the pde that represents this equation is given by:", "pde0 = Eq(u.dt2 - u.laplace*vel**2)", "And the stencil for pde0 is given to:", "stencil0 = Eq(u.forward, solve(pde0,u.forward))", "For the blue region we will divide it into $npmlx$ layers in the $x$ direction and $npmlz$ layers in the $z$ direction. In this case, the representation is a little more complex than shown in the figures that exemplify the regions $A_{k}$ because there are intersections between the layers.\nObservation: Note that the representation of the $A_{k}$ layers that we present in our text reflects the case where $npmlx=npmlz$. However, our code includes the case illustrated in the figure, as well as situations in which $npmlx\\neq npmlz$. The discretizations of the bounadry conditions A1, A2 and Higdon follow in the bibliographic references at the end. They will not be detailled here, but can be seen in the codes below. 
\nIn the sequence of codes below we build the pdes that represent the eqs of the regions $B_{1}$, $B_{2}$ and $B_{3}$ and/or in the corners (red points in the case of A2) as represented in the following figure:\n<img src='region2.png' width=500>\nIn the sequence, we present the stencils for each of these pdes.\nSo, for the A1 case we have the following pdes and stencils:", "if(habctype==1):\n\n # Region B_{1}\n aux1 = ((-vel[x,z]*dt+hx)*u2[x,z] + (vel[x,z]*dt+hx)*u2[x+1,z] + (vel[x,z]*dt-hx)*u3[x+1,z])/(vel[x,z]*dt+hx)\n pde1 = (1-weightsx[x,z])*u3[x,z] + weightsx[x,z]*aux1\n stencil1 = Eq(u.forward,pde1,subdomain = grid.subdomains['d1'])\n\n # Region B_{3}\n aux2 = ((-vel[x,z]*dt+hx)*u2[x,z] + (vel[x,z]*dt+hx)*u2[x-1,z] + (vel[x,z]*dt-hx)*u3[x-1,z])/(vel[x,z]*dt+hx)\n pde2 = (1-weightsx[x,z])*u3[x,z] + weightsx[x,z]*aux2\n stencil2 = Eq(u.forward,pde2,subdomain = grid.subdomains['d2'])\n\n # Region B_{2}\n aux3 = ((-vel[x,z]*dt+hz)*u2[x,z] + (vel[x,z]*dt+hz)*u2[x,z-1] + (vel[x,z]*dt-hz)*u3[x,z-1])/(vel[x,z]*dt+hz)\n pde3 = (1-weightsz[x,z])*u3[x,z] + weightsz[x,z]*aux3\n stencil3 = Eq(u.forward,pde3,subdomain = grid.subdomains['d3'])", "For the A2 case we have the following pdes and stencils:", "if(habctype==2):\n \n # Region B_{1}\n cte11 = (1/(2*dt**2)) + (1/(2*dt*hx))*vel[x,z]\n cte21 = -(1/(2*dt**2)) + (1/(2*dt*hx))*vel[x,z] - (1/(2*hz**2))*vel[x,z]*vel[x,z] \n cte31 = -(1/(2*dt**2)) - (1/(2*dt*hx))*vel[x,z]\n cte41 = (1/(dt**2))\n cte51 = (1/(4*hz**2))*vel[x,z]**2\n\n aux1 = (cte21*(u3[x+1,z] + u1[x,z]) + cte31*u1[x+1,z] + cte41*(u2[x,z]+u2[x+1,z]) + cte51*(u3[x+1,z+1] + u3[x+1,z-1] + u1[x,z+1] + u1[x,z-1]))/cte11\n pde1 = (1-weightsx[x,z])*u3[x,z] + weightsx[x,z]*aux1\n stencil1 = Eq(u.forward,pde1,subdomain = grid.subdomains['d1'])\n \n # Region B_{3}\n cte12 = (1/(2*dt**2)) + (1/(2*dt*hx))*vel[x,z]\n cte22 = -(1/(2*dt**2)) + (1/(2*dt*hx))*vel[x,z] - (1/(2*hz**2))*vel[x,z]**2\n cte32 = -(1/(2*dt**2)) - (1/(2*dt*hx))*vel[x,z]\n cte42 = (1/(dt**2))\n cte52 = (1/(4*hz**2))*vel[x,z]*vel[x,z]\n \n aux2 = (cte22*(u3[x-1,z] + u1[x,z]) + cte32*u1[x-1,z] + cte42*(u2[x,z]+u2[x-1,z]) + cte52*(u3[x-1,z+1] + u3[x-1,z-1] + u1[x,z+1] + u1[x,z-1]))/cte12\n pde2 = (1-weightsx[x,z])*u3[x,z] + weightsx[x,z]*aux2\n stencil2 = Eq(u.forward,pde2,subdomain = grid.subdomains['d2'])\n\n # Region B_{2}\n cte13 = (1/(2*dt**2)) + (1/(2*dt*hz))*vel[x,z]\n cte23 = -(1/(2*dt**2)) + (1/(2*dt*hz))*vel[x,z] - (1/(2*hx**2))*vel[x,z]**2\n cte33 = -(1/(2*dt**2)) - (1/(2*dt*hz))*vel[x,z]\n cte43 = (1/(dt**2))\n cte53 = (1/(4*hx**2))*vel[x,z]*vel[x,z]\n\n aux3 = (cte23*(u3[x,z-1] + u1[x,z]) + cte33*u1[x,z-1] + cte43*(u2[x,z]+u2[x,z-1]) + cte53*(u3[x+1,z-1] + u3[x-1,z-1] + u1[x+1,z] + u1[x-1,z]))/cte13\n pde3 = (1-weightsz[x,z])*u3[x,z] + weightsz[x,z]*aux3\n stencil3 = Eq(u.forward,pde3,subdomain = grid.subdomains['d3'])\n\n # Red point rigth side\n stencil4 = [Eq(u[t+1,nptx-1-k,nptz-1-k],(1-weightsz[nptx-1-k,nptz-1-k])*u3[nptx-1-k,nptz-1-k] + \n weightsz[nptx-1-k,nptz-1-k]*(((-(1/(4*hx)) + (1/(4*hz)) - (np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))*u3[nptx-1-k,nptz-2-k] \n + ((1/(4*hx)) - (1/(4*hz)) - (np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))*u3[nptx-2-k,nptz-1-k] \n + ((1/(4*hx)) + (1/(4*hz)) - (np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))*u3[nptx-2-k,nptz-2-k] \n + (-(1/(4*hx)) - (1/(4*hz)) + (np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))*u2[nptx-1-k,nptz-1-k] \n + (-(1/(4*hx)) + (1/(4*hz)) + (np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))*u2[nptx-1-k,nptz-2-k] \n + ((1/(4*hx)) - (1/(4*hz)) + 
(np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))*u2[nptx-2-k,nptz-1-k] \n + ((1/(4*hx)) + (1/(4*hz)) + (np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))*u2[nptx-2-k,nptz-2-k])\n / (((1/(4*hx)) + (1/(4*hz)) + (np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))))) for k in range(0,npmlz)] \n\n # Red point left side\n stencil5 = [Eq(u[t+1,k,nptz-1-k],(1-weightsx[k,nptz-1-k] )*u3[k,nptz-1-k] \n + weightsx[k,nptz-1-k]*(( (-(1/(4*hx)) + (1/(4*hz)) - (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))*u3[k,nptz-2-k] \n + ((1/(4*hx)) - (1/(4*hz)) - (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))*u3[k+1,nptz-1-k] \n + ((1/(4*hx)) + (1/(4*hz)) - (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))*u3[k+1,nptz-2-k] \n + (-(1/(4*hx)) - (1/(4*hz)) + (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))*u2[k,nptz-1-k] \n + (-(1/(4*hx)) + (1/(4*hz)) + (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))*u2[k,nptz-2-k] \n + ((1/(4*hx)) - (1/(4*hz)) + (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))*u2[k+1,nptz-1-k] \n + ((1/(4*hx)) + (1/(4*hz)) + (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))*u2[k+1,nptz-2-k])\n / (((1/(4*hx)) + (1/(4*hz)) + (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))))) for k in range(0,npmlx)]", "For the Higdon case we have the following pdes and stencils:", "if(habctype==3):\n\n alpha1 = 0.0\n alpha2 = np.pi/4\n a1 = 0.5\n b1 = 0.5\n a2 = 0.5\n b2 = 0.5\n\n # Region B_{1}\n gama111 = np.cos(alpha1)*(1-a1)*(1/dt)\n gama121 = np.cos(alpha1)*(a1)*(1/dt)\n gama131 = np.cos(alpha1)*(1-b1)*(1/hx)*vel[x,z]\n gama141 = np.cos(alpha1)*(b1)*(1/hx)*vel[x,z]\n \n gama211 = np.cos(alpha2)*(1-a2)*(1/dt)\n gama221 = np.cos(alpha2)*(a2)*(1/dt)\n gama231 = np.cos(alpha2)*(1-b2)*(1/hx)*vel[x,z]\n gama241 = np.cos(alpha2)*(b2)*(1/hx)*vel[x,z]\n \n c111 = gama111 + gama131\n c121 = -gama111 + gama141\n c131 = gama121 - gama131\n c141 = -gama121 - gama141\n \n c211 = gama211 + gama231\n c221 = -gama211 + gama241\n c231 = gama221 - gama231\n c241 = -gama221 - gama241\n\n aux1 = ( u2[x,z]*(-c111*c221-c121*c211) + u3[x+1,z]*(-c111*c231-c131*c211) + u2[x+1,z]*(-c111*c241-c121*c231-c141*c211-c131*c221) \n + u1[x,z]*(-c121*c221) + u1[x+1,z]*(-c121*c241-c141*c221) + u3[x+2,z]*(-c131*c231) +u2[x+2,z]*(-c131*c241-c141*c231)\n + u1[x+2,z]*(-c141*c241))/(c111*c211)\n pde1 = (1-weightsx[x,z])*u3[x,z] + weightsx[x,z]*aux1\n stencil1 = Eq(u.forward,pde1,subdomain = grid.subdomains['d1'])\n\n # Region B_{3}\n gama112 = np.cos(alpha1)*(1-a1)*(1/dt)\n gama122 = np.cos(alpha1)*(a1)*(1/dt)\n gama132 = np.cos(alpha1)*(1-b1)*(1/hx)*vel[x,z]\n gama142 = np.cos(alpha1)*(b1)*(1/hx)*vel[x,z]\n \n gama212 = np.cos(alpha2)*(1-a2)*(1/dt)\n gama222 = np.cos(alpha2)*(a2)*(1/dt)\n gama232 = np.cos(alpha2)*(1-b2)*(1/hx)*vel[x,z]\n gama242 = np.cos(alpha2)*(b2)*(1/hx)*vel[x,z]\n \n c112 = gama112 + gama132\n c122 = -gama112 + gama142\n c132 = gama122 - gama132\n c142 = -gama122 - gama142\n \n c212 = gama212 + gama232\n c222 = -gama212 + gama242\n c232 = gama222 - gama232\n c242 = -gama222 - gama242\n\n aux2 = ( u2[x,z]*(-c112*c222-c122*c212) + u3[x-1,z]*(-c112*c232-c132*c212) + u2[x-1,z]*(-c112*c242-c122*c232-c142*c212-c132*c222) \n + u1[x,z]*(-c122*c222) + u1[x-1,z]*(-c122*c242-c142*c222) + u3[x-2,z]*(-c132*c232) +u2[x-2,z]*(-c132*c242-c142*c232)\n + u1[x-2,z]*(-c142*c242))/(c112*c212)\n pde2 = (1-weightsx[x,z])*u3[x,z] + weightsx[x,z]*aux2\n stencil2 = Eq(u.forward,pde2,subdomain = grid.subdomains['d2'])\n\n # Region B_{2}\n gama113 = np.cos(alpha1)*(1-a1)*(1/dt)\n gama123 = np.cos(alpha1)*(a1)*(1/dt)\n gama133 = np.cos(alpha1)*(1-b1)*(1/hz)*vel[x,z]\n gama143 = np.cos(alpha1)*(b1)*(1/hz)*vel[x,z]\n \n gama213 = np.cos(alpha2)*(1-a2)*(1/dt)\n gama223 = 
np.cos(alpha2)*(a2)*(1/dt)\n gama233 = np.cos(alpha2)*(1-b2)*(1/hz)*vel[x,z]\n gama243 = np.cos(alpha2)*(b2)*(1/hz)*vel[x,z]\n \n c113 = gama113 + gama133\n c123 = -gama113 + gama143\n c133 = gama123 - gama133\n c143 = -gama123 - gama143\n \n c213 = gama213 + gama233\n c223 = -gama213 + gama243\n c233 = gama223 - gama233\n c243 = -gama223 - gama243\n\n aux3 = ( u2[x,z]*(-c113*c223-c123*c213) + u3[x,z-1]*(-c113*c233-c133*c213) + u2[x,z-1]*(-c113*c243-c123*c233-c143*c213-c133*c223) \n + u1[x,z]*(-c123*c223) + u1[x,z-1]*(-c123*c243-c143*c223) + u3[x,z-2]*(-c133*c233) +u2[x,z-2]*(-c133*c243-c143*c233)\n + u1[x,z-2]*(-c143*c243))/(c113*c213)\n pde3 = (1-weightsz[x,z])*u3[x,z] + weightsz[x,z]*aux3\n stencil3 = Eq(u.forward,pde3,subdomain = grid.subdomains['d3'])", "The surface boundary conditions of the problem are the same as in the notebook <a href=\"01_introduction.ipynb\">Introduction to Acoustic Problem</a>. They are placed in the term bc and have the following form:", "bc = [Eq(u[t+1,x,0],u[t+1,x,1])]", "We will then define the operator (op) that will join the acoustic equation, source term, boundary conditions and receivers.\n\n\n\nThe acoustic wave equation in the d0 region: [stencil01];\n\n\n\n\nSource term: src_term;\n\n\n\n\nUpdating solutions over time: [stencil01,stencil02];\n\n\n\n\nThe acoustic wave equation in the d1, d2 e d3 regions: [stencil1,stencil2,stencil3];\n\n\n\n\nThe equation for red points for A2 method: [stencil5,stencil4];\n\n\n\n\nBoundry Conditions: bc;\n\n\n\n\nReceivers: rec_term;\n\n\n\nWe then define two types of op:\n\nThe first op is for the cases A1 and Higdon;\nThe second op is for the case A2;\n\nThe ops are constructed by the following commands:", "# NBVAL_IGNORE_OUTPUT\n\nif(habctype!=2):\n op = Operator([stencil0] + src_term + [stencil01,stencil3,stencil02,stencil2,stencil1] + bc + rec_term,subs=grid.spacing_map)\nelse:\n op = Operator([stencil0] + src_term + [stencil01,stencil3,stencil02,stencil2,stencil1,stencil02,stencil4,stencil5] + bc + rec_term,subs=grid.spacing_map)", "Initially:", "u.data[:] = 0.\nu1.data[:] = 0.\nu2.data[:] = 0.\nu3.data[:] = 0.", "We assign to op the number of time steps it must execute and the size of the time step in the local variables time and dt, respectively.", "# NBVAL_IGNORE_OUTPUT\n\nop(time=nt,dt=dt0)", "We view the result of the displacement field at the end time using the graph2d routine given by:", "def graph2d(U,i): \n plot.figure()\n plot.figure(figsize=(16,8))\n fscale = 1/10**(3)\n x0pml = x0 + npmlx*hxv\n x1pml = x1 - npmlx*hxv\n z0pml = z0 \n z1pml = z1 - npmlz*hzv\n scale = np.amax(U[npmlx:-npmlx,0:-npmlz])/10.\n extent = [fscale*x0pml,fscale*x1pml,fscale*z1pml,fscale*z0pml]\n fig = plot.imshow(np.transpose(U[npmlx:-npmlx,0:-npmlz]),vmin=-scale, vmax=scale, cmap=cm.seismic, extent=extent)\n plot.gca().xaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))\n plot.gca().yaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))\n plot.axis('equal')\n if(i==1): plot.title('Map - Acoustic Problem with Devito - HABC A1')\n if(i==2): plot.title('Map - Acoustic Problem with Devito - HABC A2')\n if(i==3): plot.title('Map - Acoustic Problem with Devito - HABC Higdon')\n plot.grid()\n ax = plot.gca()\n divider = make_axes_locatable(ax)\n cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\n cbar = plot.colorbar(fig, cax=cax, format='%.2e')\n cbar.set_label('Displacement [km]')\n plot.draw()\n plot.show()\n\n# NBVAL_IGNORE_OUTPUT\n\ngraph2d(u.data[0,:,:],habctype)", "We plot the Receivers shot 
records using the graph2drec routine.", "def graph2drec(rec,i): \n plot.figure()\n plot.figure(figsize=(16,8))\n fscaled = 1/10**(3)\n fscalet = 1/10**(3)\n x0pml = x0 + npmlx*hxv\n x1pml = x1 - npmlx*hxv\n scale = np.amax(rec[:,npmlx:-npmlx])/10.\n extent = [fscaled*x0pml,fscaled*x1pml, fscalet*tn, fscalet*t0]\n fig = plot.imshow(rec[:,npmlx:-npmlx], vmin=-scale, vmax=scale, cmap=cm.seismic, extent=extent)\n plot.gca().xaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))\n plot.gca().yaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f s'))\n plot.axis('equal')\n if(i==1): plot.title('Receivers Signal Profile - Devito with HABC A1')\n if(i==2): plot.title('Receivers Signal Profile - Devito with HABC A2')\n if(i==3): plot.title('Receivers Signal Profile - Devito with HABC Higdon')\n ax = plot.gca()\n divider = make_axes_locatable(ax)\n cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\n cbar = plot.colorbar(fig, cax=cax, format='%.2e')\n plot.show()\n\n# NBVAL_IGNORE_OUTPUT\n\ngraph2drec(rec.data,habctype)\n\nassert np.isclose(np.linalg.norm(rec.data), 990, rtol=1)", "4.7 - Conclusions\nWe have presented the HABC method for the acoustic wave equation, which can be used with any of the \nabsorbing boundary conditions A1, A2 or Higdon. The notebook also include the possibility of using these boundary conditions alone, without being combined with the HABC. The user has the possibilty of testing several combinations of parameters and observe the effects in the absorption of spurious reflections on computational boundaries.\nThe relevant references for the boundary conditions are furnished next.\n4.8 - References\n\n\nClayton, R., & Engquist, B. (1977). \"Absorbing boundary conditions for acoustic and elastic wave equations\", Bulletin of the seismological society of America, 67(6), 1529-1540. <a href=\"https://pubs.geoscienceworld.org/ssa/bssa/article/67/6/1529/117727?casa_token=4TvjJGJDLQwAAAAA:Wm-3fVLn91tdsdHv9H6Ek7tTQf0jwXVSF10zPQL61lXtYZhaifz7jsHxqTvrHPufARzZC2-lDw\">Reference Link.</a>\n\n\nEngquist, B., & Majda, A. (1979). \"Radiation boundary conditions for acoustic and elastic wave calculations,\" Communications on pure and applied mathematics, 32(3), 313-357. DOI: 10.1137/0727049. <a href=\"https://epubs.siam.org/doi/abs/10.1137/0727049\">Reference Link.</a>\n\n\nHigdon, R. L. (1987). \"Absorbing boundary conditions for difference approximations to the multidimensional wave equation,\" Mathematics of computation, 47(176), 437-459. DOI: 10.1090/S0025-5718-1986-0856696-4. <a href=\"https://www.ams.org/journals/mcom/1986-47-176/S0025-5718-1986-0856696-4/\">Reference Link.</a>\n\n\nHigdon, Robert L. \"Numerical absorbing boundary conditions for the wave equation,\" Mathematics of computation, v. 49, n. 179, p. 65-90, 1987. DOI: 10.1090/S0025-5718-1987-0890254-1. <a href=\"https://www.ams.org/journals/mcom/1987-49-179/S0025-5718-1987-0890254-1/\">Reference Link.</a>\n\n\nLiu, Y., & Sen, M. K. (2018). \"An improved hybrid absorbing boundary condition for wave equation modeling,\" Journal of Geophysics and Engineering, 15(6), 2602-2613. DOI: 10.1088/1742-2140/aadd31. <a href=\"https://academic.oup.com/jge/article/15/6/2602/5209803\">Reference Link.</a>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ML4DS/ML4all
U3.PCA/PCA_professor.ipynb
mit
[ "Principal Component Analysis\nThe code in this notebook has been taken from a notebook in the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.\nThe code has been released by VanderPlas under the MIT license.\nOur text is original, though the presentation structure partially follows VanderPlas' presentation of the topic.\nVersion: 1.0 (2020/09), Jesús Cid-Sueiro\n\n<!-- I KEEP THIS LINK, MAY BE WE COULD GENERATE SIMILAR COLAB LINKS TO ML4ALL \n<a href=\"https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.09-Principal-Component-Analysis.ipynb\"><img align=\"left\" src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open in Colab\" title=\"Open and Execute in Google Colaboratory\"></a>\n-->", "# Basic imports\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns; sns.set()", "Many machine learning applications involve the processing of highly multidimensional data. More data dimensions usually imply more information to make better predictions. However, a large dimension may state computational problems (the computational load of machine learning algorithms usually grows with the data dimension) and more difficulties to design a good predictor.\nFor this reason, a whole area of machine learning has been focused on feature extraction algorithms, i.e. algorithms that transform a multidimensional dataset into data with a reduced set of features. The goal of these techniques is to reduce the data dimension while preserving the most relevant information for the prediction task.\nFeature extraction (and, more generally, dimensionality reduction) algorithms are also useful for visualization. By reducing the data dimensions to 2 or 3, we can transform data into points in the plane or the space, that can be represented graphically.\nPrincipal Component Analysis (PCA) is a particular example of linear feature extraction methods, that compute the new features as linear combinations of the original data components. Besides feature extraction and visualization, PCA is also a usefull tool for noise filtering, as we will see later.\n1. A visual explanation.\nBefore going into the mathematical details, we can illustrate the behavior of PCA by looking at a two-dimensional dataset with 200 samples:", "rng = np.random.RandomState(1)\nX = np.dot(rng.rand(2, 2), rng.randn(2, 200)).T\nplt.scatter(X[:, 0], X[:, 1])\nplt.xlabel('$x_0$')\nplt.ylabel('$x_1$')\nplt.axis('equal')\nplt.show()", "PCA looks for the principal axes in the data, using them as new coordinates to represent the data points.\nWe can compute this as follows:", "from sklearn.decomposition import PCA\npca = PCA(n_components=2)\npca.fit(X)", "After fitting PCA to the data, we can read the directions of the new axes (the principal directions) using:", "print(pca.components_)", "These directions are unit vectors. We can plot them over the scatter plot of the input data, scaled up by the standard deviation of the data along each direction. 
The standard deviations can be computed as the square root of the variance along each direction, which is available through", "print(pca.explained_variance_)", "The resulting axis plot is the following", "def draw_vector(v0, v1, ax=None):\n ax = ax or plt.gca()\n arrowprops=dict(arrowstyle='->',\n linewidth=2,\n shrinkA=0, shrinkB=0, color='k')\n ax.annotate('', v1, v0, arrowprops=arrowprops)\n\n# plot data\nplt.scatter(X[:, 0], X[:, 1], alpha=0.2)\nfor length, vector in zip(pca.explained_variance_, pca.components_):\n v = vector * 3 * np.sqrt(length)\n draw_vector(pca.mean_, pca.mean_ + v)\nplt.axis('equal');", "The principal axes of the data can be used as a new basis for the data representation. The principal components of any point are given by the projections of the point onto each principal axes.", "# plot principal components\nT = pca.transform(X)\nplt.scatter(T[:, 0], T[:, 1], alpha=0.2)\nplt.axis('equal')\nplt.xlabel('component 1')\nplt.ylabel('component 2')\nplt.title('principal components')\nplt.show()", "Note that PCA is essentially an affine transformation: data is centered around the mean and rotated according to the principal directions. At this point, we can select those directions that may be more relevant for prediction.\n2. Mathematical Foundations\n(The material in this section is based on Wikipedia: Principal Component Analysis)\nIn this section we will see how the principal directions are determined mathematically, and how can they be used to tranform the original dataset. \nPCA is defined as a linear transformation that transforms the data to a new coordinate system such that the greatest variance by some scalar projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on.\nConsider a dataset ${\\cal S} = {{\\bf x}k, k=0,\\cdots, K-1}$ of $m$-dimensional samples arranged by rows in data matrix, ${\\bf X}$. Assume the dataset has zero sample mean, that is\n\\begin{align}\n\\sum{k=0}^{K-1} {\\bf x}_k = {\\bf 0}\n\\end{align}\nwhich implies that the sample mean of each column in ${\\bf X}$ is zero. If data is not zero-mean, the data matrix ${\\bf X}$ is built with rows ${\\bf x}_k - {\\bf m}$, where ${\\bf m}$ is the mean.\nPCA transforms each sample ${\\bf x}k \\in {\\cal S}$ into a vector of principal components ${\\bf t}_k$. The transformation is linear so each principal component can be computed as the scalar product of each sample with a weight vector of coefficients. For instance, if the coeficient vectors are ${\\bf w}_0, {\\bf w}_1, \\ldots, {\\bf w}{l-1}$, the principal components of ${\\bf x}k$ are\n$$\nt{k0} = {\\bf w}0^\\top \\mathbf{x}_k, \\ \nt{k1} = {\\bf w}1^\\top \\mathbf{x}_k, \\\nt{k2} = {\\bf w}_2^\\top \\mathbf{x}_k, \\\n...\n$$\nThese components can be computed iteratively. In the next section we will see how to compute the first one.\n2.1. Computing the first component\n2.2.1. Computing ${\\bf w}_0$\nThe principal direction is selected in such a way that the sample variance of the first components of the data (that is, $t_{00}, t_{10}, \\ldots, t_{K-1,0}$) is maximized. Since we can make the variance arbitrarily large by using an arbitrarily large ${\\bf w}_0$, we will impose a constraint of the size of the coefficient vectors, that should be unitary. 
Thus,\n$$\n\\|{\\bf w}_0\\| = 1\n$$\nNote that the mean of the transformed components is zero, because samples are zero-mean:\n\\begin{align}\n\\sum_{k=0}^{K-1} t_{k0} = \\sum_{k=0}^{K-1} {\\bf w}0^\\top {\\bf x}_k = {\\bf w}_0^\\top \\sum{k=0}^{K-1} {\\bf x}_k ={\\bf 0}\n\\end{align}\ntherefore, the variance of the first principal component can be computed as\n\\begin{align}\nV &= \\frac{1}{K} \\sum_{k=0}^{K-1} t_{k0}^2 \n = \\frac{1}{K} \\sum_{k=0}^{K-1} {\\bf w}0^\\top {\\bf x}_k {\\bf x}_k^\\top {\\bf w}_0 \n = \\frac{1}{K} {\\bf w}_0^\\top \\left(\\sum{k=0}^{K-1} {\\bf x}_k {\\bf x}_k^\\top \\right) {\\bf w}_0 \\\n &= \\frac{1}{K} {\\bf w}_0^\\top {\\bf X}^\\top {\\bf X} {\\bf w}_0\n\\end{align}\nThe first principal component ${\\bf w}_0$ is the maximizer of the variance, thus, it can be computed as\n$$\n{\\bf w}_0 = \\underset{\\Vert {\\bf w} \\Vert= 1}{\\operatorname{\\arg\\,max}} \\left{ {\\bf w}^\\top {\\bf X}^\\top {\\bf X} {\\bf w} \\right}$$\nSince ${\\bf X}^\\top {\\bf X}$ is necessarily a semidefinite matrix, the maximum is equal to the largest eigenvalue of the matrix, which occurs when ${\\bf w}_0$ is the corresponding eigenvector.\n2.2.2. Computing $t_{k0}$\nOnce we have computed the first eigenvector ${\\bf w}0$, we can compute the first component of each sample,\n$$\nt{k0} = {\\bf w}0^\\top \\mathbf{x}_k\n$$\nAlso, we can compute the projection of each sample along the first principal direction as \n$$\nt{k0} {\\bf w}_0\n$$\nWe can illustrate this with the example data, applying PCA with only one component", "pca = PCA(n_components=1)\npca.fit(X)\nT = pca.transform(X)\nprint(\"original shape: \", X.shape)\nprint(\"transformed shape:\", T.shape)", "and projecting the data over the first principal direction:", "X_new = pca.inverse_transform(T)\nplt.scatter(X[:, 0], X[:, 1], alpha=0.2)\nplt.scatter(X_new[:, 0], X_new[:, 1], alpha=0.8)\nplt.axis('equal');", "2.2. Computing further components\nThe error, i.e. the difference between any sample an its projection, is given by\n\\begin{align}\n\\hat{\\bf x}{k0} &= {\\bf x}_k - t{k0} {\\bf w}0 = {\\bf x}_k - {\\bf w}_0 {\\bf w}_0^\\top \\mathbf{x}_k = \\\n &= ({\\bf I} - {\\bf w}_0{\\bf w}_0^\\top ) {\\bf x}_k\n\\end{align}\nIf we arrange all error vectors, by rows, in a data matrix, we get\n$$\n\\hat{\\bf X}{0} = {\\bf X}({\\bf I} - {\\bf w}_0 {\\bf w}_0^T) \n$$\nThe second principal component can be computed by repeating the analysis in section 2.1 over the error matrix $\\hat{\\bf X}_{0}$. Thus, it is given by\n$$\n{\\bf w}_1 = \\underset{\\Vert {\\bf w} \\Vert= 1}{\\operatorname{\\arg\\,max}} \\left{ {\\bf w}^\\top \\hat{\\bf X}_0^\\top \\hat{\\bf X}_0 {\\bf w} \\right}\n$$\nIt turns out that this gives the eigenvector of ${\\bf X}^\\top {\\bf X}$ with the second largest eigenvalue.\nRepeating this process iterativelly (by substracting from the data all components in the previously computed principal directions) we can compute the third, fourth and succesive principal directions.\n2.3. 
Summary of computations\nSummarizing, we can conclude that the $l$ principal components of the data can be computed as follows: \n\nCompute the $l$ unitary eigenvectors ${\\bf w}0, {\\bf w}_1, \\ldots, {\\bf w}{l-1}$ from matrix ${\\bf X}^\\top{\\bf X}$ with the $l$ largest eigenvalues.\nArrange the eigenvectors columnwise into an $m \\times l$ weight matrix ${\\bf W} = ({\\bf w}0 | {\\bf w}_1 | \\ldots | {\\bf w}{l-1})$\nCompute the principal components for all samples in data matrix ${\\bf X}$ as\n$$\n{\\bf T} = {\\bf X}{\\bf W}\n$$\n\nThe computation of the eigenvectors of ${\\bf X}^\\top{\\bf X}$ can be problematic, specially if the data dimension is very high. Fortunately, there exist efficient algorithms for the computation of the eigenvectors without computing ${\\bf X}^\\top{\\bf X}$, by means of the singular value decomposition of matrix ${\\bf X}$. This is the method used by the PCA method from the sklearn library\n2. PCA as dimensionality reduction\nAfter a PCA transformation, we may find that the variance of the data along some of the principal directions is very small. Thus, we can simply remove those directions, and represent data using the components with the highest variance only.\nIn the above 2-dimensional example, we selected the principal direction only, and all data become projected onto a single line.\nThe key idea in the use of PCA for dimensionality reduction is that, if the removed dimensions had a very low variance, we can expect a small information loss for a prediction task. Thus, we can try to design our predictor with the selected features, with the hope to preserve a good prediction performance.\n3. PCA for visualization: Hand-written digits\nIn the illustrative example we used PCA to project 2-dimensional data into one dimension, but the same analysis can be applied to project $N$-dimensional data to $r<N$ dimensions. An interesting application of this is the projection to 2 or 3 dimensions, that can be visualized.\nWe will illustrate this using the digits dataset:", "from sklearn.datasets import load_digits\ndigits = load_digits()\ndigits.data.shape", "This dataset contains $8\\times 8$ pixel images of digit manuscritps. Thus, each image can be converted into a 64-dimensional vector, and then projected over into two dimensions:", "pca = PCA(2) # project from 64 to 2 dimensions\nprojected = pca.fit_transform(digits.data)\nprint(digits.data.shape)\nprint(projected.shape)", "Every image has been tranformed into a 2 dimensional vector, and we can represent them into a scatter plot:", "plt.scatter(projected[:, 0], projected[:, 1],\n c=digits.target, edgecolor='none', alpha=0.5,\n cmap=plt.cm.get_cmap('rainbow', 10))\nplt.xlabel('component 1')\nplt.ylabel('component 2')\nplt.colorbar();", "Note that we have just transformed a collection of digital images into a cloud of points, using a different color to represent the points corresponding to the same digit. Note that colors from the same digit tend to be grouped in the same cluster, which suggests that these two components may contain useful information for discriminating between digits. Clusters show some overlap, so maybe using more components could help for a better discrimination.\nThe example shows that, despite a 2-dimensional projection may loose relevant information for a prediction task, the visualization of this projections may provide some insights to the data analyst on the predition problem to solve.\n3.1. 
Interpreting principal components\nNote that an important step in the application of PCA to digital images is the vectorization: each digit image is converted into a 64 dimensional vector:\n$$\n{\\bf x} = (x_0, x_1, x_2 \\cdots x_{63})^\\top\n$$\nwhere $x_i$ represents the intesity of the $i$-th pixel in the image. We can go back to reconstruct the original image as follows: if $I_i$ is an black image with unit intensity at the $i$-th pixel only, we can reconstruct the original image as \n$$\n{\\rm image}({\\bf x}) = \\sum_{i=0}^{63} x_i I_i\n$$\nA crude way to reduce the dimensionality of this data is to remove some of the components in the sum. For instance, we can keep the first eight pixels, only. But we then we get a poor representation of the original image:", "def plot_pca_components(x, coefficients=None, mean=0, components=None,\n imshape=(8, 8), n_components=8, fontsize=12,\n show_mean=True):\n if coefficients is None:\n coefficients = x\n \n if components is None:\n components = np.eye(len(coefficients), len(x))\n \n mean = np.zeros_like(x) + mean\n\n fig = plt.figure(figsize=(1.2 * (5 + n_components), 1.2 * 2))\n g = plt.GridSpec(2, 4 + bool(show_mean) + n_components, hspace=0.3)\n\n def show(i, j, x, title=None):\n ax = fig.add_subplot(g[i, j], xticks=[], yticks=[])\n ax.imshow(x.reshape(imshape), interpolation='nearest')\n if title:\n ax.set_title(title, fontsize=fontsize)\n\n show(slice(2), slice(2), x, \"True\")\n \n approx = mean.copy()\n \n counter = 2\n if show_mean:\n show(0, 2, np.zeros_like(x) + mean, r'$\\mu$')\n show(1, 2, approx, r'$1 \\cdot \\mu$')\n counter += 1\n\n for i in range(n_components):\n approx = approx + coefficients[i] * components[i]\n show(0, i + counter, components[i], f'$c_{i}$')\n show(1, i + counter, approx, f\"${coefficients[i]:.2f} \\cdot c_{i}$\")\n #r\"${0:.2f} \\cdot c_{1}$\".format(coefficients[i], i))\n if show_mean or i > 0:\n plt.gca().text(0, 1.05, '$+$', ha='right', va='bottom',\n transform=plt.gca().transAxes, fontsize=fontsize)\n\n show(slice(2), slice(-2, None), approx, \"Approx\")\n return fig", "PCA provides an alternative basis for the image representation. Using PCA, we can represent each vector as linear combination of the principal direction vectors ${\\bf w}0, {\\bf w}_1, \\cdots, {\\bf w}{63}$:\n$$\n{\\bf x} = {\\bf m} + \\sum_{i=0}^{63} t_i {\\bf w}i\n$$\nand, thus, we can represent the image as the linear combination of the images associated to each direction vector\n$$\nimage({\\bf x}) = image({\\bf m}) + \\sum{i=0}^{63} t_i \\cdot image({\\bf w}_i)\n$$\nPCA selects the principal directions in such a way that the first components capture most of the variance of the data. Thus, a few components may provide a good approximation to the original image.\nThe figure shows a reconstruction of a digit using the mean image and the first eight PCA components:", "idx = 25 # Select digit from the dataset\npca = PCA(n_components=10)\nXproj = pca.fit_transform(digits.data)\nsns.set_style('white')\nfig = plot_pca_components(digits.data[idx], Xproj[idx],\n pca.mean_, pca.components_)\n", "4. 
Choosing the number of components\nThe number of components required to approximate the data can be quantified by computing the cumulative explained variance ratio as a function of the number of components:", "pca = PCA().fit(digits.data)\nplt.plot(np.cumsum(pca.explained_variance_ratio_))\nplt.xlabel('number of components')\nplt.ylabel('cumulative explained variance');\nprint(np.cumsum(pca.explained_variance_ratio_))", "In this curve we can see that the 16 principal components explain more than 86 % of the data variance. 32 out of 64 components explain 96.6 % of the data variance. This suggest that the original data dimension can be substantally reduced.\n5. PCA as Noise Filtering\nThe use of PCA for noise filtering can be illustrated with some examples from the digits dataset.", "def plot_digits(data):\n fig, axes = plt.subplots(4, 10, figsize=(10, 4),\n subplot_kw={'xticks':[], 'yticks':[]},\n gridspec_kw=dict(hspace=0.1, wspace=0.1))\n for i, ax in enumerate(axes.flat):\n ax.imshow(data[i].reshape(8, 8),\n cmap='binary', interpolation='nearest',\n clim=(0, 16))\nplot_digits(digits.data)", "As we have shown before, the majority of the data variance is concentrated in a fraction of the principal components. Now assume that the dataset is affected by AWGN noise:", "np.random.seed(42)\nnoisy = np.random.normal(digits.data, 4)\nplot_digits(noisy)", "It is not difficult to show that, in the noise samples are independent for all pixels, the noise variance over all principal directions is the same. Thus, the principal components with higher variance will be less afected by nose. By removing the compoments with lower variance, we will be removing noise, majoritarily.\nLet's train a PCA on the noisy data, requesting that the projection preserve 55% of the variance:", "pca = PCA(0.55).fit(noisy)\npca.n_components_", "15 components contain this amount of variance. The corresponding images are shown below:", "components = pca.transform(noisy)\nfiltered = pca.inverse_transform(components)\nplot_digits(filtered)", "This is another reason why PCA works well in some prediction problems: by removing the components with less variance, we can be removing mostly noise, keeping the relevant information for a prediction task in the selected components.\n6. Example: Eigenfaces\nWe will see another application of PCA using the Labeled Faces from the dataset taken from Scikit-Learn:", "from sklearn.datasets import fetch_lfw_people\nfaces = fetch_lfw_people(min_faces_per_person=60)\nprint(faces.target_names)\nprint(faces.images.shape)", "We will take a look at the first 150 principal components. Because of the large dimensionality of this dataset (close to 3000), we will select the randomized solver for a fast approximation to the first $N$ principal components.", "#from sklearn.decomposition import Randomized PCA\npca = PCA(150, svd_solver=\"randomized\")\npca.fit(faces.data)", "Now, let us visualize the images associated to the eigenvectors of the first principal components (the \"eigenfaces\"). 
These are the basis images, and all faces can be approximated as linear combinations of them.", "fig, axes = plt.subplots(3, 8, figsize=(9, 4),\n subplot_kw={'xticks':[], 'yticks':[]},\n gridspec_kw=dict(hspace=0.1, wspace=0.1))\nfor i, ax in enumerate(axes.flat):\n ax.imshow(pca.components_[i].reshape(62, 47), cmap='bone')", "Note that some eigenfaces seem to be associated to the lighting conditions of the image, an other to specific features of the faces (noses, eyes, mouth, etc).\nThe cumulative variance shows that 150 components cope with more than 90 % of the variance:", "plt.plot(np.cumsum(pca.explained_variance_ratio_))\nplt.xlabel('number of components')\nplt.ylabel('cumulative explained variance');", "We can compare the input images with the images reconstructed from these 150 components:", "# Compute the components and projected faces\npca = PCA(150, svd_solver=\"randomized\").fit(faces.data)\ncomponents = pca.transform(faces.data)\nprojected = pca.inverse_transform(components)\n\n# Plot the results\nfig, ax = plt.subplots(2, 10, figsize=(10, 2.5),\n subplot_kw={'xticks':[], 'yticks':[]},\n gridspec_kw=dict(hspace=0.1, wspace=0.1))\nfor i in range(10):\n ax[0, i].imshow(faces.data[i].reshape(62, 47), cmap='binary_r')\n ax[1, i].imshow(projected[i].reshape(62, 47), cmap='binary_r')\n \nax[0, 0].set_ylabel('full-dim\\ninput')\nax[1, 0].set_ylabel('150-dim\\nreconstruction');", "Note that, despite some image resolution is loss, only 150 features are enough to recognize the faces in the image. This shows the potential of PCA as a preprocessing step to reduce de dimensionality of the data (in this case, for more than 3000 to 150) without loosing prediction power." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
khalido/deep-learning
image-classification/dlnd_image_classification.ipynb
mit
[ "Image Classification\nIn this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.\nGet the Data\nRun the following cell to download the CIFAR-10 dataset for python.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nfrom urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport problem_unittests as tests\nimport tarfile\n\ncifar10_dataset_folder_path = 'cifar-10-batches-py'\n\n# Use Floyd's cifar-10 dataset if present\nfloyd_cifar10_location = '/input/cifar-10/python.tar.gz'\nif isfile(floyd_cifar10_location):\n tar_gz_path = floyd_cifar10_location\nelse:\n tar_gz_path = 'cifar-10-python.tar.gz'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(tar_gz_path):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:\n urlretrieve(\n 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',\n tar_gz_path,\n pbar.hook)\n\nif not isdir(cifar10_dataset_folder_path):\n with tarfile.open(tar_gz_path) as tar:\n tar.extractall()\n tar.close()\n\n\ntests.test_folder_path(cifar10_dataset_folder_path)", "Explore the Data\nThe dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:\n* airplane\n* automobile\n* bird\n* cat\n* deer\n* dog\n* frog\n* horse\n* ship\n* truck\nUnderstanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.\nAsk yourself \"What are all possible labels?\", \"What is the range of values for the image data?\", \"Are the labels in order or random?\". Answers to questions like these will help you preprocess the data and end up with better predictions.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport helper\nimport numpy as np\n\n# Explore the dataset\nbatch_id = 1\nsample_id = 5\nhelper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)", "Implement Preprocess Functions\nNormalize\nIn the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.", "def normalize(x):\n \"\"\"\n Normalize a list of sample image data in the range of 0 to 1\n : x: List of image data. 
The image shape is (32, 32, 3)\n : return: Numpy array of normalize data\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_normalize(normalize)", "One-hot encode\nJust like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.\nHint: Don't reinvent the wheel.", "def one_hot_encode(x):\n \"\"\"\n One hot encode a list of sample labels. Return a one-hot encoded vector for each label.\n : x: List of sample Labels\n : return: Numpy array of one-hot encoded labels\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_one_hot_encode(one_hot_encode)", "Randomize Data\nAs you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.\nPreprocess all the data and save it\nRunning the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport pickle\nimport problem_unittests as tests\nimport helper\n\n# Load the Preprocessed Validation data\nvalid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))", "Build the network\nFor the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.\n\nNote: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the \"Convolutional and Max Pooling Layer\" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.\nHowever, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. 
\n\nLet's begin!\nInput\nThe neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions\n* Implement neural_net_image_input\n * Return a TF Placeholder\n * Set the shape using image_shape with batch size set to None.\n * Name the TensorFlow placeholder \"x\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_label_input\n * Return a TF Placeholder\n * Set the shape using n_classes with batch size set to None.\n * Name the TensorFlow placeholder \"y\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_keep_prob_input\n * Return a TF Placeholder for dropout keep probability.\n * Name the TensorFlow placeholder \"keep_prob\" using the TensorFlow name parameter in the TF Placeholder.\nThese names will be used at the end of the project to load your saved model.\nNote: None for shapes in TensorFlow allow for a dynamic size.", "import tensorflow as tf\n\ndef neural_net_image_input(image_shape):\n \"\"\"\n Return a Tensor for a batch of image input\n : image_shape: Shape of the images\n : return: Tensor for image input.\n \"\"\"\n # TODO: Implement Function\n return None\n\n\ndef neural_net_label_input(n_classes):\n \"\"\"\n Return a Tensor for a batch of label input\n : n_classes: Number of classes\n : return: Tensor for label input.\n \"\"\"\n # TODO: Implement Function\n return None\n\n\ndef neural_net_keep_prob_input():\n \"\"\"\n Return a Tensor for keep probability\n : return: Tensor for keep probability.\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntf.reset_default_graph()\ntests.test_nn_image_inputs(neural_net_image_input)\ntests.test_nn_label_inputs(neural_net_label_input)\ntests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)", "Convolution and Max Pooling Layer\nConvolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:\n* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.\n* Apply a convolution to x_tensor using weight and conv_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\n* Add bias\n* Add a nonlinear activation to the convolution.\n* Apply Max Pooling using pool_ksize and pool_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\nNote: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. 
You may still use the shortcut option for all the other layers.", "def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):\n \"\"\"\n Apply convolution then max pooling to x_tensor\n :param x_tensor: TensorFlow Tensor\n :param conv_num_outputs: Number of outputs for the convolutional layer\n :param conv_ksize: kernal size 2-D Tuple for the convolutional layer\n :param conv_strides: Stride 2-D Tuple for convolution\n :param pool_ksize: kernal size 2-D Tuple for pool\n :param pool_strides: Stride 2-D Tuple for pool\n : return: A tensor that represents convolution and max pooling of x_tensor\n \"\"\"\n # TODO: Implement Function\n return None \n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_con_pool(conv2d_maxpool)", "Flatten Layer\nImplement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.", "def flatten(x_tensor):\n \"\"\"\n Flatten x_tensor to (Batch Size, Flattened Image Size)\n : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.\n : return: A tensor of size (Batch Size, Flattened Image Size).\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_flatten(flatten)", "Fully-Connected Layer\nImplement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.", "def fully_conn(x_tensor, num_outputs):\n \"\"\"\n Apply a fully connected layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_fully_conn(fully_conn)", "Output Layer\nImplement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.\nNote: Activation, softmax, or cross entropy should not be applied to this.", "def output(x_tensor, num_outputs):\n \"\"\"\n Apply a output layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_output(output)", "Create Convolutional Model\nImplement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. 
Use the layers you created above to create this model:\n\nApply 1, 2, or 3 Convolution and Max Pool layers\nApply a Flatten Layer\nApply 1, 2, or 3 Fully Connected Layers\nApply an Output Layer\nReturn the output\nApply TensorFlow's Dropout to one or more layers in the model using keep_prob.", "def conv_net(x, keep_prob):\n \"\"\"\n Create a convolutional neural network model\n : x: Placeholder tensor that holds image data.\n : keep_prob: Placeholder tensor that hold dropout keep probability.\n : return: Tensor that represents logits\n \"\"\"\n # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers\n # Play around with different number of outputs, kernel size and stride\n # Function Definition from Above:\n # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)\n \n\n # TODO: Apply a Flatten Layer\n # Function Definition from Above:\n # flatten(x_tensor)\n \n\n # TODO: Apply 1, 2, or 3 Fully Connected Layers\n # Play around with different number of outputs\n # Function Definition from Above:\n # fully_conn(x_tensor, num_outputs)\n \n \n # TODO: Apply an Output Layer\n # Set this to the number of classes\n # Function Definition from Above:\n # output(x_tensor, num_outputs)\n \n \n # TODO: return output\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n\n##############################\n## Build the Neural Network ##\n##############################\n\n# Remove previous weights, bias, inputs, etc..\ntf.reset_default_graph()\n\n# Inputs\nx = neural_net_image_input((32, 32, 3))\ny = neural_net_label_input(10)\nkeep_prob = neural_net_keep_prob_input()\n\n# Model\nlogits = conv_net(x, keep_prob)\n\n# Name logits Tensor, so that is can be loaded from disk after training\nlogits = tf.identity(logits, name='logits')\n\n# Loss and Optimizer\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))\noptimizer = tf.train.AdamOptimizer().minimize(cost)\n\n# Accuracy\ncorrect_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')\n\ntests.test_conv_net(conv_net)", "Train the Neural Network\nSingle Optimization\nImplement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:\n* x for image input\n* y for labels\n* keep_prob for keep probability for dropout\nThis function will be called for each batch, so tf.global_variables_initializer() has already been called.\nNote: Nothing needs to be returned. This function is only optimizing the neural network.", "def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):\n \"\"\"\n Optimize the session on a batch of images and labels\n : session: Current TensorFlow session\n : optimizer: TensorFlow optimizer function\n : keep_probability: keep probability\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n \"\"\"\n # TODO: Implement Function\n pass\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_train_nn(train_neural_network)", "Show Stats\nImplement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. 
Use a keep probability of 1.0 to calculate the loss and validation accuracy.", "def print_stats(session, feature_batch, label_batch, cost, accuracy):\n \"\"\"\n Print information about loss and validation accuracy\n : session: Current TensorFlow session\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n : cost: TensorFlow cost function\n : accuracy: TensorFlow accuracy function\n \"\"\"\n # TODO: Implement Function\n pass", "Hyperparameters\nTune the following parameters:\n* Set epochs to the number of iterations until the network stops learning or start overfitting\n* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:\n * 64\n * 128\n * 256\n * ...\n* Set keep_probability to the probability of keeping a node using dropout", "# TODO: Tune Parameters\nepochs = None\nbatch_size = None\nkeep_probability = None", "Train on a Single CIFAR-10 Batch\nInstead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nprint('Checking the Training on a Single Batch...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n batch_i = 1\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)", "Fully Train the Model\nNow that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_model_path = './image_classification'\n\nprint('Training...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n # Loop over all batches\n n_batches = 5\n for batch_i in range(1, n_batches + 1):\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)\n \n # Save Model\n saver = tf.train.Saver()\n save_path = saver.save(sess, save_model_path)", "Checkpoint\nThe model has been saved to disk.\nTest Model\nTest your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. 
If you don't, keep tweaking the model architecture and parameters.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport tensorflow as tf\nimport pickle\nimport helper\nimport random\n\n# Set batch size if not already set\ntry:\n if batch_size:\n pass\nexcept NameError:\n batch_size = 64\n\nsave_model_path = './image_classification'\nn_samples = 4\ntop_n_predictions = 3\n\ndef test_model():\n \"\"\"\n Test the saved model against the test dataset\n \"\"\"\n\n test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))\n loaded_graph = tf.Graph()\n\n with tf.Session(graph=loaded_graph) as sess:\n # Load model\n loader = tf.train.import_meta_graph(save_model_path + '.meta')\n loader.restore(sess, save_model_path)\n\n # Get Tensors from loaded model\n loaded_x = loaded_graph.get_tensor_by_name('x:0')\n loaded_y = loaded_graph.get_tensor_by_name('y:0')\n loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n loaded_logits = loaded_graph.get_tensor_by_name('logits:0')\n loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')\n \n # Get accuracy in batches for memory limitations\n test_batch_acc_total = 0\n test_batch_count = 0\n \n for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):\n test_batch_acc_total += sess.run(\n loaded_acc,\n feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})\n test_batch_count += 1\n\n print('Testing Accuracy: {}\\n'.format(test_batch_acc_total/test_batch_count))\n\n # Print Random Samples\n random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))\n random_test_predictions = sess.run(\n tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),\n feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})\n helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)\n\n\ntest_model()", "Why 50-80% Accuracy?\nYou might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 80%. That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_image_classification.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jseabold/statsmodels
examples/notebooks/kernel_density.ipynb
bsd-3-clause
[ "Kernel Density Estimation\nKernel density estimation is the process of estimating an unknown probability density function using a kernel function $K(u)$. While a histogram counts the number of data points in somewhat arbitrary regions, a kernel density estimate is a function defined as the sum of a kernel function on every data point. The kernel function typically exhibits the following properties:\n\nSymmetry such that $K(u) = K(-u)$.\nNormalization such that $\\int_{-\\infty}^{\\infty} K(u) \\ du = 1$ .\nMonotonically decreasing such that $K'(u) < 0$ when $u > 0$.\nExpected value equal to zero such that $\\mathrm{E}[K] = 0$.\n\nFor more information about kernel density estimation, see for instance Wikipedia - Kernel density estimation.\nA univariate kernel density estimator is implemented in sm.nonparametric.KDEUnivariate.\nIn this example we will show the following:\n\nBasic usage, how to fit the estimator.\nThe effect of varying the bandwidth of the kernel using the bw argument.\nThe various kernel functions available using the kernel argument.", "%matplotlib inline\nimport numpy as np\nfrom scipy import stats\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\nfrom statsmodels.distributions.mixture_rvs import mixture_rvs", "A univariate example", "np.random.seed(12345) # Seed the random number generator for reproducible results", "We create a bimodal distribution: a mixture of two normal distributions with locations at -1 and 1.", "# Location, scale and weight for the two distributions\ndist1_loc, dist1_scale, weight1 = -1 , .5, .25\ndist2_loc, dist2_scale, weight2 = 1 , .5, .75\n\n# Sample from a mixture of distributions\nobs_dist = mixture_rvs(prob=[weight1, weight2], size=250,\n dist=[stats.norm, stats.norm],\n kwargs = (dict(loc=dist1_loc, scale=dist1_scale),\n dict(loc=dist2_loc, scale=dist2_scale)))", "The simplest non-parametric technique for density estimation is the histogram.", "fig = plt.figure(figsize=(12, 5))\nax = fig.add_subplot(111)\n\n# Scatter plot of data samples and histogram\nax.scatter(obs_dist, np.abs(np.random.randn(obs_dist.size)),\n zorder=15, color='red', marker='x', alpha=0.5, label='Samples')\nlines = ax.hist(obs_dist, bins=20, edgecolor='k', label='Histogram')\n\nax.legend(loc='best')\nax.grid(True, zorder=-5)", "Fitting with the default arguments\nThe histogram above is discontinuous. 
To compute a continuous probability density function,\nwe can use kernel density estimation.\nWe initialize a univariate kernel density estimator using KDEUnivariate.", "kde = sm.nonparametric.KDEUnivariate(obs_dist)\nkde.fit() # Estimate the densities", "We present a figure of the fit, as well as the true distribution.", "fig = plt.figure(figsize=(12, 5))\nax = fig.add_subplot(111)\n\n# Plot the histrogram\nax.hist(obs_dist, bins=20, density=True, label='Histogram from samples',\n zorder=5, edgecolor='k', alpha=0.5)\n\n# Plot the KDE as fitted using the default arguments\nax.plot(kde.support, kde.density, lw=3, label='KDE from samples', zorder=10)\n\n# Plot the true distribution\ntrue_values = (stats.norm.pdf(loc=dist1_loc, scale=dist1_scale, x=kde.support)*weight1\n + stats.norm.pdf(loc=dist2_loc, scale=dist2_scale, x=kde.support)*weight2)\nax.plot(kde.support, true_values, lw=3, label='True distribution', zorder=15)\n\n# Plot the samples\nax.scatter(obs_dist, np.abs(np.random.randn(obs_dist.size))/40,\n marker='x', color='red', zorder=20, label='Samples', alpha=0.5)\n\nax.legend(loc='best')\nax.grid(True, zorder=-5)", "In the code above, default arguments were used. We can also vary the bandwidth of the kernel, as we will now see.\nVarying the bandwidth using the bw argument\nThe bandwidth of the kernel can be adjusted using the bw argument.\nIn the following example, a bandwidth of bw=0.2 seems to fit the data well.", "fig = plt.figure(figsize=(12, 5))\nax = fig.add_subplot(111)\n\n# Plot the histrogram\nax.hist(obs_dist, bins=25, label='Histogram from samples',\n zorder=5, edgecolor='k', density=True, alpha=0.5)\n\n# Plot the KDE for various bandwidths\nfor bandwidth in [0.1, 0.2, 0.4]:\n kde.fit(bw=bandwidth) # Estimate the densities\n ax.plot(kde.support, kde.density, '--', lw=2, color='k', zorder=10,\n label='KDE from samples, bw = {}'.format(round(bandwidth, 2)))\n\n# Plot the true distribution\nax.plot(kde.support, true_values, lw=3, label='True distribution', zorder=15)\n\n# Plot the samples\nax.scatter(obs_dist, np.abs(np.random.randn(obs_dist.size))/50,\n marker='x', color='red', zorder=20, label='Data samples', alpha=0.5)\n\nax.legend(loc='best')\nax.set_xlim([-3, 3])\nax.grid(True, zorder=-5)", "Comparing kernel functions\nIn the example above, a Gaussian kernel was used. 
Several other kernels are also available.", "from statsmodels.nonparametric.kde import kernel_switch\nlist(kernel_switch.keys())", "The available kernel functions", "# Create a figure\nfig = plt.figure(figsize=(12, 5))\n\n# Enumerate every option for the kernel\nfor i, (ker_name, ker_class) in enumerate(kernel_switch.items()):\n\n # Initialize the kernel object\n kernel = ker_class()\n\n # Sample from the domain\n domain = kernel.domain or [-3, 3]\n x_vals = np.linspace(*domain, num=2**10)\n y_vals = kernel(x_vals)\n\n # Create a subplot, set the title\n ax = fig.add_subplot(2, 4, i + 1)\n ax.set_title('Kernel function \"{}\"'.format(ker_name))\n ax.plot(x_vals, y_vals, lw=3, label='{}'.format(ker_name))\n ax.scatter([0], [0], marker='x', color='red')\n plt.grid(True, zorder=-5)\n ax.set_xlim(domain)\n\nplt.tight_layout()", "The available kernel functions on three data points\nWe now examine how the kernel density estimate will fit to three equally spaced data points.", "# Create three equidistant points\ndata = np.linspace(-1, 1, 3)\nkde = sm.nonparametric.KDEUnivariate(data)\n\n# Create a figure\nfig = plt.figure(figsize=(12, 5))\n\n# Enumerate every option for the kernel\nfor i, kernel in enumerate(kernel_switch.keys()):\n\n # Create a subplot, set the title\n ax = fig.add_subplot(2, 4, i + 1)\n ax.set_title('Kernel function \"{}\"'.format(kernel))\n\n # Fit the model (estimate densities)\n kde.fit(kernel=kernel, fft=False, gridsize=2**10)\n\n # Create the plot\n ax.plot(kde.support, kde.density, lw=3, label='KDE from samples', zorder=10)\n ax.scatter(data, np.zeros_like(data), marker='x', color='red')\n plt.grid(True, zorder=-5)\n ax.set_xlim([-3, 3])\n\nplt.tight_layout()", "A more difficult case\nThe fit is not always perfect. See the example below for a harder case.", "obs_dist = mixture_rvs([.25, .75], size=250, dist=[stats.norm, stats.beta],\n kwargs = (dict(loc=-1, scale=.5), dict(loc=1, scale=1, args=(1, .5))))\n\nkde = sm.nonparametric.KDEUnivariate(obs_dist)\nkde.fit()\n\nfig = plt.figure(figsize=(12, 5))\nax = fig.add_subplot(111)\nax.hist(obs_dist, bins=20, density=True, edgecolor='k', zorder=4, alpha=0.5)\nax.plot(kde.support, kde.density, lw=3, zorder=7)\n# Plot the samples\nax.scatter(obs_dist, np.abs(np.random.randn(obs_dist.size))/50,\n marker='x', color='red', zorder=20, label='Data samples', alpha=0.5)\nax.grid(True, zorder=-5)", "The KDE is a distribution\nSince the KDE is a distribution, we can access attributes and methods such as:\n\nentropy\nevaluate\ncdf\nicdf\nsf\ncumhazard", "obs_dist = mixture_rvs([.25, .75], size=1000, dist=[stats.norm, stats.norm],\n kwargs = (dict(loc=-1, scale=.5), dict(loc=1, scale=.5)))\nkde = sm.nonparametric.KDEUnivariate(obs_dist)\nkde.fit(gridsize=2**10)\n\nkde.entropy\n\nkde.evaluate(-1)", "Cumulative distribution, it's inverse, and the survival function", "fig = plt.figure(figsize=(12, 5))\nax = fig.add_subplot(111)\n\nax.plot(kde.support, kde.cdf, lw=3, label='CDF')\nax.plot(np.linspace(0, 1, num = kde.icdf.size), kde.icdf, lw=3, label='Inverse CDF')\nax.plot(kde.support, kde.sf, lw=3, label='Survival function')\nax.legend(loc = 'best')\nax.grid(True, zorder=-5)", "The Cumulative Hazard Function", "fig = plt.figure(figsize=(12, 5))\nax = fig.add_subplot(111)\nax.plot(kde.support, kde.cumhazard, lw=3, label='Cumulative Hazard Function')\nax.legend(loc = 'best')\nax.grid(True, zorder=-5)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
wilomaku/IA369Z
dev/mean-WJGH.ipynb
gpl-3.0
[ "Corpus callosum's shape signature for segmentation error detection in large datasets\nAbstract\nCorpus Callosum (CC) is a subcortical, white matter structure with great importance in clinical and research studies because its shape and volume are correlated with subject's characteristics and neurodegenerative diseases. CC segmentation is a important step for any medical, clinical or research posterior study. Currently, magnetic resonance imaging (MRI) is the main tool for evaluating brain because it offers the better soft tissue contrast. Particullary, segmentation in MRI difussion modality has great importante given information associated to brain microstruture and fiber composition.\nIn this work a method for detection of erroneous segmentations in large datasets is proposed based-on shape signature. Shape signature is obtained from segmentation, calculating curvature along contour using a spline formulation. A mean correct signature is used as reference for compare new segmentations through root mean square error. This method was applied to 152 subject dataset for three different segmentation methods in diffusion: Watershed, ROQS and pixel-based presenting high accuracy in error detection. This method do not require per-segmentation reference and it can be applied to any MRI modality and other image aplications.", "## Functions\n\nimport sys,os\npath = os.path.abspath('../dev/')\nif path not in sys.path:\n sys.path.append(path)\n\nimport bib_mri as FW\nimport numpy as np\nimport scipy as scipy\nimport scipy.misc as misc \nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nfrom numpy import genfromtxt\nimport platform\n\n%matplotlib inline\n\ndef sign_extract(seg, resols): #Function for shape signature extraction\n splines = FW.get_spline(seg,smoothness)\n\n sign_vect = np.array([]).reshape(0,points) #Initializing temporal signature vector\n for resol in resols:\n sign_vect = np.vstack((sign_vect, FW.get_profile(splines, n_samples=points, radius=resol)))\n \n return sign_vect\n\ndef sign_fit(sig_ref, sig_fit): #Function for signature fitting\n dif_curv = []\n for shift in range(points):\n dif_curv.append(np.abs(np.sum((sig_ref - np.roll(sig_fit[0],shift))**2)))\n return np.apply_along_axis(np.roll, 1, sig_fit, np.argmin(dif_curv))\n\nprint \"Python version: \", platform.python_version()\nprint \"Numpy version: \", np.version.version\nprint \"Scipy version: \", scipy.__version__\nprint \"Matplotlib version: \", mpl.__version__", "Introduction\nThe Corpus Callosum (CC) is the largest white matter structure in the central nervous system that connects both brain hemispheres and allows the communication between them. The CC has great importance in research studies due to the correlation between shape and volume with some subject's characteristics, such as: gender, age, numeric and mathematical skills and handedness. In addition, some neurodegenerative diseases like Alzheimer, autism, schizophrenia and dyslexia could cause CC shape deformation.\nCC segmentation is a necessary step for morphological and physiological features extraction in order to analyze the structure in image-based clinical and research applications. Magnetic Resonance Imaging (MRI) is the most suitable image technique for CC segmentation due to its ability to provide contrast between brain tissues however CC segmentation is challenging because of the shape and intensity variability between subjects, volume partial effect in diffusion MRI, fornex proximity and narrow areas in CC. 
Among the known MRI modalities, Diffusion-MRI arouses special interest to study the CC, despite its low resolution and high complexity, since it provides useful information related to the organization of brain tissues and the magnetic field does not interfere with the diffusion process itself.\nSome CC segmentation approaches using Diffusion-MRI were found in the literature. Niogi et al. proposed a method based on thresholding, Freitas et al. e Rittner et al. proposed region methods based on Watershed transform, Nazem-Zadeh et al. implemented based on level surfaces, Kong et al. presented an clustering algorithm for segmentation, Herrera et al. segmented CC directly in diffusion weighted imaging (DWI) using a model based on pixel classification and Garcia et al. proposed a hybrid segmentation method based on active geodesic regions and level surfaces.\nWith the growing of data and the proliferation of automatic algorithms, segmentation over large databases is affordable. Therefore, error automatic detection is important in order to facilitate and speed up filter on CC segmentation databases. presented proposals for content-based image retrieval (CBIR) using shape signature of the planar object representation.\nIn this work, a method for automatic detection of segmentation error in large datasets is proposed based on CC shape signature. Signature offers shape characterization of the CC and therefore it is expected that a \"typical correct signature\" represents well any correct segmentation. Signature is extracted measuring curvature along segmentation contour. The method was implemented in three main stages: mean correct signature generation, signature configuration and method testing. The first one takes 20 corrects segmentations and generates one correct signature of reference (typical correct signature), per-resolution, using mean values in each point. The second stage stage takes 10 correct segmentations and 10 erroneous segmentations and adjusts the optimal resolution and threshold, based on mean correct signature, that lets detection of erroneous segmentations. The third stage labels a new segmentation as correct and erroneous comparing with the mean signature using optimal resolution and threshold.\n<img src=\"../figures/workflow.png\">\nThe comparison between signatures is done using root mean square error (RMSE). True label for each segmentation was done visually. Correct segmentation corresponds to segmentations with at least 50% of agreement with the structure. 
It is expected that RMSE for correct segmentations is lower than RMSE associated to erroneous segmentation when compared with a typical correct segmentation.", "#Loading labeled segmentations\nseg_label = genfromtxt('../dataset/Seg_Watershed/watershed_label.csv', delimiter=',').astype('uint8')\n\nlist_mask = seg_label[seg_label[:,1] == 0, 0][:20] #Extracting correct segmentations for mean signature\nlist_normal_mask = seg_label[seg_label[:,1] == 0, 0][20:30] #Extracting correct names\nlist_error_mask = seg_label[seg_label[:,1] == 1, 0][:10] #Extracting correct names\n\nmask_correct = np.load('../dataset/Seg_Watershed/mask_wate_{}.npy'.format(list_mask[0]))\nmask_error = np.load('../dataset/Seg_Watershed/mask_wate_{}.npy'.format(list_error_mask[0]))\n\nplt.figure()\nplt.axis('off')\nplt.imshow(mask_correct,'gray',interpolation='none')\nplt.title(\"Correct segmentation example\")\nplt.show()\n\nplt.figure()\nplt.axis('off')\nplt.imshow(mask_error,'gray',interpolation='none')\nplt.title(\"Erroneous segmentation example\")\nplt.show()", "Shape signature for comparison\nSignature is a shape descriptor that measures the rate of variation along the segmentation contour. As shown in figure, the curvature $k$ in the pivot point $p$, with coordinates ($x_p$,$y_p$), is calculated using the next equation. This curvature depict the angle between the segments $\\overline{(x_{p-ls},y_{p-ls})(x_p,y_p)}$ and $\\overline{(x_p,y_p)(x_{p+ls},y_{p+ls})}$. These segments are located to a distance $ls>0$, starting in a pivot point and finishing in anterior and posterior points, respectively.\nThe signature is obtained calculating the curvature along all segmentation contour.\n\\begin{equation} \\label{eq:per1}\nk(x_p,y_p) = \\arctan\\left(\\frac{y_{p+ls}-y_p}{x_{p+ls}-x_p}\\right)-\\arctan\\left(\\frac{y_p-y_{p-ls}}{x_p-x_{p-ls}}\\right)\n\\end{equation}\n<img src=\"../figures/curvature.png\">\nSignature construction is performed from segmentation contour of the CC. From contour, spline is obtained. Spline purpose is twofold: to get a smooth representation of the contour and to facilitate calculation of\nthe curvature using its parametric representation. The signature is obtained measuring curvature along spline. $ls$ is the parametric distance between pivot point and both posterior and anterior points and it determines signature resolution. By simplicity, $ls$ is measured in percentage of reconstructed spline points.\nIn order to achieve quantitative comparison between two signatures root mean square error (RMSE) is introduced. RMSE measures distance, point to point, between signatures $a$ and $b$ along all points $p$ of signatures.\n\\begin{equation} \\label{eq:per4}\nRMSE = \\sqrt{\\frac{1}{P}\\sum_{p=1}^{P}(k_{ap}-k_{bp})^2}\n\\end{equation}\nFrequently, signatures of different segmentations are not fitted along the 'x' axis because of the initial point on the spline calculation starts in different relative positions. This makes impossible to compare directly two signatures and therefore, a prior fitting process must be accomplished. The fitting process is done shifting one of the signature while the other is kept fixed. For each shift, RMSE between the two signatures is measured. The point giving the minor error is the fitting point. Fitting was done at resolution $ls = 0.35$. 
This resolution represents globally the CC's shape and eases their fitting.\nAfter fitting, RMSE between signatures can be measured in order to achieve final quantitative comparison.\nSignature for segmentation error detection\nFor segmentation error detection, a typical correct signature is obtained calculating mean over a group of signatures from correct segmentations. Because of this signature could be used in any resolution, $ls$ must be chosen for achieve segmentation error detection. The optimal resolution must be able to return the greatest RMSE difference between correct and erroneous segmentation when compared with a typical correct signature.\nIn the optimal resolution, a threshold must be chosen for separate erroneous and correct segmentations. This threshold stays between RMSE associated to correct ($RMSE_E$) and erroneous ($RMSE_C$) signatures and it is given by the next equation where N (in percentage) represents proximity to correct or erroneous RMSE. If RMSE calculated over a group of signatures, mean value is applied.\n\\begin{equation} \\label{eq:eq3}\nth = N*(\\overline{RMSE_E}-\\overline{RMSE_C})+\\overline{RMSE_C}\n\\end{equation}\nExperiments and results\nIn this work, comparison of signatures through RMSE is used for segmentation error detection in large datasets. For this, it will be calculated a mean correct signature based on 20 correct segmentation signatures. This mean correct signature represents a tipycal correct segmentation. For a new segmentation, signature is extracted and compared with mean signature.\nFor experiments, DWI from 152 subjects at the University of Campinas, were acquired on a Philips scanner Achieva 3T in the axial plane with a $1$x$1mm$ spatial resolution and $2mm$ slice thickness, along $32$ directions ($b-value=1000s/mm^2$, $TR=8.5s$, and $TE=61ms$). All data used in this experiment was acquired through a project approved by the research ethics committee from the School of Medicine at UNICAMP. From each acquired DWI volume, only the midsaggital slice was used.\nThree segmentation methods were implemented to obtained binary masks over a 152 subject dataset: Watershed, ROQS and pixel-based. 40 Watershed segmentations were chosen as follows: 20 correct segmentations for mean correct signature generation and 10 correct and 10 erroneous segmentations for signature configuration stage. Watershed was chosen to generate and adjust the mean signature because of its higher error rate and its variability in the erroneous segmentation shape. These characteristics allow improve generalization. The method was tested on the remaining Watershed segmentations (108 masks) and two additional segmentations methods: ROQS (152 masks) and pixel-based (152 masks).\nMean correct signature generation\nIn this work, segmentations based on Watershed method were used for implementation of the first and second stages. From the Watershed dataset, 20 correct segmentations were chosen. Spline for each one was obtained from segmentation contour. The contour was obtained using mathematical morphology, applying xor logical operation, pixel-wise, between original segmentation and the eroded version of itself by an structuring element b:\n\\begin{equation} \\label{eq:per2}\nG_E = XOR(S,S \\ominus b)\n\\end{equation}\nFrom contour, it is calculated spline. The implementation, is a B-spline (Boor's basic spline). 
This formulation has two parameters: degree, representing polynomial degrees of the spline, and smoothness, being the trade off between proximity and smoothness in the fitness of the spline. Degree was fixed in 5 allowing adequate representation of the contour. Smoothness was fixed in 700. This value is based on the mean quantity of pixels of the contour that are passed for spline calculation. The curvature was measured over 500 points over the spline to generate the signature along 20 segmentations. Signatures were fitted to make possible comparison (Fig. signatures). Fitting resolution was fixed in 0.35.", "n_list = len(list_mask)\nsmoothness = 700 #Smoothness\ndegree = 5 #Spline degree\nfit_res = 0.35\nresols = np.arange(0.01,0.5,0.01) #Signature resolutions\nresols = np.insert(resols,0,fit_res) #Insert resolution for signature fitting\npoints = 500 #Points of Spline reconstruction\n\nrefer_wat = np.empty((n_list,resols.shape[0],points)) #Initializing signature vector\n\nfor mask in xrange(n_list):\n mask_p = np.load('../dataset/Seg_Watershed/mask_wate_{}.npy'.format(list_mask[mask]))\n \n refer_temp = sign_extract(mask_p, resols) #Function for shape signature extraction\n \n refer_wat[mask] = refer_temp\n if mask > 0: #Fitting curves using the first one as basis\n prof_ref = refer_wat[0] \n refer_wat[mask] = sign_fit(prof_ref[0], refer_temp) #Function for signature fitting\n \nprint \"Signatures' vector size: \", refer_wat.shape\n\nres_ex = 10\nplt.figure()\nplt.plot(refer_wat[:,res_ex,:].T)\nplt.title(\"Signatures for res: %f\"%(resols[res_ex]))\nplt.show()", "In order to get a representative correct signature, mean signature per-resolution is generated using 20 correct signatures. The mean is calculated in each point.", "refer_wat_mean = np.mean(refer_wat,axis=0) #Finding mean signature per resolution\n\nprint \"Mean signature size: \", refer_wat_mean.shape\n\nplt.figure() #Plotting mean signature\nplt.plot(refer_wat_mean[res_ex,:])\nplt.title(\"Mean signature for res: %f\"%(resols[res_ex]))\nplt.show()", "Signature configuration\nBecause of the mean signature was extracted for all the resolutions, it is necessary to find resolution in that diference between RMSE for correct signature and RMSE for erroneous signature is maximum. So, 20 news segmentations were used to find this optimal resolution, being divided as 10 correct segmentations and 10 erroneous segmentations. 
For each segmentation, it was extracted signature for all resolutions.", "n_list = np.amax((len(list_normal_mask),len(list_error_mask)))\nrefer_wat_n = np.empty((n_list,resols.shape[0],points)) #Initializing correct signature vector\nrefer_wat_e = np.empty((n_list,resols.shape[0],points)) #Initializing error signature vector\n\nfor mask in xrange(n_list):\n #Loading correct mask\n mask_pn = np.load('../dataset/Seg_Watershed/mask_wate_{}.npy'.format(list_normal_mask[mask]))\n refer_temp_n = sign_extract(mask_pn, resols) #Function for shape signature extraction\n refer_wat_n[mask] = sign_fit(refer_wat_mean[0], refer_temp_n) #Function for signature fitting\n #Loading erroneous mask\n mask_pe = np.load('../dataset/Seg_Watershed/mask_wate_{}.npy'.format(list_error_mask[mask]))\n refer_temp_e = sign_extract(mask_pe, resols) #Function for shape signature extraction\n refer_wat_e[mask] = sign_fit(refer_wat_mean[0], refer_temp_e) #Function for signature fitting\n \nprint \"Correct segmentations' vector: \", refer_wat_n.shape\nprint \"Erroneous segmentations' vector: \", refer_wat_e.shape\n\nplt.figure()\nplt.plot(refer_wat_n[:,res_ex,:].T)\nplt.title(\"Correct signatures for res: %f\"%(resols[res_ex]))\nplt.show()\n\nplt.figure()\nplt.plot(refer_wat_e[:,res_ex,:].T)\nplt.title(\"Erroneous signatures for res: %f\"%(resols[res_ex]))\nplt.show()", "The RMSE over the 10 correct segmentations was compared with RMSE over the 10 erroneous segmentations. As expected, RMSE for correct segmentations was greater than RMSE for erroneous segmentations along all the resolutions. In general, this is true, but optimal resolution guarantee the maximum difference between both of RMSE results: correct and erroneous.\nSo, to find optimal resolution, difference between correct and erroneous RMSE was calculated over all resolutions.", "rmse_nacum = np.sqrt(np.sum((refer_wat_mean - refer_wat_n)**2,axis=2)/(refer_wat_mean.shape[1]))\nrmse_eacum = np.sqrt(np.sum((refer_wat_mean - refer_wat_e)**2,axis=2)/(refer_wat_mean.shape[1]))\n\ndif_dis = rmse_eacum - rmse_nacum #Difference between erroneous signatures and correct signatures\n\nin_max_res = np.argmax(np.mean(dif_dis,axis=0)) #Finding optimal resolution at maximum difference\nopt_res = resols[in_max_res]\nprint \"Optimal resolution for error detection: \", opt_res\n\ncorrect_max = np.mean(rmse_nacum[:,in_max_res]) #Finding threshold for separate segmentations\nerror_min = np.mean(rmse_eacum[:,in_max_res])\nth_res = 0.3*(error_min-correct_max)+correct_max\nprint \"Threshold for separate segmentations: \", th_res\n\n#### Plotting erroneous and correct segmentation signatures\n\nticksx_resols = [\"%.2f\" % el for el in np.arange(0.01,0.5,0.01)] #Labels for plot xticks\nticksx_resols = ticksx_resols[::6]\nticksx_index = np.arange(1,50,6)\n\nfigpr = plt.figure() #Plotting mean RMSE for correct segmentations\nplt.boxplot(rmse_nacum[:,1:], showmeans=True) #Element 0 was introduced only for fitting, \n #in comparation is not used.\nplt.axhline(y=0, color='g', linestyle='--')\nplt.axhline(y=th_res, color='r', linestyle='--')\nplt.axvline(x=in_max_res, color='r', linestyle='--')\nplt.xlabel('Resolutions', fontsize = 12, labelpad=-2)\nplt.ylabel('RMSE correct signatures', fontsize = 12)\nplt.xticks(ticksx_index, ticksx_resols)\nplt.show()\n\nfigpr = plt.figure() #Plotting mean RMSE for erroneous segmentations\nplt.boxplot(rmse_eacum[:,1:], showmeans=True)\nplt.axhline(y=0, color='g', linestyle='--')\nplt.axhline(y=th_res, color='r', linestyle='--')\nplt.axvline(x=in_max_res, 
color='r', linestyle='--')\nplt.xlabel('Resolutions', fontsize = 12, labelpad=-2)\nplt.ylabel('RMSE error signatures', fontsize = 12)\nplt.xticks(ticksx_index, ticksx_resols)\nplt.show()\n\nfigpr = plt.figure() #Plotting difference for mean RMSE over all resolutions\nplt.boxplot(dif_dis[:,1:], showmeans=True)\nplt.axhline(y=0, color='g', linestyle='--')\nplt.axvline(x=in_max_res, color='r', linestyle='--')\nplt.xlabel('Resolutions', fontsize = 12, labelpad=-2)\nplt.ylabel('Difference RMSE signatures', fontsize = 12)\nplt.xticks(ticksx_index, ticksx_resols)\nplt.show()", "The greatest difference resulted at resolution 0.1. In this resolution, threshold for separate erroneous and correct segmentations is established as 30% of the distance between the mean RMSE of the correct masks and the mean RMSE of the erroneous masks.\nMethod testing\nFinally, method test was performed in the 152 subject dataset: Watershed dataset with 112 segmentations, ROQS dataset with 152 segmentations and pixel-based dataset with 152 segmentations.", "n_resols = [fit_res, opt_res] #Resolutions for fitting and comparison\n\n#### Teste dataset (Watershed)\n#Loading labels\nseg_label = genfromtxt('../dataset/Seg_Watershed/watershed_label.csv', delimiter=',').astype('uint8')\n\nall_seg = np.hstack((seg_label[seg_label[:,1] == 0, 0][30:],\n seg_label[seg_label[:,1] == 1, 0][10:])) #Extracting erroneous and correct names\n\nlab_seg = np.hstack((seg_label[seg_label[:,1] == 0, 1][30:],\n seg_label[seg_label[:,1] == 1, 1][10:])) #Extracting erroneous and correct labels\n\nrefer_wat_mean_opt = np.vstack((refer_wat_mean[0],refer_wat_mean[in_max_res])) #Mean signature with fitting \n #and optimal resolution\n\nrefer_seg = np.empty((all_seg.shape[0],len(n_resols),points)) #Initializing correct signature vector\nin_mask = 0\nfor mask in all_seg:\n mask_ = np.load('../dataset/Seg_Watershed/mask_wate_{}.npy'.format(mask))\n refer_temp = sign_extract(mask_, n_resols) #Function for shape signature extraction\n refer_seg[in_mask] = sign_fit(refer_wat_mean_opt[0], refer_temp) #Function for signature fitting\n \n ###### Uncomment this block to see each segmentation with true and predicted labels\n #RMSE_ = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[in_mask,1])**2)/(refer_wat_mean_opt.shape[1]))\n #plt.figure()\n #plt.axis('off')\n #plt.imshow(mask_,'gray',interpolation='none')\n #plt.title(\"True label: {}, Predic. 
label: {}\".format(lab_seg[in_mask],(RMSE_>th_res).astype('uint8')))\n #plt.show()\n \n in_mask += 1\n\n#### Segmentation evaluation result over all segmentations\nRMSE = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[:,1])**2,axis=1)/(refer_wat_mean_opt.shape[1]))\npred_seg = RMSE > th_res #Apply threshold\ncomp_seg = np.logical_not(np.logical_xor(pred_seg,lab_seg)) #Comparation method result with true labels\nprint \"Final accuracy on Watershed {} segmentations: {}\".format(len(comp_seg),\n np.sum(comp_seg)/(1.0*len(comp_seg)))\n\n#### Teste dataset (ROQS)\n\nseg_label = genfromtxt('../dataset/Seg_ROQS/roqs_label.csv', delimiter=',').astype('uint8') #Loading labels\n\nall_seg = np.hstack((seg_label[seg_label[:,1] == 0, 0],\n seg_label[seg_label[:,1] == 1, 0])) #Extracting erroneous and correct names\n\nlab_seg = np.hstack((seg_label[seg_label[:,1] == 0, 1],\n seg_label[seg_label[:,1] == 1, 1])) #Extracting erroneous and correct labels\n\nrefer_wat_mean_opt = np.vstack((refer_wat_mean[0],refer_wat_mean[in_max_res])) #Mean signature with fitting \n #and optimal resolution\n\nrefer_seg = np.empty((all_seg.shape[0],len(n_resols),points)) #Initializing correct signature vector\nin_mask = 0\nfor mask in all_seg:\n mask_ = np.load('../dataset/Seg_ROQS/mask_roqs_{}.npy'.format(mask))\n refer_temp = sign_extract(mask_, n_resols) #Function for shape signature extraction\n refer_seg[in_mask] = sign_fit(refer_wat_mean_opt[0], refer_temp) #Function for signature fitting\n \n ###### Uncomment this block to see each segmentation with true and predicted labels\n #RMSE_ = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[in_mask,1])**2)/(refer_wat_mean_opt.shape[1]))\n #plt.figure()\n #plt.axis('off')\n #plt.imshow(mask_,'gray',interpolation='none')\n #plt.title(\"True label: {}, Predic. 
label: {}\".format(lab_seg[in_mask],(RMSE_>th_res).astype('uint8')))\n #plt.show()\n \n in_mask += 1\n\n#### Segmentation evaluation result over all segmentations\nRMSE = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[:,1])**2,axis=1)/(refer_wat_mean_opt.shape[1]))\npred_seg = RMSE > th_res #Apply threshold\ncomp_seg = np.logical_not(np.logical_xor(pred_seg,lab_seg)) #Comparation method result with true labels\nprint \"Final accuracy on ROQS {} segmentations: {}\".format(len(comp_seg),\n np.sum(comp_seg)/(1.0*len(comp_seg)))\n\n#### Teste dataset (Pixel-based)\n\nseg_label = genfromtxt('../dataset/Seg_pixel/pixel_label.csv', delimiter=',').astype('uint8') #Loading labels\n\nall_seg = np.hstack((seg_label[seg_label[:,1] == 0, 0],\n seg_label[seg_label[:,1] == 1, 0])) #Extracting erroneous and correct names\n\nlab_seg = np.hstack((seg_label[seg_label[:,1] == 0, 1],\n seg_label[seg_label[:,1] == 1, 1])) #Extracting erroneous and correct labels\n\nrefer_wat_mean_opt = np.vstack((refer_wat_mean[0],refer_wat_mean[in_max_res])) #Mean signature with fitting \n #and optimal resolution\n\nrefer_seg = np.empty((all_seg.shape[0],len(n_resols),points)) #Initializing correct signature vector\nin_mask = 0\nfor mask in all_seg:\n mask_ = np.load('../dataset/Seg_pixel/mask_pixe_{}.npy'.format(mask))\n refer_temp = sign_extract(mask_, n_resols) #Function for shape signature extraction\n refer_seg[in_mask] = sign_fit(refer_wat_mean_opt[0], refer_temp) #Function for signature fitting\n \n ###### Uncomment this block to see each segmentation with true and predicted labels\n #RMSE_ = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[in_mask,1])**2)/(refer_wat_mean_opt.shape[1]))\n #plt.figure()\n #plt.axis('off')\n #plt.imshow(mask_,'gray',interpolation='none')\n #plt.title(\"True label: {}, Predic. label: {}\".format(lab_seg[in_mask],(RMSE_>th_res).astype('uint8')))\n #plt.show()\n \n in_mask += 1\n\n#### Segmentation evaluation result over all segmentations\nRMSE = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[:,1])**2,axis=1)/(refer_wat_mean_opt.shape[1]))\npred_seg = RMSE > th_res #Apply threshold\ncomp_seg = np.logical_not(np.logical_xor(pred_seg,lab_seg)) #Comparation method result with true labels\nprint \"Final accuracy on pixel-based {} segmentations: {}\".format(len(comp_seg),\n np.sum(comp_seg)/(1.0*len(comp_seg)))", "Discussion and conclusion\nIn this work, a method for segmentation error detection in large datasets was proposed based-on shape signature. RMSE was used for comparison between signatures. Signature can be extracted en various resolutions but optimal resolution (ls=0.1) was chosen in order to get maximum separation between correct RMSE and erroneous RMSE. In this optimal resolution, threshold was fixed at 27.95 allowing separation of the two segmentation classes.The method achieved 95% of accuracy on the test Watershed segmentations, and 95% and 94% on new datasets: ROQS and pixel-based, respectively.\n40 Watershed segmentations on dataset were used to generation and configuration mean correct signature because of the greater number of erroneous segmentations and major variability on the error shape. Because the signature holds the CC shape, the method can be extended to new datasets segmented with any method. Accuracy and generalization can be improve varying the segmentations used to generate and adjust the mean correct signature." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ucsd-ccbb/jupyter-genomics
notebooks/crispr/Dual CRISPR 2-Constuct Filter.ipynb
mit
[ "Dual CRISPR Screen Analysis\nConstruct Filter\nAmanda Birmingham, CCBB, UCSD (abirmingham@ucsd.edu)\nInstructions\nTo run this notebook reproducibly, follow these steps:\n1. Click Kernel > Restart & Clear Output\n2. When prompted, click the red Restart & clear all outputs button\n3. Fill in the values for your analysis for each of the variables in the Input Parameters section\n4. Click Cell > Run All\n<a name = \"input-parameters\"></a>\nInput Parameters", "g_num_processors = 3\ng_trimmed_fastqs_dir = \"/Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/data/processed/runTemp\"\ng_filtered_fastas_dir = \"\"\ng_min_trimmed_grna_len = 19\ng_max_trimmed_grna_len = 21\ng_len_of_seq_to_match = 19\ng_code_location = \"/Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python\"", "CCBB Library Imports", "import sys\nsys.path.append(g_code_location)", "Automated Set-Up", "# %load -s describe_var_list /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/utilities/analysis_run_prefixes.py\ndef describe_var_list(input_var_name_list):\n description_list = [\"{0}: {1}\\n\".format(name, eval(name)) for name in input_var_name_list]\n return \"\".join(description_list)\n\n\nfrom ccbbucsd.utilities.analysis_run_prefixes import check_or_set, get_run_prefix, get_timestamp\ng_filtered_fastas_dir = check_or_set(g_filtered_fastas_dir, g_trimmed_fastqs_dir)\nprint(describe_var_list(['g_filtered_fastas_dir']))\n\nfrom ccbbucsd.utilities.files_and_paths import verify_or_make_dir\nverify_or_make_dir(g_filtered_fastas_dir)", "Info Logging Pass-Through", "from ccbbucsd.utilities.notebook_logging import set_stdout_info_logger\nset_stdout_info_logger()", "Construct Filtering Functions", "import enum\n\n# %load -s TrimType,get_trimmed_suffix /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/malicrispr/scaffold_trim.py\nclass TrimType(enum.Enum):\n FIVE = \"5\"\n THREE = \"3\"\n FIVE_THREE = \"53\"\n\ndef get_trimmed_suffix(trimtype):\n return \"_trimmed{0}.fastq\".format(trimtype.value)\n\n\n# %load /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/malicrispr/count_filterer.py\n# standard libraries\nimport logging\n\n# ccbb libraries\nfrom ccbbucsd.utilities.bio_seq_utilities import trim_seq\nfrom ccbbucsd.utilities.basic_fastq import FastqHandler, paired_fastq_generator\nfrom ccbbucsd.utilities.files_and_paths import transform_path\n\n__author__ = \"Amanda Birmingham\"\n__maintainer__ = \"Amanda Birmingham\"\n__email__ = \"abirmingham@ucsd.edu\"\n__status__ = \"development\"\n\n\ndef get_filtered_file_suffix():\n return \"_len_filtered.fastq\"\n\n\ndef filter_pair_by_len(min_len, max_len, retain_len, output_dir, fw_fastq_fp, rv_fastq_fp):\n fw_fastq_handler = FastqHandler(fw_fastq_fp)\n rv_fastq_handler = FastqHandler(rv_fastq_fp)\n fw_out_handle, rv_out_handle = _open_output_file_pair(fw_fastq_fp, rv_fastq_fp, output_dir)\n counters = {\"num_pairs\": 0, \"num_pairs_passing\": 0}\n\n filtered_fastq_records = _filtered_fastq_generator(fw_fastq_handler, rv_fastq_handler, min_len, max_len, retain_len,\n counters)\n for fw_record, rv_record in filtered_fastq_records:\n fw_out_handle.writelines(fw_record.lines)\n rv_out_handle.writelines(rv_record.lines)\n\n fw_out_handle.close()\n rv_out_handle.close()\n return _summarize_counts(counters)\n\n\ndef _filtered_fastq_generator(fw_fastq_handler, rv_fastq_handler, min_len, max_len, retain_len, counters):\n paired_fastq_records = 
paired_fastq_generator(fw_fastq_handler, rv_fastq_handler, True)\n for curr_pair_fastq_records in paired_fastq_records:\n counters[\"num_pairs\"] += 1\n _report_progress(counters[\"num_pairs\"])\n\n fw_record = curr_pair_fastq_records[0]\n fw_passing_seq = _check_and_trim_seq(_get_upper_seq(fw_record), min_len, max_len, retain_len, False)\n if fw_passing_seq is not None:\n rv_record = curr_pair_fastq_records[1]\n rv_passing_seq = _check_and_trim_seq(_get_upper_seq(rv_record), min_len, max_len, retain_len, True)\n if rv_passing_seq is not None:\n counters[\"num_pairs_passing\"] += 1\n fw_record.sequence = fw_passing_seq\n fw_record.quality = trim_seq(fw_record.quality, retain_len, False)\n rv_record.sequence = rv_passing_seq\n rv_record.quality = trim_seq(rv_record.quality, retain_len, True)\n yield fw_record, rv_record\n\n\ndef _open_output_file_pair(fw_fastq_fp, rv_fastq_fp, output_dir):\n fw_fp = transform_path(fw_fastq_fp, output_dir, get_filtered_file_suffix())\n rv_fp = transform_path(rv_fastq_fp, output_dir, get_filtered_file_suffix())\n fw_handle = open(fw_fp, 'w')\n rv_handle = open(rv_fp, 'w')\n return fw_handle, rv_handle\n\n\ndef _report_progress(num_fastq_pairs):\n if num_fastq_pairs % 100000 == 0:\n logging.debug(\"On fastq pair number {0}\".format(num_fastq_pairs))\n\n\ndef _get_upper_seq(fastq_record):\n return fastq_record.sequence.upper()\n\n\ndef _check_and_trim_seq(input_seq, min_len, max_len, retain_len, retain_5p_end):\n result = None\n seq_len = len(input_seq)\n if seq_len >= min_len and seq_len <= max_len:\n result = trim_seq(input_seq, retain_len, retain_5p_end)\n return result\n\n\ndef _summarize_counts(counts_by_type):\n summary_pieces = []\n sorted_keys = sorted(counts_by_type.keys()) # sort to ensure deterministic output ordering\n for curr_key in sorted_keys:\n curr_value = counts_by_type[curr_key]\n summary_pieces.append(\"{0}:{1}\".format(curr_key, curr_value))\n result = \",\".join(summary_pieces)\n return result\n\n\nfrom ccbbucsd.utilities.parallel_process_fastqs import parallel_process_paired_reads, concatenate_parallel_results\n\ng_parallel_results = parallel_process_paired_reads(g_trimmed_fastqs_dir, get_trimmed_suffix(TrimType.FIVE_THREE), \n g_num_processors, filter_pair_by_len, \n [g_min_trimmed_grna_len, g_max_trimmed_grna_len, \n g_len_of_seq_to_match, g_filtered_fastas_dir])\n\nprint(concatenate_parallel_results(g_parallel_results))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
quantopian/research_public
notebooks/data/quandl.yahoo_index_vix/notebook.ipynb
apache-2.0
[ "Quandl: S&P 500 Volatility Index (VIX)\nIn this notebook, we'll take a look at data set , available on Quantopian. This dataset spans from January, 1990 through the current day. It contains the value for the index VIX, a measure of volatility in the S&P 500. We access this data via the API provided by Quandl. More details on this dataset can be found on Quandl's website.\nTo be clear, this is a single value for VIX each day.\nNotebook Contents\nThere are two ways to access the data and you'll find both of them listed below. Just click on the section you'd like to read through.\n\n<a href='#interactive'><strong>Interactive overview</strong></a>: This is only available on Research and uses blaze to give you access to large amounts of data. Recommended for exploration and plotting.\n<a href='#pipeline'><strong>Pipeline overview</strong></a>: Data is made available through pipeline which is available on both the Research & Backtesting environment. Recommended for custom factor development and moving back & forth between research/backtesting.\n\nLimits\nOne key caveat: we limit the number of results returned from any given expression to 10,000 to protect against runaway memory usage. To be clear, you have access to all the data server side. We are limiting the size of the responses back from Blaze.\nWith preamble in place, let's get started:\n<a id='interactive'></a>\nInteractive Overview\nAccessing the data with Blaze and Interactive on Research\nPartner datasets are available on Quantopian Research through an API service known as Blaze. Blaze provides the Quantopian user with a convenient interface to access very large datasets, in an interactive, generic manner.\nBlaze provides an important function for accessing these datasets. Some of these sets are many millions of records. Bringing that data directly into Quantopian Research directly just is not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side.\nIt is common to use Blaze to reduce your dataset in size, convert it over to Pandas and then to use Pandas for further computation, manipulation and visualization.\nHelpful links:\n* Query building for Blaze\n* Pandas-to-Blaze dictionary\n* SQL-to-Blaze dictionary.\nOnce you've limited the size of your Blaze object, you can convert it to a Pandas DataFrames using:\n\nfrom odo import odo\nodo(expr, pandas.DataFrame)\n\nTo see how this data can be used in your algorithm, search for the Pipeline Overview section of this notebook or head straight to <a href='#pipeline'>Pipeline Overview</a>", "# import the dataset\nfrom quantopian.interactive.data.quandl import yahoo_index_vix as dataset\n# Since this data is provided by Quandl for free, there is no _free version of this\n# data set, as found in the premium sets. This import gets you the entirety of this data set.\n\n# import data operations\nfrom odo import odo\n# import other libraries we will use\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n# Let's use blaze to understand the data a bit using Blaze dshape()\ndataset.dshape\n\n# And how many rows are there?\n# N.B. we're using a Blaze function to do this, not len()\ndataset.count()\n\n# Let's see what the data looks like. 
We'll grab the first three rows.\ndataset[:3]", "Let's go over the columns:\n- asof_date: the timeframe to which this data applies\n- timestamp: the simulated date upon which this data point is available to a backtest\n- open: opening price for the day indicated on asof_date\n- high: high price for the day indicated on asof_date\n- low: lowest price for the day indicated by asof_date\n- close: closing price for asof_date\nWe've done much of the data processing for you. Fields like timestamp and sid are standardized across all our Store Datasets, so the datasets are easy to combine. We have standardized the sid across all our equity databases.\nWe can select columns and rows with ease. Let's go plot it for fun below. 6500 rows is small enough to just convert right over to Pandas.", "# Convert it over to a Pandas dataframe for easy charting\nvix_df = odo(dataset, pd.DataFrame)\n\nvix_df.plot(x='asof_date', y='close')\nplt.xlabel(\"As of Date (asof_date)\")\nplt.ylabel(\"Close Price\")\nplt.axis([None, None, 0, 100])\nplt.title(\"VIX\")\nplt.legend().set_visible(False)", "<a id='pipeline'></a>\nPipeline Overview\nAccessing the data in your algorithms & research\nThe only method for accessing partner data within algorithms running on Quantopian is via the pipeline API. Different data sets work differently but in the case of this data, you can add this data to your pipeline as follows:\nImport the data set here\n\nfrom quantopian.pipeline.data.quandl import yahoo_index_vix\n\nThen in intialize() you could do something simple like adding the raw value of one of the fields to your pipeline:\n\npipe.add(yahoo_index_vix.close, 'close')", "# Import necessary Pipeline modules\nfrom quantopian.pipeline import Pipeline\nfrom quantopian.research import run_pipeline\nfrom quantopian.pipeline.factors import AverageDollarVolume\n\n# For use in your algorithms\n# Using the full dataset in your pipeline algo\nfrom quantopian.pipeline.data.quandl import yahoo_index_vix", "Now that we've imported the data, let's take a look at which fields are available for each dataset.\nYou'll find the dataset, the available fields, and the datatypes for each of those fields.", "print \"Here are the list of available fields per dataset:\"\nprint \"---------------------------------------------------\\n\"\n\ndef _print_fields(dataset):\n print \"Dataset: %s\\n\" % dataset.__name__\n print \"Fields:\"\n for field in list(dataset.columns):\n print \"%s - %s\" % (field.name, field.dtype)\n print \"\\n\"\n\nfor data in (yahoo_index_vix,):\n _print_fields(data)\n\n\nprint \"---------------------------------------------------\\n\"", "Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline.\nThis is constructed the same way as you would in the backtester. For more information on using Pipeline in Research view this thread:\nhttps://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters", "# Let's see what this data looks like when we run it through Pipeline\n# This is constructed the same way as you would in the backtester. 
For more information\n# on using Pipeline in Research view this thread:\n# https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters\npipe = Pipeline()\n \npipe.add(yahoo_index_vix.open_.latest, 'open')\npipe.add(yahoo_index_vix.close.latest, 'close')\npipe.add(yahoo_index_vix.adjusted_close.latest, 'adjusted_close')\npipe.add(yahoo_index_vix.high.latest, 'high')\npipe.add(yahoo_index_vix.low.latest, 'low')\npipe.add(yahoo_index_vix.volume.latest, 'volume')\n\n# The show_graph() method of pipeline objects produces a graph to show how it is being calculated.\npipe.show_graph(format='png')\n\n# run_pipeline will show the output of your pipeline\npipe_output = run_pipeline(pipe, start_date='2013-11-01', end_date='2013-11-25')\npipe_output", "Taking what we've seen from above, let's see how we'd move that into the backtester.", "# This section is only importable in the backtester\nfrom quantopian.algorithm import attach_pipeline, pipeline_output\n\n# General pipeline imports\nfrom quantopian.pipeline import Pipeline\nfrom quantopian.pipeline.factors import AverageDollarVolume\n\n# Import the datasets available\n# For use in your algorithms\n# Using the full dataset in your pipeline algo\nfrom quantopian.pipeline.data.quandl import yahoo_index_vix\n\ndef make_pipeline():\n # Create our pipeline\n pipe = Pipeline()\n\n # Add pipeline factors\n pipe.add(yahoo_index_vix.open_.latest, 'open')\n pipe.add(yahoo_index_vix.close.latest, 'close')\n pipe.add(yahoo_index_vix.adjusted_close.latest, 'adjusted_close')\n pipe.add(yahoo_index_vix.high.latest, 'high')\n pipe.add(yahoo_index_vix.low.latest, 'low')\n pipe.add(yahoo_index_vix.volume.latest, 'volume')\n\n return pipe\n\ndef initialize(context):\n attach_pipeline(make_pipeline(), \"pipeline\")\n \ndef before_trading_start(context, data):\n results = pipeline_output('pipeline')", "Now you can take that and begin to use it as a building block for your algorithms, for more examples on how to do that you can visit our <a href='https://www.quantopian.com/posts/pipeline-factor-library-for-data'>data pipeline factor library</a>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
DeepLearningUB/EBISS2017
1. Learning from data and optimization.ipynb
mit
[ "import warnings\nwarnings.filterwarnings('ignore')\n\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Basic Concepts\nWhat is \"learning from data\"?\n\nIn general Learning from Data is a scientific discipline that is concerned with the design and development of algorithms that allow computers to infer (from data) a model that allows compact representation (unsupervised learning) and/or good generalization (supervised learning).\n\nThis is an important technology because it enables computational systems to adaptively improve their performance with experience accumulated from the observed data. \nMost of these algorithms are based on the iterative solution of a mathematical problem that involves data and model. If there was an analytical solution to the problem, this should be the adopted one, but this is not the case for most of the cases.\nSo, the most common strategy for learning from data is based on solving a system of equations as a way to find a series of parameters of the model that minimizes a mathematical problem. This is called optimization.\nThe most important technique for solving optimization problems is gradient descend.\nPreliminary: Nelder-Mead method for function minimization.\nThe most simple thing we can try to minimize a function $f(x)$ would be to sample two points relatively near each other, and just repeatedly take a step down away from the largest value. This simple algorithm has a severe limitation: it can't get closer to the true minima than the step size. \nThe Nelder-Mead method dynamically adjusts the step size based off the loss of the new point. If the new point is better than any previously seen value, it expands the step size to accelerate towards the bottom. Likewise if the new point is worse it contracts the step size to converge around the minima. The usual settings are to half the step size when contracting and double the step size when expanding. \nThis method can be easily extended into higher dimensional examples, all that's required is taking one more point than there are dimensions. Then, the simplest approach is to replace the worst point with a point reflected through the centroid of the remaining n points. If this point is better than the best current point, then we can try stretching exponentially out along this line. On the other hand, if this new point isn't much better than the previous value, then we are stepping across a valley, so we shrink the step towards a better point.\n\nSee \"An Interactive Tutorial on Numerical Optimization\": http://www.benfrederickson.com/numerical-optimization/\n\nGradient descend (for hackers): 1-D\nLet's suppose that we have a function $f: \\Re \\rightarrow \\Re$. For example: \n$$f(x) = x^2$$\nOur objective is to find the argument $x$ that minimizes this function (for maximization, consider $-f(x)$). To this end, the critical concept is the derivative.\nThe derivative of $f$ of a variable $x$, $f'(x)$ or $\\frac{\\mathrm{d}f}{\\mathrm{d}x}$, is a measure of the rate at which the value of the function changes with respect to the change of the variable. 
It is defined as the following limit:\n$$ f'(x) = \\lim_{h \\rightarrow 0} \\frac{f(x + h) - f(x)}{h} $$\nThe derivative specifies how to scale a small change in the input in order to obtain the corresponding change in the output: \n$$ f(x + h) \\approx f(x) + h f'(x)$$", "# numerical derivative at a point x\n\ndef f(x):\n return x**2\n\ndef fin_dif(x, f, h = 0.00001):\n '''\n This method returns the derivative of f at x\n by using the finite difference method\n '''\n return (f(x+h) - f(x))/h\n\nx = 2.0\nprint \"{:2.4f}\".format(fin_dif(x,f))", "It can be shown that the “centered difference formula\" is better when computing numerical derivatives:\n$$ \\lim_{h \\rightarrow 0} \\frac{f(x + h) - f(x - h)}{2h} $$\nThe error in the \"finite difference\" approximation can be derived from Taylor's theorem and, assuming that $f$ is differentiable, is $O(h)$. In the case of “centered difference\" the error is $O(h^2)$.\nThe derivative tells how to chage $x$ in order to make a small improvement in $f$. \nMinimization\nThen, we can follow these steps to decrease the value of the function:\n\nStart from a random $x$ value.\nCompute the derivative $f'(x) = \\lim_{h \\rightarrow 0} \\frac{f(x + h) - f(x - h)}{2h}$.\nWalk a small step (possibly weighted by the derivative module) in the opposite direction of the derivative, because we know that $f(x - h \\mbox{ sign}(f'(x))$ is less than $f(x)$ for small enough $h$. \n\nThe search for the minima ends when the derivative is zero because we have no more information about which direction to move. $x$ is a critical o stationary point if $f'(x)=0$. \n\nA minimum (maximum) is a critical point where $f(x)$ is lower (higher) than at all neighboring points. \nThere is a third class of critical points: saddle points.\n\nIf $f$ is a convex function, this should be the minimum (maximum) of our functions. In other cases it could be a local minimum (maximum) or a saddle point.\nThere are two problems with numerical derivatives:\n+ It is approximate.\n+ It is very slow to evaluate (two function evaluations: $f(x + h) , f(x - h)$ ).\nStep size\nUsually, we multiply the derivative by a step size. This step size (often called alpha) has to be chosen carefully, as a value too small will result in a long computation time, while a value too large will not give you the right result (by overshooting) or even fail to converge. \nAnalytical derivative\nLet's suppose now that we know the analytical derivative. This is only one function evaluation!", "old_min = 0\ntemp_min = 15\nstep_size = 0.01\nprecision = 0.0001\n\ndef f(x):\n return x**2 - 6*x + 5\n \ndef f_derivative(x):\n import math\n return 2*x -6\n\nmins = []\ncost = []\n\nwhile abs(temp_min - old_min) > precision:\n old_min = temp_min \n move = f_derivative(old_min) * step_size\n temp_min = old_min - move\n cost.append((3-temp_min)**2)\n mins.append(temp_min)\n\n# rounding the result to 2 digits because of the step size\nprint \"Local minimum occurs at {:3.6f}.\".format(round(temp_min,2))", "Exercise\nWhat happens if step_size=1.0?", "# your solution\n", "An important feature of gradient descent is that there should be a visible improvement over time. \nIn the following example, we simply plotted the\nchange in the value of the minimum against the iteration during which it was calculated. 
As we can see, the distance gets smaller over time, but barely changes in later iterations.", "x = np.linspace(-10,20,100)\ny = x**2 - 6*x + 5\n\nx, y = (zip(*enumerate(cost)))\n\nfig, ax = plt.subplots(1, 1)\nfig.set_facecolor('#EAEAF2')\nplt.plot(x,y, 'r-', alpha=0.7)\nplt.ylim([-10,150])\nplt.gcf().set_size_inches((10,3))\nplt.grid(True)\nplt.show\n\nx = np.linspace(-10,20,100)\ny = x**2 - 6*x + 5\n\nfig, ax = plt.subplots(1, 1)\nfig.set_facecolor('#EAEAF2')\nplt.plot(x,y, 'r-')\nplt.ylim([-10,250])\nplt.gcf().set_size_inches((10,3))\nplt.grid(True)\nplt.plot(mins,cost,'o', alpha=0.3)\nax.text(mins[-1],\n cost[-1]+20,\n 'End (%s steps)' % len(mins),\n ha='center',\n color=sns.xkcd_rgb['blue'],\n )\nplt.show", "From derivatives to gradient: $n$-dimensional function minimization.\nLet's consider a $n$-dimensional function $f: \\Re^n \\rightarrow \\Re$. For example: \n$$f(\\mathbf{x}) = \\sum_{n} x_n^2$$\nOur objective is to find the argument $\\mathbf{x}$ that minimizes this function.\nThe gradient of $f$ is the vector whose components are the $n$ partial derivatives of $f$. It is thus a vector-valued function. \nThe gradient points in the direction of the greatest rate of increase of the function.\n$$\\nabla {f} = (\\frac{\\partial f}{\\partial x_1}, \\dots, \\frac{\\partial f}{\\partial x_n})$$", "def f(x):\n return sum(x_i**2 for x_i in x)\n\ndef fin_dif_partial_centered(x, f, i, h=1e-6):\n w1 = [x_j + (h if j==i else 0) for j, x_j in enumerate(x)]\n w2 = [x_j - (h if j==i else 0) for j, x_j in enumerate(x)]\n return (f(w1) - f(w2))/(2*h)\n\ndef gradient_centered(x, f, h=1e-6):\n return[round(fin_dif_partial_centered(x,f,i,h), 10) \n for i,_ in enumerate(x)]\n\nx = [1.0,1.0,1.0]\n\nprint '{:.6f}'.format(f(x)), gradient_centered(x,f)\n", "The function we have evaluated, $f({\\mathbf x}) = x_1^2+x_2^2+x_3^2$, is $3$ at $(1,1,1)$ and the gradient vector at this point is $(2,2,2)$. \nThen, we can follow this steps to maximize (or minimize) the function:\n\nStart from a random $\\mathbf{x}$ vector.\nCompute the gradient vector.\nWalk a small step in the opposite direction of the gradient vector.\n\n\nIt is important to be aware that this gradient computation is very expensive: if $\\mathbf{x}$ has dimension $n$, we have to evaluate $f$ at $2*n$ points.\n\nHow to use the gradient.\n$f(x) = \\sum_i x_i^2$, takes its mimimum value when all $x$ are 0. \nLet's check it for $n=3$:", "def euc_dist(v1,v2):\n import numpy as np\n import math\n v = np.array(v1)-np.array(v2)\n return math.sqrt(sum(v_i ** 2 for v_i in v))", "Let's start by choosing a random vector and then walking a step in the opposite direction of the gradient vector. We will stop when the difference (in $\\mathbf x$) between the new solution and the old solution is less than a tolerance value.", "# choosing a random vector\n\nimport random\nimport numpy as np\n\nx = [random.randint(-10,10) for i in range(3)]\nx\n\ndef step(x,grad,alpha):\n return [x_i - alpha * grad_i for x_i, grad_i in zip(x,grad)]\n\ntol = 1e-15\nalpha = 0.01\nwhile True:\n grad = gradient_centered(x,f)\n next_x = step(x,grad,alpha)\n if euc_dist(next_x,x) < tol:\n break\n x = next_x\nprint [round(i,10) for i in x]", "Choosing Alpha\nThe step size, alpha, is a slippy concept: if it is too small we will slowly converge to the solution, if it is too large we can diverge from the solution. \nThere are several policies to follow when selecting the step size:\n\nConstant size steps. 
In this case, the size step determines the precision of the solution.\nDecreasing step sizes.\nAt each step, select the optimal step (the one that get the lower $f(\\mathbf x)$).\n\nThe last policy is good, but too expensive. In this case we would consider a fixed set of values:", "step_size = [100, 10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001]", "Learning from data\nIn general, we have:\n\nA dataset ${(\\mathbf{x},y)}$ of $n$ examples. \nA target function $f_\\mathbf{w}$, that we want to minimize, representing the discrepancy between our data and the model we want to fit. The model is represented by a set of parameters $\\mathbf{w}$. \nThe gradient of the target function, $g_f$. \n\nIn the most common case $f$ represents the errors from a data representation model $M$. \nFor example, to fit the model clould be to find the optimal parameters $\\mathbf{w}$ that minimize the following expression:\n$$ f_\\mathbf{w} = \\frac{1}{n} \\sum_{i} (y_i - M(\\mathbf{x}_i,\\mathbf{w}))^2 $$\nFor example, $(\\mathbf{x},y)$ can represent:\n\n$\\mathbf{x}$: the behavior of a \"Candy Crush\" player; $y$: monthly payments. \n$\\mathbf{x}$: sensor data about your car engine; $y$: probability of engine error.\n$\\mathbf{x}$: finantial data of a bank customer; $y$: customer rating.\n\n\nIf $y$ is a real value, it is called a regression problem.\nIf $y$ is binary/categorical, it is called a classification problem. \n\nLet's suppose that our model is a one-dimensional linear model $M(\\mathbf{x},\\mathbf{w}) = w \\cdot x $. \nBatch gradient descend\nWe can implement gradient descend in the following way (batch gradient descend):", "import numpy as np\nimport random\n\n# f = 2x\nx = np.arange(10)\ny = np.array([2*i for i in x])\n\n# f_target = 1/n Sum (y - wx)**2\ndef target_f(x,y,w):\n return np.sum((y - x * w)**2.0) / x.size\n\n# gradient_f = 2/n Sum 2wx**2 - 2xy\ndef gradient_f(x,y,w):\n return 2 * np.sum(2*w*(x**2) - 2*x*y) / x.size\n\ndef step(w,grad,alpha):\n return w - alpha * grad\n\ndef BGD_multi_step(target_f, \n gradient_f, \n x, \n y, \n toler = 1e-6):\n \n alphas = [100, 10, 1, 0.1, 0.001, 0.00001]\n w = random.random()\n val = target_f(x,y,w)\n i = 0\n while True:\n i += 1\n gradient = gradient_f(x,y,w)\n next_ws = [step(w, gradient, alpha) for alpha in alphas]\n next_vals = [target_f(x,y,w) for w in next_ws]\n min_val = min(next_vals)\n next_w = next_ws[next_vals.index(min_val)] \n next_val = target_f(x,y,next_w) \n if (abs(val - next_val) < toler):\n return w\n else:\n w, val = next_w, next_val\n \nprint '{:.6f}'.format(BGD_multi_step(target_f, gradient_f, x, y))\n\n%%timeit\nBGD_multi_step(target_f, gradient_f, x, y)\n\ndef BGD(target_f, gradient_f, x, y, toler = 1e-6, alpha=0.01):\n w = random.random()\n val = target_f(x,y,w)\n i = 0\n while True:\n i += 1\n gradient = gradient_f(x,y,w)\n next_w = step(w, gradient, alpha)\n next_val = target_f(x,y,next_w) \n if (abs(val - next_val) < toler):\n return w\n else:\n w, val = next_w, next_val\n \nprint '{:.6f}'.format(BGD(target_f, gradient_f, x, y))\n\n%%timeit\nBGD(target_f, gradient_f, x, y)", "Stochastic Gradient Descend\nThe last function evals the whole dataset $(\\mathbf{x}_i,y_i)$ at every step. \nIf the dataset is large, this strategy is too costly. In this case we will use a strategy called SGD (Stochastic Gradient Descend).\nWhen learning from data, the cost function is additive: it is computed by adding sample reconstruction errors. 
\nThen, we can compute the estimate the gradient (and move towards the minimum) by using only one data sample (or a small data sample).\nThus, we will find the minimum by iterating this gradient estimation over the dataset.\nA full iteration over the dataset is called epoch. During an epoch, data must be used in a random order.\nIf we apply this method we have some theoretical guarantees to find a good minimum:\n+ SGD essentially uses the inaccurate gradient per iteration. Since there is no free food, what is the cost by using approximate gradient? The answer is that the convergence rate is slower than the gradient descent algorithm.\n+ The convergence of SGD has been analyzed using the theories of convex minimization and of stochastic approximation: it converges almost surely to a global minimum when the objective function is convex or pseudoconvex, and otherwise converges almost surely to a local minimum.", "def in_random_order(data):\n import random\n indexes = [i for i,_ in enumerate(data)]\n random.shuffle(indexes)\n for i in indexes:\n yield data[i]\n\nimport numpy as np\nimport random\n\ndef SGD(target_f, \n gradient_f, \n x, \n y, \n toler = 1e-6, \n epochs=100, \n alpha_0=0.01):\n \n data = zip(x,y)\n w = random.random()\n alpha = alpha_0\n min_w, min_val = float('inf'), float('inf')\n epoch = 0\n iteration_no_increase = 0\n while epoch < epochs and iteration_no_increase < 100:\n val = target_f(x, y, w)\n if min_val - val > toler:\n min_w, min_val = w, val\n alpha = alpha_0\n iteration_no_increase = 0\n else:\n iteration_no_increase += 1\n alpha *= 0.9\n for x_i, y_i in in_random_order(data):\n gradient_i = gradient_f(x_i, y_i, w)\n w = w - (alpha * gradient_i)\n epoch += 1\n return min_w\n\nprint 'w: {:.6f}'.format(SGD(target_f, gradient_f, x, y))", "Example: Stochastic Gradient Descent and Linear Regression\nThe linear regression model assumes a linear relationship between data:\n$$ y_i = w_1 x_i + w_0 $$\nLet's generate a more realistic dataset (with noise), where $w_1 = 2$ and $w_0 = 0$.\n\nThe bias trick. It is a little cumbersome to keep track separetey of $w_i$, the feature weights, and $w_0$, the bias. A common used trick is to combine these parameters into a single structure that holds both of them by extending the vector $x$ with one additional dimension that always holds the constant $1$. 
With this dimension the model simplifies to a single multiply $f(\\mathbf{x},\\mathbf{w}) = \\mathbf{w} \\cdot \\mathbf{x}$.", "%reset\nimport warnings\nwarnings.filterwarnings('ignore')\n\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom sklearn.datasets.samples_generator import make_regression \nfrom scipy import stats \nimport random\n\n%matplotlib inline\n\n# x: input data\n# y: noisy output data\n\nx = np.random.uniform(0,1,20)\n\n# f = 2x + 0\ndef f(x): return 2*x + 0\n\nnoise_variance =0.1\nnoise = np.random.randn(x.shape[0])*noise_variance\ny = f(x) + noise\n\nfig, ax = plt.subplots(1, 1)\nfig.set_facecolor('#EAEAF2')\nplt.xlabel('$x$', fontsize=15)\nplt.ylabel('$f(x)$', fontsize=15)\nplt.plot(x, y, 'o', label='y')\nplt.plot([0, 1], [f(0), f(1)], 'b-', label='f(x)')\nplt.ylim([0,2])\nplt.gcf().set_size_inches((10,3))\nplt.grid(True)\nplt.show\n\n# f_target = 1/n Sum (y - wx)**2\ndef target_f(x,y,w):\n return np.sum((y - x * w)**2.0) / x.size\n\n# gradient_f = 2/n Sum 2wx**2 - 2xy\ndef gradient_f(x,y,w):\n return 2 * np.sum(2*w*(x**2) - 2*x*y) / x.size\n\ndef in_random_order(data):\n indexes = [i for i,_ in enumerate(data)]\n random.shuffle(indexes)\n for i in indexes:\n yield data[i]\n\ndef SGD(target_f, \n gradient_f, \n x, \n y, \n toler = 1e-6, \n epochs=100, \n alpha_0=0.01):\n \n data = zip(x,y)\n w = random.random()\n alpha = alpha_0\n min_w, min_val = float('inf'), float('inf')\n iteration_no_increase = 0\n w_cost = []\n epoch = 0\n while epoch < epochs and iteration_no_increase < 100:\n val = target_f(x, y, w)\n if min_val - val > toler:\n min_w, min_val = w, val\n alpha = alpha_0\n iteration_no_increase = 0\n else:\n iteration_no_increase += 1\n alpha *= 0.9\n for x_i, y_i in in_random_order(data):\n gradient_i = gradient_f(x_i, y_i, w)\n w = w - (alpha * gradient_i)\n w_cost.append(target_f(x,y,w))\n epoch += 1\n return min_w, np.array(w_cost)\n\nw, target_value = SGD(target_f, gradient_f, x, y)\nprint 'w: {:.6f}'.format(w)\n\nfig, ax = plt.subplots(1, 1)\nfig.set_facecolor('#EAEAF2')\nplt.plot(x, y, 'o', label='t')\nplt.plot([0, 1], [f(0), f(1)], 'b-', label='f(x)', alpha=0.5)\nplt.plot([0, 1], [0*w, 1*w], 'r-', label='fitted line', alpha=0.5, linestyle='--')\nplt.xlabel('input x')\nplt.ylabel('target t')\nplt.title('input vs. target')\nplt.ylim([0,2])\nplt.gcf().set_size_inches((10,3))\nplt.grid(True)\nplt.show\n\nfig, ax = plt.subplots(1, 1)\nfig.set_facecolor('#EAEAF2')\nplt.plot(np.arange(target_value.size), target_value, 'o', alpha = 0.2)\nplt.xlabel('Iteration')\nplt.ylabel('Cost')\nplt.grid()\nplt.gcf().set_size_inches((10,3))\nplt.grid(True)\nplt.show()", "Mini-batch Gradient Descent\nIn code, general batch gradient descent looks something like this:\npython\nnb_epochs = 100\nfor i in range(nb_epochs):\n grad = evaluate_gradient(target_f, data, w)\n w = w - learning_rate * grad\nFor a pre-defined number of epochs, we first compute the gradient vector of the target function for the whole dataset w.r.t. our parameter vector. 
\nStochastic gradient descent (SGD) in contrast performs a parameter update for each training example and label:\npython\nnb_epochs = 100\nfor i in range(nb_epochs):\n np.random.shuffle(data)\n for sample in data:\n grad = evaluate_gradient(target_f, sample, w)\n w = w - learning_rate * grad\nMini-batch gradient descent finally takes the best of both worlds and performs an update for every mini-batch of $n$ training examples:\npython\nnb_epochs = 100\nfor i in range(nb_epochs):\n np.random.shuffle(data)\n for batch in get_batches(data, batch_size=50):\n grad = evaluate_gradient(target_f, batch, w)\n w = w - learning_rate * grad\nMinibatch SGD has the advantage that it works with a slightly less noisy estimate of the gradient. However, as the minibatch size increases, the number of updates done per computation done decreases (eventually it becomes very inefficient, like batch gradient descent). \nThere is an optimal trade-off (in terms of computational efficiency) that may vary depending on the data distribution and the particulars of the class of function considered, as well as how computations are implemented.", "def get_batches(iterable, n = 1):\n current_batch = []\n for item in iterable:\n current_batch.append(item)\n if len(current_batch) == n:\n yield current_batch\n current_batch = []\n if current_batch:\n yield current_batch\n\n%reset\nimport warnings\nwarnings.filterwarnings('ignore')\n\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom sklearn.datasets.samples_generator import make_regression \nfrom scipy import stats \nimport random\n\n%matplotlib inline\n\n# x: input data\n# y: noisy output data\n\nx = np.random.uniform(0,1,2000)\n\n# f = 2x + 0\ndef f(x): return 2*x + 0\n\nnoise_variance =0.1\nnoise = np.random.randn(x.shape[0])*noise_variance\ny = f(x) + noise\n\nplt.plot(x, y, 'o', label='y')\nplt.plot([0, 1], [f(0), f(1)], 'b-', label='f(x)')\nplt.xlabel('$x$', fontsize=15)\nplt.ylabel('$t$', fontsize=15)\nplt.ylim([0,2])\nplt.title('inputs (x) vs targets (y)')\nplt.grid()\nplt.legend(loc=2)\nplt.gcf().set_size_inches((10,3))\nplt.show()\n\n# f_target = 1/n Sum (y - wx)**2\ndef target_f(x,y,w):\n return np.sum((y - x * w)**2.0) / x.size\n\n# gradient_f = 2/n Sum 2wx**2 - 2xy\ndef gradient_f(x,y,w):\n return 2 * np.sum(2*w*(x**2) - 2*x*y) / x.size\n\ndef in_random_order(data):\n indexes = [i for i,_ in enumerate(data)]\n random.shuffle(indexes)\n for i in indexes:\n yield data[i]\n \ndef get_batches(iterable, n = 1):\n current_batch = []\n for item in iterable:\n current_batch.append(item)\n if len(current_batch) == n:\n yield current_batch\n current_batch = []\n if current_batch:\n yield current_batch\n\ndef SGD_MB(target_f, gradient_f, x, y, epochs=100, alpha_0=0.01):\n data = zip(x,y)\n w = random.random()\n alpha = alpha_0\n min_w, min_val = float('inf'), float('inf')\n epoch = 0\n while epoch < epochs:\n val = target_f(x, y, w)\n if val < min_val:\n min_w, min_val = w, val\n alpha = alpha_0\n else:\n alpha *= 0.9\n np.random.shuffle(data)\n for batch in get_batches(data, n = 100):\n x_batch = np.array(zip(*batch)[0])\n y_batch = np.array(zip(*batch)[1])\n gradient = gradient_f(x_batch, y_batch, w)\n w = w - (alpha * gradient)\n epoch += 1\n return min_w\n\nw = SGD_MB(target_f, gradient_f, x, y)\nprint 'w: {:.6f}'.format(w)\n\nplt.plot(x, y, 'o', label='t')\nplt.plot([0, 1], [f(0), f(1)], 'b-', label='f(x)', alpha=0.5)\nplt.plot([0, 1], [0*w, 1*w], 'r-', label='fitted line', alpha=0.5, linestyle='--')\nplt.xlabel('input 
x')\nplt.ylabel('target t')\nplt.ylim([0,2])\nplt.title('input vs. target')\nplt.grid()\nplt.legend(loc=2)\nplt.gcf().set_size_inches((10,3))\nplt.show()", "Loss Funtions\nLoss functions $L(y, f(\\mathbf{x})) = \\frac{1}{n} \\sum_i \\ell(y_i, f(\\mathbf{x_i}))$ represent the price paid for inaccuracy of predictions in classification/regression problems.\nIn classification this function is often the zero-one loss, that is, $ \\ell(y_i, f(\\mathbf{x_i}))$ is zero when $y_i = f(\\mathbf{x}_i)$ and one otherwise.\nThis function is discontinuous with flat regions and is thus extremely hard to optimize using gradient-based methods. For this reason it is usual to consider a proxy to the loss called a surrogate loss function. For computational reasons this is usually convex function. Here we have some examples:\nSquare / Euclidean Loss\nIn regression problems, the most common loss function is the square loss function:\n$$ L(y, f(\\mathbf{x})) = \\frac{1}{n} \\sum_i (y_i - f(\\mathbf{x}_i))^2 $$\nThe square loss function can be re-written and utilized for classification:\n$$ L(y, f(\\mathbf{x})) = \\frac{1}{n} \\sum_i (1 - y_i f(\\mathbf{x}_i))^2 $$\nHinge / Margin Loss (i.e. Suport Vector Machines)\nThe hinge loss function is defined as:\n$$ L(y, f(\\mathbf{x})) = \\frac{1}{n} \\sum_i \\mbox{max}(0, 1 - y_i f(\\mathbf{x}_i)) $$\nThe hinge loss provides a relatively tight, convex upper bound on the 0–1 Loss.\n<img src=\"images/loss_functions.png\">\nLogistic Loss (Logistic Regression)\nThis function displays a similar convergence rate to the hinge loss function, and since it is continuous, simple gradient descent methods can be utilized. \n$$ L(y, f(\\mathbf{x})) = \\frac{1}{n} log(1 + exp(-y_i f(\\mathbf{x}_i))) $$\nSigmoid Cross-Entropy Loss (Softmax classifier)\nCross-Entropy is a loss function that is very used for training multiclass problems. We'll focus on models that assume that classes are mutually exclusive. \nIn this case, our labels have this form $\\mathbf{y}_i =(1.0,0.0,0.0)$. If our model predicts a different distribution, say $ f(\\mathbf{x}_i)=(0.4,0.1,0.5)$, then we'd like to nudge the parameters so that $f(\\mathbf{x}_i)$ gets closer to $\\mathbf{y}_i$.\nC.Shannon showed that if you want to send a series of messages composed of symbols from an alphabet with distribution $y$ ($y_j$ is the probability of the $j$-th symbol), then to use the smallest number of bits on average, you should assign $\\log(\\frac{1}{y_j})$ bits to the $j$-th symbol. \nThe optimal number of bits is known as entropy:\n$$ H(\\mathbf{y}) = \\sum_j y_j \\log\\frac{1}{y_j} = - \\sum_j y_j \\log y_j$$\nCross entropy is the number of bits we'll need if we encode symbols by using a wrong distribution $\\hat y$:\n$$ H(y, \\hat y) = - \\sum_j y_j \\log \\hat y_j $$ \nIn our case, the real distribution is $\\mathbf{y}$ and the \"wrong\" one is $f(\\mathbf{x}_i)$. So, minimizing cross entropy with respect our model parameters will result in the model that best approximates our labels if considered as a probabilistic distribution. \nCross entropy is used in combination with Softmax classifier. In order to classify $\\mathbf{x}_i$ we could take the index corresponding to the max value of $f(\\mathbf{x}_i)$, but Softmax gives a slightly more intuitive output (normalized class probabilities) and also has a probabilistic interpretation:\n$$ P(\\mathbf{y}_i = j \\mid \\mathbf{x_i}) = - log \\left( \\frac{e^{f_j(\\mathbf{x_i})}}{\\sum_k e^{f_k(\\mathbf{x_i})} } \\right) $$\nwhere $f_k$ is a linear classifier. 
\nAdvanced gradient descend\nMomentum\nSGD has trouble navigating ravines, i.e. areas where the surface curves much more steeply in one dimension than in another, which are common around local optima. In these scenarios, SGD oscillates across the slopes of the ravine while only making hesitant progress along the bottom towards the local optimum.\n<img src=\"images/ridge2.png\">\nMomentum is a method that helps accelerate SGD in the relevant direction and dampens oscillations. It does this by adding a fraction of the update vector of the past time step to the current update vector:\n$$ v_t = m v_{t-1} + \\alpha \\nabla_w f $$\n$$ w = w - v_t $$\nThe momentum $m$ is commonly set to $0.9$.\nNesterov\nHowever, a ball that rolls down a hill, blindly following the slope, is highly unsatisfactory. We'd like to have a smarter ball, a ball that has a notion of where it is going so that it knows to slow down before the hill slopes up again.\nNesterov accelerated gradient (NAG) is a way to give our momentum term this kind of prescience. We know that we will use our momentum term $m v_{t-1}$ to move the parameters $w$. Computing \n$w - m v_{t-1}$ thus gives us an approximation of the next position of the parameters (the gradient is missing for the full update), a rough idea where our parameters are going to be. We can now effectively look ahead by calculating the gradient not w.r.t. to our current parameters $w$ but w.r.t. the approximate future position of our parameters:\n$$ w_{new} = w - m v_{t-1} $$\n$$ v_t = m v_{t-1} + \\alpha \\nabla_{w_{new}} f $$\n$$ w = w - v_t $$\nAdagrad\nAll previous approaches manipulated the learning rate globally and equally for all parameters. Tuning the learning rates is an expensive process, so much work has gone into devising methods that can adaptively tune the learning rates, and even do so per parameter. \nAdagrad is an algorithm for gradient-based optimization that does just this: It adapts the learning rate to the parameters, performing larger updates for infrequent and smaller updates for frequent parameters.\n$$ c = c + (\\nabla_w f)^2 $$\n$$ w = w - \\frac{\\alpha}{\\sqrt{c}} $$ \nRMProp\nRMSProp update adjusts the Adagrad method in a very simple way in an attempt to reduce its aggressive, monotonically decreasing learning rate. In particular, it uses a moving average of squared gradients instead, giving:\n$$ c = \\beta c + (1 - \\beta)(\\nabla_w f)^2 $$\n$$ w = w - \\frac{\\alpha}{\\sqrt{c}} $$ \nwhere $\\beta$ is a decay rate that controls the size of the moving average.\n<img src=\"images/g1.gif\">\n(Image credit: Alec Radford) \n<img src=\"images/g2.gif\">\n(Image credit: Alec Radford)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
quantopian/research_public
notebooks/data/quandl.fred_unrate/notebook.ipynb
apache-2.0
[ "Quandl: United Stated Unemployment Rate\nIn this notebook, we'll take a look at data set , available on Quantopian. This dataset spans from 1948 through the current day. It contains the value for the United States unemployment rate provided by the US Federal Reserve via the FRED data initiative. We access this data via the API provided by Quandl. More details on this dataset can be found on Quandl's website.\nBlaze\nBefore we dig into the data, we want to tell you about how you generally access Quantopian partner data sets. These datasets are available using the Blaze library. Blaze provides the Quantopian user with a convenient interface to access very large datasets.\nSome of these sets (though not this one) are many millions of records. Bringing that data directly into Quantopian Research directly just is not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side.\nTo learn more about using Blaze and generally accessing Quantopian partner data, clone this tutorial notebook.\nWith preamble in place, let's get started:", "# import the dataset\nfrom quantopian.interactive.data.quandl import fred_unrate\n# Since this data is public domain and provided by Quandl for free, there is no _free version of this\n# data set, as found in the premium sets. This import gets you the entirety of this data set.\n\n# import data operations\nfrom odo import odo\n# import other libraries we will use\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nfred_unrate.sort('asof_date')", "The data goes all the way back to 1947 and is updated monthly.\nBlaze provides us with the first 10 rows of the data for display. Just to confirm, let's just count the number of rows in the Blaze expression:", "fred_unrate.count()", "Let's go plot it for fun. This data set is definitely small enough to just put right into a Pandas DataFrame", "unrate_df = odo(fred_unrate, pd.DataFrame)\n\nunrate_df.plot(x='asof_date', y='value')\nplt.xlabel(\"As Of Date (asof_date)\")\nplt.ylabel(\"Unemployment Rate\")\nplt.title(\"United States Unemployment Rate\")\nplt.legend().set_visible(False)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
arsenovic/clifford
docs/tutorials/euler-angles.ipynb
bsd-3-clause
[ "This notebook is part of the clifford documentation: https://clifford.readthedocs.io/.\nRotations in Space: Euler Angles, Matrices, and Quaternions\nThis notebook demonstrates how to use clifford to implement rotations in three dimensions using euler angles, rotation matices and quaternions.\nAll of these forms are derived from the more general rotor form, which is provided by GA.\nConversion from the rotor form to a matrix representation is shown, and takes about three lines of code.\nWe start with euler angles. \nEuler Angles with Rotors\nA common way to parameterize rotations in three dimensions is through Euler Angles. \n\nAny orientation can be achieved by composing three elemental rotations.\nThe elemental rotations can either occur about the axes of the fixed coordinate system (extrinsic rotations) or about the axes of a rotating coordinate system, which is initially aligned with the fixed one, and modifies its orientation after each elemental rotation (intrinsic rotations). \n&mdash; wikipedia\n\nThe animation below shows an intrinsic rotation model as each elemental rotation is applied.Label the left, right, and vertical blue-axes as $e_1, e_2,$ and $e_3$, respectively.\nThe series of rotations can be described:\n\nrotate about $e_3$-axis\nrotate about the rotated $e_1$-axis, call it $e_1^{'}$\nrotate about the twice rotated axis of $e_3$-axis, call it $e_3^{''}$\n\nSo the elemental rotations are about $e_3, e_{1}^{'}, e_3^{''}$-axes, respectively.\n\n(taken from wikipedia)\nFollowing Sec. 2.7.5 from <cite data-cite=\"doran-ga4ph\">Geometric Algebra for Physicists</cite>, we first rotate an angle $\\phi$ about the\n$e_3$-axis, which is equivalent to rotating in the $e_{12}$-plane. This is done with the rotor\n$$ R_{\\phi} = e^{-\\frac{\\phi}{2} e_{12}}$$\nNext we rotate about the rotated $e_1$-axis, which we label $e_1^{'}$. To find where this is, we can rotate the axis, \n$$ e_1^{'} =R_{\\phi} e_1 \\tilde{R_{\\phi}} $$\nThe plane corresponding to this axis is found by taking the dual of $e_1^{'}$\n$$ I R_{\\phi} e_1 \\tilde{R_{\\phi}} = R_{\\phi} e_{23} \\tilde{R_{\\phi}} $$\nWhere we have made use of the fact that the pseudo-scalar commutes in G3. 
Using this result, the second rotation by angle $\\theta$ about the $e_1^{'}$-axis is then ,\n$$ R_{\\theta} = e^{\\frac{\\theta}{2} R_{\\phi} e_{23} \\tilde{R_{\\phi}}} $$\nHowever, noting that \n$$ e^{R_{\\phi} e_{23} \\tilde{R_{\\phi}}} =R_{\\phi} e^{e_{23}} \\tilde{R_{\\phi}} $$\nAllows us to write the second rotation by angle $\\theta$ about the $e_1^{'}$-axis as \n$$ R_{\\theta} = R_{\\phi} e^{\\frac{\\theta}{2}e_{23}} \\tilde{R_{\\phi}} $$\nSo, the combination of the first two elemental rotations equals, \n\\begin{align} \nR_{\\theta} R_{\\phi} &= R_{\\phi} e^{\\frac{\\theta}{2}e_{23}} \\tilde{R_{\\phi}} R_{\\phi} \\ \n&= e^{-\\frac{\\phi}{2} e_{12}}e^{-\\frac{\\theta}{2} e_{23}}\n\\end{align}\nThis pattern can be extended to the third elemental rotation of angle $\\psi$ in the twice-rotated $e_1$-axis, creating the total rotor \n$$ R = e^{-\\frac{\\phi}{2} e_{12}} e^{-\\frac{\\theta}{2} e_{23}} e^{-\\frac{\\psi}{2} e_{12}} $$\nImplementation of Euler Angles\nFirst, we initialize the algebra and assign the variables", "from numpy import e,pi\nfrom clifford import Cl\n\nlayout, blades = Cl(3) # create a 3-dimensional clifford algebra\n\nlocals().update(blades) # lazy way to put entire basis in the namespace", "Next we define a function to produce a rotor given euler angles", "def R_euler(phi, theta,psi):\n Rphi = e**(-phi/2.*e12)\n Rtheta = e**(-theta/2.*e23)\n Rpsi = e**(-psi/2.*e12)\n \n return Rphi*Rtheta*Rpsi", "For example, using this to create a rotation similar to that shown in the animation above,", "R = R_euler(pi/4, pi/4, pi/4)\nR", "Convert to Quaternions\nA Rotor in 3D space is a unit quaternion, and so we have essentially created a function that converts Euler angles to quaternions.\nAll you need to do is interpret the bivectors as $i,j,$ and $k$'s.\nSee Interfacing Other Mathematical Systems, for more on quaternions.\nConvert to Rotation Matrix\nThe matrix representation for a rotation can defined as the result of rotating an ortho-normal frame.\nRotating an ortho-normal frame can be done easily,", "A = [e1,e2,e3] # initial ortho-normal frame\nB = [R*a*~R for a in A] # resultant frame after rotation\n\nB", "The components of this frame are the rotation matrix, so we just enter the frame components into a matrix.", "from numpy import array \n\nM = [float(b|a) for b in B for a in A] # you need float() due to bug in clifford\nM = array(M).reshape(3,3)\n\nM", "Thats a rotation matrix.\nConvert a Rotation Matrix to a Rotor\nIn 3 Dimenions, there is a simple formula which can be used to directly transform a rotations matrix into a rotor.\nFor arbitrary dimensions you have to use a different algorithm (see clifford.tools.orthoMat2Versor() (docs)).\nAnyway, in 3 dimensions there is a closed form solution, as described in Sec. 4.3.3 of \"Geometric Algebra for Physicists\".\nGiven a rotor $R$ which transforms an orthonormal frame $A={a_k}$ into $B={b_k}$ as such,\n$$b_k = Ra_k\\tilde{R}$$\n$R$ is given by \n$$R= \\frac{1+a_kb_k}{|1+a_kb_k|}$$\nSo, if you want to convert from a rotation matrix into a rotor, start by converting the matrix M into a frame $B$.(You could do this with loop if you want.)", "B = [M[0,0]*e1 + M[1,0]*e2 + M[2,0]*e3,\n M[0,1]*e1 + M[1,1]*e2 + M[2,1]*e3,\n M[0,2]*e1 + M[1,2]*e2 + M[2,2]*e3]\nB", "Then implement the formula", "A = [e1,e2,e3]\nR = 1+sum([A[k]*B[k] for k in range(3)])\nR = R/abs(R)\n\nR", "blam." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
merryjman/astronomy
templateTable.ipynb
gpl-3.0
[ "Data Table Template\nThis simple notebook imports tabular data, displays the first few rows, and makes a scatter plot and histogram.", "#importing what we'll need\nimport numpy as np\nimport pandas as pd\n%matplotlib inline\nimport matplotlib.pyplot as plt", "Import data\nCreates a dataframe (called \"data\") and fills it with data from a URL.", "data = pd.read_csv(\"http://web_address.com/filename.csv\")", "Display part of the data table", "data.head(3)", "Make a scatter plot of two column's data", "# Set variables for scatter plot\n# \nx = data.OneColumnName\ny = data.AnotherColumnName\n\n# make the graph\nplt.scatter(x,y)\nplt.title('title')\nplt.xlabel('label')\nplt.ylabel('label')\n\n# This actually shows the plot\nplt.show()", "Make a histogram of one column's data", "plt.hist(data.ColumnName, bins=10, range=[0,100])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
BrownDwarf/ApJdataFrames
notebooks/Rebull2016_extra.ipynb
mit
[ "ApJdataFrames Rebull 2016 a & b Extra analysis\nTitles:\na.: Rotation in the Pleiades with K2: I. Data and First Results\nb.: ROTATION IN THE PLEIADES WITH K2. II. MULTIPERIOD STARS \nAuthors: L. M. Rebull, J. R. Stauffer, J. Bouvier, A. M. Cody, L. A. Hillenbrand, et al.\nData is from this paper:\nhttp://iopscience.iop.org/article/10.3847/0004-6256/152/5/114/meta", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pandas as pd\npd.options.display.max_columns = 150\n\n%config InlineBackend.figure_format = 'retina'\n\nimport astropy\nfrom astropy.table import Table\nfrom astropy.io import ascii\nimport numpy as np\n\ndf_abc = pd.read_csv('../data/Rebull2016/Rebull_Stauffer_merge.csv')\n\ndf_abc.head()", "Import the Fang et al. 2016 data", "tab1 = pd.read_fwf('../data/Fang2016/Table_1+4_online.dat', na_values=['-99.00000000', '-9999.0', '99.000, 99.0'])\ndf = tab1.rename(columns={'# Object_name':'Object_name'})\n\ndf.head()\n\ndf.Object_name.values[0:30]", "Ugh, the naming convention is non-standard in a way that is likely more work than it's worth to try to match to other catalogs. Whyyyyyyyyy.\nTry full-blown coordinate matching\nFrom astropy: http://docs.astropy.org/en/stable/coordinates/matchsep.html", "ra1 = df.RAJ2000\ndec1 = df.DEJ2000\n\nra2 = df_abc.RAdeg\ndec2 = df_abc.DEdeg\n\nfrom astropy.coordinates import SkyCoord\nfrom astropy import units as u\nc = SkyCoord(ra=ra1*u.degree, dec=dec1*u.degree) \ncatalog = SkyCoord(ra=ra2*u.degree, dec=dec2*u.degree) \nidx, d2d, d3d = c.match_to_catalog_sky(catalog) \n\nplt.figure(figsize=(10,10))\nplt.plot(ra1, dec1, '.', alpha=0.3)\nplt.plot(ra2, dec2, '.', alpha=0.3)\n\nplt.hist(d2d.to(u.arcsecond)/u.arcsecond, bins=np.arange(0,3, 0.1));\nplt.yscale('log')\nplt.axvline(x=0.375,color='r', linestyle='dashed')", "Ok, we'll accept all matches with better than 0.375 arcsecond separation.", "boolean_matches = d2d.to(u.arcsecond).value < 0.375", "How many matches are there?", "boolean_matches.sum()", "120 matches--- not bad. Only keep the subset of fang sources that also have K2", "df['EPIC'] = ''\n\nmatched_idx = idx[boolean_matches]\n\nmatched_idx\n\ndf.shape, df_abc.shape\n\nidx.shape\n\ndf['EPIC'][boolean_matches] = df_abc['EPIC'].iloc[matched_idx].values\n\nfang_K2 = pd.merge(df_abc, df, how='left', on='EPIC')\n\nfang_K2.columns\n\nfang_K2[['Name_adopt', 'Object_name']][fang_K2.Object_name.notnull()].tail(10)\n\nfang_K2.to_csv('../data/Fang2016/Rebull_Fang_merge.csv', index=False)\n\nfang_K2", "Great correspondence! Looks like there are 120 targets in both categories.\nLet's spot-check if they use similar temperatures:", "plt.figure(figsize=(7, 7))\nplt.plot(fang_K2.Teff, fang_K2.Tspec, 'o')\nplt.xlabel(r'$T_{\\mathrm{eff}}$ Stauffer et al. 2016')\nplt.ylabel(r'$T_{\\mathrm{spec}}$ Fang et al. 2016')\nplt.plot([3000, 6300], [3000, 6300], 'k--')\nplt.ylim(3000, 6300)\nplt.xlim(3000, 6300);", "What's the scatter?", "delta_Tspec = fang_K2.Teff - fang_K2.Tspec\ndelta_Tspec = delta_Tspec.dropna()\nRMS_Tspec = np.sqrt((delta_Tspec**2.0).sum()/len(delta_Tspec))\nprint('{:0.0f}'.format(RMS_Tspec))", "The authors disagree on temperature by about $\\delta T \\sim$ 100 K RMS.\nLet's make the figure we really want to make: K2 Amplitude versus spectroscopically measured filling factor of starspots $f_{spot}$. We expect that the plot will be a little noisy due to differences in temperature assumptions and such, but it is absolutely fundamental. 
\nFirst we need to convert the amplitude in magnitudes to a faction $\\in [0,1]$. The $\\Delta V$ in Stauffer et al. 2016 has negative values, so I'm not sure what it is! The Ampl from Rebull et al. 2016 is: \n\nAmplitude, in mag, of the 10th to the 90th percentile", "fang_K2['flux_amp'] = 1.0 - 10**(fang_K2.Ampl/-2.5)\n\nplt.hist(fang_K2.flux_amp, bins=np.arange(0, 0.15, 0.005));\n\nplt.hist(fang_K2.fs1.dropna(), bins=np.arange(0, 0.8, 0.03));\n\nsns.set_context('talk')\n\nplt.figure(figsize=(7, 7))\nplt.plot(fang_K2.fs1, fang_K2.flux_amp, '.')\nplt.plot([0,1], [0,1], 'k--')\nplt.xlim(0.0,0.2)\nplt.ylim(0,0.2)\nplt.xlabel('LAMOST-measured $f_{spot}$ \\n Fang et al. 2016')\nplt.ylabel('K2-measured spot amplitude $A \\in [0,1)$ \\n Rebull et al. 2016')\nplt.xticks(np.arange(0, 0.21, 0.05))\nplt.yticks(np.arange(0, 0.21, 0.05))\nplt.savefig('K2_LAMOST_starspots_data.png', bbox_inches='tight', dpi=300);\n\nplt.figure(figsize=(7, 7))\nplt.plot(fang_K2.fs1, fang_K2.flux_amp, '.')\nplt.plot([0,1], [0,1], 'k--')\nplt.xlim(0.0,0.5)\nplt.ylim(0,0.5)\nplt.xlabel('LAMOST-measured $f_{spot}$ \\n Fang et al. 2016')\nplt.ylabel('K2-measured spot amplitude $A \\in [0,1)$ \\n Rebull et al. 2016')\nplt.xticks(np.arange(0, 0.51, 0.1))\nplt.yticks(np.arange(0, 0.51, 0.1))\nplt.savefig('K2_LAMOST_starspots_wide.png', bbox_inches='tight', dpi=300);", "Awesome! The location of points indicate that starspots have a large longitudinally-symmetric component that evades detection in K2 amplitudes.\nWhat effects can cause / mimic this behavior?\n- Unresolved binarity could cause an errant TiO measurement, biasing the Fang et al. measurement.\n- Increased rotation (Rossby number) could make stronger or weaker dipolar magnetic fields\n- EW H$\\alpha$ could be correlated, from an activity sense.\n-", "fang_K2.columns\n\nfang_K2.beat.value_counts()", "Crosstabs with discreate variables: Legend", "plt.figure(figsize=(7, 7))\ncross_tab = 'resc'\nc1 = fang_K2[cross_tab] == 'yes'\nc2 = fang_K2[cross_tab] == 'no'\nplt.plot(fang_K2.fs1[c1], fang_K2.flux_amp[c1], 'r.', label='{} = yes'.format(cross_tab))\nplt.plot(fang_K2.fs1[c2], fang_K2.flux_amp[c2], 'b.', label='{} = no'.format(cross_tab))\n\nplt.legend(loc='best')\n\nplt.plot([0,1], [0,1], 'k--')\nplt.xlim(-0.01,0.8)\nplt.ylim(0,0.15)\nplt.xlabel('LAMOST-measured $f_{spot}$ \\n Fang et al. 2016')\nplt.ylabel('K2-measured spot amplitude $A \\in [0,1)$ \\n Rebull et al. 2016')\n#plt.xticks(np.arange(0, 0.51, 0.1))\n#plt.yticks(np.arange(0, 0.51, 0.1))\nplt.savefig('K2_LAMOST_starspots_crosstab.png', bbox_inches='tight', dpi=300);", "Crosstabs with continuous variables: Colorbar", "plt.figure(figsize=(7, 7))\ncross_tab = 'Mass'\ncm = plt.cm.get_cmap('Blues')\nsc = plt.scatter(fang_K2.fs1, fang_K2.flux_amp, c=fang_K2[cross_tab], cmap=cm)\ncb = plt.colorbar(sc)\n#cb.set_label(r'$T_{spot}$ (K)')\n\nplt.plot([0,1], [0,1], 'k--')\nplt.xlim(-0.01,0.8)\nplt.ylim(0,0.15)\nplt.xlabel('LAMOST-measured $f_{spot}$ \\n Fang et al. 2016')\nplt.ylabel('K2-measured spot amplitude $A \\in [0,1)$ \\n Rebull et al. 
2016')\n#plt.xticks(np.arange(0, 0.51, 0.1))\n#plt.yticks(np.arange(0, 0.51, 0.1))\nplt.savefig('K2_LAMOST_starspots_cb.png', bbox_inches='tight', dpi=300);", "What about inclination?\n$$ V = \\frac{d}{t} = \\frac{2 \\pi R}{P} $$\n$$ V \\sin{i} = \\frac{2 \\pi R}{P} \\sin{i}$$\n$$ V \\sin{i} \\cdot \\frac{P}{2 \\pi R} = \\sin{i}$$\n$$ \\arcsin{\\lgroup V \\sin{i} \\cdot \\frac{P}{2 \\pi R} \\rgroup} = i$$", "import astropy.units as u\n\nsini = fang_K2.vsini * u.km/u.s * fang_K2.Per1* u.day /(2.0*np.pi *u.solRad)\n\nvec = sini.values.to(u.dimensionless_unscaled).value\n\nsns.distplot(vec[vec == vec], bins=np.arange(0,2, 0.1), kde=False)\nplt.axvline(1.0, color='k', linestyle='dashed')\n\ninclination = np.arcsin(vec)*180.0/np.pi\n\nsns.distplot(inclination[inclination == inclination], bins=np.arange(0,90.0, 5), kde=False);\n\nfang_K2['sini'] = vec\n\nplt.figure(figsize=(7, 7))\ncross_tab = 'sini'\ncm = plt.cm.get_cmap('hot')\nsc = plt.scatter(fang_K2.fs1, fang_K2.flux_amp, c=fang_K2[cross_tab], cmap=cm)\ncb = plt.colorbar(sc)\ncb.set_label(r'$\\sin{i}$')\n\nplt.plot([0,1], [0,1], 'k--')\nplt.xlim(-0.01,0.7)\nplt.ylim(0,0.15)\nplt.xlabel('LAMOST-measured $f_{spot}$ \\n Fang et al. 2016')\nplt.ylabel('K2-measured spot amplitude $A \\in [0,1)$ \\n Rebull et al. 2016')\n#plt.xticks(np.arange(0, 0.51, 0.1))\n#plt.yticks(np.arange(0, 0.51, 0.1))\nplt.savefig('K2_LAMOST_starspots_cb.png', bbox_inches='tight', dpi=300);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Santara/ML-MOOC-NPTEL
lecture6/ML-Anirban_Tutorial6.ipynb
gpl-3.0
[ "Import necessary libraries", "import numpy as np\nfrom scipy import ndimage\nfrom time import time\nfrom sklearn import datasets, manifold\nfrom sklearn.cluster import KMeans, AgglomerativeClustering\nfrom sklearn.mixture import GMM\nfrom sklearn.cross_validation import StratifiedKFold\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\n%matplotlib inline", "K-means clustering\nExample adapted from here.\nLoad dataset", "iris = datasets.load_iris()\nX,y = iris.data[:,:2], iris.target", "Define and train model", "num_clusters = 8\nmodel = KMeans(n_clusters=num_clusters)\nmodel.fit(X)", "Extract the labels and the cluster centers", "labels = model.labels_\ncluster_centers = model.cluster_centers_\nprint cluster_centers", "Plot the clusters", "plt.scatter(X[:,0], X[:,1],c=labels.astype(np.float))\nplt.hold(True)\nplt.scatter(cluster_centers[:,0], cluster_centers[:,1], c = np.arange(num_clusters), marker = '^', s = 150)\nplt.show()\nplt.scatter(X[:,0], X[:,1],c=np.choose(y,[0,2,1]).astype(np.float))\nplt.show()", "Gaussian Mixture Model\nExample taken from here.\nDefine a visualization function", "def make_ellipses(gmm, ax):\n \"\"\"\n Visualize the gaussians in a GMM as ellipses\n \"\"\"\n for n, color in enumerate('rgb'):\n v, w = np.linalg.eigh(gmm._get_covars()[n][:2, :2])\n u = w[0] / np.linalg.norm(w[0])\n angle = np.arctan2(u[1], u[0])\n angle = 180 * angle / np.pi # convert to degrees\n v *= 9\n ell = mpl.patches.Ellipse(gmm.means_[n, :2], v[0], v[1],\n 180 + angle, color=color)\n ell.set_clip_box(ax.bbox)\n ell.set_alpha(0.5)\n ax.add_artist(ell)", "Load dataset and make training and test splits", "iris = datasets.load_iris()\n\n# Break up the dataset into non-overlapping training (75%) and testing\n# (25%) sets.\nskf = StratifiedKFold(iris.target, n_folds=4)\n# Only take the first fold.\ntrain_index, test_index = next(iter(skf))\n\n\nX_train = iris.data[train_index]\ny_train = iris.target[train_index]\nX_test = iris.data[test_index]\ny_test = iris.target[test_index]\n\nn_classes = len(np.unique(y_train))", "Train and compare different GMMs", "# Try GMMs using different types of covariances.\nclassifiers = dict((covar_type, GMM(n_components=n_classes,\n covariance_type=covar_type, init_params='wc', n_iter=20))\n for covar_type in ['spherical', 'diag', 'tied', 'full'])\n\nn_classifiers = len(classifiers)\n\nplt.figure(figsize=(2*3 * n_classifiers / 2, 2*6))\nplt.subplots_adjust(bottom=.01, top=0.95, hspace=.15, wspace=.05,\n left=.01, right=.99)\n\n\nfor index, (name, classifier) in enumerate(classifiers.items()):\n # Since we have class labels for the training data, we can\n # initialize the GMM parameters in a supervised manner.\n classifier.means_ = np.array([X_train[y_train == i].mean(axis=0)\n for i in xrange(n_classes)])\n\n # Train the other parameters using the EM algorithm.\n classifier.fit(X_train)\n\n h = plt.subplot(2, n_classifiers / 2, index + 1)\n make_ellipses(classifier, h)\n\n for n, color in enumerate('rgb'):\n data = iris.data[iris.target == n]\n plt.scatter(data[:, 0], data[:, 1], 0.8, color=color,\n label=iris.target_names[n])\n # Plot the test data with crosses\n for n, color in enumerate('rgb'):\n data = X_test[y_test == n]\n plt.plot(data[:, 0], data[:, 1], 'x', color=color)\n\n y_train_pred = classifier.predict(X_train)\n train_accuracy = np.mean(y_train_pred.ravel() == y_train.ravel()) * 100\n plt.text(0.05, 0.9, 'Train accuracy: %.1f' % train_accuracy,\n transform=h.transAxes)\n\n y_test_pred = classifier.predict(X_test)\n test_accuracy = 
np.mean(y_test_pred.ravel() == y_test.ravel()) * 100\n plt.text(0.05, 0.8, 'Test accuracy: %.1f' % test_accuracy,\n transform=h.transAxes)\n\n plt.xticks(())\n plt.yticks(())\n plt.title(name)\n\nplt.legend(loc='lower right', prop=dict(size=12))\n\n\nplt.show()", "Hierarchical Agglomerative Clustering\nExample taken from here.\nLoad and pre-process dataset", "digits = datasets.load_digits(n_class=10)\nX = digits.data\ny = digits.target\nn_samples, n_features = X.shape\n\nnp.random.seed(0)\n\ndef nudge_images(X, y):\n # Having a larger dataset shows more clearly the behavior of the\n # methods, but we multiply the size of the dataset only by 2, as the\n # cost of the hierarchical clustering methods are strongly\n # super-linear in n_samples\n shift = lambda x: ndimage.shift(x.reshape((8, 8)),\n .3 * np.random.normal(size=2),\n mode='constant',\n ).ravel()\n X = np.concatenate([X, np.apply_along_axis(shift, 1, X)])\n Y = np.concatenate([y, y], axis=0)\n return X, Y\n\n\nX, y = nudge_images(X, y)", "Visualize the clustering", "def plot_clustering(X_red, X, labels, title=None):\n x_min, x_max = np.min(X_red, axis=0), np.max(X_red, axis=0)\n X_red = (X_red - x_min) / (x_max - x_min)\n\n plt.figure(figsize=(2*6, 2*4))\n for i in range(X_red.shape[0]):\n plt.text(X_red[i, 0], X_red[i, 1], str(y[i]),\n color=plt.cm.spectral(labels[i] / 10.),\n fontdict={'weight': 'bold', 'size': 9})\n\n plt.xticks([])\n plt.yticks([])\n if title is not None:\n plt.title(title, size=17)\n plt.axis('off')\n plt.tight_layout()", "Create a 2D embedding of the digits dataset", "print(\"Computing embedding\")\nX_red = manifold.SpectralEmbedding(n_components=2).fit_transform(X)\nprint(\"Done.\")", "Train and visualize the clusters\n\nWard minimizes the sum of squared differences within all clusters. It is a variance-minimizing approach and in this sense is similar to the k-means objective function but tackled with an agglomerative hierarchical approach.\nMaximum or complete linkage minimizes the maximum distance between observations of pairs of clusters.\nAverage linkage minimizes the average of the distances between all observations of pairs of clusters.", "from sklearn.cluster import AgglomerativeClustering\n\nfor linkage in ('ward', 'average', 'complete'):\n clustering = AgglomerativeClustering(linkage=linkage, n_clusters=10)\n t0 = time()\n clustering.fit(X_red)\n print(\"%s : %.2fs\" % (linkage, time() - t0))\n\n plot_clustering(X_red, X, clustering.labels_, \"%s linkage\" % linkage)\n\n\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
amueller/scipy-2017-sklearn
notebooks/10.Case_Study-Titanic_Survival.ipynb
cc0-1.0
[ "Case Study - Titanic Survival\nFeature Extraction\nHere we will talk about an important piece of machine learning: the extraction of\nquantitative features from data. By the end of this section you will\n\nKnow how features are extracted from real-world data.\nSee an example of extracting numerical features from textual data\n\nIn addition, we will go over several basic tools within scikit-learn which can be used to accomplish the above tasks.\nWhat Are Features?\nNumerical Features\nRecall that data in scikit-learn is expected to be in two-dimensional arrays, of size\nn_samples $\\times$ n_features.\nPreviously, we looked at the iris dataset, which has 150 samples and 4 features", "from sklearn.datasets import load_iris\n\niris = load_iris()\nprint(iris.data.shape)", "These features are:\n\nsepal length in cm\nsepal width in cm\npetal length in cm\npetal width in cm\n\nNumerical features such as these are pretty straightforward: each sample contains a list\nof floating-point numbers corresponding to the features\nCategorical Features\nWhat if you have categorical features? For example, imagine there is data on the color of each\niris:\ncolor in [red, blue, purple]\n\nYou might be tempted to assign numbers to these features, i.e. red=1, blue=2, purple=3\nbut in general this is a bad idea. Estimators tend to operate under the assumption that\nnumerical features lie on some continuous scale, so, for example, 1 and 2 are more alike\nthan 1 and 3, and this is often not the case for categorical features.\nIn fact, the example above is a subcategory of \"categorical\" features, namely, \"nominal\" features. Nominal features don't imply an order, whereas \"ordinal\" features are categorical features that do imply an order. An example of ordinal features would be T-shirt sizes, e.g., XL > L > M > S. \nOne work-around for parsing nominal features into a format that prevents the classification algorithm from asserting an order is the so-called one-hot encoding representation. Here, we give each category its own dimension. \nThe enriched iris feature set would hence be in this case:\n\nsepal length in cm\nsepal width in cm\npetal length in cm\npetal width in cm\ncolor=purple (1.0 or 0.0)\ncolor=blue (1.0 or 0.0)\ncolor=red (1.0 or 0.0)\n\nNote that using many of these categorical features may result in data which is better\nrepresented as a sparse matrix, as we'll see with the text classification example\nbelow.\nUsing the DictVectorizer to encode categorical features\nWhen the source data is encoded has a list of dicts where the values are either strings names for categories or numerical values, you can use the DictVectorizer class to compute the boolean expansion of the categorical features while leaving the numerical features unimpacted:", "measurements = [\n {'city': 'Dubai', 'temperature': 33.},\n {'city': 'London', 'temperature': 12.},\n {'city': 'San Francisco', 'temperature': 18.},\n]\n\nfrom sklearn.feature_extraction import DictVectorizer\n\nvec = DictVectorizer()\nvec\n\nvec.fit_transform(measurements).toarray()\n\nvec.get_feature_names()", "Derived Features\nAnother common feature type are derived features, where some pre-processing step is\napplied to the data to generate features that are somehow more informative. 
Derived\nfeatures may be based in feature extraction and dimensionality reduction (such as PCA or manifold learning),\nmay be linear or nonlinear combinations of features (such as in polynomial regression),\nor may be some more sophisticated transform of the features.\nCombining Numerical and Categorical Features\nAs an example of how to work with both categorical and numerical data, we will perform survival predicition for the passengers of the HMS Titanic.\nWe will use a version of the Titanic (titanic3.xls) from here. We converted the .xls to .csv for easier manipulation but left the data is otherwise unchanged.\nWe need to read in all the lines from the (titanic3.csv) file, set aside the keys from the first line, and find our labels (who survived or died) and data (attributes of that person). Let's look at the keys and some corresponding example lines.", "import os\nimport pandas as pd\n\ntitanic = pd.read_csv(os.path.join('datasets', 'titanic3.csv'))\nprint(titanic.columns)", "Here is a broad description of the keys and what they mean:\npclass Passenger Class\n (1 = 1st; 2 = 2nd; 3 = 3rd)\nsurvival Survival\n (0 = No; 1 = Yes)\nname Name\nsex Sex\nage Age\nsibsp Number of Siblings/Spouses Aboard\nparch Number of Parents/Children Aboard\nticket Ticket Number\nfare Passenger Fare\ncabin Cabin\nembarked Port of Embarkation\n (C = Cherbourg; Q = Queenstown; S = Southampton)\nboat Lifeboat\nbody Body Identification Number\nhome.dest Home/Destination\nIn general, it looks like name, sex, cabin, embarked, boat, body, and homedest may be candidates for categorical features, while the rest appear to be numerical features. We can also look at the first couple of rows in the dataset to get a better understanding:", "titanic.head()", "We clearly want to discard the \"boat\" and \"body\" columns for any classification into survived vs not survived as they already contain this information. The name is unique to each person (probably) and also non-informative. For a first try, we will use \"pclass\", \"sibsp\", \"parch\", \"fare\" and \"embarked\" as our features:", "labels = titanic.survived.values\nfeatures = titanic[['pclass', 'sex', 'age', 'sibsp', 'parch', 'fare', 'embarked']]\n\nfeatures.head()", "The data now contains only useful features, but they are not in a format that the machine learning algorithms can understand. We need to transform the strings \"male\" and \"female\" into binary variables that indicate the gender, and similarly for \"embarked\".\nWe can do that using the pandas get_dummies function:", "pd.get_dummies(features).head()", "This transformation successfully encoded the string columns. However, one might argue that the class is also a categorical variable. We can explicitly list the columns to encode using the columns parameter, and include pclass:", "features_dummies = pd.get_dummies(features, columns=['pclass', 'sex', 'embarked'])\nfeatures_dummies.head(n=16)\n\ndata = features_dummies.values\n\nimport numpy as np\nnp.isnan(data).any()", "With all of the hard data loading work out of the way, evaluating a classifier on this data becomes straightforward. 
Setting up the simplest possible model, we want to see what the simplest score can be with DummyClassifier.", "from sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import Imputer\n\n\ntrain_data, test_data, train_labels, test_labels = train_test_split(\n data, labels, random_state=0)\n\nimp = Imputer()\nimp.fit(train_data)\ntrain_data_finite = imp.transform(train_data)\ntest_data_finite = imp.transform(test_data)\n\nnp.isnan(train_data_finite).any()\n\nfrom sklearn.dummy import DummyClassifier\n\nclf = DummyClassifier('most_frequent')\nclf.fit(train_data_finite, train_labels)\nprint(\"Prediction accuracy: %f\"\n % clf.score(test_data_finite, test_labels))", "<div class=\"alert alert-success\">\n <b>EXERCISE</b>:\n <ul>\n <li>\n Try executing the above classification, using LogisticRegression and RandomForestClassifier instead of DummyClassifier\n </li>\n <li>\n Does selecting a different subset of features help?\n </li>\n </ul>\n</div>", "# %load solutions/10_titanic.py" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
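One way to approach the exercise in the notebook above, sketched rather than taken from solutions/10_titanic.py: it assumes the `data` and `labels` arrays built from the Titanic dataframe are in scope, and it uses `SimpleImputer`, which stands in for the deprecated `Imputer` on recent scikit-learn versions.

```python
# Hedged sketch for the exercise: swap DummyClassifier for real models.
# Assumes `data` and `labels` from the notebook above are available.
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer   # replacement for the older Imputer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

train_data, test_data, train_labels, test_labels = train_test_split(
    data, labels, random_state=0)

for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=100, random_state=0)):
    clf = make_pipeline(SimpleImputer(), model)   # impute missing ages before fitting
    clf.fit(train_data, train_labels)
    print(type(model).__name__, "accuracy: %.3f" % clf.score(test_data, test_labels))
```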
honjy/foundations-homework
07/pandas-homework-hon-june13.ipynb
mit
[ "Building a pandas Cheat Sheet, Part 1\nImport pandas with the right name", "import pandas as pd\n\ndf = pd.read_csv(\"07-hw-animals.csv\")", "Set all graphics from matplotlib to display inline", "#!pip install matplotlib\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n#This lets your graph show you in your notebook\n\ndf", "Display the names of the columns in the csv", "df['name']", "Display the first 3 animals.", "df.head(3)", "Sort the animals to see the 3 longest animals.", "df.sort_values('length', ascending=False).head(3)", "What are the counts of the different values of the \"animal\" column? a.k.a. how many cats and how many dogs.", "df['animal'].value_counts()", "Only select the dogs.", "dog_df = df['animal'] == 'dog'\ndf[dog_df]", "Display all of the animals that are greater than 40 cm.", "long_animals = df['length'] > 40\ndf[long_animals]", "'length' is the animal's length in cm. Create a new column called inches that is the length in inches.\n1 inch = 2.54 cm", "df['length_inches'] = df['length'] / 2.54\ndf", "Save the cats to a separate variable called \"cats.\" Save the dogs to a separate variable called \"dogs.\"", "cats = df['animal'] == 'cat'\ndogs = df['animal'] == 'dog'", "Display all of the animals that are cats and above 12 inches long. First do it using the \"cats\" variable, then do it using your normal dataframe.", "long_animals = df['length_inches'] > 12\ndf[cats & long_animals]\n\ndf[(df['length_inches'] > 12) & (df['animal'] == 'cat')]\n#Amazing!", "What's the mean length of a cat?\nWhat's the mean length of a dog\nCats are mean but dogs are not", "df[cats].mean()\n\ndf[dogs].mean()", "Use groupby to accomplish both of the above tasks at once.", "df.groupby('animal').mean()\n\n#groupby ", "Make a histogram of the length of dogs. I apologize that it is so boring.", "df[dogs].plot.hist(y='length_inches')", "Change your graphing style to be something else (anything else!)", "df[dogs].plot.bar(x='name', y='length_inches')", "Make a horizontal bar graph of the length of the animals, with their name as the label (look at the billionaires notebook I put on Slack!)", "df[dogs].plot.barh(x='name', y='length_inches')\n#Fontaine is such an annoying name for a dog", "Make a sorted horizontal bar graph of the cats, with the larger cats on top.", "df[cats].sort(['length_inches'], ascending=False).plot(kind='barh', x='name', y='length_inches')\n\n#df[df['animal']] == 'cat'].sort_values(by='length).plot(kind='barh', x='name', y='length', legend=False)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
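A note on the final cell of the pandas notebook above: `DataFrame.sort()` has been removed in later pandas releases, so on a current install the sorted bar chart needs `sort_values`. A minimal sketch, assuming the `df` and `cats` objects defined in that notebook:

```python
# With barh the first row is drawn at the bottom, so an ascending sort
# puts the longest cat at the top, which is what the task asks for.
(df[cats]
   .sort_values(by='length_inches')
   .plot(kind='barh', x='name', y='length_inches', legend=False))
```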
flowersteam/naminggamesal
notebooks/6_Intro_Experiment_Database.ipynb
agpl-3.0
[ "import sys\nimport seaborn as sns\nsys.path.append(\"..\")\n%matplotlib inline\nsns.set(rc={'image.cmap': 'Purples_r'})", "Experiment Database", "import naminggamesal.ngdb as ngdb\nimport naminggamesal.ngsimu as ngsimu", "Using ngdb instead of ngsimu, we create Experiments objects that are re-usable via a database. Execute the code below, and a second time for the graph().show() part, you will notice the difference between the 2 libs (testexp is ngdb.Experiment, testexp2 is ngsimu.Experiment).", "xp_cfg={\n 'pop_cfg':{\n 'voc_cfg':{\n 'voc_type':'pandas',\n #'voc_type':'sparse_matrix',\n #'M':5,\n #'W':10\n },\n 'strat_cfg':{\n 'strat_type':'naive',\n 'voc_update':'minimal'\n },\n 'interact_cfg':{\n 'interact_type':'speakerschoice'\n },\n 'env_cfg':{'env_type':'simple','M':5,'W':10},\n 'nbagent':nbagent\n },\n 'step':10\n}\n\ntestexp=ngdb.Experiment(**xp_cfg)\ntestexp\ntestexp2=ngsimu.Experiment(**xp_cfg)\ntestexp2\n\ntestexp.continue_exp_until(0)\ntestexp2.continue_exp_until(3)\n\nfor i in range()\nprint testexp2._poplist.get_last()._agentlist[i]._vocabulary._content_m\nprint testexp2._poplist.get_last()._agentlist[i]._vocabulary._content_m.index\nprint testexp2._poplist.get_last()._agentlist[i]._vocabulary.get_accessible_words()\n\ntestexp2._poplist.get_last()._agentlist[0]._vocabulary.add()\n\ntestexp.graph(\"entropy\").show()\n\n\ntestexp2.graph(\"entropy\").show()", "Get back existing experiments, merge DBs, and plot with different abscisse", "db=ngdb.NamingGamesDB(\"ng2.db\")\n\ntestexp3=db.get_experiment(force_new=True,**xp_cfg)\n\ntestexp3.continue_exp_until(200)\n\ndb.merge(\"naminggames.db\",remove=False)\n\ntestexp3=db.get_experiment(force_new=False,**xp_cfg)\n\ntestexp3.continue_exp_until(100)\n\ntestexp3.graph(\"srtheo\").show()\ntestexp3.graph(\"entropy\").show()\n\ntestexp3.graph(\"entropy\",X=\"srtheo\").show()\n\ntestexp3.graph(\"entropy\",X=\"interactions_per_agent\").show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
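The exploratory cell in the notebook above that inspects each agent's vocabulary is left unfinished in the source (`nbagent` never receives a value and the loop body is incomplete), so the following is only a guess at the intended database workflow. It reuses calls that already appear in the notebook and assumes `xp_cfg` has been defined with a concrete integer for `nbagent`.

```python
# Hypothetical re-run of the database-backed workflow, using only calls
# shown in this notebook; the semantics of get_experiment are inferred.
import naminggamesal.ngdb as ngdb

db = ngdb.NamingGamesDB("ng2.db")
exp = db.get_experiment(force_new=False, **xp_cfg)  # fetch a matching stored run if one exists
exp.continue_exp_until(200)                         # only the missing steps are simulated
exp.graph("srtheo").show()
exp.graph("entropy", X="interactions_per_agent").show()
```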
wcmckee/wcmckee
nanowritmorst.ipynb
mit
[ "NaNoWritMoRST\nScript to generate RST file daily novel writing", "from collections import Counter\nimport xmltodict\nimport os\n\n#for filera in range(1,28):\n #print (filera)\n \n #use open() for opening file.\n #Always use `with` statement as it'll automatically close the file for you.\n #with open(r'/home/wcmckee/Downloads/writersden/posts/nanowrimo15-day' + str(filera) + '.rst') as f:\n #create a list of all words fetched from the file using a list comprehension\n #words = [word for line in f for word in line.split()]\n #print (\"The total word count is:\", len(words))\n #now use collections.Counter\n #c = Counter(words)\n #for word, count in c.most_common():\n #print (word, count)\n\n'''\nfile=open(\"/home/wcmckee/Downloads/wdm-site/posts/nanowrimo15-day28.rst\",\"r+\")\nwordcount={}\nfor word in file.read().split():\n if word not in wordcount:\n wordcount[word] = 1\n else:\n wordcount[word] += 1\nfor k,v in wordcount.items():\n print(k, v)\n #content = v.decode('utf-8')\n'''\n\ndayil = dict()\n\nwcdic = dict()\n\nalwords = dict()\n\ndef main():\n #use open() for opening file.\n #Always use `with` statement as it'll automatically close the file for you.\n with open(r'/home/wcmckee/Downloads/writersden/posts/nanowrimo15-day2.rst') as f:\n #create a list of all words fetched from the file using a list comprehension\n words = [word for line in f for word in line.split()]\n print (\"The total word count is:\", len(words))\n wcdic.update({'wordcount' : len(words)})\n alwords.update({'test':words})\n #now use collections.Counter\n c = Counter(words)\n for word, count in c.most_common():\n print (word, count)\n dayil.update({word : count})\nmain()\n\nwcdic", "aspell check", "makedailthi = os.listdir('/home/wcmckee/Downloads/writersdenhamilton/posts/')\n\nimport arrow\n\nartim = arrow.now()\n\nartim.strftime('%d')\n\ndaiyrst = artim.strftime('%d')\n\nyrrst = artim.strftime('%y')\n\nyrrst\n\nfulrst = daiyrst\n\nfor maked in makedailthi:\n #print(maked)\n if ('nanowrimo') in makedailthi:\n print(maked)\n\nif ('nanwri' in makedailthi):\n print (makedailthi)\n\nfor its in range(1, int(daiyrst) + 1):\n print(its)", "Swap this around - number of words as key and word as value", "for dakey in dayil.keys():\n if ',' in dakey: \n print(dakey.replace(',', ''))\n elif '.' in dakey:\n print(dakey.replace('.', ''))", "How many single words in the file? \nScript to edit. Find and replace certain words/pharses.\nKeylog whatever you type\nReport generatored at midnight that summery of days writing.", "plusonev = list()\n\nisonep = list()\n\nfor dayi in dayil.values():\n if dayi > 1:\n print(dayi)\n plusonev.append(dayi)\n elif dayi < 2:\n isonep.append(dayi)", "get the difference between two len", "len(isonep)\n\nlen(plusonev)\n\nsum(isonep)\n\nsum(plusonev)\n\nsum(isonep) + sum(plusonev)\n\nmakedailthi\n\nimport requests\nimport xmltodict\n\nreqartc = requests.get('http://nanowrimo.org/wordcount_api/wchistory/artctrl')\n\nreqtx = reqartc.text\n\ntxmls = xmltodict.parse(reqtx)\n\ntxkeys = txmls.keys()\n\ntxmls['wchistory']['wordcounts']['wcentry'][0]\n\nlentxm = len(txmls['wchistory']['wordcounts']['wcentry'])\n\nwclis = list()\n\nfor lent in range(lentxm):\n print(txmls['wchistory']['wordcounts']['wcentry'][lent]['wc'])\n wclis.append(int(txmls['wchistory']['wordcounts']['wcentry'][lent]['wc']))\n\nlen(wclis)\n\nsum(wclis)\n\ntxmls.values()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
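The word-count exploration in the notebook above can be condensed into a single pass with `collections.Counter`. The sketch below uses only the standard library; the posts directory is the one hard-coded in the notebook and will differ on other machines.

```python
# Count words across all daily NaNoWriMo .rst posts in one pass.
import os
from collections import Counter

posts_dir = '/home/wcmckee/Downloads/writersdenhamilton/posts/'   # path taken from the notebook
totals = Counter()
for name in os.listdir(posts_dir):
    if name.startswith('nanowrimo15-day') and name.endswith('.rst'):
        with open(os.path.join(posts_dir, name)) as f:
            totals.update(word for line in f for word in line.split())

print('total words:', sum(totals.values()))
print('ten most common:', totals.most_common(10))
```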
OSGeoLabBp/tutorials
english/python/regexp_in_python.ipynb
cc0-1.0
[ "<a href=\"https://colab.research.google.com/github/OSGeoLabBp/tutorials/blob/master/english/python/regexp_in_python.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nRegular expressions in Python\nRegular expression (regexp) is a powerful tool to handle diverse text patterns in text processing. Several text editors (e.g Notepad++, vi) and programming languages have regexp functionality.\nTo define text patterns, a special meaning is assigned to some characters. You can find below a very short and incomplete list of special regexp characters:\n|character(s)|explanation |\n|------------|-----------------------------------------------------------------|\n|. (dot) | any character except new line |\n|^ |beginning of the line |\n|$ |end of the line |\n|[abc] |any character from the set in the brackets |\n|[^abc] |none of the characters in the set in brackets |\n|[a-z] |any character from the range in brackets (inclusive) |\n|[^a-z] |none of the characters in the range in brackets |\n|( ) |make group in pattern |\n|{min,max} | repetition of the previous character or group, max part is optional|\n|p1 \\| p2 |p1 pattern or p2 pattern |\n|p* |any number of repetition of p pattern, including zero equivalent to p{0,}|\n|p+ |one or more repetition of p pattern, equivalent to p{1,} |\n|p? |zero or one repetition of p pattern, equivalent to p{0,1} |\n|\\ |escape the special meaning of the next character (e.g. . the dot character, not any character)|\nPython has a special package named re to handle regular expressions. To use it, it is necessary to import it, as follows:", "import re", "Let's make some examples using regexps\nPattern in string", "text = \"\"\"Python is an interpreted high-level general-purpose programming language. \nIts design philosophy emphasizes code readability with its use of significant indentation. \nIts language constructs as well as its object-oriented approach aim to help programmers write clear, \nlogical code for small and large-scale projects.\"\"\" # citation from Wikipedia", "re.match searches for the pattern only at the beginning of string. It returns an object or None if the pattern not found.", "re.match(\"Python\", text) # is Python at the beginning of the text?\n\nif re.match(\"[Pp]ython\", text): # is Python or python at the beginning of the text?\n print('text starts with Python')\n\nresult = re.match(\"[Pp]ython\", text)\nresult.span(), result.group(0)\n", "re.search searches the first occurence of the pattern in the string.", "re.search('prog', text)\n\nre.search('levels?', text) # optional 's' after level\n\nre.findall('pro', text)", "r preface is often used for regular expression", "re.findall(r'[ \\t\\r\\n]a[a-zA-Z0-9_][ \\t\\r\\n]', text) # two letter words starting with letter 'a'\n\nre.findall(r'\\sa\\w\\s', text) # the same as above but shorter \n\nre.findall(r'\\sa\\w*\\s', text) # words strarting with 'a'", "We can use regexp to find/match functions to validate input data. 
In the example below, is a string a valid number?", "int_numbers = ('12356', '1ac', 'twelve', '23.65', '0', '-768')\nfor int_number in int_numbers:\n if re.match(r'[+-]?(0|[1-9][0-9]*)$', int_number):\n print(f'{int_number} is an integer number')\n\nfloat_numbers =('12', '0.0', '-43.56', '1.76e-1', '1.1.1', '00.289')\nfor float_number in float_numbers:\n if re.match(r'[+-]?(0|[1-9][0-9]*)(\\.[0-9]*)?([eg][+-]?[0-9]+)?$', float_number):\n print(f'{float_number} is a float number')", "There is another approach to check numerical values without regexp, as follows:", "for float_number in float_numbers:\n try:\n float(float_number) # try to convert to float number\n except ValueError:\n continue # can't convert skip it\n print(f'{float_number} is a float number')", "Email address validation: We'll use the precompiled regular expression (re.compile). This alternative is faster than the alternative of using the same regexp evaluated several times:", "email = re.compile(r'^[a-zA-Z0-9.!#$%&\\'*+/=?^_`{|}~-]+@[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(\\.[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$')\naddresses = ['a.b@c', 'siki.zoltan@emk.bme.hu', 'plainaddress', '#@%^%#$@#$@#.com', '@example.com', 'Joe Smith <email@example.com>',\n 'email.example.com', 'email@example@example.com', 'email@123.123.123.123']\nvalid_addresses = [addr for addr in addresses if email.search(addr)]\nprint('valid email addresses:\\n', valid_addresses)\ninvalid_addresses = [addr for addr in addresses if not email.search(addr)]\nprint('invalid email addresses:\\n', invalid_addresses)", "Other functions\nre.sub replaces the occurrence of a regexp with a given text in a string.", "print(re.sub(r' *', ' ', 'Text with several unnecessary spaces')) # truncate adjecent spaces to a single space\nprint(re.sub(r'[ \\t,;]', ',', 'first,second;third fourth fifth')) # unify separators", "re.split splits a text into a list of parts, where separators are given by regexp.", "words = re.split(r'[, \\.\\t\\r\\n]', text) # word separators are space, dot, tabulator and EOL\nwords", "Please note that the previous result contains some empty words where two or more separators are adjecent. Let's correct it:", "words = re.split(r'[, \\.\\t\\r\\n]+', text) # join adjecent separators\nwords", "Why is there an empty word at the end?\nComplex example\nLet's make a complex example: Find the most frequent four-letter word starting with \"s\" in Kipling's The Jungle Book.", "import urllib.request\nurl = 'https://www.gutenberg.org/files/236/236-0.txt'\nwords = {}\nwith urllib.request.urlopen(url) as file:\n for line in file:\n ws = re.split(r'[, \\.\\t\\r\\n]+', line.decode('utf8'))\n for w in ws:\n w = w.lower()\n if re.match('[sS][a-z]{3}', w):\n if w in words:\n words[w] += 1\n else:\n words[w] = 1\nprint(f'{len(words.keys())} different four letter words starting with \"s\"')\nm = max(words, key=words.get)\nprint(f'{m}: {words[m]}')\n", "Tasks\n\nAnalyse and try to understand the used regular expressons for float and email\nCreate a regular expression for phone numbers\nWhich is the longest word in Kipling's book?\nAre there words in the book with all the different vowels (aeiou) of the English ABC?\nHow could we handle plurals and other non-dictionary forms (e.g. Maugli's, sees, saw, seen, etc)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
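For the first task listed at the end of the regexp tutorial above (a regular expression for phone numbers), one deliberately simple pattern is sketched below. It accepts an optional country code and optional space or hyphen separators; real-world validation would normally rely on a dedicated library instead.

```python
# A loose phone-number pattern: optional +country code, then 2-3-4 digit groups
# with optional space/hyphen separators. Adjust to the formats you actually expect.
import re

phone = re.compile(r'^(\+?[0-9]{1,3}[ -]?)?[0-9]{2}[ -]?[0-9]{3}[ -]?[0-9]{4}$')

for candidate in ('+36 30 123 4567', '06301234567', '30-123-4567', '12ab34'):
    print(candidate, '->', 'valid' if phone.match(candidate) else 'invalid')
```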
psychemedia/parlihacks
notebooks/ShowNtell_nov17.ipynb
mit
[ "Postcards of Parliament From A Digital Flâneur\nFlâneur, noun, A man who saunters around observing society.\nThe flâneur wandered in the shopping arcades, but he did not give in to the temptations of consumerism; the arcade was primarily a pathway to a rich sensory experience — and only then a temple of consumption. His goal was to observe, to bathe in the crowd, taking in its noises, its chaos, its heterogeneity, its cosmopolitanism. Occasionally, he would narrate what he saw — surveying both his private self and the world at large — in the form of short essays for daily newspapers.\nThe Death of the Cyberflâneur, Evgeny Morozov, New York Times, Sunday Review, February 4, 2012\nAPIs\n\nUsing the APIs\n\nPackages Make Life Easier (?)", "import mnis\nimport datetime\n\n# Create a date for the analysis\nd = datetime.date.today()\n\n# Download the full data for MPs serving on the given date as a list\nmnis.getCommonsMembersOn(d)[0]", "Creating Custom pandas Data Reader Packages", "import pd_datareader_nhs.nhs_digital_ods as ods\n\nods.search(string='Prison', field='Label')\n\ndd=ods.download('eprison')\ndd.head()", "Package Issues\n\n\ndevelopment\n\n\nbuilding up example and reusable recipes\n\n\nownership and production quality (participation in development)\n\n\nNotebooks as Open / Shared Recipes\n\nBut How Do I Share Working Examples?\n\nBinderHub Build Sequence\n\"[P]hilosophically similar to Heroku Build Packs\"\n\nrequirements.txt\npython packages\nenvironment.yml\nconda environment specification\napt.txt\n\ndebian packages that should be installed (latest version of Ubuntu)\n\n\npostBuild\n\narbitrary commands to be run after the whole repository has been built\nREQUIRE\nJulia packages\nDockerfile\ntreated as a regular Dockerfile. The presence of a Dockerfile will cause all other building behavior to not be triggered.\n\nBuilding a Local Docker Image From a Github Repository\n```bash\npip3 install jupyter-repo2docker\njupyter-repo2docker --image-name psychemedia/parlihacks --no-run https://github.com/psychemedia/parlihacks\ndocker push psychemedia/parlihacks\n```\nCreating Simple Service APIs\nIn terminal:\njupyter kernelgateway --KernelGatewayApp.api='kernel_gateway.notebook_http' --KernelGatewayApp.seed_uri='./SimpleAPI2.ipynb' --port 8899", "import requests\nrequests.get('http://127.0.0.1:8899/demo/role/worker').json()\n\nrequests.get('http://127.0.0.1:8899/demo/name/jo').json()", "Possible DIY Service Types\n\n\ndata servers: for example, API defined against a CSV file or simple SQLite3 database\n\n\nknown entity taggers: for example, identify members mentioned within a text\n\n\nclassifiers: for example, attempt to associate a text with a particular topic, or subject expert who can best handle a particular question, relative to a particular trained classifier / model\n\n\nReporting Patterns\n\n\ngenerate report for a single MP about their constituency\n\nas a standalone item\nin comparison to all other consituencies nationally\nin comparison to a subset of other constitutencies eg neighbouring, similar\n\n\n\ngenerate a report over all consituencies nationally\n\n\ngenerate a report of wards within a particular constituency" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
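To accompany the kernel-gateway example in the notebook above, here is a small sketch of calling those endpoints with basic error handling. The host, port and routes are the ones used in the notebook and are assumptions for any other deployment.

```python
# Query the locally served notebook-API endpoints and fail gracefully
# when the gateway is not running.
import requests

BASE = 'http://127.0.0.1:8899'   # port chosen in the kernel gateway command above

def call_service(path):
    try:
        resp = requests.get(BASE + path, timeout=5)
        resp.raise_for_status()
        return resp.json()
    except requests.RequestException as exc:
        return {'error': str(exc)}

print(call_service('/demo/role/worker'))
print(call_service('/demo/name/jo'))
```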
dseuss/notebooks
Compressed Sensing/IHT -- Compressed Sensing.ipynb
unlicense
[ "Note: This is still work in progress\nTODO I should normalize $x$, not $A$ by $1/N$", "import numpy as np\nfrom numpy.linalg import norm\nimport matplotlib.pyplot as pl\nimport cvxpy as cvx\nimport itertools as it\n\nimport warnings\nwarnings.filterwarnings(\"ignore\", category=DeprecationWarning) \n\nimport sys\nsys.path.append('/Users/dsuess/Code/Pythonlibs/')\n# see https://github.com/dseuss/pythonlibs\nfrom tools.helpers import Progress\n\ndef random_sparse_vector(dim, nnz, rgen=np.random):\n \"\"\"Create a random sparse vector of dimension `dim` \n with `nnz` nonzero elements \"\"\"\n idx = rgen.choice(np.arange(dim), size=nnz, replace=False)\n result = np.zeros(dim)\n result[idx] = rgen.randn(nnz)\n return result\n\ndef compress(x, r):\n \"\"\"Computes the best r-sparse approximation to `x` in place\"\"\"\n argpart = np.argpartition(np.abs(x), -r)\n support, rest = argpart[-r:], argpart[:-r]\n x[rest] = 0\n return support\n\ndef compression(x, r, retsupp=False):\n \"\"\"Returns the best r-sparse approximation to `x`. If \n `retsupp` we also return the support of this compression\"\"\"\n x_new = x.copy()\n supp = compress(x_new, r)\n return x_new, supp if retsupp else x_new\n\ndef sensingmat_gauss(m, n, rgen=np.random):\n \"\"\"Returns a m*n sensing matrix with independent, normal\n components. See the remark below for normalization\"\"\"\n return rgen.randn(m, n) / np.sqrt(m * n)", "The normalization of the sensing matrices $A$ is choosen in such a way that $\\mathbb{E}\\Vert A \\Vert_2 = 1$ is independent of $m$ and $n$.\nThis allows us to normalize $A$ later to $\\Vert A \\Vert_2 < 1$ with high probability.\nThe latter condition is important for the convergence of the constant step-size IHT algorithm.\nFor the other approaches (convex programming or adaptive IHT), the normalization does not matter.\nConvex Programming\nThe first question we will check is: \"How many measurements do we need to recover the signal?\".\nFor this purpose we will use basis pursuit denoising (the $\\ell_1$-regularized least square estimator), which is very efficient in the number of samples.\nOn the other hand it does not scale well for large systems, due to the higher order polynomial scaling of the corresponding Second order cone program.", "def recover(x, m, eta=1.0, sigma=0.0, rgen=np.random):\n A = sensingmat_gauss(m, len(x), rgen)\n y = np.dot(A, x) + sigma / np.sqrt(len(x)) * rgen.randn(m)\n \n x_hat = cvx.Variable(len(x))\n objective = cvx.Minimize(cvx.norm(A * x_hat - y, 2)**2 + eta * cvx.norm(x_hat, 1))\n problem = cvx.Problem(objective, [])\n problem.solve()\n \n return np.ravel(x_hat.value)\n\ndef check(signal, sigma, rgen=np.random):\n return np.frompyfunc(lambda m, eta: np.linalg.norm(signal - recover(signal, m, eta, sigma, rgen)),\n 2, 1)\n\nDIM = 100\nNNZ = 5\nSAMPLES = 20\nMAX_MEASUREMENTS = 100\nSIGMA = 0\n\nsignal = random_sparse_vector(DIM, NNZ)\nms, etas = np.meshgrid(np.linspace(1, MAX_MEASUREMENTS, 20), np.logspace(-4, 1, 20))\n\nerrors = [check(random_sparse_vector(DIM, NNZ), SIGMA)(ms, etas) \n for i in Progress(range(SAMPLES))]\nerrors = np.mean(errors, axis=0).reshape(ms.shape).astype('float64')\n\nvalue = np.log(errors)\nlevels = np.linspace(np.min(value), np.max(value), 15)\n\npl.yscale('log')\npl.contourf(ms, etas, value, levels=levels)\npl.xlabel = r\"m\"\npl.ylabel = r\"$\\lambda$\"\npl.colorbar()", "As we can see from the figure above, about 25 measurements are enough to recover the signal with high precision.\nSince we have perfect measurements (no noise) we obtain 
better results for a smaller regularization parameter $\\eta$.\nThis changes as soon we add noise to our measurements:", "DIM = 100\nNNZ = 5\nSAMPLES = 20\nMAX_MEASUREMENTS = 100\nSIGMA = .2\n\nsignal = random_sparse_vector(N, NNZ)\nms, etas = np.meshgrid(np.linspace(1, MAX_MEASUREMENTS, 20), np.logspace(-4, 1, 20))\n\nerrors = [check(random_sparse_vector(N, NNZ), SIGMA)(ms, etas) \n for i in Progress(range(SAMPLES))]\nerrors = np.mean(errors, axis=0).reshape(ms.shape).astype('float64')\n\nvalue = np.log(errors)\nlevels = np.linspace(np.min(value), np.max(value), 15)\n\npl.yscale('log')\npl.contourf(ms, etas, value, levels=levels)\npl.xlabel = r\"m\"\npl.ylabel = r\"$\\lambda$\"\npl.colorbar()", "Here, $\\eta \\approx 10^{-2}$ gives the best results.\nFor a larger value the signal is too sparse to fit the data well.\nOn the other hand, for smaller values we overfit the data to the noisy measurements.\nIn the following we will use noiseless measurements.\nConstant IHT\nFirst of all we will try iterative hard thresholding (IHT) with a constant stepsize $\\mu = 1$.\nIn this case it can be proven that the algorithm converges provided $\\Vert A \\Vert_2 < 1$ [1].\nNote that a rescaling of the design matrix $A$ can be compensated by rescaling the step size [2].\nDue to our choice of normalization for $A$, we have that $\\mu = 0.5$ will suffice with high probability.\nSince we cannot expect that cIHT is as efficient w.r.t. the sample size as the basis pursuit denoising, we increase the number of measurements.", "import sys\nsys.path.append('/Users/dsuess/Code/CS\\ Algorithms/')\n\nfrom csalgs.cs import iht\n%autoreload 2\n\ndef cIHT(x, m, r, stepsize=1.0, rgen=np.random, sensingmat=None, x_init=None):\n A = sensingmat_gauss(m, len(x)) if sensingmat is None else sensingmat\n y = np.dot(A, x)\n x_hat = np.zeros(x.shape) if x_init is None else x_init\n while True:\n x_hat += stepsize * A.T.dot(y - A.dot(x_hat))\n compress(x_hat, r)\n yield x_hat\n\n%debug\n\nMEASUREMENTS = 50\n\nfor _ in Progress(range(1)):\n A = sensingmat_gauss(MEASUREMENTS, len(signal))\n y = A @ signal\n solution = iht.csIHT(A, y, 2 * NNZ, stepsize=iht.adaptive_stepsize())\n pl.plot([np.linalg.norm(signal - x_hat) for x_hat in it.islice(solution, 100)])\n \npl.xlabel = \"\\# iterations\"\npl.ylabel = r\"\\Vert x - \\hat x \\Vert_2\"", "Note that the cIHT algorithm does not converge reliably even for a large number of iterations.\nEven worse, the algorithm gets trapped for a finite amount of time.\nThis behavior is not good if one wants to check convergence by comparing two consecutive iteration steps.\nAdaptive IHT", "SCALE_CONST = .5\nKAPPA = 3.\n\n\nassert KAPPA > 1 / (1 - SCALE_CONST)\n\ndef get_stepsize(A, g, supp):\n return norm(g[supp])**2 / norm(np.dot(A[:, supp], g[supp]))**2\n\ndef same_supports(supp1, supp2):\n return np.all(np.sort(supp1) == np.sort(supp2))\n\ndef compute_omega(x_np, x_n, A):\n diff = x_np - x_n\n return (1 - SCALE_CONST) * norm(diff)**2 / norm(A.dot(diff))**2\n\ndef get_update(A, x, y, supp, r):\n g = A.T.dot(y - A.dot(x))\n mu = norm(g[supp])**2 / norm(np.dot(A[:, supp], g[supp]))**2\n while True:\n x_new, supp_new = compression(x + mu * g, r, retsupp=True)\n if same_supports(supp, supp_new) or (mu < compute_omega(x_new, x, A)):\n return x_new, supp_new\n mu /= KAPPA * SCALE_CONST\n\n\ndef aIHT(x, m, r, rgen=np.random, sensingmat=None, x_init=None):\n A = sensingmat_gauss(m, len(x)) if sensingmat is None else sensingmat\n y = np.dot(A, x)\n \n x_hat = np.zeros(x.shape) if x_init is None else 
x_init\n    _, supp = compression(A.T.dot(y), r, retsupp=True)\n    \n    while True:\n        x_hat, supp = get_update(A, x_hat, y, supp, r)\n        yield x_hat\n\nDIM = 1000\nNNZ = 25\nMEASUREMENTS = 200\n\nfor _ in Progress(range(20)):\n    signal = random_sparse_vector(DIM, NNZ)\n    solution = aIHT(signal, MEASUREMENTS, int(2 * NNZ))\n    pl.plot([np.linalg.norm(signal - x_hat) for x_hat in it.islice(solution, 250)])\n    \npl.xlabel = \"\\# iterations\"\npl.ylabel = r\"\\Vert x - \\hat x \\Vert_2\"", "Not only does the aIHT algorithm converge much faster than the cIHT, it also does not get trapped as badly as the latter.\nReferences\n[1] S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing. New York, NY: Springer New York, 2013.\n[2] T. Blumensath and M. E. Davies, “Normalized Iterative Hard Thresholding: Guaranteed Stability and Performance,” IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 298–309, Apr. 2010." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
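A self-contained toy run of constant step-size IHT, using only NumPy, can be used to check the mechanics described in the notebook above without the external csalgs package. The step size is set to 1/||A||_2^2 so that the rescaled operator has norm at most one; for these easy dimensions the iteration typically recovers the signal, although, as the notebook notes, the constant-step variant can be slow.

```python
# Minimal constant-stepsize IHT on a synthetic sparse recovery problem.
import numpy as np

rgen = np.random.default_rng(0)
n, m, r = 200, 80, 5

x_true = np.zeros(n)
x_true[rgen.choice(n, size=r, replace=False)] = rgen.standard_normal(r)
A = rgen.standard_normal((m, n))
y = A @ x_true

x_hat = np.zeros(n)
mu = 1.0 / np.linalg.norm(A, 2) ** 2              # same role as requiring ||A|| < 1 with mu = 1
for it in range(500):
    x_hat = x_hat + mu * A.T @ (y - A @ x_hat)    # gradient step on the least-squares objective
    x_hat[np.argsort(np.abs(x_hat))[:-r]] = 0.0   # hard thresholding: keep the r largest entries
    if np.linalg.norm(y - A @ x_hat) < 1e-10 * np.linalg.norm(y):
        break

print('iterations:', it + 1,
      'relative error: %.2e' % (np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)))
```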
twosigma/beaker-notebook
doc/python/ChartingAPI.ipynb
apache-2.0
[ "Python API to BeakerX Interactive Plotting\nYou can access Beaker's native interactive plotting library from Python.\nPlot with simple properties\nPython plots has syntax very similar to Groovy plots. Property names are the same.", "from beakerx import *\nimport pandas as pd\n\ntableRows = pd.read_csv('../resources/data/interest-rates.csv')\n\nPlot(title=\"Title\",\n xLabel=\"Horizontal\",\n yLabel=\"Vertical\",\n initWidth=500,\n initHeight=200)", "Plot items\nLines, Bars, Points and Right yAxis", "x = [1, 4, 6, 8, 10]\ny = [3, 6, 4, 5, 9]\n\npp = Plot(title='Bars, Lines, Points and 2nd yAxis', \n xLabel=\"xLabel\", \n yLabel=\"yLabel\", \n legendLayout=LegendLayout.HORIZONTAL,\n legendPosition=LegendPosition.RIGHT,\n omitCheckboxes=True)\n\npp.add(YAxis(label=\"Right yAxis\"))\npp.add(Bars(displayName=\"Bar\", \n x=[1,3,5,7,10], \n y=[100, 120,90,100,80], \n width=1))\npp.add(Line(displayName=\"Line\", \n x=x, \n y=y, \n width=6, \n yAxis=\"Right yAxis\"))\npp.add(Points(x=x, \n y=y, \n size=10, \n shape=ShapeType.DIAMOND,\n yAxis=\"Right yAxis\"))\n\nplot = Plot(title= \"Setting line properties\")\nys = [0, 1, 6, 5, 2, 8]\nys2 = [0, 2, 7, 6, 3, 8]\nplot.add(Line(y= ys, width= 10, color= Color.red))\nplot.add(Line(y= ys, width= 3, color= Color.yellow))\nplot.add(Line(y= ys, width= 4, color= Color(33, 87, 141), style= StrokeType.DASH, interpolation= 0))\nplot.add(Line(y= ys2, width= 2, color= Color(212, 57, 59), style= StrokeType.DOT))\nplot.add(Line(y= [5, 0], x= [0, 5], style= StrokeType.LONGDASH))\nplot.add(Line(y= [4, 0], x= [0, 5], style= StrokeType.DASHDOT))\n\nplot = Plot(title= \"Changing Point Size, Color, Shape\")\ny1 = [6, 7, 12, 11, 8, 14]\ny2 = [4, 5, 10, 9, 6, 12]\ny3 = [2, 3, 8, 7, 4, 10]\ny4 = [0, 1, 6, 5, 2, 8]\nplot.add(Points(y= y1))\nplot.add(Points(y= y2, shape= ShapeType.CIRCLE))\nplot.add(Points(y= y3, size= 8.0, shape= ShapeType.DIAMOND))\nplot.add(Points(y= y4, size= 12.0, color= Color.orange, outlineColor= Color.red))\n\nplot = Plot(title= \"Changing point properties with list\")\ncs = [Color.black, Color.red, Color.orange, Color.green, Color.blue, Color.pink]\nss = [6.0, 9.0, 12.0, 15.0, 18.0, 21.0]\nfs = [False, False, False, True, False, False]\nplot.add(Points(y= [5] * 6, size= 12.0, color= cs))\nplot.add(Points(y= [4] * 6, size= 12.0, color= Color.gray, outlineColor= cs))\nplot.add(Points(y= [3] * 6, size= ss, color= Color.red))\nplot.add(Points(y= [2] * 6, size= 12.0, color= Color.black, fill= fs, outlineColor= Color.black))\n\nplot = Plot()\ny1 = [1.5, 1, 6, 5, 2, 8]\ncs = [Color.black, Color.red, Color.gray, Color.green, Color.blue, Color.pink]\nss = [StrokeType.SOLID, StrokeType.SOLID, StrokeType.DASH, StrokeType.DOT, StrokeType.DASHDOT, StrokeType.LONGDASH]\nplot.add(Stems(y= y1, color= cs, style= ss, width= 5))\n\nplot = Plot(title= \"Setting the base of Stems\")\nys = [3, 5, 2, 3, 7]\ny2s = [2.5, -1.0, 3.5, 2.0, 3.0]\nplot.add(Stems(y= ys, width= 2, base= y2s))\nplot.add(Points(y= ys))\n\nplot = Plot(title= \"Bars\")\ncs = [Color(255, 0, 0, 128)] * 5 # transparent bars\ncs[3] = Color.red # set color of a single bar, solid colored bar\nplot.add(Bars(x= [1, 2, 3, 4, 5], y= [3, 5, 2, 3, 7], color= cs, outlineColor= Color.black, width= 0.3))", "Lines, Points with Pandas", "plot = Plot(title= \"Pandas line\")\nplot.add(Line(y= tableRows.y1, width= 2, color= Color(216, 154, 54)))\nplot.add(Line(y= tableRows.y10, width= 2, color= Color.lightGray))\n\nplot\n\nplot = Plot(title= \"Pandas Series\")\nplot.add(Line(y= pd.Series([0, 6, 1, 5, 2, 4, 3]), 
width=2))\n\nplot = Plot(title= \"Bars\")\ncs = [Color(255, 0, 0, 128)] * 7 # transparent bars\ncs[3] = Color.red # set color of a single bar, solid colored bar\nplot.add(Bars(pd.Series([0, 6, 1, 5, 2, 4, 3]), color= cs, outlineColor= Color.black, width= 0.3))", "Areas, Stems and Crosshair", "ch = Crosshair(color=Color.black, width=2, style=StrokeType.DOT)\nplot = Plot(crosshair=ch)\ny1 = [4, 8, 16, 20, 32]\nbase = [2, 4, 8, 10, 16]\ncs = [Color.black, Color.orange, Color.gray, Color.yellow, Color.pink]\nss = [StrokeType.SOLID, \n StrokeType.SOLID, \n StrokeType.DASH, \n StrokeType.DOT, \n StrokeType.DASHDOT, \n StrokeType.LONGDASH]\nplot.add(Area(y=y1, base=base, color=Color(255, 0, 0, 50)))\nplot.add(Stems(y=y1, base=base, color=cs, style=ss, width=5))\n\nplot = Plot()\ny = [3, 5, 2, 3]\nx0 = [0, 1, 2, 3]\nx1 = [3, 4, 5, 8]\nplot.add(Area(x= x0, y= y))\nplot.add(Area(x= x1, y= y, color= Color(128, 128, 128, 50), interpolation= 0))\n\np = Plot()\np.add(Line(y= [3, 6, 12, 24], displayName= \"Median\"))\np.add(Area(y= [4, 8, 16, 32], base= [2, 4, 8, 16],\n color= Color(255, 0, 0, 50), displayName= \"Q1 to Q3\"))\n\nch = Crosshair(color= Color(255, 128, 5), width= 2, style= StrokeType.DOT)\npp = Plot(crosshair= ch, omitCheckboxes= True,\n legendLayout= LegendLayout.HORIZONTAL, legendPosition= LegendPosition.TOP)\nx = [1, 4, 6, 8, 10]\ny = [3, 6, 4, 5, 9]\npp.add(Line(displayName= \"Line\", x= x, y= y, width= 3))\npp.add(Bars(displayName= \"Bar\", x= [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], y= [2, 2, 4, 4, 2, 2, 0, 2, 2, 4], width= 0.5))\npp.add(Points(x= x, y= y, size= 10))", "Constant Lines, Constant Bands", "p = Plot ()\np.add(Line(y=[-1, 1]))\np.add(ConstantLine(x=0.65, style=StrokeType.DOT, color=Color.blue))\np.add(ConstantLine(y=0.1, style=StrokeType.DASHDOT, color=Color.blue))\np.add(ConstantLine(x=0.3, y=0.4, color=Color.gray, width=5, showLabel=True))\n\nPlot().add(Line(y=[-3, 1, 3, 4, 5])).add(ConstantBand(x=[1, 2], y=[1, 3]))\n\np = Plot() \np.add(Line(x= [-3, 1, 2, 4, 5], y= [4, 2, 6, 1, 5]))\np.add(ConstantBand(x= ['-Infinity', 1], color= Color(128, 128, 128, 50)))\np.add(ConstantBand(x= [1, 2]))\np.add(ConstantBand(x= [4, 'Infinity']))\n\nfrom decimal import Decimal\npos_inf = Decimal('Infinity')\nneg_inf = Decimal('-Infinity')\nprint (pos_inf)\nprint (neg_inf)\n\n\nfrom beakerx.plot import Text as BeakerxText\nplot = Plot()\nxs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\nys = [8.6, 6.1, 7.4, 2.5, 0.4, 0.0, 0.5, 1.7, 8.4, 1]\ndef label(i):\n if ys[i] > ys[i+1] and ys[i] > ys[i-1]:\n return \"max\"\n if ys[i] < ys[i+1] and ys[i] < ys[i-1]:\n return \"min\"\n if ys[i] > ys[i-1]:\n return \"rising\"\n if ys[i] < ys[i-1]:\n return \"falling\"\n return \"\"\n\nfor i in xs:\n i = i - 1\n if i > 0 and i < len(xs)-1:\n plot.add(BeakerxText(x= xs[i], y= ys[i], text= label(i), pointerAngle= -i/3.0))\n\nplot.add(Line(x= xs, y= ys))\nplot.add(Points(x= xs, y= ys))\n\nplot = Plot(title= \"Setting 2nd Axis bounds\")\nys = [0, 2, 4, 6, 15, 10]\nys2 = [-40, 50, 6, 4, 2, 0]\nys3 = [3, 6, 3, 6, 70, 6]\nplot.add(YAxis(label=\"Spread\"))\nplot.add(Line(y= ys))\nplot.add(Line(y= ys2, yAxis=\"Spread\"))\nplot.setXBound([-2, 10])\n#plot.setYBound(1, 5)\nplot.getYAxes()[0].setBound(1,5)\nplot.getYAxes()[1].setBound(3,6)\n\n\nplot\n\nplot = Plot(title= \"Setting 2nd Axis bounds\")\nys = [0, 2, 4, 6, 15, 10]\nys2 = [-40, 50, 6, 4, 2, 0]\nys3 = [3, 6, 3, 6, 70, 6]\nplot.add(YAxis(label=\"Spread\"))\nplot.add(Line(y= ys))\nplot.add(Line(y= ys2, yAxis=\"Spread\"))\nplot.setXBound([-2, 10])\nplot.setYBound(1, 5)\n\nplot", 
"TimePlot", "import time\n\nmillis = current_milli_time()\n\nhour = round(1000 * 60 * 60)\nxs = []\nys = []\nfor i in range(11):\n xs.append(millis + hour * i)\n ys.append(i)\n\nplot = TimePlot(timeZone=\"America/New_York\")\n# list of milliseconds\nplot.add(Points(x=xs, y=ys, size=10, displayName=\"milliseconds\"))\n\nplot = TimePlot()\nplot.add(Line(x=tableRows['time'], y=tableRows['m3']))", "numpy datatime64", "y = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])\ndates = [np.datetime64('2015-02-01'), \n np.datetime64('2015-02-02'), \n np.datetime64('2015-02-03'),\n np.datetime64('2015-02-04'),\n np.datetime64('2015-02-05'),\n np.datetime64('2015-02-06')]\nplot = TimePlot()\n\nplot.add(Line(x=dates, y=y))", "Timestamp", "y = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])\ndates = pd.Series(['2015-02-01',\n '2015-02-02',\n '2015-02-03',\n '2015-02-04',\n '2015-02-05',\n '2015-02-06']\n , dtype='datetime64[ns]')\nplot = TimePlot()\nplot.add(Line(x=dates, y=y))\n", "Datetime and date", "import datetime\n\ny = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])\ndates = [datetime.date(2015, 2, 1),\n datetime.date(2015, 2, 2),\n datetime.date(2015, 2, 3),\n datetime.date(2015, 2, 4),\n datetime.date(2015, 2, 5),\n datetime.date(2015, 2, 6)]\nplot = TimePlot()\nplot.add(Line(x=dates, y=y))\n\n\nimport datetime\n\ny = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])\ndates = [datetime.datetime(2015, 2, 1),\n datetime.datetime(2015, 2, 2),\n datetime.datetime(2015, 2, 3),\n datetime.datetime(2015, 2, 4),\n datetime.datetime(2015, 2, 5),\n datetime.datetime(2015, 2, 6)]\nplot = TimePlot()\nplot.add(Line(x=dates, y=y))", "NanoPlot", "millis = current_milli_time()\nnanos = millis * 1000 * 1000\nxs = []\nys = []\nfor i in range(11):\n xs.append(nanos + 7 * i)\n ys.append(i)\n\nnanoplot = NanoPlot()\nnanoplot.add(Points(x=xs, y=ys))", "Stacking", "y1 = [1,5,3,2,3]\ny2 = [7,2,4,1,3]\np = Plot(title='Plot with XYStacker', initHeight=200)\na1 = Area(y=y1, displayName='y1')\na2 = Area(y=y2, displayName='y2')\nstacker = XYStacker()\np.add(stacker.stack([a1, a2]))", "SimpleTime Plot", "SimpleTimePlot(tableRows, [\"y1\", \"y10\"], # column names\n timeColumn=\"time\", # time is default value for a timeColumn\n yLabel=\"Price\", \n displayNames=[\"1 Year\", \"10 Year\"],\n colors = [[216, 154, 54], Color.lightGray],\n displayLines=True, # no lines (true by default)\n displayPoints=False) # show points (false by default))\n\n#time column base on DataFrame index \ntableRows.index = tableRows['time']\n\nSimpleTimePlot(tableRows, ['m3'])\n\nrng = pd.date_range('1/1/2011', periods=72, freq='H')\nts = pd.Series(np.random.randn(len(rng)), index=rng)\ndf = pd.DataFrame(ts, columns=['y'])\nSimpleTimePlot(df, ['y'])\n", "Second Y Axis\nThe plot can have two y-axes. 
Just add a YAxis to the plot object, and specify its label.\nThen for data that should be scaled according to this second axis,\nspecify the property yAxis with a value that coincides with the label given.\nYou can use upperMargin and lowerMargin to restrict the range of the data leaving more white, perhaps for the data on the other axis.", "p = TimePlot(xLabel= \"Time\", yLabel= \"Interest Rates\")\np.add(YAxis(label= \"Spread\", upperMargin= 4))\np.add(Area(x= tableRows.time, y= tableRows.spread, displayName= \"Spread\",\n yAxis= \"Spread\", color= Color(180, 50, 50, 128)))\np.add(Line(x= tableRows.time, y= tableRows.m3, displayName= \"3 Month\"))\np.add(Line(x= tableRows.time, y= tableRows.y10, displayName= \"10 Year\"))", "Combined Plot", "import math\npoints = 100\nlogBase = 10\nexpys = []\nxs = []\nfor i in range(0, points):\n xs.append(i / 15.0)\n expys.append(math.exp(xs[i]))\n\n\ncplot = CombinedPlot(xLabel= \"Linear\")\nlogYPlot = Plot(title= \"Linear x, Log y\", yLabel= \"Log\", logY= True, yLogBase= logBase)\nlogYPlot.add(Line(x= xs, y= expys, displayName= \"f(x) = exp(x)\"))\nlogYPlot.add(Line(x= xs, y= xs, displayName= \"g(x) = x\"))\ncplot.add(logYPlot, 4)\n\nlinearYPlot = Plot(title= \"Linear x, Linear y\", yLabel= \"Linear\")\nlinearYPlot.add(Line(x= xs, y= expys, displayName= \"f(x) = exp(x)\"))\nlinearYPlot.add(Line(x= xs, y= xs, displayName= \"g(x) = x\"))\ncplot.add(linearYPlot,4)\n\ncplot\n\n\nplot = Plot(title= \"Log x, Log y\", xLabel= \"Log\", yLabel= \"Log\",\n logX= True, xLogBase= logBase, logY= True, yLogBase= logBase)\n\nplot.add(Line(x= xs, y= expys, displayName= \"f(x) = exp(x)\"))\nplot.add(Line(x= xs, y= xs, displayName= \"f(x) = x\"))\n\nplot" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
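Combining pieces that already appear in the BeakerX notebook above, the sketch below overlays the two interest-rate series from `tableRows` on one time plot, with the 10-year series on its own axis. It uses only constructs demonstrated in that notebook and assumes the BeakerX kernel and the `tableRows` dataframe are available.

```python
# Two series, two y-axes, reusing only constructs shown earlier in the notebook.
p = TimePlot(xLabel="Time", yLabel="1 Year")
p.add(YAxis(label="10 Year", upperMargin=1))
p.add(Line(x=tableRows.time, y=tableRows.y1, displayName="1 Year"))
p.add(Line(x=tableRows.time, y=tableRows.y10, displayName="10 Year", yAxis="10 Year"))
p
```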
park-python/course
lectures/01_Intro/notebook.ipynb
bsd-3-clause
[ "import sys\n\nprint(sys.version)\n\nimport random\n\nrandom.randint(1, 10)", "Пример программы на Python", "# Задача: найти 10 самых популярных Python-репозиториев на GitHub\n\n# Можно посмотреть стандартный модуль urllib — https://docs.python.org/3/library/urllib.html\nimport requests\n\n\nAPI_URL = 'https://api.github.com/search/repositories?q=language:python&sort=stars&order=desc'\n\n\ndef get_most_starred_github_repositories():\n response = requests.get(API_URL)\n \n if response.status_code == 200:\n return response.json()['items'][:10]\n\n return\n\n\nfor repo in get_most_starred_github_repositories():\n print(repo['name'])\n", "Переменные и базовые типы данных\nЧто такое переменная? Garbage Collector\nvariable = 1\nvariable = '2'", "%%html\n<style>\ntable {float:left}\n</style>\n\nvariable = 1\nvariable = '1'", "Именование переменных\nmy_variable = 'value'", "a, b = 0, 1\nprint(a, b)\n\na, b = b, a\nprint(a, b)", "Числа\n| Тип | Пример |\n| ---------------- | ---------|\n| Integer | 42 |\n| Integer (Hex) | 0xA |\n| Integer (Binary) | 0b110101 |\n| Float | 2.7182 |\n| Float | 1.4e3 |\n| Complex | 14+0j |\n| Underscore | 100_000 |", "year = 2017\npi = 3.1415\n\nprint(year)\nprint(year + 1)\n\namount = 100_000_000\n\ntype(pi)\n\nint('wat?')\n\nround(10.2), round(10.6)\n\ntype(10.)", "| Операция | Результат |\n| ----------- | ---------------- |\n| num + num2 | Сложение |\n| num - num2 | Вычитание |\n| num == num2 | Равенство |\n| num != num2 | Неравенство |\n| num >= num2 | Больше-равно |\n| num > num2 | Больше |\n| num * num2 | Умножение |\n| num / num2 | Деление |\n| num // num2 | Целочисленное деление |\n| num % num2 | Модуль |\n| num ** num2 | Степень |", "6 / 3\n\n6 // 4\n\n6 / 0\n\n(2 + 2) * 2", "Строки\n| Тип | Пример |\n| ----------- | ----------- |\n| Строка | 'hello' |\n| Строка | \"hello\" |\n| Строка | '''hello''' |\n| Raw string | r'hello' |\n| Byte string | b'hello' |", "s = '\"Python\" is the capital of Great Britain'\nprint(s)\n\ntype(s)\n\nprint(\"\\\"Python\\\" is the capital of Great Britain\")\n\nprint('hi \\n there')\nprint(r'hi \\n there')\n\ncourse_name = 'Курс Python Programming' # строки в Python 3 — Unicode\ncourse_name = \"Курс Python Programming\"\n\nprint(course_name)\n\nlong_string = \"Perl — это тот язык, который одинаково \" \\\n \"выглядит как до, так и после RSA шифрования.\" \\\n \"(Keith Bostic)\"\n \nlong_string = \"\"\"\n Обычно в таких кавычках\n пишут докстринги к функциям\n\"\"\"", "| Операция | Результат |\n| ------------| -------------- |\n| s + s2 | Сложение |\n| 'foo' in s2 | Вхождение |\n| s == s2 | Равенство |\n| s != s2 | Неравенство |\n| s >= s2 | Больше-равно |\n| s > s2 | Больше |\n| s * num | Умножение |\n| s[0] | Доступ по индексу |\n| len(s) | Длина |", "'one' + 'two'\n\n'one' * 10\n\ns1 = 'first'\nprint(id(s1))\n\ns1 += '\\n'\nprint(id(s1))\n\nprint('python'[10])\n\nlen('python') # O(1)\n\n'python'[:3]\n\n'python'[::-1]\n\n'p' in 'python'\n\n'python'.capitalize()\n\nbyte_string = b'python'\nprint(byte_string[0])", "Форматирование строк", "name = 'World'\n\nprint('Hello, {}{}'.format(name, '!'))\n\nprint('Hello, %s' % (name,))\n\nprint(f'Hello, {name}!')\n\ntag_list = 'park, mstu, 21.09'\nsplitted = tag_list.split(', ')\n\nprint(splitted)\n\n':'.join(splitted)\n\ninput_string = ' 79261234567 '\ninput_string.strip(' 7')\n\ndir(str)\n\nhelp(int)\n\nimport this # знать хотя бы первые 3 пункта", "Базовые конструкции\nУсловный оператор", "type(10 > 9)\n\n10 < 9\n\ntype(False)", "Boolean\nTrue / False (__bool__)\n| True | False |\n| 
----------------- | ----- |\n| True | False |\n| Большинство объектов | None |\n| 1 | 0 |\n| 3.2 | 0.0 |\n| 'string' | \"\" |", "a = 10\nb = 10\n\nprint(a == b)\nprint(a is b) # magic\n\nbool(0.0)\n\nbool('')\n\n13 < 12 < foo_call()\n\nimport random\n\n\ntemperature_tomorrow = random.randint(18, 27)\nif temperature_tomorrow >= 23:\n print('Срочно на пляж!')\n\nelse:\n print(':(')\n\ntemperature_tomorrow = random.randint(18, 27)\n\ndecision = 'пляж' if temperature_tomorrow >= 23 else 'дома посижу'\nprint(decision)\n\nanswer = input('The answer to life the universe and everything is: ')\nanswer = answer.strip().lower()\n\n\nif answer == '42':\n print('Точно!')\n\nelif (answer == 'сорок два') or (answer == 'forty two'):\n print('Тоже вариант!')\n\nelse:\n print('Нет')\n\nbool(None)\n\ntype(None)\n\na = None\nprint(a is None)", "Задача\nОпределить, является ли введеный год високосным. Год является високосным, если его номер кратен 4, но не кратен 100, а также если он кратен 400", "import calendar\n\n\ncalendar.isleap(1980)\n\nraw_year = input('Year: ')\nyear = int(raw_year)\n\nif year % 400 == 0:\n print('Високосный')\n \nelif year % 4 == 0 and not year % 100 == 0:\n print('Високосный')\n\nelse:\n print('Нет :(')", "Циклы", "for letter in 'python':\n print(letter)\n\ns = 'python'\nfor idx in range(10):\n print(idx)\n\nfor idx, letter in enumerate('python'):\n print(idx, letter)\n\n\nfor letter in 'Python, Ruby. Perl, PHP.':\n if letter == ',':\n continue\n\n elif letter == '.':\n break\n\n print(letter)\n", "for/while-else — знать можно, использовать лучше не стоит", "patience = 5\n\nwhile patience != 0:\n patience -= 1\n \n print(patience)", "Ошибки", "user_range = int(input('Введите максимальное число диапазона: '))\nfor num in range(user_range):\n print(num)", "FizzBuzz\nНапишите программу, которая выводит на экран числа от 1 до 100. При этом вместо чисел, кратных трем, программа должна выводить слово Fizz, а вместо чисел, кратных пяти — слово Buzz. Если число кратно пятнадцати, то программа должна выводить слово FizzBuzz.", "for number in range(1, 101):\n result = ''\n \n if number % 3 == 0:\n result += 'Fizz'\n \n if number % 5 == 0:\n result += 'Buzz'\n \n print(result or number)", "ProjectEuler\n1\nIf we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.\nFind the sum of all the multiples of 3 or 5 below 1000.", "# multiples_sum = 0\n# for number in range(1000):\n# if number % 3 == 0 or number % 5 == 0:\n# multiples_sum += number\n \n# print(multiples_sum)\n\nsum(\n num for num in range(1000)\n if num % 3 == 0 or num % 5 == 0\n)", "2\nEach new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be:\n1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...\nBy considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.", "a, b = 1, 1\nfib_sum = 0\nwhile b < 4_000_000:\n if b % 2 == 0:\n fib_sum += b\n \n a, b = b, a + b\n\nprint(fib_sum)", "4\nA palindromic number reads the same both ways. 
The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.\nFind the largest palindrome made from the product of two distinct 3-digit numbers.", "def is_palindrome(number):\n str_number = str(number)\n return str_number == str_number[::-1]\n\nis_palindrome(9009)\n\nmax_palindrome = 0\nfor a in range(999, 100, -1):\n for b in range(999, 100, -1):\n multiple = a * b\n if multiple > max_palindrome and is_palindrome(multiple):\n max_palindrome = multiple\n break\n\nprint(max_palindrome)", "Функции", "def add_numbers(x, y):\n return x + y\n\n\nadd_numbers(10, 5)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
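Tying the loop examples in the lecture above to its closing section on functions, the FizzBuzz exercise can be wrapped in a small generator; this is only a restatement of the loop already shown there.

```python
# FizzBuzz as a reusable generator instead of a bare loop.
def fizzbuzz(limit=100):
    for number in range(1, limit + 1):
        result = ''
        if number % 3 == 0:
            result += 'Fizz'
        if number % 5 == 0:
            result += 'Buzz'
        yield result or number


print(list(fizzbuzz(15)))
```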
pagutierrez/tutorial-sklearn
notebooks-spanish/20-clustering_jerarquico_y_basado_densidades.ipynb
cc0-1.0
[ "%matplotlib inline\nimport numpy as np\nfrom matplotlib import pyplot as plt", "Aprendizaje no supervisado: algoritmos de clustering jerárquicos y basados en densidades\nEn el cuaderno número 8, introdujimos uno de los algoritmos de agrupamiento más básicos y utilizados, el K-means. Una de las ventajas del K-means es que es extremadamente fácil de implementar y que es muy eficiente computacionalmente si lo comparamos a otros algoritmos de agrupamiento. Sin embargo, ya vimos que una de las debilidades de K-Means es que solo trabaja bien si los datos a agrupar se distribuyen en formas esféricas. Además, tenemos que decidir un número de grupos, k, a priori, lo que puede ser un problema si no tenemos conocimiento previo acerca de cuántos grupos esperamos obtener.\nEn este cuaderno, vamos a ver dos formas alternativas de hacer agrupamiento, agrupamiento jerárquico y agrupamiento basado en densidades. \nAgrupamiento jerárquico\nUna característica importante del agrupamiento jerárquico es que podemos visualizar los resultados como un dendograma, un diagrama de árbol. Utilizando la visualización, podemos entonces decidir el umbral de profundidad a partir del cual vamos a cortar el árbol para conseguir un agrupamiento. En otras palabras, no tenemos que decidir el número de grupos sin tener ninguna información. \nAgrupamiento aglomerativo y divisivo\nAdemás, podemos distinguir dos formas principales de clustering jerárquico: divisivo y aglomerativo. En el clustering aglomerativo, empezamos con un único patrón por clúster y vamos agrupando clusters (uniendo aquellos que están más cercanos), siguiendo una estrategia bottom-up para construir el dendograma. En el clustering divisivo, sin embargo, empezamos incluyendo todos los puntos en un único grupo y luego vamos dividiendo ese grupo en subgrupos más pequeños, siguiendo una estrategia top-down.\nNosotros nos centraremos en el clustering aglomerativo.\nEnlace simple y completo\nAhora, la pregunta es cómo vamos a medir la distancia entre ejemplo. Una forma habitual es usar la distancia Euclídea, que es lo que hace el algoritmo K-Means.\nSin embargo, el algoritmo jerárquico requiere medir la distancia entre grupos de puntos, es decir, saber la distancia entre un clúster (agrupación de puntos) y otro. Dos formas de hacer esto es usar el enlace simple y el enlace completo.\nEn el enlace simple, tomamos el par de puntos más similar (basándonos en distancia Euclídea, por ejemplo) de todos los puntos que pertenecen a los dos grupos. En el enlace competo, tomamos el par de puntos más lejano.\n\nPara ver como funciona el clustering aglomerativo, vamos a cargar el dataset Iris (pretendiendo que no conocemos las etiquetas reales y queremos encontrar las espacies):", "from sklearn import datasets\n\niris = datasets.load_iris()\nX = iris.data[:, [2, 3]]\ny = iris.target\nn_samples, n_features = X.shape\n\nplt.scatter(X[:, 0], X[:, 1], c=y);", "Ahora vamos haciendo una exploración basada en clustering, visualizando el dendograma utilizando las funciones linkage (que hace clustering jerárquico) y dendrogram (que dibuja el dendograma) de SciPy:", "from scipy.cluster.hierarchy import linkage\nfrom scipy.cluster.hierarchy import dendrogram\n\nclusters = linkage(X, \n metric='euclidean',\n method='complete')\n\ndendr = dendrogram(clusters)\n\nplt.ylabel('Distancia Euclídea');", "Alternativamente, podemos usar el AgglomerativeClustering de scikit-learn y dividr el dataset en 3 clases. 
¿Puedes adivinar qué tres clases encontraremos?", "from sklearn.cluster import AgglomerativeClustering\n\nac = AgglomerativeClustering(n_clusters=3,\n affinity='euclidean',\n linkage='complete')\n\nprediction = ac.fit_predict(X)\nprint('Etiquetas de clase: %s\\n' % prediction)\n\nplt.scatter(X[:, 0], X[:, 1], c=prediction);", "Clustering basado en densidades - DBSCAN\nOtra forma útil de agrupamiento es la conocida como Density-based Spatial Clustering of Applications with Noise (DBSCAN). En esencia, podríamos pensar que DBSCAN es un algoritmo que divide el dataset en subgrupos, buscando regiones densas de puntos.\nEn DBSCAN, hay tres tipos de puntos:\n\nPuntos núcleo: puntos que tienen un mínimo número de puntos (MinPts) contenidos en una hiperesfera de radio epsilon.\nPuntos fronterizos: puntos que no son puntos núcleo, ya que no tienen suficientes puntos en su vecindario, pero si que pertenecen al vecindario de radio epsilon de algún punto núcleo.\nPuntos de ruido: todos los puntos que no pertenecen a ninguna de las categorías anteriores.\n\n\nUna ventaja de DBSCAN es que no tenemos que especificar el número de clusters a priori. Sin embargo, requiere que establezcamos dos hiper-parámetros adicionales que son MinPts y el radio epsilon.", "from sklearn.datasets import make_moons\nX, y = make_moons(n_samples=400,\n noise=0.1,\n random_state=1)\nplt.scatter(X[:,0], X[:,1])\nplt.show()\n\nfrom sklearn.cluster import DBSCAN\n\ndb = DBSCAN(eps=0.2,\n min_samples=10,\n metric='euclidean')\nprediction = db.fit_predict(X)\n\nprint(\"Etiquetas predichas:\\n\", prediction)\n\nplt.scatter(X[:, 0], X[:, 1], c=prediction);", "<div class=\"alert alert-success\">\n <b>EJERCICIO</b>:\n <ul>\n <li>\n Usando el siguiente conjunto sintético, dos círculos concéntricos, experimenta los resultados obtenidos con los algoritmos de clustering que hemos considerado hasta el momento: `KMeans`, `AgglomerativeClustering` y `DBSCAN`.\n\n¿Qué algoritmo reproduce o descubre mejor la estructura oculta (suponiendo que no conocemos `y`)?\n\n¿Puedes razonar por qué este algoritmo funciona mientras que los otros dos fallan?\n </li>\n </ul>\n</div>", "from sklearn.datasets import make_circles\n\nX, y = make_circles(n_samples=1500, \n factor=.4, \n noise=.05)\n\nplt.scatter(X[:, 0], X[:, 1], c=y);" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
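For the closing exercise of the clustering notebook above (comparing the three algorithms on the concentric circles), a hedged sketch is given below. It assumes the `X` array produced by `make_circles` in the last cell, and the DBSCAN `eps` value is chosen by eye for this particular data.

```python
# Fit KMeans, agglomerative clustering and DBSCAN on the circles data and
# plot the resulting labels side by side.
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN

models = {
    'KMeans': KMeans(n_clusters=2, n_init=10),
    'Agglomerative (complete)': AgglomerativeClustering(n_clusters=2, linkage='complete'),
    'DBSCAN': DBSCAN(eps=0.15, min_samples=5),
}

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, (name, model) in zip(axes, models.items()):
    labels = model.fit_predict(X)
    ax.scatter(X[:, 0], X[:, 1], c=labels, s=10)
    ax.set_title(name)
```

Only the density-based model should follow the ring structure here; KMeans and complete-linkage agglomerative clustering cut across the rings, which is the intended takeaway of the exercise.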
tensorflow/docs-l10n
site/en-snapshot/hub/tutorials/object_detection.ipynb
apache-2.0
[ "Copyright 2018 The TensorFlow Hub Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");", "# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================", "Object Detection\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/hub/tutorials/object_detection\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/object_detection.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/hub/blob/master/examples/colab/object_detection.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/object_detection.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n <td>\n <a href=\"https://tfhub.dev/s?q=google%2Ffaster_rcnn%2Fopenimages_v4%2Finception_resnet_v2%2F1%20OR%20google%2Ffaster_rcnn%2Fopenimages_v4%2Finception_resnet_v2%2F1\"><img src=\"https://www.tensorflow.org/images/hub_logo_32px.png\" />See TF Hub models</a>\n </td>\n</table>\n\nThis Colab demonstrates use of a TF-Hub module trained to perform object detection.\nSetup", "#@title Imports and function definitions\n\n# For running inference on the TF-Hub module.\nimport tensorflow as tf\n\nimport tensorflow_hub as hub\n\n# For downloading the image.\nimport matplotlib.pyplot as plt\nimport tempfile\nfrom six.moves.urllib.request import urlopen\nfrom six import BytesIO\n\n# For drawing onto the image.\nimport numpy as np\nfrom PIL import Image\nfrom PIL import ImageColor\nfrom PIL import ImageDraw\nfrom PIL import ImageFont\nfrom PIL import ImageOps\n\n# For measuring the inference time.\nimport time\n\n# Print Tensorflow version\nprint(tf.__version__)\n\n# Check available GPU devices.\nprint(\"The following GPU devices are available: %s\" % tf.test.gpu_device_name())", "Example use\nHelper functions for downloading images and for visualization.\nVisualization code adapted from TF object detection API for the simplest required functionality.", "def display_image(image):\n fig = plt.figure(figsize=(20, 15))\n plt.grid(False)\n plt.imshow(image)\n\n\ndef download_and_resize_image(url, new_width=256, new_height=256,\n display=False):\n _, filename = tempfile.mkstemp(suffix=\".jpg\")\n response = urlopen(url)\n image_data = response.read()\n image_data = BytesIO(image_data)\n pil_image = Image.open(image_data)\n pil_image = ImageOps.fit(pil_image, (new_width, new_height), Image.ANTIALIAS)\n pil_image_rgb = pil_image.convert(\"RGB\")\n 
pil_image_rgb.save(filename, format=\"JPEG\", quality=90)\n print(\"Image downloaded to %s.\" % filename)\n if display:\n display_image(pil_image)\n return filename\n\n\ndef draw_bounding_box_on_image(image,\n ymin,\n xmin,\n ymax,\n xmax,\n color,\n font,\n thickness=4,\n display_str_list=()):\n \"\"\"Adds a bounding box to an image.\"\"\"\n draw = ImageDraw.Draw(image)\n im_width, im_height = image.size\n (left, right, top, bottom) = (xmin * im_width, xmax * im_width,\n ymin * im_height, ymax * im_height)\n draw.line([(left, top), (left, bottom), (right, bottom), (right, top),\n (left, top)],\n width=thickness,\n fill=color)\n\n # If the total height of the display strings added to the top of the bounding\n # box exceeds the top of the image, stack the strings below the bounding box\n # instead of above.\n display_str_heights = [font.getsize(ds)[1] for ds in display_str_list]\n # Each display_str has a top and bottom margin of 0.05x.\n total_display_str_height = (1 + 2 * 0.05) * sum(display_str_heights)\n\n if top > total_display_str_height:\n text_bottom = top\n else:\n text_bottom = top + total_display_str_height\n # Reverse list and print from bottom to top.\n for display_str in display_str_list[::-1]:\n text_width, text_height = font.getsize(display_str)\n margin = np.ceil(0.05 * text_height)\n draw.rectangle([(left, text_bottom - text_height - 2 * margin),\n (left + text_width, text_bottom)],\n fill=color)\n draw.text((left + margin, text_bottom - text_height - margin),\n display_str,\n fill=\"black\",\n font=font)\n text_bottom -= text_height - 2 * margin\n\n\ndef draw_boxes(image, boxes, class_names, scores, max_boxes=10, min_score=0.1):\n \"\"\"Overlay labeled boxes on an image with formatted scores and label names.\"\"\"\n colors = list(ImageColor.colormap.values())\n\n try:\n font = ImageFont.truetype(\"/usr/share/fonts/truetype/liberation/LiberationSansNarrow-Regular.ttf\",\n 25)\n except IOError:\n print(\"Font not found, using default font.\")\n font = ImageFont.load_default()\n\n for i in range(min(boxes.shape[0], max_boxes)):\n if scores[i] >= min_score:\n ymin, xmin, ymax, xmax = tuple(boxes[i])\n display_str = \"{}: {}%\".format(class_names[i].decode(\"ascii\"),\n int(100 * scores[i]))\n color = colors[hash(class_names[i]) % len(colors)]\n image_pil = Image.fromarray(np.uint8(image)).convert(\"RGB\")\n draw_bounding_box_on_image(\n image_pil,\n ymin,\n xmin,\n ymax,\n xmax,\n color,\n font,\n display_str_list=[display_str])\n np.copyto(image, np.array(image_pil))\n return image", "Apply module\nLoad a public image from Open Images v4, save locally, and display.", "# By Heiko Gorski, Source: https://commons.wikimedia.org/wiki/File:Naxos_Taverna.jpg\nimage_url = \"https://upload.wikimedia.org/wikipedia/commons/6/60/Naxos_Taverna.jpg\" #@param\ndownloaded_image_path = download_and_resize_image(image_url, 1280, 856, True)", "Pick an object detection module and apply on the downloaded image. 
Modules:\n* FasterRCNN+InceptionResNet V2: high accuracy,\n* ssd+mobilenet V2: small and fast.", "module_handle = \"https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1\" #@param [\"https://tfhub.dev/google/openimages_v4/ssd/mobilenet_v2/1\", \"https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1\"]\n\ndetector = hub.load(module_handle).signatures['default']\n\ndef load_img(path):\n img = tf.io.read_file(path)\n img = tf.image.decode_jpeg(img, channels=3)\n return img\n\ndef run_detector(detector, path):\n img = load_img(path)\n\n converted_img = tf.image.convert_image_dtype(img, tf.float32)[tf.newaxis, ...]\n start_time = time.time()\n result = detector(converted_img)\n end_time = time.time()\n\n result = {key:value.numpy() for key,value in result.items()}\n\n print(\"Found %d objects.\" % len(result[\"detection_scores\"]))\n print(\"Inference time: \", end_time-start_time)\n\n image_with_boxes = draw_boxes(\n img.numpy(), result[\"detection_boxes\"],\n result[\"detection_class_entities\"], result[\"detection_scores\"])\n\n display_image(image_with_boxes)\n\nrun_detector(detector, downloaded_image_path)", "More images\nPerform inference on some additional images with time tracking.", "image_urls = [\n # Source: https://commons.wikimedia.org/wiki/File:The_Coleoptera_of_the_British_islands_(Plate_125)_(8592917784).jpg\n \"https://upload.wikimedia.org/wikipedia/commons/1/1b/The_Coleoptera_of_the_British_islands_%28Plate_125%29_%288592917784%29.jpg\",\n # By Américo Toledano, Source: https://commons.wikimedia.org/wiki/File:Biblioteca_Maim%C3%B3nides,_Campus_Universitario_de_Rabanales_007.jpg\n \"https://upload.wikimedia.org/wikipedia/commons/thumb/0/0d/Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg/1024px-Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg\",\n # Source: https://commons.wikimedia.org/wiki/File:The_smaller_British_birds_(8053836633).jpg\n \"https://upload.wikimedia.org/wikipedia/commons/0/09/The_smaller_British_birds_%288053836633%29.jpg\",\n ]\n\ndef detect_img(image_url):\n start_time = time.time()\n image_path = download_and_resize_image(image_url, 640, 480)\n run_detector(detector, image_path)\n end_time = time.time()\n print(\"Inference time:\",end_time-start_time)\n\ndetect_img(image_urls[0])\n\ndetect_img(image_urls[1])\n\ndetect_img(image_urls[2])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
IBMDecisionOptimization/docplex-examples
examples/cp/jupyter/SteelMill.ipynb
apache-2.0
[ "Building steel coils\nThis tutorial includes everything you need to set up decision optimization engines, build constraint programming models.\nWhen you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.\n\nThis notebook is part of Prescriptive Analytics for Python\nIt requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account\nand you can start using IBM Cloud Pak for Data as a Service right away).\nCPLEX is available on <i>IBM Cloud Pack for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>:\n - <i>IBM Cloud Pak for Data as a Service</i>: Depends on the runtime used:\n - <i>Python 3.x</i> runtime: Community edition\n - <i>Python 3.x + DO</i> runtime: full edition\n - <i>Cloud Pack for Data</i>: Community edition is installed by default. Please install DO addon in Watson Studio Premium for the full edition\n\nTable of contents:\n\nDescribe the business problem\nHow decision optimization (prescriptive analytics) can help\nUse decision optimization\nStep 1: Download the library\nStep 2: Model the Data\nStep 3: Set up the prescriptive model\nDefine the decision variables\nExpress the business constraints\nExpress the objective\nSolve with Decision Optimization solve service\n\n\nStep 4: Investigate the solution and run an example analysis\n\n\nSummary\n\n\nDescribe the business problem\n\nThe problem is to build steel coils from slabs that are available in a work-in-process inventory of semi-finished products. There is no limitation in the number of slabs that can be requested, but only a finite number of slab sizes is available (sizes 11, 13, 16, 17, 19, 20, 23, 24, 25, 26, 27, 28, 29, 30, 33, 34, 40, 43, 45). \n\nThe problem is to select a number of slabs to build the coil orders, and to satisfy the following constraints:\n\nA coil order can be built from only one slab.\nEach coil order requires a specific process to build it from a slab. This process is encoded by a color.\nSeveral coil orders can be built from the same slab. But a slab can be used to produce at most two different \"colors\" of coils.\nThe sum of the sizes of each coil order built from a slab must not exceed the slab size.\n\n\n\nFinally, the production plan should minimize the unused capacity of the selected slabs.\n\n\nThis problem is based on \"prob038: Steel mill slab design problem\" from CSPLib (www.csplib.org). It is a simplification of an industrial problem described in J. R. Kalagnanam, M. W. Dawande, M. Trumbo, H. S. Lee. \"Inventory Matching Problems in the Steel Industry,\" IBM Research Report RC 21171, 1998.\n\n\nPlease refer to documentation for appropriate setup of solving configuration.\n\n\n\nHow decision optimization can help\n\n\nPrescriptive analytics technology recommends actions based on desired outcomes, taking into account specific scenarios, resources, and knowledge of past and current events. This insight can help your organization make better decisions and have greater control of business outcomes. \n\n\nPrescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes. \n\n\nPrescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. 
Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage.\n<br/>\n\n\nFor example:\n\nAutomate complex decisions and trade-offs to better manage limited resources.\nTake advantage of a future opportunity or mitigate a future risk.\nProactively update recommendations based on changing events.\nMeet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes.\n\n\n\nUse decision optimization\nStep 1: Download the library\nRun the following code to install Decision Optimization CPLEX Modeling library. The DOcplex library contains the two modeling packages, Mathematical Programming and Constraint Programming, referred to earlier.", "import sys\ntry:\n import docplex.cp\nexcept:\n if hasattr(sys, 'real_prefix'):\n #we are in a virtual env.\n !pip install docplex\n else:\n !pip install --user docplex", "Note that the more global package <i>docplex</i> contains another subpackage <i>docplex.mp</i> that is dedicated to Mathematical Programming, another branch of optimization.\nStep 2: Model the data", "from docplex.cp.model import *", "Set model parameter", "from collections import namedtuple\n\n##############################################################################\n# Model configuration\n##############################################################################\n\n# The number of coils to produce\nTUPLE_ORDER = namedtuple(\"TUPLE_ORDER\", [\"index\", \"weight\", \"color\"])\norders = [ TUPLE_ORDER(1, 22, 5),\n TUPLE_ORDER(2, 9, 3),\n TUPLE_ORDER(3, 9, 4),\n TUPLE_ORDER(4, 8, 5),\n TUPLE_ORDER(5, 8, 7),\n TUPLE_ORDER(6, 6, 3),\n TUPLE_ORDER(7, 5, 6),\n TUPLE_ORDER(8, 3, 0),\n TUPLE_ORDER(9, 3, 2),\n TUPLE_ORDER(10, 3, 3),\n TUPLE_ORDER(11, 2, 1),\n TUPLE_ORDER(12, 2, 5)\n ]\n\nNB_SLABS = 12\nMAX_COLOR_PER_SLAB = 2\n\n# The total number of slabs available. 
In theory this can be unlimited,\n# but we impose a reasonable upper bound in order to produce a practical\n# optimization model.\n\n# The different slab weights available.\nslab_weights = [ 0, 11, 13, 16, 17, 19, 20, 23, 24, 25,\n 26, 27, 28, 29, 30, 33, 34, 40, 43, 45 ]\n\nnb_orders = len(orders)\nslabs = range(NB_SLABS)\nallcolors = set([ o.color for o in orders ])\n\n# CPO needs lists for pack constraint\norder_weights = [ o.weight for o in orders ]\n\n# The heaviest slab\nmax_slab_weight = max(slab_weights)\n\n# The amount of loss incurred for different amounts of slab use\n# The loss will depend on how much less steel is used than the slab\n# just large enough to produce the coils.\nloss = [ min([sw-use for sw in slab_weights if sw >= use]) for use in range(max_slab_weight+1)]", "Step 3: Set up the prescriptive model\nCreate CPO model", "mdl = CpoModel(name=\"trucks\")", "Define the decision variables", "# Which slab is used to produce each coil\nproduction_slab = integer_var_dict(orders, 0, NB_SLABS-1, \"production_slab\")\n\n# How much of each slab is used\nslab_use = integer_var_list(NB_SLABS, 0, max_slab_weight, \"slab_use\")", "Express the business constraints", "# The total loss is\ntotal_loss = sum([element(slab_use[s], loss) for s in slabs])\n\n# The orders are allocated to the slabs with capacity\nmdl.add(pack(slab_use, [production_slab[o] for o in orders], order_weights))\n\n# At most MAX_COLOR_PER_SLAB colors per slab\nfor s in slabs:\n su = 0\n for c in allcolors:\n lo = False\n for o in orders:\n if o.color==c:\n lo = (production_slab[o] == s) | lo\n su += lo\n mdl.add(su <= MAX_COLOR_PER_SLAB)", "Express the objective", "# Add minimization objective\nmdl.add(minimize(total_loss))", "Solve the model", "print(\"\\nSolving model....\")\n# Search strategy\nmdl.set_search_phases([search_phase([production_slab[o] for o in orders])])\n\nmsol = mdl.solve(FailLimit=100000, TimeLimit=10)", "Step 4: Investigate the solution and then run an example analysis", "# Print solution\nif msol:\n print(\"Solution: \")\n from_slabs = [set([o.index for o in orders if msol[production_slab[o]]== s])for s in slabs]\n slab_colors = [set([o.color for o in orders if o.index in from_slabs[s]])for s in slabs]\n for s in slabs:\n if len(from_slabs[s]) > 0:\n print(\"Slab = \" + str(s))\n print(\"\\tLoss = \" + str(loss[msol[slab_use[s]]]))\n print(\"\\tcolors = \" + str(slab_colors[s]))\n print(\"\\tOrders = \" + str(from_slabs[s]) + \"\\n\")\nelse:\n print(\"No solution found\")", "Summary\nYou learned how to set up and use the IBM Decision Optimization CPLEX Modeling for Python to formulate and solve a Constraint Programming model.\nReferences\n\nCPLEX Modeling for Python documentation\nIBM Decision Optimization\nNeed help with DOcplex or to report a bug? Please go here\nContact us at dofeedback@wwpdl.vnet.ibm.com\n\nCopyright © 2017, 2021 IBM. IPLA licensed Sample Materials." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
d00d/quantNotebooks
Notebooks/quantopian_research_public/notebooks/lectures/Beta_Hedging/notebook.ipynb
unlicense
[ "Beta Hedging\nBy Evgenia \"Jenny\" Nitishinskaya and Delaney Granizo-Mackenzie with example algorithms by David Edwards\nPart of the Quantopian Lecture Series:\n\nwww.quantopian.com/lectures\ngithub.com/quantopian/research_public\n\nNotebook released under the Creative Commons Attribution 4.0 License.\n\nFactor Models\nFactor models are a way of explaining the returns of one asset via a linear combination of the returns of other assets. The general form of a factor model is\n$$Y = \\alpha + \\beta_1 X_1 + \\beta_2 X_2 + \\dots + \\beta_n X_n$$\nThis looks familiar, as it is exactly the model type that a linear regression fits. The $X$'s can also be indicators rather than assets. An example might be a analyst estimation.\nWhat is Beta?\nAn asset's beta to another asset is just the $\\beta$ from the above model. For instance, if we regressed TSLA against the S&P 500 using the model $Y_{TSLA} = \\alpha + \\beta X$, then TSLA's beta exposure to the S&P 500 would be that beta. If we used the model $Y_{TSLA} = \\alpha + \\beta X_{SPY} + \\beta X_{AAPL}$, then we now have two betas, one is TSLA's exposure to the S&P 500 and one is TSLA's exposure to AAPL.\nOften \"beta\" will refer to a stock's beta exposure to the S&P 500. We will use it to mean that unless otherwise specified.", "# Import libraries\nimport numpy as np\nfrom statsmodels import regression\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\nimport math\n\n# Get data for the specified period and stocks\nstart = '2014-01-01'\nend = '2015-01-01'\nasset = get_pricing('TSLA', fields='price', start_date=start, end_date=end)\nbenchmark = get_pricing('SPY', fields='price', start_date=start, end_date=end)\n\n# We have to take the percent changes to get to returns\n# Get rid of the first (0th) element because it is NAN\nr_a = asset.pct_change()[1:]\nr_b = benchmark.pct_change()[1:]\n\n# Let's plot them just for fun\nr_a.plot()\nr_b.plot()\nplt.ylabel(\"Daily Return\")\nplt.legend();", "Now we can perform the regression to find $\\alpha$ and $\\beta$:", "# Let's define everything in familiar regression terms\nX = r_b.values # Get just the values, ignore the timestamps\nY = r_a.values\n\ndef linreg(x,y):\n # We add a constant so that we can also fit an intercept (alpha) to the model\n # This just adds a column of 1s to our data\n x = sm.add_constant(x)\n model = regression.linear_model.OLS(y,x).fit()\n # Remove the constant now that we're done\n x = x[:, 1]\n return model.params[0], model.params[1]\n\nalpha, beta = linreg(X,Y)\nprint 'alpha: ' + str(alpha)\nprint 'beta: ' + str(beta)", "If we plot the line $\\alpha + \\beta X$, we can see that it does indeed look like the line of best fit:", "X2 = np.linspace(X.min(), X.max(), 100)\nY_hat = X2 * beta + alpha\n\nplt.scatter(X, Y, alpha=0.3) # Plot the raw data\nplt.xlabel(\"SPY Daily Return\")\nplt.ylabel(\"TSLA Daily Return\")\n\n # Add the regression line, colored in red\nplt.plot(X2, Y_hat, 'r', alpha=0.9);", "Risk Exposure\nMore generally, this beta gets at the concept of how much risk exposure you take on by holding an asset. If an asset has a high beta exposure to the S&P 500, then while it will do very well while the market is rising, it will do very poorly when the market falls. A high beta corresponds to high speculative risk. You are taking out a more volatile bet.\nAt Quantopian, we value stratgies that have negligible beta exposure to as many factors as possible. 
What this means is that all of the returns in a strategy lie in the $\\alpha$ portion of the model, and are independent of other factors. This is highly desirable, as it means that the strategy is agnostic to market conditions. It will make money equally well in a crash as it will during a bull market. These strategies are the most attractive to individuals with huge cash pools such as endowments and sovereign wealth funds.\nRisk Management\nThe process of reducing exposure to other factors is known as risk management. Hedging is one of the best ways to perform risk management in practice.\nHedging\nIf we determine that our portfolio's returns are dependent on the market via this relation\n$$Y_{portfolio} = \\alpha + \\beta X_{SPY}$$\nthen we can take out a short position in SPY to try to cancel out this risk. The amount we take out is $-\\beta V$ where $V$ is the total value of our portfolio. This works because if our returns are approximated by $\\alpha + \\beta X_{SPY}$, then adding a short in SPY will make our new returns be $\\alpha + \\beta X_{SPY} - \\beta X_{SPY} = \\alpha$. Our returns are now purely alpha, which is independent of SPY and will suffer no risk exposure to the market.\nMarket Neutral\nWhen a strategy exhibits a consistent beta of 0, we say that this strategy is market neutral.\nProblems with Estimation\nThe problem here is that the beta we estimated is not necessarily going to stay the same as we walk forward in time. As such, the amount of short we took out in the SPY may not perfectly hedge our portfolio, and in practice it is quite difficult to reduce beta by a significant amount.\nWe will talk more about problems with estimating parameters in future lectures. In short, each estimate has a standard error that corresponds with how stable the estimate is within the observed data.\nImplementing hedging\nNow that we know how much to hedge, let's see how it affects our returns. We will build our portfolio using the asset and the benchmark, weighting the benchmark by $-\\beta$ (negative since we are short in it).", "# Construct a portfolio with beta hedging\nportfolio = -1*beta*r_b + r_a\nportfolio.name = \"TSLA + Hedge\"\n\n# Plot the returns of the portfolio as well as the asset by itself\nportfolio.plot(alpha=0.9)\nr_b.plot(alpha=0.5);\nr_a.plot(alpha=0.5);\nplt.ylabel(\"Daily Return\")\nplt.legend();", "It looks like the portfolio return follows the asset alone fairly closely. We can quantify the difference in their performances by computing the mean returns and the volatilities (standard deviations of returns) for both:", "print \"means: \", portfolio.mean(), r_a.mean()\nprint \"volatilities: \", portfolio.std(), r_a.std()", "We've decreased volatility at the expense of some returns. Let's check that the alpha is the same as before, while the beta has been eliminated:", "P = portfolio.values\nalpha, beta = linreg(X,P)\nprint 'alpha: ' + str(alpha)\nprint 'beta: ' + str(beta)", "Note that we developed our hedging strategy using historical data. 
We can check that it is still valid out of sample by checking the alpha and beta values of the asset and the hedged portfolio in a different time frame:", "# Get the alpha and beta estimates over the last year\nstart = '2014-01-01'\nend = '2015-01-01'\nasset = get_pricing('TSLA', fields='price', start_date=start, end_date=end)\nbenchmark = get_pricing('SPY', fields='price', start_date=start, end_date=end)\nr_a = asset.pct_change()[1:]\nr_b = benchmark.pct_change()[1:]\nX = r_b.values\nY = r_a.values\nhistorical_alpha, historical_beta = linreg(X,Y)\nprint 'Asset Historical Estimate:'\nprint 'alpha: ' + str(historical_alpha)\nprint 'beta: ' + str(historical_beta)\n\n# Get data for a different time frame:\nstart = '2015-01-01'\nend = '2015-06-01'\nasset = get_pricing('TSLA', fields='price', start_date=start, end_date=end)\nbenchmark = get_pricing('SPY', fields='price', start_date=start, end_date=end)\n\n# Repeat the process from before to compute alpha and beta for the asset\nr_a = asset.pct_change()[1:]\nr_b = benchmark.pct_change()[1:]\nX = r_b.values\nY = r_a.values\nalpha, beta = linreg(X,Y)\nprint 'Asset Out of Sample Estimate:'\nprint 'alpha: ' + str(alpha)\nprint 'beta: ' + str(beta)\n\n# Create hedged portfolio and compute alpha and beta\nportfolio = -1*historical_beta*r_b + r_a\nP = portfolio.values\nalpha, beta = linreg(X,P)\nprint 'Portfolio Out of Sample:'\nprint 'alpha: ' + str(alpha)\nprint 'beta: ' + str(beta)\n\n\n# Plot the returns of the portfolio as well as the asset by itself\nportfolio.name = \"TSLA + Hedge\"\nportfolio.plot(alpha=0.9)\nr_a.plot(alpha=0.5);\nr_b.plot(alpha=0.5)\nplt.ylabel(\"Daily Return\")\nplt.legend();", "As we can see, the beta estimate changes a good deal when we look at the out of sample estimate. The beta that we computed over our historical data doesn't do a great job at reducing the beta of our portfolio, but does manage to reduce the magnitude by about 1/2.\nThe alpha/beta Tradeoff\nHedging against a benchmark such as the market will indeed reduce your returns while the market is not doing poorly. This is, however, completely fine. If your algorithm is less volatile, you will be able to take out leverage on your strategy and multiply your returns back up to their original amount. Even better, your returns will be far more stable than the original volatile beta exposed strategy.\nBy and large, even though high-beta strategies tend to be deceptively attractive due to their extremely good returns during periods of market growth, they fail in the long term as they will suffer extreme losses during a downturn.\nOther types of hedging\nAlthough we will not execute them here, there are strategies for hedging that may be better suited for other investment approaches.\nPairs Trading\nOne is pairs trading, in which a second asset is used in place of the benchmark here. This would allow you, for instance, to cancel out the volatility in an industry by being long in the stock of one company and short in the stock of another company in the same industry.\nwww.quantopian.com/lectures\nLong Short Equity\nIn this case we define a ranking over a group of $n$ equities, then long the top $p\\%$ and short the bottom $p\\%$ in equal dollar volume. This has the advantage of being implicitly, versus explicitly, hedged when $n$ is large. To see why this is the case, imagine buying a set of 100 securities randomly. The chance that the market exposure beta of these 100 is far from 1.0 is very low, as we have taken a large sample of the market. 
Similarly, when we rank by some independent metric and buy the top 100, the chance that we select securities whose overall beta is far from 1.0 is low. So in selecting 100 long and 100 short, the strategy beta should be very close to 1 - 1 = 0. Obviously some ranking systems will introduce a sample bias and break this assumption, for example ranking by the estimated beta of the equity.\nAnother advantage of long short equity strategies is that you are making a bet on the ranking, or in other words the differential in performance between the top and bottom ranked equities. This means that you don't have to even worry about the alpha/beta tradeoff encountered in hedging.\nThis presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. (\"Quantopian\"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
KECB/learn
machine_learning/Machine Learning Notebook.ipynb
mit
[ "Machine Learning Recipes with Jsh Gordon Note\nVideo list\nthis is a note for watching Machine Learning Recipes with Jsh Gordon\n1", "from sklearn import tree\n\nfeatures = [[140, 1], [130, 1], [150, 0], [170, 0]]\nlabels = [0, 0, 1, 1]\n\nclf = tree.DecisionTreeClassifier()\nclf = clf.fit(features, labels)\n\nprint(clf.predict([[120, 0]]))", "Import Concepts\n\nHow does this work in the real world?\nHow much training data do you need?\nHow is the tree created?\nWhat makes a good feature?\n\n2\nMany types of classifiers\n\nArtificial neural network\nSupport Vector Machine\nLions\nTigers\nBears\nOh my!\n\nGoals\n1. Import dataset", "from sklearn.datasets import load_iris\nimport numpy as np\niris = load_iris()\nprint(iris.feature_names)\nprint(iris.target_names)\nprint(iris.data[0])\nprint(iris.target[0])", "Testing Data\n\nExamples used to \"test\" the classifier's accuracy.\nNot part of the training data.\n\nJust like in programming, testing is a very important\npart of ML.", "test_idx = [0, 50, 100]\n\n# training data\ntrain_target = np.delete(iris.target, test_idx)\ntrain_data = np.delete(iris.data, test_idx, axis=0)\nprint(train_target.shape)\nprint(train_data.shape)\n\n# testing data\ntest_target = iris.target[test_idx]\ntest_data = iris.data[test_idx]\nprint(test_target.shape)\nprint(test_data.shape)", "2. Train a classifier", "clf = tree.DecisionTreeClassifier()\nclf.fit(train_data, train_target)", "3. Predict label for new flower.", "print(test_target)\nprint(clf.predict(test_data))", "4. Visualize the tree.", "# viz code\nfrom sklearn.externals.six import StringIO\nimport pydotplus\ndot_data = StringIO()\ntree.export_graphviz(clf, out_file=dot_data,\n feature_names=iris.feature_names,\n class_names=iris.target_names,\n filled=True, rounded = True,\n impurity=False)\ngraph = pydotplus.graph_from_dot_data(dot_data.getvalue())\ngraph.write_pdf('iris.pdf')", "More to learn\n\nHow are trees built automatically from examples?\nHow well do they work in parctice?\n\n3 What Makes a Good Feature?", "import numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt \n\ngreyhounds = 500\nlabs = 500\n\ngrey_height = 28 + 4 * np.random.randn(greyhounds)\nlab_height = 24 + 4 * np.random.randn(labs)\n\nplt.hist([grey_height, lab_height], stacked=True, color=['r', 'b'])\nplt.show()", "Analysis\n35 肯定是 greyhounds\n20左右是 lab的几率最大\n但是很难判断在25左右的时候是谁. 所以这个 Feature 是好的, 但不是充分的.\n所以问题是: 我们需要多少 Feature?\n注意事项\n\nAvoid redundant features: 例如 用英尺做单位的高度, 用厘米做单位的高度\nFeatures should be easy to understand: \n 例如 预测邮件发送时间, 使用距离和发送所用天数 而不选择使用经纬度坐标. SImpler relationships are easier to learn\n\nIdeal features are\n\nInformative\nIndependent\nSimple\n\n4. Lets Write a Pipeline", "from sklearn import datasets\niris = datasets.load_iris()\n\nX = iris.data # input: features\ny = iris.target # output: label\n\nfrom sklearn.cross_validation import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size= .5)\n\n# from sklearn import tree\n# my_classifier = tree.DecisionTreeClassifier()\n\nfrom sklearn.neighbors import KNeighborsClassifier\nmy_classifier = KNeighborsClassifier()\n\nmy_classifier.fit(X_train, y_train)\n\npredictions = my_classifier.predict(X_test)\n\nfrom sklearn.metrics import accuracy_score\nprint(accuracy_score(y_test, predictions))\n", "what is X, y?\nX: features\ny: labels\n``` python\n def classify(features):\n # do some logic\n return label\n```\n5. 
Write Our First Classifier", "from scipy.spatial import distance\n\ndef euc(a, b):\n return distance.euclidean(a, b)\n \nclass ScrappyKNN():\n def fit(self, X_train, y_train):\n self.X_train = X_train\n self.y_train = y_train\n \n def predict(self, X_test):\n predictions = []\n for row in X_test:\n label = self.closest(row)\n predictions.append(label)\n return predictions\n \n def closest(self, row):\n best_dist = euc(row, self.X_train[0])\n best_index = 0\n for i in range(1, len(self.X_train)):\n dist = euc(row, self.X_train[i])\n if dist < best_dist:\n best_dist = dist\n best_index = i\n return self.y_train[best_index]\n\nfrom sklearn import datasets\niris = datasets.load_iris()\n\nX = iris.data # input: features\ny = iris.target # output: label\n\nfrom sklearn.cross_validation import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size= .5)\n\nmy_classifier = ScrappyKNN()\n\nmy_classifier.fit(X_train, y_train)\n\npredictions = my_classifier.predict(X_test)\n\nfrom sklearn.metrics import accuracy_score\nprint(accuracy_score(y_test, predictions))", "6. Train an Image Classifier with TensorFlow for Poets\n7. Classifying Handwritten Digits with TF.Learn\n8. Let's Write a Decision Tree Classifier from Scratch" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dlsun/symbulate
tutorial/gs_rv.ipynb
mit
[ "Getting Started with Symbulate\nSection 2. Random Variables\n<a id='contents'></a>\n<Probability Spaces | Contents | Multiple random variables and joint distributions>\nEvery time you start Symbulate, you must first run (SHIFT-ENTER) the following commands.", "from symbulate import *\n%matplotlib inline", "This section provides an introduction to the Symbulate commands for simulating and summarizing values of a random variable.\n<a id='counting_numb_heads'></a>\nExample 2.1: Counting the number of Heads in a sequence of coin flips\nIn Example 1.7 we simulated the value of the number of Heads in a sequence of five coin flips. In that example, we simulated the individual coin flips (with 1 representing Heads and 0 Tails) and then used .apply() with the sum function to count the number of Heads. The following Symbulate commands achieve the same goal by defining an RV, X, which measures the number of Heads for each outcome.", "P = BoxModel([1, 0], size=5)\nX = RV(P, sum)\nX.sim(10000)", "The number of Heads in five coin flips is a random variable: a function that takes as an input an outcome of a probability space and returns a real number. The first argument of RV is the probability space on which the RV is defined, e.g., sequences of five 1/0s. The second argument is the function which maps outcomes in the probability space to real numbers, e.g., the sum of the 1/0 values. Values of an RV can be simulated with .sim().\n<a id='sum_of_two_dice'></a>\nExercise 2.2: Sum of two dice\nAfter defining an appropriate BoxModel probability space, define an RV X representing the sum of two six-sided fair dice, and simulate 10000 values of X.", "### Type your commands in this cell and then run using SHIFT-ENTER.", "Solution\n<a id='dist_of_five_flips'></a>\nExample 2.3: Summarizing simulation results with tables and plots\nIn Example 2.1 we defined a RV, X, the number of Heads in a sequence of five coin flips. Simulated values of a random variable can be summarized using .tabulate() (with normalize=False (default) for frequencies (counts) or True for relative frequencies (proportions)).", "P = BoxModel([1, 0], size=5)\nX = RV(P, sum)\nsims = X.sim(10000)\nsims.tabulate()", "The table above can be used to approximate the distribution of the number of Heads in five coin flips. The distribution of a random variable specifies the possible values that the random variable can take and their relative likelihoods. The distribution of a random variable can be visualized using .plot().", "sims.plot()", "By default, .plot() displays relative frequencies (proportions). Use .plot(normalize=False) to display frequencies (counts).\n<a id='dist_of_sum_of_two_dice'></a>\nExercise 2.4: The distribution of the sum of two dice rolls\nContinuing Exercise 2.2 summarize with a table and a plot the distribution of the sum of two rolls of a fair six-sided die.", "### Type your commands in this cell and then run using SHIFT-ENTER.", "Solution\n<a id='prob_of_three_heads'></a>\nExample 2.5: Estimating probabilities from simulations\nThere are several other tools for summarizing simulations, like the count functions. 
For example, the following commands approximate P(X &lt;= 3) for Example 2.1, the probability that in five coin flips at most three of the flips land on Heads.", "P = BoxModel([1, 0], size=5)\nX = RV(P, sum)\nsims = X.sim(10000)\nsims.count_leq(3)/10000", "<a id='prob_of_10_two_dice'></a>\nExercise 2.6: Estimating probabilities for the sum of two dice rolls\nContinuing Exercise 2.2, estimate P(X &gt;= 10), the probability that the sum of two fair six-sided dice is at least 10.", "### Type your commands in this cell and then run using SHIFT-ENTER.", "Solution\n<a id='sim_from_binom'></a>\nExample 2.7: Specifying a RV by its distribution\nThe plot in Example 2.3 displays the approximate distribution of the random variable X, the number of Heads in five flips of a fair coin. This distribution is called the Binomial distribution with n=5 trials (flips) and a probability that each trial (flip) results in success (1 i.e. Heads) equal to p=0.5.\nIn the above examples the RV X was explicitly defined on the probability space P - i.e. the BoxModel for the outcomes (1 or 0) of the five individual flips - via the sum function. This setup implied a Binomial(5, 0.5) distribution for X.\nIn many situations the distribution of an RV is assumed or specified directly, without mention of the underlying probabilty space or the function defining the random variable. For example, a problem might state \"let Y have a Binomial distribution with n=5 and p=0.5\". The RV command can also be used to define a random variable by specifying its distribution, as in the following.", "Y = RV(Binomial(5, 0.5))\nY.sim(10000).plot()", "By definition, a random variable must always be a function defined on a probability space. Specifying a random variable by specifying its distribution, as in Y = RV(Binomial(5, 0.5)), has the effect of defining the probability space to be the distribution of the random variable and the function defined on this space to be the identity (f(x) = x). However, it is more appropriate to think of such a specification as defining a random variable with the given distribution on an unspecified probability space through an unspecified function.\nFor example, the random variable $X$ in each of the following situations has a Binomial(5, 0.5) distribution.\n- $X$ is the number of Heads in five flips of a fair coin\n- $X$ is the number of Tails in five flips of a fair coin\n- $X$ is the number of even numbers rolled in five rolls of a fair six-sided die\n- $X$ is the number of boys in a random sample of five births\nEach of these situations involves a different probability space (coins, dice, births) with a random variable which counts according to different criteria (Heads, Tails, evens, boys). These examples illustrate that knowledge that a random variable has a specific distribution (e.g. Binomial(5, 0.5)) does not necessarily convey any information about the underlying observational units or variable being measured. This is why we say a specification like X = RV(Binomial(5, 0.5)) defines a random variable X on an unspecified probability space via an unspecified function.\nThe following code compares the two methods for definiting of a random variable with a Binomial(5, 0.5) distribution. 
(The jitter=True option offsets the vertical lines so they do not coincide.)", "P = BoxModel([1, 0], size=5)\nX = RV(P, sum)\nX.sim(10000).plot(jitter=True)\n\nY = RV(Binomial(5, 0.5))\nY.sim(10000).plot(jitter=True)", "In addition to Binomial, many other commonly used distributions are built in to Symbulate.\n<a id='discrete_unif_dice'></a>\nExercise 2.8: Simulating from a discrete Uniform model\nA random variable has a DiscreteUniform distribution with parameters a and b if it is equally likely to to be any of the integers between a and b (inclusive). Let X be the roll of a fair six-sided die. Define an RV X by specifying an appropriate DiscreteUniform distribution, then simulate 10000 values of X and summarize its approximate distribution in a plot.", "### Type your commands in this cell and then run using SHIFT-ENTER.", "Solution\n<a id='numb_tails'></a>\nExample 2.9: Random variables versus distributions\nContinuing Example 2.1, if X is the random variable representing number of Heads in five coin flips then Y = 5 - X is random variable representing the number of Tails.", "P = BoxModel([1, 0], size=5)\nX = RV(P, sum)\nY = 5 - X\nY.sim(10000).tabulate()", "It is important not to confuse a random variable with its distribution. Note that X and Y are two different random variables; they measure different things. For example, if the outcome of the flips is (1, 0, 0, 1, 0) then X = 2 but Y = 3. The following code illustrates how an RV can be called as a function to return its value for a particular outcome in the probability space.", "outcome = (1, 0, 0, 1, 0)\nX(outcome)\n\nY(outcome)", "In fact, in this example the values of X and Y are unequal for every outcome in the probability space . However, while X and Y are two different random variables, they do have the same distribution over many outcomes.", "X.sim(10000).plot(jitter=True)\nY.sim(10000).plot(jitter=True)", "See Example 2.7 for further comments about the difference between random variables and distributions.\n<a id='expected_value_numb_of_heads'></a>\nExample 2.10: Expected value of the number of heads in five coin flips\nThe expected value, or probability-weighted average value, of an RV can be approximated by simulating many values of the random variable and finding the sample mean (i.e. average) using .mean(). Continuing Example 2.1, the following code estimates the expected value of the number of Heads in five coin flips.", "P = BoxModel([1, 0], size=5)\nX = RV(P, sum)\nX.sim(10000).mean()", "Over many sets of five coin flips, we expect that there will be on average about 2.5 Heads per set. Note that 2.5 is not the number of Heads we would expect in a single set of five coin flips.\n<a id='expected_value_sum_of_dice'></a>\nExercise 2.11: Expected value of the sum of two dice rolls\nContinuing Exercise 2.2, approximate the expected value of the sum of two six-sided dice rolls. (Bonus: interpret the value as an appropriate long run average.)", "### Type your commands in this cell and then run using SHIFT-ENTER.", "Solution\n<a id='sd_numb_of_heads'></a>\nExample 2.12: Standard deviation of the number of Heads in five coin flips\nThe expected value of an RV is its long run average, while the standard deviation of an RV measures the average degree to which individual values of the RV vary from the expected value. The standard deviation of an RV can be approximated from simulated values with .sd(). 
Continuing Example 2.1, the following code estimates the standard deviation of the number of Heads in five coin flips.", "P = BoxModel([1, 0], size=5)\nX = RV(P, sum)\nsims = X.sim(10000)\nsims.sd()", "Inspecting the plot in Example 2.3 we see there are many simulated values of 2 and 3, which are 0.5 units away from the expected value of 2.5. There are relatively fewer values of 0 and 5 which are 2.5 units away from the expected value of 2.5. Roughly, the simulated values are on average 1.1 units away from the expected value.\nVariance is the square of the standard deviation and can be approximated with .var().", "sims.var()", "<a id='sd_sum_of_dice'></a>\nExercise 2.13: Standard deviation of the sum of two dice rolls\nContinuing Exercise 2.2, approximate the standard deviation of the sum of two six-sided dice rolls. (Bonus: interpret the value.)", "### Type your commands in this cell and then run using SHIFT-ENTER.", "Solution\n<a id='dist_of_normal'></a>\nExample 2.14: Continuous random variables\nThe RVs we have seen so far have been discrete. A discrete random variable can take at most countably many distinct values. For example, the number of Heads in five coin flips can only take values 0, 1, 2, 3, 4, 5.\nA continuous random variable can take any value in some interval of real numbers. For example, if X represents the height of a randomly selected U.S. adult male then X is a continuous random variable. Many continuous random variables are assumed to have a Normal distribution. The following simulates values of the RV X assuming it has a Normal distribution with mean 69.1 inches and standard deviation 2.9 inches.", "X = RV(Normal(mean=69.1, sd=2.9))\nsims = X.sim(10000)", "The same simulation tools are available for both discrete and continuous RVs. Calling .plot() for a continuous RV produces a histogram which displays frequencies of simulated values falling in interval \"bins\".", "sims.plot()", "The number of bins can be set using the bins= option in .plot()", "X.sim(10000).plot(bins=60)", "It is not recommended to use .tabulate() with continuous RVs as almost all simulated values will only occur once.\n<a id='sim_unif'></a>\nExercise 2.15: Simulating from a (continuous) uniform distribution\nThe continuous analog of a BoxModel is a Uniform distribution which produces \"equally likely\" values in an interval with endpoints a and b. (What would you expect the plot of such a distribution to look like?)\nLet X be a random variable which has a Uniform distribution on the interval [0, 1]. Define an appropriate RV and use simulation to display its approximate distribution. (Note that the underlying probability space is unspecified.)", "### Type your commands in this cell and then run using SHIFT-ENTER.", "Solution\n<a id='sqrt_ex'></a>\nExample 2.16: Transformations of random variables\nIn Example 2.9 we defined a new random variable Y = 5 - X (the number of Tails) by transforming the RV X (the number of Heads). A transformation of an RV is also an RV. If X is an RV, define a new random variable Y = g(X) using X.apply(g). The resulting Y behaves like any other RV.\nNote that for arithmetic operations and many common math functions (such as exp, log, sin) you can simply call g(X) rather than X.apply(g).\nContinuing Example 2.1, let $X$ represent the number of Heads in five coin flips and define the random variable $Y = \\sqrt{X}$. 
The plot below approximates the distribution of $Y$; note that the possible values of $Y$ are 0, 1, $\\sqrt{2}$, $\\sqrt{3}$, 2, and $\\sqrt{5}$.", "P = BoxModel([1, 0], size=5)\nX = RV(P, sum)\nY = X.apply(sqrt)\nY.sim(10000).plot()", "The following code uses a g(X) definition rather than X.apply(g).", "P = BoxModel([1, 0], size=5)\nX = RV(P, sum)\nY = sqrt(X)\nY.sim(10000).plot()", "<a id='dif_normal'></a>\nExercise 2.17 Function of a RV that has a Uniform distribution\nIn Example 2.15 we encountered uniform distributions. Let $U$ be a random variable which has a Uniform distribution on the interval [0, 1]. Use simulation to display the approximate distribution of the random variable $Y = -\\log(U)$.", "### Type your commands in this cell and then run using SHIFT-ENTER.", "Solution\n<a id='Numb_distinct'></a>\nExample 2.18: Number of switches between Heads and Tails in coin flips\nRVs can be defined or transformed through user defined functions. As an example, let Y be the number of times a sequence of five coin flips switches between Heads and Tails (not counting the first toss). For example, for the outcome (0, 1, 0, 0, 1), a switch occurs on the second third, and fifth flip so Y = 3. We define the random variable Y by first defining a function that takes as an input a list of values and returns as an output the number of times a switch from the previous value occurs in the sequence. (Defining functions is one area where some familiarity with Python is helpful.)", "def number_switches(x):\n count = 0\n for i in list(range(1, len(x))):\n if x[i] != x[i-1]:\n count += 1\n return count\n\nnumber_switches((1, 1, 1, 0, 0, 1, 0, 1, 1, 1))", "Now we can use the number_switches function to define the RV Y on the probability space corresponding to five flips of a fair coin.", "P = BoxModel([1, 0], size=5)\nY = RV(P, number_switches)\n\noutcome = (0, 1, 0, 0, 1)\nY(outcome)", "An RV defined or transformed through a user-defined function behaves like any other RV.", "Y.sim(10000).plot()", "<a id='Numb_alterations'></a>\nExercise 2.19: Number of distinct faces rolled in 6 rolls\nLet X count the number of distinct faces rolled in 6 rolls of a fair six-sided die. For example, if the result of the rolls is (3, 3, 3, 3, 3, 3) then X = 1; if (6, 4, 5, 4, 6, 6) then X=3; etc. Use the number_distinct_values function defined below to define the RV X on an appropriate probability space. Then simulate values of X and plot its approximate distribution. (The number_distinct_values function takes as an input a list of values and returns as an output the number of distinct values in the list. 
We have used the Python functions set and len.)", "def number_distinct_values(x):\n return len(set(x))\n\nnumber_distinct_values((1, 1, 4))\n\n### Type your commands in this cell and then run using SHIFT-ENTER.", "Solution\nAdditional Exercises\n<a id='ev_max_of_dice'></a>\nExercise 2.20: Max of two dice rolls\n1) Approximate the distribution of the max of two six-sided dice rolls.", "### Type your commands in this cell and then run using SHIFT-ENTER.", "2) Approximate the probability that the max of two six-sided dice rolls is greater than or equal to 5.", "### Type your commands in this cell and then run using SHIFT-ENTER.", "3) Approximate the mean and standard deviation of the max of two six-sided dice rolls.", "### Type your commands in this cell and then run using SHIFT-ENTER.\n\n### Type your commands in this cell and then run using SHIFT-ENTER.", "Hint\nSolution \n<a id='var_transformed_unif'></a>\nExercise 2.21: Transforming a random variable\nLet $X$ have a Uniform distribution on the interval [0, 3] and let $Y = 2\\cos(X)$.\n1) Approximate the distribution of $Y$.", "### Type your commands in this cell and then run using SHIFT-ENTER.", "2) Approximate the probability that the $Y$ is less than 1.", "### Type your commands in this cell and then run using SHIFT-ENTER.", "3) Approximate the mean and standard deviation of $Y$.", "### Type your commands in this cell and then run using SHIFT-ENTER.\n\n### Type your commands in this cell and then run using SHIFT-ENTER.", "Hint\nSolution \n<a id='log_normal'></a>\nExercise 2.22: Function of a random variable.\nLet $X$ be a random variable which has a Normal(0,1) distribution. Let $Y = e^X$. \n1) Use simulation to display the approximate distribution of $Y$.", "### Type your commands in this cell and then run using SHIFT-ENTER.", "2) Approximate the probability that the $Y$ is greater than 2.", "### Type your commands in this cell and then run using SHIFT-ENTER.", "3) Approximate the mean and standard deviation of $Y$.", "### Type your commands in this cell and then run using SHIFT-ENTER.\n\n### Type your commands in this cell and then run using SHIFT-ENTER.", "Hint\nSolution \n<a id='hints'></a>\nHints for Additional Exercises\n<a id='hint_ev_max_of_dice'></a>\nExercise 2.20: Hint\nIn Exercise 2.2 we simulated the sum of two six-sided dice rolls. Define an RV using the max function to return the larger of the two rolls. In Example 2.5 we estimated the probability of a random variable taking a value. In Example 2.10 we applied the .mean() funtion to return the long run expected average. In Example 2.12 we estimated the standard deviation.\nBack\n<a id='hint_var_transformed_unif'></a>\nExercise 2.21: Hint\nExample 2.9 introduces transformations. In Exercise 2.15 we simulated an RV that had a Uniform distribution. In Example 2.5 we estimated the probabilities for a RV. In Example 2.10 we applied the .mean() funtion to return the long run expected average. In Example 2.12 we estimated the standard deviation.\nBack\n<a id='hint_log_normal'></a>\nExercise 2.22: Hint\nIn Example 2.14 we simulated an RV with a Normal distribution. In Example 2.9 we defined a random variable as a function of another random variable. In Example 2.5 we estimated the probability of a random variable taking a value. In Example 2.10 we applied the .mean() funtion to return the long run expected average. 
In Example 2.12 we estimated the standard deviation.\nBack\nSolutions to Exercises\n<a id='sol_sum_of_two_dice'></a>\nExercise 2.2: Solution", "P = BoxModel([1, 2, 3, 4, 5, 6], size=2)\nX = RV(P, sum)\nX.sim(10000)", "Back\n<a id='sol_dist_of_sum_of_two_dice'></a>\nExercise 2.4: Solution", "P = BoxModel([1, 2, 3, 4, 5, 6], size=2)\nX = RV(P, sum)\nsims = X.sim(10000)\nsims.tabulate(normalize=True)\n\nsims.plot()", "Back\n<a id='sol_prob_of_10_two_dice'></a>\nExercise 2.6: Solution", "P = BoxModel([1, 2, 3, 4, 5, 6], size=2)\nX = RV(P, sum)\nsims = X.sim(10000)\nsims.count_geq(10) / 10000", "Back\n<a id='sol_expected_discrete_unif_dice'></a>\nExercise 2.8: Solution", "X = RV(DiscreteUniform(a=1, b=6))\nX.sim(10000).plot(normalize=True)", "Back\n<a id='sol_expected_value_sum_of_dice'></a>\nExercise 2.11: Solution", "P = BoxModel([1, 2, 3, 4, 5, 6], size=2)\nX = RV(P, sum)\nX.sim(10000).mean()", "Over many pairs of rolls of fair six-sided dice, we expect that on average the sum of the two rolls will be about 7.\nBack\n<a id='sol_sd_sum_of_dice'></a>\nExercise 2.13: Solution", "P = BoxModel([1, 2, 3, 4, 5, 6], size=2)\nX = RV(P, sum)\nX.sim(10000).sd()", "Over many pairs of rolls of fair six-sided dice, the values of the sum are on average roughly 2.4 units away from the expected value of 7.\nBack\n<a id='sol_sim_unif'></a>\nExercise 2.15: Solution", "X = RV(Uniform(a=0, b=1))\nX.sim(10000).plot()", "Back\n<a id='sol_dif_normal'></a>\nExercise 2.17: Solution", "U = RV(Uniform(a=0, b=1))\nY = -log(U)\nY.sim(10000).plot()", "Note that the RV has an Exponential(1) distribution.\nBack\n<a id='sol_Numb_alterations'></a>\nExercise 2.19: Solution", "def number_distinct_values(x):\n return len(set(x))\n\nP = BoxModel([1,2,3,4,5,6], size=6)\nX = RV(P, number_distinct_values)\nX.sim(10000).plot()", "Back\n<a id='sol_ev_max_of_dice'></a>\nExercise 2.20: Solution\n1) Approximate the distribution of the max of two six-sided dice rolls.", "P = BoxModel([1, 2, 3, 4, 5, 6], size=2)\nX = RV(P, max)\nsims = X.sim(10000)\nsims.plot()", "2) Approximate the probability that the max of two six-sided dice rolls is greater than or equal to 5.", "sims.count_geq(5)/10000", "3) Approximate the mean and standard deviation of the max of two six-sided dice rolls.", "sims.mean()\n\nsims.sd()", "Back\n<a id='sol_var_transformed_unif'></a>\nExercise 2.21: Solution\n1) Approximate the distribution of $Y$.", "X = RV(Uniform(0, 3))\nY = 2 * cos(X)\nsims = Y.sim(10000)\nsims.plot()", "Alternatively,", "X = RV(Uniform(0, 3))\nY = 2 * X.apply(cos)\nsims = Y.sim(10000)\nsims.plot()", "2) Approximate the probability that the Y is less than 2.", "sims.count_lt(1)/10000", "3) Approximate the mean and standard deviation of Y.", "sims.mean()\n\nsims.sd()", "Back\n<a id='sol_log_normal'></a>\nExercise 2.22: Solution\n1) Use simulation to display the approximate distribution of Y.", "X = RV(Normal(0, 1))\nY = exp(X)\nsims = Y.sim(10000)\nsims.plot()", "2) Approximate the probability that the Y is greater than 2.", "sims.count_gt(2)/10000", "3) Approximate the mean and standard deviation of Y.", "sims.mean()\n\nsims.sd()", "Back\n<Probability Spaces | Contents | Multiple random variables and joint distributions>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
superbobry/pymc3
pymc3/examples/GLM-logistic.ipynb
apache-2.0
[ "Bayesian Logistic Regression with PyMC3\n\n\nThis is a reproduction with a few slight alterations of Bayesian Log Reg by J. Benjamin Cook\n\n\nAuthor: Peadar Coyle and J. Benjamin Cook\n\nHow likely am I to make more than $50,000 US Dollars?\nExploration of model selection techniques too - I use DIC and WAIC to select the best model. \nThe convenience functions are all taken from Jon Sedars work.\nThis example also has some explorations of the features so serves as a good example of Exploratory Data Analysis and how that can guide the model creation/ model selection process.", "%matplotlib inline\nimport pandas as pd\nimport numpy as np\nimport pymc3 as pm\nimport matplotlib.pyplot as plt\nimport seaborn\nimport warnings\nwarnings.filterwarnings('ignore')\nfrom collections import OrderedDict\nfrom time import time\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nfrom scipy.optimize import fmin_powell\nfrom scipy import integrate\n\nimport theano as thno\nimport theano.tensor as T \n\n\ndef run_models(df, upper_order=5):\n ''' \n Convenience function:\n Fit a range of pymc3 models of increasing polynomial complexity. \n Suggest limit to max order 5 since calculation time is exponential.\n '''\n \n models, traces = OrderedDict(), OrderedDict()\n\n for k in range(1,upper_order+1):\n\n nm = 'k{}'.format(k)\n fml = create_poly_modelspec(k)\n\n with pm.Model() as models[nm]:\n\n print('\\nRunning: {}'.format(nm))\n pm.glm.glm(fml, df, family=pm.glm.families.Normal())\n\n start_MAP = pm.find_MAP(fmin=fmin_powell, disp=False)\n traces[nm] = pm.sample(2000, start=start_MAP, step=pm.NUTS(), progressbar=True) \n \n return models, traces\n\ndef plot_traces(traces, retain=1000):\n ''' \n Convenience function:\n Plot traces with overlaid means and values\n '''\n \n ax = pm.traceplot(traces[-retain:], figsize=(12,len(traces.varnames)*1.5),\n lines={k: v['mean'] for k, v in pm.df_summary(traces[-retain:]).iterrows()})\n\n for i, mn in enumerate(pm.df_summary(traces[-retain:])['mean']):\n ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data'\n ,xytext=(5,10), textcoords='offset points', rotation=90\n ,va='bottom', fontsize='large', color='#AA0022')\n \ndef create_poly_modelspec(k=1):\n ''' \n Convenience function:\n Create a polynomial modelspec string for patsy\n '''\n return ('income ~ educ + hours + age ' + ' '.join(['+ np.power(age,{})'.format(j) \n for j in range(2,k+1)])).strip()", "The Adult Data Set is commonly used to benchmark machine learning algorithms. The goal is to use demographic features, or variables, to predict whether an individual makes more than \\$50,000 per year. The data set is almost 20 years old, and therefore, not perfect for determining the probability that I will make more than \\$50K, but it is a nice, simple dataset that can be used to showcase a few benefits of using Bayesian logistic regression over its frequentist counterpart.\nThe motivation for myself to reproduce this piece of work was to learn how to use Odd Ratio in Bayesian Regression.", "data = pd.read_csv(\"https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data\", header=None, names=['age', 'workclass', 'fnlwgt', \n 'education-categorical', 'educ', \n 'marital-status', 'occupation',\n 'relationship', 'race', 'sex', \n 'captial-gain', 'capital-loss', \n 'hours', 'native-country', \n 'income'])\n\ndata", "Scrubbing and cleaning\nWe need to remove any null entries in Income. 
\nAnd we also want to restrict this study to the United States.", "data = data[~pd.isnull(data['income'])]\n\n\ndata = data[data['native-country']==\" United-States\"]\n\nincome = 1 * (data['income'] == \" >50K\")\nage2 = np.square(data['age'])\n\ndata = data[['age', 'educ', 'hours']]\ndata['age2'] = age2\ndata['income'] = income\n\nincome.value_counts()", "Exploring the data\nLet us get a feel for the parameters. \n* We see that age is a tailed distribution. Certainly not Gaussian!\n* We don't see much of a correlation between many of the features, with the exception of Age and Age2. \n* Hours worked has some interesting behaviour. How would one describe this distribution?", "\ng = seaborn.pairplot(data)\n\n# Compute the correlation matrix\ncorr = data.corr()\n\n# Generate a mask for the upper triangle\nmask = np.zeros_like(corr, dtype=np.bool)\nmask[np.triu_indices_from(mask)] = True\n\n# Set up the matplotlib figure\nf, ax = plt.subplots(figsize=(11, 9))\n\n# Generate a custom diverging colormap\ncmap = seaborn.diverging_palette(220, 10, as_cmap=True)\n\n# Draw the heatmap with the mask and correct aspect ratio\nseaborn.heatmap(corr, mask=mask, cmap=cmap, vmax=.3,\n linewidths=.5, cbar_kws={\"shrink\": .5}, ax=ax)", "We do not see many strong correlations here. The highest is 0.30 according to this plot. We see a weak correlation between hours and income \n(which is logical), and a slightly stronger correlation between education and income (which is the kind of question we are answering).\nThe model\nWe will use a simple model, which assumes that the probability of making more than $50K \nis a function of age, years of education and hours worked per week. We will use PyMC3 \nto do inference. \nIn Bayesian statistics, we treat everything as a random variable and we want to know the posterior probability distribution of the parameters\n(in this case, the regression coefficients).\nBy Bayes' rule, the posterior is $$p(\\theta | D) = \\frac{p(D|\\theta)p(\\theta)}{p(D)}$$\nBecause the denominator is a notoriously difficult integral, $p(D) = \\int p(D | \\theta) p(\\theta) d \\theta $, we would prefer to skip computing it. Fortunately, if we draw examples from the parameter space, with probability proportional to the height of the posterior at any given point, we end up with an empirical distribution that converges to the posterior as the number of samples approaches infinity. \nWhat this means in practice is that we only need to worry about the numerator. \nGetting back to logistic regression, we need to specify a prior and a likelihood in order to draw samples from the posterior. We could use sociological knowledge about the effects of age and education on income, but instead, let's use the default prior specification for GLM coefficients that PyMC3 gives us, which is $p(\\theta)=N(0,10^{12}I)$. This is a very vague prior that will let the data speak for themselves.\nThe likelihood is the product of n Bernoulli trials, $\\prod_{i=1}^{n} p_{i}^{y_{i}} (1 - p_{i})^{1-y_{i}}$,\nwhere $p_i = \\frac{1}{1 + e^{-z_i}}$, \n$z_{i} = \\beta_{0} + \\beta_{1}(age)_{i} + \\beta_{2}(age)^{2}_{i} + \\beta_{3}(educ)_{i} + \\beta_{4}(hours)_{i}$ and $y_{i} = 1$ if income is greater than 50K and $y_{i} = 0$ otherwise. \nWith the math out of the way we can get back to the data. Here I use PyMC3 to draw samples from the posterior. The sampling algorithm used is NUTS, which is a form of Hamiltonian Monte Carlo, in which parameters are tuned automatically. 
Notice, that we get to borrow the syntax of specifying GLM's from R, very convenient! I use a convenience function from above to plot the trace infromation from the first 1000 parameters.", "with pm.Model() as logistic_model:\n pm.glm.glm('income ~ age + age2 + educ + hours', data, family=pm.glm.families.Binomial())\n trace_logistic_model = pm.sample(2000, pm.NUTS(), progressbar=True)\n\n\nplot_traces(trace_logistic_model, retain=1000)", "Some results\nOne of the major benefits that makes Bayesian data analysis worth the extra computational effort in many circumstances is that we can be explicit about our uncertainty. Maximum likelihood returns a number, but how certain can we be that we found the right number? Instead, Bayesian inference returns a distribution over parameter values.\nI'll use seaborn to look at the distribution of some of these factors.", "plt.figure(figsize=(9,7))\ntrace = trace_logistic_model[1000:]\nseaborn.jointplot(trace['age'], trace['educ'], kind=\"hex\", color=\"#4CB391\")\nplt.xlabel(\"beta_age\")\nplt.ylabel(\"beta_educ\")\nplt.show()", "So how do age and education affect the probability of making more than $$50K?$ To answer this question, we can show how the probability of making more than $50K changes with age for a few different education levels. Here, we assume that the number of hours worked per week is fixed at 50. PyMC3 gives us a convenient way to plot the posterior predictive distribution. We need to give the function a linear model and a set of points to evaluate. We will pass in three different linear models: one with educ == 12 (finished high school), one with educ == 16 (finished undergrad) and one with educ == 19 (three years of grad school).", "# Linear model with hours == 50 and educ == 12\nlm = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] + \n samples['age']*x + \n samples['age2']*np.square(x) + \n samples['educ']*12 + \n samples['hours']*50)))\n\n# Linear model with hours == 50 and educ == 16\nlm2 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] + \n samples['age']*x + \n samples['age2']*np.square(x) + \n samples['educ']*16 + \n samples['hours']*50)))\n\n# Linear model with hours == 50 and educ == 19\nlm3 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] + \n samples['age']*x + \n samples['age2']*np.square(x) + \n samples['educ']*19 + \n samples['hours']*50)))", "Each curve shows how the probability of earning more than $ 50K$ changes with age. The red curve represents 19 years of education, the green curve represents 16 years of education and the blue curve represents 12 years of education. For all three education levels, the probability of making more than $50K increases with age until approximately age 60, when the probability begins to drop off. Notice that each curve is a little blurry. This is because we are actually plotting 100 different curves for each level of education. Each curve is a draw from our posterior distribution. Because the curves are somewhat translucent, we can interpret dark, narrow portions of a curve as places where we have low uncertainty and light, spread out portions of the curve as places where we have somewhat higher uncertainty about our coefficient values.", "# Plot the posterior predictive distributions of P(income > $50K) vs. 
age\npm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm, samples=100, color=\"blue\", alpha=.15)\npm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm2, samples=100, color=\"green\", alpha=.15)\npm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm3, samples=100, color=\"red\", alpha=.15)\nimport matplotlib.lines as mlines\nblue_line = mlines.Line2D(['lm'], [], color='b', label='High School Education')\ngreen_line = mlines.Line2D(['lm2'], [], color='g', label='Bachelors')\nred_line = mlines.Line2D(['lm3'], [], color='r', label='Grad School')\nplt.legend(handles=[blue_line, green_line, red_line], loc='lower right')\nplt.ylabel(\"P(Income > $50K)\")\nplt.xlabel(\"Age\")\nplt.show()\n\nb = trace['educ']\nplt.hist(np.exp(b), bins=20, normed=True)\nplt.xlabel(\"Odds Ratio\")\nplt.show()", "Finally, we can find a credible interval (remember kids - credible intervals are Bayesian and confidence intervals are frequentist) for this quantity. This may be the best part about Bayesian statistics: we get to interpret credibility intervals the way we've always wanted to interpret them. We are 95% confident that the odds ratio lies within our interval!", "lb, ub = np.percentile(b, 2.5), np.percentile(b, 97.5)\n\nprint(\"P(%.3f < O.R. < %.3f) = 0.95\"%(np.exp(3*lb),np.exp(3*ub)))", "Model selection\nThe Deviance Information Criterion (DIC) is a fairly unsophisticated method for comparing the deviance of likelhood across the the sample traces of a model run. However, this simplicity apparently yields quite good results in a variety of cases. We'll run the model with a few changes to see what effect higher order terms have on this model.\nOne question that was immediately asked was what effect does age have on the model, and why should it be age^2 versus age? We'll use the DIC to answer this question.", "models_lin, traces_lin = run_models(data, 4)\n\ndfdic = pd.DataFrame(index=['k1','k2','k3','k4'], columns=['lin'])\ndfdic.index.name = 'model'\n\nfor nm in dfdic.index:\n dfdic.loc[nm, 'lin'] = pm.stats.dic(traces_lin[nm],models_lin[nm])\n\n\ndfdic = pd.melt(dfdic.reset_index(), id_vars=['model'], var_name='poly', value_name='dic')\n\ng = seaborn.factorplot(x='model', y='dic', col='poly', hue='poly', data=dfdic, kind='bar', size=6)", "There isn't a lot of difference between these models in terms of DIC. So our choice is fine in the model above, and there isn't much to be gained for going up to age^3 for example.\nNext we look at WAIC. Which is another model selection technique.", "dfdic = pd.DataFrame(index=['k1','k2','k3','k4'], columns=['lin'])\ndfdic.index.name = 'model'\n\nfor nm in dfdic.index:\n dfdic.loc[nm, 'lin'] = pm.stats.waic(traces_lin[nm],models_lin[nm])\n\n\ndfdic = pd.melt(dfdic.reset_index(), id_vars=['model'], var_name='poly', value_name='waic')\n\ng = seaborn.factorplot(x='model', y='waic', col='poly', hue='poly', data=dfdic, kind='bar', size=6)", "The WAIC confirms our decision to use age^2." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sdpython/ensae_teaching_cs
_doc/notebooks/td2a_eco/td2_eco_rappels_1a.ipynb
mit
[ "2A.eco - Rappel de ce que vous savez déjà mais avez peut-être oublié\npandas et numpy sont essentiels pour manipuler les données. C'est ce que rappelle ce notebook. Voir aussi Essential Cheat Sheets for Machine Learning and Deep Learning Engineers.", "from jyquickhelper import add_notebook_menu\nadd_notebook_menu()", "Les quelques règles de Python\nPython est un peu susceptible et protocolaire, il y a quelques règles à respecter : \n1) L'indentation est primordiale : un code mal indenté ne fonctionne pas. \nL'indentation indique à l'interpréteur où se trouvent les séparations entre des blocs d'instructions. Un peu comme des points dans un texte. \nSi les lignes ne sont pas bien alignées, l'interpréteur ne sait plus à quel bloc associer la ligne.\n2) On commence à compter à 0. Ca peut paraitre bizarre mais c'est comme ça. Le premier élément d'une liste est le 0-ème. \n3) Les marques de ponctuation sont importantes : \n- Pour une liste : []\n- Pour un dictionnaire : {}\n- Pour un tuple : ()\n- Pour séparer des éléments : ,\n- Pour commenter un bout de code : #\n- Pour aller à la ligne dans un bloc d'instructions : \\\n- Les majuscules et minuscules sont importantes\n- Par contre l'usage des ' ou des \" est indifférente. Il faut juste avoir les mêmes début et fin.\n- Pour documenter une fonction ou une classe \"\"\" documentation \"\"\" \nLes outputs de Python : l'opération, le print et le return\nQuand Python réalise des opérations, il faut lui préciser ce qu'il doit en faire : \n- est-ce qu'il doit juste faire l'opération,\n- afficher le résultat de l'opération, \n- créer un objet avec le résultat de l'opération ?\nRemarque : dans l'environnement Notebook, le dernier élement d'une cellule est automatiquement affiché (print), qu'on lui demande ou non de le faire. Ce n'est pas le cas dans un éditeur classique comme Spyder.", "# on calcule : dans le cas d'une opération par exemple une somme\n2+3 # Python calcule le résultat mais n'affiche rien dans la sortie\n\n# le print : on affiche\n\nprint(2+3) # Python calcule et on lui demande juste de l'afficher\n# le résultat est en dessous du code\n\n# le print dans une fonction \n\ndef addition_v1(a,b) : \n print(a+b)\n\nresultat_print = addition_v1(2,0) \nprint(type(resultat_print))\n\n# dans la sortie on a l'affichage du résultat, car la sortie de la fonction est un print \n# en plus on lui demande quel est le type du résultat. 
Un print ne renvoie aucun type, ce n'est ni un numérique,\n# ni une chaine de charactères, le résultat d'un print n'est pas un format utilisable", "Le résultat de l'addition est affiché\nLa fonction addition_v1 effectue un print\nPar contre, l'objet crée n'a pas de type, il n'est pas un chiffre, ce n'est qu'un affichage.\nPour créer un objet avec le résultat de la fonction, il faut utiliser return", "# le return dans une fonction\ndef addition_v2(a,b) : \n return a+b\n\nresultat_return = addition_v2(2,5) # \nprint(type(resultat_return))\n## là on a bien un résultat qui est du type \"entier\"", "Le résultat de addition_v2 n'est pas affiché comme dans addition_v1\nPar contre, la fonction addition_v2 permet d'avoir un objet de type int, un entier donc.\nType de base : variables, listes, dictionnaires ...\nPyhon permet de manipuler différents types de base\nOn distingue deux types de variables : les immuables qui ne peuvent être modifiés et les modifiables\nLes variables - types immuables\nLes variables immuables ne peuvent être modifiées\n\nNone : ce type est une convention de programmation pour dire que la valeur n'est pas calculée\nbool : un booléen\nint : un entier\nfloat : un réel\nstr : une chaine de caractères\ntuple : un vecteur", "i = 3 # entier = type numérique (type int)\nr = 3.3 # réel = type numérique (type float)\ns = \"exemple\" # chaîne de caractères = type str \nn = None # None signifie que la variable existe mais qu'elle ne contient rien\n # elle est souvent utilisée pour signifier qu'il n'y a pas de résultat\na = (1,2) # tuple\n\nprint(i,r,s,n,a) ", "Si on essaie de changer le premier élément de la chaine de caractères s on va avoir un peu de mal. \nPar exemple si on voulait mettre une majuscule à \"exemple\", on aurait envie d'écrire que le premier élément de la chaine s est \"E\" majuscule\nMais Python ne va pas nous laisser faire, il nous dit que les objets \"chaine de caractère\" ne peuvent être modifiés", "s[0] = \"E\" # déclenche une exception", "Tout ce qu'on peut faire avec une variable immutable, c'est le réaffecter à une autre valeur : il ne peut pas être modifié\nPour s'en convaincre, utilisons la fonction id() qui donne un identifiant à chaque objet.", "print(s)\nid(s)\n\ns = \"autre_mot\"\nid(s)", "On voit bien que s a changé d'identifiant : il peut avoir le même nom, ce n'est plus le même objet\nLes variables - types modifiable : listes et dictionnaires\nHeureusement, il existe des variables modifiables comme les listes et les dictionnaires. 
\nLes listes - elles s''écrivent entre [ ]\nLes listes sont des élements très utiles, notamment quand vous souhaitez faire des boucles\nPour faire appel aux élements d'une liste, on donne leur position dans la liste : le 1er est le 0, le 2ème est le 1 ...", "ma_liste = [1,2,3,4]\n\n\nprint(\"La longueur de ma liste est de\", len(ma_liste))\n\nprint(\"Le premier élément de ma liste est :\", ma_liste[0])\n\nprint(\"Le dernier élément de ma liste est :\", ma_liste[3])\nprint(\"Le dernier élément de ma liste est :\", ma_liste[-1])", "Les dictionnaires - ils s'écrivent entre { }\nUn dictionnaire associe à une clé un autre élément, appelé une valeur : un chiffre, un nom, une liste, un autre dictionnaire etc.\n\nFormat d'un dictionnaire : {Clé : valeur}\n\nDictionnaire avec des valeurs int\nOn peut par exemple associer à un nom, un nombre", "mon_dictionnaire_notes = { 'Nicolas' : 18 , 'Pimprenelle' : 15} \n# un dictionnaire qui à chaque nom associe un nombre\n# à Nicolas, on associe 18\n\nprint(mon_dictionnaire_notes) ", "Dictionnaire avec des valeurs qui sont des listes\nPour chaque clé d'un dictionnaire, il ne faut pas forcément garder la même forme de valeur\nDans l'exemple, la valeur de la clé \"Nicolas\" est une liste, alors que celle de \"Philou\" est une liste de liste", "mon_dictionnaire_loisirs = \\\n{ 'Nicolas' : ['Rugby','Pastis','Belote'] , \n 'Pimprenelle' : ['Gin Rami','Tisane','Tara Jarmon','Barcelone','Mickey Mouse'],\n 'Philou' : [['Maths','Jeux'],['Guillaume','Jeanne','Thimothée','Adrien']]}", "Pour accéder à un élément du dictionnaire, on fait appel à la clé et non plus à la position, comme c'était le cas dans les listes", "print(mon_dictionnaire_loisirs['Nicolas']) # on affiche une liste\n\nprint(mon_dictionnaire_loisirs['Philou']) # on affiche une liste de listes", "Si on ne veut avoir que la première liste des loisirs de Philou, on demande le premier élément de la liste", "print(mon_dictionnaire_loisirs['Philou'][0]) # on affiche alors juste la première liste", "On peut aussi avoir des valeurs qui sont des int et des listes", "mon_dictionnaire_patchwork_good = \\\n{ 'Nicolas' : ['Rugby','Pastis','Belote'] ,\n 'Pimprenelle' : 18 }", "A retenir\n\nL'indentation du code est importante (4 espaces et pas une tabulation)\nUne liste est entre [] et on peut appeler les positions par leur place \nUn dictionnaire, clé x valeur, s'écrit entre {} et on appelle un élément en fonction de la clé\n\nQuestions pratiques :\n\nQuelle est la position de 7 dans la liste suivante", "liste_nombres = [1,2,7,5,3]", "Combien de clés a ce dictionnaire ?", " dictionnaire_evangile = {\"Marc\" : \"Lion\", \"Matthieu\" : [\"Ange\",\"Homme ailé\"] , \n \"Jean\" : \"Aigle\" , \"Luc\" : \"Taureau\"}", "Que faut-il écrire pour obtenir \"Ange\" en résultat à partir du dictionnaire_evangile ? \n\nObjets : méthodes et attributs\nMainentant qu'on a vu quels objets existaient en Python, nous allons voir comment nous en servir.\nUn petit détour pour bien comprendre : Un objet, c'est quoi ?\nUn objet a deux choses : des attributs et des méthodes\n\nLes attributs décrivent sa structure interne : sa taille, sa forme (dont on ne va pas parler ici)\nLes méthodes sont des \"actions\" qui s'appliqueront à l'objet\n\nPremiers exemples de méthode\nAvec les éléments définis dans la partie 1 (les listes, les dictionnaires) on peut faire appel à des méthodes qui sont directement liées à ces objets.\nLes méthodes, c'est un peu les actions de Python. 
\nUne méthode pour les listes\nPour ajouter un item dans une liste : on va utiliser la méthode .append()", "ma_liste = [\"Nicolas\",\"Michel\",\"Bernard\"]\n\nma_liste.append(\"Philippe\")\n\nprint(ma_liste)", "Une méthode pour les dictionnaires\nPour connaitre l'ensemble des clés d'un dictionnaire, on appelle la méthode .keys()", "mon_dictionnaire = {\"Marc\" : \"Lion\", \"Matthieu\" : [\"Ange\",\"Homme ailé\"] , \n \"Jean\" : \"Aigle\" , \"Luc\" : \"Taureau\"}\n\nprint(mon_dictionnaire.keys())", "Connaitre les méthodes d'un objet\nPour savoir quelles sont les méthodes d'un objet vous pouvez : \n - taper help(mon_objet) ou mon_objet? dans la console iPython\n - taper mon_objet. + touche tabulation dans la console iPython ou dans le notebook . iPython permet la complétion, c'est-à-dire que vous pouvez faire appaître la liste\nLes opérations et méthodes classiques des listes\nCréer une liste\nPour créer un objet de la classe list, il suffit de le déclarer. Ici on affecte à x une liste", "x = [4, 5] # création d’une liste composée de deux entiers\nx = [\"un\", 1, \"deux\", 2] # création d’une liste composée de 2 chaînes de caractères\n# et de deux entiers, l’ordre d’écriture est important\nx = [3] # création d’une liste d’un élément, sans la virgule,\nx = [ ] # crée une liste vide\nx = list () # crée une liste vide", "Un premier test sur les listes\nSi on veut tester la présence d'un élément dans une liste, on l'écrit de la manière suivante :", "# Exemple \n\nx = \"Marcel\"\n\nl = [\"Marcel\",\"Edith\",\"Maurice\",\"Jean\"]\n\nprint(x in l)\n\n#vrai si x est un des éléments de l", "Pour concaténer deux listes :\nOn utilise le symbole +", "t = [\"Antoine\",\"David\"]\nprint(l + t) #concaténation de l et t", "Pour trouver certains éléments d'une liste\nPour chercher des élements dans une liste, on utilise la position dans la liste.", "l[1] # donne l'élément qui est en 2ème position de la liste\n\nl[1:3] # donne les éléments de la 2ème position de la liste à la 4ème exclue", "Quelques fonctions des listes", "longueur = len(l) # nombre d’éléments de l\n\nminimum = min(l) # plus petit élément de l, ici par ordre alphabétique\n\nmaximum = max(l) # plus grand élément de l, ici par ordre alphabétique\n\nprint(longueur,minimum,maximum)\n\ndel l[0 : 2] # supprime les éléments entre la position 0 et 2 exclue\nprint(l)", "Les méthodes des listes\nOn les trouve dans l'aide de la liste. On distingue les méthodes et les méthodes spéciales : visuellement, les méthodes spéciales sont celles qui précédées et suivis de deux caractères de soulignement, les autres sont des méthodes classiques.", "help(l)", "A retenir et questions\nA retenir : \n\nChaque objet Python a des attributs et des méthodes\nVous pouvez créer des classes avec des attributs et des méthodes \nLes méthodes des listes et des dictionnaires sont les plus utilisées : \nlist.count()\nlist.sort()\nlist.append()\ndict.keys()\ndict.items()\ndict.values()\n\n\n\n\nQuestions pratiques : \n\n\nDéfinir la liste allant de 1 à 10, puis effectuez les actions suivantes :\n– triez et affichez la liste \n– ajoutez l’élément 11 à la liste et affichez la liste \n– renversez et affichez la liste \n– affichez l’indice de l’élément 7 \n– enlevez l’élément 9 et affichez la liste \n– affichez la sous-liste du 2e au 3e élément ;\n– affichez la sous-liste du début au 2e élément ;\n– affichez la sous-liste du 3e élément à la fin de la liste ;\n\n\n\nConstruire le dictionnaire des 6 premiers mois de l'année avec comme valeurs le nombre de jours respectif. 
\n- Renvoyer la liste des mois. \n- Renvoyer la liste des jours.\n- Ajoutez la clé du mois de Juillet ?\n\n\n\nPasser des listes, dictionnaires à pandas\nSupposons que la variable 'data' est une liste qui contient nos données. \nUne observation correspond à un dictionnaire qui contient le nom, le type, l'ambiance et la note d'un restaurant. \nIl est aisé de transformer cette liste en dataframe grâce à la fonction 'DataFrame'.", "import pandas \n\ndata = [{\"nom\": \"Little Pub\", \"type\" : \"Bar\", \"ambiance\": 9, \"note\": 7},\n {\"nom\": \"Le Corse\", \"type\" : \"Sandwicherie\", \"ambiance\": 2, \"note\": 8},\n {\"nom\": \"Café Caumartin\", \"type\" : \"Bar\", \"ambiance\": 1}]\n\ndf = pandas.DataFrame(data)\n\nprint(data)\ndf" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ryan-leung/PHYS4650_Python_Tutorial
notebooks/Jan2018/python-matplotlib.ipynb
bsd-3-clause
[ "Matplotlib\n<img src=\"images/matplotlib.svg\" alt=\"matplotlib\" style=\"width: 600px;\"/>\nUsing matplotlib in Jupyter notebook", "import matplotlib.pyplot as plt\n%matplotlib inline\n\nimport numpy as np", "File Reading\nLine Plots\nplt.plot Plot lines and/or markers:\n* plot(x, y)\n * plot x and y using default line style and color\n* plot(x, y, 'bo')\n * plot x and y using blue circle markers\n* plot(y)\n * plot y using x as index array 0..N-1\n* plot(y, 'r+')\n * Similar, but with red plusses\nrun \n%pdoc plt.plot \nfor more details", "x = np.arange(-np.pi,np.pi,0.01) # Create an array of x values from -pi to pi with 0.01 interval\ny = np.sin(x) # Apply sin function on all x\nplt.plot(x,y)\n\nplt.plot(y)", "Scatter Plots\nplt.plot can also plot markers.", "x = np.arange(0,10,1) # x = 1,2,3,4,5...\ny = x*x # Squared x\nplt.plot(x,y,'bo') # plot x and y using blue circle markers\n\nplt.plot(x,y,'r+') # plot x and y using red plusses", "Plot properties\nAdd x-axis and y-axis", "x = np.arange(-np.pi,np.pi,0.001)\nplt.plot(x,np.sin(x))\nplt.title('y = sin(x)') # title\nplt.xlabel('x (radians)') # x-axis label\nplt.ylabel('y') # y-axis label\n\n# To plot the axis label in LaTex, we can run\nfrom matplotlib import rc\n## For sans-serif font:\nrc('font',**{'family':'sans-serif','sans-serif':['Helvetica']})\nrc('text', usetex=True)\n\n## for Palatino and other serif fonts use:\n#rc('font',**{'family':'serif','serif':['Palatino']})\n\nplt.plot(x,np.sin(x))\nplt.title(r'T = sin($\\theta$)') # title, the `r` in front of the string means raw string\nplt.xlabel(r'$\\theta$ (radians)') # x-axis label, LaTex synatx should be encoded with $$\nplt.ylabel('T') # y-axis label", "Multiple plots", "x1 = np.linspace(0.0, 5.0)\nx2 = np.linspace(0.0, 2.0)\n\ny1 = np.cos(2 * np.pi * x1) * np.exp(-x1)\ny2 = np.cos(2 * np.pi * x2)\n\nplt.subplot(2, 1, 1)\nplt.plot(x1, y1, '.-')\nplt.title('Plot 2 graph at the same time')\nplt.ylabel('Amplitude (Damped)')\n\nplt.subplot(2, 1, 2)\nplt.plot(x2, y2, '.-')\nplt.xlabel('time (s)')\nplt.ylabel('Amplitude (Undamped)')", "Save figure", "plt.plot(x,np.sin(x))\nplt.savefig('plot.pdf')\nplt.savefig('plot.png')\n\n# To load image into this Jupyter notebook\nfrom IPython.display import Image\nImage(\"plot.png\")", "AplPy" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
brandoncgay/deep-learning
tv-script-generation/dlnd_tv_script_generation.ipynb
mit
[ "TV Script Generation\nIn this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.\nGet the Data\nThe data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n# Ignore notice, since we don't use it for analysing the data\ntext = text[81:]", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))", "Implement Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:\n- Lookup Table\n- Tokenize Punctuation\nLookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call vocab_to_int\n- Dictionary to go from the id to word, we'll call int_to_vocab\nReturn these dictionaries in the following tuple (vocab_to_int, int_to_vocab)", "import numpy as np\nimport problem_unittests as tests\n\ndef create_lookup_tables(text):\n \"\"\"\n Create lookup tables for vocabulary\n :param text: The text of tv scripts split into words\n :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n \"\"\"\n vocab = set(text)\n vocab_to_int = {word: i for word, i in zip(vocab, range(len(vocab)))}\n int_to_vocab = {i: word for word, i in vocab_to_int.items()}\n \n return (vocab_to_int, int_to_vocab)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)", "Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\nImplement the function token_lookup to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( . )\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! )\n- Question mark ( ? 
)\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\nThis dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token \"dash\", try using something like \"||dash||\".", "def token_lookup():\n \"\"\"\n Generate a dict to turn punctuation into a token.\n :return: Tokenize dictionary where the key is the punctuation and the value is the token\n \"\"\"\n token_dict = { '.': '||Period||',\n ',': '||Comma||',\n '\"': '||Quotation_Mark||',\n ';': '||Semi_Colon||',\n '!': '||Exclamation_Mark||',\n '?': '||Question_Mark||',\n '(': '||Left_Parentheses||',\n ')': '||Right_Parentheses||',\n '--': '||Dash||',\n '\\n': '||Return||'\n \n }\n return token_dict\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()", "Build the Neural Network\nYou'll build the components necessary to build a RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\nCheck the Version of TensorFlow and Access to GPU", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Input\nImplement the get_inputs() function to create TF Placeholders for the Neural Network. 
It should create the following placeholders:\n- Input text placeholder named \"input\" using the TF Placeholder name parameter.\n- Targets placeholder\n- Learning Rate placeholder\nReturn the placeholders in the following tuple (Input, Targets, LearningRate)", "def get_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate)\n \"\"\"\n inputs = tf.placeholder(tf.int32, shape=[None, None], name='input')\n labels = tf.placeholder(tf.int32, shape=[None, None], name='labels')\n learning_rate = tf.placeholder(tf.float32, name='learning_rate')\n return (inputs, labels, learning_rate)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_inputs(get_inputs)", "Build RNN Cell and Initialize\nStack one or more BasicLSTMCells in a MultiRNNCell.\n- The Rnn size should be set using rnn_size\n- Initalize Cell State using the MultiRNNCell's zero_state() function\n - Apply the name \"initial_state\" to the initial state using tf.identity()\nReturn the cell and initial state in the following tuple (Cell, InitialState)", "def get_init_cell(batch_size, rnn_size):\n \"\"\"\n Create an RNN Cell and initialize it.\n :param batch_size: Size of batches\n :param rnn_size: Size of RNNs\n :return: Tuple (cell, initialize state)\n \"\"\"\n layers = 1\n lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=0.5)\n cell = tf.contrib.rnn.MultiRNNCell([drop]*layers)\n initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name='initial_state')\n \n return (cell, initial_state)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_init_cell(get_init_cell)", "Word Embedding\nApply embedding to input_data using TensorFlow. Return the embedded sequence.", "def get_embed(input_data, vocab_size, embed_dim):\n \"\"\"\n Create embedding for <input_data>.\n :param input_data: TF placeholder for text input.\n :param vocab_size: Number of words in vocabulary.\n :param embed_dim: Number of embedding dimensions\n :return: Embedded input.\n \"\"\"\n embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))\n embed = tf.nn.embedding_lookup(embedding, input_data)\n \n return embed\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_embed(get_embed)", "Build RNN\nYou created a RNN Cell in the get_init_cell() function. 
Time to use the cell to create a RNN.\n- Build the RNN using the tf.nn.dynamic_rnn()\n - Apply the name \"final_state\" to the final state using tf.identity()\nReturn the outputs and final_state state in the following tuple (Outputs, FinalState)", "def build_rnn(cell, inputs):\n \"\"\"\n Create a RNN using a RNN Cell\n :param cell: RNN Cell\n :param inputs: Input text data\n :return: Tuple (Outputs, Final State)\n \"\"\"\n outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)\n final_state = tf.identity(final_state, name='final_state')\n \n return (outputs, final_state)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_rnn(build_rnn)", "Build the Neural Network\nApply the functions you implemented above to:\n- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.\n- Build RNN using cell and your build_rnn(cell, inputs) function.\n- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.\nReturn the logits and final state in the following tuple (Logits, FinalState)", "def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):\n \"\"\"\n Build part of the neural network\n :param cell: RNN cell\n :param rnn_size: Size of rnns\n :param input_data: Input data\n :param vocab_size: Vocabulary size\n :param embed_dim: Number of embedding dimensions\n :return: Tuple (Logits, FinalState)\n \"\"\"\n embed = get_embed(input_data, vocab_size, embed_dim)\n outputs, final_state = build_rnn(cell, embed)\n logits = tf.contrib.layers.fully_connected(inputs=outputs, \n num_outputs=vocab_size, \n activation_fn=None,\n weights_initializer=tf.truncated_normal_initializer(stddev=0.1),\n biases_initializer=tf.zeros_initializer())\n return (logits, final_state)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_nn(build_nn)", "Batches\nImplement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:\n- The first element is a single batch of input with the shape [batch size, sequence length]\n- The second element is a single batch of targets with the shape [batch size, sequence length]\nIf you can't fill the last batch with enough data, drop the last batch.\nFor exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:\n```\n[\n # First Batch\n [\n # Batch of Input\n [[ 1 2], [ 7 8], [13 14]]\n # Batch of targets\n [[ 2 3], [ 8 9], [14 15]]\n ]\n# Second Batch\n [\n # Batch of Input\n [[ 3 4], [ 9 10], [15 16]]\n # Batch of targets\n [[ 4 5], [10 11], [16 17]]\n ]\n# Third Batch\n [\n # Batch of Input\n [[ 5 6], [11 12], [17 18]]\n # Batch of targets\n [[ 6 7], [12 13], [18 1]]\n ]\n]\n```\nNotice that the last target value in the last batch is the first input value of the first batch. In this case, 1. 
This is a common technique used when creating sequence batches, although it is rather unintuitive.", "def test(int_text, batch_size, seq_length):\n \n batches = []\n n_batches = len(int_text)//(batch_size * seq_length)\n x = np.array(int_text[:n_batches * batch_size * seq_length])\n y = np.array(int_text[1:n_batches * batch_size * seq_length + 1])\n \n x = x.reshape((batch_size, -1))\n y = y.reshape((batch_size, -1))\n \n x = np.split(x, n_batches, axis=1)\n y = np.split(y, n_batches, axis=1)\n \n batches = np.array(list(zip(x, y)))\n return batches\n\nprint(test([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15], 2, 3))\n\ndef get_batches(int_text, batch_size, seq_length):\n \"\"\"\n Return batches of input and target\n :param int_text: Text with the words replaced by their ids\n :param batch_size: The size of batch\n :param seq_length: The length of sequence\n :return: Batches as a Numpy array\n \"\"\"\n \n n_batches = len(int_text)//(batch_size * seq_length)\n x = np.array(int_text[:n_batches * batch_size * seq_length])\n y = np.array(int_text[1:n_batches * batch_size * seq_length + 1])\n \n x = x.reshape((batch_size, -1))\n y = y.reshape((batch_size, -1))\n \n x = np.split(x, n_batches, axis=1)\n y = np.split(y, n_batches, axis=1)\n \n batches = np.array(list(zip(x, y)))\n \n return batches\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_batches(get_batches)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet num_epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet embed_dim to the size of the embedding.\nSet seq_length to the length of sequence.\nSet learning_rate to the learning rate.\nSet show_every_n_batches to the number of batches the neural network should print progress.", "# Number of Epochs\nnum_epochs = 40\n# Batch Size\nbatch_size = 500\n# RNN Size\nrnn_size = 512\n# Embedding Dimension Size\nembed_dim = 300\n# Sequence Length\nseq_length = 10\n# Learning Rate\nlearning_rate = 0.01\n# Show stats for every n number of batches\nshow_every_n_batches = 20\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nsave_dir = './save'", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)\n\n # Probabilities for generating words\n probs = tf.nn.softmax(logits, name='probs')\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)", "Train\nTrain the neural network on the preprocessed data. 
If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n\n for batch_i, (x, y) in enumerate(batches):\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')", "Save Parameters\nSave seq_length and save_dir for generating a new TV script.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()", "Implement Generate Functions\nGet Tensors\nGet tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\nReturn the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)", "def get_tensors(loaded_graph):\n \"\"\"\n Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n :param loaded_graph: TensorFlow graph loaded from file\n :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n \"\"\"\n InputTensor = loaded_graph.get_tensor_by_name(\"input:0\")\n InitialStateTensor = loaded_graph.get_tensor_by_name(\"initial_state:0\")\n FinalStateTensor = loaded_graph.get_tensor_by_name(\"final_state:0\")\n ProbsTensor = loaded_graph.get_tensor_by_name(\"probs:0\")\n \n return (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_tensors(get_tensors)", "Choose Word\nImplement the pick_word() function to select the next word using probabilities.", "np.random.choice(5, 1)[0]\n\ndef pick_word(probabilities, int_to_vocab):\n \"\"\"\n Pick the next word in the generated text\n :param probabilities: Probabilites of the next word\n :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n :return: String of the predicted word\n \"\"\"\n i = np.random.choice(len(probabilities), 1, p=probabilities)[0]\n word = int_to_vocab[i]\n \n return word\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_pick_word(pick_word)", "Generate TV Script\nThis will generate the TV script for you. 
Set gen_length to the length of TV script you want to generate.", "gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_dir + '.meta')\n loader.restore(sess, load_dir)\n\n # Get Tensors from loaded model\n input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n # Sentences generation setup\n gen_sentences = [prime_word + ':']\n prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n # Generate sentences\n for n in range(gen_length):\n # Dynamic Input\n dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n dyn_seq_length = len(dyn_input[0])\n\n # Get Prediction\n probabilities, prev_state = sess.run(\n [probs, final_state],\n {input_text: dyn_input, initial_state: prev_state})\n \n pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)\n\n gen_sentences.append(pred_word)\n \n # Remove tokens\n tv_script = ' '.join(gen_sentences)\n for key, token in token_dict.items():\n ending = ' ' if key in ['\\n', '(', '\"'] else ''\n tv_script = tv_script.replace(' ' + token.lower(), key)\n tv_script = tv_script.replace('\\n ', '\\n')\n tv_script = tv_script.replace('( ', '(')\n \n print(tv_script)", "The TV Script is Nonsensical\nIt's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckly there's more data! As we mentioned in the begging of this project, this is a subset of another dataset. We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sdaros/placeword
build_wordlist.ipynb
unlicense
[ "Importing our wordlists\nHere we import all of our wordlists and add them to an array which me can merge at the end. \nThis wordlists should not be filtered at this point. However they should all contain the same columns to make merging easier for later.", "wordlists = []", "Dictcc\nDownload the dictionary from http://www.dict.cc/?s=about%3Awordlist\nPrint out the first 20 lines of the dictionary", "!head -n 20 de-en.txt", "Use pandas library to import csv file", "import pandas as pd\n\n\ndictcc_df = pd.read_csv(\"de-en.txt\", \n sep='\\t',\n skiprows=8,\n header=None, \n names=[\"GermanWord\",\"Word\",\"WordType\"])", "Preview a few entries of the wordlist", "dictcc_df[90:100]", "We only need \"Word\" and \"WordType\" column", "dictcc_df = dictcc_df[[\"Word\", \"WordType\"]][:].copy()", "Convert WordType Column to a pandas.Categorical", "word_types = dictcc_df[\"WordType\"].astype('category')\ndictcc_df[\"WordType\"] = word_types\n# show data types of each column in the dataframe\ndictcc_df.dtypes", "List the current distribution of word types in dictcc dataframe", "# nltk TaggedCorpusParses requires uppercase WordType\ndictcc_df[\"WordType\"] = dictcc_df[\"WordType\"].str.upper()\ndictcc_df[\"WordType\"].value_counts().head()", "Add dictcc corpus to our wordlists array", "wordlists.append(dictcc_df)", "Moby\nDownload the corpus from http://icon.shef.ac.uk/Moby/mpos.html\nPerform some basic cleanup on the wordlist", "# the readme file in `nltk/corpora/moby/mpos` gives some information on how to parse the file\n\nresult = []\n# replace all DOS line endings '\\r' with newlines then change encoding to UTF8\nmoby_words = !cat nltk/corpora/moby/mpos/mobyposi.i | iconv --from-code=ISO88591 --to-code=UTF8 | tr -s '\\r' '\\n' | tr -s '×' '/'\nresult.extend(moby_words)\nmoby_df = pd.DataFrame(data = result, columns = ['Word'])\n\nmoby_df.tail(10)", "sort out the nouns, verbs and adjectives", "# Matches nouns\nnouns = moby_df[moby_df[\"Word\"].str.contains('/[Np]$')].copy()\nnouns[\"WordType\"] = \"NOUN\"\n# Matches verbs\nverbs = moby_df[moby_df[\"Word\"].str.contains('/[Vti]$')].copy()\nverbs[\"WordType\"] = \"VERB\"\n# Magtches adjectives\nadjectives = moby_df[moby_df[\"Word\"].str.contains('/A$')].copy()\nadjectives[\"WordType\"] = \"ADJ\"", "remove the trailing stuff and concatenate the nouns, verbs and adjectives", "nouns[\"Word\"] = nouns[\"Word\"].str.replace(r'/N$','')\nverbs[\"Word\"] = verbs[\"Word\"].str.replace(r'/[Vti]$','')\nadjectives[\"Word\"] = adjectives[\"Word\"].str.replace(r'/A$','')\n# Merge nouns, verbs and adjectives into one dataframe\nmoby_df = pd.concat([nouns,verbs,adjectives])", "Add moby corpus to wordlists array", "wordlists.append(moby_df)", "Combine all wordlists", "wordlist = pd.concat(wordlists)", "Filter for results that we want\n\nWe want to remove words that aren't associated with a type (null WordType)", "wordlist_filtered = wordlist[wordlist[\"WordType\"].notnull()]", "We want to remove words that contain non word characters (whitespace, hypens, etc.)", "# we choose [a-z] here and not [A-Za-z] because we do _not_\n# want to match words starting with uppercase characters.\n# ^to matches verbs in the infinitive from `dictcc`\nword_chars = r'^[a-z]+$|^to\\s'\nis_word_chars = wordlist_filtered[\"Word\"].str.contains(word_chars, na=False)\nwordlist_filtered = wordlist_filtered[is_word_chars]\nwordlist_filtered.describe()\nwordlist_filtered[\"WordType\"].value_counts()", "We want results that are less than 'x' letters long (x+3 for verbs since they are in their 
infinitive form in the dictcc wordlist)", "lt_x_letters = (wordlist_filtered[\"Word\"].str.len() < 9) |\\\n ((wordlist_filtered[\"Word\"].str.contains('^to\\s\\w+\\s')) &\\\n (wordlist_filtered[\"Word\"].str.len() < 11)\\\n )\nwordlist_filtered = wordlist_filtered[lt_x_letters]\nwordlist_filtered.describe()", "We want to remove all duplicates", "wordlist_filtered = wordlist_filtered.drop_duplicates(\"Word\")\nwordlist_filtered.describe()\nwordlist_filtered[\"WordType\"].value_counts()", "Load our wordlists into nltk", "# The TaggedCorpusReader likes to use the forward slash character '/'\n# as seperator between the word and part-of-speech tag (WordType).\nwordlist_filtered.to_csv(\"dictcc_moby.csv\",index=False,sep=\"/\",header=None)\n\nfrom nltk.corpus import TaggedCorpusReader\nfrom nltk.tokenize import WhitespaceTokenizer\nnltk_wordlist = TaggedCorpusReader(\"./\", \"dictcc_moby.csv\")", "NLTK\n\nUse NLTK to help us merge our wordlists", "# Our custom wordlist\nimport nltk\ncustom_cfd = nltk.ConditionalFreqDist((tag, word) for (word, tag) in nltk_wordlist.tagged_words() if len(word) < 9 and word.isalpha)\n\n# Brown Corpus\nimport nltk\nbrown_cfd = nltk.ConditionalFreqDist((tag, word) for (word, tag) in nltk.corpus.brown.tagged_words() if word.isalpha() and len(word) < 9)\n\n# Merge Nouns from all wordlists\nnouns = set(brown_cfd[\"NN\"]) | set(brown_cfd[\"NP\"]) | set(custom_cfd[\"NOUN\"])\n# Lowercase all words to remove duplicates\nnouns = set([noun.lower() for noun in nouns])\nprint(\"Total nouns count: \" + str(len(nouns)))\n\n# Merge Verbs from all wordlists\nverbs = set(brown_cfd[\"VB\"]) | set(brown_cfd[\"VBD\"]) | set(custom_cfd[\"VERB\"])\n# Lowercase all words to remove duplicates\nverbs = set([verb.lower() for verb in verbs])\nprint(\"Total verbs count: \" + str(len(verbs)))\n\n# Merge Adjectives from all wordlists\nadjectives = set(brown_cfd[\"JJ\"]) | set(custom_cfd[\"ADJ\"])\n# Lowercase all words to remove duplicates\nadjectives = set([adjective.lower() for adjective in adjectives])\nprint(\"Total adjectives count: \" + str(len(adjectives)))", "Make Some Placewords Magic Happen", "def populate_degrees(nouns):\n degrees = {}\n nouns_copy = nouns.copy()\n for latitude in range(60):\n for longtitude in range(190):\n degrees[(latitude,longtitude)] = nouns_copy.pop()\n return degrees\n\ndef populate_minutes(verbs):\n minutes = {}\n verbs_copy = verbs.copy()\n for latitude in range(60):\n for longtitude in range(60):\n minutes[(latitude,longtitude)] = verbs_copy.pop()\n return minutes\n\ndef populate_seconds(adjectives):\n seconds = {}\n adjectives_copy = adjectives.copy()\n for latitude in range(60):\n for longtitude in range(60):\n seconds[(latitude,longtitude)] = adjectives_copy.pop()\n return seconds\n\ndef populate_fractions(nouns):\n fractions = {}\n nouns_copy = nouns.copy()\n for latitude in range(10):\n for longtitude in range(10):\n fractions[(latitude,longtitude)] = nouns_copy.pop()\n return fractions\n\ndef placewords(degrees,minutes,seconds,fractions):\n result = []\n result.append(populate_degrees(nouns).get(degrees))\n result.append(populate_minutes(verbs).get(minutes))\n result.append(populate_seconds(adjectives).get(seconds))\n result.append(populate_fractions(nouns).get(fractions))\n return \"-\".join(result)\n\n# Located at 50°40'47.9\" N 10°55'55.2\" E\nilmenau_home = placewords((50,10),(40,55),(47,55),(9,2))\nprint(\"Feel free to stalk me at \" + ilmenau_home)", "TODO (wordlist filtering)\n\nWe want to remove stopwords from wordlist\n\nfrom nltk.corpus 
import stopwords\ndif = set(wordlist_filtered['Word']) - set(stopwords.words('english'))\nnames = nltk.corpus.names\nnames.fileids()\n- We want to remove all names and animals\n\n\nWe want to remove words that are difficult to spell\n\n\nWords with uncommon vowel duplicates (examples: [\"piing\", \"reeject\"])\n\n\nWe want to remove homonyms that are used in different parts of speech (example: saw (as verb) and saw (as noun))\n\n\nWe want to remove arcane and unusual words\n\n\n```\nimport nltk\ndef unusual_words(text):\n text_vocab = set(w.lower() for w in text if w.isalpha())\n english_vocab = set(w.lower() for w in nltk.corpus.words.words())\n unusual = text_vocab - english_vocab\n return sorted(unusual)\n```" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jaabberwocky/jaabberwocky.github.io
Python/Plotly/PlotlyTutorial.ipynb
mit
[ "import plotly as py\nfrom plotly import plotly\nimport pandas as pd\nimport numpy as np\nfrom plotly.graph_objs import Scatter, Layout, Data, Figure, Annotation, Scatter3d\nimport plotly.figure_factory as ff\n\npy.offline.init_notebook_mode(connected=True)", "Below we will examine the different aspects/objects that define a plot in Plotly. These are:\n\nData\nLayout\nFigure\n\nWe will first follow with a few examples to showcase Plotly.", "py.offline.iplot({\n \"data\": [Scatter(x=[1, 2, 3, 4], y=[4, 3, 2, 1])],\n \"layout\": Layout(title=\"hello world\")\n})\n\n# do a table\ndf = pd.read_csv(\"https://raw.githubusercontent.com/plotly/datasets/master/school_earnings.csv\")\n\ntable = ff.create_table(df)\npy.offline.iplot(table, filename='jupyter/table1')", "Under every graph is a JSON object, which is a dictionary like data structure. Simply changing values of some keywords and we get different plots.", "# follows trace - data - layout - figure semantic\n\ntrace1 = Scatter(\n x = [1,2,3],\n y = [4,5,6],\n marker = {'color':'red', 'symbol':104, 'size':\"10\"},\n mode = \"markers+lines\",\n text = ['one','two','three'],\n name = '1st Trace'\n)\n\ndata = Data([trace1])\n\nlayout = Layout(\n title=\"First Plot\",\n xaxis={'title':'x1'},\n \n yaxis ={'title':'x2'}\n)\n\nfigure=Figure(data=data, layout=layout)\npy.offline.iplot(figure)\n\ndf = pd.read_csv('https://raw.githubusercontent.com/yankev/test/master/life-expectancy-per-GDP-2007.csv')\namericas = df[(df.continent=='Americas')]\neurope = df[(df.continent=='Europe')]\n\ntrace_comp0 = Scatter(\n x = americas.gdp_percap,\n y=americas.life_exp,\n mode='markers',\n marker=dict(size = 12,\n line = dict(width=1),\n color=\"navy\"),\n name = \"Americas\",\n text=americas.country,\n)\n\ntrace_comp1 = Scatter(\n x = europe.gdp_percap,\n y=europe.life_exp,\n mode='markers',\n marker=dict(size = 12,\n line = dict(width=1),\n color=\"orange\"),\n name = \"Europe\",\n text=europe.country,\n)\n\ndata = [trace_comp0, trace_comp1]\n\nlayout = Layout(\n title=\"YOUR MUM\", # sorry\n hovermode=\"closest\",\n xaxis=dict(\n title='GDP per capita (2000 dollars)',\n ticklen=5,\n zeroline=False,\n gridwidth=2,\n ),\n yaxis=dict(\n title=\"Life expectancy (years)\"\n )\n\n)\n\nfig = Figure(data=data, layout=layout)\npy.offline.iplot(fig)", "Data\nWe see that data is actually a list object in Python. Data will actually contain all the traces that you wish to plot. Now the question may be, what is a trace? A trace is just the name we give a collection of data and the specifications of which we want that data plotted. 
Notice that a trace will also be an object itself, and these will be named according to how you want the data displayed on the plotting surface.", "# generate data\nx = np.linspace(0,np.pi*8,100)\ny = np.sin(x)\nz = np.cos(x)\n\nlayout = Layout(\n title=\"My First Plotly Graph\",\n xaxis = dict(\n title=\"x\"),\n yaxis = dict(title=\"sin(x)\")\n)\n\ntrace1 = Scatter(\n x = x,\n y = y,\n mode = \"lines\",\n marker = dict(\n size=8,\n color=\"navy\"\n ),\n name=\"Sin(x)\"\n)\n\ntrace2 = Scatter(\n x = x,\n y = z,\n mode = \"markers+lines\",\n marker = dict(\n size=8,\n color=\"red\"\n ),\n name=\"Cos(x)\",\n opacity=0.5\n)\n\n# load data and fig with nec\ndata = Data([trace1,trace2])\nfig = Figure(data=data, layout=layout)\n\n#plot\npy.offline.iplot(fig)\n\n# look at hover text\n\nx = np.arange(1,3.2,0.2)\ny = 6*np.sin(x)\n\nlayout = Layout(\n title=\"My Second Plotly Graph\",\n xaxis = dict(\n title=\"x\"),\n yaxis = dict(title=\"6 * sin(x)\")\n)\n\ntrace1 = Scatter(\n x=[1,2,3], \n y=[4,5,6], \n marker={'color': 'red', 'symbol': 104, 'size': \"10\"}, \n mode=\"markers+lines\", \n text=[\"one\",\"two\",\"three\"],\nname=\"first trace\")\n\ntrace2 = Scatter(x=x, \n y=y, \n marker={'color': 'blue', 'symbol': 'star', 'size': 10}, \n mode='markers', \n name='2nd trace')\n\ndata = Data([trace1,trace2])\nfig = Figure(data=data,layout=layout)\n\npy.offline.iplot(fig)", "Layout\nThe Layout object will define the look of the plot, and plot features which are unrelated to the data. So we will be able to change things like the title, axis titles, spacing, font and even draw shapes on top of your plot!", "layout", "Annotations\nWe added a plot title as well as titles for all the axes. For fun we could add some text annotation as well in order to indicate the maximum point that's been plotted on the current plotting surface.", "# highest point\nlayout.update(dict(\n annotations=[Annotation(\n text=\"Highest Point\", \n x=3, \n y=6)]\n)\n )\npy.offline.iplot(Figure(data=data, layout=layout), filename='pyguide_4')\n\n#lowest point\nlayout.update(dict(\nannotations = [Annotation(\ntext = \"lowest point\",\nx=1,\ny=4)]))\n\npy.offline.iplot(Figure(data=data,layout=layout))", "Shapes\nLet's add a rectangular block to highlight the section where trace 1 is above trace2.", "layout.update(dict(\n annotations=[Annotation(\n text=\"Highest Point\", \n x=3, \n y=6)],\n shapes = [\n # 1st highlight during Feb 4 - Feb 6\n {\n 'type': 'rect',\n # x-reference is assigned to the x-values\n 'xref': 'x',\n # y-reference is assigned to the plot paper [0,1]\n 'yref': 'y',\n 'x0': '1',\n 'y0': 0,\n 'x1': '2',\n 'y1': 7,\n 'fillcolor': '#d3d3d3',\n 'opacity': 0.2,\n 'line': {\n 'width': 0,\n }\n }]\n)\n )\npy.offline.iplot(Figure(data=data, layout=layout), filename='pyguide_4')\n\n# plot scatter with color\n\nx = np.random.randint(0,100,100)\ny = [x + np.random.randint(-100,100) for x in x]\nz = np.random.randint(0,3,100)\n\nlayout = Layout(\n title = \"Color Scatter Plot\",\n xaxis = dict(title=\"x\"),\n yaxis = dict(title=\"y\")\n)\n\ntrace1 = Scatter(\n x = x,\n y = y,\n mode=\"markers\",\n marker=dict(\n size = 12,\n color = z,\n colorscale = \"heatmap-discrete-colorscale\",\n showscale=True\n )\n)\n\ndata = Data([trace1])\nfig = Figure(data=data,layout=layout)\n\npy.offline.iplot(fig)\n\n# a better implementation would be to use different traces for different colors\n\ndf = pd.DataFrame({\n 'x':x,\n 'y':y,\n 'z':z\n})\ndf.z.value_counts()\n\nlayout = Layout(\n title = \"Color Scatter Plot (Improved)\",\n xaxis = 
dict(title=\"x\"),\n yaxis = dict(title=\"y\")\n)\n\ntrace1 = Scatter(\n x = df.query('z==0')['x'],\n y = df.query('z==0')['y'],\n mode=\"markers\",\n marker=dict(\n size = 12,\n color = \"orange\",\n ),\n name = \"Z = 0\"\n)\n \ntrace2 = Scatter(\n x = df.query('z==1')['x'],\n y = df.query('z==1')['y'],\n mode=\"markers\",\n marker=dict(\n size = 12,\n color = \"red\",\n ),\n name=\"Z = 1\"\n)\n\ntrace3 = Scatter(\n x = df.query('z==2')['x'],\n y = df.query('z==2')['y'],\n mode=\"markers\",\n marker=dict(\n size = 12,\n color = \"blue\",\n ),\n name=\"Z = 2\"\n)\n\ndata = Data([trace1, trace2,trace3])\nfig = Figure(data=data,layout=layout)\n\npy.offline.iplot(fig)\n\n# 3d scatter plot\n\nx = np.random.randint(0,100,100)\ny = np.random.randint(0,100,100)\nz = np.random.randint(0,10,100)\n\nlayout = Layout(\n title=\"3d Scatter Plot\",\n xaxis = dict(\n title = \"X\"\n )\n)\n\ntrace0 = Scatter3d(\n x=x,\n y=y,\n z=z,\n mode=\"markers\",\n marker = dict(\n size=6,\n color=z,\n colorscale=\"Plasma\",\n opacity=0.6\n )\n)\n\ndata = Data([trace0])\n\nfig = Figure(data=data, layout=layout)\n\npy.offline.iplot(fig)\n\n# 3d bubble charts using pokemon data\n# URL: https://www.kaggle.com/rounakbanik/pokemon/data\n\ndataset = pd.read_csv(\"pokemon.csv\")\ndataset.dtypes\n\nlayout = Layout(\n title=\"Pokemon!\",\n autosize = False,\n width= 1000,\n height= 1000,\n scene = dict(\n zaxis=dict(title=\"Attack\"),\n yaxis=dict(title=\"Defense\"),\n xaxis=dict(title=\"Type 1 Class.\")\n )\n)\n\ntrace0 = Scatter3d(\n z = dataset.attack,\n y = dataset.defense,\n x = dataset.type1,\n text = dataset.name,\n mode = \"markers\",\n marker = dict(\n size = dataset.weight_kg/10,\n opacity = 0.5,\n color = dataset.hp,\n colorscale = 'Viridis',\n showscale=True,\n colorbar=dict(title=\"HP\")\n )\n)\n\ndata = Data([trace0])\n\nfig = Figure(data=data, layout=layout)\n\npy.offline.iplot(fig)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
spacecowboy/article-annriskgroups-source
AnnGroups.ipynb
gpl-3.0
[ "AnnGroups\nThis is just a test script to verify that the ANN code works as expected. It also serves as an example\nfor the usage.\nIt is NOT used for results reported in the article.", "# import stuffs\n%matplotlib inline\nimport numpy as np\nimport pandas as pd\nfrom pyplotthemes import get_savefig, classictheme as plt\nplt.latex = True", "Load some data", "from datasets import get_pbc\n\nd = get_pbc(prints=True, norm_in=True, norm_out=False)\n\ndurcol = d.columns[0]\neventcol = d.columns[1]\n\nif np.any(d[durcol] < 0):\n raise ValueError(\"Negative times encountered\")\n\n# Sort the data before training - handled by ensemble\n#d.sort(d.columns[0], inplace=True)\n\n# Example: d.iloc[:, :2] for times, events\nd", "Create an ANN model\nWith all correct parameters, ensemble settings and such.", "import ann\nfrom classensemble import ClassEnsemble\n\nmingroup = int(0.25 * d.shape[0])\n\ndef get_net(func=ann.geneticnetwork.FITNESS_SURV_KAPLAN_MIN):\n hidden_count = 10\n outcount = 2\n l = (d.shape[1] - 2) + hidden_count + outcount + 1\n\n net = ann.geneticnetwork((d.shape[1] - 2), hidden_count, outcount)\n net.fitness_function = func\n net.mingroup = mingroup\n # Be explicit here even though I changed the defaults\n net.connection_mutation_chance = 0.0\n net.activation_mutation_chance = 0\n # Some other values\n net.crossover_method = net.CROSSOVER_UNIFORM\n net.selection_method = net.SELECTION_TOURNAMENT\n net.population_size = 100\n net.generations = 1000\n net.weight_mutation_chance = 0.15\n net.dropout_hidden_probability = 0.5\n net.dropout_input_probability = 0.8\n\n\n ann.utils.connect_feedforward(net, [5, 5], hidden_act=net.TANH, out_act=net.SOFTMAX)\n #c = net.connections.reshape((l, l))\n #c[-outcount:, :((d.shape[1] - 2) + hidden_count)] = 1\n #net.connections = c.ravel()\n \n return net\n\nnet = get_net()\nl = (d.shape[1] - 2) + net.hidden_count + 2 + 1\nprint(net.connections.reshape((l, l)))\n\nhnets = []\nlnets = []\n\nnetcount = 2\nfor i in range(netcount):\n if i % 2:\n n = get_net(ann.geneticnetwork.FITNESS_SURV_KAPLAN_MIN)\n hnets.append(n)\n else:\n n = get_net(ann.geneticnetwork.FITNESS_SURV_KAPLAN_MAX)\n lnets.append(n)\n \n\ne = ClassEnsemble(hnets, lnets)", "Train the ANNs\nAnd print groupings on training data.", "e.fit(d, durcol, eventcol)\n\n# grouplabels = e.predict_classes\ngrouplabels, mems = e.label_data(d)\nfor l, m in mems.items():\n print(\"Group\", l, \"has\", len(m), \"members\")", "Plot grouping", "from lifelines.plotting import add_at_risk_counts\nfrom lifelines.estimation import KaplanMeierFitter\nfrom lifelines.estimation import median_survival_times\n\nplt.figure()\n\nfitters = []\nfor g in ['high', 'mid', 'low']:\n kmf = KaplanMeierFitter()\n fitters.append(kmf)\n members = grouplabels == g\n \n kmf.fit(d.loc[members, durcol],\n d.loc[members, eventcol],\n label='{}'.format(g))\n kmf.plot(ax=plt.gca())#, color=plt.colors[mi])\n print(\"End survival rate for\", g, \":\",kmf.survival_function_.iloc[-1, 0])\n if kmf.survival_function_.iloc[-1, 0] <= 0.5:\n print(\"Median survival for\", g, \":\",\n median_survival_times(kmf.survival_function_))\n \nplt.legend(loc='best', framealpha=0.1) \n\nplt.ylim((0, 1))\nadd_at_risk_counts(*fitters)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kemerelab/NeuroHMM
ModelSelection.ipynb
mit
[ "ModelSelection.ipynb\nChoosing the number of states and a suitable timescale for hidden Markov models\nOne of the challenges associated with using hidden Markov models is specifying the correct model. For example, how many hidden states should the model have? At what timescale should we bin our observations? How much data do we need in order to train an effective/useful/representative model?\nOne possibility (which is conceptually very appealing) is to use a nonparametric Bayesian extension to the HMM, the HDP-HMM (hierarchical Dirichlet process hidden Markov model), in which the number of states can be directly inferred from the data, and moreover, where the number of states are allowed to grow as we obtain more and more data.\nFortunately, even if we choose to use a simple HMM, model selection is perhaps not as important as one might at first think. More specifically, we will show that for a wide range of model states, and for a wide range of timescales, the HMM should return plausible and usable models, so that we can use them to learn something about the data even if we don't have a good idea of what the model parameters should be.\nNevertheless, shifting over to the HDP-HMMs and especially to the HDP-HSMMs (semi-Markov models) where state durations are explicitly specified or learned, is certainly something that I would highly recommend.\nTODO: Take a look at e.g. https://www.cs.cmu.edu/~ggordon/siddiqi-gordon-moore.fast-hmm.pdf : fast HMM (order of magnitude faster than Baum-Welch) and better model fit: V-STACS.\nImport packages and initialization", "import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport sys\n\nfrom IPython.display import display, clear_output\n\nsys.path.insert(0, 'helpers')\n\nfrom efunctions import * # load my helper function(s) to save pdf figures, etc.\nfrom hc3 import load_data, get_sessions\nfrom hmmlearn import hmm # see https://github.com/ckemere/hmmlearn\nimport klabtools as klab\nimport seqtools as sq\n\nimport importlib\n\nimportlib.reload(sq) # reload module here only while prototyping...\nimportlib.reload(klab) # reload module here only while prototyping...\n\n%matplotlib inline\n\nsns.set(rc={'figure.figsize': (12, 4),'lines.linewidth': 1.5})\nsns.set_style(\"white\")", "Load data\nHere we consider lin2 data for gor01 on the first recording day (6-7-2006), since this session had the most units (91) of all the gor01 sessions, and lin2 has position data, whereas lin1 only has partial position data.", "datadirs = ['/home/etienne/Dropbox/neoReader/Data',\n 'C:/etienne/Dropbox/neoReader/Data',\n '/Users/etienne/Dropbox/neoReader/Data']\n\nfileroot = next( (dir for dir in datadirs if os.path.isdir(dir)), None)\n\nanimal = 'gor01'; month,day = (6,7); session = '16-40-19' # 91 units\n\nspikes = load_data(fileroot=fileroot, datatype='spikes',animal=animal, session=session, month=month, day=day, fs=32552, verbose=False)\neeg = load_data(fileroot=fileroot, datatype='eeg', animal=animal, session=session, month=month, day=day,channels=[0,1,2], fs=1252, starttime=0, verbose=False)\nposdf = load_data(fileroot=fileroot, datatype='pos',animal=animal, session=session, month=month, day=day, verbose=False)\nspeed = klab.get_smooth_speed(posdf,fs=60,th=8,cutoff=0.5,showfig=False,verbose=False)", "Find most appropriate number of states using cross validation\nHere we split the data into training, validation, and test sets. 
We monitor the average log probability per sequence (normalized by length) for each of these sets, and we use the validation set to choose the number of model states $m$. \nNote to self: I should re-write my data splitting routines to allow me to extract as many subsets as I want, so that I can do k-fold cross validation.", "## bin ALL spikes\nds = 0.125 # bin spikes into 125 ms bins (theta-cycle inspired)\nbinned_spikes_all = klab.bin_spikes(spikes.data, ds=ds, fs=spikes.samprate, verbose=True)\n\n## identify boundaries for running (active) epochs and then bin those observations into separate sequences:\nrunbdries = klab.get_boundaries_from_bins(eeg.samprate,bins=speed.active_bins,bins_fs=60)\nbinned_spikes_bvr = klab.bin_spikes(spikes.data, fs=spikes.samprate, boundaries=runbdries, boundaries_fs=eeg.samprate, ds=ds)\n\n## stack data for hmmlearn:\nseq_stk_bvr = sq.data_stack(binned_spikes_bvr, verbose=True)\nseq_stk_all = sq.data_stack(binned_spikes_all, verbose=True)\n\n## split data into train, test, and validation sets:\ntr_b,vl_b,ts_b = sq.data_split(seq_stk_bvr, tr=60, vl=20, ts=20, randomseed = 0, verbose=False)\n\nSmax = 40\nS = np.arange(start=5,step=1,stop=Smax+1)\n\ntr_ll = []\nvl_ll = []\nts_ll = []\n\nfor num_states in S:\n clear_output(wait=True)\n print('Training and evaluating {}-state hmm'.format(num_states))\n sys.stdout.flush()\n myhmm = sq.hmm_train(tr_b, num_states=num_states, n_iter=30, verbose=False)\n tr_ll.append( (np.array(list(sq.hmm_eval(myhmm, tr_b)))/tr_b.sequence_lengths ).mean())\n vl_ll.append( (np.array(list(sq.hmm_eval(myhmm, vl_b)))/vl_b.sequence_lengths ).mean())\n ts_ll.append( (np.array(list(sq.hmm_eval(myhmm, ts_b)))/ts_b.sequence_lengths ).mean())\n\nclear_output(wait=True)\nprint('Done!')\nsys.stdout.flush()\n\nnum_states = 35\n\nfig = plt.figure(1, figsize=(12, 4))\nax = fig.add_subplot(111)\n \nax.annotate('plateau at approx ' + str(num_states), xy=(num_states, -38.5), xycoords='data',\n xytext=(-140, -30), textcoords='offset points',\n arrowprops=dict(arrowstyle=\"->\",\n connectionstyle=\"angle3,angleA=0,angleB=-90\"),\n )\n\nax.plot(S, tr_ll, lw=1.5, label='train')\nax.plot(S, vl_ll, lw=1.5, label='validation')\nax.plot(S, ts_ll, lw=1.5, label='test')\nax.legend(loc=2)\nax.set_xlabel('number of states')\nax.set_ylabel('normalized (to single time bin) log likelihood')\n\nax.axhspan(-38.5, -37.5, facecolor='0.75', alpha=0.25)\nax.set_xlim([5, S[-1]])", "Remarks: We see that the training error is decreasing (equivalently, the training log probability is increasing) over the entire range of states considered. Indeed, we have computed this for a much larger number of states, and the training error keeps on decreasing, whereas both the validation and test errors reach a plateau at around 30 or 35 states.\nAs expected, the training set has the largest log probability (best agreement with model), but we might expect the test and validation sets to be about the same. 
For different subsets of our data this is indeed the case, but the more important thing in model selection is that the validation and test sets should have the same shape or behavior, so that we can choose an appropriate model parameter.\nHowever, if we wanted to predict what our log probability for any given sequence would be, then we probably need a little bit more data, for which the test and validation errors should agree more.\nFinally, we have also repeated the above analysis when we restricted ourselves to only using place cells in the model, and although the log probabilities were uniformly increased to around $-7$ or $-8$, the overall shape and characteristic behavior were left unchanged, so that model selection could be done either way.\nPlace field visualization\nPreviously we have only considered varying the number of model states for model selection, but of course choosing an appropriate timescale is perhaps just as important. We know, for example, that if our timescale is too short (or fast), then most of the bins will be empty, making it difficult for the model to learn appropriate representations and transitions. On the other hand, if our timescale is too coarse (or long or slow) then we will certainly miss SWR events, and we may even miss some behavioral events as well.\nSince theta is around 8 Hz for rodents, it might make sense to consider a timescale of 125 ms or even 62.5 ms for behaviorally relevant events, so that we can hope to capture half or full theta cycles in the observations.\nOne might also reasonably ask: \"even though the log probability has been optimized, how do we know that the learned model makes any sense? That is, that the model is plausible and useful?\" One way to try to answer this question is to again consider the place fields that we learn from the data. Place field visualization is considered in more detail in StateClustering.ipynb, but here we simply want to see if we get plausible, behaviorally relevant state representations out when choosing different numbers of states, and different timescales, for example.\nPlace fields for varying velocity thresholds\nWe train our models on RUN data, so we might want to know how sensitive our model is to a specific velocity threshold. 
Using a smaller threshold will include more quiescent data, and using a larger threshold will exclude more data from being used to learn in the model.", "from placefieldviz import hmmplacefieldposviz\n\nnum_states = 35\nds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)\nvth = 8 # units/sec velocity threshold for place fields\n\n#state_pos, peakorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth)\n\nfig, axes = plt.subplots(4, 3, figsize=(17, 11))\naxes = [item for sublist in axes for item in sublist]\n\nfor ii, ax in enumerate(axes):\n vth = ii+1\n state_pos, peakorder, stateorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, verbose=False)\n ax.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')\n #ax.set_xlabel('position bin')\n ax.set_ylabel('state')\n ax.set_xticklabels([])\n ax.set_yticklabels([])\n ax.set_title('learned place fields; RUN > ' + str(vth), y=1.02)\n ax.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=1)\n ax.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=1)\n ax.axis('tight')\n \n", "Remarks: As can be expected, with low velocity thresholds, we see an overrepresentation of the reward locations, and only a relatively small number of states that are dedicated to encoding the position along the track. \nRecall that the track was shortened halfway through the recording session. Here, the reward locations for the longer track (first half of the experiment) and shorter track (second half of the experiment) are shown by the ends of the dashed lines.\nWe notice that at some point, the movement velocity (for fixed state evolution) appears to be constant, and that at e.g. 8 units/sec we see a clear bifurcation in the place fields, so that states encode both positions before and after the track was shortened.\nPlace fields for varying number of states\nNext, we take a look at how the place fields are affected by changing the number of states in the model.", "from placefieldviz import hmmplacefieldposviz\n\nnum_states = 35\nds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)\nvth = 8 # units/sec velocity threshold for place fields\n\n#state_pos, peakorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth)\n\nfig, axes = plt.subplots(4, 3, figsize=(17, 11))\naxes = [item for sublist in axes for item in sublist]\n\nfor ii, ax in enumerate(axes):\n num_states = 5 + ii*5\n state_pos, peakorder, stateorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True)\n ax.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')\n #ax.set_xlabel('position bin')\n ax.set_ylabel('state')\n ax.set_xticklabels([])\n ax.set_yticklabels([])\n ax.set_title('learned place fields; RUN > ' + str(vth) + '; m = ' + str(num_states), y=1.02)\n ax.axis('tight')\n \nsaveFigure('posterfigs/numstates.pdf')", "Remarks: First, we see that independent of the number of states, the model captures the place field like nature of the underlying states very well. Furthermore, the bifurcation of some states to represent both the first and second halves of the experiment becomes clear with as few as 15 states, but interestingly this bifurcation fades as we add more states to the model, since there is enough flexibility to encode those shifting positions by their own states. 
\nWarning: However, in the case where we have many states so that the states are no longer bimodal, the strict linear ordering that we impose (ordering by peak firing location) can easily mask the underlying structural change in the environment.\nPlace fields for varying timescales\nNext we investigate how the place fields are affected by changing the timescale of our observations. First, we consider timescales in the range of 31.25 ms to 375 ms, in increments of 31.25 ms.", "from placefieldviz import hmmplacefieldposviz\n\nnum_states = 35\nds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)\nvth = 8 # units/sec velocity threshold for place fields\n\n#state_pos, peakorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth)\n\nfig, axes = plt.subplots(4, 3, figsize=(17, 11))\naxes = [item for sublist in axes for item in sublist]\n\nfor ii, ax in enumerate(axes):\n ds = (ii+1)*0.03125\n state_pos, peakorder, stateorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True)\n ax.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')\n #ax.set_xlabel('position bin')\n ax.set_ylabel('state')\n ax.set_xticklabels([])\n ax.set_yticklabels([])\n ax.set_title('learned place fields; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)\n ax.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=1)\n ax.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=1)\n ax.axis('tight')", "Remarks: We notice that we clearly see the bimodal place fields when the timescales are sufficiently small, with a particularly clear example at 62.5 ms, for example. Larger timescales tend to focus on the longer track piece, with a single trajectory being skewed away towards the shorter track piece.\nNext we consider timescales in increments of 62.5 ms.", "from placefieldviz import hmmplacefieldposviz\n\nnum_states = 35\nds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)\nvth = 8 # units/sec velocity threshold for place fields\n\n#state_pos, peakorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth)\n\nfig, axes = plt.subplots(4, 3, figsize=(17, 11))\naxes = [item for sublist in axes for item in sublist]\n\nfor ii, ax in enumerate(axes):\n ds = (ii+1)*0.0625\n state_pos, peakorder, stateorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True)\n ax.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')\n #ax.set_xlabel('position bin')\n ax.set_ylabel('state')\n ax.set_xticklabels([])\n ax.set_yticklabels([])\n ax.set_title('learned place fields; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)\n ax.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=1)\n ax.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=1)\n ax.axis('tight')", "Remarks: Again, we see that with larger timescales, the spatial resolution becomes more coarse, because we don't have that sufficiently many observations, and the modes of the place fields tend to lie close to those associated wit the longer track.\nSplitting the experimment in half\nJust as a confirmation of what we've seen so far, we next consider the place fields obtained when we split the experiment into its first and second halves, correponding to when the track was longer, and 
shorter, respectively.", "from placefieldviz import hmmplacefieldposviz\n\nnum_states = 25\nds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)\nvth = 8 # units/sec velocity threshold for place fields\n\nstate_pos_b, peakorder_b, stateorder_b = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='both')\nstate_pos_1, peakorder_1, stateorder_1 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='first')\nstate_pos_2, peakorder_2, stateorder_2 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='second')\n\nfig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))\n\nax1.matshow(state_pos_b[peakorder_b,:], interpolation='none', cmap='OrRd')\nax1.set_ylabel('state')\nax1.set_xticklabels([])\nax1.set_yticklabels([])\nax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)\nax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)\nax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)\nax1.axis('tight')\n\nax2.matshow(state_pos_1[peakorder_1,:], interpolation='none', cmap='OrRd')\nax2.set_ylabel('state')\nax2.set_xticklabels([])\nax2.set_yticklabels([])\nax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)\nax2.plot([13, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)\nax2.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)\nax2.axis('tight')\n\nax3.matshow(state_pos_2[peakorder_2,:], interpolation='none', cmap='OrRd')\nax3.set_ylabel('state')\nax3.set_xticklabels([])\nax3.set_yticklabels([])\nax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)\nax3.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)\nax3.plot([7, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)\nax3.axis('tight')\n\nsaveFigure('posterfigs/expsplit.pdf')", "Remarks: We clearly see the bimodal place fields when we use all of the data, and we see the unimodal place fields emerge as we focus on either the first, or the second half of the experiment.\nNotice that the reward locations are more concentrated, but that the velocity (with fixed state progression) is roughly constant.\nHowever, if we increase the number of states:", "from placefieldviz import hmmplacefieldposviz\n\nnum_states = 45\nds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)\nvth = 8 # units/sec velocity threshold for place fields\n\nstate_pos_b, peakorder_b, stateorder_b = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='both')\nstate_pos_1, peakorder_1, stateorder_1 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='first')\nstate_pos_2, peakorder_2, stateorder_2 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='second')\n\nfig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))\n\nax1.matshow(state_pos_b[peakorder_b,:], interpolation='none', 
cmap='OrRd')\nax1.set_ylabel('state')\nax1.set_xticklabels([])\nax1.set_yticklabels([])\nax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)\nax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)\nax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)\nax1.axis('tight')\n\nax2.matshow(state_pos_1[peakorder_1,:], interpolation='none', cmap='OrRd')\nax2.set_ylabel('state')\nax2.set_xticklabels([])\nax2.set_yticklabels([])\nax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)\nax2.plot([13, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)\nax2.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)\nax2.axis('tight')\n\nax3.matshow(state_pos_2[peakorder_2,:], interpolation='none', cmap='OrRd')\nax3.set_ylabel('state')\nax3.set_xticklabels([])\nax3.set_yticklabels([])\nax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)\nax3.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)\nax3.plot([7, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)\nax3.axis('tight')", "then we stat to see the emergence of the S-shaped place field progressions again, indicating that the reward locations are overexpressed by several different states.\nThis observation is even more pronounced if we increase the number of states further:", "from placefieldviz import hmmplacefieldposviz\n\nnum_states = 100\nds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)\nvth = 8 # units/sec velocity threshold for place fields\n\nstate_pos_b, peakorder_b, stateorder_b = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='both')\nstate_pos_1, peakorder_1, stateorder_1 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='first')\nstate_pos_2, peakorder_2, stateorder_2 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='second')\n\nfig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))\n\nax1.matshow(state_pos_b[peakorder_b,:], interpolation='none', cmap='OrRd')\nax1.set_ylabel('state')\nax1.set_xticklabels([])\nax1.set_yticklabels([])\nax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)\nax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)\nax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)\nax1.axis('tight')\n\nax2.matshow(state_pos_1[peakorder_1,:], interpolation='none', cmap='OrRd')\nax2.set_ylabel('state')\nax2.set_xticklabels([])\nax2.set_yticklabels([])\nax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)\nax2.plot([13, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)\nax2.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)\nax2.axis('tight')\n\nax2.add_patch(\n patches.Rectangle(\n (-1, 0), # (x,y)\n 8, # width\n num_states, # height\n hatch='/',\n facecolor='w',\n alpha=0.5\n )\n)\n\nax2.add_patch(\n patches.Rectangle(\n (41, 0), # (x,y)\n 11, # width\n num_states, # height\n hatch='/',\n facecolor='w',\n alpha=0.5\n 
)\n)\n\nax3.matshow(state_pos_2[peakorder_2,:], interpolation='none', cmap='OrRd')\nax3.set_ylabel('state')\nax3.set_xticklabels([])\nax3.set_yticklabels([])\nax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)\nax3.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)\nax3.plot([7, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)\nax3.axis('tight')\n\nax3.add_patch(\n patches.Rectangle(\n (-1, 0), # (x,y)\n 14, # width\n num_states, # height\n hatch='/',\n facecolor='w',\n alpha=0.5\n )\n)\n\nax3.add_patch(\n patches.Rectangle(\n (35, 0), # (x,y)\n 15, # width\n num_states, # height\n hatch='/',\n facecolor='w',\n alpha=0.5\n )\n)", "With enough expressiveness in the number of states, we see the S-shaped curve reappear, which suggests an overexpression of the reward locations, which is consistent with what we see with place cells in animals.", "import matplotlib.patches as patches\n\nfig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))\n\nax1.matshow(state_pos_b[stateorder_b,:], interpolation='none', cmap='OrRd')\nax1.set_ylabel('state')\nax1.set_xticklabels([])\nax1.set_yticklabels([])\nax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)\nax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)\nax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)\nax1.axis('tight')\n\nax2.matshow(state_pos_1[stateorder_1,:], interpolation='none', cmap='OrRd')\nax2.set_ylabel('state')\nax2.set_xticklabels([])\nax2.set_yticklabels([])\nax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)\nax2.plot([13, 13], [0, num_states], color='gray', linestyle='dashed', linewidth=1)\nax2.plot([7, 7], [0, num_states], color='k', linestyle='dashed', linewidth=2)\nax2.plot([35, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)\nax2.plot([41, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)\nax2.axis('tight')\n\nax2.add_patch(\n patches.Rectangle(\n (-1, 0), # (x,y)\n 8, # width\n num_states, # height\n hatch='/',\n facecolor='w',\n alpha=0.5\n )\n)\n\nax2.add_patch(\n patches.Rectangle(\n (41, 0), # (x,y)\n 11, # width\n num_states, # height\n hatch='/',\n facecolor='w',\n alpha=0.5\n )\n)\n\nax3.matshow(state_pos_2[stateorder_2,:], interpolation='none', cmap='OrRd')\nax3.set_ylabel('state')\nax3.set_xticklabels([])\nax3.set_yticklabels([])\nax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)\nax3.plot([13, 13], [0, num_states], color='k', linestyle='dashed', linewidth=2)\nax3.plot([7, 7], [0, num_states], color='gray', linestyle='dashed', linewidth=1)\nax3.plot([35, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)\nax3.plot([41, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)\nax3.axis('tight')\n\nax3.add_patch(\n patches.Rectangle(\n (-1, 0), # (x,y)\n 14, # width\n num_states, # height\n hatch='/',\n facecolor='w',\n alpha=0.5\n )\n)\n\nax3.add_patch(\n patches.Rectangle(\n (35, 0), # (x,y)\n 15, # width\n num_states, # height\n hatch='/',\n facecolor='w',\n alpha=0.5\n )\n)\n\nfig.suptitle('State ordering not by peak location, but by the state transition probability matrix', y=1.08, fontsize=14)\n\nsaveFigure('posterfigs/zigzag.pdf')\n\nstate_pos_b[state_pos_b < 
np.transpose(np.tile(state_pos_b.max(axis=1),[state_pos_b.shape[1],1]))] = 0\nstate_pos_b[state_pos_b == np.transpose(np.tile(state_pos_b.max(axis=1),[state_pos_b.shape[1],1]))] = 1\nstate_pos_1[state_pos_1 < np.transpose(np.tile(state_pos_1.max(axis=1),[state_pos_1.shape[1],1]))] = 0\nstate_pos_1[state_pos_1 == np.transpose(np.tile(state_pos_1.max(axis=1),[state_pos_1.shape[1],1]))] = 1\nstate_pos_2[state_pos_2 < np.transpose(np.tile(state_pos_2.max(axis=1),[state_pos_2.shape[1],1]))] = 0\nstate_pos_2[state_pos_2 == np.transpose(np.tile(state_pos_2.max(axis=1),[state_pos_2.shape[1],1]))] = 1\n\nfig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))\n\nax1.matshow(state_pos_b[peakorder_b,:], interpolation='none', cmap='OrRd')\nax1.set_ylabel('state')\nax1.set_xticklabels([])\nax1.set_yticklabels([])\nax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)\nax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)\nax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)\nax1.axis('tight')\n\nax2.matshow(state_pos_1[peakorder_1,:], interpolation='none', cmap='OrRd')\nax2.set_ylabel('state')\nax2.set_xticklabels([])\nax2.set_yticklabels([])\nax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)\nax2.plot([13, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)\nax2.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)\nax2.axis('tight')\n\nax3.matshow(state_pos_2[peakorder_2,:], interpolation='none', cmap='OrRd')\nax3.set_ylabel('state')\nax3.set_xticklabels([])\nax3.set_yticklabels([])\nax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)\nax3.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)\nax3.plot([7, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)\nax3.axis('tight')", "Discussion\nWe saw that we actually get meaningful place fields out of a wide range of model parameters, and that the model behaves in an expected, logical way when we add more states, or when we increase the timescale.\nAlthough using the plateau of the log probability on the validation set can be used as a principled, objective way to select the number of states, there are certainly other approaches too, and in particular I would like to pursue the nonparametric Bayesian alternatives.\nNevertheless, I think seeing how robust the learned representations are for such a wide variety of model parameters should give us confidence to use the model in new data sets." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
irsisyphus/machine-learning
5 Training and Ensemble.ipynb
apache-2.0
[ "Assignment 5\nThis assignment has weighting $1.5$.\nModel tuning and evaluation", "# Added version check for recent scikit-learn 0.18 checks\nfrom distutils.version import LooseVersion as Version\nfrom sklearn import __version__ as sklearn_version", "Dataset\nWe will use the Wisconsin breast cancer dataset for the following questions", "import pandas as pd\n\nwdbc_source = 'https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/wdbc.data'\n#wdbc_source = '../datasets/wdbc/wdbc.data'\n\ndf = pd.read_csv(wdbc_source, header=None)\n\nfrom sklearn.preprocessing import LabelEncoder\nX = df.loc[:, 2:].values\ny = df.loc[:, 1].values\nle = LabelEncoder()\ny = le.fit_transform(y)\nle.transform(['M', 'B'])\n\nif Version(sklearn_version) < '0.18':\n from sklearn.cross_validation import train_test_split\nelse:\n from sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = \\\n train_test_split(X, y, test_size=0.20, random_state=1)\n\nimport matplotlib.pyplot as plt\n%matplotlib inline", "K-fold validation (20 points)\nSomeone wrote the code below to conduct cross validation.\nDo you see anything wrong with it?\nAnd if so, correct the code and provide an explanation.", "import numpy as np\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.decomposition import PCA\nfrom sklearn.linear_model import Perceptron\nfrom sklearn.pipeline import Pipeline\n\nif Version(sklearn_version) < '0.18':\n from sklearn.cross_validation import StratifiedKFold\nelse:\n from sklearn.model_selection import StratifiedKFold\n\nscl = StandardScaler()\npca = PCA(n_components=2)\n# original: clf = Perceptron(random_state=1)\n\n# data preprocessing\nX_train_std = scl.fit_transform(X_train)\nX_test_std = scl.transform(X_test)\n\nX_train_pca = pca.fit_transform(X_train_std)\nX_test_pca = pca.transform(X_test_std)\n\n# compute the data indices for each fold\nif Version(sklearn_version) < '0.18':\n kfold = StratifiedKFold(y=y_train, \n n_folds=10,\n random_state=1)\nelse:\n kfold = StratifiedKFold(n_splits=10,\n random_state=1).split(X_train, y_train)\n\nnum_epochs = 2\nscores = [[] for i in range(num_epochs)]\n\n\nenumerate_kfold = list(enumerate(kfold))\n# new:\nclfs = [Perceptron(random_state=1) for i in range(len(enumerate_kfold))]\n\nfor epoch in range(num_epochs):\n for k, (train, test) in enumerate_kfold:\n \n# original:\n# clf.partial_fit(X_train_std[train], y_train[train], classes=np.unique(y_train))\n# score = clf.score(X_train_std[test], y_train[test])\n# scores.append(score)\n# new:\n clfs[k].partial_fit(X_train_pca[train], y_train[train], classes=np.unique(y_train))\n score = clfs[k].score(X_train_pca[test], y_train[test])\n scores[epoch].append(score)\n \n print('Epoch: %s, Fold: %s, Class dist.: %s, Acc: %.3f' % (epoch,\n k, \n np.bincount(y_train[train]),\n score))\n print('')\n\n# new:\nfor epoch in range(num_epochs):\n print('Epoch: %s, CV accuracy: %.3f +/- %.3f' % (epoch, np.mean(scores[epoch]), np.std(scores[epoch])))", "Answer\nProblems with the original code:\n\nUse partial fit for all folds, which results in dependent training and thus cumulative learning.<br>\nDue to problem 1, we the CV accuracy is incorrect, and not able to show score for every epoch. <br>\n\nThere are two major changes:\n\nWe create a list of Perceptrons, each corresponds to one fold. Within each fold, we run partial fit for a given number of epochs. <br>\nAlso, we create a list of accuracy scores, each corresponds to one epoch. 
Within each epoch, we append scores of all folds. <br>\n\nThere is one minor change:\n\nTo enable PCA, we change X_train_std into X_train_pca.<br>\n\nPrecision-recall curve (40 points)\nWe have plotted ROC (receiver operator characteristics) curve for the breast cancer dataset.\nPlot the precision-recall curve for the same data set using the same experimental setup.\nWhat similarities and differences you can find between ROC and precision-recall curves?\nYou can find more information about precision-recall curve online such as: http://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_curve.html\nAnswer", "from sklearn.metrics import roc_curve, precision_recall_curve, auc\nfrom scipy import interp\nfrom sklearn.linear_model import LogisticRegression\n\npipe_lr = Pipeline([('scl', StandardScaler()),\n ('pca', PCA(n_components=2)),\n ('clf', LogisticRegression(penalty='l2', \n random_state=0, \n C=100.0))])\n\n# intentionally use only 2 features to make the task harder and the curves more interesting\nX_train2 = X_train[:, [4, 14]]\nX_test2 = X_test[:, [4, 14]]\n\n\nif Version(sklearn_version) < '0.18':\n cv = StratifiedKFold(y_train, n_folds=3, random_state=1)\nelse:\n cv = list(StratifiedKFold(n_splits=3, random_state=1).split(X_train, y_train))\n\nfig = plt.figure(figsize=(7, 5))\n\n# **************************************\n# ROC - 2 Features\n# **************************************\nprint (\"ROC Curve - 2 Features\")\nmean_tpr = 0.0\nmean_fpr = np.linspace(0, 1, 100)\nall_tpr = []\n\nfor i, (train, test) in enumerate(cv):\n probas = pipe_lr.fit(X_train2[train],\n y_train[train]).predict_proba(X_train2[test])\n\n fpr, tpr, thresholds = roc_curve(y_train[test],\n probas[:, 1],\n pos_label=1)\n mean_tpr += interp(mean_fpr, fpr, tpr)\n #mean_tpr[0] = 0.0\n roc_auc = auc(fpr, tpr)\n plt.plot(fpr,\n tpr,\n lw=1,\n label='ROC fold %d (area = %0.2f)'\n % (i+1, roc_auc))\n\nmean_tpr /= len(cv)\nmean_tpr[0] = 0.0\nmean_tpr[-1] = 1.0\nmean_auc = auc(mean_fpr, mean_tpr)\nplt.plot(mean_fpr, mean_tpr, 'k--',\n label='mean ROC (area = %0.2f)' % mean_auc, lw=2)\n \nplt.plot([0, 1],\n [0, 1],\n linestyle='--',\n color=(0.6, 0.6, 0.6),\n label='random guessing')\n\nplt.plot([0, 0, 1],\n [0, 1, 1],\n lw=2,\n linestyle=':',\n color='black',\n label='perfect performance')\n\nplt.xlim([-0.05, 1.05])\nplt.ylim([-0.05, 1.05])\nplt.xlabel('false positive rate')\nplt.ylabel('true positive rate')\nplt.title('Receiver Operator Characteristic')\nplt.legend(loc='lower left', bbox_to_anchor=(1, 0))\n\nplt.tight_layout()\n# plt.savefig('./figures/roc.png', dpi=300)\nplt.show()\n\n# **************************************\n# ROC - All Features\n# **************************************\nprint (\"ROC Curve - All Features\")\nmean_tpr = 0.0\nmean_fpr = np.linspace(0, 1, 100)\nall_tpr = []\n\nfor i, (train, test) in enumerate(cv):\n probas = pipe_lr.fit(X_train[train],\n y_train[train]).predict_proba(X_train[test])\n\n fpr, tpr, thresholds = roc_curve(y_train[test],\n probas[:, 1],\n pos_label=1)\n mean_tpr += interp(mean_fpr, fpr, tpr)\n #mean_tpr[0] = 0.0\n roc_auc = auc(fpr, tpr)\n plt.plot(fpr,\n tpr,\n lw=1,\n label='ROC fold %d (area = %0.2f)'\n % (i+1, roc_auc))\n\nmean_tpr /= len(cv)\nmean_tpr[0] = 0.0\nmean_tpr[-1] = 1.0\nmean_auc = auc(mean_fpr, mean_tpr)\nplt.plot(mean_fpr, mean_tpr, 'k--',\n label='mean ROC (area = %0.2f)' % mean_auc, lw=2)\n\nplt.plot([0, 1],\n [0, 1],\n linestyle='--',\n color=(0.6, 0.6, 0.6),\n label='random guessing')\n\nplt.plot([0, 0, 1],\n [0, 1, 1],\n lw=2,\n 
linestyle=':',\n color='black',\n label='perfect performance')\n\nplt.xlim([-0.05, 1.05])\nplt.ylim([-0.05, 1.05])\nplt.xlabel('false positive rate')\nplt.ylabel('true positive rate')\nplt.title('Receiver Operator Characteristic')\nplt.legend(loc='lower left', bbox_to_anchor=(1, 0))\n\nplt.tight_layout()\n# plt.savefig('./figures/roc.png', dpi=300)\nplt.show()\n\n\n# **************************************\n# Precision-Recall - 2 Features\n# **************************************\nprint (\"Precision Recall Curve - 2 Features\")\nmean_pre = 0.0\nmean_rec = np.linspace(0, 1, 100)\nall_pre = []\n\nfor i, (train, test) in enumerate(cv):\n probas = pipe_lr.fit(X_train2[train],\n y_train[train]).predict_proba(X_train2[test])\n \n ## note that the return order is precision, recall\n pre, rec, thresholds = precision_recall_curve(y_train[test],\n probas[:, 1],\n pos_label=1)\n \n ## flip the recall and precison array\n mean_pre += interp(mean_rec, np.flipud(rec), np.flipud(pre))\n\n pr_auc = auc(rec, pre)\n plt.plot(rec,\n pre,\n lw=1,\n label='PR fold %d (area = %0.2f)'\n % (i+1, pr_auc))\n\nmean_pre /= len(cv)\nmean_auc = auc(mean_rec, mean_pre)\nplt.plot(mean_rec, mean_pre, 'k--',\n label='mean PR (area = %0.2f)' % mean_auc, lw=2)\n\n# random classifier: a line of Positive/(Positive+Negative)\n# here for simplicity, we set it to be Positive/(Positive+Negative) of fold 3\nplt.plot(mean_rec,\n [pre[0] for i in range(len(mean_rec))],\n linestyle='--', label='random guessing of fold 3', color=(0.6, 0.6, 0.6))\n\n# perfect performance: y = 1 for x in [0, 1]; when x = 1; down to the random guessing line\nplt.plot([0, 1, 1],\n [1, 1, pre[0]],\n lw=2,\n linestyle=':',\n color='black',\n label='perfect performance')\n\nplt.xlim([-0.05, 1.05])\nplt.ylim([-0.05, 1.05])\nplt.xlabel('recall')\nplt.ylabel('precision')\nplt.title('Precision-Recall')\nplt.legend(loc='lower left', bbox_to_anchor=(1, 0))\n\nplt.tight_layout()\n# plt.savefig('./figures/roc.png', dpi=300)\nplt.show()\n\n# **************************************\n# Precision-Recall - All Features\n# **************************************\nprint (\"Precision Recall Curve - All Features\")\nmean_pre = 0.0\nmean_rec = np.linspace(0, 1, 100)\nall_pre = []\n\nfor i, (train, test) in enumerate(cv):\n probas = pipe_lr.fit(X_train[train],\n y_train[train]).predict_proba(X_train[test])\n \n \n ## note that the return order is precision, recall\n pre, rec, thresholds = precision_recall_curve(y_train[test],\n probas[:, 1],\n pos_label=1)\n \n ## flip the recall and precison array\n mean_pre += interp(mean_rec, np.flipud(rec), np.flipud(pre))\n\n pr_auc = auc(rec, pre)\n plt.plot(rec,\n pre,\n lw=1,\n label='PR fold %d (area = %0.2f)'\n % (i+1, pr_auc))\n\nmean_pre /= len(cv)\nmean_auc = auc(mean_rec, mean_pre)\nplt.plot(mean_rec, mean_pre, 'k--',\n label='mean PR (area = %0.2f)' % mean_auc, lw=2)\n\n# random classifier: a line of Positive/(Positive+Negative)\n# here for simplicity, we set it to be Positive/(Positive+Negative) of fold 3\nplt.plot(mean_rec,\n [pre[0] for i in range(len(mean_rec))],\n linestyle='--', label='random guessing of fold 3', color=(0.6, 0.6, 0.6))\n\n# perfect performance: y = 1 for x in [0, 1]; when x = 1; down to the random guessing line\nplt.plot([0, 1, 1],\n [1, 1, pre[0]],\n lw=2,\n linestyle=':',\n color='black',\n label='perfect performance')\n\nplt.xlim([-0.05, 1.05])\nplt.ylim([-0.05, 1.05])\nplt.xlabel('recall')\nplt.ylabel('precision')\nplt.title('Precision-Recall')\nplt.legend(loc='lower left', bbox_to_anchor=(1, 
0))\n\nplt.tight_layout()\n# plt.savefig('./figures/roc.png', dpi=300)\nplt.show()", "Explanation\nDefinition\nREC(recall) = TPR(true positive rate) = $\\frac{TP}{TP+FN}$<br>\nPRE(precision) = $\\frac{TP}{TP+FP}$<br>\nFPR(false positive rate) = $\\frac{FP}{FP+TN}$<br>\nTP: Actual Class $A$, test result $A$<br>\nFP: Actual Class $B$, test result $A$<br>\nTN: Actual Class $B$, test result $B$<br>\nFN: Actual Class $A$, test result $B$<br>\nSimilarities\n\nWhen number of features increases, in ROC curve, true positive rate is close to 1 regardless of false positive rate (not equals 0); in Precison-Recall curve, precision is close to 1 regardless of recall (not equals 1).<br>\nThe training results are better than random guessing in this example.<br>\n\nDifferences\nFor monotonicity\n\nROC curve: As false positive rate increase, the true positive rate generally increases, and the effect is huger when number of features is small.<br>\nThis is because when avoid classifying class $B$ as class $A$, we actually increase the probability of successfully classifying class $A$ as class $A$, which increases the true positive rate.<br>\nActually, the curve is almost concave. The image below illustrates the reason.\n<img src=\"https://github.com/irsisyphus/Pictures/raw/master/ML-Exercises/roc.png\"><br>\nPlease keep online to view the picture. Reference: https://upload.wikimedia.org/wikipedia/commons/5/5c/ROCfig.PNG<br><br>\nPrecision-Recall curve: As recall increases, the precision generally decreases, especially when number of features is small.<br>\nThis is because when we try to increase recall (reduce false negative), we avoid classifying samples with acutal class $A$ as class $B$, that is, we classify samples similar to class $A$ as class $A$. However, whis will also increase the probability of classifying samples with acutal class $B$ (but tested to be similar to class $A$) as class $A$, which is FP.<br>\nNotice that when FP increases, it is not necessary that precision decrease since TP also increases. Only when the ratio $\\frac{TP}{TP+FP}$ drops, which is often the general case when number of features is small, precision drops. The image below illustrates the reason.\n<img width=50% src=\"https://github.com/irsisyphus/Pictures/raw/master/ML-Exercises/pre-rec.png\"><br>\nPlease keep online to view the picture. Reference: http://numerical.recipes/CS395T/lectures2008/17-ROCPrecisionRecall.pdf\n\nFor random guessing\n\nROC curve: true positive rate = false positive rate. This is because the propotion of TP in (TP+FN) equals the propotion of FP in (FP+TN), because declaring a result has no dependence on the actuall class label for random guessing.\nPrecision-Recall curve: the random guessing is a horizontal line with y = P/(P+N) for given boundary of P and N, which varies with cases. (In this example, we choose P/(P+N) of fold 3 for simplicity).\n\nFor perfect performance\n\nROC curve: The perfect performance is (0, 0) -> (0, 1) -> (1, 1). We aim to have result in the \"top-left\" corner to obtain low false positive rate and high true positive rate.\nPrecision-Recall curve: The perfect performance is (0, 1) -> (1, 1) -> (1, P/(P+N)). 
We aim to have result in the \"top-right\" corner to obtain high recall and high precision.\n\nEnsemble learning\nWe have used the following code to compute and plot the ensemble error from individual classifiers for binary classification:", "from scipy.misc import comb\nimport math\nimport numpy as np\n\ndef ensemble_error(num_classifier, base_error):\n k_start = math.ceil(num_classifier/2)\n probs = [comb(num_classifier, k)*(base_error**k)*((1-base_error)**(num_classifier-k)) for k in range(k_start, num_classifier+1)]\n return sum(probs)\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndef plot_base_error(ensemble_error_func, num_classifier, error_delta):\n\n error_range = np.arange(0.0, 1+error_delta, error_delta)\n ensemble_errors = [ensemble_error_func(num_classifier=num_classifier, base_error=error) for error in error_range]\n\n plt.plot(error_range, ensemble_errors, \n label = 'ensemble error',\n linewidth=2)\n plt.plot(error_range, error_range,\n label = 'base error',\n linestyle = '--',\n linewidth=2)\n plt.xlabel('base error')\n plt.ylabel('base/ensemble error')\n plt.legend(loc='best')\n plt.grid()\n plt.show()\n\nnum_classifier = 11\nerror_delta = 0.01\nbase_error = 0.25\n\nprint(ensemble_error(num_classifier=num_classifier, base_error=base_error))\n\nplot_base_error(ensemble_error, num_classifier=num_classifier, error_delta=error_delta)", "Number of classifiers (40 points)\nThe function plot_base_error() above plots the ensemble error as a function of the base error given a fixed number of classifiers.\nWrite another function to plot ensembe error versus different number of classifiers with a given base error.\nDoes the ensemble error always go down with more classifiers? \nWhy or why not?\nCan you improve the method ensemble_error() to produce a more reasonable plot?\nAnswer\nThe code for plotting is below:", "def plot_num_classifier(ensemble_error_func, max_num_classifier, base_error):\n\n num_classifiers = range(1, max_num_classifier+1)\n ensemble_errors = [ensemble_error_func(num_classifier = num_classifier, base_error=base_error) for num_classifier in num_classifiers]\n\n plt.plot(num_classifiers, ensemble_errors, \n label = 'ensemble error',\n linewidth = 2)\n plt.plot(range(max_num_classifier), [base_error]*max_num_classifier,\n label = 'base error',\n linestyle = '--',\n linewidth=2)\n \n plt.xlabel('num classifiers')\n plt.ylabel('ensemble error')\n plt.xlim([1, max_num_classifier])\n plt.ylim([0, 1])\n plt.title('base error %.2f' % base_error)\n plt.legend(loc='best')\n plt.grid()\n plt.show()\n\nmax_num_classifiers = 20\nbase_error = 0.25\n\nplot_num_classifier(ensemble_error, \n max_num_classifier=max_num_classifiers, \n base_error=base_error)", "Explanation\nObservations:\nThe ensemble error DOES NOT ALWAYS go down with more classifiers.<br>\nOverall, the ensemble error declines as the number of classifiers increases. However, when the number of calssifiers $N = 2k$, the error is much higher than $N = 2k-1$ and $2k+1$.\nReason:\nThis is because when a draw comes, i.e. 
$K$ subclassifiers are wrong and $K$ subclassifiers are correct, it is considered as \"wrong prediction\" by the original function.\nDescribe a better algorithm for computing the ensemble error.", "def better_ensemble_error(num_classifier, base_error):\n k_start = math.ceil(num_classifier/2)\n probs = [comb(num_classifier, k)*(base_error**k)*((1-base_error)**(num_classifier-k)) for k in range(k_start, num_classifier+1)]\n if num_classifier % 2 == 0:\n probs.append(-0.5*comb(num_classifier, k_start)*(base_error**k_start)*((1-base_error)**(num_classifier-k_start)))\n return sum(probs)\n\nplot_num_classifier(better_ensemble_error, \n max_num_classifier=max_num_classifiers, \n base_error=base_error)", "Discription\nWhen there is a draw, i.e. $K$ subclassifiers are wrong and $K$ subclassifiers are correct, we take half of the probability of this condition to be wrong. That is, $\\frac{1}{2} C\\left(2K, K\\right) \\epsilon^K \\left(1-\\epsilon\\right)^{K}$.<br>\nNotice that the result of $2K$ classifiers is the same as $2K-1$ classifier, Becase<br><br>\n$$\n\\begin{align}\nP(2K\\text{ failed}) = & \\text{ }P(2K\\text{ failed | }K\\text{ in }2K-1\\text{ failed})P(K\\text{ in }2K-1\\text{ failed}) \\\n& + P(2K\\text{ failed | }K-1\\text{ in }2K-1\\text{ failed})P(K-1\\text{ in }2K-1\\text{ failed}) \\\n& + P(2K\\text{ failed | less than }K-1\\text{ in }2K-1\\text{ failed})P(\\text{less than }K-1\\text{ in }2K-1\\text{ failed}) \\\n& + P(2K\\text{ failed | more than }K\\text{ in }2K-1\\text{ failed})P(\\text{more than }K\\text{ in }2K-1\\text{ failed}) \\\n= & \\text{ }(\\epsilon + \\frac{1-\\epsilon}{2})C\\left(2K-1, K\\right)\\epsilon^K \\left(1-\\epsilon\\right)^{K-1}\n+ (\\frac{\\epsilon}{2})C\\left(2K-1, K-1\\right)\\epsilon^{K-1} \\left(1-\\epsilon\\right)^{K} \\\n& + 0 \\times \\sum_{i=0}^{K-2}C\\left(2K-1, i\\right)\\epsilon^i \\left(1-\\epsilon\\right)^{2K-1-i}\n+ 1 \\times \\sum_{i=K+1}^{2K-1}C\\left(2K-1, i\\right)\\epsilon^i \\left(1-\\epsilon\\right)^{2K-1-i} \\\n= & \\text{ }C\\left(2K-1, K\\right)\\epsilon^{K+1} \\left(1-\\epsilon\\right)^{K-1} + C\\left(2K-1, K\\right)\\epsilon^K \\left(1-\\epsilon\\right)^{K} + \\sum_{i=K+1}^{2K-1}C\\left(2K-1, i\\right)\\epsilon^i \\left(1-\\epsilon\\right)^{2K-1-i} \\\n= & \\text{ }C\\left(2K-1, K\\right) (\\epsilon + 1 - \\epsilon) \\epsilon^{K} \\left(1-\\epsilon\\right)^{K-1} + \\sum_{i=K+1}^{2K-1}C\\left(2K-1, i\\right)\\epsilon^i \\left(1-\\epsilon\\right)^{2K-1-i} \\\n= & \\text{ }\\sum_{i=K}^{2K-1}C\\left(2K-1, i\\right)\\epsilon^i \\left(1-\\epsilon\\right)^{2K-1-i} \\\n= & \\text{ }P(2K-1\\text{ failed})\n\\end{align}\n$$<br>\nHence $P(2K$ failed) = $P(2K-1$ failed)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
joelowj/Udacity-Projects
Udacity-Artificial-Intelligence-Nanodegree/Project-4/asl_recognizer.ipynb
apache-2.0
[ "Artificial Intelligence Engineer Nanodegree - Probabilistic Models\nProject: Sign Language Recognition System\n\nIntroduction\nPart 1 Feature Selection\nTutorial\nFeatures Submission\nFeatures Unittest\n\n\nPart 2 Train the models\nTutorial\nModel Selection Score Submission\nModel Score Unittest\n\n\nPart 3 Build a Recognizer\nTutorial\nRecognizer Submission\nRecognizer Unittest\n\n\nPart 4 (OPTIONAL) Improve the WER with Language Models\n\n<a id='intro'></a>\nIntroduction\nThe overall goal of this project is to build a word recognizer for American Sign Language video sequences, demonstrating the power of probabalistic models. In particular, this project employs hidden Markov models (HMM's) to analyze a series of measurements taken from videos of American Sign Language (ASL) collected for research (see the RWTH-BOSTON-104 Database). In this video, the right-hand x and y locations are plotted as the speaker signs the sentence.\n\nThe raw data, train, and test sets are pre-defined. You will derive a variety of feature sets (explored in Part 1), as well as implement three different model selection criterion to determine the optimal number of hidden states for each word model (explored in Part 2). Finally, in Part 3 you will implement the recognizer and compare the effects the different combinations of feature sets and model selection criteria. \nAt the end of each Part, complete the submission cells with implementations, answer all questions, and pass the unit tests. Then submit the completed notebook for review!\n<a id='part1_tutorial'></a>\nPART 1: Data\nFeatures Tutorial\nLoad the initial database\nA data handler designed for this database is provided in the student codebase as the AslDb class in the asl_data module. This handler creates the initial pandas dataframe from the corpus of data included in the data directory as well as dictionaries suitable for extracting data in a format friendly to the hmmlearn library. We'll use those to create models in Part 2.\nTo start, let's set up the initial database and select an example set of features for the training set. At the end of Part 1, you will create additional feature sets for experimentation.", "import numpy as np\nimport pandas as pd\nfrom asl_data import AslDb\n\n\nasl = AslDb() # initializes the database\nasl.df.head() # displays the first five rows of the asl database, indexed by video and frame\n\nasl.df.ix[98,1] # look at the data available for an individual frame", "The frame represented by video 98, frame 1 is shown here:\n\nFeature selection for training the model\nThe objective of feature selection when training a model is to choose the most relevant variables while keeping the model as simple as possible, thus reducing training time. We can use the raw features already provided or derive our own and add columns to the pandas dataframe asl.df for selection. As an example, in the next cell a feature named 'grnd-ry' is added. 
This feature is the difference between the right-hand y value and the nose y value, which serves as the \"ground\" right y value.", "asl.df['grnd-ry'] = asl.df['right-y'] - asl.df['nose-y']\nasl.df.head() # the new feature 'grnd-ry' is now in the frames dictionary", "Try it!", "from asl_utils import test_features_tryit\n# TODO add df columns for 'grnd-rx', 'grnd-ly', 'grnd-lx' representing differences between hand and nose locations\nasl.df['grnd-rx'] = asl.df['right-x'] - asl.df['nose-x']\nasl.df['grnd-ly'] = asl.df['left-y'] - asl.df['nose-y']\nasl.df['grnd-lx'] = asl.df['left-x'] - asl.df['nose-x']\n\n# test the code\ntest_features_tryit(asl)\n\n# collect the features into a list\nfeatures_ground = ['grnd-rx','grnd-ry','grnd-lx','grnd-ly']\n #show a single set of features for a given (video, frame) tuple\n[asl.df.ix[98,1][v] for v in features_ground]", "Build the training set\nNow that we have a feature list defined, we can pass that list to the build_training method to collect the features for all the words in the training set. Each word in the training set has multiple examples from various videos. Below we can see the unique words that have been loaded into the training set:", "training = asl.build_training(features_ground)\nprint(\"Training words: {}\".format(training.words))", "The training data in training is an object of class WordsData defined in the asl_data module. in addition to the words list, data can be accessed with the get_all_sequences, get_all_Xlengths, get_word_sequences, and get_word_Xlengths methods. We need the get_word_Xlengths method to train multiple sequences with the hmmlearn library. In the following example, notice that there are two lists; the first is a concatenation of all the sequences(the X portion) and the second is a list of the sequence lengths(the Lengths portion).", "training.get_word_Xlengths('CHOCOLATE')", "More feature sets\nSo far we have a simple feature set that is enough to get started modeling. However, we might get better results if we manipulate the raw values a bit more, so we will go ahead and set up some other options now for experimentation later. For example, we could normalize each speaker's range of motion with grouped statistics using Pandas stats functions and pandas groupby. Below is an example for finding the means of all speaker subgroups.", "df_means = asl.df.groupby('speaker').mean()\ndf_means", "To select a mean that matches by speaker, use the pandas map method:", "asl.df['left-x-mean']= asl.df['speaker'].map(df_means['left-x'])\nasl.df.head()", "Try it!", "from asl_utils import test_std_tryit\n# TODO Create a dataframe named `df_std` with standard deviations grouped by speaker\ndf_std = asl.df.groupby('speaker').std()\n\n# test the code\ntest_std_tryit(df_std)", "<a id='part1_submission'></a>\nFeatures Implementation Submission\nImplement four feature sets and answer the question that follows.\n- normalized Cartesian coordinates\n - use mean and standard deviation statistics and the standard score equation to account for speakers with different heights and arm length\n\n\npolar coordinates\n\ncalculate polar coordinates with Cartesian to polar equations\nuse the np.arctan2 function and swap the x and y axes to move the $0$ to $2\\pi$ discontinuity to 12 o'clock instead of 3 o'clock; in other words, the normal break in radians value from $0$ to $2\\pi$ occurs directly to the left of the speaker's nose, which may be in the signing area and interfere with results. 
By swapping the x and y axes, that discontinuity move to directly above the speaker's head, an area not generally used in signing.\n\n\n\ndelta difference\n\nas described in Thad's lecture, use the difference in values between one frame and the next frames as features\npandas diff method and fillna method will be helpful for this one\n\n\n\ncustom features\n\nThese are your own design; combine techniques used above or come up with something else entirely. We look forward to seeing what you come up with! \nSome ideas to get you started:\nnormalize using a feature scaling equation\nnormalize the polar coordinates\nadding additional deltas", "# TODO add features for normalized by speaker values of left, right, x, y\n# Name these 'norm-rx', 'norm-ry', 'norm-lx', and 'norm-ly'\n# using Z-score scaling (X-Xmean)/Xstd\n\ncolumns = ['right-x','right-y','left-x','left-y']\nfeatures_norm = ['norm-rx','norm-ry', 'norm-lx','norm-ly']\nfor i,f in enumerate(features_norm):\n means = asl.df['speaker'].map(df_means[columns[i]])\n standards = asl.df['speaker'].map(df_std[columns[i]])\n asl.df[f]=(asl.df[columns[i]] - means) / standards\n\n# TODO add features for polar coordinate values where the nose is the origin\n# Name these 'polar-rr', 'polar-rtheta', 'polar-lr', and 'polar-ltheta'\n# Note that 'polar-rr' and 'polar-rtheta' refer to the radius and angle\n\nfeatures_polar = ['polar-rr', 'polar-rtheta', 'polar-lr', 'polar-ltheta']\ncolumns = [['grnd-rx','grnd-ry'],['grnd-lx','grnd-ly']]\n\ndef radius(x, y):\n return np.sqrt(x ** 2 + y ** 2)\n\ndef theta(x, y):\n return np.arctan2(x, y)\n\nfor i, f in enumerate(features_polar):\n if i % 2 == 0:\n asl.df[f] = radius(asl.df[features_ground[i]],\n asl.df[features_ground[i + 1]])\n else:\n asl.df[f] = theta(asl.df[features_ground[i - 1]],\n asl.df[features_ground[i]])\n\n# TODO add features for left, right, x, y differences by one time step, i.e. the \"delta\" values discussed in the lecture\n# Name these 'delta-rx', 'delta-ry', 'delta-lx', and 'delta-ly'\n\nfeatures_delta = ['delta-rx', 'delta-ry', 'delta-lx', 'delta-ly']\ncolumns = ['right-x','right-y','left-x','left-y']\nfor i,f in enumerate(features_delta):\n asl.df[f] = asl.df[columns[i]].diff().fillna(0.0)\n\n# TODO add features of your own design, which may be a combination of the above or something else\n# Name these whatever you would like\n\n# TODO define a list named 'features_custom' for building the training set\n\ncustom_features = ['norm-delta-rx', 'norm-delta-ry', 'norm-delta-lx', 'norm-delta-ly']\ncolumns = ['polar-rr', 'polar-rtheta', 'polar-lr', 'polar-ltheta']\ndf_new_means = asl.df.groupby('speaker').mean()\ndf_new_std = asl.df.groupby('speaker').std()\nfor i,f in enumerate(custom_features):\n means = asl.df['speaker'].map(df_new_means[columns[i]])\n standards = asl.df['speaker'].map(df_new_std[columns[i]])\n asl.df[f]=((asl.df[columns[i]] - means) / standards)+ asl.df[features_delta[i]]", "Question 1: What custom features did you choose for the features_custom set and why?\nAnswer 1: The custom features choosen is the addition of normalized polar coordinates with the delta values. Polar coordinates are used to make the nose the origin of the frame and all the frame points orbit around it. Normalization is used as the gaussian fit normal distribution very well. 
The delta values show the change in the positions with time and help measure up the probabilities and gaussian size.\n<a id='part1_test'></a>\nFeatures Unit Testing\nRun the following unit tests as a sanity check on the defined \"ground\", \"norm\", \"polar\", and 'delta\"\nfeature sets. The test simply looks for some valid values but is not exhaustive. However, the project should not be submitted if these tests don't pass.", "import unittest\n# import numpy as np\n\nclass TestFeatures(unittest.TestCase):\n\n def test_features_ground(self):\n sample = (asl.df.ix[98, 1][features_ground]).tolist()\n self.assertEqual(sample, [9, 113, -12, 119])\n\n def test_features_norm(self):\n sample = (asl.df.ix[98, 1][features_norm]).tolist()\n np.testing.assert_almost_equal(sample, [ 1.153, 1.663, -0.891, 0.742], 3)\n\n def test_features_polar(self):\n sample = (asl.df.ix[98,1][features_polar]).tolist()\n np.testing.assert_almost_equal(sample, [113.3578, 0.0794, 119.603, -0.1005], 3)\n\n def test_features_delta(self):\n sample = (asl.df.ix[98, 0][features_delta]).tolist()\n self.assertEqual(sample, [0, 0, 0, 0])\n sample = (asl.df.ix[98, 18][features_delta]).tolist()\n self.assertTrue(sample in [[-16, -5, -2, 4], [-14, -9, 0, 0]], \"Sample value found was {}\".format(sample))\n \nsuite = unittest.TestLoader().loadTestsFromModule(TestFeatures())\nunittest.TextTestRunner().run(suite)", "<a id='part2_tutorial'></a>\nPART 2: Model Selection\nModel Selection Tutorial\nThe objective of Model Selection is to tune the number of states for each word HMM prior to testing on unseen data. In this section you will explore three methods: \n- Log likelihood using cross-validation folds (CV)\n- Bayesian Information Criterion (BIC)\n- Discriminative Information Criterion (DIC) \nTrain a single word\nNow that we have built a training set with sequence data, we can \"train\" models for each word. As a simple starting example, we train a single word using Gaussian hidden Markov models (HMM). By using the fit method during training, the Baum-Welch Expectation-Maximization (EM) algorithm is invoked iteratively to find the best estimate for the model for the number of hidden states specified from a group of sample seequences. For this example, we assume the correct number of hidden states is 3, but that is just a guess. How do we know what the \"best\" number of states for training is? We will need to find some model selection technique to choose the best parameter.", "import warnings\nfrom hmmlearn.hmm import GaussianHMM\n\ndef train_a_word(word, num_hidden_states, features):\n \n warnings.filterwarnings(\"ignore\", category=DeprecationWarning)\n training = asl.build_training(features) \n X, lengths = training.get_word_Xlengths(word)\n model = GaussianHMM(n_components=num_hidden_states, n_iter=1000).fit(X, lengths)\n logL = model.score(X, lengths)\n return model, logL\n\ndemoword = 'BOOK'\nmodel, logL = train_a_word(demoword, 3, features_ground)\nprint(\"Number of states trained in model for {} is {}\".format(demoword, model.n_components))\nprint(\"logL = {}\".format(logL))", "The HMM model has been trained and information can be pulled from the model, including means and variances for each feature and hidden state. 
The log likelihood for any individual sample or group of samples can also be calculated with the score method.", "def show_model_stats(word, model):\n print(\"Number of states trained in model for {} is {}\".format(word, model.n_components)) \n variance=np.array([np.diag(model.covars_[i]) for i in range(model.n_components)]) \n for i in range(model.n_components): # for each hidden state\n print(\"hidden state #{}\".format(i))\n print(\"mean = \", model.means_[i])\n print(\"variance = \", variance[i])\n print()\n \nshow_model_stats(demoword, model)", "Try it!\nExperiment by changing the feature set, word, and/or num_hidden_states values in the next cell to see changes in values.", "my_testword = 'CHOCOLATE'\nmodel, logL = train_a_word(my_testword, 3, features_ground) # Experiment here with different parameters\nshow_model_stats(my_testword, model)\nprint(\"logL = {}\".format(logL))", "Visualize the hidden states\nWe can plot the means and variances for each state and feature. Try varying the number of states trained for the HMM model and examine the variances. Are there some models that are \"better\" than others? How can you tell? We would like to hear what you think in the classroom online.", "%matplotlib inline\n\nimport math\nfrom matplotlib import (cm, pyplot as plt, mlab)\n\ndef visualize(word, model):\n \"\"\" visualize the input model for a particular word \"\"\"\n variance=np.array([np.diag(model.covars_[i]) for i in range(model.n_components)])\n figures = []\n for parm_idx in range(len(model.means_[0])):\n xmin = int(min(model.means_[:,parm_idx]) - max(variance[:,parm_idx]))\n xmax = int(max(model.means_[:,parm_idx]) + max(variance[:,parm_idx]))\n fig, axs = plt.subplots(model.n_components, sharex=True, sharey=False)\n colours = cm.rainbow(np.linspace(0, 1, model.n_components))\n for i, (ax, colour) in enumerate(zip(axs, colours)):\n x = np.linspace(xmin, xmax, 100)\n mu = model.means_[i,parm_idx]\n sigma = math.sqrt(np.diag(model.covars_[i])[parm_idx])\n ax.plot(x, mlab.normpdf(x, mu, sigma), c=colour)\n ax.set_title(\"{} feature {} hidden state #{}\".format(word, parm_idx, i))\n\n ax.grid(True)\n figures.append(plt)\n for p in figures:\n p.show()\n \nvisualize(my_testword, model)", "ModelSelector class\nReview the ModelSelector class from the codebase found in the my_model_selectors.py module. It is designed to be a strategy pattern for choosing different model selectors. For the project submission in this section, subclass SelectorModel to implement the following model selectors. In other words, you will write your own classes/functions in the my_model_selectors.py module and run them from this notebook:\n\nSelectorCV: Log likelihood with CV\nSelectorBIC: BIC \nSelectorDIC: DIC\n\nYou will train each word in the training set with a range of values for the number of hidden states, and then score these alternatives with the model selector, choosing the \"best\" according to each strategy. 
The simple case of training with a constant value for n_components can be called using the provided SelectorConstant subclass as follow:", "from my_model_selectors import SelectorConstant\n\ntraining = asl.build_training(features_ground) # Experiment here with different feature sets defined in part 1\nword = 'VEGETABLE' # Experiment here with different words\nmodel = SelectorConstant(training.get_all_sequences(), training.get_all_Xlengths(), word, n_constant=3).select()\nprint(\"Number of states trained in model for {} is {}\".format(word, model.n_components))", "Cross-validation folds\nIf we simply score the model with the Log Likelihood calculated from the feature sequences it has been trained on, we should expect that more complex models will have higher likelihoods. However, that doesn't tell us which would have a better likelihood score on unseen data. The model will likely be overfit as complexity is added. To estimate which topology model is better using only the training data, we can compare scores using cross-validation. One technique for cross-validation is to break the training set into \"folds\" and rotate which fold is left out of training. The \"left out\" fold scored. This gives us a proxy method of finding the best model to use on \"unseen data\". In the following example, a set of word sequences is broken into three folds using the scikit-learn Kfold class object. When you implement SelectorCV, you will use this technique.", "from sklearn.model_selection import KFold\n\ntraining = asl.build_training(features_ground) # Experiment here with different feature sets\nword = 'VEGETABLE' # Experiment here with different words\nword_sequences = training.get_word_sequences(word)\nsplit_method = KFold()\nfor cv_train_idx, cv_test_idx in split_method.split(word_sequences):\n print(\"Train fold indices:{} Test fold indices:{}\".format(cv_train_idx, cv_test_idx)) # view indices of the folds", "Tip: In order to run hmmlearn training using the X,lengths tuples on the new folds, subsets must be combined based on the indices given for the folds. A helper utility has been provided in the asl_utils module named combine_sequences for this purpose.\nScoring models with other criterion\nScoring model topologies with BIC balances fit and complexity within the training set for each word. In the BIC equation, a penalty term penalizes complexity to avoid overfitting, so that it is not necessary to also use cross-validation in the selection process. There are a number of references on the internet for this criterion. These slides include a formula you may find helpful for your implementation.\nThe advantages of scoring model topologies with DIC over BIC are presented by Alain Biem in this reference (also found here). DIC scores the discriminant ability of a training set for one word against competing words. Instead of a penalty term for complexity, it provides a penalty if model liklihoods for non-matching words are too similar to model likelihoods for the correct word in the word set.\n<a id='part2_submission'></a>\nModel Selection Implementation Submission\nImplement SelectorCV, SelectorBIC, and SelectorDIC classes in the my_model_selectors.py module. Run the selectors on the following five words. Then answer the questions about your results.\nTip: The hmmlearn library may not be able to train or score all models. 
Implement try/except contructs as necessary to eliminate non-viable models from consideration.", "words_to_train = ['FISH', 'BOOK', 'VEGETABLE', 'FUTURE', 'JOHN']\nimport timeit\n\n# TODO: Implement SelectorCV in my_model_selector.py\nfrom my_model_selectors import SelectorCV\n\ntraining = asl.build_training(features_ground) # Experiment here with different feature sets defined in part 1\nsequences = training.get_all_sequences()\nXlengths = training.get_all_Xlengths()\nfor word in words_to_train:\n start = timeit.default_timer()\n model = SelectorCV(sequences, Xlengths, word, \n min_n_components=2, max_n_components=15, random_state = 14).select()\n end = timeit.default_timer()-start\n if model is not None:\n print(\"Training complete for {} with {} states with time {} seconds\".format(word, model.n_components, end))\n else:\n print(\"Training failed for {}\".format(word))\n\n# TODO: Implement SelectorBIC in module my_model_selectors.py\nfrom my_model_selectors import SelectorBIC\n\ntraining = asl.build_training(features_ground) # Experiment here with different feature sets defined in part 1\nsequences = training.get_all_sequences()\nXlengths = training.get_all_Xlengths()\nfor word in words_to_train:\n start = timeit.default_timer()\n model = SelectorBIC(sequences, Xlengths, word, \n min_n_components=2, max_n_components=15, random_state = 14).select()\n end = timeit.default_timer()-start\n if model is not None:\n print(\"Training complete for {} with {} states with time {} seconds\".format(word, model.n_components, end))\n else:\n print(\"Training failed for {}\".format(word))\n\n# TODO: Implement SelectorDIC in module my_model_selectors.py\nfrom my_model_selectors import SelectorDIC\n\ntraining = asl.build_training(features_ground) # Experiment here with different feature sets defined in part 1\nsequences = training.get_all_sequences()\nXlengths = training.get_all_Xlengths()\nfor word in words_to_train:\n start = timeit.default_timer()\n model = SelectorDIC(sequences, Xlengths, word, \n min_n_components=2, max_n_components=15, random_state = 14).select()\n end = timeit.default_timer()-start\n if model is not None:\n print(\"Training complete for {} with {} states with time {} seconds\".format(word, model.n_components, end))\n else:\n print(\"Training failed for {}\".format(word))", "Question 2: Compare and contrast the possible advantages and disadvantages of the various model selectors implemented.\nAnswer 2: \nSelectorBIC (lowest Baysian Information Criterion(BIC) score)\nAdvantage : It penalizes the complexity of the model where complexity refers to the number of parameters in the model.\nDisadvantage: The above approximation is only valid for sample size n {\\displaystyle n} n much larger than the number k where k of parameters in the model and BIC cannot handle complex collections of models as in the variable selection (or feature selection) problem in high-dimension.\nSelectorDIC (Discriminative Information Criterion)\nAdvantage : DIC is easily calculated from the samples generated by a Markov chain Monte Carlo simulation. BIC require calculating the likelihood\nDisdvantage : DIC equation is derived under the assumption that the specified parametric family of probability distributions that generate futuire observations encompasses the true model. This assumption does not always hold. -The observed data are both used to construct the posterior distribution and to evaluate the estimated models. 
DIC therfore tends to select over-fitted models.\nSelectorCV (average log Likelihood of cross-validation folds):\nAdvantage : High Accuracy given large amount of training data and the knowledge that the unseen data does not deviate much from the seen data. of this method over repeated random sub-sampling (see below) is that all observations are used for both training and validation, and each observation is used for validation exactly once\nDisdvantage : When the training data set is small, the model will overfit -If the training data set is small and the unseen data deviates significantly from the training data set, accuracy will be low. -Calculation of the folds introduces increased time and space complexity.\n<a id='part2_test'></a>\nModel Selector Unit Testing\nRun the following unit tests as a sanity check on the implemented model selectors. The test simply looks for valid interfaces but is not exhaustive. However, the project should not be submitted if these tests don't pass.", "from asl_test_model_selectors import TestSelectors\nsuite = unittest.TestLoader().loadTestsFromModule(TestSelectors())\nunittest.TextTestRunner().run(suite)", "<a id='part3_tutorial'></a>\nPART 3: Recognizer\nThe objective of this section is to \"put it all together\". Using the four feature sets created and the three model selectors, you will experiment with the models and present your results. Instead of training only five specific words as in the previous section, train the entire set with a feature set and model selector strategy. \nRecognizer Tutorial\nTrain the full training set\nThe following example trains the entire set with the example features_ground and SelectorConstant features and model selector. Use this pattern for you experimentation and final submission cells.", "# autoreload for automatically reloading changes made in my_model_selectors and my_recognizer\n%load_ext autoreload\n%autoreload 2\n\nfrom my_model_selectors import SelectorConstant\n\ndef train_all_words(features, model_selector):\n training = asl.build_training(features) # Experiment here with different feature sets defined in part 1\n sequences = training.get_all_sequences()\n Xlengths = training.get_all_Xlengths()\n model_dict = {}\n for word in training.words:\n model = model_selector(sequences, Xlengths, word, \n n_constant=3).select()\n model_dict[word]=model\n return model_dict\n\nmodels = train_all_words(features_ground, SelectorConstant)\nprint(\"Number of word models returned = {}\".format(len(models)))", "Load the test set\nThe build_test method in ASLdb is similar to the build_training method already presented, but there are a few differences:\n- the object is type SinglesData \n- the internal dictionary keys are the index of the test word rather than the word itself\n- the getter methods are get_all_sequences, get_all_Xlengths, get_item_sequences and get_item_Xlengths", "test_set = asl.build_test(features_ground)\nprint(\"Number of test set items: {}\".format(test_set.num_items))\nprint(\"Number of test set sentences: {}\".format(len(test_set.sentences_index)))", "<a id='part3_submission'></a>\nRecognizer Implementation Submission\nFor the final project submission, students must implement a recognizer following guidance in the my_recognizer.py module. Experiment with the four feature sets and the three model selection methods (that's 12 possible combinations). 
You can add and remove cells for experimentation or run the recognizers locally in some other way during your experiments, but retain the results for your discussion. For submission, you will provide code cells of only three interesting combinations for your discussion (see questions below). At least one of these should produce a word error rate of less than 60%, i.e. WER < 0.60 . \nTip: The hmmlearn library may not be able to train or score all models. Implement try/except contructs as necessary to eliminate non-viable models from consideration.", "# TODO implement the recognize method in my_recognizer\nfrom my_recognizer import recognize\nfrom asl_utils import show_errors\n\n# TODO Choose a feature set and model selector\nfeatures = custom_features # change as needed\nmodel_selector = SelectorCV # change as needed\n\n# TODO Recognize the test set and display the result with the show_errors method\nmodels = train_all_words(features, model_selector)\ntest_set = asl.build_test(features)\nprobabilities, guesses = recognize(models, test_set)\nshow_errors(guesses, test_set)\n\n# TODO Choose a feature set and model selector\nfeatures = custom_features # change as needed\nmodel_selector = SelectorBIC # change as needed\n\n# TODO Recognize the test set and display the result with the show_errors method\nmodels = train_all_words(features, model_selector)\ntest_set = asl.build_test(features)\nprobabilities, guesses = recognize(models, test_set)\nshow_errors(guesses, test_set)\n\n# TODO Choose a feature set and model selector\nfeatures = custom_features # change as needed\nmodel_selector = SelectorDIC # change as needed\n\n# TODO Recognize the test set and display the result with the show_errors method\nmodels = train_all_words(features, model_selector)\ntest_set = asl.build_test(features)\nprobabilities, guesses = recognize(models, test_set)\nshow_errors(guesses, test_set)", "Question 3: Summarize the error results from three combinations of features and model selectors. What was the \"best\" combination and why? What additional information might we use to improve our WER? For more insight on improving WER, take a look at the introduction to Part 4.\nAnswer 3: Using the custom_features, but with different model_selector.\nSelectorCV WER = 0.651685393258427\nSelectorBIC WER = 0.5898876404494382\nSelectorDIC WER = 0.6179775280898876\nCustom features are expected to provide an appropriate dataset with all the objects orbiting around nose and the normalisation makes the data fit for gaussian models. By varying the choice of model, BIC appears to give the \"best\" combination. This is because BIC has a strict penalization based on the the number of components and hence provides a inference for the model.\nThere are multiple techniques which we can use to improve WER, such as assigning higher probability to real and frequently observed sentences, use the shanon visualisation method, or use n gram models with generalisation by zeros. In addition, we can use less context data to avoid overfitting and use interpolation of probabilities to compensate.\n<a id='part3_test'></a>\nRecognizer Unit Tests\nRun the following unit tests as a sanity check on the defined recognizer. The test simply looks for some valid values but is not exhaustive. 
However, the project should not be submitted if these tests don't pass.", "from asl_test_recognizer import TestRecognize\nsuite = unittest.TestLoader().loadTestsFromModule(TestRecognize())\nunittest.TextTestRunner().run(suite)", "<a id='part4_info'></a>\nPART 4: (OPTIONAL) Improve the WER with Language Models\nWe've squeezed just about as much as we can out of the model and still only get about 50% of the words right! Surely we can do better than that. Probability to the rescue again in the form of statistical language models (SLM). The basic idea is that each word has some probability of occurrence within the set, and some probability that it is adjacent to specific other words. We can use that additional information to make better choices.\nAdditional reading and resources\n\nIntroduction to N-grams (Stanford Jurafsky slides)\nSpeech Recognition Techniques for a Sign Language Recognition System, Philippe Dreuw et al see the improved results of applying LM on this data!\nSLM data for this ASL dataset\n\nOptional challenge\nThe recognizer you implemented in Part 3 is equivalent to a \"0-gram\" SLM. Improve the WER with the SLM data provided with the data set in the link above using \"1-gram\", \"2-gram\", and/or \"3-gram\" statistics. The probabilities data you've already calculated will be useful and can be turned into a pandas DataFrame if desired (see next cell).\nGood luck! Share your results with the class!", "# create a DataFrame of log likelihoods for the test word items\ndf_probs = pd.DataFrame(data=probabilities)\ndf_probs.head()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
harmsm/pythonic-science
chapters/03_dealing-with-files/00_interacting-with-files_key.ipynb
unlicense
[ "from IPython.display import HTML\nHTML('<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/zV949buXdSg?autoplay=1&loop=1\" frameborder=\"0\" allowfullscreen></iframe>')", "Structures like these are encoded in \"PDB\" files\n\n\nHow can we parse a complicted file like this one?", "import pandas as pd\npd.read_table(\"data/1stn.pdb\")", "We can do better by manually parsing the file.\nOur test file\n\nPredict what this will print", "f = open(\"test-file.txt\")\nprint(f.readlines())\nf.close()", "Predict what this will print", "f = open(\"test-file.txt\")\nfor line in f.readlines():\n print(line)\nf.close()", "Predict what this will print", "f = open(\"test-file.txt\")\nfor line in f.readlines():\n print(line,end=\"\")\nf.close()", "Basic file reading operations:\n\nOpen a file for reading: f = open(SOME_FILE_NAME) \nRead lines of file sequentially: f.readlines()\nRead one line from the file: f.readline()\nRead the whole file into a string: f.read()\nClose the file: f.close()\n\nNow what do we do with each line?\nPredict what the following program will do", "f = open(\"test-file.txt\")\nfor line in f.readlines():\n print(line.split())\nf.close() ", "Predict what the following program will do", "f = open(\"test-file.txt\")\nfor line in f.readlines():\n print(line.split(\"1\"))\nf.close() ", "Splitting strings\n\nSOME_STRING.split(CHAR_TO_SPLIT_ON) allows you to split strings into a list. \nIf CHAR_TO_SPLIT_ON is not defined, it will split on all whitespace (\" \",\"\\t\",\"\\n\",\"\\r\")\n\"\\t\" is TAB, \"\\n\" is NEWLINE, \"\\r\" is CARRIAGE_RETURN. \n\nPredict what the following will do", "f = open(\"test-file.txt\")\nlines = f.readlines()\nf.close()\n\nline_of_interest = lines[-1]\nvalue = line_of_interest.split()[0]\nprint(value)", "Predict what will happen:", "print(value*5)", "value is a string of \"1.5\". You can't do math on it yet. 
\nThe solution is to cast it into a float", "value_as_float = float(value) \nprint(value_as_float*5)", "Cast calls:\nfloat, int, str, list, tuple", "list(\"1.5\")", "Write a program that grabs the \"1\" from the first line in the file and multiplies it by 75.", "f = open(\"test-file.txt\")\nlines = f.readlines()\nf.close()\n\nvalue = lines[0].split(\" \")[1]\nvalue_as_int = int(value)\nprint(value_as_int*75)", "What about writing to files?\nBasic file writing operations:\n\nOpen a file for writing: f = open(SOME_FILE_NAME,'w') will wipe out file immediately!\nOpen a file to append: f = open(SOME_FILE_NAME,'a')\nWrite a string to a file: f.write(SOME_STRING)\nWrite a list of strings: f.writelines([STRING1,STRING2,...])\nClose the file: f.close()", "def file_printer(file_name):\n f = open(file_name)\n for line in f.readlines():\n print(line,end=\"\")\n f.close()", "Predict what this code will do", "a_list = [\"a\",\"b\",\"c\"]\nf = open(\"another-file.txt\",\"w\")\nfor a in a_list:\n f.write(a)\nf.close()\nfile_printer(\"another-file.txt\")", "Predict what this code will do", "a_list = [\"a\",\"b\",\"c\"]\nf = open(\"another-file.txt\",\"w\")\nfor a in a_list:\n f.write(a)\n f.write(\"\\n\")\nf.close()\nfile_printer(\"another-file.txt\")", "Predict what this code will do", "a_list = [\"a\",\"b\",\"ccat\"]\nf = open(\"another-file.txt\",\"w\")\nfor a in a_list:\n f.write(\"A test {{}} {}\\n\".format(a))\nf.close()\nfile_printer(\"another-file.txt\")", "format lets you make pretty strings", "print(\"The value is: {:}\".format(10.35151))\nprint(\"The value is: {:.2f}\".format(10.35151))\nprint(\"The value is: {:20.2f}\".format(10.35151))\n\nprint(\"The value is: {:}\".format(10))\nprint(\"The value is: {:20d}\".format(10))", "String formatting\n\nPretty decimal printing: \"{:LENGITH_OF_STRING.NUM_DECIMALSf}\".format(FLOAT)\nPretty integer printing: \"{:LENGTH_OF_STRINGd}\".format(INT)\nPretty string printing: \"{:LENGTH_OF_STRINGs}\".format(STRING)\n\nCreate a loop that prints 0 to 9 to a file. Each number should be on its own line, written to 3 decimal places.", "f = open(\"junk\",\"w\")\nfor i in range(10):\n f.write(\"{:.3f}\\n\".format(i))\nf.close()\nfile_printer(\"junk\")", "Basic file reading operations:\n\nOpen a file for reading: f = open(SOME_FILE_NAME) \nRead lines of file sequentially: f.readlines()\nRead one line from the file: f.readline()\nRead the whole file into a string: f.read()\nClose the file: f.close()\n\nBasic file writing operations:\n\nOpen a file for writing: f = open(SOME_FILE_NAME,'w') will wipe out file immediately!\nOpen a file to append: f = open(SOME_FILE_NAME,'a')\nWrite a string to a file: f.write(SOME_STRING)\nWrite a list of strings: f.writeline([STRING1,STRING2,...])\nClose the file: f.close()\n\nSplitting strings\n\nSOME_STRING.split(CHAR_TO_SPLIT_ON) allows you to split strings into a list. \nIf CHAR_TO_SPLIT_ON is not defined, it will split on all whitespace (\" \",\"\\t\",\"\\n\",\"\\r\")\n\"\\t\" is TAB, \"\\n\" is NEWLINE, \"\\r\" is CARRIAGE_RETURN. \n\nString formatting\n\nPretty decimal printing: \"{:LENGITH_OF_STRING.NUM_DECIMALSf}\".format(FLOAT)\nPretty integer printing: \"{:LENGTH_OF_STRINGd}\".format(INT)\nPretty string printing: \"{:LENGTH_OF_STRINGs}\".format(STRING)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
iannesbitt/ml_bootcamp
Python-for-Data-Analysis/NumPy/Numpy Exercise - Solutions.ipynb
mit
[ "<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>\n\nNumPy Exercises - Solutions\nNow that we've learned about NumPy let's test your knowledge. We'll start off with a few simple tasks and then you'll be asked some more complicated questions.\nImport NumPy as np", "import numpy as np", "Create an array of 10 zeros", "np.zeros(10)", "Create an array of 10 ones", "np.ones(10)", "Create an array of 10 fives", "np.ones(10) * 5", "Create an array of the integers from 10 to 50", "np.arange(10,51)", "Create an array of all the even integers from 10 to 50", "np.arange(10,51,2)", "Create a 3x3 matrix with values ranging from 0 to 8", "np.arange(9).reshape(3,3)", "Create a 3x3 identity matrix", "np.eye(3)", "Use NumPy to generate a random number between 0 and 1", "np.random.rand(1)", "Use NumPy to generate an array of 25 random numbers sampled from a standard normal distribution", "np.random.randn(25)", "Create the following matrix:", "np.arange(1,101).reshape(10,10) / 100", "Create an array of 20 linearly spaced points between 0 and 1:", "np.linspace(0,1,20)", "Numpy Indexing and Selection\nNow you will be given a few matrices, and be asked to replicate the resulting matrix outputs:", "mat = np.arange(1,26).reshape(5,5)\nmat\n\n# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW\n# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T\n# BE ABLE TO SEE THE OUTPUT ANY MORE\n\nmat[2:,1:]\n\n# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW\n# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T\n# BE ABLE TO SEE THE OUTPUT ANY MORE\n\nmat[3,4]\n\n# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW\n# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T\n# BE ABLE TO SEE THE OUTPUT ANY MORE\n\nmat[:3,1:2]\n\n# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW\n# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T\n# BE ABLE TO SEE THE OUTPUT ANY MORE\n\nmat[4,:]\n\n# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW\n# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T\n# BE ABLE TO SEE THE OUTPUT ANY MORE\n\nmat[3:5,:]", "Now do the following\nGet the sum of all the values in mat", "mat.sum()", "Get the standard deviation of the values in mat", "mat.std()", "Get the sum of all the columns in mat", "mat.sum(axis=0)", "Great Job!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/docs-l10n
site/en-snapshot/tensorboard/scalars_and_keras.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "TensorBoard Scalars: Logging training metrics in Keras\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tensorboard/scalars_and_keras\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorboard/blob/master/docs/scalars_and_keras.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/tensorboard/blob/master/docs/scalars_and_keras.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\nOverview\nMachine learning invariably involves understanding key metrics such as loss and how they change as training progresses. These metrics can help you understand if you're overfitting, for example, or if you're unnecessarily training for too long. You may want to compare these metrics across different training runs to help debug and improve your model.\nTensorBoard's Scalars Dashboard allows you to visualize these metrics using a simple API with very little effort. This tutorial presents very basic examples to help you learn how to use these APIs with TensorBoard when developing your Keras model. You will learn how to use the Keras TensorBoard callback and TensorFlow Summary APIs to visualize default and custom scalars.\nSetup", "# Load the TensorBoard notebook extension.\n%load_ext tensorboard\n\nfrom datetime import datetime\nfrom packaging import version\n\nimport tensorflow as tf\nfrom tensorflow import keras\n\nimport numpy as np\n\nprint(\"TensorFlow version: \", tf.__version__)\nassert version.parse(tf.__version__).release[0] >= 2, \\\n \"This notebook requires TensorFlow 2.0 or above.\"", "Set up data for a simple regression\nYou're now going to use Keras to calculate a regression, i.e., find the best line of fit for a paired data set. (While using neural networks and gradient descent is overkill for this kind of problem, it does make for a very easy to understand example.)\nYou're going to use TensorBoard to observe how training and test loss change across epochs. Hopefully, you'll see training and test loss decrease over time and then remain steady.\nFirst, generate 1000 data points roughly along the line y = 0.5x + 2. Split these data points into training and test sets. 
Your hope is that the neural net learns this relationship.", "data_size = 1000\n# 80% of the data is for training.\ntrain_pct = 0.8\n\ntrain_size = int(data_size * train_pct)\n\n# Create some input data between -1 and 1 and randomize it.\nx = np.linspace(-1, 1, data_size)\nnp.random.shuffle(x)\n\n# Generate the output data.\n# y = 0.5x + 2 + noise\ny = 0.5 * x + 2 + np.random.normal(0, 0.05, (data_size, ))\n\n# Split into test and train pairs.\nx_train, y_train = x[:train_size], y[:train_size]\nx_test, y_test = x[train_size:], y[train_size:]", "Training the model and logging loss\nYou're now ready to define, train and evaluate your model. \nTo log the loss scalar as you train, you'll do the following:\n\nCreate the Keras TensorBoard callback\nSpecify a log directory\nPass the TensorBoard callback to Keras' Model.fit().\n\nTensorBoard reads log data from the log directory hierarchy. In this notebook, the root log directory is logs/scalars, suffixed by a timestamped subdirectory. The timestamped subdirectory enables you to easily identify and select training runs as you use TensorBoard and iterate on your model.", "logdir = \"logs/scalars/\" + datetime.now().strftime(\"%Y%m%d-%H%M%S\")\ntensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)\n\nmodel = keras.models.Sequential([\n keras.layers.Dense(16, input_dim=1),\n keras.layers.Dense(1),\n])\n\nmodel.compile(\n loss='mse', # keras.losses.mean_squared_error\n optimizer=keras.optimizers.SGD(learning_rate=0.2),\n)\n\nprint(\"Training ... With default parameters, this takes less than 10 seconds.\")\ntraining_history = model.fit(\n x_train, # input\n y_train, # output\n batch_size=train_size,\n verbose=0, # Suppress chatty output; use Tensorboard instead\n epochs=100,\n validation_data=(x_test, y_test),\n callbacks=[tensorboard_callback],\n)\n\nprint(\"Average test loss: \", np.average(training_history.history['loss']))", "Examining loss using TensorBoard\nNow, start TensorBoard, specifying the root log directory you used above.\nWait a few seconds for TensorBoard's UI to spin up.", "%tensorboard --logdir logs/scalars", "<!-- <img class=\"tfo-display-only-on-site\" src=\"https://github.com/tensorflow/tensorboard/blob/master/docs/images/scalars_loss.png?raw=1\"/> -->\n\nYou may see TensorBoard display the message \"No dashboards are active for the current data set\". That's because initial logging data hasn't been saved yet. As training progresses, the Keras model will start logging data. TensorBoard will periodically refresh and show you your scalar metrics. If you're impatient, you can tap the Refresh arrow at the top right.\nAs you watch the training progress, note how both training and validation loss rapidly decrease, and then remain stable. In fact, you could have stopped training after 25 epochs, because the training didn't improve much after that point.\nHover over the graph to see specific data points. You can also try zooming in with your mouse, or selecting part of them to view more detail.\nNotice the \"Runs\" selector on the left. A \"run\" represents a set of logs from a round of training, in this case the result of Model.fit(). Developers typically have many, many runs, as they experiment and develop their model over time. \nUse the Runs selector to choose specific runs, or choose from only training or validation. 
Comparing runs will help you evaluate which version of your code is solving your problem better.\nOk, TensorBoard's loss graph demonstrates that the loss consistently decreased for both training and validation and then stabilized. That means that the model's metrics are likely very good! Now see how the model actually behaves in real life. \nGiven the input data (60, 25, 2), the line y = 0.5x + 2 should yield (32, 14.5, 3). Does the model agree?", "print(model.predict([60, 25, 2]))\n# True values to compare predictions against: \n# [[32.0]\n# [14.5]\n# [ 3.0]]", "Not bad!\nLogging custom scalars\nWhat if you want to log custom values, such as a dynamic learning rate? To do that, you need to use the TensorFlow Summary API.\nRetrain the regression model and log a custom learning rate. Here's how:\n\nCreate a file writer, using tf.summary.create_file_writer().\nDefine a custom learning rate function. This will be passed to the Keras LearningRateScheduler callback.\nInside the learning rate function, use tf.summary.scalar() to log the custom learning rate.\nPass the LearningRateScheduler callback to Model.fit().\n\nIn general, to log a custom scalar, you need to use tf.summary.scalar() with a file writer. The file writer is responsible for writing data for this run to the specified directory and is implicitly used when you use the tf.summary.scalar().", "logdir = \"logs/scalars/\" + datetime.now().strftime(\"%Y%m%d-%H%M%S\")\nfile_writer = tf.summary.create_file_writer(logdir + \"/metrics\")\nfile_writer.set_as_default()\n\ndef lr_schedule(epoch):\n \"\"\"\n Returns a custom learning rate that decreases as epochs progress.\n \"\"\"\n learning_rate = 0.2\n if epoch > 10:\n learning_rate = 0.02\n if epoch > 20:\n learning_rate = 0.01\n if epoch > 50:\n learning_rate = 0.005\n\n tf.summary.scalar('learning rate', data=learning_rate, step=epoch)\n return learning_rate\n\nlr_callback = keras.callbacks.LearningRateScheduler(lr_schedule)\ntensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)\n\nmodel = keras.models.Sequential([\n keras.layers.Dense(16, input_dim=1),\n keras.layers.Dense(1),\n])\n\nmodel.compile(\n loss='mse', # keras.losses.mean_squared_error\n optimizer=keras.optimizers.SGD(),\n)\n\ntraining_history = model.fit(\n x_train, # input\n y_train, # output\n batch_size=train_size,\n verbose=0, # Suppress chatty output; use Tensorboard instead\n epochs=100,\n validation_data=(x_test, y_test),\n callbacks=[tensorboard_callback, lr_callback],\n)", "Let's look at TensorBoard again.", "%tensorboard --logdir logs/scalars", "<!-- <img class=\"tfo-display-only-on-site\" src=\"https://github.com/tensorflow/tensorboard/blob/master/docs/images/scalars_custom_lr.png?raw=1\"/> -->\n\nUsing the \"Runs\" selector on the left, notice that you have a &lt;timestamp&gt;/metrics run. Selecting this run displays a \"learning rate\" graph that allows you to verify the progression of the learning rate during this run. \nYou can also compare this run's training and validation loss curves against your earlier runs.\nYou might also notice that the learning rate schedule returned discrete values, depending on epoch, but the learning rate plot may appear smooth. TensorBoard has a smoothing parameter that you may need to turn down to zero to see the unsmoothed values. \nHow does this model do?", "print(model.predict([60, 25, 2]))\n# True values to compare predictions against: \n# [[32.0]\n# [14.5]\n# [ 3.0]]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kingb12/languagemodelRNN
model_comparisons/noingX_bow_compared.ipynb
mit
[ "Comparing Encoder-Decoders Analysis\nModel Architecture", "report_files = [\"/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing6_bow_200_512_04drb/encdec_noing6_bow_200_512_04drb.json\", \"/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_bow_200_512_04drb/encdec_noing10_bow_200_512_04drb.json\", \"/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing15_bow_200_512_04drb/encdec_noing15_bow_200_512_04drb.json\", \"/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing23_bow_200_512_04drb/encdec_noing23_bow_200_512_04drb.json\"]\nlog_files = [\"/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing6_bow_200_512_04drb/encdec_noing6_bow_200_512_04drb_logs.json\", \"/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_bow_200_512_04drb/encdec_noing10_bow_200_512_04drb_logs.json\", \"/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing15_bow_200_512_04drb/encdec_noing15_bow_200_512_04drb_logs.json\", \"/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing23_bow_200_512_04drb/encdec_noing23_bow_200_512_04drb_logs.json\"]\nreports = []\nlogs = []\nimport json\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfor report_file in report_files:\n with open(report_file) as f:\n reports.append((report_file.split('/')[-1].split('.json')[0], json.loads(f.read())))\nfor log_file in log_files:\n with open(log_file) as f:\n logs.append((log_file.split('/')[-1].split('.json')[0], json.loads(f.read())))\n \nfor report_name, report in reports:\n print '\\n', report_name, '\\n'\n print 'Encoder: \\n', report['architecture']['encoder']\n print 'Decoder: \\n', report['architecture']['decoder']\n ", "Perplexity on Each Dataset", "%matplotlib inline\nfrom IPython.display import HTML, display\n\ndef display_table(data):\n display(HTML(\n u'<table><tr>{}</tr></table>'.format(\n u'</tr><tr>'.join(\n u'<td>{}</td>'.format('</td><td>'.join(unicode(_) for _ in row)) for row in data)\n )\n ))\n\ndef bar_chart(data):\n n_groups = len(data)\n \n train_perps = [d[1] for d in data]\n valid_perps = [d[2] for d in data]\n test_perps = [d[3] for d in data]\n \n fig, ax = plt.subplots(figsize=(10,8))\n \n index = np.arange(n_groups)\n bar_width = 0.3\n\n opacity = 0.4\n error_config = {'ecolor': '0.3'}\n\n train_bars = plt.bar(index, train_perps, bar_width,\n alpha=opacity,\n color='b',\n error_kw=error_config,\n label='Training Perplexity')\n\n valid_bars = plt.bar(index + bar_width, valid_perps, bar_width,\n alpha=opacity,\n color='r',\n error_kw=error_config,\n label='Valid Perplexity')\n test_bars = plt.bar(index + 2*bar_width, test_perps, bar_width,\n alpha=opacity,\n color='g',\n error_kw=error_config,\n label='Test Perplexity')\n\n plt.xlabel('Model')\n plt.ylabel('Scores')\n plt.title('Perplexity by Model and Dataset')\n plt.xticks(index + bar_width / 3, [d[0] for d in data])\n plt.legend()\n\n plt.tight_layout()\n plt.show()\n\ndata = [['<b>Model</b>', '<b>Train Perplexity</b>', '<b>Valid Perplexity</b>', '<b>Test Perplexity</b>']]\n\nfor rname, report in reports:\n data.append([rname, report['train_perplexity'], report['valid_perplexity'], report['test_perplexity']])\n\ndisplay_table(data)\nbar_chart(data[1:])\n", "Loss vs. 
Epoch", "%matplotlib inline\nplt.figure(figsize=(10, 8))\nfor rname, l in logs:\n for k in l.keys():\n plt.plot(l[k][0], l[k][1], label=str(k) + ' ' + rname + ' (train)')\n plt.plot(l[k][0], l[k][2], label=str(k) + ' ' + rname + ' (valid)')\nplt.title('Loss v. Epoch')\nplt.xlabel('Epoch')\nplt.ylabel('Loss')\nplt.legend()\nplt.show()", "Perplexity vs. Epoch", "%matplotlib inline\nplt.figure(figsize=(10, 8))\nfor rname, l in logs:\n for k in l.keys():\n plt.plot(l[k][0], l[k][3], label=str(k) + ' ' + rname + ' (train)')\n plt.plot(l[k][0], l[k][4], label=str(k) + ' ' + rname + ' (valid)')\nplt.title('Perplexity v. Epoch')\nplt.xlabel('Epoch')\nplt.ylabel('Perplexity')\nplt.legend()\nplt.show()", "Generations", "def print_sample(sample, best_bleu=None):\n enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])\n gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])\n print('Input: '+ enc_input + '\\n')\n print('Gend: ' + sample['generated'] + '\\n')\n print('True: ' + gold + '\\n')\n if best_bleu is not None:\n cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>'])\n print('Closest BLEU Match: ' + cbm + '\\n')\n print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\\n')\n print('\\n')\n \ndef display_sample(samples, best_bleu=False):\n for enc_input in samples:\n data = []\n for rname, sample in samples[enc_input]:\n gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])\n data.append([rname, '<b>Generated: </b>' + sample['generated']])\n if best_bleu:\n cbm = ' '.join([w for w in sample['best_match'].split(' ') if w != '<mask>'])\n data.append([rname, '<b>Closest BLEU Match: </b>' + cbm + ' (Score: ' + str(sample['best_score']) + ')'])\n data.insert(0, ['<u><b>' + enc_input + '</b></u>', '<b>True: ' + gold+ '</b>'])\n display_table(data)\n\ndef process_samples(samples):\n # consolidate samples with identical inputs\n result = {}\n for rname, t_samples, t_cbms in samples:\n for i, sample in enumerate(t_samples):\n enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])\n if t_cbms is not None:\n sample.update(t_cbms[i])\n if enc_input in result:\n result[enc_input].append((rname, sample))\n else:\n result[enc_input] = [(rname, sample)]\n return result\n\n\n \n \n\n\nsamples = process_samples([(rname, r['train_samples'], r['best_bleu_matches_train'] if 'best_bleu_matches_train' in r else None) for (rname, r) in reports])\ndisplay_sample(samples, best_bleu='best_bleu_matches_train' in reports[1][1])\n\n\nsamples = process_samples([(rname, r['valid_samples'], r['best_bleu_matches_valid'] if 'best_bleu_matches_valid' in r else None) for (rname, r) in reports])\ndisplay_sample(samples, best_bleu='best_bleu_matches_valid' in reports[1][1])\n\n\nsamples = process_samples([(rname, r['test_samples'], r['best_bleu_matches_test'] if 'best_bleu_matches_test' in r else None) for (rname, r) in reports])\ndisplay_sample(samples, best_bleu='best_bleu_matches_test' in reports[1][1])", "BLEU Analysis", "def print_bleu(blue_structs):\n data= [['<b>Model</b>', '<b>Overall Score</b>','<b>1-gram Score</b>','<b>2-gram Score</b>','<b>3-gram Score</b>','<b>4-gram Score</b>']]\n for rname, blue_struct in blue_structs:\n data.append([rname, blue_struct['score'], blue_struct['components']['1'], blue_struct['components']['2'], blue_struct['components']['3'], blue_struct['components']['4']])\n display_table(data)\n\n# Training Set BLEU Scores\nprint_bleu([(rname, report['train_bleu']) for 
(rname, report) in reports])\n\n# Validation Set BLEU Scores\nprint_bleu([(rname, report['valid_bleu']) for (rname, report) in reports])\n\n# Test Set BLEU Scores\nprint_bleu([(rname, report['test_bleu']) for (rname, report) in reports])\n\n# All Data BLEU Scores\nprint_bleu([(rname, report['combined_bleu']) for (rname, report) in reports])", "N-pairs BLEU Analysis\nThis analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We expect very low scores for the ground truth, while high scores can expose hyper-common generations.", "# Training Set BLEU n-pairs Scores\nprint_bleu([(rname, report['n_pairs_bleu_train']) for (rname, report) in reports])\n\n# Validation Set n-pairs BLEU Scores\nprint_bleu([(rname, report['n_pairs_bleu_valid']) for (rname, report) in reports])\n\n# Test Set n-pairs BLEU Scores\nprint_bleu([(rname, report['n_pairs_bleu_test']) for (rname, report) in reports])\n\n# Combined n-pairs BLEU Scores\nprint_bleu([(rname, report['n_pairs_bleu_all']) for (rname, report) in reports])\n\n# Ground Truth n-pairs BLEU Scores\nprint_bleu([(rname, report['n_pairs_bleu_gold']) for (rname, report) in reports])", "Alignment Analysis\nThis analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as n-pairs BLEU: we expect low scores for the ground truth, while hyper-common generations raise the scores.", "def print_align(reports):\n    data= [['<b>Model</b>', '<b>Average (Train) Generated Score</b>','<b>Average (Valid) Generated Score</b>','<b>Average (Test) Generated Score</b>','<b>Average (All) Generated Score</b>', '<b>Average (Gold) Score</b>']]\n    for rname, report in reports:\n        data.append([rname, report['average_alignment_train'], report['average_alignment_valid'], report['average_alignment_test'], report['average_alignment_all'], report['average_alignment_gold']])\n    display_table(data)\n\nprint_align(reports)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
1iyiwei/pyml
code/ch04/ch04.ipynb
mit
[ "Copyright (c) 2015, 2016 Sebastian Raschka\n<br>\nLi-Yi Wei, 2016\nhttps://github.com/1iyiwei/pyml\nMIT License\nPython Machine Learning - Code Examples\nChapter 4 - Building Good Training Sets – Data Pre-Processing\nMachine learns from data (training set)\n* garbage in, garbage out\nData is very important\n* quality, form, etc.\nThis part is about how to pre-process data for better machine learning\n* the first stage of the pipeline\n<img src=\"./images/01_09.png\" width=90%>\nNote that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).", "%load_ext watermark\n%watermark -a '' -u -d -v -p numpy,pandas,matplotlib,sklearn", "The use of watermark is optional. You can install this IPython extension via \"pip install watermark\". For more information, please see: https://github.com/rasbt/watermark.\nOverview\n\nDealing with missing data\nEliminating samples or features with missing values\nImputing missing values\nUnderstanding the scikit-learn estimator API\nHandling categorical data\nMapping ordinal features\nEncoding class labels\nPerforming one-hot encoding on nominal features\nPartitioning a dataset in training and test sets\nBringing features onto the same scale\nSelecting meaningful features\nSparse solutions with L1 regularization\nSequential feature selection algorithms\nAssessing feature importance with random forests\nSummary", "from IPython.display import Image\n\n%matplotlib inline\n# Added version check for recent scikit-learn 0.18 checks\nfrom distutils.version import LooseVersion as Version\nfrom sklearn import __version__ as sklearn_version\n", "Dealing with missing data\nThe training data might be incomplete due to various reasons\n* data collection error\n* measurements not applicable\n* etc.\nMost machine learning algorithms/implementations cannot robustly deal with missing data\nThus we need to deal with missing data before training models\nWe use pandas (Python data analysis) library for dealing with missing data in the examples below", "import pandas as pd\nfrom io import StringIO\n\ncsv_data = '''A,B,C,D\n1.0,2.0,3.0,4.0\n5.0,6.0,,8.0\n10.0,11.0,12.0,'''\n\n# If you are using Python 2.7, you need\n# to convert the string to unicode:\n# csv_data = unicode(csv_data)\n\ndf = pd.read_csv(StringIO(csv_data))\ndf", "The columns (A, B, C, D) are features.\nThe rows (0, 1, 2) are samples.\nMissing values become NaN (not a number).", "df.isnull()\n\ndf.isnull().sum(axis=0)", "Eliminating samples or features with missing values\nOne simple strategy is to simply eliminate samples (table rows) or features (table columns) with missing values, based on various criteria.", "# the default is to drop samples/rows\ndf.dropna()\n\n# but we can also elect to drop features/columns\ndf.dropna(axis=1)\n\n# only drop rows where all columns are NaN\ndf.dropna(how='all') \n\n# drop rows that have not at least 4 non-NaN values\ndf.dropna(thresh=4)\n\n# only drop rows where NaN appear in specific columns (here: 'C')\ndf.dropna(subset=['C'])", "Dropping data might not be desirable, as the resulting data set might become too small.\nImputing missing values\nInterpolating missing values from existing ones can preserve the original data better.\nImpute: the process of replacing missing data with substituted values\n<a href=\"https://en.wikipedia.org/wiki/Imputation_(statistics)\">in statistics</a>", "from sklearn.preprocessing import Imputer\n\n# options from the imputer library includes mean, 
median, most_frequent\n\nimr = Imputer(missing_values='NaN', strategy='mean', axis=0)\nimr = imr.fit(df)\nimputed_data = imr.transform(df.values)\nimputed_data", "For example, 7.5 is the average of 3 and 12.\n6 is the average of 4 and 8.", "df.values", "We can do better than this, by selecting only the most similar rows for interpolation, instead of all rows. This is how recommendation system could work, e.g. predict your potential rating of a movie or book you have not seen based on item ratings from you and other users.\n<i>Programming Collective Intelligence: Building Smart Web 2.0 Applications, by Toby Segaran</i>\n* very good reference book: recommendation system, search engine, etc.\n* I didn't choose it as one of the text/reference books as the code/data is a bit out of date\n<a href=\"https://www.amazon.com/Programming-Collective-Intelligence-Building-Applications/dp/0596529325/ref=sr_1_1?s=books&ie=UTF8&qid=1475564389&sr=1-1&keywords=collective+intelligence\">\n<img src=\"https://images-na.ssl-images-amazon.com/images/I/51LolW3DugL._SX379_BO1,204,203,200_.jpg\" width=25% align=right>\n</a>\nUnderstanding the scikit-learn estimator API\nTransformer class for data transformation\n* imputer\nKey methods\n* fit() for fitting from (training) ata\n* transform() for transforming future data based on the fitted data\nGood API designs are consistent. For example, the fit() method has similar meanings for different classes, such as transformer and estimator. \nTransformer\n<img src='./images/04_04.png' width=80%>\nEstimator\n<img src='./images/04_05.png' width=80%> \nHandling different types of data\nThere are different types of feature data: numerical and categorical.\nNumerical features are numbers and often \"continuous\" like real numbers.\nCategorical features are \"discrete\", and can be either nominal or ordinal.\n* Ordinal values are discrete but carry some numerical meanings such as ordering and thus can be sorted. \n* Nominal values have no numerical meanings.\nIn the example below:\n* color is nominal (no numerical meaning)\n* size is ordinal (can be sorted in some way)\n* price is numerical\nA given dataset can contain features of different types. It is important to handle them carefully. For example, do not treat nominal values as numbers without proper mapping.", "import pandas as pd\n\ndf = pd.DataFrame([['green', 'M', 10.1, 'class1'],\n ['red', 'L', 13.5, 'class2'],\n ['blue', 'XL', 15.3, 'class1']])\n\ndf.columns = ['color', 'size', 'price', 'classlabel']\ndf", "Data conversion\nFor some estimators such as decision trees that handle one feature at a time, it is OK to keep the features as they are.\nHowever for other estimators that need to handle multiple features together, we need to convert them into compatible forms before proceeding:\n1. convert categorical values into numerical values\n2. scale/normalize numerical values\nMapping ordinal features\nOrdinal features can be converted into numbers, but the conversion often depends on semantics and thus needs to be specified manually (by a human) instead of automatically (by a machine).\nIn the example below, we can map sizes into numbers. Intuitively, larger sizes should map to larger values. 
Exactly which values to map to is often a judgment call.\nBelow, we use Python dictionary to define a mapping.", "size_mapping = {'XL': 3,\n 'L': 2,\n 'M': 1}\n\ndf['size'] = df['size'].map(size_mapping)\ndf\n\ninv_size_mapping = {v: k for k, v in size_mapping.items()}\ndf['size'].map(inv_size_mapping)", "Encoding class labels\nClass labels often need to be represented as integers for machine learning libraries\n* not ordinal, so any integer mapping will do\n* but a good idea and convention is to use consecutive small values like 0, 1, ...", "import numpy as np\n\nclass_mapping = {label: idx for idx, label in enumerate(np.unique(df['classlabel']))}\nclass_mapping\n\n# forward map\ndf['classlabel'] = df['classlabel'].map(class_mapping)\ndf\n\n# inverse map\ninv_class_mapping = {v: k for k, v in class_mapping.items()}\ndf['classlabel'] = df['classlabel'].map(inv_class_mapping)\ndf", "We can use LabelEncoder in scikit learn to convert class labels automatically.", "from sklearn.preprocessing import LabelEncoder\n\nclass_le = LabelEncoder()\ny = class_le.fit_transform(df['classlabel'].values)\ny\n\nclass_le.inverse_transform(y)", "Performing one-hot encoding on nominal features\nHowever, unlike class labels, we cannot just convert nominal features (such as colors) directly into integers.\nA common mistake is to map nominal features into numerical values, e.g. for colors\n* blue $\\rightarrow$ 0\n* green $\\rightarrow$ 1\n* red $\\rightarrow$ 2", "X = df[['color', 'size', 'price']].values\n\ncolor_le = LabelEncoder()\nX[:, 0] = color_le.fit_transform(X[:, 0])\nX", "For categorical features, it is important to keep the mapped values \"equal distance\"\n* unless you have good reasons otherwise\nFor example, for colors red, green, blue, we want to convert them to values so that each color has equal distance from one another.\nThis cannot be done in 1D but doable in 2D (how? think about it).\nOne hot encoding is a straightforward way to make this just work, by mapping n-value nominal feature into n-dimensional binary vector. 
\n* blue $\\rightarrow$ (1, 0, 0)\n* green $\\rightarrow$ (0, 1, 0)\n* red $\\rightarrow$ (0, 0, 1)", "from sklearn.preprocessing import OneHotEncoder\n\nohe = OneHotEncoder(categorical_features=[0])\nohe.fit_transform(X).toarray()\n\n# automatic conversion via the get_dummies method in pd\npd.get_dummies(df[['price', 'color', 'size']])\n\ndf", "Partitioning a dataset in training and test sets\nTraining set to train the models\nTest set to evaluate the trained models\nSeparate the two to avoid over-fitting\n* well trained models should generalize to unseen, test data\nValidation set for tuning hyper-parameters\n* parameters are trained by algorithms\n* hyper-parameters are selected by humans\n* will talk about this later\nWine dataset\nA dataset to classify wines based on 13 features and 178 samples.", "wine_data_remote = 'https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data'\nwine_data_local = '../datasets/wine/wine.data'\n\ndf_wine = pd.read_csv(wine_data_local,\n header=None)\n\ndf_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',\n 'Alcalinity of ash', 'Magnesium', 'Total phenols',\n 'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',\n 'Color intensity', 'Hue', 'OD280/OD315 of diluted wines',\n 'Proline']\n\nprint('Class labels', np.unique(df_wine['Class label']))\ndf_wine.head()", "<hr>\n\nNote:\nIf the link to the Wine dataset provided above does not work for you, you can find a local copy in this repository at ./../datasets/wine/wine.data.\nOr you could fetch it via", "df_wine = pd.read_csv('https://raw.githubusercontent.com/1iyiwei/pyml/master/code/datasets/wine/wine.data', header=None)\n\ndf_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash', \n'Alcalinity of ash', 'Magnesium', 'Total phenols', \n'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', \n'Color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline']\ndf_wine.head()", "How to allocate training and test proportions?\nAs much training data as possible for accurate model\nAs much test data as possible for evaluation\nUsual rules is 60:40, 70:30, 80:20\nLarger datasets can have more portions for training\n* e.g. 
90:10\nOther partitions possible\n* talk about later in model evaluation and parameter tuning", "if Version(sklearn_version) < '0.18':\n from sklearn.cross_validation import train_test_split\nelse:\n from sklearn.model_selection import train_test_split\n\nX, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values\n\nX_train, X_test, y_train, y_test = \\\n train_test_split(X, y, test_size=0.3, random_state=0)\n \nprint(X.shape)\n\nimport numpy as np\nprint(np.unique(y))", "Bringing features onto the same scale\nMost machine learning algorithms behave better when features are on similar scales.\nExceptions\n* decision trees\n* random forests\nExample\nTwo features, in scale [0 1] and [0 100000]\nThink about what happens when we apply \n* perceptron\n* KNN\nTwo common approaches\nNormalization\nmin-max scaler:\n$$\\frac{x-x_{min}}{x_{max}-x_{min}}$$\nStandardization\nstandard scaler:\n$$\\frac{x-x_\\mu}{x_\\sigma}$$\n* $x_\\mu$: mean of x values\n* $x_\\sigma$: standard deviation of x values\nStandardization more common as normalization sensitive to outliers", "from sklearn.preprocessing import MinMaxScaler\nmms = MinMaxScaler()\nX_train_norm = mms.fit_transform(X_train)\nX_test_norm = mms.transform(X_test)\n\nfrom sklearn.preprocessing import StandardScaler\n\nstdsc = StandardScaler()\nX_train_std = stdsc.fit_transform(X_train)\nX_test_std = stdsc.transform(X_test)", "Toy case: standardization versus normalization", "ex = pd.DataFrame([0, 1, 2, 3, 4, 5])\n\n# standardize\nex[1] = (ex[0] - ex[0].mean()) / ex[0].std(ddof=0)\n\n# Please note that pandas uses ddof=1 (sample standard deviation) \n# by default, whereas NumPy's std method and the StandardScaler\n# uses ddof=0 (population standard deviation)\n\n# normalize\nex[2] = (ex[0] - ex[0].min()) / (ex[0].max() - ex[0].min())\nex.columns = ['input', 'standardized', 'normalized']\nex", "Selecting meaningful features\nOverfitting is a common problem for machine learning. \n* model fits training data too closely and fails to generalize to real data\n* model too complex for the given training data\n<img src=\"./images/03_06.png\" width=80%>\nWays to address overfitting\n\nCollect more training data (to make overfitting less likely)\nReduce the model complexity explicitly, such as the number of parameters\nReduce the model complexity implicitly via regularization\nReduce data dimensionality, which forced model reduction\n\nAmount of data should be sufficient relative to model complexity.\nObjective\nWe can sum up both the loss and regularization terms as the total objective:\n$$\\Phi(\\mathbf{X}, \\mathbf{T}, \\Theta) = L\\left(\\mathbf{X}, \\mathbf{T}, \\mathbf{Y}=f(\\mathbf{X}, \\Theta)\\right) + P(\\Theta)$$\nDuring training, the goal is to optimize the parameters $\\Theta$ with respect to the given training data $\\mathbf{X}$ and $\\mathbf{T}$:\n$$argmin_\\Theta \\; \\Phi(\\mathbf{X}, \\mathbf{T}, \\Theta)$$\nAnd hope the trained model with generalize well to future data. 
\nLoss\nEvery machine learning task as a goal, which can be formalized as a loss function:\n$$L(\\mathbf{X}, \\mathbf{T}, \\mathbf{Y})$$\n, where $\\mathbf{T}$ is some form of target or auxiliary information, such as:\n* labels for supervised classification\n* number of clusters for unsupervised clustering\n* environment for reinforcement learning\nRegularization\nIn addition to the objective, we often care about the simplicity of the model, for better efficiency and generalization (avoiding over-fitting).\nThe complexity of the model can be measured by another penalty function:\n$$P(\\Theta)$$\nSome common penalty functions include number and/or magnitude of parameters.\nRegularization\nFor weight vector $\\mathbf{w}$ of some model (e.g. perceptron or SVM)\n$L_2$:\n$\n\\|\\mathbf{w}\\|2^2 = \\sum{k} w_k^2\n$\n$L_1$:\n$\n\\|\\mathbf{w}\\|1 = \\sum{k} \\left|w_k \\right|\n$\n$L_1$ tends to produce sparser solutions than $L_2$\n* more zero weights\n* more like feature selection\n<img src='./images/04_12.png' width=80%>\n<img src='./images/04_13.png' width=80%> \nWe are more likely to bump into sharp corners of an object.\nExperiment: drop a circle and a square into a flat floor.\nWhat is the probability of hitting any point on the shape?\n<img src=\"./images/sharp_and_round.svg\" width=80%>\nHow about a non-flat floor, e.g. concave or convex with different curvatures?\nRegularization in scikit-learn\nMany ML models support regularization with different\n* methods (e.g. $L_1$ and $L_2$) \n* strength (the $C$ value inversely proportional to regularization strength)", "from sklearn.linear_model import LogisticRegression\n\n# l1 regularization\nlr = LogisticRegression(penalty='l1', C=0.1)\nlr.fit(X_train_std, y_train)\n\n# compare training and test accuracy to see if there is overfitting\nprint('Training accuracy:', lr.score(X_train_std, y_train))\nprint('Test accuracy:', lr.score(X_test_std, y_test))\n\n# 3 sets of parameters due to one-versus-rest with 3 classes\nlr.intercept_\n\n# 13 coefficients for 13 wine features; notice many of them are 0\nlr.coef_\n\nfrom sklearn.linear_model import LogisticRegression\n\n# l2 regularization\nlr = LogisticRegression(penalty='l2', C=0.1)\nlr.fit(X_train_std, y_train)\n\n# compare training and test accuracy to see if there is overfitting\nprint('Training accuracy:', lr.score(X_train_std, y_train))\nprint('Test accuracy:', lr.score(X_test_std, y_test))\n\n# notice the disappearance of 0 coefficients due to L2\nlr.coef_", "Plot regularization\n$C$ is inverse to the regularization strength", "import matplotlib.pyplot as plt\n\nfig = plt.figure()\nax = plt.subplot(111)\n \ncolors = ['blue', 'green', 'red', 'cyan', \n 'magenta', 'yellow', 'black', \n 'pink', 'lightgreen', 'lightblue', \n 'gray', 'indigo', 'orange']\n\nweights, params = [], []\nfor c in np.arange(-4, 6):\n lr = LogisticRegression(penalty='l1', C=10**c, random_state=0)\n lr.fit(X_train_std, y_train)\n weights.append(lr.coef_[1])\n params.append(10**c)\n\nweights = np.array(weights)\n\nfor column, color in zip(range(weights.shape[1]), colors):\n plt.plot(params, weights[:, column],\n label=df_wine.columns[column + 1],\n color=color)\nplt.axhline(0, color='black', linestyle='--', linewidth=3)\nplt.xlim([10**(-5), 10**5])\nplt.ylabel('weight coefficient')\nplt.xlabel('C')\nplt.xscale('log')\nplt.legend(loc='upper left')\nax.legend(loc='upper center', \n bbox_to_anchor=(1.38, 1.03),\n ncol=1, fancybox=True)\n# plt.savefig('./figures/l1_path.png', dpi=300)\nplt.show()", "Dimensionality 
reduction\n$L_1$ regularization implicitly selects features via zero out\nFeature selection\n* explicit - you specify how many features to select, the algorithm picks the most relevant (not important) ones\n* forward, backward\n* next topic\nNote: 2 important features might be highly correlated, and thus it is relevant to select only 1\nFeature extraction\n* implicit\n* can build new, not just select original, features\n* e.g. PCA\n* next chapter\nSequential feature selection algorithms\nFeature selection is a way to reduce input data dimensionality. You can think of it as reducing the number of columns of the input data table/frame. \nHow do we decide which features/columns to keep? Intuitively, we want to keep relevant ones and remove the rest.\nWe can select these features sequentially, either forward or backward.\nBackward selection\nSequential backward selection (SBS) is a simple heuristic. The basic idea is to start with $n$ features, and consider all possible $n-1$ subfeatures, and remove the one that matters the least for model training.\nWe then move on to reduce the number of features further ($[n-2, n-3, \\cdots]$) until reaching the desired number of features.", "from sklearn.base import clone\nfrom itertools import combinations\nimport numpy as np\nfrom sklearn.metrics import accuracy_score\nif Version(sklearn_version) < '0.18':\n from sklearn.cross_validation import train_test_split\nelse:\n from sklearn.model_selection import train_test_split\n\n\nclass SBS():\n def __init__(self, estimator, k_features, scoring=accuracy_score,\n test_size=0.25, random_state=1):\n self.scoring = scoring\n self.estimator = clone(estimator)\n self.k_features = k_features\n self.test_size = test_size\n self.random_state = random_state\n\n def fit(self, X, y):\n \n X_train, X_test, y_train, y_test = \\\n train_test_split(X, y, test_size=self.test_size,\n random_state=self.random_state)\n\n dim = X_train.shape[1]\n self.indices_ = tuple(range(dim))\n self.subsets_ = [self.indices_]\n score = self._calc_score(X_train, y_train, \n X_test, y_test, self.indices_)\n self.scores_ = [score]\n\n while dim > self.k_features:\n scores = []\n subsets = []\n\n for p in combinations(self.indices_, r=dim - 1):\n score = self._calc_score(X_train, y_train, \n X_test, y_test, p)\n scores.append(score)\n subsets.append(p)\n\n best = np.argmax(scores)\n self.indices_ = subsets[best]\n self.subsets_.append(self.indices_)\n dim -= 1\n\n self.scores_.append(scores[best])\n self.k_score_ = self.scores_[-1]\n\n return self\n\n def transform(self, X):\n return X[:, self.indices_]\n\n def _calc_score(self, X_train, y_train, X_test, y_test, indices):\n self.estimator.fit(X_train[:, indices], y_train)\n y_pred = self.estimator.predict(X_test[:, indices])\n score = self.scoring(y_test, y_pred)\n return score", "Below we try to apply the SBS class above.\nWe use the KNN classifer, which can suffer from curse of dimensionality.", "import matplotlib.pyplot as plt\nfrom sklearn.neighbors import KNeighborsClassifier\n\nknn = KNeighborsClassifier(n_neighbors=2)\n\n# selecting features\nsbs = SBS(knn, k_features=1)\nsbs.fit(X_train_std, y_train)\n\n# plotting performance of feature subsets\nk_feat = [len(k) for k in sbs.subsets_]\n\nplt.plot(k_feat, sbs.scores_, marker='o')\nplt.ylim([0.7, 1.1])\nplt.ylabel('Accuracy')\nplt.xlabel('Number of features')\nplt.grid()\nplt.tight_layout()\n# plt.savefig('./sbs.png', dpi=300)\nplt.show()\n\n# list the 5 most important features\nk5 = list(sbs.subsets_[8]) # 5+8 = 
13\nprint(df_wine.columns[1:][k5])\n\nknn.fit(X_train_std, y_train)\nprint('Training accuracy:', knn.score(X_train_std, y_train))\nprint('Test accuracy:', knn.score(X_test_std, y_test))\n\nknn.fit(X_train_std[:, k5], y_train)\nprint('Training accuracy:', knn.score(X_train_std[:, k5], y_train))\nprint('Test accuracy:', knn.score(X_test_std[:, k5], y_test))", "Note the improved test accuracy by fitting lower dimensional training/test data.\nForward selection\nThis is essetially the reverse of backward selection; we will leave this as an exercise.\nAssessing Feature Importances with Random Forests\nRecall\n* a decision tree is built by splitting nodes\n* each node split is to maximize information gain\n* random forest is a collection of decision trees with randomly selected features\nInformation gain (or impurity loss) at each node can measure the importantce of the feature being split", "# feature_importances_ from random forest classifier records this info\nfrom sklearn.ensemble import RandomForestClassifier\n\nfeat_labels = df_wine.columns[1:]\n\nforest = RandomForestClassifier(n_estimators=10000,\n random_state=0,\n n_jobs=-1)\n\nforest.fit(X_train, y_train)\nimportances = forest.feature_importances_\n\nindices = np.argsort(importances)[::-1]\n\nfor f in range(X_train.shape[1]):\n print(\"%2d) %-*s %f\" % (f + 1, 30, \n feat_labels[indices[f]], \n importances[indices[f]]))\n\nplt.title('Feature Importances')\nplt.bar(range(X_train.shape[1]), \n importances[indices],\n color='lightblue', \n align='center')\n\nplt.xticks(range(X_train.shape[1]), \n feat_labels[indices], rotation=90)\nplt.xlim([-1, X_train.shape[1]])\nplt.tight_layout()\n#plt.savefig('./random_forest.png', dpi=300)\nplt.show()\n\nthreshold = 0.15\nif False: #Version(sklearn_version) < '0.18':\n X_selected = forest.transform(X_train, threshold=threshold)\nelse:\n from sklearn.feature_selection import SelectFromModel\n sfm = SelectFromModel(forest, threshold=threshold, prefit=True)\n X_selected = sfm.transform(X_train)\n\nX_selected.shape", "Now, let's print the 3 features that met the threshold criterion for feature selection that we set earlier (note that this code snippet does not appear in the actual book but was added to this notebook later for illustrative purposes):", "for f in range(X_selected.shape[1]):\n print(\"%2d) %-*s %f\" % (f + 1, 30, \n feat_labels[indices[f]], \n importances[indices[f]]))", "Summary\nData is important for machine learning: garbage in, garbage out.\nSo pre-process data is important.\nThis chapter covers various topics for data processing, such as handling missing data, treating different types of data (numerical, categorical), and how to avoid over-fitting which can improve both accuracy and speed.\nReading\n\nPML Chapter 4\nIML Chapter 6.2" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
PYPIT/arclines
docs/nb/Match_script.ipynb
bsd-3-clause
[ "Script to match an input spectrum to the line lists", "# imports\nimport json\n\nfrom astropy.table import Table\n\nimport arclines", "Testing\nLRISb spectrum", "test_arc_path = arclines.__path__[0]+'/data/sources/'\nsrc_file = 'lrisb_600_4000_PYPIT.json'\nwith open(test_arc_path+src_file,'r') as f:\n pypit_fit = json.load(f)\n\nspec = pypit_fit['spec']\ntbl = Table()\ntbl['spec'] = spec\ntbl.write(arclines.__path__[0]+'/tests/files/LRISb_600_spec.ascii',format='ascii')", "Run\narclines_match LRISb_600_spec.ascii 4000. 1.26 CdI,HgI,ZnI\n\nKastr", "test_arc_path = '/Users/xavier/local/Python/PYPIT-development-suite/REDUX_OUT/Kast_red/600_7500_d55/MF_kast_red/'\nsrc_file = 'MasterWaveCalib_A_01_aa.json'\nwith open(test_arc_path+src_file,'r') as f:\n pypit_fit = json.load(f)\n\nspec = pypit_fit['spec']\ntbl = Table()\ntbl['spec'] = spec\ntbl.write(arclines.__path__[0]+'/tests/files/Kastr_600_7500_spec.ascii',format='ascii')", "Run\narclines_match Kastr_600_7500_spec.ascii 7500. 2.35 HgI,NeI,ArI\n\nCheck dispersion", "pix = np.array(pypit_fit['xfit'])*(len(spec)-1.)\nwave = np.array(pypit_fit['yfit'])\ndisp = (wave-np.roll(wave,1))/(pix-np.roll(pix,1))\ndisp", "Ran without UNKNWN -- Excellent\nLRISr -- 600/7500", "test_arc_path = arclines.__path__[0]+'/data/sources/'\nsrc_file = 'lrisr_600_7500_PYPIT.json'\nwith open(test_arc_path+src_file,'r') as f:\n pypit_fit = json.load(f)\n\nspec = pypit_fit['spec']\ntbl = Table()\ntbl['spec'] = spec\ntbl.write(arclines.__path__[0]+'/tests/files/LRISr_600_7500_spec.ascii',format='ascii')", "Run\narclines_match LRISr_600_7500_spec.ascii 7000. 1.6 ArI,HgI,KrI,NeI,XeI\nNeed UNKNOWNS for better performance\narclines_match LRISr_600_7500_spec.ascii 7000. 1.6 ArI,HgI,KrI,NeI,XeI --unknowns\n\nAm scanning now (successfully)\nRCS at MMT", "range(5,-1,-1)", "Run\narclines_match data/test_arcs/MMT_RCS_600_6310.hdf5 7000. 1.6 ArI,NeI" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
luofan18/deep-learning
image-classification/.ipynb_checkpoints/dlnd_image_classification-checkpoint.ipynb
mit
[ "Image Classification\nIn this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.\nGet the Data\nRun the following cell to download the CIFAR-10 dataset for python.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nfrom urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport problem_unittests as tests\nimport tarfile\n\ncifar10_dataset_folder_path = 'cifar-10-batches-py'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile('cifar-10-python.tar.gz'):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:\n urlretrieve(\n 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',\n 'cifar-10-python.tar.gz',\n pbar.hook)\n\nif not isdir(cifar10_dataset_folder_path):\n with tarfile.open('cifar-10-python.tar.gz') as tar:\n tar.extractall()\n tar.close()\n\n\ntests.test_folder_path(cifar10_dataset_folder_path)", "Explore the Data\nThe dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:\n* airplane\n* automobile\n* bird\n* cat\n* deer\n* dog\n* frog\n* horse\n* ship\n* truck\nUnderstanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.\nAsk yourself \"What are all possible labels?\", \"What is the range of values for the image data?\", \"Are the labels in order or random?\". Answers to questions like these will help you preprocess the data and end up with better predictions.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport helper\nimport numpy as np\n\n# Explore the dataset\nbatch_id = 1\nsample_id = 5\nhelper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)", "Implement Preprocess Functions\nNormalize\nIn the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.", "def normalize(x):\n \"\"\"\n Normalize a list of sample image data in the range of 0 to 1\n : x: List of image data. The image shape is (32, 32, 3)\n : return: Numpy array of normalize data\n \"\"\"\n # TODO: Implement Function\n arrays = []\n for x_ in x:\n array = np.array(x_)\n arrays.append(array)\n return np.stack(arrays, axis=0) / 256.\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_normalize(normalize)", "One-hot encode\nJust like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. 
The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.\nHint: Don't reinvent the wheel.", "def one_hot_encode(x):\n \"\"\"\n One hot encode a list of sample labels. Return a one-hot encoded vector for each label.\n : x: List of sample Labels\n : return: Numpy array of one-hot encoded labels\n \"\"\"\n # TODO: Implement Function\n # class_num = np.array(x).max()\n class_num = 10\n num = len(x)\n out = np.zeros((num, class_num))\n for i in range(num):\n out[i, x[i]-1] = 1\n return out\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_one_hot_encode(one_hot_encode)", "Randomize Data\nAs you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.\nPreprocess all the data and save it\nRunning the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport pickle\nimport problem_unittests as tests\nimport helper\n\n# Load the Preprocessed Validation data\nvalid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))", "Build the network\nFor the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.\n\nNote: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the \"Convolutional and Max Pooling Layer\" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.\nHowever, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. \n\nLet's begin!\nInput\nThe neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. 
Implement the following functions\n* Implement neural_net_image_input\n * Return a TF Placeholder\n * Set the shape using image_shape with batch size set to None.\n * Name the TensorFlow placeholder \"x\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_label_input\n * Return a TF Placeholder\n * Set the shape using n_classes with batch size set to None.\n * Name the TensorFlow placeholder \"y\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_keep_prob_input\n * Return a TF Placeholder for dropout keep probability.\n * Name the TensorFlow placeholder \"keep_prob\" using the TensorFlow name parameter in the TF Placeholder.\nThese names will be used at the end of the project to load your saved model.\nNote: None for shapes in TensorFlow allow for a dynamic size.", "import tensorflow as tf\n\ndef neural_net_image_input(image_shape):\n \"\"\"\n Return a Tensor for a batch of image input\n : image_shape: Shape of the images\n : return: Tensor for image input.\n \"\"\"\n # TODO: Implement Function\n # print ('image_shape')\n # print (image_shape)\n shape = (None, )\n shape = shape + image_shape\n # print ('shape')\n # print (shape)\n inputs = tf.placeholder(tf.float32, shape=shape, name='x')\n # print ('inputs')\n # print (inputs)\n return inputs\n\n\ndef neural_net_label_input(n_classes):\n \"\"\"\n Return a Tensor for a batch of label input\n : n_classes: Number of classes\n : return: Tensor for label input.\n \"\"\"\n # TODO: Implement Function\n shape = (None, )\n shape = shape + (n_classes, )\n return tf.placeholder(tf.float32, shape=shape, name='y')\n\n\ndef neural_net_keep_prob_input():\n \"\"\"\n Return a Tensor for keep probability\n : return: Tensor for keep probability.\n \"\"\"\n # TODO: Implement Function\n return tf.placeholder(tf.float32, name='keep_prob')\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntf.reset_default_graph()\ntests.test_nn_image_inputs(neural_net_image_input)\ntests.test_nn_label_inputs(neural_net_label_input)\ntests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)", "Convolution and Max Pooling Layer\nConvolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:\n* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.\n* Apply a convolution to x_tensor using weight and conv_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\n* Add bias\n* Add a nonlinear activation to the convolution.\n* Apply Max Pooling using pool_ksize and pool_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\nNote: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. 
You may still use the shortcut option for all the other layers.", "def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides, maxpool=True):\n \"\"\"\n Apply convolution then max pooling to x_tensor\n :param x_tensor: TensorFlow Tensor\n :param conv_num_outputs: Number of outputs for the convolutional layer\n :param conv_ksize: kernal size 2-D Tuple for the convolutional layer\n :param conv_strides: Stride 2-D Tuple for convolution\n :param pool_ksize: kernal size 2-D Tuple for pool\n :param pool_strides: Stride 2-D Tuple for pool\n : return: A tensor that represents convolution and max pooling of x_tensor\n \"\"\"\n # TODO: Implement Function\n input_channel = x_tensor.get_shape().as_list()[-1]\n weights_size = conv_ksize + (input_channel,) + (conv_num_outputs,)\n conv_strides = (1,) + conv_strides + (1,)\n pool_ksize = (1,) + pool_ksize + (1,)\n pool_strides = (1,) + pool_strides + (1,)\n \n weights = tf.Variable(tf.random_normal(weights_size, stddev=0.01))\n biases = tf.Variable(tf.zeros(conv_num_outputs))\n out = tf.nn.conv2d(x_tensor, weights, conv_strides, padding='SAME')\n out = out + biases\n out = tf.nn.relu(out)\n if maxpool:\n out = tf.nn.max_pool(out, pool_ksize, pool_strides, padding='SAME')\n return out\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_con_pool(conv2d_maxpool)", "Flatten Layer\nImplement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.", "def flatten(x_tensor):\n \"\"\"\n Flatten x_tensor to (Batch Size, Flattened Image Size)\n : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.\n : return: A tensor of size (Batch Size, Flattened Image Size).\n \"\"\"\n # TODO: Implement Function\n num, hight, width, channel = tuple(x_tensor.get_shape().as_list())\n new_shape = (-1, hight * width * channel)\n # print ('new_shape')\n # print (new_shape)\n return tf.reshape(x_tensor, new_shape)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_flatten(flatten)", "Fully-Connected Layer\nImplement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.", "def fully_conn(x_tensor, num_outputs):\n \"\"\"\n Apply a fully connected layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n # TODO: Implement Function\n num, dim = x_tensor.get_shape().as_list()\n weights = tf.Variable(tf.random_normal((dim, num_outputs), stddev=np.sqrt(2 / num_outputs)))\n biases = tf.Variable(tf.zeros(num_outputs))\n return tf.nn.relu(tf.matmul(x_tensor, weights) + biases)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_fully_conn(fully_conn)", "Output Layer\nImplement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). 
Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.\nNote: Activation, softmax, or cross entropy should not be applied to this.", "def output(x_tensor, num_outputs):\n \"\"\"\n Apply a output layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n # TODO: Implement Function\n num, dim = x_tensor.get_shape().as_list()\n weights = tf.Variable(tf.random_normal((dim, num_outputs), np.sqrt(2 / num_outputs)))\n biases = tf.Variable(tf.zeros(num_outputs))\n return tf.matmul(x_tensor, weights) + biases\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_output(output)", "Create Convolutional Model\nImplement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:\n\nApply 1, 2, or 3 Convolution and Max Pool layers\nApply a Flatten Layer\nApply 1, 2, or 3 Fully Connected Layers\nApply an Output Layer\nReturn the output\nApply TensorFlow's Dropout to one or more layers in the model using keep_prob.", "def conv_net(x, keep_prob):\n \"\"\"\n Create a convolutional neural network model\n : x: Placeholder tensor that holds image data.\n : keep_prob: Placeholder tensor that hold dropout keep probability.\n : return: Tensor that represents logits\n \"\"\"\n # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers\n # Play around with different number of outputs, kernel size and stride\n # Function Definition from Above:\n # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)\n conv_ksize3 = (3, 3)\n conv_ksize1 = (1, 1)\n conv_ksize5 = (5, 5)\n conv_ksize7 = (7, 7)\n conv_strides1 = (1, 1)\n conv_strides2 = (2, 2)\n pool_ksize = (2, 2)\n pool_strides = (2, 2)\n channels = [32,128,512,512]\n # L = 4\n out = x\n # 6 layers\n # for i in range(int(L / 4)):\n out = conv2d_maxpool(out, channels[0], conv_ksize7, conv_strides1, pool_ksize, pool_strides, maxpool=True)\n out = conv2d_maxpool(out, channels[1], conv_ksize5, conv_strides1, pool_ksize, pool_strides, maxpool=True)\n out = conv2d_maxpool(out, channels[2], conv_ksize3, conv_strides1, pool_ksize, pool_strides, maxpool=True)\n # out = conv2d_maxpool(out, channels[3], conv_ksize5, conv_strides2, pool_ksize, pool_strides, maxpool=True)\n\n # TODO: Apply a Flatten Layer\n # Function Definition from Above:\n # flatten(x_tensor)\n out = flatten(out)\n\n # TODO: Apply 1, 2, or 3 Fully Connected Layers\n # Play around with different number of outputs\n # Function Definition from Above:\n # fully_conn(x_tensor, num_outputs)\n # by remove this fully connected layer can improve performance\n out = fully_conn(out, 256) \n \n \n # TODO: Apply an Output Layer\n # Set this to the number of classes\n # Function Definition from Above:\n # output(x_tensor, num_outputs)\n out = tf.nn.dropout(out, keep_prob)\n out = output(out, 10)\n \n # TODO: return output\n return out\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n\n##############################\n## Build the Neural Network ##\n##############################\n\n# Remove previous weights, bias, inputs, etc..\ntf.reset_default_graph()\n\n# Inputs\nx = 
neural_net_image_input((32, 32, 3))\ny = neural_net_label_input(10)\nkeep_prob = neural_net_keep_prob_input()\n\n# Model\nlogits = conv_net(x, keep_prob)\n\n# Name logits Tensor, so that is can be loaded from disk after training\nlogits = tf.identity(logits, name='logits')\n\n# Loss and Optimizer\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))\noptimizer = tf.train.AdamOptimizer().minimize(cost)\n\n# Accuracy\ncorrect_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')\n\ntests.test_conv_net(conv_net)", "Train the Neural Network\nSingle Optimization\nImplement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:\n* x for image input\n* y for labels\n* keep_prob for keep probability for dropout\nThis function will be called for each batch, so tf.global_variables_initializer() has already been called.\nNote: Nothing needs to be returned. This function is only optimizing the neural network.", "def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):\n \"\"\"\n Optimize the session on a batch of images and labels\n : session: Current TensorFlow session\n : optimizer: TensorFlow optimizer function\n : keep_probability: keep probability\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n \"\"\"\n # TODO: Implement Function\n feed_dict = {keep_prob: keep_probability, x: feature_batch, y: label_batch}\n session.run(optimizer, feed_dict=feed_dict)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_train_nn(train_neural_network)", "Show Stats\nImplement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.", "def print_stats(session, feature_batch, label_batch, cost, accuracy):\n \"\"\"\n Print information about loss and validation accuracy\n : session: Current TensorFlow session\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n : cost: TensorFlow cost function\n : accuracy: TensorFlow accuracy function\n \"\"\"\n # TODO: Implement Function\n # here will print loss, train_accuracy, and val_accuracy\n # I implemented the val_accuracy, please read them all, thanks\n # print train_accuracy to see overfit\n loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})\n train_accuracy = session.run(accuracy, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})\n \n batch = feature_batch.shape[0]\n num_valid = valid_features.shape[0]\n val_accuracy = 0\n for i in range(0, num_valid, batch):\n end_i = i + batch\n if end_i > num_valid:\n end_i = num_valid\n batch_accuracy = session.run(accuracy, feed_dict={\n x: valid_features[i:end_i], y: valid_labels[i:end_i], keep_prob: 1.0})\n batch_accuracy *= (end_i - i)\n val_accuracy += batch_accuracy\n val_accuracy /= num_valid\n print ('loss is {}, train_accuracy is {}, val_accuracy is {}'.format(loss, train_accuracy, val_accuracy))", "Hyperparameters\nTune the following parameters:\n* Set epochs to the number of iterations until the network stops learning or start overfitting\n* Set batch_size to the highest number that your machine has memory for. 
Most people set them to common sizes of memory:\n * 64\n * 128\n * 256\n * ...\n* Set keep_probability to the probability of keeping a node using dropout", "# TODO: Tune Parameters\nepochs = 10\nbatch_size = 128\nkeep_probability = 0.8", "Train on a Single CIFAR-10 Batch\nInstead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nprint('Checking the Training on a Single Batch...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n batch_i = 1\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)", "Fully Train the Model\nNow that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_model_path = './image_classification'\n\nprint('Training...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n # Loop over all batches\n n_batches = 5\n for batch_i in range(1, n_batches + 1):\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)\n \n # Save Model\n saver = tf.train.Saver()\n save_path = saver.save(sess, save_model_path)", "Checkpoint\nThe model has been saved to disk.\nTest Model\nTest your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. 
If you don't, keep tweaking the model architecture and parameters.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport tensorflow as tf\nimport pickle\nimport helper\nimport random\n\n# Set batch size if not already set\ntry:\n if batch_size:\n pass\nexcept NameError:\n batch_size = 64\n\nsave_model_path = './image_classification'\nn_samples = 4\ntop_n_predictions = 3\n\ndef test_model():\n \"\"\"\n Test the saved model against the test dataset\n \"\"\"\n\n test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))\n loaded_graph = tf.Graph()\n\n with tf.Session(graph=loaded_graph) as sess:\n # Load model\n loader = tf.train.import_meta_graph(save_model_path + '.meta')\n loader.restore(sess, save_model_path)\n\n # Get Tensors from loaded model\n loaded_x = loaded_graph.get_tensor_by_name('x:0')\n loaded_y = loaded_graph.get_tensor_by_name('y:0')\n loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n loaded_logits = loaded_graph.get_tensor_by_name('logits:0')\n loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')\n \n # Get accuracy in batches for memory limitations\n test_batch_acc_total = 0\n test_batch_count = 0\n \n for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):\n test_batch_acc_total += sess.run(\n loaded_acc,\n feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})\n test_batch_count += 1\n\n print('Testing Accuracy: {}\\n'.format(test_batch_acc_total/test_batch_count))\n\n # Print Random Samples\n random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))\n random_test_predictions = sess.run(\n tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),\n feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})\n helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)\n\n\ntest_model()", "Why 50-80% Accuracy?\nYou might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 80%. That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_image_classification.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kingb12/languagemodelRNN
report_notebooks/encdec_noing10_200_512_04drb.ipynb
mit
[ "Encoder-Decoder Analysis\nModel Architecture", "report_file = '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb.json'\nlog_file = '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb_logs.json'\n\nimport json\nimport matplotlib.pyplot as plt\nwith open(report_file) as f:\n report = json.loads(f.read())\nwith open(log_file) as f:\n logs = json.loads(f.read())\nprint'Encoder: \\n\\n', report['architecture']['encoder']\nprint'Decoder: \\n\\n', report['architecture']['decoder']", "Perplexity on Each Dataset", "print('Train Perplexity: ', report['train_perplexity'])\nprint('Valid Perplexity: ', report['valid_perplexity'])\nprint('Test Perplexity: ', report['test_perplexity'])", "Loss vs. Epoch", "%matplotlib inline\nfor k in logs.keys():\n plt.plot(logs[k][0], logs[k][1], label=str(k) + ' (train)')\n plt.plot(logs[k][0], logs[k][2], label=str(k) + ' (valid)')\nplt.title('Loss v. Epoch')\nplt.xlabel('Epoch')\nplt.ylabel('Loss')\nplt.legend()\nplt.show()", "Perplexity vs. Epoch", "%matplotlib inline\nfor k in logs.keys():\n plt.plot(logs[k][0], logs[k][3], label=str(k) + ' (train)')\n plt.plot(logs[k][0], logs[k][4], label=str(k) + ' (valid)')\nplt.title('Perplexity v. Epoch')\nplt.xlabel('Epoch')\nplt.ylabel('Perplexity')\nplt.legend()\nplt.show()", "Generations", "def print_sample(sample, best_bleu=None):\n enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])\n gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])\n print('Input: '+ enc_input + '\\n')\n print('Gend: ' + sample['generated'] + '\\n')\n print('True: ' + gold + '\\n')\n if best_bleu is not None:\n cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>'])\n print('Closest BLEU Match: ' + cbm + '\\n')\n print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\\n')\n print('\\n')\n \n\nfor i, sample in enumerate(report['train_samples']):\n print_sample(sample, report['best_bleu_matches_train'][i] if 'best_bleu_matches_train' in report else None)\n\nfor i, sample in enumerate(report['valid_samples']):\n print_sample(sample, report['best_bleu_matches_valid'][i] if 'best_bleu_matches_valid' in report else None)\n\nfor i, sample in enumerate(report['test_samples']):\n print_sample(sample, report['best_bleu_matches_test'][i] if 'best_bleu_matches_test' in report else None)", "BLEU Analysis", "def print_bleu(blue_struct):\n print 'Overall Score: ', blue_struct['score'], '\\n'\n print '1-gram Score: ', blue_struct['components']['1']\n print '2-gram Score: ', blue_struct['components']['2']\n print '3-gram Score: ', blue_struct['components']['3']\n print '4-gram Score: ', blue_struct['components']['4']\n\n# Training Set BLEU Scores\nprint_bleu(report['train_bleu'])\n\n# Validation Set BLEU Scores\nprint_bleu(report['valid_bleu'])\n\n# Test Set BLEU Scores\nprint_bleu(report['test_bleu'])\n\n# All Data BLEU Scores\nprint_bleu(report['combined_bleu'])", "N-pairs BLEU Analysis\nThis analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. 
We expect very low scores for the ground truth, while high scores expose hyper-common generations.", "# Training Set n-pairs BLEU Scores\nprint_bleu(report['n_pairs_bleu_train'])\n\n# Validation Set n-pairs BLEU Scores\nprint_bleu(report['n_pairs_bleu_valid'])\n\n# Test Set n-pairs BLEU Scores\nprint_bleu(report['n_pairs_bleu_test'])\n\n# Combined n-pairs BLEU Scores\nprint_bleu(report['n_pairs_bleu_all'])\n\n# Ground Truth n-pairs BLEU Scores\nprint_bleu(report['n_pairs_bleu_gold'])", "Alignment Analysis\nThis analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as n-pairs BLEU: we expect low scores for the ground truth, while hyper-common generations raise the scores.", "print 'Average (Train) Generated Score: ', report['average_alignment_train']\nprint 'Average (Valid) Generated Score: ', report['average_alignment_valid']\nprint 'Average (Test) Generated Score: ', report['average_alignment_test']\nprint 'Average (All) Generated Score: ', report['average_alignment_all']\nprint 'Average Gold Score: ', report['average_alignment_gold']" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
xrubio/simulationdh
doc/DH2016tutorial.ipynb
gpl-3.0
[ "Introduction to Simulation: Complex social dynamics in a few lines of code\nWe will create a model depicting competition between two cultural traits within a common population. This is a typical cultural dynamics scenario where individuals must adopt one option amongst two or more mutually exclusive options (i.e. religion, elections, football teams, ...). In this case we are interested in situations when you have to choose one option (e.g., you cannot practice two religions), but more complex versions with individuals adopting more than one trait can easily be developed.\nIndividuals can change their choice over time. The decision is based on the payoff of each trait. This payoff is a measure of the relative interest of the trait, based on: \na) how many people exhibits the trait and \nb) the attractiveness of the trait.\nAn example of this dynamic could be a competition between two different religions. The number of people practicing a belief makes this belief more appealing. However, some beliefs could be intrinsically more interesting for some individuals so part of the population could adopt them even if they are a minority. Finally, social norms are not static so the attractiveness of specific beliefs can vary over time.\n\nTime is divided in discrete steps starting at t = 0. \nAt each step t the two populations $A_t$ and $B_t$ are updated as individuals move from A to B and from B to A. Take into accout that the value here can be negative meaning that more people move from B to A.\n\\begin{equation}\nA_{t+1} = A_t + \\Delta AB\n\\end{equation}\n\\begin{equation}\nB_{t+1} = B_t - \\Delta AB\n\\end{equation} \nLet's see how we can express it in code. \n<!--XRC: no needed because they all use python 3.2\nWe start with an auxiliary line of code which prevents truncation when dividing integers. You don't have to worry about it now but just keep in mind to start any code written in Python 2.7 or earlier with this line. If you have installed Python 3.2 or higher you can skip this line.\nfrom __future__ import division\n-->\n\nFirst, we need to define the number of individuals in the population. Say we want to start with 100 people. The text after a hash symbol is just a comment so you can skip it for now, but in general it is a good idea to keep documenting the code as you write.", "N = 100 # total population size", "Then, we decide on how many believers of each cultural option (religion) we want to start with.", "A = 65 # initial number of believers A\nB = N - A # initial number of believers A", "Finally, we want to update these quantities at every time step depending on some variation:", "t = 0\nMAX_TIME = 100\nwhile t < MAX_TIME:\n A = A + variation\n B = B - variation\n \n # advance time to next iteration\n t = t + 1 ", "Type the code in the code tab. Keep in mind that indents are important. \nRun the code by hitting F5 (or click on a green triangle). What happens? \nWell, nothing happens or, rather, we get an error. \npython\nNameError: name variation is not defined\nIndeed, we have not defined what do we mean by 'variation'. So let's calculate it based on the population switching trait based on a comparison between payoffs. For example if B has higher payoff then A then we should get something like this:\n\\begin{equation}\n\\Delta_{A\\to B} = A · (payoff_{A\\to B} - payoff_{B\\to A})\n\\end{equation}\nSo the proportion of population A that switches to B is proportional to the difference between payoffs. 
As we mentioned the payoff of a trait is determined by the population exhibiting the competing trait as well as its intrinsic attractiveness.\nTo define the payoff we need to implement the following competition equations:\n\\begin{equation}\nPayoff_{B\\to A} = \\frac{A_t}{N}\\ \\frac {T_A} {(T_A + T_B)}\\\n\\end{equation} \n\\begin{equation}\nPayoff_{A\\to B} = \\frac{B_t}{N}\\ \\frac {T_B} {(T_A + T_B)}\\\n\\end{equation} \nLet's look at the equations a bit more closely. The first term is the proportion of the entire population N holding a particular cultural trait ($\\frac {A_t}{N}$ for $A$ and $\\frac {B_t}{N}$ for $B$). While the second element of the equations is the balance between the attractiveness of both ideas ($T_A$ and $T_B$) expressed as the attractiveness of the given trait in respect to the total 'available' attractiveness ($T_A + T_B$). \nYou have probably immediately noticed that these two equations are the same in structure and only differ in terms of what is put into them. Therefore, to avoid unnecessary hassle we will create a 'universal' function that can be used for both. Type the code below at the beginning of your script:", "def payoff(believers, Tx,Ty): \n proportionBelievers = believers/N\n attraction = Tx/(Ty + Tx)\n return proportionBelievers * attraction", "Let's break it down a little. First we define the function and give it the input - the number of believers and the two values that define how attractive each cultural option is. \npython\ndef payoff(believers, Tx, Ty):\nThen we calculate two values:\n\npercentage of population sharing this cultural option\npython\n proportionBelievers = believers/N \nhow attractive the option is\npython\n attraction = Tx/(Ty + Tx) \n\nLook at the equations above, the first element is just the $\\frac {A_t}{N}$ and $\\frac {B_t}{N}$ part and the second is this bit: $\\frac {T_A} {(T_A + T_B)}$. \nFinally we return the results of the calculations. \npython\n return proportionBelievers * attraction\nVoila! We have implemented the equation into Python code. Now, let's modify the main loop to call the function - we need to do it twice to get the payoff for changing from A to B and from B to A. This is repeated during each iteration of the loop, so each time we can pass different values into it. To get the payoffs for switching from A to B and from B to A we have to add the calls to 'payoff' at the beginning of our loop:", "while t < MAX_TIME:\n \n variationBA = payoff(A, Ta, Tb)\n variationAB = payoff(B, Tb, Ta)\n \n A = A + variation\n B = B - variation\n \n # advance time to next iteration\n t = t + 1 ", "That is, we pass the number of believers and the attractiveness of the traits. The order in which we pass the input variables into a function is important. In the B to A transmission, B becomes 'believers', while Ta becomes 'Tx' and Tb becomes 'Ty'. In the second line showing the A to B transmission, A becomes '_believers', Tb becomes 'Tx' and Ta becomes 'Ty'.\nThe obvious problem after this change is that we have not defined the 'attractiveness' of each trait. To do so add their definitions at the beginning of the script around other definitions (N, A, B, MAX_TIME, etc).", "Ta = 1.0 # initial attractiveness of option A\nTb = 2.0 # initial attractiveness of option B\nalpha = 0.1 # strength of the transmission process", "We can now calculate the difference between the perceived payoffs during this time step. 
To do so, we need to first see which one did better (A or B).", " difference = variationBA - variationAB", "Now, if the difference between the two is negative then we know that during time step B is more interesting than A. On the contrary, if it is positive then A seems better than B. What is left is to see how many people moved based on this difference between payoffs. We can express it in the main while loop, like this:", " # B -> A\n if difference > 0:\n variation = difference*B\n # A -> B \n else:\n variation = difference*A\n # update the population \n A = A + variation\n B = B - variation ", "We can use an additional term to have a control over how strong the transmission from A to B and back is - we will call it alpha (α). This parameter will multiply change before we modify populations A and B:", " variation = alpha*variation", "And we need to add it to the rest of the parameters of our model:", "# temporal dimension\nMAX_TIME = 100\nt = 0 # initial time\n\n# init populations\nN = 100 # population size\nA = 65 # initial population of believers A\nB = N-A # initial population of believers B\n\n# additional params\nTa = 1.0 # initial attractiveness of option A\nTb = 2.0 # initial attractiveness of option B\nalpha = 0.1 # strength of the transmission process", "You should bear in mind that the main loop code should be located after the definition of the transmission function and the initialization of variables (because they are used here). After all the edits you have done it should look like this:", "while t < MAX_TIME:\n # calculate the payoff for change of believers A and B in the current time step \n variationBA = payoff(A, Ta, Tb) \n variationAB = payoff(B, Tb, Ta) \n difference = variationBA - variationAB\n \n # B -> A \n if difference > 0:\n variation = difference*B\n # A -> B \n else:\n variation = difference*A\n \n # control the pace of change with alpha\n variation = alpha*variation \n \n # update the population \n A = A + variation\n B = B - variation\n \n # advance time to next iteration\n t = t + 1", "OK, we have all the elements ready now and if you run the code the computer will churn all the numbers somewhere in the background. However, it produces no output so we have no idea what is actually happening. Let's solve this by visualising the flow of believers from one option to another. \nFirst, we will create two empty lists. Second, we will add there the initial populations. Then, at each timestep, we will add the current number of believers to these lists and finally, at the end of the simulation run we will plot them to see how they changed over time. \nStart with creating two empty lists. 
Add the following code right after all the variable definitions at the beginning of the code before the while loop:", "# initialise the list used for plotting\nbelieversA = []\nbelieversB = []\n\n# add the initial populations\nbelieversA.append(A) \nbelieversB.append(B)", "The whole initialisation/definition block at the beginning of your code should look like this:", "# initialisation \nMAX_TIME = 100\nt = 0 # initial time\nN = 100 # population size\nA = 65 # initial proportion of believers A\nB = N-A # initial proportion of believers B\n\nTa = 1.0 # initial attractiveness of option A\nTb = 2.0 # initial attractiveness of option B\nalpha = 0.1 # strength of the transmission process\n\n# initialise the list used for plotting\nbelieversA = [] \nbelieversB = []\n\n# add the initial populations\nbelieversA.append(A) \nbelieversB.append(B)", "We just added the initial number of believers to their respective lists. However, we also need to do this at the end of each time step. Add the following code at the end of the while loop - remember to align the indents with the previous line!", " believersA.append(A)\n believersB.append(B)", "The whole while-loop block should now look like this:", "while t < MAX_TIME: \n # calculate the payoff for change of believers A and B in the current time step \n variationBA = payoff(A, Ta, Tb) \n variationAB = payoff(B, Tb, Ta) \n difference = variationBA - variationAB\n \n # B -> A \n if difference > 0:\n variation = difference*B\n # A -> B \n else:\n variation = difference*A\n \n # control the pace of change with alpha\n variation = alpha*variation \n \n # update the population \n A = A + variation\n B = B - variation \n \n # save the values to a list for plotting \n believersA.append(A)\n believersB.append(B)\n \n # advance time to next iteration\n t = t + 1", "Finally, let's plot the results. First, we will import Python's plotting library, Matplotlib and use a predifined plotting style. Add these two lines at the beginning of your script:", "import matplotlib.pyplot as plt # plotting library\nplt.style.use('ggplot') # makes the graphs look pretty", "Finally, let's plot! Plotting in Python is as easy as saying 'please plot this data for me'. Type these two lines at the very end of your code. We only want to plot the results once the simulation has finished so make sure this are not inside the while loop - that is, ensure this block of code is not indented. Run the code!", "%matplotlib inline\n# plot the results \nplt.plot(believersA)\nplt.plot(believersB)", "You can see how over time one set of believers increases while the other decreases. This is not particularly surprising as the attractiveness Tb is higher than Ta. However, can you imagine a configuration where this is not enough to sway the population? Have a go at setting the variables to different initial values to see what combination can counteract this pattern. \n1. set the initial value of A to 5, 10, 25, 50, 75\n2. set the MAX_TIME to 1000,\n3. set the Ta and Tb to 1.0, 10.0, 0.1, 0.01, etc.,\n4. set alpha to 0.01, and 1.0\nYou can try all sorts of configurations to see how quickly the population shifts from one option to another or what are the minimum values of each variable that prevent it. \nHowever, we can make the model more interesting if we allow the attractiveness of each option to change through time. To do so let's define a new function. 
Add the following line at the beginning of the while loop (remember indentation!).", " Ta, Tb = attractiveness(Ta, Tb) ", "Ta and Tb will then be modified based on some dynamics we want to model. Let's define the 'attractiveness' function. We have already done it once for the 'payoff' function so it should be a piece of cake. At each time step we will slightly modify the attractiveness of each trait using a kernel K that we will define. This can be expressed as: \n\\begin{equation}\nT_{A, t+1} = {T_A} + {K_a}\n\\end{equation} \n\\begin{equation}\nT_{B, t+1} = {T_B} + {K_b}\n\\end{equation} \nK can have several shapes such as:\n* Fixed traits with $K_a = K_b = 0$\n* A gaussian stochastic process such as $K = N (0, 1)$\n* A combination (e.g., $K_a = N (0, 1)$ and $K_b = 1/3)$\nLet's start with a simple case scenario, such as a) $T_a$ will increase each step by a fixed $K_a$ and b) $K_b$ is equal to zero (so $T_b$ will be fixed over the whole simulation).", "def attractiveness(Ta, Tb):\n\n Ka = 0.1 \n Kb = 0\n \n Ta = Ta + Ka\n Tb = Tb + Kb\n return Ta, Tb", "First, we define the function and give it the input values. \npython\ndef attractiveness(Ta, Tb):\nThen, we establish how much the attractiveness of each trait changes (i.e., define $K_a$ and $K_b$)\npython \nKa = 0.1 \nKb = 0\nAnd plug them into the equations:\npython \nTa = Ta + Ka\nTb = Tb + Kb\nFinally, we return the new values:\npython \nreturn Ta, Tb\nThis is how the function is defined. The main loop will now look like this:", "while t < MAX_TIME: \n # update attractiveness\n Ta, Tb = attractiveness(Ta, Tb)\n # calculate the payoff for change of believers A and B in the current time step \n variationBA = payoff(A, Ta, Tb) \n variationAB = payoff(B, Tb, Ta) \n difference = variationBA - variationAB\n \n # B -> A \n if difference > 0:\n variation = difference*B\n # A -> B \n else:\n variation = difference*A\n \n # control the pace of change with alpha\n variation = alpha*variation \n \n # update the population \n A = A + variation\n B = B - variation \n \n # save the values to a list for plotting \n believersA.append(A)\n believersB.append(B)\n \n # advance time to next iteration\n t = t + 1", "If you plot this you will get a different result than previously:", "plt.plot(believersA)\nplt.plot(believersB) ", "Have a go at changing the values of $K_a$ and $K_b$ and see what happens. Can you see any equilibrium where both traits coexist?\nThere are a number of functions we can use to dynamically change the 'attractiveness' of each trait. 
Try the following ones:", "import numpy as np # stick this line at the beginning of the script alongside other 'imports'\n\n\ndef attractiveness2(Ta, Tb):\n # temporal autocorrelation with stochasticity (normal distribution)\n # we get 2 samples from a normal distribution N(0,1)\n Ka, Kb = np.random.normal(0, 1, 2)\n # compute the difference between Ks\n diff = Ka-Kb\n # apply difference of Ks to attractiveness\n Ta += diff\n Tb -= diff\n return Ta, Tb\n\ndef attractiveness3(Ta, Tb):\n # anti-conformism dynamics (more population means less attractiveness)\n \n # both values initialized at 0\n Ka = 0\n Kb = 0\n \n # first we sample gamma with mean=last popSize of A times relevance \n diffPop = np.random.gamma(believersA[t])\n # we sustract from this value the same computation for population B\n diffPop = diffPop - np.random.gamma(believersB[t])\n \n # if B is larger then we need to increase the attractiveness of A\n if diffPop < 0:\n Ka = -diffPop\n # else A is larger and we need to increase the attractiveness of B\n else:\n Kb = diffPop\n \n # change current values\n Ta = Ta + Ka\n Tb = Tb + Kb\n \n return Ta, Tb", "Use the functions above (just change 'attractiveness' to, e.g., 'attractiveness2' in the main code) to see what happens when we add small dynamic variations to the process. What happens when the initial conditions are changed? What if we look at a much longer time scale?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
LimeeZ/phys292-2015-work
assignments/assignment06/InteractEx05.ipynb
mit
[ "Interact Exercise 5\nImports\nPut the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.", "%matplotlib inline\nfrom matplotlib import pyplot as plt\n\nimport numpy as np\nfrom IPython.html.widgets import interact, interactive, fixed\n\n\n\nfrom IPython.display import display, SVG\nfrom IPython.display import Javascript", "Interact with SVG display\nSVG is a simple way of drawing vector graphics in the browser. Here is a simple example of how SVG can be used to draw a circle in the Notebook:", "s = \"\"\"\n<svg width=\"100\" height=\"100\">\n <circle cx=\"50\" cy=\"50\" r=\"20\" fill=\"aquamarine\" />\n</svg>\n\"\"\"\n\n\nSVG(s)", "Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.", "def draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):\n \"\"\"Draw an SVG circle.\n \n Parameters\n ----------\n width : int\n The width of the svg drawing area in px.\n height : int\n The height of the svg drawing area in px.\n cx : int\n The x position of the center of the circle in px.\n cy : int\n The y position of the center of the circle in px.\n r : int\n The radius of the circle in px.\n fill : str\n The fill color of the circle.\n \"\"\"\n#TRIPLE QUOTES GIVE YOU LINE CAPABILITIES \n l=\"\"\"\n <svg width=\"%d\" height=\"%d\">\n <circle cx=\"%d\" cy=\"%d\" r=\"%d\" fill=\"%s\" />\n </svg> \n \"\"\"\n \n svg= l %(width, height, cx, cy, r, fill)\n \n display(SVG(svg))\n\ndraw_circle(cx=10, cy=10, r=10, fill='blue')\n\nassert True # leave this to grade the draw_circle function", "Use interactive to build a user interface for exploing the draw_circle function:\n\nwidth: a fixed value of 300px\nheight: a fixed value of 300px\ncx/cy: a slider in the range [0,300]\nr: a slider in the range [0,50]\nfill: a text area in which you can type a color's name\n\nSave the return value of interactive to a variable named w.", "#?interactive\n\n\nw = interactive(draw_circle, width = fixed(300), height = fixed(300), cx=[0,300], cy=[0,300], r=[0,50], fill = '')\nw\n\nc = w.children\nassert c[0].min==0 and c[0].max==300\nassert c[1].min==0 and c[1].max==300\nassert c[2].min==0 and c[2].max==50\nassert c[3].value=='red'", "Use the display function to show the widgets created by interactive:", "display(w)\n\nassert True # leave this to grade the display of the widget", "Play with the sliders to change the circles parameters interactively." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/vertex-ai-samples
notebooks/community/migration/UJ4 AutoML for structured data with Vertex AI Regression.ipynb
apache-2.0
[ "# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Vertex AI AutoML tables regression\nInstallation\nInstall the latest (preview) version of Vertex SDK.", "! pip3 install -U google-cloud-aiplatform --user", "Install the Google cloud-storage library as well.", "! pip3 install google-cloud-storage", "Restart the Kernel\nOnce you've installed the Vertex SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.", "import os\n\nif not os.getenv(\"AUTORUN\") and False:\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Before you begin\nGPU run-time\nMake sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU\nSet up your GCP project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the Vertex APIs and Compute Engine APIs.\n\n\nGoogle Cloud SDK is already installed in Google Cloud Notebooks.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.", "PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID", "Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend when possible, to choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou cannot use a Multi-Regional Storage bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see Region support for Vertex AI services", "REGION = \"us-central1\" # @param {type: \"string\"}", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.", "from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "Authenticate your GCP account\nIf you are using Google Cloud Notebooks, your environment is already\nauthenticated. 
Skip this step.\nNote: If you are on an Vertex notebook and run the cell, the cell knows to skip executing the authentication steps.", "import os\nimport sys\n\n# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your Google Cloud account. This provides access\n# to your Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\n# If on Vertex, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this tutorial in a notebook locally, replace the string\n # below with the path to your service account key and run this cell to\n # authenticate your Google Cloud account.\n else:\n %env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json\n\n # Log in to your account on Google Cloud\n ! gcloud auth login", "Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nThis tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.\nSet the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.", "BUCKET_NAME = \"[your-bucket-name]\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"[your-bucket-name]\":\n BUCKET_NAME = PROJECT_ID + \"aip-\" + TIMESTAMP", "Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.", "! gsutil mb -l $REGION gs://$BUCKET_NAME", "Finally, validate access to your Cloud Storage bucket by examining its contents:", "! 
gsutil ls -al gs://$BUCKET_NAME", "Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants\nImport Vertex SDK\nImport the Vertex SDK into our Python environment.", "import os\nimport sys\nimport time\n\nfrom google.cloud.aiplatform import gapic as aip\nfrom google.protobuf import json_format\nfrom google.protobuf.json_format import MessageToJson, ParseDict\nfrom google.protobuf.struct_pb2 import Struct, Value", "Vertex AI constants\nSetup up the following constants for Vertex AI:\n\nAPI_ENDPOINT: The Vertex AI API service endpoint for dataset, model, job, pipeline and endpoint services.\nAPI_PREDICT_ENDPOINT: The Vertex AI API service endpoint for prediction.\nPARENT: The Vertex AI location root path for dataset, model and endpoint resources.", "# API Endpoint\nAPI_ENDPOINT = \"{}-aiplatform.googleapis.com\".format(REGION)\n\n# Vertex AI location root path for your dataset, model and endpoint resources\nPARENT = \"projects/\" + PROJECT_ID + \"/locations/\" + REGION", "AutoML constants\nNext, setup constants unique to AutoML Table classification datasets and training:\n\nDataset Schemas: Tells the managed dataset service which type of dataset it is.\nData Labeling (Annotations) Schemas: Tells the managed dataset service how the data is labeled (annotated).\nDataset Training Schemas: Tells the Vertex AI Pipelines service the task (e.g., classification) to train the model for.", "# Tabular Dataset type\nTABLE_SCHEMA = \"google-cloud-aiplatform/schema/dataset/metadata/tables_1.0.0.yaml\"\n# Tabular Labeling type\nIMPORT_SCHEMA_TABLE_CLASSIFICATION = (\n \"gs://google-cloud-aiplatform/schema/dataset/ioformat/table_io_format_1.0.0.yaml\"\n)\n# Tabular Training task\nTRAINING_TABLE_CLASSIFICATION_SCHEMA = \"gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_tables_1.0.0.yaml\"", "Clients\nThe Vertex SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (Vertex).\nYou will use several clients in this tutorial, so set them all up upfront.\n\nDataset Service for managed datasets.\nModel Service for managed models.\nPipeline Service for training.\nEndpoint Service for deployment.\nJob Service for batch jobs and custom training.\nPrediction Service for serving. 
Note: Prediction has a different service endpoint.", "# client options same for all services\nclient_options = {\"api_endpoint\": API_ENDPOINT}\n\n\ndef create_dataset_client():\n client = aip.DatasetServiceClient(client_options=client_options)\n return client\n\n\ndef create_model_client():\n client = aip.ModelServiceClient(client_options=client_options)\n return client\n\n\ndef create_pipeline_client():\n client = aip.PipelineServiceClient(client_options=client_options)\n return client\n\n\ndef create_endpoint_client():\n client = aip.EndpointServiceClient(client_options=client_options)\n return client\n\n\ndef create_prediction_client():\n client = aip.PredictionServiceClient(client_options=client_options)\n return client\n\n\ndef create_job_client():\n client = aip.JobServiceClient(client_options=client_options)\n return client\n\n\nclients = {}\nclients[\"dataset\"] = create_dataset_client()\nclients[\"model\"] = create_model_client()\nclients[\"pipeline\"] = create_pipeline_client()\nclients[\"endpoint\"] = create_endpoint_client()\nclients[\"prediction\"] = create_prediction_client()\nclients[\"job\"] = create_job_client()\n\nfor client in clients.items():\n print(client)\n\nIMPORT_FILE = \"gs://cloud-ml-tables-data/bank-marketing.csv\"\n\n! gsutil cat $IMPORT_FILE | head -n 10", "Example output:\nAge,Job,MaritalStatus,Education,Default,Balance,Housing,Loan,Contact,Day,Month,Duration,Campaign,PDays,Previous,POutcome,Deposit\n58,management,married,tertiary,no,2143,yes,no,unknown,5,may,261,1,-1,0,unknown,1\n44,technician,single,secondary,no,29,yes,no,unknown,5,may,151,1,-1,0,unknown,1\n33,entrepreneur,married,secondary,no,2,yes,yes,unknown,5,may,76,1,-1,0,unknown,1\n47,blue-collar,married,unknown,no,1506,yes,no,unknown,5,may,92,1,-1,0,unknown,1\n33,unknown,single,unknown,no,1,no,no,unknown,5,may,198,1,-1,0,unknown,1\n35,management,married,tertiary,no,231,yes,no,unknown,5,may,139,1,-1,0,unknown,1\n28,management,single,tertiary,no,447,yes,yes,unknown,5,may,217,1,-1,0,unknown,1\n42,entrepreneur,divorced,tertiary,yes,2,yes,no,unknown,5,may,380,1,-1,0,unknown,1\n58,retired,married,primary,no,121,yes,no,unknown,5,may,50,1,-1,0,unknown,1\nCreate a dataset\nprojects.locations.datasets.create\nRequest", "DATA_SCHEMA = TABLE_SCHEMA\n\nmetadata = {\n \"input_config\": {\n \"gcs_source\": {\n \"uri\": [IMPORT_FILE],\n }\n }\n}\n\ndataset = {\n \"display_name\": \"bank_\" + TIMESTAMP,\n \"metadata_schema_uri\": \"gs://\" + DATA_SCHEMA,\n \"metadata\": json_format.ParseDict(metadata, Value()),\n}\n\nprint(\n MessageToJson(\n aip.CreateDatasetRequest(\n parent=PARENT,\n dataset=dataset,\n ).__dict__[\"_pb\"]\n )\n)", "Example output:\n{\n \"parent\": \"projects/migration-ucaip-training/locations/us-central1\",\n \"dataset\": {\n \"displayName\": \"bank_20210226015209\",\n \"metadataSchemaUri\": \"gs://google-cloud-aiplatform/schema/dataset/metadata/tables_1.0.0.yaml\",\n \"metadata\": {\n \"input_config\": {\n \"gcs_source\": {\n \"uri\": [\n \"gs://cloud-ml-tables-data/bank-marketing.csv\"\n ]\n }\n }\n }\n }\n}\nCall", "request = clients[\"dataset\"].create_dataset(parent=PARENT, dataset=dataset)", "Response", "result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))", "Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/datasets/7748812594797871104\",\n \"displayName\": \"bank_20210226015209\",\n \"metadataSchemaUri\": \"gs://google-cloud-aiplatform/schema/dataset/metadata/tabular_1.0.0.yaml\",\n \"labels\": {\n 
\"aiplatform.googleapis.com/dataset_metadata_schema\": \"TABLE\"\n },\n \"metadata\": {\n \"inputConfig\": {\n \"gcsSource\": {\n \"uri\": [\n \"gs://cloud-ml-tables-data/bank-marketing.csv\"\n ]\n }\n }\n }\n}", "# The full unique ID for the dataset\ndataset_id = result.name\n# The short numeric ID for the dataset\ndataset_short_id = dataset_id.split(\"/\")[-1]\n\nprint(dataset_id)", "Train a model\nprojects.locations.trainingPipelines.create\nRequest", "TRAINING_SCHEMA = TRAINING_TABLE_CLASSIFICATION_SCHEMA\n\nTRANSFORMATIONS = [\n {\"auto\": {\"column_name\": \"Age\"}},\n {\"auto\": {\"column_name\": \"Job\"}},\n {\"auto\": {\"column_name\": \"MaritalStatus\"}},\n {\"auto\": {\"column_name\": \"Education\"}},\n {\"auto\": {\"column_name\": \"Default\"}},\n {\"auto\": {\"column_name\": \"Balance\"}},\n {\"auto\": {\"column_name\": \"Housing\"}},\n {\"auto\": {\"column_name\": \"Loan\"}},\n {\"auto\": {\"column_name\": \"Contact\"}},\n {\"auto\": {\"column_name\": \"Day\"}},\n {\"auto\": {\"column_name\": \"Month\"}},\n {\"auto\": {\"column_name\": \"Duration\"}},\n {\"auto\": {\"column_name\": \"Campaign\"}},\n {\"auto\": {\"column_name\": \"PDays\"}},\n {\"auto\": {\"column_name\": \"POutcome\"}},\n]\n\ntask = Value(\n struct_value=Struct(\n fields={\n \"disable_early_stopping\": Value(bool_value=False),\n \"prediction_type\": Value(string_value=\"regression\"),\n \"target_column\": Value(string_value=\"Deposit\"),\n \"train_budget_milli_node_hours\": Value(number_value=1000),\n \"transformations\": json_format.ParseDict(TRANSFORMATIONS, Value()),\n }\n )\n)\n\ntraining_pipeline = {\n \"display_name\": \"bank_\" + TIMESTAMP,\n \"input_data_config\": {\n \"dataset_id\": dataset_short_id,\n \"fraction_split\": {\n \"training_fraction\": 0.8,\n \"validation_fraction\": 0.1,\n \"test_fraction\": 0.1,\n },\n },\n \"model_to_upload\": {\n \"display_name\": \"flowers_\" + TIMESTAMP,\n },\n \"training_task_definition\": TRAINING_SCHEMA,\n \"training_task_inputs\": task,\n}\n\nprint(\n MessageToJson(\n aip.CreateTrainingPipelineRequest(\n parent=PARENT,\n training_pipeline=training_pipeline,\n ).__dict__[\"_pb\"]\n )\n)", "Example output:\n{\n \"parent\": \"projects/migration-ucaip-training/locations/us-central1\",\n \"trainingPipeline\": {\n \"displayName\": \"bank_20210226015209\",\n \"inputDataConfig\": {\n \"datasetId\": \"7748812594797871104\",\n \"fractionSplit\": {\n \"trainingFraction\": 0.8,\n \"validationFraction\": 0.1,\n \"testFraction\": 0.1\n }\n },\n \"trainingTaskDefinition\": \"gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_tables_1.0.0.yaml\",\n \"trainingTaskInputs\": {\n \"transformations\": [\n {\n \"auto\": {\n \"column_name\": \"Age\"\n }\n },\n {\n \"auto\": {\n \"column_name\": \"Job\"\n }\n },\n {\n \"auto\": {\n \"column_name\": \"MaritalStatus\"\n }\n },\n {\n \"auto\": {\n \"column_name\": \"Education\"\n }\n },\n {\n \"auto\": {\n \"column_name\": \"Default\"\n }\n },\n {\n \"auto\": {\n \"column_name\": \"Balance\"\n }\n },\n {\n \"auto\": {\n \"column_name\": \"Housing\"\n }\n },\n {\n \"auto\": {\n \"column_name\": \"Loan\"\n }\n },\n {\n \"auto\": {\n \"column_name\": \"Contact\"\n }\n },\n {\n \"auto\": {\n \"column_name\": \"Day\"\n }\n },\n {\n \"auto\": {\n \"column_name\": \"Month\"\n }\n },\n {\n \"auto\": {\n \"column_name\": \"Duration\"\n }\n },\n {\n \"auto\": {\n \"column_name\": \"Campaign\"\n }\n },\n {\n \"auto\": {\n \"column_name\": \"PDays\"\n }\n },\n {\n \"auto\": {\n \"column_name\": \"POutcome\"\n }\n }\n ],\n 
\"prediction_type\": \"regression\",\n \"disable_early_stopping\": false,\n \"train_budget_milli_node_hours\": 1000.0,\n \"target_column\": \"Deposit\"\n },\n \"modelToUpload\": {\n \"displayName\": \"flowers_20210226015209\"\n }\n }\n}\nCall", "request = clients[\"pipeline\"].create_training_pipeline(\n parent=PARENT, training_pipeline=training_pipeline\n)", "Response", "print(MessageToJson(request.__dict__[\"_pb\"]))", "Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/trainingPipelines/3147717072369221632\",\n \"displayName\": \"bank_20210226015209\",\n \"inputDataConfig\": {\n \"datasetId\": \"7748812594797871104\",\n \"fractionSplit\": {\n \"trainingFraction\": 0.8,\n \"validationFraction\": 0.1,\n \"testFraction\": 0.1\n }\n },\n \"trainingTaskDefinition\": \"gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_tables_1.0.0.yaml\",\n \"trainingTaskInputs\": {\n \"targetColumn\": \"Deposit\",\n \"trainBudgetMilliNodeHours\": \"1000\",\n \"transformations\": [\n {\n \"auto\": {\n \"columnName\": \"Age\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"Job\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"MaritalStatus\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"Education\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"Default\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"Balance\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"Housing\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"Loan\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"Contact\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"Day\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"Month\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"Duration\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"Campaign\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"PDays\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"POutcome\"\n }\n }\n ],\n \"predictionType\": \"regression\"\n },\n \"modelToUpload\": {\n \"displayName\": \"flowers_20210226015209\"\n },\n \"state\": \"PIPELINE_STATE_PENDING\",\n \"createTime\": \"2021-02-26T01:57:51.364312Z\",\n \"updateTime\": \"2021-02-26T01:57:51.364312Z\"\n}", "# The full unique ID for the training pipeline\ntraining_pipeline_id = request.name\n# The short numeric ID for the training pipeline\ntraining_pipeline_short_id = training_pipeline_id.split(\"/\")[-1]\n\nprint(training_pipeline_id)", "projects.locations.trainingPipelines.get\nCall", "request = clients[\"pipeline\"].get_training_pipeline(name=training_pipeline_id)", "Response", "print(MessageToJson(request.__dict__[\"_pb\"]))", "Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/trainingPipelines/3147717072369221632\",\n \"displayName\": \"bank_20210226015209\",\n \"inputDataConfig\": {\n \"datasetId\": \"7748812594797871104\",\n \"fractionSplit\": {\n \"trainingFraction\": 0.8,\n \"validationFraction\": 0.1,\n \"testFraction\": 0.1\n }\n },\n \"trainingTaskDefinition\": \"gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_tables_1.0.0.yaml\",\n \"trainingTaskInputs\": {\n \"trainBudgetMilliNodeHours\": \"1000\",\n \"transformations\": [\n {\n \"auto\": {\n \"columnName\": \"Age\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"Job\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"MaritalStatus\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"Education\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"Default\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"Balance\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"Housing\"\n }\n },\n {\n \"auto\": {\n 
\"columnName\": \"Loan\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"Contact\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"Day\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"Month\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"Duration\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"Campaign\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"PDays\"\n }\n },\n {\n \"auto\": {\n \"columnName\": \"POutcome\"\n }\n }\n ],\n \"targetColumn\": \"Deposit\",\n \"predictionType\": \"regression\"\n },\n \"modelToUpload\": {\n \"displayName\": \"flowers_20210226015209\"\n },\n \"state\": \"PIPELINE_STATE_PENDING\",\n \"createTime\": \"2021-02-26T01:57:51.364312Z\",\n \"updateTime\": \"2021-02-26T01:57:51.364312Z\"\n}", "while True:\n response = clients[\"pipeline\"].get_training_pipeline(name=training_pipeline_id)\n if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:\n print(\"Training job has not completed:\", response.state)\n model_to_deploy_name = None\n if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:\n break\n else:\n model_id = response.model_to_upload.name\n print(\"Training Time:\", response.end_time - response.start_time)\n break\n time.sleep(60)\n\nprint(model_id)", "Evaluate the model\nprojects.locations.models.evaluations.list\nCall", "request = clients[\"model\"].list_model_evaluations(parent=model_id)", "Response", "import json\n\nmodel_evaluations = [json.loads(MessageToJson(me.__dict__[\"_pb\"])) for me in request]\n# The evaluation slice\nevaluation_slice = request.model_evaluations[0].name\n\nprint(json.dumps(model_evaluations, indent=2))", "Example output:\n[\n {\n \"name\": \"projects/116273516712/locations/us-central1/models/3936304403996213248/evaluations/6323797633322037836\",\n \"metricsSchemaUri\": \"gs://google-cloud-aiplatform/schema/modelevaluation/regression_metrics_1.0.0.yaml\",\n \"metrics\": {\n \"rSquared\": 0.39799774,\n \"meanAbsolutePercentageError\": 9.791032,\n \"rootMeanSquaredError\": 0.24675915,\n \"rootMeanSquaredLogError\": 0.10022795,\n \"meanAbsoluteError\": 0.12842195\n },\n \"createTime\": \"2021-02-26T03:39:42.254525Z\"\n }\n]\nprojects.locations.models.evaluations.get\nCall", "request = clients[\"model\"].get_model_evaluation(name=evaluation_slice)", "Response", "print(MessageToJson(request.__dict__[\"_pb\"]))", "Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/models/3936304403996213248/evaluations/6323797633322037836\",\n \"metricsSchemaUri\": \"gs://google-cloud-aiplatform/schema/modelevaluation/regression_metrics_1.0.0.yaml\",\n \"metrics\": {\n \"meanAbsolutePercentageError\": 9.791032,\n \"rootMeanSquaredLogError\": 0.10022795,\n \"rSquared\": 0.39799774,\n \"meanAbsoluteError\": 0.12842195,\n \"rootMeanSquaredError\": 0.24675915\n },\n \"createTime\": \"2021-02-26T03:39:42.254525Z\"\n}\nMake batch predictions\nMake the batch input file\nLet's now make a batch input file, which you store in your local Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use CSV in this tutorial.ion on.", "! gsutil cat $IMPORT_FILE | head -n 1 > tmp.csv\n! gsutil cat $IMPORT_FILE | tail -n 10 >> tmp.csv\n\n! cut -d, -f1-16 tmp.csv > batch.csv\n\ngcs_input_uri = \"gs://\" + BUCKET_NAME + \"/test.csv\"\n\n! gsutil cp batch.csv $gcs_input_uri\n\n! 
gsutil cat $gcs_input_uri", "Example output:\nAge,Job,MaritalStatus,Education,Default,Balance,Housing,Loan,Contact,Day,Month,Duration,Campaign,PDays,Previous,POutcome\n53,management,married,tertiary,no,583,no,no,cellular,17,nov,226,1,184,4,success\n34,admin.,single,secondary,no,557,no,no,cellular,17,nov,224,1,-1,0,unknown\n23,student,single,tertiary,no,113,no,no,cellular,17,nov,266,1,-1,0,unknown\n73,retired,married,secondary,no,2850,no,no,cellular,17,nov,300,1,40,8,failure\n25,technician,single,secondary,no,505,no,yes,cellular,17,nov,386,2,-1,0,unknown\n51,technician,married,tertiary,no,825,no,no,cellular,17,nov,977,3,-1,0,unknown\n71,retired,divorced,primary,no,1729,no,no,cellular,17,nov,456,2,-1,0,unknown\n72,retired,married,secondary,no,5715,no,no,cellular,17,nov,1127,5,184,3,success\n57,blue-collar,married,secondary,no,668,no,no,telephone,17,nov,508,4,-1,0,unknown\n37,entrepreneur,married,secondary,no,2971,no,no,cellular,17,nov,361,2,188,11,other\nprojects.locations.batchPredictionJobs.create\nRequest", "batch_prediction_job = {\n \"display_name\": \"bank_\" + TIMESTAMP,\n \"model\": model_id,\n \"input_config\": {\n \"instances_format\": \"csv\",\n \"gcs_source\": {\n \"uris\": [gcs_input_uri],\n },\n },\n \"output_config\": {\n \"predictions_format\": \"csv\",\n \"gcs_destination\": {\n \"output_uri_prefix\": \"gs://\" + f\"{BUCKET_NAME}/batch_output/\",\n },\n },\n \"dedicated_resources\": {\n \"machine_spec\": {\n \"machine_type\": \"n1-standard-2\",\n \"accelerator_count\": 0,\n },\n \"starting_replica_count\": 1,\n \"max_replica_count\": 1,\n },\n}\n\nprint(\n MessageToJson(\n aip.CreateBatchPredictionJobRequest(\n parent=PARENT,\n batch_prediction_job=batch_prediction_job,\n ).__dict__[\"_pb\"]\n )\n)", "Example output:\n{\n \"parent\": \"projects/migration-ucaip-training/locations/us-central1\",\n \"batchPredictionJob\": {\n \"displayName\": \"bank_20210226015209\",\n \"model\": \"projects/116273516712/locations/us-central1/models/3936304403996213248\",\n \"inputConfig\": {\n \"instancesFormat\": \"csv\",\n \"gcsSource\": {\n \"uris\": [\n \"gs://migration-ucaip-trainingaip-20210226015209/test.csv\"\n ]\n }\n },\n \"outputConfig\": {\n \"predictionsFormat\": \"csv\",\n \"gcsDestination\": {\n \"outputUriPrefix\": \"gs://migration-ucaip-trainingaip-20210226015209/batch_output/\"\n }\n },\n \"dedicatedResources\": {\n \"machineSpec\": {\n \"machineType\": \"n1-standard-2\"\n },\n \"startingReplicaCount\": 1,\n \"maxReplicaCount\": 1\n }\n }\n}\nCall", "request = clients[\"job\"].create_batch_prediction_job(\n parent=PARENT, batch_prediction_job=batch_prediction_job\n)", "Response", "print(MessageToJson(request.__dict__[\"_pb\"]))", "Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/batchPredictionJobs/4417450692310990848\",\n \"displayName\": \"bank_20210226015209\",\n \"model\": \"projects/116273516712/locations/us-central1/models/3936304403996213248\",\n \"inputConfig\": {\n \"instancesFormat\": \"csv\",\n \"gcsSource\": {\n \"uris\": [\n \"gs://migration-ucaip-trainingaip-20210226015209/test.csv\"\n ]\n }\n },\n \"outputConfig\": {\n \"predictionsFormat\": \"csv\",\n \"gcsDestination\": {\n \"outputUriPrefix\": \"gs://migration-ucaip-trainingaip-20210226015209/batch_output/\"\n }\n },\n \"state\": \"JOB_STATE_PENDING\",\n \"createTime\": \"2021-02-26T09:35:43.113270Z\",\n \"updateTime\": \"2021-02-26T09:35:43.113270Z\"\n}", "# The fully qualified ID for the batch job\nbatch_job_id = request.name\n# The short numeric ID for the batch 
job\nbatch_job_short_id = batch_job_id.split(\"/\")[-1]\n\nprint(batch_job_id)", "projects.locations.batchPredictionJobs.get\nCall", "request = clients[\"job\"].get_batch_prediction_job(name=batch_job_id)", "Response", "print(MessageToJson(request.__dict__[\"_pb\"]))", "Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/batchPredictionJobs/4417450692310990848\",\n \"displayName\": \"bank_20210226015209\",\n \"model\": \"projects/116273516712/locations/us-central1/models/3936304403996213248\",\n \"inputConfig\": {\n \"instancesFormat\": \"csv\",\n \"gcsSource\": {\n \"uris\": [\n \"gs://migration-ucaip-trainingaip-20210226015209/test.csv\"\n ]\n }\n },\n \"outputConfig\": {\n \"predictionsFormat\": \"csv\",\n \"gcsDestination\": {\n \"outputUriPrefix\": \"gs://migration-ucaip-trainingaip-20210226015209/batch_output/\"\n }\n },\n \"state\": \"JOB_STATE_PENDING\",\n \"createTime\": \"2021-02-26T09:35:43.113270Z\",\n \"updateTime\": \"2021-02-26T09:35:43.113270Z\"\n}", "def get_latest_predictions(gcs_out_dir):\n \"\"\" Get the latest prediction subfolder using the timestamp in the subfolder name\"\"\"\n folders = !gsutil ls $gcs_out_dir\n latest = \"\"\n for folder in folders:\n subfolder = folder.split(\"/\")[-2]\n if subfolder.startswith(\"prediction-\"):\n if subfolder > latest:\n latest = folder[:-1]\n return latest\n\n\nwhile True:\n response = clients[\"job\"].get_batch_prediction_job(name=batch_job_id)\n if response.state != aip.JobState.JOB_STATE_SUCCEEDED:\n print(\"The job has not completed:\", response.state)\n if response.state == aip.JobState.JOB_STATE_FAILED:\n break\n else:\n folder = get_latest_predictions(\n response.output_config.gcs_destination.output_uri_prefix\n )\n ! gsutil ls $folder/prediction*\n\n ! gsutil cat $folder/prediction*\n break\n time.sleep(60)", "Example output:\ngs://migration-ucaip-trainingaip-20210226015209/batch_output/prediction-flowers_20210226015209-2021-02-26T09:35:43.034287Z/predictions_1.csv\nAge,Balance,Campaign,Contact,Day,Default,Duration,Education,Housing,Job,Loan,MaritalStatus,Month,PDays,POutcome,Previous,predicted_Deposit\n72,5715,5,cellular,17,no,1127,secondary,no,retired,no,married,nov,184,success,3,1.6232702732086182\n23,113,1,cellular,17,no,266,tertiary,no,student,no,single,nov,-1,unknown,0,1.3257474899291992\n34,557,1,cellular,17,no,224,secondary,no,admin.,no,single,nov,-1,unknown,0,1.0801490545272827\n25,505,2,cellular,17,no,386,secondary,no,technician,yes,single,nov,-1,unknown,0,1.2516863346099854\n73,2850,1,cellular,17,no,300,secondary,no,retired,no,married,nov,40,failure,8,1.5064295530319214\n37,2971,2,cellular,17,no,361,secondary,no,entrepreneur,no,married,nov,188,other,11,1.1924527883529663\n57,668,4,telephone,17,no,508,secondary,no,blue-collar,no,married,nov,-1,unknown,0,1.1636843681335449\nMake online predictions\nprojects.locations.endpoints.create\nRequest", "endpoint = {\"display_name\": \"bank_\" + TIMESTAMP}\n\nprint(\n MessageToJson(\n aip.CreateEndpointRequest(\n parent=PARENT,\n endpoint=endpoint,\n ).__dict__[\"_pb\"]\n )\n)", "Example output:\n{\n \"parent\": \"projects/migration-ucaip-training/locations/us-central1\",\n \"endpoint\": {\n \"displayName\": \"bank_20210226015209\"\n }\n}\nCall", "request = clients[\"endpoint\"].create_endpoint(parent=PARENT, endpoint=endpoint)", "Response", "result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))", "Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/endpoints/6899338707271155712\"\n}", "# The fully 
qualified ID for the endpoint\nendpoint_id = result.name\n# The short numeric ID for the endpoint\nendpoint_short_id = endpoint_id.split(\"/\")[-1]\n\nprint(endpoint_id)", "projects.locations.endpoints.deployModel\nRequest", "deployed_model = {\n \"model\": model_id,\n \"display_name\": \"bank_\" + TIMESTAMP,\n \"dedicated_resources\": {\n \"min_replica_count\": 1,\n \"machine_spec\": {\"machine_type\": \"n1-standard-2\"},\n },\n}\n\ntraffic_split = {\"0\": 100}\n\nprint(\n MessageToJson(\n aip.DeployModelRequest(\n endpoint=endpoint_id,\n deployed_model=deployed_model,\n traffic_split=traffic_split,\n ).__dict__[\"_pb\"]\n )\n)", "Example output:\n{\n \"endpoint\": \"projects/116273516712/locations/us-central1/endpoints/6899338707271155712\",\n \"deployedModel\": {\n \"model\": \"projects/116273516712/locations/us-central1/models/3936304403996213248\",\n \"displayName\": \"bank_20210226015209\",\n \"dedicatedResources\": {\n \"machineSpec\": {\n \"machineType\": \"n1-standard-2\"\n },\n \"minReplicaCount\": 1\n }\n },\n \"trafficSplit\": {\n \"0\": 100\n }\n}\nCall", "request = clients[\"endpoint\"].deploy_model(\n endpoint=endpoint_id, deployed_model=deployed_model, traffic_split=traffic_split\n)", "Response", "result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))", "Example output:\n{\n \"deployedModel\": {\n \"id\": \"7646795507926302720\"\n }\n}", "# The numeric ID for the deploy model\ndeploy_model_id = result.deployed_model.id\n\nprint(deploy_model_id)", "projects.locations.endpoints.predict\nPrepare data item for online prediction", "INSTANCE = {\n \"Age\": \"58\",\n \"Job\": \"managment\",\n \"MaritalStatus\": \"married\",\n \"Education\": \"teritary\",\n \"Default\": \"no\",\n \"Balance\": \"2143\",\n \"Housing\": \"yes\",\n \"Loan\": \"no\",\n \"Contact\": \"unknown\",\n \"Day\": \"5\",\n \"Month\": \"may\",\n \"Duration\": \"261\",\n \"Campaign\": \"1\",\n \"PDays\": \"-1\",\n \"Previous\": 0,\n \"POutcome\": \"unknown\",\n}", "Request", "instances_list = [INSTANCE]\ninstances = [json_format.ParseDict(s, Value()) for s in instances_list]\n\nrequest = aip.PredictRequest(\n endpoint=endpoint_id,\n)\nrequest.instances.append(instances)\n\nprint(MessageToJson(request.__dict__[\"_pb\"]))", "Example output:\n{\n \"endpoint\": \"projects/116273516712/locations/us-central1/endpoints/6899338707271155712\",\n \"instances\": [\n [\n {\n \"Education\": \"teritary\",\n \"MaritalStatus\": \"married\",\n \"Balance\": \"2143\",\n \"Contact\": \"unknown\",\n \"Housing\": \"yes\",\n \"Previous\": 0.0,\n \"Loan\": \"no\",\n \"Duration\": \"261\",\n \"Default\": \"no\",\n \"Day\": \"5\",\n \"POutcome\": \"unknown\",\n \"Age\": \"58\",\n \"Month\": \"may\",\n \"PDays\": \"-1\",\n \"Campaign\": \"1\",\n \"Job\": \"managment\"\n }\n ]\n ]\n}\nCall", "request = clients[\"prediction\"].predict(endpoint=endpoint_id, instances=instances)", "Response", "print(MessageToJson(request.__dict__[\"_pb\"]))", "Example output:\n{\n \"predictions\": [\n {\n \"upper_bound\": 1.685426712036133,\n \"value\": 1.007092595100403,\n \"lower_bound\": 0.06719603389501572\n }\n ],\n \"deployedModelId\": \"7646795507926302720\"\n}\nprojects.locations.endpoints.undeployModel\nCall", "request = clients[\"endpoint\"].undeploy_model(\n endpoint=endpoint_id, deployed_model_id=deploy_model_id, traffic_split={}\n)", "Response", "result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))", "Example output:\n{}\nCleaning up\nTo clean up all GCP resources used in this project, you can delete the 
GCP\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial.", "delete_dataset = True\ndelete_model = True\ndelete_endpoint = True\ndelete_pipeline = True\ndelete_batchjob = True\ndelete_bucket = True\n\n# Delete the dataset using the Vertex AI fully qualified identifier for the dataset\ntry:\n if delete_dataset:\n clients[\"dataset\"].delete_dataset(name=dataset_id)\nexcept Exception as e:\n print(e)\n\n# Delete the model using the Vertex AI fully qualified identifier for the model\ntry:\n if delete_model:\n clients[\"model\"].delete_model(name=model_id)\nexcept Exception as e:\n print(e)\n\n# Delete the endpoint using the Vertex AI fully qualified identifier for the endpoint\ntry:\n if delete_endpoint:\n clients[\"endpoint\"].delete_endpoint(name=endpoint_id)\nexcept Exception as e:\n print(e)\n\n# Delete the training pipeline using the Vertex AI fully qualified identifier for the training pipeline\ntry:\n if delete_pipeline:\n clients[\"pipeline\"].delete_training_pipeline(name=training_pipeline_id)\nexcept Exception as e:\n print(e)\n\n# Delete the batch job using the Vertex AI fully qualified identifier for the batch job\ntry:\n if delete_batchjob:\n clients[\"job\"].delete_batch_prediction_job(name=batch_job_id)\nexcept Exception as e:\n print(e)\n\nif delete_bucket and \"BUCKET_NAME\" in globals():\n ! gsutil rm -r gs://$BUCKET_NAME" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/en-snapshot/guide/keras/customizing_what_happens_in_fit.ipynb
apache-2.0
[ "Copyright 2020 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Customize what happens in Model.fit\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/snapshot-keras/site/en/guide/keras/customizing_what_happens_in_fit.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/keras-team/keras-io/blob/master/guides/customizing_what_happens_in_fit.py\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/keras/customizing_what_happens_in_fit.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nIntroduction\nWhen you're doing supervised learning, you can use fit() and everything works\nsmoothly.\nWhen you need to write your own training loop from scratch, you can use the\nGradientTape and take control of every little detail.\nBut what if you need a custom training algorithm, but you still want to benefit from\nthe convenient features of fit(), such as callbacks, built-in distribution support,\nor step fusing?\nA core principle of Keras is progressive disclosure of complexity. You should\nalways be able to get into lower-level workflows in a gradual way. You shouldn't fall\noff a cliff if the high-level functionality doesn't exactly match your use case. You\nshould be able to gain more control over the small details while retaining a\ncommensurate amount of high-level convenience.\nWhen you need to customize what fit() does, you should override the training step\nfunction of the Model class. This is the function that is called by fit() for\nevery batch of data. You will then be able to call fit() as usual -- and it will be\nrunning your own learning algorithm.\nNote that this pattern does not prevent you from building models with the Functional\nAPI. 
You can do this whether you're building Sequential models, Functional API\nmodels, or subclassed models.\nLet's see how that works.\nSetup\nRequires TensorFlow 2.2 or later.", "import tensorflow as tf\nfrom tensorflow import keras", "A first simple example\nLet's start from a simple example:\n\nWe create a new class that subclasses keras.Model.\nWe just override the method train_step(self, data).\nWe return a dictionary mapping metric names (including the loss) to their current\nvalue.\n\nThe input argument data is what gets passed to fit as training data:\n\nIf you pass Numpy arrays, by calling fit(x, y, ...), then data will be the tuple\n(x, y)\nIf you pass a tf.data.Dataset, by calling fit(dataset, ...), then data will be\nwhat gets yielded by dataset at each batch.\n\nIn the body of the train_step method, we implement a regular training update,\nsimilar to what you are already familiar with. Importantly, we compute the loss via\nself.compiled_loss, which wraps the loss(es) function(s) that were passed to\ncompile().\nSimilarly, we call self.compiled_metrics.update_state(y, y_pred) to update the state\nof the metrics that were passed in compile(), and we query results from\nself.metrics at the end to retrieve their current value.", "class CustomModel(keras.Model):\n def train_step(self, data):\n # Unpack the data. Its structure depends on your model and\n # on what you pass to `fit()`.\n x, y = data\n\n with tf.GradientTape() as tape:\n y_pred = self(x, training=True) # Forward pass\n # Compute the loss value\n # (the loss function is configured in `compile()`)\n loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)\n\n # Compute gradients\n trainable_vars = self.trainable_variables\n gradients = tape.gradient(loss, trainable_vars)\n # Update weights\n self.optimizer.apply_gradients(zip(gradients, trainable_vars))\n # Update metrics (includes the metric that tracks the loss)\n self.compiled_metrics.update_state(y, y_pred)\n # Return a dict mapping metric names to current value\n return {m.name: m.result() for m in self.metrics}\n", "Let's try this out:", "import numpy as np\n\n# Construct and compile an instance of CustomModel\ninputs = keras.Input(shape=(32,))\noutputs = keras.layers.Dense(1)(inputs)\nmodel = CustomModel(inputs, outputs)\nmodel.compile(optimizer=\"adam\", loss=\"mse\", metrics=[\"mae\"])\n\n# Just use `fit` as usual\nx = np.random.random((1000, 32))\ny = np.random.random((1000, 1))\nmodel.fit(x, y, epochs=3)", "Going lower-level\nNaturally, you could just skip passing a loss function in compile(), and instead do\neverything manually in train_step. Likewise for metrics.\nHere's a lower-level\nexample, that only uses compile() to configure the optimizer:\n\nWe start by creating Metric instances to track our loss and a MAE score.\nWe implement a custom train_step() that updates the state of these metrics\n(by calling update_state() on them), then query them (via result()) to return their current average value,\nto be displayed by the progress bar and to be pass to any callback.\nNote that we would need to call reset_states() on our metrics between each epoch! Otherwise\ncalling result() would return an average since the start of training, whereas we usually work\nwith per-epoch averages. Thankfully, the framework can do that for us: just list any metric\nyou want to reset in the metrics property of the model. 
The model will call reset_states()\non any object listed here at the beginning of each fit() epoch or at the beginning of a call to\nevaluate().", "loss_tracker = keras.metrics.Mean(name=\"loss\")\nmae_metric = keras.metrics.MeanAbsoluteError(name=\"mae\")\n\n\nclass CustomModel(keras.Model):\n def train_step(self, data):\n x, y = data\n\n with tf.GradientTape() as tape:\n y_pred = self(x, training=True) # Forward pass\n # Compute our own loss\n loss = keras.losses.mean_squared_error(y, y_pred)\n\n # Compute gradients\n trainable_vars = self.trainable_variables\n gradients = tape.gradient(loss, trainable_vars)\n\n # Update weights\n self.optimizer.apply_gradients(zip(gradients, trainable_vars))\n\n # Compute our own metrics\n loss_tracker.update_state(loss)\n mae_metric.update_state(y, y_pred)\n return {\"loss\": loss_tracker.result(), \"mae\": mae_metric.result()}\n\n @property\n def metrics(self):\n # We list our `Metric` objects here so that `reset_states()` can be\n # called automatically at the start of each epoch\n # or at the start of `evaluate()`.\n # If you don't implement this property, you have to call\n # `reset_states()` yourself at the time of your choosing.\n return [loss_tracker, mae_metric]\n\n\n# Construct an instance of CustomModel\ninputs = keras.Input(shape=(32,))\noutputs = keras.layers.Dense(1)(inputs)\nmodel = CustomModel(inputs, outputs)\n\n# We don't passs a loss or metrics here.\nmodel.compile(optimizer=\"adam\")\n\n# Just use `fit` as usual -- you can use callbacks, etc.\nx = np.random.random((1000, 32))\ny = np.random.random((1000, 1))\nmodel.fit(x, y, epochs=5)\n", "Supporting sample_weight & class_weight\nYou may have noticed that our first basic example didn't make any mention of sample\nweighting. If you want to support the fit() arguments sample_weight and\nclass_weight, you'd simply do the following:\n\nUnpack sample_weight from the data argument\nPass it to compiled_loss & compiled_metrics (of course, you could also just apply\nit manually if you don't rely on compile() for losses & metrics)\nThat's it. That's the list.", "class CustomModel(keras.Model):\n def train_step(self, data):\n # Unpack the data. 
Its structure depends on your model and\n # on what you pass to `fit()`.\n if len(data) == 3:\n x, y, sample_weight = data\n else:\n sample_weight = None\n x, y = data\n\n with tf.GradientTape() as tape:\n y_pred = self(x, training=True) # Forward pass\n # Compute the loss value.\n # The loss function is configured in `compile()`.\n loss = self.compiled_loss(\n y,\n y_pred,\n sample_weight=sample_weight,\n regularization_losses=self.losses,\n )\n\n # Compute gradients\n trainable_vars = self.trainable_variables\n gradients = tape.gradient(loss, trainable_vars)\n\n # Update weights\n self.optimizer.apply_gradients(zip(gradients, trainable_vars))\n\n # Update the metrics.\n # Metrics are configured in `compile()`.\n self.compiled_metrics.update_state(y, y_pred, sample_weight=sample_weight)\n\n # Return a dict mapping metric names to current value.\n # Note that it will include the loss (tracked in self.metrics).\n return {m.name: m.result() for m in self.metrics}\n\n\n# Construct and compile an instance of CustomModel\ninputs = keras.Input(shape=(32,))\noutputs = keras.layers.Dense(1)(inputs)\nmodel = CustomModel(inputs, outputs)\nmodel.compile(optimizer=\"adam\", loss=\"mse\", metrics=[\"mae\"])\n\n# You can now use sample_weight argument\nx = np.random.random((1000, 32))\ny = np.random.random((1000, 1))\nsw = np.random.random((1000, 1))\nmodel.fit(x, y, sample_weight=sw, epochs=3)", "Providing your own evaluation step\nWhat if you want to do the same for calls to model.evaluate()? Then you would\noverride test_step in exactly the same way. Here's what it looks like:", "class CustomModel(keras.Model):\n def test_step(self, data):\n # Unpack the data\n x, y = data\n # Compute predictions\n y_pred = self(x, training=False)\n # Updates the metrics tracking the loss\n self.compiled_loss(y, y_pred, regularization_losses=self.losses)\n # Update the metrics.\n self.compiled_metrics.update_state(y, y_pred)\n # Return a dict mapping metric names to current value.\n # Note that it will include the loss (tracked in self.metrics).\n return {m.name: m.result() for m in self.metrics}\n\n\n# Construct an instance of CustomModel\ninputs = keras.Input(shape=(32,))\noutputs = keras.layers.Dense(1)(inputs)\nmodel = CustomModel(inputs, outputs)\nmodel.compile(loss=\"mse\", metrics=[\"mae\"])\n\n# Evaluate with our custom test_step\nx = np.random.random((1000, 32))\ny = np.random.random((1000, 1))\nmodel.evaluate(x, y)", "Wrapping up: an end-to-end GAN example\nLet's walk through an end-to-end example that leverages everything you just learned.\nLet's consider:\n\nA generator network meant to generate 28x28x1 images.\nA discriminator network meant to classify 28x28x1 images into two classes (\"fake\" and\n\"real\").\nOne optimizer for each.\nA loss function to train the discriminator.", "from tensorflow.keras import layers\n\n# Create the discriminator\ndiscriminator = keras.Sequential(\n [\n keras.Input(shape=(28, 28, 1)),\n layers.Conv2D(64, (3, 3), strides=(2, 2), padding=\"same\"),\n layers.LeakyReLU(alpha=0.2),\n layers.Conv2D(128, (3, 3), strides=(2, 2), padding=\"same\"),\n layers.LeakyReLU(alpha=0.2),\n layers.GlobalMaxPooling2D(),\n layers.Dense(1),\n ],\n name=\"discriminator\",\n)\n\n# Create the generator\nlatent_dim = 128\ngenerator = keras.Sequential(\n [\n keras.Input(shape=(latent_dim,)),\n # We want to generate 128 coefficients to reshape into a 7x7x128 map\n layers.Dense(7 * 7 * 128),\n layers.LeakyReLU(alpha=0.2),\n layers.Reshape((7, 7, 128)),\n layers.Conv2DTranspose(128, (4, 4), 
strides=(2, 2), padding=\"same\"),\n layers.LeakyReLU(alpha=0.2),\n layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding=\"same\"),\n layers.LeakyReLU(alpha=0.2),\n layers.Conv2D(1, (7, 7), padding=\"same\", activation=\"sigmoid\"),\n ],\n name=\"generator\",\n)", "Here's a feature-complete GAN class, overriding compile() to use its own signature,\nand implementing the entire GAN algorithm in 17 lines in train_step:", "class GAN(keras.Model):\n def __init__(self, discriminator, generator, latent_dim):\n super(GAN, self).__init__()\n self.discriminator = discriminator\n self.generator = generator\n self.latent_dim = latent_dim\n\n def compile(self, d_optimizer, g_optimizer, loss_fn):\n super(GAN, self).compile()\n self.d_optimizer = d_optimizer\n self.g_optimizer = g_optimizer\n self.loss_fn = loss_fn\n\n def train_step(self, real_images):\n if isinstance(real_images, tuple):\n real_images = real_images[0]\n # Sample random points in the latent space\n batch_size = tf.shape(real_images)[0]\n random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))\n\n # Decode them to fake images\n generated_images = self.generator(random_latent_vectors)\n\n # Combine them with real images\n combined_images = tf.concat([generated_images, real_images], axis=0)\n\n # Assemble labels discriminating real from fake images\n labels = tf.concat(\n [tf.ones((batch_size, 1)), tf.zeros((batch_size, 1))], axis=0\n )\n # Add random noise to the labels - important trick!\n labels += 0.05 * tf.random.uniform(tf.shape(labels))\n\n # Train the discriminator\n with tf.GradientTape() as tape:\n predictions = self.discriminator(combined_images)\n d_loss = self.loss_fn(labels, predictions)\n grads = tape.gradient(d_loss, self.discriminator.trainable_weights)\n self.d_optimizer.apply_gradients(\n zip(grads, self.discriminator.trainable_weights)\n )\n\n # Sample random points in the latent space\n random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))\n\n # Assemble labels that say \"all real images\"\n misleading_labels = tf.zeros((batch_size, 1))\n\n # Train the generator (note that we should *not* update the weights\n # of the discriminator)!\n with tf.GradientTape() as tape:\n predictions = self.discriminator(self.generator(random_latent_vectors))\n g_loss = self.loss_fn(misleading_labels, predictions)\n grads = tape.gradient(g_loss, self.generator.trainable_weights)\n self.g_optimizer.apply_gradients(zip(grads, self.generator.trainable_weights))\n return {\"d_loss\": d_loss, \"g_loss\": g_loss}\n", "Let's test-drive it:", "# Prepare the dataset. We use both the training & test MNIST digits.\nbatch_size = 64\n(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()\nall_digits = np.concatenate([x_train, x_test])\nall_digits = all_digits.astype(\"float32\") / 255.0\nall_digits = np.reshape(all_digits, (-1, 28, 28, 1))\ndataset = tf.data.Dataset.from_tensor_slices(all_digits)\ndataset = dataset.shuffle(buffer_size=1024).batch(batch_size)\n\ngan = GAN(discriminator=discriminator, generator=generator, latent_dim=latent_dim)\ngan.compile(\n d_optimizer=keras.optimizers.Adam(learning_rate=0.0003),\n g_optimizer=keras.optimizers.Adam(learning_rate=0.0003),\n loss_fn=keras.losses.BinaryCrossentropy(from_logits=True),\n)\n\n# To limit the execution time, we only train on 100 batches. You can train on\n# the entire dataset. 
You will need about 20 epochs to get nice results.\ngan.fit(dataset.take(100), epochs=1)", "The ideas behind deep learning are simple, so why should their implementation be painful?" ]
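The train_step override pattern shown above is also the natural place for training tweaks that a plain fit() call does not expose, such as gradient clipping. The sketch below is a minimal, self-contained illustration of that idea and is not part of the original guide; ClippedModel, the clip norm of 1.0 and the random data are assumptions made purely for the example.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

class ClippedModel(keras.Model):
    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
        grads = tape.gradient(loss, self.trainable_variables)
        # The only change relative to the plain override: clip each gradient
        # tensor before handing it to the optimizer.
        grads = [tf.clip_by_norm(g, 1.0) for g in grads]
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}

inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = ClippedModel(inputs, outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(np.random.random((256, 32)), np.random.random((256, 1)), epochs=1)
```

Because the clipping happens inside train_step, everything else (compile(), callbacks, metrics) keeps working unchanged.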
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
liganega/Gongsu-DataSci
ref_materials/exams/2015/midterm.ipynb
gpl-3.0
[ "2015년 2학기 공업수학 중간고사 시험지\n이름:\n학번:\n시험지 작성 요령\n\n예제코드를 보면서 문제의 내용을 이해하도록 노력한다.\n문제별로 '해야 할 일' 에서 요구하는 방향으로 변경된 코드의 빈자리를 채우거나 답을 한다.\n\n문제 1\n세 개의 숫자를 입력받는 함수 interval_point는 아래 기능을 구현한다.\n\n숫자 a와 b는 구간의 처음과 끝을 나타낸다. 숫자 x는 0과 1 사이의 값이다. 그러면 interval_point(a, b, x)는 a에서 출발하여 x비율만큼 b 방향으로 이동할 때 갈 수 있는 위치를 되돌려준다.\n\ninterval_point 함수를 다음과 같이 정의할 수 있다.", "def interval_point(a, b, x):\n if a < b:\n return (b-a)*x + a\n else:\n return a - (a-b)*x", "활용 예제", "interval_point(0, 1, 0.5)\n\ninterval_point(3, 2, 0.2)", "해야 할 일 1 (5점)\n위 코드를 if 조건문을 사용하지 않도록 수정하고자 한다. 아래 코드의 빈칸 (A)를 채워라.\ndef interval_point_no_if(a, b, x):\n return (A)\n\n문제 2\n아래 코드는 오류가 발생할 경우를 대비하여 예외처리를 사용한 코드이다.", "while True:\n try:\n x = float(raw_input(\"Please type a new number: \"))\n inverse = 1.0 / x\n print(\"The inverse of {} is {}.\".format(x, inverse))\n break\n except ValueError:\n print(\"You should have given either an int or a float\")\n except ZeroDivisionError:\n print(\"The input number is {} which cannot be inversed.\".format(int(x)))", "해야 할 일 2 (10점)\n아래 코드가 하는 일을 설명하고 발생할 수 있는 예외들을 나열하며, 예외처리를 어떻게 하는지 설명하라.\n문제 3\n콤마(',')로 구분된 문자들이 저장되어 있는 파일을 csv(comma separated value) 파일이라 부른다.\n숫자들로만 구성된 csv 파일을 인자로 받아서 각 줄별로 포함된 숫자들과 숫자들의 합을 계산하여 보여주는 함수 \nprint_line_sum_of_file과 관련된 문제이다.\n예를 들어 test.txt 파일에 아래 내용이 들어 있다고 가정하면 아래의 결과가 나와야 한다.\n1,3,5,8\n0,4,7\n1,18\n\nIn [1]: print_line_sum_of_file(\"test.txt\")\nout[1]: 1 + 3 + 5 + 8 = 17\n 0 + 4 + 7 = 11\n 1 + 18 = 19\n\ntext.txt 파일을 생성하는 방법은 다음과 같다.", "f = open(\"test.txt\", 'w')\nf.write(\"1,3,5,8\\n0,4,7\\n1,18\")\nf.close()", "또한 print_line_sum_of_file을 예를 들어 다음과 같이 작성할 수 있다.", "def print_line_sum_of_file(filename):\n g = open(\"test.txt\", 'r')\n h = g.readlines()\n g.close()\n \n for line in h:\n sum = 0\n k = line.strip().split(',')\n for i in range(len(k)):\n if i < len(k) -1:\n print(k[i] + \" +\"),\n else:\n print(k[i] + \" =\"),\n sum = sum + int(k[i])\n print(sum)", "위 함수를 이전에 작성한 예제 파일에 적용하면 예상된 결과과 나온다.", "print_line_sum_of_file(\"test.txt\")", "해야 할 일 3 (5점)\n그런데 위와 같이 정의하면 숫자가 아닌 문자열이 포함되어 있을 경우 ValueError가 발생한다.\nValuError가 어디에서 발생하는지 답하라.\n해야 할 일 4 (10점)\n이제 데이터 파일에 숫자가 아닌 문자열이 포함되어 있을 경우도 다룰 수 있도록 \nprint_line_sum_of_file를 수정해야 한다.\n예를 들어 숫자가 아닌 문자가 포함되어 있는 단어가 있을 경우 단어의 길이를 덧셈에 추가하도록 해보자.\n예제: \ntest.txt 파일에 아래 내용이 들어 있다고 가정하면 아래 결과가 나와야 한다.\n1,3,5,8\n1,cat4,7\nco2ffee\n\n\nIn [1]: print_line_sum_of_file(\"test.txt\")\nout[1]: 1 + 3 + 5 + 8 = 17\n 1 + cat4 + 7 = 12\n co2ffee = 7\n\n예를 들어 다음과 같이 수정할 수 있다. 빈 칸 (A)와 (B)를 채워라.\nf = open(\"test.txt\", 'w')\nf.write(\"1,3,5,8\\n1,cat4,7\\nco2ffee\")\nf.close()\n\ndef print_line_sum_of_file(filename):\n g = open(\"test.txt\", 'r')\n h = g.readlines()\n g.close()\n\n for line in h:\n sum = 0\n k = line.strip().split(',')\n for i in range(len(k)):\n if i &lt; len(k) - 1:\n print(k[i] + \" +\"),\n else:\n print(k[i] + \" =\"),\n try:\n (A)\n except ValueError:\n (B)\n print(sum)\n\n문제 4\n함수를 리턴값으로 갖는 고계함수(higer-order function)를 다루는 문제이다. \n먼저 다음의 함수들을 살펴보자.", "def linear_1(a, b):\n return a + b\n\ndef linear_2(a, b):\n return a * 2 + b", "동일한 방식을 반복하면 임의의 자연수 n에 대해 linear_n 함수를 정의할 수 있다. 즉, \nlinear_n(a, b) = a * n + b\n\n이 만족되는 함수를 무한히 많이 만들 수 있다. \n그런데 그런 함수들을 위와같은 방식으로 정의하는 것은 매우 비효율적이다.\n한 가지 대안은 변수를 하나 더 추가하는 것이다.", "def linear_gen(n, a, b):\n return a * n + b", "위와 같이 linear_gen 함수를 정의한 다음에 특정 n에 대해 linear_n 이 필요하다면 아래와 같이 간단하게 정의해서 사용할 수 있다.\n예를 들어 n = 10인 경우이다.", "def linear_10(a, b):\n return linear_gen(10, a, b)", "해야 할 일 5 (10점)\n그런데 이 방식은 특정 linear_n을 사용하고자 할 때마다 def 키워드를 이용하여 함수를 정의해야 하는 단점이 있다. 
그런데 고계함수를 활용하면 def 키워드를 한 번만 사용해도 모든 수 n에 대해 linear_n 함수를 필요할 때마다 사용할 수 있다. \n예를 들어 아래 등식이 만족시키는 고계함수 linear_high를 정의할 수 있다.\nlinear_10(3, 5) = linear_high(10)(3, 5)\nlinear_15(2, 7) = linear_high(15)(2, 7)\n\n아래 코드가 위 등식을 만족시키도록 빈자리 (A)와 (B)를 채워라.\ndef linear_high(n):\n def linear_n(a, b):\n (A)\n return (B)\n\n문제 5\n이름이 없는 무명함수를 다루는 문제이다.\n무명함수는 간단하게 정의할 수 있는 함수를 한 번만 사용하고자 할 경우에 \n굳이 함수 이름이 필요없다고 판단되면 사용할 수 있다. \n예를 들어 앞서 linear_high 함수를 정의할 때 사용된 linear_n 함수의 경우가 그렇다. \n\nlinear_n 함수는 linear_high 함수가 호출될 때만 의미를 갖는 함수이며 그 이외에는 존재하지 않는 함수가 된다. \n\n따라서 그냥 linear_n 함수를 물어보면 파이썬 해석기가 전혀 알지 못하며 NameError가 발생한다.\n해야 할 일 6 (5점)\nlinear_n 함수의 정의가 매우 단순하다. \n따라서 굳이 이름을 줄 필요가 없이 람다(lambda) 기호를 이용하여 함수를 정의하면 편리하다.\n__문제 4__에서 linear_high 함수와 동일한 기능을 수행하는 함수 linear_high_lambda \n함수를 아래와 같이 정의하고자 한다. 빈 칸 (A)를 채워라.\ndef linear_high_lambda(n):\n return (A)\n\n문제 6\n문자열로 구성된 리스트 names가 있다.", "names = [\"Koh\", \"Kang\", \"Park\", \"Kwon\", \"Lee\", \"Yi\", \"Kim\", \"Jin\"]", "K로 시작하는 이름으로만 구성된 리스트는 \n파이썬 내장함수 filter를 이용하여 만들 수 있다.", "def StartsWithK(s):\n return s[0] == 'K'\nK_names = filter(StartsWithK, names)\nK_names", "해야 할 일 7 (15점)\nfilter 함수를 사용하지 않으면서 동일한 기능을 수행하는 코드를 작성하고자 하면 다음과 같이 할 수 있다.\n빈칸 (A), (B), (C)를 채워라.\nK_names = []\nfor name in names:\n if (A) : \n (B)\n else:\n (C)\n\n해야 할 일 8 (5점)\nK로 시작하는 이름만으로 구성된 리스트를 글자 순서의 __역순__으로 정렬하고자 한다.\n아래 코드가 리스트 관련 특정 메소드를 사용하도록 빈 자리 (D)를 채워라.\nK_names.(D)\n\n문제 7\n파이썬 내장함수 map의 기능은 아래 예제에서 확인할 수 있다.", "map(lambda x : x ** 2, range(5))", "map 함수를 사용하지 않는 방식은 다음과 같다.", "def list_square(num):\n L = []\n for i in range(num):\n L.append(i ** 2)\n return L\nlist_square(5)", "해야 할 일 9 (10점)\nlist_square 함수와 동일한 기능을 수행하는 함수 list_square_comp 함수를 \n리스트 조건제시법을 활용하여 아래처럼 구현하고자 한다. 빈자리 (A)를 채워라.\ndef list_square_comp(num):\n return (A)\n\n문제 8\n다섯 개의 도시명과 각 도시의 인구수로 이루어진 두 개의 리스트가 아래처럼 있다.", "cities = ['A', 'B', 'C', 'D', 'E']\npopulations = [20, 30, 140, 80, 25]", "도시이름과 인구수를 쌍으로 갖는 리스트를 구현하는 방법은 아래와 같다.", "city_pop = []\nfor i in range(len(cities)):\n city_pop.append((cities[i], populations[i]))\ncity_pop", "city_pop를 이용하여 예를 들어 C 도시의 인구수를 확인하는 방법은 다음과 같다.", "city_pop[2][1]", "해야 할 일 10 (15점)\n그런데 위와같은 코딩은 사용하기에 매우 불편하다.\n그래서 아래와 같은 기능을 수행하는 함수 show_city_pop 함수를 구현하고자 한다.\nshow_city_pop(\"B\") = 30\nshow_city_pop(\"E\") = 25\n\n아래 코드의 빈자리 (A)를 완성하라.\ndef show_pop_city(s):\n\n```\n(A)\n.```\n해야할 일 11 (10점)\n도시이름을 키(key)로, 인구수를 키값(key value)로 사용하는 사전(해시 테이블) 자료형인\ncity_pop_hash를 구현할 수 있다. 즉, 다음이 성립해야 한다.\ncity_pop_hash = {'A': 20, 'B': 30, 'C': 140, 'D': 80, 'E': 25}\n\n그러면 아래 등식이 성립한다.\ncity_pop_hash[\"B\"] = 30\ncity_pop_hash[\"E\"] = 25\n\ncity_pop_hash 사전을 구현하는 코드를 작성하라." ]
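For the higher-order-function questions in the exam above, the key idea is that an inner function can capture the enclosing function's argument as a closure. The snippet below is only an illustrative sketch of that pattern, not the answer key; make_linear is a hypothetical name rather than the linear_high function the exam asks for.

```python
def make_linear(n):
    # The inner function keeps a reference to n after make_linear returns (a closure).
    def linear(a, b):
        return a * n + b
    return linear

linear_10 = make_linear(10)
print(linear_10(3, 5))        # 10*3 + 5 = 35
print(make_linear(15)(2, 7))  # 15*2 + 7 = 37

# The same idea written with lambda, in the spirit of the later question:
make_linear_lambda = lambda n: (lambda a, b: a * n + b)
print(make_linear_lambda(10)(3, 5))  # 35
```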
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Pybonacci/notebooks
tutormagic.ipynb
bsd-2-clause
[ "Esta será una microentrada para presentar una extensión para el notebook que estoy usando en un curso interno que estoy dando en mi empresa.\nSi a alguno más os puede valer para mostrar cosas básicas de Python (2 y 3, además de Java y Javascript) para muy principiantes me alegro.\nNombre en clave: tutormagic\nEsta extensión lo único que hace es embeber dentro de un IFrame la página de pythontutor usando el código que hayamos definido en una celda de código precedida de la cell magic %%tutor.\nComo he comentado anteriormente, se puede escribir código Python2, Python3, Java y Javascript, que son los lenguajes soportados por pythontutor.\nEjemplo\nPrimero deberemos instalar la extensión. Está disponible en pypi por lo que la podéis instalar usando pip install tutormagic. Una vez instalada, dentro de un notebook de IPython la deberías cargar usando:", "%load_ext tutormagic", "Una vez hecho esto ya deberiamos tener disponible la cell magic para ser usada:", "%%tutor --lang python3\na = 1\nb = 2\n\ndef add(x, y):\n return x + y\n\nc = add(a, b)", "Ahora un ejemplo con javascript:", "%%tutor --lang javascript\nvar a = 1;\nvar b = 1;\nconsole.log(a + b);", "Y eso es todo\nLo dicho, espero que sea útil para alguien.\n\ntutormagic en pypi.\ntutormagic en github\n\nSaludos." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
DB2-Samples/db2jupyter
Db2 Jupyter Macros.ipynb
apache-2.0
[ "Db2 Macros\nThe %sql command also allows the use of macros. Macros are used to substitute text into SQL commands that you execute. Macros substitution is done before any SQL is executed. This allows you to create macros that include commonly used SQL commands or parameters rather than having to type them in. Before using any macros, we must make sure we have loaded the Db2 extensions.", "%run db2.ipynb", "Macro Basics\nA Macro command begins with a percent sign (% similar to the %sql magic command) and can be found anywhere within a %sql line or %%sql block. Macros must be separated from other text in the SQL with a space. \nTo define a macro, the %%sql macro &lt;name&gt; command is used. The body of the macro is found in the cell below the definition of the macro. This simple macro called EMPTABLE will substitute a SELECT statement into a SQL block.", "%%sql macro emptable\nselect * from employee", "The name of the macro follows the %%sql macro command and is case sensitive. To use the macro, we can place it anywhere in the %sql block. This first example uses it by itself.", "%sql %emptable", "The actual SQL that is generated is not shown by default. If you do want to see the SQL that gets generated, you can use the -e (echo) option to display the final SQL statement. The following example will display the generated SQL. Note that the echo setting is only used to display results for the current cell that is executing.", "%%sql -e\n%emptable", "Since we can use the %emptable anywhere in our SQL, we can add additional commands around it. In this example we add some logic to the select statement.", "%%sql\n%emptable\nwhere empno = '000010'", "Macros can also have parameters supplied to them. The parameters are included after the name of the macro. Here is a simple macro which will use the first parameter as the name of the column we want returned from the EMPLOYEE table.", "%%sql macro emptable\nSELECT {1} FROM EMPLOYEE", "This example illustrates two concepts. The MACRO command will replace any existing macro with the same name. Since we already have an emptable macro, the macro body will be replaced with this code. In addition, macros only exist for the duration of your notebook. If you create another Jupyter notebook, it will not contain any macros that you may have created. If there are macros that you want to share across notebooks, you should create a separate notebook and place all of the macro definitions in there. Then you can include these macros by executing the %run command using the name of the notebook that contains the macros.\nThe following SQL shows the use of the macro with parameters.", "%%sql\n%emptable(lastname)", "The remainder of this notebook will explore the advanced features of macros.\nMacro Parameters\nMacros can have up to 9 parameters supplied to them. The parameters are numbered from 1 to 9, left to right in the argument list for the macro. For instance, the following macro has 5 paramters:\n%emptable(lastname,firstnme,salary,bonus,'000010')\nParameters are separated by commas, and can contain strings as shown using single or double quotes. When the parameters are used within a macro, the quotes are not included as part of the string. If you do want to pass the quotes as part of the parameter, use square brackets [] around the string. 
For instance, the following parameter will not have quotes passed to the macro:\npython\n%sql %abc('no quotes')\nTo send the string with quotes, you could surround the parameter with other quotes \"'hello'\" or use the following technique if you use multiple quotes in your string:\npython\n%sql %abc (['quotes'])\nTo use a parameter within your macro, you enclose the parameter number with braces {}. The next command will illustrate the use of the five parameters.", "%%sql macro emptable\ndisplay on\nSELECT {1},{2},{3},{4} \nFROM EMPLOYEE\nWHERE EMPNO = '{5}'", "Note that the EMPNO field is a character field in the EMPLOYEE table. Even though the employee number was supplied as a string, the quotes are not included in the parameter. The macro places quotes around the parameter {5} so that it is properly used in the SQL statement. The other feature of this macro is that the display (on) command is part of the macro body so the generated SQL will always be displayed.", "%sql %emptable(lastname,firstnme,salary,bonus,'000010')", "We can modify the macro to assume that the parameters will include the quotes in the string.", "%%sql macro emptable\nSELECT {1},{2},{3},{4} \nFROM EMPLOYEE\nWHERE EMPNO = {5}", "We just have to make sure that the quotes are part of the parameter now.", "%sql -e %emptable(lastname,firstnme,salary,bonus,\"'000010'\")", "We could use the square brackets as an alternative way of passing the parameter.", "%sql -e %emptable(lastname,firstnme,salary,bonus,['000010'])", "Parameters can also be named in a macro. To name an input value, the macro needs to use the format:\nfield=value\nFor instance, the following macro call will have 2 numbered parameters and one named parameter:\n%showemp(firstnme,lastname,logic=\"WHERE EMPNO='000010'\")\nFrom within the macro the parameter count would be 2 and the value for parameter 1 is firstnme, and the value for parameter 2 is lastname. Since we have a named parameter, it is not included in the list of numbered parameters. In fact, the following statement is equivalent since unnamed parameters are numbered in the order that they are found in the macro, ignoring any named parameters that are found:\n%showemp(firstnme,logic=\"WHERE EMPNO='000010'\",lastname)\nThe following macro illustrates this feature.", "%%sql macro showemp\nSELECT {1},{2} FROM EMPLOYEE\n {logic}\n\n%sql %showemp(firstnme,lastname,logic=\"WHERE EMPNO='000010'\")\n\n%sql %showemp(firstnme,logic=\"WHERE EMPNO='000010'\",lastname)", "Named parameters are useful when there are many options within the macro and you don't want to keep track of which position it is in. In addition, if you have a variable number of parameters, you should use named parameters for the fixed (required) parameters and numbered parameters for the optional ones.\nMacro Coding Overview\nMacros can contain any type of text, including SQL commands. In addition to the text, macros can also contain the following keywords:\n\necho - Display a message\nexit - Exit the macro immediately\nif/else/endif - Conditional logic\nvar - Set a variable\ndisplay - Turn the display of the final text on\n\nThe only restriction with macros is that macros cannot be nested. This means I can't call a macro from within a macro. The sections below explain the use of each of these statement types.\nEcho Option\nThe -e option will result in the final SQL being display after the macro substitution is done. 
\n%%sql -e\n%showemp(...)", "%%sql macro showdisplay\nSELECT * FROM EMPLOYEE FETCH FIRST ROW ONLY", "Using the -e flag will display the final SQL that is run.", "%sql -e %showdisplay", "If we remove the -e option, the final SQL will not be shown.", "%sql %showdisplay", "Exit Command\nThe exit command will terminate the processing within a macro and not run the generated SQL. You would use this when a condition is not met within the macro (like a missing parameter).", "%%sql macro showexit\necho This message gets shown\nSELECT * FROM EMPLOYEE FETCH FIRST ROW ONLY\nexit\necho This message does not get shown", "The macro that was defined will not show the second statement, nor will it execute the SQL that was defined in the macro body.", "%sql %showexit", "Echo Command\nAs you already noticed in the previous example, the echo command will display information on the screen. Any text following the command will have variables substituted and then displayed with a green box surrounding it. The following code illustates the use of the command.", "%%sql macro showecho\necho Here is a message\necho Two lines are shown", "The echo command will show each line as a separate box.", "%sql %showecho", "If you want to have a message go across multiple lines use the &lt;br&gt; to start a new line.", "%%sql macro showecho\necho Here is a paragraph. <br> And a final paragraph.\n\n%sql %showecho", "Var Command\nThe var (variable) command sets a macro variable to a value. A variable is referred to in the macro script using curly braces {name}. By default the arguments that are used in the macro call are assigned the variable names {1} to {9}. If you use a named argument (option=\"value\") in the macro call, a variable called {option} will contain the value within the macro.\nTo set a variable within a macro you would use the var command:\nvar name value\nThe variable name can be any name as long as it only includes letters, numbers, underscore _ and $. Variable names are case sensitive so {a} and {A} are different. When the macro finishes executing, the contents of the variables will be lost. If you do want to keep a variable between macros, you should start the name of the variable with a $ sign:\nvar $name value\nThis variable will persist between macro calls.", "%%sql macro initialize\nvar $hello Hello There \nvar hello You won't see this\n\n%%sql macro runit\necho The value of hello is *{hello}*\necho {$hello}", "Calling runit will display the variable that was set in the first macro.", "%sql %initialize\n%sql %runit", "A variable can be converted to uppercase by placing the ^ beside the variable name or number.", "%%sql macro runit\necho The first parameter is {^1}\n\n%sql %runit(Hello There)", "The string following the variable name can include quotes and these will not be removed. Only quotes that are supplied in a parameter to a macro will have the quotes removed.", "%%sql macro runit\nvar hello This is a long string without quotes\nvar hello2 'This is a long string with quotes'\necho {hello} <br> {hello2}\n\n%sql %runit", "When passing parameters to a macro, the program will automatically create variables based on whether they are positional parameters (1, 2, ..., n) or named parameters. 
The following macro will be used to show how parameters are passed to the routine.", "%%sql macro showvar\necho parm1={1} <br>parm2={2} <br>message={message}", "Calling the macro will show how the variable names get assigned and used.", "%sql %showvar(parameter 1, another parameter,message=\"Hello World\")", "If you pass an empty value (or if a variable does not exist), a \"null\" value will be shown.", "%sql %showvar(1,,message=\"Hello World\")", "An empty string also returns a null value.", "%sql %showvar(1,2,message=\"\")", "Finally, any string that is supplied to the macro will not include the quotes in the variable. The Hello World string will not have quotes when it is displayed:", "%sql %showvar(1,2,message=\"Hello World\")", "You need to supply the quotes in the script or macro when using variables since quotes are stripped from any strings that are supplied.", "%%sql macro showvar\necho parm1={1} <br>parm2={2} <br>message='{message}'\n\n%sql %showvar(1,2,message=\"Hello World\")", "The count of the total number of parameters passed is found in the {argc} variable. You can use this variable to decide whether or not the user has supplied the proper number of arguments or change which code should be executed.", "%%sql macro showvar\necho The number of unnamed parameters is {argc}. The where clause is *{where}*.", "Unnamed parameters are included in the count of arguments while named parameters are ignored.", "%sql %showvar(1,2,option=nothing,3,4,where=)", "If/Else/Endif Command\nIf you need to add conditional logic to your macro then you should use the if/else/endif commands. The format of the if statement is:\nif variable condition value\n statements\nelse\n statements\nendif\nThe else portion is optional, but the block must be closed with the endif command. If statements can be nested up to 9 levels deep:\nif condition 1\n if condition 2\n statements\n else\n if condition 3\n statements\n end if \n endif\nendif\nIf the condition in the if clause is true, then anything following the if statement will be executed and included in the final SQL statement. For instance, the following code will create a SQL statement based on the value of parameter 1:\nif {1} = null\n SELECT * FROM EMPLOYEE\nelse\n SELECT {1} FROM EMPLOYEE\nendif\nConditions\nThe if statement requires a condition to determine whether or not the block should be executed. The condition uses the following format:\nif {variable} condition {variable} | constant | null\nVariable can be a number from 1 to 9 which represents the argument in the macro list. So {1} refers to the first argument. The variable can also be the name of a named parameter or global variable.\nThe condition is one of the following comparison operators:\n- =, ==: Equal to\n- &lt;: Less than\n- &gt;: Greater than\n- &lt;=,=&lt;: Less than or equal to\n- &gt;=, =&gt;: Greater than or equal to\n- !=, &lt;&gt; : Not equal to\nThe variable or constant will have quotes stripped away before doing the comparison. 
If you are testing for the existence of a variable, or to check if a variable is empty, use the keyword null.", "%%sql macro showif\nif {argc} = 0\n echo No parameters supplied\n if {option} <> null\n echo The optional parameter option was set: {option}\n endif\nelse\n if {argc} = \"1\"\n echo One parameter was supplied\n else\n echo More than one parameter was supplied: {argc}\n endif\nendif", "Running the previous macro with no parameters will check to see if the option keyword was used.", "%sql %showif", "Now include the optional parameter.", "%sql %showif(option=\"Yes there is an option\")", "Finally, issue the macro with multiple parameters.", "%sql %showif(Here,are,a,number,of,parameters)", "One additional option is available for variable substitution. If the first character of the variable name or parameter number is the ^ symbol, it will uppercase the entire string.", "%%sql macro showif\nif {option} <> null\n echo The optional parameter option was set: {^option}\nendif\n\n%sql %showif(option=\"Yes there is an option\")", "Credits: IBM 2018, George Baklarz [baklarz@ca.ibm.com]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sdpython/ensae_teaching_cs
_doc/notebooks/1a/recherche_dichotomique.ipynb
mit
[ "1A.algo - Recherche dichotomique\nRecherche dichotomique illustrée. Extrait de Recherche dichotomique, récursive, itérative et le logarithme.", "from jyquickhelper import add_notebook_menu\nadd_notebook_menu()", "Lorsqu'on décrit n'importe quel algorithme, on évoque toujours son coût, souvent une formule de ce style :\n$$O(n^u(\\ln_2 n)^v)$$\n$u$ et $v$ sont des entiers. $v$ est souvent soit 0, soit 1. Mais d'où vient ce logarithme ? Le premier algorithme auquel on pense et dont le coût correspond au cas $u=0$ et $v=1$ est la recherche dichotomique. Il consiste à chercher un élément dans une liste triée. Le logarithme vient du fait qu'on réduit l'espace de recherche par deux à chaque itération. Fatalement, on trouve très vite l'élément à chercher. Et le logarithme, dans la plupart des algorithmes, vient du fait qu'on divise la dimension du problème par un nombre entier à chaque itération, ici 2.\nLa recherche dichotomique est assez simple : on part d'une liste triée T et on cherche l'élément v (on suppose qu'il s'y trouve). On procède comme suit :\n\nOn compare v à l'élément du milieu de la liste.\nS'il est égal à v, on a fini.\nSinon, s'il est inférieur, il faut chercher dans la première moitié de la liste. On retourne à l'étape 1 avec la liste réduite.\nS'il est supérieur, on fait de même avec la seconde moitié de la liste.\n\nC'est ce qu'illustre la figure suivante où a désigne le début de la liste, b la fin, m le milieu. A chaque itération, on déplace ces trois positions.", "from pyquickhelper.helpgen import NbImage\nNbImage(\"images/dicho.png\")", "Version itérative", "def recherche_dichotomique(element, liste_triee):\n a = 0\n b = len(liste_triee)-1\n m = (a+b)//2\n while a < b :\n if liste_triee[m] == element:\n return m\n elif liste_triee[m] > element:\n b = m-1\n else :\n a = m+1\n m = (a+b)//2\n return a\n\nli = [0, 4, 5, 19, 100, 200, 450, 999]\nrecherche_dichotomique(5, li)", "Version récursive", "def recherche_dichotomique_recursive( element, liste_triee, a = 0, b = -1 ):\n if a == b : \n return a\n if b == -1 : \n b = len(liste_triee)-1\n m = (a+b)//2\n if liste_triee[m] == element:\n return m\n elif liste_triee[m] > element:\n return recherche_dichotomique_recursive(element, liste_triee, a, m-1)\n else :\n return recherche_dichotomique_recursive(element, liste_triee, m+1, b)\n\nrecherche_dichotomique(5, li)", "Version récursive 2\nL'ajout des parametrès a et b peut paraître un peu lourd. Voici une troisième implémentation en Python (toujours récursive) :", "def recherche_dichotomique_recursive2(element, liste_triee):\n if len(liste_triee)==1 :\n return 0\n m = len(liste_triee)//2\n if liste_triee[m] == element:\n return m\n elif liste_triee[m] > element:\n return recherche_dichotomique_recursive2(element, liste_triee[:m])\n else :\n return m + recherche_dichotomique_recursive2(element, liste_triee[m:])\n\nrecherche_dichotomique(5, li)", "Il ne faut pas oublier m + sinon le résultat peut être décalé dans certains cas. Ensuite, cette version sera un peu moins rapide du fait de la recopie d'une partie de la liste." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
olgabot/cshl-singlecell-2017
notebooks/1.4_make_clustered_heatmap.ipynb
mit
[ "How to make a clustered heatmap\nNow we'll break down how to read the clustered heatmap we made in 1.3_explore_gene_dropout_via_distance_correlation_linkage_clustering", "# Import the pandas dataframe library\nimport pandas as pd\n\n# Import the seaborn library for plotting\nimport seaborn as sns\n\n# Put all the plots directly into the notebook\n%matplotlib inline", "Read Expression data using pandas. Notice that pandas can read URLs (!), not just files on your computer!", "csv = \"https://media.githubusercontent.com/media/olgabot/macosko2015/\" \\\n \"master/data/05_make_rentina_subsets_for_teaching/big_clusters_expression.csv\"\nexpression = pd.read_csv(csv, index_col=0)\nprint(expression.shape)\nexpression.head()", "Exercise 1\nNow use pd.read_csv to read the csv file of the cell metadata", "csv = \"https://media.githubusercontent.com/media/olgabot/macosko2015/\" \\\n \"master/data/05_make_rentina_subsets_for_teaching/big_clusters_cell_metadata.csv\"\n# YOUR CODE HERE", "", "csv = \"https://media.githubusercontent.com/media/olgabot/macosko2015/\" \\\n \"master/data/05_make_rentina_subsets_for_teaching/big_clusters_cell_metadata.csv\"\ncell_metadata = pd.read_csv(csv, index_col=0)\nprint(cell_metadata.shape)\ncell_metadata.head()", "To correlate columns of dataframes in pandas, you use the function .corr. Let's look at the documentation of .corr\n\nIs the default method Pearson or Spearman correlation? \nCan you correlate between rows, or only between columns?", "expression.corr?", "Since .corr only correlates between columns, we need to transpose our dataframe. Here's a little animation of matrix transposition from Wikipedia:\n\nExercise 2\nTranspose the expression matrix so the cells are the columns, which makes it easy to calculate correlations. How do you transpose a dataframe in pandas? (hint: google knows everything)", "# YOUR CODE HERE", "", "expression_t = expression.T\nprint(expression_t.shape)\nexpression_t.head()", "Exercise 3\nUse .corr to calculate the Spearman correlation of the transposed expression dataframe. Make sure to print the shape, and show the head of the resulting dataframe.", "# YOUR CODE HERE", "", "expression_corr = expression_t.corr(method='spearman')\nprint(expression_corr.shape)\nexpression_corr.head()", "Pro tip: if your matrix is really big, here's a trick to make spearman correlations faster\nRemember that spearman correlation is equal to performing pearson correlation on the ranks? Well, that's exactly what's happening inside the .corr(method='spearman') function! Every time it's calculating spearman, it's converting each row to ranks, which means that it's double-converting to ranks since it has to do it for each pair. Let's cut the work in half by converting to ranks FIRST. Let's take a look at the options for .rank:", "expression_t.rank?", "Notice we can specify axis=1 or axis=0, but what does that really mean? Was this ascending along rows, or ascending along columns?\n\n\nTo figure this out, let's use a small, simple dataframe:", "df = pd.DataFrame([[5, 6, 7], [5, 6, 7], [5, 6, 7]])\ndf", "Exercise 4\nTry axis=0 when using rank on this df", "# YOUR CODE HERE", "", "df.rank(axis=0)", "Did that make ranks ascending along columns or along rows?\nExercise 5\nNow try axis=1 when using rank on this df", "# YOUR CODE HERE", "", "df.rank(axis=1)", "Did that make ranks ascending along columns or along rows?\n\n\nExercise 6\nTo get the gene (row) ranks for each cell (column), do we want axis=1 or axis=0? 
Perform .rank on the transposed expression matrix (expression_t), print the shape of the resulting ranks, and show the head() of it.", "# YOUR CODE HERE", "", "ranks = expression_t.rank(axis=0)\nprint(ranks.shape)\nranks.head()", "Exercise 6\nNow that you're armed with all this information, we'll calculate the ranks. While you're at it, let's compare the time it takes to run (\"runtime\") of .corr(method=\"pearson\") on the ranks matrix vs .corr(method=\"spearman\") on the expression matrix.\n\nPerform pearson correlation on the ranks\nCheck that it is equal to the expression spearman correlation.\nUse the %timeit magic to check the runtimes of .corr on the ranks and expression matrices. (Feel free to calculate the expression correlation again, below)\nNote that when you use timeit, you cannot assign any variables -- using an equals sign doesn't work here.\n\n\nHow much time did it take, in comparison? What's the order of magnitude difference?\n\nUse as many cells as you need.", "# YOUR CODE HERE\n\n# YOUR CODE HERE\n\n# YOUR CODE HERE", "", "%timeit expression_t.corr(method='spearman')\n%timeit ranks.corr(method='pearson')\n\nranks_corr = ranks.corr(method='pearson')\nprint(ranks_corr.shape)\nranks_corr.head()", "Use inequality to see if any points are not the same. If this is equal to zero, then we know that they are ALL the same.", "(ranks_corr != expression_corr).sum().sum()", "This is a flip of checking for equality, which is a little trickier because then you have to know exactly how many items are in the matrix. Since we have a 300x300 matrix, that multiplication is a little easier to do in your head and know that you got the right answer.", "(ranks_corr == expression_corr).sum().sum()", "Make a heatmap!!\nNow we are ready to make a clustered heatmap! We'll use seaborn's sns.clustermap. Let's read the documentation for sns.clustermap. What is the default distance metric and linkage method?", "sns.clustermap?", "Exercise 7\nNow run sns.clustermap on either the ranks or expression correlation matrices, since they are equal :)", "# YOUR CODE HERE", "", "sns.clustermap(expression_corr)", "How can we add the colors labeling the rows and columns? Check the documentation for sns.clustermap again:\nExercise 8", "# YOUR CODE HERE", "", "sns.clustermap?", "Since I am not a color design expert, I defer to color design experts in choosing my color palettes. One such expert is Cynthia Brewer, who made a ColorBrewer (hah!) list of color maps for both increasing quantity (shades), and for categories (differing colors).\nAs a reference, I like using this demo of every ColorBrewer scale. Hover over the palette to see its name.\nThankfully, seaborn has the ColorBrewer color maps built-in. Let's see what this output is\nRemember -- we never make a variable without looking at it first!!", "palette = sns.color_palette('Accent', n_colors=3)\npalette", "Huh that's a bunch of weird numbers. What do they mean? Turns out it's a value from 0 to 1 representing the red, green, and blue (RGB) color channels that computers understand. But I'm not a computer .... what am I supposed to do??\nTurns out, seaborn also has a very convenient function called palplot to plot the entire palette. 
This lets us look at the variable without having to convert from RGB", "sns.palplot(palette)", "Exercise 9\n\nGet the color palette for the \"Set2\" colormap and specify that you want 6 colors (read the documentation of sns.color_palette)\nPlot the color palette", "# YOUR CODE HERE\n\n# YOUR CODE HERE", "", "set2 = sns.color_palette('Set2', n_colors=6)\nsns.palplot(set2)", "If you are more advanced and want access to more colormaps, I recommend checking out palettable.\nAssign colors to clusters\nTo set a specific color to each cluster, we'll need to see the unique clusters here. For an individual column (called a \"Series\" in pandas-speak), how can we get only the unique items?\nExercise 10\nGet the unique values from the column \"cluster_celltype_with_id\". Remember, always look at the variable you created!", "# YOUR CODE HERE", "", "cluster_ids_unique = cell_metadata['cluster_celltype_with_id'].unique()\ncluster_ids_unique", "Detour: zip and dict\nTo map colors to each cluster name, we need to talk about some built-in functions in Python, called zip and dict\nFor this next part, we'll use the built-in function zip which is very useful. It acts like a zipper (like for clothes) to glue together the pairs of items in two lists:", "english = [\"hello\", \"goodbye\", \"no\", \"yes\", \"please\", \"thank you\",]\nspanish = [\"hola\", \"adios\", \"no\", \"si\", \"por favor\", \"gracias\"]\nzip(english, spanish)", "To be memory efficient, this doesn't show us what's inside right away. To look inside a zip object, we can use list:", "list(zip(english, spanish))", "Exercise 11\nWhat happened to \"please\" and \"thank you\" from english? Make another list, called spanish2, that contains the Spanish words for \"please\" and \"thank you\" (again, google knows everything), then call zip on english and spanish2. Don't forget to use list on them!", "# YOUR CODE HERE", "", "english = [\"hello\", \"goodbye\", \"no\", \"yes\", \"please\", \"thank you\",]\nspanish = [\"hola\", \"adios\", \"no\", \"si\", \"por favor\", \"gracias\"]\nlist(zip(english, spanish))", "Now we'll use a dictionary dict to make a lookup table that uses the pairing made by zip, using the first item as the \"key\" (what you use to look up) and the second item as the \"value\" (the result of the lookup)\nYou can think of it as a translator -- use the word in English to look up the word in Spanish.", "english_to_spanish = dict(zip(english, spanish))\nenglish_to_spanish", "Now we can use English words to look up the word in Spanish! We use the square brackets and the english word we want to use, to look up the spanish word.", "english_to_spanish['hello']", "Exercise 12\nMake an spanish_to_english dictionary and look up the English word for \"por favor\"", "# YOUR CODE HERE", "", "spanish_to_english = dict(zip(spanish, english))\nspanish_to_english['por favor']", "Okay, detour over! Switching from linguistics back to biology :)\nExercise 13\nUse dict and zip to create a variable called id_to_color that assigns labels in cluster_ids_unique to a color in set2", "# YOUR CODE HERE", "", "id_to_color = dict(zip(cluster_ids_unique, set2))\nid_to_color", "Now we want to use this id_to_color lookup table to make a long list of colors for each cell.", "cell_metadata.head()", "As an example, let's use the celltypes column to make a list of each celltype color first. 
Notice that we can use cell_metadata.celltype or cell_metadata['celltype'] to get the column we want.\nWe can only use the 'dot' notation because our column name has no unfriendly characters like spaces, dashes, or dots -- characters that mean something special in Python.", "celltypes = cell_metadata.celltype.unique() # Could also use cell_metadata['celltype'].unique()\ncelltypes\n\ncelltype_to_color = dict(zip(celltypes, sns.color_palette('Accent', n_colors=len(celltypes))))\ncelltype_to_color", "Now we'll use the existing column cell_metadata.celltype to make a list of colors for each celltype", "per_cell_celltype_color = [celltype_to_color[celltype] for celltype in cell_metadata.celltype]\n\n# Since this list is as long as our number of cells (300!), let's slice it and only look at the first 10\nper_cell_celltype_color[:5]", "Exercise 14\nMake a variable called per_cell_cluster_color that uses the id_to_color dictionary to look up the color for each value in the cluster_celltype_with_id column of cluster_metadata", "# YOUR CODE HERE", "", "per_cell_cluster_color = [id_to_color[i] for i in cell_metadata.cluster_celltype_with_id]\n\n# Since this list is as long as our number of cells (300!), let's slice it and only look at the first 10\nper_cell_cluster_color[:10]", "Exercise 15\nNow use the cluster colors to label the rows and columns in sns.clustermap. How can", "# YOUR CODE HERE", "", "sns.clustermap(expression_corr, row_colors=per_cell_cluster_color, col_colors=per_cell_cluster_color)", "We can also combine the celltype and cluster colors we created to create a double-layer colormap!", "combined_colors = [per_cell_cluster_color, per_cell_celltype_color]\nlen(combined_colors)\n\nsns.clustermap(expression_corr, row_colors=combined_colors, col_colors=combined_colors)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
yingchi/fastai-notes
deeplearning1/nbs/statefarm-sample.ipynb
apache-2.0
[ "Enter State Farm", "from theano.sandbox import cuda\ncuda.use('gpu1')\n\n%matplotlib inline\nfrom __future__ import print_function, division\n#path = \"data/state/\"\npath = \"data/state/sample/\"\nimport utils; reload(utils)\nfrom utils import *\nfrom IPython.display import FileLink\n\nbatch_size=64", "Create sample\nThe following assumes you've already created your validation set - remember that the training and validation set should contain different drivers, as mentioned on the Kaggle competition page.", "%cd data/state\n\n%cd train\n\n%mkdir ../sample\n%mkdir ../sample/train\n%mkdir ../sample/valid\n\nfor d in glob('c?'):\n os.mkdir('../sample/train/'+d)\n os.mkdir('../sample/valid/'+d)\n\nfrom shutil import copyfile\n\ng = glob('c?/*.jpg')\nshuf = np.random.permutation(g)\nfor i in range(1500): copyfile(shuf[i], '../sample/train/' + shuf[i])\n\n%cd ../valid\n\ng = glob('c?/*.jpg')\nshuf = np.random.permutation(g)\nfor i in range(1000): copyfile(shuf[i], '../sample/valid/' + shuf[i])\n\n%cd ../../..\n\n%mkdir data/state/results\n\n%mkdir data/state/sample/test", "Create batches", "batches = get_batches(path+'train', batch_size=batch_size)\nval_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False)\n\n(val_classes, trn_classes, val_labels, trn_labels, val_filenames, filenames,\n test_filename) = get_classes(path)", "Basic models\nLinear model\nFirst, we try the simplest model and use default parameters. Note the trick of making the first layer a batchnorm layer - that way we don't have to worry about normalizing the input ourselves.", "model = Sequential([\n BatchNormalization(axis=1, input_shape=(3,224,224)),\n Flatten(),\n Dense(10, activation='softmax')\n ])", "As you can see below, this training is going nowhere...", "model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])\nmodel.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)", "Let's first check the number of parameters to see that there's enough parameters to find some useful relationships:", "model.summary()", "Over 1.5 million parameters - that should be enough. Incidentally, it's worth checking you understand why this is the number of parameters in this layer:", "10*3*224*224", "Since we have a simple model with no regularization and plenty of parameters, it seems most likely that our learning rate is too high. Perhaps it is jumping to a solution where it predicts one or two classes with high confidence, so that it can give a zero prediction to as many classes as possible - that's the best approach for a model that is no better than random, and there is likely to be where we would end up with a high learning rate. So let's check:", "np.round(model.predict_generator(batches, batches.N)[:10],2)", "Our hypothesis was correct. It's nearly always predicting class 1 or 6, with very high confidence. So let's try a lower learning rate:", "model = Sequential([\n BatchNormalization(axis=1, input_shape=(3,224,224)),\n Flatten(),\n Dense(10, activation='softmax')\n ])\nmodel.compile(Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])\nmodel.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)", "Great - we found our way out of that hole... 
Now we can increase the learning rate and see where we can get to.", "model.optimizer.lr=0.001\n\nmodel.fit_generator(batches, batches.nb_sample, nb_epoch=4, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)", "We're stabilizing at validation accuracy of 0.39. Not great, but a lot better than random. Before moving on, let's check that our validation set on the sample is large enough that it gives consistent results:", "rnd_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=True)\n\nval_res = [model.evaluate_generator(rnd_batches, rnd_batches.nb_sample) for i in range(10)]\nnp.round(val_res, 2)", "Yup, pretty consistent - if we see improvements of 3% or more, it's probably not random, based on the above samples.\nL2 regularization\nThe previous model is over-fitting a lot, but we can't use dropout since we only have one layer. We can try to decrease overfitting in our model by adding l2 regularization (i.e. add the sum of squares of the weights to our loss function):", "model = Sequential([\n BatchNormalization(axis=1, input_shape=(3,224,224)),\n Flatten(),\n Dense(10, activation='softmax', W_regularizer=l2(0.01))\n ])\nmodel.compile(Adam(lr=10e-5), loss='categorical_crossentropy', metrics=['accuracy'])\nmodel.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)\n\nmodel.optimizer.lr=0.001\n\nmodel.fit_generator(batches, batches.nb_sample, nb_epoch=4, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)", "Looks like we can get a bit over 50% accuracy this way. This will be a good benchmark for our future models - if we can't beat 50%, then we're not even beating a linear model trained on a sample, so we'll know that's not a good approach.\nSingle hidden layer\nThe next simplest model is to add a single hidden layer.", "model = Sequential([\n BatchNormalization(axis=1, input_shape=(3,224,224)),\n Flatten(),\n Dense(100, activation='relu'),\n BatchNormalization(),\n Dense(10, activation='softmax')\n ])\nmodel.compile(Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])\nmodel.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)\n\nmodel.optimizer.lr = 0.01\nmodel.fit_generator(batches, batches.nb_sample, nb_epoch=5, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)", "Not looking very encouraging... which isn't surprising since we know that CNNs are a much better choice for computer vision problems. 
So we'll try one.\nSingle conv layer\n2 conv layers with max pooling followed by a simple dense network is a good simple CNN to start with:", "def conv1(batches):\n model = Sequential([\n BatchNormalization(axis=1, input_shape=(3,224,224)),\n Convolution2D(32,3,3, activation='relu'),\n BatchNormalization(axis=1),\n MaxPooling2D((3,3)),\n Convolution2D(64,3,3, activation='relu'),\n BatchNormalization(axis=1),\n MaxPooling2D((3,3)),\n Flatten(),\n Dense(200, activation='relu'),\n BatchNormalization(),\n Dense(10, activation='softmax')\n ])\n\n model.compile(Adam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy'])\n model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)\n model.optimizer.lr = 0.001\n model.fit_generator(batches, batches.nb_sample, nb_epoch=4, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)\n return model\n\nconv1(batches)", "The training set here is very rapidly reaching a very high accuracy. So if we could regularize this, perhaps we could get a reasonable result.\nSo, what kind of regularization should we try first? As we discussed in lesson 3, we should start with data augmentation.\nData augmentation\nTo find the best data augmentation parameters, we can try each type of data augmentation, one at a time. For each type, we can try four very different levels of augmentation, and see which is the best. In the steps below we've only kept the single best result we found. We're using the CNN we defined above, since we have already observed it can model the data quickly and accurately.\nWidth shift: move the image left and right -", "gen_t = image.ImageDataGenerator(width_shift_range=0.1)\nbatches = get_batches(path+'train', gen_t, batch_size=batch_size)\n\nmodel = conv1(batches)", "Height shift: move the image up and down -", "gen_t = image.ImageDataGenerator(height_shift_range=0.05)\nbatches = get_batches(path+'train', gen_t, batch_size=batch_size)\n\nmodel = conv1(batches)", "Random shear angles (max in radians) -", "gen_t = image.ImageDataGenerator(shear_range=0.1)\nbatches = get_batches(path+'train', gen_t, batch_size=batch_size)\n\nmodel = conv1(batches)", "Rotation: max in degrees -", "gen_t = image.ImageDataGenerator(rotation_range=15)\nbatches = get_batches(path+'train', gen_t, batch_size=batch_size)\n\nmodel = conv1(batches)", "Channel shift: randomly changing the R,G,B colors -", "gen_t = image.ImageDataGenerator(channel_shift_range=20)\nbatches = get_batches(path+'train', gen_t, batch_size=batch_size)\n\nmodel = conv1(batches)", "And finally, putting it all together!", "gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05, \n shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)\nbatches = get_batches(path+'train', gen_t, batch_size=batch_size)\n\nmodel = conv1(batches)", "At first glance, this isn't looking encouraging, since the validation set is poor and getting worse. But the training set is getting better, and still has a long way to go in accuracy - so we should try annealing our learning rate and running more epochs, before we make a decisions.", "model.optimizer.lr = 0.0001\nmodel.fit_generator(batches, batches.nb_sample, nb_epoch=5, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)", "Lucky we tried that - we starting to make progress! 
Let's keep going.", "model.fit_generator(batches, batches.nb_sample, nb_epoch=25, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)", "Amazingly, using nothing but a small sample, a simple (not pre-trained) model with no dropout, and data augmentation, we're getting results that would get us into the top 50% of the competition! This looks like a great foundation for our futher experiments.\nTo go further, we'll need to use the whole dataset, since dropout and data volumes are very related, so we can't tweak dropout without using all the data." ]
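The note at the top of this notebook, that training and validation sets should contain different drivers, is usually implemented by splitting on the driver list rather than on random images. The sketch below assumes the Kaggle-provided CSV of driver/image pairs (commonly named driver_imgs_list.csv with subject, classname and img columns) and the directory layout used above; those names are assumptions to adapt, not part of the original notebook.

```python
import os
import random
import pandas as pd

# Assumed file name and column names from the Kaggle State Farm download.
drivers = pd.read_csv("data/state/driver_imgs_list.csv")

# Hold out a few whole drivers for validation.
random.seed(42)
valid_subjects = set(random.sample(sorted(drivers["subject"].unique()), 3))

held_out = drivers[drivers["subject"].isin(valid_subjects)]
for _, row in held_out.iterrows():
    src = os.path.join("data/state/train", row["classname"], row["img"])
    dst_dir = os.path.join("data/state/valid", row["classname"])
    if not os.path.exists(dst_dir):
        os.makedirs(dst_dir)
    os.rename(src, os.path.join(dst_dir, row["img"]))
```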
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
google/CFU-Playground
proj/fccm_tutorial/Amaranth_for_CFUs.ipynb
apache-2.0
[ "<a href=\"https://colab.research.google.com/github/alanvgreen/CFU-Playground/blob/fccm2/proj/fccm_tutorial/Amaranth_for_CFUs.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nAmaranth for CFUs\nCopyright 2022 Google LLC.\nSPDX-License-Identifier: Apache-2.0\nThis page shows \n\nIncremental building of an Amaranth CFU\nSimple examples of Amaranth's language features.\n\nAlso see:\n\nhttps://github.com/amaranth-lang/amaranth\nDocs: https://amaranth-lang.org/docs/amaranth/latest/\n\navg@google.com / 2022-04-19\nThis next cell initialises the libraries and Python path. Execute it before any others.", "# Install Amaranth \n!pip install --upgrade 'amaranth[builtin-yosys]'\n\n# CFU-Playground library\n!git clone https://github.com/google/CFU-Playground.git\nimport sys\nsys.path.append('CFU-Playground/python')\n\n# Imports\nfrom amaranth import *\nfrom amaranth.back import verilog\nfrom amaranth.sim import Delay, Simulator, Tick\nfrom amaranth_cfu import TestBase, SimpleElaboratable, pack_vals, simple_cfu, InstructionBase, CfuTestBase\nimport re, unittest\n\n# Utility to convert Amaranth to verilog \ndef convert_elaboratable(elaboratable):\n v = verilog.convert(elaboratable, name='Top', ports=elaboratable.ports)\n v = re.sub(r'\\(\\*.*\\*\\)', '', v)\n return re.sub(r'^ *\\n', '\\n', v, flags=re.MULTILINE)\n\ndef runTests(klazz):\n loader = unittest.TestLoader()\n suite = unittest.TestSuite()\n suite.addTests(loader.loadTestsFromTestCase(klazz))\n runner = unittest.TextTestRunner()\n runner.run(suite)", "Four-way Multiply-Accumulate\nThese cells demonstrate the evolution of a full four-way multiply-accumulate CFU instruction.\nSingleMultiply\nDemonstrates a simple calculation: (a+128)*b", "class SingleMultiply(SimpleElaboratable):\n def __init__(self):\n self.a = Signal(signed(8))\n self.b = Signal(signed(8))\n self.result = Signal(signed(32))\n def elab(self, m):\n m.d.comb += self.result.eq((self.a + 128) * self.b)\n\nclass SingleMultiplyTest(TestBase):\n def create_dut(self):\n return SingleMultiply()\n def test(self):\n TEST_CASE = [\n (1-128, 1, 1),\n (33-128, -25, 33*-25),\n ]\n def process():\n for (a, b, expected) in TEST_CASE:\n yield self.dut.a.eq(a)\n yield self.dut.b.eq(b)\n yield Delay(0.1)\n self.assertEqual(expected, (yield self.dut.result))\n yield\n self.run_sim(process)\n\nrunTests(SingleMultiplyTest)", "WordMultiplyAdd\nPerforms four (a + 128) * b operations in parallel, and adds the results.", "class WordMultiplyAdd(SimpleElaboratable):\n def __init__(self):\n self.a_word = Signal(32)\n self.b_word = Signal(32)\n self.result = Signal(signed(32))\n def elab(self, m):\n a_bytes = [self.a_word[i:i+8].as_signed() for i in range(0, 32, 8)]\n b_bytes = [self.b_word[i:i+8].as_signed() for i in range(0, 32, 8)]\n m.d.comb += self.result.eq(\n sum((a + 128) * b for a, b in zip(a_bytes, b_bytes)))\n\n\nclass WordMultiplyAddTest(TestBase):\n def create_dut(self):\n return WordMultiplyAdd()\n \n def test(self):\n def a(a, b, c, d): return pack_vals(a, b, c, d, offset=-128)\n def b(a, b, c, d): return pack_vals(a, b, c, d, offset=0)\n TEST_CASE = [\n (a(99, 22, 2, 1), b(-2, 6, 7, 111), 59),\n (a(63, 161, 15, 0), b(29, 13, 62, -38), 4850),\n ]\n def process():\n for (a, b, expected) in TEST_CASE:\n yield self.dut.a_word.eq(a)\n yield self.dut.b_word.eq(b)\n yield Delay(0.1)\n self.assertEqual(expected, (yield self.dut.result))\n yield\n self.run_sim(process)\n\nrunTests(WordMultiplyAddTest)", 
"WordMultiplyAccumulate\nAdds an accumulator to the four-way multiply and add operation.\nIncludes an enable signal to control when accumulation takes place and a clear signal to rest the accumulator.", "class WordMultiplyAccumulate(SimpleElaboratable):\n def __init__(self):\n self.a_word = Signal(32)\n self.b_word = Signal(32)\n self.accumulator = Signal(signed(32))\n self.enable = Signal()\n self.clear = Signal()\n def elab(self, m):\n a_bytes = [self.a_word[i:i+8].as_signed() for i in range(0, 32, 8)]\n b_bytes = [self.b_word[i:i+8].as_signed() for i in range(0, 32, 8)]\n calculations = ((a + 128) * b for a, b in zip(a_bytes, b_bytes))\n summed = sum(calculations)\n with m.If(self.enable):\n m.d.sync += self.accumulator.eq(self.accumulator + summed)\n with m.If(self.clear):\n m.d.sync += self.accumulator.eq(0)\n\n\nclass WordMultiplyAccumulateTest(TestBase):\n def create_dut(self):\n return WordMultiplyAccumulate()\n \n def test(self):\n def a(a, b, c, d): return pack_vals(a, b, c, d, offset=-128)\n def b(a, b, c, d): return pack_vals(a, b, c, d, offset=0)\n DATA = [\n # (a_word, b_word, enable, clear), expected accumulator\n ((a(0, 0, 0, 0), b(0, 0, 0, 0), 0, 0), 0),\n\n # Simple tests: with just first byte\n ((a(10, 0, 0, 0), b(3, 0, 0, 0), 1, 0), 0),\n ((a(11, 0, 0, 0), b(-4, 0, 0, 0), 1, 0), 30),\n ((a(11, 0, 0, 0), b(-4, 0, 0, 0), 0, 0), -14),\n # Since was not enabled last cycle, accumulator will not change\n ((a(11, 0, 0, 0), b(-4, 0, 0, 0), 1, 0), -14),\n # Since was enabled last cycle, will change accumlator\n ((a(11, 0, 0, 0), b(-4, 0, 0, 0), 0, 1), -58),\n # Accumulator cleared\n ((a(11, 0, 0, 0), b(-4, 0, 0, 0), 0, 0), 0),\n\n # Uses all bytes (calculated on a spreadsheet)\n ((a(99, 22, 2, 1), b(-2, 6, 7, 111), 1, 0), 0),\n ((a(2, 45, 79, 22), b(-33, 6, -97, -22), 1, 0), 59),\n ((a(23, 34, 45, 56), b(-128, -121, 119, 117), 1, 0), -7884),\n ((a(188, 34, 236, 246), b(-87, 56, 52, -117), 1, 0), -3035),\n ((a(131, 92, 21, 83), b(-114, -72, -31, -44), 1, 0), -33997),\n ((a(74, 68, 170, 39), b(102, 12, 53, -128), 1, 0), -59858),\n ((a(16, 63, 1, 198), b(29, 36, 106, 62), 1, 0), -47476),\n ((a(0, 0, 0, 0), b(0, 0, 0, 0), 0, 1), -32362),\n\n # Interesting bug\n ((a(128, 0, 0, 0), b(-104, 0, 0, 0), 1, 0), 0),\n ((a(0, 51, 0, 0), b(0, 43, 0, 0), 1, 0), -13312),\n ((a(0, 0, 97, 0), b(0, 0, -82, 0), 1, 0), -11119),\n ((a(0, 0, 0, 156), b(0, 0, 0, -83), 1, 0), -19073),\n ((a(0, 0, 0, 0), b(0, 0, 0, 0), 1, 0), -32021),\n ]\n\n dut = self.dut\n\n def process():\n for (a_word, b_word, enable, clear), expected in DATA:\n yield dut.a_word.eq(a_word)\n yield dut.b_word.eq(b_word)\n yield dut.enable.eq(enable)\n yield dut.clear.eq(clear)\n yield Delay(0.1) # Wait for input values to settle\n\n # Check on accumulator, as calcuated last cycle\n self.assertEqual(expected, (yield dut.accumulator))\n yield Tick()\n self.run_sim(process)\n\nrunTests(WordMultiplyAccumulateTest) ", "CFU Wrapper\nWraps the preceding logic in a CFU. 
Uses funct7 to determine what function the WordMultiplyAccumulate unit should perform.", "class Macc4Instruction(InstructionBase):\n \"\"\"Simple instruction that provides access to a WordMultiplyAccumulate\n\n The supported functions are:\n * 0: Reset accumulator\n * 1: 4-way multiply accumulate.\n * 2: Read accumulator\n \"\"\"\n\n def elab(self, m):\n # Build the submodule\n m.submodules.macc4 = macc4 = WordMultiplyAccumulate()\n\n # Inputs to the macc4\n m.d.comb += macc4.a_word.eq(self.in0)\n m.d.comb += macc4.b_word.eq(self.in1)\n\n # Only function 2 has a defined response, so we can\n # unconditionally set it.\n m.d.comb += self.output.eq(macc4.accumulator)\n\n with m.If(self.start):\n m.d.comb += [\n # We can always return control to the CPU on next cycle\n self.done.eq(1),\n\n # clear on function 0, enable on function 1\n macc4.clear.eq(self.funct7 == 0),\n macc4.enable.eq(self.funct7 == 1),\n ]\n\n\ndef make_cfu():\n return simple_cfu({0: Macc4Instruction()})\n\n\nclass CfuTest(CfuTestBase):\n def create_dut(self):\n return make_cfu()\n\n def test(self):\n \"Tests CFU plumbs to Madd4 correctly\"\n def a(a, b, c, d): return pack_vals(a, b, c, d, offset=-128)\n def b(a, b, c, d): return pack_vals(a, b, c, d, offset=0)\n # These values were calculated with a spreadsheet\n DATA = [\n # ((fn3, fn7, op1, op2), result)\n ((0, 0, 0, 0), None), # reset\n ((0, 1, a(130, 7, 76, 47), b(104, -14, -24, 71)), None), # calculate\n ((0, 1, a(84, 90, 36, 191), b(109, 57, -50, -1)), None),\n ((0, 1, a(203, 246, 89, 178), b(-87, 26, 77, 71)), None),\n ((0, 1, a(43, 27, 78, 167), b(-24, -8, 65, 124)), None),\n ((0, 2, 0, 0), 59986), # read result\n\n ((0, 0, 0, 0), None), # reset\n ((0, 1, a(67, 81, 184, 130), b(81, 38, -116, 65)), None),\n ((0, 1, a(208, 175, 180, 198), b(-120, -70, 8, 11)), None),\n ((0, 1, a(185, 81, 101, 108), b(90, 6, -92, 83)), None),\n ((0, 1, a(219, 216, 114, 236), b(-116, -9, -109, -16)), None),\n ((0, 2, 0, 0), -64723), # read result\n\n ((0, 0, 0, 0), None), # reset\n ((0, 1, a(128, 0, 0, 0), b(-104, 0, 0, 0)), None),\n ((0, 1, a(0, 51, 0, 0), b(0, 43, 0, 0)), None),\n ((0, 1, a(0, 0, 97, 0), b(0, 0, -82, 0)), None),\n ((0, 1, a(0, 0, 0, 156), b(0, 0, 0, -83)), None),\n ((0, 2, a(0, 0, 0, 0), b(0, 0, 0, 0)), -32021),\n ]\n self.run_ops(DATA)\n\nrunTests(CfuTest)", "Amaranth to Verilog Examples\nThese examples show Amaranth and the Verilog it is translated into.\nSyncAndComb\nDemonstrates synchronous and combinatorial logic with a simple component that outputs the high bit of a 12 bit counter.", "class SyncAndComb(Elaboratable):\n def __init__(self):\n self.out = Signal(1)\n self.ports = [self.out]\n def elaborate(self, platform):\n m = Module()\n counter = Signal(12)\n m.d.sync += counter.eq(counter + 1)\n m.d.comb += self.out.eq(counter[-1])\n return m\nprint(convert_elaboratable(SyncAndComb()))", "Conditional Enable\nDemonstrates Amaranth's equivalent to Verilog's if statement. 
A five bit counter is incremented when input signal up is high or decremented when down is high.", "class ConditionalEnable(Elaboratable):\n def __init__(self):\n self.up = Signal()\n self.down = Signal()\n self.value = Signal(5)\n self.ports = [self.value, self.up, self.down]\n\n def elaborate(self, platform):\n m = Module()\n with m.If(self.up):\n m.d.sync += self.value.eq(self.value + 1)\n with m.Elif(self.down):\n m.d.sync += self.value.eq(self.value - 1)\n return m\n\nprint(convert_elaboratable(ConditionalEnable()))\n ", "EdgeDetector\nSimple edge detector, along with a test case.", "class EdgeDetector(SimpleElaboratable):\n \"\"\"Detects low-high transitions in a signal\"\"\"\n def __init__(self):\n self.input = Signal()\n self.detected = Signal()\n self.ports = [self.input, self.detected]\n def elab(self, m):\n last = Signal()\n m.d.sync += last.eq(self.input)\n m.d.comb += self.detected.eq(self.input & ~last)\n \nclass EdgeDetectorTestCase(TestBase):\n def create_dut(self):\n return EdgeDetector()\n\n def test_with_table(self):\n TEST_CASE = [\n (0, 0),\n (1, 1),\n (0, 0),\n (0, 0),\n (1, 1),\n (1, 0),\n (0, 0),\n ]\n def process():\n for (input, expected) in TEST_CASE:\n # Set input\n yield self.dut.input.eq(input)\n # Allow some time for signals to propagate\n yield Delay(0.1)\n self.assertEqual(expected, (yield self.dut.detected))\n yield\n self.run_sim(process)\n\nrunTests(EdgeDetectorTestCase)\nprint(convert_elaboratable(EdgeDetector()))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
parrt/msan501
notes/computation.ipynb
mit
[ "Model of Computation\nNow that we know how computers store data in memory, we need a basic understanding of how computers process that data. Let's explore the simplest, fine-grained operations that a computer can perform. Ultimately, we will draw from these operations to design programs. Programmers think in terms of high-level operations, such as map, search, or filter, but our fingers type fine-grained code patterns associated with those high-level operations.\nCanonical processor operations\nAs we saw in Representing data in memory, a computer's memory holds data temporarily while the processor works on that data. We typically load data into memory from the disk and organize it into a structure that is suitable for the computation we'd like to perform. In the end, though, memory just holds data. All of the action happens in the computer processor (CPU), which performs five principal operations:\n\nload small chunks of data from memory into the CPU\nperform arithmetic computations on data in the CPU\nstore small chunks of data back to memory\njump to a new location (this is how we loop)\njump to a new location if condition is true\n\nProcessors execute low-level machine instructions that perform one or more of those principal operations. Each instruction does a tiny amount of work (like adding two numbers) but the processor can do them extremely fast, on the order of billions a second. Writing a program in these low-level machine instructions would be extremely tedious, so we typically use programming languages such as Python to make our lives easier.\nTo give you an idea of just how low-level these machine operations are, consider the following simple assignment statement.\npython\ntotal = cost + tax\nEven something this simple requires the processor to execute multiple low-level instructions. The processor must look at the values of cost and tax in memory, add the two values, then store the result back into memory at the address associated with total.\nOrder of operations\nUnless given instructions to the contrary, the processor keeps executing instructions one after the other. For example, given the following pseudocode sequence, a processor would execute the assignment statements and then execute the print.", "cost = 100.0\ntax = 8.1\ntotal = cost + tax\nprint(total)", "The notion of doing a sequence of operations in order is familiar to us from cooking with recipes. For example, a recipe might direct us to:\nput ingredients in bowl<br>\nmix ingredients together<br>\npour into baking pan<br>\nbake at 375 degrees for 40 minutes\nWe naturally assume the steps are given in execution order.\nConditional execution\nSome recipes give conditional instructions, such as\nif not sweet enough, add some sugar\nSimilarly, processors can conditionally execute one or a group of operations.", "x = -3\nif x<0:\n x = 0\n print(\"was negative\")\nprint(x)", "Conditional operations execute only if the conditional expression is true. To be clear, the processor does not execute all of the operations present in the program.\nWhen mapping a real-world problem to a conditional statement, your goal is to identify these key elements:\n\nthe conditional expression\nthe operation(s) to perform if the condition is true\n\nA template for conditional execution looks like:\nif condition:<br>\n&nbsp;&nbsp;&nbsp;&nbsp;operation 1<br>\n&nbsp;&nbsp;&nbsp;&nbsp;operation 2<br>\n&nbsp;&nbsp;&nbsp;&nbsp;...\nThe condition must be actionable by a computer. 
For example, the condition \"the cat is hungry\" is not actionable by computer because there's nothing in its memory a computer can test that is equivalent to the cat being hungry. Conditions almost always consist of equality or relational operators for arithmetic, such as cost &gt; 10, cost + tax &lt; 100, or quantity == 4.\nIt's time to introduce a new data type: boolean, which holds either a true or false value. The result (value) of any equality or relational operator is boolean. For example, \"3>2\" evaluates to true and \"3>4\" evaluates to false.\nIn some cases, we want to execute an operation in either case, one operation if the condition is true and a different operation if the condition is false. For example, we can express a conditional operation to find the larger of two values, x and y, as:", "x = 99\ny = 210\nif x > y: max = x\nelse: max = y\nprint(max)", "Conditional execution is kind of like executing an operation zero or one times, depending on the conditional expression. We also need to execute operations multiple times in some cases.\nRepeated execution\nMost of the programming you will do involves applying operations to data, which means operating on data elements one by one. This means we need to be able to repeat instructions, but we rarely use such a low level construct as: while condition, do this or that. For example, it's rare that we want to print hello five times:", "for i in range(5):\n print(\"Hello\")", "(range(n) goes from 0 to n-1)\nOn the other hand, generic loops that go around until the condition is met can be useful, such as \"read data from the network until the data stops.\" Or, we can do computations such as the following rough approximation to log base 10:", "n = 1000\nlog = 0\ni = n\nwhile i > 1:\n log += 1\n print(i)\n i = i / 10\nprint(f\"Log of {n} is {log}\")", "Most of the time, though, we are scanning through data, which means a \"for each loop\" or \"indexed for loop.\"\nFor-each loops\nThe for-each loop iterates through a sequence of elements, such as a list but can be any iteratable object, such as a file on the disk. It is the most common kind of loop you will use. Here is the pattern:\nfor x in iterable_object:\n process x in some way\nFor example:", "for name in ['parrt', 'mary', 'tombu']:\n print(name.upper())", "We can iterate through more complicated objects too, such as the following numpy matrix:", "import numpy as np\nA = np.array([[19,11],\n [21,15],\n [103,18],\n [99,13],\n [8,2]])\nfor row in A:\n print(row)", "Or even a dataframe:", "from pandas import DataFrame\n\ndf = DataFrame(data=[[99,'parrt'],[101,'sri'],[42,'kayla']],\n columns=['ID','user'])\ndf\n\nfor row in df.itertuples():\n print(f\"Info {row.ID:04d}: {row.user}\")", "A useful bit of Python magic gives us the iteration index as well as the iterated value:", "for i,row in enumerate(A):\n print(f\"{i+1}. {row}\")\n\n# same as the enumerate\ni = 1\nfor row in A:\n print(f\"{i}. {row}\")\n i += 1", "With this code pattern, our goal is to find a good iterated variable named, identify the conditional, and then identify the operation(s) to repeat. \nIndexed loops\nIt's sometimes useful to execute an indexed loop. These are useful when we have to iterate through multiple lists at the same time. 
For example, if we have a list of names and their phone numbers, we might want to use an indexed loop:", "names = ['parrt', 'mary', 'tombu']\nphones = ['5707', '1001', '3412']\nfor i in range(len(names)):\n name = names[i]\n phone = phones[i]\n print(f\"{name:>8}: {phone}\")", "zip'd loops\nAnd here is how the cool kids do the same thing without an indexed loop (using a foreach):", "for name, phone in zip(names,phones):\n print(f\"{name:>8}: {phone}\")\n\nfor i, (name, phone) in enumerate(zip(names,phones)):\n print(f\"{i}. {name:>8}: {phone}\")", "List comprehensions", "names = ['parrt', 'mary', 'tombu']\n[name.upper() for name in names]\n\n[name for name in names] # make (shallow) copy of list\n\n[name for name in names if name.startswith('m')]\n\n[name.upper() for name in names if name.startswith('m')]\n\nQuantity = [6, 49, 27, 30, 19, 21, 12, 22, 21]\n[q*10 for q in Quantity]\n\nQuantity2 = [6, 49, 0, 30, -19, 21, 12, 22, 21]\n[q*10 for q in Quantity2 if q>0]\n\n# Find indexes of all values <= 0 in Quantity2\n[i for i,q in enumerate(Quantity2) if q<=0]\n\n[f\"{name}: {phone}\" for name, phone in zip(names,phones)]", "Translating formulas\nSigma notation from mathematics translates in a straightforward fashion to indexed loops. For example:\n<img src=\"images/formula-translation.png\" width=\"490\"> \nWe pick elements from the summations and insert them into the template for an indexed loop.\nExercises\nExercise: Given a string containing the digits of a number, such as s = \"501\", convert that number to an integer and print it out. Work backwards from the desired result, n. Recall that 501 = 5*100 + 0*10 + 1*1. Start with the simplest possible bit of Python and then tidy it up using any cool constructs you know from Python. Hint: The cool kids will end up using \"Horner's rule.\" What happens if the string is empty? You can't use int('501'), but you will need int('5') on a single digit.\nExercise: Reuse that pattern to convert a string containing binary digits of a binary number, such as s = \"1101\", to an integer, n, and print it out. 1101 binary is 13 in decimal. 1101 is $1\\cdot 2^3 + 1\\cdot 2^2 + 0\\cdot 2^1 + 1\\cdot 2^0$.\nExercise: Given two lists, such as a = [9, 3] and b = [1, 4, 10], create and print a new list, c, containing alternating elements from the input lists. In this case, the output would be [9, 1, 3, 4, 10]. Start by assuming the same number of elements and then try for the more general case. What happens if one or both lists are empty?\nExercise: Python has a built-in function called zip(a,b) that is a handy way to get a list of tuples containing elements from lists a and b. For example, if a = [9, 3] and b = [1, 4, 10], zip(a,b) gives a sequence of tuples (9, 1), (3, 4). The built-in zip stops when one of the lists runs out of elements, but we want to fill in missing elements with None: you should get output list c = [(9, 1), (3, 4), (None, 10)]. In this exercise, we use the ideas or even the code itself from the previous exercise to implement your own zip functionality. The only difference is that you should fill in missing elements with None.\nIf you get stuck, or just to check your answers, you can check my solutions.\nSummary\nOther than transferring data to and from memory, processors primarily perform arithmetic operations, such as \"cost + tax\". 
Processors can also conditionally or repeatedly execute operations.\nWhen mapping real-world problems to pseudocode, you'll follow the program or function work plan and eventually work backwards from the desired result to identify a suitable sequence of operations. These operations will either map to our high level programming operations or to the lower level pseudocode patterns described here.\nIf you can't identify a higher level operation for a piece of the problem, try to map it to a conditional operation or a loop around one or more operations.\nFor conditionals, you have to identify the conditional Boolean expression and the operation or operations that should be executed conditionally:\nif condition:<br>\n&nbsp;&nbsp;&nbsp;&nbsp;operation 1<br>\n&nbsp;&nbsp;&nbsp;&nbsp;operation 2<br>\n&nbsp;&nbsp;&nbsp;&nbsp;...\nIf you need to execute code when the condition fails, use this template:\nif condition:<br>\n&nbsp;&nbsp;&nbsp;&nbsp;operation 1<br>\n&nbsp;&nbsp;&nbsp;&nbsp;operation 2<br>\n&nbsp;&nbsp;&nbsp;&nbsp;...<br>\nelse:<br>\n&nbsp;&nbsp;&nbsp;&nbsp;operation 1<br>\n&nbsp;&nbsp;&nbsp;&nbsp;operation 2<br>\n&nbsp;&nbsp;&nbsp;&nbsp;...\nFor repeated execution, we have a generic loop that executes one or more operations while a condition is met:\nloop setup, usually init counter or value to update in loop<br>\nwhile condition:<br>\n&nbsp;&nbsp;&nbsp;&nbsp;operation 1<br>\n&nbsp;&nbsp;&nbsp;&nbsp;operation 2<br>\n&nbsp;&nbsp;&nbsp;&nbsp;...\nA very common version of a loop traverses a sequence, such as a list, with a variable that takes on each value of the sequence one at a time:\nfor each x in sequence:<br>\n&nbsp;&nbsp;&nbsp;&nbsp;operate on x\nWhen iterating through multiple lists at the same time, we use an indexed loop of the form:\nfor i in some integer_set or range:<br>\n&nbsp;&nbsp;&nbsp;&nbsp;operation 1<br>\n&nbsp;&nbsp;&nbsp;&nbsp;operation 2<br>\n&nbsp;&nbsp;&nbsp;&nbsp;..." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gaufung/Data_Analytics_Learning_Note
python-statatics-tutorial/basic-theme/scipy_basic/details.ipynb
mit
[ "模块使用\n1 scipy.io\n读取矩阵数据", "import numpy as np\nfrom scipy import io as spio\na = np.ones((3,3))\nspio.savemat('file.mat',{'a':a})\ndata = spio.loadmat('file.mat',struct_as_record=True)\ndata['a']", "读取图像", "from scipy import misc\nmisc.imread('fname.png')\n\nimport matplotlib.pyplot as plt\nplt.imread('fname.png')", "文本文件\nnumpy.loadtxt() / numpy.savetxt() \ntxt/csv文件\nnumpy.genfromtxt()/numpy.recfromcsv() \n二进制文件\nnumpy.load() / numpy.save()\n\n2 scipy.linalg\n计算行列式", "from scipy import linalg\narr= np.array([[1,2],\n [3,4]])\nlinalg.det(arr)\n\narr = np.array([[3,2],\n [6,4]])\nlinalg.det(arr)\n\nlinalg.det(np.ones(3,4))", "计算逆矩阵", "arr = np.array([[1,2],[3,4]])\niarr = linalg.inv(arr)\niarr\n\n# 验证\nnp.allclose(np.dot(arr,iarr),np.eye(2))\n\n# 奇异矩阵求逆抛出异常\narr = np.array([[3,2],[6,4]])\nlinalg.inv(arr)", "奇异值分解", "arr = np.arange(9).reshape((3,3)) + np.diag([1,0,1])\nuarr,spec,vharr = linalg.svd(arr)\nspec\n\nsarr = np.diag(spec)\nsvd_mat = uarr.dot(sarr).dot(vharr)\nnp.allclose(svd_mat,arr)", "SVD常用于统计和信号处理领域。其他的一些标准分解方法(QR, LU, Cholesky, Schur) 在 scipy.linalg 中也能够找到。\n3 优化", "from scipy import optimize\ndef f(x):\n return x**2 + 10*np.sin(x)\nx = np.arange(-10,10,0.1)\nplt.plot(x,f(x))\nplt.show()", "此函数有一个全局最小值,约为-1.3,含有一个局部最小值,约为3.8.\n在寻找最小值的过程中,确定初始值,用梯度下降的方法,bfgs是一个很好的方法。", "optimize.fmin_bfgs(f,0)\n\n# 但是方法的缺陷是陷入局部最优解\noptimize.fmin_bfgs(f,5)", "可以在一个区间中找到一个最小值", "xmin_local = optimize.fminbound(f,0,10)\nxmin_local", "寻找函数的零点", "# guess 1 \nroot = optimize.fsolve(f,1)\nroot\n\n# guess -2.5\nroot = optimize.fsolve(f,-2.5)\nroot", "曲线拟合\n从函数f中采样得到一些含有噪声的数据", "xdata = np.linspace(-10,10,num=20)\nydata = f(xdata)+np.random.randn(xdata.size)", "我们已经知道函数的形式$x^2+\\sin(x)$,但是每一项的系数不清楚,因此进行拟合处理", "def f2(x,a,b):\n return a*x**2 + b*np.sin(x)\nguess=[3,2]\nparams,params_covariance = optimize.curve_fit(f2, xdata, ydata, guess)\nparams", "绘制结果", "x = np.arange(-10,10,0.1)\ndef f(x):\n return x**2 + 10 * np.sin(x)\ngrid = (-10,10,0.1)\nxmin_global = optimize.brute(f,(grid,))\nxmin_local = optimize.fminbound(f,0,10)\nroot = optimize.fsolve(f,1)\nroot2 = optimize.fsolve(f,-2.5)\nxdata = np.linspace(-10,10,num=20)\nnp.random.seed(1234)\nydata = f(xdata)+np.random.randn(xdata.size)\ndef f2(x,a,b):\n return a*x**2 + b * np.sin(x)\nguess=[2,2]\nparams,_ =optimize.curve_fit(f2,xdata,ydata,guess)\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.plot(x,f(x),'b-',label='f(x)')\nax.plot(x,f2(x,*params),'r--',label='Curve fit result')\nxmins = np.array([xmin_global[0],xmin_local])\nax.plot(xmins,f(xmins),'go',label='minize')\nroots = np.array([root,root2])\nax.plot(roots,f(roots),'kv',label='Roots')\nax.legend()\nax.set_xlabel('x')\nax.set_ylabel('f(x)')\nplt.show()", "4 统计\n直方图和概率密度统计", "a = np.random.normal(size=1000)\nbins = np.arange(-4,5)\nbins\n\nhistogram = np.histogram(a,bins=bins,normed=True)[0]\nbins = 0.5*(bins[1:]+bins[:-1])\nbins\n\nfrom scipy import stats\nb =stats.norm.pdf(bins)\nplt.plot(bins,histogram)\nplt.plot(bins,b)\nplt.show()", "百分位数\n百分位是累计概率分布函数的一个估计", "np.median(a)\n\nstats.scoreatpercentile(a,50)\n\nstats.scoreatpercentile(a,90)", "统计检验\n统计检验的结果常用作一个决策指标。例如,如果我们有两组观察点,它们都来自高斯过程,我们可以使用 T-检验 来判断两组观察点是都显著不同:", "a = np.random.normal(0,1,size=100)\nb = np.random.normal(0,1,size=10)\nstats.ttest_ind(a,b)", "返回结果分成连个部分\n+ T检验统计量\n使用检验的统计量的值\n+ P值\n如果结果接近1,表明符合预期,接近0表明不符合预期。\n5 插值", "measured_time = np.linspace(0,1,10)\nnoise = (np.random.random(10)*2-1) *1e-1\nmeasure = np.sin(2*np.pi*measured_time) + noise\nfrom scipy.interpolate import interp1d\nlinear_interp = 
interp1d(measured_time, measure)\ncomputed_time = np.linspace(0, 1, 50)\nlinear_results = linear_interp(computed_time)\ncublic_interp = interp1d(measured_time, measure, kind='cubic')\ncublic_results = cublic_interp(computed_time)\nplt.plot(measured_time,measure,'o',label='points')\nplt.plot(computed_time,linear_results,'r-',label='linear interp')\nplt.plot(computed_time,cublic_results,'y-',label='cublic interp')\nplt.legend()\n3 plt.show()", "练习\n温度曲线拟合\n阿拉斯加每个月温度的最大值和最小值数据见下表: \n最小值 | 最大值 | 最小值 | 最大值\n--- | --- | --- | --- \n-62 | 17 | -9 | 37\n-59 | 19 | -13 | 37\n-56 | 21 | -25 | 31\n-46 | 28 | -46 | 23\n-32 | 33 | -52 | 19\n-18 | 38 | -48 | 18\n要求\n+ 绘制温度图像\n+ 拟合出一条函数曲线\n+ 使用scipy.optimize.curvie_fit()来拟合函数\n+ 画出函数图像。\n+ 判断最大值和最小值的偏置是否合理", "import numpy as np\nimport matplotlib.pyplot as plt\nmonths = np.arange(1,13)\nmins = [-62,-59,-56,-46,-32,-18,-9,-13,-25,-46,-52,-48]\nmaxes = [17,19,21,28,33,38,37,37,31,23,19,18]\nfig,ax = plt.subplots()\nplt.plot(months,mins,'b-',label='min')\nplt.plot(months,maxes,'r-',label='max')\nplt.ylim(-80,80)\nplt.xlim(0.5,12.5)\nplt.xlabel('month')\nplt.ylabel('temperature')\nplt.xticks([1,2,3,4,5,6,7,8,9,10,11,12],\n ['Jan.','Feb.','Mar.','Apr.','May.','Jun.','Jul.','Aug.','Sep,','Oct.','Nov.','Dec.'])\nplt.legend()\nplt.title('Alaska temperature')\nplt.show()", "从图像上来看,温度的最高值和最低值都符合二次函数的特点,$y = at^2+bt+c$,其中$c$为时间$t$的偏置。", "from scipy import optimize\ndef f(t,a,b,c):\n return a * t**2+b*t+c\nguess = [-1,8,50]\nparams_min,_ = optimize.curve_fit(f,months,mins,guess)\nparams_max,_ = optimize.curve_fit(f,months,maxes,guess)\ntimes = np.linspace(1,12,30)\nplt.plot(times,f(times,*params_min),'b--',label='min_fit')\nplt.plot(times,f(times,*params_max),'r--',label='max_fit')\nplt.plot(months,mins,'bo',label='min')\nplt.plot(months,maxes,'ro',label='max')\nplt.ylim(-80,80)\nplt.xlim(0.5,12.5)\nplt.xlabel('month')\nplt.ylabel('temperature')\nplt.xticks([1,2,3,4,5,6,7,8,9,10,11,12],\n ['Jan.','Feb.','Mar.','Apr.','May.','Jun.','Jul.','Aug.','Sep,','Oct.','Nov.','Dec.'])\nplt.title('Alaska temperature')\nplt.show()", "温度最高值拟合效果较好,但温度最低值拟合效果不太好\n求解最小值\n驼峰函数 $$f(x,y)=(4-2.1x^2+\\frac{x^4}{3})x^2+xy+(4y^2-4)y^2$$\n+ 限制变量范围: $-2<x<2,-1<y<1$\n+ 使用 numpy.meshgrid() 和 pylab.imshow() 目测最小值所在区域\n+ 使用 scipy.optimize.fmin_bfgs() 或者其他的用于可以求解多维函数最小值的算法", "import numpy as np\nfrom scipy import optimize\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\n\ndef sixhump(x):\n return (4 - 2.1*x[0]**2 + x[0]**4 / 3.) * x[0]**2 + x[0] * x[1] + (-4 + 4*x[1]**2) * x[1] **2\n\nx = np.linspace(-2, 2)\ny = np.linspace(-1, 1)\nxg, yg = np.meshgrid(x, y)\n\n#plt.figure() # simple visualization for use in tutorial\n#plt.imshow(sixhump([xg, yg]))\n#plt.colorbar()\n\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nsurf = ax.plot_surface(xg, yg, sixhump([xg, yg]), rstride=1, cstride=1,\n cmap=plt.cm.jet, linewidth=0, antialiased=False)\n\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_zlabel('f(x, y)')\nax.set_title('Six-hump Camelback function')\nplt.show()\n\nmin1 = optimize.fmin_bfgs(sixhump,[0,-0.5])\nmin2 = optimize.fmin_bfgs(sixhump,[0,0.5])\nmin3 = optimize.fmin_bfgs(sixhump,[-1.4,1.0])\n\nlocal1 = sixhump(min1)\nlocal2 = sixhump(min2)\nlocal3 = sixhump(min3)\nprint local1,local2,local3" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
detcitty/intro-numerical-methods
1_intro_to_python.ipynb
mit
[ "Discussion 1: Introduction to Python\nSo you want to code in Python? We will do some basic manipulations and demonstrate some of the basics of the notebook interface that we will be using extensively throughout the course.\nTopics:\n - Math\n - Variables\n - Lists\n - Control flow\n - Coding style\n - Other data structures\n - IPython/Jupyter notebooks\nOther intros: \n - Basic Python\n - Software Carpentry - Programming in Python\nPython Math\nLets start with some basic functions:", "2 + 2\n\n32 - (4 + 2)**2\n\n1 / 2", "Why do we get the answer above rather than what we would expect?\nThe answer has to do with the type of number being used. Python is a \"dynamically\" typed language and automatically determines what kind of number to allocate for us. Above, because we did not include a decimal, Python automatically treated the expression as integers (int type) and according to integer arithmetic, 1 / 2 = 0. Now if we include a decimal we get:", "1.0 / 2", "Note that Python will make the output a float in this case. What happens for the following though?", "4.0 + 4**(3/2)\n\n4.0 + 4.0**(3.0 / 2.0)", "Good practice to just add a decimal after any number you really want to treat as a float.\nAdditional types of numbers include complex, Decimal and Fraction.", "3+5j", "Note that to use \"named\" functions such as sqrt or sin we need to import a module so that we have access to those functions. When you import a module (or package) in Python we are asking Python to go look for the code that is named and make them active in our workspace (also called a namespace in more general parlance). Here is an example where we use Python's builtin math module:", "import math\nmath.sqrt(4)\n\nmath.sin(math.pi / 2.0)\n\nmath.exp(-math.pi / 4.0)", "Note that in order to access these functions we need to prepend the math. to the functions and the constant $\\pi$. We can forgo this and import all of what math holds if we do the following:", "from math import *\nsin(pi / 2.0)", "Note that many of these functions always return a float number regardless of their input.\nVariables\nAssign variables like you would in any other language:", "num_students = 80\nroom_capacity = 85\n(room_capacity - num_students) / room_capacity * 100.0", "Note that we do not get what we expect from this expression as we expected from above. What would we have to change to get this to work?\nWe could go back to change our initializations but we could also use the function float to force these values to be of float type:", "float(room_capacity - num_students) / float(room_capacity) * 100.0", "Note here we have left the defined variables as integers as it makes sense that they remain that way (fractional students aside).", "a = 10\nb = a + 2\nprint b", "Lists\nOne of the most useful data structures in Python is the list.", "grades = [90.0, 67.0, 85.0, 76.0, 98.0, 70.0]", "Lists are defined with square brackets and delineated by commas. Note that there is another data type called sequences denoted by ( ) which are immutable (cannot be changed) once created. Lets try to do some list manipulations with our list of grades above.\nAccess a single value in a list", "grades[3]", "Note that Python is 0 indexed, i.e. the first value in the list is accessed by 0.\nFind the length of a list", "len(grades)", "Add values to a list", "grades = grades + [62.0, 82.0, 59.0]\nprint grades", "Slicing is another important operation", "grades[2:5]\n\ngrades[0:4]\n\ngrades[:4]\n\ngrades[4:]", "Note that the range of values does not include the last indexed! 
This is important to remember for more than lists but we will get to that later.", "grades[4:11]", "Another property of lists is that you can put different types in them at the same time. This can be important to remember if you may have both int and float types.", "remember = [\"2\", 2, 2.0]\n\nremember[0] / 1\n\nremember[1] / 1\n\nremember[2] / 1", "Finally, one of the more useful list creation functions is range which creates a list with the bounds requested", "count = range(3,7)\nprint count", "Control Flow\nif\nMost basic logical control", "x = 4\nif x > 5:\n print \"x is greater than 5\"\nelif x < 5:\n print \"x is less than 5\"\nelse:\n print \"x is equal to 5\"\n ", "for\nThe for statements provide the most common type of loops in Python (there is also a while construct).", "for i in range(5):\n print i\n\nfor i in range(3,7):\n print i\n\nfor animal in ['cat', 'dog', 'chinchilla']:\n print animal", "Related to the for statement are the control statements break and continue. Ideally we can create a loop with logic that can avoid these but sometimes code can be more readable with judiciuos use of these statements.", "for n in range(2, 10):\n is_prime = True\n for x in range(2, n):\n if n % x == 0:\n print n, 'equals', x, '*', n / x\n is_prime = False\n break\n if is_prime:\n print \"%s is a prime number\" % (n)", "The pass statement might appear fairly useless as it simply does nothing but can provide a stub to remember to come back and implement something", "def my_func(x):\n # Remember to implement this later!\n pass", "Defining Functions\nThe last statement above defines a function in Python with an argument called x. Functions can be defined and do lots of different things, here are a few examples.", "def my_print_function(x):\n print x\n\nmy_print_function(3)\n\ndef my_add_function(a, b):\n return a + b\n\nmy_add_function(3.0, 5.0)\n\ndef my_crazy_function(a, b, c=1.0):\n d = a + b**c\n return d\n\nmy_crazy_function(2.0, 3.0), my_crazy_function(2.0, 3.0, 2.0), my_crazy_function(2.0, 3.0, c=2.0)\n\ndef my_other_function(a, b, c=1.0):\n return a + b, a + b**c, a + b**(3.0 / 7.0)\n\nmy_other_function(2.0, 3.0, c=2.0)", "Lets try writing a bit more of a complex (and useful) function. The Fibinocci sequence is formed by adding the previous two numbers of the sequence to get the next value (starting with [0, 1]).", "def fibonacci(n):\n \"\"\"Return a list of the Fibonacci sequence up to n\"\"\"\n values = [0, 1]\n while values[-1] <= n:\n values.append(values[-1] + values[-2])\n print values\n return values\n\nfibonacci(100)", "Coding Style\nVery important in practice to write readable and understandable code. Here are a few things to keep in mind while programming in and out of this class, we will work on this actively as the semester progresses as well. The standard for which Python program are written to is called PEP 8 and contains the following basic guidelines:\n - Use 4-space indentation, no tabs\n - Wrap lines that exceed 80 characters\n - Use judicious use of blank lines to separate out functions, classes, and larger blocks of contained code\n - Comment! 
Also, put comments on their own line when possible\n - Use docstrings (function descriptions)\n - Use spaces around operators and after commas, a = f(1, 2) + g(3, 4)\n - Name your classes and functions consistently.\n - Use CamelCase for classes\n - Use lower_case_with_underscores for functions and variables\n - When in doubt be verbose with your comments and names of variables, functions, and classes\nTo help all of us learn from each other what coding styles are easier to read, we will be doing peer-reviews of the coding portions of the assignments. After the first assignment is turned in we will review a general template for code review which you will need to fill out for each of your peers' homework. Please be as thorough and helpful as you can!\nIPython/Jupyter Notebooks\nWe will use a lot of IPython/Jupyter notebooks in this class for both class notes (what you are looking at now) and for turning in homework. The IPython notebook allows for the inline inclusion of a number of different types of input, the most critical of which will be\n - Code (python or otherwise) and\n - Markdown which includes\n - LaTeX,\n - HTML, and\n - JavaScript.\nIPython notebooks allow us to organize and comment on our efforts together along with writing active documents that can be modified in-situ to our work. This can lead to better practice of important ideas such as reproducibility in our work." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
fja05680/pinkfish
examples/160.merge-trades/strategy.ipynb
mit
[ "merge-trades\nThe purpose of this example is to demonstrate the merge capability of the trade log. You may want to merge your trades when you are examining the result of trading algorithms that scaling in and/or out of positions. You can view a whole sequence of trades as one trade instead of the individual trades. For example, you scale in 4 different buy operations, then sell all at once. Using merge, those 4 trades will be grouped into 1 buy operation with the single sell operation. Statistical Analysis in pinkfish will treat this as a single trade instead of 4 distinct trades. This will be useful in evaluating the effectiveness of a scaling algorithm.\nDouble 7's (Short Term Trading Strategies that Work)\n\n1. The SPY is above its 200-day moving average\n2. The SPY closes at a X-day low, buy some shares.\n If it falls further, buy some more, etc...\n3. If the SPY closes at a X-day high, sell your entire long position.\n\n(Scaling in; compare regular trade log vs merged trade log)", "import datetime\n\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\nimport pinkfish as pf\nimport strategy\n\n# Format price data.\npd.options.display.float_format = '{:0.2f}'.format\n\n%matplotlib inline\n\n# Set size of inline plots.\n'''note: rcParams can't be in same cell as import matplotlib\n or %matplotlib inline\n \n %matplotlib notebook: will lead to interactive plots embedded within\n the notebook, you can zoom and resize the figure\n \n %matplotlib inline: only draw static images in the notebook\n'''\nplt.rcParams[\"figure.figsize\"] = (10, 7)", "Some global data", "capital = 10000\nstart = datetime.datetime(2020, 1, 1)\nend = datetime.datetime.now()", "Define symbols", "symbols = ['SPY', 'SPY_merged']", "Options", "options = {\n 'use_adj' : False,\n 'use_cache' : True,\n 'stop_loss_pct' : 1.0,\n 'margin' : 1,\n 'period' : 7,\n 'max_open_trades' : 4,\n 'enable_scale_in' : True,\n 'enable_scale_out' : True,\n 'merge_trades' : False\n}", "Run Strategy", "options['merge_trades'] = False\nstrategies = pd.Series(dtype=object)\nfor symbol in symbols:\n print(symbol, end=\" \")\n strategies[symbol] = strategy.Strategy(symbols[0], capital, start, end, options)\n strategies[symbol].run()\n options['merge_trades'] = True\n\n# View all columns and row in dataframe\npd.set_option('display.max_columns', None)\npd.set_option('display.max_rows', None)\n\n# View raw log\nstrategies[symbols[0]].rlog\n\n# View unmerged trade log\nstrategies[symbols[0]].tlog\n\n# View merged trade log\nstrategies[symbols[1]].tlog", "Summarize results - compare statistics between unmerged and merged view of trade log.", "metrics = strategies[symbol].stats.index\n\ndf = pf.optimizer_summary(strategies, metrics)\ndf" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
luizhsda10/Data-Science-Projectcs
Machine Learning/Recommender Systems/Recommender Systems with Python.ipynb
mit
[ "Recommender Systems\nIn this project, we build a movie recommender system. We read a dataset of movie ratings by users, then we select other movies that a specific user would be interesting in based on his previous choice.", "import numpy as np\nimport pandas as pd", "Read the data", "column_names = ['user_id', 'item_id', 'rating', 'timestamp']\ndf = pd.read_csv('u.data', sep='\\t', names=column_names)\n\ndf.head()", "Get movie titles", "movie_titles = pd.read_csv(\"Movie_Id_Titles\")\nmovie_titles.head()", "Merged dataframes", "df = pd.merge(df,movie_titles,on='item_id')\ndf.head()", "Exploratory Data Analysis", "import matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_style('white')\n%matplotlib inline", "Create a ratings dataframe with average rating and number of ratings", "df.groupby('title')['rating'].mean().sort_values(ascending=False).head()\n\ndf.groupby('title')['rating'].count().sort_values(ascending=False).head()\n\nratings = pd.DataFrame(df.groupby('title')['rating'].mean())\nratings.head()", "Number of ratings column", "ratings['num of ratings'] = pd.DataFrame(df.groupby('title')['rating'].count())\nratings.head()", "Data Visualization: Histogram", "plt.figure(figsize=(10,4))\nratings['num of ratings'].hist(bins=70)\n\nplt.figure(figsize=(10,4))\nratings['rating'].hist(bins=70)\n\nsns.jointplot(x='rating',y='num of ratings',data=ratings,alpha=0.5)", "Recommending Similar Movies", "moviemat = df.pivot_table(index='user_id',columns='title',values='rating')\nmoviemat.head()", "Most rated movies", "ratings.sort_values('num of ratings',ascending=False).head(10)", "We choose two movies: starwars, a sci-fi movie. And Liar Liar, a comedy.", "ratings.head()", "Now let's grab the user ratings for those two movies:", "starwars_user_ratings = moviemat['Star Wars (1977)']\nliarliar_user_ratings = moviemat['Liar Liar (1997)']\nstarwars_user_ratings.head()", "Using corrwith() method to get correlations between two pandas series:", "similar_to_starwars = moviemat.corrwith(starwars_user_ratings)\nsimilar_to_liarliar = moviemat.corrwith(liarliar_user_ratings)", "Clear data by removing NaN values and using a DataFrame instead of a series", "corr_starwars = pd.DataFrame(similar_to_starwars,columns=['Correlation'])\ncorr_starwars.dropna(inplace=True)\ncorr_starwars.head()\n\ncorr_starwars.sort_values('Correlation',ascending=False).head(10)", "Filtering out movies that have less than 100 reviews (this value was chosen based off the histogram). This is needed to get more accurate results", "corr_starwars = corr_starwars.join(ratings['num of ratings'])\ncorr_starwars.head()", "Now sort the values", "corr_starwars[corr_starwars['num of ratings']>100].sort_values('Correlation',ascending=False).head()", "The same for the comedy Liar Liar:", "corr_liarliar = pd.DataFrame(similar_to_liarliar,columns=['Correlation'])\ncorr_liarliar.dropna(inplace=True)\ncorr_liarliar = corr_liarliar.join(ratings['num of ratings'])\ncorr_liarliar[corr_liarliar['num of ratings']>100].sort_values('Correlation',ascending=False).head()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
statsmodels/statsmodels.github.io
v0.13.2/examples/notebooks/generated/mixed_lm_example.ipynb
bsd-3-clause
[ "Linear Mixed Effects Models", "%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nimport statsmodels.formula.api as smf\nfrom statsmodels.tools.sm_exceptions import ConvergenceWarning", "Note: The R code and the results in this notebook has been converted to markdown so that R is not required to build the documents. The R results in the notebook were computed using R 3.5.1 and lme4 1.1.\nipython\n%load_ext rpy2.ipython\nipython\n%R library(lme4)\narray(['lme4', 'Matrix', 'tools', 'stats', 'graphics', 'grDevices',\n 'utils', 'datasets', 'methods', 'base'], dtype='&lt;U9')\nComparing R lmer to statsmodels MixedLM\nThe statsmodels imputation of linear mixed models (MixedLM) closely follows the approach outlined in Lindstrom and Bates (JASA 1988). This is also the approach followed in the R package LME4. Other packages such as Stata, SAS, etc. should also be consistent with this approach, as the basic techniques in this area are mostly mature.\nHere we show how linear mixed models can be fit using the MixedLM procedure in statsmodels. Results from R (LME4) are included for comparison.\nHere are our import statements:\nGrowth curves of pigs\nThese are longitudinal data from a factorial experiment. The outcome variable is the weight of each pig, and the only predictor variable we will use here is \"time\". First we fit a model that expresses the mean weight as a linear function of time, with a random intercept for each pig. The model is specified using formulas. Since the random effects structure is not specified, the default random effects structure (a random intercept for each group) is automatically used.", "data = sm.datasets.get_rdataset(\"dietox\", \"geepack\").data\nmd = smf.mixedlm(\"Weight ~ Time\", data, groups=data[\"Pig\"])\nmdf = md.fit(method=[\"lbfgs\"])\nprint(mdf.summary())", "Here is the same model fit in R using LMER:\nipython\n%%R\ndata(dietox, package='geepack')\nipython\n%R print(summary(lmer('Weight ~ Time + (1|Pig)', data=dietox)))\n```\nLinear mixed model fit by REML ['lmerMod']\nFormula: Weight ~ Time + (1 | Pig)\n Data: dietox\nREML criterion at convergence: 4809.6\nScaled residuals: \n Min 1Q Median 3Q Max \n-4.7118 -0.5696 -0.0943 0.4877 4.7732 \nRandom effects:\n Groups Name Variance Std.Dev.\n Pig (Intercept) 40.39 6.356 \n Residual 11.37 3.371 \nNumber of obs: 861, groups: Pig, 72\nFixed effects:\n Estimate Std. Error t value\n(Intercept) 15.72352 0.78805 19.95\nTime 6.94251 0.03339 207.94\nCorrelation of Fixed Effects:\n (Intr)\nTime -0.275\n```\nNote that in the statsmodels summary of results, the fixed effects and random effects parameter estimates are shown in a single table. The random effect for animal is labeled \"Intercept RE\" in the statsmodels output above. In the LME4 output, this effect is the pig intercept under the random effects section.\nThere has been a lot of debate about whether the standard errors for random effect variance and covariance parameters are useful. In LME4, these standard errors are not displayed, because the authors of the package believe they are not very informative. While there is good reason to question their utility, we elected to include the standard errors in the summary table, but do not show the corresponding Wald confidence intervals.\nNext we fit a model with two random effects for each animal: a random intercept, and a random slope (with respect to time). This means that each pig may have a different baseline weight, as well as growing at a different rate. 
The formula specifies that \"Time\" is a covariate with a random coefficient. By default, formulas always include an intercept (which could be suppressed here using \"0 + Time\" as the formula).", "md = smf.mixedlm(\"Weight ~ Time\", data, groups=data[\"Pig\"], re_formula=\"~Time\")\nmdf = md.fit(method=[\"lbfgs\"])\nprint(mdf.summary())", "Here is the same model fit using LMER in R:\nipython\n%R print(summary(lmer(\"Weight ~ Time + (1 + Time | Pig)\", data=dietox)))\n```\nLinear mixed model fit by REML ['lmerMod']\nFormula: Weight ~ Time + (1 + Time | Pig)\n Data: dietox\nREML criterion at convergence: 4434.1\nScaled residuals: \n Min 1Q Median 3Q Max \n-6.4286 -0.5529 -0.0416 0.4841 3.5624 \nRandom effects:\n Groups Name Variance Std.Dev. Corr\n Pig (Intercept) 19.493 4.415 \n Time 0.416 0.645 0.10\n Residual 6.038 2.457 \nNumber of obs: 861, groups: Pig, 72\nFixed effects:\n Estimate Std. Error t value\n(Intercept) 15.73865 0.55012 28.61\nTime 6.93901 0.07982 86.93\nCorrelation of Fixed Effects:\n (Intr)\nTime 0.006 \n```\nThe random intercept and random slope are only weakly correlated $(0.294 / \\sqrt{19.493 * 0.416} \\approx 0.1)$. So next we fit a model in which the two random effects are constrained to be uncorrelated:", "0.294 / (19.493 * 0.416) ** 0.5\n\nmd = smf.mixedlm(\"Weight ~ Time\", data, groups=data[\"Pig\"], re_formula=\"~Time\")\nfree = sm.regression.mixed_linear_model.MixedLMParams.from_components(\n np.ones(2), np.eye(2)\n)\n\nmdf = md.fit(free=free, method=[\"lbfgs\"])\nprint(mdf.summary())", "The likelihood drops by 0.3 when we fix the correlation parameter to 0. Comparing 2 x 0.3 = 0.6 to the chi^2 1 df reference distribution suggests that the data are very consistent with a model in which this parameter is equal to 0.\nHere is the same model fit using LMER in R (note that here R is reporting the REML criterion instead of the likelihood, where the REML criterion is twice the log likelihood):\nipython\n%R print(summary(lmer(\"Weight ~ Time + (1 | Pig) + (0 + Time | Pig)\", data=dietox)))\n```\nLinear mixed model fit by REML ['lmerMod']\nFormula: Weight ~ Time + (1 | Pig) + (0 + Time | Pig)\n Data: dietox\nREML criterion at convergence: 4434.7\nScaled residuals: \n Min 1Q Median 3Q Max \n-6.4281 -0.5527 -0.0405 0.4840 3.5661 \nRandom effects:\n Groups Name Variance Std.Dev.\n Pig (Intercept) 19.8404 4.4543\n Pig.1 Time 0.4234 0.6507\n Residual 6.0282 2.4552\nNumber of obs: 861, groups: Pig, 72\nFixed effects:\n Estimate Std. Error t value\n(Intercept) 15.73875 0.55444 28.39\nTime 6.93899 0.08045 86.25\nCorrelation of Fixed Effects:\n (Intr)\nTime -0.086\n```\nSitka growth data\nThis is one of the example data sets provided in the LMER R library. The outcome variable is the size of the tree, and the covariate used here is a time value. The data are grouped by tree.", "data = sm.datasets.get_rdataset(\"Sitka\", \"MASS\").data\nendog = data[\"size\"]\ndata[\"Intercept\"] = 1\nexog = data[[\"Intercept\", \"Time\"]]", "Here is the statsmodels LME fit for a basic model with a random intercept. We are passing the endog and exog data directly to the LME init function as arrays. 
Also note that endog_re is specified explicitly in argument 4 as a random intercept (although this would also be the default if it were not specified).", "md = sm.MixedLM(endog, exog, groups=data[\"tree\"], exog_re=exog[\"Intercept\"])\nmdf = md.fit()\nprint(mdf.summary())", "Here is the same model fit in R using LMER:\nipython\n%R\ndata(Sitka, package=\"MASS\")\nprint(summary(lmer(\"size ~ Time + (1 | tree)\", data=Sitka)))\n```\nLinear mixed model fit by REML ['lmerMod']\nFormula: size ~ Time + (1 | tree)\n Data: Sitka\nREML criterion at convergence: 164.8\nScaled residuals: \n Min 1Q Median 3Q Max \n-2.9979 -0.5169 0.1576 0.5392 4.4012 \nRandom effects:\n Groups Name Variance Std.Dev.\n tree (Intercept) 0.37451 0.612 \n Residual 0.03921 0.198 \nNumber of obs: 395, groups: tree, 79\nFixed effects:\n Estimate Std. Error t value\n(Intercept) 2.2732443 0.0878955 25.86\nTime 0.0126855 0.0002654 47.80\nCorrelation of Fixed Effects:\n (Intr)\nTime -0.611\n```\nWe can now try to add a random slope. We start with R this time. From the code and output below we see that the REML estimate of the variance of the random slope is nearly zero.\nipython\n%R print(summary(lmer(\"size ~ Time + (1 + Time | tree)\", data=Sitka)))\n```\nLinear mixed model fit by REML ['lmerMod']\nFormula: size ~ Time + (1 + Time | tree)\n Data: Sitka\nREML criterion at convergence: 153.4\nScaled residuals: \n Min 1Q Median 3Q Max \n-2.7609 -0.5173 0.1188 0.5270 3.5466 \nRandom effects:\n Groups Name Variance Std.Dev. Corr \n tree (Intercept) 2.217e-01 0.470842 \n Time 3.288e-06 0.001813 -0.17\n Residual 3.634e-02 0.190642 \nNumber of obs: 395, groups: tree, 79\nFixed effects:\n Estimate Std. Error t value\n(Intercept) 2.273244 0.074655 30.45\nTime 0.012686 0.000327 38.80\nCorrelation of Fixed Effects:\n (Intr)\nTime -0.615\nconvergence code: 0\nModel failed to converge with max|grad| = 0.793203 (tol = 0.002, component 1)\nModel is nearly unidentifiable: very large eigenvalue\n - Rescale variables?\n```\nIf we run this in statsmodels LME with defaults, we see that the variance estimate is indeed very small, which leads to a warning about the solution being on the boundary of the parameter space. The regression slopes agree very well with R, but the likelihood value is much higher than that returned by R.", "exog_re = exog.copy()\nmd = sm.MixedLM(endog, exog, data[\"tree\"], exog_re)\nmdf = md.fit()\nprint(mdf.summary())", "We can further explore the random effects structure by constructing plots of the profile likelihoods. We start with the random intercept, generating a plot of the profile likelihood from 0.1 units below to 0.1 units above the MLE. Since each optimization inside the profile likelihood generates a warning (due to the random slope variance being close to zero), we turn off the warnings here.", "import warnings\n\nwith warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\")\n likev = mdf.profile_re(0, \"re\", dist_low=0.1, dist_high=0.1)", "Here is a plot of the profile likelihood function. We multiply the log-likelihood difference by 2 to obtain the usual $\\chi^2$ reference distribution with 1 degree of freedom.", "import matplotlib.pyplot as plt\n\nplt.figure(figsize=(10, 8))\nplt.plot(likev[:, 0], 2 * likev[:, 1])\nplt.xlabel(\"Variance of random intercept\", size=17)\nplt.ylabel(\"-2 times profile log likelihood\", size=17)", "Here is a plot of the profile likelihood function. 
The profile likelihood plot shows that the MLE of the random slope variance parameter is a very small positive number, and that there is low uncertainty in this estimate.", "re = mdf.cov_re.iloc[1, 1]\nwith warnings.catch_warnings():\n # Parameter is often on the boundary\n warnings.simplefilter(\"ignore\", ConvergenceWarning)\n likev = mdf.profile_re(1, \"re\", dist_low=0.5 * re, dist_high=0.8 * re)\n\nplt.figure(figsize=(10, 8))\nplt.plot(likev[:, 0], 2 * likev[:, 1])\nplt.xlabel(\"Variance of random slope\", size=17)\nlbl = plt.ylabel(\"-2 times profile log likelihood\", size=17)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]