| repo_name (string, 6-77 chars) | path (string, 8-215 chars) | license (string, 15 classes) | cells (list) | types (list) |
|---|---|---|---|---|
OceanPARCELS/parcels
|
parcels/examples/tutorial_interpolation.ipynb
|
mit
|
[
"Tutorial on Parcels tracer interpolation methods\nParcels support a range of different interpolation methods for tracers, such as temperature. Here, we will show how these work, in an idealised example. \nWe first import the relevant modules",
"%matplotlib inline\nfrom parcels import FieldSet, ParticleSet, JITParticle, Variable, AdvectionRK4\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import cm\nimport xarray as xr",
"We create a small 2D grid where P is a tracer that we want to interpolate. In each grid cell, P has a random value between 0.1 and 1.1. We then set P[1,1] to 0, which for Parcels specifies that this is a land cell",
"dims = [5, 4]\ndx, dy = 1./dims[0], 1./dims[1]\ndimensions = {'lat': np.linspace(0., 1., dims[0], dtype=np.float32),\n 'lon': np.linspace(0., 1., dims[1], dtype=np.float32)}\ndata = {'U': np.zeros(dims, dtype=np.float32),\n 'V': np.zeros(dims, dtype=np.float32),\n 'P': np.random.rand(dims[0], dims[1])+0.1}\ndata['P'][1, 1] = 0.\nfieldset = FieldSet.from_data(data, dimensions, mesh='flat')",
"We create a Particle class that can sample this field",
"class SampleParticle(JITParticle):\n p = Variable('p', dtype=np.float32)\n\ndef SampleP(particle, fieldset, time):\n particle.p = fieldset.P[time, particle.depth, particle.lat, particle.lon]",
"Now, we perform four different interpolation on P, which we can control by setting fieldset.P.interp_method. Note that this can always be done after the FieldSet creation. We store the results of each interpolation method in an entry in the dictionary pset.",
"pset = {}\nfor p_interp in ['linear', 'linear_invdist_land_tracer', 'nearest', 'cgrid_tracer']:\n fieldset.P.interp_method = p_interp # setting the interpolation method for fieldset.P\n\n xv, yv = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8))\n pset[p_interp] = ParticleSet(fieldset, pclass=SampleParticle, lon=xv.flatten(), lat=yv.flatten())\n pset[p_interp].execute(SampleP, endtime=1, dt=1)",
"And then we can show each of the four interpolation methods, by plotting the interpolated values on the Particle locations (circles) on top of the Field values (background colors)",
"fig, ax = plt.subplots(1, 4, figsize=(18, 5))\nfor i, p in enumerate(pset.keys()):\n data = fieldset.P.data[0, :, :]\n data[1, 1] = np.nan\n x = np.linspace(-dx/2, 1+dx/2, dims[0]+1)\n y = np.linspace(-dy/2, 1+dy/2, dims[1]+1)\n if p == 'cgrid_tracer':\n for lat in fieldset.P.grid.lat:\n ax[i].axhline(lat, color='k', linestyle='--')\n for lon in fieldset.P.grid.lon:\n ax[i].axvline(lon, color='k', linestyle='--')\n ax[i].pcolormesh(y, x, data, vmin=0.1, vmax=1.1)\n ax[i].scatter(pset[p].lon, pset[p].lat, c=pset[p].p, edgecolors='k', s=50, vmin=0.1, vmax=1.1)\n xp, yp = np.meshgrid(fieldset.P.lon, fieldset.P.lat)\n ax[i].plot(xp, yp, 'kx')\n ax[i].set_title(\"Using interp_method='%s'\" % p)\nplt.show()",
"The white box is here the 'land' point where the tracer is set to zero and the crosses are the locations of the grid points. As you see, the interpolated value is always equal to the field value if the particle is exactly on the grid point (circles on crosses). \nFor interp_method='nearest', the particle values are the same for all particles in a grid cell. They are also the same for interp_method='cgrid_tracer', but the grid cells have then shifted. That is because in a C-grid, the tracer grid cell is on the top-right corner (black dashed lines in right-most panel).\nFor interp_method='linear_invdist_land_tracer', we see that values are the same as interp_method='linear' for grid cells that don't border the land point. For grid cells that do border the land cell, the linear_invdist_land_tracer interpolation method gives higher values, as also shown in the difference plot below",
"plt.scatter(pset['linear'].lon, pset['linear'].lat, c=pset['linear_invdist_land_tracer'].p-pset['linear'].p,\n edgecolors='k', s=50, cmap=cm.bwr, vmin=-0.25, vmax=0.25)\nplt.colorbar()\nplt.title(\"Difference between 'interp_method=linear' and 'interp_method=linear_invdist_land_tracer'\")\nplt.show()",
"So in summary, Parcels has four different interpolation schemes for tracers:\n\ninterp_method=linear: compute linear interpolation\ninterp_method=linear_invdist_land_tracer: compute linear interpolation except near land (where field value is zero). In that case, inverse distance weighting interpolation is computed, weighting by squares of the distance.\ninterp_method=nearest: return nearest field value\ninterp_method=cgrid_tracer: return nearest field value supposing C cells\n\nInterpolation and sampling on time-varying Fields\nNote that there is an important subtlety in Sampling a time-evolving Field. As noted in this Issue, interpolation of a Field only gives the correct answer when that field is interpolated at time+particle.dt and the Sampling Kernel is concatenated after the Advection Kernel.\nLet's show how this works with a simple idealised Field P given by the equation",
"def calc_p(t, y, x):\n return 10*t+x+0.2*y",
"Let's define a simple FieldSet with two timesteps, a 0.5 m/s zonal velocity and no meridional velocity.",
"dims = [2, 4, 5]\ndimensions = {'lon': np.linspace(0., 1., dims[2], dtype=np.float32),\n 'lat': np.linspace(0., 1., dims[1], dtype=np.float32),\n 'time': np.arange(dims[0], dtype=np.float32)}\n\np = np.zeros(dims, dtype=np.float32)\nfor i, x in enumerate(dimensions['lon']):\n for j, y in enumerate(dimensions['lat']):\n for n, t in enumerate(dimensions['time']):\n p[n, j, i] = calc_p(t, y, x)\n\ndata = {'U': 0.5*np.ones(dims, dtype=np.float32),\n 'V': np.zeros(dims, dtype=np.float32),\n 'P': p}\nfieldset = FieldSet.from_data(data, dimensions, mesh='flat')",
"Now create four particles and a Sampling class so we can sample the Field P",
"xv, yv = np.meshgrid(np.arange(0, 1, 0.5), np.arange(0, 1, 0.5))\nclass SampleParticle(JITParticle):\n p = Variable('p', dtype=np.float32)\npset = ParticleSet(fieldset, pclass=SampleParticle, lon=xv.flatten(), lat=yv.flatten())",
"The key now is that we need to create a sampling Kernel where the Field P is sampled at time+particle.dt and that we concatenate this kernel after the AdvectionRK4 Kernel",
"def SampleP(particle, fieldset, time):\n \"\"\" offset sampling by dt\"\"\"\n particle.p = fieldset.P[time+particle.dt, particle.depth, particle.lat, particle.lon]\n\nkernels = AdvectionRK4 + pset.Kernel(SampleP) # Note that the order of concatenation matters here!",
"We can now run these kernels on the ParticleSet",
"pfile = pset.ParticleFile(\"interpolation_offset.nc\", outputdt=1)\npset.execute(kernels, endtime=1, dt=1, output_file=pfile)\npfile.close()",
"And we can check whether the Particle.p values indeed are consistent with the calc_p() values",
"for p in pset:\n assert np.isclose(p.p, calc_p(p.time, p.lat, p.lon))",
"And the same for the netcdf file (note that we need to convert time from nanoseconds to seconds)",
"ds = xr.open_dataset(\"interpolation_offset.nc\").isel(obs=1)\nfor i in range(len(ds['p'])):\n assert np.isclose(ds['p'].values[i], calc_p(float(ds['time'].values[i])/1e9, ds['lat'].values[i], ds['lon'].values[i]))",
"As a bit of background for why sampling needs to be done this way: the reason is that the particles are already moved within the AdvectionRK4 kernel, but the time is not updated yet until all concatenated kernels are completed."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
astroNN/astroNN
|
notebooks/1_notmnist.ipynb
|
mit
|
[
"Deep Learning\nAssignment 1\nThe objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.\nThis notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.",
"# Third-party packages\nimport h5py\nimport matplotlib.pyplot as pl\n%matplotlib inline\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\n\n# this package\nfrom astronn.data import fetch_notMNIST",
"First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts in a series of 28 by 28 images. The labels simply identify the letter presented in each image (and are limited to A-J, so, 10 classes). The training set and test set have about 500000 and 19000 image-label pairs, respectively. Even with these sizes, it should be possible to train models quickly on any machine.\nNote: This could take some time! You are about to download a ~1.7 GB file. Go get some coffee.",
"cache_file = fetch_notMNIST()",
"First, we'll print some information about the data:",
"with h5py.File(cache_file, 'r') as f:\n for name,group in f.items():\n print(\"{}:\".format(name))\n \n for k,v in group.items():\n print(\"\\t {} {}\".format(k,v.shape))",
"Problem 1\nLet's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. \nPlot a 3 by 3 grid of sample images from the test set and set the title of each panel to the character name (use the labels).\nHint: use matplotlib.pyplot.imshow()",
"with h5py.File(cache_file, 'r') as f:\n pass",
"Problem 2\nNow display the mean of all images from each class individually and again set the title of each panel to the corresponding character name.",
"with h5py.File(cache_file, 'r') as f:\n pass",
"Problem 3\nNext, we'll randomize the data. It's important to randomize both the train and test data sets. Verify that the data is still labeled correctly after randomization.",
"def randomize(data, labels):\n pass\n\nwith h5py.File(cache_file, 'r') as f:\n train_dataset, train_labels = randomize(f['train']['images'][:], f['train']['labels'][:])\n test_dataset, test_labels = randomize(f['test']['images'][:], f['test']['labels'][:])",
"Problem 4\nAnother check: we expect the data to be balanced across classes. Verify that.\n\n\nProblem 5\nBy construction, this dataset might contain a lot of overlapping samples (identical images), including training data that's also contained in the validation and test set. Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.\n\nHow much overlap is there between training, validation and test samples?\nWhat about near duplicates between datasets? (images that are almost identical)\nCreate a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.\n\n\n\nProblem 6\nLet's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.\nTrain a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.\nOptional question: train an off-the-shelf model on all the data!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
axbaretto/beam
|
examples/notebooks/tour-of-beam/windowing.ipynb
|
apache-2.0
|
[
"#@title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.",
"Windowing -- Tour of Beam\nSometimes, we want to aggregate data, like GroupByKey or Combine, only at certain intervals, like hourly or daily, instead of processing the entire PCollection of data only once.\nWe might want to emit a moving average as we're processing data.\nMaybe we want to analyze the user experience for a certain task in a web app, it would be nice to get the app events by sessions of activity.\nOr we could be running a streaming pipeline, and there is no end to the data, so how can we aggregate data?\nWindows in Beam allow us to process only certain data intervals at a time.\nIn this notebook, we go through different ways of windowing our pipeline.\nLets begin by installing apache-beam.",
"# Install apache-beam with pip.\n!pip install --quiet apache-beam",
"First, lets define some helper functions to simplify the rest of the examples.\nWe have a transform to help us analyze an element alongside its window information, and we have another transform to help us analyze how many elements landed into each window.\nWe use a custom DoFn\nto access that information.\nYou don't need to understand these, you just need to know they exist 🙂.",
"import apache_beam as beam\n\ndef human_readable_window(window) -> str:\n \"\"\"Formats a window object into a human readable string.\"\"\"\n if isinstance(window, beam.window.GlobalWindow):\n return str(window)\n return f'{window.start.to_utc_datetime()} - {window.end.to_utc_datetime()}'\n\nclass PrintElementInfo(beam.DoFn):\n \"\"\"Prints an element with its Window information.\"\"\"\n def process(self, element, timestamp=beam.DoFn.TimestampParam, window=beam.DoFn.WindowParam):\n print(f'[{human_readable_window(window)}] {timestamp.to_utc_datetime()} -- {element}')\n yield element\n\n@beam.ptransform_fn\ndef PrintWindowInfo(pcollection):\n \"\"\"Prints the Window information with how many elements landed in that window.\"\"\"\n class PrintCountsInfo(beam.DoFn):\n def process(self, num_elements, window=beam.DoFn.WindowParam):\n print(f'>> Window [{human_readable_window(window)}] has {num_elements} elements')\n yield num_elements\n\n return (\n pcollection\n | 'Count elements per window' >> beam.combiners.Count.Globally().without_defaults()\n | 'Print counts info' >> beam.ParDo(PrintCountsInfo())\n )",
"Now lets create some data to use in the examples.\nWindows define data intervals based on time, so we need to tell Apache Beam a timestamp for each element.\nWe define a PTransform for convenience, so we can attach the timestamps automatically.\nApache Beam requires us to provide the timestamp as Unix time, which is a way to represent a date and time as the number of seconds since January 1st, 1970.\nFor our data, lets analyze some events about the seasons and moon phases for the year 2021, which might be useful for a gardening project.\nTo attach timestamps to each element, we can Map each element and return a TimestmpedValue.",
"import time\nfrom apache_beam.options.pipeline_options import PipelineOptions\n\ndef to_unix_time(time_str: str, time_format='%Y-%m-%d %H:%M:%S') -> int:\n \"\"\"Converts a time string into Unix time.\"\"\"\n time_tuple = time.strptime(time_str, time_format)\n return int(time.mktime(time_tuple))\n\n@beam.ptransform_fn\n@beam.typehints.with_input_types(beam.pvalue.PBegin)\n@beam.typehints.with_output_types(beam.window.TimestampedValue)\ndef AstronomicalEvents(pipeline):\n return (\n pipeline\n | 'Create data' >> beam.Create([\n ('2021-03-20 03:37:00', 'March Equinox 2021'),\n ('2021-04-26 22:31:00', 'Super full moon'),\n ('2021-05-11 13:59:00', 'Micro new moon'),\n ('2021-05-26 06:13:00', 'Super full moon, total lunar eclipse'),\n ('2021-06-20 22:32:00', 'June Solstice 2021'),\n ('2021-08-22 07:01:00', 'Blue moon'),\n ('2021-09-22 14:21:00', 'September Equinox 2021'),\n ('2021-11-04 15:14:00', 'Super new moon'),\n ('2021-11-19 02:57:00', 'Micro full moon, partial lunar eclipse'),\n ('2021-12-04 01:43:00', 'Super new moon'),\n ('2021-12-18 10:35:00', 'Micro full moon'),\n ('2021-12-21 09:59:00', 'December Solstice 2021'),\n ])\n | 'With timestamps' >> beam.MapTuple(\n lambda timestamp, element:\n beam.window.TimestampedValue(element, to_unix_time(timestamp))\n )\n )\n\n# Lets see how the data looks like.\nbeam_options = PipelineOptions(flags=[], type_check_additional='all')\nwith beam.Pipeline(options=beam_options) as pipeline:\n (\n pipeline\n | 'Astronomical events' >> AstronomicalEvents()\n | 'Print element' >> beam.Map(print)\n )",
"ℹ️ After running this, it looks like the timestamps disappeared!\nThey're actually still implicitly part of the element, just like the windowing information.\nIf we need to access it, we can do so via a custom DoFn.\nAggregation transforms use each element's timestamp along with the windowing we specified to create windows of elements.\n\nGlobal window\nAll pipelines use the GlobalWindow by default.\nThis is a single window that covers the entire PCollection.\nIn many cases, especially for batch pipelines, this is what we want since we want to analyze all the data that we have.\n\nℹ️ GlobalWindow is not very useful in a streaming pipeline unless you only need element-wise transforms.\nAggregations, like GroupByKey and Combine, need to process the entire window, but a streaming pipeline has no end, so they would never finish.",
"import apache_beam as beam\n\n# All elements fall into the GlobalWindow by default.\nwith beam.Pipeline() as pipeline:\n (\n pipeline\n | 'Astrolonomical events' >> AstronomicalEvents()\n | 'Print element info' >> beam.ParDo(PrintElementInfo())\n | 'Print window info' >> PrintWindowInfo()\n )",
"Fixed time windows\nIf we want to analyze our data hourly, daily, monthly, etc. We might want to create evenly spaced intervals.\nFixedWindows\nallow us to create fixed-sized windows.\nWe only need to specify the window size in seconds.\nIn Python, we can use timedelta\nto help us do the conversion of minutes, hours, or days for us.\n\nℹ️ Some time deltas like a month cannot be so easily converted into seconds, since a month can have from 28 to 31 days.\nSometimes using an estimate like 30 days in a month is enough.\n\nWe must use the WindowInto\ntransform to apply the kind of window we want.",
"import apache_beam as beam\nfrom datetime import timedelta\n\n# Fixed-sized windows of approximately 3 months.\nwindow_size = timedelta(days=3*30).total_seconds() # in seconds\nprint(f'window_size: {window_size} seconds')\n\nwith beam.Pipeline() as pipeline:\n elements = (\n pipeline\n | 'Astronomical events' >> AstronomicalEvents()\n | 'Fixed windows' >> beam.WindowInto(beam.window.FixedWindows(window_size))\n | 'Print element info' >> beam.ParDo(PrintElementInfo())\n | 'Print window info' >> PrintWindowInfo()\n )",
"Sliding time windows\nMaybe we want a fixed-sized window, but we don't want to wait until a window finishes so we can start the new one.\nWe might want to calculate a moving average.\nFor example, lets say we want to analyze our data for the last three months, but we want to have a monthly report.\nIn other words, we want windows at a monthly frequency, but each window should cover the last three months.\nSliding windows\nallow us to do just that.\nWe need to specify the window size in seconds just like with FixedWindows. We also need to specify a window period in seconds, which is how often we want to emit each window.",
"import apache_beam as beam\nfrom datetime import timedelta\n\n# Sliding windows of approximately 3 months every month.\nwindow_size = timedelta(days=3*30).total_seconds() # in seconds\nwindow_period = timedelta(days=30).total_seconds() # in seconds\nprint(f'window_size: {window_size} seconds')\nprint(f'window_period: {window_period} seconds')\n\nwith beam.Pipeline() as pipeline:\n (\n pipeline\n | 'Astronomical events' >> AstronomicalEvents()\n | 'Sliding windows' >> beam.WindowInto(\n beam.window.SlidingWindows(window_size, window_period)\n )\n | 'Print element info' >> beam.ParDo(PrintElementInfo())\n | 'Print window info' >> PrintWindowInfo()\n )",
"A thing to note with SlidingWindows is that one element might be processed multiple times because it might overlap in more than one window.\nIn our example, the \"processing\" is done by PrintElementInfo which simply prints the element with its window information. For windows of three months every month, each element is processed three times, one time per window.\nIn many cases, if we're just doing simple element-wise operations, this isn't generally an issue.\nBut for more resource-intensive transformations, it might be a good idea to perform those transformations before doing the windowing.",
"import apache_beam as beam\nfrom datetime import timedelta\n\n# Sliding windows of approximately 3 months every month.\nwindow_size = timedelta(days=3*30).total_seconds() # in seconds\nwindow_period = timedelta(days=30).total_seconds() # in seconds\nprint(f'window_size: {window_size} seconds')\nprint(f'window_period: {window_period} seconds')\n\nwith beam.Pipeline() as pipeline:\n (\n pipeline\n | 'Astronomical events' >> AstronomicalEvents()\n #------\n # ℹ️ Here we're processing / printing the data before windowing.\n | 'Print element info' >> beam.ParDo(PrintElementInfo())\n | 'Sliding windows' >> beam.WindowInto(\n beam.window.SlidingWindows(window_size, window_period)\n )\n #------\n | 'Print window info' >> PrintWindowInfo()\n )",
"Note that by doing the windowing after the processing, we only process / print the elments once, but the windowing afterwards is the same.\nSession windows\nMaybe we don't want regular windows, but instead, have the windows reflect periods where activity happened.\nSessions\nallow us to create those kinds of windows.\nWe now have to specify a gap size in seconds, which is the maximum number of seconds of inactivity to close a session window.\nFor example, if we specify a gap size of 30 days. The first event would open a new session window since there are no already opened windows. If the next event happens within the next 30 days or less, like 20 days after the previous event, the session window extends and covers that as well. If there are no new events for the next 30 days, the session window closes and is emitted.",
"import apache_beam as beam\nfrom datetime import timedelta\n\n# Sessions divided by approximately 1 month gaps.\ngap_size = timedelta(days=30).total_seconds() # in seconds\nprint(f'gap_size: {gap_size} seconds')\n\nwith beam.Pipeline() as pipeline:\n (\n pipeline\n | 'Astronomical events' >> AstronomicalEvents()\n | 'Session windows' >> beam.WindowInto(beam.window.Sessions(gap_size))\n | 'Print element info' >> beam.ParDo(PrintElementInfo())\n | 'Print window info' >> PrintWindowInfo()\n )",
"What's next?\n\nWindowing -- learn more about windowing in the Beam Programming Guide.\nTriggers -- learn about triggers in the Beam Programming Guide.\nTransform catalog --\n check out all the available transforms.\nMobile gaming example --\n learn more about windowing, triggers, and streaming through a complete example pipeline.\nRunners --\n check the available runners, their capabilities, and how to run your pipeline in them."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jmeyers314/batoid
|
notebook/DESI model details.ipynb
|
bsd-2-clause
|
[
"Dark Energy Spectroscopic Instrument\nSome calculations to assist with building the DESI model from an existing ZEMAX model and other sources.\nYou can safely ignore this if you just want to use the model.",
"import batoid\nimport numpy as np\nimport yaml\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n%matplotlib inline",
"Load the DESI model:",
"fiducial_telescope = batoid.Optic.fromYaml(\"DESI.yaml\")",
"Corrector Internal Baffles\nSetup YAML to preserve dictionary order and trunctate distances (in meters) to 5 digits:",
"import collections\n\ndef dict_representer(dumper, data):\n return dumper.represent_dict(data.items())\n\nyaml.Dumper.add_representer(collections.OrderedDict, dict_representer)\n\ndef float_representer(dumper, value):\n return dumper.represent_scalar(u'tag:yaml.org,2002:float', f'{value:.5f}')\n\nyaml.Dumper.add_representer(float, float_representer)",
"Define the corrector internal baffle apertures, from DESI-4103-v1. These have been checked against DESI-4037-v6, with the extra baffle between ADC1 and ADC2 added:",
"# baffle z-coordinates relative to FP in mm from DESI-4103-v1, checked\n# against DESI-4037-v6 (and with extra ADC baffle added).\nZBAFFLE = np.array([\n 2302.91, 2230.29, 1916.86, 1823.57, 1617.37, 1586.76, 1457.88, 1349.45, 1314.68,\n 1232.06, 899.67, 862.08, 568.81, 483.84, 415.22]) \n# baffle radii in mm from DESI-4103-v1, checked\n# against DESI-4037-v6 (and with extra ADC baffle added).\nRBAFFLE = np.array([\n 558.80, 544.00, 447.75, 417.00, 376.00, 376.00, 378.00, 378.00, 395.00,\n 403.00, 448.80, 453.70, 492.00, 501.00, 496.00])",
"Calculate batoid Baffle surfaces for the corrector. These are mechanically planar, but that would put their (planar) center inside a lens, breaking the sequential tracing model. We fix this by use spherical baffle surfaces that have the same apertures. This code was originally used to read a batoid model without baffles, but also works if the baffles are already added.",
"def baffles(nindent=10):\n indent = ' ' * nindent\n # Measure z from C1 front face in m.\n zbaffle = 1e-3 * (2425.007 - ZBAFFLE)\n # Convert r from mm to m.\n rbaffle = 1e-3 * RBAFFLE\n # By default, all baffles are planar.\n nbaffles = len(zbaffle)\n baffles = []\n for i in range(nbaffles):\n baffle = collections.OrderedDict()\n baffle['type'] = 'Baffle'\n baffle['name'] = f'B{i+1}'\n baffle['coordSys'] = {'z': float(zbaffle[i])}\n baffle['surface'] = {'type': 'Plane'}\n baffle['obscuration'] = {'type': 'ClearCircle', 'radius': float(rbaffle[i])}\n baffles.append(baffle)\n # Loop over corrector lenses.\n corrector = fiducial_telescope['DESI.Hexapod.Corrector']\n lenses = 'C1', 'C2', 'ADC1rotator.ADC1', 'ADC2rotator.ADC2', 'C3', 'C4'\n for lens in lenses:\n obj = corrector['Corrector.' + lens]\n assert isinstance(obj, batoid.optic.Lens)\n front, back = obj.items[0], obj.items[1]\n fTransform = batoid.CoordTransform(front.coordSys, corrector.coordSys)\n bTransform = batoid.CoordTransform(back.coordSys, corrector.coordSys)\n _, _, zfront = fTransform.applyForwardArray(0, 0, 0)\n _, _, zback = bTransform.applyForwardArray(0, 0, 0)\n # Find any baffles \"inside\" this lens.\n inside = (zbaffle >= zfront) & (zbaffle <= zback)\n if not any(inside):\n continue\n inside = np.where(inside)[0]\n for k in inside:\n baffle = baffles[k]\n r = rbaffle[k]\n # Calculate sag at (x,y)=(0,r) to avoid effect of ADC rotation about y.\n sagf, sagb = front.surface.sag(0, r), back.surface.sag(0, r)\n _, _, zf = fTransform.applyForwardArray(0, r, sagf)\n _, _, zb = bTransform.applyForwardArray(0, r, sagb)\n if zf > zbaffle[k]:\n print(f'{indent}# Move B{k+1} in front of {obj.name} and make spherical to keep model sequential.')\n assert isinstance(front.surface, batoid.Sphere)\n baffle['surface'] = {'type': 'Sphere', 'R': front.surface.R}\n baffle['coordSys']['z'] = float(zfront - (zf - zbaffle[k]))\n elif zbaffle[k] > zb:\n print(f'{indent}# Move B{k+1} behind {obj.name} and make spherical to keep model sequential.')\n assert isinstance(back.surface, batoid.Sphere)\n baffle['surface'] = {'type': 'Sphere', 'R': back.surface.R}\n baffle['coordSys']['z'] = float(zback + (zbaffle[k] - zb))\n else:\n print(f'Cannot find a solution for B{k+1} inside {obj.name}!')\n\n lines = yaml.dump(baffles)\n for line in lines.split('\\n'):\n print(indent + line)\n\nbaffles()",
"Validate that the baffle edges in the final model have the correct apertures:",
"def validate_baffles():\n corrector = fiducial_telescope['DESI.Hexapod.Corrector']\n for i in range(len(ZBAFFLE)):\n baffle = corrector[f'Corrector.B{i+1}']\n # Calculate surface z at origin in corrector coordinate system.\n _, _, z = batoid.CoordTransform(baffle.coordSys, corrector.coordSys).applyForwardArray(0, 0, 0)\n # Calculate surface z at (r,0) in corrector coordinate system.\n sag = baffle.surface.sag(1e-3 * RBAFFLE[i], 0)\n z += sag\n # Measure from FP in mm.\n z = np.round(2425.007 - 1e3 * z, 2)\n assert z == ZBAFFLE[i], baffle.name\n \nvalidate_baffles()",
"Corrector Cage and Spider\nCalculate simplified vane coordinates using parameters from DESI-4110-v1:",
"def spider(dmin=1762, dmax=4940.3, ns_angle=77, widths=[28.5, 28.5, 60., 19.1],\n wart_r=958, wart_dth=6, wart_w=300):\n # Vane order is [NE, SE, SW, NW], with N along -y and E along +x.\n fig, ax = plt.subplots(figsize=(10, 10))\n ax.add_artist(plt.Circle((0, 0), 0.5 * dmax, color='yellow'))\n ax.add_artist(plt.Circle((0, 0), 0.5 * dmin, color='gray'))\n ax.set_xlim(-0.5 * dmax, 0.5 * dmax)\n ax.set_ylim(-0.5 * dmax, 0.5 * dmax)\n \n # Place outer vertices equally along the outer ring at NE, SE, SW, NW.\n xymax = 0.5 * dmax * np.array([[1, -1], [1, 1], [-1, 1], [-1, -1]]) / np.sqrt(2)\n # Calculate inner vertices so that the planes of the NE and NW vanes intersect\n # with an angle of ns_angle (same for the SE and SW planes).\n angle = np.deg2rad(ns_angle)\n x = xymax[1, 0]\n dx = xymax[1, 1] * np.tan(0.5 * angle)\n xymin = np.array([[x - dx, 0], [x - dx, 0], [-x+dx, 0], [-x+dx, 0]])\n for i in range(4):\n plt.plot([xymin[i,0], xymax[i,0]], [xymin[i,1], xymax[i,1]], '-', lw=0.1 * widths[i])\n\n # Calculate batoid rectangle params for the vanes.\n xy0 = 0.5 * (xymin + xymax)\n heights = np.sqrt(np.sum((xymax - xymin) ** 2, axis=1))\n\n # Calculate wart rectangle coords.\n wart_h = 2 * (wart_r - 0.5 * dmin)\n wart_dth = np.deg2rad(wart_dth)\n wart_xy = 0.5 * dmin * np.array([-np.sin(wart_dth), np.cos(wart_dth)])\n plt.plot(*wart_xy, 'rx', ms=25)\n # Print batoid config.\n indent = ' ' * 10\n print(f'{indent}-\\n{indent} type: ClearAnnulus')\n print(f'{indent} inner: {np.round(0.5e-3 * dmin, 5)}')\n print(f'{indent} outer: {np.round(0.5e-3 * dmax, 5)}')\n for i in range(4):\n print(f'{indent}-\\n{indent} type: ObscRectangle')\n print(f'{indent} x: {np.round(1e-3 * xy0[i, 0], 5)}')\n print(f'{indent} y: {np.round(1e-3 * xy0[i, 1], 5)}')\n print(f'{indent} width: {np.round(1e-3 * widths[i], 5)}')\n print(f'{indent} height: {np.round(1e-3 * heights[i], 5)}')\n dx, dy = xymax[i] - xymin[i]\n angle = np.arctan2(-dx, dy)\n print(f'{indent} theta: {np.round(angle, 5)}')\n print(f'-\\n type: ObscRectangle')\n print(f' x: {np.round(1e-3 * wart_xy[0], 5)}')\n print(f' y: {np.round(1e-3 * wart_xy[1], 5)}')\n print(f' width: {np.round(1e-3 * wart_w, 5)}')\n print(f' height: {np.round(1e-3 * wart_h, 5)}')\n print(f' theta: {np.round(wart_dth, 5)}')\n \nspider()",
"Plot \"User Aperture Data\" from the ZEMAX \"spider\" surface 6, as cross check:",
"def plot_obs():\n wart1 = np.array([\n [ -233.22959, 783.94254],\n [-249.32698, 937.09892],\n [49.02959, 968.45746],\n [ 65.126976, 815.30108],\n [ -233.22959, 783.94254],\n ])\n wart2 = np.array([\n [-233.22959, 783.94254],\n [ -249.32698, 937.09892],\n [49.029593, 968.45746],\n [65.126976, 815.30108],\n [-233.22959, 783.94254],\n ])\n vane1 = np.array([\n [363.96554,-8.8485008],\n [341.66121, 8.8931664],\n [1713.4345, 1733.4485],\n [1735.7388, 1715.7068],\n [363.96554,-8.8485008],\n ])\n vane2 = np.array([\n [-1748.0649, 1705.9022],\n [ -1701.1084, 1743.2531],\n [ -329.33513, 18.697772],\n [ -376.29162, -18.653106],\n [-1748.0649, 1705.9022],\n ])\n vane3 = np.array([\n [ -1717.1127, -1730.5227],\n [ -1732.0605, -1718.6327],\n [ -360.28728, 5.922682],\n [-345.33947, -5.9673476],\n [ -1717.1127, -1730.5227],\n ])\n vane4 = np.array([\n [ 341.66121, -8.8931664],\n [363.96554, 8.8485008],\n [1735.7388, -1715.7068],\n [1713.4345, -1733.4485],\n [ 341.66121, -8.8931664],\n ])\n extra = np.array([\n [ 2470 , 0 ],\n [ 2422.5396 , -481.8731 ],\n [ 2281.9824 , -945.22808 ],\n [ 2053.7299 , -1372.2585 ],\n [ 1746.5537 , -1746.5537 ],\n [ 1372.2585 , -2053.7299 ],\n [ 945.22808 , -2281.9824 ],\n [ 481.8731 , -2422.5396 ],\n [ 3.0248776e-13 , -2470 ],\n [ -481.8731 , -2422.5396 ],\n [ -945.22808 , -2281.9824 ],\n [ -1372.2585 , -2053.7299 ],\n [ -1746.5537 , -1746.5537 ],\n [ -2053.7299 , -1372.2585 ],\n [ -2281.9824 , -945.22808 ],\n [ -2422.5396 , -481.8731 ],\n [ -2470 , 2.9882133e-12 ],\n [ -2422.5396 , 481.8731 ],\n [ -2281.9824 , 945.22808 ],\n [ -2053.7299 , 1372.2585 ],\n [ -1746.5537 , 1746.5537 ],\n [ -1372.2585 , 2053.7299 ],\n [ -945.22808 , 2281.9824 ],\n [ -481.8731 , 2422.5396 ],\n [ 5.9764266e-12 , 2470 ],\n [ 481.8731 , 2422.5396 ],\n [ 945.22808 , 2281.9824 ],\n [ 1372.2585 , 2053.7299 ],\n [ 1746.5537 , 1746.5537 ],\n [ 2053.7299 , 1372.2585 ],\n [ 2281.9824 , 945.22808 ],\n [ 2422.5396 , 481.8731 ],\n [ 2470 , -1.0364028e-11 ],\n [ 2724 , 0 ],\n [ 2671.6591 , -531.42604 ],\n [ 2516.6478 , -1042.4297 ],\n [ 2264.9232 , -1513.3733 ],\n [ 1926.1589 , -1926.1589 ],\n [ 1513.3733 , -2264.9232 ],\n [ 1042.4297 , -2516.6478 ],\n [ 531.42604 , -2671.6591 ],\n [ 3.3359379e-13 , -2724 ],\n [ -531.42604 , -2671.6591 ],\n [ -1042.4297 , -2516.6478 ],\n [ -1513.3733 , -2264.9232 ],\n [ -1926.1589 , -1926.1589 ],\n [ -2264.9232 , -1513.3733 ],\n [ -2516.6478 , -1042.4297 ],\n [ -2671.6591 , -531.42604 ],\n [ -2724 , 3.2955032e-12 ],\n [ -2671.6591 , 531.42604 ],\n [ -2516.6478 , 1042.4297 ],\n [ -2264.9232 , 1513.3733 ],\n [ -1926.1589 , 1926.1589 ],\n [ -1513.3733 , 2264.9232 ],\n [ -1042.4297 , 2516.6478 ],\n [ -531.42604 , 2671.6591 ],\n [ 6.5910065e-12 , 2724 ],\n [ 531.42604 , 2671.6591 ],\n [ 1042.4297 , 2516.6478 ],\n [ 1513.3733 , 2264.9232 ],\n [ 1926.1589 , 1926.1589 ],\n [ 2264.9232 , 1513.3733 ],\n [ 2516.6478 , 1042.4297 ],\n [ 2671.6591 , 531.42604 ],\n [ 2724 , -1.1429803e-11 ],\n [ 2470 , 0 ],\n ])\n plt.figure(figsize=(20, 20))\n plt.plot(*wart1.T)\n plt.plot(*wart2.T)\n plt.plot(*vane1.T)\n plt.plot(*vane2.T)\n plt.plot(*vane3.T)\n plt.plot(*vane4.T)\n plt.plot(*extra.T)\n w = 1762./2.\n plt.gca().add_artist(plt.Circle((0, 0), w, color='gray'))\n plt.gca().set_aspect(1.)\n \nplot_obs()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
stsouko/CGRtools
|
doc/tutorial/1_data_types_and_operations.ipynb
|
lgpl-3.0
|
[
"1. Data types and operations with them\n\n(c) 2019, 2020 Dr. Ramil Nugmanov;\n(c) 2019 Dr. Timur Madzhidov; Ravil Mukhametgaleev\n\nInstallation instructions of CGRtools package information and tutorial's files see on https://github.com/stsouko/CGRtools\nNOTE: Tutorial should be performed sequentially from the start. Random cell running will lead to unexpected results.",
"import pkg_resources\nif pkg_resources.get_distribution('CGRtools').version.split('.')[:2] != ['4', '0']:\n print('WARNING. Tutorial was tested on 4.0 version of CGRtools')\nelse:\n print('Welcome!')\n\n# load data for tutorial\nfrom pickle import load\nfrom traceback import format_exc\n\nwith open('molecules.dat', 'rb') as f:\n molecules = load(f) # list of MoleculeContainer objects\nwith open('reactions.dat', 'rb') as f:\n reactions = load(f) # list of ReactionContainer objects\n\nm1, m2, m3, m4 = molecules # molecule\nm7 = m3.copy()\nm11 = m3.copy()\nm11.standardize()\nm7.standardize()\nr1 = reactions[0] # reaction\nm5 = r1.reactants[0]\nm8 = m7.substructure([4, 5, 6, 7, 8, 9])\nm10 = r1.products[0].copy()",
"CGRtools has subpackage containers with data structures classes:\n\nMoleculeContainer - for molecular structure\nReactionContainer - for chemical reaction \nCGRContainer - for Condensed Graph of Reaction\nQueryContainer - queries for substructure search in molecules\nQueryCGRContainer - queries for substructure search in CGRs",
"from CGRtools.containers import * # import all containers",
"1.1. MoleculeContainer\nMolecules are represented as undirected graphs. Molecules contain Atom objects and Bond objects.\nAtom objects are represented as dictionary with unique number for each atom as key. \nBond objects are stored as sparse matrix with adjacent atoms pair as keys for rows and columns.\nHereafter, atom number is unique integer used to enumerate atoms in molecule. Please, don't confuse it with element number in Periodic Table, hereafter called element number.\nMethods for molecule handling and the arguments of MoleculeContainer are described below.",
"m1.meta # dictionary for molecule properties storage. For example, DTYPE/DATUM fields of SDF file are read into this dictionary\n\nm1 # MoleculeContainer supports depiction and graphic representation in Jupyter notebooks.\n\nm1.depict() # depiction returns SVG image in format string\n\nwith open('molecule.svg', 'w') as f: # saving image to SVG file\n f.write(m1.depict())\n\nm_copy = m1.copy() # copy of molecule\nm_copy\n\nlen(m1) # get number of atoms in molecule\n# or \nm1.atoms_count\n\nm1.bonds_count # number of bonds\n\nm1.atoms_numbers # list of atoms numbers",
"Each structure has additional atoms attributes: number of neighbors and hybridization.\nThe following notations are used for hybridization of atoms. Values are given as numbers below (in parenthesis symbols that are used in SMILES-like signatures are shown):\n\n1 (s) - all bonds of atom are single, i.e. sp3 hybridization\n2 (d) - atom has one double bond and others are single, i.e. sp2 hybridization\n3 (t) - atom has one triple or two double bonds and other are single, i.e. sp hybridization\n4 (a) - atom is in aromatic ring\n\nNeighbors and hybridizations atom attributes are required for substructure operations and structure standardization. See below",
"# iterate over atoms using its numbers\nlist(m1.atoms()) # works the same as dict.items()\n\n# iterate over bonds using adjacent atoms numbers\nlist(m1.bonds())\n\n# access to atom by number\nm1.atom(1)\n\ntry:\n m1.atom(10) # raise error for absent atom numbers\nexcept KeyError:\n print(format_exc())\n\n# access to bond using adjacent atoms numbers\nm1.bond(1, 4)\n\ntry:\n m1.bond(1, 3) # raise error for absent bond\nexcept KeyError:\n print(format_exc())",
"Atom objects are dataclasses which store information about:\n\nelement\nisotope\ncharge\nradical state\nxy coordinates\n\nAlso atoms has methods for data integrity checks and include some internally used data.",
"a = m1.atom(1)\n\n# access to information\na.atomic_symbol # element symbol\n\na.charge # formal charge\n\na.is_radical # atom radical state\n\na.isotope # atom isotope. Default isotope if not set. Default isotopes are the same as used in InChI notation\n\na.x # coordinates\na.y\n#or \na.xy\n\na.neighbors # Number of neighboring atoms. It is read-only.\n\na.hybridization # Atoms hybridization. It is read-only.\n\ntry:\n a.hybridization = 2 # Not assignable. Read-only! Thus error is raised.\nexcept AttributeError:\n print(format_exc())",
"Atomic attributes are assignable.\nCGRtools has integrity checks for verification of changes induced by user",
"a.charge = 1\nm1\n\na.charge = 0\na.is_radical = True\nm1\n\n# bond objects also are data-like classes which store information about bond order\nb = m1.bond(3, 4)\nb.order\n\ntry:\n b.order = 1 # order change not possible\nexcept AttributeError:\n print(format_exc())",
"Bonds are Read-only\nFor bond modification one should to use delete_bond method to break bond and add_bond for creating new.",
"m1.delete_bond(3, 4)\nm1",
"Method delete_atom removes atom from the molecule",
"m1.delete_atom(3)\nm1\n\nm_copy # copy unchanged!",
"Atoms and bonds objects can be converted into integer representation that could be used to classify their types.\nAtom type is represented by 21 bit code rounded to 32 bit integer number:\n\n9 bits are used for isotope (511 posibilities, highest known isotope is ~300)\n7 bits stand for atom number (2 ** 7 - 1 == 127, currently 118 elements are presented in Periodic Table)\n4 bits stand for formal charge. Charges range from -4 to +4 rescaled to range 0-8\n1 bit are used for radical state.",
"int(a)\n# 61705 == 000001111 0001000 0100 1\n# 000001111 == 15 isotope\n# 0001000 == 8 Oxygen\n# 0100 == 4 (4 - 4 = 0) uncharged\n# 1 == 1 is radical\n\nint(b) # bonds are encoded by their order\n\na = m_copy.atom(1)\nprint(a.implicit_hydrogens) # get number of implicit hydrogens on atom 1\nprint(a.explicit_hydrogens) # get number of explicit hydrogens on atom 1\nprint(a.total_hydrogens) # get total number of hydrogens on atom 1\n\nm1\n\nm1.check_valence() # return list of numbers of atoms with invalid valences\n\nm4 # molecule with valence errors\n\nm4.check_valence()\n\nm3\n\nm3.sssr # Method for application of Smallest Set of Smallest Rings algorithm for rings \n # identification. Returns tuple of tuples of atoms forming smallest rings",
"Connected components.\nSometimes molecules has disconnected components (salts etc).\nOne can find them and split molecule to separate components.",
"m2 # it's a salt represented as one graph\n\nm2.connected_components # tuple of tuples of atoms belonging to graph components\n\nanion, cation = m2.split() # split molecule to components\n\nanion # graph of only one salt component\n\ncation # graph of only one salt component",
"Union of molecules\nSometimes it is more convenient to represent salts as ion pair. Otherwise ambiguity could be introduced, for example in reaction of salt exchange:\nAg+ + NO3- + Na+ + Br- = Ag+ + Br- + Na+ + NO3-. Reactants and products sets are the same. \nIn this case one can combine anion-cation pair into single graph. It could be convenient way to represent other molecule mixtures.",
"salt = anion | cation \n# or \nsalt = anion.union(cation)\nsalt # this graph has disconnected components, it is considered single compound now",
"Substructures could be extracted from molecules.",
"sub = m3.substructure([4,5,6,7,8,9]) # substructure with passed atoms\nsub",
"augmented_substructure is a substructure consisting from atoms and a given number of shells of neighboring atoms around it.\ndeep argument is a number of considered shells. \nIt also returns projection by default.",
"aug = m3.augmented_substructure([10], deep=2) # atom 10 is Nitrogen\naug",
"Atoms Ordering.\nThis functionality is used for canonic numbering of atoms in molecules. Morgan algorithm is used for atom ranking. Property atoms_order returns dictionary of atom numbers as keys and their ranks according to canonicalization as values. Equal rank mean that atoms are symmetric (are mapped to each other in automorhisms).",
"m5.atoms_order",
"Atom number can be changed by remap method.\nThis method is useful when it is needed to change order of atoms in molecules. First argument to remap method is dictionary with existing atom numbers as keys and desired atom number as values. It is possible to change atom numbers for only part of atoms. Atom numbers could be non-sequencial but need to be unique. \nIf argument copy is set True new object will be created, else existing molecule will be changed. Default is False.",
"m5\n\nremapped = m5.remap({4:2}, copy=True)\nremapped",
"1.2. ReactionContainer\nReactionContainer objects has the following properties:\n\nreactants - list of reactants molecules\nreagents - list of reagents molecules\nproducts - list of products molecules\nmeta - dictinary of reaction metadata (DTYPE/DATUM block in RDF)",
"r1 # depiction supported\n\nr1.meta\n\nprint(r1.reactants, r1.products) # Access to lists of reactant and products.\nreactant1, reactant2, reactant3 = r1.reactants\nproduct = r1.products[0]",
"Reactions also has standardize, kekule, thiele, implicify_hydrogens, explicify_hydrogens, etc methods (see part 3). These methods are applied independently to every molecule in reaction.\n1.3. CGR\nCGRContainer object is similar to MoleculeConrtainer, except some methods. The following methods are not suppoted for CGRContainer:\n\nstandardization methods\nhydrogens count methods\ncheck_valence\n\nCGRContainer also has some methods absent in MoleculeConrtainer:\n\ncenters_list\ncenter_atoms\ncenter_bonds\n\nCGRContainer is undirected graph. Atoms and bonds in CGR has two states: reactant and product.\nComposing to CGR\nAs mentioned above, atoms in MoleculeContainer have unique numbers. These numbers are used as atom-to-atom mapping in CGRtools upon CGR creation. Thus, atom order for molecules in reaction should correspond to atom-to-atom mapping. \nPair of molecules can be transformed into CGR. Notice that, the same atom numbers in reagents and products imply the same atoms.\nReaction also can be composed into CGR. Atom numbers of molecules in reaction are used as atom-to-atom mapping of reactants to products.",
"cgr1 = m7 ^ m8 # CGR from molecules\n# or \ncgr1 = m7.compose(m8)\nprint(cgr1)\ncgr1\n\nr1\n\ncgr2 = ~r1 # CGR from reactions\n# or \ncgr2 = r1.compose()\nprint(cgr2) # signature is printed out.\ncgr2.clean2d()\ncgr2\n\na = cgr2.atom(2) # atom access is the same as for MoleculeContainer\n\na.atomic_symbol # element attribute\n\na.isotope # isotope attribute",
"For CGRContainer attributes charge, is_radical, neighbors and hybridization refer to atom state in reactant of reaction; arguments p_charge, p_is_radical, p_neighbors and p_hybridization could be used to extract atom state in product part in reaction.",
"a.charge # charge of atom in reactant\n\na.p_charge # charge of atom in product\n\na.p_is_radical # radical state of atom in product.\n\na.neighbors # number of neighbors of atom in reactant\n\na.p_neighbors # number of neighbors of atom in product\n\na.hybridization # hybridization of atom in reactant. 1 means only single bonds are incident to atom\n\na.p_hybridization # hybridization of atom in product. 1 means only single bonds are incident to atom\n\nb = cgr1.bond(4, 10) # take bond",
"Bonds has order and p_order attribute\nIf order attribute value is None, it means that bond was formed\nIf p_order is None, it means that bond was broken \nBoth order and p_order can't be None",
"b.order # bond order in reactant\n\nb.p_order is None # bond order in product in None",
"CGR can be decomposed back to reaction, i.e. reactants and products.\nNotice that CGR can lose information in case of unbalanced reactions (where some atoms of reactant does not have counterpart in product, and vice versa). Decomposition of CGRs for unbalanced reactions back to reaction may lead to strange (and erroneous) structures.",
"reactant_part, product_part = ~cgr1 # CGR of unbalanced reaction is decomposed back into reaction\n# or \nreactant_part, product_part = cgr1.decompose()\n\nreactant_part # reactants extracted. One can notice it is initial molecule\n\nproduct_part #extracted products. Originally benzene was the product.",
"For decomposition of CGRContainer back into ReactionContainer ReactionContainer.from_cgr constructor method can be used.",
"decomposed = ReactionContainer.from_cgr(cgr2)\ndecomposed.clean2d()\ndecomposed",
"You can see that water absent in products initially was restored. \nThis is a side-effect of CGR decomposing that could help with reaction balancing. \nBut balancing using CGR decomposition works correctly only if minor part atoms are lost \nbut multiplicity and formal charge are saved. In next release electronic state balansing will be added.",
"r1 # compare with initial reaction",
"1.4 Queries\nCGRtools supports special objects for Queries. Queries are designed for substructure isomorphism. User can set number of neighbors and hybridization by himself (in molecules they could be calculated but could not be changed).\nQueries don't have reset_query_marks method",
"from CGRtools.containers import*\n\nm10 # ether\n\ncarb = m10.substructure([5,7,2], as_query=True) # extract of carboxyl fragment\nprint(carb)\ncarb",
"CGRs also can be transformed into Query.\nQueryCGRContainer is similar to QueryContainer class for CGRs and has the same API.\nQueryCGRContainer take into account state of atoms and bonds in reactant and product, including neighbors and hybridization",
"cgr_q = cgr1.substructure(cgr1, as_query=True) # transfrom CGRContainer into QueryCGRContainer\n#or\ncgr_q = QueryCGRContainer() | cgr1 # Union of Query container with CGR or Molecule gives QueryCGRContainer\nprint(cgr_q) # print out signature of query\ncgr_q",
"1.5. Molecules, CGRs, Reactions construction\nCGRtools has API for objects construction from scratch.\nCGR and Molecule has methods add_atom and add_bond for adding atoms and bonds.",
"from CGRtools.containers import MoleculeContainer\nfrom CGRtools.containers.bonds import Bond\nfrom CGRtools.periodictable import Na\n\nm = MoleculeContainer() # new empty molecule\n\nm.add_atom('C') # add Carbon atom using element symbol\nm.add_atom(6) # add Carbon atom using element number. {'element': 6} is not valid, but {'element': 'O'} is also acceptable\nm.add_atom('O', charge=-1) # add negatively charged Oxygen atom. Similarly other atomic properties can be set\n\n# add_atom has second argument for setting atom number. \n# If not set, the next integer after the biggest among already created will be used.\nm.add_atom(Na(23), 4, charge=1) # For isotopes required element object construction.\n\nm.add_bond(1, 2, 1) # add bond with order = 1 between atoms 1 and 2\nm.add_bond(3, 2, Bond(1)) # the other possibility to set bond order\n\nm.clean2d() #experimental function to calculate atom coordinates. Has number of flaws yet\nm",
"Reactions can be constructed from molecules.\nReactions are tuple-like objects. Modification impossible.",
"r = ReactionContainer(reactants=[m1], products=[m11]) # one-step way to construct reaction\n# or\nr = ReactionContainer([m1], [m11]) # first list of MoleculeContainers is interpreted as reactants, second one - as products\n\nr\n\nr.fix_positions() # this method fixes coordinates of molecules in reaction without calculation of atoms coordinates.\nr",
"QueryContainers can be constructed in the same way as MoleculeContainers.\nUnlike other containers QueryContainers additionally support atoms, neighbors and hybridization lists.",
"q = QueryContainer() # creation of empty container\nq.add_atom('N') # add N atom, any isotope, not radical, neutral charge, \n # number of neighbors and hybridization are irrelevant\nq.add_atom('C', neighbors=[2, 3], hybridization=2) # add carbon atom, any isotope, not radical, neutral charge, \n # has 2 or 3 explicit neighbors and sp2 hybridization\nq.add_atom('O', neighbors=1)\nq.add_bond(1, 2, 1) # add single bond between atom 1 and 2 \nq.add_bond(2, 3, 2) # add double bond between atom 1 and 2 \n# any amide group will fit this query\n\nprint(q) # print out signature (SMILES-like)\nq.clean2d()\nq",
"1.6. Extending CGRtools\nYou can easily customize CGRtools for your tasks.\nCGRtools is OOP-oriented library with subclassing and inheritance support.\nAs an example, we show how special marks on atoms for ligand donor centers can be added.",
"from CGRtools.periodictable import Core, C, O\n\nclass Marked(Core):\n __slots__ = '__mark' # all new attributes should be slotted!\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.__mark = None # set default value for added attribute\n\n @property\n def mark(self): # created new property \n return self.__mark\n\n @mark.setter\n def mark(self, mark):\n # do some checks and calculations\n self.__mark = mark\n \n def __repr__(self):\n if self.__isotope:\n return f'{self.__class__.__name__[6:]}({self.__isotope})'\n return f'{self.__class__.__name__[6:]}()'\n \n @property\n def atomic_symbol(self) -> str:\n return self.__class__.__name__[6:]\n\n\nclass MarkedC(Marked, C):\n pass\n\n\nclass MarkedO(Marked, O):\n pass\n\nm = MoleculeContainer() # create newly developed container MarkedMoleculeContainer\nm.add_atom(MarkedC()) # add custom atom C\nm.add_atom(MarkedO()) # add custom atom O\nm.add_bond(1, 2, 1)\n\nm.atom(2).mark = 1 # set mark on atom.\n\nprint(m)\nm.clean2d()\nm\n\nm.atom(2).mark # one can return mark"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
google-research/evoflow
|
notebooks/onemax.ipynb
|
apache-2.0
|
[
"Copyright 2020 The EvoFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 & Creative Common licence 4.0\n# EvoFlow and its tutorials are released under the Apache 2.0 licence\n# its documentaton is licensed under the Creative Common licence 4.0",
"EvoFlow hello world: OneMax\nThis notebook provide a quick introduction of how EvoFlow works by showing how you can use to solve the classic OneMax problem with it.\nAt its core <b>EvoFlow is a modern hardware accelerated genetic algorithm framework that recast genetic algorithm programing as a dataflow computation on tensors</b>. Conceptually is very similar to what Tensorflow.keras is doing so if you have experience with Keras or Tensorflow you will feel right at home. Under the hood, EvoFlow leverage Tensorflow or Cupy to provide hardware accelerated operations.\nFor more information about EvoFlow design and architecture, see our paper.\n<b>EvoFlow, while heavily tested, is considered experimental - use at your own risks. Issues should be reported on Github. For the rest: evoflow@google.com</b>\n<table align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/google-research/evoflow/blob/master/notebooks/onemax.ipynb\"><img src=\"https://storage.googleapis.com/evoflow/images/colab.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/google-research/evoflow/blob/master/notebooks/onemax.ipynb\"><img src=\"https://storage.googleapis.com/evoflow/images/github.png\" />View source on GitHub</a>\n </td>\n</table>\n\nSetup",
"# installing the latest version of evoflow\ntry:\n import evoflow\nexcept:\n !pip install -U evoflow\n%load_ext autoreload\n%autoreload 2\n\nfrom evoflow.engine import EvoFlow\nfrom evoflow.selection import SelectFittest\nfrom evoflow.population import randint_population\nfrom evoflow.fitness import Sum\nfrom evoflow.ops import Input, RandomMutations1D, UniformCrossover1D",
"Population definition\nIn this example, we are going to represent the population as a 2D tensor where:\n\nthe 1st dimension is the number of chromosome (population)\nthe 2nd dimension is the number of gene per chromosome.",
"POPULATION_SIZE = 64 #@param {type: \"slider\", min: 16, max: 2048}\nCHROMOSOME_SIZE = 32 #@param {type: \"slider\", min: 16, max: 2048}\n\nSHAPE = (POPULATION_SIZE, CHROMOSOME_SIZE)\nprint(f\"Population will have {SHAPE[0]} distinct chromosomes made of {SHAPE[1]} genes\")",
"Our model need an initial population to mutate as inputs. Here, we are taking the traditional approach to initialize this population at random while ensure the gene value is either 0 or 1 by setting the max_value to 1. We will use the population in the evolve() funtion very similarly that you would feed your x_train data in deep-learning.",
"population = randint_population(SHAPE, max_value=1)",
"Evolution model setup\nBuilding an evolution model requires setup the <i>evolution operations</i> and then compiling the model.\nGenetic operations are represented by OPs that are very similar to the tf.keras.layers. They are combined by creating a directed graph that inter-connect them, again very similarly to what the keras functional API is doing.\nIn this example we will mutate our population using two very basic genetic algorithm operations: the Random Mutation and Uniform Crossover. As our population is made of single dimensions chromosomes we will use the 1D variant of those ops: RandomMutations1D and UniformCrossover1D.\nYou can experiment using a 2D or even a 3D population by changing the input shape above and using operation variants that match your chromosomes shape. For example you need to use UniformCrossover2D for a 2D population, and UniformCrossover3D for a 3D population.",
"inputs = Input(shape=(SHAPE))\nx = RandomMutations1D(max_gene_value=1, min_gene_value=0)(inputs)\noutputs = UniformCrossover1D()(x)",
"We instantiate our model by providing its inputs and outputs. Under the hood a EvoFlow model is represented as a direct graph that support multiples inputs, multiple outputs, and arbitrary branching to tackle the most complex use-cases. You can use summary() to check what your model will look like.",
"ef = EvoFlow(inputs, outputs)\nef.summary()",
"Model compilation\nBefore the model is ready for evolutions, it needs two additional key components that will be supplied to the compile function:\n\nHow to asses how good is a given chromosome (fitness function) \nHow to select chromosomes and renew the pool (Selection function)\n\nFitness function\nThe fitness function is the algorithm objective function that the model try to maximize. It is the most critical part of a genetic algorithm where you express the constraints that a chromosome must satisify to solve the problem. At each evolution this function is used to compute how fit for the task each chromosome are.\nFitness functions are very similar to loss functions in deep-learning except they don't need to be differentiable and therefore can perform arbitrary computation. Depending on the problem solved, you can decide to either maximize, minimize the fitness value or get it to converge to a fixed value. \nThe cost of fitness function increased expressiveness and flexibility compared to neural network loss is that we don't have the gradients to help guide the model convergence and therefore coverging is more computationaly expensive.\nTo make things efficient and fast, it is recommended to implement Fitness functions in EvoFlow as tensor operations but this is not requireds as long as the function return at the end an 1D tensor that contains the fitness value for each chromosome in the population.\nTo solve the OneMax problem we want a fitness function that encourages the chromosomes to have as many genes with a value of 1 as possible. In tensor representation this is easy to acheive by simply computing the sum of the chromosome and using that value as its fitness. \nTo make the progress look nicer, we scale the fitness between 0 and 1 by supplying a max_sum_value equivalent to CHROMOSOME_SIZE as the best case is a chromosome made only of 1s.",
"fitness_function = Sum(max_sum_value=CHROMOSOME_SIZE)",
"Modeling evolutionary selection\nThe evolutionary process takes the fitness values and decides which chromosomes to keep. A naive form to model selection is to keep the fitest individuals and carry them over to the next generation. This is usualy referred as an elitist selection strategy. For instance, we can keep the fitest individuals (the one with the largest fitness values) as we create the next generation. \nAlternative functions that have different selection intensitive pressure properties exist. For example, roulette wheel selection has non-constant selection intensity depending on the population's fitness distribution, whereas tournament selection provides constant selection intesity regardless of the fitness distribution.",
"selection_strategy = SelectFittest()",
"Compilation\nNow that we have defined our fitness_function and our selection_strategy, our model is ready to get compiled with those as parameters.",
"ef.compile(selection_strategy, fitness_function)",
"Evolution\nWe now are going to evolve our inital random population over a couple of generations. At each generation the evolution strategy keep the best invididuals and replace the low performing ones with the best one of the previous generation.\ngeneration= controls the number of time the model is applied to the population to generate a new generation of mutated specimens. It is equivalent to what the number of epochs is deep-learning. The harder the problem, the more generations you need.",
"GENERATIONS = 4 #@param {type: \"slider\", min: 5, max: 100}",
"As the model evolve, look for the value of fitness_function_max which indicate the value of the best performing chromosome. As you will see it will quickly reach 1, which indicates that we found an optimal solution",
"results = ef.evolve(population, generations=GENERATIONS)",
"Results\nLet's check that the optimal solution we found are chromosomes fully made of 1s. \nFirst let's look at how quickly our model converged. Depending on your population size and chromosomes size the convergence will be slower or faster. We encourage you to experiments with different values.\nNote: in the graph below we use static=True to generate graph as .png and have them display in colab and github. However, when developping your own algorithms, we recommend using the interactive ones that rely on altair as it makes for a nicer experience :)",
"results.plot_fitness(static=True) # note we use a static display",
"Next we can look at what the population look like using a heatmap. As the model converged to the optimal solution the top chromosomes are all made of ones and form a uniform color band. If you evolve long enough the whole heatmap will be of a solid color as all the chromosomes will contains the optimial solution.",
"results.display_populations(top_k=100, rounding=0)",
"Finally to convince ourselves that EvoFlow worked as intended we can display the top 10 best solutions and check they are made of 1s",
"results.top_k()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
matheusportela/indeed-ml-codesprint
|
indeed.ipynb
|
mit
|
[
"Indeed Machine Learning CodeSprint\nLoad the important packages:",
"import numpy as np\nimport sklearn",
"Data loading\nLoad the training data:",
"import csv\n\ndef load_train_data(filename):\n X = []\n y = []\n \n with open(filename) as fd:\n reader = csv.reader(fd, delimiter='\\t')\n\n # ignore header row\n next(reader, None)\n \n for row in reader:\n X.append(row[1])\n y.append(row[0].split())\n\n return np.array(X), np.array(y)\n\nX, y = load_train_data('data/train.tsv')",
"Show some input and output data:",
"print 'Input:', X[0]\nprint\nprint 'Output:', y[0]",
"Preprocessing definition\nPreprocessing steps are applied differently for input vectors and target vectors.\nInput preprocessing\nFirst, we need to transform the input text into a numerical representation. This is done by generating a vector where each position is the number of occurrences for a given word in the data.\nFor instance, given the text hello. this is my first line. this is my next line. this is the final one, its count vector, considering that the first position is with respect to this, the second is line and the third is final is [3, 2, 1]. The count vector did not use any stop words but only considered words that appeared at least 2 times in the training data, with maximum frequency of 95%.\nNext, we apply tf-idf to weight the words according to their importance. Too frequent or too rare words are less important than the others.\nOutput preprocessing\nUsually, the output is given as a list of tags for each description, such as [['part-time-job', 'salary', 'supervising-job'], ['2-4-years-experience-required', 'hourly-wage']]. However, since some tags are mutually exclusive (only one can exist at a time), we take that into account. For instance, no description can be both 'part-time-job' and 'full-time-job' at the same time.\nTherefore, the target vector is splitted into several vectors, one for each mutually exclusive set of tags, in a format such as:\npython\n{\n 'job': [['part-time-job'], ['full-time-job'], ['part-time-job']],\n 'wage': [['salary'], [], []],\n 'degree': [[], [], []],\n 'experience': [[], [], []],\n 'supervising': [[], [], ['supervising-job']]\n}\nWith the splitted target vector, we will be able to train one model for each tag type.\nAfter that, each tag type target label will be encoded in numerical format, where each tag will be replaced by an integer. For instance, [['part-time-job'], ['full-time-job'], [], ['part-time-job'], []] may be encoded to [1, 2, 0, 1, 0].\nDefine input data preprocessor as bag-of-words and tf-idf feature extraction:\n\nCountVectorizer: Transforms text to vector of occurrences for each word found in training set (bag-of-words representation).\nTfidfTransformer: Transforms bag-of-words to its relative frequency, removing too frequent or rare words from the final representation.",
"from sklearn.pipeline import Pipeline\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.feature_extraction.text import TfidfTransformer\n\nX_preprocessor = Pipeline([\n ('count', CountVectorizer(max_df=0.95, min_df=2, ngram_range=(1, 2))),\n ('tfidf', TfidfTransformer())\n])",
"Define multi-label binarizer for output data. Each target sample will be a binary array: 0 if not present, 1 otherwise.",
"from sklearn.preprocessing import LabelEncoder\n\ny_preprocessors = {\n 'job': LabelEncoder(),\n 'wage': LabelEncoder(),\n 'degree': LabelEncoder(),\n 'experience': LabelEncoder(),\n 'supervising': LabelEncoder()\n}",
"Separate the target vector y into one vector for each mutually exclusive tag type:\n```python\n\n\n\ny = [['part-time-job', 'salary'], ['full-time-job'], ['part-time-job', 'supervising-job']]\nsplit_y = split_exclusive_tags(y)\nsplit_y\n{\n 'job': [['part-time-job'], ['full-time-job'], ['part-time-job']],\n 'wage': [['salary'], [], []],\n 'degree': [[], [], []],\n 'experience': [[], [], []],\n 'supervising': [[], [], ['supervising-job']]\n}\n```\n\n\n\nThis is a useful step when training one model for each exclusive tag type.",
"# Separate targets for mutually exclusive tags\ndef split_exclusive_tags(y):\n split_y = {\n 'job': [],\n 'wage': [],\n 'degree': [],\n 'experience': [],\n 'supervising': []\n }\n \n for target in y:\n split_y['job'].append(filter(lambda x: x in ['part-time-job', 'full-time-job'], target))\n split_y['wage'].append(filter(lambda x: x in ['hourly-wage', 'salary'], target))\n split_y['degree'].append(filter(lambda x: x in ['associate-needed', 'bs-degree-needed', 'ms-or-phd-needed', 'licence-needed'], target))\n split_y['experience'].append(filter(lambda x: x in ['1-year-experience-needed', '2-4-years-experience-needed', '5-plus-years-experience-needed'], target))\n split_y['supervising'].append(filter(lambda x: x in ['supervising-job'], target))\n \n return split_y",
"Classifier definition\nDefine classifier as SVM with one-vs-all strategy for multilabel classification.",
"# F1 score: 0.511\nfrom sklearn.multiclass import OneVsRestClassifier\nfrom sklearn.svm import LinearSVC\n\nmodels = {\n 'job': OneVsRestClassifier(LinearSVC()),\n 'wage': OneVsRestClassifier(LinearSVC()),\n 'degree': OneVsRestClassifier(LinearSVC()),\n 'experience': OneVsRestClassifier(LinearSVC()),\n 'supervising': OneVsRestClassifier(LinearSVC())\n}",
"Model usage\nFor each mutually exclusive tag type, we train one multiclass model capable of deciding which tag (or even none) is appropriate for the given input.\nInitially, an attempt of a single multilabel model was used, which would be able to output multiple labels at once. However, considering that the input space was huge for this situation, better results were achieved by using multiclass models, one for each mutually exclusive tag type. Thus the output would be the output for each tag type model aggregated in a single vector.",
"def fit_models(models, X_preprocessor, y_preprocessors, X, y):\n print 'Fitting models'\n split_y = split_exclusive_tags(y)\n\n X_processed = X_preprocessor.fit_transform(X)\n \n for tag_type, model in models.items():\n # Learn one preprocessor for each mutually exclusive tag\n X_processed = X_preprocessor.transform(X)\n y_processed = y_preprocessors[tag_type].fit_transform(split_y[tag_type])\n \n # Learn one model for each mutually exclusive tag\n model.fit(X_processed, y_processed)",
"Predict the output by executing the model for each tag type:",
"def predict_models(models, X_preprocessor, y_preprocessors, X):\n print 'Predicting with models'\n \n output = [[] for _ in X]\n \n for tag_type, model in models.items():\n # Preprocess and use model for the given type of tag\n X_processed = X_preprocessor.transform(X)\n model_output = model.predict(X_processed)\n \n tag_type_output = y_preprocessors[tag_type].inverse_transform(model_output)\n\n # Aggregate outputs for all types of tags in the same array\n for i, out in enumerate(tag_type_output):\n if type(out) in [list, tuple]:\n output[i].extend(out)\n else:\n output[i].append(out)\n\n return output",
"Model evaluation\nCalculate the F1 score given the target vector and the model output.",
"def calculate_f1_score(y_test, y_output):\n print 'Calculating F1 score'\n \n tags = ['part-time-job', 'full-time-job', 'hourly-wage', 'salary', 'associate-needed', 'bs-degree-needed',\n 'ms-or-phd-needed', 'licence-needed', '1-year-experience-needed', '2-4-years-experience-needed',\n '5-plus-years-experience-needed', 'supervising-job']\n\n true_positive = np.array([0.0 for _ in tags])\n true_negative = np.array([0.0 for _ in tags])\n false_positive = np.array([0.0 for _ in tags])\n false_negative = np.array([0.0 for _ in tags])\n \n for target, output in zip(y_test, y_output):\n for i, tag in enumerate(tags):\n if tag in target and tag in output:\n true_positive[i] += 1\n elif tag not in target and tag not in output:\n true_negative[i] += 1\n elif tag in target and tag not in output:\n false_negative[i] += 1\n elif tag not in target and tag in output:\n false_positive[i] += 1\n else:\n raise Exception('Unknown situation - tag: {} target: {} output: {}'.format(tag, target, output))\n \n tags_precision = np.array([0.0 for _ in tags])\n tags_recall = np.array([0.0 for _ in tags])\n tags_f1_score = np.array([0.0 for _ in tags])\n \n for i, tag in enumerate(tags):\n tags_precision[i] = true_positive[i] / (true_positive[i] + false_positive[i])\n tags_recall[i] = true_positive[i] / (true_positive[i] + false_negative[i])\n tags_f1_score[i] = 2*tags_precision[i]*tags_recall[i] / (tags_precision[i] + tags_recall[i])\n \n min_tags_precision = np.argmin(tags_precision)\n min_tags_recall = np.argmin(tags_recall)\n min_tags_f1_score = np.argmin(tags_f1_score)\n \n print\n print '{:30s} | {:5s} | {:5s} | {:5s}'.format('Tag', 'Prec.', 'Rec. ', 'F1')\n for i in range(len(tags)):\n print '{:30s} | {:.3f} | {:.3f} | {:.3f}'.format(\n tags[i], tags_precision[i], tags_recall[i], tags_f1_score[i])\n print\n \n print 'Worst precision:', tags[min_tags_precision]\n print 'Worst recall:', tags[min_tags_recall]\n print 'Worst F1 score:', tags[min_tags_f1_score]\n print\n \n precision = np.sum(true_positive) / (np.sum(true_positive) + np.sum(false_positive))\n recall = np.sum(true_positive) / (np.sum(true_positive) + np.sum(false_negative))\n f1_score = 2*precision*recall / (precision + recall)\n \n print 'General:'\n print 'Precision: {:.3f}'.format(precision)\n print 'Recall: {:.3f}'.format(recall)\n print 'F1 score: {:.3f}'.format(f1_score)\n \n return f1_score",
"Evaluate model with 5-fold cross-validation using the F1 score metric:",
"from sklearn.model_selection import KFold\nfrom sklearn.metrics import f1_score\n\nscores = []\nk_fold = KFold(n_splits=5)\n\nfor i, (train, validation) in enumerate(k_fold.split(X)):\n X_train, X_validation, y_train, y_validation = X[train], X[validation], y[train], y[validation]\n\n fit_models(models, X_preprocessor, y_preprocessors, X_train, y_train)\n y_output = predict_models(models, X_preprocessor, y_preprocessors, X_validation)\n \n score = calculate_f1_score(y_validation, y_output)\n scores.append(score)\n print '#{0} F1 score: {1:.3f}'.format(i, score)\n print\n \nf1_score = np.mean(scores)\n \nprint 'Total F1 score: {0:.3f}'.format(f1_score)",
"Model usage\nLoad the data:",
"def load_test_data(filename):\n with open(filename) as fd:\n reader = csv.reader(fd, delimiter='\\t')\n next(reader, None) # ignore header row\n X = [row[0] for row in reader]\n\n return np.array(X)\n\nX_train, y_train = load_train_data('data/train.tsv')\nX_test = load_test_data('data/test.tsv')",
"Train the model with all training data:",
"fit_models(models, X_preprocessor, y_preprocessors, X_train, y_train)",
"Predict output from test data:",
"y_output = predict_models(models, X_preprocessor, y_preprocessors, X_test)",
"Show some output data:",
"print y_output[:10]",
"Save output data:",
"def save_output(filename, output):\n with open(filename, 'w') as fd:\n fd.write('tags\\n')\n \n for i, tags in enumerate(output):\n fd.write(' '.join(tags))\n fd.write('\\n')\n \nsave_output('data/tags.tsv', y_output)",
"Save preprocessors and model:",
"import pickle\n\ndef save(filename, obj):\n pickle.dump(obj, open(filename, 'w'))\n\nsave('models/X_preprocessor.pickle', X_preprocessor)\nsave('models/y_preprocessor.pickle', y_preprocessors)\nsave('models/clf_{0:.3f}_f1_score.pickle'.format(f1_score), models)",
"Load saved model",
"def load(filename):\n return pickle.load(open(filename))\n\nmodels = load('models/clf_0.461_f1_score.pickle')\nX_preprocessors = load('models/X_preprocessor.pickle')\ny_preprocessors = load('models/y_preprocessor.pickle')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
vinecopulib/pyvinecopulib
|
examples/bivariate_copulas.ipynb
|
mit
|
[
"Import the library",
"import pyvinecopulib as pv",
"Create an independence bivariate copula",
"pv.Bicop() ",
"Create a Gaussian copula\nSee help(pv.BicopFamily) for the available families",
"pv.Bicop(family=pv.BicopFamily.gaussian)",
"Create a 90 degrees rotated Clayon copula with parameter = 3",
"pv.Bicop(family=pv.BicopFamily.clayton, rotation=90, parameters=[3])",
"Create a t copula with correlation of 0.5 and 4 degrees of freedom\nand showcase some methods",
"cop = pv.Bicop(family=pv.BicopFamily.student, rotation=0, parameters=[0.5, 4])\nu = cop.simulate(n=10, seeds=[1, 2, 3])\nfcts = [cop.pdf, cop.cdf,\n cop.hfunc1, cop.hfunc2,\n cop.hinv1, cop.hinv2,\n cop.loglik, cop.aic, cop.bic]\n[f(u) for f in fcts]",
"Different ways to fit a copula...",
"u = cop.simulate(n=1000, seeds=[1, 2, 3])\n\n# Create a new object an sets its parameters by fitting afterwards\ncop2 = pv.Bicop(pv.BicopFamily.student)\ncop2.fit(data=u)\nprint(cop2)\n\n# Otherwise, define first an object to control the fits:\n# - pv.FitControlsBicop objects store the controls\n# - here, we only restrict the parametric family\n# - see help(pv.FitControlsBicop) for more details\n# Then, create a copula from the data\ncontrols = pv.FitControlsBicop(family_set=[pv.BicopFamily.student])\nprint(controls)\ncop2 = pv.Bicop(data=u, controls=controls)\nprint(cop2)",
"Similarly, when the family is unkown,\nthere are two ways to also do model selection...",
"# Create a new object an selects both its family and parameters afterwards\ncop3 = pv.Bicop()\ncop3.select(data=u)\nprint(cop3)\n\n# Or create directly from data\ncop3 = pv.Bicop(data=u)\nprint(cop3)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
quantumlib/Cirq
|
docs/tutorials/educators/neutral_atom.ipynb
|
apache-2.0
|
[
"Copyright 2020 The Cirq Developers",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Neutral atom device class\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://quantumai.google/cirq/tutorials/educators/neutral_atom\"><img src=\"https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png\" />View on QuantumAI</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/tutorials/educators/neutral_atom.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/educators/neutral_atom.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/github_logo_1x.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/Cirq/docs/tutorials/educators/neutral_atom.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/download_icon_1x.png\" />Download notebook</a>\n </td>\n</table>\n\nThis tutorial provides an introduction to making circuits that are compatible with neutral atom devices.\nNeutral atom devices implement quantum gates in one of two ways. One method is by hitting the entire qubit array with microwaves to simultaneously act on every qubit. This method implements global $XY$ gates which take up to $100$ microseconds to perform. Alternatively, we can shine laser light on some fraction of the array. Gates of this type typically take around $1$ microsecond to perform. This method can act on one or more qubits at a time up to some limit dictated by the available laser power and the beam steering system used to address the qubits. Each category in the native gate set has its own limit, discussed more below.",
"try:\n import cirq\nexcept ImportError:\n print(\"installing cirq...\")\n !pip install cirq --quiet\n import cirq\n print(\"installed cirq.\")\n\nfrom math import pi",
"Defining a NeutralAtomDevice\nTo define a NeutralAtomDevice, we specify\n\nThe set of qubits in the device.\nThe maximum duration of gates and measurements.\nmax_parallel_z: The maximum number of single qubit $Z$ rotations that can be applied in parallel.\nmax_parallel_xy: The maximum number of single qubit $XY$ rotations that can be applied in parallel.\nmax_parallel_c: The maximum number of atoms that can be affected by controlled gates simultaneously.\nNote that max_parallel_c must be less than or equal to the minimum of max_parallel_z and max_parallel_xy.\ncontrol_radius: The maximum allowed distance between atoms acted on by controlled gates.\n\nWe show an example of defining a NeutralAtomDevice below.",
"\"\"\"Defining a NeutralAtomDevice.\"\"\"\n# Define milliseconds and microseconds for convenience.\nms = cirq.Duration(nanos=10**6)\nus = cirq.Duration(nanos=10**3)\n\n# Create a NeutralAtomDevice\nneutral_atom_device = cirq.NeutralAtomDevice(\n qubits=cirq.GridQubit.rect(2, 3),\n measurement_duration=5 * ms,\n gate_duration=100 * us,\n max_parallel_z=3,\n max_parallel_xy=3,\n max_parallel_c=3,\n control_radius=2\n)",
"Note that all above arguments are required to instantiate a NeutralAtomDevice. The example device above has the following properties:\n\nThe device is defined on a $3 \\times 3$ grid of qubits.\nMeasurements take $5$ milliseconds.\nGates may take as long as $100$ microseconds if we utilize global microwave gates. Otherwise, a more reasonable bound would be $1$ microsecond.\nA maximum of $3$ qubits may be simultaneously acted on by any gate category (max_parallel_c = 3).\nControlled gates have next-nearest neighbor connectivity (control_radius = 2).\n\nWe can see some properties of the device as follows.",
"\"\"\"View some properties of the device.\"\"\"\n# Display the neutral atom device.\nprint(\"Neutral atom device:\", neutral_atom_device, sep=\"\\n\")\n\n# Get the neighbors of a qubit.\nqubit = cirq.GridQubit(0, 1)\nprint(f\"\\nNeighbors of qubit {qubit}:\")\nprint(neutral_atom_device.neighbors_of(qubit))",
"Native gate set\nThe gates supported by the NeutralAtomDevice class can be placed into three categories:\n\nSingle-qubit rotations about the $Z$ axis.\nSingle-qubit rotations about an arbitrary axis in the $X$-$Y$ plane. We refer to these as $XY$ gates in this tutorial.\nControlled gates: CZ, CNOT, CCZ, and CCNOT (TOFFOLI).\n\nAny rotation angle is allowed for single-qubit rotations. Some examples of valid single-qubit rotations are shown below.",
"# Examine metadata gateset info.\nfor gate_family in neutral_atom_device.metadata.gateset.gates:\n print(gate_family)\n print('-' * 80)\n\n\"\"\"Examples of valid single-qubit gates.\"\"\"\n# Single qubit Z rotations with any angle are valid.\nneutral_atom_device.validate_gate(cirq.rz(pi / 5))\n\n# Single qubit rotations about the X-Y axis with any angle are valid.\nneutral_atom_device.validate_gate(\n cirq.PhasedXPowGate(phase_exponent=pi / 3, exponent=pi / 7)\n)",
"A Hadamard gate is invalid because it is a rotation in the $X$-$Z$ plane instead of the $X$-$Y$ plane.",
"\"\"\"Example of an invalid single-qubit gate.\"\"\"\ninvalid_gate = cirq.H\n\ntry:\n neutral_atom_device.validate_gate(invalid_gate)\nexcept ValueError as e:\n print(f\"As expected, {invalid_gate} is invalid!\", e)",
"For controlled gates, the rotation must be a multiple of $\\pi$ due to the physical implementation of the gates. In Cirq, this means the exponent of a controlled gate must be an integer. The next cell shows two examples of valid controlled gates.",
"\"\"\"Examples of valid multi-qubit gates.\"\"\"\n# Controlled gates with integer exponents are valid.\nneutral_atom_device.validate_gate(cirq.CNOT)\n\n# Controlled NOT gates with two controls are valid.\nneutral_atom_device.validate_gate(cirq.TOFFOLI)",
"Any controlled gate with non-integer exponent is invalid.",
"\"\"\"Example of an invalid controlled gate.\"\"\"\ninvalid_gate = cirq.CNOT ** 1.5\n\ntry:\n neutral_atom_device.validate_gate(invalid_gate)\nexcept ValueError as e:\n print(f\"As expected, {invalid_gate} is invalid!\", e)",
"Multiple controls are allowed as long as every pair of atoms (qubits) acted on by the controlled gate are close enough to each other. We can see this by using the validate_operation (or validate_circuit) method, as follows.",
"\"\"\"Examples of valid and invalid multi-controlled gates.\"\"\"\n# This TOFFOLI is valid because all qubits involved are close enough to each other.\nvalid_toffoli = cirq.TOFFOLI.on(cirq.GridQubit(0, 0), cirq.GridQubit(0, 1), cirq.GridQubit(0, 2))\nneutral_atom_device.validate_operation(valid_toffoli)\n\n# This TOFFOLI is invalid because all qubits involved are not close enough to each other.\ninvalid_toffoli = cirq.TOFFOLI.on(cirq.GridQubit(0, 0), cirq.GridQubit(1, 0), cirq.GridQubit(0, 2))\n\ntry:\n neutral_atom_device.validate_operation(invalid_toffoli)\nexcept ValueError as e:\n print(f\"As expected, {invalid_toffoli} is invalid!\", e)",
"NeutralAtomDevices do not currently support gates with more than two controls although these are in principle allowed by the physical realizations.",
"\"\"\"Any gate with more than two controls is invalid.\"\"\"\ninvalid_gate = cirq.ControlledGate(cirq.TOFFOLI)\n\ntry:\n neutral_atom_device.validate_gate(invalid_gate)\nexcept ValueError as e:\n print(f\"As expected, {invalid_gate} is invalid!\", e)",
"Finally, we note that the duration of any operation can be determined via the duration_of method.",
"\"\"\"Example of getting the duration of a valid operation.\"\"\"\nneutral_atom_device.duration_of(valid_toffoli)",
"Moment and circuit rules\nIn addition to consisting of valid operations as discussed above, valid moments on a NeutralAtomDevice must satisfy the following criteria:\n\nOnly max_parallel_c gates of the same category may be performed in the same moment.\nAll instances of gates in the same category in the same moment must be identical.\nControlled gates cannot be applied in parallel with other gate types.\nPhysically, this is because controlled gates make use of all types of light used to implement gates.\nQubits acted on by different controlled gates in parallel must be farther apart than the control_radius.\nPhysically, this is so that the entanglement mechanism doesn't cause the gates to interfere with one another.\nAll measurements must be terminal.\n\nMoments can be validated with the validate_moment method. Some examples are given below.",
"\"\"\"Example of a valid moment with single qubit gates.\"\"\"\nqubits = sorted(neutral_atom_device.qubits)\n\n# Get a valid moment.\nvalid_moment = cirq.Moment(cirq.Z.on_each(qubits[:3]) + cirq.X.on_each(qubits[3:6]))\n\n# Display it.\nprint(\"Example of a valid moment with single-qubit gates:\", cirq.Circuit(valid_moment), sep=\"\\n\\n\")\n\n# Verify it is valid.\nneutral_atom_device.validate_moment(valid_moment)",
"Recall that we defined max_parallel_z = 3 in our device. Thus, if we tried to do 4 $Z$ gates in the same moment, this would be invalid.",
"\"\"\"Example of an invalid moment with single qubit gates.\"\"\"\n# Get an invalid moment.\ninvalid_moment = cirq.Moment(cirq.Z.on_each(qubits[:4]))\n\n# Display it.\nprint(\"Example of an invalid moment with single-qubit gates:\", cirq.Circuit(invalid_moment), sep=\"\\n\\n\")\n\n# Uncommenting raises ValueError: Too many simultaneous Z gates.\n# neutral_atom_device.validate_moment(invalid_moment)",
"This is also true for 4 $XY$ gates since we set max_parallel_xy = 3. However, there is an exception for $XY$ gates acting on every qubit, as illustrated below.",
"\"\"\"An XY gate can be performed on every qubit in the device simultaneously.\n\nIf the XY gate does not act on every qubit, it must act on <= max_parallel_xy qubits.\n\"\"\"\nvalid_moment = cirq.Moment(cirq.X.on_each(qubits))\nneutral_atom_device.validate_moment(valid_moment)",
"Although both $Z$ and $Z^{1.5}$ are valid gates, they cannot be performed simultaneously because all gates \"of the same type\" must be identical in the same moment.",
"\"\"\"Example of an invalid moment with single qubit gates.\"\"\"\n# Get an invalid moment.\ninvalid_moment = cirq.Moment(cirq.Z(qubits[0]), cirq.Z(qubits[1]) ** 1.5)\n\n# Display it.\nprint(\"Example of an invalid moment with single-qubit gates:\", cirq.Circuit(invalid_moment), sep=\"\\n\\n\")\n\n# Uncommenting raises ValueError: Non-identical simultaneous Z gates.\n# neutral_atom_device.validate_moment(invalid_moment)",
"Exercise: Multiple controlled gates in the same moment\nConstruct a NeutralAtomDevice which is capable of implementing two CNOTs in the same moment. Verify that these operations can indeed be performed in parallel by calling the validate_moment method or showing that Cirq inserts the operations into the same moment.",
"# Your code here!",
"Solution",
"\"\"\"Example solution for creating a device which allows two CNOTs in the same moment.\"\"\"\n# Create a NeutralAtomDevice.\ndevice = cirq.NeutralAtomDevice(\n qubits=cirq.GridQubit.rect(2, 3),\n measurement_duration=5 * cirq.Duration(nanos=10**6),\n gate_duration=100 * cirq.Duration(nanos=10**3),\n max_parallel_z=4,\n max_parallel_xy=4,\n max_parallel_c=4,\n control_radius=1\n)\nprint(\"Device:\")\nprint(device)\n\n# Create a circuit for a NeutralAtomDevice.\ncircuit = cirq.Circuit()\n\n# Append two CNOTs that can be in the same moment.\ncircuit.append(\n [cirq.CNOT(cirq.GridQubit(0, 0), cirq.GridQubit(1, 0)), \n cirq.CNOT(cirq.GridQubit(0, 2), cirq.GridQubit(1, 2))]\n)\n\n# Append two CNOTs that cannot be in the same moment.\ncircuit.append(\n cirq.Moment(cirq.CNOT(cirq.GridQubit(0, 0), cirq.GridQubit(1, 0))), \n cirq.Moment(cirq.CNOT(cirq.GridQubit(0, 1), cirq.GridQubit(1, 1)))\n)\n\n# Validate the circuit.\ndevice.validate_circuit(circuit)\n\n# Display the circuit.\nprint(\"\\nCircuit:\")\nprint(circuit)",
"Note that the square brackets above/below the circuit indicate the first two CNOTs are in the same moment."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
corochann/chainer-hands-on-tutorial
|
src/01_chainer_intro/dataset_introduction.ipynb
|
mit
|
[
"Dataset module introduction",
"# Initial setup following http://docs.chainer.org/en/stable/tutorial/basic.html\nimport numpy as np\nimport chainer\nfrom chainer import cuda, Function, gradient_check, report, training, utils, Variable\nfrom chainer import datasets, iterators, optimizers, serializers\nfrom chainer import Link, Chain, ChainList\nimport chainer.functions as F\nimport chainer.links as L\nfrom chainer.training import extensions\nimport chainer.dataset\nimport chainer.datasets",
"Built-in dataset modules\nSome dataset format is already implemented in chainer.datasets\nTupleDataset",
"from chainer.datasets import TupleDataset\n\nx = np.arange(10)\nt = x * x\n\ndata = TupleDataset(x, t)\n\nprint('data type: {}, len: {}'.format(type(data), len(data)))\n\n# Unlike numpy, it does not have shape property.\ndata.shape",
"i-th data can be accessed by data[i]\nwhich is a tuple of format ($x_i$, $t_i$, ...)",
"# get forth data -> x=3, t=9\ndata[3]",
"Slice accessing\nWhen TupleDataset is accessed by slice indexing, e.g. data[i:j], returned value is list of tuple\n$[(x_i, t_i), ..., (x_{j-1}, t_{j-1})]$",
"# Get 1st, 2nd, 3rd data at the same time.\nexamples = data[0:4]\n\nprint(examples)\nprint('examples type: {}, len: {}'\n .format(type(examples), len(examples)))",
"To convert examples into minibatch format, you can use concat_examples function in chainer.dataset.\nIts return value is in format ([x_array], [t array], ...)",
"from chainer.dataset import concat_examples\n\ndata_minibatch = concat_examples(examples)\n\n#print(data_minibatch)\n#print('data_minibatch type: {}, len: {}'\n# .format(type(data_minibatch), len(data_minibatch)))\n\nx_minibatch, t_minibatch = data_minibatch\n# Now it is array format, which has shape\nprint('x_minibatch = {}, type: {}, shape: {}'.format(x_minibatch, type(x_minibatch), x_minibatch.shape))\nprint('t_minibatch = {}, type: {}, shape: {}'.format(t_minibatch, type(t_minibatch), t_minibatch.shape))",
"DictDataset\nTBD",
"from chainer.datasets import DictDataset\n\nx = np.arange(10)\nt = x * x\n\n# To construct `DictDataset`, you can specify each key-value pair by passing \"key=value\" in kwargs.\ndata = DictDataset(x=x, t=t)\n\nprint('data type: {}, len: {}'.format(type(data), len(data)))\n\n# Get 3rd data at the same time.\nexample = data[2]\n \nprint(example)\nprint('examples type: {}, len: {}'\n .format(type(example), len(example)))\n\n# You can access each value via key\nprint('x: {}, t: {}'.format(example['x'], example['t']))",
"ImageDataset\nThis is util class for image dataset.\nIf the number of dataset becomes very big (for example ImageNet dataset), \nit is not practical to load all the images into memory unlike CIFAR-10 or CIFAR-100.\nIn this case, ImageDataset class can be used to open image from storage everytime of minibatch creation.\n[Note] ImageDataset will download only the images, if you need another label information \n(for example if you are working with image classification task) use LabeledImageDataset instead.\nYou need to create a text file which contains the list of image paths to use ImageDataset.\nSee data/images.dat for how the paths text file look like.",
"import os\n\nfrom chainer.datasets import ImageDataset\n\n# print('Current direcotory: ', os.path.abspath(os.curdir))\n\nfilepath = './data/images.dat'\nimage_dataset = ImageDataset(filepath, root='./data/images')\n\nprint('image_dataset type: {}, len: {}'.format(type(image_dataset), len(image_dataset)))",
"We have created the image_dataset above, however, images are not expanded into memory yet.\nImage data will be loaded into memory from storage every time when you access via index, for efficient memory use.",
"# Access i-th image by image_dataset[i].\n# image data is loaded here. for only 0-th image.\nimg = image_dataset[0]\n\n# img is numpy array, already aligned as (channels, height, width), \n# which is the standard shape format to feed into convolutional layer.\nprint('img', type(img), img.shape)",
"LabeledImageDataset\nThis is util class for image dataset.\nIt is similar to ImageDataset to allow load the image file from storage into memory at runtime of training.\nThe difference is that it contains label information, which is usually used for image classification task.\nYou need to create a text file which contains the list of image paths and labels to use LabeledImageDataset.\nSee data/images_labels.dat for how the text file look like.",
"import os\n\nfrom chainer.datasets import LabeledImageDataset\n\n# print('Current direcotory: ', os.path.abspath(os.curdir))\n\nfilepath = './data/images_labels.dat'\nlabeled_image_dataset = LabeledImageDataset(filepath, root='./data/images')\n\nprint('labeled_image_dataset type: {}, len: {}'.format(type(labeled_image_dataset), len(labeled_image_dataset)))",
"We have created the labeled_image_dataset above, however, images are not expanded into memory yet.\nImage data will be loaded into memory from storage every time when you access via index, for efficient memory use.",
"# Access i-th image and label by image_dataset[i].\n# image data is loaded here. for only 0-th image.\nimg, label = labeled_image_dataset[0]\n\nprint('img', type(img), img.shape)\nprint('label', type(label), label)",
"SubDataset\nTBD\nIt can be used for cross validation.",
"datasets.split_dataset_n_random()",
"Implement your own custom dataset\nYou can define your own dataset by implementing a sub class of DatasetMixin in chainer.dataset\nDatasetMixin\nIf you want to define custom dataset, DatasetMixin provides the base function to make compatible with other dataset format.\nAnother important usage for DatasetMixin is to preprocess the input data, including data augmentation.\nTo implement subclass of DatasetMixin, you usually need to implement these 3 functions.\n - Override __init__(self, *args) function: It is not compulsary but\n - Override __len__(self) function : Iterator need to know the length of this dataset to understand the end of epoch.\n - Override get_examples(self, i) function:",
"from chainer.dataset import DatasetMixin\n\n\nprint_debug = True\nclass SimpleDataset(DatasetMixin):\n def __init__(self, values):\n self.values = values\n \n def __len__(self):\n return len(self.values)\n\n def get_example(self, i):\n if print_debug: \n print('get_example, i = {}'.format(i))\n return self.values[i]",
"Important function in DatasetMixin is get_examples(self, i) function. \nThis function is called when they access data[i]",
"simple_data = SimpleDataset([0, 1, 4, 9, 16, 25])\n\n# get_example(self, i) is called when data is accessed by data[i]\nsimple_data[3]\n\n# data can be accessed using slice indexing as well\n\nsimple_data[1:3]",
"The important point is that get_example function is called every time when the data is accessed by [] indexing.\nThus you may put random value generation for data augmentation code in get_example.",
"import numpy as np\nfrom chainer.dataset import DatasetMixin\n\nprint_debug = False\n\n\ndef calc(x):\n return x * x\n\n\nclass SquareNoiseDataset(DatasetMixin):\n def __init__(self, values):\n self.values = values\n \n def __len__(self):\n return len(self.values)\n\n def get_example(self, i):\n if print_debug: \n print('get_example, i = {}'.format(i))\n x = self.values[i]\n t = calc(x) \n t_noise = t + np.random.normal(0, 0.1)\n return x, t_noise\n\nsquare_noise_data = SquareNoiseDataset(np.arange(10))",
"Below SimpleNoiseDataset adds small Gaussian noise to the original value,\nand every time the value is accessed, get_example is called and differenct noise is added even if you access to the data with same index.",
"# Accessing to the same index, but the value is different!\nprint('Accessing square_noise_data[3]', )\nprint('1st: ', square_noise_data[3])\nprint('2nd: ', square_noise_data[3])\nprint('3rd: ', square_noise_data[3])\n\n# Same applies for slice index accessing.\nprint('Accessing square_noise_data[0:4]')\nprint('1st: ', square_noise_data[0:4])\nprint('2nd: ', square_noise_data[0:4])\nprint('3rd: ', square_noise_data[0:4])",
"To convert examples into minibatch format, you can use concat_examples function in chainer.dataset in the sameway explained at TupleDataset.",
"from chainer.dataset import concat_examples\n\nexamples = square_noise_data[0:4]\nprint('examples = {}'.format(examples))\ndata_minibatch = concat_examples(examples)\n\nx_minibatch, t_minibatch = data_minibatch\n# Now it is array format, which has shape\nprint('x_minibatch = {}, type: {}, shape: {}'.format(x_minibatch, type(x_minibatch), x_minibatch.shape))\nprint('t_minibatch = {}, type: {}, shape: {}'.format(t_minibatch, type(t_minibatch), t_minibatch.shape))",
"TransformDataset\nTransform dataset can be used to create/modify dataset from existing dataset.\nNew (modified) dataset can be created by TransformDataset(original_data, transform_function).\nLet's see a concrete example to create new dataset from original tuple dataset by adding a small noise.",
"from chainer.datasets import TransformDataset\n\nx = np.arange(10)\nt = x * x - x\n\noriginal_dataset = TupleDataset(x, t)\n\ndef transform_function(in_data):\n x_i, t_i = in_data\n new_t_i = t_i + np.random.normal(0, 0.1)\n return x_i, new_t_i\n\ntransformed_dataset = TransformDataset(original_dataset, transform_function)\n\noriginal_dataset[:3]\n\n# Now Gaussian noise is added (in transform_function) to the original_dataset.\ntransformed_dataset[:3]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive2/text_classification/labs/rnn.ipynb
|
apache-2.0
|
[
"Recurrent Neural Networks (RNN) with Keras\nLearning Objectives\n\nAdd built-in RNN layers.\nBuild bidirectional RNNs.\nUsing CuDNN kernels when available.\nBuild a RNN model with nested input/output.\n\nIntroduction\nRecurrent neural networks (RNN) are a class of neural networks that is powerful for\nmodeling sequence data such as time series or natural language.\nSchematically, a RNN layer uses a for loop to iterate over the timesteps of a\nsequence, while maintaining an internal state that encodes information about the\ntimesteps it has seen so far.\nThe Keras RNN API is designed with a focus on:\n\n\nEase of use: the built-in keras.layers.RNN, keras.layers.LSTM,\nkeras.layers.GRU layers enable you to quickly build recurrent models without\nhaving to make difficult configuration choices.\n\n\nEase of customization: You can also define your own RNN cell layer (the inner\npart of the for loop) with custom behavior, and use it with the generic\nkeras.layers.RNN layer (the for loop itself). This allows you to quickly\nprototype different research ideas in a flexible way with minimal code.\n\n\nEach learning objective will correspond to a #TODO in the notebook where you will complete the notebook cell's code before running. Refer to the solution for reference.\nSetup",
"import numpy as np\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers",
"Built-in RNN layers: a simple example\nThere are three built-in RNN layers in Keras:\n\n\nkeras.layers.SimpleRNN, a fully-connected RNN where the output from previous\ntimestep is to be fed to next timestep.\n\n\nkeras.layers.GRU, first proposed in\nCho et al., 2014.\n\n\nkeras.layers.LSTM, first proposed in\nHochreiter & Schmidhuber, 1997.\n\n\nIn early 2015, Keras had the first reusable open-source Python implementations of LSTM\nand GRU.\nHere is a simple example of a Sequential model that processes sequences of integers,\nembeds each integer into a 64-dimensional vector, then processes the sequence of\nvectors using a LSTM layer.",
"model = keras.Sequential()\n# Add an Embedding layer expecting input vocab of size 1000, and\n# output embedding dimension of size 64.\nmodel.add(layers.Embedding(input_dim=1000, output_dim=64))\n\n# Add a LSTM layer with 128 internal units.\n# TODO -- your code goes here\n\n\n# Add a Dense layer with 10 units.\n# TODO -- your code goes here\n\n\nmodel.summary()",
"Built-in RNNs support a number of useful features:\n\nRecurrent dropout, via the dropout and recurrent_dropout arguments\nAbility to process an input sequence in reverse, via the go_backwards argument\nLoop unrolling (which can lead to a large speedup when processing short sequences on\nCPU), via the unroll argument\n...and more.\n\nFor more information, see the\nRNN API documentation.\nOutputs and states\nBy default, the output of a RNN layer contains a single vector per sample. This vector\nis the RNN cell output corresponding to the last timestep, containing information\nabout the entire input sequence. The shape of this output is (batch_size, units)\nwhere units corresponds to the units argument passed to the layer's constructor.\nA RNN layer can also return the entire sequence of outputs for each sample (one vector\nper timestep per sample), if you set return_sequences=True. The shape of this output\nis (batch_size, timesteps, units).",
"model = keras.Sequential()\nmodel.add(layers.Embedding(input_dim=1000, output_dim=64))\n\n# The output of GRU will be a 3D tensor of shape (batch_size, timesteps, 256)\nmodel.add(layers.GRU(256, return_sequences=True))\n\n# The output of SimpleRNN will be a 2D tensor of shape (batch_size, 128)\nmodel.add(layers.SimpleRNN(128))\n\nmodel.add(layers.Dense(10))\n\nmodel.summary()",
"In addition, a RNN layer can return its final internal state(s). The returned states\ncan be used to resume the RNN execution later, or\nto initialize another RNN.\nThis setting is commonly used in the\nencoder-decoder sequence-to-sequence model, where the encoder final state is used as\nthe initial state of the decoder.\nTo configure a RNN layer to return its internal state, set the return_state parameter\nto True when creating the layer. Note that LSTM has 2 state tensors, but GRU\nonly has one.\nTo configure the initial state of the layer, just call the layer with additional\nkeyword argument initial_state.\nNote that the shape of the state needs to match the unit size of the layer, like in the\nexample below.",
"encoder_vocab = 1000\ndecoder_vocab = 2000\n\nencoder_input = layers.Input(shape=(None,))\nencoder_embedded = layers.Embedding(input_dim=encoder_vocab, output_dim=64)(\n encoder_input\n)\n\n# Return states in addition to output\noutput, state_h, state_c = layers.LSTM(64, return_state=True, name=\"encoder\")(\n encoder_embedded\n)\nencoder_state = [state_h, state_c]\n\ndecoder_input = layers.Input(shape=(None,))\ndecoder_embedded = layers.Embedding(input_dim=decoder_vocab, output_dim=64)(\n decoder_input\n)\n\n# Pass the 2 states to a new LSTM layer, as initial state\ndecoder_output = layers.LSTM(64, name=\"decoder\")(\n decoder_embedded, initial_state=encoder_state\n)\noutput = layers.Dense(10)(decoder_output)\n\nmodel = keras.Model([encoder_input, decoder_input], output)\nmodel.summary()",
"RNN layers and RNN cells\nIn addition to the built-in RNN layers, the RNN API also provides cell-level APIs.\nUnlike RNN layers, which processes whole batches of input sequences, the RNN cell only\nprocesses a single timestep.\nThe cell is the inside of the for loop of a RNN layer. Wrapping a cell inside a\nkeras.layers.RNN layer gives you a layer capable of processing batches of\nsequences, e.g. RNN(LSTMCell(10)).\nMathematically, RNN(LSTMCell(10)) produces the same result as LSTM(10). In fact,\nthe implementation of this layer in TF v1.x was just creating the corresponding RNN\ncell and wrapping it in a RNN layer. However using the built-in GRU and LSTM\nlayers enable the use of CuDNN and you may see better performance.\nThere are three built-in RNN cells, each of them corresponding to the matching RNN\nlayer.\n\n\nkeras.layers.SimpleRNNCell corresponds to the SimpleRNN layer.\n\n\nkeras.layers.GRUCell corresponds to the GRU layer.\n\n\nkeras.layers.LSTMCell corresponds to the LSTM layer.\n\n\nThe cell abstraction, together with the generic keras.layers.RNN class, make it\nvery easy to implement custom RNN architectures for your research.\nCross-batch statefulness\nWhen processing very long sequences (possibly infinite), you may want to use the\npattern of cross-batch statefulness.\nNormally, the internal state of a RNN layer is reset every time it sees a new batch\n(i.e. every sample seen by the layer is assumed to be independent of the past). The\nlayer will only maintain a state while processing a given sample.\nIf you have very long sequences though, it is useful to break them into shorter\nsequences, and to feed these shorter sequences sequentially into a RNN layer without\nresetting the layer's state. That way, the layer can retain information about the\nentirety of the sequence, even though it's only seeing one sub-sequence at a time.\nYou can do this by setting stateful=True in the constructor.\nIf you have a sequence s = [t0, t1, ... t1546, t1547], you would split it into e.g.\ns1 = [t0, t1, ... t100]\ns2 = [t101, ... t201]\n...\ns16 = [t1501, ... t1547]\nThen you would process it via:\npython\nlstm_layer = layers.LSTM(64, stateful=True)\nfor s in sub_sequences:\n output = lstm_layer(s)\nWhen you want to clear the state, you can use layer.reset_states().\n\nNote: In this setup, sample i in a given batch is assumed to be the continuation of\nsample i in the previous batch. This means that all batches should contain the same\nnumber of samples (batch size). E.g. if a batch contains [sequence_A_from_t0_to_t100,\n sequence_B_from_t0_to_t100], the next batch should contain\n[sequence_A_from_t101_to_t200, sequence_B_from_t101_to_t200].\n\nHere is a complete example:",
"paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)\nparagraph2 = np.random.random((20, 10, 50)).astype(np.float32)\nparagraph3 = np.random.random((20, 10, 50)).astype(np.float32)\n\nlstm_layer = layers.LSTM(64, stateful=True)\noutput = lstm_layer(paragraph1)\noutput = lstm_layer(paragraph2)\noutput = lstm_layer(paragraph3)\n\n# reset_states() will reset the cached state to the original initial_state.\n# If no initial_state was provided, zero-states will be used by default.\n# TODO -- your code goes here\n",
"RNN State Reuse\n<a id=\"rnn_state_reuse\"></a>\nThe recorded states of the RNN layer are not included in the layer.weights(). If you\nwould like to reuse the state from a RNN layer, you can retrieve the states value by\nlayer.states and use it as the\ninitial state for a new layer via the Keras functional API like new_layer(inputs,\ninitial_state=layer.states), or model subclassing.\nPlease also note that sequential model might not be used in this case since it only\nsupports layers with single input and output, the extra input of initial state makes\nit impossible to use here.",
"paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)\nparagraph2 = np.random.random((20, 10, 50)).astype(np.float32)\nparagraph3 = np.random.random((20, 10, 50)).astype(np.float32)\n\nlstm_layer = layers.LSTM(64, stateful=True)\noutput = lstm_layer(paragraph1)\noutput = lstm_layer(paragraph2)\n\nexisting_state = lstm_layer.states\n\nnew_lstm_layer = layers.LSTM(64)\nnew_output = new_lstm_layer(paragraph3, initial_state=existing_state)\n",
"Bidirectional RNNs\nFor sequences other than time series (e.g. text), it is often the case that a RNN model\ncan perform better if it not only processes sequence from start to end, but also\nbackwards. For example, to predict the next word in a sentence, it is often useful to\nhave the context around the word, not only just the words that come before it.\nKeras provides an easy API for you to build such bidirectional RNNs: the\nkeras.layers.Bidirectional wrapper.",
"model = keras.Sequential()\n\n# Add Bidirectional layers\n# TODO -- your code goes here\n\n\nmodel.summary()",
"Under the hood, Bidirectional will copy the RNN layer passed in, and flip the\ngo_backwards field of the newly copied layer, so that it will process the inputs in\nreverse order.\nThe output of the Bidirectional RNN will be, by default, the concatenation of the forward layer\noutput and the backward layer output. If you need a different merging behavior, e.g.\nconcatenation, change the merge_mode parameter in the Bidirectional wrapper\nconstructor. For more details about Bidirectional, please check\nthe API docs.\nPerformance optimization and CuDNN kernels\nIn TensorFlow 2.0, the built-in LSTM and GRU layers have been updated to leverage CuDNN\nkernels by default when a GPU is available. With this change, the prior\nkeras.layers.CuDNNLSTM/CuDNNGRU layers have been deprecated, and you can build your\nmodel without worrying about the hardware it will run on.\nSince the CuDNN kernel is built with certain assumptions, this means the layer will\nnot be able to use the CuDNN kernel if you change the defaults of the built-in LSTM or\nGRU layers. E.g.:\n\nChanging the activation function from tanh to something else.\nChanging the recurrent_activation function from sigmoid to something else.\nUsing recurrent_dropout > 0.\nSetting unroll to True, which forces LSTM/GRU to decompose the inner\ntf.while_loop into an unrolled for loop.\nSetting use_bias to False.\nUsing masking when the input data is not strictly right padded (if the mask\ncorresponds to strictly right padded data, CuDNN can still be used. This is the most\ncommon case).\n\nFor the detailed list of constraints, please see the documentation for the\nLSTM and\nGRU layers.\nUsing CuDNN kernels when available\nLet's build a simple LSTM model to demonstrate the performance difference.\nWe'll use as input sequences the sequence of rows of MNIST digits (treating each row of\npixels as a timestep), and we'll predict the digit's label.",
"batch_size = 64\n# Each MNIST image batch is a tensor of shape (batch_size, 28, 28).\n# Each input sequence will be of size (28, 28) (height is treated like time).\ninput_dim = 28\n\nunits = 64\noutput_size = 10 # labels are from 0 to 9\n\n# Build the RNN model\ndef build_model(allow_cudnn_kernel=True):\n # CuDNN is only available at the layer level, and not at the cell level.\n # This means `LSTM(units)` will use the CuDNN kernel,\n # while RNN(LSTMCell(units)) will run on non-CuDNN kernel.\n if allow_cudnn_kernel:\n # The LSTM layer with default options uses CuDNN.\n lstm_layer = keras.layers.LSTM(units, input_shape=(None, input_dim))\n else:\n # Wrapping a LSTMCell in a RNN layer will not use CuDNN.\n lstm_layer = keras.layers.RNN(\n keras.layers.LSTMCell(units), input_shape=(None, input_dim)\n )\n model = keras.models.Sequential(\n [\n lstm_layer,\n keras.layers.BatchNormalization(),\n keras.layers.Dense(output_size),\n ]\n )\n return model\n",
"Let's load the MNIST dataset:",
"mnist = keras.datasets.mnist\n\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\nx_train, x_test = x_train / 255.0, x_test / 255.0\nsample, sample_label = x_train[0], y_train[0]",
"Let's create a model instance and train it.\nWe choose sparse_categorical_crossentropy as the loss function for the model. The\noutput of the model has shape of [batch_size, 10]. The target for the model is an\ninteger vector, each of the integer is in the range of 0 to 9.",
"model = build_model(allow_cudnn_kernel=True)\n\n# Compile the model\n# TODO -- your code goes here\n\n\nmodel.fit(\n x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1\n)",
"Now, let's compare to a model that does not use the CuDNN kernel:",
"noncudnn_model = build_model(allow_cudnn_kernel=False)\nnoncudnn_model.set_weights(model.get_weights())\nnoncudnn_model.compile(\n loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n optimizer=\"sgd\",\n metrics=[\"accuracy\"],\n)\nnoncudnn_model.fit(\n x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1\n)",
"When running on a machine with a NVIDIA GPU and CuDNN installed,\nthe model built with CuDNN is much faster to train compared to the\nmodel that uses the regular TensorFlow kernel.\nThe same CuDNN-enabled model can also be used to run inference in a CPU-only\nenvironment. The tf.device annotation below is just forcing the device placement.\nThe model will run on CPU by default if no GPU is available.\nYou simply don't have to worry about the hardware you're running on anymore. Isn't that\npretty cool?",
"import matplotlib.pyplot as plt\n\nwith tf.device(\"CPU:0\"):\n cpu_model = build_model(allow_cudnn_kernel=True)\n cpu_model.set_weights(model.get_weights())\n result = tf.argmax(cpu_model.predict_on_batch(tf.expand_dims(sample, 0)), axis=1)\n print(\n \"Predicted result is: %s, target result is: %s\" % (result.numpy(), sample_label)\n )\n plt.imshow(sample, cmap=plt.get_cmap(\"gray\"))",
"RNNs with list/dict inputs, or nested inputs\nNested structures allow implementers to include more information within a single\ntimestep. For example, a video frame could have audio and video input at the same\ntime. The data shape in this case could be:\n[batch, timestep, {\"video\": [height, width, channel], \"audio\": [frequency]}]\nIn another example, handwriting data could have both coordinates x and y for the\ncurrent position of the pen, as well as pressure information. So the data\nrepresentation could be:\n[batch, timestep, {\"location\": [x, y], \"pressure\": [force]}]\nThe following code provides an example of how to build a custom RNN cell that accepts\nsuch structured inputs.\nDefine a custom cell that supports nested input/output\nSee Making new Layers & Models via subclassing\nfor details on writing your own layers.",
"class NestedCell(keras.layers.Layer):\n def __init__(self, unit_1, unit_2, unit_3, **kwargs):\n self.unit_1 = unit_1\n self.unit_2 = unit_2\n self.unit_3 = unit_3\n self.state_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]\n self.output_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]\n super(NestedCell, self).__init__(**kwargs)\n\n def build(self, input_shapes):\n # expect input_shape to contain 2 items, [(batch, i1), (batch, i2, i3)]\n i1 = input_shapes[0][1]\n i2 = input_shapes[1][1]\n i3 = input_shapes[1][2]\n\n self.kernel_1 = self.add_weight(\n shape=(i1, self.unit_1), initializer=\"uniform\", name=\"kernel_1\"\n )\n self.kernel_2_3 = self.add_weight(\n shape=(i2, i3, self.unit_2, self.unit_3),\n initializer=\"uniform\",\n name=\"kernel_2_3\",\n )\n\n def call(self, inputs, states):\n # inputs should be in [(batch, input_1), (batch, input_2, input_3)]\n # state should be in shape [(batch, unit_1), (batch, unit_2, unit_3)]\n input_1, input_2 = tf.nest.flatten(inputs)\n s1, s2 = states\n\n output_1 = tf.matmul(input_1, self.kernel_1)\n output_2_3 = tf.einsum(\"bij,ijkl->bkl\", input_2, self.kernel_2_3)\n state_1 = s1 + output_1\n state_2_3 = s2 + output_2_3\n\n output = (output_1, output_2_3)\n new_states = (state_1, state_2_3)\n\n return output, new_states\n\n def get_config(self):\n return {\"unit_1\": self.unit_1, \"unit_2\": unit_2, \"unit_3\": self.unit_3}\n",
"Build a RNN model with nested input/output\nLet's build a Keras model that uses a keras.layers.RNN layer and the custom cell\nwe just defined.",
"unit_1 = 10\nunit_2 = 20\nunit_3 = 30\n\ni1 = 32\ni2 = 64\ni3 = 32\nbatch_size = 64\nnum_batches = 10\ntimestep = 50\n\ncell = NestedCell(unit_1, unit_2, unit_3)\nrnn = keras.layers.RNN(cell)\n\ninput_1 = keras.Input((None, i1))\ninput_2 = keras.Input((None, i2, i3))\n\noutputs = rnn((input_1, input_2))\n\nmodel = keras.models.Model([input_1, input_2], outputs)\n\nmodel.compile(optimizer=\"adam\", loss=\"mse\", metrics=[\"accuracy\"])",
"Train the model with randomly generated data\nSince there isn't a good candidate dataset for this model, we use random Numpy data for\ndemonstration.",
"input_1_data = np.random.random((batch_size * num_batches, timestep, i1))\ninput_2_data = np.random.random((batch_size * num_batches, timestep, i2, i3))\ntarget_1_data = np.random.random((batch_size * num_batches, unit_1))\ntarget_2_data = np.random.random((batch_size * num_batches, unit_2, unit_3))\ninput_data = [input_1_data, input_2_data]\ntarget_data = [target_1_data, target_2_data]\n\nmodel.fit(input_data, target_data, batch_size=batch_size)",
"With the Keras keras.layers.RNN layer, You are only expected to define the math\nlogic for individual step within the sequence, and the keras.layers.RNN layer\nwill handle the sequence iteration for you. It's an incredibly powerful way to quickly\nprototype new kinds of RNNs (e.g. a LSTM variant).\nFor more details, please visit the API docs."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
arkottke/pysra
|
examples/example-09.ipynb
|
mit
|
[
"Example 9: Quarter wavelength site amplification\nExample of quarter-wavelength site amplification and fitting a profile to a target crustal amplification.",
"import json\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nimport pysra\n\n%matplotlib inline\n\n# Increased figure sizes\nplt.rcParams[\"figure.dpi\"] = 120",
"Load the WNA profile from Campbell (2003).",
"with open(\"../tests/data/qwl_tests.json\") as fp:\n data = json.load(fp)[1]\n\nthickness = np.diff(data[\"site\"][\"depth\"])\n\nprofile = pysra.site.Profile()\nfor i, (thick, vel_shear, density) in enumerate(\n zip(thickness, data[\"site\"][\"velocity\"], data[\"site\"][\"density\"])\n):\n profile.append(\n pysra.site.Layer(\n pysra.site.SoilType(f\"{i}\", density * pysra.motion.GRAVITY),\n thick * 1000,\n vel_shear * 1000,\n )\n )\n\nprofile.update_layers(0)",
"Create simple point source motion",
"motion = pysra.motion.SourceTheoryRvtMotion(magnitude=6.5, distance=20, region=\"cena\")\nmotion.calc_fourier_amps(data[\"freqs\"])\n\ncalc = pysra.propagation.QuarterWaveLenCalculator(site_atten=0.04)\ninput_loc = profile.location(\"outcrop\", index=-1)\n\ncalc(motion, profile, input_loc)\n\nfig, ax = plt.subplots()\nax.plot(motion.freqs, calc.crustal_amp, label=\"Crustal Amp.\")\nax.plot(motion.freqs, calc.site_term, label=\"Site Term\")\nax.set(\n xlabel=\"Frequency (Hz)\",\n xscale=\"log\",\n ylabel=\"Amplitude\",\n yscale=\"linear\",\n)\nax.legend()\nfig.tight_layout();",
"The quarter-wavelength calculation is tested against the WNA and CENA crustal amplification models provided by Campbell (2003). The test of the CENA model passes, but the WNA model fails. Below is a comparison of the two crustal amplifications.",
"fig, ax = plt.subplots()\nax.plot(motion.freqs, calc.crustal_amp, label=\"Calculated\")\nax.plot(data[\"freqs\"], data[\"crustal_amp\"], label=\"Campbell (03)\")\nax.set(\n xlabel=\"Frequency (Hz)\",\n xscale=\"log\",\n ylabel=\"Amplitude\",\n yscale=\"linear\",\n)\nax.legend()\nfig.tight_layout();",
"Adjust the profile to match the target crustal amplification -- no consideration of the site attenuation paramater although this can also be done. First, the adjustment is only performed on the velocity. Second set of plots adjusts velocity and thickness.",
"for adjust_thickness in [False, True]:\n calc.fit(\n target_type=\"crustal_amp\",\n target=data[\"crustal_amp\"],\n adjust_thickness=adjust_thickness,\n )\n\n fig, ax = plt.subplots()\n ax.plot(motion.freqs, calc.crustal_amp, label=\"Calculated\")\n ax.plot(data[\"freqs\"], data[\"crustal_amp\"], label=\"Campbell (03)\")\n ax.set(\n xlabel=\"Frequency (Hz)\",\n xscale=\"log\",\n ylabel=\"Amplitude\",\n yscale=\"linear\",\n )\n ax.legend()\n fig.tight_layout()\n\n for yscale in [\"log\", \"linear\"]:\n fig, ax = plt.subplots()\n\n ax.plot(\n profile.initial_shear_vel,\n profile.depth,\n label=\"Initial\",\n drawstyle=\"steps-pre\",\n )\n ax.plot(\n calc.profile.initial_shear_vel,\n calc.profile.depth,\n label=\"Fit\",\n drawstyle=\"steps-pre\",\n )\n\n ax.legend()\n ax.set(\n xlabel=\"$V_s$ (m/s)\",\n xlim=(0, 3500),\n ylabel=\"Depth (m)\",\n ylim=(8000, 0.1),\n yscale=yscale,\n )\n fig.tight_layout();"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
daverick/alella
|
google cloud/google cloud with python/cloud.google.compute.instances.ipynb
|
gpl-3.0
|
[
"gcloud.google.compute.instances\nThis notebook create, delete start and stop instances in google compute cloud service",
"#imports\nimport googleapiclient.discovery\nfrom apiclient.discovery import build\nimport ipywidgets as widgets\nimport re\nimport json\n#we use pandas.DataFrame for printing arrays and dictionnary nicely\nfrom pandas import DataFrame",
"Load credentials",
"# %load getCredentialsFromFile.py\n\n\ndef getCredentials():\n from oauth2client import file\n import httplib2\n import ipywidgets as widgets\n print(\"Getting the credentials from file...\")\n storage = file.Storage(\"oauth2.dat\")\n credentials=storage.get()\n if credentials is None or credentials.invalid:\n print( '❗')\n display(widgets.Valid(\n value=False,\n description='Credentials are ',\n disabled=False))\n display(widgets.HTML('go create a credential valid file here: <a target=\"_blank\" href=\"cloud.google.auth.ipynb.ipynb\">gcloud authorization notebook</a> and try again'))\n else:\n http_auth = credentials.authorize(httplib2.Http())\n print('✅ Ok')\n return credentials\n\n\ncredentials=getCredentials()",
"Create services",
"compute_service = build('compute', 'v1', credentials=credentials)\nresource_service = build('cloudresourcemanager', 'v1', credentials=credentials)",
"Choose projectId and zone",
"# %load chooseProjectId.py\n#projectId is the variable that will contains the projectId that will be used in the API calls\nprojectId=None\n\n#list the existing projects \nprojects=resource_service.projects().list().execute()\n#we create a dictionaray name:projectId foe a dropdown list widget\nprojectsList={project['name']:project['projectId'] for project in projects['projects']}\nprojectsList['None']='invalid'\n\n#the dropdownlist widget\nprojectWidget=widgets.Dropdown(options=projectsList,description='Choose your Project',value='invalid')\n#a valid widget that get valid when a project is selected\nprojectIdValid=widgets.Valid(value=False,description='')\ndisplay(widgets.Box([projectWidget,projectIdValid]))\n\ndef projectValueChange(sender):\n if projectWidget.value!='invalid':\n #when a valid project is selected ,the gloabl variable projectId is set \n projectIdValid.value=True\n projectIdValid.description=projectWidget.value\n global projectId\n projectId=projectWidget.value \n else:\n projectIdValid.value=False\n projectIdValid.description=''\nprojectWidget.observe(projectValueChange, 'value')\n\n# %load chooseZone.py\n#zone is the variable that will contains the zone that will be used in the API calls\nzone=None\n\n#list the existing zones \nzones=compute_service.zones().list(project=projectId).execute()\n#list that will contains the zones for a dropdown list\nzonesList=[item['name'] for item in zones['items']]\nzonesList.append('none')\n\n#the dropdownlist widget\nzoneWidget=widgets.Dropdown(options=zonesList,value='none',description='Choose your Zone:')\nzoneValid=widgets.Valid(value=False,description='')\ndisplay(widgets.Box([zoneWidget,zoneValid]))\n\ndef zoneValueChange(sender):\n if zoneWidget.value!='none':\n #when a vail zone is slected, the variable zone is set\n zoneValid.value=True\n zoneValid.description=zoneWidget.value\n global zone\n zone=zoneWidget.value \n else:\n zoneValid.value=False\n zoneValid.description=''\n \nzoneWidget.observe(zoneValueChange, 'value') ",
"Create a new instance\n- choosing the disk image",
"image_response = compute_service.images().getFromFamily(\n project='debian-cloud', family='debian-8').execute()\nsource_disk_image = image_response['selfLink']",
"- choosing the machineType",
"machineType=None\n\nmachineTypes=compute_service.machineTypes().list(project=projectId,zone=zone).execute()\n\nmachineTypesList=[item['name'] for item in machineTypes['items']]\nmachineTypesList.append('none')\n\nmachineTypesWidget=widgets.Dropdown(options=machineTypesList,value='none',description='Choose your MachineType:')\nmachineTypesValid=widgets.Valid(value=False,description='')\ndisplay(widgets.Box([machineTypesWidget,machineTypesValid]))\n\ndef machineTypeValueChange(sender):\n if machineTypesWidget.value!='none':\n machineTypesValid.value=True\n machineTypesValid.description=machineTypesWidget.value\n global machineType\n machineType=machineTypesWidget.value \n else:\n machineTypesValid.value=True\n machineTypesValid.description=''\n\nmachineTypesWidget.observe(machineTypeValueChange, 'value') ",
"- choose an instance name",
"instanceName=None\n\n# instanceName have to validates this regexp\ninstanceNameControl=re.compile(r'^(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?)$')\n\n#the widgets\ninstanceNameWidget=widgets.Text(description=\"Name for the new instance:\")\nvalid=widgets.Valid(value=False,description='',disabled=False)\ndisplay(widgets.Box([instanceNameWidget,valid]))\n\ndef instanceNameValueChange(sender):\n if instanceNameWidget.value!=\"\":\n if instanceNameControl.match(instanceNameWidget.value):\n #when the entered text valid the regexp we set the \n valid.value=True\n valid.description='OK'\n global instanceName\n instanceName=instanceNameWidget.value\n else:\n valid.value=False\n valid.description=\"The instance name has to verify the regexp '(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?)'\"\n else:\n valid.value=False\n valid.description=''\n\n\ninstanceNameWidget.observe(instanceNameValueChange, 'value')",
"- creating the config for the new instance\nThe name, machineType and disk are set accordingly to the previous steps.\nWith scheduling.preemptible to true we choose a preemptible instance (cheaper ;-) )\nYou can adjust labels to your needs.",
"config= {'name':instanceName,\n 'machineType': \"zones/%(zone)s/machineTypes/%(machineType)s\" %{'zone':zone,'machineType':machineType},\n 'disks':[ \n \n {\n 'boot':True,\n 'autoDelete':True,\n 'initializeParams':{\n 'sourceImage':source_disk_image\n }\n }],\n 'scheduling':\n {\n 'preemptible': True\n },\n 'networkInterfaces':[{\n 'network':'global/networks/default',\n 'accessConfigs': [\n {'type':'ONE_TO_ONE_NAT','name':'ExternalNAT'}\n ]\n }],\n 'serviceAccounts':[{\n 'email':'default',\n 'scopes':[\n 'https://www.googleapis.com/auth/devstorage.read_write',\n 'https://www.googleapis.com/auth/logging.write'\n ]\n }],\n \"labels\": {\n \"env\": \"test\",\n \"created-by\": \"jupyter-notebooks-cloud-google-compute-instances\"\n }, \n }\n\n#print(json.dumps(config, indent=2))",
"- executing the API call",
"#a progress widget will present the progress of the operation\nprogress=widgets.IntProgress(value=0,min=0,max=3,step=1,description=':',bar_style='warning')\ndisplay(progress)\n\n#executing the insert operation\noperation = compute_service.instances().insert(project=projectId,\n zone=zone,\n body=config\n ).execute()\n\ndef updateProgress(result,progress=progress):\n #updating the progress widget with the result of the operation\n if result['status']== 'PENDING':\n progress.value=1\n progress.bar_style='warning'\n progress.description=result['status']\n elif result['status']== 'RUNNING':\n progress.value=2\n progress.bar_style='info'\n progress.description=result['status']\n elif result['status']== 'DONE':\n progress.value=3\n if 'error' in result: \n progress.description='Error'\n progress.bar_style='danger'\n else:\n progress.description=result['status']\n progress.bar_style='success'\n\nimport time \n\n#repeat until the result is DONE\nwhile True:\n #obtain the status of the operation\n result=compute_service.zoneOperations().get(project=projectId,\n zone=zone,\n operation=operation['name']).execute()\n updateProgress(result)\n if result['status']== 'DONE':\n break\n time.sleep(.25) \n",
"Listing the instance and their status",
"result = compute_service.instances().list(project=projectId, zone=zone).execute()\n\nif 'items' in result.keys():\n display(DataFrame.from_dict({instance['name']:(instance['status'],'✅'if instance['status']=='RUNNING' else '✖'if instance['status']=='TERMINATED' else '❓')for instance in result['items']},orient='index'))\nelse:\n print(\"No instance found.\")\n ",
"Start/Stop/Delete instances",
"# getting the current instances list\ninstances=compute_service.instances().list(project=projectId,zone=zone).execute()\ninstancesList=[item['name'] for item in instances['items']]\n# none is added for the dropdownlist\ninstancesList.append('none')\n\n#building and displaying the widgets\ninstancesWidget=widgets.Dropdown(options=instancesList,value='none')\ninstancesValid=widgets.Valid(value=False,description='')\ninstanceAction=widgets.RadioButtons(\n options=[ 'Status','Start','Stop', 'Delete'],value='Status')\ninstanceExecute=widgets.ToggleButton(value=False,description='Execute',disabled=True)\ndisplay(widgets.Box([instancesWidget,instancesValid,instanceAction,instanceExecute]))\n\n## execute an operation. \ndef execute(operation):\n #exctract the method and the instancename form the operation\n instanceName=operation.uri.split('?')[0].split('/')[-1]\n methodId=operation.methodId.split('.')[-1]\n \n #some widgets (action + instance + progress)\n progress=widgets.IntProgress(value=0,min=0,max=3,step=1,description=':',bar_style='info')\n display(widgets.Box([widgets.Label(value=methodId+\"ing\"),widgets.Label(value=instanceName),progress]))\n \n #the dropdown and buttons are disabled when an operation is executing\n global instanceExecute\n global instancesWidget\n instancesWidget.disabled=True\n instanceExecute.disabled=True\n \n #execute the operation\n operation=operation.execute()\n #until the operation is not DONE, we update the progress bar\n while True:\n result=compute_service.zoneOperations().get(project=projectId,\n zone=zone,\n operation=operation['name']).execute()\n updateProgress(result,progress)\n if result['status']== 'DONE':\n if methodId==u'delete':\n #when the instance is deleted, it has to be remove from the dropdownlist\n global instancesList \n instancesList.remove(instanceName)\n instancesWidget.options=instancesList\n instancesValid.value=False\n #the operation is completed, the dropwdown and buttons are enabled\n instancesWidget.disabled=False\n instanceExecute.disabled=False\n break\n time.sleep(0.1) \n \n\n\ndef executeInstance(sender):\n #callback when the execute button is clicked \n if instancesValid.value==True:\n # the correct operation is created and pass to the execute method\n if instanceAction.value=='Stop':\n execute(compute_service.instances().stop(project=projectId,\n zone=zone,\n instance=instancesWidget.value\n ))\n elif instanceAction.value=='Start':\n execute(compute_service.instances().start(project=projectId,\n zone=zone,\n instance=instancesWidget.value\n ))\n elif instanceAction.value=='Delete':\n execute(compute_service.instances().delete(project=projectId,\n zone=zone,\n instance=instancesWidget.value\n ))\n elif instanceAction.value=='Status': \n instance=compute_service.instances().get(project=projectId,\n zone=zone,\n instance=instancesWidget.value).execute()\n display(widgets.Box([widgets.Label(value=instance['name']),\n widgets.Label(value=instance['status'])\n ]))\ndef instancesValueChange(sender):\n #callback when an element is selected in the dropdown list \n if instancesWidget.value!=None:\n #when the seleciton is correct the valid widget is valid\n instancesValid.value=True\n instanceExecute.disabled=False\n\n#set up the callback on the widgets \ninstancesWidget.observe(instancesValueChange, 'value') \ninstanceExecute.observe(executeInstance,'value')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
egillanton/Udacity-SDCND
|
1. Computer Vision and Deep Learning/P1 Finding Lane Lines on the Road/P1.ipynb
|
mit
|
[
"Self-Driving Car Engineer Nanodegree\nProject 1. Finding Lane Lines on the Road\n\nIn this project, I used the skills that I learned in the second lecture to identify lane lines on the road.\nI developed a pipeline on a series of individual images, and later applied the result to tewo video stream (really just a series of images).\nThen I complete the optional challenge by implementing my version of daw_lines() function.\nImport Packages",
"#importing some useful packages\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nimport cv2\nimport math\n%matplotlib inline",
"Global Variables",
"# Global Variables\n# Region-of-interest ofsets\ntop_x_offset = 0.45\ntop_y_offset = 0.60\nbottom_x_offset = 0.07\n\n# Stores the left and right lines from an image.\n# Notice, need to clear before using it on new set of images (video).\nright_lines = []\nleft_lines = []\n",
"Helper Functions",
"import math\n\n# Global Variables\n# Region-of-interest ofsets\ntop_x_offset = 0.45\ntop_y_offset = 0.60\nbottom_x_offset = 0.07\n\n# Stores the left and right lines from an image.\n# Notice, need to clear before using it on new set of images (video).\nright_lines = []\nleft_lines = []\n\ndef grayscale(img):\n \"\"\"Applies the Grayscale transform\n This will return an image with only one color channel\n but NOTE: to see the returned image as grayscale\n (assuming your grayscaled image is called 'gray')\n you should call plt.imshow(gray, cmap='gray')\"\"\"\n return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n # Or use BGR2GRAY if you read an image with cv2.imread()\n # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n \ndef canny(img, low_threshold, high_threshold):\n \"\"\"Applies the Canny transform\"\"\"\n return cv2.Canny(img, low_threshold, high_threshold)\n\ndef gaussian_blur(img, kernel_size):\n \"\"\"Applies a Gaussian Noise kernel\"\"\"\n return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)\n\ndef region_of_interest(img, vertices):\n \"\"\"\n Applies an image mask.\n \n Only keeps the region of the image defined by the polygon\n formed from `vertices`. The rest of the image is set to black.\n \"\"\"\n #defining a blank mask to start with\n mask = np.zeros_like(img) \n \n #defining a 3 channel or 1 channel color to fill the mask with depending on the input image\n if len(img.shape) > 2:\n channel_count = img.shape[2] # i.e. 3 or 4 depending on your image\n ignore_mask_color = (255,) * channel_count\n else:\n ignore_mask_color = 255\n \n #filling pixels inside the polygon defined by \"vertices\" with the fill color \n cv2.fillPoly(mask, vertices, ignore_mask_color)\n \n #returning the image only where mask pixels are nonzero\n masked_image = cv2.bitwise_and(img, mask)\n return masked_image\n\n\ndef draw_lines(img, lines, color=[255, 0, 0], thickness=2):\n for line in lines:\n for x1,y1,x2,y2 in line:\n cv2.line(img, (x1, y1), (x2, y2), color, thickness)\n\ndef hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):\n \"\"\"\n `img` should be the output of a Canny transform.\n \n Returns an image with hough lines drawn.\n \"\"\"\n lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)\n line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)\n draw_lines(line_img, lines)\n # draw_lines2(line_img, lines)\n return line_img\n\n# Python 3 has support for cool math symbols.\n\ndef weighted_img(img, initial_img, α=0.8, β=1., λ=0.):\n \"\"\"\n `img` is the output of the hough_lines(), An image with lines drawn on it.\n Should be a blank image (all black) with lines drawn on it.\n \n `initial_img` should be the image before any processing.\n \n The result image is computed as follows:\n \n initial_img * α + img * β + λ\n NOTE: initial_img and img must be the same shape!\n \"\"\"\n return cv2.addWeighted(initial_img, α, img, β, λ)\n\ndef display_image(img, title):\n plt.imshow(img, cmap='gray')\n plt.suptitle(title)\n plt.show()\n \ndef draw_lines2(img, lines, color=[255, 0, 0], thickness=2):\n \"\"\"\n This is my implementation of draw_lines() function \n \"\"\"\n threshold = 0.5\n for line in lines:\n for x1,y1,x2,y2 in line:\n # Advoid divided by 0 exception\n if x1 == x2:\n continue\n \n line_slope = ((y2-y1)/(x2-x1))\n \n # only accept slopes >= threshold\n if abs(line_slope) < threshold:\n continue\n \n if (line_slope >= threshold ): # Then its Right side\n right_lines.append(line[0])\n elif 
(line_slope < -threshold ): # Then its Left side\n left_lines.append(line[0])\n \n # Calculate Extrapolation based least-squares curve-fitting calculations of all the previous points\n # Good link:\n # https://ece.uwaterloo.ca/~dwharder/NumericalAnalysis/06LeastSquares/extrapolation/complete.html\n right_lines_x = [x1 for x1, y1, x2, y2 in right_lines] + [x2 for x1, y1, x2, y2 in right_lines]\n right_lines_y = [y1 for x1, y1, x2, y2 in right_lines] + [y2 for x1, y1, x2, y2 in right_lines]\n\n # Calculate the slope (m) and the intercept (b), They are kept the same\n # y = m*x + b \n right_m = 1\n right_b = 1\n if right_lines_x:\n right_m, right_b = np.polyfit(right_lines_x, right_lines_y, 1) # y = m*x + b\n\n # collect left lines x and y sets for least-squares curve-fitting calculating\n left_lines_x = [x1 for x1, y1, x2, y2 in left_lines] + [x2 for x1, y1, x2, y2 in left_lines]\n left_lines_y = [y1 for x1, y1, x2, y2 in left_lines] + [y2 for x1, y1, x2, y2 in left_lines]\n\n # Calculate the slope (m) and the intercept (b), They are kept the same\n # y = m*x + b \n left_m = 1 \n left_b = 1\n if left_lines_x:\n left_m, left_b = np.polyfit(left_lines_x, left_lines_y, 1) \n \n # Calculate the y values\n y_size = img.shape[0]\n y1 = y_size\n y2 = int(y_size*top_y_offset)\n\n # Calculate the 4 points x values\n right_x1 = int((y1-right_b)/right_m)\n right_x2 = int((y2-right_b)/right_m)\n\n left_x1 = int((y1-left_b)/left_m)\n left_x2 = int((y2-left_b)/left_m)\n\n # Graph the lines\n if right_lines_x:\n cv2.line(img, (right_x1, y1), (right_x2, y2), [255,0,0], 5)\n if left_lines_x:\n cv2.line(img, (left_x1, y1), (left_x2, y2), [255,0,0], 5)",
"Test Images\nBuild your pipeline to work on the images in the directory \"test_images\"\nYou should make sure your pipeline works well on these images before you try the videos.\nBuild the pipeline and run your solution on all test_images. Make copies into the test_images_output directory, and you can use the images in your writeup report.\nTry tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.",
"import os\nos.listdir(\"test_images/\")\n\ndef process_image(image, display_images=False, export_images=False):\n \n # Step 1. Convert the image to grayscale.\n gray_image = grayscale(image)\n \n # Step 2. Blur using Gaussian smoothing / blurring\n # kernel_size = 5\n blurred_image = gaussian_blur(gray_image, 5)\n \n # Step 3. Use Canny Edge Detection to get a image of edges\n # low_threshold = 50\n # high_threshold = 150)\n edged_image = canny(blurred_image, 50, 150)\n \n \n # Step 4. Mask with a trapozoid the area of interest\n y_size = image.shape[0]\n x_size = image.shape[1]\n \n tx = int(x_size * top_x_offset)\n bx = int(x_size * bottom_x_offset)\n ty = int(y_size * top_y_offset)\n \n vertices = np.array( [[\n (bx, y_size),# Bottom Left\n (tx, ty), # Top Left\n (x_size - tx, ty), # Top Right\n (x_size - bx, y_size) # Bottom Right\n ]], dtype=np.int32 )\n\n roi_img = region_of_interest(edged_image, vertices)\n\n # Step 5. Run Hough Transformation on masked edge detected image\n houghed_image = hough_lines(roi_img, 1, np.pi/180, 40, 30, 200)\n\n # Step 6. Draw the lines on the original image\n final_image = weighted_img(houghed_image, image)\n \n if display_images:\n display_image(image, \"Original Image\")\n display_image(gray_image, \"Grayscale Image\")\n display_image(blurred_image, \"Gaussian Blured Image\")\n display_image(edged_image, \"Canny Edge Detectioned Image\")\n display_image(roi_img, \"Region of Interest Mapped Image\")\n display_image(houghed_image, \"Hough Transformed Image\")\n display_image(final_image, \"Final image\")\n \n if export_images:\n mpimg.imsave(\"display_images_output\" + \"/\" + \"original_image.jpg\", image, cmap='gray')\n mpimg.imsave(\"display_images_output\" + \"/\" + \"gray_image.jpg\", gray_image, cmap='gray')\n mpimg.imsave(\"display_images_output\" + \"/\" + \"blurred_image.jpg\", blurred_image, cmap='gray')\n mpimg.imsave(\"display_images_output\" + \"/\" + \"edged_image.jpg\", edged_image, cmap='gray')\n mpimg.imsave(\"display_images_output\" + \"/\" + \"roi_img.jpg\", roi_img, cmap='gray')\n mpimg.imsave(\"display_images_output\" + \"/\" + \"houghed_image.jpg\", houghed_image, cmap='gray')\n mpimg.imsave(\"display_images_output\" + \"/\" + \"final_image.jpg\", final_image, cmap='gray')\n\n\n return final_image\n\nfinal_image = process_image(mpimg.imread('test_images/solidWhiteRight.jpg'), display_images=True, export_images=True)\n\nin_directory = \"test_images\"\n# Create a corresponding output directory\nout_directory = \"test_images_out\"\nif not os.path.exists(out_directory):\n os.makedirs(out_directory)\n\n# Get all images in input directory and store their names\nimageNames = os.listdir(in_directory + \"/\")\nfor imageName in imageNames:\n image = mpimg.imread(in_directory + \"/\" + imageName)\n # Apply my Lane Finding Image Processing Algorithm on each image\n resultImage = process_image(image)\n # Save the result in the output directory\n mpimg.imsave(out_directory + \"/\" + imageName, resultImage)",
"Test on Videos\nWe can test our solution on two provided videos:\nsolidWhiteRight.mp4\nsolidYellowLeft.mp4",
"# Import everything needed to edit/save/watch video clips\nfrom moviepy.editor import VideoFileClip\nfrom IPython.display import HTML\nimport imageio\nimageio.plugins.ffmpeg.download()\n\nright_lines.clear()\nleft_lines.clear()\nwhite_output = 'test_videos_output/solidWhiteRight.mp4'\nclip1 = VideoFileClip(\"test_videos/solidWhiteRight.mp4\")\nwhite_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!\n%time white_clip.write_videofile(white_output, audio=False)\n\nHTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(white_output))",
"Improve the draw_lines() function\nAt this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video \"P1_example.mp4\".\nGo back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.\nNow for the one with the solid yellow lane on the left. This one's more tricky!",
"right_lines.clear()\nleft_lines.clear()\nyellow_output = 'test_videos_output/solidYellowLeft.mp4'\nclip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')\nyellow_clip = clip2.fl_image(process_image)\n%time yellow_clip.write_videofile(yellow_output, audio=False)\n\nHTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(yellow_output))",
"Optional Challenge\nI modified my pipeline so it works with this video and submited it along with the rest of my project!",
"right_lines.clear()\nleft_lines.clear()\nchallenge_output = 'test_videos_output/challenge.mp4'\nclip3 = VideoFileClip('test_videos/challenge.mp4')\nchallenge_clip = clip3.fl_image(process_image)\n%time challenge_clip.write_videofile(challenge_output, audio=False)\n\nHTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(challenge_output))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sysid/kg
|
fish/fish_explore.ipynb
|
mit
|
[
"A simple exploration notebook to get some insights about the data.\nAs per NDA, sample photos are confidential and also it says you cannot disclose confidential information without written consent from the Sponsors. More about NDA on this forum post. Thank you Alan for pointing it out to me.\nSo here is the revised version of the exploration notebook where the animation part is commented. \nPlease uncomment the Animation part of the notebook and then run it in the local for animation\nObjective:\nIn this competition, The Nature Conservancy asks you to help them detect which species of fish appears on a fishing boat, based on images captured from boat cameras of various angles. \nYour goal is to predict the likelihood of fish species in each picture.\nAs mentioned in the data page, there are eight target categories available in the dataset.\n\nAlbacore tuna\nBigeye tuna\nYellowfin tuna\nMahi Mahi\nOpah\nSharks\nOther (meaning that there are fish present but not in the above categories)\nNo Fish (meaning that no fish is in the picture)\n\nImportant points to note:\n\nPre-trained models and external data are allowed in the competition, but need to be posted on this official forum thread\nThe competition comprises of two stages. Test data for second stage will be released in the last week. \n\nFirst let us see the number of image files present for each of the species",
"# This Python 3 environment comes with many helpful analytics libraries installed\n# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python\n# For example, here's several helpful packages to load in \n\nimport numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\nfrom scipy.misc import imread\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n\nfrom subprocess import check_output\nprint(check_output([\"ls\", \"../input/train/\"]).decode(\"utf8\"))",
"So there are 8 folders present inside the train folder, one for each species.\nNow let us check the number of files present in each of these sub folders.",
"sub_folders = check_output([\"ls\", \"../input/train/\"]).decode(\"utf8\").strip().split('\\n')\ncount_dict = {}\nfor sub_folder in sub_folders:\n num_of_files = len(check_output([\"ls\", \"../input/train/\"+sub_folder]).decode(\"utf8\").strip().split('\\n'))\n print(\"Number of files for the species\",sub_folder,\":\",num_of_files)\n count_dict[sub_folder] = num_of_files\n \nplt.figure(figsize=(12,4))\nsns.barplot(list(count_dict.keys()), list(count_dict.values()), alpha=0.8)\nplt.xlabel('Fish Species', fontsize=12)\nplt.ylabel('Number of Images', fontsize=12)\nplt.show()\n ",
"So the number of files for species ALB (Albacore tuna) is much higher than other species. \nLet us look at the number of files present in the test folder.",
"num_test_files = len(check_output([\"ls\", \"../input/test_stg1/\"]).decode(\"utf8\").strip().split('\\n'))\nprint(\"Number of test files present :\", num_test_files)",
"Image Size:\nNow let us look at the image size of each of the files and see what different sizes are available.",
"train_path = \"../input/train/\"\nsub_folders = check_output([\"ls\", train_path]).decode(\"utf8\").strip().split('\\n')\ndifferent_file_sizes = {}\nfor sub_folder in sub_folders:\n file_names = check_output([\"ls\", train_path+sub_folder]).decode(\"utf8\").strip().split('\\n')\n for file_name in file_names:\n im_array = imread(train_path+sub_folder+\"/\"+file_name)\n size = \"_\".join(map(str,list(im_array.shape)))\n different_file_sizes[size] = different_file_sizes.get(size,0) + 1\n\nplt.figure(figsize=(12,4))\nsns.barplot(list(different_file_sizes.keys()), list(different_file_sizes.values()), alpha=0.8)\nplt.xlabel('Image size', fontsize=12)\nplt.ylabel('Number of Images', fontsize=12)\nplt.title(\"Image size present in train dataset\")\nplt.xticks(rotation='vertical')\nplt.show()",
"So 720_1280_3 is the most common image size available in the train data and 10 different sizes are available. \n720_1244_3 is the smallest size of the available images in train set and 974_1732_3 is the largest one.\nNow let us look at the distribution in test dataset as well.",
"test_path = \"../input/test_stg1/\"\nfile_names = check_output([\"ls\", test_path]).decode(\"utf8\").strip().split('\\n')\ndifferent_file_sizes = {}\nfor file_name in file_names:\n size = \"_\".join(map(str,list(imread(test_path+file_name).shape)))\n different_file_sizes[size] = different_file_sizes.get(size,0) + 1\n\nplt.figure(figsize=(12,4))\nsns.barplot(list(different_file_sizes.keys()), list(different_file_sizes.values()), alpha=0.8)\nplt.xlabel('File size', fontsize=12)\nplt.ylabel('Number of Images', fontsize=12)\nplt.xticks(rotation='vertical')\nplt.title(\"Image size present in test dataset\")\nplt.show()",
"Test set also has a very similar distribution.\nAnimation:\nLet us try to have some animation on the available images. Not able to embed the video in the notebook.\nPlease uncomment the following part of the code and run it in local for animation",
"\"\"\"\nimport random\nimport matplotlib.animation as animation\nfrom matplotlib import animation, rc\nfrom IPython.display import HTML\n\nrandom.seed(12345)\ntrain_path = \"../input/train/\"\nsub_folders = check_output([\"ls\", train_path]).decode(\"utf8\").strip().split('\\n')\ndifferent_file_sizes = {}\nall_files = []\nfor sub_folder in sub_folders:\n file_names = check_output([\"ls\", train_path+sub_folder]).decode(\"utf8\").strip().split('\\n')\n selected_files = random.sample(file_names, 10)\n for file_name in selected_files:\n all_files.append([sub_folder,file_name])\n\nfig = plt.figure()\nsns.set_style(\"whitegrid\", {'axes.grid' : False})\nimg_file = \"\".join([train_path, sub_folder, \"/\", file_name])\nim = plt.imshow(imread(img_file), vmin=0, vmax=255)\n\ndef updatefig(ind):\n sub_folder = all_files[ind][0]\n file_name = all_files[ind][1]\n img_file = \"\".join([train_path, sub_folder, \"/\", file_name])\n im.set_array(imread(img_file))\n plt.title(\"Species : \"+sub_folder, fontsize=15)\n return im,\n\nani = animation.FuncAnimation(fig, updatefig, frames=len(all_files))\nani.save('lb.gif', fps=1, writer='imagemagick')\n#rc('animation', html='html5')\n#HTML(ani.to_html5_video())\nplt.show()\n\"\"\"",
"Basic CNN Model using Keras:\nNow let us try to build a CNN model on the dataset. Due to the memory constraints of the kernels, let us take only (500,500,3) array from top left corner of each image and then try to classify based on that portion.\nKindly note that running it offline with the full image will give much better results. This is just a started script I tried and I am a newbie for image classification problems.",
"import random\nfrom subprocess import check_output\nfrom scipy.misc import imread\nimport numpy as np\nnp.random.seed(2016)\nfrom keras.datasets import mnist\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout, Activation, Flatten\nfrom keras.layers import Convolution2D, MaxPooling2D\nfrom keras.utils import np_utils\nfrom keras import backend as K\n\nbatch_size = 1\nnb_classes = 8\nnb_epoch = 1\n\nimg_rows, img_cols, img_rgb = 500, 500, 3\nnb_filters = 4\npool_size = (2, 2)\nkernel_size = (3, 3)\ninput_shape = (img_rows, img_cols, 3)\n\nspecies_map_dict = {\n'ALB':0,\n'BET':1,\n'DOL':2,\n'LAG':3,\n'NoF':4,\n'OTHER':5,\n'SHARK':6,\n'YFT':7\n}\n\ndef batch_generator_train(sample_size):\n\ttrain_path = \"../input/train/\"\n\tall_files = []\n\ty_values = []\n\tsub_folders = check_output([\"ls\", train_path]).decode(\"utf8\").strip().split('\\n')\n\tfor sub_folder in sub_folders:\n\t\tfile_names = check_output([\"ls\", train_path+sub_folder]).decode(\"utf8\").strip().split('\\n')\n\t\tfor file_name in file_names:\n\t\t\tall_files.append([sub_folder, '/', file_name])\n\t\t\ty_values.append(species_map_dict[sub_folder])\n\tnumber_of_images = range(len(all_files))\n\n\tcounter = 0\n\twhile True:\n\t\timage_index = random.choice(number_of_images)\n\t\tfile_name = \"\".join([train_path] + all_files[image_index])\n\t\tprint(file_name)\n\t\ty = [0]*8\n\t\ty[y_values[image_index]] = 1\n\t\ty = np.array(y).reshape(1,8)\n\t\t\n\t\tim_array = imread(file_name)\n\t\tX = np.zeros([1, img_rows, img_cols, img_rgb])\n\t\t#X[:im_array.shape[0], :im_array.shape[1], 3] = im_array.copy().astype('float32')\n\t\tX[0, :, :, :] = im_array[:500,:500,:].astype('float32')\n\t\tX /= 255.\n \n\t\tprint(X.shape)\n\t\tyield X,y\n\t\t\n\t\tcounter += 1\n\t\t#if counter == sample_size:\n\t\t#\tbreak\n\ndef batch_generator_test(all_files):\n\tfor file_name in all_files:\n\t\tfile_name = test_path + file_name\n\t\t\n\t\tim_array = imread(file_name)\n\t\tX = np.zeros([1, img_rows, img_cols, img_rgb])\n\t\tX[0,:, :, :] = im_array[:500,:500,:].astype('float32')\n\t\tX /= 255.\n\n\t\tyield X\n\n\ndef keras_cnn_model():\n\tmodel = Sequential()\n\tmodel.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],\n border_mode='valid',\n input_shape=input_shape))\n\tmodel.add(Activation('relu'))\n\tmodel.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1]))\n\tmodel.add(Activation('relu'))\n\tmodel.add(MaxPooling2D(pool_size=pool_size))\n\tmodel.add(Dropout(0.25))\t\n\tmodel.add(Flatten())\n\tmodel.add(Dense(128))\n\tmodel.add(Activation('relu'))\n\tmodel.add(Dropout(0.5))\n\tmodel.add(Dense(nb_classes))\n\tmodel.add(Activation('softmax'))\n\tmodel.compile(loss='categorical_crossentropy', optimizer='adadelta')\n\treturn model\n\nmodel = keras_cnn_model()\nfit= model.fit_generator(\n\tgenerator = batch_generator_train(100),\n\tnb_epoch = 1,\n\tsamples_per_epoch = 100\n)\n\ntest_path = \"../input/test_stg1/\"\nall_files = []\nfile_names = check_output([\"ls\", test_path]).decode(\"utf8\").strip().split('\\n')\nfor file_name in file_names:\n\tall_files.append(file_name)\n#preds = model.predict_generator(generator=batch_generator_test(all_files), val_samples=len(all_files))\n\n#out_df = pd.DataFrame(preds)\n#out_df.columns = ['ALB', 'BET', 'DOL', 'LAG', 'NoF', 'OTHER', 'SHARK', 'YFT']\n#out_df['image'] = all_files\n#out_df.to_csv(\"sample_sub_keras.csv\", index=False)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
terencezl/scientific-python-walkabout
|
scientific-python-walkabout.ipynb
|
mit
|
[
"Scientific Python Walkabout\nTo use the most up-to-date version of this notebook, go to a safe and quite directory and type in the command line interface\ngit clone https://github.com/terencezl/scientific-python-walkabout\nand\nipython notebook\nand click your way into it.\n\nOverview\nWe are going to get to know the scientific python stack (only a little bit):\n\nNumPy (data analysis foundation, your pythonic \"MATLAB\")\nSciPy (more functionalities, e.g. integration, optimization, Fourier transforms, signal processing, linear algebra)\nmatplotlib (2D plotting tool, some 3D capabilities)\npandas (statistics, pretty viewing and flexible in/output, your pythonic \"R\")\n(Optional) SymPy (symbolic computation, your pythonic \"Mathematica\")\n\nInstallation\nIf you are using MacOS, your system has already come with a distrubution of Python (but it is usually older than the up-to-date version, and is difficult to update). There are also some other ways, such as downloading the official Python installation package, along with Windows Users (but it is also difficult to update when it gets old), or using Homebrew, a convenient package manager in MacOS. Linux users can just rely on the system pakage manager.\nBut all of the above still require the additional installation of Python packages (NumPy, SciPy, matplotlib, pandas, etc.) that come on top of Python itself, the very packages that make Python a powerful and versatile language. There have been more integrated solutions, among which Anaconda scientific Python distribution is what the scientific community is having a great time with. If you install Anaconda, it directly comes with the full scientific stack ready, and provides a very consistent way of updating the Python packages and even Python itself.\nGo to the webpage and select what suits your OS. Let's select \"I want Python 3\" as well.\nIPython Configuration Custom Setup\nGo to the command line interface and call \nipython locate\nIt will return a directory. Enter that directory and find profile_default/ipythonrc.py. If there is not, create one, and copy the content below into it. You can of course modify it to your need, and please remember the existence of this config file, in case you want to change it afterwards.\n```python\nimport os\nimport sys\nimport NumPy\ntry:\n import numpy as np\n print(\"NumPy is imported.\")\nexcept ImportError:\n print(\"NumPy is not imported!\")\nimport matplotlib\ntry:\n import matplotlib as mpl\n print(\"matplotlib is imported.\")\n import matplotlib.pyplot as plt\n # turn on interactive mode\n plt.ion()\n print(\"Using matplotlib interactive mode.\")\n # try using custom style for prettier looks\n try:\n plt.style.use('ggplot')\n print(\"Using custom ggplot style from matplotlib 1.4.\")\n except ValueError:\n print(\"If matplotlib >= 1.4 is installed, styles will be used for better looks.\")\nexcept ImportError:\n print(\"matplotlib is not imported!\")\nimport pandas\ntry:\n import pandas as pd\n print(\"pandas is imported.\")\nexcept ImportError:\n print(\"pandas is not imported!\")\n```\nBasic Python\n\nTutorial in the official docs\n\nSome rehash",
"# copying a referecne vs copying as a new list\n\n# copying a refernce\na = [3,4,5]\nb = a\nprint(b is a)\nb[2] = 555\nprint(a, b)\n\n# slice copying as a new list\na = [3,4,5]\nb = a[:] # meaning slicing all\nprint(b is a)\nb[2] = 666\nprint(a, b)\n\n# removing something from a list\n\n# wrong\na = [1,2,3,3,3,3,4]\nfor i in a:\n if i == 3:\n a.remove(i)\nprint(a)\n\n# right, because a[:] creates a copy\na = [1,2,3,3,3,3,4]\nfor i in a[:]:\n if i == 3:\n a.remove(i)\nprint(a)\n\n# iterate a list as index and value\na = [10,20,30,40]\nfor idx, value in enumerate(a):\n print(idx, value)\n\n# iterate a dict as key and value\nb = {'x': 1, 'y': 2, 'z': 3}\nfor key, value in b.items():\n print(key, value)\n\n# use zip to iterate two lists\na = [1,2,3]\nb = [4,5,6]\n\nfor i, j in zip(a, b):\n print(i, j)",
"NumPy & SciPy\nVery good tutorials and docs:\n\nTentative NumPy Tutorial\nScientific Python stack official docs\n\nThere are a few compilations for helping MATLAB, IDL, R users transitioning to Python/NumPy. HTML and phf versions are both available. (Big thanks to Alex Mulia for bringing this to our attention!)\n\nThesaurus of Mathematical Languages, or MATLAB synonymous commands in Python/NumPy",
"%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\nfrom scipy.optimize import curve_fit\n\nnp.polyfit?",
"NumPy Arrays",
"a0 = np.arange(6)\na = a0.reshape((2,3))\nprint(a.dtype, a.itemsize, a.size, a.shape, '\\n')\n\nprint(a, '\\n')\nprint(repr(a), '\\n')\nprint(a.tolist())\n\nb = a.astype(float)\nprint(b, '\\n')\nprint(repr(b))\n\n# re-define a0 and a\na0 = np.arange(6)\na = a0.reshape((2,3))\n\n# get a slice of a to make c\nc = a[:2, 1:3]\n\n# a and c are both based on a0, the very initial storage space\nprint(c, '\\n')\nprint(a.base, a.base is c.base)\n\n# changing c will change a and a0\n\nc[0, 0] = 1111\nprint('\\n', c, '\\n')\nprint(a)\n\n# WAT??? This is different from the slice copy of a list, e.g. mylist[:]\n# if you want to make a real copy, and re-allocate some RAM, use\nd = a[:]\ne = a.copy()\nprint(d is a, e is a)\n\n# Well... You may expect d is the same as a, but it is just not.\n# Our reasoning still holds though. You change d, you'll change a.",
"Now, we'll primarily demonstrate SciPy's capability of fitting.\nFitting a single variable simple function\n$f(x) = a e^{b x}$",
"def f(x, a, b):\n return a * np.exp(b * x)\n \nx = np.linspace(0, 1, 1000)\ny_ideal = f(x, 1, 2)\ny = f(x, 1, 2) + np.random.randn(1000)\n\nplt.plot(x, y)\nplt.plot(x, y_ideal, lw=2)\n\npopt, pcov = curve_fit(f, x, y)\n# popt is the optimized parameters, and pcov is the covariance matrix.\n# diagnal members np.diag(pcov) is the variances of each parameter.\n# np.sqrt(np.diag(pcov)) is the standard deviation.\n\nprint(popt, '\\n\\n', pcov)\n\ny_fit = f(x, popt[0], popt[1])\nplt.plot(x, y_ideal, label='ideal')\nplt.plot(x, y_fit, '--', label='fit')\nplt.legend(loc=0, fontsize=14)",
"Fitting a single variable function containing an integral\n$f(x) = c \\int_o^x (a x' + b) dx' + d$",
"from scipy.integrate import quad\n \ndef f(x, a, b, c, d):\n # the integrand function should be within function f, because parameters a and b \n # are available within.\n def integrand(xx):\n return a * xx + b\n # if the upper/lower limit of the integral is our unknown variable x, x has to be \n # iterated from an array to a single value, because the quad function only accepts\n # a single value each time.\n y = np.zeros(len(x))\n for idx, value in enumerate(x):\n y[idx] = c * quad(integrand, 0, value)[0] + d\n return y\n\nx = np.linspace(0, 1, 1000)\ny_ideal = f(x, 1, 2, 3, 4)\ny = f(x, 1, 2, 3, 4) + np.random.randn(1000)\n\nplt.plot(x, y)\nplt.plot(x, y_ideal, lw=2)\n\npopt, pcov = curve_fit(f, x, y)\nprint(popt, '\\n\\n', pcov)\n\ny_fit = f(x, popt[0], popt[1], popt[2], popt[3])\nplt.plot(x, y_ideal, label='ideal')\nplt.plot(x, y_fit, '--', label='fit')\nplt.legend(loc=0, fontsize=14)",
"Fitting a 2 variable function\n$f(x) = a e^{b x_1} + e^{c x_2}$",
"def f(x, a, b, c):\n return a * np.exp(b * x[0]) + np.exp(c * x[1])\n\nx1 = np.linspace(0, 1, 1000)\nx2 = np.linspace(0, 1, 1000)\nx = [x1, x2]\ny_ideal = f(x, 1, 2, 3)\ny = f(x, 1, 2, 3) + np.random.randn(1000)\n\nfrom mpl_toolkits.mplot3d.axes3d import Axes3D\nfig = plt.figure(figsize=(10,8))\nax = fig.add_subplot(111, projection='3d')\nax.scatter(x[0], x[1], y, alpha=.1)\nax.plot(x[0], x[1], y_ideal, 'r', lw=2)\nax.view_init(30, 80)\n\npopt, pcov = curve_fit(f, x, y)\nprint(popt, '\\n\\n', pcov)\n\nfig = plt.figure(figsize=(10,8))\ny_fit = f(x, popt[0], popt[1], popt[2])\nax = fig.add_subplot(111, projection='3d')\nax.plot(x[0], x[1], y_ideal, label='ideal')\nax.plot(x[0], x[1], y_fit, label='fit')\nplt.legend(loc=0, fontsize=14)\nax.view_init(30, 80)",
"matplotlib\nSome core concepts in http://matplotlib.org/faq/usage_faq.html regarding backends, (non-)interactive modes.",
"# pyplot (plt) interface vs object oriented interface\nfig, axes = plt.subplots(1, 2)\nplt.plot([2,3,4])\n\n# Looks like it automatically chose the right axes to plot on.\n\n# how can I plot on the first graph? \n# Either keep (well... kind of) using the convenient pyplot interface\n\nfig, axes = plt.subplots(1, 2)\nplt.plot([2,3,4])\n# change the state of the focus by switching to the zeroth axes\nplt.sca(axes[0])\nplt.plot([3,2,1])\n\n# Or use the object oriented interface\n\nfig, axes = plt.subplots(1, 2)\nplt.plot([2,3,4])\nprint(axes)\nax = axes[0]\nax.plot([1,2,3])\n# if you are not using notebook, and have switched on interactive mode by plt.ion(),\n# you need to explicitly say\nplt.draw()\n\n# But it doesn't hurt if you say it anyway.\n# So there I said it.\n\n# Similarly, if you have two figures and want to switch back and forth\n\n# create figs\nfig1 = plt.figure('Ha')\nplt.plot([1,2,32])\nfig2 = plt.figure(2)\nplt.plot([32,2,1])\n\n# switch back to fig 'Ha'\nplt.figure('Ha')\nplt.scatter([0,1,2], [3,4,5])\n\n# add text and then delete\nplt.plot([2,3,4])\nplt.text(1, 2.5, r'This is $\\frac{x}{x - 1} = 1$!', fontsize=14)\n\n# to delete the text, first get the axes reference, and pop the just added text object out of the list\nplt.plot([2,3,4])\nplt.text(1, 2.5, r'This is $\\frac{x}{x - 1} = 1$!', fontsize=14)\nax = plt.gca()\n# print(ax.texts) will give you a list, with one element\nax.texts.pop()\n# you have to redraw the figure\nplt.draw()\n\n# same can be applied to lines by `ax.lines.pop()`\n\n# tight_layout() to automatically adjust the elements in a figure\n\nplt.plot([35,3,54])\nplt.xlabel('X')\nplt.ylabel('Y')\n\nplt.plot([35,3,54])\nplt.xlabel('X')\nplt.ylabel('Y')\nplt.tight_layout()\n\n# locator_params() to have more or less ticks\nplt.plot([35,3,54])\nplt.locator_params(nbins=10)",
"pandas\nA very good glimpse: Ten minutes of pandas.\nRead data from online files.",
"pd.read_csv('https://raw.githubusercontent.com/pydata/pandas/master/doc/data/baseball.csv', index_col='id')\n\ndf = pd.read_excel('https://github.com/pydata/pandas/raw/master/doc/data/test.xls')\nprint(df)",
"Ways of Indexing\nVery confusing? See http://pandas.pydata.org/pandas-docs/stable/indexing.html#different-choices-for-indexing",
"# simple column selection by label\ndf['A']\n\n# simple row slice by position, end not included\ndf[0:2]\n\n# explicit row selection\ndf.loc['2000-01-03']\n\n# explicit row slicing, end included\ndf.loc['2000-01-03':'2000-01-05']\n\n# explicit column selection by label\ndf.loc[:, 'A']\n\n# explicit element selection by label\ndf.loc['Jan 3, 2000', 'A']\n\n# explicit row selection by position\ndf.iloc[0]\n\n# explicit row slicing by position, end not included\ndf.iloc[0:2]\n\n# explicit column selection by position\ndf.iloc[:, 0]\n\n# explicit element selection by position\ndf.iloc[0, 0]\n\n# mixed selection, row by position and column by label\ndf.ix[0, 'A']",
"Reference and Resources\n\njrjohansson: Lectures on scientific computing with Python\njakevdp: Astronomy 599: Introduction to Scientific Computing in Python\nPython Scientific Lecture Notes"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Kreiswolke/gensim
|
docs/notebooks/Topics_and_Transformations.ipynb
|
lgpl-2.1
|
[
"Topics and Transformation\nDon't forget to set",
"import logging\nimport os.path\n\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)",
"if you want to see logging events.\nTransformation interface\nIn the previous tutorial on Corpora and Vector Spaces, we created a corpus of documents represented as a stream of vectors. To continue, let’s fire up gensim and use that corpus:",
"from gensim import corpora, models, similarities\nif (os.path.exists(\"/tmp/deerwester.dict\")):\n dictionary = corpora.Dictionary.load('/tmp/deerwester.dict')\n corpus = corpora.MmCorpus('/tmp/deerwester.mm')\n print(\"Used files generated from first tutorial\")\nelse:\n print(\"Please run first tutorial to generate data set\")\n\nprint (dictionary[0])\nprint (dictionary[1])\nprint (dictionary[2])",
"In this tutorial, I will show how to transform documents from one vector representation into another. This process serves two goals:\n\nTo bring out hidden structure in the corpus, discover relationships between words and use them to describe the documents in a new and (hopefully) more semantic way.\nTo make the document representation more compact. This both improves efficiency (new representation consumes less resources) and efficacy (marginal data trends are ignored, noise-reduction).\n\nCreating a transformation\nThe transformations are standard Python objects, typically initialized by means of a training corpus:",
"tfidf = models.TfidfModel(corpus) # step 1 -- initialize a model",
"We used our old corpus from tutorial 1 to initialize (train) the transformation model. Different transformations may require different initialization parameters; in case of TfIdf, the “training” consists simply of going through the supplied corpus once and computing document frequencies of all its features. Training other models, such as Latent Semantic Analysis or Latent Dirichlet Allocation, is much more involved and, consequently, takes much more time.\n\n<B>Note</B>:\nTransformations always convert between two specific vector spaces. The same vector space (= the same set of feature ids) must be used for training as well as for subsequent vector transformations. Failure to use the same input feature space, such as applying a different string preprocessing, using different feature ids, or using bag-of-words input vectors where TfIdf vectors are expected, will result in feature mismatch during transformation calls and consequently in either garbage output and/or runtime exceptions.",
"doc_bow = [(0, 1), (1, 1)]\nprint(tfidf[doc_bow]) # step 2 -- use the model to transform vectors",
"Or to apply a transformation to a whole corpus:",
"corpus_tfidf = tfidf[corpus]\nfor doc in corpus_tfidf:\n print(doc)",
"In this particular case, we are transforming the same corpus that we used for training, but this is only incidental. Once the transformation model has been initialized, it can be used on any vectors (provided they come from the same vector space, of course), even if they were not used in the training corpus at all. This is achieved by a process called folding-in for LSA, by topic inference for LDA etc.\n\n<b>Note:</b> \nCalling model[corpus] only creates a wrapper around the old corpus document stream – actual conversions are done on-the-fly, during document iteration. We cannot convert the entire corpus at the time of calling corpus_transformed = model[corpus], because that would mean storing the result in main memory, and that contradicts gensim’s objective of memory-indepedence. If you will be iterating over the transformed corpus_transformed multiple times, and the transformation is costly, serialize the resulting corpus to disk first and continue using that.\n\nTransformations can also be serialized, one on top of another, in a sort of chain:",
"lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=2) # initialize an LSI transformation\ncorpus_lsi = lsi[corpus_tfidf] # create a double wrapper over the original corpus: bow->tfidf->fold-in-lsi",
"Here we transformed our Tf-Idf corpus via Latent Semantic Indexing into a latent 2-D space (2-D because we set num_topics=2). Now you’re probably wondering: what do these two latent dimensions stand for? Let’s inspect with models.LsiModel.print_topics():",
"lsi.print_topics(2)",
"(the topics are printed to log – see the note at the top of this page about activating logging)\nIt appears that according to LSI, “trees”, “graph” and “minors” are all related words (and contribute the most to the direction of the first topic), while the second topic practically concerns itself with all the other words. As expected, the first five documents are more strongly related to the second topic while the remaining four documents to the first topic:",
"for doc in corpus_lsi: # both bow->tfidf and tfidf->lsi transformations are actually executed here, on the fly\n print(doc)\n\nlsi.save('/tmp/model.lsi') # same for tfidf, lda, ...\nlsi = models.LsiModel.load('/tmp/model.lsi')",
"The next question might be: just how exactly similar are those documents to each other? Is there a way to formalize the similarity, so that for a given input document, we can order some other set of documents according to their similarity? Similarity queries are covered in the next tutorial.\nAvailable transformations\nGensim implements several popular Vector Space Model algorithms:\nTerm Frequency * Inverse Document Frequency\nTf-Idf expects a bag-of-words (integer values) training corpus during initialization. During transformation, it will take a vector and return another vector of the same dimensionality, except that features which were rare in the training corpus will have their value increased. It therefore converts integer-valued vectors into real-valued ones, while leaving the number of dimensions intact. It can also optionally normalize the resulting vectors to (Euclidean) unit length.",
"model = models.TfidfModel(corpus, normalize=True)",
"Latent Semantic Indexing, LSI (or sometimes LSA)\nLSI transforms documents from either bag-of-words or (preferrably) TfIdf-weighted space into a latent space of a lower dimensionality. For the toy corpus above we used only 2 latent dimensions, but on real corpora, target dimensionality of 200–500 is recommended as a “golden standard” [1].",
"model = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=300)",
"LSI training is unique in that we can continue “training” at any point, simply by providing more training documents. This is done by incremental updates to the underlying model, in a process called online training. Because of this feature, the input document stream may even be infinite – just keep feeding LSI new documents as they arrive, while using the computed transformation model as read-only in the meanwhile!\n\n<b>Example</b> \nmodel.add_documents(another_tfidf_corpus) # now LSI has been trained on tfidf_corpus + another_tfidf_corpus\nlsi_vec = model[tfidf_vec] # convert some new document into the LSI space, without affecting the model\nmodel.add_documents(more_documents) # tfidf_corpus + another_tfidf_corpus + more_documents\nlsi_vec = model[tfidf_vec]\n\nSee the gensim.models.lsimodel documentation for details on how to make LSI gradually “forget” old observations in infinite streams. If you want to get dirty, there are also parameters you can tweak that affect speed vs. memory footprint vs. numerical precision of the LSI algorithm.\ngensim uses a novel online incremental streamed distributed training algorithm (quite a mouthful!), which I published in [5]. gensim also executes a stochastic multi-pass algorithm from Halko et al. [4] internally, to accelerate in-core part of the computations. See also \n Experiments on the English Wikipedia for further speed-ups by distributing the computation across a cluster of computers.\nRandom Projections\nRP aim to reduce vector space dimensionality. This is a very efficient (both memory- and CPU-friendly) approach to approximating TfIdf distances between documents, by throwing in a little randomness. Recommended target dimensionality is again in the hundreds/thousands, depending on your dataset.",
"model = models.RpModel(corpus_tfidf, num_topics=500)",
"Latent Dirichlet Allocation, LDA\nLDA is yet another transformation from bag-of-words counts into a topic space of lower dimensionality. LDA is a probabilistic extension of LSA (also called multinomial PCA), so LDA’s topics can be interpreted as probability distributions over words. These distributions are, just like with LSA, inferred automatically from a training corpus. Documents are in turn interpreted as a (soft) mixture of these topics (again, just like with LSA).",
"model = models.LdaModel(corpus, id2word=dictionary, num_topics=100)",
"gensim uses a fast implementation of online LDA parameter estimation based on [2], modified to run in distributed mode on a cluster of computers.\nHierarchical Dirichlet Process, HDP\nHDP is a non-parametric bayesian method (note the missing number of requested topics):",
"model = models.HdpModel(corpus, id2word=dictionary)",
"gensim uses a fast, online implementation based on [3]. The HDP model is a new addition to gensim, and still rough around its academic edges – use with care.\nAdding new VSM transformations (such as different weighting schemes) is rather trivial; see the API reference or directly the Python code for more info and examples.\nIt is worth repeating that these are all unique, incremental implementations, which do not require the whole training corpus to be present in main memory all at once. With memory taken care of, I am now improving Distributed Computing, to improve CPU efficiency, too. If you feel you could contribute (by testing, providing use-cases or code), please let me know.\nContinue on to the next tutorial on Similarity Queries.\n\n[1] Bradford. 2008. An empirical study of required dimensionality for large-scale latent semantic indexing applications.\n[2] Hoffman, Blei, Bach. 2010. Online learning for Latent Dirichlet Allocation.\n[3] Wang, Paisley, Blei. 2011. Online variational inference for the hierarchical Dirichlet process.\n[4] Halko, Martinsson, Tropp. 2009. Finding structure with randomness.\n[5] Řehůřek. 2011. Subspace tracking for Latent Semantic Analysis."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jdoepfert/nyc_taxi_tips
|
data_preparation.ipynb
|
apache-2.0
|
[
"import urllib\nimport os\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n # for subsampling the data\nfrom random import sample\n\n# for plotting on a geographic map\nimport folium \nfrom folium import plugins \nimport mplleaflet",
"Getting and inspecting the data\nDownloading the data. At the moment, only data from June 2015 is considered.",
"# list containing the link(s) to the csv file(s)\ndata_links = ['https://storage.googleapis.com/tlc-trip-data/2015/yellow_tripdata_2015-06.csv']\n\nfilenames = []\nfor link in data_links:\n filenames.append(link.split('/')[-1])\n if not(os.path.isfile(filenames[-1])): # do not download file if it already exists\n urllib.urlretrieve(link, filename)",
"Loading the data into a pandas data frame and look at it:",
"df = pd.DataFrame()\nfor filename in filenames:\n df = df.append(pd.read_csv(filename), ignore_index=True)\n\ndf.head()\n\ndf.info()\n\ndf.describe()",
"Note that some of the numerical features like tip amount and fare amount actually contain negative values. Those invalid values will be deleted in the next section.\nData cleaning\n1) Retaining only trips paid by credit card\nOnly the credit card tips are recorded in the data set. Therefore, let's only retain trips with credit card payment. This might introduce some bias (as credit card payers may have a different tipping behaviour than others).\nAs seen below, most of the trips are anyway paid by credit card (label \"1\", followed by cash payment, label \"2\").",
"df.groupby('payment_type').size().plot(kind='bar');",
"For some trips, people actually tipped with credit card, even though they did not pay with credit card:",
"np.sum((df.payment_type != 1) & (df.tip_amount != 0))",
"However, the number of those trips is negligible, so I ignore them here and only retain credit card trips. Then, the column \"payment_type\" can be removed:",
"df = df[df.payment_type == 1]\ndf.drop('payment_type', axis=1, inplace=True)\ndf.shape",
"2) Checking for unfeasible values in numerical features\nAs seen above, some of the numerical features contained negative values. Let's have a closer look...",
"(df < 0).sum()",
"...and remove the corresponding rows where negative values do not make any sense:",
"col_names = ['total_amount', 'improvement_surcharge', 'tip_amount', 'mta_tax', 'extra', 'fare_amount']\n\n# this removes all rows where at least one value of the columns in col_names is < 0\nrows_to_keep = (df[col_names] >= 0).sum(axis=1) == len(col_names)\nprint 'removing '+ str((~rows_to_keep).sum()) + ' rows...'\ndf = df[rows_to_keep]\n\n(df[col_names] < 0).sum() # check if it worked",
"3) Deleting \"invalid\" trips\nInspecting trip distance",
"ax = df.loc[sample(df.index, 30000)].plot(y='trip_distance',kind='hist', bins=200)\nax.set_xlim([0,25]);",
"Delete trips that are longer than 50 miles...",
"rows_to_keep = df.trip_distance <= 50\nprint 'removing '+ str((~rows_to_keep).sum()) + ' rows...'\ndf = df[rows_to_keep]",
"...and shorter than 0.1 miles:",
"rows_to_keep = df.trip_distance >= 0.1\nprint 'removing '+ str((~rows_to_keep).sum()) + ' rows...'\ndf = df[rows_to_keep]",
"Inspecting trip fare",
"ax = df.loc[sample(df.index, 300000)].plot(y='fare_amount',kind='hist', bins=200)\nax.set_xlim([0,102]);",
"There seem to be a decent amount of trips with a fixed rate of 50 USD (see spike above).\nNow let's remove rows where the fare is below 1 USD:",
"rows_to_keep = df.fare_amount >= 1\nprint 'removing '+ str((~rows_to_keep).sum()) + ' rows...'\ndf = df[rows_to_keep]",
"Inspecting trip duration",
"df.tpep_pickup_datetime = pd.to_datetime(df.tpep_pickup_datetime)\ndf.tpep_dropoff_datetime = pd.to_datetime(df.tpep_dropoff_datetime)\n\ndf['trip_duration'] = df.tpep_dropoff_datetime - df.tpep_pickup_datetime \n\ndf['trip_duration_minutes'] = df.trip_duration.dt.seconds/60\n\nax = df.loc[sample(df.index, 300000)].plot(y='trip_duration_minutes', kind='hist', bins=500)\nax.set_xlim([0,150]);",
"Remove trips that took less than half a minute...",
"rows_to_keep = df.trip_duration_minutes>0.5\nprint 'removing '+ str((~rows_to_keep).sum()) + ' rows...'\ndf = df[rows_to_keep]",
"...as well as trips with a duration of more than 2 hours:",
"rows_to_keep = df.trip_duration_minutes<=2*60\nprint 'removing '+ str((~rows_to_keep).sum()) + ' rows...'\ndf = df[rows_to_keep]",
"Inspecting passenger count",
"df.plot(y='passenger_count', kind='hist', bins=30);",
"Remove trips with zero passenger count:",
"rows_to_keep = df.passenger_count > 0\nprint 'removing '+ str((~rows_to_keep).sum()) + ' rows...'\ndf = df[rows_to_keep]",
"Remove trips with a passenger count of more than 6:",
"rows_to_keep = df.passenger_count <= 6\nprint 'removing '+ str((~rows_to_keep).sum()) + ' rows...'\ndf = df[rows_to_keep]",
"Removing invalid location coordinates\nRemove trips that obviously did not start in NY:",
"within_NY = (df.pickup_latitude > 40) & (df.pickup_latitude < 40.9) & \\\n (df.pickup_longitude > -74.4) & (df.pickup_longitude < -73.4)\n\nprint 'removing '+ str((~within_NY).sum()) + ' rows...'\ndf = df[within_NY]",
"Plot the pickup locations to check if they look good. Choose a random sample of all trips, since plotting all trips would take quite a while.",
"fig, ax = plt.subplots(figsize=(15, 10))\ndf.loc[sample(df.index, 200000)].plot(x='pickup_longitude', y='pickup_latitude', \n kind='scatter', ax=ax, alpha=0.3, s=3)\nax.set_xlim([-74.2, -73.7])\nax.set_ylim([40.6, 40.9]);",
"The above plot looks reasonable, you can clearly identify the geometry of New York. Let's plot a small subset of data points on a map. Next to central NY, one can identify small hotspots at the surrounding airports.",
"subdf = df.loc[sample(df.index, 10000)] # subsample df \ndata = subdf[['pickup_latitude', 'pickup_longitude']].values\n\nmapa = folium.Map([40.7, -73.9], zoom_start=11, tiles='stamentoner') # create heatmap\nmapa.add_children(plugins.HeatMap(data, min_opacity=0.005, max_zoom=18,\n max_val=0.01, radius=3, blur=3))\nmapa",
"Inspecting the tip\nAs the tip distribution below shows, people tend to tip whole numbers of dollars (see peaks at e.g. 1 and 2 dollars).",
"fig, ax = plt.subplots(figsize=(12,4))\nax = df.loc[sample(df.index, 100000)].plot(y='tip_amount', kind='hist',bins=1500, ax=ax)\nax.set_xlim([0,10.5])\nax.set_xticks(np.arange(0, 11, 0.5));",
"A useful metric for a taxi driver to compare tips is the percentage of tip given with respect to the total fare amount.",
"# check if the fares and fees sum up to total_amount\nprint pd.concat([df.tip_amount + df.fare_amount + df.tolls_amount + \\\n df.extra + df.mta_tax + df.improvement_surcharge, \\\n df.total_amount], axis=1).head()\n\n# calculate tip percentage\ndf['total_fare'] = df.total_amount - df.tip_amount\ndf['tip_percentage'] = df.tip_amount / df.total_fare * 100",
"The tip percentage distribution below shows that people mostly seem to tip 0, 20, 25 or 30%.",
"data = df.loc[sample(df.index, 100000)].tip_percentage.values\nplt.hist(data, np.arange(min(data)-0.5, max(data)+1.5)) \nplt.gca().set_xlim([0,35])\nplt.gca().set_xticks(np.arange(0, 51, 5));\nplt.legend(['tip_percentage']);",
"Remove trips where a tip of more than 100% was recorded, regarding them as invalid outliers.",
"rows_to_keep = df.tip_percentage <= 100\nprint 'removing '+ str((~rows_to_keep).sum()) + ' rows...'\ndf = df[rows_to_keep]\n\ndf.tip_percentage.mean()\n\ndf.tip_percentage.median()\n\ndf.tip_percentage.mode()\n\ndf.tip_percentage.quantile(0.25)\n\n# fig, ax = plt.subplots(figsize=(14,5))\n# ax = df.loc[sample(df.index, 100000)].tip_percentage.plot(kind='hist',bins=2000, cumulative=True)\n# ax.set_xlim([0,200])",
"Tip percentage by day of the week (Monday=0, Sunday=6). People tend to tip a little less on weekends (day 5-6).",
"fig, ax = plt.subplots(figsize=(12, 6))\nfor i in range(7):\n df[df.pickup_weekday==i].groupby('pickup_hour').mean().plot(y='tip_percentage', ax=ax)\nplt.legend(['day ' + str(x) for x in range(7)])\nax.set_ylabel('average tip percentage')",
"Let's look at the number of trips per hour and day:",
"fig, ax = plt.subplots(figsize=(12, 6))\nfor i in range(7):\n df[df.pickup_weekday==i].groupby('pickup_hour').size().plot(ax=ax)\nplt.legend(['day ' + str(x) for x in range(7)])\nax.set_ylabel('number of trips')",
"The tip percentage does seem to depend too much on the number of passengers:",
"fig, ax = plt.subplots(figsize=(8,7))\ndf.boxplot('tip_percentage', by='passenger_count', showmeans=True, ax=ax)\nax.set_ylim([15,21])",
"Save the cleaned data frame to a file:",
"df.to_pickle('df.pickle')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
probml/pyprobml
|
notebooks/book2/03/schools8_pymc3.ipynb
|
mit
|
[
"<a href=\"https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/schools8_pymc3.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nIn this notebook, we fit a hierarchical Bayesian model to the \"8 schools\" dataset.\nSee also https://github.com/probml/pyprobml/blob/master/scripts/schools8_pymc3.py",
"%matplotlib inline\nimport sklearn\nimport scipy.stats as stats\nimport scipy.optimize\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport time\nimport numpy as np\nimport os\nimport pandas as pd\n\n!pip install -qq -U pymc3>=3.8\ntry:\n import pymc3 as pm\nexcept ModuleNotFoundError:\n %pip install -qq pymc3\n import pymc3 as pm\nprint(pm.__version__)\ntry:\n import theano.tensor as tt\nexcept ModuleNotFoundError:\n %pip install -qq theano\n import theano.tensor as tt\nimport theano\n\n#!pip install -qq arviz\ntry:\n import arviz as az\nexcept ModuleNotFoundError:\n %pip install -qq arviz\n import arviz as az\n\n!mkdir ../figures",
"Data",
"# https://github.com/probml/pyprobml/blob/master/scripts/schools8_pymc3.py\n\n# Data of the Eight Schools Model\nJ = 8\ny = np.array([28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0])\nsigma = np.array([15.0, 10.0, 16.0, 11.0, 9.0, 11.0, 10.0, 18.0])\nprint(np.mean(y))\nprint(np.median(y))\n\nnames = []\nfor t in range(8):\n names.append(\"{}\".format(t))\n\n# Plot raw data\nfig, ax = plt.subplots()\ny_pos = np.arange(8)\nax.errorbar(y, y_pos, xerr=sigma, fmt=\"o\")\nax.set_yticks(y_pos)\nax.set_yticklabels(names)\nax.invert_yaxis() # labels read top-to-bottom\nplt.title(\"8 schools\")\nplt.savefig(\"../figures/schools8_data.png\")\nplt.show()",
"Centered model",
"# Centered model\nwith pm.Model() as Centered_eight:\n mu_alpha = pm.Normal(\"mu_alpha\", mu=0, sigma=5)\n sigma_alpha = pm.HalfCauchy(\"sigma_alpha\", beta=5)\n alpha = pm.Normal(\"alpha\", mu=mu_alpha, sigma=sigma_alpha, shape=J)\n obs = pm.Normal(\"obs\", mu=alpha, sigma=sigma, observed=y)\n log_sigma_alpha = pm.Deterministic(\"log_sigma_alpha\", tt.log(sigma_alpha))\n\nnp.random.seed(0)\nwith Centered_eight:\n trace_centered = pm.sample(1000, chains=4, return_inferencedata=False)\n\npm.summary(trace_centered).round(2)\n# PyMC3 gives multiple warnings about divergences\n# Also, see r_hat ~ 1.01, ESS << nchains*1000, especially for sigma_alpha\n# We can solve these problems below by using a non-centered parameterization.\n# In practice, for this model, the results are very similar.\n\n# Display the total number and percentage of divergent chains\ndiverging = trace_centered[\"diverging\"]\nprint(\"Number of Divergent Chains: {}\".format(diverging.nonzero()[0].size))\ndiverging_pct = diverging.nonzero()[0].size / len(trace_centered) * 100\nprint(\"Percentage of Divergent Chains: {:.1f}\".format(diverging_pct))\n\ndir(trace_centered)\n\ntrace_centered.varnames\n\nwith Centered_eight:\n # fig, ax = plt.subplots()\n az.plot_autocorr(trace_centered, var_names=[\"mu_alpha\", \"sigma_alpha\"], combined=True)\n plt.savefig(\"schools8_centered_acf_combined.png\", dpi=300)\n\nwith Centered_eight:\n # fig, ax = plt.subplots()\n az.plot_autocorr(trace_centered, var_names=[\"mu_alpha\", \"sigma_alpha\"])\n plt.savefig(\"schools8_centered_acf.png\", dpi=300)\n\nwith Centered_eight:\n az.plot_forest(trace_centered, var_names=\"alpha\", hdi_prob=0.95, combined=True)\n plt.savefig(\"schools8_centered_forest_combined.png\", dpi=300)\n\nwith Centered_eight:\n az.plot_forest(trace_centered, var_names=\"alpha\", hdi_prob=0.95, combined=False)\n plt.savefig(\"schools8_centered_forest.png\", dpi=300)",
"Non-centered",
"# Non-centered parameterization\n\nwith pm.Model() as NonCentered_eight:\n mu_alpha = pm.Normal(\"mu_alpha\", mu=0, sigma=5)\n sigma_alpha = pm.HalfCauchy(\"sigma_alpha\", beta=5)\n alpha_offset = pm.Normal(\"alpha_offset\", mu=0, sigma=1, shape=J)\n alpha = pm.Deterministic(\"alpha\", mu_alpha + sigma_alpha * alpha_offset)\n # alpha = pm.Normal('alpha', mu=mu_alpha, sigma=sigma_alpha, shape=J)\n obs = pm.Normal(\"obs\", mu=alpha, sigma=sigma, observed=y)\n log_sigma_alpha = pm.Deterministic(\"log_sigma_alpha\", tt.log(sigma_alpha))\n\nnp.random.seed(0)\nwith NonCentered_eight:\n trace_noncentered = pm.sample(1000, chains=4)\n\npm.summary(trace_noncentered).round(2)\n# Samples look good: r_hat = 1, ESS ~= nchains*1000\n\nwith NonCentered_eight:\n az.plot_autocorr(trace_noncentered, var_names=[\"mu_alpha\", \"sigma_alpha\"], combined=True)\n plt.savefig(\"schools8_noncentered_acf_combined.png\", dpi=300)\n\nwith NonCentered_eight:\n az.plot_forest(trace_noncentered, var_names=\"alpha\", combined=True, hdi_prob=0.95)\n plt.savefig(\"schools8_noncentered_forest_combined.png\", dpi=300)\n\naz.plot_forest(\n [trace_centered, trace_noncentered],\n model_names=[\"centered\", \"noncentered\"],\n var_names=\"alpha\",\n combined=True,\n hdi_prob=0.95,\n)\nplt.axvline(np.mean(y), color=\"k\", linestyle=\"--\")\n\naz.plot_forest(\n [trace_centered, trace_noncentered],\n model_names=[\"centered\", \"noncentered\"],\n var_names=\"alpha\",\n kind=\"ridgeplot\",\n combined=True,\n hdi_prob=0.95,\n);",
"Funnel of hell",
"# Plot the \"funnel of hell\"\n# Based on\n# https://github.com/twiecki/WhileMyMCMCGentlySamples/blob/master/content/downloads/notebooks/GLM_hierarchical_non_centered.ipynb\n\nfig, axs = plt.subplots(ncols=2, sharex=True, sharey=True)\nx = pd.Series(trace_centered[\"mu_alpha\"], name=\"mu_alpha\")\ny = pd.Series(trace_centered[\"log_sigma_alpha\"], name=\"log_sigma_alpha\")\naxs[0].plot(x, y, \".\")\naxs[0].set(title=\"Centered\", xlabel=\"µ\", ylabel=\"log(sigma)\")\n# axs[0].axhline(0.01)\n\nx = pd.Series(trace_noncentered[\"mu_alpha\"], name=\"mu\")\ny = pd.Series(trace_noncentered[\"log_sigma_alpha\"], name=\"log_sigma_alpha\")\naxs[1].plot(x, y, \".\")\naxs[1].set(title=\"NonCentered\", xlabel=\"µ\", ylabel=\"log(sigma)\")\n# axs[1].axhline(0.01)\n\nplt.savefig(\"schools8_funnel.png\", dpi=300)\n\nxlim = axs[0].get_xlim()\nylim = axs[0].get_ylim()\n\nx = pd.Series(trace_centered[\"mu_alpha\"], name=\"mu\")\ny = pd.Series(trace_centered[\"log_sigma_alpha\"], name=\"log sigma_alpha\")\nsns.jointplot(x, y, xlim=xlim, ylim=ylim)\nplt.suptitle(\"centered\")\nplt.savefig(\"schools8_centered_joint.png\", dpi=300)\n\nx = pd.Series(trace_noncentered[\"mu_alpha\"], name=\"mu\")\ny = pd.Series(trace_noncentered[\"log_sigma_alpha\"], name=\"log sigma_alpha\")\nsns.jointplot(x, y, xlim=xlim, ylim=ylim)\nplt.suptitle(\"noncentered\")\nplt.savefig(\"schools8_noncentered_joint.png\", dpi=300)\n\ngroup = 0\nfig, axs = plt.subplots(ncols=2, sharex=True, sharey=True, figsize=(10, 5))\nx = pd.Series(trace_centered[\"alpha\"][:, group], name=f\"alpha {group}\")\ny = pd.Series(trace_centered[\"log_sigma_alpha\"], name=\"log_sigma_alpha\")\naxs[0].plot(x, y, \".\")\naxs[0].set(title=\"Centered\", xlabel=r\"$\\alpha_0$\", ylabel=r\"$\\log(\\sigma_\\alpha)$\")\n\nx = pd.Series(trace_noncentered[\"alpha\"][:, group], name=f\"alpha {group}\")\ny = pd.Series(trace_noncentered[\"log_sigma_alpha\"], name=\"log_sigma_alpha\")\naxs[1].plot(x, y, \".\")\naxs[1].set(title=\"NonCentered\", xlabel=r\"$\\alpha_0$\", ylabel=r\"$\\log(\\sigma_\\alpha)$\")\n\nxlim = axs[0].get_xlim()\nylim = axs[0].get_ylim()\n\nplt.savefig(\"schools8_funnel_group0.png\", dpi=300)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ireapps/cfj-2017
|
completed/13. Web scraping (Part 3).ipynb
|
mit
|
[
"Let's scrape the IRE homepage\nOur goal: Print out the headlines from the IRE home page.\nrequests is a handy third-party library for making HTTP requests. It does the same thing your browser does when you type in a URL and hit enter -- sends a message to a server and requests a copy of the page -- but it allows us to do this programatically instead of pointing and clicking. For our purposes today, we're interested in the library's get() method.\nImport the libraries",
"import requests\nfrom bs4 import BeautifulSoup",
"Fetch and parse the HTML",
"# use the `get()` method to fetch a copy of the IRE home page\nire_page = requests.get('http://ire.org')\n\n# feed the text of the web page to a BeautifulSoup object\nsoup = BeautifulSoup(ire_page.text, 'html.parser')",
"Target the headlines\nView source on the IRE homepage and find the headlines. What's the pattern?",
"# get a list of headlines we're interested in\nheds = soup.find_all('h1', {'class': 'title1'})",
"Loop over the heds, printing out the text\nYou can drill down into a nested tag using a period.",
"for hed in heds:\n print(hed.a.string)",
"Exercise: Print the links\nYour mission: Loop over the headlines and print the links (the href portion of the tag) for each one. You can access tag attributes like you'd access values in a dictionary. (This might require some Googling.)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
bjshaw/phys202-2015-work
|
days/day19/FittingModels.ipynb
|
mit
|
[
"Fitting Models\nLearning Objectives: learn to fit models to data using linear and non-linear regression.\nThis material is licensed under the MIT license and was developed by Brian Granger. It was adapted from material from Jake VanderPlas and Jennifer Klay.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy import optimize as opt\n\nfrom IPython.html.widgets import interact",
"Introduction\nIn Data Science it is common to start with data and develop a model of that data. Such models can help to explain the data and make predictions about future observations. In fields like Physics, these models are often given in the form of differential equations, whose solutions explain and predict the data. In most other fields, such differential equations are not known. Often, models have to include sources of uncertainty and randomness. Given a set of data, fitting a model to the data is the process of tuning the parameters of the model to best explain the data.\nWhen a model has a linear dependence on its parameters, such as $a x^2 + b x + c$, this process is known as linear regression. When a model has a non-linear dependence on its parameters, such as $ a e^{bx} $, this process in known as non-linear regression. Thus, fitting data to a straight line model of $m x + b $ is linear regression, because of its linear dependence on $m$ and $b$ (rather than $x$).\nFitting a straight line\nA classical example of fitting a model is finding the slope and intercept of a straight line that goes through a set of data points ${x_i,y_i}$. For a straight line the model is:\n$$\ny_{model}(x) = mx + b\n$$\nGiven this model, we can define a metric, or cost function, that quantifies the error the model makes. One commonly used metric is $\\chi^2$, which depends on the deviation of the model from each data point ($y_i - y_{model}(x_i)$) and the measured uncertainty of each data point $ \\sigma_i$:\n$$\n\\chi^2 = \\sum_{i=1}^N \\left(\\frac{y_i - y_{model}(x)}{\\sigma_i}\\right)^2\n$$\nWhen $\\chi^2$ is small, the model's predictions will be close the data points. Likewise, when $\\chi^2$ is large, the model's predictions will be far from the data points. Given this, our task is to minimize $\\chi^2$ with respect to the model parameters $\\theta = [m, b]$ in order to find the best fit.\nTo illustrate linear regression, let's create a synthetic data set with a known slope and intercept, but random noise that is additive and normally distributed.",
"N = 50\nm_true = 2\nb_true = -1\ndy = 2.0 # uncertainty of each point\n\nnp.random.seed(0)\nxdata = 10 * np.random.random(N) # don't use regularly spaced data\nydata = b_true + m_true * xdata + np.random.normal(0.0, dy, size=N) # our errors are additive\n\nplt.errorbar(xdata, ydata, dy,\n fmt='.k', ecolor='lightgray')\nplt.xlabel('x')\nplt.ylabel('y');",
"Fitting by hand\nIt is useful to see visually how changing the model parameters changes the value of $\\chi^2$. By using IPython's interact function, we can create a user interface that allows us to pick a slope and intercept interactively and see the resulting line and $\\chi^2$ value.\nHere is the function we want to minimize. Note how we have combined the two parameters into a single parameters vector $\\theta = [m, b]$, which is the first argument of the function:",
"def chi2(theta, x, y, dy):\n # theta = [b, m]\n return np.sum(((y - theta[0] - theta[1] * x) / dy) ** 2)\n\ndef manual_fit(b, m):\n modely = m*xdata + b\n plt.plot(xdata, modely)\n plt.errorbar(xdata, ydata, dy,\n fmt='.k', ecolor='lightgray')\n plt.xlabel('x')\n plt.ylabel('y')\n plt.text(1, 15, 'b={0:.2f}'.format(b))\n plt.text(1, 12.5, 'm={0:.2f}'.format(m))\n plt.text(1, 10.0, '$\\chi^2$={0:.2f}'.format(chi2([b,m],xdata,ydata, dy)))\n\ninteract(manual_fit, b=(-3.0,3.0,0.01), m=(0.0,4.0,0.01));",
"Go ahead and play with the sliders and try to:\n\nFind the lowest value of $\\chi^2$\nFind the \"best\" line through the data points.\n\nYou should see that these two conditions coincide.\nMinimize $\\chi^2$ using scipy.optimize.minimize\nNow that we have seen how minimizing $\\chi^2$ gives the best parameters in a model, let's perform this minimization numerically using scipy.optimize.minimize. We have already defined the function we want to minimize, chi2, so we only have to pass it to minimize along with an initial guess and the additional arguments (the raw data):",
"theta_guess = [0.0,1.0]\nresult = opt.minimize(chi2, theta_guess, args=(xdata,ydata,dy))",
"Here are the values of $b$ and $m$ that minimize $\\chi^2$:",
"theta_best = result.x\nprint(theta_best)",
"These values are close to the true values of $b=-1$ and $m=2$. The reason our values are different is that our data set has a limited number of points. In general, we expect that as the number of points in our data set increases, the model parameters will converge to the true values. But having a limited number of data points is not a problem - it is a reality of most data collection processes.\nWe can plot the raw data and the best fit line:",
"xfit = np.linspace(0,10.0)\nyfit = theta_best[1]*xfit + theta_best[0]\n\nplt.plot(xfit, yfit)\nplt.errorbar(xdata, ydata, dy,\n fmt='.k', ecolor='lightgray')\nplt.xlabel('x')\nplt.ylabel('y');",
"Minimize $\\chi^2$ using scipy.optimize.leastsq\nPerforming regression by minimizing $\\chi^2$ is known as least squares regression, because we are minimizing the sum of squares of the deviations. The linear version of this is known as linear least squares. For this case, SciPy provides a purpose built function, scipy.optimize.leastsq. Instead of taking the $\\chi^2$ function to minimize, leastsq takes a function that computes the deviations:",
"def deviations(theta, x, y, dy):\n return (y - theta[0] - theta[1] * x) / dy\n\nresult = opt.leastsq(deviations, theta_guess, args=(xdata, ydata, dy), full_output=True)",
"Here we have passed the full_output=True option. When this is passed the covariance matrix $\\Sigma_{ij}$ of the model parameters is also returned. The uncertainties (as standard deviations) in the parameters are the square roots of the diagonal elements of the covariance matrix:\n$$ \\sigma_i = \\sqrt{\\Sigma_{ii}} $$\nA proof of this is beyond the scope of the current notebook.",
"theta_best = result[0]\ntheta_cov = result[1]\nprint('b = {0:.3f} +/- {1:.3f}'.format(theta_best[0], np.sqrt(theta_cov[0,0])))\nprint('m = {0:.3f} +/- {1:.3f}'.format(theta_best[1], np.sqrt(theta_cov[1,1])))",
"We can again plot the raw data and best fit line:",
"yfit = theta_best[0] + theta_best[1] * xfit\n\nplt.errorbar(xdata, ydata, dy,\n fmt='.k', ecolor='lightgray');\nplt.plot(xfit, yfit, '-b');",
"Fitting using scipy.optimize.curve_fit\nSciPy also provides a general curve fitting function, curve_fit, that can handle both linear and non-linear models. This function: \n\nAllows you to directly specify the model as a function, rather than the cost function (it assumes $\\chi^2$).\nReturns the covariance matrix for the parameters that provides estimates of the errors in each of the parameters.\n\nLet's apply curve_fit to the above data. First we define a model function. The first argument should be the independent variable of the model.",
"def model(x, b, m):\n return m*x+b",
"Then call curve_fit passing the model function and the raw data. The uncertainties of each data point are provided with the sigma keyword argument. If there are no uncertainties, this can be omitted. By default the uncertainties are treated as relative. To treat them as absolute, pass the absolute_sigma=True argument.",
"theta_best, theta_cov = opt.curve_fit(model, xdata, ydata, sigma=dy)",
"Again, display the optimal values of $b$ and $m$ along with their uncertainties:",
"print('b = {0:.3f} +/- {1:.3f}'.format(theta_best[0], np.sqrt(theta_cov[0,0])))\nprint('m = {0:.3f} +/- {1:.3f}'.format(theta_best[1], np.sqrt(theta_cov[1,1])))",
"We can again plot the raw data and best fit line:",
"xfit = np.linspace(0,10.0)\nyfit = theta_best[1]*xfit + theta_best[0]\n\nplt.plot(xfit, yfit)\nplt.errorbar(xdata, ydata, dy,\n fmt='.k', ecolor='lightgray')\nplt.xlabel('x')\nplt.ylabel('y');",
"Non-linear models\nSo far we have been using a linear model $y_{model}(x) = m x +b$. Remember this model was linear, not because of its dependence on $x$, but on $b$ and $m$. A non-linear model will have a non-linear dependece on the model parameters. Examples are $A e^{B x}$, $A \\cos{B x}$, etc. In this section we will generate data for the following non-linear model:\n$$y_{model}(x) = Ae^{Bx}$$\nand fit that data using curve_fit. Let's start out by using this model to generate a data set to use for our fitting:",
"npoints = 20\nAtrue = 10.0\nBtrue = -0.2\nxdata = np.linspace(0.0, 20.0, npoints)\ndy = np.random.normal(0.0, 0.1, size=npoints)\nydata = Atrue*np.exp(Btrue*tdata) + dy",
"Plot the raw data:",
"plt.plot(xdata, ydata, 'k.')\nplt.xlabel('x')\nplt.ylabel('y');",
"Let's see if we can use non-linear regression to recover the true values of our model parameters. First define the model:",
"def exp_model(x, A, B):\n return A*np.exp(x*B)",
"Then use curve_fit to fit the model:",
"theta_best, theta_cov = opt.curve_fit(exp_model2, xdata, ydata)",
"Our optimized parameters are close to the true values of $A=10$ and $B=-0.2$:",
"print('A = {0:.3f} +/- {1:.3f}'.format(theta_best[0], np.sqrt(theta_cov[0,0])))\nprint('B = {0:.3f} +/- {1:.3f}'.format(theta_best[1], np.sqrt(theta_cov[1,1])))",
"Plot the raw data and fitted model:",
"xfit = np.linspace(0,20)\nyfit = exp_model(xfit, theta_best[0], theta_best[1])\nplt.plot(xfit, yfit)\nplt.plot(xdata, ydata, 'k.')\nplt.xlabel('x')\nplt.ylabel('y');",
"A note about transforming to a linear model\nAnother approach to dealing with non-linear models is to linearize them with a transformation. For example, the exponential model used above,\n$$y_{model}(x) = Ae^{Bx},$$\ncan be linearized by taking the natural log of both sides:\n$$ ln(y) = ln(A) + B x $$\nThis model is linear in the parameters $ln(A)$ and $B$ and can be treated as a standard linear regression problem. This approach is used in most introductory physics laboratories. **However, in most cases, transforming to a linear model will give a poor fit. The reasons for this are a bit subtle, but here is the basic idea:\n\nLeast squares regression assumes that errors are symmetric, additive and normally distributed. This assumption has been present throughout this notebook, when we generated data by adding a small amount of randomness to our data using np.random.normal.\nTransforming the data with a non-linear transformation, such as the square root, exponential or logarithm will not lead to errors that follow this assumption.\nHowever, in the rare case that there are no (minimal) random errors in the original data set, the transformation approach will give the same result as the non-linear regression on the original model.\n\nHere is a nice discussion of this in the Matlab documentation.\nModel selection\nIn all of the examples in this notebook, we started with a model and used that model to generate data. This was done to make it easy to check the predicted model parameters with the true values used to create the data set. However, in the real world, you almost never know the model underlying the data. Because of this, there is an additional step called model selection where you have to figure out a way to pick a good model. This is a notoriously difficult problem, especially when the randomness in the data is large.\n\nPick the simplest possible model. In general picking a more complex model will give a better fit. However, it won't be a useful model and will make poor predictions about future data. This is known as overfitting.\nWhenever possible, pick a model that has a underlying theoretical foundation or motivation. For example, in Physics, most of our models come from well tested differential equations.\nThere are more advanced methods (AIC,BIC) that can assist in this model selection process. A good discussion can be found in this notebook by Jake VanderPlas."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ceos-seo/Data_Cube_v2
|
agdc-v2/contrib/notebooks/CSIRO Water Quality Analysis using Turbidity .ipynb
|
apache-2.0
|
[
"CSIRO Water Quality Analysis using Turbidity\nMaintainer: Xavier Ho xavier.ho@csiro.au\nPlease contact Xavier for any queries about this example.\nThis example uses the Analytics Execution Engine of AGDC v2 to query Landsat imagery. Two bands are combined to calculate an average, and the result is saved to an image.",
"from pprint import pprint\nfrom datetime import datetime\nimport xarray as xr\n\nimport matplotlib\nimport matplotlib.image\n%matplotlib inline\n\nimport datacube\nfrom datacube.api import API, geo_xarray\nfrom datacube.analytics.analytics_engine import AnalyticsEngine\nfrom datacube.execution.execution_engine import ExecutionEngine\nfrom datacube.analytics.utils.analytics_utils import plot\n\nprint('This example runs on Data Cube v2/{}.'.format(datacube.__version__))",
"First, we make a query to the datacube to find out what datasets we have.",
"dc_a = AnalyticsEngine()\ndc_e = ExecutionEngine()\ndc_api = API()\n\nprint(dc_api.list_field_values('product')) # 'LEDAPS' should be in the list\nprint(dc_api.list_field_values('platform')) # 'LANDSAT_5' should be in the list",
"Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) is a NASA-funded project to map North American forest disturbance since 1975. We have datasets in the same format for Australia.\nLet's find out what kind of datasets we have for Landsat 5.",
"query = {\n 'product': 'LEDAPS',\n 'platform': 'LANDSAT_5',\n}\ndescriptor = dc_api.get_descriptor(query, include_storage_units=False)\npprint(descriptor)",
"For Landsat 5, Band 1-3 are Blue, Green, and Red visible light spectrum bands.\n\nTo set up the Engine, we first need to instantiate the modules and setup query parameters. \n\ncreate_array sets up the platform and product we are interested in querying, as well as the bands (variables) of the satellite data set. We also limit the amount of data processed by a long-lat boundary and time.\napply_expression binds the variables into a generic string to execute, in this case an average of two bands.\nexecute_plan is when the computation is actually run and returned.",
"dimensions = {\n 'x': {\n 'range': (140, 141)\n },\n 'y': {\n 'range': (-35.5, -36.5)\n },\n 'time': {\n 'range': (datetime(2011, 10, 17), datetime(2011, 10, 18))\n }\n}\n\nred = dc_a.create_array(('LANDSAT_5', 'LEDAPS'), ['band3'], dimensions, 'red')\ngreen = dc_a.create_array(('LANDSAT_5', 'LEDAPS'), ['band2'], dimensions, 'green')\nblue = dc_a.create_array(('LANDSAT_5', 'LEDAPS'), ['band1'], dimensions, 'blue')",
"Now we have created references to the green and blue bands, we can do simple band maths.",
"blue_result = dc_a.apply_expression([blue], 'array1', 'blue')\ndc_e.execute_plan(dc_a.plan)\nplot(dc_e.cache['blue'])\n\nturbidity = dc_a.apply_expression([blue, green, red], '(array1 + array2 - array3) / 2', 'turbidity')\n\ndc_e.execute_plan(dc_a.plan)\nplot(dc_e.cache['turbidity'])",
"geo_xarray.reproject reprojects northings and eastings to longitude and latitude units.\nLet's reproject the file into the common longitude-latitude projection, and save it to a picture.",
"result = dc_e.cache['turbidity']['array_result']['turbidity']\nreprojected = datacube.api.geo_xarray.reproject(result.isel(time=0), 'EPSG:3577', 'WGS84')\n\npprint(reprojected)\n\nreprojected.plot.imshow()\n\nmatplotlib.image.imsave('turbidity.png', reprojected)",
"The boundaries in long-lat are as follows:",
"map(float, (reprojected.x[0], reprojected.x[-1], reprojected.y[0], reprojected.y[-1]))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
NLeSC/noodles
|
notebooks/poetry_tutorial.ipynb
|
apache-2.0
|
[
"Real World Tutorial 1: Translating Poetry\nFirst example\nWe build workflows by calling functions. The simplest example of this\nis the \"diamond workflow\":",
"from noodles import run_single\nfrom noodles.tutorial import (add, sub, mul)\n\nu = add(5, 4)\nv = sub(u, 3)\nw = sub(u, 2)\nx = mul(v, w)\n\nanswer = run_single(x)\n\nprint(\"The answer is {0}.\".format(answer))",
"That looks like any other Python code! But this example is a bit silly.\nHow do we leverage Noodles to earn an honest living? Here's a slightly less\nsilly example (but only just!). We will build a small translation engine\nthat translates sentences by submitting each word to an online dictionary\nover a Rest API. To do this we make loops (\"For thou shalt make loops of \nblue\"). First we build the program as you would do in Python, then we\nsprinkle some Noodles magic and make it work parallel! Furthermore, we'll\nsee how to:\n\nmake more loops\ncache results for reuse\n\nMaking loops\nThats all swell, but how do we make a parallel loop? Let's look at a map operation; in Python there are several ways to perform a function on all elements in an array. For this example, we will translate some words using the Glosbe service, which has a nice REST interface. We first build some functionality to use this interface.",
"import urllib.request\nimport json\nimport re\n\n\nclass Translate:\n \"\"\"Translate words and sentences in the worst possible way. The Glosbe dictionary\n has a nice REST interface that we query for a phrase. We then take the first result.\n To translate a sentence, we cut it in pieces, translate it and paste it back into\n a Frankenstein monster.\"\"\"\n def __init__(self, src_lang='en', tgt_lang='fy'):\n self.src = src_lang\n self.tgt = tgt_lang\n self.url = 'https://glosbe.com/gapi/translate?' \\\n 'from={src}&dest={tgt}&' \\\n 'phrase={{phrase}}&format=json'.format(\n src=src_lang, tgt=tgt_lang)\n \n def query_phrase(self, phrase):\n with urllib.request.urlopen(self.url.format(phrase=phrase.lower())) as response:\n translation = json.loads(response.read().decode())\n return translation\n\n def word(self, phrase):\n translation = self.query_phrase(phrase)\n #translation = {'tuc': [{'phrase': {'text': phrase.lower()[::-1]}}]}\n if len(translation['tuc']) > 0 and 'phrase' in translation['tuc'][0]:\n result = translation['tuc'][0]['phrase']['text']\n if phrase[0].isupper():\n return result.title()\n else:\n return result \n else:\n return \"<\" + phrase + \">\"\n \n def sentence(self, phrase):\n words = re.sub(\"[^\\w]\", \" \", phrase).split()\n space = re.sub(\"[\\w]+\", \"{}\", phrase)\n return space.format(*map(self.word, words))",
"We start with a list of strings that desparately need translation. And add a little\nroutine to print it in a gracious manner.",
"shakespeare = [\n \"If music be the food of love, play on,\",\n \"Give me excess of it; that surfeiting,\",\n \"The appetite may sicken, and so die.\"]\n\ndef print_poem(intro, poem):\n print(intro)\n for line in poem:\n print(\" \", line)\n print()\n\nprint_poem(\"Original:\", shakespeare)",
"Beginning Python programmers like to append things; this is not how you are\nsupposed to program in Python; if you do, please go and read Jeff Knupp's Writing Idiomatic Python.",
"shakespeare_auf_deutsch = []\nfor line in shakespeare:\n shakespeare_auf_deutsch.append(\n Translate('en', 'de').sentence(line))\nprint_poem(\"Auf Deutsch:\", shakespeare_auf_deutsch)",
"Rather use a comprehension like so:",
"shakespeare_ynt_frysk = \\\n (Translate('en', 'fy').sentence(line) for line in shakespeare)\nprint_poem(\"Yn it Frysk:\", shakespeare_ynt_frysk)",
"Or use map:",
"shakespeare_pa_dansk = \\\n map(Translate('en', 'da').sentence, shakespeare)\nprint_poem(\"På Dansk:\", shakespeare_pa_dansk)",
"Noodlify!\nIf your connection is a bit slow, you may find that the translations take a while to process. Wouldn't it be nice to do it in parallel? How much code would we have to change to get there in Noodles? Let's take the slow part of the program and add a @schedule decorator, and run! Sadly, it is not that simple. We can add @schedule to the word method. This means that it will return a promise. \n\nRule: Functions that take promises need to be scheduled functions, or refer to a scheduled function at some level. \n\nWe could write\nreturn schedule(space.format)(*(self.word(w) for w in words))\n\nin the last line of the sentence method, but the string format method doesn't support wrapping. We rely on getting the signature of a function by calling inspect.signature. In some cases of build-in function this raises an exception. We may find a work around for these cases in future versions of Noodles. For the moment we'll have to define a little wrapper function.",
"from noodles import schedule\n\n\n@schedule\ndef format_string(s, *args, **kwargs):\n return s.format(*args, **kwargs)\n\n\nimport urllib.request\nimport json\nimport re\n\n\nclass Translate:\n \"\"\"Translate words and sentences in the worst possible way. The Glosbe dictionary\n has a nice REST interface that we query for a phrase. We then take the first result.\n To translate a sentence, we cut it in pieces, translate it and paste it back into\n a Frankenstein monster.\"\"\"\n def __init__(self, src_lang='en', tgt_lang='fy'):\n self.src = src_lang\n self.tgt = tgt_lang\n self.url = 'https://glosbe.com/gapi/translate?' \\\n 'from={src}&dest={tgt}&' \\\n 'phrase={{phrase}}&format=json'.format(\n src=src_lang, tgt=tgt_lang)\n \n def query_phrase(self, phrase):\n with urllib.request.urlopen(self.url.format(phrase=phrase.lower())) as response:\n translation = json.loads(response.read().decode())\n return translation\n \n @schedule\n def word(self, phrase):\n #translation = {'tuc': [{'phrase': {'text': phrase.lower()[::-1]}}]}\n translation = self.query_phrase(phrase)\n \n if len(translation['tuc']) > 0 and 'phrase' in translation['tuc'][0]:\n result = translation['tuc'][0]['phrase']['text']\n if phrase[0].isupper():\n return result.title()\n else:\n return result \n else:\n return \"<\" + phrase + \">\"\n \n def sentence(self, phrase):\n words = re.sub(\"[^\\w]\", \" \", phrase).split()\n space = re.sub(\"[\\w]+\", \"{}\", phrase)\n return format_string(space, *map(self.word, words))\n \n def __str__(self):\n return \"[{} -> {}]\".format(self.src, self.tgt)\n \n def __serialize__(self, pack):\n return pack({'src_lang': self.src,\n 'tgt_lang': self.tgt})\n\n @classmethod\n def __construct__(cls, msg):\n return cls(**msg)",
"Let's take stock of the mutations to the original. We've added a @schedule decorator to word, and changed a function call in sentence. Also we added the __str__ method; this is only needed to plot the workflow graph. Let's run the new script.",
"from noodles import gather, run_parallel\nfrom noodles.tutorial import get_workflow_graph\n\nshakespeare_en_esperanto = \\\n map(Translate('en', 'eo').sentence, shakespeare)\n\nwf = gather(*shakespeare_en_esperanto)\nworkflow_graph = get_workflow_graph(wf._workflow)\nresult = run_parallel(wf, n_threads=8)\nprint_poem(\"Shakespeare en Esperanto:\", result)",
"The last peculiar thing that you may notice, is the gather function. It collects the promises that map generates and creates a single new promise. The definition of gather is very simple:\n@schedule\ndef gather(*lst):\n return lst\n\nThe workflow graph of the Esperanto translator script looks like this:",
"workflow_graph.attr(size='10')\nworkflow_graph",
"Dealing with repetition\nIn the following example we have a line with some repetition.",
"from noodles import (schedule, gather_all)\nimport re\n\n@schedule\ndef word_size(word):\n return len(word)\n\n@schedule\ndef format_string(s, *args, **kwargs):\n return s.format(*args, **kwargs)\n\ndef word_size_phrase(phrase):\n words = re.sub(\"[^\\w]\", \" \", phrase).split()\n space = re.sub(\"[\\w]+\", \"{}\", phrase)\n word_lengths = map(word_size, words)\n return format_string(space, *word_lengths)\n\nfrom noodles.tutorial import display_workflows, run_and_print_log\n\ndisplay_workflows(\n prefix='poetry',\n sizes=word_size_phrase(\"Oote oote oote, Boe\"))",
"Let's run the example workflows now, but focus on the actions taken, looking at the logs. The function run_and_print_log in the tutorial module runs our workflow with four parallel threads and caches results in a Sqlite3 database.\nTo see how this program is being run, we monitor the job submission, retrieval and result storage. First, should you have run this tutorial before, remove the database file.",
"# remove the database if it already exists\n!rm -f tutorial.db",
"Running the workflow, we can now see that at the second occurence of the word 'oote', the function call is attached to the first job that asked for the same result. The job word_size('oote') is run only once.",
"run_and_print_log(word_size_phrase(\"Oote oote oote, Boe\"), highlight=range(4, 8))",
"Now, running a similar workflow again, notice that previous results are retrieved from the database.",
"run_and_print_log(word_size_phrase(\"Oe oe oote oote oote\"), highlight=range(5, 10))",
"Although the result of every single job is retrieved we still had to go through the trouble of looking up the results of word_size('Oote'), word_size('oote'), and word_size('Boe') to find out that we wanted the result from the format_string. If you want to cache the result of an entire workflow, pack the workflow in another scheduled function!\nVersioning\nWe may add a version string to a function. This version is taken into account when looking up results in the database.",
"@schedule(version='1.0')\ndef word_size_phrase(phrase):\n words = re.sub(\"[^\\w]\", \" \", phrase).split()\n space = re.sub(\"[\\w]+\", \"{}\", phrase)\n word_lengths = map(word_size, words)\n return format_string(space, *word_lengths)\n\nrun_and_print_log(\n word_size_phrase(\"Kneu kneu kneu kneu ote kneu eur\"),\n highlight=[1, 17])",
"See how the first job is evaluated to return a new workflow. Note that if the version is omitted, it is automatically generated from the source of the function. For example, let's say we decided the function word_size_phrase should return a dictionary of all word sizes in stead of a string. Here we use the function called lift to transform a dictionary containing promises to a promise of a dictionary. lift can handle lists, dictionaries, sets, tuples and objects that are constructable from their __dict__ member.",
"from noodles import lift\n\ndef word_size_phrase(phrase):\n words = re.sub(\"[^\\w]\", \" \", phrase).split()\n return lift({word: word_size(word) for word in words})\n\ndisplay_workflows(prefix='poetry', lift=word_size_phrase(\"Kneu kneu kneu kneu ote kneu eur\"))\n\nrun_and_print_log(word_size_phrase(\"Kneu kneu kneu kneu ote kneu eur\"))",
"Be careful with versions! Noodles will believe you upon your word! If we lie about the version, it will go ahead and retrieve the result belonging to the old function:",
"@schedule(version='1.0')\ndef word_size_phrase(phrase):\n words = re.sub(\"[^\\w]\", \" \", phrase).split()\n return lift({word: word_size(word) for word in words})\n\nrun_and_print_log(\n word_size_phrase(\"Kneu kneu kneu kneu ote kneu eur\"),\n highlight=[1])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
indiependente/Social-Networks-Structure
|
results/RandomGraph Results Analysis.ipynb
|
mit
|
[
"Random Graph Experiments Output Visualization",
"#!/usr/bin/python\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom stats import parse_results, get_percentage, get_avg_per_seed, draw_pie, draw_bars, draw_bars_comparison, draw_avgs",
"Parse results",
"pr, eigen, bet = parse_results('test_rdbg.txt')",
"PageRank Seeds Percentage\nHow many times the \"Top X\" nodes from PageRank have led to the max infection",
"draw_pie(get_percentage(pr))",
"Avg adopters per seed comparison",
"draw_bars_comparison('Avg adopters per seeds', 'Avg adopters', np.array(get_avg_per_seed(pr)+[(0, np.mean(pr[:,1]))]))",
"Eigenvector Seeds Percentage\nHow many times the \"Top X\" nodes from Eigenvector have led to the max infection",
"draw_pie(get_percentage(eigen))",
"Avg adopters per seed comparison",
"draw_bars_comparison('Avg adopters per seeds', 'Avg adopters', np.array(get_avg_per_seed(eigen)+[(0, np.mean(eigen[:,1]))]))",
"Betweenness Seeds Percentage\nHow many times the \"Top X\" nodes from Betweenness have led to the max infection",
"draw_pie(get_percentage(bet))",
"Avg adopters per seed comparison",
"draw_bars_comparison('Avg adopters per seeds', 'Avg adopters', np.array(get_avg_per_seed(bet)+[(0, np.mean(bet[:,1]))]))",
"100 runs adopters comparison",
"draw_bars(np.sort(pr.view('i8,i8'), order=['f0'], axis=0).view(np.int),\n np.sort(eigen.view('i8,i8'), order=['f0'], axis=0).view(np.int),\n np.sort(bet.view('i8,i8'), order=['f0'], axis=0).view(np.int))",
"Centrality Measures Averages\nPageRank avg adopters and seed",
"pr_mean = np.mean(pr[:,1])\npr_mean_seed = np.mean(pr[:,0])\nprint 'Avg Seed:',pr_mean_seed, 'Avg adopters:', pr_mean",
"Eigenv avg adopters and seed",
"eigen_mean = np.mean(eigen[:,1])\neigen_mean_seed = np.mean(eigen[:,0])\nprint 'Avg Seed:',eigen_mean_seed, 'Avg adopters:',eigen_mean",
"Betweenness avg adopters and seed",
"bet_mean = np.mean(bet[:,1])\nbet_mean_seed = np.mean(bet[:,0])\nprint 'Avg Seed:',bet_mean_seed, 'Avg adopters:',bet_mean\n\ndraw_avgs([pr_mean, eigen_mean, bet_mean])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
astyler/scratch
|
torque plotting.ipynb
|
mit
|
[
"Let's plot our most recent Torque log in python",
"import pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport osmapping\nimport glob\n%matplotlib inline",
"Import the file into pandas, and drop all rows without a GPS fix",
"dname = '/Users/astyler/projects/torquedata/'\ntrips = []\nfnames = glob.glob(dname+'*.csv')\nfor fname in fnames:\n trip = pd.read_csv(fname, na_values=['-'],encoding ='U8',index_col=False, header=False, names=['GPSTime','Time','Longitude','Latitude','GPSSpeed','GPSError','Altitude','Bearing','Gx','Gy','Gz','G','Az','Ay','Ax','A','Power','Accuracy','Satellites','GPSAltitude','GPSBearing','Lat2','Lon2','OBDSpeed','GPSSpeedkmhr'])\n trip = trip.dropna(subset = ['Longitude','Latitude'])\n trips.append(trip)\n \nfnames",
"Find the Lat/Lon bounding box and create a new map from the osmapping library",
"buffr = 0.01\nmins=[(min(trip.Longitude) -buffr,min(trip.Latitude)-buffr) for trip in trips]\nmaxs=[(max(trip.Longitude) + buffr,max(trip.Latitude)+buffr) for trip in trips]\n\nll = map(min,zip(*mins))\nur = map(max,zip(*maxs))\nprint ll\nprint ur\nmymap = osmapping.MLMap(ll,ur)\n\nfor trip in trips:\n trip['x'], trip['y'] = mymap.convert_coordinates(trip[['Longitude','Latitude']].values).T",
"Import the shapefiles from Mapzen for Boston",
"reload(osmapping)\nmymap.load_shape_file('./shapefiles/boston/line.shp')\nmymap.load_shape_file('./shapefiles/boston/polygon.shp')\n\n\nmymap.shapes.shape\n\ncoords = [(79,80),(15,24)]\n\nprint zip(*coords)\nprint zip(*[(1,1),(2,2)])\n#print mymap.basemap([79,15],[80,24])\nprint mymap.basemap(79,80)\nprint mymap.basemap(15,24)\nprint zip(*mymap.basemap(*zip(*coords)))\n",
"Select most road-types and some parks for plotting",
"mymap.clear_selected_shapes()\n\nroad = {'edgecolor':'white','lw':3, 'facecolor':'none','zorder':6};\n\nmymap.select_shape('highway','motorway',**road)\nmymap.select_shape('highway','trunk',**road)\nmymap.select_shape('highway','primary',**road)\nmymap.select_shape('highway','secondary',**road)\nmymap.select_shape('highway','tertiary',**road)\nmymap.select_shape('highway','residential',**road)\nmymap.select_shape('leisure','park',facecolor='#BBDDBB',edgecolor='none',zorder=4)\nmymap.select_shape('waterway','riverbank',facecolor='#0044CC', edgecolor='none', zorder=5)\n\nmymap.select_shape('natural','water',facecolor='#CCCCEE', edgecolor='none', zorder=5)\n\nbselect = lambda x: x['building'] in ['yes', 'apartments', 'commercial', 'house', 'residential', 'university', 'church', 'garage'] \n\nbldg = {'facecolor':'none', 'edgecolor':'#dedede', 'hatch':'////','zorder':7}\nmymap.select_shapes(bselect, **bldg)",
"Plot the basemap and then overlay the trip trace",
"for trip in trips:\n trip.loc[trip.Satellites < 5,'Satellites'] = None\n trip.loc[trip.Accuracy > 20,'Accuracy'] = None\n trip.dropna(subset=['Accuracy'], inplace=True)\n\nfig = plt.figure(figsize=(12,12))\nax = fig.add_subplot(111)\nmymap.draw_map(ax, map_fill='#eeeeee')\n\nfor (idx,trip) in enumerate(trips):\n ax.plot(trip.x, trip.y, lw=2, alpha=1,zorder=99, label=str(idx))\n\nplt.legend()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
DJCordhose/ai
|
notebooks/workshops/tss/cnn-intro.ipynb
|
mit
|
[
"CNN Intro\n\nhttps://keras.io/applications/\nhttp://www.image-net.org/",
"import warnings\nwarnings.filterwarnings('ignore')\n\n%matplotlib inline\n%pylab inline\n\nimport matplotlib.pylab as plt\nimport numpy as np\n\nfrom distutils.version import StrictVersion\n\nimport sklearn\nprint(sklearn.__version__)\n\nassert StrictVersion(sklearn.__version__ ) >= StrictVersion('0.18.1')\n\nimport tensorflow as tf\ntf.logging.set_verbosity(tf.logging.ERROR)\nprint(tf.__version__)\n\nassert StrictVersion(tf.__version__) >= StrictVersion('1.1.0')\n\nimport keras\nprint(keras.__version__)\n\nassert StrictVersion(keras.__version__) >= StrictVersion('2.0.0')",
"Modell-Architektur\nhttp://cs231n.github.io/neural-networks-1/#power\nLayout of a typical CNN\n\nhttp://cs231n.github.io/convolutional-networks/\nClassic VGG like Architecture\n\nwe use a VGG like architecture\nbased on https://arxiv.org/abs/1409.1556\nbasic idea: sequential, deep, small convolutional filters, use dropouts to reduce overfitting\n16/19 layers are typical\nwe choose less layers, because we have limited resources\n\nConvolutional Blocks: Cascading many Convolutional Layers having down sampling in between\n\nhttp://cs231n.github.io/convolutional-networks/#conv\nExample of a Convolution\nOriginal Image\n\nMany convolutional filters applied over all channels\n\nhttp://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html\nDownlsampling Layer: Reduces data sizes and risk of overfitting\n\n\nhttp://cs231n.github.io/convolutional-networks/#pool\nActivation Functions",
"def centerAxis(uses_negative=False):\n # http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot\n ax = plt.gca()\n ax.spines['left'].set_position('center')\n if uses_negative:\n ax.spines['bottom'].set_position('center')\n ax.spines['right'].set_color('none')\n ax.spines['top'].set_color('none')\n ax.xaxis.set_ticks_position('bottom')\n ax.yaxis.set_ticks_position('left')",
"Sigmoid\n\nThis is the classic\nContinuous version of step function",
"def np_sigmoid(X):\n return 1 / (1 + np.exp(X * -1))\n\nx = np.arange(-10,10,0.01)\ny = np_sigmoid(x)\n\ncenterAxis()\nplt.plot(x,y,lw=3)",
"Relu\n\nperfect for blacking out everyhing beyong threshold\nthis is just what everyone actually uses",
"def np_relu(x):\n return np.maximum(0, x)\n\nx = np.arange(-10, 10, 0.01)\ny = np_relu(x)\n\ncenterAxis()\nplt.plot(x,y,lw=3)",
"The classic VGG16 Architecture",
"def predict(model, img_path):\n img = image.load_img(img_path, target_size=(224, 224))\n x = image.img_to_array(img)\n x = np.expand_dims(x, axis=0)\n x = preprocess_input(x)\n\n preds = model.predict(x)\n # decode the results into a list of tuples (class, description, probability)\n # (one such list for each sample in the batch)\n print('Predicted:', decode_predictions(preds, top=3)[0])\n\nfrom keras import applications\n# applications.VGG16?\nvgg16_model = applications.VGG16(weights='imagenet')",
"VGG starts with a number of convolutional blocks for feature extraction and ends with a fully connected classifier",
"vgg16_model.summary()\n\n!curl -O https://upload.wikimedia.org/wikipedia/commons/thumb/d/de/Beagle_Upsy.jpg/440px-Beagle_Upsy.jpg",
"",
"predict(model = vgg16_model, img_path = '440px-Beagle_Upsy.jpg')\n\n!curl -O https://djcordhose.github.io/ai/img/cat-bonkers.png\n\npredict(model = vgg16_model, img_path = 'cat-bonkers.png')\n\n!curl -O https://djcordhose.github.io/ai/img/squirrels/original/Michigan-MSU-raschka.jpg\n!curl -O https://djcordhose.github.io/ai/img/squirrels/original/Black_New_York_stuy_town_squirrel_amanda_ernlund.jpeg\n!curl -O https://djcordhose.github.io/ai/img/squirrels/original/london.jpg",
"",
"predict(model = vgg16_model, img_path = 'Michigan-MSU-raschka.jpg')\n\npredict(model = vgg16_model, img_path = 'Black_New_York_stuy_town_squirrel_amanda_ernlund.jpeg')\n\npredict(model = vgg16_model, img_path = 'london.jpg')",
"What does the CNN \"see\"?\nDoes it \"see\" the right thing?\n\nEach filter output of a convolutional layer is called feature channel\nwith each input they should ideally either be\nblank if they do not recognize any feature in the input or\nencode what the feature channel \"sees\" in the input\nfeature channels directly before FC layers are often called bottleneck feature channels\n\nSome activations from bottleneck features:",
"# create a tmp dir in the local directory this notebook runs in, otherwise quiver will fail (and won't tell you why)\n!rm -rf tmp\n!mkdir tmp",
"Visualizing feature channels using Quiver\nOnly works locally",
"# https://github.com/keplr-io/quiver\n\n# Alternative with more styles of visualization: https://github.com/raghakot/keras-vis\n\n# https://github.com/keplr-io/quiver\nfrom quiver_engine import server\nserver.launch(vgg16_model, input_folder='.', port=7000)\n\n# open at http://localhost:7000/\n# interrupt kernel to return control to notebook",
"Modern Alternative: Resnet\n\nhttps://keras.io/applications/#resnet50\nhttps://arxiv.org/abs/1512.03385\nNew Layer Type: https://keras.io/layers/normalization/",
"from keras.applications.resnet50 import ResNet50\nfrom keras.preprocessing import image\nfrom keras.applications.resnet50 import preprocess_input, decode_predictions\nimport numpy as np\n\nresnet_model = ResNet50(weights='imagenet')\n\nresnet_model.summary()\n\npredict(model = resnet_model, img_path = 'cat-bonkers.png')\n\npredict(model = resnet_model, img_path = 'Michigan-MSU-raschka.jpg')\n\npredict(model = resnet_model, img_path = 'Black_New_York_stuy_town_squirrel_amanda_ernlund.jpeg')\n\npredict(model = resnet_model, img_path = 'london.jpg')",
"Hands-On 1 (CNN Overview)\nExperiment with all Kinds of Layers: https://transcranial.github.io/keras-js/#/mnist-cnn\n\n* Try to fool the network by incrementally drawing ambiguous digits\nSide Node: Keras.js makes all Keras Models available in the Browser\n\n\nHands-On 2 (Filter Kernel Details)\nTry out Filter Kernels: http://setosa.io/ev/image-kernels/\n\n* Try out Filter Kernels Sharpen and Blur on a speed limit sign: https://github.com/DJCordhose/speed-limit-signs/raw/master/data/real-world/4/100-sky-cutoff-detail.jpg\n* Create a custom filter"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
robertoalotufo/ia898
|
deliver/Leitura-Display-imagem-com-matplotlib.ipynb
|
mit
|
[
"Leitura e display de imagens com matplotlib\nimportando",
"import matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np",
"Leitura usando matplotlib native e com PIL\nO matplotlib possui a leitura nativa de imagens no formato png. Quando este formato é lido, a imagem é automaticamente mapeada da faixa 0 a 255 contido no pixel uint8 da imagem para float 0 a 1 no pixel do array lido\nJá se o formato for outro, se o PIL estiver instalado, a leitura é feita pelo PIL e neste caso o tipo do pixel é mantido em uint8, de 0 a 255. Veja no exemplo a seguir\nLeitura de imagem em níveis de cinza de imagem TIFF:\nComo a imagem lida é TIFF, o array lido fica no tipo uint8, com valores de 0 a 255",
"f = mpimg.imread('../data/cameraman.tif')\nprint(f.dtype,f.shape,f.max(),f.min())",
"Leitura de imagem colorida formato TIFF\nQuando a imagem é colorida e não está no formato png, matplotlib utiliza PIL para leitura. O array terá o tipo uint8 e o shape do array é organizado em (H, W, 3).",
"fcor = mpimg.imread('../data/boat.tif')\nprint(fcor.dtype,fcor.shape,fcor.max(),fcor.min())",
"Leitura de imagem colorida formato png\nSe a imagem está no formato png, o matplotlib mapeia os pixels de 0 a 255 para float de 0 a 1.0",
"fcor2 = mpimg.imread('../data/boat.tif')\nprint(fcor2.dtype, fcor2.shape, fcor2.max(), fcor2.min())",
"Mostrando as imagens lidas",
"%matplotlib inline\nplt.imshow(f, cmap='gray')\nplt.colorbar()\n\nplt.imshow(fcor)\nplt.colorbar()\n\nplt.imshow(fcor2)\nplt.colorbar()",
"Observe que o display mostra apenas a última chamada do imshow",
"plt.imshow(fcor2)\nplt.imshow(fcor)\nplt.imshow(f, cmap='gray')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
cogstat/cogstat
|
cogstat/docs/CogStat Jupyter Notebook tutorial.ipynb
|
gpl-3.0
|
[
"CogStat in Jupyter Notebook\n(The table of contents below may not be visible on all systems.)\n<h1 id=\"tocheading\">Table of Contents</h1>\n<div id=\"toc\"></div>\n<script src=\"https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js\"></script>\n\nCogStat is an open source and free statistical tool written in Python. It helps you to choose and collect the most typical statistical analysis for a task, making the analysis faster, more precise and more efficient.\nCogStat can be used in Jupyter Notebook. This tutorial presents the main functions of CogStat (as in version 2.0), and it shows how to use them.",
"%matplotlib inline\n\nimport sys\nimport os\nimport pandas as pd\n\n# Import CogStat, and we will abbreviate it as cs\nfrom cogstat import cogstat as cs\n# The version of CogStat is available in the __version__ variable\nprint(cs.__version__)\n\ncs_dir, dummy_filename = os.path.split(cs.__file__) # We use this for the demo data",
"Quick demo",
"# Let's see a very quick demonstration, how CogStat works\n# All results can be seen below. Appropriate graphs and statistics were chosen and compiled automatically by CogStat.\n\n# Load some data\ndata = cs.CogStatData(data = os.path.join(cs_dir, 'sample_data', 'example_data.csv'))\n# Display the data below\ncs.display(data.print_data())\n# Let's compare two variables\ncs.display(data.compare_variables(['X', 'Y']))\n# Let's compare two groups in a variable\ncs.display(data.compare_groups('X', grouping_variables=['TIME']))",
"Import and display data\nCogStat can import from three sources:\n- It can read from file (either SPSS .sav file or tab separated txt files)\n- It can convert pandas data frames (only in the Jupyter Notebook interface)\n- It can read multiline string",
"### Import from file ###\n\n\"\"\"\nThe file should have the following structure:\n- The first line should contain the names of the variables.\n- The second lines can contain the measurement levels (int, ord or nom). This is optional, but recommended.\n- The rest of the file is your data.\n\"\"\"\n\n# New CogStat data can be created with the CogStatData class of the cogstat module\n# For importing a file, the data parameter should include tha path of the file\ndata = cs.CogStatData(data=os.path.join(cs_dir, 'sample_data', 'example_data.csv'))\n# The filename looks a bit complicated here, but it is only to make sure that the tutorial works OK.\n# Instead you could use a simple path like this:\n# data = cs.CogStatData(data='path/to/file/filename.csv')\n\n# Now let's display our imported data.\n\n# All methods of the CogStatData class return a list of html files and graphs.\n# These items can be displayed with the cogstat.display function\n\n# To display the current data, use the print_data() method to create the appropriate html output,\n# and display it with the cogstat.display function\nresult = data.print_data()\ncs.display(result)\n# Or you can write it shorter:\ncs.display(data.print_data())\n\n# If your csv file doesn't include the measurement levels, you can specify them in the import process.\ndata = cs.CogStatData(data=os.path.join(cs_dir, 'sample_data', 'example_data_no_levels.csv'), measurement_level='nom nom nom nom nom int ord ord')\ncs.display(data.print_data())\n\n# If your file does include the measurement levels, and you still specify it, then your specification \n# overwrites the file settings\ndata = cs.CogStatData(data=os.path.join(cs_dir, 'sample_data', 'example_data.csv'), measurement_level='nom nom nom nom nom int ord ord')\ncs.display(data.print_data())\n\n# If your csv file doesn't include the measurement levels, and you do not specify them, then CogStat sets them \n# nom (nominal) for string variables and unk (unkown) otherwise.\ndata = cs.CogStatData(data=os.path.join(cs_dir, 'sample_data', 'example_data_no_levels.csv'))\ncs.display(data.print_data())\n\n# Or simply read your SPSS .sav file\ndata = cs.CogStatData(data=os.path.join(cs_dir, 'sample_data', 'example_data.sav'))\ncs.display(data.print_data())\n\n\n### Import from pandas ###\n\n# First, we create a pandas dataframe\ndata = {'one' : [1., 3., 3., 4.],\n 'two' : [4., 3., 2., 1.]}\npandas_data = pd.DataFrame(data)\nprint(pandas_data)\n\n# Then we simply specify the pandas data to import\ndata = cs.CogStatData(data=pandas_data)\ncs.display(data.print_data())\n\n# Again, you can specify the measurement level\ndata = cs.CogStatData(data=pandas_data, measurement_level='ord ord')\ncs.display(data.print_data())\n\n\n### Import from multiline string ###\n\n# Use \\t to separate columns and \\n to separate rows.\ndata_string = '''A\\tB\\tC\nnom\\tint\\tord\na\\t123\\t23\nb\\t143\\t42'''\n\ndata = cs.CogStatData(data=data_string)\ncs.display(data.print_data())\n\n# measurement_level parameter can be used as in the case of the file and the pandas import",
"Filter outliers\nCases can be filtered based on outliers.\nIn its simplest form (2sd method) a case is an outlier if based on the appropriate variable its value is more extreme than the average +- 2 standard deviation.\nWhen several variables are used for filtering, a case will be an outlier, if it is an outlier based on any of the variables.\nFiltering is kept until a new filtering is set, which will overwrite the previous filtering.",
"# Let's import a data file\ndata = cs.CogStatData(data = os.path.join(cs_dir, 'sample_data', 'example_data.csv'))\ncs.display(data.print_data())\n\n# To turn on filtering based on a singla variable:\n# Note that even if only a single variable is given, it should be in a list.\ncs.display(data.filter_outlier(['X']))\ncs.display(data.print_data())\n\n# To turn on filtering based on several variables simultaniously:\ncs.display(data.filter_outlier(['X', 'Y']))\ncs.display(data.print_data())\n\n# To turn off filtering:\ncs.display(data.filter_outlier(None))\ncs.display(data.print_data())",
"Analyse the data\nCogStat collects the most typical analysis into a single task, and chooses the appropriate methods. This is one of the main strength of CogStat: you don't have to figure out what statistics to use, and don't have to click through several menus and dialogs, but you get all the main (and only the relevant) information with a single command.",
"# Here are all the available CogStat analysis packages\n# Hopefully all these method names speak for themselves\n# In a function the chosen analysis will automatically depend on the measurement level, and other properties of the data.\n\n# Load some data\ndata = cs.CogStatData(data=os.path.join(cs_dir, 'sample_data', 'example_data.csv'))\n# Display the data\ncs.display(data.print_data())\n\n### Explore variable ###\n# Get the most important statistics of a single variable\ncs.display(data.explore_variable('X', frequencies=True, central_value=0.0))\n# A shorter, but less readable version:\n#cs.display(data.explore_variable('X', 1, 0.0))\n\n### Explore variable pair ###\n# Get the statistics of a variable pair\n# Optionally \ncs.display(data.explore_variable_pair('X', 'Y', xlims=[None, None], ylims=[None, None]))\n\n### Pivot tables ###\n# Pivot tables are only available from the GUI at the moment\n# Fortunatelly, all CogStat pivot computations can be run in pandas\n\n### Behavioral data diffusion analyses ###\n# cs.display(data.diffusion(error_name=['error'], RT_name=['RT'], participant_name=['participant_id'], condition_names=['loudness', 'side']))\n\n### Compare variables ###\n# Specify two or more variables to compare\n# Optionally set the visible range of the y axis\ncs.display(data.compare_variables(['X', 'Y'], factors=[], ylims=[None, None]))\n# To use several factors add the factor names and levels, too. Variable names will be assigned to the factor \n# level combinations automatically.\n# cs.display(data.compare_variables(['F1S1', 'F1S2', 'F2S1', 'F2S2']), factors=[['first factor', 2], ['second factor', 2]])\n\n\n### Compare groups ###\n# Specify a dependent and a grouping variable\n# Optionally set the visible range of the y axis\ncs.display(data.compare_groups('X', grouping_variables=['TIME'], ylims=[None, None]))\n",
"Summary (Cheatsheet)",
"### Import data ###\n# Import from file\ndata = cs.CogStatData(data=os.path.join(cs_dir, 'sample_data', 'example_data.sav'))\ndata = cs.CogStatData(data=os.path.join(cs_dir, 'sample_data', 'example_data.csv'))\ndata = cs.CogStatData(data=os.path.join(cs_dir, 'sample_data', 'example_data.csv'), \n measurement_level='nom nom nom nom nom int ord ord')\n# Import from pandas\ndata = cs.CogStatData(data=pandas_data)\ndata = cs.CogStatData(data=pandas_data, measurement_level='ord ord')\n# Import from multiline string\ndata = cs.CogStatData(data=data_string)\ndata = cs.CogStatData(data=data_string, measurement_level='ord ord')\n\n\n### Display the data ###\ndata = cs.CogStatData(data=os.path.join(cs_dir, 'sample_data', 'example_data.csv'))\ncs.display(data.print_data())\n\n\n### Filter outliers ###\n# Filter outliers based on a single variable\ncs.display(data.filter_outlier(['X']))\n# Filter outliers based on several variables simultaniously\ncs.display(data.filter_outlier(['X', 'Y']))\n# Turn off filtering\ncs.display(data.filter_outlier(None))\n\n\n### Analyse the data ###\n# Explore variable\ncs.display(data.explore_variable('X', frequencies=True, central_value=0.0))\n# Explore variable pair\ncs.display(data.explore_variable_pair('X', 'Y'))\n# Compare variables\ncs.display(data.compare_variables(['X', 'Y']))\n# Compare groups\ncs.display(data.compare_groups('X', grouping_variables=['TIME']))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
root-mirror/training
|
SoftwareCarpentry/04-histograms-and-graphs.ipynb
|
gpl-2.0
|
[
"ROOT histograms\nHistogram class documentation\nROOT has powerful histogram objects that, among other features, let you produce complex plots and perform fits of arbitrary functions.\nTH1F is a 1D histogram with Floating point y-axis, TH2I is a 2D histogram with Integer y-axis, etc.\n<center><img src=\"images/examplehisto.png\"><center>\nTo have something to play with, let's quickly fill a histogram with 5000 normally distributed values:",
"import ROOT\nh = ROOT.TH1D(name=\"h\", title=\"My histo\", nbinsx=100, xlow=-5, xup=5)\n\n\nh.FillRandom(\"gaus\", ntimes=5000)",
"To check the full documentation you can always refer to https://root.cern/doc/master (and then switch to the documentation for your particular ROOT version with the drop-down menu at the top of the page).\nDrawing a histogram\nDrawing options documentation\nThe link above contains the documentation for the histogram drawing options.\nIn a notebook, as usual, we want to also use the %jsroot on magic and also explicitly draw a TCanvas.",
"%jsroot on\nc = ROOT.TCanvas()\n#h.SetLineColor(ROOT.kBlue)\n#h.SetFillColor(ROOT.kBlue)\n#h.GetXaxis().SetTitle(\"value\")\n#h.GetYaxis().SetTitle(\"count\")\n#h.SetTitle(\"My histo with latex: p_{t}, #eta, #phi\")\nh.Draw() # draw the histogram on the canvas\nc.Draw() # draw the canvas on the screen",
"ROOT functions\nThe type that represents an arbitrary one-dimensional mathematical function in ROOT is TF1.<br>\nSimilarly, TF2 and TF3 represent 2-dimensional and 3-dimensional functions.\nAs an example, let's define and plot a simple surface:",
"f2 = ROOT.TF2(\"f2\", \"sin(x*x - y*y)\", xmin=-2, xmax=2, ymin=-2, ymax=2)\n\nc = ROOT.TCanvas()\nf2.Draw(\"surf1\") # to get a surface instead of the default contour plot\nc.Draw()",
"Fitting a histogram\nLet's see how to perform simple histogram fits of arbitrary functions. We will need a TF1 that represents the function we want to use for the fit.\nThis time we define our TF1 as a C++ function (note the usage of the %%cpp magic to define some C++ inline). Here we define a simple gaussian with scale and mean parameters (par[0] and par[1] respectively):",
"%%cpp\n\ndouble gaussian(double *x, double *par) {\n return par[0]*TMath::Exp(-TMath::Power(x[0] - par[1], 2.) / 2.)\n / TMath::Sqrt(2 * TMath::Pi());\n}",
"The function signature, that takes an array of coordinates and an array of parameters as inputs, is the generic signature of functions that can be used to construct a TF1 object:",
"fitFunc = ROOT.TF1(\"fitFunc\", ROOT.gaussian, xmin=-5, xmax=5, npar=2)",
"Now we fit our h histogram with fitFunc:",
"res = h.Fit(fitFunc)",
"Drawing the histogram now automatically also shows the fitted function:",
"c2 = ROOT.TCanvas()\nh.Draw()\nc2.Draw()",
"For the particular case of a gaussian fit, we could also have used the built-in \"gaus\" function, as we did when we called FillRandom (for the full list of supported expressions see here):",
"res = h.Fit(\"gaus\")\n\nc3 = ROOT.TCanvas()\nh.Draw()\nc3.Draw()",
"For more complex binned and unbinned likelihood fits, check out RooFit, a powerful data modelling framework integrated in ROOT.\nROOT graphs\nTGraph is a type useful for scatter plots.\nTheir drawing options are documented here.\nLike for histograms, the aspect of TGraphs can be greatly customized, they can be fitted with custom functions, etc.",
"g = ROOT.TGraph()\n\nfor x in range(-20, 21):\n y = -x*x\n g.AddPoint(x, y)\n\nc4 = ROOT.TCanvas()\ng.SetMarkerStyle(7)\ng.SetLineColor(ROOT.kBlue)\ng.SetTitle(\"My graph\")\ng.Draw()\nc4.Draw()",
"The same graph can be displayed as a bar plot:",
"c5 = ROOT.TCanvas()\ng.SetTitle(\"My graph\")\ng.SetFillColor(ROOT.kOrange + 1) # base colors can be tweaked by adding/subtracting values to them \ng.Draw(\"AB1\")\nc5.Draw()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
miguelgr/python-crash-course
|
workshop/fundamentals/fundamental.ipynb
|
mit
|
[
"Python Fundamentals\nAnatomy of your very simple first script",
"#!/usr/bin/env python\n\ndef main():\n print(\"Hello World!\")\n\nif __name__ == '__main__':\n main()\n",
"Shebang: /usr/bin/python2 or /usr/bin/python3\nmain function\nWhat the heck is __name__ or __file__ ? Snake charmers\n\nAdding parameters to your script ...\nYour first imports and function parameters: *args, **kwargs\n$ python hello_montoya.py \"Hello\" \"My name is Iñigo Montoya\" \"You killed my father\" \"Prepare to Die\"\nEncoding\nCheck out How to unicode\nBrief history\nASCII (American Standard Code for Information Interexchange) 1968.\nEnglish alphabet took the conversion of a letter to a digit, between 0-127.\nMid 1980's computers -> 8-bit (0-255)\nBut what happens to accents? cyrilic alphabets? French (Latin1 or ISO-8859-1) Russian (KOI8)?\nUNICODE Standarization with 16-bit (2^16 = 65.535 distinct values)\nDefinitions\n\nCharacter: smallest component of text (\"A\", \"É\")\nUnicode: code points: integer value usually denoted in base 16\nUnicode string: Serie of code points from 0 to 0x010ffff.\nUnicode scapes:\n- \\xhh -> \\xf1 == ñ\n- \\uhhhh ->\n- \\Uhhhhhhhh\nEncoding: translates a unicode string sequence of bytes\n\nComment -- X: Y -- (inspired by Emacs, PEP 263)\n# -*- coding: latin-1 -*-\nIn Python 3 the default encoding: UTF-8\nAll strings → python3 -c 'print(\"buenos dias\" \"hyvää huomenta\" \"\"\"おはようございます\"\"\")' are unicode\nIMPORTS\nOfficial Docs\nNamespace is designed to overcome this difficulty and is used to differentiate functions, classes, variables etc. with the same name, available in different modules.\nA Python module is simply a Python source file, which can expose classes, functions and global variables. When imported from another Python source file, the file name is sometimes treated as a namespace.\n__main__ is the name of the scope in which top-level code executes. \nA module’s __name__ variable is set to __main__ when read from standard input, a script, or from an interactive prompt.\n```python\n!/usr/bin/env python\nimport os\nimport sys\nif name == \"main\":\nsettings_module = \"settings.local\"\nos.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", settings_module)\n\nfrom django.core.management import execute_from_command_line\n\nexecute_from_command_line(sys.argv)\n\n```\nA Python package is simply a directory of Python module(s).\n__init__.py.\nThe __init__.py file is the first thing that gets executed when a package is loaded.\nrocklab/ ...\nspacelab/\n __init__.py\n manage.py\n utils/\n __init__.py\n physics.py\n multidimensional/\n __init__.py\n laws.py\n rockets/\n __init__.py\n engine.py \n base.py # Defines a rocket model that exposes all its functionality\nRelative imports specific location of the modules to be imported are relative to the current package.\n\nlaws.py\n\npython\nfrom ..phsycis import gravity\n\nbase.py\n\n```python\nfrom ..mutidimensional.laws import InterDimensionalTravel\nfrom .engine import (Motor, turn_on_engine, turn_off_engine,\n Bolt)\nAvoid the use of \\ for linebreaks and use parenthesis. Lisp people will be happy\nfrom .engine import Motor, turn_on_engine, turn_off_engine, \\\n Bolt \n```\nAbsolute imports an import where you fully specify the location of the entities being imported.\n\nbase.py\n\npython\nfrom utils.multidimensional.laws import Infinity\nfrom rockets.engine import Motor\nCircular imports happen when you create two modules that import each other.\n\nrockets.engine.py\n\n```python\nfrom .base import bad_design_decision\nD'ouh\ndef inside_the_function(*params):\n # avoid circular import, or ci. 
Is good to mention (IMAO)\n from rockets.base import bad_design_decision\n bad_design_decision(params)\n```\nHere comes the future\nOfficial Docs\nNote for absolute imports:\nfrom __future__ import absolute_import\nKeep Reading\nAbout imports in Python\nAbout future imports",
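"As a quick illustration of the escape forms and of the encode/decode step mentioned above (Python 3 assumed, where every str is already a unicode string):",
"# -*- coding: utf-8 -*-\n\n# The escape forms \\xhh, \\uhhhh and \\Uhhhhhhhh all denote code points\nprint('\\xf1', '\\u00f1', '\\U000000f1')  # prints: ñ ñ ñ\n\n# Encoding translates a unicode string into bytes; decoding turns bytes back into str\nword = 'Iñigo'\ndata = word.encode('utf-8')\nprint(data)                          # b'I\\xc3\\xb1igo'\nprint(data.decode('utf-8') == word)  # True",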
"#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport sys\n\ndef main(*args, **kwargs):\n \"\"\"Simple main function that prints stuff\"\"\"\n print(args) # args is a tuple of positional params\n print(kwargs) # kwargs is a dict with keyword params\n ## The names are just a mere convention\n\nif __name__ == '__main__':\n main(sys.argv) # input params from the command line\n",
"Programming Types\nBoolean: True, False\nNoneType: None\nMutable / Immutable Objects\nSome immutable types:\nint, float, long, complex\nstr\nbytes\ntuple\nfrozen set\n\nSome mutable types:\nbyte array\nlist\nset\ndict",
"# Immutable Objects\n\nage = 60 # int\nweight = 77.8 # float\ninfinite = float('inf')\nname = \"Rick\" # basestring/str\nnick_names = (\"Sanchez\", \"Grandpa\") # tuple\n\njobs = frozenset((\"scientist\", \"inventor\", \"arms salesman\", \"store owner\"))\n\n# Mutable Objects\ninterests = ['interdimensional travel', 'nihilism', 'alcohol']\ninfo = {\n \"name\": name,\n \"last_names\": last_names,\n \"age\": age\n}\nredundant = set(interests)\n\n\n# Information from objects\ntype(age) # int\nisinstance(age, int) # True \ntype(infinite)\ntype(name)\nisinstance(name, basestring)\n# typve vs isinstance: type doesnt check for object subclasses\n# we will discuss the type constructor later on",
"Why immutable objects?",
"# integers\nprint(id(age))\nage += 10\nprint(id(age))\nage -= 10\nprint(id(age))\n\n\n# Strings\nprint(name + \": Wubba lubba dub-dub!!\")\nprint(name.replace(\"R\", \"r\"))\nprint(name.upper(), name.lower())\n\n\n\n# Tuples\noperations = \"test\", # note the comma as it makes it a tuple!!! | tuple.count/index\nprint(id(operations))\noperations += ('build', 'deploy')\nprint(operations, id(operations))\n\n\n\n## Tuple assignment\n\ndef say(*args):\n print(args)\n \nsay(range(8))\n\n# Packing \ntest, build, deploy = \"Always passing\", \"A better world\", \"Your mind\"\n\n# OK but use parenthesis :)\n\n(test, build, deploy) = (\"Always passing\", \"A better world\", \"Your mind\")\n\n\nprint(\"Test: \", test)\nprint(\"Build: \" + build)\nprint(\"Deploy: \" + deploy)\n\n\n\n# Unpacking\ntest, build, deploy = operations\nprint(test, build, deploy)\n\n# You are warned: # ERROR -- too many values to unpack\n# https://docs.python.org/3.6/tutorial/controlflow.html#unpacking-argument-lists\n\n# lists\n\n",
"Operators\nOfficial Docs\nArithmetic Operators: + - / // * ** %",
"print(1 + 3)\nprint(2 ** 10)\nprint(5 % 3)\n\n\n\n\nprint(10 / 4) 2\n\nfrom __future__ import division\n\nprint(10 / 4) 2.5 # float division\n\n10 // 4 # in python 3 to get integer division\n\nimport operator\n\noperator.add(2, 4)\noperator.gt(10, 5)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
billzhao1990/CS231n-Spring-2017
|
assignment1/.ipynb_checkpoints/knn-checkpoint.ipynb
|
mit
|
[
"k-Nearest Neighbor (kNN) exercise\nComplete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.\nThe kNN classifier consists of two stages:\n\nDuring training, the classifier takes the training data and simply remembers it\nDuring testing, kNN classifies every test image by comparing to all training images and transfering the labels of the k most similar training examples\nThe value of k is cross-validated\n\nIn this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.",
"# Run some setup code for this notebook.\n\nimport random\nimport numpy as np\nfrom cs231n.data_utils import load_CIFAR10\nimport matplotlib.pyplot as plt\n\nfrom __future__ import print_function\n\n# This is a bit of magic to make matplotlib figures appear inline in the notebook\n# rather than in a new window.\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# Some more magic so that the notebook will reload external python modules;\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\n# Load the raw CIFAR-10 data.\ncifar10_dir = 'cs231n/datasets/cifar-10-batches-py'\nX_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)\n\n# As a sanity check, we print out the size of the training and test data.\nprint('Training data shape: ', X_train.shape)\nprint('Training labels shape: ', y_train.shape)\nprint('Test data shape: ', X_test.shape)\nprint('Test labels shape: ', y_test.shape)\n\n# Visualize some examples from the dataset.\n# We show a few examples of training images from each class.\nclasses = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']\nnum_classes = len(classes)\nsamples_per_class = 7\nfor y, cls in enumerate(classes):\n idxs = np.flatnonzero(y_train == y)\n idxs = np.random.choice(idxs, samples_per_class, replace=False)\n for i, idx in enumerate(idxs):\n plt_idx = i * num_classes + y + 1\n plt.subplot(samples_per_class, num_classes, plt_idx)\n plt.imshow(X_train[idx].astype('uint8'))\n plt.axis('off')\n if i == 0:\n plt.title(cls)\nplt.show()\n\n# Subsample the data for more efficient code execution in this exercise\nnum_training = 5000\nmask = list(range(num_training))\nX_train = X_train[mask]\ny_train = y_train[mask]\n\nnum_test = 500\nmask = list(range(num_test))\nX_test = X_test[mask]\ny_test = y_test[mask]\n\n# Reshape the image data into rows\nX_train = np.reshape(X_train, (X_train.shape[0], -1))\nX_test = np.reshape(X_test, (X_test.shape[0], -1))\nprint(X_train.shape, X_test.shape)\n\nfrom cs231n.classifiers import KNearestNeighbor\n\n# Create a kNN classifier instance. \n# Remember that training a kNN classifier is a noop: \n# the Classifier simply remembers the data and does no further processing \nclassifier = KNearestNeighbor()\nclassifier.train(X_train, y_train)",
"We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps: \n\nFirst we must compute the distances between all test examples and all train examples. \nGiven these distances, for each test example we find the k nearest examples and have them vote for the label\n\nLets begin with computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in a Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example.\nFirst, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.",
"# Open cs231n/classifiers/k_nearest_neighbor.py and implement\n# compute_distances_two_loops.\n\n# Test your implementation:\ndists = classifier.compute_distances_two_loops(X_test)\nprint(dists.shape)\n\n# We can visualize the distance matrix: each row is a single test example and\n# its distances to training examples\nplt.imshow(dists, interpolation='none')\nplt.show()",
"Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visible brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)\n\nWhat in the data is the cause behind the distinctly bright rows?\nWhat causes the columns?\n\nYour Answer: fill this in.\n* The distinctly bright rows indicate that they are all far away from all the training set (outlier)\n* The distinctly bright columns indicate that they are all far away from all the test set",
"# Now implement the function predict_labels and run the code below:\n# We use k = 1 (which is Nearest Neighbor).\ny_test_pred = classifier.predict_labels(dists, k=1)\n\n# Compute and print the fraction of correctly predicted examples\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))",
"You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5:",
"y_test_pred = classifier.predict_labels(dists, k=5)\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))",
"You should expect to see a slightly better performance than with k = 1.",
"# Now lets speed up distance matrix computation by using partial vectorization\n# with one loop. Implement the function compute_distances_one_loop and run the\n# code below:\ndists_one = classifier.compute_distances_one_loop(X_test)\n\n# To ensure that our vectorized implementation is correct, we make sure that it\n# agrees with the naive implementation. There are many ways to decide whether\n# two matrices are similar; one of the simplest is the Frobenius norm. In case\n# you haven't seen it before, the Frobenius norm of two matrices is the square\n# root of the squared sum of differences of all elements; in other words, reshape\n# the matrices into vectors and compute the Euclidean distance between them.\ndifference = np.linalg.norm(dists - dists_one, ord='fro')\nprint('Difference was: %f' % (difference, ))\nif difference < 0.001:\n print('Good! The distance matrices are the same')\nelse:\n print('Uh-oh! The distance matrices are different')\n\n# Now implement the fully vectorized version inside compute_distances_no_loops\n# and run the code\ndists_two = classifier.compute_distances_no_loops(X_test)\n\n# check that the distance matrix agrees with the one we computed before:\ndifference = np.linalg.norm(dists - dists_two, ord='fro')\nprint('Difference was: %f' % (difference, ))\nif difference < 0.001:\n print('Good! The distance matrices are the same')\nelse:\n print('Uh-oh! The distance matrices are different')\n\n# Let's compare how fast the implementations are\ndef time_function(f, *args):\n \"\"\"\n Call a function f with args and return the time (in seconds) that it took to execute.\n \"\"\"\n import time\n tic = time.time()\n f(*args)\n toc = time.time()\n return toc - tic\n\ntwo_loop_time = time_function(classifier.compute_distances_two_loops, X_test)\nprint('Two loop version took %f seconds' % two_loop_time)\n\none_loop_time = time_function(classifier.compute_distances_one_loop, X_test)\nprint('One loop version took %f seconds' % one_loop_time)\n\nno_loop_time = time_function(classifier.compute_distances_no_loops, X_test)\nprint('No loop version took %f seconds' % no_loop_time)\n\n# you should see significantly faster performance with the fully vectorized implementation",
"Cross-validation\nWe have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.",
"num_folds = 5\nk_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]\n\nX_train_folds = []\ny_train_folds = []\n################################################################################\n# TODO: #\n# Split up the training data into folds. After splitting, X_train_folds and #\n# y_train_folds should each be lists of length num_folds, where #\n# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #\n# Hint: Look up the numpy array_split function. #\n################################################################################\n#pass\nX_train_folds = np.array_split(X_train, num_folds)\ny_train_folds = np.array_split(y_train, num_folds)\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# A dictionary holding the accuracies for different values of k that we find\n# when running cross-validation. After running cross-validation,\n# k_to_accuracies[k] should be a list of length num_folds giving the different\n# accuracy values that we found when using that value of k.\nk_to_accuracies = {}\n\n\n\n################################################################################\n# TODO: #\n# Perform k-fold cross validation to find the best value of k. For each #\n# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #\n# where in each case you use all but one of the folds as training data and the #\n# last fold as a validation set. Store the accuracies for all fold and all #\n# values of k in the k_to_accuracies dictionary. #\n################################################################################\n#pass\nfor k in k_choices:\n inner_accuracies = np.zeros(num_folds)\n for i in range(num_folds):\n X_sub_train = np.concatenate(np.delete(X_train_folds, i, axis=0))\n y_sub_train = np.concatenate(np.delete(y_train_folds, i, axis=0))\n print(X_sub_train.shape,y_sub_train.shape)\n \n X_sub_test = X_train_folds[i]\n y_sub_test = y_train_folds[i]\n print(X_sub_test.shape,y_sub_test.shape)\n \n classifier = KNearestNeighbor()\n classifier.train(X_sub_train, y_sub_train)\n \n dists = classifier.compute_distances_no_loops(X_sub_test)\n pred_y = classifier.predict_labels(dists, k)\n num_correct = np.sum(y_sub_test == pred_y)\n inner_accuracies[i] = float(num_correct)/X_test.shape[0]\n \n k_to_accuracies[k] = np.sum(inner_accuracies)/num_folds\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# Print out the computed accuracies\nfor k in sorted(k_to_accuracies):\n for accuracy in k_to_accuracies[k]:\n print('k = %d, accuracy = %f' % (k, accuracy))\n\nX_train_folds = np.array_split(X_train, 5)\nt = np.delete(X_train_folds, 1,axis=0)\n\nprint(X_train_folds)\n\n# plot the raw observations\nfor k in k_choices:\n accuracies = k_to_accuracies[k]\n plt.scatter([k] * len(accuracies), accuracies)\n\n# plot the trend line with error bars that correspond to standard deviation\naccuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])\naccuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])\nplt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)\nplt.title('Cross-validation on k')\nplt.xlabel('k')\nplt.ylabel('Cross-validation accuracy')\nplt.show()\n\n# Based on the cross-validation results above, choose the best value for k, \n# 
retrain the classifier using all the training data, and test it on the test\n# data. You should be able to get above 28% accuracy on the test data.\nbest_k = 1\n\nclassifier = KNearestNeighbor()\nclassifier.train(X_train, y_train)\ny_test_pred = classifier.predict(X_test, k=best_k)\n\n# Compute and display the accuracy\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
PyLCARS/PythonUberHDL
|
myHDL_ComputerFundamentals/Memorys/.ipynb_checkpoints/Memory-checkpoint.ipynb
|
bsd-3-clause
|
[
"\\title{Memories in myHDL}\n\\author{Steven K Armour}\n\\maketitle",
"from myhdl import *\nfrom myhdlpeek import Peeker\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom sympy import *\ninit_printing()\n\nimport random\n\n#https://github.com/jrjohansson/version_information\n%load_ext version_information\n%version_information myhdl, myhdlpeek, numpy, pandas, matplotlib, sympy, random\n\n#helper functions to read in the .v and .vhd generated files into python\ndef VerilogTextReader(loc, printresult=True):\n with open(f'{loc}.v', 'r') as vText:\n VerilogText=vText.read()\n if printresult:\n print(f'***Verilog modual from {loc}.v***\\n\\n', VerilogText)\n return VerilogText\n\ndef VHDLTextReader(loc, printresult=True):\n with open(f'{loc}.vhd', 'r') as vText:\n VerilogText=vText.read()\n if printresult:\n print(f'***VHDL modual from {loc}.vhd***\\n\\n', VerilogText)\n return VerilogText",
"RTL and Implimentation Schamatics are from Xilinx Vivado 2016.1\nRead Only Memory (ROM)\nROM is a memory structure that holds static information that can only be read from. In other words, these are hard-coded instruction memory. That should never change. Furthermore, this data is held in a sort of array; for example, we can think of a python tuple as a sort of read-only memory since the content of a tuple is static and we use array indexing to access a certain portions of the memory.",
"#use type casting on list genrator to store 0-9 in 8bit binary\nTupleROM=tuple([bin(i, 8) for i in range(10)])\nTupleROM\n\nf'accesss location 6: {TupleROM[6]}, read contents of location 6 to dec:{int(TupleROM[6], 2)}'",
"And if we try writing to the tuple we will get an error",
"#TupleROM[6]=bin(16,2)",
"Random and Sequntial Access Memory\nSo to start off with the Random in RAM does not mean Random in a proplositc sence. It refares to Random as in you can randomly access any part of the data array opposed to the now specility sequantil only memory wich are typicly made with a counter or stat machine to sequance that acation\nHDL Memeorys\nin HDL ROM the data is stored a form of a D flip flop that are structerd in a sort of two diminal array where one axis is the address and the other is the content and we use a mux to contorl wich address \"row\" we are trying to read. There fore we have two signals: address and content. Where the address contorls the mux.\nROM Preloaded",
"@block\ndef ROMLoaded(addr, dout):\n \"\"\"\n A ROM laoded with data already incoded in the structer\n insted of using myHDL inchanced parmter loading\n \n I/O:\n addr(Signal>4): addres; range is from 0-3\n dout(Signal>4): data at each address\n \"\"\"\n \n @always_comb\n def readAction():\n if addr==0:\n dout.next=3\n elif addr==1:\n dout.next=2\n elif addr==2:\n dout.next=1\n \n elif addr==3:\n dout.next=0\n \n return instances()\n \n\nPeeker.clear()\naddr=Signal(intbv(0)[4:]); Peeker(addr, 'addr')\ndout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')\n\nDUT=ROMLoaded(addr, dout)\n\ndef ROMLoaded_TB():\n \"\"\"Python Only Testbench for `ROMLoaded`\"\"\"\n @instance\n def stimules():\n for i in range(3+1):\n addr.next=i\n yield delay(1)\n raise StopSimulation()\n \n return instances()\n\nsim = Simulation(DUT, ROMLoaded_TB(), *Peeker.instances()).run()\n\nPeeker.to_wavedrom()\n\nPeeker.to_dataframe()\n\nDUT.convert()\nVerilogTextReader('ROMLoaded');",
"ROMLoaded RTL\n<img src='ROMLoadedRTL.png'>\nROMLoaded Synthesis\n<img src='ROMLoadedSynth.png'>",
"@block\ndef ROMLoaded_TBV():\n \"\"\"Verilog Only Testbench for `ROMLoaded`\"\"\"\n clk = Signal(bool(0))\n addr=Signal(intbv(0)[4:])\n dout=Signal(intbv(0)[4:])\n \n DUT=ROMLoaded(addr, dout)\n \n @instance\n def clk_signal():\n while True:\n clk.next = not clk\n yield delay(10)\n\n \n @instance\n def stimules():\n for i in range(3+1):\n addr.next=i\n #yield delay(1)\n yield clk.posedge\n raise StopSimulation\n\n \n @always(clk.posedge)\n def print_data():\n print(addr, dout)\n \n return instances()\n\n#create instaince of TB\nTB=ROMLoaded_TBV()\n#convert to verilog with reintilzed values\nTB.convert(hdl=\"Verilog\", initial_values=True)\n#readback the testbench results\nVerilogTextReader('ROMLoaded_TBV');",
"With myHDL we can dynamicaly load the contents that will be hard coded in the convertion to verilog/VHDL wich is an ammazing benfict for devlopment as is sean here",
"@block\ndef ROMParmLoad(addr, dout, CONTENT):\n \"\"\"\n A ROM laoded with data from CONTENT input tuple\n \n I/O:\n addr(Signal>4): addres; range is from 0-3\n dout(Signal>4): data at each address\n Parm:\n CONTENT: tuple size 4 with contende must be no larger then 4bit\n \"\"\"\n @always_comb\n def readAction():\n dout.next=CONTENT[int(addr)]\n \n return instances()\n \n\nPeeker.clear()\naddr=Signal(intbv(0)[4:]); Peeker(addr, 'addr')\ndout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')\nCONTENT=tuple([i for i in range(4)][::-1])\n\nDUT=ROMParmLoad(addr, dout, CONTENT)\n\ndef ROMParmLoad_TB():\n \"\"\"Python Only Testbench for `ROMParmLoad`\"\"\"\n @instance\n def stimules():\n for i in range(3+1):\n addr.next=i\n yield delay(1)\n raise StopSimulation()\n\n \n return instances()\n\nsim = Simulation(DUT, ROMParmLoad_TB(), *Peeker.instances()).run()\n\nPeeker.to_wavedrom()\n\nPeeker.to_dataframe()\n\nDUT.convert()\nVerilogTextReader('ROMParmLoad');",
"ROMParmLoad RTL\n<img src=\"ROMParmLoadRTL.png\">\nROMParmLoad Synthesis\n<img src=\"ROMParmLoadSynth.png\">",
"@block\ndef ROMParmLoad_TBV():\n \"\"\"Verilog Only Testbench for `ROMParmLoad`\"\"\"\n clk=Signal(bool(0))\n addr=Signal(intbv(0)[4:])\n dout=Signal(intbv(0)[4:])\n CONTENT=tuple([i for i in range(4)][::-1])\n\n DUT=ROMParmLoad(addr, dout, CONTENT)\n \n @instance\n def clk_signal():\n while True:\n clk.next = not clk\n yield delay(1)\n\n @instance\n def stimules():\n for i in range(3+1):\n addr.next=i\n yield clk.posedge\n raise StopSimulation\n\n\n \n @always(clk.posedge)\n def print_data():\n print(addr, dout)\n \n return instances()\n\n#create instaince of TB\nTB=ROMParmLoad_TBV()\n#convert to verilog with reintilzed values\nTB.convert(hdl=\"Verilog\", initial_values=True)\n#readback the testbench results\nVerilogTextReader('ROMParmLoad_TBV');",
"we can also create rom that insted of being asynchronous is synchronous",
"@block\ndef ROMParmLoadSync(addr, dout, clk, rst, CONTENT):\n \"\"\"\n A ROM laoded with data from CONTENT input tuple\n \n I/O:\n addr(Signal>4): addres; range is from 0-3\n dout(Signal>4): data at each address\n clk (bool): clock feed\n rst (bool): reset\n Parm:\n CONTENT: tuple size 4 with contende must be no larger then 4bit\n \"\"\"\n @always(clk.posedge)\n def readAction():\n if rst:\n dout.next=0\n else:\n dout.next=CONTENT[int(addr)]\n \n return instances()\n \n\nPeeker.clear()\naddr=Signal(intbv(0)[4:]); Peeker(addr, 'addr')\ndout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')\nclk=Signal(bool(0)); Peeker(clk, 'clk')\nrst=Signal(bool(0)); Peeker(rst, 'rst')\nCONTENT=tuple([i for i in range(4)][::-1])\n\nDUT=ROMParmLoadSync(addr, dout, clk, rst, CONTENT)\n\ndef ROMParmLoadSync_TB():\n \"\"\"Python Only Testbench for `ROMParmLoadSync`\"\"\"\n \n @always(delay(1))\n def ClkGen():\n clk.next=not clk\n \n @instance\n def stimules():\n for i in range(3+1):\n yield clk.posedge\n addr.next=i\n \n for i in range(4):\n yield clk.posedge\n rst.next=1 \n addr.next=i\n\n raise StopSimulation()\n\n \n return instances()\n\nsim = Simulation(DUT, ROMParmLoadSync_TB(), *Peeker.instances()).run()\n\nPeeker.to_wavedrom()\n\nROMData=Peeker.to_dataframe()\n#keep only clock high\nROMData=ROMData[ROMData['clk']==1]\nROMData.drop(columns='clk', inplace=True)\nROMData.reset_index(drop=True, inplace=True)\nROMData\n\nDUT.convert()\nVerilogTextReader('ROMParmLoadSync');",
"ROMParmLoadSync RTL\n<img src=\"ROMParmLoadSyncRTL.png\">\nROMParmLoadSync Synthesis\n<img src=\"ROMParmLoadSyncSynth.png\">",
"@block\ndef ROMParmLoadSync_TBV():\n \"\"\"Python Only Testbench for `ROMParmLoadSync`\"\"\"\n \n addr=Signal(intbv(0)[4:])\n dout=Signal(intbv(0)[4:])\n clk=Signal(bool(0))\n rst=Signal(bool(0))\n CONTENT=tuple([i for i in range(4)][::-1])\n\n DUT=ROMParmLoadSync(addr, dout, clk, rst, CONTENT)\n\n \n @instance\n def clk_signal():\n while True:\n clk.next = not clk\n yield delay(1)\n\n @instance\n def stimules():\n for i in range(3+1):\n yield clk.posedge\n addr.next=i\n \n for i in range(4):\n yield clk.posedge\n rst.next=1 \n addr.next=i\n raise StopSimulation\n\n\n \n @always(clk.posedge)\n def print_data():\n print(addr, dout, rst)\n \n return instances()\n\n#create instaince of TB\nTB=ROMParmLoadSync_TBV()\n#convert to verilog with reintilzed values\nTB.convert(hdl=\"Verilog\", initial_values=True)\n#readback the testbench results\nVerilogTextReader('ROMParmLoadSync_TBV');\n\n@block\ndef SeqROMEx(clk, rst, dout):\n \"\"\"\n Seq Read Only Memory Ex\n I/O:\n clk (bool): clock\n rst (bool): rst on counter\n dout (signal >4): data out\n \"\"\"\n Count=Signal(intbv(0)[3:])\n \n @always(clk.posedge)\n def counter():\n if rst:\n Count.next=0\n elif Count==3:\n Count.next=0\n \n else:\n Count.next=Count+1\n \n @always(clk.posedge)\n def Memory():\n if Count==0:\n dout.next=3\n elif Count==1:\n dout.next=2\n elif Count==2:\n dout.next=1\n elif Count==3:\n dout.next=0\n \n return instances()\n\nPeeker.clear()\ndout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')\nclk=Signal(bool(0)); Peeker(clk, 'clk')\nrst=Signal(bool(0)); Peeker(rst, 'rst')\n\nDUT=SeqROMEx(clk, rst, dout)\n\ndef SeqROMEx_TB():\n \"\"\"Python Only Testbench for `SeqROMEx`\"\"\"\n\n @always(delay(1))\n def ClkGen():\n clk.next=not clk\n \n @instance\n def stimules():\n for i in range(5+1):\n yield clk.posedge\n \n for i in range(4):\n yield clk.posedge\n rst.next=1 \n\n raise StopSimulation()\n\n \n return instances()\n\nsim = Simulation(DUT, SeqROMEx_TB(), *Peeker.instances()).run()\n\nPeeker.to_wavedrom()\n\nSROMData=Peeker.to_dataframe()\n#keep only clock high\nSROMData=SROMData[SROMData['clk']==1]\nSROMData.drop(columns='clk', inplace=True)\nSROMData.reset_index(drop=True, inplace=True)\nSROMData\n\nDUT.convert()\nVerilogTextReader('SeqROMEx');",
"SeqROMEx RTL\n<img src=\"SeqROMExRTL.png\">\nSeqROMEx Synthesis\n<img src=\"SeqROMExSynth.png\">",
"@block\ndef SeqROMEx_TBV():\n \"\"\"Verilog Only Testbench for `SeqROMEx`\"\"\"\n \n dout=Signal(intbv(0)[4:])\n clk=Signal(bool(0))\n rst=Signal(bool(0))\n\n DUT=SeqROMEx(clk, rst, dout)\n \n @instance\n def clk_signal():\n while True:\n clk.next = not clk\n yield delay(1)\n \n \n @instance\n def stimules():\n for i in range(5+1):\n yield clk.posedge\n \n for i in range(4):\n yield clk.posedge\n rst.next=1 \n\n raise StopSimulation()\n \n @always(clk.posedge)\n def print_data():\n print(clk, rst, dout)\n \n\n \n return instances()\n\n\n\n#create instaince of TB\nTB=SeqROMEx_TBV()\n#convert to verilog with reintilzed values\nTB.convert(hdl=\"Verilog\", initial_values=True)\n#readback the testbench results\nVerilogTextReader('SeqROMEx_TBV');",
"read and write memory",
"@block\ndef RAMConcur(addr, din, writeE, dout, clk):\n \"\"\"\n Random access read write memeory\n I/O:\n addr(signal>4): the memory cell arrdress\n din (signal>4): data to write into memeory\n writeE (bool): write enable contorl; false is read only\n dout (signal>4): the data out\n clk (bool): clock\n \n Note:\n this is only a 4 byte memory\n \"\"\"\n #create the memeory list (1D array)\n memory=[Signal(intbv(0)[4:]) for i in range(4)]\n \n @always(clk.posedge)\n def writeAction():\n if writeE:\n memory[addr].next=din\n \n @always_comb\n def readAction():\n dout.next=memory[addr]\n \n return instances()\n \n\nPeeker.clear()\naddr=Signal(intbv(0)[4:]); Peeker(addr, 'addr')\ndin=Signal(intbv(0)[4:]); Peeker(din, 'din')\nwriteE=Signal(bool(0)); Peeker(writeE, 'writeE')\ndout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')\nclk=Signal(bool(0)); Peeker(clk, 'clk')\nCONTENT=tuple([i for i in range(4)][::-1])\n\nDUT=RAMConcur(addr, din, writeE, dout, clk)\n\ndef RAMConcur_TB():\n \"\"\"Python Only Testbench for `RAMConcur`\"\"\"\n\n \n @always(delay(1))\n def ClkGen():\n clk.next=not clk\n \n @instance\n def stimules():\n # do nothing\n for i in range(1):\n yield clk.posedge\n \n #write memory\n for i in range(4):\n yield clk.posedge\n writeE.next=True\n addr.next=i\n din.next=CONTENT[i]\n \n #do nothing\n for i in range(1):\n yield clk.posedge\n writeE.next=False\n \n #read memory\n for i in range(4):\n yield clk.posedge\n addr.next=i\n\n # rewrite memory\n for i in range(4):\n yield clk.posedge\n writeE.next=True\n addr.next=i\n din.next=CONTENT[-i]\n \n #do nothing\n for i in range(1):\n yield clk.posedge\n writeE.next=False\n \n #read memory\n for i in range(4):\n yield clk.posedge\n addr.next=i\n \n raise StopSimulation()\n \n \n\n \n return instances()\n\nsim = Simulation(DUT, RAMConcur_TB(), *Peeker.instances()).run()\n\nPeeker.to_wavedrom()\n\nRAMData=Peeker.to_dataframe()\nRAMData=RAMData[RAMData['clk']==1]\nRAMData.drop(columns='clk', inplace=True)\nRAMData.reset_index(drop=True, inplace=True)\nRAMData\n\nRAMData[RAMData['writeE']==1]\n\nRAMData[RAMData['writeE']==0]\n\nDUT.convert()\nVerilogTextReader('RAMConcur');",
"RAMConcur RTL\n<img src=\"RAMConcurRTL.png\">\nRAMConcur Synthesis\n<img src=\"RAMConcurSynth.png\">",
"@block\ndef RAMConcur_TBV():\n \"\"\"Verilog Only Testbench for `RAMConcur`\"\"\"\n addr=Signal(intbv(0)[4:])\n din=Signal(intbv(0)[4:])\n writeE=Signal(bool(0))\n dout=Signal(intbv(0)[4:])\n clk=Signal(bool(0))\n CONTENT=tuple([i for i in range(4)][::-1])\n\n DUT=RAMConcur(addr, din, writeE, dout, clk)\n\n \n @instance\n def clk_signal():\n while True:\n clk.next = not clk\n yield delay(1)\n \n @instance\n def stimules():\n # do nothing\n for i in range(1):\n yield clk.posedge\n \n #write memory\n for i in range(4):\n yield clk.posedge\n writeE.next=True\n addr.next=i\n din.next=CONTENT[i]\n \n #do nothing\n for i in range(1):\n yield clk.posedge\n writeE.next=False\n \n #read memory\n for i in range(4):\n yield clk.posedge\n addr.next=i\n\n # rewrite memory\n for i in range(4):\n yield clk.posedge\n writeE.next=True\n addr.next=i\n din.next=CONTENT[-i]\n \n #do nothing\n for i in range(1):\n yield clk.posedge\n writeE.next=False\n \n #read memory\n for i in range(4):\n yield clk.posedge\n addr.next=i\n \n raise StopSimulation()\n \n \n @always(clk.posedge)\n def print_data():\n print(addr, din, writeE, dout, clk)\n \n\n \n return instances()\n\n#create instaince of TB\nTB=RAMConcur_TBV()\n#convert to verilog with reintilzed values\nTB.convert(hdl=\"Verilog\", initial_values=True)\n#readback the testbench results\nVerilogTextReader('RAMConcur_TBV');"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ChinmaiRaman/phys227-final
|
Final.ipynb
|
mit
|
[
"import numpy as np\nimport sympy as sp\nimport pandas as pd\nimport math\n\nimport final as p1\n\nimport matplotlib.pyplot as plt\n\n%matplotlib inline",
"Final\nChinmai Raman\n5/21/2016\nGiven three real-valued functions of time x(t), y(t), z(t), consider the following coupled first-order ODEs:\n$x˙ = −y − z, y˙ = x + ay, z˙ = b + z(x − c)$\nwhere a = b = 0.2 and c is a parameter that we will tune. Note that this system has a single nonlinear term\nxz. \nI will be exploring the consequences of this nonlinearity.\nProblem 1 (c = 2)",
"ros = p1.Rossler(2)\nros.run()\n\nros.plotx()\n\nros.ploty()\n\nros.plotz()\n\nros.plotxy()\n\nros.plotyz()\n\nros.plotxz()\n\nros.plotxyz()",
"Problem 2 (c = 2, 3, 4, 4.15, 4.2, 5.7)\nc = 2 is plotted above\nc = 3",
"ros3 = p1.Rossler(3)\nros3.run()\n\nros3.plotx()\n\nros3.plotxy()\n\nros3.plotxyz()",
"Already we can see a bifurcation occuring in the y vs x and z vs y vs x graphs that were not there in the case of c =2. The nonlinearity in the z variable has begun to become active in that the trajectory is leaving the x-y plane. \nThe x vs t graph shows us that the x-values are now alternating between four values, as opposed to two previously. This is identical to the behavior we saw from the logistic update map on the midterm.\nc = 4",
"ros4 = p1.Rossler(4)\nros4.run()\n\nros4.plotx()\n\nros4.plotxy()\n\nros4.plotxyz()",
"Another bifurcation has now occured and is apparent in the y vs x graph. The limits of the x-values are now eight-fold; the number of values that x converges to has doubled again. The influence of the non-linearity in z is now very obvious.\nc = 4.15",
"ros415 = p1.Rossler(4.15)\nros415.run()\n\nros415.plotx()\n\nros415.plotxy()\n\nros415.plotxyz()",
"The period doubling is occuring at an increasing rate. This is demonstrated by the thicker lines in the xy and xyz graphs. This period doubling phase will soon end as the system approaches complete chaos and the our predictive power decreases greatly.\nc = 4.2",
"ros42 = p1.Rossler(4.2)\nros42.run()\n\nros42.plotx()\n\nros42.plotxy()\n\nros42.plotxyz()",
"The lines are getting thicker as the bifurcations increase at an increasing rate. It is now not immediately apparent how many asymptotic values x approaches from the x vs t graph.\nc = 5.7",
"ros57 = p1.Rossler(5.7)\nros57.run()\n\nros57.plotx()\n\nros57.plotxy()\n\nros57.plotxyz()",
"The period doubling cascade has given rise to a chaotic attractor with a single lobe. This is an example of spiral-type chaos and exhibits the characteristic sensitivity to initial conditions. The oscillations in the x vs t graph are now completely chaotic and irregular in amplitude. The logistic update map also displays the same behavior of a period doubling cascade giving rise to a chaotic system. As we increase c, this system, like the logistic update map from the midterm as ve vary initial conditions, also demonstrates the stretching and folding quality that we discussed in class. The xyz graph is also reminiscent of a mobius strip in that the underside becomes the upperside via the portion of the graph not in the x-y plane.\nProblem 3",
"p1.plotmaxima('x')\n\np1.plotmaxima('y')\n\np1.plotmaxima('z')",
"The diagram of the local maxima of x vs c shows the bifurcation in maxima as c increases. The first bifurcation in x-values occurs around 2.7. The next occurs around 3.7, followed by one around 4.2. The system attains chaos around 5.7, at which point our predictive power of the asymptotic x-values is gone. This behavior is similar to that from the logistic update map from the midterm. The local maxima of y vs c plot is similar to the x vs c, except that the graph is squished downward. The bifurcations occur at the same values, however. The z maxima vs c graph is particularly interesting, because the majority of maxima occur at values below five, but there is a small chain of maxima that continue to increase in value as c increases. These maxima constitute the lobe that exits the x-y plane as show in the xyz graph from problem 1.\nUnfortunately, python was taking too long for me to use the required mesh spacing of 0.001, so I simply used as many points as I could for the bifurcation diagram without ipython having to take an hour to visualize a single plot instead. I assume that this would have worked much faster in Julia. Hopefully, the diagrams are still relatively clear."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
luizfmoura/datascience
|
Luiz_Fernando_De_Moura_2021_1_Practice_1_Introduction_to_Colab_and_Keras.ipynb
|
gpl-2.0
|
[
"<a href=\"https://colab.research.google.com/github/luizfmoura/datascience/blob/master/Luiz_Fernando_De_Moura_2021_1_Practice_1_Introduction_to_Colab_and_Keras.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nWhat is Google Colab?\nI use Google services - search engine, GMail, Google Sites, Drive (Spreadsheet, Docs, Presentation) - since their beginning . I really enjoy their quality and simplicity. More recently, Google launched the coolest service ever. It provides through its infrastructure as a service environment, the so-called Google Compute Engine, a free cloud service based on Jupyter Notebooks and integrated to Drive allowing up to 12 hours of GPU access for free. \nColab is a Google software as a service product which is founded on a very interesting scientific software, the so-called Jupyter Notebook. Brought about by the IPython (Iterative Python) Project, Jupyter is among the favorites packages for prototyping and education of the python ecosystem. \nDuring this course Colab will be used to preform your assignments throughout Google Classroom. Besides being adherent to Classroom, it is also integrated to the Google Drive, where Jupyter Notebook files (saved using IPYNB extension) are stored. \nBy and large, Colab may be understood as a environment for machine learning and computational sciences, which permits prototyping and training algorithms using python as supporting language. It is basically a Google fork of Jupyter Notebook. Click\nhere for further information. \nColab's architecture is very similar to the original Jupyter Notebook, which is presented in more detail in the next session. \n1 - High level achitecture of Jupyter Notebook\nA Notebook file is basically a junction of python code cells and rich formatted text cells. That combination brings about an interesting way to cope with computational assignments, since students are either able to run their code and get code running results integrated to the Notebook. Then, when professors get the file after assignments completion, together with discussion presented on the notebook text, and highlighted code blocks, their final outputs will still remain there to be graded. \nIn therms of implementation, when a IPYNB, which is basically a JSON, is opened, a web application is started throughout your default Internet Browser. The entire application has a front-end, which runs directly on the browser, and a back-end, represented by the Notebook server and the kernel. This server controls the application both by storing data on the IPYNB and keeping track of its respective Kernel session and workspace, as well sending code blocks to be run on it. \n<img src=\"https://jupyter.readthedocs.io/en/latest/_images/notebook_components.png\" alt=\"Jupyter Notebook\" width=\"633\" height=\"357\">\n1.1 - Text cells\nTo add a Text Cell, it is necessary to click on the '+ Text'. Inside a text cell you may provide a rich format text using the Markdown language. 
See the samples bellow:\nTitles and subtitles\n```\nTitle\nSubtitle\nSubsubtitle and so on.\n```\nText formatting\nRegular text doesn't require any special character.\n**Bold text** produces Bold text \n*Italic text* exhibtis Italic text\n`Monospace font` generates Monospace font\n~~Strikethrough~~ produces ~~Strikethrough~~\nHorizontal separator\nTo make a horizontal line you make 3x'-'\n\nLists\nThe following bullet list\n * apples\n - chery\n * oranges\n * pears\nis generated by the code bellow:\n* apples\n - chery\n * oranges\n * pears\nNumbered list:\n\nlather\nrinse\nrepeat\n\nBlocks of code\nRaw code blocks\n~~~~\nprint(a)\n~~~~\nRich code blocks\ncss\n1 python code, \n2 shell command \n3 or magic command\nQuotation\n\nQuoted text\n\nQuoted quoted text\n\n\nMathematical and logic expressions\nIt also possible including expressions presented using LaTex syntax. For more details, see this reference guide and vast documentation Latex formulae documentation. For including inline equations use \\$ before and after the equation code, e.g. $\\mathbf{x}={x_1, x_2, ..., x_n}^t$. \nOn the other hand, using \\$\\$ before and after the equation, it is possible including it centered on the next line\n$$x = \\frac{-b \\pm \\sqrt{b^2-4ac}}{2a}.$$\nTables\nJupyter allow also including tables, where character | means a column separator while a sequence of - indicates a horizontal line. \nFilename | Size | Error \n--------- | ------ | ------\njpeg | 265KB | 0.1\npng | 198KB | 0.2\n\nHTML coding\n<strong>Bold font HTML tag</stong>\n<H4> Html header level 4.</H4>\n<em>italic and so on.</em>\n1.2 - Code cells\nTo add a Code Cell, it is necessary to click on the '+ Code'. Inside such cells you may put python code, magic commands or shell commands. See, the examples bellow:\nPython code\nSimple example:",
"a=[10,20]\nb=[5]\nprint (a+b)\nprint(a[0])\nprint(a[1])\nc=[[1,2,3],[4,5,6],[7,8,9]]\nprint(c[1][2])\n\n%timeit L = [n ** 2 for n in range(1000)]\n\n%lsmagic",
"Magic commands\nMagic commands start with % charcter, e.g.:",
"%cd /content",
"Shell commands\nShell commands must be precided by ! character, e.g.:",
"!ls -lah",
"2 - Colab sessions\nI invite you to explore Colab front-end menus and see what you get. \nWhen you open a Colab file, you gain a virtual machine session inside the Google Compute Engine.\nSpecial attention should be payed to Colab sessions operation. On one hand, code, text and last result of each code cell execution in the notebook are automatically saved in your Google Drive during the session using IPYNB extension. Nonetheless, on the other hand, virtual execution environment and workspace are lost each time the notebook file is closed. In other words, kernel is reset to the factory raw configuration, losing the information from imported libraries, variables, image files downloaded to the execution environment and so on.\nTherefore, a brand new virtual machine is set up with Google default configuration, no mater how many packages you installed on the last session. Files downloaded in previous sessions and anything else won’t be available anymore, unless you put them on your google drive. You may execute each code cell in any order, however, errors may occur if dependencies are not respected. \nGoogle raw virtual machine has a default software configuration. See bellow some alternatives to reconfigure the environment.\n2.1 - Managing your virtual machine\nAccording to its documentation, Colab is a Google Research product dedicated to allowing the execution of arbitrary Python code on Google virtual machines. \nColab focuses on supporting Python, its ecosystem and third-party tools. Despite users interest for other Jupyter Notebook kernels (e.g. Julia, R or Scala), Python is still the only programming language natively supported by Colab. \nIn order to overcome the virtual machine default configuration limitations you may want or need to manage your virtual machine software infrastructure. If that is the case, your first choice should be using pip, the package installer for Python. This stands for a sort of package manager for the Python ecosystem as well as third part packages. \nSee bellow a brief list of pip commands:\nTo install the latest version of a package:\n\n >>pip install 'PackageName'\n\nTo install a specific version, type the package name followed by the required version:\n\n >>pip install 'PackageName==1.4'\n\nTo upgrade an already installed package to the latest from PyPI:\n\n >>pip install --upgrade PackageName\n\nUninstalling/removing a package is very easy with pip:\n\n >>pip uninstall PackageName\n\nFor more details about pip follow its user guide.",
"!pip freeze | grep keras\nprint()\n!pip freeze | grep tensorflow\n\n!apt update\n\n\n!apt list --upgradable\n\n!apt upgrade\n\n\n!python3 --version\n\n!uname -r",
"3 - Hands-on TensorFlow + Keras\n3.1 - Load tensor flow",
"import tensorflow as tf",
"3.2 - Dataset preparation\nImport dataset\nModified NIST (MNIST) is a database of handwritten digits. It encompasses a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set produced by EUA National Institute for Standards and Technology (NIST). Images have a fixed-size image and is also available in keras library.",
"mnist = tf.keras.datasets.mnist\n\n(x_train, y_train), (x_test, y_test) = mnist.load_data() #verificar se esta tupla é realmente necessária\n\nprint (\"Training set info:\",x_train.shape)\nprint (\"Train target info:\",y_train.shape)\nprint (\"Test set info:\",x_test.shape)\nprint (\"Test target info:\",y_test.shape)",
"Show sample images",
"import matplotlib.pyplot as plt\n\nplt.figure(figsize=(10, 10))\nfor i in range(100,200):\n ax = plt.subplot(10, 10, i-99)\n plt.axis(\"off\")\n plt.imshow(x_train[i].reshape(28,28))\n plt.gray()",
"Normalize data between [0, 1]",
"x_train_norm, x_test_norm = x_train / 255.0, x_test / 255.0\n\nplt.figure(figsize=(10, 10))\nfor i in range(100):\n ax = plt.subplot(10, 10, i+1)\n plt.axis(\"off\")\n plt.imshow(x_test_norm[i].reshape(28,28))\n plt.gray()",
"3.3 - Create and Initialize Network Perceptron Architecture\nThe code bellow creates a single hidden layer perceptron. \nTask 1 Calculate the number of parameters of the model bellow.",
"model = tf.keras.models.Sequential([\n tf.keras.layers.Flatten(input_shape=(28, 28)),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dropout(0.2),\n tf.keras.layers.Dense(10, activation='softmax')\n])\n\n#Reference: https://towardsdatascience.com/how-to-calculate-the-number-of-parameters-in-keras-models-710683dae0ca\n#Input: 28*28*1\n#The Flattern layer doesn’t learn anything, and thus the number of parameters is 0.\n#Dense layer formula: param_number = output_channel_number * (input_channel_number + 1)\nfirstLayer = 128 * ((28*28*1) + 1)\nsecondLayer = 10 * (128 + 1)\ntotal_params = firstLayer + secondLayer\nprint(total_params)\nmodel.summary()",
"Task 2 Create a new version of the above code converting this sequential implementation into a functional one. Then encapsulate it in a Python function. Make the number of neurons in the hidden layer, the respective activation function and the dropout frequency parameters of such function. Implement it on the code cell bellow.",
"def createFuncModel(hiddenNeurons: int = 128, activationFunc: str = \"relu\", dropoutFrequency: float=0.2):\n input = tf.keras.Input(shape=(28,28))\n layer1 = tf.keras.layers.Flatten()\n layer2 = tf.keras.layers.Dense(hiddenNeurons, activation=activationFunc)\n layer3 = tf.keras.layers.Dropout(dropoutFrequency)\n layer4 = tf.keras.layers.Dense(10, activation='softmax')\n\n x = layer4(layer3(layer2(layer1(input))))\n\n return tf.keras.Model(inputs=input, outputs=x, name=\"mnist_model\")\n\nnewModel = createFuncModel()\nnewModel.summary()",
"3.4 - Network training\nDetails about training process can be found in the keras documentation \nPresent model details",
"model.summary()",
"Task 3: Compare the number of parameters calculated by yourself to the one provided by model.summary \nNumber of parameters are equal in both ways (calculated and returned by summary function)\nBuild network graph object for training",
"model.compile(\n optimizer='adam',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'],\n)",
"Task 4: If you try to change loss parameter to 'categorical_crossentropy' the model.fit will stop working. Please explain the reasons why. Make on the next text cell your considerations about Task 4.\nCategorical_crossentropy loss function requires the labels to be encoded in a one-hot representation, that is, each label would be a vector of binary encoded variables according to the categories. On the other hand, sparse_categorical_crossentropy works with integer encoded label, in other words, each label comes with an integer representing the right category. Since the provided training dataset comes with integer encoded labels, sparse_categorical_crossentropy shall be picked for this model.\nStart network training",
"EPOCHS = 10\nH = model.fit(x_train_norm, y_train, epochs=EPOCHS)",
"Task 5: Use the 'validation_split' parameter on the previous code cell to dedicate 10% of the training set to validation and run it again. Put your code on the cell bellow.",
"H = model.fit(x_train_norm, y_train, epochs=EPOCHS, validation_split = 0.1)",
"Task 6: Take note of the accuracy values. Then, run the model construction, the model compilation, and training again and see what happens with the accuracies. Pay attention to implicit parameters initialization and to the effective number o epochs of training in each case. \nScribe your answer to Task 6 on the text cell bellow.\nAccuracy values slightly drop in the original model after recompiling. The model with validation split data kept a good result.\nOriginal model:\nFirst run: best accuracy: 0.9907\nSecond run: best accuracy: 0.9808\nValidation split model: \nFirst run: best accuracy: 0.9938\nSecond run: best accuracy: 0.9953\nShow graphs\nThe function 'fit' returns a history object, which keeps track of the network training.",
"plt.figure(figsize=(10, 10))\nplt.plot( H.history[\"loss\"], label=\"train_loss\")\nplt.plot( H.history[\"accuracy\"], label=\"train_acc\")\nplt.plot( H.history[\"val_loss\"], label=\"validation_loss\")\nplt.plot( H.history[\"val_accuracy\"], label=\"validation_acc\")\n\nplt.title(\"Loss / accuracy evolution\")\nplt.xlabel(\"Epoch #\")\nplt.ylabel(\"Loss / Accuracy\")\nplt.ylim([0, 1])\nleg=plt.legend()\n\n\nH.history",
"Task 7: Include the validation loss and accuraccy on the above chart\n3.5 - Test Network\nThe final model must be evaluated in order to check its quality.",
"print(\"Train:\")\nTrain_Evaluation = model.evaluate(x_train_norm, y_train, verbose=2)\nprint(\"Test:\")\nTest_Evaluation = model.evaluate(x_test_norm, y_test, verbose=2)",
"Task 8: Notice the differences between the loss and accuracy presented by the 'model.fit' and the 'model.evaluate'. Don't find out such a strange outcome? Try to explain it on the following text cell. \nModel.fit method trains the model by adjusting the weights and minimizing the losses whereas the Model.evaluate method only tests the trained model and computes the accuracy and loss aided by the labeled test data. Thus, it is natural that the evaluation of never seen data results on an lower accuracy compared to evaluation of already trained data. It is even considered wrong to run evaluations with already seen data. \nGiven an image, it is possible using the trained network to preview its class using the 'model.predict' method. \nTask 9: Present ten test images aside the respective predictions.",
"predictions = model.predict(x_test[:10])\n\npredictions\n\n\nplt.figure(figsize=(10, 10))\nfor i in range(10):\n ax = plt.subplot(10, 10, i+1)\n plt.axis(\"off\")\n plt.imshow(x_test[i].reshape(28,28))\n plt.gray()\n print([j for j in range(10) if predictions[i][j] == 1])\n\n",
"3.6 - Save, del and load trained network",
"print(\"Train:\")\nTrain_Evaluation = model.evaluate(x_train_norm, y_train, verbose=2)\nprint(\"Test:\")\nTest_Evaluation = model.evaluate(x_test_norm, y_test, verbose=2)\n\nmodel.save('ultimate_model.h5') # creates a HDF5 file 'my_model.h5'\n\ndel model # deletes the existing model object\n\nfrom keras.models import load_model\nmodel = load_model('ultimate_model.h5')\nprint(\"Train:\")\nTrain_Evaluation = model.evaluate(x_train_norm, y_train, verbose=2)\nprint(\"Test:\")\nTest_Evaluation = model.evaluate(x_test_norm, y_test, verbose=2)\n\n\nfrom google.colab import drive\n\nprint(\"Train:\")\nTrain_Evaluation = model.evaluate(x_train_norm, y_train, verbose=2)\nprint(\"Test:\")\nTest_Evaluation = model.evaluate(x_test_norm, y_test, verbose=2)\n\ndrive.mount('/content/gdrive')\nroot_path = 'gdrive/My Drive/CComp/Deeplearning'\nmodel.save(root_path + '/ultimate_model.h5') # creates a HDF5 file 'my_model.h5'\n\ndel model # deletes the existing model object\n\nfrom keras.models import load_model\nmodel = load_model(root_path + '/ultimate_model.h5')\nprint(\"Train:\")\nTrain_Evaluation = model.evaluate(x_train_norm, y_train, verbose=2)\nprint(\"Test:\")\nTest_Evaluation = model.evaluate(x_test_norm, y_test, verbose=2)\n",
"Task 10: Modify the above code to persist and recover your HDF5 on google drive. See the following link for more information about how to mount your Google Drive into Colab. \n3.7 - Using training checkpoints\nThe code bellow save the model at each epoch in HDF5 format. \nTask 11: Use the callback options 'save_freq' and 'period' to save a copy of your model at each 3 epochs. Change it on code bellow.",
"%cd ../\nimport os\ncheckpoint_path = \"/content/MyFirstCkpt/\"\ncheckpoint_dir = os.path.dirname(checkpoint_path)\n\n# Create a callback that saves the model's weights\ncp_callback = tf.keras.callbacks.ModelCheckpoint(\n filepath=checkpoint_path +'model.{epoch:02d}-{val_loss:.2f}.h5',\n save_weights_only=True,\n verbose=1,\n save_freq='epoch',\n period=3\n)\n\n# Train the model with the new callback\nH = model.fit(\n x_train_norm, y_train, epochs=EPOCHS,\n validation_split = 0.1,\n callbacks=[cp_callback] # Pass callback to training\n) \n",
"Task 12: Produce in the cells bellow code pieces that help you to get the results necessary to fill in the following tables: \nHidden layer Neurons | Train accuracy |Test accuracy\n---- |-----|-----\n16| 0.8972 | 0.9396\n32 | 0.9475 | 0.9630\n64| 0.9723 | 0.9747\n128| 0.9854 | 0.9801\n256 | 0.9895 | 0.9795",
"#Put here the code for obtaining the values for distinct amount of hidden layer neurons.\nnewModel = createFuncModel(hiddenNeurons=256)\nnewModel.compile(\n optimizer='adam',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'],\n)\nH = newModel.fit(x_train_norm, y_train, epochs=10, validation_split = 0.1)\neval_results = newModel.evaluate(x_test_norm, y_test, verbose=2)\nprint(max(H.history['accuracy']))\nprint(eval_results[1])\n",
"Task 13: Using the number of neurons that best performed over the test set in the previous table, fill in the following one.\nDropout frequency | Train accuracy |Test accuracy\n--- |-----|-----\n0.1| 0.9894 | 0.9787\n0.2 | 0.9839 | 0.9791\n0.3| 0.9802 | 0.9786\n0.4| 0.9714 | 0.9778\n0.5 | 0.9622 | 0.9758",
"#Put here the code for obtaining the values for distinct values of dropout frequency.\nnewModel = createFuncModel(hiddenNeurons=128,dropoutFrequency=0.5)\nnewModel.compile(\n optimizer='adam',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'],\n)\nH = newModel.fit(x_train_norm, y_train, epochs=10, validation_split = 0.1)\neval_results = newModel.evaluate(x_test_norm, y_test, verbose=2)\nprint(max(H.history['accuracy']))\nprint(eval_results[1])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
bflaven/BlogArticlesExamples
|
stop_starting_start_stopping/pandas_convert_json/pandas-convert-json.ipynb
|
mit
|
[
"Pandas convert JSON into a DataFrame\nThis is a notebook for the medium article How to convert JSON into a Pandas DataFrame?\nPlease check out article for instructions\nLicense: BSD 2-Clause",
"import pandas as pd",
"1. Reading simple JSON from a local file",
"df = pd.read_json('data/simple.json')\ndf\n\ndf.info()",
"2. Reading simple JSON from a URL",
"URL = 'http://raw.githubusercontent.com/BindiChen/machine-learning/master/data-analysis/027-pandas-convert-json/data/simple.json'\ndf = pd.read_json(URL)\ndf\n\ndf.info()",
"3. Flattening nested list from JSON object",
"df = pd.read_json('data/nested_list.json')\ndf\n\nimport json\n# load data using Python JSON module\nwith open('data/nested_list.json','r') as f:\n data = json.loads(f.read())\n \n# Flatten data\ndf_nested_list = pd.json_normalize(data, record_path =['students'])\ndf_nested_list\n\n# To include school_name and class\ndf_nested_list = pd.json_normalize(\n data, \n record_path =['students'], \n meta=['school_name', 'class']\n)\ndf_nested_list",
"4. Flattening nested list and dict from JSON object",
"### working\nimport json\n# load data using Python JSON module\nwith open('data/nested_mix.json','r') as f:\n data = json.loads(f.read())\n \n# Normalizing data\ndf = pd.json_normalize(data, record_path =['students'])\ndf\n\n# Normalizing data\ndf = pd.json_normalize(\n data, \n record_path =['students'], \n meta=[\n 'class',\n ['info', 'president'], \n ['info', 'contacts', 'tel']\n ]\n)\ndf",
"5. Extracting a value from deeply nested JSON",
"df = pd.read_json('data/nested_deep.json')\ndf\n\ntype(df['students'][0])\n\n# to install glom inside your python env through the notebook\n# pip install glom\n\nfrom glom import glom\ndf['students'].apply(lambda row: glom(row, 'grade.math'))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Donnyvdm/courses
|
Machine Learning/1. Foundations A Case Study Approach/Week 3/Analyzing product sentiment.ipynb
|
unlicense
|
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nplt.style.use('ggplot')\n%matplotlib inline\n\nproducts = pd.read_csv('../data/amazon_baby.csv')",
"Let's explore the data",
"products.head()\n\nlen(products)\n\n# Sklearn does not work well with empty fields, so we're dropping all rows that have empty fields\nproducts = products.dropna()\nlen(products)",
"Build the word count vector for each review\nHere Sklearn works different from the Graphlab.\nWord counts are recorded in a sparse matrix, where every column is a unique word and every row is a review. For demonstration purposes and to stay in line with the lecture, the word_counts column is added here, but this is not actually used in the model later on. Instead, the word count vector cv will be used.",
"from sklearn.feature_extraction.text import CountVectorizer\ncv = CountVectorizer()\n\ncv.fit(products['review']) # Create the word count vector\nproducts['word_counts'] = cv.transform(products['review'])\n\nproducts.head()\n\nproducts['name'].describe()",
"The total number of reviews is lower than in the lecture video. Likely due to dropping the reviews with NA's.\nExplore Vulli Sophie",
"giraffe_reviews = products[products['name'] == 'Vulli Sophie the Giraffe Teether']\n\nlen(giraffe_reviews)\n\ngiraffe_reviews['rating'].hist()\n\ngiraffe_reviews['rating'].value_counts()",
"Build a sentiment classifier\nDefine what's a positive and negative review",
"# Ignore all 3* review\nproducts = products[products['rating'] != 3]\n\nproducts['sentiment'] = products['rating'] >= 4\n\nproducts.head()",
"Let's train the sentiment classifier",
"from sklearn.cross_validation import train_test_split\n\n# Due to the random divide between the train and test data, the model will be \n# slightly different from the lectures from here on out.\ntrain_data, test_data = train_test_split(products, test_size=0.2, random_state=42)\n\nfrom sklearn.linear_model import LogisticRegression\n\ncv.fit(train_data['review']) # Use the count vector, but fit only the train data\n\nsentiment_model = LogisticRegression().fit(cv.transform(train_data['review']), train_data['sentiment'])\n\n# Predict sentiment for the test data, based on the sentiment model\n# The cv.transform is necessary to get the test_data review data in the right format for the model\npredicted = sentiment_model.predict(cv.transform(test_data['review']))",
"Evaluate the sentiment model",
"from sklearn import metrics\n\n# These metrics will be slightly different then in the lecture, due to the different\n# train/test data split and differences in how the model is fitted\n\nprint (\"Accuracy:\", metrics.accuracy_score(test_data['sentiment'], predicted))\nprint (\"ROC AUC Score:\", metrics.roc_auc_score(test_data['sentiment'], predicted))\nprint (\"Confusion matrix:\")\nprint (metrics.confusion_matrix(test_data['sentiment'], predicted))\nprint (metrics.classification_report(test_data['sentiment'], predicted))\n\n# for the ROC curve, we need the prediction probabilities rather than the True/False values\n# which are obtained by using the .predict_proba function instead of .predict\npredicted_probs = sentiment_model.predict_proba(cv.transform(test_data['review']))\n\nfalse_positive_rate, true_positive_rate, _ = metrics.roc_curve(test_data['sentiment'], predicted_probs[:,1])\n\nplt.plot(false_positive_rate, true_positive_rate)\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.title('ROC Sentiment Analysis')\nplt.show()",
"Applying the learned model to understand sentiment for Giraffe",
"giraffe_reviews['predicted_sentiment'] = sentiment_model.predict_proba(cv.transform(giraffe_reviews['review']))[:,1]\n\ngiraffe_reviews.head()",
"Sort the reviews based on the predicted sentiment and explore",
"giraffe_reviews.sort_values(by='predicted_sentiment', inplace=True, ascending=False)\n\n# Despite the slightly different model, the same review is ranked highest in predicted sentiment\ngiraffe_reviews.head(10)\n\ngiraffe_reviews.iloc[0]['review']",
"Let's look at the negative reviews",
"giraffe_reviews.tail(10)\n## We can see the lowest scoring review in the lecture is ranked 10th lowest in this analysis\n\ngiraffe_reviews.iloc[-1]['review']"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
femtotrader/pyfolio
|
pyfolio/examples/bayesian.ipynb
|
apache-2.0
|
[
"Bayesian performance analysis example in pyfolio\nThere are also a few more advanced (and still experimental) analysis methods in pyfolio based on Bayesian statistics. \nThe main benefit of these methods is uncertainty quantification. All the values you saw above, like the Sharpe ratio, are just single numbers. These estimates are noisy because they have been computed over a limited number of data points. So how much can you trust these numbers? You don't know because there is no sense of uncertainty. That is where Bayesian statistics helps as instead of single values, we are dealing with probability distributions that assign degrees of belief to all possible parameter values.\nLets create the Bayesian tear sheet. Under the hood this is running MCMC sampling in PyMC3 to estimate the posteriors which can take quite a while (that's the reason why we don't generate this by default in create_full_tear_sheet()).\nImport pyfolio",
"%matplotlib inline\nimport pyfolio as pf",
"Fetch the daily returns for a stock",
"stock_rets = pf.utils.get_symbol_rets('FB')",
"Create Bayesian tear sheet",
"out_of_sample = stock_rets.index[-40]\n\npf.create_bayesian_tear_sheet(stock_rets, live_start_date=out_of_sample)",
"Lets go through these row by row:\n\nThe first one is the Bayesian cone plot that is the result of a summer internship project of Sepideh Sadeghi here at Quantopian. It's similar to the cone plot you already saw in the tear sheet above but has two critical additions: (i) it takes uncertainty into account (i.e. a short backtest length will result in a wider cone), and (ii) it does not assume normality of returns but instead uses a Student-T distribution with heavier tails.\nThe next row compares mean returns of the in-sample (backest) and out-of-sample or OOS (forward) period. As you can see, mean returns are not a single number but a (posterior) distribution that gives us an indication of how certain we can be in our estimates. The green distribution on the left side is much wider, representing our increased uncertainty due to having less OOS data. We can then calculate the difference between these two distributions as shown on the right side. The grey lines denote the 2.5% and 97.5% percentiles. Intuitively, if the right grey line is lower than 0 you can say that with probability > 97.5% the OOS mean returns are below what is suggested by the backtest. The model used here is called BEST and was developed by John Kruschke.\nThe next couple of rows follow the same pattern but are an estimate of annual volatility, Sharpe ratio and their respective differences.\nThe 5th row shows the effect size or the difference of means normalized by the standard deviation and gives you a general sense how far apart the two distributions are. Intuitively, even if the means are significantly different, it may not be very meaningful if the standard deviation is huge amounting to a tiny difference of the two returns distributions.\nThe 6th row shows predicted returns (based on the backtest) for tomorrow, and 5 days from now. The blue line indicates the probability of losing more than 5% of your portfolio value and can be interpeted as a Bayesian VaR estimate.\nThe 7th row shows a Bayesian estimate of annual alpha and beta. In addition to uncertainty estimates, this model, like all above ones, assumes returns to be T-distributed which leads to more robust estimates than a standard linear regression would. The default benchmark is the S&P500. Alternatively, users may use the Fama-French model as a bunchmark by setting benchmark_rets=\"Fama-French\". \nBy default, stoch_vol=False because running the stochastic volatility model is computationally expensive.\nOnly the most recent 400 days of returns are used when computing the stochastic volatility model. This is to minimize computational time.\n\nRunning models directly\nYou can also run individual models. All models can be found in pyfolio.bayesian and run via the run_model() function.",
"help(pf.bayesian.run_model)",
"For example, to run a model that assumes returns to be normally distributed, you can call:",
"# Run model that assumes returns to be T-distributed\ntrace = pf.bayesian.run_model('t', stock_rets)",
"The returned trace object can be directly inquired. For example might we ask what the probability of the Sharpe ratio being larger than 0 is by checking what percentage of posterior samples of the Sharpe ratio are > 0:",
"# Check what frequency of samples from the sharpe posterior are above 0.\nprint('Probability of Sharpe ratio > 0 = {:3}%'.format((trace['sharpe'] > 0).mean() * 100))",
"But we can also interact with it like with any other pymc3 trace:",
"import pymc3 as pm\npm.traceplot(trace);",
"Further reading\nFor more information on Bayesian statistics, check out these resources:\n\nA blog post about the Bayesian models with Sepideh Sadeghi\nMy personal blog on Bayesian modeling\nA talk I gave in Singapore on Probabilistic Programming in Quantitative Finance\nThe IPython NB book Bayesian Methods for Hackers."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
liangjg/openmc
|
examples/jupyter/search.ipynb
|
mit
|
[
"This Notebook illustrates the usage of the OpenMC Python API's generic eigenvalue search capability. In this Notebook, we will do a critical boron concentration search of a typical PWR pin cell.\nTo use the search functionality, we must create a function which creates our model according to the input parameter we wish to search for (in this case, the boron concentration). \nThis notebook will first create that function, and then, run the search.",
"# Initialize third-party libraries and the OpenMC Python API\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nimport openmc\nimport openmc.model\n\n%matplotlib inline",
"Create Parametrized Model\nTo perform the search we will use the openmc.search_for_keff function. This function requires a different function be defined which creates an parametrized model to analyze. This model is required to be stored in an openmc.model.Model object. The first parameter of this function will be modified during the search process for our critical eigenvalue.\nOur model will be a pin-cell from the Multi-Group Mode Part II assembly, except this time the entire model building process will be contained within a function, and the Boron concentration will be parametrized.",
"# Create the model. `ppm_Boron` will be the parametric variable.\n\ndef build_model(ppm_Boron):\n \n # Create the pin materials\n fuel = openmc.Material(name='1.6% Fuel')\n fuel.set_density('g/cm3', 10.31341)\n fuel.add_element('U', 1., enrichment=1.6)\n fuel.add_element('O', 2.)\n\n zircaloy = openmc.Material(name='Zircaloy')\n zircaloy.set_density('g/cm3', 6.55)\n zircaloy.add_element('Zr', 1.)\n\n water = openmc.Material(name='Borated Water')\n water.set_density('g/cm3', 0.741)\n water.add_element('H', 2.)\n water.add_element('O', 1.)\n\n # Include the amount of boron in the water based on the ppm,\n # neglecting the other constituents of boric acid\n water.add_element('B', ppm_Boron * 1e-6)\n \n # Instantiate a Materials object\n materials = openmc.Materials([fuel, zircaloy, water])\n \n # Create cylinders for the fuel and clad\n fuel_outer_radius = openmc.ZCylinder(r=0.39218)\n clad_outer_radius = openmc.ZCylinder(r=0.45720)\n\n # Create boundary planes to surround the geometry\n min_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')\n max_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')\n min_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')\n max_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')\n\n # Create fuel Cell\n fuel_cell = openmc.Cell(name='1.6% Fuel')\n fuel_cell.fill = fuel\n fuel_cell.region = -fuel_outer_radius\n\n # Create a clad Cell\n clad_cell = openmc.Cell(name='1.6% Clad')\n clad_cell.fill = zircaloy\n clad_cell.region = +fuel_outer_radius & -clad_outer_radius\n\n # Create a moderator Cell\n moderator_cell = openmc.Cell(name='1.6% Moderator')\n moderator_cell.fill = water\n moderator_cell.region = +clad_outer_radius & (+min_x & -max_x & +min_y & -max_y)\n\n # Create root Universe\n root_universe = openmc.Universe(name='root universe')\n root_universe.add_cells([fuel_cell, clad_cell, moderator_cell])\n\n # Create Geometry and set root universe\n geometry = openmc.Geometry(root_universe)\n \n # Instantiate a Settings object\n settings = openmc.Settings()\n \n # Set simulation parameters\n settings.batches = 300\n settings.inactive = 20\n settings.particles = 1000\n \n # Create an initial uniform spatial source distribution over fissionable zones\n bounds = [-0.63, -0.63, -10, 0.63, 0.63, 10.]\n uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)\n settings.source = openmc.source.Source(space=uniform_dist)\n \n # We dont need a tallies file so dont waste the disk input/output time\n settings.output = {'tallies': False}\n \n model = openmc.model.Model(geometry, materials, settings)\n \n return model",
"Search for the Critical Boron Concentration\nTo perform the search we imply call the openmc.search_for_keff function and pass in the relvant arguments. For our purposes we will be passing in the model building function (build_model defined above), a bracketed range for the expected critical Boron concentration (1,000 to 2,500 ppm), the tolerance, and the method we wish to use. \nInstead of the bracketed range we could have used a single initial guess, but have elected not to in this example. Finally, due to the high noise inherent in using as few histories as are used in this example, our tolerance on the final keff value will be rather large (1.e-2) and the default 'bisection' method will be used for the search.",
"# Perform the search\ncrit_ppm, guesses, keffs = openmc.search_for_keff(build_model, bracket=[1000., 2500.],\n tol=1e-2, print_iterations=True)\n\nprint('Critical Boron Concentration: {:4.0f} ppm'.format(crit_ppm))",
"Finally, the openmc.search_for_keff function also provided us with Lists of the guesses and corresponding keff values generated during the search process with OpenMC. Let's use that information to make a quick plot of the value of keff versus the boron concentration.",
"plt.figure(figsize=(8, 4.5))\nplt.title('Eigenvalue versus Boron Concentration')\n# Create a scatter plot using the mean value of keff\nplt.scatter(guesses, [keffs[i].nominal_value for i in range(len(keffs))])\nplt.xlabel('Boron Concentration [ppm]')\nplt.ylabel('Eigenvalue')\nplt.show()",
"We see a nearly linear reactivity coefficient for the boron concentration, exactly as one would expect for a pure 1/v absorber at small concentrations."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ODZ-UJF-AV-CR/osciloskop
|
vxi.ipynb
|
gpl-3.0
|
[
"Oscilloskope utility – using Ethernet",
"import matplotlib.pyplot as plt\nimport sys\nimport os\nimport time\nimport h5py\nimport numpy as np\nimport glob\nimport vxi11\n\n# Step 0:\n# Connect oscilloscope via direct Ethernet link\n# Step 1:\n# Run \"Record\" on the oscilloscope\n# and wait for 508 frames to be acquired.\n# Step 2:\n# Run this cell to initialize grabbing.\n\n\n# This will need a rewrite\nclass TmcDriver:\n\n def __init__(self, device):\n print(\"Initializing connection to: \" + device)\n self.device = device\n self.instr = vxi11.Instrument(device)\n \n def write(self, command):\n self.instr.write(command);\n\n def read(self, length = 500):\n return self.instr.read(length)\n\n def read_raw(self, length = 500):\n return self.instr.read_raw(length)\n \n def getName(self):\n self.write(\"*IDN?\")\n return self.read(300)\n \n def ask(self, command):\n return self.instr.ask(command)\n \n def sendReset(self):\n self.write(\"*RST\") # Be carefull, this real resets an oscilloscope\n \n# Default oscilloscope record timeout [s]\nloop_sleep_time = 60\n \n# For Ethernet\n#osc = TmcDriver(\"TCPIP::147.231.24.72::INSTR\")\nosc = TmcDriver(\"TCPIP::10.1.1.254::INSTR\")\nprint(osc.ask(\"*IDN?\"))",
"Read repeatedly records from oscilloscope",
"filename = 1\n \nif (filename == 1):\n for f in glob.iglob(\"./data/*.h5\"): # delete all .h5 files \n print 'Deleting', f\n os.remove(f)\nelse:\n print 'Not removing old files, as filename {0} is not 1.'.format(filename)\n\n\nosc.write(':STOP') # start recording\ntime.sleep(0.5)\n \nwhile True:\n #print(' Enter to continue.')\n #raw_input() Wait for key press\n\n osc.write(':FUNC:WREC:OPER REC') # start recording\n run_start_time = time.time()\n print ' Capturing...'\n time.sleep(0.5)\n \n while True:\n osc.write(':FUNC:WREC:OPER?') # finish recording?\n reply = osc.read()\n if reply == 'STOP':\n run_time = round(time.time() - run_start_time, 2)\n print(' Subrun finished, capturing for %.2f seconds.' % run_time)\n break\n time.sleep(0.01)\n\n osc.write(':WAV:SOUR CHAN1')\n osc.write(':WAV:MODE NORM')\n osc.write(':WAV:FORM BYTE')\n osc.write(':WAV:POIN 1400')\n\n osc.write(':WAV:XINC?')\n xinc = float(osc.read(100))\n print 'XINC:', xinc,\n osc.write(':WAV:YINC?')\n yinc = float(osc.read(100))\n print 'YINC:', yinc,\n osc.write(':TRIGger:EDGe:LEVel?')\n trig = float(osc.read(100))\n print 'TRIG:', trig,\n osc.write(':WAVeform:YORigin?')\n yorig = float(osc.read(100))\n print 'YORIGIN:', yorig,\n osc.write(':WAVeform:XORigin?')\n xorig = float(osc.read(100))\n print 'XORIGIN:', xorig,\n \n\n osc.write(':FUNC:WREP:FEND?') # get number of last frame\n frames = int(osc.read(100))\n print 'FRAMES:', frames, 'SUBRUN', filename\n \n with h5py.File('./data/data'+'{:02.0f}'.format(filename)+'_'+str(int(round(time.time(),0)))+'.h5', 'w') as hf: \n hf.create_dataset('FRAMES', data=(frames)) # write number of frames\n hf.create_dataset('XINC', data=(xinc)) # write axis parameters\n hf.create_dataset('YINC', data=(yinc))\n hf.create_dataset('TRIG', data=(trig))\n hf.create_dataset('YORIGIN', data=(yorig))\n hf.create_dataset('XORIGIN', data=(xorig))\n hf.create_dataset('CAPTURING', data=(run_time))\n osc.write(':FUNC:WREP:FCUR 1') # skip to n-th frame\n time.sleep(0.5)\n for n in range(1,frames+1):\n osc.write(':FUNC:WREP:FCUR ' + str(n)) # skip to n-th frame\n time.sleep(0.001)\n\n osc.write(':WAV:DATA?') # read data\n #time.sleep(0.4)\n wave1 = bytearray(osc.read_raw(500))\n wave2 = bytearray(osc.read_raw(500))\n wave3 = bytearray(osc.read_raw(500))\n #wave4 = bytearray(osc.read(500))\n #wave = np.concatenate((wave1[11:],wave2[:(500-489)],wave3[:(700-489)]))\n wave = np.concatenate((wave1[11:],wave2,wave3[:-1]))\n hf.create_dataset(str(n), data=wave)\n filename = filename + 1\n ",
"Read repeatedly records from oscilloscope\nThis should be run after the initialization step. Timeout at the end should be enlarged if not all 508 frames are transferred.",
"filename = 1\nrun_start_time = time.time()\n \nif (filename == 1):\n for f in glob.iglob(\"./data/*.h5\"): # delete all .h5 files \n print 'Deleting', f\n os.remove(f)\nelse:\n print 'Not removing old files, as filename {0} is not 1.'.format(filename)\n\nwhile True:\n osc.write(':WAV:SOUR CHAN1')\n osc.write(':WAV:MODE NORM')\n osc.write(':WAV:FORM BYTE')\n osc.write(':WAV:POIN 1400')\n\n osc.write(':WAV:XINC?')\n xinc = float(osc.read(100))\n print 'XINC:', xinc,\n osc.write(':WAV:YINC?')\n yinc = float(osc.read(100))\n print 'YINC:', yinc,\n osc.write(':TRIGger:EDGe:LEVel?')\n trig = float(osc.read(100))\n print 'TRIG:', trig,\n osc.write(':WAVeform:YORigin?')\n yorig = float(osc.read(100))\n print 'YORIGIN:', yorig,\n osc.write(':WAVeform:XORigin?')\n xorig = float(osc.read(100))\n print 'XORIGIN:', xorig,\n \n\n osc.write(':FUNC:WREP:FEND?') # get number of last frame\n frames = int(osc.read(100))\n print 'FRAMES:', frames, 'SUBRUN', filename\n \n # This is not good if the scaling is different and frames are for example just 254\n # if (frames < 508):\n # loop_sleep_time += 10\n\n with h5py.File('./data/data'+'{:02.0f}'.format(filename)+'.h5', 'w') as hf: \n hf.create_dataset('FRAMES', data=(frames)) # write number of frames\n hf.create_dataset('XINC', data=(xinc)) # write axis parameters\n hf.create_dataset('YINC', data=(yinc))\n hf.create_dataset('TRIG', data=(trig))\n hf.create_dataset('YORIGIN', data=(yorig))\n hf.create_dataset('XORIGIN', data=(xorig))\n osc.write(':FUNC:WREP:FCUR 1') # skip to n-th frame\n time.sleep(0.5)\n for n in range(1,frames+1):\n osc.write(':FUNC:WREP:FCUR ' + str(n)) # skip to n-th frame\n time.sleep(0.001)\n\n osc.write(':WAV:DATA?') # read data\n #time.sleep(0.4)\n wave1 = bytearray(osc.read_raw(500))\n wave2 = bytearray(osc.read_raw(500))\n wave3 = bytearray(osc.read_raw(500))\n #wave4 = bytearray(osc.read(500))\n #wave = np.concatenate((wave1[11:],wave2[:(500-489)],wave3[:(700-489)]))\n wave = np.concatenate((wave1[11:],wave2,wave3[:-1]))\n hf.create_dataset(str(n), data=wave)\n filename = filename + 1\n osc.write(':FUNC:WREC:OPER REC') # start recording\n #print(' Subrun finished, sleeping for %.0f seconds.' % loop_sleep_time)\n run_start_time = time.time()\n #time.sleep(loop_sleep_time) # delay for capturing\n \n print(' Subrun finished, Enter to continue.')\n #raw_input()\n time.sleep(100) # delay for capturing\n #print(' We were waiting for ', time.time() - run_start_time())\n ",
"Stopwatch for timing the first loop",
"first_run_start_time = time.time()\nraw_input()\nloop_sleep_time = time.time() - first_run_start_time + 15\nprint loop_sleep_time\n\nloop_sleep_time=60"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Duke-GCB/cwl-freezer
|
cwl-freezing.ipynb
|
mit
|
[
"import cwltool\n\ntool = '/Users/dcl9/Code/python/mmap-cwl/go-blast/go-blasttool.cwl'\n\n%%sh \ncwltool --print-pre /Users/dcl9/Code/python/mmap-cwl/go-blast/go-blasttool.cwl\n\nimport yaml\n\ny = None\nwith open(tool) as f:\n y = yaml.load(f)\n\n# find if y has a hint that is a docker requirement\n\n\nimport dpath.util\n\ndpath.util.search(y,'*/*/dockerImageId')\n\n# parse function\nimport urlparse\nfrom schema_salad import schema\nfrom cwltool import process, update\ndef parse(cwlpath):\n (document_loader, avsc_names, schema_metadata) = process.get_schema()\n fileuri = 'file://' + cwlpath\n workflowobj = document_loader.fetch(fileuri)\n # If strict is true, names are required everywhere (among other requirements)\n strict = False\n # This updates from draft2 to draft3\n workflowobj = update.update(workflowobj, document_loader, fileuri)\n document_loader.idx.clear()\n processobj, metadata = schema.load_and_validate(document_loader, avsc_names, workflowobj, strict)\n return processobj\n\nimport json\nprint json.dumps(parse(tool), indent=2)",
"Questions\n\nCould this be a CWL compiler?\nWIll it take a root document and return the whole structure?\nCan I find the dockerRequirement anywhere in the doc?\nCan I find the dockerRequirement using the schema?\n\n1. CWL Docker Compiler\nWhat does that mean? Abstractly, that it would read an input document, look for all docker requirements and hints, pull the images, and then write a shell script to reload everything\n2. Root document and return whole structure?",
"workflow = parse('/Users/dcl9/Code/python/mmap-cwl/mmap.cwl')",
"Yes, that works",
"# This function will find dockerImageId anyhwere in the tree\ndef find_key(d, key, path=[]):\n if isinstance(d, list):\n for i, v in enumerate(d):\n for f in find_key(v, key, path + [str(i)]):\n yield f\n elif isinstance(d, dict):\n if key in d:\n pathstring = '/'.join(path + [key])\n yield pathstring\n for k, v in d.items():\n for f in find_key(v, key, path + [k]):\n yield f\n \n\n# Could adapt to find class: DockerRequirement instead\nfor x in find_key(workflow, 'dockerImageId'):\n print x, dpath.util.get(workflow, x)\n\ndpath.util.get(workflow, 'steps/0/run/steps/0/run/hints/0')",
"extract docker image names",
"def image_names(workflow):\n image_ids = []\n for x in find_key(workflow, 'dockerImageId'):\n image_id = dpath.util.get(workflow, x)\n if image_id not in image_ids: image_ids.append(image_id) \n return image_ids\n\nimage_names(workflow)\n\nimport docker\n\ndef docker_hashes(image_ids):\n for name in image_ids:\n print name\n\ndocker_hashes(image_names(workflow))",
"Docker IO\nQuery docker for the sha of the docker image id",
"%%sh\n\neval $(docker-machine env default)\n\nimport docker_io\n\nimages = get_image_metadata(client, 'dukegcb/xgenovo')\nfor img in images:\n write_image(client, img, '/tmp/images')\n\nmd"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
darcamo/pyphysim
|
apps/ia/IA Results 2x2(1).ipynb
|
gpl-2.0
|
[
"Simulation Results for varying number of maximum iterations\nThis notebook shows BER and Sum Capacity results for different IA\nalgorithms when the maximum number of allowed iterations is limited. Note\nthat the algorithm might run less iterations than the allowed maximum if\nthe precoders do not change significantly from one iteration to the next\none. The maximum number of allowed iterations vary from 5 to 60, except\nfor the closed form algorithm, which is not iterative. The solid lines\nindicate the BER or Sum Capacity in the left axis, while the dashed lines\nindicate the mean number of iterations that algorithm used.\nLet's perform some initializations.\nFirst we enable the \"inline\" mode for plots.",
"%pylab inline",
"Now we import some modules we use and add the PyPhysim to the python path.",
"import sys\nsys.path.append(\"/home/darlan/cvs_files/pyphysim\")\n# xxxxxxxxxx Import Statements xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\nfrom pyphysim.simulations.core import SimulationRunner, SimulationParameters, SimulationResults, Result\nfrom pyphysim.comm import modulators, channels\nfrom pyphysim.util.conversion import dB2Linear\nfrom pyphysim.util import misc\n#from pyphysim.ia import ia\nimport numpy as np\nfrom pprint import pprint\n\nfrom matplotlib import pyplot\n# xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"Now we set the transmit parameters and load the simulation results from the file corresponding to those transmit parameters.",
"# xxxxx Parameters xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\n#params = SimulationParameters.load_from_config_file('ia_config_file.txt')\nK = 3\nNr = 2\nNt = 2\nNs = 1\nM = 4\nmodulator = \"PSK\"\n#max_iterations = np.r_[5:121:5]\nmax_iterations_string = \"[5_(5)_120]\" #misc.replace_dict_values(\"{max_iterations}\",{\"max_iterations\":max_iterations})\n# xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\n\n# xxxxx Results base name xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\nbase_name = \"results_{M}-{modulator}_{Nr}x{Nt}_({Ns})_MaxIter_{max_iterations}\".format(M=M, modulator=modulator, Nr=Nr, Nt=Nt, Ns=Ns, max_iterations=max_iterations_string)\n# xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\n\n# xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\nalt_min_results_2x2_1 = SimulationResults.load_from_file(\n 'ia_alt_min_{0}.pickle'.format(base_name))\nmax_sinrn_results_2x2_1 = SimulationResults.load_from_file(\n \"ia_max_sinr_{0}_['random'].pickle\".format(base_name))\nmmse_CF_init_results_2x2_1 = SimulationResults.load_from_file(\n \"ia_mmse_{0}_['random'].pickle\".format(base_name))\n# xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\n",
"Let's define helper methods to get mean number of IA iterations from a simulation results object.",
"# Helper function to get the number of repetitions for a given set of transmit parameters\ndef get_num_runned_reps(sim_results_object, fixed_params=dict()):\n all_runned_reps = np.array(sim_results_object.runned_reps)\n indexes = sim_results_object.params.get_pack_indexes(fixed_params)\n return all_runned_reps[indexes]\n\n# Helper function to get the number of IA runned iterations for a given set of transmit parameters\ndef get_num_mean_ia_iterations(sim_results_object, fixed_params=dict()):\n return sim_results_object.get_result_values_list('ia_runned_iterations', fixed_params)",
"Get the SNR values from the simulation parameters object.",
"SNR_alt_min = np.array(alt_min_results_2x2_1.params['SNR'])\nSNR_max_SINR = np.array(max_sinrn_results_2x2_1.params['SNR'])\n# SNR_min_leakage = np.array(min_leakage_results.params['SNR'])\nSNR_mmse = np.array(mmse_CF_init_results_2x2_1.params['SNR'])",
"Define a function that we can call to plot the BER.\nThis function will plot the BER for all SNR values for the four IA algorithms, given the desired \"max_iterations\" parameter value.",
"def plot_ber(alt_min_results, max_sinrn_results, mmse_results, max_iterations, ax=None):\n # Alt. Min. Algorithm\n ber_alt_min = alt_min_results.get_result_values_list(\n 'ber',\n fixed_params={'max_iterations': max_iterations})\n ber_CF_alt_min = alt_min_results.get_result_values_confidence_intervals(\n 'ber',\n P=95,\n fixed_params={'max_iterations': max_iterations})\n ber_errors_alt_min = np.abs([i[1] - i[0] for i in ber_CF_alt_min])\n\n # Max SINR Algorithm\n ber_max_sinr = max_sinrn_results.get_result_values_list(\n 'ber',\n fixed_params={'max_iterations': max_iterations})\n ber_CF_max_sinr = max_sinrn_results.get_result_values_confidence_intervals(\n 'ber',\n P=95,\n fixed_params={'max_iterations': max_iterations})\n ber_errors_max_sinr = np.abs([i[1] - i[0] for i in ber_CF_max_sinr])\n\n # MMSE Algorithm\n ber_mmse = mmse_results.get_result_values_list(\n 'ber',\n fixed_params={'max_iterations': max_iterations})\n ber_CF_mmse = mmse_results.get_result_values_confidence_intervals(\n 'ber',\n P=95,\n fixed_params={'max_iterations': max_iterations})\n ber_errors_mmse = np.abs([i[1] - i[0] for i in ber_CF_mmse])\n\n if ax is None:\n fig, ax = plt.subplots(nrows=1, ncols=1)\n ax.errorbar(SNR_alt_min, ber_alt_min, ber_errors_alt_min, fmt='-r*', elinewidth=2.0, label='Alt. Min.')\n ax.errorbar(SNR_max_SINR, ber_max_sinr, ber_errors_max_sinr, fmt='-g*', elinewidth=2.0, label='Max SINR')\n ax.errorbar(SNR_mmse, ber_mmse, ber_errors_mmse, fmt='-m*', elinewidth=2.0, label='MMSE.')\n\n ax.set_xlabel('SNR')\n ax.set_ylabel('BER')\n title = 'BER for Different Algorithms ({max_iterations} Max Iterations)\\nK={K}, Nr={Nr}, Nt={Nt}, Ns={Ns}, {M}-{modulator}'.replace(\"{max_iterations}\", str(max_iterations))\n ax.set_title(title.format(**alt_min_results.params.parameters))\n\n ax.set_yscale('log')\n leg = ax.legend(fancybox=True, shadow=True, loc='lower left', bbox_to_anchor=(0.01, 0.01), ncol=4)\n ax.grid(True, which='both', axis='both')\n \n # Lets plot the mean number of ia iterations\n ax2 = ax.twinx()\n mean_alt_min_ia_terations = get_num_mean_ia_iterations(alt_min_results, {'max_iterations': max_iterations})\n mean_max_sinrn_ia_terations = get_num_mean_ia_iterations(max_sinrn_results, {'max_iterations': max_iterations})\n mean_mmse_ia_terations = get_num_mean_ia_iterations(mmse_results, {'max_iterations': max_iterations})\n ax2.plot(SNR_alt_min, mean_alt_min_ia_terations, '--r*')\n ax2.plot(SNR_max_SINR, mean_max_sinrn_ia_terations, '--g*')\n ax2.plot(SNR_mmse, mean_mmse_ia_terations, '--m*')\n \n # Horizontal line with the max alowed ia iterations\n ax2.hlines(max_iterations, SNR_alt_min[0], SNR_alt_min[-1], linestyles='dashed')\n ax2.set_ylim(0, max_iterations*1.1)\n ax2.set_ylabel('IA Mean Iterations')\n\n # Set the X axis limits\n ax.set_xlim(SNR_alt_min[0], SNR_alt_min[-1])\n # Set the Y axis limits\n ax.set_ylim(1e-6, 1)",
"Plot the BER\nWe can create a 4x4 grids if plots and call the plot_ber function to plot in each subplot.",
"fig, ax = pyplot.subplots(2,2,figsize=(20,15))\nplot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 5, ax[0,0])\nplot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 10, ax[0,1])\nplot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 15, ax[1,0])\nplot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 20, ax[1,1])\n\nfig, ax = subplots(2,2,figsize=(20,15))\nplot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 25, ax[0,0])\nplot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 30, ax[0,1])\nplot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 35, ax[1,0])\nplot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 40, ax[1,1])\n\nfig, ax = subplots(2,2,figsize=(20,15))\nplot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 45, ax[0,0])\nplot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 50, ax[0,1])\nplot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 55, ax[1,0])\nplot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 60, ax[1,1])",
"Plot the Capacity",
"def plot_capacity(alt_min_results, max_sinrn_results, mmse_results, max_iterations, ax=None):\n # xxxxx Plot Sum Capacity (all) xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\n sum_capacity_alt_min = alt_min_results.get_result_values_list(\n 'sum_capacity',\n fixed_params={'max_iterations': max_iterations})\n sum_capacity_CF_alt_min = alt_min_results.get_result_values_confidence_intervals(\n 'sum_capacity',\n P=95,\n fixed_params={'max_iterations': max_iterations})\n sum_capacity_errors_alt_min = np.abs([i[1] - i[0] for i in sum_capacity_CF_alt_min])\n\n #sum_capacity_closed_form = closed_form_results.get_result_values_list(\n # 'sum_capacity',\n # fixed_params={'max_iterations': max_iterations})\n #sum_capacity_CF_closed_form = closed_form_results.get_result_values_confidence_intervals(\n # 'sum_capacity',\n # P=95,\n # fixed_params={'max_iterations': max_iterations})\n #sum_capacity_errors_closed_form = np.abs([i[1] - i[0] for i in sum_capacity_CF_closed_form])\n\n sum_capacity_max_sinr = max_sinrn_results.get_result_values_list(\n 'sum_capacity',\n fixed_params={'max_iterations': max_iterations})\n sum_capacity_CF_max_sinr = max_sinrn_results.get_result_values_confidence_intervals(\n 'sum_capacity',\n P=95,\n fixed_params={'max_iterations': max_iterations})\n sum_capacity_errors_max_sinr = np.abs([i[1] - i[0] for i in sum_capacity_CF_max_sinr])\n\n # sum_capacity_min_leakage = min_leakage_results.get_result_values_list('sum_capacity')\n # sum_capacity_CF_min_leakage = min_leakage_results.get_result_values_confidence_intervals('sum_capacity', P=95)\n # sum_capacity_errors_min_leakage = np.abs([i[1] - i[0] for i in sum_capacity_CF_min_leakage])\n\n sum_capacity_mmse = mmse_results.get_result_values_list(\n 'sum_capacity',\n fixed_params={'max_iterations': max_iterations})\n sum_capacity_CF_mmse = mmse_results.get_result_values_confidence_intervals(\n 'sum_capacity',\n P=95,\n fixed_params={'max_iterations': max_iterations})\n sum_capacity_errors_mmse = np.abs([i[1] - i[0] for i in sum_capacity_CF_mmse])\n\n if ax is None:\n fig, ax = plt.subplots(nrows=1, ncols=1)\n # xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\n ax.errorbar(SNR_alt_min, sum_capacity_alt_min, sum_capacity_errors_alt_min, fmt='-r*', elinewidth=2.0, label='Alt. 
Min.')\n #ax.errorbar(SNR_closed_form, sum_capacity_closed_form, sum_capacity_errors_closed_form, fmt='-b*', elinewidth=2.0, label='Closed Form')\n ax.errorbar(SNR_max_SINR, sum_capacity_max_sinr, sum_capacity_errors_max_sinr, fmt='-g*', elinewidth=2.0, label='Max SINR')\n # ax.errorbar(SNR, sum_capacity_min_leakage, sum_capacity_errors_min_leakage, fmt='-k*', elinewidth=2.0, label='Min Leakage.')\n ax.errorbar(SNR_mmse, sum_capacity_mmse, sum_capacity_errors_mmse, fmt='-m*', elinewidth=2.0, label='MMSE.')\n # xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\n\n ax.set_xlabel('SNR')\n ax.set_ylabel('Sum Capacity')\n title = 'Sum Capacity for Different Algorithms ({max_iterations} Max Iterations)\\nK={K}, Nr={Nr}, Nt={Nt}, Ns={Ns}, {M}-{modulator}'.replace(\"{max_iterations}\", str(max_iterations))\n ax.set_title(title.format(**alt_min_results.params.parameters))\n\n #leg = ax.legend(fancybox=True, shadow=True, loc=2)\n leg = ax.legend(fancybox=True, shadow=True, loc='lower right', bbox_to_anchor=(0.99, 0.01), ncol=4)\n \n ax.grid(True, which='both', axis='both')\n \n # Lets plot the mean number of ia iterations\n ax2 = ax.twinx()\n mean_alt_min_ia_terations = get_num_mean_ia_iterations(alt_min_results, {'max_iterations': max_iterations})\n mean_max_sinrn_ia_terations = get_num_mean_ia_iterations(max_sinrn_results, {'max_iterations': max_iterations})\n mean_mmse_ia_terations = get_num_mean_ia_iterations(mmse_results, {'max_iterations': max_iterations})\n ax2.plot(SNR_alt_min, mean_alt_min_ia_terations, '--r*')\n ax2.plot(SNR_max_SINR, mean_max_sinrn_ia_terations, '--g*')\n ax2.plot(SNR_mmse, mean_mmse_ia_terations, '--m*')\n \n # Horizontal line with the max alowed ia iterations\n ax2.hlines(max_iterations, SNR_alt_min[0], SNR_alt_min[-1], linestyles='dashed')\n ax2.set_ylim(0, max_iterations*1.1)\n ax2.set_ylabel('IA Mean Iterations')\n\n # Set the X axis limits\n ax.set_xlim(SNR_alt_min[0], SNR_alt_min[-1])\n # Set the Y axis limits\n #ax.set_ylim(1e-6, 1)\n\nfig, ax = subplots(2,2,figsize=(20,15))\n\nplot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 5, ax[0,0])\nplot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 10, ax[0,1])\nplot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 15, ax[1,0])\nplot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 20, ax[1,1])\n\nfig, ax = subplots(2,2,figsize=(20,15))\nplot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 25, ax[0,0])\nplot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 30, ax[0,1])\nplot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 35, ax[1,0])\nplot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 40, ax[1,1])\n\nfig, ax = subplots(2,2,figsize=(20,15))\nplot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 45, ax[0,0])\nplot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 50, ax[0,1])\nplot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 55, ax[1,0])\nplot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 60, ax[1,1])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
AllenDowney/ThinkStats2
|
code/chap14ex.ipynb
|
gpl-3.0
|
[
"Chapter 14\nExamples and Exercises from Think Stats, 2nd Edition\nhttp://thinkstats2.com\nCopyright 2016 Allen B. Downey\nMIT License: https://opensource.org/licenses/MIT",
"from os.path import basename, exists\n\n\ndef download(url):\n filename = basename(url)\n if not exists(filename):\n from urllib.request import urlretrieve\n\n local, _ = urlretrieve(url, filename)\n print(\"Downloaded \" + local)\n\n\ndownload(\"https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkstats2.py\")\ndownload(\"https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkplot.py\")\n\nimport numpy as np\nimport pandas as pd\n\nimport random\n\nimport thinkstats2\nimport thinkplot",
"Analytic methods\nIf we know the parameters of the sampling distribution, we can compute confidence intervals and p-values analytically, which is computationally faster than resampling.",
"import scipy.stats\n\n\ndef EvalNormalCdfInverse(p, mu=0, sigma=1):\n return scipy.stats.norm.ppf(p, loc=mu, scale=sigma)",
"Here's the confidence interval for the estimated mean.",
"EvalNormalCdfInverse(0.05, mu=90, sigma=2.5)\n\nEvalNormalCdfInverse(0.95, mu=90, sigma=2.5)",
"normal.py provides a Normal class that encapsulates what we know about arithmetic operations on normal distributions.",
"download(\"https://github.com/AllenDowney/ThinkStats2/raw/master/code/normal.py\")\ndownload(\"https://github.com/AllenDowney/ThinkStats2/raw/master/code/hypothesis.py\")\ndownload(\"https://github.com/AllenDowney/ThinkStats2/raw/master/code/nsfg2.py\")\n\ndownload(\"https://github.com/AllenDowney/ThinkStats2/raw/master/code/nsfg.py\")\ndownload(\"https://github.com/AllenDowney/ThinkStats2/raw/master/code/first.py\")\n\nfrom normal import Normal\n\ndist = Normal(90, 7.5**2)\ndist",
"We can use it to compute the sampling distribution of the mean.",
"dist_xbar = dist.Sum(9) / 9\ndist_xbar.sigma",
"And then compute a confidence interval.",
"dist_xbar.Percentile(5), dist_xbar.Percentile(95)",
"Central Limit Theorem\nIf you add up independent variates from a distribution with finite mean and variance, the sum converges on a normal distribution.\nThe following function generates samples with difference sizes from an exponential distribution.",
"def MakeExpoSamples(beta=2.0, iters=1000):\n \"\"\"Generates samples from an exponential distribution.\n\n beta: parameter\n iters: number of samples to generate for each size\n\n returns: list of samples\n \"\"\"\n samples = []\n for n in [1, 10, 100]:\n sample = [np.sum(np.random.exponential(beta, n)) for _ in range(iters)]\n samples.append((n, sample))\n return samples",
"This function generates normal probability plots for samples with various sizes.",
"def NormalPlotSamples(samples, plot=1, ylabel=\"\"):\n \"\"\"Makes normal probability plots for samples.\n\n samples: list of samples\n label: string\n \"\"\"\n for n, sample in samples:\n thinkplot.SubPlot(plot)\n thinkstats2.NormalProbabilityPlot(sample)\n\n thinkplot.Config(\n title=\"n=%d\" % n,\n legend=False,\n xticks=[],\n yticks=[],\n xlabel=\"random normal variate\",\n ylabel=ylabel,\n )\n plot += 1",
"The following plot shows how the sum of exponential variates converges to normal as sample size increases.",
"thinkplot.PrePlot(num=3, rows=2, cols=3)\nsamples = MakeExpoSamples()\nNormalPlotSamples(samples, plot=1, ylabel=\"sum of expo values\")",
"The lognormal distribution has higher variance, so it requires a larger sample size before it converges to normal.",
"def MakeLognormalSamples(mu=1.0, sigma=1.0, iters=1000):\n \"\"\"Generates samples from a lognormal distribution.\n\n mu: parmeter\n sigma: parameter\n iters: number of samples to generate for each size\n\n returns: list of samples\n \"\"\"\n samples = []\n for n in [1, 10, 100]:\n sample = [np.sum(np.random.lognormal(mu, sigma, n)) for _ in range(iters)]\n samples.append((n, sample))\n return samples\n\nthinkplot.PrePlot(num=3, rows=2, cols=3)\nsamples = MakeLognormalSamples()\nNormalPlotSamples(samples, ylabel=\"sum of lognormal values\")",
"The Pareto distribution has infinite variance, and sometimes infinite mean, depending on the parameters. It violates the requirements of the CLT and does not generally converge to normal.",
"def MakeParetoSamples(alpha=1.0, iters=1000):\n \"\"\"Generates samples from a Pareto distribution.\n\n alpha: parameter\n iters: number of samples to generate for each size\n\n returns: list of samples\n \"\"\"\n samples = []\n\n for n in [1, 10, 100]:\n sample = [np.sum(np.random.pareto(alpha, n)) for _ in range(iters)]\n samples.append((n, sample))\n return samples\n\nthinkplot.PrePlot(num=3, rows=2, cols=3)\nsamples = MakeParetoSamples()\nNormalPlotSamples(samples, ylabel=\"sum of Pareto values\")",
"If the random variates are correlated, that also violates the CLT, so the sums don't generally converge.\nTo generate correlated values, we generate correlated normal values and then transform to whatever distribution we want.",
"def GenerateCorrelated(rho, n):\n \"\"\"Generates a sequence of correlated values from a standard normal dist.\n\n rho: coefficient of correlation\n n: length of sequence\n\n returns: iterator\n \"\"\"\n x = random.gauss(0, 1)\n yield x\n\n sigma = np.sqrt(1 - rho**2)\n for _ in range(n - 1):\n x = random.gauss(x * rho, sigma)\n yield x\n\ndef GenerateExpoCorrelated(rho, n):\n \"\"\"Generates a sequence of correlated values from an exponential dist.\n\n rho: coefficient of correlation\n n: length of sequence\n\n returns: NumPy array\n \"\"\"\n normal = list(GenerateCorrelated(rho, n))\n uniform = scipy.stats.norm.cdf(normal)\n expo = scipy.stats.expon.ppf(uniform)\n return expo\n\ndef MakeCorrelatedSamples(rho=0.9, iters=1000):\n \"\"\"Generates samples from a correlated exponential distribution.\n\n rho: correlation\n iters: number of samples to generate for each size\n\n returns: list of samples\n \"\"\"\n samples = []\n for n in [1, 10, 100]:\n sample = [np.sum(GenerateExpoCorrelated(rho, n)) for _ in range(iters)]\n samples.append((n, sample))\n return samples\n\nthinkplot.PrePlot(num=3, rows=2, cols=3)\nsamples = MakeCorrelatedSamples()\nNormalPlotSamples(samples, ylabel=\"sum of correlated exponential values\")",
"Difference in means\nLet's use analytic methods to compute a CI and p-value for an observed difference in means.\nThe distribution of pregnancy length is not normal, but it has finite mean and variance, so the sum (or mean) of a few thousand samples is very close to normal.",
"download(\"https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dct\")\ndownload(\n \"https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dat.gz\"\n)\n\nimport first\n\nlive, firsts, others = first.MakeFrames()\ndelta = firsts.prglngth.mean() - others.prglngth.mean()\ndelta",
"The following function computes the sampling distribution of the mean for a set of values and a given sample size.",
"def SamplingDistMean(data, n):\n \"\"\"Computes the sampling distribution of the mean.\n\n data: sequence of values representing the population\n n: sample size\n\n returns: Normal object\n \"\"\"\n mean, var = data.mean(), data.var()\n dist = Normal(mean, var)\n return dist.Sum(n) / n",
"Here are the sampling distributions for the means of the two groups under the null hypothesis.",
"dist1 = SamplingDistMean(live.prglngth, len(firsts))\ndist2 = SamplingDistMean(live.prglngth, len(others))",
"And the sampling distribution for the difference in means.",
"dist_diff = dist1 - dist2\ndist",
"Under the null hypothesis, here's the chance of exceeding the observed difference.",
"1 - dist_diff.Prob(delta)",
"And the chance of falling below the negated difference.",
"dist_diff.Prob(-delta)",
"The sum of these probabilities is the two-sided p-value.\nTesting a correlation\nUnder the null hypothesis (that there is no correlation), the sampling distribution of the observed correlation (suitably transformed) is a \"Student t\" distribution.",
"def StudentCdf(n):\n \"\"\"Computes the CDF correlations from uncorrelated variables.\n\n n: sample size\n\n returns: Cdf\n \"\"\"\n ts = np.linspace(-3, 3, 101)\n ps = scipy.stats.t.cdf(ts, df=n - 2)\n rs = ts / np.sqrt(n - 2 + ts**2)\n return thinkstats2.Cdf(rs, ps)",
"The following is a HypothesisTest that uses permutation to estimate the sampling distribution of a correlation.",
"import hypothesis\n\n\nclass CorrelationPermute(hypothesis.CorrelationPermute):\n \"\"\"Tests correlations by permutation.\"\"\"\n\n def TestStatistic(self, data):\n \"\"\"Computes the test statistic.\n\n data: tuple of xs and ys\n \"\"\"\n xs, ys = data\n return np.corrcoef(xs, ys)[0][1]",
"Now we can estimate the sampling distribution by permutation and compare it to the Student t distribution.",
"def ResampleCorrelations(live):\n \"\"\"Tests the correlation between birth weight and mother's age.\n\n live: DataFrame for live births\n\n returns: sample size, observed correlation, CDF of resampled correlations\n \"\"\"\n live2 = live.dropna(subset=[\"agepreg\", \"totalwgt_lb\"])\n data = live2.agepreg.values, live2.totalwgt_lb.values\n ht = CorrelationPermute(data)\n p_value = ht.PValue()\n return len(live2), ht.actual, ht.test_cdf\n\nn, r, cdf = ResampleCorrelations(live)\n\nmodel = StudentCdf(n)\nthinkplot.Plot(model.xs, model.ps, color=\"gray\", alpha=0.5, label=\"Student t\")\nthinkplot.Cdf(cdf, label=\"sample\")\n\nthinkplot.Config(xlabel=\"correlation\", ylabel=\"CDF\", legend=True, loc=\"lower right\")",
"That confirms the analytic result. Now we can use the CDF of the Student t distribution to compute a p-value.",
"t = r * np.sqrt((n - 2) / (1 - r**2))\np_value = 1 - scipy.stats.t.cdf(t, df=n - 2)\nprint(r, p_value)",
"Chi-squared test\nThe reason the chi-squared statistic is useful is that we can compute its distribution under the null hypothesis analytically.",
"def ChiSquaredCdf(n):\n \"\"\"Discrete approximation of the chi-squared CDF with df=n-1.\n\n n: sample size\n\n returns: Cdf\n \"\"\"\n xs = np.linspace(0, 25, 101)\n ps = scipy.stats.chi2.cdf(xs, df=n - 1)\n return thinkstats2.Cdf(xs, ps)",
"Again, we can confirm the analytic result by comparing values generated by simulation with the analytic distribution.",
"data = [8, 9, 19, 5, 8, 11]\ndt = hypothesis.DiceChiTest(data)\np_value = dt.PValue(iters=1000)\nn, chi2, cdf = len(data), dt.actual, dt.test_cdf\n\nmodel = ChiSquaredCdf(n)\nthinkplot.Plot(model.xs, model.ps, color=\"gray\", alpha=0.3, label=\"chi squared\")\nthinkplot.Cdf(cdf, label=\"sample\")\n\nthinkplot.Config(xlabel=\"chi-squared statistic\", ylabel=\"CDF\", loc=\"lower right\")",
"And then we can use the analytic distribution to compute p-values.",
"p_value = 1 - scipy.stats.chi2.cdf(chi2, df=n - 1)\nprint(chi2, p_value)",
"Exercises\nExercise: In Section 5.4, we saw that the distribution of adult weights is approximately lognormal. One possible explanation is that the weight a person gains each year is proportional to their current weight. In that case, adult weight is the product of a large number of multiplicative factors:\nw = w0 f1 f2 ... fn \nwhere w is adult weight, w0 is birth weight, and fi is the weight gain factor for year i.\nThe log of a product is the sum of the logs of the factors:\nlogw = logw0 + logf1 + logf2 + ... + logfn \nSo by the Central Limit Theorem, the distribution of logw is approximately normal for large n, which implies that the distribution of w is lognormal.\nTo model this phenomenon, choose a distribution for f that seems reasonable, then generate a sample of adult weights by choosing a random value from the distribution of birth weights, choosing a sequence of factors from the distribution of f, and computing the product. What value of n is needed to converge to a lognormal distribution?\nExercise: In Section 14.6 we used the Central Limit Theorem to find the sampling distribution of the difference in means, δ, under the null hypothesis that both samples are drawn from the same population.\nWe can also use this distribution to find the standard error of the estimate and confidence intervals, but that would only be approximately correct. To be more precise, we should compute the sampling distribution of δ under the alternate hypothesis that the samples are drawn from different populations.\nCompute this distribution and use it to calculate the standard error and a 90% confidence interval for the difference in means.\nExercise: In a recent paper, Stein et al. investigate the effects of an intervention intended to mitigate gender-stereotypical task allocation within student engineering teams.\nBefore and after the intervention, students responded to a survey that asked them to rate their contribution to each aspect of class projects on a 7-point scale.\nBefore the intervention, male students reported higher scores for the programming aspect of the project than female students; on average men reported a score of 3.57 with standard error 0.28. Women reported 1.91, on average, with standard error 0.32.\nCompute the sampling distribution of the gender gap (the difference in means), and test whether it is statistically significant. Because you are given standard errors for the estimated means, you don’t need to know the sample size to figure out the sampling distributions.\nAfter the intervention, the gender gap was smaller: the average score for men was 3.44 (SE 0.16); the average score for women was 3.18 (SE 0.16). Again, compute the sampling distribution of the gender gap and test it.\nFinally, estimate the change in gender gap; what is the sampling distribution of this change, and is it statistically significant?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
vitojph/kschool-nlp
|
notebooks-py3/superbowl.ipynb
|
gpl-3.0
|
[
"Superbowl 2017\ntl;dr\nVamos a analizar una colección de tweets en inglés publicados durante un partido de fútbol.\nContexto\nEl pasado 5 de febrero se celebró la 51ª edición de la Superbowl, la gran final del campeonato de fútbol americano de la NFL. El partido enfrentó a los New England Patriots (los favoritos, los de la costa este, con Tom Brady a la cabeza) contra los Atlanta Falcons (los aspirantes, los del Sur, encabezados por Matt Ryan).\n\nComo cualquier final, el resultado a priori era impredecible y a un partido podía ganar cualquiera. Pero el del otro día fue un encuentro inolvidable porque comenzó con el equipo débil barriendo al favorito y con un Brady que no daba una. Al descanso, el marcador reflejaba un inesperado 3 - 28 y todo indicaba que los Falcons ganarían su primer anillo.\n\nPero, en la segunda mitad, Brady resurgió... y su equipo comenzó a anotar una y otra vez... con los Falcons ko. Los Patriots consiguieron darle la vuelta al marcador y vencieron por 34 - 28 su quinta Superbowl. Brady fue elegido MVP del encuentro y aclamado como el mejor quaterback de la historia.\n\nComo os imaginaréis, tanto vaivén nos va a dar mucho juego a la hora de analizar un corpus de mensajes de Twitter. Durante la primera mitad, es previsible que encuentres mensajes a favor de Atlanta y burlas a New England y a sus jugadores, que no estaban muy finos. Pero al final del partido, con la remontada, las opiniones y las burlas cambiarán de sentido.\nComo tanto Tom Brady como su entrenador, Bill Belichick, habían declarado públicamente sus preferencias por Donald Trump durante las elecciones a la presidencia, es muy probable que encuentres mensajes al respecto y menciones a demócratas y republicanos.\nPor último, durante el half time show actuó Lady Gaga, que también levanta pasiones a su manera, así que es probable que haya menciones a otras reinas de la música y comparaciones con actuaciones pasadas.\n\nLos datos\nEl fichero 2017-superbowl-tweets.tsv ubicado en el directorio data/ contiene una muestra, ordenada cronológicamente, de mensajes escritos en inglés publicados antes, durante y después del partido. Todos los mensajes contienen el hashtag #superbowl. Hazte una copia de este fichero en el directorio notebooks de tu espacio personal.\nEl fichero es en realidad una tabla con cuatro columnas separadas por tabuladores, que contiene líneas (una por tweet) con el siguiente formato:\nid_del_tweet fecha_y_hora_de_publicación autor_del_tweet texto_del_mensaje\n\nLa siguiente celda te permite abrir el fichero para lectura y cargar los mensajes en la lista tweets. Modifica el código para que la ruta apunte a la copia local de tu fichero.",
"!gunzip ../data/2017-superbowl-tweets.tsv.gz\n!ls ../data\n\ntweets = []\nRUTA = '../data/2017-superbowl-tweets.tsv'\nfor line in open(RUTA).readlines():\n tweets.append(line.split('\\t'))",
"Fíjate en la estructura de la lista: se trata de una lista de tuplas con cuatro elementos. Puedes comprobar si el fichero se ha cargado como debe en la siguiente celda:",
"ultimo_tweet = tweets[-1]\nprint('id =>', ultimo_tweet[0])\nprint('fecha =>', ultimo_tweet[1])\nprint('autor =>', ultimo_tweet[2])\nprint('texto =>', ultimo_tweet[3])",
"Al lío\nA partir de aquí puedes hacer distintos tipos de análisis. Añade tantas celdas como necesites para intentar, por ejemplo:\n\ncalcular distintas estadísticas de la colección: número de mensajes, longitud de los mensajes, presencia de hashtags y emojis, etc.\nnúmero de menciones a usuarios, frecuencia de aparición de menciones, frecuencia de autores\ncalcular estadísticas sobre usuarios: menciones, mensajes por usuario, etc.\ncalcular estadísticas sobre las hashtags\ncalcular estadísticas sobre las URLs presentes en los mensajes\ncalcular estadísticas sobre los emojis y emoticonos de los mensajes\nextraer automáticamente las entidades nombradas que aparecen en los mensajes y su frecuencia\nprocesar los mensajes para extraer y analizar opiniones: calcular la subjetividad y la polaridad de los mensajes\nextraer las entidades nombradas que levantan más pasiones, quiénes son los más queridos y los más odiados, atendiendo a la polaridad de los mensajes\ncomprobar si la polaridad de alguna entidad varía radicalmente a medida que avanza el partido\ncualquier otra cosa que se te ocurra :-P",
"from textblob import TextBlob\n\nfor tweet in tweets:\n try:\n t = TextBlob(tweet[3]) # in Python2: t = TextBlob(tweet[3].decode('utf-8'))\n if t.sentiment.polarity < -0.5:\n print(tweet[3], '-->', t.sentiment)\n except IndexError:\n pass\n\nfor tweet in tweets:\n try:\n t = TextBlob(tweet[3]) # in Python2: t = TextBlob(tweet[3].decode('utf-8'))\n print(\" \".join(t.noun_phrases))\n except IndexError:\n pass\n\nfor tweet in tweets[:20]:\n try:\n t = TextBlob(tweet[3]) # in Python2: t = TextBlob(tweet[3].decode('utf-8'))\n print(t.translate(to='es'))\n except IndexError:\n pass"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
konstantinstadler/pymrio
|
doc/source/notebooks/load_save_export.ipynb
|
gpl-3.0
|
[
"Loading, saving and exporting data\nPymrio includes several functions for data reading and storing. This section presents the methods to use for saving and loading data already in a pymrio compatible format. For parsing raw MRIO data see the different tutorials for working with available MRIO databases.\nHere, we use the included small test MRIO system to highlight the different function. The same functions are available for any MRIO loaded into pymrio. Expect, however, significantly decreased performance due to the size of real MRIO system.",
"import pymrio\nimport os\nio = pymrio.load_test().calc_all()",
"Basic save and read\nTo save the full system, use:",
"save_folder_full = '/tmp/testmrio/full'\nio.save_all(path=save_folder_full)",
"To read again from that folder do:",
"io_read = pymrio.load_all(path=save_folder_full)",
"The fileio activities are stored in the included meta data history field:",
"io_read.meta",
"Storage format\nInternally, pymrio stores data in csv format, with the 'economic core' data in the root and each satellite account in a subfolder. Metadata as file as a file describing the data format ('file_parameters.json') are included in each folder.",
"import os\nos.listdir(save_folder_full)",
"The file format for storing the MRIO data can be switched to a binary pickle format with:",
"save_folder_bin = '/tmp/testmrio/binary'\nio.save_all(path=save_folder_bin, table_format='pkl')\nos.listdir(save_folder_bin)",
"This can be used to reduce the storage space required on the disk for large MRIO databases.\nArchiving MRIOs databases\nTo archive a MRIO system after saving use pymrio.archive:",
"mrio_arc = '/tmp/testmrio/archive.zip'\n\n# Remove a potentially existing archive from before\ntry:\n os.remove(mrio_arc)\nexcept FileNotFoundError:\n pass\n \npymrio.archive(source=save_folder_full, archive=mrio_arc)",
"Data can be read directly from such an archive by:",
"tt = pymrio.load_all(mrio_arc)",
"Currently data can not be saved directly into a zip archive.\nIt is, however, possible to remove the source files after archiving:",
"tmp_save = '/tmp/testmrio/tmp'\n\n# Remove a potentially existing archive from before\ntry:\n os.remove(mrio_arc)\nexcept FileNotFoundError:\n pass\n\nio.save_all(tmp_save)\n\nprint(\"Directories before archiving: {}\".format(os.listdir('/tmp/testmrio')))\npymrio.archive(source=tmp_save, archive=mrio_arc, remove_source=True)\nprint(\"Directories after archiving: {}\".format(os.listdir('/tmp/testmrio')))",
"Several MRIO databases can be stored in the same archive:",
"# Remove a potentially existing archive from before\ntry:\n os.remove(mrio_arc)\nexcept FileNotFoundError:\n pass\n\ntmp_save = '/tmp/testmrio/tmp'\n\nio.save_all(tmp_save)\npymrio.archive(source=tmp_save, archive=mrio_arc, path_in_arc='version1/', remove_source=True)\nio2 = io.copy()\ndel io2.emissions\nio2.save_all(tmp_save)\npymrio.archive(source=tmp_save, archive=mrio_arc, path_in_arc='version2/', remove_source=True)",
"When loading from an archive which includes multiple MRIO databases, specify\none with the parameter 'path_in_arc':",
"io1_load = pymrio.load_all(mrio_arc, path_in_arc='version1/')\nio2_load = pymrio.load_all(mrio_arc, path_in_arc='version2/')\n\nprint(\"Extensions of the loaded io1 {ver1} and of io2: {ver2}\".format(\n ver1=sorted(io1_load.get_extensions()),\n ver2=sorted(io2_load.get_extensions())))",
"The pymrio.load function can be used directly to only a specific satellite account \nof a MRIO database from a zip archive:",
"emissions = pymrio.load(mrio_arc, path_in_arc='version1/emissions')\nprint(emissions)",
"The archive function is a wrapper around python.zipfile module.\nThere are, however, some differences to the defaults choosen in the original:\n\n\nIn contrast to zipfile.write, \n pymrio.archive raises an\n error if the data (path + filename) are identical in the zip archive.\n Background: the zip standard allows that files with the same name and path\n are stored side by side in a zip file. This becomes an issue when unpacking\n this files as they overwrite each other upon extraction.\n\n\nThe standard for the parameter 'compression' is set to ZIP_DEFLATED \n This is different from the zipfile default (ZIP_STORED) which would\n not give any compression. \n See the zipfile docs \n for further information. \n Depending on the value given for the parameter 'compression' \n additional modules might be necessary (e.g. zlib for ZIP_DEFLATED). \n Futher information on this can also be found in the zipfile python docs.\n\n\nStoring or exporting a specific table or extension\nEach extension of the MRIO system can be stored separetly with:",
"save_folder_em= '/tmp/testmrio/emissions'\n\nio.emissions.save(path=save_folder_em)",
"This can then be loaded again as separate satellite account:",
"emissions = pymrio.load(save_folder_em)\n\nemissions\n\nemissions.D_cba",
"As all data in pymrio is stored as pandas DataFrame, the full pandas stack for exporting tables is available. For example, to export a table as excel sheet use:",
"io.emissions.D_cba.to_excel('/tmp/testmrio/emission_footprints.xlsx')",
"For further information see the pandas documentation on import/export."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ecabreragranado/OpticaFisicaII
|
Trabajo Filtro Interferencial/.ipynb_checkpoints/TrabajoFiltros-checkpoint.ipynb
|
gpl-3.0
|
[
"TRABAJO PROPUESTO SOBRE FILTROS INTERFERENCIALES\nConsultar el manual de uso de los cuadernos interactivos (notebooks) que se encuentra disponible en el Campus Virtual\nGrupo de trabajo\nEn esta celda los integrantes del grupo: modificar el texto\n\nJuan Antonio Fernández \nAlberto Pérez\nJuan \n\nIncluir las direcciones de correo electrónico\nIntroducción\nEl trabajo consiste en encontrar un filtro interferencial comercial que sirva para \nproteger el ojo de la radiación visible de un puntero láser de alta potencia. El trabajo\nse divide en las siguientes tareas:\nTarea 1. Exposición máxima permisible (MPE)\nLa exposici\u0013óm m\u0013áxima permisible (MPE maximum permissible exposure) es la m\u0013áxima densidad de potencia \no de energí\u0013\u0010a (W/cm$^2$ o J/cm$^2$) de un haz de luz que puede alcanzar el ojo humano sin producir daño. \nLa MPE se mide en la c\u0013órnea, y depende de la longitud de onda de la radiaci\u0013ón y del tiempo de exposici\u0013ón. \nEn la siguiente fi\fgura se muestra la MPE en la córnea (en unidades de irradiancia (W/cm$^2$)) en función \ndel tiempo de exposición para distintos rangos del espectro electromagnético.\nFigura de http://en.wikipedia.org/wiki/Laser_safety",
"from IPython.core.display import Image\nImage(\"http://upload.wikimedia.org/wikipedia/commons/thumb/2/28/IEC60825_MPE_W_s.png/640px-IEC60825_MPE_W_s.png\")",
"Tarea 1 (a). Irradiancia máxima\nComo estamos considerando el haz láser de un puntero que emite\nen el visible, como tiempo de exposición emplearemos el tiempo que\nse tarda en cerrar el párpado. Así con este tiempo de exposición\nestimar de la gráfica la irradiancia máxima que puede alcanzar el\nojo. \nEscribir el tiempo de exposición empleado y el correspondiente valor de la irradiancia.\n\n\nTiempo de exposición (parpadeo) = s\n\n\nIrradiancia máxima permisible = W/cm$^2$\n\n\nTarea 1 (b). Potencia máxima\nVamos a considerar que el haz que alcanza nuestro ojo está colimado\ncon un tamaño equivalente al de nuestra pupila. Empleando dicho\ntamaño calcular la potencia máxima que puede alcanzar nuestro ojo\nsin provocar daño.\nEscribir el tamaño de la pupila considerado, las operaciones y el resultado final de la potencia (en mW)\n\n\nDiámetro o radio de la pupila = mm\n\n\nCálculos intermedios\n\n\nPotencia máxima permisible = mW\n\n\nTarea 2. Elección del puntero láser\nBuscar en internet información sobre un\npuntero láser visible que sea de alta potencia.\nVerifi\fcar que dicho puntero l\u0013áser puede provocar daño ocular (teniendo en cuenta el resultado de la Tarea 1 (b))\nEscribir aquí las características técnicas de dicho láser \n\npotencia\nlongitud de onda \nprecio\notras características\npágina web http://www.ucm.es\n\nTarea 3. Elección del filtro interferencial\nVamos a buscar en internet un filtro interferencial\ncomercial que permita evitar el riesgo de daño ocular para el\npuntero láser seleccionado. Se tratará de un filtro que bloquee \nla longitud de onda del puntero láser.\nTarea 3 (a). Busqueda e información del filtro interferencial\nVamos a emplear la información accesible en la casa Semrock ( http://www.semrock.com/filters.aspx )\nSeleccionar en esta página web un filtro adecuado. Pinchar sobre cada filtro (sobre la curva de transmitancia, \nsobre el Part Number, o sobre Show Product Detail) para obtener más información. Escribir aquí \nlas características más relevantes del filtro seleccionado: \n\ntransmitancia T o densidad óptica OD \nrango de longitudes de onda\nprecio\npágina web del filtro seleccionado (cambiar la siguiente dirección) http://www.semrock.com/FilterDetails.aspx?id=LP02-224R-25\n\nTarea 3 (b). Verificación del filtro\nEmpleando el dato de la transmitancia (T) a la longitud de onda del\npuntero láser comprobar que dicho filtro evitará el riesgo de\nlesión.\nPara ello vamos a usar los datos de la transmitancia del filtro seleccionado\nque aparecen en la página web de Semrock. Para cargar dichos datos en nuestro notebook seguimos los siguientes pasos:\n\nPinchar con el ratón en la página web del filtro seleccionado sobre ASCII Data, que se encuentra en la leyenda de la figura (derecha).\n\n\n\n\nCopiar la dirección de la página web que se abre (esta página muestra los datos experimentales de la transmitancia)\n\n\nPegar esa dirección en la siguiente celda de código, detrás de filename = \n(Nota: asegurarse de que la dirección queda entre las comillas)\n\n\nEn la siguiente celda de código se representa la transmitancia del filtro en escala logarítmica \nen función de la longitud de onda (en nm).",
"####\n# Parámetros a modificar. INICIO\n####\n\nfilename = \"http://www.semrock.com/_ProductData/Spectra/NF01-229_244_DesignSpectrum.txt\"\n\n\n# Parámetros a modificar. FIN\n####\n\n%pylab inline\ndata=genfromtxt(filename,dtype=float,skip_header=4) # Carga los datos \nlongitud_de_onda=data[:,0];transmitancia=data[:,1];\n\nprint \"Datos cargados OK\"\n\nimport plotly\n\npy = plotly.plotly('ofii','i6jc6xsecb')\ndata = [{'x': longitud_de_onda, 'y':transmitancia}]\nlayout={'title': 'Transmitancia Filtro Escogido','yaxis':{'type':'log'}}\npy.iplot(data,layout=layout,fileopt='overwrite')\n\n",
"Esta gráfica nos permite obtener el valor de la transmitancia a la longitud de onda de nuestro \npuntero láser. Explora la curva moviendo el ratón sobre ella y haciendo zoom utilizando los\ncontroles que aparecen en la parte superior derecha de la figura. Localiza la longitud de onda\ndel puntero láser seleccionado y apunta el valor de la transmitancia en esta celda.\n\n\n$\\lambda$ = \n\n\nT = \n\n\nEmpleando el valor de la transmitancia del filtro a la longitud de onda del puntero láser verificar \nque el filtro evitará el riesgo de daño ocular. Escribir a continuación la estimación realizada."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
anhaidgroup/py_entitymatching
|
notebooks/guides/step_wise_em_guides/Performing Blocking Using Built-In Blockers (Sorted Neighborhood Blocker).ipynb
|
bsd-3-clause
|
[
"Contents\n\nIntroduction\nBlock Using the Sorted Neighborhood Blocker\nBlock Tables to Produce a Candidate Set of Tuple Pairs\nHandling Missing Values\nWindow Size\nStable Sort Order\nSorted Neighborhood Blocker Limitations\n\nIntroduction\n<font color='red'>WARNING: The sorted neighborhood blocker is still experimental and has not been fully tested yet. Use this blocker at your own risk.</font>\nBlocking is typically done to reduce the number of tuple pairs considered for matching. There are several blocking methods proposed. The py_entitymatching package supports a subset of such blocking methods (#ref to what is supported). One such supported blocker is the sorted neighborhood blocker. This IPython notebook illustrates how to perform blocking using the sorted neighborhood blocker.\nNote, often the sorted neighborhood blocking technique is used on a single table. In this case we have implemented sorted neighborhood blocking between two tables. We first enrich the tables with whether the table is the left table, or right table. Then we merge the tables. At this point we perform sorted neighborhood blocking, which is to pass a sliding window of window_size (default 2) across the merged dataset. Within the sliding window all tuple pairs that have one tuple from the left table and one tuple from the right table are returned.\nFirst, we need to import py_entitymatching package and other libraries as follows:",
"# Import py_entitymatching package\nimport py_entitymatching as em\nimport os\nimport pandas as pd",
"Then, read the input tablse from the datasets directory",
"# Get the datasets directory\ndatasets_dir = em.get_install_path() + os.sep + 'datasets'\n\n# Get the paths of the input tables\npath_A = datasets_dir + os.sep + 'person_table_A.csv'\npath_B = datasets_dir + os.sep + 'person_table_B.csv'\n\n# Read the CSV files and set 'ID' as the key attribute\nA = em.read_csv_metadata(path_A, key='ID')\nB = em.read_csv_metadata(path_B, key='ID')\n\nA.head()\n\nB.head()",
"Block Using the Sorted Neighborhood Blocker\nOnce the tables are read, we can do blocking using sorted neighborhood blocker.\nWith the sorted neighborhood blocker, you can only block between two tables to produce a candidate set of tuple pairs.\nBlock Tables to Produce a Candidate Set of Tuple Pairs",
"# Instantiate attribute equivalence blocker object\nsn = em.SortedNeighborhoodBlocker()",
"For the given two tables, we will assume that two persons with different zipcode values do not refer to the same real world person. So, we apply attribute equivalence blocking on zipcode. That is, we block all the tuple pairs that have different zipcodes.",
"# Use block_tables to apply blocking over two input tables.\nC1 = sn.block_tables(A, B, \n l_block_attr='birth_year', r_block_attr='birth_year', \n l_output_attrs=['name', 'birth_year', 'zipcode'],\n r_output_attrs=['name', 'birth_year', 'zipcode'],\n l_output_prefix='l_', r_output_prefix='r_', window_size=3)\n\n# Display the candidate set of tuple pairs\nC1.head()",
"Note that the tuple pairs in the candidate set have the same zipcode. \nThe attributes included in the candidate set are based on l_output_attrs and r_output_attrs mentioned in block_tables command (the key columns are included by default). Specifically, the list of attributes mentioned in l_output_attrs are picked from table A and the list of attributes mentioned in r_output_attrs are picked from table B. The attributes in the candidate set are prefixed based on l_output_prefix and r_ouptut_prefix parameter values mentioned in block_tables command.",
"# Show the metadata of C1\nem.show_properties(C1)\n\nid(A), id(B)",
"Note that the metadata of C1 includes key, foreign key to the left and right tables (i.e A and B) and pointers to left and right tables.\nHandling Missing Values\nIf the input tuples have missing values in the blocking attribute, then they are ignored by default. This is because, including all possible tuple pairs with missing values can significantly increase the size of the candidate set. But if you want to include them, then you can set allow_missing paramater to be True.",
"# Introduce some missing values\nA1 = em.read_csv_metadata(path_A, key='ID')\nA1.ix[0, 'zipcode'] = pd.np.NaN\nA1.ix[0, 'birth_year'] = pd.np.NaN\n\nA1\n\n# Use block_tables to apply blocking over two input tables.\nC2 = sn.block_tables(A1, B, \n l_block_attr='zipcode', r_block_attr='zipcode', \n l_output_attrs=['name', 'birth_year', 'zipcode'],\n r_output_attrs=['name', 'birth_year', 'zipcode'],\n l_output_prefix='l_', r_output_prefix='r_', \n allow_missing=True) # setting allow_missing parameter to True\n\nlen(C1), len(C2)\n\nC2",
"The candidate set C2 includes all possible tuple pairs with missing values.\nWindow Size\nA tunable parameter to the Sorted Neighborhood Blocker is the Window size. To perform the same result as above with a larger window size is via the window_size argument. Note that it has more results than C1.",
"C3 = sn.block_tables(A, B, \n l_block_attr='birth_year', r_block_attr='birth_year', \n l_output_attrs=['name', 'birth_year', 'zipcode'],\n r_output_attrs=['name', 'birth_year', 'zipcode'],\n l_output_prefix='l_', r_output_prefix='r_', window_size=5)\n\nlen(C1)\n\nlen(C3)",
"Stable Sort Order\nOne final challenge for the Sorted Neighborhood Blocker is making the sort order stable. If the column being sorted on has multiple identical keys, and those keys are longer than the window size, then different results may occur between runs. To always guarantee the same results for every run, make sure to make the sorting column unique. One method to do so is to append the id of the tuple onto the end of the sorting column. Here is an example.",
"A[\"birth_year_plus_id\"]=A[\"birth_year\"].map(str)+'-'+A[\"ID\"].map(str)\nB[\"birth_year_plus_id\"]=B[\"birth_year\"].map(str)+'-'+A[\"ID\"].map(str)\nC3 = sn.block_tables(A, B, \n l_block_attr='birth_year_plus_id', r_block_attr='birth_year_plus_id', \n l_output_attrs=['name', 'birth_year_plus_id', 'birth_year', 'zipcode'],\n r_output_attrs=['name', 'birth_year_plus_id', 'birth_year', 'zipcode'],\n l_output_prefix='l_', r_output_prefix='r_', window_size=5)\n\nC3.head()",
"Sorted Neighborhood Blocker limitations\nSince the sorted neighborhood blocker requires position in sorted order, unlike other blockers, blocking on a candidate set or checking two tuples is not applicable. Attempts to call block_candset or block_tuples will raise an assertion."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
fotis007/python_intermediate
|
Python_2_1.ipynb
|
gpl-3.0
|
[
"Table of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#Python-für-Fortgeschrittene\" data-toc-modified-id=\"Python-für-Fortgeschrittene-1\"><span class=\"toc-item-num\">1 </span>Python für Fortgeschrittene</a></div><div class=\"lev2 toc-item\"><a href=\"#Überblick-über-den-Kurs\" data-toc-modified-id=\"Überblick-über-den-Kurs-11\"><span class=\"toc-item-num\">1.1 </span>Überblick über den Kurs</a></div><div class=\"lev2 toc-item\"><a href=\"#1.-Sitzung:-Wiederholung\" data-toc-modified-id=\"1.-Sitzung:-Wiederholung-12\"><span class=\"toc-item-num\">1.2 </span>1. Sitzung: Wiederholung</a></div><div class=\"lev3 toc-item\"><a href=\"#Datenstrukturen-im-Überblick\" data-toc-modified-id=\"Datenstrukturen-im-Überblick-121\"><span class=\"toc-item-num\">1.2.1 </span>Datenstrukturen im Überblick</a></div><div class=\"lev2 toc-item\"><a href=\"#Aufgabe\" data-toc-modified-id=\"Aufgabe-13\"><span class=\"toc-item-num\">1.3 </span>Aufgabe</a></div><div class=\"lev2 toc-item\"><a href=\"#Programmsteuerung\" data-toc-modified-id=\"Programmsteuerung-14\"><span class=\"toc-item-num\">1.4 </span>Programmsteuerung</a></div><div class=\"lev2 toc-item\"><a href=\"#Aufgaben\" data-toc-modified-id=\"Aufgaben-15\"><span class=\"toc-item-num\">1.5 </span>Aufgaben</a></div><div class=\"lev2 toc-item\"><a href=\"#Funktionen\" data-toc-modified-id=\"Funktionen-16\"><span class=\"toc-item-num\">1.6 </span>Funktionen</a></div><div class=\"lev2 toc-item\"><a href=\"#Aufgaben\" data-toc-modified-id=\"Aufgaben-17\"><span class=\"toc-item-num\">1.7 </span>Aufgaben</a></div><div class=\"lev2 toc-item\"><a href=\"#Dateien-lesen-und-schreiben\" data-toc-modified-id=\"Dateien-lesen-und-schreiben-18\"><span class=\"toc-item-num\">1.8 </span>Dateien lesen und schreiben</a></div><div class=\"lev2 toc-item\"><a href=\"#Aufgabe\" data-toc-modified-id=\"Aufgabe-19\"><span class=\"toc-item-num\">1.9 </span>Aufgabe</a></div><div class=\"lev2 toc-item\"><a href=\"#Reguläre-Ausdrücke\" data-toc-modified-id=\"Reguläre-Ausdrücke-110\"><span class=\"toc-item-num\">1.10 </span>Reguläre Ausdrücke</a></div><div class=\"lev2 toc-item\"><a href=\"#Aufgabe\" data-toc-modified-id=\"Aufgabe-111\"><span class=\"toc-item-num\">1.11 </span>Aufgabe</a></div>\n\n# Python für Fortgeschrittene\n\n## Überblick über den Kurs\n\n1. Wiederholung Basiswissen Python\n2. Funktionales Programmieren 1: Iteratoren, List Comprehension, map und filter\n3. Programme strukturieren, Funktionales Programmieren 2: Generatoren\n4. Graphen\n5. Datenanalyse 1: numpy\n6. Datenanalyse 2: pandas\n7. Datenanalyse 3: matplotlib\n8. Datenanalyse 4: Maschinelles Lernen\n9. Arbeiten mit XML: lxml\n\n## 1. 
Sitzung: Wiederholung\n\n<h3>Datenstrukturen im Überblick</h3>\n<ul>\n<li>Sequence (geordnete Folge)</li>\n<ul>\n<li>String (enthält Folge von Unicode-Zeichen) <b>nicht</b> veränderbar</li>\n<li>List (enthält Elemente des gleichen Datentyps; beliebige Länge) veränderbar</li>\n<li>Tuple (enthält Elemente unterschiedlichen Datentyps; gleiche Länge) <b>nicht</b> veränderbar</li>\n<li>namedtuple (Tuple, dessen Felder Namen haben) <b>nicht</b> veränderbar</li>\n<li>Range (Folge von Zahlen) <b>nicht</b> veränderbar</li>\n<li>deque (double-ended queue) veränderbar</li>\n</ul>\n<li>Maps (ungeordnete Zuordnungen)</li>\n<ul>\n<li>Dictionary (enthält key-value Paare)</li>\n<li>Counter</li>\n<li>OrderedDict</li>\n</ul>\n<li>Set (Gruppe von Elementen ohne Duplikate)</li>\n<ul>\n<li>Set (enthält ungeordnet Elemente ohne Duplikate; veränderbar)</li>\n<li>Frozenset (wie Set, nur unveränderlich)</li>\n</ul>\n</ul>",
"#List\na = [1, 5, 2, 84, 23]\nb = list(\"hallo\")\nc = range(10)\nlist(c)\n\n#dictionary\nz = dict(a=2,b=5,c=1)\nz",
"<h2>Aufgabe</h2>\n<ul><li>Schreiben Sie eine vergleichbare Zuweisung für jede der oben aufgelisteten Datenstrukturen</li></ul>",
"#tuple\na = (\"a\", 1)\n#check\ntype(a)",
"<h2>Programmsteuerung</h2>\n<p>Programmkontrolle: Bedingte Verzweigung</p>\n<p>Mit if kann man den Programmablauf abhängig vom Wahrheitswert von Bedingungen verzweigen lassen. Z.B.:</p>",
"a = True\nb = False\nif a == True: \n print(\"a is true\")\nelse:\n print(\"a is not true\")",
"Mit <b>for</b> kann man Schleifen über alles Iterierbare machen. Z.B.:",
"#chr(x) gibt den char mit dem unicode value x aus\n\n\n\nfor c in range(80,90):\n print(chr(c),end=\" \")",
"<h2>Aufgaben</h2>\n<ul><li>Geben Sie alle Buchstaben von A bis z aus, deren Unicode-Code eine gerade Zahl ist.</li>",
"for c in range(65,112):\n if c % 2 == 0:\n print(chr(c))\n\n ",
"<ul>Zählen Sie, wie häufig der Buchstabe \"a\" im folgenden Satz vorkommt: \"Goethes literarische Produktion umfasst Lyrik, Dramen, erzählende Werke (in Vers und Prosa), autobiografische, kunst- und literaturtheoretische sowie naturwissenschaftliche Schriften.\"</ul>",
"a = \"ABCDEuvwwxyz\"\nfor i in a:\n print(i)\n \n",
"<h2>Funktionen</h2>\n<p>Funktionen dienen der Modularisierung des Programms und der Komplexitätsreduktion. Sie ermöglichen die Wiederverwendung von Programmcode und eine einfachere Fehlersuche.",
"#diese Funktion dividiert 2 Zahlen:\ndef div(a, b):\n return a / b\n\n#test\ndiv(6,2)",
"<h2>Aufgaben</h2>\n<p>Schreiben Sie eine Funktion, die die Anzahl der Vokale in einem String zählt.</p>",
"a = \"Hallo\"\n\ndef count_vowels(s):\n result = 0\n for i in s:\n if i in \"AEIOUaeiou\":\n result += 1\n return result\ncount_vowels(a)\n\ns = \"hallo\"\nfor i in s:\n print(i)",
"<h2>Dateien lesen und schreiben</h2>\n<p>open(file, mode='r', encoding=None) können Sie Dateien schreiben oder lesen. </p>\n<p><b>modes:</b> <br/>\n\"r\" - read (default)<br/>\n\"w\" - write. Löscht bestehende Inhalte<br/>\n\"a\" - append. Hängt neue Inhalte an.<br/>\n\"t\" - text (default) <br/>\n\"b\" - binary. <br/>\n\"x\" - exclusive. Öffnet Schreibzugriff auf eine Datei. Gibt Fehlermeldung, wenn die Datei existiert.</p>\n<p> <b>encoding</b>\n\"utf-8\"<br/>\n\"ascii\"<br/>\n\"cp1252\"<br/>\n\"iso-8859-1\"<br/><p>",
"words = []\nwith open(\"goethe.txt\", \"w\", encoding=\"utf-8\") as fin:\n for line in fin:\n re.findall(\"\\w+\", s)",
"<h2>Aufgabe</h2>\n<p>Schreiben Sie diesen Text in eine Datei mit den Namen \"goethe.txt\" (utf-8):<br/>\n<code>\nJohann Wolfgang von Goethe (* 28. August 1749 in Frankfurt am Main; † 22. März 1832 in Weimar), geadelt 1782, gilt als einer der bedeutendsten Repräsentanten deutschsprachiger Dichtung.\n\nGoethes literarische Produktion umfasst Lyrik, Dramen, erzählende Werke (in Vers und Prosa), autobiografische, kunst- und literaturtheoretische sowie naturwissenschaftliche Schriften. Daneben ist sein umfangreicher Briefwechsel von literarischer Bedeutung. Goethe war Vorbereiter und wichtigster Vertreter des Sturm und Drang. Sein Roman Die Leiden des jungen Werthers machte ihn in Europa berühmt. Gemeinsam mit Schiller, Herder und Wieland verkörpert er die Weimarer Klassik. Im Alter wurde er auch im Ausland als Repräsentant des geistigen Deutschlands angesehen.\n\nAm Hof von Weimar bekleidete er als Freund und Minister des Herzogs Carl August politische und administrative Ämter und leitete ein Vierteljahrhundert das Hoftheater.\n\nIm Deutschen Kaiserreich wurde er „zum Kronzeugen der nationalen Identität der Deutschen“[1] und als solcher für den deutschen Nationalismus vereinnahmt. Es setzte damit eine Verehrung nicht nur des Werks, sondern auch der Persönlichkeit des Dichters ein, dessen Lebensführung als vorbildlich empfunden wurde. Bis heute zählen Gedichte, Dramen und Romane von ihm zu den Meisterwerken der Weltliteratur.</code>\n\n<h2>Reguläre Ausdrücke</h2>\n<ul>\n<li>Zeichenklassen<br/>z.B. '.' (hier und im folgenden ohne '') = beliebiges Zeichen\n<li>Quantifier<br/>z.B. '+' = 1 oder beliebig viele des vorangehenden Zeichens\u000b'ab+' matches 'ab' 'abb' 'abbbbb', aber nicht 'abab'\n<li>Positionen<br/>z.B. '^' am Anfang der Zeile\n<li>Sonstiges<br/>Gruppen (x), '|' Oder‚ '|', Non-greedy: ?, '\\' Escape character\n</ul>\n<p>Beispiel. Aufgabe: Finden Sie alle Großbuchstaben in einem String s.",
"import re\ns = \"Dies ist ein Beispiel.\"\nre.findall(r\"[A-ZÄÖÜ]\", s)\n\nre.findall(\"\\w+\", s)",
"<h2>Aufgabe</h2>\n<p>Schreiben Sie Skript, dass eine Worthäufigkeitsliste für die Datei goethe.txt erstellt. Begründen Sie den Aufbau Ihres Skripts</p>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
analysiscenter/dataset
|
examples/tutorials/07_sampler.ipynb
|
apache-2.0
|
[
"Sampler",
"import sys\nsys.path.append('../..')\nimport matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline\nimport pandas as pd",
"Intro\nWelcome! In this section you'll learn about Sampler-class. Instances of Sampler can be used for flexible sampling of multivariate distributions.\nTo begin with, Sampler gives rise to several building-blocks classes such as\n- NumpySampler, or NS\n- ScipySampler - SS\nWhat's more, Sampler incorporates a set of operations on Sampler-instances, among which are\n- \"|\" for building a mixture of two samplers: s = s1 | s2\n- \"&\" for setting a mixture-weight of a sampler: s = 0.6 & s1 | 0.4 & s2\n- \" truncate\" for truncating the support of underlying sampler's distribution: s.truncate(high=[1.0, 1.5])\n- ..all arithmetic operations: s = s1 + s2 or s = s1 + 0.5\nThese operations can be used for combining building-blocks samplers into complex multivariate-samplers, just like that:",
"from batchflow import NumpySampler as NS\n\n# truncated normal and uniform\nns1 = NS('n', dim=2).truncate(2.0, 0.8, lambda m: np.sum(np.abs(m), axis=1)) + 4\nns2 = 2 * NS('u', dim=2).truncate(1, expr=lambda m: np.sum(m, axis=1)) - (1, 1)\nns3 = NS('n', dim=2).truncate(1.5, expr=lambda m: np.sum(np.square(m), axis=1)) + (4, 0)\nns4 = ((NS('n', dim=2).truncate(2.5, expr=lambda m: np.sum(np.square(m), axis=1)) * 4)\n .apply(lambda m: m.astype(np.int)) / 4 + (0, 3))\n\n# a mixture of all four\nns = 0.4 & ns1 | 0.2 & ns2 | 0.39 & ns3 | 0.01 & ns4\n\n# take a look at the heatmap of our sampler:\nh = np.histogramdd(ns.sample(int(1e6)), bins=100, normed=True)\nplt.imshow(h[0])",
"Building Samplers\n1. Numpy, Scipy, TensorFlow - Samplers\nTo build a NumpySampler(NS) you need to specify a name of distribution from numpy.random (or its alias) and the number of independent dimensions:",
"from batchflow import NumpySampler as NS\nns = NS('n', dim=2)",
"take a look at a sample generated by our sampler:",
"smp = ns.sample(size=200)\n\nplt.scatter(*np.transpose(smp))",
"The same goes for ScipySampler based on scipy.stats-distributions, or SS (\"mvn\" stands for multivariate-normal):",
"from batchflow import ScipySampler as SS\nss = SS('mvn', mean=[0, 0], cov=[[2, 1], [1, 2]]) # note also that you can pass the same params as in\nsmp = ss.sample(2000) # scipy.sample.multivariate_normal, such as `mean` and `cov` \nplt.scatter(*np.transpose(smp))",
"2. HistoSampler as an estimate of a distribution generating a cloud of points\nHistoSampler, or HS can be used for building samplers, with underlying distributions given by a histogram. You can either pass a np.histogram-output into the initialization of HS",
"from batchflow import HistoSampler as HS\nhisto = np.histogramdd(ss.sample(1000000))\nhs = HS(histo)\nplt.scatter(*np.transpose(hs.sample(150)))",
"...or you can specify empty bins and estimate its weights using a method HS.update and a cloud of points:",
"hs = HS(edges=2 * [np.linspace(-4, 4)])\nhs.update(ss.sample(1000000))\nplt.imshow(hs.bins, interpolation='bilinear')",
"3. Algebra of Samplers; operations on Samplers\nSampler-instances support artithmetic operations (+, *, -,...). Arithmetics works on either\n* (Sampler, Sampler) - pair\n* (Sampler, array-like) - pair",
"# blur using \"+\"\nu = NS('u', dim=2)\nnoise = NS('n', dim=2)\nblurred = u + noise * 0.2 # decrease the magnitude of the noise\nboth = blurred | u + (2, 2)\n\nplt.imshow(np.histogramdd(both.sample(1000000), bins=100)[0])",
"You may also want to truncate a sampler's distribution so that sampling points belong to a specific region. The common use-case is to sample normal points inside a box.\n..or, inside a ring:",
"n = NS('n', dim=2).truncate(3, 0.3, expr=lambda m: np.sum(m**2, axis=1))\nplt.imshow(np.histogramdd(n.sample(1000000), bins=100)[0])",
"Not infrequently you need to obtain \"normal\" sample in integers. For this you can use Sampler.apply method:",
"n = (4 * NS('n', dim=2)).apply(lambda m: m.astype(np.int)).truncate([6, 6], [-6, -6])\nplt.imshow(np.histogramdd(n.sample(1000000), bins=100)[0])",
"Note that Sampler.apply-method allows you to add an arbitrary transformation to a sampler. For instance, Box-Muller transform:",
"bm = lambda vec2: np.sqrt(-2 * np.log(vec2[:, 0:1])) * np.concatenate([np.cos(2 * np.pi * vec2[:, 1:2]),\n np.sin(2 * np.pi * vec2[:, 1:2])], axis=1)\nn = NS('u', dim=2).apply(bm)\n\nplt.imshow(np.histogramdd(u.sample(1000000), bins=100)[0])",
"Another useful thing is coordinate stacking (\"&\" stands for multiplication of distribution functions):",
"n, u = NS('n'), SS('u') # initialize one-dimensional notrmal and uniform samplers\ns = n & u # stack them together\ns.sample(3)",
"4. Alltogether",
"ns1 = NS('n', dim=2).truncate(2.0, 0.8, lambda m: np.sum(np.abs(m), axis=1)) + 4\nns2 = 2 * NS('u', dim=2).truncate(1, expr=lambda m: np.sum(m, axis=1)) - (1, 1)\nns3 = NS('n', dim=2).truncate(1.5, expr=lambda m: np.sum(np.square(m), axis=1)) + (4, 0)\nns4 = ((NS('n', dim=2).truncate(2.5, expr=lambda m: np.sum(np.square(m), axis=1)) * 4)\n .apply(lambda m: m.astype(np.int)) / 4 + (0, 3))\nns = 0.4 & ns1 | 0.2 & ns2 | 0.39 & ns3 | 0.01 & ns4\n\nplt.imshow(np.histogramdd(ns.sample(int(1e6)), bins=100, normed=True)[0])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
radhikapc/foundation-homework
|
homework05/Homework05_NYTimes-radhika_graded.ipynb
|
mit
|
[
"Grade:",
"import requests\n\nresponse = requests.get('http://api.nytimes.com/svc/books/v2/lists/2010-05-09/hardcover-fiction.json?api-key=3880684abea14d86b6280c6dbd80a793')\ndata = response.json()\n#print(data)\n\n#print(data.keys())\n\n#print(data['results'])",
"1a) What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day?",
"#What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day?\n#mothers day in 2010\nbook_result = data['results']\n\n#print(book_result)\n\nprint(\"The hardcover Fiction NYT best-sellers on mothers day in 2010 are:\")\n\nfor i in book_result:\n #print(i['book_details'])\n for item in i['book_details']:\n print(\"-\", item['title'])\n \n\n ",
"1b) What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day?",
"#mothers day in 2009\nresponse = requests.get('http://api.nytimes.com/svc/books/v2/lists/2009-05-10/hardcover-fiction.json?api-key=3880684abea14d86b6280c6dbd80a793')\ndata = response.json()\n#print(data)\n\nprint(\"The hardcover Fiction NYT best-sellers on mothers day in 2009 are:\")\nbook_result = data['results']\n\n#print(book_result)\n\nfor i in book_result:\n #print(i['book_details'])\n for item in i['book_details']:\n print(\"-\",item['title'])\n ",
"1c) What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day?",
"#fathers day in 2010\nresponse = requests.get('http://api.nytimes.com/svc/books/v2/lists/2010-06-20/hardcover-fiction.json?api-key=3880684abea14d86b6280c6dbd80a793')\ndata = response.json()\n#print(data)\nprint(\"The hardcover Fiction NYT best-sellers on fathers day in 2010 are:\")\nbook_result = data['results']\n\n#print(book_result)\n\nfor i in book_result:\n #print(i['book_details'])\n for item in i['book_details']:\n print(\"-\", item['title'])\n ",
"1d) What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day?",
"#fathers day in 2009\nresponse = requests.get('http://api.nytimes.com/svc/books/v2/lists/2009-06-21/hardcover-fiction.json?api-key=3880684abea14d86b6280c6dbd80a793')\ndata = response.json()\n#print(data)\nbook_result = data['results']\nprint(\"The hardcover Fiction NYT best-sellers on fathers day in 2009 are:\")\n\n#print(book_result)\n\nfor i in book_result:\n #print(i['book_details'])\n for item in i['book_details']:\n print(\"-\", item['title'])",
"2a) What are all the different book categories the NYT ranked in June 6, 2009? How about June 6, 2015?",
"#What are all the different book categories the NYT ranked in June 6, 2009? How about June 6, 2015?\n\nresponse = requests.get('http://api.nytimes.com/svc/books/v2/lists/names.json?date=2009-06-06&api-key=3880684abea14d86b6280c6dbd80a793')\ndata = response.json()\n#print(data)\n\n#What are all the different book categories the NYT ranked in June 6, 2009\nbook_result = data['results']\nprint(\"The following are the different book categories the NYT ranked in June 6, 2009:\")\n\n#print(book_result)\n\nfor i in book_result:\n print(\"-\", i['display_name'])\n ",
"2b) What are all the different book categories the NYT ranked in June 6, 2009? How about June 6, 2015?",
"#What are all the different book categories the NYT ranked in June 6, 2015?\n\nresponse = requests.get('http://api.nytimes.com/svc/books/v2/lists/names.json?date=2015-06-06&api-key=3880684abea14d86b6280c6dbd80a793')\ndata = response.json()\n#print(data)\nbook_result = data['results']\nprint(\"The following are the different book categories the NYT ranked in June 6, 2015:\")\n\n#print(book_result)\n\nfor i in book_result:\n print(\"-\", i['display_name'])",
"3) Finding the Total Occurrence of Muammar Gaddafi's\nMuammar Gaddafi's name can be transliterated many many ways. His last name is often a source of a million and one versions - Gadafi, Gaddafi, Kadafi, and Qaddafi to name a few. How many times has the New York Times referred to him by each of those names?\nTip: Add \"Libya\" to your search to make sure (-ish) you're talking about the right guy.",
"Gadafi_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=Gadafi&fq=Libya&api-key=3880684abea14d86b6280c6dbd80a793')\nGadafi_data = Gadafi_response.json()\nGadafi_data_result = Gadafi_data['response']['meta']['hits']\nprint(\"Gadafi appears\", Gadafi_data_result, \"times\")\nGaddafi_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=Gaddafi&fq=Libya&api-key=3880684abea14d86b6280c6dbd80a793')\nGaddafi_data = Gaddafi_response.json()\nGaddafi_data_result = Gaddafi_data['response']['meta']['hits']\nprint(\"Gaddafi appears\", Gaddafi_data_result, \"times\")\nKadafi_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=Kadafi&fq=Libya&api-key=3880684abea14d86b6280c6dbd80a793')\nKadafi_data = Kadafi_response.json()\nKadafi_data_result = Kadafi_data['response']['meta']['hits']\nprint(\"Kadafi appears\", Kadafi_data_result, \"times\")\nQaddafi_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=Qaddafi&fq=Libya&api-key=3880684abea14d86b6280c6dbd80a793')\nQaddafi_data = Qaddafi_response.json()\nQaddafi_data_result = Qaddafi_data['response']['meta']['hits']\nprint(\"Qaddafi appears\", Qaddafi_data_result, \"times\")\n\n ",
"4a) Hipster\nWhat's the title of the first story to mention the word 'hipster' in 1995? What's the first paragraph?",
"# testing it for count\nhipster_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=hipster&begin_date=19950101&end_date=19953112&sort=oldest&api-key=3880684abea14d86b6280c6dbd80a793')\nhipster_data = hipster_response.json()\nhipster_data_result = hipster_data['response']['meta']['hits']\nprint(\"hipster appears\", hipster_data_result, \"times\")",
"4b) Hipster\nWhat's the title of the first story to mention the word 'hipster' in 1995? What's the first paragraph?",
"#print(hipster_data)\n#print(hipster_data.keys())\nhipster_resp = hipster_data['response']\nhipster_resp = hipster_data['response']['docs']\nfor item in hipster_resp:\n print(item['headline']['main'], item['pub_date'])\n#print(\"The word, hipster, appears for the first time in the following article: \", hipster_resp)\n#hipster_para = hipster_data['response']['docs'][0]['lead_paragraph']\n#print(\"The word, hipster, appears for the first time in the following paragraph:\")\n#print(\"-------------------------------------------------------------------------\")\n#print(hipster_para)\n\n# TA-COMMENT: (-0.5) Missing the first paragraph of the first story",
"5) Gay Marriage\n5) How many times was gay marriage mentioned in the NYT between 1950-1959, 1960-1969, 1970-1978, 1980-1989, 1990-2099, 2000-2009, and 2010-present?\nTip: You'll want to put quotes around the search term so it isn't just looking for \"gay\" and \"marriage\" in the same article.",
"gay50S_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=\"gay marriage\"&begin_date=19500101&end_date=19591231&api-key=3880684abea14d86b6280c6dbd80a793')\ngay50S_data = gay50S_response.json()\ngay50S_data_result = gay50S_data['response']['meta']['hits']\nprint(\"Gay Marriage appears\", gay50S_data_result, \"times in the period 1950-1959\")\n\ngay60S_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=\"gay marriage\"&begin_date=19600101&end_date=19691231&api-key=3880684abea14d86b6280c6dbd80a793')\ngay60S_data = gay60S_response.json()\ngay60S_data_result = gay60S_data['response']['meta']['hits']\nprint(\"Gay Marriage appears\", gay60S_data_result, \"times in the period 1960-1969\")\n\n# TA-COMMENT: Is there a way to do this programmatically without repeating yourself? \n\n#1950\ngay50S_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=19500101&end_date=19591231&api-key=3880684abea14d86b6280c6dbd80a793')\ngay50S_data = gay50S_response.json()\ngay50S_data_result = gay50S_data['response']['meta']['hits']\nprint(\"Gay Marriage appears\", gay50S_data_result, \"times in the period 1950-1959\")\n\n#1960\n\ngay60S_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=19600101&end_date=19691231&api-key=3880684abea14d86b6280c6dbd80a793')\ngay60S_data = gay60S_response.json()\ngay60S_data_result = gay60S_data['response']['meta']['hits']\nprint(\"Gay Marriage appears\", gay60S_data_result, \"times in the period 1960-1969\")\n\n#1970\ngay70S_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=19700101&end_date=19781231&api-key=3880684abea14d86b6280c6dbd80a793')\ngay70S_data = gay70S_response.json()\ngay70S_data_result = gay70S_data['response']['meta']['hits']\nprint(\"Gay Marriage appears\", gay70S_data_result, \"times in the period 1970-1978\")\n#1980\ngay80S_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=19800101&end_date=19891231&api-key=3880684abea14d86b6280c6dbd80a793')\ngay80S_data = gay80S_response.json()\ngay80S_data_result = gay80S_data['response']['meta']['hits']\nprint(\"Gay Marriage appears\", gay80S_data_result, \"times in the period 1980-1989\")\n#1990\ngay90S_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=19900101&end_date=19991231&api-key=3880684abea14d86b6280c6dbd80a793')\ngay90S_data = gay90S_response.json()\ngay90S_data_result = gay90S_data['response']['meta']['hits']\nprint(\"Gay Marriage appears\", gay90S_data_result, \"times in the period 1990-1999\")\n#2000\ngay00s_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=20000101&end_date=20091231&api-key=3880684abea14d86b6280c6dbd80a793')\ngay00s_data = gay00s_response.json()\ngay00s_data_result = gay00s_data['response']['meta']['hits']\nprint(\"Gay Marriage appears\", gay00s_data_result, \"times in the period 2000-2009\")\n# 2010\ngm_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=20100101&api-key=3880684abea14d86b6280c6dbd80a793')\ngm_data = gm_response.json()\ngm_data_result = gm_data['response']['meta']['hits']\nprint(\"Gay Marriage appears\", gm_data_result, \"times from the year 2010\")",
"6) What section talks about motorcycles the most? ##\nTip: You'll be using facets",
"motor_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=motorcycle&facet_field=section_name&api-key=3880684abea14d86b6280c6dbd80a793')\nmotor_data = motor_response.json()\n#motor_data_result = motor_data['response']['meta']['hits']\n#print(\"Motorcycles appear\", motor_data_result, \"times\")\nmotor_info = motor_data['response']['facets']['section_name']['terms']\nprint(\"This input gives all the sections and count:\", motor_info)\n#for i in motor_info:\n #print(i)\nprint(\"Therefore, Motorcycles appear\", motor_info[0]['term'], \"section the most\")",
"7) Critics's Picks\nHow many of the last 20 movies reviewed by the NYT were Critics' Picks? How about the last 40? The last 60?\nTip: You really don't want to do this 3 separate times (1-20, 21-40 and 41-60) and add them together. What if, perhaps, you were able to figure out how to combine two lists? Then you could have a 1-20 list, a 1-40 list, and a 1-60 list, and then just run similar code for each of them.",
"#first 20 movies\nmovie_response = requests.get('https://api.nytimes.com/svc/movies/v2/reviews/search.json?&api-key=3880684abea14d86b6280c6dbd80a793')\nmovie_data = movie_response.json()\n#print(movie_data)\n#print(movie_data.keys())\ncount = 0\nmovie_result = movie_data['results']\nfor i in movie_result:\n #print(i)\n #print(i.keys())\n #print(item)\n if i['critics_pick']:\n count = count + 1\nprint(\"Out of last 20 movies\", count, \"movies were critics picks\")\n\n#first 40 movies\nmovie_response = requests.get('https://api.nytimes.com/svc/movies/v2/reviews/search.json?&&offset=20&api-key=3880684abea14d86b6280c6dbd80a793')\nmovie_data = movie_response.json()\n#print(movie_data)\n#print(movie_data.keys())\ncount_40 = 0\nmovie_result = movie_data['results']\nfor i in movie_result:\n #print(i)\n #print(i.keys())\n #print(item)\n if i['critics_pick']:\n count_40 = count_40 + 1\n#print(count_40)\nlast_fourty = count + count_40\nprint(\"Out of last 40 movies\", last_fourty, \"movies were critics picks\")\n\n#first 60 movies\nmovie_response = requests.get('https://api.nytimes.com/svc/movies/v2/reviews/search.json?&offset=40&api-key=3880684abea14d86b6280c6dbd80a793')\nmovie_data = movie_response.json()\n#print(movie_data)\n#print(movie_data.keys())\ncount_60 = 0\nmovie_result = movie_data['results']\nfor i in movie_result:\n #print(i)\n #print(i.keys())\n #print(item)\n if i['critics_pick']:\n count_60 = count_60 + 1\n#print(count_60)\nlast_sixty = last_fourty + count_60\nprint(\"Out of last 60 movies\", last_sixty, \"movies were critics picks\")",
"8) Critics with Highest Reviews\nOut of the last 40 movie reviews from the NYT, which critic has written the most reviews?",
"last_fourty = []\noffset = [0, 20]\n\nfor n in offset:\n url = \"https://api.nytimes.com/svc/movies/v2/reviews/search.json?api-key=a39223b33e0e46fd82dbddcc4972ff91&offset=\" + str(n)\n movie_reviews_60 = requests.get(url)\n movie_reviews_60 = movie_reviews_60.json()\n last_fourty = last_fourty + movie_reviews_60['results']\n\nprint(len(last_fourty))\n\n\nlist_byline = []\nfor i in last_fourty:\n list_byline.append(i['byline'])\nprint(list_byline)\nmax_occ = max(list_byline)\n\n\nprint(\"The author with the highest number of reviews to his credit is\", max_occ)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dsevilla/bdge
|
intro/sesion0.ipynb
|
mit
|
[
"Introducción a Jupyter Notebook, Pandas, Matplotlib, etc.\nEn esta hoja introduciremos la forma de trabajar con Jupyter Notebook, instalado a través de la imagen docker jupyter/scipy-notebook. Veremos cómo los distintos elementos de las librerías de Python interactúan con el notebook para mostrar imágenes, gráficos, etc. También, en las siguientes sesiones los usaremos para acceder a conexiones SQL y a bases de datos NoSQL.\nEnlaces a otros tutoriales introductorios (que también se centran en tratamiento de datos para Big Data): 1 y 2, entre otros muchos.\nPequeña introducción a Docker, Docker-Compose y Jupyter Notebook\nPara facilitar la instalación de las herramientas y que todos los alumnos tengan el mismo entorno de trabajo, usaremos la herramienta Docker. Docker es un gestor de contenedores. Permite instalar paquetes pre-instalados de las utilidades que vamos a usar en este curso. En las máquinas del laboratorio está instalado el paquete Docker de Jupyter Notebook que usaremos. Para listar los contenedores docker disponibles en una máquina podemos ejecutar docker images:\nbash\n$ docker images\nREPOSITORY TAG IMAGE ID CREATED SIZE\njupyter/scipy-notebook latest fd9cad0aeeeb 2 months ago 6.57GB\nneo4j latest 9481a852963b 2 months ago 173MB\nmongo latest 57c67caab3d8 2 months ago 359MB\nDocker ofrece también docker-compose, una utilidad que permite conectar entre sí varios contenedores para configurar escenarios más complejos. Por ejemplo, en nuestro caso iniciaremos una base de datos y el servidor de Notebooks. docker-compose también descargará automáticamente los contenedores necesarios. En cada directorio de cada sesión de prácticas existirá un fichero docker-compose.yml, que incluye la configuración para ejecutar el Notebook y los otros contenedores necesarios (por ejemplo otras bases de datos).\nPara las prácticas vamos a usar la imagen jupyter/scipy-notebook. La información de cómo usar este contenedor se puede obtener aquí.\nInstalación de Docker en Windows\nSi se quiere utilizar Docker desde Windows, se puede hacer igualmente. Existen unas instrucciones para usar Docker en Windows. Una vez instalado, habría que instalar la imagen que usaremos con la misma orden docker pull jupyter/scipy-notebook.\nDescarga del código de prácticas\nExisten varias formas de traer el código de prácticas al contenedor. La más sencilla es ejecutar el siguiente código:\nbash\n$ git clone https://github.com/dsevilla/bdge.git\nEsto creará el directorio bdge con todo el código de las prácticas. Dentro de cada subdirectorio (por ejemplo en este caso intro), habrá un fichero docker-compose.yml, que sirve para ejecutar y conectar los contenedores necesarios para cada parte de la práctica. Así pues:\nbash\n$ git clone https://github.com/dsevilla/bdge.git\n$ cd bdge/intro\n$ docker-compose up\nCreating network \"intro_default\" with the default driver\nCreating intro_notebook_1 ... 
done\nAttaching to intro_notebook_1\nnotebook_1 | Execute the command\nnotebook_1 | [I 10:32:05.350 NotebookApp] Writing notebook server cookie secret to ...\nnotebook_1 | [I 10:32:05.578 NotebookApp] Serving notebooks from local directory: /home/jovyan\nnotebook_1 | [I 10:32:05.578 NotebookApp] 0 active kernels \nnotebook_1 | [I 10:32:05.578 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).\nnotebook_1 | to login:\nnotebook_1 | http://localhost:8888/\nAccediendo a la IP http://localhost:8888/ ó http://127.0.0.1:8888/ se accede al Notebook, y el directorio actual (intro) aparece disponible en la lista de directorios.\nSe aconseja guardar el Notebook con otro nombre (File->Rename...) para evitar problemas con las actualizaciones posteriores del repositorio con git.\nPara parar el contenedor, se puede ejecutar:\nbash\n$ docker-compose stop\n$ docker-compose rm\n(Si no se realiza el rm, docker-compose up volverá a lanzar el contenedor anteriormente parado).\nEjecución del Jupyter Notebook de forma aislada\nEl Notebook también se puede ejecutar independientemente. Para ejecutar una sesión de Jupyter Notebook hay que escribir:\n```bash\n$ docker run -it --rm -p 8888:8888 jupyter/scipy-notebook\n[I 23:39:02.615 NotebookApp] Writing notebook server cookie secret to ...\n[W 23:39:02.712 NotebookApp] WARNING: The notebook server is listening ...\n[I 23:39:02.877 NotebookApp] Use Control-C to stop this server and shut down all kernels ...\nCopy/paste this URL into your browser when you connect for the first time,\nto login with a token:\n http://localhost:8888/?token=<TOKEN>\n```\nJupyter Notebook\nLos Notebooks contienen una mezcla de texto y código, y se pueden ir ejecutando paso a paso. En general utilizaremos el lenguaje Python en su versión 3, así que las hojas son en realidad un programa Python que se puede ejecutar en orden, junto con imágenes y texto explicativo adjunto.\nAl pulsar Ctrl+Intro en una celda, se ejecuta el código de la celda y se muestra el la siguiente celda. Al pulsar Shift+Intro se ejecuta la celda actual y pasa automáticamente a la siguiente.\nExisten también \"magics\", que sirven para obtener información de la hoja, o ejecutar comandos especiales. Por ejemplo, órdenes de shell, como en la siguiente celda. Hay varios tutoriales Online. Por ejemplo: Tutorial.",
"!uname -a\n\n%lsmagic",
"A continuación mostramos los paquetes que usaremos regularmente para tratar datos, pandas, numpy, matplotlib. Al ser un programa en Python, se pueden importar paquetes que seguirán siendo válidos hasta el final del notebook.",
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib",
"Lo siguiente hace que los gráficos se muestren inline. Para figuras pequeñas se puede utilizar unas figuras interactivas que permiten zoom, usando %maplotlib nbagg.",
"%matplotlib inline\nmatplotlib.style.use('ggplot')",
"Numpy\nNumpy es una de las librerías más utilizadas en Python, y ofrece un interfaz sencillo para operaciones eficientes con números, arrays y matrices. Numpy se utilizará de apoyo muchas veces que haya que hacer procesamiento local de datos recogidos de una base de datos, o como preparación para la graficación de datos. En la celda siguiente se muestra un vídeo introductorio, y también se puede acceder a tutoriales online: Tutorial.",
"from IPython.display import YouTubeVideo\nYouTubeVideo('o8fmjaW9a0A') # Yes, it can also embed youtube videos.",
"Numpy permite generar y procesar arrays de datos de forma muy eficiente. A continuación se muestran algunos ejemplos:",
"a = np.array([4,5,6])\nprint(a.shape)\nprint(a[0])\na[0] = 9\nprint (a)\n\nnp.arange(10)\n\nnp.arange(1,20)",
"También arrays multidimensionales:",
"a = np.zeros((2,2))\nprint (a)\n\na.ndim\n\na.dtype\n\nb = np.random.random((2,2))\nprint (b)\n\na = np.random.random((2,2))\nprint(a)",
"Se pueden aplicar funciones sobre todo el array o matriz, y el resultado será una matriz idéntica con el operador aplicado. Similar a lo que ocurre con la operación map de algunos lenguajes de programación (incluído Python):",
"print (a >= .5)",
"También se pueden filtrar los elementos de un array o matriz que cumplan una condición. Para eso se utiliza el operador de indización ([]) con una expresión booleana.",
"print (a[a >= .5])",
"¿Por qué usar Numpy? \n%%capture captura la salida de la ejecución de la celda en la variable dada como parámetro. Después se puede imprimir.\n%timeit se utiliza para ejecutar varias veces una instrucción y calcular un promedio de su duración.",
"%%capture timeit_output\n\n%timeit l1 = range(1,1000)\n\n%timeit l2 = np.arange(1,1000)\n\nprint(timeit_output)\n\nx = np.array([[1,2],[3,4]])\n\nprint (np.sum(x)) # Compute sum of all elements; prints \"10\"\nprint (np.sum(x, axis=0)) # Compute sum of each column; prints \"[4 6]\"\nprint (np.sum(x, axis=1)) # Compute sum of each row; prints \"[3 7]\"\n\nx * 2\n\nx ** 2",
"numpy tiene infinidad de funciones, por lo que sería interesante darse una vuelta por su documentación: https://docs.scipy.org/doc/.\nMatplotlib\nMatplotlib permite generar gráficos de forma sencilla. Lo veremos aquí primero conectado sólo con Numpy y después conectado con Pandas.",
"x = np.arange(0, 3 * np.pi, 0.1)\ny = np.sin(x)\nplt.subplot()\n# Plot the points using matplotlib\nplt.plot(x, y)\nplt.show()\n\nplt.subplot(211)\nplt.plot(range(12))\nplt.subplot(212, facecolor='y')\nplt.plot(range(100))\nplt.show()\n\n# Compute the x and y coordinates for points on sine and cosine curves\nx = np.arange(0, 3 * np.pi, 0.1)\ny_sin = np.sin(x)\ny_cos = np.cos(x)\n\n# Plot the points using matplotlib\nplt.plot(x, y_sin)\nplt.plot(x, y_cos)\nplt.xlabel('x axis label')\nplt.ylabel('y axis label')\nplt.title('Sine and Cosine')\nplt.legend(['Sine', 'Cosine'])\nplt.show()",
"Pandas\nTutoriales: 1, 2, 3.\nPandas permite gestionar conjuntos de datos n-dimensionales de diferentes formas, y también conectarlo con matplotlib para hacer gráficas.\nLos conceptos principales de Pandas son los Dataframes y las Series. La diferencia entre ambas es que la serie guarda sólo una serie (una columna o una fila, depende de como se quiera interpretar), mientras que un Dataframe guarda estructuras multidimensaionales agregando series.\nAmbas tienen una (o varias) \"columna fantasma\", que sirve de índice, y que se puede acceder con d.index (tanto si d es una serie o un dataframe). Si no se especifica un índice, se le añade uno virtual numerando las filas desde cero. Además, los índices pueden ser multidimensionales (por ejemplo, tener un índice por mes y dentro uno por dia de la semana).",
"ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))\nts\n\nts.describe()\n\nts = ts.cumsum()\nts.plot();\n\ndf = pd.DataFrame(np.random.randn(1000, 4), index=ts.index, columns=list('ABCD'))\n\ndf = df.cumsum()\n\ndf.plot();",
"Se puede hacer plot también de una columna contra otra.",
"df3 = pd.DataFrame(np.random.randn(1000, 2), columns=['B', 'C']).cumsum()\ndf3['A'] = pd.Series(list(range(len(df3))))\ndf3.plot(x='A', y='B');",
"Valores incompletos. Si no se establecen, se pone a NaN (not a number).",
"d = {'one' : pd.Series([1., 2., 3.], index=['a', 'b', 'c']),\n 'two' : pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}\ndf = pd.DataFrame(d)\ndf",
"fillna() permite cambiar el valor de los datos faltantes.",
"df.fillna(0)\n\npd.DataFrame(d, index=['d', 'b', 'a'])\n\npd.DataFrame(d, index=['d', 'b', 'a'], columns=['two', 'three'])",
"A continuación se muestra un ejemplo de uso de Pandas para leer datos y procesarlos en un Dataframe.\nEl primer ejemplo completo carga desde el fichero swift-question-dates.txt.gz las fechas de las preguntas en Stackoverflow que contienen el tag \"swift\".\nLa función read_csv es capaz de leer cualquier fichero CSV y lo convierte en un \"Dataframe\", una estructura de tabla que guarda también los nombres y los tipos de las columnas, así como un índice por el que se identificarán las tablas. La lista incluye la fecha en donde se produjo una pregunta con el tag \"swift\". Como los datos en sí son las fechas, hacemos que la columna de fechas haga a su vez de índice.",
"df = pd.read_csv('https://github.com/dsevilla/bdge/raw/master/intro/swift-question-dates.txt.gz',\n header=None,\n names=['date'],\n compression='gzip',\n parse_dates=['date'],\n index_col='date')\n\ndf",
"De la fecha, extraer sólo la fecha (no la hora, que no nos interesa).",
"df.index = df.index.date",
"Añadimos una columna de todo \"1\" para especificar que cada pregunta cuenta como 1.",
"df['Count'] = 1\ndf",
"A los Dataframe de Pandas también se les puede aplicar operaciones de agregación, como groupby o sum. Finalmente, la funcion plot() permite mostrar los datos en un gráfico.",
"accum = df.groupby(df.index).sum()\naccum\n\n# Los 30 primeros registros que tengan un número de preguntas mayor que 20 por día.\naccum = accum[accum.Count > 20][:30]\naccum\n\naccum[accum.Count > 30][:30].plot.bar()",
"A continuación comprobamos con la página de la Wikipedia cuándo apareció el lenguaje Swift:",
"!pip install lxml\n\ndfwiki = pd.read_html('https://en.wikipedia.org/wiki/Swift_(programming_language)',attrs={'class': 'infobox vevent'})\n\ndfwiki[0]\n\nfirstdate = dfwiki[0][1][4]\nfirstdate\n\nfrom dateutil.parser import parse\ndt = parse(firstdate.split(';')[0])\nprint (dt.date().isoformat())\nprint (accum.index[0].isoformat())\n\nassert dt.date().isoformat() == accum.index[0].isoformat()",
"A continuación se muestra cómo ubicar posiciones en un mapa con el paquete folium. Se muestra también cómo acceder a distintas posiciones del Dataframe con iloc, loc, etc.",
"# cargar municipios y mostrarlos en el mapa\ndf = pd.read_csv('https://github.com/dsevilla/bdge/raw/master/intro/municipios-2017.csv.gz',header=0,compression='gzip')\n\ndf.head()\n\ndf.iloc[0]\n\ndf.iloc[0].NOMBRE_ACTUAL\n\ndf.loc[:,'NOMBRE_ACTUAL']\n\ndf.iloc[:,0]\n\ndf.PROVINCIA\n\ndf[df.PROVINCIA == 'A Coruña']\n\nmula = df[df.NOMBRE_ACTUAL == 'Mula'].iloc[0]\nmula\n\n(mula_lat,mula_lon) = (mula.LATITUD_ETRS89, mula.LONGITUD_ETRS89)\n(mula_lat,mula_lon)",
"El paquete folium permite generar mapas de posiciones. El siguiente ejemplo centra un mapa en Mula y pone un marcador con su nombre:",
"!pip install folium\n\nimport folium\n\nmap = folium.Map(location=[mula_lat, mula_lon],zoom_start=10)\nfolium.Marker(location = [mula_lat, mula_lon], popup=\"{} ({} habitantes)\".format(mula.NOMBRE_ACTUAL,mula.POBLACION_MUNI)).add_to(map)\n\nmap",
"Ejercicio\nMostrar con folium marcadores para cada pueblo de A Coruña y Murcia. Se pueden usar las funciones itertuples() o iterrows() de un Dataframe para recorrer los elementos del mismo."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
nerdcommander/scientific_computing_2017
|
lesson15/Lesson15_individual.ipynb
|
mit
|
[
"Lesson15 Individual Assignment\nIndividual means that you do it yourself. You won't learn to code if you don't struggle for yourself and write your own code. Remember that while you can discuss the general (algorithmic) way to solve a problem, you should not even be looking at anyone else's code or showing anyone else your code for an individual assignment.\nReview the Group Work guidelines on Cavas and/or ask an instructor if you have any questions.\nProgramming Practice\nBe sure to spell all function names correctly - misspelled functions will lose points (and often break anyway since no one is sure what to type to call it). If you prefer showing your earlier, scratch work as you figure out what you are doing, please be sure that you make a final, complete, correct last function in its own cell that you then call several times to test. In other words, separate your thought process/working versions from the final one (a comment that tells us which is the final version would be lovely).\nEvery function should have at least a docstring at the start that states what it does (see Lesson3 Team Notebook if you need a reminder). Make other comments as necessary. \nMake sure that you are running test cases (plural) for everything and commenting on the results in markdown. Your comments should discuss how you know that the test case results are correct.\npart 1: Instance Variables and Get Methods\nWhile you should do steps A, B, and C incrementally, you cannot really test each step until the end of this part, except to check that your code runs without an error in a Python cell (shift-enter). Therefore you can just provide the entire class definition at the end of this part, along with your code that tests the methods. We do not need to see your intermediate work for parts A and B. \nA note about names:\nIt is programming convention that classes have capitalized names and objects (and variables in general) have lowercase names. For example, if we had a very minimal Customer class as shown below:\n```python\nclass Customer:\ndef __init__(self, name, balance=0.0):\n self.name = name\n self.balance = balance\n\nWe would instantiate objects of that class with lowercase identifiers:python\njane = ('Jane', 1000.0)\n``\nNotice the distinction between the lowercaseidentifier-jane- and the capitalized string - 'Jane' - that is passed in for thenameargument. This situation will also hold true below with thePlanetclass - the name of the planet is Venus (and can be passed as 'Venus') but the name of the object should bevenus`.\nA. Define a new class called Planet. (remember the case conventions!) Write a constructor for the class that takes four parameters: self, name (e.g., Mars), radius (of the planet), and mass (of the planet). All these parameters should be required. Be sure to include a docstring that gives the expected units for each parameter.\nB. Add a fifth parameter to the constructor: the number of moons. This last parameter should be optional, and if not given, the default number of moons should be zero.\nC. Add four get methods to the Planet class: get_name, get_radius, get_mass, and get_moons. Be sure to test each of these methods to make sure they work! They should all RETURN values (not print the values).",
"# Planet class definition at the end of Part 1 ",
"Test your code to make sure that the class definition worked.",
"# code to make sure constructors and get methods all work",
"part 2: Remaining Methods\nIn this part, you should test your class after each step. Each of the methods should RETURN, not print, values. \nYou should NOT need to add any new instance variables to your class in this part. You should be able to write the methods using local variables and parameters only.\nD. Add one set method to the Planet class: set_moons. Be sure to check for invalid values, if any exist. Test the method to make sure it works, before proceeding to the next step!\nE. Add a calculate_volume method to the Planet class that returns the volume of the Planet. You may assume a planet is a perfect sphere. Assuming r is the sphere’s radius, the formula for volume (V) is: $V = \\frac{4}{3}\\pi r^3$. Test the method to make sure it works, before proceeding to the next step!\nF. Add a calculate_surface_area method that returns the surface area of the Planet, using the formula: $A = 4 \\pi r^2$. Test the method to make sure it works, before proceeding to the last step!\nG. Add a calculate_density method that returns the density of the Planet, using the formula: $density = \\frac{mass}{volume}$. Take advantage of the methods you have already defined in the class. Don't forget to test everything before continuing. \npart 3: Final Class Definition\nCopy and paste your Planet class here:\nWrite code that defines a Planet object called mars. Use the constructor to set its name, radius, and mass (see http://solarsystem.nasa.gov/planets/mars/facts for the correct values). Then print its volume, surface area, and density, by calling the appropriate methods."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
martinjrobins/hobo
|
examples/toy/distribution-german-credit.ipynb
|
bsd-3-clause
|
[
"Fitting a logistic model to German credit data\nThis notebook explains how to run the toy logistic regression model example using the German credit data from [1]. In this example, we have predictors for 1000 individuals and an outcome variable indicating whether or not each individual should be given credit.\n[1] \"UCI machine learning repository\", 2010. A. Frank and A. Asuncion. https://archive.ics.uci.edu/ml/datasets/statlog+(german+credit+data)",
"import matplotlib.pyplot as plt\nimport numpy as np\nimport pints\nimport pints.toy",
"To run this example, we need to first get the data from [1] and process it so we have dichtonomous $y\\in{-1,1}$ outputs and the matrix of predictors has been standardised. In addition, we also add a column of 1s corresponding to a constant term in the regression.\nIf you are connected to the internet, by instantiating with x=None, Pints will fetch the data from the repo for you. If, instead, you have local copies of the x and y matrices, these can be supplied as arguments.",
"logpdf = pints.toy.GermanCreditLogPDF(download=True)",
"Let's look at the data: x is a matrix of predictors and y is a vector of credit recommendations for 1000 individuals. \nSpecifically, let's look at the PCA scores and plot the first two dimensions against one another. Here, we see that the two groups overlap substantially, but that there neverless some separation along the first PCA component.",
"def pca(X):\n # Data matrix X, assumes 0-centered\n n, m = X.shape\n # Compute covariance matrix\n C = np.dot(X.T, X) / (n-1)\n # Eigen decomposition\n eigen_vals, eigen_vecs = np.linalg.eig(C)\n # Project X onto PC space\n X_pca = np.dot(X, eigen_vecs)\n return X_pca\n\nx, y = logpdf.data()\nscores = pca(x)\n\n# colour individual points by whether or not to recommend them credit\nplt.scatter(scores[:, 0], scores[:, 1], c=y)\nplt.xlabel('PCA 1')\nplt.ylabel('PCA 2')\nplt.show()",
"Now we run HMC to fit the parameters of the model.",
"xs = [\n np.random.uniform(0, 1, size=(logpdf.n_parameters())),\n np.random.uniform(0, 1, size=(logpdf.n_parameters())),\n np.random.uniform(0, 1, size=(logpdf.n_parameters())),\n]\n\nmcmc = pints.MCMCController(logpdf, len(xs), xs, method=pints.HamiltonianMCMC)\nmcmc.set_max_iterations(200)\n\n# Set up modest logging\nmcmc.set_log_to_screen(True)\nmcmc.set_log_interval(10)\n\nfor sampler in mcmc.samplers():\n sampler.set_leapfrog_step_size(0.2)\n sampler.set_leapfrog_steps(10)\n\n# Run!\nprint('Running...')\nchains = mcmc.run()\nprint('Done!')",
"HMC is quite efficient here at sampling from the posterior distribution.",
"results = pints.MCMCSummary(chains=chains, time=mcmc.time())\nprint(results)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
barjacks/foundations-homework
|
07/Animal_Panda_Homework_7_Skinner.ipynb
|
mit
|
[
"*1. Import pandas with the right name",
"import pandas as pd",
"*2. Set all graphics from matplotlib to display inline",
"import matplotlib.pyplot as plt\n%matplotlib inline",
"*3. Import pandas with the right name",
"#for encoding the command would look smth like this:\n#df = pd.read_csv(\"XXXXXXXXXXXXXXXXX.csv\", encoding='mac_roman')\ndf = pd.read_csv(\"Animal_Data/07-hw-animals.csv\")",
"*4. Display the names of the columns in the csv",
"df.columns",
"*5. Display the first 3 animals.",
"df.head(3)",
"*6. Sort the animals to see the 3 longest animals.",
"df.sort_values(by='length', ascending=False).head(3)",
"*7. What are the counts of the different values of the \"animal\" column? a.k.a. how many cats and how many dogs.",
"df['animal'].value_counts()",
"*8. Only select the dogs.",
"#df['animal'] == 'dog' this just tests, whether row is a dog or not, True or False\n#is_dog = df['animal'] == 'dog'\n#df[is_dog]\ndf[df['animal'] == 'dog']",
"*9. Display all of the animals that are greater than 40 cm.",
"df[df['length'] > 40]\n\n#del df['feet']",
"*10. 'length' is the animal's length in cm. Create a new column called inches that is the length in inches.",
"df['inches'] = df['length'] * 0.394\ndf.head()",
"*11. Save the cats to a separate variable called \"cats.\" Save the dogs to a separate variable called \"dogs.\"",
"dogs = df[df['animal'] == 'dog']\ncats = df[df['animal'] == 'cat']",
"*12. Display all of the animals that are cats and above 12 inches long. First do it using the \"cats\" variable, then do it using your normal dataframe.",
"cats[cats['inches'] > 12]\n\n#df[df[df[df['animal'] == 'cat']'inches'] > 12]\n#df[df['animal'] == 'cat']&\n#df[df['inches'] > 12]\n\n#pd.read_csv('imdb.txt')\n# .sort(columns='year')\n# .filter('year >1990')\n# .to_csv('filtered.csv')\n\ndf[(df['animal'] == 'cat') & (df['inches'] > 12)]\n\n#3 > 2 & 4 > 3\n#true & true\n#true\n\n#3 > 2 & 4 > 3\n#true & 4 > 3\n\n#(3 > 2) & (4 > 3)",
"*13. What's the mean length of a cat?",
"df[df['animal'] == 'cat'].describe()",
"*14. What's the mean length of a dog?",
"df[df['animal'] == 'dog'].describe()",
"*15. Use groupby to accomplish both of the above tasks at once.",
"df.groupby(['animal'])['inches'].describe()",
"*16. Make a histogram of the length of dogs. I apologize that it is so boring.",
"df[df['animal'] == 'dog'].hist()",
"*17. Change your graphing style to be something else (anything else!)",
"import matplotlib.pyplot as plt\nplt.style.available\n\nplt.style.use('ggplot')\n\ndogs['inches'].hist()",
"*18. Make a horizontal bar graph of the length of the animals, with their name as the label (look at the billionaires notebook I put on Slack!)",
"df['length'].plot(kind='bar')\n\n#or:\n\ndf.plot(kind='barh', x='name', y='length', legend=False)",
"*19. Make a sorted horizontal bar graph of the cats, with the larger cats on top.",
"cats_sorted = cats.sort_values(by='length', ascending=True).head(3)\ncats_sorted.plot(kind='barh', x='name', y='length', legend=False)\n\n#or:\n\ndf[df['animal'] == 'cat'].sort_values(by='length', ascending=True).plot(kind='barh', x='name', y='length')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
metpy/MetPy
|
v0.4/_downloads/Advanced_Sounding.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Advanced Sounding\nPlot a sounding using MetPy with more advanced features.\nBeyond just plotting data, this uses calculations from metpy.calc to find the lifted\ncondensation level (LCL) and the profile of a surface-based parcel. The area between the\nambient profile and the parcel profile is colored as well.",
"from datetime import datetime\n\nimport matplotlib.pyplot as plt\n\nimport metpy.calc as mpcalc\nfrom metpy.io import get_upper_air_data\nfrom metpy.io.upperair import UseSampleData\nfrom metpy.plots import SkewT\nfrom metpy.units import concatenate\n\nwith UseSampleData(): # Only needed to use our local sample data\n # Download and parse the data\n dataset = get_upper_air_data(datetime(1999, 5, 4, 0), 'OUN')\n\np = dataset.variables['pressure'][:]\nT = dataset.variables['temperature'][:]\nTd = dataset.variables['dewpoint'][:]\nu = dataset.variables['u_wind'][:]\nv = dataset.variables['v_wind'][:]",
"Create a new figure. The dimensions here give a good aspect ratio",
"fig = plt.figure(figsize=(9, 9))\nskew = SkewT(fig, rotation=45)\n\n# Plot the data using normal plotting functions, in this case using\n# log scaling in Y, as dictated by the typical meteorological plot\nskew.plot(p, T, 'r')\nskew.plot(p, Td, 'g')\nskew.plot_barbs(p, u, v)\nskew.ax.set_ylim(1000, 100)\nskew.ax.set_xlim(-40, 60)\n\n# Calculate LCL height and plot as black dot\nl = mpcalc.lcl(p[0], T[0], Td[0])\nlcl_temp = mpcalc.dry_lapse(concatenate((p[0], l)), T[0])[-1].to('degC')\nskew.plot(l, lcl_temp, 'ko', markerfacecolor='black')\n\n# Calculate full parcel profile and add to plot as black line\nprof = mpcalc.parcel_profile(p, T[0], Td[0]).to('degC')\nskew.plot(p, prof, 'k', linewidth=2)\n\n# Example of coloring area between profiles\ngreater = T >= prof\nskew.ax.fill_betweenx(p, T, prof, where=greater, facecolor='blue', alpha=0.4)\nskew.ax.fill_betweenx(p, T, prof, where=~greater, facecolor='red', alpha=0.4)\n\n# An example of a slanted line at constant T -- in this case the 0\n# isotherm\nl = skew.ax.axvline(0, color='c', linestyle='--', linewidth=2)\n\n# Add the relevant special lines\nskew.plot_dry_adiabats()\nskew.plot_moist_adiabats()\nskew.plot_mixing_lines()\n\n# Show the plot\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
WomensCodingCircle/CodingCirclePython
|
Lesson01_Variables/lesson01.ipynb
|
mit
|
[
"Variables, expressions, and statements\nValues and Types\nValues are basic things a program works with. Values come in several different types:\n* A string is any value between quotes ('' single or double \"\") e.g. 'Hello Coding Circle', \"I am very smart\", \"342\"\n* An integer is any whole number, positive or negative e.g. 145, -3, 5\n* A float is any number with a decimal point e.g. 3.14, -2.5 \nTo tell what type a value is, use the built-in function type()",
"type('I am amazing!')\n\ntype(145)\n\ntype(2.5)",
"To print a value to the screen, we use the function print()\ne.g. print(1)",
"print(\"Hello World\")",
"Jupyter notebooks will always print the value of the last line so you don't have to. You can suppress this with a semicolon ';'",
"\"Hello World\"\n\n\"Hello World\";",
"TRY IT\nPredict and then print the type of 'Orca'\nVariables\nA variable is a name that you give a value. You can then use this name anywhere you would use the value that the name refers to. \nIt has some rules.\n* It must only contain letters, numbers and/or the underscore character. \n* However, it cannot start with a number.\n* It can start with an underscore but this usually means something special so stick to letters for now. \nTo assign a value to a variable, you use the assignment operator, which is '=' e.g., my_name = 'Charlotte'",
"WHALE = 'Orca'\nnumber_of_whales = 10\nweight_of_1_whale = 5003.2",
"Notice that when you ran that, nothing printed out. To print a variable, you use the same statement you would use to print the value. e.g. print(WHALE)",
"print(number_of_whales)",
"TRY IT\nAssign the name of a sea creature to the variable sea_creature. Then print the value.\nReccomendation \nName your variables with descriptive names. Naming a variable 'a' is easy to type but won't help you figure out what it is doing when you come back to your code six months later.\nOperators and operands\nOperators are special symbols that represent computations that the computer performs. We have already learned one operator: the assignment operator '='.\nOperands are the values the operator is applied to.\nBasic math operators\n* + addition\n* - subtraction\n* * multiplication\n* / division\n* ** power (exponentiation)\nTo use these operators, put a value or variable on either side of them. You can even assign the new value to a variable or print it out. They work with both integers or floats.",
"1 + 2\n\nfish = 15\nfish_left = fish - 3\nprint(fish_left)\n\nprint(3 * 2.1)\n\nnumber_of_whales ** 2\n\nprint(5 / 2)",
"Hint: You can use a variable and assign it to the same variable name in the same statement.",
"number_of_whales = 8\nnumber_of_whales = number_of_whales + 2 \nprint(number_of_whales)",
"TRY IT\nFind the result of 6^18.\nOrder of operations\nYou can combine many operators in a single python statement. The way python evaluates it is the same way you were taught to in elementary school. PEMDAS for Please Excuse My Dear Aunt Sally. Or 1. Parentheses, 2. Exponents, 3. Multiplication, 4. Division, 5. Addition, 6. Subtraction. Left to right, with that precedence. It is good practice to always include parentheses to make your intention clear, even if order of operations is on your side.",
"2 * 3 + 4 / 2\n\n(2 * (3 + 4)) / 2",
"Modulus operator\nThe modulus operator is not one you were taught in school. It returns the remainder of integer division. It is useful in a few specific cases, but you could go months without using it.",
"5 % 2",
"TRY IT\nFind if 12342872 is divisible by 3\nString operations\nThe + operator also works on strings. It is the concatenation operator, meaning it joins the two strings together.",
"print('Hello ' + 'Coding Circle')\n\nprint(\"The \" + WHALE + \" lives in the sea.\")",
"Hint: Be careful with spaces",
"print(\"My name is\" + \"Charlotte\")",
"TRY IT\nPrint out Good morning to the sea creature you stored in variable named sea_creature earlier.\nAsking the user for input\nTo get an input for the user we use the built-in function input() and assign it to a variable.\nNOTE: The result is always a string.\nWARNING if you leave an input box without ever putting input in, jupyter won't be able to run any code. Ex. you run a cell with input and then re-run that cell before submitting input. To fix this hang the stop button in the menu.",
"my_name = input()\nprint(my_name)",
"You can pass a string to the input() function to prompt the user for what you are looking for.\ne.g. input('How are you feeling?')\nHint, add a new line character \"\\n\" to the end of the prompt to make the user enter it on a new line.",
"favorite_ocean_animal = input(\"What is your favorite sea creature?\\n\")\nprint(\"The \" + favorite_ocean_animal + \" is so cool!\")",
"If you want the user to enter a number, you will have to convert the string. Here are the conversion commands.\n\nTo convert a variable to an integer, use int -- e.g., int(variable_name)\nTo convert a variable to a float, use float -- e.g., float(variable_name)\nTo convert a variable to a string, use str -- e.g., str(variable_name)",
"number_of_fish = input(\"How many fish do you want?\\n\")\nnumber_of_fish_int = int(number_of_fish)\nprint(number_of_fish_int * 1.05)",
"TRY IT\nPrompt the user for their favorite whale and store the value in a variable called favorite_whale.\nComments\nComments let you explain your progam to someone who is reading your code. Do you know who that person is? It is almost always you in six months. Don't screw over future you. Comment your code.\nTo make a comment: you use the # symbol. You can put a comment on its own line or at the end of a statement.",
"# Calculate the price of fish that a user wants\nnumber_of_fish = input(\"How many fish do you want?\\n\") # Ask user for quantity of fish\nnumber_of_fish_int = int(number_of_fish) # raw_input returns string, so convert to integer\nprint(number_of_fish_int * 1.05) # multiply by price of fish",
"TRY IT\nWrite a comment. \nProject: Milestones\nWe are going to create an application that prompts the user for their (or their child's) birth year and will calculate and tell them the years of their milestones: Drive a car at 16, Drink alcohol at 21, and Run for president at 35.\n\nAsk the user for the birth year and store in a variable called birth_year.\nConvert birth_year to an int and store in variable called birth_year.\nAdd 16 to the birth_year and store in a variable called drive_car_year.\nAdd 21 to the birth_year and store in a variable called alcohol_year.\nAdd 35 to the birth_year and store in a variable called president_year.\nPrint out the message \"You can drive a car in: drive_car_year\" and similar messages for the other years. Hint: you will need to use string concatenation, you will also have to cast the integer years to strings using the str() method."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
google/flax
|
docs/notebooks/linen_intro.ipynb
|
apache-2.0
|
[
"Preface\n<br>\n<div style=\"font-variant: small-caps;\">CAVEAT PROGRAMMER</div>\n\nThe below is an alpha API preview and things might break. The surface syntax of the features of the API are not fixed in stone, and we welcome feedback on any points.\nUseful links\n⟶ Slides for the core ideas of the new Functional Core and Linen\n⟶ \"Design tests\" guided our design process. Many are available for functional core and some for the proposed Module abstraction\n⟶ Ported examples: ImageNet and WMT (to the proposed Module abstraction). TODO: Port to functional core.\n⟶ Our new discussion forums\nInstall and Import",
"# Install the newest JAXlib version.\n!pip install --upgrade -q pip jax jaxlib\n# Install Flax at head:\n!pip install --upgrade -q git+https://github.com/google/flax.git\n\nimport functools\nfrom typing import Any, Callable, Sequence, Optional\nimport numpy as np\nimport jax\nfrom jax import lax, random, numpy as jnp\nimport flax\nfrom flax.core import freeze, unfreeze\nfrom flax import linen as nn",
"Invoking Modules\nLet's instantiate a Dense layer.\n - Modules are actually objects in this API, so we provide contructor arguments when initializing the Module. In this case, we only have to provide the output features dimension.",
"model = nn.Dense(features=3)",
"We need to initialize the Module variables, these include the parameters of the Module as well as any other state variables.\nWe call the init method on the instantiated Module. If the Module __call__ method has args (self, *args, **kwargs) then we call init with (rngs, *args, **kwargs) so in this case, just (rng, input):",
"# Make RNG Keys and a fake input.\nkey1, key2 = random.split(random.PRNGKey(0), 2)\nx = random.uniform(key1, (4,4))\n\n# provide key and fake input to get initialized variables\ninit_variables = model.init(key2, x)\n\ninit_variables",
"We call the apply method on the instantiated Module. If the Module __call__ method has args (self, *args, **kwargs) then we call apply with (variables, *args, rngs=<RNGS>, mutable=<MUTABLEKINDS>, **kwargs) where \n - <RNGS> are the optional call time RNGs for things like dropout. For simple Modules this is just a single key, but if your module has multiple kinds of data, it's a dictionary of rng-keys per-kind, e.g. {'params': key0, 'dropout': key1} for a Module with dropout layers.\n - <MUTABLEKINDS> is an optional list of names of kinds that are expected to be mutated during the call. e.g. ['batch_stats'] for a layer updating batchnorm statistics.\nSo in this case, just (variables, input):",
"y = model.apply(init_variables, x)\ny",
"Additional points:\n - If you want to init or apply a Module using a method other than call, you need to provide the method= kwarg to init and apply to use it instead of the default __call__, e.g. method='encode', method='decode' to apply the encode/decode methods of an autoencoder.\nDefining Basic Modules\nComposing submodules\nWe support declaring modules in setup() that can still benefit from shape inference by using Lazy Initialization that sets up variables the first time the Module is called.",
"class ExplicitMLP(nn.Module):\n features: Sequence[int]\n\n def setup(self):\n # we automatically know what to do with lists, dicts of submodules\n self.layers = [nn.Dense(feat) for feat in self.features]\n # for single submodules, we would just write:\n # self.layer1 = nn.Dense(feat1)\n\n def __call__(self, inputs):\n x = inputs\n for i, lyr in enumerate(self.layers):\n x = lyr(x)\n if i != len(self.layers) - 1:\n x = nn.relu(x)\n return x\n\nkey1, key2 = random.split(random.PRNGKey(0), 2)\nx = random.uniform(key1, (4,4))\n\nmodel = ExplicitMLP(features=[3,4,5])\ninit_variables = model.init(key2, x)\ny = model.apply(init_variables, x)\n\nprint('initialized parameter shapes:\\n', jax.tree_map(jnp.shape, unfreeze(init_variables)))\nprint('output:\\n', y)",
"Here we show the equivalent compact form of the MLP that declares the submodules inline using the @compact decorator.",
"class SimpleMLP(nn.Module):\n features: Sequence[int]\n\n @nn.compact\n def __call__(self, inputs):\n x = inputs\n for i, feat in enumerate(self.features):\n x = nn.Dense(feat, name=f'layers_{i}')(x)\n if i != len(self.features) - 1:\n x = nn.relu(x)\n # providing a name is optional though!\n # the default autonames would be \"Dense_0\", \"Dense_1\", ...\n # x = nn.Dense(feat)(x)\n return x\n\nkey1, key2 = random.split(random.PRNGKey(0), 2)\nx = random.uniform(key1, (4,4))\n\nmodel = SimpleMLP(features=[3,4,5])\ninit_variables = model.init(key2, x)\ny = model.apply(init_variables, x)\n\nprint('initialized parameter shapes:\\n', jax.tree_map(jnp.shape, unfreeze(init_variables)))\nprint('output:\\n', y)",
"Declaring and using variables\nFlax uses lazy initialization, which allows declared variables to be initialized only at the first site of their use, using whatever shape information is available a the local call site for shape inference. Once a variable has been initialized, a reference to the data is kept for use in subsequent calls.\nFor declaring parameters that aren't mutated inside the model, but rather by gradient descent, we use the syntax:\nself.param(parameter_name, parameter_init_fn, *init_args)\nwith arguments:\n - parameter_name just the name, a string\n - parameter_init_fn a function taking an RNG key and a variable number of other arguments, i.e. fn(rng, *args). typically those in nn.initializers take an rng and a shape argument.\n - the remaining arguments to feed to the init function when initializing.\nAgain, we'll demonstrate declaring things inline as we typically do using the @compact decorator.",
"class SimpleDense(nn.Module):\n features: int\n kernel_init: Callable = nn.initializers.lecun_normal()\n bias_init: Callable = nn.initializers.zeros\n\n @nn.compact\n def __call__(self, inputs):\n kernel = self.param('kernel',\n self.kernel_init, # RNG passed implicitly.\n (inputs.shape[-1], self.features)) # shape info.\n y = lax.dot_general(inputs, kernel,\n (((inputs.ndim - 1,), (0,)), ((), ())),)\n bias = self.param('bias', self.bias_init, (self.features,))\n y = y + bias\n return y\n\nkey1, key2 = random.split(random.PRNGKey(0), 2)\nx = random.uniform(key1, (4,4))\n\nmodel = SimpleDense(features=3)\ninit_variables = model.init(key2, x)\ny = model.apply(init_variables, x)\n\nprint('initialized parameters:\\n', init_variables)\nprint('output:\\n', y)",
"We can also declare variables in setup, though in doing so you can't take advantage of shape inference and have to provide explicit shape information at initialization. The syntax is a little repetitive in this case right now, but we do force agreement of the assigned names.",
"class ExplicitDense(nn.Module):\n features_in: int # <-- explicit input shape\n features: int\n kernel_init: Callable = nn.initializers.lecun_normal()\n bias_init: Callable = nn.initializers.zeros\n \n def setup(self):\n self.kernel = self.param('kernel',\n self.kernel_init,\n (self.features_in, self.features))\n self.bias = self.param('bias', self.bias_init, (self.features,))\n\n def __call__(self, inputs):\n y = lax.dot_general(inputs, self.kernel,\n (((inputs.ndim - 1,), (0,)), ((), ())),)\n y = y + self.bias\n return y\n\nkey1, key2 = random.split(random.PRNGKey(0), 2)\nx = random.uniform(key1, (4,4))\n\nmodel = ExplicitDense(features_in=4, features=3)\ninit_variables = model.init(key2, x)\ny = model.apply(init_variables, x)\n\nprint('initialized parameters:\\n', init_variables)\nprint('output:\\n', y)",
"General Variables\nFor declaring generally mutable variables that may be mutated inside the model we use the call:\nself.variable(variable_kind, variable_name, variable_init_fn, *init_args)\nwith arguments:\n - variable_kind the \"kind\" of state this variable is, i.e. the name of the nested-dict collection that this will be stored in inside the top Modules variables. e.g. batch_stats for the moving statistics for a batch norm layer or cache for autoregressive cache data. Note that parameters also have a kind, but they're set to the default param kind.\n - variable_name just the name, a string\n - variable_init_fn a function taking a variable number of other arguments, i.e. fn(*args). Note that we don't assume the need for an RNG, if you do want an RNG, provide it via a self.make_rng(variable_kind) call in the provided arguments.\n - the remaining arguments to feed to the init function when initializing.\n⚠️ Unlike parameters, we expect these to be mutated, so self.variable returns not a constant, but a reference to the variable. To get the raw value, you'd write myvariable.value and to set it myvariable.value = new_value.",
"class Counter(nn.Module):\n @nn.compact\n def __call__(self):\n # easy pattern to detect if we're initializing\n is_initialized = self.has_variable('counter', 'count')\n counter = self.variable('counter', 'count', lambda: jnp.zeros((), jnp.int32))\n if is_initialized:\n counter.value += 1\n return counter.value\n\n\nkey1 = random.PRNGKey(0)\n\nmodel = Counter()\ninit_variables = model.init(key1)\nprint('initialized variables:\\n', init_variables)\n\ny, mutated_variables = model.apply(init_variables, mutable=['counter'])\n\nprint('mutated variables:\\n', mutated_variables)\nprint('output:\\n', y)",
"Another Mutability and RNGs Example\nLet's make an artificial, goofy example that mixes differentiable parameters, stochastic layers, and mutable variables:",
"class Block(nn.Module):\n features: int\n training: bool\n @nn.compact\n def __call__(self, inputs):\n x = nn.Dense(self.features)(inputs)\n x = nn.Dropout(rate=0.5)(x, deterministic=not self.training)\n x = nn.BatchNorm(use_running_average=not self.training)(x)\n return x\n\nkey1, key2, key3, key4 = random.split(random.PRNGKey(0), 4)\nx = random.uniform(key1, (3,4,4))\n\nmodel = Block(features=3, training=True)\n\ninit_variables = model.init({'params': key2, 'dropout': key3}, x)\n_, init_params = init_variables.pop('params')\n\n# When calling `apply` with mutable kinds, returns a pair of output, \n# mutated_variables.\ny, mutated_variables = model.apply(\n init_variables, x, rngs={'dropout': key4}, mutable=['batch_stats'])\n\n# Now we reassemble the full variables from the updates (in a real training\n# loop, with the updated params from an optimizer).\nupdated_variables = freeze(dict(params=init_params, \n **mutated_variables))\n\nprint('updated variables:\\n', updated_variables)\nprint('initialized variable shapes:\\n', \n jax.tree_map(jnp.shape, init_variables))\nprint('output:\\n', y)\n\n# Let's run these model variables during \"evaluation\":\neval_model = Block(features=3, training=False)\ny = eval_model.apply(updated_variables, x) # Nothing mutable; single return value.\nprint('eval output:\\n', y)\n",
"JAX transformations inside modules\nJIT\nIt's not immediately clear what use this has, but you can compile specific submodules if there's a reason to.\nKnown Gotcha: at the moment, the decorator changes the RNG stream slightly, so comparing jitted an unjitted initializations will look different.",
"class MLP(nn.Module):\n features: Sequence[int]\n\n @nn.compact\n def __call__(self, inputs):\n x = inputs\n for i, feat in enumerate(self.features):\n # JIT the Module (it's __call__ fn by default.)\n x = nn.jit(nn.Dense)(feat, name=f'layers_{i}')(x)\n if i != len(self.features) - 1:\n x = nn.relu(x)\n return x\n\nkey1, key2 = random.split(random.PRNGKey(3), 2)\nx = random.uniform(key1, (4,4))\n\nmodel = MLP(features=[3,4,5])\ninit_variables = model.init(key2, x)\ny = model.apply(init_variables, x)\n\nprint('initialized parameter shapes:\\n', jax.tree_map(jnp.shape, unfreeze(init_variables)))\nprint('output:\\n', y)",
"Remat\nFor memory-expensive computations, we can remat our method to recompute a Module's output during a backwards pass.\nKnown Gotcha: at the moment, the decorator changes the RNG stream slightly, so comparing remat'd and undecorated initializations will look different.",
"class RematMLP(nn.Module):\n features: Sequence[int]\n # For all transforms, we can annotate a method, or wrap an existing \n # Module class. Here we annotate the method.\n @nn.remat\n @nn.compact\n def __call__(self, inputs):\n x = inputs\n for i, feat in enumerate(self.features):\n x = nn.Dense(feat, name=f'layers_{i}')(x)\n if i != len(self.features) - 1:\n x = nn.relu(x)\n return x\n\nkey1, key2 = random.split(random.PRNGKey(3), 2)\nx = random.uniform(key1, (4,4))\n\nmodel = RematMLP(features=[3,4,5])\ninit_variables = model.init(key2, x)\ny = model.apply(init_variables, x)\n\nprint('initialized parameter shapes:\\n', jax.tree_map(jnp.shape, unfreeze(init_variables)))\nprint('output:\\n', y)",
"Vmap\nYou can now vmap Modules inside. The transform has a lot of arguments, they have the usual jax vmap args:\n - in_axes - an integer or None for each input argument\n - out_axes - an integer or None for each output argument\n - axis_size - the axis size if you need to give it explicitly\nIn addition, we provide for each kind of variable it's axis rules:\n\nvariable_in_axes - a dict from kinds to a single integer or None specifying the input axes to map \nvariable_out_axes - a dict from kinds to a single integer or None specifying the output axes to map\nsplit_rngs - a dict from RNG-kinds to a bool, specifying whether to split the rng along the axis.\n\nBelow we show an example defining a batched, multiheaded attention module from a single-headed unbatched attention implementation.",
"class RawDotProductAttention(nn.Module):\n attn_dropout_rate: float = 0.1\n train: bool = False\n\n @nn.compact\n def __call__(self, query, key, value, bias=None, dtype=jnp.float32):\n assert key.ndim == query.ndim\n assert key.ndim == value.ndim\n\n n = query.ndim\n attn_weights = lax.dot_general(\n query, key,\n (((n-1,), (n - 1,)), ((), ())))\n if bias is not None:\n attn_weights += bias\n norm_dims = tuple(range(attn_weights.ndim // 2, attn_weights.ndim))\n attn_weights = jax.nn.softmax(attn_weights, axis=norm_dims)\n attn_weights = nn.Dropout(self.attn_dropout_rate)(attn_weights, \n deterministic=not self.train)\n attn_weights = attn_weights.astype(dtype)\n\n contract_dims = (\n tuple(range(n - 1, attn_weights.ndim)),\n tuple(range(0, n - 1)))\n y = lax.dot_general(\n attn_weights, value,\n (contract_dims, ((), ())))\n return y\n\nclass DotProductAttention(nn.Module):\n qkv_features: Optional[int] = None\n out_features: Optional[int] = None\n train: bool = False\n\n @nn.compact\n def __call__(self, inputs_q, inputs_kv, bias=None, dtype=jnp.float32):\n qkv_features = self.qkv_features or inputs_q.shape[-1]\n out_features = self.out_features or inputs_q.shape[-1]\n\n QKVDense = functools.partial(\n nn.Dense, features=qkv_features, use_bias=False, dtype=dtype)\n query = QKVDense(name='query')(inputs_q)\n key = QKVDense(name='key')(inputs_kv)\n value = QKVDense(name='value')(inputs_kv)\n\n y = RawDotProductAttention(train=self.train)(\n query, key, value, bias=bias, dtype=dtype)\n\n y = nn.Dense(features=out_features, dtype=dtype, name='out')(y)\n return y\n\nclass MultiHeadDotProductAttention(nn.Module):\n qkv_features: Optional[int] = None\n out_features: Optional[int] = None\n batch_axes: Sequence[int] = (0,)\n num_heads: int = 1\n broadcast_dropout: bool = False\n train: bool = False\n @nn.compact\n def __call__(self, inputs_q, inputs_kv, bias=None, dtype=jnp.float32):\n qkv_features = self.qkv_features or inputs_q.shape[-1]\n out_features = self.out_features or inputs_q.shape[-1]\n\n # Make multiheaded attention from single-headed dimension.\n Attn = nn.vmap(DotProductAttention,\n in_axes=(None, None, None),\n out_axes=2,\n axis_size=self.num_heads,\n variable_axes={'params': 0},\n split_rngs={'params': True,\n 'dropout': not self.broadcast_dropout})\n\n # Vmap across batch dimensions.\n for axis in reversed(sorted(self.batch_axes)):\n Attn = nn.vmap(Attn,\n in_axes=(axis, axis, axis),\n out_axes=axis,\n variable_axes={'params': None},\n split_rngs={'params': False, 'dropout': False})\n\n # Run the vmap'd class on inputs.\n y = Attn(qkv_features=qkv_features // self.num_heads,\n out_features=out_features,\n train=self.train,\n name='attention')(inputs_q, inputs_kv, bias)\n\n return y.mean(axis=-2)\n\n\nkey1, key2, key3, key4 = random.split(random.PRNGKey(0), 4)\nx = random.uniform(key1, (3, 13, 64))\n\nmodel = functools.partial(\n MultiHeadDotProductAttention,\n broadcast_dropout=False,\n num_heads=2,\n batch_axes=(0,))\n\ninit_variables = model(train=False).init({'params': key2}, x, x)\nprint('initialized parameter shapes:\\n', jax.tree_map(jnp.shape, unfreeze(init_variables)))\n\ny = model(train=True).apply(init_variables, x, x, rngs={'dropout': key4})\nprint('output:\\n', y.shape)",
"Scan\nScan allows us to apply lax.scan to Modules, including their parameters and mutable variables. To use it we have to specify how we want each \"kind\" of variable to be transformed. For scanned variables we specify similar to vmap via in variable_in_axes, variable_out_axes:\n - nn.broadcast broadcast the variable kind across the scan steps as a constant \n - <axis:int> scan along axis for e.g. unique parameters at each step\nOR we specify that the variable kind is to be treated like a \"carry\" by passing to the variable_carry argument.\nFurther, for scan'd variable kinds, we further specify whether or not to split the rng at each step.",
"class SimpleScan(nn.Module):\n @nn.compact\n def __call__(self, xs):\n dummy_rng = random.PRNGKey(0)\n init_carry = nn.LSTMCell.initialize_carry(dummy_rng, \n xs.shape[:1], \n xs.shape[-1])\n LSTM = nn.scan(nn.LSTMCell,\n in_axes=1, out_axes=1,\n variable_broadcast='params',\n split_rngs={'params': False})\n return LSTM(name=\"lstm_cell\")(init_carry, xs)\n\nkey1, key2 = random.split(random.PRNGKey(0), 2)\nxs = random.uniform(key1, (1, 5, 2))\n\nmodel = SimpleScan()\ninit_variables = model.init(key2, xs)\n\nprint('initialized parameter shapes:\\n', jax.tree_map(jnp.shape, unfreeze(init_variables)))\n\ny = model.apply(init_variables, xs)\nprint('output:\\n', y)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
chris1610/pbpython
|
notebooks/Ipython-pandas-tips-and-tricks.ipynb
|
bsd-3-clause
|
[
"Introduction\nIPython, pandas and matplotlib have a number of useful options you can use to make it easier to view and format your data. This notebook collects a bunch of them in one place. I hope this will be a useful reference.\nThe original blog posting is on http://pbpython.com/ipython-pandas-display-tips.html\nImport modules and some sample data\nFirst, do our standard pandas, numpy and matplotlib imports as well as configure inline displays of plots.",
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"One of the simple things we can do is override the default CSS to customize our DataFrame output.\nThis specific example is from - Brandon Rhodes' talk at pycon\nFor the purposes of the notebook, I'm defining CSS as a variable but you could easily read in from a file as well.",
"CSS = \"\"\"\nbody {\n margin: 0;\n font-family: Helvetica;\n}\ntable.dataframe {\n border-collapse: collapse;\n border: none;\n}\ntable.dataframe tr {\n border: none;\n}\ntable.dataframe td, table.dataframe th {\n margin: 0;\n border: 1px solid white;\n padding-left: 0.25em;\n padding-right: 0.25em;\n}\ntable.dataframe th:not(:empty) {\n background-color: #fec;\n text-align: left;\n font-weight: normal;\n}\ntable.dataframe tr:nth-child(2) th:empty {\n border-left: none;\n border-right: 1px dashed #888;\n}\ntable.dataframe td {\n border: 2px solid #ccf;\n background-color: #f4f4ff;\n}\n\"\"\"",
"Now add this CSS into the current notebook's HTML.",
"from IPython.core.display import HTML\nHTML('<style>{}</style>'.format(CSS))\n\nSALES=pd.read_csv(\"../data/sample-sales-tax.csv\", parse_dates=True)\nSALES.head()",
"You can see how the CSS is now applied to the DataFrame and how you could easily modify it to customize it to your liking.\nJupyter notebooks do a good job of automatically displaying information but sometimes you want to force data to display. Fortunately, ipython provides and option. This is especially useful if you want to display multiple dataframes.",
"from IPython.display import display\n\ndisplay(SALES.head(2))\ndisplay(SALES.tail(2))\ndisplay(SALES.describe())",
"Using pandas settings to control output\nPandas has many different options to control how data is displayed.\nYou can use max_rows to control how many rows are displayed",
"pd.set_option(\"display.max_rows\",4)\n\nSALES",
"Depending on the data set, you may only want to display a smaller number of columns.",
"pd.set_option(\"display.max_columns\",6)\n\nSALES",
"You can control how many decimal points of precision to display",
"pd.set_option('precision',2)\n\nSALES\n\npd.set_option('precision',7)\n\nSALES",
"You can also format floating point numbers using float_format",
"pd.set_option('float_format', '{:.2f}'.format)\n\nSALES",
"This does apply to all the data. In our example, applying dollar signs to everything would not be correct for this example.",
"pd.set_option('float_format', '${:.2f}'.format)\n\nSALES",
"Third Party Plugins\nQtopian has a useful plugin called qgrid - https://github.com/quantopian/qgrid\nImport it and install it.",
"import qgrid\nqgrid.nbinstall(overwrite=True)",
"Showing the data is straighforward.",
"qgrid.show_grid(SALES, remote_js=True)",
"The plugin is very similar to the capability of an Excel autofilter. It can be handy to quickly filter and sort your data.\nImproving your plots\nI have mentioned before how the default pandas plots don't look so great. Fortunately, there are style sheets in matplotlib which go a long way towards improving the visualization of your data.\nHere is a simple plot with the default values.",
"SALES.groupby('name')['quantity'].sum().plot(kind=\"bar\")",
"We can use some of the matplolib styles available to us to make this look better.\nhttp://matplotlib.org/users/style_sheets.html",
"plt.style.use('ggplot')\n\nSALES.groupby('name')['quantity'].sum().plot(kind=\"bar\")",
"You can see all the styles available",
"plt.style.available\n\nplt.style.use('bmh')\n\nSALES.groupby('name')['quantity'].sum().plot(kind=\"bar\")\n\nplt.style.use('fivethirtyeight')\n\nSALES.groupby('name')['quantity'].sum().plot(kind=\"bar\")",
"Each of the different styles have subtle (and not so subtle) changes. Fortunately it is easy to experiment with them and your own plots.\nYou can find other articles at Practical Business Python\nThis notebook is referenced in the following post - http://pbpython.com/ipython-pandas-display-tips.html"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
fmaussion/cleo
|
examples/DataLevels.ipynb
|
gpl-3.0
|
[
"cleo.DataLevels examples",
"import matplotlib as mpl\nimport numpy as np\nfrom cleo.graphics import DataLevels\n\n%matplotlib inline \n\na = np.array([-1., 0., 1.1, 1.9, 9.])\ncm = mpl.cm.get_cmap('RdYlBu_r')",
"Basic principles\nAutomatic levels\nIf data is set, the levels are chosen according to the min and max of the data:",
"dl = DataLevels(a, cmap=cm)\ndl.visualize(orientation='horizontal', add_values=True)",
"The Default number of levels is 8, but it's up to you to choose something else:",
"dl = DataLevels(a, cmap=cm, nlevels=256)\ndl.visualize(orientation='horizontal', add_values=True)",
"Automatic levels with min and max value\nYou can also specify the boundaries of the levels to choose:",
"dl = DataLevels(a, cmap=cm, nlevels=256, vmax=3)\ndl.visualize(orientation='horizontal', add_values=True)",
"You see that the colorbar has been extended. This behavior is forced by DataLevels and prevents unexpected clipping and such. If you set another data, it will remember your choice for a vmax:",
"dl.set_data(np.arange(5) / 4)\ndl.visualize(orientation='horizontal', add_values=True)",
"However vmin has changed, of course. The object remembers what has to be chosen automatically and what was previously set.\nUser levels\nThe same is true when the user specifies the levels:",
"dl = DataLevels(a, cmap=cm, levels=[0, 1, 2, 3])\ndl.visualize(orientation='horizontal', add_values=True)",
"Note that the colors have been chosen to cover the full palette, which is much better than the default behavior (see https://github.com/matplotlib/matplotlib/issues/4850).\nSince the color choice is automated, when changing the data the colorbar also changes:",
"dl.set_data(np.arange(5) / 4)\ndl.visualize(orientation='horizontal', add_values=True)",
"Since the new data remains within the level range, there is no need to extend the colorbar. Maybe you'd like the two plots above to have the same color code. For this you shoudl set the extend keyword:",
"dl.set_extend('both')\ndl.visualize(orientation='horizontal', add_values=True)",
"Cleo made the choice to warn you if you did something wrong that hides information from the data:",
"dl = DataLevels(a, cmap=cm, levels=[0, 1, 2, 3], extend='neither')\ndl.visualize(orientation='horizontal', add_values=True)",
"Using DataLevels for your own visualisations\nThe examples above made use of the utilitary function visualize(), which is just here to have a look at the data. Here's a \"real world\" example with a scatterplot:",
"# Make the data\nx = np.random.randn(1000)\ny = np.random.randn(1000)\nz = x**2 + y**2\n\n# The datalevel step\ndl = DataLevels(z, cmap=cm, levels=np.arange(6))\n\n# The actual plot\nfig, ax = plt.subplots(1)\nax.scatter(x, y, color=dl.to_rgb(), s=64)\ncbar = dl.append_colorbar(ax, \"right\", size=\"5%\", pad=0.5) # Note that the DataLevel class has to draw the colorbar",
"This also works for images:",
"# Make the data\nimg, xi, yi = np.histogram2d(x, y)\n# The datalevel step\ndl = DataLevels(img, cmap=cm, levels=np.arange(6))\n# The actual plot\nfig, ax = plt.subplots(1)\ntoplot = dl.to_rgb()\nax.imshow(toplot, interpolation='none')\ncbar = dl.append_colorbar(ax, \"right\", size=\"5%\", pad=0.5) # Note that the DataLevel class has to draw the colorbar",
"And with missing data:",
"cm.set_bad('green')\nimg[1:5, 1:5] = np.NaN\n\n# The datalevel step\ndl = DataLevels(img, cmap=cm, levels=np.arange(6))\n# The actual plot\nfig, ax = plt.subplots(1)\ntoplot = dl.to_rgb() # note that the shape is preserved\nax.imshow(toplot, interpolation='none')\ncbar = dl.append_colorbar(ax, \"right\", size=\"5%\", pad=0.5) # Note that the DataLevel class has to draw the colorbar"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gwu-libraries/notebooks
|
20170720-building-social-network-graphs-CSV.ipynb
|
mit
|
[
"Exports nodes and edges from tweets (Retweets, Mentions, or Replies) [CSV]\nExports nodes and edges from tweets (either from retweets or mentions) in json format that can be exported from SFM, and saves it in a file format compatible with various social network graph tools such as Gephi, Cytoscape, Kumu, etc. These are for directed graphs.",
"import sys\nimport json\nimport re\nimport numpy as np\nfrom datetime import datetime\nimport pandas as pd \n\ntweetfile = '/home/soominpark/sfmproject/Work/Network Graphs/food_security.csv'\n\ntweets = pd.read_csv(tweetfile)",
"1. Export edges from Retweets, Mentions, or Replies\n\nRun one of three blocks of codes below for your purpose.",
"# 1. Export edges from Retweets\n\nretweets = tweets[tweets['is_retweet'] == 'Yes']\nretweets['original_twitter'] = retweets['text'].str.extract('RT @([a-zA-Z0-9]\\w{0,}):', expand=True)\n\nedges = retweets[['screen_name', 'original_twitter','created_at']]\nedges.columns = ['Source', 'Target', 'Strength']\n\n# 2. Export edges from Mentions\n\nmentions = tweets[tweets['mentions'].notnull()]\n\nedges = pd.DataFrame(columns=('Source','Target','Strength'))\n\nfor index, row in mentions.iterrows():\n mention_list = row['mentions'].split(\", \")\n for mention in mention_list:\n edges = edges.append(pd.DataFrame([[row['screen_name'],\n mention,\n row['created_at']]]\n , columns=('Source','Target','Strength')), ignore_index=True)\n\n# 3. Export edges from Replies\n\nreplies = tweets[tweets['in_reply_to_screen_name'].notnull()]\n\nedges = replies[['screen_name', 'in_reply_to_screen_name','created_at']]\nedges.columns = ['Source', 'Target', 'Strength']",
"2. Leave only the tweets whose strength level >= user specified level (directed)",
"strengthLevel = 3 # Network connection strength level: the number of times in total each of the tweeters responded to or mentioned the other.\n # If you have 1 as the level, then all tweeters who mentioned or replied to another at least once will be displayed. But if you have 5, only those who have mentioned or responded to a particular tweeter at least 5 times will be displayed, which means that only the strongest bonds are shown.\n\nedges2 = edges.groupby(['Source','Target'])['Strength'].count()\nedges2 = edges2.reset_index()\nedges2 = edges2[edges2['Strength'] >= strengthLevel]",
"3. Export nodes",
"# Export nodes from the edges and add node attributes for both Sources and Targets.\n\nusers = tweets[['screen_name','followers_count','friends_count']]\nusers = users.sort_values(['screen_name','followers_count'], ascending=[True, False])\nusers = users.drop_duplicates(['screen_name'], keep='first') \n\nids = edges2['Source'].append(edges2['Target']).to_frame()\nids['Label'] = ids\nids.columns = ['screen_name', 'Label']\nids = ids.drop_duplicates(['screen_name'], keep='first') \nnodes = pd.merge(ids, users, on='screen_name', how='left')\n\nprint(nodes.shape)\nprint(edges2.shape)",
"4. Export nodes and edges to csv files",
"# change column names for Kumu import (Run this when using Kumu)\nedges2.columns = ['From','To','Strength']\n\n# Print nodes to check\nnodes.head(3)\n\n# Print edges to check\nedges2.head(3)\n\n# Export nodes and edges to csv files\nnodes.to_csv('nodes.csv', encoding='utf-8', index=False)\nedges2.to_csv('edges.csv', encoding='utf-8', index=False)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
robertoalotufo/ia898
|
master/tutorial_ti_2.ipynb
|
mit
|
[
"Table of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#Transformações-de-intensidade\" data-toc-modified-id=\"Transformações-de-intensidade-1\"><span class=\"toc-item-num\">1 </span>Transformações de intensidade</a></div><div class=\"lev2 toc-item\"><a href=\"#Descrição\" data-toc-modified-id=\"Descrição-11\"><span class=\"toc-item-num\">1.1 </span>Descrição</a></div><div class=\"lev2 toc-item\"><a href=\"#Indexação-por-arrays\" data-toc-modified-id=\"Indexação-por-arrays-12\"><span class=\"toc-item-num\">1.2 </span>Indexação por arrays</a></div><div class=\"lev2 toc-item\"><a href=\"#Utilização-em-imagens\" data-toc-modified-id=\"Utilização-em-imagens-13\"><span class=\"toc-item-num\">1.3 </span>Utilização em imagens</a></div><div class=\"lev3 toc-item\"><a href=\"#T1:-Função-identidade\" data-toc-modified-id=\"T1:-Função-identidade-131\"><span class=\"toc-item-num\">1.3.1 </span>T1: Função identidade</a></div><div class=\"lev3 toc-item\"><a href=\"#T2:-Função-logaritmica\" data-toc-modified-id=\"T2:-Função-logaritmica-132\"><span class=\"toc-item-num\">1.3.2 </span>T2: Função logaritmica</a></div><div class=\"lev3 toc-item\"><a href=\"#T3:-Função-negativo\" data-toc-modified-id=\"T3:-Função-negativo-133\"><span class=\"toc-item-num\">1.3.3 </span>T3: Função negativo</a></div><div class=\"lev3 toc-item\"><a href=\"#T4:-Função-threshold-128\" data-toc-modified-id=\"T4:-Função-threshold-128-134\"><span class=\"toc-item-num\">1.3.4 </span>T4: Função threshold 128</a></div><div class=\"lev3 toc-item\"><a href=\"#T5:-Função-quantização\" data-toc-modified-id=\"T5:-Função-quantização-135\"><span class=\"toc-item-num\">1.3.5 </span>T5: Função quantização</a></div><div class=\"lev2 toc-item\"><a href=\"#Outras-página-da-toolbox\" data-toc-modified-id=\"Outras-página-da-toolbox-14\"><span class=\"toc-item-num\">1.4 </span>Outras página da toolbox</a></div>\n\n# Transformações de intensidade\n\n## Descrição\n\nTransformações de intensidade modificam o valor do pixel de acordo com uma\nequação ou mapeamento. Estas transformações são ditas pontuais para contrastar\ncom operações ditas de vizinhança. Um exemplo de transformação de intensidade\né uma operação que divide o valor dos pixels por 2. O resultado será uma nova\nimagem onde todos os pixels serão mais escuros.\n\nA transformação de intensidade tem a forma $s = T(v)$, onde $v$ é um valor de nível\nde cinza de entrada e s é o valor de nível de cinza na saída. Este tipo de \nmapeamento pode apresentar muitas denominações: transformação de contraste,\nlookup table, tabela ou mapa de cores, etc. A transformação T pode ser implementada\npor uma função ou através de uma simples tabela de mapeamento.\nO NumPy possui um forma elegante e eficiente de se aplicar\num mapeamento de intensidade a uma imagem. \n\n## Indexação por arrays\n\nO ``ndarray`` pode ser indexado por outros ``ndarrays``. O uso de arrays como índice\npodem ser simples mas também bastante complexos e difíceis de entender. O uso de\narrays indexados retornam sempre uma cópia dos dados originais e não uma visão ou\ncópia rasa normalmente obtida com o *slicing*. Assim, o uso de arrays indexados devem\nser utilizados com precaução pois podem gerar códigos não tão eficientes.\n\nVeja um exemplo numérico unidimensional simples. Um vetor ``row`` de 10 elementos de 0 a 90\né criado e um outro vetor de indices ``i`` com valores [3,5,0,8] irá indexar ``row``\nna forma ``row[i]``. O resultado será [30,50,0,80] que são os elementos de ``row`` indexados por ``i``:",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nimport sys,os\nia898path = os.path.abspath('/etc/jupyterhub/ia898_1s2017/')\nif ia898path not in sys.path:\n sys.path.append(ia898path)\nimport ia898.src as ia",
"O indexador i precisa ser inteiro. Entretanto o array que será indexado pode ser qualquer tipo.\nf = row[i]\n\nshape(f) é igual ao shape(i)\ndtype(f) é o dtype(row)",
"row = np.arange(0.,100,10)\nprint('row:', row)\ni = np.array([[3,5,0,8],[4,2,7,1]])\nf = row[i]\nprint('i:', i)\nprint('f=row[i]\\n',f)\nprint(id(i),id(row),id(f))",
"Vejamos agora o caso bidimensional, apropriado para imagens e a transformação de intensidade.\nSeja uma imagem f de dimensões (2,3) com os valores de pixels variando de 0 a 2:",
"f = np.array([[0, 1, 2],\n [2, 0, 1]])\nprint('f=\\n',f)",
"Seja agora a transformação de intensidade T, especificada por um vetor de 3 elementos, onde\nT[0] = 5; T[1] = 6 e T[2] = 7:",
"T = np.array([5, 6, 7])\nprint('T:', T)\nfor i in np.arange(T.size):\n print('%d:%d'% (i,T[i]))",
"A aplicação da transformação de intensidade é feita utilizando-se a imagem f como índice da\ntransformação T, como se escreve na equação matemática:",
"g = T[f]\nprint('g=T[f]= \\n', g)\nprint('g.shape:', g.shape)",
"Note que T[f] tem as mesmas dimensões de f, entretanto, seus pixels passaram pelo\nmapeamento da tabela T.\nUtilização em imagens\nExistem muitas funções úteis que podem ser feitas com o mapeamento T: realce de contraste, equalização de\nhistograma, thresholding, redução de níveis de cinza, negativo da imagem, entre várias outras.\nÉ comum representar a tabela de transformação de intensidade em um gráfico. A seguir várias funções de\ntransformações são calculadas:",
"T1 = np.arange(256).astype('uint8') # função identidade\nT2 = ia.normalize(np.log(T1+1.)) # logaritmica - realce partes escuras\nT3 = 255 - T1 # negativo\nT4 = ia.normalize(T1 > 128) # threshold 128\nT5 = ia.normalize(T1//30) # reduz o número de níveis de cinza\nplt.plot(T1)\nplt.plot(T2)\nplt.plot(T3)\nplt.plot(T4)\nplt.plot(T5)\nplt.legend(['T1', 'T2', 'T3', 'T4','T5'], loc='right')\nplt.xlabel('valores de entrada')\nplt.ylabel('valores de saída')\nplt.show()",
"Veja a aplicação destas tabelas na imagem \"cameraman.tif\":\nT1: Função identidade",
"nb = ia.nbshow(2)\nf = mpimg.imread('../data/cameraman.tif')\nf1 = T1[f]\nnb.nbshow(f,'original')\nplt.plot(T1)\nplt.title('T1: identidade')\nnb.nbshow(f1,'T1[f]')\nnb.nbshow()",
"T2: Função logaritmica",
"f2 = T2[f]\nnb.nbshow(f,'original')\nplt.plot(T2)\nplt.title('T2: logaritmica')\nnb.nbshow(f2,'T2[f]')\nnb.nbshow()",
"T3: Função negativo",
"f3 = T3[f]\nnb.nbshow(f,'original')\nplt.plot(T3)\nplt.title('T3: negativo')\nnb.nbshow(f3,'T3[f]')\nnb.nbshow()",
"T4: Função threshold 128",
"f4 = T4[f]\nnb.nbshow(f,'original')\nplt.plot(T4)\nplt.title('T4: threshold 128')\nnb.nbshow(f4,'T4[f]')\nnb.nbshow()",
"T5: Função quantização",
"f5 = T5[f]\nnb.nbshow(f,'original')\nplt.plot(T5)\nplt.title('T5: quantização')\nnb.nbshow(f5,'T5[f]')\nnb.nbshow()",
"Observando o histograma de cada imagem após o mapaemento:",
"h = ia.histogram(f)\nh2 = ia.histogram(f2) #logaritmica\nh3 = ia.histogram(f3) # negativo\nh4 = ia.histogram(f4) # threshold\nh5 = ia.histogram(f5) # quantização\nplt.plot(h)\n#plt.plot(h2)\n#plt.plot(h3)\n#plt.plot(h4)\nplt.plot(h5)\nplt",
"Do ponto de vista de eficiência, qual é o melhor, utilizar o mapeamento pela tabela, ou processar a imagem diretamente?",
"f = ia.normalize(np.arange(1000000).reshape(1000,1000))\n\n%timeit g2t = T2[f]\n%timeit g2 = ia.normalize(np.log(f+1.))\n\n%timeit g3t = T3[f]\n%timeit g3 = 255 - f",
"T pode ser denominado como função de transferência de intensidade. Quando a derivada de T for maior que 1, o contraste é aumentado naqueles valores de T, se for menor que 1, o contraste é diminuído. Caso a derivada for negativa, existe uma quebra de ordenação dos valores.\nSe T for uma função crescente, isto é:\n$$ T(i) >= T(j) \\ \\text{ se }\\ i > j $$\nEntão aplicando-se g = T[f], a propriedade \n$$ f(r,c) >= f(r1,c1)\\ \\text{então}\\ g(r,c) >= g(r1,c1) $$\nOutras página da toolbox\n\nia838:applylut - Transformação da intensidade da imagem\nia636:iait iait` - ia636: Demonstração da transformação de contraste"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
edwardd1/phys202-2015-work
|
assignments/assignment04/MatplotlibEx02.ipynb
|
mit
|
[
"Matplotlib Exercise 2\nImports",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np",
"Exoplanet properties\nOver the past few decades, astronomers have discovered thousands of extrasolar planets. The following paper describes the properties of some of these planets.\nhttp://iopscience.iop.org/1402-4896/2008/T130/014001\nYour job is to reproduce Figures 2 and 4 from this paper using an up-to-date dataset of extrasolar planets found on this GitHub repo:\nhttps://github.com/OpenExoplanetCatalogue/open_exoplanet_catalogue\nA text version of the dataset has already been put into this directory. The top of the file has documentation about each column of data:",
"!head -n 30 open_exoplanet_catalogue.txt",
"Use np.genfromtxt with a delimiter of ',' to read the data into a NumPy array called data:",
"data = np.genfromtxt('open_exoplanet_catalogue.txt', delimiter=',')\ndata\n#raise NotImplementedError()\n\nassert data.shape==(1993,24)",
"Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper.\n\nCustomize your plot to follow Tufte's principles of visualizations.\nCustomize the box, grid, spines and ticks to match the requirements of this data.\nPick the number of bins for the histogram appropriately.",
"np.histogram(data)\n#raise NotImplementedError()\n\nassert True # leave for grading",
"Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis.\n\nCustomize your plot to follow Tufte's principles of visualizations.\nCustomize the box, grid, spines and ticks to match the requirements of this data.",
"# YOUR CODE HERE\nraise NotImplementedError()\n\nassert True # leave for grading"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ewulczyn/ewulczyn.github.io
|
ipython/what_if_ab_testing_is_like_science/what_if_ab_testing_is_like_science.ipynb
|
mit
|
[
"import matplotlib.pyplot as plt\n%matplotlib inline\nimport seaborn as sns",
"In Why Most Published Research Findings Are False John Ioannidis argues that if most hypotheses we test are false, we end up with more false research findings than true findings, even if we do rigorous hypothesis testing. The argument hinges on a vanilla application of Bayes' rule. Lets assume that science is \"really hard\" and that only 50 out of 1000 hypotheses we formulate are in fact true. Say we test our hypotheses at significance level alpha=0.05 and with power beta=0.80. Out of our 950 incorrect hypotheses, our hypothesis testing will lead to 950x0.05 = 47.5 false positives i.e. false research findings. Out of our 100 correct hypotheses, we will correctly identify 50x0.8 = 40 true research findings. To our horror, we find that most published findings are false!\nMost applications of AB testing involve running multiple repeated experiments in order to optimize a metric. At each iteration, we test a hypothesis: Does the new design perform better than the control? If so, we adopt the new design as our control and test the next idea. After many iterations, we expect to have a design that is better than when we started. But Ioannidis' argument about how most research findings could be false should make us wonder:\n\nIs it possible, that if the chances of generating a better new design are slim, that we adopt bad designs more often than we adopt good designs? What effect does this have on our performance in the long run?\nHow can we change our testing strategy in such a way that we still expect to increase performance over time? Conversely, how can we take advantage of a situation where the chances of generating a design that is better than the control is really high?\n\nTo investigate these questions, lets simulate the process of repeated AB testing for optimizing some conversion rate (CR) under different scenarios for how hard our optimization problem is. For example, our CR could be the fraction of users who donate to Wikipedia in response to being shown a particular fundraising banner. I will model the difficulty of the problem using a distribution over the percent lift in conversion rate (CR) that a new idea has over the control. In practice we might expect the mean of this distribution to change with time. As we work on a problem longer, the average idea probably gives a smaller performance increase. For our purposes, I will assume this distribution (call it $I$) is fixed and normally distributed.\nWe start with a control banner with some fixed conversion rate (CR). At each iteration, we test the control against a new banner whose percent lift over the control is drawn from $I$. If the new banner wins, it becomes the new control. We repeat this step several times to see what final the CR is after running a sequence of tests. I will refer to a single sequence of tests as a campaign. We can simulate several campaigns to characterize the distribution of outcomes we can expect at the end of a campaign.\nCode\nFor those who are interested, this section describes the simulation code. The Test class, simulates running a single AB test. The parameters significance, power and mde correspond to the significance, power and minimum effect size of the z-test used to test the hypothesis that the new design and the control have the same CR. The optimistic parameter determines which banner we choose if we fail to reject the null hypothesis that the two designs are the same.",
"import numpy as np\nnp.random.seed(seed=0)\nfrom statsmodels.stats.weightstats import ztest\nfrom statsmodels.stats.power import tt_ind_solve_power\nfrom scipy.stats import bernoulli\n\nclass Test():\n \n def __init__(self, significance, power, mde, optimistic):\n self.significance = significance\n self.power = power\n self.mde = mde\n self.optimistic = optimistic\n \n def compute_sample_size(self, u_hat):\n var_hat = u_hat*(1-u_hat)\n absolute_effect = u_hat - (u_hat*(1+self.mde))\n standardized_effect = absolute_effect / np.sqrt(var_hat)\n sample_size = tt_ind_solve_power(effect_size=standardized_effect,\n alpha=self.significance,\n power=self.power)\n return sample_size\n \n def run(self, control_cr, treatment_cr):\n\n # run null hypothesis test with a fixed sample size\n N = self.compute_sample_size(control_cr)\n data_control = bernoulli.rvs(control_cr,size=N)\n data_treatment = bernoulli.rvs(treatment_cr,size=N)\n p = ztest(data_control, data_treatment)[1]\n\n # if p > alpha, no clear winner\n if p > self.significance:\n if self.optimistic:\n return treatment_cr\n else:\n return control_cr \n\n # other wise pick the winner\n else:\n if data_control.sum() > data_treatment.sum():\n return control_cr\n else:\n return treatment_cr\n ",
"The Campaign class simulates running num_tests AB tests, starting with a base_rate CR. The parameters mu and sigma characterize $I$, the distribution over the percent gain in performance of a new design compared to the control.",
"class Campaign():\n \n def __init__(self, base_rate, num_tests, test, mu, sigma):\n \n self.num_tests = num_tests\n self.test = test\n self.mu = mu\n self.sigma = sigma\n self.base_rate = base_rate\n \n \n def run(self):\n \n true_rates = [self.base_rate,]\n for i in range(self.num_tests):\n #the control of the current test is the winner of the last test\n control_cr = true_rates[-1]\n # create treatment banner with a lift drawn from the lift distribution\n lift = np.random.normal(self.mu, self.sigma)\n treatment_cr = min(0.9, control_cr*(1.0+lift/100.0))\n winning_cr = self.test.run(control_cr, treatment_cr)\n true_rates.append (winning_cr)\n \n return true_rates",
"The expected_campaign_results function implements running many campaigns with the same starting conditions. It generates a plot depicting the expected CR as a function of the number of sequential AB test.",
"import matplotlib.pyplot as plt\nimport pandas as pd\n\ndef expected_campaign_results(campaign, sim_runs):\n fig = plt.figure(figsize=(10, 6), dpi=80)\n \n d = pd.DataFrame()\n for i in range(sim_runs):\n d[i] = campaign.run()\n \n d2 = pd.DataFrame()\n d2['mean'] = d.mean(axis=1)\n d2['lower'] = d2['mean'] + 2*d.std(axis=1)\n d2['upper'] = d2['mean'] - 2*d.std(axis=1)\n \n plt.plot(d2.index, d2['mean'], label= 'CR')\n plt.fill_between(d2.index, d2['lower'], d2['upper'], alpha=0.31,\n edgecolor='#3F7F4C', facecolor='0.75',linewidth=0)\n \n plt.xlabel('num tests')\n plt.ylabel('CR')\n \n plt.plot(d2.index, [base_rate]*(num_tests+1), label = 'Start CR')\n plt.legend() ",
"Simulations\nI will start out with a moderately pessimistic scenario and assume the average new design is 5% worse than the control and that standard deviation sigma is 3. The plot below shows the distribution over percent gains from new designs.",
"def plot_improvements(mu, sigma):\n plt.figure(figsize = (7, 3))\n x = np.arange(-45.0, 45.0, 0.5)\n plt.xticks(np.arange(-45.0, 45.0, 5))\n plt.plot(x, 1/(sigma * np.sqrt(2 * np.pi)) *np.exp( - (x - mu)**2 / (2 * sigma**2) ))\n plt.xlabel('lift')\n plt.ylabel('probability density')\n plt.title('Distribution over lift in CR of a new design compared to the control')\n\n#Distribution over % Improvements\nmu = -5.0\nsigma = 3\n\nplot_improvements(mu, sigma)",
"Lets start out with some standard values of alpha = 0.05, beta = 0.8 and mde = 0.10 for the hypothesis tests. The plot below shows the expected CR after a simulating a sequence of 30 AB tests 100 times.",
"# hypothesis test params\nsignificance = 0.05\npower = 0.8\nmde = 0.10\n\n#camapign params\nnum_tests = 30\nbase_rate = 0.2\n\n#number of trials\nsim_runs = 100\n\ntest = Test(significance, power, mde, optimistic = False)\ncampaign = Campaign(base_rate, num_tests, test, mu, sigma)\nexpected_campaign_results(campaign, sim_runs)",
"Even though we went through all work of running 100 AB test, we cannot expect to improve our CR. The good news is that although most of our ideas were bad, doing the AB testing prevented us from loosing performance. The plot below shows what would happen if we had used the new idea as the control when the hypothesis test could not discern a significant difference.",
"test = Test(significance, power, mde, optimistic = True)\ncampaign = Campaign(base_rate, num_tests, test, mu, sigma)\nexpected_campaign_results(campaign, sim_runs)",
"Impressive. The CR starts tanking at a rapid pace. This is an extreme example but it spells out a clear warning: if your optimization problem is hard, stick to your control.\nNow lets imagine a world in which most ideas are neutral but there is still the potential for big wins and big losses. The plot below shows our new distribution over the quality of new ideas.",
"mu = 0.0\nsigma = 5\n\nplot_improvements(mu, sigma)",
"And here are the result of the new simulation:",
"test = Test(significance, power, mde, optimistic = False)\ncampaign = Campaign(base_rate, num_tests, test, mu, sigma)\nexpected_campaign_results(campaign, sim_runs)",
"Now there is huge variance in how things could turn out. In expectation, we get a 2% absolute gain every 10 tests. As you might have guessed, in this scenario it does not matter which banner you choose when the hypothesis test does not detect a significant difference. \nLets see if we can reduce the variance in outcomes by decreasing the minimum detectable effect mde to 0.05. This will cost us in terms of runtime for each test, but it also should reduce the variance in the expected results.",
"mde = 0.05\n\ntest = Test(significance, power, mde, optimistic = False)\ncampaign = Campaign(base_rate, num_tests, test, mu, sigma)\nexpected_campaign_results(campaign, sim_runs)",
"Now we can expect 5% absolute gain every 15 tests. Furthermore, it is very unlikely that we have not improved out CR after 30 tests.\nFinally, lets consider the rosy scenario in which most new ideas are a winner.",
"mu = 5\nsigma = 3\nplot_improvements(mu, sigma)",
"Again, here are the result of the new simulation:",
"mde = 0.10\ntest = Test(significance, power, mde, optimistic = False)\ncampaign = Campaign(base_rate, num_tests, test, mu, sigma)\nexpected_campaign_results(campaign, sim_runs)",
"Having good ideas is a recipe for runaway success. You might even decide that its foolish to choose the control banner when you don't have significance since chances are that your new idea is better, even if you could not detect it. The plot below shows that choosing the new idea over the control leads to even faster growth in performance.",
"test = Test(significance, power, mde, optimistic = True)\ncampaign = Campaign(base_rate, num_tests, test, mu, sigma)\nexpected_campaign_results(campaign, sim_runs)",
"Lessons Learned\nI hope these simulations provide some insight into the possible long term outcomes AB testing can achieve. Not surprisingly, whether you will be successful in increasing your performance metric has everything to do with the quality of your new designs. Even if your optimization problem is hard, however, AB testing can give you ability to safely explore new ideas as long as you stick to your control in times of doubt. \nWhen most of your ideas have a neutral effect, AB testing helps you pick out the better ideas in expectation. As a result, performance does not simply oscillate around the starting point, but steadily increases. \nIf you are in the lucky position that your design team cannot miss, you are bound to succeed, whether you test or not. However, running AB tests will protect you from adopting a rare failed design. If you are in a hurry to boost your metric, you might consider going with the new design when results are ambiguous. \nThe results also suggest that it would be immensely useful to know which of these 3 scenarios most closely resembles your own. If your optimization problem is hard, you might decide that designing further iterations is not worth the effort. If there is the possibility of large gains and losses, you might decide to run your tests longer, to decrease the variance in possible future outcomes. \nIf you have already run a series of tests, you could gauge where you stand by looking at the distribution over the empirical gains of your new designs. For example, in the first simulation, gains are normally distributed with mean -5 and standard error 3. Lets assume that the true distribution over gains is also normally distributed. You could use the observed empirical gains to estimate the mean and standard error of the true lift distribution. \nNote: You could also consider switching from the standard z-test, to a Bayesian test and use the results of past tests to set an appropriate prior."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
nansencenter/nansat-lectures
|
notebooks/03 object oriented programming.ipynb
|
gpl-3.0
|
[
"Intorduction to Object Oriented Programming in Python\nDefinition of the minimal class in two lines\nDefine class",
"class A():\n pass",
"Use class",
"a = A() # create an instance of class A\nprint (a)\nprint (type(a))",
"Definition of a class with attributes (properties)",
"class Human(object):\n name = ''\n age = 0\n\nhuman1 = Human() # create instance of Human\nhuman1.name = 'Anton' # name him (add data to this object)\nhuman1.age = 39 # set the age (add data to this object)\n\nprint (type(human1))\nprint (human1.name)\nprint (human1.age)",
"Definition of a class with constructor",
"class Human(object):\n name = ''\n age = 0\n \n def __init__(self, name):\n self.name = name",
"Create a Human instance and give him a name instantly",
"h1 = Human('Anton')\nprint (h1.name)\nprint (h1.age)",
"Definition of a class with several methods",
"class Human(object):\n ''' Human being '''\n name = ''\n age = 0\n \n def __init__(self, name):\n ''' Create a Human '''\n self.name = name\n \n def grow(self):\n ''' Grow a Human by one year (in-place) '''\n self.age += 1",
"Create a Human, give him a name, grow by one year (in-place)",
"human1 = Human('Adam')\nhuman1.grow()\n\nprint (human1.name)\nprint (human1.age)",
"Add get_ methods to the class",
"class Human(object):\n ''' Human being '''\n name = ''\n age = 0\n \n def __init__(self, name):\n ''' Create a Human '''\n self.name = name\n \n def grow(self):\n ''' Grow a Human by one year (in-place) '''\n self.age += 1\n \n def get_name(self):\n ''' Return name of a Human '''\n return self.name\n\n def get_age(self):\n ''' Return name of a Human '''\n return self.age\n\nh1 = Human('Eva')\nprint (h1.get_name())",
"Create a class with Inheritance",
"class Teacher(Human):\n ''' Teacher of Python '''\n\n def give_lecture(self):\n ''' Print lecture on the screen ''' \n \n print ('bla bla bla')",
"Create an Teacher with name, grow him sufficiently, use him.",
"t1 = Teacher('Anton')\n\nwhile t1.get_age() < 50:\n t1.grow()\n\nprint (t1.get_name())\nprint (t1.get_age())\nt1.give_lecture()",
"Import class definition from a module\nStore class definition in a separate file. E.g.:\nhttps://github.com/nansencenter/nansat-lectures/blob/master/human_teacher.py",
"# add directory scripts to PYTHONPATH (searchable path)\nimport sys\nsys.path.append('scripts')\n\nfrom human_teacher import Teacher\n\nt1 = Teacher('Morten')\nt1.give_lecture()",
"Practical example",
"## add scripts to the list of searchable directories\nimport sys\nsys.path.append('scripts')\n\n# import class definiton our module\nfrom ts_profile import Profile\n\n# load data\np = Profile('data/tsprofile.txt')\n\n# work with the object\nprint (p.get_ts_at_level(5))\nprint (p.get_ts_at_depth(200))\nprint (p.get_mixed_layer_depth(.1))",
"How would it look without OOP?\n1. A lot of functions to import",
"from st_profile import load_profile, get_ts_at_level, get_ts_at_depth\nfrom st_profile import get_mixed_layer_depth, plot_ts",
"2. A lot of data to unpack and to pass between functions",
"depth, temp, sal = load_profile('tsprofile.txt')\nprint (get_ts_at_level(depth, temp, sal))",
"3. And imagine now we open a satellite image which has:\n\nmany matrices with data\ngeoreference information (e.g. lon, lat of corners)\ndescription of data (metadata)\nand so on...\n\nAnd here comes OOP:",
"from nansat import Nansat\nn = Nansat('satellite_filename.hdf')",
"Exercise\nCreate a class for managing data from T/S profile\nThe class should:\n\nLoad a D/T/S profile from tsprofile.txt\nPrint D/T/S values of all measurements\nPrint D/T/S values and depth of measurement a from given point\nPrint T/S values for a given depth (linear interpolation)\nFind a mixed layer depth\n\nInput data\n'tsprofile.txt' contains three columns of synthetic values: depth (H), temperature (T), salinity (S)\nhttp://192.168.33.10:8888/edit/data/tsprofile.txt\nExample usage\n\nprofile = Profile(fileName)\nallTemperatures = profile.get_all_temperature_values()\nt1 = profile.get_temperature_at_point(pointNumber)\nt2 = profile.get_temperature_at_depth(depth)\nmld = profile.get_mixed_layer_depth(threshold)\n\nHints\n\n\nAdd attributes temperature, salinity, depth of type list\n\n\nUse f = open(fileName) and lines = f.readlines() to read data from file\n\n\nIf you have two measurements of e.g. temperature (t1 and t1) at two depths (d1 and d2) you can linearly interpolate between these values and calculate a temperature (t) at depth (d):\n\n\nt = t1 + (d - d1) * (t2 - t1) / (d2 - d1)\n\nMixed layer can be defined as a layer, where temperature variability do not exceed a predefined threshold (water is relatively homogenous). Therefore to find a mixed layer depth, we need to loop through values of T and identify when the difference beween the value of T at one depth and at the next depth is above a threshold. \n\nGet the nansat-lectures course material at Github: 'git clone https://github.com/nansencenter/nansat-lectures'\nTo get stay updated: 'git pull'"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
opencobra/cobrapy
|
documentation_builder/solvers.ipynb
|
gpl-2.0
|
[
"Solvers\nA constraint-based reconstruction and analysis model for biological systems is actually just an application of a class of discrete optimization problems typically solved with linear, mixed integer or quadratic programming techniques. Cobrapy does not implement any algorithm to find solutions to such problems but rather creates a biologically motivated abstraction to these techniques to make it easier to think of how metabolic systems work without paying much attention to how that formulates to an optimization problem.\nThe actual solving is instead done by tools such as the free software glpk or commercial tools gurobi and cplex which are all made available as a common programmers interface via the optlang package.\nWhen you have defined your model, you can switch solver backend by simply assigning to the model.solver property.",
"from cobra.io import load_model\nmodel = load_model('textbook')\n\nmodel.solver = 'glpk'\n# or if you have cplex installed\nmodel.solver = 'cplex'",
"For information on how to configure and tune the solver, please see the documentation for optlang project and note that model.solver is simply an optlang object of class Model.",
"type(model.solver)",
"Internal solver interfaces\nCobrapy also contains its own solver interfaces but these are now deprecated and will be removed completely in the near future. For documentation of how to use these, please refer to older documentation."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
DB2-Samples/db2jupyter
|
Db2 Compatibility Features.ipynb
|
apache-2.0
|
[
"<a id=\"top\"></a>\nDb2 Compatibility Features\nMoving from one database vendor to another can sometimes be difficult due to syntax differences between data types, functions, and language elements. Db2 already has a high degree of compatibility with Oracle PLSQL along with some of the Oracle data types. \nDb2 11 introduces some additional data type and function compatibility that will reduce some of the migration effort required when porting from other systems. There are some specific features within Db2 that are targeted at Netezza SQL and that is discussed in a separate section.",
"%run db2.ipynb",
"We populate the database with the EMPLOYEE and DEPARTMENT tables so that we can run the various examples.",
"%sql -sampledata",
"Table of Contents\n\n\nOuter Join Operator\n\n\nCHAR datatype size increase\n\n\nBinary Data Type\n\n\nBoolean Data Type\n\n\nSynonyms for Data Types\n\n\nFunction Synonymns\n\n\nNetezza Compatibility\n\n\nSelect Enhancements\n\n\nHexadecimal Functions\n\n\nTable Creation with Data\n\n\n<a id='outer'></a>\nOuter Join Operator\nDb2 allows the use of the Oracle outer-join operator when Oracle compatibility is turned on within a database. In Db2 11, the outer join operator is available by default and does not require the DBA to turn on Oracle compatibility.\nDb2 supports standard join syntax for LEFT and RIGHT OUTER JOINS.\nHowever, there is proprietary syntax used by Oracle employing a keyword: \"(+)\"\nto mark the \"null-producing\" column reference that precedes it in an\nimplicit join notation. That is (+) appears in the WHERE clause and\nrefers to a column of the inner table in a left outer join.\nFor instance:\nPython \nSELECT * FROM T1, T2\nWHERE T1.C1 = T2.C2 (+)\nIs the same as:\nPython\nSELECT * FROM T1 LEFT OUTER JOIN T2 ON T1.C1 = T2.C2\nIn this example, we get list of departments and their employees, as\nwell as the names of departments who have no employees.\nThis example uses the standard Db2 syntax.",
"%%sql -a\nSELECT DEPTNAME, LASTNAME FROM\n DEPARTMENT D LEFT OUTER JOIN EMPLOYEE E\n ON D.DEPTNO = E.WORKDEPT",
"This example works in the same manner as the last one, but uses\nthe \"+\" sign syntax. The format is a lot simpler to remember than OUTER JOIN\nsyntax, but it is not part of the SQL standard.",
"%%sql\nSELECT DEPTNAME, LASTNAME FROM\n DEPARTMENT D, EMPLOYEE E\nWHERE D.DEPTNO = E.WORKDEPT (+)",
"Back to Top\n<a id='char'></a>\nCHAR Datatype Size Increase\nThe CHAR datatype was limited to 254 characters in prior releases of Db2. In Db2 11, the limit has been increased\nto 255 characters to bring it in line with other SQL implementations.\nFirst we drop the table if it already exists.",
"%%sql -q\nDROP TABLE LONGER_CHAR;\n \nCREATE TABLE LONGER_CHAR\n (\n NAME CHAR(255)\n );",
"Back to Top\n<a id='binary'></a>\nBinary Data Types\nDb2 11 introduces two new binary data types: BINARY and VARBINARY. These two data types can contain any combination \nof characters or binary values and are not affected by the codepage of the server that the values are stored on.\nA BINARY data type is fixed and can have a maximum length of 255 bytes, while a VARBINARY column can contain up to \n32672 bytes. Each of these data types is compatible with columns created with the FOR BIT DATA keyword.\nThe BINARY data type will reduce the amount of conversion required from other data bases. Although binary data was supported with the FOR BIT DATA clause on a character column, it required manual DDL changes when migrating a table definition.\nThis example shows the creation of the three types of binary data types.",
"%%sql -q\nDROP TABLE HEXEY;\n\nCREATE TABLE HEXEY\n (\n AUDIO_SHORT BINARY(255),\n AUDIO_LONG VARBINARY(1024),\n AUDIO_CHAR VARCHAR(255) FOR BIT DATA\n );",
"Inserting data into a binary column can be done through the use of BINARY functions, or the use of X'xxxx' modifiers when using the VALUE clause. For fixed strings you use the X'00' format to specify a binary value and BX'00' for variable length binary strings. For instance, the following SQL will insert data into the previous table that was created.",
"%%sql\nINSERT INTO HEXEY VALUES\n (BINARY('Hello there'), \n BX'2433A5D5C1', \n VARCHAR_BIT_FORMAT(HEX('Hello there')));\n\nSELECT * FROM HEXEY;",
"Handling binary data with a FOR BIT DATA column was sometimes tedious, so the BINARY columns will make coding a little simpler. You can compare and assign values between any of these types of columns. The next SQL statement will update the AUDIO_CHAR column with the contents of the AUDIO_SHORT column. Then the SQL will test to make sure they are the same value.",
"%%sql\nUPDATE HEXEY \n SET AUDIO_CHAR = AUDIO_SHORT",
"We should have one record that is equal.",
"%%sql\nSELECT COUNT(*) FROM HEXEY WHERE\n AUDIO_SHORT = AUDIO_CHAR",
"Back to Top\n<a id='boolean'></a>\nBoolean Data Type\nThe boolean data type (true/false) has been available in SQLPL and PL/SQL scripts for some time. However,\nthe boolean data type could not be used in a table definition. Db2 11 FP1 now allows you to use this\ndata type in a table definition and use TRUE/FALSE clauses to compare values.\nThis simple table will be used to demonstrate how BOOLEAN types can be used.",
"%%sql -q\nDROP TABLE TRUEFALSE;\n\nCREATE TABLE TRUEFALSE (\n EXAMPLE INT,\n STATE BOOLEAN\n);",
"The keywords for a true value are TRUE, 'true', 't', 'yes', 'y', 'on', and '1'. For false the values are\nFALSE, 'false', 'f', 'no', 'n', and '0'.",
"%%sql\nINSERT INTO TRUEFALSE VALUES\n (1, TRUE), \n (2, FALSE),\n (3, 0),\n (4, 't'),\n (5, 'no')",
"Now we can check to see what has been inserted into the table.",
"%sql SELECT * FROM TRUEFALSE",
"Retrieving the data in a SELECT statement will return an integer value for display purposes.\n1 is true and 0 is false (binary 1 and 0).\nComparison operators with BOOLEAN data types will use TRUE, FALSE, 1 or 0 or any of the supported binary values. You have the choice of using the equal (=) operator or the IS or IS NOT syntax as shown in the following SQL.",
"%%sql\nSELECT * FROM TRUEFALSE\n WHERE STATE = TRUE OR STATE = 1 OR STATE = 'on' OR STATE IS TRUE",
"Back to Top\n<a id='synonyms'></a>\nSynonym Data types\nDb2 has the standard data types that most developers are familiar with, like CHAR, INTEGER, and DECIMAL. There are other SQL implementations that use different names for these data types, so Db2 11 now allows these data types as syonomys for the base types.\nThese data types are:\n|Type |Db2 Equivalent\n|:----- |:-------------\n|INT2 |SMALLINT\n|INT4 |INTEGER\n|INT8 |BIGINT\n|FLOAT4 |REAL\n|FLOAT8 |FLOAT\nThe following SQL will create a table with all of these data types.",
"%%sql -q\nDROP TABLE SYNONYM_EMPLOYEE;\n\nCREATE TABLE SYNONYM_EMPLOYEE\n (\n NAME VARCHAR(20),\n SALARY INT4,\n BONUS INT2,\n COMMISSION INT8,\n COMMISSION_RATE FLOAT4,\n BONUS_RATE FLOAT8\n );",
"When you create a table with these other data types, Db2 does not use these \"types\" in the catalog. What Db2 will do is use the Db2 type instead of these synonym types. What this means is that if you describe the contents of a table, \nyou will see the Db2 types displayed, not these synonym types.",
"%%sql\nSELECT DISTINCT(NAME), COLTYPE, LENGTH FROM SYSIBM.SYSCOLUMNS \n WHERE TBNAME='SYNONYM_EMPLOYEE' AND TBCREATOR=CURRENT USER",
"Back to Top\n<a id='function'></a>\nFunction Name Compatibility\nDb2 has a wealth of built-in functions that are equivalent to competitive functions, but with a different name. In\nDb2 11, these alternate function names are mapped to the Db2 function so that there is no re-write of the function\nname required. This first SQL statement generates some data required for the statistical functions.\nGenerate Linear Data\nThis command generates X,Y coordinate pairs in the xycoord table that are based on the\nfunction y = 2x + 5. Note that the table creation uses Common Table Expressions\nand recursion to generate the data!",
"%%sql -q\nDROP TABLE XYCOORDS;\n\nCREATE TABLE XYCOORDS\n (\n X INT,\n Y INT\n );\n \nINSERT INTO XYCOORDS\n WITH TEMP1(X) AS\n (\n VALUES (0)\n UNION ALL\n SELECT X+1 FROM TEMP1 WHERE X < 10\n )\n SELECT X, 2*X + 5\n FROM TEMP1;",
"COVAR_POP is an alias for COVARIANCE",
"%%sql\nSELECT 'COVAR_POP', COVAR_POP(X,Y) FROM XYCOORDS\nUNION ALL\nSELECT 'COVARIANCE', COVARIANCE(X,Y) FROM XYCOORDS",
"VAR_POP is an alias for VARIANCE",
"%%sql\nSELECT 'STDDEV_POP', STDDEV_POP(X) FROM XYCOORDS\nUNION ALL\nSELECT 'STDDEV', STDDEV(X) FROM XYCOORDS",
"VAR_SAMP is an alias for VARIANCE_SAMP",
"%%sql\nSELECT 'VAR_SAMP', VAR_SAMP(X) FROM XYCOORDS\nUNION ALL\nSELECT 'VARIANCE_SAMP', VARIANCE_SAMP(X) FROM XYCOORDS",
"ISNULL, NOTNULL is an alias for IS NULL, IS NOT NULL",
"%%sql\nWITH EMP(LASTNAME, WORKDEPT) AS\n (\n VALUES ('George','A01'),\n ('Fred',NULL),\n ('Katrina','B01'),\n ('Bob',NULL)\n )\nSELECT * FROM EMP WHERE \n WORKDEPT ISNULL",
"LOG is an alias for LN",
"%%sql\nVALUES ('LOG',LOG(10))\nUNION ALL\nVALUES ('LN', LN(10))",
"RANDOM is an alias for RAND\nNotice that the random number that is generated for the two calls results in a different value! This behavior is the \nnot the same with timestamps, where the value is calculated once during the execution of the SQL.",
"%%sql\nVALUES ('RANDOM', RANDOM())\nUNION ALL\nVALUES ('RAND', RAND())",
"STRPOS is an alias for POSSTR",
"%%sql\nVALUES ('POSSTR',POSSTR('Hello There','There'))\nUNION ALL\nVALUES ('STRPOS',STRPOS('Hello There','There'))",
"STRLEFT is an alias for LEFT",
"%%sql\nVALUES ('LEFT',LEFT('Hello There',5))\nUNION ALL\nVALUES ('STRLEFT',STRLEFT('Hello There',5))",
"STRRIGHT is an alias for RIGHT",
"%%sql\nVALUES ('RIGHT',RIGHT('Hello There',5))\nUNION ALL\nVALUES ('STRRIGHT',STRRIGHT('Hello There',5))",
"Additional Synonyms\nThere are a couple of additional keywords that are synonyms for existing Db2 functions. The list below includes only\nthose features that were introduced in Db2 11.\n|Keyword | Db2 Equivalent\n|:------------| :-----------------------------\n|BPCHAR | VARCHAR (for casting function)\n|DISTRIBUTE ON| DISTRIBUTE BY\nBack to Top\n<a id='netezza'></a>\nNetezza Compatibility\nDb2 provides features that enable applications that were written for a Netezza Performance Server (NPS) \ndatabase to use a Db2 database without having to be rewritten.\nThe SQL_COMPAT global variable is used to activate the following optional NPS compatibility features:\n\nDouble-dot notation - When operating in NPS compatibility mode, you can use double-dot notation to specify a database object.\nTRANSLATE parameter syntax - The syntax of the TRANSLATE parameter depends on whether NPS compatibility mode is being used.\nOperators - Which symbols are used to represent operators in expressions depends on whether NPS compatibility mode is being used.\nGrouping by SELECT clause columns - When operating in NPS compatibility mode, you can specify the ordinal position or exposed name of a SELECT clause column when grouping the results of a query.\nRoutines written in NZPLSQL - When operating in NPS compatibility mode, the NZPLSQL language can be used in addition to the SQL PL language.\n\nSpecial Characters\nA quick review of Db2 special characters. Before we change the behavior of Db2, we need to understand\nwhat some of the special characters do. The following SQL shows how some of the special characters\nwork. Note that the HASH/POUND sign (#) has no meaning in Db2.",
"%%sql\nWITH SPECIAL(OP, DESCRIPTION, EXAMPLE, RESULT) AS\n (\n VALUES \n (' | ','OR ', '2 | 3 ', 2 | 3),\n (' & ','AND ', '2 & 3 ', 2 & 3),\n (' ^ ','XOR ', '2 ^ 3 ', 2 ^ 3),\n (' ~ ','COMPLEMENT', '~2 ', ~2),\n (' # ','NONE ', ' ',0)\n )\nSELECT * FROM SPECIAL",
"If we turn on NPS compatibility, you see a couple of special characters change behavior. Specifically the \n^ operator becomes a \"power\" operator, and the # becomes an XOR operator.",
"%%sql\nSET SQL_COMPAT = 'NPS';\nWITH SPECIAL(OP, DESCRIPTION, EXAMPLE, RESULT) AS\n (\n VALUES \n (' | ','OR ', '2 | 3 ', 2 | 3),\n (' & ','AND ', '2 & 3 ', 2 & 3),\n (' ^ ','POWER ', '2 ^ 3 ', 2 ^ 3),\n (' ~ ','COMPLIMENT', '~2 ', ~2),\n (' # ','XOR ', '2 # 3 ', 2 # 3)\n )\nSELECT * FROM SPECIAL;",
"GROUP BY Ordinal Location\nThe GROUP BY command behavior also changes in NPS mode. The following SQL statement groups results\nusing the default Db2 syntax:",
"%%sql\nSET SQL_COMPAT='DB2';\n\nSELECT WORKDEPT,INT(AVG(SALARY)) \n FROM EMPLOYEE\nGROUP BY WORKDEPT;",
"If you try using the ordinal location (similar to an ORDER BY clause), you will\nget an error message.",
"%%sql\nSELECT WORKDEPT, INT(AVG(SALARY))\n FROM EMPLOYEE\nGROUP BY 1;",
"If NPS compatibility is turned on then then you use the GROUP BY clause with an ordinal location.",
"%%sql\nSET SQL_COMPAT='NPS';\nSELECT WORKDEPT, INT(AVG(SALARY))\n FROM EMPLOYEE\nGROUP BY 1;",
"TRANSLATE Function\nThe translate function syntax in Db2 is: \nPython\nTRANSLATE(expression, to_string, from_string, padding)\nThe TRANSLATE function returns a value in which one or more characters in a string expression might \nhave been converted to other characters. The function converts all the characters in char-string-exp\nin from-string-exp to the corresponding characters in to-string-exp or, if no corresponding characters exist, \nto the pad character specified by padding.\nIf no parameters are given to the function, the original string is converted to uppercase.\nIn NPS mode, the translate syntax is: \nPython\nTRANSLATE(expression, from_string, to_string)\nIf a character is found in the from string, and there is no corresponding character in the to string, it is removed. If it was using Db2 syntax, the padding character would be used instead.\nNote: If ORACLE compatibility is ON then the behavior of TRANSLATE is identical to NPS mode.\nThis first example will uppercase the string.",
"%%sql \nSET SQL_COMPAT = 'NPS';\nVALUES TRANSLATE('Hello');",
"In this example, the letter 'o' will be replaced with an '1'.",
"%sql VALUES TRANSLATE('Hello','o','1')",
"Note that you could replace more than one character by expanding both the \"to\" and \"from\" strings. This\nexample will replace the letter \"e\" with an \"2\" as well as \"o\" with \"1\".",
"%sql VALUES TRANSLATE('Hello','oe','12')",
"Translate will also remove a character if it is not in the \"to\" list.",
"%sql VALUES TRANSLATE('Hello','oel','12')",
"Reset the behavior back to Db2 mode.",
"%sql SET SQL_COMPAT='DB2'",
"Back to Top\n<a id='select'></a>\nSELECT Enhancements\nDb2 has the ability to limit the amount of data retrieved on a SELECT statement\nthrough the use of the FETCH FIRST n ROWS ONLY clause. In Db2 11, the ability to offset \nthe rows before fetching was added to the FETCH FIRST clause.\nSimple SQL with Fetch First Clause\nThe FETCH first clause can be used in a variety of locations in a SELECT clause. This\nfirst example fetches only 10 rows from the EMPLOYEE table.",
"%%sql\nSELECT LASTNAME FROM EMPLOYEE\n FETCH FIRST 5 ROWS ONLY",
"You can also add ORDER BY and GROUP BY clauses in the SELECT statement. Note that\nDb2 still needs to process all of the records and do the ORDER/GROUP BY work\nbefore limiting the answer set. So you are not getting the first 5 rows \"sorted\". You \nare actually getting the entire answer set sorted before retrieving just 5 rows.",
"%%sql\nSELECT LASTNAME FROM EMPLOYEE\n ORDER BY LASTNAME\n FETCH FIRST 5 ROWS ONLY",
"Here is an example with the GROUP BY statement. This first SQL statement gives us the total\nanswer set - the count of employees by WORKDEPT.",
"%%sql\nSELECT WORKDEPT, COUNT(*) FROM EMPLOYEE\n GROUP BY WORKDEPT\n ORDER BY WORKDEPT",
"Adding the FETCH FIRST clause only reduces the rows returned, not the rows that\nare used to compute the GROUPing result.",
"%%sql\nSELECT WORKDEPT, COUNT(*) FROM EMPLOYEE\n GROUP BY WORKDEPT\n ORDER BY WORKDEPT\n FETCH FIRST 5 ROWS ONLY",
"OFFSET Extension\nThe FETCH FIRST n ROWS ONLY clause can also include an OFFSET keyword. The OFFSET keyword \nallows you to retrieve the answer set after skipping \"n\" number of rows. The syntax of the OFFSET\nkeyword is:\nPython\nOFFSET n ROWS FETCH FIRST x ROWS ONLY\nThe OFFSET n ROWS must precede the FETCH FIRST x ROWS ONLY clause. The OFFSET clause can be used to \nscroll down an answer set without having to hold a cursor. For instance, you could have the \nfirst SELECT call request 10 rows by just using the FETCH FIRST clause. After that you could\nrequest the first 10 rows be skipped before retrieving the next 10 rows. \nThe one thing you must be aware of is that that answer set could change between calls if you use\nthis technique of a \"moving\" window. If rows are updated or added after your initial query you may\nget different results. This is due to the way that Db2 adds rows to a table. If there is a DELETE and then\nan INSERT, the INSERTed row may end up in the empty slot. There is no guarantee of the order of retrieval. For\nthis reason you are better off using an ORDER by to force the ordering although this too won't always prevent \nrows changing positions.\nHere are the first 10 rows of the employee table (not ordered).",
"%%sql\nSELECT LASTNAME FROM EMPLOYEE\n FETCH FIRST 10 ROWS ONLY",
"You can specify a zero offset to begin from the beginning.",
"%%sql\nSELECT LASTNAME FROM EMPLOYEE\n OFFSET 0 ROWS\n FETCH FIRST 10 ROWS ONLY",
"Now we can move the answer set ahead by 5 rows and get the remaining\n5 rows in the answer set.",
"%%sql\nSELECT LASTNAME FROM EMPLOYEE\n OFFSET 5 ROWS\n FETCH FIRST 5 ROWS ONLY",
"FETCH FIRST and OFFSET in SUBSELECTs\nThe FETCH FIRST/OFFSET clause is not limited to regular SELECT statements. You can also \nlimit the number of rows that are used in a subselect. In this case you are limiting the amount of\ndata that Db2 will scan when determining the answer set.\nFor instance, say you wanted to find the names of the employees who make more than the\naverage salary of the 3rd highest paid department. (By the way, there are multiple ways to \ndo this, but this is one approach).\nThe first step is to determine what the average salary is of all departments.",
"%%sql\nSELECT WORKDEPT, AVG(SALARY) FROM EMPLOYEE\nGROUP BY WORKDEPT\nORDER BY AVG(SALARY) DESC;",
"We only want one record from this list (the third one), so we can use the FETCH FIRST clause with\nan OFFSET to get the value we want (Note: we need to skip 2 rows to get to the 3rd one).",
"%%sql\nSELECT WORKDEPT, AVG(SALARY) FROM EMPLOYEE\nGROUP BY WORKDEPT\nORDER BY AVG(SALARY) DESC\nOFFSET 2 ROWS FETCH FIRST 1 ROWS ONLY",
"And here is the list of employees that make more than the average salary of the 3rd highest department in the \ncompany.",
"%%sql\nSELECT LASTNAME, SALARY FROM EMPLOYEE\n WHERE\n SALARY > (\n SELECT AVG(SALARY) FROM EMPLOYEE\n GROUP BY WORKDEPT\n ORDER BY AVG(SALARY) DESC\n OFFSET 2 ROWS FETCH FIRST 1 ROW ONLY\n )\nORDER BY SALARY",
"Alternate Syntax for FETCH FIRST\nThe FETCH FIRST n ROWS ONLY and OFFSET clause can also be specified using a simpler LIMIT/OFFSET syntax.\nThe LIMIT clause and the equivalent FETCH FIRST syntax are shown below.\n|Syntax |Equivalent\n|:-----------------|:-----------------------------\n|LIMIT x |FETCH FIRST x ROWS ONLY\n|LIMIT x OFFSET y |OFFSET y ROWS FETCH FIRST x ROWS ONLY\n|LIMIT y,x |OFFSET y ROWS FETCH FIRST x ROWS ONLY\nThe previous examples are rewritten using the LIMIT clause.\nWe can use the LIMIT clause with an OFFSET to get the value we want from the table.",
"%%sql\nSELECT WORKDEPT, AVG(SALARY) FROM EMPLOYEE\nGROUP BY WORKDEPT\nORDER BY AVG(SALARY) DESC\nLIMIT 1 OFFSET 2",
"Here is the list of employees that make more than the average salary of the 3rd highest department in the \ncompany. Note that the LIMIT clause specifies only the offset (LIMIT x) or the offset and limit (LIMIT y,x) when you do not use the LIMIT keyword. One would think that LIMIT x OFFSET y would translate into LIMIT x,y but that is not the case. Don't try to figure out the SQL standards reasoning behind the syntax!",
"%%sql\nSELECT LASTNAME, SALARY FROM EMPLOYEE\n WHERE\n SALARY > (\n SELECT AVG(SALARY) FROM EMPLOYEE\n GROUP BY WORKDEPT\n ORDER BY AVG(SALARY) DESC\n LIMIT 2,1 \n )\nORDER BY SALARY",
"Back to Top\n<a id='hexadecimal'></a>\nHexadecimal Functions\nA number of new HEX manipulation functions have been added to Db2 11. There are a class of functions\nthat manipulate different size integers (SMALL, INTEGER, BIGINT) using NOT, OR, AND, and XOR. In addition to\nthese functions, there are a number of functions that display and convert values into hexadecimal values.\nINTN Functions\nThe INTN functions are bitwise functions that operate on the \"two's complement\" representation of \nthe integer value of the input arguments and return the result as a corresponding base 10 integer value.\nThe function names all include the size of the integers that are being manipulated:\n\nN = 2 (Smallint), 4 (Integer), 8 (Bigint)\n\nThere are four functions:\n\nINTNAND - Performs a bitwise AND operation, 1 only if the corresponding bits in both arguments are 1\nINTNOR - Performs a bitwise OR operation, 1 unless the corresponding bits in both arguments are zero\nINTNXOR Performs a bitwise exclusive OR operation, 1 unless the corresponding bits in both arguments are the same\nINTNNOT - Performs a bitwise NOT operation, opposite of the corresponding bit in the argument\n\nSix variables will be created to use in the examples. The X/Y values will be set to X=1 (01) and Y=3 (11) \nand different sizes to show how the functions work.",
"%%sql -q\nDROP VARIABLE XINT2; \nDROP VARIABLE YINT2;\nDROP VARIABLE XINT4;\nDROP VARIABLE YINT4;\nDROP VARIABLE XINT8; \nDROP VARIABLE YINT8;\nCREATE VARIABLE XINT2 INT2 DEFAULT(1);\nCREATE VARIABLE YINT2 INT2 DEFAULT(3);\nCREATE VARIABLE XINT4 INT4 DEFAULT(1);\nCREATE VARIABLE YINT4 INT4 DEFAULT(3);\nCREATE VARIABLE XINT8 INT8 DEFAULT(1);\nCREATE VARIABLE YINT8 INT8 DEFAULT(3);",
"This example will show the four functions used against SMALLINT (INT2) data types.",
"%%sql\nWITH LOGIC(EXAMPLE, X, Y, RESULT) AS\n (\n VALUES\n ('INT2AND(X,Y)',XINT2,YINT2,INT2AND(XINT2,YINT2)),\n ('INT2OR(X,Y) ',XINT2,YINT2,INT2OR(XINT2,YINT2)),\n ('INT2XOR(X,Y)',XINT2,YINT2,INT2XOR(XINT2,YINT2)),\n ('INT2NOT(X) ',XINT2,YINT2,INT2NOT(XINT2))\n )\nSELECT * FROM LOGIC",
"This example will use the 4 byte (INT4) data type.",
"%%sql\nWITH LOGIC(EXAMPLE, X, Y, RESULT) AS\n (\n VALUES\n ('INT4AND(X,Y)',XINT4,YINT4,INT4AND(XINT4,YINT4)),\n ('INT4OR(X,Y) ',XINT4,YINT4,INT4OR(XINT4,YINT4)),\n ('INT4XOR(X,Y)',XINT4,YINT4,INT4XOR(XINT4,YINT4)),\n ('INT4NOT(X) ',XINT4,YINT4,INT4NOT(XINT4))\n )\nSELECT * FROM LOGIC",
"Finally, the INT8 data type is used in the SQL. Note that you can mix and match the INT2, INT4, and INT8 values\nin these functions but you may get truncation if the value is too big.",
"%%sql\nWITH LOGIC(EXAMPLE, X, Y, RESULT) AS\n (\n VALUES\n ('INT8AND(X,Y)',XINT8,YINT8,INT8AND(XINT8,YINT8)),\n ('INT8OR(X,Y) ',XINT8,YINT8,INT8OR(XINT8,YINT8)),\n ('INT8XOR(X,Y)',XINT8,YINT8,INT8XOR(XINT8,YINT8)),\n ('INT8NOT(X) ',XINT8,YINT8,INT8NOT(XINT8))\n )\nSELECT * FROM LOGIC",
"TO_HEX Function\nThe TO_HEX function converts a numeric expression into a character hexadecimal representation. For example, the\nnumeric value 255 represents x'FF'. The value returned from this function is a VARCHAR value and its \nlength depends on the size of the number you supply.",
"%sql VALUES TO_HEX(255)",
"RAWTOHEX Function\nThe RAWTOHEX function returns a hexadecimal representation of a value as a character string. The\nresult is a character string itself.",
"%sql VALUES RAWTOHEX('Hello')",
"The string \"00\" converts to a hex representation of x'3030' which is 12336 in Decimal.\nSo the TO_HEX function would convert this back to the HEX representation.",
"%sql VALUES TO_HEX(12336)",
"The string that is returned by the RAWTOHEX function should be the same.",
"%sql VALUES RAWTOHEX('00'); ",
"Back to Top\n<a id=\"create\"><a/>\nTable Creation Extensions\nThe CREATE TABLE statement can now use a SELECT clause to generate the definition and LOAD the data\nat the same time. \nCreate Table Syntax\nThe syntax of the CREATE table statement has been extended with the AS (SELECT ...) WITH DATA clause:\nPython\nCREATE TABLE <name> AS (SELECT ...) [ WITH DATA | DEFINITION ONLY ]\nThe table definition will be generated based on the SQL statement that you specify. The column names\nare derived from the columns that are in the SELECT list and can only be changed by specifying the columns names\nas part of the table name: EMP(X,Y,Z,...) AS (...).\nFor example, the following SQL will fail because a column list was not provided:",
"%sql -q DROP TABLE AS_EMP\n%sql CREATE TABLE AS_EMP AS (SELECT EMPNO, SALARY+BONUS FROM EMPLOYEE) DEFINITION ONLY;",
"You can name a column in the SELECT list or place it in the table definition.",
"%sql -q DROP TABLE AS_EMP\n%sql CREATE TABLE AS_EMP AS (SELECT EMPNO, SALARY+BONUS AS PAY FROM EMPLOYEE) DEFINITION ONLY;",
"You can check the SYSTEM catalog to see the table definition.",
"%%sql\nSELECT DISTINCT(NAME), COLTYPE, LENGTH FROM SYSIBM.SYSCOLUMNS \n WHERE TBNAME='AS_EMP' AND TBCREATOR=CURRENT USER",
"The DEFINITION ONLY clause will create the table but not load any data into it. Adding the WITH DATA \nclause will do an INSERT of rows into the newly created table. If you have a large amount of data \nto load into the table you may be better off creating the table with DEFINITION ONLY and then using LOAD\nor other methods to load the data into the table.",
"%sql -q DROP TABLE AS_EMP\n%sql CREATE TABLE AS_EMP AS (SELECT EMPNO, SALARY+BONUS AS PAY FROM EMPLOYEE) WITH DATA;",
"The SELECT statement can be very sophisticated. It can do any type of calculation or limit the\ndata to a subset of information.",
"%%sql -q\nDROP TABLE AS_EMP;\nCREATE TABLE AS_EMP(LAST,PAY) AS \n (\n SELECT LASTNAME, SALARY FROM EMPLOYEE \n WHERE WORKDEPT='D11'\n FETCH FIRST 3 ROWS ONLY\n ) WITH DATA;",
"You can also use the OFFSET clause as part of the FETCH FIRST ONLY to get chunks of data from the\noriginal table.",
"%%sql -q\nDROP TABLE AS_EMP;\nCREATE TABLE AS_EMP(DEPARTMENT, LASTNAME) AS \n (SELECT WORKDEPT, LASTNAME FROM EMPLOYEE\n OFFSET 5 ROWS\n FETCH FIRST 10 ROWS ONLY\n ) WITH DATA;\nSELECT * FROM AS_EMP;",
"Back to Top\nCredits: IBM 2018, George Baklarz [baklarz@ca.ibm.com]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
phoebe-project/phoebe2-docs
|
2.3/examples/detached_rotstar.ipynb
|
gpl-3.0
|
[
"Detached Binary: Roche vs Rotstar\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).",
"#!pip install -I \"phoebe>=2.3,<2.4\"",
"As always, let's do imports and initialize a logger and a new bundle.",
"import phoebe\nfrom phoebe import u # units\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nlogger = phoebe.logger()\n\nb = phoebe.default_binary()",
"Adding Datasets\nNow we'll create an empty mesh dataset at quarter-phase so we can compare the difference between using roche and rotstar for deformation potentials:",
"b.add_dataset('mesh', compute_times=[0.75], dataset='mesh01')",
"Running Compute\nLet's set the radius of the primary component to be large enough to start to show some distortion when using the roche potentials.",
"b['requiv@primary@component'] = 1.8",
"Now we'll compute synthetics at the times provided using the default options",
"b.run_compute(irrad_method='none', distortion_method='roche', model='rochemodel')\n\nb.run_compute(irrad_method='none', distortion_method='rotstar', model='rotstarmodel')",
"Plotting",
"afig, mplfig = b.plot(model='rochemodel',show=True)\n\nafig, mplfig = b.plot(model='rotstarmodel',show=True)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
vpenso/scripts
|
docs/code/python/pandas.ipynb
|
gpl-3.0
|
[
"Pandas\nRead the input data for all Netrunner cards:\nhttps://github.com/Alsciende/netrunner-cards-json",
"import glob\nimport pandas\n\n# read all files from pack/\nfiles = glob.glob('pack/*.json')\npacks = []\nfor file in files:\n packs.append(pandas.read_json(file,orient='records'))\n\n# Use Pandas for data analysis\ncards = pandas.concat(packs,sort=False)\n\n# use the unique card identifier as index\ncards.set_index('code',inplace=True)\n\n# remove duplicate cards (i.e. from the Core sets)\ncards.drop_duplicates('title',inplace=True)",
"README.md describes the card JSON schema.\nCards\nList all card titles:",
"# read columns card title and type, sort by title\ncards[['title','type_code']].sort_values(by='title').head(10)",
"Find a specific card",
"# do not truncate strings\npandas.set_option('display.max_colwidth', -1)\n\ncards[cards['title'].str.match('Noise')][['type_code','faction_code','title','text']]",
"Card Types\nList all card types, and the number of cards for a given type",
"cards['type_code'].value_counts()\n\ncards['type_code'].value_counts().plot(kind='bar')",
"By Faction\nSelect a specific card-type and count the cards per faction",
"programs = cards[cards['type_code'] == 'program']\nprograms['faction_code'].value_counts()",
"ICE with faction and keywords",
"ice = cards[cards['type_code'] == 'ice']\nice[['title','faction_code','keywords']].head(10)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
biocommons/hgvs
|
examples/using-hgvs.ipynb
|
apache-2.0
|
[
"Using hgvs\nThis notebook demonstrates major features of the hgvs package.",
"import hgvs\nhgvs.__version__",
"Variant I/O\nInitialize the parser",
"# You only need to do this once per process\nimport hgvs.parser\nhp = hgvsparser = hgvs.parser.Parser()",
"Parse a simple variant",
"v = hp.parse_hgvs_variant(\"NC_000007.13:g.21726874G>A\")\n\nv\n\nv.ac, v.type\n\nv.posedit\n\nv.posedit.pos\n\nv.posedit.pos.start",
"Parsing complex variants",
"v = hp.parse_hgvs_variant(\"NM_003777.3:c.13552_*36del57\")\n\nv.posedit.pos.start, v.posedit.pos.end\n\nv.posedit.edit",
"Formatting variants\nAll objects may be formatted simply by \"stringifying\" or printing them using str, print(), or \"\".format().",
"str(v)\n\nprint(v)\n\n\"{v} spans the CDS end\".format(v=v)",
"Projecting variants between sequences\nSet up a dataprovider\nMapping variants requires exon structures, alignments, CDS bounds, and raw sequence. These are provided by a hgvs.dataprovider instance. The only dataprovider provided with hgvs uses UTA. You may write your own by subsclassing hgvs.dataproviders.interface.",
"import hgvs.dataproviders.uta\nhdp = hgvs.dataproviders.uta.connect()",
"Initialize mapper classes\nThe VariantMapper class projects variants between two sequence accessions using alignments from a specified source. In order to use it, you must know that two sequences are aligned. VariantMapper isn't demonstrated here.\nAssemblyMapper builds on VariantMapper and handles identifying appropriate sequences. It is configured for a particular genome assembly.",
"import hgvs.variantmapper\n#vm = variantmapper = hgvs.variantmapper.VariantMapper(hdp)\nam37 = easyvariantmapper = hgvs.assemblymapper.AssemblyMapper(hdp, assembly_name='GRCh37')\nam38 = easyvariantmapper = hgvs.assemblymapper.AssemblyMapper(hdp, assembly_name='GRCh38')",
"c_to_g\nThis is the easiest case because there is typically only one alignment between a transcript and the genome. (Exceptions exist for pseudoautosomal regions.)",
"var_c = hp.parse_hgvs_variant(\"NM_015120.4:c.35G>C\")\nvar_g = am37.c_to_g(var_c)\nvar_g\n\nam38.c_to_g(var_c)",
"g_to_c\nIn order to project a genomic variant onto a transcript, you must tell the AssemblyMapper which transcript to use.",
"am37.relevant_transcripts(var_g)\n\nam37.g_to_c(var_g, \"NM_015120.4\")",
"c_to_p",
"var_p = am37.c_to_p(var_c)\nstr(var_p)\n\nvar_p.posedit.uncertain = False\nstr(var_p)",
"Projecting in the presence of a genome-transcript gap\nAs of Oct 2016, 1033 RefSeq transcripts in 433 genes have gapped alignments. These gaps require special handlingin order to maintain the correspondence of positions in an alignment. hgvs uses the precomputed alignments in UTA to correctly project variants in exons containing gapped alignments. \nThis example demonstrates projecting variants in the presence of a gap in the alignment of NM_015120.4 (ALMS1) with GRCh37 chromosome 2. (The alignment with GRCh38 is similarly gapped.) Specifically, the adjacent genomic positions 73613031 and 73613032 correspond to the non-adjacent CDS positions 35 and 39.\nNM_015120.4 c 15 > > 58\n NM_015120.4 n 126 > CCGGGCGAGCTGGAGGAGGAGGAG > 169\n ||||||||||| |||||||||| 21=3I20= \n NC_000002.11 g 73613021 > CCGGGCGAGCT---GGAGGAGGAG > 73613041\n NC_000002.11 g 73613021 < GGCCCGCTCGA---CCTCCTCCTC < 73613041",
"str(am37.c_to_g(hp.parse_hgvs_variant(\"NM_015120.4:c.35G>C\")))\n\nstr(am37.c_to_g(hp.parse_hgvs_variant(\"NM_015120.4:c.39G>C\")))",
"Normalizing variants\nIn hgvs, normalization means shifting variants 3' (as requried by the HGVS nomenclature) as well as rewriting variants. The variant \"NM_001166478.1:c.30_31insT\" is in a poly-T run (on the transcript). It should be shifted 3' and is better written as dup, as shown below:\n* NC_000006.11:g.49917127dupA\n NC_000006.11 g 49917117 > AGAAAGAAAAATAAAACAAAG > 49917137 \n NC_000006.11 g 49917117 < TCTTTCTTTTTATTTTGTTTC < 49917137 \n ||||||||||||||||||||| 21= \n NM_001166478.1 n 41 < TCTTTCTTTTTATTTTGTTTC < 21 NM_001166478.1:n.35dupT\n NM_001166478.1 c 41 < < 21 NM_001166478.1:c.30_31insT",
"import hgvs.normalizer\nhn = hgvs.normalizer.Normalizer(hdp)\n\nv = hp.parse_hgvs_variant(\"NM_001166478.1:c.30_31insT\")\nstr(hn.normalize(v))",
"A more complex normalization example\nThis example is based on https://github.com/biocommons/hgvs/issues/382/.\nNC_000001.11 g 27552104 > CTTCACACGCATCCTGACCTTG > 27552125\n NC_000001.11 g 27552104 < GAAGTGTGCGTAGGACTGGAAC < 27552125\n |||||||||||||||||||||| 22= \n NM_001029882.3 n 843 < GAAGTGTGCGTAGGACTGGAAC < 822 \n NM_001029882.3 c 12 < < -10 \n ^^ \n NM_001029882.3:c.1_2del\n NM_001029882.3:n.832_833delAT\n NC_000001.11:g.27552114_27552115delAT",
"am38.c_to_g(hp.parse_hgvs_variant(\"NM_001029882.3:c.1A>G\"))\n\nam38.c_to_g(hp.parse_hgvs_variant(\"NM_001029882.3:c.2T>G\"))\n\nam38.c_to_g(hp.parse_hgvs_variant(\"NM_001029882.3:c.1_2del\"))",
"The genomic coordinates for the SNVs at c.1 and c.2 match those for the del at c.1_2. Good!\nNow, notice what happens with c.1_3del, c.1_4del, and c.1_5del:",
"am38.c_to_g(hp.parse_hgvs_variant(\"NM_001029882.3:c.1_3del\"))\n\nam38.c_to_g(hp.parse_hgvs_variant(\"NM_001029882.3:c.1_4del\"))\n\nam38.c_to_g(hp.parse_hgvs_variant(\"NM_001029882.3:c.1_5del\"))",
"Explanation:\nOn the transcript, c.1_2delAT deletes AT from …AGGATGCG…, resulting in …AGGGCG…. There's no ambiguity about what sequence was actually deleted.\nc.1_3delATG deletes ATG, resulting in …AGGCG…. Note that you could also get this result by deleting GAT. This is an example of an indel that is subject to normalization and hgvs does this.\nc.1_4delATGC and 1_5delATGCG have similar behaviors.\nNormalization is always 3' with respect to the reference sequence. So, after projecting from a - strand transcript to the genome, normalization will go in the opposite direction to the transcript. It will have roughly the same effect as being 5' shifted on the transcript (but revcomp'd).\nFor more precise control, see the normalize and replace_reference options of AssemblyMapper.\nValidating variants\nhgvs.validator.Validator is a composite of two classes, hgvs.validator.IntrinsicValidator and hgvs.validator.ExtrinsicValidator. Intrinsic validation evaluates a given variant for internal consistency, such as requiring that insertions specify adjacent positions. Extrinsic validation evaluates a variant using external data, such as ensuring that the reference nucleotide in the variant matches that implied by the reference sequence and position. Validation returns True if successful, and raises an exception otherwise.",
"import hgvs.validator\nhv = hgvs.validator.Validator(hdp)\n\nhv.validate(hp.parse_hgvs_variant(\"NM_001166478.1:c.30_31insT\"))\n\nfrom hgvs.exceptions import HGVSError\n\ntry:\n hv.validate(hp.parse_hgvs_variant(\"NM_001166478.1:c.30_32insT\"))\nexcept HGVSError as e:\n print(e)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kevinsung/OpenFermion
|
docs/tutorials/circuits_3_arbitrary_basis_trotter.ipynb
|
apache-2.0
|
[
"Copyright 2020 The OpenFermion Developers",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Circuits 3: Low rank, arbitrary basis molecular simulations\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://quantumai.google/openfermion/tutorials/circuits_3_arbitrary_basis_trotter\"><img src=\"https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png\" />View on QuantumAI</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/quantumlib/OpenFermion/blob/master/docs/tutorials/circuits_3_arbitrary_basis_trotter.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/quantumlib/OpenFermion/blob/master/docs/tutorials/circuits_3_arbitrary_basis_trotter.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/github_logo_1x.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/OpenFermion/docs/tutorials/circuits_3_arbitrary_basis_trotter.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/download_icon_1x.png\" />Download notebook</a>\n </td>\n</table>\n\nSetup\nInstall the OpenFermion package:",
"try:\n import openfermion\nexcept ImportError:\n !pip install git+https://github.com/quantumlib/OpenFermion.git@master#egg=openfermion",
"Low rank decomposition of the Coulomb operator\nThe algorithm discussed in this tutorial is described in arXiv:1808.02625.\nIn Circuits 1 we discussed methods for very compiling single-particle basis transformations of fermionic operators in $O(N)$ depth on a linearly connected architecture. We looked at the particular example of simulating a free fermion model by using Bogoliubov transformations to diagonalize the model.\nIn Circuits 2 we discussed methods for compiling Trotter steps of electronic structure Hamiltonian in $O(N)$ depth on a linearly connected architecture when expressed in a basis diagonalizing the Coulomb operator so that\n$$\nH = \\sum_{pq} T_{pq} a^\\dagger_p a_q + \\sum_{pq} V_{pq} a^\\dagger_p a_p a^\\dagger_q a_q.\n$$\nHere we will discuss how both of those techniques can be combined, along with some insights from electronic structure, in order to simulate arbitrary basis molecular Hamiltonians taking the form\n$$\nH = \\sum_{pq} T_{pq} a^\\dagger_p a_q + \\sum_{pqrs} V_{pqrs} a^\\dagger_p a_q a^\\dagger_r a_s\n$$\nin depth scaling only as $O(N^2)$ on a linear array of qubits. First, we note that the one-body part of the above expression is easy to simulate using the techniques introduced in Circuits 1. Thus, the real challenge is to simulate the two-body part of the operator.\nWe begin with the observation that the rank-4 tensor $V$, with the values $V_{pqrs}$ representing the coefficient of $a^\\dagger_p a_q a^\\dagger_r a_s$ can be flattened into an $N^2 \\times N^2$ array by making $p,q$ one index and $r,s$ the other. This is the electronic repulsion integral (ERI) matrix in chemist notation. We will refer to the ERI matrix as $W$. By diagonalizing $W$, one obtains $W g_\\ell = \\lambda_\\ell g_\\ell$ where the eigenvector $g_\\ell$ is a vector of dimension $N^2$. If we reshape $g_\\ell$ into an $N \\times N$ vector, we realize that\n$$\n\\sum_{pqrs} V_{pqrs} a^\\dagger_p a_q a^\\dagger_r a_s = \\sum_{\\ell=0}^{L-1} \\lambda_\\ell \\left(\\sum_{pq} \\left[g_{\\ell}\\right]{pq} a^\\dagger_p a_q\\right)^2.\n$$\nThis is related to the concept of density fitting in electronic structure, which is often accomplished using a Cholesky decomposition. It is fairly well known in the quantum chemistry community that the ERI matrix is positive semi-definite and despite having linear dimension $N^2$, has rank of only $L = O(N)$. Thus, the eigenvalues $\\lambda\\ell$ are positive and there are only $O(N)$ of them.\nNext, we diagonalize the one-body operators inside of the square so that\n$$\nR_\\ell \\left(\\sum_{pq} \\left[g_\\ell\\right]{pq} a^\\dagger_p a_q\\right) R\\ell^\\dagger = \\sum_{p} f_{\\ell p} a^\\dagger_p a_p\n$$\nwhere the $R_\\ell$ represent single-particle basis transformations of the sort we compiled in Circuits 1. 
Then,\n$$\n\\sum_{\\ell=0}^{L-1} \\lambda_\\ell \\left(\\sum_{pq} \\left[g_{\\ell}\\right]{pq} a^\\dagger_p a_q\\right)^2 =\n\\sum{\\ell=0}^{L-1} \\lambda_\\ell \\left(R_\\ell \\left(\\sum_{p} f_{\\ell p} a^\\dagger_p a_p\\right) R_\\ell^\\dagger\\right)^2 = \\sum_{\\ell=0}^{L-1} \\lambda_\\ell \\left(R_\\ell \\left(\\sum_{p} f_{\\ell p} a^\\dagger_p a_p\\right) R_\\ell^\\dagger R_\\ell \\left(\\sum_{p} f_{\\ell p} a^\\dagger_p a_p\\right) R_\\ell^\\dagger\\right)\n= \\sum_{\\ell=0}^{L-1} \\lambda_\\ell R_\\ell \\left(\\sum_{pq} f_{\\ell p} f_{\\ell q} a^\\dagger_p a_p a^\\dagger_q a_q\\right) R_\\ell^\\dagger.\n$$\nWe now see that we can simulate a Trotter step under the arbitrary basis two-body operator as\n$$\n\\prod_{\\ell=0}^{L-1} R_\\ell \\exp\\left(-i\\sum_{pq} f_{\\ell p} f_{\\ell q} a^\\dagger_p a_p a^\\dagger_q a_q\\right) R_\\ell^\\dagger\n$$\nwhere we note that the operator in the exponential take the form of a diagonal Coulomb operator. Since we can implement the $R_\\ell$ circuits in $O(N)$ depth (see Circuits 1) and we can implement Trotter steps under diagonal Coulomb operators in $O(N)$ layers of gates (see Circuits 2) we see that we can implement Trotter steps under arbitrary basis electronic structure Hamiltionians in $O(L N) = O(N^2)$ depth, and all on a linearly connected device. This is a big improvement over the usual way of doing things, which would lead to no less than $O(N^5)$ depth! In fact, it is also possible to do better by truncating rank on the second diagonalization but we have not implemented that (details will be discussed in aforementioned paper-in-preparation).\nNote that these techniques are also applicable to realizing evolution under other two-body operators, such as the generator of unitary coupled cluster. Note that one can create variational algorithms where a variational parameter specifies the rank at which to truncate the $\\lambda_\\ell$.\nExample implementation: Trotter steps of LiH in molecular orbital basis\nWe will now use these techniques to implement Trotter steps for an actual molecule. We will focus on LiH at equilibrium geometry, since integrals for that system are provided with every OpenFermion installation. However, by installing OpenFermion-PySCF or OpenFermion-Psi4 one can use these techniques for any molecule at any geometry. We will generate LiH in an active space consisting of 4 qubits. First, we obtain the Hamiltonian as an InteractionOperator.",
"import openfermion\n\n# Set Hamiltonian parameters for LiH simulation in active space.\ndiatomic_bond_length = 1.45\ngeometry = [('Li', (0., 0., 0.)), ('H', (0., 0., diatomic_bond_length))]\nbasis = 'sto-3g'\nmultiplicity = 1\nactive_space_start = 1\nactive_space_stop = 3\n\n# Generate and populate instance of MolecularData.\nmolecule = openfermion.MolecularData(geometry, basis, multiplicity, description=\"1.45\")\nmolecule.load()\n\n# Get the Hamiltonian in an active space.\nmolecular_hamiltonian = molecule.get_molecular_hamiltonian(\n occupied_indices=range(active_space_start),\n active_indices=range(active_space_start, active_space_stop))\nprint(openfermion.get_fermion_operator(molecular_hamiltonian))",
"We see from the above output that this is a fairly complex Hamiltonian already. Next we will use the simulate_trotter function from Circuits 1, but this time using a different type of Trotter step associated with these low rank techniques. To keep this circuit very short for pedagogical purposes we will force a truncation of the eigenvalues $\\lambda_\\ell$ at a predetermined value of final_rank. While we also support a canned LOW_RANK option for the Trotter steps, in order to pass this value of final_rank we will instantiate a custom Trotter algorithm type.",
"import cirq\nimport openfermion\nfrom openfermion.circuits import trotter\n\n# Trotter step parameters.\ntime = 1.\nfinal_rank = 2\n\n# Initialize circuit qubits in a line.\nn_qubits = openfermion.count_qubits(molecular_hamiltonian)\nqubits = cirq.LineQubit.range(n_qubits)\n\n# Compile the low rank Trotter step using OpenFermion.\ncustom_algorithm = trotter.LowRankTrotterAlgorithm(final_rank=final_rank)\ncircuit = cirq.Circuit(\n trotter.simulate_trotter(\n qubits, molecular_hamiltonian,\n time=time, omit_final_swaps=True,\n algorithm=custom_algorithm),\n strategy=cirq.InsertStrategy.EARLIEST)\n\n# Print circuit.\ncirq.drop_negligible_operations(circuit)\nprint(circuit.to_text_diagram(transpose=True))",
"We were able to print out the circuit this way but forcing final_rank of 2 is not very accurate. In the cell below, we compile the Trotter step with full rank so $L = N^2$ and depth is actually $O(N^3)$ and repeat the Trotter step multiple times to show that it actually converges to the correct result. Since we are not forcing the rank truncation we can use the built-in LOW_RANK Trotter step type. Note that the rank of the Coulomb operators is asymptotically $O(N)$ but for very small molecules in small basis sets only a few eigenvalues can be truncated.",
"# Initialize a random initial state.\nimport numpy\nrandom_seed = 8317\ninitial_state = openfermion.haar_random_vector(\n 2 ** n_qubits, random_seed).astype(numpy.complex64)\n\n# Numerically compute the correct circuit output.\nimport scipy\nhamiltonian_sparse = openfermion.get_sparse_operator(molecular_hamiltonian)\nexact_state = scipy.sparse.linalg.expm_multiply(\n -1j * time * hamiltonian_sparse, initial_state)\n\n# Trotter step parameters.\nn_steps = 3\n\n# Compile the low rank Trotter step using OpenFermion.\nqubits = cirq.LineQubit.range(n_qubits)\ncircuit = cirq.Circuit(\n trotter.simulate_trotter(\n qubits, molecular_hamiltonian,\n time=time, n_steps=n_steps,\n algorithm=trotter.LOW_RANK),\n strategy=cirq.InsertStrategy.EARLIEST)\n\n# Use Cirq simulator to apply circuit.\nsimulator = cirq.Simulator()\nresult = simulator.simulate(circuit, qubit_order=qubits, initial_state=initial_state)\nsimulated_state = result.final_state_vector\n\n# Print final fidelity.\nfidelity = abs(numpy.dot(simulated_state, numpy.conjugate(exact_state))) ** 2\nprint('Fidelity with exact result is {}.\\n'.format(fidelity))\n\n# Print circuit.\ncirq.drop_negligible_operations(circuit)\nprint(circuit.to_text_diagram(transpose=True))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gavruskin/microinteractions
|
taylor_lin_fit.ipynb
|
mit
|
[
"Microinteractions\nComputational pipeline for fitting linear regression model for interaction coefficients inference.\nThe pre-release version of this pipeline assumes the data to be in a very specific format.\nPlease contact alex@gavruskin.com if you wish to give it a try.\nInstructions for people working on this project\n\nClone this repository git clone https://github.com/gavruskin/microinteractions.git\ncd microinteractions\nPut the data files in this folder\nMake sure that read-out columns are called total_CFUs, DailyFecundity, Development, Survival\nMake sure that the column that enumerates combinations of bacteria is called treat\npython data_preprocess_total_CFUs.py\npython data_preprocess_DailyFecundity.py\npython data_preprocess_Development.py\npython data_preprocess_Survival.py\njupyter notebook\nRun all cells\n\nLoad dependencies:",
"import pandas as pd\nimport numpy as np\nimport statsmodels.formula.api as smf\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport sys\n\nfrom IPython.display import set_matplotlib_formats\nset_matplotlib_formats(\"png\", \"pdf\", \"svg\")\n\nmatplotlib.style.use('ggplot')\n%matplotlib inline",
"total_CFUs:\n\nRead the data and fit the model",
"data_total_CFUs = pd.read_csv(\"flygut_cfus_expts1345_totals_processed.csv\")\n\nlm_total_CFUs = smf.ols(formula=\"total_CFUs ~ a + a1 + a2 + a3 + a4 + a5 +\"\n \"b12 + b13 + b14 + b15 + b23 + b24 + b25 + b34 + b35 + b45 +\"\n \"c123 + c124 + c125 + c134 + c135 + c145 + c234 + c235 + c245 + c345 +\"\n \"d1234 + d1235 + d1245 + d1345 + d2345 + e12345\", data=data_total_CFUs).fit()",
"Output summary statistics",
"lm_total_CFUs.summary()",
"Plot inferred coefficients with confidence intervals",
"conf_int_total_CFUs = pd.DataFrame(lm_total_CFUs.conf_int())\nconf_int_total_CFUs[2] = [lm_total_CFUs.params.a,\n lm_total_CFUs.params.a,\n lm_total_CFUs.params.a1,\n lm_total_CFUs.params.a2,\n lm_total_CFUs.params.a3,\n lm_total_CFUs.params.a4,\n lm_total_CFUs.params.a5,\n lm_total_CFUs.params.b12,\n lm_total_CFUs.params.b13,\n lm_total_CFUs.params.b14,\n lm_total_CFUs.params.b15,\n lm_total_CFUs.params.b23,\n lm_total_CFUs.params.b24,\n lm_total_CFUs.params.b25,\n lm_total_CFUs.params.b34,\n lm_total_CFUs.params.b35,\n lm_total_CFUs.params.b45,\n lm_total_CFUs.params.c123,\n lm_total_CFUs.params.c124,\n lm_total_CFUs.params.c125,\n lm_total_CFUs.params.c134,\n lm_total_CFUs.params.c135,\n lm_total_CFUs.params.c145,\n lm_total_CFUs.params.c234,\n lm_total_CFUs.params.c235,\n lm_total_CFUs.params.c245,\n lm_total_CFUs.params.c345,\n lm_total_CFUs.params.d1234,\n lm_total_CFUs.params.d1235,\n lm_total_CFUs.params.d1245,\n lm_total_CFUs.params.d1345,\n lm_total_CFUs.params.d2345,\n lm_total_CFUs.params.e12345]\nconf_int_total_CFUs.columns = [\"95% conf. int. bottom\", \"95% conf. int. top\", \"coef\"]\n# Set Intercept and a to 0, as otherwise the rest of the plot vanishes.\nconf_int_total_CFUs[\"coef\"].Intercept = 0\nconf_int_total_CFUs[\"95% conf. int. bottom\"].Intercept = 0\nconf_int_total_CFUs[\"95% conf. int. top\"].Intercept = 0\nconf_int_total_CFUs[\"coef\"].a = 0\nconf_int_total_CFUs[\"95% conf. int. bottom\"].a = 0\nconf_int_total_CFUs[\"95% conf. int. top\"].a = 0\nconf_int_total_CFUs.plot.bar(figsize=(20,10))\n",
"DailyFecundity:\n\nRead the data and fit the model",
"data_DailyFecundity = pd.read_csv(\"DailyFecundityData_processed.csv\")\n\nlm_DailyFecundity = smf.ols(formula=\"DailyFecundity ~ a + a1 + a2 + a3 + a4 + a5 +\"\n \"b12 + b13 + b14 + b15 + b23 + b24 + b25 + b34 + b35 + b45 +\"\n \"c123 + c124 + c125 + c134 + c135 + c145 + c234 + c235 + c245 + c345 +\"\n \"d1234 + d1235 + d1245 + d1345 + d2345 + e12345\", data=data_DailyFecundity).fit()",
"Output summary statistics",
"lm_DailyFecundity.summary()",
"Plot inferred coefficients with confidence intervals",
"conf_int_DailyFecundity = pd.DataFrame(lm_DailyFecundity.conf_int())\nconf_int_DailyFecundity[2] = [lm_DailyFecundity.params.a,\n lm_DailyFecundity.params.a,\n lm_DailyFecundity.params.a1,\n lm_DailyFecundity.params.a2,\n lm_DailyFecundity.params.a3,\n lm_DailyFecundity.params.a4,\n lm_DailyFecundity.params.a5,\n lm_DailyFecundity.params.b12,\n lm_DailyFecundity.params.b13,\n lm_DailyFecundity.params.b14,\n lm_DailyFecundity.params.b15,\n lm_DailyFecundity.params.b23,\n lm_DailyFecundity.params.b24,\n lm_DailyFecundity.params.b25,\n lm_DailyFecundity.params.b34,\n lm_DailyFecundity.params.b35,\n lm_DailyFecundity.params.b45,\n lm_DailyFecundity.params.c123,\n lm_DailyFecundity.params.c124,\n lm_DailyFecundity.params.c125,\n lm_DailyFecundity.params.c134,\n lm_DailyFecundity.params.c135,\n lm_DailyFecundity.params.c145,\n lm_DailyFecundity.params.c234,\n lm_DailyFecundity.params.c235,\n lm_DailyFecundity.params.c245,\n lm_DailyFecundity.params.c345,\n lm_DailyFecundity.params.d1234,\n lm_DailyFecundity.params.d1235,\n lm_DailyFecundity.params.d1245,\n lm_DailyFecundity.params.d1345,\n lm_DailyFecundity.params.d2345,\n lm_DailyFecundity.params.e12345]\nconf_int_DailyFecundity.columns = [\"95% conf. int. bottom\", \"95% conf. int. top\", \"coef\"]\n# Set Intercept and a to 0, as otherwise the rest of the plot vanishes.\nconf_int_DailyFecundity[\"coef\"].Intercept = 0\nconf_int_DailyFecundity[\"95% conf. int. bottom\"].Intercept = 0\nconf_int_DailyFecundity[\"95% conf. int. top\"].Intercept = 0\nconf_int_DailyFecundity[\"coef\"].a = 0\nconf_int_DailyFecundity[\"95% conf. int. bottom\"].a = 0\nconf_int_DailyFecundity[\"95% conf. int. top\"].a = 0\nconf_int_DailyFecundity.plot.bar(figsize=(20,10))",
"Development:\n\nRead the data and fit the model",
"data_Development = pd.read_csv(\"DevelopmentData_processed.csv\")\n\nlm_Development = smf.ols(formula=\"Development ~ a + a1 + a2 + a3 + a4 + a5 +\"\n \"b12 + b13 + b14 + b15 + b23 + b24 + b25 + b34 + b35 + b45 +\"\n \"c123 + c124 + c125 + c134 + c135 + c145 + c234 + c235 + c245 + c345 +\"\n \"d1234 + d1235 + d1245 + d1345 + d2345 + e12345\", data=data_Development).fit()",
"Output summary statistics",
"lm_Development.summary()",
"Plot inferred coefficients with confidence intervals",
"conf_int_Development = pd.DataFrame(lm_Development.conf_int())\nconf_int_Development[2] = [lm_Development.params.a,\n lm_Development.params.a,\n lm_Development.params.a1,\n lm_Development.params.a2,\n lm_Development.params.a3,\n lm_Development.params.a4,\n lm_Development.params.a5,\n lm_Development.params.b12,\n lm_Development.params.b13,\n lm_Development.params.b14,\n lm_Development.params.b15,\n lm_Development.params.b23,\n lm_Development.params.b24,\n lm_Development.params.b25,\n lm_Development.params.b34,\n lm_Development.params.b35,\n lm_Development.params.b45,\n lm_Development.params.c123,\n lm_Development.params.c124,\n lm_Development.params.c125,\n lm_Development.params.c134,\n lm_Development.params.c135,\n lm_Development.params.c145,\n lm_Development.params.c234,\n lm_Development.params.c235,\n lm_Development.params.c245,\n lm_Development.params.c345,\n lm_Development.params.d1234,\n lm_Development.params.d1235,\n lm_Development.params.d1245,\n lm_Development.params.d1345,\n lm_Development.params.d2345,\n lm_Development.params.e12345]\nconf_int_Development.columns = [\"95% conf. int. bottom\", \"95% conf. int. top\", \"coef\"]\n# Comment all the following lines out to plot the Intercept and a.\nconf_int_Development[\"coef\"].Intercept = 0\nconf_int_Development[\"95% conf. int. bottom\"].Intercept = 0\nconf_int_Development[\"95% conf. int. top\"].Intercept = 0\nconf_int_Development[\"coef\"].a = 0\nconf_int_Development[\"95% conf. int. bottom\"].a = 0\nconf_int_Development[\"95% conf. int. top\"].a = 0\nconf_int_Development.plot.bar(figsize=(20,10))",
"Survival:\n\nRead the data and fit the model",
"data_Survival = pd.read_csv(\"SurvivalData_processed.csv\")\n\nlm_Survival = smf.ols(formula=\"Survival ~ a + a1 + a2 + a3 + a4 + a5 +\"\n \"b12 + b13 + b14 + b15 + b23 + b24 + b25 + b34 + b35 + b45 +\"\n \"c123 + c124 + c125 + c134 + c135 + c145 + c234 + c235 + c245 + c345 +\"\n \"d1234 + d1235 + d1245 + d1345 + d2345 + e12345\", data=data_Survival).fit()",
"Output summary statistics",
"lm_Survival.summary()",
"Plot inferred coefficients with confidence intervals",
"conf_int_Survival = pd.DataFrame(lm_Survival.conf_int())\nconf_int_Survival[2] = [lm_Survival.params.a,\n lm_Survival.params.a,\n lm_Survival.params.a1,\n lm_Survival.params.a2,\n lm_Survival.params.a3,\n lm_Survival.params.a4,\n lm_Survival.params.a5,\n lm_Survival.params.b12,\n lm_Survival.params.b13,\n lm_Survival.params.b14,\n lm_Survival.params.b15,\n lm_Survival.params.b23,\n lm_Survival.params.b24,\n lm_Survival.params.b25,\n lm_Survival.params.b34,\n lm_Survival.params.b35,\n lm_Survival.params.b45,\n lm_Survival.params.c123,\n lm_Survival.params.c124,\n lm_Survival.params.c125,\n lm_Survival.params.c134,\n lm_Survival.params.c135,\n lm_Survival.params.c145,\n lm_Survival.params.c234,\n lm_Survival.params.c235,\n lm_Survival.params.c245,\n lm_Survival.params.c345,\n lm_Survival.params.d1234,\n lm_Survival.params.d1235,\n lm_Survival.params.d1245,\n lm_Survival.params.d1345,\n lm_Survival.params.d2345,\n lm_Survival.params.e12345]\nconf_int_Survival.columns = [\"95% conf. int. bottom\", \"95% conf. int. top\", \"coef\"]\n# Comment all the following lines out to plot the Intercept and a.\nconf_int_Survival[\"coef\"].Intercept = 0\nconf_int_Survival[\"95% conf. int. bottom\"].Intercept = 0\nconf_int_Survival[\"95% conf. int. top\"].Intercept = 0\nconf_int_Survival[\"coef\"].a = 0\nconf_int_Survival[\"95% conf. int. bottom\"].a = 0\nconf_int_Survival[\"95% conf. int. top\"].a = 0\nconf_int_Survival.plot.bar(figsize=(20,10))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ithallojunior/NN_compare
|
MLP_final_test/.ipynb_checkpoints/MLP_from_data-checkpoint.ipynb
|
mit
|
[
"This code shows an example for using the imported data from a modified .mat file into a artificial neural network and its training",
"import numpy as np\nfrom sklearn.neural_network import MLPRegressor\nfrom sklearn import preprocessing\nfrom sklearn.cross_validation import train_test_split\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as mpatches\nfrom sklearn.metrics import r2_score # in order to test the results\nfrom sklearn.grid_search import GridSearchCV # looking for parameters\nimport pickle #saving to file\n\n",
"Importing preprocessing data",
"#this function reads the file \ndef read_data(archive, rows, columns):\n data = open(archive, 'r')\n mylist = data.read().split()\n data.close()\n myarray = np.array(mylist).reshape(( rows, columns)).astype(float)\n return myarray\n \n\ndata = read_data('../get_data_example/set.txt',72, 12)\nX = data[:, [0, 2, 4, 6, 7, 8, 9, 10, 11]]\n#print pre_X.shape, data.shape\ny = data[:,1]\n#print y.shape\n\n#getting the time vector for plotting purposes\ntime_stamp = np.zeros(data.shape[0])\nfor i in xrange(data.shape[0]):\n time_stamp[i] = i*(1.0/60.0)\n \n#print X.shape, time_stamp.shape\nX = np.hstack((X, time_stamp.reshape((X.shape[0], 1))))\nprint X.shape\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)\nt_test = X_test[:,-1]\nt_train = X_train[:, -1]\nX_train_std = preprocessing.scale(X_train[:,0:-1])\nX_test_std = preprocessing.scale(X_test[:, 0:-1])",
"Sorting out data (for plotting purposes)",
"#Here comes the way to sort out the data according to one the elements of it\ntest_sorted = np.hstack(\n (t_test.reshape(X_test_std.shape[0], 1), X_test_std, y_test.reshape(X_test_std.shape[0], 1)))\n\ntest_sorted = test_sorted[np.argsort(test_sorted[:,0])] #modified\n\ntrain_sorted = np.hstack((t_train.reshape(t_train.shape[0], 1), y_train.reshape(y_train.shape[0], 1) ))\ntrain_sorted = train_sorted[np.argsort(train_sorted[:,0])]",
"Artificial Neural Network (Gridsearch, DO NOT RUN)",
"#Grid search, random state =0: same beginning for all\nalpha1 = np.linspace(0.001,0.9, 9).tolist()\nmomentum1 = np.linspace(0.3,0.9, 9).tolist()\nparams_dist = {\"hidden_layer_sizes\":[(20, 40), (15, 40), (10,15), (15, 15, 10), (15, 10), (15, 5)],\n \"activation\":['tanh','logistic'],\"algorithm\":['sgd', 'l-bfgs'], \"alpha\":alpha1,\n \"learning_rate\":['constant'],\"max_iter\":[500], \"random_state\":[0],\n \"verbose\": [False], \"warm_start\":[False], \"momentum\":momentum1}\ngrid = GridSearchCV(MLPRegressor(), param_grid=params_dist)\ngrid.fit(X_train_std, y_train)\n\nprint \"Best score:\", grid.best_score_\nprint \"Best parameter's set found:\\n\"\nprint grid.best_params_ \n\nreg = MLPRegressor(warm_start = grid.best_params_['warm_start'], verbose= grid.best_params_['verbose'], \n algorithm= grid.best_params_['algorithm'],hidden_layer_sizes=grid.best_params_['hidden_layer_sizes'], \n activation= grid.best_params_['activation'], max_iter= grid.best_params_['max_iter'],\n random_state= None,alpha= grid.best_params_['alpha'], learning_rate= grid.best_params_['learning_rate'], \n momentum= grid.best_params_['momentum'])\n\nreg.fit(X_train_std, y_train)",
"Plotting",
"%matplotlib inline\n\nresults = reg.predict(test_sorted[:, 1:-1])\n\nplt.plot(test_sorted[:, 0], results, c='r') # ( sorted time, results)\nplt.plot(train_sorted[:, 0], train_sorted[:,1], c='b' ) #expected\nplt.scatter(time_stamp, y, c='k')\n\nplt.xlabel(\"Time(s)\")\nplt.ylabel(\"Angular velocities(rad/s)\")\n\nred_patch = mpatches.Patch(color='red', label='Predicted')\nblue_patch = mpatches.Patch(color='blue', label ='Expected')\nblack_patch = mpatches.Patch(color='black', label ='Original')\nplt.legend(handles=[red_patch, blue_patch, black_patch])\nplt.title(\"MLP results vs Expected values\")\nplt.show()\n\nprint \"Accuracy:\", reg.score(X_test_std, y_test)\n#print \"Accuracy test 2\", r2_score(test_sorted[:,-1], results)\n",
"Saving ANN to file through pickle (and using it later)",
"#This prevents the user from losing a previous important result\ndef save_it(ans):\n if ans == \"yes\":\n f = open('data.ann', 'w')\n mem = pickle.dumps(grid)\n f.write(mem)\n f.close()\n else:\n print \"Nothing to save\" \n \n \nsave_it(\"no\")\n\n#Loading a successful ANN\nf = open('data.ann', 'r')\nnw = f.read()\nsaved_ann = pickle.loads(nw)\nprint \"Just the accuracy:\", saved_ann.score(X_test_std, y_test), \"\\n\"\nprint \"Parameters:\"\nprint saved_ann.get_params(), \"\\n\"\nprint \"Loss:\", saved_ann.loss_\nprint \"Total of layers:\", saved_ann.n_layers_\nprint \"Total of iterations:\", saved_ann.n_iter_\n\n\n#print from previously saved data\n%matplotlib inline\n\nresults = saved_ann.predict(test_sorted[:, 1:-1])\n\n\nplt.plot(test_sorted[:, 0], results, c='r') # ( sorted time, results)\nplt.plot(train_sorted[:, 0], train_sorted[:,1], c='b' ) #expected\nplt.scatter(time_stamp, y, c='k')\n\nplt.xlabel(\"Time(s)\")\nplt.ylabel(\"Angular velocities(rad/s)\")\n\nred_patch = mpatches.Patch(color='red', label='Predicted')\nblue_patch = mpatches.Patch(color='blue', label ='Expected')\nblack_patch = mpatches.Patch(color='black', label ='Original')\nplt.legend(handles=[red_patch, blue_patch, black_patch])\nplt.title(\"MLP results vs Expected values (Loaded from file)\")\nplt.show()\n\n\n\n\nplt.plot(time_stamp, y,'--.', c='r')\nplt.xlabel(\"Time(s)\")\nplt.ylabel(\"Angular velocities(rad/s)\")\nplt.title(\"Resuts from patient:\\n\"\n \" Angular velocities for the right knee\")\nplt.show()\n\n\nprint \"Accuracy:\", saved_ann.score(X_test_std, y_test)\n#print \"Accuracy test 2\", r2_score(test_sorted[:,-1], results)\n\nprint max(y), saved_ann.predict(X_train_std[y_train.tolist().index(max(y_train)),:].reshape((1,9)))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
yashdeeph709/Algorithms
|
PythonBootCamp/Complete-Python-Bootcamp-master/Statements Assessment Test.ipynb
|
apache-2.0
|
[
"Statements Assessment Test\nLets test your knowledge!\n\nUse for, split(), and if to create a Statement that will print out words that start with 's':",
"st = 'Print only the words that start with s in this sentence'\n\n#Code here",
"Use range() to print all the even numbers from 0 to 10.",
"#Code Here",
"Use List comprehension to create a list of all numbers between 1 and 50 that are divisible by 3.",
"#Code in this cell\n[]",
"Go through the string below and if the length of a word is even print \"even!\"",
"st = 'Print every word in this sentence that has an even number of letters'\n\n#Code in this cell",
"Write a program that prints the integers from 1 to 100. But for multiples of three print \"Fizz\" instead of the number, and for the multiples of five print \"Buzz\". For numbers which are multiples of both three and five print \"FizzBuzz\".",
"#Code in this cell",
"Use List Comprehension to create a list of the first letters of every word in the string below:",
"st = 'Create a list of the first letters of every word in this string'\n\n#Code in this cell",
"Great Job!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
statsmodels/statsmodels.github.io
|
v0.13.0/examples/notebooks/generated/pca_fertility_factors.ipynb
|
bsd-3-clause
|
[
"statsmodels Principal Component Analysis\nKey ideas: Principal component analysis, world bank data, fertility\nIn this notebook, we use principal components analysis (PCA) to analyze the time series of fertility rates in 192 countries, using data obtained from the World Bank. The main goal is to understand how the trends in fertility over time differ from country to country. This is a slightly atypical illustration of PCA because the data are time series. Methods such as functional PCA have been developed for this setting, but since the fertility data are very smooth, there is no real disadvantage to using standard PCA in this case.",
"%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport statsmodels.api as sm\nfrom statsmodels.multivariate.pca import PCA\n\nplt.rc(\"figure\", figsize=(16, 8))\nplt.rc(\"font\", size=14)",
"The data can be obtained from the World Bank web site, but here we work with a slightly cleaned-up version of the data:",
"data = sm.datasets.fertility.load_pandas().data\ndata.head()",
"Here we construct a DataFrame that contains only the numerical fertility rate data and set the index to the country names. We also drop all the countries with any missing data.",
"columns = list(map(str, range(1960, 2012)))\ndata.set_index(\"Country Name\", inplace=True)\ndta = data[columns]\ndta = dta.dropna()\ndta.head()",
"There are two ways to use PCA to analyze a rectangular matrix: we can treat the rows as the \"objects\" and the columns as the \"variables\", or vice-versa. Here we will treat the fertility measures as \"variables\" used to measure the countries as \"objects\". Thus the goal will be to reduce the yearly fertility rate values to a small number of fertility rate \"profiles\" or \"basis functions\" that capture most of the variation over time in the different countries.\nThe mean trend is removed in PCA, but its worthwhile taking a look at it. It shows that fertility has dropped steadily over the time period covered in this dataset. Note that the mean is calculated using a country as the unit of analysis, ignoring population size. This is also true for the PC analysis conducted below. A more sophisticated analysis might weight the countries, say by population in 1980.",
"ax = dta.mean().plot(grid=False)\nax.set_xlabel(\"Year\", size=17)\nax.set_ylabel(\"Fertility rate\", size=17)\nax.set_xlim(0, 51)",
"Next we perform the PCA:",
"pca_model = PCA(dta.T, standardize=False, demean=True)",
"Based on the eigenvalues, we see that the first PC dominates, with perhaps a small amount of meaningful variation captured in the second and third PC's.",
"fig = pca_model.plot_scree(log_scale=False)",
"Next we will plot the PC factors. The dominant factor is monotonically increasing. Countries with a positive score on the first factor will increase faster (or decrease slower) compared to the mean shown above. Countries with a negative score on the first factor will decrease faster than the mean. The second factor is U-shaped with a positive peak at around 1985. Countries with a large positive score on the second factor will have lower than average fertilities at the beginning and end of the data range, but higher than average fertility in the middle of the range.",
"fig, ax = plt.subplots(figsize=(8, 4))\nlines = ax.plot(pca_model.factors.iloc[:, :3], lw=4, alpha=0.6)\nax.set_xticklabels(dta.columns.values[::10])\nax.set_xlim(0, 51)\nax.set_xlabel(\"Year\", size=17)\nfig.subplots_adjust(0.1, 0.1, 0.85, 0.9)\nlegend = fig.legend(lines, [\"PC 1\", \"PC 2\", \"PC 3\"], loc=\"center right\")\nlegend.draw_frame(False)",
"To better understand what is going on, we will plot the fertility trajectories for sets of countries with similar PC scores. The following convenience function produces such a plot.",
"idx = pca_model.loadings.iloc[:, 0].argsort()",
"First we plot the five countries with the greatest scores on PC 1. These countries have a higher rate of fertility increase than the global mean (which is decreasing).",
"def make_plot(labels):\n fig, ax = plt.subplots(figsize=(9, 5))\n ax = dta.loc[labels].T.plot(legend=False, grid=False, ax=ax)\n dta.mean().plot(ax=ax, grid=False, label=\"Mean\")\n ax.set_xlim(0, 51)\n fig.subplots_adjust(0.1, 0.1, 0.75, 0.9)\n ax.set_xlabel(\"Year\", size=17)\n ax.set_ylabel(\"Fertility\", size=17)\n legend = ax.legend(\n *ax.get_legend_handles_labels(), loc=\"center left\", bbox_to_anchor=(1, 0.5)\n )\n legend.draw_frame(False)\n\nlabels = dta.index[idx[-5:]]\nmake_plot(labels)",
"Here are the five countries with the greatest scores on factor 2. These are countries that reached peak fertility around 1980, later than much of the rest of the world, followed by a rapid decrease in fertility.",
"idx = pca_model.loadings.iloc[:, 1].argsort()\nmake_plot(dta.index[idx[-5:]])",
"Finally we have the countries with the most negative scores on PC 2. These are the countries where the fertility rate declined much faster than the global mean during the 1960's and 1970's, then flattened out.",
"make_plot(dta.index[idx[:5]])",
"We can also look at a scatterplot of the first two principal component scores. We see that the variation among countries is fairly continuous, except perhaps that the two countries with highest scores for PC 2 are somewhat separated from the other points. These countries, Oman and Yemen, are unique in having a sharp spike in fertility around 1980. No other country has such a spike. In contrast, the countries with high scores on PC 1 (that have continuously increasing fertility), are part of a continuum of variation.",
"fig, ax = plt.subplots()\npca_model.loadings.plot.scatter(x=\"comp_00\", y=\"comp_01\", ax=ax)\nax.set_xlabel(\"PC 1\", size=17)\nax.set_ylabel(\"PC 2\", size=17)\ndta.index[pca_model.loadings.iloc[:, 1] > 0.2].values"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Bismarrck/deep-learning
|
sentiment-rnn/Sentiment_RNN.ipynb
|
mit
|
[
"Sentiment Analysis with an RNN\nIn this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.\nThe architecture for this network is shown below.\n<img src=\"assets/network_diagram.png\" width=400px>\nHere, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own.\nFrom the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.\nWe don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.",
"import numpy as np\nimport tensorflow as tf\n\nwith open('../sentiment-network/reviews.txt', 'r') as f:\n reviews = f.read()\nwith open('../sentiment-network/labels.txt', 'r') as f:\n labels = f.read()\n\nreviews[:2000]",
"Data preprocessing\nThe first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.\nYou can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \\n. To deal with those, I'm going to split the text into each review using \\n as the delimiter. Then I can combined all the reviews back together into one big string.\nFirst, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.",
"from string import punctuation\nall_text = ''.join([c for c in reviews if c not in punctuation])\nreviews = all_text.split('\\n')\n\nall_text = ' '.join(reviews)\nwords = all_text.split()\n\nall_text[:2000]\n\nwords[:100]",
"Encoding the words\nThe embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.\n\nExercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.\nAlso, convert the reviews to integers and store the reviews in a new list called reviews_ints.",
"# Create your dictionary that maps vocab words to integers here\nfrom collections import Counter\ncounter = Counter(words)\nvocab = sorted(counter, key=counter.get, reverse=True)\nvocab_to_int = {word: i for i, word in enumerate(vocab, 1)}\n\n# Convert the reviews to integers, same shape as reviews list, but with integers\nreviews_ints = []\nfor review in reviews:\n reviews_ints.append([vocab_to_int[word] for word in review.split()])",
"Encoding the labels\nOur labels are \"positive\" or \"negative\". To use these labels in our network, we need to convert them to 0 and 1.\n\nExercise: Convert labels from positive and negative to 1 and 0, respectively.",
"# Convert labels to 1s and 0s for 'positive' and 'negative'\nlabel_to_int= {\"positive\": 1, \"negative\": 0}\nlabels = labels.split()\nlabels = np.array([label_to_int[label.strip().lower()] for label in labels])",
"If you built labels correctly, you should see the next output.",
"from collections import Counter\nreview_lens = Counter([len(x) for x in reviews_ints])\nprint(\"Zero-length reviews: {}\".format(review_lens[0]))\nprint(\"Maximum review length: {}\".format(max(review_lens)))",
"Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.\n\nExercise: First, remove the review with zero length from the reviews_ints list.",
"# Filter out that review with 0 length\nreviews_ints = [review for review in reviews_ints if len(review) > 0]",
"Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use on the first 200 words as the feature vector.\n\nThis isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.",
"seq_len = 200\nnum_reviews = len(reviews_ints)\nfeatures = np.zeros((num_reviews, seq_len), dtype=int)\nfor i, review in enumerate(reviews_ints):\n rlen = min(len(review), seq_len)\n istart = seq_len - rlen\n features[i, istart:] = review[:rlen]\nprint(features[0, :100])",
"If you build features correctly, it should look like that cell output below.",
"features[:10,:100]",
"Training, Validation, Test\nWith our data in nice shape, we'll split it into training, validation, and test sets.\n\nExercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.",
"split_frac = 0.8\nsplit_index = int(num_reviews * split_frac)\n\ntrain_x, val_x = features[:split_index], features[split_index:]\ntrain_y, val_y = labels[:split_index], labels[split_index:]\n\nsplit_index = int(len(val_x) * 0.5)\nval_x, test_x = val_x[:split_index], val_x[split_index:]\nval_y, test_y = val_y[:split_index], val_y[split_index:]\n\nprint(\"\\t\\t\\tFeature Shapes:\")\nprint(\"Train set: \\t\\t{}\".format(train_x.shape), \n \"\\nValidation set: \\t{}\".format(val_x.shape),\n \"\\nTest set: \\t\\t{}\".format(test_x.shape))",
"With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like:\nFeature Shapes:\nTrain set: (20000, 200) \nValidation set: (2500, 200) \nTest set: (2500, 200)\nBuild the graph\nHere, we'll build the graph. First up, defining the hyperparameters.\n\nlstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.\nlstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.\nbatch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.\nlearning_rate: Learning rate",
"lstm_size = 256\nlstm_layers = 1\nbatch_size = 500\nlearning_rate = 0.001",
"For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.\n\nExercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.",
"n_words = len(vocab_to_int) + 1 # Adding 1 because we use 0's for padding, dictionary started at 1\n\n# Create the graph object\ngraph = tf.Graph()\n\n# Add nodes to the graph\nwith graph.as_default():\n inputs_ = tf.placeholder(tf.int32, [None], name=\"inputs\")\n labels_ = tf.placeholder(tf.int32, [None, None], name=\"labels\")\n keep_prob = tf.placeholder(tf.float32, shape=None, name=\"keep_prob\")",
"Embedding\nNow we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.\n\nExercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].",
"# Size of the embedding vectors (number of units in the embedding layer)\nembed_size = 300 \n\nwith graph.as_default():\n embedding = tf.Variable()\n embed = ",
"LSTM cell\n<img src=\"assets/network_diagram.png\" width=400px>\nNext, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.\nTo create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:\ntf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)\nyou can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like \nlstm = tf.contrib.rnn.BasicLSTMCell(num_units)\nto create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like\ndrop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)\nMost of the time, your network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:\ncell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)\nHere, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.\nSo the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an achitectural viewpoint, just a more complicated graph in the cell.\n\nExercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.\n\nHere is a tutorial on building RNNs that will help you out.",
"with graph.as_default():\n # Your basic LSTM cell\n lstm = \n \n # Add dropout to the cell\n drop = \n \n # Stack up multiple LSTM layers, for deep learning\n cell = \n \n # Getting an initial state of all zeros\n initial_state = cell.zero_state(batch_size, tf.float32)",
"RNN forward pass\n<img src=\"assets/network_diagram.png\" width=400px>\nNow we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.\noutputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)\nAbove I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.\n\nExercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.",
"with graph.as_default():\n outputs, final_state = ",
"Output\nWe only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], the calculate the cost from that and labels_.",
"with graph.as_default():\n predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)\n cost = tf.losses.mean_squared_error(labels_, predictions)\n \n optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)",
"Validation accuracy\nHere we can add a few nodes to calculate the accuracy which we'll use in the validation pass.",
"with graph.as_default():\n correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)\n accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))",
"Batching\nThis is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].",
"def get_batches(x, y, batch_size=100):\n \n n_batches = len(x)//batch_size\n x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]\n for ii in range(0, len(x), batch_size):\n yield x[ii:ii+batch_size], y[ii:ii+batch_size]",
"Training\nBelow is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.",
"epochs = 10\n\nwith graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=graph) as sess:\n sess.run(tf.global_variables_initializer())\n iteration = 1\n for e in range(epochs):\n state = sess.run(initial_state)\n \n for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 0.5,\n initial_state: state}\n loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)\n \n if iteration%5==0:\n print(\"Epoch: {}/{}\".format(e, epochs),\n \"Iteration: {}\".format(iteration),\n \"Train loss: {:.3f}\".format(loss))\n\n if iteration%25==0:\n val_acc = []\n val_state = sess.run(cell.zero_state(batch_size, tf.float32))\n for x, y in get_batches(val_x, val_y, batch_size):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 1,\n initial_state: val_state}\n batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)\n val_acc.append(batch_acc)\n print(\"Val acc: {:.3f}\".format(np.mean(val_acc)))\n iteration +=1\n saver.save(sess, \"checkpoints/sentiment.ckpt\")",
"Testing",
"test_acc = []\nwith tf.Session(graph=graph) as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n test_state = sess.run(cell.zero_state(batch_size, tf.float32))\n for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 1,\n initial_state: test_state}\n batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)\n test_acc.append(batch_acc)\n print(\"Test accuracy: {:.3f}\".format(np.mean(test_acc)))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n
|
site/ja/hub/tutorials/object_detection.ipynb
|
apache-2.0
|
[
"Copyright 2018 The TensorFlow Hub Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");",
"# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================",
"オブジェクト検出\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://www.tensorflow.org/hub/tutorials/object_detection\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\"> TensorFlow.orgで表示</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/object_detection.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Run in Google Colab</a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/object_detection.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">GitHub でソースを表示</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/hub/tutorials/object_detection.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">ノートブックをダウンロード/a0}</a></td>\n <td> <a href=\"https://tfhub.dev/s?q=google%2Ffaster_rcnn%2Fopenimages_v4%2Finception_resnet_v2%2F1%20OR%20google%2Ffaster_rcnn%2Fopenimages_v4%2Finception_resnet_v2%2F1\"><img src=\"https://www.tensorflow.org/images/hub_logo_32px.png\">TF Hub モデルを参照</a>\n</td>\n</table>\n\nこの Colab では、オブジェクト検出を実行するようにトレーニングされた TF-Hub モジュールの使用を実演します。\nセットアップ",
"#@title Imports and function definitions\n\n# For running inference on the TF-Hub module.\nimport tensorflow as tf\n\nimport tensorflow_hub as hub\n\n# For downloading the image.\nimport matplotlib.pyplot as plt\nimport tempfile\nfrom six.moves.urllib.request import urlopen\nfrom six import BytesIO\n\n# For drawing onto the image.\nimport numpy as np\nfrom PIL import Image\nfrom PIL import ImageColor\nfrom PIL import ImageDraw\nfrom PIL import ImageFont\nfrom PIL import ImageOps\n\n# For measuring the inference time.\nimport time\n\n# Print Tensorflow version\nprint(tf.__version__)\n\n# Check available GPU devices.\nprint(\"The following GPU devices are available: %s\" % tf.test.gpu_device_name())",
"使用例\n画像のダウンロードと視覚化用のヘルパー関数\n必要最低限の単純な機能性を得るために、TF オブジェクト検出 API から採用された視覚化コードです。",
"def display_image(image):\n fig = plt.figure(figsize=(20, 15))\n plt.grid(False)\n plt.imshow(image)\n\n\ndef download_and_resize_image(url, new_width=256, new_height=256,\n display=False):\n _, filename = tempfile.mkstemp(suffix=\".jpg\")\n response = urlopen(url)\n image_data = response.read()\n image_data = BytesIO(image_data)\n pil_image = Image.open(image_data)\n pil_image = ImageOps.fit(pil_image, (new_width, new_height), Image.ANTIALIAS)\n pil_image_rgb = pil_image.convert(\"RGB\")\n pil_image_rgb.save(filename, format=\"JPEG\", quality=90)\n print(\"Image downloaded to %s.\" % filename)\n if display:\n display_image(pil_image)\n return filename\n\n\ndef draw_bounding_box_on_image(image,\n ymin,\n xmin,\n ymax,\n xmax,\n color,\n font,\n thickness=4,\n display_str_list=()):\n \"\"\"Adds a bounding box to an image.\"\"\"\n draw = ImageDraw.Draw(image)\n im_width, im_height = image.size\n (left, right, top, bottom) = (xmin * im_width, xmax * im_width,\n ymin * im_height, ymax * im_height)\n draw.line([(left, top), (left, bottom), (right, bottom), (right, top),\n (left, top)],\n width=thickness,\n fill=color)\n\n # If the total height of the display strings added to the top of the bounding\n # box exceeds the top of the image, stack the strings below the bounding box\n # instead of above.\n display_str_heights = [font.getsize(ds)[1] for ds in display_str_list]\n # Each display_str has a top and bottom margin of 0.05x.\n total_display_str_height = (1 + 2 * 0.05) * sum(display_str_heights)\n\n if top > total_display_str_height:\n text_bottom = top\n else:\n text_bottom = top + total_display_str_height\n # Reverse list and print from bottom to top.\n for display_str in display_str_list[::-1]:\n text_width, text_height = font.getsize(display_str)\n margin = np.ceil(0.05 * text_height)\n draw.rectangle([(left, text_bottom - text_height - 2 * margin),\n (left + text_width, text_bottom)],\n fill=color)\n draw.text((left + margin, text_bottom - text_height - margin),\n display_str,\n fill=\"black\",\n font=font)\n text_bottom -= text_height - 2 * margin\n\n\ndef draw_boxes(image, boxes, class_names, scores, max_boxes=10, min_score=0.1):\n \"\"\"Overlay labeled boxes on an image with formatted scores and label names.\"\"\"\n colors = list(ImageColor.colormap.values())\n\n try:\n font = ImageFont.truetype(\"/usr/share/fonts/truetype/liberation/LiberationSansNarrow-Regular.ttf\",\n 25)\n except IOError:\n print(\"Font not found, using default font.\")\n font = ImageFont.load_default()\n\n for i in range(min(boxes.shape[0], max_boxes)):\n if scores[i] >= min_score:\n ymin, xmin, ymax, xmax = tuple(boxes[i])\n display_str = \"{}: {}%\".format(class_names[i].decode(\"ascii\"),\n int(100 * scores[i]))\n color = colors[hash(class_names[i]) % len(colors)]\n image_pil = Image.fromarray(np.uint8(image)).convert(\"RGB\")\n draw_bounding_box_on_image(\n image_pil,\n ymin,\n xmin,\n ymax,\n xmax,\n color,\n font,\n display_str_list=[display_str])\n np.copyto(image, np.array(image_pil))\n return image",
"モジュールを適用する\nOpen Images v4 から公開画像を読み込み、ローカルの保存して表示します。",
"# By Heiko Gorski, Source: https://commons.wikimedia.org/wiki/File:Naxos_Taverna.jpg\nimage_url = \"https://upload.wikimedia.org/wikipedia/commons/6/60/Naxos_Taverna.jpg\" #@param\ndownloaded_image_path = download_and_resize_image(image_url, 1280, 856, True)",
"オブジェクト検出モジュールを選択し、ダウンロードされた画像に適用します。モジュールのリストを示します。\n\nFasterRCNN+InceptionResNet V2: 高精度\nssd+mobilenet V2: 小規模で高速",
"module_handle = \"https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1\" #@param [\"https://tfhub.dev/google/openimages_v4/ssd/mobilenet_v2/1\", \"https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1\"]\n\ndetector = hub.load(module_handle).signatures['default']\n\ndef load_img(path):\n img = tf.io.read_file(path)\n img = tf.image.decode_jpeg(img, channels=3)\n return img\n\ndef run_detector(detector, path):\n img = load_img(path)\n\n converted_img = tf.image.convert_image_dtype(img, tf.float32)[tf.newaxis, ...]\n start_time = time.time()\n result = detector(converted_img)\n end_time = time.time()\n\n result = {key:value.numpy() for key,value in result.items()}\n\n print(\"Found %d objects.\" % len(result[\"detection_scores\"]))\n print(\"Inference time: \", end_time-start_time)\n\n image_with_boxes = draw_boxes(\n img.numpy(), result[\"detection_boxes\"],\n result[\"detection_class_entities\"], result[\"detection_scores\"])\n\n display_image(image_with_boxes)\n\nrun_detector(detector, downloaded_image_path)",
"その他の画像\n時間トラッキングを使用して、追加の画像に推論を実行します。",
"image_urls = [\n # Source: https://commons.wikimedia.org/wiki/File:The_Coleoptera_of_the_British_islands_(Plate_125)_(8592917784).jpg\n \"https://upload.wikimedia.org/wikipedia/commons/1/1b/The_Coleoptera_of_the_British_islands_%28Plate_125%29_%288592917784%29.jpg\",\n # By Américo Toledano, Source: https://commons.wikimedia.org/wiki/File:Biblioteca_Maim%C3%B3nides,_Campus_Universitario_de_Rabanales_007.jpg\n \"https://upload.wikimedia.org/wikipedia/commons/thumb/0/0d/Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg/1024px-Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg\",\n # Source: https://commons.wikimedia.org/wiki/File:The_smaller_British_birds_(8053836633).jpg\n \"https://upload.wikimedia.org/wikipedia/commons/0/09/The_smaller_British_birds_%288053836633%29.jpg\",\n ]\n\ndef detect_img(image_url):\n start_time = time.time()\n image_path = download_and_resize_image(image_url, 640, 480)\n run_detector(detector, image_path)\n end_time = time.time()\n print(\"Inference time:\",end_time-start_time)\n\ndetect_img(image_urls[0])\n\ndetect_img(image_urls[1])\n\ndetect_img(image_urls[2])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
niketanpansare/systemml
|
samples/jupyter-notebooks/Linear_Regression_Algorithms_Demo.ipynb
|
apache-2.0
|
[
"Linear Regression Algorithms using Apache SystemML\nTable of Content:\n- Install SystemML using pip\n- Example 1: Implement a simple 'Hello World' program in SystemML\n- Example 2: Matrix Multiplication\n- Load diabetes dataset from scikit-learn for the example 3\n- Example 3: Implement three different algorithms to train linear regression model\n - Algorithm 1: Linear Regression - Direct Solve (no regularization)\n - Algorithm 2: Linear Regression - Batch Gradient Descent (no regularization)\n - Algorithm 3: Linear Regression - Conjugate Gradient (no regularization)\n- Example 4: Invoke existing SystemML algorithm script LinearRegDS.dml using MLContext API\n- Example 5: Invoke existing SystemML algorithm using scikit-learn/SparkML pipeline like API\n- Uninstall/Clean up SystemML Python package and jar file\nInstall SystemML using pip <a class=\"anchor\" id=\"bullet1\"></a>\nFor more details, please see the install guide.",
"!pip install --upgrade --user systemml\n\n!pip show systemml",
"Example 1: Implement a simple 'Hello World' program in SystemML <a class=\"anchor\" id=\"bullet2\"></a>\nFirst import the classes necessary to implement the 'Hello World' program.\nThe MLContext API offers a programmatic interface for interacting with SystemML from Spark using languages such as Scala, Java, and Python. As a result, it offers a convenient way to interact with SystemML from the Spark Shell and from Notebooks such as Jupyter and Zeppelin. Please refer to the documentation for more detail on the MLContext API.\nAs a sidenote, here are alternative ways by which you can invoke SystemML (not covered in this notebook): \n- Command-line invocation using either spark-submit or hadoop.\n- Using the JMLC API.",
"from systemml import MLContext, dml, dmlFromResource\n\nml = MLContext(sc)\n\nprint(\"Spark Version:\", sc.version)\nprint(\"SystemML Version:\", ml.version())\nprint(\"SystemML Built-Time:\", ml.buildTime())\n\n# Step 1: Write the DML script\nscript = \"\"\"\nprint(\"Hello World!\");\n\"\"\"\n\n# Step 2: Create a Python DML object\nscript = dml(script)\n\n# Step 3: Execute it using MLContext API\nml.execute(script)",
"Now let's implement a slightly more complicated 'Hello World' program where we initialize a string variable to 'Hello World!' and print it using Python. Note: we first register the output variable in the dml object (in the step 2) and then fetch it after execution (in the step 3).",
"# Step 1: Write the DML script\nscript = \"\"\"\ns = \"Hello World!\";\n\"\"\"\n\n# Step 2: Create a Python DML object\nscript = dml(script).output('s')\n\n# Step 3: Execute it using MLContext API\ns = ml.execute(script).get('s')\nprint(s)",
"Example 2: Matrix Multiplication <a class=\"anchor\" id=\"bullet3\"></a>\nLet's write a script to generate a random matrix, perform matrix multiplication, and compute the sum of the output.",
"# Step 1: Write the DML script\nscript = \"\"\"\n # The number of rows is passed externally by the user via 'nr'\n X = rand(rows=nr, cols=1000, sparsity=0.5)\n A = t(X) %*% X\n s = sum(A)\n\"\"\"\n\n# Step 2: Create a Python DML object\nscript = dml(script).input(nr=1e5).output('s')\n\n# Step 3: Execute it using MLContext API\ns = ml.execute(script).get('s')\nprint(s)",
"Now, let's generate a random matrix in NumPy and pass it to SystemML.",
"import numpy as np\nnpMatrix = np.random.rand(1000, 1000)\n\n# Step 1: Write the DML script\nscript = \"\"\"\n A = t(X) %*% X\n s = sum(A)\n\"\"\"\n\n# Step 2: Create a Python DML object\nscript = dml(script).input(X=npMatrix).output('s')\n\n# Step 3: Execute it using MLContext API\ns = ml.execute(script).get('s')\nprint(s)",
"Load diabetes dataset from scikit-learn for the example 3 <a class=\"anchor\" id=\"bullet4\"></a>",
"import matplotlib.pyplot as plt\nimport numpy as np\nfrom sklearn import datasets\nplt.switch_backend('agg')\n\n%matplotlib inline\n\ndiabetes = datasets.load_diabetes()\ndiabetes_X = diabetes.data[:, np.newaxis, 2]\ndiabetes_X_train = diabetes_X[:-20]\ndiabetes_X_test = diabetes_X[-20:]\ndiabetes_y_train = diabetes.target[:-20].reshape(-1,1)\ndiabetes_y_test = diabetes.target[-20:].reshape(-1,1)\n\nplt.scatter(diabetes_X_train, diabetes_y_train, color='black')\nplt.scatter(diabetes_X_test, diabetes_y_test, color='red')",
"Example 3: Implement three different algorithms to train linear regression model\nLinear regression models the relationship between one numerical response variable and one or more explanatory (feature) variables by fitting a linear equation to observed data. The feature vectors are provided as a matrix $X$ an the observed response values are provided as a 1-column matrix $y$.\nA linear regression line has an equation of the form $y = Xw$.\nAlgorithm 1: Linear Regression - Direct Solve (no regularization) <a class=\"anchor\" id=\"example3algo1\"></a>\nLeast squares formulation\nThe least squares method calculates the best-fitting line for the observed data by minimizing the sum of the squares of the difference between the predicted response $Xw$ and the actual response $y$.\n$w^* = argmin_w ||Xw-y||^2 \\\n\\;\\;\\; = argmin_w (y - Xw)'(y - Xw) \\\n\\;\\;\\; = argmin_w \\dfrac{(w'(X'X)w - w'(X'y))}{2}$\nTo find the optimal parameter $w$, we set the gradient $dw = (X'X)w - (X'y)$ to 0.\n$(X'X)w - (X'y) = 0 \\\nw = (X'X)^{-1}(X' y) \\\n \\;\\;= solve(X'X, X'y)$",
"# Step 1: Write the DML script\nscript = \"\"\"\n # add constant feature to X to model intercept\n X = cbind(X, matrix(1, rows=nrow(X), cols=1))\n A = t(X) %*% X\n b = t(X) %*% y\n w = solve(A, b)\n bias = as.scalar(w[nrow(w),1])\n w = w[1:nrow(w)-1,]\n\"\"\"\n\n# Step 2: Create a Python DML object\nscript = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w', 'bias')\n\n# Step 3: Execute it using MLContext API\nw, bias = ml.execute(script).get('w','bias')\nw = w.toNumPy()\n\nplt.scatter(diabetes_X_train, diabetes_y_train, color='black')\nplt.scatter(diabetes_X_test, diabetes_y_test, color='red')\n\nplt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='blue', linestyle ='dotted')",
"Algorithm 2: Linear Regression - Batch Gradient Descent (no regularization) <a class=\"anchor\" id=\"example3algo2\"></a>\nAlgorithm\nStep 1: Start with an initial point \nwhile(not converged) { \n Step 2: Compute gradient dw. \n Step 3: Compute stepsize alpha. \n Step 4: Update: wnew = wold + alpha*dw \n}\nGradient formula\ndw = r = (X'X)w - (X'y)\nStep size formula\nFind number alpha to minimize f(w + alpha*r) \nalpha = -(r'r)/(r'X'Xr)",
"# Step 1: Write the DML script\nscript = \"\"\"\n # add constant feature to X to model intercepts\n X = cbind(X, matrix(1, rows=nrow(X), cols=1))\n max_iter = 100\n w = matrix(0, rows=ncol(X), cols=1)\n for(i in 1:max_iter){\n XtX = t(X) %*% X\n dw = XtX %*%w - t(X) %*% y\n alpha = -(t(dw) %*% dw) / (t(dw) %*% XtX %*% dw)\n w = w + dw*alpha\n }\n bias = as.scalar(w[nrow(w),1])\n w = w[1:nrow(w)-1,] \n\"\"\"\n\n# Step 2: Create a Python DML object\nscript = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w', 'bias')\n\n# Step 3: Execute it using MLContext API\nw, bias = ml.execute(script).get('w','bias')\nw = w.toNumPy()\n\nplt.scatter(diabetes_X_train, diabetes_y_train, color='black')\nplt.scatter(diabetes_X_test, diabetes_y_test, color='red')\n\nplt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='red', linestyle ='dashed')",
"Algorithm 3: Linear Regression - Conjugate Gradient (no regularization) <a class=\"anchor\" id=\"example3algo3\"></a>\nProblem with gradient descent: Takes very similar directions many times\nSolution: Enforce conjugacy\nStep 1: Start with an initial point \nwhile(not converged) {\n Step 2: Compute gradient dw.\n Step 3: Compute stepsize alpha.\n Step 4: Compute next direction p by enforcing conjugacy with previous direction.\n Step 4: Update: w_new = w_old + alpha*p\n}",
"# Step 1: Write the DML script\nscript = \"\"\"\n # add constant feature to X to model intercepts\n X = cbind(X, matrix(1, rows=nrow(X), cols=1))\n m = ncol(X); i = 1; \n max_iter = 20;\n w = matrix (0, rows = m, cols = 1); # initialize weights to 0\n dw = - t(X) %*% y; p = - dw; # dw = (X'X)w - (X'y)\n norm_r2 = sum (dw ^ 2); \n for(i in 1:max_iter) {\n q = t(X) %*% (X %*% p)\n alpha = norm_r2 / sum (p * q); # Minimizes f(w - alpha*r)\n w = w + alpha * p; # update weights\n dw = dw + alpha * q; \n old_norm_r2 = norm_r2; norm_r2 = sum (dw ^ 2);\n p = -dw + (norm_r2 / old_norm_r2) * p; # next direction - conjugacy to previous direction\n i = i + 1;\n }\n bias = as.scalar(w[nrow(w),1])\n w = w[1:nrow(w)-1,] \n\"\"\"\n\n# Step 2: Create a Python DML object\nscript = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w', 'bias')\n\n# Step 3: Execute it using MLContext API\nw, bias = ml.execute(script).get('w','bias')\nw = w.toNumPy()\n\nplt.scatter(diabetes_X_train, diabetes_y_train, color='black')\nplt.scatter(diabetes_X_test, diabetes_y_test, color='red')\n\nplt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='red', linestyle ='dashed')",
"Example 4: Invoke existing SystemML algorithm script LinearRegDS.dml using MLContext API <a class=\"anchor\" id=\"example4\"></a>\nSystemML ships with several pre-implemented algorithms that can be invoked directly. Please refer to the algorithm reference manual for usage.",
"# Step 1: No need to write a DML script here. But, keeping it as a placeholder for consistency :)\n\n# Step 2: Create a Python DML object\nscript = dmlFromResource('scripts/algorithms/LinearRegDS.dml')\nscript = script.input(X=diabetes_X_train, y=diabetes_y_train).input('$icpt',1.0).output('beta_out')\n\n# Step 3: Execute it using MLContext API\nw = ml.execute(script).get('beta_out')\nw = w.toNumPy()\nbias = w[1]\nw = w[0]\n\nplt.scatter(diabetes_X_train, diabetes_y_train, color='black')\nplt.scatter(diabetes_X_test, diabetes_y_test, color='red')\n\nplt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='red', linestyle ='dashed')",
"Example 5: Invoke existing SystemML algorithm using scikit-learn/SparkML pipeline like API <a class=\"anchor\" id=\"example5\"></a>\nmllearn API allows a Python programmer to invoke SystemML's algorithms using scikit-learn like API as well as Spark's MLPipeline API.",
"# Step 1: No need to write a DML script here. But, keeping it as a placeholder for consistency :)\n\n# Step 2: No need to create a Python DML object. But, keeping it as a placeholder for consistency :)\n\n# Step 3: Execute Linear Regression using the mllearn API\nfrom systemml.mllearn import LinearRegression\nregr = LinearRegression(spark)\n# Train the model using the training sets\nregr.fit(diabetes_X_train, diabetes_y_train)\n\npredictions = regr.predict(diabetes_X_test)\n\n# Use the trained model to perform prediction\n%matplotlib inline\nplt.scatter(diabetes_X_train, diabetes_y_train, color='black')\nplt.scatter(diabetes_X_test, diabetes_y_test, color='red')\n\nplt.plot(diabetes_X_test, predictions, color='black')",
"Uninstall/Clean up SystemML Python package and jar file <a class=\"anchor\" id=\"uninstall\"></a>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mjlong/openmc
|
docs/source/pythonapi/examples/tally-arithmetic.ipynb
|
mit
|
[
"This notebook shows the how tallies can be combined (added, subtracted, multiplied, etc.) using the Python API in order to create derived tallies. Since no covariance information is obtained, it is assumed that tallies are completely independent of one another when propagating uncertainties. The target problem is a simple pin cell.\nNote: that this Notebook was created using the latest Pandas v0.16.1. Everything in the Notebook will wun with older versions of Pandas, but the multi-indexing option in >v0.15.0 makes the tables look prettier.",
"%load_ext autoreload\n%autoreload 2\n\nimport glob\nfrom IPython.display import Image\nimport numpy as np\n\nimport openmc\nfrom openmc.statepoint import StatePoint\nfrom openmc.summary import Summary\n\n%matplotlib inline",
"Generate Input Files\nFirst we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.",
"# Instantiate some Nuclides\nh1 = openmc.Nuclide('H-1')\nb10 = openmc.Nuclide('B-10')\no16 = openmc.Nuclide('O-16')\nu235 = openmc.Nuclide('U-235')\nu238 = openmc.Nuclide('U-238')\nzr90 = openmc.Nuclide('Zr-90')",
"With the nuclides we defined, we will now create three materials for the fuel, water, and cladding of the fuel pin.",
"# 1.6 enriched fuel\nfuel = openmc.Material(name='1.6% Fuel')\nfuel.set_density('g/cm3', 10.31341)\nfuel.add_nuclide(u235, 3.7503e-4)\nfuel.add_nuclide(u238, 2.2625e-2)\nfuel.add_nuclide(o16, 4.6007e-2)\n\n# borated water\nwater = openmc.Material(name='Borated Water')\nwater.set_density('g/cm3', 0.740582)\nwater.add_nuclide(h1, 4.9457e-2)\nwater.add_nuclide(o16, 2.4732e-2)\nwater.add_nuclide(b10, 8.0042e-6)\n\n# zircaloy\nzircaloy = openmc.Material(name='Zircaloy')\nzircaloy.set_density('g/cm3', 6.55)\nzircaloy.add_nuclide(zr90, 7.2758e-3)",
"With our three materials, we can now create a materials file object that can be exported to an actual XML file.",
"# Instantiate a MaterialsFile, add Materials\nmaterials_file = openmc.MaterialsFile()\nmaterials_file.add_material(fuel)\nmaterials_file.add_material(water)\nmaterials_file.add_material(zircaloy)\nmaterials_file.default_xs = '71c'\n\n# Export to \"materials.xml\"\nmaterials_file.export_to_xml()",
"Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and six reflective planes.",
"# Create cylinders for the fuel and clad\nfuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)\nclad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)\n\n# Create boundary planes to surround the geometry\n# Use both reflective and vacuum boundaries to make life interesting\nmin_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')\nmax_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')\nmin_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')\nmax_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')\nmin_z = openmc.ZPlane(z0=-0.63, boundary_type='reflective')\nmax_z = openmc.ZPlane(z0=+0.63, boundary_type='reflective')",
"With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.",
"# Create a Universe to encapsulate a fuel pin\npin_cell_universe = openmc.Universe(name='1.6% Fuel Pin')\n\n# Create fuel Cell\nfuel_cell = openmc.Cell(name='1.6% Fuel')\nfuel_cell.fill = fuel\nfuel_cell.region = -fuel_outer_radius\npin_cell_universe.add_cell(fuel_cell)\n\n# Create a clad Cell\nclad_cell = openmc.Cell(name='1.6% Clad')\nclad_cell.fill = zircaloy\nclad_cell.region = +fuel_outer_radius & -clad_outer_radius\npin_cell_universe.add_cell(clad_cell)\n\n# Create a moderator Cell\nmoderator_cell = openmc.Cell(name='1.6% Moderator')\nmoderator_cell.fill = water\nmoderator_cell.region = +clad_outer_radius\npin_cell_universe.add_cell(moderator_cell)",
"OpenMC requires that there is a \"root\" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.",
"# Create root Cell\nroot_cell = openmc.Cell(name='root cell')\nroot_cell.fill = pin_cell_universe\n\n# Add boundary planes\nroot_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z\n\n# Create root Universe\nroot_universe = openmc.Universe(universe_id=0, name='root universe')\nroot_universe.add_cell(root_cell)",
"We now must create a geometry that is assigned a root universe, put the geometry into a geometry file, and export it to XML.",
"# Create Geometry and set root Universe\ngeometry = openmc.Geometry()\ngeometry.root_universe = root_universe\n\n# Instantiate a GeometryFile\ngeometry_file = openmc.GeometryFile()\ngeometry_file.geometry = geometry\n\n# Export to \"geometry.xml\"\ngeometry_file.export_to_xml()",
"With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 5 inactive batches and 15 active batches each with 2500 particles.",
"# OpenMC simulation parameters\nbatches = 20\ninactive = 5\nparticles = 2500\n\n# Instantiate a SettingsFile\nsettings_file = openmc.SettingsFile()\nsettings_file.batches = batches\nsettings_file.inactive = inactive\nsettings_file.particles = particles\nsettings_file.output = {'tallies': True, 'summary': True}\nsource_bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]\nsettings_file.set_source_space('box', source_bounds)\n\n# Export to \"settings.xml\"\nsettings_file.export_to_xml()",
"Let us also create a plot file that we can use to verify that our pin cell geometry was created successfully.",
"# Instantiate a Plot\nplot = openmc.Plot(plot_id=1)\nplot.filename = 'materials-xy'\nplot.origin = [0, 0, 0]\nplot.width = [1.26, 1.26]\nplot.pixels = [250, 250]\nplot.color = 'mat'\n\n# Instantiate a PlotsFile, add Plot, and export to \"plots.xml\"\nplot_file = openmc.PlotsFile()\nplot_file.add_plot(plot)\nplot_file.export_to_xml()",
"With the plots.xml file, we can now generate and view the plot. OpenMC outputs plots in .ppm format, which can be converted into a compressed format like .png with the convert utility.",
"# Run openmc in plotting mode\nexecutor = openmc.Executor()\nexecutor.plot_geometry(output=False)\n\n# Convert OpenMC's funky ppm to png\n!convert materials-xy.ppm materials-xy.png\n\n# Display the materials plot inline\nImage(filename='materials-xy.png')",
"As we can see from the plot, we have a nice pin cell with fuel, cladding, and water! Before we run our simulation, we need to tell the code what we want to tally. The following code shows how to create a variety of tallies.",
"# Instantiate an empty TalliesFile\ntallies_file = openmc.TalliesFile()\n\n# Create Tallies to compute microscopic multi-group cross-sections\n\n# Instantiate energy filter for multi-group cross-section Tallies\nenergy_filter = openmc.Filter(type='energy', bins=[0., 0.625e-6, 20.])\n\n# Instantiate flux Tally in moderator and fuel\ntally = openmc.Tally(name='flux')\ntally.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id, moderator_cell.id]))\ntally.add_filter(energy_filter)\ntally.add_score('flux')\ntallies_file.add_tally(tally)\n\n# Instantiate reaction rate Tally in fuel\ntally = openmc.Tally(name='fuel rxn rates')\ntally.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id]))\ntally.add_filter(energy_filter)\ntally.add_score('nu-fission')\ntally.add_score('scatter')\ntally.add_nuclide(u238)\ntally.add_nuclide(u235)\ntallies_file.add_tally(tally)\n\n# Instantiate reaction rate Tally in moderator\ntally = openmc.Tally(name='moderator rxn rates')\ntally.add_filter(openmc.Filter(type='cell', bins=[moderator_cell.id]))\ntally.add_filter(energy_filter)\ntally.add_score('absorption')\ntally.add_score('total')\ntally.add_nuclide(o16)\ntally.add_nuclide(h1)\ntallies_file.add_tally(tally)\n\n# K-Eigenvalue (infinity) tallies\nfiss_rate = openmc.Tally(name='fiss. rate')\nabs_rate = openmc.Tally(name='abs. rate')\nfiss_rate.add_score('nu-fission')\nabs_rate.add_score('absorption')\ntallies_file.add_tally(fiss_rate)\ntallies_file.add_tally(abs_rate)\n\n# Resonance Escape Probability tallies\ntherm_abs_rate = openmc.Tally(name='therm. abs. rate')\ntherm_abs_rate.add_score('absorption')\ntherm_abs_rate.add_filter(openmc.Filter(type='energy', bins=[0., 0.625]))\ntallies_file.add_tally(therm_abs_rate)\n\n# Thermal Flux Utilization tallies\nfuel_therm_abs_rate = openmc.Tally(name='fuel therm. abs. rate')\nfuel_therm_abs_rate.add_score('absorption')\nfuel_therm_abs_rate.add_filter(openmc.Filter(type='energy', bins=[0., 0.625]))\nfuel_therm_abs_rate.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id]))\ntallies_file.add_tally(fuel_therm_abs_rate)\n\n# Fast Fission Factor tallies\ntherm_fiss_rate = openmc.Tally(name='therm. fiss. rate')\ntherm_fiss_rate.add_score('nu-fission')\ntherm_fiss_rate.add_filter(openmc.Filter(type='energy', bins=[0., 0.625]))\ntallies_file.add_tally(therm_fiss_rate)\n\n# Instantiate energy filter to illustrate Tally slicing\nenergy_filter = openmc.Filter(type='energy', bins=np.logspace(np.log10(1e-8), np.log10(20), 10))\n\n# Instantiate flux Tally in moderator and fuel\ntally = openmc.Tally(name='need-to-slice')\ntally.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id, moderator_cell.id]))\ntally.add_filter(energy_filter)\ntally.add_score('nu-fission')\ntally.add_score('scatter')\ntally.add_nuclide(h1)\ntally.add_nuclide(u238)\ntallies_file.add_tally(tally)\n\n# Export to \"tallies.xml\"\ntallies_file.export_to_xml()",
"Now we a have a complete set of inputs, so we can go ahead and run our simulation.",
"# Remove old HDF5 (summary, statepoint) files\n!rm statepoint.*\n\n# Run OpenMC with MPI!\nexecutor.run_simulation()",
"Tally Data Processing\nOur simulation ran successfully and created a statepoint file with all the tally data in it. We begin our analysis here loading the statepoint file and 'reading' the results. By default, the tally results are not read into memory because they might be large, even large enough to exceed the available memory on a computer.",
"# Load the statepoint file\nsp = StatePoint('statepoint.20.h5')",
"You may have also noticed we instructed OpenMC to create a summary file with lots of geometry information in it. This can help to produce more sensible output from the Python API, so we will use the summary file to link against.",
"# Load the summary file and link with statepoint\nsu = Summary('summary.h5')\nsp.link_with_summary(su)",
"We have a tally of the total fission rate and the total absorption rate, so we can calculate k-infinity as:\n$$k_\\infty = \\frac{\\langle \\nu \\Sigma_f \\phi \\rangle}{\\langle \\Sigma_a \\phi \\rangle}$$\nIn this notation, $\\langle \\cdot \\rangle^a_b$ represents an OpenMC that is integrated over region $a$ and energy range $b$. If $a$ or $b$ is not reported, it means the value represents an integral over all space or all energy, respectively.",
"# Compute k-infinity using tally arithmetic\nfiss_rate = sp.get_tally(name='fiss. rate')\nabs_rate = sp.get_tally(name='abs. rate')\nkeff = fiss_rate / abs_rate\nkeff.get_pandas_dataframe()",
"Notice that even though the neutron production rate and absorption rate are separate tallies, we still get a first-order estimate of the uncertainty on the quotient of them automatically!\nOften in textbooks you'll see k-infinity represented using the four-factor formula $$k_\\infty = p \\epsilon f \\eta.$$ Let's analyze each of these factors, starting with the resonance escape probability which is defined as $$p=\\frac{\\langle\\Sigma_a\\phi\\rangle_T}{\\langle\\Sigma_a\\phi\\rangle}$$ where the subscript $T$ means thermal energies.",
"# Compute resonance escape probability using tally arithmetic\ntherm_abs_rate = sp.get_tally(name='therm. abs. rate')\nres_esc = therm_abs_rate / abs_rate\nres_esc.get_pandas_dataframe()",
"The fast fission factor can be calculated as\n$$\\epsilon=\\frac{\\langle\\nu\\Sigma_f\\phi\\rangle}{\\langle\\nu\\Sigma_f\\phi\\rangle_T}$$",
"# Compute fast fission factor factor using tally arithmetic\ntherm_fiss_rate = sp.get_tally(name='therm. fiss. rate')\nfast_fiss = fiss_rate / therm_fiss_rate\nfast_fiss.get_pandas_dataframe()",
"The thermal flux utilization is calculated as\n$$f=\\frac{\\langle\\Sigma_a\\phi\\rangle^F_T}{\\langle\\Sigma_a\\phi\\rangle_T}$$\nwhere the superscript $F$ denotes fuel.",
"# Compute thermal flux utilization factor using tally arithmetic\nfuel_therm_abs_rate = sp.get_tally(name='fuel therm. abs. rate')\ntherm_util = fuel_therm_abs_rate / therm_abs_rate\ntherm_util.get_pandas_dataframe()",
"The final factor is the number of fission neutrons produced per absorption in fuel, calculated as $$\\eta = \\frac{\\langle \\nu\\Sigma_f\\phi \\rangle_T}{\\langle \\Sigma_a \\phi \\rangle^F_T}$$",
"# Compute neutrons produced per absorption (eta) using tally arithmetic\neta = therm_fiss_rate / fuel_therm_abs_rate\neta.get_pandas_dataframe()",
"Now we can calculate $k_\\infty$ using the product of the factors form the four-factor formula.",
"keff = res_esc * fast_fiss * therm_util * eta\nkeff.get_pandas_dataframe()",
"We see that the value we've obtained here has exactly the same mean as before. However, because of the way it was calculated, the standard deviation appears to be larger.\nLet's move on to a more complicated example now. Before we set up tallies to get reaction rates in the fuel and moderator in two energy groups for two different nuclides. We can use tally arithmetic to divide each of these reaction rates by the flux to get microscopic multi-group cross sections.",
"# Compute microscopic multi-group cross-sections\nflux = sp.get_tally(name='flux')\nflux = flux.get_slice(filters=['cell'], filter_bins=[(fuel_cell.id,)])\nfuel_rxn_rates = sp.get_tally(name='fuel rxn rates')\nmod_rxn_rates = sp.get_tally(name='moderator rxn rates')\n\nfuel_xs = fuel_rxn_rates / flux\nfuel_xs.get_pandas_dataframe()",
"We see that when the two tallies with multiple bins were divided, the derived tally contains the outer product of the combinations. If the filters/scores are the same, no outer product is needed. The get_values(...) method allows us to obtain a subset of tally scores. In the following example, we obtain just the neutron production microscopic cross sections.",
"# Show how to use Tally.get_values(...) with a CrossScore\nnu_fiss_xs = fuel_xs.get_values(scores=['(nu-fission / flux)'])\nprint(nu_fiss_xs)",
"The same idea can be used not only for scores but also for filters and nuclides.",
"# Show how to use Tally.get_values(...) with a CrossScore and CrossNuclide\nu235_scatter_xs = fuel_xs.get_values(nuclides=['(U-235 / total)'], \n scores=['(scatter / flux)'])\nprint(u235_scatter_xs)\n\n# Show how to use Tally.get_values(...) with a CrossFilter and CrossScore\nfast_scatter_xs = fuel_xs.get_values(filters=['energy'], \n filter_bins=[((0.625e-6, 20.),)], \n scores=['(scatter / flux)'])\nprint(fast_scatter_xs)",
"A more advanced method is to use get_slice(...) to create a new derived tally that is a subset of an existing tally. This has the benefit that we can use get_pandas_dataframe() to see the tallies in a more human-readable format.",
"# \"Slice\" the nu-fission data into a new derived Tally\nnu_fission_rates = fuel_rxn_rates.get_slice(scores=['nu-fission'])\nnu_fission_rates.get_pandas_dataframe()\n\n# \"Slice\" the H-1 scatter data in the moderator Cell into a new derived Tally\nneed_to_slice = sp.get_tally(name='need-to-slice')\nslice_test = need_to_slice.get_slice(scores=['scatter'], nuclides=['H-1'],\n filters=['cell'], filter_bins=[(moderator_cell.id,)])\nslice_test.get_pandas_dataframe()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.19/_downloads/243172b1ef6a2d804d3245b8c0a927ef/plot_60_maxwell_filtering_sss.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Signal-space separation (SSS) and Maxwell filtering\nThis tutorial covers reducing environmental noise and compensating for head\nmovement with SSS and Maxwell filtering.\n :depth: 2\nAs usual we'll start by importing the modules we need, loading some\nexample data <sample-dataset>, and cropping it to save on memory:",
"import os\nimport mne\n\nsample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_raw.fif')\nraw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)\nraw.crop(tmax=60).load_data()",
"Background on SSS and Maxwell filtering\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nSignal-space separation (SSS) [1] [2] is a technique based on the physics\nof electromagnetic fields. SSS separates the measured signal into components\nattributable to sources inside the measurement volume of the sensor array\n(the internal components), and components attributable to sources outside\nthe measurement volume (the external components). The internal and external\ncomponents are linearly independent, so it is possible to simply discard the\nexternal components to reduce environmental noise. Maxwell filtering is a\nrelated procedure that omits the higher-order components of the internal\nsubspace, which are dominated by sensor noise. Typically, Maxwell filtering\nand SSS are performed together (in MNE-Python they are implemented together\nin a single function).\nLike SSP <tut-artifact-ssp>, SSS is a form of projection. Whereas SSP\nempirically determines a noise subspace based on data (empty-room recordings,\nEOG or ECG activity, etc) and projects the measurements onto a subspace\northogonal to the noise, SSS mathematically constructs the external and\ninternal subspaces from spherical harmonics_ and reconstructs the sensor\nsignals using only the internal subspace (i.e., does an oblique projection).\n<div class=\"alert alert-danger\"><h4>Warning</h4><p>Maxwell filtering was originally developed for Elekta Neuromag® systems,\n and should be considered *experimental* for non-Neuromag data. See the\n Notes section of the :func:`~mne.preprocessing.maxwell_filter` docstring\n for details.</p></div>\n\nThe MNE-Python implementation of SSS / Maxwell filtering currently provides\nthe following features:\n\nBad channel reconstruction\nCross-talk cancellation\nFine calibration correction\ntSSS\nCoordinate frame translation\nRegularization of internal components using information theory\nRaw movement compensation (using head positions estimated by MaxFilter)\ncHPI subtraction (see :func:mne.chpi.filter_chpi)\nHandling of 3D (in addition to 1D) fine calibration files\nEpoch-based movement compensation as described in [1]_ through\n :func:mne.epochs.average_movements\nExperimental processing of data from (un-compensated) non-Elekta\n systems\n\nUsing SSS and Maxwell filtering in MNE-Python\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nFor optimal use of SSS with data from Elekta Neuromag® systems, you should\nprovide the path to the fine calibration file (which encodes site-specific\ninformation about sensor orientation and calibration) as well as a crosstalk\ncompensation file (which reduces interference between Elekta's co-located\nmagnetometer and paired gradiometer sensor units).",
"fine_cal_file = os.path.join(sample_data_folder, 'SSS', 'sss_cal_mgh.dat')\ncrosstalk_file = os.path.join(sample_data_folder, 'SSS', 'ct_sparse_mgh.fif')",
"Before we perform SSS we'll set a couple additional bad channels — MEG\n2313 has some DC jumps and MEG 1032 has some large-ish low-frequency\ndrifts. After that, performing SSS and Maxwell filtering is done with a\nsingle call to :func:~mne.preprocessing.maxwell_filter, with the crosstalk\nand fine calibration filenames provided (if available):",
"raw.info['bads'].extend(['MEG 1032', 'MEG 2313'])\nraw_sss = mne.preprocessing.maxwell_filter(raw, cross_talk=crosstalk_file,\n calibration=fine_cal_file)",
"<div class=\"alert alert-danger\"><h4>Warning</h4><p>Automatic bad channel detection is not currently implemented. It is\n critical to mark bad channels in ``raw.info['bads']`` *before* calling\n :func:`~mne.preprocessing.maxwell_filter` in order to prevent bad\n channel noise from spreading.</p></div>\n\nTo see the effect, we can plot the data before and after SSS / Maxwell\nfiltering.",
"raw.pick(['meg']).plot(duration=2, butterfly=True)\nraw_sss.pick(['meg']).plot(duration=2, butterfly=True)",
"Notice that channels marked as \"bad\" have been effectively repaired by SSS,\neliminating the need to perform interpolation <tut-bad-channels>.\nThe heartbeat artifact has also been substantially reduced.\nThe :func:~mne.preprocessing.maxwell_filter function has parameters\nint_order and ext_order for setting the order of the spherical\nharmonic expansion of the interior and exterior components; the default\nvalues are appropriate for most use cases. Additional parameters include\ncoord_frame and origin for controlling the coordinate frame (\"head\"\nor \"meg\") and the origin of the sphere; the defaults are appropriate for most\nstudies that include digitization of the scalp surface / electrodes. See the\ndocumentation of :func:~mne.preprocessing.maxwell_filter for details.\nSpatiotemporal SSS (tSSS)\n^^^^^^^^^^^^^^^^^^^^^^^^^\nAn assumption of SSS is that the measurement volume (the spherical shell\nwhere the sensors are physically located) is free of electromagnetic sources.\nThe thickness of this source-free measurement shell should be 4-8 cm for SSS\nto perform optimally. In practice, there may be sources falling within that\nmeasurement volume; these can often be mitigated by using Spatiotemporal\nSignal Space Separation (tSSS) [2]_. tSSS works by looking for temporal\ncorrelation between components of the internal and external subspaces, and\nprojecting out any components that are common to the internal and external\nsubspaces. The projection is done in an analogous way to\nSSP <tut-artifact-ssp>, except that the noise vector is computed\nacross time points instead of across sensors.\nTo use tSSS in MNE-Python, pass a time (in seconds) to the parameter\nst_duration of :func:~mne.preprocessing.maxwell_filter. This will\ndetermine the \"chunk duration\" over which to compute the temporal projection.\nThe chunk duration effectively acts as a high-pass filter with a cutoff\nfrequency of $\\frac{1}{\\mathtt{st_duration}}~\\mathrm{Hz}$; this\neffective high-pass has an important consequence:\n\nIn general, larger values of st_duration are better (provided that your\n computer has sufficient memory) because larger values of st_duration\n will have a smaller effect on the signal.\n\nIf the chunk duration does not evenly divide your data length, the final\n(shorter) chunk will be added to the prior chunk before filtering, leading\nto slightly different effective filtering for the combined chunk (the\neffective cutoff frequency differing at most by a factor of 2). If you need\nto ensure identical processing of all analyzed chunks, either:\n\n\nchoose a chunk duration that evenly divides your data length (only\n recommended if analyzing a single subject or run), or\n\n\ninclude at least 2 * st_duration of post-experiment recording time at\n the end of the :class:~mne.io.Raw object, so that the data you intend to\n further analyze is guaranteed not to be in the final or penultimate chunks.\n\n\nAdditional parameters affecting tSSS include st_correlation (to set the\ncorrelation value above which correlated internal and external components\nwill be projected out) and st_only (to apply only the temporal projection\nwithout also performing SSS and Maxwell filtering). 
See the docstring of\n:func:~mne.preprocessing.maxwell_filter for details.\nMovement compensation\n^^^^^^^^^^^^^^^^^^^^^\nIf you have information about subject head position relative to the sensors\n(i.e., continuous head position indicator coils, or :term:cHPI <hpi>), SSS\ncan take that into account when projecting sensor data onto the internal\nsubspace. Head position data is loaded with the\n:func:~mne.chpi.read_head_pos function. The example data\n<sample-dataset> doesn't include cHPI, so here we'll load a :file:.pos\nfile used for testing, just to demonstrate:",
"head_pos_file = os.path.join(mne.datasets.testing.data_path(), 'SSS',\n 'test_move_anon_raw.pos')\nhead_pos = mne.chpi.read_head_pos(head_pos_file)\nmne.viz.plot_head_positions(head_pos, mode='traces')",
"The cHPI data file could also be passed as the head_pos parameter of\n:func:~mne.preprocessing.maxwell_filter. Not only would this account for\nmovement within a given recording session, but also would effectively\nnormalize head position across different measurement sessions and subjects.\nSee here <example-movement-comp> for an extended example of applying\nmovement compensation during Maxwell filtering / SSS. Another option is to\napply movement compensation when averaging epochs into an\n:class:~mne.Evoked instance, using the :func:mne.epochs.average_movements\nfunction.\nEach of these approaches requires time-varying estimates of head position,\nwhich is obtained from MaxFilter using the -headpos and -hp\narguments (see the MaxFilter manual for details).\nCaveats to using SSS / Maxwell filtering\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\nThere are patents related to the Maxwell filtering algorithm, which may\n legally preclude using it in commercial applications. More details are\n provided in the documentation of\n :func:~mne.preprocessing.maxwell_filter.\n\n\nSSS works best when both magnetometers and gradiometers are present, and\n is most effective when gradiometers are planar (due to the need for very\n accurate sensor geometry and fine calibration information). Thus its\n performance is dependent on the MEG system used to collect the data.\n\n\nReferences\n^^^^^^^^^^\n.. [1] Taulu S and Kajola M. (2005). Presentation of electromagnetic\n multichannel data: The signal space separation method. J Appl Phys\n 97, 124905 1-10. https://doi.org/10.1063/1.1935742\n.. [2] Taulu S and Simola J. (2006). Spatiotemporal signal space separation\n method for rejecting nearby interference in MEG measurements. Phys\n Med Biol 51, 1759-1768. https://doi.org/10.1088/0031-9155/51/7/008\n.. LINKS"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
sthuggins/phys202-2015-work
|
assignments/assignment05/InteractEx02.ipynb
|
mit
|
[
"Interact Exercise 2\nImports",
"%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport numpy as np\n\nfrom IPython.html.widgets import interact, interactive, fixed\nfrom IPython.display import display",
"Plotting with parameters\nWrite a plot_sin1(a, b) function that plots $sin(ax+b)$ over the interval $[0,4\\pi]$.\n\nCustomize your visualization to make it effective and beautiful.\nCustomize the box, grid, spines and ticks to match the requirements of this data.\nUse enough points along the x-axis to get a smooth plot.\nFor the x-axis tick locations use integer multiples of $\\pi$.\nFor the x-axis tick labels use multiples of pi using LaTeX: $3\\pi$.",
"def plot_sine1(a, b):\n x=range(0, 4*np.pi)\n y= np.sin(a*x + b)\nplt.plot()\n #for x in range(0, 4*np.pi, np.dtype(float)):\n #plt.plot(a, b, x, np.sin(a*x+b))\n \n\nplot_sine1(5.0, 3.4)",
"Then use interact to create a user interface for exploring your function:\n\na should be a floating point slider over the interval $[0.0,5.0]$ with steps of $0.1$.\nb should be a floating point slider over the interval $[-5.0,5.0]$ with steps of $0.1$.",
"# YOUR CODE HERE\nraise NotImplementedError()\n\nassert True # leave this for grading the plot_sine1 exercise",
"In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument:\n\ndashed red: r--\nblue circles: bo\ndotted black: k.\n\nWrite a plot_sine2(a, b, style) function that has a third style argument that allows you to set the line style of the plot. The style should default to a blue line.",
"# YOUR CODE HERE\nraise NotImplementedError()\n\nplot_sine2(4.0, -1.0, 'r--')",
"Use interact to create a UI for plot_sine2.\n\nUse a slider for a and b as above.\nUse a drop down menu for selecting the line style between a dotted blue line line, black circles and red triangles.",
"# YOUR CODE HERE\nraise NotImplementedError()\n\nassert True # leave this for grading the plot_sine2 exercise"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
josef-pkt/statsmodels
|
examples/notebooks/markov_regression.ipynb
|
bsd-3-clause
|
[
"Markov switching dynamic regression models\nThis notebook provides an example of the use of Markov switching models in Statsmodels to estimate dynamic regression models with changes in regime. It follows the examples in the Stata Markov switching documentation, which can be found at http://www.stata.com/manuals14/tsmswitch.pdf.",
"%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\n\n# NBER recessions\nfrom pandas_datareader.data import DataReader\nfrom datetime import datetime\nusrec = DataReader('USREC', 'fred', start=datetime(1947, 1, 1), end=datetime(2013, 4, 1))",
"Federal funds rate with switching intercept\nThe first example models the federal funds rate as noise around a constant intercept, but where the intercept changes during different regimes. The model is simply:\n$$r_t = \\mu_{S_t} + \\varepsilon_t \\qquad \\varepsilon_t \\sim N(0, \\sigma^2)$$\nwhere $S_t \\in {0, 1}$, and the regime transitions according to\n$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =\n\\begin{bmatrix}\np_{00} & p_{10} \\\n1 - p_{00} & 1 - p_{10}\n\\end{bmatrix}\n$$\nWe will estimate the parameters of this model by maximum likelihood: $p_{00}, p_{10}, \\mu_0, \\mu_1, \\sigma^2$.\nThe data used in this example can be found at http://www.stata-press.com/data/r14/usmacro.",
"# Get the federal funds rate data\nfrom statsmodels.tsa.regime_switching.tests.test_markov_regression import fedfunds\ndta_fedfunds = pd.Series(fedfunds, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS'))\n\n# Plot the data\ndta_fedfunds.plot(title='Federal funds rate', figsize=(12,3))\n\n# Fit the model\n# (a switching mean is the default of the MarkovRegession model)\nmod_fedfunds = sm.tsa.MarkovRegression(dta_fedfunds, k_regimes=2)\nres_fedfunds = mod_fedfunds.fit()\n\nres_fedfunds.summary()",
"From the summary output, the mean federal funds rate in the first regime (the \"low regime\") is estimated to be $3.7$ whereas in the \"high regime\" it is $9.6$. Below we plot the smoothed probabilities of being in the high regime. The model suggests that the 1980's was a time-period in which a high federal funds rate existed.",
"res_fedfunds.smoothed_marginal_probabilities[1].plot(\n title='Probability of being in the high regime', figsize=(12,3));",
"From the estimated transition matrix we can calculate the expected duration of a low regime versus a high regime.",
"print(res_fedfunds.expected_durations)",
"A low regime is expected to persist for about fourteen years, whereas the high regime is expected to persist for only about five years.\nFederal funds rate with switching intercept and lagged dependent variable\nThe second example augments the previous model to include the lagged value of the federal funds rate.\n$$r_t = \\mu_{S_t} + r_{t-1} \\beta_{S_t} + \\varepsilon_t \\qquad \\varepsilon_t \\sim N(0, \\sigma^2)$$\nwhere $S_t \\in {0, 1}$, and the regime transitions according to\n$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =\n\\begin{bmatrix}\np_{00} & p_{10} \\\n1 - p_{00} & 1 - p_{10}\n\\end{bmatrix}\n$$\nWe will estimate the parameters of this model by maximum likelihood: $p_{00}, p_{10}, \\mu_0, \\mu_1, \\beta_0, \\beta_1, \\sigma^2$.",
"# Fit the model\nmod_fedfunds2 = sm.tsa.MarkovRegression(\n dta_fedfunds.iloc[1:], k_regimes=2, exog=dta_fedfunds.iloc[:-1])\nres_fedfunds2 = mod_fedfunds2.fit()\n\nres_fedfunds2.summary()",
"There are several things to notice from the summary output:\n\nThe information criteria have decreased substantially, indicating that this model has a better fit than the previous model.\nThe interpretation of the regimes, in terms of the intercept, have switched. Now the first regime has the higher intercept and the second regime has a lower intercept.\n\nExamining the smoothed probabilities of the high regime state, we now see quite a bit more variability.",
"res_fedfunds2.smoothed_marginal_probabilities[0].plot(\n title='Probability of being in the high regime', figsize=(12,3));",
"Finally, the expected durations of each regime have decreased quite a bit.",
"print(res_fedfunds2.expected_durations)",
"Taylor rule with 2 or 3 regimes\nWe now include two additional exogenous variables - a measure of the output gap and a measure of inflation - to estimate a switching Taylor-type rule with both 2 and 3 regimes to see which fits the data better.\nBecause the models can be often difficult to estimate, for the 3-regime model we employ a search over starting parameters to improve results, specifying 20 random search repetitions.",
"# Get the additional data\nfrom statsmodels.tsa.regime_switching.tests.test_markov_regression import ogap, inf\ndta_ogap = pd.Series(ogap, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS'))\ndta_inf = pd.Series(inf, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS'))\n\nexog = pd.concat((dta_fedfunds.shift(), dta_ogap, dta_inf), axis=1).iloc[4:]\n\n# Fit the 2-regime model\nmod_fedfunds3 = sm.tsa.MarkovRegression(\n dta_fedfunds.iloc[4:], k_regimes=2, exog=exog)\nres_fedfunds3 = mod_fedfunds3.fit()\n\n# Fit the 3-regime model\nnp.random.seed(12345)\nmod_fedfunds4 = sm.tsa.MarkovRegression(\n dta_fedfunds.iloc[4:], k_regimes=3, exog=exog)\nres_fedfunds4 = mod_fedfunds4.fit(search_reps=20)\n\nres_fedfunds3.summary()\n\nres_fedfunds4.summary()",
"Due to lower information criteria, we might prefer the 3-state model, with an interpretation of low-, medium-, and high-interest rate regimes. The smoothed probabilities of each regime are plotted below.",
"fig, axes = plt.subplots(3, figsize=(10,7))\n\nax = axes[0]\nax.plot(res_fedfunds4.smoothed_marginal_probabilities[0])\nax.set(title='Smoothed probability of a low-interest rate regime')\n\nax = axes[1]\nax.plot(res_fedfunds4.smoothed_marginal_probabilities[1])\nax.set(title='Smoothed probability of a medium-interest rate regime')\n\nax = axes[2]\nax.plot(res_fedfunds4.smoothed_marginal_probabilities[2])\nax.set(title='Smoothed probability of a high-interest rate regime')\n\nfig.tight_layout()",
"Switching variances\nWe can also accomodate switching variances. In particular, we consider the model\n$$\ny_t = \\mu_{S_t} + y_{t-1} \\beta_{S_t} + \\varepsilon_t \\quad \\varepsilon_t \\sim N(0, \\sigma_{S_t}^2)\n$$\nWe use maximum likelihood to estimate the parameters of this model: $p_{00}, p_{10}, \\mu_0, \\mu_1, \\beta_0, \\beta_1, \\sigma_0^2, \\sigma_1^2$.\nThe application is to absolute returns on stocks, where the data can be found at http://www.stata-press.com/data/r14/snp500.",
"# Get the federal funds rate data\nfrom statsmodels.tsa.regime_switching.tests.test_markov_regression import areturns\ndta_areturns = pd.Series(areturns, index=pd.date_range('2004-05-04', '2014-5-03', freq='W'))\n\n# Plot the data\ndta_areturns.plot(title='Absolute returns, S&P500', figsize=(12,3))\n\n# Fit the model\nmod_areturns = sm.tsa.MarkovRegression(\n dta_areturns.iloc[1:], k_regimes=2, exog=dta_areturns.iloc[:-1], switching_variance=True)\nres_areturns = mod_areturns.fit()\n\nres_areturns.summary()",
"The first regime is a low-variance regime and the second regime is a high-variance regime. Below we plot the probabilities of being in the low-variance regime. Between 2008 and 2012 there does not appear to be a clear indication of one regime guiding the economy.",
"res_areturns.smoothed_marginal_probabilities[0].plot(\n title='Probability of being in a low-variance regime', figsize=(12,3));"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GuillaumeDec/machine-learning
|
Building your Deep Neural Network - Step by Step/Building+your+Deep+Neural+Network+-+Step+by+Step+v4.ipynb
|
gpl-3.0
|
[
"Building your Deep Neural Network: Step by Step\nWelcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want!\n\nIn this notebook, you will implement all the functions required to build a deep neural network.\nIn the next assignment, you will use these functions to build a deep neural network for image classification.\n\nAfter this assignment you will be able to:\n- Use non-linear units like ReLU to improve your model\n- Build a deeper neural network (with more than 1 hidden layer)\n- Implement an easy-to-use neural network class\nNotation:\n- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer. \n - Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.\n- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example. \n - Example: $x^{(i)}$ is the $i^{th}$ training example.\n- Lowerscript $i$ denotes the $i^{th}$ entry of a vector.\n - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations).\nLet's get started!\n1 - Packages\nLet's first import all the packages that you will need during this assignment. \n- numpy is the main package for scientific computing with Python.\n- matplotlib is a library to plot graphs in Python.\n- dnn_utils provides some necessary functions for this notebook.\n- testCases provides some test cases to assess the correctness of your functions\n- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed.",
"import numpy as np\nimport h5py\nimport matplotlib.pyplot as plt\nfrom testCases_v2 import *\nfrom dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n%load_ext autoreload\n%autoreload 2\n\nnp.random.seed(1)",
"2 - Outline of the Assignment\nTo build your neural network, you will be implementing several \"helper functions\". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will:\n\nInitialize the parameters for a two-layer network and for an $L$-layer neural network.\nImplement the forward propagation module (shown in purple in the figure below).\nComplete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).\nWe give you the ACTIVATION function (relu/sigmoid).\nCombine the previous two steps into a new [LINEAR->ACTIVATION] forward function.\nStack the [LINEAR->RELU] forward function L-1 time (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.\n\n\nCompute the loss.\nImplement the backward propagation module (denoted in red in the figure below).\nComplete the LINEAR part of a layer's backward propagation step.\nWe give you the gradient of the ACTIVATE function (relu_backward/sigmoid_backward) \nCombine the previous two steps into a new [LINEAR->ACTIVATION] backward function.\nStack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function\n\n\nFinally update the parameters.\n\n<img src=\"images/final outline.png\" style=\"width:800px;height:500px;\">\n<caption><center> Figure 1</center></caption><br>\nNote that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps. \n3 - Initialization\nYou will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers.\n3.1 - 2-layer Neural Network\nExercise: Create and initialize the parameters of the 2-layer neural network.\nInstructions:\n- The model's structure is: LINEAR -> RELU -> LINEAR -> SIGMOID. \n- Use random initialization for the weight matrices. Use np.random.randn(shape)*0.01 with the correct shape.\n- Use zero initialization for the biases. Use np.zeros(shape).",
"# GRADED FUNCTION: initialize_parameters\n\ndef initialize_parameters(n_x, n_h, n_y):\n \"\"\"\n Argument:\n n_x -- size of the input layer\n n_h -- size of the hidden layer\n n_y -- size of the output layer\n \n Returns:\n parameters -- python dictionary containing your parameters:\n W1 -- weight matrix of shape (n_h, n_x)\n b1 -- bias vector of shape (n_h, 1)\n W2 -- weight matrix of shape (n_y, n_h)\n b2 -- bias vector of shape (n_y, 1)\n \"\"\"\n \n np.random.seed(1)\n \n ### START CODE HERE ### (≈ 4 lines of code)\n W1 = None\n b1 = None\n W2 = None\n b2 = None\n ### END CODE HERE ###\n \n assert(W1.shape == (n_h, n_x))\n assert(b1.shape == (n_h, 1))\n assert(W2.shape == (n_y, n_h))\n assert(b2.shape == (n_y, 1))\n \n parameters = {\"W1\": W1,\n \"b1\": b1,\n \"W2\": W2,\n \"b2\": b2}\n \n return parameters\n\nparameters = initialize_parameters(3,2,1)\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))",
"Expected output:\n<table style=\"width:80%\">\n <tr>\n <td> **W1** </td>\n <td> [[ 0.01624345 -0.00611756 -0.00528172]\n [-0.01072969 0.00865408 -0.02301539]] </td> \n </tr>\n\n <tr>\n <td> **b1**</td>\n <td>[[ 0.]\n [ 0.]]</td> \n </tr>\n\n <tr>\n <td>**W2**</td>\n <td> [[ 0.01744812 -0.00761207]]</td>\n </tr>\n\n <tr>\n <td> **b2** </td>\n <td> [[ 0.]] </td> \n </tr>\n\n</table>\n\n3.2 - L-layer Neural Network\nThe initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the initialize_parameters_deep, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then:\n<table style=\"width:100%\">\n\n\n <tr>\n <td> </td> \n <td> **Shape of W** </td> \n <td> **Shape of b** </td> \n <td> **Activation** </td>\n <td> **Shape of Activation** </td> \n <tr>\n\n <tr>\n <td> **Layer 1** </td> \n <td> $(n^{[1]},12288)$ </td> \n <td> $(n^{[1]},1)$ </td> \n <td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td> \n\n <td> $(n^{[1]},209)$ </td> \n <tr>\n\n <tr>\n <td> **Layer 2** </td> \n <td> $(n^{[2]}, n^{[1]})$ </td> \n <td> $(n^{[2]},1)$ </td> \n <td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td> \n <td> $(n^{[2]}, 209)$ </td> \n <tr>\n\n <tr>\n <td> $\\vdots$ </td> \n <td> $\\vdots$ </td> \n <td> $\\vdots$ </td> \n <td> $\\vdots$</td> \n <td> $\\vdots$ </td> \n <tr>\n\n <tr>\n <td> **Layer L-1** </td> \n <td> $(n^{[L-1]}, n^{[L-2]})$ </td> \n <td> $(n^{[L-1]}, 1)$ </td> \n <td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td> \n <td> $(n^{[L-1]}, 209)$ </td> \n <tr>\n\n\n <tr>\n <td> **Layer L** </td> \n <td> $(n^{[L]}, n^{[L-1]})$ </td> \n <td> $(n^{[L]}, 1)$ </td>\n <td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td>\n <td> $(n^{[L]}, 209)$ </td> \n <tr>\n\n</table>\n\nRemember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if: \n$$ W = \\begin{bmatrix}\n j & k & l\\\n m & n & o \\\n p & q & r \n\\end{bmatrix}\\;\\;\\; X = \\begin{bmatrix}\n a & b & c\\\n d & e & f \\\n g & h & i \n\\end{bmatrix} \\;\\;\\; b =\\begin{bmatrix}\n s \\\n t \\\n u\n\\end{bmatrix}\\tag{2}$$\nThen $WX + b$ will be:\n$$ WX + b = \\begin{bmatrix}\n (ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\\\n (ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\\\n (pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u\n\\end{bmatrix}\\tag{3} $$\nExercise: Implement initialization for an L-layer Neural Network. \nInstructions:\n- The model's structure is [LINEAR -> RELU] $ \\times$ (L-1) -> LINEAR -> SIGMOID. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.\n- Use random initialization for the weight matrices. Use np.random.rand(shape) * 0.01.\n- Use zeros initialization for the biases. Use np.zeros(shape).\n- We will store $n^{[l]}$, the number of units in different layers, in a variable layer_dims. For example, the layer_dims for the \"Planar Data classification model\" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. Thus means W1's shape was (4,2), b1 was (4,1), W2 was (1,4) and b2 was (1,1). Now you will generalize this to $L$ layers! \n- Here is the implementation for $L=1$ (one layer neural network). 
It should inspire you to implement the general case (L-layer neural network).\npython\n if L == 1:\n parameters[\"W\" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01\n parameters[\"b\" + str(L)] = np.zeros((layer_dims[1], 1))",
"# GRADED FUNCTION: initialize_parameters_deep\n\ndef initialize_parameters_deep(layer_dims):\n \"\"\"\n Arguments:\n layer_dims -- python array (list) containing the dimensions of each layer in our network\n \n Returns:\n parameters -- python dictionary containing your parameters \"W1\", \"b1\", ..., \"WL\", \"bL\":\n Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])\n bl -- bias vector of shape (layer_dims[l], 1)\n \"\"\"\n \n np.random.seed(3)\n parameters = {}\n L = len(layer_dims) # number of layers in the network\n\n for l in range(1, L):\n ### START CODE HERE ### (≈ 2 lines of code)\n parameters['W' + str(l)] = None\n parameters['b' + str(l)] = None\n ### END CODE HERE ###\n \n assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))\n assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))\n\n \n return parameters\n\nparameters = initialize_parameters_deep([5,4,3])\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))",
"Expected output:\n<table style=\"width:80%\">\n <tr>\n <td> **W1** </td>\n <td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]\n [-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]\n [-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]\n [-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td> \n </tr>\n\n <tr>\n <td>**b1** </td>\n <td>[[ 0.]\n [ 0.]\n [ 0.]\n [ 0.]]</td> \n </tr>\n\n <tr>\n <td>**W2** </td>\n <td>[[-0.01185047 -0.0020565 0.01486148 0.00236716]\n [-0.01023785 -0.00712993 0.00625245 -0.00160513]\n [-0.00768836 -0.00230031 0.00745056 0.01976111]]</td> \n </tr>\n\n <tr>\n <td>**b2** </td>\n <td>[[ 0.]\n [ 0.]\n [ 0.]]</td> \n </tr>\n\n</table>\n\n4 - Forward propagation module\n4.1 - Linear Forward\nNow that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:\n\nLINEAR\nLINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid. \n[LINEAR -> RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID (whole model)\n\nThe linear forward module (vectorized over all the examples) computes the following equations:\n$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\\tag{4}$$\nwhere $A^{[0]} = X$. \nExercise: Build the linear part of forward propagation.\nReminder:\nThe mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find np.dot() useful. If your dimensions don't match, printing W.shape may help.",
"# GRADED FUNCTION: linear_forward\n\ndef linear_forward(A, W, b):\n \"\"\"\n Implement the linear part of a layer's forward propagation.\n\n Arguments:\n A -- activations from previous layer (or input data): (size of previous layer, number of examples)\n W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)\n b -- bias vector, numpy array of shape (size of the current layer, 1)\n\n Returns:\n Z -- the input of the activation function, also called pre-activation parameter \n cache -- a python dictionary containing \"A\", \"W\" and \"b\" ; stored for computing the backward pass efficiently\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line of code)\n Z = None\n ### END CODE HERE ###\n \n assert(Z.shape == (W.shape[0], A.shape[1]))\n cache = (A, W, b)\n \n return Z, cache\n\nA, W, b = linear_forward_test_case()\n\nZ, linear_cache = linear_forward(A, W, b)\nprint(\"Z = \" + str(Z))",
"Expected output:\n<table style=\"width:35%\">\n\n <tr>\n <td> **Z** </td>\n <td> [[ 3.26295337 -1.23429987]] </td> \n </tr>\n\n</table>\n\n4.2 - Linear-Activation Forward\nIn this notebook, you will use two activation functions:\n\n\nSigmoid: $\\sigma(Z) = \\sigma(W A + b) = \\frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the sigmoid function. This function returns two items: the activation value \"a\" and a \"cache\" that contains \"Z\" (it's what we will feed in to the corresponding backward function). To use it you could just call: \npython\nA, activation_cache = sigmoid(Z)\n\n\nReLU: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. We have provided you with the relu function. This function returns two items: the activation value \"A\" and a \"cache\" that contains \"Z\" (it's what we will feed in to the corresponding backward function). To use it you could just call:\npython\nA, activation_cache = relu(Z)\n\n\nFor more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.\nExercise: Implement the forward propagation of the LINEAR->ACTIVATION layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation \"g\" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.",
"# GRADED FUNCTION: linear_activation_forward\n\ndef linear_activation_forward(A_prev, W, b, activation):\n \"\"\"\n Implement the forward propagation for the LINEAR->ACTIVATION layer\n\n Arguments:\n A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)\n W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)\n b -- bias vector, numpy array of shape (size of the current layer, 1)\n activation -- the activation to be used in this layer, stored as a text string: \"sigmoid\" or \"relu\"\n\n Returns:\n A -- the output of the activation function, also called the post-activation value \n cache -- a python dictionary containing \"linear_cache\" and \"activation_cache\";\n stored for computing the backward pass efficiently\n \"\"\"\n \n if activation == \"sigmoid\":\n # Inputs: \"A_prev, W, b\". Outputs: \"A, activation_cache\".\n ### START CODE HERE ### (≈ 2 lines of code)\n Z, linear_cache = None\n A, activation_cache = None\n ### END CODE HERE ###\n \n elif activation == \"relu\":\n # Inputs: \"A_prev, W, b\". Outputs: \"A, activation_cache\".\n ### START CODE HERE ### (≈ 2 lines of code)\n Z, linear_cache = None\n A, activation_cache = None\n ### END CODE HERE ###\n \n assert (A.shape == (W.shape[0], A_prev.shape[1]))\n cache = (linear_cache, activation_cache)\n\n return A, cache\n\nA_prev, W, b = linear_activation_forward_test_case()\n\nA, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = \"sigmoid\")\nprint(\"With sigmoid: A = \" + str(A))\n\nA, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = \"relu\")\nprint(\"With ReLU: A = \" + str(A))",
"Expected output:\n<table style=\"width:35%\">\n <tr>\n <td> **With sigmoid: A ** </td>\n <td > [[ 0.96890023 0.11013289]]</td> \n </tr>\n <tr>\n <td> **With ReLU: A ** </td>\n <td > [[ 3.43896131 0. ]]</td> \n </tr>\n</table>\n\nNote: In deep learning, the \"[LINEAR->ACTIVATION]\" computation is counted as a single layer in the neural network, not two layers. \nd) L-Layer Model\nFor even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (linear_activation_forward with RELU) $L-1$ times, then follows that with one linear_activation_forward with SIGMOID.\n<img src=\"images/model_architecture_kiank.png\" style=\"width:600px;height:300px;\">\n<caption><center> Figure 2 : [LINEAR -> RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID model</center></caption><br>\nExercise: Implement the forward propagation of the above model.\nInstruction: In the code below, the variable AL will denote $A^{[L]} = \\sigma(Z^{[L]}) = \\sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called Yhat, i.e., this is $\\hat{Y}$.) \nTips:\n- Use the functions you had previously written \n- Use a for loop to replicate [LINEAR->RELU] (L-1) times\n- Don't forget to keep track of the caches in the \"caches\" list. To add a new value c to a list, you can use list.append(c).",
"# GRADED FUNCTION: L_model_forward\n\ndef L_model_forward(X, parameters):\n \"\"\"\n Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation\n \n Arguments:\n X -- data, numpy array of shape (input size, number of examples)\n parameters -- output of initialize_parameters_deep()\n \n Returns:\n AL -- last post-activation value\n caches -- list of caches containing:\n every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2)\n the cache of linear_sigmoid_forward() (there is one, indexed L-1)\n \"\"\"\n\n caches = []\n A = X\n L = len(parameters) // 2 # number of layers in the neural network\n \n # Implement [LINEAR -> RELU]*(L-1). Add \"cache\" to the \"caches\" list.\n for l in range(1, L):\n A_prev = A \n ### START CODE HERE ### (≈ 2 lines of code)\n A, cache = None\n \n ### END CODE HERE ###\n \n # Implement LINEAR -> SIGMOID. Add \"cache\" to the \"caches\" list.\n ### START CODE HERE ### (≈ 2 lines of code)\n AL, cache = None\n \n ### END CODE HERE ###\n \n assert(AL.shape == (1,X.shape[1]))\n \n return AL, caches\n\nX, parameters = L_model_forward_test_case()\nAL, caches = L_model_forward(X, parameters)\nprint(\"AL = \" + str(AL))\nprint(\"Length of caches list = \" + str(len(caches)))",
"<table style=\"width:40%\">\n <tr>\n <td> **AL** </td>\n <td > [[ 0.17007265 0.2524272 ]]</td> \n </tr>\n <tr>\n <td> **Length of caches list ** </td>\n <td > 2</td> \n </tr>\n</table>\n\nGreat! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in \"caches\". Using $A^{[L]}$, you can compute the cost of your predictions.\n5 - Cost function\nNow you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning.\nExercise: Compute the cross-entropy cost $J$, using the following formula: $$-\\frac{1}{m} \\sum\\limits_{i = 1}^{m} (y^{(i)}\\log\\left(a^{[L] (i)}\\right) + (1-y^{(i)})\\log\\left(1- a^{L}\\right)) \\tag{7}$$",
"# GRADED FUNCTION: compute_cost\n\ndef compute_cost(AL, Y):\n \"\"\"\n Implement the cost function defined by equation (7).\n\n Arguments:\n AL -- probability vector corresponding to your label predictions, shape (1, number of examples)\n Y -- true \"label\" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)\n\n Returns:\n cost -- cross-entropy cost\n \"\"\"\n \n m = Y.shape[1]\n\n # Compute loss from aL and y.\n ### START CODE HERE ### (≈ 1 lines of code)\n cost = None\n ### END CODE HERE ###\n \n cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).\n assert(cost.shape == ())\n \n return cost\n\nY, AL = compute_cost_test_case()\n\nprint(\"cost = \" + str(compute_cost(AL, Y)))",
"Expected Output:\n<table>\n\n <tr>\n <td>**cost** </td>\n <td> 0.41493159961539694</td> \n </tr>\n</table>\n\n6 - Backward propagation module\nJust like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters. \nReminder: \n<img src=\"images/backprop_kiank.png\" style=\"width:650px;height:250px;\">\n<caption><center> Figure 3 : Forward and Backward propagation for LINEAR->RELU->LINEAR->SIGMOID <br> The purple blocks represent the forward propagation, and the red blocks represent the backward propagation. </center></caption>\n<!-- \nFor those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:\n\n$$\\frac{d \\mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \\frac{d\\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\\frac{{da^{[2]}}}{{dz^{[2]}}}\\frac{{dz^{[2]}}}{{da^{[1]}}}\\frac{{da^{[1]}}}{{dz^{[1]}}} \\tag{8} $$\n\nIn order to calculate the gradient $dW^{[1]} = \\frac{\\partial L}{\\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \\times \\frac{\\partial z^{[1]} }{\\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.\n\nEquivalently, in order to calculate the gradient $db^{[1]} = \\frac{\\partial L}{\\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \\times \\frac{\\partial z^{[1]} }{\\partial b^{[1]}}$.\n\nThis is why we talk about **backpropagation**.\n!-->\n\nNow, similar to forward propagation, you are going to build the backward propagation in three steps:\n- LINEAR backward\n- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation\n- [LINEAR -> RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)\n6.1 - Linear backward\nFor layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).\nSuppose you have already calculated the derivative $dZ^{[l]} = \\frac{\\partial \\mathcal{L} }{\\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]} dA^{[l-1]})$.\n<img src=\"images/linearback_kiank.png\" style=\"width:250px;height:300px;\">\n<caption><center> Figure 4 </center></caption>\nThe three outputs $(dW^{[l]}, db^{[l]}, dA^{[l]})$ are computed using the input $dZ^{[l]}$.Here are the formulas you need:\n$$ dW^{[l]} = \\frac{\\partial \\mathcal{L} }{\\partial W^{[l]}} = \\frac{1}{m} dZ^{[l]} A^{[l-1] T} \\tag{8}$$\n$$ db^{[l]} = \\frac{\\partial \\mathcal{L} }{\\partial b^{[l]}} = \\frac{1}{m} \\sum_{i = 1}^{m} dZ^{l}\\tag{9}$$\n$$ dA^{[l-1]} = \\frac{\\partial \\mathcal{L} }{\\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \\tag{10}$$\nExercise: Use the 3 formulas above to implement linear_backward().",
"# GRADED FUNCTION: linear_backward\n\ndef linear_backward(dZ, cache):\n \"\"\"\n Implement the linear portion of backward propagation for a single layer (layer l)\n\n Arguments:\n dZ -- Gradient of the cost with respect to the linear output (of current layer l)\n cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer\n\n Returns:\n dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev\n dW -- Gradient of the cost with respect to W (current layer l), same shape as W\n db -- Gradient of the cost with respect to b (current layer l), same shape as b\n \"\"\"\n A_prev, W, b = cache\n m = A_prev.shape[1]\n\n ### START CODE HERE ### (≈ 3 lines of code)\n dW = None\n db = None\n dA_prev = None\n ### END CODE HERE ###\n \n assert (dA_prev.shape == A_prev.shape)\n assert (dW.shape == W.shape)\n assert (db.shape == b.shape)\n \n return dA_prev, dW, db\n\n# Set up some test inputs\ndZ, linear_cache = linear_backward_test_case()\n\ndA_prev, dW, db = linear_backward(dZ, linear_cache)\nprint (\"dA_prev = \"+ str(dA_prev))\nprint (\"dW = \" + str(dW))\nprint (\"db = \" + str(db))",
"Expected Output: \n<table style=\"width:90%\">\n <tr>\n <td> **dA_prev** </td>\n <td > [[ 0.51822968 -0.19517421]\n [-0.40506361 0.15255393]\n [ 2.37496825 -0.89445391]] </td> \n </tr> \n\n <tr>\n <td> **dW** </td>\n <td > [[-0.10076895 1.40685096 1.64992505]] </td> \n </tr> \n\n <tr>\n <td> **db** </td>\n <td> [[ 0.50629448]] </td> \n </tr> \n\n</table>\n\n6.2 - Linear-Activation backward\nNext, you will create a function that merges the two helper functions: linear_backward and the backward step for the activation linear_activation_backward. \nTo help you implement linear_activation_backward, we provided two backward functions:\n- sigmoid_backward: Implements the backward propagation for SIGMOID unit. You can call it as follows:\npython\ndZ = sigmoid_backward(dA, activation_cache)\n\nrelu_backward: Implements the backward propagation for RELU unit. You can call it as follows:\n\npython\ndZ = relu_backward(dA, activation_cache)\nIf $g(.)$ is the activation function, \nsigmoid_backward and relu_backward compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \\tag{11}$$. \nExercise: Implement the backpropagation for the LINEAR->ACTIVATION layer.",
"# GRADED FUNCTION: linear_activation_backward\n\ndef linear_activation_backward(dA, cache, activation):\n \"\"\"\n Implement the backward propagation for the LINEAR->ACTIVATION layer.\n \n Arguments:\n dA -- post-activation gradient for current layer l \n cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently\n activation -- the activation to be used in this layer, stored as a text string: \"sigmoid\" or \"relu\"\n \n Returns:\n dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev\n dW -- Gradient of the cost with respect to W (current layer l), same shape as W\n db -- Gradient of the cost with respect to b (current layer l), same shape as b\n \"\"\"\n linear_cache, activation_cache = cache\n \n if activation == \"relu\":\n ### START CODE HERE ### (≈ 2 lines of code)\n dZ = None\n dA_prev, dW, db = None\n ### END CODE HERE ###\n \n elif activation == \"sigmoid\":\n ### START CODE HERE ### (≈ 2 lines of code)\n dZ = None\n dA_prev, dW, db = None\n ### END CODE HERE ###\n \n return dA_prev, dW, db\n\nAL, linear_activation_cache = linear_activation_backward_test_case()\n\ndA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = \"sigmoid\")\nprint (\"sigmoid:\")\nprint (\"dA_prev = \"+ str(dA_prev))\nprint (\"dW = \" + str(dW))\nprint (\"db = \" + str(db) + \"\\n\")\n\ndA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = \"relu\")\nprint (\"relu:\")\nprint (\"dA_prev = \"+ str(dA_prev))\nprint (\"dW = \" + str(dW))\nprint (\"db = \" + str(db))",
"Expected output with sigmoid:\n<table style=\"width:100%\">\n <tr>\n <td > dA_prev </td> \n <td >[[ 0.11017994 0.01105339]\n [ 0.09466817 0.00949723]\n [-0.05743092 -0.00576154]] </td> \n\n </tr> \n\n <tr>\n <td > dW </td> \n <td > [[ 0.10266786 0.09778551 -0.01968084]] </td> \n </tr> \n\n <tr>\n <td > db </td> \n <td > [[-0.05729622]] </td> \n </tr> \n</table>\n\nExpected output with relu\n<table style=\"width:100%\">\n <tr>\n <td > dA_prev </td> \n <td > [[ 0.44090989 0. ]\n [ 0.37883606 0. ]\n [-0.2298228 0. ]] </td> \n\n </tr> \n\n <tr>\n <td > dW </td> \n <td > [[ 0.44513824 0.37371418 -0.10478989]] </td> \n </tr> \n\n <tr>\n <td > db </td> \n <td > [[-0.20837892]] </td> \n </tr> \n</table>\n\n6.3 - L-Model Backward\nNow you will implement the backward function for the whole network. Recall that when you implemented the L_model_forward function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the L_model_backward function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass. \n<img src=\"images/mn_backward.png\" style=\"width:450px;height:300px;\">\n<caption><center> Figure 5 : Backward pass </center></caption>\n Initializing backpropagation:\nTo backpropagate through this network, we know that the output is, \n$A^{[L]} = \\sigma(Z^{[L]})$. Your code thus needs to compute dAL $= \\frac{\\partial \\mathcal{L}}{\\partial A^{[L]}}$.\nTo do so, use this formula (derived using calculus which you don't need in-depth knowledge of):\npython\ndAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL\nYou can then use this post-activation gradient dAL to keep going backward. As seen in Figure 5, you can now feed in dAL into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a for loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula : \n$$grads[\"dW\" + str(l)] = dW^{[l]}\\tag{15} $$\nFor example, for $l=3$ this would store $dW^{[l]}$ in grads[\"dW3\"].\nExercise: Implement backpropagation for the [LINEAR->RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID model.",
"# GRADED FUNCTION: L_model_backward\n\ndef L_model_backward(AL, Y, caches):\n \"\"\"\n Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group\n \n Arguments:\n AL -- probability vector, output of the forward propagation (L_model_forward())\n Y -- true \"label\" vector (containing 0 if non-cat, 1 if cat)\n caches -- list of caches containing:\n every cache of linear_activation_forward() with \"relu\" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)\n the cache of linear_activation_forward() with \"sigmoid\" (it's caches[L-1])\n \n Returns:\n grads -- A dictionary with the gradients\n grads[\"dA\" + str(l)] = ... \n grads[\"dW\" + str(l)] = ...\n grads[\"db\" + str(l)] = ... \n \"\"\"\n grads = {}\n L = len(caches) # the number of layers\n m = AL.shape[1]\n Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL\n \n # Initializing the backpropagation\n ### START CODE HERE ### (1 line of code)\n dAL = None\n ### END CODE HERE ###\n \n # Lth layer (SIGMOID -> LINEAR) gradients. Inputs: \"AL, Y, caches\". Outputs: \"grads[\"dAL\"], grads[\"dWL\"], grads[\"dbL\"]\n ### START CODE HERE ### (approx. 2 lines)\n current_cache = None\n grads[\"dA\" + str(L)], grads[\"dW\" + str(L)], grads[\"db\" + str(L)] = None\n ### END CODE HERE ###\n \n for l in reversed(range(L-1)):\n # lth layer: (RELU -> LINEAR) gradients.\n # Inputs: \"grads[\"dA\" + str(l + 2)], caches\". Outputs: \"grads[\"dA\" + str(l + 1)] , grads[\"dW\" + str(l + 1)] , grads[\"db\" + str(l + 1)] \n ### START CODE HERE ### (approx. 5 lines)\n current_cache = None\n dA_prev_temp, dW_temp, db_temp = None\n grads[\"dA\" + str(l + 1)] = None\n grads[\"dW\" + str(l + 1)] = None\n grads[\"db\" + str(l + 1)] = None\n ### END CODE HERE ###\n\n return grads\n\nAL, Y_assess, caches = L_model_backward_test_case()\ngrads = L_model_backward(AL, Y_assess, caches)\nprint (\"dW1 = \"+ str(grads[\"dW1\"]))\nprint (\"db1 = \"+ str(grads[\"db1\"]))\nprint (\"dA1 = \"+ str(grads[\"dA1\"]))",
"Expected Output\n<table style=\"width:60%\">\n\n <tr>\n <td > dW1 </td> \n <td > [[ 0.41010002 0.07807203 0.13798444 0.10502167]\n [ 0. 0. 0. 0. ]\n [ 0.05283652 0.01005865 0.01777766 0.0135308 ]] </td> \n </tr> \n\n <tr>\n <td > db1 </td> \n <td > [[-0.22007063]\n [ 0. ]\n [-0.02835349]] </td> \n </tr> \n\n <tr>\n <td > dA1 </td> \n <td > [[ 0. 0.52257901]\n [ 0. -0.3269206 ]\n [ 0. -0.32070404]\n [ 0. -0.74079187]] </td> \n\n </tr> \n</table>\n\n6.4 - Update Parameters\nIn this section you will update the parameters of the model, using gradient descent: \n$$ W^{[l]} = W^{[l]} - \\alpha \\text{ } dW^{[l]} \\tag{16}$$\n$$ b^{[l]} = b^{[l]} - \\alpha \\text{ } db^{[l]} \\tag{17}$$\nwhere $\\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary. \nExercise: Implement update_parameters() to update your parameters using gradient descent.\nInstructions:\nUpdate parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.",
"# GRADED FUNCTION: update_parameters\n\ndef update_parameters(parameters, grads, learning_rate):\n \"\"\"\n Update parameters using gradient descent\n \n Arguments:\n parameters -- python dictionary containing your parameters \n grads -- python dictionary containing your gradients, output of L_model_backward\n \n Returns:\n parameters -- python dictionary containing your updated parameters \n parameters[\"W\" + str(l)] = ... \n parameters[\"b\" + str(l)] = ...\n \"\"\"\n \n L = len(parameters) // 2 # number of layers in the neural network\n\n # Update rule for each parameter. Use a for loop.\n ### START CODE HERE ### (≈ 3 lines of code)\n for l in range(L):\n parameters[\"W\" + str(l+1)] = None\n parameters[\"b\" + str(l+1)] = None\n ### END CODE HERE ###\n \n return parameters\n\nparameters, grads = update_parameters_test_case()\nparameters = update_parameters(parameters, grads, 0.1)\n\nprint (\"W1 = \"+ str(parameters[\"W1\"]))\nprint (\"b1 = \"+ str(parameters[\"b1\"]))\nprint (\"W2 = \"+ str(parameters[\"W2\"]))\nprint (\"b2 = \"+ str(parameters[\"b2\"]))",
"Expected Output:\n<table style=\"width:100%\"> \n <tr>\n <td > W1 </td> \n <td > [[-0.59562069 -0.09991781 -2.14584584 1.82662008]\n [-1.76569676 -0.80627147 0.51115557 -1.18258802]\n [-1.0535704 -0.86128581 0.68284052 2.20374577]] </td> \n </tr> \n\n <tr>\n <td > b1 </td> \n <td > [[-0.04659241]\n [-1.28888275]\n [ 0.53405496]] </td> \n </tr> \n <tr>\n <td > W2 </td> \n <td > [[-0.55569196 0.0354055 1.32964895]]</td> \n </tr> \n\n <tr>\n <td > b2 </td> \n <td > [[-0.84610769]] </td> \n </tr> \n</table>\n\n7 - Conclusion\nCongrats on implementing all the functions required for building a deep neural network! \nWe know it was a long assignment but going forward it will only get better. The next part of the assignment is easier. \nIn the next assignment you will put all these together to build two models:\n- A two-layer neural network\n- An L-layer neural network\nYou will in fact use these models to classify cat vs non-cat images!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
YoungKwonJo/mlxtend
|
docs/examples/classifier_nn_mlp.ipynb
|
bsd-3-clause
|
[
"%load_ext watermark\n%watermark -a 'Sebastian Raschka' -d -v -p mlxtend",
"mlxtend - Multilayer Perceptron Examples\nSections\n\nClassify Iris\nClassify handwritten digits from MNIST\n\n<br>\n<br>\nClassify Iris\nLoad 2 features from Iris (petal length and petal width) for visualization purposes.",
"from mlxtend.data import iris_data\nX, y = iris_data()\nX = X[:, 2:]",
"Train neural network for 3 output flower classes ('Setosa', 'Versicolor', 'Virginica'), regular gradient decent (minibatches=1), 30 hidden units, and no regularization.",
"from mlxtend.classifier import NeuralNetMLP\nimport numpy as np\n\nnn1 = NeuralNetMLP(n_output=3, \n n_features=X.shape[1], \n n_hidden=30, \n l2=0.0, \n l1=0.0, \n epochs=5000, \n eta=0.001, \n alpha=0.00,\n minibatches=1, \n shuffle=True,\n random_state=0)\n\nnn1.fit(X, y)\ny_pred = nn1.predict(X)\nacc = np.sum(y == y_pred, axis=0) / X.shape[0]\nprint('Accuracy: %.2f%%' % (acc * 100))",
"Now, check if the gradient descent converged after 5000 epochs, and choose smaller learning rate (eta) otherwise.",
"import matplotlib.pyplot as plt\n%matplotlib inline\nplt.plot(range(len(nn1.cost_)), nn1.cost_)\nplt.ylim([0, 300])\nplt.ylabel('Cost')\nplt.xlabel('Epochs')\nplt.grid()\nplt.show()",
"Standardize features for smoother and faster convergence.",
"X_std = np.copy(X)\nfor i in range(2):\n X_std[:,i] = (X[:,i] - X[:,i].mean()) / X[:,i].std()\n\nnn2 = NeuralNetMLP(n_output=3, \n n_features=X_std.shape[1], \n n_hidden=30, \n l2=0.0, \n l1=0.0, \n epochs=1000, \n eta=0.05,\n alpha=0.1,\n minibatches=1, \n shuffle=True,\n random_state=1)\n\nnn2.fit(X_std, y)\ny_pred = nn2.predict(X_std)\nacc = np.sum(y == y_pred, axis=0) / X_std.shape[0]\nprint('Accuracy: %.2f%%' % (acc * 100))\n\nplt.plot(range(len(nn2.cost_)), nn2.cost_)\nplt.ylim([0, 300])\nplt.ylabel('Cost')\nplt.xlabel('Epochs')\nplt.show()",
"Visualize the decision regions.",
"from mlxtend.evaluate import plot_decision_regions\n\nplot_decision_regions(X, y, clf=nn1)\nplt.xlabel('petal length [cm]')\nplt.ylabel('petal width [cm]')\nplt.show()",
"<br>\n<br>\nClassify handwritten digits from MNIST\nLoad a 5000-sample subset of the MNIST dataset.",
"from mlxtend.data import mnist_data\nX, y = mnist_data()",
"Visualize a sample from the MNIST dataset.",
"def plot_digit(X, y, idx):\n img = X[idx].reshape(28,28)\n plt.imshow(img, cmap='Greys', interpolation='nearest')\n plt.title('true label: %d' % y[idx])\n plt.show()\n\nplot_digit(X, y, 4) ",
"Initialize the neural network to recognize the 10 different digits (0-10) using 300 epochs and minibatch learning.",
"nn = NeuralNetMLP(n_output=10, n_features=X.shape[1], \n n_hidden=100, \n l2=0.0, \n l1=0.0, \n epochs=300, \n eta=0.0005,\n alpha=0.0,\n minibatches=50, \n random_state=1)",
"Learn the features while printing the progress to get an idea about how long it may take.",
"nn.fit(X, y, print_progress=True)\ny_pred = nn.predict(X)\nacc = np.sum(y == y_pred, axis=0) / X.shape[0]\nprint('Accuracy: %.2f%%' % (acc * 100))",
"Check for convergence.",
"plt.plot(range(len(nn.cost_)), nn.cost_)\nplt.ylim([0, 500])\nplt.ylabel('Cost')\nplt.xlabel('Mini-batches * Epochs')\nplt.show()\n\nplt.plot(range(len(nn.cost_)//50), nn.cost_[::50], color='red')\nplt.ylim([0, 500])\nplt.ylabel('Cost')\nplt.xlabel('Epochs')\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
robertoalotufo/ia898
|
deliver/Aula_10_Wavelets.ipynb
|
mit
|
[
"Aula 10 Discrete Wavelets Transform\nExercícios\nisccsym\nNão é fácil projetar um conjunto de testes para garantir que o seu programa esteja correta.\nNo caso em que o resultado é Falso, i.e., não simétrico, basta um pixel não ser simétrico para\no resultado ser Falso. Com isso, o conjunto de teste com uma imagem enorme com muitos pixels\nnão simétricos não é bom teste. Por exemplo, neste caso, faltou um teste onde tudo seja\nsimétrico, com exceção da origem (F[0,0]).\nSolução apresentada pelo Marcelo, onde é comparado com a imagem refletida com translação periódica de 1 deslocamento é bem conceitual. A solução do Deângelo parece ser a mais rápida: não foi feita nenhuma cópia e comparou apenas com metade dos pixels.\nExiste ainda pequeno problema a ser encontrado na questão de utilizar apenas metade dos pixels\npara serem comparados.\nminify\nA redução da imagem deve ser feita com uma filtragem inicial de período de corte 2.r onde r é o\nfator de redução da imagem. A seguir, é feita a reamostragem (decimação).\nPara se fazer a redução no domínio da frequência, bastaria recortar o espectro da imagem original\ne fazer a transforma inversa de Fourier.\nresize\nVerificou-se que a melhor função de ampliação/redução é a scipy.misc.imresize, tanto na qualidade como\nna rapidez.",
"import numpy as np\nimport sys,os\nimport matplotlib.image as mpimg\nia898path = os.path.abspath('../../')\nif ia898path not in sys.path:\n sys.path.append(ia898path)\nimport ia898.src as ia",
"Exercícios para a próxima aula\n\nFazer uma função que amplie/reduza a imagem utilizando interpolação no domínio da frequência, \n conforme discutido em aula. Comparar os resultados com o scipy.misc.imresize, tanto de qualidade do\n espectro como de tempo de execução.\n Os alunos com RA ímpar devem fazer as ampliações e os com RA par devem fazer\n as reduções.\n Nome da função: imresize",
"def imresize(f, size):\n '''\n Resize an image\n Parameters\n ----------\n f: input image\n size: integer, float or tuple\n - integer: percentage of current size\n - float: fraction of current size\n - tuple: new dimensions\n Returns\n -------\n output image resized \n '''\n return f",
"Modificar a função pconv para executar no domínio da frequência, caso o número de\n elementos não zeros da menor imagem, é maior que um certo valor, digamos 15.\n Nome da função: pconvfft",
"def pconvfft(f,h):\n '''\n Periodical convolution.\n This is an efficient implementation of the periodical convolution.\n This implementation should be comutative, i.e., pconvfft(f,h)==pconvfft(h,f).\n This implementation should be fast. If the number of pixels used in the \n convolution is larger than 15, it uses the convolution theorem to implement\n the convolution.\n Parameters:\n -----------\n f: input image (can be complex, up to 2 dimensions)\n h: input kernel (can be complex, up to 2 dimensions)\n Outputs:\n image of the result of periodical convolution\n '''\n return f",
"Transforma Discreta de Wavelets\nIremos utilizar um notebook que foi um resultado de projeto de anos\nanteriores.\n\nDWT",
"/home/lotufo/ia898/dev/wavelets.ipynb"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sot/aimpoint_mon
|
testing/dynam_offsets/verify_dynamic_offsets.ipynb
|
bsd-2-clause
|
[
"Verify FOT Matlab tools 2016_210 dynamic offsets test products\nThis notebook performs functional testing of test load products created starting with \nexisting OFLS products and processing with Matlab tools 2016_210.\nThis release introduces dynamical aimpoint offsets that\ncompensate for temperature-dependent alignment changes of the ACA. The Matlab\ntools code in turn depends a new version 0.7 of the chandra_aca package which\nprovides the core function to compute the temperature-dependent dynamical aimpoint offsets.\nOverall test requirements are defined in the Aimpoint Transition Plan for Cycle 18. The functional testing in this notebook consists of the following:\nFor post-NOV0215 products with a nominal zero-offsets aimpoint table\n\nCheck internal consistency of dynamical_offsets table\nCheck dynamical offsets ACA attitude matches TEST maneuver summary\n\nCheck dynamical offsets ACA attitude matches FLIGHT maneuver summary for ACIS observations within ~10 arcsec.\n\nThis is expected because the new dynamic offsets should roughly replace the OFLS CHARACTERISTICS update method of aimpoint adjustment. \nThis constitutes a rough functional test of Python code to generate dynamic ACA offsets.\nIt also provides a very simple demonstration that the attitude changes are not large.\nDifferences will occur due to:\nChange from the cycle 17 default aimpoint to the cycle 18 default aimpoint. These are less than 5 arcsec.\nDifference in per-obsid predicted ACA temperature from the mean 3-month value used in the CHARACTERISTICS update.\n\n\n\nCheck that predicted CHIPX / CHIPY matches expectation to within 10 arcsec.\n\nUse the test ACA attitude and observed SIM DY/DZ (alignment from fids).\nGenerate a \"fake\" observation with adjusted RA_NOM, DEC_NOM, ROLL_NOM (\"HRMA attitude\").\nAdjustment based on the delta between TEST and FLIGHT pointing attitudes.\nUse the CIAO tool dmcoords to compute predicted CHIPX / CHIPY.\n\nFor pre-NOV1615 products with an empty zero-offsets aimpoint table\n\nCheck that TEST and FLIGHT attitudes from maneuver summary match to within 0.1 arcsec.\n\nALL TESTS PASS",
"# Required imports\n\nfrom __future__ import division, print_function\nimport glob\nimport shelve\nfrom astropy.table import Table\nimport numpy as np\nfrom Quaternion import Quat\nimport chandra_aca\nfrom chandra_aca import calc_aca_from_targ, calc_targ_from_aca\nimport parse_cm\nfrom Ska.engarchive import fetch_sci\nfrom Chandra.Time import DateTime\nimport Ska.Shell\nimport functools\nimport mica.archive.obspar\nimport Ska.arc5gl\nimport os\nimport parse_cm.maneuver\n\n%matplotlib inline",
"\"Post-NOV0215\" verification testing using the MAR0516O products\nThese are nominal flight-like products created using the existing MAR0516 loads.\nOne expected difference is that Matlab uses the OFLS CHARACTERISTICS file\nCHARACTERIS_07JUL16 to back out the OFLS ODB_SI_ALIGN transform.\nMAR0516 was planned using CHARACTERIS_28FEB16. There is a (1.77, 0.33) arcsec difference\nfor both ACIS-S and ACIS-I in the ODB_SI_ALIGN offsets. This 1.8 arcsec offset\nis seen in comparisons while in newly generated products there will be no such offset.",
"PRODUCTS = 'MAR0516O'\n\nTEST_DIR = '/proj/sot/ska/ops/SFE'\nFLIGHT_DIR = '/data/mpcrit1/mplogs/2016' # Must match PRODUCTS\n\n# SI_ALIGN from Matlab code\nSI_ALIGN = chandra_aca.ODB_SI_ALIGN\nSI_ALIGN\n\ndef print_dq(q1, q2):\n \"\"\"\n Print the difference between two quaternions in a nice formatted way.\n \"\"\"\n dq = q1.inv() * q2\n dr, dp, dy, _ = np.degrees(dq.q) * 2 * 3600\n print('droll={:6.2f}, dpitch={:6.2f}, dyaw={:6.2f} arcsec'.format(dr, dp, dy))\n\ndef check_obs(obs):\n \"\"\"\n Check `obs` (which is a row out of the dynamic offsets table) for consistency\n between target and aca coordinates given the target and aca offsets and the\n SI_ALIGN alignment matrix.\n \"\"\"\n y_off = (obs['target_offset_y'] + obs['aca_offset_y']) / 3600\n z_off = (obs['target_offset_z'] + obs['aca_offset_z']) / 3600\n \n q_targ = Quat([obs['target_ra'], obs['target_dec'], obs['target_roll']])\n q_aca = Quat([obs['aca_ra'], obs['aca_dec'], obs['aca_roll']])\n \n q_aca_out = calc_aca_from_targ(q_targ, y_off, z_off, SI_ALIGN)\n print('{} {:6s} '.format(obs['obsid'], obs['detector']), end='')\n print_dq(q_aca, q_aca_out)\n\ndef check_obs_vs_manvr(obs, manvr):\n \"\"\"\n Check against attitude from actual flight products (from maneuver summary file)\n \"\"\"\n mf = manvr['final']\n q_flight = Quat([mf['q1'], mf['q2'], mf['q3'], mf['q4']])\n q_aca = Quat([obs['aca_ra'], obs['aca_dec'], obs['aca_roll']])\n print('{} {:6s} chipx={:8.2f} chipy={:8.2f} '\n .format(obs['obsid'], obs['detector'], obs['chipx'], obs['chipy']), end='')\n print_dq(q_aca, q_flight)\n\nfilename = os.path.join(TEST_DIR, PRODUCTS, 'ofls', 'output', '{}_dynamical_offsets.txt'.format(PRODUCTS))\ndat = Table.read(filename, format='ascii')\n\ndat[:5]",
"Check internal consistency of dynamical_offsets table\nThis checks that applying the sum of target and ACA offsets along with the nominal SI_ALIGN to the target attitude produces the ACA attitude.\nAll the ACIS observations have a similar offset due to the CHARACTERISTICS mismatch noted earlier, while the HRC observations show the expected 0.0 offset.",
"for obs in dat:\n check_obs(obs)",
"Check dynamical offsets ACA attitude matches TEST maneuver summary\nIt is assumed that the maneuver summary matches the load product attitudes.",
"filename = glob.glob(os.path.join(TEST_DIR, PRODUCTS, 'ofls', 'mps', 'mm*.sum'))[0]\nprint('Reading', filename)\nmm = parse_cm.maneuver.read_maneuver_summary(filename, structured=True)\nmm = {m['final']['id']: m for m in mm} # Turn into a dict\n\nfor obs in dat:\n check_obs_vs_manvr(obs, mm[obs['obsid'] * 100])",
"Check dynamical offsets ACA attitude matches FLIGHT maneuver summary to ~10 arcsec\nIt is assumed that the maneuver summary matches the load product attitudes.\nThere are three discrepancies below: obsids 18725, 18790, and 18800. All of these are DDT observations that are configured in the OR list to use Cycle 18 aimpoint values, but without changing the target offsets from cycle 17 values. This is an artifact of testing and would not occur in flight planning.",
"os.path.join(FLIGHT_DIR, PRODUCTS[:-1], 'ofls', 'mps', 'mm*.sum')\n\nfilename = glob.glob(os.path.join(FLIGHT_DIR, PRODUCTS[:-1], 'ofls', 'mps', 'mm*.sum'))[0]\nprint('Reading', filename)\nmm = parse_cm.maneuver.read_maneuver_summary(filename, structured=True)\nmm = {m['final']['id']: m for m in mm} # Turn into a dict\n\nfor obs in dat:\n check_obs_vs_manvr(obs, mm[obs['obsid'] * 100])",
"Check that predicted CHIPX / CHIPY matches expectation to within 10 arcsec.\n\nUse the test ACA attitude and observed SIM DY/DZ (alignment from fids).\nGenerate a \"fake\" observation with adjusted RA_NOM, DEC_NOM, ROLL_NOM (\"HRMA attitude\").\nAdjustment based on the delta between TEST and FLIGHT pointing attitudes.\nUse the CIAO tool dmcoords to compute predicted CHIPX / CHIPY.\n\nIn the results below there are two discrepancies, obsids 18168 and 18091. These are very hot observations coming right after safe mode recovery. In this case the thermal model is inaccurate and the commanded pointing would have been offset by up to 15 arcsec. Future improvements in thermal modeling could reduce this offset, but it should be understood that pointing accuracy will be degraded in such a situation.\nCIAO dmcoords tool setup",
"ciaoenv = Ska.Shell.getenv('source /soft/ciao/bin/ciao.sh')\nciaorun = functools.partial(Ska.Shell.bash, env=ciaoenv)\n\ndmcoords_cmd = ['dmcoords', 'none',\n 'asolfile=none',\n 'detector=\"{detector}\"',\n 'fpsys=\"{fpsys}\"',\n 'opt=cel',\n 'ra={ra_targ}', \n 'dec={dec_targ}',\n 'celfmt=deg', \n 'ra_nom={ra_nom}',\n 'dec_nom={dec_nom}',\n 'roll_nom={roll_nom}', \n 'ra_asp=\")ra_nom\"',\n 'dec_asp=\")dec_nom\"',\n 'roll_asp=\")roll_nom\"', \n 'sim=\"{sim_x} 0 {sim_z}\"',\n 'displace=\"0 {dy} {dz} 0 0 0\"', \n 'verbose=0']\ndmcoords_cmd = ' '.join(dmcoords_cmd)\n\ndef dmcoords_chipx_chipy(keys, verbose=False):\n \"\"\"\n Get the dmcoords-computed chipx and chipy for given event file \n header keyword params. NOTE: the ``dy`` and ``dz`` inputs\n to dmcoords are flipped in sign from the ASOL values. Generally the\n ASOL DY/DZ are positive and dmcoord input values are negative. This\n sign flip is handled *here*, so input to this is ASOL DY/DZ.\n \n :param keys: dict of event file keywords\n \"\"\"\n # See the absolute_pointing_uncertainty notebook in this repo for the\n # detailed derivation of this -15.5, 6.0 arcsec offset factor. See the\n # cell below for the summary version.\n ciaorun('punlearn dmcoords')\n fpsys_map = {'HRC-I': 'HI1',\n 'HRC-S': 'HS2',\n 'ACIS': 'ACIS'}\n keys = {key.lower(): val for key, val in keys.items()}\n det = keys['detnam']\n keys['detector'] = (det if det.startswith('HRC') else 'ACIS')\n keys['dy'] = -keys['dy_avg']\n keys['dz'] = -keys['dz_avg']\n keys['fpsys'] = fpsys_map[keys['detector']]\n \n cmd = dmcoords_cmd.format(**keys)\n ciaorun(cmd)\n \n if verbose:\n print(cmd)\n return [float(x) for x in ciaorun('pget dmcoords chipx chipy chip_id')]\n\ndef get_evt_meta(obsid, detector):\n \"\"\"\n Get event file metadata (FITS keywords) for ``obsid`` and ``detector`` and cache for later use.\n \n Returns a dict of key=value pairs.\n \"\"\"\n evts = shelve.open('event_meta.shelf')\n sobsid = str(obsid)\n if sobsid not in evts:\n det = 'hrc' if detector.startswith('HRC') else 'acis'\n arc5gl = Ska.arc5gl.Arc5gl()\n arc5gl.sendline('obsid={}'.format(obsid))\n arc5gl.sendline('get {}2'.format(det) + '{evt2}')\n del arc5gl\n\n files = glob.glob('{}f{}*_evt2.fits.gz'.format(det, obsid))\n if len(files) != 1:\n raise ValueError('Wrong number of files {}'.format(files))\n evt2 = Table.read(files[0])\n os.unlink(files[0])\n\n evts[sobsid] = {k.lower(): v for k, v in evt2.meta.items()}\n\n out = evts[sobsid]\n evts.close()\n \n return out\n\ndef check_predicted_chipxy(obs):\n \"\"\"\n Compare the predicted CHIPX/Y values with planned using observed event file\n data on actual ACA alignment.\n \"\"\"\n obsid = obs['obsid']\n detector = obs['detector']\n try:\n evt = get_evt_meta(obsid, detector)\n except ValueError as err:\n print('Obsid={} detector={}: fail {}'.format(obsid, detector, err))\n return\n f_chipx, f_chipy, f_chip_id = dmcoords_chipx_chipy(evt)\n \n q_nom_flight = Quat([evt['ra_nom'], evt['dec_nom'], evt['roll_nom']])\n q_aca = Quat([obs['aca_ra'], obs['aca_dec'], obs['aca_roll']])\n mf = mm[obsid * 100]['final']\n q_flight = Quat([mf['q1'], mf['q2'], mf['q3'], mf['q4']])\n dq = q_flight.dq(q_aca)\n q_nom_test = q_nom_flight * dq\n evt_test = dict(evt)\n evt_test['ra_nom'] = q_nom_test.ra\n evt_test['dec_nom'] = q_nom_test.dec\n evt_test['roll_nom'] = q_nom_test.roll\n \n scale = 0.13175 if detector.startswith('HRC') else 0.492\n aim_chipx = obs['chipx']\n aim_chipy = obs['chipy']\n if detector == 'ACIS-S':\n aim_chipx += -obs['target_offset_y'] / scale\n 
aim_chipy += -obs['target_offset_z'] / scale + 20.5 / 0.492 * (-190.14 - evt['sim_z'])\n elif detector == 'ACIS-I':\n aim_chipx += -obs['target_offset_z'] / scale + 20.5 / 0.492 * (-233.59 - evt['sim_z'])\n aim_chipy += +obs['target_offset_y'] / scale\n\n chipx, chipy, chip_id = dmcoords_chipx_chipy(evt_test)\n print('{} {:6s} aimpoint:{:6.1f},{:6.1f} test:{:6.1f},{:6.1f} '\n 'flight:{:6.1f},{:6.1f} delta: {:.1f} arcsec'\n .format(obsid, detector, aim_chipx, aim_chipy, chipx, chipy, f_chipx, f_chipy,\n np.hypot(aim_chipx - chipx, aim_chipy - chipy) * scale))\n\nfor obs in dat:\n check_predicted_chipxy(obs)",
"For pre-NOV1615 products with an empty zero-offsets aimpoint table\nCheck that TEST (JUL0415M) and FLIGHT attitudes from maneuver summary match to within 0.1 arcsec.\nJUL0415M was constructed with an OR-list zero-offset aimpoint table which exists but has\nno row entries. This has the effect of telling SAUSAGE to run through the attitude\nreplacement machinery but use 0.0 for the ACA offset y/z values. This should output attitudes\nthat are precisely the same as the FLIGHT attitudes.\nResults: pass",
"PRODUCTS = 'JUL0415M'\nFLIGHT_DIR = '/data/mpcrit1/mplogs/2015' # Must match PRODUCTS\n\n# Get FLIGHT maneuver summary\nfilename = glob.glob(os.path.join(FLIGHT_DIR, PRODUCTS[:-1], 'ofls', 'mps', 'mm*.sum'))[0]\nprint('Reading', filename)\nmmf = parse_cm.maneuver.read_maneuver_summary(filename, structured=True)\nmmf = {m['final']['id']: m for m in mmf} # Turn into a dict\n\n# Get TEST maneuver summary\nfilename = glob.glob(os.path.join(TEST_DIR, PRODUCTS, 'ofls', 'mps', 'mm*.sum'))[0]\nprint('Reading', filename)\nmmt = parse_cm.maneuver.read_maneuver_summary(filename, structured=True)\nmmt = {m['final']['id']: m for m in mmt} # Turn into a dict\n\n# Make sure set of obsids are the same\nset(mmf) == set(mmt)\n\n# Now do the actual attitude comparison\nfor trace_id, mf in mmf.items():\n mt = mmt[trace_id]['final']\n mf = mf['final']\n qt = Quat([mt['q1'], mt['q2'], mt['q3'], mt['q4']])\n qf = Quat([mf['q1'], mf['q2'], mf['q3'], mf['q4']])\n print(trace_id, ' ', end='')\n print_dq(qt, qf)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
craigrshenton/home
|
notebooks/data_smart_project_1.ipynb
|
mit
|
[
"1. Prepare Problem\na) Load libraries",
"import pandas as pd\nimport numpy as np\nfrom pandas.tools.plotting import scatter_matrix\nfrom matplotlib import pyplot as plt\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.model_selection import KFold\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.svm import SVC\nfrom sklearn import metrics",
"b) Load dataset\nDownload customer account data from Wiley's website, RetailMart.xlsx",
"# find path to your RetailMart.xlsx\ndataset = pd.read_excel(open('C:/Users/craigrshenton/Desktop/Dropbox/excel_data_sci/ch06/RetailMart.xlsx','rb'), sheetname=0)\ndataset = dataset.drop('Unnamed: 17', 1) # drop empty col\ndataset.rename(columns={'PREGNANT':'Pregnant'}, inplace=True)\ndataset.rename(columns={'Home/Apt/ PO Box':'Residency'}, inplace=True) # add simpler col name\ndataset.columns = [x.strip().replace(' ', '_') for x in dataset.columns] # python does not like spaces in var names",
"The 'Pregnant' column can only take on one of two (in this case) possabilities. Here 1 = pregnant, and 0 = not pregnant\n2. Summarize Data\na) Descriptive statistics",
"# shape\nprint(dataset.shape)\n\n# types\nprint(dataset.dtypes)\n\n# head\ndataset.head()\n\n# feature distribution\nprint(dataset.groupby('Implied_Gender').size())\n\n# target distribution\nprint(dataset.groupby('Pregnant').size())\n\n# correlation\nr = dataset.corr(method='pearson')\nid_matrix = np.identity(r.shape[0]) # create identity matrix\nr = r-id_matrix # remove same-feature correlations\nnp.where( r > 0.7 )",
"We can see no features with significant correlation coefficents (i.e., $r$ values > 0.7)\n3. Prepare Data\na) Data Transforms\nWe need to 'dummify' (i.e., separate out) the catagorical variables: implied gender and residency",
"# dummify gender variable\ndummy_gender = pd.get_dummies(dataset['Implied_Gender'], prefix='Gender')\nprint(dummy_gender.head())\n\n# dummify residency variable\ndummy_resident = pd.get_dummies(dataset['Residency'], prefix='Resident')\nprint(dummy_resident.head())\n\n# Drop catagorical variables\ndataset = dataset.drop('Implied_Gender', 1)\ndataset = dataset.drop('Residency', 1)\n\n# Add dummy variables\ndataset = pd.concat([dummy_gender.ix[:, 'Gender_M':],dummy_resident.ix[:, 'Resident_H':],dataset], axis=1)\ndataset.head()\n\n# Make clean dataframe for regression model\narray = dataset.values\nn_features = len(array[0]) \nX = array[:,0:n_features-1] # features\ny = array[:,n_features-1] # target",
"4. Evaluate Algorithms\na) Split-out validation dataset",
"# Split-out validation dataset\nvalidation_size = 0.20\nseed = 7\nX_train, X_validation, Y_train, Y_validation = train_test_split(X, y,\ntest_size=validation_size, random_state=seed)",
"b) Spot Check Algorithms",
"# Spot-Check Algorithms\nmodels = []\nmodels.append(('LR', LogisticRegression()))\nmodels.append(('LDA', LinearDiscriminantAnalysis()))\nmodels.append(('KNN', KNeighborsClassifier()))\nmodels.append(('CART', DecisionTreeClassifier()))\nmodels.append(('NB', GaussianNB()))\nmodels.append(('SVM', SVC()))\n\n# evaluate each model in turn\nresults = []\nnames = []\nfor name, model in models:\n kfold = KFold(n_splits=10, random_state=seed)\n cv_results = cross_val_score(model, X_train, Y_train, cv=kfold, scoring='accuracy')\n results.append(cv_results)\n names.append(name)\n msg = \"%s: %f (%f)\" % (name, cv_results.mean(), cv_results.std())\n print(msg)",
"c) Select The Best Model",
"# Compare Algorithms\nfig = plt.figure()\nfig.suptitle('Algorithm Comparison')\nax = fig.add_subplot(111)\nplt.boxplot(results)\nax.set_xticklabels(names)\nplt.show()",
"5. Make predictions on validation dataset\nLinear Discriminant Analysis is just about the most accurate model. Now test the accuracy of the model on the validation dataset.",
"lda = LinearDiscriminantAnalysis()\nlda.fit(X_train, Y_train)\npredictions = lda.predict(X_validation)\nprint(accuracy_score(Y_validation, predictions))\nprint(confusion_matrix(Y_validation, predictions))\nprint(classification_report(Y_validation, predictions))\n\n# predict probability of survival\ny_pred_prob = lda.predict_proba(X_validation)[:, 1]\n\n# plot ROC curve\nfpr, tpr, thresholds = metrics.roc_curve(Y_validation, y_pred_prob)\nplt.plot(fpr, tpr)\nplt.plot([0, 1], [0, 1], color='navy', linestyle='--')\nplt.xlim([-0.05, 1.0])\nplt.ylim([0.0, 1.05])\nplt.gca().set_aspect('equal', adjustable='box')\nplt.xlabel('False Positive Rate (1 - Specificity)')\nplt.ylabel('True Positive Rate (Sensitivity)')\nplt.show()\n\n# calculate AUC\nprint(metrics.roc_auc_score(Y_validation, y_pred_prob))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jaidevd/inmantec_fdp
|
notebooks/day1/04_intro_scikit_image.ipynb
|
mit
|
[
"Introduction to Image Processing\nImages as NumPy Arrays",
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom skimage import io\nfrom skimage import data\nplt.style.use('ggplot')\n%matplotlib inline",
"A very simple \"image\"",
"X = np.zeros((9, 9))\nX[::2, 1::2] = 1\nX[1::2, ::2] = 1\nplt.imshow(X, cmap=plt.cm.gray, interpolation=\"nearest\")\n\ncamera = data.camera()\nprint(type(camera))\nplt.imshow(camera, cmap=plt.cm.gray)\n\nprint(camera.shape)\n\nprint(camera.dtype)",
"Histogram Equalization\nQ: What is the histogram of an image?",
"print(camera.min(), camera.max())\n\nfrom scipy.stats import uniform\nbins = np.linspace(0, 256, 20)\nhist = np.histogram(camera.ravel(), bins, normed=True)[0]\nbins = 0.5*(bins[1:] + bins[:-1])\nplt.plot(bins, hist, label=\"original\")\nestimate = uniform.pdf(bins)\nplt.plot(bins, estimate, label=\"uniform estimate\")",
"Q: What is wrong with the figure above?\nQ: What does it mean to improve the contrast of an image?",
"from skimage import exposure\ncamera_eq = exposure.equalize_hist(camera)\n\nprint(camera_eq.min(), camera_eq.max())\n\nbins = np.linspace(0, 1, 20)\nhist = np.histogram(camera_eq.ravel(), bins, normed=True)[0]\nbins = 0.5*(bins[1:] + bins[:-1])\nplt.plot(bins, hist, label=\"Original\")\nestimate_uni = uniform.pdf(bins)\nplt.plot(bins, estimate_uni, label=\"Uniform\")\nplt.legend(loc=\"best\")\n\nfig, ax = plt.subplots(2, 2)\n\nax[0, 0].imshow(camera, cmap=plt.cm.gray)\nax[0, 0].set_xticks([])\nax[0, 0].set_yticks([])\nax[0, 0].grid(False)\nax[0, 0].set_title(\"Original\")\n\n# compute bins and histogram for original image\nbins = np.linspace(0, 256, 20)\nhist = np.histogram(camera.ravel(), bins, normed=True)[0]\nbins = 0.5*(bins[1:] + bins[:-1])\nax[0, 1].plot(bins, hist)\n\nax[1, 0].imshow(camera_eq, cmap=plt.cm.gray)\nax[1, 0].set_xticks([])\nax[1, 0].set_yticks([])\nax[1, 0].grid(False)\nax[1, 0].set_title(\"High Contrast\")\n\n# compute bins and histogram for high contrast image\nbins = np.linspace(0, 1, 20)\nhist = np.histogram(camera_eq.ravel(), bins, normed=True)[0]\nbins = 0.5*(bins[1:] + bins[:-1])\nax[1, 1].plot(bins, hist)\n\nfig.tight_layout()",
"Exercise: Image filtering by thresholding\nConvert the background in the following image to zero-valued pixels",
"coins = data.coins()\nplt.imshow(coins, cmap=plt.cm.gray)\nplt.xticks([])\nplt.yticks([])\nplt.grid()",
"Hint: Use histogram to pick the right threshold",
"# enter code here",
"Q: What was the problem with this approach of thresholding?\nAdaptive Thresholding",
"from skimage import filters\nthreshold = filters.threshold_otsu(coins)\ncoins_low = coins.copy()\ncoins_low[coins_low < threshold] = 0\nplt.imshow(coins_low, cmap=plt.cm.gray)\nplt.xticks([])\nplt.yticks([])\nplt.grid()\n\nbins = np.linspace(0, 256, 50)\nhist = np.histogram(coins.ravel(), bins, normed=True)[0]\nbins = 0.5*(bins[1:] + bins[:-1])\nplt.plot(bins, hist, label=\"Histogram\")\nplt.vlines(threshold, 0, hist.max(), label=\"Threshold\")\nplt.legend()",
"Independent labeling of objects\nSegmentation with boolean masks",
"plt.imshow(coins_low > 0, cmap=plt.cm.gray)\nplt.grid()",
"Filling in smaller regions",
"from skimage.morphology import closing, square\nbw = closing(coins_low > 0, square(3))\nplt.imshow(bw, cmap=plt.cm.gray)\nplt.grid()\n\nfrom skimage.segmentation import clear_border\nfrom skimage.color import label2rgb\nfrom skimage.measure import label\n# remove artifacts connected to image border\ncleared = clear_border(bw)\n\n# label image regions\nlabel_image = label(cleared)\nimage_label_overlay = label2rgb(label_image, image=coins_low)\nplt.imshow(image_label_overlay)\nplt.xticks([])\nplt.yticks([])\n\nfrom skimage.measure import regionprops\nimport matplotlib.patches as mpatches\nfig, ax = plt.subplots(figsize=(10, 6))\nax.imshow(image_label_overlay)\n\nfor region in regionprops(label_image):\n # take regions with large enough areas\n if region.area >= 100:\n # draw rectangle around segmented coins\n minr, minc, maxr, maxc = region.bbox\n rect = mpatches.Rectangle((minc, minr), maxc - minc, maxr - minr,\n fill=False, edgecolor='red', linewidth=2)\n ax.add_patch(rect)\n\nax.set_axis_off()\nplt.tight_layout()",
"Exercise: Color each independent region in the following image\nNote: Image is entireyly only black and white. It has already been thresholded and binaried",
"n = 20\nl = 256\nim = np.zeros((l, l))\npoints = l * np.random.random((2, n ** 2))\nim[(points[0]).astype(np.int), (points[1]).astype(np.int)] = 1\nim = filters.gaussian_filter(im, sigma=l / (4. * n))\nblobs = im > im.mean()\nplt.imshow(blobs, cmap=plt.cm.gray)\nplt.xticks([])\nplt.yticks([])\n\n# enter code here"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tkurfurst/deep-learning
|
autoencoder/Simple_Autoencoder_Solution.ipynb
|
mit
|
[
"A Simple Autoencoder\nWe'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.\n\nIn this notebook, we'll be build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.",
"%matplotlib inline\n\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data', validation_size=0)",
"Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.",
"img = mnist.train.images[2]\nplt.imshow(img.reshape((28, 28)), cmap='Greys_r')",
"We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.\n\n\nExercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.",
"# Size of the encoding layer (the hidden layer)\nencoding_dim = 32\n\nimage_size = mnist.train.images.shape[1]\n\ninputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs')\ntargets_ = tf.placeholder(tf.float32, (None, image_size), name='targets')\n\n# Output of hidden layer\nencoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)\n\n# Output layer logits\nlogits = tf.layers.dense(encoded, image_size, activation=None)\n# Sigmoid output from\ndecoded = tf.nn.sigmoid(logits, name='output')\n\nloss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)\ncost = tf.reduce_mean(loss)\nopt = tf.train.AdamOptimizer(0.001).minimize(cost)",
"Training",
"# Create the session\nsess = tf.Session()",
"Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards. \nCalling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightfoward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).",
"epochs = 20\nbatch_size = 200\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n feed = {inputs_: batch[0], targets_: batch[0]}\n batch_cost, _ = sess.run([cost, opt], feed_dict=feed)\n\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))",
"Checking out the results\nBelow I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.",
"fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nreconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})\n\nfor images, row in zip([in_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\nfig.tight_layout(pad=0.1)\n\nsess.close()",
"Up Next\nWe're dealing with images here, so we can (usually) get better performance using convolution layers. So, next we'll build a better autoencoder with convolutional layers.\nIn practice, autoencoders aren't actually better at compression compared to typical methods like JPEGs and MP3s. But, they are being used for noise reduction, which you'll also build."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
darioizzo/d-CGP
|
doc/sphinx/notebooks/dCGPANNs_for_function_approximation.ipynb
|
gpl-3.0
|
[
"Training a FFNN in dCGPANN vs. Keras (regression)\nA Feed Forward Neural network is a widely used ANN model for regression and classification. Here we show how to encode it into a dCGPANN and train it with stochastic gradient descent on a regression task. To check the correctness of the result we perform the same training using the widely used Keras Deep Learning toolbox.",
"# Initial import\nimport dcgpy\n# For plotting\nfrom mpl_toolkits.mplot3d import Axes3D\nimport matplotlib.pyplot as plt\nfrom matplotlib import cm\nfrom matplotlib.ticker import LinearLocator, FormatStrFormatter\n# For scientific computing and more ...\nimport numpy as np\nfrom tqdm import tqdm\nfrom sklearn.utils import shuffle\n\n%matplotlib inline",
"Data set",
"# To plot the unction we use a uniform grid\nX = np.arange(-1, 1, 0.05)\nY = np.arange(-1, 1, 0.05)\nn_samples = len(X) * len(Y)\npoints = np.zeros((n_samples, 2))\n\ni=0\nfor x in X:\n for y in Y:\n points[i][0] = x\n points[i][1] = y\n i=i+1\nlabels = (np.sin(5 * points[:,0] * (3 * points[:,1] + 1.)) + 1. ) / 2.\npoints = points.reshape((n_samples,2))\nlabels = labels.reshape((n_samples,1))\n\n# To plot the function \nX, Y = np.meshgrid(X, Y)\nZ = (np.sin(5 * X * (3 * Y + 1.)) + 1. ) / 2.\nfig = plt.figure()\nax = fig.gca(projection='3d')\nsurf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm,\n linewidth=0, antialiased=False)\n\n# We shuffle the points and labels\npoints, labels = shuffle(points, labels, random_state=0)\n# We create training and test sets\nX_train = points[:800]\nY_train = labels[:800]\nX_test = points[800:]\nY_test = labels[800:]\n_ = plt.title(\"function to be learned\")",
"Encoding and training a FFNN using dCGP\nThere are many ways the same FFNN could be encoded into a CGP chromosome. The utility encode_ffnn selects one for you returning the expression.",
"# We define a 2 input 1 output dCGPANN with sigmoid nonlinearities\ndcgpann = dcgpy.encode_ffnn(2,1,[50,20],[\"sig\", \"sig\", \"sig\"], 5)\n\nstd = 1.5\n# Weight/biases initialization is made using a normal distribution\ndcgpann.randomise_weights(mean = 0., std = std)\ndcgpann.randomise_biases(mean = 0., std = std)\n\n# We show the initial MSE\nprint(\"Starting error:\", dcgpann.loss(X_test,Y_test, \"MSE\"))\nprint(\"Net complexity (number of active weights):\", dcgpann.n_active_weights())\nprint(\"Net complexity (number of unique active weights):\", dcgpann.n_active_weights(unique=True))\nprint(\"Net complexity (number of active nodes):\", len(dcgpann.get_active_nodes()))\nx = dcgpann.get()\nw = dcgpann.get_weights()\nb = dcgpann.get_biases()\nres = []\n\n# And show a visualization of the FFNN encoded in a CGP\ndcgpann.visualize(show_nonlinearities=True)\n\nimport timeit\nstart_time = timeit.default_timer()\n\nlr0 = 0.3\nfor i in tqdm(range(5000)):\n lr = lr0 #* np.exp(-0.0001 * i)\n loss = dcgpann.sgd(X_train, Y_train, lr, 32, \"MSE\", parallel = 4)\n res.append(loss)\nelapsed = timeit.default_timer() - start_time\n\n# Print the time taken to train and the final result on the test set\nprint(\"Time (s): \", elapsed)\nprint(\"End MSE: \", dcgpann.loss(X_test,Y_test, \"MSE\"))\n",
"Same training is done using Keras (Tensor Flow backend)",
"import keras\n\n\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Activation\nfrom keras import optimizers\n\n# We define Stochastic Gradient Descent as an optimizer\nsgd = optimizers.SGD(lr=0.3)\n# We define weight initializetion\ninitializerw = keras.initializers.RandomNormal(mean=0.0, stddev=std, seed=None)\ninitializerb = keras.initializers.RandomNormal(mean=0.0, stddev=std, seed=None)\n\nmodel = Sequential([\n Dense(50, input_dim=2, kernel_initializer=initializerw, bias_initializer=initializerb),\n Activation('sigmoid'),\n Dense(20, kernel_initializer=initializerw, bias_initializer=initializerb),\n Activation('sigmoid'),\n Dense(1, kernel_initializer=initializerw, bias_initializer=initializerb),\n Activation('sigmoid'),\n])\n\n# For a mean squared error regression problem\nmodel.compile(optimizer=sgd,\n loss='mse')\n\n# Train the model, iterating on the data in batches of 32 samples\nstart_time = timeit.default_timer()\nhistory = model.fit(X_train, Y_train, epochs=5000, batch_size=32, verbose=False)\nelapsed = timeit.default_timer() - start_time\n\n# Print the time taken to train and the final result on the test set\nprint(\"Time (s): \", elapsed)\nprint(\"End MSE: \", model.evaluate(X_train, Y_train))\n\n# We plot for comparison the MSE during learning in the two cases\nplt.semilogy(np.sqrt(history.history['loss']), label='Keras')\nplt.semilogy(np.sqrt(res), label='dCGP')\nplt.title('dCGP vs Keras')\nplt.xlabel('epochs')\nplt.legend()\n_ = plt.ylabel('RMSE')",
"Repeating ten times the same comparison",
"epochs = 5000\nfor i in range(10):\n # dCGP\n dcgpann = dcgpy.encode_ffnn(2,1,[50,20],[\"sig\", \"sig\", \"sig\"], 5)\n dcgpann.randomise_weights(mean = 0., std = std)\n dcgpann.randomise_biases(mean = 0., std = std)\n res = []\n for i in tqdm(range(epochs)):\n lr = lr0 #* np.exp(-0.0001 * i)\n loss = dcgpann.sgd(X_train, Y_train, lr, 32, \"MSE\", parallel = 4)\n res.append(loss)\n # Keras\n model = Sequential([\n Dense(50, input_dim=2, kernel_initializer=initializerw, bias_initializer=initializerb),\n Activation('sigmoid'),\n Dense(20, kernel_initializer=initializerw, bias_initializer=initializerb),\n Activation('sigmoid'),\n Dense(1, kernel_initializer=initializerw, bias_initializer=initializerb),\n Activation('sigmoid'),\n ])\n model.compile(optimizer=sgd, loss='mse')\n history = model.fit(X_train, Y_train, epochs=epochs, batch_size=32, verbose=False)\n plt.semilogy(np.sqrt(history.history['loss']), color = 'b')\n plt.semilogy(np.sqrt(res), color = 'C1')\nplt.title('dCGP vs Keras')\nplt.xlabel('epochs')\n_ = plt.ylabel('RMSE')\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
radu941208/DeepLearning
|
Intro_to_Neural_Networks/Python+Basics+With+Numpy+v3.ipynb
|
mit
|
[
"Python Basics with Numpy (optional assignment)\nWelcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need. \nInstructions:\n- You will be using Python 3.\n- Avoid using for-loops and while-loops, unless you are explicitly told to do so.\n- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.\n- After coding your function, run the cell right below it to check if your result is correct.\nAfter this assignment you will:\n- Be able to use iPython Notebooks\n- Be able to use numpy functions and numpy matrix/vector operations\n- Understand the concept of \"broadcasting\"\n- Be able to vectorize code\nLet's get started!\nAbout iPython Notebooks\niPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing \"SHIFT\"+\"ENTER\" or by clicking on \"Run Cell\" (denoted by a play symbol) in the upper bar of the notebook. \nWe will often specify \"(≈ X lines of code)\" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.\nExercise: Set test to \"Hello World\" in the cell below to print \"Hello World\" and run the two cells below.",
"### START CODE HERE ### (≈ 1 line of code)\ntest = \"Hello World\"\n\n### END CODE HERE ###\n\nprint (\"test: \" + test)",
"Expected output:\ntest: Hello World\n<font color='blue'>\nWhat you need to remember:\n- Run your cells using SHIFT+ENTER (or \"Run cell\")\n- Write code in the designated areas using Python 3 only\n- Do not modify the code outside of the designated areas\n1 - Building basic functions with numpy\nNumpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.\n1.1 - sigmoid function, np.exp()\nBefore using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().\nExercise: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.\nReminder:\n$sigmoid(x) = \\frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.\n<img src=\"images/Sigmoid.png\" style=\"width:500px;height:228px;\">\nTo refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().",
"# GRADED FUNCTION: basic_sigmoid\n\nimport math\n\ndef basic_sigmoid(x):\n \"\"\"\n Compute sigmoid of x.\n\n Arguments:\n x -- A scalar\n\n Return:\n s -- sigmoid(x)\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line of code)\n s = 1/(1+math.exp(-x))\n ### END CODE HERE ###\n \n return s\n\nbasic_sigmoid(3)",
"Expected Output: \n<table style = \"width:40%\">\n <tr>\n <td>** basic_sigmoid(3) **</td> \n <td>0.9525741268224334 </td> \n </tr>\n\n</table>\n\nActually, we rarely use the \"math\" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.",
"### One reason why we use \"numpy\" instead of \"math\" in Deep Learning ###\nx = [1, 2, 3]\nbasic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.",
"In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$",
"import numpy as np\n\n# example of np.exp\nx = np.array([1, 2, 3])\nprint(np.exp(x)) # result is (exp(1), exp(2), exp(3))",
"Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \\frac{1}{x}$ will output s as a vector of the same size as x.",
"# example of vector operation\nx = np.array([1, 2, 3])\nprint (x + 3)",
"Any time you need more info on a numpy function, we encourage you to look at the official documentation. \nYou can also create a new cell in the notebook and write np.exp? (for example) to get quick access to the documentation.\nExercise: Implement the sigmoid function using numpy. \nInstructions: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.\n$$ \\text{For } x \\in \\mathbb{R}^n \\text{, } sigmoid(x) = sigmoid\\begin{pmatrix}\n x_1 \\\n x_2 \\\n ... \\\n x_n \\\n\\end{pmatrix} = \\begin{pmatrix}\n \\frac{1}{1+e^{-x_1}} \\\n \\frac{1}{1+e^{-x_2}} \\\n ... \\\n \\frac{1}{1+e^{-x_n}} \\\n\\end{pmatrix}\\tag{1} $$",
"# GRADED FUNCTION: sigmoid\n\nimport numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()\n\ndef sigmoid(x):\n \"\"\"\n Compute the sigmoid of x\n\n Arguments:\n x -- A scalar or numpy array of any size\n\n Return:\n s -- sigmoid(x)\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line of code)\n s = 1/(1+np.exp(-x))\n ### END CODE HERE ###\n \n return s\n\nx = np.array([1, 2, 3])\nsigmoid(x)",
"Expected Output: \n<table>\n <tr> \n <td> **sigmoid([1,2,3])**</td> \n <td> array([ 0.73105858, 0.88079708, 0.95257413]) </td> \n </tr>\n</table>\n\n1.2 - Sigmoid gradient\nAs you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.\nExercise: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid_derivative(x) = \\sigma'(x) = \\sigma(x) (1 - \\sigma(x))\\tag{2}$$\nYou often code this function in two steps:\n1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.\n2. Compute $\\sigma'(x) = s(1-s)$",
"# GRADED FUNCTION: sigmoid_derivative\n\ndef sigmoid_derivative(x):\n \"\"\"\n Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.\n You can store the output of the sigmoid function into variables and then use it to calculate the gradient.\n \n Arguments:\n x -- A scalar or numpy array\n\n Return:\n ds -- Your computed gradient.\n \"\"\"\n \n ### START CODE HERE ### (≈ 2 lines of code)\n s = 1/(1+np.exp(-x))\n ds = s*(1-s)\n ### END CODE HERE ###\n \n return ds\n\nx = np.array([1, 2, 3])\nprint (\"sigmoid_derivative(x) = \" + str(sigmoid_derivative(x)))",
"Expected Output: \n<table>\n <tr> \n <td> **sigmoid_derivative([1,2,3])**</td> \n <td> [ 0.19661193 0.10499359 0.04517666] </td> \n </tr>\n</table>\n\n1.3 - Reshaping arrays\nTwo common numpy functions used in deep learning are np.shape and np.reshape(). \n- X.shape is used to get the shape (dimension) of a matrix/vector X. \n- X.reshape(...) is used to reshape X into some other dimension. \nFor example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(lengthheight3, 1)$. In other words, you \"unroll\", or reshape, the 3D array into a 1D vector.\n<img src=\"images/image2vector_kiank.png\" style=\"width:500px;height:300;\">\nExercise: Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:\npython\nv = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c\n- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with image.shape[0], etc.",
"# GRADED FUNCTION: image2vector\ndef image2vector(image):\n \"\"\"\n Argument:\n image -- a numpy array of shape (length, height, depth)\n \n Returns:\n v -- a vector of shape (length*height*depth, 1)\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line of code)\n v = image.reshape((image.shape[0]*image.shape[1]*image.shape[2],1))\n ### END CODE HERE ###\n \n return v\n\n# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values\nimage = np.array([[[ 0.67826139, 0.29380381],\n [ 0.90714982, 0.52835647],\n [ 0.4215251 , 0.45017551]],\n\n [[ 0.92814219, 0.96677647],\n [ 0.85304703, 0.52351845],\n [ 0.19981397, 0.27417313]],\n\n [[ 0.60659855, 0.00533165],\n [ 0.10820313, 0.49978937],\n [ 0.34144279, 0.94630077]]])\n\nprint (\"image2vector(image) = \" + str(image2vector(image)))",
"Expected Output: \n<table style=\"width:100%\">\n <tr> \n <td> **image2vector(image)** </td> \n <td> [[ 0.67826139]\n [ 0.29380381]\n [ 0.90714982]\n [ 0.52835647]\n [ 0.4215251 ]\n [ 0.45017551]\n [ 0.92814219]\n [ 0.96677647]\n [ 0.85304703]\n [ 0.52351845]\n [ 0.19981397]\n [ 0.27417313]\n [ 0.60659855]\n [ 0.00533165]\n [ 0.10820313]\n [ 0.49978937]\n [ 0.34144279]\n [ 0.94630077]]</td> \n </tr>\n\n\n</table>\n\n1.4 - Normalizing rows\nAnother common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \\frac{x}{\\| x\\|} $ (dividing each row vector of x by its norm).\nFor example, if $$x = \n\\begin{bmatrix}\n 0 & 3 & 4 \\\n 2 & 6 & 4 \\\n\\end{bmatrix}\\tag{3}$$ then $$\\| x\\| = np.linalg.norm(x, axis = 1, keepdims = True) = \\begin{bmatrix}\n 5 \\\n \\sqrt{56} \\\n\\end{bmatrix}\\tag{4} $$and $$ x_normalized = \\frac{x}{\\| x\\|} = \\begin{bmatrix}\n 0 & \\frac{3}{5} & \\frac{4}{5} \\\n \\frac{2}{\\sqrt{56}} & \\frac{6}{\\sqrt{56}} & \\frac{4}{\\sqrt{56}} \\\n\\end{bmatrix}\\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.\nExercise: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).",
"# GRADED FUNCTION: normalizeRows\n\ndef normalizeRows(x):\n \"\"\"\n Implement a function that normalizes each row of the matrix x (to have unit length).\n \n Argument:\n x -- A numpy matrix of shape (n, m)\n \n Returns:\n x -- The normalized (by row) numpy matrix. You are allowed to modify x.\n \"\"\"\n \n ### START CODE HERE ### (≈ 2 lines of code)\n # Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)\n x_norm = np.linalg.norm(x,axis=1,keepdims=True)\n \n # Divide x by its norm.\n x = x/x_norm\n ### END CODE HERE ###\n\n return x\n\nx = np.array([\n [0, 3, 4],\n [1, 6, 4]])\nprint(\"normalizeRows(x) = \" + str(normalizeRows(x)))",
"Expected Output: \n<table style=\"width:60%\">\n\n <tr> \n <td> **normalizeRows(x)** </td> \n <td> [[ 0. 0.6 0.8 ]\n [ 0.13736056 0.82416338 0.54944226]]</td> \n </tr>\n\n\n</table>\n\nNote:\nIn normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now! \n1.5 - Broadcasting and the softmax function\nA very important concept to understand in numpy is \"broadcasting\". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official broadcasting documentation.\nExercise: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.\nInstructions:\n- $ \\text{for } x \\in \\mathbb{R}^{1\\times n} \\text{, } softmax(x) = softmax(\\begin{bmatrix}\n x_1 &&\n x_2 &&\n ... &&\n x_n\n\\end{bmatrix}) = \\begin{bmatrix}\n \\frac{e^{x_1}}{\\sum_{j}e^{x_j}} &&\n \\frac{e^{x_2}}{\\sum_{j}e^{x_j}} &&\n ... &&\n \\frac{e^{x_n}}{\\sum_{j}e^{x_j}} \n\\end{bmatrix} $ \n\n$\\text{for a matrix } x \\in \\mathbb{R}^{m \\times n} \\text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\\begin{bmatrix}\n x_{11} & x_{12} & x_{13} & \\dots & x_{1n} \\\n x_{21} & x_{22} & x_{23} & \\dots & x_{2n} \\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\n x_{m1} & x_{m2} & x_{m3} & \\dots & x_{mn}\n\\end{bmatrix} = \\begin{bmatrix}\n \\frac{e^{x_{11}}}{\\sum_{j}e^{x_{1j}}} & \\frac{e^{x_{12}}}{\\sum_{j}e^{x_{1j}}} & \\frac{e^{x_{13}}}{\\sum_{j}e^{x_{1j}}} & \\dots & \\frac{e^{x_{1n}}}{\\sum_{j}e^{x_{1j}}} \\\n \\frac{e^{x_{21}}}{\\sum_{j}e^{x_{2j}}} & \\frac{e^{x_{22}}}{\\sum_{j}e^{x_{2j}}} & \\frac{e^{x_{23}}}{\\sum_{j}e^{x_{2j}}} & \\dots & \\frac{e^{x_{2n}}}{\\sum_{j}e^{x_{2j}}} \\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\n \\frac{e^{x_{m1}}}{\\sum_{j}e^{x_{mj}}} & \\frac{e^{x_{m2}}}{\\sum_{j}e^{x_{mj}}} & \\frac{e^{x_{m3}}}{\\sum_{j}e^{x_{mj}}} & \\dots & \\frac{e^{x_{mn}}}{\\sum_{j}e^{x_{mj}}}\n\\end{bmatrix} = \\begin{pmatrix}\n softmax\\text{(first row of x)} \\\n softmax\\text{(second row of x)} \\\n ... \\\n softmax\\text{(last row of x)} \\\n\\end{pmatrix} $$",
"# GRADED FUNCTION: softmax\n\ndef softmax(x):\n \"\"\"Calculates the softmax for each row of the input x.\n\n Your code should work for a row vector and also for matrices of shape (n, m).\n\n Argument:\n x -- A numpy matrix of shape (n,m)\n\n Returns:\n s -- A numpy matrix equal to the softmax of x, of shape (n,m)\n \"\"\"\n \n ### START CODE HERE ### (≈ 3 lines of code)\n # Apply exp() element-wise to x. Use np.exp(...).\n x_exp = np.exp(x)\n\n # Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).\n x_sum = np.sum(x_exp,axis=1,keepdims=True)\n \n # Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.\n s = x_exp/x_sum\n\n ### END CODE HERE ###\n \n return s\n\nx = np.array([\n [9, 2, 5, 0, 0],\n [7, 5, 0, 0 ,0]])\nprint(\"softmax(x) = \" + str(softmax(x)))",
"Expected Output:\n<table style=\"width:60%\">\n\n <tr> \n <td> **softmax(x)** </td> \n <td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04\n 1.21052389e-04]\n [ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04\n 8.01252314e-04]]</td> \n </tr>\n</table>\n\nNote:\n- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). x_exp/x_sum works due to python broadcasting.\nCongratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning.\n<font color='blue'>\nWhat you need to remember:\n- np.exp(x) works for any np.array x and applies the exponential function to every coordinate\n- the sigmoid function and its gradient\n- image2vector is commonly used in deep learning\n- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs. \n- numpy has efficient built-in functions\n- broadcasting is extremely useful\n2) Vectorization\nIn deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.",
"import time\n\nx1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]\nx2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]\n\n### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###\ntic = time.process_time()\ndot = 0\nfor i in range(len(x1)):\n dot+= x1[i]*x2[i]\ntoc = time.process_time()\nprint (\"dot = \" + str(dot) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### CLASSIC OUTER PRODUCT IMPLEMENTATION ###\ntic = time.process_time()\nouter = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros\nfor i in range(len(x1)):\n for j in range(len(x2)):\n outer[i,j] = x1[i]*x2[j]\ntoc = time.process_time()\nprint (\"outer = \" + str(outer) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### CLASSIC ELEMENTWISE IMPLEMENTATION ###\ntic = time.process_time()\nmul = np.zeros(len(x1))\nfor i in range(len(x1)):\n mul[i] = x1[i]*x2[i]\ntoc = time.process_time()\nprint (\"elementwise multiplication = \" + str(mul) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###\nW = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array\ntic = time.process_time()\ngdot = np.zeros(W.shape[0])\nfor i in range(W.shape[0]):\n for j in range(len(x1)):\n gdot[i] += W[i,j]*x1[j]\ntoc = time.process_time()\nprint (\"gdot = \" + str(gdot) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\nx1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]\nx2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]\n\n### VECTORIZED DOT PRODUCT OF VECTORS ###\ntic = time.process_time()\ndot = np.dot(x1,x2)\ntoc = time.process_time()\nprint (\"dot = \" + str(dot) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### VECTORIZED OUTER PRODUCT ###\ntic = time.process_time()\nouter = np.outer(x1,x2)\ntoc = time.process_time()\nprint (\"outer = \" + str(outer) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### VECTORIZED ELEMENTWISE MULTIPLICATION ###\ntic = time.process_time()\nmul = np.multiply(x1,x2)\ntoc = time.process_time()\nprint (\"elementwise multiplication = \" + str(mul) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### VECTORIZED GENERAL DOT PRODUCT ###\ntic = time.process_time()\ndot = np.dot(W,x1)\ntoc = time.process_time()\nprint (\"gdot = \" + str(dot) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")",
"As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger. \nNote that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), which performs an element-wise multiplication.\n2.1 Implement the L1 and L2 loss functions\nExercise: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.\nReminder:\n- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \\hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.\n- L1 loss is defined as:\n$$\\begin{align} & L_1(\\hat{y}, y) = \\sum_{i=0}^m|y^{(i)} - \\hat{y}^{(i)}| \\end{align}\\tag{6}$$",
"# GRADED FUNCTION: L1\n\ndef L1(yhat, y):\n \"\"\"\n Arguments:\n yhat -- vector of size m (predicted labels)\n y -- vector of size m (true labels)\n \n Returns:\n loss -- the value of the L1 loss function defined above\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line of code)\n loss = np.sum(np.abs(yhat-y))\n ### END CODE HERE ###\n \n return loss\n\nyhat = np.array([.9, 0.2, 0.1, .4, .9])\ny = np.array([1, 0, 0, 1, 1])\nprint(\"L1 = \" + str(L1(yhat,y)))",
"Expected Output:\n<table style=\"width:20%\">\n\n <tr> \n <td> **L1** </td> \n <td> 1.1 </td> \n </tr>\n</table>\n\nExercise: Implement the numpy vectorized version of the L2 loss. There are several way of implementing the L2 loss but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then np.dot(x,x) = $\\sum_{j=0}^n x_j^{2}$. \n\nL2 loss is defined as $$\\begin{align} & L_2(\\hat{y},y) = \\sum_{i=0}^m(y^{(i)} - \\hat{y}^{(i)})^2 \\end{align}\\tag{7}$$",
"# GRADED FUNCTION: L2\n\ndef L2(yhat, y):\n \"\"\"\n Arguments:\n yhat -- vector of size m (predicted labels)\n y -- vector of size m (true labels)\n \n Returns:\n loss -- the value of the L2 loss function defined above\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line of code)\n loss = np.sum(np.dot(y-yhat,y-yhat))\n ### END CODE HERE ###\n \n return loss\n\nyhat = np.array([.9, 0.2, 0.1, .4, .9])\ny = np.array([1, 0, 0, 1, 1])\nprint(\"L2 = \" + str(L2(yhat,y)))",
"Expected Output: \n<table style=\"width:20%\">\n <tr> \n <td> **L2** </td> \n <td> 0.43 </td> \n </tr>\n</table>\n\nCongratulations on completing this assignment. We hope that this little warm-up exercise helps you in the future assignments, which will be more exciting and interesting!\n<font color='blue'>\nWhat to remember:\n- Vectorization is very important in deep learning. It provides computational efficiency and clarity.\n- You have reviewed the L1 and L2 loss.\n- You are familiar with many numpy functions such as np.sum, np.dot, np.multiply, np.maximum, etc..."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
bjackman/lisa
|
ipynb/examples/android/workloads/Android_Viewer.ipynb
|
apache-2.0
|
[
"Generic Android viewer",
"from conf import LisaLogging\nLisaLogging.setup()\n\n%pylab inline\n\nimport json\nimport os\n\n# Support to access the remote target\nimport devlib\nfrom env import TestEnv\n\n# Import support for Android devices\nfrom android import Screen, Workload, System, ViewerWorkload\nfrom target_script import TargetScript\n\n# Support for trace events analysis\nfrom trace import Trace\n\n# Suport for FTrace events parsing and visualization\nimport trappy\n\nimport pandas as pd\nimport sqlite3\n\nfrom IPython.display import display",
"Test environment setup\nFor more details on this please check out examples/utils/testenv_example.ipynb.\ndevlib requires the ANDROID_HOME environment variable configured to point to your local installation of the Android SDK. If you have not this variable configured in the shell used to start the notebook server, you need to run a cell to define where your Android SDK is installed or specify the ANDROID_HOME in your target configuration.\nIn case more than one Android device are connected to the host, you must specify the ID of the device you want to target in my_target_conf. Run adb devices on your host to get the ID.",
"# Setup target configuration\nmy_conf = {\n\n # Target platform and board\n \"platform\" : 'android',\n \"board\" : 'hikey960',\n \n # Device serial ID\n # Not required if there is only one device connected to your computer\n \"device\" : \"0123456789ABCDEF\",\n \n # Android home\n # Not required if already exported in your .bashrc\n #\"ANDROID_HOME\" : \"/home/vagrant/lisa/tools/\",\n\n # Folder where all the results will be collected\n \"results_dir\" : \"Viewer_example\",\n\n # Define devlib modules to load\n \"modules\" : [\n 'cpufreq' # enable CPUFreq support\n ],\n\n # FTrace events to collect for all the tests configuration which have\n # the \"ftrace\" flag enabled\n \"ftrace\" : {\n \"events\" : [\n \"sched_switch\",\n \"sched_wakeup\",\n \"sched_wakeup_new\",\n \"sched_overutilized\",\n \"sched_load_avg_cpu\",\n \"sched_load_avg_task\",\n \"sched_load_waking_task\",\n \"cpu_capacity\",\n \"cpu_frequency\",\n \"cpu_idle\",\n \"sched_tune_config\",\n \"sched_tune_tasks_update\",\n \"sched_tune_boostgroup_update\",\n \"sched_tune_filter\",\n \"sched_boost_cpu\",\n \"sched_boost_task\",\n \"sched_energy_diff\"\n ],\n \"buffsize\" : 100 * 1024,\n },\n\n # Tools required by the experiments\n \"tools\" : [ 'trace-cmd', 'taskset'],\n}\n\n# Initialize a test environment using:\nte = TestEnv(my_conf, wipe=False)\ntarget = te.target",
"Workload definition\nThe Viewer workload will simply read an URI and let Android pick the best application to view the item designated by that URI. That item could be a web page, a photo, a pdf, etc. For instance, if given an URL to a Google Maps location, the Google Maps application will be opened at that location. If the device doesn't have Google Play Services (e.g. HiKey960), it will open Google Maps through the default web browser.\nThe Viewer class is intended to be subclassed to customize your workload. There are pre_interact(), interact() and post_interact() methods that are made to be overridden.\nIn this case we'll simply execute a script on the target to swipe around a location on Gmaps. This script is generated using the TargetScript class, which is used here on System.{h,v}swipe() calls to accumulate commands instead of executing them directly. Those commands are then outputted to a script on the remote device, and that script is later on executed as the item is being viewed. See ${LISA_HOME}/libs/util/target_script.py",
"class GmapsViewer(ViewerWorkload):\n \n def pre_interact(self):\n self.script = TargetScript(te, \"gmaps_swiper.sh\")\n\n # Define commands to execute during experiment\n for i in range(2):\n System.hswipe(self.script, 40, 60, 100, False)\n self.script.append('sleep 1')\n System.vswipe(self.script, 40, 60, 100, True)\n self.script.append('sleep 1')\n System.hswipe(self.script, 40, 60, 100, True)\n self.script.append('sleep 1')\n System.vswipe(self.script, 40, 60, 100, False)\n self.script.append('sleep 1')\n\n # Push script to the target\n self.script.push()\n \n def interact(self):\n self.script.run()\n\ndef experiment():\n # Configure governor\n target.cpufreq.set_all_governors('sched')\n \n # Get workload\n wload = Workload.getInstance(te, 'gmapsviewer')\n \n # Run workload\n wload.run(out_dir=te.res_dir,\n collect=\"ftrace\",\n uri=\"https://goo.gl/maps/D8Sn3hxsHw62\")\n \n # Dump platform descriptor\n te.platform_dump(te.res_dir)",
"Workload execution",
"results = experiment()\n\n# Load traces in memory (can take several minutes)\nplatform_file = os.path.join(te.res_dir, 'platform.json')\n\nwith open(platform_file, 'r') as fh:\n platform = json.load(fh)\n\ntrace_file = os.path.join(te.res_dir, 'trace.dat')\ntrace = Trace(platform, trace_file, events=my_conf['ftrace']['events'], normalize_time=False)",
"Traces visualisation",
"!kernelshark {trace_file} 2>/dev/null"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
bcantarel/bcantarel.github.io
|
bicf_nanocourses/courses/ML_1/exercises/Keras_Tutorial.ipynb
|
gpl-3.0
|
[
"Keras Tutorial\nKeras is a high-level neural networks library for Python capable of running on top of TensorFlow, Theano and other lower level frameworks. What makes keras special is that it is extremely user friendly: its syntax focuses on the big ideas, and takes care of a lot of the detailed plumbing, that can make these topics seems extremely complicated. For these reasons it is the ideal deep-learning package for beginners (although the fast-experimentation enabled by its simplicity has made it popular among serious researchers too).\nKeras Structure\nThe keras package is broken down into multiple parts, each describing a different part of neural network pipeline.\n1. Models (keras.models): This governs the overall type of architecture of the neural network. In our case, we go from a single input to output, and therefore use a sequential model. Other model types for reccurrent networks etc can also be found here.\n2. Layers (keras.layers): neural networks are built up as asequence of layers. These layers can be of many types Fully connected dense layers as we use here, and 2D convolutional (conv2D) layers that you will see soon are two well known examples.\n3. Optimizers (keras.optimizers): This is what keras uses to learn the neural network weights etc from the training data.\n4. Sample Data (keras.datasets): Like scikit learn, keras comes with several popular machine learning data sets to make it easy to benchmark and test our neural network architectures.\n5. Utils (keras.utils): Various utility functions to make our lives easier. Here we will use a function to draw a diagram depicting our network.",
"# Lets import the various libraries we need\nfrom __future__ import print_function\n\nimport keras\nfrom keras.datasets import mnist\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout\nfrom keras.optimizers import RMSprop\nfrom keras.utils.vis_utils import plot_model\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom IPython.display import Image",
"The Data\nFor this exercise (and the next one) will will be using the classic MNIST dataset. Like the digits data you had used earlier, these are pictures of numbers. The big difference is the pictures are bigger (28x28 pixels rather than 8x8) and we will be looking at a much larger data set: 60,000 training images and 10,000 to test. See sample pictures by running the code below.\nData structure:\n* As with the digits data earlier, we will \"unwrap\" the 28x28 square arrays into 784 dimensional vectors\n* With 60,000 input images this gives us an input matrix x_train of dimensionality 60,000 x 784 (and a 10,000 x 784 matrix x_testfor the test data)\n* For each of these images we have a true class label y_test and y_train, which is a number between 0 and 9 corresponding to the true digit.Thus num_classes=10\n* Below we reshape this label data into a \"binary\" format for ease of comparison. Each element in y_test is represented by a 10 dimensional vector, with a 1 corresponding to the correct label and all the others zero.\n* Thus our \"output\" vector dimensionality will 60,000 x 10 (and a 10,000 x 784 matrix x_testfor the test data)",
"# Load Mnist data\n# the data, split between train and test sets\nnum_classes = 10 # we have 10 digits to classify\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\n\nx_train = x_train.reshape(60000, 784)\nx_test = x_test.reshape(10000, 784)\nx_train = x_train.astype('float32')\nx_test = x_test.astype('float32')\nx_train /= 255\nx_test /= 255\nprint(x_train.shape[0], 'train samples')\nprint(x_test.shape[0], 'test samples')\n\n# convert class vectors to binary class matrices\ny_train = keras.utils.to_categorical(y_train, num_classes)\ny_test = keras.utils.to_categorical(y_test, num_classes)\n\n\n# Plot sample pictures\nfig = plt.figure(figsize=(8, 8))\nclassNum=np.zeros((y_train.shape[0],1))\nfor i in range(y_train.shape[0]):\n classNum[i]=np.where(y_train[i,:])\n\nmeanVecs=np.zeros((num_classes,784)) \nnReps=10\ncounter=1\nfor num in range(num_classes):\n idx =np.where(classNum==num)[0]\n meanVecs[num,:]=np.mean(x_train[idx,:],axis=0)\n for rep in range(nReps):\n mat=x_train[idx[rep],:]\n mat=(mat.reshape(28,28))\n ax = fig.add_subplot(num_classes,nReps,counter)\n plt.imshow(mat,cmap=plt.cm.binary)\n plt.xticks([])\n plt.yticks([])\n counter=counter+1\nplt.suptitle('Sample Numbers')\nplt.show()\n\n# Plot \"Average\" Pictures\nfig = plt.figure(figsize=(8, 8))\n\nfor num in range(num_classes):\n \n mat=(meanVecs[num,:].reshape(28,28))\n ax = fig.add_subplot(3,4,num+1)\n plt.imshow(mat,cmap=plt.cm.binary)\nplt.suptitle('Average Numbers')\nplt.show()\n",
"Building the model\nWe're now ready to build our first neural network model. We'll begin with the easiest model imaginable given the data. A network where eeach one of the 784 pixels in the input data is connected to each of the 10 output classes.\n\nThis network essentially just has a single layer (it is both the first and last layer), and is essentially just a matrix 784x10 matrix, which multiplies the 784 pixel input vector to produve a 10 dimensional output. This is not deep yet, but we'll get to adding more layers later.\nStep 1:Define the Model\nIn this case there is one set of input going forward to a single set of outputs with no feedback. This is called a sequential model. We can thus create the \"shell\" for such a model by declaring:",
"model = Sequential()",
"The variable model will contain all the information on the model, and as we add layers and perform calculations, it will all be stored insidle this variable.\nStep II: Adding a (input) layer\nIn general adding a layer to a mode is achieved by the model.add(layer_definition) command. Where the layer_definition is specific to the kind of layer we want to add.\nLayers are defined by 4 traits:\n1. The layer type: Each layer type will have its own function in keras like Dense for a fully-connected layer, conv2d of a 2D convolutional layer and so on. \n2. The output size: This is the number of outputs emerging after the layer, and is specified as the first argument to the layer function.\n - In our case the output size is num_classes=10\n3. The input size: This is the number of inputs coming in to the function. \n - In general, * except for the input layer, keras is smart enough to figure out the input size based on the preceding layers* \n - We need to explicitly specify the size of the input layer as an input_shape= parameter.\n4. Activation: This is the non-linear function used to transform the integrated inputs into the output. E.g. 'relu is the rectified-linear-unit transform. For a layer connecting to the output, we want to enforce a binary-ish response, and hence the last-layer will often contain a softmax activation.\nNote: in our super-simple first case, the first layer is essentially the last layer. which why we end up providing both the input_shape= and using softmax. This will not be the case for deeper networks.\nSo we can add our (Dense) fully connected layer with num_classes outputs, softmax activation and 784 pixel inputs as:",
"model.add(Dense(num_classes, activation='softmax', input_shape=(784,)))\n",
"Note the input_shape=(784,) here. 784 is the size of out input images. The (784,) means we can have an unspecified number of input images coming into the network.\nNormally, we would keep adding more layers (see examples later). But in this case we only have one layer. So we can see what our model looks like by invoking:",
"model.summary()",
"Whenever you build a model be sure to look at the summary. The number of trainable parameters gives you a sense of model complexity. Models with fewer parameters are likely to be easier to train, and likely to generalize better for the same performance.\nStep III: Compile the model\nThis step describes how keras will update the model during the training phase (this is defining some of the parameters for this calculation).\nThere are two choices we make here:\n1. How to define the loss: This defines how we penalize disagreement of the output of the classifier with respect to the true labels. There are several options (https://keras.io/losses/), but for a multi-class problem such as this with a binary encoding categorical_crossentropy is a popular choice that penalizes high-confidence mistakes.\n2. The optimizer: Optimization is performed using some form of gradient descent. But there are several choices on exactly how this is performed (https://keras.io/optimizers/ and http://ruder.io/optimizing-gradient-descent/). Here we choose to go with RMSprop() a popular choice. The parameters of the optimizer, such as learning rate, can also be specified as parameters to the optimization function, but we will use default values for now.\nWe also choose to report on the accuracy during the optimization process, to keep track of progress:",
"model.compile(loss='categorical_crossentropy',\n optimizer=RMSprop(),\n metrics=['accuracy'])",
"Step IV: Train the model\nWe're now ready to perform the training. A couple more decisions need to be made:\n1. How long to train: This is measured in \"epochs\", each epoch representing the whole training data going through the algorithm. You want to give the optimizer long enough so it can get to a good solution, but you will hit a point of diminishing returns (more on this later, and in the exxercises)\n2. Batch Size: the network is updated by splitting the data into mini-batches and this is the number of data points used for each full update of the weights. Increasing batch size, will lead to larger memory requirements etc. \nWith these parameters we can fit the model to our training data as follows. This will produce a nice plot updating the accuracy (% of correctly classified points in the training data):",
"batch_size = 128\nepochs = 20\nhistory = model.fit(x_train, y_train,\n batch_size=batch_size,\n epochs=epochs,\n verbose=1)\n",
"Step V: Test the model\nThe real test of a model is how well it does on the training data (a severly overfit model could get 100% accuracy on training data, but fail miserably on new data). So the real score we care about is the performance on un-seen data. We do this by evaluating performance on the test data we held out so far. The code below also makes a plot showing the confidence curve on the training data, and a flat line corresponding to the performance on the new data. \nQuestions:\n\nWhat does the deviation between these two curves tell us?\nWhat do you think would happen if we continued training? Feel free to test.",
"score = model.evaluate(x_test, y_test, verbose=0)\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])\n\nfig = plt.figure()\ntrainCurve=plt.plot(history.history['acc'],label='Training')\ntestCurve=plt.axhline(y=score[1],color='k',label='Testing')\nplt.title('model accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend()\nplt.show()",
"Interpreting the results\nSince we used an incredibly simple network, it is easy to look under the covers and see what is going on. For each class (i.e. digit=1 to 9) this simple network essentially has a filter-matrix the same size as the image, which it \"multiplies\" an input image against. The digit for which the filter produces the highest response is the chosen digit. We can look at the filter matrices by looking at the weights of the first layer.",
"denseLayer=model.layers[0]\nweights=denseLayer.get_weights()\ntopWeights=weights[0]\nfig = plt.figure(figsize=(15, 15))\nmeanTop=np.mean(topWeights,axis=1)\nfor num in range(topWeights.shape[1]):\n mat=topWeights[:,num]-meanTop\n mat=(mat.reshape(28,28))\n ax = fig.add_subplot(3,4,num+1)\n plt.imshow(mat,cmap=plt.cm.binary)\nplt.show()\n#fig=plt.figure()\n#plt.imshow(meanTop.reshape(28,28),cmap=plt.cm.binary)\n#plt.show()",
"Questions:\nCompare the results of these weights to the mean digit images at the beginning of this notebook:\n1. Can you explain the form of these curves? Why does the first one have a white spot (low value) at the center while the second has a high value there?\n2. What do you think the limitations of this approach are? What might we gain by adding more layers?\nBuilding a deeper network\nWe will build a deeper 3-layer network, following exactly the same steps, but with 2-additional layers thrown in. FYI, this topology comes from one of the examples provided by keras at https://github.com/keras-team/keras/blob/master/examples/mnist_mlp.py \n* Note the changes to the input layer below (output size and activation).\n* There is a new intermediate dense layer. This takes the 512 outputs from the input layer and connects them to 512 outputs.\n* Then the final layer connects the 512 inputs from the previous layer to the 10 needed outputs.",
"\n# Model initialization unchanged\nmodel = Sequential()\n# Input layer is similar, but because it doesn't connect to the final layer we are free\n# to choose number of outputs (here 512) and we use 'relu' activation instead of softmax. It also \nmodel.add(Dense(512, activation='relu', input_shape=(784,)))\n# New intermediate layer connecting the 512 inputs from the previous layer to the 512 new outputs\nmodel.add(Dense(512, activation='relu'))\n# The 512 inputs from the previous layer connect to the final 10 class outputs, with a softmax activation\nmodel.add(Dense(num_classes, activation='softmax'))\n\n# Compilation as before\nmodel.compile(loss='categorical_crossentropy',\n optimizer=RMSprop(),\n metrics=['accuracy'])\n#plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)\n#Image(\"model_plot.png\")\nmodel.summary()",
"Note the dramatic increase in the number of parameters achieved with this network. When you train this network, you'll see that this increase in number of parameters really slows things down, but leads to an increase in performance:",
"batch_size = 128\nepochs = 20\nhistory = model.fit(x_train, y_train,\n batch_size=batch_size,\n epochs=epochs,\n verbose=1)\nscore = model.evaluate(x_test, y_test, verbose=0)\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])\n\nfig = plt.figure()\ntrainCurve=plt.plot(history.history['acc'],label='Training')\ntestCurve=plt.axhline(y=score[1],color='k',label='Testing')\nplt.title('model accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend()\nplt.show()",
"Questions:\n\nWhat do you think about the agreement between the training and testing curves?\nDo you think there is a qualitative difference in what this classifier picks up? \n\nWe can get some clues by looking at the input layer as before. Although now there are 512 instead of 10 filters. So we just look at the first 10.",
"denseLayer=model.layers[0]\nweights=denseLayer.get_weights()\ntopWeights=weights[0]\nfig = plt.figure(figsize=(15, 15))\nmeanTop=np.mean(topWeights,axis=1)\nfor num in range(10):\n mat=topWeights[:,num]-meanTop\n mat=(mat.reshape(28,28))\n ax = fig.add_subplot(3,4,num+1)\n plt.imshow(mat,cmap=plt.cm.binary)\nplt.show()",
"What is the quaitative difference between these plots and \nOverfitting\nAs you can see with the increase in the number of parameters, there is an increase in the mismatch between training and testing, raising the concern that the model is overfitting on the training data. There are a couple of approaches to combat this. \n1. Dropout\n2. Don't train so hard!\nDropout is a way of introducing noise into the model to prevent overfitting. It is implemented by adding dropout layers. These layers randomly set a specified fraction of input units to zeros (at each update during training). Below is the same model implemented with dropout layers added.",
"model = Sequential()\nmodel.add(Dense(512, activation='relu', input_shape=(784,)))\n# NEW dropout layer dropping 20% of inputs\nmodel.add(Dropout(0.2))\nmodel.add(Dense(512, activation='relu'))\n# NEW dropout layer dropping 20% of inputs\nmodel.add(Dropout(0.2))\nmodel.add(Dense(num_classes, activation='softmax'))\n\n\n# Compilation as before\nmodel.compile(loss='categorical_crossentropy',\n optimizer=RMSprop(),\n metrics=['accuracy'])\n#plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)\n#Image(\"model_plot.png\")\nmodel.summary()\nbatch_size = 128\nepochs = 20\nhistory = model.fit(x_train, y_train,\n batch_size=batch_size,\n epochs=epochs,\n verbose=1)\nscore = model.evaluate(x_test, y_test, verbose=0)\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])\n\nfig = plt.figure()\ntrainCurve=plt.plot(history.history['acc'],label='Training')\ntestCurve=plt.axhline(y=score[1],color='k',label='Testing')\nplt.title('model accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend()\nplt.show()",
"Exercises & Questions\n\nWhat do you think would happen with increased dropout? Feel freee to experiment\nWhat is the effect of training too long? Repeat these experiments of more or fewer epochs. What does that do?\nPlay with the (hyper-parameters) parameters. We don't have to use 2 layers with 512 outputs. Change things around and see how sensitive the results are.\nIs it fair to keep doing this and using the topology that gives the best performance on the testing data?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/vertex-ai-samples
|
community-content/tf_keras_text_classification_distributed_single_worker_gpus_with_gcloud_local_run_and_vertex_sdk/vertex_training_with_local_mode_container.ipynb
|
apache-2.0
|
[
"# Copyright 2022 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"TF-Keras Text Classification Distributed Single Worker GPUs using Vertex Training with Local Mode Container\n<table align=\"left\">\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/community-content/tf_keras_text_classification_distributed_single_worker_gpus_with_gcloud_local_run_and_vertex_sdk/vertex_training_with_local_mode_container.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n</table>\n\nSetup",
"PROJECT_ID = \"YOUR PROJECT ID\"\nBUCKET_NAME = \"gs://YOUR BUCKET NAME\"\nREGION = \"YOUR REGION\"\nSERVICE_ACCOUNT = \"YOUR SERVICE ACCOUNT\"\n\ncontent_name = \"tf-keras-txt-cls-dist-single-worker-gpus-local-mode-cont\"",
"Local Training with Vertex Local Mode and Auto Packaging",
"BASE_IMAGE_URI = \"us-docker.pkg.dev/vertex-ai/training/tf-gpu.2-5:latest\"\nSCRIPT_PATH = \"trainer/task.py\"\nOUTPUT_IMAGE_NAME = \"gcr.io/{}/{}:latest\".format(PROJECT_ID, content_name)\nARGS = \"--epochs 5 --batch-size 16 --local-mode\"\n\n! gcloud ai custom-jobs local-run \\\n --executor-image-uri=$BASE_IMAGE_URI \\\n --script=$SCRIPT_PATH \\\n --output-image-uri=$OUTPUT_IMAGE_NAME \\\n -- \\\n $ARGS",
"Vertex Training using Vertex SDK and Vertex Local Mode Container\nContainer Built by Vertex Local Mode",
"custom_container_image_uri = OUTPUT_IMAGE_NAME\n\n! docker push $custom_container_image_uri\n\n! gcloud container images list --repository \"gcr.io\"/$PROJECT_ID",
"Initialize Vertex SDK",
"! pip install -r requirements.txt\n\nfrom google.cloud import aiplatform\n\naiplatform.init(\n project=PROJECT_ID,\n staging_bucket=BUCKET_NAME,\n location=REGION,\n)",
"Create a Vertex Tensorboard Instance",
"tensorboard = aiplatform.Tensorboard.create(\n display_name=content_name,\n)",
"Option: Use a Previously Created Vertex Tensorboard Instance\ntensorboard_name = \"Your Tensorboard Resource Name or Tensorboard ID\"\ntensorboard = aiplatform.Tensorboard(tensorboard_name=tensorboard_name)\nRun a Vertex SDK CustomContainerTrainingJob",
"display_name = content_name\ngcs_output_uri_prefix = f\"{BUCKET_NAME}/{display_name}\"\n\nmachine_type = \"n1-standard-8\"\naccelerator_count = 4\naccelerator_type = \"NVIDIA_TESLA_P100\"\n\nargs = [\n \"--epochs\",\n \"100\",\n \"--batch-size\",\n \"128\",\n \"--num-gpus\",\n f\"{accelerator_count}\",\n]\n\ncustom_container_training_job = aiplatform.CustomContainerTrainingJob(\n display_name=display_name,\n container_uri=custom_container_image_uri,\n)\n\ncustom_container_training_job.run(\n args=args,\n base_output_dir=gcs_output_uri_prefix,\n machine_type=machine_type,\n accelerator_type=accelerator_type,\n accelerator_count=accelerator_count,\n tensorboard=tensorboard.resource_name,\n service_account=SERVICE_ACCOUNT,\n)\n\nprint(f\"Custom Training Job Name: {custom_container_training_job.resource_name}\")\nprint(f\"GCS Output URI Prefix: {gcs_output_uri_prefix}\")",
"Training Output Artifact",
"! gsutil ls $gcs_output_uri_prefix",
"Clean Up Artifact",
"! gsutil rm -rf $gcs_output_uri_prefix"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dfm/george
|
docs/_static/notebooks/hyper.ipynb
|
mit
|
[
"%matplotlib inline\n%config InlineBackend.figure_format = \"retina\"\n\nfrom __future__ import print_function\n\nfrom matplotlib import rcParams\nrcParams[\"savefig.dpi\"] = 100\nrcParams[\"figure.dpi\"] = 100\nrcParams[\"font.size\"] = 20",
"Hyperparameter optimization\nThis notebook was made with the following version of george:",
"import george\ngeorge.__version__",
"In this tutorial, we’ll reproduce the analysis for Figure 5.6 in Chapter 5 of Rasmussen & Williams (R&W). The data are measurements of the atmospheric CO2 concentration made at Mauna Loa, Hawaii (Keeling & Whorf 2004). The dataset is said to be available online but I couldn’t seem to download it from the original source. Luckily the statsmodels package includes a copy that we can load as follows:",
"import numpy as np\nimport matplotlib.pyplot as pl\nfrom statsmodels.datasets import co2\n\ndata = co2.load_pandas().data\nt = 2000 + (np.array(data.index.to_julian_date()) - 2451545.0) / 365.25\ny = np.array(data.co2)\nm = np.isfinite(t) & np.isfinite(y) & (t < 1996)\nt, y = t[m][::4], y[m][::4]\n\npl.plot(t, y, \".k\")\npl.xlim(t.min(), t.max())\npl.xlabel(\"year\")\npl.ylabel(\"CO$_2$ in ppm\");",
"In this figure, you can see that there is periodic (or quasi-periodic) signal with a year-long period superimposed on a long term trend. We will follow R&W and model these effects non-parametrically using a complicated covariance function. The covariance function that we’ll use is:\n$$k(r) = k_1(r) + k_2(r) + k_3(r) + k_4(r)$$\nwhere\n$$\n\\begin{eqnarray}\n k_1(r) &=& \\theta_1^2 \\, \\exp \\left(-\\frac{r^2}{2\\,\\theta_2} \\right) \\\n k_2(r) &=& \\theta_3^2 \\, \\exp \\left(-\\frac{r^2}{2\\,\\theta_4}\n -\\theta_5\\,\\sin^2\\left(\n \\frac{\\pi\\,r}{\\theta_6}\\right)\n \\right) \\\n k_3(r) &=& \\theta_7^2 \\, \\left [ 1 + \\frac{r^2}{2\\,\\theta_8\\,\\theta_9}\n \\right ]^{-\\theta_8} \\\n k_4(r) &=& \\theta_{10}^2 \\, \\exp \\left(-\\frac{r^2}{2\\,\\theta_{11}} \\right)\n + \\theta_{12}^2\\,\\delta_{ij}\n\\end{eqnarray}\n$$\nWe can implement this kernel in George as follows (we'll use the R&W results\nas the hyperparameters for now):",
"from george import kernels\n\nk1 = 66**2 * kernels.ExpSquaredKernel(metric=67**2)\nk2 = 2.4**2 * kernels.ExpSquaredKernel(90**2) * kernels.ExpSine2Kernel(gamma=2/1.3**2, log_period=0.0)\nk3 = 0.66**2 * kernels.RationalQuadraticKernel(log_alpha=np.log(0.78), metric=1.2**2)\nk4 = 0.18**2 * kernels.ExpSquaredKernel(1.6**2)\nkernel = k1 + k2 + k3 + k4",
"Optimization\nIf we want to find the \"best-fit\" hyperparameters, we should optimize an objective function.\nThe two standard functions (as described in Chapter 5 of R&W) are the marginalized ln-likelihood and the cross validation likelihood.\nGeorge implements the former in the GP.lnlikelihood function and the gradient with respect to the hyperparameters in the GP.grad_lnlikelihood function:",
"import george\ngp = george.GP(kernel, mean=np.mean(y), fit_mean=True,\n white_noise=np.log(0.19**2), fit_white_noise=True)\ngp.compute(t)\nprint(gp.log_likelihood(y))\nprint(gp.grad_log_likelihood(y))",
"We'll use a gradient based optimization routine from SciPy to fit this model as follows:",
"import scipy.optimize as op\n\n# Define the objective function (negative log-likelihood in this case).\ndef nll(p):\n gp.set_parameter_vector(p)\n ll = gp.log_likelihood(y, quiet=True)\n return -ll if np.isfinite(ll) else 1e25\n\n# And the gradient of the objective function.\ndef grad_nll(p):\n gp.set_parameter_vector(p)\n return -gp.grad_log_likelihood(y, quiet=True)\n\n# You need to compute the GP once before starting the optimization.\ngp.compute(t)\n\n# Print the initial ln-likelihood.\nprint(gp.log_likelihood(y))\n\n# Run the optimization routine.\np0 = gp.get_parameter_vector()\nresults = op.minimize(nll, p0, jac=grad_nll, method=\"L-BFGS-B\")\n\n# Update the kernel and print the final log-likelihood.\ngp.set_parameter_vector(results.x)\nprint(gp.log_likelihood(y))",
"Warning: An optimization code something like this should work on most problems but the results can be very sensitive to your choice of initialization and algorithm. If the results are nonsense, try choosing a better initial guess or try a different value of the method parameter in op.minimize.\nWe can plot our prediction of the CO2 concentration into the future using our optimized Gaussian process model by running:",
"x = np.linspace(max(t), 2025, 2000)\nmu, var = gp.predict(y, x, return_var=True)\nstd = np.sqrt(var)\n\npl.plot(t, y, \".k\")\npl.fill_between(x, mu+std, mu-std, color=\"g\", alpha=0.5)\n\npl.xlim(t.min(), 2025)\npl.xlabel(\"year\")\npl.ylabel(\"CO$_2$ in ppm\");",
"Sampling & Marginalization\nThe prediction made in the previous section take into account uncertainties due to the fact that a Gaussian process is stochastic but it doesn’t take into account any uncertainties in the values of the hyperparameters. This won’t matter if the hyperparameters are very well constrained by the data but in this case, many of the parameters are actually poorly constrained. To take this effect into account, we can apply prior probability functions to the hyperparameters and marginalize using Markov chain Monte Carlo (MCMC). To do this, we’ll use the emcee package.\nFirst, we define the probabilistic model:",
"def lnprob(p):\n # Trivial uniform prior.\n if np.any((-100 > p[1:]) + (p[1:] > 100)):\n return -np.inf\n\n # Update the kernel and compute the lnlikelihood.\n gp.set_parameter_vector(p)\n return gp.lnlikelihood(y, quiet=True)",
"In this function, we’ve applied a prior on every parameter that is uniform between -100 and 100 for every parameter. In real life, you should probably use something more intelligent but this will work for this problem. The quiet argument in the call to GP.lnlikelihood() means that that function will return -numpy.inf if the kernel is invalid or if there are any linear algebra errors (otherwise it would raise an exception).\nThen, we run the sampler (this will probably take a while to run if you want to repeat this analysis):",
"import emcee\n\ngp.compute(t)\n\n# Set up the sampler.\nnwalkers, ndim = 36, len(gp)\nsampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob)\n\n# Initialize the walkers.\np0 = gp.get_parameter_vector() + 1e-4 * np.random.randn(nwalkers, ndim)\n\nprint(\"Running burn-in\")\np0, _, _ = sampler.run_mcmc(p0, 200)\n\nprint(\"Running production chain\")\nsampler.run_mcmc(p0, 200);",
"After this run, you can plot 50 samples from the marginalized predictive probability distribution:",
"x = np.linspace(max(t), 2025, 250)\nfor i in range(50):\n # Choose a random walker and step.\n w = np.random.randint(sampler.chain.shape[0])\n n = np.random.randint(sampler.chain.shape[1])\n gp.set_parameter_vector(sampler.chain[w, n])\n\n # Plot a single sample.\n pl.plot(x, gp.sample_conditional(y, x), \"g\", alpha=0.1)\n \npl.plot(t, y, \".k\")\n\npl.xlim(t.min(), 2025)\npl.xlabel(\"year\")\npl.ylabel(\"CO$_2$ in ppm\");",
"Comparing this to the same figure in the previous section, you’ll notice that the error bars on the prediction are now substantially larger than before. This is because we are now considering all the predictions that are consistent with the data, not just the “best” prediction. In general, even though it requires much more computation, it is more conservative (and honest) to take all these sources of uncertainty into account."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
xiaoxiaoyao/MyApp
|
jupyter_notebook/getKeyWord.ipynb
|
unlicense
|
[
"如何用Python提取中文关键词?\n本文一步步为你演示,如何用Python从中文文本中提取关键词。如果你需要对长文“观其大略”,不妨尝试一下。(单一文本关键词的提取方法)",
"# -*- coding=utf-8 -*-\nimport jieba.analyse\nimport jieba\nwith open('../docs/HLS.TXT', encoding='utf-8') as f:\n data = f.read()",
"分别使用TF-idf、TextRank方式提取关键词和权重,并且依次显示出来。(如果你不做特殊指定的话,默认显示数量为20个关键词)",
"for keyword, weight in jieba.analyse.extract_tags(data, topK=30, withWeight=True):\n print('%s %s' % (keyword, weight))\n\nfor keyword, weight in jieba.analyse.textrank(data, topK=30, withWeight=True):\n print('%s %s' % (keyword, weight))\n\nresult=\" \".join(jieba.cut(data))\nprint(\"切分结果: \"+result[0:99])\n\nfrom wordcloud import WordCloud\nwordcloud = WordCloud(\n background_color=\"white\", #背景颜色\n max_words=200, #显示最大词数\n width=800, # 输出的画布宽度,默认为400像素\n height=600,# 输出的画布高度,默认为400像素\n font_path=r\"C:\\Windows\\Fonts\\msyh.ttc\", #使用字体微软雅黑\n ).generate(result)\n%pylab inline\nimport matplotlib.pyplot as plt\nplt.imshow(wordcloud)\nplt.axis(\"off\")",
"原理\n我们简要讲解一下,前文出现的2种不同关键词提取方式——TF-idf和TextRank的基本原理。\n为了不让大家感到枯燥,这里咱们就不使用数学公式了。后文我会给出相关的资料链接。如果你对细节感兴趣,欢迎按图索骥,查阅学习。\n先说TF-idf。\n它的全称是 Term Frequency - inverse document frequency。中间有个连字符,左右两侧各是一部分,共同结合起来,决定某个词的重要程度。\n第一部分,就是词频(Term Frequency),即某个词语出现的频率。\n我们常说“重要的事说三遍”。\n同样的道理,某个词语出现的次数多,也就说明这个词语重要性可能会很高。\n但是,这只是可能性,并不绝对。\n例如现代汉语中的许多虚词——“的,地,得”,古汉语中的许多句尾词“之、乎、者、也、兮”,这些词在文中可能出现许多次,但是它们显然不是关键词。\n这就是为什么我们在判断关键词的时候,需要第二部分(idf)配合。\n逆文档频率(inverse document frequency)首先计算某个词在各文档中出现的频率。假设一共有10篇文档,其中某个词A在其中10篇文章中都出先过,另一个词B只在其中3篇文中出现。请问哪一个词更关键?\n给你一分钟思考一下,然后继续读。\n公布答案时间到。\n答案是B更关键。\nA可能就是虚词,或者全部文档共享的主题词。而B只在3篇文档中出现,因此很有可能是个关键词。\n逆文档频率就是把这种文档频率取倒数。这样第一部分和第二部分都是越高越好。二者都高,就很有可能是关键词了。\nTF-idf讲完了,下面我们说说TextRank。\n相对于TF-idf,TextRank要显得更加复杂一些。它不是简单做加减乘除运算,而是基于图的计算。\n文章来源:https://zhuanlan.zhihu.com/p/31870596?group_id=923093802266013696"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ImAlexisSaez/deep-learning-specialization-coursera
|
course_1/week_3/assignment_1/planar_data_classification_with_one_hidden_layer_v2.ipynb
|
mit
|
[
"Planar data classification with one hidden layer\nWelcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression. \nYou will learn how to:\n- Implement a 2-class classification neural network with a single hidden layer\n- Use units with a non-linear activation function, such as tanh \n- Compute the cross entropy loss \n- Implement forward and backward propagation\n1 - Packages\nLet's first import all the packages that you will need during this assignment.\n- numpy is the fundamental package for scientific computing with Python.\n- sklearn provides simple and efficient tools for data mining and data analysis. \n- matplotlib is a library for plotting graphs in Python.\n- testCases provides some test examples to assess the correctness of your functions\n- planar_utils provide various useful functions used in this assignment",
"# Package imports\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom testCases import *\nimport sklearn\nimport sklearn.datasets\nimport sklearn.linear_model\nfrom planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets\n\n%matplotlib inline\n\nnp.random.seed(1) # set a seed so that the results are consistent",
"2 - Dataset\nFirst, let's get the dataset you will work on. The following code will load a \"flower\" 2-class dataset into variables X and Y.",
"X, Y = load_planar_dataset()",
"Visualize the dataset using matplotlib. The data looks like a \"flower\" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data.",
"# Visualize the data:\nplt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);",
"You have:\n - a numpy-array (matrix) X that contains your features (x1, x2)\n - a numpy-array (vector) Y that contains your labels (red:0, blue:1).\nLets first get a better sense of what our data is like. \nExercise: How many training examples do you have? In addition, what is the shape of the variables X and Y? \nHint: How do you get the shape of a numpy array? (help)",
"### START CODE HERE ### (≈ 3 lines of code)\nshape_X = X.shape\nshape_Y = Y.shape\nm = Y.flatten().shape # training set size\n### END CODE HERE ###\n\nprint ('The shape of X is: ' + str(shape_X))\nprint ('The shape of Y is: ' + str(shape_Y))\nprint ('I have m = %d training examples!' % (m))",
"Expected Output:\n<table style=\"width:20%\">\n\n <tr>\n <td>**shape of X**</td>\n <td> (2, 400) </td> \n </tr>\n\n <tr>\n <td>**shape of Y**</td>\n <td>(1, 400) </td> \n </tr>\n\n <tr>\n <td>**m**</td>\n <td> 400 </td> \n </tr>\n\n</table>\n\n3 - Simple Logistic Regression\nBefore building a full neural network, lets first see how logistic regression performs on this problem. You can use sklearn's built-in functions to do that. Run the code below to train a logistic regression classifier on the dataset.",
"# Train the logistic regression classifier\nclf = sklearn.linear_model.LogisticRegressionCV();\nclf.fit(X.T, Y.T);",
"You can now plot the decision boundary of these models. Run the code below.",
"# Plot the decision boundary for logistic regression\nplot_decision_boundary(lambda x: clf.predict(x), X, Y)\nplt.title(\"Logistic Regression\")\n\n# Print accuracy\nLR_predictions = clf.predict(X.T)\nprint ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) +\n '% ' + \"(percentage of correctly labelled datapoints)\")",
"Expected Output:\n<table style=\"width:20%\">\n <tr>\n <td>**Accuracy**</td>\n <td> 47% </td> \n </tr>\n\n</table>\n\nInterpretation: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now! \n4 - Neural Network model\nLogistic regression did not work well on the \"flower dataset\". You are going to train a Neural Network with a single hidden layer.\nHere is our model:\n<img src=\"images/classification_kiank.png\" style=\"width:600px;height:300px;\">\nMathematically:\nFor one example $x^{(i)}$:\n$$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1] (i)}\\tag{1}$$ \n$$a^{[1] (i)} = \\tanh(z^{[1] (i)})\\tag{2}$$\n$$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2] (i)}\\tag{3}$$\n$$\\hat{y}^{(i)} = a^{[2] (i)} = \\sigma(z^{ [2] (i)})\\tag{4}$$\n$$y^{(i)}_{prediction} = \\begin{cases} 1 & \\mbox{if } a^{2} > 0.5 \\ 0 & \\mbox{otherwise } \\end{cases}\\tag{5}$$\nGiven the predictions on all the examples, you can also compute the cost $J$ as follows: \n$$J = - \\frac{1}{m} \\sum\\limits_{i = 0}^{m} \\large\\left(\\small y^{(i)}\\log\\left(a^{[2] (i)}\\right) + (1-y^{(i)})\\log\\left(1- a^{[2] (i)}\\right) \\large \\right) \\small \\tag{6}$$\nReminder: The general methodology to build a Neural Network is to:\n 1. Define the neural network structure ( # of input units, # of hidden units, etc). \n 2. Initialize the model's parameters\n 3. Loop:\n - Implement forward propagation\n - Compute loss\n - Implement backward propagation to get the gradients\n - Update parameters (gradient descent)\nYou often build helper functions to compute steps 1-3 and then merge them into one function we call nn_model(). Once you've built nn_model() and learnt the right parameters, you can make predictions on new data.\n4.1 - Defining the neural network structure\nExercise: Define three variables:\n - n_x: the size of the input layer\n - n_h: the size of the hidden layer (set this to 4) \n - n_y: the size of the output layer\nHint: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4.",
"# GRADED FUNCTION: layer_sizes\n\ndef layer_sizes(X, Y):\n \"\"\"\n Arguments:\n X -- input dataset of shape (input size, number of examples)\n Y -- labels of shape (output size, number of examples)\n \n Returns:\n n_x -- the size of the input layer\n n_h -- the size of the hidden layer\n n_y -- the size of the output layer\n \"\"\"\n ### START CODE HERE ### (≈ 3 lines of code)\n n_x = X.shape[0] # size of input layer\n n_h = 4\n n_y = Y.shape[0] # size of output layer\n ### END CODE HERE ###\n return (n_x, n_h, n_y)\n\nX_assess, Y_assess = layer_sizes_test_case()\n(n_x, n_h, n_y) = layer_sizes(X_assess, Y_assess)\nprint(\"The size of the input layer is: n_x = \" + str(n_x))\nprint(\"The size of the hidden layer is: n_h = \" + str(n_h))\nprint(\"The size of the output layer is: n_y = \" + str(n_y))",
"Expected Output (these are not the sizes you will use for your network, they are just used to assess the function you've just coded).\n<table style=\"width:20%\">\n <tr>\n <td>**n_x**</td>\n <td> 5 </td> \n </tr>\n\n <tr>\n <td>**n_h**</td>\n <td> 4 </td> \n </tr>\n\n <tr>\n <td>**n_y**</td>\n <td> 2 </td> \n </tr>\n\n</table>\n\n4.2 - Initialize the model's parameters\nExercise: Implement the function initialize_parameters().\nInstructions:\n- Make sure your parameters' sizes are right. Refer to the neural network figure above if needed.\n- You will initialize the weights matrices with random values. \n - Use: np.random.randn(a,b) * 0.01 to randomly initialize a matrix of shape (a,b).\n- You will initialize the bias vectors as zeros. \n - Use: np.zeros((a,b)) to initialize a matrix of shape (a,b) with zeros.",
"# GRADED FUNCTION: initialize_parameters\n\ndef initialize_parameters(n_x, n_h, n_y):\n \"\"\"\n Argument:\n n_x -- size of the input layer\n n_h -- size of the hidden layer\n n_y -- size of the output layer\n \n Returns:\n params -- python dictionary containing your parameters:\n W1 -- weight matrix of shape (n_h, n_x)\n b1 -- bias vector of shape (n_h, 1)\n W2 -- weight matrix of shape (n_y, n_h)\n b2 -- bias vector of shape (n_y, 1)\n \"\"\"\n \n np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random.\n \n ### START CODE HERE ### (≈ 4 lines of code)\n W1 = np.random.randn(n_h, n_x) * 0.01\n b1 = np.zeros((n_h, 1))\n W2 = np.random.randn(n_y, n_h) * 0.01\n b2 = np.zeros((n_y, 1))\n ### END CODE HERE ###\n \n assert (W1.shape == (n_h, n_x))\n assert (b1.shape == (n_h, 1))\n assert (W2.shape == (n_y, n_h))\n assert (b2.shape == (n_y, 1))\n \n parameters = {\"W1\": W1,\n \"b1\": b1,\n \"W2\": W2,\n \"b2\": b2}\n \n return parameters\n\nn_x, n_h, n_y = initialize_parameters_test_case()\n\nparameters = initialize_parameters(n_x, n_h, n_y)\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))",
"Expected Output:\n<table style=\"width:90%\">\n <tr>\n <td>**W1**</td>\n <td> [[-0.00416758 -0.00056267]\n [-0.02136196 0.01640271]\n [-0.01793436 -0.00841747]\n [ 0.00502881 -0.01245288]] </td> \n </tr>\n\n <tr>\n <td>**b1**</td>\n <td> [[ 0.]\n [ 0.]\n [ 0.]\n [ 0.]] </td> \n </tr>\n\n <tr>\n <td>**W2**</td>\n <td> [[-0.01057952 -0.00909008 0.00551454 0.02292208]]</td> \n </tr>\n\n\n <tr>\n <td>**b2**</td>\n <td> [[ 0.]] </td> \n </tr>\n\n</table>\n\n4.3 - The Loop\nQuestion: Implement forward_propagation().\nInstructions:\n- Look above at the mathematical representation of your classifier.\n- You can use the function sigmoid(). It is built-in (imported) in the notebook.\n- You can use the function np.tanh(). It is part of the numpy library.\n- The steps you have to implement are:\n 1. Retrieve each parameter from the dictionary \"parameters\" (which is the output of initialize_parameters()) by using parameters[\"..\"].\n 2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set).\n- Values needed in the backpropagation are stored in \"cache\". The cache will be given as an input to the backpropagation function.",
"# GRADED FUNCTION: forward_propagation\n\ndef forward_propagation(X, parameters):\n \"\"\"\n Argument:\n X -- input data of size (n_x, m)\n parameters -- python dictionary containing your parameters (output of initialization function)\n \n Returns:\n A2 -- The sigmoid output of the second activation\n cache -- a dictionary containing \"Z1\", \"A1\", \"Z2\" and \"A2\"\n \"\"\"\n # Retrieve each parameter from the dictionary \"parameters\"\n ### START CODE HERE ### (≈ 4 lines of code)\n W1 = parameters[\"W1\"]\n b1 = parameters[\"b1\"]\n W2 = parameters[\"W2\"]\n b2 = parameters[\"b2\"]\n ### END CODE HERE ###\n \n # Implement Forward Propagation to calculate A2 (probabilities)\n ### START CODE HERE ### (≈ 4 lines of code)\n Z1 = np.dot(W1, X) + b1\n A1 = np.tanh(Z1)\n Z2 = np.dot(W2, A1) + b2\n A2 = sigmoid(Z2)\n ### END CODE HERE ###\n \n assert(A2.shape == (1, X.shape[1]))\n \n cache = {\"Z1\": Z1,\n \"A1\": A1,\n \"Z2\": Z2,\n \"A2\": A2}\n \n return A2, cache\n\nX_assess, parameters = forward_propagation_test_case()\n\nA2, cache = forward_propagation(X_assess, parameters)\n\n# Note: we use the mean here just to make sure that your output matches ours. \nprint(np.mean(cache['Z1']) ,np.mean(cache['A1']),np.mean(cache['Z2']),np.mean(cache['A2']))",
"Expected Output:\n<table style=\"width:55%\">\n <tr>\n <td> -0.000499755777742 -0.000496963353232 0.000438187450959 0.500109546852 </td> \n </tr>\n</table>\n\nNow that you have computed $A^{[2]}$ (in the Python variable \"A2\"), which contains $a^{2}$ for every example, you can compute the cost function as follows:\n$$J = - \\frac{1}{m} \\sum\\limits_{i = 0}^{m} \\large{(} \\small y^{(i)}\\log\\left(a^{[2] (i)}\\right) + (1-y^{(i)})\\log\\left(1- a^{[2] (i)}\\right) \\large{)} \\small\\tag{13}$$\nExercise: Implement compute_cost() to compute the value of the cost $J$.\nInstructions:\n- There are many ways to implement the cross-entropy loss. To help you, we give you how we would have implemented\n$- \\sum\\limits_{i=0}^{m} y^{(i)}\\log(a^{2})$:\npython\nlogprobs = np.multiply(np.log(A2),Y)\ncost = - np.sum(logprobs) # no need to use a for loop!\n(you can use either np.multiply() and then np.sum() or directly np.dot()).",
"# GRADED FUNCTION: compute_cost\n\ndef compute_cost(A2, Y, parameters):\n \"\"\"\n Computes the cross-entropy cost given in equation (13)\n \n Arguments:\n A2 -- The sigmoid output of the second activation, of shape (1, number of examples)\n Y -- \"true\" labels vector of shape (1, number of examples)\n parameters -- python dictionary containing your parameters W1, b1, W2 and b2\n \n Returns:\n cost -- cross-entropy cost given equation (13)\n \"\"\"\n \n m = Y.shape[1] # number of example\n \n # Retrieve W1 and W2 from parameters\n ### START CODE HERE ### (≈ 2 lines of code)\n W1 = parameters[\"W1\"]\n W2 = parameters[\"W2\"]\n ### END CODE HERE ###\n \n # Compute the cross-entropy cost\n ### START CODE HERE ### (≈ 2 lines of code)\n logprobs = np.multiply(Y, np.log(A2)) + np.multiply(np.log(1 - A2), 1 - Y)\n cost = - 1 / m * np.sum(logprobs)\n ### END CODE HERE ###\n \n cost = np.squeeze(cost) # makes sure cost is the dimension we expect. \n # E.g., turns [[17]] into 17 \n assert(isinstance(cost, float))\n \n return cost\n\nA2, Y_assess, parameters = compute_cost_test_case()\n\nprint(\"cost = \" + str(compute_cost(A2, Y_assess, parameters)))",
"Expected Output:\n<table style=\"width:20%\">\n <tr>\n <td>**cost**</td>\n <td> 0.692919893776 </td> \n </tr>\n\n</table>\n\nUsing the cache computed during forward propagation, you can now implement backward propagation.\nQuestion: Implement the function backward_propagation().\nInstructions:\nBackpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation. \n<img src=\"images/grad_summary.png\" style=\"width:600px;height:300px;\">\n<!--\n$\\frac{\\partial \\mathcal{J} }{ \\partial z_{2}^{(i)} } = \\frac{1}{m} (a^{[2](i)} - y^{(i)})$\n\n$\\frac{\\partial \\mathcal{J} }{ \\partial W_2 } = \\frac{\\partial \\mathcal{J} }{ \\partial z_{2}^{(i)} } a^{[1] (i) T} $\n\n$\\frac{\\partial \\mathcal{J} }{ \\partial b_2 } = \\sum_i{\\frac{\\partial \\mathcal{J} }{ \\partial z_{2}^{(i)}}}$\n\n$\\frac{\\partial \\mathcal{J} }{ \\partial z_{1}^{(i)} } = W_2^T \\frac{\\partial \\mathcal{J} }{ \\partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $\n\n$\\frac{\\partial \\mathcal{J} }{ \\partial W_1 } = \\frac{\\partial \\mathcal{J} }{ \\partial z_{1}^{(i)} } X^T $\n\n$\\frac{\\partial \\mathcal{J} _i }{ \\partial b_1 } = \\sum_i{\\frac{\\partial \\mathcal{J} }{ \\partial z_{1}^{(i)}}}$\n\n- Note that $*$ denotes elementwise multiplication.\n- The notation you will use is common in deep learning coding:\n - dW1 = $\\frac{\\partial \\mathcal{J} }{ \\partial W_1 }$\n - db1 = $\\frac{\\partial \\mathcal{J} }{ \\partial b_1 }$\n - dW2 = $\\frac{\\partial \\mathcal{J} }{ \\partial W_2 }$\n - db2 = $\\frac{\\partial \\mathcal{J} }{ \\partial b_2 }$\n\n!-->\n\n\nTips:\nTo compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute \n$g^{[1]'}(Z^{[1]})$ using (1 - np.power(A1, 2)).",
"# GRADED FUNCTION: backward_propagation\n\ndef backward_propagation(parameters, cache, X, Y):\n \"\"\"\n Implement the backward propagation using the instructions above.\n \n Arguments:\n parameters -- python dictionary containing our parameters \n cache -- a dictionary containing \"Z1\", \"A1\", \"Z2\" and \"A2\".\n X -- input data of shape (2, number of examples)\n Y -- \"true\" labels vector of shape (1, number of examples)\n \n Returns:\n grads -- python dictionary containing your gradients with respect to different parameters\n \"\"\"\n m = X.shape[1]\n \n # First, retrieve W1 and W2 from the dictionary \"parameters\".\n ### START CODE HERE ### (≈ 2 lines of code)\n W1 = parameters[\"W1\"]\n W2 = parameters[\"W2\"]\n ### END CODE HERE ###\n \n # Retrieve also A1 and A2 from dictionary \"cache\".\n ### START CODE HERE ### (≈ 2 lines of code)\n A1 = cache[\"A1\"]\n A2 = cache[\"A2\"]\n ### END CODE HERE ###\n \n # Backward propagation: calculate dW1, db1, dW2, db2. \n ### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above)\n dZ2 = A2 - Y\n dW2 = 1 / m * np.dot(dZ2, A1.T)\n db2 = 1 / m * np.sum(dZ2, axis=1, keepdims=True)\n dZ1 = np.dot(W2.T, dZ2) * (1 - np.power(A1, 2))\n dW1 = 1 / m * np.dot(dZ1, X.T)\n db1 = 1 / m * np.sum(dZ1, axis=1, keepdims=True)\n ### END CODE HERE ###\n \n grads = {\"dW1\": dW1,\n \"db1\": db1,\n \"dW2\": dW2,\n \"db2\": db2}\n \n return grads\n\nparameters, cache, X_assess, Y_assess = backward_propagation_test_case()\n\ngrads = backward_propagation(parameters, cache, X_assess, Y_assess)\nprint (\"dW1 = \"+ str(grads[\"dW1\"]))\nprint (\"db1 = \"+ str(grads[\"db1\"]))\nprint (\"dW2 = \"+ str(grads[\"dW2\"]))\nprint (\"db2 = \"+ str(grads[\"db2\"]))",
"Expected output:\n<table style=\"width:80%\">\n <tr>\n <td>**dW1**</td>\n <td> [[ 0.01018708 -0.00708701]\n [ 0.00873447 -0.0060768 ]\n [-0.00530847 0.00369379]\n [-0.02206365 0.01535126]] </td> \n </tr>\n\n <tr>\n <td>**db1**</td>\n <td> [[-0.00069728]\n [-0.00060606]\n [ 0.000364 ]\n [ 0.00151207]] </td> \n </tr>\n\n <tr>\n <td>**dW2**</td>\n <td> [[ 0.00363613 0.03153604 0.01162914 -0.01318316]] </td> \n </tr>\n\n\n <tr>\n <td>**db2**</td>\n <td> [[ 0.06589489]] </td> \n </tr>\n\n</table>\n\nQuestion: Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2).\nGeneral gradient descent rule: $ \\theta = \\theta - \\alpha \\frac{\\partial J }{ \\partial \\theta }$ where $\\alpha$ is the learning rate and $\\theta$ represents a parameter.\nIllustration: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley.\n<img src=\"images/sgd.gif\" style=\"width:400;height:400;\"> <img src=\"images/sgd_bad.gif\" style=\"width:400;height:400;\">",
"# GRADED FUNCTION: update_parameters\n\ndef update_parameters(parameters, grads, learning_rate = 1.2):\n \"\"\"\n Updates parameters using the gradient descent update rule given above\n \n Arguments:\n parameters -- python dictionary containing your parameters \n grads -- python dictionary containing your gradients \n \n Returns:\n parameters -- python dictionary containing your updated parameters \n \"\"\"\n # Retrieve each parameter from the dictionary \"parameters\"\n ### START CODE HERE ### (≈ 4 lines of code)\n W1 = parameters[\"W1\"]\n b1 = parameters[\"b1\"]\n W2 = parameters[\"W2\"]\n b2 = parameters[\"b2\"]\n ### END CODE HERE ###\n \n # Retrieve each gradient from the dictionary \"grads\"\n ### START CODE HERE ### (≈ 4 lines of code)\n dW1 = grads[\"dW1\"]\n db1 = grads[\"db1\"]\n dW2 = grads[\"dW2\"]\n db2 = grads[\"db2\"]\n ## END CODE HERE ###\n \n # Update rule for each parameter\n ### START CODE HERE ### (≈ 4 lines of code)\n W1 = W1 - learning_rate * dW1\n b1 = b1 - learning_rate * db1\n W2 = W2 - learning_rate * dW2\n b2 = b2 - learning_rate * db2\n ### END CODE HERE ###\n \n parameters = {\"W1\": W1,\n \"b1\": b1,\n \"W2\": W2,\n \"b2\": b2}\n \n return parameters\n\nparameters, grads = update_parameters_test_case()\nparameters = update_parameters(parameters, grads)\n\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))",
"Expected Output:\n<table style=\"width:80%\">\n <tr>\n <td>**W1**</td>\n <td> [[-0.00643025 0.01936718]\n [-0.02410458 0.03978052]\n [-0.01653973 -0.02096177]\n [ 0.01046864 -0.05990141]]</td> \n </tr>\n\n <tr>\n <td>**b1**</td>\n <td> [[ -1.02420756e-06]\n [ 1.27373948e-05]\n [ 8.32996807e-07]\n [ -3.20136836e-06]]</td> \n </tr>\n\n <tr>\n <td>**W2**</td>\n <td> [[-0.01041081 -0.04463285 0.01758031 0.04747113]] </td> \n </tr>\n\n\n <tr>\n <td>**b2**</td>\n <td> [[ 0.00010457]] </td> \n </tr>\n\n</table>\n\n4.4 - Integrate parts 4.1, 4.2 and 4.3 in nn_model()\nQuestion: Build your neural network model in nn_model().\nInstructions: The neural network model has to use the previous functions in the right order.",
"# GRADED FUNCTION: nn_model\n\ndef nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False):\n \"\"\"\n Arguments:\n X -- dataset of shape (2, number of examples)\n Y -- labels of shape (1, number of examples)\n n_h -- size of the hidden layer\n num_iterations -- Number of iterations in gradient descent loop\n print_cost -- if True, print the cost every 1000 iterations\n \n Returns:\n parameters -- parameters learnt by the model. They can then be used to predict.\n \"\"\"\n \n np.random.seed(3)\n n_x = layer_sizes(X, Y)[0]\n n_y = layer_sizes(X, Y)[2]\n \n # Initialize parameters, then retrieve W1, b1, W2, b2. Inputs: \"n_x, n_h, n_y\". Outputs = \"W1, b1, W2, b2, parameters\".\n ### START CODE HERE ### (≈ 5 lines of code)\n parameters = initialize_parameters(n_x, n_h, n_y)\n W1 = parameters[\"W1\"]\n b1 = parameters[\"b1\"]\n W2 = parameters[\"W2\"]\n b2 = parameters[\"b2\"]\n ### END CODE HERE ###\n \n # Loop (gradient descent)\n\n for i in range(0, num_iterations):\n \n ### START CODE HERE ### (≈ 4 lines of code)\n # Forward propagation. Inputs: \"X, parameters\". Outputs: \"A2, cache\".\n A2, cache = forward_propagation(X, parameters)\n \n # Cost function. Inputs: \"A2, Y, parameters\". Outputs: \"cost\".\n cost = compute_cost(A2, Y, parameters)\n \n # Backpropagation. Inputs: \"parameters, cache, X, Y\". Outputs: \"grads\".\n grads = backward_propagation(parameters, cache, X, Y)\n \n # Gradient descent parameter update. Inputs: \"parameters, grads\". Outputs: \"parameters\".\n parameters = update_parameters(parameters, grads)\n \n ### END CODE HERE ###\n \n # Print the cost every 1000 iterations\n if print_cost and i % 1000 == 0:\n print (\"Cost after iteration %i: %f\" %(i, cost))\n\n return parameters\n\nX_assess, Y_assess = nn_model_test_case()\n\nparameters = nn_model(X_assess, Y_assess, 4, num_iterations=10000, print_cost=False)\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))",
"Expected Output:\n<table style=\"width:90%\">\n <tr>\n <td>**W1**</td>\n <td> [[-4.18494056 5.33220609]\n [-7.52989382 1.24306181]\n [-4.1929459 5.32632331]\n [ 7.52983719 -1.24309422]]</td> \n </tr>\n\n <tr>\n <td>**b1**</td>\n <td> [[ 2.32926819]\n [ 3.79458998]\n [ 2.33002577]\n [-3.79468846]]</td> \n </tr>\n\n <tr>\n <td>**W2**</td>\n <td> [[-6033.83672146 -6008.12980822 -6033.10095287 6008.06637269]] </td> \n </tr>\n\n\n <tr>\n <td>**b2**</td>\n <td> [[-52.66607724]] </td> \n </tr>\n\n</table>\n\n4.5 Predictions\nQuestion: Use your model to predict by building predict().\nUse forward propagation to predict results.\nReminder: predictions = $y_{prediction} = \\mathbb 1 \\text{{activation > 0.5}} = \\begin{cases}\n 1 & \\text{if}\\ activation > 0.5 \\\n 0 & \\text{otherwise}\n \\end{cases}$ \nAs an example, if you would like to set the entries of a matrix X to 0 and 1 based on a threshold you would do: X_new = (X > threshold)",
"# GRADED FUNCTION: predict\n\ndef predict(parameters, X):\n \"\"\"\n Using the learned parameters, predicts a class for each example in X\n \n Arguments:\n parameters -- python dictionary containing your parameters \n X -- input data of size (n_x, m)\n \n Returns\n predictions -- vector of predictions of our model (red: 0 / blue: 1)\n \"\"\"\n \n # Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold.\n ### START CODE HERE ### (≈ 2 lines of code)\n A2, cache = forward_propagation(X, parameters)\n predictions = (A2 > 0.5)\n ### END CODE HERE ###\n \n return predictions\n\nparameters, X_assess = predict_test_case()\n\npredictions = predict(parameters, X_assess)\nprint(\"predictions mean = \" + str(np.mean(predictions)))",
"Expected Output: \n<table style=\"width:40%\">\n <tr>\n <td>**predictions mean**</td>\n <td> 0.666666666667 </td> \n </tr>\n\n</table>\n\nIt is time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units.",
"# Build a model with a n_h-dimensional hidden layer\nparameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)\n\n# Plot the decision boundary\nplot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)\nplt.title(\"Decision Boundary for hidden layer size \" + str(4))",
"Expected Output:\n<table style=\"width:40%\">\n <tr>\n <td>**Cost after iteration 9000**</td>\n <td> 0.218607 </td> \n </tr>\n\n</table>",
"# Print accuracy\npredictions = predict(parameters, X)\nprint ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%')",
"Expected Output: \n<table style=\"width:15%\">\n <tr>\n <td>**Accuracy**</td>\n <td> 90% </td> \n </tr>\n</table>\n\nAccuracy is really high compared to Logistic Regression. The model has learnt the leaf patterns of the flower! Neural networks are able to learn even highly non-linear decision boundaries, unlike logistic regression. \nNow, let's try out several hidden layer sizes.\n4.6 - Tuning hidden layer size (optional/ungraded exercise)\nRun the following code. It may take 1-2 minutes. You will observe different behaviors of the model for various hidden layer sizes.",
"# This may take about 2 minutes to run\n\nplt.figure(figsize=(16, 32))\nhidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50]\nfor i, n_h in enumerate(hidden_layer_sizes):\n plt.subplot(5, 2, i+1)\n plt.title('Hidden Layer of size %d' % n_h)\n parameters = nn_model(X, Y, n_h, num_iterations = 5000)\n plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)\n predictions = predict(parameters, X)\n accuracy = float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100)\n print (\"Accuracy for {} hidden units: {} %\".format(n_h, accuracy))",
"Interpretation:\n- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data. \n- The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fits the data well without also incurring noticable overfitting.\n- You will also learn later about regularization, which lets you use very large models (such as n_h = 50) without much overfitting. \nOptional questions:\nNote: Remember to submit the assignment but clicking the blue \"Submit Assignment\" button at the upper-right. \nSome optional/ungraded questions that you can explore if you wish: \n- What happens when you change the tanh activation for a sigmoid activation or a ReLU activation?\n- Play with the learning_rate. What happens?\n- What if we change the dataset? (See part 5 below!)\n<font color='blue'>\nYou've learnt to:\n- Build a complete neural network with a hidden layer\n- Make a good use of a non-linear unit\n- Implemented forward propagation and backpropagation, and trained a neural network\n- See the impact of varying the hidden layer size, including overfitting.\nNice work! \n5) Performance on other datasets\nIf you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.",
"# Datasets\nnoisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets()\n\ndatasets = {\"noisy_circles\": noisy_circles,\n \"noisy_moons\": noisy_moons,\n \"blobs\": blobs,\n \"gaussian_quantiles\": gaussian_quantiles}\n\n### START CODE HERE ### (choose your dataset)\ndataset = \"noisy_moons\"\n### END CODE HERE ###\n\nX, Y = datasets[dataset]\nX, Y = X.T, Y.reshape(1, Y.shape[0])\n\n# make blobs binary\nif dataset == \"blobs\":\n Y = Y%2\n\n# Visualize the data\nplt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);",
"Congrats on finishing this Programming Assignment!\nReference:\n- http://scs.ryerson.ca/~aharley/neural-networks/\n- http://cs231n.github.io/neural-networks-case-study/"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
eroicaleo/LearningPython
|
HandsOnML/ch02/ex03.ipynb
|
mit
|
[
"print('Hello world!')\n\nimport numpy as np\nimport pandas as pd\n\nimport os\nimport tarfile\n\nHOUSING_PATH = 'datasets/housing'\n\nDOWNLOAD_ROOT = \"https://raw.githubusercontent.com/ageron/handson-ml/master/\"\nHOUSING_PATH = \"datasets/housing\"\nHOUSING_URL = DOWNLOAD_ROOT + HOUSING_PATH + \"/housing.tgz\"\n\ndef fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):\n housing_csv_path = os.path.join(housing_path, 'housing.csv') \n housing_tgz_path = os.path.join(housing_path, 'housing.tgz') \n if os.path.isfile(housing_csv_path):\n print(f'Find {housing_csv_path}, do nothing')\n return\n if os.path.isfile(housing_tgz_path):\n print(f'Find {housing_tgz_path}, will extract it')\n housing_tgz = tarfile.open(housing_tgz_path)\n housing_tgz.extractall(path=housing_path)\n housing_tgz.close()\n return\n print(f'Can not find {housing_csv_path}')\n\nfetch_housing_data()\n\ndef load_housing_data(housing_path=HOUSING_PATH):\n csv_path = os.path.join(housing_path, \"housing.csv\")\n return pd.read_csv(csv_path)\n\nhousing = load_housing_data()\n\nhousing.head()\n\nhousing.info()\n\nhousing.ocean_proximity.value_counts()\n\nhousing['ocean_proximity'].value_counts()\n\nhousing.describe()\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nhousing.hist(bins=50, figsize=(20,15))\nplt.show()\n\nhousing.median_income.describe()\n\nhousing.median_income.hist(bins=15)\nplt.show()\n\nincome_cat = np.ceil(housing.median_income / 1.5)\n\nincome_cat.where(income_cat < 5.0, 5.0, inplace=True)\n\n# The above operations can be replaced by the following\nincome_cat2 = np.ceil(housing.median_income / 1.5)\nincome_cat2[income_cat2 > 5.0] = 5.0\n(income_cat2 == income_cat).all()\n\nincome_cat.describe()\n\nincome_cat.value_counts() / len(income_cat)\n\nfrom sklearn.model_selection import StratifiedShuffleSplit\n\nsplit = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)\n\nhousing['income_cat'] = income_cat\n\nfor train_index, test_index in split.split(housing, housing['income_cat']):\n strat_train_set = housing.loc[train_index]\n strat_test_set = housing.loc[test_index]\n\n\nStratified = strat_test_set['income_cat'].value_counts().sort_index() / len(strat_test_set)\nOverall = housing['income_cat'].value_counts().sort_index() / len(housing)\ndata = pd.DataFrame({'Overall': Overall, 'Stratified' : Stratified})\ndata['Strat. %error'] = (data['Overall'] - data['Stratified']) / data['Overall'] * 100\ndata",
"Visualizing Data",
"strat_train_set_copy = strat_train_set.copy()\n\nhousing.plot(kind=\"scatter\", x='longitude', y='latitude')\n\nhousing.plot(kind=\"scatter\", x='longitude', y='latitude', alpha=0.1)\n\nstrat_train_set_copy.plot(kind='scatter', x='longitude', y='latitude', alpha=0.4,\n s=strat_train_set_copy.population/100,\n c=strat_train_set_copy.median_house_value,\n cmap=plt.get_cmap(\"jet\"),\n label=\"population\", figsize=(15, 15),\n colorbar=True)\nplt.legend()\n\ncorr_matrix = strat_train_set_copy.corr()\n\ncorr_matrix.median_house_value.sort_values(ascending=False)\n\nfrom pandas.plotting import scatter_matrix\n\nattributes = [\"median_house_value\", \"median_income\", \"total_rooms\",\n\"housing_median_age\"]\nscatter_matrix(housing[attributes], figsize=(12, 8))\n\nstrat_train_set_copy.plot.scatter(x=\"median_income\", y=\"median_house_value\", alpha=0.1)",
"Experimenting with Attribute Combinations",
"housing[\"rooms_per_household\"] = housing[\"total_rooms\"] / housing[\"households\"]\nhousing[\"bedrooms_per_room\"] = housing[\"total_bedrooms\"]/housing[\"total_rooms\"]\nhousing[\"population_per_household\"]=housing[\"population\"]/housing[\"households\"]\n\nhousing.info()\n\ncorr_matrix = housing.corr()\ncorr_matrix['median_house_value'].sort_values(ascending=False)",
"2.5 Prepare the Data for Machine Learning Algorithms",
"housing = strat_train_set.drop('median_house_value', axis=1)\nhousing_labels = strat_train_set['median_house_value'].copy()\n\nhousing.info()\n\nhousing.dropna(subset=['total_bedrooms']).info()\n\nhousing.drop('total_bedrooms', axis=1).info()\n\nhousing['total_bedrooms'].fillna(housing['total_bedrooms'].median()).describe()\n\nfrom sklearn.impute import SimpleImputer\nimputer = SimpleImputer(strategy='median')\nhousing_num = housing.drop(\"ocean_proximity\", axis=1)\nimputer.fit(housing_num)\nimputer.statistics_\n\nimputer.strategy\n\nhousing.drop(\"ocean_proximity\", axis=1).median().values\n\nX = imputer.transform(housing_num)\nX\n\nhousing_tr = pd.DataFrame(X, columns=housing_num.columns)\nhousing_tr.head()",
"Handling Text and Categorical Attributes",
"from sklearn.preprocessing import LabelEncoder\n\nencoder = LabelEncoder()\n\nhousing_cat = housing.ocean_proximity\n\nhousing_cat.describe()\n\nhousing_cat.value_counts()\n\nhousing_cat_encoded = encoder.fit_transform(housing_cat)\n\nhousing_cat_encoded\n\ntype(housing_cat_encoded)\n\nprint(encoder.classes_)",
"One hot encoding",
"from sklearn.preprocessing import OneHotEncoder\n\nencoder = OneHotEncoder()\n\nprint(housing_cat_encoded.shape)\nprint(type(housing_cat_encoded))\n\n(housing_cat_encoded.reshape(-1, 1)).shape\n\nhousing_cat_1hot = encoder.fit_transform(housing_cat_encoded.reshape(-1, 1))\n\nhousing_cat_1hot\n\ntype(housing_cat_1hot)\n\nhousing_cat_1hot.toarray()",
"Combine",
"from sklearn.preprocessing import LabelBinarizer\n\nencoder = LabelBinarizer(sparse_output=False)\n\nhousing_cat_1hot = encoder.fit_transform(housing_cat)\n\nhousing_cat_1hot\n\ntype(housing_cat_1hot)",
"Custom Transformers",
"rooms_ix, bedrooms_ix, population_ix, households_ix = 3, 4, 5, 6\n\nhousing.head()\n\nhousing.iloc[:, 3]\n\nX = housing.values\n\n# This can be achieved by the iloc, with using .values\nhousing.iloc[:, [rooms_ix, bedrooms_ix, households_ix, population_ix]].head()\n\nrooms_per_household = X[:, rooms_ix] / X[:, households_ix]\npopulation_per_household = X[:, population_ix] / X[:, households_ix]\nbedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]\nnp.c_[X, rooms_per_household, population_per_household]\nnp.c_[X, rooms_per_household, population_per_household, bedrooms_per_room]\n\nfrom sklearn.base import BaseEstimator, TransformerMixin\nrooms_ix, bedrooms_ix, population_ix, households_ix = 3, 4, 5, 6\n\nclass CombinedAttributesAdder(BaseEstimator, TransformerMixin):\n def __init__(self, add_bedrooms_per_room=False):\n self.add_bedrooms_per_room = add_bedrooms_per_room\n def fit(self, X, y=None):\n return self\n def transform(self, X, y=None):\n rooms_per_household = X[:, rooms_ix] / X[:, households_ix]\n population_per_household = X[:, population_ix] / X[:, households_ix]\n if self.add_bedrooms_per_room:\n bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]\n return np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room]\n else:\n return np.c_[X, rooms_per_household, population_per_household]\n\nattr_adder = CombinedAttributesAdder(add_bedrooms_per_room=False)\nhousing_extra_attribs = attr_adder.transform(X)\n\nprint(housing_extra_attribs.shape)\nprint(housing.shape)\n\n# Convert back to data frame -- My way\nnew_columns = housing.columns.append(\n pd.Index(['rooms_per_household', 'population_per_household'])\n)\nnew_columns\nhousing_extra_attribs_df = pd.DataFrame(housing_extra_attribs, columns=new_columns)\nhousing_extra_attribs_df.head()",
"2.5.4 Feature Scaling",
"housing.describe()\n\nhousing.total_rooms.describe()\n\nfrom sklearn.preprocessing import MinMaxScaler\n\nscalar = MinMaxScaler()\nscalar.fit(housing[\"total_rooms\"].values.reshape(-1, 1))\npd.DataFrame(scalar.transform(housing[\"total_rooms\"].values.reshape(-1, 1)), columns=[\"total_rooms\"])[\"total_rooms\"].describe()\n\nfrom sklearn.preprocessing import StandardScaler\n\nscalar = StandardScaler()\nscalar.fit(housing[\"total_rooms\"].values.reshape(-1, 1))\npd.DataFrame(scalar.transform(housing[\"total_rooms\"].values.reshape(-1, 1)), columns=[\"total_rooms\"])[\"total_rooms\"].describe()",
"2.5.5 Transformation Pipeline",
"from sklearn.pipeline import Pipeline\n\nnum_pipeline = Pipeline([\n ('imputer', SimpleImputer(strategy=\"median\")),\n ('attr_adder', CombinedAttributesAdder()),\n ('std_scaler', StandardScaler())\n])\n\n# I want to verify the pipelined version\n# doest the same thing as the separated steps\n\nnum_pipeline_stage1 = Pipeline([\n ('imputer', SimpleImputer(strategy=\"median\")),\n])\n\nX_pipeline = num_pipeline_stage1.fit_transform(housing_num)\nX = imputer.transform(housing_num)\nX_pipeline\nnp.array_equal(X, X_pipeline)\n\nnum_pipeline_stage2 = Pipeline([\n ('imputer', SimpleImputer(strategy=\"median\")),\n ('attr_adder', CombinedAttributesAdder()),\n])\n\nY = attr_adder.fit_transform(X)\nY_pipeline = num_pipeline_stage2.fit_transform(housing_num)\nnp.array_equal(Y, Y_pipeline)\n\nnum_pipeline_stage3 = Pipeline([\n ('imputer', SimpleImputer(strategy=\"median\")),\n ('attr_adder', CombinedAttributesAdder()),\n ('std_scaler', StandardScaler())\n])\n\nZ = scalar.fit_transform(Y)\nZ.std(), Z.mean()\nZ_pipeline = num_pipeline_stage3.fit_transform(housing_num)\nnp.array_equal(Z, Z_pipeline)\n\nfrom sklearn.base import BaseEstimator, TransformerMixin\n\nclass DataFrameSelector(BaseEstimator, TransformerMixin):\n def __init__(self, attribute_names):\n self.attribute_names = attribute_names\n def fit(self, X, y=None):\n return self\n def transform(self, X):\n return X[self.attribute_names].values\n\nclass CustomizedLabelBinarizer(BaseEstimator, TransformerMixin):\n def __init__(self, sparse_output=False):\n self.encode = LabelBinarizer(sparse_output = sparse_output)\n def fit(self, X, y=None):\n return self.encode.fit(X)\n def transform(self, X):\n return self.encode.transform(X)\n\n\nnum_attribs = list(housing_num)\ncat_attribs = [\"ocean_proximity\"]\n\nnum_pipeline = Pipeline([\n ('selector', DataFrameSelector(num_attribs)),\n ('imputer', SimpleImputer(strategy=\"median\")),\n ('attr_adder', CombinedAttributesAdder()),\n ('std_scaler', StandardScaler()),\n]\n)\n\ncat_pipeline = Pipeline([\n ('selector', DataFrameSelector(cat_attribs)),\n ('label_binarizer', CustomizedLabelBinarizer()),\n]\n)\n\n# LabelBinarizer().fit_transform(DataFrameSelector(cat_attribs).fit_transform(housing))\n# num_pipeline.fit_transform(housing)\n# cat_pipeline.fit_transform(housing)\n\nfrom sklearn.pipeline import FeatureUnion\n\nfull_pipeline = FeatureUnion(transformer_list=[\n ('num_pipeline', num_pipeline),\n ('cat_pipeline', cat_pipeline),\n])\n\nhousing_prepared = full_pipeline.fit_transform(housing)\nprint(housing_prepared.shape)\nhousing_prepared",
"2.6.1 Training and Evaluating on the Training Set",
"from sklearn.linear_model import LinearRegression\n\nlin_reg = LinearRegression()\nlin_reg.fit(housing_prepared, housing_labels)\n\nsome_data = housing[:5]\nsome_data\n\nsome_labels = housing_labels[:5]\nsome_labels\n\nsome_data_prepared = full_pipeline.transform(some_data)\nsome_data_prepared\n\nprint(f'Prediction:\\t{lin_reg.predict(some_data_prepared)}')\nprint(f'Lables:\\t\\t{list(some_labels)}')\n\nfrom sklearn.metrics import mean_squared_error\n\nhousing_prediction = lin_reg.predict(housing_prepared)\nlin_mse = mean_squared_error(housing_prediction, housing_labels)\nlin_rmse = np.sqrt(lin_mse)\nlin_rmse",
"Tree model",
"from sklearn.tree import DecisionTreeRegressor\n\ntree_reg = DecisionTreeRegressor()\ntree_reg.fit(housing_prepared, housing_labels)\ntree_predictions = tree_reg.predict(housing_prepared)\ntree_mse = mean_squared_error(tree_predictions, housing_labels)\ntree_rmse = np.sqrt(tree_mse)\ntree_rmse",
"2.6.2 Better Evaluation Using Cross-Validation",
"from sklearn.model_selection import cross_val_score\nscores = cross_val_score(tree_reg, housing_prepared, housing_labels, scoring=\"neg_mean_squared_error\", cv=10)\n\nrmse_scores = np.sqrt(-scores)\nrmse_scores\n\ndef display_scores(scores):\n print(f'Scores: {scores}')\n print(f'Mean: {scores.mean()}')\n print(f'STD: {scores.std()}')\n\ndisplay_scores(rmse_scores)",
"Random Forest",
"from sklearn.ensemble import RandomForestRegressor\n\nforest_reg = RandomForestRegressor()\nforest_reg.fit(housing_prepared, housing_labels)\nforest_scores = cross_val_score(forest_reg, housing_prepared, housing_labels, scoring=\"neg_mean_squared_error\", cv=10)\nforest_rmse_scores = np.sqrt(-forest_scores)\ndisplay_scores(forest_rmse_scores)\n\nforest_prediction = forest_reg.predict(housing_prepared)\nforest_rmse = np.sqrt(mean_squared_error(forest_prediction, housing_labels))\nforest_rmse",
"Ex03\nTry adding a transformer in the preparation pipeline to select only the most\nimportant attributes.\nThe importance of each feature is show below in 2.7.4\n```python\n\n\n\nsorted(zip(feature_importances, attributes), reverse=True)\n[(0.32649798665134971, 'median_income'),\n(0.15334491760305854, 'INLAND'),\n(0.11305529021187399, 'pop_per_hhold'),\n(0.07793247662544775, 'bedrooms_per_room'),\n(0.071415642259275158, 'longitude'),\n(0.067613918945568688, 'latitude'),\n(0.060436577499703222, 'rooms_per_hhold'),\n(0.04442608939578685, 'housing_median_age'),\n(0.018240254462909437, 'population'),\n(0.01663085833886218, 'total_rooms'),\n(0.016607686091288865, 'total_bedrooms'),\n(0.016345876147580776, 'households'),\n(0.011216644219017424, '<1H OCEAN'),\n(0.0034668118081117387, 'NEAR OCEAN'),\n(0.0026848388432755429, 'NEAR BAY'),\n(8.4130896890070617e-05, 'ISLAND')]\n```\n\n\n\nBased on the ranking, I will select the following 3:\n* median_income\n* INLAND\n* pop_per_hhold",
"from sklearn.base import BaseEstimator, TransformerMixin\n\nclass EX3NumSelector(BaseEstimator, TransformerMixin):\n def __init__(self):\n pass\n def fit(self, X, y=None):\n return self\n def transform(self, X):\n X['pop_per_hhold'] = X['population'] / X['households']\n return X[['median_income', 'pop_per_hhold', 'longitude']].values\n\nclass EX3CatSelector(BaseEstimator, TransformerMixin):\n def __init__(self):\n pass\n def fit(self, X, y=None):\n return self\n def transform(self, X):\n Y = housing['ocean_proximity']\n Y[Y != 'INLAND'] = 'NON_INLAND'\n return Y.values\n\nnum_sel = EX3NumSelector()\nnum_sel.fit_transform(housing)\n\ncat_sel = EX3CatSelector()\ncat_sel.fit_transform(housing)\n\nnum_pipeline = Pipeline([\n ('selector', EX3NumSelector()),\n ('imputer', SimpleImputer(strategy=\"median\")),\n ('std_scaler', StandardScaler()),\n]\n)\n\ncat_pipeline = Pipeline([\n ('selector', EX3CatSelector()),\n ('label_binarizer', CustomizedLabelBinarizer()),\n]\n)\n\nfull_pipeline = FeatureUnion(transformer_list=[\n ('num_pipeline', num_pipeline),\n ('cat_pipeline', cat_pipeline),\n])\n\nhousing_prepared = full_pipeline.fit_transform(housing)\nprint(housing_prepared.shape)\nhousing_prepared\n\nforest_reg.fit(housing_prepared, housing_labels)\nforest_scores = cross_val_score(forest_reg, housing_prepared, housing_labels, scoring=\"neg_mean_squared_error\", cv=10)\nforest_rmse_scores = np.sqrt(-forest_scores)\ndisplay_scores(forest_rmse_scores)",
"Conclusions of Ex03\nWith only 3 features, we do see the performance degradation. Adding one more feature 'longitude' improves a little\nbit, which makes sense.\n2.7.1 Grid Search",
"# from sklearn.model_selection import GridSearchCV\n\n# param_grid = [\n# {'n_estimators': [3, 10, 30], 'max_features': [2,4,6,8]},\n# {'bootstrap': [False], 'n_estimators': [3, 10, 30], 'max_features': [2,4,6,8]}\n# ]\n\n# forest_reg = RandomForestRegressor()\n\n# grid_search = GridSearchCV(forest_reg, param_grid, cv=5, scoring=\"neg_mean_squared_error\")\n\n# grid_search.fit(housing_prepared, housing_labels)\n\n# grid_search.best_params_\n\n# grid_search.best_estimator_\n\n# cvres = grid_search.cv_results_\n# for mean_score, params in zip(cvres[\"mean_test_score\"], cvres[\"params\"]):\n# print(np.sqrt(-mean_score), params)",
"2.7.4 Analyze the best models and their errors",
"# feature_importances = grid_search.best_estimator_.feature_importances_\n# feature_importances\n\n# extra_attribs = ['rooms_per_hhold', 'pop_per_hhold']\n\n# cat_one_hot_attribs = list(encoder.classes_)\n# cat_one_hot_attribs\n\n# attributes = num_attribs + extra_attribs + cat_one_hot_attribs\n# attributes, len(attributes)\n\n# sorted(zip(feature_importances, attributes), reverse=True)",
"2.7.5 Evaluate Your System on the Test Set",
"# final_model = grid_search.best_estimator_\n# X_test = strat_test_set.drop(\"median_house_value\", axis=1)\n# y_test = strat_test_set.median_house_value.copy()\n# X_test_prepared = full_pipeline.transform(X_test)\n\n# final_predictions = final_model.predict(X_test_prepared)\n# final_mse = mean_squared_error(final_predictions, y_test)\n# final_rmse = np.sqrt(final_mse)\n# final_rmse"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n
|
site/en-snapshot/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb
|
apache-2.0
|
[
"Copyright 2018 The TensorFlow Hub Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");",
"# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================",
"Universal Sentence Encoder-Lite demo\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/hub/blob/master/examples/colab/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n <td>\n <a href=\"https://tfhub.dev/google/universal-sentence-encoder-lite/2\"><img src=\"https://www.tensorflow.org/images/hub_logo_32px.png\" />See TF Hub model</a>\n </td>\n</table>\n\nThis Colab illustrates how to use the Universal Sentence Encoder-Lite for sentence similarity task. This module is very similar to Universal Sentence Encoder with the only difference that you need to run SentencePiece processing on your input sentences.\nThe Universal Sentence Encoder makes getting sentence level embeddings as easy as it has historically been to lookup the embeddings for individual words. The sentence embeddings can then be trivially used to compute sentence level meaning similarity as well as to enable better performance on downstream classification tasks using less supervised training data.\nGetting started\nSetup",
"# Install seaborn for pretty visualizations\n!pip3 install --quiet seaborn\n# Install SentencePiece package\n# SentencePiece package is needed for Universal Sentence Encoder Lite. We'll\n# use it for all the text processing and sentence feature ID lookup.\n!pip3 install --quiet sentencepiece\n\nfrom absl import logging\n\nimport tensorflow.compat.v1 as tf\ntf.disable_v2_behavior()\n\nimport tensorflow_hub as hub\nimport sentencepiece as spm\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport os\nimport pandas as pd\nimport re\nimport seaborn as sns",
"Load the module from TF-Hub",
"module = hub.Module(\"https://tfhub.dev/google/universal-sentence-encoder-lite/2\")\n\ninput_placeholder = tf.sparse_placeholder(tf.int64, shape=[None, None])\nencodings = module(\n inputs=dict(\n values=input_placeholder.values,\n indices=input_placeholder.indices,\n dense_shape=input_placeholder.dense_shape))",
"Load SentencePiece model from the TF-Hub Module\nThe SentencePiece model is conveniently stored inside the module's assets. It has to be loaded in order to initialize the processor.",
"with tf.Session() as sess:\n spm_path = sess.run(module(signature=\"spm_path\"))\n\nsp = spm.SentencePieceProcessor()\nwith tf.io.gfile.GFile(spm_path, mode=\"rb\") as f:\n sp.LoadFromSerializedProto(f.read())\nprint(\"SentencePiece model loaded at {}.\".format(spm_path))\n\ndef process_to_IDs_in_sparse_format(sp, sentences):\n # An utility method that processes sentences with the sentence piece processor\n # 'sp' and returns the results in tf.SparseTensor-similar format:\n # (values, indices, dense_shape)\n ids = [sp.EncodeAsIds(x) for x in sentences]\n max_len = max(len(x) for x in ids)\n dense_shape=(len(ids), max_len)\n values=[item for sublist in ids for item in sublist]\n indices=[[row,col] for row in range(len(ids)) for col in range(len(ids[row]))]\n return (values, indices, dense_shape)",
"Test the module with a few examples",
"# Compute a representation for each message, showing various lengths supported.\nword = \"Elephant\"\nsentence = \"I am a sentence for which I would like to get its embedding.\"\nparagraph = (\n \"Universal Sentence Encoder embeddings also support short paragraphs. \"\n \"There is no hard limit on how long the paragraph is. Roughly, the longer \"\n \"the more 'diluted' the embedding will be.\")\nmessages = [word, sentence, paragraph]\n\nvalues, indices, dense_shape = process_to_IDs_in_sparse_format(sp, messages)\n\n# Reduce logging output.\nlogging.set_verbosity(logging.ERROR)\n\nwith tf.Session() as session:\n session.run([tf.global_variables_initializer(), tf.tables_initializer()])\n message_embeddings = session.run(\n encodings,\n feed_dict={input_placeholder.values: values,\n input_placeholder.indices: indices,\n input_placeholder.dense_shape: dense_shape})\n\n for i, message_embedding in enumerate(np.array(message_embeddings).tolist()):\n print(\"Message: {}\".format(messages[i]))\n print(\"Embedding size: {}\".format(len(message_embedding)))\n message_embedding_snippet = \", \".join(\n (str(x) for x in message_embedding[:3]))\n print(\"Embedding: [{}, ...]\\n\".format(message_embedding_snippet))",
"Semantic Textual Similarity (STS) task example\nThe embeddings produced by the Universal Sentence Encoder are approximately normalized. The semantic similarity of two sentences can be trivially computed as the inner product of the encodings.",
"def plot_similarity(labels, features, rotation):\n corr = np.inner(features, features)\n sns.set(font_scale=1.2)\n g = sns.heatmap(\n corr,\n xticklabels=labels,\n yticklabels=labels,\n vmin=0,\n vmax=1,\n cmap=\"YlOrRd\")\n g.set_xticklabels(labels, rotation=rotation)\n g.set_title(\"Semantic Textual Similarity\")\n\n\ndef run_and_plot(session, input_placeholder, messages):\n values, indices, dense_shape = process_to_IDs_in_sparse_format(sp,messages)\n\n message_embeddings = session.run(\n encodings,\n feed_dict={input_placeholder.values: values,\n input_placeholder.indices: indices,\n input_placeholder.dense_shape: dense_shape})\n \n plot_similarity(messages, message_embeddings, 90)",
"Similarity visualized\nHere we show the similarity in a heat map. The final graph is a 9x9 matrix where each entry [i, j] is colored based on the inner product of the encodings for sentence i and j.",
"messages = [\n # Smartphones\n \"I like my phone\",\n \"My phone is not good.\",\n \"Your cellphone looks great.\",\n\n # Weather\n \"Will it snow tomorrow?\",\n \"Recently a lot of hurricanes have hit the US\",\n \"Global warming is real\",\n\n # Food and health\n \"An apple a day, keeps the doctors away\",\n \"Eating strawberries is healthy\",\n \"Is paleo better than keto?\",\n\n # Asking about age\n \"How old are you?\",\n \"what is your age?\",\n]\n\n\nwith tf.Session() as session:\n session.run(tf.global_variables_initializer())\n session.run(tf.tables_initializer())\n run_and_plot(session, input_placeholder, messages)",
"Evaluation: STS (Semantic Textual Similarity) Benchmark\nThe STS Benchmark provides an intristic evaluation of the degree to which similarity scores computed using sentence embeddings align with human judgements. The benchmark requires systems to return similarity scores for a diverse selection of sentence pairs. Pearson correlation is then used to evaluate the quality of the machine similarity scores against human judgements.\nDownload data",
"import pandas\nimport scipy\nimport math\n\n\ndef load_sts_dataset(filename):\n # Loads a subset of the STS dataset into a DataFrame. In particular both\n # sentences and their human rated similarity score.\n sent_pairs = []\n with tf.gfile.GFile(filename, \"r\") as f:\n for line in f:\n ts = line.strip().split(\"\\t\")\n # (sent_1, sent_2, similarity_score)\n sent_pairs.append((ts[5], ts[6], float(ts[4])))\n return pandas.DataFrame(sent_pairs, columns=[\"sent_1\", \"sent_2\", \"sim\"])\n\n\ndef download_and_load_sts_data():\n sts_dataset = tf.keras.utils.get_file(\n fname=\"Stsbenchmark.tar.gz\",\n origin=\"http://ixa2.si.ehu.es/stswiki/images/4/48/Stsbenchmark.tar.gz\",\n extract=True)\n\n sts_dev = load_sts_dataset(\n os.path.join(os.path.dirname(sts_dataset), \"stsbenchmark\", \"sts-dev.csv\"))\n sts_test = load_sts_dataset(\n os.path.join(\n os.path.dirname(sts_dataset), \"stsbenchmark\", \"sts-test.csv\"))\n\n return sts_dev, sts_test\n\n\nsts_dev, sts_test = download_and_load_sts_data()",
"Build evaluation graph",
"sts_input1 = tf.sparse_placeholder(tf.int64, shape=(None, None))\nsts_input2 = tf.sparse_placeholder(tf.int64, shape=(None, None))\n\n# For evaluation we use exactly normalized rather than\n# approximately normalized.\nsts_encode1 = tf.nn.l2_normalize(\n module(\n inputs=dict(values=sts_input1.values,\n indices=sts_input1.indices,\n dense_shape=sts_input1.dense_shape)),\n axis=1)\nsts_encode2 = tf.nn.l2_normalize(\n module(\n inputs=dict(values=sts_input2.values,\n indices=sts_input2.indices,\n dense_shape=sts_input2.dense_shape)),\n axis=1)\n\nsim_scores = -tf.acos(tf.reduce_sum(tf.multiply(sts_encode1, sts_encode2), axis=1))\n",
"Evaluate sentence embeddings",
"#@title Choose dataset for benchmark\ndataset = sts_dev #@param [\"sts_dev\", \"sts_test\"] {type:\"raw\"}\n\nvalues1, indices1, dense_shape1 = process_to_IDs_in_sparse_format(sp, dataset['sent_1'].tolist())\nvalues2, indices2, dense_shape2 = process_to_IDs_in_sparse_format(sp, dataset['sent_2'].tolist())\nsimilarity_scores = dataset['sim'].tolist()\n\ndef run_sts_benchmark(session):\n \"\"\"Returns the similarity scores\"\"\"\n scores = session.run(\n sim_scores,\n feed_dict={\n sts_input1.values: values1,\n sts_input1.indices: indices1,\n sts_input1.dense_shape: dense_shape1,\n sts_input2.values: values2,\n sts_input2.indices: indices2,\n sts_input2.dense_shape: dense_shape2,\n })\n return scores\n\n\nwith tf.Session() as session:\n session.run(tf.global_variables_initializer())\n session.run(tf.tables_initializer())\n scores = run_sts_benchmark(session)\n\npearson_correlation = scipy.stats.pearsonr(scores, similarity_scores)\nprint('Pearson correlation coefficient = {0}\\np-value = {1}'.format(\n pearson_correlation[0], pearson_correlation[1]))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
SHDShim/pytheos
|
examples/6_p_scale_test_Dorogokupets2007_Au.ipynb
|
apache-2.0
|
[
"%cat 0Source_Citation.txt\n\n%matplotlib inline\n# %matplotlib notebook # for interactive",
"For high dpi displays.",
"%config InlineBackend.figure_format = 'retina'",
"0. General note\nThis example compares pressure calculated from pytheos and original publication for the gold scale by Dorogokupets 2007.\n1. Global setup",
"import matplotlib.pyplot as plt\nimport numpy as np\nfrom uncertainties import unumpy as unp\nimport pytheos as eos",
"3. Compare",
"eta = np.linspace(1., 0.65, 8)\nprint(eta)\n\ndorogokupets2007_au = eos.gold.Dorogokupets2007()\n\nhelp(dorogokupets2007_au)\n\ndorogokupets2007_au.print_equations()\n\ndorogokupets2007_au.print_equations()\n\ndorogokupets2007_au.print_parameters()\n\nv0 = 67.84742110765599\n\ndorogokupets2007_au.three_r\n\nv = v0 * (eta) \ntemp = 2500.\n\np = dorogokupets2007_au.cal_p(v, temp * np.ones_like(v))",
"<img src='./tables/Dorogokupets2007_Au.png'>",
"print('for T = ', temp)\nfor eta_i, p_i in zip(eta, p):\n print(\"{0: .3f} {1: .2f} \".format(eta_i, p_i))\n\nv = dorogokupets2007_au.cal_v(p, temp * np.ones_like(p), min_strain=0.6)\nprint(1.-(v/v0))"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
chi-hung/PythonTutorial
|
tutorials/PandasTutorial.ipynb
|
mit
|
[
"目的:熟悉Pandas套件的使用\n\n序列(Series):詞頻表\n以index檢查序列索引\n以loc(), iloc()提取序列內容\n以apply()對序列做處理\n畫詞頻圖\n序列(Series):高斯分佈\n畫盒鬚圖\n資料表(DataFrame)\n以loc()或iloc()做資料的選取\ngroupby()\n以屬性indices檢查groupby後的結果\n我們可用for將一個個群從群集裡取出\n用get_group(2)將群集內的第二個群取出\ngroupby完之後,可以用sum()做加總\n或以describe()來對個群做一些簡單統計\n產生一個亂數時間序列資料,用來了解groupby, aggregate, transform, filter\n接著我們將資料groupby年以後,以transform()對資料做轉換\ngroupby後,除了以transform()做轉換,我們亦可用agg()做聚合,計算各群的平均和標準差\ngroupby後,以filter()過濾出符合判斷式的群\n未定義(NaN)的處理\n\n練習(手寫數字)\n手寫數字資料可載於此\n\n選出數字為7的path,並告訴我數字為7的資料有幾筆\n各手寫數字分別有幾張圖?",
"import pandas as pd\nimport numpy as np\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport seaborn as sns\nsns.set()\nimport os",
"根據數據的維度,Pandas可將資料存成序列(series) 或資料表(data frame)。以下將各別針對序列和資料表做介紹。\n<a id=\"01\"/>序列(Series):詞頻表\n可將Python字典存成Pandas序列。字典格式為 {索引1:索引值1, 索引2:索引值2, 索引3:索引值3,...}。",
"adjs={'affordable': 1,\n 'comfortable': 3,\n 'comparable': 1,\n 'different': 1,\n 'disappointed': 1,\n 'fantastic': 1,\n 'good': 8,\n 'great': 15}\n\nser=pd.Series(adjs)",
"以[ ]方法取出序列中的前三筆資料",
"ser[:3]",
"回索引\n<a id=\"02\"/>以index檢查序列索引",
"ser.index",
"回索引\n<a id=\"03\"/>以loc(), iloc()提取序列內容\n以[索引名稱]可提取和該索引名稱相應的索引值。",
"ser['comfortable']\n\nfor ind in ser.index:\n print(ind,ser[ind])",
"ser[索引名稱] 等同於使用 ser.loc[索引名稱]:",
"for ind in ser.index:\n print(ind,ser.loc[ind])",
"以iloc[索引排序]可提取對應該索引排序的索引值。索引排序是整數,代表該筆資料在序列內的順位。",
"for i,ind in enumerate(ser.index):\n print(ind,ser[ind])\n print(ind,ser.iloc[i])\n print('------------')",
"回索引\n<a id=\"04\"/>以apply()對序列做處理",
"ser",
"利用apply()將序列內的每個值+1。",
"ser.apply(lambda x:x+1)",
"回索引\n<a id=\"05\"/>畫詞頻圖",
"ser.plot.bar()",
"以上等同於:",
"ser.plot(kind='bar')",
"回索引\n<a id=\"06\"/>序列(Series):高斯分佈\n我們亦可不提供索引,直接輸入一個無索引, 非字典形式的序列給Pandas:",
"ser=pd.Series(np.random.normal(0,1,1000))\n\nser[:5]",
"由於並無提供索引,索引會以流水號從0開始自動產生。",
"ser.shape",
"回索引\n<a id=\"07\"/>畫盒鬚圖",
"ser.plot.box()",
"以上結果等同於:",
"ser.plot(kind='box')",
"我們先前使用了 pd.Series(序列) 讓序列自動產生由0開始遞增的索引。而若是我們想要自己提供索引,我們仍然可使用 pd.Series(序列,索引序列) 建立自帶索引的序列。",
"ser=pd.Series(np.random.normal(0,1,1000), index=np.random.choice(['A','B'],1000))\n\nser.head(5)",
"複習\n* 給字典(key:value),則索引為key\n* 給一維序列不給索引,則索引為流水號\n* 給兩個等長的一維序列,則一個為資料,一個為資料的索引\n回索引\n<a id=\"08\"/>資料表(DataFrame)\n以下我們建立一個有三個行,且索引為自動流水號的資料表。\n我們可以提供{行名稱1:行序列1,行名稱2:行序列2,行名稱3:行序列3,...}給pandas.DataFrame()建構資料表。",
"a=[1,2,3]*2\nb=[1, 2, 3, 10, 20, 30]\nc=np.array(b)*2\n\ndf=pd.DataFrame({'col1':a,'col2':b,'col3':c})\n\ndf",
"直接於df物件之後加[欄位名稱]可選取欄位",
"df['col2']",
"回索引\n<a id=\"09\"/>以loc()或iloc()做資料的選取\n用loc或iloc時機:當你不只想選取欄位,也想選取特定索引範圍時。\n以loc[索引名稱,行的名稱] 可擷取一部分範圍的資料表\n例:選出索引0,1,2;列為col2的資料",
"df.loc[0:2,'col2']",
"以上範例索引為數值,因此可以用$0:2$的方式,選出索引由$0$至$2$ ($0,1,2$) 的資料列。\n例:選出索引為0,1,2, 列為col1和col2的資料",
"df.loc[0:2,['col1','col2']]\n\ndf",
"選取位於'col'行中等於1的列",
"df['col1']==1\n\ndf[df['col1']==1]",
"先依條件df['col1']==1選定資料,再指定要選取的欄位是'col2'",
"df.loc[df['col1']==1,'col2']",
"亦可使用iloc(索引排序,行排序),根據索引排序和行排序來選取資料",
"df\n\ndf.iloc[0:2,1]",
"回索引\n<a id=\"ex00\"/>練習0:選出數字為7的path,並告訴我數字為7的資料有幾筆",
"def filePathsGen(rootPath):\n paths=[]\n dirs=[]\n for dirPath,dirNames,fileNames in os.walk(rootPath):\n for fileName in fileNames:\n fullPath=os.path.join(dirPath,fileName)\n paths.append((int(dirPath[len(rootPath) ]),fullPath))\n dirs.append(dirNames)\n return dirs,paths\n\ndirs,paths=filePathsGen('mnist/') #載入圖片路徑\n\ndfPath=pd.DataFrame(paths,columns=['class','path']) #圖片路徑存成Pandas資料表\ndfPath.head(5) # 看資料表前5個row\n\n# 完成以下程式碼:\ndfPath[...]",
"回索引\n\n<a id=\"10\"/>groupby()\nPandas 可做各種等同於SQL查詢的處理,詳見Pandas和SQL指令的比較:\nhttp://pandas.pydata.org/pandas-docs/stable/comparison_with_sql.html\n現在,我們首先來介紹,SQL查詢指令中常見的groupby,於pandas是該怎麼做呢?",
"df",
"以行'col1'來做groupby",
"grouped=df.groupby('col1')",
"回索引\n<a id=\"11\"/>以屬性indices檢查groupby後的結果",
"grouped.indices",
"現在,grouped這個物件是一個群集,裡面有三個群。其中,群為1的有索引是0和索引是3的這兩列,群為2的有索引是1和索引是4的這兩列,群為3的有索引是2和索引是5的這兩列。\n回索引\n<a id=\"12\"/>我們可用for將一個個群從群集裡取出",
"for name,group in grouped:\n print(name)\n print(group)\n print('--------------')",
"直接以* 將群集展開",
"print(*grouped)",
"回索引\n<a id=\"13\"/>用get_group(2)將群集內的第二個群取出",
"grouped.get_group(2)",
"回索引\n<a id=\"14\"/>groupby完之後,可以用sum()做加總",
"grouped.sum()",
"回索引\n<a id=\"15\"/>或以describe()來對個群做一些簡單統計",
"grouped.describe()",
"回索引\n\n<a id=\"16\"/>產生一個亂數時間序列資料,用來了解groupby, aggregate, transform, filter\n現在,我們來製作一個亂數產生的時間序列。我們將利用它來學習,於群聚(groupby)之後,如何將得到的群集做聚合 (aggregate),轉換 (transform)或過濾 (filter),進而得出想要的結果。\n以下我們要亂數產生一個時間序列的範例資料。首先,先產生$365\\times2$個日期",
"dates = pd.date_range('2017/1/1', periods=365*2,freq='D')",
"建立一個時間序列,以剛剛產生的日期當做索引。序列的前365個亂數是由常態分佈$N(\\mu=0,\\sigma=1)$抽取出來的樣本,而後365個亂數則是由常態分佈$N(\\mu=6,\\sigma=1)$抽取出來的樣本。\n以下,我們以水平方向堆疊(horizontal stack)兩個不同分佈的亂數序列,將其輸入給pd.Series()。",
"dat=pd.Series(np.hstack( (np.random.normal(0,1,365),np.random.normal(6,1,365) )) ,dates)\n\ndat[:5]",
"接著我們將序列以年來分群。",
"grouped=pd.Series(dat).groupby(lambda x:x.year)\n\ngrouped.indices",
"顯而易見,因2017和2018年的資料是不同的亂數分佈產生的,所以畫出來這兩年的資料是分開的。",
"for name,group in grouped:\n group.plot(label=name,legend=True)",
"畫直方圖(histogram),可發現兩組數據的確類似常態分佈,一個中心約在x=0處,另一個中心約在x=6處。\n法一",
"grouped.plot.hist(15)",
"法二(有畫出兩組資料的標籤)",
"for name,group in grouped:\n group.plot.hist(15,label=name,legend=True)",
"盒鬚圖(boxplot)",
"fig,axes=plt.subplots(1,2)\nfor idx,(name,group) in enumerate(grouped):\n group.plot(kind='box', label=name,ax=axes[idx])",
"接著我們想畫盒鬚圖(boxplot)。在這裡遇到一個問題,也就是boxplot()這個方法只有物件是DataFrame才有,物件是Series尚無此方法。因此我們只好將序列(Series)轉成DataFrame。\n註:我們想用boxplot這個方法,因為它可以寫一行就幫我們自動分群,畫出盒鬚圖。",
"dat.name='random variables'",
"將序列dat轉成DataFrame並且將日期索引變成行。",
"datNew=dat.to_frame().reset_index()",
"將日期欄位取年的部份,存到一個欄位叫做'year'。",
"datNew['year']=datNew['index'].apply(lambda x:x.year)",
"將原先是索引的日期欄位刪除。",
"del datNew['index']",
"最終我們產生了新的資料表,可以用'year'這個欄位來做groupby。",
"datNew[:5]",
"最後以boxplot()畫盒鬚圖並以年做groupby。",
"datNew.boxplot(by='year')",
"從盒鬚圖可看出,2017年的亂數資料平均值靠近0, 而2018年的亂數資料平均值靠近6。這符合我們的預期。\n用seaborn畫可能會漂亮些:",
"sns.boxplot(data=datNew,x='year',y='random variables',width=0.3)\n\ndat[:5]",
"回索引\n<a id=\"17\"/>接著我們將資料groupby年以後,以transform()對資料做轉換\n以下我們想做的事情是將常態分佈的亂數做標準化,也就是將$x\\sim N(\\mu,\\sigma)$轉換成 $x_{new}=\\frac{x-\\mu}{\\sigma}\\sim N(0,1)$。\nhttps://en.wikipedia.org/wiki/Standard_score",
"grouped=dat.groupby(lambda x:x.year)\n\ntransformed = grouped.transform(lambda x: (x - x.mean()) / x.std())\n\ntransformed[:5]\n\nlen(transformed)",
"畫圖比較轉換前和轉換後的序列",
"compare = pd.DataFrame({'before transformation': dat, 'after transformation': transformed})\n\ncompare.plot()",
"轉換後,2018年的資料和2017年的資料有相同的分佈\n回索引\n<a id=\"18\"/>groupby後,除了以transform()做轉換,我們亦可用agg()做聚合,計算各群的平均和標準差\n將原資料做groupby",
"groups=dat.groupby(lambda x:x.year)",
"將轉換後資料做groupby",
"groupsTrans=transformed.groupby(lambda x:x.year)",
"計算原資料各群平均",
"groups.agg(np.mean)",
"計算原資料各群標準差",
"groups.agg(np.std)",
"計算轉換後資料各群平均",
"groupsTrans.agg(np.mean)",
"計算轉換後資料各群標準差",
"groupsTrans.agg(np.std)",
"回索引\n<a id=\"19\"/>groupby後,以filter()過濾出符合判斷式的群\n例如說,我們想找出平均數是小於5的群集。由先前所畫的圖,或是由數據產生的方式,我們知道2017年這個群,平均數是小於五,因此我們可使用np.abs(x.mean())<5 這個條件過濾出2017年的群集資料。",
"filtered=groups.filter(lambda x: np.abs(x.mean())<5)\n\nlen(filtered)\n\nfiltered[:5]",
"相同的,我們知道2018年這個群,平均數是大於五,因此我們可使用np.abs(x.mean())>5 這個條件過濾出2018年的群集資料。",
"filtered=groups.filter(lambda x: np.abs(x.mean())>5)\n\nlen(filtered)\n\nfiltered[:5]",
"使用np.abs(x.mean())>6 將找不到任何群集的資料。",
"filtered=groups.filter(lambda x: np.abs(x.mean())>6)\n\nlen(filtered)",
"使用np.abs(x.mean())<6 將得到屬於任何群集的資料。",
"filtered=groups.filter(lambda x: np.abs(x.mean())<6)\n\nlen(filtered)\n\nfiltered[:5]\n\nfiltered[-1-5:-1]",
"回索引\n<a id=\"20\"/>未定義(NaN)的處理",
"dat=pd.Series(np.random.normal(0,1,6))\n\ndat",
"將索引2的值換成NaN",
"dat[2]=np.NAN\n\ndat",
"將序列所有值往後挪動(shift)一格。此舉會導致最前面的數變成NaN。",
"shifted=dat.shift(1)\n\nshifted",
"用NaN後的數來填補該NaN (backward fill)",
"shifted.bfill()",
"用NaN前的數來填補該NaN (forward fill)",
"shifted.ffill()",
"用0來填補該NaN",
"shifted.fillna(0)\n\nshifted.mean()",
"用平均數來填補該NaN",
"shifted.fillna( shifted.mean() )",
"將序列中的NaN丟棄(drop)",
"shifted.dropna()",
"回索引\n<a id=\"ex01\"/>練習1:各手寫數字分別有幾張圖?",
"def filePathsGen(rootPath):\n paths=[]\n dirs=[]\n for dirPath,dirNames,fileNames in os.walk(rootPath):\n for fileName in fileNames:\n fullPath=os.path.join(dirPath,fileName)\n paths.append((int(dirPath[len(rootPath) ]),fullPath))\n dirs.append(dirNames)\n return dirs,paths\n\ndirs,paths=filePathsGen('mnist/') #載入圖片路徑\n\ndfPath=pd.DataFrame(paths,columns=['class','path']) #圖片路徑存成Pandas資料表\ndfPath.head(5) # 看資料表前5個row\n\n# 完成以下程式碼:\n...\n...\ngroups.count()",
"回索引"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ben-williams/cbb-retreat
|
material/rr-intro-exercise.ipynb
|
cc0-1.0
|
[
"Intro Session: Motivating reproducibility\nThis Jupyter Notebook file contains Python code you can use to complete the exercises from the Intro\nsession. To see the output simply click Cell->Run All above\nPandas:\npandas is a data analysis package for Python, which provides similar functionality to R. Typically, this is imported as pd for the convenience of typing fewer characters.",
"import pandas as pd",
"Notebook\nThe Jupyter Notebook is a literate programming tool that lets you combine Python code, Markdown text, and other commands. Each notebook is composed of cells, and cells can either be code (e.g. Python) or Markdown.\nIn addition to Python code, the notebook can run so-called magic functions. These begin with a percent % symbol and can be used to do common tasks like listing files with %ls",
"%ls",
"Both the magic functions and the python ones support tab-completion",
"%ls rr-intro-data-v0.2/intro/data/",
"Data:\nIn the pandas package, the read_csv function is used to read the data into a pandas Data Frame. Note that the argument provided to this function is the complete path that leads to the dataset from your current \nworking directory (where this Notebook is located). Also note that this is provided \nas a character string, hence in quotation marks.\nRead the Gapminder 1950-1960 data in from CSV file",
"gap_5060 = pd.read_csv('rr-intro-data-v0.2/intro/data/gapminder-5060.csv')",
"Task 1: Visualize life expectancy over time for Canada in the 1950s and 1960s using a line plot.\n\nFilter the data for Canada only:",
"gap_5060_CA = gap_5060.loc[gap_5060['country'] == 'Canada']",
"Visualize:\n\nPandas uses matplotlib by default, and a very common practice is to generate plots inline - meaning within the notebook and not as separate files. A magic function configures this, and you'll often just put it at the top of your notebook with import pandas as pd",
"%matplotlib inline\n\ngap_5060_CA.plot(kind='line', x='year', y='lifeExp')\npass",
"Task 2: Something is clearly wrong with this plot!\nTurns out there's a data error in the data file: life expectancy for Canada in the year 1957 is coded as 999999, it should actually be 69.96. Make this correction.\n\nloc and ['column'] to index into the Data Frame",
"gap_5060.loc[(gap_5060['country'] == 'Canada') & (gap_5060['year'] == 1957)]",
"loc[<col>, 'row'] allows assignment with =",
"gap_5060.loc[(gap_5060['country'] == 'Canada') & (gap_5060['year'] == 1957), 'lifeExp'] = 69.96\ngap_5060.loc[(gap_5060['country'] == 'Canada') & (gap_5060['year'] == 1957)]",
"Task 3: Visualize life expectancy over time for Canada again, with the corrected data.\nExact same code as before, but note that the contents of gap_5060 are different as it \nhas been updated in the previous task.",
"gap_5060_CA = gap_5060.loc[gap_5060['country'] == 'Canada']\ngap_5060_CA.plot(kind='line', x='year', y='lifeExp')\npass",
"Task 3 - Stretch goal: Add lines for Mexico and United States.\n\n.isin() for logical operator testing if a country's name is in the list provided",
"loc = gap_5060['country'].isin(['Canada','United States','Mexico'])\nus_mexico_ca = gap_5060.loc[loc]",
"To get each country as a series, we create a 2-level index",
"indexed_by_country = us_mexico_ca.set_index(['country','year'])",
"The data table now indexes by country and year. \nSee how they appear on the second header row, and the data is grouped by country?",
"indexed_by_country",
"Visualization code is almost the same, but we tell Pandas how to unstack the data into multiple series",
"indexed_by_country.unstack(level='country').plot(kind='line',y='lifeExp')\npass"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |