Dataset columns:
repo_name: string (length 6 to 77)
path: string (length 8 to 215)
license: string (15 distinct values)
cells: list
types: list
fluxcapacitor/source.ml
jupyterhub.ml/notebooks/train_deploy/zz_under_construction/zz_old/talks/StartupML/Jan-20-2017/SparkMLTensorflowAI-HybridCloud-ContinuousDeployment.ipynb
apache-2.0
[ "Where Am I?\nStartup.ML Conference - San Francisco - Jan 20, 2017\n\nWho Am I?\nChris Fregly\n\n\nResearch Scientist @ PipelineIO\n\nVideo Series Author \"High Performance Tensorflow in Production\" @ OReilly (Coming Soon)\n\nFounder @ Advanced Spark and Tensorflow Meetup\n\nGithub Repo\n\nDockerHub Repo\n\nSlideshare\n\nYouTube\n\nWho Was I?\nSoftware Engineer @ Netflix, Databricks, IBM Spark Tech Center\n \n \n\n1. Infrastructure and Tools\nDocker\nImages, Containers\nUseful Docker Image: AWS + GPU + Docker + Tensorflow + Spark\n\nKubernetes\nContainer Orchestration Across Clusters\n \nWeavescope\nKubernetes Cluster Visualization\n\nJupyter Notebooks\nWhat We're Using Here for Everything!\n \nAirflow\nInvoke Any Type of Workflow on Any Type of Schedule \n\nGithub\nCommit New Model to Github, Airflow Workflow Triggered for Continuous Deployment \n\nDockerHub\nMaintains Docker Images \n\nContinuous Deployment\nNot Just for Code, Also for ML/AI Models!\nCanary Release\nDeploy and Compare New Model Alongside Existing\nMetrics and Dashboards\nNot Just System Metrics, ML/AI Model Prediction Metrics\nNetflixOSS-based\nPrometheus\nGrafana\nElasticsearch\nSeparate Cluster Concerns\nTraining/Admin Cluster\nPrediction Cluster\nHybrid Cloud Deployment for eXtreme High Availability (XHA)\nAWS and Google Cloud\nApache Spark\n \nTensorflow + Tensorflow Serving\n\n\n2. Model Deployment Bundles\nKeyValue\nie. Recommendations\nIn-memory: Redis, Memcache\nOn-disk: Cassandra, RocksDB\nFirst-class Servable in Tensorflow Serving\nPMML\nIt's Useful and Well-Supported\nApple, Cisco, Airbnb, HomeAway, etc\nPlease Don't Re-build It - Reduce Your Technical Debt!\n\nNative Code\nHand-coded (Python + Pickling)\nGenerate Java Code from PMML?\n\nTensorflow Model Exports\nfreeze_graph.py: Combine Tensorflow Graph (Static) with Trained Weights (Checkpoints) into Single Deployable Model\n3. Model Deployments and Rollbacks\nMutable\nEach New Model is Deployed to Live, Running Container\nImmutable\nEach New Model is a New Docker Image\n4. Optimizing Tensorflow Models for Serving\nPython Scripts\noptimize_graph_for_inference.py\nPete Warden's Blog\nGraph Transform Tool\nCompile (Tensorflow 1.0+)\nXLA Compiler\nCompiles 3 graph operations (input, operation, output) into 1 operation\nRemoves need for Tensorflow Runtime (20 MB is significant on tiny devices)\nAllows new backends for hardware-specific optimizations (better portability)\ntfcompile\nConvert Graph into executable code\nCompress/Distill Ensemble Models\nConvert ensembles or other complex models into smaller models\nRe-score training data with output of model being distilled \nTrain smaller model to produce same output\nOutput of smaller model learns more information than original label\n5. Optimizing Serving Runtime Environment\nThroughput\nOption 1: Add more Tensorflow Serving servers behind load balancer\nOption 2: Enable request batching in each Tensorflow Serving\nOption Trade-offs: Higher Latency (bad) for Higher Throughput (good)\n$TENSORFLOW_SERVING_HOME/bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server \n--port=9000 \n--model_name=tensorflow_minimal \n--model_base_path=/root/models/tensorflow_minimal/export\n--enable_batching=true\n--max_batch_size=1000000\n--batch_timeout_micros=10000\n--max_enqueued_batches=1000000\n\nLatency\nThe deeper the model, the longer the latency\nStart inference in parallel where possible (ie. user inference in parallel with item inference)\nPre-load common inputs from database (ie. 
user attributes, item attributes) \nPre-compute/partial-compute common inputs (ie. popular word embeddings)\nMemory\nWord embeddings are huge!\nUse hashId for each word\nOff-load embedding matrices to parameter server and share between serving servers\n6. Demos!!\nTrain and Deploy Tensorflow AI Model (Simple Model, Immutable Deploy)\nTrain Tensorflow AI Model", "import numpy as np\nimport os\nimport tensorflow as tf\nfrom tensorflow.contrib.session_bundle import exporter\nimport time\n\n# make things wide\nfrom IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:100% !important; }</style>\"))\n\nfrom IPython.display import clear_output, Image, display, HTML\n\ndef strip_consts(graph_def, max_const_size=32):\n \"\"\"Strip large constant values from graph_def.\"\"\"\n strip_def = tf.GraphDef()\n for n0 in graph_def.node:\n n = strip_def.node.add() \n n.MergeFrom(n0)\n if n.op == 'Const':\n tensor = n.attr['value'].tensor\n size = len(tensor.tensor_content)\n if size > max_const_size:\n tensor.tensor_content = \"<stripped %d bytes>\"%size\n return strip_def\n\ndef show_graph(graph_def=None, width=1200, height=800, max_const_size=32, ungroup_gradients=False):\n if not graph_def:\n graph_def = tf.get_default_graph().as_graph_def()\n \n \"\"\"Visualize TensorFlow graph.\"\"\"\n if hasattr(graph_def, 'as_graph_def'):\n graph_def = graph_def.as_graph_def()\n strip_def = strip_consts(graph_def, max_const_size=max_const_size)\n data = str(strip_def)\n if ungroup_gradients:\n data = data.replace('\"gradients/', '\"b_')\n #print(data)\n code = \"\"\"\n <script>\n function load() {{\n document.getElementById(\"{id}\").pbtxt = {data};\n }}\n </script>\n <link rel=\"import\" href=\"https://tensorboard.appspot.com/tf-graph-basic.build.html\" onload=load()>\n <div style=\"height:600px\">\n <tf-graph-basic id=\"{id}\"></tf-graph-basic>\n </div>\n \"\"\".format(data=repr(data), id='graph'+str(np.random.rand()))\n\n iframe = \"\"\"\n <iframe seamless style=\"width:{}px;height:{}px;border:0\" srcdoc=\"{}\"></iframe>\n \"\"\".format(width, height, code.replace('\"', '&quot;'))\n display(HTML(iframe))\n\n# If this errors out, increment the `export_version` variable, restart the Kernel, and re-run\n\nflags = tf.app.flags\nFLAGS = flags.FLAGS\nflags.DEFINE_integer(\"batch_size\", 10, \"The batch size to train\")\nflags.DEFINE_integer(\"epoch_number\", 10, \"Number of epochs to run trainer\")\nflags.DEFINE_integer(\"steps_to_validate\", 1,\n \"Steps to validate and print loss\")\nflags.DEFINE_string(\"checkpoint_dir\", \"./checkpoint/\",\n \"indicates the checkpoint dirctory\")\n#flags.DEFINE_string(\"model_path\", \"./model/\", \"The export path of the model\")\nflags.DEFINE_string(\"model_path\", \"/root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export/\", \"The export path of the model\")\n#flags.DEFINE_integer(\"export_version\", 27, \"The version number of the model\")\n\nfrom datetime import datetime \n\nseconds_since_epoch = int(datetime.now().strftime(\"%s\"))\n\nexport_version = seconds_since_epoch\n\n# If this errors out, increment the `export_version` variable, restart the Kernel, and re-run\n\ndef main():\n # Define training data\n x = np.ones(FLAGS.batch_size)\n y = np.ones(FLAGS.batch_size)\n\n # Define the model\n X = tf.placeholder(tf.float32, shape=[None], name=\"X\")\n Y = tf.placeholder(tf.float32, shape=[None], name=\"yhat\")\n w = tf.Variable([1.0], name=\"weight\")\n b = tf.Variable([1.0], name=\"bias\")\n loss = tf.square(Y - tf.matmul(X, w) - 
b)\n train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)\n predict_op = tf.mul(X, w) + b\n\n saver = tf.train.Saver()\n checkpoint_dir = FLAGS.checkpoint_dir\n checkpoint_file = checkpoint_dir + \"/checkpoint.ckpt\"\n if not os.path.exists(checkpoint_dir):\n os.makedirs(checkpoint_dir)\n \n # Start the session\n with tf.Session() as sess:\n sess.run(tf.initialize_all_variables())\n\n ckpt = tf.train.get_checkpoint_state(checkpoint_dir)\n if ckpt and ckpt.model_checkpoint_path:\n print(\"Continue training from the model {}\".format(ckpt.model_checkpoint_path))\n saver.restore(sess, ckpt.model_checkpoint_path)\n\n saver_def = saver.as_saver_def()\n print(saver_def.filename_tensor_name)\n print(saver_def.restore_op_name)\n\n # Start training\n start_time = time.time()\n for epoch in range(FLAGS.epoch_number):\n sess.run(train_op, feed_dict={X: x, Y: y})\n\n # Start validating\n if epoch % FLAGS.steps_to_validate == 0:\n end_time = time.time()\n print(\"[{}] Epoch: {}\".format(end_time - start_time, epoch))\n\n saver.save(sess, checkpoint_file)\n tf.train.write_graph(sess.graph_def, checkpoint_dir, 'trained_model.pb', as_text=False)\n tf.train.write_graph(sess.graph_def, checkpoint_dir, 'trained_model.txt', as_text=True)\n\n start_time = end_time\n\n # Print model variables\n w_value, b_value = sess.run([w, b])\n print(\"The model of w: {}, b: {}\".format(w_value, b_value))\n\n # Export the model\n print(\"Exporting trained model to {}\".format(FLAGS.model_path))\n model_exporter = exporter.Exporter(saver)\n model_exporter.init(\n sess.graph.as_graph_def(),\n named_graph_signatures={\n 'inputs': exporter.generic_signature({\"features\": X}),\n 'outputs': exporter.generic_signature({\"prediction\": predict_op})\n })\n model_exporter.export(FLAGS.model_path, tf.constant(export_version), sess)\n print('Done exporting!')\n\nif __name__ == \"__main__\":\n main()\n\nshow_graph()", "Commit and Deploy New Tensorflow AI Model\nCommit Model to Github", "!ls -l /root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export\n\n!ls -l /root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export/00000027\n\n!git status\n\n!git add --all /root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export/00000027/\n\n!git status\n\n!git commit -m \"updated tensorflow model\"\n\n!git status\n\n# If this fails with \"Permission denied\", use terminal within jupyter to manually `git push`\n!git push", "Airflow Workflow Deploys New Model through Github Post-Commit Webhook to Triggers", "from IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:100% !important; }</style>\"))\n\nfrom IPython.display import clear_output, Image, display, HTML\n\nhtml = '<iframe width=100% height=500px src=\"http://demo.pipeline.io:8080/admin\">'\ndisplay(HTML(html))", "Train and Deploy Spark ML Model (Airbnb Model, Mutable Deploy)\nScale Out Spark Training Cluster\nKubernetes CLI", "!kubectl scale --context=awsdemo --replicas=2 rc spark-worker-2-0-1\n\n!kubectl get pod --context=awsdemo", "Weavescope Kubernetes AWS Cluster Visualization", "from IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:100% !important; }</style>\"))\n\nfrom IPython.display import clear_output, Image, display, HTML\n\nhtml = '<iframe width=100% height=500px src=\"http://kubernetes-aws.demo.pipeline.io\">'\ndisplay(HTML(html))", "Generate PMML from Spark ML Model", "from pyspark.ml.linalg import Vectors\nfrom pyspark.ml.feature import VectorAssembler, 
StandardScaler\nfrom pyspark.ml.feature import OneHotEncoder, StringIndexer\nfrom pyspark.ml import Pipeline, PipelineModel\nfrom pyspark.ml.regression import LinearRegression\n\n# You may need to Reconnect (more than Restart) the Kernel to pick up changes to these sett\nimport os\n\nmaster = '--master spark://spark-master-2-1-0:7077'\nconf = '--conf spark.cores.max=1 --conf spark.executor.memory=512m'\npackages = '--packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.1'\njars = '--jars /root/lib/jpmml-sparkml-package-1.0-SNAPSHOT.jar'\npy_files = '--py-files /root/lib/jpmml.py'\n\nos.environ['PYSPARK_SUBMIT_ARGS'] = master \\\n + ' ' + conf \\\n + ' ' + packages \\\n + ' ' + jars \\\n + ' ' + py_files \\\n + ' ' + 'pyspark-shell'\n\nprint(os.environ['PYSPARK_SUBMIT_ARGS'])\n\nfrom pyspark.sql import SparkSession\n\nspark = SparkSession.builder.getOrCreate()", "Step 0: Load Libraries and Data", "df = spark.read.format(\"csv\") \\\n .option(\"inferSchema\", \"true\").option(\"header\", \"true\") \\\n .load(\"s3a://datapalooza/airbnb/airbnb.csv.bz2\")\n\ndf.registerTempTable(\"df\")\n\nprint(df.head())\n\nprint(df.count())", "Step 1: Clean, Filter, and Summarize the Data", "df_filtered = df.filter(\"price >= 50 AND price <= 750 AND bathrooms > 0.0 AND bedrooms is not null\")\n\ndf_filtered.registerTempTable(\"df_filtered\")\n\ndf_final = spark.sql(\"\"\"\n select\n id,\n city,\n case when state in('NY', 'CA', 'London', 'Berlin', 'TX' ,'IL', 'OR', 'DC', 'WA')\n then state\n else 'Other'\n end as state,\n space,\n cast(price as double) as price,\n cast(bathrooms as double) as bathrooms,\n cast(bedrooms as double) as bedrooms,\n room_type,\n host_is_super_host,\n cancellation_policy,\n cast(case when security_deposit is null\n then 0.0\n else security_deposit\n end as double) as security_deposit,\n price_per_bedroom,\n cast(case when number_of_reviews is null\n then 0.0\n else number_of_reviews\n end as double) as number_of_reviews,\n cast(case when extra_people is null\n then 0.0\n else extra_people\n end as double) as extra_people,\n instant_bookable,\n cast(case when cleaning_fee is null\n then 0.0\n else cleaning_fee\n end as double) as cleaning_fee,\n cast(case when review_scores_rating is null\n then 80.0\n else review_scores_rating\n end as double) as review_scores_rating,\n cast(case when square_feet is not null and square_feet > 100\n then square_feet\n when (square_feet is null or square_feet <=100) and (bedrooms is null or bedrooms = 0)\n then 350.0\n else 380 * bedrooms\n end as double) as square_feet\n from df_filtered\n\"\"\").persist()\n\ndf_final.registerTempTable(\"df_final\")\n\ndf_final.select(\"square_feet\", \"price\", \"bedrooms\", \"bathrooms\", \"cleaning_fee\").describe().show()\n\nprint(df_final.count())\n\nprint(df_final.schema)\n\n# Most popular cities\n\nspark.sql(\"\"\"\n select \n state,\n count(*) as ct,\n avg(price) as avg_price,\n max(price) as max_price\n from df_final\n group by state\n order by count(*) desc\n\"\"\").show()\n\n# Most expensive popular cities\n\nspark.sql(\"\"\"\n select \n city,\n count(*) as ct,\n avg(price) as avg_price,\n max(price) as max_price\n from df_final\n group by city\n order by avg(price) desc\n\"\"\").filter(\"ct > 25\").show()", "Step 2: Define Continous and Categorical Features", "continuous_features = [\"bathrooms\", \\\n \"bedrooms\", \\\n \"security_deposit\", \\\n \"cleaning_fee\", \\\n \"extra_people\", \\\n \"number_of_reviews\", \\\n \"square_feet\", \\\n 
\"review_scores_rating\"]\n\ncategorical_features = [\"room_type\", \\\n \"host_is_super_host\", \\\n \"cancellation_policy\", \\\n \"instant_bookable\", \\\n \"state\"]", "Step 3: Split Data into Training and Validation", "[training_dataset, validation_dataset] = df_final.randomSplit([0.8, 0.2])", "Step 4: Continous Feature Pipeline", "continuous_feature_assembler = VectorAssembler(inputCols=continuous_features, outputCol=\"unscaled_continuous_features\")\n\ncontinuous_feature_scaler = StandardScaler(inputCol=\"unscaled_continuous_features\", outputCol=\"scaled_continuous_features\", \\\n withStd=True, withMean=False)", "Step 5: Categorical Feature Pipeline", "categorical_feature_indexers = [StringIndexer(inputCol=x, \\\n outputCol=\"{}_index\".format(x)) \\\n for x in categorical_features]\n\ncategorical_feature_one_hot_encoders = [OneHotEncoder(inputCol=x.getOutputCol(), \\\n outputCol=\"oh_encoder_{}\".format(x.getOutputCol() )) \\\n for x in categorical_feature_indexers]", "Step 6: Assemble our Features and Feature Pipeline", "feature_cols_lr = [x.getOutputCol() \\\n for x in categorical_feature_one_hot_encoders]\nfeature_cols_lr.append(\"scaled_continuous_features\")\n\nfeature_assembler_lr = VectorAssembler(inputCols=feature_cols_lr, \\\n outputCol=\"features_lr\")", "Step 7: Train a Linear Regression Model", "linear_regression = LinearRegression(featuresCol=\"features_lr\", \\\n labelCol=\"price\", \\\n predictionCol=\"price_prediction\", \\\n maxIter=10, \\\n regParam=0.3, \\\n elasticNetParam=0.8)\n\nestimators_lr = \\\n [continuous_feature_assembler, continuous_feature_scaler] \\\n + categorical_feature_indexers + categorical_feature_one_hot_encoders \\\n + [feature_assembler_lr] + [linear_regression]\n\npipeline = Pipeline(stages=estimators_lr)\n\npipeline_model = pipeline.fit(training_dataset)\n\nprint(pipeline_model)", "Step 8: Convert PipelineModel to PMML", "from jpmml import toPMMLBytes\n\nmodel_bytes = toPMMLBytes(spark, training_dataset, pipeline_model)\n\nprint(model_bytes.decode(\"utf-8\"))", "Push PMML to Live, Running Spark ML Model Server (Mutable)", "import urllib.request\n\nnamespace = 'default'\nmodel_name = 'airbnb'\nversion = '1'\nupdate_url = 'http://prediction-pmml-aws.demo.pipeline.io/update-pmml-model/%s/%s/%s' % (namespace, model_name, version)\n\nupdate_headers = {}\nupdate_headers['Content-type'] = 'application/xml'\n\nreq = urllib.request.Request(update_url, \\\n headers=update_headers, \\\n data=model_bytes)\n\nresp = urllib.request.urlopen(req)\n\nprint(resp.status) # Should return Http Status 200 \n\nimport urllib.request\n\nupdate_url = 'http://prediction-pmml-gcp.demo.pipeline.io/update-pmml/pmml_airbnb'\n\nupdate_headers = {}\nupdate_headers['Content-type'] = 'application/xml'\n\nreq = urllib.request.Request(update_url, \\\n headers=update_headers, \\\n data=pmmlBytes)\n\nresp = urllib.request.urlopen(req)\n\nprint(resp.status) # Should return Http Status 200 \n\nimport urllib.parse\nimport json\n\nnamespace = 'default'\nmodel_name = 'airbnb'\nversion = '1'\nevaluate_url = 'http://prediction-pmml-aws.demo.pipeline.io/evaluate-pmml-model/%s/%s/%s' % (namespace, model_name, version)\n\nevaluate_headers = {}\nevaluate_headers['Content-type'] = 'application/json'\n\ninput_params = '{\"bathrooms\":5.0, \\\n \"bedrooms\":4.0, \\\n \"security_deposit\":175.00, \\\n \"cleaning_fee\":25.0, \\\n \"extra_people\":1.0, \\\n \"number_of_reviews\": 2.0, \\\n \"square_feet\": 250.0, \\\n \"review_scores_rating\": 2.0, \\\n \"room_type\": \"Entire home/apt\", 
\\\n \"host_is_super_host\": \"0.0\", \\\n \"cancellation_policy\": \"flexible\", \\\n \"instant_bookable\": \"1.0\", \\\n \"state\": \"CA\"}' \nencoded_input_params = input_params.encode('utf-8')\n\nreq = urllib.request.Request(evaluate_url, \\\n headers=evaluate_headers, \\\n data=encoded_input_params)\n\nresp = urllib.request.urlopen(req)\n\nprint(resp.read())", "Deploy Java-based Model (Simple Model, Mutable Deploy)", "from urllib import request\n\nsourceBytes = ' \\n\\\n private String str; \\n\\\n \\n\\\n public void initialize(Map<String, Object> args) { \\n\\\n } \\n\\\n \\n\\\n public Object predict(Map<String, Object> inputs) { \\n\\\n String id = (String)inputs.get(\"id\"); \\n\\\n \\n\\\n return id.equals(\"21619\"); \\n\\\n } \\n\\\n'.encode('utf-8')\n\nfrom urllib import request\n\nnamespace = 'default'\nmodel_name = 'java_equals'\nversion = '1'\n\nupdate_url = 'http://prediction-java-aws.demo.pipeline.io/update-java/%s/%s/%s' % (namespace, model_name, version)\n\nupdate_headers = {}\nupdate_headers['Content-type'] = 'text/plain'\n\nreq = request.Request(\"%s\" % update_url, headers=update_headers, data=sourceBytes)\nresp = request.urlopen(req)\n\ngenerated_code = resp.read()\nprint(generated_code.decode('utf-8'))\n\nfrom urllib import request\n\nnamespace = 'default'\nmodel_name = 'java_equals'\nversion = '1'\n\nevaluate_url = 'http://prediction-java-aws.demo.pipeline.io/evaluate-java/%s/%s/%s' % (namespace, model_name, version)\n\nevaluate_headers = {}\nevaluate_headers['Content-type'] = 'application/json'\ninput_params = '{\"id\":\"21618\"}' \nencoded_input_params = input_params.encode('utf-8')\n\nreq = request.Request(evaluate_url, headers=evaluate_headers, data=encoded_input_params)\nresp = request.urlopen(req)\n\nprint(resp.read()) # Should return false \n\nfrom urllib import request\n\nnamespace = 'default'\nmodel_name = 'java_equals'\nversion = '1'\nevaluate_url = 'http://prediction-java-aws.demo.pipeline.io/evaluate-java/%s/%s/%s' % (namespace, model_name, version)\n\nevaluate_headers = {}\nevaluate_headers['Content-type'] = 'application/json'\ninput_params = '{\"id\":\"21619\"}' \nencoded_input_params = input_params.encode('utf-8')\n\nreq = request.Request(evaluate_url, headers=evaluate_headers, data=encoded_input_params)\nresp = request.urlopen(req)\n\nprint(resp.read()) # Should return true", "Deploy Java Model (HttpClient Model, Mutable Deploy)", "from urllib import request\n\nsourceBytes = ' \\n\\\n public Map<String, Object> data = new HashMap<String, Object>(); \\n\\\n \\n\\\n public void initialize(Map<String, Object> args) { \\n\\\n data.put(\"url\", \"http://demo.pipeline.io:9040/prediction/\"); \\n\\\n } \\n\\\n \\n\\\n public Object predict(Map<String, Object> inputs) { \\n\\\n try { \\n\\\n String userId = (String)inputs.get(\"userId\"); \\n\\\n String itemId = (String)inputs.get(\"itemId\"); \\n\\\n String url = data.get(\"url\") + \"/\" + userId + \"/\" + itemId; \\n\\\n \\n\\\n return org.apache.http.client.fluent.Request \\n\\\n .Get(url) \\n\\\n .execute() \\n\\\n .returnContent(); \\n\\\n \\n\\\n } catch(Exception exc) { \\n\\\n System.out.println(exc); \\n\\\n throw exc; \\n\\\n } \\n\\\n } \\n\\\n'.encode('utf-8')\n\nfrom urllib import request\n\nname = 'codegen_httpclient'\n# Note: Must have trailing '/'\nupdate_url = 'http://prediction-codegen-aws.demo.pipeline.io/update-codegen/%s/' % name\n\nupdate_headers = {}\nupdate_headers['Content-type'] = 'text/plain'\n\nreq = request.Request(\"%s\" % update_url, headers=update_headers, 
data=sourceBytes)\nresp = request.urlopen(req)\n\ngenerated_code = resp.read()\nprint(generated_code.decode('utf-8'))\n\nfrom urllib import request\n\nname = 'codegen_httpclient'\n# Note: Must have trailing '/'\nupdate_url = 'http://prediction-codegen-gcp.demo.pipeline.io/update-codegen/%s/' % name\n\nupdate_headers = {}\nupdate_headers['Content-type'] = 'text/plain'\n\nreq = request.Request(\"%s\" % update_url, headers=update_headers, data=sourceBytes)\nresp = request.urlopen(req)\n\ngenerated_code = resp.read()\nprint(generated_code.decode('utf-8'))\n\nfrom urllib import request\n\nname = 'codegen_httpclient'\nevaluate_url = 'http://prediction-codegen-aws.demo.pipeline.io/evaluate-codegen/%s' % name\n\nevaluate_headers = {}\nevaluate_headers['Content-type'] = 'application/json'\ninput_params = '{\"userId\":\"21619\", \"itemId\":\"10006\"}' \nencoded_input_params = input_params.encode('utf-8')\n\nreq = request.Request(evaluate_url, headers=evaluate_headers, data=encoded_input_params)\nresp = request.urlopen(req)\n\nprint(resp.read()) # Should return float\n\nfrom urllib import request\n\nname = 'codegen_httpclient'\nevaluate_url = 'http://prediction-codegen-gcp.demo.pipeline.io/evaluate-codegen/%s' % name\n\nevaluate_headers = {}\nevaluate_headers['Content-type'] = 'application/json'\ninput_params = '{\"userId\":\"21619\", \"itemId\":\"10006\"}' \nencoded_input_params = input_params.encode('utf-8')\n\nreq = request.Request(evaluate_url, headers=evaluate_headers, data=encoded_input_params)\nresp = request.urlopen(req)\n\nprint(resp.read()) # Should return float", "Load Test and Compare Cloud Providers (AWS and Google)\nMonitor Performance Across Cloud Providers\nNetflixOSS Services Dashboard (Hystrix)", "from IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:100% !important; }</style>\"))\n\nfrom IPython.display import clear_output, Image, display, HTML\n\nhtml = '<iframe width=100% height=500px src=\"http://hystrix.demo.pipeline.io/hystrix-dashboard/monitor/monitor.html?streams=%5B%7B%22name%22%3A%22Predictions%20-%20AWS%22%2C%22stream%22%3A%22http%3A%2F%2Fturbine-aws.demo.pipeline.io%2Fturbine.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%2C%7B%22name%22%3A%22Predictions%20-%20GCP%22%2C%22stream%22%3A%22http%3A%2F%2Fturbine-gcp.demo.pipeline.io%2Fturbine.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%5D\">'\ndisplay(HTML(html))", "Start Load Tests\nRun JMeter Tests from Local Laptop (Limited by Laptop)\nRun Headless JMeter Tests from Training Clusters in Cloud", "# Spark ML - PMML - Airbnb\n!kubectl create --context=awsdemo -f /root/pipeline/loadtest.ml/loadtest-aws-airbnb-rc.yaml\n!kubectl create --context=gcpdemo -f /root/pipeline/loadtest.ml/loadtest-aws-airbnb-rc.yaml\n\n# Codegen - Java - Simple\n!kubectl create --context=awsdemo -f /root/pipeline/loadtest.ml/loadtest-aws-equals-rc.yaml\n!kubectl create --context=gcpdemo -f /root/pipeline/loadtest.ml/loadtest-aws-equals-rc.yaml\n\n# Tensorflow AI - Tensorflow Serving - Simple \n!kubectl create --context=awsdemo -f /root/pipeline/loadtest.ml/loadtest-aws-minimal-rc.yaml\n!kubectl create --context=gcpdemo -f /root/pipeline/loadtest.ml/loadtest-aws-minimal-rc.yaml", "End Load Tests", "!kubectl delete --context=awsdemo rc loadtest-aws-airbnb\n!kubectl delete --context=gcpdemo rc loadtest-aws-airbnb\n!kubectl delete --context=awsdemo rc loadtest-aws-equals\n!kubectl delete --context=gcpdemo rc loadtest-aws-equals\n!kubectl delete --context=awsdemo rc loadtest-aws-minimal\n!kubectl 
delete --context=gcpdemo rc loadtest-aws-minimal", "Rolling Deploy Tensorflow AI (Simple Model, Immutable Deploy)\nKubernetes CLI", "!kubectl rolling-update prediction-tensorflow --context=awsdemo --image-pull-policy=Always --image=fluxcapacitor/prediction-tensorflow\n\n!kubectl get pod --context=awsdemo \n\n!kubectl rolling-update prediction-tensorflow --context=gcpdemo --image-pull-policy=Always --image=fluxcapacitor/prediction-tensorflow\n\n!kubectl get pod --context=gcpdemo ", "7. Q&A" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
avallarino-ar/MCDatos
Notas/Notas-Python/01_Estructuras.ipynb
mit
[ "Resumen de estructuras de datos en Python\nTuplas, Listas, Sets, Diccionarios, Listas de comprehension, Funciones, Clases\nDefinición y ejemplos de algunas estructuras de datos en Python\nTuplas\nSon el tipo mas simple de estr. que puede almacenar en una misma variable más de un tipo de dato.", "x = (1,2,3,0,2,1) # Declaración de una tupla con valores numéricos\nx # Imprimo tupla\n\nx = (0, 'Hola', (1,2)) # Declaración de una tupla con diferentes tipos de datos\nx[1] # Imprimo contenido de la posición 1", "Las tuplas son inmutables. \n\nSi intento cambiar el valor de una posición, genera error.\nSi asingo otra tupla a la misma variable, genera otro ID.", "id(x)\n\nx = (0, 'Cambio', (1,2))\nid(x)\n\nx", "Listas\nSon elementos .....", "x = [1,2,3] # Declaración de una Lista\nx.append('Nuevo valor') # Agrego nuevo contenido\nx # Imprimo Lista completa\n\nx.insert(2, 'Valor Intermedio') # Inserto otro valor\nx", "¿Qué es más rapido: Tulpas o Listas?", "import timeit\ntimeit.timeit('x = (1,2,3,4,5,6)') # Mido tiempo de ejecución de una Tupla\n\ntimeit.timeit('x = [1,2,3,4,5,6]') # Mido tiempo de ejecución de una", "Referencia / asignacion:", "x = [1,2,3] # Asignación\ny = [0, x] # Referencia\ny\n\nx[0] = -1 # Asigno otra lista a x\ny # al cambiar el valor en x se cambio en y debido a que y apunta a x", "Diccionarios\nEn una gran cantidad de problemas se requieren almacenar claves y asignarle a cada clave un valor.\nUn ejemplo paar un dicc. podría ser un \"directorio telefonico\": (nombre : nro_tel)", "dir_tel = {'juan':5512345, 'pedro':5554321, 'itam':'is fun'} # Defino un diccionario\ndir_tel['juan'] # Obtengo el valor de la clave 'juan'\n\ndir_tel.keys() # Obtengo el listado de las claves del dicc.\n\ndir_tel.values() # Obtengo el listado de los valores del dicc.", "Sets\nConjuntos matemáticos", "A = set([1,2,3]) # Defino 2 sets\nB = set([2,3,4])\n\nA | B # Union\n\nA & B # Intersección\n\nA - B # Diferencia de conj.\n\nA ^ B # Diferencia simetrica", "Condicionales y Loops, For, While, If, Elif\nUna opción para hacer loops en python es la func. range", "range(1000)\n\nfor i in range(5):\n print(i)\n\nfor i in range(10):\n if i % 2 == 0:\n print(str(i) + ' Par')\n else:\n print(str(i) + ' Impar')\n\ni = 0\nwhile i < 10:\n print(i)\n i = i + 1", "Clases", "# Definición de clase:\nclass Person: \n def __init__(self, first, last): # Constructor\n self.first = first\n self.last = last\n \n def greet(self, add_msg = ''): # Método\n print('Hello ' + self.first + ' ' + add_msg)\n\njuan = Person('juan', 'dominguez') # Creo un objeto del tipo Person\njuan.first # Obtengo el valor del atributo first\n\njuan.greet() # Ejecuto método " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
navoj/ecell4
ipynb/Tutorials/Simulator.ipynb
gpl-2.0
[ "Tutorial 4 (Simulator)\nThis is a tutorial for E-Cell4. Here, we explain how to handle Simulators.\nEach World has its corresponding Simulator.", "from ecell4.core import *\nfrom ecell4.gillespie import GillespieWorld as world_type, GillespieSimulator as simulator_type\n# from ecell4.ode import ODEWorld as world_type, ODESimulator as simulator_type\n# from ecell4.lattice import LatticeWorld as world_type, LatticeSimulator as simulator_type\n# from ecell4.meso import MesoscopicWorld as world_type, MesoscopicSimulator as simulator_type\n# from ecell4.bd import BDWorld as world_type, BDSimulator as simulator_type\n# from ecell4.egfrd import EGFRDWorld as world_type, EGFRDSimulator as simulator_type", "Simulator needs a Model and World at the instantiation.", "m = NetworkModel()\nm.add_species_attribute(Species(\"A\", \"0.0025\", \"1\"))\nm.add_reaction_rule(create_degradation_reaction_rule(Species(\"A\"), 0.693 / 1))\n\nw = world_type(Real3(1, 1, 1))\nw.bind_to(m)\nw.add_molecules(Species(\"A\"), 60)\n\nsim = simulator_type(m, w)\nsim.set_dt(0.01) #XXX: Optional", "A Simulator has getters for a simulation time, a step interval, and the next-event time. In principle, a Simulator returns the World's time as its simulation time, and does a sum of the current time and a step interval as the next-event time.", "print(sim.num_steps())\nprint(sim.t(), w.t())\nprint(sim.next_time(), sim.t() + sim.dt())", "A Simulator can return the connected model and world. They are not copies, but the shared objects.", "print(sim.model(), sim.world())", "If you change a World after connecting it to a Simulator, you have to call initialize() manually before step(). The call will update the internal state of the Simulator.", "sim.world().add_molecules(Species(\"A\"), 60) # w.add_molecules(Species(\"A\"), 60)\nsim.initialize()\n\n# w.save('test.h5')", "Simulator has two types of step functions. First, with no argument, step() increments the time until next_time().", "print(\"%.3e %.3e\" % (sim.t(), sim.next_time()))\nsim.step()\nprint(\"%.3e %.3e\" % (sim.t(), sim.next_time()))", "With an argument upto, if upto is later than next_time(), step(upto) increments the time upto the next_time() and returns True. Otherwise, it increments the time for upto and returns False. (If the current time t() is less than upto, it does nothing and returns False.)", "print(\"%.3e %.3e\" % (sim.t(), sim.next_time()))\nprint(sim.step(0.1))\nprint(\"%.3e %.3e\" % (sim.t(), sim.next_time()))", "For a discrete-step simulation, the main loop should be written like:", "# w.load('test.h5')\nsim.initialize()\n\nnext_time, dt = 0.0, 1e-2\nfor _ in range(5):\n while sim.step(next_time): pass\n next_time += dt\n print(\"%.3e %.3e %d %g\" % (sim.t(), sim.dt(), sim.num_steps(), w.num_molecules(Species(\"A\"))))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.16/_downloads/plot_read_epochs.ipynb
bsd-3-clause
[ "%matplotlib inline", "Reading epochs from a raw FIF file\nThis script shows how to read the epochs from a raw file given\na list of events. For illustration, we compute the evoked responses\nfor both MEG and EEG data by averaging all the epochs.", "# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n# Matti Hamalainen <msh@nmr.mgh.harvard.edu>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne import io\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()", "Set parameters", "raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nevent_id, tmin, tmax = 1, -0.2, 0.5\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname)\nevents = mne.read_events(event_fname)\n\n# Set up pick list: EEG + MEG - bad channels (modify to your needs)\nraw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more\npicks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True,\n exclude='bads')\n\n# Read epochs\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n picks=picks, baseline=(None, 0), preload=True,\n reject=dict(grad=4000e-13, mag=4e-12, eog=150e-6))\n\nevoked = epochs.average() # average epochs to get the evoked response", "Show result", "evoked.plot(time_unit='s')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
crackhopper/TFS-toolbox
notebook/0.Train-LeNet.ipynb
mit
[ "Load LeNet Model\n\nsome famous models have already been distributed together with the package\nwe provide some tools to look into the models", "%pylab inline\nfrom tfs.models import LeNet\nnet = LeNet()", "Build the network\n\nbefore we use the network object, we need to build() it first. \nby default, any network object only contains definitions about the network. we need to call build() function to construct the computational graph.", "netout = net.build()\nprint netout", "Explore the network object\n\nwe can see the network structure easily", "print net\n\nprint net.print_shape()", "each network object also has the following components binding with it:\ninitializer\nloss\noptimizer", "print net.initializer\n\nprint net.losser\n\nprint net.optimizer", "Load and explore the data\n\nafter we have construct the model, what we need to do next is to load data.\nour package has provided some frequently used dataset, such as Mnist, and Cifar10", "from tfs.dataset import Mnist\ndataset = Mnist()", "we can explore the image inside the mnist dataset", "import numpy as np\nidx = np.random.randint(0,60000) # we have 60000 images in the training dataset\nimg = dataset.train.data[idx,:,:,0]\nlbl = dataset.train.labels[idx]\nimshow(img,cmap='gray')\nprint 'index:',idx,'\\t','label:',lbl", "Train the network\n\nIt's very easy to train a network, just use fit function, which is a bit like sklearn\nIf you want to record some information during training, you can define a monitor, and plug it onto the network\nThe default monitor would only print some information every 10 steps.", "net.monitor", "now we change the print step to 20, and add a monitor that record the variance of each layer's input and output.", "from tfs.core.monitor import *\nnet.monitor['default'].interval=20\nnet.monitor['var'] = LayerInputVarMonitor(net,interval=10)\n\nnet.fit(dataset,batch_size=200,n_epoch=1)\n\nvar_result = net.monitor['var'].results\n\nimport pandas as pd\nvar = pd.DataFrame(var_result,columns=[n.name for n in net.nodes])\n\nvar" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/ml-design-patterns
05_resilience/continuous_eval.ipynb
apache-2.0
[ "Continuous Evaluation\nThis notebook demonstrates how to use Cloud AI Platform to execute continuous evaluation of a deployed machine learning model. You'll need to have a project set up with Google Cloud Platform. \nSet up\nStart by creating environment variables for your Google Cloud project and bucket. Also, import the libraries we'll need for this notebook.", "# change these to try this notebook out\nPROJECT = 'munn-sandbox'\nBUCKET = 'munn-sandbox'\n\nimport os\nos.environ['BUCKET'] = BUCKET\nos.environ['PROJECT'] = PROJECT\nos.environ['TFVERSION'] = '2.1'\n\nimport shutil\n\nimport pandas as pd\nimport tensorflow as tf\n\nfrom google.cloud import bigquery\nfrom tensorflow.keras.utils import to_categorical\nfrom tensorflow.keras.callbacks import EarlyStopping\nfrom tensorflow_hub import KerasLayer\nfrom tensorflow.keras.layers import Dense, Input, Lambda\nfrom tensorflow.keras.models import Model\nprint(tf.__version__)\n\n%matplotlib inline", "Train and deploy the model\nFor this notebook, we'll build a text classification model using the Hacker News dataset. Each training example consists of an article title and the article source. The model will be trained to classify a given article title as belonging to either nytimes, github or techcrunch.\nLoad the data", "DATASET_NAME = \"titles_full.csv\"\nCOLUMNS = ['title', 'source']\n\ntitles_df = pd.read_csv(DATASET_NAME, header=None, names=COLUMNS)\ntitles_df.head()", "We one-hot encode the label...", "CLASSES = {\n 'github': 0,\n 'nytimes': 1,\n 'techcrunch': 2\n}\nN_CLASSES = len(CLASSES)\n\ndef encode_labels(sources):\n classes = [CLASSES[source] for source in sources]\n one_hots = to_categorical(classes, num_classes=N_CLASSES)\n return one_hots\n\nencode_labels(titles_df.source[:4])", "...and create a train/test split.", "N_TRAIN = int(len(titles_df) * 0.80)\n\ntitles_train, sources_train = (\n titles_df.title[:N_TRAIN], titles_df.source[:N_TRAIN])\n\ntitles_valid, sources_valid = (\n titles_df.title[N_TRAIN:], titles_df.source[N_TRAIN:])\n\nX_train, Y_train = titles_train.values, encode_labels(sources_train)\nX_valid, Y_valid = titles_valid.values, encode_labels(sources_valid)\n\nX_train[:3]", "Swivel Model\nWe'll build a simple text classification model using a Tensorflow Hub embedding module derived from Swivel. Swivel is an algorithm that essentially factorizes word co-occurrence matrices to create the words embeddings. 
\nTF-Hub hosts the pretrained gnews-swivel-20dim-with-oov 20-dimensional Swivel module.", "SWIVEL = \"https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim-with-oov/1\"\nswivel_module = KerasLayer(SWIVEL, output_shape=[20], input_shape=[], dtype=tf.string, trainable=True)", "The build_model function is written so that the TF Hub module can easily be exchanged with another module.", "def build_model(hub_module, model_name):\n inputs = Input(shape=[], dtype=tf.string, name=\"text\")\n module = hub_module(inputs)\n h1 = Dense(16, activation='relu', name=\"h1\")(module)\n outputs = Dense(N_CLASSES, activation='softmax', name='outputs')(h1)\n model = Model(inputs=inputs, outputs=[outputs], name=model_name)\n \n model.compile(\n optimizer='adam',\n loss='categorical_crossentropy',\n metrics=['accuracy']\n )\n return model\n\ndef train_and_evaluate(train_data, val_data, model, batch_size=5000):\n tf.random.set_seed(33) \n X_train, Y_train = train_data\n\n history = model.fit(\n X_train, Y_train,\n epochs=100,\n batch_size=batch_size,\n validation_data=val_data,\n callbacks=[EarlyStopping()],\n )\n return history\n\ntxtcls_model = build_model(swivel_module, model_name='txtcls_swivel')\n\ntxtcls_model.summary()", "Train and evaluation the model\nWith the model defined and data set up, next we'll train and evaluate the model.", "# set up train and validation data\ntrain_data = (X_train, Y_train)\nval_data = (X_valid, Y_valid)", "For training we'll call train_and_evaluate on txtcls_model.", "txtcls_history = train_and_evaluate(train_data, val_data, txtcls_model)\n\nhistory = txtcls_history\npd.DataFrame(history.history)[['loss', 'val_loss']].plot()\npd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()", "Calling predicition from model head produces output from final dense layer. This final layer is used to compute categorical cross-entropy when training.", "txtcls_model.predict(x=[\"YouTube introduces Video Chapters to make it easier to navigate longer videos\"])", "We can save the model artifacts in the local directory called ./txtcls_swivel.", "tf.saved_model.save(txtcls_model, './txtcls_swivel/')", "....and examine the model's serving default signature. As expected the model takes as input a text string (e.g. an article title) and retrns a 3-dimensional vector of floats (i.e. the softmax output layer).", "!saved_model_cli show \\\n --tag_set serve \\\n --signature_def serving_default \\\n --dir ./txtcls_swivel/", "To simplify the returned predictions, we'll modify the model signature so that the model outputs the predicted article source (either nytimes, techcrunch, or github) rather than the final softmax layer. We'll also return the 'confidence' of the model's prediction. 
This will be the softmax value corresonding to the predicted article source.", "@tf.function(input_signature=[tf.TensorSpec([None], dtype=tf.string)])\ndef source_name(text):\n labels = tf.constant(['github', 'nytimes', 'techcrunch'], dtype=tf.string)\n probs = txtcls_model(text, training=False)\n indices = tf.argmax(probs, axis=1)\n pred_source = tf.gather(params=labels, indices=indices)\n pred_confidence = tf.reduce_max(probs, axis=1)\n \n return {'source': pred_source,\n 'confidence': pred_confidence}", "Now, we'll re-save the new Swivel model that has this updated model signature by referencing the source_name function for the model's serving_default.", "shutil.rmtree('./txtcls_swivel', ignore_errors=True)\ntxtcls_model.save('./txtcls_swivel', signatures={'serving_default': source_name})", "Examine the model signature to confirm the changes:", "!saved_model_cli show \\\n --tag_set serve \\\n --signature_def serving_default \\\n --dir ./txtcls_swivel/", "Now when we call predictions using the updated serving input function, the model will return the predicted article source as a readable string, and the model's confidence for that prediction.", "title1 = \"House Passes Sweeping Policing Bill Targeting Racial Bias and Use of Force\"\ntitle2 = \"YouTube introduces Video Chapters to make it easier to navigate longer videos\"\ntitle3 = \"As facebook turns 10 zuckerberg wants to change how tech industry works\"\n\nrestored = tf.keras.models.load_model('./txtcls_swivel')\ninfer = restored.signatures['serving_default']\noutputs = infer(text=tf.constant([title1, title2, title3]))\n\nprint(outputs['source'].numpy())\nprint(outputs['confidence'].numpy())", "Deploy the model for online serving\nOnce the model is trained and the assets saved, deploying the model to GCP is straightforward. After some time you should be able to see your deployed model and its version on the model page of GCP console.", "%%bash\nMODEL_NAME=\"txtcls\"\nMODEL_VERSION=\"swivel\"\nMODEL_LOCATION=\"./txtcls_swivel/\"\n\ngcloud ai-platform models create ${MODEL_NAME}\n\ngcloud ai-platform versions create ${MODEL_VERSION} \\\n--model ${MODEL_NAME} \\\n--origin ${MODEL_LOCATION} \\\n--staging-bucket gs://${BUCKET} \\\n--runtime-version=2.1", "Set up the Evaluation job on CAIP\nNow that the model is deployed, go to Cloud AI Platform to see the model version you've deployed and set up an evaluation job by clicking on the button called \"Create Evaluation Job\". You will be asked to provide some relevant information:\n - Job description: txtcls_swivel_eval\n - Model objective: text classification\n - Classification type: single-label classification\n - Prediction label file path for the annotation specification set: When you create an evaluation job on CAIP, you must specify a CSV file that defines your annotation specification set. This file must have one row for every possible label your model outputs during prediction. Each row should be a comma-separated pair containing the label and a description of the label: label-name,description\n - Daily sample percentage: We'll set this to 100% so that all online predicitons are captured for evaluation.\n - BigQuery table to house online prediction requests: We'll use the BQ dataset and table txtcls_eval.swivel. If you enter a BigQuery table that doesn’t exist, one with that name will be created with the correct schema. \n - Prediction input\n - Data key: this is The key for the raw prediction data. 
From examining our deployed model signature, the input data key is text.\n - Data reference key: this is for image models, so we can ignore\n - Prediction output\n - Prediction labels key: This is the prediction key which contains the predicted label (i.e. the article source). For our model, the label key is source.\n - Prediction score key: This is the prediction key which contains the predicted scores (i.e. the model confidence). For our model, the score key is confidence.\n - Ground-truth method: Check the box that indicates we will provide our own labels, and not use a Human data labeling service.\nOnce the evaluation job is set up, the table will be made in BigQuery to capture the online prediction requests.", "%load_ext google.cloud.bigquery\n\n%%bigquery --project $PROJECT\nSELECT * FROM `txtcls_eval.swivel`", "Now, every time this model version receives an online prediction request, this information will be captured and stored in the BQ table. Note, this happens everytime because we set the sampling proportion to 100%. \nSend prediction requests to your model\nHere are some article titles and their groundtruth sources that we can test with prediciton.\n| title | groundtruth |\n|---|---|\n| YouTube introduces Video Chapters to make it easier to navigate longer videos | techcrunch |\n| A Filmmaker Put Away for Tax Fraud Takes Us Inside a British Prison | nytimes |\n| A native Mac app wrapper for WhatsApp Web | github |\n| Astronauts Dock With Space Station After Historic SpaceX Launch | nytimes |\n| House Passes Sweeping Policing Bill Targeting Racial Bias and Use of Force | nytimes |\n| Scrollability | github |\n| iOS 14 lets deaf users set alerts for important sounds, among other clever accessibility perks | techcrunch |", "%%writefile input.json\n{\"text\": \"YouTube introduces Video Chapters to make it easier to navigate longer videos\"}\n\n!gcloud ai-platform predict \\\n --model txtcls \\\n --json-instances input.json \\\n --version swivel\n\n%%writefile input.json\n{\"text\": \"A Filmmaker Put Away for Tax Fraud Takes Us Inside a British Prison\"}\n\n!gcloud ai-platform predict \\\n --model txtcls \\\n --json-instances input.json \\\n --version swivel\n\n%%writefile input.json\n{\"text\": \"A native Mac app wrapper for WhatsApp Web\"}\n\n!gcloud ai-platform predict \\\n --model txtcls \\\n --json-instances input.json \\\n --version swivel\n\n%%writefile input.json\n{\"text\": \"Astronauts Dock With Space Station After Historic SpaceX Launch\"}\n\n!gcloud ai-platform predict \\\n --model txtcls \\\n --json-instances input.json \\\n --version swivel\n\n%%writefile input.json\n{\"text\": \"House Passes Sweeping Policing Bill Targeting Racial Bias and Use of Force\"}\n\n!gcloud ai-platform predict \\\n --model txtcls \\\n --json-instances input.json \\\n --version swivel\n\n%%writefile input.json\n{\"text\": \"Scrollability\"}\n\n!gcloud ai-platform predict \\\n --model txtcls \\\n --json-instances input.json \\\n --version swivel\n\n%%writefile input.json\n{\"text\": \"iOS 14 lets deaf users set alerts for important sounds, among other clever accessibility perks\"}\n\n!gcloud ai-platform predict \\\n --model txtcls \\\n --json-instances input.json \\\n --version swivel", "Summarizing the results from our model:\n| title | groundtruth | predicted\n|---|---|---|\n| YouTube introduces Video Chapters to make it easier to navigate longer videos | techcrunch | techcrunch |\n| A Filmmaker Put Away for Tax Fraud Takes Us Inside a British Prison | nytimes | techcrunch |\n| A native Mac 
app wrapper for WhatsApp Web | github | techcrunch |\n| Astronauts Dock With Space Station After Historic SpaceX Launch | nytimes | techcrunch |\n| House Passes Sweeping Policing Bill Targeting Racial Bias and Use of Force | nytimes | nytimes |\n| Scrollability | github | techcrunch |\n| iOS 14 lets deaf users set alerts for important sounds, among other clever accessibility perks | techcrunch | nytimes |", "%%bigquery --project $PROJECT\nSELECT * FROM `txtcls_eval.swivel`", "Provide the ground truth for the raw prediction input\nNotice the groundtruth is missing. We'll update the evaluation table to contain the ground truth.", "%%bigquery --project $PROJECT\nUPDATE `txtcls_eval.swivel`\nSET \n groundtruth = '{\"predictions\": [{\"source\": \"techcrunch\"}]}'\nWHERE\n raw_data = '{\"instances\": [{\"text\": \"YouTube introduces Video Chapters to make it easier to navigate longer videos\"}]}';\n\n%%bigquery --project $PROJECT\nUPDATE `txtcls_eval.swivel`\nSET \n groundtruth = '{\"predictions\": [{\"source\": \"nytimes\"}]}'\nWHERE\n raw_data = '{\"instances\": [{\"text\": \"A Filmmaker Put Away for Tax Fraud Takes Us Inside a British Prison\"}]}';\n\n%%bigquery --project $PROJECT\nUPDATE `txtcls_eval.swivel`\nSET \n groundtruth = '{\"predictions\": [{\"source\": \"github\"}]}'\nWHERE\n raw_data = '{\"instances\": [{\"text\": \"A native Mac app wrapper for WhatsApp Web\"}]}';\n\n%%bigquery --project $PROJECT\nUPDATE `txtcls_eval.swivel`\nSET \n groundtruth = '{\"predictions\": [{\"source\": \"nytimes\"}]}'\nWHERE\n raw_data = '{\"instances\": [{\"text\": \"Astronauts Dock With Space Station After Historic SpaceX Launch\"}]}';\n\n%%bigquery --project $PROJECT\nUPDATE `txtcls_eval.swivel`\nSET \n groundtruth = '{\"predictions\": [{\"source\": \"nytimes\"}]}'\nWHERE\n raw_data = '{\"instances\": [{\"text\": \"House Passes Sweeping Policing Bill Targeting Racial Bias and Use of Force\"}]}';\n\n%%bigquery --project $PROJECT\nUPDATE `txtcls_eval.swivel`\nSET \n groundtruth = '{\"predictions\": [{\"source\": \"github\"}]}'\nWHERE\n raw_data = '{\"instances\": [{\"text\": \"Scrollability\"}]}';\n\n%%bigquery --project $PROJECT\nUPDATE `txtcls_eval.swivel`\nSET \n groundtruth = '{\"predictions\": [{\"source\": \"techcrunch\"}]}'\nWHERE\n raw_data = '{\"instances\": [{\"text\": \"iOS 14 lets deaf users set alerts for important sounds, among other clever accessibility perks\"}]}';", "We can confirm that the ground truch has been properly added to the table.", "%%bigquery --project $PROJECT\nSELECT * FROM `txtcls_eval.swivel`", "Compute evaluation metrics\nWith the raw prediction input, the model output and the groundtruth in one place, we can evaluation how our model performs. And how the model performs across various aspects (e.g. 
over time, different model versions, different labels, etc)", "import seaborn as sns\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.metrics import precision_recall_fscore_support as score\nfrom sklearn.metrics import classification_report", "Using regex we can extract the model predictions, to have an easier to read format:", "%%bigquery --project $PROJECT\nSELECT\n model,\n model_version,\n time,\n REGEXP_EXTRACT(raw_data, r'.*\"text\": \"(.*)\"') AS text,\n REGEXP_EXTRACT(raw_prediction, r'.*\"source\": \"(.*?)\"') AS prediction,\n REGEXP_EXTRACT(raw_prediction, r'.*\"confidence\": (0.\\d{2}).*') AS confidence,\n REGEXP_EXTRACT(groundtruth, r'.*\"source\": \"(.*?)\"') AS groundtruth,\nFROM\n `txtcls_eval.swivel`\n\nquery = '''\nSELECT\n model,\n model_version,\n time,\n REGEXP_EXTRACT(raw_data, r'.*\"text\": \"(.*)\"') AS text,\n REGEXP_EXTRACT(raw_prediction, r'.*\"source\": \"(.*?)\"') AS prediction,\n REGEXP_EXTRACT(raw_prediction, r'.*\"confidence\": (0.\\d{2}).*') AS confidence,\n REGEXP_EXTRACT(groundtruth, r'.*\"source\": \"(.*?)\"') AS groundtruth,\nFROM\n `txtcls_eval.swivel`\n'''\n\nclient = bigquery.Client()\ndf_results = client.query(query).to_dataframe()\n\ndf_results.head(20)\n\nprediction = list(df_results.prediction)\ngroundtruth = list(df_results.groundtruth)\n\nprecision, recall, fscore, support = score(groundtruth, prediction)\n\nfrom tabulate import tabulate\nsources = list(CLASSES.keys())\nresults = list(zip(sources, precision, recall, fscore, support))\nprint(tabulate(results, headers = ['source', 'precision', 'recall', 'fscore', 'support'],\n tablefmt='orgtbl'))", "Or a full classification report from the sklearn library:", "print(classification_report(y_true=groundtruth, y_pred=prediction))", "Can also examine a confusion matrix:", "cm = confusion_matrix(groundtruth, prediction, labels=sources)\n\nax= plt.subplot()\nsns.heatmap(cm, annot=True, ax = ax, cmap=\"Blues\")\n\n# labels, title and ticks\nax.set_xlabel('Predicted labels')\nax.set_ylabel('True labels')\nax.set_title('Confusion Matrix') \nax.xaxis.set_ticklabels(sources)\nax.yaxis.set_ticklabels(sources)\nplt.savefig(\"./txtcls_cm.png\")", "Examine eval metrics by model version or timestamp\nBy specifying the same evaluation table, two different model versions can be evaluated. Also, since the timestamp is captured, it is straightforward to evaluation model performance over time.", "now = pd.Timestamp.now(tz='UTC')\none_week_ago = now - pd.DateOffset(weeks=1)\none_month_ago = now - pd.DateOffset(months=1)\n\ndf_prev_week = df_results[df_results.time > one_week_ago]\ndf_prev_month = df_results[df_results.time > one_month_ago]\n\ndf_prev_month", "Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
0Rick0/Fontys-DS-GCD
Spark.ipynb
mit
[ "Spark and Python\nSpark is another framework ontop of HDFS/Hadoop.\nIt gives an api compatible with many languages, includig Python.\nIn this notebook I will give some examples based on an NGINX like access log.", "from pyspark import SparkContext\n\nsc = SparkContext('local', 'ipynb Example')\n\nimport re\n\nfile = sc.textFile('hdfs://localhost:8020/user/root/GCD-Week-6-access_log.txt')\n\n# A regex for matching the nginx log line.\n# The only problem with this approach is that it does not always match every line\nreg = re.compile('(?P<ipaddress>\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}) - - \\[(?P<dateandtime>\\d{2}\\/[a-zA-Z]{3}\\/\\d{4}:\\d{2}:\\d{2}:\\d{2} (\\+|\\-)\\d{4})\\] ((\\\"(?P<method>[A-Z]+) )(?P<url>.+) (HTTP\\/1\\.1\\\")) (?P<statuscode>\\d{3}) (?P<bytessent>\\d+)')\n\nfile.top(1)", "Count the amount of requests per path or address", "# First parse the line\n# Then get the url component of the line if the line was successfully parsed\n# Then map it to one, e.g. ('/', 1)\n# Then reduce it to count them\ncounts = file.map(lambda line: reg.match(line))\\\n .map(lambda group: group.group('url') if group else None)\\\n .map(lambda url: (url, 1))\\\n .reduceByKey(lambda a, b: a + b)\n\nactual_counts = dict(counts.collect())\n# For example:\n# ‘/assets/js/the-associates.js’\nactual_counts['/assets/js/the-associates.js']\n\n# This is very similar with the source ip:\ncounts_ip = file.map(lambda line: reg.match(line))\\\n .map(lambda group: group.group('ipaddress') if group else None)\\\n .map(lambda ip: (ip, 1))\\\n .reduceByKey(lambda a, b: a + b)\n\nactual_counts_ip = dict(counts_ip.collect())\n\n# for example 10.99.99.186\"\nactual_counts_ip['10.99.99.186']", "Ordering", "# Ordering is also quite easy to do in spark. For instance for the most commonly requested file:\ncounts = file.map(lambda line: reg.match(line))\\\n .map(lambda group: group.group('url') if group else None)\\\n .map(lambda url: (url, 1))\\\n .reduceByKey(lambda a,b:a+b)\\\n .sortBy(lambda pair: -pair[1])\n# this orders by the second pair item, where pair is ('path', count)\n\nactual_counts = counts.collect()\n\n# The first 20 items:\nfor path, count in actual_counts[:20]:\n print(\"%s: %d\" % (path, count))\n \n# Here you can see the None problem. 
To fix it a better regex needs to be created or another parsing method should be used", "Map Reduce wordcount in Spark", "gutenberg_file = sc.textFile('hdfs://localhost:8020/user/root/gutenberg_total.txt')\n\n\nimport string\nimport sys\nsys.path.insert(0, '.')\nsys.path.insert(0, './Portfolio')\nfrom MapReduce_code import allStopWords as stopwords\n\npunc = str.maketrans('', '', string.punctuation)\n\n\ndef rem_punctuation(inp):\n return inp.translate(punc)\n\n\nword_count = gutenberg_file.flatMap(lambda line: \n [word for word in rem_punctuation(line).lower().split(' ')])\\\n .filter(lambda word: len(word) > 1)\\\n .filter(lambda word: word not in stopwords)\\\n .map(lambda word: (word, 1))\\\n .reduceByKey(lambda a, b: a + b)\\\n .sortBy(lambda pair: -pair[1])\n\nactual_word_count = word_count.collect()\n\nfor word, count in actual_word_count[:10]:\n print(\"%s:\\t%d\" % (word, count))\n\n\n# store the result to a file\ndef convert_to_csv(data):\n return ','.join([str(d) for d in data])\n\n\ncsv_lines = word_count.map(convert_to_csv)\ncsv_lines.saveAsTextFile('hdfs://localhost:8020/user/root/wordcount.csv')", "Result:\n[root@quickstart /]# hadoop fs -ls wordcount.csv\nFound 2 items\n-rw-r--r-- 3 rick supergroup 0 2017-10-24 09:12 wordcount.csv/_SUCCESS\n-rw-r--r-- 3 rick supergroup 2544698 2017-10-24 09:12 wordcount.csv/part-00000\n[root@quickstart /]# hadoop fs -ls wordcount.csv/part-00000\n-rw-r--r-- 3 rick supergroup 2544698 2017-10-24 09:12 wordcount.csv/part-00000\n^[[A[root@quickstart /]# hadoop fs -tail wordcount.csv/part-00000\nhlessly,1\nlemonp,1\naptp,1\nbraceletp,1\nidref9chapter,1\nNote to teacher\n\nsc.textFile already returns a RDD object\nmap, flatMap already is an transformation\nfilter, reduceByKey, sortBy already is an action" ]
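As noted in the final comment of the access-log cells, the simplest fix for the None entries is to drop the lines the regex fails to match before counting. A small sketch, assuming the same file RDD and reg pattern as in the notebook (not part of the original code):

# Parse each line, keep only successful matches, then count per URL as before.
matches = file.map(lambda line: reg.match(line)) \
              .filter(lambda m: m is not None)

counts = matches.map(lambda m: (m.group('url'), 1)) \
                .reduceByKey(lambda a, b: a + b)

A more permissive regex would still be needed if the unmatched lines contain data you actually care about.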
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sdpython/pyquickhelper
_unittests/ut_ipythonhelper/data/having_a_form_in_a_notebook.ipynb
mit
[ "Having a form in a notebook\n\nForm\nAnimated output\nA form with IPython 3+\nAutomated menu (this one is fix)\n\nForm\nThis following trick is inspired from IPython Notebook: Javascript/Python Bi-directional Communication. The code is copy pasted below with some modifications.", "from IPython.display import HTML, Javascript, display_html, display_javascript\n\ninput_form = \"\"\"\n<div style=\"background-color:gainsboro; width:500px; padding:10px;\">\n<label>Variable Name: </label> \n<input type=\"text\" id=\"var_name\" value=\"myvar\" size=\"170\" />\n<label>Variable Value: </label>\n<input type=\"text\" id=\"var_value\" value=\"myvalue\" />\n<br />\n<button onclick=\"set_value()\">Set Value</button>\n</div>\n\"\"\"\n\njavascript = \"\"\"\nfunction set_value(){\n var var_name = document.getElementById('var_name').value;\n var var_value = document.getElementById('var_value').value;\n var command = var_name + \" = '\" + var_value + \"'\";\n console.log(\"Executing Command: \" + command);\n\n var kernel = IPython.notebook.kernel;\n kernel.execute(command);\n}\n\"\"\"\n\ndisplay_javascript(Javascript(javascript))\nHTML(input_form)\n\nmyvar", "Now we try to get something like this:", "from pyquickhelper import open_html_form\nparams= {\"module\":\"\", \"version\":\"v...\"}\nopen_html_form(params, \"fill the fields\", \"form1\")\n\nform1", "With a password:", "from pyquickhelper import open_html_form\nparams= {\"login\":\"\", \"password\":\"\"}\nopen_html_form(params, \"credential\", \"credential\")\n\ncredential", "To excecute an instruction when the button Ok is clicked:", "my_address = None\ndef custom_action(x):\n x[\"combined\"] = x[\"first_name\"] + \" \" + x[\"last_name\"]\n return str(x)\nfrom pyquickhelper import open_html_form\nparams = { \"first_name\":\"\", \"last_name\":\"\" }\nopen_html_form (params, title=\"enter your name\", key_save=\"my_address\", hook=\"custom_action(my_address)\")\n\nmy_address", "Animated output", "from pyquickhelper.ipythonhelper import StaticInteract, RangeWidget, RadioWidget\n\ndef show_fib(N):\n sequence = \"\"\n a, b = 0, 1\n for i in range(N):\n sequence += \"{0} \".format(a)\n a, b = b, a + b\n return sequence\n\nStaticInteract(show_fib,\n N=RangeWidget(1, 100, default=10))", "In order to have a fast display, the function show_lib is called for each possible version. If it is a graph, all possible graphs will be generated.", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef plot(amplitude, color):\n fig, ax = plt.subplots(figsize=(4, 3),\n subplot_kw={'axisbg':'#EEEEEE',\n 'axisbelow':True})\n ax.grid(color='w', linewidth=2, linestyle='solid')\n x = np.linspace(0, 10, 1000)\n ax.plot(x, amplitude * np.sin(x), color=color,\n lw=5, alpha=0.4)\n ax.set_xlim(0, 10)\n ax.set_ylim(-1.1, 1.1)\n return fig\n\nStaticInteract(plot,\n amplitude=RangeWidget(0.1, 0.5, 0.1, default=0.4),\n color=RadioWidget(['blue', 'green', 'red'], default='red'))", "A form with IPython 3+\nNot yet ready and the form does not show up in the converted notebook. You need to execute the notebook.", "from IPython.display import display \nfrom IPython.html.widgets import Text\nlast_name = Text(description=\"Last Name\")\nfirst_name = Text(description=\"First Name\")\ndisplay(last_name)\ndisplay(first_name)\n\nfirst_name.value, last_name.value", "Automated menu", "from jyquickhelper import add_notebook_menu\nadd_notebook_menu()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
anthonyng2/FX-Trading-with-Python-and-Oanda
Oanda v20 REST-oandapyV20/03.00 Account Information.ipynb
mit
[ "<!--NAVIGATION-->\n< Rates Information | Contents | Order Management >\nAccount Information\nOANDA REST-V20 API Wrapper Doc on Account\nOANDA API Getting Started\nOANDA API Account\nAccount Details", "import pandas as pd\nimport oandapyV20\nimport oandapyV20.endpoints.accounts as accounts\nimport configparser\n\nconfig = configparser.ConfigParser()\nconfig.read('../config/config_v20.ini')\naccountID = config['oanda']['account_id']\naccess_token = config['oanda']['api_key']\n\nclient = oandapyV20.API(access_token=access_token)\nr = accounts.AccountDetails(accountID)\n\nclient.request(r)\n\nprint(r.response)\n\npd.Series(r.response['account'])", "Account List", "r = accounts.AccountList()\n\nclient.request(r)\n\nprint(r.response)", "Account Summary", "r = accounts.AccountSummary(accountID)\n\nclient.request(r)\n\nprint(r.response)\n\npd.Series(r.response['account'])", "Account Instruments", "r = accounts.AccountInstruments(accountID=accountID, params = \"EUR_USD\")\n\nclient.request(r)\n\npd.DataFrame(r.response['instruments'])", "<!--NAVIGATION-->\n< Rates Information | Contents | Order Management >" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ml4a/ml4a-guides
examples/fundamentals/transfer_learning.ipynb
gpl-2.0
[ "<a href=\"https://colab.research.google.com/github/kylemath/ml4a-guides/blob/master/notebooks/transfer-learning.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nTransfer learning / fine-tuning\nThis tutorial will guide you through the process of using transfer learning to learn an accurate image classifier from a relatively small number of training samples. Generally speaking, transfer learning refers to the process of leveraging the knowledge learned in one model for the training of another model. \nMore specifically, the process involves taking an existing neural network which was previously trained to good performance on a larger dataset, and using it as the basis for a new model which leverages that previous network's accuracy for a new task. This method has become popular in recent years to improve the performance of a neural net trained on a small dataset; the intuition is that the new dataset may be too small to train to good performance by itself, but we know that most neural nets trained to learn image features often learn similar features anyway, especially at early layers where they are more generic (edge detectors, blobs, and so on). \nTransfer learning has been largely enabled by the open-sourcing of state-of-the-art models; for the top performing models in image classification tasks (like from ILSVRC), it is common practice now to not only publish the architecture, but to release the trained weights of the model as well. This lets amateurs use these top image classifiers to boost the performance of their own task-specific models.\nFeature extraction vs. fine-tuning\nAt one extreme, transfer learning can involve taking the pre-trained network and freezing the weights, and using one of its hidden layers (usually the last one) as a feature extractor, using those features as the input to a smaller neural net. \nAt the other extreme, we start with the pre-trained network, but we allow some of the weights (usually the last layer or last few layers) to be modified. Another name for this procedure is called \"fine-tuning\" because we are slightly adjusting the pre-trained net's weights to the new task. We usually train such a network with a lower learning rate, since we expect the features are already relatively good and do not need to be changed too much. \nSometimes, we do something in-between: Freeze just the early/generic layers, but fine-tune the later layers. Which strategy is best depends on the size of your dataset, the number of classes, and how much it resembles the dataset the previous model was trained on (and thus, whether it can benefit from the same learned feature extractors). A more detailed discussion of how to strategize can be found in [1] [2].\nProcedure\nIn this guide will go through the process of loading a state-of-the-art, 1000-class image classifier, VGG16 which won the ImageNet challenge in 2014, and using it as a fixed feature extractor to train a smaller custom classifier on our own images, although with very few code changes, you can try fine-tuning as well.\nWe will first load VGG16 and remove its final layer, the 1000-class softmax classification layer specific to ImageNet, and replace it with a new classification layer for the classes we are training over. We will then freeze all the weights in the network except the new ones connecting to the new classification layer, and then train the new classification layer over our new dataset. 
\nWe will also compare this method to training a small neural network from scratch on the new dataset, and as we shall see, it will dramatically improve our accuracy. We will do that part first.\nAs our test subject, we'll use a dataset consisting of around 6000 images belonging to 97 classes, and train an image classifier with around 80% accuracy on it. It's worth noting that this strategy scales well to image sets where you may have even just a couple hundred or less images. Its performance will be lesser from a small number of samples (depending on classes) as usual, but still impressive considering the usual constraints.", "import numpy as np\n\n%matplotlib inline\n\nimport os\n\n#if using Theano with GPU\n#os.environ[\"KERAS_BACKEND\"] = \"tensorflow\"\n\nimport random\nimport numpy as np\nimport keras\n\nimport matplotlib.pyplot as plt\nfrom matplotlib.pyplot import imshow\n\nfrom keras.preprocessing import image\nfrom keras.applications.imagenet_utils import preprocess_input\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout, Flatten, Activation\nfrom keras.layers import Conv2D, MaxPooling2D\nfrom keras.models import Model", "Getting a dataset\nThe first step is going to be to load our data. As our example, we will be using the dataset CalTech-101, which contains around 9000 labeled images belonging to 101 object categories. However, we will exclude 5 of the categories which have the most images. This is in order to keep the class distribution fairly balanced (around 50-100) and constrained to a smaller number of images, around 6000. \nTo obtain this dataset, you can either run the download script download.sh in the data folder, or the following commands:\nwget http://www.vision.caltech.edu/Image_Datasets/Caltech101/101_ObjectCategories.tar.gz\ntar -xvzf 101_ObjectCategories.tar.gz\n\nIf you wish to use your own dataset, it should be aranged in the same fashion to 101_ObjectCategories with all of the images organized into subfolders, one for each class. In this case, the following cell should load your custom dataset correctly by just replacing root with your folder. If you have an alternate structure, you just need to make sure that you load the list data where every element is a dict where x is the data (a 1-d numpy array) and y is the label (an integer). Use the helper function get_image(path) to load the image correctly into the array, and note also that the images are being resized to 224x224. This is necessary because the input to VGG16 is a 224x224 RGB image. 
You do not need to resize them on your hard drive, as that is being done in the code below.\nIf you have 101_ObjectCategories in your data folder, the following cell should load all the data.", "!echo \"Downloading 101_Object_Categories for image notebooks\"\n!curl -L -o 101_ObjectCategories.tar.gz --progress-bar http://www.vision.caltech.edu/Image_Datasets/Caltech101/101_ObjectCategories.tar.gz\n!tar -xzf 101_ObjectCategories.tar.gz\n!rm 101_ObjectCategories.tar.gz\n!ls\n\n\nroot = '101_ObjectCategories'\nexclude = ['BACKGROUND_Google', 'Motorbikes', 'airplanes', 'Faces_easy', 'Faces']\ntrain_split, val_split = 0.7, 0.15\n\ncategories = [x[0] for x in os.walk(root) if x[0]][1:]\ncategories = [c for c in categories if c not in [os.path.join(root, e) for e in exclude]]\n\nprint(categories)", "This function is useful for pre-processing the data into an image and input vector.", "# helper function to load image and return it and input vector\ndef get_image(path):\n img = image.load_img(path, target_size=(224, 224))\n x = image.img_to_array(img)\n x = np.expand_dims(x, axis=0)\n x = preprocess_input(x)\n return img, x", "Load all the images from root folder", "data = []\nfor c, category in enumerate(categories):\n images = [os.path.join(dp, f) for dp, dn, filenames \n in os.walk(category) for f in filenames \n if os.path.splitext(f)[1].lower() in ['.jpg','.png','.jpeg']]\n for img_path in images:\n img, x = get_image(img_path)\n data.append({'x':np.array(x[0]), 'y':c})\n\n# count the number of classes\nnum_classes = len(categories)", "Randomize the data order.", "random.shuffle(data)", "create training / validation / test split (70%, 15%, 15%)", "idx_val = int(train_split * len(data))\nidx_test = int((train_split + val_split) * len(data))\ntrain = data[:idx_val]\nval = data[idx_val:idx_test]\ntest = data[idx_test:]", "Separate data for labels.", "x_train, y_train = np.array([t[\"x\"] for t in train]), [t[\"y\"] for t in train]\nx_val, y_val = np.array([t[\"x\"] for t in val]), [t[\"y\"] for t in val]\nx_test, y_test = np.array([t[\"x\"] for t in test]), [t[\"y\"] for t in test]\nprint(y_test)", "Pre-process the data as before by making sure it's float32 and normalized between 0 and 1.", "# normalize data\nx_train = x_train.astype('float32') / 255.\nx_val = x_val.astype('float32') / 255.\nx_test = x_test.astype('float32') / 255.\n\n# convert labels to one-hot vectors\ny_train = keras.utils.to_categorical(y_train, num_classes)\ny_val = keras.utils.to_categorical(y_val, num_classes)\ny_test = keras.utils.to_categorical(y_test, num_classes)\nprint(y_test.shape)", "Let's get a summary of what we have.", "# summary\nprint(\"finished loading %d images from %d categories\"%(len(data), num_classes))\nprint(\"train / validation / test split: %d, %d, %d\"%(len(x_train), len(x_val), len(x_test)))\nprint(\"training data shape: \", x_train.shape)\nprint(\"training labels shape: \", y_train.shape)\n", "If everything worked properly, you should have loaded a bunch of images, and split them into three sets: train, val, and test. The shape of the training data should be (n, 224, 224, 3) where n is the size of your training set, and the labels should be (n, c) where c is the number of classes (97 in the case of 101_ObjectCategories. \nNotice that we divided all the data into three subsets -- a training set train, a validation set val, and a test set test. The reason for this is to properly evaluate the accuracy of our classifier. 
During training, the optimizer uses the validation set to evaluate its internal performance, in order to determine the gradient without overfitting to the training set. The test set is always held out from the training algorithm, and is only used at the end to evaluate the final accuracy of our model.\nLet's quickly look at a few sample images from our dataset.", "images = [os.path.join(dp, f) for dp, dn, filenames in os.walk(root) for f in filenames if os.path.splitext(f)[1].lower() in ['.jpg','.png','.jpeg']]\nidx = [int(len(images) * random.random()) for i in range(8)]\nimgs = [image.load_img(images[i], target_size=(224, 224)) for i in idx]\nconcat_image = np.concatenate([np.asarray(img) for img in imgs], axis=1)\nplt.figure(figsize=(16,4))\nplt.imshow(concat_image)", "First training a neural net from scratch\nBefore doing the transfer learning, let's first build a neural network from scratch for doing classification on our dataset. This will give us a baseline to compare to our transfer-learned network later.\nThe network we will construct contains 4 alternating convolutional and max-pooling layers, followed by a dropout after every other conv/pooling pair. After the last pooling layer, we will attach a fully-connected layer with 256 neurons, another dropout layer, then finally a softmax classification layer for our classes.\nOur loss function will be, as usual, categorical cross-entropy loss, and our learning algorithm will be AdaDelta. Various things about this network can be changed to get better performance, perhaps using a larger network or a different optimizer will help, but for the purposes of this notebook, the goal is to just get an understanding of an approximate baseline for comparison's sake, and so it isn't neccessary to spend much time trying to optimize this network.\nUpon compiling the network, let's run model.summary() to get a snapshot of its layers.", "# build the network\nmodel = Sequential()\nprint(\"Input dimensions: \",x_train.shape[1:])\n\nmodel.add(Conv2D(32, (3, 3), input_shape=x_train.shape[1:]))\nmodel.add(Activation('relu'))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\n\nmodel.add(Conv2D(32, (3, 3)))\nmodel.add(Activation('relu'))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\n\nmodel.add(Dropout(0.25))\n\nmodel.add(Conv2D(32, (3, 3)))\nmodel.add(Activation('relu'))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\n\nmodel.add(Conv2D(32, (3, 3)))\nmodel.add(Activation('relu'))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\n\nmodel.add(Dropout(0.25))\n\nmodel.add(Flatten())\nmodel.add(Dense(256))\nmodel.add(Activation('relu'))\n\nmodel.add(Dropout(0.5))\n\nmodel.add(Dense(num_classes))\nmodel.add(Activation('softmax'))\n\nmodel.summary()", "We've created a medium-sized network with ~1.2 million weights and biases (the parameters). Most of them are leading into the one pre-softmax fully-connected layer \"dense_5\".\nWe can now go ahead and train our model for 100 epochs with a batch size of 128. 
We'll also record its history so we can plot the loss over time later.", "# compile the model to use categorical cross-entropy loss function and adadelta optimizer\nmodel.compile(loss='categorical_crossentropy',\n optimizer='adam',\n metrics=['accuracy'])\n\nhistory = model.fit(x_train, y_train,\n batch_size=128,\n epochs=10,\n validation_data=(x_val, y_val))\n", "Let's plot the validation loss and validation accuracy over time.", "fig = plt.figure(figsize=(16,4))\nax = fig.add_subplot(121)\nax.plot(history.history[\"val_loss\"])\nax.set_title(\"validation loss\")\nax.set_xlabel(\"epochs\")\n\nax2 = fig.add_subplot(122)\nax2.plot(history.history[\"val_acc\"])\nax2.set_title(\"validation accuracy\")\nax2.set_xlabel(\"epochs\")\nax2.set_ylim(0, 1)\n\nplt.show()", "Notice that the validation loss begins to actually rise after around 16 epochs, even though validation accuracy remains roughly between 40% and 50%. This suggests our model begins overfitting around then, and best performance would have been achieved if we had stopped early around then. Nevertheless, our accuracy would not have likely been above 50%, and probably lower down.\nWe can also get a final evaluation by running our model on the training set. Doing so, we get the following results:", "loss, accuracy = model.evaluate(x_test, y_test, verbose=0)\nprint('Test loss:', loss)\nprint('Test accuracy:', accuracy)", "Finally, we see that we have achieved a (top-1) accuracy of around 49%. That's not too bad for 6000 images, considering that if we were to use a naive strategy of taking random guesses, we would have only gotten around 1% accuracy. \nTransfer learning by starting with existing network\nNow we can move on to the main strategy for training an image classifier on our small dataset: by starting with a larger and already trained network.\nTo start, we will load the VGG16 from keras, which was trained on ImageNet and the weights saved online. If this is your first time loading VGG16, you'll need to wait a bit for the weights to download from the web. Once the network is loaded, we can again inspect the layers with the summary() method.", "vgg = keras.applications.VGG16(weights='imagenet', include_top=True)\nvgg.summary()", "Notice that VGG16 is much bigger than the network we constructed earlier. It contains 13 convolutional layers and two fully connected layers at the end, and has over 138 million parameters, around 100 times as many parameters than the network we made above. Like our first network, the majority of the parameters are stored in the connections leading into the first fully-connected layer.\nVGG16 was made to solve ImageNet, and achieves a 8.8% top-5 error rate, which means that 91.2% of test samples were classified correctly within the top 5 predictions for each image. It's top-1 accuracy--equivalent to the accuracy metric we've been using (that the top prediction is correct)--is 73%. This is especially impressive since there are not just 97, but 1000 classes, meaning that random guesses would get us only 0.1% accuracy.\nIn order to use this network for our task, we \"remove\" the final classification layer, the 1000-neuron softmax layer at the end, which corresponds to ImageNet, and instead replace it with a new softmax layer for our dataset, which contains 97 neurons in the case of the 101_ObjectCategories dataset. \nIn terms of implementation, it's easier to simply create a copy of VGG from its input layer until the second to last layer, and then work with that, rather than modifying the VGG object directly. 
So technically we never \"remove\" anything, we just circumvent/ignore it. This can be done in the following way, by using the keras Model class to initialize a new model whose input layer is the same as VGG but whose output layer is our new softmax layer, called new_classification_layer. Note: although it appears we are duplicating this large network, internally Keras is actually just copying all the layers by reference, and thus we don't need to worry about overloading the memory.", "# make a reference to VGG's input layer\ninp = vgg.input\n\n# make a new softmax layer with num_classes neurons\nnew_classification_layer = Dense(num_classes, activation='softmax')\n\n# connect our new layer to the second to last layer in VGG, and make a reference to it\nout = new_classification_layer(vgg.layers[-2].output)\n\n# create a new network between inp and out\nmodel_new = Model(inp, out)\n", "We are going to retrain this network, model_new on the new dataset and labels. But first, we need to freeze the weights and biases in all the layers in the network, except our new one at the end, with the expectation that the features that were learned in VGG should still be fairly relevant to the new image classification task. Not optimal, but most likely better than what we can train to in our limited dataset. \nBy setting the trainable flag in each layer false (except our new classification layer), we ensure all the weights and biases in those layers remain fixed, and we simply train the weights in the one layer at the end. In some cases, it is desirable to not freeze all the pre-classification layers. If your dataset has enough samples, and doesn't resemble ImageNet very much, it might be advantageous to fine-tune some of the VGG layers along with the new classifier, or possibly even all of them. To do this, you can change the below code to make more of the layers trainable.\nIn the case of CalTech-101, we will just do feature extraction, fearing that fine-tuning too much with this dataset may overfit. But maybe we are wrong? A good exercise would be to try out both, and compare the results.\nSo we go ahead and freeze the layers, and compile the new model with exactly the same optimizer and loss function as in our first network, for the sake of a fair comparison. We then run summary again to look at the network's architecture.", "# make all layers untrainable by freezing weights (except for last layer)\nfor l, layer in enumerate(model_new.layers[:-1]):\n layer.trainable = False\n\n# ensure the last layer is trainable/not frozen\nfor l, layer in enumerate(model_new.layers[-1:]):\n layer.trainable = True\n\nmodel_new.compile(loss='categorical_crossentropy',\n optimizer='adam',\n metrics=['accuracy'])\n\nmodel_new.summary()", "Looking at the summary, we see the network is identical to the VGG model we instantiated earlier, except the last layer, formerly a 1000-neuron softmax, has been replaced by a new 97-neuron softmax. Additionally, we still have roughly 134 million weights, but now the vast majority of them are \"non-trainable params\" because we froze the layers they are contained in. We now only have 397,000 trainable parameters, which is actually only a quarter of the number of parameters needed to train the first model.\nAs before, we go ahead and train the new model, using the same hyperparameters (batch size and number of epochs) as before, along with the same optimization algorithm. 
We also keep track of its history as we go.", "history2 = model_new.fit(x_train, y_train, \n batch_size=128, \n epochs=10, \n validation_data=(x_val, y_val))\n", "Our validation accuracy hovers close to 80% towards the end, which is more than 30% improvement on the original network trained from scratch (meaning that we make the wrong prediction on 20% of samples, rather than 50%). \nIt's worth noting also that this network actually trains slightly faster than the original network, despite having more than 100 times as many parameters! This is because freezing the weights negates the need to backpropagate through all those layers, saving us on runtime.\nLet's plot the validation loss and accuracy again, this time comparing the original model trained from scratch (in blue) and the new transfer-learned model in green.", "fig = plt.figure(figsize=(16,4))\nax = fig.add_subplot(121)\nax.plot(history.history[\"val_loss\"])\nax.plot(history2.history[\"val_loss\"])\nax.set_title(\"validation loss\")\nax.set_xlabel(\"epochs\")\n\nax2 = fig.add_subplot(122)\nax2.plot(history.history[\"val_acc\"])\nax2.plot(history2.history[\"val_acc\"])\nax2.set_title(\"validation accuracy\")\nax2.set_xlabel(\"epochs\")\nax2.set_ylim(0, 1)\n\nplt.show()", "Notice that whereas the original model began overfitting around epoch 16, the new model continued to slowly decrease its loss over time, and likely would have improved its accuracy slightly with more iterations. The new model made it to roughly 80% top-1 accuracy (in the validation set) and continued to improve slowly through 100 epochs.\nIt's possibly we could have improved the original model with better regularization or more dropout, but we surely would not have made up the >30% improvement in accuracy. \nAgain, we do a final validation on the test set.", "loss, accuracy = model_new.evaluate(x_test, y_test, verbose=0)\n\nprint('Test loss:', loss)\nprint('Test accuracy:', accuracy)", "To predict a new image, simply run the following code to get the probabilities for each class.", "img, x = get_image('101_ObjectCategories/airplanes/image_0003.jpg')\nprobabilities = model_new.predict([x])\n", "Improving the results\n78.2% top-1 accuracy on 97 classes, roughly evenly distributed, is a pretty good achievement. It is not quite as impressive as the original VGG16 which achieved 73% top-1 accuracy on 1000 classes. Nevertheless, it is much better than what we were able to achieve with our original network, and there is room for improvement. Some techniques which possibly could have improved our performance.\n\nUsing data augementation: augmentation refers to using various modifications of the original training data, in the form of distortions, rotations, rescalings, lighting changes, etc to increase the size of the training set and create more tolerance for such distortions.\nUsing a different optimizer, adding more regularization/dropout, and other hyperparameters.\nTraining for longer (of course)\n\nA more advanced example of transfer learning in Keras, involving augmentation for a small 2-class dataset, can be found in the Keras blog." ]
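The "Improving the results" list in the transfer-learning notebook above mentions data augmentation. A minimal sketch of what that could look like with Keras' ImageDataGenerator, assuming the x_train/y_train arrays and the model_new network defined in the notebook; the specific distortion ranges are arbitrary choices, and whether they help here is an empirical question:

from keras.preprocessing.image import ImageDataGenerator

# Randomly distorted copies of the training images are generated on the fly.
datagen = ImageDataGenerator(rotation_range=15,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             horizontal_flip=True)

history_aug = model_new.fit_generator(
    datagen.flow(x_train, y_train, batch_size=128),
    steps_per_epoch=len(x_train) // 128,
    epochs=10,
    validation_data=(x_val, y_val))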
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
xtr33me/deep-learning
language-translation/dlnd_language_translation.ipynb
mit
[ "Language Translation\nIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.\nGet the Data\nSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport problem_unittests as tests\n\nsource_path = 'data/small_vocab_en'\ntarget_path = 'data/small_vocab_fr'\nsource_text = helper.load_data(source_path)\ntarget_text = helper.load_data(target_path)", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))\n\nsentences = source_text.split('\\n')\nword_counts = [len(sentence.split()) for sentence in sentences]\nprint('Number of sentences: {}'.format(len(sentences)))\nprint('Average number of words in a sentence: {}'.format(np.average(word_counts)))\n\nprint()\nprint('English sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(source_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))\nprint()\nprint('French sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(target_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))", "Implement Preprocessing Function\nText to Word Ids\nAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of each sentence from target_text. 
This will help the neural network predict when the sentence should end.\nYou can get the &lt;EOS&gt; word id by doing:\npython\ntarget_vocab_to_int['&lt;EOS&gt;']\nYou can get other word ids using source_vocab_to_int and target_vocab_to_int.", "def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):\n \"\"\"\n Convert source and target text to proper word ids\n :param source_text: String that contains all the source text.\n :param target_text: String that contains all the target text.\n :param source_vocab_to_int: Dictionary to go from the source words to an id\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: A tuple of lists (source_id_text, target_id_text)\n \"\"\"\n def process_sentence(sentence, vocab_to_int, eos=False):\n processed_sent = [vocab_to_int[word] for word in sentence]\n if eos == True:\n processed_sent.append(vocab_to_int['<EOS>'])\n return processed_sent\n \n source_sent = [s_ndx.split() for s_ndx in source_text.split('\\n')]\n source_id_text = [process_sentence(s, source_vocab_to_int) for s in source_sent]\n target_sent = [t_ndx.split() for t_ndx in target_text.split('\\n')]\n target_id_text = [process_sentence(t, target_vocab_to_int, True) for t in target_sent]\n return source_id_text, target_id_text\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_text_to_ids(text_to_ids)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nhelper.preprocess_and_save_data(source_path, target_path, text_to_ids)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\nimport helper\n\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()", "Check the Version of TensorFlow and Access to GPU\nThis will check to make sure you have the correct version of TensorFlow and access to a GPU", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Build the Neural Network\nYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:\n- model_inputs\n- process_decoding_input\n- encoding_layer\n- decoding_layer_train\n- decoding_layer_infer\n- decoding_layer\n- seq2seq_model\nInput\nImplement the model_inputs() function to create TF Placeholders for the Neural Network. 
It should create the following placeholders:\n\nInput text placeholder named \"input\" using the TF Placeholder name parameter with rank 2.\nTargets placeholder with rank 2.\nLearning rate placeholder with rank 0.\nKeep probability placeholder named \"keep_prob\" using the TF Placeholder name parameter with rank 0.\n\nReturn the placeholders in the following the tuple (Input, Targets, Learing Rate, Keep Probability)", "def model_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate, keep probability)\n \"\"\"\n input = tf.placeholder(tf.int32, [None, None],name=\"input\")\n targets = tf.placeholder(tf.int32, [None, None], name=\"targets\")\n learning_rate = tf.placeholder(tf.float32, name=\"learning_rate\")\n keep_probability = tf.placeholder(tf.float32, name=\"keep_prob\")\n return (input, targets, learning_rate, keep_probability)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_model_inputs(model_inputs)", "Process Decoding Input\nImplement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.", "def process_decoding_input(target_data, target_vocab_to_int, batch_size):\n \"\"\"\n Preprocess target data for decoding\n :param target_data: Target Placeholder\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param batch_size: Batch Size\n :return: Preprocessed target data\n \"\"\"\n ending = tf.strided_slice(target_data, [0,0], [batch_size, -1], [1,1])\n dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)\n return dec_input\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_process_decoding_input(process_decoding_input)", "Encoding\nImplement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().", "def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):\n \"\"\"\n Create encoding layer\n :param rnn_inputs: Inputs for the RNN\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param keep_prob: Dropout keep probability\n :return: RNN state\n \"\"\"\n #Encoder\n enc_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)\n enc_cell = tf.contrib.rnn.core_rnn_cell.DropoutWrapper(enc_cell, output_keep_prob=keep_prob)\n _, enc_state = tf.nn.dynamic_rnn(enc_cell, rnn_inputs, dtype=tf.float32)\n return enc_state\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_encoding_layer(encoding_layer)", "Decoding - Training\nCreate training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). 
Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.", "def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,\n output_fn, keep_prob):\n \"\"\"\n Create a decoding layer for training\n :param encoder_state: Encoder State\n :param dec_cell: Decoder RNN Cell\n :param dec_embed_input: Decoder embedded input\n :param sequence_length: Sequence Length\n :param decoding_scope: TenorFlow Variable Scope for decoding\n :param output_fn: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: Train Logits\n \"\"\"\n train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)\n dec_cell = tf.contrib.rnn.core_rnn_cell.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)\n train_prediction, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(\n dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)\n \n #Apply output function\n train_logits = output_fn(train_prediction)\n return train_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_train(decoding_layer_train)", "Decoding - Inference\nCreate inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().", "def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,\n maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):\n \"\"\"\n Create a decoding layer for inference\n :param encoder_state: Encoder state\n :param dec_cell: Decoder RNN Cell\n :param dec_embeddings: Decoder embeddings\n :param start_of_sequence_id: GO ID\n :param end_of_sequence_id: EOS Id\n :param maximum_length: The maximum allowed time steps to decode\n :param vocab_size: Size of vocabulary\n :param decoding_scope: TensorFlow Variable Scope for decoding\n :param output_fn: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: Inference Logits\n \"\"\"\n infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(\n output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, \n maximum_length, vocab_size)\n dec_cell = tf.contrib.rnn.core_rnn_cell.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)\n inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)\n return inference_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_infer(decoding_layer_infer)", "Build the Decoding Layer\nImplement decoding_layer() to create a Decoder RNN layer.\n\nCreate RNN cell for decoding using rnn_size and num_layers.\nCreate the output fuction using lambda to transform it's input, logits, to class logits.\nUse the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.\nUse your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.\n\nNote: You'll need to use tf.variable_scope to share variables between training and inference.", "def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,\n num_layers, target_vocab_to_int, keep_prob):\n \"\"\"\n Create decoding layer\n :param dec_embed_input: Decoder 
embedded input\n :param dec_embeddings: Decoder embeddings\n :param encoder_state: The encoded state\n :param vocab_size: Size of vocabulary\n :param sequence_length: Sequence Length\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param keep_prob: Dropout keep probability\n :return: Tuple of (Training Logits, Inference Logits)\n \"\"\"\n #Decoder RNN\n dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)\n with tf.variable_scope(\"decoding\") as decoding_scope:\n #Output layer\n output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)\n #Get the training logits\n training_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,\n output_fn, keep_prob)\n with tf.variable_scope(\"decoding\", reuse=True) as decoding_scope:\n inference_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],\n rnn_size, vocab_size, decoding_scope, output_fn, keep_prob)\n return training_logits, inference_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer(decoding_layer)", "Build the Neural Network\nApply the functions you implemented above to:\n\nApply embedding to the input data for the encoder.\nEncode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).\nProcess target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.\nApply embedding to the target data for the decoder.\nDecode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).", "def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,\n enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):\n \"\"\"\n Build the Sequence-to-Sequence part of the neural network\n :param input_data: Input placeholder\n :param target_data: Target placeholder\n :param keep_prob: Dropout keep probability placeholder\n :param batch_size: Batch Size\n :param sequence_length: Sequence Length\n :param source_vocab_size: Source vocabulary size\n :param target_vocab_size: Target vocabulary size\n :param enc_embedding_size: Decoder embedding size\n :param dec_embedding_size: Encoder embedding size\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: Tuple of (Training Logits, Inference Logits)\n \"\"\"\n #Encoder embedding\n enc_embedding_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)\n encoder_state = encoding_layer(enc_embedding_input, rnn_size, num_layers, keep_prob)\n #Process our input data\n dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)\n \n #Pass our state to the decoder\n #Decoder Embedding\n dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))\n dec_embedding_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)\n \n #Decode our encoded input\n training_logits, inference_logits = decoding_layer(dec_embedding_input, dec_embeddings, encoder_state, target_vocab_size, sequence_length, rnn_size,\n num_layers, 
target_vocab_to_int, keep_prob)\n return training_logits, inference_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_seq2seq_model(seq2seq_model)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet num_layers to the number of layers.\nSet encoding_embedding_size to the size of the embedding for the encoder.\nSet decoding_embedding_size to the size of the embedding for the decoder.\nSet learning_rate to the learning rate.\nSet keep_probability to the Dropout keep probability", "# Number of Epochs\nepochs = 4\n# Batch Size\nbatch_size = 256\n# RNN Size\nrnn_size = 256\n# Number of Layers\nnum_layers = 2\n# Embedding Size\nencoding_embedding_size = 256\ndecoding_embedding_size = 256\n# Learning Rate\nlearning_rate = 0.001\n# Dropout Keep Probability\nkeep_probability = 0.7", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_path = 'checkpoints/dev'\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()\nmax_source_sentence_length = max([len(sentence) for sentence in source_int_text])\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n input_data, targets, lr, keep_prob = model_inputs()\n sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')\n input_shape = tf.shape(input_data)\n \n train_logits, inference_logits = seq2seq_model(\n tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),\n encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)\n\n tf.identity(inference_logits, 'logits')\n with tf.name_scope(\"optimization\"):\n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n train_logits,\n targets,\n tf.ones([input_shape[0], sequence_length]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)", "Train\nTrain the neural network on the preprocessed data. 
If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport time\n\ndef get_accuracy(target, logits):\n \"\"\"\n Calculate accuracy\n \"\"\"\n max_seq = max(target.shape[1], logits.shape[1])\n if max_seq - target.shape[1]:\n target = np.pad(\n target,\n [(0,0),(0,max_seq - target.shape[1])],\n 'constant')\n if max_seq - logits.shape[1]:\n logits = np.pad(\n logits,\n [(0,0),(0,max_seq - logits.shape[1]), (0,0)],\n 'constant')\n\n return np.mean(np.equal(target, np.argmax(logits, 2)))\n\ntrain_source = source_int_text[batch_size:]\ntrain_target = target_int_text[batch_size:]\n\nvalid_source = helper.pad_sentence_batch(source_int_text[:batch_size])\nvalid_target = helper.pad_sentence_batch(target_int_text[:batch_size])\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(epochs):\n for batch_i, (source_batch, target_batch) in enumerate(\n helper.batch_data(train_source, train_target, batch_size)):\n start_time = time.time()\n \n _, loss = sess.run(\n [train_op, cost],\n {input_data: source_batch,\n targets: target_batch,\n lr: learning_rate,\n sequence_length: target_batch.shape[1],\n keep_prob: keep_probability})\n \n batch_train_logits = sess.run(\n inference_logits,\n {input_data: source_batch, keep_prob: 1.0})\n batch_valid_logits = sess.run(\n inference_logits,\n {input_data: valid_source, keep_prob: 1.0})\n \n train_acc = get_accuracy(target_batch, batch_train_logits)\n valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)\n end_time = time.time()\n print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'\n .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_path)\n print('Model Trained and Saved')", "Save Parameters\nSave the batch_size and save_path parameters for inference.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params(save_path)", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()\nload_path = helper.load_params()", "Sentence to Sequence\nTo feed a sentence into the model for translation, you first need to preprocess it. 
Implement the function sentence_to_seq() to preprocess new sentences.\n\nConvert the sentence to lowercase\nConvert words into ids using vocab_to_int\nConvert words not in the vocabulary, to the &lt;UNK&gt; word id.", "def sentence_to_seq(sentence, vocab_to_int):\n \"\"\"\n Convert a sentence to a sequence of ids\n :param sentence: String\n :param vocab_to_int: Dictionary to go from the words to an id\n :return: List of word ids\n \"\"\"\n ret_val = []\n for word in sentence.split():\n word_lower = word.lower()\n if word_lower in vocab_to_int:\n ret_val.append(vocab_to_int[word_lower])\n else:\n ret_val.append(vocab_to_int['<UNK>'])\n \n return ret_val\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_sentence_to_seq(sentence_to_seq)", "Translate\nThis will translate translate_sentence from English to French.", "translate_sentence = 'he saw a old yellow truck .'\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ntranslate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_path + '.meta')\n loader.restore(sess, load_path)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('logits:0')\n keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n\n translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]\n\nprint('Input')\nprint(' Word Ids: {}'.format([i for i in translate_sentence]))\nprint(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))\n\nprint('\\nPrediction')\nprint(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))\nprint(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))", "Imperfect Translation\nYou might notice that some sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words of the thousands that you use, you're only going to see good results using these words. Additionally, the translations in this data set were made by Google translate, so the translations themselves aren't particularly good. (We apologize to the French speakers out there!) Thankfully, for this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.\nYou can train on the WMT10 French-English corpus. This dataset has more vocabulary and richer in topics discussed. However, this will take you days to train, so make sure you've a GPU and the neural network is performing well on dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_language_translation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
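A portability note on the encoder/decoder cells in the translation notebook above: the project pins TensorFlow 1.0/1.0.1, where [tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers works, but in later 1.x releases reusing one cell object across layers raises a variable-reuse error. A small sketch of the safer pattern (a hypothetical helper, not part of the project code):

def make_cell(rnn_size, keep_prob):
    # one fresh LSTM cell per layer, each with its own dropout wrapper
    cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
    return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)

def make_multi_cell(rnn_size, num_layers, keep_prob):
    return tf.contrib.rnn.MultiRNNCell(
        [make_cell(rnn_size, keep_prob) for _ in range(num_layers)])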
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
newsapps/public-notebooks
Red Light Camera Locations.ipynb
mit
[ "These are the URLs for the JSON data powering the ESRI/ArcGIS maps.", "few_crashes_url = 'http://www.arcgis.com/sharing/rest/content/items/5a8841f92e4a42999c73e9a07aca0c23/data?f=json&token=lddNjwpwjOibZcyrhJiogNmyjIZmzh-pulx7jPD9c559e05tWo6Qr8eTcP7Deqw_CIDPwZasbNOCSBHfthynf-8WRMmguxHbIFptbZQvnpRupJHSY8Abrz__xUteBS93MitgvoU6AqSN5eDVKRYiUg..'\nremoved_url = 'http://www.arcgis.com/sharing/rest/content/items/1e01ac5dc4d54dc186502316feab156e/data?f=json&token=lddNjwpwjOibZcyrhJiogNmyjIZmzh-pulx7jPD9c559e05tWo6Qr8eTcP7Deqw_CIDPwZasbNOCSBHfthynf-8WRMmguxHbIFptbZQvnpRupJHSY8Abrz__xUteBS93MitgvoU6AqSN5eDVKRYiUg..'", "We need a way to easily extract the actual data points from the JSON. The data will actually contain multiple layers (really, one layer per operationalLayer, but multiple operationalLayers) so, if we pass a title, we should return the operationalLayer corresponding to that title; otherwise, just return the first one.", "import requests\ndef extract_features(url, title=None):\n r = requests.get(url)\n idx = 0\n found = False\n if title:\n while idx < len(r.json()['operationalLayers']):\n for item in r.json()['operationalLayers'][idx].items():\n if item[0] == 'title' and item[1] == title:\n found = True\n break\n if found:\n break\n idx += 1\n try:\n return r.json()['operationalLayers'][idx]['featureCollection']['layers'][0]['featureSet']['features']\n except IndexError, e:\n return {}\n\nfew_crashes = extract_features(few_crashes_url)\nall_cameras = extract_features(removed_url, 'All Chicago red light cameras')\nremoved_cameras = extract_features(removed_url, 'red-light-cams')\nprint 'Found %d data points for few-crash intersections, %d total cameras and %d removed camera locations' % (\n len(few_crashes), len(all_cameras), len(removed_cameras))", "Now we need to filter out the bad points from few_crashes - the ones with 0 given as the lat/lon.", "filtered_few_crashes = [\n point for point in few_crashes if point['attributes']['LONG_X'] != 0 and point['attributes']['LAT_Y'] != 0]", "Now let's build a dictionary of all the cameras, so we can merge all their info.", "cameras = {}\nfor point in all_cameras:\n label = point['attributes']['LABEL']\n if label not in cameras:\n cameras[label] = point\n cameras[label]['attributes']['Few crashes'] = False\n cameras[label]['attributes']['To be removed'] = False", "Set the 'Few crashes' flag to True for those intersections that show up in filtered_few_crashes.", "for point in filtered_few_crashes:\n label = point['attributes']['LABEL']\n if label not in cameras:\n print 'Missing label %s' % label\n else:\n cameras[label]['attributes']['Few crashes'] = True", "Set the 'To be removed' flag to True for those intersections that show up in removed_cameras.", "for point in removed_cameras:\n label = point['attributes']['displaylabel'].replace(' and ', '-')\n if label not in cameras:\n print 'Missing label %s' % label\n else:\n cameras[label]['attributes']['To be removed'] = True", "How many camera locations have few crashes and were slated to be removed?", "counter = {\n 'both': {\n 'names': [],\n 'count': 0\n },\n 'crashes only': {\n 'names': [],\n 'count': 0\n },\n 'removed only': {\n 'names': [],\n 'count': 0\n }\n}\n\nfor camera in cameras:\n if cameras[camera]['attributes']['Few crashes']:\n if cameras[camera]['attributes']['To be removed']:\n counter['both']['count'] += 1\n counter['both']['names'].append(camera)\n else:\n counter['crashes only']['count'] += 1\n counter['crashes only']['names'].append(camera)\n elif 
cameras[camera]['attributes']['To be removed']:\n counter['removed only']['count'] += 1\n counter['removed only']['names'].append(camera)\n\nprint '%d locations had few crashes and were slated to be removed: %s\\n' % (\n counter['both']['count'], '; '.join(counter['both']['names']))\nprint '%d locations had few crashes but were not slated to be removed: %s\\n' % (\n counter['crashes only']['count'], '; '.join(counter['crashes only']['names']))\nprint '%d locations were slated to be removed despite having reasonable numbers of crashes: %s' % (\n counter['removed only']['count'], '; '.join(counter['removed only']['names']))", "How does this list compare to the one currently published on the Chicago Data Portal?", "from csv import DictReader\nfrom StringIO import StringIO\n\ndata_portal_url = 'https://data.cityofchicago.org/api/views/thvf-6diy/rows.csv?accessType=DOWNLOAD'\nr = requests.get(data_portal_url)\nfh = StringIO(r.text)\nreader = DictReader(fh)\n\ndef cleaner(str):\n filters = [\n ('Stony?Island', 'Stony Island'),\n ('Van?Buren', 'Van Buren'),\n (' (SOUTH INTERSECTION)', '')\n ]\n for filter in filters:\n str = str.replace(filter[0], filter[1])\n return str\n\nfor line in reader:\n line['INTERSECTION'] = cleaner(line['INTERSECTION'])\n cameras[line['INTERSECTION']]['attributes']['current'] = line\n\ncounter = {\n 'not current': [],\n 'current': [],\n 'not current and slated for removal': [],\n 'not current and not slated for removal': [],\n 'current and slated for removal': []\n}\nfor camera in cameras:\n if 'current' not in cameras[camera]['attributes']:\n counter['not current'].append(camera)\n if cameras[camera]['attributes']['To be removed']:\n counter['not current and slated for removal'].append(camera)\n else:\n counter['not current and not slated for removal'].append(camera)\n else:\n counter['current'].append(camera)\n if cameras[camera]['attributes']['To be removed']:\n counter['current and slated for removal'].append(camera)\n\nfor key in counter:\n print key, len(counter[key])\n print '; '.join(counter[key]), '\\n'", "Now we need to compute how much money has been generated at each intersection - assuming $100 fine for each violation. 
In order to do that, we need to make the violation data line up with the camera location data.\nThen, we'll add 3 fields: number of violations overall; number on/after 12/22/2014; number on/after 3/6/2015.", "import requests\nfrom csv import DictReader\nfrom datetime import datetime\nfrom StringIO import StringIO\n\ndata_portal_url = 'https://data.cityofchicago.org/api/views/spqx-js37/rows.csv?accessType=DOWNLOAD'\nr = requests.get(data_portal_url)\nfh = StringIO(r.text)\nreader = DictReader(fh)\n\ndef violation_cleaner(str):\n filters = [\n (' AND ', '-'),\n (' and ', '-'),\n ('/', '-'),\n # These are streets spelled one way in ticket data, another way in location data\n ('STONEY ISLAND', 'STONY ISLAND'),\n ('CORNELL DRIVE', 'CORNELL'),\n ('NORTHWEST HWY', 'NORTHWEST HIGHWAY'),\n ('CICERO-I55', 'CICERO-STEVENSON NB'),\n ('31ST ST-MARTIN LUTHER KING DRIVE', 'DR MARTIN LUTHER KING-31ST'),\n ('4700 WESTERN', 'WESTERN-47TH'),\n ('LAKE SHORE DR-BELMONT', 'LAKE SHORE-BELMONT'),\n # These are 3-street intersections where the ticket data has 2 streets, location data has 2 other streets\n ('KIMBALL-DIVERSEY', 'MILWAUKEE-DIVERSEY'),\n ('PULASKI-ARCHER', 'PULASKI-ARCHER-50TH'),\n ('KOSTNER-NORTH', 'KOSTNER-GRAND-NORTH'),\n ('79TH-KEDZIE', 'KEDZIE-79TH-COLUMBUS'),\n ('LINCOLN-MCCORMICK', 'KIMBALL-LINCOLN-MCCORMICK'),\n ('KIMBALL-LINCOLN', 'KIMBALL-LINCOLN-MCCORMICK'),\n ('DIVERSEY-WESTERN', 'WESTERN-DIVERSEY-ELSTON'),\n ('HALSTED-FULLERTON', 'HALSTED-FULLERTON-LINCOLN'),\n ('COTTAGE GROVE-71ST', 'COTTAGE GROVE-71ST-SOUTH CHICAGO'),\n ('DAMEN-FULLERTON', 'DAMEN-FULLERTON-ELSTON'),\n ('DAMEN-DIVERSEY', 'DAMEN-DIVERSEY-CLYBOURN'),\n ('ELSTON-FOSTER', 'ELSTON-LAPORTE-FOSTER'),\n ('STONY ISLAND-79TH', 'STONY ISLAND-79TH-SOUTH CHICAGO'),\n # This last one is an artifact of the filter application process\n ('KIMBALL-LINCOLN-MCCORMICK-MCCORMICK', 'KIMBALL-LINCOLN-MCCORMICK')\n ]\n for filter in filters:\n str = str.replace(filter[0], filter[1])\n return str\n\ndef intersection_is_reversed(key, intersection):\n split_key = key.upper().split('-')\n split_intersection = intersection.upper().split('-')\n if len(split_key) != len(split_intersection):\n return False\n for k in split_key:\n if k not in split_intersection:\n return False\n for k in split_intersection:\n if k not in split_key:\n return False\n return True\n \n\nmissing_intersections = set()\nfor idx, line in enumerate(reader):\n line['INTERSECTION'] = violation_cleaner(line['INTERSECTION'])\n found = False\n for key in cameras:\n if key.lower() == line['INTERSECTION'].lower() or intersection_is_reversed(key, line['INTERSECTION']):\n found = True\n if 'total tickets' not in cameras[key]['attributes']:\n cameras[key]['attributes']['total tickets'] = 0\n cameras[key]['attributes']['tickets since 12/22/2014'] = 0\n cameras[key]['attributes']['tickets since 3/6/2015'] = 0\n cameras[key]['attributes']['last ticket date'] = line['VIOLATION DATE']\n else:\n cameras[key]['attributes']['total tickets'] += int(line['VIOLATIONS'])\n dt = datetime.strptime(line['VIOLATION DATE'], '%m/%d/%Y')\n if dt >= datetime.strptime('12/22/2014', '%m/%d/%Y'):\n cameras[key]['attributes']['tickets since 12/22/2014'] += int(line['VIOLATIONS'])\n if dt >= datetime.strptime('3/6/2015', '%m/%d/%Y'):\n cameras[key]['attributes']['tickets since 3/6/2015'] += int(line['VIOLATIONS'])\n if not found:\n missing_intersections.add(line['INTERSECTION'])\nprint 'Missing %d intersections' % len(missing_intersections), missing_intersections", "Now it's time to ask some specific questions. 
First: how much money has the program raised overall? (Note that this data only goes back to 7/1/2014, several years after the program began.)", "import locale\nlocale.setlocale( locale.LC_ALL, '' )\n\ntotal = 0\nmissing_tickets = []\nfor camera in cameras:\n try:\n total += cameras[camera]['attributes']['total tickets']\n except KeyError:\n missing_tickets.append(camera)\n\nprint '%d tickets have been issued since 7/1/2014, raising %s' % (total, locale.currency(total * 100, grouping=True))\nprint 'The following %d intersections appear to never have issued a ticket in that time: %s' % (\n len(missing_tickets), '; '.join(missing_tickets))", "Since 12/22/2014, how much money has been generated by low-crash intersections?", "total = 0\nlow_crash_total = 0\nfor camera in cameras:\n try:\n total += cameras[camera]['attributes']['tickets since 12/22/2014']\n if cameras[camera]['attributes']['Few crashes']:\n low_crash_total += cameras[camera]['attributes']['tickets since 12/22/2014']\n except KeyError:\n continue\n\nprint '%d tickets have been issued at low-crash intersections since 12/22/2014, raising %s' % (\n low_crash_total, locale.currency(low_crash_total * 100, grouping=True))\nprint '%d tickets have been issued overall since 12/22/2014, raising %s' % (\n total, locale.currency(total * 100, grouping=True))", "How about since 3/6/2015?", "total = 0\nlow_crash_total = 0\nslated_for_closure_total = 0\nfor camera in cameras:\n try:\n total += cameras[camera]['attributes']['tickets since 3/6/2015']\n if cameras[camera]['attributes']['Few crashes']:\n low_crash_total += cameras[camera]['attributes']['tickets since 3/6/2015']\n if cameras[camera]['attributes']['To be removed']:\n slated_for_closure_total += cameras[camera]['attributes']['tickets since 3/6/2015']\n except KeyError:\n continue\n\nprint '%d tickets have been issued at low-crash intersections since 3/6/2015, raising %s' % (\n low_crash_total, locale.currency(low_crash_total * 100, grouping=True))\nprint '%d tickets have been issued overall since 3/6/2015, raising %s' % (\n total, locale.currency(total * 100, grouping=True))\nprint '%d tickets have been issued at cameras that were supposed to be closed since 3/6/2015, raising %s' % (\n slated_for_closure_total, locale.currency(slated_for_closure_total * 100, grouping=True))", "Now let's generate a CSV of the cameras data for export.", "from csv import DictWriter\noutput = []\n\nfor camera in cameras:\n data = {\n 'intersection': camera,\n 'last ticket date': cameras[camera]['attributes'].get('last ticket date', ''),\n 'tickets since 7/1/2014': cameras[camera]['attributes'].get('total tickets', 0),\n 'revenue since 7/1/2014': cameras[camera]['attributes'].get('total tickets', 0) * 100,\n 'tickets since 12/22/2014': cameras[camera]['attributes'].get('tickets since 12/22/2014', 0),\n 'revenue since 12/22/2014': cameras[camera]['attributes'].get('tickets since 12/22/2014', 0) * 100,\n 'was slated for removal': cameras[camera]['attributes'].get('To be removed', False),\n 'had few crashes': cameras[camera]['attributes'].get('Few crashes', False),\n 'is currently active': True if 'current' in cameras[camera]['attributes'] else False,\n 'latitude': cameras[camera]['attributes'].get('LAT', 0),\n 'longitude': cameras[camera]['attributes'].get('LNG', 0)\n }\n output.append(data)\n\nwith open('/tmp/red_light_intersections.csv', 'w+') as fh:\n writer = DictWriter(fh, sorted(output[0].keys()))\n writer.writeheader()\n writer.writerows(output)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
gangiman/CiGraphVis
fragments/plot_processed_papers.ipynb
mit
[ "import os\nimport numpy as np\nfrom tqdm import tqdm\nimport pickle\nimport networkx as nx\nimport editdistance # see https://pypi.python.org/pypi/editdistance", "processing\nWe first need to resolve references among papers in the user's base", "papers_data = pickle.load(open('papers_data.pkl', 'rb'))\n\n# lets collec all papers and their references into one array, type = 1 means paper is in the user's base, 0 - only in references\nall_papers = []\nfor k in papers_data:\n all_papers.append(papers_data[k]['metadata'])\n all_papers[-1]['type'] = 1\n for ref in papers_data[k]['references']:\n all_papers.append(ref)\n all_papers[-1]['type'] = 0 \n\n# filter papers without titles\nno_title = 0\nnone_title = 0\nfiltered_papers = []\nfor i in range(len(all_papers)):\n if 'title' not in all_papers[i]:\n no_title += 1\n elif all_papers[i]['title'] == '<None>':\n none_title += 1\n else:\n filtered_papers.append(all_papers[i])\nprint(no_title, none_title)\nprint(len(filtered_papers))\n\n# let's get rid of papers without titles\nall_papers = filtered_papers\n\n# likely a stub\n# papers are considered to be duplicates if the edit-distance between their titles is less than \"threshold * minimal_title_legngth\"\ndef is_duplicate(p1, p2, threshold = 0.25):\n if 'title' in p1 and 'title' in p2:\n if editdistance.eval(p1['title'], p2['title']) < threshold * min(len(p1['title']), len(p2['title'])):\n return True\n return False\n\n# resolving duplicates and links\n# Note: O(N^2), for 10k papers takes ~ 5 min\nresolved_papers = []\nall_cnt_duplicates = []\nfor i in tqdm( range(len(all_papers))):\n cnt_duplicates = 0\n for j in range(len(resolved_papers)):\n if is_duplicate(all_papers[i], resolved_papers[j]):\n cnt_duplicates += 1\n all_papers[i]['id'] = j\n if all_papers[i]['type'] == 0 and resolved_papers[j]['type'] == 1: # if referenced paper is in the user's base, mark it\n all_papers[i]['type'] = 1\n all_cnt_duplicates.append(cnt_duplicates)\n if cnt_duplicates == 0:\n all_papers[i]['id'] = len(resolved_papers)\n resolved_papers.append(all_papers[i])\n\nfrom collections import Counter\nCounter(all_cnt_duplicates).most_common(10) # if the counter has values larger than 1, there are ambiguities (it is not good to have them)\n\n# store processed papers_data in order to skip processing steps\npickle.dump(papers_data, open('papers_data_processed.pkl', 'wb'))", "", "papers_data = pickle.load(open('papers_data_processed.pkl', 'rb'))\n\nimport string\n# split long string into lines, also replace not printable characters with their representations\ndef split_long(s, split_length = 20, clip_size = 100):\n s = ''.join(x if x in string.printable else repr(x) for x in s)\n if clip_size is not None:\n s = s[:clip_size]\n split_s = s.split(' ')\n res_s = ''\n cur_length = 0\n for i in range(len(split_s)):\n if cur_length > split_length:\n res_s += '\\n'\n cur_length = 0\n res_s += split_s[i] + ' '\n cur_length += len(split_s[i])\n return res_s\n ", "create citation graph and plot it", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as cm\n\nimport seaborn as sns\n\nnode_degrees = []\ncodes = {}\nfor p in papers_data.values():\n if 'id' in p['metadata']:\n if p['metadata']['id'] not in codes: # and len(added_nodes) < 20:\n codes[p['metadata']['id']] = len(codes)\n node_degrees.append(0)\n for ref in p['references']:\n if 'id' in ref:\n if ref['id'] not in codes:\n codes[ref['id']] = len(codes)\n node_degrees.append(0)\n node_degrees[codes[p['metadata']['id']]] += 1\n 
node_degrees[codes[ref['id']]] += 1\nnode_degrees = np.array(node_degrees)\n\nMIN_REFERENCES = 5 # include in the graph only papers with at least this amount references from the user's base\n\nG=nx.DiGraph()\nnode_labels = {}\nlist_nodes = []\nnode_colors = []\nadded_nodes = {}\nfor p in tqdm(papers_data.values()):\n if 'id' in p['metadata']:\n if p['metadata']['id'] not in added_nodes: # and len(added_nodes) < 20:\n added_nodes[p['metadata']['id']] = len(added_nodes)\n G.add_node(p['metadata']['id'])\n list_nodes.append(p['metadata']['id'])\n node_colors.append('red' if p['metadata']['type'] == 1 else 'black')\n node_labels[p['metadata']['id']] = split_long(p['metadata']['title'])\n \n for ref in p['references']:\n if 'id' in ref and node_degrees[codes[ref['id']]] >= MIN_REFERENCES:\n if ref['id'] not in added_nodes:\n added_nodes[ref['id']] = len(added_nodes)\n G.add_node(ref['id'])\n list_nodes.append(ref['id'])\n node_colors.append('red' if ref['type'] == 1 else 'black')\n node_labels[ref['id']] = split_long(ref['title'])\n G.add_edge(p['metadata']['id'], ref['id'], weight = 1)\n \n \n\nfor p in tqdm(papers_data.values()):\n if 'id' in p['metadata'] and p['metadata']['id'] in added_nodes:\n node_colors[added_nodes[p['metadata']['id']]] = 'red'\n\nfor edge in G.edges():\n if node_colors[added_nodes[edge[0]]] == node_colors[added_nodes[edge[1]]] and node_colors[added_nodes[edge[0]]] == 'black':\n print(edge)\n print(node_colors[added_nodes[edge[0]]], node_colors[added_nodes[edge[1]]])\n break\n\nlen(added_nodes)", "try plot with spring layout", "pos = nx.spring_layout(G, iterations = 30, k = 0.1)\n\nplt.figure(figsize = (7, 7))\nnx.draw(G, pos, node_size=3, width=0.15, node_color=[node_colors[added_nodes[node]] for node in G.nodes()], \n with_labels=False, edge_color='gray');", "since spring layout is not good at semantic clustering papers, we will try to use node2vec library : https://github.com/aditya-grover/node2vec", "import node2vec.src.node2vec as n2v\nimport node2vec\nfrom gensim.models import Word2Vec\n\n# we make an undirected representation of the directed graph with spliting original vertices into \"in\" and \"out\" ones\ngraph = nx.Graph()\n\nfor i in range(len(list_nodes)):\n graph.add_node(str(added_nodes[list_nodes[i]]))\n graph.add_node('~' + str(added_nodes[list_nodes[i]]))\n graph.add_edge('~' + str(added_nodes[list_nodes[i]]), str(added_nodes[list_nodes[i]]), weight = 1)\n \nfor edge in G.edges():\n graph.add_edge(str(added_nodes[edge[0]]), '~' + str(added_nodes[edge[1]]), weight = 1)\n \n\ng = n2v.Graph(graph, False, 1, 1)\ng.preprocess_transition_probs()\n\n# Note: the library node2vec is for python 2, you will need to remove some \"print\" statements out of the simulate_walks method\nwalks = g.simulate_walks(20, 30)\n\nmodel = Word2Vec(walks, size=50, window=10, min_count=0, sg=1, workers=1, iter=1)\nword_vectors = model.wv\n\nY = np.array([word_vectors[str(added_nodes[list_nodes[i]])] for i in range(len(list_nodes))])\n\n# node2vec works better when it embeds nodes into high-dimensional space, then we will embed their representations into 2d with TSNE\nfrom sklearn.manifold import TSNE\n\ntsne = TSNE(n_components=2, perplexity=15, learning_rate = 700)\n\nY = (Y.T/np.linalg.norm(Y, axis = 1)).T\nY = tsne.fit_transform(Y)\n\nfor i in range(len(Y)):\n pos[list_nodes[i]] = Y[i]\n\nplt.figure(figsize = (7, 7))\nnx.draw(G, pos, node_size=3, width=0.15, node_color=[node_colors[added_nodes[node]] for node in G.nodes()], \n with_labels=False, edge_color='gray');", 
"interactive visualization with plotly", "import plotly.plotly as py\nfrom plotly.graph_objs import *\n# you need to set up the connection to plotly API : https://plot.ly/python/getting-started/#initialization-for-online-plotting\n\nedge_trace = Scatter(\n x=[],\n y=[],\n line=Line(width=0.5,color='#888'),\n hoverinfo='none',\n mode='lines')\n\nfor edge in G.edges():\n x0, y0 = pos[edge[0]]\n x1, y1 = pos[edge[1]]\n edge_trace['x'] += [x0, x1, None]\n edge_trace['y'] += [y0, y1, None]\n\nnode_trace = Scatter(\n x=[],\n y=[],\n text=[],\n mode='markers',\n hoverinfo='text',\n marker=Marker(\n showscale=True,\n colorscale='YIGnBu',\n reversescale=True,\n color=[],\n size=5,\n line=dict(width=1)))\n\n\nfor i in range(len(G.nodes())):\n x, y = pos[list_nodes[i]]\n node_trace['x'].append(x)\n node_trace['y'].append(y)\n node_trace['marker']['color'].append(node_colors[i])\n node_info = node_labels[list_nodes[i]]\n node_trace['text'].append(node_info)\n\nfig = Figure(data=Data([edge_trace, node_trace]),\n layout=Layout(\n title='<br>Interactive citation graph',\n titlefont=dict(size=16),\n showlegend=False,\n hovermode='closest',\n width=750,\n height=750,\n margin=dict(b=20,l=5,r=5,t=40),\n annotations=[ dict(\n showarrow=False,\n xref=\"paper\", yref=\"paper\",\n x=0.005, y=-0.002 ) ],\n xaxis=XAxis(showgrid=False, zeroline=False, showticklabels=False),\n yaxis=YAxis(showgrid=False, zeroline=False, showticklabels=False)))\n\npy.iplot(fig, filename='networkx_3')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
nicoguaro/notebooks_examples
QM_analytical solutions.ipynb
mit
[ "from __future__ import print_function, division\nfrom sympy import symbols\nfrom sympy.core import S, pi, Rational\nfrom sympy.functions import sqrt, exp, factorial, gamma, tanh\nfrom sympy.functions import assoc_laguerre as L\nfrom sympy.functions import assoc_legendre as P\nfrom sympy.physics.quantum.constants import hbar\n\nfrom sympy import init_printing\ninit_printing()", "Morse potential\nThe Morse potential is given by\n$$ V(x) = D_e [1 - e^{-a(r-r_e)}])$$", "x, lam = symbols(\"x lambda\")\nn = symbols(\"n\", integer=True)\n\ndef morse_psi_n(n, x, lam, xe):\n Nn = sqrt((factorial(n)*(2*lam - 2*n - 10))/gamma(2*lam - n))\n z = 2*lam*exp(-(x - xe))\n psi = Nn*z**(lam - n -S(1)/2) * exp(-S(1)/2*z) * L(n, 2*lam - 2*n - 1, z)\n return psi \n\ndef morse_E_n(n, lam):\n return 1 - 1/lam**2*(lam - n - S(1)/2)**2 \n\nmorse_E_n(0, lam)\n\nfrom sympy.functions import cosh", "Pöschl-Teller potential\nThe Pösch-Teller potential is given by\n$$ V(x) = -\\frac{\\lambda(\\lambda + 1)}{2} \\operatorname{sech}^2(x)$$", "def posch_teller_psi_n(n, x, lam):\n psi = P(lam, n, tanh(x))\n return psi\n\nposch_teller_psi_n(n, x, lam)\n\ndef morse_E_n(n, lam):\n if n <= lam:\n return -n**2/ 2\n else:\n raise ValueError(\"Lambda should not be greater than n.\")\n\nmorse_E_n(5, 6)", "References\n\n\nPöschl, G.; Teller, E. (1933). \"Bemerkungen zur Quantenmechanik des anharmonischen Oszillators\". Zeitschrift für Physik 83 (3–4): 143–151. doi:10.1007/BF01331132.\n\n\nSiegfried Flügge Practical Quantum Mechanics (Springer, 1998)\nLekner, John (2007). \"Reflectionless eigenstates of the sech2 potential\". American Journal of Physics 875 (12): 1151–1157. doi:10.1119/1.2787015.", "from IPython.core.display import HTML\ndef css_styling():\n styles = open('./styles/custom_barba.css', 'r').read()\n return HTML(styles)\ncss_styling()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
spectralDNS/shenfun
binder/drivencavity.ipynb
bsd-2-clause
[ "<!-- dom:TITLE: Demo - Lid driven cavity -->\nDemo - Lid driven cavity\n<!-- dom:AUTHOR: Mikael Mortensen Email:mikaem@math.uio.no at Department of Mathematics, University of Oslo. -->\n<!-- Author: -->\nMikael Mortensen (email: mikaem@math.uio.no), Department of Mathematics, University of Oslo.\nDate: May 6, 2019\nSummary. The lid driven cavity is a classical benchmark for Navier Stokes solvers.\nThis is a demonstration of how the Python module shenfun can be used to solve the lid\ndriven cavity problem with full spectral accuracy using a mixed (coupled) basis\nin a 2D tensor product domain. The demo also shows how to use mixed\ntensor product spaces for vector valued equations. Note that the regular\nlid driven cavity, where the top wall has constant velocity and the\nremaining three walls are stationary, has a singularity at the two\nupper corners, where the velocity is discontinuous.\nDue to their global nature, spectral methods\nare usually not very good at handling problems with discontinuities, and\nfor this reason we will also look at a regularized lid driven cavity,\nwhere the top lid moves according to $(1-x)^2(1+x)^2$, thus removing\nthe corner discontinuities.\n<!-- dom:FIGURE: [https://raw.githack.com/spectralDNS/spectralutilities/master/figures/DrivenCavity.png] Velocity vectors for the lid driven cavity at Reynolds number 100. <div id=\"fig:drivencavity\"></div> -->\n<!-- begin figure -->\n<div id=\"fig:drivencavity\"></div>\n\n<p>Velocity vectors for the lid driven cavity at Reynolds number 100.</p>\n<img src=\"https://raw.githack.com/spectralDNS/spectralutilities/master/figures/DrivenCavity.png\" >\n<!-- end figure -->\n\nNavier Stokes equations\n<div id=\"demo:navierstokes\"></div>\n\nThe nonlinear steady Navier Stokes equations are given in strong form as\n$$\n\\begin{align}\n\\nu \\nabla^2 \\boldsymbol{u} - \\nabla p &= \\nabla \\cdot \\boldsymbol{u} \\boldsymbol{u} \\quad \\text{in } \\Omega , \\ \n\\nabla \\cdot \\boldsymbol{u} &= 0 \\quad \\text{in } \\Omega \\ \n\\int_{\\Omega} p dx &= 0 \\ \n\\boldsymbol{u}(x, y=1) = (1, 0) \\, &\\text{ or }\\, \\boldsymbol{u}(x, y=1) = ((1-x)^2(1+x)^2, 0) \\ \n\\boldsymbol{u}(x, y=-1) &= (0, 0) \\ \n\\boldsymbol{u}(x=\\pm 1, y) &= (0, 0)\n\\end{align}\n$$\nwhere $\\boldsymbol{u}, p$ and $\\nu$ are, respectively, the\nfluid velocity vector, pressure and kinematic viscosity. The domain\n$\\Omega = [-1, 1]^2$ and the nonlinear term $\\boldsymbol{u} \\boldsymbol{u}$ is the\nouter product of vector $\\boldsymbol{u}$ with itself. Note that the final\n$\\int_{\\Omega} p dx = 0$ is there because there is no Dirichlet boundary\ncondition on the pressure and the system of equations would otherwise be\nill conditioned.\nWe want to solve these steady nonlinear Navier Stokes equations with the Galerkin\nmethod, using the shenfun Python\npackage. The first thing we need to do then is to import all of shenfun's\nfunctionality", "%matplotlib inline\n\nimport matplotlib.pyplot as plt\nfrom shenfun import *", "Note that MPI for Python (mpi4py)\nis a requirement for shenfun, but the current solver cannot be used with more\nthan one processor.\nTensor product spaces\n<div id=\"sec:bases\"></div>\n\nWith the Galerkin method we need function spaces for both velocity and\npressure, as well as for the\nnonlinear right hand side. A Dirichlet space will be used for velocity,\nwhereas there is no boundary restriction on the pressure space. 
For both\ntwo-dimensional spaces we will use one basis function for the $x$-direction,\n$\mathcal{X}_k(x)$, and one for the $y$-direction, $\mathcal{Y}_l(y)$. And\nthen we create two-dimensional basis functions like\n<!-- Equation labels as ordinary links -->\n<div id=\"eq:nstestfunction\"></div>\n\n$$\n\begin{equation}\nv_{kl}(x, y) = \mathcal{X}_k(x) \mathcal{Y}_l(y), \label{eq:nstestfunction} \tag{1}\n\end{equation}\n$$\nand solutions (trial functions) as\n<!-- Equation labels as ordinary links -->\n<div id=\"eq:nstrialfunction\"></div>\n\n$$\n\begin{equation}\n u(x, y) = \sum_{k}\sum_{l} \hat{u}_{kl} v_{kl}(x, y). \label{eq:nstrialfunction} \tag{2}\n\end{equation}\n$$\nFor the homogeneous Dirichlet boundary condition the basis functions\n$\mathcal{X}_k(x)$ and $\mathcal{Y}_l(y)$ are chosen as composite\nLegendre polynomials (we could also use Chebyshev):\n<!-- Equation labels as ordinary links -->\n<div id=\"eq:D0\"></div>\n\n$$\n\begin{equation}\n\mathcal{X}_k(x) = L_k(x) - L_{k+2}(x), \quad \forall \, k \in \boldsymbol{k}^{N_0-2}, \label{eq:D0} \tag{3} \n\end{equation}\n$$\n<!-- Equation labels as ordinary links -->\n<div id=\"eq:D1\"></div>\n\n$$\n\begin{equation}\n\mathcal{Y}_l(y) = L_l(y) - L_{l+2}(y), \quad \forall \, l \in \boldsymbol{l}^{N_1-2}, \label{eq:D1} \tag{4}\n\end{equation}\n$$\nwhere $\boldsymbol{k}^{N_0-2} = (0, 1, \ldots, N_0-3)$, $\boldsymbol{l}^{N_1-2} = (0, 1, \ldots, N_1-3)$\nand $N = (N_0, N_1)$ is the number\nof quadrature points in each direction. Note that $N_0$ and $N_1$ do not need\nto be the same. The basis function (3) satisfies\nthe homogeneous Dirichlet boundary conditions at $x=\pm 1$ and (4) the same\nat $y=\pm 1$. As such, the basis function $v_{kl}(x, y)$ satisfies the homogeneous Dirichlet boundary\ncondition for the entire domain.\nWith shenfun we create these homogeneous spaces, $D_0^{N_0}(x)=\text{span}\{L_k-L_{k+2}\}_{k=0}^{N_0-2}$ and\n$D_0^{N_1}(y)=\text{span}\{L_l-L_{l+2}\}_{l=0}^{N_1-2}$ as", "N = (45, 45)\nfamily = 'Legendre' # or use 'Chebyshev'\nquad = 'GL' # for Chebyshev use 'GC' or 'GL'\nD0X = FunctionSpace(N[0], family, quad=quad, bc=(0, 0))\nD0Y = FunctionSpace(N[1], family, quad=quad, bc=(0, 0))", "The spaces are here the same, but we will use D0X in the $x$-direction and\nD0Y in the $y$-direction. But before we use these bases in\ntensor product spaces, they remain identical as long as $N_0 = N_1$.\nSpecial attention is required by the moving lid. To get a solution\nwith nonzero boundary condition at $y=1$ we need to add one more basis function\nthat satisfies that solution. In general, a nonzero boundary condition\ncan be added on both sides of the domain using the following basis\n<!-- Equation labels as ordinary links -->\n<div id=\"_auto1\"></div>\n\n$$\n\begin{equation}\n\mathcal{Y}_l(y) = L_l(y) - L_{l+2}(y), \quad \forall \, l \in \boldsymbol{l}^{N_1-2}. \n\label{_auto1} \tag{5}\n\end{equation}\n$$\n<!-- Equation labels as ordinary links -->\n<div id=\"_auto2\"></div>\n\n$$\n\begin{equation}\n\mathcal{Y}_{N_1-2}(y) = (L_0+L_1)/2 \quad \left(=(1+y)/2\right), \n\label{_auto2} \tag{6}\n\end{equation}\n$$\n<!-- Equation labels as ordinary links -->\n<div id=\"_auto3\"></div>\n\n$$\n\begin{equation}\n\mathcal{Y}_{N_1-1}(y) = (L_0-L_1)/2 \quad \left(=(1-y)/2\right).\n\label{_auto3} \tag{7}\n\end{equation}\n$$\nAnd then the unknown component $N_1-2$ decides the value at $y=1$, whereas\nthe unknown at $N_1-1$ decides the value at $y=-1$. 
Here we only need to\nadd the $N_1-2$ component, but for generality this is implemented in shenfun\nusing both additional basis functions. We create the space\n$D_1^{N_1}(y)=\text{span}\{\mathcal{Y}_l(y)\}_{l=0}^{N_1-1}$ as", "D1Y = FunctionSpace(N[1], family, quad=quad, bc=(0, 1))", "where bc=(0, 1) fixes the value to 0 at $y=-1$ and to 1 at $y=1$.\nFor a regularized lid driven cavity the velocity of the top lid is\n$(1-x)^2(1+x)^2$ and not unity. To implement this boundary condition\ninstead, we can make use of sympy and\nquite straightforwardly do", "import sympy\nx = sympy.symbols('x')\n#D1Y = FunctionSpace(N[1], family, quad=quad, bc=(0, (1-x)**2*(1+x)**2))", "Uncomment the last line to run the regularized boundary conditions.\nOtherwise, there is no difference at all between the regular and the\nregularized lid driven cavity implementations.\nThe pressure basis that comes with no restrictions for the boundary is a\nlittle trickier. The reason for this has to do with\ninf-sup stability. The obvious choice of basis functions is the\nregular Legendre polynomials $L_k(x)$ and $L_l(y)$ in the $x$- and\n$y$-directions. The problem is that for the natural choice of\n$(k, l) \in \boldsymbol{k}^{N_0} \times \boldsymbol{l}^{N_1}$\nthere are nullspaces and the problem is not well-defined. It turns out\nthat the proper choice for the pressure basis is simply the regular\nLegendre basis functions, but for\n$(k, l) \in \boldsymbol{k}^{N_0-2} \times \boldsymbol{l}^{N_1-2}$.\nThe bases $P^{N_0}(x)=\text{span}\{L_k(x)\}_{k=0}^{N_0-3}$ and\n$P^{N_1}(y)=\text{span}\{L_l(y)\}_{l=0}^{N_1-3}$ are created as", "PX = FunctionSpace(N[0], family, quad=quad)\nPY = FunctionSpace(N[1], family, quad=quad)\nPX.slice = lambda: slice(0, N[0]-2)\nPY.slice = lambda: slice(0, N[1]-2)", "Note that we still use these spaces with the same $N_0 \cdot N_1$\nquadrature points in real space, but the two highest frequencies have\nbeen set to zero.\nWe have now created all relevant function spaces for the problem at hand.\nIt remains to combine these spaces into tensor product spaces, and to\ncombine tensor product spaces into mixed (coupled) tensor product\nspaces. From the Dirichlet bases we create two different tensor\nproduct spaces, whereas one is enough for the pressure\n<!-- Equation labels as ordinary links -->\n<div id=\"_auto4\"></div>\n\n$$\n\begin{equation}\nV_{1}^{\boldsymbol{N}}(\boldsymbol{x}) = D_0^{N_0}(x) \otimes D_1^{N_1}(y), \n\label{_auto4} \tag{8}\n\end{equation}\n$$\n<!-- Equation labels as ordinary links -->\n<div id=\"_auto5\"></div>\n\n$$\n\begin{equation}\nV_{0}^{\boldsymbol{N}}(\boldsymbol{x}) = D_0^{N_0}(x) \otimes D_0^{N_1}(y), \n\label{_auto5} \tag{9}\n\end{equation}\n$$\n<!-- Equation labels as ordinary links -->\n<div id=\"_auto6\"></div>\n\n$$\n\begin{equation}\nP^{\boldsymbol{N}}(\boldsymbol{x}) = P^{N_0}(x) \otimes P^{N_1}(y).\n\label{_auto6} \tag{10}\n\end{equation}\n$$\nWith shenfun the tensor product spaces are created as", "V1 = TensorProductSpace(comm, (D0X, D1Y))\nV0 = TensorProductSpace(comm, (D0X, D0Y))\nP = TensorProductSpace(comm, (PX, PY), modify_spaces_inplace=True)", "These tensor product spaces are all scalar valued.\nThe velocity is a vector, and a vector requires a mixed vector basis like\n$W_1^{\boldsymbol{N}} = V_1^{\boldsymbol{N}} \times V_0^{\boldsymbol{N}}$. 
The vector basis is created\nin shenfun as", "W1 = VectorSpace([V1, V0])\nW0 = VectorSpace([V0, V0])", "Note that the second vector basis, $W_0^{\\boldsymbol{N}} = V_0^{\\boldsymbol{N}} \\times V_0^{\\boldsymbol{N}}$, uses\nhomogeneous boundary conditions throughout.\nMixed variational form\n<div id=\"sec:mixedform\"></div>\n\nWe now formulate a variational problem using the\nGalerkin method: Find\n$\\boldsymbol{u} \\in W_1^{\\boldsymbol{N}}$ and $p \\in P^{\\boldsymbol{N}}$ such that\n<!-- Equation labels as ordinary links -->\n<div id=\"eq:nsvarform\"></div>\n\n$$\n\\begin{equation}\n\\int_{\\Omega} (\\nu \\nabla^2 \\boldsymbol{u} - \\nabla p ) \\cdot \\boldsymbol{v} \\, dxdy = \\int_{\\Omega} (\\nabla \\cdot \\boldsymbol{u}\\boldsymbol{u}) \\cdot \\boldsymbol{v}\\, dxdy \\quad\\forall \\boldsymbol{v} \\, \\in \\, W_0^{\\boldsymbol{N}}, \\label{eq:nsvarform} \\tag{11} \n\\end{equation}\n$$\n<!-- Equation labels as ordinary links -->\n<div id=\"_auto7\"></div>\n\n$$\n\\begin{equation}\n\\int_{\\Omega} \\nabla \\cdot \\boldsymbol{u} \\, q \\, dxdy = 0 \\quad\\forall q \\, \\in \\, P^{\\boldsymbol{N}}.\n\\label{_auto7} \\tag{12}\n\\end{equation}\n$$\nNote that we are using test functions $\\boldsymbol{v}$ with homogeneous\nboundary conditions.\nThe first obvious issue with Eq (11) is the nonlinearity.\nIn other words we will\nneed to linearize and iterate to be able to solve these equations with\nthe Galerkin method. To this end we will introduce the solution on\niteration $k \\in [0, 1, \\ldots]$ as $\\boldsymbol{u}^k$ and compute the nonlinearity\nusing only known solutions\n$\\int_{\\Omega} (\\nabla \\cdot \\boldsymbol{u}^k\\boldsymbol{u}^k) \\cdot \\boldsymbol{v}\\, dxdy$.\nUsing further integration by parts we end up with the equations to solve\nfor iteration number $k+1$ (using $\\boldsymbol{u} = \\boldsymbol{u}^{k+1}$ and $p=p^{k+1}$\nfor simplicity)\n<!-- Equation labels as ordinary links -->\n<div id=\"eq:nsvarform2\"></div>\n\n$$\n\\begin{equation}\n-\\int_{\\Omega} \\nu \\nabla \\boldsymbol{u} \\, \\colon \\nabla \\boldsymbol{v} \\, dxdy + \\int_{\\Omega} p \\nabla \\cdot \\boldsymbol{v} \\, dxdy = \\int_{\\Omega} (\\nabla \\cdot \\boldsymbol{u}^k\\boldsymbol{u}^k) \\cdot \\boldsymbol{v}\\, dxdy \\quad\\forall \\boldsymbol{v} \\, \\in \\, W_0^{\\boldsymbol{N}}, \\label{eq:nsvarform2} \\tag{13} \n\\end{equation}\n$$\n<!-- Equation labels as ordinary links -->\n<div id=\"_auto8\"></div>\n\n$$\n\\begin{equation}\n\\int_{\\Omega} \\nabla \\cdot \\boldsymbol{u} \\, q \\, dxdy = 0 \\quad\\forall q \\, \\in \\, P^{\\boldsymbol{N}}.\n\\label{_auto8} \\tag{14}\n\\end{equation}\n$$\nNote that the nonlinear term may also be integrated by parts and\nevaluated as $\\int_{\\Omega}-\\boldsymbol{u}^k\\boldsymbol{u}^k \\, \\colon \\nabla \\boldsymbol{v} \\, dxdy$. 
All\nboundary integrals disappear since we are using test functions with\nhomogeneous boundary conditions.\nSince we are to solve for $\\boldsymbol{u}$ and $p$ at the same time, we formulate a\nmixed (coupled) problem: find $(\\boldsymbol{u}, p) \\in W_1^{\\boldsymbol{N}} \\times P^{\\boldsymbol{N}}$\nsuch that\n<!-- Equation labels as ordinary links -->\n<div id=\"_auto9\"></div>\n\n$$\n\\begin{equation}\na((\\boldsymbol{u}, p), (\\boldsymbol{v}, q)) = L((\\boldsymbol{v}, q)) \\quad \\forall (\\boldsymbol{v}, q) \\in W_0^{\\boldsymbol{N}} \\times P^{\\boldsymbol{N}},\n\\label{_auto9} \\tag{15}\n\\end{equation}\n$$\nwhere bilinear ($a$) and linear ($L$) forms are given as\n<!-- Equation labels as ordinary links -->\n<div id=\"_auto10\"></div>\n\n$$\n\\begin{equation}\n a((\\boldsymbol{u}, p), (\\boldsymbol{v}, q)) = -\\int_{\\Omega} \\nu \\nabla \\boldsymbol{u} \\, \\colon \\nabla \\boldsymbol{v} \\, dxdy + \\int_{\\Omega} p \\nabla \\cdot \\boldsymbol{v} \\, dxdy + \\int_{\\Omega} \\nabla \\cdot \\boldsymbol{u} \\, q \\, dxdy, \n\\label{_auto10} \\tag{16}\n\\end{equation}\n$$\n<!-- Equation labels as ordinary links -->\n<div id=\"_auto11\"></div>\n\n$$\n\\begin{equation}\n L((\\boldsymbol{v}, q); \\boldsymbol{u}^{k}) = \\int_{\\Omega} (\\nabla \\cdot \\boldsymbol{u}^{k}\\boldsymbol{u}^{k}) \\cdot \\boldsymbol{v}\\, dxdy.\n\\label{_auto11} \\tag{17}\n\\end{equation}\n$$\nNote that the bilinear form will assemble to a block matrix, whereas the right hand side\nlinear form will assemble to a block vector. The bilinear form does not change\nwith the solution and as such it does not need to be reassembled inside\nan iteration loop.\nThe algorithm used to solve the equations are:\n\n\nSet $k = 0$\n\n\nGuess $\\boldsymbol{u}^0 = (0, 0)$\n\n\nwhile not converged:\n\n\nassemble $L((\\boldsymbol{v}, q); \\boldsymbol{u}^{k})$\n\n\nsolve $a((\\boldsymbol{u}, p), (\\boldsymbol{v}, q)) = L((\\boldsymbol{v}, q); \\boldsymbol{u}^{k})$ for $\\boldsymbol{u}^{k+1}, p^{k+1}$\n\n\ncompute error = $\\int_{\\Omega} (\\boldsymbol{u}^{k+1}-\\boldsymbol{u}^{k})^2 \\, dxdy$\n\n\nif error $<$ some tolerance then converged = True\n\n\n$k$ += $1$\n\n\n\n\nImplementation of solver\nWe will now implement the coupled variational problem described in previous\nsections. First of all, since we want to solve for the velocity and pressure\nin a coupled solver, we have to\ncreate a mixed tensor product space $VQ = W_1^{\\boldsymbol{N}} \\times P^{\\boldsymbol{N}}$ that\ncouples velocity and pressure", "VQ = CompositeSpace([W1, P]) # Coupling velocity and pressure", "We can now create test- and trialfunctions for the coupled space $VQ$,\nand then split them up into components afterwards:", "up = TrialFunction(VQ)\nvq = TestFunction(VQ)\nu, p = up\nv, q = vq", "Notice.\nThe test function v is using homogeneous Dirichlet boundary conditions even\nthough it is derived from VQ, which contains W1. It is currently not (and will\nprobably never be) possible to use test functions with inhomogeneous\nboundary conditions.\nWith the basisfunctions in place we may assemble the different blocks of the\nfinal coefficient matrix. 
For this we also need to specify the kinematic\nviscosity, which is given here in terms of the Reynolds number:", "Re = 100.\nnu = 2./Re\nif family.lower() == 'legendre':\n A = inner(grad(v), -nu*grad(u))\n G = inner(div(v), p)\nelse:\n A = inner(v, nu*div(grad(u)))\n G = inner(v, -grad(p))\nD = inner(q, div(u))", "The assembled subsystems A, G and D are lists containg the different blocks of\nthe complete, coupled, coefficient matrix. A actually contains 4\ntensor product matrices of type TPMatrix. The first two\nmatrices are for vector component zero of the test function v[0] and\ntrial function u[0], the\nmatrices 2 and 3 are for components 1. The first two matrices are as such for\n A[0:2] = inner(grad(v[0]), -nu*grad(u[0]))\n\nBreaking it down the inner product is mathematically\n<!-- Equation labels as ordinary links -->\n<div id=\"eq:partialeq1\"></div>\n\n$$\n\\begin{equation}\n\\label{eq:partialeq1} \\tag{18}\n\\int_{\\Omega}-\\nu \\left(\\frac{\\partial \\boldsymbol{v}[0]}{\\partial x}, \\frac{\\partial \\boldsymbol{v}[0]}{\\partial y}\\right) \\cdot \\left(\\frac{\\partial \\boldsymbol{u}[0]}{\\partial x}, \\frac{\\partial \\boldsymbol{u}[0]}{\\partial y}\\right) dx dy .\n\\end{equation}\n$$\nWe can now insert for test function $\\boldsymbol{v}[0]$\n<!-- Equation labels as ordinary links -->\n<div id=\"_auto12\"></div>\n\n$$\n\\begin{equation}\n\\boldsymbol{v}[0]_{kl} = \\mathcal{X}_k \\mathcal{Y}_l, \\quad (k, l) \\in \\boldsymbol{k}^{N_0-2} \\times \\boldsymbol{l}^{N_1-2}\n\\label{_auto12} \\tag{19}\n\\end{equation}\n$$\nand trialfunction\n<!-- Equation labels as ordinary links -->\n<div id=\"_auto13\"></div>\n\n$$\n\\begin{equation}\n\\boldsymbol{u}[0]{mn} = \\sum{m=0}^{N_0-3} \\sum_{n=0}^{N_1-1} \\hat{\\boldsymbol{u}}[0]_{mn} \\mathcal{X}_m \\mathcal{Y}_n,\n\\label{_auto13} \\tag{20}\n\\end{equation}\n$$\nwhere $\\hat{\\boldsymbol{u}}$ are the unknown degrees of freedom for the velocity vector.\nNotice that the sum over the second\nindex runs all the way to $N_1-1$, whereas the other indices runs to either\n$N_0-3$ or $N_1-3$. 
This is because of the additional basis functions required\nfor the inhomogeneous boundary condition.\nInserting for these basis functions into (18), we obtain after a few trivial\nmanipulations\n<!-- Equation labels as ordinary links -->\n<div id=\"_auto14\"></div>\n\n$$\n\\begin{equation}\n -\\sum_{m=0}^{N_0-3} \\sum_{n=0}^{N_1-1} \\nu \\Big( \\underbrace{\\int_{-1}^{1} \\frac{\\partial \\mathcal{X}k}{\\partial x} \\frac{\\partial \\mathcal{X}_m}{\\partial x} dx \\int{-1}^{1} \\mathcal{Y}l \\mathcal{Y}_n dy}{A[0]} + \\underbrace{\\int_{-1}^{1} \\mathcal{X}k X_m dx \\int{-1}^{1} \\frac{\\partial \\mathcal{Y}l}{\\partial y} \\frac{\\partial \\mathcal{Y}_n}{\\partial y} dy}{A[1]} \\Big) \\hat{\\boldsymbol{u}}[0]_{mn}.\n\\label{_auto14} \\tag{21}\n\\end{equation}\n$$\nWe see that each tensor product matrix (both A[0] and A[1]) is composed as\nouter products of two smaller matrices, one for each dimension.\nThe first tensor product matrix, A[0], is\n<!-- Equation labels as ordinary links -->\n<div id=\"_auto15\"></div>\n\n$$\n\\begin{equation}\n \\underbrace{\\int_{-1}^{1} \\frac{\\partial \\mathcal{X}k}{\\partial x} \\frac{\\partial \\mathcal{X}_m}{\\partial x} dx}{c_{km}} \\underbrace{\\int_{-1}^{1} \\mathcal{Y}l \\mathcal{Y}_n dy}{f_{ln}}\n\\label{_auto15} \\tag{22}\n\\end{equation}\n$$\nwhere $C\\in \\mathbb{R}^{N_0-2 \\times N_1-2}$ and $F \\in \\mathbb{R}^{N_0-2 \\times N_1}$.\nNote that due to the inhomogeneous boundary conditions this last matrix $F$\nis actually not square. However, remember that all contributions from the two highest\ndegrees of freedom ($\\hat{\\boldsymbol{u}}[0]{m,N_1-2}$ and $\\hat{\\boldsymbol{u}}[0]{m,N_1-1}$) are already\nknown and they can, as such, be moved directly over to the right hand side of the\nlinear algebra system that is to be solved. More precisely, we can split the\ntensor product matrix into two contributions and obtain\n$$\n\\sum_{m=0}^{N_0-3}\\sum_{n=0}^{N_1-1} c_{km}f_{ln} \\hat{\\boldsymbol{u}}[0]{m, n} = \\sum{m=0}^{N_0-3}\\sum_{n=0}^{N_1-3}c_{km}f_{ln}\\hat{\\boldsymbol{u}}[0]{m, n} + \\sum{m=0}^{N_0-3}\\sum_{n=N_1-2}^{N_1-1}c_{km}f_{ln}\\hat{\\boldsymbol{u}}[0]_{m, n}, \\quad \\forall (k, l) \\in \\boldsymbol{k}^{N_0-2} \\times \\boldsymbol{l}^{N_1-2},\n$$\nwhere the first term on the right hand side is square and the second term is known and\ncan be moved to the right hand side of the linear algebra equation system.\nAt this point all matrices, both regular and boundary matrices, are\ncontained within the three lists A, G and D. We can now create a solver\nfor block matrices that incorporates these boundary conditions\nautomatically", "sol = la.BlockMatrixSolver(A+G+D)", "In the solver sol there is now a regular block matrix found in\nsol.mat, which is the symmetric\n$$\n\\begin{bmatrix}\n A[0]+A[1] & 0 & G[0] \\ \n 0 & A[2]+A[3] & G[1] \\ \n D[0] & D[1] & 0\n \\end{bmatrix}\n$$\nThe boundary matrices are similarly collected in a boundary block matrix\nin sol.bc_mat. This matrix is used under the hood to modify the\nright hand side.\nWe now have all the matrices we need in order to solve the Navier Stokes equations.\nHowever, we also need some work arrays for iterations", "# Create Function to hold solution. 
Use set_boundary_dofs to fix the degrees\n# of freedom in uh_hat that determines the boundary conditions.\nuh_hat = Function(VQ).set_boundary_dofs()\nui_hat = uh_hat[0]\n\n# New solution (iterative)\nuh_new = Function(VQ).set_boundary_dofs()\nui_new = uh_new[0]", "The nonlinear right hand side also requires some additional attention.\nNonlinear terms are usually computed in physical space before transforming\nto spectral. For this we need to evaluate the velocity vector on the\nquadrature mesh. We also need a rank 2 Array to hold the outer\nproduct $\\boldsymbol{u}\\boldsymbol{u}$. The required arrays and spaces are\ncreated as", "bh_hat = Function(VQ)\n\n# Create arrays to hold velocity vector solution\nui = Array(W1)\n\n# Create work arrays for nonlinear part\nQT = CompositeSpace([W1, W0]) # for uiuj\nuiuj = Array(QT)\nuiuj_hat = Function(QT)", "The right hand side $L((\\boldsymbol{v}, q);\\boldsymbol{u}^{k});$ is computed in its\nown function compute_rhs as", "def compute_rhs(ui_hat, bh_hat):\n global ui, uiuj, uiuj_hat, V1, bh_hat0\n bh_hat.fill(0)\n ui = W1.backward(ui_hat, ui)\n uiuj = outer(ui, ui, uiuj)\n uiuj_hat = uiuj.forward(uiuj_hat)\n bi_hat = bh_hat[0]\n bi_hat = inner(v, div(uiuj_hat), output_array=bi_hat)\n #bi_hat = inner(grad(v), -uiuj_hat, output_array=bi_hat)\n return bh_hat", "Here outer() is a shenfun function that computes the\nouter product of two vectors and returns the product in a rank two\narray (here uiuj). With uiuj forward transformed to uiuj_hat\nwe can assemble the linear form either as inner(v, div(uiuj_hat) or\ninner(grad(v), -uiuj_hat).\nNow all that remains is to guess an initial solution and solve\niteratively until convergence. For initial solution we simply set the\nvelocity and pressure to zero. With an initial solution we are ready\nto start iterating.\nHowever, for convergence it is necessary to add some underrelaxation $\\alpha$,\nand update the solution each time step as\n$$\n\\begin{align}\n\\hat{\\boldsymbol{u}}^{k+1} &= \\alpha \\hat{\\boldsymbol{u}}^ + (1-\\alpha)\\hat{\\boldsymbol{u}}^{k},\\ \n\\hat{p}^{k+1} &= \\alpha \\hat{p}^ + (1-\\alpha)\\hat{p}^{k},\n\\end{align}\n$$\nwhere $\\hat{\\boldsymbol{u}}^$ and $\\hat{p}^$ are the newly computed velocity\nand pressure returned from M.solve. Without underrelaxation the solution\nwill quickly blow up. The iteration loop goes as follows", "converged = False\ncount = 0\nalfa = 0.5\nwhile not converged:\n count += 1\n bh_hat = compute_rhs(ui_hat, bh_hat)\n uh_new = sol(bh_hat, u=uh_new, constraints=((2, 0, 0),))\n error = np.linalg.norm(ui_hat-ui_new)\n uh_hat[:] = alfa*uh_new + (1-alfa)*uh_hat\n converged = abs(error) < 1e-8 or count >= 100\n print('Iteration %d Error %2.4e' %(count, error))\n\nup = uh_hat.backward()\nu, p = up\n\nX = V0.local_mesh(True)\nplt.figure()\nplt.quiver(X[0], X[1], u[0], u[1])", "Note that the constraints=((2, 0, 0),) keyword argument\nensures that the pressure integrates to zero, i.e., $\\int_{\\Omega} p \\omega dxdy=0$.\nHere the number 2 tells us that block component 2 in the mixed space\n(the pressure) should be integrated, dof 0 should be fixed, and it\nshould be fixed to 0.\nThe last three lines plots velocity vectors, like also seen in the figure\nin the top of this demo. The solution is apparently nice\nand smooth, but hidden underneath are Gibbs oscillations from the\ncorner discontinuities. This is painfully obvious when switching from\nLegendre to Chebyshev polynomials. With Chebyshev the same plot looks\nlike the Figure below. 
However, choosing instead the\nregularized lid, with no discontinuities, the solutions will be nice and\nsmooth, both for Legendre and Chebyshev polynomials.\n<!-- dom:FIGURE: [https://raw.githack.com/spectralDNS/spectralutilities/master/figures/DrivenCavityCheb.png] Velocity vectors for Re=100 using Chebyshev. <div id=\"fig:drivencavitycheb\"></div> -->\n<!-- begin figure -->\n<div id=\"fig:drivencavitycheb\"></div>\n\n<p>Velocity vectors for Re=100 using Chebyshev.</p>\n<img src=\"https://raw.githack.com/spectralDNS/spectralutilities/master/figures/DrivenCavityCheb.png\" >\n<!-- end figure -->\n\nComplete solver\n<div id=\"sec:nscomplete\"></div>\n\nA complete solver can be found in demo NavierStokesDrivenCavity.py." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
scikit-optimize/scikit-optimize.github.io
0.8/notebooks/auto_examples/plots/partial-dependence-plot-with-categorical.ipynb
bsd-3-clause
[ "%matplotlib inline", "Partial Dependence Plots with categorical values\nSigurd Carlsen Feb 2019\nHolger Nahrstaedt 2020\n.. currentmodule:: skopt\nPlot objective now supports optional use of partial dependence as well as\ndifferent methods of defining parameter values for dependency plots.", "print(__doc__)\nimport sys\nfrom skopt.plots import plot_objective\nfrom skopt import forest_minimize\nimport numpy as np\nnp.random.seed(123)\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom sklearn.datasets import load_breast_cancer\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.model_selection import cross_val_score\nfrom skopt.space import Integer, Categorical\nfrom skopt import plots, gp_minimize\nfrom skopt.plots import plot_objective", "objective function\nHere we define a function that we evaluate.", "def objective(params):\n clf = DecisionTreeClassifier(\n **{dim.name: val for dim, val in\n zip(SPACE, params) if dim.name != 'dummy'})\n return -np.mean(cross_val_score(clf, *load_breast_cancer(True)))", "Bayesian optimization", "SPACE = [\n Integer(1, 20, name='max_depth'),\n Integer(2, 100, name='min_samples_split'),\n Integer(5, 30, name='min_samples_leaf'),\n Integer(1, 30, name='max_features'),\n Categorical(list('abc'), name='dummy'),\n Categorical(['gini', 'entropy'], name='criterion'),\n Categorical(list('def'), name='dummy'),\n]\n\nresult = gp_minimize(objective, SPACE, n_calls=20)", "Partial dependence plot\nHere we see an example of using partial dependence. Even when setting\nn_points all the way down to 10 from the default of 40, this method is\nstill very slow. This is because partial dependence calculates 250 extra\npredictions for each point on the plots.", "_ = plot_objective(result, n_points=10)", "Plot without partial dependence\nHere we plot without partial dependence. We see that it is a lot faster.\nAlso the values for the other parameters are set to the default \"result\"\nwhich is the parameter set of the best observed value so far. In the case\nof funny_func this is close to 0 for all parameters.", "_ = plot_objective(result, sample_source='result', n_points=10)", "Modify the shown minimum\nHere we try with setting the other parameters to something other than\n\"result\". When dealing with categorical dimensions we can't use\n'expected_minimum'. Therefore we try with \"expected_minimum_random\"\nwhich is a naive way of finding the minimum of the surrogate by only\nusing random sampling. n_minimum_search sets the number of random samples,\nwhich is used to find the minimum", "_ = plot_objective(result, n_points=10, sample_source='expected_minimum_random',\n minimum='expected_minimum_random', n_minimum_search=10000)", "Set a minimum location\nLastly we can also define these parameters ourselfs by\nparsing a list as the pars argument:", "_ = plot_objective(result, n_points=10, sample_source=[15, 4, 7, 15, 'b', 'entropy', 'e'],\n minimum=[15, 4, 7, 15, 'b', 'entropy', 'e'])" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
HemantTiwariGitHub/AndroidNDSunshineProgress
Question_Answering_with_SQuAD_2_0_20210102.ipynb
apache-2.0
[ "<a href=\"https://colab.research.google.com/github/HemantTiwariGitHub/AndroidNDSunshineProgress/blob/master/Question_Answering_with_SQuAD_2_0_20210102.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nQuestion answering comes in many forms. In this example, we’ll look at the particular type of extractive QA that involves answering a question about a passage by highlighting the segment of the passage that answers the question. This involves fine-tuning a model which predicts a start position and an end position in the passage. We will use the Stanford Question Answering Dataset (SQuAD) 2.0.\nWe will start by downloading the data:\nNote :\nPlease write your code in the cells with the \"Your code here\" placeholder.\nDownload SQuAD 2.0 Data\nNote : This dataset can be explored in the Hugging Face model hub (SQuAD V2), and can be alternatively downloaded with the 🤗 NLP library with load_dataset(\"squad_v2\").", "!mkdir squad\n!wget https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json -O /content/squad/train-v2.0.json\n!wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json -O /content/squad/dev-v2.0.json\n\nimport json\nfrom pathlib import Path\n\ndef loadJSONData(filename):\n with open(filename) as jsonDataFile:\n data = json.load(jsonDataFile)\n return data\n\n#Data has Multiple Titles\n#Every Title has Multiple Paragraphs and Each Para has Text in Context\n#Every Paragraphs has Multiple Questions and Every Question has multiple answers with Answer start index\n#If Answer is plausible , is_impossible is False.\n\ndef preprocessSQUAD(JSONData):\n contextList = []\n questionsList = []\n answersList = []\n\n titlesCount = len(JSONData['data'])\n BaseData = JSONData['data']\n print(\"Length Of JSON Data : \", titlesCount)\n for titleID in (range(titlesCount)):\n title = BaseData[titleID]['title']\n #print(\"Title : \", title);\n paragraphs = BaseData[titleID]['paragraphs']\n paragraphCount = len(paragraphs)\n\n for paraID in range(paragraphCount):\n context = paragraphs[paraID]['context']\n #print(\"Context : \",context);\n \n questions = paragraphs[paraID]['qas']\n questionCount = len(questions)\n \n for questionID in range(questionCount):\n \n # No Need to Process Questions whose Answers are not present\n if (questions[questionID]['is_impossible'] == True):\n continue\n\n questionText = questions[questionID]['question']\n answers = questions[questionID]['answers']\n\n #The SQUAD answer is a List and in DEV most of times there are multiple answers\n for answer in answers:\n #Prepare The list of Context, Question and Answers parallely.\n contextList.append(context)\n questionsList.append(questionText)\n answersList.append(answer)\n\n\n print(\"Length of Context, Questions and Answers\" , len (contextList), \" , \", len(questionsList), \" , \", len(answersList) ) \n return contextList, questionsList, answersList\n", "Each split is in a structured json file with a number of questions and answers for each passage (or context). 
We’ll take this apart into parallel lists of contexts, questions, and answers (note that the contexts here are repeated since there are multiple questions per context):", "def read_squad(path):\n dataInJSON = loadJSONData(path)\n return preprocessSQUAD(dataInJSON)\n\n\ntrain_contexts, train_questions, train_answers = read_squad('/content/squad/train-v2.0.json')\nprint(\"Length of Context, Questions and Answers\" , len (train_contexts), \" , \", len(train_questions), \" , \", len(train_answers) ) \nval_contexts, val_questions, val_answers = read_squad('/content/squad/dev-v2.0.json')\nprint(\"Length of Context, Questions and Answers\" , len (val_contexts), \" , \", len(val_questions), \" , \", len(val_answers) ) \n", "The contexts and questions are just strings. The answers are dicts containing the subsequence of the passage with the correct answer as well as an integer indicating the character at which the answer begins. In order to train a model on this data we need (1) the tokenized context/question pairs, and (2) integers indicating at which token positions the answer begins and ends.\nFirst, let’s get the character position at which the answer ends in the passage (we are given the starting position). Sometimes SQuAD answers are off by one or two characters, so we will also adjust for that.", "def add_end_idx(answers, contexts):\n offByOneCount = 0\n offByTwoCount = 0\n exactCount = 0\n for answer, context in zip(answers, contexts):\n \n # extract Answers and Start Positions\n #print(answer)\n answerText = answer['text']\n answerStartIndex = answer['answer_start']\n \n # calculate the end positions\n answerEndIndex = answerStartIndex + len (answerText)\n #print(\"Answer : \",answerText)\n #print(\"AnswerStartIndex : \",answerStartIndex,\" AnswerEndIndex : \",answerEndIndex ) \n\n # Check if Answers are off by 1 or 2 and fix\n if context[answerStartIndex:answerEndIndex] == answerText:\n answer['answer_end'] = answerEndIndex\n exactCount = exactCount + 1\n\n # Answer is off by 1 char \n elif context[answerStartIndex - 1:answerEndIndex - 1] == answerText:\n answer['answer_start'] = answerStartIndex - 1\n answer['answer_end'] = answerEndIndex - 1 \n offByOneCount = offByOneCount + 1\n\n elif context[answerStartIndex + 1:answerEndIndex + 1] == answerText:\n answer['answer_start'] = answerStartIndex + 1\n answer['answer_end'] = answerEndIndex + 1 \n offByOneCount = offByOneCount + 1\n\n # Answer is off by 2 chars\n elif context[answerStartIndex - 2:answerEndIndex - 2] == answerText:\n answer['answer_start'] = answerStartIndex - 2\n answer['answer_end'] = answerEndIndex - 2\n offByTwoCount = offByTwoCount + 1\n \n elif context[answerStartIndex + 2:answerEndIndex + 2] == answerText:\n answer['answer_start'] = answerStartIndex + 2\n answer['answer_end'] = answerEndIndex + 2\n offByTwoCount = offByTwoCount + 1\n\n else:\n print(\"!!Answer is outside correctable range!!\") \n\n print (\"OffByOne : \" , offByOneCount, \" , OffByTwo : \", offByTwoCount, \" exact : \", exactCount)\nadd_end_idx(train_answers, train_contexts)\nadd_end_idx(val_answers, val_contexts)", "Now train_answers and val_answers include the character end positions and the corrected start positions. Next, let’s tokenize our context/question pairs. 
🤗 Tokenizers can accept parallel lists of sequences and encode them together as sequence pairs.", "!pip install transformers==4.0.1\nfrom transformers import DistilBertTokenizerFast\ntokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')\n\n# Your code here\ntrain_encodings = tokenizer(train_contexts, train_questions, truncation=True, padding=True)\n\n# Your code here\nval_encodings = tokenizer(val_contexts, val_questions, truncation=True, padding=True)", "Next we need to convert our character start/end positions to token start/end positions. When using 🤗 Fast Tokenizers, we can use the <b>built in char_to_token()</b> method.", "def add_token_positions(encodings, answers):\n start_positions = []\n end_positions = []\n \n # Your code here\n for answerIndex in range(len(answers)):\n #print (answers[answerIndex])\n start_positions.append(encodings.char_to_token(answerIndex, answers[answerIndex]['answer_start']))\n end_positions.append(encodings.char_to_token(answerIndex, answers[answerIndex]['answer_end'] - 1))\n \n # if None, the answer passage has been truncated\n if start_positions[-1] is None:\n start_positions[-1] = tokenizer.model_max_length\n \n if end_positions[-1] is None:\n end_positions[-1] = tokenizer.model_max_length\n\n encodings.update({'start_positions': start_positions, 'end_positions': end_positions})\n\nadd_token_positions(train_encodings, train_answers)\nadd_token_positions(val_encodings, val_answers)", "Our data is ready. Let’s just put it in a PyTorch/TensorFlow dataset so that we can easily use it for training. In PyTorch, we define a custom Dataset class. In TensorFlow, we pass a tuple of (inputs_dict, labels_dict) to the from_tensor_slices method.", "import tensorflow as tf\n\n# Your code here\ntrain_dataset = tf.data.Dataset.from_tensor_slices((\n {key: train_encodings[key] for key in ['input_ids', 'attention_mask']},\n {key: train_encodings[key] for key in ['start_positions', 'end_positions']}\n))\n\n# Your code here\nval_dataset = tf.data.Dataset.from_tensor_slices((\n {key: val_encodings[key] for key in ['input_ids', 'attention_mask']},\n {key: val_encodings[key] for key in ['start_positions', 'end_positions']}\n))", "Now we can use a DistilBert model with a QA head for training:", "from transformers import TFDistilBertForQuestionAnswering\n\n# Your code here\nmodel = TFDistilBertForQuestionAnswering.from_pretrained(\"distilbert-base-uncased\")", "The data and model are both ready to go. You can train the model with Trainer/TFTrainer exactly as in the sequence classification example above. If using native PyTorch, replace labels with start_positions and end_positions in the training example. If using Keras’s fit, we need to make a minor modification to handle this example since it involves multiple model outputs.", "# Keras will expect a tuple when dealing with labels\n\n# Write your code here to replace labels with start_positions and end_positions in the training example\ntrain_dataset = train_dataset.map(lambda x, y: (x, (y['start_positions'], y['end_positions'])))\n\n# Keras will assign a separate loss for each output and add them together. 
So we'll just use the standard CE loss\n# instead of using the built-in model.compute_loss, which expects a dict of outputs and averages the two terms.\n# Note that this means the loss will be 2x of when using TFTrainer since we're adding instead of averaging them.\n\n# Your code here\nloss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)\nmodel.distilbert.return_dict = False # if using 🤗 Transformers >3.02, make sure outputs are tuples\n\n# Your code here\noptimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)\n\nmodel.compile(optimizer=optimizer, loss=loss) # can also use any keras loss fn\nmodel.fit(train_dataset.shuffle(1000).batch(16), epochs=3, batch_size=16)" ]
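The notebook above stops at training. As a rough, hedged sketch of how the fine-tuned model could be used for inference, the helper below (answer_question is a made-up name, not part of the notebook) tokenizes a context/question pair the same way as during training, takes the argmax of the start and end logits, and decodes the predicted token span; the tuple-vs-attribute handling is an assumption to cover both return_dict settings.

```python
import tensorflow as tf

def answer_question(question, context, model, tokenizer):
    # Tokenize as a (context, question) pair, matching the training encodings above
    inputs = tokenizer(context, question, truncation=True, padding=True, return_tensors="tf")
    outputs = model(inputs)
    # Depending on the transformers version / return_dict setting, the output is either
    # a (start_logits, end_logits) tuple or an object exposing those attributes
    start_logits = outputs[0] if isinstance(outputs, tuple) else outputs.start_logits
    end_logits = outputs[1] if isinstance(outputs, tuple) else outputs.end_logits
    start = int(tf.argmax(start_logits, axis=1)[0])
    end = int(tf.argmax(end_logits, axis=1)[0])
    # Decode the predicted token span back into a string
    return tokenizer.decode(inputs["input_ids"][0][start:end + 1])

# Example usage (the context and question here are made up):
# answer_question("Who created SQuAD?", "SQuAD was created by researchers at Stanford.", model, tokenizer)
```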
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
fdmazzone/Ecuaciones_Diferenciales
examenes/ejer.5.ipynb
gpl-2.0
[ "from sympy import *\ninit_printing() #muestra símbolos más agradab\nR=lambda n,d: Rational(n,d)\n\nx,y,a,b,c,d,xi,eta=symbols('x,y,a,b,c,d,xi,eta',real=true)", "<h1>Ejercicio parcial:<h1> Resolver $\\frac{dy}{dx}=\\frac{x y^{4}}{3} - \\frac{2 y}{3 x} + \\frac{1}{3 x^{3} y^{2}}$. Ayuda: hace el anzats $\\xi=ax+c$ y $\\eta=bx+d$ para encontrar las simetrías", "#cargamos la función\nf=x*y**4/3-R(2,3)*y/x+R(1,3)/x**3/y**2\nf", "Hacemos el anzats para encontrar $\\xi=ax+c$ y $\\eta=bx+d$", "xi=a*x+c\neta=b*y+d\nxi, eta\n\n(eta.diff(x)+(eta.diff(y)-xi.diff(x))*f-xi*f.diff(x)-eta*f.diff(y)).factor()", "Luego, $c=d=0$, $b=-\\frac23 a$, podemos tomar $a=1$", "L=(eta.diff(x)+(eta.diff(y)-xi.diff(x))*f-xi*f.diff(x)-eta*f.diff(y)).factor()\n#L.subs({c:0,d:0})\nL=(2*a*x**3*y**4 + 2*a*x*y + 3*b*x**3*y**4 + 3*b*x*y + c*x**2*y**4 + 3*c*y + 4*d*x**3*y**3 + 2*d*x)\npoly(L,x,y)", "Encontremos las coordenadas polares", "y=Function('y')(x)\nxi=x\neta=-R(2,3)*y\nxi, eta\n\ndsolve(Eq(y.diff(x),eta/xi),y)\n\ny=symbols('y')\nr=x**2*y**3\nr\n\ns=integrate(xi**(-1),x)\ns\n\ns=log(abs(x))\nr, s", "Encontremos la ecuación $\\frac{dr}{ds}=??$", "y=Function('y')(x)\nr=x**2*y**3\ns=log(abs(x))\nf=x*y**4/3-R(2,3)*y/x+R(1,3)/x**3/y**2\nr,s,f \n\n(r.diff(x)/s.diff(x)).subs(y.diff(x),f).simplify()", "Resolvamos $\\frac{dr}{ds}=1+r^2$", "r=symbols('r')\ns=symbols('s')\nC=symbols('C')\nsolEcuacionPolares=Eq(integrate((1+r**2)**(-1),r),s+C)\nsolEcuacionPolares", "Expresemos la ecuación en coordenadas cartesianas", "solEcuacionCart=solEcuacionPolares.subs(r,x**2*y**3).subs(s,log(abs(x)))\nsolEcuacionCart\n\nec1=Eq(solEcuacionCart.lhs.diff(x),solEcuacionCart.rhs.diff(x))\nec1\n\nec2=Eq(ec1.lhs,1/x)\nec2\n\nsolve(ec2,y.diff(x))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
anonyXmous/CapstoneProject
Mini_Project_Linear_Regression.ipynb
unlicense
[ "Regression in Python\n\nThis is a very quick run-through of some basic statistical concepts, adapted from Lab 4 in Harvard's CS109 course. Please feel free to try the original lab if you're feeling ambitious :-) The CS109 git repository also has the solutions if you're stuck.\n\nLinear Regression Models\nPrediction using linear regression\nSome re-sampling methods \nTrain-Test splits\nCross Validation\n\n\n\nLinear regression is used to model and predict continuous outcomes while logistic regression is used to model binary outcomes. We'll see some examples of linear regression as well as Train-test splits.\nThe packages we'll cover are: statsmodels, seaborn, and scikit-learn. While we don't explicitly teach statsmodels and seaborn in the Springboard workshop, those are great libraries to know.\n\n<img width=600 height=300 src=\"https://imgs.xkcd.com/comics/sustainable.png\"/>", "# special IPython command to prepare the notebook for matplotlib and other libraries\n%pylab inline \n\nimport numpy as np\nimport pandas as pd\nimport scipy.stats as stats\nimport matplotlib.pyplot as plt\nimport sklearn\n\nimport seaborn as sns\n\n# special matplotlib argument for improved plots\nfrom matplotlib import rcParams\nsns.set_style(\"whitegrid\")\nsns.set_context(\"poster\")\n", "Part 1: Linear Regression\nPurpose of linear regression\n\n<div class=\"span5 alert alert-info\">\n\n<p> Given a dataset $X$ and $Y$, linear regression can be used to: </p>\n<ul>\n <li> Build a <b>predictive model</b> to predict future values of $X_i$ without a $Y$ value. </li>\n <li> Model the <b>strength of the relationship</b> between each dependent variable $X_i$ and $Y$</li>\n <ul>\n <li> Sometimes not all $X_i$ will have a relationship with $Y$</li>\n <li> Need to figure out which $X_i$ contributes most information to determine $Y$ </li>\n </ul>\n <li>Linear regression is used in so many applications that I won't warrant this with examples. It is in many cases, the first pass prediction algorithm for continuous outcomes. </li>\n</ul>\n</div>\n\nA brief recap (feel free to skip if you don't care about the math)\n\nLinear Regression is a method to model the relationship between a set of independent variables $X$ (also knowns as explanatory variables, features, predictors) and a dependent variable $Y$. This method assumes the relationship between each predictor $X$ is linearly related to the dependent variable $Y$. \n$$ Y = \\beta_0 + \\beta_1 X + \\epsilon$$\nwhere $\\epsilon$ is considered as an unobservable random variable that adds noise to the linear relationship. This is the simplest form of linear regression (one variable), we'll call this the simple model. \n\n\n$\\beta_0$ is the intercept of the linear model\n\n\nMultiple linear regression is when you have more than one independent variable\n\n$X_1$, $X_2$, $X_3$, $\\ldots$\n\n\n\n$$ Y = \\beta_0 + \\beta_1 X_1 + \\ldots + \\beta_p X_p + \\epsilon$$ \n\nBack to the simple model. The model in linear regression is the conditional mean of $Y$ given the values in $X$ is expressed a linear function. \n\n$$ y = f(x) = E(Y | X = x)$$ \n\nhttp://www.learner.org/courses/againstallodds/about/glossary.html\n\nThe goal is to estimate the coefficients (e.g. $\\beta_0$ and $\\beta_1$). We represent the estimates of the coefficients with a \"hat\" on top of the letter. 
\n\n$$ \\hat{\\beta}_0, \\hat{\\beta}_1 $$\n\nOnce you estimate the coefficients $\\hat{\\beta}_0$ and $\\hat{\\beta}_1$, you can use these to predict new values of $Y$\n\n$$\\hat{y} = \\hat{\\beta}_0 + \\hat{\\beta}_1 x_1$$\n\nHow do you estimate the coefficients? \nThere are many ways to fit a linear regression model\nThe method called least squares is one of the most common methods\nWe will discuss least squares today\n\n\n\nEstimating $\\hat\\beta$: Least squares\n\nLeast squares is a method that can estimate the coefficients of a linear model by minimizing the difference between the following: \n$$ S = \\sum_{i=1}^N r_i = \\sum_{i=1}^N (y_i - (\\beta_0 + \\beta_1 x_i))^2 $$\nwhere $N$ is the number of observations. \n\nWe will not go into the mathematical details, but the least squares estimates $\\hat{\\beta}_0$ and $\\hat{\\beta}_1$ minimize the sum of the squared residuals $r_i = y_i - (\\beta_0 + \\beta_1 x_i)$ in the model (i.e. makes the difference between the observed $y_i$ and linear model $\\beta_0 + \\beta_1 x_i$ as small as possible). \n\nThe solution can be written in compact matrix notation as\n$$\\hat\\beta = (X^T X)^{-1}X^T Y$$ \nWe wanted to show you this in case you remember linear algebra, in order for this solution to exist we need $X^T X$ to be invertible. Of course this requires a few extra assumptions, $X$ must be full rank so that $X^T X$ is invertible, etc. This is important for us because this means that having redundant features in our regression models will lead to poorly fitting (and unstable) models. We'll see an implementation of this in the extra linear regression example.\nNote: The \"hat\" means it is an estimate of the coefficient. \n\nPart 2: Boston Housing Data Set\nThe Boston Housing data set contains information about the housing values in suburbs of Boston. This dataset was originally taken from the StatLib library which is maintained at Carnegie Mellon University and is now available on the UCI Machine Learning Repository. \nLoad the Boston Housing data set from sklearn\n\nThis data set is available in the sklearn python module which is how we will access it today.", "from sklearn.datasets import load_boston\nboston = load_boston()\n\nboston.keys()\n\nboston.data.shape\n\n# Print column names\nprint (boston.feature_names)\n\n# Print description of Boston housing data set\nprint (boston.DESCR)", "Now let's explore the data set itself.", "bos = pd.DataFrame(boston.data)\nbos.head()", "There are no column names in the DataFrame. Let's add those.", "bos.columns = boston.feature_names\nbos.head()", "Now we have a pandas DataFrame called bos containing all the data we want to use to predict Boston Housing prices. Let's create a variable called PRICE which will contain the prices. This information is contained in the target data.", "print (boston.target.shape)\n\nbos['PRICE'] = boston.target\nbos.head()", "EDA and Summary Statistics\n\nLet's explore this data set. First we use describe() to get basic summary statistics for each of the columns.", "bos.describe()", "Scatter plots\n\nLet's look at some scatter plots for three variables: 'CRIM', 'RM' and 'PTRATIO'. \nWhat kind of relationship do you see? e.g. positive, negative? linear? non-linear?", "plt.scatter(bos.CRIM, bos.PRICE)\nplt.xlabel(\"Per capita crime rate by town (CRIM)\")\nplt.ylabel(\"Housing Price\")\nplt.title(\"Relationship between CRIM and Price\")", "Your turn: Create scatter plots between RM and PRICE, and PTRATIO and PRICE. 
What do you notice?", "#your turn: scatter plot between *RM* and *PRICE*\nplt.scatter(bos.RM, bos.PRICE)\nplt.xlabel(\"average number of rooms per dwelling (RM)\")\nplt.ylabel(\"Housing Price\")\nplt.title(\"Relationship between RM and Price\")\n\n#your turn: scatter plot between *PTRATIO* and *PRICE*\nplt.scatter(bos.PTRATIO, bos.PRICE)\nplt.xlabel(\"pupil-teacher ratio by town (PTRATIO)\")\nplt.ylabel(\"Housing Price\")\nplt.title(\"Relationship between PTRATIO and Price\")", "Your turn: What are some other numeric variables of interest? Plot scatter plots with these variables and PRICE.", "#your turn: create some other scatter plots\nplt.scatter(bos.AGE, bos.PRICE)\nplt.xlabel(\"proportion of owner-occupied units built prior to 1940 (AGE)\")\nplt.ylabel(\"Housing Price\")\nplt.title(\"Relationship between House Ages and Price\")", "Scatter Plots using Seaborn\n\nSeaborn is a cool Python plotting library built on top of matplotlib. It provides convenient syntax and shortcuts for many common types of plots, along with better-looking defaults.\nWe can also use seaborn regplot for the scatterplot above. This provides automatic linear regression fits (useful for data exploration later on). Here's one example below.", "sns.regplot(y=\"PRICE\", x=\"RM\", data=bos, fit_reg = True)", "Histograms\n\nHistograms are a useful way to visually summarize the statistical properties of numeric variables. They can give you an idea of the mean and the spread of the variables as well as outliers.", "plt.hist(bos.CRIM)\nplt.title(\"CRIM\")\nplt.xlabel(\"Crime rate per capita\")\nplt.ylabel(\"Frequency\")\nplt.show()", "Your turn: Plot separate histograms and one for RM, one for PTRATIO. Any interesting observations?", "#your turn\nplt.hist(bos.RM)\nplt.title(\"RM\")\nplt.xlabel(\"average number of rooms per dwelling\")\nplt.ylabel(\"Frequency\")\nplt.show()\n\n\n# Histogram for pupil-teacher ratio by town\nplt.hist(bos.PTRATIO)\nplt.title(\"PTRATIO\")\nplt.xlabel(\"pupil-teacher ratio by town\")\nplt.ylabel(\"Frequency\")\nplt.show()", "Linear regression with Boston housing data example\n\nHere, \n$Y$ = boston housing prices (also called \"target\" data in python)\nand\n$X$ = all the other features (or independent variables)\nwhich we will use to fit a linear regression model and predict Boston housing prices. We will use the least squares method as the way to estimate the coefficients. \nWe'll use two ways of fitting a linear regression. We recommend the first but the second is also powerful in its features.\nFitting Linear Regression using statsmodels\n\nStatsmodels is a great Python library for a lot of basic and inferential statistics. It also provides basic regression functions using an R-like syntax, so it's commonly used by statisticians. While we don't cover statsmodels officially in the Data Science Intensive, it's a good library to have in your toolbox. Here's a quick example of what you could do with it.", "# Import regression modules\n# ols - stands for Ordinary least squares, we'll use this\nimport statsmodels.api as sm\nfrom statsmodels.formula.api import ols\n\n# statsmodels works nicely with pandas dataframes\n# The thing inside the \"quotes\" is called a formula, a bit on that below\nm = ols('PRICE ~ RM',bos).fit()\nprint (m.summary())", "Interpreting coefficients\nThere is a ton of information in this output. But we'll concentrate on the coefficient table (middle table). 
We can interpret the RM coefficient (9.1021) by first noticing that the p-value (under P&gt;|t|) is so small, basically zero. We can interpret the coefficient as follows: if we compare two groups of towns, one where the average number of rooms is say $5$ and the other group is the same except that they all have $6$ rooms, then for these two groups the average difference in house prices is about $9.1$ (in thousands), so about a $\\$9,100$ difference. The confidence interval gives us a range of plausible values for this difference, about ($\\$8,279, \\$9,925$), definitely not chump change. \nstatsmodels formulas\n\nThis formula notation will seem familiar to R users, but will take some getting used to for people coming from other languages or who are new to statistics.\nThe formula gives instructions for the general structure of a regression call. For statsmodels (ols or logit) calls you need to have a Pandas dataframe with column names that you will add to your formula. In the below example you need a pandas data frame that includes the columns named (Outcome, X1, X2, ...), but you don't need to build a new dataframe for every regression. Use the same dataframe with all these things in it. The structure is very simple:\nOutcome ~ X1\nBut of course we want to be able to handle more complex models, for example multiple regression is done like this:\nOutcome ~ X1 + X2 + X3\nThis is the very basic structure but it should be enough to get you through the homework. Things can get much more complex, for a quick run-down of further uses see the statsmodels help page.\nLet's see how our model actually fits our data. We can see below that there is a ceiling effect, we should probably look into that. Also, for large values of $Y$ we get underpredictions; most predictions are below the 45-degree line. \nYour turn: Create a scatterplot between the predicted prices, available in m.fittedvalues, and the original prices. How does the plot look?", "# your turn\nplt.scatter(bos.PRICE, m.fittedvalues)\nplt.xlabel(\"Housing Price\")\nplt.ylabel(\"Predicted Housing Price\")\nplt.title(\"Relationship between Predicted and Actual Price\")", "Fitting Linear Regression using sklearn", "from sklearn.linear_model import LinearRegression\nX = bos.drop('PRICE', axis = 1)\n\n# This creates a LinearRegression object\nlm = LinearRegression()\nlm", "What can you do with a LinearRegression object?\n\nCheck out the scikit-learn docs here. We have listed the main functions here.\nMain functions | Description\n--- | --- \nlm.fit() | Fit a linear model\nlm.predict() | Predict Y using the linear model with estimated coefficients\nlm.score() | Returns the coefficient of determination (R^2). A measure of how well observed outcomes are replicated by the model, as the proportion of total variation of outcomes explained by the model\nWhat output can you get?", "# Look inside lm object\n#lm.<tab>\n", "Output | Description\n--- | --- \nlm.coef_ | Estimated coefficients\nlm.intercept_ | Estimated intercept \nFit a linear model\n\nThe lm.fit() function estimates the coefficients of the linear regression using least squares.", "# Use all 13 predictors to fit linear regression model\nlm.fit(X, bos.PRICE)", "Your turn: How would you change the model to not fit an intercept term? Would you recommend not having an intercept?\nEstimated intercept and coefficients\nLet's look at the estimated coefficients from the linear model using lm.intercept_ and lm.coef_. 
\nAfter we have fit our linear regression model using the least squares method, we want to see the estimates of our coefficients $\\beta_0$, $\\beta_1$, ..., $\\beta_{13}$: \n$$ \\hat{\\beta}_0, \\hat{\\beta}_1, \\ldots, \\hat{\\beta}_{13} $$", "print ('Estimated intercept coefficient:', lm.intercept_)\n\nprint ('Number of coefficients:', len(lm.coef_))\n\n# The coefficients\npd.DataFrame(list(zip(X.columns, lm.coef_)), columns = ['features', 'estimatedCoefficients'])", "Predict Prices\nWe can calculate the predicted prices ($\\hat{Y}_i$) using lm.predict. \n$$ \\hat{Y}_i = \\hat{\\beta}_0 + \\hat{\\beta}_1 X_1 + \\ldots + \\hat{\\beta}_{13} X_{13} $$", "# first five predicted prices\nlm.predict(X)[0:5]", "Your turn: \n\nHistogram: Plot a histogram of all the predicted prices\nScatter Plot: Let's plot the true prices compared to the predicted prices to see how they disagree (we did this with statsmodels before).", "# your turn\n# Plot a histogram of all the predicted prices\nplt.hist(lm.predict(X))\nplt.title(\"Predicted Prices\")\nplt.xlabel(\"Predicted Prices\")\nplt.ylabel(\"Frequency\")\nplt.show()\n\n# Let's plot the true prices compared to the predicted prices to see how they disagree \nplt.scatter(bos.PRICE, lm.predict(X))\nplt.xlabel(\"Housing Price\")\nplt.ylabel(\"Predicted Housing Price\")\nplt.title(\"Relationship between Predicted and Actual Price\")\n", "Residual sum of squares\nLet's calculate the residual sum of squares \n$$ S = \\sum_{i=1}^N r_i = \\sum_{i=1}^N (y_i - (\\beta_0 + \\beta_1 x_i))^2 $$", "print (np.sum((bos.PRICE - lm.predict(X)) ** 2))", "Mean squared error\n\nThis is simply the mean of the residual sum of squares.\nYour turn: Calculate the mean squared error and print it.", "#your turn\nprint ('Mean squared error: ', np.mean((bos.PRICE - lm.predict(X)) ** 2))\n", "Relationship between PTRATIO and housing price\n\nTry fitting a linear regression model using only the 'PTRATIO' (pupil-teacher ratio by town)\nCalculate the mean squared error.", "lm = LinearRegression()\nlm.fit(X[['PTRATIO']], bos.PRICE)\n\nmsePTRATIO = np.mean((bos.PRICE - lm.predict(X[['PTRATIO']])) ** 2)\nprint (msePTRATIO)", "We can also plot the fitted linear regression line.", "plt.scatter(bos.PTRATIO, bos.PRICE)\nplt.xlabel(\"Pupil-to-Teacher Ratio (PTRATIO)\")\nplt.ylabel(\"Housing Price\")\nplt.title(\"Relationship between PTRATIO and Price\")\n\nplt.plot(bos.PTRATIO, lm.predict(X[['PTRATIO']]), color='blue', linewidth=3)\nplt.show()", "Your turn\n\nTry fitting a linear regression model using three independent variables\n\n'CRIM' (per capita crime rate by town)\n'RM' (average number of rooms per dwelling)\n'PTRATIO' (pupil-teacher ratio by town)\n\nCalculate the mean squared error.", "# your turn\nlm.fit(X[['CRIM']], bos.PRICE)\nprint ('(MSE) Per capita crime rate by town: ', np.mean((bos.PRICE - lm.predict(X[['CRIM']])) ** 2))\nlm.fit(X[['RM']], bos.PRICE)\nprint ('(MSE) Average number of rooms per dwelling: ', np.mean((bos.PRICE - lm.predict(X[['RM']])) ** 2))\nlm.fit(X[['PTRATIO']], bos.PRICE)\nprint ('(MSE) Pupil-teacher ratio by town: ', np.mean((bos.PRICE - lm.predict(X[['PTRATIO']])) ** 2))\n", "Other important things to think about when fitting a linear regression model\n\n<div class=\"span5 alert alert-danger\">\n<ul>\n <li>**Linearity**. The dependent variable $Y$ is a linear combination of the regression coefficients and the independent variables $X$. </li>\n <li>**Constant standard deviation**. The SD of the dependent variable $Y$ should be constant for different values of X. 
\n <ul>\n <li>e.g. PTRATIO\n </ul>\n </li>\n <li> **Normal distribution for errors**. The $\\epsilon$ term we discussed at the beginning are assumed to be normally distributed. \n $$ \\epsilon_i \\sim N(0, \\sigma^2)$$\nSometimes the distributions of responses $Y$ may not be normally distributed at any given value of $X$. e.g. skewed positively or negatively. </li>\n<li> **Independent errors**. The observations are assumed to be obtained independently.\n <ul>\n <li>e.g. Observations across time may be correlated\n </ul>\n</li>\n</ul> \n\n</div>", "sns.set(font_scale=.8)\nsns.heatmap(X.corr(), vmax=.8, square=True, annot=True)", "Part 3: Training and Test Data sets\nPurpose of splitting data into Training/testing sets\n\n<div class=\"span5 alert alert-info\">\n\n<p> Let's stick to the linear regression example: </p>\n<ul>\n <li> We built our model with the requirement that the model fit the data well. </li>\n <li> As a side-effect, the model will fit <b>THIS</b> dataset well. What about new data? </li>\n <ul>\n <li> We wanted the model for predictions, right?</li>\n </ul>\n <li> One simple solution, leave out some data (for <b>testing</b>) and <b>train</b> the model on the rest </li>\n <li> This also leads directly to the idea of cross-validation, next section. </li> \n</ul>\n</div>\n\n\nOne way of doing this is you can create training and testing data sets manually.", "X_train = X[:-50]\nX_test = X[-50:]\nY_train = bos.PRICE[:-50]\nY_test = bos.PRICE[-50:]\nprint (X_train.shape)\nprint (X_test.shape)\nprint (Y_train.shape)\nprint (Y_test.shape)", "Another way, is to split the data into random train and test subsets using the function train_test_split in sklearn.cross_validation. Here's the documentation.", "X_train, X_test, Y_train, Y_test = sklearn.cross_validation.train_test_split(\n X, bos.PRICE, test_size=0.33, random_state = 5)\nprint (X_train.shape)\nprint (X_test.shape)\nprint (Y_train.shape)\nprint (Y_test.shape)", "Your turn: Let's build a linear regression model using our new training data sets. \n\nFit a linear regression model to the training set\nPredict the output on the test set", "# your turn\n# Fit a linear regression model to the training set\nlm.fit(X_train, Y_train)\nlm.predict(X_test)", "Your turn:\nCalculate the mean squared error \n\nusing just the test data\nusing just the training data\n\nAre they pretty similar or very different? What does that mean?", "# your turn\n# Calculate MSE using just the test data\nprint ('(MSE) using just the test data: ', np.mean((Y_test - lm.predict(X_test)) ** 2))\n\n# Calculate MSE using just the training data\nprint ('(MSE) using just the training data: ', np.mean((Y_train - lm.predict(X_train)) ** 2))\n", "Are they pretty similar or very different? What does that mean?\n-> They are very different because the model us based on training data so it will be accurate compared to the test data. The model is not exposed to test data so it will give a greater mean square error. 
It means there are data in test data which are different with the training data.\nResidual plots", "plt.scatter(lm.predict(X_train), lm.predict(X_train) - Y_train, c='b', s=40, alpha=0.5)\nplt.scatter(lm.predict(X_test), lm.predict(X_test) - Y_test, c='g', s=40)\nplt.hlines(y = 0, xmin=0, xmax = 50)\nplt.title('Residual Plot using training (blue) and test (green) data')\nplt.ylabel('Residuals')", "Your turn: Do you think this linear regression model generalizes well on the test data?\n-> <b> No, the scatter points are not close to zero so the model needs improvements. Check the features to see highly correlated predictors and remove one of them or check the parameters of the model and do fine-tuning.\nK-fold Cross-validation as an extension of this idea\n\n<div class=\"span5 alert alert-info\">\n\n<p> A simple extension of the Test/train split is called K-fold cross-validation. </p>\n\n<p> Here's the procedure:</p>\n<ul>\n <li> randomly assign your $n$ samples to one of $K$ groups. They'll each have about $n/k$ samples</li>\n <li> For each group $k$: </li>\n <ul>\n <li> Fit the model (e.g. run regression) on all data excluding the $k^{th}$ group</li>\n <li> Use the model to predict the outcomes in group $k$</li>\n <li> Calculate your prediction error for each observation in $k^{th}$ group (e.g. $(Y_i - \\hat{Y}_i)^2$ for regression, $\\mathbb{1}(Y_i = \\hat{Y}_i)$ for logistic regression). </li>\n </ul>\n <li> Calculate the average prediction error across all samples $Err_{CV} = \\frac{1}{n}\\sum_{i=1}^n (Y_i - \\hat{Y}_i)^2$ </li>\n</ul>\n</div>\n\n\nLuckily you don't have to do this entire process all by hand (for loops, etc.) every single time, sci-kit learn has a very nice implementation of this, have a look at the documentation.\nYour turn (extra credit): Implement K-Fold cross-validation using the procedure above and Boston Housing data set using $K=4$. How does the average prediction error compare to the train-test split above?", "from sklearn import cross_validation, linear_model\n \n# If the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. \n# In all other cases, KFold is used\nscores = cross_validation.cross_val_score(lm, X, bos.PRICE, scoring='mean_squared_error', cv=4)\n\n# This will print metric for evaluation\nprint ('(MSE) Using k-fold: ', np.mean(scores))\nprint ('The K-fold cross-validation is not performaing well compared to the previous train-test split above')" ]
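A note on the last cell of the notebook above: the sklearn.cross_validation module and the 'mean_squared_error' scoring string were removed in later scikit-learn releases (and load_boston was eventually removed as well). Below is a hedged sketch of the same $K=4$ cross-validation using the current sklearn.model_selection API, assuming X and bos are still in scope as built earlier in that notebook.

```python
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

# K = 4 folds; shuffle so each fold mixes the ordering of the Boston rows
kf = KFold(n_splits=4, shuffle=True, random_state=5)

# cross_val_score returns negated MSEs by sklearn convention, so flip the sign
scores = cross_val_score(LinearRegression(), X, bos.PRICE,
                         scoring='neg_mean_squared_error', cv=kf)
print('Per-fold MSE  :', -scores)
print('Average CV MSE:', -scores.mean())
```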
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
snegirigens/DLND
transfer-learning/Transfer_Learning.ipynb
mit
[ "Transfer Learning\nMost of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.\n<img src=\"assets/cnnarchitecture.jpg\" width=700px>\nVGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.\nYou can read more about transfer learning from the CS231n course notes.\nPretrained VGGNet\nWe'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. Make sure to clone this repository to the directory you're working from. You'll also want to rename it so it has an underscore instead of a dash.\ngit clone https://github.com/machrisaa/tensorflow-vgg.git tensorflow_vgg\nThis is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. You'll need to clone the repo into the folder containing this notebook. Then download the parameter file using the next cell.", "from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\n\nvgg_dir = 'tensorflow_vgg/'\n# Make sure vgg exists\nif not isdir(vgg_dir):\n raise Exception(\"VGG directory doesn't exist!\")\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(vgg_dir + \"vgg16.npy\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:\n urlretrieve(\n 'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',\n vgg_dir + 'vgg16.npy',\n pbar.hook)\nelse:\n print(\"Parameter file already exists!\")", "Flower power\nHere we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.", "import tarfile\n\ndataset_folder_path = 'flower_photos'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile('flower_photos.tar.gz'):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:\n urlretrieve(\n 'http://download.tensorflow.org/example_images/flower_photos.tgz',\n 'flower_photos.tar.gz',\n pbar.hook)\n\nif not isdir(dataset_folder_path):\n with tarfile.open('flower_photos.tar.gz') as tar:\n tar.extractall()\n tar.close()", "ConvNet Codes\nBelow, we'll run through all the images in our dataset and get codes for each of them. 
That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.\nHere we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \\times 224 \\times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):\n```\nself.conv1_1 = self.conv_layer(bgr, \"conv1_1\")\nself.conv1_2 = self.conv_layer(self.conv1_1, \"conv1_2\")\nself.pool1 = self.max_pool(self.conv1_2, 'pool1')\nself.conv2_1 = self.conv_layer(self.pool1, \"conv2_1\")\nself.conv2_2 = self.conv_layer(self.conv2_1, \"conv2_2\")\nself.pool2 = self.max_pool(self.conv2_2, 'pool2')\nself.conv3_1 = self.conv_layer(self.pool2, \"conv3_1\")\nself.conv3_2 = self.conv_layer(self.conv3_1, \"conv3_2\")\nself.conv3_3 = self.conv_layer(self.conv3_2, \"conv3_3\")\nself.pool3 = self.max_pool(self.conv3_3, 'pool3')\nself.conv4_1 = self.conv_layer(self.pool3, \"conv4_1\")\nself.conv4_2 = self.conv_layer(self.conv4_1, \"conv4_2\")\nself.conv4_3 = self.conv_layer(self.conv4_2, \"conv4_3\")\nself.pool4 = self.max_pool(self.conv4_3, 'pool4')\nself.conv5_1 = self.conv_layer(self.pool4, \"conv5_1\")\nself.conv5_2 = self.conv_layer(self.conv5_1, \"conv5_2\")\nself.conv5_3 = self.conv_layer(self.conv5_2, \"conv5_3\")\nself.pool5 = self.max_pool(self.conv5_3, 'pool5')\nself.fc6 = self.fc_layer(self.pool5, \"fc6\")\nself.relu6 = tf.nn.relu(self.fc6)\n```\nSo what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use\nwith tf.Session() as sess:\n vgg = vgg16.Vgg16()\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n with tf.name_scope(\"content_vgg\"):\n vgg.build(input_)\nThis creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,\nfeed_dict = {input_: images}\ncodes = sess.run(vgg.relu6, feed_dict=feed_dict)", "import os\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom tensorflow_vgg import vgg16\nfrom tensorflow_vgg import utils\n\ndata_dir = 'flower_photos/'\ncontents = os.listdir(data_dir)\nclasses = [each for each in contents if os.path.isdir(data_dir + each)]", "Below I'm running images through the VGG network in batches.\n\nExercise: Below, build the VGG network. 
Also get the codes from the first fully connected layer (make sure you get the ReLUd values).", "# Set the batch size higher if you can fit in in your GPU memory\nbatch_size = 10\ncodes_list = []\nlabels = []\nbatch = []\n\ncodes = None\n\nwith tf.Session() as sess:\n # TODO: Build the vgg network here\n vgg = vgg16.Vgg16()\n input_ = tf.placeholder (tf.float32, [None, 224, 224, 3])\n with tf.name_scope ('content_vgg'):\n vgg.build (input_)\n\n for each in classes:\n print(\"Starting {} images\".format(each))\n class_path = data_dir + each\n files = os.listdir(class_path)\n for ii, file in enumerate(files, 1):\n # Add images to the current batch\n # utils.load_image crops the input images for us, from the center\n img = utils.load_image(os.path.join(class_path, file))\n batch.append(img.reshape((1, 224, 224, 3)))\n labels.append(each)\n \n # Running the batch through the network to get the codes\n if ii % batch_size == 0 or ii == len(files):\n \n # Image batch to pass to VGG network\n images = np.concatenate(batch)\n \n # TODO: Get the values from the relu6 layer of the VGG network\n feed_dict = {input_ : images}\n codes_batch = sess.run (vgg.relu6, feed_dict=feed_dict)\n \n # Here I'm building an array of the codes\n if codes is None:\n codes = codes_batch\n else:\n codes = np.concatenate((codes, codes_batch))\n \n # Reset to start building the next batch\n batch = []\n print('{} images processed'.format(ii))\n\n# write codes to file\nwith open('codes', 'w') as f:\n codes.tofile(f)\n \n# write labels to file\nimport csv\nwith open('labels', 'w') as f:\n writer = csv.writer(f, delimiter='\\n')\n writer.writerow(labels)", "Building the Classifier\nNow that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.", "# read codes and labels from file\nimport numpy as np\nimport csv\n\nwith open('labels') as f:\n reader = csv.reader(f, delimiter='\\n')\n labels = np.array([each for each in reader if len(each) > 0]).squeeze()\nwith open('codes') as f:\n codes = np.fromfile(f, dtype=np.float32)\n codes = codes.reshape((len(labels), -1))", "Data prep\nAs usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!\n\nExercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.", "from sklearn import preprocessing\n\nlb = preprocessing.LabelBinarizer()\nlb.fit (labels)\nlabels_vecs = lb.transform (labels)\n#print ('Labels: {}'.format([labels[i] for i in range (0, len(labels), 200)]))\n#print ('One-hot: {}'.format([labels_vecs[i] for i in range (0, len(labels), 200)]))", "Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.\nYou can create the splitter like so:\nss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)\nThen split the data with \nsplitter = ss.split(x, y)\nss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. 
The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.\n\nExercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.", "from sklearn.model_selection import StratifiedShuffleSplit\n\nsss = StratifiedShuffleSplit (n_splits=1, test_size=0.2)\ntrain_index, test_index = next (sss.split (codes, labels))\nval_test_split = int(len(test_index)/2)\n\ntrain_x, train_y = codes[train_index], labels_vecs[train_index]\nval_x, val_y = codes[test_index[:val_test_split]], labels_vecs[test_index[:val_test_split]]\ntest_x, test_y = codes[test_index[val_test_split:]], labels_vecs[test_index[val_test_split:]]\n\nprint(\"Train shapes (x, y):\", train_x.shape, train_y.shape)\nprint(\"Validation shapes (x, y):\", val_x.shape, val_y.shape)\nprint(\"Test shapes (x, y):\", test_x.shape, test_y.shape)", "If you did it right, you should see these sizes for the training sets:\nTrain shapes (x, y): (2936, 4096) (2936, 5)\nValidation shapes (x, y): (367, 4096) (367, 5)\nTest shapes (x, y): (367, 4096) (367, 5)\nClassifier layers\nOnce you have the convolutional codes, you just need to build a classfier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.\n\nExercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.", "def fully_connected (x_tensor, num_outputs):\n weights = tf.Variable (tf.truncated_normal (shape=[x_tensor.get_shape().as_list()[1], num_outputs], stddev=0.1, dtype=tf.float32), name='weights')\n biases = tf.Variable (tf.zeros (shape=[num_outputs], dtype=tf.float32), name='biases')\n activations = tf.add (tf.matmul (x_tensor, weights), biases)\n return activations\n\ndef create_nn (x_tensor, num_outputs, keep_prob):\n conn = fully_connected (x_tensor, 512)\n conn = tf.nn.relu (conn)\n conn = tf.nn.dropout (conn, keep_prob)\n \n conn2 = fully_connected (conn, 128)\n conn2 = tf.nn.relu (conn2)\n conn2 = tf.nn.dropout (conn2, keep_prob)\n \n conn3 = fully_connected (conn2, 32)\n conn3 = tf.nn.relu (conn3)\n conn3 = tf.nn.dropout (conn3, keep_prob)\n \n out = fully_connected (conn3, num_outputs)\n return tf.nn.softmax (out, name='softmax')\n\ninputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])\nlabels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])\nkeep_prob = tf.placeholder (tf.float32, name='keep_prob')\n\n# TODO: Classifier layers and operations\nlogits = create_nn (inputs_, labels_vecs.shape[1], keep_prob)\n#logits = tf.identity (logits, name='logits')\n\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_))\noptimizer = tf.train.AdamOptimizer(learning_rate=0.00005).minimize(cost)\n\n# Operations for validation/test accuracy\npredicted = tf.nn.softmax(logits)\ncorrect_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))", "Batches!\nHere is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. 
Here I just extend the last batch to include the remaining data.", "def get_batches(x, y, n_batches=10):\n \"\"\" Return a generator that yields batches from arrays x and y. \"\"\"\n batch_size = len(x)//n_batches\n \n for ii in range(0, n_batches*batch_size, batch_size):\n # If we're not on the last batch, grab data with size batch_size\n if ii != (n_batches-1)*batch_size:\n X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size] \n # On the last batch, grab the rest of the data\n else:\n X, Y = x[ii:], y[ii:]\n # I love generators\n yield X, Y", "Training\nHere, we'll train the network.\n\nExercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). Or write your own!", "epochs = 10000\n\nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n # TODO: Your training code here\n sess.run (tf.global_variables_initializer())\n \n for epoch in range(epochs):\n for x, y in get_batches(train_x, train_y):\n entropy_cost, _, train_accuracy = sess.run ([cost, optimizer, accuracy], feed_dict={inputs_:x, labels_:y, keep_prob:0.5})\n \n if (epoch+1) % 10 == 0:\n valid_accuracy = sess.run (accuracy, feed_dict={inputs_:val_x, labels_:val_y, keep_prob:1.0})\n print ('Epoch: {:3d}/{} Cost = {:8.5f} Train accuracy = {:.4f}, Validation Accuracy = {:.4f}'.format (epoch+1, epochs, entropy_cost, train_accuracy, valid_accuracy))\n \n if (epoch+1) % 1000 == 0:\n print ('Saving checkpoint')\n saver.save(sess, \"checkpoints/flowers.ckpt\")\n \n print ('Saving checkpoint')\n saver.save(sess, \"checkpoints/flowers.ckpt\")", "Testing\nBelow you see the test accuracy. You can also see the predictions returned for images.", "with tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: test_x,\n labels_: test_y,\n keep_prob:1.0}\n test_acc = sess.run(accuracy, feed_dict=feed)\n print(\"Test accuracy: {:.4f}\".format(test_acc))\n\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nfrom scipy.ndimage import imread", "Below, feel free to choose images and see how the trained classifier predicts the flowers in them.", "test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'\ntest_img = imread(test_img_path)\nplt.imshow(test_img)\n\n# Run this cell if you don't have a vgg graph built\nif 'vgg' in globals():\n print('\"vgg\" object already exists. Will not create again.')\nelse:\n #create vgg\n with tf.Session() as sess:\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n vgg = vgg16.Vgg16()\n vgg.build(input_)\n\n#test_img_path = 'flower_photos/dandelion/9939430464_5f5861ebab.jpg'\ntest_img_path = 'flower_photos/daisy/9922116524_ab4a2533fe_n.jpg'\n\nwith tf.Session() as sess:\n img = utils.load_image(test_img_path)\n img = img.reshape((1, 224, 224, 3))\n\n feed_dict = {input_: img}\n code = sess.run(vgg.relu6, feed_dict=feed_dict)\n \nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: code, keep_prob:1.0}\n prediction = sess.run(predicted, feed_dict=feed).squeeze()\n\ntest_img = imread(test_img_path)\nplt.imshow(test_img)\n\nplt.barh(np.arange(5), prediction)\n_ = plt.yticks(np.arange(5), lb.classes_)" ]
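One thing worth flagging in the classifier cell earlier in this notebook: create_nn already applies tf.nn.softmax to its output, but that output is then passed both to tf.nn.softmax_cross_entropy_with_logits (which expects raw logits) and to another tf.nn.softmax for predicted, so the softmax is effectively applied twice. Below is a sketch of an alternative head that returns raw logits instead, reusing the fully_connected helper and the placeholders defined above; create_nn_logits is a made-up name, not part of the original notebook.

```python
def create_nn_logits(x_tensor, num_outputs, keep_prob):
    # Same 512 -> 128 -> 32 stack as create_nn, but no softmax on the output
    net = tf.nn.dropout(tf.nn.relu(fully_connected(x_tensor, 512)), keep_prob)
    net = tf.nn.dropout(tf.nn.relu(fully_connected(net, 128)), keep_prob)
    net = tf.nn.dropout(tf.nn.relu(fully_connected(net, 32)), keep_prob)
    return fully_connected(net, num_outputs)   # raw logits

logits = create_nn_logits(inputs_, labels_vecs.shape[1], keep_prob)
cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_))
optimizer = tf.train.AdamOptimizer(learning_rate=0.00005).minimize(cost)

# Apply softmax only when turning logits into reported probabilities
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
```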
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/fold
tensorflow_fold/g3doc/quick.ipynb
apache-2.0
[ "TensorFlow Fold Quick Start\nTensorFlow Fold is a library for turning complicated Python data structures into TensorFlow Tensors.", "# boilerplate\nimport random\nimport tensorflow as tf\nsess = tf.InteractiveSession()\nimport tensorflow_fold as td", "The basic elements of Fold are blocks. We'll start with some blocks that work on simple data types.", "scalar_block = td.Scalar()\nvector3_block = td.Vector(3)", "Blocks are functions with associated input and output types.", "def block_info(block):\n print(\"%s: %s -> %s\" % (block, block.input_type, block.output_type))\n \nblock_info(scalar_block)\nblock_info(vector3_block)", "We can use eval() to see what a block does with its input:", "scalar_block.eval(42)\n\nvector3_block.eval([1,2,3])", "Not very exciting. We can compose simple blocks together with Record, like so:", "record_block = td.Record({'foo': scalar_block, 'bar': vector3_block})\nblock_info(record_block)", "We can see that Fold's type system is a bit richer than vanilla TF; we have tuple types! Running a record block does what you'd expect:", "record_block.eval({'foo': 1, 'bar': [5, 7, 9]})", "One useful thing you can do with blocks is wire them up to create pipelines using the &gt;&gt; operator, which performs function composition. For example, we can take our two tuple tensors and compose it with Concat, like so:", "record2vec_block = record_block >> td.Concat()\nrecord2vec_block.eval({'foo': 1, 'bar': [5, 7, 9]})", "Note that because Python dicts are unordered, Fold always sorts the outputs of a record block by dictionary key. If you want to preserve order you can construct a Record block from an OrderedDict.\nThe whole point of Fold is to get your data into TensorFlow; the Function block lets you convert a TITO (Tensors In, Tensors Out) function to a block:", "negative_block = record2vec_block >> td.Function(tf.negative)\nnegative_block.eval({'foo': 1, 'bar': [5, 7, 9]})", "This is all very cute, but where's the beef? Things start to get interesting when our inputs contain sequences of indeterminate length. The Map block comes in handy here:", "map_scalars_block = td.Map(td.Scalar())", "There's no TF type for sequences of indeterminate length, but Fold has one:", "block_info(map_scalars_block)", "Right, but you've done the TF RNN Tutorial and even poked at seq-to-seq. You're a wizard with dynamic rnns. What does Fold offer?\nWell, how about jagged arrays?", "jagged_block = td.Map(td.Map(td.Scalar()))\nblock_info(jagged_block)", "The Fold type system is fully compositional; any block you can create can be composed with Map to create a sequence, or Record to create a tuple, or both to create sequences of tuples or tuples of sequences:", "seq_of_tuples_block = td.Map(td.Record({'foo': td.Scalar(), 'bar': td.Scalar()}))\nseq_of_tuples_block.eval([{'foo': 1, 'bar': 2}, {'foo': 3, 'bar': 4}])\n\ntuple_of_seqs_block = td.Record({'foo': td.Map(td.Scalar()), 'bar': td.Map(td.Scalar())})\ntuple_of_seqs_block.eval({'foo': range(3), 'bar': range(7)})", "Most of the time, you'll eventually want to get one or more tensors out of your sequence, for wiring up to your particular learning task. 
Fold has a bunch of built-in reduction functions for this that do more or less what you'd expect:", "((td.Map(td.Scalar()) >> td.Sum()).eval(range(10)),\n (td.Map(td.Scalar()) >> td.Min()).eval(range(10)),\n (td.Map(td.Scalar()) >> td.Max()).eval(range(10)))\n ", "The general form of such functions is Reduce:", "(td.Map(td.Scalar()) >> td.Reduce(td.Function(tf.multiply))).eval(range(1,10))", "If the order of operations is important, you should use Fold instead of Reduce (but if you can use Reduce you should, because it will be faster):", "((td.Map(td.Scalar()) >> td.Fold(td.Function(tf.divide), tf.ones([]))).eval(range(1,5)),\n (td.Map(td.Scalar()) >> td.Reduce(td.Function(tf.divide), tf.ones([]))).eval(range(1,5))) # bad, not associative!", "Now, let's do some learning! This is the part where \"magic\" happens; if you want a deeper understanding of what's happening here you might want to jump right to our more formal blocks tutorial or learn more about running blocks in TensorFlow", "def reduce_net_block():\n net_block = td.Concat() >> td.FC(20) >> td.FC(1, activation=None) >> td.Function(lambda xs: tf.squeeze(xs, axis=1))\n return td.Map(td.Scalar()) >> td.Reduce(net_block)\n", "The reduce_net_block function creates a block (net_block) that contains a two-layer fully connected (FC) network that takes a pair of scalar tensors as input and produces a scalar tensor as output. This network gets applied in a binary tree to reduce a sequence of scalar tensors to a single scalar tensor.\nOne thing to notice here is that we are calling tf.squeeze with axis=1, even though the Fold output type of td.FC(1, activation=None) (and hence the input type of the enclosing Function block) is a TensorType with shape (1). This is because all Fold blocks actually run on TF tensors with an implicit leading batch dimension, which enables execution via dynamic batching. It is important to bear this in mind when creating Function blocks that wrap functions that are not applied elementwise.", "def random_example(fn):\n length = random.randrange(1, 10)\n data = [random.uniform(0,1) for _ in range(length)]\n result = fn(data)\n return data, result", "The random_example function generates training data consisting of (example, fn(example)) pairs, where example is a random list of numbers, e.g.:", "random_example(sum)\n\nrandom_example(min)\n\ndef train(fn, batch_size=100):\n net_block = reduce_net_block()\n compiler = td.Compiler.create((net_block, td.Scalar()))\n y, y_ = compiler.output_tensors\n loss = tf.nn.l2_loss(y - y_)\n train = tf.train.AdamOptimizer().minimize(loss)\n sess.run(tf.global_variables_initializer())\n validation_fd = compiler.build_feed_dict(random_example(fn) for _ in range(1000))\n for i in range(2000):\n sess.run(train, compiler.build_feed_dict(random_example(fn) for _ in range(batch_size)))\n if i % 100 == 0:\n print(i, sess.run(loss, validation_fd))\n return net_block\n ", "Now we're going to train a neural network to approximate a reduction function of our choosing. Calling eval() repeatedly is super-slow and cannot exploit batch-wise parallelism, so we create a Compiler. See our page on running blocks in TensorFlow for more on Compilers and how to use them effectively.", "sum_block = train(sum)\n\nsum_block.eval([1, 1])", "Breaking news: deep neural network learns to calculate 1 + 1!!!!\nOf course we've done something a little sneaky here by constructing a model that can only represent associative functions and then training it to compute an associative function. 
The technical term for being sneaky in machine learning is inductive bias.", "min_block = train(min)\n\nmin_block.eval([2, -1, 4])", "Oh noes! What went wrong? Note that we trained our network to compute min on positive numbers; negative numbers are outside of its input distribution.", "min_block.eval([0.3, 0.2, 0.9])", "Well, that's better. What happens if you train the network on negative numbers as well as on positives? What if you only train on short lists and then evaluate the net on long ones? What if you used a Fold block instead of a Reduce? ... Happy Folding!" ]
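For the closing questions above, one way to experiment is a small variant of the train function that takes the example generator as a parameter, so signed data (or longer lists) can be fed in instead of random_example. This is only a sketch: train_with_examples and signed_min_example are made-up names, and it assumes sess, td, tf, random, and reduce_net_block are still in scope as defined in this notebook.

```python
def train_with_examples(example_fn, batch_size=100, steps=2000):
    net_block = reduce_net_block()
    compiler = td.Compiler.create((net_block, td.Scalar()))
    y, y_ = compiler.output_tensors
    loss = tf.nn.l2_loss(y - y_)
    train_op = tf.train.AdamOptimizer().minimize(loss)
    sess.run(tf.global_variables_initializer())
    for _ in range(steps):
        sess.run(train_op, compiler.build_feed_dict(example_fn() for _ in range(batch_size)))
    return net_block

def signed_min_example():
    # Like random_example(min), but drawing values from [-1, 1]
    length = random.randrange(1, 10)
    data = [random.uniform(-1, 1) for _ in range(length)]
    return data, min(data)

min_block_signed = train_with_examples(signed_min_example)
min_block_signed.eval([0.3, -0.5, 0.9])   # should now land near -0.5
```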
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
guyk1971/deep-learning
embeddings/Skip-Gram word2vec.ipynb
mit
[ "Skip-gram word2vec\nIn this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like translations.\nReadings\nHere are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.\n\nA really good conceptual overview of word2vec from Chris McCormick \nFirst word2vec paper from Mikolov et al.\nNIPS paper with improvements for word2vec also from Mikolov et al.\nAn implementation of word2vec from Thushan Ganegedara\nTensorFlow word2vec tutorial\n\nWord embeddings\nWhen you're dealing with language and words, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as \"black\", \"white\", and \"red\" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.\n<img src=\"assets/word2vec_architectures.png\" width=\"500\">\nIn this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.\nFirst up, importing packages.", "import time\n\nimport numpy as np\nimport tensorflow as tf\n\nimport utils", "Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.", "from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport zipfile\n\ndataset_folder_path = 'data'\ndataset_filename = 'text8.zip'\ndataset_name = 'Text8 Dataset'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(dataset_filename):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:\n urlretrieve(\n 'http://mattmahoney.net/dc/text8.zip',\n dataset_filename,\n pbar.hook)\n\nif not isdir(dataset_folder_path):\n with zipfile.ZipFile(dataset_filename) as zip_ref:\n zip_ref.extractall(dataset_folder_path)\n \nwith open('data/text8') as f:\n text = f.read()", "Preprocessing\nHere I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to &lt;PERIOD&gt;. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. 
If you want to write your own functions for this stuff, go for it.", "words = utils.preprocess(text)\nprint(words[:30])\n\nprint(\"Total words: {}\".format(len(words)))\nprint(\"Unique words: {}\".format(len(set(words))))", "And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word (\"the\") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.", "vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)\nint_words = [vocab_to_int[word] for word in words]", "Subsampling\nWords that show up often such as \"the\", \"of\", and \"for\" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by \n$$ P(w_i) = 1 - \\sqrt{\\frac{t}{f(w_i)}} $$\nwhere $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.\nI'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.\n\nExercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.", "## Your code here\ntrain_words = # The final subsampled word list", "Making batches\nNow that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. \nFrom Mikolov et al.: \n\"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels.\"\n\nExercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.", "def get_target(words, idx, window_size=5):\n ''' Get a list of words in a window around an index. '''\n \n # Your code here\n \n return", "Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. 
This is a generator function by the way, helps save memory.", "def get_batches(words, batch_size, window_size=5):\n ''' Create a generator of word batches as a tuple (inputs, targets) '''\n \n n_batches = len(words)//batch_size\n \n # only full batches\n words = words[:n_batches*batch_size]\n \n for idx in range(0, len(words), batch_size):\n x, y = [], []\n batch = words[idx:idx+batch_size]\n for ii in range(len(batch)):\n batch_x = batch[ii]\n batch_y = get_target(batch, ii, window_size)\n y.extend(batch_y)\n x.extend([batch_x]*len(batch_y))\n yield x, y\n ", "Building the graph\nFrom Chris McCormick's blog, we can see the general structure of our network.\n\nThe input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.\nThe idea here is to train the hidden layer weight matrix to find efficient representations for our words. This weight matrix is usually called the embedding matrix or embedding look-up table. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.\nI'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.\n\nExercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.", "train_graph = tf.Graph()\nwith train_graph.as_default():\n inputs = \n labels = ", "Embedding\nThe embedding matrix has a size of the number of words by the number of neurons in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \\times 300$. Remember that we're using one-hot encoded vectors for our inputs. When you do the matrix multiplication of the one-hot vector with the embedding matrix, you end up selecting only one row out of the entire matrix:\n\nYou don't actually need to do the matrix multiplication, you just need to select the row in the embedding matrix that corresponds to the input word. Then, the embedding matrix becomes a lookup table, you're looking up a vector the size of the hidden layer that represents the input word.\n<img src=\"assets/word2vec_weight_matrix_lookup_table.png\" width=500>\n\nExercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform. This TensorFlow tutorial will help if you get stuck.", "n_vocab = len(int_to_vocab)\nn_embedding = # Number of embedding features \nwith train_graph.as_default():\n embedding = # create embedding weight matrix here\n embed = # use tf.nn.embedding_lookup to get the hidden layer output", "Negative sampling\nFor every example we give the network, we train it using the output from the softmax layer. 
That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called \"negative sampling\". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.\n\nExercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.", "# Number of negative labels to sample\nn_sampled = 100\nwith train_graph.as_default():\n softmax_w = # create softmax weight matrix here\n softmax_b = # create softmax biases here\n \n # Calculate the loss using negative sampling\n loss = tf.nn.sampled_softmax_loss \n \n cost = tf.reduce_mean(loss)\n optimizer = tf.train.AdamOptimizer().minimize(cost)", "Validation\nThis code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.", "with train_graph.as_default():\n ## From Thushan Ganegedara's implementation\n valid_size = 16 # Random set of words to evaluate similarity on.\n valid_window = 100\n # pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent \n valid_examples = np.array(random.sample(range(valid_window), valid_size//2))\n valid_examples = np.append(valid_examples, \n random.sample(range(1000,1000+valid_window), valid_size//2))\n\n valid_dataset = tf.constant(valid_examples, dtype=tf.int32)\n \n # We use the cosine distance:\n norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))\n normalized_embedding = embedding / norm\n valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)\n similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))\n\n# If the checkpoints directory doesn't exist:\n!mkdir checkpoints", "Training\nBelow is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.", "epochs = 10\nbatch_size = 1000\nwindow_size = 10\n\nwith train_graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=train_graph) as sess:\n iteration = 1\n loss = 0\n sess.run(tf.global_variables_initializer())\n\n for e in range(1, epochs+1):\n batches = get_batches(train_words, batch_size, window_size)\n start = time.time()\n for x, y in batches:\n \n feed = {inputs: x,\n labels: np.array(y)[:, None]}\n train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)\n \n loss += train_loss\n \n if iteration % 100 == 0: \n end = time.time()\n print(\"Epoch {}/{}\".format(e, epochs),\n \"Iteration: {}\".format(iteration),\n \"Avg. 
Training loss: {:.4f}\".format(loss/100),\n \"{:.4f} sec/batch\".format((end-start)/100))\n loss = 0\n start = time.time()\n \n if iteration % 1000 == 0:\n ## From Thushan Ganegedara's implementation\n # note that this is expensive (~20% slowdown if computed every 500 steps)\n sim = similarity.eval()\n for i in range(valid_size):\n valid_word = int_to_vocab[valid_examples[i]]\n top_k = 8 # number of nearest neighbors\n nearest = (-sim[i, :]).argsort()[1:top_k+1]\n log = 'Nearest to %s:' % valid_word\n for k in range(top_k):\n close_word = int_to_vocab[nearest[k]]\n log = '%s %s,' % (log, close_word)\n print(log)\n \n iteration += 1\n save_path = saver.save(sess, \"checkpoints/text8.ckpt\")\n embed_mat = sess.run(normalized_embedding)", "Restore the trained network if you need to:", "with train_graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=train_graph) as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n embed_mat = sess.run(embedding)", "Visualizing the word vectors\nBelow we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport matplotlib.pyplot as plt\nfrom sklearn.manifold import TSNE\n\nviz_words = 500\ntsne = TSNE()\nembed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])\n\nfig, ax = plt.subplots(figsize=(14, 14))\nfor idx in range(viz_words):\n plt.scatter(*embed_tsne[idx, :], color='steelblue')\n plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)" ]
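The subsampling and `get_target` exercises in the cells above are left blank in the notebook. For reference, here is one possible implementation of both, following the formula and the windowing rule quoted in the text; the threshold of `1e-5` is a common choice but is an assumption made here, not a value the notebook specifies, and this sketch is not necessarily identical to the author's own solution.

```python
import random
from collections import Counter

import numpy as np

def subsample(int_words, threshold=1e-5):
    """Drop each word with probability P(w) = 1 - sqrt(t / f(w)), per Mikolov et al."""
    total = len(int_words)
    counts = Counter(int_words)
    freqs = {word: count / total for word, count in counts.items()}
    p_drop = {word: 1 - np.sqrt(threshold / freqs[word]) for word in counts}
    return [word for word in int_words if random.random() > p_drop[word]]

def get_target(words, idx, window_size=5):
    """Return the words in a randomly sized window (R drawn from [1, window_size]) around idx."""
    R = random.randint(1, window_size)
    start = max(idx - R, 0)
    stop = idx + R
    return words[start:idx] + words[idx + 1:stop + 1]
```

With these in place, the exercise cell could be filled in as `train_words = subsample(int_words)`.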
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
pligor/predicting-future-product-prices
04_time_series_prediction/23_price_history_seq2seq-cross-validation.ipynb
agpl-3.0
[ "# -*- coding: UTF-8 -*-\n#%load_ext autoreload\n%reload_ext autoreload\n%autoreload 2", "https://www.youtube.com/watch?v=ElmBrKyMXxs\nhttps://github.com/hans/ipython-notebooks/blob/master/tf/TF%20tutorial.ipynb\nhttps://github.com/ematvey/tensorflow-seq2seq-tutorials", "from __future__ import division\nimport tensorflow as tf\nfrom os import path, remove\nimport numpy as np\nimport pandas as pd\nimport csv\nfrom sklearn.model_selection import StratifiedShuffleSplit\nfrom time import time\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\nfrom mylibs.jupyter_notebook_helper import show_graph, renderStatsList, renderStatsCollection, \\\n renderStatsListWithLabels, renderStatsCollectionOfCrossValids, plot_res_gp, my_plot_convergence\nfrom tensorflow.contrib import rnn\nfrom tensorflow.contrib import learn\nimport shutil\nfrom tensorflow.contrib.learn.python.learn import learn_runner\nfrom mylibs.tf_helper import getDefaultGPUconfig\nfrom sklearn.metrics import r2_score\nfrom mylibs.py_helper import factors\nfrom fastdtw import fastdtw\nfrom collections import OrderedDict\nfrom scipy.spatial.distance import euclidean\nfrom statsmodels.tsa.stattools import coint\nfrom common import get_or_run_nn\nfrom data_providers.price_history_seq2seq_data_provider import PriceHistorySeq2SeqDataProvider\nfrom data_providers.price_history_dataset_generator import PriceHistoryDatasetGenerator\nfrom skopt.space.space import Integer, Real\nfrom skopt import gp_minimize\nfrom skopt.plots import plot_convergence\nimport pickle\nimport inspect\nimport dill\nimport sys\nfrom models.price_history_21_seq2seq_dyn_dec_ins import PriceHistorySeq2SeqDynDecIns\nfrom gp_opt.price_history_23_gp_opt import PriceHistory23GpOpt\n\ndtype = tf.float32\nseed = 16011984\nrandom_state = np.random.RandomState(seed=seed)\nconfig = getDefaultGPUconfig()\nn_jobs = 1\n%matplotlib inline", "Step 0 - hyperparams\nvocab_size is all the potential words you could have (classification for translation case)\nand max sequence length are the SAME thing\ndecoder RNN hidden units are usually same size as encoder RNN hidden units in translation but for our case it does not seem really to be a relationship there but we can experiment and find out later, not a priority thing right now", "num_units = 400 #state size\n\ninput_len = 60\ntarget_len = 30\n\nbatch_size = 64 #50\nwith_EOS = False\n\ntotal_train_size = 57994\ntrain_size = 6400 \ntest_size = 1282", "Once generate data", "data_path = '../data/price_history'\n\n#npz_full_train = data_path + '/price_history_03_dp_60to30_train.npz'\n#npz_full_train = data_path + '/price_history_60to30_targets_normed_train.npz'\nnpz_full_train = data_path + '/price_history_03_dp_60to30_global_remove_scale_targets_normed_train.npz'\n\n#npz_train = data_path + '/price_history_03_dp_60to30_57980_train.npz'\n#npz_train = data_path + '/price_history_03_dp_60to30_6400_train.npz'\n#npz_train = data_path + '/price_history_60to30_6400_targets_normed_train.npz'\nnpz_train = data_path + '/price_history_03_dp_60to30_6400_global_remove_scale_targets_normed_train.npz'\n\n#npz_test = data_path + '/price_history_03_dp_60to30_test.npz'\n#npz_test = data_path + '/price_history_60to30_targets_normed_test.npz'\nnpz_test = data_path + '/price_history_03_dp_60to30_global_remove_scale_targets_normed_test.npz'", "Step 1 - collect data", "# dp = PriceHistorySeq2SeqDataProvider(npz_path=npz_train, batch_size=batch_size, with_EOS=with_EOS)\n# dp.inputs.shape, dp.targets.shape\n\n# aa, bb = dp.next()\n# aa.shape, bb.shape", 
"Step 2 - Build model", "model = PriceHistorySeq2SeqDynDecIns(rng=random_state, dtype=dtype, config=config, with_EOS=with_EOS)\n\n# graph = model.getGraph(batch_size=batch_size,\n# num_units=num_units,\n# input_len=input_len,\n# target_len=target_len)\n\n#show_graph(graph)", "Cross Validating", "def plotter(stats_list, label_text):\n _ = renderStatsListWithLabels(stats_list=stats_list, label_text=label_text)\n plt.show()\n\n _ = renderStatsListWithLabels(stats_list=stats_list, label_text=label_text,\n title='Validation Error', kk='error(valid)')\n plt.show()\n\n#sorted(factors(6400))\n\nobj = PriceHistory23GpOpt(model=model,\n stats_npy_filename = 'bayes_opt_23_stats_dic',\n cv_score_dict_npy_filename = 'bayes_opt_23_cv_scores_dic',\n random_state=random_state,\n plotter = plotter,\n npz_path=npz_train,\n epochs=15,\n batch_size=batch_size,\n input_len=input_len,\n target_len=target_len,\n n_splits=5,\n )\n\nopt_res = obj.run_opt(n_random_starts=2, n_calls=17)\n\nplot_res_gp(opt_res)\n\nopt_res.best_params\n\nfilepath = PriceHistory23GpOpt.bayes_opt_dir + '/bayes_opt_23_stats_dic.npy'\n\nrenderStatsCollectionOfCrossValids(stats_dic=np.load(filepath)[()], label_texts=[\n 'num_units', 'activation', 'lamda2', 'keep_prob_input', 'learning_rate'])\nplt.show()", "Step 3 training the network", "model = PriceHistorySeq2SeqDynDecIns(rng=random_state, dtype=dtype, config=config, with_EOS=with_EOS)\n\nopt_res.best_params\n\nnum_units, activation, lamda2, keep_prob_input, learning_rate = opt_res.best_params\n\nbatch_size\n\nnpz_1280_test = '../data/price_history/price_history_03_dp_60to30_global_remove_scale_targets_normed_1280_test.npz'\n\n# PriceHistoryDatasetGenerator.create_subsampled(inpath=npz_test, target_size=1280,\n# outpath='../data/price_history/price_history_03_dp_60to30_global_remove_scale_targets_normed_1280_test.npz',\n# random_state=random_state)\n\ndef experiment():\n return model.run(npz_path=npz_train,\n npz_test = npz_1280_test,\n epochs=200,\n batch_size = batch_size,\n num_units = num_units,\n input_len=input_len,\n target_len=target_len,\n learning_rate = learning_rate * 10,\n preds_gather_enabled=True,\n batch_norm_enabled = True,\n activation = activation,\n decoder_first_input = PriceHistorySeq2SeqDynDecIns.DECODER_FIRST_INPUT.ZEROS,\n keep_prob_input = keep_prob_input,\n lamda2 = lamda2,\n )\n\n#%%time\ndyn_stats, preds_dict, targets = get_or_run_nn(experiment, filename='023_seq2seq_60to30_001')\n\ndyn_stats.plotStats()\nplt.show()\n\nr2_scores = [r2_score(y_true=targets[ind], y_pred=preds_dict[ind])\n for ind in range(len(targets))]\n\nind = np.argmin(r2_scores)\nind\n\nreals = targets[ind]\npreds = preds_dict[ind]\n\nr2_score(y_true=reals, y_pred=preds)\n\n#sns.tsplot(data=dp.inputs[ind].flatten())\n\nfig = plt.figure(figsize=(15,6))\nplt.plot(reals, 'b')\nplt.plot(preds, 'g')\nplt.legend(['reals','preds'])\nplt.show()\n\n%%time\ndtw_scores = [fastdtw(targets[ind], preds_dict[ind])[0]\n for ind in range(len(targets))]\n\nnp.mean(dtw_scores)\n\ncoint(preds, reals)\n\ncur_ind = np.random.randint(len(targets))\nreals = targets[cur_ind]\npreds = preds_dict[cur_ind]\nfig = plt.figure(figsize=(15,6))\nplt.plot(reals, 'b')\nplt.plot(preds, 'g')\nplt.legend(['reals','preds'])\nplt.show()\n\nnpz_1280_test\n\naa = np.load(npz_1280_test)\n\nlen( set(aa['sku_ids']) )", "Conclusion\nThe above test will be considered UNRELIABLE because it represents only 24 cellphones!\nWe will generate a new test set" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
donaghhorgan/COMP9033
labs/09a - K-means clustering.ipynb
gpl-3.0
[ "Lab 09a: K-means clustering\nIntroduction\nThis lab focuses on $K$-means clustering using the Iris flower data set. At the end of the lab, you should be able to:\n\nCreate a $K$-means clustering model for various cluster sizes.\nEstimate the right number of clusters to choose by plotting the total inertia of the clusters and finding the \"elbow\" of the curve.\n\nGetting started\nLet's start by importing the packages we'll need. As usual, we'll import pandas for exploratory analysis, but this week we're also going to use the cluster subpackage from scikit-learn to create $K$-means models and the datasets subpackage to access the Iris data set.", "%matplotlib inline\nimport numpy as np\nimport pandas as pd\n\nfrom matplotlib import pyplot as plt\nfrom sklearn import cluster\nfrom sklearn import datasets", "Next, let's load the data. The iris data set is included in scikit-learn's datasets submodule, so we can just load it directly like this:", "iris = datasets.load_iris()\nX = pd.DataFrame({k: v for k, v in zip(iris.feature_names, iris.data.T)}) # Convert the raw data to a data frame\nX.head()", "Exploratory data analysis\nLet's start by making a scatter plot matrix of our data. We can colour the individual scatter points according to their true class labels by passing c=iris.target to the function, like this:", "pd.plotting.scatter_matrix(X, c=iris.target, figsize=(9, 9));", "The colours of the data points here are our ground truth, that is the actual labels of the data. Generally, when we cluster data, we don't know the ground truth, but in this instance it will help us to assess how well $K$-means clustering segments the data into its true categories.\nK-means clustering\nLet's build an $K$-means clustering model of the document data. scikit-learn supports $K$-means clustering functionality via the cluster subpackage. We can use the KMeans class to build our model.\n3 clusters\nGenerally, we won't know in advance how many clusters to use but, as we do in this instance, let's start by splitting the data into three clusters. We can run $K$-means clustering with scikit-learn using the KMeans class. We can specify n_clusters=3 to find three clusters, like this:", "k_means = cluster.KMeans(n_clusters=3)\nk_means.fit(X)", "Note: In previous weeks, we have called fit(X, y) when fitting scikit-learn estimators. However, in each of these cases, we were fitting supervised learning models where y represented the true class labels of the data. This week, we're fitting $K$-means clustering models, which are unsupervised learners, and so there is no need to specify the true class labels (i.e. y).\n\nWhen we call the predict method on our fitted estimator, it predicts the class labels for each record in our explanatory data matrix (i.e. X):", "labels = k_means.predict(X)\nprint labels", "We can check the results of our clustering visually by building another scatter plot matrix, this time colouring the points according to the cluster labels:", "pd.plotting.scatter_matrix(X, c=labels, figsize=(9, 9));", "As can be seen, the $K$-means algorithm has partitioned the data into three distinct sets, using just the values of petal length, petal width, sepal length and sepal width. 
The clusters do not precisely correspond to the true class labels plotted earlier but, as we usually perform clustering in situations where we don't know the true class labels, this seems like a reasonable attempt.\nOther numbers of clusters\nWe can cluster the data into arbitrarily many clusters (up to the point where each sample is its own cluster). Let's cluster the data into two clusters and see what effect this has:", "k_means = cluster.KMeans(n_clusters=2)\nk_means.fit(X)\n\nlabels = k_means.predict(X)\npd.plotting.scatter_matrix(X, c=labels, figsize=(9, 9));", "Finding the optimum number of clusters\nOne way to find the optimum number of clusters is to plot the variation in total inertia with increasing numbers of clusters. Because the total inertia decreases as the number of clusters increases, we can determine a reasonable, but possibly not true, clustering of the data by finding the \"elbow\" in the curve, which occurs as a result of the diminishing returns from adding further clusters.\nWe can access the inertia value of a fitted $K$-means model using its inertia_ attribute, like this:", "clusters = range(1, 10)\ninertia = []\nfor n in clusters:\n    k_means = cluster.KMeans(n_clusters=n)\n    k_means.fit(X)\n    inertia.append(k_means.inertia_)\n\nplt.plot(clusters, inertia)\nplt.xlabel(\"Number of clusters\")\nplt.ylabel(\"Inertia\");", "In this instance, we could choose either two or three clusters to represent the data, as these represent the largest decreases in inertia. As we know that there are three true classes, choosing two would be an incorrect conclusion in this case, but this is an unavoidable consequence of clustering. If we do not know the structure of the data in advance, we always risk choosing a representation of it that does not reflect the ground truth." ]
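The lab points out that the three clusters do not line up exactly with the true species. One quick way to quantify that overlap, added here and not part of the original lab, is to cross-tabulate the cluster assignments against `iris.target`:

```python
import pandas as pd
from sklearn import cluster, datasets

iris = datasets.load_iris()
labels = cluster.KMeans(n_clusters=3, random_state=0).fit_predict(iris.data)

# Rows: true species (0=setosa, 1=versicolor, 2=virginica); columns: cluster ids.
print(pd.crosstab(pd.Series(iris.target, name='species'),
                  pd.Series(labels, name='cluster')))
```

Each row then shows how the samples of one species are spread across the discovered clusters.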
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
scoaste/showcase
machine-learning/foundations/document-retrieval-assignment.ipynb
mit
[ "Document retrieval from wikipedia data\nFire up GraphLab Create", "import graphlab", "Load some text data - from wikipedia, pages on people", "people = graphlab.SFrame('people_wiki.gl/')", "Data contains: link to wikipedia article, name of person, text of article.", "people.head()\n\nlen(people)", "Explore the dataset and checkout the text it contains\nExploring the entry for president Obama", "obama = people[people['name'] == 'Barack Obama']\n\nobama\n\nobama['text']", "Exploring the entry for actor George Clooney", "clooney = people[people['name'] == 'George Clooney']\nclooney['text']", "Get the word counts for Obama article", "obama['word_count'] = graphlab.text_analytics.count_words(obama['text'])\n\nprint obama['word_count']", "Sort the word counts for the Obama article\nTurning dictonary of word counts into a table", "obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count'])", "Sorting the word counts to show most common words at the top", "obama_word_count_table.head()\n\nobama_word_count_table.sort('count',ascending=False)", "Most common words include uninformative words like \"the\", \"in\", \"and\",...\nCompute TF-IDF for the corpus\nTo give more weight to informative words, we weigh them by their TF-IDF scores.", "people['word_count'] = graphlab.text_analytics.count_words(people['text'])\npeople.head()\n\ntfidf = graphlab.text_analytics.tf_idf(people['word_count'])\n\n# Earlier versions of GraphLab Create returned an SFrame rather than a single SArray\n# This notebook was created using Graphlab Create version 1.7.1\nif graphlab.version <= '1.6.1':\n tfidf = tfidf['docs']\n\ntfidf\n\npeople['tfidf'] = tfidf\n\npeople.head()", "Examine the TF-IDF for the Obama article", "obama = people[people['name'] == 'Barack Obama']\n\nobama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)", "Words with highest TF-IDF are much more informative.\nManually compute distances between a few people\nLet's manually compare the distances between the articles for a few famous people.", "clinton = people[people['name'] == 'Bill Clinton']\n\nbeckham = people[people['name'] == 'David Beckham']", "Is Obama closer to Clinton than to Beckham?\nWe will use cosine distance, which is given by\n(1-cosine_similarity) \nand find that the article about president Obama is closer to the one about former president Clinton than that of footballer David Beckham.", "graphlab.distances.cosine(obama['tfidf'][0],clinton['tfidf'][0])\n\ngraphlab.distances.cosine(obama['tfidf'][0],beckham['tfidf'][0])", "Build a nearest neighbor model for document retrieval\nWe now create a nearest-neighbors model and apply it to document retrieval.", "knn_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name')\n\nknn_model.summary()", "Applying the nearest-neighbors model for retrieval\nWho is closest to Obama?", "knn_model.query(obama)", "As we can see, president Obama's article is closest to the one about his vice-president Biden, and those of other politicians. 
\nOther examples of document retrieval", "swift = people[people['name'] == 'Taylor Swift']\n\nknn_model.query(swift)\n\njolie = people[people['name'] == 'Angelina Jolie']\n\nknn_model.query(jolie)\n\narnold = people[people['name'] == 'Arnold Schwarzenegger']\n\nknn_model.query(arnold)\n\nelton = people[people['name'] == 'Elton John']\n\nelton\n\nelton[['word_count']].stack('word_count', new_column_name = ['word','count']).sort('count',ascending=False)\n\n\nelton[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)\n\nvictoria = people[people['name'] == 'Victoria Beckham']\n\ngraphlab.distances.cosine(elton['tfidf'][0],victoria['tfidf'][0])\n\npaul = people[people['name'] == 'Paul McCartney']\n\ngraphlab.distances.cosine(elton['tfidf'][0],paul['tfidf'][0])\n\nknn_model_counts_cosine = graphlab.nearest_neighbors.create(people,features=['word_count'],label='name',distance='cosine')\n\nknn_model_tfidf_cosine = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name',distance='cosine')\n\nknn_model_counts_cosine.query(elton)\n\nknn_model_tfidf_cosine.query(elton)\n\nknn_model_counts_cosine.query(victoria)\n\nknn_model_tfidf_cosine.query(victoria)" ]
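GraphLab Create is a discontinued proprietary library, so the cells above only run in the environment the course provided. The same TF-IDF plus cosine-distance comparison can be sketched with scikit-learn instead; the three short strings below are toy stand-ins for the Wikipedia article texts, so only the mechanics carry over, not the notebook's actual results.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

# Toy stand-ins for the Wikipedia article texts used in the notebook
docs = {
    'obama': "obama served as president of the united states and as a senator",
    'clinton': "clinton served as president of the united states and as a governor",
    'beckham': "beckham played football for manchester united and real madrid",
}

tfidf = TfidfVectorizer().fit_transform(docs.values())
names = list(docs.keys())
dist = cosine_distances(tfidf)

print("obama vs clinton:", dist[names.index('obama'), names.index('clinton')])
print("obama vs beckham:", dist[names.index('obama'), names.index('beckham')])
```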
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/model-optimization
tensorflow_model_optimization/g3doc/guide/combine/sparse_clustering_example.ipynb
apache-2.0
[ "Copyright 2021 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Sparsity preserving clustering Keras example\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/model_optimization/guide/combine/sparse_clustering_example\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/combine/sparse_clustering_example.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/combine/sparse_clustering_example.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/model-optimization/tensorflow_model_optimization/g3doc/guide/combine/sparse_clustering_example.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nOverview\nThis is an end to end example showing the usage of the sparsity preserving clustering API, part of the TensorFlow Model Optimization Toolkit's collaborative optimization pipeline.\nOther pages\nFor an introduction to the pipeline and other available techniques, see the collaborative optimization overview page.\nContents\nIn the tutorial, you will:\n\nTrain a tf.keras model for the MNIST dataset from scratch.\nFine-tune the model with sparsity and see the accuracy and observe that the model was successfully pruned.\nApply weight clustering to the pruned model and observe the loss of sparsity.\nApply sparsity preserving clustering on the pruned model and observe that the sparsity applied earlier has been preserved.\nGenerate a TFLite model and check that the accuracy has been preserved in the pruned clustered model.\nCompare the sizes of the different models to observe the compression benefits of applying sparsity followed by the collaborative optimization technique of sparsity preserving clustering.\n\nSetup\nYou can run this Jupyter Notebook in your local virtualenv or colab. For details of setting up dependencies, please refer to the installation guide.", "! 
pip install -q tensorflow-model-optimization\n\nimport tensorflow as tf\n\nimport numpy as np\nimport tempfile\nimport zipfile\nimport os", "Train a tf.keras model for MNIST to be pruned and clustered", "# Load MNIST dataset\nmnist = tf.keras.datasets.mnist\n(train_images, train_labels), (test_images, test_labels) = mnist.load_data()\n\n# Normalize the input image so that each pixel value is between 0 to 1.\ntrain_images = train_images / 255.0\ntest_images = test_images / 255.0\n\nmodel = tf.keras.Sequential([\n tf.keras.layers.InputLayer(input_shape=(28, 28)),\n tf.keras.layers.Reshape(target_shape=(28, 28, 1)),\n tf.keras.layers.Conv2D(filters=12, kernel_size=(3, 3),\n activation=tf.nn.relu),\n tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(10)\n])\n\n# Train the digit classification model\nmodel.compile(optimizer='adam',\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['accuracy'])\n\nmodel.fit(\n train_images,\n train_labels,\n validation_split=0.1,\n epochs=10\n)", "Evaluate the baseline model and save it for later usage", "_, baseline_model_accuracy = model.evaluate(\n test_images, test_labels, verbose=0)\n\nprint('Baseline test accuracy:', baseline_model_accuracy)\n\n_, keras_file = tempfile.mkstemp('.h5')\nprint('Saving model to: ', keras_file)\ntf.keras.models.save_model(model, keras_file, include_optimizer=False)", "Prune and fine-tune the model to 50% sparsity\nApply the prune_low_magnitude() API to prune the whole pre-trained model to achieve the model that is to be clustered in the next step. For how best to use the API to achieve the best compression rate while maintaining your target accuracy, refer to the pruning comprehensive guide.\nDefine the model and apply the sparsity API\nNote that the pre-trained model is used.", "import tensorflow_model_optimization as tfmot\n\nprune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude\n\npruning_params = {\n 'pruning_schedule': tfmot.sparsity.keras.ConstantSparsity(0.5, begin_step=0, frequency=100)\n }\n\ncallbacks = [\n tfmot.sparsity.keras.UpdatePruningStep()\n]\n\npruned_model = prune_low_magnitude(model, **pruning_params)\n\n# Use smaller learning rate for fine-tuning\nopt = tf.keras.optimizers.Adam(learning_rate=1e-5)\n\npruned_model.compile(\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n optimizer=opt,\n metrics=['accuracy'])\n\npruned_model.summary()", "Fine-tune the model, check sparsity, and evaluate the accuracy against baseline\nFine-tune the model with pruning for 3 epochs.", "# Fine-tune model\npruned_model.fit(\n train_images,\n train_labels,\n epochs=3,\n validation_split=0.1,\n callbacks=callbacks)", "Define helper functions to calculate and print the sparsity of the model.", "def print_model_weights_sparsity(model):\n\n for layer in model.layers:\n if isinstance(layer, tf.keras.layers.Wrapper):\n weights = layer.trainable_weights\n else:\n weights = layer.weights\n for weight in weights:\n if \"kernel\" not in weight.name or \"centroid\" in weight.name:\n continue\n weight_size = weight.numpy().size\n zero_num = np.count_nonzero(weight == 0)\n print(\n f\"{weight.name}: {zero_num/weight_size:.2%} sparsity \",\n f\"({zero_num}/{weight_size})\",\n )", "Check that the model kernels was correctly pruned. We need to strip the pruning wrapper first. 
We also create a deep copy of the model to be used in the next step.", "stripped_pruned_model = tfmot.sparsity.keras.strip_pruning(pruned_model)\n\nprint_model_weights_sparsity(stripped_pruned_model)\n\nstripped_pruned_model_copy = tf.keras.models.clone_model(stripped_pruned_model)\nstripped_pruned_model_copy.set_weights(stripped_pruned_model.get_weights())", "Apply clustering and sparsity preserving clustering and check its effect on model sparsity in both cases\nNext, we apply both clustering and sparsity preserving clustering on the pruned model and observe that the latter preserves sparsity on your pruned model. Note that we stripped pruning wrappers from the pruned model with tfmot.sparsity.keras.strip_pruning before applying the clustering API.", "# Clustering\ncluster_weights = tfmot.clustering.keras.cluster_weights\nCentroidInitialization = tfmot.clustering.keras.CentroidInitialization\n\nclustering_params = {\n 'number_of_clusters': 8,\n 'cluster_centroids_init': CentroidInitialization.KMEANS_PLUS_PLUS\n}\n\nclustered_model = cluster_weights(stripped_pruned_model, **clustering_params)\n\nclustered_model.compile(optimizer='adam',\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['accuracy'])\n\nprint('Train clustering model:')\nclustered_model.fit(train_images, train_labels,epochs=3, validation_split=0.1)\n\n\nstripped_pruned_model.save(\"stripped_pruned_model_clustered.h5\")\n\n# Sparsity preserving clustering\nfrom tensorflow_model_optimization.python.core.clustering.keras.experimental import (\n cluster,\n)\n\ncluster_weights = cluster.cluster_weights\n\nclustering_params = {\n 'number_of_clusters': 8,\n 'cluster_centroids_init': CentroidInitialization.KMEANS_PLUS_PLUS,\n 'preserve_sparsity': True\n}\n\nsparsity_clustered_model = cluster_weights(stripped_pruned_model_copy, **clustering_params)\n\nsparsity_clustered_model.compile(optimizer='adam',\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['accuracy'])\n\nprint('Train sparsity preserving clustering model:')\nsparsity_clustered_model.fit(train_images, train_labels,epochs=3, validation_split=0.1)", "Check sparsity for both models.", "print(\"Clustered Model sparsity:\\n\")\nprint_model_weights_sparsity(clustered_model)\nprint(\"\\nSparsity preserved clustered Model sparsity:\\n\")\nprint_model_weights_sparsity(sparsity_clustered_model)", "Create 1.6x smaller models from clustering\nDefine helper function to get zipped model file.", "def get_gzipped_model_size(file):\n # It returns the size of the gzipped model in kilobytes.\n\n _, zipped_file = tempfile.mkstemp('.zip')\n with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:\n f.write(file)\n\n return os.path.getsize(zipped_file)/1000\n\n# Clustered model\nclustered_model_file = 'clustered_model.h5'\n\n# Save the model.\nclustered_model.save(clustered_model_file)\n \n#Sparsity Preserve Clustered model\nsparsity_clustered_model_file = 'sparsity_clustered_model.h5'\n\n# Save the model.\nsparsity_clustered_model.save(sparsity_clustered_model_file)\n \nprint(\"Clustered Model size: \", get_gzipped_model_size(clustered_model_file), ' KB')\nprint(\"Sparsity preserved clustered Model size: \", get_gzipped_model_size(sparsity_clustered_model_file), ' KB')", "Create a TFLite model from combining sparsity preserving weight clustering and post-training quantization\nStrip clustering wrappers and convert to TFLite.", "stripped_sparsity_clustered_model = 
tfmot.clustering.keras.strip_clustering(sparsity_clustered_model)\n\nconverter = tf.lite.TFLiteConverter.from_keras_model(stripped_sparsity_clustered_model)\nconverter.optimizations = [tf.lite.Optimize.DEFAULT]\nsparsity_clustered_quant_model = converter.convert()\n\n_, pruned_and_clustered_tflite_file = tempfile.mkstemp('.tflite')\n\nwith open(pruned_and_clustered_tflite_file, 'wb') as f:\n f.write(sparsity_clustered_quant_model)\n\nprint(\"Sparsity preserved clustered Model size: \", get_gzipped_model_size(sparsity_clustered_model_file), ' KB')\nprint(\"Sparsity preserved clustered and quantized TFLite model size:\",\n get_gzipped_model_size(pruned_and_clustered_tflite_file), ' KB')", "See the persistence of accuracy from TF to TFLite", "def eval_model(interpreter):\n input_index = interpreter.get_input_details()[0][\"index\"]\n output_index = interpreter.get_output_details()[0][\"index\"]\n\n # Run predictions on every image in the \"test\" dataset.\n prediction_digits = []\n for i, test_image in enumerate(test_images):\n if i % 1000 == 0:\n print(f\"Evaluated on {i} results so far.\")\n # Pre-processing: add batch dimension and convert to float32 to match with\n # the model's input data format.\n test_image = np.expand_dims(test_image, axis=0).astype(np.float32)\n interpreter.set_tensor(input_index, test_image)\n\n # Run inference.\n interpreter.invoke()\n\n # Post-processing: remove batch dimension and find the digit with highest\n # probability.\n output = interpreter.tensor(output_index)\n digit = np.argmax(output()[0])\n prediction_digits.append(digit)\n\n print('\\n')\n # Compare prediction results with ground truth labels to calculate accuracy.\n prediction_digits = np.array(prediction_digits)\n accuracy = (prediction_digits == test_labels).mean()\n return accuracy", "You evaluate the model, which has been pruned, clustered and quantized, and then see that the accuracy from TensorFlow persists in the TFLite backend.", "# Keras model evaluation\nstripped_sparsity_clustered_model.compile(optimizer='adam',\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['accuracy'])\n_, sparsity_clustered_keras_accuracy = stripped_sparsity_clustered_model.evaluate(\n test_images, test_labels, verbose=0)\n\n# TFLite model evaluation\ninterpreter = tf.lite.Interpreter(pruned_and_clustered_tflite_file)\ninterpreter.allocate_tensors()\n\nsparsity_clustered_tflite_accuracy = eval_model(interpreter)\n\nprint('Pruned, clustered and quantized Keras model accuracy:', sparsity_clustered_keras_accuracy)\nprint('Pruned, clustered and quantized TFLite model accuracy:', sparsity_clustered_tflite_accuracy)", "Conclusion\nIn this tutorial, you learned how to create a model, prune it using the prune_low_magnitude() API, and apply sparsity preserving clustering to preserve sparsity while clustering the weights. The sparsity preserving clustered model was compared to a clustered one to show that sparsity is preserved in the former and lost in the latter. Next, the pruned clustered model was converted to TFLite to show the compression benefits of chaining the pruning and sparsity preserving clustering model optimization techniques and, finally, the TFLite model was evaluated to ensure that the accuracy persists in the TFLite backend." ]
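The helper above reports sparsity only. To also confirm that each kernel collapsed to roughly the requested 8 centroids after clustering, a small companion check, added here and not part of the official tutorial, can count the distinct weight values per kernel of the stripped model:

```python
import numpy as np

def print_model_weight_clusters(model):
    # After clustering, each kernel should contain roughly the requested number
    # of distinct values (including zero when sparsity is preserved).
    for layer in model.layers:
        for weight in layer.weights:
            if "kernel" in weight.name:
                print(f"{weight.name}: {len(np.unique(weight.numpy()))} unique values")

print_model_weight_clusters(stripped_sparsity_clustered_model)
```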
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.24/_downloads/1537c1215a3e40187a4513e0b5f1d03d/eeg_csd.ipynb
bsd-3-clause
[ "%matplotlib inline", "Transform EEG data using current source density (CSD)\nThis script shows an example of how to use CSD\n:footcite:PerrinEtAl1987,PerrinEtAl1989,Cohen2014,KayserTenke2015.\nCSD takes the spatial Laplacian of the sensor signal (derivative in both\nx and y). It does what a planar gradiometer does in MEG. Computing these\nspatial derivatives reduces point spread. CSD transformed data have a sharper\nor more distinct topography, reducing the negative impact of volume conduction.", "# Authors: Alex Rockhill <aprockhill@mailbox.org>\n#\n# License: BSD-3-Clause\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()", "Load sample subject data", "raw = mne.io.read_raw_fif(data_path + '/MEG/sample/sample_audvis_raw.fif')\nraw = raw.pick_types(meg=False, eeg=True, eog=True, ecg=True, stim=True,\n exclude=raw.info['bads']).load_data()\nevents = mne.find_events(raw)\nraw.set_eeg_reference(projection=True).apply_proj()", "Plot the raw data and CSD-transformed raw data:", "raw_csd = mne.preprocessing.compute_current_source_density(raw)\nraw.plot()\nraw_csd.plot()", "Also look at the power spectral densities:", "raw.plot_psd()\nraw_csd.plot_psd()", "CSD can also be computed on Evoked (averaged) data.\nHere we epoch and average the data so we can demonstrate that.", "event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,\n 'visual/right': 4, 'smiley': 5, 'button': 32}\nepochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=.5,\n preload=True)\nevoked = epochs['auditory'].average()", "First let's look at how CSD affects scalp topography:", "times = np.array([-0.1, 0., 0.05, 0.1, 0.15])\nevoked_csd = mne.preprocessing.compute_current_source_density(evoked)\nevoked.plot_joint(title='Average Reference', show=False)\nevoked_csd.plot_joint(title='Current Source Density')", "CSD has parameters stiffness and lambda2 affecting smoothing and\nspline flexibility, respectively. Let's see how they affect the solution:", "fig, ax = plt.subplots(4, 4)\nfig.subplots_adjust(hspace=0.5)\nfig.set_size_inches(10, 10)\nfor i, lambda2 in enumerate([0, 1e-7, 1e-5, 1e-3]):\n for j, m in enumerate([5, 4, 3, 2]):\n this_evoked_csd = mne.preprocessing.compute_current_source_density(\n evoked, stiffness=m, lambda2=lambda2)\n this_evoked_csd.plot_topomap(\n 0.1, axes=ax[i, j], outlines='skirt', contours=4, time_unit='s',\n colorbar=False, show=False)\n ax[i, j].set_title('stiffness=%i\\nλ²=%s' % (m, lambda2))", "References\n.. footbibliography::" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jorisvandenbossche/DS-python-data-analysis
notebooks/pandas_03a_selecting_data.ipynb
bsd-3-clause
[ "<p><font size=\"6\"><b>03 - Pandas: Indexing and selecting data - part I</b></font></p>\n\n\n© 2021, Joris Van den Bossche and Stijn Van Hoey (&#106;&#111;&#114;&#105;&#115;&#118;&#97;&#110;&#100;&#101;&#110;&#98;&#111;&#115;&#115;&#99;&#104;&#101;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;, &#115;&#116;&#105;&#106;&#110;&#118;&#97;&#110;&#104;&#111;&#101;&#121;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;). Licensed under CC BY 4.0 Creative Commons", "import pandas as pd\n\n# redefining the example DataFrame\n\ndata = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],\n 'population': [11.3, 64.3, 81.3, 16.9, 64.9],\n 'area': [30510, 671308, 357050, 41526, 244820],\n 'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}\ncountries = pd.DataFrame(data)\ncountries", "Subsetting data\nSubset variables (columns)\nFor a DataFrame, basic indexing selects the columns (cfr. the dictionaries of pure python)\nSelecting a single column:", "countries['area'] # single []", "Remember that the same syntax can also be used to add a new columns: df['new'] = ....\nWe can also select multiple columns by passing a list of column names into []:", "countries[['area', 'population']] # double [[]]", "Subset observations (rows)\nUsing [], slicing or boolean indexing accesses the rows:\nSlicing", "countries[0:4]", "Boolean indexing (filtering)\nOften, you want to select rows based on a certain condition. This can be done with 'boolean indexing' (like a where clause in SQL) and comparable to numpy. \nThe indexer (or boolean mask) should be 1-dimensional and the same length as the thing being indexed.", "countries['area'] > 100000\n\ncountries[countries['area'] > 100000]\n\ncountries[countries['population'] > 50]", "An overview of the possible comparison operations:\nOperator | Description\n------ | --------\n== | Equal\n!= | Not equal\n> | Greater than\n>= | Greater than or equal\n\\< | Lesser than\n<= | Lesser than or equal\nand to combine multiple conditions:\nOperator | Description\n------ | --------\n& | And (cond1 &amp; cond2)\n\\| | Or (cond1 \\| cond2)\n<div class=\"alert alert-info\" style=\"font-size:120%\">\n<b>REMEMBER</b>: <br><br>\n\nSo as a summary, `[]` provides the following convenience shortcuts:\n\n* **Series**: selecting a **label**: `s[label]`\n* **DataFrame**: selecting a single or multiple **columns**:`df['col']` or `df[['col1', 'col2']]`\n* **DataFrame**: slicing or filtering the **rows**: `df['row_label1':'row_label2']` or `df[mask]`\n\n</div>\n\nSome other useful methods: isin and string methods\nThe isin method of Series is very useful to select rows that may contain certain values:", "s = countries['capital']\n\ns.isin?\n\ns.isin(['Berlin', 'London'])", "This can then be used to filter the dataframe with boolean indexing:", "countries[countries['capital'].isin(['Berlin', 'London'])]", "Let's say we want to select all data for which the capital starts with a 'B'. 
In Python, when having a string, we could use the startswith method:", "string = 'Berlin'\n\nstring.startswith('B')", "In pandas, these are available on a Series through the str namespace:", "countries['capital'].str.startswith('B')", "For an overview of all string methods, see: https://pandas.pydata.org/pandas-docs/stable/reference/series.html#string-handling\nExercises using the Titanic dataset", "df = pd.read_csv(\"data/titanic.csv\")\n\ndf.head()", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 1</b>:\n\n <ul>\n <li>Select all rows for male passengers and calculate the mean age of those passengers. Do the same for the female passengers.</li>\n</ul>\n</div>", "# %load _solutions/pandas_03a_selecting_data1.py\n\n# %load _solutions/pandas_03a_selecting_data2.py\n\n# %load _solutions/pandas_03a_selecting_data3.py", "We will later see an easier way to calculate both averages at the same time with groupby.\n<div class=\"alert alert-success\">\n\n<b>EXERCISE 2</b>:\n\n <ul>\n <li>How many passengers older than 70 were on the Titanic?</li>\n</ul>\n</div>", "# %load _solutions/pandas_03a_selecting_data4.py\n\n# %load _solutions/pandas_03a_selecting_data5.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 3</b>:\n\n <ul>\n <li>Select the passengers that are between 30 and 40 years old?</li>\n</ul>\n</div>", "# %load _solutions/pandas_03a_selecting_data6.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 4</b>:\n\nFor a single string `name = 'Braund, Mr. Owen Harris'`, split this string (check the `split()` method of a string) and get the first element of the resulting list.\n\n<details><summary>Hints</summary>\n\n- No Pandas in this exercise, just standard Python.\n- The `split()` method of a string returns a python list. Accessing elements of a python list can be done using the square brackets indexing (`a_list[i]`).\n\n</details> \n\n</div>", "name = 'Braund, Mr. Owen Harris'\n\n# %load _solutions/pandas_03a_selecting_data7.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 5</b>:\n\nConvert the solution of the previous exercise to all strings of the `Name` column at once. Split the 'Name' column on the `,`, extract the first part (the surname), and add this as new column 'Surname'. \n\n<details><summary>Hints</summary>\n\n- Pandas uses the `str` accessor to use the string methods such as `split`, e.g. `.str.split(...)` as the equivalent of the `split()` method of a single string (note: there is a small difference in the naming of the first keyword argument: `sep` vs `pat`).\n- The [`.str.get()`](https://pandas.pydata.org/docs/reference/api/pandas.Series.str.get.html#pandas.Series.str.get) can be used to get the n-th element of a list, which is what the `str.split()` returns. This is the equivalent of selecting an element of a single list (`a_list[i]`) but then for all values of the Series.\n- One can chain multiple `.str` methods, e.g. 
`str.SOMEMETHOD(...).str.SOMEOTHERMETHOD(...)`.\n\n</details> \n\n</div>", "# %load _solutions/pandas_03a_selecting_data8.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 6</b>:\n\n <ul>\n <li>Select all passenger that have a surname starting with 'Williams'.</li>\n</ul>\n</div>", "# %load _solutions/pandas_03a_selecting_data9.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 7</b>:\n\n <ul>\n <li>Select all rows for the passengers with a surname of more than 15 characters.</li>\n</ul>\n\n</div>", "# %load _solutions/pandas_03a_selecting_data10.py", "[OPTIONAL] more exercises\nFor the quick ones among you, here are some more exercises with some larger dataframe with film data. These exercises are based on the PyCon tutorial of Brandon Rhodes (so all credit to him!) and the datasets he prepared for that. You can download these data from here: titles.csv and cast.csv and put them in the /notebooks/data folder.", "cast = pd.read_csv('data/cast.csv')\ncast.head()\n\ntitles = pd.read_csv('data/titles.csv')\ntitles.head()", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 8</b>:\n\n <ul>\n <li>How many movies are listed in the titles dataframe?</li>\n</ul>\n\n</div>", "# %load _solutions/pandas_03a_selecting_data11.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 9</b>:\n\n <ul>\n <li>What are the earliest two films listed in the titles dataframe?</li>\n</ul>\n</div>", "# %load _solutions/pandas_03a_selecting_data12.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 10</b>:\n\n <ul>\n <li>How many movies have the title \"Hamlet\"?</li>\n</ul>\n</div>", "# %load _solutions/pandas_03a_selecting_data13.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 11</b>:\n\n <ul>\n <li>List all of the \"Treasure Island\" movies from earliest to most recent.</li>\n</ul>\n</div>", "# %load _solutions/pandas_03a_selecting_data14.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 12</b>:\n\n <ul>\n <li>How many movies were made from 1950 through 1959?</li>\n</ul>\n</div>", "# %load _solutions/pandas_03a_selecting_data15.py\n\n# %load _solutions/pandas_03a_selecting_data16.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 13</b>:\n\n <ul>\n <li>How many roles in the movie \"Inception\" are NOT ranked by an \"n\" value?</li>\n</ul>\n</div>", "# %load _solutions/pandas_03a_selecting_data17.py\n\n# %load _solutions/pandas_03a_selecting_data18.py\n\n# %load _solutions/pandas_03a_selecting_data19.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 14</b>:\n\n <ul>\n <li>But how many roles in the movie \"Inception\" did receive an \"n\" value?</li>\n</ul>\n</div>", "# %load _solutions/pandas_03a_selecting_data20.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 15</b>:\n\n <ul>\n <li>Display the cast of the \"Titanic\" (the most famous 1997 one) in their correct \"n\"-value order, ignoring roles that did not earn a numeric \"n\" value.</li>\n</ul>\n</div>", "# %load _solutions/pandas_03a_selecting_data21.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 16</b>:\n\n <ul>\n <li>List the supporting roles (having n=2) played by Brad Pitt in the 1990s, in order by year.</li>\n</ul>\n</div>", "# %load _solutions/pandas_03a_selecting_data22.py", "Acknowledgement\n\nThe optional exercises are based on the PyCon tutorial of Brandon Rhodes (so all credit to him!) and the datasets he prepared for that." ]
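The solution cells above are %load placeholders whose files are not included here. For reference, one possible way to answer exercises 1 and 3 with the tools introduced in this notebook (not necessarily the course's official solutions) is:

```python
# Exercise 1: mean age per sex, via boolean indexing
print(df[df['Sex'] == 'male']['Age'].mean())
print(df[df['Sex'] == 'female']['Age'].mean())

# Exercise 3: passengers between 30 and 40 years old
df[(df['Age'] >= 30) & (df['Age'] <= 40)]
# equivalently
df[df['Age'].between(30, 40)]
```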
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
lknelson/text-analysis-2017
01-IntroToPython/00-PythonBasics_ExerciseSolutions.ipynb
bsd-3-clause
[ "Solution to Python Basics Exercises\nThe exercises:\n1. EX. Using the list of words you produced by splitting 'new_string', create a new list that contains only the words whose last letter is \"y\". \n\n\nEX. Create a new list that contains the first letter of each word.\n\n\nEX. Create a new list that contains only words longer than two letters.\n\n\nWe can do all of this using built-in Python functions: arithmetic, string manipulation, and list comprehension. \nFirst, the sentence 'new_string'. We can split 'new_string' into a list of words using the .split() function", "# copying 'new_string' from the tutorial on Wednesday\n\nnew_string = \"It seems very strange that one must turn back, \\\nand be transported to the very beginnings of history, \\\nin order to arrive at an understanding of humanity as it is at present.\"\n\n#creat a new variable containing a list of words using the .split() function\nnew_string_list = new_string.split() \n#print the new list\nnew_string_list", "(1) EX. Using the list of words you produced by splitting 'new_string', create a new list that contains only the words whose last letter is \"y\"\nWe can combine list comprehension and the string manipulation function .endswith(), both of which we learned about on Wednesday, to create a new list the keeps only the elements from the original list that end with the character 'y'.", "word_list_y = [word for word in new_string_list if word.endswith('y')]\n#print the new list\nword_list_y", "(2) EX. Create a new list that contains the first letter of each word.\nWe can again use list comprehension, combined with string splicing, to produce a new list that contain only the first letter of each word. Remember in Python counting starts at 0.", "word_list_firstletter = [word[0] for word in new_string_list]\n#print our new list\nword_list_firstletter", "(3) EX. Create a new list that contains only words longer than two letters.\nWe can, again, use list comprehension, the 'len' function, and the algorithm function greater than, or '>', to filter and keep words longer than two letters. Note that '>' is strictly greater than. If we wanted to include words with 2 letters we would need to use greater than or equal to, or '>='.\nSyntax is important here, so refer to the tutorial from Wednesday to remind yourself of the syntax. Use copy and paste to avoid errors in typing.", "word_list_long = [n for n in new_string_list if len(n)>2]\n#print new list\nword_list_long" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
yavuzovski/playground
machine learning/google-ml-crash-course/pandas/pandas_intro.ipynb
gpl-3.0
[ "Following the tutorial at: https://colab.research.google.com/notebooks/mlcc/intro_to_pandas.ipynb?hl=tr#scrollTo=U5ouUp1cU6pC", "import pandas as pd\n\n# There are two data structures in pandas, Series and DataFrames\ncity_names = pd.Series(['San Francisco', 'San Jose', 'Sacramento'])\npopulation = pd.Series([852469, 1015785, 485199])\n\npd.DataFrame({\"City Name\": city_names, \"Population\": population})\n\n# importing an existing csv file into DataFrame\ncalifornia_housing_dataframe = pd.read_csv(\n \"https://storage.googleapis.com/mledu-datasets/california_housing_train.csv\",\n sep=\",\"\n)\n\ncalifornia_housing_dataframe.shape\n\ncalifornia_housing_dataframe.head()\n\ncalifornia_housing_dataframe.hist('housing_median_age')", "Accessing Data\nYou can access DataFrame data using familiar Python dict/list operations:", "cities = pd.DataFrame({'City Name': city_names, 'Population': population})\nprint(type(cities['City Name']))\ncities['City Name']\n\nprint(type(cities[\"City Name\"][1]))\ncities[\"City Name\"][1]\n\nprint(type(cities[0:2]))\ncities[0:2]", "Manipulating Data\nYou may apply Python's basic arithmetic operations to Series. For example:", "population / 1000\n\nimport numpy as np\nnp.log(population)\n\ncities['Area square miles'] = pd.Series([46.87, 176.53, 97.92])\ncities['Population density'] = cities['Population'] / cities['Area square miles']\ncities\n\npopulation.apply(lambda val: val > 1000000)", "Exercise #1\nModify the cities table by adding a new boolean column that is True if and only if both of the following are True:\n\nThe city is named after a saint.\nThe city has an area greater than 50 square miles.\n\nNote: Boolean Series are combined using the bitwise, rather than the traditional boolean, operators. For example, when performing logical and, use &amp; instead of and.\nHint: \"San\" in Spanish means \"saint.\"", "cities['is saint and wide'] = (cities['Area square miles'] > 50) & (cities['City Name'].apply(lambda name: name.startswith(\"San\")))\ncities", "Indexes\nBoth Series and DataFrame objects also define an index property that assigns an identifier value to each Series item or DataFrame row. \nBy default, at construction, pandas assigns index values that reflect the ordering of the source data. Once created, the index values are stable; that is, they do not change when data is reordered.", "city_names.index\n\ncities.index\n\ncities.reindex([2, 0, 1])", "Reindexing is a great way to shuffle (randomize) a DataFrame. In the example below, we take the index, which is array-like, and pass it to NumPy's random.permutation function, which shuffles its values in place. Calling reindex with this shuffled array causes the DataFrame rows to be shuffled in the same way.", "cities.reindex(np.random.permutation(cities.index))", "Exercise #2\nThe reindex method allows index values that are not in the original DataFrame's index values. Try it and see what happens if you use such values! Why do you think this is allowed?", "cities.reindex([4, 2, 1, 3, 0])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
maxis42/ML-DA-Coursera-Yandex-MIPT
1 Mathematics and Python/Lectures notebooks/9 vector operations/vector_operations.ipynb
mit
[ "NumPy: векторы и операции над ними\n\nВ этом ноутбуке нам понадобятся библиотека NumPy. Для удобства импортируем ее под более коротким именем:", "import numpy as np", "1. Создание векторов\nСамый простой способ создать вектор в NumPy — задать его явно с помощью numpy.array(list, dtype=None, ...).\nПараметр list задает итерируемый объект, из которого можно создать вектор. Например, в качестве этого параметра можно задать список чисел. Параметр dtype задает тип значений вектора, например, float — для вещественных значений и int — для целочисленных. Если этот параметр не задан, то тип данных будет определен из типа элементов первого аргумента.", "a = np.array([1, 2, 3, 4])\nprint 'Вектор:\\n', a\n\nb = np.array([1, 2, 3, 4, 5], dtype=float)\nprint 'Вещественный вектор:\\n', b\n\nc = np.array([True, False, True], dtype=bool)\nprint 'Булевский вектор:\\n', c", "Тип значений вектора можно узнать с помощью numpy.ndarray.dtype:", "print 'Тип булевского вектора:\\n', c.dtype", "Другим способом задания вектора является функция numpy.arange(([start, ]stop, [step, ]...), которая задает последовательность чисел заданного типа из промежутка [start, stop) через шаг step:", "d = np.arange(start=10, stop=20, step=2) # последнее значение не включается!\nprint 'Вектор чисел от 10 до 20 с шагом 2:\\n', d\n\nf = np.arange(start=0, stop=1, step=0.3, dtype=float)\nprint 'Вещественный вектор чисел от 0 до 1 с шагом 0.3:\\n', f", "По сути вектор в NumPy является одномерным массивом, что соответствует интуитивному определению вектора:", "print c.ndim # количество размерностей\n\nprint c.shape # shape фактически задает длину вектора ", "Обратите внимание: вектор _и одномерный массив тождественные понятия в NumPy. Помимо этого, также существуют понятия _вектор-столбец и вектор-строка, которые, несмотря на то что математически задают один и тот же объект, являются двумерными массивами и имеют другое значение поля shape (в этом случае поле состоит из двух чисел, одно из которых равно единице). Эти тонкости будут рассмотрены в следующем уроке.\nБолее подробно о том, как создавать векторы в NumPy, \nсм. документацию.\n2. Операции над векторами\nВекторы в NumPy можно складывать, вычитать, умножать на число и умножать на другой вектор (покоординатно):", "a = np.array([1, 2, 3])\nb = np.array([6, 5, 4])\nk = 2\n\nprint 'Вектор a:', a\nprint 'Вектор b:', b\nprint 'Число k:', k\n\nprint 'Сумма a и b:\\n', a + b\n\nprint 'Разность a и b:\\n', a - b\n\nprint 'Покоординатное умножение a и b:\\n', a * b \n\nprint 'Умножение вектора на число (осуществляется покоординатно):\\n', k * a ", "3. Нормы векторов\nВспомним некоторые нормы, которые можно ввести в пространстве $\\mathbb{R}^{n}$, и рассмотрим, с помощью каких библиотек и функций их можно вычислять в NumPy.\np-норма\np-норма (норма Гёльдера) для вектора $x = (x_{1}, \\dots, x_{n}) \\in \\mathbb{R}^{n}$ вычисляется по формуле:\n$$\n\\left\\Vert x \\right\\Vert_{p} = \\left( \\sum_{i=1}^n \\left| x_{i} \\right|^{p} \\right)^{1 / p},~p \\geq 1.\n$$\nВ частных случаях при:\n* $p = 1$ получаем $\\ell_{1}$ норму\n* $p = 2$ получаем $\\ell_{2}$ норму\nДалее нам понабится модуль numpy.linalg, реализующий некоторые приложения линейной алгебры. Для вычисления различных норм мы используем функцию numpy.linalg.norm(x, ord=None, ...), где x — исходный вектор, ord — параметр, определяющий норму (мы рассмотрим два варианта его значений — 1 и 2). 
Импортируем эту функцию:", "from numpy.linalg import norm", "$\\ell_{1}$ норма\n$\\ell_{1}$ норма \n(также известная как манхэттенское расстояние)\nдля вектора $x = (x_{1}, \\dots, x_{n}) \\in \\mathbb{R}^{n}$ вычисляется по формуле:\n$$\n \\left\\Vert x \\right\\Vert_{1} = \\sum_{i=1}^n \\left| x_{i} \\right|.\n$$\nЕй в функции numpy.linalg.norm(x, ord=None, ...) соответствует параметр ord=1.", "a = np.array([1, 2, -3])\nprint 'Вектор a:', a\n\nprint 'L1 норма вектора a:\\n', norm(a, ord=1)", "$\\ell_{2}$ норма\n$\\ell_{2}$ норма (также известная как евклидова норма)\nдля вектора $x = (x_{1}, \\dots, x_{n}) \\in \\mathbb{R}^{n}$ вычисляется по формуле:\n$$\n \\left\\Vert x \\right\\Vert_{2} = \\sqrt{\\sum_{i=1}^n \\left( x_{i} \\right)^2}.\n$$\nЕй в функции numpy.linalg.norm(x, ord=None, ...) соответствует параметр ord=2.", "a = np.array([1, 2, -3])\nprint 'Вектор a:', a\n\nprint 'L2 норма вектора a:\\n', norm(a, ord=2)", "Более подробно о том, какие еще нормы (в том числе матричные) можно вычислить, см. документацию. \n4. Расстояния между векторами\nДля двух векторов $x = (x_{1}, \\dots, x_{n}) \\in \\mathbb{R}^{n}$ и $y = (y_{1}, \\dots, y_{n}) \\in \\mathbb{R}^{n}$ $\\ell_{1}$ и $\\ell_{2}$ раccтояния вычисляются по следующим формулам соответственно:\n$$\n \\rho_{1}\\left( x, y \\right) = \\left\\Vert x - y \\right\\Vert_{1} = \\sum_{i=1}^n \\left| x_{i} - y_{i} \\right|\n$$\n$$\n \\rho_{2}\\left( x, y \\right) = \\left\\Vert x - y \\right\\Vert_{2} = \n \\sqrt{\\sum_{i=1}^n \\left( x_{i} - y_{i} \\right)^2}.\n$$", "a = np.array([1, 2, -3])\nb = np.array([-4, 3, 8])\nprint 'Вектор a:', a\nprint 'Вектор b:', b\n\nprint 'L1 расстояние между векторами a и b:\\n', norm(a - b, ord=1)\n\nprint 'L2 расстояние между векторами a и b:\\n', norm(a - b, ord=2)", "Также расстояние между векторами можно посчитать с помощью функции scipy.spatial.distance.cdist(XA, XB, metric='euclidean', p=2, ...) из модуля SciPy, предназначенного для выполнения научных и инженерных расчётов.", "from scipy.spatial.distance import cdist", "scipy.spatial.distance.cdist(...) требует, чтобы размерность XA и XB была как минимум двумерная. По этой причине для использования этой функции необходимо преобразовать векторы, которые мы рассматриваем в этом ноутбуке, к вектор-строкам с помощью способов, которые мы рассмотрим ниже. \nПараметры XA, XB — исходные вектор-строки, а metric и p задают метрику расстояния\n(более подробно о том, какие метрики можно использовать, см. документацию).\nПервый способ из вектора сделать вектор-строку (вектор-столбец) — это использовать метод array.reshape(shape), где параметр shape задает размерность вектора (кортеж чисел).", "a = np.array([6, 3, -5])\nb = np.array([-1, 0, 7])\nprint 'Вектор a:', a\nprint 'Его размерность:', a.shape\nprint 'Вектор b:', b\nprint 'Его размерность:', b.shape\n\na = a.reshape((1, 3))\nb = b.reshape((1, 3))\nprint 'После применения метода reshape:\\n'\nprint 'Вектор-строка a:', a\nprint 'Его размерность:', a.shape\nprint 'Вектор-строка b:', b\nprint 'Его размерность:', b.shape\n\nprint 'Манхэттенское расстояние между a и b (через cdist):', cdist(a, b, metric='cityblock')", "Заметим, что после применения этого метода размерность полученных вектор-строк будет равна shape. Следующий метод позволяет сделать такое же преобразование, но не изменяет размерность исходного вектора. \nВ NumPy к размерностям объектов можно добавлять фиктивные оси с помощью np.newaxis. 
Для того, чтобы понять, как это сделать, рассмотрим пример:", "d = np.array([3, 0, 8, 9, -10])\nprint 'Вектор d:', d\nprint 'Его размерность:', d.shape\n\nprint 'Вектор d с newaxis --> вектор-строка:\\n', d[np.newaxis, :]\nprint 'Полученная размерность:', d[np.newaxis, :].shape\n\nprint 'Вектор d с newaxis --> вектор-столбец:\\n', d[:, np.newaxis]\nprint 'Полученная размерность:', d[:, np.newaxis].shape", "Важно, что np.newaxis добавляет к размерности ось, длина которой равна 1 (это и логично, так как количество элементов должно сохраняться). Таким образом, надо вставлять новую ось там, где нужна единица в размерности. \nТеперь посчитаем расстояния с помощью scipy.spatial.distance.cdist(...), используя np.newaxis для преобразования векторов:", "a = np.array([6, 3, -5])\nb = np.array([-1, 0, 7])\nprint 'Евклидово расстояние между a и b (через cdist):', cdist(a[np.newaxis, :], \n b[np.newaxis, :], \n metric='euclidean')", "Эта функция также позволяет вычислять попарные расстояния между множествами векторов. Например, пусть у нас имеется матрица размера $m_{A} \\times n$. Мы можем рассматривать ее как описание некоторых $m_{A}$ наблюдений в $n$-мерном пространстве. Пусть также имеется еще одна аналогичная матрица размера $m_{B} \\times n$, где $m_{B}$ векторов в том же $n$-мерном пространстве. Часто необходимо посчитать попарные расстояния между векторами первого и второго множеств. В этом случае можно пользоваться функцией scipy.spatial.distance.cdist(XA, XB, metric='euclidean', p=2, ...), где в качестве XA, XB необходимо передать две описанные матрицы. Функция возвращает матрицу попарных расстояний размера $m_{A} \\times m_{B}$, где элемент матрицы на $[i, j]$-ой позиции равен расстоянию между $i$-тым вектором первого множества и $j$-ым вектором второго множества. \nВ данном случае эта функция предподчительнее numpy.linalg.norm(...), так как она вычисляет попарные расстояния быстрее и эффективнее. \n5. Скалярное произведение и угол между векторами", "a = np.array([0, 5, -1])\nb = np.array([-4, 9, 3])\nprint 'Вектор a:', a\nprint 'Вектор b:', b", "Скалярное произведение в пространстве $\\mathbb{R}^{n}$ для двух векторов $x = (x_{1}, \\dots, x_{n})$ и $y = (y_{1}, \\dots, y_{n})$ определяется как:\n$$\n\\langle x, y \\rangle = \\sum_{i=1}^n x_{i} y_{i}.\n$$\nСкалярное произведение двух векторов можно вычислять с помощью функции numpy.dot(a, b, ...) или метода vec1.dot(vec2), где vec1 и vec2 — исходные векторы. 
Также эти функции подходят для матричного умножения, о котором речь пойдет в следующем уроке.", "print 'Скалярное произведение a и b (через функцию):', np.dot(a, b)\n\nprint 'Скалярное произведение a и b (через метод):', a.dot(b)", "Длиной вектора $x = (x_{1}, \\dots, x_{n}) \\in \\mathbb{R}^{n}$ называется квадратный корень из скалярного произведения, то есть длина равна евклидовой норме вектора:\n$$\n\\left| x \\right| = \\sqrt{\\langle x, x \\rangle} = \\sqrt{\\sum_{i=1}^n x_{i}^2} = \\left\\Vert x \\right\\Vert_{2}.\n$$\nТеперь, когда мы знаем расстояние между двумя ненулевыми векторами и их длины, мы можем вычислить угол между ними через скалярное произведение:\n$$\n\\langle x, y \\rangle = \\left| x \\right| | y | \\cos(\\alpha)\n\\implies \\cos(\\alpha) = \\frac{\\langle x, y \\rangle}{\\left| x \\right| | y |},\n$$\nгде $\\alpha \\in [0, \\pi]$ — угол между векторами $x$ и $y$.", "cos_angle = np.dot(a, b) / norm(a) / norm(b)\nprint 'Косинус угла между a и b:', cos_angle\nprint 'Сам угол:', np.arccos(cos_angle)", "Более подробно о том, как вычислять скалярное произведение в NumPy, \nсм. документацию." ]
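The notebook above (written in Russian, with Python 2 print statements) covers vector creation, L1/L2 norms and distances via numpy.linalg.norm and scipy.spatial.distance.cdist (with reshape or np.newaxis to obtain row vectors), and finally dot products and the angle between vectors. A compact recap in Python 3, using the same NumPy/SciPy functions and nothing beyond what the notebook itself demonstrates, might look as follows.

```python
import numpy as np
from numpy.linalg import norm
from scipy.spatial.distance import cdist

a = np.array([1, 2, -3])
b = np.array([-4, 3, 8])

print(norm(a, ord=1))        # L1 (Manhattan) norm of a
print(norm(a, ord=2))        # L2 (Euclidean) norm of a
print(norm(a - b, ord=2))    # Euclidean distance between a and b

# cdist expects 2-D inputs, hence the np.newaxis trick shown in the notebook.
print(cdist(a[np.newaxis, :], b[np.newaxis, :], metric='cityblock'))

# Angle between the two vectors via the dot product.
cos_angle = np.dot(a, b) / (norm(a) * norm(b))
print(np.arccos(cos_angle))  # in radians
```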
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
nerdslab/xbrain
Demo/xbrain-demo-dvid.ipynb
apache-2.0
[ "XBRAIN Demo Notebook\nLast Update: 10/12/2017", "# Imports \nimport numpy as np\nimport scipy.io as sio\nfrom PIL import Image\nimport ndparse as ndp\nimport xbrain\nimport time\nimport os\nimport intern\nfrom intern.remote.dvid import DVIDRemote", "Initialize\n\nPath of files\nParameters for ilastik\nParameters for cell detection\nParameters for blood vessel segmentation", "# Set folder where data is stored\ncurrent_dir = os.path.abspath(os.getcwd())\nfolder = next(os.walk('.'))[1][0]\n#print(os.path.exists(folder)) # testing\n\n# ilastik parameters\nclassifier_file = folder + '/xbrain_vessel_seg_v7.ilp'\n#print(classifier_file) # testing\n#print(os.path.exists(classifier_file)) # testing\n\n# testing ground truth\nimage_file = folder + '/V3_imgdata_gt.npy'\n#print(image_file) # testing\n#print(os.path.exists(image_file)) # testing\n\n\n# Dictate the processing power\nram_size = 4000 # 4000 MB \nno_of_threads = 8 # 8 threads\n\n# Cell detection parameters \ncell_probability_threshold = 0.2\nstopping_criterion = 0.47\ninitial_template_size = 18\ndilation_size = 8\nmax_no_cells = 500\n\n# Vessel segmentation parameters \nvessel_probability_threshold = .68\ndilation_size = 3\nminimum_size = 4000", "Load in the data", "# load data\n\ndvid = DVIDRemote({ \"protocol\": \"http\",\n \"host\": \"172.19.248.41:8000/\",\n })\nchan = dvid.get_channel('a3afee0bf807466c9b7c3b0bbfd1acbd','grayscale')\nprint(chan)\ninput_data = dvid.get_cutout(chan,0,[0,2],[0,64],[390,454])#DVID here -- np.load(image_file)\n\n# plot the 50th slice\n#ndp.plot(input_data, 50)\n\n# get the shape of the data\nprint(input_data.shape) # (200, 200, 200)", "Ingest data and classifer into ilastik", "# Compute time required for processing\nstart = time.time()\n\n# Process the data to probability maps\nprobability_maps = xbrain.classify_pixel(input_data, classifier_file, threads=no_of_threads, ram=ram_size)\n\nend = time.time()\nprint(\"\\nElapsed time: %f minutes\" % ((end - start)/60))", "Display the results of ilastik", "# pull down the coorisponding matricies\ncell_prob_map = probability_maps[:, :, :, 2]\nvessel_prob_map = probability_maps[:, :, :, 1]\n\nprint(\"cell_prob_map shape\", cell_prob_map.shape)\nndp.plot(cell_prob_map, slice=50, cmap1='jet', alpha=0.5)\n\n\nprint(\"vessel_prob_map shape\", vessel_prob_map.shape)\nndp.plot(vessel_prob_map, slice=50, cmap1='jet')\n", "Running different package for testing new algorithm", "# reload packages for testing new algorithms\n# import importlib\n# importlib.reload(xbrain)\n\n# Compute time required for processing\nstart = time.time()\n\n# cell detection\ncentroids, cell_map = xbrain.detect_cells(cell_prob_map, cell_probability_threshold, stopping_criterion, initial_template_size, dilation_size, max_no_cells)\nprint(centroids)\n\nend = time.time()\nprint(\"\\nElapsed time: %f minutes\" % ((end - start)/60))\n\n# find vessels\n\nvessel_map = xbrain.segment_vessels(vessel_prob_map, vessel_probability_threshold, dilation_size, minimum_size)", "Display results of new algorithm", "# show results\n\nprint(\"Vessel Segmentation\")\nndp.plot(input_data, vessel_map, slice = 50, alpha = 0.5)\n\nprint(\"Cell Segmentation\")\nndp.plot(input_data, cell_map, slice = 50, alpha = 0.5)\n", "Thank you for going throught the XBRAIN tutorial! For more information about the lab, please visit: dyerlab.gatech.edu" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
CopernicusMarineInsitu/INSTACTraining
PythonNotebooks/PlatformPlots/Read_drifter_data_2.ipynb
mit
[ "This notebook uses what was shown in the previous one.<br/>\nThe goal is represent all the temperature data corresponding to a given month, in this case July 2015", "year = 2015\nmonth = 7\n\n%matplotlib inline\nimport glob\nimport os\nimport netCDF4\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.basemap import Basemap\nfrom matplotlib import colors", "The directory where we store the data files:", "basedir = \"~/DataOceano/MyOcean/INSITU_GLO_NRT_OBSERVATIONS_013_030/monthly/\" + str(year) + str(month).zfill(2) + '/'\nbasedir = os.path.expanduser(basedir)", "Simple plot\nConfiguration\nWe start by defining some options for the scatter plot:\n* the range of temperature that will be shown;\n* the colormap;\n* the ticks to be put on the colorbar.", "tempmin, tempmax = 5., 30.\ncmaptemp = plt.cm.RdYlBu_r\nnormtemp = colors.Normalize(vmin=tempmin, vmax=tempmax)\ntempticks = np.arange(tempmin, tempmax+0.1,2.5)", "Loop on the files\nWe create a loop on the netCDF files located in our directory. Longitude, latitude and dept are read from every file, while the temperature is not always available. For the plot, we only take the data with have a depth dimension equal to 1.", "fig = plt.figure(figsize=(12, 8))\n\nnfiles_notemp = 0\nfilelist = sorted(glob.glob(basedir+'*.nc'))\nfor datafiles in filelist:\n\n with netCDF4.Dataset(datafiles) as nc:\n lon = nc.variables['LONGITUDE'][:]\n lat = nc.variables['LATITUDE'][:]\n depth = nc.variables['DEPH'][:]\n\n try:\n temperature = nc.variables['TEMP'][:,0]\n\n if(depth.shape[1] == 1):\n\n scat = plt.scatter(lon.mean(), lat.mean(), s=15., c=temperature.mean(), edgecolor='None', \n cmap=cmaptemp, norm=normtemp)\n\n except KeyError:\n # print 'No variable temperature in this file'\n #temperature = np.nan*np.ones_like(lat)\n nfiles_notemp+=1\n \n# Add colorbar and title\ncbar = plt.colorbar(scat, extend='both')\ncbar.set_label('$^{\\circ}$C', rotation=0, ha='left')\nplt.title('Temperature from surface drifters\\n' + str(year) + '-' + str(month).zfill(2))\nplt.show()", "We also counted how many files don't have the temperature variable:", "print 'Number of files: ' + str(len(filelist))\nprint 'Number of files without temperature: ' + str(nfiles_notemp)", "Plot on a map\nConfiguration of the projection\nWe choose a Robin projection centered on 0ºE, with a cruse ('c') resolution for the coastline.", "m = Basemap(projection='moll', lon_0=0, resolution='c')", "The rest of the configuration of the plot can be kept as it was.\nLoop on the files\nWe can copy the part of the code used before. We need to add a line for the projection of the coordinates: lon, lat = m(lon, lat). 
After the loop we can add the coastline and the continents.", "fig = plt.figure(figsize=(12, 8))\n\nnfiles_notemp = 0\nfilelist = sorted(glob.glob(basedir+'*.nc'))\nfor datafiles in filelist:\n\n with netCDF4.Dataset(datafiles) as nc:\n lon = nc.variables['LONGITUDE'][:]\n lat = nc.variables['LATITUDE'][:]\n depth = nc.variables['DEPH'][:]\n \n lon, lat = m(lon, lat)\n try:\n temperature = nc.variables['TEMP'][:,0]\n\n if(depth.shape[1] == 1):\n\n scat = m.scatter(lon.mean(), lat.mean(), s=15., c=temperature.mean(), edgecolor='None', \n cmap=cmaptemp, norm=normtemp)\n\n except KeyError:\n # print 'No variable temperature in this file'\n #temperature = np.nan*np.ones_like(lat)\n nfiles_notemp+=1\n \n# Add colorbar and title\ncbar = plt.colorbar(scat, extend='both', shrink=0.7)\ncbar.set_label('$^{\\circ}$C', rotation=0, ha='left')\nm.drawcoastlines(linewidth=0.2)\nm.fillcontinents(color = 'gray')\nplt.title('Temperature from surface drifters\\n' + str(year) + '-' + str(month).zfill(2))\nplt.show()" ]
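A possible variation on the loop above (a sketch reusing the notebook's basedir, cmaptemp and normtemp; it is not part of the original notebook) is to collect the per-file mean position and temperature first and call scatter only once, which avoids one matplotlib call per drifter and can be noticeably faster when there are many files.

```python
lons, lats, temps = [], [], []
for datafiles in sorted(glob.glob(basedir + '*.nc')):
    with netCDF4.Dataset(datafiles) as nc:
        depth = nc.variables['DEPH'][:]
        try:
            temperature = nc.variables['TEMP'][:, 0]
        except KeyError:
            continue  # this file has no temperature variable
        if depth.ndim != 2 or depth.shape[1] != 1:
            continue  # keep only drifters with a single depth level
        lons.append(nc.variables['LONGITUDE'][:].mean())
        lats.append(nc.variables['LATITUDE'][:].mean())
        temps.append(temperature.mean())

fig = plt.figure(figsize=(12, 8))
scat = plt.scatter(lons, lats, s=15., c=temps, edgecolor='None',
                   cmap=cmaptemp, norm=normtemp)
plt.colorbar(scat, extend='both')
plt.title('Temperature from surface drifters (single scatter call)')
plt.show()
```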
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
uber/pyro
tutorial/source/epi_intro.ipynb
apache-2.0
[ "Epidemiological models: Introduction\nThis tutorial introduces the pyro.contrib.epidemiology module, an epidemiological modeling language with a number of black box inference algorithms. This tutorial assumes the reader is already familiar with modeling, inference, and distribution shapes.\nSee also the following scripts:\n\nEpidemiological models: Univariate\nEpidemiological models: Regional\nEpidemiological inference via HMC\n\nSummary\n\nTo create a new model, inherit from the CompartmentalModel base class.\nOverride methods .global_model(), .initialize(params), and .transition(params, state, t).\nTake care to support broadcasting and vectorized interpretation in those methods.\nFor single time series, set population to an integer.\nFor batched time series, let population be a vector, and use self.region_plate.\nFor models with complex inter-compartment flows, override the .compute_flows() method. \nFlows with loops (undirected or directed) are not currently supported.\nTo perform cheap approximate inference via SVI, call the .fit_svi() method.\nTo perform more expensive inference via MCMC, call the .fit_mcmc() method.\nTo stochastically predict latent and future variables, call the .predict() method.\n\nTable of contents\n\nBasic workflow\nModeling\nGenerating data\nInference\nPrediction\nForecasting\nAdvanced modeling\nRegional models\nPhylogenetic likelihoods\nHeterogeneous models\nComplex compartment flow\nReferences", "import os\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport torch\nimport pyro\nimport pyro.distributions as dist\nfrom pyro.contrib.epidemiology import CompartmentalModel, binomial_dist, infection_dist\n\n%matplotlib inline\nassert pyro.__version__.startswith('1.7.0')\ntorch.set_default_dtype(torch.double) # Required for MCMC inference.\nsmoke_test = ('CI' in os.environ)", "Basic workflow <a class=\"anchor\" id=\"Basic-workflow\"></a>\nThe pyro.contrib.epidemiology module provides a modeling language for a class of stochastic discrete-time discrete-count compartmental models, together with a number of black box inference algorithms to perform joint inference on global parameters and latent variables. This modeling language is more restrictive than the full Pyro probabilistic programming language:\n\ncontrol flow must be static;\ncompartmental distributions are restricted to binomial_dist(), beta_binomial_dist(), and infection_dist();\nplates are not allowed, except for the single optional .region_plate;\nall random variables must be either global or Markov and sampled at every time step, so e.g. time-windowed random variables are not supported;\nmodels must support broadcasting and vectorization of time t.\n\nThese restrictions allow inference algorithms to vectorize over the time dimension, leading to inference algorithms with per-iteration parallel complexity sublinear in length of the time axis. The restriction on distributions allows inference algorithms to approximate parts of the model as Gaussian via moment matching, further speeding up inference. Finally, because real data is so often overdispersed relative to Binomial idealizations, the three distribution helpers provide an overdispersion parameter calibrated so that in the large-population limit all distribution helpers converge to log-normal.\nBlack box inference algorithms currently include: SVI with a moment-matching approximation, and NUTS either with a moment-matched approximation or with an exact auxiliary variable method detailed in the SIR HMC tutorial. 
All three algorithms initialize using SMC and reparameterize time dependent variables using a fast Haar wavelet transform. Default inference parameters are set for cheap approximate results; accurate results will require more steps and ideally comparison among different inference algorithms. We recommend that, when running MCMC inference, you use multiple chains, thus making it easier to diagnose mixing issues.\nWhile MCMC inference can be more accurate for a given model, SVI is much faster and thus allows richer model structure (e.g. incorporating neural networks) and more rapid model iteration. We recommend starting model exploration using mean field SVI (via .fit_svi(guide_rank=0)), then optionally increasing accuracy using a low-rank multivariate normal guide (via .fit_svi(guide_rank=None)). For even more accurate posteriors you could then try moment-matched MCMC (via .fit_mcmc(num_quant_bins=1)), or the most accurate and most expensive enumerated MCMC (via .fit_mcmc(num_quant_bins=4)). We recommend that, when fitting models with neural networks, you train via .fit_svi(), then freeze the network (say by omitting a pyro.module() statement) before optionally running MCMC inference.\nModeling <a class=\"anchor\" id=\"Modeling\"></a>\nThe pyro.contrib.epidemiology.models module provides a number of example models. While in principle these are reusable, we recommend forking and modifying these models for your task. Let's take a look at one of the simplest examples, SimpleSIRModel. This model derives from the CompartmentalModel base class and overrides the three standard methods using familiar Pyro modeling code in each method.\n\n.global_model() samples global parameters and packs them into a single return value (here a tuple, but any structure is allowed). The return value is available as the params argument to the other two methods.\n.initialize(params) samples (or deterministically sets) initial values of time series, returning a dictionary mapping time series name to initial value.\n.transition(params, state, t) inputs global params, the state at the previous time step, and the time index t (which may be a slice!). 
It then samples flows and updates the state dict.", "class SimpleSIRModel(CompartmentalModel):\n def __init__(self, population, recovery_time, data):\n compartments = (\"S\", \"I\") # R is implicit.\n duration = len(data)\n super().__init__(compartments, duration, population)\n assert isinstance(recovery_time, float)\n assert recovery_time > 1\n self.recovery_time = recovery_time\n self.data = data\n\n def global_model(self):\n tau = self.recovery_time\n R0 = pyro.sample(\"R0\", dist.LogNormal(0., 1.))\n rho = pyro.sample(\"rho\", dist.Beta(100, 100))\n return R0, tau, rho\n\n def initialize(self, params):\n # Start with a single infection.\n return {\"S\": self.population - 1, \"I\": 1}\n\n def transition(self, params, state, t):\n R0, tau, rho = params\n\n # Sample flows between compartments.\n S2I = pyro.sample(\"S2I_{}\".format(t),\n infection_dist(individual_rate=R0 / tau,\n num_susceptible=state[\"S\"],\n num_infectious=state[\"I\"],\n population=self.population))\n I2R = pyro.sample(\"I2R_{}\".format(t),\n binomial_dist(state[\"I\"], 1 / tau))\n\n # Update compartments with flows.\n state[\"S\"] = state[\"S\"] - S2I\n state[\"I\"] = state[\"I\"] + S2I - I2R\n\n # Condition on observations.\n t_is_observed = isinstance(t, slice) or t < self.duration\n pyro.sample(\"obs_{}\".format(t),\n binomial_dist(S2I, rho),\n obs=self.data[t] if t_is_observed else None)", "Note that we've stored data in the model. These models have a scikit-learn like interface: we instantiate a model class with data, then call a .fit_*() method to train, then call .predict() on a trained model.\nNote also that we've taken special care so that t can be either an integer or a slice. Under the hood, t is an integer during SMC initialization, a slice during SVI or MCMC inference, and an integer again during prediction.\nGenerating data <a class=\"anchor\" id=\"Generating-data\"></a>\nTo check that our model generates plausible data, we can create a model with empty data and call the model's .generate() method. This method first calls, .global_model(), then calls .initialize(), then calls .transition() once per time step (based on the length of our empty data.", "population = 10000\nrecovery_time = 10.\nempty_data = [None] * 90\nmodel = SimpleSIRModel(population, recovery_time, empty_data)\n\n# We'll repeatedly generate data until a desired number of infections is found.\npyro.set_rng_seed(20200709)\nfor attempt in range(100):\n synth_data = model.generate({\"R0\": 2.0})\n total_infections = synth_data[\"S2I\"].sum().item()\n if 4000 <= total_infections <= 6000:\n break\nprint(\"Simulated {} infections after {} attempts\".format(total_infections, 1 + attempt))", "The generated data contains both global variables and time series, packed into tensors.", "for key, value in sorted(synth_data.items()):\n print(\"{}.shape = {}\".format(key, tuple(value.shape)))\n\nplt.figure(figsize=(8,4))\nfor name, value in sorted(synth_data.items()):\n if value.dim():\n plt.plot(value, label=name)\nplt.xlim(0, len(empty_data) - 1)\nplt.ylim(0.8, None)\nplt.xlabel(\"time step\")\nplt.ylabel(\"individuals\")\nplt.yscale(\"log\")\nplt.legend(loc=\"best\")\nplt.title(\"Synthetic time series\")\nplt.tight_layout()", "Inference <a class=\"anchor\" id=\"Inference\"></a>\nNext let's recover estimates of the latent variables given only observations obs. 
To do this we'll create a new model instance from the synthetic observations.", "obs = synth_data[\"obs\"]\nmodel = SimpleSIRModel(population, recovery_time, obs)", "The CompartmentalModel provides a number of inference algorithms. The cheapest and most scalable algorithm is SVI, avilable via the .fit_svi() method. This method returns a list of losses to help us diagnose convergence; the fitted parameters are stored in the model object.", "%%time\nlosses = model.fit_svi(num_steps=101 if smoke_test else 2001,\n jit=True)\n\nplt.figure(figsize=(8, 3))\nplt.plot(losses)\nplt.xlabel(\"SVI step\")\nplt.ylabel(\"loss\")\nplt.ylim(min(losses), max(losses[50:]));", "After inference, samples of latent variables are stored in the .samples attribute. These are primarily for internal use, and do not contain the full set of latent variables.", "for key, value in sorted(model.samples.items()):\n print(\"{}.shape = {}\".format(key, tuple(value.shape)))", "Prediction <a class=\"anchor\" id=\"Prediction\"></a>\nAfter inference we can both examine latent variables and forecast forward using the .predict() method. First let's simply predict latent variables.", "%%time\nsamples = model.predict()\n\nfor key, value in sorted(samples.items()):\n print(\"{}.shape = {}\".format(key, tuple(value.shape)))\n\nnames = [\"R0\", \"rho\"]\nfig, axes = plt.subplots(2, 1, figsize=(5, 5))\naxes[0].set_title(\"Posterior estimates of global parameters\")\nfor ax, name in zip(axes, names):\n truth = synth_data[name]\n sns.distplot(samples[name], ax=ax, label=\"posterior\")\n ax.axvline(truth, color=\"k\", label=\"truth\")\n ax.set_xlabel(name)\n ax.set_yticks(())\n ax.legend(loc=\"best\")\nplt.tight_layout()", "Notice that while the inference recovers the basic reproductive number R0, it poorly estimates the response rate rho and underestimates its uncertainty. While perfect inference would provide better uncertainty estimates, the response rate is known to be difficult to recover from data. Ideally the model can either incorporate a narrower prior, either obtained by testing a random sample of the population, or by more accurate observations, e.g. counting deaths rather than confirmed infections.\nForecasting <a class=\"anchor\" id=\"Forecasting\"></a>\nWe can forecast forward by passing a forecast argument to the .predict() method, specifying the number of time steps ahead we'd like to forecast. 
The returned sample will contain time values during both the first observed time interval (here 90 days) and the forecasted window (say 30 days).", "%time\nsamples = model.predict(forecast=30)\n\ndef plot_forecast(samples):\n duration = len(empty_data)\n forecast = samples[\"S\"].size(-1) - duration\n num_samples = len(samples[\"R0\"])\n\n time = torch.arange(duration + forecast)\n S2I = samples[\"S2I\"]\n median = S2I.median(dim=0).values\n p05 = S2I.kthvalue(int(round(0.5 + 0.05 * num_samples)), dim=0).values\n p95 = S2I.kthvalue(int(round(0.5 + 0.95 * num_samples)), dim=0).values\n\n plt.figure(figsize=(8, 4))\n plt.fill_between(time, p05, p95, color=\"red\", alpha=0.3, label=\"90% CI\")\n plt.plot(time, median, \"r-\", label=\"median\")\n plt.plot(time[:duration], obs, \"k.\", label=\"observed\")\n plt.plot(time[:duration], synth_data[\"S2I\"], \"k--\", label=\"truth\")\n plt.axvline(duration - 0.5, color=\"gray\", lw=1)\n plt.xlim(0, len(time) - 1)\n plt.ylim(0, None)\n plt.xlabel(\"day after first infection\")\n plt.ylabel(\"new infections per day\")\n plt.title(\"New infections in population of {}\".format(population))\n plt.legend(loc=\"upper left\")\n plt.tight_layout()\n\nplot_forecast(samples)", "It looks like the mean field guide underestimates uncertainty. To improve uncertainty estimates we can instead try MCMC inference. In this simple model MCMC is only a small factor slower than SVI; in more complex models MCMC can be multiple orders of magnitude slower than SVI.", "%%time\nmodel = SimpleSIRModel(population, recovery_time, obs)\nmcmc = model.fit_mcmc(num_samples=4 if smoke_test else 400,\n jit_compile=True)\n\nsamples = model.predict(forecast=30)\nplot_forecast(samples)", "Advanced modeling <a class=\"anchor\" id=\"Advanced-modeling\"></a>\nSo far we've seen how to create a simple univariate model, fit the model to data, and predict and forecast future data. Next let's consider more advanced modeling techniques:\n\nregional models that couple compartments among multiple aggregated regions;\nphylogenetic likelihoods to incorporate genetic sequencing data;\nheterogeneous models with time-varying latent variables; and\nComplex compartment flow for models with non-linear transitions.\n\nRegional models <a class=\"anchor\" id=\"Regional-models\"></a>\nEpidemiology models vary in their level of detail. At the coarse-grained extreme are univariate aggregate models as we saw above. At the fine-grained extreme are network models where each individual's state is tracked and infections occur along edges of a sparse graph (pyro.contrib.epidemiology does not implement network models). We now consider an mid-level model where each of many regions (e.g. countries or zip codes) is tracked in aggregate, and infections occur both within regions and between pairs of regions. In Pyro we model multiple regions with a plate. Pyro's CompartmentalModel class does not support general pyro.plate syntax, but it does support a single special self.region_plate for regional models. 
This plate is available iff a CompartmentalModel is initialized with a vector population, and the size of the region_plate will be the length of the population vector.\nLet's take a look at the example RegionalSIRModel:", "class RegionalSIRModel(CompartmentalModel):\n def __init__(self, population, coupling, recovery_time, data):\n duration = len(data)\n num_regions, = population.shape\n assert coupling.shape == (num_regions, num_regions)\n assert (0 <= coupling).all()\n assert (coupling <= 1).all()\n assert isinstance(recovery_time, float)\n assert recovery_time > 1\n if isinstance(data, torch.Tensor):\n # Data tensors should be oriented as (time, region).\n assert data.shape == (duration, num_regions)\n compartments = (\"S\", \"I\") # R is implicit.\n\n # We create a regional model by passing a vector of populations.\n super().__init__(compartments, duration, population, approximate=(\"I\",))\n\n self.coupling = coupling\n self.recovery_time = recovery_time\n self.data = data\n\n def global_model(self):\n # Assume recovery time is a known constant.\n tau = self.recovery_time\n\n # Assume reproductive number is unknown but homogeneous.\n R0 = pyro.sample(\"R0\", dist.LogNormal(0., 1.))\n\n # Assume response rate is heterogeneous and model it with a\n # hierarchical Gamma-Beta prior.\n rho_c1 = pyro.sample(\"rho_c1\", dist.Gamma(10, 1))\n rho_c0 = pyro.sample(\"rho_c0\", dist.Gamma(10, 1))\n with self.region_plate:\n rho = pyro.sample(\"rho\", dist.Beta(rho_c1, rho_c0))\n\n return R0, tau, rho\n\n def initialize(self, params):\n # Start with a single infection in region 0.\n I = torch.zeros_like(self.population)\n I[0] += 1\n S = self.population - I\n return {\"S\": S, \"I\": I}\n\n def transition(self, params, state, t):\n R0, tau, rho = params\n\n # Account for infections from all regions. This uses approximate (point\n # estimate) counts I_approx for infection from other regions, but uses\n # the exact (enumerated) count I for infections from one's own region.\n I_coupled = state[\"I_approx\"] @ self.coupling\n I_coupled = I_coupled + (state[\"I\"] - state[\"I_approx\"]) * self.coupling.diag()\n I_coupled = I_coupled.clamp(min=0) # In case I_approx is negative.\n pop_coupled = self.population @ self.coupling\n\n with self.region_plate:\n # Sample flows between compartments.\n S2I = pyro.sample(\"S2I_{}\".format(t),\n infection_dist(individual_rate=R0 / tau,\n num_susceptible=state[\"S\"],\n num_infectious=I_coupled,\n population=pop_coupled))\n I2R = pyro.sample(\"I2R_{}\".format(t),\n binomial_dist(state[\"I\"], 1 / tau))\n\n # Update compartments with flows.\n state[\"S\"] = state[\"S\"] - S2I\n state[\"I\"] = state[\"I\"] + S2I - I2R\n\n # Condition on observations.\n t_is_observed = isinstance(t, slice) or t < self.duration\n pyro.sample(\"obs_{}\".format(t),\n binomial_dist(S2I, rho),\n obs=self.data[t] if t_is_observed else None)", "The main differences from the earlier univariate model are that: we assume population is a vector of length num_regions, we sample all compartmental variables and some global variables inside the region_plate, and we compute coupled vectors I_coupled and pop_coupled of the effective number of infected individuals and population accounting for both intra-region and inter-region infections. Among global variables we have chosen for demonstration purposes to make tau a fixed single number, R0 a single latent variable shared among all regions, and rho a local latent variable that can take a different value for each region. 
Note that while rho is not shared among regions, we have created a hierarchical model whereby rho's parent variables are shared among regions. While some of our variables are region-global and some region-local, only the compartmental variables are both region-local and time-dependent; all other parameters are fixed for all time. See the heterogeneous models section below for time-dependent latent variables.\nNote that Pyro's enumerated MCMC strategy (.fit_mcmc() with num_quant_bins &gt; 1) requires extra logic to use a mean-field approximation across compartments: we pass approximate=(\"I\",) to the constructor and force compartements to iteract via state[\"I_approx\"] rather than state[\"I\"]. This code is not required for SVI inference or for moment-matched MCMC inference (.fit_mcmc() with the default num_quant_bins=0).\nSee the Epidemiology: regional models example for a demonstration of how to generate data, train, predict, and forecast with regional models. \nPhylogenetic likelihoods <a class=\"anchor\" id=\"Phylogenetic-likelihoods\"></a>\nEpidemiological parameters can be difficult to identify from aggregate observations alone. However some parameters like the superspreading parameter k can be more accurately identified by combining aggregate count data with viral phylogenetic trees reconstructed from viral genetic sequencing data (Li et al. 2017). Pyro implements a CoalescentRateLikelihood class to compute a population likelihood p(I|phylogeny) given statistics of a phylogenetic tree (or a batch of tree samples). The statistics needed are exactly the times of each sampling event (i.e. when a viral genome was sequenced) and the times of genetic coalescent events in a binary phylogenetic tree; let us call these two vectors leaf_times and coal_times, respectively, where len(leaf_times) == 1 + len(coal_times) for binary trees. Pyro provides a helper bio_phylo_to_times() to extract these statistics from a Bio.Phylo tree objects; in turn Bio.Phylo can parse many file formats of phylogenetic trees.\nLet's take a look at the SuperspreadingSEIRModel which includes a phylogenetic likelihood. We'll focus on the phylogenetic parts of the model:\n```python\nclass SuperspreadingSEIRModel(CompartmentalModel):\n def init(self, population, incubation_time, recovery_time, data, *,\n leaf_times=None, coal_times=None):\n compartments = (\"S\", \"E\", \"I\") # R is implicit.\n duration = len(data)\n super().init(compartments, duration, population)\n ...\n self.coal_likelihood = dist.CoalescentRateLikelihood(\n leaf_times, coal_times, duration)\n ...\ndef transition(self, params, state, t):\n ...\n # Condition on observations.\n t_is_observed = isinstance(t, slice) or t &lt; self.duration\n R = R0 * state[\"S\"] / self.population\n coal_rate = R * (1. + 1. / k) / (tau_i * state[\"I\"] + 1e-8)\n pyro.factor(\"coalescent_{}\".format(t),\n self.coal_likelihood(coal_rate, t)\n if t_is_observed else torch.tensor(0.))\n\n`\nWe first constructed aCoalescentRateLikelihoodobject to be used throughout inference and prediction; this performs preprocessing work once so that it is cheap to evaluateself.coal_likelihood(...). Note that(leaf_times, coal_times)should be in units of time steps, the same time steps as the time index `t` and `duration`. Typicallyleaf_timesare in[0, duration), butcoal_timesprecedeleaf_times(as points of common ancestry), and may be negative. The likelihood involves the coalescent ratecoal_ratein a coalescent process; we can compute this from an epidemiological model. 
In this superspreading modelcoal_ratedepends on the reproductive numberR, the superspreading parameterk, the incubation timetau_i, and the current number of infected individualsstate[\"I\"]`` (Li et al. 2017).\nHeterogeneous models <a class=\"anchor\" id=\"Heterogeneous-models\"></a>\nEpidemiological parameters often vary in time, due to human interventions, changes in weather, and other external factors. We can model real-valued time-varying latent variables in CompartmentalModel by moving static latent variables from .global_model() to .initialize() and .transition(). For example we can model a reproductive number under Brownian drift in log-space by initializing at a random R0 and multiplying by a drifting factor, as in the HeterogeneousSIRModel example:\n```python\nclass HeterogeneousSIRModel(CompartmentalModel):\n ...\n def global_model(self):\n tau = self.recovery_time\n R0 = pyro.sample(\"R0\", dist.LogNormal(0., 1.))\n rho = ...\n return R0, tau, rho\ndef initialize(self, params):\n # Start with a single infection.\n # We also store the initial beta value in the state dict.\n return {\"S\": self.population - 1, \"I\": 1, \"beta\": torch.tensor(1.)}\n\ndef transition(self, params, state, t):\n R0, tau, rho = params\n # Sample heterogeneous variables.\n # This assumes beta slowly drifts via Brownian motion in log space.\n beta = pyro.sample(\"beta_{}\".format(t),\n dist.LogNormal(state[\"beta\"].log(), 0.1))\n Rt = pyro.deterministic(\"Rt_{}\".format(t), R0 * beta)\n\n # Sample flows between compartments.\n S2I = pyro.sample(\"S2I_{}\".format(t),\n infection_dist(individual_rate=Rt / tau,\n num_susceptible=state[\"S\"],\n num_infectious=state[\"I\"],\n population=self.population))\n ...\n # Update compartments and heterogeneous variables.\n state[\"S\"] = state[\"S\"] - S2I\n state[\"I\"] = state[\"I\"] + S2I - I2R\n state[\"beta\"] = beta # We store the latest beta value in the state dict.\n ...\n\n`\nHere we deterministically initialize a scale factorbeta = 1in.initialize()then let it drift via log-Brownian motion. We also need to updatestate[\"beta\"]just as we update the compartmental variables. Nowbetawill be provided as a time series when we.predict(). While we could have writtenRt = R0 * beta, we instead wrapped this computation in apyro.deterministicthereby exposingRtas another time series provided by.predict(). Note that we could have instead sampledR0in.initialize()and letRtdrift directly, rather than introducing a scale factorbeta``. However separating the two into a non-centered form improves geometry (Betancourt and Girolami 2013).\nIt is also easy to pass in time-varying covariates as tensors, in the same way we have passed in data to the constructors of all example models. To predict the effects of different causal interventions, you can pass in a covariate that is longer than duration, run inference (looking only at the first [0,duration) entries), then mutate entries of the covariate after duration and generate different .predict()ions.\nComplex compartment flow <a class=\"anchor\" id=\"Complex-compartment-flow\"></a>\nThe CompartmentalModel class assumes by default that the compartments are arranged linearly and terminate in an implicit terminal compartment named \"R\", for example S-I-R, S-E-I-R or boxcar models like S-E1-E2-I1-I2-I3-R. To describe other more complex flows between compartments, you can override the .compute_flows() method. However currently there is no support for flows with undirected loops (e.g. 
S-I-S).\nLet's create a branching SIRD model with possible flows\nS → I → R\n ↘\n D\nAs with other models, we'll keep the \"R\" state implicit (although we could equally keep the \"D\" state implicit and the \"R\" state explicit). In the .compute_flows() method, we'll input a pair of states and we'll need to compute three flow values: S2I, I2R, and I2D.\n```python\nclass SIRDModel(CompartmentalModel):\n def init(self, population, data):\n compartments = (\"S\", \"I\", \"D\")\n duration = len(data)\n super().init(compartments, duration, population)\n self.data = data\ndef compute_flows(self, prev, curr, t):\n S2I = prev[\"S\"] - curr[\"S\"] # S can only go in one direction.\n I2D = curr[\"D\"] - prev[\"D\"] # D can only have come from one direction.\n # Now by conservation at I, change + inflows + outflows = 0,\n # so we can solve for the single unknown I2R.\n I2R = prev[\"I\"] - curr[\"I\"] + S2I - I2D\n return {\n \"S2I_{}\".format(t): S2I,\n \"I2D_{}\".format(t): I2D,\n \"I2R_{}\".format(t): I2R,\n }\n...\ndef transition(self, params, state, t):\n ...\n # Sample flows between compartments.\n S2I = pyro.sample(\"S2I_{}\".format(t), ...)\n I2D = pyro.sample(\"I2D_{}\".format(t), ...)\n I2R = pyro.sample(\"I2R_{}\".format(t), ...)\n\n # Update compartments with flows.\n state[\"S\"] = state[\"S\"] - S2I\n state[\"I\"] = state[\"I\"] + S2I - I2D - I2R\n state[\"D\"] = state[\"D\"] + I2D\n ...\n\nNote you can name the dict keys anything you want, as long as they match your sample statements in ``.transition()`` and you correctly reverse the flow computation in ``.transition()``. During inference Pyro will check that the ``.compute_flows()`` and ``.transition()`` computations agree. Take care to avoid in-place PyTorch operations, since these can modify the tensors rather than the dictionary:diff\n+ state[\"S\"] = state[\"S\"] - S2I # Correct\n- state[\"S\"] -= S2I # AVOID: may corrupt tensors\n```\nFor a slightly more complex example, take a look at the SimpleSEIRDModel.\nReferences\n\n<a class=\"anchor\" id=\"1\"></a>\n Lucy M. Li, Nicholas C. Grassly, Christophe Fraser (2017)\n \"Quantifying Transmission Heterogeneity Using Both Pathogen Phylogenies\n and Incidence Time Series\"\n https://academic.oup.com/mbe/article/34/11/2982/3952784\n<a class=\"anchor\" id=\"2\"></a>\n M. J. Betancourt, Mark Girolami (2013)\n \"Hamiltonian Monte Carlo for Hierarchical Models\"\n https://arxiv.org/abs/1312.0906" ]
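To make the modeling pattern concrete, here is a hedged sketch of how the SimpleSIRModel shown earlier extends to an extra exposed compartment (S to E to I to R, with R again implicit). The library already ships a ready-made SimpleSEIRModel, so treat this purely as an illustration of the three overridden methods; it assumes the same imports as the tutorial cells above (CompartmentalModel, binomial_dist, infection_dist, pyro, dist).

```python
class SketchSEIRModel(CompartmentalModel):
    def __init__(self, population, incubation_time, recovery_time, data):
        compartments = ("S", "E", "I")  # R is implicit.
        duration = len(data)
        super().__init__(compartments, duration, population)
        self.incubation_time = incubation_time
        self.recovery_time = recovery_time
        self.data = data

    def global_model(self):
        tau_e = self.incubation_time
        tau_i = self.recovery_time
        R0 = pyro.sample("R0", dist.LogNormal(0., 1.))
        rho = pyro.sample("rho", dist.Beta(100, 100))
        return R0, tau_e, tau_i, rho

    def initialize(self, params):
        # Start with a single infection.
        return {"S": self.population - 1, "E": 0, "I": 1}

    def transition(self, params, state, t):
        R0, tau_e, tau_i, rho = params

        # Sample flows between compartments.
        S2E = pyro.sample("S2E_{}".format(t),
                          infection_dist(individual_rate=R0 / tau_i,
                                         num_susceptible=state["S"],
                                         num_infectious=state["I"],
                                         population=self.population))
        E2I = pyro.sample("E2I_{}".format(t),
                          binomial_dist(state["E"], 1 / tau_e))
        I2R = pyro.sample("I2R_{}".format(t),
                          binomial_dist(state["I"], 1 / tau_i))

        # Update compartments with flows.
        state["S"] = state["S"] - S2E
        state["E"] = state["E"] + S2E - E2I
        state["I"] = state["I"] + E2I - I2R

        # Condition on observed new infections.
        t_is_observed = isinstance(t, slice) or t < self.duration
        pyro.sample("obs_{}".format(t),
                    binomial_dist(S2E, rho),
                    obs=self.data[t] if t_is_observed else None)
```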
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
scikit-multilearn/scikit-multilearn
docs/source/multilabeldnn.ipynb
bsd-2-clause
[ "Multi-label deep learning with scikit-multilearn\nDeep learning methods have expanded in the python community with many tutorials on performing classification using neural networks, however few out-of-the-box solutions exist for multi-label classification with deep learning, scikit-multilearn allows you to deploy single-class and multi-class DNNs to solve multi-label problems via problem transformation methods. Two main deep learning frameworks exist for Python: keras and pytorch, you will learn how to use any of them for multi-label problems with scikit-multilearn. Let's start with loading some data.", "import numpy\nimport sklearn.metrics as metrics\nfrom skmultilearn.dataset import load_dataset\n\nX_train, y_train, feature_names, label_names = load_dataset('emotions', 'train')\nX_test, y_test, _, _ = load_dataset('emotions', 'test')", "Keras\nKeras is a neural network library that supports multiple backends, most notably the well-established tensorflow, but also the popular on Windows: CNTK, as scikit-multilearn supports both Windows, Linux and MacOSX, you can you a backend of choice, as described in the backend selection tutorial. To install Keras run:\nbash\npip install -U keras\nSingle-class Keras classifier\nWe train a two-layer neural network using Keras and tensortflow as backend (feel free to use others), the network is fairly simple 12 x 8 RELU that finish with a sigmoid activator optimized via binary cross entropy. This is a case from the Keras example page. Note that the model creation function must create a model that accepts an input dimension and outpus a relevant output dimension. The Keras wrapper from scikit-multilearn will pass relevant dimensions upon fitting.", "from keras.models import Sequential\nfrom keras.layers import Dense\n\ndef create_model_single_class(input_dim, output_dim):\n\t# create model\n\tmodel = Sequential()\n\tmodel.add(Dense(12, input_dim=input_dim, activation='relu'))\n\tmodel.add(Dense(8, activation='relu'))\n\tmodel.add(Dense(output_dim, activation='sigmoid'))\n\t# Compile model\n\tmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n\treturn model", "Let's use it with a problem transformation method which converts multi-label classification problems to single-label single-class problems, ex. Binary Relevance which trains a classifier per label. We will use 10 epochs and disable verbosity.", "from skmultilearn.problem_transform import BinaryRelevance\nfrom skmultilearn.ext import Keras\n\nKERAS_PARAMS = dict(epochs=10, batch_size=100, verbose=0)\n\nclf = BinaryRelevance(classifier=Keras(create_model_single_class, False, KERAS_PARAMS), require_dense=[True,True])\nclf.fit(X_train, y_train)\nresult = clf.predict(X_test)", "Multi-class Keras classifier\nWe now train a multi-class neural network using Keras and tensortflow as backend (feel free to use others) optimized via categorical cross entropy. This is a case from the Keras multi-class tutorial. Note again that the model creation function must create a model that accepts an input dimension and outpus a relevant output dimension. 
The Keras wrapper from scikit-multilearn will pass relevant dimensions upon fitting.", "def create_model_multiclass(input_dim, output_dim):\n\t# create model\n\tmodel = Sequential()\n\tmodel.add(Dense(8, input_dim=input_dim, activation='relu'))\n\tmodel.add(Dense(output_dim, activation='softmax'))\n\t# Compile model\n\tmodel.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\n\treturn model", "We use the Label Powerset multi-label to multi-class transformation approach, but this can also be used with all the advanced label space division methods available in scikit-multilearn. Note that we set the second parameter of our Keras wrapper to True, as the base problem is multi-class now.", "from skmultilearn.problem_transform import LabelPowerset\nclf = LabelPowerset(classifier=Keras(create_model_multiclass, True, KERAS_PARAMS), require_dense=[True,True])\nclf.fit(X_train,y_train)\ny_pred = clf.predict(X_test)", "Pytorch\nPyTorch is another widely used library that is compatible with scikit-multilearn via the skorch wrapper library. To use it, you must first install the required libraries:\nbash\npip install -U skorch torch\nTo start, import:", "import torch\nfrom torch import nn\nimport torch.nn.functional as F\nfrom skorch import NeuralNetClassifier", "Single-class pytorch classifier\nWe train a two-layer neural network using pytorch, based on a simple example from the pytorch example page. Note that the model's first layer has to agree in size with the input data, and the model's last layer has two outputs, as there are two classes: 0 or 1.", "input_dim = X_train.shape[1]\n\nclass SingleClassClassifierModule(nn.Module):\n    def __init__(\n            self,\n            num_units=10,\n            nonlin=F.relu,\n            dropout=0.5,\n    ):\n        super(SingleClassClassifierModule, self).__init__()\n        self.num_units = num_units\n\n        self.dense0 = nn.Linear(input_dim, num_units)\n        self.dense1 = nn.Linear(num_units, 10)\n        self.output = nn.Linear(10, 2)\n\n    def forward(self, X, **kwargs):\n        X = F.relu(self.dense0(X))\n        X = F.relu(self.dense1(X))\n        X = torch.sigmoid(self.output(X))\n        return X", "We now wrap the model with skorch and use scikit-multilearn for Binary Relevance classification.", "net = NeuralNetClassifier(\n    SingleClassClassifierModule,\n    max_epochs=20,\n    verbose=0\n)\n\nfrom skmultilearn.problem_transform import BinaryRelevance\n\nclf = BinaryRelevance(classifier=net, require_dense=[True,True])\nclf.fit(X_train.astype(numpy.float32),y_train)\ny_pred = clf.predict(X_test.astype(numpy.float32))", "Multi-class pytorch classifier\nSimilarly, we can train a multi-class DNN; this time the size of the last layer must agree with the number of classes.", "nodes = 8\ninput_dim = X_train.shape[1]\nhidden_dim = int(input_dim/nodes)\noutput_dim = len(numpy.unique(y_train.rows))\n\nclass MultiClassClassifierModule(nn.Module):\n    def __init__(\n            self,\n            input_dim=input_dim,\n            hidden_dim=hidden_dim,\n            output_dim=output_dim,\n            dropout=0.5,\n    ):\n        super(MultiClassClassifierModule, self).__init__()\n        self.dropout = nn.Dropout(dropout)\n\n        self.hidden = nn.Linear(input_dim, hidden_dim)\n        self.output = nn.Linear(hidden_dim, output_dim)\n\n    def forward(self, X, **kwargs):\n        X = F.relu(self.hidden(X))\n        X = self.dropout(X)\n        X = F.softmax(self.output(X), dim=-1)\n        return X", "Now let's skorch-wrap it:", "net = NeuralNetClassifier(\n    MultiClassClassifierModule,\n    max_epochs=20,\n    verbose=0\n)\n\nfrom skmultilearn.problem_transform import LabelPowerset\nclf = LabelPowerset(classifier=net, 
require_dense=[True,True])\nclf.fit(X_train.astype(numpy.float32),y_train)\ny_pred = clf.predict(X_test.astype(numpy.float32))" ]
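The notebook imports sklearn.metrics but never scores the predictions. A natural follow-up (a sketch, assuming y_pred produced by any of the classifiers above) is to compare them against y_test with standard multi-label metrics, which accept the sparse label indicator matrices returned by scikit-multilearn.

```python
# Evaluate the multi-label predictions against the held-out test labels.
print('Hamming loss:   ', metrics.hamming_loss(y_test, y_pred))
print('Subset accuracy:', metrics.accuracy_score(y_test, y_pred))
print('Micro F1:       ', metrics.f1_score(y_test, y_pred, average='micro'))
```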
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.15/_downloads/plot_roi_erpimage_by_rt.ipynb
bsd-3-clause
[ "%matplotlib inline", "===========================================================\nPlot single trial activity, grouped by ROI and sorted by RT\n===========================================================\nThis will produce what is sometimes called an event related\npotential / field (ERP/ERF) image.\nThe EEGLAB example file - containing an experiment with button press responses\nto simple visual stimuli - is read in and response times are calculated.\nROIs are determined by the channel types (in 10/20 channel notation,\neven channels are right, odd are left, and 'z' are central). The\nmedian and the Global Field Power within each channel group is calculated,\nand the trials are plotted, sorted by response time.", "# Authors: Jona Sassenhagen <jona.sassenhagen@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne.datasets import testing\nfrom mne import Epochs, io, pick_types\nfrom mne.event import define_target_events\n\nprint(__doc__)", "Load EEGLAB example data (a small EEG dataset)", "data_path = testing.data_path()\nfname = data_path + \"/EEGLAB/test_raw.set\"\nmontage = data_path + \"/EEGLAB/test_chans.locs\"\n\nevent_id = {\"rt\": 1, \"square\": 2} # must be specified for str events\neog = {\"FPz\", \"EOG1\", \"EOG2\"}\nraw = io.eeglab.read_raw_eeglab(fname, eog=eog, montage=montage,\n event_id=event_id)\npicks = pick_types(raw.info, eeg=True)\nevents = mne.find_events(raw)", "Create Epochs", "# define target events:\n# 1. find response times: distance between \"square\" and \"rt\" events\n# 2. extract A. \"square\" events B. followed by a button press within 700 msec\ntmax = .7\nsfreq = raw.info[\"sfreq\"]\nreference_id, target_id = 2, 1\nnew_events, rts = define_target_events(events, reference_id, target_id, sfreq,\n tmin=0., tmax=tmax, new_id=2)\n\nepochs = Epochs(raw, events=new_events, tmax=tmax + .1,\n event_id={\"square\": 2}, picks=picks)", "Plot", "# Parameters for plotting\norder = rts.argsort() # sorting from fast to slow trials\n\nrois = dict()\nfor pick, channel in enumerate(epochs.ch_names):\n last_char = channel[-1] # for 10/20, last letter codes the hemisphere\n roi = (\"Midline\" if last_char == \"z\" else\n (\"Left\" if int(last_char) % 2 else \"Right\"))\n rois[roi] = rois.get(roi, list()) + [pick]\n\n# The actual plots\nfor combine_measures in ('gfp', 'median'):\n epochs.plot_image(group_by=rois, order=order, overlay_times=rts / 1000.,\n sigma=1.5, combine=combine_measures,\n ts_args=dict(vlines=[0, rts.mean() / 1000.]))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dolittle007/dolittle007.github.io
notebooks/bayesian_neural_network_advi.ipynb
gpl-3.0
[ "Variational Inference: Bayesian Neural Networks\n(c) 2016 by Thomas Wiecki\nOriginal blog post: http://twiecki.github.io/blog/2016/06/01/bayesian-deep-learning/\nCurrent trends in Machine Learning\nThere are currently three big trends in machine learning: Probabilistic Programming, Deep Learning and \"Big Data\". Inside of PP, a lot of innovation is in making things scale using Variational Inference. In this blog post, I will show how to use Variational Inference in PyMC3 to fit a simple Bayesian Neural Network. I will also discuss how bridging Probabilistic Programming and Deep Learning can open up very interesting avenues to explore in future research.\nProbabilistic Programming at scale\nProbabilistic Programming allows very flexible creation of custom probabilistic models and is mainly concerned with insight and learning from your data. The approach is inherently Bayesian so we can specify priors to inform and constrain our models and get uncertainty estimation in form of a posterior distribution. Using MCMC sampling algorithms we can draw samples from this posterior to very flexibly estimate these models. PyMC3 and Stan are the current state-of-the-art tools to consruct and estimate these models. One major drawback of sampling, however, is that it's often very slow, especially for high-dimensional models. That's why more recently, variational inference algorithms have been developed that are almost as flexible as MCMC but much faster. Instead of drawing samples from the posterior, these algorithms instead fit a distribution (e.g. normal) to the posterior turning a sampling problem into and optimization problem. ADVI -- Automatic Differentation Variational Inference -- is implemented in PyMC3 and Stan, as well as a new package called Edward which is mainly concerned with Variational Inference. \nUnfortunately, when it comes to traditional ML problems like classification or (non-linear) regression, Probabilistic Programming often plays second fiddle (in terms of accuracy and scalability) to more algorithmic approaches like ensemble learning (e.g. random forests or gradient boosted regression trees.\nDeep Learning\nNow in its third renaissance, deep learning has been making headlines repeatadly by dominating almost any object recognition benchmark, kicking ass at Atari games, and beating the world-champion Lee Sedol at Go. From a statistical point, Neural Networks are extremely good non-linear function approximators and representation learners. While mostly known for classification, they have been extended to unsupervised learning with AutoEncoders and in all sorts of other interesting ways (e.g. Recurrent Networks, or MDNs to estimate multimodal distributions). Why do they work so well? No one really knows as the statistical properties are still not fully understood.\nA large part of the innoviation in deep learning is the ability to train these extremely complex models. This rests on several pillars:\n* Speed: facilitating the GPU allowed for much faster processing.\n* Software: frameworks like Theano and TensorFlow allow flexible creation of abstract models that can then be optimized and compiled to CPU or GPU.\n* Learning algorithms: training on sub-sets of the data -- stochastic gradient descent -- allows us to train these models on massive amounts of data. 
Techniques like drop-out avoid overfitting.\n* Architectural: A lot of innovation comes from changing the input layers, like for convolutional neural nets, or the output layers, like for MDNs.\nBridging Deep Learning and Probabilistic Programming\nOn one hand we have Probabilistic Programming, which allows us to build rather small and focused models in a very principled and well-understood way to gain insight into our data; on the other hand we have deep learning which uses many heuristics to train huge and highly complex models that are amazing at prediction. Recent innovations in variational inference allow probabilistic programming to scale model complexity as well as data size. We are thus at the cusp of being able to combine these two approaches to hopefully unlock new innovations in Machine Learning. For more motivation, see also Dustin Tran's recent blog post.\nWhile this would allow Probabilistic Programming to be applied to a much wider set of interesting problems, I believe this bridging also holds great promise for innovations in Deep Learning. Some ideas are:\n* Uncertainty in predictions: As we will see below, the Bayesian Neural Network informs us about the uncertainty in its predictions. I think uncertainty is an underappreciated concept in Machine Learning as it's clearly important for real-world applications. But it could also be useful in training. For example, we could train the model specifically on samples it is most uncertain about.\n* Uncertainty in representations: We also get uncertainty estimates of our weights which could inform us about the stability of the learned representations of the network.\n* Regularization with priors: Weights are often L2-regularized to avoid overfitting; this very naturally becomes a Gaussian prior for the weight coefficients. We could, however, imagine all kinds of other priors, like spike-and-slab to enforce sparsity (this would be more like using the L1-norm).\n* Transfer learning with informed priors: If we wanted to train a network on a new object recognition data set, we could bootstrap the learning by placing informed priors centered around weights retrieved from other pre-trained networks, like GoogLeNet. \n* Hierarchical Neural Networks: A very powerful approach in Probabilistic Programming is hierarchical modeling that allows pooling of things that were learned on sub-groups to the overall population (see my tutorial on Hierarchical Linear Regression in PyMC3). Applied to Neural Networks, in hierarchical data sets, we could train individual neural nets to specialize on sub-groups while still being informed about representations of the overall population. For example, imagine a network trained to classify car models from pictures of cars. We could train a hierarchical neural network where a sub-neural network is trained to tell apart models from only a single manufacturer. The intuition being that all cars from a certain manufacturer share certain similarities so it would make sense to train individual networks that specialize on brands. However, due to the individual networks being connected at a higher layer, they would still share information with the other specialized sub-networks about features that are useful to all brands. Interestingly, different layers of the network could be informed by various levels of the hierarchy -- e.g. early layers that extract visual lines could be identical in all sub-networks while the higher-order representations would be different. 
The hierarchical model would learn all that from the data.\n* Other hybrid architectures: We can more freely build all kinds of neural networks. For example, Bayesian non-parametrics could be used to flexibly adjust the size and shape of the hidden layers to optimally scale the network architecture to the problem at hand during training. Currently, this requires costly hyper-parameter optimization and a lot of tribal knowledge.\nBayesian Neural Networks in PyMC3\nGenerating data\nFirst, lets generate some toy data -- a simple binary classification problem that's not linearly separable.", "%matplotlib inline\nimport theano\nfloatX = theano.config.floatX\nimport pymc3 as pm\nimport theano.tensor as T\nimport sklearn\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_style('white')\nfrom sklearn import datasets\nfrom sklearn.preprocessing import scale\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.datasets import make_moons\n\nX, Y = make_moons(noise=0.2, random_state=0, n_samples=1000)\nX = scale(X)\nX = X.astype(floatX)\nY = Y.astype(floatX)\nX_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=.5)\n\nfig, ax = plt.subplots()\nax.scatter(X[Y==0, 0], X[Y==0, 1], label='Class 0')\nax.scatter(X[Y==1, 0], X[Y==1, 1], color='r', label='Class 1')\nsns.despine(); ax.legend()\nax.set(xlabel='X', ylabel='Y', title='Toy binary classification data set');", "Model specification\nA neural network is quite simple. The basic unit is a perceptron which is nothing more than logistic regression. We use many of these in parallel and then stack them up to get hidden layers. Here we will use 2 hidden layers with 5 neurons each which is sufficient for such a simple problem.", "# Trick: Turn inputs and outputs into shared variables. \n# It's still the same thing, but we can later change the values of the shared variable \n# (to switch in the test-data later) and pymc3 will just use the new data. \n# Kind-of like a pointer we can redirect.\n# For more info, see: http://deeplearning.net/software/theano/library/compile/shared.html\nann_input = theano.shared(X_train)\nann_output = theano.shared(Y_train)\n\nn_hidden = 5\n\n# Initialize random weights between each layer\ninit_1 = np.random.randn(X.shape[1], n_hidden).astype(floatX)\ninit_2 = np.random.randn(n_hidden, n_hidden).astype(floatX)\ninit_out = np.random.randn(n_hidden).astype(floatX)\n \nwith pm.Model() as neural_network:\n # Weights from input to hidden layer\n weights_in_1 = pm.Normal('w_in_1', 0, sd=1, \n shape=(X.shape[1], n_hidden), \n testval=init_1)\n \n # Weights from 1st to 2nd layer\n weights_1_2 = pm.Normal('w_1_2', 0, sd=1, \n shape=(n_hidden, n_hidden), \n testval=init_2)\n \n # Weights from hidden layer to output\n weights_2_out = pm.Normal('w_2_out', 0, sd=1, \n shape=(n_hidden,), \n testval=init_out)\n \n # Build neural-network using tanh activation function\n act_1 = pm.math.tanh(pm.math.dot(ann_input, \n weights_in_1))\n act_2 = pm.math.tanh(pm.math.dot(act_1, \n weights_1_2))\n act_out = pm.math.sigmoid(pm.math.dot(act_2, \n weights_2_out))\n \n # Binary classification -> Bernoulli likelihood\n out = pm.Bernoulli('out', \n act_out,\n observed=ann_output)", "That's not so bad. The Normal priors help regularize the weights. 
Usually we would add a constant b to the inputs but I omitted it here to keep the code cleaner.\nVariational Inference: Scaling model complexity\nWe could now just run a MCMC sampler like NUTS which works pretty well in this case but as I already mentioned, this will become very slow as we scale our model up to deeper architectures with more layers.\nInstead, we will use the brand-new ADVI variational inference algorithm which was recently added to PyMC3. This is much faster and will scale better. Note, that this is a mean-field approximation so we ignore correlations in the posterior.", "%%time\n\nwith neural_network:\n # Run ADVI which returns posterior means, standard deviations, and the evidence lower bound (ELBO)\n v_params = pm.variational.advi(n=50000)", "< 20 seconds on my older laptop. That's pretty good considering that NUTS is having a really hard time. Further below we make this even faster. To make it really fly, we probably want to run the Neural Network on the GPU.\nAs samples are more convenient to work with, we can very quickly draw samples from the variational posterior using sample_vp() (this is just sampling from Normal distributions, so not at all the same like MCMC):", "with neural_network:\n trace = pm.variational.sample_vp(v_params, draws=5000)", "Plotting the objective function (ELBO) we can see that the optimization slowly improves the fit over time.", "plt.plot(v_params.elbo_vals)\nplt.ylabel('ELBO')\nplt.xlabel('iteration')", "Now that we trained our model, lets predict on the hold-out set using a posterior predictive check (PPC). We use sample_ppc() to generate new data (in this case class predictions) from the posterior (sampled from the variational estimation).", "# Replace shared variables with testing set\nann_input.set_value(X_test)\nann_output.set_value(Y_test)\n\n# Creater posterior predictive samples\nppc = pm.sample_ppc(trace, model=neural_network, samples=500)\n\n# Use probability of > 0.5 to assume prediction of class 1\npred = ppc['out'].mean(axis=0) > 0.5\n\nfig, ax = plt.subplots()\nax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])\nax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')\nsns.despine()\nax.set(title='Predicted labels in testing set', xlabel='X', ylabel='Y');\n\nprint('Accuracy = {}%'.format((Y_test == pred).mean() * 100))", "Hey, our neural network did all right!\nLets look at what the classifier has learned\nFor this, we evaluate the class probability predictions on a grid over the whole input space.", "grid = np.mgrid[-3:3:100j,-3:3:100j].astype(floatX)\ngrid_2d = grid.reshape(2, -1).T\ndummy_out = np.ones(grid.shape[1], dtype=np.int8)\n\nann_input.set_value(grid_2d)\nann_output.set_value(dummy_out)\n\n# Creater posterior predictive samples\nppc = pm.sample_ppc(trace, model=neural_network, samples=500)", "Probability surface", "cmap = sns.diverging_palette(250, 12, s=85, l=25, as_cmap=True)\nfig, ax = plt.subplots(figsize=(10, 6))\ncontour = ax.contourf(*grid, ppc['out'].mean(axis=0).reshape(100, 100), cmap=cmap)\nax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])\nax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')\ncbar = plt.colorbar(contour, ax=ax)\n_ = ax.set(xlim=(-3, 3), ylim=(-3, 3), xlabel='X', ylabel='Y');\ncbar.ax.set_ylabel('Posterior predictive mean probability of class label = 0');", "Uncertainty in predicted value\nSo far, everything I showed we could have done with a non-Bayesian Neural Network. 
The mean of the posterior predictive for each class-label should be identical to maximum likelihood predicted values. However, we can also look at the standard deviation of the posterior predictive to get a sense for the uncertainty in our predictions. Here is what that looks like:", "cmap = sns.cubehelix_palette(light=1, as_cmap=True)\nfig, ax = plt.subplots(figsize=(10, 6))\ncontour = ax.contourf(*grid, ppc['out'].std(axis=0).reshape(100, 100), cmap=cmap)\nax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])\nax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')\ncbar = plt.colorbar(contour, ax=ax)\n_ = ax.set(xlim=(-3, 3), ylim=(-3, 3), xlabel='X', ylabel='Y');\ncbar.ax.set_ylabel('Uncertainty (posterior predictive standard deviation)');", "We can see that very close to the decision boundary, our uncertainty as to which label to predict is highest. You can imagine that associating predictions with uncertainty is a critical property for many applications like health care. To further maximize accuracy, we might want to train the model primarily on samples from that high-uncertainty region.\nMini-batch ADVI: Scaling data size\nSo far, we have trained our model on all data at once. Obviously this won't scale to something like ImageNet. Moreover, training on mini-batches of data (stochastic gradient descent) avoids local minima and can lead to faster convergence.\nFortunately, ADVI can be run on mini-batches as well. It just requires some setting up:", "from six.moves import zip\n\n# Set back to original data to retrain\nann_input.set_value(X_train)\nann_output.set_value(Y_train)\n\n# Tensors and RV that will be using mini-batches\nminibatch_tensors = [ann_input, ann_output]\nminibatch_RVs = [out]\n\n# Generator that returns mini-batches in each iteration\ndef create_minibatch(data):\n rng = np.random.RandomState(0)\n \n while True:\n # Return random data samples of set size 100 each iteration\n ixs = rng.randint(len(data), size=50)\n yield data[ixs]\n\nminibatches = zip(\n create_minibatch(X_train), \n create_minibatch(Y_train),\n)\n\ntotal_size = len(Y_train)", "While the above might look a bit daunting, I really like the design. Especially the fact that you define a generator allows for great flexibility. In principle, we could just pool from a database there and not have to keep all the data in RAM. \nLets pass those to advi_minibatch():", "%%time\n\nwith neural_network:\n # Run advi_minibatch\n v_params = pm.variational.advi_minibatch(\n n=50000, minibatch_tensors=minibatch_tensors, \n minibatch_RVs=minibatch_RVs, minibatches=minibatches, \n total_size=total_size, learning_rate=1e-2, epsilon=1.0\n )\n\nwith neural_network: \n trace = pm.variational.sample_vp(v_params, draws=5000)\n\nplt.plot(v_params.elbo_vals)\nplt.ylabel('ELBO')\nplt.xlabel('iteration')\nsns.despine()", "As you can see, mini-batch ADVI's running time is much lower. It also seems to converge faster.\nFor fun, we can also look at the trace. The point is that we also get uncertainty of our Neural Network weights.", "pm.traceplot(trace);", "Summary\nHopefully this blog post demonstrated a very powerful new inference algorithm available in PyMC3: ADVI. I also think bridging the gap between Probabilistic Programming and Deep Learning can open up many new avenues for innovation in this space, as discussed above. Specifically, a hierarchical neural network sounds pretty bad-ass. 
These are really exciting times.\nNext steps\nTheano, which is used by PyMC3 as its computational backend, was mainly developed for estimating neural networks and there are great libraries like Lasagne that build on top of Theano to make construction of the most common neural network architectures easy. Ideally, we wouldn't have to build the models by hand as I did above, but use the convenient syntax of Lasagne to construct the architecture, define our priors, and run ADVI. \nYou can also run this example on the GPU by setting device = gpu and floatX = float32 in your .theanorc.\nYou might also argue that the above network isn't really deep, but note that we could easily extend it to have more layers, including convolutional ones to train on more challenging data sets.\nI also presented some of this work at PyData London; view the video below:\n<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/LlzVlqVzeD8\" frameborder=\"0\" allowfullscreen></iframe>\n\nFinally, you can download this NB here. Leave a comment below, and follow me on twitter.\nAcknowledgements\nTaku Yoshioka did a lot of work on ADVI in PyMC3, including the mini-batch implementation as well as the sampling from the variational posterior. I'd also like to thank the Stan guys (specifically Alp Kucukelbir and Daniel Lee) for deriving ADVI and teaching us about it. Thanks also to Chris Fonnesbeck, Andrew Campbell, Taku Yoshioka, and Peadar Coyle for useful comments on an earlier draft." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
staeiou/reddit_downvote
games/games-analysis.ipynb
mit
[ "When /r/games disabled the downvoting button via CSS\nSubreddit disabled downvote button on 2013-01-16, thread here\nData processing\nGoogle BigQuery\nSELECT author, num_comments, score, ups, downs, gilded, created_utc FROM [fh-bigquery:reddit_posts.full_corpus_201509] \nwhere subreddit = 'Games' \nAND created_utc BETWEEN 1356998400 AND 1359676800", "!ls\n\n!pip install bokeh\nimport pandas as pd\nimport seaborn as sns\nfrom bokeh.charts import TimeSeries, output_file, show\n\n%matplotlib inline\n\nposts_df = pd.DataFrame.from_csv(\"reddit_posts_games_201301.csv\")\n\nposts_df[0:5]\n\nposts_df['created'] = pd.to_datetime(posts_df.created_utc, unit='s')\nposts_df['created_date'] = posts_df.created.dt.date\n\nposts_df['downs'] = posts_df.score - posts_df.ups\n\nposts_time_ups = posts_df.set_index('created_date').ups.sort_index()\nposts_time_ups[0:5]\n\nposts_date_df = posts_df.set_index('created').sort_index()\n\nposts_date_df[0:5]\n\nposts_groupby = posts_date_df.groupby([pd.TimeGrouper('1D', closed='left')])", "Visualizations\nDaily average of number of comments per post", "posts_groupby.mean().num_comments.plot(kind='barh', figsize=[8,8])", "Daily average of number of upvotes per post", "posts_groupby.mean().ups.plot(kind='barh', figsize=[8,8])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
davidrpugh/pyCollocation
examples/ramsey-cass-koopmans-model.ipynb
mit
[ "%load_ext autoreload\n\n%autoreload 2\n\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sn\n\nimport pycollocation", "<h1>Textbook example: Ramsey-Cass-Koopmans model</h1>\n\n<h2> Household behavior </h2>\n\nSuppose that there exist a representative household with the following lifetime utility...\n$$U \\equiv \\int_{t=0}^{\\infty} e^{-\\rho t}u(c(t))N(t)dt$$\n...where the flow utility function $u(C(t))$ is assumed to be a concave function of per capita consumption, $c(t)$. Note that $N(t)$, the size of the household, is assumed to grow at a constant and exogenous rate $n$: \n$$ N(t) = N(0)e^{nt}.$$\nAt each instant in time, the representative household faces the following budget constraint...\n$$\\dot{K}(t) = r(t)K(t) + W(t)lN(t) - P(t)c(t)N(t)$$\n...where $r(t)$ is the real interest rate, $K(t)$ is the total amount of household capital; $W(t)$ is the real wage paid for labor, $l$ is the household's labor endowment; $P(t)$ is the price level of the consumption good (which we are free to normalize to 1 for convenience).\n<h3> Solution to the household problem </h3>\n\nForm the Hamiltonian...\n$$ H(t, K, c, \\lambda) \\equiv e^{-\\rho t}u(c(t))N(t) + \\lambda(t)\\bigg[r(t)K(t) + W(t)lN(t) - c(t)N(t)\\bigg] $$\n...differentiate with respect to control variables $c$ and the state variable $K$...\n\\begin{align}\n \\frac{\\partial H}{\\partial c} \\equiv& e^{-\\rho t}\\frac{\\partial u}{\\partial c}N(t) - \\lambda(t)N(t) \\\n \\frac{\\partial H}{\\partial K} \\equiv& r(t)\\lambda(t)\n\\end{align}\n...the state and co-state equations are...\n\\begin{align}\n \\dot{K}(t) = \\frac{\\partial H}{\\partial \\lambda} =& r(t)K(t) + W(t)lN(t) - c(t)N(t) \\\n \\dot{\\lambda} = -\\frac{\\partial H}{\\partial K} =& -r(t)\\lambda(t)\\\n\\end{align}\n<h4> Derivation of the Euler equation </h4>\nFrom first-order condition for $c$ we have that...\n$$ e^{-\\rho t}\\frac{\\partial u}{\\partial c}=\\lambda(t) $$\n...differentiate with respect to time...\n$$ -\\rho e^{-\\rho t}\\frac{\\partial u}{\\partial c} + e^{-\\rho t}\\frac{\\partial^2 u}{\\partial c^2}\\dot{c}(t)=\\dot{\\lambda}(t)$$\n...using the co-state equation we can write...\n$$ -\\rho e^{-\\rho t}\\frac{\\partial u}{\\partial c} + e^{-\\rho t}\\frac{\\partial^2 u}{\\partial c^2}\\dot{c}(t)=-r(t)\\lambda(t)$$\n...using the first-order condition again we can write...\n$$ -\\rho e^{-\\rho t}\\frac{\\partial u}{\\partial c} + e^{-\\rho t}\\frac{\\partial^2 u}{\\partial c^2}\\dot{c}(t)=-r(t)e^{-\\rho t}\\frac{\\partial u}{\\partial c}$$\n...finally, after a bit of algebra we find that the consumption behavior of the representative household is described by the following Euler equation...\n$$ \\frac{\\dot{c}(t)}{c(t)} = \\frac{1}{RRA(c(t))}\\bigg[r(t) - \\rho\\bigg] $$\n...where...\n$$ RRA(c(t)) = -\\frac{c\\frac{\\partial^2 u}{\\partial c^2}}{\\frac{\\partial u}{\\partial c}}$$\n...is the <a href=\"https://en.wikipedia.org/wiki/Risk_aversion\">Pratt-Arrow measure of relative risk aversion</a>. Consumption Euler equation says that consumption growth is proportional to the gap between the real interest rate $r(t)$ and the discount rate $\\rho$; and inversely proportional to risk preferences.\n<h2>Firm behavior</h2>\n\nInputs to production are capital, $K$, labor, $L$, and technology, $A$. 
Representative firm has a production function, $F$, that is homogenous of degree 1...\n\\begin{equation}\n Y(t) = F\\bigg(A(t), K(t), L(t)\\bigg)\n\\end{equation}\n...note that technology, $A$, is assumed to grow at a constant, exogenous rate $g$\n$$ A(t) = A(0)e^{gt}. $$\nFirms choose demands for $K$ and $L$ in order to maximise profits...\n\\begin{align}\n \\Pi(t) =& P(t)Y(t) - (r(t) + \\delta)K(t) - W(t)L(t) \\\n =& F\\bigg(A(t), K(t), L(t)\\bigg) - (r(t) + \\delta)K(t) - W(t)L(t) \n\\end{align}\nStandard assumptions of perfect competition in factor markets as well as the production of output goods implies that inputs to production are paid their marginal products. Thus the real interest rate $r(t)$ and the wage, $W(t)$ are...\n\\begin{align}\n r(t) =& \\frac{\\partial F}{\\partial K} - \\delta \\\n W(t) =& \\frac{\\partial F}{\\partial L}.\n\\end{align}\nNote that our homogeneity assumption on $F$ combined with <a href=\"https://en.wikipedia.org/wiki/Homogeneous_function\">Euler's other theorem</a> are enough to insure that the representative firm earns zero profit...\n$$ \\Pi(t) = F\\bigg(A(t), K(t), L(t)\\bigg) - \\bigg[K\\frac{\\partial F}{\\partial K} + L\\frac{\\partial F}{\\partial L}\\bigg] = 0. $$\n<h2> Market clearing equilibrium </h2>\n\nImpose market clearing equilibrium assumption for capital and labor markets (market for consumption goods clears by <a href=\"https://en.wikipedia.org/wiki/Walras%27_law\">Walras Law</a>). In particular for the labor market...\n$$ \\text{labor demand} \\equiv L(t) = lN(t) \\equiv \\text{labor supply}. $$\nNote that the explicit assumption of market clearing equilibrium really imposes two implicit assumptions:\n<ol>\n <li> Dynamic adjustment processes necessary to actually clear markets occur at time scales that are very small relevant to the time scale of the model.</li>\n <li> ??\n</ol>\n\nAssumption that factor markets clear allows us to combine household and firm behavior to get a system of ordinary differential equations...\n\\begin{align}\n \\dot{c}(t) =& \\frac{1}{ARA(c(t))}\\bigg[\\frac{\\partial F}{\\partial K} - \\delta - \\rho\\bigg] \\\n \\dot{K}(t) =& F\\bigg(A(t), K(t), lN(t)\\bigg) - \\delta K(t) - c(t)N(t) \\\n\\end{align}\n...where...\n$$ ARA(c(t)) = -\\frac{\\frac{\\partial^2 u}{\\partial c^2}}{\\frac{\\partial u}{\\partial c}}$$\n...is the <a href=\"https://en.wikipedia.org/wiki/Risk_aversion\">Pratt-Arrow measure of absolute risk aversion</a>.\n<h2> De-trending the model </h2>\n\nRecall that technology and household size (i.e., population!) are growing at exogenous rates. Want to de-trend the model so that we can analyze fixed point equilibrium of the dynamical system.\nHomogeneity assumption on $F$ tells us that...\n$$\\frac{1}{A(t)N(t)}F\\bigg(A(t), K(t), lN(t)\\bigg) = F\\bigg(1, \\frac{K(t)}{A(t)N(t)}, l\\bigg) = f(k(t)) $$\n...and...\n$$ \\frac{\\partial F}{\\partial K} = \\frac{\\partial F}{\\partial K}\\bigg(1, \\frac{K(t)}{A(t)N(t)}, l\\bigg) = \\frac{\\partial f}{\\partial k} = f'(k(t)) $$\n...where...\n$$ \\tilde{k}(t) = \\frac{K(t)}{A(t)N(t)} $$\n...is now defined to be capital per unit effective labor supply. 
Similarly we can de-trend per capita consumption $c(t)$ using the following change of variables...\n\begin{align}\n c(t) =& A(t)\tilde{c}(t) \\n \dot{c} =& \dot{A}(t)\tilde{c}(t) + A(t)\dot{\tilde{c}}(t)\n\end{align}\n...where...\n$$ \tilde{c}(t) \equiv \frac{c(t)}{A(t)}$$\nis consumption per unit of effective labor supply.\nUsing these results, we can re-write the above system of differential equations as follows...\n\begin{align}\n \dot{\tilde{k}}(t) =& f(\tilde{k}(t)) - (g + n + \delta)\tilde{k}(t) - \tilde{c}(t)\\n \dot{\tilde{c}}(t) =& \frac{1}{A(0)e^{gt}ARA\bigg(A(0)e^{gt}\tilde{c}(t)\bigg)}\bigg[f'(k(t)) - \delta - \rho\bigg] - g\tilde{c}(t)\n\end{align}\n...where $k$ and $c$ are now measured per unit of effective labor supply. Note that the equation of motion for consumption per unit effective labor supply is no longer time invariant. However, if marginal utility of consumption for the representative household is homogeneous of degree $k$, then the equation of motion for $\tilde{c}$ becomes <a href=\"https://en.wikipedia.org/wiki/Autonomous_system_(mathematics)\">time-invariant</a>...\n\begin{align}\n \dot{\tilde{c}}(t) =& \frac{1}{ARA\bigg(\tilde{c}(t)\bigg)}\bigg[f'(k(t)) - \delta - \rho\bigg] - g\tilde{c}(t)\n\end{align}\n<h2> Boundary conditions </h2>\n\nTo complete the model we need to specify some boundary conditions. The initial conditions for technology, $A(0)$, household size (i.e., population), $N(0)$, and capital, $K(0)$, are assumed given. Therefore...\n$$ \tilde{k}(0) = \frac{K(0)}{A(0)N(0)}. $$\nWe will impose the following terminal condition on consumption per unit effective labor supply...\n$$ \lim_{t \rightarrow \infty} \tilde{c}(t) = \tilde{c}^*. $$\n<h2> Full model </h2>\n\nThe full model is completely specified by the following system of ordinary differential equations and boundary conditions...\n\begin{align}\n \dot{\tilde{k}}(t) =& f(\tilde{k}(t)) - (g + n + \delta)\tilde{k}(t) - \tilde{c}(t),\ \tilde{k}(0) = \tilde{k}_0 \\n \dot{\tilde{c}}(t) =& \frac{1}{A(0)e^{gt}ARA\bigg(A(0)e^{gt}\tilde{c}(t)\bigg)}\bigg[f'(\tilde{k}(t)) - \delta - \rho\bigg] - g\tilde{c}(t),\ \lim_{t \rightarrow \infty} \tilde{c}(t) = \tilde{c}^*\n\end{align}\n<h2> Example: HARA risk aversion and Cobb-Douglas production</h2>\n\nSuppose representative household preferences are consistent with Hyperbolic Absolute Risk Aversion (HARA)...\n$$ ARA(c) = \frac{1}{ac + b} $$\n...in this case the equation of motion for consumption per unit effective labor reduces to...\n$$\dot{\tilde{c}}(t) = \bigg[a\tilde{c}(t) + \frac{b}{A(0)e^{gt}}\bigg]\bigg[f'(\tilde{k}(t)) - \delta - \rho\bigg] - g\tilde{c}(t) $$\n...note that when $b \ne 0$ this is a non-autonomous differential equation and the representative household has time-varying risk preferences. 
On the other hand, if $b = 0$ then the representative household has standard CRRA risk preferences (which do not depend directly on time).", "def hara(t, c, a, b, **params):\n \"\"\"\n Hyperbolic Absolute Risk Aversion (HARA).\n \n Notes\n -----\n For Constant Absolute Risk Aversion (CARA), set a=0; for\n Constant Relative Risk Aversion (CRRA), set b=0.\n\n \"\"\"\n return 1 / (a * c + b)\n\n\ndef cobb_douglas_output(k_tilde, alpha, l, **params):\n return k_tilde**alpha * l**(1 - alpha)\n\n\ndef cobb_douglas_mpk(k_tilde, alpha, l, **params):\n return alpha * k_tilde**(alpha - 1) * l**(1 - alpha)\n\n\ndef c_tilde_dot(t, k_tilde, c_tilde, A0, delta, g, rho, **params):\n A = A0 * np.exp(g * t)\n r = cobb_douglas_mpk(k_tilde, **params) - delta\n return ((r - rho) / (A * hara(t, A * c_tilde, **params))) - g * c_tilde\n\n\ndef k_tilde_dot(t, k_tilde, c_tilde, delta, g, n, **params):\n return cobb_douglas_output(k_tilde, **params) - c_tilde - (g + n + delta) * k_tilde\n\n\ndef standard_ramsey_model(t, k_tilde, c_tilde, A0, delta, g, n, rho, **params):\n out = [k_tilde_dot(t, k_tilde, c_tilde, delta, g, n, **params),\n c_tilde_dot(t, k_tilde, c_tilde, A0, delta, g, rho, **params)]\n return out\n\n\ndef initial_condition(t, k_tilde, c_tilde, A0, K0, N0, **params):\n return [k_tilde - (K0 / (A0 * N0))]\n\n\ndef terminal_condition(t, k_tilde, c_tilde, **params):\n return [c_tilde - equilibrium_consumption(**params)]\n\n\ndef equilibrium_capital(a, alpha, b, delta, g, l, n, rho, **params):\n return ((a * alpha * l**(1 - alpha)) / (a * (delta + rho) + g))**(1 / (1 - alpha))\n\n\ndef equilibrium_consumption(a, alpha, b, delta, g, l, n, rho, **params):\n kss = equilibrium_capital(a, alpha, b, delta, g, l, n, rho)\n return cobb_douglas_output(kss, alpha, l) - (g + n + delta) * kss\n", "To complete the model we need to define some parameter values.", "# set b=0 for CRRA...\nparams = {'a': 1.0, 'b': 0.0, 'g': 0.02, 'n': 0.02, 'alpha': 0.15,\n 'delta': 0.04, 'l': 1.0, 'K0': 1.0, 'A0': 1.0, 'N0': 1.0,\n 'rho': 0.02}", "<h2>Solving the model with pyCollocation</h2>\n\n<h3>Defining a `pycollocation.TwoPointBVP` instance</h3>", "pycollocation.problems.TwoPointBVP?\n\nstandard_ramsey_bvp = pycollocation.problems.TwoPointBVP(bcs_lower=initial_condition,\n bcs_upper=terminal_condition,\n number_bcs_lower=1,\n number_odes=2,\n params=params,\n rhs=standard_ramsey_model,\n )", "Finding a good initial guess for $\\tilde{k}(t)$\nTheory tells us that, starting from some initial condition $\\tilde{k}_0$, the solution to the Solow model converges monotonically toward its long run equilibrium value $\\tilde{k}^*$. 
Our initial guess for the solution should preserve this property...", "def initial_mesh(t, T, num, problem):\n # compute equilibrium values\n cstar = equilibrium_consumption(**problem.params)\n kstar = equilibrium_capital(**problem.params)\n ystar = cobb_douglas_output(kstar, **problem.params)\n\n # create the mesh for capital\n ts = np.linspace(t, T, num)\n k0 = problem.params['K0'] / (problem.params['A0'] * problem.params['N0'])\n ks = kstar - (kstar - k0) * np.exp(-ts)\n\n # create the mesh for consumption\n s = 1 - (cstar / ystar)\n y0 = cobb_douglas_output(k0, **problem.params)\n c0 = (1 - s) * y0\n cs = cstar - (cstar - c0) * np.exp(-ts)\n\n return ts, ks, cs\n", "Solving the model", "pycollocation.solvers.Solver?", "<h3> Polynomial basis functions </h3>", "polynomial_basis = pycollocation.basis_functions.PolynomialBasis()\nsolver = pycollocation.solvers.Solver(polynomial_basis)\n\nboundary_points = (0, 200)\nts, ks, cs = initial_mesh(*boundary_points, num=1000, problem=standard_ramsey_bvp)\n\nbasis_kwargs = {'kind': 'Chebyshev', 'domain': boundary_points, 'degree': 25}\nk_poly = polynomial_basis.fit(ts, ks, **basis_kwargs)\nc_poly = polynomial_basis.fit(ts, cs, **basis_kwargs)\ninitial_coefs = np.hstack([k_poly.coef, c_poly.coef])\nnodes = polynomial_basis.roots(**basis_kwargs)\n\nsolution = solver.solve(basis_kwargs, boundary_points, initial_coefs,\n nodes, standard_ramsey_bvp)\n\n\nts, _, _ = initial_mesh(*boundary_points, 1000, standard_ramsey_bvp)\nk_soln, c_soln = solution.evaluate_solution(ts)\nplt.plot(ts, k_soln)\nplt.plot(ts, c_soln)\nplt.show()\n\nk_resids, c_resids = solution.evaluate_residual(ts)\nplt.plot(ts, k_resids)\nplt.plot(ts, c_resids)\n\nplt.show()\n\nk_normalized_resids, c_normalized_resids = solution.normalize_residuals(ts)\nplt.plot(ts, np.abs(k_normalized_resids))\nplt.plot(ts, np.abs(c_normalized_resids))\nplt.yscale('log')\nplt.show()", "<h3> B-spline basis functions </h3>", "bspline_basis = pycollocation.basis_functions.BSplineBasis()\nsolver = pycollocation.solvers.Solver(bspline_basis)\n\nboundary_points = (0, 200)\nts, ks, cs = initial_mesh(*boundary_points, num=250, problem=standard_ramsey_bvp)\n\ntck, u = bspline_basis.fit([ks, cs], u=ts, k=5, s=0)\nknots, coefs, k = tck\ninitial_coefs = np.hstack(coefs)\n\nbasis_kwargs = {'knots': knots, 'degree': k, 'ext': 2}\nnodes = np.linspace(*boundary_points, num=249) \n\nsolution = solver.solve(basis_kwargs, boundary_points, initial_coefs,\n nodes, standard_ramsey_bvp)\n\n\nts, _, _ = initial_mesh(*boundary_points, num=1000, problem=standard_ramsey_bvp)\nk_soln, c_soln = solution.evaluate_solution(ts)\nplt.plot(ts, k_soln)\nplt.plot(ts, c_soln)\nplt.show()\n\nk_resids, c_resids = solution.evaluate_residual(ts)\nplt.plot(ts, k_resids)\nplt.plot(ts, c_resids)\n\nplt.show()\n\nk_normalized_resids, c_normalized_resids = solution.normalize_residuals(ts)\nplt.plot(ts, np.abs(k_normalized_resids))\nplt.plot(ts, np.abs(c_normalized_resids))\nplt.yscale('log')\nplt.show()", "<h1> Generic Ramsey-Cass-Koopmans model</h1>\n\nCan we refactor the above code so that we can solve a Ramsey-Cass-Koopmans model for arbitrary intensive production functions and risk preferences? 
Yes!", "from pycollocation.tests import models", "Example usage...", "def ces_output(k_tilde, alpha, l, sigma, **params):\n gamma = (sigma - 1) / sigma\n if gamma == 0:\n y = k_tilde**alpha * l**(1 - alpha)\n else:\n y = (alpha * k_tilde**gamma + (1 - alpha) * l**gamma)**(1 / gamma)\n return y\n\n\ndef ces_mpk(k_tilde, alpha, l, sigma, **params):\n y = ces_output(k_tilde, alpha, l, sigma)\n gamma = (sigma - 1) / sigma\n if gamma == 0:\n mpk = alpha * (y / k_tilde)\n else:\n mpk = alpha * k_tilde**(gamma - 1) * (y / (alpha * k_tilde**gamma + (1 - alpha) * l**gamma))\n return mpk\n\n\ndef ces_equilibrium_capital(a, alpha, b, delta, g, l, n, rho, sigma, **params):\n \"\"\"Steady state value for capital stock (per unit effective labor).\"\"\"\n gamma = (sigma - 1) / sigma\n if gamma == 1:\n kss = ((a * alpha * l**(1 - alpha)) / (a * (delta + rho) + g))**(1 / (1 - alpha))\n else:\n kss = l * ((1 / (1 - alpha)) * (((a * (delta + rho) + g) / (a * alpha))**(gamma / (1 - gamma)) - alpha))**(-1 / gamma)\n return kss\n\n\nces_params = {'a': 0.5, 'b': 1.0, 'g': 0.02, 'n': 0.02, 'alpha': 0.15,\n 'delta': 0.04, 'l': 5.0, 'K0': 2.0, 'A0': 1.0, 'N0': 1.0,\n 'rho': 0.02, 'sigma': 2.0}\n\ngeneric_ramsey_bvp = models.RamseyCassKoopmansModel(hara,\n ces_output,\n ces_equilibrium_capital,\n ces_mpk,\n ces_params)\n\npolynomial_basis = pycollocation.basis_functions.PolynomialBasis()\nsolver = pycollocation.solvers.Solver(polynomial_basis)\n\nboundary_points = (0, 500)\nts, ks, cs = initial_mesh(*boundary_points, num=1000, problem=generic_ramsey_bvp)\n\nbasis_kwargs = {'kind': 'Chebyshev', 'domain': boundary_points, 'degree': 30}\nk_poly = polynomial_basis.fit(ts, ks, **basis_kwargs)\nc_poly = polynomial_basis.fit(ts, cs, **basis_kwargs)\ninitial_coefs = np.hstack([k_poly.coef, c_poly.coef])\nnodes = polynomial_basis.roots(**basis_kwargs)\n\nsolution = solver.solve(basis_kwargs, boundary_points, initial_coefs,\n nodes, generic_ramsey_bvp)\n\n\nsolution.result.success\n\nk_soln, c_soln = solution.evaluate_solution(ts)\nplt.plot(ts, k_soln)\nplt.plot(ts, c_soln)\nplt.show()\n\nk_normalized_resids, c_normalized_resids = solution.normalize_residuals(ts)\nplt.plot(ts, np.abs(k_normalized_resids))\nplt.plot(ts, np.abs(c_normalized_resids))\nplt.yscale('log')\nplt.show()", "<h3> Phase space plots </h3>\n\n<h4> 2D phase space </h4>", "css = generic_ramsey_bvp.equilibrium_consumption(**generic_ramsey_bvp.params)\nkss = ces_equilibrium_capital(**generic_ramsey_bvp.params)\n\nplt.plot(k_soln / kss, c_soln / css)\nplt.xlabel(r'$\\frac{\\tilde{k}}{\\tilde{k}^*}$', fontsize=20)\nplt.ylabel(r'$\\frac{\\tilde{c}}{\\tilde{c}^*}$', fontsize=20, rotation='horizontal')\nplt.title(\"Phase space for the Ramsey-Cass-Koopmans model\")", "<h4> 3D phase space </h4>", "from mpl_toolkits.mplot3d import Axes3D\n\nfig = plt.figure()\nax = fig.gca(projection='3d')\nax.plot(k_soln / kss, c_soln / css, ts, label='Ramsey model trajectory')\nplt.xlabel(r'$\\frac{\\tilde{k}}{\\tilde{k}^*}$', fontsize=20)\nplt.ylabel(r'$\\frac{\\tilde{c}}{\\tilde{c}^*}$', fontsize=20)\nax.set_zlabel('$t$')\nax.legend()\n\nplt.show()\n" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AllenDowney/ModSimPy
soln/penny_model_comparison.ipynb
mit
[ "Modeling and Simulation in Python\nComparison of the penny models from Chapters 1, 20, and 21\nCopyright 2018 Allen Downey\nLicense: Creative Commons Attribution 4.0 International", "# Configure Jupyter so figures appear in the notebook\n%matplotlib inline\n\n# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\n# import functions from the modsim.py module\nfrom modsim import *", "With air resistance\nNext we'll add air resistance using the drag equation\nI'll start by getting the units we'll need from Pint.", "m = UNITS.meter\ns = UNITS.second\nkg = UNITS.kilogram", "Now I'll create a Params object to contain the quantities we need. Using a Params object is convenient for grouping the system parameters in a way that's easy to read (and double-check).", "params = Params(height = 381 * m,\n v_init = 0 * m / s,\n g = 9.8 * m/s**2,\n mass = 2.5e-3 * kg,\n diameter = 19e-3 * m,\n rho = 1.2 * kg/m**3,\n v_term = 18 * m / s)", "Now we can pass the Params object make_system which computes some additional parameters and defines init.\nmake_system uses the given radius to compute area and the given v_term to compute the drag coefficient C_d.", "def make_system(params):\n \"\"\"Makes a System object for the given conditions.\n \n params: Params object\n \n returns: System object\n \"\"\"\n unpack(params)\n \n area = np.pi * (diameter/2)**2\n C_d = 2 * mass * g / (rho * area * v_term**2)\n init = State(y=height, v=v_init)\n t_end = 30 * s\n \n return System(params, area=area, C_d=C_d, \n init=init, t_end=t_end)", "Let's make a System", "system = make_system(params)", "Here's the slope function, including acceleration due to gravity and drag.", "def slope_func(state, t, system):\n \"\"\"Compute derivatives of the state.\n \n state: position, velocity\n t: time\n system: System object\n \n returns: derivatives of y and v\n \"\"\"\n y, v = state\n rho, C_d, area = system.rho, system.C_d, system.area\n mass = system.mass\n g = system.g\n \n f_drag = rho * v**2 * C_d * area / 2\n a_drag = f_drag / mass\n \n dydt = v\n dvdt = -g + a_drag\n \n return dydt, dvdt", "As always, let's test the slope function with the initial conditions.", "slope_func(system.init, 0, system)", "We can use the same event function as in the previous chapter.", "def event_func(state, t, system):\n \"\"\"Return the height of the penny above the sidewalk.\n \"\"\"\n y, v = state\n return y", "And then run the simulation.", "results, details = run_ode_solver(system, slope_func, events=event_func)\ndetails", "Here are the results.", "results", "The final height is close to 0, as expected.\nInterestingly, the final velocity is not exactly terminal velocity, which suggests that there are some numerical errors.\nWe can get the flight time from results.", "t_sidewalk = get_last_label(results)", "Here's the plot of position as a function of time.", "def plot_position(results):\n plot(results.y)\n decorate(xlabel='Time (s)',\n ylabel='Position (m)')\n \nplot_position(results)", "And velocity as a function of time:", "def plot_velocity(results):\n plot(results.v, color='C1', label='v')\n \n decorate(xlabel='Time (s)',\n ylabel='Velocity (m/s)')\n \nplot_velocity(results)", "From an initial velocity of 0, the penny accelerates downward until it reaches terminal velocity; after that, velocity is constant.\nBack to Chapter 1\nWe have now considered three models of a falling penny:\n\n\nIn Chapter 1, we started with the simplest model, which includes 
gravity and ignores drag.\n\n\nAs an exercise in Chapter 1, we did a \"back of the envelope\" calculation assuming constant acceleration until the penny reaches terminal velocity, and then constant velocity until it reaches the sidewalk.\n\n\nIn this chapter, we model the interaction of gravity and drag during the acceleration phase.\n\n\nWe can compare the models by plotting velocity as a function of time.", "g = 9.8\nv_term = 18\nt_end = 22.4\n\nts = linspace(0, t_end, 201)\nmodel1 = -g * ts;\n\nmodel2 = TimeSeries()\nfor t in ts:\n v = -g * t\n if v < -v_term:\n model2[t] = -v_term\n else:\n model2[t] = v\n\nresults, details = run_ode_solver(system, slope_func, events=event_func, max_step=0.5)\nmodel3 = results.v;\n\nplot(ts, model1, label='model1', color='gray')\nplot(model2, label='model2', color='C0')\nplot(model3, label='model3', color='C1')\n \ndecorate(xlabel='Time (s)',\n ylabel='Velocity (m/s)')\n\nplot(model2, label='model2', color='C0')\nplot(results.v, label='model3', color='C1')\n \ndecorate(xlabel='Time (s)',\n ylabel='Velocity (m/s)')", "Clearly Model 1 is very different from the other two, which are almost identical except during the transition from constant acceleration to constant velocity.\nWe can also compare the predictions:\n\n\nAccording to Model 1, the penny takes 8.8 seconds to reach the sidewalk, and lands at 86 meters per second.\n\n\nAccording to Model 2, the penny takes 22.1 seconds and lands at terminal velocity, 18 m/s.\n\n\nAccording to Model 3, the penny takes 22.4 seconds and lands at terminal velocity.\n\n\nSo what can we conclude? The results from Model 1 are clearly unrealistic; it is probably not a useful model of this system. The results from Model 2 are off by about 1%, which is probably good enough for most purposes.\nIn fact, our estimate of the terminal velocity could by off by 10% or more, and the figure we are using for the height of the Empire State Building is not precise either.\nSo the difference between Models 2 and 3 is swamped by other uncertainties." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
soloman817/udacity-ml
misc/keyboard-shortcuts.ipynb
mit
[ "Keyboard shortcuts\nIn this notebook, you'll get some practice using keyboard shortcuts. These are key to becoming proficient at using notebooks and will greatly increase your work speed.\nFirst up, switching between edit mode and command mode. Edit mode allows you to type into cells while command mode will use key presses to execute commands such as creating new cells and openning the command palette. When you select a cell, you can tell which mode you're currently working in by the color of the box around the cell. In edit mode, the box and thick left border are colored green. In command mode, they are colored blue. Also in edit mode, you should see a cursor in the cell itself.\nBy default, when you create a new cell or move to the next one, you'll be in command mode. To enter edit mode, press Enter/Return. To go back from edit mode to command mode, press Escape.\n\nExercise: Click on this cell, then press Enter + Shift to get to the next cell. Switch between edit and command mode a few times.", "# mode practice", "Help with commands\nIf you ever need to look up a command, you can bring up the list of shortcuts by pressing H in command mode. The keyboard shortcuts are also available above in the Help menu. Go ahead and try it now.\nCreating new cells\nOne of the most common commands is creating new cells. You can create a cell above the current cell by pressing A in command mode. Pressing B will create a cell below the currently selected cell.\n\nExercise: Create a cell above this cell using the keyboard command.\nExercise: Create a cell below this cell using the keyboard command.\n\nSwitching between Markdown and code\nWith keyboard shortcuts, it is quick and simple to switch between Markdown and code cells. To change from Markdown to cell, press Y. To switch from code to Markdown, press M.\n\nExercise: Switch the cell below between Markdown and code cells.", "## Practice here\n\ndef fibo(n): # Recursive Fibonacci sequence!\n if n == 0:\n return 0\n elif n == 1:\n return 1\n return fibo(n-1) + fibo(n-2)", "Line numbers\nA lot of times it is helpful to number the lines in your code for debugging purposes. You can turn on numbers by pressing L (in command mode of course) on a code cell.\n\nExercise: Turn line numbers on and off in the above code cell.\n\nDeleting cells\nDeleting cells is done by pressing D twice in a row so D, D. This is to prevent accidently deletions, you have to press the button twice!\n\nExercise: Delete the cell below.", "# DELETE ME", "Saving the notebook\nNotebooks are autosaved every once in a while, but you'll often want to save your work between those times. To save the book, press S. So easy!\nThe Command Palette\nYou can easily access the command palette by pressing Shift + Control/Command + P. \n\nNote: This won't work in Firefox and Internet Explorer unfortunately. There is already a keyboard shortcut assigned to those keys in those browsers. However, it does work in Chrome and Safari.\n\nThis will bring up the command palette where you can search for commands that aren't available through the keyboard shortcuts. For instance, there are buttons on the toolbar that move cells up and down (the up and down arrows), but there aren't corresponding keyboard shortcuts. 
To move a cell down, you can open up the command palette and type in \"move\" which will bring up the move commands.\n\nExercise: Use the command palette to move the cell below down one position.", "# Move this cell down\n\n# below this cell", "Finishing up\nThere is plenty more you can do such as copying, cutting, and pasting cells. I suggest getting used to using the keyboard shortcuts, you’ll be much quicker at working in notebooks. When you become proficient with them, you'll rarely need to move your hands away from the keyboard, greatly speeding up your work.\nRemember, if you ever need to see the shortcuts, just press H in command mode." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tpin3694/tpin3694.github.io
python/pandas_apply_operations_to_dataframes.ipynb
mit
[ "Title: Applying Operations Over pandas Dataframes\nSlug: pandas_apply_operations_to_dataframes\nSummary: Applying Operations Over pandas Dataframes\nDate: 2016-05-01 12:00\nCategory: Python\nTags: Data Wrangling\nAuthors: Chris Albon \nImport Modules", "import pandas as pd\nimport numpy as np", "Create a dataframe", "data = {'name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'], \n 'year': [2012, 2012, 2013, 2014, 2014], \n 'reports': [4, 24, 31, 2, 3],\n 'coverage': [25, 94, 57, 62, 70]}\ndf = pd.DataFrame(data, index = ['Cochice', 'Pima', 'Santa Cruz', 'Maricopa', 'Yuma'])\ndf", "Create a capitalization lambda function", "capitalizer = lambda x: x.upper()", "Apply the capitalizer function over the column 'name'\napply() can apply a function along any axis of the dataframe", "df['name'].apply(capitalizer)", "Map the capitalizer lambda function over each element in the series 'name'\nmap() applies an operation over each element of a series", "df['name'].map(capitalizer)", "Apply a square root function to every single cell in the whole data frame\napplymap() applies a function to every single element in the entire dataframe.", "# Drop the string variable so that applymap() can run\ndf = df.drop('name', axis=1)\n\n# Return the square root of every cell in the dataframe\ndf.applymap(np.sqrt)", "Applying A Function Over A Dataframe\nCreate a function that multiplies all non-strings by 100", "# create a function called times100\ndef times100(x):\n # that, if x is a string,\n if type(x) is str:\n # just returns it untouched\n return x\n # but, if not, return it multiplied by 100\n elif x:\n return 100 * x\n # and leave everything else\n else:\n return", "Apply the times100 over every cell in the dataframe", "df.applymap(times100)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
miaecle/deepchem
examples/notebooks/deepchem_tensorflow_eager.ipynb
mit
[ "TensorGraph Layers and TensorFlow eager\nIn this tutorial we will look at the working of TensorGraph layer with TensorFlow eager.\n But before that let's see what exactly is TensorFlow eager.\nEager execution is an imperative, define-by-run interface where operations are executed immediately as they are called from Python. In other words, eager execution is a feature that makes TensorFlow execute operations immediately. Concrete values are returned instead of a computational graph to be executed later.\nAs a result:\n- It allows writing imperative coding style like numpy\n- Provides fast debugging with immediate run-time errors and integration with Python tools\n- Strong support for higher-order gradients", "import tensorflow as tf\nimport tensorflow.contrib.eager as tfe", "After importing neccessary modules, at the program startup we invoke enable_eager_execution().", "tfe.enable_eager_execution()", "Enabling eager execution changes how TensorFlow functions behave. Tensor objects return concrete values instead of being a symbolic reference to nodes in a static computational graph(non-eager mode). As a result, eager execution should be enabled at the beginning of a program.\nNote that with eager execution enabled, these operations consume and return multi-dimensional arrays as Tensor objects, similar to NumPy ndarrays\nDense layer", "import numpy as np\nimport deepchem as dc\nfrom deepchem.models.tensorgraph import layers", "In the following snippet we describe how to create a Dense layer in eager mode. The good thing about calling a layer as a function is that we don't have to call create_tensor() directly. This is identical to tensorflow API and has no conflict. And since eager mode is enabled, it should return concrete tensors right away.", "# Initialize parameters\nin_dim = 2\nout_dim = 3\nbatch_size = 10\n\ninputs = np.random.rand(batch_size, in_dim).astype(np.float32) #Input \n\nlayer = layers.Dense(out_dim) # Provide the number of output values as parameter. This creates a Dense layer\nresult = layer(inputs) #get the ouput tensors\n\nprint(result)", "Creating a second Dense layer should produce different results.", "layer2 = layers.Dense(out_dim)\nresult2 = layer2(inputs)\n\nprint(result2)", "We can also execute the layer in eager mode to compute its output as a function of inputs. If the layer defines any variables, they are created the first time it is invoked. This happens in the same exact way that we would create a single layer in non-eager mode.\nThe following is also a way to create a layer in eager mode. The create_tensor() is invoked by __call__() object. This gives us an advantage of directly passing the tensor as a parameter while constructing a TensorGraph layer.", "x = layers.Dense(out_dim)(inputs)\n\nprint(x)", "Conv1D layer\nDense layers are one of the layers defined in Deepchem. Along with it there are several others like Conv1D, Conv2D, conv3D etc. 
We also take a look at how to construct a Conv1D layer below.\nBasically this layer creates a convolution kernel that is convolved with the layer input over a single spatial (or temporal) dimension to produce a tensor of outputs.\nWhen using this layer as the first layer in a model, provide an input_shape argument (tuple of integers or None).\nWhen the argument input_shape is passed in as a tuple of integers, e.g. (2, 3), it would mean we are passing sequences of 2 three-dimensional vectors.\nAnd when it is passed as (None, 3) it means that we want variable-length sequences of 3-dimensional vectors.", "from deepchem.models.tensorgraph.layers import Conv1D\n\nwidth = 5\nin_channels = 2\nfilters = 3\nkernel_size = 2\nbatch_size = 5\n\ninputs = np.random.rand(batch_size, width, in_channels).astype(\n np.float32)\n\nlayer = layers.Conv1D(filters, kernel_size)\n\nresult = layer(inputs)\nprint(result)", "Again it should be noted that creating a second Conv1D layer would produce different results. \nSo that's how we invoke different DeepChem layers in eager mode.\nAnother interesting point is that we can mix TensorFlow layers and DeepChem layers. Since they all take tensors as inputs and return tensors as outputs, you can take the output from one kind of layer and pass it as input to a different kind of layer. But it should be noted that TensorFlow layers can't be added to a TensorGraph.\nWorkflow of DeepChem layers\nNow that we've generalised so much, we should actually see if DeepChem supplies an identical workflow for layers to that of TensorFlow. For instance, let's consider the code where we create a Dense layer.\npython\ny = Dense(3)(input)\nWhat the above line does is that it creates a dense layer with three outputs. It initializes the weights and the biases. And then it multiplies the input tensor by the weights.\nLet's put the above statement in some mathematical terms. A Dense layer has a matrix of weights of shape (M, N), where M is the number of outputs and N is the number of inputs. The first time we call it, the layer sets N based on the shape of the input we passed to it and creates the weight matrix.", "_input = tf.random_normal([2, 3])\nprint(_input)\n\nlayer = layers.Dense(4) # A DeepChem Dense layer\nresult = layer(_input)\nprint(result)", "This is exactly how a TensorFlow Dense layer works. It implements the same operation as that of DeepChem's Dense layer, i.e., outputs = activation(inputs · kernel + bias), where kernel is the weights matrix created by the layer, and bias is a bias vector created by the layer.", "result = tf.layers.dense(_input, units=4) # A tensorflow Dense layer\nprint(result)", "We pass a tensor input to the TensorFlow Dense layer and receive an output tensor that has the same shape as the input except that the last dimension is that of the output space.\nGradients\nFinding gradients under eager mode is very similar to the autograd API. The computational flow is very clean and logical.\nDifferent operations can occur during each call; all forward operations are recorded to a tape, which is then played backwards when computing gradients. 
After the gradients have been computed, the tape is discarded.", "def dense_squared(x):\n return layers.Dense(1)(layers.Dense(1)(x))\n\ngrad = tfe.gradients_function(dense_squared)\n\nprint(dense_squared(3.0))\nprint(grad(3.0))", "In the above example, the gradients_function call takes a Python function dense_squared() as an argument and returns a Python callable that computes the partial derivatives of dense_squared() with respect to its inputs." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
bjornstenqvist/faunus
examples/multipole/multipole.ipynb
mit
[ "Decomposition of electrostatic interaction energy between two multipolar molecules\nThis is a Jupyter notebook (http://jupyter.org) for studying the interaction between multipolar particles.\nLoad required packages", "%matplotlib inline\nimport matplotlib\nimport matplotlib.cm as cm\nimport unittest\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sys\nplt.rcParams.update({'font.size': 18, 'figure.figsize': [8.0, 6.0]})", "Run simulation", "%%bash\nif [[ -z \"${FAUNUS_EXECUTABLE}\" ]]; then\n yason.py multipole.yml | faunus --nobar -s multipole.state.ubj\nelse\n echo \"Seems we're running CTest - use Faunus target from CMake\"\n \"${YASON_EXECUTABLE}\" multipole.yml | \"${FAUNUS_EXECUTABLE}\" --nobar -s multipole.state.ubj\nfi", "Plot multipolar energies as a function of separation", "R, exact, total, ionion, iondip, dipdip, ionquad, mucorr = np.loadtxt('multipole.dat', unpack=True, skiprows=2)\nlw=4\nplt.plot(R, ionion, label='ion-ion', lw=lw)\nplt.plot(R, iondip, label='ion-dipole', lw=lw)\nplt.plot(R, dipdip, label='dipole-dipole', lw=lw)\nplt.plot(R, ionquad, label='ion-quadrupole', lw=lw)\nplt.plot(R, total, label='sum of multipoles', lw=lw)\nplt.plot(R, exact, 'ko', label='exact')\nplt.xlabel('separation (Å)')\nplt.ylabel('energy ($kT/\\lambda_B$)')\nplt.legend(loc=0, frameon=False)\nplt.savefig('multipole.png', bbox_inches='tight')", "Unittests\nCompare distributions with previously saved results", "class TestMultipole(unittest.TestCase):\n def test_Exact(self):\n self.assertAlmostEqual(exact.mean(), -0.12266326530612245, places=3)\n\n def test_IonIon(self):\n self.assertAlmostEqual(ionion.mean(), -0.11624285714285715, places=2)\n\n def test_IonDipole(self):\n self.assertAlmostEqual(iondip.mean(), -0.006777551020408164, places=2)\n \n def test_DipoleDipole(self):\n self.assertAlmostEqual(dipdip.mean(), -0.0008632653061224489, places=2)\n \n def test_IonQuadrupole(self):\n self.assertAlmostEqual(ionquad.mean(), 0.0005040816326530612, places=3)\n\ntest = TestMultipole()\nsuite = unittest.TestLoader().loadTestsFromModule(test)\nret = unittest.TextTestRunner(verbosity=2).run(suite)\nif (not ret.wasSuccessful()):\n sys.exit(ret)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
zzsza/Datascience_School
12. 추정 및 검정/07. 베이지안 모수 추정.ipynb
mit
[ "베이지안 모수 추정\n베이지안 모수 추정(Bayesian parameter estimation) 방법은 모수의 값에 해당하는 특정한 하나의 숫자를 계산하는 것이 아니라 모수의 값이 가질 수 있는 모든 가능성, 즉 모수의 분포를 계산하는 작업이다.\n이때 계산된 모수의 분포를 표현 방법은 두 가지가 있다.\n\n비모수적(non-parametric) 방법\n\n샘플을 제시한 후 히스토그램와 같은 방법으로 임의의 분포를 표현한다. MCMC(Markov chain Monte Carlo)와 같은 몬테카를로 방법에서 사용한다.\n\n\n모수적(parametric) 방법 \n\n모수의 분포를 잘 알려진 확률 분포 모형을 사용하여 나타낸다. 이렇게 하면 모수를 나타내는 확률 분포 수식이 다시 모수(parameter)를 가지게 되는데 이를 hyper-parameter라고도 부른다. 모수적 방법은 결국 hypter-parameter의 값을 숫자로 계산하는 작업이 된다.\n\n여기에서는 모수적 방법의 몇 가지 간단한 예를 보인다.\n베이지안 모수 추정의 기본 원리\n베이지안 모수 추정 방법은 다음 공식을 사용하여 모수의 분포 $p(\\theta)$를 $p(\\theta \\mid x_{1},\\ldots,x_{N})$ 로 갱신(update)하는 작업이다.\n$$ p(\\theta \\mid x_{1},\\ldots,x_{N}) = \\dfrac{p(x_{1},\\ldots,x_{N} \\mid \\theta) \\cdot p(\\theta)}{p(x_{1},\\ldots,x_{N})} \\propto p(x_{1},\\ldots,x_{N} \\mid \\theta ) \\cdot p(\\theta) $$ \n이 식에서 \n$p(\\theta)$ 는 사전(Prior) 분포라고 한다. 사전 분포는 베이지안 추정 작업을 하기 전에 이미 알고 있던 모수 $\\theta$의 분포를 뜻한다. \n아무런 지식이 없는 경우에는 보통 uniform 분포 $\\text{Beta}(1,1)$나 0 을 중심으로하는 정규 분포 $\\mathcal{N}(0, 1)$를 사용한다\n$p(\\theta \\mid x_{1},\\ldots,x_{N})$ 는 사후(Posterior) 분포라고 한다. 수학적으로는 데이터 $x_{1},\\ldots,x_{N}$가 알려진 상태에서의 $\\theta$에 대한 조건부 확률 분포이다. 우리가 베이지안 모수 추정 작업을 통해 구하고자 하는 것이 바로 이 사후 분포이다. \n$p(x_{1},\\ldots,x_{N} \\mid \\theta)$ 분포는 우도(Likelihood) 분포라고 한다. 현재 우리가 알고 있는 값은 데이터 $x_{1},\\ldots,x_{N}$ 이고 $\\theta$가 미지수이다. 이와 반대로 $theta$를 알고 있는 상태에서의 데이터 $x_{1},\\ldots,x_{N}$ 가 나올 조건부 확률 분포를 우도라고 한다. \n베르누이 분포의 모수 추정\n가장 단순한 이산 확률 분포인 베르누이 분포의 모수 $\\theta$를 베이지안 추정법으로 추정해 본다.\n베르누이 분포의 모수는 0부터 1사이의 값을 가지므로 사전 분포는 하이퍼 모수 $a=b=1$인 베타 분포로 한다.\n$$ P(\\theta) \\propto \\theta^{a−1}(1−\\theta)^{b−1} \\;\\;\\; (a=1, b=1)$$\n데이터는 모두 독립적인 베르누이 분포의 곱이므로 우도는 다음과 같이 이항 분포가 된다.\n$$ P(x_{1},\\ldots,x_{N} \\mid \\theta) = \\prod_{i=1}^N \\theta^{x_i} (1 - \\theta)^{1-x_i} $$\n베이지안 규칙을 사용하여 사후 분포를 구하면 다음과 같이 갱신된 하이퍼 모수 $a'$, $b'$를 가지는 베타 분포가 된다.\n$$ \n\\begin{eqnarray}\nP(\\theta \\mid x_{1},\\ldots,x_{N})\n&\\propto & P(x_{1},\\ldots,x_{N} \\mid \\theta) P(\\theta) \\\n&=& \\prod_{i=1}^N \\theta^{x_i} (1 - \\theta)^{1-x_i} \\cdot \\theta^{a−1}(1−\\theta)^{b−1} \\\n&=& \\theta^{\\sum_{i=1}^N x_i + a−1} (1 - \\theta)^{\\sum_{i=1}^N (1-x_i) + b−1 } \\\n&=& \\theta^{N_1 + a−1} (1 - \\theta)^{N_0 + b−1 } \\\n&=& \\theta^{a'−1} (1 - \\theta)^{b'−1 } \\\n\\end{eqnarray}\n$$\n이렇게 사전 분포와 사후 분포가 같은 확률 분포 모형을 가지게 하는 사전 분포를 conjugate prior 라고 한다.\n갱신된 하이퍼 모수의 값은 다음과 같다.\n$$ a' = N_1 + a $$\n$$ b' = N_0 + b $$", "theta0 = 0.6\na0, b0 = 1, 1\nprint(\"step 0: mode = unknown\")\n\nxx = np.linspace(0, 1, 1000)\nplt.plot(xx, sp.stats.beta(a0, b0).pdf(xx), label=\"initial\");\n\nnp.random.seed(0)\nx = sp.stats.bernoulli(theta0).rvs(50)\nN0, N1 = np.bincount(x, minlength=2)\na1, b1 = a0 + N1, b0 + N0\nplt.plot(xx, sp.stats.beta(a1, b1).pdf(xx), label=\"1st\");\nprint(\"step 1: mode =\", (a1 - 1)/(a1 + b1 - 2))\n\nx = sp.stats.bernoulli(theta0).rvs(50)\nN0, N1 = np.bincount(x, minlength=2)\na2, b2 = a1 + N1, b1 + N0\nplt.plot(xx, sp.stats.beta(a2, b2).pdf(xx), label=\"2nd\");\nprint(\"step 2: mode =\", (a2 - 1)/(a2 + b2 - 2))\n\nx = sp.stats.bernoulli(theta0).rvs(50)\nN0, N1 = np.bincount(x, minlength=2)\na3, b3 = a2 + N1, b2 + N0\nplt.plot(xx, sp.stats.beta(a3, b3).pdf(xx), label=\"3rd\");\nprint(\"step 3: mode =\", (a3 - 1)/(a3 + b3 - 2))\n\nx = sp.stats.bernoulli(theta0).rvs(50)\nN0, N1 = np.bincount(x, minlength=2)\na4, b4 = a3 + N1, b3 + N0\nplt.plot(xx, sp.stats.beta(a4, b4).pdf(xx), label=\"4th\");\nprint(\"step 4: mode =\", (a4 - 1)/(a4 + b4 - 2))\n\nplt.legend()\nplt.show()", "카테고리 분포의 모수 추정\n다음으로 클래스 
갯수가 $K$인 카테고리 분포의 모수 $\\theta$ 벡터를 베이지안 추정법으로 추정해 본다.\n카테고리 분포의 모수의 각 원소는 모두 0부터 1사이의 값을 가지므로 사전 분포는 하이퍼 모수 $\\alpha_k=\\dfrac{1}{K}$인 디리클리 분포로 한다.\n$$ P(\\theta) \\propto \\prod_{k=1}^K \\theta_k^{\\alpha_k - 1} \\;\\;\\; (\\alpha_k = 1/K , \\; \\text{ for all } k) $$\n데이터는 모두 독립적인 카테고리 분포의 곱이므로 우도는 다음과 같이 다항 분포가 된다.\n$$ P(x_{1},\\ldots,x_{N} \\mid \\theta) = \\prod_{i=1}^N \\prod_{k=1}^K \\theta_k^{x_{i,k}} $$\n베이지안 규칙을 사용하여 사후 분포를 구하면 다음과 같이 갱신된 하이퍼 모수 $\\alpha'_i$를 가지는 디리클리 분포가 된다.\n$$ \n\\begin{eqnarray}\nP(\\theta \\mid x_{1},\\ldots,x_{N})\n&\\propto & P(x_{1},\\ldots,x_{N} \\mid \\theta) P(\\theta) \\\n&=& \\prod_{i=1}^N \\prod_{k=1}^K \\theta_k^{x_{i,k}} \\cdot \\prod_{k=1}^K \\theta_k^{\\alpha_k - 1} \\\n&=& \\prod_{k=1}^K \\theta^{\\sum_{i=1}^N x_{i,k} + \\alpha_k − 1} \\\n&=& \\prod_{k=1}^K \\theta^{N_k + \\alpha_k −1} \\\n&=& \\prod_{k=1}^K \\theta^{\\alpha'_k −1} \\\n\\end{eqnarray}\n$$\n이 경우에도 conjugate prior 임을 알 수 있다.\n갱신된 하이퍼 모수의 값은 다음과 같다.\n$$ \\alpha'_k = N_k + \\alpha_k $$", "def plot_dirichlet(alpha):\n\n def project(x):\n n1 = np.array([1, 0, 0])\n n2 = np.array([0, 1, 0])\n n3 = np.array([0, 0, 1])\n n12 = (n1 + n2)/2\n m1 = np.array([1, -1, 0])\n m2 = n3 - n12\n m1 = m1/np.linalg.norm(m1)\n m2 = m2/np.linalg.norm(m2)\n return np.dstack([(x-n12).dot(m1), (x-n12).dot(m2)])[0]\n\n def project_reverse(x):\n n1 = np.array([1, 0, 0])\n n2 = np.array([0, 1, 0])\n n3 = np.array([0, 0, 1])\n n12 = (n1 + n2)/2\n m1 = np.array([1, -1, 0])\n m2 = n3 - n12\n m1 = m1/np.linalg.norm(m1)\n m2 = m2/np.linalg.norm(m2)\n return x[:,0][:, np.newaxis] * m1 + x[:,1][:, np.newaxis] * m2 + n12\n\n eps = np.finfo(float).eps * 10\n X = project([[1-eps,0,0], [0,1-eps,0], [0,0,1-eps]])\n \n import matplotlib.tri as mtri\n\n triang = mtri.Triangulation(X[:,0], X[:,1], [[0, 1, 2]])\n refiner = mtri.UniformTriRefiner(triang)\n triang2 = refiner.refine_triangulation(subdiv=6)\n XYZ = project_reverse(np.dstack([triang2.x, triang2.y, 1-triang2.x-triang2.y])[0])\n\n pdf = sp.stats.dirichlet(alpha).pdf(XYZ.T)\n plt.tricontourf(triang2, pdf)\n plt.axis(\"equal\")\n plt.show()\n\ntheta0 = np.array([0.2, 0.6, 0.2])\n\nnp.random.seed(0)\nx1 = np.random.choice(3, 20, p=theta0)\nN1 = np.bincount(x1, minlength=3)\nx2 = np.random.choice(3, 100, p=theta0)\nN2 = np.bincount(x2, minlength=3)\nx3 = np.random.choice(3, 1000, p=theta0)\nN3 = np.bincount(x3, minlength=3)\n\na0 = np.ones(3) / 3\nplot_dirichlet(a0)\n\na1 = a0 + N1\nplot_dirichlet(a1)\nprint((a1 - 1)/(a1.sum() - 3))\n\na2 = a1 + N2\nplot_dirichlet(a2)\nprint((a2 - 1)/(a2.sum() - 3))\n\na3 = a2 + N3\nplot_dirichlet(a3)\nprint((a3 - 1)/(a3.sum() - 3))", "정규 분포의 기댓값 모수 추정\n이번에는 정규 분포의 기댓값 모수를 베이지안 방법으로 추정한다. 
분산 모수 $\\sigma^2$은 알고 있다고 가정한다.\n기댓값은 $-\\infty$부터 $\\infty$까지의 모든 수가 가능하기 때문에 모수의 사전 분포로는 정규 분포를 사용한다.\n$$ P(\\mu) = N(\\mu_0, \\sigma^2_0) = \\dfrac{1}{\\sqrt{2\\pi\\sigma_0^2}} \\exp \\left(-\\dfrac{(\\mu-\\mu_0)^2}{2\\sigma_0^2}\\right)$$\n데이터는 모두 독립적인 정규 분포의 곱이므로 우도는 다음과 같이 된다.\n$$ P(x_{1},\\ldots,x_{N} \\mid \\mu) = \\prod_{i=1}^N N(x_i \\mid \\mu ) = \\prod_{i=1}^N \\dfrac{1}{\\sqrt{2\\pi\\sigma^2}} \\exp \\left(-\\dfrac{(x_i-\\mu)^2}{2\\sigma^2}\\right) $$\n$$ \n\\begin{eqnarray}\nP(\\theta \\mid x_{1},\\ldots,x_{N})\n&\\propto & P(x_{1},\\ldots,x_{N} \\mid \\theta) P(\\theta) \\\n&\\propto & \\exp \\left(-\\dfrac{(\\mu-\\mu'_0)^2}{2\\sigma_0^{'2}}\\right) \\\n\\end{eqnarray}\n$$\n베이지안 규칙을 사용하여 사후 분포를 구하면 다음과 같이 갱신된 하이퍼 모수 를 가지는 정규 분포가 된다.\n$$\n\\begin{eqnarray}\n\\mu'_0 &=& \\dfrac{\\sigma^2}{N\\sigma_0^2 + \\sigma^2}\\mu_0 + \\dfrac{N\\sigma_0^2}{N\\sigma_0^2 + \\sigma^2} \\dfrac{\\sum x_i}{N} \\\n\\dfrac{1}{\\sigma_0^{'2}} &=& \\dfrac{1}{\\sigma_0^{2}} + \\dfrac{N}{\\sigma^{'2}}\n\\end{eqnarray}\n$$", "mu, sigma2 = 2, 4\n\nmu0, sigma20 = 0, 1\nxx = np.linspace(1, 3, 1000)\n\nnp.random.seed(0)\n\nN = 10\nx = sp.stats.norm(mu).rvs(N)\nmu0 = sigma2/(N*sigma20 + sigma2) * mu0 + (N*sigma20)/(N*sigma20 + sigma2)*x.mean()\nsigma20 = 1/(1/sigma20 + N/sigma2)\nplt.plot(xx, sp.stats.norm(mu0, sigma20).pdf(xx), label=\"1st\");\nprint(mu0)\n\nN = 20\nx = sp.stats.norm(mu).rvs(N)\nmu0 = sigma2/(N*sigma20 + sigma2) * mu0 + (N*sigma20)/(N*sigma20 + sigma2)*x.mean()\nsigma20 = 1/(1/sigma20 + N/sigma2)\nplt.plot(xx, sp.stats.norm(mu0, sigma20).pdf(xx), label=\"2nd\");\nprint(mu0)\n\nN = 50\nx = sp.stats.norm(mu).rvs(N)\nmu0 = sigma2/(N*sigma20 + sigma2) * mu0 + (N*sigma20)/(N*sigma20 + sigma2)*x.mean()\nsigma20 = 1/(1/sigma20 + N/sigma2)\nplt.plot(xx, sp.stats.norm(mu0, sigma20).pdf(xx), label=\"3rd\");\nprint(mu0)\n\nN = 100\nx = sp.stats.norm(mu).rvs(N)\nmu0 = sigma2/(N*sigma20 + sigma2) * mu0 + (N*sigma20)/(N*sigma20 + sigma2)*x.mean()\nsigma20 = 1/(1/sigma20 + N/sigma2)\nplt.plot(xx, sp.stats.norm(mu0, sigma20).pdf(xx), label=\"4th\");\nprint(mu0)\n\nplt.axis([1, 3, 0, 20])\nplt.legend()\nplt.show()" ]
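The four hand-unrolled update steps in the cell above can be folded into one helper. The block below is a small sketch added here for illustration (the function name `update_normal_mean` is made up; the update rule is exactly the one implemented step by step in the cell above):

```python
def update_normal_mean(mu0, sigma20, x, sigma2):
    """Conjugate update for the mean of a normal with known variance `sigma2`.

    mu0, sigma20 : prior hyperparameters (mean and variance of the prior on mu)
    x            : 1-D array of observations
    sigma2       : known variance of the data-generating normal
    """
    N = len(x)
    w = N * sigma20 / (N * sigma20 + sigma2)      # weight given to the sample mean
    mu_post = (1 - w) * mu0 + w * x.mean()        # posterior mean
    sigma2_post = 1 / (1 / sigma20 + N / sigma2)  # posterior variance
    return mu_post, sigma2_post

# e.g. starting from the same prior as above:
# mu0, sigma20 = update_normal_mean(0, 1, sp.stats.norm(mu).rvs(10), sigma2)
```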
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/en-snapshot/probability/examples/Gaussian_Copula.ipynb
apache-2.0
[ "Copyright 2018 The TensorFlow Probability Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");", "#@title Licensed under the Apache License, Version 2.0 (the \"License\"); { display-mode: \"form\" }\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Copulas Primer\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/probability/examples/Gaussian_Copula\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Gaussian_Copula.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Gaussian_Copula.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Gaussian_Copula.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>", "import numpy as np\nimport matplotlib.pyplot as plt\nimport tensorflow.compat.v2 as tf\ntf.enable_v2_behavior()\n\nimport tensorflow_probability as tfp\ntfd = tfp.distributions\ntfb = tfp.bijectors", "A [copula](https://en.wikipedia.org/wiki/Copula_(probability_theory%29) is a classical approach for capturing the dependence between random variables. More formally, a copula is a multivariate distribution $C(U_1, U_2, ...., U_n)$ such that marginalizing gives $U_i \\sim \\text{Uniform}(0, 1)$.\nCopulas are interesting because we can use them to create multivariate distributions with arbitrary marginals. This is the recipe:\n\nUsing the Probability Integral Transform turns an arbitrary continuous R.V. $X$ into a uniform one $F_X(X)$, where $F_X$ is the CDF of $X$.\nGiven a copula (say bivariate) $C(U, V)$, we have that $U$ and $V$ have uniform marginal distributions.\nNow given our R.V's of interest $X, Y$, create a new distribution $C'(X, Y) = C(F_X(X), F_Y(Y))$. The marginals for $X$ and $Y$ are the ones we desired.\n\nMarginals are univariate and thus may be easier to measure and/or model. A copula enables starting from marginals yet also achieving arbitrary correlation between dimensions.\nGaussian Copula\nTo illustrate how copulas are constructed, consider the case of capturing dependence according to multivariate Gaussian correlations. A Gaussian Copula is one given by $C(u_1, u_2, ...u_n) = \\Phi_\\Sigma(\\Phi^{-1}(u_1), \\Phi^{-1}(u_2), ... \\Phi^{-1}(u_n))$ where $\\Phi_\\Sigma$ represents the CDF of a MultivariateNormal, with covariance $\\Sigma$ and mean 0, and $\\Phi^{-1}$ is the inverse CDF for the standard normal.\nApplying the normal's inverse CDF warps the uniform dimensions to be normally distributed. 
Applying the multivariate normal's CDF then squashes the distribution to be marginally uniform and with Gaussian correlations.\nThus, what we get is that the Gaussian Copula is a distribution over the unit hypercube $[0, 1]^n$ with uniform marginals.\nDefined as such, the Gaussian Copula can be implemented with tfd.TransformedDistribution and appropriate Bijector. That is, we are transforming a MultivariateNormal, via the use of the Normal distribution's inverse CDF, implemented by the tfb.NormalCDF bijector.\nBelow, we implement a Gaussian Copula with one simplifying assumption: that the covariance is parameterized\nby a Cholesky factor (hence a covariance for MultivariateNormalTriL). (One could use other tf.linalg.LinearOperators to encode different matrix-free assumptions.).", "class GaussianCopulaTriL(tfd.TransformedDistribution):\n \"\"\"Takes a location, and lower triangular matrix for the Cholesky factor.\"\"\"\n def __init__(self, loc, scale_tril):\n super(GaussianCopulaTriL, self).__init__(\n distribution=tfd.MultivariateNormalTriL(\n loc=loc,\n scale_tril=scale_tril),\n bijector=tfb.NormalCDF(),\n validate_args=False,\n name=\"GaussianCopulaTriLUniform\")\n\n\n# Plot an example of this.\nunit_interval = np.linspace(0.01, 0.99, num=200, dtype=np.float32)\nx_grid, y_grid = np.meshgrid(unit_interval, unit_interval)\ncoordinates = np.concatenate(\n [x_grid[..., np.newaxis],\n y_grid[..., np.newaxis]], axis=-1)\n\npdf = GaussianCopulaTriL(\n loc=[0., 0.],\n scale_tril=[[1., 0.8], [0., 0.6]],\n).prob(coordinates)\n\n# Plot its density.\n\nplt.contour(x_grid, y_grid, pdf, 100, cmap=plt.cm.jet);", "The power, however, from such a model is using the Probability Integral Transform, to use the copula on arbitrary R.V.s. In this way, we can specify arbitrary marginals, and use the copula to stitch them together.\nWe start with a model:\n$$\\begin{align}\nX &\\sim \\text{Kumaraswamy}(a, b) \\\nY &\\sim \\text{Gumbel}(\\mu, \\beta)\n\\end{align}$$\nand use the copula to get a bivariate R.V. $Z$, which has marginals Kumaraswamy and Gumbel.\nWe'll start by plotting the product distribution generated by those two R.V.s. This is just to serve as a comparison point to when we apply the Copula.", "a = 2.0\nb = 2.0\ngloc = 0.\ngscale = 1.\n\nx = tfd.Kumaraswamy(a, b)\ny = tfd.Gumbel(loc=gloc, scale=gscale)\n\n# Plot the distributions, assuming independence\nx_axis_interval = np.linspace(0.01, 0.99, num=200, dtype=np.float32)\ny_axis_interval = np.linspace(-2., 3., num=200, dtype=np.float32)\nx_grid, y_grid = np.meshgrid(x_axis_interval, y_axis_interval)\n\npdf = x.prob(x_grid) * y.prob(y_grid)\n\n# Plot its density\n\nplt.contour(x_grid, y_grid, pdf, 100, cmap=plt.cm.jet);", "Joint Distribution with Different Marginals\nNow we use a Gaussian copula to couple the distributions together, and plot that. Again our tool of choice is TransformedDistribution applying the appropriate Bijector to obtain the chosen marginals.\nSpecifically, we use a Blockwise bijector which applies different bijectors at different parts of the vector (which is still a bijective transformation).\nNow we can define the Copula we want. Given a list of target marginals (encoded as bijectors), we can easily construct\na new distribution that uses the copula and has the specified marginals.", "class WarpedGaussianCopula(tfd.TransformedDistribution):\n \"\"\"Application of a Gaussian Copula on a list of target marginals.\n\n This implements an application of a Gaussian Copula. Given [x_0, ... 
x_n]\n which are distributed marginally (with CDF) [F_0, ... F_n],\n `GaussianCopula` represents an application of the Copula, such that the\n resulting multivariate distribution has the above specified marginals.\n\n The marginals are specified by `marginal_bijectors`: These are\n bijectors whose `inverse` encodes the CDF and `forward` the inverse CDF.\n\n block_sizes is a 1-D Tensor to determine splits for `marginal_bijectors`\n length should be same as length of `marginal_bijectors`.\n See tfb.Blockwise for details\n \"\"\"\n def __init__(self, loc, scale_tril, marginal_bijectors, block_sizes=None):\n super(WarpedGaussianCopula, self).__init__(\n distribution=GaussianCopulaTriL(loc=loc, scale_tril=scale_tril),\n bijector=tfb.Blockwise(bijectors=marginal_bijectors,\n block_sizes=block_sizes),\n validate_args=False,\n name=\"GaussianCopula\")", "Finally, let's actually use this Gaussian Copula. We'll use a Cholesky of $\\begin{bmatrix}1 & 0\\\\rho & \\sqrt{(1-\\rho^2)}\\end{bmatrix}$, which will correspond to variances 1, and correlation $\\rho$ for the multivariate normal.\nWe'll look at a few cases:", "# Create our coordinates:\ncoordinates = np.concatenate(\n [x_grid[..., np.newaxis], y_grid[..., np.newaxis]], -1)\n\n\ndef create_gaussian_copula(correlation):\n # Use Gaussian Copula to add dependence.\n return WarpedGaussianCopula(\n loc=[0., 0.],\n scale_tril=[[1., 0.], [correlation, tf.sqrt(1. - correlation ** 2)]],\n # These encode the marginals we want. In this case we want X_0 has\n # Kumaraswamy marginal, and X_1 has Gumbel marginal.\n\n marginal_bijectors=[\n tfb.Invert(tfb.KumaraswamyCDF(a, b)),\n tfb.Invert(tfb.GumbelCDF(loc=0., scale=1.))])\n\n\n# Note that the zero case will correspond to independent marginals!\ncorrelations = [0., -0.8, 0.8]\ncopulas = []\nprobs = []\nfor correlation in correlations:\n copula = create_gaussian_copula(correlation)\n copulas.append(copula)\n probs.append(copula.prob(coordinates))\n\n\n# Plot it's density\n\nfor correlation, copula_prob in zip(correlations, probs):\n plt.figure()\n plt.contour(x_grid, y_grid, copula_prob, 100, cmap=plt.cm.jet)\n plt.title('Correlation {}'.format(correlation))", "Finally, let's verify that we actually get the marginals we want.", "def kumaraswamy_pdf(x):\n return tfd.Kumaraswamy(a, b).prob(np.float32(x))\n\ndef gumbel_pdf(x):\n return tfd.Gumbel(gloc, gscale).prob(np.float32(x))\n\n\ncopula_samples = []\nfor copula in copulas:\n copula_samples.append(copula.sample(10000))\n\nplot_rows = len(correlations)\nplot_cols = 2 # for 2 densities [kumarswamy, gumbel]\nfig, axes = plt.subplots(plot_rows, plot_cols, sharex='col', figsize=(18,12))\n\n# Let's marginalize out on each, and plot the samples.\n\nfor i, (correlation, copula_sample) in enumerate(zip(correlations, copula_samples)):\n k = copula_sample[..., 0].numpy()\n g = copula_sample[..., 1].numpy()\n\n\n _, bins, _ = axes[i, 0].hist(k, bins=100, density=True)\n axes[i, 0].plot(bins, kumaraswamy_pdf(bins), 'r--')\n axes[i, 0].set_title('Kumaraswamy from Copula with correlation {}'.format(correlation))\n\n _, bins, _ = axes[i, 1].hist(g, bins=100, density=True)\n axes[i, 1].plot(bins, gumbel_pdf(bins), 'r--')\n axes[i, 1].set_title('Gumbel from Copula with correlation {}'.format(correlation))\n ", "Conclusion\nAnd there we go! 
We've demonstrated that we can construct Gaussian Copulas using the Bijector API.\nMore generally, writing bijectors using the Bijector API and composing them with a distribution can create rich families of distributions for flexible modelling." ]
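One added sanity check, not in the original notebook: because a Gaussian copula has uniform marginals, sampling from the `GaussianCopulaTriL` defined earlier and histogramming each coordinate should give roughly flat histograms. The `scale_tril` below is my choice, with unit-norm rows so each marginal of the underlying normal has variance one:

```python
# Assumes the GaussianCopulaTriL class, np and plt from the cells above.
copula = GaussianCopulaTriL(loc=[0., 0.], scale_tril=[[1., 0.], [0.8, 0.6]])
samples = copula.sample(10000).numpy()

fig, axes = plt.subplots(1, 2, figsize=(10, 3))
for i, ax in enumerate(axes):
    ax.hist(samples[..., i], bins=50, density=True)    # should look ~ Uniform(0, 1)
    ax.set_title('marginal of coordinate {}'.format(i))
plt.show()
```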
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rdhyee/webtech-learning
notebooks/js_notes.ipynb
apache-2.0
[ "import requests\nfrom jinja2 import Template\nfrom IPython.display import Javascript, HTML\n\nfrom settings import FLICKR_KEY\n\n%%javascript\n\nconsole.log(\"hello\");\n\n%%javascript\n// access of jquery\n\nconsole.log($('body'));\n\n# jquery ajax, getJSON, especially with promises\n# https://www.flickr.com/services/api/explore/flickr.photos.search\n\nflickr_url = \"https://api.flickr.com/services/rest/?method=flickr.photos.search&api_key={key}&tags={tag}&format=json&nojsoncallback=1\"\nurl = flickr_url.format(key=FLICKR_KEY, tag='tiger')\n\n# in Python, can retrieve Flickr output\n\nr = requests.get(url)\nr.json()\n\nJS = \"\"\"\n<script>\n$.getJSON('{{url}}', function (data) {\n console.log(data.photos.photo.length);\n console.log(element);\n})\n</script>\n\"\"\"\n\ntemplate = Template(JS)\nHTML(template.render(url=url))\n\nimport uuid\ndiv_id = str(uuid.uuid4())\ndiv_id\n\n# unfinished idea...easier to know that in JS, we have element to work with\n\nimport uuid\n\ndef _HTML (html, css='', js='', div_id=None, template_vars=None):\n\n if div_id is None:\n div_id = \"i-\"+str(uuid.uuid4())\n \n if template_vars is None:\n template_vars = {}\n \n html = \"<div id={}>{}</div>\".format(div_id, html)\n \n # wrap css to apply to this div\n if css:\n css = \"\"\"\n<style>\n#{div_id} * {{ \n {css}\n}}\n</style>\"\"\".format(div_id=div_id, css=css)\n\n # wrap js in <script>\n if js:\n js = \"\"\"\n<script>\n var element = $('#{div_id}');\n {js}\n</script>\"\"\".format(div_id=div_id, js=js)\n \n net_html = html + css + js\n \n template = Template(net_html)\n return HTML(template.render(template_vars))\n\njs = \"\"\"\nconsole.log(element);\n\"\"\"\n\n_HTML(\"<b>hi</b>\", css='color:red', js=js)\n\n# deferred.done()\n\n%%javascript\n//http://stackoverflow.com/a/28713427\nelement.html('<b>Hi</b>')\nconsole.log(element)", "Promises\nHypothesis --> full implementation of promises coming in jQuery 3.0", "%%javascript\n// version of jQuery\nelement.text($.fn.jquery)\n\n%%javascript\n\n{\n \n// incorrect attempt to compute element independently...\n// just use element: http://stackoverflow.com/a/20020566/\n\nvar _e = IPython.notebook.get_selected_cell().output_area.element;\nvar _element = _e.find(\".rendered_html\");\n\nelement.text(\"hello\");\nconsole.log(element);\nconsole.log(element.text());\n\n \n}", "jquery ajax, getJSON, especially with promises\nhttps://www.flickr.com/services/api/explore/flickr.photos.search", "from IPython.display import Javascript\nimport requests\nfrom jinja2 import Template\n\nflickr_url = \"https://api.flickr.com/services/rest/?method=flickr.photos.search&api_key={key}&tags={tag}&format=json&nojsoncallback=1\"\nurl = flickr_url.format(key=FLICKR_KEY, tag='tiger')\n\njs_template = Template(\"\"\"\n$.getJSON('{{url}}', function (data) {\n element.text(data.photos.photo.length);\n})\n\"\"\")\n\nJavascript(js_template.render(url=url))\n\n# use done\n\njs_template = Template(\"\"\"\n\nvar p = $.getJSON('{{url}}');\n\np.done(function(data){\n\n element.text(data.photos.photo.length);\n\n})\n\n\"\"\")\n\nJavascript(js_template.render(url=url))\n\n# use then\n# https://api.jquery.com/deferred.then/\n\njs_template = Template(\"\"\"\n\nvar p = $.getJSON('{{url}}');\n\nvar done = function(data){\n element.text(data.photos.photo.length);\n};\n\nvar fail = function (jqXHR, status, error) {};\n\np.then(done, fail);\n\n\"\"\")\n\nJavascript(js_template.render(url=url))\n", "ideas behind promises and deferred.\nWikipedia article: Futures and promises - Wikipedia, the free 
encyclopedia:\n\nSpecifically, when usage is distinguished, a future is a read-only placeholder view of a variable, while a promise is a writable, single assignment container which sets the value of the future.\" Notably, a future may be defined without specifying which specific promise will set its value, and different possible promises may set the value of a given future, though this can be done only once for a given future. In other cases a future and a promise are created together and associated with each other: the future is the value, the promise is the function that sets the value – essentially the return value (future) of an asynchronous function (promise). Setting the value of a future is also called resolving, fulfilling, or binding it.\n\nGold standard in the JavaScript world: Promises/A+\nwhat I've understood, explained well in cosdev --> moving from callback/continuation style programmging to done and fail.\nThings to figure out:\n\nwhat is a rejection value?", "%%javascript\n\nvar d = $.Deferred();\nvar p = d.promise();\np.then (function (value) {console.log(\"p: \" + value)});\n\nrdhyee.d = d;\n\n\n%%javascript\n// thinking that you would pass a value\nrdhyee.d.resolve(10);\n\n\n%%javascript\n\n// kinda ugly -- must be a better way\n\n// how to express\n// b = a +1\n// c = 2*b\n\nvar a = $.Deferred();\na.then(function(value){console.log('a: '+value)})\n\nvar b = $.Deferred();\nb.then(function(value){console.log('b: '+value)})\n\nvar c = $.Deferred();\nc.then(function(value){console.log('c: '+value)})\n\n// b = a + 1\n\nvar a0 = a.promise();\na0.then( function(value) {b.resolve(value + 1)})\n\n// c = 2*b\nvar b0 = b.promise();\nb0.then(function(value) {c.resolve(2*value)})\n\na.resolve(2);\n\n%%javascript \n\n// better way?\n// https://api.jquery.com/deferred.then/\n\nvar a = $.Deferred();\na.then(function(value){console.log('a: '+value)})\n\n// Deferred.then returns a Promise\nvar b = a.then(function(value){return value + 1});\nvar c = b.then(function(value){return 2*value});\n\nb.done(function(value) {console.log('b: '+ value)});\nc.done(function(value) {console.log('c: '+ value)});\n\na.resolve(2);\n\n%%javascript\n\n// bug\n\n// x, y are Deferred\n// c = x + y\n\nvar x = $.Deferred();\nvar y = $.Deferred();\n\nvar c = $.when(x,y).then(function(x,y) {\n console.log('c.done x:' + x);\n console.log('c.done y:' + y);\n return (x+y);\n})\n\nvar d = c.then(function(value){return value})\nd.done(function(value){console.log(value)})\n\ny.resolve(2);\nx.resolve(27);\n\n\n", "Possible Next steps:\n\nwork with jQuery's implementation of promises\nconsider alts: cujojs/when, tildeio/rsvp.js, kriskowal/q\nwrap jQuery promise with Q: Coming from jQuery · kriskowal/q Wiki\n\nLearn about Python analogs:\n\nagronholm/pythonfutures\nCh 17, 18 of Fluent Python: Fluent Python > V. Control Flow > 17. Concurrency with Futures : Safari Books Online.\nan implementation of Promises/A+ in Python: xogeny/aplus" ]
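On the "Python analogs" note that closes the notebook above, here is a small added sketch (mine, not the author's) of the same done-callback pattern using the standard-library `concurrent.futures` module:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(n):
    return n * n   # stand-in for an asynchronous request such as $.getJSON

def done(future):
    # analogous to deferred.done(...): runs once the result is available
    print("result:", future.result())

with ThreadPoolExecutor(max_workers=2) as pool:
    f = pool.submit(fetch, 7)    # returns a Future immediately
    f.add_done_callback(done)    # attach the continuation instead of blocking
```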
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rainyear/pytips
Tips/2016-05-01-Class-and-Metaclass-i.ipynb
mit
[ "Python 类与元类的深度挖掘 I\n上一篇介绍了 Python 枚举类型的标准库,除了考虑到其实用性,还有一个重要的原因是其实现过程是一个非常好的学习、理解 Python 类与元类的例子。因此接下来两篇就以此为例,深入挖掘 Python 中类与元类背后的机制。\n翻开任何一本 Python 教程,你一定可以在某个位置看到下面这两句话:\n\nPython 中一切皆为对象(Everything in Python is an object);\nPython 是一种面向对象编程(Object Oriented Programming, OOP)的语言。\n\n虽然在上面两句话的语境中,对象(Object)的含义可能稍有不同,但可以肯定的是对象在 Python 中具有非常重要的意义,也是我们接下来将要讨论的所有内容的基础。那么,对象到底是什么?\n\n对象(Object)\n\n对象是 Python 中对数据的一种抽象,Python 程序中所有数据都是通过对象或对象之间的关系来表示的。[ref: Data Model]\n\n港台将 Object 翻译为“物件”,可以将其看作是一个盛有数据的盒子,只不过除了纯粹的数据之外还有其它有用的属性信息,在 Python 中,所有的对象都具有id、type、value三个属性:\n+---------------+\n| |\n| Python Object |\n| |\n+------+--------+\n| ID | |\n+---------------+\n| Type | |\n+---------------+\n| Value| |\n+---------------+\n其中 id 代表内存地址,可以通过内置函数 id() 查看,而 type 表示对象的类别,不同的类别意味着该对象拥有的属性和方法等,可以通过 type() 方法查看:", "def who(obj):\n print(id(obj), type(obj))\n \nwho(1)\nwho(None)\nwho(who)", "对象作为 Python 中的基本单位,可以被创建、命名或删除。Python 中一般不需要手动删除对象,其垃圾回收机制会自动处理不再使用的对象,当然如果需要,也可以使用 del 语句删除某个变量;所谓命名则是指给对象贴上一个名字标签,方便使用,也就是声明或赋值变量;接下来我们重点来看如何创建一个对象。对于一些 Python 内置类型的对象,通常可以使用特定的语法生成,例如数字直接使用阿拉伯数字字面量,字符串使用引号 '',列表使用 [],字典使用 {} ,函数使用 def 语法等,这些对象的类型都是 Python 内置的,那我们能不能创建其它类型的对象呢?\n类与实例\n既然说 Python 是面向对象编程语言,也就允许用户自己创建对象,通常使用 class 语句,与其它对象不同的是,class 定义的对象(称之为类)可以用于产生新的对象(称之为实例):", "class A:\n pass\na = A()\nwho(A)\nwho(a)", "上面的例子中 A 是我们创建的一个新的类,而通过调用 A() 可以获得一个 A 类型的实例对象,我们将其赋值为 a,也就是说我们成功创建了一个与所有内置对象类型不同的对象 a,它的类型为 __main__.A!至此我们可以将 Python 中一切的对象分为两种:\n\n可以用来生成新对象的类,包括内置的 int、str 以及自己定义的 A 等;\n由类生成的实例对象,包括内置类型的数字、字符串以及自己定义的类型为 __main__.A 的 a。\n\n单纯从概念上理解这两种对象没有任何问题,但是这里要讨论的是在实践中不得不考虑的一些细节性问题:\n\n需要一些方便的机制来实现面向对象编程中的继承、重载等特性;\n需要一些固定的流程让我们可以在生成实例化对象的过程中执行一些特定的操作;\n\n这两个问题主要关于类的一些特殊的操作,也就是这一篇后面的主要内容。如果再回顾一下开头提到的两句话,你可能会想到,既然类本身也是对象,那它们又是怎样生成的?这就是后一篇将主要讨论的问题:用于生成类对象的类,即元类(Metaclass)。\nsuper, mro()\n0x00 Python 之禅中提到的最后一条,命名空间(namespace)是个绝妙的理念,类或对象在 Python 中就承担了一部分命名空间的作用。比如说某些特定的方法或属性只有特定类型的对象才有,不同类型对象的属性和方法尽管名字可能相同,但由于隶属不同的命名空间,其值可能完全不同。在实现类的继承与重载等特性时同样需要考虑命名空间的问题,以枚举类型的实现为例,我们需要保证枚举对象的属性名称不能有重复,因此我们需要继承内置的 dict 类:", "class _EnumDict(dict):\n def __init__(self):\n dict.__init__(self)\n self._member_names = []\n def keys(self):\n keys = dict.keys(self)\n return list(filter(lambda k: k.isupper(), keys))\n\ned = _EnumDict()\ned['RED'] = 1\ned['red'] = 2\nprint(ed, ed.keys())", "在上面的例子中 _EnumDict 重载同时调用了父类 dict 的一些方法,上面的写法在语法上是没有错误的,但是如果我们要改变 _EnumDict 的父类,不再是继承自 dict,则必须手动修改所有方法中 dict.method(self) 的调用形式,这样就不是一个好的实践方案了。为了解决这一问题,Python 提供了一个内置函数 super():", "print(super.__doc__)", "我最初只是把 super() 当做指向父类对象的指针,但实际上它可以提供更多功能:给定一个对象及其子类(这里对象要求至少是类对象,而子类可以是实例对象),从该对象父类的命名空间开始搜索对应的方法。\n以下面的代码为例:", "class A(object):\n def method(self):\n who(self)\n print(\"A.method\")\nclass B(A):\n def method(self):\n who(self)\n print(\"B.method\")\nclass C(B):\n def method(self):\n who(self)\n print(\"C.method\")\nclass D(C):\n def __init__(self):\n super().method()\n super(__class__, self).method()\n \n super(C, self).method() # calling C's parent's method\n super(B, self).method() # calling B's parent's method\n \n super(B, C()).method() # calling B's parent's method with instance of C\n\nd = D()\n\nprint(\"\\nInstance of D:\")\nwho(d)", "当然我们也可以在外部使用 super() 方法,只是不能再用缺省参数的形式,因为在外部的命名空间中不再存在 __class__ 和 self:", "super(D, d).method() # calling D's parent's method with instance d", "上面的例子可以用下图来描述:\n+----------+\n| A |\n+----------+\n| method() &lt;---------------+ super(B,self)\n+----------+ |\n |\n+----------+ +----------+\n| B | | D |\n+----------+ super(C,self) +----------+\n| method() &lt;---------------+ method() |\n+----------+ 
+----------+\n |\n+----------+ |\n| C | |\n+----------+ | super(D,self)\n| method() &lt;---------------+\n+----------+\n可以认为 super() 方法通过向父类方向回溯给我们找到了变量搜寻的起点,但是这个回溯的顺序是如何确定的呢?上面的例子中继承关系是 object-&gt;A-&gt;B-&gt;C-&gt;D 的顺序,如果是比较复杂的继承关系呢?", "class A(object):\n pass\nclass B(A):\n def method(self):\n print(\"B's method\")\nclass C(A):\n def method(self):\n print(\"C's method\")\nclass D(B, C):\n def __init__(self):\n super().method()\nclass E(C, B):\n def __init__(self):\n super().method()\n \nd = D()\ne = E()", "Python 中提供了一个类方法 mro() 可以指定搜寻的顺序,mro 是Method Resolution Order 的缩写,它是类方法而不是实例方法,可以通过重载 mro() 方法改变继承中的方法解析顺序,但这需要在元类中完成,在这里只看一下其结果:", "D.mro()\n\nE.mro()", "super() 方法就是沿着 mro() 给出的顺序向上寻找起点的:", "super(D, d).method()\nsuper(E, e).method()\n\nsuper(C, e).method()\nsuper(B, d).method()", "总结\nsuper() 方法解决了类->实例实践过程中关于命名空间的一些问题,而关于生成对象的流程,我们知道初始化实例是通过类的 __init__() 方法完成的,在此之前可能涉及到一些其它的准备工作,包括上面提到的 mro() 方法以及关键的元类->类的过程,将在后面一篇中继续介绍。" ]
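One more added illustration of the mro() discussion above (not part of the original article): when every class in the hierarchy calls `super().method()`, a single call walks the entire MRO in order.

```python
class A:
    def method(self):
        print("A.method")

class B(A):
    def method(self):
        print("B.method")
        super().method()

class C(A):
    def method(self):
        print("C.method")
        super().method()

class D(B, C):
    def method(self):
        print("D.method")
        super().method()

D().method()   # D.method, B.method, C.method, A.method -- exactly the order of D.mro()
print([cls.__name__ for cls in D.mro()])   # ['D', 'B', 'C', 'A', 'object']
```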
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jinntrance/MOOC
coursera/deep-neural-network/quiz and assignments/week 12 CNN - Detection algorithms/Autonomous+driving+application+-+Car+detection+-+v1.ipynb
cc0-1.0
[ "Autonomous driving - Car detection\nWelcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: Redmon et al., 2016 (https://arxiv.org/abs/1506.02640) and Redmon and Farhadi, 2016 (https://arxiv.org/abs/1612.08242). \nYou will learn to:\n- Use object detection on a car detection dataset\n- Deal with bounding boxes\nRun the following cell to load the packages and dependencies that are going to be useful for your journey!", "import argparse\nimport os\nimport matplotlib.pyplot as plt\nfrom matplotlib.pyplot import imshow\nimport scipy.io\nimport scipy.misc\nimport numpy as np\nimport pandas as pd\nimport PIL\nimport tensorflow as tf\nfrom keras import backend as K\nfrom keras.layers import Input, Lambda, Conv2D\nfrom keras.models import load_model, Model\nfrom yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes\nfrom yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body\n\n%matplotlib inline", "Important Note: As you can see, we import Keras's backend as K. This means that to use a Keras function in this notebook, you will need to write: K.function(...).\n1 - Problem Statement\nYou are working on a self-driving car. As a critical component of this project, you'd like to first build a car detection system. To collect data, you've mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while you drive around. \n<center>\n<video width=\"400\" height=\"200\" src=\"nb_images/road_video_compressed2.mp4\" type=\"video/mp4\" controls>\n</video>\n</center>\n<caption><center> Pictures taken from a car-mounted camera while driving around Silicon Valley. <br> We would like to especially thank drive.ai for providing this dataset! Drive.ai is a company building the brains of self-driving vehicles.\n</center></caption>\n<img src=\"nb_images/driveai.png\" style=\"width:100px;height:100;\">\nYou've gathered all these images into a folder and have labelled them by drawing bounding boxes around every car you found. Here's an example of what your bounding boxes look like.\n<img src=\"nb_images/box_label.png\" style=\"width:500px;height:250;\">\n<caption><center> <u> Figure 1 </u>: Definition of a box<br> </center></caption>\nIf you have 80 classes that you want YOLO to recognize, you can represent the class label $c$ either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1 and the rest of which are 0. The video lectures had used the latter representation; in this notebook, we will use both representations, depending on which is more convenient for a particular step. \nIn this exercise, you will learn how YOLO works, then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for you to use. \n2 - YOLO\nYOLO (\"you only look once\") is a popular algoritm because it achieves high accuracy while also being able to run in real-time. This algorithm \"only looks once\" at the image in the sense that it requires only one forward propagation pass through the network to make predictions. 
After non-max suppression, it then outputs recognized objects together with the bounding boxes.\n2.1 - Model details\nFirst things to know:\n- The input is a batch of images of shape (m, 608, 608, 3)\n- The output is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers $(p_c, b_x, b_y, b_h, b_w, c)$ as explained above. If you expand $c$ into an 80-dimensional vector, each bounding box is then represented by 85 numbers. \nWe will use 5 anchor boxes. So you can think of the YOLO architecture as the following: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85).\nLets look in greater detail at what this encoding represents. \n<img src=\"nb_images/architecture.png\" style=\"width:700px;height:400;\">\n<caption><center> <u> Figure 2 </u>: Encoding architecture for YOLO<br> </center></caption>\nIf the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object.\nSince we are using 5 anchor boxes, each of the 19 x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.\nFor simplicity, we will flatten the last two last dimensions of the shape (19, 19, 5, 85) encoding. So the output of the Deep CNN is (19, 19, 425).\n<img src=\"nb_images/flatten.png\" style=\"width:700px;height:400;\">\n<caption><center> <u> Figure 3 </u>: Flattening the last two last dimensions<br> </center></caption>\nNow, for each box (of each cell) we will compute the following elementwise product and extract a probability that the box contains a certain class.\n<img src=\"nb_images/probability_extraction.png\" style=\"width:700px;height:400;\">\n<caption><center> <u> Figure 4 </u>: Find the class detected by each box<br> </center></caption>\nHere's one way to visualize what YOLO is predicting on an image:\n- For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across both the 5 anchor boxes and across different classes). \n- Color that grid cell according to what object that grid cell considers the most likely.\nDoing this results in this picture: \n<img src=\"nb_images/proba_map.png\" style=\"width:300px;height:300;\">\n<caption><center> <u> Figure 5 </u>: Each of the 19x19 grid cells colored according to which class has the largest predicted probability in that cell.<br> </center></caption>\nNote that this visualization isn't a core part of the YOLO algorithm itself for making predictions; it's just a nice way of visualizing an intermediate result of the algorithm. \nAnother way to visualize YOLO's output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this: \n<img src=\"nb_images/anchor_map.png\" style=\"width:200px;height:200;\">\n<caption><center> <u> Figure 6 </u>: Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes. <br> </center></caption>\nIn the figure above, we plotted only boxes that the model had assigned a high probability to, but this is still too many boxes. You'd like to filter the algorithm's output down to a much smaller number of detected objects. To do so, you'll use non-max suppression. 
Specifically, you'll carry out these steps: \n- Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class)\n- Select only one box when several boxes overlap with each other and detect the same object.\n2.2 - Filtering with a threshold on class scores\nYou are going to apply a first filter by thresholding. You would like to get rid of any box for which the class \"score\" is less than a chosen threshold. \nThe model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It'll be convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables:\n- box_confidence: tensor of shape $(19 \\times 19, 5, 1)$ containing $p_c$ (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells.\n- boxes: tensor of shape $(19 \\times 19, 5, 4)$ containing $(b_x, b_y, b_h, b_w)$ for each of the 5 boxes per cell.\n- box_class_probs: tensor of shape $(19 \\times 19, 5, 80)$ containing the detection probabilities $(c_1, c_2, ... c_{80})$ for each of the 80 classes for each of the 5 boxes per cell.\nExercise: Implement yolo_filter_boxes().\n1. Compute box scores by doing the elementwise product as described in Figure 4. The following code may help you choose the right operator: \npython\na = np.random.randn(19*19, 5, 1)\nb = np.random.randn(19*19, 5, 80)\nc = a * b # shape of c will be (19*19, 5, 80)\n2. For each box, find:\n - the index of the class with the maximum box score (Hint) (Be careful with what axis you choose; consider using axis=-1)\n - the corresponding box score (Hint) (Be careful with what axis you choose; consider using axis=-1)\n3. Create a mask by using a threshold. As a reminder: ([0.9, 0.3, 0.4, 0.5, 0.1] &lt; 0.4) returns: [False, True, False, False, True]. The mask should be True for the boxes you want to keep. \n4. Use TensorFlow to apply the mask to box_class_scores, boxes and box_classes to filter out the boxes we don't want. You should be left with just the subset of boxes you want to keep. (Hint)\nReminder: to call a Keras function, you should use K.function(...).", "# GRADED FUNCTION: yolo_filter_boxes\n\ndef yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):\n \"\"\"Filters YOLO boxes by thresholding on object and class confidence.\n \n Arguments:\n box_confidence -- tensor of shape (19, 19, 5, 1)\n boxes -- tensor of shape (19, 19, 5, 4)\n box_class_probs -- tensor of shape (19, 19, 5, 80)\n threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box\n \n Returns:\n scores -- tensor of shape (None,), containing the class probability score for selected boxes\n boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes\n classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes\n \n Note: \"None\" is here because you don't know the exact number of selected boxes, as it depends on the threshold. 
\n For example, the actual output size of scores would be (10,) if there are 10 boxes.\n \"\"\"\n \n # Step 1: Compute box scores\n ### START CODE HERE ### (≈ 1 line)\n box_scores = box_confidence * box_class_probs\n ### END CODE HERE ###\n \n # Step 2: Find the box_classes thanks to the max box_scores, keep track of the corresponding score\n ### START CODE HERE ### (≈ 2 lines)\n box_classes = K.argmax(box_class_probs, -1)\n box_class_scores = K.max(box_scores, -1)\n ### END CODE HERE ###\n \n # Step 3: Create a filtering mask based on \"box_class_scores\" by using \"threshold\". The mask should have the\n # same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)\n ### START CODE HERE ### (≈ 1 line)\n filtering_mask = box_class_scores >= threshold\n ### END CODE HERE ###\n \n # Step 4: Apply the mask to scores, boxes and classes\n ### START CODE HERE ### (≈ 3 lines)\n scores = tf.boolean_mask(box_class_scores, filtering_mask)\n# print(filtering_mask.shape, boxes.shape, box_class_scores.shape, box_classes.shape)\n boxes = tf.boolean_mask(boxes, filtering_mask)\n classes = tf.boolean_mask(box_classes, filtering_mask)\n ### END CODE HERE ###\n \n return scores, boxes, classes\n\nwith tf.Session() as test_a:\n box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)\n boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)\n box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)\n scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)\n print(\"scores[2] = \" + str(scores[2].eval()))\n print(\"boxes[2] = \" + str(boxes[2].eval()))\n print(\"classes[2] = \" + str(classes[2].eval()))\n print(\"scores.shape = \" + str(scores.shape))\n print(\"boxes.shape = \" + str(boxes.shape))\n print(\"classes.shape = \" + str(classes.shape))", "Expected Output:\n<table>\n <tr>\n <td>\n **scores[2]**\n </td>\n <td>\n 10.7506\n </td>\n </tr>\n <tr>\n <td>\n **boxes[2]**\n </td>\n <td>\n [ 8.42653275 3.27136683 -0.5313437 -4.94137383]\n </td>\n </tr>\n\n <tr>\n <td>\n **classes[2]**\n </td>\n <td>\n 7\n </td>\n </tr>\n <tr>\n <td>\n **scores.shape**\n </td>\n <td>\n (?,)\n </td>\n </tr>\n <tr>\n <td>\n **boxes.shape**\n </td>\n <td>\n (?, 4)\n </td>\n </tr>\n\n <tr>\n <td>\n **classes.shape**\n </td>\n <td>\n (?,)\n </td>\n </tr>\n\n</table>\n\n2.3 - Non-max suppression\nEven after filtering by thresholding over the classes scores, you still end up a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS). \n<img src=\"nb_images/non-max-suppression.png\" style=\"width:500px;height:400;\">\n<caption><center> <u> Figure 7 </u>: In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probabiliy) one of the 3 boxes. <br> </center></caption>\nNon-max suppression uses the very important function called \"Intersection over Union\", or IoU.\n<img src=\"nb_images/iou.png\" style=\"width:500px;height:400;\">\n<caption><center> <u> Figure 8 </u>: Definition of \"Intersection over Union\". <br> </center></caption>\nExercise: Implement iou(). 
Some hints:\n- In this exercise only, we define a box using its two corners (upper left and lower right): (x1, y1, x2, y2) rather than the midpoint and height/width.\n- To calculate the area of a rectangle you need to multiply its height (y2 - y1) by its width (x2 - x1)\n- You'll also need to find the coordinates (xi1, yi1, xi2, yi2) of the intersection of two boxes. Remember that:\n - xi1 = maximum of the x1 coordinates of the two boxes\n - yi1 = maximum of the y1 coordinates of the two boxes\n - xi2 = minimum of the x2 coordinates of the two boxes\n - yi2 = minimum of the y2 coordinates of the two boxes\nIn this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) the lower-right corner.", "# GRADED FUNCTION: iou\n\ndef iou(box1, box2):\n \"\"\"Implement the intersection over union (IoU) between box1 and box2\n \n Arguments:\n box1 -- first box, list object with coordinates (x1, y1, x2, y2)\n box2 -- second box, list object with coordinates (x1, y1, x2, y2)\n \"\"\"\n\n # Calculate the (y1, x1, y2, x2) coordinates of the intersection of box1 and box2. Calculate its Area.\n ### START CODE HERE ### (≈ 5 lines)\n xi1 = max(box1[0], box2[0])\n yi1 = max(box1[1], box2[1])\n xi2 = min(box1[2], box2[2])\n yi2 = min(box1[3], box2[3])\n inter_area = (xi2-xi1) * (yi2-yi1)\n ### END CODE HERE ### \n\n # Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)\n ### START CODE HERE ### (≈ 3 lines)\n box1_area = (box1[2]-box1[0]) * (box1[3]-box1[1])\n box2_area = (box2[2]-box2[0]) * (box2[3]-box2[1])\n union_area = box1_area + box2_area - inter_area\n ### END CODE HERE ###\n \n # compute the IoU\n ### START CODE HERE ### (≈ 1 line)\n iou = inter_area / union_area\n ### END CODE HERE ###\n\n return iou\n\nbox1 = (2, 1, 4, 3)\nbox2 = (1, 2, 3, 4) \nprint(\"iou = \" + str(iou(box1, box2)))", "Expected Output:\n<table>\n <tr>\n <td>\n **iou = **\n </td>\n <td>\n 0.14285714285714285\n </td>\n </tr>\n\n</table>\n\nYou are now ready to implement non-max suppression. The key steps are: \n1. Select the box that has the highest score.\n2. Compute its overlap with all other boxes, and remove boxes that overlap it more than iou_threshold.\n3. Go back to step 1 and iterate until there's no more boxes with a lower score than the current selected box.\nThis will remove all boxes that have a large overlap with the selected boxes. Only the \"best\" boxes remain.\nExercise: Implement yolo_non_max_suppression() using TensorFlow. 
TensorFlow has two built-in functions that are used to implement non-max suppression (so you don't actually need to use your iou() implementation):\n- tf.image.non_max_suppression()\n- K.gather()", "# GRADED FUNCTION: yolo_non_max_suppression\n\ndef yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):\n \"\"\"\n Applies Non-max suppression (NMS) to set of boxes\n \n Arguments:\n scores -- tensor of shape (None,), output of yolo_filter_boxes()\n boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)\n classes -- tensor of shape (None,), output of yolo_filter_boxes()\n max_boxes -- integer, maximum number of predicted boxes you'd like\n iou_threshold -- real value, \"intersection over union\" threshold used for NMS filtering\n \n Returns:\n scores -- tensor of shape (, None), predicted score for each box\n boxes -- tensor of shape (4, None), predicted box coordinates\n classes -- tensor of shape (, None), predicted class for each box\n \n Note: The \"None\" dimension of the output tensors has obviously to be less than max_boxes. Note also that this\n function will transpose the shapes of scores, boxes, classes. This is made for convenience.\n \"\"\"\n \n max_boxes_tensor = K.variable(max_boxes, dtype='int32') # tensor to be used in tf.image.non_max_suppression()\n K.get_session().run(tf.variables_initializer([max_boxes_tensor])) # initialize variable max_boxes_tensor\n \n # Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep\n ### START CODE HERE ### (≈ 1 line)\n nms_indices = tf.image.non_max_suppression(boxes, scores, max_boxes, iou_threshold) \n ### END CODE HERE ###\n \n # Use K.gather() to select only nms_indices from scores, boxes and classes\n ### START CODE HERE ### (≈ 3 lines)\n scores = K.gather(scores, nms_indices)\n boxes = K.gather(boxes, nms_indices)\n classes = K.gather(classes, nms_indices)\n ### END CODE HERE ###\n \n return scores, boxes, classes\n\nwith tf.Session() as test_b:\n scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)\n boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)\n classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)\n scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)\n print(\"scores[2] = \" + str(scores[2].eval()))\n print(\"boxes[2] = \" + str(boxes[2].eval()))\n print(\"classes[2] = \" + str(classes[2].eval()))\n print(\"scores.shape = \" + str(scores.eval().shape))\n print(\"boxes.shape = \" + str(boxes.eval().shape))\n print(\"classes.shape = \" + str(classes.eval().shape))", "Expected Output:\n<table>\n <tr>\n <td>\n **scores[2]**\n </td>\n <td>\n 6.9384\n </td>\n </tr>\n <tr>\n <td>\n **boxes[2]**\n </td>\n <td>\n [-5.299932 3.13798141 4.45036697 0.95942086]\n </td>\n </tr>\n\n <tr>\n <td>\n **classes[2]**\n </td>\n <td>\n -2.24527\n </td>\n </tr>\n <tr>\n <td>\n **scores.shape**\n </td>\n <td>\n (10,)\n </td>\n </tr>\n <tr>\n <td>\n **boxes.shape**\n </td>\n <td>\n (10, 4)\n </td>\n </tr>\n\n <tr>\n <td>\n **classes.shape**\n </td>\n <td>\n (10,)\n </td>\n </tr>\n\n</table>\n\n2.4 Wrapping up the filtering\nIt's time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you've just implemented. \nExercise: Implement yolo_eval() which takes the output of the YOLO encoding and filters the boxes using score threshold and NMS. 
There's just one last implementational detail you have to know. There're a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which we have provided): \npython\nboxes = yolo_boxes_to_corners(box_xy, box_wh)\nwhich converts the yolo box coordinates (x,y,w,h) to box corners' coordinates (x1, y1, x2, y2) to fit the input of yolo_filter_boxes\npython\nboxes = scale_boxes(boxes, image_shape)\nYOLO's network was trained to run on 608x608 images. If you are testing this data on a different size image--for example, the car detection dataset had 720x1280 images--this step rescales the boxes so that they can be plotted on top of the original 720x1280 image. \nDon't worry about these two functions; we'll show you where they need to be called.", "# GRADED FUNCTION: yolo_eval\n\ndef yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):\n \"\"\"\n Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.\n \n Arguments:\n yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:\n box_confidence: tensor of shape (None, 19, 19, 5, 1)\n box_xy: tensor of shape (None, 19, 19, 5, 2)\n box_wh: tensor of shape (None, 19, 19, 5, 2)\n box_class_probs: tensor of shape (None, 19, 19, 5, 80)\n image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (608., 608.) (has to be float32 dtype)\n max_boxes -- integer, maximum number of predicted boxes you'd like\n score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box\n iou_threshold -- real value, \"intersection over union\" threshold used for NMS filtering\n \n Returns:\n scores -- tensor of shape (None, ), predicted score for each box\n boxes -- tensor of shape (None, 4), predicted box coordinates\n classes -- tensor of shape (None,), predicted class for each box\n \"\"\"\n \n ### START CODE HERE ### \n \n # Retrieve outputs of the YOLO model (≈1 line)\n box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs\n\n # Convert boxes to be ready for filtering functions \n boxes = yolo_boxes_to_corners(box_xy, box_wh)\n\n # Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line)\n scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, score_threshold)\n \n # Scale boxes back to original image shape.\n boxes = scale_boxes(boxes, image_shape)\n\n # Use one of the functions you've implemented to perform Non-max suppression with a threshold of iou_threshold (≈1 line)\n scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes, iou_threshold)\n \n ### END CODE HERE ###\n \n return scores, boxes, classes\n\nwith tf.Session() as test_b:\n yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),\n tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),\n tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),\n tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))\n scores, boxes, classes = yolo_eval(yolo_outputs)\n print(\"scores[2] = \" + str(scores[2].eval()))\n print(\"boxes[2] = \" + str(boxes[2].eval()))\n print(\"classes[2] = \" + str(classes[2].eval()))\n print(\"scores.shape = \" + str(scores.eval().shape))\n 
print(\"boxes.shape = \" + str(boxes.eval().shape))\n print(\"classes.shape = \" + str(classes.eval().shape))", "Expected Output:\n<table>\n <tr>\n <td>\n **scores[2]**\n </td>\n <td>\n 138.791\n </td>\n </tr>\n <tr>\n <td>\n **boxes[2]**\n </td>\n <td>\n [ 1292.32971191 -278.52166748 3876.98925781 -835.56494141]\n </td>\n </tr>\n\n <tr>\n <td>\n **classes[2]**\n </td>\n <td>\n 54\n </td>\n </tr>\n <tr>\n <td>\n **scores.shape**\n </td>\n <td>\n (10,)\n </td>\n </tr>\n <tr>\n <td>\n **boxes.shape**\n </td>\n <td>\n (10, 4)\n </td>\n </tr>\n\n <tr>\n <td>\n **classes.shape**\n </td>\n <td>\n (10,)\n </td>\n </tr>\n\n</table>\n\n<font color='blue'>\nSummary for YOLO:\n- Input image (608, 608, 3)\n- The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output. \n- After flattening the last two dimensions, the output is a volume of shape (19, 19, 425):\n - Each cell in a 19x19 grid over the input image gives 425 numbers. \n - 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture. \n - 85 = 5 + 80 where 5 is because $(p_c, b_x, b_y, b_h, b_w)$ has 5 numbers, and and 80 is the number of classes we'd like to detect\n- You then select only few boxes based on:\n - Score-thresholding: throw away boxes that have detected a class with a score less than the threshold\n - Non-max suppression: Compute the Intersection over Union and avoid selecting overlapping boxes\n- This gives you YOLO's final output. \n3 - Test YOLO pretrained model on images\nIn this part, you are going to use a pretrained model and test it on the car detection dataset. As usual, you start by creating a session to start your graph. Run the following cell.", "sess = K.get_session()", "3.1 - Defining classes, anchors and image shape.\nRecall that we are trying to detect 80 classes, and are using 5 anchor boxes. We have gathered the information about the 80 classes and 5 boxes in two files \"coco_classes.txt\" and \"yolo_anchors.txt\". Let's load these quantities into the model by running the next cell. \nThe car detection dataset has 720x1280 images, which we've pre-processed into 608x608 images.", "class_names = read_classes(\"model_data/coco_classes.txt\")\nanchors = read_anchors(\"model_data/yolo_anchors.txt\")\nimage_shape = (720., 1280.) ", "3.2 - Loading a pretrained model\nTraining a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes. You are going to load an existing pretrained Keras YOLO model stored in \"yolo.h5\". (These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the \"YOLOv2\" model, but we will more simply refer to it as \"YOLO\" in this notebook.) Run the cell below to load the model from this file.", "yolo_model = load_model(\"model_data/yolo.h5\")", "This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.", "yolo_model.summary()", "Note: On some computers, you may see a warning message from Keras. 
Don't worry about it if you do--it is fine.\nReminder: this model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in Figure (2).\n3.3 - Convert output of the model to usable bounding box tensors\nThe output of yolo_model is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. The following cell does that for you.", "yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))", "You added yolo_outputs to your graph. This set of 4 tensors is ready to be used as input by your yolo_eval function.\n3.4 - Filtering boxes\nyolo_outputs gave you all the predicted boxes of yolo_model in the correct format. You're now ready to perform filtering and select only the best boxes. Lets now call yolo_eval, which you had previously implemented, to do this.", "scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)", "3.5 - Run the graph on an image\nLet the fun begin. You have created a (sess) graph that can be summarized as follows:\n\n<font color='purple'> yolo_model.input </font> is given to yolo_model. The model is used to compute the output <font color='purple'> yolo_model.output </font>\n<font color='purple'> yolo_model.output </font> is processed by yolo_head. It gives you <font color='purple'> yolo_outputs </font>\n<font color='purple'> yolo_outputs </font> goes through a filtering function, yolo_eval. It outputs your predictions: <font color='purple'> scores, boxes, classes </font>\n\nExercise: Implement predict() which runs the graph to test YOLO on an image.\nYou will need to run a TensorFlow session, to have it compute scores, boxes, classes.\nThe code below also uses the following function:\npython\nimage, image_data = preprocess_image(\"images/\" + image_file, model_image_size = (608, 608))\nwhich outputs:\n- image: a python (PIL) representation of your image used for drawing boxes. You won't need to use it.\n- image_data: a numpy-array representing the image. This will be the input to the CNN.\nImportant note: when a model uses BatchNorm (as is the case in YOLO), you will need to pass an additional placeholder in the feed_dict {K.learning_phase(): 0}.", "def predict(sess, image_file):\n \"\"\"\n Runs the graph stored in \"sess\" to predict boxes for \"image_file\". Prints and plots the preditions.\n \n Arguments:\n sess -- your tensorflow/Keras session containing the YOLO graph\n image_file -- name of an image stored in the \"images\" folder.\n \n Returns:\n out_scores -- tensor of shape (None, ), scores of the predicted boxes\n out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes\n out_classes -- tensor of shape (None, ), class index of the predicted boxes\n \n Note: \"None\" actually represents the number of predicted boxes, it varies between 0 and max_boxes. \n \"\"\"\n\n # Preprocess your image\n image, image_data = preprocess_image(\"images/\" + image_file, model_image_size = (608, 608))\n\n # Run the session with the correct tensors and choose the correct placeholders in the feed_dict.\n # You'll need to use feed_dict={yolo_model.input: ... 
, K.learning_phase(): 0})\n ### START CODE HERE ### (≈ 1 line)\n out_scores, out_boxes, out_classes = sess.run([scores, boxes, classes], feed_dict = {yolo_model.input: image_data, K.learning_phase(): 0} )\n ### END CODE HERE ###\n\n # Print predictions info\n print('Found {} boxes for {}'.format(len(out_boxes), image_file))\n # Generate colors for drawing bounding boxes.\n colors = generate_colors(class_names)\n # Draw bounding boxes on the image file\n draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)\n # Save the predicted bounding box on the image\n image.save(os.path.join(\"out\", image_file), quality=90)\n # Display the results in the notebook\n output_image = scipy.misc.imread(os.path.join(\"out\", image_file))\n imshow(output_image)\n \n return out_scores, out_boxes, out_classes", "Run the following cell on the \"test.jpg\" image to verify that your function is correct.", "out_scores, out_boxes, out_classes = predict(sess, \"test.jpg\")", "Expected Output:\n<table>\n <tr>\n <td>\n **Found 7 boxes for test.jpg**\n </td>\n </tr>\n <tr>\n <td>\n **car**\n </td>\n <td>\n 0.60 (925, 285) (1045, 374)\n </td>\n </tr>\n <tr>\n <td>\n **car**\n </td>\n <td>\n 0.66 (706, 279) (786, 350)\n </td>\n </tr>\n <tr>\n <td>\n **bus**\n </td>\n <td>\n 0.67 (5, 266) (220, 407)\n </td>\n </tr>\n <tr>\n <td>\n **car**\n </td>\n <td>\n 0.70 (947, 324) (1280, 705)\n </td>\n </tr>\n <tr>\n <td>\n **car**\n </td>\n <td>\n 0.74 (159, 303) (346, 440)\n </td>\n </tr>\n <tr>\n <td>\n **car**\n </td>\n <td>\n 0.80 (761, 282) (942, 412)\n </td>\n </tr>\n <tr>\n <td>\n **car**\n </td>\n <td>\n 0.89 (367, 300) (745, 648)\n </td>\n </tr>\n</table>\n\nThe model you've just run is actually able to detect 80 different classes listed in \"coco_classes.txt\". To test the model on your own images:\n 1. Click on \"File\" in the upper bar of this notebook, then click \"Open\" to go on your Coursera Hub.\n 2. Add your image to this Jupyter Notebook's directory, in the \"images\" folder\n 3. Write your image's name in the cell above code\n 4. Run the code and see the output of the algorithm!\nIf you were to run your session in a for loop over all your images. Here's what you would get:\n<center>\n<video width=\"400\" height=\"200\" src=\"nb_images/pred_video_compressed2.mp4\" type=\"video/mp4\" controls>\n</video>\n</center>\n<caption><center> Predictions of the YOLO model on pictures taken from a camera while driving around the Silicon Valley <br> Thanks drive.ai for providing this dataset! </center></caption>\n<font color='blue'>\nWhat you should remember:\n- YOLO is a state-of-the-art object detection model that is fast and accurate\n- It runs an input image through a CNN which outputs a 19x19x5x85 dimensional volume. \n- The encoding can be seen as a grid where each of the 19x19 cells contains information about 5 boxes.\n- You filter through all the boxes using non-max suppression. Specifically: \n - Score thresholding on the probability of detecting a class to keep only accurate (high probability) boxes\n - Intersection over Union (IoU) thresholding to eliminate overlapping boxes\n- Because training a YOLO model from randomly initialized weights is non-trivial and requires a large dataset as well as lot of computation, we used previously trained model parameters in this exercise. If you wish, you can also try fine-tuning the YOLO model with your own dataset, though this would be a fairly non-trivial exercise. 
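To make the score-thresholding and non-max suppression steps summarized above concrete outside of TensorFlow, here is an added, framework-free NumPy sketch of the same greedy loop (pick the highest-scoring box, drop boxes whose IoU with it exceeds the threshold, repeat). It is only an illustration, not the notebook's graded implementation, and `iou_np`/`nms_np` are made-up names.

```python
import numpy as np

def iou_np(box, boxes):
    """IoU of one (x1, y1, x2, y2) box against a (K, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms_np(scores, boxes, iou_threshold=0.5):
    order = np.argsort(scores)[::-1]             # highest score first
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(best)
        rest = order[1:]
        overlaps = iou_np(boxes[best], boxes[rest])
        order = rest[overlaps <= iou_threshold]  # drop heavily overlapping boxes
    return np.array(keep)

scores = np.array([0.9, 0.8, 0.3])
boxes = np.array([[0, 0, 2, 2], [0, 0, 2, 1.9], [3, 3, 4, 4]], dtype=float)
print(nms_np(scores, boxes))   # [0 2]: the second box overlaps the first too much
```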
\nReferences: The ideas presented in this notebook came primarily from the two YOLO papers. The implementation here also took significant inspiration and used many components from Allan Zelener's github repository. The pretrained weights used in this exercise came from the official YOLO website. \n- Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi - You Only Look Once: Unified, Real-Time Object Detection (2015)\n- Joseph Redmon, Ali Farhadi - YOLO9000: Better, Faster, Stronger (2016)\n- Allan Zelener - YAD2K: Yet Another Darknet 2 Keras\n- The official YOLO website (https://pjreddie.com/darknet/yolo/) \nCar detection dataset:\n<a rel=\"license\" href=\"http://creativecommons.org/licenses/by/4.0/\"><img alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https://i.creativecommons.org/l/by/4.0/88x31.png\" /></a><br /><span xmlns:dct=\"http://purl.org/dc/terms/\" property=\"dct:title\">The Drive.ai Sample Dataset</span> (provided by drive.ai) is licensed under a <a rel=\"license\" href=\"http://creativecommons.org/licenses/by/4.0/\">Creative Commons Attribution 4.0 International License</a>. We are especially grateful to Brody Huval, Chih Hu and Rahul Patel for collecting and providing this dataset." ]
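The score-thresholding and IoU filtering summarized in the notes above can also be sketched outside the notebook's TensorFlow graph. The snippet below is not the notebook's yolo_eval; it is a minimal NumPy illustration, assuming corner-format boxes (x1, y1, x2, y2) and hypothetical threshold values.

import numpy as np

def iou(box1, box2):
    # Intersection over Union of two (x1, y1, x2, y2) boxes.
    xi1, yi1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    xi2, yi2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0.0, xi2 - xi1) * max(0.0, yi2 - yi1)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)

def simple_nms(boxes, scores, score_threshold=0.6, iou_threshold=0.5):
    # Greedy non-max suppression: keep the highest-scoring surviving box,
    # then drop any remaining box that overlaps it by more than iou_threshold.
    order = [int(i) for i in np.argsort(scores)[::-1] if scores[i] >= score_threshold]
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

boxes = np.array([[100, 100, 200, 200], [110, 110, 210, 210], [300, 300, 400, 400]], dtype=float)
scores = np.array([0.9, 0.75, 0.8])
print(simple_nms(boxes, scores))  # [0, 2]: the second box overlaps the first and is suppressed

TensorFlow exposes the same operation as tf.image.non_max_suppression, which a graph-side implementation of this filtering would typically build on.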
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/docs
site/en/guide/checkpoint.ipynb
apache-2.0
[ "Copyright 2018 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Training checkpoints\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/guide/checkpoint\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/checkpoint.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/guide/checkpoint.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/checkpoint.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nThe phrase \"Saving a TensorFlow model\" typically means one of two things:\n\nCheckpoints, OR \nSavedModel.\n\nCheckpoints capture the exact value of all parameters (tf.Variable objects) used by a model. Checkpoints do not contain any description of the computation defined by the model and thus are typically only useful when source code that will use the saved parameter values is available.\nThe SavedModel format on the other hand includes a serialized description of the computation defined by the model in addition to the parameter values (checkpoint). Models in this format are independent of the source code that created the model. They are thus suitable for deployment via TensorFlow Serving, TensorFlow Lite, TensorFlow.js, or programs in other programming languages (the C, C++, Java, Go, Rust, C# etc. TensorFlow APIs).\nThis guide covers APIs for writing and reading checkpoints.\nSetup", "import tensorflow as tf\n\nclass Net(tf.keras.Model):\n \"\"\"A simple linear model.\"\"\"\n\n def __init__(self):\n super(Net, self).__init__()\n self.l1 = tf.keras.layers.Dense(5)\n\n def call(self, x):\n return self.l1(x)\n\nnet = Net()", "Saving from tf.keras training APIs\nSee the tf.keras guide on saving and\nrestoring.\ntf.keras.Model.save_weights saves a TensorFlow checkpoint.", "net.save_weights('easy_checkpoint')", "Writing checkpoints\nThe persistent state of a TensorFlow model is stored in tf.Variable objects. These can be constructed directly, but are often created through high-level APIs like tf.keras.layers or tf.keras.Model.\nThe easiest way to manage variables is by attaching them to Python objects, then referencing those objects. \nSubclasses of tf.train.Checkpoint, tf.keras.layers.Layer, and tf.keras.Model automatically track variables assigned to their attributes. 
The following example constructs a simple linear model, then writes checkpoints which contain values for all of the model's variables.\nYou can easily save a model-checkpoint with Model.save_weights.\nManual checkpointing\nSetup\nTo help demonstrate all the features of tf.train.Checkpoint, define a toy dataset and optimization step:", "def toy_dataset():\n inputs = tf.range(10.)[:, None]\n labels = inputs * 5. + tf.range(5.)[None, :]\n return tf.data.Dataset.from_tensor_slices(\n dict(x=inputs, y=labels)).repeat().batch(2)\n\ndef train_step(net, example, optimizer):\n \"\"\"Trains `net` on `example` using `optimizer`.\"\"\"\n with tf.GradientTape() as tape:\n output = net(example['x'])\n loss = tf.reduce_mean(tf.abs(output - example['y']))\n variables = net.trainable_variables\n gradients = tape.gradient(loss, variables)\n optimizer.apply_gradients(zip(gradients, variables))\n return loss", "Create the checkpoint objects\nUse a tf.train.Checkpoint object to manually create a checkpoint, where the objects you want to checkpoint are set as attributes on the object.\nA tf.train.CheckpointManager can also be helpful for managing multiple checkpoints.", "opt = tf.keras.optimizers.Adam(0.1)\ndataset = toy_dataset()\niterator = iter(dataset)\nckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator)\nmanager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)", "Train and checkpoint the model\nThe following training loop creates an instance of the model and of an optimizer, then gathers them into a tf.train.Checkpoint object. It calls the training step in a loop on each batch of data, and periodically writes checkpoints to disk.", "def train_and_checkpoint(net, manager):\n ckpt.restore(manager.latest_checkpoint)\n if manager.latest_checkpoint:\n print(\"Restored from {}\".format(manager.latest_checkpoint))\n else:\n print(\"Initializing from scratch.\")\n\n for _ in range(50):\n example = next(iterator)\n loss = train_step(net, example, opt)\n ckpt.step.assign_add(1)\n if int(ckpt.step) % 10 == 0:\n save_path = manager.save()\n print(\"Saved checkpoint for step {}: {}\".format(int(ckpt.step), save_path))\n print(\"loss {:1.2f}\".format(loss.numpy()))\n\ntrain_and_checkpoint(net, manager)", "Restore and continue training\nAfter the first training cycle you can pass a new model and manager, but pick up training exactly where you left off:", "opt = tf.keras.optimizers.Adam(0.1)\nnet = Net()\ndataset = toy_dataset()\niterator = iter(dataset)\nckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator)\nmanager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)\n\ntrain_and_checkpoint(net, manager)", "The tf.train.CheckpointManager object deletes old checkpoints. Above it's configured to keep only the three most recent checkpoints.", "print(manager.checkpoints) # List the three remaining checkpoints", "These paths, e.g. './tf_ckpts/ckpt-10', are not files on disk. Instead they are prefixes for an index file and one or more data files which contain the variable values. These prefixes are grouped together in a single checkpoint file ('./tf_ckpts/checkpoint') where the CheckpointManager saves its state.", "!ls ./tf_ckpts", "<a id=\"loading_mechanics\"/>\nLoading mechanics\nTensorFlow matches variables to checkpointed values by traversing a directed graph with named edges, starting from the object being loaded. 
Edge names typically come from attribute names in objects, for example the \"l1\" in self.l1 = tf.keras.layers.Dense(5). tf.train.Checkpoint uses its keyword argument names, as in the \"step\" in tf.train.Checkpoint(step=...).\nThe dependency graph from the example above looks like this:\n\nThe optimizer is in red, regular variables are in blue, and the optimizer slot variables are in orange. The other nodes—for example, representing the tf.train.Checkpoint—are in black.\nSlot variables are part of the optimizer's state, but are created for a specific variable. For example, the 'm' edges above correspond to momentum, which the Adam optimizer tracks for each variable. Slot variables are only saved in a checkpoint if the variable and the optimizer would both be saved, thus the dashed edges.\nCalling restore on a tf.train.Checkpoint object queues the requested restorations, restoring variable values as soon as there's a matching path from the Checkpoint object. For example, you can load just the bias from the model you defined above by reconstructing one path to it through the network and the layer.", "to_restore = tf.Variable(tf.zeros([5]))\nprint(to_restore.numpy()) # All zeros\nfake_layer = tf.train.Checkpoint(bias=to_restore)\nfake_net = tf.train.Checkpoint(l1=fake_layer)\nnew_root = tf.train.Checkpoint(net=fake_net)\nstatus = new_root.restore(tf.train.latest_checkpoint('./tf_ckpts/'))\nprint(to_restore.numpy()) # This gets the restored value.", "The dependency graph for these new objects is a much smaller subgraph of the larger checkpoint you wrote above. It includes only the bias and a save counter that tf.train.Checkpoint uses to number checkpoints.\n\nrestore returns a status object, which has optional assertions. All of the objects created in the new Checkpoint have been restored, so status.assert_existing_objects_matched passes.", "status.assert_existing_objects_matched()", "There are many objects in the checkpoint which haven't matched, including the layer's kernel and the optimizer's variables. status.assert_consumed only passes if the checkpoint and the program match exactly, and would throw an exception here.\nDeferred restorations\nLayer objects in TensorFlow may defer the creation of variables to their first call, when input shapes are available. For example, the shape of a Dense layer's kernel depends on both the layer's input and output shapes, and so the output shape required as a constructor argument is not enough information to create the variable on its own. Since calling a Layer also reads the variable's value, a restore must happen between the variable's creation and its first use.\nTo support this idiom, tf.train.Checkpoint defers restores which don't yet have a matching variable.", "deferred_restore = tf.Variable(tf.zeros([1, 5]))\nprint(deferred_restore.numpy()) # Not restored; still zeros\nfake_layer.kernel = deferred_restore\nprint(deferred_restore.numpy()) # Restored", "Manually inspecting checkpoints\ntf.train.load_checkpoint returns a CheckpointReader that gives lower level access to the checkpoint contents. It contains mappings from each variable's key, to the shape and dtype for each variable in the checkpoint. A variable's key is its object path, like in the graphs displayed above.\nNote: There is no higher level structure to the checkpoint. 
It only knows the paths and values for the variables, and has no concept of models, layers or how they are connected.", "reader = tf.train.load_checkpoint('./tf_ckpts/')\nshape_from_key = reader.get_variable_to_shape_map()\ndtype_from_key = reader.get_variable_to_dtype_map()\n\nsorted(shape_from_key.keys())", "So if you're interested in the value of net.l1.kernel you can get the value with the following code:", "key = 'net/l1/kernel/.ATTRIBUTES/VARIABLE_VALUE'\n\nprint(\"Shape:\", shape_from_key[key])\nprint(\"Dtype:\", dtype_from_key[key].name)", "It also provides a get_tensor method allowing you to inspect the value of a variable:", "reader.get_tensor(key)", "Object tracking\nCheckpoints save and restore the values of tf.Variable objects by \"tracking\" any variable or trackable object set in one of its attributes. When executing a save, variables are gathered recursively from all of the reachable tracked objects.\nAs with direct attribute assignments like self.l1 = tf.keras.layers.Dense(5), assigning lists and dictionaries to attributes will track their contents.", "save = tf.train.Checkpoint()\nsave.listed = [tf.Variable(1.)]\nsave.listed.append(tf.Variable(2.))\nsave.mapped = {'one': save.listed[0]}\nsave.mapped['two'] = save.listed[1]\nsave_path = save.save('./tf_list_example')\n\nrestore = tf.train.Checkpoint()\nv2 = tf.Variable(0.)\nassert 0. == v2.numpy() # Not restored yet\nrestore.mapped = {'two': v2}\nrestore.restore(save_path)\nassert 2. == v2.numpy()", "You may notice wrapper objects for lists and dictionaries. These wrappers are checkpointable versions of the underlying data-structures. Just like the attribute-based loading, these wrappers restore a variable's value as soon as it's added to the container.", "restore.listed = []\nprint(restore.listed) # ListWrapper([])\nv1 = tf.Variable(0.)\nrestore.listed.append(v1) # Restores v1, from restore() in the previous cell\nassert 1. == v1.numpy()", "Trackable objects include tf.train.Checkpoint, tf.Module and its subclasses (e.g. keras.layers.Layer and keras.Model), and recognized Python containers:\n\ndict (and collections.OrderedDict)\nlist\ntuple (and collections.namedtuple, typing.NamedTuple)\n\nOther container types are not supported, including:\n\ncollections.defaultdict\nset\n\nAll other Python objects are ignored, including:\n\nint\nstring\nfloat\n\nSummary\nTensorFlow objects provide an easy automatic mechanism for saving and restoring the values of variables they use." ]
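To make the checkpoint/SavedModel distinction from the start of this guide concrete, here is a minimal sketch (not part of the original notebook); the Linear module and the /tmp paths are illustrative assumptions.

import tensorflow as tf

class Linear(tf.Module):
    # A tiny module with two tracked variables and a traced __call__.
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.random.normal([3, 1]), name='w')
        self.b = tf.Variable(tf.zeros([1]), name='b')

    @tf.function(input_signature=[tf.TensorSpec([None, 3], tf.float32)])
    def __call__(self, x):
        return tf.matmul(x, self.w) + self.b

model = Linear()

# A checkpoint stores only the variable values; restoring it requires this
# Python code to rebuild the computation.
ckpt_path = tf.train.Checkpoint(model=model).save('/tmp/linear_demo/ckpt')

# A SavedModel additionally serializes the traced computation, so it can be
# loaded and served without the original source code.
tf.saved_model.save(model, '/tmp/linear_demo/saved_model')

# Restore the checkpoint into a freshly constructed object with the same structure.
restored = Linear()
tf.train.Checkpoint(model=restored).restore(ckpt_path)

Calling tf.saved_model.load('/tmp/linear_demo/saved_model') would then return a callable object even in a program that never defines Linear, which is the property that makes the SavedModel format suitable for deployment.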
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/miroc/cmip6/models/miroc-es2l/land.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: MIROC\nSource ID: MIROC-ES2L\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-20 15:02:40\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'miroc', 'miroc-es2l', 'land')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Conservation Properties\n3. Key Properties --&gt; Timestepping Framework\n4. Key Properties --&gt; Software Properties\n5. Grid\n6. Grid --&gt; Horizontal\n7. Grid --&gt; Vertical\n8. Soil\n9. Soil --&gt; Soil Map\n10. Soil --&gt; Snow Free Albedo\n11. Soil --&gt; Hydrology\n12. Soil --&gt; Hydrology --&gt; Freezing\n13. Soil --&gt; Hydrology --&gt; Drainage\n14. Soil --&gt; Heat Treatment\n15. Snow\n16. Snow --&gt; Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --&gt; Vegetation\n21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\n22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\n23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\n24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\n25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\n26. Carbon Cycle --&gt; Litter\n27. Carbon Cycle --&gt; Soil\n28. Carbon Cycle --&gt; Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --&gt; Oceanic Discharge\n32. Lakes\n33. Lakes --&gt; Method\n34. Lakes --&gt; Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nFluxes exchanged with the atmopshere.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. Atmospheric Coupling Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Land Cover\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTypes of land cover defined in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.7. Land Cover Change\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Tiling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Water\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. 
Key Properties --&gt; Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Timestepping Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Total Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe total depth of the soil (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of soil in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Heat Water Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the coupling between heat and water in the soil", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Number Of Soil layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the soil scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Soil --&gt; Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of soil map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil structure map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Texture\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil texture map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. 
Organic Matter\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil organic matter map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Albedo\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil albedo map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.6. Water Table\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil water table map, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.7. Continuously Varying Soil Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDo the soil properties vary continuously with depth?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.8. Soil Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil depth map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Soil --&gt; Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow free albedo prognostic?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "10.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, describe the dependencies on snow free albedo calculations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Direct Diffuse\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.4. Number Of Wavelength Bands\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11. Soil --&gt; Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the soil hydrological model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river soil hydrology in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Number Of Ground Water Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers that may contain water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.6. Lateral Connectivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe the lateral connectivity between tiles", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.7. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Soil --&gt; Hydrology --&gt; Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow many soil layers may contain ground ice", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.2. 
Ice Storage Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of ice storage", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.3. Permafrost\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Soil --&gt; Hydrology --&gt; Drainage\nTODO\n13.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of how drainage is included in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDifferent types of runoff represented by the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Soil --&gt; Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of how heat treatment properties are defined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of soil heat scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.5. Heat Storage\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the method of heat storage", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.6. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe processes included in the treatment of soil heat", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of snow in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Number Of Snow Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow density", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Water Equivalent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the snow water equivalent", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.6. Heat Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the heat content of snow", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.7. Temperature\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow temperature", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.8. Liquid Water Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow liquid water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.9. Snow Cover Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.10. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSnow related processes in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.11. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Snow --&gt; Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\n*If prognostic, *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vegetation in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.2. 
Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of vegetation scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Dynamic Vegetation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there dynamic evolution of vegetation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.4. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vegetation tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.5. Vegetation Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nVegetation classification used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.6. Vegetation Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of vegetation types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.7. Biome Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of biome types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.8. Vegetation Time Variation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.9. 
Vegetation Map\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.10. Interception\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs vegetation interception of rainwater represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.11. Phenology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.12. Phenology Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.13. Leaf Area Index\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.14. Leaf Area Index Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.15. Biomass\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Treatment of vegetation biomass *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.16. Biomass Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.17. Biogeography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.18. Biogeography Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.19. Stomatal Resistance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.20. Stomatal Resistance Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.21. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the vegetation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of energy balance in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the energy balance tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. Number Of Surface Temperatures\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.4. Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of carbon cycle in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of carbon cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Anthropogenic Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDescribe the treatment of the anthropogenic carbon pool", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the carbon scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Carbon Cycle --&gt; Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "20.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.3. 
Forest Stand Dynamics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of forest stand dyanmics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\nTODO\n22.1. Maintainance Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for maintainence respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Growth Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for growth respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\nTODO\n23.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the allocation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.2. Allocation Bins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify distinct carbon bins used in allocation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Allocation Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the fractions of allocation are calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\nTODO\n24.1. 
Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the phenology scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\nTODO\n25.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the mortality scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Carbon Cycle --&gt; Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Carbon Cycle --&gt; Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Carbon Cycle --&gt; Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs permafrost included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.2. Emitted Greenhouse Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the GHGs emitted", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.4. Impact On Soil Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the impact of permafrost on soil properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the notrogen cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of nitrogen cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "29.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of river routing in the land surface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the river routing, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river routing scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Grid Inherited From Land Surface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the grid inherited from land surface?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.5. Grid Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.6. Number Of Reservoirs\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of reservoirs", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.7. Water Re Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTODO", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.8. Coupled To Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.9. Coupled To Land\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the coupling between land and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.11. Basin Flow Direction Map\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of basin flow direction map is being used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.12. Flooding\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the representation of flooding, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.13. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the river routing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. River Routing --&gt; Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify how rivers are discharged to the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Quantities Transported\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lakes in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Coupling With Rivers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre lakes coupled to the river routing model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of lake scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "32.4. Quantities Exchanged With Rivers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Vertical Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vertical grid of lakes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the lake scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33. Lakes --&gt; Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs lake ice included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.2. Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of lake albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.3. Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.4. Dynamic Lake Extent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a dynamic lake extent scheme included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.5. Endorheic Basins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasins not flowing to ocean included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "34. Lakes --&gt; Wetlands\nTODO\n34.1. 
Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of wetlands, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
yaoxx151/UCSB_Boot_Camp_copy
Day02_EverythingData/notebooks/05 - Theory and Practice.ipynb
cc0-1.0
[ "Demo of the increased efficiency of vectorized forms\nClosest 2D point location:\nhttp://nbviewer.ipython.org/github/rossant/ipython-minibook/blob/master/chapter3/301-vector-computations.ipynb\nBubble sort\nbubble sort more\nSebastian Raschka\nlast updated: 05/28/2014\n\nOpen in IPython nbviewer \nLink to this IPython notebook on Github \nLink to the GitHub Repository One-Python-benchmark-per-day\n\n<hr>\nI would be happy to hear your comments and suggestions.\nPlease feel free to drop me a note via\ntwitter, email, or google+.\n<hr>\n\nDay 4.2 - One Python Benchmark per Day\nComparing (C)Python compilers - Cython vs. Numba vs. Parakeet on Bubblesort\n<br>\n-> skip to the results\n<br>\n<br>\n<br>\nI made some significant changes to the previous Day 4 benchmark, thus I decided to make it a separate article:\n- added the parakeet compiler\n- improved the Bubblesort algorithm to avoid comparing already-sorted pairs\n- improved the Cython implementation (avoiding redundant conversion from memory view to array object)\n- focussed on only the optimized (\"best\") Cython and Numba implementations of Bubblesort\n- used Python 2.7.6 instead of Python 3.4.0 (because of parakeet) \n<br>\n<br>\nQuick note about Bubblesort\nI don't want to get into the details about sorting algorithms here, but there is a great report\n\"Sorting in the Presence of Branch Prediction and Caches - Fast Sorting on Modern Computers\" written by Paul Biggar and David Gregg, where they describe and analyze elementary sorting algorithms in very nice detail (see chapter 4). \nAnd for a quick reference, this website has a nice animation of this algorithm.\nA long story short: The \"worst-case\" complexity of the Bubblesort algorithm (i.e., \"Big-O\")\n $\\Rightarrow \\pmb O(n^2)$\n<br>\n<br>\nBubble sort implemented in (C)Python", "def python_bubblesort(a_list):\n \"\"\" Bubblesort in Python for list objects. \"\"\"\n length = len(a_list)\n swapped = 1\n for i in xrange(0, length):\n if swapped: \n swapped = 0\n for ele in xrange(0, length-i-1):\n if a_list[ele] > a_list[ele + 1]:\n temp = a_list[ele + 1]\n a_list[ele + 1] = a_list[ele]\n a_list[ele] = temp\n swapped = 1\n return a_list\n\ndef python_bubblesort_ary(np_ary):\n \"\"\" Bubblesort in Python for NumPy arrays. \"\"\"\n length = np_ary.shape[0]\n swapped = 1\n for i in xrange(0, length):\n if swapped: \n swapped = 0\n for ele in xrange(0, length-i-1):\n if np_ary[ele] > np_ary[ele + 1]:\n temp = np_ary[ele + 1]\n np_ary[ele + 1] = np_ary[ele]\n np_ary[ele] = temp\n swapped = 1\n return np_ary\n\na = np.array[1,2,3,4,5])", "<br>\n<br>\nBubble sort implemented in Cython\nMaybe we can speed things up a little bit via Cython's C-extensions for Python. Cython is basically a hybrid between C and Python and can be pictured as compiled Python code with type declarations.\nSince we are working in an IPython notebook here, we can make use of the very convenient IPython magic: It will take care of the conversion to C code, the compilation, and eventually the loading of the function. 
\nNote that the static type declarations that we add via cdef are not required for Cython to work, but they will speed things up tremendously.", "%load_ext cythonmagic\n\n%%cython\nimport numpy as np\ncimport numpy as np\ncimport cython\n@cython.boundscheck(False) \n@cython.wraparound(False)\ncpdef cython_bubblesort(inp_ary):\n \"\"\" The Cython implementation of Bubblesort with NumPy memoryview.\"\"\"\n cdef unsigned long length, i, swapped, ele, temp\n cdef long[:] np_ary = inp_ary\n length = np_ary.shape[0]\n swapped = 1\n for i in xrange(0, length):\n if swapped: \n swapped = 0\n for ele in xrange(0, length-i-1):\n if np_ary[ele] > np_ary[ele + 1]:\n temp = np_ary[ele + 1]\n np_ary[ele + 1] = np_ary[ele]\n np_ary[ele] = temp\n swapped = 1\n return inp_ary", "<br>\n<br>\nBubble sort implemented in Numba\nNumba is using the LLVM compiler infrastructure for compiling Python code to machine code. Its strength is to work with NumPy arrays to speed-up the code. If you want to read more about Numba, please refer to the original website and documentation.", "from numba import jit as numba_jit\n@numba_jit\ndef numba_bubblesort(np_ary):\n \"\"\" The Numba implementation of Bubblesort on NumPy arrays.\"\"\"\n length = np_ary.shape[0]\n swapped = 1\n for i in xrange(0, length):\n if swapped: \n swapped = 0\n for ele in xrange(0, length-i-1):\n if np_ary[ele] > np_ary[ele + 1]:\n temp = np_ary[ele + 1]\n np_ary[ele + 1] = np_ary[ele]\n np_ary[ele] = temp\n swapped = 1\n return np_ary", "<br>\n<br>\nBubble sort implemented in parakeet\nSimilar to Numba, parakeet is a Python compiler that optimizes the runtime of numerical computations based on the NumPy data types, such as NumPy arrays.\nThe usage is also similar to Numba where we just have to put the jit decorator on top of the function we want to optimize.", "from parakeet import jit as para_jit\n@para_jit\ndef parakeet_bubblesort(np_ary):\n \"\"\" The parakeet implementation of Bubblesort on NumPy arrays.\"\"\"\n length = np_ary.shape[0]\n swapped = 1\n for i in xrange(0, length):\n if swapped: \n swapped = 0\n for ele in xrange(0, length-i-1):\n if np_ary[ele] > np_ary[ele + 1]:\n temp = np_ary[ele + 1]\n np_ary[ele + 1] = np_ary[ele]\n np_ary[ele] = temp\n swapped = 1\n return np_ary", "<br>\n<br>\nVerifying that all implementations work correctly", "import random\nimport copy\nimport numpy as np\nrandom.seed(4354353)\nprint \"my number is\", random.randint(1,1000)\nl = np.asarray([random.randint(1,1000) for num in xrange(1, 1000)])\n#print l\nl_sorted = np.sort(l)\nfor f in [python_bubblesort, python_bubblesort_ary, cython_bubblesort, \n numba_bubblesort, parakeet_bubblesort]:\n assert(l_sorted.all() == f(copy.copy(l)).all())\nprint('Bubblesort works correctly')\n", "<br>\n<br>\nTiming", "import timeit\nimport copy\nimport numpy as np\n\nfuncs = ['python_bubblesort',\n 'python_bubblesort_ary',\n 'cython_bubblesort',\n 'numba_bubblesort',\n 'parakeet_bubblesort'\n ]\n\norders_n = [10**n for n in range(1, 6)]\ntimings = {f:[] for f in funcs}\n\nfor n in orders_n:\n l = [np.random.randint(n) for num in range(n)]\n for f in funcs:\n l_copy = copy.deepcopy(l)\n if f != 'python_bubblesort':\n l_copy = np.asarray(l_copy)\n timings[f].append(min(timeit.Timer('%s(l_copy)' %f, \n 'from __main__ import %s, l_copy' %f)\n .repeat(repeat=3, number=10)))", "<br>\n<br>\nSetting up the plots", "import platform\nimport multiprocessing\nfrom cython import __version__ as cython__version__\nfrom llvm import __version__ as llvm__version__\nfrom numba import __version__ as numba__version__\nfrom parakeet 
import __version__ as parakeet__version__\n\ndef print_sysinfo():\n \n print '\\nPython version :', platform.python_version()\n print 'compiler :', platform.python_compiler()\n print 'Cython version :', cython__version__\n print 'NumPy version :', np.__version__\n print 'Numba version :', numba__version__\n print 'llvm version :', llvm__version__\n print 'parakeet version:', parakeet__version__\n \n print '\\nsystem :', platform.system()\n print 'release :', platform.release()\n print 'machine :', platform.machine()\n print 'processor :', platform.processor()\n print 'CPU count :', multiprocessing.cpu_count()\n print 'interpreter:', platform.architecture()[0]\n print '\\n\\n'\n\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\n\ndef plot(timings, title, ranked_labels, labels, orders_n):\n plt.rcParams.update({'font.size': 12})\n\n fig = plt.figure(figsize=(11,10))\n for lb in ranked_labels:\n plt.plot(orders_n, timings[lb], alpha=0.5, label=labels[lb], \n marker='o', lw=3)\n plt.xlabel('sample size n (items in the list)')\n plt.ylabel('time per computation in milliseconds')\n plt.xlim([min(orders_n) / 10, max(orders_n)* 10])\n plt.legend(loc=2)\n plt.grid()\n plt.xscale('log')\n plt.yscale('log')\n plt.title(title)\n plt.show()\n\nimport prettytable\n\ndef summary_table(ranked_labels):\n fit_table = prettytable.PrettyTable(['n=%s' %orders_n[-1], \n 'bubblesort function' ,\n 'time in millisec.',\n 'rel. performance gain'])\n fit_table.align['bubblesort function'] = 'l'\n for entry in ranked_labels:\n fit_table.add_row(['', labels[entry[1]], round(entry[0]*100, 3), \n round(ranked_labels[0][0]/entry[0], 2)])\n # times 100 for converting from seconds to milliseconds: (time*1000 / 10-loops)\n print(fit_table)", "<br>\n<br>\nResults", "title = 'Performance of Bubblesort in Python, Cython, parakeet, and Numba'\n\nlabels = {'python_bubblesort':'(C)Python Bubblesort - Python lists', \n 'python_bubblesort_ary':'(C)Python Bubblesort - NumPy arrays', \n 'cython_bubblesort': 'Cython Bubblesort - NumPy arrays',\n 'numba_bubblesort': 'Numba Bubblesort - NumPy arrays',\n 'parakeet_bubblesort': 'parakeet Bubblesort - NumPy arrays'\n }\n\nranked_by_time = sorted([(time[1][-1],time[0]) for time in timings.items()], reverse=True)\n\nprint_sysinfo()\nplot(timings, title, [l for t,l in ranked_by_time], labels, orders_n)\nsummary_table(ranked_by_time)", "Note that the relative results also depend on what version of Python, Cython, Numba, parakeet, and NumPy you are using. Also, the compiler choice for installing NumPy can account for differences in the results. \n<br>\n<br>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
scikit-optimize/scikit-optimize.github.io
dev/notebooks/auto_examples/exploration-vs-exploitation.ipynb
bsd-3-clause
[ "%matplotlib inline", "Exploration vs exploitation\nSigurd Carlen, September 2019.\nReformatted by Holger Nahrstaedt 2020\n.. currentmodule:: skopt\nWe can control how much the acqusition function favors exploration and\nexploitation by tweaking the two parameters kappa and xi. Higher values\nmeans more exploration and less exploitation and vice versa with low values.\nkappa is only used if acq_func is set to \"LCB\". xi is used when acq_func is\n\"EI\" or \"PI\". By default the acqusition function is set to \"gp_hedge\" which\nchooses the best of these three. Therefore I recommend not using gp_hedge\nwhen tweaking exploration/exploitation, but instead choosing \"LCB\",\n\"EI\" or \"PI\".\nThe way to pass kappa and xi to the optimizer is to use the named argument\n\"acq_func_kwargs\". This is a dict of extra arguments for the aqcuisition\nfunction.\nIf you want opt.ask() to give a new acquisition value immediately after\ntweaking kappa or xi call opt.update_next(). This ensures that the next\nvalue is updated with the new acquisition parameters.\nThis example uses :class:plots.plot_gaussian_process which is available\nsince version 0.8.", "print(__doc__)\n\nimport numpy as np\nnp.random.seed(1234)\nimport matplotlib.pyplot as plt\nfrom skopt.learning import ExtraTreesRegressor\nfrom skopt import Optimizer\nfrom skopt.plots import plot_gaussian_process", "Toy example\nFirst we define our objective like in the ask-and-tell example notebook and\ndefine a plotting function. We do however only use on initial random point.\nAll points after the first one is therefore chosen by the acquisition\nfunction.", "noise_level = 0.1\n\n\n# Our 1D toy problem, this is the function we are trying to\n# minimize\ndef objective(x, noise_level=noise_level):\n return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2)) +\\\n np.random.randn() * noise_level\n\n\ndef objective_wo_noise(x):\n return objective(x, noise_level=0)\n\nopt = Optimizer([(-2.0, 2.0)], \"GP\", n_initial_points=3,\n acq_optimizer=\"sampling\")", "Plotting parameters", "plot_args = {\"objective\": objective_wo_noise,\n \"noise_level\": noise_level, \"show_legend\": True,\n \"show_title\": True, \"show_next_point\": False,\n \"show_acq_func\": True}", "We run a an optimization loop with standard settings", "for i in range(30):\n next_x = opt.ask()\n f_val = objective(next_x)\n opt.tell(next_x, f_val)\n# The same output could be created with opt.run(objective, n_iter=30)\n_ = plot_gaussian_process(opt.get_result(), **plot_args)", "We see that some minima is found and \"exploited\"\nNow lets try to set kappa and xi using'to other values and\npass it to the optimizer:", "acq_func_kwargs = {\"xi\": 10000, \"kappa\": 10000}\n\nopt = Optimizer([(-2.0, 2.0)], \"GP\", n_initial_points=3,\n acq_optimizer=\"sampling\",\n acq_func_kwargs=acq_func_kwargs)\n\nopt.run(objective, n_iter=20)\n_ = plot_gaussian_process(opt.get_result(), **plot_args)", "We see that the points are more random now.\nThis works both for kappa when using acq_func=\"LCB\":", "opt = Optimizer([(-2.0, 2.0)], \"GP\", n_initial_points=3,\n acq_func=\"LCB\", acq_optimizer=\"sampling\",\n acq_func_kwargs=acq_func_kwargs)\n\nopt.run(objective, n_iter=20)\n_ = plot_gaussian_process(opt.get_result(), **plot_args)", "And for xi when using acq_func=\"EI\": or acq_func=\"PI\":", "opt = Optimizer([(-2.0, 2.0)], \"GP\", n_initial_points=3,\n acq_func=\"PI\", acq_optimizer=\"sampling\",\n acq_func_kwargs=acq_func_kwargs)\n\nopt.run(objective, n_iter=20)\n_ = plot_gaussian_process(opt.get_result(), 
**plot_args)", "We can also favor exploitaton:", "acq_func_kwargs = {\"xi\": 0.000001, \"kappa\": 0.001}\n\nopt = Optimizer([(-2.0, 2.0)], \"GP\", n_initial_points=3,\n acq_func=\"LCB\", acq_optimizer=\"sampling\",\n acq_func_kwargs=acq_func_kwargs)\n\nopt.run(objective, n_iter=20)\n_ = plot_gaussian_process(opt.get_result(), **plot_args)\n\nopt = Optimizer([(-2.0, 2.0)], \"GP\", n_initial_points=3,\n acq_func=\"EI\", acq_optimizer=\"sampling\",\n acq_func_kwargs=acq_func_kwargs)\n\nopt.run(objective, n_iter=20)\n_ = plot_gaussian_process(opt.get_result(), **plot_args)\n\nopt = Optimizer([(-2.0, 2.0)], \"GP\", n_initial_points=3,\n acq_func=\"PI\", acq_optimizer=\"sampling\",\n acq_func_kwargs=acq_func_kwargs)\n\nopt.run(objective, n_iter=20)\n_ = plot_gaussian_process(opt.get_result(), **plot_args)", "Note that negative values does not work with the \"PI\"-acquisition function\nbut works with \"EI\":", "acq_func_kwargs = {\"xi\": -1000000000000}\n\nopt = Optimizer([(-2.0, 2.0)], \"GP\", n_initial_points=3,\n acq_func=\"PI\", acq_optimizer=\"sampling\",\n acq_func_kwargs=acq_func_kwargs)\n\nopt.run(objective, n_iter=20)\n_ = plot_gaussian_process(opt.get_result(), **plot_args)\n\nopt = Optimizer([(-2.0, 2.0)], \"GP\", n_initial_points=3,\n acq_func=\"EI\", acq_optimizer=\"sampling\",\n acq_func_kwargs=acq_func_kwargs)\n\nopt.run(objective, n_iter=20)\n_ = plot_gaussian_process(opt.get_result(), **plot_args)", "Changing kappa and xi on the go\nIf we want to change kappa or ki at any point during our optimization\nprocess we just replace opt.acq_func_kwargs. Remember to call\nopt.update_next() after the change, in order for next point to be\nrecalculated.", "acq_func_kwargs = {\"kappa\": 0}\n\nopt = Optimizer([(-2.0, 2.0)], \"GP\", n_initial_points=3,\n acq_func=\"LCB\", acq_optimizer=\"sampling\",\n acq_func_kwargs=acq_func_kwargs)\n\nopt.acq_func_kwargs\n\nopt.run(objective, n_iter=20)\n_ = plot_gaussian_process(opt.get_result(), **plot_args)\n\nacq_func_kwargs = {\"kappa\": 100000}\n\nopt.acq_func_kwargs = acq_func_kwargs\nopt.update_next()\n\nopt.run(objective, n_iter=20)\n_ = plot_gaussian_process(opt.get_result(), **plot_args)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
scruwys/and-the-award-goes-to
notebooks/analysis.ipynb
mit
[ "Analysis of Oscar-nominated Films", "import re\nimport numpy as np\nimport pandas as pd\nimport scipy.stats as stats\n\npd.set_option('display.float_format', lambda x: '%.3f' % x)\n\n%matplotlib inline\n\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport seaborn as sb\n\nsb.set(color_codes=True)\nsb.set_palette(\"muted\")\n\nnp.random.seed(sum(map(ord, \"regression\")))\n\nawards = pd.read_csv('../data/nominations.csv')\noscars = pd.read_csv('../data/analysis.csv')", "Descriptive Analysis\nTo better understand general trends in the data. This is a work in progress. last updated on: February 26, 2017\nSeasonality\nIt is well known that movies gunning for an Academy Award aim to be released between December and February, two months before the award ceremony. This is pretty evident looking at a distribution of film release months:", "sb.countplot(x=\"release_month\", data=oscars)", "This can be more or less confirmed by calculating the Pearson correlation coefficient, which measures the linear dependence between two variables:", "def print_pearsonr(data, dependent, independent):\n for field in independent:\n coeff = stats.pearsonr(data[dependent], data[field])\n print \"{0} | coeff: {1} | p-value: {2}\".format(field, coeff[0], coeff[1])\n\nprint_pearsonr(oscars, 'Oscar', ['q1_release', 'q2_release', 'q3_release', 'q4_release'])", "Q1 and Q4 have a higher coefficient than Q2 and Q3, so that points in the right direction...\nThis won't really help us determine who will win the actual Oscar, but at least we know that if we want a shot, we need to be releasing in late Q4 and early Q1.\nProfitability\nHow do the financial details contribute to Oscar success?", "# In case we want to examine the data based on the release decade...\noscars['decade'] = oscars['year'].apply(lambda y: str(y)[2] + \"0\")\n\n# Adding some fields to slice and dice...\nprofit = oscars[~oscars['budget'].isnull()]\nprofit = profit[~profit['box_office'].isnull()]\n\nprofit['profit'] = profit['box_office'] - profit['budget']\nprofit['margin'] = profit['profit'] / profit['box_office']", "Profitability by Award Category\nSince 1980, the profitability for films which won an Oscar were on average higher than all films nominated that year.", "avg_margin_for_all = profit.groupby(['category'])['margin'].mean()\navg_margin_for_win = profit[profit['Oscar'] == 1].groupby(['category'])['margin'].mean()\n\nfig, ax = plt.subplots()\n\nindex = np.arange(len(profit['category'].unique()))\n\nrects1 = plt.bar(index, avg_margin_for_win, 0.45, color='r', label='Won')\nrects2 = plt.bar(index, avg_margin_for_all, 0.45, color='b', label='All')\n\nplt.xlabel('Award Category')\nax.set_xticklabels(profit['category'].unique(), rotation='vertical')\n\nplt.ylabel('Profit Margin (%)')\nplt.title('Average Profit Margin by Award Category')\nplt.legend()\nplt.show()", "The biggest losers...that won?\nThis is just a fun fact. There were 5 awards since 1980 that were given to films that actually lost money.", "fields = ['year', 'film', 'category', 'name', 'budget', 'box_office', 'profit', 'margin']\nprofit[(profit['profit'] < 0) & (profit['Oscar'] == 1)][fields]", "Other Awards\nDo the BAFTAs, Golden Globes, Screen Actors Guild Awards, etc. forecast who is going to win the Oscars? 
Let's find out...", "winning_awards = oscars[['category', 'Oscar', 'BAFTA', 'Golden Globe', 'Guild']]\nwinning_awards.head()\n\nacting_categories = ['Actor', 'Actress', 'Supporting Actor', 'Supporting Actress']\n \ny = winning_awards[(winning_awards['Oscar'] == 1)&(winning_awards['category'].isin(acting_categories))]\n\nfig, (ax1, ax2, ax3) = plt.subplots(ncols=3, sharey=True)\n\nplt.title('Count Plot of Wins by Award')\n\nsb.countplot(x=\"BAFTA\", data=y, ax=ax1)\nsb.countplot(x=\"Golden Globe\", data=y, ax=ax2)\nsb.countplot(x=\"Guild\", data=y, ax=ax3)\n\nprint \"Pearson correlation for acting categories\\n\"\nprint_pearsonr(oscars[oscars['category'].isin(acting_categories)], 'Oscar', ['BAFTA', 'Golden Globe', 'Guild'])", "It looks like if the Golden Globes and Screen Actors Guild awards are better indicators of Oscar success than the BAFTAs. Let's take a look at the same analysis, but for Best Picture. The \"Guild\" award we use is the Screen Actor Guild Award for Outstanding Performance by a Cast in a Motion Picture.", "y = winning_awards[(winning_awards['Oscar'] == 1)&(winning_awards['category'] == 'Picture')]\n\nfig, (ax1, ax2, ax3) = plt.subplots(ncols=3, sharey=True)\n\nplt.title('Count Plot of Wins by Award')\n\nsb.countplot(x=\"BAFTA\", data=y, ax=ax1)\nsb.countplot(x=\"Golden Globe\", data=y, ax=ax2)\nsb.countplot(x=\"Guild\", data=y, ax=ax3)\n\nprint \"Pearson correlation for acting categories\\n\"\nprint_pearsonr(oscars[oscars['category'] == 'Picture'], 'Oscar', ['BAFTA', 'Golden Globe', 'Guild'])", "Seems like the BAFTAs hold a bit more weight than the SAG awar, but the Golden Globes are still the best way to forecast an Oscar win." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mcflugen/bmi-tutorial
notebooks/waves_example.ipynb
mit
[ "<img src=\"images/csdms_logo.jpg\">\nUsing a BMI: Waves\nThis example explores how to use a BMI implementation using the Waves model as an example.\nLinks\n\nWaves source code: Look at the files that have waves in their name.\nWaves description on CSDMS: Detailed information on the Waves model.\n\nInteracting with the Waves BMI using Python\nSome magic that allows us to view images within the notebook.", "%matplotlib inline", "Import the Waves class, and instantiate it. In Python, a model with a BMI will have no arguments for its constructor. Note that although the class has been instantiated, it's not yet ready to be run. We'll get to that later!", "from cmt.components import Waves\nwaves = Waves()", "Even though we can't run our waves model yet, we can still get some information about it. Just don't try to run it. Some things we can do with our model are get the names of the input variables.", "waves.get_output_var_names()", "Or the output variables.", "waves.get_input_var_names()", "We can also get information about specific variables. Here we'll look at some info about wave direction. This is the main output of the Waves model. Notice that BMI components always use CSDMS standard names. The CSDMS Standard Name for wave angle is,\n\"sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity\"\n\nQuite a mouthful, I know. With that name we can get information about that variable and the grid that it is on (it's actually not a one).", "angle_name = 'sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity'\n\nprint \"Data type: %s\" % waves.get_var_type(angle_name)\nprint \"Units: %s\" % waves.get_var_units(angle_name)\nprint \"Grid id: %d\" % waves.get_var_grid(angle_name)\nprint \"Number of elements in grid: %d\" % waves.get_grid_size(0)\nprint \"Type of grid: %s\" % waves.get_grid_type(0)", "OK. We're finally ready to run the model. Well not quite. First we initialize the model with the BMI initialize method. Normally we would pass it a string that represents the name of an input file. For this example we'll pass None, which tells Waves to use some defaults.", "waves.initialize(None)", "Before running the model, let's set a couple input parameters. These two parameters represent the frequency for which waves approach the shore at a high angle and if they come from a prefered direction.", "waves.set_value('sea_shoreline_wave~incoming~deepwater__ashton_et_al_approach_angle_asymmetry_parameter', .25)\nwaves.set_value('sea_shoreline_wave~incoming~deepwater__ashton_et_al_approach_angle_highness_parameter', .7)", "To advance the model in time, we use the update method. We'll advance the model one day.", "waves.update()", "Let's double-check that the model advanced to the given time and see what the new wave angle is.", "print 'Current model time: %f' % waves.get_current_time()\nval = waves.get_value(angle_name)\nprint 'The current wave angle is: %f' % val[0]", "We'll put all this in a loop and advance the model in time to generate a time series of waves angles.", "import numpy as np\n\nnumber_of_time_steps = 400\nangles = np.empty(number_of_time_steps)\nfor time in xrange(number_of_time_steps):\n waves.update()\n angles[time] = waves.get_value(angle_name)\n\nimport matplotlib.pyplot as plt\n\nplt.plot(np.array(angles) * 180 / np.pi)\nplt.xlabel('Time (days)')\nplt.ylabel('Incoming wave angle (degrees)')\n\nplt.hist(np.array(angles) * 180 / np.pi, bins=25)\nplt.xlabel('Incoming wave angle (degrees)')\nplt.ylabel('Number of occurences')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dchad/malware-detection
mmcc/model-selection.ipynb
gpl-3.0
[ "import warnings\nimport numpy as np\nimport scipy as sp\nimport pandas as pd\nimport sklearn as skl\nimport matplotlib.pyplot as plt\nfrom time import time\nfrom scipy.stats import randint as sp_randint\nfrom sklearn.metrics import log_loss, confusion_matrix, accuracy_score, classification_report\nfrom sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, GradientBoostingClassifier\nfrom sklearn.cross_validation import cross_val_score, KFold, train_test_split\nfrom sklearn.grid_search import GridSearchCV, RandomizedSearchCV\nfrom sklearn.linear_model import RidgeClassifierCV\nfrom sklearn.svm import SVC\nimport seaborn as sns\nimport xgboost as xgb\n%pylab inline\nwarnings.filterwarnings(\"ignore\", category=DeprecationWarning)\nwarnings.filterwarnings(\"ignore\", category=FutureWarning)", "1. Load Training ASM and Byte Feature Data and Combine\nRun the model selection functions on the combined ASM training data for the\n30% best feature set and call graph feature set.\nSo the data frames will be:\n - final-combined-train-data-30percent.csv\n - sorted_train_labels.csv\n - all-combined-train-data.csv", "# First load the .asm and .byte training data and training labels\n# sorted_train_data_asm = pd.read_csv('data/sorted-train-malware-features-asm-reduced.csv')\n# sorted_train_data_byte = pd.read_csv('data/sorted-train-malware-features-byte.csv')\nsorted_train_labels = pd.read_csv('data/sorted-train-labels.csv')\ncombined_train_data = pd.read_csv('data/final-combined-train-data-30percent.csv')\ncombined_test_data = pd.read_csv('data/final-combined-test-data-30percent.csv')\ncall_graph_features_train = pd.read_csv('data/final-call-graph-features-10percent.csv')\n\nsorted_train_labels.head()\n\ncombined_train_data.head()\n\ncombined_test_data.head()\n\n# Utility function to report best scores\nfrom operator import itemgetter\n\ndef report(grid_scores, n_top=3):\n top_scores = sorted(grid_scores, key=itemgetter(1), reverse=True)[:n_top]\n for i, score in enumerate(top_scores):\n print(\"Model with rank: {0}\".format(i + 1))\n print(\"Mean validation score: {0:.3f} (std: {1:.3f})\".format(score.mean_validation_score, np.std(score.cv_validation_scores)))\n print(\"Parameters: {0}\".format(score.parameters))\n print(\"\")\n\ndef run_cv(X,y, clf):\n\n # Construct a kfolds object\n kf = KFold(len(y),n_folds=10,shuffle=True)\n y_prob = np.zeros((len(y),9))\n y_pred = np.zeros(len(y))\n \n # Iterate through folds\n for train_index, test_index in kf:\n print(test_index, train_index)\n X_train = X.loc[train_index,:]\n X_test = X.loc[test_index,:]\n y_train = y[train_index]\n\n clf.fit(X_train, y_train.flatten()) # use flatten to get rid of data conversion warnings\n \n y_prob[test_index] = clf.predict_proba(X_test)\n y_pred[test_index] = clf.predict(X_test)\n #print(clf.get_params())\n \n return y_prob, y_pred", "2. 
Model Selection On The ASM Features Using GridSearchCV\n Models include:\n - GradientBoostingClassifier: randomized and grid search.\n - SVC: randomized and grid search.\n - ExtraTrees: randomized and grid search.", "# Assign asm data to X,y for brevity, then split the dataset in two equal parts.\nX = combined_train_data.iloc[:,1:]\ny = np.array(sorted_train_labels.iloc[:,1])\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)\n\nX_train.shape\n\nplt.figure(figsize=(15,15))\nplt.xlabel(\"EDX Register\")\nplt.ylabel(\"Malware Class\")\nxa = np.array(X['edx'])\nxb = np.array(X['esi'])\nya = np.array(y)\nplt.scatter(xa,ya,c=ya,cmap='brg')\n\nplt.figure(figsize=(15,15))\nplt.xlabel(\"EDX Register\")\nplt.ylabel(\"ESI Register\")\nxa = np.array(X['edx'])\nxb = np.array(X['esi'])\nya = np.array(y)\nplt.scatter(xa,xb,c=ya,cmap='brg')\n\nX_means = X.mean()\nX_std = X.std()\nX_var = X.var()\nX_cov = X.cov()\n\nX_means.head()\n\nX_std.head()\n\nX_var.head()\n\nX_cov.head()", "2.1 Gradient Boosting\n2.2 Support Vector Machine\n2.2.1 Randomized Search", "# Set the parameters by cross-validation\ntuned_parameters = [{'kernel': ['rbf'], 'gamma': [1e-3, 1e-4],\n 'C': [1, 10, 100, 1000]},\n {'kernel': ['linear'], 'C': [1, 10, 100, 1000]}]\n\nprint(\"# Tuning hyper-parameters for SVC\")\nprint()\n\nclfrand = RandomizedSearchCV(SVC(C=1), tuned_parameters, cv=10)\n\nstart = time()\nclfrand.fit(X_train, y_train)\n\nprint(\"Best parameters set found on training set:\")\nprint()\nprint(clfrand.best_params_)\nprint()\nprint(\"Grid scores on training set:\")\nprint()\nreport(clfrand.grid_scores_) \nprint()\nprint(\"Classification report:\")\nprint(\"SVC took {:.2f} seconds for {:d} candidates.\".format(((time() - start), n_iter_search)))\nprint()\ny_true, y_pred = y_test, clfrand.predict(X_test)\nprint(classification_report(y_true, y_pred))\nprint()", "2.2.2 Grid Search", "# Set the parameters by cross-validation\ntuned_parameters = [{'kernel': ['rbf'], 'gamma': [1e-3, 1e-4],\n 'C': [1, 10, 100, 1000]},\n {'kernel': ['linear'], 'C': [1, 10, 100, 1000]}]\n\nprint(\"# Tuning hyper-parameters for SVC\")\nprint()\n\nclfgrid = GridSearchCV(SVC(C=1), tuned_parameters, cv=10, n_jobs=4)\n\nstart = time()\nclfgrid.fit(X_train, y_train)\n\nprint(\"Best parameters set found on training set:\")\nprint()\nprint(clfgrid.best_params_)\nprint()\nprint(\"Grid scores on training set:\")\nprint()\nreport(clfgrid.grid_scores_) \nprint()\nprint(\"Classification report:\")\nprint(\"SVC took {:.2f} seconds for {:d} candidates.\".format(((time() - start), n_iter_search)))\nprint()\ny_true, y_pred = y_test, clfgrid.predict(X_test)\nprint(classification_report(y_true, y_pred))\nprint()", "2.3 Extra Trees Classifier\n2.3.1 Randomized Search", "clfextra1 = ExtraTreesClassifier(n_jobs=4)\n\n# use a random grid over parameters, most important parameters are n_estimators (larger is better) and\n# max_features (for classification best value is square root of the number of features)\n# Reference: http://scikit-learn.org/stable/modules/ensemble.html\nparam_dist = {\"n_estimators\": [100, 500, 1000],\n \"max_depth\": [3, None],\n \"max_features\": sp_randint(1, 11),\n \"min_samples_split\": sp_randint(1, 11),\n \"min_samples_leaf\": sp_randint(1, 11),\n \"bootstrap\": [True, False],\n \"criterion\": [\"gini\", \"entropy\"]}\n\n# run randomized search\nn_iter_search = 20\nrandom_search = RandomizedSearchCV(clfextra1, param_distributions=param_dist, n_iter=n_iter_search)\n\nstart = 
time()\nrandom_search.fit(X_train, y_train)\n\nprint(\"ExtraTreesClassifier - RandomizedSearchCV:\")\nprint(\" \")\nprint(\"Best parameters set found on training set:\")\nprint(\" \")\nprint(random_search.best_params_)\nprint(\" \")\nprint(\"Grid scores on training set:\")\nprint(\" \")\nreport(random_search.grid_scores_)\nprint(\" \")\nprint(\"Classification report:\")\nprint(\"RandomizedSearchCV took {:.2f} seconds for {:d} candidates.\".format((time() - start), n_iter_search))\nprint(\" \")\ny_pred = random_search.predict(X_test)\nprint(classification_report(y_test, y_pred))\nprint(\" \")\ny_prob = random_search.predict_proba(X_test)\nprint(\"logloss = {:.3f}\".format(log_loss(y_test, y_prob)))\nprint(\"score = {:.3f}\".format(accuracy_score(y_test, y_pred)))\ncm = confusion_matrix(y_test, y_pred)\nprint(cm)", "2.3.2 Grid Search", "clfextra2 = ExtraTreesClassifier(n_jobs=4)\n\n# use a full grid over all parameters, most important parameters are n_estimators (larger is better) and\n# max_features (for classification best value is square root of the number of features)\n# Reference: http://scikit-learn.org/stable/modules/ensemble.html\nparam_grid = {\"n_estimators\": [100, 500, 1000, 2000],\n \"max_depth\": [None],\n \"max_features\": [20],\n \"min_samples_split\": [1],\n \"min_samples_leaf\": [1],\n \"bootstrap\": [True, False],\n \"criterion\": [\"gini\", \"entropy\"]}\n\n# run grid search\ngrid_search = GridSearchCV(clfextra2, param_grid=param_grid, cv=10)\nstart = time()\ngrid_search.fit(X_train, y_train)\n\nprint(\"ExtraTreesClassifier - GridSearchCV:\")\nprint(\" \")\nprint(\"Best parameters set found on training set:\")\nprint(\" \")\nprint(grid_search.best_params_)\nprint(\" \")\nprint(\"Grid scores on training set:\")\nprint(\" \")\nreport(grid_search.grid_scores_)\nprint(\" \")\nprint(\"Classification report:\")\nprint(\"GridSearchCV took {:.2f} seconds.\".format((time() - start)))\nprint(\" \")\ny_pred = grid_search.predict(X_test)\nprint(classification_report(y_test, y_pred))\nprint(\" \")\ny_prob = grid_search.predict_proba(X_test)\nprint(\"logloss = {:.3f}\".format(log_loss(y_test, y_prob)))\nprint(\"score = {:.3f}\".format(accuracy_score(y_train, y_pred)))\ncm = confusion_matrix(y_test, y_pred)\nprint(cm)\n\nprint(\"Classification report:\")\nprint(\"GridSearchCV took {:.2f} seconds.\".format(time() - start))\nprint(\" \")\ny_pred = grid_search.predict(X_test)\nprint(classification_report(y_test, y_pred))\nprint(\" \")\ny_prob = grid_search.predict_proba(X_test)\nprint(\"logloss = {:.3f}\".format(log_loss(y_test, y_prob)))\nprint(\"score = {:.3f}\".format(accuracy_score(y_test, y_pred)))\ncm = confusion_matrix(y_test, y_pred)\nprint(cm)", "3. 
Model selection On The Byte Features Using GridSearchCV\n Models include:\n - Ridge Classifier\n - SVC: grid search.\n - ExtraTrees: grid search.", "# Assign byte data to X,y for brevity, then split the dataset in two equal parts.\nX = sorted_train_data_byte.iloc[:,1:]\ny = np.array(sorted_train_labels.iloc[:,1])\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)\n\nplt.figure(figsize=(15,15))\nplt.xlabel(\"File Entropy\")\nplt.ylabel(\"Malware Class\")\nxa = np.array(X['entropy'])\nxb = np.array(X['filesize'])\nya = np.array(y)\nplt.scatter(xa,ya,c=ya,cmap='brg')\n\nplt.figure(figsize=(15,15))\nplt.xlabel(\"File Size\")\nplt.ylabel(\"Malware Class\")\nplt.scatter(xb,ya,c=ya,cmap='brg')\n\nplt.figure(figsize=(15,15))\nplt.xlabel(\"File Size\")\nplt.ylabel(\"Shannon's Entropy\")\n#colors = cm.rainbow(np.linspace(0, 1, len(ya)))\nplt.scatter(xb,xa,c=ya,cmap='brg')", "3.1 Ridge Classifier", "clfridge = RidgeClassifierCV(cv=10)\nclfridge.fit(X_train, y_train)\ny_pred = clfridge.predict(X_test)\nprint(classification_report(y_test, y_pred))\nprint(\" \")\nprint(\"score = {:.3f}\".format(accuracy_score(y_train, y_pred)))\ncm = confusion_matrix(y_test, y_pred)\nprint(cm)", "3.2 Support Vector Machine\n3.3 Extra Trees Classifier", "clfextra = ExtraTreesClassifier(n_jobs=4)\n\n# use a full grid over all parameters, most important parameters are n_estimators (larger is better) and\n# max_features (for classification best value is square root of the number of features)\n# Reference: http://scikit-learn.org/stable/modules/ensemble.html\nparam_grid = {\"n_estimators\": [1000, 2000],\n \"max_depth\": [3, None],\n \"max_features\": [1, 2],\n \"min_samples_split\": [1, 3, 10],\n \"min_samples_leaf\": [1, 3, 10],\n \"bootstrap\": [True, False],\n \"criterion\": [\"gini\", \"entropy\"]}\n\n# run grid search\ngrid_search = GridSearchCV(clfextra, param_grid=param_grid)\nstart = time()\ngrid_search.fit(X, y)\n\nprint(\"ExtraTreesClassifier - GridSearchCV:\")\nprint(\" \")\nprint(\"Best parameters set found on training set:\")\nprint(\" \")\nprint(grid_search.best_params_)\nprint(\" \")\nprint(\"Grid scores on training set:\")\nprint(\" \")\nreport(grid_search.grid_scores_)\nprint(\" \")\nprint(\"Classification report:\")\nprint(\"GridSearchCV took {:.2f} seconds.\".format((time() - start)))\nprint(\" \")\ny_pred = grid_search.predict(X)\nprint(classification_report(y, y_pred))\nprint(\" \")\ny_prob = grid_search.predict_proba(X)\nprint(\"logloss = {:.3f}\".format(log_loss(y, y_prob)))\nprint(\"score = {:.3f}\".format(accuracy_score(y, y_pred)))\ncm = confusion_matrix(y, y_pred)\nprint(cm)", "4. 
Model Selection On The Combined Training ASM/Byte Data Using GridSearchCV\nGrid search will be done on the following classifiers:\n- Ridge Classifier\n- ExtraTreesClassifier\n- GradientBoost\n- RandomForest\n- SVC", "# Assign byte data to X,y for brevity, then split the dataset in two equal parts.\nX = combined_train_data.iloc[:,1:]\ny = np.array(sorted_train_labels.iloc[:,1])\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)", "4.1 Ridge Classifier", "from sklearn.linear_model import RidgeClassifierCV\n\nclfridge = RidgeClassifierCV(cv=10)\nclfridge.fit(X_train, y_train)\ny_pred = clfridge.predict(X_test)\nprint(classification_report(y_test, y_pred))\nprint(\" \")\nprint(\"score = {:.3f}\".format(accuracy_score(y_train, y_pred)))\ncm = confusion_matrix(y_test, y_pred)\nprint(cm)", "4.2 Extra Trees Classifier", "clf1 = ExtraTreesClassifier(n_estimators=1000, max_features=None, min_samples_leaf=1, min_samples_split=9, n_jobs=4, criterion='gini')\np1, pred1 = run_cv(X,y,clf1)\nprint(\"logloss = {:.3f}\".format(log_loss(y, p1)))\nprint(\"score = {:.3f}\".format(accuracy_score(y, pred1)))\ncm = confusion_matrix(y, pred1)\nprint(cm)", "4.3 Gradient Boost\n4.4 Random Forest\n4.5 Support Vector Machine\n4.6 Nearest Neighbours\n4.7 XGBoost", "X = combined_train_data.iloc[:,1:]\nylabels = sorted_train_labels.iloc[:,1:]\ny = np.array(ylabels - 1)\ny = y.flatten()\ny\n\nxgclf = xgb.XGBClassifier(objective=\"multi:softprob\", nthread=4)\n\nparams = {\"n_estimators\": [1000, 2000],\n \"max_depth\": [5, 10],\n \"learning_rate\": [0.1, 0.05]}\n\n# run grid search\ngrid_search = GridSearchCV(xgclf, param_grid=params)\nstart = time()\ngrid_search.fit(X, y)\n\nprint(\"XGBoost Classifier - GridSearchCV:\")\nprint(\" \")\nprint(\"Best parameters set found on training set:\")\nprint(\" \")\nprint(grid_search.best_params_)\nprint(\" \")\nprint(\"Grid scores on training set:\")\nprint(\" \")\nreport(grid_search.grid_scores_)\nprint(\" \")\nprint(\"Classification report:\")\nprint(\"GridSearchCV took {:.2f} seconds.\".format((time() - start)))\nprint(\" \")\ny_pred = grid_search.predict(X)\nprint(classification_report(y, y_pred))\nprint(\" \")\ny_prob = grid_search.predict_proba(X)\nprint(\"logloss = {:.3f}\".format(log_loss(y, y_prob)))\nprint(\"score = {:.3f}\".format(accuracy_score(y, y_pred)))\ncm = confusion_matrix(y, y_pred)\nprint(cm)\n\nprint(\"GridSearchCV took {:.2f} seconds.\".format((time() - start)))\nprint(\" \")\ny_pred = grid_search.predict(X)\nprint(classification_report(y, y_pred))\nprint(\" \")\ny_prob = grid_search.predict_proba(X)\nprint(\"logloss = {:.3f}\".format(log_loss(y, y_prob)))\nprint(\"score = {:.3f}\".format(accuracy_score(y, y_pred)))\ncm = confusion_matrix(y, y_pred)\nprint(cm)\n\n# Now try with best parameters and 50/50 train-test split\nxgclf = xgb.XGBClassifier(n_estimators=1000, max_depth=10, learning_rate=0.01,objective=\"multi:softprob\", nthread=4)\nprob1, pred1 = run_cv(X_train, y_train, xgclf)\nprint(\"logloss = {:.3f}\".format(log_loss(y_train, prob1)))\nprint(\"score = {:.3f}\".format(accuracy_score(y_train, pred1)))\ncm = confusion_matrix(y_train, pred1)\nprint(cm)\n\npred2 = xgclf.predict(X_test)\nprob2 = xgclf.predict_proba(X_test)\nprint(\"logloss = {:.3f}\".format(log_loss(y_test, prob2)))\nprint(\"score = {:.3f}\".format(accuracy_score(y_test, pred2)))\ncm = confusion_matrix(y_test, pred2)\nprint(cm)\n\nxgclf = xgb.XGBClassifier(n_estimators=1000, max_depth=10, learning_rate=0.1,objective=\"multi:softprob\", 
nthread=4)\nprob1, pred1 = run_cv(X,y,xgclf)\nprint(\"logloss = {:.3f}\".format(log_loss(y, prob1)))\nprint(\"score = {:.3f}\".format(accuracy_score(y, pred1)))\ncm = confusion_matrix(y, pred1)\nprint(cm)", "5. Run ExtraTreeClassifiers With 10-Fold Cross Validation", "help(xgb)\n\nhelp(ExtraTreesClassifier)\n\nytrain = np.array(y)\n\nX = data_reduced.iloc[:,1:]\nX.shape\n\nclf1 = ExtraTreesClassifier(n_estimators=1000, max_features=None, min_samples_leaf=1, min_samples_split=9, n_jobs=4, criterion='gini')\np1, pred1 = run_cv(X,ytrain,clf1)\nprint \"logloss = %.3f\" % log_loss(y, p1)\nprint \"score = %.3f\" % accuracy_score(ytrain, pred1)\ncm = confusion_matrix(y, pred1)\nprint(cm)\n\nclf2 = ExtraTreesClassifier(n_estimators=500, max_features=None, min_samples_leaf=1, min_samples_split=9, n_jobs=4, criterion='gini')\np2, pred2 = run_cv(X,ytrain,clf2)\nprint \"logloss = %.3f\" % log_loss(y, p2)\nprint \"score = %.3f\" % accuracy_score(ytrain, pred2)\ncm = confusion_matrix(y, pred2)\nprint(cm)\n\nclf3 = ExtraTreesClassifier(n_estimators=250, max_features=None, min_samples_leaf=1, min_samples_split=9, n_jobs=4, criterion='gini')\np3, pred3 = run_cv(X,ytrain,clf3)\nprint \"logloss = %.3f\" % log_loss(y, p3)\nprint \"score = %.3f\" % accuracy_score(ytrain, pred3)\ncm = confusion_matrix(y, pred3)\nprint(cm)\n\nclf4 = ExtraTreesClassifier(n_estimators=2000, max_features=None, min_samples_leaf=2, min_samples_split=3, n_jobs=4, criterion='gini')\np4, pred4 = run_cv(X,ytrain,clf4)\nprint \"logloss = %.3f\" % log_loss(y, p4)\nprint \"score = %.3f\" % accuracy_score(ytrain, pred4)\ncm = confusion_matrix(y, pred4)\nprint(cm)\n\nclf5 = ExtraTreesClassifier(n_estimators=1000, n_jobs=4, criterion='gini')\np5, pred5 = run_cv(X,ytrain,clf5)\nprint \"logloss = %.4f\" % log_loss(y, p5)\nprint \"score = %.4f\" % accuracy_score(ytrain, pred5)\ncm = confusion_matrix(y, pred5)\nprint(cm)\n\nclf6 = ExtraTreesClassifier(n_estimators=2000, n_jobs=4, criterion='gini')\np6, pred6 = run_cv(X,ytrain,clf6)\nprint \"logloss = %.4f\" % log_loss(y, p6)\nprint \"score = %.4f\" % accuracy_score(ytrain, pred6)\ncm = confusion_matrix(y, pred6)\nprint(cm)", "6. GridSearchCV with XGBoost on All Combined ASM and Call Graph Features.", "data = pd.read_csv('data/all-combined-train-data-final.csv')\nlabels = pd.read_csv('data/sorted-train-labels.csv')\ndata.head(20)\n\nX = data.iloc[:,1:]\nylabels = labels.iloc[:,1:].values\ny = np.array(ylabels - 1).flatten() # numpy arrays are unloved in many places.\ny\n\nlabels.head()\n\nxgclf = xgb.XGBClassifier(objective=\"multi:softprob\", nthread=4)\n\nparams = {\"n_estimators\": [1000, 2000],\n \"max_depth\": [5, 10],\n \"learning_rate\": [0.1, 0.05]}\n\n# run grid search\ngrid_search = GridSearchCV(xgclf, param_grid=params)\nstart = time()\ngrid_search.fit(X, y)\n\nprint(\"XGBoost Classifier - GridSearchCV:\")\nprint(\" \")\nprint(\"Best parameters set found on training set:\")\nprint(\" \")\nprint(grid_search.best_params_)\nprint(\" \")\nprint(\"Grid scores on training set:\")\nprint(\" \")\nreport(grid_search.grid_scores_)\nprint(\" \")\nprint(\"Classification report:\")\nprint(\"GridSearchCV took {:.2f} seconds.\".format((time() - start)))\nprint(\" \")\ny_pred = grid_search.predict(X)\nprint(classification_report(y, y_pred))\nprint(\" \")\ny_prob = grid_search.predict_proba(X)\nprint(\"logloss = {:.3f}\".format(log_loss(y, y_prob)))\nprint(\"score = {:.3f}\".format(accuracy_score(y, y_pred)))\ncm = confusion_matrix(y, y_pred)\nprint(cm)", "7. Summary Of Results", "# TODO:", "8. 
Test/Experimental Code Only", "# go through the features and delete any that sum to less than 200\ncolsum = X.sum(axis=0, numeric_only=True)\n\nzerocols = colsum[(colsum[:] == 0)]\nzerocols\n\nzerocols = colsum[(colsum[:] < 110)]\nzerocols.shape\n\nreduceX = X\nfor col in reduceX.columns:\n if sum(reduceX[col]) < 100:\n del reduceX[col]\n \nreduceX.shape\n\nskb = SelectKBest(chi2, k=20)\nX_kbestnew = skb.fit_transform(X, y)\nX_kbestnew.shape\n\ncombined_train_data.loc[combined_train_data['filename'] == '4jKA1GUDv6TMNpPuIxER',:]\n# Get an array of labels in the same order as the asm filenames\n# y = [0]*labels.shape[0]\n# fnames = train_data_asm['filename']\n# for i in range(len(y)):\n# fname = train_data_asm.loc[i,'filename']\n# row = labels[labels['Id'] == fname]\n# y[i] = row.iloc[0,1]\n\ntrain_data_byte[train_data_byte.loc[:,'filename']=='4jKA1GUDv6TMNpPuIxER']\n\ncount = 0\nfor i in range(len(y)):\n if y[i] == 0:\n count += 1\n \nprint(count)\n\ncount = 0\nfor i in range(len(sorted_train_labels)):\n if sorted_train_labels.iloc[i,1] == 0:\n count += 1\n \nprint(count)\n\nfrom sklearn.datasets import load_digits\nfrom sklearn.ensemble import RandomForestClassifier\n\n# get some data\ndigits = load_digits()\nX, y = digits.data, digits.target\n\ntype(X)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
WMD-group/SMACT
examples/Practical_tutorial/ELS_practical.ipynb
mit
[ "The ELS matching procedure\nThis practical is based on the concepts introduced for optimising electrical contacts in photovoltaic cells. The procedure was published in [J. Mater. Chem. C (2016)]((http://pubs.rsc.org/en/content/articlehtml/2016/tc/c5tc04091d).\n<img src=\"Images/toc.gif\">\nIn this practical we screen electrical contact materials for CH$_3$NH$_3$PbI$_3$. There are three main steps:\n* Electronic matching of band energies\n* Lattice matching of surface vectors \n* Site matching of under-coordinated surface atoms\n1. Electronic matching\nBackground\nEffective charge extraction requires a low barrier to electron or hole transport accross an the interface. This barrier is exponential in the discontinuity of the band energies across the interface. To a first approximation the offset or discontinuity can be estimated by comparing the ionisation potentials (IPs) or electron affinities (EAs) of the two materials, this is known as Anderson's rule.\n<img src=\"Images/anderson.gif\">\nHere we have collected a database of 173 measured or estimated semiconductor IPs and EAs (CollatedData.txt). We use it as the first step in our screening. The screening is performed by the script scan_energies.py. We enforce several criteria:\n\nThe IP and EA of the target material are supplied using the flags -i and -e\nThe IP/EA must be within a certain range from the target material; by default this is set to 0.5 eV, but it can be contolled by the flag -w. The window is the full width so the max offset is 0.5*window\nA selective contact should be a semiconductor, so we apply a criterion based on its band gap. If the gap is too large we consider that it would be an insulator. By default this is set to 4.0 eV and is controlled by the flag -g", "%%bash\ncd Electronic/\npython scan_energies.py -h", "Now let's do a proper scan\n\nIP = 5.7 eV\nEA = 4.0 eV\nWindow = 0.25 eV\nInsulating threshold = 4.0 eV", "%%bash\ncd Electronic/\npython scan_energies.py -i 5.7 -e 4.0 -w 0.5 -g 4.0", "2. Lattice matching\nBackground\nFor stable interfaces there should be an integer relation between the lattice constants of the two surfaces in contact, which allows for perfect matching, with minimal strain. Generally a strain value of ~ 3% is considered acceptable, above this the interface will be incoherent.\nThis section uses the ASE package to construct the low index surfaces of the materials identified in the electronic step, as well as those of the target material. The code LatticeMatch.py to identify optimal matches.\nFirst we need .cif files of the materials obtained from the electronic matching. These are obtained from the Materials Project website. Most of the .cif files are there already, but we should add Cu$_2$O and GaN, just for practice.\nLattice matching routine\nThe lattice matching routine involves obtaining reduced cells for each surface and looking for multiples of each side which match. The procedure is described in more detail in our paper.\n<img src=\"Images/lattice_match.gif\">\nThe actual clever stuff of the algorithm comes from a paper from Zur and McGill in J. Appl. Physics (1984).\n<img src=\"Images/ZurMcGill.jpg\">\nThe script\nThe work is done by a python script called LatticeMatch.py. As input it reads .cif files. 
It takes a number of flags: \n* -a the file containing the crystallographic information of the first material\n* -b the file containing the crystallographic information of the second material \n* -s the strain threshold above which to cutoff, defaults to 0.05\n* -l the maximum number of times to expand either surface to find matching conditions, defaults to 5\nWe will run the script in a bash loop to iterate over all interfaces of our contact materials with the (100) and (110) surfaces of pseudo-cubic CH$_3$NH$_3$PbI$_3$. Note that I have made all lattice parameters of CH$_3$NH$_3$PbI$_3$ exactly equal, this is to facilitate the removal of duplicate surfaces by the script.", "%%bash\ncd Lattice/\nfor file in *.cif; do python LatticeMatch.py -a MAPI/CH3NH3PbI3.cif -b $file -s 0.03; done", "3. Site matching\nSo far the interface matching considered only the magnitude of the lattice vectors. It would be nice to be able to include some measure of how well the dangling bonds can passivate one another. We do this by calculating the site overlap. Basically, we determine the undercoordinated surface atoms on each side and project their positions into a 2D plane. \n<img src=\"Images/site_overlap.gif\">\nWe then lay the planes over each other and slide them around until there is the maximum coincidence. We calculate the overlap factor from \n$$ ASO = \\frac{2S_C}{S_A + S_B}$$\nwhere $S_C$ is the number of overlapping sites in the interface, and $S_A$ and $S_B$ are the number of sites in each surface.\n<img src=\"Images/ASO.gif\">\nThe script\nThis section can be run in a stand-alone script called csl.py. It relies on a library of the 2D projections of lattice sites from different surfaces, which is called surface_points.py. Currently this contains a number of common materials types, but sometimes must be expanded as new materials are identified from the electronic and lattice steps.\ncsl.py takes the following input parameters:\n* -a The first material to consider\n* -b The second material to consider\n* -x The first materials miller index to consider, format : 001\n* -y The second materials miller index to consider, format : 001\n* -u The first materials multiplicity, format : 2,2\n* -v The second materials multiplicity, format : 2,2\nWe can run it for one example from the previous step, let's say GaN (010)x(2,5) with CH$_3$NH$_3$PbI$_3$ (110)x(1,3)", "%%bash\ncd Site/\npython csl.py -a CH3NH3PbI3 -b GaN -x 110 -y 010 -u 1,3 -v 2,5", "All together\nThe lattice and site examples above give a a feel for what is going on. For a proper screening procedure it would be nice to be able to run them together. That's exactly what happens with the LatticeSite.py script. It uses a new class Pair to store and pass information about the interface pairings. This includes the materials names, miller indices of matching surfaces, strians, multiplicities etc.\nThe LatticeSite.py script takes the same variables as LatticeMatch.py. It just takes a little longer to run, so a bit of patience is required.\nThis script outputs the standard pair information as well as the site matching factor, which is calculated as\n$$ \\frac{100\\times ASO}{1 + |\\epsilon|}$$\nwhere the $ASO$ was defined above, and $\\epsilon$ in the average of the $u$ and $v$ strains. The number is a measure of the mechanical stability of an interface. A perfect interface of a material with itself would have a fator of 100.\nWhere lattices match but no information on the structure of the surface exists it is flagged up. 
You can always add new surfaces as required.", "%%bash\ncd Site/\nfor file in *.cif; do python LatticeSite.py -a MAPI/CH3NH3PbI3.cif -b $file -s 0.03; done" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
csaladenes/csaladenes.github.io
present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/02.1-Machine-Learning-Intro.ipynb
mit
[ "<small><i>This notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small>\nIntroduction to Scikit-Learn: Machine Learning with Python\nThis session will cover the basics of Scikit-Learn, a popular package containing a collection of tools for machine learning written in Python. See more at http://scikit-learn.org.\nOutline\nMain Goal: To introduce the central concepts of machine learning, and how they can be applied in Python using the Scikit-learn Package.\n\nDefinition of machine learning\nData representation in scikit-learn\nIntroduction to the Scikit-learn API\n\nAbout Scikit-Learn\nScikit-Learn is a Python package designed to give access to well-known machine learning algorithms within Python code, through a clean, well-thought-out API. It has been built by hundreds of contributors from around the world, and is used across industry and academia.\nScikit-Learn is built upon Python's NumPy (Numerical Python) and SciPy (Scientific Python) libraries, which enable efficient in-core numerical and scientific computation within Python. As such, scikit-learn is not specifically designed for extremely large datasets, though there is some work in this area.\nFor this short introduction, I'm going to stick to questions of in-core processing of small to medium datasets with Scikit-learn.\nWhat is Machine Learning?\nIn this section we will begin to explore the basic principles of machine learning.\nMachine Learning is about building programs with tunable parameters (typically an\narray of floating point values) that are adjusted automatically so as to improve\ntheir behavior by adapting to previously seen data.\nMachine Learning can be considered a subfield of Artificial Intelligence since those\nalgorithms can be seen as building blocks to make computers learn to behave more\nintelligently by somehow generalizing rather that just storing and retrieving data items\nlike a database system would do.\nWe'll take a look at two very simple machine learning tasks here.\nThe first is a classification task: the figure shows a\ncollection of two-dimensional data, colored according to two different class\nlabels. A classification algorithm may be used to draw a dividing boundary\nbetween the two clusters of points:", "%matplotlib inline\nimport matplotlib.pyplot as plt\n\nplt.style.use('seaborn')\n\n# Import the example plot from the figures directory\nfrom fig_code import plot_sgd_separator\nplot_sgd_separator()", "This may seem like a trivial task, but it is a simple version of a very important concept.\nBy drawing this separating line, we have learned a model which can generalize to new\ndata: if you were to drop another point onto the plane which is unlabeled, this algorithm\ncould now predict whether it's a blue or a red point.\nIf you'd like to see the source code used to generate this, you can either open the\ncode in the figures directory, or you can load the code using the %load magic command:\nThe next simple task we'll look at is a regression task: a simple best-fit line\nto a set of data:", "from fig_code import plot_linear_regression\nplot_linear_regression()", "Again, this is an example of fitting a model to data, such that the model can make\ngeneralizations about new data. The model has been learned from the training\ndata, and can be used to predict the result of test data:\nhere, we might be given an x-value, and the model would\nallow us to predict the y value. 
Again, this might seem like a trivial problem,\nbut it is a basic example of a type of operation that is fundamental to\nmachine learning tasks.\nRepresentation of Data in Scikit-learn\nMachine learning is about creating models from data: for that reason, we'll start by\ndiscussing how data can be represented in order to be understood by the computer. Along\nwith this, we'll build on our matplotlib examples from the previous section and show some\nexamples of how to visualize data.\nMost machine learning algorithms implemented in scikit-learn expect data to be stored in a\ntwo-dimensional array or matrix. The arrays can be\neither numpy arrays, or in some cases scipy.sparse matrices.\nThe size of the array is expected to be [n_samples, n_features]\n\nn_samples: The number of samples: each sample is an item to process (e.g. classify).\n A sample can be a document, a picture, a sound, a video, an astronomical object,\n a row in database or CSV file,\n or whatever you can describe with a fixed set of quantitative traits.\nn_features: The number of features or distinct traits that can be used to describe each\n item in a quantitative manner. Features are generally real-valued, but may be boolean or\n discrete-valued in some cases.\n\nThe number of features must be fixed in advance. However it can be very high dimensional\n(e.g. millions of features) with most of them being zeros for a given sample. This is a case\nwhere scipy.sparse matrices can be useful, in that they are\nmuch more memory-efficient than numpy arrays.\n\n(Figure from the Python Data Science Handbook)\nA Simple Example: the Iris Dataset\nAs an example of a simple dataset, we're going to take a look at the\niris data stored by scikit-learn.\nThe data consists of measurements of three different species of irises.\nThere are three species of iris in the dataset, which we can picture here:", "from IPython.core.display import Image, display\ndisplay(Image(filename='images/iris_setosa.jpg'))\nprint(\"Iris Setosa\\n\")\n\ndisplay(Image(filename='images/iris_versicolor.jpg'))\nprint(\"Iris Versicolor\\n\")\n\ndisplay(Image(filename='images/iris_virginica.jpg'))\nprint(\"Iris Virginica\")", "Quick Question:\nIf we want to design an algorithm to recognize iris species, what might the data be?\nRemember: we need a 2D array of size [n_samples x n_features].\n\n\nWhat would the n_samples refer to?\n\n\nWhat might the n_features refer to?\n\n\nRemember that there must be a fixed number of features for each sample, and feature\nnumber i must be a similar kind of quantity for each sample.\nLoading the Iris Data with Scikit-Learn\nScikit-learn has a very straightforward set of data on these iris species. 
The data consist of\nthe following:\n\n\nFeatures in the Iris dataset:\n\n\nsepal length in cm\n\nsepal width in cm\npetal length in cm\n\npetal width in cm\n\n\nTarget classes to predict:\n\n\nIris Setosa\n\nIris Versicolour\nIris Virginica\n\nscikit-learn embeds a copy of the iris CSV file along with a helper function to load it into numpy arrays:", "from sklearn.datasets import load_iris\niris = load_iris()\n\niris.keys()\n\nn_samples, n_features = iris.data.shape\nprint((n_samples, n_features))\nprint(iris.data[10])\n\nprint(iris.data.shape)\nprint(iris.target.shape)\n\nprint(iris.target)\n\nprint(iris.target_names)", "This data is four dimensional, but we can visualize two of the dimensions\nat a time using a simple scatter-plot:", "import numpy as np\nimport matplotlib.pyplot as plt\n\nx_index = 2\ny_index = 1\n\n# this formatter will label the colorbar with the correct target names\nformatter = plt.FuncFormatter(lambda i, *args: iris.target_names[int(i)])\n\nplt.scatter(iris.data[:, x_index], iris.data[:, y_index],\n c=iris.target, cmap=plt.cm.get_cmap('RdYlBu', 3))\nplt.colorbar(ticks=[0, 1, 2], format=formatter)\nplt.clim(-0.5, 2.5)\nplt.xlabel(iris.feature_names[x_index])\nplt.ylabel(iris.feature_names[y_index]);", "Quick Exercise:\nChange x_index and y_index in the above script\nand find a combination of two parameters\nwhich maximally separate the three classes.\nThis exercise is a preview of dimensionality reduction, which we'll see later.\nOther Available Data\nThey come in three flavors:\n\nPackaged Data: these small datasets are packaged with the scikit-learn installation,\n and can be downloaded using the tools in sklearn.datasets.load_*\nDownloadable Data: these larger datasets are available for download, and scikit-learn\n includes tools which streamline this process. These tools can be found in\n sklearn.datasets.fetch_*\nGenerated Data: there are several datasets which are generated from models based on a\n random seed. These are available in the sklearn.datasets.make_*\n\nYou can explore the available dataset loaders, fetchers, and generators using IPython's\ntab-completion functionality. After importing the datasets submodule from sklearn,\ntype\ndatasets.load_ + TAB\n\nor\ndatasets.fetch_ + TAB\n\nor\ndatasets.make_ + TAB\n\nto see a list of available functions.", "from sklearn import datasets\n\n# Type datasets.fetch_<TAB> or datasets.load_<TAB> in IPython to see all possibilities\n\n# datasets.fetch_\n\n# datasets.load_", "In the next section, we'll use some of these datasets and take a look at the basic principles of machine learning." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
YannickJadoul/Parselmouth
docs/examples/batch_processing.ipynb
gpl-3.0
[ "Batch processing of files\nUsing the Python standard libraries (i.e., the glob and os modules), we can also quickly code up batch operations e.g. over all files with a certain extension in a directory. For example, we can make a list of all .wav files in the audio directory, use Praat to pre-emphasize these Sound objects, and then write the pre-emphasized sound to a WAV and AIFF format file.", "# Find all .wav files in a directory, pre-emphasize and save as new .wav and .aiff file\nimport parselmouth\n\nimport glob\nimport os.path\n\nfor wave_file in glob.glob(\"audio/*.wav\"):\n print(\"Processing {}...\".format(wave_file))\n s = parselmouth.Sound(wave_file)\n s.pre_emphasize()\n s.save(os.path.splitext(wave_file)[0] + \"_pre.wav\", 'WAV') # or parselmouth.SoundFileFormat.WAV instead of 'WAV'\n s.save(os.path.splitext(wave_file)[0] + \"_pre.aiff\", 'AIFF')", "After running this, the original home directory now contains all of the original .wav files pre-emphazised and written again as .wav and .aiff files. The reading, pre-emphasis, and writing are all done by Praat, while looping over all .wav files is done by standard Python code.", "# List the current contents of the audio/ folder\n!ls audio/\n\n# Remove the generated audio files again, to clean up the output from this example\n!rm audio/*_pre.wav\n!rm audio/*_pre.aiff", "Similarly, we can use the pandas library to read a CSV file with data collected in an experiment, and loop over that data to e.g. extract the mean harmonics-to-noise ratio. The results CSV has the following structure:\ncondition | ... | pp_id\n--------- | --- | -----\n0 | ... | 1877\n1 | ... | 801\n1 | ... | 2456\n0 | ... | 3126\nThe following code would read such a table, loop over it, use Praat through Parselmouth to calculate the analysis of each row, and then write an augmented CSV file to disk. To illustrate we use an example set of sound fragments: results.csv, 1_b.wav, 2_b.wav, 3_b.wav, 4_b.wav, 5_b.wav, 1_y.wav, 2_y.wav, 3_y.wav, 4_y.wav, 5_y.wav\nIn our example, the original CSV file, results.csv contains the following table:", "import pandas as pd\n\nprint(pd.read_csv(\"other/results.csv\"))\n\ndef analyse_sound(row):\n condition, pp_id = row['condition'], row['pp_id']\n filepath = \"audio/{}_{}.wav\".format(condition, pp_id)\n sound = parselmouth.Sound(filepath)\n harmonicity = sound.to_harmonicity()\n return harmonicity.values[harmonicity.values != -200].mean()\n\n# Read in the experimental results file\ndataframe = pd.read_csv(\"other/results.csv\")\n\n# Apply parselmouth wrapper function row-wise\ndataframe['harmonics_to_noise'] = dataframe.apply(analyse_sound, axis='columns')\n\n# Write out the updated dataframe\ndataframe.to_csv(\"processed_results.csv\", index=False)", "We can now have a look at the results by reading in the processed_results.csv file again:", "print(pd.read_csv(\"processed_results.csv\"))\n\n# Clean up, remove the CSV file generated by this example\n!rm processed_results.csv" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kinshuk4/MoocX
k2e/ml/deep-learning/mlp.ipynb
mit
[ "Deep Learning 101 : Multilayer Perceptrons\nIn this tutorial we'll go over the basics of fitting neural networks in python, using keras and tensorflow. You'll need to have them both installed to be able to run this tutorial. Key take aways are to become familiar with the keras API and nomenclature and understand how MLPs relate to logistic regression.", "%matplotlib inline\n\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.style.use('fivethirtyeight')\n\nfrom sklearn.datasets import make_circles\n\nfrom keras.layers import Input, Dense\nfrom keras.models import Sequential\n\nX, y = make_circles(n_samples=5000, factor=.3, noise=.05)\nX_train = X[:4000]\ny_train = y[:4000]\n\nX_val = X[4000:]\ny_val = y[4000:]\nnum_variables = X.shape[1]\n\nplt.scatter(X[:,0], X[:,1], c=y,cmap=plt.cm.coolwarm)\nplt.title(\"Classification Dataset\")\nplt.xlabel('x1')\nplt.ylabel('x2')\nplt.show()", "A first model with logistic regression\nThe first thing we have to do is to set up an input tensor. One the key advantages of using keras is that you only have to specify the shapes for the inputs and outputs, the shapes for the remaining tensors will be inferred automatically. \nIn our case, each input vector $x_i = <x_{i1},x_{i2}>$ has two dimensions and the output is a single binary variable $y_i \\in {0,1}$. Let's see how we would set this up with keras:", "logreg = Sequential()\nlogreg.add(Dense(output_dim=1, input_dim=num_variables, activation='sigmoid'))", "The first line tells keras to initialze a new sequential model. Sequential models take a single input tensor and produce a single output tensor. For the purposes of this tutorial, we're going to stick with the sequential model because it will have all of the functionality we'll need.\nThe bulk of the action happens on the second line, and it's a litte terse so let's unpack it:\n\nlogreg.add tells keras that we want to add a new layer to our network.\nDense specifies that we want to use a fully-connected layer, aka a dense layer and is probably most opaque aspect of the keras API. In general, what a dense layer does create several different linear combinations of the output of the previous layer, followed by an element-wise application of an activation function. The number of linear combinations is specified by the number of 'hidden units' in the layer, which in this case is specified by output_dim.\ninput_dim=num_variables - Since this is the first layer in our network, we have to tell keras how big the input will be. In this case \nactivation='sigmoid' specifies which element-wise transformation we should apply the resulting output. Here we've specified a sigmoid transformation, which is given by $\\frac{1}{1 + \\exp(-t)}$, where $t$ is the result of the linear combination. This function 'squashes' the weighted sum to be a number between 0 and 1. \n\nPutting that all together, we are telling keras that we'd like a single linear combination of the input variables, followed by a sigmoid transformation, which will represent the probability that an observation belongs to the class $y=1$. In this case, the Dense layer will have 2 parameters/weights ${w_1,w_2}$ and intercept/bias term $w_0$ since this is included by default. \nSince this is a relatively simple case, we can write out the actual function implied by this model. 
It looks like this:\n<center>\n$ \n\\frac{1}{1 + \\exp(-(w0 + w1x_{i1} + w2x_{i2}))}\n$\n</center>\nAs you might be able to see now, this is equivalent to a logistic regression model, where we are modeling the log-odds as linear function of the input variables.\nEnough talking, let's fit the model! That's easily enough accomplished in another 2 lines:", "logreg.compile(optimizer='adam',\n loss='binary_crossentropy',\n metrics=['accuracy'])\nlogreg.fit(X_train, y_train,validation_data=[X_val,y_val],verbose=0)\nval_score = logreg.evaluate(X_val,y_val,verbose=0)\nprint \"Accuracy on validation data: \" + str(val_score[1]*100) + \"%\"\n\n# create a mesh to plot in\nh = 0.02\nx_min, x_max = X_val[:, 0].min(), X_val[:, 0].max()\ny_min, y_max = X_val[:, 1].min(), X_val[:, 1].max()\nxx, yy = np.meshgrid(np.arange(x_min, x_max, h),\n np.arange(y_min, y_max, h))\nZ = logreg.predict(np.c_[xx.ravel(), yy.ravel()])\nZ = Z.reshape(xx.shape)\nplt.contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.8)\nplt.scatter(X_val[:, 0], X_val[:, 1], c=y_val, cmap=plt.cm.coolwarm)\nplt.xlim(xx.min(), xx.max())\nplt.ylim(yy.min(), yy.max())\nplt.title(\"Logistic Regression Decision Regions\")\nplt.show()\n\nnum_hidden = 5\nmlp = Sequential()\nmlp.add(Dense(output_dim=num_hidden, input_dim=num_variables, activation='relu'))\nmlp.add(Dense(output_dim=1, activation='sigmoid'))\n\nmlp.compile(optimizer='adam',\n loss='binary_crossentropy',\n metrics=['accuracy'])\nmlp.fit(X_train, y_train,validation_data=[X_val,y_val],nb_epoch=20,verbose=0)\n\n# create a mesh to plot in\nh = 0.02\nx_min, x_max = X_val[:, 0].min(), X_val[:, 0].max()\ny_min, y_max = X_val[:, 1].min(), X_val[:, 1].max()\nxx, yy = np.meshgrid(np.arange(x_min, x_max, h),\n np.arange(y_min, y_max, h))\nZ = mlp.predict(np.c_[xx.ravel(), yy.ravel()])\nZ = Z.reshape(xx.shape)\nplt.contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.8)\nplt.scatter(X_val[:, 0], X_val[:, 1], c=y_val, cmap=plt.cm.coolwarm)\nplt.xlim(xx.min(), xx.max())\nplt.ylim(yy.min(), yy.max())\nplt.title(\"MLP with \" + str(num_hidden) + \" Hidden Units\")\nplt.show()", "We can see that a neural network with 5 hidden units is trying to stitch together a relatively boxy set of lines to create piecewise linear functions in an effort to classify the points.", "num_hidden = 128\nmlp = Sequential()\nmlp.add(Dense(output_dim=num_hidden, input_dim=num_variables, activation='relu'))\nmlp.add(Dense(output_dim=1, activation='sigmoid'))\nmlp.compile(optimizer='adam',\n loss='binary_crossentropy',\n metrics=['accuracy'])\nmlp.fit(X_train, y_train,validation_data=[X_val,y_val],nb_epoch=10,verbose=1)\n\n# create a mesh to plot in\nh = 0.02\nx_min, x_max = X_val[:, 0].min(), X_val[:, 0].max()\ny_min, y_max = X_val[:, 1].min(), X_val[:, 1].max()\nxx, yy = np.meshgrid(np.arange(x_min, x_max, h),\n np.arange(y_min, y_max, h))\nZ = mlp.predict(np.c_[xx.ravel(), yy.ravel()])\nZ = Z.reshape(xx.shape)\nplt.contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.8)\nplt.scatter(X_val[:, 0], X_val[:, 1], c=y_val, cmap=plt.cm.coolwarm)\nplt.xlim(xx.min(), xx.max())\nplt.ylim(yy.min(), yy.max())\nplt.title(\"MLP with \" + str(num_hidden) + \" Hidden Units\")\nplt.show()", "Not that we need to for this example, but we can easily extend this model to add more layers, simply by adding another dense layer.", "mlp = Sequential()\nmlp.add(Dense(output_dim=num_hidden, input_dim=num_variables, activation='relu'))\nmlp.add(Dense(output_dim=num_hidden, activation='relu'))\nmlp.add(Dense(output_dim=1, 
activation='sigmoid'))\nmlp.compile(optimizer='adam',\n loss='binary_crossentropy',\n metrics=['accuracy'])\nmlp.fit(X_train, y_train,validation_data=[X_val,y_val],nb_epoch=10,verbose=1)\n\nh = 0.02\nx_min, x_max = X_val[:, 0].min(), X_val[:, 0].max()\ny_min, y_max = X_val[:, 1].min(), X_val[:, 1].max()\nxx, yy = np.meshgrid(np.arange(x_min, x_max, h),\n np.arange(y_min, y_max, h))\nZ = mlp.predict(np.c_[xx.ravel(), yy.ravel()])\nZ = Z.reshape(xx.shape)\nplt.contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.8)\nplt.scatter(X_val[:, 0], X_val[:, 1], c=y_val, cmap=plt.cm.coolwarm)\nplt.xlim(xx.min(), xx.max())\nplt.ylim(yy.min(), yy.max())\nplt.title(\"MLP with 2 Hidden Layers and \" + str(num_hidden) + \" Units\")\nplt.show()", "That's it for this tutorial. Hopefully it helped you get a handle on how to use the keras API to build some simple neural networks. In the next one, we'll go over how to use this same approach to build convolutional models.\nCredit\nhttps://github.com/beamandrew/deeplearning_101/" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
joshspeagle/dynesty
demos/Examples -- Noisy Likelihoods.ipynb
mit
[ "Noisy Likelihoods\nSetup\nFirst, let's set up some environmental dependencies. These just make the numerics easier and adjust some of the plotting defaults to make things more legible.", "# system functions that are always useful to have\nimport time, sys, os\n\n# basic numeric setup\nimport numpy as np\nfrom numpy import linalg\n\n# inline plotting\n%matplotlib inline\n\n# plotting\nimport matplotlib\nfrom matplotlib import pyplot as plt\n\n# seed the random number generator\nrstate = np.random.default_rng(819)\n\n# re-defining plotting defaults\nfrom matplotlib import rcParams\nrcParams.update({'xtick.major.pad': '7.0'})\nrcParams.update({'xtick.major.size': '7.5'})\nrcParams.update({'xtick.major.width': '1.5'})\nrcParams.update({'xtick.minor.pad': '7.0'})\nrcParams.update({'xtick.minor.size': '3.5'})\nrcParams.update({'xtick.minor.width': '1.0'})\nrcParams.update({'ytick.major.pad': '7.0'})\nrcParams.update({'ytick.major.size': '7.5'})\nrcParams.update({'ytick.major.width': '1.5'})\nrcParams.update({'ytick.minor.pad': '7.0'})\nrcParams.update({'ytick.minor.size': '3.5'})\nrcParams.update({'ytick.minor.width': '1.0'})\nrcParams.update({'font.size': 30})\n\nimport dynesty", "Noisy Likelihoods\nIn some problems, we don't actually have access to the likelihood $\\mathcal{L}(\\mathbf{x})$ because it might be intractable or numerically infeasible to compute. Instead, we are able to compute a noisy estimate\n$$ \\mathcal{L}^\\prime(\\mathbf{x}) \\sim P(\\mathcal{L} | \\mathbf{x}, \\boldsymbol{\\theta}) $$\nas a function of the \"true\" likelihood $\\mathcal{L}$ that depends on some hyper-parameters $\\boldsymbol{\\theta}$ (e.g., the number of Monte Carlo draws used in the approximation) and might also vary as a function of position $\\mathbf{x}$.\nThere are several methods for dealing with noisy (i.e. stochastic) likelihoods in an MCMC context (see, e.g., pseudo-marginal MCMC), but at first glance this might seem difficult to deal with in Nested Sampling since we require the likelihood for each sample to be monotonically increasing. This will inevitably lead to biased inference, since positive fluctuations can skew later results to also be positive. We can, however, still recover the true distribution by making judicious use of importance reweighting to re-assign a new set of likelihood realizations to our results.\n3-D Multivariate Normal\nTo demonstrate this, we will again utilize a 3-D multivariate Normal distribution.", "ndim = 3 # number of dimensions\nC = np.identity(ndim) # set covariance to identity matrix\nCinv = linalg.inv(C) # precision matrix\nlnorm = -0.5 * (np.log(2 * np.pi) * ndim + np.log(linalg.det(C))) # ln(normalization)\n\n# 3-D correlated multivariate normal log-likelihood\ndef loglikelihood(x):\n \"\"\"Multivariate normal log-likelihood.\"\"\"\n return -0.5 * np.dot(x, np.dot(Cinv, x)) + lnorm", "We'll again define our prior (via prior_transform) to be uniform in each dimension from -10 to 10 and 0 everywhere else.", "# prior transform\ndef prior_transform(u):\n \"\"\"Transforms our unit cube samples `u` to a flat prior between -10. and 10. in each variable.\"\"\"\n return 10. * (2. 
* u - 1.)", "Noiseless Case\nLet's first generate samples from this noiseless target distribution.", "# initialize our nested sampler\ndsampler = dynesty.DynamicNestedSampler(loglikelihood, prior_transform, ndim=3,\n bound='single', sample='unif', rstate=rstate)\ndsampler.run_nested(maxiter=20000, use_stop=False)\ndres = dsampler.results", "Noisy Case\nNow let's generate samples from a noisy version. Here we'll assume that we have a noisy estimate of our \"model\", which here is $f(\\mathbf{x}) = \\mathbf{x}$ such that\n$$ \\mathbf{x}^\\prime \\sim \\mathcal{N}(\\mathbf{x}, \\sigma=1.) $$\nSince our \"true\" model is $\\mathbf{x} = 0$ and our prior ranges from $-10$ to $10$, this is roughly equivalent to assuming $\\sim 10\\%$ uncertainty in each dimension.", "noise = 1.\n\n# 3-D correlated multivariate normal log-likelihood\ndef loglikelihood2(x):\n \"\"\"Multivariate normal log-likelihood.\"\"\"\n xp = rstate.normal(x, noise)\n logl = -0.5 * np.dot(xp, np.dot(Cinv, xp)) + lnorm\n scale = - 0.5 * noise**2 # location and scale\n bias_corr = scale * ndim # ***bias correction term***\n return logl - bias_corr", "Note the additional bias correction term we have now included in the log-likelihood. This ensures that our noisy likelihood is unbiased relative to the true likelihood.", "# compute estimator\nx = np.zeros(ndim)\nlogls = np.array([loglikelihood2(x) for i in range(10000)])\n\nprint('True log-likelihood:', loglikelihood(x))\nprint('Estimated:', np.mean(logls), '+/-', np.std(logls))", "Let's now sample from our noisy distribution.", "dsampler2 = dynesty.DynamicNestedSampler(loglikelihood2, prior_transform, ndim=3,\n bound='single', sample='unif', \n update_interval=50., \n rstate=rstate)\ndsampler2.run_nested(maxiter=20000, use_stop=False)\ndres2 = dsampler2.results", "As expected, sampling is substantially more inefficient in the noisy case since more likelihood calls are required to get a noisy realization that is \"better\" than the previous noisy realization.\nComparing Results\nComparing the two results, we see that the noise in our model appears to give a larger estimate for the evidence. This is what we'd expect given that our sampling will preferentially be biased to noisy realizations with larger-than-expected log-likelihoods at any given position.", "# plot results\nfrom dynesty import plotting as dyplot\n\nlnz_truth = ndim * -np.log(2 * 10.) 
# analytic evidence solution\nfig, axes = dyplot.runplot(dres, color='blue') # noiseless\nfig, axes = dyplot.runplot(dres2, color='red', # noisy\n lnz_truth=lnz_truth, truth_color='black',\n fig=(fig, axes))\nfig.tight_layout()", "This effect also propagates through to our posteriors, broadening them relative to the underlying distribution.", "# initialize figure\nfig, axes = plt.subplots(3, 7, figsize=(35, 15))\naxes = axes.reshape((3, 7))\n[a.set_frame_on(False) for a in axes[:, 3]]\n[a.set_xticks([]) for a in axes[:, 3]]\n[a.set_yticks([]) for a in axes[:, 3]]\n\n# plot noiseless run (left)\nfg, ax = dyplot.cornerplot(dres, color='blue', truths=[0., 0., 0.], truth_color='black',\n show_titles=True, max_n_ticks=3, title_kwargs={'y': 1.05},\n quantiles=None, fig=(fig, axes[:, :3]))\n\n# plot noisy run (right)\nfg, ax = dyplot.cornerplot(dres2, color='red', truths=[0., 0., 0.], truth_color='black',\n show_titles=True, title_kwargs={'y': 1.05},\n quantiles=None, max_n_ticks=3, fig=(fig, axes[:, 4:]))", "Importance Reweighting\nIf we knew the \"true\" likelihood, we could naively use importance reweighting to reweight our noiseless samples to approximate the \"correct\" distribution, as shown below.", "# importance reweighting\nlogl = np.array([loglikelihood(s) for s in dres2.samples])\ndres2_rwt = dynesty.utils.reweight_run(dres2, logl)\n\n# initialize figure\nfig, axes = plt.subplots(3, 7, figsize=(35, 15))\naxes = axes.reshape((3, 7))\n[a.set_frame_on(False) for a in axes[:, 3]]\n[a.set_xticks([]) for a in axes[:, 3]]\n[a.set_yticks([]) for a in axes[:, 3]]\n\n# plot noiseless run (left)\nfg, ax = dyplot.cornerplot(dres, color='blue', truths=[0., 0., 0.], truth_color='black',\n show_titles=True, max_n_ticks=3, title_kwargs={'y': 1.05},\n quantiles=None, fig=(fig, axes[:, :3]))\n\n# plot reweighted noisy run (right)\nfg, ax = dyplot.cornerplot(dres2_rwt, color='red', truths=[0., 0., 0.], truth_color='black',\n show_titles=True, title_kwargs={'y': 1.05},\n quantiles=None, max_n_ticks=3, fig=(fig, axes[:, 4:]))", "Full Analysis\nIn general, however, we don't have access to the true likelihood. In that case, we need to incorporate importance reweighting into our error analysis. One possible approach would be the naive scheme outlined below, where we just add in an importance reweighting step as part of the error budget. 
Note that it's important that this reweighting step happens at the end since simulate_run (re-)sorts the samples.", "Nmc = 50\n\n# compute realizations of covariances (noiseless)\ncovs = []\nfor i in range(Nmc):\n if i % 5 == 0: sys.stderr.write(str(i)+' ')\n dres_t = dynesty.utils.resample_run(dres)\n x, w = dres_t.samples, np.exp(dres_t.logwt - dres_t.logz[-1])\n covs.append(dynesty.utils.mean_and_cov(x, w)[1].flatten())\n\n# noisy case (ignoring reweighting)\ncovs2 = []\nfor i in range(Nmc):\n if i % 5 == 0: sys.stderr.write(str(i)+' ')\n dres2_t = dynesty.utils.resample_run(dres2)\n x, w = dres2_t.samples, np.exp(dres2_t.logwt - dres2_t.logz[-1])\n covs2.append(dynesty.utils.mean_and_cov(x, w)[1].flatten())\n \n# noisy case (w/ naive reweighting)\ncovs3 = []\nfor i in range(Nmc):\n if i % 5 == 0: sys.stderr.write(str(i)+' ')\n dres2_t = dynesty.utils.resample_run(dres2)\n logl_t = np.array([loglikelihood2(s) for s in dres2_t.samples])\n dres2_t = dynesty.utils.reweight_run(dres2_t, logp_new=logl_t)\n x, w = dres2_t.samples, np.exp(dres2_t.logwt - dres2_t.logz[-1])\n covs3.append(dynesty.utils.mean_and_cov(x, w)[1].flatten())\n\n# compute errors\ncov_mean, cov_std = np.mean(covs, axis=0), np.std(covs, axis=0)\ncov2_mean, cov2_std = np.mean(covs2, axis=0), np.std(covs2, axis=0)\ncov3_mean, cov3_std = np.mean(covs3, axis=0), np.std(covs3, axis=0)\n\n# print results\nprint('Noiseless Likelihood Std:\\n', cov_mean[[0, 4, 8]], \n '+/-', cov_std[[0, 4, 8]])\nprint('Noisy Likelihood Std:\\n', cov2_mean[[0, 4, 8]], \n '+/-', cov2_std[[0, 4, 8]])\nprint('Noisy Likelihood (Naive Reweight) Std:\\n', cov3_mean[[0, 4, 8]], \n '+/-', cov3_std[[0, 4, 8]])", "While including the noise from our intrinsic likelihoods appears to substantially increase our error budget, it didn't actually shift our mean prediction closer to the truth. What gives? The issue is that we aren't accounting for the fact that we are able to get an estimate of the true (expected) log-likelihood from our many repeated realizations (via the mean). 
We can estimate this and our possible uncertainties around the mean using bootstrapping.", "# compute sample mean and std(sample mean)\nlogls = np.array([[loglikelihood2(s) for s in dres2.samples] for i in range(Nmc)])\nlogls_est = logls.mean(axis=0) # sample mean\nlogls_bt = []\nfor i in range(Nmc * 10):\n idx = rstate.choice(Nmc, size=Nmc)\n logls_bt.append(logls[idx].mean(axis=0)) # bootstrapped mean\nlogls_std = np.std(logls_bt, axis=0) # bootstrapped std(mean)\n\n# noisy case (w/ mean reweighting)\ncovs4 = []\nfor i in range(Nmc):\n if i % 5 == 0: sys.stderr.write(str(i)+' ')\n dres2_t, idx = dynesty.utils.resample_run(dres2, return_idx=True)\n logl_t = rstate.normal(logls_est[idx], logls_std[idx])\n dres2_t = dynesty.utils.reweight_run(dres2_t, logp_new=logl_t)\n x, w = dres2_t.samples, np.exp(dres2_t.logwt - dres2_t.logz[-1])\n covs4.append(dynesty.utils.mean_and_cov(x, w)[1].flatten())\n\n# print results\ncov4_mean, cov4_std = np.mean(covs4, axis=0), np.std(covs4, axis=0)\nprint('Noiseless Likelihood Std:\\n', cov_mean[[0, 4, 8]], \n '+/-', cov_std[[0, 4, 8]])\nprint('Noisy Likelihood Std:\\n', cov2_mean[[0, 4, 8]], \n '+/-', cov2_std[[0, 4, 8]])\nprint('Noisy Likelihood (Naive Reweight) Std:\\n', cov3_mean[[0, 4, 8]], \n '+/-', cov3_std[[0, 4, 8]])\nprint('Noisy Likelihood (Mean+Bootstrap Reweight) Std:\\n', cov4_mean[[0, 4, 8]], \n '+/-', cov4_std[[0, 4, 8]])", "We see that after reweighting using our mean likelihoods (with bootstrapped errors) now properly shifts the mean while leaving us with uncertainties that are slightly larger than the noiseless case. This is what we'd expect given that we only have a noisy estimate of the true log-likelihood at a given position.", "# initialize figure\nfig, axes = plt.subplots(3, 7, figsize=(35, 15))\naxes = axes.reshape((3, 7))\n[a.set_frame_on(False) for a in axes[:, 3]]\n[a.set_xticks([]) for a in axes[:, 3]]\n[a.set_yticks([]) for a in axes[:, 3]]\n\n# plot noiseless run (left)\nfg, ax = dyplot.cornerplot(dres, color='blue', truths=[0., 0., 0.], truth_color='black',\n show_titles=True, max_n_ticks=3, title_kwargs={'y': 1.05},\n quantiles=None, fig=(fig, axes[:, :3]))\n\n# plot realization of reweighted run (right)\nlogl_t = rstate.normal(logls_est, logls_std)\ndres2_rwt2 = dynesty.utils.reweight_run(dres2, logp_new=logl_t)\nfg, ax = dyplot.cornerplot(dres2_rwt2, color='red', truths=[0., 0., 0.], truth_color='black',\n show_titles=True, title_kwargs={'y': 1.05},\n quantiles=None, max_n_ticks=3, fig=(fig, axes[:, 4:]))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AllenDowney/ModSim
python/soln/examples/orbit_soln.ipynb
gpl-2.0
[ "Orbiting the Sun\nModeling and Simulation in Python\nCopyright 2021 Allen Downey\nLicense: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International", "# install Pint if necessary\n\ntry:\n import pint\nexcept ImportError:\n !pip install pint\n\n# download modsim.py if necessary\n\nfrom os.path import exists\n\nfilename = 'modsim.py'\nif not exists(filename):\n from urllib.request import urlretrieve\n url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'\n local, _ = urlretrieve(url+filename, filename)\n print('Downloaded ' + local)\n\n# import functions from modsim\n\nfrom modsim import *", "In a previous example, we modeled the interaction between the Earth and the Sun, simulating what would happen if the Earth stopped in its orbit and fell straight into the Sun.\nNow let's extend the model to two dimensions and simulate one revolution of the Earth around the Sun, that is, one year.\nAt perihelion, the distance from the Earth to the Sun is 147.09 million km and its velocity is 30,290 m/s.", "r_0 = 147.09e9 # initial distance m\nv_0 = 30.29e3 # initial velocity m/s", "Here are the other constants we'll need, all with about 4 significant digits.", "G = 6.6743e-11 # gravitational constant N / kg**2 * m**2\nm1 = 1.989e30 # mass of the Sun kg\nm2 = 5.972e24 # mass of the Earth kg\nt_end = 3.154e7 # one year in seconds", "Exercise: Put the initial conditions in a State object with variables x, y, vx, and vy.\nCreate a System object with variables init and t_end.", "# Solution\n\ninit = State(x=r_0, y=0, vx=0, vy=-v_0)\n\n# Solution\n\nsystem = System(init=init,\n t_end=t_end)", "Exercise: Write a function called universal_gravitation that takes a State and a System and returns the gravitational force of the Sun on the Earth as a Vector.\nTest your function with the initial conditions; the result should be a Vector with approximate components:\nx -3.66e+22\ny 0", "# Solution\n\ndef universal_gravitation(state, system):\n \"\"\"Computes gravitational force.\n \n state: State object with distance r\n system: System object with m1, m2, and G\n \n returns: Vector\n \"\"\"\n x, y, vx, vy = state\n R = Vector(x, y)\n \n mag = G * m1 * m2 / vector_mag(R)**2\n direction = -vector_hat(R)\n return mag * direction\n\n# Solution\n\nuniversal_gravitation(init, system)", "Exercise: Write a slope function that takes a timestamp, a State, and a System and computes the derivatives of the state variables.\nTest your function with the initial conditions. 
The result should be a sequence of four values, approximately\n0.0, -30290.0, -0.006, 0.0", "# Solution\n\ndef slope_func(t, state, system):\n x, y, vx, vy = state\n\n F = universal_gravitation(state, system)\n A = F / m2\n \n return vx, vy, A.x, A.y\n\n# Solution\n\nslope_func(0, init, system)", "Exercise: Use run_solve_ivp to run the simulation.\nSave the return values in variables called results and details.", "# Solution\n\nresults, details = run_solve_ivp(system, slope_func)\ndetails.message", "You can use the following function to plot the results.", "from matplotlib.pyplot import plot\n\ndef plot_trajectory(results):\n x = results.x / 1e9\n y = results.y / 1e9\n\n make_series(x, y).plot(label='orbit')\n plot(0, 0, 'yo')\n\n decorate(xlabel='x distance (million km)',\n ylabel='y distance (million km)')\n\nplot_trajectory(results)", "You will probably see that the earth does not end up back where it started, as we expect it to after one year.\nThe following cells compute the error, which is the distance between the initial and final positions.", "error = results.iloc[-1] - system.init\nerror\n\noffset = Vector(error.x, error.y)\nvector_mag(offset) / 1e9", "The problem is that the algorithm used by run_solve_ivp does not work very well with systems like this.\nThere are two ways we can improve it.\nrun_solve_ivp takes a keyword argument, rtol, that specifies the \"relative tolerance\", which determines the size of the time steps in the simulation. Lower values of rtol require smaller steps, which yield more accurate results.\nThe default value of rtol is 1e-3. \nExercise: Try running the simulation again with smaller values, like 1e-4 or 1e-5, and see what effect it has on the magnitude of offset.\nThe other way to improve the results is to use a different algorithm. run_solve_ivp takes a keyword argument, method, that specifies which algorithm it should use. The default is RK45, which is a good, general-purpose algorithm, but not particularly good for this system. One of the other options is RK23, which is usually less accurate than RK45 (with the same step size), but for this system it turns out to be unreasonably good, for reasons I don't entirely understand.\nYet another option is 'DOP853', which is particularly good when rtol is small.\nExercise: Run the simulation with one of these methods and see what effect it has on the results. To get a sense of how efficient the methods are, display details.nfev, which is the number of times run_solve_ivp called the slope function.", "details.nfev", "Animation\nYou can use the following draw function to animate the results, if you want to see what the orbit looks like (not in real time).", "xlim = results.x.min(), results.x.max()\nylim = results.y.min(), results.y.max()\n\ndef draw_func(t, state):\n x, y, vx, vy = state\n plot(x, y, 'b.')\n plot(0, 0, 'yo')\n decorate(xlabel='x distance (million km)',\n ylabel='y distance (million km)',\n xlim=xlim,\n ylim=ylim)\n\n# animate(results, draw_func)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kadrlica/ugali
notebooks/isochrone_example.ipynb
mit
[ "%matplotlib inline\n\nimport numpy as np\nimport pylab as plt\nfrom matplotlib.colors import LogNorm\n\nfrom ugali import isochrone", "Creating Isochrones\nTo use the isochrone module, you must have the isochrone library installed (see instructions here). The isochrone module provides an API to create isochrones and calculate various characteristics. The easiest way to create an isochrone is through the general factory interface shown below.", "def plot_iso(iso):\n plt.scatter(iso.mag_1-iso.mag_2,iso.mag_1+iso.distance_modulus,marker='o',c='k')\n plt.gca().invert_yaxis()\n plt.xlabel('%s - %s'%(iso.band_1,iso.band_2)); plt.ylabel(iso.band_1)\n\niso1 = isochrone.factory(name='Padova',\n age=12, # Gyr\n metallicity=0.0002, # Z\n distance_modulus=17\n )\nprint(iso1)\nplot_iso(iso1)\n\n# Change the two bands that the isochrone loads\niso2 = isochrone.factory(name='Padova',\n age=12, # Gyr\n metallicity=0.0002, # Z\n distance_modulus=17,\n band_1 = 'i',\n band_2 = 'z'\n )\nprint(iso2)\nplot_iso(iso2)\n\n# Create a Dotter isochrone\niso3 = isochrone.factory(name='Dotter',\n age=12, # Gyr\n metallicity=0.0002, # Z\n distance_modulus=17\n )\nprint(iso3)\nplot_iso(iso3)", "Modifying Isochrones\nOnce you create an isochrone, you can modify it's parameters on the fly.", "iso = isochrone.factory(name='Padova',\n age=12, # Gyr\n metallicity=0.0002, # Z\n distance_modulus=17\n )\n\n# You can set the age, metallicity, and distance modulus\niso.age = 11\niso.distance_modulus = 20\niso.metallicity = 0.00015\nprint(iso)\n\n# Each parameter has bounds and will throw an error if you are outside the range (useful for fitting)\ntry:\n iso.distance_modulus = 40\nexcept ValueError as e:\n print(\"Error:\",e)\n \n# However, you can increase the range\niso.params['distance_modulus'].set_bounds([10,50])\niso.distance_modulus = 40\nprint(iso)\niso.distance_modulus = 17\n\n# Updating a parameters just changes the underlying isochrone file\n# Note: There is no interpolation being done\n\nfor metal in [0.00011,0.00012,0.00013]:\n iso.metallicity = metal\n print(\"Metallicity:\",iso.metallicity,iso.filename)\n \niso.metallicity = 0.000115\nprint(\"Metallicity:\",iso.metallicity,iso.filename)", "Advanced Methods\nThe Isochrone class wraps several more complicated functions related to isochrones. 
A few examples are shown below.", "# Draw a regular grid of points from the isochrone with associated IMF\ninitial_mass,mass_pdf,actual_mass,mag_1,mag_2 = iso1.sample(mass_steps=1e2)\n\nplt.scatter(mag_1-mag_2,mag_1+iso1.distance_modulus,c=mass_pdf,marker='o',facecolor='none',vmax=0.001)\nplt.colorbar()\nplt.gca().invert_yaxis()\nplt.xlabel('%s - %s'%(iso.band_1,iso.band_2)); plt.ylabel(iso.band_1)\nplt.ylim(23,15); plt.xlim(-0.5,1.0)\n\n# Randomly sample stars from the isochrone pdf\n# Note: `sample` returns the apparent magnitudes of stars\nmag_1,mag_2 = iso1.simulate(stellar_mass=3e5)\nn,bx,by,p = plt.hist2d(mag_1-mag_2,mag_2,bins=50,norm=LogNorm())\nplt.colorbar(label=\"Number of Stars\")\nplt.gca().invert_yaxis()\nplt.xlabel('%s - %s'%(iso.band_1,iso.band_2)); plt.ylabel(iso.band_1)\n\n# The isochrone is normalized using a `richness` parameter (number of stars above a given mass threshold)\nrichness = 1e4\n# Total stellar mass above some minimum mass\nprint(\"Stellar Mass:\",richness * iso.stellar_mass())\n# Luminosity calculated from the isochrone file and mass pdf\nprint(\"Stellar Luminosity:\",richness * iso.stellar_luminosity())\n# Calculate the richness \nprint(\"Absolute Magnitude:\",iso.absolute_magnitude(richness=richness))\n# Calculate the absolute magnitude using the random sampling of Martin et al. 2008\nprint(\"Martin Absolute Magnitude:\",iso.absolute_magnitude_martin(richness=richness))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
BrownDwarf/ApJdataFrames
notebooks/Geers2011.ipynb
mit
[ "ApJdataFrames Geers2011\nTitle: SUBSTELLAR OBJECTS IN NEARBY YOUNG CLUSTERS (SONYC). II. THE BROWN DWARF POPULATION OF ρ OPHIUCHI\nAuthors: Geers et al.\nData is from this paper:\nhttp://iopscience.iop.org/0004-637X/726/1/23/", "import warnings\nwarnings.filterwarnings(\"ignore\")\n\nfrom astropy.io import ascii\n\nimport pandas as pd", "Table 2 - Probable Low Mass and Substellar Mass Members of rho Oph, with MOIRCS Spectroscopy Follow-up", "names = [\"No.\",\"R.A. (J2000)\",\"Decl. (J2000)\",\"i (mag)\",\"J (mag)\",\"K_s (mag)\",\"T_eff (K)\",\"A_V\",\"Notes\"]\ntbl2 = pd.read_csv(\"http://iopscience.iop.org/0004-637X/726/1/23/suppdata/apj373191t2_ascii.txt\",\n sep=\"\\t\", skiprows=[0,1,2,3], na_values=\"sat\", names = names)\ntbl2.dropna(how=\"all\", inplace=True)\ntbl2.head()", "Save the data", "! mkdir ../data/Geers2011\n\ntbl2.to_csv(\"../data/Geers2011/tb2.csv\", index=False, sep='\\t')", "The end" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Startupsci/data-science-notebooks
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
mit
[ "Titanic Data Science Solutions\nThis notebook is companion to the book Data Science Solutions. The notebook walks us through a typical workflow for solving data science competitions at sites like Kaggle.\nThere are several excellent notebooks to study data science competition entries. However many will skip some of the explanation on how the solution is developed as these notebooks are developed by experts for experts. The objective of this notebook is to follow a step-by-step workflow, explaining each step and rationale for every decision we take during solution development.\nWorkflow stages\nThe competition solution workflow goes through seven stages described in the Data Science Solutions book's sample chapter online here.\n\nQuestion or problem definition.\nAcquire training and testing data.\nWrangle, prepare, cleanse the data.\nAnalyze, identify patterns, and explore the data.\nModel, predict and solve the problem.\nVisualize, report, and present the problem solving steps and final solution.\nSupply or submit the results.\n\nThe workflow indicates general sequence of how each stage may follow the other. However there are use cases with exceptions.\n\nWe may combine mulitple workflow stages. We may analyze by visualizing data.\nPerform a stage earlier than indicated. We may analyze data before and after wrangling.\nPerform a stage multiple times in our workflow. Visualize stage may be used multiple times.\n\nDrop a stage altogether. We may not need supply stage to productize or service enable our dataset for a competition.\nQuestion and problem definition\nCompetition sites like Kaggle define the problem to solve or questions to ask while providing the datasets for training your data science model and testing the model results against a test dataset. The question or problem definition for Titanic Survival competition is described here at Kaggle.\n\nKnowing from a training set of samples listing passengers who survived or did not survive the Titanic disaster, can our model determine based on a given test dataset not containing the survival information, if these passengers in the test dataset survived or not.\n\nWe may also want to develop some early understanding about the domain of our problem. This is described on the Kaggle competition description page here. Here are the highlights to note.\n\nOn April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. Translated 32% survival rate.\nOne of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew.\nAlthough there was some element of luck involved in surviving the sinking, some groups of people were more likely to survive than others, such as women, children, and the upper-class.\n\nWorkflow goals\nThe data science solutions workflow solves for seven major goals.\nClassifying. We may want to classify or categorize our samples. We may also want to understand the implications or correlation of different classes with our solution goal.\nCorrelating. One can approach the problem based on available features within the training dataset. Which features within the dataset contribute significantly to our solution goal? Statistically speaking is there a correlation among a feature and solution goal? As the feature values change does the solution state change as well, and visa-versa? This can be tested both for numerical and categorical features in the given dataset. 
We may also want to determine correlation among features other than survival for subsequent goals and workflow stages. Correlating certain features may help in creating, completing, or correcting features.\nConverting. For modeling stage, one needs to prepare the data. Depending on the choice of model algorithm one may require all features to be converted to numerical equivalent values. So for instance converting text categorical values to numeric values.\nCompleting. Data preparation may also require us to estimate any missing values within a feature. Model algorithms may work best when there are no missing values.\nCorrecting. We may also analyze the given training dataset for errors or possibly innacurate values within features and try to corrent these values or exclude the samples containing the errors. One way to do this is to detect any outliers among our samples or features. We may also completely discard a feature if it is not contribting to the analysis or may significantly skew the results.\nCreating. Can we create new features based on an existing feature or a set of features, such that the new feature follows the correlation, conversion, completeness goals.\nCharting. How to select the right visualization plots and charts depending on nature of the data and the solution goals. A good start is to read the Tableau paper on Which chart or graph is right for you?.\n\n\nRefactor Release 2017-Jan-29\nWe are significantly refactoring the notebook based on (a) comments received by readers, (b) issues in porting notebook from Jupyter kernel (2.7) to Kaggle kernel (3.5), and (c) review of few more best practice kernels.\nUser comments\n\nCombine training and test data for certain operations like converting titles across dataset to numerical values. (thanks @Sharan Naribole)\nCorrect observation - nearly 30% of the passengers had siblings and/or spouses aboard. (thanks @Reinhard)\nCorrectly interpreting logistic regresssion coefficients. (thanks @Reinhard)\n\nPorting issues\n\nSpecify plot dimensions, bring legend into plot.\n\nBest practices\n\nPerforming feature correlation analysis early in the project.\nUsing multiple plots instead of overlays for readability.", "# data analysis and wrangling\nimport pandas as pd\nimport numpy as np\nimport random as rnd\n\n# visualization\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# machine learning\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.svm import SVC, LinearSVC\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.linear_model import Perceptron\nfrom sklearn.linear_model import SGDClassifier\nfrom sklearn.tree import DecisionTreeClassifier", "Acquire data\nThe Python Pandas packages helps us work with our datasets. We start by acquiring the training and testing datasets into Pandas DataFrames. We also combine these datasets to run certain operations on both datasets together.", "train_df = pd.read_csv('data/titanic-kaggle/train.csv')\ntest_df = pd.read_csv('data/titanic-kaggle/test.csv')\ncombine = [train_df, test_df]", "Analyze by describing data\nPandas also helps describe the datasets answering following questions early in our project.\nWhich features are available in the dataset?\nNoting the feature names for directly manipulating or analyzing these. 
These feature names are described on the Kaggle data page here.", "print(train_df.columns.values)", "Which features are categorical?\nThese values classify the samples into sets of similar samples. Within categorical features are the values nominal, ordinal, ratio, or interval based? Among other things this helps us select the appropriate plots for visualization.\n\nCategorical: Survived, Sex, and Embarked. Ordinal: Pclass.\n\nWhich features are numerical?\nThese values change from sample to sample. Within numerical features are the values discrete, continuous, or timeseries based? Among other things this helps us select the appropriate plots for visualization.\n\nContinuous: Age, Fare. Discrete: SibSp, Parch.", "# preview the data\ntrain_df.head()", "Which features are mixed data types?\nNumerical, alphanumeric data within same feature. These are candidates for correcting goal.\n\nTicket is a mix of numeric and alphanumeric data types. Cabin is alphanumeric.\n\nWhich features may contain errors or typos?\nThis is harder to review for a large dataset; however, reviewing a few samples from a smaller dataset may just tell us outright which features may require correcting.\n\nName feature may contain errors or typos as there are several ways used to describe a name including titles, round brackets, and quotes used for alternative or short names.", "train_df.tail()", "Which features contain blank, null or empty values?\nThese will require correcting.\n\nCabin > Age > Embarked features contain a number of null values in that order for the training dataset.\nCabin > Age are incomplete in case of test dataset.\n\nWhat are the data types for various features?\nHelping us during converting goal.\n\nSeven features are integers or floats. Six in case of test dataset.\nFive features are strings (object).", "train_df.info()\nprint('_'*40)\ntest_df.info()", "What is the distribution of numerical feature values across the samples?\nThis helps us determine, among other early insights, how representative is the training dataset of the actual problem domain.\n\nTotal samples are 891 or 40% of the actual number of passengers on board the Titanic (2,224).\nSurvived is a categorical feature with 0 or 1 values.\nAround 38% of samples survived, representative of the actual survival rate of 32%.\nMost passengers (> 75%) did not travel with parents or children.\nNearly 30% of the passengers had siblings and/or spouse aboard.\nFares varied significantly with few passengers (<1%) paying as high as $512.\nFew elderly passengers (<1%) within age range 65-80.", "train_df.describe()\n# Review survived rate using `percentiles=[.61, .62]` knowing our problem description mentions 38% survival rate.\n# Review Parch distribution using `percentiles=[.75, .8]`\n# SibSp distribution `[.68, .69]`\n# Age and Fare `[.1, .2, .3, .4, .5, .6, .7, .8, .9, .99]`", "What is the distribution of categorical features?\n\nNames are unique across the dataset (count=unique=891)\nSex variable has two possible values with 65% male (top=male, freq=577/count=891).\nCabin values have several duplicates across samples. Alternatively several passengers shared a cabin.\nEmbarked takes three possible values. S port used by most passengers (top=S)\nTicket feature has high ratio (22%) of duplicate values (unique=681).", "train_df.describe(include=['O'])", "Assumptions based on data analysis\nWe arrive at the following assumptions based on data analysis done so far. 
We may validate these assumptions further before taking appropriate actions.\nCorrelating.\nWe want to know how well does each feature correlate with Survival. We want to do this early in our project and match these quick correlations with modelled correlations later in the project.\nCompleting.\n\nWe may want to complete Age feature as it is definitely correlated to survival.\nWe may want to complete the Embarked feature as it may also correlate with survival or another important feature.\n\nCorrecting.\n\nTicket feature may be dropped from our analysis as it contains high ratio of duplicates (22%) and there may not be a correlation between Ticket and survival.\nCabin feature may be dropped as it is highly incomplete or contains many null values both in training and test dataset.\nPassengerId may be dropped from training dataset as it does not contribute to survival.\nName feature is relatively non-standard, may not contribute directly to survival, so maybe dropped.\n\nCreating.\n\nWe may want to create a new feature called Family based on Parch and SibSp to get total count of family members on board.\nWe may want to engineer the Name feature to extract Title as a new feature.\nWe may want to create new feature for Age bands. This turns a continous numerical feature into an ordinal categorical feature.\nWe may also want to create a Fare range feature if it helps our analysis.\n\nClassifying.\nWe may also add to our assumptions based on the problem description noted earlier.\n\nWomen (Sex=female) were more likely to have survived.\nChildren (Age<?) were more likely to have survived. \nThe upper-class passengers (Pclass=1) were more likely to have survived.\n\nAnalyze by pivoting features\nTo confirm some of our observations and assumptions, we can quickly analyze our feature correlations by pivoting features against each other. We can only do so at this stage for features which do not have any empty values. It also makes sense doing so only for features which are categorical (Sex), ordinal (Pclass) or discrete (SibSp, Parch) type.\n\nPclass We observe significant correlation (>0.5) among Pclass=1 and Survived (classifying #3). We decide to include this feature in our model.\nSex We confirm the observation during problem definition that Sex=female had very high survival rate at 74% (classifying #1).\nSibSp and Parch These features have zero correlation for certain values. It may be best to derive a feature or a set of features from these individual features (creating #1).", "pivot = train_df[['Pclass', 'Survived']]\npivot = pivot.groupby(['Pclass'], as_index=False).mean()\npivot.sort_values(by='Survived', ascending=False)\n\npivot = train_df[[\"Sex\", \"Survived\"]]\npivot = pivot.groupby(['Sex'], as_index=False).mean()\npivot.sort_values(by='Survived', ascending=False)\n\npivot = train_df[[\"SibSp\", \"Survived\"]]\npivot = pivot.groupby(['SibSp'], as_index=False).mean()\npivot.sort_values(by='Survived', ascending=False)\n\npivot = train_df[[\"Parch\", \"Survived\"]]\npivot = pivot.groupby(['Parch'], as_index=False).mean()\npivot.sort_values(by='Survived', ascending=False)", "Analyze by visualizing data\nNow we can continue confirming some of our assumptions using visualizations for analyzing the data.\nCorrelating numerical features\nLet us start by understanding correlations between numerical features and our solution goal (Survived).\nA histogram chart is useful for analyzing continous numerical variables like Age where banding or ranges will help identify useful patterns. 
The histogram can indicate distribution of samples using automatically defined bins or equally ranged bands. This helps us answer questions relating to specific bands (Did infants have better survival rate?)\nNote that x-axis in historgram visualizations represents the count of samples or passengers.\nObservations.\n\nInfants (Age <=4) had high survival rate.\nOldest passengers (Age = 80) survived.\nLarge number of 15-25 year olds did not survive.\nMost passengers are in 15-35 age range.\n\nDecisions.\nThis simple analysis confirms our assumptions as decisions for subsequent workflow stages.\n\nWe should consider Age (our assumption classifying #2) in our model training.\nComplete the Age feature for null values (completing #1).\nWe should band age groups (creating #3).", "g = sns.FacetGrid(train_df, col='Survived')\ng.map(plt.hist, 'Age', bins=20)", "Correlating numerical and ordinal features\nWe can combine multiple features for identifying correlations using a single plot. This can be done with numerical and categorical features which have numeric values.\nObservations.\n\nPclass=3 had most passengers, however most did not survive. Confirms our classifying assumption #2.\nInfant passengers in Pclass=2 and Pclass=3 mostly survived. Further qualifies our classifying assumption #2.\nMost passengers in Pclass=1 survived. Confirms our classifying assumption #3.\nPclass varies in terms of Age distribution of passengers.\n\nDecisions.\n\nConsider Pclass for model training.", "# grid = sns.FacetGrid(train_df, col='Pclass', hue='Survived')\ngrid = sns.FacetGrid(train_df, col='Survived', \n row='Pclass', size=2.2, aspect=1.6)\ngrid.map(plt.hist, 'Age', alpha=.5, bins=20)\ngrid.add_legend()", "Correlating categorical features\nNow we can correlate categorical features with our solution goal.\nObservations.\n\nFemale passengers had much better survival rate than males. Confirms classifying (#1).\nException in Embarked=C where males had higher survival rate. This could be a correlation between Pclass and Embarked and in turn Pclass and Survived, not necessarily direct correlation between Embarked and Survived.\nMales had better survival rate in Pclass=3 when compared with Pclass=2 for C and Q ports. Completing (#2).\nPorts of embarkation have varying survival rates for Pclass=3 and among male passengers. Correlating (#1).\n\nDecisions.\n\nAdd Sex feature to model training.\nComplete and add Embarked feature to model training.", "# grid = sns.FacetGrid(train_df, col='Embarked')\ngrid = sns.FacetGrid(train_df, row='Embarked', size=2.2, aspect=1.6)\ngrid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', palette='deep')\ngrid.add_legend()", "Correlating categorical and numerical features\nWe may also want to correlate categorical features (with non-numeric values) and numeric features. We can consider correlating Embarked (Categorical non-numeric), Sex (Categorical non-numeric), Fare (Numeric continuous), with Survived (Categorical numeric).\nObservations.\n\nHigher fare paying passengers had better survival. Confirms our assumption for creating (#4) fare ranges.\nPort of embarkation correlates with survival rates. 
Confirms correlating (#1) and completing (#2).\n\nDecisions.\n\nConsider banding Fare feature.", "# grid = sns.FacetGrid(train_df, col='Embarked', hue='Survived', palette={0: 'k', 1: 'w'})\ngrid = sns.FacetGrid(train_df, row='Embarked', \n col='Survived', size=2.2, aspect=1.6)\ngrid.map(sns.barplot, 'Sex', 'Fare', alpha=.5, ci=None)\ngrid.add_legend()", "Wrangle data\nWe have collected several assumptions and decisions regarding our datasets and solution requirements. So far we did not have to change a single feature or value to arrive at these. Let us now execute our decisions and assumptions for correcting, creating, and completing goals.\nCorrecting by dropping features\nThis is a good starting goal to execute. By dropping features we are dealing with fewer data points. Speeds up our notebook and eases the analysis.\nBased on our assumptions and decisions we want to drop the Cabin (correcting #2) and Ticket (correcting #1) features.\nNote that where applicable we perform operations on both training and testing datasets together to stay consistent.", "print(\"Before\", train_df.shape, test_df.shape, \n combine[0].shape, combine[1].shape)\n\ntrain_df = train_df.drop(['Ticket', 'Cabin'], axis=1)\ntest_df = test_df.drop(['Ticket', 'Cabin'], axis=1)\ncombine = [train_df, test_df]\n\n\"After\", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape", "Creating new feature extracting from existing\nWe want to analyze if Name feature can be engineered to extract titles and test correlation between titles and survival, before dropping Name and PassengerId features.\nIn the following code we extract Title feature using regular expressions. The RegEx pattern (\\w+\\.) matches the first word which ends with a dot character within Name feature. The expand=False flag returns a DataFrame.\nObservations.\nWhen we plot Title, Age, and Survived, we note the following observations.\n\nMost titles band Age groups accurately. For example: Master title has Age mean of 5 years.\nSurvival among Title Age bands varies slightly.\nCertain titles mostly survived (Mme, Lady, Sir) or did not (Don, Rev, Jonkheer).\n\nDecision.\n\nWe decide to retain the new Title feature for model training.", "for dataset in combine:\n dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\\.', expand=False)\n\npd.crosstab(train_df['Title'], train_df['Sex'])", "We can replace many titles with a more common name or classify them as Rare.", "for dataset in combine:\n dataset['Title'] = dataset['Title'].replace([\n 'Lady', 'Countess','Capt', 'Col',\n 'Don', 'Dr', 'Major', 'Rev', 'Sir', \n 'Jonkheer', 'Dona'], 'Rare')\n\n dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')\n dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')\n dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')\n \npivot = train_df[['Title', 'Survived']]\npivot = pivot.groupby(['Title'], as_index=False).mean()\npivot.sort_values(by='Survived', ascending=False)", "We can convert the categorical titles to ordinal.", "title_mapping = {\"Mr\": 1, \"Miss\": 2, \"Mrs\": 3, \"Master\": 4, \"Rare\": 5}\nfor dataset in combine:\n dataset['Title'] = dataset['Title'].map(title_mapping)\n dataset['Title'] = dataset['Title'].fillna(0)\n\ntrain_df.head()", "Now we can safely drop the Name feature from training and testing datasets. 
We also do not need the PassengerId feature in the training dataset.", "train_df = train_df.drop(['Name', 'PassengerId'], axis=1)\ntest_df = test_df.drop(['Name'], axis=1)\ncombine = [train_df, test_df]\ntrain_df.shape, test_df.shape", "Converting a categorical feature\nNow we can convert features which contain strings to numerical values. This is required by most model algorithms. Doing so will also help us in achieving the feature completing goal.\nLet us start by converting the Sex feature to numeric values where female=1 and male=0.", "for dataset in combine:\n dataset['Sex'] = dataset['Sex'].map( {'female': 1, 'male': 0} ).astype(int)\n\ntrain_df.head()", "Completing a numerical continuous feature\nNow we should start estimating and completing features with missing or null values. We will first do this for the Age feature.\nWe can consider three methods to complete a numerical continuous feature.\n\n\nA simple way is to generate random numbers between mean and standard deviation.\n\n\nA more accurate way of guessing missing values is to use other correlated features. In our case we note correlation among Age, Sex, and Pclass. Guess Age values using median values for Age across sets of Pclass and Sex feature combinations. So, median Age for Pclass=1 and Sex=0, Pclass=1 and Sex=1, and so on...\n\n\nCombine methods 1 and 2. So instead of guessing age values based on median, use random numbers between mean and standard deviation, based on sets of Pclass and Sex combinations.\n\n\nMethods 1 and 3 will introduce random noise into our models. The results from multiple executions might vary. We will prefer method 2.", "# grid = sns.FacetGrid(train_df, col='Pclass', hue='Gender')\ngrid = sns.FacetGrid(train_df, row='Pclass', \n col='Sex', size=2.2, aspect=1.6)\ngrid.map(plt.hist, 'Age', alpha=.5, bins=20)\ngrid.add_legend()", "Let us start by preparing an empty array to contain guessed Age values based on Pclass x Sex combinations.", "guess_ages = np.zeros((2,3))\nguess_ages", "Now we iterate over Sex (0 or 1) and Pclass (1, 2, 3) to calculate guessed values of Age for the six combinations.", "for dataset in combine:\n for i in range(0, 2):\n for j in range(0, 3):\n guess_df = dataset[(dataset['Sex'] == i) & \\\n (dataset['Pclass'] == j+1)]['Age'].dropna()\n\n # age_mean = guess_df.mean()\n # age_std = guess_df.std()\n # age_guess = rnd.uniform(age_mean - age_std, age_mean + age_std)\n\n age_guess = guess_df.median()\n\n # Convert random age float to nearest .5 age\n guess_ages[i,j] = int( age_guess/0.5 + 0.5 ) * 0.5\n \n for i in range(0, 2):\n for j in range(0, 3):\n dataset.loc[ (dataset.Age.isnull()) & (dataset.Sex == i) & (dataset.Pclass == j+1),\\\n 'Age'] = guess_ages[i,j]\n\n dataset['Age'] = dataset['Age'].astype(int)\n\ntrain_df.head()", "Let us create Age bands and determine correlations with Survived.", "train_df['AgeBand'] = pd.cut(train_df['Age'], 5)\npivot = train_df[['AgeBand', 'Survived']]\npivot = pivot.groupby(['AgeBand'], as_index=False).mean()\npivot.sort_values(by='AgeBand', ascending=True)", "Let us replace Age with ordinals based on these bands.", "for dataset in combine: \n dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0\n dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1\n dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2\n dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3\n dataset.loc[ dataset['Age'] > 64, 'Age'] = 4\ntrain_df.head()", "We can now remove the AgeBand feature.", "train_df = 
train_df.drop(['AgeBand'], axis=1)\ncombine = [train_df, test_df]\ntrain_df.head()", "Create new feature combining existing features\nWe can create a new feature for FamilySize which combines Parch and SibSp. This will enable us to drop Parch and SibSp from our datasets.", "for dataset in combine:\n dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1\n\npivot = train_df[['FamilySize', 'Survived']]\npivot = pivot.groupby(['FamilySize'], as_index=False).mean()\npivot.sort_values(by='Survived', ascending=False)", "We can create another feature called IsAlone based on the FamilySize feature we just created.", "for dataset in combine:\n dataset['IsAlone'] = 0\n dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1\n\ntrain_df[['IsAlone', 'Survived']].groupby(['IsAlone'], as_index=False).mean()", "Let us drop Parch, SibSp, and FamilySize features in favor of IsAlone.", "train_df = train_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)\ntest_df = test_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)\ncombine = [train_df, test_df]\n\ntrain_df.head()", "We can also create an artificial feature combining Pclass and Age.", "for dataset in combine:\n dataset['Age*Class'] = dataset.Age * dataset.Pclass\n\ntrain_df.loc[:, ['Age*Class', 'Age', 'Pclass']].head(10)", "Completing a categorical feature\nThe Embarked feature takes S, Q, C values based on port of embarkation. Our training dataset has two missing values. We simply fill these with the most common occurrence.", "freq_port = train_df.Embarked.dropna().mode()[0]\nfreq_port\n\nfor dataset in combine:\n dataset['Embarked'] = dataset['Embarked'].fillna(freq_port)\n \npivot = train_df[['Embarked', 'Survived']]\npivot = pivot.groupby(['Embarked'], as_index=False).mean()\npivot.sort_values(by='Survived', ascending=False)", "Converting categorical feature to numeric\nWe can now convert the Embarked feature to a new numeric feature.", "for dataset in combine:\n dataset['Embarked'] = dataset['Embarked'].map( \n {'S': 0, 'C': 1, 'Q': 2} ).astype(int)\n\ntrain_df.head()", "Quick completing and converting a numeric feature\nWe can now complete the Fare feature for a single missing value in the test dataset using the median of the available values for this feature. We do this in a single line of code.\nNote that we are not creating an intermediate new feature or doing any further analysis for correlation to guess the missing feature as we are replacing only a single value. 
The completion goal achieves the desired requirement for the model algorithm to operate on non-null values.\nWe may also want to round off the fare to two decimals as it represents currency.", "test_df['Fare'].fillna(test_df['Fare'].dropna().median(), inplace=True)\ntest_df.head()", "We can now create a FareBand temporary or reference feature.", "train_df['FareBand'] = pd.qcut(train_df['Fare'], 4)\npivot = train_df[['FareBand', 'Survived']]\npivot = pivot.groupby(['FareBand'], as_index=False).mean()\npivot.sort_values(by='FareBand', ascending=True)", "Convert the Fare feature to ordinal values based on the FareBand.", "for dataset in combine:\n dataset.loc[ dataset['Fare'] <= 7.91, 'Fare'] = 0\n dataset.loc[(dataset['Fare'] > 7.91) & \n (dataset['Fare'] <= 14.454), 'Fare'] = 1\n dataset.loc[(dataset['Fare'] > 14.454) & \n (dataset['Fare'] <= 31), 'Fare'] = 2\n dataset.loc[ dataset['Fare'] > 31, 'Fare'] = 3\n dataset['Fare'] = dataset['Fare'].astype(int)\n\ntrain_df = train_df.drop(['FareBand'], axis=1)\ncombine = [train_df, test_df]\n \ntrain_df.head(10)", "And the test dataset.", "test_df.head(10)", "Model, predict and solve\nNow we are ready to train a model and predict the required solution. There are 60+ predictive modelling algorithms to choose from. We must understand the type of problem and solution requirement to narrow down to a select few models which we can evaluate. Our problem is a classification and regression problem. We want to identify the relationship between the output (Survived or not) and other variables or features (Gender, Age, Port...). We are also performing a category of machine learning which is called supervised learning as we are training our model with a given dataset. With these two criteria - Supervised Learning plus Classification and Regression, we can narrow down our choice of models to a few. These include:\n\nLogistic Regression\nKNN or k-Nearest Neighbors\nSupport Vector Machines\nNaive Bayes classifier\nDecision Tree\nRandom Forest\nPerceptron\nArtificial neural network\nRVM or Relevance Vector Machine", "X_train = train_df.drop(\"Survived\", axis=1)\nY_train = train_df[\"Survived\"]\nX_test = test_df.drop(\"PassengerId\", axis=1).copy()\nX_train.shape, Y_train.shape, X_test.shape", "Logistic Regression is a useful model to run early in the workflow. Logistic regression measures the relationship between the categorical dependent variable (feature) and one or more independent variables (features) by estimating probabilities using a logistic function, which is the cumulative logistic distribution. Reference Wikipedia.\nNote the confidence score generated by the model based on our training dataset.", "# Logistic Regression\n\nlogreg = LogisticRegression()\nlogreg.fit(X_train, Y_train)\nY_pred = logreg.predict(X_test)\nacc_log = round(logreg.score(X_train, Y_train) * 100, 2)\nacc_log", "We can use Logistic Regression to validate our assumptions and decisions for feature creating and completing goals. 
This can be done by calculating the coefficient of the features in the decision function.\nPositive coefficients increase the log-odds of the response (and thus increase the probability), and negative coefficients decrease the log-odds of the response (and thus decrease the probability).\n\nSex is highest positivie coefficient, implying as the Sex value increases (male: 0 to female: 1), the probability of Survived=1 increases the most.\nInversely as Pclass increases, probability of Survived=1 decreases the most.\nThis way Age*Class is a good artificial feature to model as it has second highest negative correlation with Survived.\nSo is Title as second highest positive correlation.", "coeff_df = pd.DataFrame(train_df.columns.delete(0))\ncoeff_df.columns = ['Feature']\ncoeff_df[\"Correlation\"] = pd.Series(logreg.coef_[0])\n\ncoeff_df.sort_values(by='Correlation', ascending=False)", "Next we model using Support Vector Machines which are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training samples, each marked as belonging to one or the other of two categories, an SVM training algorithm builds a model that assigns new test samples to one category or the other, making it a non-probabilistic binary linear classifier. Reference Wikipedia.\nNote that the model generates a confidence score which is higher than Logistics Regression model.", "# Support Vector Machines\n\nsvc = SVC()\nsvc.fit(X_train, Y_train)\nY_pred = svc.predict(X_test)\nacc_svc = round(svc.score(X_train, Y_train) * 100, 2)\nacc_svc", "In pattern recognition, the k-Nearest Neighbors algorithm (or k-NN for short) is a non-parametric method used for classification and regression. A sample is classified by a majority vote of its neighbors, with the sample being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor. Reference Wikipedia.\nKNN confidence score is better than Logistics Regression but worse than SVM.", "knn = KNeighborsClassifier(n_neighbors = 3)\nknn.fit(X_train, Y_train)\nY_pred = knn.predict(X_test)\nacc_knn = round(knn.score(X_train, Y_train) * 100, 2)\nacc_knn", "In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features) in a learning problem. Reference Wikipedia.\nThe model generated confidence score is the lowest among the models evaluated so far.", "# Gaussian Naive Bayes\n\ngaussian = GaussianNB()\ngaussian.fit(X_train, Y_train)\nY_pred = gaussian.predict(X_test)\nacc_gaussian = round(gaussian.score(X_train, Y_train) * 100, 2)\nacc_gaussian", "The perceptron is an algorithm for supervised learning of binary classifiers (functions that can decide whether an input, represented by a vector of numbers, belongs to some specific class or not). It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector. The algorithm allows for online learning, in that it processes elements in the training set one at a time. 
Reference Wikipedia.", "# Perceptron\n\nperceptron = Perceptron()\nperceptron.fit(X_train, Y_train)\nY_pred = perceptron.predict(X_test)\nacc_perceptron = round(perceptron.score(X_train, Y_train) * 100, 2)\nacc_perceptron\n\n# Linear SVC\n\nlinear_svc = LinearSVC()\nlinear_svc.fit(X_train, Y_train)\nY_pred = linear_svc.predict(X_test)\nacc_linear_svc = round(linear_svc.score(X_train, Y_train) * 100, 2)\nacc_linear_svc\n\n# Stochastic Gradient Descent\n\nsgd = SGDClassifier()\nsgd.fit(X_train, Y_train)\nY_pred = sgd.predict(X_test)\nacc_sgd = round(sgd.score(X_train, Y_train) * 100, 2)\nacc_sgd", "This model uses a decision tree as a predictive model which maps features (tree branches) to conclusions about the target value (tree leaves). Tree models where the target variable can take a finite set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. Reference Wikipedia.\nThe model confidence score is the highest among models evaluated so far.", "# Decision Tree\n\ndecision_tree = DecisionTreeClassifier()\ndecision_tree.fit(X_train, Y_train)\nY_pred = decision_tree.predict(X_test)\nacc_decision_tree = round(decision_tree.score(X_train, Y_train) * 100, 2)\nacc_decision_tree", "The next model Random Forests is one of the most popular. Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks, that operate by constructing a multitude of decision trees (n_estimators=100) at training time and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees. Reference Wikipedia.\nThe model confidence score is the highest among models evaluated so far. We decide to use this model's output (Y_pred) for creating our competition submission of results.", "# Random Forest\n\nrandom_forest = RandomForestClassifier(n_estimators=100)\nrandom_forest.fit(X_train, Y_train)\nY_pred = random_forest.predict(X_test)\nrandom_forest.score(X_train, Y_train)\nacc_random_forest = round(random_forest.score(X_train, Y_train) * 100, 2)\nacc_random_forest", "Model evaluation\nWe can now rank our evaluation of all the models to choose the best one for our problem. While both Decision Tree and Random Forest score the same, we choose to use Random Forest as they correct for decision trees' habit of overfitting to their training set.", "models = pd.DataFrame({\n 'Model': ['Support Vector Machines', 'KNN', 'Logistic Regression', \n 'Random Forest', 'Naive Bayes', 'Perceptron', \n 'Stochastic Gradient Decent', 'Linear SVC', \n 'Decision Tree'],\n 'Score': [acc_svc, acc_knn, acc_log, \n acc_random_forest, acc_gaussian, acc_perceptron, \n acc_sgd, acc_linear_svc, acc_decision_tree]})\nmodels.sort_values(by='Score', ascending=False)\n\nsubmission = pd.DataFrame({\n \"PassengerId\": test_df[\"PassengerId\"],\n \"Survived\": Y_pred\n })\nsubmission.to_csv('data/titanic-kaggle/submission.csv', index=False)", "Our submission to the competition site Kaggle results in scoring 3,883 of 6,082 competition entries. This result is indicative while the competition is running. This result only accounts for part of the submission dataset. Not bad for our first attempt. 
Any suggestions to improve our score are most welcome.\nReferences\nThis notebook has been created based on great work done solving the Titanic competition and other sources.\n\nA journey through Titanic\nGetting Started with Pandas: Kaggle's Titanic Competition\nTitanic Best Working Classifier" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
stargaser/advancedviz2016
Data_apps.ipynb
mit
[ "Data apps\nMuch of the motivation for this section came from the experience of making data apps in R on the Shiny platform.\nA simple app for querying the WISE catalog and displaying sources is this one I wrote on a Sunday afternoon for a data science class.\nA sophisticated Shiny app is this one for photometric redshift estimation using generalized linear models.\nHow to make apps in Python?\nUnfortunately, for publishing data apps, there is not a Python equivalent to Shiny, that makes it easy to host apps and is free of charge. This blog post shows an example from a commercial enterprise.\nFor now, we will skip the publishing aspect and focus on making a personal app that you run yourself in the notebook (or can give to colleagues to run). \nBokeh widgets\nAdding Widgets\nBokeh provides a simple default set of widgets, largely based off the Bootstrap JavaScript library. In the future, it will be possible for users to wrap and use other widget libraries, or their own custom widgets. By themselves, most widgets are not useful. There are two ways to use widgets to drive interactions:\n\nUse the CustomJS callback. This will work in static HTML documents.\nUse the bokeh-server and set up event handlers with .on_change.\n\nThe current value of interactive widgets is available from the .value attribute.", "from bokeh.io import vform\nfrom bokeh.models import CustomJS, ColumnDataSource, Slider\nfrom bokeh.plotting import Figure, output_file, show\n\noutput_file(\"callback.html\")\n\nx = [x*0.005 for x in range(0, 200)]\ny = x\n\nsource = ColumnDataSource(data=dict(x=x, y=y))\n\nplot = Figure(plot_width=400, plot_height=400)\nplot.line('x', 'y', source=source, line_width=3, line_alpha=0.6)\n\ncallback = CustomJS(args=dict(source=source), code=\"\"\"\n var data = source.get('data');\n var f = cb_obj.get('value')\n x = data['x']\n y = data['y']\n for (i = 0; i < x.length; i++) {\n y[i] = Math.pow(x[i], f)\n }\n source.trigger('change');\n \"\"\")\n\nslider = Slider(start=0.1, end=4, value=1, step=.1, title=\"power\", callback=callback)\n\nlayout = vform(slider, plot)\n\nshow(layout)", "The next cell shows the start of how to set up something like the WISE app.", "from bokeh.models.widgets import Slider, RadioGroup, Button\nfrom bokeh.io import output_file, show, vform\nfrom bokeh.plotting import figure\n\noutput_file(\"queryWise.html\")\n\n\n\nband = RadioGroup(labels=[\"3.5 microns\", \"4.5 microns\",\n \"12 microns\", \"22 microns\"], active=0)\n\nfov = Slider(start=5, end=15, value=5, step=.25, title=\"Field of View (arcmin)\")\nra = Slider(start=0, end=359, value=120, step=1, title=\"Right Ascension (degrees)\")\ndec = Slider(start=-90, end=90, value=0, step=1, title=\"Declination (degrees)\")\nbutton = Button(label=\"Submit\")\n\np = figure(plot_width=400, plot_height=400,\n tools=\"tap\", title=\"WISE sources\")\n\np.circle(ra.value, dec.value)\nshow(vform(fov,band,ra,dec,button, p))\n", "The rest of the notebook is not currently working\nAn attempt at a simple app\nSince the slider does not display...there is some problem with my installation for using widgets in the notebook.", "from ipywidgets import *\n\nfrom IPython.display import display\nfov = FloatSlider(value = 5.0,\n min = 5.0,\n max = 15.0,\n step = 0.25)\ndisplay(fov)", "Example from the blog post", "%matplotlib notebook\nimport pandas as pd \nimport matplotlib.pyplot as plt \nfrom ipywidgets import * \nfrom IPython.display import display \n##from jnotebook import display\nfrom IPython.html import widgets 
\nplt.style.use('ggplot')\n\nNUMBER_OF_PINGS = 4\n\n#displaying the text widget\ntext = widgets.Text(description=\"Domain to ping\", width=200) \ndisplay(text)\n\n#preparing the plot\ndata = pd.DataFrame() \nx = range(1,NUMBER_OF_PINGS+1) \nplots = dict() \nfig, ax = plt.subplots() \nplt.xlabel('iterations') \nplt.ylabel('ms') \nplt.xticks(x) \nplt.show()\n\n#preparing a container to put in created checkbox per domain\ncheckboxes = [] \ncb_container = widgets.HBox() \ndisplay(cb_container)\n\n#add button that updates the graph based on the checkboxes\nbutton = widgets.Button(description=\"Update the graph\")\n\n#function to deal with the added domain name\ndef handle_submit(sender): \n #a part of the magic inside python : pinging\n res = !ping -c {NUMBER_OF_PINGS} {text.value}\n hits = res.grep('64 bytes').fields(-2).s.replace(\"time=\",\"\").split()\n if len(hits) == 0:\n print(\"Domain gave error on pinging\")\n else:\n #rebuild plot based on ping result\n data[text.value] = hits\n data[text.value] = data[text.value].astype(float)\n plots[text.value], = ax.plot(x, data[text.value], label=text.value)\n plt.legend()\n plt.draw()\n #add a new checkbox for the new domain\n checkboxes.append(widgets.Checkbox(description = text.value, value=True, width=90))\n cb_container.children=[i for i in checkboxes]\n if len(checkboxes) == 1:\n display(button)\n\n#function to deal with the checkbox update button \ndef on_button_clicked(b): \n for c in cb_container.children:\n if not c.value:\n plots[c.description].set_visible(False)\n else:\n plots[c.description].set_visible(True)\n plt.legend()\n plt.draw()\n\nbutton.on_click(on_button_clicked) \ntext.on_submit(handle_submit) \nplt.show()\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/vertex-ai-samples
community-content/tf_keras_image_classification_distributed_multi_worker_with_vertex_sdk/multi_worker_vertex_training_on_cpu_with_custom_container.ipynb
apache-2.0
[ "# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "TF-Keras Image Classification Distributed Multi-Worker Training on CPU using Vertex Training with Custom Container\n<table align=\"left\">\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/community-content/tf_keras_image_classification_distributed_multi_worker_with_vertex_sdk/multi_worker_vertex_training_on_cpu_with_custom_container.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n</table>\n\nSetup", "PROJECT_ID = \"YOUR PROJECT ID\"\nBUCKET_NAME = \"gs://YOUR BUCKET NAME\"\nREGION = \"YOUR REGION\"\nSERVICE_ACCOUNT = \"YOUR SERVICE ACCOUNT\"\n\ncontent_name = \"tf-keras-img-cls-dist-multi-worker-cpu-cust-cont\"", "Vertex Training using Vertex SDK and Custom Container\nBuild Custom Container", "hostname = \"gcr.io\"\nimage_name = content_name\ntag = \"latest\"\n\ncustom_container_image_uri = f\"{hostname}/{PROJECT_ID}/{image_name}:{tag}\"\n\n! cd trainer && docker build -t $custom_container_image_uri -f cpu.Dockerfile .\n\n! docker run --rm $custom_container_image_uri --epochs 2 --local-mode\n\n! docker push $custom_container_image_uri\n\n! gcloud container images list --repository $hostname/$PROJECT_ID", "Initialize Vertex SDK", "! pip install -r requirements.txt\n\nfrom google.cloud import aiplatform\n\naiplatform.init(\n project=PROJECT_ID,\n staging_bucket=BUCKET_NAME,\n location=REGION,\n)", "Create a Vertex Tensorboard Instance", "tensorboard = aiplatform.Tensorboard.create(\n display_name=content_name,\n)", "Option: Use a Previously Created Vertex Tensorboard Instance\ntensorboard_name = \"Your Tensorboard Resource Name or Tensorboard ID\"\ntensorboard = aiplatform.Tensorboard(tensorboard_name=tensorboard_name)\nRun a Vertex SDK CustomContainerTrainingJob", "display_name = content_name\ngcs_output_uri_prefix = f\"{BUCKET_NAME}/{display_name}\"\n\nreplica_count = 4\nmachine_type = \"n1-standard-4\"\n\ncontainer_args = [\n \"--epochs\",\n \"50\",\n \"--batch-size\",\n \"32\",\n]\n\ncustom_container_training_job = aiplatform.CustomContainerTrainingJob(\n display_name=display_name,\n container_uri=custom_container_image_uri,\n)\n\ncustom_container_training_job.run(\n args=container_args,\n base_output_dir=gcs_output_uri_prefix,\n replica_count=replica_count,\n machine_type=machine_type,\n tensorboard=tensorboard.resource_name,\n service_account=SERVICE_ACCOUNT,\n)\n\nprint(f\"Custom Training Job Name: {custom_container_training_job.resource_name}\")\nprint(f\"GCS Output URI Prefix: {gcs_output_uri_prefix}\")", "Training Output Artifact", "! gsutil ls $gcs_output_uri_prefix", "Clean Up Artifact", "! gsutil rm -rf $gcs_output_uri_prefix" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
blua/deep-learning
tv-script-generation/olds_ipnbs/dlnd_tv_script_generation.ipynb
mit
[ "TV Script Generation\nIn this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.\nGet the Data\nThe data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n# Ignore notice, since we don't use it for analysing the data\ntext = text[81:]", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))", "Implement Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:\n- Lookup Table\n- Tokenize Punctuation\nLookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call vocab_to_int\n- Dictionary to go from the id to word, we'll call int_to_vocab\nReturn these dictionaries in the following tuple (vocab_to_int, int_to_vocab)", "import numpy as np\nimport problem_unittests as tests\n\ndef create_lookup_tables(text):\n \"\"\"\n Create lookup tables for vocabulary\n :param text: The text of tv scripts split into words\n :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n \"\"\"\n # TODO: Implement Function\n \n vocab = set(text)\n vocab_to_int = {w: i for i, w in enumerate(vocab)}\n int_to_vocab = dict(enumerate(vocab))\n \n return (vocab_to_int, int_to_vocab)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)", "Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\nImplement the function token_lookup to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( . )\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! )\n- Question mark ( ? 
)\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\nThis dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token \"dash\", try using something like \"||dash||\".", "def token_lookup():\n \"\"\"\n Generate a dict to turn punctuation into a token.\n :return: Tokenize dictionary where the key is the punctuation and the value is the token\n \"\"\"\n # TODO: Implement Function\n \n punctuation_dict = {\n '.' : '||Period||',\n ',' : '||Comma||',\n '\"' : '||Quotation_mark||',\n ';' : '||Semicolon||',\n '!' : '||Exclamation_mark||',\n '?' : '||Question_mark||',\n '(' : '||Left_parentheses||',\n ')' : '||Right_parentheses||',\n '--' : '||Dash||',\n '\\n' : '||Return||'\n }\n \n return punctuation_dict\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()", "Build the Neural Network\nYou'll build the components necessary to build a RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\nCheck the Version of TensorFlow and Access to GPU", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Input\nImplement the get_inputs() function to create TF Placeholders for the Neural Network. 
It should create the following placeholders:\n- Input text placeholder named \"input\" using the TF Placeholder name parameter.\n- Targets placeholder\n- Learning Rate placeholder\nReturn the placeholders in the following tuple: (Input, Targets, LearningRate)", "def get_inputs():\n    \"\"\"\n    Create TF Placeholders for input, targets, and learning rate.\n    :return: Tuple (input, targets, learning rate)\n    \"\"\"\n    # TODO: Implement Function\n    \n    inputs = tf.placeholder(tf.int32, shape=[None, None], name='input')\n    targets = tf.placeholder(tf.int32, shape=[None, None], name='targets')\n    learningrate = tf.placeholder(tf.float32, name='learningrate')\n    return (inputs, targets, learningrate)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_inputs(get_inputs)", "Build RNN Cell and Initialize\nStack one or more BasicLSTMCells in a MultiRNNCell.\n- The RNN size should be set using rnn_size\n- Initialize Cell State using the MultiRNNCell's zero_state() function\n - Apply the name \"initial_state\" to the initial state using tf.identity()\nReturn the cell and initial state in the following tuple (Cell, InitialState)", "def get_init_cell(batch_size, rnn_size):\n    \"\"\"\n    Create an RNN Cell and initialize it.\n    :param batch_size: Size of batches\n    :param rnn_size: Size of RNNs\n    :return: Tuple (cell, initial state)\n    \"\"\"\n    # TODO: Implement Function\n\n    \n    LSTM = tf.contrib.rnn.BasicLSTMCell(rnn_size, state_is_tuple=True)\n    drop = tf.contrib.rnn.DropoutWrapper(LSTM,output_keep_prob=0.8)\n    # Stack a small, fixed number of LSTM layers; [drop]*rnn_size would stack rnn_size layers\n    num_layers = 2\n    cell = tf.contrib.rnn.MultiRNNCell([drop]*num_layers)\n    initial_state = cell.zero_state(batch_size, tf.float32)\n    initial_state = tf.identity(initial_state, name='initial_state')\n    \n    return cell, initial_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_init_cell(get_init_cell)", "Word Embedding\nApply embedding to input_data using TensorFlow. Return the embedded sequence.", "def get_embed(input_data, vocab_size, embed_dim):\n    \"\"\"\n    Create embedding for <input_data>.\n    :param input_data: TF placeholder for text input.\n    :param vocab_size: Number of words in vocabulary.\n    :param embed_dim: Number of embedding dimensions\n    :return: Embedded input.\n    \"\"\"\n    # TODO: Implement Function\n    \n    embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))\n    embed = tf.nn.embedding_lookup(embedding, input_data)\n    \n    return embed\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_embed(get_embed)", "Build RNN\nYou created an RNN Cell in the get_init_cell() function. 
Time to use the cell to create a RNN.\n- Build the RNN using the tf.nn.dynamic_rnn()\n - Apply the name \"final_state\" to the final state using tf.identity()\nReturn the outputs and final_state state in the following tuple (Outputs, FinalState)", "def build_rnn(cell, inputs):\n \"\"\"\n Create a RNN using a RNN Cell\n :param cell: RNN Cell\n :param inputs: Input text data\n :return: Tuple (Outputs, Final State)\n \"\"\"\n # TODO: Implement Function\n \n outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)\n\n final_state = tf.identity(final_state, name='final_state')\n \n\n return outputs, final_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_rnn(build_rnn)", "Build the Neural Network\nApply the functions you implemented above to:\n- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.\n- Build RNN using cell and your build_rnn(cell, inputs) function.\n- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.\nReturn the logits and final state in the following tuple (Logits, FinalState)", "def build_nn(cell, rnn_size, input_data, vocab_size):\n \"\"\"\n Build part of the neural network\n :param cell: RNN cell\n :param rnn_size: Size of rnns\n :param input_data: Input data\n :param vocab_size: Vocabulary size\n :return: Tuple (Logits, FinalState)\n \"\"\"\n # TODO: Implement Function\n\n embed_dim = 200\n\n embedded = get_embed(input_data, vocab_size, embed_dim)\n outputs, final_state = build_rnn(cell, embedded)\n\n weights = tf.Variable(tf.truncated_normal(shape=(rnn_size,vocab_size),mean=0.0,stddev=0.1))\n biases = tf.Variable(tf.zeros(shape=[vocab_size]))\n \n def mul_fn(current_input):\n return tf.matmul(current_input, weights) + biases\n \n logits = tf.map_fn(mul_fn, outputs)\n \n return logits, final_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_nn(build_nn)", "Batches\nImplement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). 
Each batch contains two elements:\n- The first element is a single batch of input with the shape [batch size, sequence length]\n- The second element is a single batch of targets with the shape [batch size, sequence length]\nIf you can't fill the last batch with enough data, drop the last batch.\nFor exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:\n```\n[\n # First Batch\n [\n # Batch of Input\n [[ 1 2 3], [ 7 8 9]],\n # Batch of targets\n [[ 2 3 4], [ 8 9 10]]\n ],\n# Second Batch\n [\n # Batch of Input\n [[ 4 5 6], [10 11 12]],\n # Batch of targets\n [[ 5 6 7], [11 12 13]]\n ]\n]\n```", "def get_batches(int_text, batch_size, seq_length):\n \"\"\"\n Return batches of input and target\n :param int_text: Text with the words replaced by their ids\n :param batch_size: The size of batch\n :param seq_length: The length of sequence\n :return: Batches as a Numpy array\n \"\"\"\n # TODO: Implement Function\n\n n_batches = int(len(int_text) / (batch_size * seq_length))\n\n # Drop the last few characters to make only full batches\n xdata = np.array(int_text[: n_batches * batch_size * seq_length])\n ydata = np.array(int_text[1: n_batches * batch_size * seq_length + 1])\n\n x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)\n y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)\n\n return np.array(list(zip(x_batches, y_batches)))\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\" \ntests.test_get_batches(get_batches)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet num_epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet seq_length to the length of sequence.\nSet learning_rate to the learning rate.\nSet show_every_n_batches to the number of batches the neural network should print progress.", "# Number of Epochs\nnum_epochs = 5\n# Batch Size\nbatch_size = 100\n# RNN Size\nrnn_size = 200\n# Sequence Length\nseq_length = 15\n# Learning Rate\nlearning_rate = 0.01\n# Show stats for every n number of batches\nshow_every_n_batches = 30\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nsave_dir = './save'", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)\n\n # Probabilities for generating words\n probs = tf.nn.softmax(logits, name='probs')\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]\n train_op = optimizer.apply_gradients(capped_gradients)", "Train\nTrain the neural network on the preprocessed data. 
If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n\n for batch_i, (x, y) in enumerate(batches):\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')", "Save Parameters\nSave seq_length and save_dir for generating a new TV script.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()", "Implement Generate Functions\nGet Tensors\nGet tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\nReturn the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)", "def get_tensors(loaded_graph):\n \"\"\"\n Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n :param loaded_graph: TensorFlow graph loaded from file\n :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n \"\"\"\n # TODO: Implement Function\n \n input_tensor = loaded_graph.get_tensor_by_name(\"input:0\")\n initial_state_tensor = loaded_graph.get_tensor_by_name(\"initial_state:0\")\n final_state_tensor = loaded_graph.get_tensor_by_name(\"final_state:0\")\n probs_tensor = loaded_graph.get_tensor_by_name(\"probs:0\")\n \n return input_tensor, initial_state_tensor, final_state_tensor, probs_tensor\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_tensors(get_tensors)", "Choose Word\nImplement the pick_word() function to select the next word using probabilities.", "def pick_word(probabilities, int_to_vocab):\n \"\"\"\n Pick the next word in the generated text\n :param probabilities: Probabilites of the next word\n :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n :return: String of the predicted word\n \"\"\"\n # TODO: Implement Function\n \n next_word = np.random.choice(list(int_to_vocab.values()), p=probabilities)\n \n return next_word\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_pick_word(pick_word)", "Generate TV Script\nThis will generate the TV script for you. 
Set gen_length to the length of TV script you want to generate.", "gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_dir + '.meta')\n loader.restore(sess, load_dir)\n\n # Get Tensors from loaded model\n input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n # Sentences generation setup\n gen_sentences = [prime_word + ':']\n prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n # Generate sentences\n for n in range(gen_length):\n # Dynamic Input\n dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n dyn_seq_length = len(dyn_input[0])\n\n # Get Prediction\n probabilities, prev_state = sess.run(\n [probs, final_state],\n {input_text: dyn_input, initial_state: prev_state})\n \n pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)\n\n gen_sentences.append(pred_word)\n \n # Remove tokens\n tv_script = ' '.join(gen_sentences)\n for key, token in token_dict.items():\n ending = ' ' if key in ['\\n', '(', '\"'] else ''\n tv_script = tv_script.replace(' ' + token.lower(), key)\n tv_script = tv_script.replace('\\n ', '\\n')\n tv_script = tv_script.replace('( ', '(')\n \n print(tv_script)", "The TV Script is Nonsensical\nIt's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckly there's more data! As we mentioned in the begging of this project, this is a subset of another dataset. We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
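An optional aside on word sampling (not part of the original project template): pick_word() above draws the next word directly from the network's softmax output. A common refinement is temperature sampling, which sharpens or flattens the probability distribution before drawing, trading coherence against variety. The sketch below is self-contained; the function name, the temperature value and the toy vocabulary are invented purely for illustration.

```python
import numpy as np

def pick_word_with_temperature(probabilities, int_to_vocab, temperature=0.7):
    """Sample the next word after rescaling the distribution.

    temperature < 1 makes sampling greedier (more coherent, more repetitive),
    temperature > 1 makes it more random (more variety, more nonsense).
    """
    probs = np.asarray(probabilities, dtype=np.float64)
    # Work in log space to rescale, then renormalise so the weights sum to 1
    logits = np.log(probs + 1e-12) / temperature
    scaled = np.exp(logits - np.max(logits))
    scaled /= scaled.sum()
    idx = np.random.choice(len(scaled), p=scaled)
    return int_to_vocab[idx]

# Toy example with a four-word vocabulary (made up for demonstration)
int_to_vocab = {0: 'moe_szyslak:', 1: 'homer_simpson:', 2: 'beer', 3: '||period||'}
probabilities = [0.1, 0.2, 0.6, 0.1]
print(pick_word_with_temperature(probabilities, int_to_vocab, temperature=0.5))
```

A function like this could stand in for pick_word() without touching the rest of the generation loop, since it accepts and returns the same kinds of values.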
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
katie-truong/Udacity-DA
Project0/.ipynb_checkpoints/Bay_Area_Bike_Share_Analysis-checkpoint.ipynb
mit
[ "Bay Area Bike Share Analysis\nIntroduction\n\nTip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook.\n\nBay Area Bike Share is a company that provides on-demand bike rentals for customers in San Francisco, Redwood City, Palo Alto, Mountain View, and San Jose. Users can unlock bikes from a variety of stations throughout each city, and return them to any station within the same city. Users pay for the service either through a yearly subscription or by purchasing 3-day or 24-hour passes. Users can make an unlimited number of trips, with trips under thirty minutes in length having no additional charge; longer trips will incur overtime fees.\nIn this project, you will put yourself in the shoes of a data analyst performing an exploratory analysis on the data. You will take a look at two of the major parts of the data analysis process: data wrangling and exploratory data analysis. But before you even start looking at data, think about some questions you might want to understand about the bike share data. Consider, for example, if you were working for Bay Area Bike Share: what kinds of information would you want to know about in order to make smarter business decisions? Or you might think about if you were a user of the bike share service. What factors might influence how you would want to use the service?\nQuestion 1: Write at least two questions you think could be answered by data.\nAnswer: \n\n\nThe demographic (age, occupation, income, etc...) of the customers.\n\n\nThe travel distances.\n\n\nThe hotspots (the most popular beginning and ending points of the trips).\n\n\nUsing Visualizations to Communicate Findings in Data\nAs a data analyst, the ability to effectively communicate findings is a key part of the job. After all, your best analysis is only as good as your ability to communicate it.\nIn 2014, Bay Area Bike Share held an Open Data Challenge to encourage data analysts to create visualizations based on their open data set. You’ll create your own visualizations in this project, but first, take a look at the submission winner for Best Analysis from Tyler Field. Read through the entire report to answer the following question:\nQuestion 2: What visualizations do you think provide the most interesting insights? Are you able to answer either of the questions you identified above based on Tyler’s analysis? Why or why not?\nAnswer: I am most interested in his \"where\" portion, which the starting and ending stations, and the most popular trips. The visualizations are very insightful and able to fully answer most of my questions, except for the demographic, maybe because it isn't included in the dataset (but could still be guessed based on the starting and ending stations).\nData Wrangling\nNow it's time to explore the data for yourself. Year 1 and Year 2 data from the Bay Area Bike Share's Open Data page have already been provided with the project materials; you don't need to download anything extra. The data comes in three parts: the first half of Year 1 (files starting 201402), the second half of Year 1 (files starting 201408), and all of Year 2 (files starting 201508). 
There are three main datafiles associated with each part: trip data showing information about each trip taken in the system (*_trip_data.csv), information about the stations in the system (*_station_data.csv), and daily weather data for each city in the system (*_weather_data.csv).\nWhen dealing with a lot of data, it can be useful to start by working with only a sample of the data. This way, it will be much easier to check that our data wrangling steps are working since our code will take less time to complete. Once we are satisfied with the way things are working, we can then set things up to work on the dataset as a whole.\nSince the bulk of the data is contained in the trip information, we should target looking at a subset of the trip data to help us get our bearings. You'll start by looking at only the first month of the bike trip data, from 2013-08-29 to 2013-09-30. The code below will take the data from the first half of the first year, then write the first month's worth of data to an output file. This code exploits the fact that the data is sorted by date (though it should be noted that the first two days are sorted by trip time, rather than being completely chronological).\nFirst, load all of the packages and functions that you'll be using in your analysis by running the first code cell below. Then, run the second code cell to read a subset of the first trip data file, and write a new file containing just the subset we are initially interested in.\n\nTip: You can run a code cell like you formatted Markdown cells by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the toolbar after selecting it. While the cell is running, you will see an asterisk in the message to the left of the cell, i.e. In [*]:. The asterisk will change into a number to show that execution has completed, e.g. In [1]. If there is output, it will show up as Out [1]:, with an appropriate number to match the \"In\" number.", "# import all necessary packages and functions.\nimport csv\nfrom datetime import datetime\nimport numpy as np\nimport pandas as pd\nfrom babs_datacheck import question_3\nfrom babs_visualizations import usage_stats, usage_plot\nfrom IPython.display import display\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# file locations\nfile_in = '201402_trip_data.csv'\nfile_out = '201309_trip_data.csv'\n\nwith open(file_out, 'w') as f_out, open(file_in, 'r') as f_in:\n # set up csv reader and writer objects\n in_reader = csv.reader(f_in)\n out_writer = csv.writer(f_out)\n\n # write rows from in-file to out-file until specified date reached\n while True:\n datarow = next(in_reader)\n # trip start dates in 3rd column, m/d/yyyy HH:MM formats\n if datarow[2][:9] == '10/1/2013':\n break\n out_writer.writerow(datarow)", "Condensing the Trip Data\nThe first step is to look at the structure of the dataset to see if there's any data wrangling we should perform. The below cell will read in the sampled data file that you created in the previous cell, and print out the first few rows of the table.", "sample_data = pd.read_csv('201309_trip_data.csv')\n\ndisplay(sample_data.head())", "In this exploration, we're going to concentrate on factors in the trip data that affect the number of trips that are taken. Let's focus down on a few selected columns: the trip duration, start time, start terminal, end terminal, and subscription type. Start time will be divided into year, month, and hour components. 
We will also add a column for the day of the week and abstract the start and end terminal to be the start and end city.\nLet's tackle the lattermost part of the wrangling process first. Run the below code cell to see how the station information is structured, then observe how the code will create the station-city mapping. Note that the station mapping is set up as a function, create_station_mapping(). Since it is possible that more stations are added or dropped over time, this function will allow us to combine the station information across all three parts of our data when we are ready to explore everything.", "# Display the first few rows of the station data file.\nstation_info = pd.read_csv('201402_station_data.csv')\ndisplay(station_info.head())\n\n# This function will be called by another function later on to create the mapping.\ndef create_station_mapping(station_data):\n \"\"\"\n Create a mapping from station IDs to cities, returning the\n result as a dictionary.\n \"\"\"\n station_map = {}\n for data_file in station_data:\n with open(data_file, 'r') as f_in:\n # set up csv reader object - note that we are using DictReader, which\n # takes the first row of the file as a header row for each row's\n # dictionary keys\n weather_reader = csv.DictReader(f_in)\n\n for row in weather_reader:\n station_map[row['station_id']] = row['landmark']\n return station_map", "You can now use the mapping to condense the trip data to the selected columns noted above. This will be performed in the summarise_data() function below. As part of this function, the datetime module is used to parse the timestamp strings from the original data file as datetime objects (strptime), which can then be output in a different string format (strftime). The parsed objects also have a variety of attributes and methods to quickly obtain\nThere are two tasks that you will need to complete to finish the summarise_data() function. First, you should perform an operation to convert the trip durations from being in terms of seconds to being in terms of minutes. (There are 60 seconds in a minute.) Secondly, you will need to create the columns for the year, month, hour, and day of the week. Take a look at the documentation for datetime objects in the datetime module. Find the appropriate attributes and method to complete the below code.", "def summarise_data(trip_in, station_data, trip_out):\n \"\"\"\n This function takes trip and station information and outputs a new\n data file with a condensed summary of major trip information. The\n trip_in and station_data arguments will be lists of data files for\n the trip and station information, respectively, while trip_out\n specifies the location to which the summarized data will be written.\n \"\"\"\n # generate dictionary of station - city mapping\n station_map = create_station_mapping(station_data)\n \n with open(trip_out, 'w') as f_out:\n # set up csv writer object \n out_colnames = ['duration', 'start_date', 'start_year',\n 'start_month', 'start_hour', 'weekday',\n 'start_city', 'end_city', 'subscription_type'] \n trip_writer = csv.DictWriter(f_out, fieldnames = out_colnames)\n trip_writer.writeheader()\n \n for data_file in trip_in:\n with open(data_file, 'r') as f_in:\n # set up csv reader object\n trip_reader = csv.DictReader(f_in)\n\n # collect data from and process each row\n for row in trip_reader:\n new_point = {}\n \n # convert duration units from seconds to minutes\n ### Question 3a: Add a mathematical operation below ###\n ### to convert durations from seconds to minutes. 
###\n new_point['duration'] = float(row['Duration'])/60\n # reformat datestrings into multiple columns\n ### Question 3b: Fill in the blanks below to generate ###\n ### the expected time values. ###\n trip_date = datetime.strptime(row['Start Date'], '%m/%d/%Y %H:%M')\n new_point['start_date'] = trip_date.strftime('%Y-%m-%d')\n new_point['start_year'] = trip_date.year\n new_point['start_month'] = trip_date.month\n new_point['start_hour'] = trip_date.hour\n new_point['weekday'] = trip_date.weekday()\n \n # remap start and end terminal with start and end city\n new_point['start_city'] = station_map[row['Start Terminal']]\n new_point['end_city'] = station_map[row['End Terminal']]\n # two different column names for subscribers depending on file\n if 'Subscription Type' in row:\n new_point['subscription_type'] = row['Subscription Type']\n else:\n new_point['subscription_type'] = row['Subscriber Type']\n\n # write the processed information to the output file.\n trip_writer.writerow(new_point)", "Question 3: Run the below code block to call the summarise_data() function you finished in the above cell. It will take the data contained in the files listed in the trip_in and station_data variables, and write a new file at the location specified in the trip_out variable. If you've performed the data wrangling correctly, the below code block will print out the first few lines of the dataframe and a message verifying that the data point counts are correct.", "# Process the data by running the function we wrote above.\nstation_data = ['201402_station_data.csv']\ntrip_in = ['201309_trip_data.csv']\ntrip_out = '201309_trip_summary.csv'\nsummarise_data(trip_in, station_data, trip_out)\n\n# Load in the data file and print out the first few rows\nsample_data = pd.read_csv(trip_out)\ndisplay(sample_data.head())\n\n# Verify the dataframe by counting data points matching each of the time features.\nquestion_3(sample_data)", "Tip: If you save a jupyter Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the necessary code blocks from your previous session to reestablish variables and functions before picking up where you last left off.\n\nExploratory Data Analysis\nNow that you have some data saved to a file, let's look at some initial trends in the data. Some code has already been written for you in the babs_visualizations.py script to help summarize and visualize the data; this has been imported as the functions usage_stats() and usage_plot(). In this section we'll walk through some of the things you can do with the functions, and you'll use the functions for yourself in the last part of the project. First, run the following cell to load the data, then use the usage_stats() function to see the total number of trips made in the first month of operations, along with some statistics regarding how long trips took.", "trip_data = pd.read_csv('201309_trip_summary.csv')\n\nusage_stats(trip_data)", "You should see that there are over 27,000 trips in the first month, and that the average trip duration is larger than the median trip duration (the point where 50% of trips are shorter, and 50% are longer). In fact, the mean is larger than the 75% shortest durations. This will be interesting to look at later on.\nLet's start looking at how those trips are divided by subscription type. One easy way to build an intuition about the data is to plot it. We'll use the usage_plot() function for this. 
The second argument of the function allows us to count up the trips across a selected variable, displaying the information in a plot. The expression below will show how many customer and how many subscriber trips were made. Try it out!", "usage_plot(trip_data, 'subscription_type')", "Seems like there's about 50% more trips made by subscribers in the first month than customers. Let's try a different variable now. What does the distribution of trip durations look like?", "usage_plot(trip_data, 'duration')", "Looks pretty strange, doesn't it? Take a look at the duration values on the x-axis. Most rides are expected to be 30 minutes or less, since there are overage charges for taking extra time in a single trip. The first bar spans durations up to about 1000 minutes, or over 16 hours. Based on the statistics we got out of usage_stats(), we should have expected some trips with very long durations that bring the average to be so much higher than the median: the plot shows this in a dramatic, but unhelpful way.\nWhen exploring the data, you will often need to work with visualization function parameters in order to make the data easier to understand. Here's where the third argument of the usage_plot() function comes in. Filters can be set for data points as a list of conditions. Let's start by limiting things to trips of less than 60 minutes.", "usage_plot(trip_data, 'duration', ['duration < 60'])", "This is looking better! You can see that most trips are indeed less than 30 minutes in length, but there's more that you can do to improve the presentation. Since the minimum duration is not 0, the left hand bar is slighly above 0. We want to be able to tell where there is a clear boundary at 30 minutes, so it will look nicer if we have bin sizes and bin boundaries that correspond to some number of minutes. Fortunately, you can use the optional \"boundary\" and \"bin_width\" parameters to adjust the plot. By setting \"boundary\" to 0, one of the bin edges (in this case the left-most bin) will start at 0 rather than the minimum trip duration. And by setting \"bin_width\" to 5, each bar will count up data points in five-minute intervals.", "usage_plot(trip_data, 'duration', ['duration < 60'], boundary = 0, bin_width = 5)", "Question 4: Which five-minute trip duration shows the most number of trips? Approximately how many trips were made in this range?\nAnswer: Approximately 9,000 trips were made in the 5-10 minutes range.\nVisual adjustments like this might be small, but they can go a long way in helping you understand the data and convey your findings to others.\nPerforming Your Own Analysis\nNow that you've done some exploration on a small sample of the dataset, it's time to go ahead and put together all of the data in a single file and see what trends you can find. The code below will use the same summarise_data() function as before to process data. After running the cell below, you'll have processed all the data into a single data file. 
Note that the function will not display any output while it runs, and this can take a while to complete since you have much more data than the sample you worked with above.", "station_data = ['201402_station_data.csv',\n '201408_station_data.csv',\n '201508_station_data.csv' ]\ntrip_in = ['201402_trip_data.csv',\n '201408_trip_data.csv',\n '201508_trip_data.csv' ]\ntrip_out = 'babs_y1_y2_summary.csv'\n\n# This function will take in the station data and trip data and\n# write out a new data file to the name listed above in trip_out.\nsummarise_data(trip_in, station_data, trip_out)", "Since the summarise_data() function has created a standalone file, the above cell will not need to be run a second time, even if you close the notebook and start a new session. You can just load in the dataset and then explore things from there.", "trip_data = pd.read_csv('babs_y1_y2_summary.csv')\ndisplay(trip_data.head())", "Now it's your turn to explore the new dataset with usage_stats() and usage_plot() and report your findings! Here's a refresher on how to use the usage_plot() function:\n\nfirst argument (required): loaded dataframe from which data will be analyzed.\nsecond argument (required): variable on which trip counts will be divided.\nthird argument (optional): data filters limiting the data points that will be counted. Filters should be given as a list of conditions, each element should be a string in the following format: '&lt;field&gt; &lt;op&gt; &lt;value&gt;' using one of the following operations: >, <, >=, <=, ==, !=. Data points must satisfy all conditions to be counted or visualized. For example, [\"duration &lt; 15\", \"start_city == 'San Francisco'\"] retains only trips that originated in San Francisco and are less than 15 minutes long.\n\nIf data is being split on a numeric variable (thus creating a histogram), some additional parameters may be set by keyword.\n- \"n_bins\" specifies the number of bars in the resultant plot (default is 10).\n- \"bin_width\" specifies the width of each bar (default divides the range of the data by number of bins). \"n_bins\" and \"bin_width\" cannot be used simultaneously.\n- \"boundary\" specifies where one of the bar edges will be placed; other bar edges will be placed around that value (this may result in an additional bar being plotted). This argument may be used alongside the \"n_bins\" and \"bin_width\" arguments.\nYou can also add some customization to the usage_stats() function as well. The second argument of the function can be used to set up filter conditions, just like how they are set up in usage_plot().", "usage_stats(trip_data)\n\nusage_plot(trip_data, 'start_city', boundary = 0, bin_width = 1)\n\nusage_plot(trip_data, 'start_hour', boundary = 0, bin_width = 1)\n\nusage_plot(trip_data, 'start_month', boundary = 0, bin_width = 1)\n\nusage_plot(trip_data, 'weekday', boundary = 0, bin_width = 1)", "Explore some different variables using the functions above and take note of some trends you find. Feel free to create additional cells if you want to explore the dataset in other ways or multiple ways.\n\nTip: In order to add additional cells to a notebook, you can use the \"Insert Cell Above\" and \"Insert Cell Below\" options from the menu bar above. There is also an icon in the toolbar for adding new cells, with additional icons for moving the cells up and down the document. By default, new cells are of the code type; you can also specify the cell type (e.g. 
Code or Markdown) of selected cells from the Cell menu or the dropdown in the toolbar.\n\nOne you're done with your explorations, copy the two visualizations you found most interesting into the cells below, then answer the following questions with a few sentences describing what you found and why you selected the figures. Make sure that you adjust the number of bins or the bin limits so that they effectively convey data findings. Feel free to supplement this with any additional numbers generated from usage_stats() or place multiple visualizations to support your observations.", "# Final Plot 1\nusage_plot(trip_data, 'start_hour', boundary = 0, bin_width = 1)", "Question 5a: What is interesting about the above visualization? Why did you select it?\nAnswer: The plot shows the usage of the service during the day. We can see that busiest hours are 7-10AM and 4-7PM, which are the time people commute to and from work/school. There are also a fair share of trips happen during the day, but there is no significant increase during lunch time (I guess people don't bike for lunch).", "# Final Plot 2\nusage_plot(trip_data, 'duration', ['duration < 60'], boundary = 0, bin_width = 5)", "Question 5b: What is interesting about the above visualization? Why did you select it?\nAnswer: \nWe can see that the majority of trips are under 15 minutes, which indicates that the majority of customers used the service for short range travel. Combining with starting points and ending points, this would help the company knows where to put their bikes that benefit the business the most.\nConclusions\nCongratulations on completing the project! This is only a sampling of the data analysis process: from generating questions, wrangling the data, and to exploring the data. Normally, at this point in the data analysis process, you might want to draw conclusions about our data by performing a statistical test or fitting the data to a model for making predictions. There are also a lot of potential analyses that could be performed on the data which are not possible with only the code given. Instead of just looking at number of trips on the outcome axis, you could see what features affect things like trip duration. We also haven't looked at how the weather data ties into bike usage.\nQuestion 6: Think of a topic or field of interest where you would like to be able to apply the techniques of data science. What would you like to be able to learn from your chosen subject?\nAnswer: I am currently working in content analysis for mobile apps, and thanks to data science and data analysis, we are able to forecast and analyze trends in the market. By data analysis and data science, I hope to gain a better insight of the market trend, and hopefully would be able to build models to automatically analyze trends." ]
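A supplementary note (not part of the original project): the helper functions usage_stats() and usage_plot() hide fairly simple aggregations that can also be done directly with pandas. The sketch below assumes the summary file and column names written by summarise_data() earlier in the notebook ('duration', 'start_city', 'start_hour', ...); it only illustrates what the helpers compute under the hood.

```python
import pandas as pd

# Load the condensed trip summary created earlier in the notebook
trip_data = pd.read_csv('babs_y1_y2_summary.csv')

# usage_stats()-style numbers: trip count and duration statistics
print('Total trips: {}'.format(len(trip_data)))
print('Mean duration: {:.2f} min'.format(trip_data['duration'].mean()))
print('Median duration: {:.2f} min'.format(trip_data['duration'].median()))

# usage_plot(trip_data, 'start_hour', ...)-style counts with a filter applied,
# shown as a plain table instead of a bar chart
short_sf = trip_data[(trip_data['duration'] < 15) &
                     (trip_data['start_city'] == 'San Francisco')]
print(short_sf.groupby('start_hour').size())
```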
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jornvdent/WUR-Geo-Scripting-Course
Lesson 9/Excercise 9.ipynb
gpl-3.0
[ "Solution Exercise 9 Team Hadochi\nJorn van der Ent\nMichiel Voermans\n23 January 2017\nLoad modules and check for the presence of the ESRI Shapefile driver", "from osgeo import ogr\nfrom osgeo import osr\nimport os\n\ndriverName = \"ESRI Shapefile\"\ndrv = ogr.GetDriverByName( driverName )\nif drv is None:\n    print \"%s driver not available.\\n\" % driverName\nelse:\n    print \"%s driver IS available.\\n\" % driverName", "Set working directory to 'data'", "os.chdir('./data')", "Interactive input system", "layername = raw_input(\"Name of Layer: \")\n\npointnumber = raw_input(\"How many points do you want to insert? \")\n\npointcoordinates = []\nfor number in range(1, (int(pointnumber)+1)):\n    # OGR stores points as (x, y) = (longitude, latitude) for EPSG:4326\n    x = raw_input((\"What is the Longitude (WGS 84) of Point %s ? \" % str(number)))\n    y = raw_input((\"What is the Latitude (WGS 84) of Point %s ? \" % str(number)))\n    pointcoordinates += [(float(x), float(y))]\n\n# e.g.:\n# pointcoordinates =[(4.897070, 52.377956), (5.104480, 52.092876)]", "Create shapefile from input", "# Set filename\nfn = layername + \".shp\"\n\nds = drv.CreateDataSource(fn)\n\n# Set spatial reference\nspatialReference = osr.SpatialReference()\nspatialReference.ImportFromEPSG(4326)\n\n## Create layer\nlayer=ds.CreateLayer(layername, spatialReference, ogr.wkbPoint)\n\n# Get layer definition\nlayerDefinition = layer.GetLayerDefn()\n\nfor pointcoord in pointcoordinates:\n    ## Create a point\n    point = ogr.Geometry(ogr.wkbPoint)\n\n    ## SetPoint(self, int point, double x, double y, double z = 0)\n    point.SetPoint(0, pointcoord[0], pointcoord[1]) \n\n    ## A feature is defined from the properties of the layer\n    feature = ogr.Feature(layerDefinition)\n\n    ## Let's add the point to the feature\n    feature.SetGeometry(point)\n\n    ## Let's store the feature in the layer\n    layer.CreateFeature(feature)\n\nds.Destroy()", "Convert shapefile to KML with bash", "# Build the command from the layer name chosen above so the filenames always match\nbashcommand = 'ogr2ogr -f KML -t_srs crs:84 ' + layername + '.kml ' + fn\nos.system(bashcommand)" ]
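As an aside (not part of the original solution), the shapefile-to-KML conversion can also be done without shelling out, by letting OGR's KML driver copy the data source directly. The sketch below assumes the fn and layername variables defined above and that the shapefile was written successfully; it is one possible alternative rather than the required approach.

```python
# Alternative conversion using the OGR bindings instead of os.system
from osgeo import ogr

kml_drv = ogr.GetDriverByName('KML')
src = ogr.Open(fn)  # fn = layername + '.shp', created above
if src is None:
    print "Could not open %s" % fn
else:
    dst = kml_drv.CopyDataSource(src, layername + '.kml')
    dst.Destroy()  # flush and close the KML output
    src.Destroy()
```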
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
apozas/BIST-Python-Bootcamp
3_Types_Functions_FlowControl.ipynb
gpl-3.0
[ "3 Types, Functions and Flow Control\nData types", "x_int = 3\n\nx_float = 3.\n\nx_string = 'three'\n\nx_list = [3, 'three']\n\ntype(x_float)\n\ntype(x_string)\n\ntype(x_list)", "Numbers", "abs(-1)\n\nimport math\n\nmath.floor(4.5)\n\nmath.exp(1)\n\nmath.log(1)\n\nmath.log10(10)\n\nmath.sqrt(9)\n\nround(4.54,1)", "If this doesn't make sense, you can print the documentation:", "round?", "Strings", "string = 'Hello World!'\nstring2 = \"This is also allowed, helps if you want 'this' in a string and vice versa\"\nlen(string)", "Slicing", "print(string)\nprint(string[0])\nprint(string[2:5])\nprint(string[2:])\nprint(string[:5])\nprint(string * 2)\nprint(string + 'TEST')\nprint(string[-1])", "String Operations", "print(string/2)\n\nprint(string - 'TEST')\n\nprint(string**2)", "Capitalizing strings:", "x = 'test'\n\nx.capitalize()\n\nx.find('e')\n\nx = 'TEST'\nx.lower()", "Environments like Jupyter and Spyder allow you to explore the available methods (like .capitalize() or .upper()) by typing x. and pressing tab.\nFormatting\nYou can also format strings, e.g. to display rounded numbers", "print('Pi is {:06.2f}'.format(3.14159))\nprint('Space can be filled using {:_>10}'.format(x))", "With Python 3.6, f-strings made this even more readable", "print(f'{x} 1 2 3')", "Lists", "x_list\n\nx_list[0]\n\nx_list.append('III')\nx_list\n\nx_list.append('III')\nx_list\n\ndel x_list[-1]\nx_list\n\ny_list = ['john', '2.', '1']\n\ny_list + x_list\n\nx_list*2\n\nz_list=[4,78,3]\nmax(z_list)\n\nmin(z_list)\n\nsum(z_list)\n\nz_list.count(4)\n\nz_list.append(4)\nz_list.count(4)\n\nz_list.sort()\nz_list\n\nz_list.reverse()\nz_list", "Tuples\nTuples are immutable and can be thought of as read-only lists.", "y_tuple = ('john', '2.', '1')\ntype(y_tuple)\n\ny_list\n\ny_list[0] = 'Erik'\ny_list\n\ny_tuple[0] = 'Erik'", "Dictionaries\nDictionaries are collections of key-value pairs, i.e. lists with named entries. There are also named tuples, which behave like immutable dictionaries. Use OrderedDict from collections if you need to preserve the order.", "tinydict = {'name': 'john', 'code':6734, 'dept': 'sales'}\ntype(tinydict)\n\nprint(tinydict)\nprint(tinydict.keys())\nprint(tinydict.values())\n\ntinydict['code']\n\ntinydict['surname']\n\ntinydict['dept'] = 'R&D' # update existing entry\ntinydict['surname'] = 'Sloan' # Add new entry\n\ntinydict['surname']\n\ndel tinydict['code'] # remove entry with key 'code'\n\ntinydict['code']\n\ntinydict.clear()\ndel tinydict", "When duplicate keys are encountered during assignment, the last assignment wins", "dic = {'Name': 'Zara', 'Age': 7, 'Name': 'Manni'}\ndic", "Finding the total number of items in the dictionary:", "len(dic)", "Produces a printable string representation of a dictionary:", "str(dic)", "Functions", "def mean(mylist):\n    \"\"\"Calculate the mean of the elements in mylist\"\"\"\n    number_of_items = len(mylist)\n    sum_of_items = sum(mylist)\n    return sum_of_items / number_of_items\n\ntype(mean)\n\nz_list\n\nmean(z_list)\n\nhelp(mean)\n\nmean?", "Flow Control\nIn general, statements are executed sequentially: the first statement in a function is executed first, followed by the second, and so on. There may be a situation when you need to execute a block of code several times. In Python a block is delimited by indentation, i.e. 
all lines starting at the same space are one block.\nProgramming languages provide various control structures that allow for more complicated execution paths.\nA loop statement allows us to execute a statement or group of statements multiple times.\nwhile Loop\nRepeats a statement or group of statements while a given condition is TRUE. It tests the condition before executing the loop body.", "count = 0\nwhile (count < 9):\n print('The count is: ' + str(count))\n count += 1\n\nprint('Good bye!')", "A loop becomes infinite loop if a condition never becomes FALSE. You must use caution when using while loops because of the possibility that this condition never resolves to a FALSE value. This results in a loop that never ends. Such a loop is called an infinite loop.\nAn infinite loop might be useful in client/server programming where the server needs to run continuously so that client programs can communicate with it as and when required.\nfor Loop\nExecutes a sequence of statements multiple times and abbreviates the code that manages the loop variable.", "fruits = ['banana', 'apple', 'mango']\nfor fruit in fruits: # Second Example\n print('Current fruit :', fruit)", "Sometimes one also needs the index of the element, e.g. to plot a subset of data on different subplots. Then enumerate provides an elegant (\"pythonic\") way:", "for index, fruit in enumerate(fruits):\n print('Current fruit :', fruits[index])", "In principle one could also iterate over an index going from 0 to the number of elements:", "for index in range(len(fruits)):\n print('Current fruit:', fruits[index])", "for loops can be elegantly integrated for creating lists", "fruits_with_b = [fruit for fruit in fruits if fruit.startswith('b')]\nfruits_with_b", "This is equivalent to the following loop:", "fruits_with_b = []\nfor fruit in fruits:\n if fruit.startswith('b'):\n fruits_with_b.append(fruit)\nfruits_with_b", "Nested Loops\nPython programming language allows to use one loop inside another loop.\nA final note on loop nesting is that you can put any type of loop inside of any other type of loop. For example a for loop can be inside a while loop or vice versa.", "for x in range(1, 3):\n for y in range(1, 4):\n print(f'{x} * {y} = {x*y}')", "if\nThe if statement will evaluate the code only if a given condition is met (used with logical operator such as ==, &lt;, &gt;, =&lt;, =&gt;, not, is, in, etc.\nOptionally we can introduce a elsestatement to execute an alternative code when the condition is not met.", "x = 'Mark'\n\nif x in ['Mark', 'Jack', 'Mary']:\n print('present!')\nelse:\n print('absent!')\n\nx = 'Tom'\n\nif x in ['Mark', 'Jack', 'Mary']:\n print('present!')\nelse:\n print('absent!')", "We can also use one or more elif statements to check multiple expressions for TRUE and execute a block of code as soon as one of the conditions evaluates to TRUE", "x = 'Tom'\n\nif x in ['Mark', 'Jack', 'Mary']:\n print('present in list A!')\nelif x in ['Tom', 'Dick', 'Harry']:\n print('present in list B!')\nelse:\n print('absent!')", "else statements can be also use with while and for loops (the code will be executed in the end)\nYou can also use nested if statements.\nbreak\nIt terminates the current loop and resumes execution at the next statement. The break statement can be used in both while and for loops. 
If you are using nested loops, the break statement stops the execution of the innermost loop and starts executing the next line of code after the block.", "var = 10\nwhile var > 0: \n    print('Current variable value: ' + str(var))\n    var = var -1\n    if var == 5:\n        break\n\nprint('Good bye!')", "continue\nThe continue statement rejects all the remaining statements in the current iteration of the loop and moves the control back to the top of the loop (like a \"skip\").\nThe continue statement can be used in both while and for loops.", "for letter in 'Python':\n    if letter == 'h':\n        continue\n    print('Current Letter: ' + letter)\n", "pass\nThe pass statement is a null operation; nothing happens when it executes. pass is also useful as a placeholder in places where your code will eventually go, but has not been written yet.", "for letter in 'Python': \n    if letter == 'h':\n        pass\n        print('This is pass block')\n    print('Current Letter: ' + letter)\n\nprint('Good bye!')", "pass and continue could seem similar, but they are not. The printed message \"This is pass block\" wouldn't have been printed if continue had been used instead. pass does nothing, continue goes to the next iteration.", "for letter in 'Python': \n    if letter == 'h':\n        continue\n        print('This is pass block')\n    print('Current Letter: ' + letter)\n\nprint('Good bye!')" ]
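One construct mentioned above but not demonstrated is the else clause on loops: the else block runs only when the loop finishes without hitting a break. A small illustrative example (not from the original notebook):

```python
numbers = [2, 4, 6, 7, 8]

for n in numbers:
    if n % 2 != 0:
        print('Found an odd number: ' + str(n))
        break
else:
    print('No odd number found')
```

With the 7 removed from the list, the loop would complete normally and the else branch would print instead.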
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ebonnassieux/fundamentals_of_interferometry
4_Visibility_Space/4_5_1_uv_coverage_uv_tracks.ipynb
gpl-2.0
[ "<a id='beginning'></a> <!--\\label{beginning}-->\n* Outline\n* Glossary\n* 4. The Visibility Space\n * Previous: 4.4 The Visibility Function\n * Next: 4.5.2 UV Coverage: Improving Your Coverage\n\nImport standard modules:", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom IPython.display import HTML \nHTML('../style/course.css') #apply general CSS", "Import section specific modules:", "from mpl_toolkits.mplot3d import Axes3D\nimport plotBL\n\nHTML('../style/code_toggle.html')", "4.5.1 UV coverage : UV tracks\nThe objective of $\\S$ 4.5.1 &#10549; and $\\S$ 4.5.2 &#10142; is to give you a glimpse into the process of aperture synthesis. <span style=\"background-color:cyan\">TLG:GM: Check if the italic words are in the glossary. </span> An interferometer measures components of the Fourier Transform of the sky by sampling the visibility function, $\\mathcal{V}$. This collection of samples lives in ($u$, $v$, $w$) space, and are often projected onto the so-called $uv$-plane.\nIn $\\S$ 4.5.1 &#10549;, we will focus on the way the visibility function is sampled. This sampling is a function of the interferometer's configuration, the direction of the source and the observation time.\nIn $\\S$ 4.5.2 &#10142;, we will see how this sampling can be improved by using certain observing techniques.\n4.5.1.1 The projected baseline with time: the $uv$ track\nA projected baseline depends on a baseline's coordinates, and the direction being observed in the sky. It corresponds to the baseline as seen from the source. The projected baseline is what determines the spatial frequency of the sky that the baseline will measure. As the Earth rotates, the projected baseline and its corresponding spatial frequency (defined by the baseline's ($u$, $v$)-coordinates) vary slowly in time, generating a path in the $uv$-plane.\nWe will now generate test cases to see what locus the path takes, and how it can be predicted depending on the baseline's geometry.\n4.5.1.1.1 Baseline projection as seen from the source\nLet's generate one baseline from two antennas Ant$_1$ and Ant$_2$.", "ant1 = np.array([-500e3,500e3,0]) # in m\nant2 = np.array([500e3,-500e3,+10]) # in m", "Let's express the corresponding physical baseline in ENU coordinates.", "b_ENU = ant2-ant1 # baseline \nD = np.sqrt(np.sum((b_ENU)**2)) # |b|\nprint str(D/1000)+\" km\"", "Let's place the interferometer at a latitude $L_a=+45^\\circ00'00''$.", "L = (np.pi/180)*(45+0./60+0./3600) # Latitude in radians\n\nA = np.arctan2(b_ENU[0],b_ENU[1])\nprint \"Baseline Azimuth=\"+str(np.degrees(A))+\"°\"\n\nE = np.arcsin(b_ENU[2]/D)\nprint \"Baseline Elevation=\"+str(np.degrees(E))+\"°\"\n\n%matplotlib nbagg\nplotBL.sphere(ant1,ant2,A,E,D,L)", "Figure 4.5.1: A baseline located at +45$^\\circ$ as seen from the sky. This plot is interactive and can be rotated in 3D to see different baseline projections, depending on the position of the source w.r.t. the physical baseline.\nOn the interactive plot above, we represent a baseline located at +45$^\\circ$. It is aligned with the local south-west/north-east axis, as seen from the sky frame of reference. By rotating the sphere westward, you can simulate the variation of the projected baseline as seen from a source in apparent motion on the celestial sphere.\n4.5.1.1.2 Coordinates of the baseline in the ($u$,$v$,$w$) plane\nWe will now simulate an observation to study how a projected baseline will change with time. We will position this baseline at a South African latitude. 
We first need the expression of the physical baseline in a convenient reference frame, attached to the source in the sky.\nIn $\\S$ 4.2 &#10142;, we linked the equatorial coordinates of the baseline to the ($u$,$v$,$w$) coordinates through the transformation matrix:\n\\begin{equation}\n\\begin{pmatrix}\nu\\\nv\\\nw\n\\end{pmatrix}\n=\n\\frac{1}{\\lambda}\n\\begin{pmatrix}\n\\sin H_0 & \\cos H_0 & 0\\ \n-\\sin \\delta_0 \\cos H_0 & \\sin\\delta_0\\sin H_0 & \\cos\\delta_0\\\n\\cos \\delta_0 \\cos H_0 & -\\cos\\delta_0\\sin H_0 & \\sin\\delta_0\\\n\\end{pmatrix} \n\\begin{pmatrix}\nX\\\nY\\\nZ\n\\end{pmatrix}\n\\end{equation}\n<a id=\"vis:eq:451\"></a> <!---\\label{vis:eq:451}--->\n\\begin{equation}\n\\begin{bmatrix}\nX\\\nY\\\nZ\n\\end{bmatrix}\n=|\\mathbf{b}|\n\\begin{bmatrix}\n\\cos L_a \\sin \\mathcal{E} - \\sin L_a \\cos \\mathcal{E} \\cos \\mathcal{A}\\nonumber\\ \n\\cos \\mathcal{E} \\sin \\mathcal{A} \\nonumber\\\n\\sin L_a \\sin \\mathcal{E} + \\cos L_a \\cos \\mathcal{E} \\cos \\mathcal{A}\\\n\\end{bmatrix}\n\\end{equation}\nEquation 4.5.1 \nThis expression of $\\mathbf{b}$ is a function of ($\\mathcal{A}$,$\\mathcal{E}$), and therefore of ($X$,$Y$,$Z$) in the equatorial frame of reference.\n4.5.1.1.2 Observation parameters\nLet's define an arbitrary set of observation parameters to mimic a real observation.\n\nLatitude of the baseline: $L_a=-30^\\circ43'17.34''$\nDeclination of the observation: $\\delta=-74^\\circ39'37.481''$\nDuration of the observation: $\\Delta \\text{HA}=[-4^\\text{h},4^\\text{h}]$\nTime steps: 600\nFrequency: 1420 MHz", "# Observation parameters\nc = 3e8 # Speed of light\nf = 1420e9 # Frequency\nlam = c/f # Wavelength \ndec = (np.pi/180)*(-30-43.0/60-17.34/3600) # Declination\n\ntime_steps = 600 # Time Steps\nh = np.linspace(-4,4,num=time_steps)*np.pi/12 # Hour angle window", "4.5.1.1.3 Computing of the projected baselines in ($u$,$v$,$w$) coordinates as a function of time\nAs seen previously, we convert the baseline coordinates using the previous matrix transformation.", "ant1 = np.array([25.095,-9.095,0.045])\nant2 = np.array([90.284,26.380,-0.226])\nb_ENU = ant2-ant1\nD = np.sqrt(np.sum((b_ENU)**2))\nL = (np.pi/180)*(-30-43.0/60-17.34/3600)\n\nA=np.arctan2(b_ENU[0],b_ENU[1])\nprint \"Azimuth=\",A*(180/np.pi)\nE=np.arcsin(b_ENU[2]/D)\nprint \"Elevation=\",E*(180/np.pi)\n\nX = D*(np.cos(L)*np.sin(E)-np.sin(L)*np.cos(E)*np.cos(A))\nY = D*np.cos(E)*np.sin(A)\nZ = D*(np.sin(L)*np.sin(E)+np.cos(L)*np.cos(E)*np.cos(A))", "As the $u$, $v$, $w$ coordinates explicitly depend on $H$, we must evaluate them for each observational time step. We will use the equations defined in $\\S$ 4.2.2 &#10142;:\n\n$\\lambda u = X \\sin H + Y \\cos H$\n$\\lambda v= -X \\sin \\delta \\cos H + Y \\sin\\delta\\sin H + Z \\cos\\delta$\n$\\lambda w= X \\cos \\delta \\cos H -Y \\cos\\delta\\sin H + Z \\sin\\delta$", "u = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3\nv = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3\nw = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3", "We now have everything that describes the $uvw$-track of the baseline (over an 8-hour observational period). It is hard to predict which locus the $uvw$ track traverses given only the three mathematical equations from above. 
Let's plot it in $uvw$ space and its projection in $uv$ space.", "%matplotlib nbagg\nplotBL.UV(u,v,w)", "Figure 4.5.2: $uvw$ track derived from the simulation and projection in the $uv$-plane.\nThe track in $uvw$ space are curves and the projection in the $uv$ plane are arcs. Let us focus on the track's projection in this plane. To get observation-independent knowledge of the track we can try to combine the three equations of $u$, $v$ and $w$, the aim being to eliminate $H$ from the equation. We end up with an equation linking $u$, $v$, $X$ and $Y$ (the full derivation can be found in $\\S$ A.3 &#10142;):\n$$\\boxed{u^2 + \\left[ \\frac{v -\\frac{Z}{\\lambda} \\cos \\delta}{\\sin \\delta} \\right]^2 = \\left[ \\frac{X}{\\lambda} \\right]^2 + \\left[ \\frac{Y}{\\lambda} \\right]^2}$$\nOne can note that in this particular case, the $uv$ track takes on the form of an ellipse.\n<span style=\"background-color:cyan\">TLG:GM: Check if the italic words are in the glossary. </span>\nThis ellipse is centered at $(0,\\frac{Z}{\\lambda} \\cos \\delta)$ in the ($u$,$v$) plane.\nThe major axis is $a=\\frac{\\sqrt{X^2 + Y^2}}{\\lambda}$.\nThe minor axis (along the axis $v$) will be a function of $Z$, $\\delta$ and $a$.\nWe can check this by plotting the theoretical ellipse over the observed portion of the track. (You can fall back to the duration of the observation to see that the track is mapping this ellipse exactly).", "%matplotlib inline\nfrom matplotlib.patches import Ellipse\n\n# parameters of the UVtrack as an ellipse\na=np.sqrt(X**2+Y**2)/lam/1e3 # major axis \nb=a*np.sin(dec) # minor axis\nv0=Z/lam*np.cos(dec)/1e3 # center of ellipse\n\nplotBL.UVellipse(u,v,w,a,b,v0)", "Figure 4.5.3: The blue (resp. the red) curve is the $uv$ track of the baseline $\\mathbf{b}{12}$ (resp. $\\mathbf{b}{21}$). As $I_\\nu$ is real, the real part of the visibility $\\mathcal{V}$ is even and the imaginary part is odd making $\\mathcal{V}(-u,-v)=\\mathcal{V}^*$. It implies that one baseline automatically provides a measurement of a visibility and its complex conjugate at ($-u$,$-v$).\n4.5.1.2 Special cases\n4.5.1.2.1 The Polar interferometer\nLet settle one baseline at the North pole. The local zenith corresponds to the North Celestial Pole (NCP) at $\\delta=90^\\circ$. As seen from the NCP, the baseline will rotate and the projected baseline will correspond to the physical baseline. This configuration is the only case where this happens.\nIf $\\mathbf{b}$ rotates, we can guess that the $uv$ tracks will be perfect circles. 
Let's check:", "L=np.radians(90.)\nant1 = np.array([25.095,-9.095,0.045])\nant2 = np.array([90.284,26.380,-0.226])\nb_ENU = ant2-ant1\nD = np.sqrt(np.sum((b_ENU)**2))\n\nA=np.arctan2(b_ENU[0],b_ENU[1])\nprint \"Azimuth=\",A*(180/np.pi)\nE=np.arcsin(b_ENU[2]/D)\nprint \"Elevation=\",E*(180/np.pi)\n\nX = D*(np.cos(L)*np.sin(E)-np.sin(L)*np.cos(E)*np.cos(A))\nY = D*np.cos(E)*np.sin(A)\nZ = D*(np.sin(L)*np.sin(E)+np.cos(L)*np.cos(E)*np.cos(A))", "Let's compute the $uv$ tracks of an observation of the NCP ($\\delta=90^\\circ$):", "dec=np.radians(90.)\n\nuNCP = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3\nvNCP = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3\nwNCP = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3\n\n# parameters of the UVtrack as an ellipse\naNCP=np.sqrt(X**2+Y**2)/lam/1e3 # major axis \nbNCP=aNCP*np.sin(dec) # minor axi\nv0NCP=Z/lam*np.cos(dec)/1e3 # center of ellipse", "Let's compute the uv tracks when observing a source at $\\delta=30^\\circ$:", "dec=np.radians(30.)\n\nu30 = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3\nv30 = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3\nw30 = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3\n\na30=np.sqrt(X**2+Y**2)/lam/1e3 # major axis \nb30=a*np.sin(dec) # minor axi\nv030=Z/lam*np.cos(dec)/1e3 # center of ellipse\n\n%matplotlib inline\nplotBL.UVellipse(u30,v30,w30,a30,b30,v030)\nplotBL.UVellipse(uNCP,vNCP,wNCP,aNCP,bNCP,v0NCP)", "Figure 4.5.4: $uv$ track for a baseline at the pole observing at $\\delta=90^\\circ$ (NCP) and at $\\delta=30^\\circ$ with the same color conventions as the previous figure.\nWhen observing a source at declination $\\delta$, we still have an elliptical shape but centered at (0,0). In the case of a polar interferometer, the full $uv$ track can be covered in 12 hours only due to the symmetry of the baseline.\n4.5.1.2.2 The Equatorial interferometer\nLet's consider the other extreme scenario: this time, we position the interferometer at the equator. The local zenith is crossed by the Celestial Equator at $\\delta=0^\\circ$. As seen from the celestial equator, the baseline will not rotate and the projected baseline will no longer correspond to the physical baseline. 
This configuration is the only case where this happens.\nIf $\\mathbf{b}$ is not rotating, we can intuitively guess that the $uv$ tracks will be straight lines.", "L=np.radians(90.)\nX = D*(np.cos(L)*np.sin(E)-np.sin(L)*np.cos(E)*np.cos(A))\nY = D*np.cos(E)*np.sin(A)\nZ = D*(np.sin(L)*np.sin(E)+np.cos(L)*np.cos(E)*np.cos(A))\n\n# At local zenith == Celestial Equator\ndec=np.radians(0.)\n\nuEQ = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3\nvEQ = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3\nwEQ = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3\n\n# parameters of the UVtrack as an ellipse\naEQ=np.sqrt(X**2+Y**2)/lam/1e3 # major axis \nbEQ=aEQ*np.sin(dec) # minor axi\nv0EQ=Z/lam*np.cos(dec)/1e3 # center of ellipse\n\n# Close to Zenith\ndec=np.radians(10.)\n\nu10 = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3\nv10 = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3\nw10 = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3\n\na10=np.sqrt(X**2+Y**2)/lam/1e3 # major axis \nb10=a*np.sin(dec) # minor axi\nv010=Z/lam*np.cos(dec)/1e3 # center of ellipse\n\n%matplotlib inline\nplotBL.UVellipse(u10,v10,w10,a10,b10,v010)\nplotBL.UVellipse(uEQ,vEQ,wEQ,aEQ,bEQ,v0EQ)", "Figure 4.5.5: $uv$ track for a baseline at the equator observing at $\\delta=0^\\circ$ and at $\\delta=10^\\circ$, with the same color conventions as the previous figure.\nAn equatorial interferometer observing its zenith will see radio sources crossing the sky on straight, linear paths. Therefore, they will produce straight $uv$ coordinates.\n4.5.1.1.3 The East-West array <a id='vis:sec:ew'></a> <!--\\label{vis:sec:ew}-->\nThe East-West array is the special case of an interferometer with physical baselines aligned with the East-West direction in the ground-based frame of reference. They have the convenient property of giving a $uv$ coverage which lies entirely on a plane.\nIf the baseline is aligned with the East-West direction, then the Elevation $\\mathcal{E}$ of the baseline is zero and the Azimuth $\\mathcal{A}$ is $\\frac{\\pi}{2}$. Eq. 
4.5.1 &#10549; then simplifies considerably:\nThe only non-zero component of the baseline will be its $Y$-component.\n\\begin{equation}\n\\frac{1}{\\lambda}\n\\begin{bmatrix}\nX\\\nY\\\nZ\n\\end{bmatrix}\n=\n|\\mathbf{b_\\lambda}|\n\\begin{bmatrix}\n\\cos L_a \\sin 0 - \\sin L_a \\cos 0 \\cos \\frac{\\pi}{2}\\nonumber\\ \n\\cos 0 \\sin \\frac{\\pi}{2} \\nonumber\\\n\\sin L_a \\sin 0 + \\cos L_a \\cos 0 \\cos \\frac{\\pi}{2}\\\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n0\\\n|\\mathbf{b_\\lambda}|\\\n0 \\\n\\end{bmatrix}\n\\end{equation}\nIf we observe a source at declination $\\delta_0$ with varying Hour Angle, $H$, we obtain:\n\\begin{equation}\n\\begin{pmatrix}\nu\\\nv\\\nw\\\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n\\sin H & \\cos H & 0\\ \n-\\sin \\delta_0 \\cos H & \\sin\\delta_0\\sin H & \\cos\\delta_0\\\n\\cos \\delta_0 \\cos H & -\\cos\\delta_0\\sin H & \\sin\\delta_0\\\n\\end{pmatrix} \n\\begin{pmatrix}\n0\\\n|\\mathbf{b_\\lambda}| \\\n0\n\\end{pmatrix}\n\\end{equation}\n\\begin{equation}\n\\begin{pmatrix}\nu\\\nv\\\nw\\\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n|\\mathbf{b_\\lambda}| \\cos H \\ \n|\\mathbf{b_\\lambda}| \\sin\\delta_0 \\sin H\\\n-|\\mathbf{b_\\lambda}|\\cos\\delta_0\\sin H\\\n\\end{pmatrix} \n\\end{equation}\nwhen $H = 6^\\text{h}$ (West)\n\\begin{equation}\n\\begin{pmatrix}\nu\\\nv\\\nw\\\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n0 \\ \n|\\mathbf{b_\\lambda}|\\sin\\delta_0\\\n|\\mathbf{b_\\lambda}|\\cos\\delta_0\\\n\\end{pmatrix} \n\\end{equation}\nwhen $H = 0^\\text{h}$ (South)\n\\begin{equation}\n\\begin{pmatrix}\nu\\\nv\\\nw\\\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n|\\mathbf{b_\\lambda}| \\ \n0\\\n0\\\n\\end{pmatrix} \n\\end{equation}\nwhen $H = -6^\\text{h}$ (East)\n\\begin{equation}\n\\begin{pmatrix}\nu\\\nv\\\nw\\\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n0 \\ \n-|\\mathbf{b_\\lambda}|\\sin\\delta_0\\\n-|\\mathbf{b_\\lambda}|\\cos\\delta_0\n\\end{pmatrix} \n\\end{equation}\nIn this case, one can notice that we always have a relationship between $u$, $v$ and $|\\mathbf{b_\\lambda}|$:\n$$ u^2+\\left( \\frac{v}{\\sin\\delta_0}\\right) ^2=|\\mathbf{b_\\lambda}|^2$$ \n<div class=warn>\n<b>Warning:</b> The $\\sin\\delta_0$ factor, appearing in the previous equation, can be interpreted as a compression factor.\n</div>\n\n4.5.1.3 Sampling the visibility plane with $uv$-tracks\n4.5.1.3.1 Simulating a baseline\nWhen we have an EW baseline, some equations simplify.\nFirstly, $XYZ = [0~d~0]^T$, where $d$ is the baseline length measured in wavelengths.\nSecondly, we have the following relationships: $u = d\\cos(H)$, $v = d\\sin(H)\\sin(\\delta)$,\nwhere $H$ is the hour angle of the field center and $\\delta$ its declination.\nIn this section, we will plot the $uv$-coverage of an EW-baseline whose field center is at two different declinations.", "H = np.linspace(-6,6,600)*(np.pi/12) #Hour angle in radians\nd = 100 #We assume that we have already divided by wavelength\n\ndelta = 60*(np.pi/180) #Declination in degrees\nu_60 = d*np.cos(H)\nv_60 = d*np.sin(H)*np.sin(delta)", "<span style=\"background-color:red\">TLG:AC: Add the following figures. This is specifically for an EW array. They will add some more insight. 
</span>\n<img src='figures/EW_1_d.svg' width=40%>\n<img src='figures/EW_2_d.svg' width=40%>\n<img src='figures/EW_3_d.svg' width=40%>\n4.5.1.3.2 Simulating the sky\nLet us populate our sky with three sources, with positions given in RA ($\\alpha$) and DEC ($\\delta$):\n* Source 1: (5h 32m 0.4s,60$^{\\circ}$-17' 57'') - 1 Jy\n* Source 2: (5h 36m 12.8s,-61$^{\\circ}$ 12' 6.9'') - 0.5 Jy\n* Source 3: (5h 40m 45.5s,-61$^{\\circ}$ 56' 34'') - 0.2 Jy\nWe place the field center at $(\\alpha_0,\\delta_0) = $ (5h 30m,60$^{\\circ}$).", "RA_sources = np.array([5+30.0/60,5+32.0/60+0.4/3600,5+36.0/60+12.8/3600,5+40.0/60+45.5/3600])\nDEC_sources = np.array([60,60+17.0/60+57.0/3600,61+12.0/60+6.9/3600,61+56.0/60+34.0/3600])\nFlux_sources_labels = np.array([\"\",\"1 Jy\",\"0.5 Jy\",\"0.2 Jy\"])\nFlux_sources = np.array([1,0.5,0.1]) #in Jy\nstep_size = 200\nprint \"Phase center Source 1 Source 2 Source3\"\nprint repr(\"RA=\"+str(RA_sources)).ljust(2)\nprint \"DEC=\"+str(DEC_sources)", "We then convert the ($\\alpha$,$\\delta$) to $l,m$: <span style=\"background-color:red\">TLG:AC:Point to Chapter 3.</span>\n* $l = \\cos \\delta \\sin \\Delta \\alpha$\n* $m = \\sin \\delta\\cos\\delta_0 -\\cos \\delta\\sin\\delta_0\\cos\\Delta \\alpha$\n* $\\Delta \\alpha = \\alpha - \\alpha_0$", "RA_rad = np.array(RA_sources)*(np.pi/12)\nDEC_rad = np.array(DEC_sources)*(np.pi/180)\nRA_delta_rad = RA_rad-RA_rad[0]\n\nl = np.cos(DEC_rad)*np.sin(RA_delta_rad)\nm = (np.sin(DEC_rad)*np.cos(DEC_rad[0])-np.cos(DEC_rad)*np.sin(DEC_rad[0])*np.cos(RA_delta_rad))\nprint \"l=\",l*(180/np.pi)\nprint \"m=\",m*(180/np.pi)\n\npoint_sources = np.zeros((len(RA_sources)-1,3))\npoint_sources[:,0] = Flux_sources\npoint_sources[:,1] = l[1:]\npoint_sources[:,2] = m[1:]", "The source and phase centre coordinates are now given in degrees.", "%matplotlib inline\nfig = plt.figure(figsize=(10,10))\nax = fig.add_subplot(111)\nplt.xlim([-4,4])\nplt.ylim([-4,4])\nplt.xlabel(\"$l$ [degrees]\")\nplt.ylabel(\"$m$ [degrees]\")\nplt.plot(l[0],m[0],\"bx\")\nplt.hold(\"on\")\nplt.plot(l[1:]*(180/np.pi),m[1:]*(180/np.pi),\"ro\") \ncounter = 1\nfor xy in zip(l[1:]*(180/np.pi)+0.25, m[1:]*(180/np.pi)+0.25): \n ax.annotate(Flux_sources_labels[counter], xy=xy, textcoords='offset points',horizontalalignment='right',\n verticalalignment='bottom') \n counter = counter + 1\n \nplt.grid()", "Figure 4.5.6: Distribution of the simulated sky in the $l$,$m$ plane.\n4.5.1.3.3 Simulating an observation\nWe will now create a fully-filled $uv$-plane, and sample it using the EW-baseline track we created in the first section. We will be ignoring the $w$-term for the sake of simplicity.", "u = np.linspace(-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10, num=step_size, endpoint=True)\nv = np.linspace(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10, num=step_size, endpoint=True) \nuu, vv = np.meshgrid(u, v)\nzz = np.zeros(uu.shape).astype(complex)", "We create the dimensions of our visibility plane.", "s = point_sources.shape\nfor counter in xrange(1, s[0]+1):\n A_i = point_sources[counter-1,0]\n l_i = point_sources[counter-1,1]\n m_i = point_sources[counter-1,2]\n zz += A_i*np.exp(-2*np.pi*1j*(uu*l_i+vv*m_i))\nzz = zz[:,::-1]", "We create our fully-filled visibility plane. With a \"perfect\" interferometer, we could sample the entire $uv$-plane. Since we only have a finite amount of antennas, this is never possible in practice. Recall that our sky brightness $I(l,m)$ is related to our visibilites $V(u,v)$ via the Fourier transform. 
For a bunch of point sources we can therefore write:\n$$V(u,v)=\\mathcal{F}{I(l,m)} = \\mathcal{F}{\\sum_k A_k \\delta(l-l_k,m-m_k)} = \\sum_k A_k e^{-2\\pi i (ul_i+vm_i)}$$\nLet's compute the total visibilities for our simulated sky.", "u_track = u_60\nv_track = v_60\nz = np.zeros(u_track.shape).astype(complex) \n\ns = point_sources.shape\nfor counter in xrange(1, s[0]+1):\n A_i = point_sources[counter-1,0]\n l_i = point_sources[counter-1,1]\n m_i = point_sources[counter-1,2]\n z += A_i*np.exp(-1*2*np.pi*1j*(u_track*l_i+v_track*m_i))", "Below we sample our visibility plane on the $uv$-track derived in the first section, i.e. $V(u_t,v_t)$.", "plt.figure(figsize=(12,6))\n\nplt.subplot(121)\nplt.imshow(zz.real,extent=[-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10,-1*(np.amax(abs(v_60)))-10, \\\n np.amax(abs(v_60))+10])\nplt.plot(u_60,v_60,\"k\")\nplt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])\nplt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)\nplt.xlabel(\"u\")\nplt.ylabel(\"v\")\nplt.title(\"Real part of visibilities\")\n\nplt.subplot(122)\nplt.imshow(zz.imag,extent=[-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10,-1*(np.amax(abs(v_60)))-10, \\\n np.amax(abs(v_60))+10])\nplt.plot(u_60,v_60,\"k\")\nplt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])\nplt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)\nplt.xlabel(\"u\")\nplt.ylabel(\"v\")\nplt.title(\"Imaginary part of visibilities\")", "Figure 4.5.7: Real and imaginary parts of the visibility function. The black curve is the portion of the $uv$ track crossing the visibility.\nWe now plot the sampled visibilites as a function of time-slots, i.e $V(u_t(t_s),v_t(t_s))$.", "plt.figure(figsize=(12,6))\nplt.subplot(121)\nplt.plot(z.real)\nplt.xlabel(\"Timeslots\")\nplt.ylabel(\"Jy\")\nplt.title(\"Real: sampled visibilities\")\n\nplt.subplot(122)\nplt.plot(z.imag)\nplt.xlabel(\"Timeslots\")\nplt.ylabel(\"Jy\")\nplt.title(\"Imag: sampled visibilities\")", "Figure 4.5.8: Real and imaginary parts of the visibility sampled by the black curve in Fig. 4.5.7, plotted as a function of time.", "plt.figure(figsize=(12,6))\nplt.subplot(121)\nplt.imshow(abs(zz),\n extent=[-1*(np.amax(np.abs(u_60)))-10,\n np.amax(np.abs(u_60))+10,\n -1*(np.amax(abs(v_60)))-10,\n np.amax(abs(v_60))+10])\nplt.plot(u_60,v_60,\"k\")\nplt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])\nplt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)\nplt.xlabel(\"u\")\nplt.ylabel(\"v\")\nplt.title(\"Amplitude of visibilities\")\n\nplt.subplot(122)\nplt.imshow(np.angle(zz),\n extent=[-1*(np.amax(np.abs(u_60)))-10,\n np.amax(np.abs(u_60))+10,\n -1*(np.amax(abs(v_60)))-10,\n np.amax(abs(v_60))+10])\nplt.plot(u_60,v_60,\"k\")\nplt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])\nplt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)\nplt.xlabel(\"u\")\nplt.ylabel(\"v\")\nplt.title(\"Phase of visibilities\")", "Figure 4.5.9: Amplitude and Phase of the visibility function. The black curve is the portion of the $uv$ track crossing the visibility.", "plt.figure(figsize=(12,6))\nplt.subplot(121)\nplt.plot(abs(z))\nplt.xlabel(\"Timeslots\")\nplt.ylabel(\"Jy\")\nplt.title(\"Abs: sampled visibilities\")\n\nplt.subplot(122)\nplt.plot(np.angle(z))\nplt.xlabel(\"Timeslots\")\nplt.ylabel(\"Jy\")\nplt.title(\"Phase: sampled visibilities\")", "Figure 4.5.10: Amplitude and Phase of the visibility sampled by the black curve in Fig. 
4.5.7, plotted as a function of time.\n4.5.1.3.4 \"Real-life\" visibility\nIn the following figure, we present a collection of visibility measurements taken with different baselines, as a function of time. These measurements come from a real LOFAR dataset observing Cygnus A (Fig. 4.5.11 &#10549;), a powerful radio source.\nEach color corresponds to a different baseline measurement and, consequently, to a different sampling of the same visibility function along a different $uv$ track.\n<a id=\"vis:fig:4411\"></a> <!---\\label{vis:eq:4411}--->\n<img src='figures/cygnusA.jpg' width=30%>\nFigure 4.5.11: Cygnus A at 21 cm.\n<a id=\"vis:fig:4412\"></a> <!---\\label{vis:eq:4412}--->\n<img src='figures/baselines.jpg' width=70%>\nFigure 4.5.12: Visibility amplitude as a function of time.\nFig. 4.5.12 &#10549; shows a plot of the amplitudes of all the visibility samples from our observation of Cygnus A. The large number of antennas makes its interpretation difficult. Even the inspection of a single visibility's amplitude (i.e. a single $uv$ track) is hard to interpret due to the source's intrinsic complexity. Let us see what happens if we plot the same information as a function of the $uv$-distance, $r_{uv}$.\n<a id=\"vis:fig:4413\"></a> <!---\\label{vis:eq:4413}--->\n<img src='figures/baseline-uvdist.jpg' width=70%>\nFigure 4.5.13: Visibility amplitude as a function of $r_{uv}$.\nFig. 4.5.13 &#10549; displays the same information as Fig. 4.5.12 &#10549;, this time as a function of $r_{uv}$. It should be quite clear that, as in $\\S$ 4.4 &#10142;, we are stacking the radial plots of the visibility function. The interpretation of these radial plots provides us with information about the size of the source. For Fig. 4.5.13 &#10549; in particular, when the amplitude of the visibility goes to zero, one characteristic size of the source has been resolved.\nFrom these plots, it is clear that the more baselines we have, the better the sampling of the visibility function.\nIn the next section, we discuss how astronomers improve their $uv$ coverage.\n<p class=conclusion>\n <font size=4><b>Important things to remember</b></font>\n <br>\n <br>\n\n&bull; Each individual baseline samples the visibility function along a single $uv$ track.<br>\n&bull; The $uv$ tracks are ellipses whose parameters depend on the latitude and declination of observation.<br>\n&bull; The polar (resp. 
equatorial) interferometer gives circular (linear) $uv$ tracks.<br>\n&bull; Accumulating samples over time enhances the sampling of the visibility function, thus improving our knowledge of the source.<br>\n\n</p>\n\n\n\nNext: 4.5.2 UV Coverage: Improving Your Coverage\n\nFormat status:\n\n<span style=\"background-color:green\">&nbsp;&nbsp;&nbsp;&nbsp;</span> : LF: 09/02/2017\n<span style=\"background-color:green\">&nbsp;&nbsp;&nbsp;&nbsp;</span> : NC: 09/02/2017\n<span style=\"background-color:green\">&nbsp;&nbsp;&nbsp;&nbsp;</span> : RF: 09/02/2017\n<span style=\"background-color:green\">&nbsp;&nbsp;&nbsp;&nbsp;</span> : HF: 09/02/2017\n<span style=\"background-color:green\">&nbsp;&nbsp;&nbsp;&nbsp;</span> : GM: 09/02/2017\n<span style=\"background-color:green\">&nbsp;&nbsp;&nbsp;&nbsp;</span> : CC: 09/02/2017\n<span style=\"background-color:green\">&nbsp;&nbsp;&nbsp;&nbsp;</span> : CL: 09/02/2017\n<span style=\"background-color:green\">&nbsp;&nbsp;&nbsp;&nbsp;</span> : ST: 09/02/2017\n<span style=\"background-color:green\">&nbsp;&nbsp;&nbsp;&nbsp;</span> : FN: 09/02/2017\n<span style=\"background-color:green\">&nbsp;&nbsp;&nbsp;&nbsp;</span> : TC: 09/02/2017\n<span style=\"background-color:green\">&nbsp;&nbsp;&nbsp;&nbsp;</span> : XX: 09/02/2017\n\n<div class=warn><b>Future Additions:</b></div>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
elsuizo/Charlas_presentaciones
Python_scientific/numpy_matplot_ejemplos.ipynb
gpl-2.0
[ "<img src=\"files/Images/Numpylogo.png\" style=\"float: left;\"/>\n<div style=\"clear: both;\">\n\n## Numpy\n\n### Biblioteca(modulo) numérico de Python\n\n* Arrays multimensionales (ndarray)\n* Más cercano al hardware(numéricamente más eficiente)\n* Designado para aplicaciones científicas\n* Utiliza memoria de manera más eficiente\n* Todos los modulos cientificos lo utilizan como tipo de dato básico(Opencv a partir de Opencv2¡¡¡¡)\n\n##Under the hood: la dispocisión de memoria en un numpy array es :\n\nbloque de memoria + esquema de indexado + descriptor del tipo de dato\n\n* datos planos\n* como alocar elementos\n* como interpretar elementos\n\n\n\n\n\n<img src=\"files/Images/threefundamental1.png\" style=\"float: left;\"/>\n<div style=\"clear: both;\">", "#Importamos el modulo numpy con el alias np\nimport numpy as np\n\n#Creo un array \na = np.array([1,0,0])\na\n\ntype(a)", "Python posee por defecto un tipo de datos que se asemeja(listas), pero es numéricamente ineficiente", "#Ejemplo creo una lista de Python de 0 a 1000 y calculo el cuadrado de cada elemento\nL = range(1000)\n\n\n%%timeit\n[i**2 for i in L]\n\n#Ahora hago lo mismo con Numpy\na = np.arange(1000)\n\n%%timeit\na**2\n\nprint 'Numpy es '\nprint 111/5.4\nprint 'veces mas rapido'", "Caracteristicas y utilidades principales:", "#Creando arrays\n\na = np.array([1,1,1]) # 1D\nb = np.array([[1,1,1],[1,1,1]]) #2d (Matrix)\nc = np.array([[[1,1,1],[1,1,1],[1,1,1]]]) #3D (Tensor...)\n\n\nprint a.shape\nprint b.shape\nprint c.shape\n\n#Podemos crear arrays predeterminados con funciones muy utiles\n\na = np.arange(10) # un array de enteros 0 a 10(no lo incluye) \na\n\n# Array de 1 a 9 con paso 2\nb = np.arange(1, 9, 2)\nb\n\n#Como en Matlab\na = np.ones((3,3))\na\n\nb = np.zeros((5,5))\nb\n\nc = np.eye(10)\nc\n\nd = np.diag(np.array([1, 2, 3, 4]))\nd\n\n#Complex numbers\ne = np.array([1+2j, 3+4j, 5+6*1j])\ne\n\n\n#boolean\ne = np.array([True, False, False, True])\ne\n\n#String\nf = np.array(['Bonjour', 'Hello', 'Hallo',])\nf", "Indexing and slicing\nLos items de un array pueden accederse de una manera natural como en Matlab(hay que tener en cuenta que los indices comienzan en 0)", "a = np.arange(10)\na\n\n#creo una tupla \na[0],a[2],a[-1]\n\n# Slicing [comienzo:final:paso]\n# Los tres no son necesarios explicitamente ya que por default comienzo=0, final=[-1] y el paso=1\n\na[2:8:2]\n\n\n\na[::4] # solo cambiamos el paso", "Fancy indexing\nHay más maneras de indexar y asignar", "np.random.seed(3)\na = np.random.random_integers(0, 20, 15)\na\n\n\n(a % 3 == 0)\n\nmask = (a % 3 == 0)\na_multiplos_3 = a[mask]\na_multiplos_3\n\n#Puedo indexar y asignar al mismo tiempo\na[a % 3 == 0] = -1\na", "Elementwise operations\nOperar con funciones en cada elemento de un array(NO USE FOR)", "a = np.array([1, 2, 3, 4])\na\n\n\n#todas las operaciones aritmeticas funcionan\na+1\n\nj = np.arange(5)\n2**(j + 1) - j\n\n#Multiplicacion ES ELEMENTO A ELEMENTO\na * a \n\n#Para hacer multiplicaciones matriciales usamos dot\n\nb = np.random.rand(3,3)\nc = np.random.rand(3,3)\n\nnp.dot(b,c)\n\n#cado objeto ndarray tiene muuchos metodos eje\nc.sum()", "Optimizaciones con Fortran y C\n<img src=\"files/Images/index.png\" style=\"float: left;\"/>\n<div style=\"clear: both;\">", "%%file hellofortran.f\nC File hellofortran.f\n subroutine hellofortran (n)\n integer n\n \n do 100 i=0, n\n print *, \"Hola Soy Fortran tengo muuchos años\"\n100 continue\n end", "Generamos un modulo de Python con f2py", "!f2py -c -m hellofortran hellofortran.f\n", "Importamos el modulo que 
generamos y lo utilizamos", "%%file hello.py\nimport hellofortran\n\nhellofortran.hellofortran(5)\n\n# corremos el script\n!python hello.py", "Ejemplo 2: suma acumulativa, entrada vector, salida vector\nPrimero hacemos la implementacion en Python puro, este tipo de suma es particularmente costoso ya que se necesitan loops for", "# Esta no es la mejor implementacion\n#Porque el loop esta implementado en Python\ndef py_dcumsum(a):\n b = np.empty_like(a)\n b[0] = a[0]\n for n in range(1,len(a)):\n b[n] = b[n-1]+a[n]\n return b", "Ahora hacemos la implementacion en Fortran", "%%file dcumsum.f\nc File dcumsum.f\n subroutine dcumsum(a, b, n)\n double precision a(n)\n double precision b(n)\n integer n\ncf2py intent(in) :: a\ncf2py intent(out) :: b\ncf2py intent(hide) :: n\n\n b(1) = a(1)\n do 100 i=2, n\n b(i) = b(i-1) + a(i)\n100 continue\n end", "Compilamos directamente a un modulo de python", "!f2py -c dcumsum.f -m dcumsum\n\n#importamos el modulo recien creado\n\nimport dcumsum\n\n\n\na = np.array([1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0])\n\npy_dcumsum(a)\n\ndcumsum.dcumsum(a)", "Ahora los ponemos a prueba", "a = np.random.rand(10000)\n\n%%timeit \npy_dcumsum(a)\n\n%%timeit\ndcumsum.dcumsum(a)\n\n%%timeit\na.cumsum()", "Guauuuu¡¡¡¡\nEjemplo de vectorizacion", "%run srinivasan_pruebas.py\n\n%run srinivasan_pruebas_vec.py", "<img src=\"files/Images/logo2.png\" style=\"float: left;\"/>\n<div style=\"clear: both;\">\n\n#Matplotlib\n\n## Es el modulo para realizar y visualizar graficos\n\n[Pagina oficial](http://matplotlib.org/)", "#para que los graficos queden empotrados\n%pylab inline\n\nX = np.linspace(-np.pi, np.pi, 256, endpoint=True)\nC, S = np.cos(X), np.sin(X)\n\nplot(X, C)\nplot(X, S)\n\n\n\nt = 2 * np.pi / 3\n\nplot(X, C, color=\"blue\", linewidth=2.5, linestyle=\"-\", label=\"cosine\")\nplot(X, S, color=\"red\", linewidth=2.5, linestyle=\"-\", label=\"sine\")\nplot([t, t], [0, np.cos(t)], color='blue', linewidth=2.5, linestyle=\"--\")\nscatter([t, ], [np.cos(t), ], 50, color='blue')\n\nannotate(r'$sin(\\frac{2\\pi}{3})=\\frac{\\sqrt{3}}{2}$',\n xy=(t, np.sin(t)), xycoords='data',\n xytext=(+10, +30), textcoords='offset points', fontsize=16,\n arrowprops=dict(arrowstyle=\"->\", connectionstyle=\"arc3,rad=.2\"))\n\nplot([t, t],[0, np.sin(t)], color='red', linewidth=2.5, linestyle=\"--\")\nscatter([t, ],[np.sin(t), ], 50, color='red')\n\nannotate(r'$cos(\\frac{2\\pi}{3})=-\\frac{1}{2}$',\n xy=(t, np.cos(t)), xycoords='data',\n xytext=(-90, -50), textcoords='offset points', fontsize=16,\n arrowprops=dict(arrowstyle=\"->\", connectionstyle=\"arc3,rad=.2\"))\n", "Ejemplos\nEjemplo 1 Ecuaciones de Navier-Stokes\ncon convección no-lineal en 2D\nNow we solve 2D Convection, represented by the pair of coupled partial differential equations below: \n$$\\frac{\\partial u}{\\partial t} + u \\frac{\\partial u}{\\partial x} + v \\frac{\\partial u}{\\partial y} = 0$$\n$$\\frac{\\partial v}{\\partial t} + u \\frac{\\partial v}{\\partial x} + v \\frac{\\partial v}{\\partial y} = 0$$\nDiscretizing these equations using the methods we've applied previously yields:\n$$\\frac{u_{i,j}^{n+1}-u_{i,j}^n}{\\Delta t} + u_{i,j}^n \\frac{u_{i,j}^n-u_{i-1,j}^n}{\\Delta x} + v_{i,j}^n \\frac{u_{i,j}^n-u_{i,j-1}^n}{\\Delta y} = 0$$\n$$\\frac{v_{i,j}^{n+1}-v_{i,j}^n}{\\Delta t} + u_{i,j}^n \\frac{v_{i,j}^n-v_{i-1,j}^n}{\\Delta x} + v_{i,j}^n \\frac{v_{i,j}^n-v_{i,j-1}^n}{\\Delta y} = 0$$\nInitial Conditions\nThe initial conditions are the same that we used for 1D convection, applied in both the x and y directions. 
\n$$u,\\ v\\ = \\begin{cases}\\begin{matrix}\n2 & \\text{for } x,y \\in (0.5, 1)\\times(0.5,1) \\cr\n1 & \\text{everywhere else}\n\\end{matrix}\\end{cases}$$\nBoundary Conditions\nThe boundary conditions hold u and v equal to 1 along the boundaries of the grid\n.\n$$u = 1,\\ v = 1 \\text{ for } \\begin{cases} \\begin{matrix}x=0,2\\cr y=0,2 \\end{matrix}\\end{cases}$$", "from mpl_toolkits.mplot3d import Axes3D\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n###variable declarations\nnx = 101\nny = 101\nnt = 80\nc = 1\ndx = 2.0/(nx-1)\ndy = 2.0/(ny-1)\nsigma = .2\ndt = sigma*dx\n\nx = np.linspace(0,2,nx)\ny = np.linspace(0,2,ny)\n\nu = np.ones((ny,nx)) ##create a 1xn vector of 1's\nv = np.ones((ny,nx))\nun = np.ones((ny,nx))\nvn = np.ones((ny,nx))\n\n###Assign initial conditions\n\nu[.5/dy:1/dy+1,.5/dx:1/dx+1]=2 ##set hat function I.C. : u(.5<=x<=1 && .5<=y<=1 ) is 2\nv[.5/dy:1/dy+1,.5/dx:1/dx+1]=2 ##set hat function I.C. : u(.5<=x<=1 && .5<=y<=1 ) is 2\n\nfor n in range(nt+1): ##loop across number of time steps\n un[:] = u[:]\n vn[:] = v[:]\n\n u[1:,1:]=un[1:,1:]-(un[1:,1:]*dt/dx*(un[1:,1:]-un[0:-1,1:]))-vn[1:,1:]*dt/dy*(un[1:,1:]-un[1:,0:-1]) \n v[1:,1:]=vn[1:,1:]-(un[1:,1:]*dt/dx*(vn[1:,1:]-vn[0:-1,1:]))-vn[1:,1:]*dt/dy*(vn[1:,1:]-vn[1:,0:-1])\n \n u[0,:] = 1\n u[-1,:] = 1\n u[:,0] = 1\n u[:,-1] = 1\n \n v[0,:] = 1\n v[-1,:] = 1\n v[:,0] = 1\n v[:,-1] = 1\n\nfrom matplotlib import cm ##cm = \"colormap\" for changing the 3d plot color palette\nfig = plt.figure(figsize=(11,7), dpi=100)\nax = fig.gca(projection='3d')\nX,Y = np.meshgrid(x,y)\n\nax.plot_surface(X,Y,u, cmap=cm.coolwarm)\n" ]
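A closing remark on the convection example: the hard-coded cell above can be packaged into a function so that the grid resolution and the $\sigma$ parameter (which sets $\Delta t = \sigma\,\Delta x$) become explicit arguments. This is only a sketch that mirrors the update, initial condition and boundary conditions of the previous cell — the function name and its defaults are ours — and with the default values it reproduces the same final fields.

```python
import numpy as np

def convect_2d(nx=101, ny=101, nt=80, sigma=0.2, length=2.0):
    """2D nonlinear convection with the same backward-difference update
    as the cell above. Returns the grid (x, y) and the fields (u, v)."""
    dx = length / (nx - 1)
    dy = length / (ny - 1)
    dt = sigma * dx                      # time step tied to dx, as above

    x = np.linspace(0.0, length, nx)
    y = np.linspace(0.0, length, ny)

    u = np.ones((ny, nx))
    v = np.ones((ny, nx))

    # hat-function initial condition: u = v = 2 on [0.5, 1] x [0.5, 1]
    j0, j1 = int(round(0.5 / dy)), int(round(1.0 / dy)) + 1
    i0, i1 = int(round(0.5 / dx)), int(round(1.0 / dx)) + 1
    u[j0:j1, i0:i1] = 2.0
    v[j0:j1, i0:i1] = 2.0

    for _ in range(nt + 1):
        un, vn = u.copy(), v.copy()
        u[1:, 1:] = (un[1:, 1:]
                     - un[1:, 1:] * dt / dx * (un[1:, 1:] - un[:-1, 1:])
                     - vn[1:, 1:] * dt / dy * (un[1:, 1:] - un[1:, :-1]))
        v[1:, 1:] = (vn[1:, 1:]
                     - un[1:, 1:] * dt / dx * (vn[1:, 1:] - vn[:-1, 1:])
                     - vn[1:, 1:] * dt / dy * (vn[1:, 1:] - vn[1:, :-1]))
        # boundary conditions: u = v = 1 on all four edges
        for f in (u, v):
            f[0, :] = f[-1, :] = f[:, 0] = f[:, -1] = 1.0

    return x, y, u, v

x, y, u, v = convect_2d()
```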
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Sasanita/nmt-keras
examples/3_decoding_tutorial.ipynb
mit
[ "NMT-Keras tutorial\n3. Decoding with a trained Neural Machine Translation Model\nNow, we'll load from disk a trained Neural Machine Translation (NMT) model. We'll apply it for translating new text. In this case, we want to translate the 'test' split of our dataset.\nThis tutorial assumes that you followed both previous tutorials.\nAs before, let's import some stuff and load the dataset instance.", "from config import load_parameters\nfrom data_engine.prepare_data import keep_n_captions\nfrom keras_wrapper.cnn_model import loadModel\nfrom keras_wrapper.dataset import loadDataset\nparams = load_parameters()\ndataset = loadDataset('datasets/Dataset_tutorial_dataset.pkl')", "Since we want to translate a new data split ('test') we must add it to the dataset instance, just as we did before (at the first tutorial). In case we also had the refences of the test split and we wanted to evaluate it, we can add it to the dataset. Note that this is not mandatory and we could just predict without evaluating.", "dataset.setInput('examples/EuTrans/DATA/test.es',\n 'test',\n type='text',\n id='source_text',\n pad_on_batch=True,\n tokenization='tokenize_none',\n fill='end',\n max_text_len=30,\n min_occ=0)\n\ndataset.setInput(None,\n 'test',\n type='ghost',\n id='state_below',\n required=False)", "Now, let's load the translation model. Suppose we want to load the model saved at the end of the epoch 4:", "params['INPUT_VOCABULARY_SIZE'] = dataset.vocabulary_len[params['INPUTS_IDS_DATASET'][0]]\nparams['OUTPUT_VOCABULARY_SIZE'] = dataset.vocabulary_len[params['OUTPUTS_IDS_DATASET'][0]]\n\n# Load model\nnmt_model = loadModel('trained_models/tutorial_model', 4)", "Once we loaded the model, we just have to invoke the sampling method (in this case, the Beam Search algorithm) for the 'test' split:", "params_prediction = {'max_batch_size': 50,\n 'n_parallel_loaders': 8,\n 'predict_on_sets': ['test'],\n 'beam_size': 12,\n 'maxlen': 50,\n 'model_inputs': ['source_text', 'state_below'],\n 'model_outputs': ['target_text'],\n 'dataset_inputs': ['source_text', 'state_below'],\n 'dataset_outputs': ['target_text'],\n 'normalize': True,\n 'alpha_factor': 0.6 \n }\npredictions = nmt_model.predictBeamSearchNet(dataset, params_prediction)['test']", "Up to this moment, in the variable 'predictions', we have the indices of the words of the hypotheses. We must decode them into words. For doing this, we'll use the dictionary stored in the dataset object:", "from keras_wrapper.utils import decode_predictions_beam_search\nvocab = dataset.vocabulary['target_text']['idx2words']\npredictions = decode_predictions_beam_search(predictions,\n vocab,\n verbose=params['VERBOSE'])", "Finally, we store the system hypotheses:", "filepath = nmt_model.model_path+'/' + 'test' + '_sampling.pred' # results file\nfrom keras_wrapper.extra.read_write import list2file\nlist2file(filepath, predictions)", "If we have the references of this split, we can also evaluate the performance of our system on it. First, we must add them to the dataset object:", "# In case we had the references of this split, we could also load the split and evaluate on it\ndataset.setOutput('examples/EuTrans/DATA/test.en',\n 'test',\n type='text',\n id='target_text',\n pad_on_batch=True,\n tokenization='tokenize_none',\n sample_weights=True,\n max_text_len=30,\n max_words=0)\nkeep_n_captions(dataset, repeat=1, n=1, set_names=['test'])", "Next, we call the evaluation system: The COCO package. 
Although its main usage is for multimodal captioning, we can use it in machine translation:", "from keras_wrapper.extra.evaluation import select\nmetric = 'coco'\n# Apply sampling\nextra_vars = dict()\nextra_vars['tokenize_f'] = eval('dataset.' + 'tokenize_none')\nextra_vars['language'] = params['TRG_LAN']\nextra_vars['test'] = dict()\nextra_vars['test']['references'] = dataset.extra_variables['test']['target_text']\nmetrics = select[metric](pred_list=predictions,\n verbose=1,\n extra_vars=extra_vars,\n split='test')" ]
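The call above returns its scores rather than printing them. Assuming `metrics` behaves like a plain dictionary mapping metric names to values (adjust if your version of the evaluation wrapper returns something else), a final cell can report them and keep a copy next to the hypotheses file:

```python
# Report the scores computed above and store them beside the predictions.
# Assumption: `metrics` is a dict of metric name -> value.
for name in sorted(metrics):
    print(name + ': ' + str(metrics[name]))

scores_path = nmt_model.model_path + '/test_sampling.scores'
with open(scores_path, 'w') as f:
    for name in sorted(metrics):
        f.write(name + ': ' + str(metrics[name]) + '\n')
```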
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
marcinofulus/teaching
ML_SS2017/linear_regression_from_np_to_tf.ipynb
gpl-3.0
[ "Regresja liniowa\nZamiast:\n<p>$$  A x =  b$$</p>\n<p>rozwiązujemy</p>\n<p>$$ A^T A x = A^T b.$$</p>\n<p>Inaczej mówiąc, niech błąd:</p>\n<p>$$  r=b-A x$$</p>\n<p>leży w lewym jądrze operatora A:</p>\n<p>$$A^T r =A^T( b-Ax) = A^T b-A^TAx = 0.$$</p>\n<p> </p>\n<p>$$ A^T A x = A^T b.$$</p>\n<ol> </ol>\n\nMożna też zarządać znikania gradientu kwadratu odchylenia:\n$$ \\frac{\\partial}{\\partial x_k} (A_{ij} x_j - b_i) (A_{il} x_l - b_i) = 0$$ \n$$ A_{ij} \\delta_{jk} (A_{il} x_l - b_i) + A_{il} \\delta_{lk} (A_{ij} x_j - b_i) =0 $$\n$$ A^TAx - A^T b + A^TAx - A^T b =0 $$\n$$ A^TAx - A^T b =0 $$", "%matplotlib notebook\nimport tensorflow as tf\nconfig = tf.ConfigProto()\nconfig.gpu_options.allow_growth = True\nimport numpy as np \nimport matplotlib.pyplot as plt\n\n\n\nlearning_rate = 0.01\ntraining_epochs = 1000\ndisplay_step = 50\n\ntrain_X = np.asarray([3.3,4.4,5.5,6.71,6.93,4.168,9.779,6.182,7.59,2.167,\n 7.042,10.791,5.313,7.997,5.654,9.27,3.1])\ntrain_Y = np.asarray([1.7,2.76,2.09,3.19,1.694,1.573,3.366,2.596,2.53,1.221,\n 2.827,3.465,1.65,2.904,2.42,2.94,1.3])\nn_samples = train_X.shape[0]", "Macierz $A$ dla regresji liniowej wynosi:", "import numpy as np \nM = np.vstack([np.ones_like(train_X),train_X]).T\nM\n\nprint (np.dot(M.T,M))\n\nprint(np.dot(M.T,train_Y))", "Współczynniki dokładnie będą wynosiły:", "c = np.linalg.solve(np.dot(M.T,M),np.dot(M.T,train_Y))\nc\n\nplt.plot(train_X, train_Y, 'ro', label='Original data')\nplt.plot(train_X, c[1] * train_X + c[0], label='Fitted line')\nplt.legend()\n\n\nplt.close()", "Optymalizacja metodą iteracyjną,\nNie zakładamy, że mamy problem regresji liniowej.\nUżywamy: https://docs.scipy.org/doc/scipy-0.18.1/reference/tutorial/optimize.html", "from scipy.optimize import minimize\n\ndef cost(c,x=train_X,y=train_Y):\n return sum( (c[0]+x_*c[1]-y_)**2 for (x_,y_) in zip(x,y) )\n\ncost([1,2])\n\nres = minimize(cost, [1,1], method='nelder-mead', options={'xtol': 1e-8, 'disp': True})\n\nres.x\n\nx = np.linspace(-2,2,77)\ny = np.linspace(-2,2,77)\nX,Y = np.meshgrid(x,y)\n\ncost([X,Y]).shape\n\nplt.contourf( X,Y,np.log(cost([X,Y])),cmap='gray')\n\nplt.plot(res.x[0],res.x[1],'o')\n\nnp.min(cost([X,Y]))\n\npx=[]\npy=[]\nfor i in range(20):\n res = minimize(cost, [1,1], options={ 'maxiter':i})\n px.append(res.x[0])\n py.append(res.x[1])\n print(res.x)\n\nplt.plot(px,py,'ro-')\n\nimport sympy\nfrom sympy.abc import x,y\nsympy.init_printing(use_latex='mathjax')\n\nf_symb = cost([x,y]).expand()\n\nf_symb.diff(x)\n\nF = sympy.lambdify((x,y),f_symb,np)\nFx = sympy.lambdify((x,y),f_symb.diff(x),np)\nFy = sympy.lambdify((x,y),f_symb.diff(y),np)\nF(1,1),cost([1,1])\n\nx0,y0 = -1,1\n\nh = 0.01/(2*17) \n\nfor i in range(500):\n plt.plot(x0,y0,'go')\n #print(i,x0,y0)\n x0 += -h * Fx(x0,y0)\n y0 += -h * Fy(x0,y0)\n", "Tensor flow - gradient descend", "# tf Graph Input\nX = tf.placeholder(\"float\")\nY = tf.placeholder(\"float\")\n\n# Set model weights\nW = tf.Variable(1.0, name=\"weight\")\nb = tf.Variable(1.0, name=\"bias\")\n\n# Construct a linear model\npred = tf.add(tf.multiply(X, W), b)\n\n# Mean squared error\ncost = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples)\n# Gradient descent\noptimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)\n\n# Initializing the variables\ninit = tf.global_variables_initializer()\n\n# TEST\nwith tf.Session() as sess:\n sess.run(init)\n sess.run(tf.assign(W,1.0))\n sess.run(tf.assign(b,2.0))\n\n print(sess.run(b),sess.run(cost, feed_dict={X: train_X, Y: train_Y}))\n\n# Launch the graph\nx_tf_lst = [] \ny_tf_lst 
= [] \nwith tf.Session() as sess:\n sess.run(init)\n\n # Fit all training data\n for epoch in range(training_epochs):\n for (x, y) in zip(train_X, train_Y):\n sess.run(optimizer, feed_dict={X: x, Y: y})\n\n #Display logs per epoch step\n if (epoch+1) % display_step == 0:\n c = sess.run(cost, feed_dict={X: train_X, Y:train_Y})\n print (\"Epoch:\", '%04d' % (epoch+1), \"cost=\", \"{:.9f}\".format(c), \\\n \"W=\", sess.run(W), \"b=\", sess.run(b))\n\n x_tf_lst.append(sess.run(b))\n y_tf_lst.append(sess.run(W))\n \n training_cost = sess.run(cost, feed_dict={X: train_X, Y: train_Y})\n print (\"Training cost=\", training_cost, \"W=\", sess.run(W), \"b=\", sess.run(b), '\\n')\n \n \n\nplt.plot(x_tf_lst,y_tf_lst,'yo')" ]
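As a final check, the gradient-descent trajectory recorded in `x_tf_lst`/`y_tf_lst` can be drawn on the same cost contours as before, together with the exact optimum from the normal equations. The sketch below recomputes both the error surface and the closed-form solution, because the names `cost` and `c` were re-bound to TensorFlow objects in the cells above; the names `b_opt` and `W_opt` are ours.

```python
import numpy as np
import matplotlib.pyplot as plt

# Closed-form fit from the normal equations (recomputed here).
M = np.vstack([np.ones_like(train_X), train_X]).T
b_opt, W_opt = np.linalg.solve(np.dot(M.T, M), np.dot(M.T, train_Y))

# Sum-of-squared-errors surface of the line y = b + W*x, on the same grid as before.
bb, WW = np.meshgrid(np.linspace(-2, 2, 77), np.linspace(-2, 2, 77))
sse = sum((bb + WW * x_ - y_)**2 for x_, y_ in zip(train_X, train_Y))

plt.contourf(bb, WW, np.log(sse), cmap='gray')
plt.plot(x_tf_lst, y_tf_lst, 'yo-', label='TF gradient descent')
plt.plot([b_opt], [W_opt], 'r*', markersize=15, label='normal equations')
plt.xlabel('$b$ (intercept)')
plt.ylabel('$W$ (slope)')
plt.legend()
plt.show()
```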
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
esa-as/2016-ml-contest
GCC_FaciesClassification/01 - Facies Classification - GCC-VALIDATION.ipynb
apache-2.0
[ "Facies Classification using TPOT\nGeorge Crowther - https://www.linkedin.com/in/george-crowther-9669a931?trk=hp-identity-name\nI've had a play with some of the data here and used something of a brute force approach, by creating a large number of additional features and then using the TPOT library to train a model and refine the model parameters. I will be interested to see whether this has over-fitted, as the selected Extra Trees Classifier can do that.\n1. Data Loading and Initial Observations", "# Initial imports for reading data and first observations\nimport pandas as pd\nimport bokeh.plotting as bk\nimport numpy as np\n\nfrom sklearn import preprocessing\nfrom sklearn.model_selection import train_test_split\n\nfrom tpot import TPOTClassifier\n\nbk.output_notebook()\n\n# Input file paths\ntrain_path = r'../training_data.csv'\ntest_path = r'../validation_data_nofacies.csv'\n\n# Read training data to dataframe\ntrain = pd.read_csv(train_path)\n\n# TPOT library requires that the target class is renamed to 'class'\ntrain.rename(columns={'Facies': 'class'}, inplace=True)\n\nformations = {}\nfor i, value in enumerate(train['Formation'].unique()):\n formations[value] = i\n train.loc[train['Formation'] == value, 'Formation'] = i\n\nwells = {}\nfor i, value in enumerate(train['Well Name'].unique()):\n wells[value] = i\n train.loc[train['Well Name'] == value, 'Well Name'] = i\n\nfacies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',\n 'WS', 'D','PS', 'BS']", "Feature construction and data clean-up.\n 1. Z-score normalisation of data.\n 2. Group each of the measurement parameters into quartiles. Most of the classification methods find data like this easier to work with.\n 3. Create a series of 'adjacent' parameters by looking for the above and below depth sample for each well. Create a series of features associated with the above and below parameters.", "train_columns = train.columns[1:]\nstd_scaler = preprocessing.StandardScaler().fit(train[train_columns])\n\ntrain_std = std_scaler.transform(train[train_columns])\n\ntrain_std_frame = train\nfor i, column in enumerate(train_columns):\n train_std_frame.loc[:, column] = train_std[:, i]\n\ntrain = train_std_frame\nmaster_columns = train.columns[4:]\n\ndef in_range(row, vmin, vmax, variable):\n if vmin <= row[variable] < vmax:\n return 1\n else:\n return 0\n \nfor i, column in train[master_columns].iteritems():\n ds = np.linspace(0, 1.0, 5)\n quantiles = [column.quantile(n) for n in ds]\n \n for j in range(len(quantiles) - 1):\n train[i + '_{0}'.format(j)] = train.apply(lambda row: in_range(row, ds[j], ds[j + 1], i), axis = 1)\n\nmaster_columns = train.columns[4:]\n\nabove = []\nbelow = []\nfor i, group in train.groupby('Well Name'):\n \n df = group.sort_values('Depth')\n u = df.shift(-1).fillna(method = 'ffill')\n b = df.shift(1).fillna(method = 'bfill')\n \n above.append(u[master_columns])\n below.append(b[master_columns])\n \nabove_frame = pd.concat(above)\nabove_frame.columns = ['above_'+ column for column in above_frame.columns]\nbelow_frame = pd.concat(below)\nbelow_frame.columns = ['below_'+ column for column in below_frame.columns]\n\nframe = pd.concat((train, above_frame, below_frame), axis = 1)", "4. TPOT", "train_vector = ['class']\ntrain_columns = frame.columns[4:]\n\n# train_f, test_f = train_test_split(frame, test_size = 0.1, random_state = 7)", "TPOT uses a genetic algorithm to tune model parameters for the most effective fit. 
This can take quite a while to process if you want to re-run this part!", "# tpot = TPOTClassifier(verbosity=2, generations=5, max_eval_time_mins=30)\n# tpot.fit(train_f[train_columns], train_f['class'])\n\n# tpot.score(test_f[train_columns], test_f['class'])\n\n# tpot.export('contest_export.py')\n\n!cat contest_export.py\n\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.feature_selection import VarianceThreshold\nfrom sklearn.ensemble import ExtraTreesClassifier\n\nclf = make_pipeline(\n VarianceThreshold(threshold=0.37),\n ExtraTreesClassifier(criterion=\"entropy\", max_features=0.71, n_estimators=500, random_state=49)\n )\n\nclf.fit(frame[train_columns], frame['class'])", "5.0 Workflow for Test Data\nRun this to generate results from output model.", "test_path = r'../validation_data_nofacies.csv'\n\n# Read training data to dataframe\ntest = pd.read_csv(test_path)\n\n# TPOT library requires that the target class is renamed to 'class'\ntest.rename(columns={'Facies': 'class'}, inplace=True)\n\ntest_columns = test.columns\n\nformations = {}\nfor i, value in enumerate(test['Formation'].unique()):\n formations[value] = i\n test.loc[test['Formation'] == value, 'Formation'] = i\n\nwells = {}\nfor i, value in enumerate(test['Well Name'].unique()):\n wells[value] = i\n test.loc[test['Well Name'] == value, 'Well Name'] = i\n\nstd_scaler = preprocessing.StandardScaler().fit(test[test_columns])\ntest_std = std_scaler.transform(test[test_columns])\n\ntest_std_frame = test\nfor i, column in enumerate(test_columns):\n test_std_frame.loc[:, column] = test_std[:, i]\n\ntest = test_std_frame\nmaster_columns = test.columns[3:]\n\ndef in_range(row, vmin, vmax, variable):\n \n if vmin <= row[variable] < vmax:\n return 1\n else:\n return 0\n \nfor i, column in test[master_columns].iteritems():\n ds = np.linspace(0, 1.0, 5)\n quantiles = [column.quantile(n) for n in ds]\n \n for j in range(len(quantiles) - 1):\n test[i + '_{0}'.format(j)] = test.apply(lambda row: in_range(row, ds[j], ds[j + 1], i), axis = 1)\n\nmaster_columns = test.columns[3:]\n\nabove = []\nbelow = []\nfor i, group in test.groupby('Well Name'):\n \n df = group.sort_values('Depth')\n u = df.shift(-1).fillna(method = 'ffill')\n b = df.shift(1).fillna(method = 'bfill')\n \n above.append(u[master_columns])\n below.append(b[master_columns])\n \nabove_frame = pd.concat(above)\nabove_frame.columns = ['above_'+ column for column in above_frame.columns]\nbelow_frame = pd.concat(below)\nbelow_frame.columns = ['below_'+ column for column in below_frame.columns]\n\nframe = pd.concat((test, above_frame, below_frame), axis = 1)\n\ntest_columns = frame.columns[3:]\n\nresult = clf.predict(frame[test_columns])\n\nresult\n\noutput_frame = pd.read_csv(test_path)\noutput_frame['Facies'] = result\noutput_frame.to_csv('Well Facies Prediction - Test Data Set__MATT3.csv')" ]
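Since the introduction raised the question of whether the Extra Trees pipeline over-fits, one inexpensive check is to score it with leave-one-well-out cross-validation on the training wells, which is a more realistic split than random rows for data of this kind. The sketch below assumes you kept copies `train_frame` and `train_cols` of the engineered training frame and its feature columns from section 3 (both names are ours), saved before the cells above reassigned `frame` to the test data, e.g. `train_frame = frame.copy(); train_cols = train_columns`.

```python
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Leave-one-well-out cross-validation of the same pipeline.
# `train_frame` / `train_cols` are assumed copies of the engineered
# training frame and feature columns from section 3 (see note above).
scores = cross_val_score(clone(clf),
                         train_frame[train_cols],
                         train_frame['class'],
                         groups=train_frame['Well Name'],
                         cv=LeaveOneGroupOut())

print("Per-well accuracy: " + str(np.round(scores, 3)))
print("Mean accuracy: " + str(scores.mean()))
```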
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sebastiandres/mat281
clases/Unidad4-MachineLearning/Clase02-Clustering/clustering.ipynb
cc0-1.0
[ "\"\"\"\nIPython Notebook v4.0 para python 2.7\nLibrerías adicionales: numpy, matplotlib\nContenido bajo licencia CC-BY 4.0. Código bajo licencia MIT. (c) Sebastian Flores.\n\"\"\"\n\n# Configuracion para recargar módulos y librerías \n%reload_ext autoreload\n%autoreload 2\n\n%matplotlib inline\n\nfrom IPython.core.display import HTML\n\nHTML(open(\"style/mat281.css\", \"r\").read())", "<header class=\"w3-container w3-teal\">\n<img src=\"images/utfsm.png\" alt=\"\" height=\"100px\" align=\"left\"/>\n<img src=\"images/mat.png\" alt=\"\" height=\"100px\" align=\"right\"/>\n</header>\n<br/><br/><br/><br/><br/>\nMAT281\nAplicaciones de la Matemática en la Ingeniería\nSebastián Flores\nhttps://www.github.com/usantamaria/mat281\nClase anterior\n\nMotivación para Data Science y Machine Learning.\n\nPregunta\n¿Cuáles son las 3 grandes familias de algoritmos?\n\nClustering\nRegresión\nClasificación\n\nIris Dataset\nBuscaremos ilustrar los distintos algoritmos con datos reales. Un conjunto de datos interesante y versatil es el Iris Dataset, que puede utilizarse para clustering, regresión y clasificación.\nFue utilizado por Ronald Fisher en su artículo \"The use of multiple measurements in taxonomic problems\" (1936). \nEl conjunto de datos consiste en 50 muestras de 3 especies de Iris (Iris setosa, Iris virginica y Iris versicolor). \nPara cada flor, se midieron 4 características: largo y ancho de los petalos, y largo y ancho de los sépalos, en centímetros.\nIris Dataset\n<img src=\"images/iris_petal_sepal.png\" alt=\"\" width=\"600px\" align=\"middle\"/>\nIris Dataset\nExploracion de datos", "from sklearn import datasets\nimport matplotlib.pyplot as plt\niris = datasets.load_iris()\n\ndef plot(dataset, ax, i, j):\n ax.scatter(dataset.data[:,i], dataset.data[:,j], c=dataset.target, s=50)\n ax.set_xlabel(dataset.feature_names[i], fontsize=20)\n ax.set_ylabel(dataset.feature_names[j], fontsize=20)\n\n# row and column sharing\nf, ((ax1, ax2), (ax3, ax4), (ax5,ax6)) = plt.subplots(3, 2, figsize=(16,8))\nplot(iris, ax1, 0, 1)\nplot(iris, ax2, 0, 2)\nplot(iris, ax3, 0, 3)\nplot(iris, ax4, 1, 2)\nplot(iris, ax5, 1, 3)\nplot(iris, ax6, 2, 3)\nf.tight_layout()\nplt.show()", "Clustering\nPregunta Crucial: \n¿Si no supiéramos que existen 3 tipos de Iris, seríamos capaces algorítmicamente de encontrar 3 tipos de flores?\nClustering\n\nSe tienen datos sin etiquetar/agrupar.\nSe busca obtener un agrupamiento \"natural\" de los datos.\nNo existen ejemplos de los cuales aprender: método sin supervisar.\nFácil de verificar por inspección visual en 2D y 3D. 
\nDifícil de verificar en dimensiones superiores.\n\nEjemplo de Problemas de Clustering\n\nSegmentación de mercado: \n¿Cómo atendemos mejor a nuestros clientes?\nUbicación de centros de reabastacimiento: \n¿Cómo minimizamos tiempos de entrega?\nCompresión de imágenes: \n¿Cómo minimizamos el espacio destinado al almacenamiento?\n\nUbicación centros de reabastecimiento\n<img src=\"images/reabastecimiento1.png\" width=\"500px\" align=\"middle\"/>\nUbicación centros de reabastecimiento\n<img src=\"images/reabastecimiento2.png\" width=\"500px\" align=\"middle\"/>\nCompresión de Imágenes\nUtilizando todos los colores:\n<img src=\"images/colores.png\" width=\"500px\" align=\"middle\"/>\nCompresión de Imágenes\nUtilizando únicamente 32 colores:\n<img src=\"images/colores_32means.png\" width=\"500px\" align=\"middle\"/>\nCaracterísticas de un Problema de Clustering\n\nDatos de entrada: Conjunto de inputs sin etiquetas.\nDatos de salida: Etiquetas para cada input.\n\nObs: La etiqueta/label típicamente se asocia a un entero (0,1,2, etc.) pero en realidad es cualquier variable categórica. \nAlgoritmos de Clustering\nBuscan utilizar las propiedades inherentes presentes en los datos para organizarlos en grupos de máxima\nsimilitud.\n\nAlgoritmos basados en conectividad: Hierarchical Clustering.\nAlgoritmos basados en densidad: Expectation Maximization\nAlgoritmos basados en centroides: k-means.\n\nk-means\n\n\nInput: set $X$ de $N$ datos $x=(x_1, ..., x_n)$ y un meta-parámetro $k$ con el número de clusters a crear.\n\n\nOutput: Set de $k$ centroides de clusters ($\\mu_l$) y una etiquetación de cada dato $x$ en $X$ indicando a qué cluster pertenece.\n\n\n$x_i$ y $\\mu_l$ son vectores en $\\mathcal{R}^m$.\nLa pertenencia es única. Todos los puntos dentro de un cluster se encuentran mas\ncercanos en distancia al centroide de su cluster que al centroide de otro cluster.\nk-means\nMatemáticamente:\n\\begin{align}\n\\textrm{Minimizar } \\sum_{l=1}^k \\sum_{x_n \\in C_l} ||x_n - \\mu_l ||^2 \\textrm{ respecto a } C_l, \\mu_l. 
\n\\end{align}\nDonde $C_l$ es el cluster l-ésimo.\nEl problema anterior es NP-hard (imposible de resolver en tiempo polinomial, del tipo más difícil de los probleams NP).\nAlgoritmo de Lloyd\nHeurística que converge en pocos pasos a un mínimo local.\nProcedimiento\n\n\nCalcular el centroide del cluster promediando las posiciones de los puntos actualmente en el cluster.\n\n\nActualizar la pertenencia a los clusters utilizando la distancia más cercana a cada centroide.\n\n\n<span class=\"good\">¿Cuándo funciona k-means?<span/>\n\nCuando los clusters son bien definidos y pueden separarse por círculos (n-esferas) de igual tamaño.\n\n<img src=\"images/kmeans1.png\" width=\"600px\" align=\"middle\"/>\n<img src=\"images/kmeans2.png\" width=\"600px\" align=\"middle\"/>\n<img src=\"images/kmeans3.png\" width=\"400px\" align=\"middle\"/>\n<img src=\"images/kmeans4.png\" width=\"400px\" align=\"middle\"/>\n<span class=\"bad\">¿Cuándo falla k-means?<span/>\n\nCuando se selecciona mal el número $k$ de clusters.\nCuando no existe separación clara entre los clusters.\nCuando los clusters son de tamaños muy distintos.\nCuando la inicialización no es apropiada.\n\n<img src=\"images/kmeans4.png\" width=\"400px\" align=\"middle\"/>\n<img src=\"images/kmeans5.png\" width=\"600px\" align=\"middle\"/>\n<img src=\"images/kmeans6.png\" width=\"600px\" align=\"middle\"/>\nEjemplos de k-means", "from mat281_code import iplot\niplot.kmeans(N_points=100, n_clusters=4)", "k-means\n<span class=\"good\">Ventajas<span/>\n\nRápido y sencillo de programar\n\n<span class=\"bad\">Desventajas<span/>\n\nTrabaja en datos continuos, o donde distancias y promedios pueden definirse.\nHeurística depende del puntos iniciales.\nRequiere especificar el número de clusters $k$.\nNo funciona correctamente en todos los casos de clustering, incluso conociendo $k$ correctamente.", "import numpy as np\nfrom scipy.linalg import norm\n\ndef find_centers(X, k, seed=None):\n if seed is None:\n seed = np.random.randint(10000000)\n np.random.seed(seed)\n # Initialize to K random centers\n old_centroids = random_centers(X, k)\n new_centroids = random_centers(X, k)\n while not has_converged(new_centroids, old_centroids):\n old_centroids = new_centroids\n # Assign all points in X to clusters\n clusters = cluster_points(X, old_centroids)\n # Reevaluate centers\n new_centroids = reevaluate_centers(X, clusters, k)\n return (new_centroids, clusters)\n\ndef random_centers(X, k):\n index = np.random.randint(0, X.shape[0], k)\n return X[index, :]\n\ndef has_converged(new_mu, old_mu, tol=1E-6):\n num = norm(np.array(new_mu)-np.array(old_mu))\n den = norm(new_mu)\n rel_error= num/den\n return rel_error < tol\n\ndef cluster_points(X, centroids):\n clusters = []\n for i, x in enumerate(X):\n distances = np.array([norm(x-cj) for cj in centroids])\n clusters.append( distances.argmin())\n return np.array(clusters)\n\ndef reevaluate_centers(X, clusters, k):\n centroids = []\n for j in range(k):\n cj = X[clusters==j,:].mean(axis=0)\n centroids.append(cj)\n return centroids", "Aplicación a datos", "from mat281_code import gendata\nfrom mat281_code import plot\nfrom mat281_code import kmeans\n\nX = gendata.init_blobs(1000, 4, seed=40)\nax = plot.data(X)\n\ncentroids, clusters = kmeans.find_centers(X, k=4)\nplot.clusters(X, centroids, clusters)", "¿Es necesario reinventar la rueda?\nUtilicemos la libreria sklearn.", "from mat281_code import gendata\nfrom mat281_code import plot\nfrom sklearn.cluster import KMeans\n\nX = gendata.init_blobs(10000, 6, 
seed=43)\nplot.data(X)\n\nkmeans = KMeans(n_clusters=6)\nkmeans.fit(X)\ncentroids = kmeans.cluster_centers_\nclusters = kmeans.labels_\nplot.clusters(X, centroids, clusters)", "¿Cómo seleccionar k?\n\nConocimiento previo de los datos.\nPrueba y error.\nRegla del codo (Elbow rule).\nEstimating the number of clusters in a dataset via the gap statistic, Tibshirani, Walther and Hastie (2001).\nSelection of k in k-means, Pham, Dimov y Nguyen (2004).\n\nVolviendo al Iris Dataset\nApliquemos k-means al Iris Dataset y calculemos el error de clasificación.\nMostremos el resultado utilizando la matriz de confusión.\n<img src=\"images/predictionMatrix.png\" alt=\"\" width=\"900px\" align=\"middle\"/>", "import numpy as np\nfrom sklearn import datasets\nfrom sklearn.cluster import KMeans\nfrom sklearn.metrics import confusion_matrix\n\n# Parameters\nn_clusters = 8\n\n# Loading the data\niris = datasets.load_iris()\nX = iris.data\ny_true = iris.target\n\n# Running the algorithm\nkmeans = KMeans(n_clusters)\nkmeans.fit(X)\ny_pred = kmeans.labels_\n\n# Show the classificacion report\ncm = confusion_matrix(y_true, y_pred)\nprint cm\nprint (cm.sum() - np.diag(cm).sum() ) / float(cm.sum()) # 16/100", "Referencias\n\nJake VanderPlas, ESAC Data Analysis and Statistics Workshop 2014, https://github.com/jakevdp/ESAC-stats-2014\nAndrew Ng, Machine Learning CS144, Stanford University." ]
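As a small follow-up to the "¿Cómo seleccionar k?" list above, the elbow rule can be sketched in a few lines on the same Iris data: plot the within-cluster sum of squares (exposed by scikit-learn as `inertia_`) against $k$ and look for the bend. This is only an illustrative sketch; it reloads the data so it can run on its own.

```python
# Elbow rule on the Iris data: within-cluster sum of squares vs. k.
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.cluster import KMeans

X = datasets.load_iris().data

ks = range(1, 11)
inertias = [KMeans(n_clusters=k).fit(X).inertia_ for k in ks]

plt.plot(list(ks), inertias, 'o-')
plt.xlabel('k')
plt.ylabel('within-cluster sum of squares')
plt.title('Elbow rule on the Iris dataset')
plt.grid(True)
plt.show()
```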
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
AhmetHamzaEmra/Deep-Learning-Specialization-Coursera
Improving Deep Neural Networks/Gradient+Checking.ipynb
mit
[ "Gradient Checking\nWelcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking. \nYou are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker. \nBut backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, \"Give me a proof that your backpropagation is actually working!\" To give this reassurance, you are going to use \"gradient checking\".\nLet's do it!", "# Packages\nimport numpy as np\nfrom testCases import *\nfrom gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector", "1) How does gradient checking work?\nBackpropagation computes the gradients $\\frac{\\partial J}{\\partial \\theta}$, where $\\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.\nBecause forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\\frac{\\partial J}{\\partial \\theta}$. \nLet's look back at the definition of a derivative (or gradient):\n$$ \\frac{\\partial J}{\\partial \\theta} = \\lim_{\\varepsilon \\to 0} \\frac{J(\\theta + \\varepsilon) - J(\\theta - \\varepsilon)}{2 \\varepsilon} \\tag{1}$$\nIf you're not familiar with the \"$\\displaystyle \\lim_{\\varepsilon \\to 0}$\" notation, it's just a way of saying \"when $\\varepsilon$ is really really small.\"\nWe know the following:\n\n$\\frac{\\partial J}{\\partial \\theta}$ is what you want to make sure you're computing correctly. \nYou can compute $J(\\theta + \\varepsilon)$ and $J(\\theta - \\varepsilon)$ (in the case that $\\theta$ is a real number), since you're confident your implementation for $J$ is correct. \n\nLets use equation (1) and a small value for $\\varepsilon$ to convince your CEO that your code for computing $\\frac{\\partial J}{\\partial \\theta}$ is correct!\n2) 1-dimensional gradient checking\nConsider a 1D linear function $J(\\theta) = \\theta x$. The model contains only a single real-valued parameter $\\theta$, and takes $x$ as input.\nYou will implement code to compute $J(.)$ and its derivative $\\frac{\\partial J}{\\partial \\theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct. \n<img src=\"images/1Dgrad_kiank.png\" style=\"width:600px;height:250px;\">\n<caption><center> <u> Figure 1 </u>: 1D linear model<br> </center></caption>\nThe diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ (\"forward propagation\"). Then compute the derivative $\\frac{\\partial J}{\\partial \\theta}$ (\"backward propagation\"). \nExercise: implement \"forward propagation\" and \"backward propagation\" for this simple function. 
I.e., compute both $J(.)$ (\"forward propagation\") and its derivative with respect to $\\theta$ (\"backward propagation\"), in two separate functions.", "# GRADED FUNCTION: forward_propagation\n\ndef forward_propagation(x, theta):\n \"\"\"\n Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)\n \n Arguments:\n x -- a real-valued input\n theta -- our parameter, a real number as well\n \n Returns:\n J -- the value of function J, computed using the formula J(theta) = theta * x\n \"\"\"\n \n ### START CODE HERE ### (approx. 1 line)\n J = theta * x\n ### END CODE HERE ###\n \n return J\n\nx, theta = 2, 4\nJ = forward_propagation(x, theta)\nprint (\"J = \" + str(J))", "Expected Output:\n<table style=>\n <tr>\n <td> ** J ** </td>\n <td> 8</td>\n </tr>\n</table>\n\nExercise: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\\theta) = \\theta x$ with respect to $\\theta$. To save you from doing the calculus, you should get $dtheta = \\frac { \\partial J }{ \\partial \\theta} = x$.", "# GRADED FUNCTION: backward_propagation\n\ndef backward_propagation(x, theta):\n \"\"\"\n Computes the derivative of J with respect to theta (see Figure 1).\n \n Arguments:\n x -- a real-valued input\n theta -- our parameter, a real number as well\n \n Returns:\n dtheta -- the gradient of the cost with respect to theta\n \"\"\"\n \n ### START CODE HERE ### (approx. 1 line)\n dtheta = x\n ### END CODE HERE ###\n \n return dtheta\n\nx, theta = 2, 4\ndtheta = backward_propagation(x, theta)\nprint (\"dtheta = \" + str(dtheta))", "Expected Output:\n<table>\n <tr>\n <td> ** dtheta ** </td>\n <td> 2 </td>\n </tr>\n</table>\n\nExercise: To show that the backward_propagation() function is correctly computing the gradient $\\frac{\\partial J}{\\partial \\theta}$, let's implement gradient checking.\nInstructions:\n- First compute \"gradapprox\" using the formula above (1) and a small value of $\\varepsilon$. Here are the Steps to follow:\n 1. $\\theta^{+} = \\theta + \\varepsilon$\n 2. $\\theta^{-} = \\theta - \\varepsilon$\n 3. $J^{+} = J(\\theta^{+})$\n 4. $J^{-} = J(\\theta^{-})$\n 5. $gradapprox = \\frac{J^{+} - J^{-}}{2 \\varepsilon}$\n- Then compute the gradient using backward propagation, and store the result in a variable \"grad\"\n- Finally, compute the relative difference between \"gradapprox\" and the \"grad\" using the following formula:\n$$ difference = \\frac {\\mid\\mid grad - gradapprox \\mid\\mid_2}{\\mid\\mid grad \\mid\\mid_2 + \\mid\\mid gradapprox \\mid\\mid_2} \\tag{2}$$\nYou will need 3 Steps to compute this formula:\n - 1'. compute the numerator using np.linalg.norm(...)\n - 2'. compute the denominator. You will need to call np.linalg.norm(...) twice.\n - 3'. divide them.\n- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation.", "# GRADED FUNCTION: gradient_check\n\ndef gradient_check(x, theta, epsilon = 1e-7):\n \"\"\"\n Implement the backward propagation presented in Figure 1.\n \n Arguments:\n x -- a real-valued input\n theta -- our parameter, a real number as well\n epsilon -- tiny shift to the input to compute approximated gradient with formula(1)\n \n Returns:\n difference -- difference (2) between the approximated gradient and the backward propagation gradient\n \"\"\"\n \n # Compute gradapprox using left side of formula (1). 
epsilon is small enough, you don't need to worry about the limit.\n ### START CODE HERE ### (approx. 5 lines)\n thetaplus = theta+epsilon # Step 1\n thetaminus = theta-epsilon # Step 2\n J_plus = forward_propagation(x,thetaplus) # Step 3\n J_minus = forward_propagation(x,thetaminus) # Step 4\n gradapprox = (J_plus-J_minus)/(2*epsilon ) # Step 5\n ### END CODE HERE ###\n \n # Check if gradapprox is close enough to the output of backward_propagation()\n ### START CODE HERE ### (approx. 1 line)\n grad = backward_propagation(x, theta)\n ### END CODE HERE ###\n \n ### START CODE HERE ### (approx. 1 line)\n numerator = np.linalg.norm(gradapprox-grad) # Step 1'\n denominator =np.linalg.norm(gradapprox)+np.linalg.norm(grad) # Step 2'\n difference = numerator/denominator # Step 3'\n ### END CODE HERE ###\n \n if difference < 1e-7:\n print (\"The gradient is correct!\")\n else:\n print (\"The gradient is wrong!\")\n \n return difference\n\nx, theta = 2, 4\ndifference = gradient_check(x, theta)\nprint(\"difference = \" + str(difference))", "Expected Output:\nThe gradient is correct!\n<table>\n <tr>\n <td> ** difference ** </td>\n <td> 2.9193358103083e-10 </td>\n </tr>\n</table>\n\nCongrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in backward_propagation(). \nNow, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it!\n3) N-dimensional gradient checking\nThe following figure describes the forward and backward propagation of your fraud detection model.\n<img src=\"images/NDgrad_kiank.png\" style=\"width:600px;height:400px;\">\n<caption><center> <u> Figure 2 </u>: deep neural network<br>LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID</center></caption>\nLet's look at your implementations for forward propagation and backward propagation.", "def forward_propagation_n(X, Y, parameters):\n \"\"\"\n Implements the forward propagation (and computes the cost) presented in Figure 3.\n \n Arguments:\n X -- training set for m examples\n Y -- labels for m examples \n parameters -- python dictionary containing your parameters \"W1\", \"b1\", \"W2\", \"b2\", \"W3\", \"b3\":\n W1 -- weight matrix of shape (5, 4)\n b1 -- bias vector of shape (5, 1)\n W2 -- weight matrix of shape (3, 5)\n b2 -- bias vector of shape (3, 1)\n W3 -- weight matrix of shape (1, 3)\n b3 -- bias vector of shape (1, 1)\n \n Returns:\n cost -- the cost function (logistic cost for one example)\n \"\"\"\n \n # retrieve parameters\n m = X.shape[1]\n W1 = parameters[\"W1\"]\n b1 = parameters[\"b1\"]\n W2 = parameters[\"W2\"]\n b2 = parameters[\"b2\"]\n W3 = parameters[\"W3\"]\n b3 = parameters[\"b3\"]\n\n # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID\n Z1 = np.dot(W1, X) + b1\n A1 = relu(Z1)\n Z2 = np.dot(W2, A1) + b2\n A2 = relu(Z2)\n Z3 = np.dot(W3, A2) + b3\n A3 = sigmoid(Z3)\n\n # Cost\n logprobs = np.multiply(-np.log(A3),Y) + np.multiply(-np.log(1 - A3), 1 - Y)\n cost = 1./m * np.sum(logprobs)\n \n cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)\n \n return cost, cache", "Now, run backward propagation.", "def backward_propagation_n(X, Y, cache):\n \"\"\"\n Implement the backward propagation presented in figure 2.\n \n Arguments:\n X -- input datapoint, of shape (input size, 1)\n Y -- true 
\"label\"\n cache -- cache output from forward_propagation_n()\n \n Returns:\n gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.\n \"\"\"\n \n m = X.shape[1]\n (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache\n \n dZ3 = A3 - Y\n dW3 = 1./m * np.dot(dZ3, A2.T)\n db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)\n \n dA2 = np.dot(W3.T, dZ3)\n dZ2 = np.multiply(dA2, np.int64(A2 > 0))\n dW2 = 1./m * np.dot(dZ2, A1.T) * 2\n db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)\n \n dA1 = np.dot(W2.T, dZ2)\n dZ1 = np.multiply(dA1, np.int64(A1 > 0))\n dW1 = 1./m * np.dot(dZ1, X.T)\n db1 = 4./m * np.sum(dZ1, axis=1, keepdims = True)\n \n gradients = {\"dZ3\": dZ3, \"dW3\": dW3, \"db3\": db3,\n \"dA2\": dA2, \"dZ2\": dZ2, \"dW2\": dW2, \"db2\": db2,\n \"dA1\": dA1, \"dZ1\": dZ1, \"dW1\": dW1, \"db1\": db1}\n \n return gradients", "You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.\nHow does gradient checking work?.\nAs in 1) and 2), you want to compare \"gradapprox\" to the gradient computed by backpropagation. The formula is still:\n$$ \\frac{\\partial J}{\\partial \\theta} = \\lim_{\\varepsilon \\to 0} \\frac{J(\\theta + \\varepsilon) - J(\\theta - \\varepsilon)}{2 \\varepsilon} \\tag{1}$$\nHowever, $\\theta$ is not a scalar anymore. It is a dictionary called \"parameters\". We implemented a function \"dictionary_to_vector()\" for you. It converts the \"parameters\" dictionary into a vector called \"values\", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them.\nThe inverse function is \"vector_to_dictionary\" which outputs back the \"parameters\" dictionary.\n<img src=\"images/dictionary_to_vector.png\" style=\"width:600px;height:400px;\">\n<caption><center> <u> Figure 2 </u>: dictionary_to_vector() and vector_to_dictionary()<br> You will need these functions in gradient_check_n()</center></caption>\nWe have also converted the \"gradients\" dictionary into a vector \"grad\" using gradients_to_vector(). You don't need to worry about that.\nExercise: Implement gradient_check_n().\nInstructions: Here is pseudo-code that will help you implement the gradient check.\nFor each i in num_parameters:\n- To compute J_plus[i]:\n 1. Set $\\theta^{+}$ to np.copy(parameters_values)\n 2. Set $\\theta^{+}_i$ to $\\theta^{+}_i + \\varepsilon$\n 3. Calculate $J^{+}_i$ using to forward_propagation_n(x, y, vector_to_dictionary($\\theta^{+}$ )). \n- To compute J_minus[i]: do the same thing with $\\theta^{-}$\n- Compute $gradapprox[i] = \\frac{J^{+}_i - J^{-}_i}{2 \\varepsilon}$\nThus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to parameter_values[i]. You can now compare this gradapprox vector to the gradients vector from backpropagation. 
Just like for the 1D case (Steps 1', 2', 3'), compute: \n$$ difference = \\frac {\\| grad - gradapprox \\|_2}{\\| grad \\|_2 + \\| gradapprox \\|_2 } \\tag{3}$$", "# GRADED FUNCTION: gradient_check_n\n\ndef gradient_check_n(parameters, gradients, X, Y, epsilon = 1e-7):\n \"\"\"\n Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n\n \n Arguments:\n parameters -- python dictionary containing your parameters \"W1\", \"b1\", \"W2\", \"b2\", \"W3\", \"b3\":\n grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters. \n x -- input datapoint, of shape (input size, 1)\n y -- true \"label\"\n epsilon -- tiny shift to the input to compute approximated gradient with formula(1)\n \n Returns:\n difference -- difference (2) between the approximated gradient and the backward propagation gradient\n \"\"\"\n \n # Set-up variables\n parameters_values, _ = dictionary_to_vector(parameters)\n grad = gradients_to_vector(gradients)\n num_parameters = parameters_values.shape[0]\n J_plus = np.zeros((num_parameters, 1))\n J_minus = np.zeros((num_parameters, 1))\n gradapprox = np.zeros((num_parameters, 1))\n \n # Compute gradapprox\n for i in range(num_parameters):\n \n # Compute J_plus[i]. Inputs: \"parameters_values, epsilon\". Output = \"J_plus[i]\".\n # \"_\" is used because the function you have to outputs two parameters but we only care about the first one\n ### START CODE HERE ### (approx. 3 lines)\n thetaplus = np.copy(parameters_values) # Step 1\n thetaplus[i][0] +=epsilon # Step 2\n J_plus[i], _ = forward_propagation_n(X, Y,vector_to_dictionary(thetaplus)) # Step 3\n ### END CODE HERE ###\n \n # Compute J_minus[i]. Inputs: \"parameters_values, epsilon\". Output = \"J_minus[i]\".\n ### START CODE HERE ### (approx. 3 lines)\n thetaminus = np.copy(parameters_values) # Step 1\n thetaminus[i][0] -=epsilon # Step 2 \n J_minus[i], _ = forward_propagation_n(X, Y,vector_to_dictionary(thetaminus)) # Step 3\n ### END CODE HERE ###\n \n # Compute gradapprox[i]\n ### START CODE HERE ### (approx. 1 line)\n gradapprox[i] = (J_plus[i]-J_minus[i])/(2*epsilon)\n ### END CODE HERE ###\n \n # Compare gradapprox to backward propagation gradients by computing difference.\n ### START CODE HERE ### (approx. 1 line)\n numerator = np.linalg.norm(gradapprox-grad) # Step 1'\n denominator = np.linalg.norm(gradapprox)+np.linalg.norm(grad) # Step 2'\n difference = numerator/denominator # Step 3'\n ### END CODE HERE ###\n\n if difference > 1e-7:\n print (\"\\033[93m\" + \"There is a mistake in the backward propagation! difference = \" + str(difference) + \"\\033[0m\")\n else:\n print (\"\\033[92m\" + \"Your backward propagation works perfectly fine! difference = \" + str(difference) + \"\\033[0m\")\n \n return difference\n\nX, Y, parameters = gradient_check_n_test_case()\n\ncost, cache = forward_propagation_n(X, Y, parameters)\ngradients = backward_propagation_n(X, Y, cache)\ndifference = gradient_check_n(parameters, gradients, X, Y)", "Expected output:\n<table>\n <tr>\n <td> ** There is a mistake in the backward propagation!** </td>\n <td> difference = 0.285093156781 </td>\n </tr>\n</table>\n\nIt seems that there were errors in the backward_propagation_n code we gave you! Good that you've implemented the gradient check. Go back to backward_propagation and try to find/correct the errors (Hint: check dW2 and db1). Rerun the gradient check when you think you've fixed it. 
Remember you'll need to re-execute the cell defining backward_propagation_n() if you modify the code. \nCan you get gradient check to declare your derivative computation correct? Even though this part of the assignment isn't graded, we strongly urge you to try to find the bug and re-run gradient check until you're convinced backprop is now correctly implemented. \nNote \n- Gradient Checking is slow! Approximating the gradient with $\\frac{\\partial J}{\\partial \\theta} \\approx \\frac{J(\\theta + \\varepsilon) - J(\\theta - \\varepsilon)}{2 \\varepsilon}$ is computationally costly. For this reason, we don't run gradient checking at every iteration during training. Just a few times to check if the gradient is correct. \n- Gradient Checking, at least as we've presented it, doesn't work with dropout. You would usually run the gradient check algorithm without dropout to make sure your backprop is correct, then add dropout. \nCongrats, you can be confident that your deep learning model for fraud detection is working correctly! You can even use this to convince your CEO. :) \n<font color='blue'>\nWhat you should remember from this notebook:\n- Gradient checking verifies closeness between the gradients from backpropagation and the numerical approximation of the gradient (computed using forward propagation).\n- Gradient checking is slow, so we don't run it in every iteration of training. You would usually run it only to make sure your code is correct, then turn it off and use backprop for the actual learning process." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
chainsawriot/pycon2016hk_sklearn
Super_textbook.ipynb
mit
[ "Textbook examples\nFairy tales kind of situation\n\nThe data has been processed, ready to analysis\nThe learning objective is clearly defined\nSomething that kNN works\nJust something you feel so good working with\n\nClassification example\nBreast Cancer Wisconsin (Diagnostic) Data Set\nhttps://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)", "from sklearn.datasets import load_breast_cancer\nourdata = load_breast_cancer()\nprint ourdata.DESCR\n\nprint ourdata.data.shape\nourdata.data\n\nourdata.target\n\nourdata.target.shape\n\nourdata.target_names", "Data preparation\nSplit the data into training set and test set", "from sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(ourdata.data, ourdata.target, test_size=0.3)\n\nprint X_train.shape\nprint y_train.shape\nprint X_test.shape\nprint y_test.shape\n\nfrom scipy.stats import itemfreq\nitemfreq(y_train)", "Let's try an unrealistic algorithm: kNN", "from sklearn.neighbors import KNeighborsClassifier\nhx_knn = KNeighborsClassifier()\n\nhx_knn.fit(X_train, y_train)\n\nhx_knn.predict(X_train)", "Training set performance evaluation\nConfusion matrix\nhttps://en.wikipedia.org/wiki/Confusion_matrix", "from sklearn.metrics import confusion_matrix, f1_score\n\nprint confusion_matrix(y_train, hx_knn.predict(X_train))\nprint f1_score(y_train, hx_knn.predict(X_train))", "Moment of truth: test set performance evaluation", "print confusion_matrix(y_test, hx_knn.predict(X_test))\nprint f1_score(y_test, hx_knn.predict(X_test))", "Classical analysis: Logistic Regression Classifier (Mint)", "from sklearn.linear_model import LogisticRegression\nhx_log = LogisticRegression()\n\nhx_log.fit(X_train, y_train)\n\nhx_log.predict(X_train)\n\n# Training set evaluation\nprint confusion_matrix(y_train, hx_log.predict(X_train))\nprint f1_score(y_train, hx_log.predict(X_train))\n\n# test set evaluation\nprint confusion_matrix(y_test, hx_log.predict(X_test))\nprint f1_score(y_test, hx_log.predict(X_test))", "Your turn: Task 1\nSuppose there is a learning algorithm called Naive Bayes and the API is located in\nhttp://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.GaussianNB.html#sklearn.naive_bayes.GaussianNB\nCreate a hx_nb, fit the data, and evaluate the training set and test set performance.\nRegression example\nBoston house price dataset\nhttps://archive.ics.uci.edu/ml/datasets/Housing", "from sklearn.datasets import load_boston\nbostondata = load_boston()\nprint bostondata.target\nprint bostondata.data.shape\n\n\n### Learn more about the dataset\n\nprint bostondata.DESCR\n\n# how the first row of data looks like\n\nbostondata.data[1,]\n\nBX_train, BX_test, By_train, By_test = train_test_split(bostondata.data, bostondata.target, test_size=0.3)", "Classical algo: Linear Regression", "from sklearn.linear_model import LinearRegression\nhx_lin = LinearRegression()\n\nhx_lin.fit(BX_train, By_train)\n\nhx_lin.predict(BX_train)", "Plot a scatter plot of predicted and actual value", "import matplotlib.pyplot as plt\nplt.scatter(By_train, hx_lin.predict(BX_train))\nplt.ylabel(\"predicted value\")\nplt.xlabel(\"actual value\")\nplt.show()\n\n### performance evaluation: training set\n\nfrom sklearn.metrics import mean_squared_error\nmean_squared_error(By_train, hx_lin.predict(BX_train))\n\n### performance evaluation: test set\nmean_squared_error(By_test, hx_lin.predict(BX_test))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
statsmodels/statsmodels
examples/notebooks/generic_mle.ipynb
bsd-3-clause
[ "Maximum Likelihood Estimation (Generic models)\nThis tutorial explains how to quickly implement new maximum likelihood models in statsmodels. We give two examples: \n\nProbit model for binary dependent variables\nNegative binomial model for count data\n\nThe GenericLikelihoodModel class eases the process by providing tools such as automatic numeric differentiation and a unified interface to scipy optimization functions. Using statsmodels, users can fit new MLE models simply by \"plugging-in\" a log-likelihood function. \nExample 1: Probit model", "import numpy as np\nfrom scipy import stats\nimport statsmodels.api as sm\nfrom statsmodels.base.model import GenericLikelihoodModel", "The Spector dataset is distributed with statsmodels. You can access a vector of values for the dependent variable (endog) and a matrix of regressors (exog) like this:", "data = sm.datasets.spector.load_pandas()\nexog = data.exog\nendog = data.endog\nprint(sm.datasets.spector.NOTE)\nprint(data.exog.head())", "Them, we add a constant to the matrix of regressors:", "exog = sm.add_constant(exog, prepend=True)", "To create your own Likelihood Model, you simply need to overwrite the loglike method.", "class MyProbit(GenericLikelihoodModel):\n def loglike(self, params):\n exog = self.exog\n endog = self.endog\n q = 2 * endog - 1\n return stats.norm.logcdf(q*np.dot(exog, params)).sum()", "Estimate the model and print a summary:", "sm_probit_manual = MyProbit(endog, exog).fit()\nprint(sm_probit_manual.summary())", "Compare your Probit implementation to statsmodels' \"canned\" implementation:", "sm_probit_canned = sm.Probit(endog, exog).fit()\n\nprint(sm_probit_canned.params)\nprint(sm_probit_manual.params)\n\nprint(sm_probit_canned.cov_params())\nprint(sm_probit_manual.cov_params())", "Notice that the GenericMaximumLikelihood class provides automatic differentiation, so we did not have to provide Hessian or Score functions in order to calculate the covariance estimates.\nExample 2: Negative Binomial Regression for Count Data\nConsider a negative binomial regression model for count data with\nlog-likelihood (type NB-2) function expressed as:\n$$\n \\mathcal{L}(\\beta_j; y, \\alpha) = \\sum_{i=1}^n y_i ln \n \\left ( \\frac{\\alpha exp(X_i'\\beta)}{1+\\alpha exp(X_i'\\beta)} \\right ) -\n \\frac{1}{\\alpha} ln(1+\\alpha exp(X_i'\\beta)) + ln \\Gamma (y_i + 1/\\alpha) - ln \\Gamma (y_i+1) - ln \\Gamma (1/\\alpha)\n$$\nwith a matrix of regressors $X$, a vector of coefficients $\\beta$,\nand the negative binomial heterogeneity parameter $\\alpha$. 
\nUsing the nbinom distribution from scipy, we can write this likelihood\nsimply as:", "import numpy as np\nfrom scipy.stats import nbinom\n\ndef _ll_nb2(y, X, beta, alph):\n mu = np.exp(np.dot(X, beta))\n size = 1/alph\n prob = size/(size+mu)\n ll = nbinom.logpmf(y, size, prob)\n return ll", "New Model Class\nWe create a new model class which inherits from GenericLikelihoodModel:", "from statsmodels.base.model import GenericLikelihoodModel\n\nclass NBin(GenericLikelihoodModel):\n def __init__(self, endog, exog, **kwds):\n super(NBin, self).__init__(endog, exog, **kwds)\n \n def nloglikeobs(self, params):\n alph = params[-1]\n beta = params[:-1]\n ll = _ll_nb2(self.endog, self.exog, beta, alph)\n return -ll \n \n def fit(self, start_params=None, maxiter=10000, maxfun=5000, **kwds):\n # we have one additional parameter and we need to add it for summary\n self.exog_names.append('alpha')\n if start_params == None:\n # Reasonable starting values\n start_params = np.append(np.zeros(self.exog.shape[1]), .5)\n # intercept\n start_params[-2] = np.log(self.endog.mean())\n return super(NBin, self).fit(start_params=start_params, \n maxiter=maxiter, maxfun=maxfun, \n **kwds) ", "Two important things to notice: \n\nnloglikeobs: This function should return one evaluation of the negative log-likelihood function per observation in your dataset (i.e. rows of the endog/X matrix). \nstart_params: A one-dimensional array of starting values needs to be provided. The size of this array determines the number of parameters that will be used in optimization.\n\nThat's it! You're done!\nUsage Example\nThe Medpar\ndataset is hosted in CSV format at the Rdatasets repository. We use the read_csv\nfunction from the Pandas library to load the data\nin memory. We then print the first few columns:", "import statsmodels.api as sm\n\nmedpar = sm.datasets.get_rdataset(\"medpar\", \"COUNT\", cache=True).data\n\nmedpar.head()", "The model we are interested in has a vector of non-negative integers as\ndependent variable (los), and 5 regressors: Intercept, type2,\ntype3, hmo, white.\nFor estimation, we need to create two variables to hold our regressors and the outcome variable. These can be ndarrays or pandas objects.", "y = medpar.los\nX = medpar[[\"type2\", \"type3\", \"hmo\", \"white\"]].copy()\nX[\"constant\"] = 1", "Then, we fit the model and extract some information:", "mod = NBin(y, X)\nres = mod.fit()", "Extract parameter estimates, standard errors, p-values, AIC, etc.:", "print('Parameters: ', res.params)\nprint('Standard errors: ', res.bse)\nprint('P-values: ', res.pvalues)\nprint('AIC: ', res.aic)", "As usual, you can obtain a full list of available information by typing\ndir(res).\nWe can also look at the summary of the estimation results.", "print(res.summary())", "Testing\nWe can check the results by using the statsmodels implementation of the Negative Binomial model, which uses the analytic score function and Hessian.", "res_nbin = sm.NegativeBinomial(y, X).fit(disp=0)\nprint(res_nbin.summary())\n\nprint(res_nbin.params)\n\nprint(res_nbin.bse)", "Or we could compare them to results obtained using the MASS implementation for R:\nurl = 'https://raw.githubusercontent.com/vincentarelbundock/Rdatasets/csv/COUNT/medpar.csv'\nmedpar = read.csv(url)\nf = los~factor(type)+hmo+white\n\nlibrary(MASS)\nmod = glm.nb(f, medpar)\ncoef(summary(mod))\n Estimate Std. 
Error z value Pr(&gt;|z|)\n(Intercept) 2.31027893 0.06744676 34.253370 3.885556e-257\nfactor(type)2 0.22124898 0.05045746 4.384861 1.160597e-05\nfactor(type)3 0.70615882 0.07599849 9.291748 1.517751e-20\nhmo -0.06795522 0.05321375 -1.277024 2.015939e-01\nwhite -0.12906544 0.06836272 -1.887951 5.903257e-02\n\nNumerical precision\nThe statsmodels generic MLE and R parameter estimates agree up to the fourth decimal. The standard errors, however, agree only up to the second decimal. This discrepancy is the result of imprecision in our Hessian numerical estimates. In the current context, the difference between MASS and statsmodels standard error estimates is substantively irrelevant, but it highlights the fact that users who need very precise estimates may not always want to rely on default settings when using numerical derivatives. In such cases, it is better to use analytical derivatives with the LikelihoodModel class." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
linwoodc3/linwoodc3.github.io
notebooks/gdeltPyR Tutorial Notebook.ipynb
apache-2.0
[ "<i>Author: Linwood Creekmore<br>\nEmail: valinvescap@gmail.com</i>\nGetting Started\nMake sure you have installed all the libraries in the import section below. If you get any errors, look up the documentation for each library for help.", "import re\nimport pytz\nimport gdelt\nimport datetime\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport geoplot as gplt\nfrom tzwhere import tzwhere \nfrom bs4 import BeautifulSoup\nimport matplotlib.pyplot as plt\n\ntz1 = tzwhere.tzwhere(forceTZ=True)", "Setting up gdeltPyR\nIt's easy to set up gdeltPyR. This single line gets us ready to query. See the github project page for details on accessing other tables and setting other parameters. Then, we just pass in a date to pull the data. It's really that simple. The only concern, is memory. Pulling multiple days of GDELT can consume lots of memory. Make a workflow to pull and write the disc if you have issues.", "gd = gdelt.gdelt()\n\n%time vegas = gd.Search(['Oct 1 2017','Oct 2 2017'],normcols=True,coverage=True)", "Time format transformations\nThese custom function handle time transformations.", "\n\n\ndef striptimen(x):\n \"\"\"Strip time from numpy array or list of dates that are integers\"\"\"\n date = str(int(x))\n n = np.datetime64(\"{}-{}-{}T{}:{}:{}\".format(date[:4],date[4:6],date[6:8],date[8:10],date[10:12],date[12:]))\n return n\n\ndef timeget(x):\n '''convert to datetime object with UTC time tag'''\n \n try:\n now_aware = pytz.utc.localize(x[2].to_pydatetime())\n except:\n pass\n \n # get the timezone string representation using lat/lon pair\n try:\n timezone_str=tz1.tzNameAt(x[0],x[1],forceTZ=True)\n \n # get the time offset\n timezone = pytz.timezone(timezone_str)\n\n # convert UTC to calculated local time\n aware = now_aware.astimezone(timezone)\n return aware\n \n except Exception as e:\n pass\n\n# vectorize our two functions\nvect = np.vectorize(striptimen)\nvect2=np.vectorize(timeget)", "Now we apply the functions to create a datetime object column (dates) and a timezone aware column (datezone).", "# vectorize our function\nvect = np.vectorize(striptimen)\n\n\n# use custom functions to build time enabled columns of dates and zone\nvegastimed = (vegas.assign(\n dates=vect(vegas.dateadded.values)).assign(\n zone=list(timeget(k) for k in vegas.assign(\n dates=vect(vegas.dateadded.values))\\\n [['actiongeolat','actiongeolong','dates']].values)))", "Filtering to a city and specific CAMEO Code\nI return data in pandas dataframes to leverage the power of pandas data manipulation. Now we filter our data on the two target fields; actiongeofeatureid and eventrootcode. To learn more about the columns, see this page with descriptions for each header.", "# filter to data in Las Vegas and about violence/fighting/mass murder only\nvegastimedfil=(vegastimed[\n ((vegas.eventrootcode=='19') | \n (vegas.eventrootcode=='20') | \n (vegas.eventrootcode=='18')) & \n (vegas.actiongeofeatureid=='847388')])\\\n .drop_duplicates('sourceurl') \nprint(vegastimedfil.shape)", "Stripping out unique news providers\nThis regex extracts baseurls from the sourceurl column. 
These extractions allow us to analyze the contributions of unique providers in GDELT events data.", "# lazy meta-character regex; more elegant\ns = re.compile('(http://|https://)([A-Za-z0-9_\\.-]+)')", "Build Chronological List\nIf you want to see a chronological list, you'll need to time enable your data.", "# build the chronological news stories and show the first few rows\nprint(vegastimedfil.set_index('zone')[['dates','sourceurl']].head())", "To time enable the entire dataset, it's a fairly simple task.", "# example of converting to Los Angeles time. \nvegastimed.set_index(\n vegastimed.dates.astype('datetime64[ns]')\n ).tz_localize(\n 'UTC'\n ).tz_convert(\n 'America/Los_Angeles'\n )", "Counting Who Produced the Most\nWe use pandas to find the provider with the most unique content. One drawback of GDELT, is repeated URLs. But, in the pandas ecosystem, removing duplicates is easy. We extract provider baseurls, remove duplicates, and count the number of articles.", "# regex to strip a url from a string; should work on any url (let me know if it doesn't)\ns = re.compile('(http://|https://)([A-Za-z0-9_\\.-]+)')\n\n# apply regex to each url; strip provider; assign as new column\nprint(vegastimedfil.assign(provider=vegastimedfil.sourceurl.\\\n apply(lambda x: s.search(x).group() if s.search(x) else np.nan))\\\n.groupby(['provider']).size().sort_values(ascending=False).reset_index().rename(columns={0:\"count\"}).head())", "How many unique news providers?\nThis next block uses regex to strip the base URL from each record. Then, you just use the pandas.Series.unique() method to get a total count of providers", "# chained operation to return shape\nvegastimedfil.assign(provider=vegastimedfil.sourceurl.\\\n apply(lambda x: s.search(x).group() if \\\n s.search(x) else np.nan))['provider']\\\n.value_counts().shape", "Understanding how many providers we have producing, it would be a good idea to understand the distribution of production. Or, we want to see how many articles each provider published. We use a distribution and cumulative distribution plot.", "# make plot canvas\nf,ax = plt.subplots(figsize=(15,5))\n\n# set title\nplt.title('Distributions of Las Vegas Active Shooter News Production')\n\n# ckernel density plot\nsns.kdeplot(vegastimedfil.assign(provider=vegastimedfil.sourceurl.\\\n apply(lambda x: s.search(x).group() if s.search(x) else np.nan))['provider']\\\n.value_counts(),bw=0.4,shade=True,label='No. 
of articles written',ax=ax)\n\n# cumulative distribution plot\nsns.kdeplot(vegastimedfil.assign(provider=vegastimedfil.sourceurl.\\\n apply(lambda x: s.search(x).group() if s.search(x) else np.nan))['provider']\\\n.value_counts(),bw=0.4,shade=True,label='Cumulative',cumulative=True,ax=ax)\n\n# show it\nplt.show()", "Time Series: Calculating the volumetric change\nNext, we use the exponentially weighted moving average to see the change in production.", "timeseries = pd.concat([vegastimed.set_index(vegastimed.dates.astype('datetime64[ns]')).tz_localize('UTC').tz_convert('America/Los_Angeles').resample('15T')['sourceurl'].count(),vegastimedfil.set_index('zone').resample('15T')['sourceurl'].count()]\n ,axis=1)\n\n# file empty event counts with zero\ntimeseries.fillna(0,inplace=True)\n\n# rename columns\ntimeseries.columns = ['Total Events','Las Vegas Events Only']\n\n# combine\ntimeseries = timeseries.assign(Normalized=(timeseries['Las Vegas Events Only']/timeseries['Total Events'])*100)\n\n# make the plot\nf,ax = plt.subplots(figsize=(13,7))\nax = timeseries.Normalized.ewm(adjust=True,ignore_na=True,min_periods=10,span=20).mean().plot(color=\"#C10534\",label='Exponentially Weighted Count')\nax.set_title('Reports of Violent Events Per 15 Minutes in Vegas',fontsize=28)\nfor label in ax.get_xticklabels():\n label.set_fontsize(16)\nax.set_xlabel('Hour of the Day', fontsize=20)\nax.set_ylabel('Percentage of Hourly Total',fontsize='15')\nax.legend()\nplt.tight_layout()\nplt.show()", "Finding Who Produced the \"Fastest\"\nThis block of code finds the news provider who produced reports faster \"on average\". We convert the date of each article to epoch time, average across providers, and compare. Again, pandas makes this easy.", "# complex, chained operations to perform all steps listed above\n\nprint((((vegastimedfil.reset_index().assign(provider=vegastimedfil.reset_index().sourceurl.\\\n apply(lambda x: s.search(x).group() if s.search(x) else np.nan),\\\n epochzone=vegastimedfil.set_index('dates')\\\n .reset_index()['dates']\\\n.apply(lambda x: (x.to_pydatetime().timestamp()))).groupby('provider')\\\n.filter(lambda x: len(x)>=10).groupby('provider').agg([np.mean,np.max,np.min,np.median])\\\n.sort_index(level='median',ascending=False)['epochzone']['median'])\\\n .apply(lambda x:datetime.datetime.fromtimestamp(int(x)))\\\n.sort_values(ascending=True)).reset_index()\\\n .set_index('median',drop=False)).tz_localize('UTC')\\\n.tz_convert('America/Los_Angeles'))", "Getting the Content\nThis code gets the content (or tries to) at the end of each GDELT sourceurl.", "# Author: Linwood Creekmore\n# Email: valinvescap@gmail.com\n# Description: Python script to pull content from a website (works on news stories).\n\n# Notes\n\"\"\"\n23 Oct 2017: updated to include readability based on PyCon talk: https://github.com/DistrictDataLabs/PyCon2016/blob/master/notebooks/tutorial/Working%20with%20Text%20Corpora.ipynb\n\n\n\"\"\"\n\n###################################\n# Standard Library imports\n###################################\n\nimport re\nfrom io import BytesIO\n\n###################################\n# Third party imports\n###################################\n\nimport requests\nimport numpy as np\nfrom bs4 import BeautifulSoup\nfrom readability.readability import Document as Paper\n\n# placeholder dictionary to keep track of what's been completed\ndone ={}\ndef textgetter(url):\n \"\"\"Scrapes web news and returns the content\n \n Parameters\n ----------\n \n url : str\n Address to news report\n \n newstext: 
str\n Returns all text in the \"p\" tag. This usually is the content of the news story.\n \"\"\"\n global done\n TAGS = [\n 'h1', 'h2', 'h3', 'h4', 'h5', 'h6', 'h7', 'p', 'li'\n ]\n \n # regex for url check\n s = re.compile('(http://|https://)([A-Za-z0-9_\\.-]+)')\n answer = {}\n # check that its an url\n if s.search(url):\n if url in done.keys():\n return done[url]\n pass\n else:\n\n r = requests.get(url)\n if r.status_code != 200:\n done[url]=\"Unable to reach website.\"\n answer['base']=s.search(url).group()\n answer['url']=url\n answer['text']=\"Unable to reach website.\"\n answer['title']=''\n yield answer\n \n doc = Paper(r.content)\n data = doc.summary()\n title = doc.title()\n\n soup = BeautifulSoup(data,'lxml')\n\n newstext = \" \".join([l.text for l in soup.find_all(TAGS)]) \n \n del r,data\n if len(newstext)>200:\n answer['base']=s.search(url).group()\n answer['text']=newstext\n answer['url']=url\n answer['title']=title\n yield answer\n else:\n newstext = \" \".join([l.text for l in soup.find_all('div',class_='field-item even')])\n done[url]=newstext\n if len(newstext)>200:\n answer['url']=url\n answer['base']=s.search(url).group()\n answer['text']=newstext\n answer['title']=\"\"\n yield answer\n else:\n answer['url']=url\n answer['base']=s.search(url).group()\n answer['text']='No text returned'\n answer['title']=\"\"\n yield answer\n else:\n answer['text']='This is not a proper url'\n answer['url']=url\n answer['base']=''\n answer['title']=\"\"\n yield answer", "Testing the Function\nHere is a test. The done dictionary is important; it keeps you from repeating calls to urls you've already processed. It's like \"caching\".", "# create vectorized function\nvect = np.vectorize(textgetter)\n\n#vectorize the operation\ncc = vect(vegastimedfil['sourceurl'].values[10:25])\n\n#Vectorized opp\ndd = list(next(l) for l in cc)\n\n# the output\npd.DataFrame(dd).head(5)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
gcgruen/homework
foundations-homework/08/homework-08-gruen-dataset3-refugees.ipynb
mit
[ "Homework 8: Dataset 3: Refugees", "# datasource: https://data.humdata.org/dataset/unhcr-refugee-pop-stats/resource/fbacbba3-1b20-4331-931b-6a21a4cb80f5\n# dataset is labeled \"Group of concern to UNHCR\"\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\n% matplotlib inline\n\ndf = pd.read_csv('refugee_data.csv')\n\ndf.head(20)", "1) What different population types are of concern for the UNHCR?", "df['Population type'].unique()", "2)-8) For each of these population types, which were the 20 country hosting most in 2013?\nexcept: \"Others of concern\"\n@TAs: This question might actually count as six questions, right? ;)", "df.columns\n\nrecent = df[['Country', 'Origin_Returned_from', 'Population type','2013']]\ndef population_type_count(a):\n a = recent[recent['Population type'] == a]\n a.groupby('Country')['2013'].sum()\n a_table = pd.DataFrame(a.groupby('Country')['2013'].sum())\n return a_table.sort_values(by='2013', ascending = False).head(20)", "TOP20 most refugees", "population_type_count('Refugees')", "TOP 20 most asylum seekers", "population_type_count('Asylum seekers')", "TOP20 Internally displaced", "population_type_count('Internally displaced')", "TOP20 most stateless", "population_type_count('Stateless')", "TOP20 most Returned IDPs", "population_type_count('Returned IDPs')", "TOP20 most Returned refugees", "population_type_count('Returned refugees')", "9) Which were the TOP10 countries, most refugees returned home from in 2013?", "recent.columns\n\nreturned_refugees = recent[recent['Population type'] == 'Returned refugees']\nreturned_refugees.groupby('Origin_Returned_from')['2013'].sum()\nreturned_table = pd.DataFrame(returned_refugees.groupby('Origin_Returned_from')['2013'].sum())\nreturned_table.sort_values(by='2013', ascending = False).head(10)", "10) Which country of origin were most asylum seekers from in 2013?", "asylum_seekers = recent[recent['Population type'] == 'Asylum seekers']\nseekers_table = pd.DataFrame(asylum_seekers.groupby('Origin_Returned_from')['2013'].sum())\nseekers_table.sort_values(by='2013', ascending = False).head(10)", "11) What is the overall number of (asylum seekers/refugees/idp) for each year?", "df.columns\n\nyears_available=['2000', '2001','2002', '2003', '2004', '2005', '2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013']\npop_types=['Asylum seekers', 'Refugees', 'Internally displaced']\ntotals_dict_list=[]\n\nfor poptype in pop_types:\n poptype_dictionary ={}\n for year in years_available:\n poptype_only = df[(df['Population type'] == poptype) & (df[year].notnull())]\n poptype_per_year = poptype_only[year].sum()\n print(year, 'there were in total', poptype_per_year, poptype)\n poptype_dictionary[year] = poptype_per_year\n totals_dict_list.append(poptype_dictionary)\n\ntotals_dict_list", "12) Line graph or stacked bar chart for 11)", "asylum_over_time = totals_dict_list[0]\nasylums_table = pd.DataFrame(asylum_over_time, index=['Total asylum seekers per year'])\nasylums_table\n#asylums_table.plot(kind='bar')\n\nrefugees_over_time = totals_dict_list[1]\nrefugees_table = pd.DataFrame(refugees_over_time, index=['Total refugees per year'])\nrefugees_table\n\nidp_over_time = totals_dict_list[2]\nidps_table = pd.DataFrame(idp_over_time, index=['Total IDPs per year'])\nidps_table\n\n#CONCATENATE DID NOT WORK, BUT KEEPING THIS NOTES AS MEMORY\n\n#asylums_table.plot(kind='bar')\n#asylum_over_time = totals_dict_list[0]\n#asylums_table = pd.DataFrame(asylum_over_time, index=['Total asylum seekers per 
year'])\n#asylums_table\n#pd.concat(totals_dict_list[0], axis=0, join='outer', join_axes=None, ignore_index=False, keys=None, levels=None, names=None, verify_integrity=False, copy=True)\n\n\n#http://stackoverflow.com/questions/31974548/take-a-row-from-one-dataframe-and-insert-into-first-row-of-another-dataframe-in\n#A-> dataframe with (v,w,x,y,z) columns ( Some values)\n#b -> dataframe with (v,w,x,y,z) columns ( All values)\n#b = pd.concat([A[A.v==1],b])\n\n#asylums_table = pd.concat([idps_table[idps_table.v==1],asylums_table])\n\ntwo_table = asylums_table.append(refugees_table)\ntwo_table\n\ntotals_table = two_table.append(idps_table)\ntotals_table\n# http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.append.html\n\n#totals_table.plot()\nplt.style.use(\"ggplot\")\n# transpose so years are on the x-axis, then draw one line per population type\ntotals_table2 = totals_table.T\ntotals_table2.plot(figsize=(10,7), ylim=(0,25000000), linewidth=3)\n#http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
metpy/MetPy
v1.1/_downloads/f8c7f51c50c58b17901913e49a5b977e/Inverse_Distance_Verification.ipynb
bsd-3-clause
[ "%matplotlib inline", "Inverse Distance Verification: Cressman and Barnes\nCompare inverse distance interpolation methods\nTwo popular interpolation schemes that use inverse distance weighting of observations are the\nBarnes and Cressman analyses. The Cressman analysis is relatively straightforward and uses\nthe ratio between distance of an observation from a grid cell and the maximum allowable\ndistance to calculate the relative importance of an observation for calculating an\ninterpolation value. Barnes uses the inverse exponential ratio of each distance between\nan observation and a grid cell and the average spacing of the observations over the domain.\nAlgorithmically:\n\nA KDTree data structure is built using the locations of each observation.\nAll observations within a maximum allowable distance of a particular grid cell are found in\n O(log n) time.\nUsing the weighting rules for Cressman or Barnes analyses, the observations are given a\n proportional value, primarily based on their distance from the grid cell.\nThe sum of these proportional values is calculated and this value is used as the\n interpolated value.\nSteps 2 through 4 are repeated for each grid cell.", "import matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.spatial import cKDTree\n\nfrom metpy.interpolate.geometry import dist_2\nfrom metpy.interpolate.points import barnes_point, cressman_point\nfrom metpy.interpolate.tools import average_spacing, calc_kappa\n\n\ndef draw_circle(ax, x, y, r, m, label):\n th = np.linspace(0, 2 * np.pi, 100)\n nx = x + r * np.cos(th)\n ny = y + r * np.sin(th)\n ax.plot(nx, ny, m, label=label)", "Generate random x and y coordinates, and observation values proportional to x * y.\nSet up two test grid locations at (30, 30) and (60, 60).", "np.random.seed(100)\n\npts = np.random.randint(0, 100, (10, 2))\nxp = pts[:, 0]\nyp = pts[:, 1]\nzp = xp * xp / 1000\n\nsim_gridx = [30, 60]\nsim_gridy = [30, 60]", "Set up a cKDTree object and query all of the observations within \"radius\" of each grid point.\nThe variable indices represents the index of each matched coordinate within the\ncKDTree's data list.", "grid_points = np.array(list(zip(sim_gridx, sim_gridy)))\n\nradius = 40\nobs_tree = cKDTree(list(zip(xp, yp)))\nindices = obs_tree.query_ball_point(grid_points, r=radius)", "For grid 0, we will use Cressman to interpolate its value.", "x1, y1 = obs_tree.data[indices[0]].T\ncress_dist = dist_2(sim_gridx[0], sim_gridy[0], x1, y1)\ncress_obs = zp[indices[0]]\n\ncress_val = cressman_point(cress_dist, cress_obs, radius)", "For grid 1, we will use barnes to interpolate its value.\nWe need to calculate kappa--the average distance between observations over the domain.", "x2, y2 = obs_tree.data[indices[1]].T\nbarnes_dist = dist_2(sim_gridx[1], sim_gridy[1], x2, y2)\nbarnes_obs = zp[indices[1]]\n\nkappa = calc_kappa(average_spacing(list(zip(xp, yp))))\n\nbarnes_val = barnes_point(barnes_dist, barnes_obs, kappa)", "Plot all of the affiliated information and interpolation values.", "fig, ax = plt.subplots(1, 1, figsize=(15, 10))\nfor i, zval in enumerate(zp):\n ax.plot(pts[i, 0], pts[i, 1], '.')\n ax.annotate(str(zval) + ' F', xy=(pts[i, 0] + 2, pts[i, 1]))\n\nax.plot(sim_gridx, sim_gridy, '+', markersize=10)\n\nax.plot(x1, y1, 'ko', fillstyle='none', markersize=10, label='grid 0 matches')\nax.plot(x2, y2, 'ks', fillstyle='none', markersize=10, label='grid 1 matches')\n\ndraw_circle(ax, sim_gridx[0], sim_gridy[0], m='k-', r=radius, label='grid 0 radius')\ndraw_circle(ax, sim_gridx[1], 
sim_gridy[1], m='b-', r=radius, label='grid 1 radius')\n\nax.annotate(f'grid 0: cressman {cress_val:.3f}', xy=(sim_gridx[0] + 2, sim_gridy[0]))\nax.annotate(f'grid 1: barnes {barnes_val:.3f}', xy=(sim_gridx[1] + 2, sim_gridy[1]))\n\nax.set_aspect('equal', 'datalim')\nax.legend()", "For each point, we will do a manual check of the interpolation values by doing a step by\nstep and visual breakdown.\nPlot the grid point, observations within radius of the grid point, their locations, and\ntheir distances from the grid point.", "fig, ax = plt.subplots(1, 1, figsize=(15, 10))\nax.annotate(f'grid 0: ({sim_gridx[0]}, {sim_gridy[0]})', xy=(sim_gridx[0] + 2, sim_gridy[0]))\nax.plot(sim_gridx[0], sim_gridy[0], '+', markersize=10)\n\nmx, my = obs_tree.data[indices[0]].T\nmz = zp[indices[0]]\n\nfor x, y, z in zip(mx, my, mz):\n d = np.sqrt((sim_gridx[0] - x)**2 + (y - sim_gridy[0])**2)\n ax.plot([sim_gridx[0], x], [sim_gridy[0], y], '--')\n\n xave = np.mean([sim_gridx[0], x])\n yave = np.mean([sim_gridy[0], y])\n\n ax.annotate(f'distance: {d}', xy=(xave, yave))\n ax.annotate(f'({x}, {y}) : {z} F', xy=(x, y))\n\nax.set_xlim(0, 80)\nax.set_ylim(0, 80)\nax.set_aspect('equal', 'datalim')", "Step through the cressman calculations.", "dists = np.array([22.803508502, 7.21110255093, 31.304951685, 33.5410196625])\nvalues = np.array([0.064, 1.156, 3.364, 0.225])\n\ncres_weights = (radius * radius - dists * dists) / (radius * radius + dists * dists)\ntotal_weights = np.sum(cres_weights)\nproportion = cres_weights / total_weights\nvalue = values * proportion\n\nval = cressman_point(cress_dist, cress_obs, radius)\n\nprint('Manual cressman value for grid 1:\\t', np.sum(value))\nprint('Metpy cressman value for grid 1:\\t', val)", "Now repeat for grid 1, except use barnes interpolation.", "fig, ax = plt.subplots(1, 1, figsize=(15, 10))\nax.annotate(f'grid 1: ({sim_gridx[1]}, {sim_gridy[1]})', xy=(sim_gridx[1] + 2, sim_gridy[1]))\nax.plot(sim_gridx[1], sim_gridy[1], '+', markersize=10)\n\nmx, my = obs_tree.data[indices[1]].T\nmz = zp[indices[1]]\n\nfor x, y, z in zip(mx, my, mz):\n d = np.sqrt((sim_gridx[1] - x)**2 + (y - sim_gridy[1])**2)\n ax.plot([sim_gridx[1], x], [sim_gridy[1], y], '--')\n\n xave = np.mean([sim_gridx[1], x])\n yave = np.mean([sim_gridy[1], y])\n\n ax.annotate(f'distance: {d}', xy=(xave, yave))\n ax.annotate(f'({x}, {y}) : {z} F', xy=(x, y))\n\nax.set_xlim(40, 80)\nax.set_ylim(40, 100)\nax.set_aspect('equal', 'datalim')", "Step through barnes calculations.", "dists = np.array([9.21954445729, 22.4722050542, 27.892651362, 38.8329756779])\nvalues = np.array([2.809, 6.241, 4.489, 2.704])\n\nweights = np.exp(-dists**2 / kappa)\ntotal_weights = np.sum(weights)\nvalue = np.sum(values * (weights / total_weights))\n\nprint('Manual barnes value:\\t', value)\nprint('Metpy barnes value:\\t', barnes_point(barnes_dist, barnes_obs, kappa))\n\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
JacksonTanBS/iPythonNotebooks
150528 How Much of Earth is Raining at Any One Time.ipynb
gpl-2.0
[ "This notebook investigates the simple question of how much of the Earth is raining using one day of IMERG data.\nAssumption:\n* rainfall is statistically constant over one day (true to first order), and\n* IMERG (restricted to ±60° latitudes) is representative of global precipitation (pretty good).\nCaveat: IMERG is primarily constructed from space-borne microwave instruments, which may encounter challenges picking up very light rain (< 0.1 mm / hr).\nIMERG data can be downloaded here: http://pmm.nasa.gov/data-access/downloads/gpm.", "import numpy as np\nimport h5py\nfrom glob import glob\n\nimergpath = '/media/Sentinel/data/IMERG/'\nyear, month, day = 2014, 4, 1", "Read the IMERG files for one day.", "inpath = '%s%4d/%02d/%02d/' % (imergpath, year, 4, 1)\nfiles = sorted(glob('%s3B-HHR*' % inpath))\nnt = len(files)\n\nwith h5py.File(files[0]) as f:\n lats = f['Grid/lat'][:]\n lons = f['Grid/lon'][:]\n fillvalue = f['Grid/precipitationCal'].attrs['_FillValue']\n \nnlon, nlat = len(lons), len(lats)\n\nPimerg = np.ma.masked_all([nt, nlon, nlat], dtype = np.float32)\n\ninpath = '%s%4d/%02d/%02d/' % (imergpath, year, month, day)\nfiles = sorted(glob('%s3B-HHR*' % inpath))\n\nfor tt in range(nt):\n with h5py.File(files[tt]) as f:\n Pimerg[tt] = np.ma.masked_less(f['Grid/precipitationCal'][:], -100.)", "Calculate the ratio of the number of grid boxes with rain (above 0.01 mm / h threshold) to the number of valid grid boxes (mainly between ±60° latitudes).", "print(np.ma.sum(Pimerg > 0.01) / np.ma.count(Pimerg))", "Therefore, about 5.5% of the Earth's area is raining at any one time.\nThe simplification made here is that the ratio of grid boxes is similar to the ratio of areas. Or, putting it in another way, a grid box at high latitude has a smaller area than a grid box at the equator and thus contribute less to the ratio. However, to first order this is a fair assumption. Improvements can be made by simply weighting the grid boxes by the cosine of the latitude.", "# print system info: please manually input non-core imported modules\n%load_ext version_information\n%version_information numpy, h5py" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
google/consent-based-conversion-adjustments
cocoa/cocoa_template.ipynb
apache-2.0
[ "<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/google/consent-based-conversion-adjustments/blob/main/cocoa/cocoa_template.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/google/consent-based-conversion-adjustments/blob/main/cocoa/cocoa_template.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\nLicense\nCopyright 2020 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n You may obtain a copy of the License at\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\nConsent-based Conversion Adjustments\nIn this notebook, we are illustrating how we can use a non-parametric model (based on k nearest-neighbors) to redistribute conversion values of customers opting out of advertising cookies over customers who opt in. \nThe resulting conversion-value adjustments can be used within value-based bidding to prevent biases in the bidding-algorithm due to systematic differences between customers who opt in vs customers who don't.\nImports", "!pip install git+https://github.com/google/consent-based-conversion-adjustments.git\nfrom IPython.display import clear_output\n\nclear_output()\n\n\nfrom itertools import combinations\nimport typing\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\n\nnp.random.seed(123)\n\nfrom cocoa import nearest_consented_customers", "Data Simulation", "#@title Create fake dataset of adgroups and conversion values\n#@markdown We are generating random data: each row is an individual conversion\n#@markdown with a given conversion value. \\\n#@markdown For each conversion, we know the\n#@markdown adgroup, which is our only feature here and just consists of 3 letters.\n\nn_consenting_customers = 8000 #@param\nn_nonconsenting_customers = 2000 #@param\n\n\ndef simulate_conversion_data_consenting_non_consenting(\n n_consenting_customers: int,\n n_nonconsenting_customers: int) -> typing.Tuple[pd.DataFrame, pd.DataFrame]:\n \"\"\"Simulates dataframes for consenting and non-consenting customers.\n\n Args:\n n_consenting_customers: Desired number of consenting customers. 
Should be\n larger than n_nonconsenting_customers.\n n_nonconsenting_customers: Desired number non non-consenting customers.\n\n Returns:\n Two dataframes of simulated consenting and non-consenting customers.\n \"\"\"\n fake_adgroups = np.array(\n ['_'.join(fake_ad) for fake_ad in (combinations('ABCDEFG', 3))])\n\n data_consenting = pd.DataFrame.from_dict({\n 'adgroup':\n fake_adgroups[np.random.randint(\n low=0, high=len(fake_adgroups), size=n_consenting_customers)],\n 'conversion_value':\n np.random.lognormal(1, size=n_consenting_customers)\n })\n\n data_nonconsenting = pd.DataFrame.from_dict({\n 'adgroup':\n fake_adgroups[np.random.randint(\n low=0, high=len(fake_adgroups), size=n_nonconsenting_customers)],\n 'conversion_value':\n np.random.lognormal(1, size=n_nonconsenting_customers)\n })\n return data_consenting, data_nonconsenting\n\n\ndata_consenting, data_nonconsenting = simulate_conversion_data_consenting_non_consenting(\n n_consenting_customers, n_nonconsenting_customers)\ndata_consenting.head()", "Preprocessing", "#@title Split adgroups in separate levels\n#@markdown We preprocess our data. Consenting and non-consenting data are\n#@markdown concatenated to ensure that they have the same feature-columns. \\\n#@markdown We then split our adgroup-string into its components and dummy code each.\n#@markdown The level of each letter in the adgroup-string is added as prefix here.\n\ndef preprocess_data(data_consenting, data_nonconsenting):\n data_consenting['consent'] = 1\n data_nonconsenting['consent'] = 0\n data_all = pd.concat([data_consenting, data_nonconsenting])\n data_all.reset_index(inplace=True)\n\n # split the adgroups in their levels and dummy-code those.\n data_all = data_all.join(\n pd.get_dummies(data_all['adgroup'].str.split('_').apply(pd.Series)))\n data_all.drop(['adgroup'], axis=1, inplace=True)\n return data_all[data_all['consent'] == 1], data_all[data_all['consent'] == 0]\n\ndata_consenting, data_nonconsenting = preprocess_data(data_consenting,\n data_nonconsenting)\ndata_consenting.head()", "Create NearestCustomerMatcher object and run conversion-adjustments.\nWe now have our fake data in the right format – similarity here depends alone on\nthe adgroup of a given customer. In reality, we would have a gCLID and a\ntimestamp for each customer that we could pass as id_columns to the matcher.\\\nOther example features that could be used instead/in addition to the adgroup are\n\ndevice type\ngeo\ntime of day\nad-type\nGA-derived features\netc. \n\nWhen using the NearestCustomerMatcher, we can choose between three matching\nstrategies:\n* if we define number_nearest_neighbors, a fixed number of nearest (consenting)\ncustomers is used, irrespective of how dissimilar those customers are to the \nseed-non-consenting customer.\n* if we define radius, all consenting customers that fall within the specified radius of a non-consenting customer are used. 
This means that the number of nearest-neighbors likely differs between non-consenting customers, and a given non-consenting customer might have no consenting customers in their radius.\n* if we define percentage, the NearestCustomerMatcher first determines which minimal radius needs to be set in order to find at least one closest consenting customer for at least percentage of the non-consenting customers (not implemented in beam yet)\nIn practice, the simplest approach is to set number_nearest_neighbors and\nchoose a sufficiently high number here to ensure that individual consenting\ncustomers do not receive too high a share of non-consenting conversion values.", "matcher = nearest_consented_customers.NearestCustomerMatcher(\n    data_consenting, conversion_column='conversion_value', id_columns=['index'])\ndata_adjusted = matcher.calculate_adjusted_conversions(\n    data_nonconsenting, number_nearest_neighbors=100)\n\n#@title We generated a new dataframe containing the conversion-value adjustments\ndata_adjusted.sample(5)\n\n#@title Visualise distribution of adjusted conversions\n#@markdown We can plot the original and adjusted (original + adjustment-values)\n#@markdown conversion values and see that in general, the distributions are\n#@markdown very similar, but as expected, the adjusted values are shifted towards\n#@markdown larger values.\nax = data_adjusted['conversion_value'].plot(kind='hist', alpha=.5)\n(data_adjusted['adjusted_conversion']+data_adjusted['conversion_value']).plot(kind='hist', ax=ax, alpha=.5)\nax.legend(['original conversion value', 'adjusted conversion value'])\nplt.show()", "Next steps\nThe above would run automatically on a daily basis within a Google Cloud Project. A new table ready to use with Offline Conversion Import is created.\nIf no custom pipeline has been set up yet, we recommend using Tentacles." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/docs-l10n
site/zh-cn/lattice/tutorials/aggregate_function_models.ipynb
apache-2.0
[ "Copyright 2020 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "TF Lattice 聚合函数模型\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://tensorflow.google.cn/lattice/tutorials/aggregate_function_models\"> <img src=\"https://tensorflow.google.cn/images/tf_logo_32px.png\"> 在 TensorFlow.org 上查看</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/lattice/tutorials/aggregate_function_models.ipynb\"><img src=\"https://tensorflow.google.cn/images/colab_logo_32px.png\">在 Google Colab 中运行 </a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/lattice/tutorials/aggregate_function_models.ipynb\"><img src=\"https://tensorflow.google.cn/images/GitHub-Mark-32px.png\">在 GitHub 中查看源代码</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/lattice/tutorials/aggregate_function_models.ipynb\"><img src=\"https://tensorflow.google.cn/images/download_logo_32px.png\"> 下载笔记本</a></td>\n</table>\n\n概述\n利用 TFL 预制聚合函数模型,您可以快速轻松地构建 TFL tf.keras.model 实例来学习复杂聚合函数。本指南概述了构造 TFL 预制聚合函数模型并对其进行训练/测试所需的步骤。 \n设置\n安装 TF Lattice 软件包:", "#@test {\"skip\": true}\n!pip install tensorflow-lattice pydot", "导入所需的软件包:", "import tensorflow as tf\n\nimport collections\nimport logging\nimport numpy as np\nimport pandas as pd\nimport sys\nimport tensorflow_lattice as tfl\nlogging.disable(sys.maxsize)", "下载 Puzzles 数据集:", "train_dataframe = pd.read_csv(\n 'https://raw.githubusercontent.com/wbakst/puzzles_data/master/train.csv')\ntrain_dataframe.head()\n\ntest_dataframe = pd.read_csv(\n 'https://raw.githubusercontent.com/wbakst/puzzles_data/master/test.csv')\ntest_dataframe.head()", "提取并转换特征和标签", "# Features:\n# - star_rating rating out of 5 stars (1-5)\n# - word_count number of words in the review\n# - is_amazon 1 = reviewed on amazon; 0 = reviewed on artifact website\n# - includes_photo if the review includes a photo of the puzzle\n# - num_helpful number of people that found this review helpful\n# - num_reviews total number of reviews for this puzzle (we construct)\n#\n# This ordering of feature names will be the exact same order that we construct\n# our model to expect.\nfeature_names = [\n 'star_rating', 'word_count', 'is_amazon', 'includes_photo', 'num_helpful',\n 'num_reviews'\n]\n\ndef extract_features(dataframe, label_name):\n # First we extract flattened features.\n flattened_features = {\n feature_name: dataframe[feature_name].values.astype(float)\n for feature_name in feature_names[:-1]\n }\n\n # Construct mapping from puzzle name to feature.\n star_rating = collections.defaultdict(list)\n word_count = collections.defaultdict(list)\n is_amazon = collections.defaultdict(list)\n includes_photo = collections.defaultdict(list)\n num_helpful = collections.defaultdict(list)\n labels = {}\n\n # Extract each review.\n for i in range(len(dataframe)):\n row = dataframe.iloc[i]\n puzzle_name = 
row['puzzle_name']\n star_rating[puzzle_name].append(float(row['star_rating']))\n word_count[puzzle_name].append(float(row['word_count']))\n is_amazon[puzzle_name].append(float(row['is_amazon']))\n includes_photo[puzzle_name].append(float(row['includes_photo']))\n num_helpful[puzzle_name].append(float(row['num_helpful']))\n labels[puzzle_name] = float(row[label_name])\n\n # Organize data into list of list of features.\n names = list(star_rating.keys())\n star_rating = [star_rating[name] for name in names]\n word_count = [word_count[name] for name in names]\n is_amazon = [is_amazon[name] for name in names]\n includes_photo = [includes_photo[name] for name in names]\n num_helpful = [num_helpful[name] for name in names]\n num_reviews = [[len(ratings)] * len(ratings) for ratings in star_rating]\n labels = [labels[name] for name in names]\n\n # Flatten num_reviews\n flattened_features['num_reviews'] = [len(reviews) for reviews in num_reviews]\n\n # Convert data into ragged tensors.\n star_rating = tf.ragged.constant(star_rating)\n word_count = tf.ragged.constant(word_count)\n is_amazon = tf.ragged.constant(is_amazon)\n includes_photo = tf.ragged.constant(includes_photo)\n num_helpful = tf.ragged.constant(num_helpful)\n num_reviews = tf.ragged.constant(num_reviews)\n labels = tf.constant(labels)\n\n # Now we can return our extracted data.\n return (star_rating, word_count, is_amazon, includes_photo, num_helpful,\n num_reviews), labels, flattened_features\n\ntrain_xs, train_ys, flattened_features = extract_features(train_dataframe, 'Sales12-18MonthsAgo')\ntest_xs, test_ys, _ = extract_features(test_dataframe, 'SalesLastSixMonths')\n\n# Let's define our label minimum and maximum.\nmin_label, max_label = float(np.min(train_ys)), float(np.max(train_ys))\nmin_label, max_label = float(np.min(train_ys)), float(np.max(train_ys))", "设置用于在本指南中进行训练的默认值:", "LEARNING_RATE = 0.1\nBATCH_SIZE = 128\nNUM_EPOCHS = 500\nMIDDLE_DIM = 3\nMIDDLE_LATTICE_SIZE = 2\nMIDDLE_KEYPOINTS = 16\nOUTPUT_KEYPOINTS = 8", "特征配置\n使用 tfl.configs.FeatureConfig 设置特征校准和按特征的配置。特征配置包括单调性约束、按特征的正则化(请参阅 tfl.configs.RegularizerConfig)以及点阵模型的点阵大小。\n请注意,我们必须为希望模型识别的任何特征完全指定特征配置。否则,模型将无法获知存在这样的特征。对于聚合模型,将自动考虑这些特征并将其处理为不规则特征。\n计算分位数\n尽管 tfl.configs.FeatureConfig 中 pwl_calibration_input_keypoints 的默认设置为“分位数”,但对于预制模型,我们必须手动定义输入关键点。为此,我们首先定义自己的辅助函数来计算分位数。", "def compute_quantiles(features,\n num_keypoints=10,\n clip_min=None,\n clip_max=None,\n missing_value=None):\n # Clip min and max if desired.\n if clip_min is not None:\n features = np.maximum(features, clip_min)\n features = np.append(features, clip_min)\n if clip_max is not None:\n features = np.minimum(features, clip_max)\n features = np.append(features, clip_max)\n # Make features unique.\n unique_features = np.unique(features)\n # Remove missing values if specified.\n if missing_value is not None:\n unique_features = np.delete(unique_features,\n np.where(unique_features == missing_value))\n # Compute and return quantiles over unique non-missing feature values.\n return np.quantile(\n unique_features,\n np.linspace(0., 1., num=num_keypoints),\n interpolation='nearest').astype(float)", "定义我们的特征配置\n现在我们可以计算分位数了,我们为希望模型将其作为输入的每个特征定义一个特征配置。", "# Feature configs are used to specify how each feature is calibrated and used.\nfeature_configs = [\n tfl.configs.FeatureConfig(\n name='star_rating',\n lattice_size=2,\n monotonicity='increasing',\n pwl_calibration_num_keypoints=5,\n pwl_calibration_input_keypoints=compute_quantiles(\n flattened_features['star_rating'], num_keypoints=5),\n ),\n 
tfl.configs.FeatureConfig(\n name='word_count',\n lattice_size=2,\n monotonicity='increasing',\n pwl_calibration_num_keypoints=5,\n pwl_calibration_input_keypoints=compute_quantiles(\n flattened_features['word_count'], num_keypoints=5),\n ),\n tfl.configs.FeatureConfig(\n name='is_amazon',\n lattice_size=2,\n num_buckets=2,\n ),\n tfl.configs.FeatureConfig(\n name='includes_photo',\n lattice_size=2,\n num_buckets=2,\n ),\n tfl.configs.FeatureConfig(\n name='num_helpful',\n lattice_size=2,\n monotonicity='increasing',\n pwl_calibration_num_keypoints=5,\n pwl_calibration_input_keypoints=compute_quantiles(\n flattened_features['num_helpful'], num_keypoints=5),\n # Larger num_helpful indicating more trust in star_rating.\n reflects_trust_in=[\n tfl.configs.TrustConfig(\n feature_name=\"star_rating\", trust_type=\"trapezoid\"),\n ],\n ),\n tfl.configs.FeatureConfig(\n name='num_reviews',\n lattice_size=2,\n monotonicity='increasing',\n pwl_calibration_num_keypoints=5,\n pwl_calibration_input_keypoints=compute_quantiles(\n flattened_features['num_reviews'], num_keypoints=5),\n )\n]", "聚合函数模型\n要构造 TFL 预制模型,首先从 tfl.configs 构造模型配置。使用 tfl.configs.AggregateFunctionConfig 构造聚合函数模型。它会先应用分段线性和分类校准,随后再将点阵模型应用于不规则输入的每个维度。然后,它会在每个维度的输出上应用聚合层。接下来,会应用可选的输出分段线性校准。", "# Model config defines the model structure for the aggregate function model.\naggregate_function_model_config = tfl.configs.AggregateFunctionConfig(\n feature_configs=feature_configs,\n middle_dimension=MIDDLE_DIM,\n middle_lattice_size=MIDDLE_LATTICE_SIZE,\n middle_calibration=True,\n middle_calibration_num_keypoints=MIDDLE_KEYPOINTS,\n middle_monotonicity='increasing',\n output_min=min_label,\n output_max=max_label,\n output_calibration=True,\n output_calibration_num_keypoints=OUTPUT_KEYPOINTS,\n output_initialization=np.linspace(\n min_label, max_label, num=OUTPUT_KEYPOINTS))\n# An AggregateFunction premade model constructed from the given model config.\naggregate_function_model = tfl.premade.AggregateFunction(\n aggregate_function_model_config)\n# Let's plot our model.\ntf.keras.utils.plot_model(\n aggregate_function_model, show_layer_names=False, rankdir='LR')", "每个聚合层的输出是已校准点阵在不规则输入上的平均输出。下面是在第一个聚合层内使用的模型:", "aggregation_layers = [\n layer for layer in aggregate_function_model.layers\n if isinstance(layer, tfl.layers.Aggregation)\n]\ntf.keras.utils.plot_model(\n aggregation_layers[0].model, show_layer_names=False, rankdir='LR')", "现在,与任何其他 tf.keras.Model 一样,我们编译该模型并将其拟合到我们的数据中。", "aggregate_function_model.compile(\n loss='mae',\n optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))\naggregate_function_model.fit(\n train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)", "训练完模型后,我们可以在测试集中对其进行评估。", "print('Test Set Evaluation...')\nprint(aggregate_function_model.evaluate(test_xs, test_ys))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
M0nica/python-foundations-hw
11/.ipynb_checkpoints/Parking Data-checkpoint.ipynb
mit
[ "import pandas as pd\n\n\nviolations_df = pd.read_csv(\"violations.csv\")\n#violations_df = pd.read_csv(\"violations.csv\", nrows=100)\n\n# 1. I want to make sure my Plate ID is a string. Can't lose the leading zeroes!\n\n#3. I want the dates to be dates! Read the read_csv documentation to find out how to make pandas automatically parse dates.\ncol_types = { 'Plate ID': 'str'}\ndof_violations_df = pd.read_csv(\"DOF_Parking_Violation_Codes.csv\", dtype=col_types, parse_dates=True)\n#dof_violations_df = pd.read_csv(\"DOF_Parking_Violation_Codes.csv\", dtype=col_types, parse_dates=True, nrows=100)\n\n# print(violations_df.columns)\n\n# len(violations_df)\n\n# print(dof_violations_df.head())", "I don't think anyone's car was built in 0AD. Discard the '0's as NaN.", "violations_df = violations_df[violations_df['Vehicle Year'] != 0]\n# print(violations_df['Vehicle Year'].head())", "\"Date first observed\" is a pretty weird column, but it seems like it has a date hiding inside. Using a function with .apply, transform the string (e.g. \"20140324\") into a Python date. Make the 0's show up as NaN.", "# print(violations_df['Date First Observed'].head())\n\nimport datetime\nviolations_df.head()['Issue Date'].astype(datetime.datetime)\n\nimport datetime\nviolations_df.head()['Issue Date'].astype(datetime.datetime)\n\n#violations_df['Issue Date'] = violations_df['Issue Date'].astype('datetime64[ns]')\n\n\n# violations_df.dtypes", "\"Violation time\" is... not a time. Make it a time.", "import re\n# violations_df['Violation Time'].head()\n\nimport numpy as np\n# Build a function using that method\ndef time_to_datetime(str_time):\n try:\n #str_time = re.sub('^0', '', str_time)\n if isinstance(str_time, str):\n str_time = str_time + \"M\"\n #print(\"Trying to convert\", str_time, \"into a time\")\n return datetime.datetime.strptime(str_time.strip(), \"%I%M%p\").time()\n #try:\n # if str_time == -999:\n # print(\"It's -999\")\n # return np.nan\n \n except:\n return np.nan\n\n\n# Apply that method to the 'Time' column of the dataframe\nviolations_df['Violation Time'] = violations_df['Violation Time'].apply(time_to_datetime)\n\nprint(violations_df['Violation Time'].head())\n\n# def remove_1900(year):\n# year = str(year)\n# year = re.sub('^1900-01-01', '', year)\n# return year\n\n# print(violations_df['Violation Time'].apply(remove_1900))", "There sure are a lot of colors of cars, too bad so many of them are the same. 
Make \"BLK\" and \"BLACK\", \"WT\" and \"WHITE\", and any other combinations that you notice.", "import re\n\ndef vehicle_colors(vehicle):\n if isinstance(vehicle, str):\n #print(vehicle)\n vehicle = re.sub('^GY$', 'GREY', vehicle)\n #vehicle = vehicle.replace(\"GY\", \"GREY\")\n vehicle = re.sub('^WH$', 'WHITE', vehicle)\n vehicle = re.sub('^BR$', 'BROWN', vehicle)\n vehicle = re.sub('^RD$', 'RED', vehicle)\n vehicle = re.sub('^B[LK]$', 'BLACK', vehicle)\n vehicle = re.sub('^TN$', 'TAN', vehicle)\n vehicle = re.sub('^YW$', 'YELLOW', vehicle)\n vehicle = re.sub('^SIL$', 'SILVER', vehicle)\n vehicle = re.sub('^GR$', 'GREEN', vehicle)\n vehicle = re.sub('^SILVE$', 'SILVER', vehicle)\n \n \n \n# vehicle = vehicle.replace(\"WH\",\"WHITE\")\n# vehicle = vehicle.replace(\"BR\",\"BROWN\")\n# vehicle = vehicle.replace(\"RD\",\"RED\")\n# vehicle = vehicle.replace(\"BL\",\"BLACK\")\n# vehicle = vehicle.replace(\"BK\",\"BLACK\")\n# vehicle = vehicle.replace(\"TN\",\"TAN\")\n# vehicle = vehicle.replace(\"YW\",\"YELLOW\")\n# vehicle = re.sub('SIL$', 'SILVER', vehicle)\n# vehicle = vehicle.replace(\"SILVR\",\"SILVER\")\n# vehicle = vehicle.replace(\"SIL\",\"SILVER\")\n return vehicle\n\nviolations_df['Vehicle Color'] = violations_df['Vehicle Color'].apply(vehicle_colors)\n\nviolations_df['Vehicle Color'].head()", "Join the data with the Parking Violations Code dataset from the NYC Open Data site.", "print(violations_df.columns)\nviolations_df['Violation Code'].head()\n\nviolations_df\n\nprint(dof_violations_df.columns)\ndof_violations_df['CODE'].head()\n\n# violations_df.merge(dof_violations_df, left_on='Violation Code', right_on='CODE')\n\n#print(dof_violations_df['CODE'][0])\n#test = dof_violations_df['CODE'][0]\n\ndef to_int(test):\n try:\n # test = re.sub('$', '', test)\n test = test.strip(\"$\")\n #print(test)\n test = int(test)\n #print(test, \" is now an int\")\n return test\n except:\n print(test, \" coult not be converted to an int\")\n return np.nan\n\ndof_violations_df['CODE'] = dof_violations_df['CODE'].apply(to_int)", "Join the data with the Parking Violations Code dataset from the NYC Open Data site.", "violations_df = violations_df.merge(dof_violations_df, left_on='Violation Code', right_on='CODE')", "How much money did NYC make off of parking violations?", "violations_df.describe()\n\nviolations_df.columns\n\nviolations_df['Summons Number']\n\nviolations_df['Manhattan\\xa0 96th St. & below']\n\nviolations_df['Manhattan\\xa0 96th St. & below'] = violations_df['Manhattan\\xa0 96th St. & below'].apply(to_int)\n\nviolations_df['Manhattan\\xa0 96th St. & below'].sum()\n\nviolations_df['All Other Areas'] = violations_df['All Other Areas'].apply(to_int)\n\nviolations_df['All Other Areas'].sum()\n\nviolations_df['Manhattan\\xa0 96th St. & below'].describe()", "What's the most lucrative kind of parking violation? 
The list below shows all the ones that charge $115, which is the highest amount", "violations_df.groupby('CODE')['All Other Areas'].mean().sort_values(ascending=False).head(8)", "The most frequent parking violation is:", "import matplotlib.pyplot as plt\n%matplotlib inline\nfreqNYviolations = violations_df.groupby('CODE')['All Other Areas'].count().sort_values(ascending=False).head().plot(kind=\"bar\", color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])\nfreqNYviolations.set_title('Most Frequent NYC Parking Violations by Code')\nfreqNYviolations.set_xlabel('Parking Violation Code')\nfreqNYviolations.set_ylabel('Frequency of Citations')\nplt.savefig('freqParkingViolations.png')\n\nviolations_df.groupby('CODE')['All Other Areas'].count().sort_values(ascending=False).head(1)\n\n# violations_df.sort_values('All Other Areas', ascending=False)", "New Jersey has bad drivers, but does it have bad parkers, too?", "# Keep only violations by NJ-registered vehicles\nnj_violations_df = violations_df[violations_df['Registration State'] == 'NJ']\n\nprint(\"There were\", len(nj_violations_df['All Other Areas']), \"parking violations in NYC by drivers registered in New Jersey\")\nprint(\"These violations totaled\", nj_violations_df['All Other Areas'].sum(), \"in revenue\")", "How much money does NYC make off of all non-New York vehicles?\n11. Make a chart of the top few.", "violations_df.groupby('Registration State')['All Other Areas'].sum().sort_values(ascending=False).head(8)\n\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nnonny_violations_df = violations_df[violations_df['Registration State'] != 'NY']\nnonny_violations_df = nonny_violations_df[nonny_violations_df['Registration State'] != '99']\nnonNYviolations = nonny_violations_df.groupby('Registration State')['All Other Areas'].sum().sort_values(ascending=False).head(20).plot(kind=\"bar\", color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])\nnonNYviolations.set_title('Money Earned from NYC Parking Violations by State')\nnonNYviolations.set_xlabel('Vehicle Registration State (NON-NYS)')\nnonNYviolations.set_ylabel('Amount Earned')\nplt.savefig('nonNYParkingViolations.png')\n\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nnonny_violations_df = violations_df[violations_df['Registration State'] != 'NY']\nnonny_violations_df = nonny_violations_df[nonny_violations_df['Registration State'] != '99']\nfreqnonNYviolations = nonny_violations_df.groupby('Registration State')['All Other Areas'].count().sort_values(ascending=False).head(20).plot(kind=\"bar\", color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])\nfreqnonNYviolations.set_title('Frequency of NYC Parking Violations by State')\nfreqnonNYviolations.set_xlabel('Vehicle Registration State (NON-NYS)')\nfreqnonNYviolations.set_ylabel('Number of Citations')\nplt.savefig('freqnonNYParkingViolations.png')", "What time of day do people usually get their tickets? 
You can break the day up into several blocks - for example 12am-6am, 6am-12pm, 12pm-6pm, 6pm-12am.\n\n\nWhat's the average ticket cost in NYC?", "print(\"The average ticket cost in NYC is $\", violations_df['All Other Areas'].mean())", "Make a graph of the number of tickets per day.", "violations_df.groupby('Issue Date')['All Other Areas'].count().sort_values(ascending=False)\n\n\nfreqnonNYviolations = violations_df.groupby('Issue Date')['All Other Areas'].count().plot(kind=\"bar\", color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])\nfreqnonNYviolations.set_title('Frequency of NYC Parking Violations by Date')\nfreqnonNYviolations.set_xlabel('Date')\nfreqnonNYviolations.set_ylabel('Number of Citations')\nplt.savefig('datedfreqnonNYParkingViolations.png')", "Make a graph of the amount of revenue collected per day.", "\nfreqnonNYviolations = violations_df.groupby('Issue Date')['All Other Areas'].sum().plot(kind=\"bar\", color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])\nfreqnonNYviolations.set_title('Revenue of NYC Parking Violations by Date')\nfreqnonNYviolations.set_xlabel('Date')\nfreqnonNYviolations.set_ylabel('Revenue Generated from Citations')\nplt.savefig('datedrevenueonNYParkingViolations.png')\n\n16. Manually construct a dataframe out of https://dmv.ny.gov/statistic/2015licinforce-web.pdf (only NYC boroughts - bronx, queens, manhattan, staten island, brooklyn), having columns for borough name, abbreviation, and number of licensed drivers.\n17. What's the parking-ticket-$-per-licensed-driver in each borough of NYC? Do this with pandas and the dataframe you just made, not with your head!" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ghvn7777/ghvn7777.github.io
content/fluent_python/14_iter.ipynb
apache-2.0
[ "所有生成器都是迭代器,因为生成器完全实现了迭代器接口,不过迭代器一般用于从集合取出元素,生成器用于 “凭空” 创造元素。斐波那契数列例子可以很好的说明两者区别:斐波那契数列中的数有无穷个,在一个集合里放不下。\n在 Python 3 中,生成器有广泛用途。现在即使是内置的 range() 函数也要返回一个类似生成器的对象,而以前返回完整列表。如果一定让 range() 函数返回列表,必须明确指明(例如,list(range(100)))。\n在 Python 中,所有集合都能迭代。在 Python 内部,迭代器用于支持:\n\nfor 循环\n构建和扩展集合类型\n逐行遍历文本文件\n列表推导,字典推导和集合推导\n元组拆包\n调用函数时,使用 * 拆包\n\n本章探讨以下话题:\n\n语言内部使用 iter(...) 内置函数处理可迭代对象的方式\n如何使用 Python 经典的迭代器模式\n详细说明生成器函数的工作原理\n如何使用生成器函数或生成器表达式代替经典的迭代器\n如何使用标准库中通用的生成器函数\n如何使用 yield from 语句合并生成器\n案例分析: 在一个数据库转换工具中使用生成器处理大型数据集\n为什么生成器和协程看似相同,其实差别很大,不能混淆\n\nSentence 类第 1 版:单词序列\n我们创建一个类,并向它传入一些包含文本的字符串,然后可以逐个单词迭代,第 1 版要实现序列协议,这个类的对象可以迭代,因为所有序列都可以迭代 -- 这一点前面已经说过,现在说明真正的原因\n下面展示了一个可以通过索引从文本提取单词的类:", "import re\nimport reprlib\n\nRE_WORD = re.compile('\\w+')\n\nclass Sentence:\n \n def __init__(self, text):\n self.text = text\n # 返回一个字符串列表,里面的元素是正则表达式的全部非重叠匹配\n self.words = RE_WORD.findall(text)\n \n def __getitem__(self, index):\n return self.words[index]\n \n # 为了完善序列协议,我们实现了 __len__ 方法,不过,为了让对象可迭代,没必要实现这个方法\n def __len__(self):\n return len(self.words)\n \n def __repr__(self):\n # 下面这个函数用于生成大型数据结构的简略字符串表示形式\n return 'Sentence(%s)' % reprlib.repr(self.text)\n\ns = Sentence('\"The time has come,\", the Walrus said')\ns\n\nfor word in s:\n print(word)\n\nlist(s)\n\ns[0], s[-1]", "我们都知道,序列可以迭代,下面说明具体原因: iter 函数\n解释器需要迭代对象 x 时候,会自动调用 iter(x)\n内置的 iter 函数有以下作用。\n\n\n检查对象是否实现了 __iter__ 方法,如果实现了就调用它,获取一个迭代器\n\n\n如果没有实现 __iter__ 方法,但是实现了 __getitem__ 方法,Python 会创建一个迭代器,尝试按顺序(从索引 0 开始)获取元素\n\n\n如果尝试失败,Python 抛出 TypeError 异常,通常提示 C object is not iterable,其中 C 是目标对象所属的类\n\n\n任何 Pytho 序列都可迭代的原因是实现了 __getitem__ 方法。其实标准的序列也都实现了 __iter__ 方法,因此我们也应该这么做。之所以对 __getitem__ 方法特殊处理,是为了向后兼容,未来可能不会再这么做\n11 章提到过,这是鸭子类型的极端形式,不仅要实现特殊的 __iter__ 方法,还要实现 __getitem__ 方法,而且 __getitem__ 方法的参数是从 0 开始的整数(int),这样才认为对象是可迭代的。\n在白鹅类型理论中,可迭代对象定义的简单一些,不过没那么灵活,如果实现了 __iter__ 方法,那么就认为对象是可迭代的。此时,不需要创建子类,也不需要注册,因为 abc.Iterable 类实现了 __subclasshook__ 方法,下面举个例子:", "from collections import abc\n\nclass Foo:\n def __iter__(self):\n pass\n \nissubclass(Foo, abc.Iterable)\n\nf = Foo()\nisinstance(f, abc.Iterable)", "不过要注意,前面定义的 Sentence 类是可迭代的,却无法通过 issubclass(Sentence, abc.Iterable) 测试\n\n从 Python 3.4 开始,检测对象 x 是否可迭代,最准确的方法是调用 iter(x) 函数,如果不可迭代,再处理 TypeError 异常,这回比使用 isinstance(x, abc.Iterable) 更准确,因为 iter(x) 会考虑到 __getitem__ 方法\n\n迭代对象之前显式检查或许没必要,因为试图迭代不可迭代对象时,抛出的错误很明显。如果除了跑出 TypeError 异常之外还要进一步处理,可以使用 try/except 块,无需显式检查。如果要保存对象,等以后迭代,或许可以显式检查,因为这种情况需要尽早捕捉错误\n可迭代对象与迭代器对比\n可迭代对象: \n使用 iter 内置函数可以获取迭代器对象。如果对象实现了能返回迭代器的 __iter__ 方法,那么对象可迭代。序列都可以迭代:实现了 __getitem__ 方法,而且其参数是从 0 开始的索引,这种对象也可以迭代。\n我们要明确可迭代对象和迭代器之间的关系: Python 从可迭代的对象中获取迭代器\n下面是一个 for 循环,迭代一个字符串,这里字符串 'ABC' 是可迭代对象,背后有迭代器,只是我们看不到", "s = 'ABC'\nfor char in s:\n print(char)", "如果用 while 循环,要像下面这样:", "s = 'ABC'\nit = iter(s)\nwhile True:\n try:\n print(next(it))\n except StopIteration: # 这个异常表示迭代器到头了\n del it\n break", "标准迭代器接口有两个方法:\n__next__ 返回下一个可用的元素,如果没有元素了,抛出 StopIteration 异常\n__iter__ 返回 self,以便在应该使用可迭代对象的地方使用迭代器,比如 for 循环\n这个接口在 collections.abc.Iterator 抽象基类中,这个类定义了 __next__ 抽象方法,而且继承自 Iterable 类: __iter__ 抽象方法则在 Iterable 类中定义\n\nabc.Iterator 抽象基类中 __subclasshook__ 的方法作用就是检查有没有 __iter__ 和 __next__ 属性\n检查对象 x 是否为 迭代器 的最好方式是调用 isinstance(x, abc.Iterator)。得益于 Iterator.__subclasshook__ 方法,即使对象 x 所属的类不是 Iterator 类的真实子类或虚拟子类,也能这样检查\n\n下面可以看到 Sentence 类如何使用 iter 函数构建迭代器,和如何使用 next 函数使用迭代器", "s3 = Sentence('Pig and Pepper')\nit = iter(s3)\nit\n\nnext(it)\n\nnext(it)\n\nnext(it)\n\nnext(it)\n\nlist(it) # 到头后,迭代器没用了\n\nlist(s3) # 如果想再次迭代,要重新构建迭代器", "因为迭代器只需要 __next__ 和 __iter__ 两个方法,所以除了调用 next() 
方法,以及捕获 StopIteration 异常之外,没有办法检查是否还有遗留元素。此外,也没有办法 ”还原“ 迭代器。如果想再次迭代,那就要调用 iter(...) 传入之前构造迭代器传入的可迭代对象。传入迭代器本身没用,因为前面说过 Iterator.__iter__ 方法实现方式是返回实例本身,所以传入迭代器无法还原已经耗尽的迭代器\n我们可以得出迭代器定义如下:实现了无参数的 __next__ 方法,返回序列中的下一个元素,如果没有元素了,那么抛出 StopIteration 异常。Python 中迭代器还实现了 __iter__ 方法,因此迭代器也可以迭代。因为内置的 iter(...) 函数会对序列做特殊处理,所以第 1 版 的 Sentence 类可以迭代。\nSentence 类第 2 版:典型的迭代器\n这一版根据《设计模式:可复用面向对象软件的基础》一书给出的模型,实现典型的迭代器设计模式。注意,这不符合 Python 的习惯做法,后面重构时候会说明原因。不过,通过这一版能明确可迭代集合和迭代器对象之间的区别\n下面的类可以迭代,因为实现了 __iter__ 方法,构建并返回一个 SentenceIterator 实例,《设计模式:可复用面向对象软件的基础》一书就是这样描述迭代器设计模式的。\n这里之所以这么做,是为了清楚的说明可迭代的对象和迭代器之间的重要区别,以及二者间的联系。", "import re\nimport reprlib\n\nRE_WORD = re.compile('\\w+')\n\nclass Sentence:\n \n def __init__(self, text):\n self.text = text\n self.words = RE_WORD.findall(text)\n \n def __repr__(self):\n return 'Sentence(%s)' % reprlib.repr(self.text)\n \n def __iter__(self):\n return SentenceIterator(self.words)\n \nclass SentenceIterator:\n \n def __init__(self, words):\n self.words = words\n self.index = 0\n \n def __next__(self):\n try:\n word = self.words[self.index]\n except IndexError:\n raise StopIteration\n self.index += 1\n return word\n \n def __iter__(self):\n return self", "注意,对于这个例子来说,没有必要在 SentenceIterator 类中实现 __iter__ 方法,不过这么做是对的,因为迭代器应该实现 __next__ 和 __iter__ 两个方法,而且这么做能让迭代器通过 issubclass(SentenceInterator, abc.Iterator) 测试。如果让 SentenceIterator 继承 abc.Iterator 类,那么它会继承 abc.Iterator.__iter__ 这个具体方法\n注意 SentenceIterator 类的大多数代码在处理迭代器内部状态,稍后会说明如何简化,不过我们先讨论一个看似合理实则错误的实现捷径\n把 Sentence 变成迭代器:坏主意\n构建可迭代的对象和迭代器经常出现错误,原因是混淆了二者。要知道,可迭代对象有个 __iter__ 方法,每次实例化一个新的迭代器,迭代器要实现 __next__ 方法,返回单个元素,此外要实现 __iter__ 方法,返回迭代器本身。\n因此,迭代器可以迭代,但是可迭代的对象不是迭代器\n除了 __iter__ 方法之外,你可能还想在 Sentence 类中实现 __next__ 方法,让 Sentence 实例既是可迭代对象,也是自身迭代器,可是这种想法非常糟糕,这也是常见的反模式\n迭代器模式可以用来:\n\n访问一个聚合对象的内容而无需暴露它的内部表示\n支持对聚合对象的多种遍历\n为遍历不同的聚合结构提供一个统一的接口(即支持多态迭代)\n\n为了“支持多种遍历”,必须能从同一个迭代的实例中获取多个独立的迭代器,而且各个迭代器要能维护自身的内部状态,因此这一模式正确的实现方法是,每次调用 iter(my_iterable) 都新建一个独立的迭代器,这就是为什么这个示例需要定义 SentenceIterator 类\n\n可迭代对象一定不能是自身的迭代器,也就是说,可迭代对象必须实现 __iter__ 方法,但不能实现 __next__ 方法。另一方面,迭代器应该可以一直迭代,迭代器的 __iter__ 应该返回自身\n\nSentence 类第 3 版:生成器函数\n实现同样功能,却符合 Python 习惯的方式是,用生成器函数替代 SentenceIterator 类。先看下面的例子:", "import re\nimport reprlib\n\nRE_WORD = re.compile('\\w+')\n\nclass Sentence:\n \n def __init__(self, text):\n self.text = text\n self.words = RE_WORD.findall(text)\n \n def __repr__(self):\n return 'Sentence(%s)' % reprlib.repr(self.text)\n \n def __iter__(self):\n for word in self.words:\n yield word\n # 这个 return 不是必要的,生成器函数不会抛出 StopIteration 异常,\n #而是在生成全部值之后直接退出\n return \n\na = Sentence('hello world')\none = iter(a)\nprint(next(one))\ntwo = iter(a)\nprint(next(two)) # 两个迭代器之间不会互相干扰", "在这个例子中,迭代器其实是生成器对象,每次调用 __iter__ 方法都会自动创建,因为这里的 __iter__ 方法是生成器函数\n生成器函数的工作原理\n只要 Python 函数定义体中有 yield 关键字,该函数就是生成器函数,调用生成器函数时,会返回一个生成器对象。也就是说,生成器函数是生成器工厂\n下面用一个特别简单的函数说明生成器行为:", "def gen_123():\n yield 1\n yield 2\n yield 3\n \ngen_123\n\ngen_123()\n\nfor i in gen_123():\n print(i)\n\ng = gen_123()\nnext(g)\n\nnext(g)\n\nnext(g)\n\nnext(g) # 生成器函数定义体执行完毕后,跑出 StopIteration 异常", "生成器函数会创建一个生成器对象,包装生成器函数的定义体。把生成器传给 next(..) 
函数时,生成器函数会向前,执行函数定义体中的下一个 yield 语句,返回产出的值,并在函数定义体的当前位置暂停。最终函数的定义体返回时,外层的生成器对象会抛出 StopIteration 异常 -- 这一点与迭代器协议一致\n下面例子更清楚的说明了生成器函数定义体的执行过程:", "def gen_AB():\n print('start')\n yield 'A'\n print('continue')\n yield 'B'\n print('end')\n \nfor c in gen_AB():\n print('-->', c)", "现在在我们应该知道 Sentence.__iter__ 作用了: __iter__ 方法是生成器函数,调用时会构建一个实现了迭代器接口的生成器对象,因此不用再定义 SentenceIterator 类了。\n这一版 Sentence 类比之前简短多了,但还不够懒惰,懒惰实现是指尽可能延后生成值,这样能节省内存,或许还可以避免做无用的处理\nSentence 类第 4 版:惰性实现\n设计 Iterator 接口时考虑了惰性:next(my_iterator) 一次生成一个元素。惰性求值和及早求值是编程语言理论的技术术语\n目前的 Sentence 类不具有惰性,因为 __init__ 方法急迫的构建好了文本中的单词列表,然后绑定到 self.words 属性上。这样就得到处理后的整个文本,列表使用的内存量可能与文本本身一样多(获取更多,这取决于文本中有多少非单词字符)。如果只需迭代前几个单词,大多数工作都是白费力气。\nre.finditer 函数是 re.findall 函数的惰性版本,返回的不是列表,而是一个生成器,按需生成 re.MatchObject 实例。如果有很多匹配,re.finditer 能节省大量内存。如果我们要使用这个函数让上一版 Sentence 类变得懒惰,即只在需要时才生成下一个单词。代码如下所示:", "import re\nimport reprlib\n\nRE_WORD = re.compile('\\w+')\n\nclass Sentence:\n \n def __init__(self, text):\n self.text = text\n \n def __repr__(self):\n return 'Sentence(%s)' % reprlib.repr(self.text)\n \n def __iter__(self):\n for match in RE_WORD.finditer(self.text):\n yield match.group() # 从 MatchObject 实例中提取匹配正则表达式的具体文本", "生成器表达式\n简单的生成器函数,如前面的例子中使用的那个,可以替换成生成器表达式\n生成器表达式可以理解为列表推导式的惰性版本:不会迫切的构建列表,而是返回一共额生成器,按需惰性产称元素。也就是说,如果列表推导是制造列表的工厂,那么生成器表达式是制造生成器的工厂\n下面展示了一个生成器表达式,并与列表推导式对比:", "def gen_AB():\n print('start')\n yield 'A'\n print('continue')\n yield 'B'\n print('end')\n \nres1 = [x * 3 for x in gen_AB()]\n\nfor i in res1:\n print('-->', i)\n\nres2 = (x * 3 for x in gen_AB())\nres2\n\nfor i in res2:\n print('-->', i)", "可以看出,生成器表达式会产出生成器,因此可以使用生成器表达式进一步减少 Sentence 类的代码:", "import re\nimport reprlib\n\nRE_WORD = re.compile('\\w+')\n\nclass Sentence:\n \n def __init__(self, text):\n self.text = text\n \n def __repr__(self):\n return 'Sentence(%s)' % reprlib.repr(self.text)\n \n def __iter__(self):\n return (match.group() for match in RE_WORD.finditer(self.text))", "这里用的是生成器表达式构建生成器,然后将其返回,不过最终效果一样:调用 __iter__ 方法会得到一个生成器对象\n生成器表达式是语法糖:完全可以替换成生成器函数,不过有时使用生成器表达式更加便利\n何时使用生成器表达式\n遇到简单的情况,可以使用成器表达式,因为因为这样扫一眼就知道代码作用\n如果生成器表达式要分成多行,最好使用生成器函数,提高可读性\n\n如果函数或构造方法只有一个参数,传入生成器表达式时不用写一堆调用函数的括号,再写一堆括号围住生成器表达式,只写一对括号就行,如果生成器表达式后面还有其他参数,那么必须使用括号围住,否则会抛出 SynataxError 异常\n\n另一个例子:等差数列生成器", "class ArithmeticProgression:\n \n def __init__(self, begin, step, end=None):\n self.begin = begin\n self.step = step\n self.end = end # 无穷数列\n \n def __iter__(self): \n # self 赋值给 result,不过要先强制转成前面加法表达式类型(两个支持加法的对象返回一个对象)\n result = type(self.begin + self.step)(self.begin)\n forever = self.end is None\n index = 0\n while forever or result < self.end:\n yield result\n index += 1\n result = self.begin + self.step * index\n\nap = ArithmeticProgression(0, 1, 3)\nlist(ap)\n\nap = ArithmeticProgression(1, 5, 3)\nlist(ap)\n\nap = ArithmeticProgression(0, 1 / 3, 1)\nlist(ap)", "上面的类完全可以用一个生成器函数代替", "def aritprog_gen(begin, step, end=None):\n result = type(begin + step)(begin)\n forever = end is None\n index = 0\n while forever or result < end:\n yield result\n index += 1\n result = begin + step * index", "上面的实现很棒,但是要记住,标准库中有很多现成的生成器,下面会用 itertools 模块实现,这个版本更棒\n使用 itertools 生成等差数列\nitertools 提供了 19 个生成器函数,结合起来很有意思。\n例如 itertools.count 函数返回的生成器能生成多个数。如果不传入参数,itertools.count 函数会生成从 0 开始的整数数列。不过,我们可以提供 start 和 step 值,这样实现的作用与 aritprog_gen 函数相似", "import itertools\ngen = itertools.count(1, .5)\nnext(gen)\n\nnext(gen)\n\nnext(gen)\n\nnext(gen)", "然而 itertools.count 函数从不停止,因此,调用 list(count())) 会产生一个特别大的列表,超出可用的内存\n不过,itertools.takewhile 函数不同,他会生成一个使用另一个生成器的生成器,在指定条件计算结果为 False 
时候停止,因此,可以把这两个函数结合:", "gen = itertools.takewhile(lambda n: n < 3, itertools.count(1, .5))\nlist(gen)", "所以,我们可以将等差数列写成这样:", "import itertools\n\ndef aritprog_gen(begin, step, end=None):\n first = type(begin+step)(begin)\n ap_gen = itertools.count(first, step)\n if end is not None:\n ap_gen = itertools.takewhile(lambda n: n < end, ap_gen)\n return ap_gen", "注意, aritprog_gen 不是生成器函数,因为没有 yield 关键字,但是会返回一个生成器,因此它和其他的生成器函数一样,是一个生成器工厂函数\n标准库中的生成器函数\n标准库中有很多生成器,有用于逐行迭代文本文件的对象,还有出色的 os.walk 函数,不过本节专注于通用的函数:参数为任意可迭代对象,返回值是生成器,用于生成选中的,计算出的和重新排列的元素。\n第一组是过滤生成器函数,如下:", "def vowel(c):\n return c.lower() in 'aeiou'\n\n# 字符串各个元素传给 vowel 函数,为真则返回对应元素\nlist(filter(vowel, 'Aardvark'))\n\nimport itertools\n# 与上面相反\nlist(itertools.filterfalse(vowel, 'Aardvark'))\n\n# 处理 字符串,跳过 vowel 为真的元素,然后产出剩余的元素,不再检查\nlist(itertools.dropwhile(vowel, 'Aardvark'))\n\n#返回真值对应的元素,立即停止,不再检查\nlist(itertools.takewhile(vowel, 'Aardvark')) \n\n# 并行处理两个迭代对象,如果第二个是真值,则返回第一个\nlist(itertools.compress('Aardvark', (1, 0, 1, 1, 0, 1)))\n\nlist(itertools.islice('Aardvark', 4))\n\nlist(itertools.islice('Aardvark', 4, 7))\n\nlist(itertools.islice('Aardvark', 1, 7, 2))", "下面是映射生成器函数:", "sample = [5, 4, 2, 8, 7, 6, 3, 0, 9, 1]\nimport itertools\n# 产出累计的总和\nlist(itertools.accumulate(sample))\n\n# 如果提供了函数,那么把前两个元素给他,然后把计算结果和下一个元素给它,以此类推\nlist(itertools.accumulate(sample, min))\n\nlist(itertools.accumulate(sample, max))\n\nimport operator\nlist(itertools.accumulate(sample, operator.mul)) # 计算乘积\n\nlist(itertools.accumulate(range(1, 11), operator.mul))\n\nlist(enumerate('albatroz', 1)) #从 1 开始,为字母编号\n\nimport operator\nlist(map(operator.mul, range(11), range(11)))\n\n# 计算两个可迭代对象中对应位置的两个之和,元素最少的迭代完毕就停止\nlist(map(operator.mul, range(11), [2, 4, 8]))\n\nlist(map(lambda a, b: (a, b), range(11), [2, 4, 8]))\n\nimport itertools\n# starmap 把第二个参数的每个元素传给第一个函数 func,产出结果,\n# 输入的可迭代对象应该产出可迭代对象 iit,\n# 然后以(func(*iit) 这种形式调用 func)\nlist(itertools.starmap(operator.mul, enumerate('albatroz', 1)))\n\nsample = [5, 4, 2, 8, 7, 6, 3, 0, 9, 1]\n# 计算平均值\nlist(itertools.starmap(lambda a, b: b / a, \n enumerate(itertools.accumulate(sample), 1)))", "接下来是用于合并的生成器函数:", "# 先产生第一个元素,然后产生第二个参数的所有元素,以此类推,无缝连接到一起\nlist(itertools.chain('ABC', range(2)))\n\nlist(itertools.chain(enumerate('ABC')))\n\n# chain.from_iterable 函数从可迭代对象中获取每个元素,\n# 然后按顺序把元素连接起来,前提是各个元素本身也是可迭代对象\nlist(itertools.chain.from_iterable(enumerate('ABC')))\n\nlist(zip('ABC', range(5), [10, 20, 30, 40])) #只要有一个生成器到头,就停止\n\n# 处理到最长的迭代器到头,短的会填充 None\nlist(itertools.zip_longest('ABC', range(5)))\n\nlist(itertools.zip_longest('ABC', range(5), fillvalue='?')) # 填充问号", "itertools.product 生成器是计算笛卡尔积的惰性方式,从输入的各个迭代对象中获取元素,合并成由 N 个元素构成的元组,与嵌套的 for 循环效果一样。repeat指明重复处理多少次可迭代对象。下面演示 itertools.product 的用法", "list(itertools.product('ABC', range(2)))\n\nsuits = 'spades hearts diamonds clubs'.split()\nlist(itertools.product('AK', suits))\n\n# 传入一个可迭代对象,产生一系列只有一个元素的元祖,不是特别有用\nlist(itertools.product('ABC'))\n\n# repeat = N 重复 N 次处理各个可迭代对象\nlist(itertools.product('ABC', repeat=2))\n\nlist(itertools.product(range(2), repeat=3))\n\nrows = itertools.product('AB', range(2), repeat=2)\nfor row in rows: print(row)", "把输入的各个元素扩展成多个输出元素的生成器函数:", "ct = itertools.count()\nnext(ct) # 不能构建 ct 列表,因为 ct 是无穷的\n\nnext(ct), next(ct), next(ct)\n\nlist(itertools.islice(itertools.count(1, .3), 3))\n\ncy = itertools.cycle('ABC')\nnext(cy)\n\nlist(itertools.islice(cy, 7))\n\nrp = itertools.repeat(7) # 重复出现指定元素\nnext(rp), next(rp)\n\nlist(itertools.repeat(8, 4)) # 4 次数字 8\n\nlist(map(operator.mul, range(11), itertools.repeat(5)))", "itertools 中 
combinations, comb 和 permutations 生成器函数,连同 product 函数称为组合生成器。itertool.product 和其余组合学函数有紧密关系,如下:", "# 'ABC' 中每两个元素 len() == 2 的各种组合\nlist(itertools.combinations('ABC', 2))\n\n# 包括相同元素的每两个元素的各种组合\nlist(itertools.combinations_with_replacement('ABC', 2))\n\n# 每两个元素的各种排列\nlist(itertools.permutations('ABC', 2))\n\nlist(itertools.product('ABC', repeat=2))", "用于重新排列元素的生成器函数:", "# 产出由两个元素组成的元素,形式为 (key, group),其中 key 是分组标准,\n#group 是生成器,用于产出分组里的元素\nlist(itertools.groupby('LLLAAGGG'))\n\nfor char, group in itertools.groupby('LLLLAAAGG'):\n print(char, '->', list(group))\n\nanimals = ['duck', 'eagle', 'rat', 'giraffe', 'bear',\n 'bat', 'dolphin', 'shark', 'lion']\nanimals.sort(key=len)\nanimals\n\nfor length, group in itertools.groupby(animals, len):\n print(length, '->', list(group))\n\n# 使用 reverse 生成器从右往左迭代 animals\nfor length, group in itertools.groupby(reversed(animals), len):\n print(length, '->', list(group))\n\n# itertools 产生多个生成器,每个生成器都产出输入的各个元素\nlist(itertools.tee('abc'))\n\ng1, g2 = itertools.tee('abc')\nnext(g1)\n\nnext(g2)\n\nnext(g2)\n\nlist(g1)\n\nlist(g2)\n\nlist(zip(*itertools.tee('ABC')))", "Python 3.3 中新语法 yield from\n如果生成器函数需要产生两一个生成器生成的值,传统方法是使用 for 循环", "def chain(*iterables): # 自己写的 chain 函数,标准库中的 chain 是用 C 写的\n for it in iterables:\n for i in it:\n yield i\n \ns = 'ABC'\nt = tuple(range(3))\nlist(chain(s, t))", "chain 生成器函数把操作依次交给接收到的各个可迭代对象处理。为此 Python 3.3 引入了新语法,如下:", "def chain(*iterables):\n for i in iterables:\n yield from i # 详细语法在 16 章讲\n \nlist(chain(s, t))", "可迭代的归约函数\n接受可迭代对象,然后返回单个结果,叫归约函数。", "all([1, 2, 3]) # 所有元素为真返回 True\n\nall([1, 0, 3])\n\nany([1, 2, 3]) # 有元素为真就返回 True\n\nany([1, 0, 3])\n\nany([0, 0, 0])\n\nany([])\n\ng = (n for n in [0, 0.0, 7, 8])\nany(g) \n\nnext(g) # any 碰到一个为真就不往下判断了", "还有一个内置的函数接受一个可迭代对象,返回不同的值 -- sorted,reversed 是生成器函数,与此不同,sorted 会构建并返回真正的列表,毕竟要读取每一个元素才能排序。它返回的是一个排好序的列表。这里提到 sorted,是因为它可以处理任何可迭代对象\n当然,sorted 和这些归约函数只能处理最终会停止的可迭代对象,这些函数会一直收集元素,永远无法返回结果\n深入分析 iter 函数\niter 函数还有一个鲜为人知的用法:传两个参数,使用常规的函数或任何可调用的对象创建迭代器。这样使用时,第一个参数必须是可调用对象,用于不断调用(没有参数),产出各个值,第二个是哨符,是个标记值,当可调用对象返回这个值时候,触发迭代器抛\n出 StopIteration 异常,而不产出哨符。\n下面是掷骰子,直到掷出 1", "from random import randint\n\ndef d6():\n return randint(1, 6)\n\nd6_iter = iter(d6, 1)\nd6_iter\n\nfor roll in d6_iter:\n print(roll)", "内置函数 iter 的文档有一个实用的例子,逐行读取文件,直到遇到空行或者到达文件末尾为止:", "# for line in iter(fp.readline, '\\n'):\n# process_line(line)", "把生成器当成协程\nPython 2.2 引入了 yield 关键字实现的生成器函数,Python 2.5 为生成器对象添加了额外的方法和功能,其中最引人关注的是 .send() 方法\n与 .__next__() 方法一样,.send() 方法致使生成器前进到下一个 yield 语句。不过 send() 方法还允许使用生成器的客户把数据发给自己,即不管传给 .send() 方法什么参数,那个参数都会成为生成器函数定义体中对应的 yield 表达式的值。也就是说,.send() 方法允许在客户代码和生成器之间双向交换数据。而 .__next__() 方法只允许客户从生成器中获取数据\n这是一项重要的 “改进”,甚至改变了生成器本性,这样使用的话,生成器就变成了协程。所以要提醒一下:\n\n生成器用于生成供迭代的数据\n协程是数据的消费者\n为了避免脑袋爆炸,不能把两个概念混为一谈\n协程与迭代无关\n注意,虽然在协程中会使用 yield 产出值,但这与迭代无关\n\n延伸阅读\n有个简单的生成器函数例子", "def f(): \n x=0\n while True:\n x += 1\n yield x", "我们无法通过函数调用抽象产出这个过程,下面似乎能抽象产出这个过程:", "def f():\n def do_yield(n):\n yield n\n x = 0\n while True:\n x += 1\n do_yield(x)", "调用 f() 会得到一个死循环,而不是生成器,因为 yield 只能将最近的外层函数变成生成器函数。虽然生成器函数看起来像函数,可是我们不能通过简单的函数调用把职责委托给另一个生成器函数。\nPython 新引入的 yield from 语法允许生成器或协程把工作委托给第三方完成,这样就无需嵌套 for 循环作为变通了。在函数调用前面加上 yield from 能 ”解决“ 上面的问题,如下:", "def f():\n def do_yield(n):\n yield n\n x = 0\n while True:\n x += 1\n yield from do_yield(x)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bakanchevn/DBCourseMirea2017
Неделя 1/Задание в классе/Лабораторная-3-2.ipynb
gpl-3.0
[ "%load_ext sql\n%sql sqlite:///dataset_1.db", "Задание #1\nРассмотрим набор таблиц описывающая индустрию бубликов (bagel) bagel, типы бубликов и компании их делающие:\n\n\nname STRING\nprice FLOAT\nmade_by STRING\n\n\nИ purchase:\n\n\nbagel_name STRING\nfranchise STRING\ndate INT\nquantity INT\npurchaser_age INT\n\n\nГде purchase.bagel_name ссылается bagel.name т purchase.franchise на/ bagel.made_by:", "%sql SELECT * FROM bagel LIMIT 3;\n\n%sql SELECT * FROM purchase LIMIT 3;", "Напишите запрос для получение суммарного дохода для каждого типа бубликов, для которых средний возраст покупателя больше 18\nExercise #2\nВоспользуемся упрощенной версией precipitation_full таблицы, которая имеет только дневные daily осадки только в CA, и имеет следующую схему:\n\n\nstation_id\nday\nprecipitation", "%sql SELECT * FROM precipitation LIMIT 5;", "Верните id станции, у которых средние количество осадков > 75. (попробуйте написать это вложенным запросом):\nперепишите в GROUP BY:\nПосмотрим на время выполнения", "%time %sql SELECT DISTINCT p.station_id FROM precipitation p WHERE (SELECT AVG(precipitation) FROM precipitation WHERE station_id = p.station_id) > 75;\n\n%time %sql SELECT p.station_id FROM precipitation p GROUP BY p.station_id HAVING AVG(p.precipitation) > 75;", "An ~ 10-20x разница во времени!" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tommyod/abelian
docs/notebooks/functions.ipynb
gpl-3.0
[ "Tutorial: Functions on LCAs\nThis is an interactive tutorial written with real code.\nWe start by setting up $\\LaTeX$ printing, and importing the classes LCA, HomLCA and LCAFunc.", "# Imports from abelian\nfrom abelian import LCA, HomLCA, LCAFunc\n\n# Other imports\nimport math\nimport matplotlib.pyplot as plt\nfrom IPython.display import display, Math\n\ndef show(arg):\n return display(Math(arg.to_latex()))", "Initializing a new function\nThere are two ways to create a function $f: G \\to \\mathbb{C}$:\n\nOn general LCAs $G$, the function is represented by an analytical expression.\nIf $G = \\mathbb{Z}_{\\mathbf{p}}$ with $p_i \\geq 1$ for every $i$ ($G$ is a direct sum of discrete groups with finite period), a table of values (multidimensional array) can also be used.\n\nWith an analytical representation\nIf the representation of the function is given by an analytical expression, initialization is simple.\nBelow we define a Gaussian function on $\\mathbb{Z}$, and one on $T$.", "def gaussian(vector_arg, k = 0.1):\n return math.exp(-sum(i**2 for i in vector_arg)*k)\n\n# Gaussian function on Z\nZ = LCA([0])\ngauss_on_Z = LCAFunc(gaussian, domain = Z)\nprint(gauss_on_Z) # Printing\nshow(gauss_on_Z) # LaTeX output\n\n# Gaussian function on T\nT = LCA([1], [False])\ngauss_on_T = LCAFunc(gaussian, domain = T)\nshow(gauss_on_T) # LaTeX output", "Notice how the print built-in and the to_latex() method will show human-readable output.\nWith a table of values\nFunctions on $\\mathbb{Z}_\\mathbf{p}$ can be defined using a table of values, if $p_i \\geq 1$ for every $p_i \\in \\mathbf{p}$.", "# Create a table of values\ntable_data = [[1,2,3,4,5],\n [2,3,4,5,6],\n [3,4,5,6,7]]\n\n# Create a domain matching the table\ndomain = LCA([3, 5])\n\ntable_func = LCAFunc(table_data, domain)\nshow(table_func)\nprint(table_func([1, 1])) # [1, 1] maps to 3", "Function evaluation\nA function $f \\in \\mathbb{C}^G$ is callable.\nTo call (i.e. evaluate) a function,\npass a group element.", "# An element in Z\nelement = [0]\n\n# Evaluate the function\ngauss_on_Z(element)", "The sample() method can be used to sample a function on a list of group elements in the domain.", "# Create a list of sample points [-6, ..., 6]\nsample_points = [[i] for i in range(-6, 7)]\n\n# Sample the function, returns a list of values\nsampled_func = gauss_on_Z.sample(sample_points)\n\n# Plot the result of sampling the function\nplt.figure(figsize = (8, 3))\nplt.title('Gaussian function on $\\mathbb{Z}$')\nplt.plot(sample_points, sampled_func, '-o')\nplt.grid(True)\nplt.show()", "Shifts\nLet $f: G \\to \\mathbb{C}$ be a function. The shift operator (or translation operator) $S_{h}$ is defined as\n$$S_{h}[f(g)] = f(g - h).$$\nThe shift operator shifts $f(g)$ by $h$, where $h, g \\in G$.\nThe shift operator is implemented as a method called shift.", "# The group element to shift by\nshift_by = [3]\n\n# Shift the function\nshifted_gauss = gauss_on_Z.shift(shift_by)\n\n# Create sample poits and sample\nsample_points = [[i] for i in range(-6, 7)]\nsampled1 = gauss_on_Z.sample(sample_points)\nsampled2 = shifted_gauss.sample(sample_points)\n\n# Create a plot\nplt.figure(figsize = (8, 3))\nttl = 'Gaussians on $\\mathbb{Z}$, one is shifted'\nplt.title(ttl)\nplt.plot(sample_points, sampled1, '-o')\nplt.plot(sample_points, sampled2, '-o')\nplt.grid(True)\nplt.show()", "Pullbacks\nLet $\\phi: G \\to H$ be a homomorphism and let $f:H \\to \\mathbb{C}$ be a function. 
The pullback of $f$ along $\\phi$, denoted $\\phi^*(f)$,\nis defined as\n$$\\phi^*(f) := f \\circ \\phi.$$\nThe pullback \"moves\" the domain of the function $f$ to $G$, i.e. $\\phi^*(f) : G \\to \\mathbb{C}$. The pullback is of f is calculated using the pullback method, as shown below.", "def linear(arg):\n return sum(arg)\n\n# The original function\nf = LCAFunc(linear, LCA([10]))\nshow(f)\n\n# A homomorphism phi\nphi = HomLCA([2], target = [10])\nshow(phi)\n\n# The pullback of f along phi\ng = f.pullback(phi)\nshow(g)", "We now sample the functions and plot them.", "# Sample the functions and plot them\nsample_points = [[i] for i in range(-5, 15)]\nf_sampled = f.sample(sample_points)\ng_sampled = g.sample(sample_points)\n\n# Plot the original function and the pullback\nplt.figure(figsize = (8, 3))\nplt.title('Linear functions')\nlabel = '$f \\in \\mathbb{Z}_{10}$'\nplt.plot(sample_points, f_sampled, '-o', label = label)\nlabel = '$g \\circ \\phi \\in \\mathbb{Z}$'\nplt.plot(sample_points, g_sampled, '-o', label = label)\nplt.grid(True)\nplt.legend(loc = 'best')\nplt.show()", "Pushforwards\nLet $\\phi: G \\to H$ be a epimorphism and let $f:G \\to \\mathbb{C}$ be a function. The pushforward of $f$ along $\\phi$, denoted $\\phi_*(f)$,\nis defined as\n$$(\\phi_*(f))(g) := \\sum_{k \\in \\operatorname{ker}\\phi} f(k + h), \\quad \\phi(g) = h$$\nThe pullback \"moves\" the domain of the function $f$ to $H$, i.e. $\\phi_*(f) : H \\to \\mathbb{C}$. First a solution is obtained, then we sum over the kernel. Since such a sum may contain an infinite number of terms, we bound it using a norm. Below is an example where we:\n\nDefine a Gaussian $f(x) = \\exp(-kx^2)$ on $\\mathbb{Z}$\nUse pushforward to \"move\" it with $\\phi(g) = g \\in \\operatorname{Hom}(\\mathbb{Z}, \\mathbb{Z}_{10})$", "# We create a function on Z and plot it\ndef gaussian(arg, k = 0.05):\n \"\"\"\n A gaussian function.\n \"\"\"\n return math.exp(-sum(i**2 for i in arg)*k)\n\n# Create gaussian on Z, shift it by 5\ngauss_on_Z = LCAFunc(gaussian, LCA([0]))\ngauss_on_Z = gauss_on_Z.shift([5])\n\n# Sample points and sampled function\ns_points = [[i] for i in range(-5, 15)]\nf_sampled = gauss_on_Z.sample(s_points)\n\n# Plot it\nplt.figure(figsize = (8, 3))\nplt.title('A gaussian function on $\\mathbb{Z}$')\nplt.plot(s_points, f_sampled, '-o')\nplt.grid(True)\nplt.show()\n\n# Use a pushforward to periodize the function\nphi = HomLCA([1], target = [10])\nshow(phi)", "First we do a pushforward with only one term. 
Not enough terms are present in the sum to capture what the pushforward would look like if the sum went to infinity.", "terms = 1\n\n# Pushforward of the function along phi\ngauss_on_Z_10 = gauss_on_Z.pushforward(phi, terms)\n\n# Sample the functions and plot them\npushforward_sampled = gauss_on_Z_10.sample(sample_points)\n\nplt.figure(figsize = (8, 3))\nlabel = 'A gaussian function on $\\mathbb{Z}$ and \\\npushforward to $\\mathbb{Z}_{10}$ with few terms in the sum'\nplt.title(label)\nplt.plot(s_points, f_sampled, '-o', label ='Original')\nplt.plot(s_points, pushforward_sampled, '-o', label ='Pushforward')\nplt.legend(loc = 'best')\nplt.grid(True)\nplt.show()", "Next we do a pushforward with more terms in the sum, this captures what the pushforward would look like if the sum went to infinity.", "terms = 9\n\ngauss_on_Z_10 = gauss_on_Z.pushforward(phi, terms)\n\n# Sample the functions and plot them\npushforward_sampled = gauss_on_Z_10.sample(sample_points)\n\nplt.figure(figsize = (8, 3))\nplt.title('A gaussian function on $\\mathbb{Z}$ and \\\npushforward to $\\mathbb{Z}_{10}$ with enough terms')\nplt.plot(s_points, f_sampled, '-o', label ='Original')\nplt.plot(s_points, pushforward_sampled, '-o', label ='Pushforward')\nplt.legend(loc = 'best')\nplt.grid(True)\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kubeflow/pipelines
components/gcp/ml_engine/deploy/sample.ipynb
apache-2.0
[ "Name\nDeploying a trained model to Cloud Machine Learning Engine \nLabel\nCloud Storage, Cloud ML Engine, Kubeflow, Pipeline\nSummary\nA Kubeflow Pipeline component to deploy a trained model from a Cloud Storage location to Cloud ML Engine.\nDetails\nIntended use\nUse the component to deploy a trained model to Cloud ML Engine. The deployed model can serve online or batch predictions in a Kubeflow Pipeline.\nRuntime arguments\n| Argument | Description | Optional | Data type | Accepted values | Default |\n|--------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|--------------|-----------------|---------|\n| model_uri | The URI of a Cloud Storage directory that contains a trained model file.<br/> Or <br/> An Estimator export base directory that contains a list of subdirectories named by timestamp. The directory with the latest timestamp is used to load the trained model file. | No | GCSPath | | |\n| project_id | The ID of the Google Cloud Platform (GCP) project of the serving model. | No | GCPProjectID | | |\n| model_id | The name of the trained model. | Yes | String | | None |\n| version_id | The name of the version of the model. If it is not provided, the operation uses a random name. | Yes | String | | None |\n| runtime_version | The Cloud ML Engine runtime version to use for this deployment. If it is not provided, the default stable version, 1.0, is used. | Yes | String | | None |\n| python_version | The version of Python used in the prediction. If it is not provided, version 2.7 is used. You can use Python 3.5 if runtime_version is set to 1.4 or above. Python 2.7 works with all supported runtime versions. | Yes | String | | 2.7 |\n| model | The JSON payload of the new model. | Yes | Dict | | None |\n| version | The new version of the trained model. | Yes | Dict | | None |\n| replace_existing_version | Indicates whether to replace the existing version in case of a conflict (if the same version number is found.) | Yes | Boolean | | FALSE |\n| set_default | Indicates whether to set the new version as the default version in the model. | Yes | Boolean | | FALSE |\n| wait_interval | The number of seconds to wait in case the operation has a long run time. | Yes | Integer | | 30 |\nInput data schema\nThe component looks for a trained model in the location specified by the model_uri runtime argument. The accepted trained models are:\n\nTensorflow SavedModel \nScikit-learn & XGBoost model\n\nThe accepted file formats are:\n\n*.pb\n*.pbtext\nmodel.bst\nmodel.joblib\nmodel.pkl\n\nmodel_uri can also be an Estimator export base directory, which contains a list of subdirectories named by timestamp. The directory with the latest timestamp is used to load the trained model file.\nOutput\n| Name | Description | Type |\n|:------- |:---- | :--- |\n| job_id | The ID of the created job. | String |\n| job_dir | The Cloud Storage path that contains the trained model output files. | GCSPath |\nCautions & requirements\nTo use the component, you must:\n\nSet up the cloud environment.\nThe component can authenticate to GCP. 
Refer to Authenticating Pipelines to GCP for details.\nGrant read access to the Cloud Storage bucket that contains the trained model to the Kubeflow user service account.\n\nDetailed description\nUse the component to: \n* Locate the trained model at the Cloud Storage location you specify.\n* Create a new model if a model provided by you doesn’t exist.\n* Delete the existing model version if replace_existing_version is enabled.\n* Create a new version of the model from the trained model.\n* Set the new version as the default version of the model if set_default is enabled.\nFollow these steps to use the component in a pipeline:\n\nInstall the Kubeflow Pipeline SDK:", "%%capture --no-stderr\n\n!pip3 install kfp --upgrade", "Load the component using KFP SDK", "import kfp.components as comp\n\nmlengine_deploy_op = comp.load_component_from_url(\n 'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/ml_engine/deploy/component.yaml')\nhelp(mlengine_deploy_op)", "Sample\nNote: The following sample code works in IPython notebook or directly in Python code.\nIn this sample, you deploy a pre-built trained model from gs://ml-pipeline-playground/samples/ml_engine/census/trained_model/ to Cloud ML Engine. The deployed model is kfp_sample_model. A new version is created every time the sample is run, and the latest version is set as the default version of the deployed model.\nSet sample parameters", "# Required Parameters\nPROJECT_ID = '<Please put your project ID here>'\n\n# Optional Parameters\nEXPERIMENT_NAME = 'CLOUDML - Deploy'\nTRAINED_MODEL_PATH = 'gs://ml-pipeline-playground/samples/ml_engine/census/trained_model/'", "Example pipeline that uses the component", "import kfp.dsl as dsl\nimport json\n@dsl.pipeline(\n name='CloudML deploy pipeline',\n description='CloudML deploy pipeline'\n)\ndef pipeline(\n model_uri = 'gs://ml-pipeline-playground/samples/ml_engine/census/trained_model/',\n project_id = PROJECT_ID,\n model_id = 'kfp_sample_model',\n version_id = '',\n runtime_version = '1.10',\n python_version = '',\n version = {},\n replace_existing_version = 'False',\n set_default = 'True',\n wait_interval = '30'):\n task = mlengine_deploy_op(\n model_uri=model_uri, \n project_id=project_id, \n model_id=model_id, \n version_id=version_id, \n runtime_version=runtime_version, \n python_version=python_version,\n version=version, \n replace_existing_version=replace_existing_version, \n set_default=set_default, \n wait_interval=wait_interval)", "Compile the pipeline", "pipeline_func = pipeline\npipeline_filename = pipeline_func.__name__ + '.zip'\nimport kfp.compiler as compiler\ncompiler.Compiler().compile(pipeline_func, pipeline_filename)", "Submit the pipeline for execution", "#Specify pipeline argument values\narguments = {}\n\n#Get or create an experiment and submit a pipeline run\nimport kfp\nclient = kfp.Client()\nexperiment = client.create_experiment(EXPERIMENT_NAME)\n\n#Submit a pipeline run\nrun_name = pipeline_func.__name__ + ' run'\nrun_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)", "References\n\nComponent python code\nComponent docker file\nSample notebook\nCloud Machine Learning Engine Model REST API\nCloud Machine Learning Engine Version REST API\n\nLicense\nBy deploying or using this software you agree to comply with the AI Hub Terms of Service and the Google APIs Terms of Service. To the extent of a direct conflict of terms, the AI Hub Terms of Service will control." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jiaphuan/models
research/object_detection/object_detection_tutorial.ipynb
apache-2.0
[ "Object Detection Demo\nWelcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the installation instructions before you start.\nImports", "import numpy as np\nimport os\nimport six.moves.urllib as urllib\nimport sys\nimport tarfile\nimport tensorflow as tf\nimport zipfile\n\nfrom collections import defaultdict\nfrom io import StringIO\nfrom matplotlib import pyplot as plt\nfrom PIL import Image\n\n# This is needed since the notebook is stored in the object_detection folder.\nsys.path.append(\"..\")\nfrom object_detection.utils import ops as utils_ops\n\nif tf.__version__ < '1.4.0':\n raise ImportError('Please upgrade your tensorflow installation to v1.4.* or later!')\n", "Env setup", "# This is needed to display the images.\n%matplotlib inline", "Object detection imports\nHere are the imports from the object detection module.", "from utils import label_map_util\n\nfrom utils import visualization_utils as vis_util", "Model preparation\nVariables\nAny model exported using the export_inference_graph.py tool can be loaded here simply by changing PATH_TO_CKPT to point to a new .pb file. \nBy default we use an \"SSD with Mobilenet\" model here. See the detection model zoo for a list of other models that can be run out-of-the-box with varying speeds and accuracies.", "# What model to download.\nMODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17'\nMODEL_FILE = MODEL_NAME + '.tar.gz'\nDOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'\n\n# Path to frozen detection graph. This is the actual model that is used for the object detection.\nPATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'\n\n# List of the strings that is used to add correct label for each box.\nPATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')\n\nNUM_CLASSES = 90", "Download Model", "opener = urllib.request.URLopener()\nopener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)\ntar_file = tarfile.open(MODEL_FILE)\nfor file in tar_file.getmembers():\n file_name = os.path.basename(file.name)\n if 'frozen_inference_graph.pb' in file_name:\n tar_file.extract(file, os.getcwd())", "Load a (frozen) Tensorflow model into memory.", "detection_graph = tf.Graph()\nwith detection_graph.as_default():\n od_graph_def = tf.GraphDef()\n with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:\n serialized_graph = fid.read()\n od_graph_def.ParseFromString(serialized_graph)\n tf.import_graph_def(od_graph_def, name='')", "Loading label map\nLabel maps map indices to category names, so that when our convolution network predicts 5, we know that this corresponds to airplane. 
Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine", "label_map = label_map_util.load_labelmap(PATH_TO_LABELS)\ncategories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)\ncategory_index = label_map_util.create_category_index(categories)", "Helper code", "def load_image_into_numpy_array(image):\n (im_width, im_height) = image.size\n return np.array(image.getdata()).reshape(\n (im_height, im_width, 3)).astype(np.uint8)", "Detection", "# For the sake of simplicity we will use only 2 images:\n# image1.jpg\n# image2.jpg\n# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.\nPATH_TO_TEST_IMAGES_DIR = 'test_images'\nTEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3) ]\n\n# Size, in inches, of the output images.\nIMAGE_SIZE = (12, 8)\n\ndef run_inference_for_single_image(image, graph):\n with graph.as_default():\n with tf.Session() as sess:\n # Get handles to input and output tensors\n ops = tf.get_default_graph().get_operations()\n all_tensor_names = {output.name for op in ops for output in op.outputs}\n tensor_dict = {}\n for key in [\n 'num_detections', 'detection_boxes', 'detection_scores',\n 'detection_classes', 'detection_masks'\n ]:\n tensor_name = key + ':0'\n if tensor_name in all_tensor_names:\n tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(\n tensor_name)\n if 'detection_masks' in tensor_dict:\n # The following processing is only for single image\n detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])\n detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])\n # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.\n real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)\n detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])\n detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])\n detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(\n detection_masks, detection_boxes, image.shape[0], image.shape[1])\n detection_masks_reframed = tf.cast(\n tf.greater(detection_masks_reframed, 0.5), tf.uint8)\n # Follow the convention by adding back the batch dimension\n tensor_dict['detection_masks'] = tf.expand_dims(\n detection_masks_reframed, 0)\n image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')\n\n # Run inference\n output_dict = sess.run(tensor_dict,\n feed_dict={image_tensor: np.expand_dims(image, 0)})\n\n # all outputs are float32 numpy arrays, so convert types as appropriate\n output_dict['num_detections'] = int(output_dict['num_detections'][0])\n output_dict['detection_classes'] = output_dict[\n 'detection_classes'][0].astype(np.uint8)\n output_dict['detection_boxes'] = output_dict['detection_boxes'][0]\n output_dict['detection_scores'] = output_dict['detection_scores'][0]\n if 'detection_masks' in output_dict:\n output_dict['detection_masks'] = output_dict['detection_masks'][0]\n return output_dict\n\nfor image_path in TEST_IMAGE_PATHS:\n image = Image.open(image_path)\n # the array based representation of the image will be used later in order to prepare the\n # result image with boxes and labels on it.\n image_np = load_image_into_numpy_array(image)\n # Expand dimensions since the model expects images to have shape: [1, None, None, 3]\n 
image_np_expanded = np.expand_dims(image_np, axis=0)\n # Actual detection.\n output_dict = run_inference_for_single_image(image_np, detection_graph)\n # Visualization of the results of a detection.\n vis_util.visualize_boxes_and_labels_on_image_array(\n image_np,\n output_dict['detection_boxes'],\n output_dict['detection_classes'],\n output_dict['detection_scores'],\n category_index,\n instance_masks=output_dict.get('detection_masks'),\n use_normalized_coordinates=True,\n line_thickness=8)\n plt.figure(figsize=IMAGE_SIZE)\n plt.imshow(image_np)" ]
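The notebook above draws every detection the model returns; in practice one usually keeps only boxes above a confidence threshold before visualizing. Below is a minimal numpy sketch of that post-filtering step, assuming the output_dict layout produced by run_inference_for_single_image above; the 0.5 threshold and the synthetic example values are illustrative only, not part of the original notebook.

import numpy as np

# Keep only detections whose score clears a threshold.
# The dict layout mirrors run_inference_for_single_image above.
def filter_detections(output_dict, min_score=0.5):
    keep = output_dict['detection_scores'] >= min_score
    return {
        'detection_boxes': output_dict['detection_boxes'][keep],
        'detection_classes': output_dict['detection_classes'][keep],
        'detection_scores': output_dict['detection_scores'][keep],
    }

# Synthetic example: two detections, one below the threshold.
example = {
    'detection_boxes': np.array([[0.1, 0.1, 0.5, 0.5], [0.2, 0.2, 0.9, 0.9]]),
    'detection_classes': np.array([1, 18], dtype=np.uint8),
    'detection_scores': np.array([0.92, 0.31]),
}
print(filter_detections(example))  # only the 0.92 detection survives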
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
probml/pyprobml
notebooks/book1/15/attention_torch.ipynb
mit
[ "Please find jax implementation of this notebook here: https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/book1/15/attention_jax.ipynb\n<a href=\"https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/attention_torch.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nBasics of differentiable (soft) attention\nWe show how to implement soft attention.\nBased on sec 10.3 of http://d2l.ai/chapter_attention-mechanisms/attention-scoring-functions.html.", "import numpy as np\nimport matplotlib.pyplot as plt\nimport math\nfrom IPython import display\n\ntry:\n import torch\nexcept ModuleNotFoundError:\n %pip install -qq torch\n import torch\nfrom torch import nn\nfrom torch.nn import functional as F\nfrom torch.utils import data\n\nimport random\nimport os\nimport time\n\nnp.random.seed(seed=1)\ntorch.manual_seed(1)\n!mkdir figures # for saving plots", "Masked soft attention", "def sequence_mask(X, valid_len, value=0):\n \"\"\"Mask irrelevant entries in sequences.\"\"\"\n maxlen = X.size(1)\n mask = torch.arange((maxlen), dtype=torch.float32, device=X.device)[None, :] < valid_len[:, None]\n X[~mask] = value\n return X\n\n\ndef masked_softmax(X, valid_lens):\n \"\"\"Perform softmax operation by masking elements on the last axis.\"\"\"\n # `X`: 3D tensor, `valid_lens`: 1D or 2D tensor\n if valid_lens is None:\n return nn.functional.softmax(X, dim=-1)\n else:\n shape = X.shape\n if valid_lens.dim() == 1:\n valid_lens = torch.repeat_interleave(valid_lens, shape[1])\n else:\n valid_lens = valid_lens.reshape(-1)\n # On the last axis, replace masked elements with a very large negative\n # value, whose exponentiation outputs 0\n X = sequence_mask(X.reshape(-1, shape[-1]), valid_lens, value=-1e6)\n return nn.functional.softmax(X.reshape(shape), dim=-1)", "Example. Batch size 2, feature size 2, sequence length 4.\nThe valid lengths are 2,3. So the output has size (2,2,4),\nbut the length dimension is full of 0s in the invalid locations.", "Y = masked_softmax(torch.rand(2, 2, 4), torch.tensor([2, 3]))\nprint(Y)", "Example. Batch size 2, feature size 2, sequence length 4.\nThe valid lengths are (1,3) for batch 1, and (2,4) for batch 2.", "Y = masked_softmax(torch.rand(2, 2, 4), torch.tensor([[1, 3], [2, 4]]))\nprint(Y)", "Additive attention\n$$\n\\alpha(q,k) = w_v^T \\tanh(W_q q + w_k k)\n$$", "class AdditiveAttention(nn.Module):\n def __init__(self, key_size, query_size, num_hiddens, dropout, **kwargs):\n super(AdditiveAttention, self).__init__(**kwargs)\n self.W_k = nn.Linear(key_size, num_hiddens, bias=False)\n self.W_q = nn.Linear(query_size, num_hiddens, bias=False)\n self.w_v = nn.Linear(num_hiddens, 1, bias=False)\n self.dropout = nn.Dropout(dropout)\n\n def forward(self, queries, keys, values, valid_lens):\n queries, keys = self.W_q(queries), self.W_k(keys)\n # After dimension expansion, shape of `queries`: (`batch_size`, no. of\n # queries, 1, `num_hiddens`) and shape of `keys`: (`batch_size`, 1,\n # no. of key-value pairs, `num_hiddens`). Sum them up with\n # broadcasting\n features = queries.unsqueeze(2) + keys.unsqueeze(1)\n features = torch.tanh(features)\n # There is only one output of `self.w_v`, so we remove the last\n # one-dimensional entry from the shape. Shape of `scores`:\n # (`batch_size`, no. of queries, no. 
of key-value pairs)\n scores = self.w_v(features).squeeze(-1)\n self.attention_weights = masked_softmax(scores, valid_lens)\n # Shape of `values`: (`batch_size`, no. of key-value pairs, value\n # dimension)\n return torch.bmm(self.dropout(self.attention_weights), values)\n\n# batch size 2. 1 query of dim 20, 10 keys of dim 2.\nqueries, keys = torch.normal(0, 1, (2, 1, 20)), torch.ones((2, 10, 2))\n# 10 values of dim 4 in each of the 2 batches.\nvalues = torch.arange(40, dtype=torch.float32).reshape(1, 10, 4).repeat(2, 1, 1)\nprint(values.shape)\nvalid_lens = torch.tensor([2, 6])\n\nattention = AdditiveAttention(key_size=2, query_size=20, num_hiddens=8, dropout=0.1)\nattention.eval()\nA = attention(queries, keys, values, valid_lens)\nprint(A.shape)\nprint(A)", "The heatmap is uniform across the keys, since the keys are all 1s.\nHowever, the support is truncated to the valid length.", "def show_heatmaps(matrices, xlabel, ylabel, titles=None, figsize=(2.5, 2.5), cmap=\"Reds\"):\n display.set_matplotlib_formats(\"svg\")\n num_rows, num_cols = matrices.shape[0], matrices.shape[1]\n fig, axes = plt.subplots(num_rows, num_cols, figsize=figsize, sharex=True, sharey=True, squeeze=False)\n for i, (row_axes, row_matrices) in enumerate(zip(axes, matrices)):\n for j, (ax, matrix) in enumerate(zip(row_axes, row_matrices)):\n pcm = ax.imshow(matrix.detach(), cmap=cmap)\n if i == num_rows - 1:\n ax.set_xlabel(xlabel)\n if j == 0:\n ax.set_ylabel(ylabel)\n if titles:\n ax.set_title(titles[j])\n fig.colorbar(pcm, ax=axes, shrink=0.6)\n\nshow_heatmaps(attention.attention_weights.reshape((1, 1, 2, 10)), xlabel=\"Keys\", ylabel=\"Queries\")", "Dot-product attention\n$$\nA = \\text{softmax}(Q K^T/\\sqrt{d}) V\n$$", "class DotProductAttention(nn.Module):\n \"\"\"Scaled dot product attention.\"\"\"\n\n def __init__(self, dropout, **kwargs):\n super(DotProductAttention, self).__init__(**kwargs)\n self.dropout = nn.Dropout(dropout)\n\n # Shape of `queries`: (`batch_size`, no. of queries, `d`)\n # Shape of `keys`: (`batch_size`, no. of key-value pairs, `d`)\n # Shape of `values`: (`batch_size`, no. of key-value pairs, value\n # dimension)\n # Shape of `valid_lens`: (`batch_size`,) or (`batch_size`, no. of queries)\n def forward(self, queries, keys, values, valid_lens=None):\n d = queries.shape[-1]\n # Set `transpose_b=True` to swap the last two dimensions of `keys`\n scores = torch.bmm(queries, keys.transpose(1, 2)) / math.sqrt(d)\n self.attention_weights = masked_softmax(scores, valid_lens)\n return torch.bmm(self.dropout(self.attention_weights), values)\n\n# batch size 2. 1 query of dim 2, 10 keys of dim 2.\nqueries = torch.normal(0, 1, (2, 1, 2))\nattention = DotProductAttention(dropout=0.5)\nattention.eval()\nattention(queries, keys, values, valid_lens)\n\nshow_heatmaps(attention.attention_weights.reshape((1, 1, 2, 10)), xlabel=\"Keys\", ylabel=\"Queries\")" ]
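As a cross-check of the scaled dot-product formula A = softmax(QK^T / sqrt(d)) V used above, here is a minimal pure-numpy sketch (no masking, no dropout) with the same batch/query/key shapes as the torch example. It is an illustration under those assumptions, not the notebook's implementation.

import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def dot_product_attention(Q, K, V):
    # Q: (batch, queries, d), K: (batch, keys, d), V: (batch, keys, v)
    d = Q.shape[-1]
    scores = np.einsum('bqd,bkd->bqk', Q, K) / np.sqrt(d)
    weights = softmax(scores, axis=-1)
    return np.einsum('bqk,bkv->bqv', weights, V), weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 1, 2))   # batch 2, one query of dim 2
K = np.ones((2, 10, 2))          # 10 keys of dim 2
V = np.arange(40, dtype=float).reshape(1, 10, 4).repeat(2, axis=0)
out, w = dot_product_attention(Q, K, V)
print(out.shape, w.shape)        # (2, 1, 4) (2, 1, 10)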
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
CoreSecurity/pysap
docs/fileformats/SAPSSFS.ipynb
gpl-2.0
[ "SAP SSFS\nThe following subsections show a representation of the file format portions and how to generate them.\nFirst we need to perform some setup to import the packet classes:", "from pysap.SAPSSFS import *\nfrom pysap.utils.crypto import rsec_decrypt\nfrom IPython.display import display", "SSFS files\nWe'll read the key and data files used in the test case suite and use them as example:", "with open(\"../../tests/data/ssfs_hdb_dat\", \"rb\") as fd:\n data = fd.read()\n \nssfs_data = SAPSSFSData(data)\n\nwith open(\"../../tests/data/ssfs_hdb_key\", \"rb\") as fd:\n key = fd.read()\n\nssfs_key = SAPSSFSKey(key)", "SSFS files are comprised of the following main structures:\nSSFS Data", "ssfs_data.show()", "As can be observed, a SSFS Data file contains multiple records with different key/value pairs, as well as associated meta data.\nSome records contain values stored in plaintext, while others are stored in an encrypted fashion. We'll see a password record, which is stored encrypted:", "ssfs_data.records[-1].canvas_dump()", "Additionally, each SSFS record contains an HMAC-SHA1 value calculated using a fixed key. The intent of this value is to provide integrity validation as well as ensure that an authentic tool was used to generate the files:", "ssfs_data.records[-1].valid", "SSFS Key content", "ssfs_key.canvas_dump()", "SSFS Value access\nThe values contained in SSFS Data records can be accessed by providing the key name:", "ssfs_data.get_value('HDB/KEYNAME/DB_USER')", "SSFS Data content decryption\nFor those records that are stored encrypted, it's possible to access the right value by providing the key name and the proper SSFS decryption key structure:", "ssfs_data.get_value('HDB/KEYNAME/DB_PASSWORD', ssfs_key)", "SSFS Decrypted Payload structure\nThe decryption mechanism can be user to obtain the raw data stored encrypted:", "decrypted_blob = rsec_decrypt(ssfs_data.get_record('HDB/KEYNAME/DB_PASSWORD').data, ssfs_key.key)\ndecrypted_blob", "It's possible also to parse that raw data and obtain the underlying strucutures and meta data associated:", "payload = SAPSSFSDecryptedPayload(decrypted_blob)\npayload.canvas_dump()", "The decrypted payload contains a hash calculated using the SHA-1 algorithm, and that can be used to validate integrity of the entire payload:", "payload.valid" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kradams/MIDS-W261-2015-Adams
week13/MIDS-MLS-Project-Criteo-CTR.ipynb
mit
[ "DATASCI W261: Machine Learning at Scale\n W261-1 Fall 2015 \nWeek 12: Criteo CTR Project \nNovember 14, 2015\nStudent name Katrina Adams\n\nClick-Through Rate Prediction Lab\nThis lab covers the steps for creating a click-through rate (CTR) prediction pipeline. You will work with the Criteo Labs dataset that was used for a recent Kaggle competition.\n This lab will cover: \n\n\nPart 1: Featurize categorical data using one-hot-encoding (OHE)\n\n\nPart 2: Construct an OHE dictionary\n\n\nPart 3: Parse CTR data and generate OHE features\n\n\nVisualization 1: Feature frequency\n\n\nPart 4: CTR prediction and logloss evaluation\n\n\nVisualization 2: ROC curve\n\n\nPart 5: Reduce feature dimension via feature hashing\n\n\nVisualization 3: Hyperparameter heat map\n\n\nNote that, for reference, you can look up the details of the relevant Spark methods in Spark's Python API and the relevant NumPy methods in the NumPy Reference", "labVersion = 'MIDS_MLS_week12_v_0_9'\n\n%cd ~/Documents/W261/hw12/\n\nimport os\nimport sys\n\nspark_home = os.environ['SPARK_HOME'] = \\\n '/Users/davidadams/packages/spark-1.5.1-bin-hadoop2.6/'\n\nif not spark_home:\n raise ValueError('SPARK_HOME enviroment variable is not set')\nsys.path.insert(0,os.path.join(spark_home,'python'))\nsys.path.insert(0,os.path.join(spark_home,'python/lib/py4j-0.8.2.1-src.zip'))\nexecfile(os.path.join(spark_home,'python/pyspark/shell.py'))", "Part 1: Featurize categorical data using one-hot-encoding \n (1a) One-hot-encoding \nWe would like to develop code to convert categorical features to numerical ones, and to build intuition, we will work with a sample unlabeled dataset with three data points, with each data point representing an animal. The first feature indicates the type of animal (bear, cat, mouse); the second feature describes the animal's color (black, tabby); and the third (optional) feature describes what the animal eats (mouse, salmon).\nIn a one-hot-encoding (OHE) scheme, we want to represent each tuple of (featureID, category) via its own binary feature. We can do this in Python by creating a dictionary that maps each tuple to a distinct integer, where the integer corresponds to a binary feature. 
To start, manually enter the entries in the OHE dictionary associated with the sample dataset by mapping the tuples to consecutive integers starting from zero, ordering the tuples first by featureID and next by category.\nLater in this lab, we'll use OHE dictionaries to transform data points into compact lists of features that can be used in machine learning algorithms.", "# Data for manual OHE\n# Note: the first data point does not include any value for the optional third feature\nsampleOne = [(0, 'mouse'), (1, 'black')]\nsampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]\nsampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]\nsampleDataRDD = sc.parallelize([sampleOne, sampleTwo, sampleThree])\n\n# TODO: Replace <FILL IN> with appropriate code\nsampleOHEDictManual = {}\nsampleOHEDictManual[(0,'bear')] = 0\nsampleOHEDictManual[(0,'cat')] = 1\nsampleOHEDictManual[(0,'mouse')] = 2\n\nsampleOHEDictManual[(1,'black')] = 3\nsampleOHEDictManual[(1,'tabby')] = 4\n\nsampleOHEDictManual[(2,'mouse')] = 5\nsampleOHEDictManual[(2,'salmon')] = 6\n\n# TEST One-hot-encoding (1a)\nfrom test_helper import Test\n\nTest.assertEqualsHashed(sampleOHEDictManual[(0,'bear')],\n 'b6589fc6ab0dc82cf12099d1c2d40ab994e8410c',\n \"incorrect value for sampleOHEDictManual[(0,'bear')]\")\nTest.assertEqualsHashed(sampleOHEDictManual[(0,'cat')],\n '356a192b7913b04c54574d18c28d46e6395428ab',\n \"incorrect value for sampleOHEDictManual[(0,'cat')]\")\nTest.assertEqualsHashed(sampleOHEDictManual[(0,'mouse')],\n 'da4b9237bacccdf19c0760cab7aec4a8359010b0',\n \"incorrect value for sampleOHEDictManual[(0,'mouse')]\")\nTest.assertEqualsHashed(sampleOHEDictManual[(1,'black')],\n '77de68daecd823babbb58edb1c8e14d7106e83bb',\n \"incorrect value for sampleOHEDictManual[(1,'black')]\")\nTest.assertEqualsHashed(sampleOHEDictManual[(1,'tabby')],\n '1b6453892473a467d07372d45eb05abc2031647a',\n \"incorrect value for sampleOHEDictManual[(1,'tabby')]\")\nTest.assertEqualsHashed(sampleOHEDictManual[(2,'mouse')],\n 'ac3478d69a3c81fa62e60f5c3696165a4e5e6ac4',\n \"incorrect value for sampleOHEDictManual[(2,'mouse')]\")\nTest.assertEqualsHashed(sampleOHEDictManual[(2,'salmon')],\n 'c1dfd96eea8cc2b62785275bca38ac261256e278',\n \"incorrect value for sampleOHEDictManual[(2,'salmon')]\")\nTest.assertEquals(len(sampleOHEDictManual.keys()), 7,\n 'incorrect number of keys in sampleOHEDictManual')", "(1b) Sparse vectors \nData points can typically be represented with a small number of non-zero OHE features relative to the total number of features that occur in the dataset. By leveraging this sparsity and using sparse vector representations of OHE data, we can reduce storage and computational burdens. Below are a few sample vectors represented as dense numpy arrays. Use SparseVector to represent them in a sparse fashion, and verify that both the sparse and dense representations yield the same results when computing dot products (we will later use MLlib to train classifiers via gradient descent, and MLlib will need to compute dot products between SparseVectors and dense parameter vectors).\nUse SparseVector(size, *args) to create a new sparse vector where size is the length of the vector and args is either a dictionary, a list of (index, value) pairs, or two separate arrays of indices and values (sorted by index). 
You'll need to create a sparse vector representation of each dense vector aDense and bDense.", "import numpy as np\nfrom pyspark.mllib.linalg import SparseVector\n\n# TODO: Replace <FILL IN> with appropriate code\naDense = np.array([0., 3., 0., 4.])\naSparse = SparseVector(4,[(1,3),(3,4)])\n\nbDense = np.array([0., 0., 0., 1.])\nbSparse = SparseVector(4,[(3,1)])\n\nw = np.array([0.4, 3.1, -1.4, -.5])\nprint aDense.dot(w)\nprint aSparse.dot(w)\nprint bDense.dot(w)\nprint bSparse.dot(w)\n\n# TEST Sparse Vectors (1b)\nTest.assertTrue(isinstance(aSparse, SparseVector), 'aSparse needs to be an instance of SparseVector')\nTest.assertTrue(isinstance(bSparse, SparseVector), 'aSparse needs to be an instance of SparseVector')\nTest.assertTrue(aDense.dot(w) == aSparse.dot(w),\n 'dot product of aDense and w should equal dot product of aSparse and w')\nTest.assertTrue(bDense.dot(w) == bSparse.dot(w),\n 'dot product of bDense and w should equal dot product of bSparse and w')", "(1c) OHE features as sparse vectors \nNow let's see how we can represent the OHE features for points in our sample dataset. Using the mapping defined by the OHE dictionary from Part (1a), manually define OHE features for the three sample data points using SparseVector format. Any feature that occurs in a point should have the value 1.0. For example, the DenseVector for a point with features 2 and 4 would be [0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0].", "# Reminder of the sample features\n# sampleOne = [(0, 'mouse'), (1, 'black')]\n# sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]\n# sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]\n\n# TODO: Replace <FILL IN> with appropriate code\nsampleOneOHEFeatManual = SparseVector(7,[(2,1),(3,1)])\nsampleTwoOHEFeatManual = SparseVector(7,[(1,1),(4,1),(5,1)])\nsampleThreeOHEFeatManual = SparseVector(7,[(0,1),(3,1),(6,1)])\n\n# TEST OHE Features as sparse vectors (1c)\nTest.assertTrue(isinstance(sampleOneOHEFeatManual, SparseVector),\n 'sampleOneOHEFeatManual needs to be a SparseVector')\nTest.assertTrue(isinstance(sampleTwoOHEFeatManual, SparseVector),\n 'sampleTwoOHEFeatManual needs to be a SparseVector')\nTest.assertTrue(isinstance(sampleThreeOHEFeatManual, SparseVector),\n 'sampleThreeOHEFeatManual needs to be a SparseVector')\nTest.assertEqualsHashed(sampleOneOHEFeatManual,\n 'ecc00223d141b7bd0913d52377cee2cf5783abd6',\n 'incorrect value for sampleOneOHEFeatManual')\nTest.assertEqualsHashed(sampleTwoOHEFeatManual,\n '26b023f4109e3b8ab32241938e2e9b9e9d62720a',\n 'incorrect value for sampleTwoOHEFeatManual')\nTest.assertEqualsHashed(sampleThreeOHEFeatManual,\n 'c04134fd603ae115395b29dcabe9d0c66fbdc8a7',\n 'incorrect value for sampleThreeOHEFeatManual')", "(1d) Define a OHE function \nNext we will use the OHE dictionary from Part (1a) to programatically generate OHE features from the original categorical data. First write a function called oneHotEncoding that creates OHE feature vectors in SparseVector format. Then use this function to create OHE features for the first sample data point and verify that the result matches the result from Part (1c).", "print sampleOHEDictManual\n\n# TODO: Replace <FILL IN> with appropriate code\ndef oneHotEncoding(rawFeats, OHEDict, numOHEFeats):\n \"\"\"Produce a one-hot-encoding from a list of features and an OHE dictionary.\n\n Note:\n You should ensure that the indices used to create a SparseVector are sorted.\n\n Args:\n rawFeats (list of (int, str)): The features corresponding to a single observation. 
Each\n feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne)\n OHEDict (dict): A mapping of (featureID, value) to unique integer.\n numOHEFeats (int): The total number of unique OHE features (combinations of featureID and\n value).\n\n Returns:\n SparseVector: A SparseVector of length numOHEFeats with indicies equal to the unique\n identifiers for the (featureID, value) combinations that occur in the observation and\n with values equal to 1.0.\n \"\"\"\n sparsevect = []\n for feat in rawFeats:\n sparsevect.append((OHEDict[feat],1))\n return SparseVector(numOHEFeats,sparsevect)\n \n \n\n# Calculate the number of features in sampleOHEDictManual\nnumSampleOHEFeats = len(sampleOHEDictManual.keys())\n\n# Run oneHotEnoding on sampleOne\nsampleOneOHEFeat = oneHotEncoding(sampleOne, sampleOHEDictManual, numSampleOHEFeats)\n\nprint sampleOneOHEFeat\n\n# TEST Define an OHE Function (1d)\nTest.assertTrue(sampleOneOHEFeat == sampleOneOHEFeatManual,\n 'sampleOneOHEFeat should equal sampleOneOHEFeatManual')\nTest.assertEquals(sampleOneOHEFeat, SparseVector(7, [2,3], [1.0,1.0]),\n 'incorrect value for sampleOneOHEFeat')\nTest.assertEquals(oneHotEncoding([(1, 'black'), (0, 'mouse')], sampleOHEDictManual,\n numSampleOHEFeats), SparseVector(7, [2,3], [1.0,1.0]),\n 'incorrect definition for oneHotEncoding')", "(1e) Apply OHE to a dataset \nFinally, use the function from Part (1d) to create OHE features for all 3 data points in the sample dataset.", "# TODO: Replace <FILL IN> with appropriate code\nsampleOHEData = sampleDataRDD.map(lambda point: oneHotEncoding(point,sampleOHEDictManual, numSampleOHEFeats))\nprint sampleOHEData.collect()\n\n# TEST Apply OHE to a dataset (1e)\nsampleOHEDataValues = sampleOHEData.collect()\nTest.assertTrue(len(sampleOHEDataValues) == 3, 'sampleOHEData should have three elements')\nTest.assertEquals(sampleOHEDataValues[0], SparseVector(7, {2: 1.0, 3: 1.0}),\n 'incorrect OHE for first sample')\nTest.assertEquals(sampleOHEDataValues[1], SparseVector(7, {1: 1.0, 4: 1.0, 5: 1.0}),\n 'incorrect OHE for second sample')\nTest.assertEquals(sampleOHEDataValues[2], SparseVector(7, {0: 1.0, 3: 1.0, 6: 1.0}),\n 'incorrect OHE for third sample')", "Part 2: Construct an OHE dictionary \n(2a) Pair RDD of (featureID, category) \nTo start, create an RDD of distinct (featureID, category) tuples. In our sample dataset, the 7 items in the resulting RDD are (0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'), (1, 'tabby'), (2, 'mouse'), (2, 'salmon'). Notably 'black' appears twice in the dataset but only contributes one item to the RDD: (1, 'black'), while 'mouse' also appears twice and contributes two items: (0, 'mouse') and (2, 'mouse'). Use flatMap and distinct.", "print sampleDataRDD.flatMap(lambda x: x).distinct().collect()\n\n# TODO: Replace <FILL IN> with appropriate code\nsampleDistinctFeats = sampleDataRDD.flatMap(lambda x: x).distinct()\n\n# TEST Pair RDD of (featureID, category) (2a)\nTest.assertEquals(sorted(sampleDistinctFeats.collect()),\n [(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),\n (1, 'tabby'), (2, 'mouse'), (2, 'salmon')],\n 'incorrect value for sampleDistinctFeats')", "(2b) OHE Dictionary from distinct features \nNext, create an RDD of key-value tuples, where each (featureID, category) tuple in sampleDistinctFeats is a key and the values are distinct integers ranging from 0 to (number of keys - 1). Then convert this RDD into a dictionary, which can be done using the collectAsMap action. 
Note that there is no unique mapping from keys to values, as all we require is that each (featureID, category) key be mapped to a unique integer between 0 and the number of keys. In this exercise, any valid mapping is acceptable. Use zipWithIndex followed by collectAsMap.\nIn our sample dataset, one valid list of key-value tuples is: [((0, 'bear'), 0), ((2, 'salmon'), 1), ((1, 'tabby'), 2), ((2, 'mouse'), 3), ((0, 'mouse'), 4), ((0, 'cat'), 5), ((1, 'black'), 6)]. The dictionary defined in Part (1a) illustrates another valid mapping between keys and integers.", "# TODO: Replace <FILL IN> with appropriate code\nsampleOHEDict = (sampleDistinctFeats\n .zipWithIndex().collectAsMap())\nprint sampleOHEDict\n\n# TEST OHE Dictionary from distinct features (2b)\nTest.assertEquals(sorted(sampleOHEDict.keys()),\n [(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),\n (1, 'tabby'), (2, 'mouse'), (2, 'salmon')],\n 'sampleOHEDict has unexpected keys')\nTest.assertEquals(sorted(sampleOHEDict.values()), range(7), 'sampleOHEDict has unexpected values')", "(2c) Automated creation of an OHE dictionary \nNow use the code from Parts (2a) and (2b) to write a function that takes an input dataset and outputs an OHE dictionary. Then use this function to create an OHE dictionary for the sample dataset, and verify that it matches the dictionary from Part (2b).", "# TODO: Replace <FILL IN> with appropriate code\ndef createOneHotDict(inputData):\n \"\"\"Creates a one-hot-encoder dictionary based on the input data.\n\n Args:\n inputData (RDD of lists of (int, str)): An RDD of observations where each observation is\n made up of a list of (featureID, value) tuples.\n\n Returns:\n dict: A dictionary where the keys are (featureID, value) tuples and map to values that are\n unique integers.\n \"\"\"\n return inputData.flatMap(lambda x: x).distinct().zipWithIndex().collectAsMap()\n\nsampleOHEDictAuto = createOneHotDict(sampleDataRDD)\nprint sampleOHEDictAuto\n\n# TEST Automated creation of an OHE dictionary (2c)\nTest.assertEquals(sorted(sampleOHEDictAuto.keys()),\n [(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),\n (1, 'tabby'), (2, 'mouse'), (2, 'salmon')],\n 'sampleOHEDictAuto has unexpected keys')\nTest.assertEquals(sorted(sampleOHEDictAuto.values()), range(7),\n 'sampleOHEDictAuto has unexpected values')", "Part 3: Parse CTR data and generate OHE features\nBefore we can proceed, you'll first need to obtain the data from Criteo. If you have already completed this step in the setup lab, just run the cells below and the data will be loaded into the rawData variable.\nBelow is Criteo's data sharing agreement. After you accept the agreement, you can obtain the download URL by right-clicking on the \"Download Sample\" button and clicking \"Copy link address\" or \"Copy Link Location\", depending on your browser. Paste the URL into the # TODO cell below. The file is 8.4 MB compressed. The script below will download the file to the virtual machine (VM) and then extract the data.\nIf running the cell below does not render a webpage, open the Criteo agreement in a separate browser tab. After you accept the agreement, you can obtain the download URL by right-clicking on the \"Download Sample\" button and clicking \"Copy link address\" or \"Copy Link Location\", depending on your browser. 
Paste the URL into the # TODO cell below.\nNote that the download could take a few minutes, depending upon your connection speed.", "# Run this code to view Criteo's agreement\nfrom IPython.lib.display import IFrame\n\nIFrame(\"http://labs.criteo.com/downloads/2014-kaggle-display-advertising-challenge-dataset/\",\n 600, 350)\n\n# TODO: Replace <FILL IN> with appropriate code\n# Just replace <FILL IN> with the url for dac_sample.tar.gz\nimport glob\nimport os.path\nimport tarfile\nimport urllib\nimport urlparse\n\n# Paste url, url should end with: dac_sample.tar.gz\nurl = 'http://labs.criteo.com/wp-content/uploads/2015/04/dac_sample.tar.gz'\n\nurl = url.strip()\nbaseDir = os.path.join('data')\ninputPath = os.path.join('cs190', 'dac_sample.txt')\nfileName = os.path.join(baseDir, inputPath)\ninputDir = os.path.split(fileName)[0]\n\ndef extractTar(check = False):\n # Find the zipped archive and extract the dataset\n tars = glob.glob('dac_sample*.tar.gz*')\n if check and len(tars) == 0:\n return False\n\n if len(tars) > 0:\n try:\n tarFile = tarfile.open(tars[0])\n except tarfile.ReadError:\n if not check:\n print 'Unable to open tar.gz file. Check your URL.'\n return False\n\n tarFile.extract('dac_sample.txt', path=inputDir)\n print 'Successfully extracted: dac_sample.txt'\n return True\n else:\n print 'You need to retry the download with the correct url.'\n print ('Alternatively, you can upload the dac_sample.tar.gz file to your Jupyter root ' +\n 'directory')\n return False\n\n\nif os.path.isfile(fileName):\n print 'File is already available. Nothing to do.'\nelif extractTar(check = True):\n print 'tar.gz file was already available.'\nelif not url.endswith('dac_sample.tar.gz'):\n print 'Check your download url. Are you downloading the Sample dataset?'\nelse:\n # Download the file and store it in the same directory as this notebook\n try:\n urllib.urlretrieve(url, os.path.basename(urlparse.urlsplit(url).path))\n except IOError:\n print 'Unable to download and store: {0}'.format(url)\n\n extractTar()\n\nimport os.path\nbaseDir = os.path.join('data')\ninputPath = os.path.join('cs190', 'dac_sample.txt')\nfileName = os.path.join(baseDir, inputPath)\n\nif os.path.isfile(fileName):\n rawData = (sc\n .textFile(fileName, 2)\n .map(lambda x: x.replace('\\t', ','))) # work with either ',' or '\\t' separated data\n print rawData.take(1)", "(3a) Loading and splitting the data \nWe are now ready to start working with the actual CTR data, and our first task involves splitting it into training, validation, and test sets. Use the randomSplit method with the specified weights and seed to create RDDs storing each of these datasets, and then cache each of these RDDs, as we will be accessing them multiple times in the remainder of this lab. 
Finally, compute the size of each dataset.", "# TODO: Replace <FILL IN> with appropriate code\nweights = [.8, .1, .1]\nseed = 42\n# Use randomSplit with weights and seed\nrawTrainData, rawValidationData, rawTestData = rawData.randomSplit(weights, seed)\n# Cache the data\nrawTrainData.cache()\nrawValidationData.cache()\nrawTestData.cache()\n\nnTrain = rawTrainData.count()\nnVal = rawValidationData.count()\nnTest = rawTestData.count()\nprint nTrain, nVal, nTest, nTrain + nVal + nTest\nprint rawData.take(1)\n\n# TEST Loading and splitting the data (3a)\nTest.assertTrue(all([rawTrainData.is_cached, rawValidationData.is_cached, rawTestData.is_cached]),\n 'you must cache the split data')\nTest.assertEquals(nTrain, 79911, 'incorrect value for nTrain')\nTest.assertEquals(nVal, 10075, 'incorrect value for nVal')\nTest.assertEquals(nTest, 10014, 'incorrect value for nTest')", "(3b) Extract features \nWe will now parse the raw training data to create an RDD that we can subsequently use to create an OHE dictionary. Note from the take() command in Part (3a) that each raw data point is a string containing several fields separated by some delimiter. For now, we will ignore the first field (which is the 0-1 label), and parse the remaining fields (or raw features). To do this, complete the implemention of the parsePoint function.", "point = '0,1,1,5,0,1382,4,15,2,181,1,2,,2,68fd1e64,80e26c9b,fb936136,7b4723c4,25c83c98,7e0ccccf,de7995b8,1f89b562,a73ee510,a8cd5504,b2cb9c98,37c9c164,2824a5f6,1adce6ef,8ba8b39a,891b62e7,e5ba7672,f54016b9,21ddcdc9,b1252a9d,07b5194c,,3a171ecb,c5c50484,e8b83407,9727dd16'\nprint parsePoint(point)\n\n# TODO: Replace <FILL IN> with appropriate code\ndef parsePoint(point):\n \"\"\"Converts a comma separated string into a list of (featureID, value) tuples.\n\n Note:\n featureIDs should start at 0 and increase to the number of features - 1.\n\n Args:\n point (str): A comma separated string where the first value is the label and the rest\n are features.\n\n Returns:\n list: A list of (featureID, value) tuples.\n \"\"\"\n feats = point.split(',')\n featlist = []\n for i,feat in enumerate(feats):\n if i==0:\n continue\n else:\n featlist.append((i-1,feat))\n \n return featlist\n \n\nparsedTrainFeat = rawTrainData.map(parsePoint)\n\nnumCategories = (parsedTrainFeat\n .flatMap(lambda x: x)\n .distinct()\n .map(lambda x: (x[0], 1))\n .reduceByKey(lambda x, y: x + y)\n .sortByKey()\n .collect())\n\nprint numCategories[2][1]\n\n# TEST Extract features (3b)\nTest.assertEquals(numCategories[2][1], 855, 'incorrect implementation of parsePoint')\nTest.assertEquals(numCategories[32][1], 4, 'incorrect implementation of parsePoint')", "(3c) Create an OHE dictionary from the dataset \nNote that parsePoint returns a data point as a list of (featureID, category) tuples, which is the same format as the sample dataset studied in Parts 1 and 2 of this lab. Using this observation, create an OHE dictionary using the function implemented in Part (2c). 
Note that we will assume for simplicity that all features in our CTR dataset are categorical.", "# TODO: Replace <FILL IN> with appropriate code\nctrOHEDict = createOneHotDict(parsedTrainFeat)\nnumCtrOHEFeats = len(ctrOHEDict.keys())\nprint numCtrOHEFeats\nprint ctrOHEDict[(0, '')]\n\n# TEST Create an OHE dictionary from the dataset (3c)\nTest.assertEquals(numCtrOHEFeats, 233286, 'incorrect number of features in ctrOHEDict')\nTest.assertTrue((0, '') in ctrOHEDict, 'incorrect features in ctrOHEDict')", "(3d) Apply OHE to the dataset \nNow let's use this OHE dictionary by starting with the raw training data and creating an RDD of LabeledPoint objects using OHE features. To do this, complete the implementation of the parseOHEPoint function. Hint: parseOHEPoint is an extension of the parsePoint function from Part (3b) and it uses the oneHotEncoding function from Part (1d).", "from pyspark.mllib.regression import LabeledPoint\n\n# TODO: Replace <FILL IN> with appropriate code\ndef parseOHEPoint(point, OHEDict, numOHEFeats):\n \"\"\"Obtain the label and feature vector for this raw observation.\n\n Note:\n You must use the function `oneHotEncoding` in this implementation or later portions\n of this lab may not function as expected.\n\n Args:\n point (str): A comma separated string where the first value is the label and the rest\n are features.\n OHEDict (dict of (int, str) to int): Mapping of (featureID, value) to unique integer.\n numOHEFeats (int): The number of unique features in the training dataset.\n\n Returns:\n LabeledPoint: Contains the label for the observation and the one-hot-encoding of the\n raw features based on the provided OHE dictionary.\n \"\"\"\n\n feats = point.split(',')\n featlist = []\n for i,feat in enumerate(feats):\n if i==0:\n label=feat\n \n else:\n featlist.append((i-1,feat))\n \n featSparseVector = oneHotEncoding(featlist, OHEDict, numOHEFeats)\n \n \n return LabeledPoint(label, featSparseVector)\n\n\nOHETrainData = rawTrainData.map(lambda point: parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats))\nOHETrainData.cache()\nprint OHETrainData.take(1)\n\n# Check that oneHotEncoding function was used in parseOHEPoint\nbackupOneHot = oneHotEncoding\noneHotEncoding = None\nwithOneHot = False\ntry: parseOHEPoint(rawTrainData.take(1)[0], ctrOHEDict, numCtrOHEFeats)\nexcept TypeError: withOneHot = True\noneHotEncoding = backupOneHot\n\n# TEST Apply OHE to the dataset (3d)\nnumNZ = sum(parsedTrainFeat.map(lambda x: len(x)).take(5))\nnumNZAlt = sum(OHETrainData.map(lambda lp: len(lp.features.indices)).take(5))\nTest.assertEquals(numNZ, numNZAlt, 'incorrect implementation of parseOHEPoint')\nTest.assertTrue(withOneHot, 'oneHotEncoding not present in parseOHEPoint')", "Visualization 1: Feature frequency \nWe will now visualize the number of times each of the 233,286 OHE features appears in the training data. We first compute the number of times each feature appears, then bucket the features by these counts. The buckets are sized by powers of 2, so the first bucket corresponds to features that appear exactly once ( $ \\scriptsize 2^0 $ ), the second to features that appear twice ( $ \\scriptsize 2^1 $ ), the third to features that occur between three and four ( $ \\scriptsize 2^2 $ ) times, the fifth bucket is five to eight ( $ \\scriptsize 2^3 $ ) times and so on. 
The scatter plot below shows the logarithm of the bucket thresholds versus the logarithm of the number of features that have counts that fall in the buckets.", "def bucketFeatByCount(featCount):\n \"\"\"Bucket the counts by powers of two.\"\"\"\n for i in range(11):\n size = 2 ** i\n if featCount <= size:\n return size\n return -1\n\nfeatCounts = (OHETrainData\n .flatMap(lambda lp: lp.features.indices)\n .map(lambda x: (x, 1))\n .reduceByKey(lambda x, y: x + y))\nfeatCountsBuckets = (featCounts\n .map(lambda x: (bucketFeatByCount(x[1]), 1))\n .filter(lambda (k, v): k != -1)\n .reduceByKey(lambda x, y: x + y)\n .collect())\nprint featCountsBuckets\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\nx, y = zip(*featCountsBuckets)\nx, y = np.log(x), np.log(y)\n\ndef preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999',\n gridWidth=1.0):\n \"\"\"Template for generating the plot layout.\"\"\"\n plt.close()\n fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white')\n ax.axes.tick_params(labelcolor='#999999', labelsize='10')\n for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]:\n axis.set_ticks_position('none')\n axis.set_ticks(ticks)\n axis.label.set_color('#999999')\n if hideLabels: axis.set_ticklabels([])\n plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-')\n map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right'])\n return fig, ax\n\n# generate layout and plot data\nfig, ax = preparePlot(np.arange(0, 10, 1), np.arange(4, 14, 2))\nax.set_xlabel(r'$\\log_e(bucketSize)$'), ax.set_ylabel(r'$\\log_e(countInBucket)$')\nplt.scatter(x, y, s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75)\npass", "(3e) Handling unseen features \nWe naturally would like to repeat the process from Part (3d), e.g., to compute OHE features for the validation and test datasets. However, we must be careful, as some categorical values will likely appear in new data that did not exist in the training data. To deal with this situation, update the oneHotEncoding() function from Part (1d) to ignore previously unseen categories, and then compute OHE features for the validation data.", "# TODO: Replace <FILL IN> with appropriate code\ndef oneHotEncoding(rawFeats, OHEDict, numOHEFeats):\n \"\"\"Produce a one-hot-encoding from a list of features and an OHE dictionary.\n\n Note:\n If a (featureID, value) tuple doesn't have a corresponding key in OHEDict it should be\n ignored.\n\n Args:\n rawFeats (list of (int, str)): The features corresponding to a single observation. Each\n feature consists of a tuple of featureID and the feature's value. (e.g. 
sampleOne)\n OHEDict (dict): A mapping of (featureID, value) to unique integer.\n numOHEFeats (int): The total number of unique OHE features (combinations of featureID and\n value).\n\n Returns:\n SparseVector: A SparseVector of length numOHEFeats with indicies equal to the unique\n identifiers for the (featureID, value) combinations that occur in the observation and\n with values equal to 1.0.\n \"\"\"\n sparsevect = []\n for feat in rawFeats:\n try:\n sparsevect.append((OHEDict[feat],1))\n except KeyError:\n continue\n \n return SparseVector(numOHEFeats,sparsevect)\n \n\n\nOHEValidationData = rawValidationData.map(lambda point: parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats))\nOHEValidationData.cache()\nprint OHEValidationData.take(1)\n\n# TEST Handling unseen features (3e)\nnumNZVal = (OHEValidationData\n .map(lambda lp: len(lp.features.indices))\n .sum())\nTest.assertEquals(numNZVal, 372080, 'incorrect number of features')", "Part 4: CTR prediction and logloss evaluation \n (4a) Logistic regression \nWe are now ready to train our first CTR classifier. A natural classifier to use in this setting is logistic regression, since it models the probability of a click-through event rather than returning a binary response, and when working with rare events, probabilistic predictions are useful. First use LogisticRegressionWithSGD to train a model using OHETrainData with the given hyperparameter configuration. LogisticRegressionWithSGD returns a LogisticRegressionModel. Next, use the LogisticRegressionModel.weights and LogisticRegressionModel.intercept attributes to print out the model's parameters. Note that these are the names of the object's attributes and should be called using a syntax like model.weights for a given model.", "from pyspark.mllib.classification import LogisticRegressionWithSGD\n\n# fixed hyperparameters\nnumIters = 50\nstepSize = 10.\nregParam = 1e-6\nregType = 'l2'\nincludeIntercept = True\n\n# TODO: Replace <FILL IN> with appropriate code\nmodel0 = LogisticRegressionWithSGD.train(OHETrainData, iterations=numIters, \n step=stepSize, regParam=regParam, \n regType=regType, intercept=includeIntercept)\nsortedWeights = sorted(model0.weights)\nprint sortedWeights[:5], model0.intercept\n\n# TEST Logistic regression (4a)\nTest.assertTrue(np.allclose(model0.intercept, 0.56455084025), 'incorrect value for model0.intercept')\nTest.assertTrue(np.allclose(sortedWeights[0:5],\n [-0.45899236853575609, -0.37973707648623956, -0.36996558266753304,\n -0.36934962879928263, -0.32697945415010637]), 'incorrect value for model0.weights')", "(4b) Log loss \nThroughout this lab, we will use log loss to evaluate the quality of models. Log loss is defined as: $$ \\begin{align} \\scriptsize \\ell_{log}(p, y) = \\begin{cases} -\\log (p) & \\text{if } y = 1 \\\\ -\\log(1-p) & \\text{if } y = 0 \\end{cases} \\end{align} $$ where $ \\scriptsize p$ is a probability between 0 and 1 and $ \\scriptsize y$ is a label of either 0 or 1. Log loss is a standard evaluation criterion when predicting rare-events such as click-through rate prediction (it is also the criterion used in the Criteo Kaggle competition). 
Write a function to compute log loss, and evaluate it on some sample inputs.", "# TODO: Replace <FILL IN> with appropriate code\nfrom math import log\n\ndef computeLogLoss(p, y):\n \"\"\"Calculates the value of log loss for a given probabilty and label.\n\n Note:\n log(0) is undefined, so when p is 0 we need to add a small value (epsilon) to it\n and when p is 1 we need to subtract a small value (epsilon) from it.\n\n Args:\n p (float): A probabilty between 0 and 1.\n y (int): A label. Takes on the values 0 and 1.\n\n Returns:\n float: The log loss value.\n \"\"\"\n epsilon = 10e-12\n if p==0:\n p = p+epsilon\n elif p==1:\n p = p-epsilon\n \n if y==1:\n return -1.0*log(p)\n elif y==0:\n return -1.0*log(1-p)\n else:\n return None\n\nprint computeLogLoss(.5, 1)\nprint computeLogLoss(.5, 0)\nprint computeLogLoss(.99, 1)\nprint computeLogLoss(.99, 0)\nprint computeLogLoss(.01, 1)\nprint computeLogLoss(.01, 0)\nprint computeLogLoss(0, 1)\nprint computeLogLoss(1, 1)\nprint computeLogLoss(1, 0)\n\n# TEST Log loss (4b)\nTest.assertTrue(np.allclose([computeLogLoss(.5, 1), computeLogLoss(.01, 0), computeLogLoss(.01, 1)],\n [0.69314718056, 0.0100503358535, 4.60517018599]),\n 'computeLogLoss is not correct')\nTest.assertTrue(np.allclose([computeLogLoss(0, 1), computeLogLoss(1, 1), computeLogLoss(1, 0)],\n [25.3284360229, 1.00000008275e-11, 25.3284360229]),\n 'computeLogLoss needs to bound p away from 0 and 1 by epsilon')", "(4c) Baseline log loss \nNext we will use the function we wrote in Part (4b) to compute the baseline log loss on the training data. A very simple yet natural baseline model is one where we always make the same prediction independent of the given datapoint, setting the predicted value equal to the fraction of training points that correspond to click-through events (i.e., where the label is one). Compute this value (which is simply the mean of the training labels), and then use it to compute the training log loss for the baseline model. The log loss for multiple observations is the mean of the individual log loss values.", "\n# TODO: Replace <FILL IN> with appropriate code\n# Note that our dataset has a very high click-through rate by design\n# In practice click-through rate can be one to two orders of magnitude lower\nclassOneFracTrain = 1.0*OHETrainData.filter(lambda point: point.label==1).count()/OHETrainData.count()\nprint classOneFracTrain\n\nlogLossTrBase = OHETrainData.map(lambda point: computeLogLoss(classOneFracTrain, point.label)).mean()\nprint 'Baseline Train Logloss = {0:.3f}\\n'.format(logLossTrBase)\n\n# TEST Baseline log loss (4c)\nTest.assertTrue(np.allclose(classOneFracTrain, 0.22717773523), 'incorrect value for classOneFracTrain')\nTest.assertTrue(np.allclose(logLossTrBase, 0.535844), 'incorrect value for logLossTrBase')", "(4d) Predicted probability \nIn order to compute the log loss for the model we trained in Part (4a), we need to write code to generate predictions from this model. Write a function that computes the raw linear prediction from this logistic regression model and then passes it through a sigmoid function $ \\scriptsize \\sigma(t) = (1+ e^{-t})^{-1} $ to return the model's probabilistic prediction. Then compute probabilistic predictions on the training data.\nNote that when incorporating an intercept into our predictions, we simply add the intercept to the value of the prediction obtained from the weights and features. 
Alternatively, if the intercept was included as the first weight, we would need to add a corresponding feature to our data where the feature has the value one. This is not the case here.", "# TODO: Replace <FILL IN> with appropriate code\nfrom math import exp # exp(-t) = e^-t\n\ndef getP(x, w, intercept):\n \"\"\"Calculate the probability for an observation given a set of weights and intercept.\n\n Note:\n We'll bound our raw prediction between 20 and -20 for numerical purposes.\n\n Args:\n x (SparseVector): A vector with values of 1.0 for features that exist in this\n observation and 0.0 otherwise.\n w (DenseVector): A vector of weights (betas) for the model.\n intercept (float): The model's intercept.\n\n Returns:\n float: A probability between 0 and 1.\n \"\"\"\n rawPrediction = x.dot(w)+intercept\n\n # Bound the raw prediction value\n rawPrediction = min(rawPrediction, 20)\n rawPrediction = max(rawPrediction, -20)\n \n return 1.0/(1.0+exp(-1.0*rawPrediction))\n\ntrainingPredictions = OHETrainData.map(lambda point: getP(point.features, model0.weights, model0.intercept))\n\nprint trainingPredictions.take(5)\n\n# TEST Predicted probability (4d)\nTest.assertTrue(np.allclose(trainingPredictions.sum(), 18135.4834348),\n 'incorrect value for trainingPredictions')", "(4e) Evaluate the model \nWe are now ready to evaluate the quality of the model we trained in Part (4a). To do this, first write a general function that takes as input a model and data, and outputs the log loss. Then run this function on the OHE training data, and compare the result with the baseline log loss.", "# TODO: Replace <FILL IN> with appropriate code\ndef evaluateResults(model, data):\n \"\"\"Calculates the log loss for the data given the model.\n\n Args:\n model (LogisticRegressionModel): A trained logistic regression model.\n data (RDD of LabeledPoint): Labels and features for each observation.\n\n Returns:\n float: Log loss for the data.\n \"\"\"\n \n return data.map(lambda point: computeLogLoss(getP(point.features, model.weights, model.intercept), point.label)).mean()\n\nlogLossTrLR0 = evaluateResults(model0, OHETrainData)\nprint ('OHE Features Train Logloss:\\n\\tBaseline = {0:.3f}\\n\\tLogReg = {1:.3f}'\n .format(logLossTrBase, logLossTrLR0))\n\n# TEST Evaluate the model (4e)\nTest.assertTrue(np.allclose(logLossTrLR0, 0.456903), 'incorrect value for logLossTrLR0')", "(4f) Validation log loss \nNext, following the same logic as in Parts (4c) and 4(e), compute the validation log loss for both the baseline and logistic regression models. Notably, the baseline model for the validation data should still be based on the label fraction from the training dataset.", "# TODO: Replace <FILL IN> with appropriate code\nlogLossValBase = OHEValidationData.map(lambda point: computeLogLoss(classOneFracTrain, point.label)).mean()\n\nlogLossValLR0 = evaluateResults(model0, OHEValidationData)\nprint ('OHE Features Validation Logloss:\\n\\tBaseline = {0:.3f}\\n\\tLogReg = {1:.3f}'\n .format(logLossValBase, logLossValLR0))\n\n# TEST Validation log loss (4f)\nTest.assertTrue(np.allclose(logLossValBase, 0.527603), 'incorrect value for logLossValBase')\nTest.assertTrue(np.allclose(logLossValLR0, 0.456957), 'incorrect value for logLossValLR0')", "Visualization 2: ROC curve \nWe will now visualize how well the model predicts our target. To do this we generate a plot of the ROC curve. 
The ROC curve shows us the trade-off between the false positive rate and true positive rate, as we liberalize the threshold required to predict a positive outcome. A random model is represented by the dashed line.", "labelsAndScores = OHEValidationData.map(lambda lp:\n (lp.label, getP(lp.features, model0.weights, model0.intercept)))\nlabelsAndWeights = labelsAndScores.collect()\nlabelsAndWeights.sort(key=lambda (k, v): v, reverse=True)\nlabelsByWeight = np.array([k for (k, v) in labelsAndWeights])\n\nlength = labelsByWeight.size\ntruePositives = labelsByWeight.cumsum()\nnumPositive = truePositives[-1]\nfalsePositives = np.arange(1.0, length + 1, 1.) - truePositives\n\ntruePositiveRate = truePositives / numPositive\nfalsePositiveRate = falsePositives / (length - numPositive)\n\n# Generate layout and plot data\nfig, ax = preparePlot(np.arange(0., 1.1, 0.1), np.arange(0., 1.1, 0.1))\nax.set_xlim(-.05, 1.05), ax.set_ylim(-.05, 1.05)\nax.set_ylabel('True Positive Rate (Sensitivity)')\nax.set_xlabel('False Positive Rate (1 - Specificity)')\nplt.plot(falsePositiveRate, truePositiveRate, color='#8cbfd0', linestyle='-', linewidth=3.)\nplt.plot((0., 1.), (0., 1.), linestyle='--', color='#d6ebf2', linewidth=2.) # Baseline model\npass", "Part 5: Reduce feature dimension via feature hashing\n (5a) Hash function \nAs we just saw, using a one-hot-encoding featurization can yield a model with good statistical accuracy. However, the number of distinct categories across all features is quite large -- recall that we observed 233K categories in the training data in Part (3c). Moreover, the full Kaggle training dataset includes more than 33M distinct categories, and the Kaggle dataset itself is just a small subset of Criteo's labeled data. Hence, featurizing via a one-hot-encoding representation would lead to a very large feature vector. To reduce the dimensionality of the feature space, we will use feature hashing.\nBelow is the hash function that we will use for this part of the lab. We will first use this hash function with the three sample data points from Part (1a) to gain some intuition. Specifically, run code to hash the three sample points using two different values for numBuckets and observe the resulting hashed feature dictionaries.", "from collections import defaultdict\nimport hashlib\n\ndef hashFunction(numBuckets, rawFeats, printMapping=False):\n \"\"\"Calculate a feature dictionary for an observation's features based on hashing.\n\n Note:\n Use printMapping=True for debug purposes and to better understand how the hashing works.\n\n Args:\n numBuckets (int): Number of buckets to use as features.\n rawFeats (list of (int, str)): A list of features for an observation. Represented as\n (featureID, value) tuples.\n printMapping (bool, optional): If true, the mappings of featureString to index will be\n printed.\n\n Returns:\n dict of int to float: The keys will be integers which represent the buckets that the\n features have been hashed to. 
The value for a given key will contain the count of the\n (featureID, value) tuples that have hashed to that key.\n \"\"\"\n mapping = {}\n for ind, category in rawFeats:\n featureString = category + str(ind)\n mapping[featureString] = int(int(hashlib.md5(featureString).hexdigest(), 16) % numBuckets)\n if(printMapping): print mapping\n sparseFeatures = defaultdict(float)\n for bucket in mapping.values():\n sparseFeatures[bucket] += 1.0\n return dict(sparseFeatures)\n\n# Reminder of the sample values:\n# sampleOne = [(0, 'mouse'), (1, 'black')]\n# sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]\n# sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]\n\n# TODO: Replace <FILL IN> with appropriate code\n# Use four buckets\nsampOneFourBuckets = hashFunction(4, sampleOne, True)\nsampTwoFourBuckets = hashFunction(4, sampleTwo, True)\nsampThreeFourBuckets = hashFunction(4, sampleThree, True)\n\n# Use one hundred buckets\nsampOneHundredBuckets = hashFunction(100, sampleOne, True)\nsampTwoHundredBuckets = hashFunction(100, sampleTwo, True)\nsampThreeHundredBuckets = hashFunction(100, sampleThree, True)\n\nprint '\\t\\t 4 Buckets \\t\\t\\t 100 Buckets'\nprint 'SampleOne:\\t {0}\\t\\t {1}'.format(sampOneFourBuckets, sampOneHundredBuckets)\nprint 'SampleTwo:\\t {0}\\t\\t {1}'.format(sampTwoFourBuckets, sampTwoHundredBuckets)\nprint 'SampleThree:\\t {0}\\t {1}'.format(sampThreeFourBuckets, sampThreeHundredBuckets)\n\n# TEST Hash function (5a)\nTest.assertEquals(sampOneFourBuckets, {2: 1.0, 3: 1.0}, 'incorrect value for sampOneFourBuckets')\nTest.assertEquals(sampThreeHundredBuckets, {72: 1.0, 5: 1.0, 14: 1.0},\n 'incorrect value for sampThreeHundredBuckets')", "(5b) Creating hashed features \nNext we will use this hash function to create hashed features for our CTR datasets. First write a function that uses the hash function from Part (5a) with numBuckets = $ \\scriptsize 2^{15} \\approx 33K $ to create a LabeledPoint with hashed features stored as a SparseVector. Then use this function to create new training, validation and test datasets with hashed features. 
Hint: parsedHashPoint is similar to parseOHEPoint from Part (3d).", "print 2**15\n\npoint = rawTrainData.take(1)[0]\nfeats = point.split(',')\nfeatlist= []\nfor i,feat in enumerate(feats):\n #print i, feat\n if i==0:\n label=float(feat)\n else:\n featlist.append((i-1,feat))\nprint label, featlist\nprint hashFunction(2**15, featlist, printMapping=False)\n\n# TODO: Replace <FILL IN> with appropriate code\ndef parseHashPoint(point, numBuckets):\n \"\"\"Create a LabeledPoint for this observation using hashing.\n\n Args:\n point (str): A comma separated string where the first value is the label and the rest are\n features.\n numBuckets: The number of buckets to hash to.\n\n Returns:\n LabeledPoint: A LabeledPoint with a label (0.0 or 1.0) and a SparseVector of hashed\n features.\n \"\"\"\n feats = point.split(',')\n featlist = []\n for i,feat in enumerate(feats):\n if i==0:\n label=float(feat)\n else:\n featlist.append((i-1,feat))\n\n hashSparseVector = SparseVector(numBuckets, hashFunction(numBuckets, featlist, printMapping=False)) \n return LabeledPoint(label, hashSparseVector)\n\nnumBucketsCTR = 2**15\nhashTrainData = rawTrainData.map(lambda point: parseHashPoint(point, numBucketsCTR))\nhashTrainData.cache()\nhashValidationData = rawValidationData.map(lambda point: parseHashPoint(point, numBucketsCTR))\nhashValidationData.cache()\nhashTestData = rawTestData.map(lambda point: parseHashPoint(point, numBucketsCTR))\nhashTestData.cache()\n\nprint hashTrainData.take(1)\n\n# TEST Creating hashed features (5b)\nhashTrainDataFeatureSum = sum(hashTrainData\n .map(lambda lp: len(lp.features.indices))\n .take(20))\nhashTrainDataLabelSum = sum(hashTrainData\n .map(lambda lp: lp.label)\n .take(100))\nhashValidationDataFeatureSum = sum(hashValidationData\n .map(lambda lp: len(lp.features.indices))\n .take(20))\nhashValidationDataLabelSum = sum(hashValidationData\n .map(lambda lp: lp.label)\n .take(100))\nhashTestDataFeatureSum = sum(hashTestData\n .map(lambda lp: len(lp.features.indices))\n .take(20))\nhashTestDataLabelSum = sum(hashTestData\n .map(lambda lp: lp.label)\n .take(100))\n\nTest.assertEquals(hashTrainDataFeatureSum, 772, 'incorrect number of features in hashTrainData')\nTest.assertEquals(hashTrainDataLabelSum, 24.0, 'incorrect labels in hashTrainData')\nTest.assertEquals(hashValidationDataFeatureSum, 776,\n 'incorrect number of features in hashValidationData')\nTest.assertEquals(hashValidationDataLabelSum, 16.0, 'incorrect labels in hashValidationData')\nTest.assertEquals(hashTestDataFeatureSum, 774, 'incorrect number of features in hashTestData')\nTest.assertEquals(hashTestDataLabelSum, 23.0, 'incorrect labels in hashTestData')", "(5c) Sparsity \nSince we have 33K hashed features versus 233K OHE features, we should expect OHE features to be sparser. Verify this hypothesis by computing the average sparsity of the OHE and the hashed training datasets.\nNote that if you have a SparseVector named sparse, calling len(sparse) returns the total number of features, not the number features with entries. SparseVector objects have the attributes indices and values that contain information about which features are nonzero. 
Continuing with our example, these can be accessed using sparse.indices and sparse.values, respectively.", "# TODO: Replace <FILL IN> with appropriate code\ndef computeSparsity(data, d, n):\n \"\"\"Calculates the average sparsity for the features in an RDD of LabeledPoints.\n\n Args:\n data (RDD of LabeledPoint): The LabeledPoints to use in the sparsity calculation.\n d (int): The total number of features.\n n (int): The number of observations in the RDD.\n\n Returns:\n float: The average of the ratio of features in a point to total features.\n \"\"\"\n total = data.map(lambda point: len(point.features.indices)).sum()\n avg_num_feat = 1.0*total/n\n return 1.0*avg_num_feat/d\n\naverageSparsityHash = computeSparsity(hashTrainData, numBucketsCTR, nTrain)\naverageSparsityOHE = computeSparsity(OHETrainData, numCtrOHEFeats, nTrain)\n\nprint 'Average OHE Sparsity: {0:.7e}'.format(averageSparsityOHE)\nprint 'Average Hash Sparsity: {0:.7e}'.format(averageSparsityHash)\n\n# TEST Sparsity (5c)\nTest.assertTrue(np.allclose(averageSparsityOHE, 1.6717677e-04),\n 'incorrect value for averageSparsityOHE')\nTest.assertTrue(np.allclose(averageSparsityHash, 1.1805561e-03),\n 'incorrect value for averageSparsityHash')", "(5d) Logistic model with hashed features \nNow let's train a logistic regression model using the hashed features. Run a grid search to find suitable hyperparameters for the hashed features, evaluating via log loss on the validation data. Note: This may take a few minutes to run. Use 1 and 10 for stepSizes and 1e-6 and 1e-3 for regParams.", "numIters = 500\nregType = 'l2'\nincludeIntercept = True\n\n# Initialize variables using values from initial model training\nbestModel = None\nbestLogLoss = 1e10\n\n# TODO: Replace <FILL IN> with appropriate code\nstepSizes = [1.0,10.0]\nregParams = [1.0e-6,1.0e-3]\nfor stepSize in stepSizes:\n for regParam in regParams:\n model = (LogisticRegressionWithSGD\n .train(hashTrainData, numIters, stepSize, regParam=regParam, regType=regType,\n intercept=includeIntercept))\n logLossVa = evaluateResults(model, hashValidationData)\n print ('\\tstepSize = {0:.1f}, regParam = {1:.0e}: logloss = {2:.3f}'\n .format(stepSize, regParam, logLossVa))\n if (logLossVa < bestLogLoss):\n bestModel = model\n bestLogLoss = logLossVa\n\nprint ('Hashed Features Validation Logloss:\\n\\tBaseline = {0:.6f}\\n\\tLogReg = {1:.6f}'\n .format(logLossValBase, bestLogLoss))\n\n# TEST Logistic model with hashed features (5d)\nTest.assertTrue(np.allclose(bestLogLoss, 0.4481683608), 'incorrect value for bestLogLoss')", "Visualization 3: Hyperparameter heat map\nWe will now perform a visualization of an extensive hyperparameter search. Specifically, we will create a heat map where the brighter colors correspond to lower values of logLoss.\nThe search was run using six step sizes and six values for regularization, which required the training of thirty-six separate models. We have included the results below, but omitted the actual search to save time.", "from matplotlib.colors import LinearSegmentedColormap\n\n# Saved parameters and results. 
Eliminate the time required to run 36 models\nstepSizes = [3, 6, 9, 12, 15, 18]\nregParams = [1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2]\nlogLoss = np.array([[ 0.45808431, 0.45808493, 0.45809113, 0.45815333, 0.45879221, 0.46556321],\n [ 0.45188196, 0.45188306, 0.4518941, 0.4520051, 0.45316284, 0.46396068],\n [ 0.44886478, 0.44886613, 0.44887974, 0.44902096, 0.4505614, 0.46371153],\n [ 0.44706645, 0.4470698, 0.44708102, 0.44724251, 0.44905525, 0.46366507],\n [ 0.44588848, 0.44589365, 0.44590568, 0.44606631, 0.44807106, 0.46365589],\n [ 0.44508948, 0.44509474, 0.44510274, 0.44525007, 0.44738317, 0.46365405]])\n\nnumRows, numCols = len(stepSizes), len(regParams)\nlogLoss = np.array(logLoss)\nlogLoss.shape = (numRows, numCols)\n\nfig, ax = preparePlot(np.arange(0, numCols, 1), np.arange(0, numRows, 1), figsize=(8, 7),\n hideLabels=True, gridWidth=0.)\nax.set_xticklabels(regParams), ax.set_yticklabels(stepSizes)\nax.set_xlabel('Regularization Parameter'), ax.set_ylabel('Step Size')\n\ncolors = LinearSegmentedColormap.from_list('blue', ['#0022ff', '#000055'], gamma=.2)\nimage = plt.imshow(logLoss,interpolation='nearest', aspect='auto',\n cmap = colors)\npass", "(5e) Evaluate on the test set \nFinally, evaluate the best model from Part (5d) on the test set. Compare the resulting log loss with the baseline log loss on the test set, which can be computed in the same way that the validation log loss was computed in Part (4f).", "# TODO: Replace <FILL IN> with appropriate code\n# Log loss for the best model from (5d)\n\n# fixed hyperparameters\nnumIters = 50\nstepSize = 10.\nregParam = 1e-6\nregType = 'l2'\nincludeIntercept = True\n\nmodel_5dbest = LogisticRegressionWithSGD.train(hashTrainData, iterations=numIters, \n step=stepSize, regParam=regParam, \n regType=regType, intercept=includeIntercept)\nlogLossTest = evaluateResults(model_5dbest, hashTestData)\n\n# Log loss for the baseline model\nclassOneFracTrain = 1.0*hashTrainData.filter(lambda point: point.label==1).count()/hashTrainData.count()\nlogLossTestBaseline = hashTestData.map(lambda point: computeLogLoss(classOneFracTrain, point.label)).mean()\n\nprint ('Hashed Features Test Log Loss:\\n\\tBaseline = {0:.6f}\\n\\tLogReg = {1:.6f}'\n .format(logLossTestBaseline, logLossTest))\n\n# TEST Evaluate on the test set (5e)\nTest.assertTrue(np.allclose(logLossTestBaseline, 0.537438),\n 'incorrect value for logLossTestBaseline')\nTest.assertTrue(np.allclose(logLossTest, 0.455616931), 'incorrect value for logLossTest')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
RoebideBruijn/datascience-intensive-course
exercises/data_wrangling_json/sliderule_dsi_json_exercise.ipynb
mit
[ "JSON examples and exercise\n\n\nget familiar with packages for dealing with JSON\nstudy examples with JSON strings and files \nwork on exercise to be completed and submitted \n\n\n\nreference: http://pandas.pydata.org/pandas-docs/stable/io.html#io-json-reader\ndata source: http://jsonstudio.com/resources/", "import pandas as pd\nimport numpy as np", "imports for Python, Pandas", "import json\nfrom pandas.io.json import json_normalize", "JSON example, with string\n\ndemonstrates creation of normalized dataframes (tables) from nested json string\nsource: http://pandas.pydata.org/pandas-docs/stable/io.html#normalization", "# define json string\ndata = [{'state': 'Florida', \n 'shortname': 'FL',\n 'info': {'governor': 'Rick Scott'},\n 'counties': [{'name': 'Dade', 'population': 12345},\n {'name': 'Broward', 'population': 40000},\n {'name': 'Palm Beach', 'population': 60000}]},\n {'state': 'Ohio',\n 'shortname': 'OH',\n 'info': {'governor': 'John Kasich'},\n 'counties': [{'name': 'Summit', 'population': 1234},\n {'name': 'Cuyahoga', 'population': 1337}]}]\n\n# use normalization to create tables from nested element\njson_normalize(data, 'counties')\n\n# further populate tables created from nested element\njson_normalize(data, 'counties', ['state', 'shortname', ['info', 'governor']])", "JSON example, with file\n\ndemonstrates reading in a json file as a string and as a table\nuses small sample file containing data about projects funded by the World Bank \ndata source: http://jsonstudio.com/resources/", "# load json as string\njson.load((open('data/world_bank_projects_less.json')))\n\n# load as Pandas dataframe\nsample_json_df = pd.read_json('data/world_bank_projects_less.json')\nsample_json_df", "JSON exercise\nUsing data in file 'data/world_bank_projects.json' and the techniques demonstrated above,\n1. Find the 10 countries with most projects\n2. Find the top 10 major project themes (using column 'mjtheme_namecode')\n3. In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in.", "bank = pd.read_json('data/world_bank_projects.json')\nbank.head()", "1. Find the 10 countries with most projects", "bank.countryname.value_counts().head(10)", "2. Find the top 10 major project themes (using column 'mjtheme_namecode')", "names = []\nfor i in bank.index:\n namecode = bank.loc[i,'mjtheme_namecode']\n names.extend(list(json_normalize(namecode)['name']))\n\npd.Series(names).value_counts().head(10).drop('', axis=0) ", "3. In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in.", "codes_names = pd.DataFrame(columns=['code', 'name'])\nfor i in bank.index:\n namecode = bank.loc[i,'mjtheme_namecode']\n codes_names = pd.concat([codes_names, json_normalize(namecode)])\n\ncodes_names_dict = (codes_names[codes_names.name != '']\n .drop_duplicates()\n .to_dict())\n\nfor i in bank.index:\n namecode = bank.loc[i,'mjtheme_namecode']\n cell = json_normalize(namecode).replace('', np.nan)\n cell = cell.fillna(codes_names_dict)\n bank.set_value(i, 'mjtheme_namecode', cell.to_dict(orient='record'))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
robertoalotufo/ia898
src/normalize.ipynb
mit
[ "Function normalize\nSynopse\nNormalize the pixels values between the specified range.\n\n\ng = normalize(f, range)\n\n\ng: Output: ndmage, same type as f. If range==[0,255], dtype is set to 'uint8'\n\nf: Input: ndimage.\nrange: Input: vector with two elements, minimum and maximum values in the output image, respectively.\n\nDescription\nNormalize the input image f. The minimum value of f is assigned to the minimum desired value and the \nmaximum value of f, to the maximum desired value. The minimum and maximum desired values are given by the \nparameter range. The data type of the normalized image is the same data type of input image, unless the\nrange is [0,255], in which case the dtype is set to 'uint8'.", "import numpy as np\n\ndef normalize(f, range=[0,255]):\n\n f = np.asarray(f)\n range = np.asarray(range)\n if f.dtype.char in ['D', 'F']:\n raise Exception('error: cannot normalize complex data')\n faux = np.ravel(f).astype(float)\n minimum = faux.min()\n maximum = faux.max()\n lower = range[0]\n upper = range[1]\n if upper == lower:\n g = np.ones(f.shape) * maximum\n if minimum == maximum:\n g = np.ones(f.shape) * (upper + lower) / 2.\n else:\n g = (faux-minimum)*(upper-lower) / (maximum-minimum) + lower\n g = g.reshape(f.shape)\n\n if f.dtype == np.uint8:\n if upper > 255: \n raise Exception('normalize: warning, upper valuer larger than 255. Cannot fit in uint8 image')\n if lower == 0 and upper == 255:\n g = g.astype(np.uint8)\n else:\n g = g.astype(f.dtype) # set data type of result the same as the input image\n return g", "Examples", "testing = (__name__ == \"__main__\")\n\nif testing:\n import numpy as np\n import sys,os\n ia898path = os.path.abspath('../../')\n if ia898path not in sys.path:\n sys.path.append(ia898path)\n import ia898.src as ia\n", "Example 1", "if testing:\n f = np.array([100., 500., 1000.])\n g1 = ia.normalize(f, [0,255])\n print(g1)\n\nif testing:\n g2 = ia.normalize(f, [-1,1])\n print(g2)\n\nif testing:\n g3 = ia.normalize(f, [0,1])\n print(g3)\n\nif testing:\n #\n f = np.array([-100., 0., 100.])\n g4 = ia.normalize(f, [0,255])\n print(g4)\n g5 = ia.normalize(f, [-1,1])\n print(g5)\n g6 = ia.normalize(f, [-0.5,0.5])\n print(g6)\n #\n f = np.arange(10).astype('uint8')\n g7 = ia.normalize(f)\n print(g7)\n #\n f = np.array([1,1,1])\n g8 = ia.normalize(f)\n print(g8)", "Equation\n$$ g = f|{gmin}^{gmax}$$\n$$ g(p) = \\frac{g{max} - g_{min}}{f_{max}-f_{min}} (f(p) - f_{min}) + g_{min} $$\nSee Also\n\niaimginfo - Print image size and pixel data type information\niait - Illustrate the contrast transform function\n\nReferences\n\nWikipedia - Normalization\n\nContributions\n\nMarcos Fernandes, course IA368S, 1st semester 2013", "if testing:\n print('testing normalize')\n print(repr(ia.normalize(np.array([-100., 0., 100.]), [0,255])) == repr(np.array([ 0. , 127.5, 255. ])))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
james-prior/cohpy
20170327-cohpy-defer-fstring-evaluation.ipynb
mit
[ "How to defer evaluation of f-strings\nIt seems that one solution is to use lambdas, which are explored below.\nWhat other solutions are there?\n\nImagine that one wants to format a string\nselecting the format from several f-strings.\nUnfortunately, the obvious straightforward way\nof having the f-strings as values in a dictionary,\ndoes not work as desired,\nbecause the f-strings are evaluated when creating the dictionary.\nThis unhappy way follows.", "year, month, day = 'hello', -1, 0\ndate_formats = {\n 'iso': f'{year}-{month:02d}-{day:02d}',\n 'us': f'{month}/{day}/{year}',\n 'other': f'{day} {month} {year}',\n}", "Notice below that\nthe current values of year, month, and day\nare ignored when evaluating date_formats['iso'].", "year, month, day = 2017, 3, 27\nprint(year, month, day)\nprint(date_formats['iso'])", "A solution is to use lambdas in the dictionary, and call them later,\nas shown below.", "year, month, day = 'hello', -1, 0\n# year, month, and day do not have to be defined when creating dictionary.\ndel year # Test that with one of them.\ndate_formats = {\n 'iso': (lambda: f'{year}-{month:02d}-{day:02d}'),\n 'us': (lambda: f'{month}/{day}/{year}'),\n 'other': (lambda: f'{day}.{month}.{year}'),\n}\ndates = (\n (2017, 3, 27),\n (2017, 4, 24),\n (2017, 5, 22),\n)\n\nfor format_name, format in date_formats.items():\n print(f'{format_name}:')\n for year, month, day in dates:\n print(format())", "This also answers the question about\nwhat good use there is\nfor\nZak K (aka y2kbugger)'s\ncrazy whacko f-string lambdas.\nThanks Zak!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tanmay987/deepLearning
dcgan-svhn/DCGAN_Exercises.ipynb
mit
[ "Deep Convolutional GANs\nIn this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images, you can read the original paper here.\nYou'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST. \n\nSo, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same.", "%matplotlib inline\n\nimport pickle as pkl\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.io import loadmat\nimport tensorflow as tf\n\n!mkdir data", "Getting the data\nHere you can download the SVHN dataset. Run the cell above and it'll download to your machine.", "from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\n\ndata_dir = 'data/'\n\nif not isdir(data_dir):\n raise Exception(\"Data directory doesn't exist!\")\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(data_dir + \"train_32x32.mat\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:\n urlretrieve(\n 'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',\n data_dir + 'train_32x32.mat',\n pbar.hook)\n\nif not isfile(data_dir + \"test_32x32.mat\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:\n urlretrieve(\n 'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',\n data_dir + 'test_32x32.mat',\n pbar.hook)", "These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.", "trainset = loadmat(data_dir + 'train_32x32.mat')\ntestset = loadmat(data_dir + 'test_32x32.mat')", "Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.", "idx = np.random.randint(0, trainset['X'].shape[3], size=36)\nfig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)\nfor ii, ax in zip(idx, axes.flatten()):\n ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)\nplt.subplots_adjust(wspace=0, hspace=0)", "Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. 
We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.", "def scale(x, feature_range=(-1, 1)):\n # scale to (0, 1)\n x = ((x - x.min())/(255 - x.min()))\n \n # scale to feature_range\n min, max = feature_range\n x = x * (max - min) + min\n return x\n\nclass Dataset:\n def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):\n split_idx = int(len(test['y'])*(1 - val_frac))\n self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]\n self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]\n self.train_x, self.train_y = train['X'], train['y']\n \n self.train_x = np.rollaxis(self.train_x, 3)\n self.valid_x = np.rollaxis(self.valid_x, 3)\n self.test_x = np.rollaxis(self.test_x, 3)\n \n if scale_func is None:\n self.scaler = scale\n else:\n self.scaler = scale_func\n self.shuffle = shuffle\n \n def batches(self, batch_size):\n if self.shuffle:\n idx = np.arange(len(self.train_x))\n np.random.shuffle(idx)\n self.train_x = self.train_x[idx]\n self.train_y = self.train_y[idx]\n \n n_batches = len(self.train_y)//batch_size\n for ii in range(0, len(self.train_y), batch_size):\n x = self.train_x[ii:ii+batch_size]\n y = self.train_y[ii:ii+batch_size]\n \n yield self.scaler(x), y", "Network Inputs\nHere, just creating some placeholders like normal.", "def model_inputs(real_dim, z_dim):\n inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')\n inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')\n \n return inputs_real, inputs_z", "Generator\nHere you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.\nWhat's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.\nYou keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the architecture used in the original DCGAN paper:\n\nNote that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3. \n\nExercise: Build the transposed convolutional network for the generator in the function below.
Be sure to use leaky ReLUs on all the layers except for the last tanh layer, as well as batch normalization on all the transposed convolutional layers except the last one.", "def generator(z, output_dim, reuse=False, alpha=0.2, training=True):\n with tf.variable_scope('generator', reuse=reuse):\n # First fully connected layer, reshaped to a 4x4x512 volume\n x1 = tf.layers.dense(z, 4*4*512)\n x1 = tf.reshape(x1, (-1, 4, 4, 512))\n x1 = tf.layers.batch_normalization(x1, training=training)\n x1 = tf.maximum(alpha * x1, x1)\n \n # 8x8x256 now\n x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same')\n x2 = tf.layers.batch_normalization(x2, training=training)\n x2 = tf.maximum(alpha * x2, x2)\n \n # 16x16x128 now\n x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same')\n x3 = tf.layers.batch_normalization(x3, training=training)\n x3 = tf.maximum(alpha * x3, x3)\n \n # Output layer, 32x32x3\n logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same')\n \n out = tf.tanh(logits)\n \n return out", "Discriminator\nHere you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The inputs to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.\nYou'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.\nNote: in this project, your batch normalization layers will always use batch statistics. (That is, always set training to True.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the training parameter appropriately.\n\nExercise: Build the convolutional network for the discriminator. The input is a 32x32x3 image, the output is a sigmoid plus the logits.
Again, use Leaky ReLU activations and batch normalization on all the layers except the first.", "def discriminator(x, reuse=False, alpha=0.2):\n with tf.variable_scope('discriminator', reuse=reuse):\n # Input layer is 32x32x3\n x = tf.layers.conv2d(x, 64, 5, strides=2, padding='same')\n relu1 = tf.maximum(alpha*x, x)\n # 16x16x64 now\n \n x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same')\n bn2 = tf.layers.batch_normalization(x2, training=True)\n relu2 = tf.maximum(alpha*bn2, bn2)\n # 8x8x128 now\n \n x3 = tf.layers.conv2d(relu2, 256, 5, strides=2, padding='same')\n bn3 = tf.layers.batch_normalization(x3, training=True)\n relu3 = tf.maximum(alpha*bn3, bn3)\n # 4x4x256 now\n \n flat = tf.reshape(relu3, (-1, 4*4*256))\n logits = tf.layers.dense(flat, 1)\n out = tf.sigmoid(logits)\n \n return out, logits", "Model Loss\nCalculating the loss like before, nothing new here.", "def model_loss(input_real, input_z, output_dim, alpha=0.2):\n \"\"\"\n Get the loss for the discriminator and generator\n :param input_real: Images from the real dataset\n :param input_z: Z input\n :param output_dim: The number of channels in the output image\n :return: A tuple of (discriminator loss, generator loss)\n \"\"\"\n g_model = generator(input_z, output_dim, alpha=alpha)\n d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)\n d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)\n\n d_loss_real = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))\n d_loss_fake = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))\n g_loss = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))\n\n d_loss = d_loss_real + d_loss_fake\n\n return d_loss, g_loss", "Optimizers\nNot much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics.", "def model_opt(d_loss, g_loss, learning_rate, beta1):\n \"\"\"\n Get optimization operations\n :param d_loss: Discriminator loss Tensor\n :param g_loss: Generator loss Tensor\n :param learning_rate: Learning Rate Placeholder\n :param beta1: The exponential decay rate for the 1st moment in the optimizer\n :return: A tuple of (discriminator training operation, generator training operation)\n \"\"\"\n # Get weights and bias to update\n t_vars = tf.trainable_variables()\n d_vars = [var for var in t_vars if var.name.startswith('discriminator')]\n g_vars = [var for var in t_vars if var.name.startswith('generator')]\n\n # Optimize\n with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\n d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)\n g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)\n\n return d_train_opt, g_train_opt", "Building the model\nHere we can use the functions we defined above to build the model as a class.
This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.", "class GAN:\n def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):\n tf.reset_default_graph()\n \n self.input_real, self.input_z = model_inputs(real_size, z_size)\n \n self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,\n real_size[2], alpha=alpha)\n \n self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)", "Here is a function for displaying generated images.", "def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):\n fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols, \n sharey=True, sharex=True)\n for ax, img in zip(axes.flatten(), samples[epoch]):\n ax.axis('off')\n img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)\n ax.set_adjustable('box-forced')\n im = ax.imshow(img, aspect='equal')\n \n plt.subplots_adjust(wspace=0, hspace=0)\n return fig, axes", "And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.", "def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):\n saver = tf.train.Saver()\n sample_z = np.random.uniform(-1, 1, size=(72, z_size))\n\n samples, losses = [], []\n steps = 0\n\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for x, y in dataset.batches(batch_size):\n steps += 1\n\n # Sample random noise for G\n batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))\n\n # Run optimizers\n _ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})\n _ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})\n\n if steps % print_every == 0:\n # Every print_every steps, get the losses and print them out\n train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})\n train_loss_g = net.g_loss.eval({net.input_z: batch_z})\n\n print(\"Epoch {}/{}...\".format(e+1, epochs),\n \"Discriminator Loss: {:.4f}...\".format(train_loss_d),\n \"Generator Loss: {:.4f}\".format(train_loss_g))\n # Save losses to view after training\n losses.append((train_loss_d, train_loss_g))\n\n if steps % show_every == 0:\n gen_samples = sess.run(\n generator(net.input_z, 3, reuse=True, training=False),\n feed_dict={net.input_z: sample_z})\n samples.append(gen_samples)\n _ = view_samples(-1, samples, 6, 12, figsize=figsize)\n plt.show()\n\n saver.save(sess, './checkpoints/generator.ckpt')\n\n with open('samples.pkl', 'wb') as f:\n pkl.dump(samples, f)\n \n return losses, samples", "Hyperparameters\nGANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.\n\nExercise: Find hyperparameters to train this GAN. The values found in the DCGAN paper work well, or you can experiment on your own.
In general, you want the discriminator loss to be around 0.3; this means it is correctly classifying images as fake or real about 50% of the time.", "real_size = (32,32,3)\nz_size = 100\nlearning_rate = 0.001\nbatch_size = 64\nepochs = 1\nalpha = 0.01\nbeta1 = 0.9\n\n# Create the network\nnet = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)\n\n# Load the data and train the network here\ndataset = Dataset(trainset, testset)\nlosses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))\n\nfig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator', alpha=0.5)\nplt.plot(losses.T[1], label='Generator', alpha=0.5)\nplt.title(\"Training Losses\")\nplt.legend()\n\n_ = view_samples(-1, samples, 6, 12, figsize=(10,5))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
nwjs/chromium.src
third_party/tflite_support/src/tensorflow_lite_support/tools/Build_TFLite_Support_Targets.ipynb
bsd-3-clause
[ "Licensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n http://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\nBuild TensorFlow Lite Support libraries with Bazel\nSet up Android environment", "# Create folders\n!mkdir -p '/android/sdk'\n\n# Download and move android SDK tools to specific folders\n!wget -q 'https://dl.google.com/android/repository/tools_r25.2.5-linux.zip'\n\n!unzip 'tools_r25.2.5-linux.zip'\n!mv '/content/tools' '/android/sdk'\n# Copy paste the folder\n!cp -r /android/sdk/tools /android/android-sdk-linux\n\n# Download NDK, unzip and move contents\n!wget 'https://dl.google.com/android/repository/android-ndk-r19c-linux-x86_64.zip'\n\n!unzip 'android-ndk-r19c-linux-x86_64.zip'\n!mv /content/android-ndk-r19c /content/ndk\n!mv '/content/ndk' '/android'\n# Copy paste the folder\n!cp -r /android/ndk /android/android-ndk-r19c\n\n# Remove .zip files\n!rm 'tools_r25.2.5-linux.zip'\n!rm 'android-ndk-r19c-linux-x86_64.zip'\n\n# Make android ndk executable to all users\n!chmod -R go=u '/android'\n\n# Set and view environment variables\n%env PATH = /usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin:/opt/bin:/android/sdk/tools:/android/sdk/platform-tools:/android/ndk\n%env ANDROID_SDK_API_LEVEL=29\n%env ANDROID_API_LEVEL=29\n%env ANDROID_BUILD_TOOLS_VERSION=29.0.2\n%env ANDROID_DEV_HOME=/android\n%env ANDROID_NDK_API_LEVEL=21\n%env ANDROID_NDK_FILENAME=android-ndk-r19c-linux-x86_64.zip\n%env ANDROID_NDK_HOME=/android/ndk\n%env ANDROID_NDK_URL=https://dl.google.com/android/repository/android-ndk-r19c-linux-x86_64.zip\n%env ANDROID_SDK_FILENAME=tools_r25.2.5-linux.zip\n%env ANDROID_SDK_HOME=/android/sdk\n#%env ANDROID_HOME=/android/sdk\n%env ANDROID_SDK_URL=https://dl.google.com/android/repository/tools_r25.2.5-linux.zip\n\n#!echo $PATH\n!export -p\n\n# Install specific versions of sdk, tools etc.\n!android update sdk --no-ui -a \\\n --filter tools,platform-tools,android-29,build-tools-29.0.2", "Install BAZEL with Baselisk", "# Download Latest version of Bazelisk\n!wget https://github.com/bazelbuild/bazelisk/releases/latest/download/bazelisk-linux-amd64\n\n# Make script executable\n!chmod +x bazelisk-linux-amd64\n\n# Adding to the path\n!sudo mv bazelisk-linux-amd64 /usr/local/bin/bazel\n\n# Extract bazel info\n!bazel\n\n# Clone TensorFlow Lite Support repository OR upload your custom folder to build\n!git clone https://github.com/tensorflow/tflite-support.git\n\n# Move into tflite-support folder\n%cd /content/tflite-support/\n\n!ls", "Build .aar files", "#@title Select library. 
{ display-mode: \"form\" }\n\nlibrary = 'Support library' #@param [\"Support library\", \"Task Vision library\", \"Task Text library\", \"Task Audio library\",\"Metadata library\",\"C++ image_classifier\",\"C++ image_objector\",\"C++ image_segmenter\",\"C++ image_embedder\",\"C++ nl_classifier\",\"C++ bert_nl_classifier\", \"C++ bert_question_answerer\", \"C++ metadata_extractor\"]\n\nprint('You selected:', library)\n\nif library == 'Support library':\n library = '//tensorflow_lite_support/java:tensorflowlite_support.aar'\nelif library == 'Task Vision library':\n library = '//tensorflow_lite_support/java/src/java/org/tensorflow/lite/task/vision:task-library-vision'\nelif library == 'Task Text library':\n library = '//tensorflow_lite_support/java/src/java/org/tensorflow/lite/task/text:task-library-text'\nelif library == 'Task Audio library':\n library = '//tensorflow_lite_support/java/src/java/org/tensorflow/lite/task/audio:task-library-audio'\nelif library == 'Metadata library':\n library = '//tensorflow_lite_support/metadata/java:tensorflow-lite-support-metadata-lib'\nelif library == 'C++ image_classifier':\n library = '//tensorflow_lite_support/cc/task/vision:image_classifier'\nelif library == 'C++ image_objector':\n library = '//tensorflow_lite_support/cc/task/vision:image_objector'\nelif library == 'C++ image_segmenter':\n library = '//tensorflow_lite_support/cc/task/vision:image_segmenter'\nelif library == 'C++ image_embedder':\n library = '//tensorflow_lite_support/cc/task/vision:image_embedder'\nelif library == 'C++ nl_classifier':\n library = '//tensorflow_lite_support/cc/task/text/nlclassifier:nl_classifier'\nelif library == 'C++ bert_nl_classifier':\n library = '//tensorflow_lite_support/cc/task/text/nlclassifier:bert_nl_classifier'\nelif library == 'C++ bert_question_answerer':\n library = '//tensorflow_lite_support/cc/task/text/qa:bert_question_answerer'\nelif library == 'C++ metadata_extractor':\n library = '//tensorflow_lite_support/metadata/cc:metadata_extractor'\n\n\n\n#@title Select platform(s). { display-mode: \"form\" }\n\nplatforms = 'arm64-v8a,armeabi-v7a' #@param [\"arm64-v8a,armeabi-v7a\",\"x86\", \"x86_64\", \"arm64-v8a\", \"armeabi-v7a\",\"x86,x86_64,arm64-v8a,armeabi-v7a\"]\nprint('You selected:', platforms)\n\n\n# Build library\n!bazel build \\\n --fat_apk_cpu='{platforms}' \\\n '{library}'" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]