Back in the day, when I was far less experienced, I remember creating websites and applications that didn't have any error logging. Without error logging you are essentially blind to what's going on with your application.
For a modern application, error logging is absolutely critical: it gives you visibility of errors, highlights code that needs refactoring, and helps you understand how users are using your app and what is happening during the times when you are not able to monitor it, i.e. the middle of the night.
Error logging is also invaluable to catch frustrating intermittent issues that may arise due to performance bottlenecks or infrastructure issues.
Thankfully the Apache Software Foundation has created Log4Net, a dedicated logging system designed for use with the Microsoft .NET Framework. The developers at Apache took the well-established Log4J (Log for Java) and ported it to other frameworks such as .NET, PHP and others. For more details on Log4Net and the Apache Software Foundation please view their website ->
Getting Started
To begin you first need to install Log4Net into your project. The easiest way to do this is to use NuGet and install the package into your main application or website project.
To install Log4Net follow these steps.
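If you use the NuGet Package Manager Console, the install is a single command (the package id below is the standard one, but verify it against NuGet):

Install-Package log4net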
Saving your Error Logs to SQL Server
Log4Net allows you to save your error logs to a number of different destinations, including XML files, email and a database. From my experience it's much better to store logs in SQL Server and, if necessary, set up a Log4Net email appender to trigger emails for serious and fatal errors. You can use any version of SQL Server or another DBMS to store your logs. To begin, let's create the Log table. Run the following SQL against your database in SSMS to create the table.
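The SQL below is the table layout commonly used with Log4Net's ADONetAppender; treat the column names and sizes as an assumption and adjust them to match the appender configuration you end up using:

CREATE TABLE [dbo].[Log] (
    [Id] [int] IDENTITY (1, 1) NOT NULL,    -- surrogate key
    [Date] [datetime] NOT NULL,             -- when the event was logged
    [Thread] [varchar] (255) NOT NULL,      -- thread that raised the event
    [Level] [varchar] (50) NOT NULL,        -- DEBUG / INFO / WARN / ERROR / FATAL
    [Logger] [varchar] (255) NOT NULL,      -- class that logged the event
    [Message] [varchar] (4000) NOT NULL,    -- the log message
    [Exception] [varchar] (2000) NULL       -- exception details, if any
)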
Now we have the table, the next thing we need to do is configure the Web.config or App.config with our Log4Net setup.
Unfortunately, the documentation on the Log4Net website is not overly intuitive, and I've spent many hours in the past playing around with the config to get it working. What I find is the most reliable way of installing Log4Net in new projects is to copy the log4net config from an existing project and update the database connection string.
To make things more complicated, if you have a config issue, Log4Net simply will not log anything; nothing happens. This is necessary to ensure that an application cannot fail due to issues with the logging system, but it can be tricky to work out what's gone wrong.
To make it easy for you, I have posted the Web.config areas for Log4Net from intermittentBug.com below. All you need to do is copy it and update the areas where I have added hashes.
1 – add the Log4Net section – this goes in as a child of <configSections>
<section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
2 – add in the log4net section updating the areas that are hashed. This goes under your connection strings section
<log4net>
  <root>
    <level value="ALL" />
    <appender-ref ref="ADONetAppender" />
    <appender-ref ref="DebugAppender" />
  </root>
  <logger name="NHibernate" additivity="false" />
  <appender name="ADONetAppender" type="log4net.Appender.ADONetAppender">
    <bufferSize value="1" />
    <connectionType value="System.Data.SqlClient.SqlConnection, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
    <connectionString value="data source=####;User ID=####;Password=####;initial catalog=####;" />
    <!-- The commandText and parameter mappings that insert into the Log table go here;
         copy them from a working project, as they were not preserved above. -->
  </appender>
  <appender name="DebugAppender" type="log4net.Appender.DebugAppender">
    <immediateFlush value="true" />
    <layout type="log4net.Layout.SimpleLayout" />
  </appender>
</log4net>
Setting up the XmlConfigurator
To enable us to use Log4Net within our .NET code we need to return the ILog object, which provides access to the logging methods. To make life easy, I have created a class that I have used in many different apps and that works great. If your project has a DAL (Data Access Library), it makes sense to create this class there so it can be called from everywhere within your application.
public class Log4Net
{
    public static ILog GetLog4Net(Type ClassType)
    {
        log4net.Config.XmlConfigurator.Configure();
        return log4net.LogManager.GetLogger(ClassType);
    }
}
Testing Log4Net
OK, so now you have set up the database and the web.config, and we have our GetLog4Net class; let's write some code to test whether Log4Net is working and whether we are getting any logs into the database.
For this example, I will install Log4Net into my unit test project. To do this I simply repeat the steps above for my unit test project.
[TestClass]
public class TestLog4Net
{
    private static ILog _log = Log4Net.GetLog4Net(typeof(TestLog4Net));

    [TestMethod]
    public void Log4NetTest()
    {
        var ex = new Exception("tester");
        _log.Info("unit test", ex);
        _log.Error("unit test", ex);
        _log.Fatal("unit test", ex);
        _log.Warn("unit test", ex);
    }
}
For every class in which you plan to use Log4Net you need to create the private static field _log passing the type of the current class.
When you run this test class it should run successfully, and in the database you will have 4 log entries.
Real world example
If you have got Log4Net saving to your database, then you're ready to start error logging your application. Below is a real-world example of how you can use Log4Net in a typical MVC controller method using a try/catch.
public class HomeController : Controller
{
    private static ILog _log = Log4Net.GetLog4Net(typeof(HomeController));

    public ActionResult Index()
    {
        try
        {
            HomeModel Model = new HomeModel();
            return View(Model);
        }
        catch(Exception ex)
        {
            _log.Error("Error on HomeController - Action Index() " + ex.Message, ex);
            throw;
        }
    }
}
If any exceptions were triggered on the home page you will have full visibility of the exception in the database, and you can use SQL to filter down to see exactly the errors you’re interested in.
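For example, a query along these lines (assuming the Log table layout sketched earlier) pulls back only the serious entries, newest first:

SELECT [Date], [Level], [Logger], [Message], [Exception]
FROM [dbo].[Log]
WHERE [Level] IN ('ERROR', 'FATAL')
ORDER BY [Date] DESC;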
So there we have it: you're now set up with Log4Net and have full visibility of any exceptions your users experience.
| https://www.intermittentbug.com/article/articlepage/getting-started-with-log4net/2296 | CC-MAIN-2019-13 | refinedweb | 957 | 55.54 |
coll, objects and free method
Hi.
Sorry for the machine-gun posting.
I’m having a quite serious problem with coll dealing with A_OBJ atoms.
The thing is, it looks like coll on certain occasions (e.g., when I duplicate the coll) tries to free the objects whose pointers it has stored. But most of the time those objects have already been freed, and this results either in a "freeobject: bad object" in the Max window (small problem), or a Max crash (big problem), or even a computer crash (huge problem).
Of course it doesn't make much sense to me to store the objects in a coll; but it's a possible patching mistake, and it should not have devastating consequences. Moreover, while I have experienced this problem with coll, I fear that other objects might have the same behavior.
My question is: is there a way to prevent coll from freeing the object? E.g., is there a flag I can set somewhere saying that my class must not be freed? In this case, I could free it only when I need, by "manually" calling its former free method and then freeing the memory taken by the object… does this make sense?
thank you
In general you should not be passing A_OBJ atoms in a max patcher, or storing them in a coll. There is essentially zero support for these atoms in standard box objects. We use hashed symbolic names for Jitter matrices for this reason.
That said, there is an essentially undocumented API, object_retain(t_object *x) and object_release(t_object *x), for object reference counting similar to Obj-C. It's up to you to ensure that there will be no memory leaks, and there's no guarantee that coll won't reference said object after you've freed it, since coll will not call object_retain(). Use them at your own risk.
Or better yet, register the object in a namespace via object_register, and send the object name around the patcher, resolving by client objects with object_findregistered(). This way no other objects will be using or referencing the A_OBJ directly except the ones you are writing.
This lets you safely use useful box classes like pack, zl, coll, etc. with your symbolic object references. If your object disappears while any of them still refer to said reference, it won’t be locatable from object_findregistered().
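Roughly, the pattern looks like this (written from memory of the Max SDK, so double-check the exact signatures in ext_obex.h before relying on it):

// In your object's constructor: publish the instance under a namespace + name.
t_symbol *name = symbol_unique();                      // or gensym("some_fixed_name")
object_register(gensym("myclass_space"), name, x);     // x is your t_object *
// ...pass `name` around the patcher as an ordinary symbol (pack, zl, coll, etc.)...

// In a client object: resolve the name back to the instance, if it still exists.
t_object *found = (t_object *)object_findregistered(gensym("myclass_space"), name);
if (found) {
    // safe to use: the object is still registered
}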
However, internal to your class, I thought you might also have a use for object_retain/release. For example one of your recent posts that showed creating a temporary atom array could have been made a little simpler with object_retain/release.
-Joshua
In fact, because of the potentially great number of instances of my class, I don’t want to use symbols for naming or binding them – I fear they might clog up the symbol table. But I can do something not too different with numeric keys – I just need to set up a hash-table-like thing in order to deal with them, and then I can harmlessly pass them in the patcher as regular A_LONGs.
Some questions about object_retain/release, which look very useful to me:
1- are they thread-safe, or should I put a lock around their calls instead?
2- when an instance of an object is created, is its reference count automatically set to 1, or should I do it manually?
3- What happens if I call object_free() on a retained object? I.e., is object_free() aware of reference counting? (I guess 2 and 3 are mutually exclusive)
Thank you very much, as always
aa
As for not bloating the symbol pool, you can use symbol_unique(), which after 10k symbols (@30 bytes each), will start reusing symbols from the unique symbol pool, so no further bloat than ~300k per application launch.
As for object_retain/release():
1. Yes, they are threadsafe.
2. Yes, the refcount is by default 1.
3. object_free decrements the count, as does object_release(). If the count is zero, the object is actually freed, otherwise it lingers in memory.
Hope this helps.
-Joshua
aah, I didn’t know about this thing of symbol_unique()
last questions about the reference counting system:
1. what is the actual difference between object_free and object_release, if both decrement the count and dispose the object when the count is 0?
2. is there a way to know the refcount of an object, for debugging?
thank you!
aa
1. The details are subtle. I would simply match object_new() (or other constructor) with object_free() and object_retain() with object_release().
Essentially, object_release() may not free an object if object_retain() has never been called. It is dependent on some special extra object info which we allocate if certain extended features are being used. object_retain() forces this to happen, but other things like registering an object, adding object specific attributes, etc. can do this as well.
In future versions, we could make object_release() free even in the absence of calling object_retain() first, but such a situation hasn’t been required by our internal usage.
2. Unfortunately, there is currently no way to get the reference count. We can consider this for the future, but for now, you’ll need to manage this elsewhere somehow.
Forums > Dev
| https://cycling74.com/forums/topic/coll-objects-and-free-method/ | CC-MAIN-2017-04 | refinedweb | 866 | 62.48 |
Introduction
TensorFlow Probability (TFP) offers a number of JointDistribution abstractions that make probabilistic inference easier by allowing a user to easily express a probabilistic graphical model in a near-mathematical form; the abstraction generates methods for sampling from the model and evaluating the log probability of samples from the model. In this tutorial, we review "autobatched" variants, which were developed after the original JointDistribution abstractions. Relative to the original, non-autobatched abstractions, the autobatched versions are simpler to use and more ergonomic, allowing many models to be expressed with less boilerplate. In this colab, we explore a simple model in (perhaps tedious) detail, making clear the problems autobatching solves, and (hopefully) teaching the reader more about TFP shape concepts along the way.

Prior to the introduction of autobatching, there were a few different variants of JointDistribution, corresponding to different syntactic styles for expressing probabilistic models: JointDistributionSequential, JointDistributionNamed, and JointDistributionCoroutine. Autobatching exists as a mixin, so we now have AutoBatched variants of all of these. In this tutorial, we explore the differences between JointDistributionSequential and JointDistributionSequentialAutoBatched; however, everything we do here is applicable to the other variants with essentially no changes.
Dependencies & Prerequisites
Imports and setup

import functools
import numpy as np
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp

tfd = tfp.distributions
Prerequisite: A Bayesian Regression Problem
We'll consider a very simple Bayesian regression scenario:
\[ \begin{align*} m & \sim \text{Normal}(0, 1) \\ b & \sim \text{Normal}(0, 1) \\ Y & \sim \text{Normal}(mX + b, 1) \end{align*} \]
In this model, m and b are drawn from standard normals, and the observations Y are drawn from a normal distribution whose mean depends on the random variables m and b, and some (nonrandom, known) covariates X. (For simplicity, in this example, we assume the scale of all random variables is known.)

To perform inference in this model, we'd need to know both the covariates X and the observations Y, but for the purposes of this tutorial, we'll only need X, so we define a simple dummy X:

X = np.arange(7)
X
array([0, 1, 2, 3, 4, 5, 6])
Desiderata
In probabilistic inference, we often want to perform two basic operations:
sample: Drawing samples from the model.
log_prob: Computing the log probability of a sample from the model.
The key contribution of TFP's JointDistribution abstractions (as well as of many other approaches to probabilistic programming) is to allow users to write a model once and have access to both sample and log_prob computations.

Noting that we have 7 points in our data set (X.shape = (7,)), we can now state the desiderata for an excellent JointDistribution:

- sample() should produce a list of Tensors having shape [(), (), (7,)], corresponding to the scalar slope, scalar bias, and vector observations, respectively.
- log_prob(sample()) should produce a scalar: the log probability of a particular slope, bias, and observations.
- sample([5, 3]) should produce a list of Tensors having shape [(5, 3), (5, 3), (5, 3, 7)], representing a (5, 3)-batch of samples from the model.
- log_prob(sample([5, 3])) should produce a Tensor with shape (5, 3).
We'll now look at a succession of JointDistribution models, see how to achieve the above desiderata, and hopefully learn a little more about TFP shapes along the way.
Spoiler alert: The approach that satisfies the above desiderata without added boilerplate is autobatching.
First Attempt: JointDistributionSequential

jds = tfd.JointDistributionSequential([
    tfd.Normal(loc=0., scale=1.),                   # m
    tfd.Normal(loc=0., scale=1.),                   # b
    lambda b, m: tfd.Normal(loc=m*X + b, scale=1.)  # Y
])
This is more or less a direct translation of the model into code. The slope m and bias b are straightforward. Y is defined using a lambda-function: the general pattern is that a lambda-function of \(k\) arguments in a JointDistributionSequential (JDS) uses the previous \(k\) distributions in the model. Note the "reverse" order.

We'll call sample_distributions, which returns both a sample and the underlying "sub-distributions" that were used to generate the sample. (We could have produced just the sample by calling sample; later in the tutorial it will be convenient to have the distributions as well.) The sample we produce is fine:

dists, sample = jds.sample_distributions()
sample
But log_prob produces a result with an undesired shape:
jds.log_prob(sample)
<tf.Tensor: shape=(7,), dtype=float32, numpy= array([-4.4777603, -4.6775575, -4.7430477, -4.647725 , -4.5746684, -4.4368567, -4.480562 ], dtype=float32)>
And multiple sampling doesn't work:
try:
  jds.sample([5, 3])
except tf.errors.InvalidArgumentError as e:
  print(e)
Incompatible shapes: [5,3] vs. [7] [Op:Mul]
Let's try to understand what's going wrong.
A Brief Review: Batch and Event Shape
In TFP, an ordinary (not a JointDistribution) probability distribution has an event shape and a batch shape, and understanding the difference is crucial to effective use of TFP:
- Event shape describes the shape of a single draw from the distribution; the draw may be dependent across dimensions. For scalar distributions, the event shape is []. For a 5-dimensional MultivariateNormal, the event shape is [5].
- Batch shape describes independent, not identically distributed draws, aka a "batch" of distributions. Representing a batch of distributions in a single Python object is one of the key ways TFP achieves efficiency at scale.
For our purposes, a critical fact to keep in mind is that if we call log_prob on a single sample from a distribution, the result will always have a shape that matches (i.e., has as rightmost dimensions) the batch shape.
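A tiny illustration of that fact, not part of the original tutorial, just showing the shape rule in isolation:

d = tfd.Normal(loc=[0., 1., 2.], scale=1.)  # batch_shape=[3], event_shape=[]
d.log_prob(d.sample())                      # Tensor of shape (3,): one log-probability per batch member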
For a more in-depth discussion of shapes, see the "Understanding TensorFlow Distributions Shapes" tutorial.
Why Doesn't log_prob(sample()) Produce a Scalar?
Let's use our knowledge of batch and event shape to explore what's happening with log_prob(sample()). Here's our sample again:

sample
And here are our distributions:
dists
[<tfp.distributions.Normal 'Normal' batch_shape=[] event_shape=[] dtype=float32>, <tfp.distributions.Normal 'Normal' batch_shape=[] event_shape=[] dtype=float32>, <tfp.distributions.Normal 'JointDistributionSequential_sample_distributions_Normal' batch_shape=[7] event_shape=[] dtype=float32>]
The log probability is computed by summing the log probabilities of the sub-distributions at the (matched) elements of the parts:
log_prob_parts = [dist.log_prob(s) for (dist, s) in zip(dists, sample)]
log_prob_parts
[<tf.Tensor: shape=(), dtype=float32, numpy=-2.3113134>, <tf.Tensor: shape=(), dtype=float32, numpy=-1.1357536>, <tf.Tensor: shape=(7,), dtype=float32, numpy= array([-1.0306933, -1.2304904, -1.2959809, -1.200658 , -1.1276014, -0.9897899, -1.0334952], dtype=float32)>]
np.sum(log_prob_parts) - jds.log_prob(sample)
<tf.Tensor: shape=(7,), dtype=float32, numpy=array([0., 0., 0., 0., 0., 0., 0.], dtype=float32)>
So, one level of explanation is that the log probability calculation is returning a 7-Tensor because the third subcomponent of log_prob_parts is a 7-Tensor. But why?

Well, we see that the last element of dists, which corresponds to our distribution over Y in the mathematical formulation, has a batch_shape of [7]. In other words, our distribution over Y is a batch of 7 independent normals (with different means and, in this case, the same scale).
We now understand what's wrong: in JDS, the distribution over Y has batch_shape=[7], so a sample from the JDS consists of scalars for m and b and a "batch" of 7 independent normals, and log_prob computes 7 separate log-probabilities, each of which represents the log probability of drawing m and b and a single observation Y[i] at some X[i].
Fixing log_prob(sample()) with Independent
Recall that dists[2] has event_shape=[] and batch_shape=[7]:
dists[2]
<tfp.distributions.Normal 'JointDistributionSequential_sample_distributions_Normal' batch_shape=[7] event_shape=[] dtype=float32>
By using TFP's Independent metadistribution, which converts batch dimensions to event dimensions, we can convert this into a distribution with event_shape=[7] and batch_shape=[] (we'll rename it y_dist_i because it's a distribution on Y, with the _i standing in for our Independent wrapping):

y_dist_i = tfd.Independent(dists[2], reinterpreted_batch_ndims=1)
y_dist_i
<tfp.distributions.Independent 'IndependentJointDistributionSequential_sample_distributions_Normal' batch_shape=[] event_shape=[7] dtype=float32>
Now, the log_prob of a 7-vector is a scalar:
y_dist_i.log_prob(sample[2])
<tf.Tensor: shape=(), dtype=float32, numpy=-7.9087086>
Under the covers, Independent sums over the batch:
y_dist_i.log_prob(sample[2]) - tf.reduce_sum(dists[2].log_prob(sample[2]))
<tf.Tensor: shape=(), dtype=float32, numpy=0.0>
And indeed, we can use this to construct a new jds_i (the i again stands for Independent) where log_prob returns a scalar:

jds_i = tfd.JointDistributionSequential([
    tfd.Normal(loc=0., scale=1.),       # m
    tfd.Normal(loc=0., scale=1.),       # b
    lambda b, m: tfd.Independent(       # Y
        tfd.Normal(loc=m*X + b, scale=1.),
        reinterpreted_batch_ndims=1)
])
jds_i.log_prob(sample)
<tf.Tensor: shape=(), dtype=float32, numpy=-11.355776>
A couple notes:
- jds_i.log_prob(s) is not the same as tf.reduce_sum(jds.log_prob(s)). The former produces the "correct" log probability of the joint distribution. The latter sums over a 7-Tensor, each element of which is the sum of the log probability of m, b, and a single element of the log probability of Y, so it overcounts m and b. (log_prob(m) + log_prob(b) + log_prob(Y) returns a result rather than throwing an exception because TFP follows TF and NumPy's broadcasting rules; adding a scalar to a vector produces a vector-sized result.)
- In this particular case, we could have solved the problem and achieved the same result using MultivariateNormalDiag instead of Independent(Normal(...)). MultivariateNormalDiag is a vector-valued distribution (i.e., it already has vector event-shape). Indeed, MultivariateNormalDiag could be (but isn't) implemented as a composition of Independent and Normal. It's worthwhile to remember that given a vector V, samples from n1 = Normal(loc=V) and n2 = MultivariateNormalDiag(loc=V) are indistinguishable; the difference between these distributions is that n1.log_prob(n1.sample()) is a vector and n2.log_prob(n2.sample()) is a scalar.
Multiple Samples?
Drawing multiple samples still doesn't work:
try:
  jds_i.sample([5, 3])
except tf.errors.InvalidArgumentError as e:
  print(e)
Incompatible shapes: [5,3] vs. [7] [Op:Mul]
Let's think about why. When we call
jds_i.sample([5, 3]), we'll first draw samples for
m and
b, each with shape
(5, 3). Next, we're going to try to construct a
Normal distribution via:
tfd.Normal(loc=m*X + b, scale=1.)
But if
m has shape
(5, 3) and
X has shape
7, we can't multiply them together, and indeed this is the error we're hitting:
m = tfd.Normal(0., 1.).sample([5, 3]) try: m * X except tf.errors.InvalidArgumentError as e: print(e)
Incompatible shapes: [5,3] vs. [7] [Op:Mul]
To resolve this issue, let's think about what properties the distribution over Y has to have. If we've called jds_i.sample([5, 3]), then we know m and b will both have shape (5, 3). What shape should a call to sample on the Y distribution produce? The obvious answer is (5, 3, 7): for each batch point, we want a sample with the same size as X. We can achieve this by using TensorFlow's broadcasting capabilities, adding extra dimensions:
m[..., tf.newaxis].shape
TensorShape([5, 3, 1])
(m[..., tf.newaxis] * X).shape
TensorShape([5, 3, 7])
Adding an axis to both m and b, we can define a new JDS that supports multiple samples:

jds_ia = tfd.JointDistributionSequential([
    tfd.Normal(loc=0., scale=1.),       # m
    tfd.Normal(loc=0., scale=1.),       # b
    lambda b, m: tfd.Independent(       # Y
        tfd.Normal(loc=m[..., tf.newaxis]*X + b[..., tf.newaxis], scale=1.),
        reinterpreted_batch_ndims=1)
])
shaped_sample = jds_ia.sample([5, 3])
shaped_sample
[<tf.Tensor: shape=(5, 3), dtype=float32, numpy= array([[-1.1133379 , 0.16390413, -0.24177533], [-1.1312429 , -0.6224666 , -1.8182136 ], [-0.31343174, -0.32932565, 0.5164407 ], [-0.0119963 , -0.9079621 , 2.3655841 ], [-0.26293617, 0.8229698 , 0.31098196]], dtype=float32)>, <tf.Tensor: shape=(5, 3), dtype=float32, numpy= array([[-0.02876974, 1.0872147 , 1.0138507 ], [ 0.27367726, -1.331534 , -0.09084719], [ 1.3349475 , -0.68765205, 1.680652 ], [ 0.75436825, 1.3050154 , -0.9415123 ], [-1.2502679 , -0.25730947, 0.74611956]], dtype=float32)>, <tf.Tensor: shape=(5, 3, 7), dtype=float32, numpy= array([[[-1.8258233e+00, -3.0641669e-01, -2.7595463e+00, -1.6952467e+00, -4.8197951e+00, -5.2986512e+00, -6.6931367e+00], [ 3.6438566e-01, 1.0067395e+00, 1.4542470e+00, 8.1155670e-01, 1.8868095e+00, 2.3877139e+00, 1.0195159e+00], [-8.3624744e-01, 1.2518480e+00, 1.0943471e+00, 1.3052304e+00, -4.5756745e-01, -1.0668410e-01, -7.0669651e-02]], [[-3.1788960e-01, 9.2615485e-03, -3.0963073e+00, -2.2846246e+00, -3.2269263e+00, -6.0213070e+00, -7.4806519e+00], [-3.9149747e+00, -3.5155020e+00, -1.5669601e+00, -5.0759468e+00, -4.5065498e+00, -5.6719379e+00, -4.8012795e+00], [ 1.3053948e-01, -8.0493152e-01, -4.7845001e+00, -4.9721808e+00, -7.1365709e+00, -9.6198196e+00, -9.7951422e+00]], [[ 2.0621397e+00, 3.4639853e-01, 7.0252883e-01, -1.4311566e+00, 3.3790007e+00, 1.1619035e+00, -8.9105040e-01], [-7.8956139e-01, -8.5023916e-01, -9.7148323e-01, -2.6229355e+00, -2.7150445e+00, -2.4633870e+00, -2.1841538e+00], [ 7.7627432e-01, 2.2401071e+00, 3.7601702e+00, 2.4245868e+00, 4.0690269e+00, 4.0605016e+00, 5.1753912e+00]], [[ 1.4275590e+00, 3.3346462e+00, 1.5374103e+00, -2.2849756e-01, 9.1219616e-01, -3.1220305e-01, -3.2643962e-01], [-3.1910419e-02, -3.8848895e-01, 9.9946201e-02, -2.3619974e+00, -1.8507402e+00, -3.6830821e+00, -5.4907336e+00], [-7.1941972e-02, 2.1602919e+00, 4.9575748e+00, 4.2317696e+00, 9.3528280e+00, 1.0526063e+01, 1.5262107e+01]], [[-2.3257759e+00, -2.5343289e+00, -3.5342445e+00, -4.0423255e+00, -3.2361765e+00, -3.3434000e+00, -2.6849220e+00], [ 1.5006512e-02, -1.9866472e-01, 7.6781356e-01, 1.6228745e+00, 1.4191239e+00, 2.6655579e+00, 4.4663467e+00], [ 2.6599693e+00, 1.2663836e+00, 1.7162113e+00, 1.4839669e+00, 2.0559487e+00, 2.5976877e+00, 2.5977583e+00]]], dtype=float32)>]
jds_ia.log_prob(shaped_sample)
<tf.Tensor: shape=(5, 3), dtype=float32, numpy= array([[-12.483114 , -10.139662 , -11.514159 ], [-11.656767 , -17.201958 , -12.132455 ], [-17.838818 , -9.474525 , -11.24898 ], [-13.95219 , -12.490049 , -17.123957 ], [-14.487818 , -11.3755455, -10.576363 ]], dtype=float32)>
As an extra check, we'll verify that the log probability for a single batch point matches what we had before:
(jds_ia.log_prob(shaped_sample)[3, 1] - jds_i.log_prob([shaped_sample[0][3, 1], shaped_sample[1][3, 1], shaped_sample[2][3, 1, :]]))
<tf.Tensor: shape=(), dtype=float32, numpy=0.0>
AutoBatching For The Win
Excellent! We now have a version of JointDistribution that handles all our desiderata: log_prob returns a scalar thanks to the use of tfd.Independent, and multiple samples work now that we fixed broadcasting by adding extra axes.

What if I told you there was an easier, better way? There is, and it's called JointDistributionSequentialAutoBatched (JDSAB):

jds_ab = tfd.JointDistributionSequentialAutoBatched([
    tfd.Normal(loc=0., scale=1.),                   # m
    tfd.Normal(loc=0., scale=1.),                   # b
    lambda b, m: tfd.Normal(loc=m*X + b, scale=1.)  # Y
])
jds_ab.log_prob(jds.sample())
<tf.Tensor: shape=(), dtype=float32, numpy=-12.954952>
shaped_sample = jds_ab.sample([5, 3])
jds_ab.log_prob(shaped_sample)
<tf.Tensor: shape=(5, 3), dtype=float32, numpy= array([[-12.191533 , -10.43885 , -16.371655 ], [-13.292994 , -11.97949 , -16.788685 ], [-15.987699 , -13.435732 , -10.6029 ], [-10.184758 , -11.969714 , -14.275676 ], [-12.740775 , -11.5654125, -12.990162 ]], dtype=float32)>
jds_ab.log_prob(shaped_sample) - jds_ia.log_prob(shaped_sample)
<tf.Tensor: shape=(5, 3), dtype=float32, numpy= array([[0., 0., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.]], dtype=float32)>
How does this work? While you could attempt to read the code for a deep understanding, we'll give a brief overview which is sufficient for most use cases:
- Recall that our first problem was that our distribution for Y had batch_shape=[7] and event_shape=[], and we used Independent to convert the batch dimension to an event dimension. JDSAB ignores the batch shapes of component distributions; instead it treats batch shape as an overall property of the model, which is assumed to be [] (unless specified otherwise by setting batch_ndims > 0). The effect is equivalent to using tfd.Independent to convert all batch dimensions of component distributions into event dimensions, as we did manually above.
- Our second problem was a need to massage the shapes of m and b so that they could broadcast appropriately with X when creating multiple samples. With JDSAB, you write a model to generate a single sample, and we "lift" the entire model to generate multiple samples using TensorFlow's vectorized_map (this feature is analogous to JAX's vmap); a rough sketch of the lifting idea appears just after this list.
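A toy sketch of the lifting idea using tf.vectorized_map (an assumed example, not taken from the notebook; the shapes are arbitrary):

single_draw = lambda z: z * tf.range(7.)                  # a computation written for one (7,)-shaped input
lifted = tf.vectorized_map(single_draw, tf.ones([5, 7]))  # the same computation applied across a leading batch of 5
lifted.shape                                              # TensorShape([5, 7])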
Exploring the batch shape issue in more detail, we can compare the batch shapes of our original "bad" joint distribution jds, our batch-fixed distributions jds_i and jds_ia, and our autobatched jds_ab:
jds.batch_shape
[TensorShape([]), TensorShape([]), TensorShape([7])]
jds_i.batch_shape
[TensorShape([]), TensorShape([]), TensorShape([])]
jds_ia.batch_shape
[TensorShape([]), TensorShape([]), TensorShape([])]
jds_ab.batch_shape
TensorShape([])
We see that the original jds has subdistributions with different batch shapes. jds_i and jds_ia fix this by creating subdistributions with the same (empty) batch shape. jds_ab has only a single (empty) batch shape.
It's worth noting that
JointDistributionSequentialAutoBatched offers some additional generality for free. Suppose we make the covariates
X (and, implicitly, the observations
Y) two-dimensional:
X = np.arange(14).reshape((2, 7)) X
array([[ 0, 1, 2, 3, 4, 5, 6], [ 7, 8, 9, 10, 11, 12, 13]])
Our JointDistributionSequentialAutoBatched works with no changes (we need to redefine the model because the shape of X is cached by jds_ab.log_prob):

jds_ab = tfd.JointDistributionSequentialAutoBatched([
    tfd.Normal(loc=0., scale=1.),                   # m
    tfd.Normal(loc=0., scale=1.),                   # b
    lambda b, m: tfd.Normal(loc=m*X + b, scale=1.)  # Y
])
shaped_sample = jds_ab.sample([5, 3])
shaped_sample
[<tf.Tensor: shape=(5, 3), dtype=float32, numpy= array([[ 0.1813647 , -0.85994506, 0.27593774], [-0.73323774, 1.1153806 , 0.8841938 ], [ 0.5127983 , -0.29271227, 0.63733214], [ 0.2362284 , -0.919168 , 1.6648189 ], [ 0.26317367, 0.73077047, 2.5395133 ]], dtype=float32)>, <tf.Tensor: shape=(5, 3), dtype=float32, numpy= array([[ 0.09636458, 2.0138032 , -0.5054413 ], [ 0.63941646, -1.0785882 , -0.6442188 ], [ 1.2310615 , -0.3293852 , 0.77637213], [ 1.2115169 , -0.98906034, -0.07816773], [-1.1318136 , 0.510014 , 1.036522 ]], dtype=float32)>, <tf.Tensor: shape=(5, 3, 2, 7), dtype=float32, numpy= array([[[[-1.9685398e+00, -1.6832136e+00, -6.9127172e-01, 8.5992378e-01, -5.3123581e-01, 3.1584005e+00, 2.9044402e+00], [-2.5645006e-01, 3.1554163e-01, 3.1186538e+00, 1.4272424e+00, 1.2843871e+00, 1.2266440e+00, 1.2798605e+00]], [[ 1.5973477e+00, -5.3631151e-01, 6.8143606e-03, -1.4910895e+00, -2.1568544e+00, -2.0513713e+00, -3.1663666e+00], [-4.9448099e+00, -2.8385928e+00, -6.9027486e+00, -5.6543546e+00, -7.2378774e+00, -8.1577444e+00, -9.3582869e+00]], [[-2.1233239e+00, 5.8853775e-02, 1.2024102e+00, 1.6622503e+00, -1.9197327e-01, 1.8647723e+00, 6.4322817e-01], [ 3.7549341e-01, 1.5853541e+00, 2.4594500e+00, 2.1952972e+00, 1.7517658e+00, 2.9666045e+00, 2.5468128e+00]]], [[[ 8.9906776e-01, 6.7375046e-01, 7.3354661e-01, -9.9894643e-01, -3.4606690e+00, -3.4810467e+00, -4.4315586e+00], [-3.0670738e+00, -6.3628020e+00, -6.2538433e+00, -6.8091092e+00, -7.7134805e+00, -8.6319380e+00, -8.6904278e+00]], [[-2.2462025e+00, -3.3060855e-01, 1.8974400e-01, 3.1422038e+00, 4.1483402e+00, 3.5642972e+00, 4.8709240e+00], [ 4.7880130e+00, 5.8790064e+00, 9.6695948e+00, 7.8112822e+00, 1.2022618e+01, 1.2411858e+01, 1.4323385e+01]], [[-1.0189297e+00, -7.8115642e-01, 1.6466728e+00, 8.2378983e-01, 3.0765080e+00, 3.0170646e+00, 5.1899948e+00], [ 6.5285158e+00, 7.8038850e+00, 6.4155884e+00, 9.0899811e+00, 1.0040427e+01, 9.1404457e+00, 1.0411951e+01]]], [[[ 4.5557004e-01, 1.4905317e+00, 1.4904103e+00, 2.9777462e+00, 2.8620450e+00, 3.4745665e+00, 3.8295493e+00], [ 3.9977460e+00, 5.7173767e+00, 7.8421035e+00, 6.3180594e+00, 6.0838981e+00, 8.2257290e+00, 9.6548376e+00]], [[-7.0750320e-01, -3.5972297e-01, 4.3136525e-01, -2.3301599e+00, -5.0374687e-01, -2.8338656e+00, -3.4453444e+00], [-3.1258626e+00, -3.4687450e+00, -1.2045374e+00, -4.0196013e+00, -5.8831010e+00, -4.2965469e+00, -4.1388311e+00]], [[ 2.1969774e+00, 2.4614549e+00, 2.2314475e+00, 1.8392437e+00, 2.8367062e+00, 4.8600502e+00, 4.2273531e+00], [ 6.1879644e+00, 5.1792760e+00, 6.1141996e+00, 5.6517797e+00, 8.9979610e+00, 7.5938139e+00, 9.7918644e+00]]], [[[ 1.5249090e+00, 1.1388919e+00, 8.6903995e-01, 3.0762129e+00, 1.5128503e+00, 3.5204377e+00, 2.4760864e+00], [ 3.4166217e+00, 3.5930209e+00, 3.1694956e+00, 4.5797420e+00, 4.5271711e+00, 2.8774328e+00, 4.7288942e+00]], [[-2.3095846e+00, -2.0595703e+00, -3.0093951e+00, -3.8594103e+00, -4.9681158e+00, -6.4256043e+00, -5.5345035e+00], [-6.4306297e+00, -7.0924540e+00, -8.4075985e+00, -1.0417805e+01, -1.1727266e+01, -1.1196255e+01, -1.1333830e+01]], [[-7.0419472e-01, 1.4568675e+00, 3.7946482e+00, 4.8489718e+00, 6.6498446e+00, 9.0224218e+00, 1.1153137e+01], [ 1.0060651e+01, 1.1998097e+01, 1.5326431e+01, 1.7957514e+01, 1.8323889e+01, 2.0160881e+01, 2.1269085e+01]]], [[[-2.2360647e-01, -1.3632748e+00, -7.2704530e-01, 2.3558271e-01, -1.0381399e+00, 1.9387857e+00, -3.3694571e-01], [ 1.6015106e-01, 1.5284677e+00, -4.8567140e-01, -1.7770648e-01, 2.1919653e+00, 1.3015286e+00, 1.3877077e+00]], [[ 1.3688663e+00, 2.6602898e+00, 6.6657305e-01, 
4.6554832e+00, 5.7781887e+00, 4.9115267e+00, 4.8446012e+00], [ 5.1983776e+00, 6.2297459e+00, 6.3848300e+00, 8.4291229e+00, 7.1309576e+00, 1.0395646e+01, 8.5736713e+00]], [[ 1.2675294e+00, 5.2844582e+00, 5.1331611e+00, 8.9993315e+00, 1.0794343e+01, 1.4039831e+01, 1.5731170e+01], [ 1.9084715e+01, 2.2191265e+01, 2.3481146e+01, 2.5803375e+01, 2.8632090e+01, 3.0234968e+01, 3.1886738e+01]]]], dtype=float32)>]
jds_ab.log_prob(shaped_sample)
<tf.Tensor: shape=(5, 3), dtype=float32, numpy= array([[-28.90071 , -23.052422, -19.851362], [-19.775568, -25.894997, -20.302256], [-21.10754 , -23.667885, -20.973007], [-19.249458, -20.87892 , -20.573763], [-22.351208, -25.457762, -24.648403]], dtype=float32)>
On the other hand, our carefully crafted JointDistributionSequential no longer works:

jds_ia = tfd.JointDistributionSequential([
    tfd.Normal(loc=0., scale=1.),       # m
    tfd.Normal(loc=0., scale=1.),       # b
    lambda b, m: tfd.Independent(       # Y
        tfd.Normal(loc=m[..., tf.newaxis]*X + b[..., tf.newaxis], scale=1.),
        reinterpreted_batch_ndims=1)
])
try:
  jds_ia.sample([5, 3])
except tf.errors.InvalidArgumentError as e:
  print(e)
Incompatible shapes: [5,3,1] vs. [2,7] [Op:Mul]
To fix this, we'd have to add a second tf.newaxis to both m and b to match the shape, and increase reinterpreted_batch_ndims to 2 in the call to Independent. In this case, letting the auto-batching machinery handle the shape issues is shorter, easier, and more ergonomic.
Once again, we note that while this notebook explored JointDistributionSequentialAutoBatched, the other variants of JointDistribution have equivalent AutoBatched versions. (For users of JointDistributionCoroutine, JointDistributionCoroutineAutoBatched has the additional benefit that you no longer need to specify Root nodes; if you've never used JointDistributionCoroutine you can safely ignore this statement.)
Concluding Thoughts
In this notebook, we introduced JointDistributionSequentialAutoBatched and worked through a simple example in detail. Hopefully you learned something about TFP shapes and about autobatching!
| https://tensorflow.google.cn/probability/examples/JointDistributionAutoBatched_A_Gentle_Tutorial | CC-MAIN-2022-21 | refinedweb | 3,840 | 55.64 |
neopixel — control of WS2812 / NeoPixel LEDs¶
This module provides a driver for WS2812 / NeoPixel LEDs.
Note
This module is only included by default on the ESP8266 and ESP32 ports. On STM32 / Pyboard, you can download the module and copy it to the filesystem.
class NeoPixel¶
This class stores pixel data for a WS2812 LED strip connected to a pin. The application should set pixel data and then call NeoPixel.write() when it is ready to update the strip.
For example:
import machine
import neopixel

# 32 LED strip connected to X8.
p = machine.Pin.board.X8
n = neopixel.NeoPixel(p, 32)

# Draw a red gradient.
for i in range(32):
    n[i] = (i * 8, 0, 0)

# Update the strip.
n.write()
Constructors¶
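As a reminder of the usual constructor form from the MicroPython documentation (the optional arguments here are an assumption; verify them against your port's docs):

class neopixel.NeoPixel(pin, n, bpp=3, timing=1)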
Pixel access methods¶
NeoPixel.fill(pixel)¶
Sets the value of all pixels to the specified pixel value (i.e. an RGB/RGBW tuple).
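A short usage sketch (the pin number and strip length are placeholders, not values from the documentation):

import machine
import neopixel

np = neopixel.NeoPixel(machine.Pin(4), 8)  # hypothetical pin and pixel count
np.fill((0, 0, 128))                       # set every pixel to dim blue
np.write()                                 # push the data out to the strip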
| http://docs.micropython.org/en/v1.17/library/neopixel.html | CC-MAIN-2022-05 | refinedweb | 142 | 69.79 |
Arduino Nano: Ultrasonic Ranger(Ping) Distance I2C 2 X 16 LCD Display With Visuino
Introduction: Arduino Nano: Ultrasonic Ranger(Ping) Distance I2C 2 X 16 LCD Display With Visuino
In this Instructable, I will show you how easy it is to connect an Ultrasonic Sensor to an Arduino and display the distance on an LCD Display.
Step 1: Components
- One Arduino compatible board (I use Arduino Nano, because I have one, but any other will be just fine)
- One Ultrasonic Ranger Sensor Module - I used HC-SR04, but US-015, or very much any other will also work
- One I2C 16x2 LCD Display (Back side of the LCD with the I2C adapter showed on Picture 2)
- One small Breadboard (Any breadboard can be used, or any other way to connect 3 wires together)
- 3 Female-Male (Red) jumper wires
- 6 Female-Female jumper wires
Step 2: Connect the LCD Module to the Arduino
- Connect Female-Female Ground (Black wire), SDA (Green wire), and SCL (Yellow wire) jumper wires to the LCD Module (Picture 1)
- Connect the Female end of a Female-Male Power (Red wire) jumper wire to the VCC/Power pin of the LCD Module (Picture 1), and leave the Male end unconnected
- Connect the other end of the Ground wire (Black wire) to Ground
- Connect another Female-Male Power wire (Red wire) to the 5V Power pin of the Arduino board (Picture 2), and leave the Male end unconnected
- Picture 3 shows where the Ground, 5V Power, SDA/Analog pin 4, and SCL/Analog pin 5 pins of the Arduino Nano are
Step 3: Connect the Ultrasonic Ranger to Arduino
- Connect Ground (Black wire), Power (Red wire), Trigger (Brown wire), and Echo (Purple wire) to the Ultrasonic Ranger Sensor Module (Picture 1)
- Connect the Male ends of the 3 Power wires (Red wires) - from the Display, the Ultrasonic Ranger Module, and the Arduino - together, for example with the help of a Breadboard (Picture 2) - in my case I used a small Breadboard
- Connect the other end of the Ground wire (Black wire) to a Ground pin of the Arduino board (Picture 3)
- Connect the other end of the Trigger wire (Brown wire) to Digital pin 2 of the Arduino board (Picture 3)
- Connect the other end of the Echo wire (Purple wire) to Digital pin 3 of the Arduino board (Picture 3)
- Picture 4 shows in Red where the Ground, Digital 2, and Digital 3 pins of the Arduino Nano are (in Blue are shown the connections made in the previous step)
Step 4: Start Visuino, and Select the Arduino Board Type
Step 5: In Visuino: Add and Connect Ultrasonic Ranger Component
- Type "sonic" in the Filter box of the Component Toolbox then select the "Ultrasonic Ranger(Ping)" component (Picture 1), and drop it in the design area
- In the Object Inspector set the value of the "PauseTime" property of the UltrasonicRanger1 to 1000 (Picture 2). This will give a 1-second period between the measurements, so the LCD display will not be updated too often
- Connect the "Ping(Trigger)" pin of the UltrasonicRanger1 component to the "Digital" input pin of the Digital[ 2 ] channel of the Arduino component (Picture 3)
- Connect the "Out" pin of the Digital[ 3 ] channel of the Arduino component to the "Echo" input pin of the UltrasonicRanger1 component (Picture 4)
Step 6: In Visuino: Add LCD Component, and Text Field in It
- Type "lcd" in the Filter box of the Component Toolbox then select the "Liquid Crystal Display (LCD) - I2C" component (Picture 1), and drop it in the design area
- Click on the "Tools" button (Picture 2) to open the "Elements" editor (Picture 3)
We will add a Text field with the description of the value:
- Add a Text field for the Distance description text by selecting the "Text Field" in the right window of the "Elements" editor, and clicking on the "+" button on the left (Picture 3)
- In the Object Inspector set the "Initial Value" property of the element to "Distance:" (Picture 4) - This will specify the text to be displayed
Step 7: In Visuino: Add, and Setup Analog Value Element to Display the Distance
- Add Analog field for the Distance value by selecting the "Analog Field" in the right window of the "Elements" editor, and clicking on the "+" button on the left (Picture 1)
- In the Object Inspector set the "Precision" property of the element to "2" (Picture 2)
- In the Object Inspector set the "Row" property of the element to "1" (Picture 2) - This will specify that the field will be shown in the second row of the Display
- In the Object Inspector set the "Width" property of the element to "6" (Picture 2)
Step 8: In Visuino: Add, and Setup Text Element to Display the Units
- Add a Text field for the Units text by selecting the "Text Field" in the right window of the "Elements" editor, and clicking on the "+" button (Picture 1)
- In the Object Inspector set the "Column" property of the element to "7" (Picture 2)
- In the Object Inspector set the "Initial Value" property of the element to "CM" (Picture 3)
- In the Object Inspector set the "Row" property of the element to "1" (Picture 3)
- Close the Elements editor
Step 9: In Visuino: Connect the LCD Component
- Connect the "Out" pin of the UltrasonicRanger1 component (Picture 1) to the "In" pin of the "Elements.AnalogField1" element of the LiquidCrystalDisplay1 component (Picture 2)
- Connect the "Out" pin of the LiquidCrystalDisplay1 component to the "In" pin of the I2C channel of the Arduino component (Picture 3)
Step 10: Generate, Compile, and Upload the Arduino Code
Step 11: And Play...
Congratulations! You have completed the project.
Picture 1 shows the connected and powered up project. As you can see on the picture the Display will show the Distance to the nearest object from the Ultrasonic Ranger.
On Picture 2 you can see the complete Visuino diagram.
Also attached is the Visuino project, that I created for this Instructable. You can download and open it in Visuino:
#include <Visuino_LiquidCrystal_I2C.h>
Bro, there's a problem with this library. I used other I2C libraries but it's still not working.
I would check your install. I just had to reinstall my Arduino IDE and Windows 10 added a little confusion.
my dearest friend, could you send the library
It should be part of the install, but I have also uploaded it here:
Friend, the link broke. Could you please send me an alternative link where you have uploaded it? I would really appreciate it.
What is the problem? I use it all the time. At the moment I am in Cologne for a conference, but will be back in the office later in the week, and will be able to help you more if you have problems.
I purchase the program and the one year subscription. I am not putting the hardware together as a hobby and I feel that this software did save me time at accomplishing what I want to do and that the price was an outright bargain. I needed a quick and inexpensive sonic measurement device and this software helped me do that quickly. Now, time to add a PIR, atm pressure, humidity, CO2, smoke, and temp. Thanks for working on this. I am sure that it will help with the rest.
Congratulations! Looks good :-)
Thank you for purchasing, and for the kind words :-) . I think it will do the rest well. The smoke/gas type sensors are usually analog sensors, and you can easily read them in Visuino, but even better, I am expecting a set of sensors to arrive soon, and will look to even add components doing the computations for them, so you will get the values properly calculated and not need to do your own calculations in Visuino :-) Hope they will arrive soon.
Hey guys, I have done the same project with the help of the above guidance. But when compiling the program an error comes up about the Wire.h file missing from the library, so please can anyone help me out with this? It's urgent, as it's one of my projects.
So please reply as fast as possible.
This means that you probably have a bad Arduino IDE installation. The wire.h is part of the standard Arduino IDE Installation. What version of Visuino you use and what version of Arduino IDE ?
Nice. But Visuino is not freeware.
Thank you!
The Atmel processors and the Ultrasonic sensors are not free either :-( . The Atmel is also proprietary.
It is up to you to decide if something is worth paying or not ;-) .
And Visuino can be used for 5 minutes every time you run it for free which is more than enough for very much all the projects I posted and more ;-) .
So I am actually giving some small part of my hard work for free, while actually this is my only income, so it is up to you to decide if my work has any value. Unfortunately, unless I make some money, I can't even buy food, and as I said, this is a full time job. I work on Visuino and the rest of my products ~100 hours a week, with only bathroom and sleep breaks. If it was free, it would be of the quality and the functionality of the Arduino IDE. You decide if you want something of the level of the Arduino IDE for free, or something of the level of Visuino for $9.99 ;-)
Sorry! Your work and your time have a very big value. Reading your Instructable means that I appreciate it. I was just amazed because most of the Instructables are DIY and preferably low cost. I will give this software a look; it seems not bad.
P.S.: sensors can be free when you reuse old electronic. ;)
Again, please accept my apologies.
Thank you MonyCris, No problem, I hope that the $9.99 is not that high of a cost. I actually wanted to price it lower, but at that level the payment processors take most of the money :-( . I did a survey last year, and the lowest suggested price was $20, I decided to half it to make it as affordable as I can :-) . I am sorry for probably overreacting. I am trying my best to make it affordable for everyone ;-)
Enjoy :-)
I am interested in a digital vernier caliper that sends its signal to a PC. Can this project be converted to send a signal to a PC?
Yes, it can easily be changed. Here is one such Instructable:...
I have more Instructables posted with more information on connection to PC ;-)
Hi. Nice project. How about the tolerance?
Thank you!
What tolerance you have in mind? Different sensors have different tolerances, but if you have something other than the sensor measurement tolerance in mind, please let me know.
| http://www.instructables.com/id/Arduino-Nano-Ultrasonic-RangerPing-Distance-I2C-2-/ | CC-MAIN-2018-05 | refinedweb | 1,796 | 58.76 |
Web Crawler – Part 1
Hi Everyone! Today we will learn about Web Crawlers. A Web Crawler is a technique that we use to extract information from a web page. In this basic crawler we will extract all the website links present on a web page. To implement it we will use a Python library called "BeautifulSoup".
Library for Web crawler: BeautifulSoup
It is a Python library which simplifies the extraction of data from HTML and XML files. It automatically converts all outputs to the UTF-8 convention and all inputs to Unicode. There are multiple methods in BeautifulSoup which make it easy to render all the XML data of web pages. Note: While using BeautifulSoup in Python 3 you may face warnings, and those warnings just terminate the execution of our code, so to suppress them, add "lxml" to the requests you make using BeautifulSoup.
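If the libraries are not installed yet, they can be added with pip first (the package names below are the usual ones on PyPI):

pip install beautifulsoup4 requests lxml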
The code basic Web Crawler is as follows:
[sourcecode language="python" wraplines="false" collapse="false"]
from bs4 import BeautifulSoup
import requests

url = ""  # the URL of the page to crawl goes here
r = requests.get(url)
data = r.text
soup = BeautifulSoup(data, "lxml")

for link in soup.find_all('a'):
    aString = link.get('href')
    # print(aString)
    # Uncomment the line above to see all the results we are extracting from the page
    if aString != None:
        # Added this check because sometimes we get a None type object or some other string from websites
        if aString.startswith("http"):
            # Extracting only links starting with http; you can modify it as per your requirements
            print(aString)
[/sourcecode]
In the above code I have provided the URL of our blog; the Crawler goes through the Home Page of our website and extracts all the links present on that page.
In our next tutorial we will make some major changes to our crawler, like navigating to other pages from the links present on the first page, saving all the links found on the next page, and much more. So stay tuned and keep learning!!
| https://mlforanalytics.com/2018/04/21/web-crawler-part-1/ | CC-MAIN-2021-21 | refinedweb | 317 | 63.19 |
Cheap Holidays - Tips
Sometimes taking a cheap holiday is the only way to have a vacation away from home and by employing some great money saving tips outlined here you can still take the holiday of your dreams.
Many years ago, before the internet was a major source of travel information, I was a travel agent and I helped people to choose cheap holidays. Here are some cheap holidays tips learnt from the people I helped.
Typically the cost of travel can be grouped into four major categories:
- Airfares or travel costs to reach the destination;
- Accommodation;
- Food;
- Spending and incidentals.
The amount spent in each category will depend on a number of factors such as:
- the length of your holiday,
- how far you are traveling from your home base,
- the standard of accommodation you choose to stay in.
Sometimes airfares are the significant cost; sometimes it’s the accommodation. At the end of this article I’ve shown an example of a trip where significant savings were possible. This trip saved over one thousand dollars.
Reducing costs so your holidays are cheap means looking for savings in each of the categories outlined above as small adjustments mount up. Some ideas:
Airfares or travel costs
- Investigate new airlines/cruise lines or starting new routes. Airlines and cruise lines, offer introductory deals to promote a new service. These offers can be incredibly cheap. Sometimes established competitors drop their prices too to protect their market share. I recently bought a fare from Sydney to Los Angeles (with the bonus of a stopover in Auckland) for $1,000 return (including taxes!). By taking advantage of an existing competitor trying to protect its market share over a new entrant I saved over $1300 on the normal regular promotional fare.
- Travel outside peak times. Travel during shoulder seasons for cheaper holidays. Low season is even cheaper, but the weather is not always traveler friendly. Book early. Many airlines and cruise lines offer cheap holiday deals many months in advance. Be sure of your dates though, as cancellation fees are prohibitive. Check the fare conditions too.
- If you are after a holiday experience and are flexible, look at alternative destinations. Going to the Pacific Island of Vanuatu rather than Tahiti's Bora Bora will give you a great cheap Pacific Island holiday – just a different one. Or, choosing to experience the ancient temples around Cambodia's Angkor Wat rather than the pyramids of Egypt will give you a stunning alternative destination and very likely a much cheaper holiday.
- Relocation deals - in many places rental car and motor home companies need to relocate their stock from one place to another. By taking advantage of these offers, for a few dollars a day you will have a very cheap holiday. The offers are usually available only a week or two in advance, so flexibility is the key to taking advantage of these amazingly cheap holiday deals.
- If at all possible, take holidays outside school holidays.
More after the photos.....
Accommodation
- Look at staying in “flashpacker” type accommodation. Being a step up from backpackers you can often obtain a very nice double or single room with ensuite at a significantly cheaper price than hotel accommodation. A good resource for checking out cheap accommodation alternatives is.
- Look for new hotels opening or those in the throes of, or having just completed, renovations. Sometimes the inconvenience is minor. Do check the work schedule to ensure jackhammers in the middle of the night won't disrupt your precious holiday sleep!
- Try to negotiate the price of your hotel/flashpacker room, especially if you plan to stay for several days.
- Stay just outside the centre. Stay close enough so that you can walk and enjoy the sights, sounds and feel of the place as you stroll.
- If you are taking your cheap holidays outside peak times, consider arriving without a room booking. That way you can look around, check the rooms available and negotiate a price to suit your budget.
Spending and incidentals
- Shop around for travel insurance. If you are leaving your home country to travel internationally, travel insurance is an absolute must. Cheap holidays travel insurance deals can be found online. I do urge you to read the small print because the “devil is always in the detail”.
- Day tours can be expensive but you can save money on them. One tip: If traveling as a couple, look for another couple at your destination and combine resources. It can be cheaper for the four of you to get a taxi, maintain flexibility, perhaps make new friends and save you money.
- Limit the total number of stops on your holiday. Moving around increases the cost of the holiday in a way that staying in just one or two places won’t. Use one place as a hub and take day trips to see other tourist spots. You just need to be sure you don’t spend all day traveling, defeating the purpose. This will only work if the places you want to see are in relatively close proximity.
- Buy souvenirs from the markets rather than the larger shops.
- Walk more – this is a triple bonus as you see more, save money and lose weight.
Food
- Buy some food at a local market and enjoy a relaxed meal on the beach watching the sun go down. Take snacks and drinks when embarking on a long journey.
- Look out for special meal deal options. Eating dinner in Australia on the bank of the Yarra River in Melbourne I was able to enjoy a steak meal which included a lovely glass of red wine. In the Laos capital of Vientiane my regular breakfast was a special deal – large fruit salad, coffee, bread roll and cheese all for less than US$3. I ate the fruit and saved the bread roll and cheese for a snack later in the day.
- Consider eating breakfast late and combining it with lunch.
An example of the ways to save money on your holiday:
In this example the original price of the holiday was $5,040, but after making some adjustments (saving $600 on airfares, $500 by electing to stay in cheaper accommodation, $40 on food and $200 on incidentals), savings of $1,340 are easily achievable whilst still retaining a degree of comfort.
If you are flexible in your approach, have a little time and willingness to research you can experience cheap holidays….or enjoy a second holiday on the savings!
- Surprising, Friendly and Beautiful Ninh Binh - Vietnam
Ninh Binh Countryside The town of Ninh Binh is a surprise package for visitors to Vietnam. Beautiful scenery, flat open rice paddies in every shade of green imaginable are a treat to...
Popular
Hmm.. Nice tips here. Great post...
Thanks for posting this piece!
Some great ideas here! I have couple of my own that can also help cut the cost of luxury holidays.
The cheapest way to travel is by hitchhiking and couchsurfing. In Summer 2009 I made a nine week trip for about 300 US$. And it is interesting, you meet a lot of nice people you otherwise wouldn't have met, and you have a lot of fun.
6
| https://hubpages.com/travel/Cheap-Holidays-Tips | CC-MAIN-2018-39 | refinedweb | 1,198 | 63.49 |
Scheduling a Golf Tournament
In May 1998, a post on the sci.op-research newsgroup
posed the following problem: suppose you have 32 golfers who organize
each week into groups of four. The original post said that the goal is
to select the foursomes so that each person only golfs with the same
person once. How many weeks before all of the options are exhausted?
This problem has since become a famous combinatorial problem and is
usually called the social golfer problem. It is a
generalization of a
round-robin
tournament. It helps to bound the problem. You can easily show that there is
no solution for a number of weeks strictly greater than ten. This
follows from the fact that each player plays with three other players
each week, and since there is a total of 31 other players, this means
a player runs out of opponents after about 31/3 ≈ 10.3 weeks. In this article, we
will see how to construct a nine-week schedule. Constructing a ten-week schedule or proving that none exists remains an open problem!
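Spelled out, the counting argument is just: after w weeks a golfer has met at most 3w distinct partners, so a valid schedule needs 3w ≤ 31, i.e. w ≤ 10; an eleventh week is therefore impossible.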
To solve the nine-week schedule problem, we will use Koalog Constraint Solver,
a Java library for constraint programming, to write the code.
Information about Koalog Constraint Solver (including its JavaDoc) is
available at. All
of the techniques presented in this article do not depend on the choice
of the solver, but could be implemented using many commercial or
open source solvers.
Constraint Programming
Constraint programming (CP) is a technique for solving
combinatorial problems such as the social golfer problem.
It consists of modelling the problem using mathematical relations or
constraints (precisely, the term constraint denotes the implementation
of the mathematical relation).
Hence, a constraint satisfaction problem (CSP) is defined by:
- A set of variables.
- A set of domains (possible values) for the variables.
- A set of constraints/relations on the variables.
One has to define a strategy for enumerating the
different solutions. CP differs from naive enumeration methods by the
fact that constraints contain powerful filtering algorithms (usually
inspired by operational research techniques) that will
drastically reduce the search space by dynamically reducing the
domains of the variables during the enumeration.
Modeling the Golfers Problem
Let's first create a problem object that will be the placeholder for the problem model. Using Koalog Constraint Solver, this can be done by subclassing BaseProblem:
public class GolfersProblem extends BaseProblem {
    public static final int GROUP_NB = 8;
    public static final int GROUP_CARD = 4;
    public static final int GOLFERS_NB = GROUP_NB * GROUP_CARD;
    // the forthcoming model will come here
}

We have defined some constants to make the code more readable, but also because the problem can be generalized to various group numbers and sizes. In the following, the number of weeks (n) will be a parameter of the problem.
An important step in constraint programming is defining the problem variables. In our case, we want to find, for each week, the set of golfers playing in each group. Fortunately, Koalog Constraint Solver comes with set variables (in addition to Boolean and integer variables). Thus, it is possible to write:
SetVariable[][] group;
where the instance variable group will be indexed by weeks (from 1 to n) and then by groups (from 1 to GROUP_NB).
Note that it would also be possible to model the problem using integers
(defining, each week, for each golfer, the number of its group) or Booleans
(deciding, each week, whether a golfer belongs to a given group or not).
But a set-based representation is much more compact and powerful, since it
eliminates most of the problem symmetries.
We can now model our problem and define a
GolfersProblem
constructor:
public GolfersProblem(int n) {
super();
group = new SetVariable[n][GROUP_NB];
Collection golfers = new ArrayList();
for (int i=1; i<=GOLFERS_NB; i++) {
golfers.add(new Integer(i));
}
List vars = new ArrayList();
// to be continued
}
The collection golfers is the collection of all golfers (identified here by a number from 1 to GOLFERS_NB). The list vars will contain all of the problem variables.
Each group variable is a set of four golfers:
for (int i=0; i<n; i++) {
    for (int j=0; j<GROUP_NB; j++) {
        group[i][j] = new SetVariable(new SetDomain(Collections.EMPTY_SET, golfers));
        add(new ConstantCard(GROUP_CARD, group[i][j]));
        vars.add(group[i][j]);
    }
}

The domain new SetDomain(Collections.EMPTY_SET, golfers) states that the group is a set containing at least the empty set and at most the set of all golfers (previously defined). More generally, set domains are defined by two sets:
- A set of values that will be contained in the set; this set increases with time and it is the lower bound (LB) of the domain.
- A set of values containing the set, this set decreases with time and it is the upper bound (UB) of the domain.
For example, the set domain with lower bound {1} and upper bound {1, 2, 3} denotes the sets containing at least 1 and at most 1, 2, and 3: these are {1}, {1, 2}, {1, 3}, and {1, 2, 3}. A set domain is instantiated when the upper bound is equal to the lower bound.
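To make the lower-bound/upper-bound idea concrete, the small stand-alone sketch below (our own code, not part of the original article; it uses plain Java collections on a modern JDK) enumerates every set lying between a lower and an upper bound, reproducing the four sets listed above:

import java.util.*;

public class SetDomainDemo {
    // Enumerates all sets S with lb ⊆ S ⊆ ub.
    static List<Set<Integer>> between(Set<Integer> lb, Set<Integer> ub) {
        List<Integer> free = new ArrayList<>(ub);
        free.removeAll(lb);                       // elements that may or may not be included
        List<Set<Integer>> result = new ArrayList<>();
        for (int mask = 0; mask < (1 << free.size()); mask++) {
            Set<Integer> s = new TreeSet<>(lb);   // always contains the lower bound
            for (int i = 0; i < free.size(); i++) {
                if ((mask & (1 << i)) != 0) s.add(free.get(i));
            }
            result.add(s);
        }
        return result;
    }

    public static void main(String[] args) {
        Set<Integer> lb = new TreeSet<>(Arrays.asList(1));
        Set<Integer> ub = new TreeSet<>(Arrays.asList(1, 2, 3));
        System.out.println(between(lb, ub));      // [[1], [1, 2], [1, 3], [1, 2, 3]]
    }
}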
The constraint
ConstantCard states that the cardinality of the group
should be constant (here equal to
GROUP_CARD).
Each week the groups of golfers form a partition of the set of all golfers:
for (int i=0; i<n; i++) {
    add(new Partition(group[i], golfers));
}

We want the golfers to be social. The intersection of two groups should have at most one golfer in common:

for (int i=0; i<n-1; i++) {
    for (int l=i+1; l<n; l++) {
        for (int j=0; j<GROUP_NB; j++) {
            for (int k=0; k<GROUP_NB; k++) {
                add(new IntersectionMaxSize(1, group[i][j], group[l][k]));
            }
        }
    }
}

Finally, we define the array of variables to be displayed:

setVariables((SetVariable[]) vars.toArray(new SetVariable[0]));

We are done with the modelling; the complete model can be found online.

Solving the Golfers Problem

Let's first create a solver object that will be the placeholder for the strategy. Using Koalog Constraint Solver, this can be done by subclassing a BacktrackSolver:

public class GolfersSolver extends BacktrackSolver {
    SetVariable[][] group;

    public GolfersSolver(GolfersProblem p) {
        super(p);
        this.group = p.group;
    }

    public boolean choice() {
        // to be continued
    }
}
The choice method will be responsible for making choices and thus building a search tree, the nodes of which are called choice points. This method will be called successively until all of the group variables are instantiated.
We add the golfers by numbers to the groups:
    for (int i=1; i<=GolfersProblem.GOLFERS_NB; i++) {
        Integer j = new Integer(i);
        for (int d=0; d<group.length; d++) {
            for (int g=0; g<GolfersProblem.GROUP_NB; g++) {
                if (((SetDomain) group[d][g].getDomain()).isPossibleElement(j)) {
                    choicePoints.push();
                    group[d][g].chooseMinByAdding(choicePoints,
                                                  constraintScheduler,
                                                  j);
                    return true;
                }
            }
        }
    }
    return false;
If a golfer can be added to a group, a new node (choice point) is created
in the search tree and the method returns true. When no more golfers can be added, the method returns false.
The test ((SetDomain) group[d][g].getDomain()).isPossibleElement(j) returns true if golfer j can be added to the group group[d][g]. Precisely, isPossibleElement returns true if the element is contained in the UB of the domain, but not in the LB. The method chooseMinByAdding adds a new element to the set (increasing its LB).
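The solver base class drives the search around this choice method; the article does not show that loop, but conceptually it behaves roughly like the sketch below. This is our own simplification: none of the helper names (propagate, hasChoicePoint, backtrackAndTryAlternative) are Koalog API, they are placeholders for what any backtracking constraint solver has to do.

// Our rough, hypothetical sketch of a BacktrackSolver-style search loop.
boolean search() {
    while (true) {
        boolean consistent = propagate();        // run the constraints' filtering; may empty a domain
        if (consistent) {
            if (!choice()) {                     // strategy has nothing left to decide:
                return true;                     // every variable is instantiated, solution found
            }
            // choice() pushed a choice point and tried "add golfer j to this group"
        } else {
            if (!hasChoicePoint()) {
                return false;                    // nowhere left to backtrack: no solution
            }
            backtrackAndTryAlternative();        // undo the last decision, e.g. forbid j in that group
        }
    }
}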
We are done with the strategy; the complete solver can be found online.
Results
This strategy performs very well. Koalog Constraint Solver is able to find a solution to the nine-week problem in 440 ms (on a 1600 MHz PC with J2SE 1.4):
- Week 1: {1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}, {13, 14, 15,
16}, {17, 18, 19, 20}, {21, 22, 23, 24}, {25, 26, 27, 28}, {29, 30, 31, 32}
- Week 2: {1, 5, 9, 13}, {2, 6, 10, 14}, {3, 7, 11, 15}, {4, 8, 12, 16}, {17,
21, 25, 29}, {18, 22, 26, 30}, {19, 23, 27, 31}, {20, 24, 28, 32}
- Week 3: {1, 6, 11, 16}, {2, 5, 12, 15}, {3, 8, 9, 14}, {4, 7, 10, 13}, {17,
22, 27, 32}, {18, 21, 28, 31}, {19, 24, 25, 30}, {20, 23, 26, 29}
- Week 4: {1, 7, 17, 23}, {2, 8, 18, 24}, {3, 5, 19, 21}, {4, 6, 20, 22},
{9, 15, 25, 31}, {10, 16, 26, 32}, {11, 13, 27, 29}, {12, 14, 28, 30}
- Week 5: {1, 8, 19, 22}, {2, 7, 20, 21}, {3, 6, 17, 24}, {4, 5, 18, 23},
{9, 16, 27, 30}, {10, 15, 28, 29}, {11, 14, 25, 32}, {12, 13, 26, 31}
- Week 6: {1, 10, 18, 25}, {2, 9, 17, 26}, {3, 12, 20, 27}, {4, 11, 19, 28},
{5, 14, 22, 29}, {6, 13, 21, 30}, {7, 16, 24, 31}, {8, 15, 23, 32}
- Week 7: {1, 12, 21, 32}, {2, 11, 22, 31}, {3, 10, 23, 30}, {4, 9, 24, 29},
{5, 16, 17, 28}, {6, 15, 18, 27}, {7, 14, 19, 26}, {8, 13, 20, 25}
- Week 8: {1, 14, 20, 31}, {2, 13, 19, 32}, {3, 16, 18, 29}, {4, 15, 17, 30},
{5, 10, 24, 27}, {6, 9, 23, 28}, {7, 12, 22, 25}, {8, 11, 21, 26}
- Week 9: {1, 15, 24, 26}, {2, 16, 23, 25}, {3, 13, 22, 28}, {4, 14, 21, 27},
{5, 11, 20, 30}, {6, 12, 19, 29}, {7, 9, 18, 32}, {8, 10, 17, 31}
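One nice property of a published schedule is that it is easy to check mechanically. The short routine below (our own code, independent of the solver; it only needs java.util.Set and java.util.HashSet) verifies that no pair of golfers is grouped together more than once. The nine weeks listed above can be typed in as an int[][][] array and passed to it.

// Returns true if no pair of golfers appears together in more than one group.
static boolean isSocial(int[][][] schedule) {
    Set<String> seenPairs = new HashSet<>();
    for (int[][] week : schedule) {
        for (int[] group : week) {
            for (int a = 0; a < group.length; a++) {
                for (int b = a + 1; b < group.length; b++) {
                    String pair = Math.min(group[a], group[b]) + "-" + Math.max(group[a], group[b]);
                    if (!seenPairs.add(pair)) {
                        return false;   // this pair has already played together
                    }
                }
            }
        }
    }
    return true;
}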
Precisely, only 32 nodes are explored in the search tree. This compares with the huge size of the full search tree (32!^9).
It is also interesting to notice that local search methods perform poorly on
this problem.
Of course, this model can be generalized to arbitrary numbers of groups and golfers, and it is also possible to add side constraints such as "Golfer 1 and Golfer 2 would like to golf together in week 1."
A group blog from members of the VB team
Why not get people to start using the .NET way of doing things, which is supported by mscorlib? LEFT, RIGHT, MID stink of old school VB6.
Can we get a compiler switch that flags VB-only feature usage so that we can ensure our applications only use what is available to both C# and VB?
As the article said - if you can't live without this stuff then here's how to put it back in. I wouldn't advocate this as an approach, as using the intrinsic .NET functionality is a much better way forward than trying to recreate the past, but sometimes people mistake this functionality for part of the language itself. Take a look in the language specification and you will find that many of the functions you may consider part of the language are not even mentioned. The message is really: "new platform, new code, write using supported .NET functionality."
A compiler switch to do such things would be of limited use, as the co-evolution strategy of the compiler teams means that many of the differences are simply disappearing. Sure, VB has XML literals and C# has iterators and some pointer functionality, but iterators will be in the next version of VB (check out the Async CTP; Async is implemented in both languages). So I think the strategy is to remove the differences, and they are diminishing. There are many other small differences, but the cost/benefit probably would not justify the effort.
Tom,
I'm curious. Why would one want to limit one's usage of either language to the common subset of the two? Are there any particular VB-specific language features you have in mind other than the runtime members/My namespace? It would seem a shame for either language to have productivity features handicapped arbitrarily. Imagine where the state of the .NET platform would be if no one ever used XML literals; the Aggregate, Distinct, Skip, Skip While, Take, and Take While query operators; named and optional arguments; dynamic runtime binding; iterators; iterator lambdas; pointers; clean COM interop; and various other nuggets that have existed at one time or another in one language or the other. I can imagine some reasons for doing this but I'm curious about yours in particular, if you don't mind sharing.
Regards,
-ADG
I would like to flag usage of VB-only .NET Framework API calls. Normal legal VB syntax would not be included in the flag.
I have the impression that a last wave of VB6 programmers is finally rolling in, clinging to the flotsam that once belonged to a great ship. Let's feed them, give them dry clothing, and welcome them into the woolly world of VB.NET, so we can all move on with our lives and leave VB6 for what it was.
Good info. A few people have commented on using the framework functions instead of those "old school" vb6 functions.
But there are differences, and in some ways, very convenient differences. For instance, the MID, LEFT and RIGHT functions take a string as an argument, whereas the equivalent functions on the string object +require+ that the variable actually contain a string object. If it's uninitialized, those functions will fail, whereas the older functions will still work (they just assume the value of the string is a null).
Generally, I agree that using the newer framework is the better tack, but it's good to realize there are differences that might be worth carrying forward.
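To illustrate the difference described above, here is a minimal sketch of our own (not from the original post), comparing the VB runtime function with the equivalent String method when the variable is uninitialized:

Module NullStringDemo
    Sub Main()
        Dim s As String = Nothing

        ' The classic VB runtime function treats Nothing as an empty string.
        Console.WriteLine("[" & Microsoft.VisualBasic.Strings.Left(s, 3) & "]")  ' prints []

        ' The framework method requires an actual String instance.
        Try
            Console.WriteLine(s.Substring(0, 3))
        Catch ex As NullReferenceException
            Console.WriteLine("Substring failed: the string was Nothing")
        End Try
    End Sub
End Module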
You'll need...
Assortment of LEGO, including wheels (we used a selection from the LEGO Education SPIKE Prime kit)
Small breadboard, buzzer, and jumper leads (optional)
5 V power supply with a barrel jack (optional)
Set up Build HAT
Before you begin, you’ll need to have set up your Raspberry Pi computer and attached a Build HAT. If you have a Maker Plate™, mount your Raspberry Pi to it using M2 bolts and nuts, making sure your Raspberry Pi is mounted on the side without the ‘edge’.
Mounting Raspberry Pi this way round enables easy access to the ports as well as the SD card slot. The Maker Plate will allow you to connect Raspberry Pi to the main structure of your dashboard more easily (this is an optional extra).
Line up the Build HAT with your Raspberry Pi, ensuring you can see the ‘This way up’ label. Make sure all the GPIO pins are covered by the HAT, and press down firmly.
You should now power your Raspberry Pi using the 7.5 V barrel jack on the Build HAT, which will allow you to use the motors.
You will also need to install the buildhat Python library. Open a Terminal window on your Raspberry Pi and enter:
sudo pip3 install buildhat
Press ENTER and wait for the ‘installation completed’ message.
Motor encoders
Motors with encoders can not only rotate: they can also accurately report how many degrees they have been rotated.
The LEGO Spike™ motors all have encoders. If you look at the rotating disc part of the motor, you will see a mark shaped like a lollipop that can be lined up with the 0 mark on the white body of the motor itself. This is the encoder set to zero degrees, and any angular movement of the motor shaft can be measured relative to this point.
A motor encoder, also called a rotary or shaft encoder, is an electromechanical device that allows you to record the angular position or motion of the axle. It normally does this by converting the angular position to an analogue or digital output.
If a motor has an encoder, that means you can very accurately set the position of the axle. It also allows you to use the motor as an input device so that if something changes the position of the axle, this can be registered and used to trigger other actions in a computer program. Open Thonny from the Programming menu, click on the Shell area at the bottom, and enter:
from buildhat import Motor
motor_left = Motor('A')
motor_left.get_aposition()
Depending on how well you positioned the motor at the start, you should get a value close to ‘0’. Move the motor and type the last line again, and see how the value changes.
React to motor encoder movement
To use the LEGO Technic™ motors as a controller for a game, you’ll need to be able to constantly read their absolute positions.
In the main Thonny window above the Shell, you can use the commands you already know to find the absolute position of the motor. Then, in a while True: loop, you can print the value of the position.
Enter this code, save the file, and press Run.
from buildhat import Motor

motor_left = Motor('A')

while True:
    print(motor_left.get_aposition())
You should see that your program continually prints the position of the motor. If you rotate the motor, these values should change.
There is a better way of doing this, though. You only need to read the motor position if it is moved.
Delete the while True: loop from your program and create this simple function that prints the absolute position of the motor. You will also need to add another import line to use the pause() function.
from buildhat import Motor
from signal import pause

motor_left = Motor('A')

def moved_left(motor_speed, motor_pos, motor_apos):
    print(motor_apos)
Now set this function to run when the motor's encoder is moved:
from buildhat import Motor
from signal import pause

motor_left = Motor('A')

def moved_left(motor_speed, motor_pos, motor_apos):
    print(motor_apos)

motor_left.when_rotated = moved_left

pause()
Run your code and you should see the values printed out in the Shell change when the motor is moved.
Make your Pong screen
Open a new file in Thonny and add the following code to import the Turtle, time, and Build HAT libraries, and then set up a screen. Run the file and you should see a black window with the title ‘PONG’ open.
from turtle import Screen, Turtle
from time import sleep
from buildhat import Motor

game_area = Screen()          #Create a screen
game_area.title("PONG")       #Give the screen a title
game_area.bgcolor('black')    #Set the background colour
game_area.tracer(0)           #Give smoother animations
The Turtle library also has a useful way of setting the co-ordinates for a screen area. Add this line to your program:
game_area.tracer(0)
game_area.setworldcoordinates(-200, -170, 200, 170)
This creates a rectangular playing area 400 units wide and 340 high, with 0 being in the centre.
Now, you need to update your game area, to see the paddle and ball. Add a game loop to the bottom of your code, and call the update() method.
while True:
    game_area.update()
Run your code and you should see a black window appear. Next, you can make a ball by using a Turtle that is set to be a white circle. The ball should start in the middle of the screen, and shouldn’t draw a line when it moves. Above your while True: loop, add the following code:
ball = Turtle()
ball.color('white')
ball.shape('circle')
ball.penup()
ball.setpos(0, 0)

while True:
Run your code again. You should see a white ball appear in your window. Next, you can set up a paddle in the same way. It will be a green rectangle, and positioned on the far left of the screen.
paddle_left = Turtle()
paddle_left.color('green')
paddle_left.shape('square')
paddle_left.shapesize(4, 1, 1)
paddle_left.penup()
paddle_left.setpos(-190, 0)
Run your code and you should see your ball and paddle.
Move the ball
The ball is going to bounce around the screen, so two variables are needed to keep track of its speed in both the ‘x’ and ‘y’ directions. These numbers can be larger to make the game harder, or smaller to make the game easier.
ball.speed_x = 1
ball.speed_y = 1
You can check where a Turtle is by using turtle.xcor() and turtle.ycor() to find the ‘x’ and ‘y’ co-ordinates, respectively. So to make the ball move, you can combine the position and speed.
Add the lines below to your program:
while True:
    game_area.update()
    ball.setx(ball.xcor() + ball.speed_x)
    ball.sety(ball.ycor() + ball.speed_y)
Run the program and see what happens! The ball should move diagonally upwards towards the top right corner of the game area… and then keep on going! If you want your game to be fast and challenging, you can increase the speed_x and speed_y values to make the ball move more quickly.
The ball should bounce off the top wall rather than disappear off the screen. To do this, the speed can be reversed, making the ball travel in the opposite direction, if its ‘y’ position is greater than 160.
Add the following code into your game loop and run it.
while True:
    game_area.update()
    ball.setx(ball.xcor() + ball.speed_x)
    ball.sety(ball.ycor() + ball.speed_y)
    if ball.ycor() > 160:
        ball.speed_y *= -1
Run your code again, and the ball should bounce off the top of the screen, but disappear off the right of the screen.
In the same way that the code checks the upper ‘y’ position of the ball, to make it bounce, it can check the right ‘x’ position and the lower ‘y’ position, in your game loop.
Add these checks on the ball’s position.
if ball.ycor() > 160:
    ball.speed_y *= -1
if ball.xcor() > 195:
    ball.speed_x *= -1
if ball.ycor() < -160:
    ball.speed_y *= -1
The ball should now bounce around the screen, and fly off the left edge. Next, you will control your paddle to reflect the ball back from the left edge.
Control the paddle
The LEGO Spike motor is going to be used to control the position of the paddle, but you don’t want to be able to make full turns.
A simple way to limit the motion of the wheel is to add a LEGO element to prevent the wheel turning through a complete rotation.
Line up the encoder marks on your motor using the wheel, like before. Insert a peg or axle as close to level with the markers as possible.
Add a line to create the motor_left object after the import line.
from buildhat import Motor

motor_left = Motor('A')
Now a new variable is needed to keep track of the location of the paddle. This will be called pos_left and set to 0.
ball.speed_x = 0.4
ball.speed_y = 0.4

pos_left = 0
Create a function for the paddle that will run when the motor encoder moves. Note that it uses a global variable so that it can change the value of the pos_left variable.
def moved_left(motor_speed, motor_rpos, motor_apos):
    global pos_left
    pos_left = motor_apos
Now add a single line that will use that function each time the motor is moved. It can be just before your while True: loop.
motor_left.when_rotated = moved_left
Then, add a line to the while True: loop to update the paddle object on the screen to the new position.
if ball.ycor() < -160:
    ball.speed_y *= -1

paddle_left.sety(pos_left)
Run your code and then turn the wheel on your motor encoder. You should see your paddle moving up and down the screen.
In case there are errors, your code should currently look like pong.py.
Paddle collisions
The game is nearly complete – but first you need to add some extra collision detection that covers the ball hitting the paddle.
Within the while True: loop, check if the ball’s ‘x’ position is within the horizontal area covered by the paddle. Also use an and to check the ball’s ‘y’ position is in the vertical line in which the paddle moves.
paddle_left.sety(pos_left)

if (ball.xcor() < -180 and ball.xcor() > -190) and (ball.ycor() < paddle_left.ycor() + 20 and ball.ycor() > paddle_left.ycor() - 20):
    ball.setx(-180)
    ball.speed_x *= -1
Try the program out. You should be able to bounce the ball off your paddle and play a solo game of ‘squash’!
Now you have a way of preventing the ball from disappearing off-screen, it’s time to think about what happens if you fail to make a save. For now, let’s just reset the ball back to the start.
Add this code within the while True: loop:
ball.speed_x *= -1

if ball.xcor() < -195:    #Left
    ball.hideturtle()
    ball.goto(0, 0)
    ball.showturtle()
Once you’re happy with the various settings, it’s time to add in the second paddle. Using what you’ve created for the left-hand paddle as a starting point, add a second paddle on the right-hand side of the game area.
First of all, connect a second LEGO Technic motor to the Build HAT (port B) and set it up in the program.
motor_left = Motor('A')
motor_right = Motor('B')
You can copy and paste your code for setting up your left paddle, and change the name and values for your right paddle. Create your right paddle.
paddle_left = Turtle()
paddle_left.color('green')
paddle_left.shape("square")
paddle_left.shapesize(4, 1, 1)
paddle_left.penup()
paddle_left.setpos(-190, 0)

paddle_right = Turtle()
paddle_right.color('blue')
paddle_right.shape("square")
paddle_right.shapesize(4, 1, 1)
paddle_right.penup()
paddle_right.setpos(190, 0)
Add a variable for the right paddle position, a function for the paddle, and the line to call the function when the right motor is moved.
pos_left = 0
pos_right = 0

def moved_left(motor_speed, motor_rpos, motor_apos):
    global pos_left
    pos_left = motor_apos

def moved_right(motor_speed, motor_rpos, motor_apos):
    global pos_right
    pos_right = motor_apos

motor_left.when_rotated = moved_left
motor_right.when_rotated = moved_right
Add a line to update the paddle on screen to the while True: loop:
paddle_left.sety(pos_left)
paddle_right.sety(pos_right)
Currently, the ball will bounce off the right-hand wall. Modify the lines of your program that make that happen so that the ball is instead reset to the centre. Now add a similar condition for the right paddle as you did with the left, to handle collisions.
if (ball.xcor() < -180 and ball.xcor() > -190) and (ball.ycor() < paddle_left.ycor() + 20 and ball.ycor() > paddle_left.ycor() - 20):
    ball.setx(-180)
    ball.speed_x *= -1

if (ball.xcor() > 180 and ball.xcor() < 190) and (ball.ycor() < paddle_right.ycor() + 20 and ball.ycor() > paddle_right.ycor() - 20):
    ball.setx(180)
    ball.speed_x *= -1
You should now be able to enjoy a basic two-player game of Pong. Your code should currently look like two_player_basic.py.
Improve your project
There are a few additional features you can add to finish off your game. Keep track of the score by using two variables (one for each player) and update them whenever a round is lost. First of all, declare the new variables towards the top of the program and set the score to zero.
score_r = 0
score_l = 0
Whenever a ball is missed, increment the appropriate score variable by one. There are two conditional tests you’ll need to modify.
if ball.xcor() > 195:     #Right
    ball.hideturtle()
    ball.goto(0, 0)
    ball.showturtle()
    score_r += 1

if ball.xcor() < -195:    #Left
    ball.hideturtle()
    ball.goto(0, 0)
    ball.showturtle()
    score_l += 1
Now you need to display the score in the game area. You can use a fourth Turtle to do this. Add the following to your program after the creation of the paddle and ball Turtles, but before the while True: loop.
writer = Turtle()
writer.hideturtle()
writer.color('grey')
writer.penup()

style = ("Courier", 30, 'bold')
writer.setposition(0, 150)
writer.write(f'{score_l} PONG {score_r}', font=style, align='center')
You can look at the documentation for the Turtle library to see what other options there are for how the text is displayed.
If you run your program now, the score and Pong legend should appear, but the scores themselves won’t get updated.
Find the two conditionals for each of the scoring situations – when the ball is missed by a paddle and disappears to the left or right – and update the score by rewriting the new value.
writer.clear()
writer.write(f'{score_l} PONG {score_r}', font=style, align='center')
Adding a buzzer
To include some simple sound effects, connect a buzzer to the GPIO pins on Raspberry Pi. Instead of using a breadboard, you could use jumper leads with female sockets at both ends and poke the legs of the buzzer into the socket. Then use some LEGO elements to mount the buzzer so that it doesn’t flop around and become disconnected during frantic gaming sessions.
Now add the gpiozero library to the list of imports at the start of your program:
from gpiozero import Buzzer
Then, make the buzzer available for the program to use by setting which pin you have connected the positive (+) leg to. Here, we used GPIO 17.
buzz = Buzzer(17)
If you didn’t use GPIO 17, change the value to reflect the pin your buzzer is connected to.
Now, whenever the paddle and ball make contact, you want the game to play a short tone.
Add this line to each action part of the collision detection if conditionals for the ball and paddle:
buzz.beep(0.1,0.1,background=True)
Then add a line to play a longer tone whenever the player misses the ball.
buzz.beep(0.5,0.5,background=True)
You can read more about the options available with buzzers in the GPIO Zero documentation.
Improve the Pong game
Here are some ideas to improve your game of Pong. You can add even more randomness to the speed and trajectory of the ball, and make the ball move faster as the game progresses.
Right now the game carries on forever – consider having a target score that a player must achieve in order to win and then start a new set of rounds; change the scoring method to count how many times the players return the ball to one another, and reset when someone misses. Or, introduce some haptic feedback, so that the motors turn a small amount when a point is lost.
At the moment it doesn’t matter what part of the paddle connects with the ball; it will always bounce off at the same angle as it hit. Modify the collision code so that the angle becomes more obtuse if the ball makes contact close to the end of the paddle.
Create more games that use the LEGO Technic motors as controllers.
How about a game in the style of Angry Birds, where two controllers are used to set the launch trajectory and the amount of force applied to the catapult?
Weigen Liang:
The current Struts framework creates only one instance of each Action class, and uses that instance to serve all the relevant requests. This makes thread safety of Action classes a big issue for newer users of the Struts framework. The struts-user mailing list has many, many such questions from Struts users. This is similar to servlets themselves. The servlet spec has the SingleThreadModel to "try" to solve this problem, but as we know, this does not really work.
If one instance of Action object is created for EACH request, then we would not have this threading issue. This instance would be tied to a request object and a response object, and has life span of that particular request. The class would be like this:
public class Action {

    public Action(ActionMapping mapping, ActionForm form,
                  ServletRequest request, ServletResponse response) {
        ...
    }

    public ActionForward execute() {
        ...
    }

    private ActionMapping mapping;
    private ActionForm form;
    private ServletRequest request;
    private ServletResponse response;
    // plus other instance variables for this particular request
    ...
}
Considering that a significant percentage of Struts users have limited understanding of thread safety, removal of such a concern from the framework would certainly help the users.
Jacob Hookom:
Actions follow the same constraints as the Servlet API with respect to thread safety -- see HttpServlet (not to mention that I believe they are getting rid of the single-thread HttpServlet model altogether in the API).
Andrew Hill:
The main argument against this is that it is inefficient and will result in more object creation - taking time and memory.
Personally I don't think that one extra object per request would be such a big deal - I mean, it has already created an ActionForm instance, gone through all the effort of populating it, etc. One more new Action object probably wouldn't hurt too much... (and it's pretty insignificant compared to all the tons of objects that get created and garbage collected inside one of my actions!)
And like you point out in your example, it would make the signature a lot simpler for the execute method. Furthermore one could add extra setters for custom stuff that could be called from a custom request processor.
That said, the efficiency argument does still stand - and I can't see the Struts developers being ready to make such a big break with the current design. There is also a certain truth in the argument that things are simplified for developers by following the same technique as the Servlet API itself.
In the beginning, object creation was expensive, and the framework was being parsimonious. Using Action this way also made it easy to refactor working servlets into an Action. But those days are past.
Moving forward, IMHO, the framework should support a wider range of action paradigms. We should also support creating actions using scripts, rather than just Java classes. JPublish supports this directly, and there is already a Struts extension along these lines.
This sounds scary, but there's really no reason why JSPs could not be used as actions (so long as they did not contain any display code).
The use case for scripts and JSPs would be to open the framework to non-Java programmers. Right now, to build a Struts web app, there's a lot of overhead involved just in getting a Java compiler, Ant, and an editor/IDE up and running.
Another approach entirely would be to combine ActionForm and Actions into a single object that would collect and validate its own data and then execute itself. The use case here would be to permit better object orientation to complex applications.
Now, I'm not suggesting that the current approach be modified. I'm suggesting that we increase the number of alternative approaches.
I think the trick to making this sort of thing work lies in defining our use cases and designing the configuration file to suit our definitions. We do have a very good design now, but for Struts 2.0, we might look at making it more generic and allowing for a broader range of functionality.
I've posted some ideas I've been kicking around for a while on the Wiki.
Craig R. McClanahan:
Changing this now would be a major backwards incompatibility issue, so it will not be done in any Struts 1.x release. We can look at other approaches in a 2.x time frame.
Per-request object instantiation is cheaper in 1.4 JDKs, but it's still not free. However, lots of Struts users still run on 1.2 and 1.3 JDKs where this is still a pretty big issue.
> Considering that a significant percentage of Struts users have limited understanding of thread safety, removal of such a concern from the framework would certainly help the users.
Yet the large majority of those people seem to figure it out after a while -- especially the single most important rule (do not store per-request state information in instance variables of an Action). As others have pointed out, the Servlet API imposes exactly the same sort of issue; it wasn't invented here.
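As a minimal illustration of that rule (our own example, not from the thread; imports from org.apache.struts.action and javax.servlet.http are omitted), the first version below is broken under concurrent requests because every thread shares the one Action instance, while the second keeps the per-request state in a local variable:

// Broken: customerId is shared by every request hitting this singleton Action.
public class UnsafeAction extends Action {
    private String customerId;                       // per-request state in an instance field

    public ActionForward execute(ActionMapping mapping, ActionForm form,
                                 HttpServletRequest request, HttpServletResponse response)
            throws Exception {
        customerId = request.getParameter("customerId");
        // ... another thread may overwrite customerId before this one uses it ...
        return mapping.findForward("success");
    }
}

// Safe: the same state lives on the stack of the thread handling the request.
public class SafeAction extends Action {
    public ActionForward execute(ActionMapping mapping, ActionForm form,
                                 HttpServletRequest request, HttpServletResponse response)
            throws Exception {
        String customerId = request.getParameter("customerId");
        return mapping.findForward("success");
    }
}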
Bradley Hill: Why not use the lifecycle model of a JSP tag for Actions? Pool them for performance (if it's even worth the trouble), include a reset/cleanup method to be called by the framework, and guarantee single-threaded access within the scope of a request.
It would be an equally familiar and easy to explain by analogy model for most users and avoid the issues surrounding thread safety which, if in practice are not actually so complicated, many find intimidating and unintuitive.
David Graham:
> Why not use the lifecycle model of a JSP tag for Actions?
Because this is much more difficult to understand than how Struts currently works. Look at the number of bugs reported against custom tags that deal with lifecycle issues for proof.
> Pool them for performance (if it's even worth the trouble), include a reset/cleanup method to be called by the framework, and guarantee single-threaded access within the scope of a request.
IMO, this is not worth the trouble. Singleton Actions are fast, easy to code, and easy for the Struts team to maintain the implementation.
> It would be an equally familiar and easy to explain by analogy model for most users and avoid the issues surrounding thread safety which, if in practice are not actually so complicated, many find intimidating and unintuitive.
Coding Struts Actions should be done by programmers. Java programmers should know (or at least be willing to learn) threading rules. It's extremely simple to explain to people that they shouldn't use instance variables in their actions.
Craig R. McClanahan:
> Indeed, the multi-threaded use of Actions is much like the Servlet API, but most Struts users (not developers) at this point are probably more familiar with the JSP APIs than with working directly with "pure" Servlet apps.
None of this is relevant for 1.x anyway, but I would counter this with the observation that people who don't understand the fundamentals of the APIs they are using are going to run into trouble no matter how "simple" we try to make them.
Case in point - sessions can be accessed from multiple threads simultaneously, and it's ridiculously easy to cause this to happen. It doesn't matter whether you are used to the JSP or the Servlet APIs for accessing session attributes -- if you don't understand this, your app is going to fail in mysterious ways.
There are lots more similar issues that servlet and JSP inherit simply because they run on top of a stateless protocol (HTTP). You cannot write robust web-based applications without understanding them -- papering over the problems with simpler apis makes webapp development more approachable, but it cannot hide the fundamental ugliness of HTTP.
> Why not use the lifecycle model of a JSP tag for Actions? Pool them for performance (if it's even worth the trouble), include a reset/cleanup method to be called by the framework, and guarantee single-threaded access within the scope of a request.
JSP 1.2 taught the world that the lifecycle model of custom tag instances was horribly hard to use, and even more difficult to program for than the thread-safe single instance model.
It is so bad that, in JSP 2.0, a new API (SimpleTag) was introduced that (among other things) dispenses with all the pooling complexity and just creates instances every time.
> It would be an equally familiar and easy to explain by analogy model for most users and avoid the issues surrounding thread safety which, if in practice are not actually so complicated, many find intimidating and unintuitive.
Strong advice from having lived through tags and pooling -- don't go there :-).
Leonardo Quijano:
And why is it so expensive to create an action on each request? How many objects are created on each request for a typical web app? And what's expensive about an Action? AFAIK, an Action is an object with a lot of static constants and one reference to the ActionServlet... a String could be more expensive than that!
As for the user-implemented subclasses, they are lightweight too, because we currently can't use instance variables - you can't just say "be careful with thread safety" and expect programmers to behave; life is not that way. If we could have those variables, it would be the same as with every other object... a heavy one is a bad one.
I have to admit that, besides the thread safety issue, I don't see much gain in this change (and as a matter of fact I'd prefer an interface - it just makes testing so much easier!). But it *is* a safer design...
Brendan:
The advantages to the "action instance per request" approach are:
- The current approach means that users must be specifically cautioned about the issues for *actions*.
- Instance variables can be used by both the framework and users to shorten parameter lists, etc.
The disadvantage to this new alternative approach is that it is different from the current struts model.
If a mechanism can be invented so that new isolated actions are clearly distinguished from singleton actions to provide backward compatablity, for humans and source, then the change can be made.
I personally like instance variables, and I like not having to worry about shared variables when I don't have too.
David Graham:
You can still use instance variables but you have to do it in a thread safe way. If you have many instance variables then you have too much logic in the actions. I have coded many (maybe all) of my actions without using one instance variable.
Implementing both singleton and instance-per-request actions is not worth the trouble.
Brandon Goodin:
Something else to consider...
Within the struts-config, the action element has the set-property capability. But it doesn't really set a property of the action; instead it sets a property of a custom ActionMapping class which your Action uses. Two classes for the job of one. This is especially confusing to new users. Setting default parameters via the config works great for several scenarios. I use the custom ActionMapping for several general-use Actions. If we were able to get away from the singleton Action, it would allow for the elimination of the custom ActionMapping class just to set properties for a specific action to use.
I think the key point in all this discussion is productivity. Granted, any java developer should understand the general constructs of an MVC framework (or Model 2 framework, for all you achronymbaphobiacs) . But, as time marches on the efficiency we gain through technological maturity should be taken advantage of to reduce the learning curve.
Of course... this is a Struts 2.0 thing and not 1.x.
Margaritis Jason:
Kinda new to Struts, but wouldn't it be trivial to implement this on top of the current framework by having your Action instantiate a new object (which could be an Action) and go from there? But I don't really see how instance variables are going to shorten parameter lists.
Edgar Dollin:
Since everyone is weighing in on this, I'll add my two cents.
First off, the only 'real' problems people have with struts (excluding bugs) is the mindset of the first application. The problems that I see are
1) Initial form load
2) Getting used to the tags
3) Getting used to reset / validation / validator
4) Trying to force their business logic into action forms.
By then if they have had a problem with thread safety, they have already worked through it during the initial confusion stage anyway.
In my opinion, not worth the effort.
Jason Miller:
I'll throw in one thought, as well.
Having multiple instances of Actions doesn't really allow much anyway, since ALL parts of a web app have to be reentrant. By shifting the thread safety issues down one level, I don't see much being gained. Particularly in light of the fact that Actions should be fairly lightweight, anyway.
David Graham wrote:
> Why should we reduce dependency on the Servlet API? Servlets are the Java standard web application technology. Struts is a web tier MVC framework enabling fast application development.
Craig R. McClanahan:.
Vic Cekvenich:
In order to wrap actions around non-servlet signatures (execute(req, resp, formbean, mapping)) I am using an event object. This lets the same signature work on a Tile action and other actions. It works fine; the event object contains the req, resp, formbean, and mapping, so any signature will work, and it is simpler to dispatch.
Also, A while back a group wanted "Struts" for SOAP, not sure what happened to them but...
I have started going into the XML/RPC world, looking for a rich UI view. I followed XForms around and... I re-discovered ActionScript (based on ECMAScript, like JavaScript), which has good plug-in support for browsers, since I want the UI to use browser-side processing and offload processing from the server.
Sounds hard?
We'll "my" beans are list backed, so easy to write a base bean reflection (find getters) that generates W3 XML Doc using JDOM. So all my baseBeans have getXMLDOC() that returns a list of rows in XML. (this should realy go in Beantuils) Already done, tested in CVS.
Step 2: ActionSciprt/openSWF (yes, Flash, but no splash screen) knows XML RPC, and the new Fire Fly (data connection kit) has $0 run time costs.
So today in basicPortal cvs I have flash calling bean.populate(); bean.getXMLDoc(); and displaying in Fire Fly. (what a name) There are hundreds of Flash GUI controls out there 3rd party. Fire Fly has live preview.
It's not portlet API, more XML/RPC, but now with $0 run time costs, under apache license I can have GUI that executes on client and looks like this:
(try to order something, looks nice).
So the formbean can have getXMLDoc(), called by FireFly. This is done. My next move is to have Validator.xml emit action script! Moving Struts to Fire Fly will be apache license, $0 run time costs (yes you may have to buy the IDE from Macromedia but..... swf is open source, and editing action script is just text editor).
Back full circle, Today I am doing sample app that does not use Servlet but uses formbean and a single JSP that just has flash on it. I would be happy to share and welcome contributions. I am writing my 2nd book, 20 chapters, one chapter is Rich GUI to form beans and is a demo I do when I teach Advanced Struts.
There was a thread that said growth will be in the "SOAP"-like services and I agree with them; to that end my beans will be callable via SOAP. (Good thing "my" beans have a DAO helper that has nothing to do with the action.) So basicPortal in future will demo how to go from Struts JSP to Struts Flash, under the Apache license.
Just FYI, there are open source implementations out there already.
Brendan Johnston:
The struts designers clearly do not agree that actions should be single use. They are not single use now.
This shows that Struts designers have different tastes to me. I have yet to see one advantage of singleton actions.
What advantage of singletons am I not getting?
I have responded to the other comments. But this seems like a useless distraction if Singleton actions have zero advantages.
All parts of a web application do not have to be reentrant. ActionForms do not have to be reentrant. In general, re-entrant code is very hard to test and therefore likely to be buggy. I would suggest that all parts of a web application that are written by application programmers should not be re-entrant.
To avoid the problem Craig mentioned with sessions, one solution is to insert a standard object whenever you create a new session, and in your request processor / web action server / some filter, synchronize on that object. I think I might implement that.
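A sketch of that idea (ours, not from the thread; the attribute name and filter class are made up, and imports from javax.servlet and javax.servlet.http are omitted) using a servlet Filter that serializes requests belonging to the same session:

// Hypothetical filter: requests sharing a session are handled one at a time.
public class SessionSerializingFilter implements Filter {
    private static final String MUTEX = "session.mutex";

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpSession session = ((HttpServletRequest) req).getSession();
        Object mutex;
        synchronized (session) {                 // guard the lazy creation of the mutex
            mutex = session.getAttribute(MUTEX);
            if (mutex == null) {
                mutex = new Object();
                session.setAttribute(MUTEX, mutex);
            }
        }
        synchronized (mutex) {                   // one request per session at a time
            chain.doFilter(req, res);
        }
    }

    public void init(FilterConfig config) {}
    public void destroy() {}
}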
David Graham:
Using a Singleton is faster than creating a new object for each request, especially for high traffic applications. If you need instance variables then you likely have too much logic in your action that should go into a separate layer.
There is nothing preventing you from using instance variables in actions but you must do it in a thread safe manner. The easiest solution is to just not use them.
It's trivial to override RequestProcessor.processActionCreate to return a new Action for each request so Struts doesn't even have to support this by default.
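A rough sketch of that override (our own, based on the Struts 1.1-era RequestProcessor API; imports are omitted and error handling is kept minimal) might look like:

public class PerRequestActionProcessor extends RequestProcessor {

    // Create a fresh Action for every request instead of caching a singleton.
    protected Action processActionCreate(HttpServletRequest request,
                                         HttpServletResponse response,
                                         ActionMapping mapping) throws IOException {
        try {
            return (Action) Class.forName(mapping.getType()).newInstance();
        } catch (Exception e) {
            log.error("Cannot instantiate " + mapping.getType(), e);
            response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
            return null;
        }
    }
}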
---
Weigen:
I disagree with the downplaying of the thread safety issue. IMHO, in a web application thread safety is more critical than in many other multi-user environments, because system load is horribly unpredictable for web applications. If thread safety is not kept in one's mind ALL THE TIME when one designs and codes the web application, I guarantee that when the web application has to process hundreds of orders per minute, the architects/programmers will drown in the boss's and customers' angry voicemails and emails asking "why am I getting charged for things I did not order?". If one does not pay enough attention to threading issues during design/development, one usually finds out when the system is tested/used under load near the end of the development cycle. And that "small" bug somewhere will cost one many, many frustrating days to track down, since everyone in the team swears that "when I tested it on my machine, it worked". Granted, for a small and less demanding web application, this may not be as prominent as drawing up pretty pages.
A couple of days ago I brought up the thread safety issue after I was kind of surprised by the large number of Struts users grappling with this very issue on the struts-user mailing list. Since I have to live with threading issues all the time, and have to educate my team constantly about the issue, I'm in sympathy with them. If a framework can relieve some of that burden from its users, it's well worth the effort.
David Graham:
Coding thread safe Struts actions is extremely easy, just don't use instance variables (there's probably something I forgot but this is the biggest pitfall). If you want to make it even easier, override RequestProcessor.processActionCreate to return a new Action instance for each request.
Craig McClanahan:
> The struts designers clearly do not agree that actions should be single use. They are not single use now.
The real key is whether your business logic is embedded in actions or not. If it is, I would suggest that you are making a mistake.
If you want per-request instantiation of business logic objects (so that you can use instance variables), the appropriate design is to do that inside an Action. Properly designed, these business objects should not have any linkage to servlet or Struts APIs -- they should be configured simply by property beans setters or parameters to the constructor. You can easily provide per-request instantiation as a service in your Action as well if you want.
If the single-instance nature of Action gets in your way, that is a pretty good clue that you aren't factoring your application correctly.
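As a sketch of the factoring Craig describes (our own illustration; the OrderService class and its methods are invented for the example, and imports are omitted), the Action stays a stateless adapter and instantiates a plain business object per request:

// Plain business object: no servlet or Struts imports, free to use instance variables.
public class OrderService {
    private final String customerId;

    public OrderService(String customerId) {
        this.customerId = customerId;
    }

    public boolean placeOrder(String itemId) {
        // ... business logic only ...
        return true;
    }
}

// The Action is just the web-tier adapter; a new OrderService is created per request.
public class PlaceOrderAction extends Action {
    public ActionForward execute(ActionMapping mapping, ActionForm form,
                                 HttpServletRequest request, HttpServletResponse response)
            throws Exception {
        OrderService service = new OrderService(request.getParameter("customerId"));
        boolean ok = service.placeOrder(request.getParameter("itemId"));
        return mapping.findForward(ok ? "success" : "failure");
    }
}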
> This shows that Struts designers have different tastes to me.
> I have yet to see one advantage of singleton actions.
> What advantage of singletons am I not getting?
Struts gives you mapping of logical paths to instances of an Action class where the caller doesn't have to know the name of the Java class, or worry about instantiating it. Nothing you couldn't do for yourself with singletons, but you would need to replicate this logic.
> I have responded to the other comments. But this seems like a useless distraction if singleton actions have zero advantages.
> All parts of a web application do not have to be reentrant. ActionForms do not have to be reentrant.
Absolutely not true if the form bean is in session scope.
> In general re-entrant code is very hard to test and therefore likely to be buggy.
> I would suggest that all parts of a web application that are written by application programmers should not be re-entrant.
For business logic developers, you can make a good case for this -- indeed, I would go further and say these classes should not depend on struts apis either.
> To avoid the problem Craig mentioned with sessions, one solution is to insert a standard object whenever you create a new session, and in your request processor / web action server / some filter, synchronize on that object. I think I might implement that.
Feel free -- but I wish you'd also look at the benefits of decoupling a little bit first. Think of a Struts action as an adapter between the web tier and the business logic tier, not as a container for business logic.
> I personally like instance variables, and I like not having to worry about shared variables when I don't have to.
So use 'em ... just not in Actions :-).
>.
Changing Action itself would break backwards compatibility for people who, based on the promise that there is one instance of an Action, do expensive one-time setup operations when the Action instance is created.
But the right answer to getting what you want in Struts 1.x is to provide your own Action that implements per-request instantiation of your business objects. You don't need the framework to provide that for you.
Note that it's certainly feasible to add this service into a later Struts 1.x release -- but it will NOT be done by making Action into a per-request-instantiated object.
Phil Steitz:
> Using a Singleton is faster than creating a new object for each request, especially for high traffic applications.
Memory and speed of garbage collection can also be an issue when creating excessive objects per request in high volume applications.
Struts is doing us a *big* favor by maintaining singletons so we don't have to worry about wasting resources on what should be reusable object instances (since as has been repeated several times on this thread, Actions should just be stateless adaptors that delegate to business objects). The fact that object creation and garbage collection are faster/more efficient in newer jvm's does not mean that we should needlessly waste resources.
Hajime Ohtoshi wrote:
> The real key is whether your business logic is embedded in actions or not. If it is, I would suggest that you are making a mistake.
> Then, we had better define business logic beans as a framework. How do you think about it?
Ted Husted:
I think that's a good idea, but it may be another framework entirely.
I started to do some work on a generic Business Request/Business Service framework. What you end up with is something that looks just like Struts but without the web semantics. (Good patterns are good patterns!)
Ideally, I believe there should be a business layer Controller framework, and then Struts becomes a web specialization of that framework. This would drop right in with what we've been doing with things like the Commons Validator. Both the business layer Controller and the Struts Controller could share the same set of validations. Ditto for MessageResources and the rest of it.
But, the problem for me, is that I'm just writing web applications now, and my clients are not interested in anything but web applications. I like the idea of layered architectures, and leaving my options open, but it's hard to justify re-writing Struts on the business layer when I don't have a use case of my own =:(
At this point, I'm defining business interfaces and then implementing the interfaces as Actions and ActionForms. So, instead of having the Action instantiate a business object, I have a subclass that instantiates the business method, and the Action just "calls itself".
Since the business method doesn't use web semantics, I could easily refactor it into a separate object if need be, but until then, I save the extra machinery.
With the ActionForms, there's the old issue of String-based properties. But I decided not to care. If they put letters in a numeric field, and it ends up blank, no one really notices.
(Though, I'd really like to follow up on the idea of using the request as a data-entry buffer. If the ActionForm property is null, the page could check for a request parameter.)
The one side effect is that my unit tests for the business interfaces have to import and instantiate Actions and ActionForms. This isn't a real problem, but it complicates the builds a bit. The alternative would be to do a business implementation of the interfaces too, but, without a usecase, that would just be busy work.
July 2018 (Outline updated through July 3, 2018) © Copyright 2018

A. The Fiscal Year 2008 Congressional Budget Resolution (March 2007) 19
B. The Fiscal Year 2009 Congressional Budget Resolution (March 2008) 23
C. Finance Committee Hearings 24
VII. The One-Hundred-Eleventh Congress (2009-2010) 25
A. The First Pomeroy Bill (H.R. 436) 25
B. The Arithmetic of the Estate Tax 26
C. The Obama Administration's Fiscal Year 2010 Budget Proposal 27
D. The Fiscal Year … 122
…
XIX. The House Republican Leadership's 2016 "Blueprint" 129
A. Significance 129
B. Goals 129
C. The Stated Problems with "Our Broken Tax Code" 129
D. Provisions for Businesses ("Job Creators") 130
… Consideration of the "Tax Cuts and Jobs Act" 137
A. Introduction and Initial Consideration 137
B. Income Tax Provisions 137
C. Transfer Tax Provisions 139
XXIII. The 2017 Tax Act 139
A. Enactment 139
B. Income Tax Provisions 140
C. Transfer Tax Provisions 142
D. Sequel 144
XXIV. 2017-2018 Priority Guidance Plan and Other Administrative Guidance 144
A. Part 1: "Initial Implementation of Tax Cuts and Jobs Act (TCJA)" 145
B. Part 2: E.O. 13789 - Identifying and Reducing Regulatory Burdens 146
C. Part 3: Near-Term Burden Reduction 158
D. The Consistent Basis Rules 159
E. The Section 2642(g) Regulations 169
F. Part 5: General Guidance 171
G. Deletions in 2017 from the 2016-2017 Plan 177
H. Deletions in 2016 from the 2015-2016 Plan 184
I. Other Notable Omissions 185
J. General Requirements of the Regulatory Process 190
A. The Fiscal Year 2008 Congressional Budget Resolution (March 2007)

1. On March 21, 2007, in the context of finalizing the fiscal year 2008 budget resolution (S. Con. Res. 21), the Senate, by a vote of 97-1 (with only Senator Feingold (D-WI) opposed), approved an amendment offered by Senator Baucus (joined by Senators Mary Landrieu (D-LA), Mark Pryor (D-AR), and Evan Bayh (D-IN)) …

B. The Fiscal Year 2009 Congressional Budget Resolution (March 2008)

1. In the consideration of the fiscal year …

… fiscal year 2011 – the 12 months that began October 1, 2010 – …

… the 2010 Congressional Budget Resolution in April 2009. And it was the measure that 84 Senators (including then Senators Obama, Biden, Clinton, and Salazar) voted for, albeit in two separate votes, in the consideration of the fiscal year … (an exemption and rate that then Senator Obama himself had voted for in the consideration of the fiscal year …):

The IRS will not impose late filing and late payment penalties under section 6651(a)(1) or (2) on estates of decedents who died after December 31, 2009, and before December 17, 2010, if the estate timely files Form 4768 and then files Form 706 or Form 706-NA and pays the estate tax by March 19, 2012. The IRS also will not impose late filing or late payment penalties under section 6651(a)(1) or (2) on estates of decedents who died after December 16, 2010, and before January 1, 2011, if the estate timely files Form 4768 and then files Form 706 or Form 706-NA and pays the estate tax within 15 months after the decedent's date of death.

… followed the normal indexing rules …

2. The inflation adjustment was computed by comparing the average consumer price index (CPI) for the 12-month period ending on August 31 of the preceding year with the corresponding CPI for 2010. Thus, the inflation adjustment to the applicable exclusion amount in 2012 was computed by dividing the CPI for the 12 months ending August 31, 2011, by the CPI for the 12 months ending August 31, 2010.

3. Indexing occurs in $10,000 increments, so the amount applicable in any year is a relatively round number.

a. But, unlike the typical inflation adjustments on which the indexing is patterned, the result of the calculation is not rounded down to the next lowest multiple of $10,000. It is rounded to the nearest multiple of $10,000 and thus possibly rounded up. But that does not make a huge difference in practice – obviously not more than $10,000 in any year.

b. It will also need to be remembered that … the gift tax had its own unified credit ("determined as if the applicable exclusion amount were $1,000,000," as section 2505(a)(1) provided for gifts from 2002 through 2009) and its own rate schedule. It is once again the same as the estate tax unified credit and rates.

a. This was introduced in section 302(b)(1) of the 2010 Tax Act under the heading "Restoration of unified credit against gift tax." It is awkward to refer … was scheduled to sunset in 2013 when it might have been most …

… at the time about any estate tax legislation Congress might entertain. Any optimism, however, was eventually to be dashed. See Part XXIV.D.5 beginning on page 162 …

… the 2015 exemption $5.43 million, the 2016 exemption $5.45 million, and the 2017 exemption $5.49 million.
There is no “clawback.” This is addressed for gift tax purposes in the flush language to section 2505(a)(2) and to some extent for estate tax purposes in section 2001(g) (both of which are no longer sunsetted). See Part VIII.G.6 beginning on page 57. - 73 - donor’s now-lost 2012 exemption. b. If the donor chose to use gift tax exemption for the 2012 trust and forwent the. - 74 - - 75 -). - 76 - b. Long-term benefits. and increasing every year by reason of inflation, is often both easier and more - 78 - beneficial than paying gift tax...) a. Reg. §20.2010-2. - 79 - - 80 -. g. Reg. §25.2505-2(b) creates an ordering rule providing that when the surviving spouse makes a taxable gift, the DSUE amount of the last deceased spouse (at that time) is applied to the surviving spouse’s taxable gifts before - 81 - the surviving spouse’s own basic exclusion amount. Reg. §§.) d. Part 6, Section A, providing that the portability election is made “by - 82 - available for same-sex married couples until the Supreme Court decided -- - 84 - style trust, again if a QTIP election in that case is not disregarded under Rev. Proc. 2001-38), and f. facilitating proactive planning by the surviving spouse. 8.. 9. About QTIP trusts and Rev. Proc. 2001-38. a..” i...” - 85 - i. January 2, 2018 (thus, among other things, in effect renewing the relief granted by Rev. Proc. 2014-18 described in paragraph 5 on page 82) or ii. the second anniversary of the decedent’s death. - 86 - c. Executors who miss that deadline may still seek 9100 relief by a ruling request, accompanied by the required user fee.. - 87 -. - 88 -. - 89 -. - 91 - children and grandchildren with corresponding dollar amounts.].… 4. The court stressed the now familiar “distinction between a ‘savings clause’, which a taxpayer may not use to avoid the tax imposed by section 2501,’.” 5... b. It is also telling that in the court’s words the effect of the language in the gift - 92 - documents was to “correct the allocation of Norseman membership units among petitioners and the donees because the [appraisal] report understated Norseman’s value.” Until Wand,” and thus the court might be suggesting that the time-honored distinction between “formula transfers” and “formula allocations” might not be so crucial after all. But it is a cause for concern. by Donald Woelbing, who owned the majority of the voting and nonvoting stock of Carma Laboratories, Inc., of Franklin, Wisconsin, the maker of Carmex skin care products. a. In 2006 Mr. Woelbing sold all..” g. In Frank Aragona Trust, Paul Aragona, Executive Trustee v. Commissioner, 142 T.C. 165 (March 27, 2014), the Tax Court (Judge Morrison) rejected an IRS argument that for purposes of the passive loss rules of section 469 itself a trust in effect could never materially participate in a trade or business. i. - 96 -. h. “Guidance regarding material participation by trusts and estates for purposes of §469” was placed on the Treasury-IRS 2014-2015 Priority Guidance Plan, published August 26, 2014, under the heading of “General Tax Issues.” It was dropped from the 2017-2018 Priority Guidance Plan 4. 
The new 3.8 percent tax is imposed on trusts at an income level of $11,950 for 2013, $12,150 for 2014, $12,300 for 2015, $12,400 for 2016, and $12,500 for 2017 and 2018, but it is imposed on individuals only at an income level over $200,000 ($250,000 for a joint return and $125,000 for a married person filing separately, all unindexed), and the top regular income tax rate, also applicable to trusts on taxable income over $12,500 in 2018 for example, is otherwise not reached for individuals until an income level much higher. As a result, the difference between the trust’s marginal rate and the marginal rate of its beneficiaries can be very great, especially at the lowest income levels of the beneficiaries. (State income taxes might increase this effect.) year 2014 (splitting the difference between the Democratic goal of $1.058 trillion and the Republican goal of $967 billion) and $1.014 trillion for fiscal year. - 102 - - 103 -.rees - 104 - - 106 - - 107 - 98). - 109 - minimum term for GRATs. On page 112, under the heading “Insurance Companies and Products,” the Greenbook proposed to “modify the transfer-for-value rule [applicable to life insurance policies] to - 110 - education exclusion trusts.” - 111 -IV.C.4.b beginning on page 159. - 112 - penalties of perjury, and significant underpayment penalties are imposed on the understatement of capital gains and thus income tax that would result from an overstatement of basis. Proposal The proposal would expand the property subject to the consistency requirement imposed under section 1014(f) to also. that “[t]axpayers.” “Tax Reform for Fairness, Simplicity, and Economic Growth” - 113 - ( Obama, - 115 - - 116 -. effective planning with so-called “defective” grantor trusts. c. It is hard to argue - 117 -: - 118 - trust. f. That proposal was very broad and vague, and many observers assumed that it would be revised. Sure enough, in the 2013 Greenbook (page 145) and the 2014 Greenbook (page 166), the proposal was reworded as follows:..” - 119 -. Like the 2012 Greenbook, the 2013 Greenbook also stated:.. k. In contrast, the reference to GRITs, GRATs, PRTs, and QPRTs in the 2013 Greenbook). i. The passes given to these trusts were no doubt meant to be helpful, and - 120 - - 121 - and 2016 Combined GRAT and Grantor Trust Proposal a. The 2015 Greenbook combined the GRAT and grantor trust proposals into one proposal. b. The GRAT proposal added “a requirement that the remainder interest in the GRAT at the time the interest is created must have a minimum value equal to the greater of 25 percent of the value of the assets contributed to the GRAT or $500,000 (but not more than the value of the assets contributed).”.” - 122 - - 123 - - 124 -.26). - 125 - - 126 -. - 127 -. “under the same rules that apply to portability for estate and gift tax purposes.” The exclusion under section 1202 for capital gain on certain small business stock would also apply. - 128 - - 129 -. XIX. THE HOUSE REPUBLICAN LEADERSHIP’S 2016 “BLUEPRINT” A. Significance 1. The House of Representatives Republican leadership released a “Blueprint” for “A Better Way: Our Vision for a Confident America” dated June 24, 2016. 2. The Republicans’ retained control of the Senate and of course the election of President Trump gave the Blueprint new life. 3. It appeared, at least originally, that the Trump Administration would by and large defer to the Blueprint as the primary vehicle for tax reform in the 115th Congress (1917-1918). B. 
Goals “This Blueprint will achieve three important goals: • It will fuel job creation and deliver opportunity for all Americans. • It will simplify the broken tax code and make it fairer and less burdensome. • It will transform the broken IRS into an agency focused on customer service.” C. The Stated Problems with “Our Broken Tax Code”: 1. It imposes burdensome paperwork and compliance costs. 2. It delivers special interest subsidies and crony capitalism. 3. It penalizes savings and investments. - 130 - fiscal 2019 budget proposal (released March 16, 2017) would have cut another $239 million from the IRS budget. The fiscal year 2019 budget proposal (released February 12, 2018) asked for a total of about $11.1 billion for the IRS. That is down only about $24 million, but still down, from the annualized level of the fiscal year 2018 “continuing resolutions” and down about $1 billion from fiscal year 2010. On May 24, 2018, the House Appropriations Financial Services and General Government Subcommittee approved fiscal year 2019 funding for the IRS in the amount of $11.6 billion, including $77 million for implementing the 2017 Tax Act, subject to conditions and limitations that largely reflect recent congressional concerns with the IRS.. D. Provisions for Businesses (“Job Creators”): 1. A top corporate rate of 20 percent. 2. A top rate of 25 percent on small businesses and passthrough income. - 131 - was only an outline. 2. The “business tax rate” of 15 percent was apparently intended to apply to corporations and pass-throughs alike. 3. The outline was silent on expensing of capital expenditures and non-deductibility of net business interest deductions (items XIX.D.3 and XIX.D.4 in the “Blueprint” above). 4. The outline omitted any reference to border adjustments (item XIX.D.7 above), but does advocate a territorial tax system (as in item XIX.D.6), apparently with a one-time deemed repatriation tax of an unspecified rate (possibly about 3 or 5 percent but probably no higher than about 10 percent). [By the end of July 2017 it appeared that the unpopular border adjustment proposal would not be pursued.] 5. President Trump had campaigned for a 33 percent top individual rate. His top rate of 35 percent was 2 percent higher than the Blueprint (item XIX.E.1), while his lowest rate of 10 percent was 2 percent lower. 6. The outline was silent on changing the tax treatment of retirement savings (item XIX.E.4). 7. The outline retained (from the presidential campaign) the vague proposal to “[r]epeal the death tax.” 8. In contrast, the Trump campaign website had stated that “[t]he Trump Plan will repeal the death tax, but capital gains held [sic] until death and valued [sic] over $10 million will be subject to tax.” 9. The outline omitted any reference to reforming the IRS (item XIX.C.5) as part of tax reform. Ways and Means Committee Chairman Brady (R-TX) was reported in early May 2017 as moving away from that. - 136 - the tax burden on the middle-class. 5. Repeal of the individual alternative minimum tax (AMT). 6. Elimination of most itemized deductions, other than deductions for home mortgage interest and charitable contributions. 7. Under the heading of .. CONSIDERATION OF THE “TAX CUTS AND JOBS ACT” A. Introduction and Initial Consideration 1. The “Tax Cuts and Jobs Act” was introduced as H.R. 1 by Ways and Means Committee Chairman Kevin Brady on November 2, 2017. 
Chairman Brady followed with a “chairman’s mark, or “amendment in the nature of a substitute,” on November 3 and a shorter targeted amendment on November 6, both making a few technical corrections. The Ways and Means Committee finished reviewing and amending H.R. 1 on November 9. The House approved it on November 16 by a vote of 227-205. The vote was very partisan: 13 Republicans voted against it, but no Democrat voted for it. 2. On November 9, Senate Finance Committee Chairman Orrin Hatch revealed a different version of H.R. 1, reflected in a publication of the staff of the Joint Committee on Taxation entitled “Description of the Chairman’s Mark of the ‘Tax Cuts and Jobs Act.’” Statutory language, reflecting the Finance Committee’s approval, was released on November 21. On November 29, 2017, the Senate agreed, by a strictly partisan vote of 52-48, to consider the substitution of the Finance Committee version for the House-passed H.R. 1. After a number of amendments, the Senate approved a substitute by a vote of 51-49. Again the vote was very partisan: Senator Corker of Tennessee was the only Republican to vote against it, and no Democrat voted for it. In general, the provisions of the Senate version sunsetted in 2026. B. Income Tax Provisions 1. The House version of H.R. 1 followed the Blueprint of June 23, 2016 (Part XIX beginning on page 129) and the Unified Framework of September 27, 2017 (Part XXI beginning on page 133) by proposing (i) a corporate income tax rate of 20 percent, (ii) a 25 percent net rate on the business income of individuals (either in a sole proprietorship or in a passthrough entity), and (iii) repeal of the corporate alternative minimum tax, all effective January 1, 2018. - 138 - 2. The Senate version (i) embraced the 20 percent corporate rate, but deferred it until 2019, (ii) recast the individual business income provision in a more mathematically subtle manner as a deduction generally of “the lesser of (A) 23 percent of the taxpayer’s qualified business income with respect to the trade or business, or (B) 50 percent of the W-2 wages with respect to the qualified trade or business,” and (iii) retained the corporate AMT but with an exemption for that “qualified business income.” 3. The House version generally followed the Blueprint and Framework regarding individual income tax rates and supplied four brackets for those rates. The Senate version would retain seven brackets, but would recast them, again in a more mathematically subtle manner. Here, for example, is a comparison of the House and Senate versions with former law for married individuals filing joint returns and surviving spouses: Married Couples Filing Jointly and Surviving Spouse Former Law (for 2018) House Version (2018) Senate Version (2018) Rate Starting at (taxable income) Rate Starting at (taxable income) Rate Starting at (taxable income) 10% $0 12% $0 10% $0 15% $19,050 12% $19,050 25% $77,400 25% $90,000 22% $77,400 28% $156,150 24% $140,000 33% $237,950 35% $260,000 32% $320,000 35% $424,950 35% $400,000 39.6% $480,050 39.6% $1,000,000 38.5% $1,000,000 4. And for estates and trusts: Estates and Trusts Former Law (for 2018) House Version (2018) Senate Version (2018) Rate Starting at (taxable income) Rate Starting at (taxable income) Rate Starting at (taxable income) 15% $0 12% $0 10% $0 25% $2,600 25% $2,550 24% $2,550 28% $6,100 35% $9,150 35% $9,150 33% $9,300 39.6% $12,700 39.6% $12,500 38.5% $12,500 The House and Senate’s proposed 2018 brackets were the current 2017 brackets. 5. 
In both versions, bracket amounts would be indexed after 2018, but by reference not to the customary Consumer Price Index (CPI), but to a “Chained CPI.” A Chained CPI would take into account anticipated consumer shifts from products whose prices increase to products whose prices do not increase or increase at a lower rate. The result would generally be slower inflation adjustments and higher tax levels over the long term. 6. The House version would have repealed the individual alternative minimum tax. - 139 - The Senate version retained the individual AMT, with a significantly increased exemption. C. Transfer Tax Provisions 1. Effective January 1, 2018, both the House and Senate versions of H.R. 1 doubled the exemptions – technically the basic exclusion amount for estate and gift tax purposes and the GST exemption for GST tax purposes. Thus, those amounts for 2018 would be approximately $11,200,000 (2 times $5,600,000). The exemptions would continue to be indexed for inflation going forward, also by using a “Chained CPI” approach. The tax rate would remain 40 percent. 2. Effective January 1, 2025, the House version of H.R. 1, but not the Senate version, would repeal the estate and GST taxes – technically, would provide that those taxes “shall not apply.” a. The gift tax would be retained, at a rate of 35 percent (its lowest point in recent history, in 2010). The doubled exclusion amount, indexed, would continue. b. Current basis rules would not change. Indeed, amendments to section 1014 would explicitly preserve the stepped-up basis for appreciated assets at death. c. Distributions from (but not terminations of) qualified domestic trusts (QDOTs) for surviving spouses of decedents dying before January 1, 2025, would continue to be taxed as under current law through 2034. d. Except for the deferred effective date, these provisions would resemble the “Death Tax Repeal Act of 2015” passed by the House of Representatives on April 16, 2015 (see Part XVII.A beginning on page 100) and the “Death Tax Repeal Act of 2017” introduced on January 24, 2017 (see Part XXI.D.2.a on page 136). XXIII.THE 2017 TAX ACT A. Enactment 1. The conference agreement to resolve the differences between the House and Senate versions of H.R. 1 was approved by the House 227-203 on December 19, by the Senate 51-48 early on December 20, and by the House again 224-201 later on December 20 to approve changes made by the Senate. Again the votes were very partisan. In the House 12 Republicans voted against it and no Democrat voted for it. In the Senate all the Democrats voted against it and all the Republicans voted for it except Senator McCain, who was not in Washington. 2. One of the changes made by the Senate was to drop the short title “Tax Cuts and Jobs Act” when the Senate parliamentarian ruled that the reference to “Jobs” was not strictly fiscal enough to comply with the “Byrd Rule” for budget reconciliation in the Senate. The description in the Act – “An Act To provide for reconciliation pursuant to titles II and V of the concurrent resolution on the budget for fiscal year 2018” – will probably not be commonly used. 3. President Trump signed the Act (Public Law 115-97) on December 22, 2017. - 140 - B. Income Tax Provisions 1. The compromise top corporate income tax rate, apparently selected mainly to make the revenue estimates acceptable, is 21 percent. 2. 
The Qualified Business Income Deduction in section 199A largely follows the Senate model, again to help the Senate comply with its reconciliation rules, but at a 20 percent level, and includes adjustments designed to achieve certain fiscal goals or meet the specific concerns of some lawmakers. After some suspense, trusts are included among the pass-through entities receiving the preferential treatment. 3. The basic income tax rates and brackets resemble the Senate version. a. The Senate’s proposed top rate of 38.5 percent is reduced to 37 percent. The top four individual brackets are flattened somewhat, with the starting point for the top bracket reduced from $1,000,000 to $600,000 (although the top rate is also reduced). The result is that the 2018 tax on a married couple’s taxable income of $1,000,000 would have been $301,479 under the Senate version, but is $309,379 under the 2017 Tax Act. b. With the lower top rate of 37 percent in the final Act, the crossover point is $1,526,667. At a taxable income of $1,526,667, a married couple’s tax would be the same under both the Senate substitute and the Act – $504,246. At a taxable income above $1,526,667, the tax is lower under the Act. 4. Here is a comparison of the 2017 Tax Act with former law for married individuals filing joint returns and surviving spouses: Married Couples Filing Jointly and Surviving Spouses Former Law (for 2018) New Law (2018) Rate Starting at (taxable income) Rate Starting at (taxable income) 10% $0 10% $0 15% $19,050 12% $19,050 25% $77,400 22% $77,400 28% $156,150 24% $165,000 33% $237,950 32% $315,000 35% $424,950 35% $400,000 39.6% $480,050 37% $600,000 5. And for estates and trusts: Estates and Trusts Former Law (for 2018) New Law (2018) Rate Starting at (taxable income) Rate Starting at (taxable income) 15% $0 10% $0 25% $2,600 24% $2,550 28% $6,100 35% $9,150 33% $9,300 39.6% $12,700 37% $12,500 - 141 - As in the House and Senate versions, the 2018 brackets are the same as the 2017 brackets. 6. As in both the House and Senate versions, bracket amounts are indexed after 2018 by reference to a “Chained CPI.” 7. As in the Senate version, the Act retains the individual alternative minimum tax, with a significantly larger exemption and much higher phase-out threshold. 8. The Act adds new Section 67(g) as follows: (g) SUSPENSION FOR TAXABLE YEARS 2018 THROUGH 2025.— Notwithstanding subsection (a), no miscellaneous itemized deduction shall be allowed for any taxable year beginning after December 31, 2017, and before January 1, 2026 a. Section 67, and specifically subsection (a), was added to the Code by the Tax Reform Act of 1986 to subject most itemized deductions to a floor equal to 2 percent of adjusted gross income. Subsection (b) defines “miscellaneous itemized deductions” as all itemized deductions except 12 specified deductions that are excluded from the definition. Thus, simply put, what Congress did in the 2017 Tax Act was to completely disallow the deductions it had merely limited in the 2-percent floor in 1986. b. The Tax Reform Act of 1986 also added subsection (e), which, since its amendment in 1988, provides: (e) DETERMINATION OF ADJUSTED GROSS INCOME IN CASE OF ESTATES AND TRUSTS. For purposes of this section, the adjusted gross income adjustments shall be made in the application of part I of subchapter J of this chapter to take into account the provisions of this section. 
Because the effect of subsection (e) is to exempt such administration expenses from the 2-percent floor that applies to “miscellaneous itemized deductions,” it appears likely that those administration expenses will continue, in effect, to be deductible under the 2017 Tax Act. Clarification of that result would be welcome, however. - 142 - C. Transfer Tax Provisions 1. The estate and GST taxes are not repealed, not even temporarily. 2. There are no structural or technical changes to the estate, gift, and GST taxes, and no changes to the determination of the basis of property received by gift or upon death. 3. The only change is to double the basic exclusion amount for estate and gift tax purposes and the GST exemption for GST tax purposes – as both the House version (through 2024) and Senate version (through 2025) would have done. In the final Act, the doubling of the exemptions sunsets January 1, 2026, as in the Senate version, to help the Senate comply with its reconciliation rules. 4. Annual increases in the exemptions, including the initial increase from 2017 to 2018 itself, will be measured, like income tax brackets and other significant numbers under the Act, by a “Chained” Consumer Price Index, resulting in slower inflation adjustments over the long term. This has made the basic exclusion amount $11.18 million for 2018. Rev. Proc. 2018-18, 2018-10 I.R.B. 392, sec. 3.35 (March 2, 2018). 5. The Act retains section 2001(g), redesignated section 2001(g)(1), the “anticlawback” language added by the 2010 Tax Act to prevent, in effect, gifts exempt from gift tax under the higher exemption from being nevertheless subject to estate tax if the increased exemption were to actually “sunset” – then in 2013 and now in 2026. This is a lesson the drafters learned after the awkward failure to address such a scenario in the 2001 Tax Act, although section 2001(g)(1) standing alone arguably is insufficient. See Part VIII.G.6 beginning on page 57. a. The potential for clawback results from the decision made when the gift and estate tax rates were “unified” by the Tax Reform Act of 1976 to replace the historic exemptions ($30,000 for the gift tax and $60,000 for the estate tax) with a credit, because “a tax credit tends to confer more tax savings on smalland medium-sized estates” and therefore “would be more equitable” (H.R. REP. NO. 94-1380, 94TH CONG., 2D SESS. 15 (1976)). With estate and gift taxes now viewed as being imposed in effect at a flat rate, and “small- and medium-sized estates” now exempt, that decision now produces little equity but considerable complexity in tax return preparation and, as seen here, in statutory drafting. b. In fact, the statutory drafting was apparently so daunting that Congress simply gave up and left completion of the task to Treasury in a new section 2001(g)(2): - 143 - (B) the basic exclusion amount under such section applicable with respect to any gifts made by the decedent. c. This looks simply like authority to do the math needed to carry out the mandate of the 2010 Tax Act in section 2001(g)(1), although some have expressed concern that it could be used to cut back the benefits to decedents’ estates that the 2017 Tax Act was intended to confer, either by Treasury and IRS drafters not necessarily as committed to the agenda of the current congressional leadership or by a new Administration (which there is certain to be before the doubling of the exemptions sunsets on January 1, 2026). d. Here is a simplified illustration of how clawback might work. 
(For a more detailed analysis, in the context of the 2010 Tax Act, see Part VIII.G.5 beginning on page 55.) i. For the sake of simplicity, assume an unindexed exclusion amount of $10 million in 2018 under the 2017 Tax Act, reverting to $5 million after sunset in 2026, and no portability. ii. Assume that the donor’s only lifetime taxable gift is a gift of $10 million in 2018, the donor dies in 2026 with a taxable estate (line 3 of Form 706) of $20 million, and there has been no change in the law. iii. There is no gift tax on the 2018 gift because of the exclusion amount. iv. Intuitively, the estate tax in 2026 should be $8 million – 40 percent of $20 million. v. Adjusted taxable gifts under section 2001(b)(1)(B) (line 4) are $10 million, and the total base for the tentative tax under section 2001(b)(1) (line 5) is $30 million ($20 million + $10 million). The tentative tax on $30 million under section 2001(c) (line 6) is $11,945,800. vi. Using “the rates of tax … in effect at the decedent’s death” under section 2001(g)(1) – but not the exclusion amount in effect at the decedent’s death (because section 2001(g)(1) addresses only “rates”) – the subtraction under section 2001(b)(2) (line 7) is zero (which is what the gift tax on the 2018 gift was). vii. Thus, the gross estate tax computed under section 2001(b) (line 8) is $11,945,800. Subtracting the applicable credit amount (line 11) of $1,945,800 (calculated on a $5 million exclusion amount) leaves a net estate tax (lines 12 and 16) of $10 million. viii. This result of $10 million exceeds the intuitively correct result of $8 million by exactly $2 million (40 percent of the additional $5 million exclusion in 2018). That is the clawback penalty. e. If the regulations under new section 2001(g)(2) provide (as surely they must) that in this case the subtraction under section 2001(b)(2) (line 7) is computed using the exclusion amount (as well as rates) in effect at the decedent’s death, the subtraction under section 2001(b)(2) (line 7) (paragraph - 144 - vi above) would be $2 million (what the 2018 gift tax would have been if the 2017 Tax Act had not been enacted) instead of zero, and the net estate tax of $8 million would be exactly the same as the intuitively correct result. D. Sequel 1. The first half of 2018 was filled with speculation about “phase 2” of tax legislation, or “tax legislation 2.0” in the fall of 2018, which could include a. refinement of the international provisions, including the provisions relating to global intangible low-taxed income (GILTI) and the base erosion and antiabuse tax (BEAT), b. making some of the sunsetted provisions permanent (which could be very expensive and add substantially to the federal debt), c. some, but probably not all, technical corrections, and d. making more of the year-to-year “extenders” permanent. 2. June 26, 2018, saw the IRS’s rollout of the manifestation of the “postcard-sized” simplified individual income tax return (dated June 29). a. It is not “postcard-sized,” but it fits, two-sided, on one-half of an 8½×11-inch sheet of paper. The first side contains identifying information, filing status, identification of dependents, federal election campaign designation, and signatures. The second side contains 26 lines (numbered from 1 to 23, but “lines” 17 and 20 use two lines and three lines, respectively). b. As needed, the simplified form may be supplemented by one or more of six new schedules. 
But the accompanying IRS press release (also dated June 29) stated that “[t]axpayers with straightforward tax situations would only need to file this new 1040 with no additional schedules.” c. The press release expressed the objective of making the simplified form available for the filing of 2018 returns in 2019. XXIV. 2017-2018 PRIORITY GUIDANCE PLAN AND OTHER ADMINISTRATIVE GUIDANCE The Treasury-IRS Priority Guidance Plan for the 12 months beginning July 1, 2017, was released on October 20, 2017 (- 2018_pgp_initial.pdf). The Second Quarter Update was released on February 7, 2018 (). Reflecting additional review mandated by President Trump, the organization and tone of the 2017- 2018 Priority Guidance Plan differ from previous Plans. The introduction to the original October 2017 Plan provides the following explanation: Part 1 [now Part 2] of the plan focuses on the eight regulations from 2016 that were identified pursuant to Executive Order 13789 [See Part XXIV.B.16 beginning on page 156] and our intended actions with respect to those regulations. Part 2 [now Part 3] of the plan describes certain projects that we have identified as burden reducing and that we believe can be completed in the 8½ months remaining in the plan year. As in the past, we intend to update the - 145 - plan on a quarterly basis, and additional burden reduction projects may be added. Part 3 [now Part 4] of the plan describes the various projects that comprise our implementation of the new statutory partnership audit regime, which has been a topic of significant concern and focus as the statutory rules go into effect on January 1, 2018. Part 4 [now Part 5] of the plan, in line with past years’ plans and our long-standing commitment to transparency in the process, describes specific projects by subject area that will be the focus of the balance of our efforts this plan year. A. Part 1: “Initial Implementation of Tax Cuts and Jobs Act (TCJA)” On December 22, 2017, the President signed the 2017 Tax Act (Public Law 115-97). The Act radically changed the need for prompt administrative guidance on a large number of issues. It was not surprising then that the February 2018 Second Quarter Update added a new Part 1 in response to the 2017 Tax Act, titled “Initial Implementation of Tax Cuts and Jobs Act (TCJA)” (the name used through most of the congressional consideration but dropped by the Senate on the final day to comply with reconciliation rules), and renumbered all the other parts accordingly. Part 1 contains 18 items. Of particular interest to estate planners: 1. Item 7: “Computational, definitional, and anti-avoidance guidance under new §199A.” a. Arguably, definitions, computations, and anti-avoidance rules comprise the entire implementation of any tax. b. There is much need for clarification of the section 199A Qualified Business Income Deduction, particularly as it applies to trusts. 2. Item 16: “Guidance on computation of estate and gift taxes to reflect changes in the basic exclusion amount.” a. This might have been intended to include clarification of the application of the “Chained CPI” in calculating the basic exclusion amount, which is also the GST exemption, even for 2018 (see Part XXIII.C.4 on page 142). That was clarified, however, along with many other 2018 numbers, in Rev. Proc. 2018-18, sec. 3.35, 2018-10 I.R.B. 392. b. It might also be intended to address “clawback” issues in the regulations mandated by section 2001(g)(2). See Part XXIII.C.5 beginning on page 142. 3. Other Issues. a. 
Clarification of the treatment of miscellaneous itemized deductions under new section 67(g), especially fiduciary fees and other expenses of trusts and estates. See Part XXIII.B.8 beginning on page 141. b. And because the 2017 Tax Act did not repeal the estate and GST taxes, everything related to estate planning in the 2017-2018 Treasury-IRS Priority Guidance Plan (discussed below), and some items dropped from the Plan, continue to be relevant and important. c. Similarly, because the Act did not repeal the 3.8 percent tax imposed on net investment income by section 1411 (discussed in detail in Part XV beginning - 146 - on page 94), as some at one time had hoped, there will continue to be a need for guidance regarding that tax, particular for the vexing issue of identifying “material participation” under section 469(h) in the case of a trust or estate..” B. Part 2: E.O. 13789 - Identifying and Reducing Regulatory Burdens 1. Part-2004, - 147 - v. Commissioner, 113 T.C. 449, 473 (1999), aff’d, 292 F.3d 490 (5th Cir. 2002), about which the preamble to the proposed regulations explicitly stated.”. - 148 - - 149 - - 150 - property. Proposed Reg. §25.2704-3(b)(5)(v). (And this must be an actual put right, not a “deemed put right”! And it would be a very unusual feature of most entities, especially operating businesses.) 9. “Nontax Reasons” a. - 151 -. - 152 - - 153 - )” (emphasis added). Proposed Reg. §25.2704-4(b)(1) stated that the new three-year rule would “apply to lapses of rights … occurring on or after the date these, that “a business entity is any entity recognized for federal tax purposes (including an entity with a single owner that may be disregarded as an entity separate from its owner under §301.7701-3) that is not properly classified as a trust under §301.7701-4 or - 154 - otherwise subject to special treatment under the Internal Revenue Code.”. - 155 - tax at all in a year.. ():. Treasury’s action will help working families around the country because, when the wealthiest households are able to use sophisticated techniques to exploit loopholes and reduce the taxes that they owe, more of the tax burden ultimately falls on middle-class taxpayers. c. And bills were introduced in Congress to block the regulations: i. H.R. 6042 (Rep. F. James Sensenbrenner, Jr. (R-WI), Sept. 15, 2016):. ii. “Protect Family Farms and Businesses Act,” H.R. 6100 (Rep. Warren Davidson (R-OH), Sept. 21, 2016) and S. 3436 (Sen. Marco Rubio (RFL) Sept. 28, 2016):. iii. The “Protect Family Farms and Businesses Act” were reintroduced in the new Congress on January 2, 2017, as H.R. 308 and S. 47. d. In this vein, section 115 of the fiscal year 2018 appropriations bill reported - 156 - out of the House Appropriations Committee on June 29, 2017, provided:. - 157 - valuations and transfer tax liability that would increase financial burdens. Commenters were also concerned that the proposed regulations would make valuations more difficult and that the proposed narrowing of existing regulatory exceptions was arbitrary and capricious. goal of the proposed regulations was to counteract changes in state statutes and developments in case law that have eroded Section 2704’s applicability and facilitated the use of family-controlled entities to generate. - 158 -. i..” ii. Thus, the report arguably left the door open for the section 2704 regulations to be modified and re-proposed in the future. 
Significantly, though, the report concluded not only that details of the 2 of the updated Priority Guidance Plan, the proposed section 2704 regulations are now withdrawn. 82 FED. REG. 48779-80 (October 20, 2017). C. Part 3. original 2017- - 159 - 2018 Priority Guidance Plan that “Part 2 [now Part 3] of the plan describes certain projects that we have identified as burden reducing and that we believe can be completed in the 8½ months remaining in the plan year” – that is, by June 30, 2018. There were 19 such items in the original Plan, and two were added by the Second Quarter Update. The first of those 21 items is a generic affirmation of “Guidance removing or updating regulations that are unnecessary, create undue complexity, impose excessive burdens, or fail to provide clarity and useful guidance.” 3. The other 20 items in Part 3 refer more conventionally to particular topics and projects. themselves, discussed in Part XXIV.D.9 beginning on page 165, as “regulations that are unnecessary, create undue complexity, impose excessive burdens, or fail to provide clarity and useful guidance.” c. The background and significance of these regulations are discussed in Part D below. 5. The eighth item in Part 3 is “Final regulations under §2642(g) describing the circumstancesIV.E beginning on page 169. D. The Consistent Basis Rules 1. was section 2004 of the Act, labelled “Consistent Basis Reporting Between Estate and Person Acquiring - 160 - Property from Decedent,” which of course has nothing to do with highways or veterans’ health care other than raising money. The provision added new provisions to the Code. a. New section 1014(f) requires in general that the basis of property received from a decedent, “whose inclusion in the decedent’s estate increased the liability for the tax,” - 161 - seems to have realized. - 162 - - 163 - section 1014 includes some twists. a. Like the Camp Discussion Draft and the current “Sensible Estate Tax Act” Notice 2015-57 implied. Indeed, Notice 2016-19 affirmatively - 164 - requirements of section 6035” and that “[t]he Treasury Department and the IRS expect to issue proposed regulations under sections 1014(f) and 6035 very shortly.” c. Notice 2016-27, 2016-15 I.R.B. 576, released on March 23, 2016 (three weeks after the publication of the proposed regulations discussed in paragraph 9 below), the biggest problem with the reporting deadline – namely, that executors, especially of estates large enough to be required to file an estate tax return, will not know just one month after filing the estate tax return which beneficiaries will receive which assets – Schedule A of Form 8971 states (emphasis in original):. b. The Instructions to Form 8971 candidly state (emphasis may, but aren’t required to, be filed once the distribution to each such beneficiary has been made.. - 165 -, albeit modest,). Such assets will rarely be sold at a gain, and any loss on a sale of such personal property would be nondeductible in any event. iii. In addition to such tangible personal property, Proposed Reg. 
§1.6035- 1(b)(1) would exclude from the Form 8971 reporting requirement: (A) cash (other than a coin collection or other coins or bills with numismatic value), which ordinarily has no basis apart from its face amount anyway; (B) income in respect of a decedent, which ordinarily would be reported as such on the beneficiary’s income tax return anyway; and (C) property that is sold (and therefore not distributed to a beneficiary) in a transaction in which capital gain or loss is recognized, which ordinarily would therefore be reported as a taxable sale on the fiduciary’s income tax return anyway.). - 166 -, – in other words, “received” – or “acquired.” In that case, section 6035(a)(3) would be construed to require reporting for property passing upon death or distributed before its value is reported on an estate tax return within 30 days after the estate tax return is filed, whereas property distributed after the estate tax return is filed would be reported on a supplemented Form 8971 and Schedule A within 30 days after the distribution or perhaps on a year-by-year basis. That would be a much more workable rule. - 167 - ii. After-discovered and omitted property that is not reported on an (initial or supplemental) estate tax return before the estate tax statute of limitations runs (thus including all property and omissions discovered after the estate tax statute of limitations runs) would would be considered to be zero. Proposed Reg. §10.1014-10(c)(3)(ii). Thus, a very innocent omission by the executor could result in a very harsh penalty for beneficiaries. The statutory support for these zero basis rules). iii. Proposed Reg. §1.6035-1(f) would impose a seemingly open-ended requirement on a recipient of a Schedule A to in turn file a Schedule A when making any gift or other retransfer of the property that results wholly or partly in a carryover basis for the transferee. The preamble again imposes the reporting requirement only on an “executor,” and section 1014(a) itself applies. (A) In the preamble to the proposed regulations, Treasury recites that regulatory authority in section 6035(b)(2), but construes it in effect to apply only to a person with a legal or beneficial interest in - 168 - property who is required to file an estate tax return under section 6018(b) in some cases. , and most of the foregoing points were made... 11. The 2016 Greenbook renewed the proposal of past Greenbooks to also apply the consistency rules to property qualifying for an estate tax marital deduction and to - 169 - gifts reportable on a gift tax return. 12.. Notice 2017-38, 2017-30 I.R.B. 147, identified eight regulations that meet at least one of the first two criteria specified by the Executive Order, including the proposed section 2704 regulations (see Part XXIV.B.16.b on page 156), but not including the consistent basis regulations. 13. Now Part 3 of the 2017-2018 Priority Guidance Plan suggests that Treasury and the IRS will revisit the proposed basis consistency regulations in the context of “near-term burden reduction.” They cannot undo the ill-advised statute, but they could apply the statute in a reasonable way to provide a more practical reporting date and could reconsider the zero-basis rule and continuous reporting requirement that the statute does not appear to authorize. That would be “burden reduction.” E. The Section 2642(g) Regulations 1. This project first appeared in the 2007-2008 Plan. 2. The background of this project is section 564(a) of the 2001 Tax Act,). a. 
Before the 2001 Tax Act, similar extensions of time under Reg. §301.9100-3 (so-called “9100 relief”) were not available, because the deadlines for taking such actions were prescribed by the Code, not by the regulations. b.. c. Shortly after the enactment of the 2001 Tax Act, Notice 2001-50, 2001-2 C.B. 189, acknowledged section 2642(g)(1) and stated that taxpayers may seek extensions of time to take those actions under Reg. §301.9100-3. The - 170 - Service has received and granted many requests for such relief over the years since the publication of Notice 2001-50. 3. In addition, Rev. Proc. 2004-46, 2004-2 C.B. 142, provides a simplified method of dealing with pre-2001 gifts that meet the requirements of the annual gift tax exclusion under section 2503(b) but not the special “tax-vesting” requirements applicable for GST tax purposes to gifts in trust under section 2642(c)(2).. 4. Proposed Reg. §26.2642-7 (REG-147775-06) was released on April 16, 2008. When finalized, it will oust Reg. §301.9100-3 and become the exclusive basis for seeking the extensions of time Congress mandated in section 2642(g)(1) (except that the simplified procedure for dealing with pre-2001 annual exclusion gifts under Rev. Proc. 2004-46 will be retained). 5..” a.. b.. c. Noticeably, the proposed regulations seem to invite more deliberate weighing of all those factors than the identification of one or two dispositive factors as under Reg. §301.9100-3. 6. “Hindsight,” which could be both a form of bad faith and a way the interests of the Government are prejudiced, seems to be a focus of the proposed regulations. This is probably explained by the obvious distinctive feature of the GST tax – its effects are felt for generations, in contrast to most “9100 relief” elections that - 171 - affect only a current year or a few years..” 7.).” a..IV.H.1 beginning on page 184.. How can this be, when. F. Part 5: General Guidance Part 5 of the updated Priority Guidance Plan, titled “General Guidance,” like previous Plans and as noted above, “describes specific projects by subject area that will be the - 172 - focus of the balance of [Treasury’s and the Service’s] efforts this plan year.” The original 2017-2018 Plan listed 166 items, and the Second Quarter Update added eight for a total of 174 (down from 281 in the 2016-2017 Plan), including final regulations for the 3.8 percent tax on net investment income under section 1411 (item 20 under “General Tax Issues”), but apparently not including the issue of material participation by estates and trusts (see Part XXIV.A.3.c on page 145). The Plan includes the following three items (down from 12 last year) under the heading of “Gifts and Estates and Trusts,” all of which should be near completion: 1. Guidance on basis of grantor trust assets at death under §1014. a. This project 116 - 173 - nevertheless believed that “asserting that the conversion of a nongrantor trust to a grantor trust results in taxable income to the grantor would have an impact on non-abusive situations.” “.” That designation was continued in section 5.01(12) of Rev. Proc. 2016-3, 2016-1 I.R.B. 126, section 5.01(10) of Rev. Proc. 2017-3, 2017-1 I.R.B. 130, and section 5.01(8) or Rev. Proc. 2018-3, 2018 have been aimed only at foreign trusts, and so might this proposal first announced in the 2015- 2016 Priority Guidance Plan a month later on July 31, 2015. It is also possible that, even if the project originally had such a narrow focus, it has since been expanded in the Trump Administration. 2. 
Final regulations under §2032(a) regarding imposition of restrictions on estate assets during the six month alternate valuation period. Proposed regulations were published on November 18, 2011. a. This project first appeared in the 2007-2008 - 174 - made no change to Reg. §20.2032-1(c)(1), on which the Kohler court relied. But they invoked “the - 175 - general purpose of the statute” that was articulated in 1935, relied on in Flanders determining the alternate value (which would still subject a 6.2 percent difference as in Kohler to the new rules).) & (f)(3)). g. While the 2008 proposed regulations were)). i.. - 176 - ii. Example 1 reaches the same result with respect to the post-death formation of a limited partnership. h. The 2008 proposed regulations were to be effective April 25, 2008, the date the proposed regulations were published. The 2011-2009 Plan as an outgrowth of the project that led to the final amendments of the section 2053 regulations in October 2009. A connection between personal guarantees and present value concepts is elaborated in this paragraph in the preamble to the 2009 regulations (T.D. 9468, 74 FED. REG. 53652 (Oct. 20, 2009)):. b. But it is easy to see how the Treasury Department’s and the IRS’s “further consideration” of “the appropriate use of present value concepts” could turn its focus on the leveraged benefit in general that can be obtained when a claim or expense is paid long after the due date of the estate tax, but the additional estate tax reduction is credited as of, and earns interest from, that due date. i. Graegin loans (see Estate of Graegin v. Commissioner, T.C. Memo 1988-477) could be an obvious target of such consideration. ii. - 177 - “present value concepts” might make very little difference on paper. But it might require legislation to accomplish all these things. iii. Since claims or expenses are rarely paid exactly on the due date of the tax, the precise application of such principles might be exceedingly complicated.. It is well known that the Tax Court has held that section 7872 is the applicable provision for valuing an intra-family promissory note –. Perhaps this project was intended to resolve that anomaly, probably by regulations. c. Section 7872(i)(2) states: Under]. i.. ii. - 178 - below-market treatment under section 7872(e), and (3) with respect to section 7872(i)(2) itself, the loan is not made “with donative intent” because the transaction is a sale. iii. (although that would be an aggressive choice that undoubtedly would be roundly criticized). But, unless and until that happens, most estate planners have seen no reason why the estate tax value should not be fair market value, which, after all, is the general rule, subject to Reg. §20.2031-4, which states: The fair market value of notes, secured or unsecured, is presumed to be the amount of unpaid principal, plus interest accrued to the date of death, unless the executor establishes that the value is lower or that the notes are worthless. However, items of interest shall be separately stated on the estate tax return. If not returned at face value, plus accrued interest, satisfactory evidence must be submitted that the note is worth less than the unpaid amount (because of the interest rate, date of maturity, or other cause), or that the note is was related to these developments, and in any event it did not cite Proposed Reg. §20.7872-1. i. 
It is clear that the IRS has long been interested in the valuation of promissory notes, and at times has seemed to embrace a market interest rate standard. See Letter Ruling 200147028 (issued Aug. 9, 2001; released Nov. 23, 2001). ii. The interest of the IRS was especially apparent after the docketing of Estate of Davidson v. Commissioner, T.C. Docket No. 13748-13, in which the IRS asserted $2.8 billion in estate, gift, and generationskipping taxes owed. On July 6, 2015, the case was settled for just over $550 million. Addressing Mr. Davidson’s sales both in Chief Counsel Advice 201330033 (Feb. 24, 2012) and in its answer in the Tax Court, the IRS argued that the notes should be valued, not under section 7520, but under a willing buyer-willing seller standard that took account of Mr. Davidson’s health. See also Estate of Kite v. Commissioner, T.C. Memo. 2013-43. e. Promissory notes are frequently used in estate planning, and guidance could provide welcome clarity. - 179 - 3. Guidance on the gift tax effect of defined value formula clauses under §§2512 and 2511. a. This project: “)). Maybe, in this guidance project, Treasury was proposing to accept that invitation. Because of the widespread use of defined value formula clauses in estate planning, particularly (as we saw in 2012) to make use of increased exemptions such as those offered by the 2017 Tax Act, guidance could provide needed clarity on this point also. 4. Guidance under §§2522 and 2055 regarding the tax impact of certain irregularities in the administration of split-interest charitable trusts. This project was new in 2016. - 180 - 5. 2017 transfer would be due June 17, 2019 (June 15 being a Saturday). - 181 - - 182 - - 183 - - 184 -. Care is advisable in agreeing to be a U.S. agent in these circumstances. 6. And, under the heading of “General Tax Issues,” deletion of the project described as “Guidance regarding material participation by trusts and estates for purposes of §469.” This is the guidance that could have shed light on the application to trusts and estates of the 3.8 percent tax on net investment income mentioned in Part XXIV.A.3.c on page 145.? - 185 -. 2. Final regulations under §2642(g) regarding extensions of time to make allocations of the generation-skipping transfer tax exemption. Proposed regulations were published on April 17, 2008. a. This project first appeared in the 2007-2008 Plan. b. It has reappeared in the 2017-2018 Plan and is discussed in Part XXIV.E beginning on page 169. I. Other Notable Omissions 1. Decanting a. The 2011-2012 Priority Guidance Plan included, as item 13, “Notice on decanting of trusts under §§2501 and 2601.” This project was new in 2011- 2012, but it had been anticipated for some time, especially. 2018-3, 2018-1 I.R.B. 118, §§5.01(7), (12) & (13)): - 186 - tax - 187 - and the IRS have continued to study decanting. But decanting was omitted from the 2012-2013 Plan and from subsequent Plans. f. A new Uniform Trust Decanting Act (UTDA) was approved by the Uniform Law Commission at its annual conference in July 2015. The Act, available at,-2005 Plan, it was described as “Guidance regarding family trust companies.” ii. In the 2005-2006, 2006-2007, and 2007-2008 - 188 - income tax rule. iii. 
In the 2008-2009 and 2009-2010 Plans (published after Notice 2008-63, which is discussed below), the description was a more comprehensive “Revenue ruling regarding the consequences under various income, estate, gift, and generation-skipping transfer tax provisions of using a family owned company as a trustee of a trust.” iv. That reassurance of comprehensive treatment was maintained in the 2010-2011 Plan had attracted a large number of diverse public comments after the publication of Notice 2008- 63., a revenue procedure, regulations, or otherwise.” Rev. Proc. 2005-3, 2005-1 C.B. 118, §§5.07, 5.08 & 5.09. This designation has continued to the present. Rev. Proc. 2018-3, 2018-1 I.R.B. 118, §§5.01(9), (10) & (11). b. The proposed revenue ruling in question was released with Notice 2008-63 on July 11, 2008, and published at 2008-31 I - 189 - see grantor trust treatment addressed, in view of the omission of income tax from the formulation of this project on the then most recent 2007-2008 “from each trust [meaning “all trusts”?] for which it serves as trustee.” Anyone may serve on the DDC, but. ii. In Situation 2, an “Amendment Committee” with exclusive authority to amend the relevant sensitive limitations in the private company’s - 190 - possible requirement of a single independent “Discretionary Distribution Committee” for all trusts administered by the private trust company, possibly excluding a differently conceived body with a similar effect, a different committee for different trusts, and any exception for trusts for customers other than family members administered by familyowned trust companies that offer fiduciary services to the public. iii. The explicit prohibition of certain express or implied reciprocal agreements regarding distributions, possibly excluding such prohibitions derived from general fiduciary law. 3. The project relating to private trust companies was omitted from the 2014-2015 Plan. Unlike decanting, however, it cannot be said that private trust companies are a priority, or that the contemplated guidance may. J. General Requirements of the Regulatory Process 1. In addition to directing the action that led to the withdrawal of the proposed regulations under section 2704 (see Part XXIV.B.16 beginning on page 156), Executive Order 13789 of April 21, 2017, directed the Treasury Department and the Office of Management and Budget (OMB) to “review and, if appropriate, reconsider the scope and implementation of the existing exemption for certain tax regulations from the review process set forth in Executive Order 12866 and any successor order.” 2. Executive Order 12866, which was signed by President Clinton on September 30, 1993, requires generally that Treasury a. periodically provide the Office of Information and Regulatory Affairs (OIRA) within OMB with a list of its planned regulatory actions, including - 191 - those it believes are “significant regulatory actions” (section 6(a)(3)(A) of Executive Order 12866), b. for each “significant regulatory action,” provide to OIRA “(i) [t]he text of the draft regulatory action, together with a reasonably detailed description of the need for the regulatory action and an explanation of how the regulatory action will meet that need; and (ii) [a” (section 6(a)(3)(B) of Executive Order 12866), and c. for each “significant regulatory action” that is likely to have an annual effect on the economy of $100 million or more, include the following regulatory impact assessment (section 6(a)(3)(C) of Executive Order 12866, emphasis added): . 3. 
Under section 3(f) of Executive Order 12866, a “significant regulatory action” to which the requirements described in subparagraphs b and c above apply is defined; - 192 - . 4. The regulatory impact assessment, along with a draft of the proposed regulations, must be reviewed within OMB before a proposed regulation is published for public comment. In addition, the public must be informed of the content of the regulatory impact assessment and of any substantive changes made in the draft of the proposed regulations after that draft was submitted to OMB for review (section 6(a)(3)(E) of Executive Order 12866). 5. Obviously, that is not information we are accustomed to seeing in connection with tax regulations. Since a Memorandum of Agreement between Treasury and OMB in 1983, most tax regulations have been viewed as exempt from rigorous OMB review, partly because they have been viewed as interpreting a statute, and any burden on the economy therefore is attributable to the statute, not to the regulations. 6. A new Memorandum of Agreement, signed by the Administrator of OIRA and the General Counsel of the Treasury Department on April 11, 2018, supersedes the 1983 Memorandum of Agreement and generally affirms the application of Executive Order 12866 to tax regulatory actions. a. Under paragraph 3 of the new Memorandum of Agreement, the frequency of providing the list of planned tax regulatory actions referred to in subparagraph a above is quarterly. b. Under paragraph 8, the new Memorandum of Agreement is effective immediately, except that the regulatory impact assessment described in subparagraph c above will not be required until the earlier of April 11, 2019, or “when Treasury obtains reasonably sufficient resources (with the assistance of OMB) to perform the required analysis.” c. Under paragraph 4, the time allowed for OIRA review is generally 45 days, with the opportunity for Treasury and OIRA to agree to 10 business days “[t]o ensure timely implementation of the Tax Cuts and Jobs Act of 2017.”
Custom TextFieldStyle for iOS application
- benlau (Qt Champions 2016)
Hi,
I would like to modify the style of TextField in an iOS application. Here is the code:
TextField {
    style: TextFieldStyle {
        background: Item { }
    }
}
That will clear the border of the text field. However, it will also unset the selection handles: it won't show the "paste | select | select all" popup. That is not what I need.
I have checked the source code of Qt. The parameters for the selection handles are defined in QtQuick.Controls.Styles.iOS.TextFieldStyle, but they are private. I cannot declare a TextFieldStyle that inherits from iOS.TextFieldStyle.
So my question is: what is the preferred way to modify the style of a control component for the iOS platform? E.g. copy the private header into my program?
Thanks.
Hi benlau,
Unfortunately, I don't think there is a preferred way to modify a style for a specific platform.
What you could do (but it's really a hack) is something like this:
Add
import "qrc:/QtQuick/Controls/Styles/iOS" (or)
import "file:/QtQuick/Controls/Styles/iOS"
Remove the
import QtQuick.Controls.Style 1.0
This will import the iOS style objects and you can modify it like in your code above.
- benlau (Qt Champions 2016)
hi jseeQt,
Thanks for your reply. Too bad it doesn't have any preferred/standard method. I don't mind a hack solution. However, my code is not executed on iOS only; it also runs on desktop/Linux for preview and automated tests.
The desktop build won't bundle those QML files into the qrc, which means the import path on desktop and iOS will be different. I still need a way to detect the location of the QML plugin, inherit from TextFieldStyle and modify it.
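One possible workaround for that last point is to avoid the static import altogether and pick a style component at runtime based on Qt.platform.os, so the iOS-only file is never touched on the desktop build. This is only a rough, untested sketch; the two style file names are hypothetical placeholders:

TextField {
    // Qt.platform.os reports "ios" on iOS; any other value falls back to the desktop style
    style: Qt.platform.os === "ios"
           ? Qt.createComponent("IosTextFieldStyle.qml")
           : Qt.createComponent("DesktopTextFieldStyle.qml")
}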
My IT area - IT helpful thingies

Getting the BSSID in Windows XP using PowerShell

Note: If you don't know what BSSID is, you mostly need to know that it's the MAC address of the Access Point you're connected to.

Have you ever tried to get the BSSID in Windows XP? It's kind of difficult to do, especially when all the Google results end up showing things with the netsh wlan command. netsh wlan is part of a newer version than the one in Windows XP; it comes in Windows Vista and above. However, that's the fastest way to find out which BSSID you're using.

Here's a workaround I created with PowerShell:

$wmi = Get-WmiObject -class "MSNdis_80211_BaseServiceSetIdentifier" -namespace "root\WMI" -comp $env:computername
$mac = $wmi[0].Ndis80211MacAddress
$BSSID = ""
foreach ($num in $mac) {
    $digit = [convert]::ToString($num, 16)
    if ($digit.length -eq 1) {
        $digit = "0" + $digit.ToString()
    }
    $BSSID = $BSSID + $digit
}
Write-Host " BSSID: $BSSID " -BackgroundColor Black -ForegroundColor Yellow

Enjoy!

Unable to obtain hardware information for the selected machine

Here's an odd problem I've found in VMware vSphere 4.1 while trying to export a VM. When the export wizard was called, it ran a hardware inspection, and I couldn't get through it because it was showing this error:

Error: Unable to obtain hardware information for the selected machine.

Like any respectable sysadmin, I googled the solution, but I could only find vConverter stand-alone issues regarding VMs registered as Windows 2008 R2 and the software not being able to handle that, so a change to Vista 64 was needed. However, this was not the case. Digging in the VM settings, I was able to find the problem. It was in the NIC configuration. Somehow something changed it and left the distributed switch assigned, but not the port. I'm still looking for the reason why it would happen, but it could be that the specific port got deleted, or maybe vSphere is just messing with me. Anyway, keep your eyes open for this!

Installing VMware tools in Linux

Installing VMware tools in Windows is pretty easy with a wizard, but on Linux, if you're a first-timer, you'll have some issues. Here are the steps (they're pretty simple).

Select Install VMware Tools in the console, and then run the following commands:

mkdir /mnt/cdrom
mount /dev/cdrom /mnt/cdrom
cp /mnt/cdrom/VMwareTools-*.tar.gz /tmp/
cd /tmp/
tar xvfz VMwareTools-*.tar.gz
cd vmware-tools-distrib/
sudo ./vmware-install.pl

From here on, just follow the wizard and you're all set. Basically, these commands do the following: create a folder where to mount the CD, mount the actual CD, copy the zipped file with the installer to the temp folder, unzip the file, and run the installer. Let me know if you have any doubts!

Bypassing your proxy at work

I bet you wanted to do this since you got hired and saw that there are filters for certain webpages. So here's the thing: you can create a tunnel between your workstation and a host at home using SSH via the SSL port.

Here's the classic scenario: you have your workstation, and in order to get to the internet, you need to go through an ISA server working as a proxy server filtering your content. Here's where your Facebook, YouTube, IRC, and so on, get filtered.

We are going to create a tunnel between your workstation and your home computer, and redirect all your internet traffic from your workstation to your home connection, so you don't have any content filtering involved. Here's the list of components that are part of this solution:

- Workstation running Windows (XP/7)
- PuTTY
- Cntlm
- Home computer running OpenSSH (Windows/Linux)
- Router at home with port forwarding capabilities
- Patience

So let's get started at home.

As I said, you need a host running OpenSSH. If you're using Linux, you have the simplest path to install it, and probably you already have it. If you're using Windows, you have two options that may suit you: try to install OpenSSH in Windows, or install a virtual machine running Linux. I consider the second option to be the easiest one, not only because SSH is way simpler to configure, manage and install in Linux, but also because it's always nice to have a Linux box at home to mess around with. If you're thinking about the first one, you can use the guide "Installing OpenSSH for Windows 2003 Server - How to get it working"; even though it is aimed at Windows Server 2003, it works for Windows XP.

At the end of the day, what we need is a box listening on port 22 and ready for connections.

Once we have our SSH host, we are going to need our home router to forward connections from outside to our SSH host. Since the ISA server only lets encrypted traffic go through port 443 (SSL), we need to forward incoming traffic from the internet on port 443 to port 22 on our SSH host. There are many ways to do this depending on which router you have. Basically, every port forward is going to ask you for the following info:

Destination IP: your SSH host IP
Source Port: 443
Destination Port: 22

If you want the exact steps for your specific router, there are sites with port-forwarding guides for a LOT of routers.

So now you have your home computer waiting for connections. What you need now is to find out your public IP so you'll know where to connect to from your workstation. Just Google "What's my IP" and you'll find a zillion results.

Now let's take care of our workstation.

On our workstation we're going to use two pieces of software: PuTTY and Cntlm. PuTTY is the software that actually builds the tunnel, but there's a problem: the ISA server uses NTLM for authentication, which PuTTY doesn't support. That's where Cntlm comes in as a solution. Cntlm is a proxy we install on our own work computer to solve the authentication problem.

Once you install it, you'll see that there's not much to be done but to edit cntlm.ini. Here, you will need to change a few lines:

Username    Your username at work (e.g. name.lastname)
Domain      Domain your workstation is using (e.g. contoso.com)
Password    Password for that same user (e.g. P@ssw0rd)
Proxy       Work ISA proxy server (e.g. proxy.contoso.com:8080)
Listen      5555

Leave the rest of the ini file unedited. Save it and close it. Now just run cntlm.exe. Don't worry, you won't see anything because it runs in the background; you can see it in the process list in Task Manager. Now you have Cntlm listening on port 5555 and it will deal with your proxy authentication.

Now let's deal with the final part of this guide, building the tunnel with PuTTY. First of all, install PuTTY if you don't have it and open it.

You start in the Session category. Write your public IP in the "Host Name" text box and 443 in "Port". Now go to Connection > Proxy. Here you only need to select Proxy Type: HTTP and enter the local Cntlm proxy you just configured. Leave the rest as it is. Now move on to SSH > Tunnels. Here you have to write 1080 in "Source Port", select Dynamic and click Add.

Now let's go back to the Session category at the top of the list. In Saved Sessions, write something descriptive for this tunnel like "Home" or "SSH tunnel", whatever you want, then click Save. This way, next time you only have to double-click the session and it will connect automatically.

And that's it! A new terminal window will open, which means you're able to connect to your home SSH host and your tunnel is ready for use. Now you will need to configure your browser proxy settings to point at the dynamic forward you created on port 1080. Try to open a once-filtered webpage, and if it opens, you'll be using your home connection! I hope you liked this guide, and let me know if you have any doubts!

Creating a centralized syslog

Remember that I told you about this guide? So here it is. Basically, it explains step by step how to install Adiscon LogAnalyzer on an Ubuntu box. Let me know if something's not clear enough.

The Prep:

Before we dive into setting up any of this we need to do a little prep work. If you are going to be looking at these logs in a web browser, then it might be good if the time stamps you are seeing reflect the appropriate timezone.

Suppose you are in the eastern timezone:

~# cp /usr/share/zoneinfo/EST5EDT /etc/localtime

Using your favorite text editor, create the file /etc/cron.daily/ntpdate and insert the following:

ntpdate ntp.ubuntu.com

Save the file, then:

~# chmod 755 /etc/cron.daily/ntpdate

Now let's run it to get our time corrected:

~# /etc/cron.daily/ntpdate

Lastly, verify:

~# date

You should see your current time here.

The Logging:

On Ubuntu, just run tasksel and pick LAMP from there. After installing the LAMP stack, don't forget to manually restart Apache. I would suggest also setting up phpMyAdmin to simplify managing MySQL.

Ubuntu now comes with rsyslog as its de facto logger, but we need to add a little bit of additional functionality to it. Namely, we need to add MySQL output support and add in the Reliable Event Logging Protocol (RELP):

~# apt-get install rsyslog-mysql rsyslog-relp

During this install's process you will be prompted to create the tables that are needed in MySQL. Do this. You will then be asked for your MySQL root password, followed by being asked to create a password for rsyslog to use. This is the password that rsyslog will use in its config files.

Now we need to make a couple of tweaks to the config files of rsyslog. Ubuntu takes advantage of the fact that rsyslog can use multiple config files that are merged into one "config." You have /etc/rsyslog.conf, but you also have a directory named /etc/rsyslog.d/ that contains additional configs. In there you will now see one named mysql.conf that contains the needed info to dump our logs into the database. To turn on accepting remote logs, though, we still have to uncomment a couple of lines in /etc/rsyslog.conf:

# provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514
# provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514

Now let's apply our changes:

~# service rsyslog restart

It is a little rough to read if your terminal is not wide enough, but you can verify that the logs are going to the database with:

~# mysql -p -e "SELECT * FROM Syslog.SystemEvents;"

RELP: Reliable Event Logging Protocol

Use your favorite editor, create /etc/rsyslog.d/relp.conf and enter the following in it:

$ModLoad imrelp
$InputRELPServerRun 20514

Adding reliability to your logging systems

One of the many helpful articles at rsyslog.com covers this. First, create a work directory:

~# mkdir -p /var/rsyslog/work

Now we need to add the following to /etc/rsyslog.conf or /etc/rsyslog.d/mysql.conf:

# Buffering stuff:
$WorkDirectory /var/rsyslog/work # default location for work (spool) files
$ActionQueueType LinkedList # use asynchronous processing
$ActionQueueFileName dbq # set file name, also enables disk mode
$ActionResumeRetryCount -1 # infinite retries on insert failure

And now we need to restart rsyslog:

~# service rsyslog restart

The Viewing:

To view the info that we are now dumping into MySQL via the web, we need to set up LogAnalyzer. Step one of this is to download the software from the Adiscon LogAnalyzer site. As of this writing, the newest version is v3.2.1.

~# wget
~# tar -xzf loganalyzer-*.*.*.tar.gz
~# cd loganalyzer-3.0.1
~/loganalyzer-3.0.1# mkdir /var/www/logs
~/loganalyzer-3.0.1# cp -R src/* /var/www/logs/
~/loganalyzer-3.0.1# cp contrib/* /var/www/logs/
~/loganalyzer-3.0.1# cd /var/www/logs/
/var/www/logs# chmod +x configure.sh secure.sh
/var/www/logs# ./configure.sh

To enable the authentication part of LogAnalyzer, we need to make an empty database for users to be stored in and grant privileges on it.

/var/www/logs# mysql -p
mysql> create database LogAnalyzerUsers;
mysql> show databases;
mysql> grant all on LogAnalyzerUsers.* to LAUser@'localhost' identified by 'password';
mysql> quit

Now, go to the LogAnalyzer URL on your web server and you will be pointed to the installation script, which will guide you through the process of setting up LogAnalyzer.

Basic Configuration

You can set several basic options here.

- Number of syslog messages per page = 50 (default). This is the number of syslog messages displayed on each page. You can increase the value (makes LogAnalyzer slower) or decrease the value (makes it faster).
- Message character limit for the main view = 80 (default). Set the number of characters per message which will be shown in the last column of the main view. Full messages can be reviewed by hovering the mouse over them. Many folks prefer to use a setting of "0", which means complete messages will be displayed.
- Show message details popup = yes (default). Note that many people find the popups intrusive and prefer to disable them. Use "no" in this case.
- During the setup you will also be prompted to enable the user database. Do so and enter the information that is requested.
- A couple of pages later you will be prompted for the main (admin) user.
- The defaults on Step 7 demonstrate that it is possible to use this without the database backend. We need to change this to match our setup though. Name the source something logical, seeing as it is going to be the compiled logs from all your servers:
  - Source Type = MYSQL Native
  - Select View = Syslog Fields
  - Table type = MonitorWare
  - Database Host = localhost
  - Database Name = Syslog
  - Database Tablename = SystemEvents
  - Database User = rsyslog
  - Enable Row Counting = no

Once you finish up, log into your new site and have a look at what has been logged on your server so far.

Terminal Services Gateway

We were looking for a solution where we could centralize every RDP session coming from outside (especially for vendor access), and we ended up with two choices: Citrix or Terminal Services Gateway (now known as Remote Desktop Services Gateway). Considering we already have licenses for Terminal Services, we are taking the latter. If you don't know what it is, here's a brief explanation.

Windows Server Terminal Services uses Remote Desktop Protocol (RDP) to enable the connections from clients to the terminal server, which uses port 3389. If you need to access a terminal server from outside the internal network (intranet), you have two options for doing so. You can either enable port 3389 through your firewall to specific servers (which isn't a good idea), or, more commonly, clients connect to the corporate network via VPN, which can then enable the RDP session in a secure manner.

In general, technologies are moving away from requiring VPN connections. For example, remote procedure call (RPC) over HTTP Secure (HTTPS) is used for Microsoft Exchange Server connections and Microsoft Office SharePoint Server and Microsoft Office Groove access. Windows Server 2008 includes Terminal Services (TS) Gateway, a new technology that allows secure RDP connections from outside a corporate intranet without requiring a VPN connection.

TS Gateway allows RDP traffic to be encapsulated in HTTPS. Essentially the client outside the network makes a configuration change on their Remote Desktop client to instruct the client to communicate via a TS Gateway. The RDP traffic on the client is encapsulated in HTTPS, encrypted using the TS Gateway's Secure Sockets Layer (SSL) certificate, and sent to the TS Gateway. The TS Gateway extracts the RDP traffic from the HTTPS and forwards it on to the destination target. The Remote Desktop client sends responses via the TS Gateway in normal RDP, and once again the TS Gateway encapsulates the RDP in HTTPS and sends it back to the RDP client.

Configuring a system to use TS Gateway is simple. Note that the RDP target can be any Remote Desktop target; it doesn't have to be a Server 2008 terminal server, and a system can connect to any target via the TS Gateway.

You would normally place the TS Gateway in your network's demilitarized zone (DMZ). However, an alternative option is to place a Microsoft ISA Server system or other SSL terminator in the DMZ and place the TS Gateway in the internal network to perform the RDP encapsulation and extraction duties.

VMware Syslog Collector

I was trying to find a way to create a syslog server where I could centralize all my ESX hosts' logs. I was between options (like Kiwi, phpsyslog-ng, etc.) when I decided to do it with Adiscon LogAnalyzer, which is a free and open source solution. I'll post a guide for its installation in another post (because I actually installed it successfully all the way).

When I was about to add the ESX hosts to my sources list in the syslog server, I found out that vSphere 5 contains a new feature called VMware Syslog Collector, and since we'll be migrating to that version in a few weeks, it makes no sense to move on with my LogAnalyzer.

If you don't know what Syslog Collector is, let me give you a brief intro. VMware Syslog Collector is a tool that provides a centralized repository for logs from multiple ESX/ESXi hosts. Having installed Syslog Collector, you can redirect every log entry from your ESX/ESXi hosts to this repository on the network, instead of hosting them locally, easing up our troubleshooting jobs. This is extremely important considering that in ESXi logs are hosted locally for a very limited amount of time.

Syslog Collector can be installed on the same server as vCenter Server, or on a separate one that can connect to vCenter Server, like our current Update Manager server. If you already have vSphere 5 and want to install it, you can follow the post on the VMware Blog explaining it. Let me know what you think, and if you installed it, share your experience.

UDP quick reference

You have to remember the differences between TCP and UDP. Here's a quick definition of UDP:

UDP is the User Datagram Protocol. It is used to send individual packets across an IP network, in an unreliable fashion. This means that successful, error-free delivery of a message is not guaranteed.

So remember that UDP is not reliable but it's fast! Examples of protocols that use UDP are TFTP, DNS (it can use TCP too!), DHCP, SNMP, RCP, NFS.

If you're really interested in its standard and how it's defined, you should definitely read the RFC: RFC 768 - User Datagram Protocol.

And finally a couple of jokes:

"A UDP packet walks into a bar, no one acknowledges him. A TCP packet walks into a bar twice because no one acknowledged him the first time."

"The best thing about UDP jokes is I don't care if you get it or not."

Error registering vmx

I was trying to bring an ESX host into maintenance mode, but it kept failing to migrate a box using vMotion. That VM being the only one left, I suspended it and migrated it manually. Everything worked OK.

When I came back to that box to power it on again, this error showed up:

This virtual machine cannot be powered on because its working directory is not valid. Use the configuration editor to set a valid working directory, and then try again.

Reading the "not valid working directory" part, I proceeded to remove the VM from inventory so I could add it again by manually registering the vmx file, only when I opened the datastore the folder for that VM was empty. Enter the WTF.

I tried refreshing everything, but the result was the same every time I browsed the datastore. So finally I opened a new vClient session to the specific ESX host, opened the datastore, and there they were, all the files for that VM. "Everything's good now," I thought ingenuously.

I tried to register the vmx, but suddenly the same error popped up. This time I found a KB article from VMware, specifically where it says that this issue may occur if the name of the virtual machine directory has a space at the beginning or the end of the directory name. You may also experience problems powering on a virtual machine if there are spaces in the virtual machine disk's name.

I double-checked and bingo! There was a space at the end of the folder name. I fixed that and the vmx was registered with no further problems.

Spanning Tree Protocol

If you ever studied CCNA, I bet you had trouble understanding Spanning Tree Protocol (IEEE 802.1D). I know I did! Here's an explanation of it so you can clear your doubts.

1 - Basic concepts

Spanning Tree Protocol (STP) is an OSI layer 2 protocol (Data Link). Its function is to deal with the presence of loops due to the existence of redundant links. This protocol grants switches the ability to enable or disable the connection links automatically, guaranteeing a loop-free topology.

Imagine two LANs interconnected, with host n sending a frame F to an unknown destination. This is what would happen:

- Bridge A sends this frame to LAN 2.
- Bridge B sees frame F in LAN 2 (with unknown destination) and sends it to LAN 1.
- Bridge A does the same and the loop begins.

When there are loops in our network topology, data link layer devices resend frames via broadcast and multicast, since there's no TTL field in layer 2 like in layer 3. A great amount of bandwidth gets consumed and, in many cases, the network stops working. The solution consists in allowing redundant physical links to exist, but creating a loop-free logical topology. STP allows only one active way at a time between two network devices (this prevents loops from happening) but keeps redundant links reserved, to activate them in case something fails in the main way.

The spanning tree is valid until any change happens in the topology, which it detects automatically. STP maximum duration is 5 minutes. When one of these changes occurs, the root bridge redefines the STP topology or a new root bridge is elected.

2 - How it works

This algorithm changes a meshed physical network with loops into a logical tree-shaped loop-free one. Bridges talk to each other using configuration messages called Bridge Protocol Data Units (BPDU).

3 - Port status

- Blocking: the port can only receive BPDUs. Frames are discarded and the MAC address table doesn't get updated.
- Listening: the switch processes BPDUs and awaits possible new information that would cause it to return to the blocking state.
- Learning: while the port does not yet forward frames (packets), it does learn source addresses from frames received and adds them to the filtering database (switching database).
- Forwarding: a port receiving and sending data, normal operation. STP still monitors incoming BPDUs that would indicate it should return to the blocking state to prevent a loop.
- Disabled: not strictly part of STP; a network administrator can manually disable a port.

4 - STP in theory

The algorithm has three parts, and it requires that every switch has an ID and that it knows each port's status:

1. The bridge with the smallest ID gets elected as root.
2. Each bridge calculates the shortest path to the root bridge and marks that port as a root port.
3. For each LAN, every bridge connected to it must agree on which of them will be the designated bridge. In order to do it, BPDUs are interchanged. The designated bridge will be:
   - the one closest to the root bridge, or...
   - the one closest to the root bridge and with the lowest ID.

5 - STP in real life

Bridges synchronize by sending each other packets called BPDUs that contain:

- Sender ID
- Suspected root bridge ID
- Distance between the switch and the root bridge

If you still have some doubts, please take a look at the video that Radia Perlman, inventor of the STP algorithm, created.

PowerOff-VM function

Following my previous post, here's the PowerOff-VM function:

Function PowerOff-VM($vm, $id){
    $getvm = Get-VM -Id $id
    Shutdown-VMGuest -VM $getvm -Confirm:$false | Out-Null
    Write-Host "$vm is stopping!" -ForegroundColor Yellow
    sleep 10

    do {
        $vmview = Get-VM -Id $id | Get-View
        $getvm = Get-VM -Id $id
        $powerstate = $getvm.PowerState
        $toolsstatus = $vmview.Guest.ToolsStatus

        Write-Host "$vm is stopping with powerstate $powerstate and toolsStatus $toolsstatus!" -ForegroundColor Yellow
        sleep 5
    } until ($powerstate -match "PoweredOff")
    Write-Host "$vm is powered-off"
}

This function is a little more straightforward. It basically asks the VM guest to shut down and works from there. As with the previous function, it takes the same two variables ($vm and $id) for the same purposes.

If you plan on scripting for powering off a VM, don't use Stop-VM! It powers off the VM cold, and it will give an error next time you boot. Only use it if you don't care about the next boot (e.g. the VM is in IndependentNonPersistent mode).

PowerOn-VM function

Messing around with PowerCLI, I always had problems powering VMs on and off cleanly/reliably. However, after a few tries, I've come up with two functions I use a lot now for almost every script that involves those two actions. Today I'm going to post the first one.

Here's the function for PowerOn:

Function PowerOn-VM($vm, $id){
    $getvm = Get-VM -Id $id
    Start-VM -VM $getvm -Confirm:$false -RunAsync | Out-Null
    Write-Host "$vm is starting!" -ForegroundColor Yellow
    sleep 10

    do {
        $vmview = Get-VM -Id $id | Get-View
        $getvm = Get-VM -Id $id
        $powerstate = $getvm.PowerState
        $toolsstatus = $vmview.Guest.ToolsStatus

        Write-Host "$vm is starting, powerstate is $powerstate and toolsstatus is $toolsstatus!" -ForegroundColor Yellow
        sleep 5
        # NOTE that if the tools in the VM get the state toolsNotRunning this loop will never end. There needs to be a timekeeper variable to make sure the loop ends
    } until (($powerstate -match "PoweredOn") -and (($toolsstatus -match "toolsOld") -or ($toolsstatus -match "toolsOk") -or ($toolsstatus -match "toolsNotInstalled")))

    if (($toolsstatus -match "toolsOk") -or ($toolsstatus -match "toolsOld")){

The end of the function was lost in the original post; it finishes by setting a result (OK or ERROR) that you can use as you'd like in the main script. ERROR will happen when VMware Tools are not detected and $toolsstatus is "toolsNotInstalled".

In order to run this function, you need to pass two variables: $vm and $id. Here are the reasons for using these variables:

- $id: I used this variable to import the Id of a VM (e.g. VirtualMachine-vm-1234) because this way I could avoid having problems when I have two VMs with the same name. It's a bit tedious to add it to a script, but it was needed.
- $vm: This one is pretty simple. Originally I was importing the VM name from a CSV file, and that's why I left it, basically for aesthetic purposes (i.e. the Write-Host cmdlets).

Here's an example for this function:

$vm = Get-VM -Name VM2134 | Select Name
$id = Get-VM -Name VM2134 | Select Id

PowerOn-VM $vm $id

Let me know if you think I can improve it or if you have any doubts.

Welcome

Hello all! I'm Aresius and I'm an IT guy. I've been working in IT for a few years now, and I've learned a thing or two. I've worked with a lot of different stuff including Windows/Linux servers, Cisco routing and switching, Cisco ASA, Cisco VoIP, VMware ESX and vCenter, PowerCLI, SAN, NAS, Exchange, OCS, AD, and so on.

I'll try to post about stuff that has been useful for me, and hopefully, for you!

Thanks for stopping by, and please, come back!
China cheap plum preserved importers dried fruit price
US $0.02-0.06 / Piece
400000 Pieces (Min. Order)
Import China Products FD fruit Freeze Dried yellows mango dice
US $10.0-30.0 / Kilograms
18000 Kilograms (Min. Order)
Fashion Best-Selling import dried fruit
300 Kilograms (Min. Order)
dates fruit importers
US $1.4-1.5 / Kilogram
6000 Kilograms (Min. Order)
2016 In season natural best freeze import dried fruit
US $0.5-6 / Piece
500 Kilograms (Min. Order)
100% dehydrated Pineapple chips dried fruits importers
US $2.5-3.5 / Bag
5000 Bags (Min. Order)
dried fruit importers
150 Kilograms (Min. Order)
Imports Fruits Wholesale Goji Berries, Healthy Snack Goji Cream
US $4-11 / Kilogram
1000 Kilograms (Min. Order)
dry fruits importers
US $8-22 / Carton
500 Cartons (Min. Order)
fresh fruit importers fresh green rambutan freeze dried rambutan
US $5-23 / Kilogram
300 Kilograms (Min. Order)
xinjiang good best raisins importer dried fruit
US $1202-2999 / Ton
1 Ton (Min. Order)
China supplier import kiwi fruit
US $2950.0-3400.0 / Kilograms
1000 Kilograms (Min. Order)
Import China Products FD Dried Fruits Freeze Dried Peaches
US $22-25 / Kilogram
1 Kilogram (Min. Order)
Goji for dried fruits importers
US $5-20 / Kilogram
500 Kilograms (Min. Order)
import dried fruit / Palarich bulk supply crunchy Fuji apple chips with cheap price
1 Twenty-Foot Container (Min. Order)
import new crop dried kiwi fruit with high quality and good price
US $2200-4000 / Ton
18 Tons (Min. Order)
High Nutritional Value Premium Quality Dried Fruit
US $10-15 / Bag
1 Bag (Min. Order)
Juicy and Sweet Dried Persimmon Fruit
US $1-2 / Kilogram
1000 Kilograms (Min. Order)
Surri promotion price fruit and vegetable dryer
US $4000-5000 / Set
1 Set (Min. Order)
100% goji juice concentrate medlar fruits
US $3500-4500 / Metric Ton
1 Metric Ton (Min. Order)
Bulk wholesale halal dried kiwi fruit factory
US $2-4 / Kilogram
500 Kilograms (Min. Order)
Dried Apple Rings/Dried Fruit/Preserved Fruit
US $2000-3000 / Metric Ton
1 Metric Ton (Min. Order)
Hot Sale Wolfberry Medlar Dried Organic Goji Berry Import Goji berries,China medlar fruits
US $6-20 / Kilogram
100 Kilograms (Min. Order)
Industrial automatic air bubble fruit and vegetable washing machine
US $1500-3000 / Piece
1 Piece (Min. Order)
China Famous Brand Zhenxin Canned Strawberry Fruit in Light Syrup for Import and Export
US $5-20 / Carton
100 Cartons (Min. Order)
2017 Hot Sale Natural Dried Papaya/Dried Fruit With Lower Price
US $1380-4500 / Ton
1 Ton (Min. Order)
import goji berry bulk goji organic food frozen fruits
US $5.79-6.03 / Piece
1 Piece (Min. Order)
Dried pineapple fruits, import China porducts
US $4-6 / Kilogram
1 Metric Ton (Min. Order)
Fresh fruits and vegetables washing & Peeling Machine , Potato peeler and washer
US $300-3000 / Set
1 Set (Min. Order)
washing machine dryer/fruits washing machines
US $2500-10000 / Set
1 Set (Min. Order)
Low power consumption fruit and vegetable juicers with imported compressor
US $550.0-550.0 / Unit
1 Unit (Min. Order)
- About product and suppliers:
Alibaba.com offers 47,455 import fruits products. About 4% of these are fresh apples, 1% are dried fruit, and 1% are fresh citrus fruit. A wide variety of import fruits options are available to you, such as bag, bulk, and can (tinned); sour, sweet, and salty; and semi-soft or hard, with free or paid samples. There are 47,455 import fruits suppliers, mainly located in Asia. The top supplying country is China (Mainland), which supplies 100% of import fruits. Import fruits products are most popular in North America, Western Europe, and South America. You can ensure product safety by selecting from certified suppliers, including 5,418 with Other, 2,765 with ISO9001, and 1,153 with ISO22000 certification.
Suppose you want to show a message for a few seconds to notify your users about a particular action. What method will you use? We have noticed many iPhone app developers use UIAlertView to achieve this goal. While UIAlertView is a good option for showing messages and titles, the problem is that the user has to close the alert message manually.
Normally, it looks like this:
Now, what if you want to show the same message for some time, after which it should hide automatically? The answer is an iOS toast message, which is used for exactly this purpose.
Contents
What is Toast Message?
A toast message in iOS is a small, short-lived popup that provides a small bite of information to the users.
It looks something like this:
In this iOS Swift tutorial, we will learn not only how to implement an iOS toast message in your iPhone app, but also how to add animation to it.
Steps to Implement iOS Toast Message With Animation
In this demo, we will show Toast Alert with Blur animation when the username or password field is left empty and a user presses the Login button.
First, go to XCode and create a new project.
Now, select the project type as a Single View Application and click on next.
In the next tab, set the project name and move forward.
Once you create a new project, go to Main.Storyboard, select ViewController, and drop UITextField from the “Show Object Library” section.
Next, again go to Main.Storyboard, select ViewController and drop UIButton from the “Show Object Library” section.
After adding the button, the final UI should look like this:
Once you have the UI ready similar to the image above, it’s time to create an Event for the login button.
Now, once you create the event, it’s time to implement the animation on toast.
On the login button touch, call the extension method which we created earlier
Now, you can repeat this step for the password as well.
import UIKit

//MARK: Add Toast method function in UIView Extension so can use in whole project.
extension UIView {
    func showToast(toastMessage: String, duration: CGFloat) {
        //View to blur bg and stopping user interaction
        let bgView = UIView(frame: self.frame)
        bgView.backgroundColor = UIColor(red: CGFloat(255.0/255.0), green: CGFloat(255.0/255.0), blue: CGFloat(255.0/255.0), alpha: CGFloat(0.6))
        bgView.tag = 555

        //Label for showing toast text
        let lblMessage = UILabel()
        lblMessage.numberOfLines = 0
        lblMessage.lineBreakMode = .byWordWrapping
        lblMessage.textColor = .white
        lblMessage.backgroundColor = .black
        lblMessage.textAlignment = .center
        lblMessage.font = UIFont.init(name: "Helvetica Neue", size: 17)
        lblMessage.text = toastMessage

        //calculating toast label frame as per message content
        let maxSizeTitle: CGSize = CGSize(width: self.bounds.size.width - 16, height: self.bounds.size.height)
        var expectedSizeTitle: CGSize = lblMessage.sizeThatFits(maxSizeTitle)
        // UILabel can return a size larger than the max size when the number of lines is 1
        expectedSizeTitle = CGSize(width: maxSizeTitle.width.getminimum(value2: expectedSizeTitle.width), height: maxSizeTitle.height.getminimum(value2: expectedSizeTitle.height))
        lblMessage.frame = CGRect(x: ((self.bounds.size.width)/2) - ((expectedSizeTitle.width + 16)/2), y: (self.bounds.size.height/2) - ((expectedSizeTitle.height + 16)/2), width: expectedSizeTitle.width + 16, height: expectedSizeTitle.height + 16)
        lblMessage.layer.cornerRadius = 8
        lblMessage.layer.masksToBounds = true
        lblMessage.padding = UIEdgeInsets(top: 4, left: 4, bottom: 4, right: 4)
        bgView.addSubview(lblMessage)
        self.addSubview(bgView)

        lblMessage.alpha = 0
        UIView.animateKeyframes(withDuration: TimeInterval(duration), delay: 0, options: [], animations: {
            lblMessage.alpha = 1
        }, completion: { sucess in
            UIView.animate(withDuration: TimeInterval(duration), delay: 8, options: [], animations: {
                lblMessage.alpha = 0
                bgView.alpha = 0
            })
            bgView.removeFromSuperview()
        })
    }
}

extension CGFloat {
    func getminimum(value2: CGFloat) -> CGFloat {
        if self < value2 {
            return self
        } else {
            return value2
        }
    }
}

//MARK: Extension on UILabel for adding insets - for adding padding in top, bottom, right, left.
extension UILabel {
    private struct AssociatedKeys {
        static var padding = UIEdgeInsets()
    }

    var padding: UIEdgeInsets? {
        get {
            return objc_getAssociatedObject(self, &AssociatedKeys.padding) as? UIEdgeInsets
        }
        set {
            if let newValue = newValue {
                objc_setAssociatedObject(self, &AssociatedKeys.padding, newValue as UIEdgeInsets!, objc_AssociationPolicy.OBJC_ASSOCIATION_RETAIN_NONATOMIC)
            }
        }
    }

    override open func draw(_ rect: CGRect) {
        if let insets = padding {
            self.drawText(in: UIEdgeInsetsInsetRect(rect, insets))
        } else {
            self.drawText(in: rect)
        }
    }

    override open var intrinsicContentSize: CGSize {
        get {
            var contentSize = super.intrinsicContentSize
            if let insets = padding {
                contentSize.height += insets.top + insets.bottom
                contentSize.width += insets.left + insets.right
            }
            return contentSize
        }
    }
}
@IBOutlet weak var txtPassword: UITextField!
@IBOutlet weak var txtUserName: UITextField!

//MARK: Login Action
@IBAction func btnLoginPressed(_ sender: UIButton) {
    if txtUserName.text == "" {
        //Call UIView extension method
        self.view.showToast(toastMessage: "Please enter a username, because it is necessary for logging in.", duration: 1.1)
    } else if txtPassword.text == "" {
        self.view.showToast(toastMessage: "Please enter a password.", duration: 1.1)
    } else {
        //Do login
    }
}
And Done!
Frequently Asked Questions
Where can I use toast messages in iOS?
There are several use-cases of toast messages. You can use them for displaying the consequences of actions triggered by your users. For instance, you can notify a user upon successful sign-in by displaying a welcome toast message on iOS.
What’s the difference between the iPhone app toast message and Snackbar?
While a Snackbar is similar to a swift toast message, the only difference being that it’s more versatile and interactive. For instance, it allows users to undo any action with just a single tap. A Snackbar is generally used to communicate feedback on user actions.
Can I display a toast message in Android?
Yes, you can display a toast message similar to the one showed here in an Android by following our tutorial on displaying custom toast messages in your app using Android.
Conclusion
Now that you know how to display a toast message with animation in an iPhone app, you can play around with different animations. For instance, you can make an animation to ease out or ease in, run it in reverse, or set it to repeat. Once you feel ready to continue your journey towards the next stage of iPhone app development, start learning other components.
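As a small, hedged example of what that could look like, the fade-in step from the extension above could be given an ease-out curve and made to auto-reverse and repeat by changing the animation options; the duration and the label name here are just placeholders:

// Fade the toast label in with an ease-out curve, then reverse and repeat the animation
UIView.animate(withDuration: 0.4,
               delay: 0,
               options: [.curveEaseOut, .autoreverse, .repeat],
               animations: {
    lblMessage.alpha = 1
}, completion: nil)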
Meanwhile, if you’ve got an idea for iPhone app development and would like to discuss it with us, drop us a line with your app requirements. Our business analyst will get back to you within 48 hours.
You may also like,
- How to Display Custom Toast Message in Your App Using Android Toast
- How to Integrate Twitter REST API in Swift
- How to Use UIPageControl To Add Walkthrough Introduction Screens in iPhone App
In this guide I will walk through how to get the .NET framework to download and install on-the-fly in an Inno Setup installer.
It works in 3 steps:
- Detect if the desired .NET framework is installed
- Download the .NET Framework bootstrap installer with Inno Download Plugin
- Run the bootstrap installer in quiet mode, which will download and install the .NET Framework. This is better than downloading the full installer since it only downloads the files it needs for your platform.
Detecting if the desired .NET Framework is installed
First you need to determine what registry key to check to see if your .NET version is installed. There is a good Stack Overflow answer that covers this, though the MSDN page is more likely to be up to date. There is also a good article on how to apply that in Inno Setup’s Pascal scripting language.
I wrote mine to check for .NET 4.5. I use this helper function, in the [Code] section of the .iss file:
function Framework45IsNotInstalled(): Boolean;
var
bSuccess: Boolean;
regVersion: Cardinal;
begin
Result := True;
bSuccess := RegQueryDWordValue(HKLM, 'Software\Microsoft\NET Framework Setup\NDP\v4\Full', 'Release', regVersion);
if (True = bSuccess) and (regVersion >= 378389) then begin
Result := False;
end;
end;
Downloading the bootstrapper
Next we need to find out where to download the installer from. The .NET Framework Deployment Guide for Developers has a great list of stable download links for the bootstrapper (web) installers. I picked 4.5.2, as it still supports code targeting .NET 4.5, and we might as well give the users the latest we can. This link should prompt you to download an .exe file directly; if it’s bringing you to a download webpage, it won’t work.
Now install Inno Download Plugin and put this at the top of your .iss file:
#include <idp.iss>
Put this in the [Code] section:
procedure InitializeWizard;
begin
if Framework45IsNotInstalled() then
begin
idpAddFile('', ExpandConstant('{tmp}\NetFrameworkInstaller.exe'));
idpDownloadAfter(wpReady);
end;
end;
This “InitializeWizard” method is a special one that InnoSetup calls when first setting up its wizard pages. We call our helper function to detect if the framework is installed, then schedule a download step after the “Ready to install” page. We include our direct download link determined earlier, and save it in a temp folder, with a file name of “NetFrameworkInstaller.exe”. This name I picked arbitrarily; we just need to refer to it later when we’re installing and cleaning up.
Installing the bootstrapper
We’ll make another helper method to do the actual install. We specify another specially named method to hook our code up after the post-install event:
procedure InstallFramework;
var
StatusText: string;
ResultCode: Integer;
begin
StatusText := WizardForm.StatusLabel.Caption;
WizardForm.StatusLabel.Caption := 'Installing .NET Framework 4.5.2. This might take a few minutes...';
WizardForm.ProgressGauge.Style := npbstMarquee;
try
if not Exec(ExpandConstant('{tmp}\NetFrameworkInstaller.exe'), '/passive /norestart', '', SW_SHOW, ewWaitUntilTerminated, ResultCode) then
begin
MsgBox('.NET installation failed with code: ' + IntToStr(ResultCode) + '.', mbError, MB_OK);
end;
finally
WizardForm.StatusLabel.Caption := StatusText;
WizardForm.ProgressGauge.Style := npbstNormal;
DeleteFile(ExpandConstant('{tmp}\NetFrameworkInstaller.exe'));
end;
end;
procedure CurStepChanged(CurStep: TSetupStep);
begin
case CurStep of
ssPostInstall:
begin
if Framework45IsNotInstalled() then
begin
InstallFramework();
end;
end;
end;
end;
We’re running the bootstrapper we downloaded earlier, with a flag to make the install passive (non interactive). Our main installer will show this screen while the framework is getting downloaded and installed:
Then the .NET installer UI will show alongside the first window:
If you’d like to keep it “cleaner” and just show the first screen you can swap out the /passive argument for /q. However I like showing the user the .NET install progress since it can take a long time and it reassures them that work is still really happening.
After running the installer, the bootstrapper is deleted (whether or not the install succeeded).
And now your installer is done!
Testing the installer
To test a .NET 4 or 4.5 install, I’d recommend setting up a Windows 7 virtual machine in Hyper-V. Windows 7 comes with .NET 2, 3 and 3.5 out of the box, but not .NET 4 or 4.5. After it installs the framework you can go to Programs and Features to cleanly remove it and test it all over again.
To test an earlier .NET version you should just be able to use a modern OS like Windows 8.1 or 10.
Great article.
ITD is just old today. It should be replaced by :
IDP : code.google.com/…/inno-download-plugin
just a few changes :
#include <idp.iss>
[Code]
procedure InitializeWizard();
begin
if Framework45IsNotInstalled() then
begin
idpAddFile('go.microsoft.com/fwlink', ExpandConstant('{tmp}\NetFrameworkInstaller.exe'));
idpDownloadAfter(wpReady);
end;
end;
Thanks for the tip, Vincent! I've updated the guide to point to IDP instead. It's quite nice to remove that ugly step of hacking InnoTools Downloader.
Hi there,
The steps to follow are exactly what I was looking for. But, in my case, it is .NET 3.5 SP1 (VB 2008 Pro). So, I would be very happy and appreciate it very much if the author of this post could tell me what I should modify so that I can use it for my application's installation using Inno Setup.
Thank you in advance,
I am getting Error
Line73:Column11: Unknown Identifier 'InstallFramework'
For me too
by Evelyn Chan
Learn the basics of destructuring props in React
When I first learned about ES6, I was hesitant to start using it. I’d heard a lot of great things about the improvements but at the same time, I’d just gotten used to the good ol’ original way of doing things and here was a new syntax thrown at me to learn.
I avoided it for a while under the premise of “if it ain’t broke don’t fix it,” but I’ve recently grown fond of its simplicity and the fact that it’s becoming the norm in JavaScript.
With React, which fully embraces the ES6 syntax, destructuring adds a slew of benefits to improving your code. This article will go over the basics of destructuring objects and how it applies to props in React.
Reasons to destructure
Improves readability
This is a huge upside in React when you’re passing down props. Once you take the time to destructure your props, you can get rid of
props / this.props in front of each prop.
If you’re abstracting your components into different files, you’ll also have a handy place to quickly reference what props you’re passing down without having to switch tabs. This double check helps you catch errors such as passing down excess props or typos.
You can go a step further by adding in
propType validation, which allows you to define the type of each prop you pass in. When you’re in a development environment, this triggers React to log a warning if the type is different from the one defined.
Props can be difficult to keep track of in complex apps, so clearly defining your props as you pass them down is immensely helpful for anyone reading your code.
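As a small illustration of that point, here is a minimal sketch of what that validation could look like for the Listing component used later in this article; the prop-types package and the exact shape shown are assumptions for the example, not part of the original code:

import PropTypes from 'prop-types';

// Warn in development if a listing with the wrong shape is passed down
Listing.propTypes = {
  listing: PropTypes.shape({
    title: PropTypes.string.isRequired,
    type: PropTypes.string,
    location: PropTypes.object
  }).isRequired
};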
Shorter lines of code
See the following before ES6:
var object = { one: 1, two: 2, three: 3 }
var one = object.one;
var two = object.two;
var three = object.three;
console.log(one, two, three) // prints 1, 2, 3
It’s long, clunky, and takes way too many lines of code. With destructuring, your code becomes much more clear.
In the example below, we’ve effectively cut down the number of lines to two:
let object = { one: 1, two: 2, three: 3 }
let { one, two, three } = object;
console.log(one, two, three) // prints 1, 2, 3
Syntactic sugar
It makes code look nicer, more succinct, and like someone who knows what they’re doing wrote it. I’m somewhat reiterating the first point here, but then again if it improves readability, why wouldn’t you do it?
Functional vs. Class Components
Destructuring in React is useful for both functional and class components but is achieved just a tad bit differently.
Let’s consider a parent component in our application:
import React, { Component } from 'react';
class Properties extends Component {
  constructor() {
    super();
    this.properties = [
      {
        title: 'Modern Loft',
        type: 'Studio',
        location: {
          city: 'San Francisco',
          state: 'CA',
          country: 'USA',
        },
      },
      {
        title: 'Spacious 2 Bedroom',
        type: 'Condo',
        location: {
          city: 'Los Angeles',
          state: 'CA',
          country: 'USA',
        },
      },
    ];
  }

  render() {
    return (
      <div>
        <Listing listing={this.properties[0]} />
        <Listing listing={this.properties[1]} />
      </div>
    );
  }
}
Functional Components
In this example, we want to pass down a
listing object from our array of properties for the child component to render.
Here’s how a functional component would look:
const Listing = (props) => (
  <div>
    <p>Title: {props.listing.title}</p>
    <p>Type: {props.listing.type}</p>
    <p>
      Location: {props.listing.location.city}, {props.listing.location.state}, {props.listing.location.country}
    </p>
  </div>
);
This block of code is fully functional but looks terrible! By the time we get to this
Listing child component, we already know we’re referencing a listing, so
props.listing looks and feels redundant. This block of code can be made to look much cleaner through destructuring.
We can achieve this in the function parameter as we pass in the props argument:
const Listing = ({ listing }) => (
  <div>
    <p>Title: {listing.title}</p>
    <p>Type: {listing.type}</p>
    <p>
      Location: {listing.location.city}, {listing.location.state}, {listing.location.country}
    </p>
  </div>
);
Even better, we can further destructure nested objects like below:
const Listing = ({ listing: { title, type, location: { city, state, country } } }) => (
  <div>
    <p>Title: {title}</p>
    <p>Type: {type}</p>
    <p>Location: {city}, {state}, {country}</p>
  </div>
);
Can you see how much easier this is to read? In this example, we’ve destructured both
listings and the keys inside
listing.
A common gotcha is destructuring only the keys like we do below and trying to access the object:
{ location: { city, state, country } }
In this scenario, we wouldn’t be able to access the
location object through a variable named location.
In order to do so, we’d have to define it first with a simple fix like so:
{ location, location: { city, state, country } }
This wasn’t glaringly obvious to me at first, and I’d occasionally run into problems if I wanted to pass an object like
location as a prop after destructuring its contents. Now you’re equipped to avoid the same mistakes I made!
Class Components
The idea is very much the same in class components, but the execution is a little different.
Take a look below:
import React, { Component } from 'react';
class Listing extends Component {
  render() {
    const {
      listing: {
        title,
        type,
        location: { city, state, country },
      },
    } = this.props;

    return (
      <div>
        <p>Title: {title}</p>
        <p>Type: {type}</p>
        <p>
          Location: {city}, {state}, {country}
        </p>
      </div>
    );
  }
}
You may have noticed in the parent example that we can destructure the
Component object as we import
React in class components. This isn’t necessary for functional components as we won’t be extending the
Component class for those.
Next, instead of destructuring in the argument, we destructure wherever the variables are being called. For example, if we take the same
Listing child component and refactor it into a class, we would destructure in the
render function where the props are being referenced.
The downside to destructuring in class components is that you’ll end up destructuring the same props each time you use it in a method. Although this can be repetitive, I’d argue that a positive is it clearly outlines which props are being used in each method.
In addition, you won’t have to worry about side effects such as accidentally changing a variable reference. This method keeps your methods separate and clean, which can be a huge advantage for other operations during your projects such as debugging or writing tests.
Thanks for reading! If this helped you, please clap and/or share this article so it can help others too! :)
https://www.freecodecamp.org/news/the-basics-of-destructuring-props-in-react-a196696f5477/
A bunch of us were going through some Windows crashes
that people sent in by clicking the "Send Error Report" button
in the crash dialog.
And there were huge numbers of them that made no sense whatsoever.
For example, there would be code sequences like this:
mov ecx, dword ptr [someValue]
mov eax, dword ptr [otherValue]
cmp ecx, eax
jnz generateErrorReport
Yet when we looked at the error report, the ecx
and eax registers were equal!
There were other crashes of a similar nature,
where the CPU simply lost its marbles and
did something "impossible".
We had to mark these crashes as "possibly hardware failure".
Since the crash reports are sent anonymously,
we have no way of contacting the submitter to ask them
follow-up questions.
(The ones that the group I was in was investigating were failures that
were hit only once or twice, but were of the type that
were deemed worthy of close
investigation because the types of errors they uncovered—if
valid—were serious.)
One of my colleagues had a large collection of failures
where the program crashed at the instruction
xor eax, eax
How can you crash on an instruction that simply sets a register to zero?
And yet there were hundreds of people crashing in precisely this way.
He went through all the published errata to see whether any of them
would affect an "xor eax, eax" instruction. Nothing.
He sent email to some Intel people he knew to see if they could
think of anything.
[Aside from overclocking, of course.
- Added because people apparently take my stories hyperliterally
and require me to spell out the tiniest detail,
even the stuff that is so obvious that it should go without saying.
I didn't want to give away the story's punch line too soon!]
They said that the only [other] thing they could think of was that perhaps
somebody had mis-paired RAM on their motherboard, but their
description of what sorts of things go wrong when you mis-pair
didn't match this scenario.
Since the failure rate for this particular error was comparatively
high (certainly higher than the one or two I was getting for
the failures I was looking at),
he requested that the next ten people to encounter this error
be given the opportunity to leave their email address and telephone
number so that he could call them and ask follow-up questions.
Some time later, he got word that ten people took him up on this
offer, and he sent each of them e-mail asking them various questions
about their hardware configurations, including whether they were
overclocking.
[- Continuing from above aside:
See? Obviously overclocking was considered as a possibility.]
Five people responded saying, "Oh, yes, I'm overclocking.
Is that a problem?".
For both groups, he suggested that they stop overclocking or at least
not overclock as aggressively.
And in all cases, the people reported that their computer that used
to crash regularly now runs smoothly.
Moral of the story:
There's a lot of overclocking out there,
and it makes Windows look bad.
I wonder if it'd be possible to detect overclocking from software
and put up a warning in the crash dialog,
"It appears that your computer is overclocked.
This may cause random crashes.
Try running the CPU at its rated speed to improve stability."
But it takes only one false positive to get people saying,
"Oh, there goes Microsoft
blaming other people for its buggy software again."
One.
My building was scheduled for a carpet replacement—in all
my years at Microsoft, I think this is the first time this has
ever happened to a building I was in—so we all had to pack up our things
so the carpeters could get clear access to the floor.
You go through all the pain of an office move (packing all your things)
but don't get the actual reward of a new office.
One of the machines in my office probably ranked high on the
"oldest computer at Microsoft still doing useful work" charts.
It was a 50MHz 486 with 12MB of memory and 500 whole megabytes of
disk space.
(Mind you, it wasn't born this awesome. It started out with
only 8MB of memory and 200MB of disk space, but I upgraded it
after a few years.)
This machine started out its life as a high-end Windows 95
test machine,
then when its services were no longer needed, I rescued it
from the scrap heap and turned it into my little web server
where among other things, Microsoft employees could read my
blog article queue months before publication.
It also served as my
"little computer for doing little things".
For example, the Internet Explorer test team used it
for FTP testing since I installed a custom FTP server onto it.
(Therefore, I could make it act like any type of server, or
like a completely bizarro server if a security scenario required it.)
It also housed various "total wastes of time" such as the
"What's Raymond doing right now?" program, and the
"Days without a pony" web page.
I added a CD-ROM drive, which cost me $200.
This was back in the days when getting a CD-ROM drive meant plugging
in a custom ISA card and installing a MS-DOS driver into the CONFIG.SYS file.
Like an MS-DOS driver gets you anywhere any more.
I had to write my own driver for it.
I took it as a challenge to see how high I could get the machine's
uptime. Once the hardware stabilized
(which went a lot quicker once I gave up trying to get the old
network card to stop wedging and just bought a new one),
I put it on a UPS that had been gifted to me
in exchange for debugging why the company's monitoring software wasn't
working on Windows 95.
Whenever I had to move offices, I found somebody who wasn't moving
and relocated the computer there for a few days.
The UPS kept the machine running while I carted it down the hall or
into the next building.
I think I got the uptime as high as three years before the
building suffered a half-day power outage that drained the UPS.
A few years later, the machine started rebooting
for no apparent reason.
Turns out the UPS battery itself was dying and generating
its own mini-power outages.
Ironic that a UPS ended up creating power outages
instead of masking them.
But on the other hand, it was free,
so I can't complain.
Without a UPS, the machine became victim of building-wide power
outages and office moves.
Over the years, more and more parts of the machine started
to wear out and had to be worked around.
The CMOS battery eventually died, so restarting the computer
after an outage involved lots of typing.
(It always thought the date was January 1983.)
The clock also drifted, so I wrote a program to
re-synchronize it automatically every few days.
When I packed up the computer for the recarpeting, I assumed
that afterwards, it would fire back up like the trooper it was.
But alas, it just sat there.
After much fiddling and removal of non-critical hardware, I got it to
power on.
Now it complains "no boot device".
The hard drive (or perhaps the hard drive controller) had
finally died.
The shock of being shut off and restarted proved to be its
downfall.
Since it's nearly impossible to find replacement parts for a computer
this old, I'm going to have to return it to the scrap heap.
Good-bye, old friend.
But you won't be forgotten.
I'm going to transfer your name and IP address to another computer
I rescued from the scrap heap many years ago for just this eventuality.
But still no mouse.
(Alas, this was the first of a series of computers to reach
retirement age within days of each other.
Perhaps I'll eulogize those other machines someday.)
I think it's time to update the scratch program we've been
using for the past year.
I hear there's this new language called C++ that's going to become
really popular any day now,
so let's hop on the bandwagon!
#define STRICT
#define UNICODE
#define _UNICODE
#include <windows.h>
#include <windowsx.h>
#include <ole2.h>
#include <commctrl.h>
#include <shlwapi.h>
#include <shlobj.h>
#include <shellapi.h>
HINSTANCE g_hinst;
class Window
{
public:
HWND GetHWND() { return m_hwnd; }
protected:
virtual LRESULT HandleMessage(
UINT uMsg, WPARAM wParam, LPARAM lParam);
virtual void PaintContent(PAINTSTRUCT *pps) { }
virtual LPCTSTR ClassName() = 0;
virtual BOOL WinRegisterClass(WNDCLASS *pwc)
{ return RegisterClass(pwc); }
virtual ~Window() { }
HWND WinCreateWindow(DWORD dwExStyle, LPCTSTR pszName,
DWORD dwStyle, int x, int y, int cx, int cy,
HWND hwndParent, HMENU hmenu)
{
return CreateWindowEx(dwExStyle, ClassName(), pszName, dwStyle,
x, y, cx, cy, hwndParent, hmenu, g_hinst, this);
}
private:
void Register();
void OnPaint();
void OnPrintClient(HDC hdc);
static LRESULT CALLBACK s_WndProc(HWND hwnd,
UINT uMsg, WPARAM wParam, LPARAM lParam);
protected:
HWND m_hwnd;
};
void Window::Register()
{
WNDCLASS wc;
wc.style = 0;
wc.lpfnWndProc = Window::s_WndProc;
wc.cbClsExtra = 0;
wc.cbWndExtra = 0;
wc.hInstance = g_hinst;
wc.hIcon = NULL;
wc.hCursor = LoadCursor(NULL, IDC_ARROW);
wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
wc.lpszMenuName = NULL;
wc.lpszClassName = ClassName();
WinRegisterClass(&wc);
}
LRESULT CALLBACK Window::s_WndProc(
HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
Window *self;
if (uMsg == WM_NCCREATE) {
LPCREATESTRUCT lpcs = reinterpret_cast<LPCREATESTRUCT>(lParam);
self = reinterpret_cast<Window *>(lpcs->lpCreateParams);
self->m_hwnd = hwnd;
SetWindowLongPtr(hwnd, GWLP_USERDATA,
reinterpret_cast<LPARAM>(self));
} else {
self = reinterpret_cast<Window *>
(GetWindowLongPtr(hwnd, GWLP_USERDATA));
}
if (self) {
return self->HandleMessage(uMsg, wParam, lParam);
} else {
return DefWindowProc(hwnd, uMsg, wParam, lParam);
}
}
LRESULT Window::HandleMessage(
UINT uMsg, WPARAM wParam, LPARAM lParam)
{
LRESULT lres;
switch (uMsg) {
case WM_NCDESTROY:
lres = DefWindowProc(m_hwnd, uMsg, wParam, lParam);
SetWindowLongPtr(m_hwnd, GWLP_USERDATA, 0);
delete this;
return lres;
case WM_PAINT:
OnPaint();
return 0;
case WM_PRINTCLIENT:
OnPrintClient(reinterpret_cast<HDC>(wParam));
return 0;
}
return DefWindowProc(m_hwnd, uMsg, wParam, lParam);
}
void Window::OnPaint()
{
PAINTSTRUCT ps;
BeginPaint(m_hwnd, &ps);
PaintContent(&ps);
EndPaint(m_hwnd, &ps);
}
void Window::OnPrintClient(HDC hdc)
{
PAINTSTRUCT ps;
ps.hdc = hdc;
GetClientRect(m_hwnd, &ps.rcPaint);
PaintContent(&ps);
}
class RootWindow : public Window
{
public:
virtual LPCTSTR ClassName() { return TEXT("Scratch"); }
static RootWindow *Create();
protected:
LRESULT HandleMessage(UINT uMsg, WPARAM wParam, LPARAM lParam);
LRESULT OnCreate();
private:
HWND m_hwndChild;
};
LRESULT RootWindow::OnCreate()
{
return 0;
}
LRESULT RootWindow::HandleMessage(
UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg) {
case WM_CREATE:
return OnCreate();
case WM_NCDESTROY:
// Death of the root window ends the thread
PostQuitMessage(0);
break;
case WM_SIZE:
if (m_hwndChild) {
SetWindowPos(m_hwndChild, NULL, 0, 0,
GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam),
SWP_NOZORDER | SWP_NOACTIVATE);
}
return 0;
case WM_SETFOCUS:
if (m_hwndChild) {
SetFocus(m_hwndChild);
}
return 0;
}
return __super::HandleMessage(uMsg, wParam, lParam);
}
RootWindow *RootWindow::Create()
{
RootWindow *self = new RootWindow();
if (self && self->WinCreateWindow(0,
TEXT("Scratch"), WS_OVERLAPPEDWINDOW,
CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT,
NULL, NULL)) {
return self;
}
delete self;
return NULL;
}

int PASCAL
WinMain(HINSTANCE hinst, HINSTANCE, LPSTR, int nShowCmd)
{
g_hinst = hinst;
if (SUCCEEDED(CoInitialize(NULL))) {
InitCommonControls();
RootWindow *prw = RootWindow::Create();
if (prw) {
ShowWindow(prw->GetHWND(), nShowCmd);
MSG msg;
while (GetMessage(&msg, NULL, 0, 0)) {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
CoUninitialize();
}
return 0;
}
The basic idea of this program is the same as our old
scratch program, but now it has that fresh lemony C++ scent.
Instead of keeping our state in globals, we declare a C++ class
and hook it up to the window.
For simplicity,
the object's lifetime is tied to the window itself.
First, there is a bare-bones Window class which we
will use as our base class for any future "class associated with a
window" work.
The only derived class for now is the RootWindow,
the top-level frame window that for now is the only window that the
program uses.
As you may suspect, we may have other derived classes later
as the need arises.
The reason why the WinRegisterClass method is
virtual (and doesn't do anything interesting) is so that
a derived class can modify the WNDCLASS that is
used when the class is registered.
I don't have any immediate need for it, but it'll be there if I need it.
We use the GWLP_USERDATA window long to store
the pointer to the associated class,
thereby allowing us to recover the object from the window handle.
Observe that in the
RootWindow::HandleMessage method, I used the
Visual C++ __super extension.
If you don't want to rely on a nonstandard extension, you can
instead write
class RootWindow : public Window
{
public:
typedef Window super;
...
and use super instead of __super.
This program doesn't do anything interesting;
it's just going to be a framework for future samples.
We've spent quite a bit of time over the past year
learning about dialog templates and the dialog manager.
Now we're going to put the pieces together to do something interesting:
Building a dialog template on the fly.
What we're going to write is an extremely lame version of
the MessageBox function.
Why bother writing a bad version of something that Windows already does?
Because you can use it as a starting point for further enhancements.
For example, once you learn how to generate a template dynamically,
you can dynamically add buttons beyond the boring "OK" button,
or you can add additional controls like a "Repeat this answer for all
future occurrences of this dialog" checkbox or maybe insert
an animation control.
I'm going to start with a highly inefficient dialog template class.
This is not production-quality, but it's good enough for didactic
purposes.
#include <vector>
class DialogTemplate {
public:
LPCDLGTEMPLATE Template() { return (LPCDLGTEMPLATE)&v[0]; }
void AlignToDword()
{ if (v.size() % 4) Write(NULL, 4 - (v.size() % 4)); }
void Write(LPCVOID pvWrite, DWORD cbWrite) {
v.insert(v.end(), cbWrite, 0);
if (pvWrite) CopyMemory(&v[v.size() - cbWrite], pvWrite, cbWrite);
}
template<typename T> void Write(T t) { Write(&t, sizeof(T)); }
void WriteString(LPCWSTR psz)
{ Write(psz, (lstrlenW(psz) + 1) * sizeof(WCHAR)); }
private:
std::vector<BYTE> v;
};
I didn't spend much time making this class look pretty because
it's not the focus of this article. The DialogTemplate
class babysits a vector of bytes
to which you can Write data.
There is also a little AlignToDword method that
pads the buffer to the next DWORD boundary.
This'll come in handy, too.
Our message box will need a dialog procedure
which ends the dialog when the IDCANCEL button is pressed.
If we had made any enhancements to the dialog template, we would handle
them here as well.
INT_PTR CALLBACK DlgProc(HWND hwnd, UINT wm, WPARAM wParam, LPARAM lParam)
{
switch (wm) {
case WM_INITDIALOG: return TRUE;
case WM_COMMAND:
if (GET_WM_COMMAND_ID(wParam, lParam) == IDCANCEL) EndDialog(hwnd, 0);
break;
}
return FALSE;
}
Finally, we build the template. This is not hard, just tedious.
Out of sheer laziness, we make the message box a fixed size.
If this were for a real program, we would have measured the text
(using ncm.lfCaptionFont
and ncm.lfMessageFont) to determine the
best size for the message box.
BOOL FakeMessageBox(HWND hwnd, LPCWSTR pszMessage, LPCWSTR pszTitle)
{
BOOL fSuccess = FALSE;
HDC hdc = GetDC(NULL);
if (hdc) {
NONCLIENTMETRICSW ncm = { sizeof(ncm) };
if (SystemParametersInfoW(SPI_GETNONCLIENTMETRICS, 0, &ncm, 0)) {
DialogTemplate tmp;
// Write out the extended dialog template header
tmp.Write<WORD>(1); // dialog version
tmp.Write<WORD>(0xFFFF); // extended dialog template
tmp.Write<DWORD>(0); // help ID
tmp.Write<DWORD>(0); // extended style
tmp.Write<DWORD>(WS_CAPTION | WS_SYSMENU | DS_SETFONT | DS_MODALFRAME);
tmp.Write<WORD>(2); // number of controls
tmp.Write<WORD>(32); // X
tmp.Write<WORD>(32); // Y
tmp.Write<WORD>(200); // width
tmp.Write<WORD>(80); // height
tmp.WriteString(L""); // no menu
tmp.WriteString(L""); // default dialog class
tmp.WriteString(pszTitle); // title
// Next comes the font description.
// See text for discussion of fancy formula.
if (ncm.lfMessageFont.lfHeight < 0) {
ncm.lfMessageFont.lfHeight = -MulDiv(ncm.lfMessageFont.lfHeight,
72, GetDeviceCaps(hdc, LOGPIXELSY));
}
tmp.Write<WORD>((WORD)ncm.lfMessageFont.lfHeight); // point
tmp.Write<WORD>((WORD)ncm.lfMessageFont.lfWeight); // weight
tmp.Write<BYTE>(ncm.lfMessageFont.lfItalic); // Italic
tmp.Write<BYTE>(ncm.lfMessageFont.lfCharSet); // CharSet
tmp.WriteString(ncm.lfMessageFont.lfFaceName);
// Then come the two controls. First is the static text.
tmp.AlignToDword();
tmp.Write<DWORD>(0); // help id
tmp.Write<DWORD>(0); // window extended style
tmp.Write<DWORD>(WS_CHILD | WS_VISIBLE); // style
tmp.Write<WORD>(7); // x
tmp.Write<WORD>(7); // y
tmp.Write<WORD>(200-14); // width
tmp.Write<WORD>(80-7-14-7); // height
tmp.Write<DWORD>(-1); // control ID
tmp.Write<DWORD>(0x0082FFFF); // static
tmp.WriteString(pszMessage); // text
tmp.Write<WORD>(0); // no extra data
// Second control is the OK button.
tmp.AlignToDword();
tmp.Write<DWORD>(0); // help id
tmp.Write<DWORD>(0); // window extended style
tmp.Write<DWORD>(WS_CHILD | WS_VISIBLE |
WS_GROUP | WS_TABSTOP | BS_DEFPUSHBUTTON); // style
tmp.Write<WORD>(75); // x
tmp.Write<WORD>(80-7-14); // y
tmp.Write<WORD>(50); // width
tmp.Write<WORD>(14); // height
tmp.Write<DWORD>(IDCANCEL); // control ID
tmp.Write<DWORD>(0x0080FFFF); // button
tmp.WriteString(L"OK"); // text
tmp.Write<WORD>(0); // no extra data
// Template is ready - go display it.
fSuccess = DialogBoxIndirect(g_hinst, tmp.Template(),
hwnd, DlgProc) >= 0;
}
ReleaseDC(NULL, hdc); // fixed 11 May
}
return fSuccess;
}
The fancy formula for determining the font point size is not that fancy
after all. The dialog manager converts the font height from point to
pixels via
the standard formula:
fontHeight = -MulDiv(pointSize, GetDeviceCaps(hdc, LOGPIXELSY), 72);
The template itself follows
the format we discussed earlier, no surprises.
One subtlety is that the control identifier for our OK button
is IDCANCEL instead of the IDOK you might
have expected. That's because this message box has only one button,
so we want to
let the user hit the ESC key to dismiss it.
Now all that's left to do is take this function for a little spin.
void OnChar(HWND hwnd, TCHAR ch, int cRepeat)
{
if (ch == TEXT(' ')) {
FakeMessageBox(hwnd,
L"This is the text of a dynamically-generated dialog template. "
L"If Raymond had more time, this dialog would have looked prettier.",
L"Title of message box");
}
}
// add to window procedure
HANDLE_MSG(hwnd, WM_CHAR, OnChar);
Fire it up, hit the space bar, and observe the faux message box.
Okay, so it's not very exciting visually, but that wasn't the point.
The point is that you now know how to build a dialog template at
run-time.
The SetWindowsHookEx function
accepts a HINSTANCE parameter.
The documentation explains that it is a handle to the DLL containing
the hook procedure. Why does the window manager need to have this handle?
By default, Explorer does not show files that have the
FILE_ATTRIBUTE_HIDDEN flag, since somebody
went out of their way to hide those files from view.
You can, of course, ask that such files be shown anyway
by going to Folder Options and selecting
"Show hidden files and folders".
This shows files and folders even if they are marked as
FILE_ATTRIBUTE_HIDDEN.
On the other hand, files that are marked as both
FILE_ATTRIBUTE_HIDDEN
and
FILE_ATTRIBUTE_SYSTEM
remain hidden from view.
These are typically files that involved in the plumbing
of the operating system, messing with which can cause various
types of "excitement". Files like
the page file,
folder configuration files,
and
the System Volume Information folder.
If you want to see those files, too, then you can uncheck
"Hide protected operating system files".
Let's look at how far this game of hide/show ping-pong has gone:
1. A normal file is visible to everyone.
2. Marking it FILE_ATTRIBUTE_HIDDEN hides it from the default view.
3. Selecting "Show hidden files and folders" reveals it anyway.
4. Marking it both FILE_ATTRIBUTE_HIDDEN and FILE_ATTRIBUTE_SYSTEM hides it even from that setting.
5. Unchecking "Hide protected operating system files" reveals it once again.
You'd think this would be the end of the hide/show arms race,
but apparently
some people want to add a sixth level and make something
invisible to Explorer, overriding the five existing levels.
At some point this back-and-forth has to stop, and for now,
it has stopped at level five.
Adding just a sixth level would create a security hole, because it
would allow a file to hide from the user.
As a matter of security, a sufficiently-privileged
user must always have a way of seeing what is there
or at least know that there is something there that can't be seen.
Nothing can be undetectably invisible.
If you add a sixth level that lets a file hide from level five,
then there must be a level seven that reveals it.
http://blogs.msdn.com/b/oldnewthing/archive/2005/04.aspx?PostSortBy=MostViewed&PageIndex=1
A simple logging class. More...
#include <Wt/WLogger>
A simple logging class.
This class logs events to a stream in a flexible way. It allows to create log files using the commonly used Common Log Format or Combined Log Format, but provides a general way for logging entries that consists of a fixed number of fields.
It is used by Wt to create the application log (WApplication::log()), and built-in httpd access log.
To use this class for custom logging, you should instantiate a logger, add one or more field definitions using addField(), and set an output stream using setStream() or setFile(). To stream data to the logger, use entry() to start formatting a new entry.
Usage example:
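A minimal sketch, assuming the addField(), setFile() and entry() members documented below (the field names and file path are illustrative):

Wt::WLogger logger;
logger.addField("datetime", false);
logger.addField("session", false);
logger.addField("type", false);
logger.addField("message", true);
logger.setFile("/tmp/mylog.txt");

logger.entry("notice") << Wt::WLogger::timestamp << Wt::WLogger::sep
    << '[' << "session-id" << ']' << Wt::WLogger::sep
    << '[' << "notice" << ']' << Wt::WLogger::sep
    << "Application started.";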
Creates a new logger.
This creates a new logger, which defaults to logging to stderr.
Adds a field.
Add a field to the logger. When
isString is
true, values will be quoted.
Configures what things are logged.
Starts a new log entry.
Returns a new entry. The entry is logged in the destructor of the entry (i.e. when the entry goes out of scope).
The
type reflects a logging level. You can freely choose a type, but these are commonly used inside the library:
Returns whether messages of a given type are logged.
Returns
true if messages of the given type are logged. It may be that messages are not logged for all scopes.
Returns whether messages of a given type are logged.
Returns
true if messages of the given type are logged. It may be that messages are not logged for all scopes.
Returns whether messages of a given type and scope are logged.
Sets the output file.
Opens a file output stream for
path. The default logger outputs to stderr.
Logging function.
This creates a new log entry, e.g.:
Field separator constant.
Timestamp field constant.
https://webtoolkit.eu/wt/wt3/doc/reference/html/classWt_1_1WLogger.html
I am searching for the better option to save a trained model in PyTorch. These are the options I am using right now:
1. torch.save(model, PATH) to save the whole model object, and torch.load(PATH) to load it back.
2. torch.save(model.state_dict(), PATH) to save only the parameters, and model.load_state_dict(torch.load(PATH)) to restore them.
I read it somewhere that approach 2 is better than 1.
Why is the second approach preferred? Is the reason behind this that torch.nn modules have the above two functions?
The pickle library implements serializing and de-serializing of Python objects.
The first way is to import torch, which imports pickle; you can then call torch.save() or torch.load(), which wrap pickle.dump() and pickle.load() for you. pickle.dump() and pickle.load() are the actual methods used to save and load an object.
Syntax of torch.save()-
torch.save(the_model.state_dict(), PATH)
Second way,
A torch.nn module has learnable parameters, which are stored in the model's state_dict; the optimizer used to improve those learnable parameters has a state_dict of its own. Since state_dict objects are Python dictionaries, you can easily save, update, alter and restore them, which is why saving the state_dict is preferred over saving the entire model object.
import torch
import torch.optim as optim
model = torch.nn.Linear(5, 2)
# Initialize optimizer
opt = optim.SGD(model.parameters(), lr=0.01)
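To make the difference concrete, here is a rough sketch of saving and restoring only the parameters via the state_dict (the file name model_state.pt is just an illustration):

# Save only the learnable parameters
torch.save(model.state_dict(), "model_state.pt")

# Later: restore them into a model with the same architecture
new_model = torch.nn.Linear(5, 2)
new_model.load_state_dict(torch.load("model_state.pt"))
new_model.eval()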
https://intellipaat.com/community/561/best-way-to-save-a-trained-model-in-pytorch
lp:charms/trusty/apache-hadoop-compute-slave
Created by Kevin W Monroe on 2015-06-01 and last modified on 2015-10-07
- Get this branch:
- bzr branch lp:charms/trusty/apache-hadoop-compute-slave
Members of Big Data Charmers can upload to this branch. Log in for directions.
Branch merges
Related bugs
Related blueprints
Branch information
- Owner:
- Big Data Charmers
- Status:
- Mature
Recent revisions
- 89. By Cory Johns on 2015-10-07
Get Hadoop binaries to S3 and cleanup tests to favor and improve bundle tests
- 88. By Kevin W Monroe on 2015-09-15
[merge] merge bigdata-dev r101..103 into bigdata-charmers
- 87. By Kevin W Monroe on 2015-08-24
[merge] merge bigdata-dev r91..r100 into bigdata-charmers
- 86. By Kevin W Monroe on 2015-07-24
update java-installer to pull ppc64le jdk from git
- 85. By Kevin W Monroe on 2015-07-24
updated resources to use lp:git vs lp:bzr
- 84. By Kevin W Monroe on 2015-06-29
bundle resources into charm for ease of install; add extended status messages; combine DataNode and NodeMgr into the same service block since NM cant proceed until yarn->hdfs relation is complete
- 83. By Kevin W Monroe on 2015-06-18
remove namespace refs from readmes now that we are promulgated
- 82. By Kevin W Monroe on 2015-06-01
remove dev references for production
- 81. By Kevin W Monroe on 2015-05-29
reference plugin instead of client in the docs
- 80. By Kevin W Monroe on 2015-05-29
update DEV-README to reflect correct relation data
Branch metadata
- Branch format:
- Branch format 7
- Repository format:
- Bazaar repository format 2a (needs bzr 1.16 or later)
https://code.launchpad.net/~bigdata-charmers/charms/trusty/apache-hadoop-compute-slave/trunk
Performance tuning, Part 2: Analyzing performance problems
by Diego Novillo
Introduction
This is the second of a two-part series about performance tuning. [Refer to Performance Tuning with GCC, Part 1.] Previously, we discussed some of the more common flags available in GCC to select various optimizations. We now turn our attention to performance analysis tools that will help you make sure you are getting the most performance out of your code.
One fundamental fact to always keep in mind when tuning for performance is that compiler optimization is not a silver bullet. Blindly adding different optimization flags hoping for some black magic to make your program faster is usually a recipe for frustration. In most cases performance problems are actually algorithmic. If you choose a bad sorting algorithm, for instance, no combination of compiler flags will be able to fix it (short of a pattern matching the bad algorithm and replacing it with another one).
Furthermore, one of the current hot topics in compiler optimization
research circles is the problem of bad interactions between different
transformations. The compiler takes your code through a pipeline of
more than 100 transformations. Changes made by one of them may impact
negatively on another one, in sometimes seemingly random ways. So,
adding a whole bunch of different
-f flags may do more
harm than good.
The very first goal in performance tuning is to determine what to fix. So, you need tools to analyze the execution of the program. This will provide information on how resources are being used, which will hopefully lead you to the appropriate fix. In an ideal situation, the fix may well be a compiler flag. However, more often than not, changes to the program will be necessary.
When measuring performance, the metric we usually care about is how long it takes for the program to execute. But in a multitasking environment, there are many factors outside your program that may affect its performance. The system may be running low on memory and swapping too much, other higher priority tasks may be running, your program may be trying to use a device in high demand, etc.
So, when talking about timing your application, there are three different times to distinguish:
- The time spent executing your code (user time). If your program is CPU intensive then this is the time you will be trying to minimize.
- The time spent by the system executing on behalf of your code (system time). It essentially measures how long the kernel spent executing services for your program (
printf,
open,
write, etc).
- The total time elapsed during execution (wall clock time). Ultimately, this is the time that most matters to you, but it is an amalgamation of several other factors, so it can be misleading. The same application running under a loaded system will take a lot longer than in a mostly idle system. So, when analyzing your code for performance problems, wall clock time should rarely be the focus of your attention.
Measuring time
The simplest way to measure how long your program takes to run is the
time utility:
time ./my-prog
If you are using
bash as your shell, you will see
something like this:
[ ... program output ... ]

real    0m8.233s
user    0m7.583s
sys     0m0.412s
This means that the program executed for a total of 8.23 seconds, out
of which 7.58 were spent executing your code. The system spent 0.41
seconds executing system calls made inside
my-prog.
Notice that the numbers do not seem to add up. What's missing is all
the other activities that go on behind the scenes when the system
executes a program (loading time, context switching, memory
management, etc).
This output tells you that
my-prog is fairly CPU
intensive, so if you are having performance problems, they will most
certainly be in your code or in the libraries used by your code. On
the other hand, a program that makes a lot of system calls may show
something like this:
[ ... program output ... ]

real    0m20.081s
user    0m2.431s
sys     0m17.580s
This particular output came from a program that opens a file, writes to it, and closes it again 3,000,000 times. Notice how the program is now spending 87% of its time making system calls. If we now make the simple change of not opening and closing the file inside the loop, we get:
[ ... program output ... ]

real    0m8.467s
user    0m1.127s
sys     0m7.257s
While we haven't changed the profile of the program (it still spends an inordinate amount of time in the system), we have reduced its runtime quite a bit. Granted, these are rather contrived examples, but they do happen.
Knowing how long the program takes to run is obviously not enough to pinpoint hot spots. You also need to know exactly where the problems occur. There are several mechanisms available, each will have its own innate advantages and shortcomings. It depends greatly on the nature of your application (size, use of external libraries, etc).
Inserting probes in your code
You can get timing information from inside your program by calling one
of a family of timer functions:
times,
getrusage, and
clock. They all return
the amount of time used by the program so far, so the idea is to
insert them as calipers in your code. The first one before entering
the code you want to measure and the second one at the end. For
instance, using
times:
#include <sys/times.h>

compute ()
{
  struct tms t1, t2;
  clock_t ut_elapsed, st_elapsed;

  times (&t1);
  /* Do work */
  times (&t2);

  ut_elapsed = t2.tms_utime - t1.tms_utime;
  st_elapsed = t2.tms_stime - t1.tms_stime;
}
Both
times and
getrusage are fairly
accurate, while
clock returns only an approximation.
Furthermore, with
getrusage you can obtain more
information besides timing. It also returns generic resource usage
information like memory usage, swap activity, input/output operations,
etc.
These functions are useful in situations where modifications to the source are manageable. Typically this means that you are familiar with the code and know which sections you are interested in monitoring. Not surprisingly, this is not often the case. Usually, all one knows is that the program is misbehaving. It would be impractical to add timer calls haphazardly in a large program just to guess where the problem may be. For this, we have profiling tools.
A profiler will automatically collect timing information as your program executes. At the end, all the collected information is aggregated and presented for analysis. There are three popular choices in this area: GProf, OProfile and Valgrind.
GProf
This is the traditional profiling tool in UNIX® environments. It
requires the help of the compiler to insert probes in the code. The
theory of operation here is that the compiler inserts timing probes at
every function as it compiles your program. When the program runs,
the probing code generates timing information into a special file,
which can then be analyzed with
gprof.
To use
gprof:
- Enable profile code generation with
-pg. This asks the compiler to insert timing probes in every function. Notice that it is not necessary to recompile every file in your application, only those that you want to instrument. It is also important to specify
-pgwhen you are compiling each individual file and when you are linking the final binary. Otherwise, the runtime libraries used by
gprofwill not be linked in.
- Run the application as you would normally do. The new code inserted by the compiler will write out timing information to a file named
gmon.out. You should expect your program to run a bit slower, but the overhead is usually not too unpleasant.
- Use
gprofto analyze the file
gmon.out. The default report provided by
gprofis fairly verbose but it is also easy to understand.
As an example, we will use a small application that just eats up CPU time by doing repeated matrix multiplications. To get a binary instrumented with profiling code, execute the following commands:
gcc -pg -O2 -o matmul matmul.c
./matmul
gprof ./matmul
To produce the following output:
Flat profile:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls   s/call   s/call  name
100.12    443.81    443.81       24    18.49    18.49  matMult
  0.00    443.81      0.00        4     0.00     0.00  timeconversion
  0.00    443.81      0.00        1     0.00     0.00  RunStats

[ ... Detailed explanation of each column ... ]

Call graph (explanation follows)

granularity: each sample hit covers 2 byte(s) for 0.00% of 443.81 seconds

index % time    self  children    called     name
              443.81    0.00      24/24       cpuHog [2]
[1]    100.0  443.81    0.00      24          matMult [1]
-----------------------------------------------
[2]    100.0    0.00  443.81                  cpuHog [2]
              443.81    0.00      24/24       matMult [1]
-----------------------------------------------

[ ... Detailed explanation of each column ... ]
The report is split in two sections. The first one shows a flat ranking of all the functions called, how much total time was spent in the function, how many calls were made to it, etc. The functions at the top of this list are the ones that you want to focus on. Improving the performance of these functions will give you the best returns. Trying to improve functions that do not show up high in this list are just not worth worrying about (of course, this assumes that you are looking at CPU usage).
The second chart, sometimes called a "butterfly" chart, is a call
graph that shows functions ranked by CPU usage together with their
callers and callees. So, for function
matMult ranked in
the first position, the names above it are the callers to
matMult (in this case
cpuHog). If the
function makes any other calls, they are listed below it.
One of the nice things about
gprof is that it is
available almost everywhere. Since it modifies the source code and
uses generic timer calls (roughly what you would do if you were
instrumenting the code yourself), it does not need special hardware
support. This also works against it, though. Since it does require
recompiling your code and works at a fairly coarse granularity, it is
not usable in every situation. In those cases, Valgrind or OProfile
may be better suited.
Valgrind
Valgrind is a program that emulates the x86 instruction set and runs the application under its control. The system is split into a core module that does all the emulation and a set of add-on modules that provide various profiling services: checking for memory references, a cache profiler, a heap profiler, and a data race detector for multi-threaded applications.
While Valgrind will not provide direct timing information about your application, it may give you hints on what could be going on. For instance, when using the cache profiler, you may find that your program is causing a lot of cache misses. Rearranging your data to improve cache locality would probably boost the performance of your application.
For instance, going back to the matrix multiplication program, we can
ask
valgrind to show cache utilization statistics after
compiling with
gcc:
gcc -O2 matmul.c -o matmul
valgrind --tool=cachegrind ./hog -cpu 1
The output is as follows:
[ ... program output ... ]
==9811==
==9811== I   refs:      10,449,449,620
==9811== I1  misses:             1,715
==9811== L2i misses:               998
==9811== I1  miss rate:            0.0%
==9811== L2i miss rate:            0.0%
==9811==
==9811== D   refs:       7,987,098,008  (7,606,832,264 rd + 380,265,744 wr)
==9811== D1  misses:       366,400,427  (  364,930,098 rd +   1,470,329 wr)
==9811== L2d misses:        22,052,577  (   21,958,898 rd +      93,679 wr)
==9811== D1  miss rate:            4.5% (          4.7%   +         0.3%  )
==9811== L2d miss rate:            0.2% (          0.2%   +         0.0%  )
==9811==
==9811== L2 refs:          366,402,142  (  364,931,813 rd +   1,470,329 wr)
==9811== L2 misses:         22,053,575  (   21,959,896 rd +      93,679 wr)
==9811== L2 miss rate:             0.1% (          0.1%   +         0.0%  )
Valgrind shows information about the instruction and data cache references for both instruction (I, I1) and data caches (D1, L2d). This information is highly specific to the architecture, of course.
While Valgrind can provide fairly detailed information about the program, it can be extremely slow. Since it emulates and instruments every instruction executed by the program, it may take 20x longer or more to execute. If your application is already slow, the added overhead may be prohibitive.
Perhaps one of the more useful features of Valgrind are
its memory checking tools. When you invoke it with
--tool=memcheck, it will report any out-of-bounds memory
references and memory leaks. Valgrind is also flexible enough that
you can write your own analysis modules. However, it is highly
architecture specific. Currently it has been ported to x86, ppc, and
x86_64 architectures.
Valgrind comes standard with recent Fedora™ Core distributions and it can also be downloaded from.
OProfile
OProfile is different from the other profiling tools. Instead of working on a single application, OProfile collects system-wide timing information. It does not require modifications to the source code nor does it emulate the instruction set. It uses hardware counters to collect profiling information. Basically, modern CPUs have a set of registers that just count events like instructions executed or cache misses. Whenever these counters overflow, an interrupt is triggered which is caught by OProfile, which records it together with information about the current state of the CPU registers.
OProfile outputs data about samples because that's what it does. It samples the specified hardware counters, and that's the information it gives you. You have to specify which particular counter you are interested in when you start OProfile.
All this collected information can then be retrieved. In particular, one of the saved registers is the program counter. In its most basic usage, when you ask OProfile to display profiling information, it will show a list of all the applications that were running in the system since it started collecting data. Going back to our matrix multiplication application:
$ sudo opcontrol --no-vmlinux
$ sudo opcontrol --start
Using 2.6+ OProfile kernel interface.
Using log file /var/lib/oprofile/oprofiled.log
Daemon started.
Profiler running.
$ ./matmult -cpu 45
[ ... ]
$ sudo opcontrol --stop
Stopping profiling.
$ opreport -l ./matmult
CPU: P4 / Xeon with 2 hyper-threads, speed 2993.46 MHz (estimated)
Counted GLOBAL_POWER_EVENTS events (time during which processor is not stopped) with a unit mask of 0x01 (mandatory) count 100000
samples  %        symbol name
1981120  99.9890  matMult
    215   0.0109  .plt
      2   1.0e-04 cpuHog
You may have noticed that OProfile needs a bit more preparation than
the other profilers we've seen. The reason for this is that OProfile
is a system-wide daemon, so you will need root access to
use it. However, the reporting tool,
opreport, can be
used by any user.
Also notice that we had to specify the name of the binary that we wanted to see. Otherwise, OProfile will display information about all the applications running at the time:
CPU: P4 / Xeon with 2 hyper-threads, speed 2993.46 MHz (estimated)
Counted GLOBAL_POWER_EVENTS events (time during which processor is not stopped) with a unit mask of 0x01 (mandatory) count 100000
GLOBAL_POWER_E...|
  samples|      %|
------------------
  2145332 38.3077 no-vmlinux
  1981337 35.3793 hog
   427801  7.6389 libperl.so
   244002  4.3570 libc-2.3.5.so
   172337  3.0773 vmware-vmx
   126713  2.2626 libqt-mt.so.3.3.4
    74947  1.3383 Xorg
    55229  0.9862 oprofiled
    49886  0.8908 libgklayout.so
    35876  0.6406 libpthread-2.3.5.so
OProfile is a fairly powerful tool that can help you analyze performance problems that go beyond a single application. For instance, if your application is causing significant kernel activity inside a specific driver, the profile listing will include those modules in the kernel that are being used by your code. Moreover, since it is based on hardware performance counters, it incurs in very little overhead, so it is quite possible to have it running constantly without any noticeable slowdowns.
You will find more detailed information about OProfile in this month's article Using OProfile to analyze an RPM package build. It also comes standard in Fedora™ Core and Red Hat® Enterprise Linux® distributions. For additional information, visit the OProfile project home page.
Conclusions
Tuning your application to perform better is rarely a matter of adding a few obscure set of compiler flags to your Makefiles. Compiler transformations are limited to mechanical changes that are not only limited by strict semantic preservation guidelines, they are also not particularly smart. If your program requires an algorithmic change, no compiler in the world will be able to rewrite that for you.
Most of the time you will be examining your program for hot spots, and
it is important to know what tools can help you. The tools we
discussed in this article will help you isolate problems in CPU-bound
applications. If your application is I/O bound or makes lots of
system calls, then OProfile will probably do a good job at telling you
which OS module is being hit the most. Other tools like
strace (system call tracer) and
ltrace (library call tracer) are also worth exploring.
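Both can print a quick per-call summary table with their -c option, for example:

strace -c ./my-prog
ltrace -c ./my-prog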
http://www.redhat.com/magazine/012oct05/features/gcc/
C++ Interview Questions and Answers
Ques 26. What is an object?
Ans.
Object is a software bundle of variables and related methods. Objects have state and behavior.
Ques 27. How can you tell what shell you are running on UNIX system?
Ans.
Check the SHELL environment variable (echo $SHELL) to see your login shell, or inspect the current shell process, for example with echo $0 or ps -p $$.
Ques 28. What do you mean by inheritance?
Ans.
Inheritance is the process of creating new classes, called derived classes, from existing classes or base classes. The derived class inherits all the capabilities of the base class, but can add embellishments and refinements of its own.
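A minimal sketch (the class names here are only illustrative):

#include <iostream>

class Animal {                  // base class
public:
    void eat() { std::cout << "eating\n"; }
};

class Dog : public Animal {     // derived class inherits eat()
public:
    void bark() { std::cout << "barking\n"; }  // added refinement
};

int main() {
    Dog d;
    d.eat();    // capability inherited from the base class
    d.bark();   // capability added by the derived class
}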
Ques 29. Describe PRIVATE, PROTECTED and PUBLIC ? the differences and give examples.
Ans.
Public members are accessible from anywhere the object is visible; protected members are accessible inside the class itself and inside classes derived from it; private members are accessible only inside the class that declares them (and its friends). By default, members of a class are private and members of a struct are public.
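A short illustrative example (class and member names are made up for the purpose of the answer):

#include <iostream>

class Account {
public:
    void deposit(double amount) { balance += amount; }  // callable from anywhere
protected:
    double balance = 0.0;   // visible to Account and to classes derived from it
private:
    int internalId = 42;    // visible only inside Account itself
};

class Savings : public Account {
public:
    double current() { return balance; }   // OK: protected member of the base
    // int id() { return internalId; }     // error: private member of the base
};

int main() {
    Savings s;
    s.deposit(100.0);          // OK: public
    std::cout << s.current() << "\n";
    // s.balance = 0;          // error: protected, not accessible from outside
}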
Ques 30. What is namespace?
Ans.
A namespace is a declarative region that gives a named scope to the identifiers (types, functions, variables) declared inside it, so the same names can be reused in different libraries without collisions. Members are accessed with the scope resolution operator, for example std::cout.
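A small sketch:

#include <iostream>

namespace audio {
    void play() { std::cout << "playing audio\n"; }
}

namespace video {
    void play() { std::cout << "playing video\n"; }
}

int main() {
    audio::play();   // the scope resolution operator selects the right play()
    video::play();
}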
http://www.withoutbook.com/Technology.php?tech=12&page=6&subject=
Improving MediaLocker: wxPython, SqlAlchemy, and MVC
Improving MediaLocker: wxPython, SqlAlchemy, and MVC
I recently blogged about wxPython, SQLAlchemy, CRUD and MVC. The program that we created in that post was dubbed “MediaLocker”, whether or not it was explicitly stated as such. Anyway, since then, I have received a couple comments about improving the program. One came from Michael Bayer, one of the creative minds behind SQLAlchemy itself and the other comments came from Werner Bruhin, a nice guy who haunts the wxPython mailing list, helping new users. So I went about creating an improved version of the code following their advice. Werner then improved it a bit more. So in this article, we will be looking at improving the code, first with my example and then with his. Enough talk though; let’s get to the meat of story!
Making MediaLocker Better
Michael Bayer and Werner Bruhin both thought that I should only connect to the database once as that’s a fairly “expensive” operation. This could be an issue if there were multiple sessions existing at the same time too, but even in my original code, I made sure to close the session so that wouldn’t happen. When I wrote my original version, I thought about separating out the session creation, but ended up going with what I thought was more straightforward. To fix this niggling issue, I changed the code so that I passed the session object around instead of constantly calling my controller’s connectToDatabase function. You can read more about Sessions here. See the code snippet from mediaLocker.py:
class BookPanel(wx.Panel):
    """"""

    #----------------------------------------------------------------------
    def __init__(self, parent):
        """Constructor"""
        wx.Panel.__init__(self, parent)

        if not os.path.exists("devdata.db"):
            controller.setupDatabase()
        self.session = controller.connectToDatabase()
        try:
            self.bookResults = controller.getAllRecords(self.session)
        except:
            self.bookResults = []
Note that we have a little conditional right up front that will create the database if it doesn’t already exist. Next I create the session object in the main GUI as a property of the panel sub-class. Then I pass it where ever I need to. One example can be seen above where I pass the session object to the controller’s getAllRecords method.
Another big change was to remove the ObjectListView model from model.py and just use the SQLAlchemy table class instead:
########################################################################
class Book(DeclarativeBase):
    """"""
    __tablename__ = "book"

    id = Column(Integer, primary_key=True)
    author_id = Column(Integer, ForeignKey("person.id"))
    title = Column(Unicode)
    isbn = Column(Unicode)
    publisher = Column(Unicode)
    person = relation("Person", backref="books", cascade_backrefs=False)

    @property
    def author(self):
        return "%s %s" % (self.person.first_name, self.person.last_name)
This is actually mostly the same as the original class except that it uses SQLAlchemy constructs. I also needed to add a special property to return the author’s full name for display in our widget, so we used Python’s built-in function: property which returns a property attribute. It’s easier to understand if you just look at the code. As you can see, we applied property as a decorator to the author method.
Werner’s Additions
Werner’s additions are mostly adding more explicit imports in the model. The biggest change in the model is as follows:
import sys

if not hasattr(sys, 'frozen'):
    # needed when having multiple versions of SA installed
    import pkg_resources
    pkg_resources.require("sqlalchemy")  # get latest version

import sqlalchemy as sa
import sqlalchemy.orm as sao
import sqlalchemy.ext.declarative as sad
from sqlalchemy.ext.hybrid import hybrid_property

maker = sao.sessionmaker(autoflush=True, autocommit=False)
DBSession = sao.scoped_session(maker)

class Base(object):
    """Extend the base class

    - Provides a nicer representation when a class instance is printed.
      Found on the SA wiki, not included with TG
    """
    def __repr__(self):
        return "%s(%s)" % (
            (self.__class__.__name__),
            ', '.join(["%s=%r" % (key, getattr(self, key))
                       for key in sorted(self.__dict__.keys())
                       if not key.startswith('_')]))

DeclarativeBase = sad.declarative_base(cls=Base)
metadata = DeclarativeBase.metadata

def init_model(engine):
    """Call me before using any of the tables or classes in the model."""
    DBSession.configure(bind=engine)
The first few lines are for people with SetupTools / easy_install on their machine. If the user has multiple versions of SQLALchemy installed, it will force it to use the latest. Most of the other imports are shortened to make it very obvious where various classes and attributes come from. I am honestly not familiar with the hybrid_property, so here’s what its docstring had to say:
A decorator which allows definition of a Python descriptor with both instance-level and class-level behavior.
You can read more here:
Werner also added a little __repr__ method to the Base class to make it return a better representation of the class instance when it’s printed, which is handy for debugging. Finally, he added a function called init_model to initialize the model.
Wrapping Up
Now you should know that Werner and I have decided to make MediaLocker into an example of a wxPython database-enabled application. He’s been doing a bunch of work on it since the simple edits I mentioned above. We’ll be making an official announcement about that soon. In the mean time, I hope that this has helped open your eyes to some fun ways to enhance a project and clean it up a bit. It is my plan to add lots of new features to this program and chronicle those on this blog in addition to all my other articles.
Source Code
Published at DZone with permission of Mike Driscoll , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/improving-medialocker-wxpython
Managed plug-ins are managed .NET assemblies that you create with tools like Visual Studio. They contain only .NET code, which means that they can’t access any features that .NET libraries do not support. However, the standard .NET tools that Unity uses to compile scripts can access the managed code, so there isn’t a lot of difference in usage between managed plug-in code and normal scripts; once the DLL is imported into your project, you can attach its classes to GameObjects just like normal scripts. A compiled DLL is known as a managed plug-in.
For this example, rename the class to
MyUtilities in the Solution browser and replace its code with the following:
using System;
using UnityEngine;

namespace DLLTest {

    public class MyUtilities {

        public int c;

        public void AddValues(int a, int b) {
            c = a + b;
        }

        public static int GenerateRandom(int min, int max) {
            System.Random rand = new System.Random();
            return rand.Next(min, max);
        }
    }
}
Once you are done with the code, build the project to generate the DLL file with debug symbols.
Create a new Project in Unity and copy the built file
<project folder>/bin/Debug/DLLTest.dll into the Assets folder. Then, create a C# script called
Test in
Assets, and replace its contents with the following code:

using UnityEngine;
using DLLTest;

public class Test : MonoBehaviour {

    void Start () {
        MyUtilities utils = new MyUtilities();
        utils.AddValues(2, 3);
        print("2 + 3 = " + utils.c);
    }

    void Update () {
        print("Random: " + MyUtilities.GenerateRandom(0, 100));
    }
}
Attach this script to a GameObject in the Scene and press Play, and Unity displays the output of the code from the DLL in the Console window.
To enable support for compiling unsafe C# code go to Edit > Project Settings > Player. Expand the Other Settings panel and enable the Allow Unsafe Code checkbox.
https://docs.unity3d.com/ru/2020.1/Manual/UsingDLL.html
I wanted to search discussions on mailing lists and view conversations. I didn’t want to use some webinterface because that wouldn’t allow me to search quickly and offline. So making my mail client aware of these emails seemed to be the way to go. Fortunately, the GNOME mailinglists are mbox archived. So you download the entire traffic in a standardised mbox.
But how to properly get this into your email clients then? I think Thunderbird can import mbox natively. But I wanted to access it from other clients, too, so I needed to make my server aware of these emails. Of course, I configured my mailserver to use maildir, so some conversion was needed.
I will present my experiences dealing with this problem. If you want to do similar things, or even only want to import the mbox directly, this post might be for you.
The archives
First, we need to get all the archives. As I had to deal with a couple of mailing lists and more than a couple of months, I couldn’t be arsed to click every single mbox file manually.
The following script scrapes the mailman page. It makes use of the interesting Splinter library, basically a wrapper around selenium and other browsers for Python.
#!/usr/bin/env python
import getpass
from subprocess import Popen, list2cmdline
import sys

import splinter

def fill_password(b, username=None, password=None):
    if not username:
        username = getpass.getpass('username: ')
    if not password:
        password = getpass.getpass('password: ')
    b.fill('username', username)
    b.fill('password', password)
    b.find_by_name('submit').click()

def main(url, username=None):
    b = splinter.Browser()
    try:
        #url = ''
        b.visit(url)
        if 'Password' in b.html:
            fill_password(b, username=username)
        links = [l['href'] for l in b.find_link_by_partial_text('Text')]
        cookie = b.driver.get_cookies()[0]
        cookie_name = cookie['name']
        cookie_value = cookie['value']
        cookie_str = "Cookie: {name}={value}".format(name=cookie_name, value=cookie_value)
        wget_cookie_arg = '--header={0}'.format(cookie_str)
        #print wget_cookie_arg
        b.quit()
        for link in links:
            #print link
            cmd = ['wget', wget_cookie_arg, link]
            print list2cmdline(cmd)
            # pipe that to "parallel -j 8"
    except:
        b.quit()

if __name__ == '__main__':
    site = sys.argv[1]
    user = sys.argv[2]
    if site.startswith('http'):
        url = site
    else:
        url = '{0}'.format(site)
    main(username=user, url=url)
You can download the thing, too.
I use splinter because handling cookies is not fun, and neither is parsing the web page. So I just use whatever is most convenient for me; I wanted to get things done, after all. The script will print a line for each link it found, nicely prefixed with wget and its necessary arguments for the authorization cookie. You can pipe that to sh, but if you want to download many months, you want to do it in parallel. And fortunately, there is an app for that!
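For example (the script name, list name and username below are placeholders), the generated wget lines can be fed straight into GNU parallel:

python scrape-mailman.py desktop-devel-list myuser | parallel -j 8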
Conversion to maildir
After having received the mboxes, it turned out to be a good idea nonetheless to convert to maildir; if only to extract properly formatted mails only and remove duplicates.
I came around mb2md-3.20.pl from 2004 quite soon, but it is broken. It cannot parse the mboxes I have properly. It will create broken mails with header lingering around as it seems to be unable to detect the beginning of new mails reliably. It took me a good while to find the problem though. So again, be advised, do not use mb2md 3.20.
As I use mutt myself I found this blog article promising. It uses mutt to create a mbox out of a maildir. I wanted it the other way round, so after a few trial and errors, I figured that the following would do what I wanted:
mutt -f mymbox -e 'set mbox_type=maildir; set confirmcreate=no; set delete=no; push "T.*;s/tmp/mymuttmaildir"'
where "mymbox" is your source file and "/tmp/mymuttmaildir" the target directory.
This is a bit lame right? We want to have parameters, because we want to do some batch processing on many archive mboxes.
The problem is, though, that the parameters are very deep inside the quotes. So just doing something like
mutt -f $source -e 'set mbox_type=maildir; set confirmcreate=no; set delete=no; push "T.*;s$target"'
wouldn’t work, because the $target would be interpreted as a raw string due to the single quotes. And I couldn’t find a way to make it work so I decided to make it work with the language that I like the most: Python. So an hour or so later I came up with the following which works (kinda):
import os
import subprocess

source = os.environ['source']
destination = os.environ['destination']

conf = 'set mbox_type=maildir; set confirmcreate=no; set delete=no; push "T.*;s{0}"'.format(destination)
cmd = ['mutt', '-f', source, '-e', conf]
subprocess.call(cmd)
But well, I shouldn’t become productive just yet by doing real work. Mutt apparently expects a terminal. It would just prompt me with “No recipients were specified.”.
So alright, this unfortunately wasn’t what I wanted. If you don’t need batch processing though, you might very well go with mutt doing your mbox to maildir conversion (or vice versa).
Damnit, another two hours or more wasted on that. I was at the point of just doing the conversion myself. Shouldn’t be too hard after all, right? While researching I found that Python’s stdlib has some email related functions *yay*. Some dude on the web wrote something close to what I needed. I beefed it up a very little bit and landed with the following:
#!/usr/bin/env python
#
import datetime
import email
import email.Errors
import mailbox
import os
import sys
import time

def msgfactory(fp):
    try:
        return email.message_from_file(fp)
    except email.Errors.MessageParseError:
        # Don't return None since that will
        # stop the mailbox iterator
        return ''

dirname = sys.argv[1]
inbox = sys.argv[2]
fp = open(inbox, 'rb')
mbox = mailbox.UnixMailbox(fp, msgfactory)

try:
    storedir = os.mkdir(dirname, 0750)
    os.mkdir(dirname + "/new", 0750)
    os.mkdir(dirname + "/cur", 0750)
except:
    pass

count = 0
for mail in mbox:
    count += 1
    #hammertime = time.time() # mail.get('Date', time.time())
    hammertime = datetime.datetime(*email.utils.parsedate(mail.get('Date', ''))[:7]).strftime('%s')
    hostname = 'mb2mdpy'
    filename = dirname + "/cur/%s%d.%s:2,S" % (hammertime, count, hostname)
    mail_file = open(filename, 'w+')
    mail_file.write(mail.as_string())

print "Processed {0} mails".format(count)
And it seemed to work well! It recovered many more emails than the Perl script (hehe) but the generated maildir wouldn’t work with my IMAP server. I was confused. The mutt maildirs worked like charm and I couldn’t see any difference to mine.
I scped the file onto my .maildir/ on my server, which takes quite a while because scp isn’t all too quick when it comes to many small files. Anyway, it wouldn’t necessarily work for some reason which is way beyond me. Eventually I straced the IMAP server and figured that it was desperately looking for a tmp/ folder. Funnily enough, it didn’t need that for other maildirs to work. Anyway: Lesson learnt: If your dovecot doesn’t play well with your maildir and you have no clue how to make it log more verbosely, check whether you need a tmp/ folder.
But I didn’t know that, so I investigated a bit more and I found another Perl script which converted the emails fine, too. For some reason it put my mails in "new/" and not in "cur/", which the other tools did so far. Also, it would leave the messages as unread, which I don’t like.
Fortunately, one (more or less) only needs to rename the files in a maildir to end in S for "seen". While this sounds like a simple

for f in maildir/cur/*; do mv ${f} ${f}:2,S; done
it’s not so easy anymore when you have to move the directory as well. But that’s easily being worked around by shuffling the directories around.
Another, more annoying problem with that is "Argument list too long" when you are dealing with a lot of files. So a solution must involve "find" and might look something like this:
find ${CUR} -type f -print0 | xargs -i -0 mv '{}' '{}':2,S
Duplicates
There was, however, a very annoying issue left: Duplicates. I haven’t investigated where the duplicates came from but it didn’t matter to me as I didn’t want duplicates even if the downloaded mbox archive contained them. And in my case, I’m quite confident that the mboxes are messed up. So I wanted to get rid of duplicates anyway and decided to use a hash function on the file content to determine whether two files are the same or not. I used sha1sum like this:

$ find maildir/.board-list/ -type f -print0 | xargs -0 sha1sum | head
c6967e7572319f3d37fb035d5a4a16d56f680c59  maildir/.board-list/cur/1342797208.000031.mbox:2,
2ea005ec0e7676093e2f488c9f8e5388582ee7fb  maildir/.board-list/cur/1342797281.000242.mbox:2,
a4dc289a8e3ebdc6717d8b1aeb88959cb2959ece  maildir/.board-list/cur/1342797215.000265.mbox:2,
39bf0ebd3fd8f5658af2857f3c11b727e54e790a  maildir/.board-list/cur/1342797210.000296.mbox:2,
eea1965032cf95e47eba37561f66de97b9f99592  maildir/.board-list/cur/1342797281.000114.mbox:2,
and if there were two files with the same hash, I would delete one of them. Probably like so:
#!/usr/bin/env python
import os
import sys

hashes = []
for line in sys.stdin.readlines():
    hash, fname = line.split()
    if hash in hashes:
        os.unlink(fname)
    else:
        hashes.append(hash)
But it turns out that the following snippet works, too:
find /tmp/maildir/ -type f -print0 | xargs -0 sha1sum | sort | uniq -d -w 40 | awk '{print $2}' | xargs rm
So it’ll check the files for the same contents via a sha1sum. In order to make uniq detect equal lines, we need to give it sorted input. Hence the sort. We cannot, however, check the whole lines for equality as the filename will show up in the line and it will of course be different. So we only compare the size of the hex representation of the hash, in this case 40 bytes. If we found such a duplicate hash, we cut off the hash, take the filename, which is the remainder of the line, and delete the file.
Phew. What a trip so far. Let’s put it all together:
The final thing
LIST=board-list
umask 077
DESTBASE=/tmp/perfectmdir
LISTBASE=${DESTBASE}/.${LIST}
CUR=${LISTBASE}/cur
NEW=${LISTBASE}/new
TMP=${LISTBASE}/tmp
mkdir -p ${CUR}
mkdir -p ${NEW}
mkdir -p ${TMP}
for f in /tmp/${LIST}/*; do /tmp/perfect_maildir.pl ${LISTBASE} < ${f} ; done
mv ${CUR} ${CUR}.tmp
mv ${NEW} ${CUR}
mv ${CUR}.tmp ${NEW}
find ${CUR} -type f -print0 | xargs -i -0 mv '{}' '{}':2,S
find ${CUR} -type f -print0 | xargs -0 sha1sum | sort | uniq -d -w 40 | awk '{print $2}' | xargs rm
And that’s handling email in 2012…
|
https://blogs.gnome.org/muelli/tag/mailman/
|
CC-MAIN-2021-39
|
refinedweb
| 1,790
| 67.86
|
Hello All,
arr is declared as an array of pointers which are pointers to strings.
char **arr = {"element1","element2"};
How can I intitialize and access them?
I receive 2 warnings:
( AS7 GCC 5.4.0 C project)
Severity Code Description Project File Line
Warning excess elements in scalar initializer PointerToPointer
Severity Code Description Project File Line
Warning initialization from incompatible pointer type [-Wincompatible-pointer-types] PointerToPointer C:\Users\User2\Documents\Atmel Studio\7.0\PointerToPointer\PointerToPointer\main.c 16
friendly regards
Ellen
You have used array/struct syntax for a variable that is neither an array nor a struct.
Yet another example that arrays and pointers are not equivalent.
Your initializer would be good for an array of pointers to char.
Iluvatar is the better part of Valar.
char *arr[] = { "element1", "element2", };
Program memory constants are a weak area of AVR. The obvious didn't work; it fails to compile with: error: initializer element is not computable at load time
That's wrong - I think the initialiser element IS computable at load time.
The best I could do was:
The actual strings end up in .rodata and the tables end up in .progmem. Which isn't good.
Um... Why do you call it "an array" while there's no array in the above declaration at all? You have a `char **` pointer. Pointer is not an array.
The above can be salvaged by using a compound literal on the right-hand side
but there's really no reason for that, when you can simply declare
That is an array of pointers. Not your original declaration.
And prefer to use `const`-qualified pointers to point to string literals
Maybe this should even be
(depending on your intent)
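The inline code snippets in this post did not survive extraction; the declarations being described are presumably along these lines (shown here with distinct names so they can coexist in one file, although the post used arr for each):

char **arr1 = (char *[]){ "element1", "element2" };      /* compound literal on the right-hand side */
char *arr2[] = { "element1", "element2" };               /* simply declare an array of pointers */
const char *arr3[] = { "element1", "element2" };         /* const-qualified pointers to string literals */
const char * const arr4[] = { "element1", "element2" };  /* maybe even this, depending on your intent */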
The diagnostic message is misleading. It is distorted by a non-standard language extension implemented in GCC. Your original declaration is completely nonsensical. It is non-compilable.
Dessine-moi un mouton
>and access them?
Assuming you want strings in flash, and you want to have the array of string (char) pointers also in flash, then I think your only choice is to split out the strings-
#include <stdint.h>
#include <avr/io.h>

__flash const char str0[] = "element1";
__flash const char str1[] = "element2";
__flash const char str2[] = "element3";

__flash const char*                         //store a __flash const char* in
__flash const arr[] = { str0, str1, str2 }; //a __flash const array
//remove the second __flash if you want the array of flash pointers in ram

void print(__flash const char* str){ //access str address via lpm
    while( *str ) PORTB = *str++;
}

//indirection so compiler cannot optimize and we can then see the flash address
//being read from the const char* array (in flash)
void printStr(uint8_t i){
    print( arr[i] );
}

int main(){
    //compiler will have to get char* address from flash (just to show is all correct)
    for(uint8_t i = 0; i < 3; i++) printStr( i );

    //compiler knows the address, so no flash access needed for the string addresses in the array
    //this will be the normal method used, and compiler most likely does not end up generating code
    //like above where it first needs to look up the arr element for the string address
    for(uint8_t i = 0; i < 3; i++) print( arr[i] );

    //or can just access the string you want
    print( str0 );
    print( str1 );
    print( str2 );
}
hence they represent arrays of unqualified chars that ought to be treated as const.
As with most expressions representing entire arrays, its value gets converted into a pointer to element 0.
I do not have a compiler handy, but I think that constructions like (__flash char *)PSTR("Harry") will do the trick.
If not, xmacros might be in order.
Iluvatar is the better part of Valar.
Hello All,
many thanks for all good answers. I understand. I have to do this in 2 steps.
1. Declaring strings
2. Declaring and initializing an array of pointers to these strings.
This works
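Ellen's listing was not preserved in this copy of the thread; one possible shape of the two-step approach with everything in flash, using avr-libc's PROGMEM (an assumption about her actual code, not a quote of it), is:

#include <avr/pgmspace.h>

/* step 1: declare the strings */
const char str0[] PROGMEM = "element1";
const char str1[] PROGMEM = "element2";

/* step 2: declare and initialize the array of pointers to them */
PGM_P const arr[] PROGMEM = { str0, str1 };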
friendly regards
Ellen
Well, you only have to do it in 2 steps if you want to put everytihng into flash.
If it is all in RAM, you can do
const char *arr[] = {"String1", "String2", "String3"};
In this case the result is the same either way.
Hello MrKendo,
very good
Thank You
Ellen
Just to point out that a fairly common debug technique (on PC perhaps more so than AVR) is when you have something like:
and then you want to debug it:
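The two snippets this post refers to are missing from this copy; a typical example of the technique being described (all names here are invented for illustration) is an enum paired with a matching table of strings for debug output:

/* what you have: */
enum { STATE_IDLE, STATE_RUN, STATE_STOP } state;

/* and then you want to debug it: */
const char *state_names[] = { "IDLE", "RUN", "STOP" };
/* e.g. printf("state = %s\n", state_names[state]); */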
No, you don't. You can easily do it "in one step", as has been clearly shown in multiple answers above.
The matter of doing it "in 2 steps" has arisen in connection with attempts to put these strings in flash memory, which has no direct relevance to your original question.
Dessine-moi un mouton
I'm afraid that was my fault - but the original definition looked so much like a prime candidate for program memory; I assumed that was the OP's intention.
|
https://www.avrfreaks.net/forum/how-use-char-arr
|
CC-MAIN-2020-29
|
refinedweb
| 907
| 59.03
|
Unable to play a wav file with Audio component
Hi all,
Now 3 days I'm stuck with a tricky issue!
I've built an image for my embedded device based on a Colibri T20 board, using Yocto.
I"ve added Qt5, using X11 (because Tegra20 drivers from nvidia support hw accel only with X11...).
At this point, everything works fine.
I've also generated the respective SDK to build application for the respective platform..
Works fine.
I wrote a simple application to play audio file, very basic...just to play wav files...
It does not work. No sound output.
import QtMultimedia 5.5
Item {
    ...
    Audio {
        id: audioPlayer
        source: ""
    }
    ...
    onClick() {
        idPlayer.play()
    }
}
This code works fine on my host linux.
But not on my Colibri.
Qt 5.5.1 was firstly built with ALSA, pulseaudio (gstreamer plugins...).
No output.
But in the linux, I can play the wav file using:
aplay /opt/track.wav
So I assume that my alsa driver are up-to-date.
Then, I've recompiled Qt 5.5.1 without pulseaudio....
It does not work better :(
To be honest, I'm not very experienced with alsa and pulseaudio matters. I'm very frustrated because I don't understand what's happening under the abstracted layer of QtMultimedia.
Any suggestion is welcome...This issue is simply driving me mad ;)
K.
Hi,
Do you see any error message on the console ?
SGaist,
Before recompîling Qt5.51 without pulseaudio, I had the following message in the console:
PulseAudioService: pa_context_connect() failed
Unfortunately, recompiling without pulseaudio does not display the message anymore... but it doesn't fix anything. I can't hear anything :(
K.
Do you have Pulseaudio running on your device ?
No.
I've removed "pulseaudio" from the Yocto generated Distro.
But the command line:
aplay /opt/track.wav
still works though...
Because it's using ALSA directly.
Did you try to build the ALSA backend for Qt Multimedia ?
Yes, I think so.
I can find libqtaudio_alsa.so in /usr/lib/qt/plugins.
Start your application with
QT_DEBUG_PLUGINS=1to see if there's a problem when loading the plugin.
At first glance, the plugins get loaded.
Got keys from plugin meta data ("alsa")
QFactoryLoader::QFactoryLoader() looking at "/usr/lib/qt5/plugins/audio/libqtaudio_alsa.so.new"
Found metadata in lib "/usr/lib/qt5/plugins/audio/libqtaudio_alsa.so"
:(
Looks good on that point.
Did you check which output QtMultimedia sees as available ?
What do you mean exactly ?
Yesterday, in the main.cpp, I dumped the
QAudioDeviceInfo::availableDevices(QAudio::AudioOutput)
Is that what you meant ? And yes, it dumps the same devices as with
aplay -L
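For reference, a minimal way to dump those devices from main.cpp (a sketch, not the poster's exact code) is:

#include <QAudioDeviceInfo>
#include <QDebug>

static void dumpOutputDevices()
{
    // list every audio output device QtMultimedia can see
    const QList<QAudioDeviceInfo> devices =
        QAudioDeviceInfo::availableDevices(QAudio::AudioOutput);
    for (const QAudioDeviceInfo &info : devices)
        qDebug() << info.deviceName();
}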
No error raised.
However, in the
onClick()handler where
idPlay->play()is launched (see code snippet above..),
I traced the values of some properties of tha
Audiocomponent.
qml: status = 2
qml: playvolume = 1
qml: muted = false
qml: error code = 0 , string=
qml: src =
qml: availability = 0
qml: duration = -1
qml: hasAudio = false
Weird. duration = -1, hasAudio = false and however, no error code / string... !
|
https://forum.qt.io/topic/72652/unable-to-play-a-wav-file-with-audio-component/11
|
CC-MAIN-2019-04
|
refinedweb
| 495
| 62.54
|
#include <sys/types.h>
#include <sys/stream.h>
bufcall_id_t bufcall(size_t size, uint_t pri, void (*func)(void *arg), void *arg);
Architecture independent level 1 (DDI/DKI).
size Number of bytes required for the buffer.
pri Priority of the allocb(9F) allocation request (not used).
func Function or driver routine to be called when a buffer becomes available.
arg Argument to the function to be called when a buffer becomes available.
The bufcall() function serves as a timeout(9F) call of indeterminate length. When a buffer allocation request fails, bufcall() can be used to schedule the routine func, to be called with the argument arg when a buffer becomes available. func may call allocb() or it may do something else.
If successful, bufcall() returns a bufcall ID that can be used in a call to unbufcall() to cancel the request. If the bufcall() scheduling fails, func is never called and 0 is returned.
The bufcall() function can be called from user, interrupt, or kernel context.
Example 1: Calling a function when a buffer becomes available:
The purpose of this srv(9E) service routine is to add a header to all M_DATA messages. Service routines must process all messages on their queues before returning, or arrange to be rescheduled
While there are messages to be processed (line 13), check to see if it is a high priority message or a normal priority message that can be sent on (line 14). Normal priority message that cannot be sent are put back on the message queue (line 34). If the message was a high priority one, or if it was normal priority and canputnext(9F) succeeded, then send all but M_DATA messages to the next module with putnext(9F) (line 16).
For M_DATA messages, try to allocate a buffer large enough to hold the header (line 18). If no such buffer is available, the service routine must be rescheduled for a time when a buffer is available. The original message is put back on the queue (line 20) and bufcall (line 21) is used to attempt the rescheduling. It will succeed if the rescheduling succeeds, indicating that qenable will be called subsequently with the argument q once a buffer of the specified size (sizeof (struct hdr)) becomes available. If it does, qenable(9F) will put q on the list of queues to have their service routines called. If bufcall() fails, timeout(9F) (line 22) is used to try again in about a half second.
If the buffer allocation was successful, initialize the header (lines 25-28), make the message type M_PROTO (line 29), link the M_DATA message to it (line 30), and pass it on (line 31).
Note that this example ignores the bookkeeping needed to handle bufcall() and timeout(9F) cancellation for ones that are still outstanding at close time.
 1  struct hdr {
 2      unsigned int h_size;
 3      int h_version;
 4  };
 5
 6  void xxxsrv(q)
 7      queue_t *q;
 8  {
 9      mblk_t *bp;
10      mblk_t *mp;
11      struct hdr *hp;
12
13      while ((mp = getq(q)) != NULL) {        /* get next message */
14          if (mp->b_datap->db_type >= QPCTL ||    /* if high priority */
                canputnext(q)) {                /* normal & can be passed */
15              if (mp->b_datap->db_type != M_DATA)
16                  putnext(q, mp);             /* send all but M_DATA */
17              else {
18                  bp = allocb(sizeof(struct hdr), BPRI_LO);
19                  if (bp == NULL) {           /* if unsuccessful */
20                      putbq(q, mp);           /* put it back */
21                      if (!bufcall(sizeof(struct hdr), BPRI_LO,
                            qenable, q))        /* try to reschedule */
22                          timeout(qenable, q, drv_usectohz(500000));
23                      return (0);
24                  }
25                  hp = (struct hdr *)bp->b_wptr;
26                  hp->h_size = msgdsize(mp);  /* initialize header */
27                  hp->h_version = 1;
28                  bp->b_wptr += sizeof(struct hdr);
29                  bp->b_datap->db_type = M_PROTO; /* make M_PROTO */
30                  bp->b_cont = mp;            /* link it */
31                  putnext(q, bp);             /* pass it on */
32              }
33          } else {            /* normal priority, canputnext failed */
34              putbq(q, mp);   /* put back on the message queue */
35              return (0);
36          }
37      }
        return (0);
38  }
srv(9E), allocb(9F), canputnext(9F), esballoc(9F), esbbcall(9F), putnext(9F), qenable(9F), testb(9F), timeout(9F), unbufcall(9F)
Writing Device Drivers
STREAMS Programming Guide
Even when func is called by bufcall(), allocb(9F) can fail if another module or driver had allocated the memory before func was able to call allocb(9F).
|
http://backdrift.org/man/SunOS-5.10/man9f/bufcall.9f.html
|
CC-MAIN-2017-09
|
refinedweb
| 706
| 57.2
|
Introduction to Python Sys Module
Python is a very popular and powerful scripting language these days. From developing applications and websites to building analytical products and production-level code, Python keeps gaining popularity. One of the reasons behind this is its flexibility and its easy-to-use modules. Let's look at one of the widely used modules in Python, known as "sys". The "sys" module stands for system; it basically describes the interpreter's environment. To import the sys module, one can simply type "import sys".
Functions of Python Sys Module
Given below are the functions of Python Sys Module:
Let’s import sys module in our IDE and check the version of python.
1. version
It helps you understand the version of python being in use.
Code:
import sys
sys.version
Output:
2. executable
It's a string that tells you where on your filesystem your Python interpreter lives.
Code:
import sys
sys.executable
Output:
3. sys.builtin_module_names
This helps you see which modules are built into Python. These modules do not need to be downloaded and can be used directly after importing.
Code:
sys.builtin_module_names
Output:
However, modules which are not built in need to be downloaded first and then imported into the Python environment.
Examples:
dateutil, git etc.
4. sys.platform
This helps you know about the platform. Everyone wants their program to run irrespective of the platform. And that’s where this command is mostly used.
Code:
sys.platform
Output:
If you need to make your code behave differently per platform, here is one way to do it.
Code:
if sys.platform.startswith('win'):
    pass   # win-specific code ...
elif sys.platform.startswith('aix'):
    pass   # AIX-specific code ...
elif sys.platform.startswith('darwin'):
    pass   # Mac-specific code ...
elif sys.platform.startswith('linux'):
    pass   # Linux-specific code ...
5. sys.exit
When an exception or error condition occurs in a program, you often need a safe way to exit. sys.exit helps there.
Code:
def is_namevalid(name):
    if any(i.isdigit() for i in name):
        print("ERROR: Invalid feature set ")
        sys.exit()
    else:
        print("Correct name")
Output:
6. sys.argv
This refers to a list in Python which contains the command-line arguments passed to the script. len(sys.argv) gives the count of the number of arguments. Someone working with command-line arguments will use sys.argv. sys.argv[0] refers to the name of the script.
Code:
import sys
print(sys.argv[0])
print('Number of arguments:', len(sys.argv), 'arguments.')
print('Argument List:', str(sys.argv))
Say this python code is saved with name “sys.py” at certain location.
Now let’s go to command line, and type “python sys.py”.
Output:
However, if you type “python sys.py arg1”.
Output:
The thing to note here is that the first argument will always be the script name, and it is included in the argument count.
7. sys.exitfunc
This is a parameterless function, which can be used to perform cleanup actions at the program's exit. This function no longer exists in Python 3.x. However, one can use "atexit" instead.
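For example, a rough equivalent of such an exit hook using atexit (a sketch, not taken from the original article) looks like this:

import atexit

def cleanup():
    print("performing cleanup actions at exit")

atexit.register(cleanup)   # cleanup() runs automatically when the interpreter exits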
8. sys.stdin, sys.stdout, sys.stderr
These refer to the file objects used by the interpreter for standard input, output, and errors.
- Stdin is for interactive input like we use input(), raw_input(). Input is taken through stdin.
- Stdout is used for output like print(), output is printed actually through stdout.
- Stderr is to prompt the error message.
Code:
import sys
for j in (sys.stdin, sys.stdout, sys.stderr):
    print (j)
Output:
As one can notice, stdin is printed with mode 'r', which means reading, while stdout and stderr are opened in writing mode.
9. sys.setrecursionlimit(limit)
This function lets you set the maximum depth of the Python interpreter's stack. It helps you put a stop to runaway recursion that could otherwise cause an overflow. The highest possible limit depends on the platform. If you set the limit higher, be aware of the risk: it can lead to crashes of the Python process.
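As a small illustration (the default limit is usually 1000, but that is platform- and version-dependent), you can read the current limit with sys.getrecursionlimit() and raise it like this:

import sys

print(sys.getrecursionlimit())   # typically 1000 by default
sys.setrecursionlimit(2000)      # raise the limit, accepting the risk noted above
print(sys.getrecursionlimit())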
10. sys.getdefaultencoding()
It returns the default string encoding.
Let me show you in python environment.
Code:
print(sys.getdefaultencoding())
Output:
11. sys.setdefaultencoding()
This function helps you change the default encoding for strings in the Python environment. Note that in Python 2 it is removed from the sys module at startup, so it is typically only reachable after reload(sys); it does not exist at all in Python 3.
Example:
Code:
import sys
sys.setdefaultencoding('UTF8')
12. sys.exc_clear()
This function helps you clear all information related to the current or previous exceptions. It is useful when you need to free allocated resources or perform object finalization. Note that it exists only in Python 2; it was removed in Python 3.
Conclusion
The functions above are an important set of the functionality available in the sys module. The sys module helps you learn about your working environment: where your code is running, which platform it is running on, which Python version and interpreter are in use, and so on. Go through these functions and start playing with them in order to get a good grip on them.
Recommended Articles
This is a guide to Python Sys Module. Here we discuss the introduction to Python Sys Module along with top 12 functions respectively. You may also have a look at the following articles to learn more –
|
https://www.educba.com/python-sys-module/
|
CC-MAIN-2021-49
|
refinedweb
| 863
| 68.97
|
In this Flask tutorial, we will check how to get the username and the password from a HTTP request made to a Flask server with basic authentication.
Introduction
In this Flask tutorial, we will check how to get the username and the password from a HTTP request made to a Flask server with basic authentication. If you haven’t yet used Flask, please consult this getting started tutorial.
In this simple authentication mechanism, the client sends the HTTP request with an Authorization header, which contains both the password and the username [1].
This Authorization header has the following format, with the username:password part encoded as a base64 string [1]:
Authorization: Basic username:password
Important: In this tutorial we will simply cover the basic authentication part of the request, more precisely, how to get the password and username from the client request sent in the authorization header.
This authentication scheme doesn’t guarantee data privacy and the base64 applied by the client is a reversible encoding, so we should consider that the data is sent from the client to the server in plain text [2].
Thus, it’s trivial for an attacker to steal the credentials sent in the authorization header if we are using HTTP. In order to securely send the credentials, we should use the basic authentication mechanism with HTTPS to ensure data is encrypted before transmission, specially when dealing with sensitive information.
The code
We will start our code by importing the Flask class from the flask module, so we can create and configure our application.
We will also need to import the request global object, which allows us to access the parsed incoming request data [3]. Note that although this is a global object, Flask will guarantee that we get the correct data in each request even in multi-threaded environments [3].
from flask import Flask
from flask import request
Next we will create a Flask class instance, which will be our app. As input of the constructor, we pass the name of our application.
app = Flask("my app")
Next we will define the route where our server will be listening for incoming requests and the handling function that will be executed when the request is received on that route. Note that we are not specifying the HTTP methods allowed and thus, by default, this route will only answer to HTTP GET requests [4].
@app.route('/auth')
def authRouteHandler():
    ## handling function code
Inside the handling function, the basic authentication information is stored on the authorization object of the request global object we have imported in the beginning of the code.
The authorization object is of class werkzeug.datastructures.Authorization, but we can access it in Python’s dictionary style.
So, we can access both the username and the password sent by the client by using those strings as keys of the dictionary.
print(request.authorization["username"])
print(request.authorization["password"])
To finalize the handling function we will return an “ok” message to the client.
return "ok"
Finally, we will run our app by calling the run method on the app object. We will configure it to listen on all the IPs available (by specifying the host as “0.0.0.0“) and on port 8090. The full source code can be seen below.
from flask import Flask
from flask import request

app = Flask("my app")

@app.route('/auth')
def authRouteHandler():
    print(request.authorization["username"])
    print(request.authorization["password"])
    return "ok"

app.run(host = '0.0.0.0', port = 8090)
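One detail worth noting, which the tutorial does not cover: if the client sends no Authorization header at all, request.authorization is None, so indexing it like a dictionary would raise an error. A slightly more defensive variant of the handler (a sketch, not part of the original code) could be:

from flask import Flask, request

app = Flask("my app")

@app.route('/auth')
def authRouteHandler():
    auth = request.authorization
    if auth is None:
        # no credentials supplied; ask the client to authenticate
        return "missing credentials", 401
    print(auth["username"])
    print(auth["password"])
    return "ok"

app.run(host='0.0.0.0', port=8090)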
Testing the code
To test the code, we will use the loopback IP address 127.0.0.1. As a tool to make the HTTP request with the authorization, we will use Postman.
So, after running the Python code we have developed, open Postman. On the dropdown left to the HTTP request destination URL, select GET. As mentioned before, since we didn’t specify the HTTP methods allowed, the “/auth” route we created on the server will only answer to GET requests.
On the request destination URL, write http://127.0.0.1:8090/auth. As mentioned, we will use the loopback IP address and port 8090, which was the one we specified in the code.
Then, click on the Authorization tab below the HTTP methods dropdown. On the view that opens, go to the Type dropdown and select “Basic Auth“.
Then, fill the username and password fields with some testing values. For this tutorial, I’ve used “testUser” and “testPass”.
Finally, click the send button so the request is sent to the Flask server. You should receive an “ok” message upon execution of the request. You can check all the mentioned configurations below at figure 1.
Figure 1 – Postman configured for basic authentication.
If you go back to the Python shell, you should get an output similar to figure 2, which shows the credentials sent in the HTTP request being printed. Naturally, in a real application scenario, we would then use these credentials to confirm if the user was authorized to perform the request or not.
Figure 2 – Output in Python’s prompt. Tested on the Python IDLE IDE.
References
[1]
[2]
[3]
[4]
|
https://techtutorialsx.com/2018/01/02/flask-basic-authentication/
|
CC-MAIN-2019-04
|
refinedweb
| 859
| 62.27
|
Communication between a LoPy and several LoPys via LORA
- JeromeMaquoi last edited by
Hello,
I have two lopys that communicates via Lora and one of my lopy must act as a relay mast so it has to handle communication via Lora with several lopys, possibly at the same time. In fact, the relay LoPy has to be able to receive data from different LoPys and to know from which LoPy the data come. Can I do that if I start from the code below? How can I distinguish the origins of the data?
from network import LoRa
import socket
import time
lora = LoRa(mode=LoRa.LORA, frequency=863000000)
s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
s.setblocking(False)
while True:
    rec = s.recv(1024)  # reception of the data in bytes
    time.sleep(2)
Thank you for your answers,
Jérôme Maquoi
- rcolistete last edited by
I suggest to use this Pycom LoRa-Mac nano-gateway to start with :
LoPy Nano-Gateway Extended (Timeout and Retry)
It was a development from the earlier version :
LoPy Nano-Gateway
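Raw LoRa (LoRa-MAC) frames carry no sender address, so a common approach for distinguishing origins (this is an application-level convention sketched here, not something the Pycom API provides) is to prepend a small header, for example a one-byte node ID, to every payload:

NODE_ID = 0x01  # choose a unique ID per LoPy

def send(s, payload):
    # prepend our ID so the relay can tell senders apart
    s.send(bytes([NODE_ID]) + payload)

def receive(s):
    rec = s.recv(1024)
    if rec:
        sender, data = rec[0], rec[1:]
        return sender, data
    return None, None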
|
https://forum.pycom.io/topic/2070/communication-between-a-lopy-and-several-lopys-via-lora/1
|
CC-MAIN-2019-09
|
refinedweb
| 175
| 62.27
|
In a given sentence there may be a word which gets repeated before the sentence ends. In this Python program, we are going to catch such a word that is repeated in the sentence. Below are the logical steps we are going to follow to get this result.
In the below program we use the counter method from the collections package to keep a count of the words.
from collections import Counter

def Repeat_word(load):
    word = load.split(' ')
    dict = Counter(word)
    for value in word:
        if dict[value] > 1:
            print (value)
            return

if __name__ == "__main__":
    input = 'In good time in bad time friends are friends'
    Repeat_word(input)
Running the above code gives us the following result −
time
|
https://www.tutorialspoint.com/find-the-first-repeated-word-in-a-string-in-python-using-dictionary
|
CC-MAIN-2022-21
|
refinedweb
| 115
| 69.92
|
kendalltau

numeric.stats.kendalltau(x, y)
Calculates Kendall’s tau, a correlation measure for ordinal data.
Kendall’s tau is a measure of the correspondence between two rankings. Values close to 1 indicate strong agreement, values close to -1 indicate strong disagreement. This is the 1945 “tau-b” version of Kendall’s tau [2], which can account for ties and which reduces to the 1938 “tau-a” version [1] in the absence of ties.
- Parameters
x – (array_like) x data array.
y – (array_like) y data array.
- Returns
Correlation.
The tau-b statistic is computed as

tau = (P - Q) / sqrt((P + Q + T) * (P + Q + U))

where P is the number of concordant pairs, Q the number of discordant pairs, T the number of ties only in x, and U the number of ties only in y. If a tie occurs for the same pair in both x and y, it is not added to either T or U.

References

[1] Maurice G. Kendall, “A New Measure of Rank Correlation”, Biometrika, Vol. 30, No. 1/2, pp. 81-93, 1938.
[2] Maurice G. Kendall, “The treatment of ties in ranking problems”, Biometrika, Vol. 33, No. 3, pp. 239-251, 1945.
[3] Gottfried E. Noether, “Elements of Nonparametric Statistics”, John Wiley & Sons, 1967.
[4] Peter M. Fenwick, “A new data structure for cumulative frequency tables”, Software: Practice and Experience, Vol. 24, No. 3, pp. 327-336, 1994.
Examples:
from mipylib.numeric import stats

x1 = [12, 2, 1, 12, 2]
x2 = [1, 4, 7, 1, 0]
tau = stats.kendalltau(x1, x2)
print tau
Result:
>>> run script... -0.471404520791
|
http://meteothink.org/docs/meteoinfolab/numeric/stats/kendalltau.html
|
CC-MAIN-2020-16
|
refinedweb
| 249
| 67.55
|
I'm currently trying to make a program that will read a file find each unique word and count the number of times that word appears in the file. What I have currently ask the user for a word and searches the file for the number of times that word appears. However I need the program to read the file by itself instead of asking the user for an individual word.
This is what I have currently:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>              /* needed for strcmp */

int main(int argc, char const *argv[])
{
    int num = 0;
    char word[2000];
    char string[2000];           /* was an uninitialized pointer; use a buffer instead */
    FILE *in_file = fopen("words.txt", "r");

    if (in_file == NULL)
    {
        printf("Error file missing\n");
        exit(-1);
    }

    scanf("%1999s", word);
    printf("%s\n", word);

    /* this loop searches the file for the current word */
    while (fscanf(in_file, "%1999s", string) == 1)
    {
        if (!strcmp(string, word)) /* if match found increment num */
            num++;
    }

    printf("we found the word %s in the file %d times\n", word, num);
    fclose(in_file);
    return 0;
}
If you want to print every line contained in the file just once, you have to save the strings you have read in a given data structure. For example, a sorted array could do the trick. The code might look as follow:
#include <stddef.h>

size_t numberOfLine = getNumberOfLine (file);
char **previousStrings = allocArray (numberOfLine, maxStringSize);
size_t i;

for (i = 0; i < numberOfLine; i++)
{
    char *currentString = readNextLine (file);
    if (!containString (previousStrings, currentString))
    {
        printString (currentString);
        insertString (previousStrings, currentString);
    }
}
You may use binary search to code the functions containString and insertString in an efficient way. See here for further information.
|
https://codedump.io/share/EKDuLCWpONM2/1/program-to-read-words-from-a-file-and-count-their-occurrence-in-the-file
|
CC-MAIN-2017-22
|
refinedweb
| 260
| 63.19
|
Stork - Storage Orchestration Runtime for Kubernetes
Stork is a Cloud Native storage operator runtime scheduler plugin. It translates a scheduler's orchestration decisions into something that an external cloud native storage solution can act upon. By doing so, it extends Kubernetes with more stateful awareness of the underlying storage provider, its capabilities and state.
Stork is intended to allow storage operators such as Portworx, EMC-RexRay, and Kubernetes Local Storage to extend upon scheduler actions and allow for storage-implementation specific orchestration actions around what the orchestrator is trying to do. The most basic example is when the scheduler is trying to spawn a container that is part of a pod; Stork will allow the storage provider to specify an appropriate node on which that container needs to run so that its data access is local to the runtime of the container. This is one of many orchestration scenarios that is addressed by this project.
Stork can be used to co-locate pods with where their data is located. This is achieved by using a Kubernetes scheduler extender. The scheduler is configured to use Stork as an extender, so every time a pod is being scheduled, the scheduler will send filter and prioritize requests to Stork. Stork will then check with the configured storage driver to pick the best nodes for the pod. You can either configure the default Kubernetes scheduler to communicate with Stork or launch another instance of kube-scheduler.
To enable the Initializer you need to:
* Enable the Initializer feature in your Kubernetes cluster since it is an alpha feature.
* Add the "--app-initializer=true" option to stork (in either the deployment or daemonset spec file).
* Add the stork-initializer spec to your Kubernetes cluster using
kubectl create -f stork-initializer.yaml
Stork will monitor the health of the volume driver on the different nodes. If the volume driver on a node becomes unhealthy pods on that node using volumes from the driver will not be able to access their data. In this case stork will relocate pods on to other nodes so that they can continue running.
Stork uses the external-storage project from kubernetes-incubator to add support for snapshots.
Refer to Snapshots with Stork for instructions on creating and using snapshots with Stork.
This feature allows you to specify pre and post rules that are run on the application pods before and after a snapshot is triggered. This allows users to perform actions like quiescing or flushing data from applications before a snapshot is taken and resume I/O after the snapshot is taken. The commands will be run in pods which are using the PVC being snapshotted.
Read Configuring application consistent snapshots for further details.
Stork is written in Golang. To build Stork:
# git clone git@github.com:libopenstorage/stork.git
# export DOCKER_HUB_REPO=myrepo
# export DOCKER_HUB_STORK_IMAGE=stork
# export DOCKER_HUB_STORK_TAG=latest
# make
This will create the Docker image
$(DOCKER_HUB_REPO)/$(DOCKER_HUB_STORK_IMAGE):$(DOCKER_HUB_TAG).
Now that you have stork in a container image, you can just create a pod config for it and run it in your Kubernetes cluster. We do this via a deployment.
A Deployment manages a Replica Set which in turn manages the pods, thereby making stork resilient to failures. The deployment spec is defined in specs/stork-deployment.yaml. By default the deployment does the following:
* Uses the latest stable image of stork to start a pod. You can update the tag to use a specific version or use your own stork image.
* Creates a service to provide an endpoint that can be used to reach the extender.
* Creates a ConfigMap which can be used by a scheduler to communicate with stork.
* Uses the Portworx (pxd) driver for stork.
You can either update the default kube scheduler to use stork or start a new scheduler instance which can use stork. Once this has been deployed the scheduler can be used to schedule any pods with the added advantage that it will also try to optimize the storage requirements for the pod.
You might not always have access to your default scheduler to update it's config options. So the recommended way to start stork is to launch another instance of the scheduler and configure it to use stork
In order to run stork in your Kubernetes cluster, just create the deployment specified in the config above in a Kubernetes cluster:
# kubectl create -f stork-deployment ....
We will then start a new scheduler instance here and configure it to use stork. We will call the new scheduler 'stork'. This new scheduler instance is defined in specs/stork-scheduler.yaml. This spec starts 3 replicas of the scheduler.
You will need to update the version of kube scheduler that you want to use. This should be the same version as your kubernetes cluster. Example for Kubernetes v1.8.1 it would be:
image: gcr.io/google_containers/kube-scheduler-amd64:v1.8.1
You can deploy it by running the following command:

# kubectl create -f stork-scheduler.yaml
Verify that the scheduler pods are running:
NAME                              READY   STATUS    RESTARTS   AGE
....
stork-scheduler-9d6cb4546-gqdq2   1/1     Running   0          32m
stork-scheduler-9d6cb4546-k4z8t   1/1     Running   0          32m
stork-scheduler-9d6cb4546-tfkh4   1/1     Running   0          30m
....
When using stork with the default scheduler, stork needs to be run as a daemon set. This is to avoid a deadlock when trying to schedule the stork pods from the scheduler.
First create the stork daemonset defined in specs/stork-daemonset.yaml
# kubectl create -f stork-daemonset ....
To configure your default scheduler to use stork add the following arguments to the scheduler and restart the scheduler if required:
--policy-configmap=stork-config --policy-configmap-namespace=kube-system
You will also need to make sure that the kube-scheduler clusterrole has permissions to read config maps. If not, run the following command:
And add the following permissions:
In order to schedule a given pod using the Stork scheduler, specify the name of the scheduler in that pod spec:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-data
  annotations:
    volume.beta.kubernetes.io/storage-class: px-mysql-sc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-mysql-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  replicas: 1
  selector:
    matchLabels:
      app: mysql
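The Deployment above appears to be truncated before its pod template. The point of the example, naming the scheduler in the pod spec, would normally appear there roughly as follows (a sketch; the labels, image and credentials are assumptions, while schedulerName is the field that selects Stork):

  template:
    metadata:
      labels:
        app: mysql
    spec:
      schedulerName: stork          # tell Kubernetes to use the Stork scheduler
      containers:
      - name: mysql
        image: mysql:5.6
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: mysql-data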
The above spec will create a mysql pod with a Portworx volume having 2 replicas. The pod will then get scheduled on a node in the cluster where one of the replicas is located. If one of those nodes does not have enough cpu or memory resources then it will get scheduled on any other node in the cluster where the driver (in this case Portworx) is running.
|
https://xscode.com/libopenstorage/stork
|
CC-MAIN-2021-21
|
refinedweb
| 1,125
| 53.81
|
- Selenium Grid is a tool that we can use for testing and automating web applications and for running them on remote machines. We can integrate our tests with CI/CD tools such as Jenkins. In this blog, we will see Jenkins Integration with Selenium Grid.
Why is Jenkins integration required?
- Test automation helps us catch defects, errors, and bugs as early as possible. The earlier we catch an issue, the cheaper it is to fix.
- With CI/CD integration in Jenkins, these automated tests can run on every build.
- Log in into Jenkins and go to Manage Jenkins.
- Go to Global Tool Configuration.
- Go to JDK and click on Add JDK.
- Here you have 2 options either install it automatically by selecting a checkbox (Install automatically), selecting the version and entering your oracle account details. You are required to have a valid oracle account.
- Or you can unselect the checkbox and provide the name and path to your local JDK.
- Install the Selenium plugin for running Grid on Jenkins.
Setting-up Selenium Grid
- Once we install the plugin we will be able to see it in the Jenkins dashboard. Go inside and you will see the Hub management page where the URL to the Hub is given. This will be used to execute our tests on Jenkins
- Go to the configurations tab and click on New Configurations.
- In this section, we will give the name of the configuration. And also we need to provide the path to the browser driver. We have given the path to our chromedriver. Save the configuration.
- Now go to the Nodes matching configurations section and click on the newly created browser configuration to go inside and start the server.
- Once you start the server you will all the configuration that you have saved.
- Now go back to your Selenium Grid Hub management and you will see one new registered node in the Registered Remote Controls selection
- Now we are ready to run our tests on Jenkins.
Configuring your Tests
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.Test;
import java.net.MalformedURLException;
import java.net.URL;
import org.apache.logging.log4j.*;

public class JenkinsSeleniumGridTest {

    private static Logger log = LogManager.getLogger(JenkinsSeleniumGridTest.class);

    @Test
    public void JenkinsDemoFunc() throws MalformedURLException {
        ChromeOptions chromeOptions = new ChromeOptions();      // initialize chromeOptions
        chromeOptions.setCapability("browserName", "chrome");   // define on which browser your tests will run
        chromeOptions.setCapability("platformName", "LINUX");   // define the platform on which your tests will run
        chromeOptions.addArguments("--headless");               // run the browser in headless mode

        WebDriver driver = new RemoteWebDriver(new URL(""), chromeOptions); // URL to the hub running on your local system
        driver.get("");                                          // URL to hit
        log.debug(driver.getTitle());                            // print the title of the webpage
        log.debug(driver.getCurrentUrl());                       // print the URL of the current webpage
        driver.quit();                                           // close the browser
    }
}
- In this code, we are providing the browser name and platform name on which our tests will run.
- Using Jenkins, we usually execute our tests in headless mode; to do that we pass the headless argument in our code.
- In remote WebDriver URL, we are giving the URL which is present in the Jenkins Hub management.
- We need to provide the valid URL which we will hit.
- So, this was a short blog on how we can achieve Jenkins Integration with Selenium Grid. See you in the next one.
|
https://blog.knoldus.com/jenkins-integration-selenium-grid/
|
CC-MAIN-2021-31
|
refinedweb
| 564
| 51.24
|
Recursion: Koch Snowflake
import math

# This recursive (that is, self-calling) function draws
# a line with a detour. Each detour is also drawn with
# this function, meaning that there can be detours on
# detours. This creates a "snowflake" shape known as
# the Koch Snowflake or Koch Curve.
def koch(coords, direction, length, detours):
    if detours > 0:
        # We have detours left to take; we'll draw four sides.
        # Each koch() call returns its end-point; we save that
        # value and use it as the start point for the next side.
        a = koch(coords, direction, length / 3, detours-1)
        b = koch(a, direction - 1, length / 3, detours-1)
        c = koch(b, direction + 1, length / 3, detours-1)
        d = koch(c, direction, length / 3, detours-1)
        return d
    else:
        # No detours left; just draw a straight line.
        end = (
            coords[0] + length * math.cos(direction * math.pi / 3),
            coords[1] + length * math.sin(direction * math.pi / 3)
        )
        Line([coords, end], color='white')
        return end

# a nice background
Rectangle(color='#004')

# This is where we draw the actual Koch Snowflake.
# Try altering the last argument (the 2) to another
# number from 0 to 3, and see what happens!
a = koch((10, 27), 0, 80, 2)
a = koch(a, 2, 80, 2)
a = koch(a, 4, 80, 2)
|
https://shrew.app/show/snild/recursion-koch-snowflake
|
CC-MAIN-2022-21
|
refinedweb
| 230
| 73.68
|
This article serves as an introduction to fp-ts coming from someone knowledgeable in JS. I was inspired to write this up to help some team members familiarise themselves with an existing codebase using fp-ts.
Hindley-Milner type signatures
Hindely-Milner type signatures represent the shape of a function. This is important because it will be one of the first things you look at to understand what a function does. So for example:
const add1 = (num:number) => num + 1
This will have the type signature of number -> number as it will take in a number and return a number.
Now what about functions that take multiple arguments, for example a function add that adds two numbers together? Generally, functional programmers prefer functions to have one argument. So the type signature of add will look like number -> number -> number and the implementation will look like this:
const add = (num1: number) => (num2: number) => num1 + num2
Breaking this down, we don't have a function that takes in two numbers and adds them, we have a function that takes in a number and returns another function that takes in an number that finally adds them both together.
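A quick usage check (assuming the add definition above) makes the currying visible: applying add to one argument returns a function still waiting for the second.

const add3 = add(3);      // a function that still needs the second number
console.log(add3(4));     // 7
console.log(add(1)(2));   // 3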
In the fp-ts documentation, we read the TypeScript signature to tell us what a function does. So for example the trimLeft function in the string package has a signature of

export declare const trimLeft: (s: string) => string

which tells us that it is a function that takes in a string and returns a string.
Higher kinded types
Similar to how higher order functions like map require a function to be passed in, a higher kinded type is a type that requires another type to be passed in. For example,

let list: Array<string>;

Array is a higher kinded type that requires another type, string, to be passed in. If we left it out, TypeScript will complain at you and ask you "an array of what?". An example of this in fp-ts is the flatten function from array.
export declare const flatten: <A>(mma: A[][]) => A[]
What this says is that the function requires another type A, and it flattens arrays of arrays of A into arrays of A.
However, arrays are not the only higher kinded types around. I like to think of higher kinded types as containers that help abstract away some concept. For example, arrays abstract away the concept of iteration and Options abstract away the concept of null.
Option
Options are a higher kinded type that abstract away the concept of null. Although it requires some understanding to use and some plumbing to get it all set up, my promise to you is that if you start using Options, your code will be more reliable and readable.
Options are containers for optional values.

type Option<A> = None | Some<A>

At any one time, an option is either a None representing null, or a Some<A> representing some value of type A.
If you have a function that returns an Option, for example head:
export declare const head: <A>(as: A[]) => Option<A>
By seeing that the function returns an Option, you know that by calling head, you may not get a value. However, by wrapping this concept up in an Option, you only need to deal with the null case when you unwrap it.
So how do you write your own function that returns an Option? If you are instantiating your own Options, you will need to look under the constructors part of the documentation. For example,
import { some, none, Option } from "fp-ts/lib/Option";

const some1 = (s: string): Option<number> => s === 'one' ? some(1) : none;
However, to extract out the value inside an Option, you will need to use one of the destructor methods. For example, the fold function in Option is a destructor.
export declare const fold: <A, B>(onNone: Lazy<B>, onSome: (a: A) => B) => (ma: Option<A>) => B
This type signature is a little complicated so let's break it down.
- fold: <A, B>...: This function has two type parameters, A and B.
- ...(onNone: Lazy<B>, ...: This takes in an onNone function that returns a value of type B.
- ..., onSome: (a: A) => B)...: This also takes in an onSome function that takes in a value of type A and returns a value of type B.
- ... => (ma: Option<A>)...: This expects an Option of type A to be passed in.
- ... => B: After all arguments are passed in, this will return a value of type B.
Putting all this together, if we wanted to use our some1 function from earlier and print "success 1" if the value was "one", otherwise print "failed", it would look like this:
import { some, none, fold, Option } from "fp-ts/lib/Option";

const some1 = (s: string): Option<number> => s === 'one' ? some(1) : none;

const print = (opt: Option<number>): string => {
  const onNone = () => "failed";
  const onSome = (a: number) => `success ${a}`;
  return fold(onNone, onSome)(opt);
}

console.log(print(some1("one")));
console.log(print(some1("not one")));
Now we know how to create an Option as well as extract out a value from an Option, however we are missing what in my opinion is the exciting part of Options which is the ability to transform them. Options are Functors which is a fancy way of saying that you can map them. In the documentation, you can see that Option has a Functor instance and a corresponding map instance operation.
What this means is that you can transform Options using regular functions. For example, if you wanted to write a function that adds one to an Option<number>, it would look like so:
import { map, Option } from "fp-ts/lib/Option";

const add1 = (num: number) => num + 1;

const add1Option = (optNum: Option<number>): Option<number> => map(add1)(optNum);
Now we know how to create options, transform them via map functions and use destructors to extract out the value from them whilst referring to the documentation each step of the way.
|
https://practicaldev-herokuapp-com.global.ssl.fastly.net/derp/intro-to-fp-ts-2ime
|
CC-MAIN-2022-27
|
refinedweb
| 990
| 58.52
|
Number-guessing Game
From Progteam
Number-guessing Game is problem number 3589 on the Peking University ACM site.
Problem Information
Problem Name: Number-guessing Game
Problem Number on PKU: 3589
Synopsis: Two players are playing a number-guessing game in which Player B has to guess a 4-distinct-digit number and Player A has to tell Player B how close the guess is by giving information in the form of *A*B where * is an integer between 0 and 4. A represents the number of correct digits in the "right" place, while B represents the number of correct digits that are in the wrong place.
Solver Information
Solver: Eric Hong
Date: September 12, 2008
Trivia
I accidentally came across this question while I was looking for a problem to practice Ruby.
General Strategy
- Using Scanner, take the input.
- Using a nested for loop, compare each digit in numB to all digits in numA. (Note: numB is the guess and numA is the answer.)
- If two digits are equal:
- If two indices are equal, then increment counterA.
- Else, increment counterB.
- Else continue to next iteration.
- Display the results.
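For instance (an illustrative run, assuming the whitespace-separated format the Scanner-based solution below reads: a case count N, then the answer followed by the guess for each case), if the answer is 1234 and the guess is 1243, digits 1 and 2 match in position while 4 and 3 are present but swapped:

Input:
1
1234 1243

Output:
2A2B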
Solution
import java.util.*;

public class Main {

    public static Scanner in;

    public static void main(String[] args) {
        in = new Scanner(System.in);
        doStuff();
    }

    public static void doStuff() {
        // Number of test cases
        int N = in.nextInt();
        for (int i = 0; i < N; i++) {
            solve();
        }
    }

    public static void solve() {
        String num1 = in.next();   // the answer
        String num2 = in.next();   // the guess
        compareDigits(num1, num2);
    }

    public static void compareDigits(String num1, String num2) {
        int counterA = 0;
        int counterB = 0;
        for (int i = 0; i < 4; i++) {
            char ch2 = num2.charAt(i);
            for (int j = 0; j < 4; j++) {
                char ch1 = num1.charAt(j);
                if (ch2 == ch1) {
                    if (i == j) counterA++;   // right digit, right place
                    else counterB++;          // right digit, wrong place
                    break;
                }
            }
        }
        System.out.println(counterA + "A" + counterB + "B");
    }
}
Additional Remarks
Memory: 2136K
Time: 188MS
This solution may not be as efficient as it can be.
Working with MapKit Local Search in iOS 10
This chapter will explore the use of the iOS MapKit MKLocalSearchRequest class to search for map locations within an iOS 10 application. The example application created in the chapter entitled Working with Maps on iOS 10 with MapKit and the MKMapView Class will then be extended to demonstrate local search in action.
An Overview of iOS 10 Local Search
Local search is implemented using the MKLocalSearch class. The purpose of this class is to allow users to search for map locations using natural language strings. Once the search has completed, the class returns a list of locations within a specified region that match the search string. A search for “Pizza”, for example, will return a list of locations for any pizza restaurants within a specified area. Search requests are encapsulated in instances of the MKLocalSearchRequest class and results are returned within an MKLocalSearchResponse object which, in turn, contains an MKMapItem object for each matching location (up to a total of 10 matches).
Local searches are performed asynchronously and a completion handler called when the search is complete. It is also important to note that the search is performed remotely on Apple’s servers as opposed to locally on the device. Local search is, therefore, only available when the device has an active internet connection and is able to communicate with the search server.
The following code fragment, for example, searches for pizza locations within the currently displayed region of an MKMapView instance named mapView. Having performed the search, the code iterates through the results and outputs the name and phone number of each matching location to the console:
let request = MKLocalSearchRequest()
request.naturalLanguageQuery = "Pizza"
request.region = mapView.region

let search = MKLocalSearch(request: request)

search.start(completionHandler: {(response, error) in

    if error != nil {
        print("Error occured in search: \(error!.localizedDescription)")
    } else if response!.mapItems.count == 0 {
        print("No matches found")
    } else {
        print("Matches found")

        for item in response!.mapItems {
            print("Name = \(item.name)")
            print("Phone = \(item.phoneNumber)")
        }
    }
})

The above code begins by creating an MKLocalSearchRequest request instance initialized with the search string (in this case "Pizza"). The region of the request is then set to the currently displayed region of the map view instance.
let request = MKLocalSearchRequest() request.naturalLanguageQuery = "Pizza" request.region = mapView.region
An MKLocalSearch instance is then created and initialized with a reference to the search request instance, and the search is then initiated via a call to the object's start(completionHandler:) method.
search.start(completionHandler: {(response, error) in
The code in the completion handler checks the response to make sure that matches were found and then accesses the mapItems property of the response which contains an array of mapItem instances for the matching locations. The name and phoneNumber properties of each mapItem instance are then displayed in the console:
if error != nil { print("Error occured in search: \(error!.localizedDescription)") } else if response!.mapItems.count == 0 { print("No matches found") } else { print("Matches found") for item in response!.mapItems { print("Name = \(item.name)") print("Phone = \(item.phoneNumber)") } } })
Adding Local Search to the MapSample Application
In the remainder of this chapter, the MapSample application will be extended so that the user can perform a local search. The first step in this process involves adding a text field to the first storyboard scene. Begin by launching Xcode and opening the MapSample project created in the previous chapter.
Adding the Local Search Text Field

With the project loaded into Xcode, select the Main.storyboard file and modify the user interface to add a Text Field object to the user interface layout (reducing the height of the map view object accordingly to make room for the new field). With the new Text Field selected, display the Attributes Inspector and enter Local Search into the Placeholder property field. When completed, the layout should resemble that of Figure 80-1:
Figure 80-1
Select the Map Sample view controller by clicking on the toolbar at the top of the scene so that the scene is highlighted in blue. Select the Resolve Auto Layout Issues menu from the toolbar in the lower right-hand corner of the storyboard canvas and select the Reset to Suggested Constraints menu option located beneath All Views in View Controller.
When the user touches the text field, the keyboard will appear. By default this will display a “Return” key. For the purposes of this application, however, a “Search” key would be more appropriate. To make this modification, select the new Text Field object, display the Attributes Inspector and change the Return Key setting from Default to Search.
Next, display the Assistant Editor panel and make sure that it is displaying the content of the ViewController.swift file. Ctrl-click on the Text Field object and drag the resulting line to the Assistant Editor panel and establish an outlet named searchText.
Repeat the above step, this time setting up an Action for the Text Field to call a method named textFieldReturn for the Did End on Exit event.
The textFieldReturn method will be required to perform three tasks when triggered. In the first instance, it will be required to hide the keyboard from view. When matches are found for the search results, an annotation will be added to the map for each location. The second task to be performed by this method is to remove any annotations created as a result of a previous search. Finally, the textFieldReturn method will initiate the search using the string entered into the text field by the user. Select the ViewController.swift file, locate the template textFieldReturn method and implement it so that it reads as follows:
@IBAction func textFieldReturn(_ sender: AnyObject) {
    _ = sender.resignFirstResponder()
    mapView.removeAnnotations(mapView.annotations)
    self.performSearch()
}
Performing the Local Search
The next task is to write the code to perform the search. When the user touches the keyboard Search key, the above textFieldReturn method is called which, in turn, has been written such that it makes a call to a method named performSearch. Remaining within the ViewController.swift file, this method may now be implemented as follows:
func performSearch() {

    matchingItems.removeAll()
    let request = MKLocalSearchRequest()
    request.naturalLanguageQuery = searchText.text
    request.region = mapView.region

    let search = MKLocalSearch(request: request)

    search.start(completionHandler: {(response, error) in

        if error != nil {
            print("Error occured in search: \(error!.localizedDescription)")
        } else if response!.mapItems.count == 0 {
            print("No matches found")
        } else {
            print("Matches found")

            for item in response!.mapItems {
                print("Name = \(item.name)")
                self.matchingItems.append(item as MKMapItem)
                print("Matching items = \(self.matchingItems.count)")

                let annotation = MKPointAnnotation()
                annotation.coordinate = item.placemark.coordinate
                annotation.title = item.name
                self.mapView.addAnnotation(annotation)
            }
        }
    })
}

Next, edit the ViewController.swift file to add the declaration for the matchingItems array referenced in the above method. This array is used to store the current search matches and will be used later in the tutorial:
import UIKit
import MapKit

class ViewController: UIViewController, MKMapViewDelegate {

    @IBOutlet weak var mapView: MKMapView!
    @IBOutlet weak var searchText: UITextField!

    var matchingItems: [MKMapItem] = [MKMapItem]()
.
.
The code in the performSearch method is largely the same as that outlined earlier in the chapter, the major difference being the addition of code to add an annotation to the map for each matching location:
let annotation = MKPointAnnotation()
annotation.coordinate = item.placemark.coordinate
annotation.title = item.name
self.mapView.addAnnotation(annotation)
Annotations are represented by instances of the MKPointAnnotation class and are, by default, represented by red pin markers on the map view (though custom icons may be specified). The coordinates of each match are obtained by accessing the placemark instance within each item. The title of the annotation is also set in the above code using the item’s name property.
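If you did want to replace the default pins with your own appearance, a minimal sketch of the relevant MKMapViewDelegate method might look like the following. This is not part of the MapSample steps in this chapter, and the identifier string and colour used here are arbitrary choices:

func mapView(_ mapView: MKMapView, viewFor annotation: MKAnnotation) -> MKAnnotationView? {
    let identifier = "matchPin"
    var view = mapView.dequeueReusableAnnotationView(withIdentifier: identifier)
        as? MKPinAnnotationView

    if view == nil {
        // Create a new pin view the first time, reuse it afterwards
        view = MKPinAnnotationView(annotation: annotation, reuseIdentifier: identifier)
        view?.canShowCallout = true
        view?.pinTintColor = UIColor.purple
    } else {
        view?.annotation = annotation
    }
    return view
}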
Testing the Application
Compile and run the application on an iOS device and, once running, select the zoom button before entering the name of a type of business into the local search field such as “pizza”, “library” or “coffee”. Touch the keyboard “Search” button and, assuming such businesses exist within the currently displayed map region, annotation markers will appear for each matching location. Tapping a location marker will display the name of that location (Figure 80-2):
Figure 80-2
Local searches are not limited to business locations. They can also be used, for example, as an alternative to geocoding for finding local addresses.
Summary
The iOS MapKit Local Search feature allows map searches to be performed using free-form natural language strings. Once initiated, a local search will return a response containing map item objects for up to 10 matching locations within a specified map region.
In this chapter the MapSample application was extended to allow the user to perform local searches and to use annotations to mark matching locations on the map view.
In the next chapter, the example will be further extended to cover the use of the Map Kit directions API, both to generate turn-by-turn directions and to draw the corresponding route on a map view.
Mandrill is a scalable and affordable email infrastructure service, with all the marketing-friendly analytics tools you’ve come to expect from MailChimp.
See how we make SMTP fast – and why you should care
Get started with SMTP in less than a minute, or use Mandrill’s API for deeper integration.
<?php
include_once "swift_required.php";

// Configure the SMTP transport with your Mandrill credentials
$transport = Swift_SmtpTransport::newInstance('smtp.mandrillapp.com', 587);
$transport->setUsername($MANDRILL_USERNAME);
$transport->setPassword($MANDRILL_PASSWORD);

// Create the mailer using that transport
$swift = Swift_Mailer::newInstance($transport);
?>
require 'mail'

Mail.defaults do
  delivery_method :smtp, {
    :port      => 587,
    :address   => "smtp.mandrillapp.com",
    :user_name => MANDRILL_USERNAME,
    :password  => MANDRILL_PASSWORD
  }
end
import os
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Log in to Mandrill's SMTP server with your credentials
username = MANDRILL_USERNAME
password = MANDRILL_PASSWORD

s = smtplib.SMTP('smtp.mandrillapp.com', 587)
s.login(username, password)
Your first 12,000 emails per month are always free, and Mandrill's simple pricing structure makes it easy to estimate your monthly bill. No contracts, and no hidden fees.
Enter your estimated sends per month:

- Free: up to 12k emails per month
- $0.20/thousand: next 1m emails per month
- $0.15/thousand: next 5m emails per month
- $0.10/thousand: remaining emails
Add a dedicated IP for $29.95 / month.
Send a template to Mandrill from your MailChimp account, code your own and import it with our API, or use Mandrill’s template editor. Whether you’re in marketing or development, Mandrill gives you options.
Use webhooks to integrate real-time analytics into your account, and keep track of trends like open rates, clicks, and unsubscribes.
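As a rough illustration of what consuming those webhooks can look like, here is a small Python sketch. The endpoint path is made up, and the mandrill_events form field name is an assumption to verify against Mandrill's webhook documentation:

import json
from flask import Flask, request

app = Flask(__name__)

@app.route("/mandrill-webhook", methods=["POST"])
def mandrill_webhook():
    # Assumption: the events arrive as a JSON list in a form field
    events = json.loads(request.form.get("mandrill_events", "[]"))
    for event in events:
        # Track opens, clicks, bounces, unsubscribes, and so on
        print(event.get("event"), event.get("msg", {}).get("email"))
    return "", 200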
Mandrill offers real-time notifications about your reputation, bounce rate changes, and more. Use the mobile app to monitor delivery and make adjustments.
Code refactoring with Visual Studio 2010 Part-1
Visual Studio 2010 is a great IDE (Integrated Development Environment), and we all use it day to day for our coding. It provides many great features, and today I am going to show one of them: code refactoring. This feature is one of the most underappreciated features of Visual Studio 2010, as many people are still not using it and are doing the work manually. To demonstrate the feature, let's create a simple console application that prints a first name and a last name, like the following.
And following is code for that.
using System;

namespace CodeRefractoring
{
    class Program
    {
        static void Main(string[] args)
        {
            string firstName = "Jalpesh";
            string lastName = "Vadgama";
            Console.WriteLine(string.Format("FirstName:{0}", firstName));
            Console.WriteLine(string.Format("LastName:{0}", lastName));
            Console.ReadLine();
        }
    }
}
So as you can see this is a very basic console application and let’s run it to see output.
Now let's explore our first feature, called Extract Method. In Visual Studio you can do this via the Refactor menu: select the code you want to extract into a method, click the Refactor menu, and then click Extract Method. Here I am selecting three lines of code and clicking Refactor -> Extract Method, just like the following.
As you can see, I have highlighted two things. The first is Method Name, where I entered Print as the method name, and the other is Preview method signature, where it is smart enough to extract the parameters as well, since we selected three lines containing Console.WriteLine. Once you click OK, it will extract the method and your code will look like this.
using System;

namespace CodeRefractoring
{
    class Program
    {
        static void Main(string[] args)
        {
            string firstName = "Jalpesh";
            string lastName = "Vadgama";
            Print(firstName, lastName);
        }

        private static void Print(string firstName, string lastName)
        {
            Console.WriteLine(string.Format("FirstName:{0}", firstName));
            Console.WriteLine(string.Format("LastName:{0}", lastName));
            Console.ReadLine();
        }
    }
}
As you can see in the code above, it has created a static method called Print and passed firstName and lastName as parameters. Isn't that great? It created the Print method as static because it is called from the static Main method. Hope you liked it. Stay tuned for more. Till then, happy programming.
Published at DZone with permission of Jalpesh Vadgama, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
Have you heard of asynchronous programming in Python? Are you curious to know more about Python async features and how you can use them in your work? Perhaps you’ve even tried to write threaded programs and run into some issues. If you’re looking to understand how to use Python async features, then you’ve come to the right place.
In this article, you’ll learn:
- What a synchronous program is
- What an asynchronous program is
- Why you might want to write an asynchronous program
- How to use Python async features
All of the example code in this article has been tested with Python 3.7.2. You can grab a copy to follow along by clicking the link below:
Download Code: Click here to download the code you’ll use to learn about async features in Python in this tutorial.
Understanding Asynchronous Programming
A synchronous program is executed one step at a time. Even with conditional branching, loops and function calls, you can still think about the code in terms of taking one execution step at a time. When each step is complete, the program moves on to the next one.
Here's an example of a program that works this way:

Batch processing programs are often created as synchronous programs. You get some input, process it, and create some output. Steps follow one after the other until the program reaches the desired output. The program only needs to pay attention to the steps and their order.
An asynchronous program behaves differently. It still takes one execution step at a time. The difference is that the system may not wait for an execution step to be completed before moving on to the next one.
This means that the program will move on to future execution steps even though a previous step hasn’t yet finished and is still running elsewhere. This also means that the program knows what to do when a previous step does finish running.
Why would you want to write a program in this manner? The rest of this article will help you answer that question and give you the tools you need to elegantly solve interesting asynchronous problems.
Building a Synchronous Web Server
A web server’s basic unit of work is, more or less, the same as batch processing. The server will get some input, process it, and create the output. Written as a synchronous program, this would create a working web server.
It would also be an absolutely terrible web server.
Why? In this case, one unit of work (input, process, output) is not the only purpose. The real purpose is to handle hundreds or even thousands of units of work as quickly as possible. This can happen over long periods of time, and several work units may even arrive all at once.
Can a synchronous web server be made better? Sure, you could optimize the execution steps so that all the work coming in is handled as quickly as possible. Unfortunately, there are limitations to this approach. The result could be a web server that doesn’t respond fast enough, can’t handle enough work, or even one that times out when work gets stacked up.
Note: There are other limitations you might see if you tried to optimize the above approach. These include network speed, file IO speed, database query speed, and the speed of other connected services, to name a few. What these all have in common is that they are all IO functions. All of these items are orders of magnitude slower than the CPU’s processing speed.
In a synchronous program, if an execution step starts a database query, then the CPU is essentially idle until the database query is returned. For batch-oriented programs, this isn’t a priority most of the time. Processing the results of that IO operation is the goal. Often, this can take longer than the IO operation itself. Any optimization efforts would be focused on the processing work, not the IO.
Asynchronous programming techniques allow your programs to take advantage of relatively slow IO processes by freeing the CPU to do other work.
Thinking Differently About Programming
When you start trying to understand asynchronous programming, you might see a lot of discussion about the importance of blocking, or writing non-blocking code. (Personally, I struggled to get a good grasp of these concepts from the people I asked and the documentation I read.)
What is non-blocking code? What’s blocking code, for that matter? Would the answers to these questions help you write a better web server? If so, how could you do it? Let’s find out!
Writing asynchronous programs requires that you think differently about programming. While this new way of thinking can be hard to wrap your head around, it’s also an interesting exercise. That’s because the real world is almost entirely asynchronous, and so is how you interact with it.
Imagine this: you’re a parent trying to do several things at once. You have to balance the checkbook, do the laundry, and keep an eye on the kids. Somehow, you’re able to do all of these things at the same time without even thinking about it! Let’s break it down:
Balancing the checkbook is a synchronous task. One step follows another until it’s done. You’re doing all the work yourself.
However, you can break away from the checkbook to do laundry. You unload the dryer, move clothes from the washer to the dryer, and start another load in the washer.
Working with the washer and dryer is a synchronous task, but the bulk of the work happens after the washer and dryer are started. Once you’ve got them going, you can walk away and get back to the checkbook task. At this point, the washer and dryer tasks have become asynchronous. The washer and dryer will run independently until the buzzer goes off (notifying you that the task needs attention).
Watching your kids is another asynchronous task. Once they are set up and playing, they can do so independently for the most part. This changes when someone needs attention, like when someone gets hungry or hurt. When one of your kids yells in alarm, you react. The kids are a long-running task with high priority. Watching them supersedes any other tasks you might be doing, like the checkbook or laundry.
These examples can help to illustrate the concepts of blocking and non-blocking code. Let’s think about this in programming terms. In this example, you’re like the CPU. While you’re moving the laundry around, you (the CPU) are busy and blocked from doing other work, like balancing the checkbook. But that’s okay because the task is relatively quick.
On the other hand, starting the washer and dryer does not block you from performing other tasks. It’s an asynchronous function because you don’t have to wait for it to finish. Once it’s started, you can go back to something else. This is called a context switch: the context of what you’re doing has changed, and the machine’s buzzer will notify you sometime in the future when the laundry task is complete.
As a human, this is how you work all the time. You naturally juggle multiple things at once, often without thinking about it. As a developer, the trick is how to translate this kind of behavior into code that does the same kind of thing.
Programming Parents: Not as Easy as It Looks!
If you recognize yourself (or your parents) in the example above, then that’s great! You’ve got a leg up in understanding asynchronous programming. Again, you’re able to switch contexts between competing tasks fairly easily, picking up some tasks and resuming others. Now you’re going to try and program this behavior into virtual parents!
Thought Experiment #1: The Synchronous Parent
How would you create a parent program to do the above tasks in a completely synchronous manner? Since watching the kids is a high-priority task, perhaps your program would do just that. The parent watches over the kids while waiting for something to happen that might need their attention. However, nothing else (like the checkbook or laundry) would get done in this scenario.
Now, you can re-prioritize the tasks any way you want, but only one of them would happen at any given time. This is the result of a synchronous, step-by-step approach. Like the synchronous web server described above, this would work, but it might not be the best way to live. The parent wouldn’t be able to complete any other tasks until the kids fell asleep. All other tasks would happen afterward, well into the night. (A couple of weeks of this and many real parents might jump out the window!)
Thought Experiment #2: The Polling Parent
If you used polling, then you could change things up so that multiple tasks are completed. In this approach, the parent would periodically break away from the current task and check to see if any other tasks need attention.
Let’s make the polling interval something like fifteen minutes. Now, every fifteen minutes your parent checks to see if the washer, dryer or kids need any attention. If not, then the parent can go back to work on the checkbook. However, if any of those tasks do need attention, then the parent will take care of it before going back to the checkbook. This cycle continues on until the next timeout out of the polling loop.
This approach works as well since multiple tasks are getting attention. However, there are a couple of problems:
The parent may spend a lot of time checking on things that don’t need attention: The washer and dryer haven’t yet finished, and the kids don’t need any attention unless something unexpected happens.
The parent may miss completed tasks that do need attention: For instance, if the washer finished its cycle at the beginning of the polling interval, then it wouldn’t get any attention for up to fifteen minutes! What’s more, watching the kids is supposedly the highest priority task. They couldn’t tolerate fifteen minutes with no attention when something might be going drastically wrong.
You could address these issues by shortening the polling interval, but now your parent (the CPU) would be spending more time context switching between tasks. This is when you start to hit a point of diminishing returns. (Once again, a couple of weeks living like this and, well… See the previous comment about windows and jumping.)
Thought Experiment #3: The Threading Parent
“If I could only clone myself…” If you’re a parent, then you’ve probably had similar thoughts! Since you’re programming virtual parents, you can essentially do this by using threading. This is a mechanism that allows multiple sections of one program to run at the same time. Each section of code that runs independently is known as a thread, and all threads share the same memory space.
If you think of each task as a part of one program, then you can separate them and run them as threads. In other words, you can “clone” the parent, creating one instance for each task: watching the kids, monitoring the washer, monitoring the dryer, and balancing the checkbook. All of these “clones” are running independently.
This sounds like a pretty nice solution, but there are some issues here as well. One is that you’ll have to explicitly tell each parent instance what to do in your program. This can lead to some problems since all instances share everything in the program space.
For example, say that Parent A is monitoring the dryer. Parent A sees that the clothes are dry, so they take control of the dryer and begin unloading the clothes. At the same time, Parent B sees that the washer is done, so they take control of the washer and begin removing clothes. However, Parent B also needs to take control of the dryer so they can put the wet clothes inside. This can’t happen, because Parent A currently has control of the dryer.
After a short while, Parent A has finished unloading clothes. Now they want to take control of the washer and start moving clothes into the empty dryer. This can’t happen, either, because Parent B currently has control of the washer!
These two parents are now deadlocked. Both have control of their own resource and want control of the other resource. They’ll wait forever for the other parent instance to release control. As the programmer, you’d have to write code to work this situation out.
Note: Threaded programs allow you to create multiple, parallel paths of execution that all share the same memory space. This is both an advantage and a disadvantage. Any memory shared between threads is subject to one or more threads trying to use the same shared memory at the same time. This can lead to data corruption, data read in an invalid state, and data that’s just messy in general.
In threaded programming, the context switch happens under system control, not the programmer. The system controls when to switch contexts and when to give threads access to shared data, thereby changing the context of how the memory is being used. All of these kinds of problems are manageable in threaded code, but it’s difficult to get right, and hard to debug when it’s wrong.
Here’s another issue that might arise from threading. Suppose that a child gets hurt and needs to be taken to urgent care. Parent C has been assigned the task of watching over the kids, so they take the child right away. At the urgent care, Parent C needs to write a fairly large check to cover the cost of seeing the doctor.
Meanwhile, Parent D is at home working on the checkbook. They’re unaware of this large check being written, so they’re very surprised when the family checking account is suddenly overdrawn!
Remember, these two parent instances are working within the same program. The family checking account is a shared resource, so you’d have to work out a way for the child-watching parent to inform the checkbook-balancing parent. Otherwise, you’d need to provide some kind of locking mechanism so that the checkbook resource can only be used by one parent at a time, with updates.
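To make the locking idea concrete, here is a minimal sketch (not part of this article's example files) of protecting a shared checking-account balance with a lock so that only one thread updates it at a time:

import threading

account_balance = 1000
account_lock = threading.Lock()

def spend(amount):
    global account_balance
    # Only one thread at a time may read and update the balance
    with account_lock:
        account_balance -= amount

parents = [threading.Thread(target=spend, args=(100,)) for _ in range(5)]
for t in parents:
    t.start()
for t in parents:
    t.join()

print(account_balance)  # 500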
Using Python Async Features in Practice
Now you’re going to take some of the approaches outlined in the thought experiments above and turn them into functioning Python programs.
All of the examples in this article have been tested with Python 3.7.2. The requirements.txt file indicates which modules you'll need to install to run all the examples. If you haven't yet downloaded the file, you can do so now:
Download Code: Click here to download the code you’ll use to learn about async features in Python in this tutorial.
You also might want to set up a Python virtual environment to run the code so you don’t interfere with your system Python.
Synchronous Programming
This first example shows a somewhat contrived way of having a task retrieve work from a queue and process that work. A queue in Python is a nice FIFO (first in first out) data structure. It provides methods to put things in a queue and take them out again in the order they were inserted.
In this case, the work is to get a number from the queue and have a loop count up to that number. It prints to the console when the loop begins, and again to output the total. This program demonstrates one way for multiple synchronous tasks to process the work in a queue.
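Before moving on, here is a quick standalone illustration of the FIFO behavior described above (separate from the numbered example files):

import queue

q = queue.Queue()
for item in ["first", "second", "third"]:
    q.put(item)

# Items come back out in the order they were inserted
while not q.empty():
    print(q.get())  # first, then second, then third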
The program named example_1.py in the repository is listed in full below:
 1 import queue
 2
 3 def task(name, work_queue):
 4     if work_queue.empty():
 5         print(f"Task {name} nothing to do")
 6     else:
 7         while not work_queue.empty():
 8             count = work_queue.get()
 9             total = 0
10             print(f"Task {name} running")
11             for x in range(count):
12                 total += 1
13             print(f"Task {name} total: {total}")
14
15 def main():
16     """
17     This is the main entry point for the program
18     """
19     # Create the queue of work
20     work_queue = queue.Queue()
21
22     # Put some work in the queue
23     for work in [15, 10, 5, 2]:
24         work_queue.put(work)
25
26     # Create some synchronous tasks
27     tasks = [(task, "One", work_queue), (task, "Two", work_queue)]
28
29     # Run the tasks
30     for t, n, q in tasks:
31         t(n, q)
32
33 if __name__ == "__main__":
34     main()
Let’s take a look at what each line does:
- Line 1 imports the queue module. This is where the program stores work to be done by the tasks.
- Lines 3 to 13 define task(). This function pulls work out of work_queue and processes the work until there isn't any more to do.
- Line 15 defines main() to run the program tasks.
- Line 20 creates the work_queue. All tasks use this shared resource to retrieve work.
- Lines 23 to 24 put work in work_queue. In this case, it's just a random count of values for the tasks to process.
- Line 27 creates a list of task tuples, with the parameter values those tasks will be passed.
- Lines 30 to 31 iterate over the list of task tuples, calling each one and passing the previously defined parameter values.
- Line 34 calls main() to run the program.
The task in this program is just a function accepting a string and a queue as parameters. When executed, it looks for anything in the queue to process. If there is work to do, then it pulls values off the queue, starts a for loop to count up to that value, and outputs the total at the end. It continues getting work off the queue until there is nothing left and it exits.
When this program is run, it produces the output you see below:
Task One running
Task One total: 15
Task One running
Task One total: 10
Task One running
Task One total: 5
Task One running
Task One total: 2
Task Two nothing to do
This shows that Task One does all the work. The while loop that Task One hits within task() consumes all the work on the queue and processes it. When that loop exits, Task Two gets a chance to run. However, it finds that the queue is empty, so Task Two prints a statement that says it has nothing to do and then exits. There's nothing in the code to allow both Task One and Task Two to switch contexts and work together.
Simple Cooperative Concurrency
The next version of the program allows the two tasks to work together. Adding a yield statement means the loop will yield control at the specified point while still maintaining its context. This way, the yielding task can be restarted later.

The yield statement turns task() into a generator. A generator function is called just like any other function in Python, but when the yield statement is executed, control is returned to the caller of the function. This is essentially a context switch, as control moves from the generator function to the caller.

The interesting part is that control can be given back to the generator function by calling next() on the generator. This is a context switch back to the generator function, which picks up execution with all function variables that were defined before the yield still intact.

The while loop in main() takes advantage of this when it calls next(t). This statement restarts the task at the point where it previously yielded. All of this means that you're in control when the context switch happens: when the yield statement is executed in task().

This is a form of cooperative multitasking. The program is yielding control of its current context so that something else can run. In this case, it allows the while loop in main() to run two instances of task() as a generator function. Each instance consumes work from the same queue. This is sort of clever, but it's also a lot of work to get the same results as the first program. The program example_2.py demonstrates this simple concurrency and is listed below:
 1 import queue
 2
 3 def task(name, queue):
 4     while not queue.empty():
 5         count = queue.get()
 6         total = 0
 7         print(f"Task {name} running")
 8         for x in range(count):
 9             total += 1
10             yield
11         print(f"Task {name} total: {total}")
12
13 def main():
14     """
15     This is the main entry point for the program
16     """
17     # Create the queue of work
18     work_queue = queue.Queue()
19
20     # Put some work in the queue
21     for work in [15, 10, 5, 2]:
22         work_queue.put(work)
23
24     # Create some tasks
25     tasks = [task("One", work_queue), task("Two", work_queue)]
26
27     # Run the tasks
28     done = False
29     while not done:
30         for t in tasks:
31             try:
32                 next(t)
33             except StopIteration:
34                 tasks.remove(t)
35             if len(tasks) == 0:
36                 done = True
37
38 if __name__ == "__main__":
39     main()
Here’s what’s happening in the code above:
- Lines 3 to 11 define task() as before, but the addition of yield on Line 10 turns the function into a generator. This is where the context switch is made and control is handed back to the while loop in main().
- Line 25 creates the task list, but in a slightly different manner than you saw in the previous example code. In this case, each task is called with its parameters as it's entered in the tasks list variable. This is necessary to get the task() generator function running the first time.
- Lines 31 to 36 are the modifications to the while loop in main() that allow task() to run cooperatively. This is where control returns to each instance of task() when it yields, allowing the loop to continue and run another task.
- Line 32 gives control back to task(), and continues its execution after the point where yield was called.
- Line 36 sets the done variable. The while loop ends when all tasks have been completed and removed from tasks.
This is the output produced when you run this program:
Task One running
Task Two running
Task Two total: 10
Task Two running
Task One total: 15
Task One running
Task Two total: 5
Task One total: 2
You can see that both Task One and Task Two are running and consuming work from the queue. This is what's intended, as both tasks are processing work, and each is responsible for two items in the queue. This is interesting, but again, it takes quite a bit of work to achieve these results.

The trick here is using the yield statement, which turns task() into a generator and performs a context switch. The program uses this context switch to give control to the while loop in main(), allowing two instances of a task to run cooperatively.

Notice how Task Two outputs its total first. This might lead you to think that the tasks are running asynchronously. However, this is still a synchronous program. It's structured so the two tasks can trade contexts back and forth. The reason why Task Two outputs its total first is that it's only counting to 10, while Task One is counting to 15. Task Two simply arrives at its total first, so it gets to print its output to the console before Task One.
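If the generator mechanics are new to you, this tiny standalone snippet (not part of the numbered example files) shows the same yield and next() dance in isolation:

def counter(name, count):
    total = 0
    for _ in range(count):
        total += 1
        yield  # hand control back to the caller, keeping total and the loop intact
    print(f"{name} total: {total}")

gen = counter("Demo", 3)
next(gen)  # runs until the first yield
next(gen)  # resumes right after the yield
next(gen)
try:
    next(gen)  # the loop is exhausted, the total prints, and StopIteration is raised
except StopIteration:
    pass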
Note: All of the example code that follows from this point uses a module called codetiming to time and output how long sections of code took to execute. There is a great article here on RealPython that goes into depth about the codetiming module and how to use it.
This module is part of the Python Package Index and is built by Geir Arne Hjelle, who is part of the Real Python team. Geir Arne has been a great help to me reviewing and suggesting things for this article. If you are writing code that needs to include timing functionality, Geir Arne’s codetiming module is well worth looking at.
To make the codetiming module available for the examples that follow, you'll need to install it. This can be done with pip with this command: pip install codetiming, or with this command: pip install -r requirements.txt. The requirements.txt file is part of the example code repository.
Cooperative Concurrency With Blocking Calls
The next version of the program is the same as the last, except for the addition of a time.sleep(delay) in the body of your task loop. This adds a delay based on the value retrieved from the work queue to every iteration of the task loop. The delay simulates the effect of a blocking call occurring in your task.

A blocking call is code that stops the CPU from doing anything else for some period of time. In the thought experiments above, if a parent wasn't able to break away from balancing the checkbook until it was complete, that would be a blocking call. time.sleep(delay) does the same thing in this example, because the CPU can't do anything else but wait for the delay to expire.
 1 import time
 2 import queue
 3 from codetiming import Timer
 4
 5 def task(name, queue):
 6     timer = Timer(text=f"Task {name} elapsed time: {{:.1f}}")
 7     while not queue.empty():
 8         delay = queue.get()
 9         print(f"Task {name} running")
10         timer.start()
11         time.sleep(delay)
12         timer.stop()
13         yield
14
15 def main():
16     """
17     This is the main entry point for the program
18     """
19     # Create the queue of work
20     work_queue = queue.Queue()
21
22     # Put some work in the queue
23     for work in [15, 10, 5, 2]:
24         work_queue.put(work)
25
26     tasks = [task("One", work_queue), task("Two", work_queue)]
27
28     # Run the tasks
29     done = False
30     with Timer(text="\nTotal elapsed time: {:.1f}"):
31         while not done:
32             for t in tasks:
33                 try:
34                     next(t)
35                 except StopIteration:
36                     tasks.remove(t)
37                 if len(tasks) == 0:
38                     done = True
39
40 if __name__ == "__main__":
41     main()
Here’s what’s different in the code above:
- Line 1 imports the time module to give the program access to time.sleep().
- Line 3 imports the Timer code from the codetiming module.
- Line 6 creates the Timer instance used to measure the time taken for each iteration of the task loop.
- Line 10 starts the timer instance.
- Line 11 changes task() to include a time.sleep(delay) to mimic an IO delay. This replaces the for loop that did the counting in example_1.py.
- Line 12 stops the timer instance and outputs the elapsed time since timer.start() was called.
- Line 30 creates a Timer context manager that will output the elapsed time the entire while loop took to execute.
When you run this program, you’ll see the following output:
Task One running
Task One elapsed time: 15.0
Task Two running
Task Two elapsed time: 10.0
Task One running
Task One elapsed time: 5.0
Task Two running
Task Two elapsed time: 2.0

Total elapsed time: 32.0
As before, both Task One and Task Two are running, consuming work from the queue and processing it. However, even with the addition of the delay, you can see that cooperative concurrency hasn't gotten you anything. The delay stops the processing of the entire program, and the CPU just waits for the IO delay to be over.

This is exactly what's meant by blocking code in Python async documentation. You'll notice that the time it takes to run the entire program is just the cumulative time of all the delays. Running tasks this way is not a win.
Cooperative Concurrency With Non-Blocking Calls
The next version of the program has been modified quite a bit. It makes use of Python async features using asyncio/await provided in Python 3.
The time and queue modules have been replaced with the asyncio package. This gives your program access to asynchronous friendly (non-blocking) sleep and queue functionality. The change to task() defines it as asynchronous with the addition of the async prefix on line 4. This indicates to Python that the function will be asynchronous.

The other big change is removing the time.sleep(delay) and yield statements, and replacing them with await asyncio.sleep(delay). This creates a non-blocking delay that will perform a context switch back to the caller, main().

The while loop inside main() no longer exists. Instead of task_array, there's a call to await asyncio.gather(...). This tells asyncio two things:

- Create two tasks based on task() and start running them.
- Wait for both of these to be completed before moving forward.

The last line of the program, asyncio.run(main()), runs main(). This creates what's known as an event loop. It's this loop that will run main(), which in turn will run the two instances of task().

The event loop is at the heart of the Python async system. It runs all the code, including main(). When task code is executing, the CPU is busy doing work. When the await keyword is reached, a context switch occurs, and control passes back to the event loop. The event loop looks at all the tasks waiting for an event (in this case, an asyncio.sleep(delay) timeout) and passes control to a task with an event that's ready.

await asyncio.sleep(delay) is non-blocking in regards to the CPU. Instead of waiting for the delay to timeout, the CPU registers a sleep event on the event loop task queue and performs a context switch by passing control to the event loop. The event loop continuously looks for completed events and passes control back to the task waiting for that event. In this way, the CPU can stay busy if work is available, while the event loop monitors the events that will happen in the future.

Note: An asynchronous program runs in a single thread of execution. The context switch from one section of code to another that would affect data is completely in your control. This means you can atomize and complete all shared memory data access before making a context switch. This simplifies the shared memory problem inherent in threaded code.

The example_4.py code is listed below:
 1 import asyncio
 2 from codetiming import Timer
 3
 4 async def task(name, work_queue):
 5     timer = Timer(text=f"Task {name} elapsed time: {{:.1f}}")
 6     while not work_queue.empty():
 7         delay = await work_queue.get()
 8         print(f"Task {name} running")
 9         timer.start()
10         await asyncio.sleep(delay)
11         timer.stop()
12
13 async def main():
14     """
15     This is the main entry point for the program
16     """
17     # Create the queue of work
18     work_queue = asyncio.Queue()
19
20     # Put some work in the queue
21     for work in [15, 10, 5, 2]:
22         await work_queue.put(work)
23
24     # Run the tasks
25     with Timer(text="\nTotal elapsed time: {:.1f}"):
26         await asyncio.gather(
27             asyncio.create_task(task("One", work_queue)),
28             asyncio.create_task(task("Two", work_queue)),
29         )
30
31 if __name__ == "__main__":
32     asyncio.run(main())
Here’s what’s different between this program and
example_3.py:
- Line 1 imports
asyncioto gain access to Python async functionality. This replaces the
timeimport.
- Line 2 imports the the
Timercode from the
codetimingmodule.
- Line 4 shows the addition of the
asynckeyword in front of the
task()definition. This informs the program that
taskcan run asynchronously.
- Line 5 creates the
Timerinstance used to measure the time taken for each iteration of the task loop.
- Line 9 starts the
timerinstance
- Line 10 replaces
time.sleep(delay)with the non-blocking
asyncio.sleep(delay), which also yields control (or switches contexts) back to the main event loop.
- Line 11 stops the
timerinstance and outputs the elapsed time since
timer.start()was called.
- Line 18 creates the non-blocking asynchronous
work_queue.
- Lines 21 to 22 put work into
work_queuein an asynchronous manner using the
awaitkeyword.
- Line 25 creates a
Timercontext manager that will output the elapsed time the entire while loop took to execute.
- Lines 26 to 29 create the two tasks and gather them together, so the program will wait for both tasks to complete.
- Line 32 starts the program running asynchronously. It also starts the internal event loop.
When you look at the output of this program, notice how both Task One and Task Two start at the same time, then wait at the mock IO call:
Task One running
Task Two running
Task Two total elapsed time: 10.0
Task Two running
Task One total elapsed time: 15.0
Task One running
Task Two total elapsed time: 5.0
Task One total elapsed time: 2.0

Total elapsed time: 17.0
This indicates that await asyncio.sleep(delay) is non-blocking, and that other work is being done.

At the end of the program, you'll notice the total elapsed time is essentially half the time it took for example_3.py to run. That's the advantage of a program that uses Python async features! Each task was able to run await asyncio.sleep(delay) at the same time. The total execution time of the program is now less than the sum of its parts. You've broken away from the synchronous model!
Synchronous (Blocking) HTTP Calls
The next version of the program is kind of a step forward as well as a step back. The program is doing some actual work with real IO by making HTTP requests to a list of URLs and getting the page contents. However, it’s doing so in a blocking (synchronous) manner.
The program has been modified to import the wonderful requests module to make the actual HTTP requests. Also, the queue now contains a list of URLs, rather than numbers. In addition, task() no longer increments a counter. Instead, requests gets the contents of a URL retrieved from the queue, and prints how long it took to do so.

The example_5.py code is listed below:
 1 import queue
 2 import requests
 3 from codetiming import Timer
 4
 5 def task(name, work_queue):
 6     timer = Timer(text=f"Task {name} elapsed time: {{:.1f}}")
 7     with requests.Session() as session:
 8         while not work_queue.empty():
 9             url = work_queue.get()
10             print(f"Task {name} getting URL: {url}")
11             timer.start()
12             session.get(url)
13             timer.stop()
14             yield
15
16 def main():
17     """
18     This is the main entry point for the program
19     """
20     # Create the queue of work
21     work_queue = queue.Queue()
22
23     # Put some work in the queue
24     for url in [
25         "",
26         "",
27         "",
28         "",
29         "",
30         "",
31         "",
32     ]:
33         work_queue.put(url)
34
35     tasks = [task("One", work_queue), task("Two", work_queue)]
36
37     # Run the tasks
38     done = False
39     with Timer(text="\nTotal elapsed time: {:.1f}"):
40         while not done:
41             for t in tasks:
42                 try:
43                     next(t)
44                 except StopIteration:
45                     tasks.remove(t)
46                 if len(tasks) == 0:
47                     done = True
48
49 if __name__ == "__main__":
50     main()
Here’s what’s happening in this program:
- Line 2 imports requests, which provides a convenient way to make HTTP calls.
- Line 3 imports the Timer code from the codetiming module.
- Line 6 creates the Timer instance used to measure the time taken for each iteration of the task loop.
- Line 11 starts the timer instance.
- Line 12 introduces a delay, similar to example_3.py. However, this time it calls session.get(url), which returns the contents of the URL retrieved from work_queue.
- Line 13 stops the timer instance and outputs the elapsed time since timer.start() was called.
- Lines 23 to 32 put the list of URLs into work_queue.
- Line 39 creates a Timer context manager that will output the elapsed time the entire while loop took to execute.
When you run this program, you’ll see the following output:
Task One getting URL:
Task One total elapsed time: 0.3
Task Two getting URL:
Task Two total elapsed time: 0.8
Task One getting URL:
Task One total elapsed time: 0.4
Task Two getting URL:
Task Two total elapsed time: 0.3
Task One getting URL:
Task One total elapsed time: 0.5
Task Two getting URL:
Task Two total elapsed time: 0.5
Task One getting URL:
Task One total elapsed time: 0.4

Total elapsed time: 3.2
Just like in earlier versions of the program, yield turns task() into a generator. It also performs a context switch that lets the other task instance run.

Each task gets a URL from the work queue, retrieves the contents of the page, and reports how long it took to get that content.

As before, yield allows both your tasks to run cooperatively. However, since this program is running synchronously, each session.get() call blocks the CPU until the page is retrieved. Note the total time it took to run the entire program at the end. This will be meaningful for the next example.
Asynchronous (Non-Blocking) HTTP Calls
This version of the program modifies the previous one to use Python async features. It also imports the aiohttp module, which is a library to make HTTP requests in an asynchronous fashion using asyncio.

The tasks here have been modified to remove the yield call since the code to make the HTTP GET call is no longer blocking. It also performs a context switch back to the event loop.

The example_6.py program is listed below:
 1 import asyncio
 2 import aiohttp
 3 from codetiming import Timer
 4
 5 async def task(name, work_queue):
 6     timer = Timer(text=f"Task {name} elapsed time: {{:.1f}}")
 7     async with aiohttp.ClientSession() as session:
 8         while not work_queue.empty():
 9             url = await work_queue.get()
10             print(f"Task {name} getting URL: {url}")
11             timer.start()
12             async with session.get(url) as response:
13                 await response.text()
14             timer.stop()
15
16 async def main():
17     """
18     This is the main entry point for the program
19     """
20     # Create the queue of work
21     work_queue = asyncio.Queue()
22
23     # Put some work in the queue
24     for url in [
25         "",
26         "",
27         "",
28         "",
29         "",
30         "",
31         "",
32     ]:
33         await work_queue.put(url)
34
35     # Run the tasks
36     with Timer(text="\nTotal elapsed time: {:.1f}"):
37         await asyncio.gather(
38             asyncio.create_task(task("One", work_queue)),
39             asyncio.create_task(task("Two", work_queue)),
40         )
41
42 if __name__ == "__main__":
43     asyncio.run(main())
Here’s what’s happening in this program:
- Line 2 imports the aiohttp library, which provides an asynchronous way to make HTTP calls.
- Line 3 imports the Timer code from the codetiming module.
- Line 5 marks task() as an asynchronous function.
- Line 6 creates the Timer instance used to measure the time taken for each iteration of the task loop.
- Line 7 creates an aiohttp session context manager.
- Line 11 starts the timer instance.
- Line 12 creates an aiohttp response context manager. It also makes an HTTP GET call to the URL taken from work_queue.
- Line 13 uses the session to get the text retrieved from the URL asynchronously.
- Line 14 stops the timer instance and outputs the elapsed time since timer.start() was called.
- Line 36 creates a Timer context manager that will output the elapsed time the entire while loop took to execute.
When you run this program, you’ll see the following output:
Task One getting URL:
Task Two getting URL:
Task One total elapsed time: 0.3
Task One getting URL:
Task One total elapsed time: 0.3
Task One getting URL:
Task One total elapsed time: 0.3
Task One getting URL:
Task Two total elapsed time: 0.9
Task Two getting URL:
Task Two total elapsed time: 0.4
Task Two getting URL:
Task One total elapsed time: 0.5
Task Two total elapsed time: 0.3

Total elapsed time: 1.7
Take a look at the total elapsed time, as well as the individual times to get the contents of each URL. You'll see that the duration is about half the cumulative time of all the HTTP GET calls. This is because the HTTP GET calls are running asynchronously. In other words, you're effectively taking better advantage of the CPU by allowing it to make multiple requests at once.
Because the CPU is so fast, this example could likely create as many tasks as there are URLs. In this case, the program’s run time would be that of the single slowest URL retrieval.
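As a rough sketch of that variation (not one of the numbered example files; the URL list here is just a placeholder), the gather call can simply create one task per URL:

import asyncio
import aiohttp

urls = ["https://example.com"] * 7  # placeholder URLs

async def fetch(session, url):
    async with session.get(url) as response:
        await response.text()

async def main():
    async with aiohttp.ClientSession() as session:
        # One task per URL, so the total run time approaches that of the slowest single request
        await asyncio.gather(*(fetch(session, url) for url in urls))

asyncio.run(main())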
Conclusion
This article has given you the tools you need to start making asynchronous programming techniques a part of your repertoire. Using Python async features gives you programmatic control of when context switches take place. This means that many of the tougher issues you might see in threaded programming are easier to deal with.
Asynchronous programming is a powerful tool, but it isn’t useful for every kind of program. If you’re writing a program that calculates pi to the millionth decimal place, for instance, then asynchronous code won’t help you. That kind of program is CPU bound, without much IO. However, if you’re trying to implement a server or a program that performs IO (like file or network access), then using Python async features could make a huge difference.
To sum it up, you’ve learned:
- What synchronous programs are
- How asynchronous programs are different, but also powerful and manageable
- Why you might want to write asynchronous programs
- How to use the built-in async features in Python
You can get the code for all of the example programs used in this tutorial:
Download Code: Click here to download the code you’ll use to learn about async features in Python in this tutorial.
Now that you’re equipped with these powerful skills, you can take your programs to the next level!
Introduction: How to Show a Message on a SenseHat
Hello, today I will be showing you how to display a message on a Raspberry Pi SenseHat.
Step 1: Hook Up the Raspberry Pi
Before we do any coding on the Raspberry Pi, we need to connect the right wires to it. Plug an HDMI cable into the HDMI port, plug in the power, and connect a keyboard and a mouse to it. If you want, you can connect an ethernet cable to it if you want to connect to the Internet. Connect it to a monitor, and voila! You can access the Raspberry Pi.
Step 2: Open Up Python 3 (IDLE)
In the top left corner of your screen, you should see a geometric raspberry icon. Click on that, and some options will come up. You should see "Programming". Click on that, and then click on "Python 3 (IDLE)". A window should pop up called "Python 3.5.3 Shell".
Step 3: Import SenseHat to Python
At the top left of the window, type (exactly as read):
from sense_hat import SenseHat
If you did it right, the "from" and "import" should be orange. Press enter, and type:
sense = SenseHat()
Make sure you use the parentheses. They signify a command.
Step 4: Display the Message
Press enter twice, then type:
sense.show_message("Your message here")
That should be it! Your message should be showing on the display!
Step 5: Optional Effects
If you want to get extra fancy, you can change the speed, text color, and background color of the message.
To change the text speed, enter the command like this:
sense.show_message("your message here", text_speed=random#)
1 is the default speed.
To change the color of your text or background, you need to set RGB variables first. RGB variables are colors, and you set them like this:
r = (255, 0, 0)
The first number is red value, the second green, and the third blue. After setting variables, enter the command like this:
sense.show_message("your message here", text_colour=variable, back_colour=variable)
You can combine any of these commands to change up your message.
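Putting it all together, a complete sketch might look like this (the message text and colour values here are just examples):

from sense_hat import SenseHat

sense = SenseHat()

# RGB variables: (red, green, blue), each from 0 to 255
r = (255, 0, 0)   # red text
b = (0, 0, 0)     # black background

# Scroll a red message on a black background at half the default pause
sense.show_message("Hello from the Sense HAT", scroll_speed=0.05,
                   text_colour=r, back_colour=b)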
Tsuyoshi OZAWA commented on YARN-2052:
--------------------------------------
{quote}
The only problem is users who are interpreting container_ID strings themselves. That is NOT supported.
{quote}
Yeah, I think it is difficult to avoid the problem. But the interpreting logic itself doesn't change drastically with our approach because we don't change the order of attributes. IMHO, it's an acceptable approach.
BTW, I found that ConverterUtils is marked as {{@Private}}. Should we make the class {{@Public}}?
{code}
@Private
public class ConverterUtils {
{code}

> ContainerId creation after work preserving restart is broken
> ------------------------------------------------------------
>
>                 Key: YARN-2052
>                 URL:
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: resourcemanager
>            Reporter: Tsuyoshi OZAWA
>            Assignee: Tsuyoshi OZAWA
|
https://www.mail-archive.com/yarn-issues@hadoop.apache.org/msg29536.html
|
CC-MAIN-2018-17
|
refinedweb
| 108
| 58.48
|
This is the mail archive of the cygwin@cygwin.com mailing list for the Cygwin project.
On Tue, Jun 03, 2003 at 10:09:11AM -0700, Greg Fenton wrote:
>I am looking to enhance SWISH-E ( ) to support Cygwin. It builds and
>runs just fine, but internally the code takes a configuration parameter
>and if it is a shell-command, converts all of the "/" characters to "\".
>
>#ifdef _WIN32
>    make_windows_path( cmd );
>#endif
>
>and make_windows_path() blindly replaces '/' for '\'.
>
>I'd like to have one binary support both Cygwin and native Win32
>environments. Suggestions on the "cleanest" way?

You shouldn't be defining _WIN32 for cygwin. Cygwin gcc doesn't define
this by default.
|
https://www.cygwin.com/ml/cygwin/2003-06/msg00125.html
|
CC-MAIN-2018-05
|
refinedweb
| 111
| 68.47
|
Red Hat Bugzilla – Bug 241370
pread() always sets the offset=0 if gcc option -D_FILE_OFFSET_BITS=64 is set
Last modified: 2016-11-24 09:58:52 EST
Description of problem:
If a C code is compiled with the -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64
flag then the offset argument for pread() is passed as 0 or some garbage
number. This happens only on a powerPC. The sample code to reproduce this is as
follows:
===================================================================
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
int main(void)
{
int fdin, fdout, ret;
char *buf;
off_t off=73728;
buf=(char *) malloc(131072);
if (!buf) {printf ("Bad"); exit(0);}
fdin = open("/oradata/BAN25/system01.dbf", O_RDONLY);
if(fdin<0) printf("fdin error\n");
fdout = open("/tmp/checksystem.dbf", O_WRONLY | O_CREAT);
if(fdout<0) printf("fdout error\n");
ret = pread(fdin, buf, 8192, off);
if(ret!=8192){printf("error read\n");}
ret = write(fdout, buf, 8192);
if(ret!=8192){printf("error write\n");}
close(fdin);
close(fdout);
}
===================================================================
and was compiled as follows
#gcc -g -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 try.c
The strace output for the same is attached with this bug (strace.out).
strace was generated as:
#strace -f ./a.out >strace.out 2>&1
Now pread would work fine if these changes were done.
-> If pread() is changed to pread64() in the code, then things work fine.
-> Also, if the code is compiled with -D_XOPEN_SOURCE=500, things work fine,
but unfortunately I cannot use the -D_XOPEN_SOURCE=500 flag as this flag
seems to unset the following type definitions: uint, u_long etc., which are
being used in the code base.
The same code was compiled on an Intel and AMD system and pread() worked fine
without the -D_XOPEN_SOURCE=500 compilation flag. That is the offset was passed
successfully.
Why does this happen?
Version-Release number of selected component (if applicable):
This fails on a PowerPC with either of the glibc versions glibc-2.3.4-2.19 and
glibc-2.3.4-2.25
How reproducible:
Always reproducible (consistent)
Steps to Reproduce:
1. Write a C code with the pread() call
2. Set the debug flag -D_FILE_OFFSET_BITS=64 for gcc and compile.
3. strace output for the execution shows that pread64() is called internally and
the offset value is changed to some junk value or 0.
I am passing the offset as 73728.
Actual results:
pread64(3, "\0\242\0\0\377\300\0\0\0\0\0\0\0\0\0\0\341\304\0\0\0\0"..., 8192, 0)
Expected results:
pread64(3, "\0\242\0\0\377\300\0\0\0\0\0\0\0\0\0\0\341\304\0\0\0\0"..., 8192, 73728)
Additional info:
If I directly call pread64() within the code, things work fine.
Created attachment 155458 [details]
strace output for the code.
Please read
info libc 'Feature Test Macros'
and try to use
-Wimplicit-function-declaration (part of -Wall)
on your sources (and, unrelated, remember that when O_CREAT is used, 3rd argument
to open is mandatory).
pread and pread64 functions were added in SUSv3, therefore they are only
available in the headers with -D_XOPEN_SOURCE=500, -D_XOPEN_SOURCE=600,
or -D_GNU_SOURCE.
When you don't have prototype of a function, in addition to e.g. miscompiling
functions that return some other type than int, with -D_FILE_OFFSET_BITS=64 the
non-*64 functions aren't redirected to their *64 counterparts either. So,
in your testcase, the result is basically:
int pread (); // implicit function declaration
...
pread (fdin, buf, 8192, off); // int, void *, uint, long long arguments
But, pread takes int, void *, uint, long arguments. On little endian 32-bit arch
that means just using the bottom 32 bits of off, on big endian 32-bit arch
usually the top 32 bits of off, but on ppc additionally long long values are
passed only starting on odd registers, so int, void *, uint, long long is
passed as int, void *, uint, pad, high 32 bits, low 32 bits.
|
https://bugzilla.redhat.com/show_bug.cgi?id=241370
|
CC-MAIN-2017-17
|
refinedweb
| 651
| 65.42
|
I have a login screen and I wish to normalize the login name for the
purposes of INSERT and SELECT. In the model I have:
def username=(name)
# keycase is a local extension of class String.
write_attribute(:username, name.keycase)
end
The attribute username of an new user is indeed stored as a normalized
string. However, when logging on and I input the username in capital
letters then the select fails, because the param value of username is
not normalized. The question is, where do I perform this manipulation
for the login? In the controller? In the view? Or is there a way to
accomplish this in the model?
|
https://www.ruby-forum.com/t/model-controller-or-view-where-to-normalize-data/155962
|
CC-MAIN-2022-05
|
refinedweb
| 109
| 66.44
|
Opened 8 years ago
Closed 19 months ago
#9104 closed Cleanup/optimization (fixed)
FieldDoesNotExist is defined in "confusing" place.
Description
Django has "django.core.exceptions" package/module where most exceptions are defined, including DB related ones (i.e: ObjectDoesNotExist). Lame users like me would expect FieldDoesNotExist to be defined on the same module but it's in django.db.fields
Maybe we could move the exception?
Change History (12)
comment:1 Changed 8 years ago by telenieko
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Triage Stage changed from Unreviewed to Design decision needed
comment:2 Changed 8 years ago by mtredinnick
comment:3 Changed 7 years ago by anonymous
- milestone post-1.0 deleted
Milestone post-1.0 deleted
comment:4 Changed 5 years ago by lukeplant
- Severity set to Normal
- Type set to Cleanup/optimization
comment:5 Changed 5 years ago by aaugustin
- Easy pickings unset
- Resolution set to needsinfo
- Status changed from new to closed
- UI/UX unset
Malcolm explained why the code is written in this way, and his question stays unanswered after years.
comment:6 Changed 3 years ago by nickname123
Just came across this looking for the exception with Google search.
My use case is that I have a model form with extra fields on the form than what the model defines. I use these extra fields to auto populate some of the model fields.
I am iterating through the form fields to add HTML5 attrs during init. I need to catch FieldDoesNotExist when I try to check if the model field has blank=True to determine if the formfield is required or not.
from django.db.models.fields import FieldDoesNotExist

def __init__(self, *args, **kwargs):
    super(FooModel, self).__init__(*args, **kwargs)
    for key in self.fields:
        try:
            mfield = self.instance._meta.get_field_by_name(key)[0]
        except FieldDoesNotExist:
            pass
        else:
            if not mfield.blank:
                self.fields[key].widget.attrs["required"] = "required"
This isn't too important, so I am just leaving the comment in case Google brings me back here again before I grep.
comment:7 Changed 2 years ago by wraus
- Resolution needsinfo deleted
- Status changed from closed to new
I'd like to reopen this just to hopefully have this oddity resolved.
One of the comments was talking about use cases, and I wanted to give the use case that I'm using for it to hopefully sway opinion a bit on making this simple change.
I'm working on a template tag that will allow me to iterate over a list of a model's fields / attributes and output it as a table. I want to grab the verbose_name of the field, given the field's name, and to do so, I get obj._meta.get_field(field_name). However, I also want the option to reference class attributes and functions, and obviously if I reference a function or attribute, there is no matching field. Thus, I need to call get_field, and catch FieldDoesNotExist to handle it as a function / attribute and get a separate label.
There doesn't seem to be a more "obvious" way to do this, and I ran into the issue of trying to find this oddly placed exception. I understand that moving the exception would be a serious change, but hopefully making it importable via django.core.exceptions will resolve this inconvenience.
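A rough sketch of that helper (the function name and fallback are hypothetical), assuming the django.core.exceptions import location requested here:
from django.core.exceptions import FieldDoesNotExist

def field_label(obj, name):
    # Prefer the model field's verbose_name when the name is a real field.
    try:
        return obj._meta.get_field(name).verbose_name
    except FieldDoesNotExist:
        # Not a model field: treat it as an attribute or method and build a label from the name.
        return name.replace("_", " ").title()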
comment:8 Changed 21 months ago by tomchristie
I believe that the correct resolution of this should be whatever decision is made in the official 'meta' API update.
Given that we don't yet document get_fields or FieldDoesNotExist anywhere, the new public API is the correct way to resolve this one way or the other.
comment:9 Changed 19 months ago by freakboy3742
comment:10 Changed 19 months ago by claudep
Even if it is not documented, searching for FieldDoesNotExist on github for example reveals many usages of this import location in the wild, hence I think we should provide some deprecation path.
It should be possible for example to create a django/db/models/fields/FieldDoesNotExist.py module to trigger the deprecation warning on old import location (don't know if there is a cleaner way to deprecate import locations).
comment:11 Changed 19 months ago by claudep
Of course, my "trick" about a FieldDoesNotExist.py file only works if we want to loudly fail with an explicit message. Still looking for a way to raise a warning without ever calling the symbol (might be impossible...).
comment:12 Changed 19 months ago by Tim Graham <timograham@…>
- Resolution set to fixed
- Status changed from new to closed
We can't move it (backwards compatibility is important), but we could add an alias in the other place, maybe.
However, the reason for its current location is that this is not an exception that correct user code will ever see. You will see something like ObjectDoesExist because that can happen naturally. But your code should never be catching FieldDoesNotExist, since that would just mean you had made an error in your code 99% of the time (there are some introspection cases where it might not. I guess that is why the admin interface is using it). So declaring the exception where it is actually used and then only using it internally isn't a bad thing.
What's the use-case where you actually need to ever import this exception?
|
https://code.djangoproject.com/ticket/9104
|
CC-MAIN-2016-30
|
refinedweb
| 893
| 60.04
|
I have an asp.net mvc application and my code is based on this article: and on this sample code:
I use jquery-ui-1.11.4 dialog to show a form, but the html of popup is placed after all other elements in <body> by default, so this code is not in the <form> element used by the parent view. That is the reason why the input values do not post
Here is my Interface and class : public interface IServiceFactory<T, Y> where T : class where Y : class { T Create(ModelStateDictionary modelState); } public class ServiceFactory<T, Y> : IServiceFactory<T, Y> where T : class where Y
I'm trying to dynamically generate image path's based on an external variable in a razor view. I've seen other questions like this, however they did not seem to address my particular problem. There seem to be an issue with the order in which razor re
I dont understand the RegisterRoutes perfectly. Lets assume the browser's current URL is //Home/ListCompanies/{filter} we came to the address above with the link below @Html.ActionLink("Yerli Markalar",
I am getting compilation error in the code below, it complaining about a bracket but all the bracket and matching up I am a bit lost can someone please help me here thank in advance. Error: Description: An error occurred during the compilation of a r
I'm out of ideas, but I guess it's because I'm a noob in asp.net. I have a MVC3 project and my idea was to have 2 sections: Section Main is for displaying tables and news and all the staff that changes when entering different pages. Section Main_prof
While querying with linq, Max function retrieves up to '9' if more than 9 values in the list MaxItemNumber = ItemDetailList.Max(e => e.ItemNumber); Here ItemDetailList contains more than 10 items, but MaxItemNumber is 9, this is not taking into ac
I am new to ASP.NET MVC. While designing a Page i used Data Annotations as validations. I observed that all the validations already performed at the client side and i still didnt understood why i should use ModelState.IsValid in the server side also.
I am facing the same issue sited here. Basically, the MVC3 razor view (cshtml) intellisense does not work with VS 2013. The reason sited on that thread is that MVC3 is not supported in VS 2013. Now I have a third party library (dll) that I cannot upg
With reference to the question posted on link, how can I return data of the below format from a controller using ajax? The confusing part is that if the data property is an array of objects - with a string and an integer property; then why doesn't
This is My View @using(@Html.BeginForm("CrmBlogGroupType","knowledge",FormMethod.Get)){ @Html.TextBox("search") @Html.Hidden("type", (string)ViewBag.type) @Html.DropDownList("PageSize", new List<SelectListItem>() { new SelectListItem () { Text=
I have an open source class library that targets MVC 2, 3, 4, and 5. I use the same project file for every version and use a compilation constant to switch between project references, as follows. <ItemGroup Condition=" $(DefineConstants.Contains('
Membership.CreateUser(model.UserName, model.Password, model.Email, null, null, true, null, out createStatus); Failed System.Data.SqlClient.SqlException: Invalid object name 'dbo.Applications'. Searched google/stackoverflow no results. I guess my SQL
One of the guys here at work was saying that System.Web.MVC.dll version number (3.0, 40, etc) has to match the version of System.Web.Routing.dll... I wanted to understand his statement, so I dug into it and can't find anything on the web by googling.
I have a collection of data that I am displaying as a table. The current setup is: @using System.Collections.Generic @model IEnumerable<Dictionary<String,String>> <table id="dashboard"> <thead> @foreach (var item in Model.Firs
I have seen similar problems faced by others, but since my problem only occurs on IE7 I am wondering whether there is something else. I have a complex viewmodel structure, for example, class X { List<Y> a {get; set;} } class Y { string b {get;s
I am using MVC3, ASP.NET 4.5, C#, Razor, SQL Server 2008+ I have to start by saying that I am using a "domain object" in my postback action. public ActionResult Edit(Order myOrder) It is not a ViewModel. Therefore the default model binder will match
I was working on angular js implementation in ASP.NET MVC3 application. I have loaded my products using var deferred = $q.defer(); // Initiates the AJAX call $http({ method: 'Get', url: scope.url ,params: {categoryId: scope.categoryId,pageIndex:scope
|
http://www.dskims.com/tag/asp-net-mvc-3/
|
CC-MAIN-2018-30
|
refinedweb
| 782
| 54.52
|
0
Hello!
My application is crashing randomly, and I thought it might have to do with it manipulating widgets that are not yet mapped, despite them being already gridded. Have you experienced this problem?
I supply some code that shows that the print function happens before tkinter recognizes the button being mapped, even though it is already gridded. I couldn't find a way to wait for the button being properly mapped before the print function is executed.
from tkinter import *

class fb( Frame ):
    def fastbuttons( self ):
        b = Button(self)
        b.grid()
        print( b.winfo_ismapped() )

    def __init__( self, master=None ):
        Frame.__init__( self, master )
        self.grid( sticky='nsew' )
        self.button = Button( self, text='Start', command=self.fastbuttons )
        self.button.grid()

if __name__ == '__main__':
    root = Tk()
    app = fb( master=root )
    app.mainloop()
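One possible way to wait (a sketch, not from the original post) is to react to the <Map> event, which Tk delivers once the widget has actually been mapped, instead of checking winfo_ismapped() right after grid():
    def fastbuttons( self ):
        b = Button(self)
        # Fires only after the window manager has really mapped the widget
        b.bind( "<Map>", lambda event: print( b.winfo_ismapped() ) )
        b.grid()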
|
https://www.daniweb.com/programming/software-development/threads/246880/tkinter-not-mapped-instantly-after-grid
|
CC-MAIN-2017-43
|
refinedweb
| 131
| 60.82
|
The standard library implements support for Linux's extended attribute API with os.getxattr/setxattr/listxattr/removexattr. I'd like to implement support for more platforms under the same API in the standard library.
Proof of concept ================ I've implemented a native-Python library (using ctypes) that runs CI tests against Linux, Mac OS X and FreeBSD. If accepted, this library could also be used to backport support to Python 3.6+.
API discussion ============== It is at least possible to do this on all three platforms as demonstrated by my library.
Mutable mapping --------------- I've also implemented a mutable mapping (inspired by os.environ) that can be used for access to a file's extended attributes. Unlike os.environ you have to instantiate the class with your path, but once you have it, it's a straightforward API.
# class Xattrs(path: os.PathLike, follow_symlinks: bool = True)
import xattr_compat
xattrs = xattr_compat.Xattrs("./my_file")
xattrs["user.humanfund.xattr"] = b"hello\0world"
print("Extended attributes:", xattrs.items())
MacOS X ------- Not listed in the manpage but defined in the xattr.h header file are a few more bitflags: XATTR_NOSECURITY, XATTR_NODEFAULT and XATTR_SHOWCOMPRESSION. Linux's setxattr takes a flags argument so os.setxattr() has a flags argument, but on MacOS X all four functions take a flags argument. It may be worth adding a flags= kwarg to all four Python functions to support the MacOS X case.
MacOS X also has no restrictions on the name of the format key which is nice. The manpages demand that you use UTF-8 for the encoding as well.
Linux ----- Linux requires the attribute name to start with "user.", "system.", "trusted." or "security.". Luckily the current Python implementation doesn't attempt to validate this, leaving it up to the kernel to set errno when you don't do this. Because Linux was the most restrictive of these platforms existing code will work correctly on the new platforms.
FreeBSD/NetBSD -------------- These systems added an argument attrnamespace to all of their xattr system calls. There are only two namespaces: EXTATTR_NAMESPACE_SYSTEM and EXTATTR_NAMESPACE_USER. There are no further restrictions on the attribute name. Unfortunately, directly supporting attrnamespace in the Python API is going to require breaking away a bit from the Linux interface.
You can't rely on checking the attribute string to determine which namespace a key should go into because there will be keys without those prefixes on live systems. You also can't trim the string in Python before handing it off to the kernel because it is valid to have a key that starts with "user." and so on in the BSDs and I imagine there are probably a lot of users who want compatibility with Linux and are doing exactly that.
I have my library using the USER namespace by default which seems to be the correct thing to do. However, it is still possible that someone might want to read or write from a SYSTEM namespace xattr. I've extended the function signature to optionally take a tuple in place of the os.PathLike argument. The first member of the tuple is the namespace constant, the second is the actual path or file descriptor that you want to work on. I'm pretty happy with this API - it doesn't break compatibility for the tiny user base of FreeBSD users who are interested in reading SYSTEM-namespaced xattrs from Python but also doesn't clutter things up too much. But maybe this is too tacky for the standard library.
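As a rough sketch of that tuple form (hypothetical usage; the exact constant and function names exposed by xattr_compat may differ):
import xattr_compat

# Default behaviour: the USER namespace, matching Linux semantics
xattr_compat.setxattr("./my_file", "humanfund.xattr", b"hello")

# Explicit SYSTEM namespace on FreeBSD/NetBSD: pass a (namespace, path) tuple in place of the path
target = (xattr_compat.EXTATTR_NAMESPACE_SYSTEM, "./my_file")
print(xattr_compat.listxattr(target))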
Windows ------- If the Windows Subsystem for Linux implements xattr this code can likely mimic what it does. I don't have a Windows machine so I can't comment on if that is already there.
Bikeshedding ============ Should the mapping class be named os.xattrs or os.xattr or os.Xattrs or os.Xattr?
Implementation ============== If accepted, what sort of tests would be expected as part of the PR? I already have a handful of unit tests in the xattr_compat repo.
|
https://mail.python.org/archives/list/python-ideas@python.org/thread/6GTEZ2UL3LXMQIGERR5SPRWVSSRVSKRG/
|
CC-MAIN-2021-17
|
refinedweb
| 663
| 56.55
|
In this article, you will learn why using this module is the best way to add logging to your application, as well as how to get started quickly, and you will get an introduction to some of the advanced features available.
The Logging Module
Adding logging to your Python program is as easy as this:
import logging
With the logging module imported, you can use something called a “logger” to log messages that you want to see. By default, there are 5 standard levels indicating the severity of events. Each has a corresponding method that can be used to log events at that level of severity. The defined levels, in order of increasing severity, are the following:
- DEBUG
- INFO
- WARNING
- ERROR
- CRITICAL
The logging module provides you with a default logger that allows you to get started without needing to do much configuration. The corresponding methods for each level can be called as shown in the following example:
import logging

logging.debug('This is a debug message')
logging.info('This is an info message')
logging.warning('This is a warning message')
logging.error('This is an error message')
logging.critical('This is a critical message')

WARNING:root:This is a warning message
ERROR:root:This is an error message
CRITICAL:root:This is a critical message
The output shows the severity level before each message along with
root, which is the name the logging module gives to its default logger. (Loggers are discussed in detail in later sections.) This format, which shows the level, name, and message separated by a colon (
:), is the default output format that can be configured to include things like timestamp, line number, and other details.
Notice that the
debug() and
info() messages didn’t get logged. This is because, by default, the logging module logs the messages with a severity level of WARNING or above. You can change that by configuring the module to log events of all levels.
Basic Configurations
You can use the basicConfig(**kwargs) method to configure the logging:
“You will notice that the logging module breaks PEP8 styleguide and uses
camelCase naming conventions. This is because it was adopted from Log4j, a logging utility in Java. It is a known issue in the package but by the time it was decided to add it to the standard library, it had already been adopted by users and changing it to meet PEP8 requirements would cause backwards compatibility issues.” (Source)
Some of the commonly used parameters for
basicConfig() are the following:
level: The root logger will be set to the specified severity level.
filename: This specifies the file to which the log messages will be written.
filemode: If
filename is given, the file is opened in this mode. The default is
a, which means append.
format: This is the format of the log message.
By using the
level parameter, you can set what level of log messages you want to record. This can be done by passing one of the constants available in the class, and this would enable all logging calls at or above that level to be logged. Here’s an example:
import logging

logging.basicConfig(level=logging.DEBUG)
logging.debug('This will get logged')
DEBUG:root:This will get logged
All events at or above
DEBUG level will now get logged.
Similarly, for logging to a file rather than the console,
filename and
filemode can be used, and you can decide the format of the message using
format. The following example shows the usage of all three:
import logging

logging.basicConfig(filename='app.log', filemode='w', format='%(name)s - %(levelname)s - %(message)s')
logging.warning('This will get logged to a file')
root - WARNING - This will get logged to a file
The message will look like this but will be written to a file named
app.log instead of the console. The filemode is set to
w, which means the log file is opened in “write mode” each time
basicConfig() is called, and each run of the program will rewrite the file. The default configuration for filemode is
a, which is append.
You can customize the root logger even further by using more parameters for
basicConfig(), which can be found here.
It should be noted that calling
basicConfig() to configure the root logger works only if the root logger has not been configured before. Basically, this function can only be called once.
debug(),
info(),
warning(),
error(), and
critical() also call
basicConfig() without arguments automatically if it has not been called before. This means that after the first time one of the above functions is called, you can no longer configure the root logger because they would have called the
basicConfig() function internally.
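A minimal illustration of this pitfall (the messages are just placeholders):
import logging

logging.warning('Something happened')       # implicitly calls basicConfig() with the defaults
logging.basicConfig(level=logging.DEBUG)    # too late: the root logger is already configured
logging.debug('This will NOT get logged')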
The default setting in
basicConfig() is to set the logger to write to the console in the following format:
ERROR:root:This is an error message
Formatting the Output
While you can pass any variable that can be represented as a string from your program as a message to your logs, there are some basic elements that are already a part of the
LogRecord and can be easily added to the output format. If you want to log the process ID along with the level and message, you can do something like this:
import logging

logging.basicConfig(format='%(process)d-%(levelname)s-%(message)s')
logging.warning('This is a Warning')
18472-WARNING-This is a Warning
format can take a string with
LogRecord attributes in any arrangement you like. The entire list of available attributes can be found here.
Here’s another example where you can add the date and time info:
import logging

logging.basicConfig(format='%(asctime)s - %(message)s', level=logging.INFO)
logging.info('Admin logged in')
2018-07-11 20:12:06,288 - Admin logged in
%(asctime)s adds the time of creation of the
LogRecord. The format can be changed using the
datefmt attribute, which uses the same formatting language as the formatting functions in the datetime module, such as
time.strftime():
import logging

logging.basicConfig(format='%(asctime)s - %(message)s', datefmt='%d-%b-%y %H:%M:%S')
logging.warning('Admin logged out')
12-Jul-18 20:53:19 - Admin logged out
You can find the guide here.
Logging Variable Data
In most cases, you would want to include dynamic information from your application in the logs. You have seen that the logging methods take a string as an argument, and it might seem natural to format a string with variable data in a separate line and pass it to the log method. But this can actually be done directly by using a format string for the message and appending the variable data as arguments. Here’s an example:
import logging

name = 'John'
logging.error('%s raised an error', name)
ERROR:root:John raised an error
The arguments passed to the method would be included as variable data in the message.
While you can use any formatting style, the f-strings introduced in Python 3.6 are an awesome way to format strings as they can help keep the formatting short and easy to read:
import logging

name = 'John'
logging.error(f'{name} raised an error')
ERROR:root:John raised an error
Capturing Stack Traces
The logging module also allows you to capture the full stack traces in an application. Exception information can be captured if the exc_info parameter is passed as True.
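For example, a handler for a division-by-zero error could log the full traceback like this (a minimal sketch; the message text is arbitrary):
import logging

a = 5
b = 0

try:
    c = a / b
except Exception as e:
    logging.error("Exception occurred", exc_info=True)
This logs the message at the ERROR level together with the full traceback of the exception.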
Here’s a quick tip: if you’re logging from an exception handler, use the
logging.exception() method, which logs a message with level
ERROR and adds exception information to the message. To put it more simply, calling
logging.exception() is like calling
logging.error(exc_info=True). But since this method always dumps exception information, it should only be called from an exception handler. Take a look at this example:
import logging

a = 5
b = 0

try:
    c = a / b
except Exception as e:
    logging.exception("Exception occurred")
ERROR:root:Exception occurred
Traceback (most recent call last):
  File "exceptions.py", line 6, in <module>
    c = a / b
ZeroDivisionError: division by zero
[Finished in 0.2s]
Using
logging.exception() would show a log at the level of
ERROR. If you don’t want that, you can call any of the other logging methods from
debug() to
critical() and pass the
exc_info parameter as
True.
Classes and Functions
So far, we have seen the default logger named
root, which is used by the logging module whenever its functions are called directly like this:
logging.debug(). You can (and should) define your own logger by creating an object of the
Logger class, especially if your application has multiple modules. Let’s have a look at some of the classes and functions in the module.
The most commonly used classes defined in the logging module are the following:
Logger: This is the class whose objects will be used in the application code directly to call the functions.
LogRecord: Loggers automatically create LogRecord objects that have all the information related to the event being logged, like the name of the logger, the function, the line number, the message, and more.
Handler: Handlers send the LogRecord to the required output destination, like the console or a file. Handler is a base for subclasses like StreamHandler, FileHandler, SMTPHandler, HTTPHandler, and more. These subclasses send the logging outputs to corresponding destinations, like sys.stdout or a disk file.
Formatter: This is where you specify the format of the output by specifying a string format that lists out the attributes that the output should contain.
Out of these, we mostly deal with the objects of the
Logger class, which are instantiated using the module-level function
logging.getLogger(name). Multiple calls to
getLogger() with the same
name will return a reference to the same
Logger object, which saves us from passing the logger objects to every part where it’s needed. Here’s an example:
import logging

logger = logging.getLogger('example_logger')
logger.warning('This is a warning')
This is a warning
This creates a custom logger named
example_logger, but unlike the root logger, the name of a custom logger is not part of the default output format and has to be added to the configuration. Configuring it to a format to show the name of the logger would give an output like this:
WARNING:example_logger:This is a warning
Again, unlike the root logger, a custom logger can’t be configured using
basicConfig(). You have to configure it using Handlers and Formatters:
“It is recommended that we use module-level loggers by passing __name__ as the name parameter to getLogger() to create a logger object as the name of the logger itself would tell us from where the events are being logged. __name__ is a special built-in variable in Python which evaluates to the name of the current module.” (Source)
Using Handlers
Handlers come into the picture when you want to configure your own loggers and send the logs to multiple places when they are generated. Handlers send the log messages to configured destinations like the standard output stream or a file or over HTTP or to your email via SMTP.
A logger that you create can have more than one handler, which means you can set it up to be saved to a log file and also send it over email.
Like loggers, you can also set the severity level in handlers. This is useful if you want to set multiple handlers for the same logger but want different severity levels for each of them. For example, you may want logs with level
WARNING and above to be logged to the console, but everything with level
ERROR and above should also be saved to a file. Here’s a program that does that:
# logging_example.py
import logging

# Create a custom logger
logger = logging.getLogger(__name__)

# Create handlers
c_handler = logging.StreamHandler()
f_handler = logging.FileHandler('file.log')
c_handler.setLevel(logging.WARNING)
f_handler.setLevel(logging.ERROR)

# Create formatters and add it to handlers
c_format = logging.Formatter('%(name)s - %(levelname)s - %(message)s')
f_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
c_handler.setFormatter(c_format)
f_handler.setFormatter(f_format)

# Add handlers to the logger
logger.addHandler(c_handler)
logger.addHandler(f_handler)

logger.warning('This is a warning')
logger.error('This is an error')
__main__ - WARNING - This is a warning
__main__ - ERROR - This is an error
Here,
logger.warning() is creating a
LogRecord that holds all the information of the event and passing it to all the Handlers that it has:
c_handler and
f_handler.
c_handler is a
StreamHandler with level
WARNING and takes the info from the
LogRecord to generate an output in the format specified and prints it to the console.
f_handler is a
FileHandler with level
ERROR, and it ignores this
LogRecord as its level is
WARNING.
When
logger.error() is called,
c_handler behaves exactly as before, and
f_handler gets a
LogRecord at the level of
ERROR, so it proceeds to generate an output just like
c_handler, but instead of printing it to console, it writes it to the specified file in this format:
2018-08-03 16:12:21,723 - __main__ - ERROR - This is an error
The name of the logger corresponding to the
__name__ variable is logged as
__main__, which is the name Python assigns to the module where execution starts. If this file is imported by some other module, then the
__name__ variable would correspond to its name logging_example. Here’s how it would look:
# run.py
import logging_example
logging_example - WARNING - This is a warning
logging_example - ERROR - This is an error
Other Configuration Methods
You can configure logging as shown above using the module and class functions or by creating a config file or a dictionary and loading it using
fileConfig() or
dictConfig() respectively. These are useful in case you want to change your logging configuration in a running application.
Here’s an example file configuration:
[loggers]
keys=root,sampleLogger

[handlers]
keys=consoleHandler

[formatters]
keys=sampleFormatter

[logger_root]
level=DEBUG
handlers=consoleHandler

[logger_sampleLogger]
level=DEBUG
handlers=consoleHandler
qualname=sampleLogger
propagate=0

[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=sampleFormatter
args=(sys.stdout,)

[formatter_sampleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
In the above file, there are two loggers, one handler, and one formatter. After their names are defined, they are configured by adding the words logger, handler, and formatter before their names separated by an underscore.
To load this config file, you have to use
fileConfig():
import logging
import logging.config

logging.config.fileConfig(fname='file.conf', disable_existing_loggers=False)

# Get the logger specified in the file
logger = logging.getLogger(__name__)
logger.debug('This is a debug message')
2018-07-13 13:57:45,467 - __main__ - DEBUG - This is a debug message
The path of the config file is passed as a parameter to the
fileConfig() method, and the
disable_existing_loggers parameter is used to keep or disable the loggers that are present when the function is called. It defaults to
True if not mentioned.
Here’s the same configuration in a YAML format for the dictionary approach:
version: 1
formatters:
  simple:
    format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
    stream: ext://sys.stdout
loggers:
  sampleLogger:
    level: DEBUG
    handlers: [console]
    propagate: no
root:
  level: DEBUG
  handlers: [console]
Here’s an example that shows how to load config from a
yaml file:
import logging
import logging.config
import yaml

with open('config.yaml', 'r') as f:
    config = yaml.safe_load(f.read())
logging.config.dictConfig(config)

logger = logging.getLogger(__name__)
logger.debug('This is a debug message')
2018-07-13 14:05:03,766 - __main__ - DEBUG - This is a debug message
Keep Calm and Read the Logs
The logging module is considered to be very flexible. Its design is very practical and should fit your use case out of the box. You can add basic logging to a small project, or you can go as far as creating your own custom log levels, handler classes, and more if you are working on a big project.
If you haven’t been using logging in your applications, now is a good time to start. When done right, logging will surely remove a lot of friction from your development process and help you find opportunities to take your application to the next level.
|
https://realpython.com/python-logging/
|
CC-MAIN-2021-43
|
refinedweb
| 2,683
| 51.68
|
Read other articles of this series
- Spring Interview Questions
- Spring Boot Interview Questions
- Spring MVC Interview Questions with Answers
- Java Interview Questions (This Article)
In this post, we are covering some of the important Java interview questions for your next interview.
Introduction
Preparing for a Java interview is not an easy task. You may face many different problems, starting from simple beginner-level questions to the core concepts of different Java classes. In this tutorial, we are listing the most commonly asked Java interview questions and answers. There is no way of guaranteeing what type of problems you may face in an interview, but we hope that this blog post will help you handle a good number of Java questions:
Q1. What are JDK, JRE, and JVM?
JDK: JDK is the acronym for Java Development Kit. This is the development kit for Java application development and debugging. It is platform specific, meaning we have different installers for different operating systems. JDK includes different utilities required for Java application development, like the Java Virtual Machine, libraries, development tools like javac, documentation, a debugger etc.
JRE: JRE is the acronym for Java Runtime Environment. JRE is inside JDK. It provides a runtime environment used to execute a Java program. JRE includes the JVM, libraries and a few other classes that are needed to execute a Java program. Note that JDK is used to develop a Java program and JRE is used to run one.
JVM: JVM is the acronym for Java Virtual Machine. It is called "virtual" because the JVM doesn't physically exist. It converts the Java bytecode to machine language. The Java compiler compiles the ".java" files to ".class" files that contain the bytecode for the JVM.
Q2. Why Java is not a fully object-oriented programming language?
A programming language is a fully object-oriented programming language if everything in a program is an object. But in Java, all primitive types like char, byte, boolean, short, int, double etc. are not objects. These are predefined types. So, we can't call Java a pure or fully object-oriented programming language.
Q3. What do you understand by platform independence? Do you think Java is platform independent?
Platform independence means once we compile the code for a program, we can run it on any operating system. Java is platform independent because the Java compiler compiles the .java files to bytecode. On any platform, the JVM can execute the same bytecode without recompiling. Note that the JVM itself is platform dependent, i.e. for each OS we have a different JVM installed, but each JVM can execute the same bytecode compiled on any operating system.
Q4. What are a class and an object?
A class is a template that holds different information like data types and methods used by the objects. An object is an instance of a class, which consists of all the data types and methods defined in the class. We can create multiple objects from the same class.
Q5. What is a constructor?
A constructor is a method that is executed when an object is created. We can have multiple constructors for a single class, and at least one of them will be invoked when creating a new object. The name of the constructor is the same as the name of the class.
Q6. What is an Anonymous inner class in Java?
An anonymous inner class is an inner class declared and instantiated at the same time. We can use the 'new' operator to instantiate and initialize it, e.g.:
class Vehicle {
    interface Car {
        public void numberOfDoors();
    }

    public static void main(String... args) {
        Car car = new Car() {
            public void numberOfDoors() {
                System.out.println("4");
            }
        };
        car.numberOfDoors();
    }
}
Q7. What is an inner class in Java?
An inner class, as its name suggests, is defined inside another class. The main difference between an outer class and an inner class is that an inner class can be private, default, public or protected, but an outer class can only be public or default. Example:
class Vehicle {
    class Bus {
        public void numberOfDoors() {
            System.out.println("2");
        }
    }

    public static void main(String... args) {
        Vehicle vehicle = new Vehicle();
        Bus bus = vehicle.new Bus();
        bus.numberOfDoors();
    }
}
Q8. What is an object class?
By default, all classes inherit the Object class directly or indirectly. If a class does not extend any other class, it inherits the Object class directly. If it extends another class, it inherits the Object class indirectly. Object class methods are available to all classes.
Q9. What is a singleton class?
Singleton is a design pattern used to create a class with only one single instance, i.e. it will instantiate only once in the Java Virtual Machine. These classes are useful for utility or common methods like logging, configuration etc. The constructor of the singleton class is private. One different static method is created inside the class to get the instance of the class.
public class AppUtil {
    private static AppUtil instance;

    private AppUtil() {}

    public static AppUtil getInstance() {
        if (instance == null) {
            synchronized (AppUtil.class) {
                if (instance == null) {
                    instance = new AppUtil();
                }
            }
        }
        return instance;
    }
}
Here, AppUtil is a singleton class. It has only one instance. We can get this instance using the static method “getInstance” from any other class.
Q10. What is an abstract class?
An abstract class in Java is denoted by the "abstract" keyword. We cannot instantiate an abstract class. An abstract class may contain abstract or non-abstract methods. If a class has an abstract method, it must be abstract. Like other classes, an abstract class can have constructors. Abstract classes are mostly used as base classes with common methods and method definitions for all subclasses.
Q11. What is an interface?
An interface is a way to achieve abstraction in Java. An interface has only method declarations, not implementations. If a class is implementing an interface, it needs to provide the implementation for all methods defined in the interface. An interface provides polymorphism in Java. The interface can also contain default methods, nested types, static methods, and constants. For default methods and static methods, we can implement bodies inside the interface. It doesn't contain any constructor and we can't instantiate it. A class can implement multiple interfaces at the same time.
Q12. What are the differences between abstract class and interface?
- “interface” keyword is for declaring interfaces but the “abstract” keyword is to declare an abstract class.
- The interface supports multiple-inheritance but abstract class doesn’t support multiple-inheritance.
- A class implements an interface but a class extends an abstract class.
- All members of an interface are public but members of an abstract class can be private, protected etc.
- an interface can’t have any constructors but an abstract class can have constructors
Q13. What is a default method?
Default methods were introduced in Java 8. Before Java 8, interfaces could contain only abstract methods, and only the classes implementing these interfaces could have the implementation code. Starting from Java 8, we can also add methods with an implementation in interfaces. Overriding these methods in the implementing class is optional.
Q14. How is a subclass different from an inner class?
- The inner class is in the same file as the parent class but a subclass can be placed in a separate file.
- To get the instance of an inner class, we need the instance of the parent class first. But for a subclass, we can create an instance directly.
Q15. What are the different features of Java 8?
Please read the New Features in Java 8 for the detail
Summary
In this post, we covered some of the important Java interview questions. We wish you the best of luck and hope that these questions will help you in your next interview.
|
https://www.javadevjournal.com/java/java-interview-questions/
|
CC-MAIN-2021-43
|
refinedweb
| 1,299
| 59.7
|
Expert Reviewed
wikiHow to Invest Small Amounts of Money Wisely
Three Parts: Getting Ready to Invest, Choosing Good Investments, Focusing on the Future, and Community Q&A
Contrary to popular belief, the stock market is not just for rich people. Investing is one of the best ways for anyone to create wealth and become financially independent. A strategy of investing small amounts continuously can eventually result in what is referred to as the snowball effect, in which small amounts gain in size and momentum and ultimately lead to exponential growth. To accomplish this feat, you must implement a proper strategy and stay patient, disciplined, and diligent. These instructions will help you get started in making small but smart investments.
Steps
Part 1
Getting Ready to Invest
- 1. Ensure investing is right for you. Investing in the stock market involves risk, and this includes the risk of permanently losing money. Before investing, always ensure you have your basic financial needs taken care of in the event of a job loss or catastrophic event.
- Make sure you have 3 to 6 months of your income readily available in a savings account. This ensures that if you quickly need money, you will not need to rely on selling your stocks. Even relatively "safe" stocks can fluctuate dramatically over time, and there is always a probability your stock could be below what you bought it for when you need cash.
- Ensure your insurance needs are met. Before allocating a portion of your monthly income to investing, make sure you own proper insurance on your assets, as well as on your health.
- Remember to never depend on investment money to cover any catastrophic event, as investments do fluctuate over time. For example, if your savings were invested in the stock market in 2008, and you also needed to spend 6 months off work due to an illness, you would have been forced to sell your stocks at a potential 50% loss due to the market crash at the time. By having proper savings and insurance, your basic needs are always covered regardless of stock market volatility.
- 2. Choose the appropriate type of account. Depending on your investment needs, there are several different types of accounts you may want to consider opening. Each of these accounts represents a vehicle in which to hold your investments.
- penalty in these accounts, as opposed to investments in tax deferred accounts. [1][2]
- A traditional Individual Retirement Account (IRA) allows for tax-deductible contributions but limits how much you can contribute. An IRA doesn't allow you to withdraw funds until you reach retirement age (unless you're willing to pay a penalty). You would be required to start withdrawing funds by age 70½. The benefit is that earnings in the account compound tax-deferred: if you invest $1,000 and earn 5%, you will have $1,050 after the first year. This means the next year, you will earn 5% on $1050. The trade-off is less access to money due to the penalty for early withdrawal.[3]
- Roth Individual Retirement Accounts do not allow for tax-deductible contributions but do allow for tax-free withdrawals in retirement. Roth IRAs do not require you to make withdrawals by a certain age, making them a good way to transfer wealth to heirs. [4]
- Any of these can be effective vehicles for investing. Spend some time learning more about your options before making a decision.
- 3. Implement dollar cost averaging. While this may sound complex, dollar cost averaging simply refers to the fact that -- by investing the same amount each month -- your average purchase price will reflect the average share price over time. Dollar cost averaging reduces risk due to the fact that by investing small sums on regular intervals, you reduce your odds of accidentally investing before a large downturn. It is a main reason why you should set up a regular schedule of monthly investing. In addition, it can also work to reduce costs, since when shares drop, your same monthly investment will purchase more of the lower cost shares.
- When you invest money in a stock, you purchase shares for a particular price. If you can spend $500 per month, and the stock you like costs $5 per share, you can afford 100 shares.
- By putting a fixed amount of money into a stock each month ($500 for example), you can lower the price you pay for your shares, and thereby make more money when the stock goes up, due to a lower cost.
- This occurs because when the price of the shares drops, your monthly $500 will be able to purchase more shares, and when the price rises, your monthly $500 will purchase less. The end result is your average purchase price will lower over time.
- It is important to note that the opposite is also true -- if shares are constantly rising, your regular contribution will buy fewer and fewer shares, raising your average purchase price over time. However, your shares will also be raising in price so you will still profit. The key is to have a disciplined approach of investing at regular intervals, regardless of price, and avoid "timing the market".
- At the same time, your frequent, smaller contributions ensure that no relatively large sum is invested before a market downturn, thereby reducing risk.
- 4. Explore compounding. Compounding is an essential concept in investing, and refers to a stock (or any asset) generating earnings based on its reinvested earnings.
- This is best explained through an example. Assume you invest $1000 in a stock in one year, and that stock pays a dividend of 5% each year. At the end of year one, you will have $1050. In year two, the stock will pay the same 5%, but now the 5% will be based on the $1050 you have. As a result, you will receive $52.50 in dividends, as opposed to $50 in the first year.
- Over time, this can produce huge growth. If you simply let that $1000 sit in account earning a 5% dividend, over 40 years, it would be worth over $7000 in 40 years. If you contribute an additional $1000 each year, it would be worth $133,000 in 40 years. If you started contributing $500 per month in year two, it would be worth nearly $800,000 after 40 years.
- Keep in mind that since this is an example, we assumed the value of the stock and the dividend stayed constant. In reality, they would likely fluctuate, which could result in substantially more or less money after 40 years. If you want to check the arithmetic yourself, see the short calculation below.
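Here is that short calculation in a few lines of Python (it assumes a constant 5% annual return with contributions made at the start of each year, which real markets will not deliver):
def future_value(initial, yearly_contribution, rate, years):
    balance = initial
    for _ in range(years):
        balance = (balance + yearly_contribution) * (1 + rate)
    return balance

print(round(future_value(1000, 0, 0.05, 40)))     # roughly 7,000 -- the "over $7000" figure
print(round(future_value(1000, 1000, 0.05, 40)))  # roughly 134,000 -- the "$133,000" figure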
Part 2
Choosing Good Investments
- 1. Avoid concentration in a few stocks. The concept of not having all your eggs in one basket is key in investing. To start, your focus should be on getting broad diversification, or having your money spread out over many different stocks.[5]
- Just buying a single stock exposes you to the risk of that stock losing significant value. If you buy many stocks over many different industries, this risk can be reduced.
- For example, if the price of oil falls and your oil stock drops by 20%, it is possible that your retail stock will increase in value due to customers having more spending money as a result of lower gas prices. Your information technology stock may stay flat. The end result is your portfolio sees less downside
- One good way to gain diversification is to invest in a product that provides this diversification for you. This can include mutual funds or ETF's. Due to their instant diversification, these provide a good option for beginner investors.[6][7]
- 2. Explore investment options. There are many different types of investment options. However, since this article focuses on the stock market, there are three primary ways to gain stock market exposure.
- Consider an ETF index fund. An exchange-traded index fund is a passive portfolio of stocks and/or bonds that aims to accomplish a set of objectives. Often, this objective is to track some broader index (like the S&P 500 or the NASDAQ). If you buy an ETF that tracks the S&P 500, for example, you are literally purchasing stock in 500 companies, which provides enormous diversification. One of the benefits of ETFs is their low fees. Management of these funds is minimal, so the client doesn't pay much for their service.[8]
- Consider an actively managed mutual fund. An actively managed mutual fund is a pool of money from a group of investors that is used to purchase a group of stocks or bonds, according to some strategy or objective. One of the benefits of mutual funds is professional management. These funds are overseen by professional investors who invest your money in a diversified way and will respond to changes in the market (as noted above). This is the key difference between mutual funds and ETF's -- mutual funds have managers actively picking stocks according to a strategy, whereas ETF's simply track an index. One of the downsides is that they tend to be more expensive than owning an ETF, because you pay an extra cost for the more active management service.[9][10]
- Consider investing in individual stocks. If you have the time, knowledge, and interest to research stocks, they can provide significant return. Be advised that unlike mutual funds or ETF's which are highly diversified, your individual portfolio will likely be less diversified and therefore higher risk. To reduce this risk, refrain from investing more than 20% of your portfolio in one stock. This provides some of the diversification benefit that mutual funds or ETF's provide.
- 3. Find a broker or mutual fund company that meets your needs. Utilize a brokerage or mutual fund firm that will make investments on your behalf. You will want to focus on both cost and value of the services the broker will provide you. [11]
- For example, there are types of accounts that allow you to deposit money and make purchases with very low commissions. This may be perfect for someone who already knows how they want to invest their money. [12]
- If you need professional advice regarding investments, you may need to settle for a place with higher commissions in return for a higher level of customer service. [13]
- Given the large number of discount brokerage firms available, you should be able to find a place that charges low commissions while meeting your customer-service needs.
- Each brokerage house has different pricing plans. Pay close attention to the details regarding the products you plan to use most often.
Part 3
Focusing on the Future
- 1. Be patient. The number-one obstacle that prevents investors from seeing the huge effects of compounding mentioned earlier is lack of patience. Indeed, it is difficult to watch a small balance grow slowly and, in some instances, lose money in the short term. [15]
- Try to remind yourself that you are playing a long game. The lack of immediate, large profits should not be taken as a sign of failure. For example, if you purchase a stock, you can expect to see it fluctuate between profit and loss. Often, a stock will fall before it rises. Remember that you are buying a piece of a concrete business, and in the same way you would not be discouraged if the value of a gas station you owned declined over the course of a week or a month, you should not be discouraged if the value of your stock fluctuates. Focus on the company's earnings over time to gauge its success or failure, and the stock will follow.
- 2. Keep up the pace. Concentrate on the pace of your contributions. Stick to the amount and frequency you decided upon earlier, and let your investment build up slowly. [16]
- 3. Stay informed and look ahead. In this day and age, with technology that can provide you with the information you seek in an instant, it is tough to look several years to the future while monitoring your investment balances. Those who do, however, will slowly build their snowball until it builds up speed and helps them achieve their financial goals.
- 4. Stay the course. The second biggest obstacle to achieving compounding is the temptation to change your strategy by chasing fast returns from investments with recent big gains or selling investments with recent losses. That's actually the opposite of what most really successful investors do. [18]
- In other words, don’t chase returns. Investments that are experiencing very high returns can just as quickly turn around and go down. "Chasing returns" can often be a disaster. [19] Stick to your original strategy, assuming it was well thought out to begin with.
- Stay put and don’t repeatedly enter and exit the market. History shows that being out of the market on the four or five biggest up-days in each calendar year can be the difference between making and losing money. You won't recognize those days until they've already passed.
- Avoid timing the market. For example, you may be tempted to sell when you feel the market may decline, or avoid investing because you feel the economy is in a recession. Research has proven the most effective approach is to simply invest at a steady pace and use the dollar cost averaging strategy discussed above.
- Studies have found that people who simply dollar cost average and stay invested do far better than people who try to time the market, invest a lump sum every year on New Year's, or avoid stocks.[20]
Community Q&A
- How do I benefit if I sell stocks at a profit, then transfer those funds to another company's stock? Assuming you have invested $100 and earned a profit of $2 from the initial investment, you have benefited by $2. If you then invest that same $2 in another company's stock, you have no additional benefit yet. Wait for your eggs to hatch!
- How can I invest in the stock market? To invest, a few questions need to be answered. Financial institution: look for a brokerage company that can provide you with investment opportunities. You can choose mutual funds (different countries know this financial instrument by different names) or invest in companies based on your personal judgement. Mutual funds provide you with risk and return ratings. If you want to invest yourself, you need to perform financial analysis; usually big companies are low risk, low return (blue chip). Picking stocks on your own is higher risk than mutual funds, unless you are a professional financial analyst.
- I want to invest money in a small business. What should I ask for in return?Vanguard's small-cap Explorer fund has returned 9% annually for 50 years and doubled investors' money in the last ten years, all without much principal risk. If you're going to take the considerable risk of investing in an individual small business, you should ask for a return on investment much higher than that. Better yet, invest in a small-cap fund. There are many available.
- Like many, we have our money divided up between RRSPs, TFSA's and investments. This results in smaller pools of money that our adviser says can't be combined to access better investment options. What can be done?
- How can I find a reliable financial adviser or broker? Look for a smaller but diversified adviser who charges lower fees and diversifies your investments.
- Can we cheat leading to invest in stock market?
- How do I generate a lead for property business if I am an agent?
Tips
- Ask for help in the beginning. Seek the counsel of a professional or a financially experienced friend or relative. Don't be too proud to admit you don't know everything already. Lots of people would love to help you avoid early mistakes.
- Avoid the temptation of high-risk, fast-return investments, especially in the early stages of your investing activities when you could lose everything in one bad move.
- Keep track of your investments for tax and budget purposes. Having clear, easily accessible records will make things much easier for you later on.
Warnings
- Be prepared to wait a while before you see a significant return on your investments. Small, low-risk investments take a while to pay off.
- Even the safest investment comes with risk. Don't invest more than you can afford to lose.
https://www.wikihow.com/Invest-Small-Amounts-of-Money-Wisely
21 October 2010 10:18 [Source: ICIS news]
PRAGUE (ICIS)--Unipetrol has experienced weakening demand for olefins that will affect its operating profit for the third quarter, the Czech petrochemical group said on Thursday.
Sales volumes in the third quarter came in at 421,000 tonnes, 11% down on the second quarter and 9% down on the third quarter of 2009, it said.
While the operating result would be positive, it would be worse than the figure reported for the first quarter of this year, the company added.
Compared with the second quarter, the third quarter saw the olefin margin fall 5% on weakness in benzene spreads, and a 12% improvement in the polyolefin margin attributed to strong polypropylene (PP) spreads, Unipetrol said.
A rescheduled ethylene steam cracker shutdown at Litvinov in the Czech Republic.
Unipetrol is to report its third-quarter results on 29.
http://www.icis.com/Articles/2010/10/21/9403187/unipetrol-q3-profits-to-be-hit-by-weakening-demand-for-olefins.html
Python Classes
Introduction
The basic idea behind object-oriented programming (OOP) is to combine data and the procedures that operate on that data (known as methods) into a single unit. Such a unit is called an object.
Python is an object-oriented language; everything in Python is an object.
We have already worked with some objects in Python (see the Python data types chapter). For example, strings and lists are objects defined by the string and list classes, which are available by default in Python. Let's declare two objects, a string and a list, and test their types with the type() function.
As string1 is an object, the string "Good Morning" can produce an uppercase or lowercase version of itself by calling the upper() and lower() methods associated with strings. Check it in the Python IDLE.
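For instance, a quick interactive session might look like this (the names string1 and list1 are just illustrative; output is shown after each prompt):

>>> string1 = "Good Morning"
>>> list1 = [1, 2, 3]
>>> type(string1)
<class 'str'>
>>> type(list1)
<class 'list'>
>>> string1.upper()
'GOOD MORNING'
>>> string1.lower()
'good morning'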
>>>help(str)
Before introducing classes we must discuss something about local variables, global statement, and nonlocal statement as well as Namespaces and scope rules.
Local Variables
When a variable is declared inside a function, that variable is accessible only from the function or statement block where it is declared. It has no relation to any other variable with the same name declared outside the function; the variable is therefore local to the function. See the following example.
# python-local-variable.py
def function_local(a):
    print('a is -> ', a)
    a = 50
    print('After new value within the function a is -> ', a)
a = 100
function_local(40)
print('Value of a is ->', a)
Output
a is -> 40
After new value within the function a is -> 50
Value of a is -> 100
Explanation
- Line No.- 2 : Declares the function function_local(a) with one parameter.
- Line No.- 3 : Prints 'a', which uses the value of the parameter; the call function_local(40) on line 7 passes 40, so 40 is printed.
- Line No.- 4 : Assigns the value 50 to 'a'.
- Line No.- 5 : Prints 'a' again; as 'a' is local to the function, its value is now 50.
- Line No.- 8 : This is the last print statement, and here 'a' is 100. Nothing we did inside the function has any effect outside it. This is what is meant by the scope of the variable.
global statement
The purpose of the global statement is to allow a function to assign a value to a variable that is declared outside the function. Free variables (see Line No. 6 in the previous example) may refer to globals without being declared global. The syntax of the global statement is: global var_name1, var_name2, ...
See the following example :
# python-global-variable.py
def function_local():
    global a
    print('a is -> ', a)
    a = 50
    print('After new value within the function a is -> ', a)
a = 100
function_local()
print('Value of a is ->', a)
Output
a is -> 100
After new value within the function a is -> 50
Value of a is -> 50
Explanation
- Line No.- 3 : The 'global a' statement makes 'a' refer to the variable declared outside the function, so the first print shows 100.
- Line No.- 5 : Assigns the value 50 to 'a'; because 'a' is global, it holds the same value inside and outside the function unless a new value is assigned.
nonlocal statement
The nonlocal statement is used to rebind variables found outside of the innermost scope. See the following example without a nonlocal statement.
def outside():
    a = 10
    def inside():
        a = 20
        print("Inside a ->", a)
    inside()
    print("outside a->", a)
outside()
In the above example, the first print() statement simply prints the value of 'a', which is 20, as 'a' is local to the inside() function. The second print() statement prints the value of 'a', which is 10, as the assignment inside the inside() function has no effect on the outer 'a'. Now we introduce a nonlocal statement in the inside() function and the code becomes:
def outside():
    a = 10
    def inside():
        nonlocal a
        a = 20
        print("The value of a in inside() function - ", a)
    inside()
    print("The value of a in outside() function - ", a)
outside()
This time the second print() statement prints the value of 'a' as 20, because the nonlocal statement rebinds the enclosing variable 'a'.
Python Scopes and Namespaces
In general, a namespace is a naming system used to create unique names. In daily experience, railway stations, airports, state capitals and the directories of a filesystem all have unique names. Like other programming languages, Python uses namespaces for its identifiers.
A namespace is a mapping from names to objects.
- For example 'a' maps to [1, 2, 3] or 'a' maps to the value 25.
- Most namespaces currently implemented as Python dictionaries (containing the names and values of the objects).
- Names in different namespaces have absolutely no relationship (e.g. the variable 'a' can be bound to different objects in different namespaces).
- Examples of namespace : The global name in a module, local names in a function invocation, built-in names (containing functions such as min()), attributes of an object.
Python creates namespaces at different times.
- The built-in namespace is created when Python interpreter starts and is never deleted.
- The global namespace for a module is created when the module definition is read in, and it lasts until the interpreter quits.
- The local namespace for a function is created when the function is called, and deleted when the function returns.
A scope is a textual region of a Python program where a namespace is directly accessible. The scopes in Python, in the order they are searched, are as follows (a short example follows the list):
- The local scope, searched first, contains the local name.
- Enclosing scope (in an enclosing function) contains non-local and non-global names.
- The current module’s global names.
- The outermost scope is the namespace containing built-in names.
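As a rough illustration of this search order, consider the following sketch (the variable name x and the function names are purely illustrative):

x = 'global'                # module (global) scope

def outer():
    x = 'enclosing'         # enclosing scope

    def inner():
        x = 'local'         # local scope
        print(x)            # prints 'local' -- the local scope is searched first

    inner()
    print(x)                # prints 'enclosing'

outer()
print(x)                    # prints 'global'
print(len('abc'))           # len is found in the built-in namespace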
Defining a class
In object oriented programming classes and objects are the main feature. A class creates a new data type and objects are instances of a class which follow the definition given inside the class. Here is a simple form of class definition.
class ClassName:
    Statement-1
    Statement-2
    ....
    Statement-n
A class definition starts with the keyword 'class' followed by the name of the class and a colon.
The statements within a class definition may be function definitions, data members or other statements.
When a class definition is entered, a new namespace is created, and used as the local scope.
Creating a Class
Here we create a simple class using the class keyword followed by the class name (Student), which is followed by an indented block containing its attributes (student class, roll number, name).
#studentdetails.py
class Student:
    stu_class = 'V'
    stu_roll_no = 12
    stu_name = "David"
Class Objects
Class objects support two kinds of operations: attribute references and instantiation. Attribute references use the standard syntax used for all attribute references in Python: obj.name. So if the class definition (the previous example with a method added) looks like this
#studentdetails1.py
class Student:
    """A simple example class"""
    stu_class = 'V'
    stu_roll_no = 12
    stu_name = "David"
    def messg(self):
        return 'New Session will start soon.'
then Student.stu_class, Student.stu_roll_no and Student.stu_name are valid attribute references and return 'V', 12 and 'David' respectively. Student.messg returns a function object. In Python, self is the name used for the first argument of a method, which is what distinguishes a method from an ordinary function: rather than passing the object as an explicit parameter, the word self refers to the object itself. For example, if a method is defined as avg(self, x, y, z), it is called as a.avg(x, y, z). See the output of the attributes in the Python Shell sketched below.
__doc__ is also a valid attribute which returns the docstring of the class.
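A minimal interactive session illustrating these attribute references (assuming the studentdetails1.py definitions above have been entered) might look like this:

>>> Student.stu_class
'V'
>>> Student.stu_roll_no
12
>>> Student.stu_name
'David'
>>> Student.messg
<function Student.messg at 0x...>
>>> Student.__doc__
'A simple example class'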
__init__ method
There are many method names in Python that have special importance. A class may define a special method named __init__, which does some initialization work and serves as a constructor for the class. Like other functions and methods, __init__ can take any number of arguments. The __init__ method is run as soon as an object of a class is instantiated; class instantiation automatically invokes __init__() for the newly created instance. See the following example; a new, initialized instance can then be obtained as shown after the class definition.
#studentdetailsinit.py
class Student:
    """A simple example class"""
    def __init__(self, sclass, sroll, sname):
        self.c = sclass
        self.r = sroll
        self.n = sname
    def messg(self):
        return 'New Session will start soon.'
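For example (the argument values here are just illustrative):

>>> studentobj = Student('V', 12, 'David')
>>> studentobj.c
'V'
>>> studentobj.r
12
>>> studentobj.n
'David'
>>> studentobj.messg()
'New Session will start soon.'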
Inheritance
The concept of inheritance provides one of the most important features of object-oriented programming: reuse of code. Inheritance is the process of creating a new class (the derived class) based on an existing one (the base class), where the new class inherits all the attributes and methods of the existing class.
Like other object-oriented languages, Python allows inheritance from a parent (or base) class, as well as multiple inheritance, in which a class inherits attributes and methods from more than one parent. The single and multiple inheritance syntax is:
class DerivedClassName(BaseClassName):
    Statement-1
    Statement-2
    ....
    Statement-n

class DerivedClassName(BaseClassName1, BaseClassName2, ...):
    Statement-1
    Statement-2
    ....
    Statement-n
Example :
In a company, factory staff and office staff have certain common properties - all have a name, designation, age, etc. They can therefore be grouped under a class called CompanyMember. Apart from sharing those common features, each subclass has its own characteristic - FactoryStaff gets an overtime allowance, while OfficeStaff gets a travelling allowance. The derived classes (FactoryStaff and OfficeStaff) each have their own characteristics and, in addition, inherit the properties of the base class (CompanyMember). See the example code.
# python-inheritance.py
class CompanyMember:
    '''Represents a Company Member.'''
    def __init__(self, name, designation, age):
        self.name = name
        self.designation = designation
        self.age = age
    def tell(self):
        '''Details of an employee.'''
        print('Name: ', self.name, '\nDesignation : ', self.designation, '\nAge : ', self.age)

class FactoryStaff(CompanyMember):
    '''Represents a Factory Staff.'''
    def __init__(self, name, designation, age, overtime_allow):
        CompanyMember.__init__(self, name, designation, age)
        self.overtime_allow = overtime_allow
        CompanyMember.tell(self)
        print('Overtime Allowance : ', self.overtime_allow)

class OfficeStaff(CompanyMember):
    '''Represents an Office Staff.'''
    def __init__(self, name, designation, age, travelling_allow):
        CompanyMember.__init__(self, name, designation, age)
        # store under the attribute name that is printed below
        self.travelling_allow = travelling_allow
        CompanyMember.tell(self)
        print('Traveling Allowance : ', self.travelling_allow)
Now execute the classes in the Python Shell and see the output; a sample session is sketched below.
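A possible session (the employee details are made up for illustration) looks roughly like this:

>>> fs = FactoryStaff('John', 'Machine Operator', 32, 1500)
Name:  John 
Designation :  Machine Operator 
Age :  32
Overtime Allowance :  1500
>>> ofs = OfficeStaff('Mary', 'Accountant', 29, 800)
Name:  Mary 
Designation :  Accountant 
Age :  29
Traveling Allowance :  800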
http://www.w3resource.com/python/python-object-classes.php
File::Tail::Scribe - Monitor and send the tail of files to a Scribe logging system.
use File::Tail::Scribe; my $log = File::Tail::Scribe->new( directories => $args{dirs}, msg_filter => sub { my ($self, $filename, $line) = @_; return ('info', 'httpd', "$filename\t$line"); }, ); $log->watch_files();
Basically this module connects File::Tail::Dir to Log::Dispatch::Scribe.
It monitors files in a given directory (or set of directories), such as Apache log files in /var/log/httpd, and as the log files are written to, takes the changes and sends them to a running instance of the Scribe logging system.
The Scribe and Thrift Perl modules from their respective source distributions are required and not available as CPAN dependencies. Further information is available here: <>
$tailer = File::Tail::Scribe->new(%options);
Creates a new instance. Options are:
See the equivalent options in File::Tail::Dir: "directories" in File::Tail::Dir, "filter" in File::Tail::Dir, "exclude" in File::Tail::Dir, "follow_symlinks" in File::Tail::Dir, "sleep_interval" in File::Tail::Dir, "statefilename" in File::Tail::Dir, "no_init" in File::Tail::Dir, "max_age" in File::Tail::Dir.
This is a hash containing all of the options to pass to Log::Dispatch::Scribe->new().
An optional coderef that can be used to preprocess messages before they are sent to Scribe. The code is passed ($self, $filename, $line), i.e. the File::Tail::Scribe instance, the filename of the file that changed, and the line of text that was added. It must return ($level, $category, $message), i.e. the log level (info, debug etc), the Scribe category, and the log line that will be sent to Scribe. An example:
msg_filter => sub { my $self = shift; my $filename = shift; my $line = shift; $filename =~ s{^.*/}{}; # remove leading dirs $filename =~ s{\.[^.]*$}{}; # remove extension $filename ||= 'default'; # in case everything gets removed return ('info', 'httpd', "$filename\t$line"); };
If no msg_filter is provided, the log level is given by default_level, the category is the filename after removing leading paths and filename extensions, and the message is the log line as given.
Default logging level. May be set to any valid Log::Dispatch level (debug, info, notice, warning, error, critical, alert, emergency). Defaults to 'info'.
File::Tail::Scribe provides the same methods as File::Tail::Dir, plus the following:
Set/get the "msg_filter" and "default_level" attributes as described above.
Jon Schutz,
<jon at jschutz.net>
Please report any bugs or feature requests to
bug-file-tail::Scribe
You can also look for information at:
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~jjschutz/File-Tail-Scribe-0.13/lib/File/Tail/Scribe.pm
Hi,
Can you plz provide the HTML code for the above. I tried passing APPLET CODE = "xyz.class"
The output shows nothing. Should I provide Param name and value?
Thanks in advance.
applet viewing problem
Pls. I am finding probs in viewing Java applet in web browsers. I have Mozilla Firefox, Internet Explorer and HotJava web browsers installed. It shows"Loading java applet failed".
The java console shows:
Java Plug-in 1.6.0_02
Using JRE version 1.
Applet - Searching a word in a file
how to search a word in a file by uploading the file into the applet window
type of file
when I try this tutorials with a file which was written in arabic language
the arabic laguage appear as
this (
äÇÁ Úáì ÇáÎØÇÈ ÇáæÇÑÏ áäÇ ãä ãÏíÑí)How I can appear this in the right format.I try Encoding but nothing had happened.thank you very muc
Quick question
Hello. Thank you for this tutorial. I was wondering where to put the file that you are reading. Is it in the project folder or can you designate a source like C:\\test1.txt? And how would you do so in this program. Thanks
applet code - Java Beginners
://
Hope that it will be helpful...applet code hi friends..
i have to oen one applet, in that applet code should be apper what we have written for the applet... this is my
Applet
;
Introduction
Applet is java program that can be embedded into HTML pages. Java applets... example. Alternatively
you can also run this example from your favorite java... you will see, how to write an applet
program. Java source of applet
applet - Applet
in Java Applet.",40,20);
}
}
2) Call this applet with html code.... Hi Friend,
Try the following code:
1)Create an applet...:
Thanks
Applet - Applet
the applet concept and HTML?
what is mean by swing? Hi friend,
Applet
Applet is java program that can be embedded into HTML pages. Java applets... in details to visit....
Applet - Passing Parameter in Java Applet
like Welcome in Passing parameter in java applet
example. Alternatively you... of retrieving the parameter values passed from
the html page. So, you can pass... example."
Here is the code for the Java Program :
Java - Applet Hello World
;
This example introduces you with the Applet in Java. You will learn how to develop applet code and run in the browser.
Applet...;
CODE tag is used to specify the name of Java applet class name. To test
java applet - Applet
java applet I want to close applet window which is open by another button of applet program. plz tell me! Hi Friend,
Try...://
Thanks
unable to see the output of applet. - Applet
the following tutorial
but the problem....
u just copy that java source code and compile that using javac
then you
how to run applet - Applet
://
Hope that it will be helpful for you.Even then if you got any problem, please send the code.
Thanks
Applet - Applet
------------------------");
g.drawString("Demo of Java Applet Window Event Program");
g.drawString("Java...; Hi Friend,
Try the following code:
import java.awt.*;
import...Applet Namaste, I want to create a Menu, the menu name is "Display
http://www.roseindia.net/tutorialhelp/allcomments/104
I have a menu that I would like to store in vuex (rather than in the top level component). This menu has titles that are translated using vue-i18n. Inside the component, everything works great, but when I move my menu to my vuex store (inside state.js), my app refuses to load and I get errors that are all similar to:
state.js?6b3c:16 Uncaught TypeError: src_boot_i18n__WEBPACK_IMPORTED_MODULE_0__.default.t is not a function
my state.js file looks like this (and variations of it):
// state.js
import i18n from 'src/boot/i18n'

export default {
  storeVersion: '1',
  version: {
    version: 0,
    date: 0,
    PWA: 0
  },
  menu: [
    {
      active: 'dashboard',
      main: [
        {
          id: 'dashboard',
          parent: null,
          title: this.$i18n.t('dashboard'),
          subtitle: this.$i18n.t('dashboardSub'),
          url: '/dashboard'
        },
        // etc, etc
I have tried replacing translatable strings like
this.$i18n.t('dashboard') with:
i18n.t('dashboard')
$i18n.t('dashboard')
app.i18n.t('dashboard')
i18n.$t('dashboard')
$i18n.$t('dashboard')
app.i18n.$t('dashboard')
i18n.tc('dashboard')
$i18n.tc('dashboard')
app.i18n.tc('dashboard')
and so on, but I keep getting variations on the error above telling me that t (or tc, or whatever) is not a function. Any idea if this is even possible to do? vue-i18n is installed and working flawlessly in all of my components, I just wanted to move this one global menu item to the store, but it seems I am missing something fundamental. Any advice appreciated. Thanks!
- s.molinari last edited by
What does your boot file look like ?
Scott
Here is my boot file:
import VueI18n from 'vue-i18n'
import messages from 'src/i18n'

let i18n

export default ({ app, Vue }) => {
  Vue.use(VueI18n)

  // Set i18n instance on app
  app.i18n = new VueI18n({
    locale: 'en-gb',
    fallbackLocale: 'en-gb',
    silentFallbackWarn: true,
    silentTranslationWarn: true,
    // enableInSFC: true,
    messages
  })

  i18n = app.i18n
}

export { i18n }
- s.molinari last edited by
Sorry, but I’m not certain what is wrong. It should work.
Scott
Thanks for taking a look anyway, I appreciate it.
Interestingly (and although it does not throw an error), when I console.log(i18n) I get undefined.
And if I try to use const i18n = require('src/boot/i18n') instead of import, console.log(i18n) gives me:
Module {__esModule: true, Symbol(Symbol.toStringTag): "Module", default: ƒ} default: ƒ (_ref) arguments: [Exception: TypeError: 'caller', 'callee', and 'arguments' properties may not be accessed on strict mode functions or the arguments objects for calls to them at Function.invokeGetter (<anonymous>:1:142)] caller: [Exception: TypeError: 'caller', 'callee', and 'arguments' properties may not be accessed on strict mode functions or the arguments objects for calls to them at Function.invokeGetter (<anonymous>:1:142)] length: 1 name: "" prototype: constructor: ƒ (_ref) __proto__: Object __proto__: ƒ () [[FunctionLocation]]: i18n.js?8847:6 [[Scopes]]: Scopes[3] i18n: (...) Symbol(Symbol.toStringTag): "Module" __esModule: true get i18n: ƒ () __proto__: Object
Aren’t the boot files supposed to be loaded before anything else? It feels like i18n is not being loaded in time and that is why it comes back undefined?
- metalsadman last edited by
This post is deleted!
@metalsadman Thanks, I already tried it both ways, made no difference alas..
- metalsadman last edited by
Hmm, you are using this in your state. Remove that as well.
@metalsadman definitely not that, see above for all the variations I have tried.
I imported i18n in vuex modules where I need it this way:
import { i18n } from 'src/boot/i18n'
and then accessed i18n translations in my vuex actions this way:
let myMessage = i18n.t('messages.myMessage')
Works like magic.
You should import i18n like this:
import { i18n } from '../boot/i18n'
because i18n is not a default export.
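Putting the accepted answers together, a corrected state.js might look like the following sketch (menu entries and translation keys are the ones from the original question; this is my reading of the fix, not code posted in the thread):

// state.js -- using the named export from the boot file
import { i18n } from 'src/boot/i18n'

export default {
  storeVersion: '1',
  menu: [
    {
      active: 'dashboard',
      main: [
        {
          id: 'dashboard',
          parent: null,
          title: i18n.t('dashboard'),        // no `this` -- use the imported instance
          subtitle: i18n.t('dashboardSub'),
          url: '/dashboard'
        }
        // etc, etc
      ]
    }
  ]
}

Note that the boot file has to have run before this module evaluates i18n.t(); if it has not, resolving the titles lazily (for example in a getter) avoids the undefined instance seen earlier in the thread.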
https://forum.quasar-framework.org/topic/5945/can-t-access-i18n-translation-inside-vuex-state-js-file
Member 849873 wrote:I have been given access to a data feed, which provides me with live, real time data. I need to cache this data and do some processing on it, and I can only create one connection to this service - it will block any subsequent attempts from the same IP. Therefore directly connecting via a website is not going to be feasible.
Member 849873 wrote:a back end service that retrieves the data
Member 849873 wrote: As the data is real time - the ability to update the web page in real time would also be desirable.
[CommonApplicationData]\[Manufacturer]\[ProductName]
Brian C Hart wrote:I have a new requirement to write a C# service using .NET 3.5 and what it does
is, after a timer goes off, it runs various database queries to retrieve needed
information and then writes out a text file to disk. The text file contains the
data, but the file has to be formatted in accordance with a set protocol.
private UserControl _MainContent;
public UserControl MainContent
{
get { return _MainContent; }
set
{
if (_MainContent == value)
return;
_MainContent = value;
RaisePropertyChanged("MainContent");
}
}
public void LoadView(AppViews ViewToLoad)
{
_ViewModeBase vm = null;
UserControl view = null;
switch (ViewToLoad)
{
case AppViews.CustomerAccount:
vm = new CustomerAccountViewModel();
view = new CustomerAccountView();
break;
// Once CASE for each view
}
view.DataContext = vm;
MainContent = view;
}
<ResourceDictionary
xmlns=""
xmlns:x=""
xmlns:vm="clr-namespace:DemoApp.ViewModel">
DataTemplate
DataType
DataTemplate
<ContentPresenter x:Name="MainContent"
Grid.Row="2"
Grid.
W∴ Balboos wrote:Related to this is the question of a the feasibility/sense of a single copy (rather than clones in each workspace) being a possible development target?
W∴ Balboos wrote:So, running individual Windows in VMs, resident upon a remote server: rich or thin? If a table PC runs stand-alone, how could that qualify as a thin client?
W∴ Balboos wrote:I know that the applications developed for Win-XP and Visual Studio will work in the TC environment: my argument to management is that I should be developing in the environment in which the apps will run in order to be sure they actually do run (an environment that won't be wiped out every time the user instances get a general reset). One of the managers then wondered if there is, perhaps, specialization to working within this TC environment (ergo, special tools) rather than considering that, virtual though it may be, it may be treated as if programming for a standard desktop.
W∴ Balboos wrote:Really, they need to not refresh the developer VM's
W∴ Balboos wrote:give them full admin privilege on their area,
W∴ Balboos wrote:One problem is that a number of my applications, when moved to the TC, do not
function correctly. This has various causes - different version of Windows
applications which break my references or a local dedicated impact printer that
required a local driver. I'm requesting that I be given an
instance that is non-volatile
jschell wrote:1. You must have access to systems that match each different Y. And your project schedule must include time for FULLY testing on each.
jschell wrote:f you choose option 1 then you MUST have access to systems that match each environment. It doesn't matter how that is physically managed.
https://www.codeproject.com/messages/4296257/re-commonapplicationdata.aspx
Introduction: Weather Dashboard Using MKR1000 and Losant
This project shows you how to make use of MKR1000 and Losant platform to build a simple weather dashboard monitoring temperature and humidity. With additional sensors, other weather metrics can be collected and analyzed so that more complex dashboard can be built.
Step 1: Hardware, Software and Platform
These are the component requited:
- Arduino MKR1000 & Genuino MKR1000
- DHT11 Temperature & Humidity Sensor
- Arduino IDE
- Losant Platform
Step 2: Setup MKR1000 With WiFI
For setup instructions on how to configure MKR1000 with WiFi, please refer to this resource. To setup WiFi, refer to this link.
Step 3: HTTPS
The Arduino MKR1000 supports HTTPS, but the limited memory size requires the certificate be uploaded to WiFi chip. This is a two steps process. First we load a sketch on the board and then run a program on our computer to upload the certificates.
Use the Arduino IDE to load the Firmware Updater sketch onto your board (Examples -> WiFi101 -> Firmware Updater). Second, download the WiFi101 Firmware Updater. Unzip the archive and run winc1500-uploader-gui.exe. The HTTPS certificate for Losant webhooks is issued to triggers.losant.com. Enter triggers.losant.com in the text field, choose your COM port, and upload the certificates.
Step 4: The Schema and Images
This is the schema. Note that the displayed board is an UNO board, as a replacement for MKR1000.
Step 5: Arduino Codes
The following code fragment reads temperature and humidity:
// DHT setting
#include "DHT.h"
#define DHTPIN 2 // what pin we're connected to
#define DHTTYPE DHT11 // DHT 11
DHT dht(DHTPIN, DHTTYPE);
void setup() {
...
dht.begin();
...
}
void loop() {
  ...
  float humidity = dht.readHumidity();
  float temperature = dht.readTemperature();
  ...
}
This request sends data to the Losant server:
char hostname[] = "triggers.losant.com";
char feeduri[] = "your losant webhook uri";
void loop() {
...
structureWebhookRequest(content);
...
}
void structureWebhookRequest(String content) {
  // Close any connection before sending a new request.
  // This will free the socket on the WiFi shield.
  wifiClient.stop();

  String contentType = "application/json";
  // if there's a successful connection:
  if (wifiClient.connect(hostname, 443)) {
    wifiClient.print("POST ");            // Do a POST
    wifiClient.print(feeduri);            // on the feed URI
    wifiClient.println(" HTTP/1.1");
    wifiClient.print("Host: ");
    wifiClient.println(hostname);         // with hostname header
    wifiClient.println("Connection: close");
    wifiClient.print("Content-Type: ");
    wifiClient.println(contentType);
    wifiClient.print("Content-Length: ");
    wifiClient.println(content.length());
    wifiClient.println();
    wifiClient.println(content);
    wifiClient.println();

#ifdef DEBUG
    Serial.println(content);
#endif
  } else {
    // if you couldn't make a connection:
#ifdef DEBUG
    Serial.println();
    Serial.println("connection failed");
#endif
  }
}
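The content string itself is not shown in the fragments above; judging by the workflow's use of data.body.temperature and data.body.humidity, it is presumably a small JSON document assembled in loop(), roughly along these lines (a sketch, not the author's exact code):

void loop() {
  float humidity = dht.readHumidity();
  float temperature = dht.readTemperature();

  // Build a JSON body such as {"temperature":24.00,"humidity":51.00}
  String content = "{\"temperature\":";
  content += String(temperature);
  content += ",\"humidity\":";
  content += String(humidity);
  content += "}";

  structureWebhookRequest(content);

  delay(60000);   // one reading per minute matches the dashboard's time series
}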
Step 6: Losant Platform and Webhooks
Losant is a simple and powerful IoT cloud platform for developing the next generation of connected experiences. Losant offers device management with robust data visualization that reacts in real-time.
In this project, Losant's webhook is used to send the temperature and humidity data to Losant platform. Webhooks allow MKR1000 to trigger the MKR1000 Weather App application workflows via HTTP requests.
Step 7: Setup Losant Elements and Dashboard
To setup a monitoring dashboard in Losant platform, you need to create the following elements: application, device, webhook, workflow, trigger, function logic, device output, and dashboard.
Step 8: Create Application
Create a weather app called MKR1000 Weather App.
Step 9: Create Device
Create a device called MKR1000 D1 with attributes temperature and humidity.
Step 10: Create Webhook
Now create the MKR1000 webhook. Take note of the endpoint URL. This will be used in the codes to send data to the webhook. Refer to the Arduino codes for details.
Step 11: Create Workflow
Here in creating workflow, add the following three components: Webhook Trigger, Function Logic and Device Output. Name the workflow MKR1000 Weather Grabber.
Step 12: Workflow - Add Webhook Trigger
Give the webhook trigger a name "MKR1000 Webhook" and select the correct webhook from the drop-down list.
Step 13: Workflow - Add Function Logic
Accept the default values at this step.
Step 14: Workflow - Add Device Output
Select a device ID and set temperature state as '{{data.body.temperature}}' and humidity state as '{{data.body.humidity}}'.
Step 15: Create Dashboard
Key in a Dashboard name.
Step 16: Add Temperature Block
Set temperature as Number Gauge.
Step 17: Add Humidity Block
Set humidity as Dial Gauge.
Step 18: Add Temperature Vs Humidity Block
This a time series block. Pick the temperature and humidity attributes as data points. Time range is 60 minutes and one data point every 1 minute.
Step 19: The Final Dashboard
This is the final MKR100 Weather App Dashboard.
Very nice looking weather display!
thank you. it will look much nicer if there are more sensors.
http://www.instructables.com/id/Weather-Dashboard-Using-MKR1000-and-Losant/
Hi,

On Mon, Jun 02, 2008 at 03:07:31PM +0200, Peter Zijlstra wrote:
>> you stop running, then the delta is added to runtime.
>
> This is always on the same cpu - when you get migrated you're stopped
> and re-scheduled so that should work out nicely.
>
> So in that sense it shouldn't matter that the rq->clock values can get
> skewed between cpus.
>
> So I'm still a little puzzled by your observations; though it could be
> that the schedstat stuff got broken - I've never really looked too
> closely at it.

Thanks Peter for the explanation... I agree with the above and that is the reason why I did not see weird values with cpu_time. But, run_delay still would suffer skews as the end points for delta could be taken on different cpus due to migration (more so on RT kernel due to the push-pull operations). With the below patch, I could not reproduce the issue I had seen earlier. After every dequeue, we take the delta and start wait measurements from zero when moved to a different rq.

Signed-off-by: Ankita Garg <ankita@in.ibm.com>

Index: linux-2.6.24.4/kernel/sched.c
===================================================================
--- linux-2.6.24.4.orig/kernel/sched.c	2008-06-03 14:14:07.000000000 +0530
+++ linux-2.6.24.4/kernel/sched.c	2008-06-04 12:48:34.000000000 +0530
@@ -948,6 +948,7 @@
 static void dequeue_task(struct rq *rq, struct task_struct *p, int sleep)
 {
+	sched_info_dequeued(p);
 	p->sched_class->dequeue_task(rq, p, sleep);
 	p->se.on_rq = 0;
 }
Index: linux-2.6.24.4/kernel/sched_stats.h
===================================================================
--- linux-2.6.24.4.orig/kernel/sched_stats.h	2008-06-03 14:14:28.000000000 +0530
+++ linux-2.6.24.4/kernel/sched_stats.h	2008-06-05 10:39:39.000000000 +0530
@@ -113,6 +113,13 @@
 	if (rq)
 		rq->rq_sched_info.cpu_time += delta;
 }
+
+static inline void
+rq_sched_info_dequeued(struct rq *rq, unsigned long long delta)
+{
+	if (rq)
+		rq->rq_sched_info.run_delay += delta;
+}
 # define schedstat_inc(rq, field)	do { (rq)->field++; } while (0)
 # define schedstat_add(rq, field, amt)	do { (rq)->field += (amt); } while (0)
 # define schedstat_set(var, val)	do { var = (val); } while (0)
@@ -129,6 +136,11 @@
 #endif
 #if defined(CONFIG_SCHEDSTATS) || defined(CONFIG_TASK_DELAY_ACCT)
+static inline void sched_info_reset_dequeued(struct task_struct *t)
+{
+	t->sched_info.last_queued = 0;
+}
+
 /*
  * Called when a process is dequeued from the active array and given
  * the cpu. We should note that with the exception of interactive
@@ -138,15 +150,22 @@
  * active queue, thus delaying tasks in the expired queue from running;
  * see scheduler_tick()).
  *
- * This function is only called from sched_info_arrive(), rather than
- * dequeue_task(). Even though a task may be queued and dequeued multiple
- * times as it is shuffled about, we're really interested in knowing how
- * long it was from the *first* time it was queued to the time that it
- * finally hit a cpu.
+ * Though we are interested in knowing how long it was from the *first* time a
+ * task was queued to the time that it finally hit a cpu, we call this routine
+ * from dequeue_task() to account for possible rq->clock skew across cpus. The
+ * delta taken on each cpu would annul the skew.
  */
 static inline void sched_info_dequeued(struct task_struct *t)
 {
-	t->sched_info.last_queued = 0;
+	unsigned long long now = task_rq(t)->clock, delta = 0;
+
+	if (unlikely(sched_info_on()))
+		if (t->sched_info.last_queued)
+			delta = now - t->sched_info.last_queued;
+	sched_info_reset_dequeued(t);
+	t->sched_info.run_delay += delta;
+
+	rq_sched_info_dequeued(task_rq(t), delta);
 }

 /*
@@ -160,7 +179,7 @@
 	if (t->sched_info.last_queued)
 		delta = now - t->sched_info.last_queued;
-	sched_info_dequeued(t);
+	sched_info_reset_dequeued(t);
 	t->sched_info.run_delay += delta;
 	t->sched_info.last_arrival = now;
 	t->sched_info.pcount++;

-- 
Regards,
Ankita Garg (ankita@in.ibm.com)
Linux Technology Center
IBM India Systems & Technology Labs, Bangalore, India
http://lkml.org/lkml/2008/6/5/10
If you’re an early Visual Studio adopter like me you’ve probably gotten used to running into a similar problem every few years: upgrading your code. Although VS usually handles the job of converting files to work in the newer version, the big problem has usually been trying to go back to the old version. If you’re the only one working with the code that might not matter, but if you work on a team that has a mix of, for example, VS 2008 and VS 2010 Beta clients, the conversion process causes problems for one or the other.
Finally in VS 11 this problem is being addressed. The ultimate goal is to be able to open a VS 2010 solution in VS 11 with minimal conversion and applying no breaking changes to the project or solution files that would prevent it from still opening directly in 2010. There will of course be some restrictions, like not upgrading to .NET 4.5, but in general seems like a pretty reasonable goal.
To see how it’s working so far with the Developer Preview I tried upgrading a few of my own projects, including a 50+ project solution with lots of complications. The good news was that the upgrade process did succeed without preventing VS 2010 from opening the converted solutions that I tried. There were, however, some issues.
- Modeling projects (for UML architecture diagrams) need some conversion to update tooling version numbering and point to a new location for some MSBuild .targets file references used to build the project. This didn’t appear to create any real problems and is probably going to stay the same until the release version.
- The solution file itself was previously one of the biggest stumbling blocks and usually required separate copies be created for each VS version. The upgrade changed some version numbers but otherwise kept most of the content, with some exceptions below.
- Some reordering took place of projects, build configurations, and source control configurations. This didn’t appear to cause any problems in itself, but I also couldn’t figure out why the reordering took place.
- Solution folders did not fare well. Although I started with projects in multiple solution folders, only the projects which required some conversion stayed in the folders. Other projects were all placed in the solution root, leaving the solution folders present, but empty. Here are some before and after shots of a test solution showing the solution folders and the changed files:
- A WPF application project which was using some customized MSBuild to do automatic environment specific config file switching required updating. One part of this was importing of an external .targets file, which seemed to be treated similarly to the conversion done with the modeling project.
<Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets" />
Unfortunately some other changes were also applied, including some additions which ended up causing compile errors related to a bad System.Xaml reference:
- The type ‘System.Windows.Markup.IQueryAmbient’ is defined in an assembly that is not referenced. You must add a reference to assembly ‘System.Xaml, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089’.
- The type name ‘IComponentConnector’ could not be found in the namespace ‘System.Windows.Markup’. This type has been forwarded to assembly ‘System.Xaml, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089’ Consider adding a reference to that assembly.
This change only took place when the extra Import was present. It ended up being a pretty easy fix by editing the project file xml. The cause appears to be that part of the conversion is to add <Generator> and <SubType> nodes to every element with an Include attribute ending in “.xaml” but it doesn’t do any checks to see if those nodes are already present or if the parent node is a Reference instead of a file. Here’s the xml to look for:
<Reference Include="System.Xaml"> <RequiredTargetFramework>4.0</RequiredTargetFramework> <Generator>MSBuild:Compile</Generator> <SubType>Designer</SubType> </Reference>
To fix, just remove the Generator and SubType elements. (I reported this on Connect so it should be expected to be fixed before release)
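After removing those two elements, the reference is back to its original, working form (sketching the obvious result of the fix described above):

<Reference Include="System.Xaml">
  <RequiredTargetFramework>4.0</RequiredTargetFramework>
</Reference>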
Overall, there’s obviously still some work to do, but given where it already is in the Developer Preview, I’m looking forward to being able to smoothly move back and forth between versions without all the manual file duplication and hand editing that used to be required.
http://blogs.interknowlogy.com/2011/11/07/visual-studio-11-solution-upgrading/
An object whose state cannot be changed after it is created is called an immutable object.
A class whose objects are immutable is called an immutable class.
An immutable object can be shared by different areas of a program without worrying about its state changes.
An immutable object is inherently thread-safe.
The following code creates an example of an immutable class.
public class IntWrapper {
    private final int value;

    public IntWrapper(int value) {
        this.value = value;
    }

    public int getValue() {
        return value;
    }
}
This is how you create an object of the IntWrapper class:
IntWrapper wrapper = new IntWrapper(101);
At this point, the wrapper object holds 101 and there is no way to change it.
Therefore, the IntWrapper class is an immutable class and its objects are immutable objects.
It is good practice to declare all instance variables final so the Java compiler will enforce the immutability during compile time.
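For instance, with the field declared final, any attempt to mutate the object is rejected at compile time (a small illustrative snippet, not part of the original example):

public class IntWrapperDemo {
    public static void main(String[] args) {
        IntWrapper wrapper = new IntWrapper(101);
        System.out.println(wrapper.getValue());   // prints 101
        // wrapper.value = 202;   // would not compile: value is private and final
    }
}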
http://www.java2s.com/Tutorials/Java/Java_Object_Oriented_Design/0220__Java_Immutable_Objects.htm
Recall that factors are really just integer vectors with ‘levels’, i.e., character labels that get mapped to each integer in the vector. How can we take an arbitrary character, integer, numeric, or logical vector and coerce it to a factor with Rcpp? It’s actually quite easy with Rcpp sugar:
#include <Rcpp.h>
using namespace Rcpp;

template <int RTYPE>
IntegerVector fast_factor_template( const Vector<RTYPE>& x ) {
    Vector<RTYPE> levs = sort_unique(x);
    IntegerVector out = match(x, levs);
    out.attr("levels") = as<CharacterVector>(levs);
    out.attr("class") = "factor";
    return out;
}

// [[Rcpp::export]]
SEXP fast_factor( SEXP x ) {
    switch( TYPEOF(x) ) {
    case INTSXP: return fast_factor_template<INTSXP>(x);
    case REALSXP: return fast_factor_template<REALSXP>(x);
    case STRSXP: return fast_factor_template<STRSXP>(x);
    }
    return R_NilValue;
}
Note a few things:
We template over the RTYPE; i.e., the internal type that R assigns to its objects. For this example, we just need to know that the R types (as exposed in an R session) map to internal C types as integer -> INTSXP, numeric -> REALSXP, and character -> STRSXP.
We return an IntegerVector. Remember that factors are just integer vectors with a levels attribute and class factor.
To generate our factor, we simply need to calculate the sorted unique values (the levels), and then match our vector back to those levels.
Next, we can just set the attributes on the object so that R will interpret it as a factor, rather than a plain old integer vector, when it’s returned.
And a quick test:
library(microbenchmark)
all.equal( factor( 1:10 ), fast_factor( 1:10 ) )
[1] TRUE
all.equal( factor( letters ), fast_factor( letters ) )
[1] TRUE
lets <- sample( letters, 1E5, replace=TRUE )
microbenchmark( factor(lets), fast_factor(lets) )
Unit: milliseconds expr min lq median uq max 1 factor(lets) 5.315 5.766 5.930 6.069 32.93 2 fast_factor(lets) 1.420 1.458 1.474 1.486 28.85
(However, note that this doesn’t handle
NAs – fixing that is left as an exercise. Similarily for logical vectors – it’s not quite as simple as just adding a call to a
LGLSXP templated call, but it’s still not tough – use
INTSXP and set set the levels to FALSE and TRUE.)
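A sketch of the logical-vector case, under my reading of that hint (coerce to integer codes and attach FALSE/TRUE levels; NAs are ignored here, just as in the original code):

#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
IntegerVector fast_factor_logical( LogicalVector x ) {
    IntegerVector out( x.size() );
    for (int i = 0; i < x.size(); ++i) {
        out[i] = (x[i] == TRUE) ? 2 : 1;   // codes 1/2 map to the two levels below
    }
    out.attr("levels") = CharacterVector::create("FALSE", "TRUE");
    out.attr("class") = "factor";
    return out;
}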
We can demonstrate a simple example of where this might be useful with tapply. tapply(x, group, FUN) is really just a wrapper to lapply( split(x, group), FUN ), and split relies on coercing 'group' to a factor. Otherwise, split calls .Internal( split(x, group) ), and trying to do better than an internal C function is typically a bit futile. So, now that we've written this, we can test a couple of ways of performing a tapply-like function:
x <- rnorm(1E5)
gp <- sample( 1:1000, 1E5, TRUE )
all( tapply(x, gp, mean) == unlist( lapply( split(x, fast_factor(gp)), mean ) ) )
[1] TRUE
all( tapply(x, gp, mean) == unlist( lapply( split(x, gp), mean ) ) )
[1] TRUE
rbenchmark::benchmark( replications=20, order="relative", tapply(x, gp, mean), unlist( lapply( split(x, fast_factor(gp)), mean) ), unlist( lapply( split(x, gp), mean ) ) )[,1:4]
test replications elapsed 2 unlist(lapply(split(x, fast_factor(gp)), mean)) 20 0.200 3 unlist(lapply(split(x, gp), mean)) 20 0.731 1 tapply(x, gp, mean) 20 1.444 relative 2 1.000 3 3.655 1 7.220
To be fair, tapply actually returns a 1-dimensional array rather than a vector, and also can operate on more general arrays. However, we still do see a modest speedup both for using lapply, and for taking advantage of our fast factor...
http://www.r-bloggers.com/fast-factor-generation-with-rcpp/
Attach a path to the pathname space
#include <sys/iofunc.h> #include <sys/resmgr );
For more information, see "The flags argument," below.
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The resmgr_attach() function puts the path into the general pathname space and binds requests on this path to the dispatch handle dpp.
For more information, see procmgr_ability().
Most of the above file types are used for special services that have their own open function associated with them. For example, the mqueue manager specifies file_type as _FTYPE_MQUEUE, and mq_open() requests a pathname match of the same type.. This is commonly done, and is described in the Extending the POSIX-Layer Data Structures chapter of Writing a Resource Manager.:
The flags argument
The flags argument to resmgr_attach() Writing a Resource Manager
http://www.qnx.com/developers/docs/6.6.0_anm11_wf10/com.qnx.doc.neutrino.lib_ref/topic/r/resmgr_attach.html
So trying to write a program using the enlightenment widget library. Its just an example they use in the book, and its just a simple 'hello world' app, but it doesn't like to compile: heres what I get:
[sgillespie@SG-Arch01 helloworld]$ gcc -o helloworld hello.c `ewl-config --cflags --libs` hello.c: In function `text_update_cb': hello.c:16: warning: assignment makes pointer from integer without a cast /tmp/ccaa3xQN.o(.text+0x36): In function `text_update_cb': : undefined reference to `ewl_entry_text_get' /tmp/ccaa3xQN.o(.text+0x1ce): In function `main': : undefined reference to `ewl_text_style_set' /tmp/ccaa3xQN.o(.text+0x2a1): In function `main': : undefined reference to `ewl_entry_color_set' collect2: ld returned 1 exit status
and here is the code:
#include <stdio.h>
#include <Ewl.h>
void destroy_cb(Ewl_Widget *w, void *event, void *data)
{
ewl_widget_destroy(w);
ewl_main_quit();
}
void text_update_cb(Ewl_Widget *w, void *event, void *data)
{
char *s = NULL;
Ewl_Widget *label = NULL;
char buf[BUFSIZ];
s = ewl_entry_text_get(EWL_ENTRY(w));
label = (Ewl_Widget *)data;
snprintf(buf, BUFSIZ, "Hello %s", s);
ewl_text_text_set(EWL_TEXT(label), buf);
free(s);
return;
}
int main( int argc, char ** argv)
{
Ewl_Widget *win = NULL;
Ewl_Widget *box = NULL;
Ewl_Widget *label = NULL;
Ewl_Widget *o = NULL;
/* init the library */
win = ewl_window_new();
ewl_window_title_set(EWL_WINDOW(win), "Hello World");
ewl_window_class_set(EWL_WINDOW(win), "hello");
ewl_window_name_set(EWL_WINDOW(win), "hello");
ewl_object_size_request(EWL_OBJECT(win), 200, 50);
ewl_callback_append(win, EWL_CALLBACK_DELETE_WINDOW, destroy_cb, NULL);
ewl_widget_show(win);
/* create the container */
box = ewl_vbox_new();
ewl_container_child_append(EWL_CONTAINER(win), box);
ewl_object_fill_policy_set(EWL_OBJECT(box), EWL_FLAG_FILL_ALL);
ewl_widget_show(box);
/* create text label */
label = ewl_text_new(NULL);
ewl_container_child_append(EWL_CONTAINER(box), label);
ewl_object_alignment_set(EWL_OBJECT(label), EWL_FLAG_ALIGN_CENTER);
ewl_text_style_set(EWL_TEXT(label), "soft_shadow");
ewl_text_color_set(EWL_TEXT(label), 255, 0, 0, 255);
ewl_text_text_set(EWL_TEXT(label), "hello");
ewl_widget_show(label);
/* create the entry */
o = ewl_entry_new("");
ewl_container_child_append(EWL_CONTAINER(box), o);
ewl_object_alignment_set(EWL_OBJECT(o), EWL_FLAG_ALIGN_CENTER);
ewl_object_padding_set(EWL_OBJECT(o), 5, 5, 5, 0);
ewl_entry_color_set(EWL_ENTRY(o), 0, 0, 0, 255);
ewl_callback_append(o, EWL_CALLBACK_VALUE_CHANGED, text_update_cb, label);
ewl_main();
return 0;
}
Any ideas on getting that to compile?
Offline
Can you grep ewl_entry_text_get in your header files, and when you find it, "#include" that header file at the top of your code too? It's case-sensitive.
Offline
those are linker errors (you can tell, because the errors are from tmp files, not .c files)
is "ewl-config --cflags --libs" correct?
Offline
isn't it a path problem?
just check in /usr/include and if it's in a dir you must define the header with it,
example from python:
#include <python2.4/Python.h>
arch + gentoo + initng + python = enlisy
Offline
those are linker errors (you can tell, because the errors are from tmp files, not .c files)
is "ewl-config --cflags --libs" correct?
thats what the guide tells me. I looked at another doc and it says the same thing.
EDIT: It only does this with a few functions. I can get rid of most of those errors just by taking unimportant lines out like the text style. but the ..._text_get sounds pretty important, and i would really like to use that.
I did a
cat ./* | grep ewl_entry_text_get and came up with nothing, if thats even the correct way to do it....
EDIT #2: by the way that was /opt/e17/include/ewl. That is where I found Ewl.h
Offline
phrakture wrote:
those are linker errors (you can tell, because the errors are from tmp files, not .c files)
is "ewl-config --cflags --libs" correct?
thats what the guide tells me. I looked at another doc and it says the same thing.
No, what I'm saying is, run "ewl-config --cflags --libs" and check the paths it produces... i.e. if it says "-L/usr/monkey/libs -lmonkey" then check /usr/monkey/libs/libmonkey.so to see if it exists...
isn't it a path problem?
Well, if it was a problem with the path of the include file, it would fail in the compilcation stage and the error would look like:
hello.c:2: missing include file
If the failure is in the linking stage, the error will not list the source file, but the tmp object it's linking, like so:
/tmp/ccaa3xQN.o(.text+0x36):
To verify this, run "gcc -o helloworld.o hello.c `ewl-config --cflags --libs`" and it should succeed (note the .o appended to helloworld - this will output the unlinked object file)... this should succeed, it's the linking that's failing
PS Only worry about the -L and -l flags when verifying ewl-config output... capital if for the path, lowercase is for the library name (-l<foo> == lib<foo>.so)
Offline
[sgillespie@SG-Arch01 helloworld]$ gcc -o helloworld.o hello.c `ewl-config --cflags --libs`
hello.c: In function `text_update_cb':
hello.c:16: warning: assignment makes pointer from integer without a cast
/tmp/ccuYEItT.o(.text+0x36): In function `text_update_cb':
: undefined reference to `ewl_entry_text_get'
collect2: ld returned 1 exit status
Offline
[sgillespie@SG-Arch01 ~]$ ewl-config --cflags --libs -I/opt/e17/include -I/opt/e17/include -I/opt/e17/include -I/opt/e17/include -I/opt/e17/include -I/opt/e17/include/ewl -L/opt/e17/lib -lewl -L/opt/e17/lib -ledje -L/opt/e17/lib -lecore -lecore_job -lecore_x -lecore_evas -lecore_con -lecore_ipc -lecore_txt -lecore_fb -lecore_config -lecore_file -L/usr/lib -lcurl -lssl -lcrypto -ldl -lssl -lcrypto -ldl -lz -L/opt/e17/lib -leet -lz -ljpeg -lm -L/opt/e17/lib -ledb -lz -L/opt/e17/lib -levas -lm
I'm not sure what i'm supposed to do with this
am I checking -L for broken links or anythign?
Offline
Check if the directories listed with -I and -L exists on your system. you can do the same for the libs (-l<foo> == lib<foo>.so)
They are all present.
EDIT: It seems that the API might have changed, I'm going to ask the guys at edevelop...
Offline
Okay, I just talked to the guys at edevelop. The API has indeed changed and ewl_entry now inherits ewl_text.
In case anyone else is interested, the following code should be replaced
s = ewl_entry_text_get(EWL_ENTRY(w));
with
s = ewl_text_text_get(EWL_TEXT(w));
Offline
https://bbs.archlinux.org/viewtopic.php?id=13620
On Thu, Mar 30, 2017 at 8:45 PM, Mats Wichmann <mats at wichmann.us> wrote:
> Yeah, fun. You need to escape the \ that the idiot MS-DOS people chose
> for the file path separator. Because \ is treated as an escape character.

The COMMAND.COM shell inherited command-line switches (options) that use slash from TOPS-10 by way of CP/M, so using backslash in paths was less ambiguous for the shell (e.g. dir/w could be intended to run "w" in the "dir" subdirectory, or it could mean to run "dir" with the option "/w"). The DOS kernel did support both slash and backslash in file-system paths.

Also, C wasn't a common language on the PC back then. BASIC was. Instead of using escapes in string literals, BASIC used addition at runtime with the CHR$ function or predefined constants. Support for hierarchical paths (DOS 2.0) came around at about the same time that C was rising in popularity, so the pain came on slowly like boiling a lobster.

The system designers who really have no excuse are the NT kernel developers circa 1988-93. They were working in C on a system that already required converting paths from the DOS namespace to a native object namespace. They could have easily implemented the native object system to use slash instead of backslash.
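For readers following along, the escaping issue under discussion looks like this in practice (a quick illustrative example, not part of the original mail):

# A backslash starts an escape sequence in a string literal, so Windows paths
# need doubled backslashes, a raw string, or forward slashes:
path = "C:\\Users\\example\\notes.txt"    # escaped backslashes
path = r"C:\Users\example\notes.txt"      # raw string: backslashes kept literally
path = "C:/Users/example/notes.txt"       # forward slashes are accepted by Windows APIs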
https://mail.python.org/pipermail/tutor/2017-March/110771.html
student
student: how do I create a program that writes out the message "Welcome to Kaplan University!"?
Hi Friend,
Try the following code:
class Message {
    public static void main(String[] args) {
        System.out.println("Welcome to Kaplan University!");
    }
}
http://roseindia.net/tutorialhelp/comment/3743
Changing word wrap mode
Posted by Zafir Anjum on August 6th, 1998
To change the mode you have to assign one of these values to the m_nWordWrap member and then call the WrapChanged() member function. If you don't call the WrapChanged() function the control window is not updated.
// Code to use with CRichEditView
// Turn word wrap on
m_nWordWrap = WrapToWindow;
WrapChanged();
// Code to use with CRichEditCtrl
// To turn word wrap off
SetTargetDevice(NULL, 1);

// To turn word wrap on - based on window width
SetTargetDevice(NULL, 0);

// To turn word wrap on - based on target device (e.g. printer)
// m_dcTarget is the device context, m_lineWidth is the line width
SetTargetDevice(m_dcTarget, m_lineWidth);
YES! Can make it wrap in windows and dialog boxes now, no MFC
Posted by Legacy on 12/28/2003 12:00am
Originally posted by: Ben
THANK YOU! I am not using MFC and I have a floating resizable window with a rich text edit control in it. I simply could not figure out how to make it wrap words until I read this section. The documentation did not suggest anything obvious. Did not think the target device was the right option to use. Anyway, this turns word wrap on and should work in a dialog window too:
SendMessage(hwndRichEditWindow, EM_SETTARGETDEVICE, 0, 0);
or
SendMessage(GetDlgItem(hDlg, IDC_RichEdit1), EM_SETTARGETDEVICE, 0, 0);
Oh, you may need this include to do this:
#include <richedit.h>
To make CRichEditCtrl wrap correctly in a dialog
Posted by Legacy on 06/10/2002 12:00am
Originally posted by: Jon Schneider
How to make the CRichEditCtrl wrap correctly in chinese characters
Posted by Legacy on 01/12/2000 12:00am
Originally posted by: Philip
I would like to use it to display chinese big 5 and GB code.
Even, I set the character format to chinese character, it still cannot wrap correctly. What shall I do?
CRichEditCtrl wrap
Posted by Legacy on 01/06/2000 12:00am
Originally posted by: Paul Saccasyn
I would like to have m_lineWidth expressed in number of characters of my CRichEditCtrl.
How do I know the width of a character of the currently used font ?
In which unit is m_lineWidth expressed ?
Wrapping and Paragraph Format Conflict
Posted by Legacy on 04/06/1999 12:00am
Originally posted by: Rizwan Zubairy
How to change the wrapping in rich edit control when I use right align paragraph formatting?
http://www.codeguru.com/cpp/controls/richedit/article.php/c2409/Changing-word-wrap-mode.htm
I am trying to use the new ZipArchive class (which is part of System.IO.Compression when the application is built with .NET 4.5). I previously got this to work. But I am having trouble with an application I am trying to convert to use this class.
The application was a .NET 2.0 application, so at first I did not see the ZipArchive class when I looked in the System.IO.Compression namespace. So I modified the project to make it a .NET 4.5 project ... but I *still* don't see ZipArchive -- the System.IO.Compression
namespace continues to look as it did when this application was using .NET 2.0. How can I get access to ZipArchive??
//Darrell
Hi, please check if you have added a reference to System.IO.Compression.dll after you changed the .NET 2.0 project to a .NET 4.5 project.
About ZipArchive:
You can see it is in System.IO.Compression.dll
https://social.msdn.microsoft.com/Forums/en-US/c1de588d-700f-4700-a616-bf2f87884fcd/accessing-45-features?forum=netfxbcl
Hi,
I am using WCF technology to consume web services from my server. I want the DataPower to validate clients' requests; some of the requests contain DataSets which I can't validate.
This is the error showed in the probe:: cvc-particle 3.1: in element {}ChangedDataset with anonymous type, found <xs:schema> (in namespace), but next item should be any defined element in namespace
This is the actual request:
<s:Envelope
/>
The way I see it, the problem is that .net attaches the <xs:schema> element to the ChangedDataSet element.
I have tried configuring the client to exclude the schema from the request, but then I get another error:: cvc-complex-type 3: element {}ChangedDataset with anonymous type had undefined attribute {urn:schemas-microsoft-com:xml-msdata}SchemaSerializationMode
This was the request:
...
/>
Any way to make the DataPower ignore the schema element or the msdata:SchemaSerializationMode attribute?
https://www.ibm.com/developerworks/community/forums/html/topic?id=f36a1aa6-4c6c-4c38-b127-9815e113f399&ps=25
Hi all,
a couple of days ago I started going through the C++ tutorials and while trying some code of my own I've run into some oversight (I suppose) that I just don't spot yet.
As I was going through the various topics in the tutorials I figured that it might be good to not just copy code but do something with the stuff I learn (hopefully).
So I thought, why not make the 'quiz' at the end of each chapter in c++.
Now, I realize that I will still need to learn and build much more complex stuff, put them in functions etc to make it all work, but the basic idea is; for it all eventually to become rather dynamic so I can add chapters with new questions and answers later on etc.
Anyways,... I'm still very early in the learning process and I'm already running into something I'm not quite figuring out. I could use a few pointers. And no, not the ones that refer to memory addresses.
Here's the code I got:
For some reason, variable chptr is returned correctly in the second to last cout, but it doesn't display the string of text I want 'chapter' to become.
Code:
#include <iostream>
using namespace std;
int main()
{
string section("Introduction");
string title("Quiz: The basics of C++");
string chapter;
string chapter1("Intro");
string chapter2("If Statements");
string chapter3("Loops");
string chapter4("Functions");
string chapter5("Switch case");
string ERR("--- Sorry! Something went wrong. ---");
string ERR2("--- Sorry! Please select a number from 1-5. ---");
int chptr;
if (chptr == '0' || chptr >= '6')
{
chapter = ERR;
}
else if (chptr == '1')
{
chapter = chapter1;
}
else if (chptr == '2')
{
chapter = chapter2;
}
else if (chptr == '3')
{
chapter = chapter3;
}
else if (chptr == '4')
{
chapter = chapter4;
}
else if (chptr == '5')
{
chapter = chapter5;
}
else
{
chapter = ERR2;
}
cout << title << endl << endl;
cout << "Please select a chapter (1-5)" << endl;
cin >> chptr;
cin.ignore();
cout << "You have selected chapter-number: " << chptr << endl;
cout << endl << "The title of the chapter is: " << chapter << endl;
}
I've already tried moving things around a bit, putting variables outside of main or trying to have 'chapter' be a simple function, but when I do that I actually run into errors. I guess I need to re-read the functions chapter. The above code doesn't give me any errors, but it just doesn't do what I want it to do.
I'd very much appreciate some useful pointers to help me understand where I'm going wrong and why.
I hope I'm explaining correctly what I can't figure out right here. Everything else I haven't figured out, such as how to loop this whole thing upon incorrect input I'll do later, hopefully.
Thanks in advance.
EDIT: Oh yeah... and I have tried moving the If statement after the cin, but that just gives me the second error string ERR2 returned instead of the first, which I find even more odd, considering that chptr still outputs as an int value.
http://cboard.cprogramming.com/cplusplus-programming/145243-new-cplusplus-could-use-some-guidance-printable-thread.html
reggae 0.9.2
A build system in D
To use this package, run the following command in your project's root directory:
Manual usage
Put the following dependency into your project's dependencies section:
Reggae
A (meta) build system with multiple front (D, Python, Ruby, Javascript, Lua) and backends (make, ninja, tup, custom). This is alpha software, only tested on Linux and likely to have breaking changes made.
Detailed API documentation can be found here.
Why?
Do we really need another build system? Yes.
On the frontend side, take CMake. CMake is pretty awesome. CMake's language, on the other hand, is awful. Many other build systems use their own proprietary languages that you have to learn to be able to use them. I think that using a good tried-and-true general purpose programming language is better, with an API that is declarative as much as possible.
On the backend, it irks me that wanting to use tup means tying myself to it. Wouldn't it be nice to describe the build in my language of choice and be able to choose between tup and ninja as an afterthought?
I also wanted something that makes it easy to integrate different languages together. Mixing D and C/C++ is usually a bit painful, for instance. In the future it may include support for other statically compiled languages. PRs welcome!
reggae is really a flexible DAG describing API that happens to be good at building software.
Features
- Multiple frontends: write readable and concise build descriptions in D, Python, Ruby, JavaScript or Lua. Your choice!
- Multiple backends: generates build systems for make, ninja, tup, and a custom binary backend
- Like autotools, no dependency on reggae itself for people who just want to build your software. The
--exportoption generates a build system that works in the root of your project without having to install reggae on the target system
- Flexible low-level DAG description DSL in each frontend to do anything
- High-level DSL rules for common build system tasks for C, C++ and D projects
- Automatic header/module dependency detection for C, C++ and D
- Automatically runs itself if the build description changes
- Out-of-tree builds - no need to create binaries in the source tree
- User-defined variables like CMake in order to choose features before compile-time
- dub integration for D projects
Not all features are available for all backends. Executable D code commands (as opposed to shell commands) are only supported by the binary backend, and due to tup's nature dub support and a few other features are not available. When using the tup backend, simple is better.
The recommended backend is ninja. If writing build descriptions in D, the binary backend is also recommended.
Usage
Pick a language to write your description in and place a file called
reggaefile.{d,py,rb,js,lua} at the root of your project.
In one of the scripting languages, a global variable with the type
reggae.Build must exist with any name. Also, the relevant
language-specific package can be installed using pip, gem, npm or
luarocks to install the reggae package (reggae-js for npm). This is
not required; the reggae binary includes the API for all scripting
languages.
In D, a function with return type
Build must exist with any name.
Normally this function isn't written by hand but by using the
build template mixin.
From the build directory, run
reggae -b <ninja|make|tup|binary>
/path/to/your/project. You can now build your project using the
appropriate command (ninja, make, tup, or ./build respectively).
Quick Start
The API is documented elsewhere and the best examples can be found in the feature tests. To build a simple hello app in C/C++ with a build description in Python:
from reggae import *
app = executable(name="hello", src_dirs=["."], compiler_flags="-g -O0")
b = Build(app)
Or in D:
import reggae;
alias app = executable!(ExeName("hello"), Sources!(["."]), Flags("-g -O"));
mixin build!app;
This shows how to use the
executable high-level convenience rule. For custom behaviour
the low-level primitives can be used. In D:
import reggae;
enum mainObj  = Target("main.o", "gcc -I$project/src -c $in -o $out", Target("src/main.c"));
enum mathsObj = Target("maths.o", "gcc -c $in -o $out", Target("src/maths.c"));
enum app      = Target("myapp", "gcc -o $out $in", [mainObj, mathsObj]);
mixin build!(app);
Or in Python:
from reggae import *
main_obj = Target("main.o", "gcc -I$project/src -c $in -o $out", Target("src/main.c"))
maths_obj = Target("maths.o", "gcc -c $in -o $out", Target("src/maths.c"))
app = Target("myapp", "gcc -o $out $in", [main_obj, maths_obj])
bld = Build(app)
These wouldn't usually be used for compiling as above, since the high-level rules take care of that.
D projects and dub integration
The easiest dub integration is to run reggae with a directory
containing a dub project as parameter. That will create a build system
with a default target that would do the same as "dub build" but probably
faster. An optional
ut target corresponds to the unittest executable of
"dub test". For example:
# one-time setup (assuming the current working dir is a dub project,
# i.e., contains a dub.{sdl,json} file):
mkdir build
cd build
reggae -b ninja ..
# equivalent to "dub build":
ninja
# equivalent to "dub test -- <args>":
ninja ut && ./ut <args>
# build both default and unittest targets in parallel:
ninja default ut
For advanced use cases, reggae provides an API to use dub build information
in a
reggaefile.d build description file. A simple example for building
production and unittest binaries concurrently is this:
import reggae;
alias main = dubDefaultTarget!(CompilerFlags("-g -debug"));
alias ut = dubConfigurationTarget!(Configuration("unittest"));
mixin build!(main, ut);
Scripting language limitations
Build written in one of the scripting languages currently:
- Can only detect changes to the main build description file (e.g.
reggaefile.py), but not any other files that were imported/required
- Cannot use the binary backend
- Do not have access to the dub high-level rules
These limitations are solely due to the features not having been implemented yet.
Building Reggae
To build reggae, you will need a D compiler. The dmd reference
compiler is recommended. Reggae can build itself. To bootstrap,
either use dub (dub build) or the
included bootstrap script. Call it without arguments
for
make or with one to choose another backend, such as
ninja. This will create a
reggae binary in a
bin directory then
call itself to generate the "real" build system with the requested
backend. The reggae-enabled build includes a unit test binary.
- Registered by Atila Neves
- 0.9.2 released 7 months ago
- atilaneves/reggae
- github.com/atilaneves/reggae
- BSD 3-clause
- Dependencies:
- dub
- Short URL:
- reggae.dub.pm
|
https://code.dlang.org/packages/reggae
|
CC-MAIN-2022-05
|
refinedweb
| 1,154
| 56.76
|
Hear us Roar
I am a process geek. Helping to catalyze the community working on the identity layer of the Web has been my job and my obsession for the past three and a half years. Some history: it happened by accident in 2000 that I found the Planetwork community. They had been thinking about how civil society groups could work better together to address issues they cared about, such as climate change or species extinction, by using the power of the Internet.
It turns out that weaving a web of organizations and groups together means linking individual people together. With this in mind, they articulated a vision for use-centric identity between 2001 and 2002 in the Link Tank meetings and then published the Augmented Social Network: Building Identity and Trust into the Next Generation Internet in 2003.
I read this paper and got it instantly, immediately becoming an evangelist for the vision. I was hired as a technical and non-technical evangelist for one of the early organizations in this space, Identity Commons(1).
During this time, I also tried my hand at getting some social networking tools built with open source code (Drupal). But after $35,000 and two prototypes, I put the project down both to let the market develop and to wait for the open source platform to become more usable. As the owner of a small business (rather than a coder), it was very difficult for me to feel part of the community and to get the community around the code to understand my potential clients and their users' needs.
One of the biggest challenges going forward for Free and Open Source application projects is how to be inclusive of the whole range of participants (see Figure 1) who have a stake in the code, from core developers all the way to the end user community. A new culture is needed for truly inclusive projects that are more than just "coders" who are "coding" for their own needs.
Figure 1. The range of participants in OS has expanded greatly.
I hope that there can be a diversification of who is perceived as "in" Open Source communities and the methods of engagement—so that the full range of constituents participating with a code base can be included. Skillful adoption and use of effective face-to-face process is a good start to improving this situation. These issues and skills are "soft;" they are about communication and inclusion, more "yin" or feminine, but critical to innovation.
The Internet Identity space has taken form over the last three years, and I have had the good fortune of playing a leading role. Amazing things are happening because of the Internet Identity Workshop that I co-produce with Phil Windley and Doc Searls. It is the primary gathering of Identity Commons. We have over a dozen working groups focused on different technical, social and legal issues around the identity layer. Our fifth two-and-a-half day event using Open Space is coming up December 3-5, 2007 in Mountain View. You may have heard of OpenID but that is just the tip of the iceberg of new identity tools and standards.
In cooperation with others, such as Eugene Kim of Blue Oxen Associates, we helped nurture a culture of collaboration in the identity community that in hindsight have been pivotal, both for speed of innovation and for the diffusion of open standards. Learning and applying these successful face-to-face process technologies are my main work and lasting contribution to the tech world. Like I said, I am a process geek.
The catalytic community building role that I (and others) play can sometimes go unseen and can therefore be under-(and un-)valued. These skills and techniques are essential to building thriving, collaborating, and inclusive communities. Good community leaders often have these skills "naturally" (that is why they have the social capital to lead communities), but they are also learnable skills. I think that for open source and open standards communities to reach their full potential they need more process awareness and literacy of such simple things as:
Creating a culture of allies that welcomes new people is also something I hope can be more consciously developed. For example, if a new woman shows up at your Linux users group out of the blue, what would you—individually and collectively—do to increase the chances that she will return (and even bring more folks)?
For larger events (up to 2,000 people) that are innovating, using Open Space Technology (OST) is a great way to get the agenda to be created the day the event happens in an inclusive way that avoids the 300 alpha male geeks running-at-the-wall-with-sharpies method of creating an unconference agenda. OST creates a container for everyone to bring forth their ideas for sessions. It creates a quality of being with each other together. All voices can be heard—it is inclusive of the alphas and of the shyest among the group.
My hope is that more communities can step out of the default ways of meeting with "highly pre-scripted agendas" and paper presentations that had to be submitted 6-9 months before the event. If we can take full advantage of the limited opportunities for face-to-face interactions by adopting more effective collaborative processes, such as OST, we can actually manage to get to the heart of the "real" issues blocking the resolutions of problems in the network.
Why does this "emotional" stuff matter? I can hear you saying, "It all sounds so mushy." Surprisingly, technical road blocks are not so much at the heart of the major problems that affect the health of the network as are Economic Ownership and Trust (EOT) issues. The Principle Investigator of the Cooperative Association for Internet Analysis (CAIDA) KC Claffy recently published a list of 16 persistently unsolved network problems.
If you look at those, you'll see that such issues can't be solved by protocol alone; they must be solved in a web of human relationships and high quality process that fosters a culture of trust through growing mutual understanding, and shared meaning. If these issues are not addressed successfully and rapidly, I am fearful that we will lose the open network—and perhaps one of the most amazing transformative forces for good on the planet. I encourage you to expand your process horizons and make the most of your face-time. I have published a lot of information about the different techniques that I use (available on) and Aspiration Tech has a wiki that documents the facilitation processes they use at tech convening. Eugene Kim of Blue Oxen Associates is beginning workshops on collaboration this Fall. If you work in tech communities doing process work, you should know there is a nascent network forming of people who lead process in tech communities (contact me to learn more). I hope one of the outcomes of this network will be training and resources for tech folks who would like to expand their process skill repertoire.
I hope you will think about collaborative processes as being a vital part of the mix of skills that can be learned and valued in thriving technical communities and as the overall tech infrastructure that can serve humanity.
|
http://www.oreillynet.com/pub/a/womenintech/2007/09/19/process-geekiness-the-role-of-face-to-face-collaboration.html?page=last&x-maxdepth=0
|
CC-MAIN-2014-41
|
refinedweb
| 1,274
| 56.69
|
Odoo Help
Alert Box in search view in openerp 7.0?
I have two group by fields - field 1 and field 2.
And two subdivisions for field 1. These subdivisions have no relation with field 2.
When I select field 1 -- subdivision 1/subdivision 2 --> it shows the corresponding result.
If I select field 2 -- subdivision 1/subdivision 2 --> I need to display an alert message. How should I display an alert box in search view?
Hello Rosey,
You can override the read_group method to fulfill your requirements.
From there you can raise an exception.
Example:
def read_group(self, cr, uid, domain, fields, groupby, offset=0, limit=None, context=None, orderby=False, lazy=True):
if 'field1' in groupby and 'field2' in groupby:
raise osv.except_osv(_('Warning!'), _('You can not group by both fields.'))
return super(your_class_name, self).read_group(cr, uid, domain, fields, groupby, offset=offset, limit=limit, context=context, orderby=orderby, lazy=lazy)
|
https://www.odoo.com/forum/help-1/question/alert-box-in-search-view-in-openerp-7-0-72109
|
CC-MAIN-2017-04
|
refinedweb
| 181
| 60.92
|
I'm trying to compile OL latest SVN with MSVC 7.1. There are several compile problems here.
Oh, please don't flame about using MSVC or not. I will use this compiler for my own development, I'm used to using it, and I want to compile OL using a project file, because that really helps debugging etc.
In two places (OlRectangle.hpp and Circle.hpp) the compiler complains about std::max and std::min - which is because somewhere max and min are defined to be macros. It seems that there are two places where these #defines come in: stdlib.h and windef.h. The first seems to be OK though, because there's an #ifndef __cplusplus masking the #define, but the second does introduce the #define.
Well, does anybody know why windef.h is included? I couldn't find out easily where this include is introduced, it's definitely not included directly from OL itself.
My workaround is to add
#ifdef max
#undef max
#endif
#ifdef min
#undef min
#endif
to both files. Not beautiful, but works.
Next one is more difficult: In Internal.hpp there's a definition of OlAssert which uses __PRETTY_FUNCTION__ - which seems to be special to gcc or whichever compiler, but is not supported by MSVC (and most probably not included in standard C++). I guess this has to be changed to be portable? For this moment, I'll just remove the line to continue.
Bitmap.cpp line 1150 reveals that a for-loop reuses the name of a local variable, defined before the for statement, as loop variable. According to the standard, that is allowed, and the loop variable is not visible outside the loop. MSVC complains though, because in this particular case, the compiler behaves downward compatible unless told to not use language extensions (which wouldn't compile for many other reasons). At best, it is confusing to read this code, so I think it should be changed, so that the loop variables have names different from the local variables outside.
Same in line 1178 of that file.
With this it compiles and links - will try to compile the demo program later...
For the first point, what I did was simply a search/replace of std::min/std::max and replace with just min/max (i.e. miss out the std:: bit). I don't know what the real fix is, but I think min/max aren't actually part of the STL and for once MSVC is failing correctly.
For the second, I thought flad was going to change this, but either way what I did is as below. I haven't updated the SVN with all my changes as there are various things that don't work. Try compiling all the examples, and depending on the version of OL you have one or more will fail. The last time I tried, the linestrip example fails due to something in the linestrip class being initialised incorrectly. Anyway, here's the code for internal.hpp
Basically, I would say don't bother with OL for MSVC as it simply doesn't work properly, and I'm pretty sure it has nothing to do with MSVC. You're best off getting a copy of devc++/mingw, developing in MSVC as you can't beat the editor, then compile with devc++/mingw to get the fecker working.
Also, note that some of the examples won't compile either and you'll have to hack the code! one such thing is the demo game that is coded with some GCC specific code (a weird ?: variant)
Neil.MAME Cabinet Blog / AXL LIBRARY (a games framework) / AXL Documentation and Tutorial
wii:0356-1384-6687-2022, kart:3308-4806-6002. XBOX:chucklepie
First thanks. I've replaced the OlAssert with what you posted and it compiles fine.
For max and min, I'm sure that the right solution is to get rid of the defines, because max and min are template functions from somewhere in the std namespace. Unfortunately windef.h does not do it correctly (at least I think so), so one needs to #undef these symbols.
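As a side note that is not from the original thread, two other common workarounds for the Windows min/max macro clash are defining NOMINMAX before any Windows header gets included, or parenthesising the calls so the function-like macros cannot expand. A rough C++ sketch:

// Must be defined before <windows.h>/windef.h is pulled in anywhere in the
// translation unit (e.g. via the project settings), otherwise it has no effect.
#define NOMINMAX
#include <windows.h>

#include <algorithm>

int clampToByte(int value) {
    // Parenthesising the name blocks macro expansion even if the min/max
    // macros did get defined by some other header.
    int low = (std::max)(value, 0);
    return (std::min)(low, 255);
}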
As for using it or not: I would really prefer to use the compiler I know, no need to install another IDE and compiler, and of course in the office I can't do that anyway. The question is, would Fladimir be interested in having OL compile and run with MSVC? If yes, I'd be happy to help. If not, well... maybe I won't use OL at all. Depends...
The question is, would Fladimir be interested in having OL compile and run with MSVC? If yes, I'd be happy to help
Of course I am. It's just that I've never learned to use it myself, so that's why I'm not aware of its "specialities"...
OpenLayer has reached a random SVN version number ;) | Online manual | Installation video!| MSVC projects now possible with cmake | Now alvailable as a Dev-C++ Devpack! (Thanks to Kotori)
Great! So how do I send you the changes I would propose? I can access SVN from home, so I could verify whatever you put in there.
You could join the project if you get a BerliOS account...
Get yourself tortoise svn and it's a doddle.
But remember, as well as getting it to compile there are the problems with it crashing at various points in code with the samples (and obviously there are many areas not tested in the samples!). I posted the code to the mailing list that causes MSVC build to crash, but everyone seemed to be in agreement that the code was fine, it just crashed when running as MSVC.
Fladimir: I consider this a an honor for me. So I'll try to get an account there.
There's one thing: I can't verify any changes against gcc or any other compiler than MSVC. So how do you do it then? I wouldn't like to simply submit any changes to SVN, because I can't be sure that it works for everybody else. So I would prefer having any changes be checked before submission...
Neil: I have Tortoise SVN already. Remains to learn how to use it. At work we're using Perforce, which makes many tasks really simple, and many years ago I remember that I used plain SCCS (Good old days).
TortoiseSVN is almost too easy to use, just right-click the directory in Windows where you want the project to be placedc, choose "SVN Checkout..." and enter the repository URL.
Would installing GCC be too difficult? Playing with different branches would just complicate the whole process...
Are you using gcc under Windows or linux? For me, it might be possible to install gcc, but I think it would be easier if you just check my changes and submit them. No need for any additional branches. For now, it's just some small things, and I can try to get gcc running for more changes at a later time. Would that be fine? I guess using MinGW would be OK? Don't have linux...
I don't have Linux either, but the main thing is to check if the code still compiles with GCC. Sending the files over using email, for example, could be a bit tedious.
So I would suggest I send you two changes by mail, because they are really small changes and are essentially what has been discussed here already. You verify these and put them into SVN. Meanwhile I continue to compile and run the samples, and get MinGW with gcc to work. As soon as that runs (and I got a berlios account) I'm willing to verify my changes against gcc and submit them myself. OK?
That's fine. (Email: fladimir2002 ... at ... hotmail.com)
You could always roll back changes if they hose anything up. But the best thing to do and to prevent us from any headaches is to install gcc/mingw and make sure that your changes are fully compliant with gcc. You know, if msvc is having a problem with ansi c++, maybe looking into stlport would be a good idea in bridging the gaps between gcc and msvc for openlayer.
Flad: Is there any consideration into getting a newer release out soon?
__________________________________________Paintown
As soon as I get time to finish the minor stuff I'm working on. Although making a new release doesn't mean so much nowadays, when we have the SVN repository.
So how do you do it then? I wouldn't like to simply submit any changes to SVN, because I can't be sure that it works for everybody else. So I would prefer having any changes be checked before submission...
The way this works for Allegro is to submit a patch, which is (supposedly) tested before being commited to the repository.
Perhaps have two SVN folders?
Chaos Groove Development BlogFree Logging System Code & Blog
Although making a new release doesn't mean so much nowadays, when we have the SVN repository.
Which I believe is true, though I've seen quite a number of people reluctant to grab stuff from svn for some reason, even if the svn snapshots would make it easier for those people who'd rather not. The main issue I see here is that people are still grabbing the 2.0 release thinking that it's the latest. You could just do an interim release by grabbing the latest svn snapshot and throwing it on the downloads page. In any case I'm not bothered by it, I do prefer svn.
I don't see why we would need to unless OL is going to go through a complete refactoring of the entire source tree.
Patch sounds good. Have to read the manual of SVN to learn what a patch is (in terms of SVN). Perforce does not have the concept of a patch.
stlport is not needed, the stl which is delivered with MSVC 7.1 is OK. There have been issues with the stl of the older MSVC 6.0.
I have only recently started to use SVN instead of official releases. And I don't like it - in general. So it's good to use SVN if you want to contribute. It's not good if you want a tested, reliable version to build upon. Well, that's actually why I decided to get OL from SVN: OL 2.0 didn't work for me. This in turn means to get AllegroGL from SVN, too. The advantage is, that it's new. The disadvantage is: It's new! So probably not as thoroughly tested and reliable as an 'official' release.
The concept of a patch isn't really related to the version control system. You just use patch and diff to make and apply patches. Applying a patch will change certain files which will get updated when you.
patch and diff commands, right? Not available on Windows, that's probably why I'm not familiar with them. Are they contained in MinGW? If not, where can I get Windows versions of them?
My assumption was that I was just going to post my changes in one go, someone would check them and rollback if it didn't work. However, I use mingw as well, which is how I can test OL with msvc and gcc and given it works in the mingw version (and the changes were targetted at msvc anyway) the likelihood of an error should be close to zero.
I never got round to posting all my updates (they are still on my computer in tortoise) due to it crashing in the demos with the msvc version. All I would say, is you really need to check all the examples plus demo run as expected before you update the system, and test with mingw/devc++
Neil, well if the changes didn't affect the mingw/gcc build, I wouldn't see why you don't apply them. That way others can take a look at your changes and contribute or assist. That's what svn is intended for anyhow, WIP.
true, but people might start downloading and using OL only to find out (like me) that after a lot of investment in time and effort, they can't actually get OL to work properly with MSVC. tbh, every few months I download OL, redo the changes and test it to death to see if it works. So far, it still works beautifully with devc++ but fails at various parts with MSVC, and I'm at a loss as to why it fails, despite tracing it to the line of error.
I guess most projects with several developers have this kind of problem, in various flavours. So whenever I submit a change, it's my task to verify the changes against a defined series of test programs before submitting. Question is, which are these? And which platform has to be tested?
With OL I understand that the sample programs are expected to work fine with gcc under Windows and maybe linux (?). So we don't know for sure if MSVC 7.1 will be able to run the samples. This would mean that you may submit any changes that don't break gcc (compile and run all samples), and hopefully improve MSVC 7.1. Maybe we can reach a state where we also know that the samples work fine with MSVC, then one could try to ensure that not only gcc works fine with each change, but also MSVC.
I know, this creates some work, especially for those who wish to submit a change, because one has to run the tests. Nevertheless, that's the way we do things here at work, and it has proven to be more effective this way than submitting bugs, which will slow down all others who get the bad changelists.
Well, I have a berlios account now (tobing), which I might use for submitting anything. But before that, I'll install MinGW and get things compiled also with that - to be sure that I don't break anything.
Edit: Got the sample programs compiled and run, also the demo game.
What does ballYSpeed += ((float(mouseYMovement) <? 2.5) <? 0 ); mean?
There's one problem though. All demos showing text only show rectangular outlines of where the letters should be displayed. Probably something wrong with Glyphkeeper or AllegroGL or Freetype or something else?
Remarks to the SVN version: I had to copy the Fonts from textdemo to the linestripdemo directory to have that run, and I had to change the shapedemo to use PointerAlpha.png.
https://www.allegro.cc/forums/thread/588077
Tree Traversals
The following function is supposed to calculate the maximum depth or height of a binary tree -- the number of nodes along the longest path from the root node down to the farthest leaf node.
int maxDepth(struct node* node)
{
   if (node == NULL)
       return 0;
   else
   {
       /* compute the depth of each subtree */
       int lDepth = maxDepth(node->left);
       int rDepth = maxDepth(node->right);

       /* use the larger one */
       if (lDepth > rDepth)
           return X;
       else
           return Y;
   }
}

What should be the values of X and Y so that the function works correctly?
Question 1 Explanation:
If a tree is not empty, the height of the tree is MAX(Height of Left Subtree, Height of Right Subtree) + 1. See the program to Find the Maximum Depth or Height of a Tree for more details.
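To make the answer concrete, here is a self-contained C++ sketch (not part of the original quiz) with the blanks filled in, i.e. X = lDepth + 1 and Y = rDepth + 1:

#include <algorithm>

struct node {
    int data;
    node* left;
    node* right;
};

// Height counted in nodes: an empty tree has height 0, a single node has height 1.
int maxDepth(const node* n) {
    if (n == nullptr)
        return 0;
    int lDepth = maxDepth(n->left);   // height of the left subtree
    int rDepth = maxDepth(n->right);  // height of the right subtree
    return std::max(lDepth, rDepth) + 1;  // larger subtree height, plus this node
}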
What is common in three different types of traversals (Inorder, Preorder and Postorder)?
Question 2 Explanation:
The order of inorder traversal is LEFT ROOT RIGHT.
The order of preorder traversal is ROOT LEFT RIGHT.
The order of postorder traversal is LEFT RIGHT ROOT.
In all three traversals, LEFT is traversed before RIGHT.
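Purely as an illustration (not part of the quiz), the three depth-first traversals in C++ differ only in where the root is visited:

#include <iostream>

struct node {
    int data;
    node* left;
    node* right;
};

void inorder(const node* n) {    // LEFT ROOT RIGHT
    if (n == nullptr) return;
    inorder(n->left);
    std::cout << n->data << ' ';
    inorder(n->right);
}

void preorder(const node* n) {   // ROOT LEFT RIGHT
    if (n == nullptr) return;
    std::cout << n->data << ' ';
    preorder(n->left);
    preorder(n->right);
}

void postorder(const node* n) {  // LEFT RIGHT ROOT
    if (n == nullptr) return;
    postorder(n->left);
    postorder(n->right);
    std::cout << n->data << ' ';
}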
The inorder and preorder traversal of a binary tree are d b e a f c g and a b d e c f g, respectively. The postorder traversal of the binary tree is:
Question 3 Explanation:
Below is the given tree.
            a
          /   \
        /       \
      b           c
     / \         / \
    /   \       /   \
   d     e     f     g

Reading this tree in postorder (left, right, root) gives d e b f g c a.
What does the following function do for a given binary tree?

int fun(struct node *root)
{
   if (root == NULL)
      return 0;
   if (root->left == NULL && root->right == NULL)
      return 0;
   return 1 + fun(root->left) + fun(root->right);
}
Question 4 Explanation:
The function counts internal nodes. 1) If root is NULL or a leaf node, it returns 0. 2) Otherwise it returns 1 plus the count of internal nodes in the left subtree, plus the count of internal nodes in the right subtree. See the following complete program.
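The complete program referred to above is not included in this excerpt; the following is a small, hypothetical C++ stand-in that exercises the same counting logic:

#include <iostream>

struct node {
    int data;
    node* left = nullptr;
    node* right = nullptr;
    explicit node(int d) : data(d) {}
};

// NULL and leaf nodes contribute 0; every other node counts as one internal node.
int countInternal(const node* root) {
    if (root == nullptr)
        return 0;
    if (root->left == nullptr && root->right == nullptr)
        return 0;
    return 1 + countInternal(root->left) + countInternal(root->right);
}

int main() {
    //        1
    //       / \
    //      2   3
    //     / \
    //    4   5
    node n1(1), n2(2), n3(3), n4(4), n5(5);
    n1.left = &n2;  n1.right = &n3;
    n2.left = &n4;  n2.right = &n5;

    std::cout << countInternal(&n1) << "\n";  // prints 2 (nodes 1 and 2 are internal)
    return 0;
}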
Which of the following pairs of traversals is not sufficient to build a binary tree from the given traversals?
Question 5 Explanation:
Preorder and postorder traversals together are not sufficient; at least one of the two traversals must be inorder (see also Question 9 below).
Consider two binary operators '↑' and '↓' with the precedence of operator ↓ being lower than that of the operator ↑. Operator ↑ is right associative while operator ↓ is left associative. Which one of the following represents the parse tree for expression (7 ↓ 3 ↑ 4 ↑ 3 ↓ 2)? (GATE CS 2011)
Question 6 Explanation:
Let us consider the given expression (7 ↓ 3 ↑ 4 ↑ 3 ↓ 2).
Since the precedence of ↑ is higher, the sub-expression (3 ↑ 4 ↑ 3) will be evaluated first. In this sub-expression, 4 ↑ 3 would be evaluated first because ↑ is right-to-left associative. So the expression is evaluated as ((7 ↓ (3 ↑ (4 ↑ 3))) ↓ 2). Also, note that among the two ↓ operators, the first one is evaluated before the second one because the associativity of ↓ is left to right.
Which traversal of a tree resembles the breadth first search of a graph?
Question 7 Explanation:
Breadth first search visits all the neighbors first and then deepens into each neighbor one by one. The level order traversal of the tree also visits nodes on the current level and then goes to the next level.
Which of the following tree traversals uses a queue data structure?
Question 8 Explanation:
Level order traversal uses a queue data structure to visit the nodes level by level.
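For illustration only (not part of the original quiz), a queue-based level order traversal in C++ could look like this:

#include <iostream>
#include <queue>

struct node {
    int data;
    node* left;
    node* right;
};

// Visit nodes level by level, left to right, using a FIFO queue.
void levelOrder(node* root) {
    std::queue<node*> pending;
    if (root != nullptr)
        pending.push(root);
    while (!pending.empty()) {
        node* current = pending.front();
        pending.pop();
        std::cout << current->data << ' ';
        if (current->left != nullptr)
            pending.push(current->left);
        if (current->right != nullptr)
            pending.push(current->right);
    }
}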
Which of the following cannot generate the full binary tree?
Question 9 Explanation:
To generate a binary tree, two traversals are necessary and one of them must be inorder. But, a full binary tree can be generated from preorder and postorder traversals. Read the algorithm here. Read Can tree be constructed from given traversals?
Consider the following C program segment:

struct CellNode
{
  struct CellNode *leftChild;
  int element;
  struct CellNode *rightChild;
};

int DoSomething(struct CellNode *ptr)
{
    int value = 0;
    if (ptr != NULL)
    {
      if (ptr->leftChild != NULL)
        value = 1 + DoSomething(ptr->leftChild);
      if (ptr->rightChild != NULL)
        value = max(value, 1 + DoSomething(ptr->rightChild));
    }
    return (value);
}

The value returned by the function DoSomething when a pointer to the root of a non-empty tree is passed as argument is (GATE CS 2004)
Question 10 Explanation:
DoSomething() returns max(height of left child + 1, height of right child + 1). So given that a pointer to the root of the tree is passed to DoSomething(), it will return the height of the tree. Note that this implementation follows the convention where the height of a single node is 0.
https://www.geeksforgeeks.org/data-structure-gq/tree-traversals-gq/
Here's the situation: I've got a symbol, say, const char *currentversion = "1.00". It's defined in version.h.
Now I've got myprog.cpp:
#include "myheader.h"
#include "myotherheader.h"
#include "yetanotherheader.h"
#include "onemoreheader.h"
#include "oklastoneipromise.h"
one of these header files eventually #includes version.h, tho maybe not directly (maybe myheader.h includes commonheader.h which includes version.h).
I need a tool that takes as input "currentversion" and spits out something like:
"currentversion" found in myprog.cpp/myheader.h/commonheader.h/version.h.
Browse info won't help me here. It'll take me directly to "currentversion" but it won't tell me how it got there, which is what I'm more interested in.
In other words... I'm looking for a "Find In Files" tool that recursively uses the "include tree."
As a last resort, I'm considering writing my own (doesn't seem like it'd be that hard to write as a DevStudio macro), but I'd rather just d/l it.
Has anyone heard of such a tool?
Mason McCuskey
Spin Studios
http://www.gamedev.net/topic/3875-recursive-h-file-search/
Sorry, but this is going to be a long post.
For AP Computer Science Online we had a very noneducational lesson on nested loops, and then we had a very difficult program to write afterwards. Here it is:
Write a program to simulate tossing a pair of 11-sided dice and determine the percentage of times each possible combination of the dice is rolled.
3. Ask the user to input how many times the dice will be rolled.
4. Calculate the probability of each combination of dice. (You may want to start with
more familiar six-sided dice.)
5. Print the results neatly in two columns (do not worry about excessive decimal places).
6. What is the effect on the percentages when the number of rolls is increased?
7. After the program works, you might want to make it more interesting and ask the user
to enter the number of sides on a die (singular for dice).
We need to use nested loops not a long if, else if, else statement. Here is my source code so far. It is complete and formatted correctly but the problem is that my probability comes up as 0.0. I believe it has to do with the totalMatches variable. Can someone please help me?
Code:
import java.util.Scanner;
import java.util.Random;

public class DiceProbability
{
    /*
     * @author
     * @version
     */
    public static void main(String[] args)
    {
        Random rand;
        rand = new Random();
        Scanner input;
        input = new Scanner(System.in);
        int die1, die2;
        int sides;
        int diceRolls;
        int sum;
        int totalRolls;
        int sumOfBothDice;
        int totalMatches = 0;

        System.out.println("Welcome to the Dice Probability Game!");
        System.out.print("Please enter the amount of sides you want on the dice: ");
        sides = input.nextInt();
        if(sides < 1)
        {
            System.out.println("You cannot have 0 or negative sides on dice.");
            return;
        }
        System.out.print("Please enter the amount of times you want to roll the dice: ");
        diceRolls = input.nextInt();
        if(diceRolls < 1)
        {
            System.out.println("You cannot roll a dice 0 or negative times and find its probability.");
            return;
        }
        System.out.println();
        System.out.println("Sum of the Dice \t\t\t\t Probability");
        System.out.println("-------------------------------------------------------------");
        for(sum = 2; sum <= 2*sides; sum ++)
        {
            for(totalRolls = 0; totalRolls <= diceRolls; totalRolls ++)
            {
                die1 = rand.nextInt(sides) + 1;
                die2 = rand.nextInt(sides) + 1;
                sumOfBothDice = die1 + die2;
                if(sumOfBothDice == sum)
                {
                    totalMatches += 1;
                }
            }
            double probability = (double)(totalMatches / totalRolls) * 100;
            System.out.println(sum + " :s \t\t\t\t\t " + probability);
            totalMatches = 0;
        }
    }
}
http://forums.devshed.com/java-help-9/ap-computer-science-dice-probability-help-934367.html
libcgraph - abstract graph library
A ‘‘main’’ or ‘‘root’’.
A node is created by giving a unique string name or programmer defined 32-bit ID, and is represented by a unique internal object. (Node equality can be checked by pointer comparison.)
Programmer-defined values may be dynamically attached to graphs, subgraphs, nodes, and edges. Such values are either uninterpreted binary records (for implementing efficient algorithms) or character string data (for I/O)...
Libcgraph performs its own storage management of strings as reference-counted strings. The caller does not need to dynamically allocate storage.
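As a minimal usage sketch (not taken from the man page, and assuming the Graphviz development headers and library are installed), creating a small graph with named nodes from C++ might look like this:

// Build with something like: g++ example.cpp -lcgraph
#include <graphviz/cgraph.h>
#include <cstdio>

int main() {
    // Create a new directed root graph named "G" with the default disciplines.
    Agraph_t* g = agopen(const_cast<char*>("G"), Agdirected, nullptr);

    // Nodes are created (or looked up) by unique string name.
    Agnode_t* a = agnode(g, const_cast<char*>("a"), 1);
    Agnode_t* b = agnode(g, const_cast<char*>("b"), 1);

    // Create (or find) the edge a -> b.
    agedge(g, a, b, nullptr, 1);

    agwrite(g, stdout);  // dump the graph in DOT format
    agclose(g);
    return 0;
}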
(The following is not intended for casual users.) Programmer-defined disciplines customize certain resources- ID namespace, memory, and I/O - needed by Libcgraph. A discipline struct (or NIL) is passed at graph creation time. A default discipline is supplied when NIL is given for any of these fields. An ID allocator discipline allows a client to control assignment of IDs (uninterpreted 32-bit values) to objects, and possibly how they are mapped to and from strings. permits the ID discipline to initialize any data structures that maintains per individual graph. Its return value is then passed as the first argument to all subsequent ID manager calls. informs the ID manager that Libcgraph is attempting to create an object with a specific ID that was given by a client. The ID manager should return TRUE (nonzero) if the ID can be allocated, or FALSE (which aborts the operation). is called to inform the ID manager that the object labeled with the given ID is about to go out of existence. is called to create or look-up IDs by string name (if supported by the ID manager). Returning TRUE (nonzero) in all cases means that the request succeeded (with a valid ID stored through . There are four cases: and : This requests mapping a string (e.g. a name in a graph file) into a new ID. If the ID manager can comply, then it stores the result and returns TRUE. It is then also responsible for being able to the ID again as a string. Otherwise the ID manager may return FALSE but it must implement the following (at least for graph file reading and writing to work): and : The ID manager creates a unique new ID of its own choosing. Although it may return FALSE if it does not support anonymous objects, but this is strongly discouraged (to support "local names" in graph files.) and :.) and : forbidden. is allowed to return a pointer to a static buffer; a caller must copy its value if needed past subsequent calls. should be returned by ID managers that do not map names. The and calls do not pass a pointer to the newly allocated object. If a client needs to install object pointers in a handle table, it can obtain them via new object callbacks.
Libcdt(3).
Stephen North, north@research.att.com, AT&T Research.
http://huge-man-linux.net/man3/cgraph.html
Type: Posts; User: ajayajayaj
To randomly generate a number between 1 and 250 you type Math.random()*249+1
For another reason my javac command won't work, so I have to use automatic ways.
I've uninstalled and re installed java many times.
Here you go:
For an unknown reason when I use the jarsigner command it returns the error of:
When I've googled how it says to use jar signer but it gives the error of:
I don't understand the link you gave to me about signing jars.
I have figured out the problem.
I had a folder with the exact same name for no reason.
Also, now with that working, do I have to have all users of my applet to have the same .java.policy file as...
I can't find any real reasons why it does that.
--- Update ---
Do you know any way I could run this differently?
Here is the complete program that compiles, executes and shows the problem for testing:
public class TestProgram{
public static void main(String args[]){
try {
FileOutputStream test = new...
Yes it does exist.
The error message is:
--- Update ---
What do you mean by:
For some unknown reason, now, I can't access any directory.
But only with that command.
I'm using:
try {
FileOutputStream out = new FileOutputStream(new File("C:\\users\\Ajay\\desktop\\"));...
For testing purposes, I tried accessing many directories.
When I even tried just accessing C:\users\<user>\Desktop\ it wouldn't work!
I get the same error!
Does trying the code in an app mean making a second applet with just that code?
Oh sorry, I moved the file from the folder instead of copying it.
Now it says the original error:
I did the all permissions.
But now I get the error:
I couldn't find the .java.policy file.
I didn't find it in there?
How do you do that?
Where is the user dir?
Do you have to have every computer to have that file?
So you add that to your applet?
So how do you give it the permission?
It said:
--- Update ---
Also, what I understood from the link you gave me was you have to add the text:
to run the applet instead of the normal code.
On what browser?
Well, I selected Advanced, then selected Show Console.
But where is the console?
I can't find the button.
http://www.javaprogrammingforums.com/search.php?s=b791e9da05794db5b6a10a53e63b3e23&searchid=1075823
I have a vacuum fluorescent display in my office, and I’ve been
messing around with it. Now that I’ve figured out how to communicate
with the serial connection it’s time for some fun. Its shortcoming is
its 2 lines of 20 chars. So I naturally wrote a fortune cookie program
for it.
I want to make adding new fortunes easy (and without having to figure
out myself where to force a line break), so I basically need to split
a string (approximately) in half at a word boundary. This is what I
have:
class String
  def halve
    first_half = ''
    second_half = self
    until first_half.length >= length / 2
      match = / /.match(second_half)
      first_half << match.pre_match << ' '
      second_half = match.post_match
    end
    [first_half.strip, second_half]
  end
end
I have a feeling there’s a one-line regexp that can do this. Am I
right? If not, is there a better way?
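For comparison, here is a regex-free sketch that just looks for the space nearest the midpoint. It is only an illustration (same String-reopening style as above), not necessarily better than the original:

class String
  # Split at the word boundary nearest the midpoint: prefer the first
  # space at or after the middle, fall back to the last one before it.
  def halve
    mid = length / 2
    idx = index(' ', mid) || rindex(' ', mid)
    return [self, ''] unless idx
    [self[0...idx], self[(idx + 1)..-1]]
  end
end

"the quick brown fox jumps over the lazy dog".halve
# => ["the quick brown fox jumps", "over the lazy dog"]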
https://www.ruby-forum.com/t/halving-a-string/84189
It's a simple program which scans a directory recursively through the subfolders and sums up the size of the files. It also displays the size of the individual files, and eventually the total size of the directory.
Observe the follows:
using System; using System.IO;
We need to include
System.IO, as we are working on Files and/or Directories.
The program starts as follows:
static void Main(string[] args)
{
    // Creates an object of the class getFilesInfo
    // and calls its function displayFileInfo().
    // The argument passed to that function is the path to a directory or a file.
    getFilesInfo obj = new getFilesInfo();
    obj.displayFileInfo(args[0]);
    // displayFileInfo() is the function which does everything and
    // sets the static variable totsize to the total size of the dir/file
    // you passed as the argument.
    Console.WriteLine("Total Size = {0}", totsize);
}
Now, let's look at the crux of the program:
protected void displayFileInfo(String path)
{
    try
    {
        // Checks if the path is valid or not
        if (!Directory.Exists(path))
        {
            Console.WriteLine("invalid path");
        }
        else
        {
            try
            {
                // Directory.GetFiles() returns an array of strings which are not just
                // the file/directory name but the whole path to that file/folder.
                string[] fileList = Directory.GetFiles(path);
                for (int i = 0; i < fileList.Length; i++)
                {
                    if (File.Exists(fileList[i]))
                    {
                        // FileInfo is a class which extends the FileSystemInfo class.
                        FileInfo finfo = new FileInfo(fileList[i]);
                        // finfo.Length returns the file size.
                        totsize += finfo.Length;
                        Console.WriteLine("FILE: " + fileList[i] + " :Size>" + finfo.Length);
                    }
                }
            }
            catch (System.NotSupportedException e1)
            {
                Console.WriteLine("Error1" + e1.Message);
            }
            try
            {
                string[] dirList = Directory.GetDirectories(path);
                for (int i = 0; i < dirList.Length; i++)
                {
                    Console.WriteLine("DIRECTORY CHANGED:" + dirList[i]);
                    // Call the function recursively to get the file sizes in the subfolder.
                    displayFileInfo(dirList[i]);
                }
            }
            catch (System.NotSupportedException e2)
            {
                Console.WriteLine("Error2:" + e2.Message);
            }
        }
    }
    catch (System.UnauthorizedAccessException e)
    {
        Console.WriteLine("Error:" + e.Message);
    }
}
You should have observed that we can't find the size of a folder directly. We have to sum up the sizes of the files it contains. I think this is because a folder is just a pointer which points to the files and other directories; it has no size by itself. And one more thing is that the methods:
Directory.GetFiles(path); Directory.GetDirectories(path);
return not just the file/directory name but the whole path to that file/folder.
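To illustrate that last point, here is a small standalone snippet (separate from the article's program; the folder path is just an example) showing how to get back the bare file name when that is all you need:

using System;
using System.IO;

class PathDemo
{
    static void Main()
    {
        // Directory.GetFiles returns full paths such as "C:\data\report.txt",
        // not bare file names.
        foreach (string fullPath in Directory.GetFiles(@"C:\data"))
        {
            // Path.GetFileName extracts just the name portion, e.g. "report.txt".
            Console.WriteLine(Path.GetFileName(fullPath));
        }
    }
}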
http://www.codeproject.com/KB/files/showdirsize.aspx
If you’re not using IFTTT, you’re seriously missing out. If This Then That allows you to combine different services with your own custom logic. You define what should happen when a condition is met. For example: if an RSS feed updates, send an email. If the traffic is bad, wake you up early.
The potential uses for IFTTT are endless. But until now, it’s been difficult to interface it with your own hardware projects. Today, that’s all changed.
Meet Your Maker (Channel)
Channels are the fundamental building blocks of IFTTT. They represent web services that provide data sources or even physical devices like fitness wearables.
There are well over 200 channels available, and they’re a diverse bunch, ranging from Android battery statuses, to RSS feeds, as well as content from publishers like BuzzFeed and the New York Times. Now, they’ve added the Maker Channel.
This allows you to build recipes that use data from projects you’ve personally built, which can then be used to trigger actions elsewhere. Whether that’s from your Arduino-based creation, from your Raspberry Pi-powered motion tracker, or your own web project. Anything, really.
It’s also bi-directional too. You can send messages to your projects straight from IFTTT.
So, for example, you could have an LED companion cube that flashes when you receive an email, or an automated laser turret that shoots a beam of light whenever BuzzFeed posts a new article.
The possibilities are endless.
One of my favorite things about IFTTT is the fact that many people choose to share their recipes with the public, for free. There are thousands of IFTTT recipes, ripe for the taking. This is ideal if you’re looking for some inspiration for a project. At the time of writing, there are hundreds of example recipes available for your perusal at hackster.io.
Getting Started
There are some key differences with how recipes are built with the Maker Channel compared to other channels. But that shouldn’t deter you.
First, when you sign up for the makers channel, you’re given a secret key. This string of characters is what identifies you to the IFTTT servers. Given that you’re going to be using this with real-world IoT (Internet of Things) devices in your home, you should take good care of this. Don’t share it, and keep it in a safe place.
Once you’re all signed-up, you’re going to want to start incorporating it into your projects. Using it is simply a matter of using GET and POST requests. These are an open web technology, meaning you’re not just limited to using it with Arduino and Raspberry Pi. You can also use it with anything that supports HTTP, such as the .NET Gadgeteer, or even standard web applications.
If you’re using Arduino, the official documentation will tell you everything you need to know about making HTTP requests. But if you’re using Raspberry Pi, you’ve got a lot of choices when it comes to how you use it. You could use Curl, but if you’re using Python, you can use the delightfully simple Requests library.
Creating Your Recipe
So, let’s start off by making our recipe. This is actually surprisingly simple. First, create a new recipe using the Maker channel. Then, you’ll be prompted to define an event name that triggers this recipe (like “button_pressed”, or “motion_made”). If you plan on having lots of custom events, make sure they aren’t too generic.
Then, move on to defining what you want to happen when an event happens.
I decided to send myself an email alert.
If it all looks good, press “Create Recipe”. Then you’re ready to start using the IFTTT Makers Channel.
Triggering Events
Communicating with your recipe is easy. You simply need to send a special POST or GET request to the following URL: https://maker.ifttt.com/trigger/{event}/with/key/{secret_key}
Here, we’ve got a couple of variables in curly braces. Event is simply the event name, and secret_key is your secret key. If you were to call this in Curl, you’d be looking at something like this.
$ curl -X POST https://maker.ifttt.com/trigger/{event}/with/key/{secret_key}
With Python’s Requests Library, this is even simpler.
import requests
requests.post("https://maker.ifttt.com/trigger/{event}/with/key/{secret_key}")
Including Payloads
When triggering IFTTT, you can also include up to three variables, which can then be used in your recipes. These are stored as a JSON object. Here’s how you’d include three variables in Curl:
curl -X POST -H "Content-Type: application/json" -d '{"value1":"test","value2":"test","value3":"test"}' https://maker.ifttt.com/trigger/{event}/with/key/{secret_key}
And in Python:
import requests
payload = {'value1': 'hello', 'value2': 'hello', 'value3': 'hello'}
requests.post("https://maker.ifttt.com/trigger/{event}/with/key/{secret_key}", json=payload)
Note that the variable names (“value1”, “value2”, “value3”) are fixed; you can only include up to three variables, and they must be named like that.
Inbound Traffic
As previously mentioned, IFTTT’s Makers Channel is bi-directional. Not only can it receive messages and triggers, but it can also send them.
This shouldn’t be too difficult. You just need to set up an endpoint, and provide IFTTT with the URL for it. You can also specify the body of the content sent to that URL, as well as the type of request sent.
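To give a concrete idea of what such an endpoint could look like, here is a minimal sketch using Python and Flask. The route name, port, and field handling are placeholders, not anything IFTTT prescribes:

from flask import Flask, request

app = Flask(__name__)

@app.route('/ifttt-hook', methods=['POST'])
def ifttt_hook():
    # IFTTT sends whatever body you configured in the recipe;
    # if you chose JSON, it can be read like this.
    data = request.get_json(silent=True) or {}
    print('Received from IFTTT:', data)
    return 'OK', 200

if __name__ == '__main__':
    # The URL you would give IFTTT is then http://<your-host>:5000/ifttt-hook
    app.run(host='0.0.0.0', port=5000)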
This means that you can use the Makers Channel with an application you’ve hosted on the cloud (for example, on a Virtual Private Server). If you’re fortunate enough to have a static IP, or have a dynamic DNS service like DynDNS, you could feasibly use it at home with your own creations.
If you go down the path of using a VPS, both Digital Ocean and Linode come highly recommended.
IFTTT Your Smart Home
It’s probably worth noting that this isn’t IFTTT’s first foray into the Internet of Things. They already support an expansive range of Smart Home devices, ranging from the Nest Protect, to the Philips Hue lightbulb, and everything in between.
But this marks the first time where developers can easily integrate their own creations with IFTTT. And that, to me, is really damn cool.
Do more with IFTTT and your mobile device. Here’s how to automate your Android phone with IFTTT and use IFTTT applets with advanced filters.
Explore more about: Arduino, IFTTT, Internet of Things, Raspberry Pi.
The json payload does not work
Does this still exist? Is this the same as maker webhooks?
how is the maker used in the action (i.e. "that") part of ifttt?
I got it working for IFTTT and Pushover. One good thing about Pushover is it lets you select the sound, where normal IFTTT notifications are stuck with one tone for all of IFTTT.
#!/usr/bin/python
import RPi.GPIO as GPIO
import time
import requests

GPIO.setmode(GPIO.BCM)
PIR_PIN = 22
GPIO.setup(PIR_PIN, GPIO.IN)

def MOTION(PIR_PIN):
    print "Motion Detected!"
    payload = {'value1': 'Someone at Front Door'}
    r = requests.post("https://maker.ifttt.com/trigger/{event_here}/with/key/{secret_key}", data=payload)
    print r.text

print "PIR Module Test (CTRL+C to exit)"
time.sleep(2)
print "Ready"

try:
    GPIO.add_event_detect(PIR_PIN, GPIO.RISING, callback=MOTION)
    while 1:
        time.sleep(120)
except KeyboardInterrupt:
    print "Quit"
    GPIO.cleanup()
I had " " around payload = "{ 'value1' : 'Someone at Front Door'}" and was banging my head until I realized I should not have " " around the data.
What are you on about? This is the most useless of inventions I have ever heard of.
You can polish a turd, but it's still a turd.
I believe there's an error in the python code. The payload should not have quotes around it.
Hi,
It is mentioned that we can send 3 JSON values along with the request
Is that JSON object could b any JSON object?
I mean is it possible to send array as one single object?
thanks Matthew .... so useful article .... I have made an IFTTT channel with Maker to send text messages ... it sends messages very well but I don't know how I should add a line break in my message text?
I simply used enter but that did not work .... I used this html tag but it's not working ....
do you know how ?
Is there anybody who uses IFTTT with Devolo Home Control ?
Hey dude, I think I may be a little while figuring this out, but reading and reading and reading about implementing more complex rules in ifttt has brought me here... I'm working on the idea of writing a multi-presence web-service, probably in azure. I have some experience of .net c# so it shouldn't be difficult for me at that end... Hoping to write something that turns on my hue lights when they are already off, and when no-one is home, to do nothing when extra people arrive, then the turn them all off when the last person leaves. If I can write a web service to receive notification of who has arrived and who has left then I should be able to get it to trigger an event to turn them on / turn them off again when appropriate right? I know I could host it at home on a device but I'm an IT guy so using a cloud service appeals (and saves me buying and running hardware).
Is there a way of creating a receipt for lightwaverf and harmony. I want to turn the lights off when I watch movies
I am new to this please help.
Ajay
I want to do something very simple: for IFTTT to send an SMS to a number when the data connection speed is 0 kb/s. Is that possible?
I am currently in the possession of a "Flic". For school I would like to use my flic to control an analog RGB led strip. For example: Push once is turn it on and this color, push twice is turn it of,...
I was thinking about using Arduino and IFTTT but after roaming the internet for long while I came to the conclusion that I didn't understand any of the articles and don't have the skills to do this.
Do you have any suggestions? Thanks!
Kind regards,
Jolien
Followed your syntax but I always get "Bad Request" as response.
curl -X POST -H "Content-Type: application/json" -d '{"value1":"URL","value2":"Test"}'
There must be a mistake but I sure can't find it. Any suggestion?
I am having this issue: {"errors":[{"message":"You sent an invalid key."}]}
even though I tried with some other recipes and I got the same problem. Any wisdom?
I get this same error. I tried various ways of sending the event, including IFTTT's own web form that is on the {secret key} page, and it always returns "You sent an invalid key." I also reset the key several times using the "edit connection" button with no luck there either.
I had read on some other sites where people had the problem and it ended up being that they had a bad event name in the request (it didn't match the name in their applet) but I've quadruple checked the key and event name and I just keep getting that same error.
I've contacted support and they said they routed it to the maker team for examination, but they have not replied.
It's almost like there is some additional step required to activate your account for this service or something.
Any plans to allow making web requests to a URL secured by authentication mechanisms? In this scenario, would the service exposing the URL need to create a partnership with IFTTT to create a channel?
It is very interesting !
I know it is easy to turn a Hue light on with IFTTT.
I am particularly interested in using this function to wake me up earlier in case of rain (because of traffic jams). But it seems impossible to ask IFTTT to turn the light on at 6:00 IF it is raining, and I do not want to be woken at 3:00 in case of rain!
Do you know how to implement 2 conditions with IFTTT? One for the weather, the second for the time?
You can make two recipes.
1. Turn on the hue at 6.00 am when weather channel sends you a report saying it's raining (You can specify the time of weather report)
2. Make another recipe for when the weather channel says anything but rain
I didn't even notice this. I thought I'd add that they added support for Spotify and Pinterest (among other things) recently. It really is an awesome tool.
Agreed. I'm all about IFTTT.
Cool!
https://www.makeuseof.com/tag/ifttt-connect-anything-maker-channel/
Hello everyone!
Last week I began studying and practicing Java. I am trying my best to learn and as with any new study, I am running into problems that are causing some great frustration.
I have a quick question for anyone who is willing to help. It is likely a 'silly' one, but to a beginner, it is not.
I am dealing with Pythagorean triples. I am trying to create a program using loops that can display 20 possible combinations of values that will fulfill side measurements of right triangles. I am trying to write the program in a way so that congruent triangles are not represented twice.
Example of what I am not trying to do:
3,4,5
4,3,5
4,12,13
12, 4, 13
etc.
Example of what I am trying to do:
3,4,5
4,12,13,
etc.
As for the loops, my instructor is requiring me to set side1, side2, and hypot equal to 0 initially. So I try to get out of the zero zone, as seen in the first and third for commands, by adding one to side1 and one to hypot.
Now in an effort to avoid repeated values, I want to make side2 greater than side1 at all times.
Problem is, when I go to run the code, I get values that are not even Pythagorean triples.
Can anybody out there offer their assistance to a struggling beginner?
I would appreciate it greatly!
tryingtolearn
Code :
public class JPythagTriples {
    public static void main(String[] Theory) {
        int side1 = 0, side2 = 0, hypot = 0, sideSum = 0;
        System.out.println("Side 1\tSide 2\tHypotenuse\n");
        for (side1 = 0; ((side1 + 1) * (side1 + 1)) < 2600; side1++) {
            // side 2 must be greater than side 1, so values are not repeated.
            for (side2 = (side1 + 1); ((side2 + 1) * (side2 + 1)) < 2600; side2++) {
                for (hypot = 0; ((hypot + 1) * (hypot + 1)) < 2600; hypot++) {
                    sideSum = ((side1 + 1) * (side1 + 1)) + ((side2 + 1) * (side2 + 1));
                    if (sideSum == (hypot + 1) * (hypot + 1)) {
                        System.out.println(side1 + "\t" + side2 + "\t" + hypot);
                    }
                }
            }
        }
    }
}
http://www.javaprogrammingforums.com/%20loops-control-statements/16408-pythagorean-triples-help-im-beginner-printingthethread.html
Learn how to use Angular, D3, and Socket.IO to build an application that provides real-time charts to its users.
TL;DR: The needs of web users keep increasing. The capabilities of the web in the present era can be used to build very rich interfaces. The interfaces may include widgets in the dashboards, huge tables with incrementally loading data, different types of charts, and anything else that you can think of. Thanks to technologies like WebSockets, users want to see the UI updated as early as possible. This is a good problem for you to know how to deal with.
In this article, you will build a virtual market application that shows a D3 multi-line chart. That chart will consume data from a Node.js backend consisting of an Express API and a SocketIO instance to get this data in real time.
"Learn how to create real-time @Angular apps with D3 and Socket.IO"
Creating a Virtual Market Server
The demo app you are going to build consists of two parts. One is a Node.js server that provides market data and the other is an Angular application consuming this data. As stated, the server will consist of an Express API and a SocketIO endpoint to serve the data continuously.
So, to create this server and this Angular application, you will need a directory to hold the source code. As such, create a new directory called
virtual-market and, inside this folder, create a sub-directory called
server. In a terminal (e.g. Bash or PowerShell), you can move into the
server directory and run the following command:
# move into the server directory
cd virtual-market/server/

# initialize it as an NPM project
npm init -y
This command will generate the
package.json file with some default properties. After initializing this directory as an NPM project, run the following command to install some dependencies of the server:
npm install express moment socket.io
Once they are installed, you can start building your server.
Building the Express API
First, create a new file called
market.js inside the
server directory. This file will be used as a utility. It will contain the data of a virtual market and it will contain a method to update the data. For now, you will add the data alone and the method will be added while creating the Socket.IO endpoint. So, add the following code to this file:
const marketPositions = [ {"date": "10-05-2012", "close": 68.55, "open": 74.55}, {"date": "09-05-2012", "close": 74.55, "open": 69.55}, {"date": "08-05-2012", "close": 69.55, "open": 62.55}, {"date": "07-05-2012", "close": 62.55, "open": 56.55}, {"date": "06-05-2012", "close": 56.55, "open": 59.55}, {"date": "05-05-2012", "close": 59.86, "open": 65.86}, {"date": "04-05-2012", "close": 62.62, "open": 65.62}, {"date": "03-05-2012", "close": 64.48, "open": 60.48}, {"date": "02-05-2012", "close": 60.98, "open": 55.98}, {"date": "01-05-2012", "close": 58.13, "open": 53.13}, {"date": "30-04-2012", "close": 68.55, "open": 74.55}, {"date": "29-04-2012", "close": 74.55, "open": 69.55}, {"date": "28-04-2012", "close": 69.55, "open": 62.55}, {"date": "27-04-2012", "close": 62.55, "open": 56.55}, {"date": "26-04-2012", "close": 56.55, "open": 59.55}, {"date": "25-04-2012", "close": 59.86, "open": 65.86}, {"date": "24-04-2012", "close": 62.62, "open": 65.62}, {"date": "23-04-2012", "close": 64.48, "open": 60.48}, {"date": "22-04-2012", "close": 60.98, "open": 55.98}, {"date": "21-04-2012", "close": 58.13, "open": 53.13} ]; module.exports = { marketPositions, };
Now, add another file and name it
index.js. This file will do all the Node.js work required. For now, you will add the code to create an Express REST API to serve the data. So, add the following code to the file
index.js:
const app = require('express')(); const http = require('http').Server(app); const market = require('./market'); const port = 3000; app.use((req, res, next) => { res.header('Access-Control-Allow-Origin', '*'); res.header('Access-Control-Allow-Headers', 'Origin, X-Requested-With, Content-Type, Accept'); next(); }); app.get('/api/market', (req, res) => { res.send(market.marketPositions); }); http.listen(port, () => { console.log(`Listening on *:${port}`); });
After saving this file, you can check if everything is going well. Run the following command to start your Express REST API:
# from the server directory, run the server
node index.js
As this command starts your Node.js server on port
3000, you can visit http://localhost:3000/api/market to see the market updates of the last few days.
Adding Socket.IO to Serve Data in Real Time
To show a real-time chart, you will need to simulate real-time market data by updating it every 5 seconds. For this, you will add a new method to the
market.js file. This method will be called from a Socket.IO endpoint that you will add to your
index.js file. So, open the file
market.js and add the following code to it:
const moment = require('moment');

// const marketPositions ...

let counter = 0;

function updateMarket() {
  const diff = Math.floor(Math.random() * 1000) / 100;
  const lastDay = moment(marketPositions[0].date, 'DD-MM-YYYY').add(1, 'days');
  let open;
  let close;

  if (counter % 2 === 0) {
    open = marketPositions[0].open + diff;
    close = marketPositions[0].close + diff;
  } else {
    open = Math.abs(marketPositions[0].open - diff);
    close = Math.abs(marketPositions[0].close - diff);
  }

  marketPositions.unshift({ date: lastDay.format('DD-MM-YYYY'), open, close });
  counter++;
}

module.exports = {
  marketPositions,
  updateMarket,
};
The
updateMarket method generates a random number every time it is called and adds it to (or subtracts it from) the last market value to generate some randomness in the figures. Then, it adds this entry to the
marketPositions array.
Now, open the
index.js file, so you can create a Socket.IO connection to it. This connection will call the
updateMarket method after every 5 seconds to update the market data and will emit an update on the Socket.IO endpoint to update the latest data for all listeners. In this file, make the following changes:
// ... other import statements ...
const io = require('socket.io')(http);

// ... app.use and app.get ...

setInterval(function () {
  market.updateMarket();
  io.sockets.emit('market', market.marketPositions[0]);
}, 5000);

io.on('connection', function (socket) {
  console.log('a user connected');
});

// http.listen(3000, ...
With these changes in place, you can start building the Angular client to use this.
Building the Angular Application
To generate your Angular application, you can use Angular CLI. There are two ways to do it. One is to install the CLI globally on your machine and the other is to use a tool that comes with NPM called
npx. Using
npx is better because it avoids the need to install the package locally and because you always get the latest version. If you want to use
npx, make sure that you have npm 5.2 or above installed.
Then, go back to the main directory of your whole project (i.e. the
virtual-market directory) and run the following command to generate the Angular project:
npx @angular/cli new angular-d3-chart
Once the project is generated, you need to install both the D3 and Socket.IO NPM libraries. So, move to the
angular-d3-chart directory and run the following command to install these libraries:
npm install d3 socket.io-client
As you will use these libraries with TypeScript, it is good to have their typings installed. So, run the following command to install the typings:
npm i @types/d3 @types/socket.io-client -D
Now that the setup process is done, you can run the application to see if everything is fine:
# from the angular-d3-chart directory
npm start
To see the default Angular application, just point your browser to http://localhost:4200.
Building a Component to Display the D3 Chart
Now that your Angular application setup is ready, you can start writing its code. First, you will add a component to display the multi-line D3 chart. Second, you will create a service to fetch the data. For now, this service will consume static data from the REST API then, in no time, you will add real-time capabilities to your app.
So, run the following command to add a file for this service:
npx ng generate service market-status
To consume the REST APIs, you need to use the
HttpClient service from the
HttpClientModule module. This module has to be imported into the application's module for this. As such, open the
app.module.ts file and replace its code with this:
import {BrowserModule} from '@angular/platform-browser'; import {NgModule} from '@angular/core'; import {HttpClientModule} from '@angular/common/http'; import {AppComponent} from './app.component'; @NgModule({ declarations: [ AppComponent ], imports: [ BrowserModule, HttpClientModule ], providers: [], bootstrap: [AppComponent] }) export class AppModule {}
As you can see, the new version of this file does nothing besides adding the
HttpClientModule to the
imports section of the
AppModule module.
Now, open the
market-status.service.ts file and add the following code to it:
import {Injectable} from '@angular/core'; import {HttpClient} from '@angular/common/http'; import {MarketPrice} from './market-price'; @Injectable({ providedIn: 'root' }) export class MarketStatusService { private baseUrl = 'http://localhost:3000'; constructor(private httpClient: HttpClient) { } getInitialMarketStatus() { return this.httpClient.get<MarketPrice[]>(`${this.baseUrl}/api/market`); } }
This service uses the
MarketPrice class to structure the data received from your backend API (
baseUrl = 'http://localhost:3000'). To add this class to your project, create a new file named
market-price.ts in the
app folder and add the following code to it:
export class MarketPrice { open: number; close: number; date: string | Date; }
Now, add a new component to the application, so you can show the multi-line D3 chart. The following command adds this component:
npx ng g c market-chart
Then, open the
market-chart.component.html file and replace its default content with this:
<div #chart></div>
The D3 chart will be rendered inside this
<div #chart> element. As you can see, you created a local reference for the div element (
#chart). You will use this reference in your component class while configuring D3.
This component will not use the
MarketStatusService to fetch data. Instead, it will accept the data as input. The goal of this approach is to make the
market-chart component reusable. For this, the component will have an
Input field and the value to this field will be passed from the
app-root component. The component will use the
ngOnChanges lifecycle hook to render the chart whenever there is a change in the data. It will also use the
OnPush change detection strategy to ensure that the chart is re-rendered only when the input changes.
So, open the file
market-chart.component.ts and add the following code to it:
import {ChangeDetectionStrategy, Component, ElementRef, Input, OnChanges, ViewChild} from '@angular/core'; import * as d3 from 'd3'; import {MarketPrice} from '../market-price'; @Component({ selector: 'app-market-chart', templateUrl: './market-chart.component.html', styleUrls: ['./market-chart.component.css'], changeDetection: ChangeDetectionStrategy.OnPush }) export class MarketChartComponent implements OnChanges { @ViewChild('chart') chartElement: ElementRef; parseDate = d3.timeParse('%d-%m-%Y'); @Input() marketStatus: MarketPrice[]; private svgElement: HTMLElement; private chartProps: any; constructor() { } ngOnChanges() { } formatDate() { this.marketStatus.forEach(ms => { if (typeof ms.date === 'string') { ms.date = this.parseDate(ms.date); } }); } }
Now, the
MarketChartComponent class has everything required to render the chart. In addition to the local reference for the div (
chartElement) and the lifecycle hook, the class has a few fields that will be used while rendering the chart. The
parseDate method converts string values to Date objects and the private fields
svgElement and
chartProps will be used to hold the reference of the SVG element and the properties of the chart respectively. These fields will be quite useful to re-render the chart.
Now, the most complex part of the tutorial. Add the following method to the
MarketChartComponent class:
buildChart() {
  this.chartProps = {};
  this.formatDate();

  // Set the dimensions of the canvas / graph
  var margin = { top: 30, right: 20, bottom: 30, left: 50 },
    width = 600 - margin.left - margin.right,
    height = 270 - margin.top - margin.bottom;

  // Set the ranges
  this.chartProps.x = d3.scaleTime().range([0, width]);
  this.chartProps.y = d3.scaleLinear().range([height, 0]);

  // Define the axes
  var xAxis = d3.axisBottom(this.chartProps.x);
  var yAxis = d3.axisLeft(this.chartProps.y).ticks(5);

  let _this = this;

  // Define the line
  var valueline = d3.line<MarketPrice>()
    .x(function (d) {
      if (d.date instanceof Date) {
        return _this.chartProps.x(d.date.getTime());
      }
    })
    .y(function (d) {
      console.log('Close market');
      return _this.chartProps.y(d.close);
    });

  // Define the line
  var valueline2 = d3.line<MarketPrice>()
    .x(function (d) {
      if (d.date instanceof Date) {
        return _this.chartProps.x(d.date.getTime());
      }
    })
    .y(function (d) {
      console.log('Open market');
      return _this.chartProps.y(d.open);
    });

  var svg = d3.select(this.chartElement.nativeElement)
    .append('svg')
    .attr('width', width + margin.left + margin.right)
    .attr('height', height + margin.top + margin.bottom)
    .append('g')
    .attr('transform', `translate(${margin.left},${margin.top})`);

  // Scale the range of the data
  this.chartProps.x.domain(
    d3.extent(_this.marketStatus, function (d) {
      if (d.date instanceof Date) return (d.date as Date).getTime();
    }));
  this.chartProps.y.domain([0, d3.max(this.marketStatus, function (d) {
    return Math.max(d.close, d.open);
  })]);

  // Add the valueline2 path.
  svg.append('path')
    .attr('class', 'line line2')
    .style('stroke', 'green')
    .style('fill', 'none')
    .attr('d', valueline2(_this.marketStatus));

  // Add the valueline path.
  svg.append('path')
    .attr('class', 'line line1')
    .style('stroke', 'black')
    .style('fill', 'none')
    .attr('d', valueline(_this.marketStatus));

  // Add the X Axis
  svg.append('g')
    .attr('class', 'x axis')
    .attr('transform', `translate(0,${height})`)
    .call(xAxis);

  // Add the Y Axis
  svg.append('g')
    .attr('class', 'y axis')
    .call(yAxis);

  // Setting the required objects in chartProps so they could be used to update the chart
  this.chartProps.svg = svg;
  this.chartProps.valueline = valueline;
  this.chartProps.valueline2 = valueline2;
  this.chartProps.xAxis = xAxis;
  this.chartProps.yAxis = yAxis;
}
Refer to the comments added before every section in the above method to understand what the code is doing. Also, if you have any specific doubt, just leave a comment.
Now, you will have to change the
ngOnChanges function (still in your
MarketChartComponent class) to call this method:
ngOnChanges() { if (this.marketStatus) { this.buildChart(); } }
Now, you need to insert this component in the
app-root component to see the chart. So, open the
app.component.html file and replace its content with:
<app-market-chart></app-market-chart>
Then, you have to replace the content of the
app.component.ts file with the following code:
import {Component} from '@angular/core'; import {MarketStatusService} from './market-status.service'; import {Observable} from 'rxjs'; import {MarketPrice} from './market-price'; @Component({ selector: 'app-root', templateUrl: './app.component.html', styleUrls: ['./app.component.css'] }) export class AppComponent { title = 'app'; marketStatus: MarketPrice[]; marketStatusToPlot: MarketPrice[]; set MarketStatus(status: MarketPrice[]) { this.marketStatus = status; this.marketStatusToPlot = this.marketStatus.slice(0, 20); } constructor(private marketStatusSvc: MarketStatusService) { this.marketStatusSvc.getInitialMarketStatus() .subscribe(prices => { this.MarketStatus = prices; }); } }
Save these changes and run the application using the
ng serve command (or
npm start). Now, head to http://localhost:4200 and you will see a page with a chart plotting the market data.
Adding Real-Time Capabilities to the D3 Chart
Now that you have the chart rendered on the page, you can make it receive the market updates from Socket.IO to make it real-time. To receive these updates, you need to add a listener to the Socket.IO endpoint in the
market-status.service.ts file. So, open this file and replace its code with:
import {Injectable} from '@angular/core'; import {HttpClient} from '@angular/common/http'; import {MarketPrice} from './market-price'; import { Subject, from } from 'rxjs'; import * as socketio from 'socket.io-client'; @Injectable({ providedIn: 'root' }) export class MarketStatusService { private baseUrl = 'http://localhost:3000'; constructor(private httpClient: HttpClient) { } getInitialMarketStatus() { return this.httpClient.get<MarketPrice[]>(`${this.baseUrl}/api/market`); } getUpdates() { let socket = socketio(this.baseUrl); let marketSub = new Subject<MarketPrice>(); let marketSubObservable = from(marketSub); socket.on('market', (marketStatus: MarketPrice) => { marketSub.next(marketStatus); }); return marketSubObservable; } }
The new method, getUpdates, does three things:
- it creates a manager for the Socket.IO endpoint at the given URL;
- it creates an RxJS Subject and gets an Observable from this subject. This observable is returned from this method so consumers can listen to the updates;
- the call to the on method on the Socket.IO manager adds a listener to the market event. The callback passed to this method is called whenever the Socket.IO endpoint publishes something new.
Now, you have to make the AppComponent class consume the getUpdates method. So, open the app.component.ts file and modify the constructor as shown below:
constructor(private marketStatusSvc: MarketStatusService) {
  this.marketStatusSvc.getInitialMarketStatus()
    .subscribe(prices => {
      this.MarketStatus = prices;

      let marketUpdateObservable = this.marketStatusSvc.getUpdates();  // 1
      marketUpdateObservable.subscribe((latestStatus: MarketPrice) => {  // 2
        this.MarketStatus = [latestStatus].concat(this.marketStatus);  // 3
      });  // 4
    });
}
In the above snippet, the statements marked with the numbers are the new lines added to the constructor. Observe the statement labeled with 3. This statement creates a new array instead of updating the field
marketStatus. This is done to let the consuming
app-market-chart component know about the change when you have an update.
The last change you will need to do to see the chart working in real time is to make the flowing data hit the chart. To do this, open the
market-chart.component.ts file and add the following method to the
MarketChartComponent class:
updateChart() {
  let _this = this;
  this.formatDate();

  // Scale the range of the data again
  this.chartProps.x.domain(d3.extent(this.marketStatus, function (d) {
    if (d.date instanceof Date) {
      return d.date.getTime();
    }
  }));
  this.chartProps.y.domain([0, d3.max(this.marketStatus, function (d) {
    return Math.max(d.close, d.open);
  })]);

  // Select the section we want to apply our changes to
  this.chartProps.svg.transition();

  // Make the changes to the line chart
  this.chartProps.svg.select('.line.line1') // update the line
    .attr('d', this.chartProps.valueline(this.marketStatus));
  this.chartProps.svg.select('.line.line2') // update the line
    .attr('d', this.chartProps.valueline2(this.marketStatus));
  this.chartProps.svg.select('.x.axis') // update x axis
    .call(this.chartProps.xAxis);
  this.chartProps.svg.select('.y.axis') // update y axis
    .call(this.chartProps.yAxis);
}
The comments added in the snippet explain what you are doing in this method. Now, you have to make the
ngOnChanges method call this new method. So, change the
ngOnChanges() method in the
MarketChartComponent class as shown below:
ngOnChanges() { if (this.marketStatus && this.chartProps) { this.updateChart(); } else if (this.marketStatus) { this.buildChart(); } }
Now, if you run the application, you will see an error on the browser console saying
global is not defined.
This is because Angular CLI 6 no longer provides Node's global object in the browser bundle, and the Socket.IO client expects it. To fix this problem, add the following statement to the
polyfills.ts file:
(window as any).global = window;
With this, all the changes are done. Save all your files and run the applications again. You can move into the
server directory in one terminal and issue
node index.js to run your backend API, then move to the
angular-d3-chart directory and issue
npm start to run the Angular application.
Now, if you head to http://localhost:4200, you will see your nice chart with real-time data flowing into it every 5 seconds.
Awesome, right?
"Adding real-time capabilities to @Angular is easy with D3 and Socket.IO" you saw in this tutorial, the web has capabilities that allow you to build very rich applications. Your real-time chart, for example, adds a huge value to your app because there the user knows that they will have the latest data without having to refresh the page or performing any action. This kind of interactivity improves your users' experiences and will contribute to their happiness.
WDYT? Ready to add some real-time data to the web?
https://auth0.com/blog/amp/real-time-charts-using-angular-d3-and-socket-io/
Created attachment 28733 [details]
source ppt file
I use the PPT2PNG utility to convert a ppt file to an image. There is a bug with numbering: in the resulting image, numbered lists are drawn as bullets.
public class Main {
public static void main(String[] args) {
String input[] = {"D:\\Test.ppt"};
try {
PPT2PNG.main(input);
} catch (Exception ex) {
ex.printStackTrace();
}
}
}
Created attachment 28734 [details]
result of converting
Numbered lists are not yet supported and are replaced with plain bullets.
The first step to fix it is to support numbered lists in the .ppt format reader. The current implementation does not read this info and renders simple bullets instead.
The second step is to fix the renderer to make sense of this information. Note that the actual text is not saved in the format. PowerPoint decides what to show based on the chosen numbering scheme and index of a text block. For example, if the numbering scheme is "roman upper-case paren-both", then the numbering for the first paragraph will be (I), for the second paragraph - (II), etc.
Yegor
I've added auto-numbering some time back, although it still has some issues with continued lists.
Apart from the ordered numbers, the hanging indent doesn't match, but this is a different issue.
https://bz.apache.org/bugzilla/show_bug.cgi?id=53195
Does the gcc know to pull out declarations from frequently called functions?
I am working with the
gcc5 compiler and I was wondering whether it is able to optimize the following code:
Instead of
void mArray(int n) {
    double a[n];
    int x;
    // ... Do some things with a and x
}

int main(int argc, char *argv[]) {
    int i;
    for (i = 0; i < 1000000; i++) {
        mArray(argc);
    }
}
can the compiler optimize this to
int main(int argc, char *argv[]) {
    int i;
    double a[argc];
    int x;
    for (i = 0; i < 1000000; i++) {
        // ... Do some things with a and x
    }
}
Apart from the inlining, does the gcc compiler know how to pull these declarations and the corresponding memory allocations out of the loop in order to yield faster code?
1 answer
- answered 2018-02-13 00:48 EJP
does the gcc compiler know how to pull these declarations and the corresponding memory allocations out of the loop
The only memory allocation inside the loop is for the stack frame, which contains at least a base pointer and a return address, and the allocation occurs simply by incrementing or decrementing the stack pointer register. Adding more space in the frame for the primitive variables costs nothing.
in order to yield faster code?
It would not yield faster code.
Note that I am speaking about the code in your original question. I am not going to chase edits.
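If you want to confirm this for your own gcc build, the quickest check is to compare the assembly actually emitted for the two versions (assuming you have saved them as a.c and b.c):

gcc -O2 -S -fverbose-asm a.c -o a.s
gcc -O2 -S -fverbose-asm b.c -o b.s
diff a.s b.s

With optimization enabled, the call will typically be inlined anyway, so any difference you see is usually limited to frame setup rather than repeated allocation work.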
See also questions close to this topic
- Can I use a lambda to simplify a for loop
I would like to know if there are ways to simplify the for loop, such as a lambda expression without changing the nature of the code below. If possible, I would also like to know if there are other ways(better) to execute a series of functions that could do something similarly like the code below. Thank you
#include <iostream> #include <functional> #include <vector> using namespace std; void turn_left(){ // left turn function cout<<"Turn left"<<endl; } void turn_right(){ // right turn function cout<<"Turn right"<<endl; } void onward(){ // moving forward function cout<<"Onward"<<endl; } int main() { vector<char>commands{'L', 'R', 'M'}; // commmands (keys)for robot to turn or move; vector<pair<function<void()>, char>> actions; // a vector of pairs, which pairs up the function pointers with the chars; actions.push_back(make_pair(turn_left, 'L')); //populate the vector actions actions.push_back(make_pair(turn_right, 'R')); actions.push_back(make_pair(onward, 'M')); for (int i =0; i<commands.size();++i){ if(commands.at(i)==actions.at(i).second){ actions.at(i).first(); } } }
- How to convert a string type into ostringstream
I have the following code, I want to convert the str string type into ostringstream so i can use .rdbuf()
CTime tm; tm=CTime::GetCurrentTime(); CString str=tm.Format("%X");
- Use of assignment operators inside if condition in C++
If I use assignment operators or output functions inside the if condition, why is it taken as a "true" boolean value?
For example, I do
if(cout << "a"){cout << "c";}
Or
if(int x=7){cout << "c";}
Or even
if(6){cout << "c";}
the outputs are indeed, c and c and c in each case. But we were told that, inside the if condition, we have to use an expression that finally evaluates to a boolean value like 1 or 0.
So what is the piece of information I am missing?
- Interrupt Service Routine Function Call
I am receiving messages over CAN. Whenever a message comes in, an interrupt is triggered. The message contains data to regulate the speed of a motor, that should be applied immediatley.
What is the best practice:
int rpm; while(1){ changeSpeed(rpm) } isr(){ rpm = message.rpm }
or
int rpm; while(1){ //nothing } isr(){ changeSpeed(rpm) }
the function changeSpeed() does some stuff under the hood and might take some time.
- 0m Guru Meditation Error: Core 0 panic'ed (LoadProhibited)
I am trying to setup over the air(OTA) updates on my ESP-32 device using a HTTP server with SSL certificates. I am using the ESP-IDF framework on the ESP-32 system with this docker container as my tool chain so I can flash and change configuration settings using
sudo esp32env make flashand
sudo esp32env make menuconfig. I found a library which supports OTA updates via HTTP server with SSL on the ESP-IDF framework. I modified the values in the main.h file for my WIFI access point, HTTP server IP address, CA certificate key, and Peer key. After I flash to my ESP-32, my device continuously reboots giving this error.
[0;32mI (7169) fwup_wifi: Firmware updater task checking for firmware update.[0m Guru Meditation Error: Core 0 panic'ed (LoadProhibited)
The Full Serial Monitor Output::4368 load:0x40078000,len:11080 load:0x40080000,len:252 entry 0x40080034 ␛[0;32mI (45) boot: ESP-IDF v2.1 2nd stage bootloader␛[0m ␛[0;32mI (45) boot: compile time 17:22:27␛[0m ␛[0;32mI (46) boot: Enabling RNG early entropy source...␛[0m ␛[0;32mI (59) boot: SPI Speed : 40MHz␛[0m ␛[0;32mI (72) boot: SPI Mode : DIO␛[0m ␛[0;32mI (84) boot: SPI Flash Size : 4MB␛[0m ␛[0;32mI (97) boot: Partition Table:␛[0m ␛[0;32mI (108) boot: ## Label Usage Type ST Offset Length␛[0m ␛[0;32mI (130) boot: 0 nvs WiFi data 01 02 00009000 00004000␛[0m ␛[0;32mI (154) boot: 1 otadata OTA data 01 00 0000d000 00002000␛[0m ␛[0;32mI (177) boot: 2 phy_init RF data 01 01 0000f000 00001000␛[0m ␛[0;32mI (200) boot: 3 factory factory app 00 00 00010000 00100000␛[0m ␛[0;32mI (223) boot: 4 ota_0 OTA app 00 10 00110000 00100000␛[0m ␛[0;32mI (247) boot: 5 ota_1 OTA app 00 11 00210000 00100000␛[0m ␛[0;32mI (270) boot: End of partition table␛[0m ␛[0;32mI (283) boot: Disabling RNG early entropy source...␛[0m ␛[0;32mI (300) boot: Loading app partition at offset 00210000␛[0m ␛[0;32mI (1184) boot: segment 0: paddr=0x00210018 vaddr=0x3f400020 size=0x17c38 ( 97336) map␛[0m ␛[0;32mI (1184) boot: segment 1: paddr=0x00227c58 vaddr=0x3ffc0000 size=0x02f14 ( 12052) load␛[0m ␛[0;32mI (1207) boot: segment 2: paddr=0x0022ab74 vaddr=0x40080000 size=0x00400 ( 1024) load␛[0m ␛[0;32mI (1229) boot: segment 3: paddr=0x0022af7c vaddr=0x40080400 size=0x0508c ( 20620) load␛[0m ␛[0;32mI (1265) boot: segment 4: paddr=0x00230010 vaddr=0x400d0018 size=0x72a20 (469536) map␛[0m ␛[0;32mI (1281) boot: segment 5: paddr=0x002a2a38 vaddr=0x4008548c size=0x04998 ( 18840) load␛[0m ␛[0;32mI (1316) boot: segment 6: paddr=0x002a73d8 vaddr=0x400c0000 size=0x00000 ( 0) load␛[0m ␛[0;32mI (433) cpu_start: Pro cpu up.␛[0m ␛[0;32mI (437) cpu_start: Starting app cpu, entry point is 0x40080f58␛[0m ␛[0;32mI (443) cpu_start: App cpu up.␛[0m ␛[0;32mI (448) heap_init: Initializing. 
RAM available for dynamic allocation:␛[0m ␛[0;32mI (454) heap_init: At 3FFAFF10 len 000000F0 (0 KiB): DRAM␛[0m ␛[0;32mI (460) heap_init: At 3FFC9080 len 00016F80 (91 KiB): DRAM␛[0m ␛[0;32mI (466) heap_init: At 3FFE0440 len 00003BC0 (14 KiB): D/IRAM␛[0m ␛[0;32mI (473) heap_init: At 3FFE4350 len 0001BCB0 (111 KiB): D/IRAM␛[0m ␛[0;32mI (479) heap_init: At 40089E24 len 000161DC (88 KiB): IRAM␛[0m ␛[0;32mI (485) cpu_start: Pro cpu start user code␛[0m ␛[0;32mI (168) cpu_start: Starting scheduler on PRO CPU.␛[0m ␛[0;32mI (0) cpu_start: Starting scheduler on APP CPU.␛[0m ␛[0;32mI (169) main: ---------- Intialization started ----------␛[0m ␛[0;32mI (169) main: ---------- Software version: 1 -----------␛[0m ␛[0;32mI (199) main: Set up WIFI network connection.␛[0m ␛[0;32mI (199) wifi_sta: wifi_sta_init: network = 'Home'␛[0m I (209) wifi: wifi firmware version: bffcf7f I (209) wifi: config NVS flash: enabled I (209) wifi: config nano formating: disabled ␛[0;32mI (219) system_api: Base MAC address is not set, read default base MAC address from BLK0 of EFUSE␛[0m ␛[0;32mI (219) system_api: Base MAC address is not set, read default base MAC address from BLK0 of EFUSE␛[0m I (249) wifi: Init dynamic tx buffer num: 32 I (249) wifi: Init data frame dynamic rx buffer num: 32 I (249) wifi: Init management frame dynamic rx buffer num: 32 I (259) wifi: wifi driver task: 3ffd1cc4, prio:23, stack:4096 I (259) wifi: Init static rx buffer num: 10 I (269) wifi: Init dynamic rx buffer num: 32 I (269) wifi: wifi power manager task: 0x3ffd6904 prio: 21 stack: 2560 ␛[0;32mI (499) phy: phy_version: 3662, 711a97c, May 9 2018, 14:29:06, 0, 0␛[0m I (509) wifi: mode : sta (30:ae:a4:47:7b:b4) ␛[0;32mI (509) main: app_event_handler: event: 2␛[0m I (629) wifi: n:1 0, o:1 0, ap:255 255, sta:1 0, prof:1 I (1479) wifi: state: init -> auth (b0) ␛[0;32mI (1479) main: Initialising OTA firmware updating.␛[0m ␛[0;31mE (1479) wifi_tls: wifi_tls_create_context: parameter missing␛[0m I (1479) wifi: state: auth -> assoc (0) ␛[0;32mI (1489) fwup_wifi: Firmware updater task started.␛[0m I (1489) wifi: state: assoc -> run (10) I (1509) wifi: connected with Home, channel 1 ␛[0;32mI (1509) main: app_event_handler: event: 4␛[0m ␛[0;32mI (2169) event: sta ip: 10.0.1.26, mask: 255.255.255.0, gw: 10.0.1.1␛[0m ␛[0;32mI (2169) main: app_event_handler: event: 7␛[0m ␛[0;32mI (2169) wifi_sta: Device is now connected to WIFI network␛[0m I (4499) wifi: pm start, type:0 ␛[0;32mI (7169) fwup_wifi: Firmware updater task checking for firmware update.␛[0m Guru Meditation Error: Core 0 panic'ed (LoadProhibited) . Exception was unhandled. Register dump: PC : 0x400d277f PS : 0x00060b30 A0 : 0x800d209b A1 : 0x3ffd9410 A2 : 0x00000000 A3 : 0x00000003 A4 : 0x3f4021a9 A5 : 0x3ffc3498 A6 : 0x00000005 A7 : 0x3ffc345c A8 : 0x80081eb1 A9 : 0x3ffd93c0 A10 : 0x00000053 A11 : 0x3ffd6fe4 A12 : 0x3ffd9400 A13 : 0x0000000c A14 : 0x00000001 A15 : 0x00000000 SAR : 0x00000004 EXCCAUSE: 0x0000001c EXCVADDR: 0x00000000 LBEG : 0x400014fd LEND : 0x4000150d LCOUNT : 0xfffffffd Backtrace: 0x400d277f:0x3ffd9410 0x400d2098:0x3ffd9450 Rebooting...
- Issue with I2C EEPROM page writes using ESP32
I'm having trouble writing multiple bytes to a 24LC256 EEPROM using an ESP32.
The following functions are what is responsible for reading and writing to the EEPROM. (I understand that page writing is limited to increments of 64 bytes using this EEPROM, this code is just for testing)
EEPROM write function
esp_err_t eeprom_write_write(cmd, data, size, 1); // Start page write i2c_master_stop(cmd); // Call stop command esp_err_t ret = i2c_master_cmd_begin(I2C_NUM_1, cmd, 1000/portTICK_PERIOD_MS); i2c_cmd_link_delete(cmd); return ret; }
EEPROM read function
esp_err_t eeprom_read_start(cmd); i2c_master_write_byte(cmd, (deviceaddress<<1)|EEPROM_READ_ADDR, 1); // Sequential read support if (size > 1) { i2c_master_read(cmd, data, size-1, 0); // Send ack for these bytes // as part of a sequential read } i2c_master_read_byte(cmd, data+size-1, 1); // Do not ack the last byte i2c_master_stop(cmd); esp_err_t ret = i2c_master_cmd_begin(I2C_NUM_1, cmd, 1000/portTICK_PERIOD_MS); i2c_cmd_link_delete(cmd); return ret; }
The odd thing is that I am able to write 13 bytes to the EEPROM and everything seems to be fine.
eeprom_write(0x50, 0x0000, data_wr, 13); // Returns ESP_OK eeprom_read(0x50, 0x0000, data_rd, 64); // Returns ESP_OK
However, when writing more than 13 bytes to the EEPROM, the sequential read function screws up.
eeprom_write(0x50, 0x0000, data_wr, 14); // Returns ESP_OK eeprom_read(0x50, 0x0000, data_rd, 64); // Return ESP_FAIL
I'm sure that everything I'm doing follows the rules of read and writes according to the 24LC256 datasheet. Is there something that I'm missing?
- compile part of code using -std=c11 flag
Is there a possibility to compile one part of a code with different flags - without changing the flags in the makefile?
I've got an existing project which is not set up for -std=c11. Now I have added my own code and the compiler tells me things like
'for' loop initial declarations are only allowed in C99 or C11 mode
when I change the flag to std=c11 or c99, the rest of the program wont compile anymore.
Is there something like
extern "C" {}in C++ to tell the compiler to treat the following part as std=c11?
Thank you
- Play multiple wav audio with C and libao at same time
I'm using libao (ao_play) to play some buffers. I listen the keyboard keys and for each key I have a wav sound to play. It's simple.
With ao_play I see that the application blocks while is playing the sound. Because I want to play multiple audios at same time, I needed to use threads (with pthread lib).
It works, but I fell like a workaround and if I play to much files (maybe 10 or something like this) so everything stuck for some seconds and so come back.
Well, my question is: how to play multiple sounds at same time non-blocking using libao (and not using threads)?
- Uninstall newer version of Boost, install older version
I have Boost 1.66 installed on a system, but I need to compile against Boost 1.53. The system is a CentOS Linux VM. The 1.66 version of boost was installed here:
/usr/share/boost/boost_1_66_0
I downloaded the 1.53 Boost and installed it via the instructions (bootstrap, b2 install). I assumed bootstrap would overrite whatever was set up for 1.66. The next time I compiled, it still built against the 1.66 version. So I renamed the old directory from
.._1_66_0to
do_not_use_boost_1_66_0. I rebuilt. It still used the 1.66 Boost!
Clearly, it was not using boost from those directories.
So, looking around, I found boost installed at:
/usr/local/include/boost
Looking at the version file, I could see it was 1_66. So I renamed that folder and created a new boost folder at the same location and copied the corresponding contents from the
boost_1_53_0\boostfolder. Running
cmake, I could see it found the 1.53 stuff:
-- Detecting CXX compiler ABI info - done -- Boost version: 1.53.0 -- Boost version: 1.53.0 -- Found the following Boost libraries: -- system
Building again, I found it is still building with the 1.66 libraries! I can only determine this when I try to run the resulting executable:
./super_provider: error while loading shared libraries: libboost_system.so.1.66.0: cannot open shared object file: No such file or directory
The same program built by another engineer who only has boost 1.53 works fine. We don't want to install Boost 1.66 on the target system.
Using
locate, I can't find any other instace of Boost on the system, and the cmake even says it's using 1.53. But it's not. I even blew away and rebuilt the build folder to make sure I wasn't getting any left over garbage.
The CMakeLists.txt even says, use Boost 1.53 only:
find_package( Boost 1.53 REQUIRED) find_package( Boost 1.53 COMPONENTS system REQUIRED)
Of course, that didn't seem to matter since it's always built with 1.66 fine. We're using gcc as our compiler.
Any idea what might be going on? A hidden cache somewhere perhaps?
TL/DR: My app keeps compiling against Boost 1.66 when it should compile against 1.53 even though I've removed all references to Boost 1.66.
- Modifying convex optimization with determinant condition
I am following the example here
The original optimization problem statement is:
# minimize p'*log(p) # subject to # sum(p) = 1 # sum(p'*a) = target1 # sum(p'*max(a-K,a^2)) = target2
I want to modify it to add the condition:
det(diag(p*a − 100) = 0
It is harder to add this condition as it is not of size of rest of constraints when creating matrix
A.
- Creating scenario model instances for multidimensional parameters in Pyomo
I am working on a stochastic optimization applying the PYSP extension of the PYOMO package. For my concrete model, I create the scenarios as recommended via a callback function. For each model instance the parameters are set according to the corresponding scenario. For one dimensional parameters it works fine. However, now I try to create the model instances for two dimensional parameters and I struggle to figure out how it works.
The parameter is defined as shown below.
model.Load_profile = Param(set_sim.tme_dat_stp, vpp.bat_stp, initialize=0, mutable=True)
The callback function should now place the data from ldprof into the parameter variable.
def pysp_instance_creation_callback(scenario_name, node_name):
    """Callback function for scenario instance creation"""
    print("Constructing instance for scenario ", scenario_name)
    instance = model.clone()
    instance.Load_profile.store_values(ldprof[scenario_name])
    return instance
ldprof is a nested dictionary containing the two dimensional data for several scenarios.
ldprof = defaultdict(dict)
ldprof[1][1] = dict(enumerate(eldem[:, 0] - pvgen[:, 0]))
ldprof[1][2] = dict(enumerate(eldem[:, 0] - pvgen[:, 1]))
ldprof[2][1] = dict(enumerate(eldem[:, 1] - pvgen[:, 0]))
ldprof[2][2] = dict(enumerate(eldem[:, 1] - pvgen[:, 1]))
Do you have an idea how I can make it work? Thank you for your advice!
Greetings Philipp
- Solving a Pyomo model with heuristic method (GA or PSO)
1) I was wondering if it is possible to integrate different heuristic solvers like GA and PSO available as python packages to solve a pyomo model.
2) Also, I want to know how to integrate a heuristic algorithm written completely by me (i.e. not available as python packages) to solve the pyomo model
Thank you.
Generics is a new feature added to the Whidbey CLR and languages. We are still learning the best way to take advantage of the feature in reusable library APIs. The guidelines below represent our current thinking in this area.
1.0 Generics
A new feature in the CLR that allows classes, structures, interfaces, and methods to be parameterized by the types of data they store and manipulate, through a set of features known collectively as generics. CLR generics will be immediately familiar to users of generics in Eiffel or Ada, or to users of C++ templates.
Without generics, programmers can store data of any type in variables of the base type object. To illustrate, let’s create a simple Stack type with two actions, “Push” and “Pop”. The Stack class stores its data in an array of object, and the Push and Pop methods use the object type to accept and return data, respectively:
public class Stack {
private object[] items = new object[100];
public void Push(object data) {...}
public object Pop() {...}
}
We can then push a value of any type, for example a Customer type, onto the stack. However, when we wanted to retrieve the value, we would need to explicitly cast the result of the Pop method, an object, into a Customer type, which is tedious to write and carries a performance penalty for run-time type checking:
Stack s = new Stack();
s.Push(new Customer());
Customer c = (Customer)s.Pop();
If we pass a value type, such as an int, to the Push method, it will automatically be boxed. Similarly, if we want to retrieve an int from the stack, we would need to explicitly unbox the object type we obtain from the Pop method:
Stack s = new Stack();
s.Push(3);
int i = (int)s.Pop();
The boxing and unboxing operations can be particularly slow when done repeatedly.
Furthermore, in our current implementation, it is not possible to enforce the kind of data placed in the stack. Indeed, we could create a stack and push a Customer type onto it. Later, we could use the same stack and try to pop data off of it and cast it into a different type:
Stack s = new Stack();
s.Push(new Customer());
Employee e = (Employee)s.Pop();
While the code above is an improper use of the Stack class we want to implement and should be a compile-time error, it is actually legal code and the compiler will not have a problem with it. At run-time, however, the application will fail because we have performed an invalid cast operation.
Generics provide a facility for creating high-performance data structures that are specialized by the compiler and/or execution engine based on the types that they use. These so-called generic type declarations are created so that their internal algorithms remain the same, but so that the types of their external interface and internal data can vary based on user preference.
In order to minimize the learning curve for developers, generics are used in much the same way as C++ templates. Programmers can create classes and structures just as they normally have, and by using the angle bracket notation (< and >) they can specify type parameters. When the generic class declaration is used, each type parameter must be replaced by a type argument that the user of the class supplies.
In the example below, we create a Stack generic class declaration where we specify a type parameter, called ItemType, declared in angle brackets after the declaration. Rather than forcing conversions to and from object, instances of the generic Stack class will accept the type for which they are created and store data of that type without conversion. The type parameter ItemType acts as a placeholder until an actual type is specified at use. Note that ItemType is used as the element type for the internal items array, the type for the parameter to the Push method, and the return type for the Pop method:
public class Stack<ItemType> {
private ItemType[] items;
public void Push(ItemType data) {...}
public ItemType Pop() {...}
}
When we use the generic class declaration Stack, as in the short example below, we can specify the actual type to be used by the generic class. In this case, we instruct the Stack to use an int type by specifying it as a type argument using the angle notation after the name:
Stack<int> stack = new Stack<int>();
stack.Push(3);
int x = stack.Pop();
In so doing, we have created a new constructed type, Stack<int>, for which every ItemType inside the declaration of Stack is replaced with the supplied type argument int. Indeed, when we create our new instance of Stack<int>, the native storage of the items array is now an int[] rather than object[], providing substantial storage efficiency. Additionally, we have eliminated the boxing penalty associated with pushing an int onto the stack. Further, when we pop an item off the stack, we no longer need to explicitly cast it to the appropriate type because this particular kind of Stack class natively stores an int in its data structure.
If we wanted to store items other than an int into a Stack, we would have to create a different constructed type from Stack, specifying a new type argument. Suppose we had a simple Customer type and we wanted to use a Stack to store it. To do so, we simply use the Customer class as the type argument to Stack and easily reuse our code:
Stack<Customer> stack = new Stack<Customer>();
stack.Push(new Customer());
Customer c = stack.Pop();
Of course, once we’ve created a Stack with a Customer type as its type argument, we are now limited to storing only Customer objects (or objects of a class derived from Customer). Generics provide strong typing, meaning we can no longer improperly store an integer into the stack, like so:
Stack<Customer> stack = new Stack<Customer>();
stack.Push(new Customer());
stack.Push(3); // compile-time error
Customer c = stack.Pop(); // no cast required
1.1 Generics Related Terminology
1.2 General Generics Usage Guidelines
Consider using Generics in public APIs. Generics can significantly improve cleanness of managed APIs. They allow for strong typing, which is great for usability and performance (by avoiding boxing of value types).
Tradeoff: APIs using some advanced features of Generics may be too difficult to use for some developers. The concept of Generics is not widely understood, in some cases the syntax may pose problems, and as any large new feature, Generics may pose a significant learning curve for some entry-level developers. Section 1.2.1 describes the details of usability issues related to Generics.
Do familiarize yourself with the performance impact of Generics before exposing or calling any Generic APIs. Generics can significantly impact the performance of APIs (positively or negatively). They are significantly different from C++ templates in regards to performance. Section 1.2.2 has details on the subject.
1.2.1 Generics and Usability
One of the main reasons to use Generics in reusable APIs is to improve usability by making the APIs more strongly typed. Hopefully it is obvious that strongly typed APIs are much easier to use. The benefit is especially visible when using source code editors supporting statement completion.
Unfortunately, Generics can also have a negative impact on usability. As any large new feature, Generics may pose a significant learning curve for some entry-level developers. The syntax may pose problems, especially in some more advanced cases. And finally, advanced usage of generics may be conceptually difficult for some developers. This section will describe some of the usability issues related to Generics.
Note: This chapter is based on preliminary usability studies conducted with a limited set of Generic APIs. More research needs to be done to assess the full impact of Generics on usability. We continue working on getting such studies conducted.
Generic types may be difficult to instantiate because:
Users may not be familiar or proficient with the syntax of Generics. This is particularly true when users have to instantiate types with a type argument that is in itself a generic type.
List<Nullable<int>> list = new List<Nullable<int>>();
Users may have problems understanding what the Generic parameter represents. This is usually not a problem for simple collections, where it’s rather obvious that the parameter represents the type of the items stored in the collection. It is more problematic in cases where the type parameter relationship to the Generic type is not obvious.
DataSession<Presence> session = new DataSession<Presence>(…);
Users may have problems finding types that satisfy the constraints.
Generic methods may be difficult to use because:
Users may have problems understanding what the Generic parameter represents.
Users may not be familiar or proficient with the syntax calling Generic methods. They often confuse the type parameters with the regular method parameters.
Usability Guidelines
Do freely use Generics when returning instances of the Generic types from object models. In such cases, the user does not have to instantiate the Generic type and for the most part the user does not have to be even aware that the returned object is an instance of a Generic type.
public class Directory {
public Directory(string root) { … }
public IEnumerable<FileInfo> Files { get { … } }
}
…
Directory croot = new Directory(@"c:\");
foreach(FileInfo file in croot.Files){
Console.WriteLine(file.Name);
}
Do use Generic collections. The concept of a Generic collection is relatively easy to understand. List<String> is only syntactically, but not conceptually, different from StringCollection. See section for details.
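For illustration only (a minimal sketch; StringCollection lives in System.Collections.Specialized and List<T> in System.Collections.Generic), the two read almost identically at the call site:
StringCollection oldStyle = new StringCollection();
oldStyle.Add("Hello");
string first = oldStyle[0];
List<string> newStyle = new List<string>();
newStyle.Add("Hello");
string second = newStyle[0];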
Avoid Generic types with more than 2 type parameters in high-level APIs. Users have difficulty understanding what type parameters represent in types with long type parameter lists.
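As a purely hypothetical illustration (Cache and LruPolicy are invented names, not framework types), a third type parameter already makes it hard to tell what each argument stands for:
public class Cache<K,V,P> { }
public class LruPolicy { }
…
// At the call site it is not obvious what K, V, and P stand for.
Cache<string, Customer, LruPolicy> cache = new Cache<string, Customer, LruPolicy>();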
Avoid high-level APIs that require users to instantiate a generic type with another generic type as the type argument. The syntax gets too complex for some users.
Foo<Bar<int>> foo = new Foo<Bar<int>>();
Requiring that users use triple, or higher, nesting is unacceptable in high-level APIs.
Foo<Bar<Nullable<int>>> foo = new Foo<Bar<Nullable<int>>>();
Avoid Generic methods that don’t support type parameter inference in high-level APIs. Such methods are often very difficult to call because of the difficulty in understanding the purpose of the type parameter and because of the somewhat difficult syntax.
Methods with a formal parameter typed as the generic method type parameter support inference. Methods with no formal parameter typed as the generic method type parameter don’t support inference.
public static class RandomHelper {
public static T ChooseOne<T>(params T[] set) {
return set[rand.Next(set.Length)];
}
static Random rand = new Random();
}
public class Assembly {
public T[] GetAttributes<T>() where T:Attribute { … }
}
The RandomHelper.ChooseOne method can be called without explicitly specifying a type argument. The type argument is inferred from the arguments passed to the method. The Assembly.GetAttributes does not support inference and the type parameter needs to be specified explicitly.
int i = RandomHelper.ChooseOne(5, 213); // Calls ChooseOne<int>
string s = RandomHelper.ChooseOne("foo", "bar"); // Calls ChooseOne<string>
AssemblyVersionAttribute[] versions = assembly.GetAttributes<AssemblyVersionAttribute>();
Do freely use Generic methods that support type parameter inference. Such methods hit the sweet spot of Generics. They support a very convenient syntax.
public static class Reference {
public static void Swap<T>(ref T ref1, ref T ref2){
T temp = ref1;
ref1 = ref2;
ref2 = temp;
}
}
// Usage
string s1 = "World";
string s2 = "Hello";
// notice that the type argument does not need to be specified
Reference.Swap(ref s1,ref s2);
Do not have static members on Generic types in high-level APIs.
Some users may not understand the difference between a generic method and a static method on a generic type. They both require the user to pass type arguments, but have slightly different syntax.
Foo<int>.GetValues(); // static member on a Generic type
Foo.GetValues<int>(); // Generic method
1.2.2 Generics and Performance
Do consider the performance ramifications of generics. Specific recommendations arise from these considerations and are described in guidelines that follow.
Execution Time Considerations
Generic collections over value types (e.g. List<int>) tend to be faster than equivalent collections of Object (e.g. ArrayList) because they avoid boxing items.
Generic collections over all types also tend to be faster because they do not incur a checked cast to obtain items from the collection.
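For example, a minimal sketch reusing the Customer type from earlier:
ArrayList untyped = new ArrayList();
untyped.Add(new Customer());
Customer c1 = (Customer)untyped[0];   // run-time checked cast
List<Customer> typed = new List<Customer>();
typed.Add(new Customer());
Customer c2 = typed[0];               // no cast and no run-time check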
The static fields of a generic type are replicated, unshared, for each constructed type. The class constructor of a generic type is called for each constructed type. For example,
public class Counted<T> {
public static int count;
public T t;
static Counted() {
count = 0;
}
Counted(T t) {
this.t = t;
++count;
}
}
Each constructed type Counted<int>, Counted<string>, etc. has its own copy of the static field, and the static class constructor is called once for each constructed type. These static member costs can quietly add up. Also, accesses to static fields of generic types may be slower than accesses to static fields of ordinary types.
Generic methods, being generic, do not enjoy certain JIT compiler optimizations, but this is of little concern for all but the most performance critical code. For example, the optimization that a cast from a derived type to a base type need not be checked is not applied when one of the types is a generic parameter type.
Code Size Considerations
The CLR shares IL, metadata, and some JIT’d/NGEN’d native code across types/methods constructed from generic types/methods. Thus the space cost of each constructed type is modest, less than that of an empty conventional non-generic type. But see also ‘current limitations’ below.
When a generic type references other generic types, then each of its constructed types constructs its transitively referenced generic types. For example, List<T> references IEnumerable<T>, so use of List<string> also incurs the modest cost of constructing type IEnumerable<string>.
In summary, from the performance perspective, generics are a sometimes-efficient facility that should be applied with great care and in moderation.
When you employ a new constructed type formed from a pre-existing generic type, with a reference type parameter, the performance costs are modest but not zero.
The stakes are much higher when you introduce new generic types and methods for use internal or external to your assembly. If used with value type parameters, each method you define can be quietly replicated dozens of times (when used across dozens of constructed types).
Do use the pre-defined System.Collections.Generic types over reference types. Since the cost of each constructed type AGenericCollection<ReferenceType> is modest, it is appropriate to employ such types in preference to defining new strongly-typed-wrapper collection subtypes. Any of the following are appropriate:
public class Sample {
public void Method(IEnumerable<MyRefType> param);
public void Method(ICollection<MyRefType> param);
public void Method(IList<MyRefType> param);
public void Method(CustomCollection param);
public IEnumerable<MyRefType> Property1 { get; }
public ICollection<MyRefType> Property2 { get; }
public IList<MyRefType> Property3 { get; }
public Collection<MyRefType> Property4 { get; }
public KeyedCollection<MyRefType> Property5 { get; }
public CustomCollection Property6 { get; }
}
public class CustomCollection : Collection<MyRefType> {}
public class CustomCollection2 : KeyedCollection<MyRefType> {}
public class CustomCollection3 : IList<MyRefType> {}
public class CustomCollection4 : ICollection<MyRefType> {}
Do use the pre-defined System.Collections.Generic types over value types in preference to defining a new custom collection type. As was the case with C++ templates, there is currently no sharing of the compiled methods of such constructed types – e.g. no sharing of the native code transitively compiled for the methods of List<MyStruct> and List<YourStruct>. Only construct these types when you are certain the savings in dynamic heap allocations of avoiding boxing will pay for the replicated code space costs.
Do use Nullable<T> and EventHandler<T> even over value types. We will work to make these two important generic types as efficient as possible.
Do not introduce new generic types and methods without fully understanding, measuring, and documenting the performance ramifications of their expected use.
1.2.3 Common Patterns
The following section describes some common patterns where Generics can help in providing cleaner APIs.
Do use generics to provide strongly typed collection APIs. See section 1.4.2 for details.
Consider using generics whenever an API returns Object (or any other type that is not as strong typed as you would wish).
Without Generics
public class WeakReference {
public WeakReference(object target){ … }
public object Target { get { … } }
}
…
string s = "Foo";
WeakReference r = new WeakReference(s);
s = (string)r.Target;
With Generics
public class WeakReference<T> {
public WeakReference(T target) { … }
public T Target { get { … } }
}
…
string s = "Foo";
WeakReference<string> r = new WeakReference<string>(s);
s = r.Target;
Consider using generics whenever an API causes boxing.
Without Generics
public class ArrayList {
public void Add(object value) { … } // this will box any value type
}
…
ArrayList list = new ArrayList();
list.Add(1); // Int32 is a value type and it will get boxed here.
With Generics
public class List<T> {
public void Add(T value) { … }
}
…
List<int> list = new List<int>();
list.Add(1); // this one does not box
Consider using generics instead of manual code expansion of patterns that vary only on type. For example, strongly typed collections only differ on the type of the items being stored in the collections. Generics can be used to provide a single type supporting such variations instead of many, often inconsistent variations like StringCollection, TokenCollection, AttributeCollection, etc.
public class List<T> : IList<T> {
public T this[int index] { get; set; }
public void Add(T);
public IEnumerator<T> GetEnumerator();
…
}
Similarly, today EventHandler needs to be inherited just to change the type of EventArgs. With the use of generics, we can define a generic event handler:
public delegate void EventHandler<T>(object sender, T args) where T:EventArgs;
Consider using generics whenever an API takes a parameter of type System.Type. For example, the ObjectSpace.GetObject method below is currently defined as follows:
class ObjectSpace {
object GetObject(Type type, String queryString, String span);
}
It could be implemented as a generic method, which would provide cleaner syntax and compile time type safety.
class ObjectSpace {
T GetObject<T>(String queryString, String span);
}
This needs to be done carefully. There are a large number of methods that take Type parameters and should not be generic. For example, it does not make sense for Activator.CreateInstance(Type) to be implemented as a generic method. The whole point of CreateInstance is that the type parameter is late bound and not hard-coded at the call site, in which case the operator new is a better way to create the instance.
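A short sketch of the distinction (typeName and Widget are hypothetical, invented for this example):
// Late bound: the type is only known at run time, so a Type parameter is appropriate.
Type pluginType = Type.GetType(typeName);
object plugin = Activator.CreateInstance(pluginType);
// Known at compile time: the operator new is simpler than any generic CreateInstance<T> would be.
Widget widget = new Widget();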
Consider using generics whenever a type variant API requires ref parameters.
References passed to ref parameters must be the exact type of the parameter (a reference to string is not a subclass of a reference to an object). For example, it’s illegal to pass a reference to a string reference to a method that takes a reference to an object reference. This means that, for example, a general purpose Swap routine based on object references is completely impractical.
Without Generics
public sealed class Reference {
public static void Swap(ref object o1, ref object o2){
object temp = o1;
o1 = o2;
o2 = temp;
}
}
…
string s1 = "Foo";
string s2 = "Bar";
//Note this will not compile: Reference.Swap (s1, s2);
object o1 = s1;
object o2 = s2;
Reference.Swap(ref o1, ref o2);
s1 = (string)o1;
s2 = (string)o2;
Generics solve the problem, and quite elegantly so.
With Generics
public sealed class Reference {
public static void Swap<T>(ref T ref1, ref T ref2){
T temp = ref1;
ref1 = ref2;
ref2 = temp;
}
}
// Using Generics
string s1 = "Foo";
string s2 = "Bar";
Reference.Swap(ref s1, ref s2);
Consider using Generics whenever you need a method to take a parameter that implements two or more unrelated interfaces. For example, the following method takes a parameter that is a subclass of Stream and implements IEnumerable<Byte>.
void Read<T>(T enumerableStream) where T:Stream,IEnumerable<Byte> {
if(enumerableStream.CanRead){
foreach(Byte b in enumerableStream){
Console.Write(b);
}
}
}
1.3 Naming Guidelines
Do name generic type parameters with single letters. For example, prefer Dictionary<K,V> to Dictionary<Key,Value>.
Note: One of the main reasons for this guideline is that we noticed people need to be able to distinguish between type parameters and type names. For example, it’s not clear whether the return type of the following method is a generic parameter or a real type: public Value GetValue(); Using single letters for type parameters makes this clear.
Consider using T as the type parameter name for types with a single type parameter.
Consider including an indication of the constraint in the name of Generic types and method parameters.
public class DisposableReference<T> where T:IDisposable {
}
void Read<T>(T enumerableStream) where T:Stream,IEnumerable<Byte> {
}
1.4 Usage Guidelines
1.4.1 General Usage
Do not use unnecessary constraints. For example, if a Generic method calls members exposed on System.IO.Stream, the type parameter should be constrained to System.IO.Stream, not to System.IO.FileStream.
void PrintLengths<T>(IEnumerable<T> streamCollection) where T:Stream {
foreach(Stream stream in streamCollection){
Console.WriteLine(stream.Length);
}
}
Consider including the constraint type in the parameter related to the constraint. For example, if a method has a parameter ICollection<T> and T is constrained to System.IO.Stream, the parameter should be called streamCollection.
void PrintLengths<T>(IEnumerable<T> streamCollection) where T:Stream;
Avoid having “hidden” constraints. For example, you should not dynamically check the type argument and throw if it does not implement some interface.
public static class Interlock{
public static T Exchange<T>(ref T location, T value){ // no constraint
// do not do this! This is a hidden constraint
if(typeof(T).IsValueType){
throw new InvalidOperationException(…);
}
}
}
Note: The only scenario where we allow hidden constraints is when you would like to constrain the type parameter to either of several types. In such case, you should constrain the parameter to the most derived type common to all of the alternatives or do not use a constraint at all if System.Object is the only common base. For example, a generic parameter that can either accept IComparable or IComparable<T> should not have a constraint. If the hidden constraint is not satisfied, an InvalidOperationException should be thrown.
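A minimal sketch of this one allowed case (ValueComparer is an invented helper; the method accepts types implementing either IComparable<T> or IComparable and throws InvalidOperationException otherwise):
public static class ValueComparer {
public static int Compare<T>(T left, T right) { // no compile-time constraint
object boxed = left; // value types get boxed here
IComparable<T> generic = boxed as IComparable<T>;
if (generic != null) return generic.CompareTo(right);
IComparable classic = boxed as IComparable;
if (classic != null) return classic.CompareTo(right);
throw new InvalidOperationException("T must implement IComparable<T> or IComparable.");
}
}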
Do not place generic types in special generic-only assemblies just because the types are generic. It is perfectly fine to place generic types in assemblies with non-generic types. In other words, whether a type is generic or not should not affect which assembly it resides in.
Do not place generic types in special generic-only namespaces just because the types are generic. It is perfectly fine to have generic types in namespaces with non-generic types.
Annotation (Krzysztof Cwalina): We placed the generic collections in a separate namespace from the non-generic ones because we believed that users will rarely want to, or need to, import both of the namespaces at the same time. We basically see the namespaces (most types in the namespaces) as alternatives, not as parts of one feature area. This may not be true for some scenarios where the non-generic collection interfaces implemented on the generic collections are used, but these are considered relatively advanced scenarios.
1.4.2 Generic Collections
Do prefer generic collection over the non-generic collections. You can optionally provide overloads accepting the non-generic interfaces.
static class Enumerable {
static void PrintToConsole<T>(IEnumerable<T> collection);
static void PrintToConsole(IEnumerable collection); // optional
}
public class Directory {
public Directory(string root);
public IEnumerable<FileInfo> Files { get; }
}
Do type method parameters as the least specialized interface that will do the job. For example, if a method just enumerates entries in a collection, the method should take IEnumerable<T>, not IList<T> or List<T>.
static void PrintToConsole<T>(IEnumerable<T> collection){
foreach(T item in collection){
Console.WriteLine(item);
}
}
public class FileInfoCollection : ReadOnlyCollection<FileInfo> {}
public class Directory {
public Directory(string root);
public FileInfoCollection GetFiles();
}
Do use either a non-indexed collection (IEnumerable<T>, ICollection<T>) or a method returning a snapshot when providing access to collections that are potentially unstable (i.e. can change without the client modifying the collection). In general all collections representing shared resource (a collection of files in a directory) are unstable.
public class Directory {
public Directory(string root);
public IEnumerable<FileInfo> Files { get; }
public FileInfoCollection GetFiles();
}
Do not return “snapshot” collections from properties. Property getters should return “live” collections.
public class Directory {
public Directory(string root);
public IEnumerable<FileInfo> Files { get; } // returns live collection
}
Do not return “live” collections from methods. Collections returned from methods should be snapshots.
public class Directory {
public Directory(string root);
public FileInfoCollection GetFiles(); // returns snapshot collection
}
Do return Collection<T> from object models to provide standard plain vanilla collection API.
public Collection<Session> Sessions { get; }
Do return a subclass of Collection<T> from object models to provide high-level collection API.
public class ListItemCollection : Collection<ListItem> {}
public class ListBox {
public ListItemCollection Items { get; }
}
Do not return List<T> from object models. Use Collection<T> instead. List<T> is meant to be used for implementation, not in object model APIs. List<T> is optimized for performance at the cost of long term versioning. For example, if you return List<T> to the client code, you will not ever be able to receive notifications when client code modifies the collection.
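For example, because Collection<T> exposes protected virtual members such as InsertItem and RemoveItem, an object model can observe client modifications; here is a minimal sketch reusing the Session type from the earlier property example:
public class SessionCollection : Collection<Session> {
protected override void InsertItem(int index, Session item) {
base.InsertItem(index, item);
// the owning object model can react here when client code adds a session
}
protected override void RemoveItem(int index) {
base.RemoveItem(index);
// ...and here when client code removes one
}
}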
Do return ReadOnlyCollection<T> from object models to provide plain vanilla read-only collection API.
public ReadOnlyCollection<Session> Sessions { get; }
Do return a subclass of ReadOnlyCollection<T> from object models to provide high level read-only collection APIs.
public class XmlAttributeCollection : ReadOnlyCollection<XmlAttribute> {}
public class XmlNode {
public XmlAttributeCollection Attributes { get; }
}
Note: When inheriting from ReadOnlyCollection, name it XxxCollection. If you have both read-only and read/write versions of the same collection, name them ReadOnlyXxxCollection and XxxCollection, not XxxCollection and XxxFileInfoCollection.
Do use a subclass of KeyedCollection<K,T> for collections of items that have natural unique names. For example, a collection of files in a directory could be represented as a subclass of KeyedCollection<string,FileInfo> where string is the filename. Users of the collection could then index the collection using filenames.
public class FileInfoCollection : KeyedCollection<string,FileInfo> {
…
}
public class Directory {
public Directory(string root);
public FileInfoCollection GetFiles();
}
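To make the sketch above concrete, the one member a KeyedCollection<K,T> subclass must supply is GetKeyForItem; with that in place, callers can index the collection by file name ("readme.txt" below is just an illustrative name):
public class FileInfoCollection : KeyedCollection<string,FileInfo> {
protected override string GetKeyForItem(FileInfo item) {
return item.Name; // the file name is the natural key
}
}
…
FileInfoCollection files = new Directory(@"c:\").GetFiles();
FileInfo readme = files["readme.txt"];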
Do use ICollection<T> for unordered collections.
Do implement IEnumerable<T> on strongly typed non-generic collection. Consider implementing ICollection<T> or even IList<T> where it makes sense.
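A minimal sketch of a strongly typed, non-generic collection that additionally implements IEnumerable<T> (Token is a hypothetical item type; System.Collections and System.Collections.Generic are assumed to be imported):
public class TokenCollection : IEnumerable<Token>, IEnumerable {
private List<Token> items = new List<Token>();
public void Add(Token token) { items.Add(token); }
public IEnumerator<Token> GetEnumerator() { return items.GetEnumerator(); }
IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}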
1.4.3 Nullable<T>
Do use Nullable<T> to represent values that are optional. For example, database wrappers returning values from columns that allow null values or XML wrappers returning values of optional attributes.
public class Customer{
internal Customer(ISqlCommand command) { … }
internal Customer(XmlReader xml) { … }
public string Name { get; set; } // name is not optional
public Nullable<int> Age { get; set; } // age is optional
public Nullable<string> EMail { get; set; } // email is optional
}
…
// print customers with specified age
foreach(Customer customer in customerList){
if(customer.Age.HasValue){
Console.WriteLine("{0} : {1}", customer.Name, customer.Age);
}
}
Annotation [AndersH]: When you design APIs using Nullable<T>, you have to be very sensitive to the fact that a Nullable<T> is a lot less convenient to use than a T. Nullable<T> with value types makes sense and solves a real world problem--therefore, users will be amenable to the inconveniences of using it. However, Nullable<T> with reference types makes little sense and solves no real world problem. In fact, it adds nothing but confusion because, in terms of capabilities, there is no difference between a string and a Nullable<string>. With respect to having a Value Type constraint on Nullable<T>, we haven't done it because it would severely limit Nullable<T>'s use in generic scenarios. For example, imagine a Find method that returns a T or a null value when an item isn't found. In the generic world, a possible solution is to return Nullable<T>, but only if Nullable<T> works for all types.
If you are shipping pre-Longhorn you will need to provide a CLS compliant alternative. The CLS compliant alternative to the APIs is to use boxed value types. For example:
public class Customer{
public string Name { get; set; }
public object Age { get; set; } // this is actually int or null (if Age does not have a value)
}
Second alternative is to use a pair of properties. It has the disadvantage of cluttering the API, but the benefit of strong typing and avoiding boxing:
public class Customer{
public string Name { get; set; }
public int Age { get; set; } // only meaningful when HasAge is true
public bool HasAge { get; set; }
}
Avoid using types from System.Data.SqlTypes in high level APIs. You should use Nullable<T> to represent optional or “nullable” values of Value Types.
1.4.4 EventHandler<T>
Do use System.EventHandler<T> instead of System.EventHandler. Do not manually create new delegates to be used as event handlers.
internal class NotifyingContactCollection : Collection<Contact> {
public event EventHandler<ContactAddedEventArgs> ContactAdded { … }
protected override void Add(Contact contact){
base.Add(contact);
ContactAdded(this,new ContactAddedEventArgs(this.Count-1));
}
}
internal class ContactAddedEventArgs : EventArgs {
public ContactAddedEventArgs(int index){ .. }
public int Index { get ; }
}
1.4.5 Comparer<T>
Do use System.Comparer<T> instead of other means of comparing types, like System.Comparer, Object.Equals, etc.
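For example, a short sketch: Comparer<T>.Default uses the type's IComparable<T> implementation when one exists and falls back to IComparable otherwise.
List<string> names = new List<string>();
names.Add("banana");
names.Add("apple");
names.Sort(Comparer<string>.Default); // sorts using the default comparer for string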
Congratulations.
What a long post on something that is totally irrelevant for everyone (besides very few EAPs) for another year.
May I suggest you push for a faster release 🙂
Good post… though a little long. The examples you give at the start are perhaps the best I’ve seen explaining what generics are.
Thomas, thanks for the feedback. I am new to blogging and so I appreciate any comment that can help me understand what’s useful to the readers and what’s not.
… and I am pushing for a faster release 🙂 I can assure you that all of us here at MS understand that shipping sooner is a very useful feature.
You sure are competing with CBrumme when it comes to the long blogging.
I think the length and content are just fine. I would rather have a larger piece of related material than have it chunked out over ten blog entries.
A good usage guideline we use for STL collections is to make a class out of them (actually using typedef). It works out great when the collection type changes and those collections are passed as parameters.
If I understood the examples, you are advocating…
void DoSomething(IList<int> param) {…}
void DoSomething2(IList<int> param) {…}
So, you have to go through and modify each method signature if the collection changes. Also, in some ways it kind of leaks implementation details.
What I am suggesting is…
class RefreshKeyList : IList<int>
{
}
void DoSomething(RefreshKeyList param) {…}
void DoSomething2(RefreshKeyList param) {…}
Later on if the list type needs to change, there’s just one place that needs to be changed. Also, the collection name better describes what it represents.
W, thanks for the comment. You may be right re the point about aliasing, but only if you are talking about guidelines for application developers, not for reusable library developers. Application developers are rarely concerned with binary breaking changes and only mildly concerned with source breaking changes. They are the only people calling their, often internal, APIs. Changing what RefreshKeyList “aliases” (inherits from or implements in the C# example) is a potentially breaking change. Reusable library vendors (including Microsoft) have a very high bar for such changes. We basically don’t expect to ever change DoSomething(IList<int>) to anything else.
You are right that aliasing improves the name. This is taken into account in the guidelines.
But, there are drawbacks to this approach. For example, if the type represents an input (thus the guideline only recommends it for outputs), you are limiting the callers of the method to the specified type and its subtypes.
I am not sure I understand the comment about “leaking the implementation details”. Could you clarify?
Thanks for the post! This is better than the PDC Whidbey docs on generics. I’ll definitely be saving a link to it.
A few comments:
1. "Methods should always return Collections returned from methods should be snapshots." This doesn’t make sense; I think it’s likely a typo, but with a topic like this, you need to be careful to make sense. 🙂
2. Regarding suggested naming conventions, I think the one-letter option is way out of whack with the other .NET naming conventions and quickly loses its helpfulness when using multiple constraints. I have a couple other options in order of preference.
a) Enable a special character prefix (such as $, #, @, etc.) to denote a type parameter. This would be easy to type (one stroke) and would make them stand out.
b) Recommend a suffix such as Type, TypeParam, or something better (preferably short).
3. Define what you mean by "high-level API."
4. Define what you mean by "snapshot" and "live" collection.
5. Explain type parameter inference or at least reference it in the docs.
6. Don’t use "vanilla" and "plain" and "standard" to describe the same thing. Pick one adjective and stick with it and define what you mean by it.
7. Consider expanding the performance considerations to be more specific. For instance, when you say "The CLR shares IL, metadata, and some JIT’d/NGEN’d native code across types/methods constructed from generic types/methods" and "do not enjoy certain JIT compiler optimizations," it’d be nice to get a better understanding of what is shared and not shared and what is not optimized.
8. Consider moving the detailed explanations of generics into separate topics (away from the library guidelines) and just reference them from here. For those who understand this stuff, it’d make it much more readable. Or at least make the stuff collapsable or something like that.
9. Consider explaining how you’re optimizing EventHandler<T> and Nullable<T> (maybe in a separate section) to help serve as an example for generic type authors?
10. "Do not introduce new generic types and methods without fully understanding, measuring, and documenting the performance ramifications of their expected use." This is a bit severe and vague. It probably should not even be in the guidelines as it is implicit in any design that you should do these things.
11. I like the common patterns section. This should be easily found for newbies to understand the usefulness of generics.
12. I don’t really understand the differences between List<T> and Collection<T>. Please consider expanding on that topic.
13. "When inheriting from ReadOnlyCollection, name it XxxCollection. If you have both read-only and read/write versions of the same collection, name them ReadOnlyXxxCollection and XxxCollection, not XxxCollection and XxxFileInfoCollection" I don’t get why anyone would use XxxFileInfoCollection to indicate Read Only.
14. I love Nullable<T>. I agree with Anders on the use; however, I don’t get the second option for CLS compliant stuff:
public int Age { get; set; } // this is actually int or null (if Age does not have a value)
int cannot be null..?
Thanks again for the write up. Good stuff!
Does Nullable<T> store a separate flag to indicate whether something has a value or is it inferred based on whether or not the value is null?
I have an odd case where I need to differentiate between Not Supplied, and Supplied but Null. Hopefully this is possible.
J, these are all great comments. I will try to incorporate them into the main document.
As to the type parameter name, we did debate this guideline a lot. We are fully aware that in many scenarios more descriptive identifiers would be better. We finally settled on one letter identifiers as seems they have the edge in the very simple scenarios (like List<T> where the purpose of T is obvious) and these are our main concern when it comes to usability (i.e. we try to optimize for simple 80% scenario and make the rest possible). It also seems like special characters you suggested would be also “out of whack with the other .NET naming conventions”.
“High level APIs” are APIs for mainline scenarios that are usually quite specialized for vertical scenarios. For example, dialog APIs are relatively low level and let you create any dialog you need. FileOpenDialog APIs are high level, easy to use when you work in the “vertical” scenario of opening a dialog to get a file path from the user. I will make it clearer in the main document.
“Snapshot” is when let’s say you get a collection of files in a directory and the collection does not get updated as files are added or removed from the directory; you need to get another copy of the collection to get updates made to the directory. “Live” collection would be updated as the directory changes.
Type parameter inferencing works for generic methods that take a formal method parameter that references (uses) a type parameter. For example Array.Sort<T>(T[] array) takes formal parameter “array” that is of type T[], where T is a type parameter. So now, when Sort<T> is called the type argument does not need to be specified because it can be inferred from the argument (array) passed. For example the following would be legal:
string[] names = new string[] { "Foo", "Bar" };
Array.Sort(names); // compiler knows it’s actually Sort<string> because array is string[]
List<T> is intended for internal implementation, rarely or never visible in public APIs. Collection<T> is intended for reusable library designers to be exposed from their public APIs. The reasons for this are complex. I will try to write it down one day and post it.
The Nullable<T> CLS alternative is indeed not very clear. The point is that in the underlying data source the Age (or int) may be null. In the Customer APIs it of course cannot be null and the logical “nullness” is indicated by the HasAge property.
I hope this helps and thanks again for the thorough review and good comments.
Rob, Nullable<T> has two fields: T and bool (to store the nullness). We are currently not supporting tri-state Nullable<T>, i.e. you cannot store null in Nullable and say that it does have a value. The reason for this is that the envisioned purpose of Nullable<T> was to unify the Reference Type world and the Value Type world in terms of how they handle nullness, not to provide yet another way to represent nullness 🙂
What about Nullable<T> where T is a reference type?
I understand why a tristate nullable value type isn’t implemented, but it should be pretty easy to add myself.
Rob, I am not sure if I fully understand your last post, but we don't allow you to store null in Nullable<SomeReferenceType> and specify that the HasValue property be true. Basically, once you pass null to the constructor of Nullable<T>, the HasValue property gets set to false.
Ok. So as you said, Nullable exists to unify value and reference types.
I was thinking it’d be more like the Option type in SML or something.
No problem, I can always write my own.
Thanks.
Krzysztof,
Thanks for the reply and explanations. Please consider adding your explanation of terms to your document and/or referencing relevant docs (e.g., type parameter inference).
As for the special character convention, I don’t think it’d be really out of whack as a language feature. Technically, int, double, float, string, etc. would be out of whack, but they’re not because they’re part of the language, not so much part of the API. A special character to indicate a type parameter would, in my estimation, fall under that umbrella.
In any case, thanks for expanding on your reasoning there. I’m not sure I can bring myself to use T, but if it’s a guideline in the final rev, I suppose I should try. 🙂
Scratch that comment on special characters. It would hose CLS compatibility.. duh..
🙂 cool article!
regards from Pol….
Thanks for a good article.
I'm missing valuable performance information on the generic collections. MSDN only states running times in terms of O() notation for a few operations. There does not seem to be consistent information about this topic. I find this very important when trying to figure out which type to use for a certain application…
Bernd,
I think the best way to get this information in a consistent format is to file a suggestion to add it to the MSDN docs. You can file such suggestions at the MSDN Feedback Center.
Suppose we have classes such as, worker, manager, lifter, … and these derive from person class.
Now, we need to expose all these as custom collections through an Organization class.
So, do we need to create separate collection class for each of above OR declare a single collection class and all specific collection classes should inherit from it?
What is the best method to expose the single Generic type safe collection for all above base types?
Thanks,
Sanjay
Sanjay, as a rule of thumb, I would first consider Collection<T> or ReadOnlyCollection<T>, then a custom collection that inherits from them.
You can read more on this, and find all the details, in the Framework Design Guidelines book.
Am I the only one having problems printing this page? All pages except the first only contain the last line…
Is there a printer-ready version somewhere, perhaps?
You say that we should:
"Do use Nullable<T> and EventHandler<T> even over value types. We will work to make these two important generic types as efficient as possible."
Your blog post mentions how the garbage collector can be stressed if many events are being raised.
Can you explain how you can make the EventHandler<T> more efficient than having a value type. And by value type, do you mean struct, or ints etc. Wouldn’t the following always be more performant than creating objects?
// second argument isn’t an EventArgs derived type, for performance reasons, avoiding too many allocations
public delegate void MyEventHandler(object sender, int someValue);
public class MyClass
{
public event MyEventHandler MyEvent;
}
I think standards for generics are very useful and we will try to incorporate some of these into our internal standards document. However, I have one problem. It seems that a number of the suggestions are based on "some developers don't understand {blah}". I completely disagree with the statement that we should write code for the lowest common denominator.
This document describes OAuth 2.0, when to use it, how to acquire client IDs, and how to use it with the Google APIs Client Library for Python.
Contents
- OAuth 2.0 explained
- Acquiring client IDs and secrets
- The oauth2client library
- Flows
- Credentials
- Storage
- Command-line tools
OAuth 2.0 explained
OAuth 2.0 is the authorization protocol used by Google APIs. It is summarized on the Authentication page of this library's documentation, and there are other good references as well.
The protocol is solving a complex problem, so it can be difficult to understand. The following presentation explains the important concepts of the protocol, and introduces you to how the library is used at each step.
Acquiring client IDs and secrets
You can get client IDs and secrets on the API Access pane of the Google APIs Console. There are different types of client IDs, so be sure to get the correct type for your application:
- Web application client IDs
- Installed application client IDs
- Service Account client IDs
Warning: Keep your client secret private. If someone obtains your client secret, they could use it to consume your quota, incur charges against your Google APIs Console project, and request access to user data.
The oauth2client library
The oauth2client library is included with the Google APIs Client Library for Python. It handles all steps of the OAuth 2.0 protocol required for making API calls. It is available as a separate download if you only need an OAuth 2.0 library. The sections below describe important modules, classes, and functions of this library.
Flows
The purpose of a Flow class is to acquire credentials that authorize your application access to user data. In order for a user to grant access, the OAuth 2.0 steps require your application to potentially redirect their browser multiple times. A Flow object has functions that help your application take these steps and acquire credentials. Flow objects are only temporary and can be discarded once they have produced credentials, but they can also be pickled and stored. This section describes the various methods to create and use Flow objects.
Note: See the Using Google App Engine and Using Django pages for platform-specific Flows.
flow_from_clientsecrets()
The oauth2client.client.flow_from_clientsecrets() method creates a Flow object from a client_secrets.json file. This JSON formatted file stores your client ID, client secret, and other OAuth 2.0 parameters. The following shows how you can use flow_from_clientsecrets() to create a Flow object:
from oauth2client.client import flow_from_clientsecrets ... flow = flow_from_clientsecrets('path_to_directory/client_secrets.json', scope='', redirect_uri='')
OAuth2WebServerFlow
Despite its name, the oauth2client.client.OAuth2WebServerFlow class is used for both installed and web applications. It is created by passing the client ID, client secret, and scope to its constructor. You also provide the constructor with a redirect_uri parameter; this must be a URI handled by your application:
from oauth2client.client import OAuth2WebServerFlow ... flow = OAuth2WebServerFlow(client_id='your_client_id', client_secret='your_client_secret', scope='', redirect_uri='')
step1_get_authorize_url()
The step1_get_authorize_url() function of the Flow class is used to generate the authorization server URI. Once you have the authorization server URI, redirect the user to it. The following is an example call to this function:
auth_uri = flow.step1_get_authorize_url()
# Redirect the user to auth_uri on your platform.
If the user has previously granted your application access, the authorization server immediately redirects again to redirect_uri. If the user has not yet granted access, the authorization server asks them to grant your application access. If they grant access, they get redirected to redirect_uri with a code query string parameter similar to the following:
If they deny access, they get redirected to redirect_uri with an error query string parameter similar to the following:
step2_exchange()
The step2_exchange() function of the Flow class exchanges an authorization code for a Credentials object. Pass the code provided by the authorization server redirection to this function:
credentials = flow.step2_exchange(code)
Credentials
A Credentials object holds refresh and access tokens that authorize access to a single user's data. These objects are applied to httplib2.Http objects to authorize access. They only need to be applied once and can be stored. This section describes the various methods to create and use Credentials objects.
Note: See the Using Google App Engine and Using Django pages for platform-specific Credentials.
OAuth2Credentials
The oauth2client.client.OAuth2Credentials class holds OAuth 2.0 credentials that authorize access to a user's data. Normally, you do not create this object by calling its constructor. A Flow object can create one for you.
SignedJwtAssertionCredentials
The oauth2client.client.SignedJwtAssertionCredentials class is only used with OAuth 2.0 Service Accounts. No end user is involved for these server-to-server API calls, so you can create this object directly without using a Flow object.
AccessTokenCredentials
The oauth2client.client.AccessTokenCredentials class is used when you have already obtained an access token by some other means. You can create this object directly without using a Flow object.
authorize()
Use the authorize() function of the Credentials class to apply the necessary credential headers to all requests made by an httplib2.Http instance:
import httplib2 ... http = httplib2.Http() http = credentials.authorize(http)
Once an httplib2.Http object has been authorized, it is typically passed to the build function:
from apiclient.discovery import build ... service = build('calendar', 'v3', http=http)
Storage
An oauth2client.client.Storage object stores and retrieves Credentials objects. This section describes the various methods to create and use Storage objects.
Note: See the Using Google App Engine and Using Django pages for platform-specific Storage.
file.Storage
The oauth2client.file.Storage class stores and retrieves a single Credentials object. The class supports locking such that multiple processes and threads can operate on a single store. The following shows how to open a file, save Credentials to it, and retrieve those credentials:
from oauth2client.file import Storage ... storage = Storage('a_credentials_file') storage.put(credentials) ... credentials = storage.get()
multistore_file
The oauth2client.multistore_file module allows multiple credentials to be stored. The credentials are keyed off of:
- client ID
- user agent
- scope
keyring_storage
The oauth2client.keyring_storage module allows a single Credentials object to be stored in a password manager if one is available. The credentials are keyed off of:
- Name of the client application
- User name
from oauth2client.keyring_storage import Storage ... storage = Storage('application name', 'user name') storage.put(credentials) ... credentials = storage.get()
Command-line tools
The oauth2client.tools.run_flow() function can be used by command-line applications to acquire credentials. It takes a Flow argument and attempts to open an authorization server page in the user's default web browser. The server asks the user to grant your application access to the user's data. If the user grants access, the run_flow() function returns new credentials. The new credentials are also stored in the Storage argument, which updates the file associated with the Storage object.
The oauth2client.tools.run_flow() function is controlled by command-line flags, and the Python standard library argparse module must be initialized at the start of your program. Argparse is included in Python 2.7+, and is available as a separate package for older versions. The following shows an example of how to use this function:
import argparse from oauth2client import tools parser = argparse.ArgumentParser(parents=[tools.argparser]) flags = parser.parse_args() ... credentials = tools.run_flow(flow, storage, flags)
The solution to a strong reference cycle is to break the cycle.
You could manually break the cycles by looping over each asset and setting its owner to nil immediately after you set bob to nil, but that is tedious and error prone.
Instead, Swift provides a keyword to get the same effect automatically.
Modify Asset to make the owner property a weak reference instead of a strong reference.
Listing 24.7 Making the owner property a weak reference (Asset.swift)
import Foundation class Asset: CustomStringConvertible { let name: String let value: Double weak var owner: Person? ... }
A weak reference is a reference that does not increase the reference count of the instance it refers to. In ...
There is actually a (subtle) difference between the two. Imagine you have the following code in File1.cs:
// File1.cs
using System;
namespace Outer.Inner {
    class Foo {
        static void Bar() {
            double d = Math.PI;
        }
    }
}
Now imagine that someone adds another file (File2.cs) to the project that looks like this:
// File2.cs
namespace Outer {
    class Math {
    }
}
The compiler searches Outer before looking at those using statements outside the namespace, so it finds Outer.Math instead of System.Math. Unfortunately (or perhaps fortunately?), Outer.Math has no PI member, so File1 is now broken.
This changes if you put the using inside your namespace declaration, as follows:
// File1b.cs
namespace Outer.Inner {
    using System;
    class Foo {
        static void Bar() {
            double d = Math.PI;
        }
    }
}
Now the compiler searches System before searching Outer, finds System.Math, and all is well.
Some would argue that Math might be a bad name for a user-defined class, since there's already one in System; the point here is just that there is a difference, and it affects the maintainability of your code.
It's also interesting to note what happens if Foo is in namespace Outer, rather than Outer.Inner. In that case, adding Outer.Math in File2 breaks File1 regardless of where the using goes. This implies that the compiler searches the innermost enclosing namespace before it looks at any using statements.
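Here is a minimal sketch of that last case, assuming the same File2.cs as above; it is not from the original answer, just an illustration:
// File1c.cs
namespace Outer {
    using System;
    class Foo {
        static void Bar() {
            // The enclosing namespace Outer is searched before the using directive,
            // so Math resolves to Outer.Math, which has no PI member: compile error.
            double d = Math.PI;
        }
    }
}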
What is Hadoop Cluster? Learn to Build a Cluster in Hadoop
In this blog, we will get familiar with Hadoop cluster the heart of Hadoop framework. First, we will talk about what is a Hadoop cluster? Then look at the basic architecture and protocols it uses for communication. And at last, we will discuss what are the various benefits that Hadoop cluster provide.
So, let us begin our journey of Hadoop Cluster.
1. What is Hadoop Cluster?
A Hadoop cluster is nothing but a group of computers connected together via LAN. We use it for storing and processing large data sets. Hadoop clusters consist of a number of commodity machines connected together, which communicate with a high-end machine that acts as the master. These master and slave nodes implement distributed computing over distributed data storage. The cluster runs open source software to provide this distributed functionality.
2. What is the Basic Architecture of Hadoop Cluster?
Hadoop cluster has master-slave architecture.
i. Master in Hadoop Cluster
It is a machine with a good configuration of memory and CPU. There are two daemons running on the master and they are NameNode and Resource Manager.
a. Functions of NameNode
- Manages file system namespace
- Regulates access to files by clients
- Stores metadata of the actual data. For example – file path, number of blocks, block id, the location of blocks etc.
- Executes file system namespace operations like opening, closing, renaming files and directories
The NameNode stores the metadata in the memory for fast retrieval. Hence we should configure it on a high-end machine.
b. Functions of Resource Manager
- It arbitrates resources among competing nodes
- Keeps track of live and dead nodes
You must learn about the Distributed Cache in Hadoop
ii. Slaves in the Hadoop Cluster
It is a machine with a normal configuration. There are two daemons running on Slave machines and they are – DataNode and Node Manager
a. Functions of DataNode
- It stores the business data
- It does read, write and data processing operations
- Upon instruction from a master, it does creation, deletion, and replication of data blocks.
b. Functions of NodeManager
- It runs services on the node to check its health and reports the same to ResourceManager.
We can easily scale Hadoop cluster by adding more nodes to it. Hence we call it a linearly scaled cluster. Each node added increases the throughput of the cluster.
Client nodes in Hadoop cluster – We install Hadoop and configure it on client nodes.
c. Functions of the client node
- To load the data on the Hadoop cluster.
- Tells how to process the data by submitting MapReduce job.
- Collects the output from a specified location.
3. Single Node Cluster VS Multi-Node Cluster
As the name suggests, a single node cluster gets deployed on a single machine, while a multi-node cluster gets deployed on several machines.
In a single-node Hadoop cluster, all the daemons like NameNode and DataNode run on the same machine, and all the processes run in one JVM instance. The user need not make any configuration settings; the Hadoop user only needs to set the JAVA_HOME variable. The default replication factor for a single-node Hadoop cluster is one.
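As an illustrative sketch only (the JDK path and values below are assumptions; the file names are standard Hadoop conventions), a single-node setup typically just needs JAVA_HOME set and, optionally, the replication factor pinned to one:

# etc/hadoop/hadoop-env.sh -- JDK location (path is an assumption)
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk

# etc/hadoop/hdfs-site.xml -- replication factor of 1 for a single-node cluster
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>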
In a multi-node Hadoop cluster, the daemons run on separate hosts or machines. A multi-node Hadoop cluster follows master-slave architecture: the NameNode daemon runs on the master machine and the DataNode daemon runs on the slave machines. The slave daemons like DataNode and NodeManager run on cheap machines, while the master daemons like NameNode and ResourceManager run on powerful servers. In a multi-node Hadoop cluster, slave machines can be present in any location irrespective of the physical location of the master server.
4. Communication Protocols Used in Hadoop Clusters
The HDFS communication protocol works on the top of TCP/IP protocol. The client establishes a connection with NameNode using configurable TCP port. Hadoop cluster establishes the connection to the client using client protocol. DataNode talks to NameNode using the DataNode Protocol. A Remote Procedure Call (RPC) abstraction wraps both Client protocol and DataNode protocol. NameNode does not initiate any RPC instead it responds to RPC from the DataNode.
Don’t forget to check schedulers in Hadoop
5. How to Build a Cluster in Hadoop
Building a Hadoop cluster is a non-trivial job. Ultimately the performance of our system will depend upon how we have configured our cluster. In this section, we will discuss various parameters one should take into consideration while setting up a Hadoop cluster.
For choosing the right hardware one must consider the following points
- Understand the kind of workloads the cluster will be dealing with, the volume of data the cluster needs to handle, and the kind of processing required, like CPU bound, I/O bound etc.
- Data storage methodology like data compression technique used if any.
- Data retention policy like how frequently we need to flush.
Sizing the Hadoop Cluster
For determining the size of Hadoop clusters we need to look at how much data is in hand. We should also examine the daily data generation. Based on these factors we can decide the requirements of a number of machines and their configuration. There should be a balance between performance and cost of the hardware approved.
Configuring Hadoop Cluster
For deciding the configuration of a Hadoop cluster, run typical Hadoop jobs on the default configuration to get a baseline. We can analyze the job history log files to check if a job takes more time than expected; if so, change the configuration. Repeat the same process to fine-tune the Hadoop cluster configuration so that it meets the business requirements. Performance of the cluster greatly depends upon the resources allocated to the daemons: for small to medium data volumes the cluster allocates one CPU core on each DataNode to the HDFS daemons, and for large data sets it allocates two CPU cores.
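For instance, a first tuning pass often just declares how much memory and how many cores each slave node may hand to its containers. The property names below are standard YARN settings, but the values are illustrative assumptions only:

<!-- yarn-site.xml (illustrative values only) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>   <!-- RAM the NodeManager may allocate to containers -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>4</value>      <!-- CPU cores the NodeManager may allocate -->
</property>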
6. Hadoop Cluster Management
When you deploy your Hadoop cluster in production it is apparent that it would scale along all dimensions. They are volume, velocity, and variety. Various features that it should have to become production-ready are – robust, round the clock availability, performance and manageability. Hadoop cluster management is the main aspect of your big data initiative.
A good cluster management tool should have the following features:-
- It should provide diverse work-load management, security, resource provisioning, performance optimization, health monitoring. Also, it needs to provide policy management, job scheduling, back up and recovery across one or more nodes.
- Implement NameNode high availability with load balancing, auto-failover, and hot standbys
- Enabling policy-based controls that prevent any application from gulping more resources than others.
- Managing the deployment of any layers of software over Hadoop clusters by performing regression testing. This is to make sure that any jobs or data won’t crash or encounter any bottlenecks in daily operations.
7. Benefits of Hadoop Clusters
Here is a list of benefits provided by Clusters in Hadoop –
- Robustness
- Data disks failures, heartbeats and re-replication
- Cluster Rebalancing
- Data integrity
- Metadata disk failure
- Snapshot
i. Robustness
The main objective of Hadoop is to store data reliably even in the event of failures. Various kinds of failures are NameNode failure, DataNode failure, and network partition. A DataNode periodically sends a heartbeat signal to the NameNode. In a network partition, a set of DataNodes gets disconnected from the NameNode, so the NameNode does not receive any heartbeat from them. It marks these DataNodes as dead and does not forward any I/O requests to them. The replication factor of the blocks stored in these DataNodes falls below the specified value, so the NameNode initiates replication of these blocks. In this way, the cluster recovers from the failure.
ii. Data Disks Failure, Heartbeats, and Re-replication
NameNode receives a heartbeat from each DataNode. NameNode may fail to receive heartbeat because of certain reasons like network partition. In this case, it marks these nodes as dead. This decreases the replication factor of the data present in the dead nodes. Hence NameNode initiates replication for these blocks thereby making the cluster fault tolerant.
iii. Cluster Rebalancing
The HDFS architecture automatically does cluster rebalancing. Suppose the free space in a DataNode falls below a threshold level. Then it automatically moves some data to another DataNode where enough space is available.
iv. Data Integrity
Hadoop cluster implements checksum on each block of the file. It does so to see if there is any corruption due to buggy software, faults in storage device etc. If it finds the block corrupted it seeks it from another DataNode that has a replica of the block.
v. Metadata Disk Failure
FSImage and EditLog are the central data structures of HDFS. Corruption of these files can stop the functioning of HDFS. For this reason, we can configure the NameNode to maintain multiple copies of FSImage and EditLog. Updating multiple copies of FSImage and EditLog can degrade the performance of namespace operations, but that is acceptable as Hadoop deals more with data-intensive applications than with metadata-intensive operations.
vi. Snapshot
Snapshot is nothing but storing a copy of data at a particular instance of time. One of the usages of the snapshot is to rollback a failed HDFS instance to a good point in time. We can take Snapshots of the sub-tree of the file system or entire file system. Some of the uses of snapshots are disaster recovery, data backup, and protection against user error. We can take snapshots of any directory. Only the particular directory should be set as Snapshottable. The administrators can set any directory as snapshottable. We cannot rename or delete a snapshottable directory if there are snapshots in it. After removing all the snapshots from the directory, we can rename or delete it.
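The standard HDFS command line exposes this directly; the following is a sketch with placeholder directory and snapshot names:

# Mark a directory as snapshottable (administrators only)
hdfs dfsadmin -allowSnapshot /data/projects

# Create, and later delete, a snapshot of that directory
hdfs dfs -createSnapshot /data/projects backup-before-upgrade
hdfs dfs -deleteSnapshot /data/projects backup-before-upgrade

# The directory can be renamed, deleted, or un-marked only after
# all of its snapshots have been removed
hdfs dfsadmin -disallowSnapshot /data/projects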
8. Summary
There are several options to manage a Hadoop cluster. One of them is Ambari, which Hortonworks and many other players promote; we can manage more than one Hadoop cluster at a time using Ambari. Cloudera Manager is another tool for Hadoop cluster management. It permits us to deploy and operate the complete Hadoop stack very easily and provides many features like performance and health monitoring of the cluster. Hope this helped. Share your feedback through comments.
You must explore Top Hadoop Interview Questions
|
https://data-flair.training/blogs/what-is-hadoop-cluster/
|
CC-MAIN-2019-13
|
refinedweb
| 1,714
| 57.98
|
Device and Network Interfaces
- Solaris virtual console interface
#include <sys/kd.h>
#include <sys/vt.h>
The virtual console device driver — also known as virtual terminal (VT) — is a layer of management functions that provides characteristics associated with the active console.
You manage VT's by intercepting keyboard sequences (“hot key”). To maintain consistency with Xserver, the virtual console device driver supports the Ctrl, Alt, F# and ARROW keys.
Under text mode, the sequence Alt + F# switches directly to the corresponding virtual console, and Alt + an arrow key moves to the next or previous console in a circular fashion. The sequence Alt + ^ (where ^ represents the up directional arrow) is for the last used console.
Under graphics mode like Xorg, the sequence Ctrl + Alt + F# should be used in place of Alt + F#. The sequence Alt + <arrow> for VT switching does not work under Xorg, because that hotkey is defined as virtual workspace switching.
Virtual console switching can be done automatically (VT_AUTO) on receipt of a hot-key, or under control of the process owning the VT (VT_PROCESS), in which case the virtual console manager waits for a VT_RELDISP ioctl from the process. If the process refuses to release the device (meaning the switch does not occur), it performs a VT_RELDISP ioctl with an argument of 0 (zero). If the process desires to release the device, it saves the device state (keyboard, display, and I/O registers) and then performs a VT_RELDISP ioctl with an argument of VT_ACKACQ to complete the switching protocol.
The modify-operations ioctls (VT_SETMODE, VT_RELDISP, VT_WAITACTIVE, KDSETMODE) check if the VT is the controlling tty of the calling process. If not, the sys_devices privilege is enforced. VT_ACTIVATE requires the sys_devices privilege. Note that there is no controlling tty and privilege check for query/view operations.
The following ioctls apply to devices that support virtual consoles:
Obtains the text/graphics mode associated with the VT.
#define KD_TEXT     0
#define KD_GRAPHICS 1
Sets the text/graphics mode to the VT.
KD_TEXT indicates that console text is displayed on the screen. Normally KD_TEXT is combined with VT_AUTO mode for text console terminals, so that the console text display automatically is saved and restored on the hot key screen switches.
KD_GRAPHICS indicates that the user/application (usually Xserver) has direct control of the display for this VT in graphics mode. Normally KD_GRAPHICS is combined with VT_PROCESS mode for this VT. When the mode of the active VT is changed from KD_TEXT to KD_GRAPHICS, or a VT of KD_GRAPHICS mode is made active from a previous active VT of KD_TEXT mode, the virtual console manager initiates a KDSETMODE ioctl with KD_GRAPHICS as the argument to the underlying console frame buffer device, indicating that the current display is running in graphics mode.
When the mode of the active VT is changed from KD_GRAPHICS to KD_TEXT, or a VT of KD_TEXT mode is activated from a previous active VT of KD_GRAPHICS mode, the virtual console manager initiates a KDSETMODE ioctl with KD_TEXT as the argument to the underlying console frame buffer device, indicating that the current display is running in console text mode.
Makes the VT specified in the argument the active VT (in the same manner as if a hotkey initiated the switch). If the specified VT is not open or does not exist, the call fails and errno is set to ENXIO.
Queries to determine if VT functionality is available on the system. The argument is a pointer to an integer. If VT functionality is available, the integer is 1, otherwise it is 0.
Determines the VT's current mode, either VT_AUTO or VT_PROCESS. The argument is the address of the following structure, as defined in <sys/vt.h>
struct vt_mode {
    char  mode;    /* VT mode */
    char  waitv;   /* not used */
    short relsig;  /* signal to use for release request */
    short acqsig;  /* signal to use for display acquired */
    short frsig;   /* not used */
};

/* Virtual console modes */
#define VT_AUTO    0   /* automatic VT switching */
#define VT_PROCESS 1   /* process controls switching */

The structure is filled in with the current value for each field.
Returns the target of /dev/vt/console_user. The argument is an address of an int variable. The number of the VT device which /dev/vt/console_user points to is returned. If /dev/vt/console_user points to /dev/console, then 0 is returned.
Obtains the active VT number and a list of open VTs. The argument is an address to the following structure:
struct vt_stat {
    unsigned short v_active,  /* number of the active VT */
                   v_signal,  /* not used */
                   v_state;   /* count of open VTs. For every 1 in this
                                 field, there is an open VT */
};
With VT_GETSTATE, the VT manager first gets the number of the active VT, then determines the number of open VTs in the system and sets a 1 for each open VT in v_state. Next, the VT manager transfers the information into the vt_stat structure passed by the user process.
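A minimal C sketch of such a query follows; the device path and error handling are assumptions, while the ioctl and structure come from the description above:

#include <sys/kd.h>
#include <sys/vt.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    struct vt_stat vs;
    int fd = open("/dev/vt/console_user", O_RDONLY);  /* device node is an assumption */

    if (fd < 0 || ioctl(fd, VT_GETSTATE, &vs) < 0) {
        perror("VT_GETSTATE");
        return 1;
    }
    printf("active VT: %u\n", (unsigned)vs.v_active);
    /* each 1 bit in vs.v_state corresponds to an open VT */
    close(fd);
    return 0;
}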
Tells the VT manager if the process releases (or refuses to release) the display. An argument of 1 indicates the VT is released. An argument of 0 indicates refusal to release. The VT_ACKACQ argument indicates if acquisition of the VT has been completed.
Sets the current VT node (where the ioctl comes from) as the target of /dev/vt/console_user. The sys_devices privilege is required for this ioctl.
Sets the VT mode. The argument is a pointer to a vt_mode structure (described above).
If the specified VT is currently active, this call returns immediately. Otherwise, it sleeps until the specified VT becomes active, at which point it returns.
VT devices.
See also ioctl(2), signal(3C), wscons(7D). To return to the graphical session (which normally picks up the first available virtual console), use [ Ctrl + ] Alt + F7 .
# console-login:default is for the system console, others for virtual consoles.
|
http://docs.oracle.com/cd/E19963-01/html/821-1475/vt-7i.html
|
CC-MAIN-2016-30
|
refinedweb
| 908
| 54.42
|
Hi all,
I'm supposed to write a small program that asks the user for the number of a month (1 = Jan, 2 = Feb, etc.) and then outputs the month name and the month number. If the number entered is invalid (for example, > 12 or equal to 0), then the user receives an error message. Perhaps there is a better way to write this, but based on my limited knowledge, this is what I have put together. However, when I run the program it prints ALL the months of the year, each followed by the single number input, like this:
Give me a number
10
January10
February 10
March10
April10
May10
June10
July10
August10
September10
Can anyone tell me what I've done incorrectly?
Here is the code:
import java.util.Scanner;

public class Exercise3 {

    public static void main (String [] args) {

        Scanner scanner = new Scanner(System.in);
        {
            System.out.println("Give me a number");
        }
        int x = scanner.nextInt();

        while(x == 1);
        { System.out.println("January" + x); }
        while(x == 2);
        { System.out.println("February " + x); }
        while(x == 3);
        { System.out.println("March" + x); }
        while(x == 4);
        { System.out.println("April" + x); }
        while(x == 5);
        { System.out.println("May" + x); }
        while(x == 6);
        { System.out.println("June" + x); }
        while(x == 7);
        { System.out.println("July" + x); }
        while(x == 8);
        { System.out.println("August" + x); }
        while(x == 9);
        { System.out.println("September" + x); }
        while(x == 10);
        { System.out.println("October" + x); }
        while(x == 11);
        { System.out.println("November" + x); }
        while(x == 12);
        { System.out.println("December" + x); }
        while ((x > 12) || (x == 0));
        { System.out.println("Error. Number must be 12 or less"); }
    }
}
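For what it's worth, the behaviour described looks consistent with the stray semicolons: while(x == 1); is a loop with an empty body, and the braced block that follows is just a free-standing block that always runs, so every println executes no matter what x is (and if x really were 1, the empty loop would never end). One way to get the intended behaviour, sketched below with the same month strings, is a chain of if / else if; a switch statement would work equally well:

import java.util.Scanner;

public class Exercise3 {
    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        System.out.println("Give me a number");
        int x = scanner.nextInt();

        // Exactly one branch runs, based on the value entered.
        if (x == 1) {
            System.out.println("January " + x);
        } else if (x == 2) {
            System.out.println("February " + x);
        } else if (x == 3) {
            System.out.println("March " + x);
        } else if (x == 4) {
            System.out.println("April " + x);
        } else if (x == 5) {
            System.out.println("May " + x);
        } else if (x == 6) {
            System.out.println("June " + x);
        } else if (x == 7) {
            System.out.println("July " + x);
        } else if (x == 8) {
            System.out.println("August " + x);
        } else if (x == 9) {
            System.out.println("September " + x);
        } else if (x == 10) {
            System.out.println("October " + x);
        } else if (x == 11) {
            System.out.println("November " + x);
        } else if (x == 12) {
            System.out.println("December " + x);
        } else {
            System.out.println("Error. Number must be 12 or less");
        }
    }
}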
|
https://www.daniweb.com/programming/software-development/threads/237252/super-newb-troubleshooting
|
CC-MAIN-2017-09
|
refinedweb
| 268
| 53.07
|
Often reusable code is stored in DLL form. Not just you but your colleagues and your clients are expected to use it. Thus it is not only the functionality that matters but also the documentation. I will touch on both aspects while coming up with a template that serves general DLL writing. What I will discuss is not the internal working of DLLs; you can find that in any VC++ development book. I assume that the user knows the basics of DLLs such as feature export/import, compiling, linking, and C++.
1. Functionality
a. Export/Import keyword: Most of the time while authoring a DLL we do not keep the user in mind. Our intention is to export the features in the DLL. In that process we tend to hardcode the keywords __declspec(dllexport) or AFX_CLASS_EXPORT as the case may be.
Example: Content of FileLog.h
....
__declspec(dllexport) BOOL LogToFile(const char* logFile, const CString string);
....
The user would include this header file in his code and link his executable with your library. But when the user does this, the function LogToFile() is still EXPORTED! He should have been importing it. You then say: I will give him a separate header file which is an exact replica of FileLog.h but has __declspec(dllimport) in it. This way both of us are happy. Content of the user's FileLog.h:
....
__declspec(dllimport) BOOL LogToFile(const char* logFile, const CString string);
....
But this is just one function and one header file written just by you. Tomorrow this DLL might have tens of functions and classes in hundreds of files authored by many users. Can you maintain two copies of every header file? Even if you think 'maybe', say NO.
The problem can be solved very easily by using something called conditional compilation, with macros. All of us would have used them in one form or the other. What you do is define a macro which will expand to __declspec(dllexport) at the programmer's end and __declspec(dllimport) at the user's end. This is how you do it:
#ifdef MYLIB_API
// All functions in this file are exported
#else // MYLIB_API
// All functions in this file are imported
#define MYLIB_API __declspec(dllimport)
#endif // MYLIB_API
I can do something similar for classes too
#ifdef MYLIB_CLASS
// All functions in this file are exported
#else // MYLIB_CLASS
// All functions in this file are imported
#define MYLIB_CLASS AFX_CLASS_IMPORT
#endif // MYLIB_CLASS
You can find this in DLLHeader.h. What you also need to do is define MYLIB_API and MYLIB_CLASS in your implementation (.cpp) files before you include the respective header files. Example: content of FileLog.cpp

....
// This DLL source file exports functions and variables.
// Must be present in all impl files before #include of respective header files.
#define MYLIB_API __declspec(dllexport)
#define MYLIB_CLASS AFX_CLASS_EXPORT

#include "FileLog.h"
....
Isn't this simple? You can make this part of your DLL implementation file template.
b. Organization: The biggest problem I have faced in DLL development is its maintenance. You need to be disciplined right from the beginning. Make sure you and all authors of the DLL follow strict coding standards, appropriate documentation, and the templates that I discussed.
Besides these you also need to make sure that you don't overload your functionality in a single header/implementation. What I mean is, even if there are closely related pieces of functionality which differ in their technology, put them in separate files. For example, say you have written two sets of features which do the same thing, voice recording and replay: one uses MCI and the other uses PSAPI technology. Even though they are functionally similar, put them in different header and implementation files. You are doing this simply because tomorrow you will add more features into them, say pause, format conversion, and so on. It would be very expensive to make changes then.
Organization is not just at the file level. You also need to organize code within your files too. C++ provides a very nice way of doing this using 'namespace'. Always mark your segment of code within and across files as belonging to one particular namespace. This will help the user clearly spot his choice of feature. For example, if you have written two variants of a function you can put them in two different namespaces and retain the same name for both functions, as sketched below.
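A small sketch of that idea; the namespace and function names here are invented for illustration and are not from the article:

// VoiceMci.h -- MCI-based implementation
namespace VoiceMci
{
    MYLIB_API BOOL Record(const char* fileName);
    MYLIB_API BOOL Replay(const char* fileName);
}

// VoicePsapi.h -- PSAPI-based implementation
namespace VoicePsapi
{
    MYLIB_API BOOL Record(const char* fileName);
    MYLIB_API BOOL Replay(const char* fileName);
}

// The user picks the variant explicitly, even though the function names are identical:
//     VoiceMci::Record("greeting.wav");
//     VoicePsapi::Record("greeting.wav");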
c. Log: While debugging a DLL you cannot rely much on the debugger. Many books make debugging look very simple: you set the executable path in the Debug tab of Project Settings to the application which loads your DLL, and then debug your DLL. This is a simple scenario. But there are cases where your DLL might be loaded by an application which is loaded by a third-party application which is invoked by your application. For example, Browser Helper Objects (BHO) are in-proc DLLs which are loaded by Internet Explorer; if you want to write a tracker application you need to write the BHO as well. In such cases it might not be possible to load your DLL (BHO) in the debugger. The way I do it is using log files. So at times you need to think about logging mechanisms for your DLLs too.
2. Documentation: Your DLL needs to be documented more than any of your executables. This is because DLLs have wider stakeholders. Just handing over an API manual might not be sufficient. You need to document use-cases and exceptions in code form in your DLL. I have a file named Usage.txt which gives code on how to use my DLL. Whenever I add a feature to the DLL, I make sure it is reflected in the Usage.txt file. You can place this file at the same level as your source and header folders. Note that I capture not just the application of my feature, but also the code surrounding the application.
Code: I have created the DLL using the MFC wizard. It is a regular DLL with MFC statically linked. The exports in the DLL may not do anything useful, but they give you the gist of the DLL layout. Once you put the layout in place you can keep adding more features (functions and classes) into it and be sure that your client is happy with it both in terms of functionality and documentation.
|
http://www.codeproject.com/Articles/7547/DLL-Template-which-lets-your-project-take-off
|
crawl-003
|
refinedweb
| 1,093
| 66.54
|