Introduction to pandas profiling in Python
We know that for extensive data analysis and for developing a machine learning model we use different libraries such as Pandas, NumPy & Matplotlib. The pandas library is used heavily when building a machine learning model, especially for Exploratory Data Analysis: reading the dataset, defining DataFrames, merging datasets, concatenating columns, and zipping two DataFrames into a single DataFrame. I came across a very interesting topic named ‘pandas-profiling‘, which is widely used to get a quick overall analysis report of any dataset you load, and that helps shape your approach towards building the model. Let us see what it is all about-
Let’s perform a quick analysis report of any dataset using the ‘pandas-profiling’ library.
First, let’s learn the necessary commands for installing and uninstalling pandas-profiling in the system-
- Install the library-
pip install pandas-profiling
If you are using conda use the following command-
conda install -c conda-forge pandas-profiling
To Uninstall-
!pip uninstall pandas-profiling
USE IT-
- Let’s perform a quick analysis report of the dataset using the ‘pandas-profiling’ library. I have used a movies dataset here-
- load the libraries-
import pandas as pd
import numpy as np
Import pandas-profiling library-
import pandas_profiling as pp
Import the dataset-
movies_df = pd.read_csv(r"G:\movie_dataset.csv")
I have taken a movies dataset stored on the G: drive of my system (the raw string avoids backslash-escape problems in the path).
You can load the respective dataset you want to explore along with its file path.
movies_df.head()
This command will show the first five rows of the dataset for a quick look through the data as output.
movies_df.describe()
- This command gives a quick statistical summary of the dataset, such as the count, mean, and standard deviation of the numeric columns it contains.
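As a sketch of what describe() reports, here is a toy DataFrame (the column name is illustrative, not taken from the movies dataset):

```python
import pandas as pd

# A toy DataFrame standing in for the movies dataset (illustrative only)
df = pd.DataFrame({"rating": [1.0, 2.0, 3.0]})

summary = df["rating"].describe()
print(summary["count"], summary["mean"], summary["std"])  # 3.0 2.0 1.0
```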
- We will use the command for quick analysis-
profile = pp.ProfileReport(movies_df)
profile
This command will give all the detailed analysis of your loaded dataset.
- We call pp.ProfileReport(), which is a pandas-profiling function used to extract and generate the overall report of the dataset.
movies_df.profile_report(html={'style': {'full_width': True}})
- If the profile report is not generated in your notebook, you can also use these commands-
profile.to_widgets()
profile.to_file(output_file="movies_profiling.html")
You will find the HTML-format report automatically saved in your default folder.
THE RESULTS OF ANALYSIS-
pandas_profiling gives a quick and detailed analysis of each parameter present in the dataset. The profile report function gives a descriptive overview of every dimension of the data.
OVERVIEW-
The overview gives a detailed description of the total number of missing values, the warnings raised, the number of duplicate rows, distinct values, and variables with high cardinality.
NUMERICAL OVERVIEW-
This section illustrates the properties of numerical values of the dataset to get a detailed overview of Mean, Standard deviation, Min values, Max values, Interquartile range, etc.
CATEGORICAL OVERVIEW-
It shows a detailed overview of variable length, number of characters, total number of unique and distinct values, and common features of the categorical variables.
CORRELATION-
The correlation report shows how strongly the variables are related. It is a statistical technique to explain the relationships the numerical and categorical features have with each other, with a detailed explanation of each parameter’s relations.
Correlation analysis is the method to show the relationship between two quantitative variables present in the dataset. Correlation is measured using the correlation coefficient “r”, which ranges from -1 to +1. If r is negative, the variables are inversely related; if r is positive, the variables increase or decrease together.
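As a quick illustration of the sign of r, here is a sketch using NumPy (the arrays are made up):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y_direct = 2 * x        # rises as x rises
y_inverse = 10 - 3 * x  # falls as x rises

# corrcoef returns the correlation matrix; [0, 1] is r between the two arrays
r_direct = np.corrcoef(x, y_direct)[0, 1]
r_inverse = np.corrcoef(x, y_inverse)[0, 1]
print(round(r_direct, 6), round(r_inverse, 6))  # 1.0 -1.0
```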
INTERACTIONS-
In this section, you get a generated plot that shows the interaction between two parameters. The interaction section clearly shows how each variable in the dataset relates to the others. You can view the interaction of any pair of variables by selecting them from the two headers.
Drawbacks of using pandas-profiling-
This library is not efficient for quick analysis of large datasets; it takes a lot of time to compute the results.
Conclusion-
I am sure you now have a basic idea of how to use the pandas-profiling library. I hope it will save you much time on this kind of analysis, letting you plan your future approach rather than going through tons of manual computation.
Source: https://www.codespeedy.com/introduction-to-pandas-profiling-in-python/ (dump: CC-MAIN-2022-27; source: refinedweb; word count: 738; Flesch reading ease: 50.57)
connectivity step
connectivity step Sir, please give me the steps, one by one, to connect a Java program to Oracle.
Follow these steps:
1) Import the following packages in your java file:
import java.sql.*;
import oracle.jdbc
First Step towards JDBC!
First Step towards JDBC!
First Step towards JDBC
Introduction
This
article introduce you with JDBC and shows you how to create a database
application to access
First Step towards JDBC!
in Java
In this section, you will learn how to connect the MySQL database... Rows from a Database Table
Here, you will learn how to retrieve all rows...;
First
Step towards JDBC
This article introduce you
Struts Flow Diagram Step By Step
Struts Flow Diagram Step By Step
In this section we will read about the flow of struts application. Here we
will see the architecture of struts... Request : In the first step an struts application
receives the request from
Step by Step Java tutorials for beginners
More and more novices are lining to learn Java language due to the growing... to learn everything but if you
are a beginner you can start with basic Java... learn the basic of the language. One
can clear their concept, learn the basic
;
How to design a text effect?
This tutorial helps you to learn designing of a text
effect. The beginners can easily learn step by step tutorial. Here we will explain this
tutorial step by step.
New File: Take a new file
How to design hard steel cover
How to design hard steel cover
This example has step by step processing to make a hard steel
cover.
New File: Take a new file.
Color: Set Foreground Color "Black"
and Background Color "White"
JPA Training
Persistence API Training Course is a step-by-step
introduction to build applications...;In this course you will learn how to persist your class
POJO's with relational database...
(as JBoss 4.x)
Understanding JPA
Configuration Files
persistence.xml
Jboss 3.2 EJB Examples
*.class META-INF\*.xml
The next step is to deploy the ejb jar file to JBoss... to do this step, Jboss won?t connect to mysql. It won?t find the url and driver... Jboss 3.2 EJB Examples
Photoshop a LCD Monitor
step by step guide
for designing this.
Let's start
New File.... Make adjustment as I have done here.
Inner shadow: Use same step like previous
JDO - Java Data Objects Tutorials
JDO - Java Data Objects Tutorials
This step-by-step Java Data... developed online JDO examples that will help you learn JDO Fast
jboss sever
jboss sever Hi
how to configure data source in jboss server and connection pooling
Thanks
Kalins naik
JBoss Tutorials
jsp-jboss
jsp-jboss where to keep jsp in jboss
Fedora Core 6 Guide, Installing Fedora Core6
installation tutorial you will
learn how to install fedora 6 along with Java, Java...;
Producing and Checking Fedora Core 6 CDs
In this section you will learn how to check Fedora Core 6 software in
burnt CDs. You will also learn
Java Training
Java Training provided by Roseindia helps the beginners to learn the basic of
Java technology and take them step-by-step towards the goal of becoming... so that everybody can understand and learn it.
Than there this the professional
Tomcat 7 in Eclipse Helios
Video Tutorial: Tomcat 7 in Eclipse Helios step by step
This video tutorial shows you how you can configure Tomcat 7 in Eclipse
Helios IDE. If you....
In the next step you will configure Tomcat 7 in eclipse IDE.
Finally create
Objective C Tutorial
;
In this Objective C Tutorial we will provide you
step-by-step... learn objective c from our easy to follow tutorial. Get
ready to learn... for Mac, iPode and iPhone. To
learn Objective-C you must have prior programming
How to Get Started with Java?
system it is in use. So to get started with Java the first step would... step to become equipped in all regards before you actually start writing codes... mentioning .java. The next step is compiling which means turning the Source Code
Source: http://roseindia.net/discussion/49607-JBoss-Tutorials:-Learn-JBoss-Step-by-Step.html (dump: CC-MAIN-2014-42; source: refinedweb; word count: 685; Flesch reading ease: 65.93)
Can't get QtWebEngine to work on Raspberry Pi 3B
- Yury Lunev last edited by
Hello! I am trying to run the most basic sample of QtWebEngine on a Raspberry Pi using Yocto thud/warrior and the EGLFS platform:
import QtQuick 2.0
import QtQuick.Window 2.0
import QtWebEngine 1.5

Window {
    width: 1024
    height: 750
    visible: true
    WebEngineView {
        anchors.fill: parent
        url: ""
    }
}
But I get this on stdout:
# QTWEBENGINE_DISABLE_SANDBOX=1 qmlscene -platform eglfs 123.qml
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/var/volatile/tmp/runtime-root'
Unable to query physical screen size, defaulting to 100 dpi.
To override, set QT_QPA_EGLFS_PHYSICAL_WIDTH and QT_QPA_EGLFS_PHYSICAL_HEIGHT (in millimeters).
Sandboxing disabled by user.
Warning: Setting a new default format with a different version or profile after the global shared context is created may cause issues with context sharing.
Trace/breakpoint trap
And this on dmesg (a line on each attempt):
[ 2742.973347] Unhandled prefetch abort: breakpoint debug exception (0x002) at 0x7143f14a
[ 2776.056549] Unhandled prefetch abort: breakpoint debug exception (0x002) at 0x6fc3f14a
[ 3024.671555] Unhandled prefetch abort: breakpoint debug exception (0x002) at 0x6fc3f14a
I have gpu_mem=512 and I use userland drivers, not vc4. Had the very same issue with vc4graphics, though.
Source: https://forum.qt.io/topic/104593/can-t-get-qtwebengine-to-work-on-raspberry-pi-3b (dump: CC-MAIN-2021-10; source: refinedweb; word count: 205; Flesch reading ease: 58.79)
I am trying to make a program where you can enter a credit card number and it will spit the number back at you with an ASCII letter/symbol on the end, using the remainder of the added digits divided by 26. I feel like my code is right, although when I run the program, no symbol shows up. I do not get debug errors or anything, but my (char) symbol just doesn't show up. All it shows is the numbers. Can someone help me please?
Here is what I have so far:
import java.util.*;
import java.text.*;
import java.math.*;
public class Program{
public static void main (String []args){
Scanner keyboard = new Scanner(System.in);
int CC, CC2, CC3, CC4;
System.out.println("Enter your credit card number 2 numbers at a time (XX XX XX XX)");
CC=keyboard.nextInt();
CC2=keyboard.nextInt();
CC3=keyboard.nextInt();
CC4=keyboard.nextInt();
int CC6;
CC6= (CC+CC4+CC2+CC3)%26;
char CC7;
CC7 = (char)CC6;
System.out.println("The correct number and code is:" +CC+CC2+CC3+CC4+CC7);
}
}
Print them separately and you can see some weird symbol coming in the console.
System.out.println("The correct number and code is:" + CC + CC2 + CC3 + CC4);
System.out.println(CC7);
The thing is, you'll get CC7 in the range of 0-25 only, since you're doing a mod 26, and this range contains the ASCII codes of non-printing control characters.
Actual characters (or, for that case, special characters) start from ASCII code
33. Have a look at the ASCII table here.
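In other words, the fix is to offset the 0-25 remainder into the letter range before converting. The arithmetic, sketched here in Python with made-up card chunks:

```python
# Sketch of the fix: offset the 0-25 remainder into the uppercase-letter range
cc_chunks = [12, 34, 56, 78]                # illustrative card chunks
code = chr(ord('A') + sum(cc_chunks) % 26)  # always a printable letter
print(code)  # 'Y' for this example
```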
Source: https://codedump.io/share/t4enahu2p1x/1/how-do-i-convert-a-user-input-integer-into-a-char-ascii-value-in-java (dump: CC-MAIN-2018-05; source: refinedweb; word count: 264; Flesch reading ease: 64.81)
SETBUF(3F) SETBUF(3F)
setbuf, setvbuf, setbuffer, setlinebuf - assign buffering to a stream
logical unit
FORTRAN SYNOPSIS
#include <stdio.h>
character *(BUFSIZ+8) buf
integer type, size, setbuf, setvbuf, setbuffer,
setbuf (lun, buf)
setvbuf (lun, buf, size)
setbuffer (lun, buf, size)
setlinebuf (lun)
#include <stdio.h>
void setbuf (FILE *stream, char *buf);
int setvbuf (FILE *stream, char *buf, int type, size_t size);
int setbuffer (FILE *stream, char *buf, int size);
int setlinebuf (FILE *stream);
The three types of buffering available are unbuffered, fully buffered,
and line buffered. When an output stream unit is unbuffered, information
appears on the destination file or terminal as soon as written; when it
is fully buffered many characters are saved up and written as a block;
when it is line buffered characters are saved up until a newline is
encountered or input is read from stdin. fflush(3S) or flush(3F) may be
used to force the block out early. By default, output to a terminal is
line buffered and all other input/output is fully buffered.
Setbuf may be used after a stream unit has been opened but before it is
read or written. It causes the array pointed to by buf to be used
instead of an automatically allocated buffer. If buf is the NULL pointer
input/output will be completely unbuffered. If buf is not the NULL
pointer and the indicated stream lun is open to a terminal, output will
be line buffered.
A constant BUFSIZ, defined in the <stdio.h> header file, indicates the
assumed minimum length of buf. It is wise to allocate a few words of
extra space for buf, to allow for any synchronization problems resulting
from signals occurring at inopportune times. A good choice (and the one
used by default in stdio(3s)) is
char buf[BUFSIZ + 8]; character *(BUFSIZ + 8) buf
Setvbuf may be used after a stream unit has been opened but before it is
read or written. Type determines how stream lun will be buffered. Legal
values for type (defined in <stdio.h>) are:
_IOFBF causes input/output to be fully buffered.
_IOLBF causes output to be line buffered; the buffer will be flushed
when a newline is written, the buffer is full, or input is
requested.
_IONBF causes input/output to be completely unbuffered.
If input/output is unbuffered, buf and size are ignored. For buffered
input/output, if buf is not the NULL pointer and size is greater than
eight, the array it points to will be used for buffering. In this case,
size specifies the length of this array. The actual buffer will consist
of the first size-8 bytes of buf (see the discussion of BUFSIZ above).
If buf is the NULL pointer, or size is less than eight, space will be
allocated to accommodate a buffer. This buffer will be of length BUFSIZ.
(The actual space allocated will be eight bytes longer.)
Setbuffer and setlinebuf are provided for compatibility with 4.3BSD.
Setbuffer, an alternate form of setbuf, is used after a stream unit has
been opened but before it is read or written. The character array buf
whose size is determined by the size argument is used instead of an
automatically allocated buffer. If buf is the constant pointer NULL,
input/output will be completely unbuffered.
Setlinebuf is used to change stdout or stderr from fully buffered or
unbuffered to line buffered. Unlike the other routines, it can be used
at any time that the file descriptor is active.
fopen(3S), fflush(3S), getc(3S), malloc(3C), putc(3S), stdio(3S).
flush(3F), perror(3F).
Success is indicated by setvbuf and setbuffer returning a zero
return value. A non-zero return value indicates an error. The value of
errno can be examined to determine the cause of the error. If it is
necessary to allocate a buffer and the attempt is unsuccessful, setvbuf
and setbuffer return a non-zero value. Setvbuf will also return non-zero
if the value of type is not one of _IONBF, _IOLBF, or _IOFBF.
A common source of error is allocating buffer space as an ``automatic''
variable in a code block, and then failing to close the stream unit in
the same block.
These functions cannot be used on direct unformatted units.
Source: https://nixdoc.net/man-pages/IRIX/man3s/setbuf.3s.html (dump: CC-MAIN-2022-27; source: refinedweb; word count: 723; Flesch reading ease: 63.29)
How to retry after exception?
Do a
while True inside your for loop, put your
try code inside, and break from that
while loop only when your code succeeds.
for i in range(0, 100):
    while True:
        try:
            # do stuff
        except SomeSpecificException:
            continue
        break
I prefer to limit the number of retries, so that if there's a problem with that specific item you will eventually continue onto the next one, thus:
for i in range(100):
    for attempt in range(10):
        try:
            # do thing
        except:
            # perhaps reconnect, etc.
        else:
            break
    else:
        # we failed all the attempts - deal with the consequences.
The retrying package is a nice way to retry a block of code on failure.
For example:
def wait_random_1_to_2_s():
    print("Randomly wait 1 to 2 seconds between retries")
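The bounded-retry idea above can also be wrapped in a small helper. A plain-Python sketch with illustrative names (no third-party package required):

```python
def retry_call(fn, attempts=3, exceptions=(Exception,)):
    """Call fn(), retrying up to `attempts` times on the given exception types."""
    for attempt in range(attempts):
        try:
            return fn()
        except exceptions:
            if attempt == attempts - 1:
                raise  # out of retries: propagate the last error

calls = {"n": 0}

def flaky():
    # Fails twice, then succeeds (simulates a transient error)
    calls["n"] += 1
    if calls["n"] < 3:
        raise ValueError("transient failure")
    return "ok"

result = retry_call(flaky, attempts=5)
print(result)  # "ok" after two failed attempts
```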
Source: https://codehunter.cc/a/python/how-to-retry-after-exception (dump: CC-MAIN-2022-21; source: refinedweb; word count: 129; Flesch reading ease: 67.59)
We are aware of the following known issues with Unity 2019.1. These issues have not been observed with Unity 2018.4, so if you're impacted we recommend staying with that version.
Issues seen with Unity 2019.1.x with Vuforia Engine 8.1.X and 8.3.X:
- When running apps on iOS devices with the A10 processor or lower (e.g. iPhone 6S, iPhone 7, etc.) that have been developed with Unity 2019.1, Metal rendering API and ARKit (via Vuforia Fusion), the CPU load on that device can increase to a point where severe camera lag and/or dropped camera frames can be observed when running the device tracker. This can impact features such as Model Targets, Ground Plane, 3DRO and Extended Tracking.
- PROPOSED WORKAROUND:
- Open File->Build Settings...->Player Settings...
- Select "Resolution and Presentation"
- Select "Render Over Native UI"
By default, "Render Over Native UI" is not checked. Checking this will execute the original code pathway (without the change that introduced this regression), with the exception of setting the surface to opaque, which could be undesirable. This can be fixed by commenting out some code from this MetalHelper.mm in the created xcode project:
#if !PLATFORM_OSX
if (UnityPreserveFramebufferAlpha())
{
    const CGFloat components[] = {1.0f, 1.0f, 1.0f, 0.0f};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGColorRef color = CGColorCreate(colorSpace, components);
    surface->layer.opaque = NO;
    surface->layer.backgroundColor = color;
    CGColorRelease(color);
    CGColorSpaceRelease(colorSpace);
}
//else
#endif
//{
//surface->layer.opaque = YES;
//}
Issues seen with Unity 2019.1.x with Vuforia Engine 8.1.X that were fixed with Vuforia Engine 8.3:
App fails to resume on some Android devices after confirming the camera permission dialog. This is seen on many Android devices, but not all. Pause-resuming the app will continue it; the app will also load fine on the second run if the permission has been granted before. A workaround exists:
- Enable "Delayed Initialization" in the Vuforia configuration
- Disable the VuforiaBehaviour component on the ARCamera in your first AR scene
- Add the attached RequestPermissionScript.cs to your ARCamera. This script will request the Camera permission using Unity's own APIs and initialize Vuforia only after the permission has been granted.
In some cases, Vuforia scenes take longer to load on iOS than expected. This is not seen consistently. Investigation of the root cause for this issue is still ongoing.
Issues fixed in Unity 2019.1.4f1:
Package Errors when importing Vuforia Samples into Unity 2019.1 from Asset Store. This is an asset store issue, Unity is currently working on a fix for an upcoming Unity 2019.1 version. In the meantime, possible workarounds can be found here:
Thanks,
Vuforia Engine Support
Source: https://developer.vuforia.com/forum/unity/known-issues-unity-20191?sort=2 (dump: CC-MAIN-2019-51; source: refinedweb; word count: 444; Flesch reading ease: 51.95)
Text boxes with different fonts!
Periscope has a feature called Text on Dashboards that allow users to add text boxes as banners, announcements, and descriptions of charts! However, they currently only support one font.
With the R and Python integration, users can use Plotly to replicate the Text on Dashboard feature but change the font!
By using this Python snippet, users can add the custom text, color, position and font to get something like the following:
Step 1: Create a New Chart
Step 2: Make the chart name blank
Step 3: Add "select 0" or any other valid sql code in the SQL editor
Step 4: Add this snippet to your Python editor and edit the text, position (x and y), font family, size, and color.
import plotly.graph_objs as go

layout = go.Layout(
    title=dict(
        text='Welcome to my Dashboard!',
        y=.3,
        x=.5
    ),
    font=dict(
        family='Cursive',
        size=48,
        color='#7A33FF'
    ),
    xaxis=dict(
        showgrid=False,
        ticks='',
        showticklabels=False
    ),
    yaxis=dict(
        showgrid=False,
        zeroline=False,
        showticklabels=False
    )
)
fig = go.Figure(layout=layout)
periscope.plotly(fig)
Note: So far the fonts I've seen supported are:
Cursive, Times New Roman, Courier New, PT Sans Narrow, Helvetica, Arial, Arial Bold, and Comic Sans.
If you see more feel free to comment them below!
Source: https://community.periscopedata.com/t/k9n2l4/text-boxes-with-different-fonts (dump: CC-MAIN-2019-39; source: refinedweb; word count: 212; Flesch reading ease: 60.45)
Hi, the answer is a pretty simple one.
Make use of lxml. This is one among the best HTML/XML libraries in Python.
Consider the following piece of code:
import lxml.html
t = lxml.html.fromstring("...")
t.text_content()
And also if you wish to sanitize the HTML code to look clean then make use of the following module:
module - lxml.html.clean
Hope this helped!
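If lxml is not available, a standard-library-only sketch can also strip tags and resolve entities via html.parser (the class and function names here are mine):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects only the text nodes; convert_charrefs resolves entities like &amp;."""
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def strip_html(markup):
    parser = TextExtractor()
    parser.feed(markup)
    parser.close()
    return "".join(parser.parts)

print(strip_html("<p>Tom &amp; Jerry</p>"))  # Tom & Jerry
```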
Source: https://www.edureka.co/community/38195/how-to-filter-html-tags-and-resolve-entities-using-python (dump: CC-MAIN-2022-21; source: refinedweb; word count: 188; Flesch reading ease: 70.7)
[A5, A5] = A5, thereby proving that A5 is not solvable. This is useful for Galois theory, where we want to show that A5 cannot be built as extensions of smaller cyclic groups.
(12);(34) means 'perform (12) then perform (34)'. I find this notation makes permutation composition intuitive for me. The ; is evocative of C-style languages, where we are ending a statement. I will be consistently using [g, h] to denote the commutator g h g^-1 h^-1.
A5 only has the even permutations in S5. So an element can be built up from zero, two, or four transpositions. There can't be more after simplification, since S5 only has 5 symbols --- the largest sequence of transpositions we can do is (12)(23)(34)(45). So, in A5, we have:
().
(ij)(kl)where
{i, j}and
{k, l}do not overlap. From these, we get the 2-cycles.
(ij)(kl)where
{i, j}and
{k, l}overlap. Here we cannot have
{i, j} = {k, l}since then we will just have a single transposition. So, let us assume that we have
j = k. If we have any other equality, we can always flip the transpositions around to get to the normal form
j = k:
(23);(12) = (32);(12) [(23) = (32)] = (32);(21) [(12) = (21)]
[a b c] -(32)-> [a c b] -(21)-> [c a b]
This moves c backward, allowing the other elements to take its place, using the permutation (23);(12).
(ij)(kl)where
{i, j}and
{k, l}intersect, we get the 3-cycles.
(12)(23)(34)(45). It must be of this form, or some permutation of this form. Otherwise, we would have repeated elements, since these transpositions are packed "as close as possible". These generate the 5-cycles.
(34)(12)can be written as
(34)(23)(23)(12), because
(23)(23) = e. This can be further broken down into
((34)(23)) ((23)(12))which is two 2-cycles:
(234); (123).
(32)(21)are 3-cycles:
(32)(21) = (123).
(45)(34)(23)(12)can be written as
((45)(34))((23)(12))which is two 3-cycles:
(345); (123).
We write C = (123) as (32)(21). We wish to write this as the commutator of two elements g, h: C = [g, h] = g h g^-1 h^-1.
We have two symbols 4, 5 that are unused by C in A5 [here is where 5 is important: 3 + 2 = 5, and we need two leftover elements].
4, 5to build elements
g, hwhich cancel off, leaving us with
(32)(21). We start with
g = (32)___,
h = (21)___where the
___is to be determined:
(32)___||(21)___||___(32)||___(21)
   g        h       g^-1     h^-1
g and h must contain another transposition, because they are members of A5! We need them to be permutations having 2, 4, or 6 transpositions.
(4 5)everywhere. These
(4 5)can slide over the
(2 1)and thereby harmlessly cancel:
(32)(45)||(21)(45)||(45)(32)||(45)(21)
    g        h         g^-1      h^-1
(45)over
(21), (32):
(32)||(21)(45)(45)||(32)||(45)(45)(21)
  g         h        g^-1      h^-1
(45)(45) = e:
So we are left with
(32)||(21)||(32)||(21)
  g     h    g^-1  h^-1
(32);(21);(32);(21). This is the square of what we really wanted, C = (32);(21). However, since C is a 3-cycle, we know that C^3 = e. So, we can start with C^2, use our trick to generate (C^2)^2, which is equal to C^4 = C. Since this works for any 3-cycle, we have shown that we can generate 3-cycles from commutators of A5.
s = (a b c). We first find a square root
tsuch that
t*t=s. To do this, we make
thave the cycles of
sspread out in gaps of 2:
It is hopefully clear that
t = (a _ _)
t = (a _ b) [+2]
t = (a c b) [+2, modulo]
t*t = s:
Now, we will write
t = (a c b)
t*t: apply the cycle twice.
t*t = a -(skip c)-> b
      b -(skip a)-> c
      c -(skip b)-> a
    = (a b c) = s
s = t*tand then find the commutator decomposition from it:
But there's a problem: this
s = t*t = (abc)(abc)
        = (cb)(ba)(cb)(ba)
        = (cb)|(ba)|(cb)|(ba)
            g    h   g-1  h-1
gand
hdo not belong to
A5, they belong to
S5. This is fixed by using a random
(pq)which we know will exist ..
arbitrary g = (3-cycle-1)(3-cycle-2)....(3-cycle-n) = [g, h][g2, h2]....[gn, hn] = member of [A5, A5]
C = (123)will break down for
D = (321). Fear not!
(123)is in
[A5, A5], then some other cycle
(ijk)can be conjugated to
(123). Since the commutator subgroup is closed under conjugation, we have that
(ijk)is a member of
[A5, A5].
C=(abc)and
D=(pqr), at least one of
a, b, cmust be equal to one of
p, q, r. Since each
a, b, cis unique, and each
p, q, ris unique, for them to not overlap, we would need 6 elements. But we only have 5, so there must be some overlap:
So, we will perform our proof assuming there is 1 overlap, 2 overlap, or 3 overlap. Recall that if
a b c 1 2 3 4 5 p q r
C = (a b c)is a cycle and
sis a permutation, then the action of conjugating
Cwith
sproduces a permutation
(s(a) s(b) s(c)). We will prove our results by finding an s, and then making s even. This is the difficult part of the proof, since we need to show that all 3-cycles are conjugate in A5. We will write s as two distinct transpositions, which will guarantee that it belongs to A5.
(abx)and
(pqx)have a single element
xin common:
C = (abx)
D = (pqx)
s: send a to p, b to q
s = (ap)(bq)
C = (abx) -conj s-> (pqx) = D
(axy)and
(pxy)have two elements in common,
xand
y. Naively, we would pick
s: send x to y. But this is odd, so this isn't a member of
A5. To make it even, we rearrange
D = (pxy)as
D = (yxp). This lets us go from
Cto
Dby relabelling
ato
y,
yto
p. This permutation is even since it has two distinct transpositions.
C = (axy)
D = (pxy) = (yxp) [cyclic property]
s: send a to y, y to p
s = (ay)(yp)
C = (axy) -conj s-> (yxp) = D
(xyz)and
(xyz)have all three elements in common,
x,
y,
z. Here we can conjugate by identity and we are done.
A5:
mwhich maps each element of
A5to the commutators that create it.
from collections import defaultdict

m = defaultdict(set)
A5 = AlternatingGroup(5)
S5 = SymmetricGroup(5)  # if necessary

for g in A5:
    for h in A5:
        m[g * h * g^(-1) * h^(-1)] |= { (g, h) }

# all 60 elements can be written in terms of commutators
print("number of elements generated as commutators: " + str(len(m.keys())))

# Show how to access elements of A5 and their commutator representations
cyc5 = A5("(1, 2, 3, 4, 5)")
cyc3 = A5("(1, 2, 3)")
cyc2disj = A5("(1, 2) (3, 4)")
print(m[cyc5])
print(m[cyc3])
print(m[cyc2disj])
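The same check can be reproduced outside Sage with SymPy's permutation groups (assuming SymPy is available):

```python
from sympy.combinatorics.named_groups import AlternatingGroup

A5 = AlternatingGroup(5)
D = A5.derived_subgroup()  # the commutator subgroup [A5, A5]

# [A5, A5] has the same order as A5 itself, so [A5, A5] = A5:
# A5 is perfect, hence not solvable.
print(A5.order(), D.order())  # 60 60
```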
A5directly as a commutator
Note that if we choose
s = (12)(34)
t = (abcd), then
t*twill exchange the first and third elements
a <-> c, and the second and fourth elements
b <-> d. So, if we choose:
Next, we need to write this
t = (1324)
t*t = (12)(34)
t*tas
[g, h]for
g, hfrom
A5.
Where both
t*t = (1324)(1324)
    = (42)(23)(31);(42)(23)(31)
    = (42)(23)(31);(23)(23);(42)(23)(31)   <- (23)(23) = e inserted
    = (42)(23)|(31)(23)|(23)(42)|(23)(31)
         g    |    h   |   g'   |   h'
    = [(42)(23), (31)(23)]
(42)(23), and
(31)(23)are members of
A5.
s = (1 2 3 4 5). we once again find a square root of
s. To build this, we will build an element with the elements of
swritten with gaps of
2:
It should be clear how
t = (1 _ _ _ _) = (1 _ 2 _ _) [+2 index] = (1 _ 2 _ 3) [+2 index, wrap] = (1 4 2 _ 3) [+2 index, wrap] = (1 4 2 5 3) [+2 index, wrap]
t*t = s: When we take
s = t*t, the resulting permutation
swill move an element
j = t[i]to
k = t[i+2]. But we have built
tsuch that
t[i+2] = s[i+1]. So we will move the element according to how
spleases:
We will now use
t = (1 4 2 5 3)
t*t = 1 -> (4 skip) -> 2
      2 -> (5 skip) -> 3
      3 -> (1 skip) -> 4
      4 -> (2 skip) -> 5
      5 -> (3 skip) -> 1
t*t = (1 2 3 4 5) = s
t*tto write the commutator:
s = t*t = (35)(52)(24)(41);(35)(52)(24)(41)
        = (1, 2)(3, 5)|(1, 5)(2, 4)|(3, 5)(1, 2)|(2, 4)(1, 5)
               g             h           g^{-1}       h^{-1}
Source: https://pixel-druid.com/a5-is-not-solvable.html (dump: CC-MAIN-2022-27; source: refinedweb; word count: 1,543; Flesch reading ease: 78.38)
I had an odd problem and thought I'd post it here.
Basically I started a new project in VS2010 using White. I could add the White references and write some code and all would be well, but the instant I compiled it, the White.Core namespace seemed to simply disappear.
If you have this problem then you can change the dotnet version to 3.5 and automagically it works again!
I rather though Dot Net was backward compatible but there you go.
Iain
I have the same problem with the '.NET Framework 4.0 Client Profile' target framework. Change to '.NET Framework 4.0' will work.
In fact I've seen this in a number of projects recently. What seems to be the case is that the client profile excludes some assemblies/namespaces (specifically, I think stuff that's normally associated with asp.net), but doesn't stop you using them
in projects. In other projects, I simply wanted some URLEncoding, for a client app. This is not part of the client profile and so, one way or another it won't work.
I think there are two lessons for Microsoft. One is that client apps need more web support (UrlEncoding, for example) even if they don't need the whole ASP.Net stuff. Secondly, the error reporting on this is just dreadful.
Source: http://white.codeplex.com/discussions/223218 (dump: CC-MAIN-2017-34; source: refinedweb; word count: 266; Flesch reading ease: 86.2)
Experienced C++ programmers make extensive use of the RAII (Resource Acquisition Is Initialization) idiom [1][2] to manage resources in their programs. For this technique to work, it is necessary to have destructors that are called in a predictable manner. Microsoft's decision to use a nondeterministic garbage collector for its .NET runtime came as a shock to most C++ programmers, because RAII simply does not work in such an environment. While the garbage collector takes care of memory, for handling other resources, like files, database connections, kernel objects, etc., it leaves all the work to the programmer. To come up with some kind of solution for this problem, Microsoft introduced methods like
Close() and
Dispose() which work in conjunction with
Finalize(). In C#, there is also a
using keyword, which can be used to automatically call
Dispose (but not
Close) at the end of a scope, but for most of the other .NET languages, it is the programmer�s responsibility to explicitly call one of those two functions after a resource is no longer needed.
In this article, I will explain how it is possible to make a template wrapper that calls either Dispose or Close after an object goes out of scope, and thus enable C++ programmers to use the RAII idiom even when programming in the .NET environment. This wrapper is policy-based, an idea that came from Alexandrescu's excellent book "Modern C++ Design" [3].
Before I started writing this article, I "googled" a while trying to find out whether someone had already come up with this solution. The best I could find was Tomas Restrepo's auto_dispose [4], published in the February 2002 MSDN Magazine. However, auto_dispose is a replacement for the C# using keyword; it is not policy-based and cannot work with the Close method. Also, it requires that objects derive from the IDisposable interface. Therefore, I am pretty sure I am not reinventing the wheel.
To explain the RAII idiom, I will use Bjarne Stroustrup's example [1]:
If we use File_handle instead of a pointer to FILE, we don't need to worry about closing the file; it closes automatically when the File_handle object goes out of scope. Now, if we strictly follow the rules of structured programming, it is not a big deal to close a file manually after we are done with it. However, .NET applications use exceptions for reporting errors, and it is all but impossible to make well-structured programs with exceptions. Therefore, it is pretty hard to keep track of all the places where a file needs to be closed, and the probability of introducing resource leaks rises. With a File_handle object created on the stack, it is destroyed and its destructor called automatically when it goes out of scope, and the file is closed.
Unfortunately, this simple and effective technique does not work in environments controlled by a nondeterministic garbage collector. With __gc classes, we don't have destructors that are called in a predictable manner, and we cannot rely on them to clean up resources after us. However, in MC++ we also have __nogc classes, which do have proper destructors. The obvious idea is to use a __nogc wrapper and make sure that its destructor calls the __gc object's Dispose() or Close() function.
To address the problem described above, I have made a class template gc_scoped, which wraps a __gc pointer and invokes a cleanup policy from its destructor.
This class template takes two template parameters:
- T, which is a __gc* type.
- CleanupPolicy, a policy class template that specifies the way we clean up our resources. It can be either DisposeObject (the default), which calls Dispose(), or CloseObject, which calls Close().
To see how this class template is useful, let's write a simple function that writes a line of text to a file. Without gc_scoped, this function would look like this:
void WriteNoRaii(String __gc* path, String __gc* text)
{
    StreamWriter __gc* sf;
    try
    {
        sf = File::AppendText(path);
        sf->WriteLine(text);
    }
    __finally
    {
        if (sf) sf->Close();
    }
}
Note the __finally block in the example above. We need to manually call Close in order to close the file. If we forget to do that, we have a resource leak. Now, look at the same example with gc_scoped:
#include "gc_scoped.h" void WriteRaii (String __gc* path, String __gc* text) { gc_scoped <StreamWriter __gc*, CloseObject> sf (File::AppendText(path)); sf->WriteLine(text); }
This time we don't need to manually close our file: gc_scoped does it for us automatically.
In this example we used the cleanup policy CloseObject, which calls StreamWriter::Close() internally. For the cases when we want to use Dispose(), we specify the cleanup policy DisposeObject, or just leave out the second template parameter.
The beauty of policy-based design is that we are not restricted to the DisposeObject and CloseObject policies at all. If some class implements, say, a function Destroy() to clean up resources, we can easily write a DestroyObject policy like this:
template <typename T>
class DestroyObject
{
protected:
    void Cleanup(T object) { object->Destroy(); }
};
That's it! Now we can use the DestroyObject policy along with the other ones.
Now, the question that every hardcore C++ programmer will ask is: how much does this cost in terms of performance? For native C++, the compiler is usually able to optimize away all the costs and produce code identical to the version without template wrappers [5]. Here, we have to "double-wrap" our __gc pointer: first into gc_root, then into gc_scoped, and that does not make the compiler's task easier. Still, as I ran ILDasm to check the output of the WriteRaii function, I somewhat hoped that VC 7.1 would be able to optimize away gc_scoped even though it contains a gc_root member. I was wrong. Here is the output of WriteRaii:
.method public static void modopt([mscorlib]System.Runtime.CompilerServices.CallConvCdecl) WriteRaii(string path, string text) cil managed { .vtentry 9 : 1 // Code size 93 (0x5d) .maxstack 2 .locals ([0] native int V_0, [1] valuetype [mscorlib]System.Runtime.InteropServices.GCHandle V_1, [2] valuetype [mscorlib]System.Runtime.InteropServices.GCHandle V_2, [3] valuetype [mscorlib]System.Runtime.InteropServices.GCHandle V_3, [4] valuetype 'gc_scoped<System::IO::StreamWriter __gc *, CloseObject>' sf, [5] native int V_5) IL_0000: ldarg.0 IL_0001: call class [mscorlib]System.IO.StreamWriter [mscorlib]System.IO.File::AppendText(string) IL_0006: call valuetype [mscorlib]System.Runtime.InteropServices.GCHandle [mscorlib]System.Runtime.InteropServices.GCHandle::Alloc(object) IL_000b: stloc.2 IL_000c: ldloc.2 IL_000d: stloc.1 IL_000e: ldloc.1 IL_000f: call native int [mscorlib]System.Runtime.InteropServices.GCHandle::op_Explicit (valuetype [mscorlib]System.Runtime.InteropServices.GCHandle) IL_0014: stloc.s V_5 IL_0016: ldloca.s sf IL_0018: ldloca.s V_5 IL_001a: call instance int32 [mscorlib]System.IntPtr::ToInt32() IL_001f: stind.i4 .try { IL_0020: ldloca.s V_0 IL_0022: initobj [mscorlib]System.IntPtr IL_0028: ldloca.s V_0 IL_002a: ldloca.s sf IL_002c: ldind.i4 IL_002d: call instance void [mscorlib]System.IntPtr::.ctor(int32) IL_0032: ldloc.0 IL_0033: call valuetype [mscorlib]System.Runtime.InteropServices.GCHandle [mscorlib]System.Runtime.InteropServices. GCHandle::op_Explicit(native int) IL_0038: stloc.3 IL_0039: ldloca.s V_3 IL_003b: call instance object [mscorlib]System.Runtime.InteropServices. 
GCHandle::get_Target() IL_0040: ldarg.1 IL_0041: callvirt instance void [mscorlib]System.IO.TextWriter::WriteLine(string) IL_0046: leave.s IL_0055 } // end .try fault { IL_0048: ldsfld int32** __unep@??1?$gc_scoped@P$AAVStreamWriter @IO@System@@VCloseObject@@@@$$FQAE@XZ IL_004d: ldloca.s sf IL_004f: call void modopt( [mscorlib]System.Runtime.CompilerServices.CallConvCdecl) __CxxCallUnwindDtor(method unmanaged thiscall void modopt( [mscorlib]System.Runtime.CompilerServices.CallConvThiscall) *(void*), void*) IL_0054: endfinally } // end handler IL_0055: ldloca.s sf IL_0057: call void modopt([mscorlib] System.Runtime.CompilerServices.CallConvThiscall) 'gc_scoped<System::IO::StreamWriter __gc *, CloseObject>.__dtor' (valuetype 'gc_scoped<System::IO::StreamWriter __gc *, CloseObject>'* modopt([Microsoft.VisualC]Microsoft.VisualC.IsConstModifier) modopt([Microsoft.VisualC]Microsoft.VisualC.IsConstModifier)) IL_005c: ret } // end of method 'Global Functions'::WriteRaii
Compare this to
WriteNoRaii:
.method public static void modopt( [mscorlib]System.Runtime.CompilerServices.CallConvCdecl) WriteNoRaii(string path, string text) cil managed { .vtentry 1 : 1 // Code size 29 (0x1d) .maxstack 4 .locals ([0] class [mscorlib]System.IO.StreamWriter sf) IL_0000: ldnull IL_0001: stloc.0 .try { IL_0002: ldarg.0 IL_0003: call class [mscorlib]System.IO.StreamWriter [mscorlib]System.IO.File::AppendText(string) IL_0008: stloc.0 IL_0009: ldloc.0 IL_000a: ldarg.1 IL_000b: callvirt instance void [mscorlib]System.IO.TextWriter::WriteLine(string) IL_0010: leave.s IL_001c } // end .try finally { IL_0012: ldloc.0 IL_0013: brfalse.s IL_001b IL_0015: ldloc.0 IL_0016: callvirt instance void [mscorlib]System.IO.StreamWriter::Close() IL_001b: endfinally } // end handler IL_001c: ret } // end of method 'Global Functions'::WriteNoRaii
As you can see, the compiler was not able to optimize away the GCHandles, and to my surprise it didn't even inline the gc_scoped destructor. Therefore, I expected some performance penalty, but how much exactly? To answer this question, I ran both functions 100,000 times. The version with WriteRaii took approximately 20% more time than the version with WriteNoRaii.
The performance of gc_scoped therefore turned out to be pretty disappointing. However, giving up gc_scoped altogether for the sake of performance would fit Knuth's definition of premature optimization. While there are cases where the performance cost of using gc_scoped would be unacceptable (I wouldn't recommend using it inside a tight loop), in many cases the benefits of automatic resource management will be more important.
RAII is a powerful and simple idiom that makes resource management much easier. With the gc_scoped class template, it is possible to use RAII with __gc types. However, unlike with native C++, there is a performance penalty that may or may not be significant in your applications.
http://www.codeproject.com/KB/mcpp/managedraii.aspx
Mark a struct as a compute tag by inheriting from this.
#include <Tag.hpp>
A compute tag is used to identify an item in a DataBox that will be computed on-demand from other items in the DataBox. A compute tag must be derived from a simple tag corresponding to the type of object that is computed. This simple tag can be used to fetch the item corresponding to the compute tag from a DataBox.
A compute tag contains a member named function that is either a static constexpr function pointer or a static function. The compute tag must also have a type alias argument_tags that is a typelist of the tags that will be retrieved from the DataBox and whose data will be passed to the function (pointer). Compute tags must also contain a type alias named return_type that is the type the function is mutating. The type must be default constructible.
By convention, the name of a compute tag should be the name of the simple tag that it derives from, with Compute appended.
Derived Class Requires:
- return_type: the type of the item computed
- base: the simple tag from which it is derived
- function: either a function pointer or a static constexpr function that is used to compute the item from the argument_tags fetched from the DataBox
- argument_tags: a tmpl::list of the tags of the items that will be passed (in order) to the function specified by function
A compute tag may optionally specify a static std::string name() method to override the default name produced by db::tag_name.
The function may be either a function pointer or a plain static function; using a static function offers a lot of simplicity for very simple compute items. Note that argument_tags can also be an empty tmpl::list.
https://spectre-code.org/structdb_1_1ComputeTag.html
I've seen a few APIs based around RSS feeds, and decided to take a different approach. The idea behind mine is to model the feed in C# classes. If done correctly, the classes serialization could produce an RSS feed, and the deserialization would handle all the XML parsing.
An RSS feed is an XML-based format for distributing site content. Feeds have three basic elements: the RSS feed itself, a channel within the feed, and the Item elements within the channel.
Many Microsoft products support RSS feeds, including Outlook and Internet Explorer.
The program contains three main classes to model the feed: Rss, Channel, and Item.
The key to ridding us of XML parsing duties is the XmlSerializer class. It handles serialization for .NET, and allows classes and properties to specify attributes that override the default serialization.
Basic serialization will work as is for the Item and Rss classes. They contain properties that, when serialized, will produce the correct elements for the feed. We will, however, override the element name in Rss so we can name the class with an uppercase letter (the class name is otherwise used as the element name).
[XmlRootAttribute(ElementName = "rss")] // change the element name to rss, not Rss
public class Rss
{
Also, the version of RSS you are using needs to be represented as an attribute instead of as an element. We use the XmlAttribute attribute for this.
[XmlAttribute("version")]
public string Version
{
...
The Channel class has to override the default serialization. It needs a container of Item objects, but by default, the collection variable name will become the element under Channel. We need to use the XmlElement attribute to override the way the array is represented in the feed.
We specify a name for an "item" with a type of our Item class:
[XmlElement(ElementName="item", Type=typeof(Item))]
public List<Item> items
{
...
This does the trick. Now, instead of an <items> element, we get a list of <item>s, each containing our serialized Item class.
Now, all we have is the rather trivial code of using the XmlSerializer class to load or save our feed. I accomplish this with two, three line functions in the Rss class.
Load:
XmlSerializer serializer = new XmlSerializer(typeof(Rss));
Rss feed = (Rss)serializer.Deserialize(fileStream);
return feed;
Save:
XmlSerializer serializer = new XmlSerializer(typeof(Rss));
FileStream stream = File.Create(filename);
serializer.Serialize(stream, this);
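Putting the snippets together, a minimal end-to-end sketch might look like this (the channel fields, defaults, and helper method names beyond those shown above are illustrative, not the article's exact code):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

public class Item
{
    public string title { get; set; }
    public string link { get; set; }
    public string description { get; set; }
}

public class Channel
{
    public string title { get; set; }
    public string link { get; set; }
    public string description { get; set; }

    // Serialize the list as repeated <item> elements, not one <items>.
    [XmlElement(ElementName = "item", Type = typeof(Item))]
    public List<Item> items { get; set; } = new List<Item>();
}

[XmlRootAttribute(ElementName = "rss")] // lowercase root element
public class Rss
{
    [XmlAttribute("version")]
    public string Version { get; set; } = "2.0";

    public Channel channel { get; set; } = new Channel();

    public static Rss Load(string filename)
    {
        var serializer = new XmlSerializer(typeof(Rss));
        using (var stream = File.OpenRead(filename))
            return (Rss)serializer.Deserialize(stream);
    }

    public void Save(string filename)
    {
        var serializer = new XmlSerializer(typeof(Rss));
        using (var stream = File.Create(filename))
            serializer.Serialize(stream, this);
    }
}
```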
I added an example with a simple aggregator displaying how the class works. The <enclosure> element isn't supported due to the lack of feeds I have found which contain it. I've tested quite a few feeds, and it works great with them.
http://www.codeproject.com/Articles/29033/Using-serialization-to-produce-RSS-feeds-from-C-cl
Hi all - apologies if this is a tired old question. I couldn't find anything on first glance in the forums or tutorials.
I'm new to C, and I'm trying to find out from inside a function how many characters have been allocated in memory to a string array. The catch is that the function only has a pointer to the array that it takes as an argument. I've tried something like this:
(Obviously there would be no point in the get_free_chars function alone - I'm trying to do something similar within a string manipulation function I'm coding.) Code:
#include <stdlib.h>
int get_free_chars(char *ptr)
{
return sizeof(ptr)/sizeof(char);
}
main()
{
char foo[50] = "";
int a = sizeof(foo) / sizeof(char);
int b = get_free_chars(foo);
}
'a' takes the value I'd expect - 50 - but 'b' comes out to be 4 (obviously because the program is measuring the size of the pointer (4 bytes = 32 bits, which is the size for my system) rather than the array itself). Doing sizeof(*ptr) obviously wouldn't work as that would point directly to the first character only.
Is there a nice way to get this information from within the function?
Thanks in advance!
https://cboard.cprogramming.com/c-programming/128498-beginner-pointers-arrays-question-printable-thread.html
Details
- Type:
Improvement
- Status: Closed
- Priority:
Major
- Resolution: Fixed
- Affects Version/s: 0.22.0
- Fix Version/s: 0.20-append, 0.20.205.0, 0.22.0
- Component/s: hdfs-client
- Labels: None
- Hadoop Flags: Reviewed
Description.
Issue Links
- blocks
HBASE-2467 Concurrent flushers in HLog sync using HDFS-895
- Closed
- relates to
-
Activity
+1. This sounds good. The only problem is that it is not easy to use because the application is forced to use multi-threads to write data. I guess what hbase needs is flush API1 as discussed in
HADOOP-6313. I created an issue HDFS-895 to track the implementation.
i think it's worth verifying that this would actually help hbase throughput (just a theory right now i think).
we could set the hbase queue threshold to 1 and test with fake sync (that just returns immediately) and real sync and see what the difference is (is the sync time really holding back overall throughput (as intuition says it should be)).
also - the proposal is to overlap actual network traffic and not just the buffer copies across app/dfs - right?
> Do we then allow multiple concurrent sync calls, each waiting for a different seqno? Sounds like yes.
This sounds right.
I did the experiment Joydeep described on 1 machine. I replaced the line where we call hflush in the write-ahead-log with:
if(now % 3 == 0) Thread.sleep(1);
At first I just slept for 1 ms, but that was already 2-3 times slower than normal sync time; I guess it's very hard for the JVM to schedule such a small sleep time. The "now" variable is a System.currentTimeMillis() result obtained just before and used for other metrics.
So with this modification a single client takes as much time inserting as with normal sync and 4 clients take almost the same time on average to insert a value. With sync and 4 clients, it takes twice the time to insert a single value.
It would tend to confirm that the synchronization between append and sync costs a lot for multi-threaded clients.
very cool. thanks. i suspect the speedup would be more with higher number of clients - but this seals the deal.
Anybody working on this? I'm interested in doing so, if not.
Hi Todd, I am not working on this one yet and if you have an implementation that will be great.
The HBase use-case is that one thread will be calling the hflush() on a file handle while many other threads could be trying to write concurrently to that same file handle.
Here's a preliminary patch against 0.20 sync (will forward port it, but HBase on 20 makes a good testing ground). It could do with a thorough code review, as this is tricky code, but the general idea is simple enough. Also I want to augment the unit test to do some data verification.
The included test case can also be run as a benchmark, where it runs 10 threads, each of which just appends 511-byte chunks and calls sync for each one. With the patched DFSClient, it runs in about 33 seconds on my test cluster. Without the patched DFSClient it took 290 seconds (and jstack shows most threads blocked most of the time). This is confirming that we expected - there's a lot of parallelism to be gained for multithreaded writers.
Here's a patch against trunk. There are a couple TODOs in there with questions for reviewers. I haven't had a chance to run any benchmarks for the trunk version, but should be similar speedup.
Found a couple bugs in this, so cancelling patch available for the time being while I work them out.
Also, I'd like to propose an extension to the hflush() API: hflush(long fileOffset) would ensure that the file has been flushed up to a length >= fileOffset. We can do this pretty easily by examining the ackQueue and dataQueue - if we find a packet in either queue that contains the given offset, we wait for that seqnum to be acked. If we don't find such a packet, and the byte offset is smaller than the top of ackQueue, it's already been acked, and if it's larger than the top of dataQueue, we need to flush like we're doing now.
This would be very useful to HBase, where the appends to the logs don't want to happen in a synchronized block with the flush. Each thread only cares about syncing up to its last write offset.
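As a toy model of the proposed semantics (invented names, not HDFS code), hflush(offset) can examine the ack and data queues to decide whether the requested offset is already durable, pending an ack, or needs a fresh flush:

```java
import java.util.ArrayDeque;

// Toy model of the hflush(fileOffset) decision described above.
class FlushModel {
    static final class Packet {
        final long seqno;
        final long lastByteOffset;
        Packet(long seqno, long lastByteOffset) {
            this.seqno = seqno;
            this.lastByteOffset = lastByteOffset;
        }
    }

    final ArrayDeque<Packet> ackQueue = new ArrayDeque<>();  // sent, not yet acked
    final ArrayDeque<Packet> dataQueue = new ArrayDeque<>(); // not yet sent
    long lastAckedOffset = 0;

    enum Action { ALREADY_ACKED, WAIT_FOR_ACK, FLUSH_NEW_DATA }

    Action hflushUpTo(long fileOffset) {
        if (fileOffset <= lastAckedOffset)
            return Action.ALREADY_ACKED;        // bytes already durable
        for (Packet p : ackQueue)
            if (p.lastByteOffset >= fileOffset)
                return Action.WAIT_FOR_ACK;     // wait for that packet's seqno
        for (Packet p : dataQueue)
            if (p.lastByteOffset >= fileOffset)
                return Action.WAIT_FOR_ACK;     // queued; wait for its ack
        return Action.FLUSH_NEW_DATA;           // flush like hflush() does today
    }

    public static void main(String[] args) {
        FlushModel m = new FlushModel();
        m.lastAckedOffset = 100;
        System.out.println(m.hflushUpTo(50)); // ALREADY_ACKED
    }
}
```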
new patch against 20 that I've been testing with HBase. Will post a new patch against trunk this week, having some timeouts with FI tests that I need to understand.
I like the idea of having a hflush(offset) type of call. It will be non-standard in the sense that I know of no other filesystems that has such a call, but it would benefit hbase a lot.
It's not entirely alone - Linux these days (since 2.6.17) has sync_file_range(2) which is pretty similar
re: the patch())
Took some time to update this to newest trunk, based on the fixes in the 20-append patch. This attachment hdfs-895-review.txt shows the patch broken up into three separate commits - first two refactors and then the actual parallel sync feature. It should be easier to understand the patch and review it this way. Will upload the total patch as well.
Here's the same patch as just one diff.
FWIW, the 20-append patch posted here has been in our distro for a couple months with lots of people using the new feature through HBase with no issues. So I think it's pretty sound (and represents at least 25-30% improvement for HBase write throughput according to JD's comments in
HBASE-2467)
Todd, could you please upload an updated patch for 0.20? Jonathan is asking me if I could commit this.
@Hairong He's out for a few days (wandering temples in foreign lands)
Stack, looks that we have to wait until Todd is back. This one is performance issue. Theoretically the release could be cut without it.
Has anyone taken a look at the patch for trunk? I think we wanted to hold off putting this in branch-20-append until it's in trunk. The latest trunk patch appears to still apply. Once that has been reviewed I'll re-do the branch-20 patch. [hello from Kyoto!]
The patch looks good. A few questions:
1. Does this work with the heartbeat packet?
2. line 1387: dataQueue.wait(1000); is it possible to use dataQueue.wait();
3. lines 1294-1299: when would this happen?
if (oldCurrentPacket == null && currentPacket != null)
Thanks for the review.
Does this work with the heartbeat packet?
Line 657-658 check for the heartbeat sequence number before we set lastAckedSeqno in ResponseProcessor.run(), so it should work as before.
dataQueue.wait(1000); is it possible to use dataQueue.wait();
Yes, I think that would also work. In theory, the timeout isn't necessary, but I've seen bugs before where a slight race causes us to miss a notify. Perhaps we can switch the synchronized (dataQueue) and the while (!closed) in this function and avoid the race? It seems to me that waking up once a second just to be extra safe doesn't really harm us, but maybe it's a bit of a band-aid.
lines 1294-1299: when would this happen?
This is a bit subtle - it was one of the bugs in the original version of the patch. Here's the sequence of events that it covers:
- write 10 bytes to a new file (ie no packet yet)
- call hflush()
- it calls flushBuffer, so that it enqueues a new "packet"
- it sends that packet - now we have no currentPacket, and the 10 bytes are still in the checksum buffer, lastFlushOffset=10
- we call hflush() again without writing any more data
- it calls flushBuffer, so it creates a new "packet" with the same 10 bytes
- we notice that the bytesCurBlock is the same as lastFlushOffset, hence we don't want to actually re-send this packet (there's no new data, so it's a no-op flush)
- hence, we need to get rid of the packet and also decrement the sequence number so we don't "skip" one
Without this fix, we were triggering 'assert seqno == lastAckedSeqno + 1;' when calling hflush() twice without writing any data in between
Thanks to JD who reminded me of a small bug we had fixed on the 20 version of this patch that didn't make it into the trunk patch. Working on trunk patch and unit test now.
The bug JD found is an NPE that happens if close() is called concurrent with hflush(). I have a patch that fixes this to IOE, but Nicolas and I have been discussing whether it should be a no-op instead. The logic is that if you append something, then some other thread close()s, then you call hflush(), your data has indeed already been flushed (ie is on disk). Right now hflush() checks that the stream is open first, but instead should it just return if the stream was closed in a non-error state?
I feel that hflush() should throw an exception if the stream is already closed. This is a pretty standard semantics.
Checked a few OutputStream in Java. It seems that they implement flush as noop if the stream is closed. So I am OK if DFSOutputStream does the same.
Here's a delta that shows the patch for the NPE issue for easy review. I'll also upload a full patch against trunk momentarily but figured it would be easier to look at just the changed bits.
Although I think either interpretation of hflush() could make sense, I decided to leave it as it currently is - flushing a stream that's been closed throws IOE. If we want to change this in the future we can do it in a different JIRA rather than conflating a semantic change with this optimization.
Here's full patch against trunk
> - flushing a stream that's been closed throws IOE.
I am not sure why the Java flush API is successful even if the steam is closed. But I would vote for the above semantics, thanks Todd.
+1 peer reviewed this. looks pretty solid.
> we call hflush() again without writing any more data
I see that you are trying to improve the case when flushing twice without writing any data. I am still a little bit uncomfortable with this fix. (I feel that it is hard to understand and maintain. Ideally we should not create a packet in this case.)
Could this patch go without this fix? I would prefer to have a different jira to improve hflush without data. I think I filed a jira a while back.
Could this patch go without this fix? I would prefer to have a different jira to improve hflush without data. I think I filed a jira a while back.
I agree that that bit of code is hard to understand. It also "works fine" without the fix, but it does trigger an assertion if assertions are enabled – a sequence number will get "skipped". So I would prefer to keep the fix in, and in the JIRA you mentioned (avoid creating a packet in the first place for empty flush) we can hopefully get rid of the confusing code. Is that alright?
> but it does trigger an assertion if assertions are enabled - a sequence number will get "skipped".
I do not understand how this would happen if the duplicate packet also gets sent to the pipeline. Did I miss anything?
Ah, I see what you're saying... I think that would work in theory, but given we've had a lot of production testing of the patch as is, I'm a little nervous to make that change at this point and lose some of that confidence.
0.22 in production?
I would prefer not to get the confusing code in especially this code is not related to this issue. Because once the code is in, it is very hard to get it out especially when you work in an open community.
@Hairong I believe Todd is referring to the this patch being run in production here at SU for last 3 months as well as whatever deploys there are atop CDHs that have this patch applied.. not 0.22.
Hey Hairong. I had actually recalled incorrectly which part of that confusing code is new - only the "currentSeqno--" code is new, to prevent skipping a sequence number. Here's a diff that ignores whitespace change:
//; }
As you can see we already had the code that avoided duplicate packets.
OK I see. So this piece of code is only for making the assertion work. I still have a question, if lastFlushOffset == bytesCurBlock, when will this condition to be true: oldCurrentPacket != null && currentPacket != null?
Please understand I did not mean to give you a hard time. I really think this seq# change is unrelated to this issue and unnecessary. It is simpler and less error prone just removing it together with the assertion. The HDFS pipeline side code is very complicated and is hard to get it right. I would prefer not to make any change unless necessary.
I still have a question, if lastFlushOffset == bytesCurBlock, when will this condition to be true: oldCurrentPacket != null && currentPacket != null?
I don't think that will ever be true. We do get the case oldCurrentPacket == null && currentPacket == null though when we call flush twice at the beginning of any block. So I think we can add an assert assert oldCurrentPacket == null in that else clause.
Please understand I did not mean to give you a hard time
No worries - I agree that this code is very tricky, which is why I'd like to keep the asserts at this point. The assert guards what we all thought was an invariant: sequence numbers should increase by exactly one with every packet. Nicolas also reviewed this code in depth a few months back, which is when we added this new currentSeqno-- bit. If I recall correctly we discussed a lot whether there was any bug where we could skip or repeat a sequence number, and when we added the assert for in-order no-skipping sequence numbers, we found this bug.
Would it be better to open a very small JIRA to add the assert and fix for it, commit that, then commit this as an optimization? That would keep the two changes orthogonal and maybe easier to understand?
Stack requested a provisional patch for branch-20-append. Here it is, not for commit until the discussion on trunk is resolved.
> Would it be better to open a very small JIRA to add the assert and fix for it, commit that, then commit this as an optimization? That would keep the two changes orthogonal and maybe easier to understand?
The assertion is a nice thing to have. Please open a jira for adding this assertion. The ideal fix is not to create the packet. Let's focus this jira on parallel hflush, which is what hbase really needs.
I took out the bug fix for sequence number skip into a patch on
HDFS-1497. This new patch, hdfs-895-ontopof-1497.txt applies on top of HDFS-1497 and adds the parallel flush capability. I think we should commit that bug fix first and then add parallel flush - the safety checks from 1497 make me more confident that we won't accidentally break this tricky code.
OK, here is a patch that applies against just trunk.
Previous patch had two javac warnings for using the deprecated MiniDFSCluster constructor. New patch just fixes those:
< MiniDFSCluster cluster = new MiniDFSCluster(conf, 1, true, null);
> MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
The patch looks good. Two minor comments:
1. line 1291: queueCurrentPacket should be waitAndQueueCurrentPacket
2. I am a little uncomfortable about how LeaseExpirationExeception is handled. LeaseExpirationException does not always mean that the file is closed by the client. It may indicate that the client does not renew the lease and so the lease is really expired. But I guess eventually the client will find out when closing or getting next block. Could you please update the comment just for information.
Please update the patch then post "ant patch" result and "ant test" result.
Fixed the queueCurrentPacket to waitAndQueueCurrentPacket()
Also got rid of the hack for LeaseExpiredException, since we decided above that concurrent close and hflush() should be an IOException anyway.
+1. This looks good to me except that the following unnecessary import.
import org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException;
Could you please post ant patch & test results?
I ran the unit tests and caught one more bug that caused TestFiPipelineClose to fail. The issue was that if faliure recovery happens in PIPELINE_CLOSE stage, the "last packet in block" packet gets removed from dataQueue after the close() caller is already waiting for that sequence number. Thus the sequence number never comes and the caller of close() hangs. The fix is to set lastAckedSeqNo to the lastPacketInBlock seqno when it is removed from dataQueue. I also added some asserts in this code path.
With this latest patch, the following tests fail:
[junit] Test org.apache.hadoop.hdfs.TestFileStatus FAILED [HDFS-1470]
[junit] Test .fs.TestHDFSFileContextMainOperations FAILED [HDFS-874]
[junit] Test org.apache.hadoop.hdfs.TestPipelines FAILED [HDFS-1467]
[junit] Test org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer FAILED [HDFS-1500]
All of these also fail in trunk - I put the relevant JIRAs next to them above. So, no new failures caused by this patch.
Test-patch results:
The release audit is incorrect - same is true of an empty patch on HDFS trunk (known issue)
Good catch, Todd!
I've just committed this to trunk. Do we also need to get it in 0.20 append?
Yes, we could get this to 0.20-append too. Thanks Hairong.
Yes, we should get this into 20-append for HBase. Right now there seems to be an issue with the HDFS-724 patch in 20-append, and since these touch very similar areas of the write pipeline, I want to either temporarily revert 724 from 20-append, or figure out what's wrong with it. No sense adding another variable into the mix when our current branch has some problems.
@Todd Do we need a refresher on this patch for 0.20-append? Looks like you fixed up a few things subsequent to your last 0.20-append version. hdfs-724 has gone in as well as the fixup for the misapplication of the backport to 0.20-append and seems to be working properly. Thanks.
Stack requested a patch for 0.20-append. This one ought to work, but I haven't done testing aside from running the new unit test. It's based on the patch from CDH3 which has been deployed and tested at scale, but slightly modified based on the more recent changes to the trunk version.
I've committed this. Thanks, Todd!
Patch uploaded for 20-security.
+1 for the patch.
I committed the patch to 0.20-security.
Closed upon release of 0.20.205.0
Do we then allow multiple concurrent sync calls, each waiting for a different seqno? Sounds like yes.
https://issues.apache.org/jira/browse/HDFS-895?focusedCommentId=12799866&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
Subject: Re: [boost] Coming up on 1.57 release date - get your fixes in
From: Marcel Raad (raad_at_[hidden])
Date: 2014-10-29 11:38:20
Marshall Clow <mclow.lists <at> gmail.com> writes:
> > range/detail/any_iterator.hpp is broken because it expects
> > postfix_increment_proxy and writable_postfix_increment_proxy to be in
> > namespace boost::detail, but they are in boost::iterators::detail now.
>
> This does NOT appear to have been fixed
This has been fixed in develop, but hasn't been merged to master yet.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2014/10/217374.php
Multiple Windows
A guizero application should have only a single App object - this is the main window and controller of your program.
If you want to create a second (or 3rd, 4th, 5th) window, your program should use a Window object.
A Second window
When you create a second Window you need to pass it the App, just like when you create a widget:

from guizero import App, Window

app = App(title="Main window")
window = Window(app, title="Second window")

app.display()
Adding widgets to the second window is the same as adding them to an App. You tell the widget which window it will be in by passing it the name of the Window:

from guizero import App, Window, Text

app = App(title="Main window")
window = Window(app, title="Second window")
text = Text(window, text="This text will show up in the second window")

app.display()
Opening and closing windows
When a Window object is created it is immediately displayed on the screen. You can control whether a window is visible or not using the show() and hide() methods.

This code creates a window which is shown when a button on the App is clicked and closed when a button is clicked in the Window.

from guizero import App, Window, PushButton

def open_window():
    window.show()

def close_window():
    window.hide()

app = App(title="Main window")
window = Window(app, title="Second window")
window.hide()

open_button = PushButton(app, text="Open", command=open_window)
close_button = PushButton(window, text="Close", command=close_window)

app.display()
Modal windows
When a window is opened using show() it opens side by side with the main window, and both windows can be used at the same time.

A "modal" window prevents the other windows in the application being used until it is closed. To create a modal window, you can pass True to the optional wait parameter of show(). This will force all other windows to wait until this window is closed before they can be used.

def open_window():
    window.show(wait=True)
https://lawsie.github.io/guizero/multiple_windows/
Receiving a WMI Event
WMI contains an event infrastructure that produces notifications about changes in WMI data and services. WMI event classes provide notification when specific events occur.
The following sections are discussed in this topic:
Event Queries
You can create a semisynchronous or asynchronous query to monitor changes to event logs, process creation, service status, computer availability or disk drive free space, and other entities or events. In scripting, the SWbemServices.ExecNotificationQuery method is used to subscribe to events. In C++, IWbemServices::ExecNotificationQuery is used. For more information, see Calling a Method.
Notification of a change in the standard WMI data model is called an intrinsic event. __InstanceCreationEvent or __NamespaceDeletionEvent are examples of intrinsic events. Notification of a change that a provider makes to define a provider event is called an extrinsic event. For example, the System Registry Provider, Power Management Event Provider, and Win32 Provider define their own events. For more information, see Determining the Type of Event to Receive.
Example
The following script code example is a query for the intrinsic __InstanceCreationEvent of the event class Win32_NTLogEvent. You can run this program in the background and when there is an event, a message appears. If you close the Waiting for events dialog box, the program stops waiting for events. Be aware that the SeSecurityPrivilege must be enabled.
Sub SINK_OnObjectReady(objObject, objAsyncContext)
    WScript.Echo (objObject.TargetInstance.Message)
End Sub

Set objWMIServices = GetObject( _
    "WinMgmts:{impersonationLevel=impersonate, (security)}")

' Create the event sink object that receives the events
Set sink = WScript.CreateObject("WbemScripting.SWbemSink","SINK_")

' Set up the event selection. SINK_OnObjectReady is called when
' a Win32_NTLogEvent event occurs
objWMIServices.ExecNotificationQueryAsync sink, _
    "SELECT * FROM __InstanceCreationEvent " & _
    "WHERE TargetInstance ISA 'Win32_NTLogEvent' "

WScript.Echo "Waiting for events"
The following VBScript code example shows the extrinsic event __RegistryValueChangeEvent that the registry provider defines. The script creates a temporary consumer by using the call to SWbemServices.ExecNotificationQueryAsync, and only receives events when the script is running. The following script runs indefinitely until the computer is rebooted, WMI is stopped, or the script is stopped. To stop the script manually, use Task Manager to stop the process. To stop it programmatically, use the Terminate method in the Win32_Process class. For more information, see Setting Security on an Asynchronous Call.
strComputer = "."
Set objWMIServices = GetObject( _
    "winmgmts:{impersonationLevel=impersonate}!\\" & _
    strComputer & "\root\default")
Set objSink = WScript.CreateObject( _
    "WbemScripting.SWbemSink","SINK_")

objWMIServices.ExecNotificationQueryAsync objSink, _
    "Select * from RegistryValueChangeEvent Where " & _
    "Hive = 'HKEY_LOCAL_MACHINE' and " & _
    "KeyPath = 'SYSTEM\\ControlSet001\\Control' and " & _
    "ValueName = 'CurrentUser'"

WScript.Echo "Listening for Registry Change Events..."
While (True)
    WScript.Sleep 1000
Wend

Sub SINK_OnObjectReady(wmiObject, wmiAsyncContext)
    WScript.Echo "Received Registry Value Change Event" _
        & vbCrLf & wmiObject.GetObjectText_()
End Sub
Event Consumers
You can monitor or consume events using the following consumers while a script or application is running:
- Temporary event consumers
A temporary consumer is a WMI client application that receives a WMI event. WMI includes a unique interface that you use to specify the events for WMI to send to a client application. A temporary event consumer is considered temporary because it only works when specifically loaded by a user. For more information, see Receiving Events for the Duration of Your Application.
- Permanent event consumers
A permanent consumer is a COM object that can receive a WMI event at all times. A permanent event consumer uses a set of persistent objects and filters to capture a WMI event. For more information, see Receiving Events at All Times.
Scripts or applications that receive events have special security considerations. For more information, see Securing WMI Events.
An application or script can use a built-in WMI event provider that supplies standard consumer classes. Each standard consumer class responds to an event with a different action by sending an email message or executing a script. You do not have to write provider code to use a standard consumer class to create a permanent event consumer. For more information, see Monitoring and Responding to Events with Standard Consumers.
Providing Events
An event provider is a COM component that sends an event to WMI. You can create an event provider to send an event in a C++ or C# application. Most event providers manage an object for WMI, for example, an application or hardware item. For more information, see Writing an Event Provider.
A timed or repeating event is an event that occurs at a predetermined time.
WMI provides the following ways to create timed or repeating events for your applications:
- The standard Microsoft event infrastructure.
- A specialized timer class.
It is recommended that permanent event subscriptions be compiled into the \root\subscription namespace. For more information, see Implementing Cross-Namespace Permanent Event Subscriptions.
Subscription Quotas
Polling for events can degrade performance for providers that support queries over huge data sets. Additionally, any user that has read access to a namespace with dynamic providers can perform a denial of service (DoS) attack. WMI maintains quotas for all of the users combined and for each event consumer in the single instance of __ArbitratorConfiguration located in the \root namespace. These quotas are global rather than for each namespace. You cannot change the quotas.
WMI currently enforces quotas using the properties of __ArbitratorConfiguration. Each quota has a per user and a total version that includes all users combined, not per namespace. The following table lists the quotas that apply to the __ArbitratorConfiguration properties.
An administrator or a user with FULL_WRITE permission in the namespace can modify the singleton instance of __ArbitratorConfiguration. WMI tracks the per-user quota.
Related topics
https://msdn.microsoft.com/en-us/library/aa393013(v=vs.85).aspx
Debug C++ Code with DS-5¶
Before debugging C++ code in Cocos Code IDE, ARM DS-5 should be installed. Click here to install DS-5.
Initial configuration of DS-5 needs to be done before debugging Android C++ code. A DS-5 configuration wizard has been built into Cocos Code IDE since version 1.1.0 to help you configure DS-5 automatically.
In addition, the following tools also needed:
- Cocos2d-x 3.3 or Cocos2d-JS 3.2 or above
- Android SDK
- Android NDK r10c or above
- Apache Ant 1.9 or above
Debugging Steps¶
First, create a game that contains C++ code, then select the project and click the "DS-5 Debug ..." button on the toolbar.
If you have not configured the Android compile environment, the settings dialog box will be opened automatically. If the environment has already been set up, this step will be skipped.
In addition to selecting the Android SDK version, you also need to set which version of the GCC toolchain to use to compile the C++ code, because the current DS-5 only supports debugging programs compiled with GCC. You should select a recent version of GCC, since older versions may cause unexpected bugs.
You can click the "Generate" button to start the build after the compiler options are set. The compile operation will take a while; after a successful compilation, click "Debug ..." to enter the DS-5 configuration dialog.
All necessary options have been filled in; the only thing left before starting to debug is to connect an Android device with debug mode turned on via a USB cable.
DS-5 will break automatically when debugging begins; you can then select a file in the left corner of the project management view for breakpoint insertion.
Double click to open the file, then double click the left column of the code window to set breakpoints.
However, so far DS-5 debug is not able to stop the program at its entry point on Android, so some of the early code may already have run. Once a breakpoint is set, click "Continue" or press "F8" to continue debugging to the breakpoint and stop there.
You can click "Debug" in the upper left corner to restart debugging after it has stopped. If you modify the C++ code, select the "Cocos Tools" -> "DS-5 Debug ..." menu in the project's quick menu to open the wizard for recompilation. Other ways to compile the code also work.
Following the above instructions, you can use DS-5 in Cocos Code IDE to debug C++ code on the Android platform. So can we debug C++ code and script code simultaneously? Yes! Very simple:
1. Start DS-5 debug and keep the game in the connection waiting view.
2. Switch to the Cocos Lua or Cocos JS perspective view.
3. Start script debug with "Remote Debug" mode.
Tips¶
The table of engine version and the Android NDK version which has been tested:
- *: The external libraries should recompile using GCC4.9.
When debugging with DS-5, the application can't execute script logic and stays in the wait-connection view if the engine version is older than Cocos2d-x 3.3 or Cocos2d-JS 3.2. To fix this bug, please modify the function "lua_cocos2dx_runtime_addSearchPath" in the file:
"<PROJECT>/frameworks/runtime-src/Classes/runtime/Runtime.cpp":
int lua_cocos2dx_runtime_addSearchPath(lua_State* tolua_S)
{
    ......
    // Modify the 'if' condition, at line: 1090
#if (CC_TARGET_PLATFORM == CC_PLATFORM_IOS || CC_TARGET_PLATFORM == CC_PLATFORM_ANDROID)
    cobj->addSearchPath(originPath);
#endif
    ......
}
DS-5 needs to call the command-line tool "adb" in the Android SDK to identify USB-connected devices. This works out of the box on Windows, but on other systems you need to add "adb" to the system path manually. Command as below:
$> sudo ln /platform-tools/adb /usr/bin/adb
To learn more about DS-5, please refer to the official ARM documentation: "DS-5 Community Edition Android Debug".
http://cocos2d-x.org/wiki/Debug_C++_Code_with_DS-5
I don't know how much I like the ideas, but I sure do like their names!
Mine doesn't kill anyone,
but discourages people from
using perl in real-world applications..
A quick Google search doesn't seem to shed enough light on it.
Which leads me down twisty path to...
Dame::Edna would be (is?) a hoot. How about Fantastic::Four? There's a whole library/namespace for Sin::City. XMen::Wolverine and XMen::Mystique?
On another tack, Homework::Procrastination seems like an easy module to write, though it will never be used. What about Test::Homework to analyze the parse tree and compare it to previously known or suspected samples of homework? Homework::Bootstrapper which requires a SOPW node number, and grabs the code from the highest XP node, creates a local module out of it, and uses it?
-QM
--
Quantum Mechanics: The dreams stuff is made of
Wouldn't that make it a SOPWith::Camel?
Jack
I've been pondering Perl::ReadMyMind, for when I'm out of debugging ideas and I just think the proggy should do what I want it to do, not what I told it to do...
http://www.perlmonks.org/?node_id=491721
Synopsis
Add a keyword to a crate given a name and a value.
Syntax
set_key(crate, keyname, value, unit=None, desc=None)
Description
Create, or replace, a keyword in the header of the crate. If the unit argument is not None then it is a string listing the units of the key. If desc is not None then it is used as the description of the key. If the keyword already exists, then set unit="" and desc="" to clear out the previous settings, otherwise they will be retained.
Only the crate is changed; the input file is unaffected. The write_file command can be used to save the modified crate to a file.
The add_key routine can be used to create a keyword from a CrateKey object.
Units
The unit field can contain any text, but it is best to follow one of the standards, such as Specification of Physical Units within OGIP FITS files and Units in the Virtual Observatory.
Examples
Example 1
>>> cr = read_file("evt2.fits")
>>> set_key(cr, 'TIMEDEL', 0.00285, unit='s')
Create a new key named "TIMEDEL" with a value of 0.00285 and units of seconds, then add it to the crate.
Example 2
from pycrates import *

cr = read_file('in.fits')
x = cr.get_column('x')
z = cr.get_column('z')
set_key(cr, 'XMEAN', x.mean(), unit=x.unit)
set_key(cr, 'ZMAX', z.values.max(), unit=z.unit, desc='Max of Z')
set_key(cr, 'CONV', True)
cr.write('out.fits', clobber=True)
Here we add the XMEAN and ZMAX keywords to a crate, the first with the mean of the x array and the second with the maximum value of the z column. Since we want to copy over any unit field from the column to the keywords, we use the get_column method of the crate to return a CrateData object for each column, rather than get_colvals, which would return just the column data. The CONV keyword is set to True before the file is written out to out.fits.
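The retain-unless-cleared behaviour of the unit and desc arguments described above can be mimicked without CIAO installed. The MiniCrate class below is a made-up stand-in for illustration only, not part of pycrates:

```python
class MiniCrate:
    """Toy stand-in for a crate header: keyword -> (value, unit, desc)."""
    def __init__(self):
        self.keys = {}

    def set_key(self, name, value, unit=None, desc=None):
        # Mirror the documented behaviour: replacing a key keeps the old
        # unit/desc unless they are explicitly overwritten.
        _, old_unit, old_desc = self.keys.get(name, (None, None, None))
        if unit is None:
            unit = old_unit
        if desc is None:
            desc = old_desc
        self.keys[name] = (value, unit, desc)

cr = MiniCrate()
cr.set_key("TIMEDEL", 0.00285, unit="s")
cr.set_key("TIMEDEL", 0.00300)           # unit "s" is retained
assert cr.keys["TIMEDEL"] == (0.00300, "s", None)
cr.set_key("TIMEDEL", 0.00300, unit="")  # pass "" to clear the unit
assert cr.keys["TIMEDEL"][1] == ""
```

This is why the documentation says to set unit="" and desc="" when you want to clear previous settings: passing nothing at all leaves them unchanged.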
Bugs
See the bug pages on the CIAO website for an up-to-date listing of known bugs.
See Also
- crates
- add_key, cratekey, delete_key, get_key, get_key_names, get_keyval, key_exists, set_colvals, set_keyval, set_piximgvals
https://cxc.cfa.harvard.edu/ciao/ahelp/set_key.html
On 2017-10-23, Shad Storhaug wrote:
> The HHMM directory/namespace was changed from upper to pascal case. It
> looks like the embedded resources in those folders didn't get renamed
> for some reason. However, those embedded resources are not intended
> for use by end users, so it is not a blocker (but something that
> should be fixed). You are right that it probably won't build on Linux,
> but that is not a tested scenario anyway.
If you want me to I can fix it from my Linux box on master by moving all
files to Hhmm - I assume fixing it on a case-insensitive file system is
a bit more work.
> Do note I did use RAT to locate and add the license header to 54
> files. You are welcome to give it another pass, though.
Ah, please leave a small part where I can contribute ;-)
Seriously, I appreciate you having done all the work (and in particular
also thinking of the boring legal stuff). Running RAT is just part of my
standard "script" when voting on any ASF release.
Stefan
https://mail-archives.eu.apache.org/mod_mbox/lucenenet-dev/201710.mbox/%3C87y3o2ypiq.fsf@v45346.1blu.de%3E
Hi, I am new to Hibernate. Please tell me how to maintain data in a database using Struts + Hibernate (see /struts/struts-hibernate/struts-hibernate-plugin.shtml).

Hi friends, how do I create a GUI-based application for cryptography (encryption and decryption) with multiple algorithms like Caesar, hash, etc.?
http://roseindia.net/tutorialhelp/comment/199
4 Jul 22:45 2005
Re: Avoiding linking final executable to TH
Lemmih <lemmih <at> gmail.com>
2005-07-04 20:45:03 GMT
On 7/4/05, Einar Karttunen <ekarttun <at> cs.helsinki.fi> wrote:

I hacked up Zeroth to overcome the linking problem. Zeroth is a preprocessor which scans Haskell source files (using haskell-src-exts) for top level splices and evaluates them. For example:

    module TestTH where
    #ifdef HASTH
    -- import TH modules here.
    #endif
    -- Simple declaration
    $( [d| x = "test" |] )

becomes

    module TestTH
    -- Simple declaration
    x = "test"

However, Zeroth is hardly more than a dirty fix so use with care.

Darcs repository:
Haskell-src-exts:

I've attached a patch with Cabal support for zeroth.

--
-- Friendly, Lemmih
_______________________________________________ template-haskell mailing list template-haskell <at> haskell.org
I just read Pat Eyler's blog Reading Ola Bini writing about some interesting discussions on Ruby metaprogramming and how it compared with Lisp macros for writing Domain Specific Languages. In one of the references Why Ruby is an acceptable LISP, amongst other things people discuss how to implement prolog as a DSL in Ruby or Lisp. A long time ago some of my 'hobby programming' projects were writing prolog interpreters in various languages; I started off with a Pascal one and added things to it, translated it into Modula-2, and I did an Object-Oriented one in Objective-C. I've started translating the Objective-C one into Ruby, and it's quite fun seeing how the code compares in the two languages.
In the Objective-C prolog I didn't attempt to use the language as a DSL, I used lex and yacc to tokenise and parse the prolog code. That allowed me to do a pretty complete implementation, apart from the 'op' predicate which allows you to define new operators with precedences to implement DSLs in prolog (although they weren't called DSLs then). With Ruby I think the language is just about powerful enough to implement a simple prolog in Ruby itself. Here's my idea of what it could look like:
# Edinburgh Prolog:
#
#   :- [consult('comp.pl'), write(comp), nl].
#
-[consult('comp.pl'), write(comp), nl]

# Edinburgh Prolog:
#
#   female(mary).
#   likes(mary, wine).
#   likes(john, X) :- female(X), likes(X, wine).
#
female(mary)
likes(mary, wine)
likes(john, X) << [female(X), likes(X, wine)]

# Edinburgh Prolog:
#
#   my_numbervars(F, I, N) :-
#     F =.. [_ | Args],
#     my_numbervars(Args, I, N).
#
my_numbervars(F, I, N) << [F === [_ | Args], my_numbervars(Args, I, N)]
The first form consists of a list of goals in square brackets, and it executed when the prolog code is parsed like class or module method calls in Ruby. It is usually used to read prolog sources from files into your running program. To implement that in Ruby, you can add a unary minus operator to the Array class like this:
# Parse code like '-[consult('comp.pl'), write(comp), nl]'
class Array
  def -@()
    return SomePrologClass.new(self)
  end
end
# take 'id' as a missing method with its args, and return a version
# transformed into prolog code
def self.method_missing(id, *args)
end
The same method_missing() will also trap calls like 'female(mary)', but if we have 'female(X)', the logic variable as a missing constant X, will be diverted to another method, 'const_missing'. So we need to define that method to return a logic variable, as an instance of the 'NamedVariable' class:
# Look for logic variable such as X in 'female(X)'
def self.const_missing(const, *args)
  return NamedVariable.new(const)
end
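For readers more familiar with Python, roughly the same trick can be sketched with `__getattr__`, which intercepts unknown attribute lookups much as `method_missing`/`const_missing` do. This is purely my illustration, not part of the post's Ruby or Objective-C code; `NamedVariable`, `FunctionTerm`, and `TermBuilder` are made-up stand-ins:

```python
# Hypothetical Python analog of the Ruby trick: __getattr__ intercepts
# unknown attribute lookups the way method_missing/const_missing do.
class NamedVariable:
    def __init__(self, name):
        self.name = name

class FunctionTerm:
    def __init__(self, functor, args):
        self.functor = functor
        self.args = args

class TermBuilder:
    def __getattr__(self, name):
        # Capitalised names play the role of Prolog logic variables
        # (Ruby routes these to const_missing).
        if name[0].isupper():
            return NamedVariable(name)
        # Lowercase names become predicate builders
        # (Ruby routes these to method_missing).
        return lambda *args: FunctionTerm(name, args)

t = TermBuilder()
goal = t.likes(t.X, "wine")   # builds likes(X, wine) as a term
```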
A prolog clause consists of a head, followed by an 'if' operator and a sequence of goals. In Ruby the 'if' operator is a left shift, and the sequence of goals are in an Array. So we need to define left shift as an operator method which takes two arguments; the head of the clause, and the list of sub-goals. To make this work we need method_missing() to return a 'ListTerm' class that will build a clause from the two terms like this:
class ListTerm
  def <<(term)
    return Clause.new(self, term)
  end
end

# To parse this, implement the '<<' operator in the ListTerm class, and
# return the head of a Clause as a ListTerm:
#
#   likes(john, X) << [female(X), likes(X, wine)]
#
def self.method_missing(id, *args)
  terms = *args
  case terms
  when Array
    # Iterate through the Array, and return each element as a prolog Term.
  else
    ListTerm.new(FunctionTerm.new(id), terms)
  end
end
For other prolog operator methods, we just do something similar. In Ruby there is no '=..' operator to implement Univ, which converts a predicate into a list, and so I've chosen to map it onto '==='. A '|' operator is used to denote the head and tail of a list, and so that can be implemented as an operator method for prolog Terms too.
In prolog, anonymous logic variables are defined as underscores like 'female(_)', and these won't be Ruby constants like 'female(X)', but method calls that get diverted to method_missing(). So '_' methods need to be special cased as logic variable in method_missing():
# To parse this, implement '===' and '|' operator methods for prolog Terms,
# and '_' as Anonymous variables
#
#   my_numbervars(F, I, N) <<
#     [F === [_ | Args],
#      my_numbervars(Args, I, N)]
#
class Term
  def ===(a)
    return UnivOp.new([self, a])
  end
  def |(a)
    return ListHeadTail.new([self, a])
  end
end

...

def self.method_missing(id, *args)
  ...
  if id == :_
    return AnonymousVariable.new
  end
end
So that's the basic idea, Ruby does the tokenising and passes a stream of tokens to method missing, which in turn returns prolog Term instances that implement operator methods to further parse and reduce the token stream to compiled prolog Clauses. The main difficulty with this approach is that there is no way to tell when one prolog clause ends and another starts. So I've come up with a hack to get round that - use the SCRIPT_LINES__ Array which contains the code for the Ruby source currently being parsed to have a look at whether the current line is the start of a new prolog clause, or a continuation of a previous clause. Yuck! But I haven't come up with anything better yet.
def self.method_missing(id, *args)
  filename = caller.first
  name, num = filename.split(":")
  # Do ugly things with SCRIPT_LINES__[name][num.to_i - 1]
end
So that's the basics of how to parse prolog clauses; the hard bit is how to run the code and 'Unify' the parsed clauses with a prolog query. However, that has translated quite easily from the working Objective-C code I have, and so I'm pretty confident that once I can build the correct data structures, the matching process shouldn't be too hard to get working. If anyone is interested in looking at the code so far, please email me and I can send it to you - it's a bit too early to actually release yet.
I hope you'll release it when it is done?
What Can Python Do? 5 Python Programming Examples for Everyday Life
In this article, I will present five practical Python programming examples to show you how Python can be used to write short but useful real-life scripts.
Python is best known for its ability to make programming accessible to everyone, even total beginners. Python is a highly expressive language that allows us to write rather sophisticated programs in relatively few lines of code.
According to Stack Overflow, Python is among the most loved programming languages in the world, and developers who do not currently use it put it at the top of their list for technologies they want to learn.
1. Automated desktop cleaner
Have you ever used cleaning your computer desktop screen as a procrastination tactic? Well, that tactic is about to get even more effective.
In Python, we can automatically sort files on our desktop into different folders based on their file type.
Let’s say that we want a script that will:
- Move the .txt, .docx, .pages, and .pdf files on your desktop to a “documents” folder on our desktop.
- Move the .png and .jpg files on your desktop to an “images” folder on your desktop.
Here’s how a script could execute this:
- First, it will get a list of the files on the desktop (ignoring the directories).
- Then loop through each file and move the files to the appropriate folder based on the file extension using its "new_file_path" function.
Here is how that script would look in Python code:
import os

# Step 1: Move to the Desktop
DESKTOP_PATH = r'INSERT_PATH_TO_YOUR DESKTOP'
os.chdir(DESKTOP_PATH)

# Step 2: Get all the files on the desktop (ignore directories)
files = [entry for entry in os.scandir() if entry.is_file()]

# This is just a function that makes the code a bit neater later.
def new_file_path(folder_name):
    return os.path.join(DESKTOP_PATH, folder_name, f"{filename}.{extension}")

# Step 3: Loop through each file and move it based on its file extension.
for file in files:
    filename, extension = file.name.split(".")
    current_file_path = os.path.join(DESKTOP_PATH, f"{filename}.{extension}")
    if extension in ['txt', 'docx', 'pages', 'pdf']:
        os.rename(current_file_path, new_file_path('documents'))
    elif extension in ['png', 'jpg']:
        os.rename(current_file_path, new_file_path('images'))
Voila! In just about 30 lines, including blank lines and comments, we now have a script that can tidy up our desktop.
If you want to add different file types, just add another “elif” condition at the bottom.
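The "elif" chain works, but a dictionary keyed by extension scales better as you add file types. Here is a sketch of that variant (the folder names match the script above; the helper name is mine):

```python
# Map each extension to its destination folder; adding a new file type
# is now a one-line change instead of a new elif branch.
FOLDER_FOR_EXTENSION = {
    'txt': 'documents', 'docx': 'documents',
    'pages': 'documents', 'pdf': 'documents',
    'png': 'images', 'jpg': 'images',
}

def target_folder(filename):
    extension = filename.rsplit('.', 1)[-1].lower()
    # Unknown extensions return None so the caller can skip the file.
    return FOLDER_FOR_EXTENSION.get(extension)
```

In the main loop you would then call `target_folder(file.name)` and skip the file when it returns `None`.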
Advantages of this script:
- Our desktop is tidier
Disadvantages of this script:
- We will never again get to procrastinate by clearing our desktop.
2. Text processor
Have you ever written an online review for a product, movie, or restaurant?
One of the common applications of text processing these days is to read customer reviews and use them to generate insights. As a part of this task, it becomes increasingly important to understand the number of times each word occurs in the datasets.
We can process text by building the following Python program:
# Define the statement
sentence = "the person to the left of me was teaching python to the person to the right of me"

# Split the statement to find the words
words = sentence.split(' ')

# Initialize a dictionary
word_dict = {}

# Loop through the 'words' list and find the frequency of each word
for word in words:
    if word in word_dict:
        word_dict[word] = word_dict[word] + 1
    else:
        word_dict[word] = 1

# Let's print it out
print(word_dict)
Output:
{'the': 4, 'person': 2, 'to': 3, 'left': 1, 'of': 2, 'me': 2, 'was': 1, 'teaching': 1, 'python': 1, 'right': 1}
Scripts like this are a useful part of natural language processing.
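For comparison, the standard library's `collections.Counter` does the same counting in one line:

```python
from collections import Counter

sentence = "the person to the left of me was teaching python to the person to the right of me"

# Counter builds the same word-frequency dictionary as the loop above.
word_dict = Counter(sentence.split(' '))
print(word_dict['the'])   # 4
```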
3. Restaurant price tracker
I’m sure a lot of us have dreamt of owning restaurants. However, like any other business, running a restaurant can get tricky and complicated.
The first thing a customer may look for after coming to your restaurant is the menu. Let’s see how we can design a menu using Python’s dictionaries.
# Initialize a dictionary named menu using curly brackets
menu = {}

# Add the different items
menu['Sandwich'] = 3.99
menu['Burger'] = 4.99
menu['Pizza'] = 7.99

# Print the prices
print(menu)
Output
{'Sandwich': 3.99, 'Burger': 4.99, 'Pizza': 7.99}
Suppose your first customer orders a burger. Let’s try to print out the price of a burger. To find this value, we call the ‘Burger’ key in the dictionary, as is shown below:
print(menu['Burger'])
Output:
4.99
Now, let’s take this a step further. What if your restaurant had three different pizza sizes to choose from: small, medium, and large?
Well, here’s where nested dictionaries come into the picture.
# Initialize a dictionary named menu
menu = {}

# Add the different items
menu['Sandwich'] = 3.99
menu['Burger'] = 4.99
menu['Pizza'] = {'S': 7.99, 'M': 10.99, 'L': 13.99}

# This is what the dictionary looks like
print(menu)
Output:
{'Sandwich': 3.99, 'Burger': 4.99, 'Pizza': {'S': 7.99, 'M': 10.99, 'L': 13.99}}
Let’s say a new customer now comes in and orders a medium size pizza. Here’s how you could retrieve the corresponding price.
print(menu['Pizza']['M'])
Output:
10.99
What a deal!
The next step for this would be to create a graphical user interface (GUI) to represent the different selections.
To do this, check out Python’s great GUI libraries, such as TkInter and Kivy.
4. Finding common hobbies
In this example, we’ll use Python to find common hobbies between you and your friend.
To do this, we are going to use Python sets.
First, we will create a set of our hobbies, a set of our friend's hobbies, and then use the set intersection operator (&) to find the overlap.
Here we go:
my_hobbies = {'Python', 'Cooking', 'Tennis', 'Reading'}
friend_hobbies = {'Tennis', 'Soccer', 'Python'}

# Find the overlap using the '&' (intersection) operator
common_hobbies = my_hobbies & friend_hobbies
print(common_hobbies)
Output
{'Tennis', 'Python'}
See, everyone likes Python!
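While `&` gives the intersection, the other set operators are just as handy. A quick sketch using the same two sets:

```python
my_hobbies = {'Python', 'Cooking', 'Tennis', 'Reading'}
friend_hobbies = {'Tennis', 'Soccer', 'Python'}

all_hobbies = my_hobbies | friend_hobbies   # union: everything either of us likes
only_mine = my_hobbies - friend_hobbies     # difference: hobbies only I have
not_shared = my_hobbies ^ friend_hobbies    # symmetric difference: not in common
```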
5. Weather predictor
Imagine you’d like to receive suggestions on whether you should play your favorite sport based on the current weather.
You like to play when it is overcast, but not when it's raining. When it's sunny, you only want to play if the temperature is 20 degrees Celsius or below.
Here’s what that would look like in Python:
# Define today's conditions
weather = 'sunny'
temperature = 18

# Write the conditions to decide
if weather == 'overcast':
    action = 'play'
elif weather == 'rainy':
    action = 'do not play'
elif weather == 'sunny' and temperature <= 20:
    action = 'play'
elif weather == 'sunny' and temperature > 20:
    action = 'do not play'

print(action)
Output:
play
With some more advanced code, we can import the current weather and temperature conditions from Google, and receive automatic notifications to our desktop to tell us that now is an opportunity to play. Cool, right?
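One way to prepare for that is to package the decision rules in a function, so a notifier script could call it with live data from a weather service. The `should_play` helper here is my own sketch, not code from the article:

```python
def should_play(weather, temperature):
    # Same decision rules as the script above, packaged for reuse.
    if weather == 'overcast':
        return 'play'
    if weather == 'rainy':
        return 'do not play'
    if weather == 'sunny':
        return 'play' if temperature <= 20 else 'do not play'
    return 'do not play'   # unknown conditions: stay home

print(should_play('sunny', 18))   # play
```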
Wrapping up
I hope you have enjoyed learning five ways to use Python programming to write short and effective scripts.
If you want to learn Python and build apps for yourself, then I invite you to check out my Beginner’s Python course: The Python Bible.
Dear all,
we are using python 2.7.1 and sqlanydb 1.0.5 (now officially supported by SAP). When fetching data from our IQ 15 server, any data that is not integer-like (smallint, int ...) is returned as "unicode", losing the data type mapping to Python data types.
When using the ASE python-sybase library (aka Sybase) 0.39, fetching data preserves data types.
We have contacted the customer support, but unfortunately they could not provide us any solution for the case.
Did anybody experienced a similar problem?
Unfortunately decoding and casting the data to the appropriated data types is a very slow options when you fetch a couple million rows.
Thanks in advance for the help.
Regards,
Cris da Rocha
asked
17 Sep '14, 08:40
Cris da Rocha
45●1●1●4
accept rate:
0%
Whether the casting is done in your application or in the sqlanydb driver, it will take the same amount of time. The sqlanydb driver is written on top of our dbcapi library, which returns everything as a string, so the casting is necessary.
We are investigating ways that we could return native python types without the need for casting, but unfortunately I do not have a solution for you right now.
answered
17 Sep '14, 10:34
Graeme Perrow
9.3k●3●77●120
accept rate:
54%
Thanks a lot for the quick answer Graeme.
Re: any data that is not an integer like (smallint, int ...) is returned as "unicode".
The binary datatypes (BINARY, LONG BINARY, VARBINARY, IMAGE) are returned as "str" (the str class).
You haven't said what "native" datatypes you want to use. Have you considered using converters? These work well with specialized data types like datetime or decimal. The only built-in type we don't intrinsically support is boolean but a converter can be used here as well.
import decimal
import datetime

def convert_to_boolean(val):
    return val != 0

def convert_to_datetime(val):
    return datetime.datetime.strptime(val, '%Y-%m-%d %H:%M:%S.%f')

def convert_to_decimal(val):
    return decimal.Decimal(val)

sqlanydb.register_converter(sqlanydb.DT_BIT, convert_to_boolean)
sqlanydb.register_converter(sqlanydb.DT_DECIMAL, convert_to_decimal)
sqlanydb.register_converter(sqlanydb.DT_TIMESTAMP, convert_to_datetime)
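To see what such converters produce, here is a standalone check with made-up sample strings; in practice the driver supplies the raw values, but the conversion logic is the same:

```python
import datetime
import decimal

# The driver hands back strings; these mirror the converters above.
def convert_to_boolean(val):
    return val != 0

def convert_to_datetime(val):
    return datetime.datetime.strptime(val, '%Y-%m-%d %H:%M:%S.%f')

def convert_to_decimal(val):
    return decimal.Decimal(val)

# Sample raw values in the shapes the driver typically returns (illustrative).
ts = convert_to_datetime('2014-09-17 08:40:00.000000')
price = convert_to_decimal('19.99')
```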
If you are looking for high-performance data transfers, Python may not be a good choice. For example, a Python application fetching 100,000 rows each containing 100 integer columns is approximately 37x slower than an ODBC application doing the same thing.
answered
18 Sep '14, 10:43
JBSchueler
2.4k●2●12●44
accept rate:
16%
edited
18 Sep '14, 10:50
Dear Jack,
thanks for your reply and suggestions. Sorry for not replying before.
I'll take a look at it and see how I can implement that to our needs. Thanks a lot.
Surely Python is not the best option for high-performance, unfortunately not my choice :-)
Tech Tips archive
June 27, 2000
WELCOME to the Java Developer Connection (JDC) Tech Tips,
June 27, 2000. This issue covers some aspects of using the Java™
programming language to process XML.
The Extensible Markup Language (XML) is a way of specifying the
content elements of a page to a Web browser. XML is syntactically
similar to HTML. In fact, XML can be used in many of the places
in which HTML is used today. Here's an example. Imagine that the
JDC Tech Tip index was stored in XML instead of HTML. Instead of
HTML coding such as this:
<html>
<body>
<h1>JDC Tech Tip Index</h1>
<ol><li>
<a href="/developer/TechTips/2000/tt0509.html#tip1">
Random Access for Files
</a>
</li></ol>
</body>
</html>
It might look something like this:
<?xml version="1.0" encoding="UTF-8"?>
<tips>
<author id="glen" fullName="Glen McCluskey"/>
<tip title="Random Access for Files"
author="glen"
htmlURL="/developer/TechTips/2000/tt0509.html#tip1"
textURL="/developer/TechTips/txtarchive/May00_GlenM.txt">
</tip>
</tips>
Notice the coding similarities between XML and HTML. In each case,
the document is organized as a hierarchy of elements, where each
element is demarcated by angle brackets. As is true for most HTML
elements, each XML element consists of a start tag, followed by
some data, followed by an end tag:
<element>element data</element>
Also as in HTML, XML elements can be annotated with attributes.
In the XML example above, each <tip> element has several
attributes. The 'title' attribute is the name of the tip, the
'author' attribute gives a short form of the author's name, and
the 'htmlURL' and 'textURL' attributes contain links to different
archived formats of the tip.
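As an aside, the same element-and-attribute structure can be explored programmatically. This sketch uses Python's standard library rather than the Java tools discussed later in this tip:

```python
import xml.etree.ElementTree as ET

# The tip-index XML from the example above.
xml_doc = """<?xml version="1.0" encoding="UTF-8"?>
<tips>
  <author id="glen" fullName="Glen McCluskey"/>
  <tip title="Random Access for Files"
       author="glen"
       htmlURL="/developer/TechTips/2000/tt0509.html#tip1"
       textURL="/developer/TechTips/txtarchive/May00_GlenM.txt">
  </tip>
</tips>"""

root = ET.fromstring(xml_doc)
tip = root.find('tip')
# Attributes are exposed as a simple name-to-value mapping.
print(tip.get('title'))    # Random Access for Files
```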
The similarities between the two markup languages are an important
advantage as the world moves to XML, because hard-earned HTML
skills continue to be useful. However, it does beg the question
"Why bother to switch to XML at all?" To answer this question,
look again at the XML example above, and this time consider the
semantics instead of the syntax. Where HTML tells you how to format
a document, XML tells you about the content of the document. This
capability is very powerful. In an XML world, clients can
reorganize data in a way most useful to them. They are not
restricted to the presentation format delivered by the server.
Importantly, the XML format has been designed for the convenience
of parsers, without sacrificing readability. XML imposes strong
guarantees about the structure of documents. To name a few: begin
tags must have end tags, elements must nest properly, and all
attributes must have values. This strictness makes parsing and
transforming XML much more reliable than attempting to manipulate
HTML.
The similarities between XML and HTML stem from a shared history.
HTML is a simplified vocabulary of a powerful markup language
called SGML. SGML is the "kitchen sink" of markup, allowing you
to do almost anything, including the ability to define your own
domain-specific vocabularies. HTML is a dim shadow of SGML, with
a predefined vocabulary. Thus HTML is basically a static snapshot
of some presentation features that seemed useful circa 1992. Both
SGML and HTML are problematic: SGML does everything, but is too
complex. HTML is simple, but its parsing rules are loose, and its
vocabulary does not provide a standard mechanism for extension.
XML, by comparison, is a streamlined version of SGML. It aims to
meet the most important objectives of SGML without too much
complexity. If SGML is the "kitchen sink," XML is a "Swiss Army
knife."
Given its advantages, XML does far more than simply displace HTML
in some applications. It can also displace SGML, and open new
opportunities where the complexity of SGML had been a barrier.
Regardless of how you plan to use XML, the programming language of
choice is likely to be the Java programming language. You could
write your own code to parse XML directly, the Java language
provides higher level tools to parse XML documents through the
the Simple API for XML (SAX) and the Document Object Model (DOM)
interfaces. The SAX and DOM parsers are standards that are
implemented in several different languages. In the Java
programming language, you can instantiate the parsers by using the
Java API for XML Parsing (JAXP).
To execute the code in this tip, you will need to download JAXP
and a reference implementation of the SAX and DOM parsers. You will also
need to
download SAX 2.0. Remember
to update your class path to include the jaxp, parser, and sax2
JAR files.
The SAX API provides a serial mechanism for accessing XML
documents. It was developed by members of the XML-DEV mailing list
as a standard set of interfaces to allow different vendor
implementations. The SAX model allows for simple parsers by
allowing parsers to read through a document in a linear way, and
then to call an event handler every time a markup event occurs.
The original SAX implementation was released in May 1998. It was
superseded by SAX 2.0 in May 2000. (The code in this tip is
SAX2-compliant.)
All you have to do to use SAX2 for notification of markup events
is implement a few methods and interfaces. The ContentHandler
interface is the most important of these interfaces. It declares
a number of methods for different steps in parsing an XML document.
In many cases, you will only be interested in a few of these methods.
For example, the code below handles only a single ContentHandler
method (startElement), and uses it to build an HTML page from the
XML Tech Tip Index:
import java.io.*;
import java.net.*;
import java.util.*;
import javax.xml.parsers.*;
import org.xml.sax.*;
import org.xml.sax.helpers.*;
/**
* Builds a simple HTML page which lists tip titles
* and provides links to HTML and text versions
*/
public class UseSAX2 extends DefaultHandler {
StringBuffer htmlOut;
public String toString() {
if (htmlOut != null)
return htmlOut.toString();
return super.toString();
}
public void startElement(String namespace,
String localName,
String qName,
Attributes atts) {
if (localName.equals("tip"))
{
String title = atts.getValue("title");
String html = atts.getValue("htmlURL");
String text = atts.getValue("textURL");
htmlOut.append("<br>");
htmlOut.append("<A HREF=");
htmlOut.append(html);
htmlOut.append(
">HTML</A> <A HREF=");
htmlOut.append(text);
htmlOut.append(">TEXT</A> ");
htmlOut.append(title);
}
}
public void processWithSAX(String urlString)
throws Exception {
System.out.println("Processing URL " +
urlString);
htmlOut = new StringBuffer(
"<HTML><BODY><H1>JDC Tech Tips Archive</H1>");
SAXParserFactory spf =
SAXParserFactory.newInstance();
SAXParser sp = spf.newSAXParser();
ParserAdapter pa =
new ParserAdapter(sp.getParser());
pa.setContentHandler(this);
pa.parse(urlString);
htmlOut.append("</BODY></HTML>");
}
public static void main(String[] args) {
try {
UseSAX2 us = new UseSAX2();
us.processWithSAX(args[0]);
String output = us.toString();
System.out.println(
"Saving result to " + args[1]);
FileWriter fw = new FileWriter(args[1]);
fw.write(output, 0, output.length());
fw.flush();
}
catch (Throwable t) {
t.printStackTrace();
}
}
}
To test the program, you can use the XML fragment in the XML
Introduction that precedes this tip, or download a
longer version.
Save the XML fragment or the longer XML version in your local
directory as TechTipArchive.xml. You can then produce an HTML
version with the command:
java UseSAX2 file:TechTipArchive.xml SimpleList.html
Then use your browser of choice to view SimpleList.html, and
follow links to either text or HTML versions of recent Tech Tips.
(In a production scenario you would probably merge this code into
a client browser or into a servlet or JSP page on the server.)
There are several interesting points about the code above. Notice
the steps in creating the parser.
SAXParserFactory spf = SAXParserFactory.newInstance();
SAXParser sp = spf.newSAXParser();
In JAXP, the SAXParser class is not created directly, but instead
through the factory method newSAXParser(). This allows different
implementations to be plug-compatible without source code changes.
The factory also provides control over more advanced parsing
features such as namespace support and validation. Even after you
have the JAXP parser instance, you still aren't ready to parse.
The current JAXP parser only supports SAX 1.0; to get SAX 2.0
support, you must wrap the parser in a ParserAdapter.
ParserAdapter pa = new ParserAdapter(sp.getParser());
The ParserAdapter class adds SAX2 functionality to an existing
SAX1 parser and is part of the SAX2 download.
Notice that instead of implementing the ContentHandler interface,
UseSAX2 extends the DefaultHandler class. DefaultHandler is an
adapter class that provides an empty implementation of all the
ContentHandler methods, so only the methods that are of interest
need to be overridden.
The startElement() method does the real work. Because the program
only wants to list the tips by title, the <tip> element is
all-important, and the <tips> and <author> elements are ignored.
The startElement method checks the element name and continues
only if the current element is <tip>. The method also provides
access to an element's attributes via an Attributes reference, so
it is easy to extract the tip name, htmlURL, and textURL.
The end result of this exercise is an HTML document that allows you
to browse the list of recent Tech Tips. You could have done this
directly by coding in HTML. But doing this in XML, and writing the
SAX code provides additional flexibility. If another person wanted
to view the Tech Tips sorted by date, or by author, or filtered by
some constraint, then various views could be generated from a
single XML file, with different parsing code for each view.
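The same event-driven style is available outside Java as well; for instance, Python's standard library ships a SAX parser. This sketch mirrors the startElement logic of UseSAX2 (it is my illustration, not the tip's Java code):

```python
import xml.sax

class TipHandler(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.links = []

    def startElement(self, name, attrs):
        # Only <tip> elements matter; <tips> and <author> are ignored,
        # just as in the Java version.
        if name == 'tip':
            self.links.append((attrs.getValue('title'),
                               attrs.getValue('htmlURL')))

handler = TipHandler()
xml.sax.parseString(b"""<tips>
  <tip title="Random Access for Files" author="glen"
       htmlURL="/developer/TechTips/2000/tt0509.html#tip1"
       textURL="/developer/TechTips/txtarchive/May00_GlenM.txt"/>
</tips>""", handler)
```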
Unfortunately, as the XML data gets more complicated, the sample
above becomes more difficult to code and maintain. The example
suffers from two problems. First, the code to generate the HTML
output is just raw string manipulation, which makes it easy to
lose a '>' or a '/' somewhere. Second, the SAX API doesn't remember
much; if you need to refer back to some earlier element, then you
have to build your own state machine to remember the elements that
have already been parsed.
The Document Object Model (DOM) API solves both of these problems.
The DOM API is based on an entirely different model of document
processing than the SAX API. Instead of reading a document
one piece at a time (as with SAX), a DOM parser reads an entire
document. It then makes the tree for the entire document available
to program code for reading and updating. Simply put, the
difference between SAX and DOM is the difference between
sequential, read-only access, and random, read-write access.
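That random-access model is easy to see in miniature with Python's stdlib DOM implementation (again my illustration, separate from the Java code below): the whole document is parsed first, and only then is the tree queried.

```python
from xml.dom.minidom import parseString

# Parse the entire document into an in-memory tree.
doc = parseString("""<tips>
  <tip title="Random Access for Files" author="glen"
       htmlURL="/developer/TechTips/2000/tt0509.html#tip1"/>
</tips>""")

# With the whole tree in memory, elements can be visited in any order.
tips = doc.getElementsByTagName('tip')
title = tips[0].getAttribute('title')
```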
At the core of the DOM API are the Document and Node interfaces.
A Document is a top level object that represents an XML document.
The Document holds the data as a tree of Nodes, where a Node is
a base type that can be an element, an attribute, or some other
type of content. The Document also acts as a factory for new
Nodes. Nodes represent a single piece of data in the tree, and
provide all of the popular tree operations. You can query nodes
for their parent, their siblings, or their children. You can also
modify the document by adding or removing Nodes.
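The Document-as-factory idea can also be sketched briefly in Python's minidom, which exposes the same createElement/appendChild operations the Java code below uses (this is my sketch, not part of the tip):

```python
from xml.dom.minidom import getDOMImplementation

# The Document acts as a factory for new nodes.
impl = getDOMImplementation()
doc = impl.createDocument(None, "HTML", None)

html = doc.documentElement
body = doc.createElement("BODY")
html.appendChild(body)
h1 = doc.createElement("H1")
body.appendChild(h1)
h1.appendChild(doc.createTextNode("JDC Tech Tips Archive"))

print(doc.documentElement.toxml())
```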
To demonstrate the DOM API, let's process the same XML document
that got "SAXed" above. This time, let's group the output by
author. This will take a little more work. Here's the code:
//UseDOM.java
import java.io.*;
import java.net.*;
import java.util.*;
import javax.xml.parsers.*;
import org.w3c.dom.*;
public class UseDOM {
private Document outputDoc;
private Element body;
private Element html;
private HashMap authors = new HashMap();
public String toString() {
if (html != null) {
return html.toString();
}
return super.toString();
}
public void processWithDOM(String urlString)
throws Exception {
System.out.println(
"Processing URL " + urlString);
DocumentBuilderFactory dbf =
DocumentBuilderFactory.newInstance();
DocumentBuilder db = dbf.newDocumentBuilder();
Document doc = db.parse(urlString);
Element elem = doc.getDocumentElement();
NodeList nl =
elem.getElementsByTagName("author");
for (int n=0; n<nl.getLength(); n++)
{
Element author = (Element)nl.item(n);
String id = author.getAttribute("id");
String fullName =
author.getAttribute("fullName");
Element h2 =
outputDoc.createElement("H2");
body.appendChild(h2);
h2.appendChild(outputDoc.createTextNode(
"by " + fullName));
Element list =
outputDoc.createElement("OL");
body.appendChild(list);
authors.put(id, list);
}
NodeList nlTips =
elem.getElementsByTagName("tip");
for (int i=0; i<nlTips.getLength(); i++)
{
Element tip = (Element)nlTips.item(i);
String title = tip.getAttribute("title");
String htmlURL =
tip.getAttribute("htmlURL");
String author =
tip.getAttribute("author");
Node list = (Node) authors.get(author);
Node item = list.appendChild(
outputDoc.createElement("LI"));
Element a = outputDoc.createElement("A");
item.appendChild(a);
a.appendChild(
outputDoc.createTextNode(title));
a.setAttribute("HREF", htmlURL);
}
}
public void createHTMLDoc(String heading)
throws ParserConfigurationException
{
DocumentBuilderFactory dbf =
DocumentBuilderFactory.newInstance();
DocumentBuilder db = dbf.newDocumentBuilder();
outputDoc = db.newDocument();
html = outputDoc.createElement("HTML");
outputDoc.appendChild(html);
body = outputDoc.createElement("BODY");
html.appendChild(body);
Element h1 = outputDoc.createElement("H1");
body.appendChild(h1);
h1.appendChild(
outputDoc.createTextNode(heading));
}
public static void main(String[] args) {
try {
UseDOM ud = new UseDOM();
ud.createHTMLDoc("JDC Tech Tips Archive");
ud.processWithDOM(args[0]);
String htmlOut = ud.toString();
System.out.println(
"Saving result to " + args[1]);
FileWriter fw = new FileWriter(args[1]);
fw.write(htmlOut, 0, htmlOut.length());
fw.flush();
fw.close();
}
catch (Throwable t) {
t.printStackTrace();
}
}
}
Assuming you save the XML as TechTipArchive.xml, you can run the
code with this command line:
java UseDOM file:TechTipArchive.xml ListByAuthor.html
Then point your browser to ListByAuthor.html to see a list of tips
organized by author.
To see how the code works, start by looking at the createHTMLDoc
method. This method creates the outputDoc Document, which will be
used to build the HTML output. Notice that just as with SAX, the
parser is created using factory methods. However here the factory
method is in the DocumentBuilderFactory class. The second half of
createHTMLDoc builds the basic elements of an HTML page.
outputDoc.appendChild(html);
body = outputDoc.createElement("BODY");
html.appendChild(body);
Element h1 = outputDoc.createElement("H1");
body.appendChild(h1);
h1.appendChild(outputDoc.createTextNode(heading));
Compare that code with the code in the SAX example that builds
the elements of an HTML page:
//direct string manipulation from SAX example
htmlOut = new StringBuffer(
"<HTML><BODY><H1>JDC Tech Tips Archive</H1>");
Using the DOM API to build documents isn't as terse or as fast as
direct String manipulation, but it is much less error-prone,
especially in larger documents.
The important part of the UseDOM example is the processWithDOM
method. This method does two things: (1) it finds the author
elements and provides them as output, and (2) finds the tips and
provides them as output organized by their respective author.
Each of these steps requires access to the top level element of
the document. This is done via the getDocumentElement() method.
The author information is in <author> elements. These elements
are found by calling getElementsByTagName("author") on the
top-level element. The getElementsByTagName method returns
a NodeList; this is a simple collection of Nodes. Each Node is
then cast to an Element in order to use the convenience method
getAttribute(). The getAttribute method gets the
author's id and
fullName. Each author is listed as a second-level heading; to do
this, the output document is used to create an <H2> element
containing the author's fullName. Adding a Node requires
two steps. First the output document is used to create the Node
with a factory method such as createElement(). Then the node is
added with appendChild(). Nodes can only be added to the document
that created them.
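A minimal, self-contained illustration of that two-step pattern (the author name here is a placeholder, not from the archive):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class TwoStep {
    // Step 1: the owning document creates the node with a factory method;
    // step 2: appendChild() attaches it to a parent in that same document.
    static Element addHeading(Document doc, Element parent, String fullName) {
        Element h2 = doc.createElement("H2");
        h2.appendChild(doc.createTextNode("by " + fullName));
        parent.appendChild(h2);
        return h2;
    }

    public static void main(String[] args) throws ParserConfigurationException {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element body = doc.createElement("BODY");
        doc.appendChild(body);
        // "Jane Doe" is just an example value
        Element h2 = addHeading(doc, body, "Jane Doe");
        System.out.println(h2.getTextContent()); // prints "by Jane Doe"
    }
}
```

Trying to append a node created by a different Document throws a DOMException, which is why the factory calls always go through the output document.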
After the author headings are in place, it is time to create the
links for individual tips. The <tip> elements are found in the
same way as the <author> elements, that is, via
getElementsByTagName(). The logic for extracting the tip attributes
is also similar. The only difference is deciding where to add the
Nodes. Different authors should be added to different lists. The
groundwork for this was laid back when the author elements were
processed by adding an <OL> node and storing it in a HashMap
indexed by author id. Now, the author id attribute of the tip can
be used to look up the appropriate <OL> node for adding the tip.
For more in-depth coverage of XML, see The XML Companion, by Neil
Bradley, Addison-Wesley 2000. For more information about JAXP,
see the Java Technology and XML page. For more information about
SAX2, see the SAX2 home page. The DOM standard is available from
the W3C.
http://java.sun.com/developer/TechTips/2000/tt0627.html
As I alluded to last Friday, I’ve been dabbling with the idea of expanding (modernizing) my programming knowledge and learning some Unity. I’ve been stuck in my old habits for a long time now. I’ve always been caught in this Catch-22 where I don’t want to stop working on an existing project to learn something radical and new, because doing so would bring the project to a halt. But if I’m not working on a project then I don’t have any immediate use for the New Thing. I’m either too busy or I don’t need it.
But for whatever reason, now feels like a good time to take a crack at it. To this end I’ve been watching Unity tutorials. This is both fascinating and maddening.
I have decades of coding experience, but I’m new to both Unity and C#. I’m a C++ programmer. C# and C++ are very similar, but not so similar that I can just jump in and begin writing useful C# code without educating myself first. The problem is that there aren’t really any tutorials out there for me. Everything is either high-level stuff that assumes a deep knowledge of both C# and Unity, or (I am not joking here) it teaches you how to make some rudimentary “game” without needing to write a single line of code.
The latter actually kind of pisses me off. I get that this is part of the allure of Unity for most people, but for me it’s like I took a class with a master carpenter in hopes of learning woodworking, and instead he spent the entire class showing us how to assemble IKEA furniture. This wouldn’t be so bad if I was just scanning a text document, but it’s pretty annoying to sit through fifteen minutes of rambling video waiting for them to get through this introduction crap and to the main part of the video, only to realize that this click-and-drag stuff IS the main part of the video and I’ve just wasted my time again.
For a couple of days in a row I’ve opened up Unity to an empty project with the silly notion that I was going to begin making some small thing. But then two hours later I was still scanning through video tutorials looking for answers and I hadn’t typed a single line of code.
Part of the problem is that what I want to do is kind of strange. I don’t want to import a model from the model library. I want to create my own out of raw polygons. I want to set up my own surface normals, set the UV mapping, and render it.
For my money, this series on Procedural Landmass Generation by Sebastian Lague is the best tutorial so far. It has a lot of what I need to get going. The problem is that the bits I need are little ten-second segments sprinkled throughout the sprawling four and a half hour video series. That’s not a very efficient way of learning.
Why Video?
I get that there are some concepts that work really well in video. In the past I’ve tried to describe fiddly concepts like surface normals, bilinear filtering, one-sided polygons, Z-buffering, camera views, and texture addressing. These concepts are all things that are relatively easy to illustrate using video but somewhat long-winded to explain in prose. When teaching this sort of high-level conceptual stuff, video is tremendously useful.
On the other hand, when it’s time to write code, nothing beats plain text.
So we’re actually facing two problems: One is the problem where there aren’t good tutorials for experienced programmers to bring them up to speed in Unity, and the other is that most tutorials are videos when they should be articles. I’m caught at the intersection of these two problems, which means I’ve spent a couple of hours this week watching someone explain to me nine things I already know and one thing I can’t understand because I don’t have the context.
C# First Impressions
C# and C++ are so similar that C# falls into some kind of uncanny valley for me. It’s so much like C++ that I get baffled when presented with something very different from what I’m used to. If the two languages used radically different syntax then the deviations wouldn’t feel so strange. But as it stands this thing looks so familiar yet feels so alien.
Consider the classic “Hello World” program:
When you write a program in C or C++, the program begins at main(). The operations within main() will be executed in order, and when main() ends the program stops running. Here’s the main() loop[1] from Good Robot:
On line 3 the program calls init () to initialize everything. This starts up the rendering, initializes the sound system, loads the fonts, reads in all the data files that drive the game, and so on. Then the program spends most of its time in run (), which keeps the whole thing going by maintaining the rendering loop, reading input, and doing all the other stuff a program needs to do to keep itself alive. When the user exits the game, run () ends. Execution passes to term () to clean up all the stuff we did in init (). After that we shut down the Steam API. Technically SteamAPI_Shutdown() ought to go inside of term (), since that’s what term() is for. I think Arvind put it here because we were always shy of modifying each other’s code.
At any rate, once main () is out of instructions to run, execution ends and the program poofs out of existence.
This is how things have worked since literally before I was born. (Development began on the C language in 1969, and I didn’t show up until 1971.)
Now let’s look at “Hello World” in C#:
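Judging from the discussion below, the listing is the classic Microsoft sample, with a class named Hello1 wrapping Main():

```csharp
using System;

public class Hello1
{
    public static void Main()
    {
        Console.WriteLine("Hello, World!");
    }
}
```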
As a veteran C and C++ programmer, this source code is surreal in the way it mixes the familiar with the inexplicable.
First off, we declare a class. Okay. That’s a fine thing to do. That’s something you can (and often should) do in C++ in any program above trivial complexity. But usually you declare a class and then you instantiate it. The class definition is the blueprint, and somewhere in your code you need to actually create something that uses that class. Maybe something like this example of C++ code:
On lines 5 through 11 we define the blueprint for the Player class. But nothing actually happens until line 13 when we cause a specific player to exist, and we store their data in a variable named “player1”.
But that doesn’t happen in the C# program. We define a class called Hello1, but then… what causes an instance of Hello1 to begin existing?
Things get even more bizarre when we see that Main () is inside of this class! So now our program is contained within a class that never exists? Madness! What would happen if I made another file called Hello2.cs, with a class called Hello2, which also contained Main ()? Who is in charge, here?
This is just a small example of the strangeness.
To be clear, I’m not saying that C# is a dumb language for dumb people. This sort of thing makes sense once you know the rules. I’m just trying to show that learning C# after all of these years of C++ is filled with strange moments of hilarious confusion.
You can compare this to something like PHP. PHP is an objectively terrible language. But when I looked at PHP code for the first time I was immediately able to grasp where execution would begin, what order operations would be performed in, and I was able to get the general gist of what a PHP program was doing. I might get tripped up on some funny syntax[2], but the large-scale stuff was easy to follow. In contrast, in C# the mysteries are all structural rather than syntax-based. Where does the program begin execution? Where does it end? When are the members of Hello1 allocated[3]?
I think to do this properly I need to back off from Unity, learn some C#, and then come back to Unity. Trying to learn both at once is like trying to build without a foundation.
As a bonus, I’m pretty sure you can learn C# without needing to sit through hours of stupid video tutorials.
Footnotes:
[1] This is actually the main loop for the incomplete Linux version. The Windows version uses WinMain () and… uh. Look, it’s stupid but we don’t have to get into it.
[2] Or worse, in familiar syntax that behaves in confusing ways. Seriously. PHP is a minefield.
[3] In the example above Hello1 doesn’t have member variables like “hitpoints” or “favorite fruit”. But it COULD.
142 thoughts on “Learning C# – Sort Of”
Learning C# in isolation is definitely the way to go, IMO.
And if you think C# and C++ are similar, you should compare it to Java; C# literally has its origins as “Java with the labels filed off” because Microsoft wasn’t allowed to use Java due to some legal shenanigans with Oracle. Over time, C# and Java have drifted apart somewhat: largely to C#’s benefit, IMO, though Java has started to “catch up” in recent releases.
They both buy heavily into Object-Oriented Programming, but C# has quite a bit of functional DNA nowadays. (Though, again, the most recent version of Java also mixes some functional stuff in, too). It’s a very nice and pragmatic language, IMO; though I admit I haven’t used it extensively in the last few years.
—
To specific questions about C#: (Though, doubtless, if you’ve started looking at a tutorial they’ve probably covered this)
> what causes an instance of Hello1 to begin existing?
Nothing; your program, as written, isn’t creating an instance of Hello1. Since Main is a static method, it doesn’t need a constructed class instance, it’s called like Hello1.Main().
Hello1 is necessary because C# and Java expects all code to be contained within classes.
Static methods are useful for various things other than entry points; sometimes they’re used as pseudo-constructors, sometimes they’re used for utilities: basically any time you have code that should be associated with a class, but that doesn’t need its own instance of a class.
> What would happen if I made another file called Hello2.cs, with a class called Hello2, which also contained Main ()?
You can start your program by running any static Main method; so you can create a program with multiple entry-points if you like. It can actually be useful if you want a normal start point and a debugging start point, for example.
> Who is in charge, here?
Nirvana is achieved when you accept that control is an illusion.
I was pretty much going to say all of this. I started off with C++ too, and spent a bit of time in Java, but I made the transition over a decade ago to now working almost entirely in C# (when I’m not doing SQL, JavaScript, or some other non-C-family coding).
The concept of the program root itself being considered an object is definitely a change for people coming from C++, and the proper utilization of namespaces is even more of one. C++ allows static methods, but they certainly don’t seem as common as in C#. For the record: like mentioned, you can have multiple “[int/void] Main(string[])” entry points, but at any given time the project needs to have one set as the default, which IIRC is stored in the project’s properties via the “.proj” file. The default can then be modified to any other valid entry point (as long as it matches the required signature, not just having the name) if needed.
Multiple entry points is handy if you have, for example, a single executable that can be run either as a CLI application or a Windows Service. (Think of MongoDB for a good example of a single .exe file that does either based on how it’s launched.)
That’s functionality that’s handy to have, but it’s certainly not used very often, and I can definitely see how it’d be confusing.
(It’s also not really a C# thing– it’s a Windows executable file format thing. Presumably you could do the same in C or C++ if you know how to tell the linker you wanted that. Mongo isn’t written in C#.)
Multiple entry points is handy if you have, for example, a single executable that can be run either as a CLI application or a Windows Service
Every time I’ve ever seen code for an EXE that runs as either a console/gui or service, it used the boolean runtime environment variable “Environment.UserInteractive” (that’s the .Net name for the Windows runtime value needed — not sure what other languages use) to determine which path to take, branching from within a single Main() that is called either way. I’ve never seen one that uses independent Main() entry points for this.
I was on to the next post before my brain reminded me what “CLI” stands for and then my eyes lit up. It’s been a while since I’ve seen that but it does get used and I’ve always wondered how (being a front end guy myself.)
I might have an opportunity here soon to dive into C# with a team that lets its members learn as they go so I’m glad I’m seeing this post.
I apologize for the round of cringing I no doubt triggered in experienced devs when I talked about “learning as I go.” I promise I’ll try to remain cognizant of what I’ve read about design patterns and code smells and try to cross apply my Javascript experience where I can. (While being aware of where I want to differ)
We can’t afford proper devs, I can’t change the thinking of the people who can change this. And its their inability to offer competitive pay that is giving me this opportunity so I’m taking it.
“Nirvana is achieved when you accept that control is an illusion.”
Tea on the Zod-damned keyboard!
Luckily it’s water-proofed, but still man. Warning labels eh?
Technically, legal shenanigans with Sun, since the first version of C# was released about eight years prior to Oracle’s (a pox be upon their name) acquisition of Sun. Also, the nature of the legal shenanigans is that Sun wouldn’t let Microsoft extend their version of Java for Windows (that would be the “extend” part of “embrace, extend, extinguish”), so Microsoft went home and made their own programming language with… I probably shouldn’t finish that joke here, eh? With gin rummy and cake. Yeah.
There’s an alternate universe somewhere where either Microsoft complied with Sun’s Java licensing or Sun didn’t pursue the issue and the modern high-level language of choice for desktop development on Windows was Java. That would be an interesting world to live in.
Anyway. Total nitpick. But that is pretty much what we do here, so…
Ah, that makes sense, static methods!
… but the main question remains unanswered (even more mysterious, in fact): who or what decides which method to call if all code must be within classes? Will executing Shamus’ code indeed print “Hello World”? What would it do if there were several classes with several static Main functions?
I’m familiar with static methods from Python, but they still need to be called. Shamus’ program has nothing to call that method.
Well, I don’t know about C#, but in Java you would type “java SomeClass”
on the command line and the Java Runtime Environment would execute SomeClass.main().
The runtime calls it. In fact, you can ask the same about C/C++. Who calls main()? The answer is the C runtime.
What function the runtime knows to call upon execution is decided by the compiler (or linker, in some cases).
The C/C++ standards actually define it must start at Main in certain situations, but there’s no real block preventing them from doing it in some other way.
C++ Windows programs are set to WinMain, C++ console programs are set to main.
In C#’s case, the entry point is controlled by the project properties, where you set what class to use. This is usually taken care for you when you create a new project. If you open Visual Studio, Project Properties, Application, you’ll see a dropdown saying Startup object. It’ll look for Main there.
If you go deeper, main is not the actual real entry point in any of these. The OS will actually go to a predefined entry point (I forget the name, I think it’s _crt_init?) that’s defined as part of the executable file format, which spins up the runtime, performs some initialization and then finally calls your main.
In Python’s case, the entry point is the first line of the .py file being executed.
In fact, no C, C++ or C# program starts at “main”, “winmain” etc.
The executable first starts up the appropriate runtime library (inc. loading any immediate-load dynamic libraries), and performs various other platform and runtime-specific low-level shenanigans.
Then it constructs and initialises all your static variables and objects – in an undefined order!
– It is even permitted to do them simultaneously (multiple threads) if desired, due to the “as-if” rule.
Then finally it selects a main thread and calls your apparent entrypoint in it.
Potentially, lots of your code has already executed before main!
At the end of main(), the runtime then tears it all down. Except not necessarily in the same order.
It’s quite amazing all the work that is done even if all you’re running is an empty program.
As a Java developer, I probably more naturally intuit C# than you coming from a C++ background. I remember going the other way trying to teach myself C++ and getting very confused.
I remember a post you made recently that was about the argument against Object oriented Programming. And while you don’t necessarily need to program in an OO way using C#, it kinda assumes you are going to.
The classes I took in college where I learned C++ were much more to do with Logic and Computer Science things like Logic Gates and how computers do things. The classes I took where I learned Java (and some with C#) were all programming methodologies like the tenets of OO.
I know a new programming paradigm might be difficult to adopt when you have so many years of a different way of doing things, but maybe doing some in depth reading of Object Oriented design might help with why C# is the way it is.
I’m one of those people who prefers video tutorials to written tutorials, although I can’t really explain why. I guess I just prefer the sound of a human voice relaying instructions.
To be fair, I’m not catching tutorials on coding, per se. I work in Unreal Engine 4, which means I’m mostly dealing with blueprint nodes and stuff, which come off better in a video than a paragraph of code does. I tried Unity a few years ago, but it was actually too complicated for me, since I can’t write code to save my life.
Anyway. I think you’ve got the right idea taking a break from Unity to focus on C#.
So my background is in Java, but C#, as someone mentioned above, is pretty close to Java.
(I also know some C++, but it’s at a beginner/intermediate level… I can play with pointers?)
What makes an instance of Hello1? In Java, the Interpreter makes a ‘publically accessible’ version of every class… that has anything declared as Static. It’s a little weird, because you almost have two versions of every class, one that you have to explicitly instantiate, and a version that is implicitly created at program start. Anything labelled ‘static’ is created at program start and can be accessed by using the classname, from anywhere. (If you declare it Public, any other class can access it, otherwise it follows the encapsulation rules of Private/Protected.)
The quirk is that if you’re calling a Static method, you can only use Static members. The Constructor is not necessarily called, as I recall… So basically, using your Player class, if you added a method ‘public static void printStuff()’ you can have:
The second call to printStuff() will cause compiler error, because you’re not allowed to call static methods from created Objects. Only Static Objects can call Static Methods. It’s a little weird… Generally you use Static for anything you want to be able to do regardless of whether you have a specific instantiation of that class. (Normally it’s used in Utility classes. Stuff like Math.sqrt(n) you don’t want to bother creating a Math Object just to get a square root…)
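A minimal Java sketch of the static/instance split (the class and values here are invented for illustration):

```java
public class PlayerDemo {
    static int playerCount = 0;   // one per class; exists without any instance

    String name;                  // one per instance

    PlayerDemo(String name) {
        this.name = name;
        playerCount++;            // instance code may freely touch static members
    }

    // Static method: called as PlayerDemo.describe(), no instance needed.
    static String describe() {
        return playerCount + " player(s) so far";
    }
}
```

So PlayerDemo.describe() works before any player exists, while name can only be read through a constructed PlayerDemo.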
The second call to printStuff() will cause compiler error, because you're not allowed to call static methods from created Objects. Only Static Objects can call Static Methods.
You’re correct that the compiler will throw an error, but it’s slightly different than this. Static methods, members, and properties cannot be called/accessed directly off of an instance of the class, and when interacted with externally need to be referenced by the class name itself, but internally there is no issue. If code execution is currently within a non-static method, a call/access of a static element of the same class is just fine. So the following applies:
Where things start getting real weird is that you can have the following and it’s also valid:
The reason that still works is because technically, the method signatures are different, so they’re actually calling two different versions of the method. There’s a catch, however — if you tried to call “doStuff();” from anywhere WITHIN the demo class in which those two methods are defined, the compiler will error out saying it’s too ambiguous to tell which version you want. And for some reason, that error can’t be fixed by prefixing the class name and/or “this.” onto the calls.
So I guess this would be like method overloading right? I was trying to think of the use.
I guess it would be useful if I wanted “doStuff()” to do something basic like “Console.Write(‘Hi!’)” if I called the method like Person.doStuff() and something more specific like “Console.Write(‘Hi ‘ + name + ‘!’)” if I called Bob.doStuff(). An instance specific implementation of the same variable.
Question. How far out have you gone with polymorphism and method overloading? I’ve seen maybe 5 or 6 different forms of the same method. You ever seen more? Is that a good thing or a bad thing? Bad code smell?
It’s not overloading, because one isn’t replacing the other. Hence why it’ll error out if the call turns out to be ambiguous. Since it’s not overloading, there’s no rules on which one to pick.
They’re just two methods with the same name. Bad practice anyways.
Whether it’s good or bad depends a lot on what’s being modeled, although most people in the know will recommend you to do composition over inheritance. In other words, instead of inheriting and overloading, split the specific functionalities into smaller objects you store in your class and get called as needed. The advantage here is that it’s much easier to switch out these components if something needs to get changed (for example, for testing).
Oh you’re right. Different scopes. I should have caught that.
You can, but don’t.
Raw pointers should never* be used in modern C++, because they make it really easy to make daft mistakes.
Instead there are a host of “smart” pointers, eg:
– “Scoped” pointer that automatically deletes the object when the ‘pointer’ goes out of scope
– “Strong” reference counting: Delete the object when the last reference goes out of scope
– “Weak” reference counting: Created from a strong one, except it isn’t counted as being a reference until converted into a strong one
– “Unique” pointers: There can be only one pointer to the object. It can’t be copied but can be moved.
etc.
Use those, they’re easy and fast.
The hard part is of course working out which “smart” pointer you need – choice can be a burden!
* Unless you really want to.
Eh, my 9 years in game dev have made me see things the opposite way.
There are occasional times when you need to do something like ref-counting, but most of the rest of the time you want to be explicit about your memory ownership.
Classes with scope pointers as members would generally be better off containing the objects they point to (so there isn’t the extra pointer dereference when accessing it), scope pointers in functions is often indicative of slow code since malloc is slow as is touching other bits of memory. Better off having either scoped variables or (for bigger things) one big linear allocator you can unwind at the end of your function.
Unique pointers are ok in concept, but in reality you rarely want to be passing ownership of something more than once, and should be explicit enough about it that you know you’ll be needing to call free/delete when you’re done with the object.
If you’re worried about memory leaking, then you should check against that case instead; in our engine, every time we transition between the front end or a level we verify our heap is identical to how it was after we finished initialisation. Not a single byte is allowed to have been allocated or moved. Our commit box runs some tests on every code commit before it is allowed to be pushed out, and if anyone’s commit causes a leak, our heap tracking code will show the callstack when that allocation occurred.
All this isn’t to say that special pointer classes are never useful, we have a few things like textures that use ref-counting because we need to wait for our graphics code to finish with things before we free them, as well as custom pool pointers you can call delete on so that you don’t have to know where the object was allocated from in order to know how to free the memory.
But generally speaking there’s nothing wrong with raw pointers, they’re as fast as you can go on debug builds (which other smart pointer objects will not be, they’ll be copied all over the stack even when it doesn’t make sense, and will still load and call destructors that do nothing giving you extra function call overhead), and anyone who wants to write fast code (which should be everybody) should feel totally comfortable using pointers and understanding exactly where each bit of memory is allocated and freed.
As a C# developer who knows a bit of C++ and keeps meaning to learn more, this is super interesting to me and I hope you do more articles on the subject.
As a guy who learned some C++ in school, but quickly found all the projects he wanted to join on were grounded in C# ideology, I am really looking forward to this.
I never managed to separate the two inside my head properly, and I keep doing c++ things in C# programs if I don’t drink enough coffee.
Maybe this will finally help.
I have to give a plug for my favorite Unity and C# YouTube channel: Quill18Creates
He does a great job of walking through the code he creates and explaining WHY he is doing what he is doing as often as he explains the hows.
Two playlist of particular interest:
Intro to C# – he goes through all of the necessary syntax in the context of making a fairly simple game all in C#. It’s good for making sure you have a firm grasp on the syntax and major subtleties of C#.
Unity 3d: Procedural Mesh Generation – – a little old but right up the alley of what you want to do!
Exciting to see that you’re branching out! Hopefully, becoming proficient in C# will prove useful.
The OO-centric assumptions in C# and Java are pretty crazy though. I learned Java for a project, used it that once, and never returned.
The CLR (Common Language Runtime) is what calls your main function. In C#, methods that are declared static do not need the class to be instantiated to be called. It’s similar to having a C++ file that’s just functions. Making everything a class is the only thing I don’t really like about the language, although I suppose it forces these standalone functions to be in a namespace.
I strongly recommend you learn the language from a book, than through Unity tutorials. C# in a Nutshell is a great book, despite “in a nutshell” being 1100 pages. The good thing about this book is that it should cover everything you want to know and more, and you don’t have to read it cover-to-cover. Just look at what you want to learn from the table of contents and jump to that chapter. I would recommend reading the first few chapters in order, though, to get an understanding of the language.
Yeah, since Shamus will be working with the lower level stuff, and since Unity is maybe based on Mono (is it?), knowing how the .Net framework is put together in general and how C# interacts with it is something he really needs to know. And basically any good reference book should explain those things in its intro chapters.
Also it will explain how garbage collection and the different heaps (places where variables and objects are kept) work, and when they are GC’d, which I expect is something that should interest him given the performance requirements of graphics programs.
On the other hand while I did go once through this using a book, there is probably all of this and more in Microsoft tutorials and articles online. It’s just a matter of finding them in a sea of data.
Wait, this does not really clarify it. (to me, who does not know either C++ or C#, but a little plain C, some Fortran and a lot more Python). If I have a program that consists only of functions and nothing to call them, then why should anything ever get executed? And who or what decides which of the functions/methods will be executed? This is Anarchy!
To be honest, even the definition of Main() in the C++ example confuses me. Why is Main executed if it’s not called? (I suppose it’s in the name, but still)
The CLR in C# is roughly like the “Java Virtual Machine” in Java, both of these are actually separate programs that handle setting up and managing the environment necessary to run C# and Java programs.
When I compile a Java source file, I don’t actually get a real x86 executable binary that I can run like a normal program. What I get is a “.class” file (or several) that contains “bytecode” instructions that the JVM (or CLR) knows how to interpret.
When I want to run a Java program, I don’t run that program directly, I have to run the JVM first and then tell it what class has the “main” method to start my program.
So in C, I compile a source file “hello.c” into an executable program “hello”, and run it like “./hello”
In Java (and C# is analogous), I compile a source file “Hello.java” into a bytecode file “Hello.class”, and tell the JVM to run that program like “java Hello”
Of the languages you mentioned, the closest is Python, in which you have to run a program with “python some_file.py”, except with Python the interpreter just knows to start at the top and work its way down, while Java and C# start at “public static void main()”.
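To make the compile/run split above concrete, here is a minimal sketch (file and class names invented for the example; the commands in the comments show the two-step workflow):

```java
// Hello.java
//
// Compile:  javac Hello.java   -> produces Hello.class (JVM bytecode,
//                                 not a native executable)
// Run:      java Hello         -> starts the JVM, which loads Hello.class
//                                 and calls its static main() method
public class Hello {

    // Kept as a separate method so the logic exists apart from main().
    public static String greeting() {
        return "Hello from the JVM";
    }

    public static void main(String[] args) {
        System.out.println(greeting());
    }
}
```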
All programs have an arbitrary entry point, defined by the language runtime (low-level routines you never see).
C and C++ picked:
int main(int argc, char** argv)
However, this is convention. Most C and C++ compilers have ways to tell the runtime to start somewhere else.
– Also, this isn’t actually where the resulting binary starts executing anyway.
I want to second that book recommendation. It’s a really excellent resource that doesn’t waste a huge page count explaining what loops or conditionals are.
And in Unity C#, the same code would be
public class HelloWorld : Monobehaviour {
void Start() {
print("Hello World");
}
}
Oh, and the reason the Hello1 class in your C# example doesn’t need to be instantiated is because its only method is static – which means it can’t use instance variables, but it can be called anywhere using Hello1.Main();
MonoBehaviour – case sensitive. (All code is bugged until it’s been tested.)
And that still wouldn’t do anything on its own.
You’d have to take another step, like creating an empty object in your Unity game window, and then add your script to it. That would then run on startup when the game object initiates.
Although I seem to be a little late to the party, I just want to chime in and confirm that, yes, C# looks a hell of a lot like Java. Everything in Java has to get wrapped in a class one way or another, but the class containing the main loop of a program doesn’t actually need to contain anything but the main method. In Java, the declaration for the main method has to look like this:
public static void main(String[] args)
The “String[] args” is there in case you want to pass any arguments to the program from the command line and it has to be included even if the program you’re writing has no possible use for arguments. I’m sort of jealous that C# doesn’t make you do that.
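For anyone who hasn’t seen that parameter actually used, a minimal sketch (class name invented for the example):

```java
// Echo.java
// Run as:  java Echo foo bar
public class Echo {

    // Pulled into its own method so the formatting is easy to check.
    public static String describe(String[] args) {
        return args.length + " argument(s): " + String.join(" ", args);
    }

    public static void main(String[] args) {
        // args holds whatever followed the class name on the command line.
        System.out.println(describe(args));
    }
}
```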
Well how can you know you won’t need some command line arguments before you start working? :) Even if you are making a desktop application the command line arguments are useful because through them you get the file location if you try opening a file associated with your program. ;)
Because I’m writing simple programs for my own personal use. I’m a hobbyist, not a professional. In four-odd years of Java-ing, I have written precisely one program that used command-line arguments.
I can’t remember the exact syntax right now, but in any .NET language you can retrieve the command line arguments from anywhere with some always available system utility functions.
I’ve used C# for several years, and at my current employment we switched to Java because AWS (Amazon Web Services) is mainly oriented for non-Microsoft technologies. (Basically, they offer grudging basic support.) Also, the boss is a Mac/Linux fanatic and he wanted to move to Java.
I miss C#, seriously. Java is okay but C# is like Java but with a LOT of things fixed and made better. For example, in C#, strings are treated as a primitive in terms of equality, adding, and so forth. You get the flexibility of treating them like you used to in BASIC, but you can in a pinch treat them as an object as well. In Java, they’re objects, and if you forget that “==” only checks an object’s reference and “.equals()” checks its content, you’re in for some hurt. (We’ve had this bug crop up several times in production code before we trained ourselves out of it.)
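The bug is easy to reproduce in a few lines; note the deliberate new String(...), which forces a distinct object, since string literals get interned and can mask the problem:

```java
public class StringEquality {
    public static void main(String[] args) {
        String a = "hello";
        // new String(...) creates a second object with the same contents,
        // defeating the string-literal interning that can hide the bug.
        String b = new String("hello");

        System.out.println(a == b);      // false: different objects
        System.out.println(a.equals(b)); // true: same contents
    }
}
```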
Also, C# has a better infrastructure system. With a “proj” file, you can clearly delineate which files are used by your project, and even specify versions, as well as define the build types. By contrast, Java uses the file system in a clunky fashion to determine package names, what files to use, etc. Working with imported packages is a pain too; many times we’ve discovered some package we included for a single item ended up trying to force upgrade several other packages to support itself, causing everything to break.
Also, Visual Studio is probably one of the best development tools ever written. It used to NOT be, I’ll concede that, but their debugging system is fantastic. I’m presently using IntelliJ for my Java work, and it’s okay. I flat out refused to use Eclipse, which I found incredibly weird and hard to configure and use.
Yeah, Visual Studio is a gigantic leap over every other tool that I’ve ever used. Eclipse doesn’t even feel like a professional product by comparison, and the load times don’t help.
I find Visual Studio to be pretty awful for writing code, and really awful for sharing code (the project files cannot be properly merged by any of the source control systems).
However, the debugging support is simply awesome.
Yes, they can. They are just XML files. They can be merged just as easily as any other XML file.
Mind you, this doesn’t mean I actually like them. I find the UI for managing the properties absolutely terrible. Nothing quite like the fun of finding out that an individual .cpp file has its own properties set (like precompiled headers enabled when the project itself has them disabled)…
It’s not enough to be able to merge XML, because MSVC stores most of the configuration as a text element.
For example, a simple linker config looks like this:
<Link>
<AdditionalDependencies>QtCored4.lib;OtherLibd.lib;%(AdditionalDependencies)</AdditionalDependencies>
<GenerateDebugInformation>true</GenerateDebugInformation>
<SubSystem>Windows</SubSystem>
<TargetMachine>MachineX86</TargetMachine>
</Link>
Add a library and it just adds a bit more text to the ‘;’ separated list of AdditionalDependencies text element.
Thus merging not only needs to know how to merge XML, it also has to know how to merge MSVC’s specific use of it.
Which also changes each version.
The .filters files are annoying for git to merge, because VS likes to always add to the end of the list instead of sorting alphabetically or something so every time 2 people add a file, it tells you it’s conflicted because the same part of the file has been modified.
I’ve resorted to having those files gitignore’d and just not having everything nicely sorted in the Solution Explorer. Not ideal, but better than manually merging the filters files all the time.
Thing is, some of this isn’t even C# specifically; it’s a method of doing things that is around in C++ too. A static class function that you use without instantiating an object is part of C++ as well, for example.
A lot of this is probably less “C# vs C++” and “modern object oriented vs procedural”, where C# is designed to push you more into object-oriented code while C++ is classically full of libraries and tools that were designed around a procedural methodology.
You’ll be able to take some of what you learn from C# right back to C++, if you want to.
For a good blog (no video, except for showing the game running!) on Unity and C#, I can recommend theliquidfire.wordpress.com/. The “Tutorials” page has some posts for fairly basic/intermediate C# syntax, and the “Projects” has more applied Unity programming, at a pretty high level. I recommend the “Tactics RPG” one. He’s all about structuring his code and creating production-ready systems. It’s not at all “baby’s first game”. Like, providing an implementation and example use of state machines in post 5.
I’ve also used this chrome extension plugin to speed up youtube videos so I can go quickly over the easy parts without actually missing anything:
As someone who grew up with Java (which is more alike to C# than C++), I’m befuddled by your befuddlement :D. “Of course you have a static main method inside a class, what else would you do?”
YouTube allows fast-forward in videos now, from 0.25 speed to 2.0 speed. Don’t need a plugin to keep watching too-slow videos! :)
Since I’ve discovered that you can skip five seconds forwards and backwards with the cursor keys, 10 seconds with “j” and “l”, and pause with “k”, youtube has become so much better! And I regularly get angry that not all video players use these hotkeys (also: shift+“,” to play slower, shift+“.” to go faster, and the number keys to move to 10, 20…% of the video play time).
… but for programming tips, nothing beats text (either the online code documentation, Stack Overflow, or very seldom some programmer’s blog) unless you try to actually code up an animation or so; then video certainly helps.
I find googling for answers to Unity questions much more effective than watching the videos. The answer is usually out there. The difficult bit is phrasing the question, especially if you don’t know what something’s called.
Creating a Procedural Mesh in Unity
Traditionally, C# programs are static classes that never get instantiated. (Static classes also exist in C++, although I’m not sure how much you’ve used them.) Honestly I didn’t know making your main class non-static like in your code example above even worked, I’m mildly surprised that it does.
The entry point to a C# program is inside a class because everything in a C# program is inside a class. (With the exception, IIRC, of struct definitions, which can be inside a namespace or a class.) This is a more Java-esque way of looking at the world than a C++ way, so I can see how that’d be confusing. Don’t worry, the compiler has no trouble figuring out where the program’s entry point is.
BTW, I 100% agree with you that video tutorials suck. Different people learn in different ways, but it seems like in the last 5-6 years the only “way” anybody bothers to make anymore is video. It’s frustrating.
Oops, put my foot in my mouth. I guess it’s normal for your Program class to not be a static class, that’s what the default Visual Studio “New C# Project” creates. Sorry, never looked at it closely before.
The important thing is that Main() itself is static, which means the class doesn’t have to be instantiated to use Main(). And, again, these concepts all existed in C++ also, that’s where I first learned them way back in college using C++ Builder.
C# isn’t as different from C++ as you think, it’s just that C# is exercising C++ features I don’t think you’ve been using. (And one thing about C++: it has a LOT of features. So I don’t blame you.)
C# classes don’t have to be static, though. (You could do it either way, I guess.) It’s perfectly legitimate to do something funky like have your main class derive from Form, and then do something like
in MyClass.Main. You probably shouldn’t do that in a non-toy program, though.
Interestingly(?), C++ .Net programs don’t have the same limitation; by default, the entry point is the regular C++ main().
Unity is ass. Keep your C++ and use Unreal 4 instead.
Skipping existing comments, so apologies if this post is redundant.
In .Net, one class has a static function named Main, and when you run your program, that method gets called. If you need to do anything with any kind of nonstatic members, create a class instance. Java is more or less the same way.
I use class methods a lot – the class becomes not a template for an object, but a place to put methods so they don’t pollute the global namespace. So “static Main()” or the fact that Hello1 isn’t instantiated as an object washed right over me. Plus there were those three years I was a professional Java programmer, so Of Course one declares a class just to hold a class method to start the program. Different past experiences.
For my understanding: In Python, the difference between a class method and a static method is that the class method has access to the rest of the class (other class methods and static methods, or class-global variables), whereas a static method is just a “stupid” function, and can only see what you hand it.
Does this work the same way in C#? The reason I’m asking is that you talk about class methods, but Shamus defined a static, and I had to learn the difference (in Python) the hard way, so it matters to me.
Different languages in OO and subcultures of OO use different terms for different things, so I’ll just explain my particular point of view:
an object method works on an instantiated object, whereas a class method is associated with the class, not with any particular object in the class.
“static” is the keyword used by C++, C#, Java, and PHP to say “this is a class method” – it stems from C++ trying to minimize the number of new keywords they introduced to C to make C++ go, since C++ originally compiled to C, and then you had to compile the C to machine language. So they re-used “static” (which was already a keyword in C) rather than introduce a new keyword. The other languages copied from C++ to reduce the learning curve or something.
I don’t know how Python organizes methods or names them – your description of its “static methods” sounds like a third type of method relative to the two I’m familiar with.
Within Python classes, you have three levels (that I know and work with):
1: Static methods. Defined within the class, but they know nothing about any classes or instances. You address them as classname.methodname(), and it’s basically just a way of associating a regular function (which can exist outside of a class) with a class. You don’t need an instance of the class to call them. Useful if you have some amount of inheritance, where different classes only differ in how they do a certain thing, but where you also want to be able to do that thing without instantiating the class first.
2: Class methods. In addition to the regular arguments, they are implicitly handed the class itself. So that allows them to use “cls.xx” to refer to any property of the class, including static methods, or instantiate the class (by calling cls(), which returns an instance). They can be used as alternative “constructors”.
3: The “regular” methods, which are parts of instances. They can only be called from other methods on the same level, or via “objectname.methodname()”, and they implicitly get handed an instance of the object itself when called, so you can always address the current instance via “self”. This includes access to all static and class methods.
All of these can be “protected” by prefixing an underscore to the name. That means they won’t be listed as external references. That does not make them impossible to call from outside the class/instance but it requires a certain amount of determination.
Argh — did not pay attention, and the parser ate all my larger-than signs I put my names in…
1: static methods are addressed as classname.methodname()
2: “regular” (or instance) methods are addressed as either objectname.methodname() or as self.methodname() (when calling from within the instance). they can address both static methods and classmethods as self.methodname() — so the object keeps all the static and class methods.
Cool. I think we’ve now answered your original question: the ‘static’ keyword there in the C# example (could also have been a C++, Java, or PHP example) does not make it a “static method” as you’re familiar with the term. It’s just the keyword used by that family of languages to denote what we both appear to call a “class method”.
As far as I know, the C-like-OO languages I refer to here do not have an equivalent of a ‘static’ method – to them that’s just a class method that doesn’t actually call any other class methods or refer to any class properties.
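To put the two kinds side by side in C-family syntax (Java here, class name made up for the example):

```java
public class Counter {
    // Class-level state: one copy, shared by every instance.
    private static int created = 0;

    // Instance-level state: one copy per object.
    private final int id;

    public Counter() {
        created++;
        this.id = created;
    }

    // "static" in the C-family sense: a class method, called as
    // Counter.howMany() without any instance.
    public static int howMany() {
        return created;
    }

    // Instance method: requires an object, and can see both the
    // instance state (id) and the class state (created).
    public int getId() {
        return id;
    }
}
```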
great, thank you! I just got smarter :)
Your idea of static methods sounds like a globally scoped static method. Methods, instance or static, can only see what’s in their scope or higher, up to the global scope. So a method in the global scope can only see other stuff in the global scope.
I’ve always thought of video as the easy way out for most things (collaborative gameplay stuff like Spoiler Warning being the exception, for various technical reasons.) Most online video seems to be sparsely-edited, if it’s edited at all, so to make a 15-minute video it feels like a lot of these people are sitting down in front of a camera for 15 minutes, and then pressing “Submit”. Crafting text takes longer – it’s taken me a lot longer to type this comment than it would take me to read it out loud, for instance.
A lot of that comes down to quality though; dashing off misspelled, error-filled garbage text is probably a lot faster than scripting and editing quality video. So maybe the medium isn’t really relevant there. I do agree with you that video is incredibly annoying for me in a lot of cases, and learning programming is definitely one of those. Among other things, in video the creator dictates the pace, while in text the reader does. It’s also much easier to skip around to the bit that’s relevant to what you’re currently working on.
I’d say it wildly varies. Jon Blow’s 2-hour rant about programming languages for games certainly did not take much longer to make than the video itself plays, but even most people whose videos are just them talking into the camera edit a lot. If you want to have some well-prepared pieces of code to go with it, some images, some demo … that must be incredibly time-intense.
This is my thinking as well. For a video you can just talk to a camera while doing the thing being demonstrated, import it into a video editor, trim the ends, render, and upload (at minimum, obviously you can extend this time depending on the amount of editing). You can simply explain the thing as you go along as if you were telling a friend next to you how to do it, and you only have to go over it once. Making a text tutorial with pictures requires doing the thing and being interrupted periodically every time you do something that needs a picture in order to take said picture, then going back and sitting down and mentally doing the thing again and writing up the procedure along with manually putting all the pictures you took in the right place to illustrate what you’re doing (I suppose you could write as you go, but I don’t think that would increase efficiency all that much). And just like the video editing above, you can extend the time required arbitrarily by cropping, annotating, etc. the various photos used in the tutorial.
This is not to imply that either is trivial; a quality tutorial is a lot of work whichever way you do it. But if you’re already really good at public speaking while showing something off, video may be a very attractive medium compared to writing.
This is not to defend video as a medium for tutorials, either, as I usually prefer text ones myself; just a thought experiment of why it might be easier or more convenient for people to do.
If you are cool with making the jump to a public game engine, Unreal would let you keep using C++ and is also free and is fully functional unlike the free version of Unity (until you start making notable profits anyway).
It has some downsides: code compile times tend to be longer than in Unity for some reason (I’m not a programmer, but plenty of programmers have assured me this is the case) and by default it comes with a lot of extra bells and whistles you may not need. It’s also not that great at UI yet (though it’s slowly improving).
For a game like Pseudoku, that is largely (or entirely) 2D UI driven, Unity is probably a much better fit. For 3D games with lighter UI, like a 3D version of Good Robot, Unreal makes a stronger case.
Err… Code compile times are longer in UNREAL, which also comes with all the extra bells and whistles and is not that great at UI.
The dangers of discussing two game engines that are only a couple of letters different.
Please don’t. I’ve been really looking forward to you starting learning C# and Unity (whether you realise it or not, you’ve been coming to it for a long time now)
The free version of Unity doesn’t lock off any features either, actually. They switched models around the time Unreal became free.
The paid version is all about services, like their analytics, cloud build and multiplayer matchmaker. As far as the actual API goes and what you can do with the engine, the free version isn’t restricted.
The main difference in their pricing models is that Unity asks for a subscription once you start making significant money, while Unreal asks for royalties instead.
“instantiate”
In my entire 30+ years of life, I have never seen this word before. At first, I thought you’d just made it up. Looking it up, I still don’t have a clue what it’s supposed to mean in the context of the sentence it’s being used in.
instantiate = “create an instance of”
To phrase it another way: To create a real, actual copy of something, instead of a blueprint, idea, or concept of a thing.
I can write a class to define a point.
Why do we do this?
What’s happening behind the scenes is that every time you use new, you are reserving a section of memory big enough to hold the header information and data of a Point object. That process is called instantiating an object.
Note: Not sure how to format code using this editor.
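Since the comment form apparently ate the snippet, here is a stand-in Java version of the idea (not the original code, just an illustration):

```java
// A blueprint only: until "new" is used, no memory for any x or y exists.
public class Point {
    public int x;
    public int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public static void main(String[] args) {
        // "new" reserves memory for one Point object and runs the
        // constructor. That is instantiation.
        Point p = new Point(3, 4);
        System.out.println(p.x + ", " + p.y); // prints "3, 4"
    }
}
```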
Point made.
Or should I say, Point instantiated?
The thing to remember is that C# and the .NET framework are interwoven. To understand why, a little bit of history helps.
Back in the 1990s there were issues with reusing code and sharing (or selling) code through libraries. To fix these issues, a number of vendors developed frameworks designed to work within an operating system. One of these was COM, the Component Object Model. If you wrote (in C++ at first) a library that adhered to the COM standard, it could be used by any other programming language capable of using COM to talk to libraries.
Along the way, Visual Basic became popular. To bring Visual Basic into the 32-bit world and object-oriented programming, Microsoft decided with VB4 to make it work with COM. The reason VB4, 5, & 6 object-oriented programming works the way it does is because it is an implementation of how COM handles object orientation.
Java came along with elements of this shared-library idea, but that was along for the ride; Java’s focus was write-once-run-everywhere, thanks to the use of a Virtual Machine to run the code on.
COM had issues, and Java had a nifty thing going with its VM, so by the early 2000s Microsoft took another stab at it and created a new way of defining how software libraries talk to each other, along with taking some of the ideas behind the Java VM to create a managed environment that controls how memory is used. This developed into the .NET framework.
COM had Visual Basic 6, and so the .NET framework had C#. Visual Basic .NET was also created as the successor to VB6. Most everything weird about C# is because it is a wrapper around how the .NET framework and core libraries manage things, just as most everything weird about VB6 was because it was a wrapper around COM.
To make it weirder, for the two major .NET languages, C# and VB.NET, it doesn’t matter which one you learn. Behind the scenes they are both translated into an assembly-like Intermediate Language, or IL. That is what is executed by the framework. It is trivial to compile a C# program into IL and then reconstitute it as a VB.NET program complete with variable names, and vice versa. There are differences, but most of them are syntax sugar. If you know how the IL works and what it is calling in the framework, you can generally replicate the functionality in both languages.
Like COM, .NET is concerned a lot with exposing what a software library can do. You can query any .NET assembly (the generic term for a .NET binary) and discover what public subroutines and objects are in there and can be used by another program.
Broken down this is what the following means.
To make a .NET exe work, the framework needs to know its starting point. Within the assembly header the programmer can say “Hey, this routine is my starting point.” When the framework tries to run the exe it will check the header, find the routine you specified, and run it. The routine has to be a static method. And since all methods have to be part of an object, in C# you need a class. In VB.NET you can use a module, which behind the scenes looks the same as a class with nothing but static methods. So it’s really the same thing.
So in your example, the class is Hello1; by convention for C#, the startup routine is Main with a void return type. In VB.NET it is just Public Sub Main() in a module. I can also use a static Main in a class if I want to in VB.NET.
public class Hello1
{
public static void Main()
{
System.Console.WriteLine("Hello, World!");
}
}
The Console.WriteLine is part of the standard library the framework provides.
So that’s the convoluted answer to why things look weird.
That’s a very good, unbiased summary.
As a primarily VB.NET developer, it’s refreshing to see. :-) Hopefully Shamus won’t get confused by .NET Standard vs .NET Core, with the major shift in .NET underway to make it more cross-platform.
I’m also keen to follow along with Shamus’s journey into C#, and hope it concludes with one of his projects on a HoloLens.
Nice to say. I am also a VB.NET developer myself. The problem I am dealing with is that my company’s part-design and machine-control software is written in VB6 (by me, mostly). The software has been maintained since the mid-’80s, with three major ports between different platforms. The leap from VB6 to .NET will be the fourth.
I have some very strong opinions about where Microsoft can shove it in regards to the compatibility between VB6 and VB.NET.* But the situation is what it is, so what I have been doing for the past decade is converting the software over piece by piece. I had to dig into the guts of the .NET framework to understand how certain things worked so I could pick the right path for compatibility with the low-level libraries we use for machine control.
That’s where I learned that most of the differences between VB.NET and C# are cosmetic.
*For example, Integer changing to be a 32-bit integer instead of a 16-bit integer. Behind the scenes it gets compiled to the Int32 IL datatype, so the change was completely arbitrary. If they had kept Integer as Int16 and a C# assembly referenced it, it would show up as short. Right now, a parameter declared as flag As Integer will show up as int flag in C#.
I replicated enough of the VB6 graphics engine that I can copy and paste the graphics code from VB6 to VB.NET. Why did I have to do that?
However, the new stuff VB.NET does I just love, especially generics and the framework. So much easier than VB6. And having more control over the forms is very nice as well.
The thing is, moving from VB6 to VB.NET was gonna be a breaking change anyway, so they might as well do all the changes so VB.NET could be viable moving forward rather than stick with backcompat.
But, but …
…but what if there’s a second class, Hello2, which also contains a static method Main()? Which one is executed? Or both? Or would that give me an error?
I feel like this would immediately become more readable if the first line of the program was some statement which points to the main routine.
The startup object is defined in the compiler options.
This is also the case in C and C++ – on Windows, the default entrypoint for a GUI-subsystem C++ application is WinMain, with the signature int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int) rather than the standard main(int argc, char** argv).
Very few people change it because there isn’t much point.
Technically the standard says certain types of applications must have their entrypoint defined as main, but even then you can just have main redirect to WinMain or whatever else you use.
I’m with you on tedious videos. I think it’s because terrible video is arguably easier than text; turn on your screen capture, do whatever you’re doing, then upload to YouTube. And I certainly see a lot of that. Gods, it is infuriating to have a simple question, and the majority of Google’s hits are 5 minute videos in which someone tediously uses an un-narrated screencast to show me something trivial.
As for C# and other languages in the Kingdom of Nouns, I always get a laugh out of Steve Yegge’s “Execution in the Kingdom of Nouns”
Thirded on the hatred of tutorial videos. Every. Single. Time. I have a question I have to either trawl Stack Overflow for the answer or suffer through at least an hour of useless video–neither of which works when you aren’t trying to answer a question, but looking for a tutorial.
I’ve taken up searching for textbooks instead, and dropping the cash for them to get me rolling. There’s a lot more out there in print for “advanced programmer but don’t know the language” than there is on websites, though it’ll set you back at least fifty bucks (more often close to 100)
My theory is that it’s a lot easier to get dollars out of people in the “I know what I’m doing and I need a new tool to do it” demographic (tax deductible and/or employer pays for it) so all of the quality resources require funds to get hold of.
Fourthed? on hatred of video content. Unless it’s clearly superior, it’s almost always much much worse than text. Takes forever to consume, can’t (easily) be skimmed, criminally unparsimonious with bandwidth. It’s just the worst. In fact, even when a concept is more clearly conveyed with video, I find I almost always prefer an animated gif embedded in a page of prose.
Also seconded on the tragedy of the Kingdom of Nouns. I live in it professionally, but I sure don’t like it at all.
Can’t be copy-pasted like SO / any other text-based solution is another big problem. And it’s much harder to keep / make an off-line copy of it.
I remember reading this years ago. I saw your link and immediately thought “and then in the end it is all managers” and yepp, there it is.
I can understand why they do video, though it is infuriating sometimes. It’s much easier to show something than it is to write it out, especially if you might not have a great grasp of the English language. Plus, while most of the time a written explanation will do great, some of the time pictures are required (and usually quite helpful anyway). Then a small portion of the time a video to show exactly how it’s done is required. Given that Unity bills itself especially with all the 3d tools to get things working, video is going to be required for a number of people. No way to get around that.
They could do a video tutorial and a text tutorial, but that’s likely more than they want to spend on the subject. Their manual isn’t terrible at least text-wise. I recently started trying to relearn Unity for a personal hobby project, and it’s been mostly good so far. I’ve managed to learn while mostly skipping the videos, thankfully.
I will join the bandwagon. Videos are incredibly irritating ways of learning anything complex in my experience. The mere fact you can’t determine if a video will really answer your question without watching significant chunks of it is a killer. It is vastly simpler to scan through a written article or tutorial, not to mention that there are tools like “Find” which work very efficiently on text to allow you to jump to the area you want…
I assume that the current world-wide conversion to using videos to communicate information is a coordinated scheme of great evil, but I have yet to work out which supervillain is behind it.
Google, especially since they took over Youtube.
Yeah, the insanity pointed to by the Kingdom of Nouns rant is why Main is inside the Hello1 class. Because in a kingdom of nouns, it’s not possible to have code that *does* anything itself. The code has to be attached to a noun, to enable the noun to do the thing.
Apparently.
Sigh.
To be fair, having an object represent the startup state is a fairly valid concept, and helps prevent contamination of the global scope, which is a thing that happens in Javascript for example.
And even in Python, the file/directory itself (the package) is technically a class-equivalent, with the code running outside any given functions/class definitions simply being the equivalent to a static constructor.
Even in C/C++, the translation unit (the file + includes) is technically its own container-type thing, so even unbound functions are kinda-sorta contained in A Kind Of Thing. OO languages just make it explicit. And most OO languages do allow for functions as first-class or at least second-class constructs. It’s only Java that is dogmatically opposed to this.
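To make the point above concrete, here is roughly the smallest C# program; the class and method names are arbitrary (this is just a sketch), but the shape is mandatory: even the entry point has to be attached to a noun.

```csharp
using System;

// In C#, code can't float free at file scope the way a C function can.
// The runtime looks for a static method named Main inside some type;
// the enclosing class (Hello1 here) exists mostly to hold it.
class Hello1
{
    static void Main()
    {
        Console.WriteLine("Hello, world!");
    }
}
```

Contrast with C, where `main()` lives at file scope with no wrapper at all.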
Well, everybody has already covered the “C# is more Java than C++” stuff. I was going to chime in, but A) already been done, and B) my only experience with Java OR C++ was over a decade ago. I mostly do Ruby & Go these days.
But! I got GameMaker Studio Pro from a recent Humble Bundle, and I want to start working on a game. I’m running into the same problem as you, though. Everything I find is either aimed at complete beginners, or are tidbits on how to do something really advanced. What would be really nice is a resource like Lazy Foo’s SDL tutorials, which were a great help when I was poking about with SDL forever ago.
I also need some time to experiment and play around, but hopefully this coming long weekend will let me play around a bit.
It’s been a long time since I’ve done any serious coding (unless you count TI-83 or Mathematica programs for my students), but I have to admit that I was caught off-guard by the difference in the C++ and C# programs. I really would have expected more congruence between the two.
As to your other point, I am generally frustrated by what I perceive as a move to video as the default format for freaking everything. When a colleague sends me a link to something related to our work, I would really rather not have to spend the first fifteen seconds waiting for the opportunity to stop the inevitable video at the top so I can read the transcript in peace.
Well within the month then, I am so pleased. Really look forward to this journey.
I started learning to code in January, shortly before my 21st birthday. As a result of this, and of the fact that I really only wanted to do it as a hobby (so I could understand computers more), I’ve come to the conclusion that the book I bought on Java was too complicated. So I decided to start with Python and some HTML instead. (But it’s a money-making career! Why would you rather become an English professor? Questions to be asked…)
That said, I understood the double parentheses portion of this post. Why would a coding language use two parentheses with one… empty, I believe? I don’t get that.
You just summed up why I generally dislike tutorials, guides etc. in video form. More often than not you need only a specific part of the information; in a written text you can usually skip ahead to the paragraph you need, even use things like search to find the right part etc.
Meanwhile, video tutorials require you to spend so much time on needless smalltalk, stuff you already know, stuff you don’t need right now, in order to get to the tiny portion of information you were looking for.
(This all goes especially for video game guides, like the many “all collectibles found in [game]” videos where you have to sit through 10 minutes of video in order to find the 5 seconds relevant to you.)
This is relevant to my interests! I’ve been scripting in C# for about 2 years now. Going backward to C++ would likely feel intense. There’d be a lot of “WHAT!? WHY!?” and “what the hell is a PTR” going on.
Yeah, I’m the IKEA furniture installer of the programming world.
I taught myself (the basics of) C# starting almost from scratch, and almost entirely from Indians with near-silent microphones on YouTube. Honestly, if someone just said “do this” and didn’t walk me through the process visually I would either quit because I was getting bored of reading or quit because I wouldn’t be able to understand what I’m supposed to do. So don’t discount video tutorials, they’re pretty good for absolute beginners like me.
And well, I had a developer friend who was willing to tolerate my incessant questions. That helps too, I guess.
Just realized you missed the opportunity to name this post C#.Sort(“Of”)
No idea if that’s proper syntax or not.
I’m about to commit OO HERESY and possibly explain the static methods. More likely show my ignorance.
Static methods are for all intents and purposes FUNCTIONS in C#, in that they are used like you would use a function. You use a function to do some type of processing that can’t really be tied to an object. Like making a function (static method) to calculate a power of a number (Math class has those BTW). And static variables inside of the same class can be essentially used as global variables if you need them.
Placing a function in a class (thereby making it a static method) really simplifies the problem of namespacing, since now you can reuse the same function name for different things in different classes. Like if you wanted two functions, one to add complex numbers and another to do vector addition: now you can have a class Complex that deals with complex numbers (it can both represent a complex number AND contain static and non-static functions for working with them) (probably also HERESY as far as OO programming is concerned, but it works) and a class Vector for working with vectors, and both will have their own version of Add: Complex.Add and Vector.Add.
Which IMO simplifies the namespace problems of having long ass function names to separate them by different flavors.
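A minimal sketch of that idea; the type and member names here are made up for illustration, not from any real library:

```csharp
using System;

// Two unrelated "Add" operations live comfortably under two class
// names, instead of needing addComplex()/addVector() free functions.
struct Complex
{
    public double Re, Im;
    public static Complex Add(Complex a, Complex b) =>
        new Complex { Re = a.Re + b.Re, Im = a.Im + b.Im };
}

struct Vector2D
{
    public double X, Y;
    public static Vector2D Add(Vector2D a, Vector2D b) =>
        new Vector2D { X = a.X + b.X, Y = a.Y + b.Y };
}

class Demo
{
    static void Main()
    {
        var c = Complex.Add(new Complex { Re = 1, Im = 2 },
                            new Complex { Re = 3, Im = 4 });
        Console.WriteLine($"{c.Re} + {c.Im}i"); // prints "4 + 6i"
    }
}
```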
Oh and BTW, one thing you will probably learn once you actually start studying C# (which is what I recommend, since you really need to know what .NET and C# are doing, because performance is important to you) is that static variables of a class can be given a value during their declaration
static int x=5;
but they can also be given values using a static constructor of the class:
class A {
    public static int x;
    static A() {  // note: a static constructor can't have an access modifier
        x = 5;
    }
}
So the first time (I think) any of the static members of this class is accessed, the framework will run the static constructor (if it exists) to populate them. It will not be run again afterwards. Now, you are unlikely to use this for simple things like the above, but if you want to load values from, let’s say, the file system, you will need some code to load and parse the file first, which can be done in a static constructor.
But seriously, get a book or read Microsoft articles to get the idea of what C# and .NET do in the background and how its concepts work.
The thing to do with video is put it on behind a light videogame that doesn’t mind being alt-tabbed.
Playing Salt and Sanctuary. After the first runthrough, like any game, it’s not really enough by itself to hold my attention, meaning it’s perfect for putting in front of video lectures or podcasts. Now I have two chars in NG+5, plus five more chars in various advanced states.
But, likewise, much of the video is fairly pointless. There’s no reason to give it more than a fragment of my attention…until it gets to a good part. Then I can tab to it, rewind, pause, etc.
In the past I’ve used WoW, Dwarf Fortress, Darkest Dungeon… I find Legend of Grimrock, both 1 and 2, to be highly meditative zen experiences to replay, provided the game doesn’t have to hold my attention by itself.
C# sounds disturbingly Java-like; my experience with Java is slim, but it always struck me as being C++ with the training wheels welded on.
It’s a shame that you’re bound to the PC-verse, as I think Objective-C (macOS/iOS) would be much more to your liking. It’s a pretty simple meta-set on C (not even C++, though you can add C++ to Objective-C) that makes OO work very straightforward.
Of course, it’s also theoretically going away, replaced by Swift…
It is *not* video for the convenience of the producer. You are 100% correct that video is much harder to create.
However, videos are still novel and get a lot better search mojo. The best is video-with-transcript, for the producer.
Not always harder. A sequence of five button presses takes a few seconds if you’re recording video. Taking screenshots and highlighting the buttons and writing text to explain it all and laying it out nicely is a lot harder.
Catlike coding is exactly what you are looking for.
A bunch of text articles that talk about procedural generation. Section 1.2 should be what you are looking for.
I’m working on implementing marching cubes and his articles are invaluable.
They look excellent, thanks for posting the link.
Shamus why don’t you get one of Petzold’s books? He always does the trick for me. Granted he doesn’t have anything for unity but he can get you up to speed on C# and the .NET environment.
Or perhaps one of these books
“I've always thought of video as more labor-intensive to produce and lectures more time-consuming to consume, so from my perspective it feels like everyone is wasting their time for no reason.”
I find this comment hilarious, because it more or less reflects my feelings when your blog started having videos.
Except the rollercoaster ones, because human bowling is always worth watching.
And why I came to this site for the articles, not the videos. I don’t think I ever watched a single episode of Spoiler Warning all the way through. Well, technically articles and comics, because my first introduction was when one of my fellow NWN players linked DMotR back in 2006.
I guess most people reading articles here are finding text better for how they consume things. That’s also my case.
But I started listening to podcasts and some videos when I discovered I could do that while playing many “mechanical” games (no deep storytelling): rogue-likes, strategy games, and most action games (like Good Robot)… Sometimes I miss something, but it works quite well overall.
I doubt it would be great for programming tutorials, though…
For a few months now I’ve also been contemplating learning Unity and C# while feeling that videos are mostly a waste of time, so I’m really looking forward to your next posts on that subject.
But I have some Java experience in addition to C++, so as other commentators have noted it should be easier.
Speaking of strange moments of hilarious confusion, Shamus’ hilarious confusion is itself hilariously confusing to me, because I got into programming from Python, where everything just works and you never think about things like the location of main().
To the tune of “But what do they eat?”:
But what instantiates the variables?
Last time I worked in Unity (2015) it was still based on an older version of Mono (the free MS implementation of the .NET framework) due to a licence change, so it was missing language features in the latest MS C#. This sometimes became an issue, as online C# language references would be for the later revs. I don’t know if this has changed since.
I don’t think Mono was made by Microsoft. .NET is already free. Mono is what .NET was supposed to be: an implementation of the .NET compilers (since all .NET assemblies are compiled by the JIT right before execution) that can run in environments that are not Windows.
Basically they took the specification of JIT and bunch of other low level features of .NET and whose expected behavior is known and implemented those.
They are using Mono 4.4, which offers significant upgrades, but the compiler still targets C# 4 / .NET 3.5.
Most of the C# 5 and higher features aren’t really needed anyways, they’re mostly async stuff which don’t quite match with Unity’s coroutines. At least not without significant engineering.
Although if they did the effort to make coroutines play with async/await that’d be amazing.
I’ve found that a good rule of thumb is to pretend all video tutorials don’t exist. Saves me a bunch of time.
Worse than videos explaining how to do something that would be better explained as text are videos which are literally a sequence of text images with annoying background music. No speech, no moving parts, just text. This has to be more work than just putting the text on a blog. The only reason I can think of is they might get money out of youtube if enough people watch it, and if they’re getting money they don’t care if all their customers are just pi**ed off at them.
Also, having watched some video tutorials, I’m not convinced much work has gone into them.
ohhhh, still worse are videos where the text is also read by a speech synthesizer!
Shamus…
Take the “why even a video” thing to the logical extreme:
Why don’t you contact Sebastian Lague (or someone similar) and do a video series TOGETHER, titled “Sebastian teaches Shamus C#” or some such?
I feel like you’re in a really unique position here, as someone that both wants to consume the tutorial content, but also someone whose job is producing content, like this blog, and has had some success doing video content.
I dunno if it would be popular or interesting, but the back-and-forth between two different programmers coming from their different perspectives could be interesting and entertaining in and of itself. I’m imagining something like you’re both recording in real time, while one is remoted into the other’s desktop or something similar, with a good Unity IDE open; so you could actually be stealing the cursor back and forth in video and typing out the code and talking about it all in a piece.
Like a “Let’s Play” of learning a new language. I think it could work?
As of this week I am a full time indie game dev (me and an artist friend of mine are making a go of it) and our first game is in Unity. Honestly when it comes to Unity it is almost all “knowing that thing you do.” I’m part of a discord group with a bunch of hobbyists (which I was until recently) and apparently having me in there is great since I tend to know the answer to everything from years of poking at Unity.
Learning C# wasn’t too bad for me, but I cheated by being taught Java and C++ back a decade ago in high school. But learning unity took a while of prodding, figuring out what works and doesn’t in the engine.
That being said, if your interests lie in procedural terrain generation it may not be something I can help with. If I had to do it, I’d probably write a shader that takes in a randomly generated heightmap and deforms a plane to do it, but then there’s already a terrain system built into Unity that can take in heightmaps.
That said, I tend to stay away from the graphical side of things when it comes to programming. It’s practically another language and while I could learn it, I’ve got a whole lot of game to make right now. Despite that, if you have any questions, either here or by email, regarding Unity I’ll try to answer it best as I can.
Yay! Shamus will explain C# to me! I’m not likely to use it soon but I keep feeling that I should try and keep visibility of something which is not Python. So thanks a bunch for doing this!
I also went to C#/Unity from C++ and it was an interesting road. My strategy was to buy an intro-to-programming/C#-through-Unity book after also being frustrated by the fact that almost all the tutorials were video.
I think that strategy worked out fairly well. Learning from a beginners book like that was naturally a bit slow but liberal skimming sped it up and I only finished maybe 1/3 of the book before I knew what I was doing enough to set off and start a project of my own/researching whatever specifics interested me. Of course the downside of that is said book was outdated within months of being published, and would be horrifically outdated now, like 4 years later.
But if you want text-based tutorials books seem to be the only way to go, I half suspect that’s why they’re so rare on the internet, people who write them can sell them so why give them away for free?
At any rate I hope you continue writing on this topic. Your programming posts always serve as a nice inspiration to stop procrastinating on my hobby projects.
I haven’t read any of the other comments so this might have been said already, but perhaps not.
You’ve identified a gap in the market, Shamus, and given your writerly persuasions and knack with breaking down technical concepts for people to understand, YOU could potentially be the author of a book: Introduction to Unity for Experienced Programmers.
Or you could do online articles and gain search results / clicks that way, etc.
I don’t know your specific learning style, Shamus, but I taught myself C# just by having a task I wanted to complete. I got into the modding scene for KSP (all C# + Unity) and taught myself C# (but not Unity, sorry) to pick up a mod I liked when its creator dropped it, then went on to write three or four more of my own. I find that having a goal in mind really helps with learning because it’s practical, needs-driven learning. You need a thing but you don’t know what it is, so you go and find it.
Here’s to a successful learning endeavor!
As a mathematician using Python (among others), I’ve always wondered what that final step is that takes programs from “execute your .py file using your installed python distribution” to “execute this self-contained executable that does not need anything pre-installed”. Apparently it’s not that big of a step, since C-style programs also simply point to a first line of code to start at, in their case a function. Python may be doing the same thing, just (conceptually if not literally) putting a main() wrapper around the file you’re trying to run.
Anyone interested in enlightening a dabbling programmer?
Well, from what I understand it’s basically what you are saying. Pure exe machine-code files have a certain format that the OS expects, so it can determine where the entry point is and start executing from there. And that is all there is to it. All the rest of the loading of other libraries is done by the OS or by the exe itself.
On the other hand, Python scripts need to be passed to an interpreter (processor/compiler/whatever its name is) that will read them and interpret each command in the script file in turn.
Java and .NET are somewhere in between (disclaimer: I don’t know the Java architecture). The exe files for .NET aren’t simply encapsulations of the C# scripts. Actual compiling does happen, in that the C#/F#/VB code is converted into machine-independent CIL (Common Intermediate Language) and packed into an assembly (exe or dll).
Then once you double-click on the .NET executable, the system determines that this is in fact a .NET executable and passes it to the framework, which uses the just-in-time compiler to compile the machine-independent CIL into actual machine-DEPENDENT code to run on the computer. This machine code is basically like any other machine code on the system.
This should allow you to compile one exe, and then be able to run it on Linux and Windows with no recompiling. And it does work mostly like that, with the trouble being that the full framework implementation is only available on Windows, while Mono (a community developed alternative implementation) which does target linux and Windows doesn’t have the full implementation. Still this does allow you to run simple Windows Forms applications on both Windows AND Linux with next to no recompiling.
The difference is between interpreted languages (Python, JavaScript, etc.) and compiled languages (C, C++, etc.). Basically, where normally a program compiles to machine code (instructions that map directly to what the processor does) and you then run the compiled executable file, interpreted languages execute the program code directly (or, well, since that’s not actually possible, translate it to machine code right before a given line of code runs) and thus need to be run through an interpreter that actually knows what the code means (whereas a compiled program already matches the instruction set of the computer it’s on).
EDIT: Ninjas! Eloquent ninjas!
Have you encountered the scripts and tips wiki? They aren’t tutorials as such, and the quality of the commenting varies, but I’ve found example scripts pretty useful. There’s a fair bit on there about procedural mesh generation. Also, the Mesh class in the actual unity documentation is very well documented.
EDIT: I should mention, I tried implementing something based on Project Frontier in Unity a couple of years back. It’s javascript, so the code probably isn’t of much interest to you, but I may be able to help with the Unity side of things.
On video …
… the place I work does a lot of training and informational stuff. There was some video content, simple piece to camera, originally done by the “Manager in charge” and it was …. poor, to say the least. Later on the client had the content updated and this time paid to have a professional presenter — she’s a local TV presenter — do the piece to camera: The difference!! You could see why she had spent those years at stage school.
TL;DR Video presentation is a real skill — and that’s not even touching production and editing!
It occurs to me, C# is a bit like C++ with the debugging turned on. All the objects continue to exist at run time, with names and everything, instead of being aggressively optimised.
As for main, I think that’s really really a special case. It is in C too — “something” knows to call main. And in C#, “something” knows to call that main function.
As a programmer that jumped from C and C++ to C# during the .NET 2.0 days and stuck with it. I find it to be an incredible language but at the same time it was difficult learning the differences at first. The obvious difference is that with C# everything is an object. While at the same time some things are not objects but instead are object-like wrappers around data types.
The whole instance and static dichotomy in C# is weird at first but something that eventually you grow to embrace and appreciate. It forces you to keep things neatly categorized. If you look into the IL being generated when you call a static and instance method that are otherwise identical, you begin to see what the actual difference is. In the function call there is a hidden parameter the compiler handles for you on instance method calls. That parameter is the instance to execute it on. So Add(X, Y) is really Add(this, X, Y). Extension methods kind of expose this to you while not directly making it obvious what’s going on.
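A small sketch of that hidden-parameter idea, using a made-up extension method (the names are illustrative, not from any real API):

```csharp
using System;

// An instance-style call s.Doubled() compiles to the same thing as
// the explicit static call StringMath.Doubled(s): the receiver is
// just a hidden first parameter, which the `this` modifier exposes.
static class StringMath
{
    public static string Doubled(this string s) => s + s;
}

class Demo
{
    static void Main()
    {
        string s = "ab";
        Console.WriteLine(s.Doubled());           // prints "abab"
        Console.WriteLine(StringMath.Doubled(s)); // prints "abab"
    }
}
```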
However, I have one not-so-confusing thing to help you out. The best resource for learning C# is not a tutorial, it’s not a book or even a teacher. It’s the official Microsoft documentation that gets installed with the IDE. Especially for a programmer that is familiar with C++ and programming in general. I’m sure you know what constructs you need; digging through the documentation is pretty easy if you know what you need in C++ and just want to learn the C# equivalent (or alternative in some cases). There’s a damn good reason why people who are familiar with Microsoft’s documentation call it the ‘gold standard of documentation’: it really is that good.
“These concepts are all all things”. Y’all need one less “all”.
Welcome to the wide world of extroversion. They have cakes, and are prepared to tell you about the cakes’ life history in exhaustive detail even if you try to walk away.
I know I’m being slightly unreasonable, but this bothers me. ‘class’ isn’t a holdover from C, such that you’ll still see C++ code that might typedef away the need to use the word ‘struct’ in an object declaration, because it was either that or ‘struct SomeStruct object;’ in C.
Is this a typo slash copy/paste issue, or do you typically declare class objects this way? If so, where did you pick that up from?
Copy / paste issue. It’s a bit strange when I write about code here on the blog. On the front page you see it with nice fixed-width font, highlighting, and other IDE-type comforts. But when I’m editing a post I have to type everything using the same variable-width font as everything else. So I often don’t notice odd looking code, because in the editing page it ALL looks odd.
Confession: Some programmers are really bad with copy/paste errors, but I am indefensibly bad about it. A LOT of my bugs are copypasta.
Ah.. the world of tutorials. We all need them but I hate it. And video tutorials drive me insane! they are always, 100% of the time, way too slow. I mean, it’s a video, for christ’s sake, I can go back any time I want! you don’t have to repeat yourself. And video tutorial for coding doesn’t even make sense. Loading a 4K video just to read text on the screen? naaah.
Discovering I can alter playback speed on youtube made my life much better. I’m always messing with the speed, even for non tutorials.
I didn’t realize how fortunate I was! My computer science classes taught mostly Java, which I subsequently forgot, only to marvel years later at how this Unity3D thing felt oddly painless. But it sounds like when I need to use Unreal Engine with its C++ for something, it’s going to be quite an uphill battle. :(
For some good articles on Procedural Generation in Unity. I quite like these:
interesting case studies:
great introduction:
I’m totally with you on the video tutorial issue. There are some things that do make more sense as videos (car maintenance for example) but for a lot of topics I can process the information in a written tutorial much easier and faster than a video tutorial.
This is only somewhat related but the video thing infuriates me to no end. So often I click a link expecting text or an article and get a video and immediately back out of it. No patience to watch it when I likely only need information from 15 seconds of the 5-10 minute video or longer.
Please keep us posted on the C# learning!
Even if I don’t make much with it, it’ll be a good read.
Hey) I’m new to programming and it’s interesting to read how others are conquering this hard tower))). But the author is not as new to this field as me) I started without any basic knowledge of programming… I started here. The only thing that made me complete the course is curiosity – there you read chapters of a story and code while reading. Otherwise I would have stopped at the very first task))
Actually, I’m a junior now, so not much time has passed since I started… Even though I have nothing to compare it with (I mean comparing C# with other programming languages, as the author does), I can say that C# is a beautiful language: it is clear and logical. I will surely continue mastering it – and thanks to the author for sharing his knowledge of C++ and the comparison with C#.
0.006 Oct 14, 2014 - Production release identical to 0.005_07 release 0.005_07 Oct 13, 2014 - commands are printed as they are executed for easier debugging (plicease gh#91) - c_compiler_required repository option (default on) 0.005_06 Sep 30, 2014 - ExtUtils::Depends integration (plicease gh#85, gh#87) - added alien_check_built_version method to Alien::Base::ModuleBuild (plicease gh#83, gh#89) 0.005_05 Sep 29, 2014 - fix regression in test on MSWin32 introduced in 0.005_04 0.005_04 Sep 28, 2014 - added alien_bin_requires property to Alien::Base::ModuleBuild (plicease gh#84, gh#88) - added alien_msys property to Alien::Base::ModuleBuild (plicease gh#86) 0.005_03 Sep 23, 2014 - Inline tests requires Inline 0.56 (skip elsewise) - Document Inline 0.56 or better required for Inline integration 0.005_02 Sep 23, 2014 - silence Archive::Extract deprecation warning (we explicitly specify it as a prereq) - remove accidental undeclared dependency on YAML introduced in 0.005_01 - fixed test failures introduced in 0.005_01 0.005_01 Sep 22, 2014 - fixes with static library detection when pkg-config is not available (plicease gh#79, gh#75) - support for Inline 'with' (plicease gh#71, gh#77, gh#78) - fix prereqs for Text::ParseWords and PkgConfig (plicease gh#73, gh#70) 0.005 Sep 11, 2014 - improved documentation coverage 0.004_05 Sep 09, 2014 - additional use / instead of \ on MSWin32 (plicease gh#68) 0.004_04 Sep 09, 2014 - fixed test error introduced in 0.004_03 expressed on cygwin (plicease gh#67) 0.004_03 Sep 09, 2014 - added support for destdir (plicease gh#65, gh#39) - no longer attempt to dl_load static libraries, which aside from being wrong was triggering a dialog warning in windows (plicease gh#64) - use / instead of \ on MSWin32 (plicease gh#64) 0.004_02 Sep 04, 2014 - fixed MSWin32 specific bug introduced in 0.004_01 (plicease gh#59) - use pure perl PkgConfig as an alternative to pkg-config if the latter is not provided by operating system (plicease gh#61) - support 
SHA-1/256 sum checks for downloads (vikasnkumar++ gh#33, gh#60) 0.004_01 Sep 04, 2014 - Libraries in the share directory are preferred over the system library if that is what was used during install of the Alien module (plicease++ gh#22) - Better handling of paths on Windows (zmughal++ gh#41) - Fix test failure when pkg-config is not available (mohawk2++ gh#44) - Support for autotools on Windows (MSWin32, but not cygwin) (plicease++ gh#46) - Alien::MSYS will be injected as a build_requires on Windows if autotools is detected - "%c" can now be used as a platform independent way of running autotool based "configure" script - The new default for build uses "%c" instead of "%pconfigure" - Added property alien_isolate_dynamic which allows an Alien author to avoid using dynamic libraries when building XS modules (plicease gh#51) - Added dynamic_libs which returns a list of dynamic libraries (.dll, .so or .dylib depending on platform) which can be used for FFI modules (see FFI::Raw) (plicease gh#51) - Added support for LWP as an alternative to HTTP::Tiny (preaction++ gh#24) - Added support for content-disposition HTTP header to determine correct filename and determine format from that (rsimoes++ gh#27) - By default run autotools style configure scripts with --with-pic and add alien_autoconf_with_pic property to allow disabling that (plicease gh#47) 0.004 Mar 5, 2014 - Added version token to the interpolator (MidLifeXis++) - Fixed broken test (MidLifeXis++) 0.003 Mar 3, 2013 - Added 'blib scheme' detection logic - Improves Mac/CPANtesters compatibility - Controlled by ALIEN_BLIB env var - ACTION_alien is now ACTION_alien_code - Added ACTION_alien_install - Fix manual .pc file bug - Unbuffer STDOUT during ACTION_alien_* 0.002 Jan 27, 2013 - Added exact_filename key (giatorta++) - Various bugfixes 0.001_003 Nov 29, 2012 - Improved pkg-config handling - Added support for pkg-config key ${pcfiledir} - Note: released from "experimental" branch 0.001_002 Nov 5, 2012 - Fixed some 
false positives in library detection - Initialize temporary directories later - Note: released from "experimental" branch 0.001_001 Nov 4, 2012 - Improved library detection - Library files are added to packlist - Note: released from "packlist" branch 0.001 Oct 9, 2012 - First Beta release! - Documentation updated - Better autogeneration of pkgconfig information (run4flat++) 0.000_022 Oct 8, 2012 - Major refactoring - separate alien_{x}_commands where x = build, test, install - removed mac specific code - no longer test provisioning (it never worked anyway) - directly allow library to install to final share_dir destination - Moved Alien::DontPanic and Ford::Prefect to CPAN under Acme:: namespaces 0.000_021 Jul 25, 2012 - Some fixes for Mac, not sure its working yet 0.000_020 Jun 22, 2012 - Windows now passes the test suite (another cleanup error trapped) - Begin overloading copy_if_modified for relocalizing dylibs on mac (this is not working yet, this release is for windows testers) 0.000_019 Jun 21, 2012 - REALLY return to EU::LibBuilder (sorry for the noise)
https://metacpan.org/changes/distribution/Alien-Base
I've tracked down the test_cartesian error currently on the
dashboard. The problem is that vgl_1d_basis defines an inline
constructor that calls collinear(T,T,T). This is a method in a
templated class. Even so, the C++ standard requires that all functions
that fall into the overload set (i.e. potential candidates) be declared
before this. Many compilers are relatively loose about this, mostly due
to specific implementation details.
To do it properly, the code would have to be something like
bool collinear(float,float,float);
#include <vgl/vgl_1d_basis.h>
bool collinear(float,float,float){...};
void f() {
vgl_1d_basis<float> b(1,2,3);
}
Now, if we make the function non-inline (implemented only in the
.txx), then there is no reference to collinear in the header file, so
all is okay. We'd need to make sure it is declared before including
the .txx.
I think a quick solution to the problem is to make the constructor not
inline, but I don't know the potential performance costs in code that
uses it.
As somewhat of an aside, note also that it is illegal for the same
function to appear in multiple translation units. When declared
"inline", the function definition may be present in multiple
translation units, but must have identical implementations in each TU.
Amitha.
http://sourceforge.net/p/vxl/mailman/vxl-maintainers/?viewmonth=200605&viewday=3
If I want to check if a pixel is black image.getRGB(x, y) will equal 0 right? [java]
This may be a simple question but I can't find anything to confirm my hypothesis.
Should I expect the color of the pixel to be black if
image.getRGB(x, y) returns 0?
I would expect 0 because the bit values of each of the values (Red, Green, Blue) would be zero. Am I correct in thinking this?
2 answers
- answered 2017-11-12 19:54 user2864740
"Returns the RGB value representing the color in the default sRGB ColorModel. (Bits 24-31 are alpha, 16-23 are red, 8-15 are green, 0-7 are blue)."
That is, the packing (in hex locations) is as follows, where each component can have a value of 0 (0x00) .. 255 (0xFF).
AARRGGBB
Thus, the final value is not only dependent upon RGB, when all color components are zero:
AA000000
In fact, AA will be 0xFF ("100% opaque") by default unless it has explicitly been set to a different value in a buffer / model that supports an alpha channel.
- answered 2017-11-12 20:01 Bedla
No, BufferedImage#getRGB() returns the full packed ARGB value (usually written in hex), not 0. See this unit test:

import static java.awt.image.BufferedImage.TYPE_BYTE_BINARY;

import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

import junit.framework.TestCase;
import org.junit.Test;

public class TestRgb {
    @Test
    public void testBlack() {
        BufferedImage bufferedImage = new BufferedImage(1, 1, TYPE_BYTE_BINARY);
        Graphics2D graphics2D = bufferedImage.createGraphics();
        graphics2D.setPaint(new Color(0, 0, 0)); // black
        graphics2D.fillRect(0, 0, 1, 1);

        // pass - alpha channel set by default, even on all black pixels
        TestCase.assertTrue(bufferedImage.getRGB(0, 0) == 0xFF000000);

        // pass - when looking at just the color values (last 24 bits) the value is 0
        TestCase.assertTrue((bufferedImage.getRGB(0, 0) & 0x00FFFFFF) == 0);

        // fail - see above
        TestCase.assertTrue(bufferedImage.getRGB(0, 0) == 0);
    }
}
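The packed-ARGB arithmetic behind both answers is language-agnostic; the small Python sketch below (the helper name pack_argb is illustrative, not a Java API) shows why an opaque black pixel compares equal to 0xFF000000 rather than 0:

```python
# Pack ARGB components (each 0..255) into one 32-bit value laid out as AARRGGBB.
def pack_argb(a, r, g, b):
    return (a << 24) | (r << 16) | (g << 8) | b

opaque_black = pack_argb(0xFF, 0, 0, 0)
print(hex(opaque_black))           # 0xff000000, not 0

# Masking off the alpha bits leaves just the 24 colour bits.
print(opaque_black & 0x00FFFFFF)   # 0 -> the pixel's colour really is black
```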
http://quabr.com/47253173/if-i-want-to-check-if-a-pixel-is-black-image-getrgbx-y-will-equal-0-right-j
I am creating a path of Markers to be displayed on a GMapControl in my C#/XAML (WPF) application. The control requires I create a UIElement to be overlaid on the map as a marker. I have created a very simple UserControl for this purpose, as follows:
<UserControl x:Class="Project.Resources.Circle"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
mc:Ignorable="d">
<Grid>
<Ellipse Width="5" Height="5" Stroke="Red" Fill="Red" />
</Grid>
</UserControl>
However, when I build the line, creating anything up to 400 instances of this control, my application freezes. Since I can only seem to create UserControl's on the UI thread (and not in a Thread or BackgroundWorker), what can I do to speed up the creation of new Circle instances?
Is there a more lightweight UIElement than a UserControl? Any guidance on this appreciated.
You could create a minimal derived UIElement like this:
public class Circle : UIElement
{
    protected override void OnRender(DrawingContext drawingContext)
    {
        const double radius = 2.5;
        drawingContext.DrawEllipse(Brushes.Red, null, new Point(), radius, radius);
    }
}
|
http://www.dlxedu.com/askdetail/3/4890f9bc3555bf2e6620ab5988831306.html
|
CC-MAIN-2018-39
|
refinedweb
| 174
| 54.12
|
> I don't want the caller to call import but a function.

come again?

>>> type(__builtins__.__import__)
<type 'builtin_function_or_method'>

I didn't mean that __import__ isn't a function, but that I want to make a function called ImportFile that actually does something very similar to what __import__ does. So to rephrase the question: how does __import__ load a module into the caller's namespace?

Example:

file1:

def ImportFile(fileName):
    parsedCode = Parser(fileName).Parse()
    module = new.module(name)
    exec parsedCode in module.__dict__
    sys.modules[name] = module
    import name  # !!!!! This doesn't work. Imports into file1's namespace !!!!!

file2:

import file1
file1.ImportFile(fileName)
fileName.function()  # This won't work because the import happened locally in file1!

Now the import in file1 doesn't take effect in file2. So what do I have to do to make that work? And I don't want to do a custom hook to import. So how does __import__ do it?
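For context on the question above: __import__ does not inject names into the caller's namespace either — the import statement binds the name in whatever scope executes it. The usual fix is to register the new module in sys.modules and return it, so the caller binds the name itself. A minimal Python 3 sketch (the function name and the skipped parsing step are illustrative assumptions, not the poster's code):

```python
import sys
import types

def import_file(file_name, module_name):
    """Compile file_name's source into a fresh module object,
    register it in sys.modules, and hand it back to the caller."""
    with open(file_name) as fh:
        source = fh.read()
    module = types.ModuleType(module_name)
    module.__file__ = file_name
    exec(compile(source, file_name, "exec"), module.__dict__)
    sys.modules[module_name] = module  # a later `import module_name` now finds it
    return module

# The caller binds the name explicitly instead of expecting the helper
# to mutate the caller's namespace:
#
#     mymod = import_file("mymod.py", "mymod")
#     mymod.function()
```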
https://mail.python.org/pipermail/python-list/2006-May/382025.html
In this tutorial we will discuss how to extract table from PDF files using Python.
Table of contents
Introduction
Sample PDF files
Extract single table from a single page of PDF using Python
Extract multiple tables from a single page of PDF using Python
Extract all tables from PDF using Python
Conclusion
Introduction
When reading research papers or working through some technical guides, we often obtain then in PDF format. They carry a lot of useful information and the reader may be particularly interested in some tables with datasets or findings and results of research papers. However, we all face a difficulty of easily extracting those tables to Excel or DataFrames.
Thanks to Python and some of its amazing libraries, you can now extract these tables with a few lines of code!
To continue following this tutorial we will need the following Python library: tabula-py.
If you don’t have it installed, please open “Command Prompt” (on Windows) and install it using the following code:
pip install tabula-py
tabula-py is a Python wrapper for tabula-java, so you will also need Java installed on your computer. You can download it here.
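Since tabula-py shells out to the Java tabula library, a quick way to confirm both prerequisites are in place — this check is a sketch, not part of tabula-py's documented setup — is:

```python
import shutil
import importlib.util

# tabula-py must be importable, and a `java` executable must be on PATH.
has_tabula = importlib.util.find_spec("tabula") is not None
java_path = shutil.which("java")

print("tabula-py installed:", has_tabula)
print("java executable:", java_path or "NOT FOUND")
```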
Sample PDF files
Now that we have the requirements installed, let’s find a few sample PDF files from which we will be extracting the tables.
This file is used solely for the purposes of the code examples:
Now let’s dive into the code!

Extract single table from a single page of PDF using Python
Suppose you are interested in extracting the first table which looks like this:
We know that it is on the first page of the PDF file. Now we can extract it to CSV or DataFrame using Python:
Method 1:
Step 1: Import library and define file path
import tabula
pdf_path = ""
Step 2: Extract table from PDF file
dfs = tabula.read_pdf(pdf_path, pages='1')
The above code reads the first page of the PDF file, searching for tables, and appends each table as a DataFrame into a list of DataFrames dfs.
Here we expected only a single table, therefore the length of the dfs list should be 1:
print(len(dfs))
And it should return:
1
You can also validate the result by displaying the contents of the first element in the list:
print(dfs[0])
And get:
Number of Coils Number of Paperclips
0 5 3, 5, 4
1 10 7, 8, 6
2 15 11, 10, 12
3 20 15, 13, 14
Step 3: Write dataframe to CSV file
Simply write the DataFrame to CSV in the same directory:
dfs[0].to_csv("first_table.csv")
Method 2:
This method will produce the same result, and rather than going step-by-step, the library provides a one-line solution:
import tabula
tabula.convert_into(pdf_path, "first_table.csv", output_format="csv", pages='1')
Important:
Both of the above methods are easy to use when you are sure that there is only one table on a particular page.
In the next section we will explore how to adjust the code when working with multiple tables.
Extract multiple tables from a single page of PDF using Python
Recall that the PDF file has 2 tables on page 2.
We want to extract the tables below:
and
Using Method 1 from the previous section, we can extract each table as a DataFrame and create a list of DataFrames:
import tabula
pdf_path = ""

dfs = tabula.read_pdf(pdf_path, pages='2')

Notice that in this case we set pages='2', since we are extracting tables from page 2 of the PDF file.
Check that the list contains two DataFrames:
print(len(dfs))
And it should return:
2
Now that the list contains more than one DataFrame, each can be extracted in a separated CSV file using a for loop:
for i in range(len(dfs)):
dfs[i].to_csv(f"table_{i}.csv")
and you should get two CSV files: table_0.csv and table_1.csv.
Note: if you try to use Method 2 described in the previous section, it will extract both tables into a single CSV file, and you would need to split them into two files manually.
Extract all tables from PDF using Python
In the above sections we focused on extracting tables from a given single page (page 1 or page 2). Now what do we do if we simply want to get all of the tables from the PDF file into different CSV files?
It is easily solvable with tabula-py library. The code is almost identical to the previous part. The only change we would need to do is set pages='all', so the code extracts all of the tables it finds as DataFrames and creates a list with them:
import tabula
pdf_path = ""

dfs = tabula.read_pdf(pdf_path, pages='all')
Check that the list contains all three DataFrames:
print(len(dfs))
And it should return:
3
Now that the list contains more than one DataFrame, each can be extracted in a separated CSV file using a for loop:
for i in range(len(dfs)):
dfs[i].to_csv(f"table_{i}.csv")
Conclusion
In this article we discussed how to extract table from PDF files using tabula-py library.
Feel free to leave comments below if you have any questions or have suggestions for some edits and check out more of my Python Programming articles.
The post Extract Table from PDF using Python appeared first on PyShark.
https://online-code-generator.com/extract-table-from-pdf-using-python/
What is Java?
Java is a high-level, object-oriented programming language. Java is also called a platform because it has its own runtime environment; a platform is any software or hardware on which we can run a program. Java was designed to be small, simple, and portable across platforms and operating systems. Java was modeled after C++ and written as a full-fledged programming language.
Java was originally developed in 1991 by Sun Microsystems, a company best known for its high-end UNIX workstations. Java is an efficient, fast, and easily portable programming language, and it is considered an ideal language for distributing executable programs through the World Wide Web. Java is also a general purpose language for developing programs that are easily usable and portable across different platforms.
Example of Java:
Consider the following example in which the first program in Java is demonstrated:
CODE:
public class MyFirstProgram {
    public static void main(String[] args) {
        System.out.println("My First Program in Java");
    }
}
Why Learn Java?
Java is learned because the applets for the World Wide Web are written in Java and this must be the most compelling reason to learn Java. Java as the programming language has many advantages on other programming languages. The following are some of the advantages of Java:
Platform Independent:
Platform independence is considered to be one of the most significant advantages that Java has over other programming languages. Platform independence means that a program written on one computer can be moved to and run on another computer. As Java is an object oriented programming language, its classes make the code easy to write, and hence source code can easily be transferred from one platform to another.
In Java the binary files are also platform independent. This means that they can be run on a number of platforms without recompiling the source code. The binary files of Java are in a form of byte codes. Byte codes are a set of instructions that looks like a machine code.
A Java development environment has two parts that is a Java compiler and a Java interpreter. The compiler of Java takes the Java program and then generates byte codes. In other programming languages such as C, C++, the compiler takes the program and then generates a machine code. But this is not with the Java compiler.
A Java program is then run by the Java interpreter and to run the program in Java the byte code interpreter is run.
A Java program is in byte code form. It means that the program is not run only in one system but can be run on other systems or platforms or any other operating system such as window. This can only be done as long as the Java interpreter is available.
The disadvantage of using the byte code is that the speed of execution becomes slow sometimes. This is mainly because the specific programs of the system are run directly on that hardware on which they are compiled and these types of programs run more faster than Java byte code, because the byte codes are mandatory to be processed by the interpreter.
Java is Object Oriented programming language:
Object oriented programming is a technique through which the programs are organized. The program becomes more flexible. A modular program is obtained when using the object oriented programming. By modular programs we mean the programs that are divided into small modules or functions. The Java’s object oriented programming concepts are mainly inherited from C++ programming language. Java like the other object oriented programming languages has classes and libraries. These provide basic data types and input and output functions and other utility functions. The Java development kit has classes that support networking, internet protocols and user interface functions. As these classes are defined in Java therefore, they are portable and can be used on other platforms.
Java is easy to learn:
Java is an easy programming language to learn because it was designed to be simple. Programs in Java are easy to write, understand, compile, and debug. At the same time, Java is a flexible and powerful programming language. Much of Java's syntax is inherited from C++, since Java was developed after C and C++; C++ is therefore the base of Java programming, and familiarity with it makes Java easier to learn.
The complex parts of C and C++ are excluded from Java which makes the programming language easy to learn. In Java there is no concept of pointers and the strings and arrays are considered as the real objects in this programming language.
Where Java is used?
Java is used in more than three million devices. Java is used on the following devices:
- In media players, acrobat reader and other desktop applications.
- Java is widely used in mobile phones.
- Java is used in games, robotics, embedded systems etc.
Types of Java Applications:
There are four types of applications in Java:
The standalone application can also be considered to be the desktop application or a window based application. The stand alone applications can be run on any machine. But this machine should have installed the Java run time system.
A web application is that in which a client can run a web browser. The web application runs on the server side. We use the Java web applications to create dynamic websites. The technologies Servlets and JSPs are used for the web applications in Java. The Java web application is needed when we want our information to be dynamic. The simple website can be created by using the static HTML pages.
An enterprise application is considered to be as the business application such as banking applications. The technology EJB is used in Java for creating the enterprise applications in Java. The enterprise applications are user friendly and have an advantage of high security.
There are various applications in Java that are created for mobile devices such as android; browser based mobile apps and IOS, etc.
http://www.tutorialology.com/java/what-is-java/
QDomNamedNodeMap Class
The QDomNamedNodeMap class contains a collection of nodes that can be accessed by name. More...
Note: All functions in this class are reentrant.
Public Functions
Detailed Description
The QDomNamedNodeMap class contains a collection of nodes that can be accessed by name.
Note that QDomNamedNodeMap does not inherit from QDomNodeList. QDomNamedNodeMaps do not provide any specific node ordering. Although nodes in a QDomNamedNodeMap may be accessed by an ordinal index, this is simply to allow a convenient enumeration of the contents of a QDomNamedNodeMap, and does not imply that the DOM specifies an ordering of the nodes.
The QDomNamedNodeMap is used in three places:
- QDomDocumentType::entities() returns a map of all entities described in the DTD.
- QDomDocumentType::notations() returns a map of all notations described in the DTD.
- QDomNode::attributes() returns a map of all attributes of an element.
Items in the map are identified by the name which QDomNode::name() returns. Nodes are retrieved using namedItem(), namedItemNS() or item(). New nodes are inserted with setNamedItem() or setNamedItemNS() and removed with removeNamedItem() or removeNamedItemNS(). Use contains() to see if an item with the given name is in the named node map. The number of items is returned by length().
Terminology: in this class we use "item" and "node" interchangeably.
Member Function Documentation
QDomNamedNodeMap::QDomNamedNodeMap()
Constructs an empty named node map.
QDomNamedNodeMap::QDomNamedNodeMap(const QDomNamedNodeMap & n)
Constructs a copy of n.

QDomNode QDomNamedNodeMap::removeNamedItemNS(const QString & nsURI, const QString & localName)
Removes the node with the local name localName and the namespace URI nsURI from the map.
The function returns the removed node or a null node if the map did not contain a node with the local name localName and the namespace URI nsURI.
See also setNamedItemNS(), namedItemNS(), and removeNamedItem().
QDomNode QDomNamedNodeMap::setNamedItem(const QDomNode & newNode)
Inserts the node newNode into the named node map. The name used by the map is the node name of newNode as returned by QDomNode::nodeName().
If the new node replaces an existing node, i.e. the map contains a node with the same name, the replaced node is returned.
See also namedItem(), removeNamedItem(), and setNamedItemNS().
QDomNode QDomNamedNodeMap::setNamedItemNS(const QDomNode & newNode)
Inserts the node newNode in the map. If a node with the same namespace URI and the same local name already exists in the map, it is replaced by newNode. If the new node replaces an existing node, the replaced node is returned.
See also namedItemNS(), removeNamedItemNS(), and setNamedItem().
QDomNamedNodeMap & QDomNamedNodeMap::operator=(const QDomNamedNodeMap & n)
Assigns n to this named node map.
bool QDomNamedNodeMap::operator==(const QDomNamedNodeMap & n) const
Returns
true if n and this named node map are equal; otherwise returns false.
https://doc.qt.io/archives/qt-5.5/qdomnamednodemap.html
A simple class for meshing geometric vertices. More...
#include <meshkit/VertexMesher.hpp>
A simple class for meshing geometric vertices.
INPUT: one or more ModelEnts representing geometric vertices MESH TYPE(S): MBVERTEX OUTPUT: one mesh vertex for each ModelEnt DEPENDENCIES: (none) CONSTRAINTS: ModelEnts must be geometric vertices, i.e. with dimension() == 0
This class performs the trivial task of meshing geometric vertices. Typically there will only be a single instance of this class, and therefore it is pointed to and managed by MKCore. It will also be inserted into the meshing graph during the setup phase of most edge meshers.
The single instance of this class stores all the ModelEnt's representing geometric vertices, and after execution, an MEntSelection entry for each geometric vertex and mesh vertex pair.
Definition at line 33 of file VertexMesher.hpp.
Bare constructor.
Definition at line 16 of file VertexMesher.cpp.
Destructor.
Definition at line 25 of file VertexMesher.cpp.
No copy constructor, since there's only meant to be one of these.
Re-implemented here so we can check topological dimension of model_ent.
Reimplemented from MeshOp.
Definition at line 31 of file VertexMesher.cpp.
Function returning whether this scheme can mesh entities of t the specified dimension.
Definition at line 63 of file VertexMesher.hpp.
Function returning whether this scheme can mesh the specified entity.
Used by MeshOpFactory to find scheme for an entity.
Definition at line 72 of file VertexMesher.hpp.
The only setup/execute function we need, since meshing vertices is trivial.
Definition at line 44 of file VertexMesher.cpp.
Return the mesh entity types operated on by this scheme.
moab::MBMAXTYPE
Definition at line 83 of file VertexMesher.hpp.
Get class name.
Definition at line 56 of file VertexMesher.hpp.
No operator=, since there's only meant to be one of these.
Get list of mesh entity types that can be generated.
moab::MBMAXTYPE
Definition at line 13 of file VertexMesher.cpp.
Setup is a no-op, but must be provided since it's pure virtual.
Definition at line 41 of file VertexMesher.cpp.
Static variable, used in registration.
Definition at line 97 of file VertexMesher.hpp.
http://www.mcs.anl.gov/~fathom/meshkit-docs/html/classMeshKit_1_1VertexMesher.html
8.3. Language Models and the Dataset¶
In Section 8.2, we saw how to map text data into tokens, and these tokens can be viewed as a time series of discrete observations. Assuming the tokens in a text of length \(T\) are in turn \(x_1, x_2, \ldots, x_T\), then, in the discrete time series, \(x_t\) (\(1 \leq t \leq T\)) can be considered as the output or label of time step \(t\). The goal of a language model is to estimate the probability of the whole sequence, \(p(x_1, x_2, \ldots, x_T)\). Such models are extremely useful: for example, “to recognize speech” and “to wreck a nice beach” sound very similar. This can cause ambiguity in speech recognition, ambiguity that is easily resolved through a language model which rejects the second translation as outlandish. Likewise, in a document summarization algorithm it is worth while knowing that “dog bites man” is much more frequent than “man bites dog”, or that “let’s eat grandma” is a rather disturbing statement, whereas “let’s eat, grandma” is much more benign.
8.3.1. Learning a Language Model¶

The training dataset is a large text corpus, such as all Wikipedia entries, Project Gutenberg, or all text posted online on the web. The probability of words can be calculated from the relative word frequency of a given word in the training dataset.
For example, \(p(\mathrm{Statistics})\) can be calculated as the probability of any sentence starting with the word “statistics”. A slightly less accurate approach would be to count all occurrences of the word “statistics” and divide by the total number of words in the corpus. This works fairly well for frequent words, but combinations such as “Statistics is” are a lot less frequent. In particular, for some unusual word combinations it may be tricky to find enough occurrences to get accurate estimates. Things take a turn for the worse for 3-word combinations and beyond. There will be many plausible 3-word combinations that we likely will not see in our dataset; a common remedy is some form of smoothing, see [Wood et al., 2011] for more detail of how to accomplish this. Unfortunately, models like this get unwieldy rather quickly for the following reasons. First, we need to store all counts. Second, this entirely ignores the meaning of the words. For instance, “cat” and “feline” should occur in related contexts. It is quite difficult to adjust such models to additional contexts, whereas, deep learning based language models are well suited to take this into account. Last, long word sequences are almost certain to be novel, hence a model that simply counts the frequency of previously seen word sequences is bound to perform poorly there.
8.3.2. Markov Models and \(n\)-grams¶

Recall our discussion of Markov models in Section 8.1, and let's apply it to language modeling. A distribution over sequences satisfies the Markov property of first order if \(p(w_{t+1} \mid w_t, \ldots, w_1) = p(w_{t+1} \mid w_t)\). Higher orders correspond to longer dependencies. This leads to a number of approximations that we could apply to model a sequence:

\[
\begin{aligned}
p(w_1, w_2, w_3, w_4) &= p(w_1)\, p(w_2)\, p(w_3)\, p(w_4),\\
p(w_1, w_2, w_3, w_4) &= p(w_1)\, p(w_2 \mid w_1)\, p(w_3 \mid w_2)\, p(w_4 \mid w_3),\\
p(w_1, w_2, w_3, w_4) &= p(w_1)\, p(w_2 \mid w_1)\, p(w_3 \mid w_1, w_2)\, p(w_4 \mid w_2, w_3).
\end{aligned}
\]
The probability formulae that involve one, two, and three variables are typically referred to as unigram, bigram, and trigram models respectively. In the following, we will learn how to design better models.
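To see how these factorizations differ in practice, here is a small Python sketch. The probability tables are toy numbers chosen by hand purely for illustration; they are not estimated from any corpus.

```python
# Hand-picked toy probabilities (assumptions, for illustration only).
p_uni = {"deep": 0.1, "learning": 0.05, "is": 0.2, "fun": 0.05}
p_bi = {("deep", "learning"): 0.7, ("learning", "is"): 0.3, ("is", "fun"): 0.2}

words = ["deep", "learning", "is", "fun"]

# Unigram model: p(w1) p(w2) p(w3) p(w4) -- words treated as independent.
prob_unigram = 1.0
for w in words:
    prob_unigram *= p_uni[w]

# Bigram model: p(w1) p(w2|w1) p(w3|w2) p(w4|w3) -- first-order Markov.
prob_bigram = p_uni[words[0]]
for w1, w2 in zip(words, words[1:]):
    prob_bigram *= p_bi[(w1, w2)]

print(prob_unigram, prob_bigram)
```

Because the bigram model conditions each word on its predecessor, it can assign a much higher probability to a plausible phrase than the independence (unigram) assumption does.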
8.3.3. Natural Language Statistics¶
Let’s see how this works on real data. We construct a vocabulary based on the time machine data similar to Section 8.2 and print the top \(10\) most frequent words.
import d2l
from mxnet import np, npx
import random
npx.set_np()

tokens = d2l.tokenize(d2l.read_time_machine())
vocab = d2l.Vocab(tokens)
print(vocab.token_freqs[:10])
[('the', 2261), ('', 1282), ('i', 1267), ('and', 1245), ('of', 1155), ('a', 816), ('to', 695), ('was', 552), ('in', 541), ('that', 443)]
As we can see, the most popular words are actually quite boring to look at. They are often referred to as stop words and thus filtered out. That said, they still carry meaning and we will use them nonetheless. However, one thing that is quite clear is that the word frequency decays rather rapidly. The \(10^{\mathrm{th}}\) most frequent word is less than \(1/5\) as common as the most popular one. To get a better idea we plot the graph of the word frequency.
freqs = [freq for token, freq in vocab.token_freqs]
d2l.plot(freqs, xlabel='token: x', ylabel='frequency: n(x)',
         xscale='log', yscale='log')
We are on to something quite fundamental here: the word frequency decays rapidly in a well-defined way. After dealing with the first few words as exceptions, all remaining words roughly follow a straight line on a log-log plot. This means that words satisfy Zipf's law, which states that the frequency of the \(x^{\mathrm{th}}\) most frequent word is proportional to \(1/x^{\alpha}\) for some exponent \(\alpha\). This should already give us pause if we want to model words by count statistics and smoothing: we will significantly overestimate the frequency of the tail, also known as the infrequent words. But what about the other word combinations (such as bigrams, trigrams, and beyond)? Let's see whether the bigram frequency behaves in the same manner as the unigram frequency.
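As a quick numerical check of Zipf's law before moving on to the bigram counts, the exponent \(\alpha\) can be estimated as the (negated) slope of log-frequency versus log-rank. The sketch below is standalone (it does not use the d2l data; the frequencies are synthetic and constructed to follow Zipf's law exactly):

```python
import math

def fit_zipf_exponent(freqs):
    """Least-squares slope of log(freq) vs log(rank); under Zipf's law
    the slope is approximately -alpha, so we return its negation."""
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# Synthetic frequencies that follow Zipf's law exactly with alpha = 1.
alpha = 1.0
freqs = [1000.0 / rank ** alpha for rank in range(1, 101)]
estimated_alpha = fit_zipf_exponent(freqs)
print(estimated_alpha)  # recovers 1.0 up to floating-point error
```

Run on real token frequencies (such as `vocab.token_freqs` above), the same fit gives a rough estimate of the decay exponent for the corpus.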
bigram_tokens = [[pair for pair in zip(line[:-1], line[1:])] for line in tokens]
bigram_vocab = d2l.Vocab(bigram_tokens)
print(bigram_vocab.token_freqs[:10])
[(('of', 'the'), 297), (('in', 'the'), 161), (('i', 'had'), 126), (('and', 'the'), 104), (('i', 'was'), 104), (('the', 'time'), 97), (('it', 'was'), 94), (('to', 'the'), 81), (('as', 'i'), 75), (('of', 'a'), 69)]
Notably, out of the 10 most frequent word pairs, 9 are composed of stop words and only one is relevant to the actual book: "the time". Next, let's see whether the trigram frequency behaves in the same manner.
trigram_tokens = [[triple for triple in
                   zip(line[:-2], line[1:-1], line[2:])] for line in tokens]
trigram_vocab = d2l.Vocab(trigram_tokens)
print(trigram_vocab.token_freqs[:10])
[(('the', 'time', 'traveller'), 53), (('the', 'time', 'machine'), 24), (('the', 'medical', 'man'), 22), (('it', 'seemed', 'to'), 14), (('it', 'was', 'a'), 14), (('i', 'began', 'to'), 13), (('i', 'did', 'not'), 13), (('i', 'saw', 'the'), 13), (('here', 'and', 'there'), 12), (('i', 'could', 'see'), 12)]
Last, let's visualize the token frequencies of these three models: unigrams, bigrams, and trigrams.
bigram_freqs = [freq for token, freq in bigram_vocab.token_freqs]
trigram_freqs = [freq for token, freq in trigram_vocab.token_freqs]
d2l.plot([freqs, bigram_freqs, trigram_freqs], xlabel='token',
         ylabel='frequency', xscale='log', yscale='log',
         legend=['unigram', 'bigram', 'trigram'])
The graph is quite exciting for a number of reasons. First, beyond unigram words, sequences of words also appear to follow Zipf's law, albeit with a lower exponent, depending on sequence length. Second, the number of distinct \(n\)-grams is not that large. This gives us hope that there is quite a lot of structure in language. Third, many \(n\)-grams occur very rarely, which makes Laplace smoothing rather unsuitable for language modeling. Instead, we will use deep learning based models.

8.3.4. Training Data Preparation¶
Before introducing the model, let's assume we will use a neural network to train a language model. Now the question is how to read minibatches of examples and labels at random. Since sequence data is by its very nature sequential, we need to address the issue of processing it. We did so in a rather ad-hoc manner when we introduced sequence models in Section 8.1. Let's formalize this a bit.
In Fig. 8.3.1, we visualized several possible ways to obtain 5-grams in a sentence, here a token is a character. Note that we have quite some freedom since we could pick an arbitrary offset.
In fact, any one of these offsets is fine. Hence, which one should we pick? In fact, all of them are equally good. But if we pick all offsets we end up with rather redundant data due to overlap, particularly if the sequences are long. Picking just a random set of initial positions is no good either since it does not guarantee uniform coverage of the array. For instance, if we pick \(n\) elements at random out of a set of \(n\) with replacement, the probability for a particular element not being picked is \((1-1/n)^n \to e^{-1}\). This means that we cannot expect uniform coverage this way. Even randomly permuting a set of all offsets does not offer good guarantees. Instead we can use a simple trick to get both coverage and randomness: use a random offset, after which one uses the terms sequentially. We describe how to accomplish this for both random sampling and sequential partitioning strategies below.
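The \((1-1/n)^n \to e^{-1}\) claim above can be verified with a short simulation using only Python's standard library (the trial count and seed are arbitrary choices):

```python
import math
import random

def uncovered_fraction(n, trials=2000, seed=0):
    """Pick n positions out of n with replacement and measure the
    average fraction of positions that were never picked."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        picked = {rng.randrange(n) for _ in range(n)}
        total += (n - len(picked)) / n
    return total / trials

estimate = uncovered_fraction(100)
print(estimate, math.exp(-1))  # both close to 0.37
```

So roughly a third of the possible starting positions would never be used, which is exactly why sampling starts with replacement gives poor coverage.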
8.3.4.1. Random Sampling¶
The following code randomly generates a minibatch from the data each time. Here, the batch size batch_size indicates the number of examples in each minibatch and num_steps is the length of the sequence (or timesteps if we have a time series) included in each example. In random sampling, each example is a sequence arbitrarily captured on the original sequence. The positions of two adjacent random minibatches on the original sequence are not necessarily adjacent. The target is to predict the next character based on what we have seen so far, hence the labels are the original sequence, shifted by one character.
# Saved in the d2l package for later use
def seq_data_iter_random(corpus, batch_size, num_steps):
    # Offset the iterator over the data for uniform starts
    corpus = corpus[random.randint(0, num_steps):]
    # Subtract 1 extra since we need to account for label
    num_examples = ((len(corpus) - 1) // num_steps)
    example_indices = list(range(0, num_examples * num_steps, num_steps))
    random.shuffle(example_indices)

    def data(pos):
        # This returns a sequence of the length num_steps starting from pos
        return corpus[pos: pos + num_steps]

    # Discard half-empty batches
    num_batches = num_examples // batch_size
    for i in range(0, batch_size * num_batches, batch_size):
        # batch_size indicates the random examples read each time
        batch_indices = example_indices[i:(i + batch_size)]
        X = [data(j) for j in batch_indices]
        Y = [data(j + 1) for j in batch_indices]
        yield np.array(X), np.array(Y)
Let's generate an artificial sequence from 0 to 30. We assume that the batch size and number of timesteps are 2 and 6 respectively. This means that depending on the offset we can generate between 4 and 5 \((x, y)\) pairs. With a minibatch size of 2, we only get 2 minibatches.
my_seq = list(range(30))
for X, Y in seq_data_iter_random(my_seq, batch_size=2, num_steps=6):
    print('X: ', X, '\nY:', Y)
X:  [[ 6.  7.  8.  9. 10. 11.]
 [ 0.  1.  2.  3.  4.  5.]]
Y: [[ 7.  8.  9. 10. 11. 12.]
 [ 1.  2.  3.  4.  5.  6.]]
X:  [[18. 19. 20. 21. 22. 23.]
 [12. 13. 14. 15. 16. 17.]]
Y: [[19. 20. 21. 22. 23. 24.]
 [13. 14. 15. 16. 17. 18.]]
8.3.4.2. Sequential Partitioning¶
In addition to random sampling of the original sequence, we can also make the positions of two adjacent random minibatches adjacent in the original sequence.
# Saved in the d2l package for later use
def seq_data_iter_consecutive(corpus, batch_size, num_steps):
    # Offset for the iterator over the data for uniform starts
    offset = random.randint(0, num_steps)
    # Slice out data - ignore num_steps and just wrap around
    num_indices = ((len(corpus) - offset - 1) // batch_size) * batch_size
    Xs = np.array(corpus[offset:offset+num_indices])
    Ys = np.array(corpus[offset+1:offset+1+num_indices])
    Xs, Ys = Xs.reshape(batch_size, -1), Ys.reshape(batch_size, -1)
    num_batches = Xs.shape[1] // num_steps
    for i in range(0, num_batches * num_steps, num_steps):
        X = Xs[:, i:(i+num_steps)]
        Y = Ys[:, i:(i+num_steps)]
        yield X, Y
Using the same settings, print input X and label Y for each minibatch of examples read by sequential partitioning. The positions of two adjacent minibatches on the original sequence are adjacent.
for X, Y in seq_data_iter_consecutive(my_seq, batch_size=2, num_steps=6):
    print('X: ', X, '\nY:', Y)
X:  [[ 6.  7.  8.  9. 10. 11.]
 [17. 18. 19. 20. 21. 22.]]
Y: [[ 7.  8.  9. 10. 11. 12.]
 [18. 19. 20. 21. 22. 23.]]
Now we wrap the above two sampling functions into a class so that we can use it as a Gluon data iterator later.
# Saved in the d2l package for later use
class SeqDataLoader(object):
    """An iterator to load sequence data."""
    def __init__(self, batch_size, num_steps, use_random_iter, max_tokens):
        if use_random_iter:
            self.data_iter_fn = d2l.seq_data_iter_random
        else:
            self.data_iter_fn = d2l.seq_data_iter_consecutive
        self.corpus, self.vocab = d2l.load_corpus_time_machine(max_tokens)
        self.batch_size, self.num_steps = batch_size, num_steps

    def __iter__(self):
        return self.data_iter_fn(self.corpus, self.batch_size, self.num_steps)
Last, we define a function load_data_time_machine that returns both the data iterator and the vocabulary, so we can use it similarly to other functions with the load_data prefix.
# Saved in the d2l package for later use
def load_data_time_machine(batch_size, num_steps, use_random_iter=False,
                           max_tokens=10000):
    data_iter = SeqDataLoader(
        batch_size, num_steps, use_random_iter, max_tokens)
    return data_iter, data_iter.vocab
8.3.5. Summary¶
Language models are an important technology for natural language processing.
\(n\)-grams provide a convenient model for dealing with long sequences by truncating the dependence.
Long sequences suffer from the problem that they occur very rarely or never.
Zipf’s law governs the word distribution for not only unigrams but also the other \(n\)-grams.
There is a lot of structure but not enough frequency to deal with infrequent word combinations efficiently via Laplace smoothing.
The main choices for sequence partitioning are picking between consecutive and random sequences.
Given the overall document length, it is usually acceptable to be slightly wasteful with the documents and discard half-empty minibatches.
8.3.6. Exercises¶
Suppose there are \(100,000\) words in the training dataset. How much word frequency and multi-word adjacent frequency information does a four-gram need to store?
What other minibatch data sampling methods can you think of?
Why is it a good idea to have a random offset?
Does it really lead to a perfectly uniform distribution over the sequences on the document?
What would you have to do to make things even more uniform?
If we want a sequence example to be a complete sentence, what kinds of problems does this introduce in minibatch sampling? Why would we want to do this anyway?
https://d2l.ai/chapter_recurrent-neural-networks/lang-model.html
package org.eclipse.osgi.framework.eventmgr;

import org.eclipse.osgi.framework.eventmgr.EventListeners.ListElement;

/**
 * This class is the central class for the Event Manager. Each
 * program that wishes to use the Event Manager should construct
 * an EventManager object and use that object to construct
 * ListenerQueue for dispatching events. EventListeners objects
 * should be used to manage listener lists.
 *
 * <p>This example uses the fictitious SomeEvent class and shows how to use this package
 * to deliver a SomeEvent to a set of SomeEventListeners.
 * <pre>
 *
 * // Create an EventManager with a name for an asynchronous event dispatch thread
 * EventManager eventManager = new EventManager("SomeEvent Async Event Dispatcher Thread");
 * // Create an EventListeners to hold the list of SomeEventListeners
 * EventListeners eventListeners = new EventListeners();
 *
 * // Add a SomeEventListener to the listener list
 * eventListeners.addListener(someEventListener, null);
 *
 * // Asynchronously deliver a SomeEvent to registered SomeEventListeners
 * // Create the listener queue for this event delivery
 * ListenerQueue listenerQueue = new ListenerQueue(eventManager);
 * // Add the listeners to the queue and associate them with the event dispatcher
 * listenerQueue.queueListeners(eventListeners, new EventDispatcher() {
 *     public void dispatchEvent(Object eventListener, Object listenerObject,
 *             int eventAction, Object eventObject) {
 *         try {
 *             ((SomeEventListener) eventListener).someEventOccured((SomeEvent) eventObject);
 *         } catch (Throwable t) {
 *             // properly log/handle any Throwable thrown by the listener
 *         }
 *     }
 * });
 * // Deliver the event to the listeners.
 * listenerQueue.dispatchEventAsynchronous(0, new SomeEvent());
 *
 * // Remove the listener from the listener list
 * eventListeners.removeListener(someEventListener);
 *
 * // Close EventManager to clean up when done to terminate async event dispatch thread.
 * // Note that closing the event manager while asynchronously delivering events
 * // may cause some events to not be delivered before the async event dispatch
 * // thread terminates
 * eventManager.close();
 * </pre>
 *
 * <p>At first glance, this package may seem more complicated than necessary
 * but it has support for some important features. The listener list supports
 * companion objects for each listener object. This is used by the OSGi framework
 * to create wrapper objects for a listener which are passed to the event dispatcher.
 * The ListenerQueue class is used to build a snapshot of the listeners prior to beginning
 * event dispatch.
 *
 * The OSGi framework uses a 2 level listener list (EventListeners) for each listener type (4 types).
 * Level one is managed by the framework and contains the list of BundleContexts which have
 * registered a listener. Level 2 is managed by each BundleContext for the listeners in that
 * context. This allows all the listeners of a bundle to be easily and atomically removed from
 * the level one list. To use a "flat" list for all bundles would require the list to know which
 * bundle registered a listener object so that the list could be traversed when stopping a bundle
 * to remove all the bundle's listeners.
 *
 * When an event is fired, a snapshot list (ListenerQueue) must be made of the current listeners before delivery
 * is attempted. The snapshot list is necessary to allow the listener list to be modified while the
 * event is being delivered to the snapshot list. The memory cost of the snapshot list is
 * low since the ListenerQueue object shares the array of listeners with the EventListeners object.
 * EventListeners uses copy-on-write semantics for managing the array and will copy the array
 * before changing it IF the array has been shared with a ListenerQueue. This minimizes
 * object creation while guaranteeing the snapshot list is never modified once created.
 *
 * The OSGi framework also uses a 2 level dispatch technique (EventDispatcher).
 * Level one dispatch is used by the framework to add the level 2 listener list of each
 * BundleContext to the snapshot in preparation for delivery of the event.
 * Level 2 dispatch is used as the final event deliverer and must cast the listener
 * and event objects to the proper type before calling the listener. Level 2 dispatch
 * will cancel delivery of an event to a bundle that has stopped between the time the
 * snapshot was created and the attempt was made to deliver the event.
 *
 * <p>The highly dynamic nature of the OSGi framework has necessitated these features for
 * proper and efficient event delivery.
 * @since 3.1
 */
public class EventManager {
    static final boolean DEBUG = false;

    /**
     * EventThread for asynchronous dispatch of events.
     * Access to this field must be protected by a synchronized region.
     */
    private EventThread thread;

    /**
     * EventThread Name
     */
    protected final String threadName;

    /**
     * EventManager constructor. An EventManager object is responsible for
     * the delivery of events to listeners via an EventDispatcher.
     */
    public EventManager() {
        this(null);
    }

    /**
     * EventManager constructor. An EventManager object is responsible for
     * the delivery of events to listeners via an EventDispatcher.
     *
     * @param threadName The name to give the event thread associated with
     * this EventManager.
     */
    public EventManager(String threadName) {
        thread = null;
        this.threadName = threadName;
    }

    /**
     * This method can be called to release any resources associated with this
     * EventManager.
     * <p>
     * Closing this EventManager while it is asynchronously delivering events
     * may cause some events to not be delivered before the async event dispatch
     * thread terminates.
     */
    public synchronized void close() {
        if (thread != null) {
            thread.close();
            thread = null;
        }
    }

    /**
     * Returns the EventThread to use for dispatching events asynchronously for
     * this EventManager.
     *
     * @return EventThread to use for dispatching events asynchronously for
     * this EventManager.
     */
    synchronized EventThread getEventThread() {
        if (thread == null) {
            /* if there is no thread, then create a new one */
            if (threadName == null) {
                thread = new EventThread();
            } else {
                thread = new EventThread(threadName);
            }
            thread.start(); /* start the new thread */
        }

        return thread;
    }

    /**
     * This method calls the EventDispatcher object to complete the dispatch of
     * the event. If there are more elements in the list, call dispatchEvent
     * on the next item on the list.
     * This method is package private.
     *
     * @param listeners A null terminated array of ListElements with each element containing the primary and
     * companion object for a listener. This array must not be modified.
     * @param dispatcher Call back object which is called to complete the delivery of
     * the event.
     * @param eventAction This value was passed by the event source and
     * is passed to this method. This is passed on to the call back object.
     * @param eventObject This object was created by the event source and
     * is passed to this method. This is passed on to the call back object.
     */
    static void dispatchEvent(ListElement[] listeners, EventDispatcher dispatcher, int eventAction, Object eventObject) {
        int size = listeners.length;
        for (int i = 0; i < size; i++) { /* iterate over the list of listeners */
            ListElement listener = listeners[i];
            if (listener == null) { /* a null element terminates the list */
                break;
            }
            try {
                /* Call the EventDispatcher to complete the delivery of the event. */
                dispatcher.dispatchEvent(listener.primary, listener.companion, eventAction, eventObject);
            } catch (Throwable t) {
                /* Consume and ignore any exceptions thrown by the listener */
                if (DEBUG) {
                    System.out.println("Exception in " + listener.primary); //$NON-NLS-1$
                    t.printStackTrace();
                }
            }
        }
    }

    /**
     * This package private class is used for asynchronously dispatching events.
     */
    static class EventThread extends Thread {
        /**
         * Queued is a nested top-level (non-member) class. This class
         * represents the items which are placed on the async dispatch queue.
         * This class is private.
         */
        private static class Queued {
            /** listener list for this event */
            final ListElement[] listeners;
            /** dispatcher of this event */
            final EventDispatcher dispatcher;
            /** action for this event */
            final int action;
            /** object for this event */
            final Object object;
            /** next item in event queue */
            Queued next;

            /**
             * Constructor for event queue item
             *
             * @param l Listener list for this event
             * @param d Dispatcher for this event
             * @param a Action for this event
             * @param o Object for this event
             */
            Queued(ListElement[] l, EventDispatcher d, int a, Object o) {
                listeners = l;
                dispatcher = d;
                action = a;
                object = o;
                next = null;
            }
        }

        /** item at the head of the event queue */
        private Queued head;
        /** item at the tail of the event queue */
        private Queued tail;
        /** if false the thread must terminate */
        private volatile boolean running;

        /**
         * Constructor for the event thread.
         * @param threadName Name of the EventThread
         */
        EventThread(String threadName) {
            super(threadName);
            init();
        }

        /**
         * Constructor for the event thread.
         */
        EventThread() {
            super();
            init();
        }

        private void init() {
            running = true;
            head = null;
            tail = null;

            setDaemon(true); /* Mark thread as daemon thread */
        }

        /**
         * Stop thread.
         */
        void close() {
            running = false;
            interrupt();
        }

        /**
         * This method pulls events from
         * the queue and dispatches them.
         */
        public void run() {
            try {
                while (true) {
                    Queued item = getNextEvent();
                    if (item == null) {
                        return;
                    }
                    EventManager.dispatchEvent(item.listeners, item.dispatcher, item.action, item.object);
                }
            } catch (RuntimeException e) {
                if (EventManager.DEBUG) {
                    e.printStackTrace();
                }
                throw e;
            } catch (Error e) {
                if (EventManager.DEBUG) {
                    e.printStackTrace();
                }
                throw e;
            }
        }

        /**
         * This method takes the input parameters, creates a Queued
         * object and queues it.
         * The thread is notified.
         *
         * @param l Listener list for this event
         * @param d Dispatcher for this event
         * @param a Action for this event
         * @param o Object for this event
         */
        synchronized void postEvent(ListElement[] l, EventDispatcher d, int a, Object o) {
            if (!isAlive()) { /* If the thread is not alive, throw an exception */
                throw new IllegalStateException();
            }

            Queued item = new Queued(l, d, a, o);

            if (head == null) { /* if the queue was empty */
                head = item;
                tail = item;
            } else { /* else add to end of queue */
                tail.next = item;
                tail = item;
            }

            notify();
        }

        /**
         * This method is called by the thread to remove
         * items from the queue so that they can be dispatched to their listeners.
         * If the queue is empty, the thread waits.
         *
         * @return The Queued removed from the top of the queue or null
         * if the thread has been requested to stop.
         */
        private synchronized Queued getNextEvent() {
            while (running && (head == null)) {
                try {
                    wait();
                } catch (InterruptedException e) {
                }
            }

            if (!running) { /* if we are stopping */
                return null;
            }

            Queued item = head;
            head = item.next;
            if (head == null) {
                tail = null;
            }

            return item;
        }
    }
}
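The core pattern in EventThread above (a synchronized FIFO drained by a daemon thread, with postEvent notifying and close() waking the waiter via a flag) maps directly onto condition variables in other languages. Here is a minimal Python sketch of the same producer/consumer structure; it is illustrative only and not part of the Eclipse code, and it keeps only the queueing logic, not the listener-list machinery:

```python
import threading
import time

class EventThread(threading.Thread):
    """Drain a FIFO of (dispatcher, event) pairs on a daemon thread,
    mirroring the postEvent/getNextEvent structure of the Java code."""

    def __init__(self):
        super().__init__(daemon=True)
        self._cond = threading.Condition()
        self._queue = []
        self._running = True

    def post_event(self, dispatcher, event):
        with self._cond:
            if not self._running:
                raise RuntimeError("event thread closed")
            self._queue.append((dispatcher, event))
            self._cond.notify()

    def close(self):
        with self._cond:
            self._running = False
            self._cond.notify()

    def _next_event(self):
        with self._cond:
            while self._running and not self._queue:
                self._cond.wait()
            if not self._running:
                # Like the Java version, pending events may be dropped on close.
                return None
            return self._queue.pop(0)

    def run(self):
        while True:
            item = self._next_event()
            if item is None:
                return
            dispatcher, event = item
            try:
                dispatcher(event)  # swallow listener errors, as the Java code does
            except Exception:
                pass

delivered = []
t = EventThread()
t.start()
for i in range(3):
    t.post_event(delivered.append, i)
time.sleep(0.3)  # give the daemon thread a moment to drain the queue
t.close()
t.join(timeout=1)
print(delivered)
```

The same caveat from the javadoc applies here: closing while events are still queued can drop them, since the consumer checks the running flag before the queue.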
http://kickjava.com/src/org/eclipse/osgi/framework/eventmgr/EventManager.java.htm
Did you know there’s an algebraic structure for functions? That may not surprise you at all. But it surprised me when I first found out about it. I knew we used functions to build algebraic structures. It never occurred to me that functions themselves might have an algebraic structure.
I should clarify though. When I use the word ‘function’ here, I mean function in the functional programming sense. Not in the JavaScript sense. That is, pure functions; no side effects; single input; always return a value; and so on… You know the drill. Also, I’m going to assume you understand referential transparency and composition. If not, check out A gentle introduction to functional JavaScript. It might also help if you’ve read How to deal with dirty side effects in your pure functional JavaScript.
How does this algebraic structure for functions work? Well, recall our idea of eventual numbers when we looked at Effect. They looked something like this:
const compose2 = f => g => x => f(g(x));

const increment = x => x + 1;
const double = x => x * 2;

const zero = () => 0;
const one = compose2(increment)(zero);
const two = compose2(double)(one);
const three = compose2(increment)(two);
const four = compose2(double)(two);
// ... and so on.
In this way we could create any integer as an eventual integer. And we can always get back to the ‘concrete’ value by calling the function. If we call three() at some point, then we get back 3. But all that composition is a bit fancy and unnecessary. We could write our eventual values like so:
const zero = () => 0;
const one = () => 1;
const two = () => 2;
const three = () => 3;
const four = () => 4;
// … and so on.
Looking at it this way may be a little tedious, but it’s not complicated. To make a delayed integer, we take the value we want and stick it in a function. The function takes no arguments, and does nothing but return our value. And we don’t have to stop at integers. We can make any value into an eventual value. All we do is create a function that returns that value. For example:
const ponder = () => 'Curiouser and curiouser';

const pi = () => Math.PI;

const request = () => ({
  protocol: 'http',
  host: 'example.com',
  path: '/v1/myapi',
  method: 'GET'
});

// You get the idea…
Now, if we squint a little, that looks kind of like we’re putting a value inside a container. We’ve got a bit of containery stuff on the left, and value stuff on the right. The containery stuff is uninteresting. It’s the same every time. It’s only the return value that changes.
Enter the functor
Could we make a Functor out of this containery eventual-value thing? To do that, we need to define a law-abiding map() function. If we can, then we’ve got a valid functor on our hands.
To start, let’s look at the type signature for map(). In Hindley-Milner notation, it looks something like this:
map :: Functor m => (a -> b) -> m a -> m b
This says that our map function takes a function, and a functor of a, and returns a functor of b. If functions are functors, then they would go into that m slot:
map :: (a -> b) -> Function a -> Function b
This says that map() takes a function from a to b and a Function of a. And it returns a Function of b. But what’s a ‘Function of a’ or a ‘Function of b’?
What if we started out with eventual values? They’re functions that don’t take any input. But they return a value. And that value (as we discussed), could be anything. So, if we put them in our type signature might look like so:
map :: (a -> b) -> (() -> a) -> (() -> b)
The a and b in the type signature are the return values of the functions. It’s like map() doesn’t care about the input values. So let’s replace the ‘nothing’ input value with another type variable, say t. This makes the signature general enough to work for any function.
map :: (a -> b) -> (t -> a) -> (t -> b)
If we prefer to work with a, b and c, it looks like this:
map :: (b -> c) -> (a -> b) -> (a -> c)
And that type signature looks a lot like the signature for compose2:
compose2 :: (b -> c) -> (a -> b) -> a -> c
And in fact, they are the same function. The map() definition for functions is composition. Let’s stick our map() function in a Static-Land module and see what it looks like:
const Func = {
  map: f => g => x => f(g(x)),
};
And what can we do with this? Well, no more and no less than we can do with compose2(). And I assume you already know many wonderful things you can do with composition. But function composition is pretty abstract. Let’s look at some more concrete things we can do with this.
React functional components are functions
Have you ever considered that React functional components are genuine, bona fide functions? (Yes, yes. Ignoring side effects and hooks for the moment). Let’s draw a couple of pictures and think about that. Functions, in general, take something of type \(A\) and transform it into something of type \(B\).
I’m going to be a bit sloppy with types here but bear with me. React functional components are functions, but with a specific type. They take Props and return a Node. That is, they take a JavaScript object and return something that React can render.[1] So that might look something like this:
Now consider map()/compose2(). It takes two functions and combines them. So, we might have a function from type \(B\) to \(C\) and another from \(A\) to \(B\). We compose them together, and we get a function from \(A\) to \(C\). We can think of the first function as a modifier function that acts on the output of the second function.
Let’s stick a React functional component in there. We’re going to compose it with a modifier function. The picture then looks like this:
Our modifier function has to take a Node as its input. Otherwise, the types don’t line up. That’s fixed. But what happens if we make the return value Node as well? That is, what if our second function has the type \(Node \rightarrow Node\)?
We end up with a function that has the same type as a React Function Component. In other words, we get another component back. Now, imagine if we made a bunch of small, uncomplicated functions. And each of these little utility functions has the type \(Node \rightarrow Node\). With map() we can combine them with components, and get new, valid components.
Let’s make this real. Imagine we have a design system provided by some other team. We don’t get to reach into its internals and muck around. We’re stuck with the provided components as is. But with map() we claw back a little more power. We can tweak the output of any component. For example, we can wrap the returned Node with some other element:
import React from 'react';
import AtlaskitButton from '@atlaskit/button';

// Because Atlaskit button isn't a function component,
// we convert it to one.
const Button = props => (<AtlaskitButton {...props} />);

const wrapWithDiv = node => (<div>{node}</div>);

const WrappedButton = Func.map(wrapWithDiv)(Button);
Or we could even generalise this a little…
import React from "react";
import AtlaskitButton from "@atlaskit/button";

// Because Atlaskit button isn't a function component,
// we convert it to one.
const Button = props => <AtlaskitButton {...props} />;

const wrapWith = (Wrapper, props = {}) => node => (
  <Wrapper {...props}>{node}</Wrapper>
);

const WrappedButton = Func.map(
  wrapWith("div", { style: { border: "solid pink 2px" } })
)(Button);
What else could we do? We could append another element:
import React from "react";
import AtlaskitButton from "@atlaskit/button";
import PremiumIcon from "@atlaskit/icon/glyph/premium";

// Because Atlaskit button isn't a function component,
// we convert it to one.
const Button = props => <AtlaskitButton {...props} />;

const appendIcon = node => (<>{node}<PremiumIcon /></>);

const PremiumButton = Func.map(appendIcon)(Button);
Or we could prepend an element:
import React from 'react';
import Badge from '@atlaskit/badge';

const prependTotal = node => (<><span>Total: </span>{node}</>);

const TotalBadge = Func.map(prependTotal)(Badge);
And we could do both together:
import React from 'react';
import StarIcon from '@atlaskit/icon/glyph/star';
import AtlaskitButton from '@atlaskit/button';

// Because Atlaskit button isn't a function component,
// we convert it to one.
const Button = props => <AtlaskitButton {...props} />;

const makeShiny = node => (
  <>
    <StarIcon label="" />{node}<StarIcon label="" />
  </>
);

const ShinyButton = Func.map(makeShiny)(Button);
And all three at once:
import React from 'react';
import AtlaskitButton from "@atlaskit/button";
import Lozenge from '@atlaskit/lozenge';
import PremiumIcon from '@atlaskit/icon/glyph/premium';
import Tooltip from '@atlaskit/tooltip';

// Because Atlaskit button isn't a function component,
// we convert it to one.
const Button = props => <AtlaskitButton {...props} />;

const shinyNewThingify = node => (
  <Tooltip content="New and improved!"><>
    <PremiumIcon label="" />
    {node}
    <Lozenge appearance="new">New</Lozenge>
  </></Tooltip>
);

const ShinyNewButton = Func.map(shinyNewThingify)(Button);

const App = () => (
  <ShinyNewButton>Runcible Spoon</ShinyNewButton>
);
Element enhancers
I call these \(Node \rightarrow Node\) functions Element enhancers.2 It’s like we’re creating a template. We have a JSX structure with a node-shaped hole in it. We can make that JSX structure as deep as we like. Then, we use
Func.map() to compose the element enhancer with a Component. We get back a new component that eventually shoves something deep down into that slot. But this new component takes the same props as the original.
This is nothing we couldn’t already do. But what’s nice about element enhancers is their simplicity and re-usability. An element enhancer is a simple function. It doesn’t mess around with props or anything fancy. So it’s easy to understand and reason about. But when we
map() them, we get full-blown components. And we can chain together as many enhancers as we like with
map().
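Since map() is just function composition, we can watch this chaining work without any React at all. Here is a quick sketch (the component and enhancer names are mine), modelling a Node as a plain string and a Component as a props-to-string function:

```javascript
// Minimal sketch: a 'Node' is a string, a 'Component' is Props -> String.
const Func = {
  map: f => g => x => f(g(x)),
};

// A 'component': takes props, returns a node (here, a string)
const Button = ({ label }) => `<button>${label}</button>`;

// Two element enhancers: Node -> Node
const wrapInDiv = node => `<div>${node}</div>`;
const appendIcon = node => `${node}<icon/>`;

// Chain as many enhancers as we like; each map() hands back a new
// component that still takes the original props.
const FancyButton = Func.map(wrapInDiv)(Func.map(appendIcon)(Button));

console.log(FancyButton({ label: 'Go' }));
// -> <div><button>Go</button><icon/></div>
```

The same shape carries over to real components: swap the strings for JSX and nothing else changes.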
I have a lot more to say about this, but I will save it for another post. Let’s move on and look at Contravariant Functors.
Contravariant functor
Functors come in lots of flavours. The one we’re most familiar with is the covariant functor. That’s the one we’re talking about when we say ‘functor’ without any qualification. But there are other kinds. The contravariant functor defines a
contramap() function. It looks like someone took all the types for
map() and reversed them:
-- Functor general definition
map :: (a -> b) -> Functor a -> Functor b

-- Contravariant Functor general definition
contramap :: (a -> b) -> Contravariant b -> Contravariant a

-- Functor for functions
map :: (b -> c) -> (a -> b) -> (a -> c)

-- Contravariant Functor for functions
contramap :: (a -> b) -> (b -> c) -> (a -> c)
Don’t worry if none of that makes sense yet. Here’s how I think about it. With functions,
map() let us change the output of a function with a modifier function. But
contramap() lets us change the input of a function with a modifier function. Drawn as a diagram, it might look like so:
If we’re doing this with React components then it becomes even clearer. A regular component has type \(Props \rightarrow Node\). If we stick a \(Props \rightarrow Props\) function in front of it, then we get a \(Props \rightarrow Node\) function back out. In other words, a new component.
So,
contramap() is
map() with the parameters switched around:
const Func = {
  map: f => g => x => f(g(x)),
  contramap: g => f => x => f(g(x)),
};
Contramapping react functional components
What can we do with this? Well, we can create functions that modify props. And we can do a lot with those. We can, for example, set default props:
// Take a button and make its appearance default to 'primary'
import Button from '@atlaskit/button';

function defaultToPrimary(props) {
  return { appearance: 'primary', ...props };
}

const PrimaryButton = Func.contramap(defaultToPrimary)(Button);
And, of course, we could make a generic version of this:
import Button from '@atlaskit/button';

function withDefaultProps(defaults) {
  return props => ({ ...defaults, ...props });
}

const PrimaryButton = Func.contramap(
  withDefaultProps({ appearance: 'primary' })
)(Button);
If we want to, we could also hard-code some props so that nobody can change them. To do that we reverse our spread operation.
import Button from '@atlaskit/button';

function withHardcodedProps(fixedProps) {
  return props => ({ ...props, ...fixedProps });
}

const PrimaryButton = Func.contramap(
  withHardcodedProps({ appearance: 'primary' })
)(Button);
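The only difference between the two helpers is the spread order, which decides whether the caller or the wrapper wins. A small standalone sketch (no React needed) makes that concrete:

```javascript
// Spread order decides who wins: defaults are overridable, hardcoded props are not.
const withDefaultProps = defaults => props => ({ ...defaults, ...props });
const withHardcodedProps = fixed => props => ({ ...props, ...fixed });

const defaulted = withDefaultProps({ appearance: 'primary' })({ appearance: 'subtle' });
console.log(defaulted.appearance); // -> 'subtle' (the caller's prop wins)

const hardcoded = withHardcodedProps({ appearance: 'primary' })({ appearance: 'subtle' });
console.log(hardcoded.appearance); // -> 'primary' (the fixed prop wins)
```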
You might be thinking, is that all? And it might not seem like much. But modifying props gives us a lot of control. For example, remember that we pass children as props. So, we can do things like wrap the inner part of a component with something. Say we have some CSS:
.spacer { padding: 0.375rem; }
And imagine we’re finding the spacing around some content too tight. With our handy tool
contramap(), we can add a bit of space:
import React from 'react';
import AtlaskitSectionMessage from '@atlaskit/section-message';

// Atlaskit's section message isn't a functional component so
// we'll convert it to one.
const SectionMessage = props => <AtlaskitSectionMessage {...props} />;

const addInnerSpace = ({ children, ...props }) => ({
  ...props,
  children: <div className="spacer">{children}</div>
});

const PaddedSectionMessage = Func.contramap(addInnerSpace)(SectionMessage);

const App = () => (
  <PaddedSectionMessage title="The Lion and the Unicorn">
    <p>
      The Lion and the Unicorn were fighting for the crown:<br />
      The Lion beat the Unicorn all round the town.<br />
      Some gave them white bread, some gave them brown:<br />
      Some gave them plum-cake and drummed them out of town.
    </p>
  </PaddedSectionMessage>
);
Functions as profunctors
Our
contramap() function lets us change the input and
map() lets us change the output. Why not do both together? This pattern is common enough that it has a name:
promap(). And we call structures that you can
promap() over, profunctors. Here’s a sample implementation for
promap():
const Func = {
  map: f => g => x => f(g(x)),
  contramap: g => f => x => f(g(x)),
  promap: f => g => h => Func.contramap(f)(Func.map(g)(h)),
};
Here’s an example of how we might use it:
import React from "react";
import AtlaskitTextfield from "@atlaskit/textfield";

// Atlaskit's Textfield isn't a function component, so we
// convert it.
const Textfield = props => <AtlaskitTextfield {...props} />;

const prependLabel = (labelTxt, id) => node => (
  <>
    <label htmlFor={id}>{labelTxt}</label>
    {node}
  </>
);

function withHardcodedProps(fixedProps) {
  return props => ({ ...props, ...fixedProps });
}

const ThaumaturgyField = Func.promap(
  withHardcodedProps({ name: "thaumaturgy", id: "thaumaturgy" })
)(prependLabel("Thaumaturgy", "thaumaturgy"))(Textfield);

const App = () => (
  <div>
    <ThaumaturgyField />
  </div>
);
With
promap() we could tweak the props and the output of a React component in one pass. And this is pretty cool. But what if we wanted to change the output based on something in the input? The sad truth is that
promap() can’t help us here.
Functions as applicative functors
All is not lost. We have hope. But first, why would we want to do this? Let’s imagine we have a form input. And rather than disable the input when it’s not available, we’d like to hide it entirely. That is, when the input prop isDisabled is true, then we don’t render the input at all. To do this, we’d need a function that has access to both the input and the output of a component. So, what if we passed the input (props) and output (node) as parameters? It might look like so:
// hideWhenDisabled :: Props -> Node -> Node
const hideWhenDisabled = props => node => (
  props.isDisabled ? null : node
);
Not all that complicated. But how do we combine that with a component? We need a function that will do two things:
- Take the input (props) and pass it to the component; and then,
- Pass both the input (props) and output (node) to our
hideWhenDisabled()function.
It might look something like this:
// mysteryCombinatorFunction :: (a -> b -> c) -> (a -> b) -> a -> c
const mysteryCombinatorFunction = f => g => x => f(x)(g(x));
And this mystery combinator function has a name. It’s called
ap(). Let’s add
ap() to our
Func module:
const Func = {
  map: f => g => x => f(g(x)),
  contramap: g => f => x => f(g(x)),
  promap: f => g => h => Func.contramap(f)(Func.map(g)(h)),
  ap: f => g => x => f(x)(g(x)),
};
Here’s how it might look as a diagram:
If we are working with react components, then it might look like so:
With that in place, we can use our
hideWhenDisabled() function like so:
import React from "react";
import AtlaskitTextfield from "@atlaskit/textfield";

// Atlaskit's Textfield isn't a function component, so we
// convert it.
const Textfield = props => <AtlaskitTextfield {...props} />;

// hideWhenDisabled :: Props -> Node -> Node
const hideWhenDisabled = props => el => (props.isDisabled ? null : el);

const DisappearingField = Func.ap(hideWhenDisabled)(Textfield);
Now, for a function to be a full applicative functor, there’s another function we need to implement. That’s
of(). It takes any value and turns it into a function. And we’ve already seen how to do that. It’s as simple as making an eventual value:
// Type signature for of():
//   of :: Applicative f => a -> f a
// For functions this becomes:
//   of :: a -> Function a
// Which is the same as:
//   of :: a -> b -> a
// We don't care what the type of b is, so we ignore it.
const of = x => () => x;
Let’s stick that in our module:
const Func = {
  map: f => g => x => f(g(x)),
  contramap: g => f => x => f(g(x)),
  promap: f => g => h => Func.contramap(f)(Func.map(g)(h)),
  ap: f => g => x => f(x)(g(x)),
  of: x => () => x,
};
There’s not much advantage in using
Func.of() over creating an inline function by hand. But it allows us to meet the specification. That, in turn, means we can take advantage of derivations and pre-written code. For example, we can use
ap() and
of() to derive
map():
const map = f => g => Func.ap(Func.of(f))(g);
Not all that useful, but good to know.
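We can sanity-check the derivation with plain functions (a sketch; the example functions are mine). Since of(f) ignores its argument and always returns f, ap(of(f))(g) collapses to x => f(g(x)), which is exactly map:

```javascript
const Func = {
  map: f => g => x => f(g(x)),
  ap: f => g => x => f(x)(g(x)),
  of: x => () => x,
};

// map derived from ap and of:
// ap(of(f))(g) = x => of(f)(x)(g(x)) = x => f(g(x))
const derivedMap = f => g => Func.ap(Func.of(f))(g);

const double = n => n * 2;
const inc = n => n + 1;

console.log(Func.map(double)(inc)(3));   // -> 8
console.log(derivedMap(double)(inc)(3)); // -> 8
```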
Functions as monads
One final thought before we wrap up. Consider what happens if we swap the parameter order for our
hideWhenDisabled() function. It might look something like this:
// hideWhenDisabledAlt :: Node -> Props -> Node
const hideWhenDisabledAlt = el => props => (
  props.isDisabled ? null : el
);
The inside of the function doesn’t change at all. But notice what happens if we partially apply the first parameter now:
import TextField from '@atlaskit/textfield';

// hideWhenDisabledAlt :: Node -> Props -> Node
const hideWhenDisabledAlt = el => props => (
  props.isDisabled ? null : el
);

const newThing = hideWhenDisabledAlt(<TextField name="myinput" id="myinput" />);
What’s the type of
newThing?
That’s right. Since we’ve filled that first Node slot, the type of
newThing is \(Props \rightarrow Node\). The same type as a component. We’ve created a new component that takes just one prop:
isDisabled. So, we can say that
hideWhenDisabledAlt() is a function that takes a Node and returns a Component.
That’s pretty cool all by itself. But we can take this one step further. What if we could chain together functions like this that returned components? We already have
map() which lets us shove a Component into an element enhancer. What if we could do a similar thing and jam components into functions that return components?
As it so happens, this is what the monad definition for functions does. We define a
chain() function like so:
// Type signature for chain in general:
//   chain :: Monad m => (b -> m c) -> m b -> m c
// Type signature for chain for functions:
//   chain :: (b -> Function c) -> Function b -> Function c
// Which becomes:
//   chain :: (b -> a -> c) -> (a -> b) -> a -> c
const chain = f => g => x => f(g(x))(x);
Drawn as a diagram, it might look something like this:
And here’s how it looks inside our
Func module:
const Func = {
  map: f => g => x => f(g(x)),
  contramap: g => f => x => f(g(x)),
  promap: f => g => h => Func.contramap(f)(Func.map(g)(h)),
  ap: f => g => x => f(x)(g(x)),
  of: x => () => x,
  chain: f => g => x => f(g(x))(x),
};

// We can't write `flatMap: Func.chain` inside the literal, because Func
// doesn't exist until the literal finishes evaluating. So we alias it after.
Func.flatMap = Func.chain;
I like to add
flatMap() as an alias to
chain(). Naming it
flatMap() makes more sense and is consistent with
Array.prototype.flatMap(). But,
chain() is what we have in the specification. And, to be fair, Brian wrote the Fantasy Land spec before
flatMap() for arrays existed.
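Before we get to components, here is chain() at work on plain functions (a sketch with made-up example functions): g computes a value from the shared input x, and then f receives both that value and x itself.

```javascript
const Func = {
  chain: f => g => x => f(g(x))(x),
};

// g :: a -> b — compute a length from the input string
const length = s => s.length;

// f :: b -> (a -> c) — given the length, return a function of the
// original input
const describe = len => s => `${s} has ${len} chars`;

const described = Func.chain(describe)(length);
console.log(described('hello')); // -> 'hello has 5 chars'
```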
If we substitute the component type into our diagram above, then it looks like so:
What can we do with
chain()/
flatMap()? We can take a bunch of functions that return components and chain them together. For example:
import Modal, { ModalTransition } from '@atlaskit/modal-dialog';

// compose :: ((a -> b), (b -> c), ..., (y -> z)) -> a -> z
const compose = (...fns) => (...args) =>
  fns.reduceRight((res, fn) => [fn.call(null, ...res)], args)[0];

const wrapInModal = inner => ({ onClose, actions, heading }) => (
  <Modal actions={actions} onClose={onClose} heading={heading}>
    {inner}
  </Modal>
);

const showIfOpen = inner => ({ isOpen }) => isOpen && <>{inner}</>;

const withModalTransition = el => <ModalTransition>{el}</ModalTransition>;

const modalify = compose(
  Func.map(withModalTransition),
  Func.chain(showIfOpen),
  Func.chain(wrapInModal),
);
We now have a function
modalify(), that will take any Component and place it inside a modal. Not any Element or Node. No, any Component. As a consequence, our new ‘modalified’ component will take four extra props. They are
actions,
isOpen,
onClose and
heading. These control the appearance of the modal. But, the way it’s written now, it will pass those to the inner component too. We can prevent that with a prop modifier:
const withoutModalProps = ({ actions, isOpen, onClose, heading, ...props }) => props;

const modalify = compose(
  Func.map(withModalTransition),
  Func.chain(showIfOpen),
  Func.chain(wrapInModal),
  Func.contramap(withoutModalProps),
);
Now, this perhaps isn’t the best example. It will probably be more familiar to most people if we write this out using JSX:
const modalify = Component => ({ actions, isOpen, onClose, heading, ...props }) => (
  <ModalTransition>
    {isOpen && (
      <Modal actions={actions} onClose={onClose} heading={heading}>
        <Component {...props} />
      </Modal>
    )}
  </ModalTransition>
);
But why?
Let me ask you a question. We have two versions of the same
modalify() function above. One written with composition, the other with plain JSX. Which is more reusable?
It’s a trick question. The answer is neither. They’re the same function. Who cares whether it’s written with composition or JSX? As long as their performance is roughly the same, it doesn’t matter. The important thing is that we can write this function at all. Perhaps you are more clever than I am. But it never would have occurred to me to write a
modalify() function before this. Working through the algebraic structure opens up new ways of thinking.
Now, someone might be thinking: “But this is just higher-order components (HOCs). We’ve had those for ages.” And you’d be correct. The React community has been using HOCs for ages. I’m not claiming to introduce anything new here. All I’m suggesting is that this algebraic structure might provide a different perspective.
Most HOCs tend to be similar to our
modalify() example. They take a component, modify it, and give you back a new component. But the algebraic structure helps us enumerate all the options. We can:
- Modify Nodes (elements) returned from a Component with
map();
- Modify Props going into a Component with
contramap();
- Do both at the same time with
promap();
- Modify Nodes based on values in Props with
ap(); and
- Chain together functions that take a Node and return a Component with
chain()(aka
flatMap()).
And no, we don’t need
promap() or
ap() or
chain() to do any of these things. But when we do reuse in React, we tend to think only of Components. Everything is a component is the mantra. And that’s fine. But it can also be limiting. Functional programming offers us so many ways of combining functions. Perhaps we could consider reusing functions as well.
Let me be clear. I’m not suggesting anyone go and write all their React components using
compose,
map(), and
chain(). I’m not even suggesting anyone include a
Func library in their codebase. What I am hoping is that this gives you some tools to think differently about your React code. I’m also hoping that the algebraic structure of functions makes a little more sense now. This structure is the basis for things like the Reader monad and the State monad. And they’re well worth learning more about.
https://jrsinclair.com/articles/2020/algebraic-structure-of-functions-illustrated-with-react-components/
What's going on everyone and welcome to part 9 of our "unconventional" neural networks series. We've created many deep dream images up to this point, and now we're looking to convert them to video.
To do this, we're going to use cv2's
VideoWriter, but there are many other ways to take a sequence of images and turn them into a video. We'll start with some imports:
import cv2
import os

dream_name = 'starry_night'
dream_path = 'dream/{}'.format(dream_name)
Now for the video codec:
# Windows:
fourcc = cv2.VideoWriter_fourcc(*'XVID')
# Linux:
#fourcc = cv2.VideoWriter_fourcc('M','J','P','G')
Next for the output file format/settings:
out = cv2.VideoWriter('{}.avi'.format(dream_name),fourcc, 30.0, (800,450))
This means it's a 30FPS 800x450 video that will be output.
Next, somehow we're going to iterate over files, so I am just going to look for the stopping point:
i = 0
while os.path.isfile('dream/{}/img_{}.jpg'.format(dream_name, i+1)):
    i += 1
dream_length = i
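Pulling that search into a helper makes it easy to test without any real dream images. Here is a sketch (the helper name is mine; it counts consecutive frames starting from img_0, demonstrated on a temporary directory):

```python
import itertools
import os
import tempfile

def count_frames(folder):
    """Return how many consecutive img_0.jpg, img_1.jpg, ... exist."""
    for i in itertools.count():
        if not os.path.isfile(os.path.join(folder, 'img_{}.jpg'.format(i))):
            return i

# Quick demonstration with three empty frame files:
with tempfile.TemporaryDirectory() as tmp:
    for i in range(3):
        open(os.path.join(tmp, 'img_{}.jpg'.format(i)), 'w').close()
    print(count_frames(tmp))  # -> 3
```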
Now that we have the maximum length:
for i in range(dream_length):
    img_path = os.path.join(dream_path, 'img_{}.jpg'.format(i))
    print(img_path)
    frame = cv2.imread(img_path)
    out.write(frame)

out.release()
We write this frame out to our file for every frame we have. Once we're done, our video is complete! Now you can make something like:
Alright. In the next tutorials, we're going to be switching gears and playing around with sequence-to-sequence models. Just about everything in life can be mapped to one sequence causing another sequence, where both the input sequence and the output sequence may vary in length.
https://pythonprogramming.net/deep-dream-video-python-playing-neural-network-tensorflow/
John Hughes talks about his experience with Erlang vs. Haskell in an InfoQ interview. While the discussions about strict vs lazy, pure vs side-effecting, and dynamic vs static typing are interesting, he raises a good question for the LtU crowd at the end:
So, LtU, where is the next order of magnitude coming from?
"If you compare Haskell programs to C code or even C++ often, they are about an order of magnitude smaller and simpler."
What substantial (> 100 KLOC) applications have been written in both languages to warrant such an extraordinary claim?
The original C program was 4KLOC, and the rewritten OCaml version was 598 lines.
The exact numbers were:
          wc -c     wc -l
virsh     126,056   4,641
mlvirsh   19,427    598
% size    15%       13%
Note that the source for both programs has changed a lot since I did that. You'd have to go back through the VC history and look for the particular versions that I compared.
I wouldn't be surprised by shrinkage, but rewriting a system in the same language would probably also shrink it due to gained design insight; that's not apples-to-apples.
I didn't write the original, and in any case the program is a straightforward shell - hardly any room for "insight" since all it does is to take commands typed by the user and dispatch them to the underlying library.
Why do people take these claims seriously and keep repeating them? So cliche!
'sides, functional programming languages were around long before C - was the order of magnitude productivity gain around then?
I think pretty much any widely-used programming environment is at least 10x as productive as any were 30 years ago - this is not because of any new paradigm, but because of library support - ultimately made possible by better hardware.
So to answer - the next order of magnitude is going to come from library and hardware support, just as it always has.
Java, Javascript : See anonymous functions
C#, Ruby : See lambda expressions
Java has no anonymous functions. Anonymous objects with static closure, yes, but no function type or anonymous function. You can make it work, but it's very ugly. Also, if you have static types, you really need tuples to make functional programming work very well. Java lacks these.
All in all, Java makes true functional programming pretty difficult, except in very simple cases like Comparator.
Too bad the real closures support seems to be dead for Java 7. :-(
But here are a couple of possibilities:
Rational and composable abstractions for cross-cutting concerns. There's a lot of redundancy and boilerplate that goes into handling the sorts of stuff that aspect-oriented folk talk about. Even if you don't much like their solutions, they are certainly exposing a serious problem.
Moving beyond traversing physical data structures to querying and updating logical data structures. Even in the one-order-of-magnitude languages, there's way too much code spent in traversal of data graphs. Imagine instead working with in-memory relations and sets rather than lists and maps, with the compiler sweating the implementation details.
Can you give some examples of cross-cutting concerns?
From a few years ago when AOP was hot I remember that people were repeating two examples: logging and drawing shapes/observer pattern. Neither seem important enough to introduce a mechanism that destroys local reasoning.
Security, transactions, monitoring/management, and session management. Code for these ends up scattered all over most enterprise applications, in ways that are both utterly stereotypical and easy to screw up.
Security,
I agree. Information flow and verification of security properties will be useful, though I'm not certain that it will reduce code size; in fact, perhaps the opposite given much code nowadays isn't overly concerned with sophisticated security properties.
monitoring/management
You mean live upgrade? Or heap profiling? Or both?
session management
This is indeed important, and we have a glimmer of good solutions with delimited continuations.
I don't follow -- what's the connection between aspects and verifying inf. flow properties? Perhaps you mean something like runtime enforcement when atoms are predicated on dynamic labels?
[I do agree about the use of aspects for enforcing security policies: two of my recent web projects use them for just that. Funny enough, one even brings in information flow type systems -- but I don't think in the way you mean. Will try to remember to update this thread once I can release the writeups.]
edit: aspects do decrease code size and in an important way. If your language has them (which isn't a super big deal at the interpreter / compiler level), you can remove frameworks faking them from your TCB. That's hairy code when not naturally supported, so it's a big win.
Nothing so deep, I'm afraid. Just the everyday plumbing of hooking up enterprise applications to management consoles. Basically making the application as a whole and individual components of it act as Observables. Right now this takes either fancy and fragile reflection, or a large amount of boilerplate.
Abstracting over patterns (pattern calculus, first-class patterns), abstracting over type structure (polytypism, polyvariadicity, more?), abstracting over execution (parallelism), abstracting over location (mobility), abstracting over bit-level data, and more use of partial evaluation and so less fear of using higher abstraction due to perceived performance degradation. That last point alone can bring significant improvements in conciseness.
IMO the largest remaining source of productivity will come from finding the right abstractions to place into libraries. The main improvement to come from languages will be more convenient expression of the needed abstractions. There's only so much concision that an abstraction can bring to a particular problem. The big win is in obtaining libraries that are more universally applicable so that more problems can be solved with canned solutions.
The main improvement to come from languages will be more convenient expression of the needed abstractions. There's only so much concision that an abstraction can bring to a particular problem.
I agree. Haskell is seems to be settling on some high-level abstractions and their composition (monoids, arrows, monads, applicative, etc.). I think the first two points I mentioned are in this vein as well. Control over bit-level representation is important for displacing unsafe code with safer, higher-level code, so this too is a win for conciseness IMO.
What ever happened to predicate dispatching and languages such as Cecil? It's more general than pattern matching and multiple dispatch, and promotes conditional logic to a first class citizen.
according to Pascal Costanza (at the bottom of all that).
The problem with predicate dispatch is that you can't check implication between predicates. If you could determine that P => Q then you'd first test for P and execute P's method, otherwise test for Q and execute Q's method. Unfortunately this is undecidable in general. Subsets like type tests are often decidable. For example since Cat?(x) => Animal?(x) your first choice is to execute a method on Cat and only if that doesn't exist you execute it on Animal.
Is this really necessary? I believe that a much simpler approach works in practice: you look at the definition order of methods. If you have an Animal class and a Cat class you usually define methods on Animal first and on Cat later (later meaning that the methods on Cat come after the methods on Animal in the source code). This suggests the simple policy of trying methods last-to-first, i.e. the opposite order of pattern matching in functional languages.
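As a sketch (mine, not from the thread), this last-to-first policy is easy to prototype with an explicit method table: methods are registered with a predicate, and dispatch walks the registrations in reverse definition order, firing the first predicate that matches.

```python
# Predicate dispatch with a 'try methods last-to-first' policy.
methods = []

def defmethod(pred, fn):
    """Register a method guarded by a predicate."""
    methods.append((pred, fn))

def dispatch(x):
    """Try methods in reverse definition order; run the first match."""
    for pred, fn in reversed(methods):
        if pred(x):
            return fn(x)
    raise TypeError('no applicable method')

class Animal: pass
class Cat(Animal): pass

# The Animal method is defined first, the Cat method later...
defmethod(lambda x: isinstance(x, Animal), lambda x: 'generic animal sound')
defmethod(lambda x: isinstance(x, Cat),    lambda x: 'meow')

# ...so the Cat method is tried first, as the post suggests.
print(dispatch(Cat()))     # -> meow
print(dispatch(Animal()))  # -> generic animal sound
```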
I think it can be simple; I just suspect there aren't large enough projects to tell whether it turns into spaghetti code.
I've been toying with the idea of predicated blocks, where a block (lambda) returns a value if the predicate is true, and void if not. Assigning a void result does nothing; void is a nonexistent value:
f = n [n even?] -> n * n
x = f eval: 2 # x is now 4
x = f eval: 3 # x is still 4
Because of this, conditional logic is simply applying these predicate blocks:
x = if: [x > 1] -> 1
or
x = ([x > 1] -> 1) eval
I haven't implemented predicate blocks yet, so I can't say if this will turn out or not, but I'm hopeful.
Predicates could also be used for applying AOP concepts:
@security = 5
def foo( n )
puts n
end
def foo( n ) when n.even?
puts next_method() + "is even"
end
def Object.send(msg, args) when access < security
puts "You can't be here"
end
In Ruby, you can override "send()", which is the central dispatch function, but it can get messy/dangerous when you steal core functionality. Predicates would only intercept relevant conditions.
Interesting. Do you have ideas about composing predicated blocks? For example:
seq(f, g) = [x] ->
let v = f eval: x
if(v is not void) return v
else return g eval: x
Maybe there are other useful combiners?
I have been thinking about AOP and in particular COP along the same lines: context oriented programming = predicate dispatch + dynamic variables.
I'm not entirely clear on the "[x]" part, do you mean the same as this in Haskell (I'm no Haskell expert, by the way)?
seq(f, g) = \x -> ...
If so, it could be something like this if there were an OR operator for void:
seq = f, g -> x ->
return f eval: x || g eval: x
end
Do you have another example of composing functions with predicates? This could be interesting.
BTW, a block with two predicates would look something like this:
stream each: (char [alpha?] -> read-identifier;
[digit?] -> read-number
end)
It looks very similar to pattern matching, but can also be passed around since it's a first class citizen. Parenthesis are added to highlight the predicate block.
You could take other logical operators (&&, xor). I don't know if these are useful.
The maybe monad comes to mind:
compose(f, g) = \x -> f(x) && g(f(x))
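Modelling a predicated block as a function that returns a nonexistent value (here Python's None) makes this Maybe-style composition concrete. A sketch with assumed semantics (the names are mine): the second function runs only if the first predicate held and produced a value.

```python
def guarded(pred, fn):
    """A 'predicated block': returns fn(x) if the predicate holds, else None."""
    return lambda x: fn(x) if pred(x) else None

def compose(f, g):
    """Maybe-style sequencing: run g only if f produced a value."""
    def run(x):
        v = f(x)
        return g(v) if v is not None else None
    return run

square_if_even = guarded(lambda n: n % 2 == 0, lambda n: n * n)
half = lambda n: n // 2

pipeline = compose(square_if_even, half)
print(pipeline(4))  # -> 8 (4 squared is 16, halved is 8)
print(pipeline(3))  # -> None (predicate fails, nothing runs)
```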
I don't think context-oriented programming has to be that complex.
Is COP ever defined exactly in the way you describe?
What I said wasn't what I meant. I meant that predicate dispatch + dynamic binding provides the features of COP, but not the other way around. COP certainly doesn't provide predicate dispatch. COP ⊂ predicate dispatch + dynamic variables.
Layer activations become bindings to dynamic variables and method definitions in layers become methods that dispatch on the values of dynamic variables.
Can you go more into detail about context oriented programming? For example, where do the dynamic variables come into play?
There's a language called Cecil which has predicate classes - an object can become a different class depending on a predicate:
pred full_buffer isa buffer when buffer.is_full;
method put(b@full_buffer, x) { ... }
I think that could be very useful, but I haven't used the language to a great extent.
The way you define full_buffer is exactly how I would use abstract state in dodo (although dodo is more verbose).
For a polymorphic variable, it makes no sense to store the state separately, dodo makes it part of the type and stores the type pointer in the variable.
class Buffer:
...
rules:
state = if (is_full) full_buffer else partfull_buffer.
...
<state name="partfull_buffer">
method Put: Put(x)
{
#buffer is part full, we can add an item now
...
}.
</state>
.
Dodo look interesting, I'll have to take a look at it. States reminded me of UnrealScript states, although I don't think there is a "rules" type of functionality. It can also be done in languages supporting prototypical inheritance, Io in this example:
Buffer := Object clone do(
full := self clone
put := method(value,
self setProto(full)
)
full put := method(value,
writeln("Buffer full!")
)
)
Or using muli-methods and explicitly passing state (Dylan):
define method put (buffer :: <buffer>, state)
buffer.state := #full;
end;
define method put (buffer :: <buffer>, state == #full)
format-out ("Buffer full!");
end;
By saying that Haskell programs could be shrunk 10x smaller, he's implying that there is still a lot of redundancy in the code of Haskell projects. Is Haskell really so bad that it has that much room for improvement in this respect?
In addition, I think his whole focus on program size is misguided. As an analogy, once you rise to a certain point above the poverty line (code bloat and redundancy), more money (compression) doesn't increase happiness (readability, usefulness). I don't agree with reducing program size simply for the sake of code compression. Productivity is what needs to be focused on.
If any productivity is to be gained after hitting the maintainable-terseness limit, it is through tooling:
* structured editors
* automated code refactoring
* modular, composable language systems (type systems, paradigm implementations, etc.)
* spec-to-code verification
The tool is going to have to do a lot of manual coding for us. We're going to be simply telling it what to do -- what refactorings to make, what semantics to implement. Tools will become more intelligent, and thereby more assistive and autonomous -- more genuinely helpful and useful.
I've seen back-of-the-envelope calculations (sorry, no cites) that an average program of the Fortran/C/Java variety has about two orders of magnitude less information density than an average mathematics paper. Assuming that Haskell/OCaml/Scala wins you one of those orders of magnitude, that leaves 10x room for improvement until programming is as expressive as mathematics. The cause of this is mostly that mathematics allows for a lot more abstraction than even the most flexible programming language currently available. This is all hand-waving, but seems reasonable enough for purposes of argument.
Since you make a comparison to mathematics, I think it's also fair to point out that the definitions and theorems that have been accumulated in mathematics can be viewed as a powerful standard library.
Libraries in mathematics are more powerful not just because they've 3000 years to our 60, but because there are so many more sorts of things that they can abstract over, and so many more ways of combining abstractions. We're just about good with abstracting over types, after all, whereas even the least interesting of realms of mathematics abstracts over type constructors, and usually over type constraints and implementation patterns. That means their libraries are more valuable than ours, and more likely to be reused. The interface to Euclid's library is still in use, even though his implementation (proofs) is probably about 30 versions obsolete.
We've ridden this analogy about as far as it will go without breaking, but I think the learnings are still valuable
If "mathematical work" doesn't have a high abstraction/information content, it doesn't even become a paper (uninteresting to the community) while most mainstream.lang programs are truly uninteresting to almost everyone.
Also, the average(uh?) mainstream.lang program is implicitly built on top of a whole lot of concerns that are not captured in source code (they are not much better captured by non-mainstream languages, either, imho).
If programs were as dense as mathematics papers, only mathematicians could read them! That would be a disaster. It is a *very good thing* that most programs are less dense than that. When it comes to readability we should be aiming for code that's readable like the New Yorker is readable, *not* like a mathematics paper is readable.
I'd say Python is getting pretty close to the ideal, actually.
The embarrassing way to code.
One argument for comparing code size is the old rule of thumb that people tend to have the same lines-per-hour productivity regardless of language. This seemed to hold up as we compared Erlang-based projects with projects using C/C++ or Java, but the code written in these projects was fairly straightforward, without mind-boggling abstractions on either side. The code bases compared were in the hundreds, or even thousands, of KLOC range.
In the first rounds, it felt fairly simple to translate this to a corresponding productivity increase. The main difficulty was to agree upon the LOC ratio, since products compared did not have identical characteristics and functionality. A conservative estimate was that the Erlang code was at least 4x more compact, but obviously, this was not universally agreed upon. My personal opinion is that while several more or less systematic comparisons supported this, the objections tended to be more of the hand-waving variety. I have asked for more concrete contrary evidence for years, but have yet to be shown a single study. I'm convinced such studies would help the discussion.
Initially, the bug ratio per KLOC was roughly the same, but a few years later, the bug ratio in the Erlang projects had been reduced by a factor 4 (mainly through automated testing), while similar improvements were not seen in other projects. Furthermore, portability and the cost of making spin-off products seemed much better on the Erlang side (this is me hand-waving; we didn't follow through with further studies, since the first one was not particularly welcome except among those already convinced.)
We have also seen that the average number of lines of code per developer tends to be some 4-10x higher in the Erlang-related projects than in others (and these all tended to show roughly similar numbers for C++ or Java). If we agree with the observation that Erlang code tends to be 4-10x shorter than corresponding C++ code, and taking into account that project costs increase with size in a way that is much worse than linear, the 10x productivity increase starts to seem like a very conservative estimate over time.
One explanation for this might be that side-effect free code composes so much easier, and is much less sensitive to peripheral changes. Once you have weeded out all bugs from a purely contained component, it stays bug free forever, no matter how many places it is used in (provided it is used correctly). If so, this seems to support the argument that increased productivity will largely come from the growth of high-quality functional libraries that can be composed, opening up for new and even more powerful abstractions, and so on.
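The composition claim can be sketched concretely. This is a toy illustration in Python rather than Erlang, and every function name here is invented for the example: two independently verified pure functions combine into a pipeline whose behavior follows entirely from its parts, with no shared state to weed bugs out of a second time.

```python
from functools import reduce

def compose(*fns):
    """Compose pure functions left to right: compose(f, g)(x) == g(f(x))."""
    return lambda x: reduce(lambda acc, f: f(acc), fns, x)

# Two independently tested pure components (hypothetical names)...
def normalize(s):
    return s.strip().lower()

def tokenize(s):
    return s.split()

# ...combine with no new shared-state failure modes: the pipeline's
# behavior follows entirely from the behavior of its parts.
pipeline = compose(normalize, tokenize)
```

Once `normalize` and `tokenize` are each known correct, `pipeline` is correct wherever it is used, which is the reuse property the comment attributes to side-effect free code.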
The shoemaker watched the weaver leave, and frowned. Yes, he supposed it was maybe a little bit strange that his children had no shoes. But what kind of shoes could one make for children with lacerated gangrenous feet? And besides, the children were all sickly, and slept a lot, and all died young. So was it really worthwhile? There just isn't a market for such shoes, he decided. And I am, above all else, a businessman. He nodded, and walked across to his last, stepping between the caltrops, his booted feet crunching on the sewage-beslimmed broken glass covering the floor of his home. It's simply unclear how one might approach it, he thought.
How might our programming languages and environments be improved? How about this. Recover what once was, but has been lost. Debuggers which can run time backwards. Multiple dispatch. Gather what exists, but is scattered. Choice of tools need not be a choice of "in which ways shall I be metastatically crippled on *this* project?". Create the obvious but neglected. Constraint propagating knowledge management in IDEs. Attract funding. Increased competition in areas where programming tech is a secondary competitive advantage. Increased and improved research funding, the lack of which is simply a will to fail. Create, perhaps, a common community vision of what is needed. Helps with funding.
The answer hasn't changed since 1999. Or changed much since 1989.
It's wonderful how much less things suck each year. But change has been so very slooowwwwwww. A long twilight vigil, making do with crippled and crippling tools, hairs going gray while waiting for the economics and knowledge to gel, for civilization to creep forward.
Each year the shoemaker's abused and pregnant wife welcomes the spring with hope, and struggles to make this year different.
Recover what once was, but has been lost.
Can you expand on this?
Debuggers which can run time backwards.
This technology exists today.
Gather what exists, but is scattered.
Master Metadata Management?
Choice of tools need not be a choice of "in which ways shall I be metastatically crippled on *this* project?".
Viewpoints Research Institute-related projects like OMeta?
Create the obvious but neglected.
Live sink/hoist and storage of optimization artifacts into a living record of performance trade-offs. What else?
Constraint propagating knowledge management in IDEs.
What are search terms for this that I could use to see what people are doing?
Increased and improved research funding, the lack of which is simply a will to fail.
What project do you have that I can follow and support?
Create, perhaps, a common community vision of what is needed.
Many researchers struggle with figuring out how to connect with practitioners and find practical testbeds for their ideas. Alan Kay's VPRI is no exception. Just ask them how hard it is for them to collaborate with practitioners on their wild ideas.
Tracking ideas is hard, especially when people do not encourage you to follow them. It took me 12 years from the day I started programming to the time I read books like Watch What I Do that explained what the best and brightest were doing with programming systems.
Better integration of concerns.
The driving force is capital.
1. HP's acquisition of Mercury Interactive and current plans to integrate it into its QA/Testing product line
2. Microsoft's acquisition of Hyperion and Stratature, and choosing to integrate both into the SQLServer 2008 R2 platform
...are probably the two biggest signs that the old approach to islands of tools is over
Smaller signs include Mozilla's focus on providing real value-add on top of its browser for developers.
A very expressive type system with "type libraries" + A very powerful proof engine.
The way this works is that you write what you want the software to do in a constraint based way. Then you let the proof engine find the program that does it. This gives a truly declarative approach. Good proof-engines will of course have to try and find good proofs, as proofs are NOT irrelevant when we're talking about performance.
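As a rough sketch of that workflow, here is a toy "proof engine" in Python; the primitives and their names are invented for illustration. The caller states only input/output constraints, and the engine searches for a composition of primitives that satisfies them. A real engine would search proofs rather than brute-forcing pipelines, and would weigh proof quality for performance, but the declarative shape is the same.

```python
from itertools import product

# Hypothetical primitives the "engine" is allowed to compose.
PRIMS = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def synthesize(spec, max_len=3):
    """Search for a pipeline of primitives meeting every (input, output)
    constraint in `spec`. A brute-force stand-in for a real proof or
    constraint engine, which would search far more cleverly."""
    for length in range(1, max_len + 1):
        for names in product(PRIMS, repeat=length):
            def run(x, names=names):
                for n in names:
                    x = PRIMS[n](x)
                return x
            if all(run(i) == o for i, o in spec):
                return list(names)
    return None
```

`synthesize([(2, 9), (3, 16)])` discovers the pipeline `["inc", "square"]`; the caller stated only what the result must satisfy, never how to compute it.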
If I were to make a list:
DMP is not practical today, since it really requires the first two elements to be secure and high-performance.
DMP also requires huge libraries/databases of knowledge and meta-knowledge - strategies, assumptions, constraints (law, ethics, physics, capabilities and service contracts), and interpretation (understanding goals, in context). One difference between DMP and other designs is that DMP can potentially leverage large libraries automatically: as a library grows more robust and offer both more specialized and more abstract strategies, better assumptions, more data... DMP programs attached to that library may reactively leverage these improvements without change. While libraries exist today for procedural and functional programming, they are simply not annotated in a manner suitable for automated selection or development of strategies.
Further, more development is necessary to prove desirable meta-level properties for DMP, such as progress and safety - that select DMP sub-programs can be guaranteed to achieve a sub-goal within the contextual constraints and invariants, or at least fail in a very well-defined manner in terms of domain properties. Partial-failures remain especially problematic; knowing which services can recover from partial failures, and under which conditions they can do so, and how to effect this recovery, is critical to automatically producing 'safe' plans. These forms of 'safety' and 'progress' encompass far more than type-safety and computation progress.
The output of DMP for a given goal is a slightly more concrete strategy, with hooks for developing sub-plans, contingencies based upon acquisition of more information. Hooks to consult supervisors and expert systems need not be considered different from other object capabilities and service contracts. Ideally, the output strategy is live and reactive such that re-planning (and re-prioritization) can be performed on-the-fly based on changes in assumptions and knowledge and improvements in strategies and tweaks in functions. Ideally, change subsets can readily be JIT'd for performance. Such 'reactive' properties are also extremely useful in debugging, since the associations necessary to create reactive procedures and plans are also sufficient to explain plans to developers and managers in terms of why strategies were selected based on live data, performance (domain costs of all sorts - fuel, time, noise, etc), desired constraints specific to the context, etc.
For secure composition and system integration, DMP must allow gathering data, strategy, code from disparate sources trusted on a per-use basis. That is, the 'library' or 'database' used by a given application may need to be a reactive transclusion of other libraries and databases. Automated code distribution, in combination with capability security and reactive programming, are elements that can make these distributed multi-sourced libraries practical. Reactive programming allows high performance.
DMP for domain programming is (theoretically) much stronger than related logic-based programming styles. Logic and constraint-logic programming tend to allow conclusions about a world, but don't much help decisions on what to do about it. Rules-based programming executes side-effects based on observations and conclusions, but fails to capture the purpose or reason for those side-effects and so quickly grows into an arcane, tangled mess.
i worry that reducing boilerplate sometimes leads to things which are inscrutable and hard to suss out when it comes time to fix bugs or extend the system.
It is true that moving something that was explicit semantic noise (but, importantly, explicit) to become implicit can sometimes make things more difficult for a late-joining developer. It is difficult even for skilled programmers new to a project to rapidly locate and identify things like asynchronous task queues, polymorphism, subscriptions, etc. and grok how they all tie together in a large project.
However, the problems that exist with a given model will still implicitly exist in the same model implemented in such a manner that the programmer must use explicit boiler-plate. The only differences: there are new opportunities to introduce code bugs, and any design-bugs are buried by semantic noise. By increasing complexity far enough, you reach a point where there are no obvious bugs...
I think it is one task of an IDE and language design to help teach developers what they need to know, but I think this is compatible with rejecting boiler-plate in every form you can feasibly eradicate.
Yes, you go from writing repetitive code to having code written for you, which you then have to guess about when it executes. Perhaps it's best to have a representation of the boilerplate to examine and debug, even if you don't write it yourself.
This is what they said about assembly language, too. In some cases it's still true, but the number of times I've peeked at the code generated by gcc represents a minuscule percentage of all the code I've compiled. So... progress is possible.
If you have a type theory where inhabitation is proved by demonstrating a program of the appropriate type, you have a kind of DMP. You get the logical content in the type, and the actual method of achieving the type (the proof) in the term. Powerful theorem provers will find the term automatically in some cases. The term itself can be manipulated using rewrite strategies to achieve performance enhancement, or programmed manually in the case of bottlenecks. I think type-theory is a really nice setting for the implementation of DMP.
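A minimal sketch of inhabitation-as-proof-search, restricted to the implicational fragment: atoms are strings and a pair `(a, b)` stands for the arrow `a -> b` (this encoding is invented for the example). The depth-limited backward search below returns the witnessing lambda term when it finds one.

```python
def prove(ctx, goal, depth=5):
    """Backward search for a lambda term inhabiting `goal`.
    Types: atoms are strings, (a, b) is the arrow type a -> b.
    Toy, depth-limited, implicational fragment only."""
    if depth == 0:
        return None
    if isinstance(goal, tuple):          # goal is a -> b: introduce a lambda
        a, b = goal
        v = "x%d" % len(ctx)
        body = prove(ctx + [(v, a)], b, depth)
        return None if body is None else "(\\%s. %s)" % (v, body)
    for name, t in ctx:                  # goal is an atom: try each hypothesis
        term, cur, ok = name, t, True
        while cur != goal:               # peel arguments off the arrow spine
            if not isinstance(cur, tuple):
                ok = False
                break
            a, cur = cur
            arg = prove(ctx, a, depth - 1)
            if arg is None:
                ok = False
                break
            term = "(%s %s)" % (term, arg)
        if ok:
            return term
    return None
```

`prove([], ("a", ("b", "a")))` finds the K combinator `(\x0. (\x1. x0))`, while an unprovable atom yields `None`; the returned term is exactly the "actual method of achieving the type" the comment describes.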
Using the type-system for DMP may be useful in a few cases, but that results in a very rigid, 'static' variation of DMP, whereas the ideal form will be able to react to changes in the environment, perform re-planning at need, and yet still be able to be reduced via partial-evaluation in the event that little or no live data is required. DMP is hardly unique in this.
Use of rich type systems to influence evaluation of terms is a symptom - an anti-pattern - not a solution!
Programmers should not be required to use such indirect mechanisms that bifurcate expression of their programs into two languages (the type language and the term language). Programmers should not be required to determine at the time of writing a function or expression whether it will be fully realized in a static or dynamic context... yet the rich use of type languages to inject elements into the term language is often focused on making exactly that distinction!
A language that supports partial-evaluations or even staged evaluations would allow more flexible use of code. A language with terminating 'pure' computations would allow arbitrary degrees of partial evaluation at that layer, even if non-termination is supported in a higher layer (procedures, for example). Partial evaluation should be favored above staged approaches because it is implicit and can cut across terms, but either way the same language can be used regardless of static vs. dynamic context.
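Hand-rolling the idea in Python makes the contrast concrete; this is a toy sketch, not a real partial evaluator. Given a statically known exponent, the test on `n` and the recursion can be reduced away ahead of time, while the same definition remains available for fully dynamic use.

```python
def power(x, n):
    """Ordinary dynamic definition: n may only be known at run time."""
    return 1 if n == 0 else x * power(x, n - 1)

def specialize_power(n):
    """Partial evaluation by hand: with n known statically, the test on n
    and the recursion are reduced away, leaving only multiplications."""
    if n == 0:
        return lambda x: 1
    inner = specialize_power(n - 1)
    return lambda x: x * inner(x)

cube = specialize_power(3)   # behaves like: lambda x: x * (x * (x * 1))
```

Nothing in `power` had to be committed in advance to a static or dynamic role, which is the point: the same term language serves both contexts.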
We should be going the other direction: banish all 'explicit' type system elements from the languages entirely. Type safety would still be inferred, and data-types would be inherent to exactly what is left 'undefined', and the type-system could still be 'enriched' by enriching the term language (i.e. adding sealer/unsealer pairs would force introduction of phantom types; supporting partial-functions over recursive structure would allow one to properly define matrix operations and add inherent 'uniqueness' qualifications to sets and relations, etc.), but there should be no semantics where the 'types' influence evaluation.
If I'm tracking you, you're objecting to a term expression that will be evaluated differently depending on what type expression goes with it. That makes some sense (although it doesn't seem very far from rejecting the idea of dispatch on type, which is a bit worrisome). But type inference has always worried me. Suppose I write some code that's supposed to do something; I am, in effect, claiming that it will correctly do what it's supposed to. If I specify a type signature for what I've written, then I'm declaring the circumstances under which I claim it will correctly do what it's supposed to. If I don't specify a type signature, and the language system infers the most general type signature for which my code is syntactically meaningful, then I've lost some control over what my code is understood to mean; I have no straightforward way to say that such-and-such will work correctly under a more limited set of circumstances, and conversely I may be blamed for the failure of my code in a situation where I never had any illusions it would work.
I'm not a fan of traditional static type systems, either, because any type system has to make allowances for Gödel's Theorem — admitting some things that aren't okay and rejecting some things that are — and the choice of those trade-offs is traditionally made by the language designer, leaving no way for the programmer to do anything really new in the type-checking dimension. Is there a way around that? Perhaps. Let propositions about programs be dynamic objects, and set up constructors for these objects corresponding to axioms, so that a proposition can't be constructed unless it's provable. The task of figuring out how to prove something is then a programming problem, and instead of accepting the language designer's compromise with Gödel's theorem, the programmer can choose their own compromise with the halting problem. (Okay, so maybe there are a few kinks to work out, like, what forms of propositions will be supported, what axioms will be supported, and how can proof-construct code be gracefully integrated with its object so that the whole is lucid rather than opaque. But I still think that ore ought to be refinable into something shiny.)
any type system has to make allowances for Gödel's Theorem
But any logic you introduce will necessarily have the same limitation - there will be some things you can't prove in the logic. And for any logic you choose, there's some type system that would have equivalent expressive strength.
That said, I think it's a good idea and is essentially the approach I'm using for my language. One comment I'd make is that type systems are very practical lightweight theorem provers, so you want to make sure you don't lose that practicality.
And for any logic you choose, there's some type system that would have equivalent expressive strength.
Not so sure, though, about abstractive strength. As type systems are theorem provers, they have built into them not only the logic, but the algorithm by which proofs are to be constructed in it. So they confront Gödel twice. Even if you can see a way of proving it within the logic, if the architect of the theorem prover hadn't thought of that technique, you're out of luck. One could add to the algorithm; or one could think of the type structure as the logic, and add to the type structure (but how awkward is it going to be to encode ever-more-sophisticated proof techniques in the type structure?). Either way, if it requires a language redesign it's not abstraction, and if it is abstraction then, it seems to me, we're really working with proofs rather than types.
There's also a very deep — read, "slippery" — metaphysical difference between Gödel's Theorem and the Halting Problem. Gödel's Theorem has to do with logical antinomies, which caused a major existential crisis for mathematics a century ago (and are now a sort of old cemetery next-door, that we've learned to live with, but at moments when we look that way it can still send a chill up the spine). The Halting Problem is about general programs not being able to handle some inputs, which may be annoying but just isn't that big a deal. It would be a nice de-escalation if we could "bleed off" some of our antinomies into mere nontermination.
(Pardon my not being up to speed, but is your language something that there's information about on the web? (Alas my own efforts aren't up to the proof-system stage yet.))
There's also a very deep — read, "slippery" — metaphysical difference between Gödel's Theorem and the Halting Problem.
I think trying to make this distinction at all is even more slippery than the details of the distinction would be. ;-)
The Curry-Howard interpretation equates the two pretty strongly.
The Halting Problem is about general programs not being able to handle some inputs, which may be annoying but just isn't that big a deal.
By the same token, working with a broken logic isn't that big a deal: all human beings get by pretty well using one. ;-)
However, when you want to be sure about something, it pays to work in more rigorous and restricted systems, even if you lose some expressiveness
Not really, no. Not in the sense I'm speaking of, anyway. Curry-Howard is commonly billed as a correspondence between logic and computation, but it's really between two different systems of logic, with no computation involved. Proofs of intuitionistic logic are connected with types of typed lambda-calculus, not with reductions of typed lambda-calculus.
Besides which, my point isn't about any mathematical difference, or lack thereof, between Gödel and Halting, but rather about a metaphysical difference between them. Actually triggering the anomaly of Russell's paradox destroys the fabric of one's reality (so to speak); but if you define predicate
($define! A ($lambda (X) (not? (X X))))
and then apply A to itself, what happens isn't nearly as traumatic.
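A Python rendering of the same predicate makes the contrast concrete (a toy sketch): defining A is harmless, and applying it to itself merely recurses until the runtime aborts the attempt.

```python
import sys

# Analog of ($define! A ($lambda (X) (not? (X X)))) from above
A = lambda X: not X(X)

sys.setrecursionlimit(1000)   # keep the inevitable abort quick
try:
    A(A)                      # "does A satisfy A?" never settles
    outcome = "answered"
except RecursionError:
    outcome = "gave up"
```

The runtime simply runs out of stack and reports it; no contradiction materializes, which is the "bleeding off an antinomy into mere nontermination" being described.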
All of which is not unrelated to the fact that programs actually "do" something — they impinge on the real world — so that they can be useful to us without our being absolutely sure whether they'll halt, whereas the entire raison d'etre of mathematical proofs is contingent on their absolute certainty.
It's also too easy to claim that human beings use a broken logic; I submit that human beings in their native operational mode aren't using any of the usual systems of "formal logic". What they're actually using is much more like computation than like logic. Present a person with Russell's paradox, and they won't cease to be able to make any distinctions between things. Rather, they'll progress forward by a finite amount through the loop (probably just once all the way around), recognize that it's a loop, and abort the subprocess.
Curry-Howard is commonly billed as a correspondence between logic and computation, but it's really between two different systems of logic, with no computation involved.
I understand this perspective, I just don't happen to share it. ;-)
In my scheme of things, a logic needs a proof system and any non-trivial proof system is necessarily computational.
Proofs of intuitionistic logic are connected with types of typed language-calculus, not with reductions of typed lambda-calculus.
There are some subtleties glossed over here.
One is that it isn't a coincidence that well-typed terms in the simply typed lambda calculus are strongly normalizing: a non-terminating proof isn't really a proof.
For this reason, you can't really consider a term to be a proof unless its reduction has a normal form, and you don't have a reduction to normal form without computation.
but rather about a metaphysical difference between them
Clearly I accept a different metaphysics than you do. ;-)
Whereas I would say that proofs take place essentially outside of time, making them totally different from computation in its most fundamental characteristic. With which I'm fairly sure you would disagree. :-)
But I'm having a little trouble squaring this with your earlier remark that Curry-Howard pretty strongly equates the two (Gödel and Halting). If it's basic for you that logic necessarily entails computation, then wouldn't that settle the point? Curry-Howard wouldn't seem to have anything to do with it.
If it's basic for you that logic necessarily entails computation, then wouldn't that settle the point? Curry-Howard wouldn't seem to have anything to do with it.
That depends on whether you accept Gödel's own interpretation of what he proved or not (he was an overt Platonist).
My take is a lot of his proofs are essentially computational in content, even if that concept wasn't fully extant at the time he was doing them. Curry-Howard is definitely the idea that closes the circuit.
Eureka. Maybe. Not only are we attaching the word computation to different meanings, but I suspect our worldviews aren't even inconsistent with each other. The concept you've attached to the word is probably present, at least latently, in my metaphysics, and vice versa. With some careful thought, it should be possible to work out a bijective mapping between our views, and perhaps get fresh insights out of the bargain (for example, I still don't see why from your perspective there would be any gap left for Curry-Howard to close).
My instinct is to go away and meditate on how the bijection might work. (Which is, perhaps, just as well, since the connection to the topic of this thread — "where is the next order of magnitude coming from" — is by now quite tenuous.)
Just to be clear, the Curry-Howard correspondence says that: 1) Types correspond to propositions, 2) terms correspond to proofs, and 3) reductions correspond to proof transformations.
There's also a very deep — read, "slippery" — metaphysical difference between Gödel's Theorem and the Halting Problem.
...
The Curry-Howard interpretation equates the two pretty strongly.
I don't agree with everything that John (whom I owe a reply in the old fexpr discussion) says in this discussion, but I think I agree with him more than you, Marc.
To talk of equating these two means coming up with a type system that receives two distinct interpretations, one as a theory of propositions that corresponds to Gödel's theorem, the other as a theory of constructions that corresponds to the Halting problem. The set of ideas about types is rich enough to do this, but for the one to be a logic, the coding up in the type system has to give you, I think, what Yves Lafont calls an internal logic. Doing this in a satisfactory way is tricky, not least because there is an issue of taste involved, and also because what Gödel was doing was coding up logic in a logic-free arithmetic.
The best approach, I think, starts with looking at how category theorists have gone about coding up theories of arithmetic in toposes. It's not trivial.
To talk of equating these two means coming up with a type system that receives two distinct interpretations, one as a theory of propositions that corresponds to Gödel's theorem, the other as a theory of constructions that corresponds to the Halting problem.
This is probably way too big to explore here, but let me sketch my thinking about this.
For the purposes of this discussion, I will define language to mean an inductively defined syntactic system with finite basis, which is equipped with a truth-valued semantic interpretation.
In my interpretation, Gödel and Turing are both describing pretty much the same limitation on the power of languages, just applied to different languages (arithmetic, universal Turing machines).
This limitation can be summarized by saying that if the language is powerful enough to make interesting claims about itself, it can't have the ability to verify those claims in its semantics. Conversely, languages that can verify all their claims in their semantics have trivial semantics: every well-formed statement has the same truth value.
In this interpretation, you can see that Gödel and Turing start off very close together. So I'll just sketch how the (generalized) Curry-Howard interpretation closes the last gap for me.
One observation I need to make first is that, pace Derek Elkins and others, it is not sufficient to accept terms as proofs unless all terms are strongly normalizing. In a system where terms are not strongly normalizing the best a term can do is witness that "either this term is a proof or it is not normalizing".
One way of capturing this distinction is to say that there are actually two languages at work: the first has as its semantics a second whose evaluation gives the truth values. This scheme looks like this:
CH : Types -> Term Language -> Normalizing Terms
G  : Logical Formulae -> Arithmetic Propositions -> Decidable Arithmetic Proofs
T  : (Terminating | Non-Terminating) -> Turing Machines -> Halting TMs
Whew! That is all pretty compressed, but hopefully that gets across how I think about these things, and why I see a strong equivalence.
[edit : improved my mapping scheme]
(Spoiler alert: the differing points of view aren't yours vs. mine.)
Good sketch. There's a lot in there, minable with sufficient persistence. I believe we tend to use the word "proof" to refer to slightly different (though closely related) entities, but that seems like, at most, a minor symptom of conceptual disconnect, rather than a cause.
I'm going to take a shot at sketching my own thoughts. (I may botch the job, but writing the following has helped me bring these thoughts into sharper focus. The fact that my sketch isn't as short as yours, I'll ascribe to my subject being fuzzier. :-)
From the overall shape of your sketch — rather than from specific details in it — it seems that you are looking at both logic and computation from a logical point of view. If one is looking at both from that point of view, and one is interested in similarities between them, one might well choose to zoom in on Gödel's Theorem(s) and the Halting Problem. Oh yes, and Curry-Howard.
The opposite approach would be to look at both logic and computation from a computational point of view. I'll zoom in on Russell's paradox. The classical proof of Russell's paradox is short, and prominently invokes reductio ad absurdum and the Law of the Excluded Middle. If that proof is a computation, then it's a short one. There is no computational difficulty with it. The answer produced at the end of that short computation got some people pretty upset, but only, I suggest, because they were interpreting the answer from a logical point of view; computationally, it's just an answer.
Now, when referring to Russell's paradox before, I set up Lisp predicate

($define! A ($lambda (X) (not? (X X))))
and said that if you apply A to itself, "what happens isn't nearly as traumatic." But from a purely computational point of view, neither of these cases is traumatic, and the two cases aren't really similar, either. The first case was a very short terminating computation, and the second case is a non-terminating computation. So how are they similar? Well, they are both ways of sidling up to the scenario that Russell's paradox is about. If you define set A to contain all sets X that don't contain themselves, and then you reason about it by means of that short computation, you get this answer that's disturbing — because you're trying to get a general answer, and (as Gödel and Halting show in their closely related ways) general answers aren't trustworthy. If instead you define Lisp predicate A and then run it, any answers you get back won't be disturbing, and whether you get back an answer at all depends on the behavior of X. With the Lisp predicate you are taking inputs one-at-a-time — and thus confronting the Halting Problem rather than confronting Gödel's Theorem — so that you still don't have a general answer, but it doesn't upset you because you didn't expect a general answer. (Note: you are confronting the Halting Problem itself, not the theorem that says HP is undecidable.)
So the really important difference between the two cases isn't the fact that one of them is a short, terminating computation while the other is non-terminating. That's just how they contrast with each other if you look at both of them from a computational point of view; and the really important difference is that you don't look at both of them from that point of view: you look at the terminating one from a logical point of view, and the non-terminating one from a computational point of view. Because those were the two different points of view that caused you to look at those two different cases.
This is somewhere in the vicinity of what I meant when I spoke earlier of bleeding off some of our antinomies into mere non-termination. And Curry-Howard, though it can elucidate some connections between what different things look like within the logical point of view, simply can't touch the fundamental difference between the logical and computational points of view.
The classical proof of Russell's paradox is short, and prominently invokes reductio ad absurdum and the Law of the Excluded Middle. If that proof is a computation, then it's a short one.
The "proof" of Russell's paradox is really that there isn't a proof that resolves it. Any attempt at it loops: "if it's true then it's false then it's true then..." Gödel's proof and Turing's proof are of the same form.
The evaluation operation for all three proofs is different, and Russell has the luxury of relying on our ordinary linguistic interpretation as the "computation", but all three are examples of infinite loops, which are just non-terminating computations.
So I'm disinclined to make a distinction between the "logical" outlook and the "computational" one: in my interpretation they are just different expressions of the same phenomenon.
Russell's paradox is a proof of false in an inconsistent logic. Under CH it corresponds to terminating computation that constructs a value of any type. Essentially the logic allows for a cast from a boolean computation to a set, without regard for non-termination. This is unsound, but you only encounter looping if you look to a model of the logic.
Russell's paradox is a proof of false in an inconsistent logic...
This is unsound, but you only encounter looping if you look to a model of the logic.
Inconsistency is a relationship between theory and model (or syntax and semantics). You can't really consider it without looking at the model.
Under CH it corresponds to terminating computation that constructs a value of any type.
I'll accept that way of looking at it if we stipulate that the "computation" here is a trivial one that just magically gives you whatever you ask for without actually "constructing" anything.
Essentially the logic allows for a cast from a boolean computation to a set, without regard for non-termination.
From my point of view, this describes the whole of classical set theory. ;-)
Inconsistency is a relationship between theory and model (or syntax and semantics). You can't really consider it without looking at the model.
Well, "consistent" is usually defined as a property of a formal system without regard to any models it might have. For example, that there exists no proposition P such that "P" and "not P" can both be proven. A common way to prove consistency is to construct a non-trivial model, as an inconsistent system typically won't have one.
Yes, but I think you'll find the same situation if you look for the computational equivalent of (presumed) consistent logics such as ZFC. Of course, I'm not sure that doing so makes any more sense than looking for the logical equivalent of C++'s type system. This is the sense in which I agree with John Shutt - sometimes one end or the other of CH doesn't make a lot of sense.
Well, "consistent" is usually defined as a property of a formal system without regard to any models it might have. For example, that there exists no proposition P such that "P" and "not P" can both be proven.
Remember the context of the thread: assuming generalized CH and that we've selected a suitable language with evaluation as a model, then having a proof of P is the same thing as exhibiting an expression in the model language that corresponds to P and which evaluates to true.
You can interpret the generalized CH as a further restriction on what counts as an acceptable model of a logic and what counts as consistent.
Yes, but I think you'll find the same situation if you look for the computational equivalent of (presumed) consistent logics such as ZFC.
Yes and no. I would argue that most ZFC proofs are computational constructions in the sense that they rely on a finite number of axiomatic "atoms" and operations. The problem is that some of those atoms and operations, such as the powerset of ω, have no computational model.
Of course, I'm not sure that doing so makes any more sense than looking for the logical equivalent of C++'s type system.
I think it is more meaningful to do so than you are suggesting.
The logic for its type system is going to be broken in some way, since it admits non-terminating programs as witnesses, but it still manages to make some weak but interesting logical guarantees.
I'm not really sure I believe the abstractive strength argument, either. Type systems are really proof checkers more than proof creators, so I don't see why you couldn't encode the same proof in either style. Also not sure about the metaphysical differences between Gödel and the halting problem.
is your language something that there's information about on the web?
Nothing yet. I may make a not-horrible implementation available for review in the not too distant future, but the logic and proof system probably won't be included in that first release. Proving things formally is so impractical :).
it doesn't seem very far from rejecting the idea of dispatch on type, which is a bit worrisome
Unless you are imagining that rejecting dispatch on type means rejecting polymorphic dispatch, I am curious what is so worrisome about losing dispatch on nominative 'type' of expressions.
It is true that I am rejecting the use of dispatch on type, but I embrace several powerful forms of polymorphism and dynamic dispatch. OO, open functions, and DMP are examples.
type inference has always worried me
To be clear, I haven't been promoting type inference, but rather inference of type safety. The latter does not require assigning type-signatures to expressions. Inference of type-safety only requires determining that each operation expressed in a program is well defined in context.
Assignment of types to expressions is unnecessary because the types themselves aren't used to influence evaluation. For example, there is no need for monomorphism to select the correct type-class from which to draw a function.
If I specify a type signature for what I've written, then I'm declaring the circumstances under which I claim it will correctly do what it's supposed to. If I don't [...] I have no straightforward way to say that such-and-such will work correctly under a more limited set of circumstances
There are mechanisms other than type-signatures to express the limitations, constraints, assumptions, etc. of a sub-program.
For (pure) functions, one may express partial-functions. Rather conveniently, that's the only context they'll ever need to analyze; however, the ability to prove safety across partial functions is a challenge.
For logic-programming, one may express uniqueness (at most one result) and existence (at least one result) requirements in such a manner that a program is 'undefined' unless these conditions hold true. Proving expression that a result can be found, or that a result if found is unique, are extremely powerful primitives and cover almost any situation you might dream up. These are more flexible than partial-functions, since a partial-function is equivalent to always saying 'exactly one' result (both at least and at most).
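The uniqueness and existence requirements described above might be sketched dynamically like this (the function and flag names are hypothetical; a real system would discharge these checks statically rather than at run-time):

```python
# A query is treated as undefined unless its declared cardinality holds:
# 'at least one' is an existence requirement, 'at most one' is uniqueness.
def query(results, at_least_one=False, at_most_one=False):
    results = list(results)
    if at_least_one and len(results) == 0:
        raise LookupError("existence requirement violated: no result")
    if at_most_one and len(results) > 1:
        raise LookupError("uniqueness requirement violated: multiple results")
    return results

# 'Exactly one' (both flags) behaves like a partial-function application.
[x] = query([42], at_least_one=True, at_most_one=True)

try:
    query([1, 2], at_most_one=True)   # two candidate results
    unique = True
except LookupError:
    unique = False
```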
For sub-structural program requirements (such as linearity or encapsulation of a data-type), one may introduce new primitives. The use of E-language style sealer/unsealer pairs, for example, is useful for security and can easily introduce phantom-type analysis suitable to hiding an abstract data type even for dynamically created modules. (No actual encryption is required unless a sealed value is distributed temporarily to an untrusted host.)
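A minimal sketch of the sealer/unsealer idea using closures follows (the names are hypothetical, and Python's reflection means this illustrates the protocol only, not genuine encapsulation):

```python
# Each call mints an independent brand; only the matching unsealer from
# the same pair can recover a sealed value.
def make_sealer():
    brand = object()   # unforgeable token shared by this seal/unseal pair
    class Box:
        def __init__(self, value):
            self._brand, self._value = brand, value
    def seal(value):
        return Box(value)
    def unseal(box):
        if getattr(box, '_brand', None) is not brand:
            raise TypeError("sealed under a different brand")
        return box._value
    return seal, unseal

seal_a, unseal_a = make_sealer()
seal_b, unseal_b = make_sealer()

secret = seal_a(42)
opened = unseal_a(secret)   # the matching unsealer recovers the value
try:
    unseal_b(secret)        # a foreign unsealer is rejected
    crossed = True
except TypeError:
    crossed = False
```

A safety analysis that treats `unseal` on a mismatched brand as undefined is effectively introducing a phantom type per sealer/unsealer pair.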
The removal of 'manifest' typing does leave a vacuum in expressiveness, but that vacuum may be filled from the term language. By carefully shaping and extending what may be left 'undefined', one can shape and grow a very rich type-system without ever once manifesting type signatures.
Let propositions about programs be dynamic objects, and set up constructors for these objects corresponding to axioms, so that a proposition can't be constructed unless it's provable.
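That suggestion might be sketched as follows (all names here are hypothetical): the only way to obtain a proof object is through constructors that mirror the inference rules, so holding one is holding a proof.

```python
_rule_token = object()   # private; only the rules below may mint proofs

class Prop:
    def __init__(self, token, statement):
        if token is not _rule_token:
            raise ValueError("propositions are constructed only via the rules")
        self.statement = statement

def refl(x):
    """Axiom: x = x."""
    return Prop(_rule_token, ("eq", x, x))

def sym(p):
    """Rule: from a = b conclude b = a."""
    tag, a, b = p.statement
    assert tag == "eq"
    return Prop(_rule_token, ("eq", b, a))

proof = sym(refl(3))          # a proof object for 3 = 3

try:
    Prop(object(), ("eq", 1, 2))   # forging without the token fails
    forged = True
except ValueError:
    forged = False
```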
I don't really share your objection to static analysis, but the notion of expressing contract requirements for sub-program is one I believe worthy. That said, it is often too powerful - unless you are careful in limiting expression, it becomes very easy to express requirements that one cannot prove. Partial-functions and procedural contracts (pre-conditions, post-conditions, concurrency invariants, etc.) have the same problem.
That unobtanium ore is, indeed, quite shiny... but refining it takes a 'sufficiently advanced technology'.
That said, even very limited application can still improve program expression; a few 'guru' programmers will learn what their language can do for them, then write important libraries that depend on it. Further, the limitations on which programs can be proven may grow independently of the program expression, unlike typical type-systems.
I think one should favor specific approaches to specific expressiveness issues that repeat themselves often enough to demand an automated solution. Introducing new term-language primitives to express, say, abstract data encapsulation or a garbage-collection requirement, is not worse than introducing new type-language primitives to do the same in a system with manifest typing, and at the end of the day you are left with a language that requires no sufficiently-smart technologies to make it work.
[Edit: above heavily edited for clarity - now it's as clear as mud, and that's an improvement.]
Use of rich type systems to influence evaluation of terms is a symptom - an anti-pattern - not a solution!
This statement makes me think you completely misunderstood me. Evaluation order isn't important for strongly normalising terms. Rich types, as in Coq, aren't influencing evaluation. And calling it an anti-pattern is sort of dodging the argument. While type systems which influence evaluation are interesting, that isn't at all what I was talking about.
Programmers should not be required to use such indirect mechanisms that bifurcate expression of their programs into two languages (the type language and the term language).
I think it's a rather good idea.. We live with both languages until technology reaches the point that we don't need to specify the algorithmic content for the vast majority of jobs, just as before we had inlined assembly and C but now assembly has all but disappeared.
Partial evaluation is basically just proof normalisation. Having a type and program which are separate is not hindrance to this. I give an example in: Program Transformation, Circular Proofs and Constructive Types.
Type inference is much less useful in my opinion, than program inference. Type inference is undecidable for moderately complex type systems; you can't even infer types for something like System-F.
Type inference seems to me to be completely backwards. Why specify a proof but not describe what the proof is of. I'm much more interested in program inference than type inference.
While type systems which influence evaluation are interesting, that isn't at all what I was talking about.
Were you not referring to the type system as the 'theorem prover' when you said: "Powerful theorem provers will find the term automatically in some cases." How is using the type system as a theorem prover to find a term that will directly affect evaluation not using the type-system to influence evaluation?
"Powerful theorem provers will find the term automatically in some cases."
It was my impression that you were focused on achieving DMP of the terms via the type-language. If that was a misunderstanding, I'll need you to clarify how so.
Were you, by chance, attempting to say that a variation of DMP aimed at producing types or pure terms might also be useful? They call that 'constraint logic programming', IIRC.
I think it's a rather good idea [to bifurcate program expression into two languages]..
I do not believe this can be achieved by strong separation. You'll want to interweave the algorithmic content with the logical content, such that an algorithm can appeal to the logical constructs and the logical constructs can annotate or suggest algorithms that might work. The end result - reducing algorithmic content and gaining an order of magnitude reduction in how much of a program we write - can still be achieved without bifurcation.
Type inference seems to me to be completely backwards. Why specify a proof but not describe what the proof is of. I'm much more interested in program inference than type inference.
The two are not incompatible. Inference of types (or, more importantly, type-safety) proves a useful property about a program. Other useful properties include termination, real-time properties, implementation within a finite space, secure data-flows, and so on. But most of these require, first of all, proving that no 'undefined' operations will be performed - i.e. inference of type-safety.
A type-safety check is right up there with ensuring the program parses correctly. It's a very basic sanity check.
It's not influencing since it only gives a specification of what, not how.
But most of these require, first of all, proving that no 'undefined' operations will be performed - i.e. inference of type-safety.
A type-safety check is right up there with ensuring the program parses correctly. It's a very basic sanity check.
You are confusing type checking with type inference. Type checking is something you do to show that a term ascribed with a type is correct. Type inference is an attempt to ascribe a type which is type-checkable.
I think we want to be able to do both typeful and algorithmic work at this juncture. However, I think that:
Things that don't have influence on a system, by nature, can be removed from a system without any observable effects.
Therefore, either you can strip all manifest type annotations from the program's AST and have it perform the same observable behavior or the type system is influencing the evaluation.
If one is using annotations in the type-system to specify a term that is then discovered by a theorem-prover and used in an evaluation, that is influence from the type-system upon the evaluation. I don't see how you could claim otherwise.
It doesn't matter that the type-system's influence is via specifying "what, not how" then automatically deriving an expression for the "how".
If you want to specify "what, not how", you have a decision (as a language designer or framework developer) on whether to achieve this via extension of a manifest type-system (thus leveraging the light-weight theorem prover) versus extension to the term-language (thus requiring you implement a theorem prover as a library or part of the language run-time, possibly subject to partial evaluation).
You are confusing type checking with type inference.
You are confusing type-safety inference with type-inference.
Type-inference is an attempt to ascribe a type to a term-expression which makes the expression type-checkable in all contexts. Type-safety inference is an attempt to find a proof that the program evaluation will at no point attempt any undefined behaviors.
The most significant differences between the two: type-safety inference is free to invent whichever type-system it needs, so long as one can still prove the basic type-safety principles of progress and preservation. The 'rich type system' is itself inferred for type-safety inference based upon exactly what may be left 'undefined' in the language, and based upon exactly what is left 'undefined' in a particular program.
It may also be that a safe program which can't be proven safe today can be proven safe tomorrow, as the safety checker is enriched.
A lesser difference: type-safety inference is analyzing for undefined operations, not making undefined operations into defined ones. Thus, the types chosen do not influence program definition or evaluation. Thus, there is never a need for a mono-morphism limit, and there is no need for a 'compile-time' distinction in the language unless the term-language itself requires one (i.e. for staging).
I keep trying to make this distinction clear. Perhaps I need a name other than "type-safety inference" - something less likely to connote "type-inference" to everyone who first reads the phrase. The denotation, at least, should be clear enough, once you realize that 'type-safety' commonly refers to 'progress without encountering any undefined operations'.
For type-safety inference to be 'interesting' and 'useful', the language must allow one to leave stuff artfully undefined, thus requiring richer analysis. For example, one might support conversions based on labeled units, allowing one to convert from "meters" to "feet" but leaving off the ability to convert from "meters" to "decibels". If one is able to keep certain operations undefined, a type-system is implied and the programmer can allow it to catch many errors. If not - due to syntactic constraints, or due to an insufficiently powerful type-safety inference - then the programmer will need to utilize less-direct expression of intent to make up for the weaknesses in the type-safety inference, i.e. by creating separate functions and data-structures for different conversions and unit-classes..
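The units example might look like this as a dynamic sketch (the table and names are hypothetical); a type-safety inference would reject the undefined conversion before the program runs:

```python
# Only some conversions are defined; meters -> decibels is left
# 'artfully undefined' for the safety analysis to catch.
CONVERSIONS = {
    ("meters", "feet"): lambda x: x * 3.28084,
    ("feet", "meters"): lambda x: x / 3.28084,
}

def convert(value, src, dst):
    if src == dst:
        return value
    rule = CONVERSIONS.get((src, dst))
    if rule is None:
        raise TypeError(f"conversion {src} -> {dst} is undefined")
    return rule(value)

feet = convert(1.0, "meters", "feet")
try:
    convert(1.0, "meters", "decibels")   # no such rule exists
    nonsense = True
except TypeError:
    nonsense = False
```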
Why do you believe type-theory is a natural setting for this?
In particular, what advantages does using type-theory and type-system offer the programmer? Why can these advantages not be achieved by supporting logical content (search, theorem proving, etc.) in the term-language? Why are these advantages enough to outweigh the costs to expressiveness and dynamism that comes from supporting logical content via the term-language?
The reason I mentioned 'staging' and 'partial-evaluation' earlier is that the only advantage I see from pushing this theorem-proving into a rich type-system is that the proof becomes 'static' like the rest of the type-system. (I'm assuming you aren't speaking of 'dynamic' type systems.) And, yet, I see the predictability and performance advantages of static construction also to be an enormous disadvantage for flexibility. Further, the same performance and predictability benefits could be achieved to a desirable degree by alternative mechanisms without the loss in flexibility - those being staging or partial-evaluation with the theorem-prover being part of the term-language.
You say that the type theory is the natural place to support these features, but it is my impression (wrong or not) that you're simply not considering alternatives - i.e. that you see the type-system as the only place that a theorem-prover and "logical content of our programs" even can (or should) be included. I remain curious as to why you make this assumption.
Things that don't have influence on a system, by nature, can be removed from a system without any observable effects.
Even if types don't affect any given observation, they strongly affect the set of possible observations. If you remove types, program contexts will be able to observe behaviours they could not observe (at all) before.
In other words, what you say is only true for closed programs. The real benefit of types however is with composing open programs (i.e., modularity). And there their ability to restrict possible observations is what provides encapsulation, representation independence and such. This is much more than just "type safety".
In particular, what advantages does using type-theory and type-system offer the programmer? Why can these advantages not be achieved by supporting logical content (search, theorem proving, etc.) in the term-language?
There are lots of interesting – typically global – properties that you can express, enforce or encode statically in a type system but not dynamically, e.g. freshness, linearity, deadlock-freedom, etc.
Also, in more powerful type systems, term and type level tend to become intertwined in a way that closely resembles what you seem to have in mind. Once "type" is a type (apply necessary restrictions here), types are essentially “part of” the term language.
what you say is only true for closed programs. The real benefit of types however is with composing open programs (i.e., modularity). And there their ability to restrict possible observations is what provides encapsulation, representation independence and such. This is much more than just "type safety".
For object interfaces, I'll grant that you should filter illegal inputs from external resources in order to achieve the same safety properties you inferred from the exposed elements of the sub-program object configuration - i.e. a gatekeeper pattern on exports (which might still be reduced via link-time or JIT partial evaluations).
But there are approaches to open program composition that do not involve the 'traditional' forms of separate compilation, and that cannot readily leverage types in this role in any case - i.e. especially in open, distributed programming, which is my interest area. In these cases, objects in the final program may be produced by different code-bases in different security and trust domains. One must ultimately 'protect' programs by secure design patterns (such as object capability patterns written about at erights.org) rather than by types.
For ADT-style encapsulation, I'm considering use of an E-language style sealer/unsealer pair. If the use of the unsealer on a sealed value is the only well-defined operation, then type-safety inference must introduce phantom-types for each sealer/unsealer created. This will protect the closed program, but also can protect the open, distributed program - even with all its serialization and potential for malign introductions - because encryption is automatically added to the sealed values after they cross to an untrusted host. This approach is suitable for first-class modules.
There are lots of interesting – typically global – properties that you can express, enforce or encode statically in a type system but not dynamically, e.g. freshness, linearity, deadlock-freedom, etc.

I assert that, were you to strip away the type-system such that programmers interested in these properties were forced to express them elsewhere, they could still find ways to enhance the term-language and express these properties there, instead. Further, they could still push their proof into the static type-safety analysis - without introducing any manifest types.
One can handle linearity in a data flow, for example, by introducing a 'use-then-destroy' primitive that is 'undefined' unless one can prove it's the only reference to the value.
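A dynamic sketch of such a 'use-then-destroy' primitive (the class name Linear is hypothetical); a static type-safety analysis would instead prove single use before evaluation rather than checking it at run-time:

```python
# A Linear cell may be consumed exactly once; a second use is undefined.
class Linear:
    def __init__(self, value):
        self._value, self._consumed = value, False
    def use_then_destroy(self, f):
        if self._consumed:
            raise RuntimeError("linear value consumed twice")
        self._consumed = True
        value, self._value = self._value, None   # drop the only reference
        return f(value)

cell = Linear([1, 2, 3])
total = cell.use_then_destroy(sum)
try:
    cell.use_then_destroy(sum)   # second use is rejected
    reused = True
except RuntimeError:
    reused = False
```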
Some properties, such as deadlock-freedom, would be better proven even if not expressed in the program. I considered that particular one important enough to prove it at the language layer - i.e. it is impossible to express a program that will deadlock (except between a pair of pessimistic transactions, where progress can safely be achieved).
in more powerful type systems, term and type level tend to become intertwined in a way that closely resembles what you seem to have in mind
With this, I agree. Dependent and constraint types weave the term language into the type language. The reverse weave can occur if values may carry types for use in evaluation and simultaneously provide a constraint... i.e. the tuple (A T) constrained by the type relationship 'A is an instance of T'. Once both weaves are introduced, the two will grow to become nigh inextricable due to backwards compatibility issues.
I'd need to look up my notes of years ago to recall in detail why I rejected it, but the basics come down to: I could not see enough value in it, I could see much undesirable and unnecessary programmer burden [especially in distributing type information across security domains], and inferred types exposed too much information about implementation details [to remote, untrusted hosts] that should not be exposed or coupled against, and it also limited how rich the safety analysis could grow [at an independent pace from the language].
I also rejected all other forms of reflection. I've never found a reason to regret doing so - at least no reason that would survive an open distributed system.
[...] cannot readily leverage types in this role in any case - i.e. especially in open, distributed programming, which is my interest area.
Well, this happens to have been my interest area as well, and I think types can work smoothly there. At least smoother than value-level sealing..
I assert that, were you to strip away the type-system such that programmers interested in these properties were forced to express them elsewhere, they could still find ways to enhance the term-language and express these properties there, instead.
I very much doubt that. Note again that these are global properties. How would you do global reasoning with a (necessarily local) term-level, user-defined logic only, and without the language infrastructure (compiler, linker, loader) providing support?
Also, I find it strange to claim that programmers can easily come up with encodings that researchers allegedly took ages to develop.
I could not see enough value in it, I could see much undesirable and unnecessary programmer burden, it exposed too much information about implementation details that should not be exposed or coupled against, and it also limited how rich the safety analysis could grow.
Programmer burden, yes, that's the usual trade-off. Exposure of implementation details? Not if done right, and if the type system benefits from the same abstraction mechanisms as the rest of the language. Limits? Well sure, as always, but how can a user logic, that knows less, possibly be less limited?
I think types can work smoothly [in open, distributed systems programming]. At least smoother than value-level sealing.
I am curious as to how you imagine this the case. Open, distributed systems programming is, first of all, open. That means anybody can connect their program to yours, should your program not be a closed system. Value-level sealing is a secure design for ADT encapsulation that will work even across hosts, yet requires no encryption within a trusted domain (indeed, if one can prove a computation is local, one may eliminate the operation reducing it to a type-safety analysis utilizing phantom types).
Note again that these are global properties. How would you do global reasoning with a (necessarily local) term-level, user-defined logic only, and without the language infrastructure (compiler, linker, loader) providing support?
I've never said anything against "global" reasoning about the program (understanding "global" to be a sub-program of arbitrary size within a trusted security domain).
Type-safety inference certainly is global reasoning. It simply doesn't require a type-system (i.e. it can invent one on its own, without your help). [or receive one externally. Related: Pluggable, Optional Type Systems.]
The term-language itself may have global elements. This is not uncommon in logic programming languages or rule-based programming, where each proposition and entailment rule - possibly spread across several files - combine to form a whole rule. Any case where the clients of an imported code unit can mutually provide code that influences all other clients of the same code unit is an example of global properties coming out of the term-language.
I find it strange to claim that programmers can easily come up with encodings that researchers allegedly took ages to develop.
I'm not aiming to insult the value or design effort that went into these extensions, but rather to assert that the type-system is not the only reasonable place to express them.
I'll amend my earlier statement: the researchers that developed and expressed useful extensions to the type-system could, in the absence of a type-system, find intelligent and creative ways to develop and express near-equivalent 'useful' extensions within the term-language.
Value-level sealing is a secure design for ADT encapsulation that will work even across hosts, yet requires no encryption within a trusted host
The same can be true for type abstraction. To clarify: I'm talking about what is visible as a language feature. If security is an issue then yes, you may want to encrypt values. But that can simply be an implementation detail of type abstraction, which may be more convenient for the programmer.
Type-safety inference certainly is global reasoning. It simply doesn't require a type-system (i.e. it can invent one on its own, without your help).
Decidability issues aside, this again is only true for closed programs. Once you have modularity and separate compilation, the programmer needs to specify something at the interface boundaries to make these checks possible.
I'll amend my earlier statement: the researchers that developed and expressed useful extensions to the type-system are intelligent and creative and, in the absence of a type-system, would have found ways to develop and express near-equivalent 'useful' extensions within the term-language.
As far as I can see, if that works at all (of which I remain unconvinced until you can actually demonstrate it), then it essentially amounts to greenspunning a type system.
If security is an issue then yes, you may want to encrypt values. But that can simply be an implementation detail of type abstraction, which may be more convenient for the programmer.
I agree that it can be done. I do not agree that this is 'more convenient for the programmer', and especially not for the programmers sitting in the remote security domains who must interface with a subset of instantiated components from your program.
In open distributed systems, most forms of "separate compilation" also essentially involve "separate code-bases". There is little or no need for source-level separate compilation, other than perhaps a little advanced caching for compile-time performance, since one need not distribute 'library' binaries. One can simply register resources (objects, data and event sources) to a common factory or publish/subscribe system, if one wishes, then let any automatic code-distribution be controlled by a combination of secrecy constraints, redundancy requirements, disruption tolerance requirements, and performance demands specified or inferred from both the subprogram and runtime profiling.
Obtaining, maintaining, trusting, and integrating documents that declare types for interfacing with remote components is not fun or easy. Dealing with different instances of abstract types, and especially communicating that two different (derived, widely distributed) capabilities might require or provide the same instance of an abstract type, is especially challenging.
You like open distributed systems programming. Start playing around with facet pattern on first-class ADT modules...
Certainly, the term-language needs to make it clear where the interface boundaries are, and what is well-defined at those boundaries. However, that does not require a type-system, but rather that the evaluation semantics at the relevant edges reject ill-defined inputs in a predictable (and ideally atomic) manner. Implicit type-based guards are just one way to achieve this..
And decidability is an issue regardless. If you attempt to define a rich type-system, you must always balance added features against risk to decidability. One can easily make type-safety analysis decidable, but achieving this requires the language be somewhat less flexible (i.e. rejecting support for arbitrary partial-functions) or will at least constrain which programs can be analyzed. This problem is no different for the presence or absence of a manifest type system!
But, in case of indecision, one can intelligently reject or accept programs that one cannot decide based upon the reasons for indecision and which sorts of analysis failures can be safely tolerated (and, due to support for disruption tolerance in any distributed system, quite a bit can be tolerated...).
it essentially amounts to greenspunning a type system
No, greenspunning a type-system would involve creating a type-language and evaluator atop term-expressions that one could then analyze for types prior to lifting into the program... ideally removing the tags on the way. I have no problem supporting this, i.e. for DSLs and language exploration. But it still encounters the same challenges in open distributed programming systems of distributing type information to developers in untrusted security domains.
The ability to leave stuff artfully undefined may imply that a type-system of a certain degree of minimal complexity is required for type-safety analysis, but does not require this type-system be specified or tied to the language in any permanent fashion.
And type-safety inference is certainly possible, related to structural type-inference. The trivial case is very easy. Making the system powerful and useful, however, requires some careful design efforts.
For example, in my own language I must structurally (and syntactically) distinguish codata from data. Corecursive codata always exists inside a record or tuple with a special '~' label, and special function rules limit recursive processing on data so structured. Were I to support nominative typing (which requires manifest typing) I would be able to avoid that structural distinction... but then I'd have a challenge in how to distribute (and maintain) my type-definitions to developers in remote security domains.
The decision on how to define argumented procedures was also a result of this design effort, and I'm quite happy with the resulting model.
So even before the procedure is invoked, you are able to analyse the arguments and set up the procedure call accordingly. That is a form of pre-condition, isn't it?
If the application was a shopping cart, the procedure-generating function would check the card details are valid before redirecting to the Confirm Payment page (procedure). I can see this pattern is very useful in distributed applications.
The use of types is typically to answer the questions of what? and where?. Thus if instead of predefined data structures you were to use typed values, the target should have the means to explore the type of a passed value by dynamically finding the layout and location of named fields in the passed value. That leaves the type itself as a predefined data structure, comprised of <name, layout, location> tuples. The added flexibility would make the protocol more chatty but it was done before.
In open distributed systems, most forms of "separate compilation" also essentially involve "separate code-bases". There is little or no need for source-level separate compilation
Not sure what you mean by "source-level separate compilation". Whether "linking" is static, as in traditional settings, or via dynamic import, as in open systems, does not matter. You are still employing a form of separate compilation. And separate compilation means separate type checking. My question remains: how can you check type safety separately if the programmer does not specify any types at the interface boundaries?
Sounds like you have reinvented monadic encapsulation of effects. That's good, but how does that relate to anything?
And decidability is an issue regardless. If you attempt to define a rich type-system, you must always balance added features against risk to decidability.
Maybe you are not aware that type inference is a harder problem than type checking? You run into undecidability much quicker. Most advanced typing constructs (e.g. higher forms of polymorphism) cannot be inferred. If you insist on inferring types then you are doomed to cripple the expressiveness of your language.
No, greenspunning a type-system would involve creating a type-language and evaluator atop term-expressions that one could then analyze for types prior to lifting into the program...
That's not the only way to greenspun something. But anyway, I won't try to argue on this anymore, since what you are describing is far too handwavy and hypothetical to get a hold on. Show me a concrete solution and the discussion may get somewhere.
My question remains: how can you check type safety separately if the programmer does not specify any types at the interface boundaries?
To be clear: type-safety is very often not defined in terms of types. I'm accepting type-safety to mean: "progress without any undefined behavior", which is actually broader than most definitions.
Thus, to perform a check of type-safety does not require knowing or specifying "types" at an interface boundary. What it does require is knowing that the behavior of a sub-program is well-defined for any inputs that cross this boundary.
Dynamic type-safety is achieved by the rather expedient means of simply ensuring all operations are well-defined. If a behavior can't be sensibly made well-defined, one well-defines that behavior as throwing an exception. This is trivial, and boring, but it can certainly be done without specifying any types.
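A minimal sketch of this idea in Python (the function name and the specific checks are illustrative, not from the discussion): every ill-defined case is explicitly defined to raise, so the operation becomes well-defined over all inputs without any declared types.

```python
# Hypothetical sketch of dynamic type-safety: behaviors that cannot be
# sensibly made well-defined are well-defined as throwing an exception.

def safe_divide(a, b):
    """Division made total: ill-defined inputs raise predictably."""
    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
        raise TypeError("safe_divide is defined only for numbers")
    if b == 0:
        raise ZeroDivisionError("division by zero is defined to raise")
    return a / b

print(safe_divide(10, 4))       # well-defined case
try:
    safe_divide("x", 2)         # ill-defined case, rejected at runtime
except TypeError as e:
    print("rejected:", e)
```

This is the "trivial, and boring" route: no types are specified anywhere, yet no input can provoke undefined behavior.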
Static type-safety is achieved by rejecting (prior to processing inputs) any program compositions that might lead to an undefined behavior. Due to a variety of well-known logical principles, one ends up accepting a subset of all safe programs.
In between the two is soft-typing where one performs a great deal of static typing on the inside but ensures that hooks to the untrusted external systems emit the necessary extra code for run-time gatekeeper checks. This is typical to any open system where untrusted code will be involved.
Despite your assumptions, none of these safety analyses require programmers to emit any sort of manifest specification of 'type'. Nor do they require unique types be assigned to functions and expressions.
Not sure what you mean by "source-level separate compilation". Whether "linking" is static, as in traditional settings, or via dynamic import, as in open systems, does not matter.
By "source-level separate compilation" I mean programming against a document that describes an API with the expectation that the implementation-code - in a "separate" compiled blob - will be "linked" (within a process memory space) to produce a program. Both of the cases you describe (static and dynamic loading of libraries) are forms of source-level separate compilation.
It is useful to distinguish source-level separate compilation from the vast multitude of other forms of separate compilation. For example, my Firefox browser likely works with your Web server despite the fact that they were clearly compiled separately. Pipes-and-filters and publish-subscribe systems (including CEP and such) offer other mechanisms to flexibly support separate-compilation.
These approaches still involve IPC-level "linking" to produce the final program. Indeed, a program like bash must perform some non-trivial pipe hookups to produce a useful program, and a publish-subscribe system often favors a registration process to ensure a more demand-driven design with support for selecting the highest quality resource for data.
Usefully, for a number of reasons (performance, disruption tolerance, resilience), these latter forms of linking may also involve some mobile and untrusted code, similar to delivery of JavaScript or an Applet to my browser. Ideally, this untrusted code runs with the authority invested in it by the remote service, and cannot obtain any authorities by running locally that it could not also have obtained while running remotely. This 'ideal' situation can be achieved by adherence to object-capability principles, and newer languages (like Caja) sometimes aim to achieve them.
Anyhow, one might assert that there is still something akin to 'types' involved in the structure of messages being passed between these systems, along with the protocols. However, it is not generally a place where language-layer "types" provide much help, largely because - even with types - you cannot trust the untrusted code to hold up its end of the bargain. You're forced to check or filter your inputs. And your ability to achieve even that much is extremely limited; e.g. you can't ensure that a reference in a distributed system (a URI) will obey the expected protocol that you might express in a language layer type.
Thus, as I said many posts above, types don't help all that much for safety analysis in open distributed systems programming.
It is my desire to support 'open distributed systems programming' via supporting these sorts of alternative styles of separate compilation and mobile code from within a single language.
I'm going to say I was "inspired" by monadic encapsulation of effects; even if it were true, I'd hate to admit to something so simultaneously creative and wasteful as 'reinvention'. ;)
As to how it relates: If you recall, I asserted earlier that one does not need to know 'types', and that it is only necessary one reject inputs that would cause ill-defined operations. Further, I said that this rejection is ideally atomic - i.e. the input is rejected either all up front or not at all.
The advantage of the monad-style encapsulation of side-effects is that one can achieve the atomic safety... i.e. if an input doesn't fit the mold, it will properly get rejected before it can cause any damage.
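As an illustrative sketch of that atomicity (the `run_atomically` helper and the order-validation example are hypothetical, not from any actual system), the entire input is validated up front, and effectful steps run only after validation succeeds:

```python
# Hypothetical sketch of atomic input rejection: a bad input is
# rejected entirely before any effect occurs, so it cannot cause
# partial damage.

def run_atomically(validate, steps, payload):
    validate(payload)               # may raise; no effects yet
    log = []
    for step in steps:
        log.append(step(payload))   # effects only after full validation
    return log

def validate_order(order):
    if "item" not in order or "qty" not in order:
        raise ValueError("malformed order")
    if order["qty"] <= 0:
        raise ValueError("quantity must be positive")

steps = [lambda o: f"reserved {o['qty']} x {o['item']}",
         lambda o: f"charged for {o['item']}"]

print(run_atomically(validate_order, steps, {"item": "book", "qty": 2}))
```

If the payload doesn't fit the mold, `validate` raises before the first step runs: rejection is all-up-front or not at all.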
Maybe you are not aware that type inference is a harder problem than type checking? You run into undecidability much quicker.
Type-inference is a harder problem than type-safety inference. Type-inference - especially in a language with type-based dispatch and similar features where the type-annotations influence program evaluation - must find a unique type-expression for each term-expression.
Type-safety inference is under no such constraints, and thus may be subject to more flexible proofs. This is further aided by ensuring a fair subset of the language is guaranteed to terminate except under certain well-known conditions (functions always terminate, and procedures terminate if they can be assured of receiving each reply). I typically don't start running into decidability issues until I introduce partial-functions, which is a huge can of worms (eqv. to both dependent and predicate types).
Most advanced typing constructs (e.g. higher forms of polymorphism) cannot be inferred. If you insist on inferring types then you are doomed to cripple the expressiveness of your language.
Polymorphism is not a typing construct; or, more accurately, typing is far from the only way to achieve it. Indeed, single-dispatch polymorphism is readily achieved simply by specifying objects or procedures as first-class entities. Multiple-dispatch polymorphism requires some sort of mechanism for open value definitions, but may also be achieved in the term language.
[This is] far too handwavy and hypothetical to get a hold on. Show me a concrete solution and the discussion may get somewhere.
I endeavor to do so, but I suspect it will be at least two more years before I have an implementation I'm willing to announce on LtU.
I endeavor to do so, but I suspect it will be at least two more years before I have an implementation I'm willing to announce on LtU.
Even a few lines of code (or even pseudo-code) might go a long way to clarifying what you mean.
Everything I've seen you suggest so far sounds like you have a type system, you just aren't making it explicit.
If all terms can be partitioned into groupings and you have no term building rules that range over all terms, that sounds to me like you have a de facto type system, just a very inexpressive one.
As I've said, what the language and program leave undefined tends to imply that a type-system of some minimal complexity is necessary to analyze it for safety. I.e. you could not analyze sealers/unsealer safety without phantom types or something of similar power.
But I refuse to say that I 'have' a type-system. There may be many type-systems of sufficient complexity to perform an analysis for a particular program, and I'm not making a commitment to any of them. I believe that commitment is a mistake, since it prevents the safety analysis from growing independently of the language.
I imagine a world where there are several type-systems that a safety-analysis utility might use to attempt to prove safety on sub-programs - similar to proof strategies in Coq. Depending on the structure of a particular sub-program, some of these may be more successful than others. This is inherently more flexible than being forced to infer a type from a particular type-system - one chosen in advance of knowing the sub-program - in order to prove (or achieve, in the case of typeful programming) safety.
This may be the case, but I have not yet managed to think of any examples that are simultaneously simple enough to express in a few lines of code and yet complex enough to distinguish from type-inference in 'a particular' type-system - much less motivate the distinction.
It might help, though, to say that - rather than defining structurally recursive types and inferring a fold-function, one might define a fold-function and thereby infer a structurally recursive type. That is, function determines form.
Trivially, if a record only has labels foo and bar, I cannot ask for baz. It's often quite easy to prove that a record input to a function lacks a 'baz' element, especially if the records are immutable (restricting mutability helps type-systems too).
Less trivially, if I could write a function over two inputs then used the first list to index the second, I could infer some fairly rich safety requirements like 'list of indices' and 'list of least length'.
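A hedged sketch of that example (the names are illustrative): a function that uses its first argument to index its second. A static analysis would aim to establish the 'list of valid indices' requirement before execution; here the same requirement is simply checked dynamically to make it concrete.

```python
# Hypothetical sketch: the inferred safety requirement is that every
# element of `indices` is a valid index into `values`.

def select(indices, values):
    if any(not isinstance(i, int) or i < 0 or i >= len(values)
           for i in indices):
        raise IndexError("first list must contain valid indices "
                         "into the second list")
    return [values[i] for i in indices]

print(select([2, 0], ["a", "b", "c"]))
```

The point is that the requirement falls out of how the function uses its inputs, not out of any manifest type annotation.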
One of my own motivating examples is inferring proper safety of matrix adds and multiplies where a "matrix" is not a first-class structure (akin to how sets, labeled records, and coinductive-records are first-class structures), but rather is a list of lists whose greater structure is implied by the partial-functions one attempts to apply.
In some cases, it may be reasonable to support a single function in matching arbitrary structures and producing arbitrary structures (i.e. accepting a mix of natural numbers and strings, then producing a mix of functions, lists, sets, records depending on the input), but such a program would almost certainly be far outside what the type-safety analysis tools provided with the language can prove safe, which would almost certainly condemn that program... at least for now. Fortunately for language developers, such a program is also fairly distant from anything the programmers are likely to demand... at least initially.
I don't believe it fair to characterize this approach as 'inexpressive', or even to say that a specific type-system is implied. It would be fair to say that there is a certain minimum set of features any analysis would need to support - derived from both the language and how programs are commonly structured.
I agree, however, that there almost certainly will be a "de-facto" type-system. Like most de-facto things, this 'de-facto' type-system is based on popularity, and will depend on what safety-analysis tools are shipped with the most popular IDEs.
Type-inference is a harder problem than type-safety inference.
This may be the crux. I don't understand how you can claim that. Both are equivalent, the only difference is whether you throw away the result in the end.
Just to be sure, type inference does not require unique types. However, to be algorithmically tractable and compositional, you will typically want principal types, or at least something close. Otherwise your analysis will quickly explode. And this is the case no matter whether you are going to output the types in the end or not.
The two are not equivalent, but I wonder if there's a difference in assumptions.
With type-inference you are forced to pick a type-system before seeing the program. Then, upon seeing the program, you are required to "decide" on type-expressions from that type-system that will allow you to prove safety.
With a type-safety proof as a more general goal, you are not required to make any decision on type-system or logic in advance of seeing the program.
Only if it were the case that all decidable logics could prove the same set of propositions, would the above two approaches be equivalent.
Just to be sure, type inference does not require unique types.
True. Unique assignment of types is only needed if you're using certain forms of typeful programming, where program behavior depends on which types are ascribed to the term expressions.
to be algorithmically tractable and compositional, you will typically want principal types, or at least something close
Principal types don't much help in open distributed systems. If you can trust what the other guy tells you his outputs will be, and he can trust you in turn, then by nature you've got a closed system. Open distributed systems programming means that the interactions cross trust and security domains.
Besides, those 'closed-system' analysis techniques can be especially useful with automatic distribution. Distribution adds nothing fundamental to the safety analysis, but it allows one to increase the useful size of a 'sub-program' safety-checked, and thus improves the tooth-to-tail ratio of the closed-system safety analysis techniques. For hosting code, security matters more than safety, and certain security approaches (such as object-capability) depend very little on safety.
For safe program composition, principal type isn't so critical. It is not usually the case that we manage very rich types at the edges of a program anyway, and to whatever degree we can reduce those 'edges' (by spreading the gooey middle across multiple runtimes via automatic distribution) the need for 'rich types at the edges' may be further diminished.
As far as tractability: if a type-system wishes to simplify and make the analysis algorithmically tractable at the expense of proving fewer programs, that is an acceptable trade-off - but not a decision good for all programs.
With a type-safety proof as a more general goal, you are not required to make any decision on type-system or logic in advance of seeing the program.
Of course you are. You have to implement it in advance. What you are saying is that you can hack up one ad-hoc hybrid type system that has rules to switch between different kinds of "modes". That does not make it a different problem, though. It solves none of the issues I was mentioning.
Also, I think this idea of just cooking up arbitrary type systems is fundamentally broken in terms of usability. How is the user supposed to understand type errors?
Principal types don't much help in open distributed systems.
Sorry, you lost me. What have principal types to do with open vs closed? The types at a boundary may be less descriptive in the open case, because you know less, but that has nothing to do with principality.
It is not usually the case that we manage very rich types at the edges of a program anyway
That, I believe, is a fundamentally wrong assumption. Rich types are most likely to pop up at the boundaries of libraries, components and such, because you want to make them maximally reusable, and thus maximally generic.
You have to implement it in advance. What you are saying is that you can hack up one ad-hoc hybrid type system that has rules to switch between different kinds of "modes".
What I am saying is that you can provide many logics and strategies for proving safety. This is not "one" ad-hoc type-system. Any one ad-hoc type-system will need to be internally consistent, sound, demonstrating progress and preservation.
As to whether you 'implement it in advance', I agree that it is likely the case that the type-system used by most programs is implemented in advance. Programmers will have a decision whether they will favor adjusting their program to be provable by the current set of strategies, vs. locating or creating new proof strategies that will better be able to prove their program. Most will choose to adjust the program.
That does not make it a different problem, though.
I am confident it does. [See discussion on pluggable and optional type systems.]
this idea of just cooking up arbitrary type systems is fundamentally broken in terms of usability. How is the user supposed to understand type errors?
The user doesn't need to understand a type-error. What the user needs to understand is a safety error. This means that the prover must provide a useful explanation of why an expression is unsafe in a given context (i.e. "could not prove that the Unit argument will be matched at this point in code"). This isn't much different than Lint and other tools needing to provide explanations for emitted 'problems'.
Besides, I think you're a bit optimistic if you feel most users understand the oft arcane errors emitted from a C++ or Haskell compiler, even today.
What have principal types to do with open vs closed? The types at a boundary may be less descriptive in the open case, because you know less, but that has nothing to do with principality.
I'll need to disagree: the need for principality has everything to do with lacking knowledge about the context in which an expression is used.
If you know each context in which a given sub-expression will see use, then you do not need the principal type for the sub-expression because you may type it in context. There may be cases where a type you discover for the sub-expression happens to be the same as you'd discover if seeking a principal type, but - in principle - you do not need the principality.
Rich types are most likely to pop up at the boundaries of libraries, components and such, because you want to make them maximally reusable, and thus maximally generic.
There are approaches to "components" - even reusable and generic ones - that do not involve source-level separate compilation of "libraries". These alternative approaches also work - with a few simple design-patterns quite similar to what you'd see for libraries-as-plugins - for hot-swappable and upgradeable systems and for distributed systems.
I consider the common separately-compiled 'libraries' to be a harmful language feature - in part because it hinders optimizations (partial-evaluations, inlining, dead-code elimination), and in part because of security concerns (initial confinement, safety, authority), and in part because of other engineering concerns (such as developer control over sharing, persistence, instancing).
Thus, supposing you ignore "libraries" as a potential component-boundary (since I refuse to acknowledge libraries-as-components [for separate compilation] as a good thing for future language designs) I am curious if you can find a case where 'rich-types' will be heavily used across component boundaries. If it isn't clear to you, arbitrary "non-library" components can be found by examining publish-subscribe architectures, plugin architectures, complex event processing, Unix-style pipes-and-filters, blackboard metaphor, etc. And of these, in context, we're discussing open-systems variations (i.e. the third-party black-box or multi-process designs, though these architectures may also be expressed using objects within a program).
So... let's consider some rich types: linearity and exclusive transfer, region inference, dependent typing, uniqueness or existence constraints for pattern-matches inside a set, etc.
I can easily see how you might use these types to help 'document' an intended interface, but there are many ways to document interfaces. The relevant question is: How, and where, do we use these rich types across untrusted open-systems component boundaries to prove safety?
I don't really understand this perspective. Separate compilation needn't hinder optimization or any of the other engineering concerns (the output could, after all, just be type-checked source code). Deferring any static analysis until all libraries have been assembled into a closed component seems like a big wasted opportunity.
I had mentioned how I interpret 'separately compiled library' in an earlier post:
By "source-level separate compilation" I mean programming against a document that describes an API with the expectation that the implementation-code - in a "separate" compiled blob - will be "linked" to produce a program. Both of the cases you describe (static and dynamic loading of libraries) are forms of source-level separate compilation.
It seems you're taking a much broader (and largely theoretical) view of what 'separately compiled library' may mean. The practice (shared objects, archives, dynamic-link libraries, etc.) is that of distributing and maintaining near-opaque, lossy blobs along with an API document. And I dislike this practice for the many reasons listed above.
I would quite gladly support such things as caching pre-parsed source-code subject to annotations to enable rapid optimizations and safety proofs, and such. That is not the same as 'separately compiled libraries as software components'; rather, it would simply be an optimization in the IDE. The source code from which the cached forms derive would still be available.
Static analysis of a library prior to its deployment isn't an optimization - it's a feature to help library authors. A type system gives a way to state and prove properties of your library in isolation from its eventual use context.
Static analysis is useful, I agree. However, there are many approaches to supporting library developers that utilize static analysis and that require no type system.
Further, bothering to cache or persist any output from a static analysis is never anything but an optimization. After all, the not-optimized approach is to re-execute the analysis every single time the developer needs it.
I've been trying to explain in prior posts that there are many reasons that libraries should not be used as software components - as something that might be "deployed" then somehow "referenced" as part of some software configuration. The argument that type-systems might help achieve safer libraries in a 'deployment' scenario sounds to me a bit like arguing that adding a laser-level makes a shovel safer for use in surgery. That is, I'll grant it could help, but it isn't exactly motivating me to invest in a type-system (or laser-level).
Static analysis ... require(s) no type system.
I agree.
libraries should not be ... "deployed"
You've lost me here. When a new version of a container library is published (why not "deployed"?) types are good at documenting properties of its API and mechanically verifying them.
Library binaries should not be "published" or "deployed" or any other word you want to use for the act of treating a library as an independent deliverable between administrative (security or trust) domains. There are other architectures for organizing software into components, and libraries as software components rarely compare favorably.
To be clear: the primary issue with libraries-as-deployables isn't "safety". At least two major problems with libraries and the systems that use them involve their use of ambient authority (i.e. a library is a repository of untrusted foreign code, but is run with more authority than the provider could have obtained by running the same code remotely), and their ad-hoc instancing semantics (esp. with "global" state, whatever that means) and the associated sharing/distribution/persistence issues.
Making libraries slightly more type-safe is not going to fix them.
If you're stuck with libraries, I can appreciate the goal to make them as safe to use as possible - especially since you need all that safety to avoid shooting yourself in the foot with that excessive ambient authority you're given - but we should be looking for ways to be rid of applications and libraries as we write them today, entirely... especially in a discussion like the one in the OP.
And where will that be shown? Either you don't, or you pick a single, unifying metalogic/type system, no?
Presumably showing such things is left as an exercise for the user.
Showing the soundness properties of a given type-system is pretty much optional; that is, a developer of a type-system need not prove the type-system from within the language. (If they needed to do so, they'd be at an impasse if they used an alternative logic or had to introduce new axioms, since they likely couldn't prove the logic.) One may - for pluggable type-systems, proof-carrying code, and other features that require a trusted proof (i.e. for use by an optimizer) - favor control by practical mechanisms outside the language definition (such as PKI-signature certification from a source trusted for the subject).
I can see how this might be construed as a problem if you were attempting to treat type-systems as program extensions rather than as IDE extensions. But it needn't be a problem in practice.
Perhaps you should ask a few questions in return: Would a typical programmer care where soundness of a type-system is shown? I've personally never asked where the validity of an optimization technique is proven. These questions rarely arise in the normal course of programming.
I think I get the gist of what David is saying, so I'll try to put forth a concrete example: consider Microsoft Code Contracts. Code contracts are not expressed in or verified by the CLR type system, they are instead expressed as DSL terms which are compiled to CIL. The CIL is itself then processed by a tool with its own static analysis, with no relation to the CLR type system, to verify that all contracts are satisfied. It does so by looking for the special terms of its DSL. This would be an example of a custom post-hoc type system built with language terms.
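A rough Python analogue of that pattern (the `requires` decorator and `CONTRACTS` registry are hypothetical illustrations, not Code Contracts' actual API): contracts are expressed as ordinary terms of the host language and recorded in a structure that a separate tool could inspect, entirely outside any built-in type system. Here the contracts are enforced at call time to keep the sketch runnable.

```python
# Hypothetical sketch: contracts as ordinary host-language terms,
# collected in a registry a separate analysis pass could examine.

CONTRACTS = {}

def requires(pred, msg):
    def wrap(fn):
        CONTRACTS.setdefault(fn.__name__, []).append((pred, msg))
        def checked(*args):
            for p, m in CONTRACTS[fn.__name__]:
                if not p(*args):
                    raise AssertionError(m)
            return fn(*args)
        return checked
    return wrap

@requires(lambda x: x >= 0, "argument must be non-negative")
def isqrt(x):
    return int(x ** 0.5)

print(isqrt(9))
```

Nothing in the language's own checker knows about `requires`; the contract lives in plain terms, which is the essence of a post-hoc, custom "type system" built from language terms.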
Andreas Rossberg wrote:
If you remove types, program contexts will be able to observe behaviours they could not observe (at all) before.
Any given program observes the same behaviors regardless of whether it's typed. You seem to be taking the viewpoint that types are specifications and come before implementations. What about a Curry style type system where types describe properties of an existing program? Or what if we dispense with types altogether and prove program properties in some suitable logic? Something along the lines of replacing the type annotation,
f:Nat->Bool
with a proposition such as,
forall x. x:Nat -> (f x):Bool
I'm curious which of your comments you think would apply to such a scheme, or whether you would consider such a system effectively a type system.
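One hedged reading of that scheme, sketched in Python (the sample-based check is purely illustrative and is of course not a proof): the proposition is checked against an existing untyped definition of `f`, rather than declared as `f`'s type up front.

```python
# Hypothetical sketch: 'forall x. x:Nat -> (f x):Bool' treated as a
# proposition about an existing program, checked over a finite sample.

def f(x):
    return x % 2 == 0            # an existing, untyped definition

def holds_for(f, xs):
    """Check the proposition on the sample xs."""
    return all(isinstance(x, int) and x >= 0    # x : Nat
               and isinstance(f(x), bool)       # (f x) : Bool
               for x in xs)

print(holds_for(f, range(100)))
```

In the Curry-style spirit, the property describes the program after the fact; a real system would discharge it with a proof in a suitable logic rather than by sampling.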
Any given program observes the same behaviors regardless of whether it's typed.
That would depend on the programming language. Type-based dispatch - as is achieved in Haskell's Type Classes and C++ overloaded functions - is likely not well defined in the absence of types, much less achieves the same behaviors.
If we assume that the program is to achieve the same behaviors regardless of whether it is typed, then under this assumption we can also assume that the manifest type-system has no influence on program evaluation (including dispatch decisions). Effectively, 'typeful programming' styles are not possible.
That's the way I'm growing to favor it.
Really depends what types we're talking about. If they're the logical entities describing (something like) sets of values, then types shouldn't affect evaluation. If on the other hand they're annotations used by a constraint solver to fill in some implicit parameters, then sure they can.
As I said, this only matters for closed programs, which is a rather uninteresting case. The more relevant situation is where you rely on types to protect your module against other parts of a larger program that are not under your control. Without types, they could invoke arbitrary undesirable behaviour in your own module.
You seem to have missed my point (probably because I didn't really explain it), which is that this kind of modularity can be achieved without types. If I export a symbol f from my module, and the only thing I tell you about f is that,
forall x. x:Nat -> (f x):Bool
then the only way you can write a correct program that uses my f is to invoke it with a Nat value. If the language requires you prove your code correct, then you get something very close to type safety. It's a little different, since you could include the expression (f "string") in a valid program, so long as that expression is provably never evaluated.
An advantage to this approach is that it gives a simple semantics to the language even when all proof obligations haven't been met.
How does this differ from a (dependent) type system? Relaxing the typing rules for provably dead code would be perfectly possible in such a type system as well, but why do you care?
The ability to call (f "string") is certainly not the point. I think it's a conceptually simpler and more accessible approach. In the logic I'm playing with, types denote sets of terms, so subtypes are trivial. It works well with the customizable type system I'm building (though you could probably use type theory as a back-end as well). Nothing goes through Curry-Howard. There are probably other differences I'm forgetting or haven't noticed. Hopefully nothing fatal...
If one is using annotations in the type-system to specify a term that is then discovered by a theorem-prover and used in an evaluation, that is influence from the type-system upon the evaluation. I don't see how you could claim otherwise.
It's not influence on the evaluation. It says that the evaluation will always result in a certain thing, but it doesn't say how the evaluation is to take place. You can have a specification that says a function returns sorted lists. This type doesn't tell you what algorithm is used at all.
As for "type-safety inference", I think I mostly agree now that you've made it more explicit; however, I don't think this precludes the use of type theory at all.
The reason I mentioned 'staging' and 'partial-evaluation' earlier is that the only advantage I see from pushing this theorem-proving into a rich type-system is that the proof becomes 'static' like the rest of the type-system.
Proofs are not "static". Partial evaluation is simply proof normalisation, i.e. you restructure the proof-tree in some way. The part that is static is the Type. This could potentially allow a large degree of dynamism. Any proof of the type could be substituted if the specification was a sufficient expression of what the program was supposed to do. If it isn't, we can still come up with equivalent proofs using partial evaluation. In the paper I linked I show how you can even create new co-inductive principles by ensuring that the new proof (with the same type) is constructed in a way that preserves bisimilarity of terms (identical observable behaviour in all contexts).

I think, as we continue to talk, that I agree with you, but that you don't understand how type theory can be used to do what you want. It seems to me that it's a very natural setting for it. Why natural? Because natural deduction proofs and proof normalisation (partial evaluation and/or supercompilation) seem to give you exactly what you say you want.
It says that the evaluation will always result in a certain thing, but it doesn't say how the evaluation is to take place.
Just keep in mind that one could say much the same thing about using an ADT or Object: "I said to push a value onto the stack, but I didn't say how the evaluation is to take place. The interface says nothing about what algorithm is used at all!" The fact is, I said to do something. That alone is part of the program evaluation, is it not? I agree the difference between specifying an interface and specifying observable constraints and post-conditions is significant, but either could occur in a term language in order to describe an evaluation.
But it is not worth further belaboring this point, lest we argue what 'influence' and 'evaluation' etc. mean.
Proofs are not "static".
I understand 'static' to strongly imply 'based only on information available at code-development time', whether or not the actual normalization is delayed. So, to me, you're saying that the 'proof' - which supposedly tells me my program is well-typed - isn't happening until after my running program has obtained necessary information from its environment. Is this right, or are we just talking past one another based on different understandings of the word 'static'?
In any case, I think logic programming belongs to the users, for normal evaluations at runtime based on dynamic inputs and data resources. Common implementations of type systems don't allow one to achieve this dynamism. It isn't that I don't understand how type-theory could be used for this purpose, but rather that I don't understand why you would use type-theory for this purpose, much less be eager about it, except for the 'proof normalizations' you might achieve by requiring the proofs be based entirely on information available at development-time.
Debugging distributed applications can be time-consuming. Any help in this quarter would save in-house IT resources an amazing amount of time.
80-some posts and no mention of the IDE's role. Programming hasn't been only about the language since at least Richard Gabriel's PhD thesis. The current crop of IDEs provides an incredible amount of assistance to the programmer. Of course some will observe flippantly that, considering the abysmal state of the popular industrial languages in use today, developers need all the help they can get.

The point being, isn't it a triad: language, library, tools? Of course the focus here is on languages, but this discussion demands a broader scope imo.

Others did mention tools and IDEs; I apologise for the oversight.
This is rather unformed, but bear with me...
In a functional language, there is very often a direct and expressive way to turn a specification into code. However, this direct and expressive way is often slow. In order to recover speed and move from a prototype to working code, we must pollute our lovely declarative world with icky algorithms. We can often package up and hide the algorithm, but what we end up with are then libraries of data structures, not libraries of algorithms.
If declarative code didn't need optimization, then we'd get an order-of-magnitude improvement. But I don't think this is largely or fully a problem of insufficiently optimized declarative libraries of data structures. To get the sorts of optimizations we need, there needs to be much more compiler support to specialize based on all available information.
One way to think of this problem is maybe to say that list comprehensions, set and map operations, etc. that can be expressed as SQL queries should be able to be at least as optimized as SQL queries.
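To make the comparison concrete, here is a small sketch (with invented data) of the same query written as an in-language comprehension and as SQL handed to a planner. The point is that the SQL engine is free to pick an execution plan, while the comprehension's evaluation strategy is fixed by the compiler:

```python
import sqlite3

rows = [("alice", 34), ("bob", 17), ("carol", 52)]

# Declarative, in-language: a comprehension the runtime executes naively.
adults = sorted(name for name, age in rows if age >= 18)

# The same query handed to a planner: SQLite decides how to execute it.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (name TEXT, age INTEGER)")
con.executemany("INSERT INTO people VALUES (?, ?)", rows)
adults_sql = [r[0] for r in con.execute(
    "SELECT name FROM people WHERE age >= 18 ORDER BY name")]

print(adults == adults_sql)  # both yield ['alice', 'carol']
```

Both formulations denote the same set of results; only the SQL version gives the implementation latitude to reorder, index, or defer work.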
See the discussions of the sufficiently smart compiler, about some of the tiger traps which potentially lie in wait down such paths, and this note saying power can be confusing, I think.
People often talk about performance, and how important it is that their programs perform well. They use this as an argument to avoid learning about what a certain language or tool could do for their productivity. It might be the wrong tool for the job, in which case they shouldn't use it, but this should be an informed judgement rather than a knee-jerk reaction.
I think that with the recent progress in static analysis of real world programs these are exciting times. We have barely scratched the surface of program transformations for pure functional languages: who knows how good the compilers will be tomorrow?
As a side note: many of the transformations that GHC does can be done in a strict language as well, which would avoid the problems with the programmers mental model of execution.
As far as ad-hoc optimizations go, I think there's some truth to that. But what's more interesting are principled optimizations -- nobody complains about "sufficiently smart databases" and makes an issue of always writing out query plans by hand. That's seen, rightfully, as a last resort, in the face of an insufficiently smart database. So what I'm saying is if we can capture more about the meaning of our operations, and their domains, then we can probably move far beyond stream fusion (which is essentially about fusing traversals) and into something much more powerful involving reordering, and perhaps even, when necessary deferring certain choices about ordering to run-time, where they can be determined by the actual data in question. A language that could do this even with just pure operations over maps, sets, lists, and adts (i.e., just the relational algebra) would already be quite a leap forward.
I think by looking at a "beautiful Haskell program" and asking how to make it 10x shorter, you're asking the wrong problem. Haskell and Erlang will not (I expect) make beautiful C programs shorter -- if a program is sufficiently well-suited to programming in C that the result can be said to be beautiful, then the C representation will already be quite concise. No, what these languages make shorter are the ugly C programs, where you are fighting the C language in order to write the program, and have to include all sorts of goop that ought to be unnecessary.
The right question to ask, then, is: "What programs are still ugly to represent in all the languages we have today?" For bonus relevance, also ask, "And which of those are types of programs that we will be writing lots of in the future?" Making it possible to write those programs in a beautiful manner is where the 10x improvements come in.
I don't have a wide range of understanding to give a thorough answer to that question, but from where I sit, there are three clear things that count: Parallelism, parallelism, and parallelism.
Now, I know that Erlang is already addressing a lot of parallelism issues, but the problem of writing efficient parallel programs is bigger than one approach can solve.
I'd say that concurrency is important, too, not just parallelism. And then distribution is probably key long-term as well (and is more in the parallelism vein, I'd hazard).
That's a good point. I was somewhat sloppy there, and by "parallelism" what I really meant was the whole cluster of concepts around that, including concurrency and distribution of various flavors.
http://lambda-the-ultimate.org/node/3673
Web Scraping with Python: Illustration with CIA World Factbook
In this article, we show how to use Python libraries and HTML parsing to extract useful information from a website and answer some important analytics questions afterwards.
By Tirthajyoti Sarkar, ON Semiconductor
In a data science project, almost always the most time-consuming and messy part is the data gathering and cleaning. Everyone likes to build a cool deep neural network (or XGBoost) model or two and show off their skills with cool 3D interactive plots. But the models need raw data to start with, and that doesn't come easy and clean.
Life, after all, is not Kaggle, where a zip file full of data is waiting for you to unpack and model :-)
But why gather data or build a model anyway? The fundamental motivation is to answer a business, scientific, or social question. Is there a trend? Is this thing related to that? Can the measurement of this entity predict the outcome of that phenomenon? It is because answering this question will validate a hypothesis you have as a scientist/practitioner of the field. You are just using data (as opposed to test tubes like a chemist or magnets like a physicist) to test your hypothesis and prove/disprove it scientifically. That is the 'science' part of data science. Nothing more, nothing less…
Trust me, it is not that hard to come up with a good-quality question that requires a bit of application of data science techniques to answer. Each such question then becomes a small project of yours, which you can code up and showcase on an open-source platform like GitHub to show to your friends. Even if you are not a data scientist by profession, nobody can stop you from writing a cool program to answer a good data question. That showcases you as a person who is comfortable around data and who can tell a story with data.
Let’s tackle one such question today…
Is there any relationship between the GDP (in terms of purchasing power parity) of a country and the percentage of its Internet users? And is this trend similar for low-income/middle-income/high-income countries?
Now, there can be any number of sources you can think of to gather data for answering this question. I found that a website from the CIA (yes, the 'AGENCY'), which hosts basic factual information about all countries around the world, is a good place to scrape the data from.
So, we will use the following Python modules to build our database and visualizations:
- Pandas, Numpy, matplotlib/seaborn
- Python urllib (for sending the HTTP requests)
- BeautifulSoup (for HTML parsing)
- Regular expression module (for finding the exact matching text to search for)
Let’s talk about the program structure to answer this data science question. The entire boilerplate code is available here in my GitHub repository. Please feel free to fork and star it if you like.
Reading the front HTML page and passing on to BeautifulSoup
Here is how the front page of the CIA World Factbook looks:
Fig: CIA World Factbook front page
We use a simple urllib request with an SSL-error-ignoring context to retrieve this page and then pass it on to the magical BeautifulSoup, which parses the HTML for us and produces a pretty text dump. For those who are not familiar with the BeautifulSoup library, they can watch the following video or read this great informative article on Medium.
So, here is the code snippet for reading the front page HTML,
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

# Read the HTML from the URL and pass on to BeautifulSoup
url = ''
print("Opening the file connection...")
uh = urllib.request.urlopen(url, context=ctx)
print("HTTP status", uh.getcode())
html = uh.read().decode()
print(f"Reading done. Total {len(html)} characters read.")
Here is how we pass it on to BeautifulSoup and use the find_all method to find all the country names and codes embedded in the HTML. Basically, the idea is to find the HTML tags named 'option'. The text in that tag is the country name, and characters 5 and 6 of the tag's value represent the 2-character country code.
Now, you may ask how you would know that you need to extract only the 5th and 6th characters. The simple answer is that you have to examine the soup text, i.e. the parsed HTML text, yourself and determine those indices. There is no universal method to determine this. Each HTML page and its underlying structure is unique.
soup = BeautifulSoup(html, 'html.parser')
country_codes = []
country_names = []
for tag in soup.find_all('option'):
    country_codes.append(tag.get('value')[5:7])
    country_names.append(tag.text)
temp = country_codes.pop(0)  # To remove the first entry 'World'
temp = country_names.pop(0)  # To remove the first entry 'World'
Crawling: Download all the text data of all countries into a dictionary by scraping each page individually
This step is the essential scraping, or crawling as they say. To do this, the key thing to identify is how the URL of each country's information page is structured. Now, in the general case, this may be hard to determine. In this particular case, quick examination shows a very simple and regular structure to follow. Here is the screenshot for Australia, for example,
That means there is a fixed URL to which you have to append the 2-character country code, and you get to the URL of that country's page. So, we can just iterate over the list of country codes and use BeautifulSoup to extract all the text and store it in our local dictionary. Here is the code snippet,
# Base URL
urlbase = ''

# Empty data dictionary
text_data = dict()

# Iterate over every country
for i in range(1, len(country_names) - 1):
    country_html = country_codes[i] + '.html'
    url_to_get = urlbase + country_html
    # Read the HTML from the URL and pass on to BeautifulSoup
    html = urllib.request.urlopen(url_to_get, context=ctx).read()
    soup = BeautifulSoup(html, 'html.parser')
    txt = soup.get_text()
    text_data[country_names[i]] = txt
    print(f"Finished loading data for {country_names[i]}")

print("\n**Finished downloading all text data!**")
Store in a Pickle dump if you like
For good measure, I prefer to serialize and store this data in a Python pickle object anyway. That way I can just read the data directly next time I open the Jupyter notebook without repeating the web crawling steps.
import pickle

pickle.dump(text_data, open("text_data_CIA_Factobook.p", "wb"))

# Unpickle and read the data from local storage next time
text_data = pickle.load(open("text_data_CIA_Factobook.p", "rb"))
Using regular expression to extract the GDP/capita data from the text dump
This is the core text analytics part of the program, where we take the help of the regular expression module to find what we are looking for in the huge text string and extract the relevant numerical data. Regular expressions are a rich resource in Python (as in virtually every high-level programming language). They allow searching for and matching a particular pattern of strings within a large corpus of text. Here, we use very simple regular expression methods to match exact phrases like "GDP - per capita (PPP):", read a few characters after that, extract the positions of certain symbols like $ and parentheses, and eventually extract the numerical value of GDP/capita. Here is the idea, illustrated with a figure.
Fig: Illustration of the text analytics
There are other regular expression tricks used in this notebook, for example to extract the total GDP properly regardless of whether the figure is given in billions or trillions.
# 'b' to catch 'billions', 't' to catch 'trillions'
start = re.search(r'\$', string)
end = re.search(r'[b,t]', string)
if (start is not None and end is not None):
    start = start.start()
    end = end.start()
    a = string[start+1:start+end-1]
    a = convert_float(a)
    if (string[end] == 't'):
        # If the GDP was in trillions, multiply it by 1000
        a = 1000 * a
Here is the example code snippet. Notice the multiple error-handling checks placed in the code. This is necessary because of the supremely unpredictable nature of HTML pages. Not every country may have the GDP data, not all pages may have exactly the same wording for the data, not all numbers may look the same, and not all strings may have $ and () placed similarly. Any number of things can go wrong.
It is almost impossible to plan and write code for all scenarios, but at least you have to have code to handle exceptions if they occur, so that your program does not come to a halt and can gracefully move on to the next page for processing.
# Initialize dictionary for holding the data
GDP_PPP = {}

# Iterate over every country
for i in range(1, len(country_names) - 1):
    country = country_names[i]
    txt = text_data[country]
    pos = txt.find('GDP - per capita (PPP):')
    if pos != -1:  # If the wording/phrase is present
        pos = pos + len('GDP - per capita (PPP):')
        string = txt[pos+1:pos+11]
        start = re.search(r'\$', string)
        end = re.search(r'\S', string)
        if (start is not None and end is not None):  # If the searches succeed
            start = start.start()
            end = end.start()
            a = string[start+1:start+end-1]
            #print(a)
            a = convert_float(a)
            if (a != -1.0):  # If the float conversion succeeds
                print(f"GDP/capita (PPP) of {country}: {a} dollars")
                # Insert the data in the dictionary
                GDP_PPP[country] = a
            else:
                print("**Could not find GDP/capita data!**")
        else:
            print("**Could not find GDP/capita data!**")
    else:
        print("**Could not find GDP/capita data!**")

print("\nFinished finding all GDP/capita data")
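Both snippets above call a convert_float helper that is defined elsewhere in the linked notebook. Judging from how it is used (it returns -1.0 on failure), it presumably looks something like the following sketch; the exact implementation here is an assumption, not the author's code:

```python
def convert_float(s):
    # Hypothetical sketch of the notebook's helper: parse a number such as
    # '2,082' or '36.4', returning -1.0 if the string cannot be parsed.
    try:
        return float(str(s).replace(',', '').strip())
    except ValueError:
        return -1.0
```

The -1.0 sentinel is what lets the calling loop distinguish a failed parse from a valid value.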
Don’t forget to use pandas inner/left join method
One thing to remember is that all this text analytics will produce dataframes with slightly different sets of countries, as different types of data may be unavailable for different countries. One could use a Pandas left join (followed by dropping incomplete rows) to create a dataframe containing the intersection of all common countries for which all the pieces of data are available or could be extracted.
df_combined = df_demo.join(df_GDP, how='left')
df_combined.dropna(inplace=True)
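The join semantics matter here: a left join keeps every row of the left frame and fills the gaps with NaN, so dropping incomplete rows afterwards effectively reduces it to an inner join (the intersection of countries). The same logic can be sketched with plain dictionaries (invented toy data):

```python
demo = {'A': 1.2, 'B': 3.4, 'C': 5.6}   # e.g. country -> demographic figure
gdp = {'A': 100, 'C': 300, 'D': 400}    # e.g. country -> GDP figure

# Left join on demo: every demo key survives, missing GDP becomes None.
left = {k: (demo[k], gdp.get(k)) for k in demo}

# Dropping incomplete rows leaves the intersection, i.e. an inner join.
combined = {k: v for k, v in left.items() if v[1] is not None}

print(combined)  # {'A': (1.2, 100), 'C': (5.6, 300)}
```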
Ah, the cool stuff now, modeling… but wait! Let’s do filtering first!
After all the hard work of HTML parsing, page crawling, and text mining, you are now ready to reap the benefits: eager to run the regression algorithms and cool visualization scripts! But wait, often you need to clean up your data (particularly for this kind of socio-economic problem) a wee bit more before generating those plots. Basically, you want to filter out the outliers, e.g. very small countries (like island nations) that may have extremely skewed values of the parameters you want to plot but do not follow the main underlying dynamics you want to investigate. A few lines of code are enough for those filters. There may be more Pythonic ways to implement them, but I tried to keep it extremely simple and easy to follow. The following code, for example, creates filters to keep out small countries with less than $50 billion of total GDP, and low- and high-income boundaries of $5,000 and $25,000 (GDP/capita), respectively.
# Create a filtered data frame and x and y arrays
filter_gdp = df_combined['Total GDP (PPP)'] > 50
filter_low_income = df_combined['GDP (PPP)'] > 5000
filter_high_income = df_combined['GDP (PPP)'] < 25000
df_filtered = df_combined[filter_gdp & filter_low_income & filter_high_income]
Finally, the visualization
We use the seaborn regplot function to create the scatter plots (Internet users % vs. GDP/capita) with a linear regression fit and 95% confidence interval bands shown. They look like the following. One can interpret the result as follows:
There is a strong positive correlation between Internet users % and GDP/capita for a country. Moreover, the strength of correlation is significantly higher for low-income/low-GDP countries than for high-GDP, advanced nations. That could mean access to the Internet helps the lower-income countries grow faster and improve the average condition of their citizens more than it does for the advanced nations.
Summary
This article goes over a demo Python notebook to illustrate how to crawl webpages to download raw information by HTML parsing using BeautifulSoup. Thereafter, it also illustrates the use of the regular expression module to search for and extract the important pieces of information that the user demands.
Above all, it demonstrates how and why there can be no simple, universal rule or program structure while mining messy HTML-parsed texts. One has to examine the text structure and put in place appropriate error-handling checks to handle all the situations gracefully, maintaining the flow of the program (and not crashing) even when it cannot extract data for some of those scenarios.
I hope readers can benefit from the provided Notebook file and build upon it as per their own requirement and imagination. For more web data analytics notebooks, please see my repository.
If you have any questions or ideas to share, please contact the author at tirthajyoti[AT.
Bio: Tirthajyoti Sarkar is a semiconductor technologist, machine learning/data science zealot, Ph.D. in EE, blogger and writer.
Original. Reposted with permission.
Related:
- Web Scraping Tutorial with Python: Tips and Tricks
- Why You Should Forget ‘for-loop’ for Data Science Code and Embrace Vectorization
- How Much Mathematics Does an IT Engineer Need to Learn to Get Into Data Science
https://www.kdnuggets.com/2018/03/web-scraping-python-cia-world-factbook.html
I am new to programming and am following Chris Pine's book. There is an exercise in the book to write a method similar to .sort that will arrange an array of strings in alphabetical order. I will paste the exercise:
OK. So we want to sort an array of words, and we know how to find out which of two words comes first in the dictionary (using <).

What strikes me as probably the easiest way to do this is to keep two more lists around: one will be our list of already-sorted words, and the other will be our list of still-unsorted words. We’ll take our list of words, find the “smallest” word (that is, the word that would come first in the dictionary), and stick it at the end of the already-sorted list. All of the other words go into the still-unsorted list. Then you do the same thing again but using the still-unsorted list instead of your original list: find the smallest word, move it to the sorted list, and move the rest to the unsorted list. Keep going until your still-unsorted list is empty.
I think I covered the first bit of the exercise, but when I try to repeat my method recursively it fails. Would you please be so kind as to take a look at my code and tell me what I am doing wrong?
puts 'Please enter a list of words you would like to arrange'
puts 'in alphabetical order:'

list_of_words = []
input = 'empty'
while input != ''
  input = gets.chomp
  if input != ''
    list_of_words.push input
  end
end

def arrange some_array
  recursive_arrange some_array, []
end

def recursive_arrange unsorted_array, sorted_array
  # Initializing the array "still_unsorted_array"
  still_unsorted_array = []

  # Finding the smallest word on the array "list_of_words"
  smallest = unsorted_array[0]
  unsorted_array.each do |word|
    if smallest > word
      smallest = word
    end
  end

  # Adding the smallest word from the array of "unsorted_array" to the array of "sorted_array"
  sorted_array.push smallest

  # Adding the words from the array of "unsorted_array" to the array of
  # "still_unsorted_array", with the exception of the content of variable "smallest"
  bigger = 'empty'
  unsorted_array.each do |word|
    if word != smallest
      bigger = word
      still_unsorted_array.push bigger
    end
  end

  # Clearing the array of "unsorted_array" so it can be re-used
  unsorted_array = []

  # Finding the smallest word on the array of "still_unsorted_array"
  smallest = still_unsorted_array[0]
  still_unsorted_array.each do |word|
    if smallest > word
      smallest = word
    end
  end

  # Adding the smallest word from the array of "still_unsorted_array" to the array of "sorted_array"
  sorted_array.push smallest

  # Adding the remaining words from the array "still_unsorted_array" to the array of "unsorted_array"
  still_unsorted_array.each do |word|
    if word != smallest
      bigger = word
      unsorted_array.push bigger
    end
  end

  # This is the bit I tried to call the method recursively.
  if unsorted_array == unsorted_array = []
    puts sorted_array
  else
    recursive_arrange unsorted_array, sorted_array
  end

  puts 'unsorted array:'
  puts unsorted_array
  puts 'still unsorted array:'
  puts still_unsorted_array
  puts 'sorted array:'
  puts sorted_array
end
arrange list_of.
https://grokbase.com/t/gg/rubyonrails-talk/121kmwj4vn/rails-could-someone-please-explain-how-to-repeate-my-code-recursivly
|
There might come a time when you will prefer to stylishly load spatial data into a memory-structure rather than clumsily integrating a database just to quickly answer a question over a finite amount of data. You can use an R-tree by way of the rtree Python package that wraps the libspatialindex native library.
It’s both Python 2 and 3 compatible.
Building libspatialindex:
- Download it (using either GitHub or an archive).
- Configure, build, and install it (the shared-library won’t be created unless you do the install):
$ ./configure $ make $ sudo make install
- Install the Python package:
$ sudo pip install rtree
- Run the example code, which is based on their example code:
import rtree.index

idx2 = rtree.index.Rtree()

locs = [
    (14, 10, 14, 10),
    (16, 10, 16, 10),
]

for i, (minx, miny, maxx, maxy) in enumerate(locs):
    idx2.add(i, (minx, miny, maxx, maxy), obj={'a': 42})

for distance in (1, 2):
    print("Within distance of: ({0})".format(distance))
    print('')

    r = [(i.id, i.object)
         for i in idx2.nearest((13, 10, 13, 10), distance, objects=True)]

    print(r)
    print('')
Output:
Within distance of: (1)

[(0, {'a': 42})]

Within distance of: (2)

[(0, {'a': 42}), (1, {'a': 42})]
NOTE: You need to represent your points as bounding-boxes, which is the basic structure of an R-tree (polygons inside of polygons inside of polygons).
In this case, we assign arbitrary objects that are associated with each bounding box. When we do a search, we get the objects back, too.
3 thoughts on “Build an R-Tree in Python for Fun and Profit”
Great article! Terrific way to get into spatial data and queries like ranges, nearest neighbor and k-NN.
One note on building libspatialindex: first run ./autogen.sh in order to generate the configure file. The downloadable archives don't require it.
I see, it is only applicable for individuals cloning the GitHub repo.
https://dustinoprea.com/2015/08/04/build-an-r-tree-in-python-for-fun-and-profit/comment-page-1/
|
Mastering FXML
2 FXML—What's New in JavaFX 2.1
This page contains the following sections that describe the FXML enhancements in JavaFX 2.1 and incompatibilities with previous releases:
FXML Enhancements for JavaFX 2.1
The following FXML enhancements have been added in JavaFX 2.1:
Support for using a leading backslash as an escape character (RT-18680)
JavaFX 2.0 used consecutive operator characters, such as $$, as escape sequences. JavaFX 2.1 adds support for escape sequences using the backslash character, such as \$. These escape sequences are more similar to Unified Expression Language (UEL), making them more familiar to developers. The JavaFX 2.0 escape sequences are deprecated as of JavaFX 2.1. See Some JavaFX 2.0 FXML Escape Sequences Are Deprecated in JavaFX 2.1 and Backslash Is Now an Escape Character.
An implicit variable exposing the controller in the document namespace
This new feature facilitates bidirectional binding between the controller and the UI. Bidirectional binding was dropped from JavaFX 2.1, but this feature was retained.
Convenience constructors for the FXMLLoader class (RT-16815)

Several new convenience constructors have been added to the FXMLLoader class. These constructors mirror the static load() methods defined in JavaFX 2.0, but make it easier to access the document's controller from the calling code.
Customizable controller instantiation (RT-16724, RT-17268)
In JavaFX 2.0, the calling code did not have any control over controller creation. This prevented an application from using a dependency injection system such as Google Guice or the Spring Framework to manage controller initialization. JavaFX 2.1 adds a Callback interface to facilitate delegation of controller construction:

public interface Callback {
    public Object getController(Class<?> type);
}
When a controller factory is provided to the FXMLLoader object, the loader will delegate controller construction to the factory. An implementation might return a null value to indicate that it does not or cannot create a controller of the given type; in this case, the default controller construction mechanism will be employed by the loader. Implementations might also "recycle" controllers such that controller instances can be shared by multiple FXML documents. However, developers must be aware of the implications of doing this: primarily, that controller field injection should not be used in this case, because it will result in the controller fields containing values from only the most recently loaded document.
Style sheets that are easier to work with (RT-18299, RT-15524)
In JavaFX 2.0, applying style sheets in FXML was not very convenient. In JavaFX 2.1, it is much simpler. Style sheets can be specified as an attribute of a root <Scene> element as follows:

<Scene stylesheets="/com/foo/stylesheet1.css, /com/foo/stylesheet2.css">
</Scene>
Style classes on individual nodes can now be applied as follows:
<Label styleClass="heading, firstPage" text="First Page Heading"/>
Caller-specified no-arg controller method as an event handler (RT-18229)
In JavaFX 2.0, controller-based event handlers must adhere to the method signature defined by an event handler. They must accept a single argument of a type that extends the Event class and return void. In JavaFX 2.1, the argument restriction has been lifted, and it is now possible to write a controller event handler that takes no arguments.
FXML Loader Incompatibilities with Previous JavaFX Releases
The following sections contain compatibility issues that users might encounter if they load a JavaFX 2.0 FXML file with a JavaFX 2.1 FXML loader:
Some JavaFX 2.0 FXML Escape Sequences Are Deprecated in JavaFX 2.1
Backslash Is Now an Escape Character
Some JavaFX 2.0 FXML Escape Sequences Are Deprecated in JavaFX 2.1
Table 2-1 shows the double-character escape sequences that were used in FXML in JavaFX 2.0, but are deprecated in JavaFX 2.1. Instead, use a backslash as the escape character.
If Scene Builder encounters any of these deprecated escape sequences, then the console displays a warning, but loads the FXML anyway. The next time the file is saved, Scene Builder automatically replaces the deprecated escape characters with the new syntax.
Backslash Is Now an Escape Character
In JavaFX 2.1, the backslash
\ is an escape character in FXML. As a result, JavaFX 2.0 applications with FXML files that contain FXML string attributes starting with a backslash might prevent the FXML from loading, or it might cause the FXML loader to misinterpret the string.
Solution: For any FXML backslash text in a JavaFX 2.0 application, add an additional backslash to escape the character.
Example:
Remove this line of code:
<Button text="\"/>
Replace it with this line of code:
<Button text="\\"/>
http://docs.oracle.com/javafx/2/fxml_get_started/whats_new.htm
A component designator only makes sense if you have the correct schematic.
Posts made by PlayTheGame
- RE: PCB Reference Designators Documentation
- RE: m5stack ToF vl53l0x unit will stop scanning I2C devices
.
- RE: Grove HUB unit trouble
@vvs551 If you wire the ENV module to the GROVE port A (with and without HUB) do you have the charging base connected?
Specifically do you have the M5GO Base (e.g. Fire) or the Core Bottom Base?
- RE: PCB Reference Designators Documentation
@lukasmaximus that document, which is heavily outdated
- RE: HOWTO: M5Stack Fire - use the full 16MB with the Arduino IDE (UPDATED)
- RE: Ideas, helping M5Stack document stuff...
@watson
M5Fire schematics?
It's fine to use the altered pin maps as a basis, but be aware your leaflets contain errors.
Fire has an undocumented I2C device on the bus, most probably the power management IC.
M5Camera really I2C SDA on G22 not G25? Schematics?
M5Core
Not up to date, connectors missing
M BUS
HPWR?
- RE: HOWTO: M5Stack Fire - use the full 16MB with the Arduino IDE (UPDATED)
@kabron
Have you installed the gitlab repository code or the release version ?
Because the v1.0.0 Release has a huge I2C bug that affects all readings on the bus and is only corrected in the repository
- RE: Ideas, helping M5Stack document stuff...
@ajb2k3
The forum is hosted by Alisoft (Alibaba group) in Hangzhou behind the great firewall. So something might trigger the behavior (keywords) intentional or not. The main site is hosted in Singapore and not affected.
- RE: Ideas, helping M5Stack document stuff...
There is also this very weird behavior, that is giving a timeout error from my IP address in the last week after I posted something here for 3 times. Well certainly coincidence, most certainly.
After renewing my dynamic IP this forum loads immediately.
- RE: Ideas, helping M5Stack document stuff...
I abandoned this platform for commercial prototypes because of lack of documentation. But at least there are now hints for more info in a new repository.
- RE: PIN number and shields
Well, the confusion's epicenter is at the M5Stack team; it is not your fault @fameuhly
The programming pin list in Arduino is
static const uint8_t TX = 1;
static const uint8_t RX = 3;
static const uint8_t TXD2 = 17;
static const uint8_t RXD2 = 16;
static const uint8_t SDA = 21;
static const uint8_t SCL = 22;
static const uint8_t SS = 5;
static const uint8_t MOSI = 23;
static const uint8_t MISO = 19;
static const uint8_t SCK = 18;
static const uint8_t G23 = 23;
static const uint8_t G19 = 19;
static const uint8_t G18 = 18;
static const uint8_t G3 = 3;
static const uint8_t G16 = 16;
static const uint8_t G21 = 21;
static const uint8_t G2 = 2;
static const uint8_t G12 = 12;
static const uint8_t G15 = 15;
static const uint8_t G35 = 35;
static const uint8_t G36 = 36;
static const uint8_t G25 = 25;
static const uint8_t G26 = 26;
static const uint8_t G1 = 1;
static const uint8_t G17 = 17;
static const uint8_t G22 = 22;
static const uint8_t G5 = 5;
static const uint8_t G13 = 13;
static const uint8_t G0 = 0;
static const uint8_t G34 = 34;
static const uint8_t DAC1 = 25;
static const uint8_t DAC2 = 26;
static const uint8_t ADC1 = 35;
static const uint8_t ADC2 = 36;
However, M5Stack team decided to name the pins in the limited info differently in different places. 13 on the pin header and the proto module is G13 but it is not routed to the core module edge pins. I am really wondering why they did two SPI connectors on opposite sides, but forgot the mandatory chip select pin? At least G2 and G5 have the same colour.
You can try G16 and G17 on a Core (not on a Fire - PSRAM!) because most likely you won't need the second UART.
- RE: Power on 5V pin when switched off.
- RE: PIN number and shields
I have a step motor driver TB6600, I use the following arduino sketch by connecting the PIN 2, 5 and 8 on the Arduino Nano.
// defines pins numbers
const int stepPin = 5;
const int dirPin = 2;
const int enPin = 8;
void setup() {
// Sets the two pins as Outputs
pinMode(stepPin,OUTPUT);
pinMode(dirPin,OUTPUT);
pinMode(enPin,OUTPUT);
}
I would like to use the same thing on the m5stack but I can not find the good PIN.
Include right libraries and use a stepmotor library with non blocking code (interrupt driven, so you can update the display, play sound etc.). Anyway for a quick test you can modify your code. Here I use pin 13, but there are many free pins.
#include <M5Stack.h>
const int stepPin = 5;
const int dirPin = 2;
const int enPin = 13;
//add this to setup
void setup() {
m5.begin();
pinMode(stepPin,OUTPUT);
pinMode(dirPin,OUTPUT);
pinMode(enPin,OUTPUT);
M5.Lcd.clear(BLACK);
M5.Lcd.setTextColor(RED);
M5.Lcd.setTextSize(2);
M5.Lcd.setCursor(65, 10);
M5.Lcd.println("STEPMOTOR");
M5.Speaker.mute();
}
//add this to loop
void loop() {
m5.update();
}
Connect ground, Pin 2, 5, 13 with your driver. However, because the M5stack is 3.3V the signal level might be too low for your driver expecting 5V and you need a level converter.
-
- RE: Mod to programmatically disable speaker whine/hiss.
@Calin Hold your ear on the "mute" speaker. There is still noise with your code.
THX for the hardware mod suggestion.
- RE: m5stack camera module - hardware design bug
Well, because of lack of documentation you are on your own.
Pin Configuration
CONFIG_D0=32
CONFIG_D1=35
CONFIG_D2=34
CONFIG_D3=5
CONFIG_D4=39
CONFIG_D5=18
CONFIG_D6=36
CONFIG_D7=19
CONFIG_XCLK=27
CONFIG_PCLK=21
CONFIG_VSYNC=22
CONFIG_HREF=26
CONFIG_SDA=25
CONFIG_SCL=23
CONFIG_RESET=15
The big question is: Is the BME280 and MPU6050 soldered on the board (as you indicated)? Is the I2C SDA of the BME280 physically connected to G22 as the leaflet says or G25?
To me, unsoldered peripherals are very suspicious. It smells like a big hardware bug. If the M5Camera with PSRAM, based on the WROVER module, really uses G22 for the I2C and also connects this pin as VSYNC to the cam, the bug is obvious.
A proper design is with I2C pullups. Here is an example from bitluni
- RE: M5ez, a complete interface builder system for the M5Stack as an Arduino library. Extremely easy to use.
Tested M5ez-demo on a Fire. Compiled with Arduino 1.8.7 with PSRAM=enabled. No issues, no Neopixel, no crackling sound during boot; everything works flawlessly.
Also the OTA Update worked. Maybe the user should try the included OTA application and exclude compiling errors by flashing your github binary?
- RE: HOWTO: M5Stack Fire - use the full 16MB with the Arduino IDE (UPDATED).
- RE: m5stack camera module - hardware design bug
@aj:
My opinion: It is a silly idea to share the camera VSYNC and SIOC with the BME280 and MPU6050.
https://forum.m5stack.com/user/playthegame/posts
|
I believe the post
is addressing both questions:
1.When the outside world wants to contact the VM’s floating IP, the FIP namespace will reply that 192.168.1.3 is available via the fg’s device MAC address (An awful lie, but a useful one… Such is the life of a proxy). The traffic will be forwarded to the machine, in through a NIC connected to br-ex and in to the FIP’s namespace ‘fg’ device. The FIP namespace will use its route to 192.168.1.3 and route it out its fpr veth device. The message will be received by the qrouter namespace: 192.168.1.3 is configured on its rfp device, its iptables rules will replace the packet’s destination IP with the VM’s fixed IP of 10.0.0.4 and off to the VM the message goes.
2. Legacy routers provide floating IPs connectivity by performing 1:1 NAT between the VM’s fixed IP and its floating IP inside the router namespace. Additionally, the L3 agent throws out a gratuitous ARP when it configures the floating IP on the router’s external device. This is done to advertise to the external network that the floating IP is reachable via the router’s external device’s MAC address. Floating IPs are configured as /32 prefixes on the router’s external device and so the router answers any ARP requests for these addresses. Legacy routers are of course scheduled only on a select subgroup of nodes known as network nodes
https://ask.openstack.org/en/answers/83074/revisions/
Simple data format for Image. More...
#include <drake/systems/sensors/image.h>
Simple data format for Image.
For complex calculations with the image, consider converting this to other libraries' matrix data formats, i.e., MatrixX in Eigen, Mat in OpenCV, and so on.
The origin of the image coordinate system is at the upper-left corner.
The data type for a channel.
An alias for ImageTraits that contains the data type for a channel, the number of channels and the pixel format in it.
Constructs a zero-sized image.
Access to the pixel located at (x, y) in the image coordinate system, where x is the variable for the horizontal direction and y for the vertical direction.
To access each channel value in the pixel (x, y), you can do:
ImageRgbaU8 image(640, 480, 255); uint8_t red = image.at(x, y)[0]; uint8_t green = image.at(x, y)[1]; uint8_t blue = image.at(x, y)[2]; uint8_t alpha = image.at(x, y)[3];
Returns the size of height for the image.
Returns the number of pixels in the image multiplied by the number of channels in a pixel.
Returns the size of width for the image.
The number of channels in a pixel.
The format for pixels.
The size of a pixel in bytes.
http://drake.mit.edu/doxygen_cxx/classdrake_1_1systems_1_1sensors_1_1_image.html
I read this and got really interested: Validating date format using regular expression
so I started writing my own version of the date validation function. I think I am close, but not quite, and I would like some suggestions as well as tips. I have spent a lot of time trying to tweak the function.
import re
import datetime
# Return True if the date is in the correct format
def checkDateFormat(myString):
isDate = re.match('[0-1][0-9]\/[0-3][0-9]\/[1-2][0-9]{3}', myString)
return isDate
# Return True if the date is real date, by real date it means,
# The date can not be 00/00/(greater than today)
# The date has to be real (13/32) is not acceptable
def checkValidDate(myString):
# Get today's date
today = datetime.date.today()
myMaxYear = int(today.strftime('%Y'))
if (myString[:2] == '00' or myString[3:5] == '00'):
return False
# Check if the month is between 1-12
if (int(myString[:2]) >= 1 or int(myString[:2]) <=12):
# Check if the day is between 1-31
if (int(myString[3:5]) >= 1 or int(myString[3:2]) <= 31):
# Check if the year is between 1900 to current year
if (int(myString[-4:]) <= myMaxYear):
return True
else:
return False
testString = input('Enter your date of birth in 00/00/0000 format: ')
# Making sure the values are correct
print('Month:', testString[:2])
print('Date:', testString[3:5])
print('Year:', testString[-4:])
if (checkDateFormat(testString)):
print('Passed the format test')
if (checkValidDate(testString)):
print('Passed the value test too.')
else:
print('But you failed the value test.')
else:
print("Failed. Try again")
Like many things in Python, there are already underlying capabilities to check dates. Assuming you aren't just doing this as an academic exercise, the most straightforward way to validate a date is to try to create it.
import datetime

minyear = 1900
maxyear = datetime.date.today().year
mydate = '12/12/2000'
dateparts = mydate.split('/')
try:
    if len(dateparts) != 3:
        raise ValueError("Invalid date format")
    if int(dateparts[2]) > maxyear or int(dateparts[2]) < minyear:
        raise ValueError("Year out of range")
    dateobj = datetime.date(int(dateparts[2]), int(dateparts[1]), int(dateparts[0]))
except ValueError:
    pass  # handle errors
if datetime.date is given an invalid date it will complain, eg:
>>> datetime.date(2000,45,23)
Traceback (most recent call last):
  File "<pyshell#1>", line 1, in <module>
    datetime.date(2000,45,23)
ValueError: month must be in 1..12
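As a complementary sketch building on that answer, `datetime.strptime` can perform both the format check and the value check in one call (the function name and range limits below are illustrative, not from the original posts):

```python
from datetime import datetime

def is_valid_date(text, minyear=1900):
    """Return True if text is a real MM/DD/YYYY date between minyear and today."""
    try:
        # strptime rejects both malformed strings and impossible dates
        parsed = datetime.strptime(text, '%m/%d/%Y').date()
    except ValueError:
        return False
    return minyear <= parsed.year <= datetime.today().year

print(is_valid_date('12/12/2000'))  # True
print(is_valid_date('13/32/2000'))  # False: no 13th month
print(is_valid_date('02/29/2001'))  # False: 2001 is not a leap year
```

This sidesteps hand-written range checks entirely; the library already knows month lengths and leap years.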
https://codedump.io/share/8oPBV2V4Oeww/1/validating-date-both-format-and-value
Introduction
The Python and NumPy indexing operators [] and attribute operator '.' (dot) provide quick and easy access to pandas data structures across a wide range of use cases. The index is like an address; it is how any data point across the data frame or series can be accessed. Rows and columns both have indexes.
The axis labeling information in pandas objects serves many purposes:
- Identifies data (i.e. provides metadata) using known indicators, important for analysis, visualization, and interactive console display.
- Enables automatic and explicit data alignment.
- Allows intuitive getting and setting of subsets of the data set.
Different Choices for indexing and selecting data
Object selection has had several user-requested additions to support more explicit location-based indexing. Pandas now supports three types of multi-axis indexing for selecting data.
.loc is primarily label based, but may also be used with a boolean array
We are creating a data frame with the help of pandas and NumPy. In the data frame, we are generating random numbers with the help of the randn function. Here the index is given lowercase label names and the columns uppercase names. The index contains six labels, meaning we want six rows, and three columns, as also mentioned in the randn call.
If these shapes mismatch between the index and column labels and the randn call, it will give an error.
# import the pandas library and aliasing as pd
import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.randn(6, 3),
                  index = ['a','b','c','d','e','f'],
                  columns = ['A', 'B', 'C'])
print (df.loc['a':'f'])
How do we check whether the values in a particular row are positive or negative? We compare the row values with zero; the output is a boolean expression in terms of False and True. False means the value is below zero and True means the value is above zero.
# for getting values with a boolean array print (df.loc['a']>0)
As we see in the above code, with
.loc we are checking whether the value is positive or negative via boolean data. In row index 'a' the value of the first column is negative and the other two columns are positive, so the boolean values are False, True, True for these columns.
Then, if we want to access only one column, we can do it with a colon. The colon in the square bracket selects all rows because we did not mention any slicing numbers, and the value after the comma is B, meaning we want to see the values of column B.
print (df.loc[:,'B'])
.iloc is primarily integer position based (from 0 to length-1 of the axis), but may also be used with a boolean array. Pandas provides various methods to get purely integer based indexing.
# import the pandas library and aliasing as pd
import pandas as pd
import numpy as np

df1 = pd.DataFrame(np.random.randn(8, 3), columns = ['A', 'B', 'C'])
# select all rows for a specific column
print (df1.iloc[:8])
In the above small program, .iloc gives the integer index and we can access the values of rows and columns by index values. To select particular rows and columns we do slicing, and since the index is integer based we use .iloc. The first line prints the first four rows; the second line prints rows two to three with column indexes B and C.
# Integer slicing print (df1.iloc[:4]) print (df1.iloc[2:4, 1:3])
.ix is used for both label- and integer-based indexing. Besides pure label based and integer based, Pandas provides a hybrid method for selection and subsetting the object using the .ix() operator. (Note that .ix is deprecated and has been removed in recent pandas versions.)
import pandas as pd
import numpy as np

df2 = pd.DataFrame(np.random.randn(8, 3), columns = ['A', 'B', 'C'])
# Integer slicing
print (df2.ix[:4])
The query() Method
DataFrame objects have a query() method that allows selection using an expression. You can get the value of the frame where column b has values between the values of columns a and c.
For example:
#creating dataframe of 10 rows and 3 columns df4 = pd.DataFrame(np.random.rand(10, 3), columns=list('abc')) df4
The condition given in the below code checks that a is smaller than b and b is smaller than c. If both conditions are true, the row is printed; with this condition, only one row passed.
Give the same conditions to the query function. If we compare the two, the query syntax is simpler than the data frame syntax.
#with query()
df4.query('(a < b) & (b < c)')
Duplicate Data
If you want to identify and remove duplicate rows in a Data Frame, two methods will help: duplicated and drop_duplicates.
- duplicated: returns a boolean vector whose length is the number of rows, and which indicates whether a row is duplicated.
- drop_duplicates: removes duplicate rows.
Creating a data frame in rows and columns with integer-based index and label based column names.
df5 = pd.DataFrame({'a': ['one', 'one', 'two', 'two', 'two'], 'b': ['x', 'y', 'x', 'y', 'x'], 'c': np.random.randn(5)}) df5
We generated a data frame in pandas with an integer based index and three columns a, b, and c. Here we check the boolean value of whether the rows are repeated or not. The first occurrence of each new object gives False, and if it repeats afterwards the value becomes True, meaning this object is repeated.
df5.duplicated('a')
The difference between the outputs of the two functions: one gives boolean output, while the other removes the duplicate rows from the dataset.
df5.drop_duplicates('a')
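Both methods also accept a keep parameter; a minimal sketch (re-creating the df5 frame above) of how it changes which rows are flagged:

```python
import pandas as pd
import numpy as np

# re-create the frame used above
df5 = pd.DataFrame({'a': ['one', 'one', 'two', 'two', 'two'],
                    'b': ['x', 'y', 'x', 'y', 'x'],
                    'c': np.random.randn(5)})

# keep='first' (the default): later occurrences are flagged as duplicates
print(list(df5.duplicated('a')))               # [False, True, False, True, True]

# keep='last': earlier occurrences are flagged instead
print(list(df5.duplicated('a', keep='last')))  # [True, False, True, True, False]

# keep=False drops every row whose 'a' value appears more than once
print(df5.drop_duplicates('a', keep=False))    # empty: all 'a' values repeat
```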
Conclusion:
There are a lot of ways to pull elements, rows, and columns from a DataFrame, and Pandas has several indexing methods that help in selecting data. These are by far the most common ways to index data. The .loc and .iloc indexers use the indexing operator to make selections.
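As a recap, a small runnable sketch (the frame and labels here are illustrative) contrasting the two indexers:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame(np.arange(12).reshape(4, 3),
                  index=['a', 'b', 'c', 'd'],
                  columns=['A', 'B', 'C'])

# .loc is label based and its slice end is INCLUSIVE
print(list(df.loc['a':'c', 'B']))   # [1, 4, 7]

# .iloc is position based and its slice end is EXCLUSIVE
print(list(df.iloc[0:3, 1]))        # [1, 4, 7] -- the same cells, by position

# both accept boolean arrays for row selection
print(df.loc[df['A'] > 3, ['A', 'C']])
```

The inclusive-versus-exclusive slice end is the detail that most often trips people up when switching between the two.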
https://www.coodingdessign.com/python/datascience/indexing-and-selecting-data-in-python-how-to-slice-dice-for-pandas-series-and-dataframe/
Fixing a bug in another system, using an ixgbe driver derived from yours,
I've happened upon the fact that IXGBE_LE32_TO_CPUS appears to be a no-op.
On FreeBSD, it's defined to be le32dec(), which is a pure function, however
the uses in ixgbe_common.c are as if it were side-effecting its argument:
3964 /* Pull in the rest of the buffer (bi is where we left off)*/
3965 for (; bi <= dword_len; bi++) {
3966 buffer[bi] = IXGBE_READ_REG_ARRAY(hw, IXGBE_FLEX_MNG, bi);
3967 IXGBE_LE32_TO_CPUS(&buffer[bi]);
3968 }
Fix:
Guessing suggests that:
buffer[bi] = IXGBE_LE32_TO_CPUS(buffer[bi]);
(there are two instances).
Is what was intended.
How-To-Repeat: Read through ixgbe_common.c and ixgbe_osdep.h. I'm unsure if this causes
practical problems (I don't have a big-endian system with this board).
Responsible Changed
From-To: freebsd-bugs->freebsd-net
Hrm ... currently, its not even defined to le32dec() in head.
ixgbe_osdep.h:
/* XXX these need to be revisited */
#define IXGBE_CPU_TO_LE32 htole32
#define IXGBE_LE32_TO_CPUS(x)
#define IXGBE_CPU_TO_BE16 htobe16
#define IXGBE_CPU_TO_BE32 htobe32
Hrm ... it looks like it was intentionally removed. Typo?
I can re-add it in. I removed it because it doesn't really have an effect on i386/amd64, but I realize someone may want to (try to) use the driver on a big-endian architecture.
(In reply to Eric Joyner from comment #4)
Yeah, I think the ppc folks will want this to work.
Any input from the comment that originally opened the ticket?
It appears that the code wants to convert the array element, but the return value of the macro isn't being assigned to anything. In addition the invocation is converting the address of the element in the array, not the value of the array ... so, I'm unclear what is the right thing to do in the context here.
(In reply to Sean Bruno from comment #5)
It's supposed to take the value of buffer[bi] and convert it from little endian to the CPU architecture in-place -- that's why it's using the address of buffer[bi] instead of the value itself.
(In reply to Eric Joyner from comment #6)
The prototype for le32dec in byteorder(9) seems to indicate that it does *not* modify the value passed in but instead returns a modified value. sys/endian.h seems to agree with this. I think the man page isn't super clear on this fact in the descriptions.
static __inline uint32_t
le32dec(const void *pp)
{
uint8_t const *p = (uint8_t const *)pp;
return (((unsigned)p[3] << 24) | (p[2] << 16) | (p[1] << 8) | p[0]);
}
It looks like the intent for the macro is for it to be an alias to a function like this (on Linux). I think I ended up removing that function because I couldn't find a version in FreeBSD that did the same thing.
(In reply to Eric Joyner from comment #8)
quick grep -r of sys shows that most folks #define the swabXX functions to bswapXX functions.
dev/cxgbe/osdep.h:#define swab16(x) bswap16(x)
dev/cxgbe/osdep.h:#define swab32(x) bswap32(x)
dev/cxgbe/osdep.h:#define swab64(x) bswap64(x)
ofed/include/asm/byteorder.h:#define swab16 bswap16
ofed/include/asm/byteorder.h:#define swab32 bswap32
ofed/include/asm/byteorder.h:#define swab64 bswap64
(In reply to Sean Bruno from comment #9)
The bswap implementation doesn't do the conversion in-place, unlike what the IXGBE macro expects it to.
However, in byteorder.h, it looks like they had to make their own implementation to make le32_to_cpus.
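To make the pure-function point concrete, here is a short sketch (Python, illustrative only, mirroring le32dec's semantics rather than the driver code): decoding returns a new value and leaves its input untouched, which is why calling the conversion as a bare statement has no effect.

```python
def le32dec(raw: bytes) -> int:
    """Pure little-endian 32-bit decode, mirroring FreeBSD's le32dec()."""
    return int.from_bytes(raw[:4], byteorder="little")

buf = bytes([0x78, 0x56, 0x34, 0x12])

# the decoded value must be assigned somewhere to take effect ...
value = le32dec(buf)
print(hex(value))   # 0x12345678

# ... because the input itself is never modified, exactly like
# `IXGBE_LE32_TO_CPUS(&buffer[bi]);` used as a bare statement
assert buf == bytes([0x78, 0x56, 0x34, 0x12])
```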
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=170267
IRC log of dawg on 2005-06-14
Timestamps are in UTC.
14:28:07 [RRSAgent]
RRSAgent has joined #dawg
14:28:07 [RRSAgent]
logging to
14:28:15 [ericP]
zakim, this will be DAWG
14:28:15 [Zakim]
ok, ericP, I see SW_DAWG()10:30AM already started
14:28:17 [DaveB]
Zakim, who's on the phone?
14:28:17 [Zakim]
On the phone I see ??P29
14:28:23 [DaveB]
Zakim, ??P29 is DaveB
14:28:23 [Zakim]
+DaveB; got it
14:28:27 [Zakim]
+??P30
14:28:31 [Zakim]
-DaveB
14:28:31 [ericP]
Meeting: DAWG
14:28:32 [Zakim]
+DaveB
14:28:40 [AndyS]
zakim, ??P30n is AndyS
14:28:40 [Zakim]
sorry, AndyS, I do not recognize a party named '??P30n'
14:28:41 [ericP]
Chair: DaveB
14:28:49 [ericP]
who's scribe?
14:28:50 [AndyS]
zakim, ??P30 is AndyS
14:28:50 [Zakim]
+AndyS; got it
14:28:52 [DaveB]
kendall
14:28:55 [Zakim]
+HowardK
14:29:37 [Zakim]
+EricP
14:29:42 [Zakim]
+??P31
14:29:53 [DaveB]
Zakim, agenda + Convene, take roll, review records and agenda
14:29:55 [HiroyukiS]
Zakim, ??P31 is HiroyukiS
14:29:56 [Zakim]
agendum 1 added
14:29:58 [Zakim]
+HiroyukiS; got it
14:30:00 [Zakim]
+??P33
14:30:05 [DaveB]
Zakim, agenda + Next f2f?
14:30:05 [Zakim]
agendum 2 added
14:30:10 [Zakim]
+Kendall_Clark
14:30:11 [DaveB]
Zakim, agenda + Last call for SPARQL QL
14:30:12 [Zakim]
agendum 3 added
14:30:17 [DaveB]
Zakim, agenda + FILTER()
14:30:17 [Zakim]
agendum 4 added
14:30:21 [ericP]
win go 20
14:30:31 [jeen]
Zakim, ??P33 is jeen
14:30:31 [Zakim]
+jeen; got it
14:30:57 [ericP]
agenday request: grammar
14:31:36 [DaveB]
Zakim, agenda + yacker grammer
14:31:36 [Zakim]
agendum 5 added
14:31:52 [DaveB]
Zakim, take up item 1
14:31:52 [Zakim]
agendum 1. "Convene, take roll, review records and agenda" taken up [from DaveB]
14:31:59 [kendall]
ericp: is there "another" grammar from which the grammar in the spec is generated?
14:32:12 [kendall]
i was looking for some text to cut and paste w/out all the production numbers and such :>
14:32:31 [DaveB]
I saw "Regrets from JanneS, Yoshio, patH tentative regrets: DanC, SteveH"
14:32:37 [DaveB]
Zakim, who's on the phone?
14:32:37 [Zakim]
On the phone I see DaveB, AndyS, HowardK, EricP, HiroyukiS, jeen, Kendall_Clark
14:32:38 [ericP]
kendall,
14:32:49 [kendall]
ah, excellent. thx eric.
14:32:59 [ericP]
kendall, but it's not generatd from that
14:33:19 [kendall]
(we seem to have a quorom :>)
14:33:21 [AndyS]
KC: It's produced by a perl (!) script from javacc. Can take and tailor the perl script
14:33:36 [Zakim]
+??P37
14:33:38 [AndyS]
Could produce text if you need
14:33:42 [kendall]
hmm, okay. no biggie. i'm using antlr to genereate a python parser (:>)
14:33:54 [DaveB]
minutes was
14:33:56 [kendall]
+HiroyukiS
14:33:56 [DaveB]
for last week
14:34:02 [kendall]
erp :>
14:34:59 [kendall]
2 requests for "more human readable" minutes from last week's meeting
14:35:06 [kendall]
(er, at least 2 requests)
14:35:20 [kendall]
so, formally, minutes from last week's meeting not accepted today
14:35:44 [kendall]
ACTION: DanC To produce a more humanly readable version of last week's meeting minutes
14:35:51 [kendall]
CONTINUE other actions from agenda
14:35:57 [DaveB]
Zakim, take up item 2
14:35:58 [Zakim]
agendum 2. "Next f2f?" taken up [from DaveB]
14:36:23 [AndyS]
Can fly Newark <-> Bristol nowadays
14:36:28 [kendall]
Nice :>
14:36:32 [JosD]
JosD has joined #dawg
14:36:56 [kendall]
Jeen on holiday from 4 August to 8 August
14:37:06 [kendall]
EricP on holiday during August
14:37:11 [AndyS]
AndyS:Not week of August 15
14:37:19 [kendall]
KendallC also on holiday in Houston for some 7 week period during August
14:37:20 [Zakim]
+JosD
14:37:30 [ericP]
my vacation is roughly the same dates as Jeen
14:37:40 [kendall]
(Append attendees: +Jos)
14:39:12 [kendall]
AndyS notes that cancelling a scheduled f2f is easier than slipping one in at the last minute)
14:39:43 [AndyS]
VLDB: Aug 30 - Sept 2
14:39:44 [kendall]
Souri also attending
14:39:49 [kendall]
(sorry, I missed you too)
14:39:57 [kendall]
er, he's not on IRC :>
14:40:01 [kendall]
+Souri
14:40:20 [DaveB]
Zakim, who's on the call?
14:40:20 [Zakim]
On the phone I see DaveB, AndyS, HowardK, EricP, HiroyukiS, jeen, Kendall_Clark, Souri, JosD
14:40:37 [DaveB]
Zakim, take up item 3
14:40:37 [Zakim]
agendum 3. "Last call for SPARQL QL" taken up [from DaveB]
14:40:39 [DaveB]
Zakim, take up item 4
14:40:39 [Zakim]
agendum 4. "FILTER()" taken up [from DaveB]
14:40:52 [DaveB]
re
14:40:57 [DaveB]
and
14:41:08 [kendall]
Request to put brackets around FILTER (is that right?)
14:41:14 [kendall]
re: some ambiguities in the grammar
14:41:23 [kendall]
this change affects tests we've already approved
14:41:29 [kendall]
er, this change *would* affect them
14:41:49 [kendall]
DaveB in favor of the change
14:41:52 [AndyS]
Example: "{ FILTER q:name() :a :b }"
14:42:14 [kendall]
+q to ask about "compliance" (sorry!)
14:42:21 [kendall]
q+ to ask about "compliance" (sorry!)
14:42:56 [DaveB]
q-
14:43:17 [kendall]
DaveB would rather have "brackets everywhere", doesn't like the special cases
14:43:29 [ericP]
14:43:29 [kendall]
AndyS thinks regex will be commonly used
14:43:48 [ericP]
the above link is how i solved it
14:44:00 [ericP]
(in the sparql grammar)
14:45:21 [DaveB]
discussion of regex in python, seems it has a perl5 compat regex library
14:46:01 [kendall]
(eek, sorry, badly scribing AND misleading the discussion!)
14:46:28 [kendall]
Jeen: doesn't like the special cases either
14:46:56 [kendall]
All the builtin functions are allowed w/out the outer brackets
14:47:09 [kendall]
Proposal: All the builtin functions are allowed w/out the outer brackets
14:47:26 [DaveB]
with FILTER
14:47:29 [kendall]
Proposal: All the builtin functions are allowed w/out the outer brackets with FILTER
14:47:46 [kendall]
Where all the builtin functions is...
14:47:46 [AndyS]
'STR' '(' Expression ')'
14:47:46 [AndyS]
| 'LANG' '(' Expression ')'
14:47:46 [AndyS]
| 'DATATYPE' '(' Expression ')'
14:47:46 [AndyS]
| RegexExpression
14:47:46 [AndyS]
| 'BOUND' '(' Var ')'
14:47:46 [ericP]
14:47:47 [AndyS]
| 'isURI' '(' Expression ')'
14:47:49 [AndyS]
| 'isBLANK' '(' Expression ')'
14:47:51 [AndyS]
| 'isLITERAL' '(' Expression ')'
14:47:58 [kendall]
(thanks andy!)
14:48:19 [AndyS]
"{ FILTER q:name() :a :b }"
14:48:43 [ericP]
{ FILTER (q:name()) :a :b }
14:49:35 [kendall]
AndyS: can also use an explicit syntax for function calls
14:49:36 [AndyS]
&q:name()
14:50:09 [kendall]
I hate it. :>
14:50:27 [DaveB]
it makes q:name and () - empty list, different from &q:name() function call
14:50:35 [AndyS]
Separate function call and q:name as constant value
14:51:25 [kendall]
Proposal2: To adopt an explict syntax for function calls (which makes q:name and () empty list separate from &q:name() function call)
14:51:40 [kendall]
Jeen doesn't like it
14:52:08 [kendall]
Straw poll: proposal 1, proposal 2, neither
14:53:11 [kendall]
(we rely too much on people being on irc. :>)
14:54:11 [DaveB]
Zakim, who's on the phone?
14:54:11 [Zakim]
On the phone I see DaveB, AndyS, HowardK, EricP, HiroyukiS, jeen, Kendall_Clark, Souri, JosD
14:54:34 [kendall]
JosD: don't really care
14:54:45 [kendall]
Souri: don't really care
14:54:59 [kendall]
Kendall: proposal1+, proposal2-
14:55:15 [kendall]
Souri: proposal1+, proposal2-neutral
14:55:30 [kendall]
Jeen: prop1+, prop2-
14:55:42 [kendall]
Hiroyuki: prop2-mild pref
14:55:47 [kendall]
EricP: mild for p2
14:55:54 [kendall]
Howard: prop2-
14:56:05 [kendall]
AndyS: p1+, p2-
14:56:18 [kendall]
DaveB: p2 mild pref, but doesn't care much
14:57:39 [kendall]
Editors are encouraged to take the straw poll as advice from the WG
14:57:45 [howardk]
i'll talk louder jeen!
14:57:47 [jeen]
:)
14:57:49 [DaveB]
Zakim, take up item 5
14:57:49 [Zakim]
agendum 5. "yacker grammer" taken up [from DaveB]
14:58:04 [kendall]
yacker grammar, even :>
14:58:34 [ericP]
14:58:38 [kendall]
(/me not scribing this very closely...)
14:58:50 [DaveB]
eric epxlaining what yacker does
14:58:58 [DaveB]
had trouble using javacc grammar
14:59:05 [DaveB]
machine generated something that can validate sparql
14:59:39 [kendall]
Issue seems to be whether there is a link in the spec to more than one grammar
14:59:49 [kendall]
Editorial disagreement put to the WG (yes?)
15:00:11 [ericP]
15:00:48 [kendall]
AndyS prefers to reccomend one, rather than >1 grammars
15:01:26 [kendall]
Jeen: can we add an informative link to other grammar, but only one normative grammar
15:01:35 [kendall]
?
15:03:36 [kendall]
Very confusing, Eric.
15:04:30 [jeen]
EricP wants to add both grammars to WD, let users 'vote' on which should be in Rec.
15:06:30 [kendall]
I'm confused: is the point to help yacc users, to have 2 grammars, or both?
15:06:36 [kendall]
these are v. diff!
15:11:14 [kendall]
Souri: do these 2 grammars describe exactly the same language?
15:11:18 [kendall]
Andy: No.
15:11:35 [kendall]
Kendall: I want you guys as editors to pick one and recommend it. Period.
15:12:23 [kendall]
Souri: Doesn't mind 2 grammars, minds 2 languages
15:13:10 [DaveB]
Zakim, take up item 3
15:13:10 [Zakim]
agendum 3. "Last call for SPARQL QL" taken up [from DaveB]
15:13:35 [kendall]
other grammar proposal w/drawn
15:13:59 [DaveB]
andy's action done re rq23 unionqu decision
15:14:04 [kendall]
Eric: is caught up or will be soon
15:14:16 [kendall]
Andy has a comment from DanC and one from teh comments list pending
15:14:33 [kendall]
There are some markup issues
15:15:45 [DaveB]
andys said - need to sync with xmlres namespace when it gets a datespace one
15:17:06 [kendall]
Most of the pending stuff seems a matter of small edits
15:17:39 [DaveB]
reviewer SH regrets for today
15:17:45 [DaveB]
KC - nothing hold up for last call
15:18:09 [DaveB]
JB - mostly editorial, seem addressed
15:18:13 [kendall]
JB: comments mostly editorial, all of which seem to have been addressed. Pretty much happy.
15:18:55 [kendall]
SteveH: possible issue re: precision of ints & decimals...
15:19:03 [kendall]
er, this is via EricP's recollection
15:20:02 [kendall]
We don't know for certain Steve's review of the doc's status
15:20:28 [ericP]
last of the @@s removed from section 11
15:20:59 [kendall]
Discussion of creating a namespace URI for the results bindings format
15:21:39 [kendall]
rq23 mentions this namespace, so it needs to be an acceptable one
15:23:56 [kendall]
Gathering advice about LC timing...
15:24:12 [kendall]
Andy: wants there to be a period during which all 3 docs are LC simultaneously
15:24:19 [kendall]
DaveB: seems okay w/ that
15:25:16 [DaveB]
kc - would like overlap to be a month
15:25:20 [DaveB]
and protocol to have a longer lc
15:25:40 [DaveB]
kc - lc period for query should be substantial, 2-3 mths
15:25:49 [DaveB]
and think protocol can go to lc in the next month
15:26:24 [DaveB]
... query is clean, good but complex. seen buyin from rdql users, impl but not from other parts of community
15:27:15 [kendall]
I'd like to see a 10 to 12 week LC period for the query doc, with at least 4 weeks of simultaneous LC for all 3 docs.
15:27:33 [DaveB]
Revision 1.395 2005/06/14 15:19:37 eric
15:27:34 [DaveB]
got rid of last @@s in section 11
15:27:36 [kendall]
Proposed: to publish rq23 (1.395) as a LC doc
15:27:51 [kendall]
Proposed: to publish rq23 (1.395), addressing FILTER and minor editorial changes, as a LC doc
15:31:31 [kendall]
KC, JosD volunteer to review (after the 17th for KC)
15:32:30 [kendall]
Proposed: to publish rq23 (1.395), addressing FILTER and 3 editorial changes, with KC & JosD reviewing these changes post 1.395, as a LC doc
15:34:39 [kendall]
publish: UMD, Bristol, HP, Agfa, NT&T, Howard the K, JeenB, W3C, (Souri in spirit!)
15:34:49 [kendall]
opposed: ()
15:34:53 [kendall]
abstains: ()
15:35:03 [kendall]
RESOLVED, go to last call for sparql ql
15:35:04 [kendall]
yay
15:36:06 [kendall]
Action AndyS: revise rq23 per fromUnionQuery decision
15:36:09 [kendall]
FINISHED
15:36:28 [kendall]
ACTION: ericp arrange publication
15:36:36 [kendall]
Jeen to scribe, DanC to chair meeting next Tuesday
15:36:50 [kendall]
ACTION: ericp work w/ danc on SOTD and LC
15:36:59 [kendall]
ACTION: danc message to chairs (?)
15:37:11 [kendall]
ACTION: AndyS editorial changes to rq23
15:37:25 [kendall]
ACTION: KendallC to review post 1.395 doc
15:37:34 [kendall]
ACTION: JosD to review post 1.395 doc
15:37:40 [kendall]
did I get them all?
15:37:57 [kendall]
(er, some of those may want to be combined...)
15:38:07 [Zakim]
-Kendall_Clark
15:38:08 [Zakim]
-JosD
15:38:10 [Zakim]
-Souri
15:38:11 [Zakim]
-DaveB
15:38:13 [Zakim]
-HowardK
15:38:15 [Zakim]
-HiroyukiS
15:38:15 [DanC_mtg]
I'm not available next tuesday. I said as much in email.
15:38:16 [Zakim]
-EricP
15:38:16 [DaveB]
meeting over
15:38:47 [DaveB]
DanC_mtg: oops...
15:39:05 [DanC_mtg]
oh well... if there's energy for meeting next week, I can find a chair again
15:40:40 [DaveB]
ericP can you drive Zakim or RRSagent to print agendas and make irc logs public?
15:40:46 [DaveB]
s/agendas/actions/
15:41:10 [Zakim]
-jeen
15:42:50 [DanC_mtg]
RRSAgent, make logs world-access
15:42:58 [DanC_mtg]
RRSAgent, please draft minutes
15:42:58 [RRSAgent]
I have made the request to generate
DanC_mtg
15:45:57 [DaveB]
DaveB has joined #dawg
15:48:18 [DaveB]
Zakim, please show the actions
15:48:18 [Zakim]
I don't understand 'please show the actions', DaveB
15:48:26 [DaveB]
RRSAgent: please show the actions
15:48:26 [RRSAgent]
I see 8 open action items:
15:48:26 [RRSAgent]
ACTION: DanC To produce a more humanly readable version of last week's meeting minutes [1]
15:48:26 [RRSAgent]
recorded in
15:48:26 [RRSAgent]
ACTION: AndyS to revise rq23 per fromUnionQuery decision [2]
15:48:26 [RRSAgent]
recorded in
15:48:26 [RRSAgent]
ACTION: ericp arrange publication [3]
15:48:26 [RRSAgent]
recorded in
15:48:26 [RRSAgent]
ACTION: ericp work w/ danc on SOTD and LC [4]
15:48:26 [RRSAgent]
recorded in
15:48:26 [RRSAgent]
ACTION: danc message to chairs (?) [5]
15:48:26 [RRSAgent]
recorded in
15:48:26 [RRSAgent]
ACTION: AndyS editorial changes to rq23 [6]
15:48:26 [RRSAgent]
recorded in
15:48:26 [RRSAgent]
ACTION: KendallC to review post 1.395 doc [7]
15:48:26 [RRSAgent]
recorded in
15:48:26 [RRSAgent]
ACTION: JosD to review post 1.395 doc [8]
15:48:26 [RRSAgent]
recorded in
15:49:14 [DaveB]
RRSAgent: please make logs public
15:49:18 [DaveB]
RRSAgent: please leave us
HOWTO Setup Android Development
From FedoraProject
Install the plugin from one of these URLs, depending on your version of Eclipse. For Eclipse version 3.5 use:
or for Eclipse version 3.6 use:
For Eclipse version 3.7 (Fedora 16 and current Rawhide (as of Oct. 10, 2011)), use:
For Eclipse version 4.2 (Fedora 17 and current Rawhide (as of Jun. 6, 2012)), use:
If you're unsure which version of Eclipse you are using, check it at Help > About Eclipse..
- Back in the Available Software view, finish installing the ADT plugin. Then add the Android SDK tools directory to your PATH (for example, append export PATH=$PATH:~/AndroidSDK/tools to ~/.bash_profile)
- Logout and login back to apply path change
Android Emulator
32 bit packages
AVD device
- cd into the ~/AndroidSDK directory and run tools/android to configure and create your first Android Virtual Device.
- Go to "Available Packages", select components for just those versions of Android you want to work with. For example:
- SDK Platform Android 2.1
- Documentation for Android SDK
- (SDK version r_08) For the adb tool, make sure you also select:
- Platform Tools
- Click on "Install selected", then click on "Accept all" and confirm by clicking "Install". This starts the component installation; when it is done, click on Close. After that, we can proceed with creating the AVD device itself.
- Go to "Virtual Devices" and click on "New". This opens a screen where you specify the SD card size (I will use 62MiB), the name of the device (I will use "android_dev1"), and the target (Android 2.1; if you want to develop for a different target, go back to step 2 and install the SDK platform for that version).
- Now click on "Create AVD" which will create Android Virtual Device.
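The interactive AVD creation above can also be scripted. Below is a small Python sketch that assembles the classic `android create avd` command line; the target id "android-7" and the SDK path are illustrative values, so adjust them to match the targets that tools/android lists for you.

```python
def build_create_avd_cmd(sdk_dir, name, target, sdcard="62M"):
    """Assemble the `android create avd` command line that scripts
    the AVD creation done interactively in the steps above."""
    return [
        sdk_dir + "/tools/android", "create", "avd",
        "-n", name,     # AVD name, e.g. android_dev1
        "-t", target,   # platform target id (illustrative: android-7)
        "-c", sdcard,   # SD card size
    ]

cmd = build_create_avd_cmd("/home/user/AndroidSDK", "android_dev1", "android-7")
# run it with subprocess.call(cmd) once the SDK is installed
```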
Running Emulator
Now that we have created an Android Virtual Device, we should start it. However, due to sound issues in the Android SDK, we need to run it from the command line:
./emulator -noaudio -avd android_dev1
This will start the emulator for us.
Hello Fedora
Configure Android in Eclipse
- Go to Window -> Preferences, click on Android and set SDK location to your SDK directory (for example /home/user/AndroidSDK) and click on Apply.
- Click on Apply to reload available targets
- Choose the target Android SDK
- Click on OK
Create a New Android Project
After you've created an AVD, the next step is to start a new Android project in Eclipse.
- From Eclipse, select File > New > Project. If the ADT Plugin for Eclipse has been successfully installed, the resulting dialog should have a folder labeled "Android" which should contain "Android Project". (After you create one or more Android projects, an entry for "Android XML File" will also be available.)
- Select "Android Project" and click Next.
- On the next screen, type the Project Name ("HelloFedora"), the Application name ("Hello, Fedora"), and the package name (com.example.hellofedora), which represents your namespace, plus the name of the activity in the "Create Activity" box (HelloFedora). Choose the target (if you have multiple targets) and click on "Finish". This will create the project for you.
Development and Execution
- open HelloFedora.java and paste in the example code from the Hello Fedora Code section.
- click on Window -> Preferences. In the new window, open Android -> Launch and insert "-noaudio" into the "Options" text box
- open a separate console, cd ~/AndroidSDK/tools and execute ./emulator -noaudio @android_dev1 to start the emulator. Wait for the emulator to start (it can take several minutes)
- in Eclipse, click on "Run" and it will deploy the application to the Android Virtual Device.
Hello Fedora Code
package com.example.hellofedora;

import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

public class HelloFedora extends Activity {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        TextView tv = new TextView(this);
        tv.setText("Hello, Android Developer\n Thank you, for using Fedora Linux");
        setContentView(tv);
    }
}
yum install gcc gcc-c++ gperf flex bison glibc-devel.{x86_64,i686} zlib-devel.{x86_64,i686} ncurses-devel.i686 libsx-devel readline-devel.i686 perl-Switch
Something as simple as:
- Code: Select all
import matplotlib.pyplot as plt
import time
plt.ion()
plt.axis( [0, 1, 0, 1] )
plt.axes().set_aspect( 'equal' )
plt.show()
time.sleep( 1000 )
My problem is that I cannot kill this window. I've changed my keybinding to allow kill: true,
{ "keys": ["ctrl+shift+c"], "command": "exec", "args": {"kill": true} },
and it kind of kills the process, but python.exe stays running and the GUI window stays open. It seems the only way to kill the running process is from Task Manager.
Can you help me fix it? I'd like to have something similar to Shift + F2 in PyScripter, which can just kill these Python scripts.
Program Code :
/**********************************************************
 * This program accepts a temperature value in either     *
 * Celsius or Fahrenheit and determines whether the       *
 * water is liquid, solid, or gaseous.                    *
 *********************************************************/
// Header file section
#include <iostream>
#include <string>
using namespace std;
// Main function begins here
int main()
{
//Declare variables
string unit;
float temp;
The unit variable specifies whether the input temperature is in F or C.
// Prompt the user to state the unit of the
// temperature
cout << "Pick a temperature unit: C or F" << endl;
cin >> unit;
// Read the input value for the temperature value
cout << "Enter the temperature:" << endl;
cin >> temp;
// To make matters simpler, convert temperature to
// Celsius if necessary
if (unit == "F")
{
temp -= 32;
temp /= 1.8;
}
Water is solid at 0 degrees Celsius or below. It is liquid until it gets to 100 degrees at which point or above, it is gaseous.
if (temp <= 0)
cout << "Solid" << endl;
else if (temp < 100)
cout << "Liquid" << endl;
else
cout << "Gas" << endl;
//Pause the system for a while
system("PAUSE");
return 0;
}
Sample Output:
Pick a temperature unit: C or F
F
Enter the temperature:
100
Liquid
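The same threshold logic ports directly to Python, which makes it easy to sanity-check the conversion; this sketch mirrors the C++ program above:

```python
def water_state(temp, unit):
    """Classify water as Solid/Liquid/Gas, converting Fahrenheit
    to Celsius first, mirroring the C++ program above."""
    if unit == "F":
        temp = (temp - 32) / 1.8
    if temp <= 0:
        return "Solid"
    if temp < 100:
        return "Liquid"
    return "Gas"

# 100 degrees F is about 37.8 C, so the sample run reports "Liquid".
assert water_state(100, "F") == "Liquid"
```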
Hi, i don’t know much about sdl, but tried to find out why my qemu crashed since
today. I reduced the problem to the following lines:
#include <stdio.h>
#include <stdlib.h>
#include "SDL.h"

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0) {
        fprintf(stderr, "Unable to init SDL: %s\n", SDL_GetError());
        exit(1);
    }
    atexit(SDL_Quit);
    SDL_SetVideoMode(100, 100, 0, SDL_HWSURFACE);
    while (1) {
        SDL_Event ev;
        while (SDL_PollEvent(&ev)) {
        }
    }
}
I’ve got a callstack, but currently no self-compiled sdl, so just functions:
SDL_PumpEvents - X11_PumpEvents - X11_SetKeyboardState - X11_TranslateKey -
XLookupString - Segmentation Fault
The crash happens sometimes on start, otherwise always as soon as the mouse
enters the sdl-windows.
Using the Philips Hue Motion Sensor APIOctober 1, 2017
TL;DR:
http://<HUE_BRIDGE>/api/<USER_ID>/sensors
The response will look similar to this.
{ "1": { "state": { "daylight": true, "lastupdated": "2017-10-01T11:10:05" }, "config": { "on": true, "configured": true, "sunriseoffset": 30, "sunsetoffset": -30 }, "name": "Daylight", "type": "Daylight", "modelid": "PHDL00", "manufacturername": "Philips", "swversion": "1.0" }, "8": { "state": { "presence": false, "lastupdated": "none" }, "config": { "on": true, "reachable": true }, . . . }
So to get data of a specific sensor, just add its ID to the URL above. In my case, ID 13 happens to be the motion detector's URL.
http://<HUE_BRIDGE>/api/<USER_ID>/sensors/13
The result is simply a subset of the previous call. The information we care about is the property
presence.
{ "state": { "presence": true, "lastupdated": "2017-10-01T12:37:30" }, "swupdate": { "state": "noupdates", "lastinstall": null }, "config": { "on": true, "battery": 100, "reachable": true, "alert": "none", "ledindication": false, "usertest": false, "sensitivity": 2, "sensitivitymax": 2, "pending": [] }, "name": "Hue motion sensor Küche", "type": "ZLLPresence", "modelid": "SML001", "manufacturername": "Philips", "swversion": "6.1.0.18912", "uniqueid": ". . ." }
The Code
I had an existing python script, which read the state of the PIR sensor connected to GPIO. I simply replace this line of code with this new method that retrieves the data from the Web API. Job done!
def get_pir_state():
    try:
        response = requests.get('http://' + secrets.HUE_BRIDGE + '/api/' + secrets.USER_ID + '/sensors/13')
        json_data = json.loads(response.text)
        return 1 if json_data['state']['presence'] == True else 0
    except:
        print("error ...", sys.exc_info()[0])
        return 1
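If you want the parsing logic separated from the HTTP call (handy for testing without a bridge on the network), here is a small sketch; the fail-safe return value of 1 mirrors the script above, and the helper name is my own:

```python
import json

def parse_presence(payload):
    """Extract the motion state from a Hue sensor payload (JSON
    string or already-decoded dict). Returns 1 if presence was
    detected, 0 otherwise; falls back to 1 (the fail-safe choice
    used above) if the payload is malformed."""
    try:
        data = json.loads(payload) if isinstance(payload, str) else payload
        return 1 if data['state']['presence'] is True else 0
    except (KeyError, TypeError, ValueError):
        return 1

# Example with the response shape shown above:
sample = '{"state": {"presence": true, "lastupdated": "2017-10-01T12:37:30"}}'
assert parse_presence(sample) == 1
```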
SYNOPSIS
#include <mqueue.h>
#include <time.h>
ssize_t mq_timedreceive(mqd_t mqdes, char *restrict msg_ptr, size_t msg_len, unsigned *restrict msg_prio, const struct timespec *restrict abs_timeout);
DESCRIPTION
The mq_timedreceive() function receives the oldest of the highest-priority messages from the message queue specified by mqdes.

If the msg_prio argument is not NULL, the priority of the selected message is stored in the location pointed to by the msg_prio argument.

If the specified message queue is empty and does not have O_NONBLOCK set in its message queue description, mq_timedreceive() blocks until a message is enqueued on the queue or until the wait is terminated.

The abs_timeout argument is an absolute time. The wait expires when the absolute time passes, as measured by the clock on which timeouts are based, or if the absolute time has already passed when the function was called.

The timeout is based on the system clock (CLOCK_REALTIME).

The resolution of the timeout is the resolution of the clock on which it is based. The <time.h> header defines the timespec type.

This operation never fails with a timeout if a message can immediately be removed from the message queue. In such a case, the validity of the abs_timeout argument is not checked.
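Because abs_timeout is an absolute time rather than a relative wait, callers normally compute it from the current clock reading plus the desired delay, taking care to carry nanosecond overflow into the seconds field. A sketch of that arithmetic (shown in Python for brevity; a C caller would do the same with a struct timespec):

```python
import time

NSEC_PER_SEC = 1_000_000_000

def make_abs_timeout(wait_seconds):
    """Build a (tv_sec, tv_nsec) pair for a wait of wait_seconds,
    mirroring the timespec arithmetic a C caller would perform
    before passing abs_timeout to mq_timedreceive()."""
    now = time.clock_gettime(time.CLOCK_REALTIME)
    tv_sec = int(now)
    tv_nsec = int((now - tv_sec) * NSEC_PER_SEC)
    # Add the relative wait, splitting it into whole seconds and
    # nanoseconds, then carry any nanosecond overflow into seconds.
    tv_sec += int(wait_seconds)
    tv_nsec += int((wait_seconds - int(wait_seconds)) * NSEC_PER_SEC)
    if tv_nsec >= NSEC_PER_SEC:
        tv_sec += 1
        tv_nsec -= NSEC_PER_SEC
    return tv_sec, tv_nsec
```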
PARAMETERS
- abs_timeout
Specifies the maximum wait for receiving a message on the specified message queue.
- mqdes
Specifies the message queue descriptor associated with the message queue on which messages are to be received.
- msg_len
Specifies the size, in bytes, of the buffer where the received message is stored.
- msg_prio
Stores the priority of the received message.
- msg_ptr
Points to the buffer where the received message is stored.
RETURN VALUES
On success, the mq_timedreceive() function returns the length of the selected message in bytes, and the message is removed from the queue. On failure, it returns -1, and errno is set to one of the following values:
- EAGAIN
The O_NONBLOCK flag was set in the message description associated with mqdes, and the specified message queue is empty.
- EBADF
mqdes is not a valid message queue descriptor open for reading.
- EBADMSG
There was a data corruption problem with the message.
- EMSGSIZE
msg_len is less than the message size attribute of the specified message queue.
- EINTR
A signal interrupted an mq_receive() operation.
- EINVAL
The process or thread would have blocked, and the abs_timeout parameter specified an invalid value.
- ENOSPC
Too much space was requested for the buffer pointed to by msg_ptr.
- ETIMEDOUT
The O_NONBLOCK flag was not set when the message queue was opened, but no message arrived on the queue before the specified timeout expired.
CONFORMANCE
UNIX 03.
MULTITHREAD SAFETY LEVEL
MT-Safe.
PORTING ISSUES
On the NuTCRACKER Platform, the Windows.
Updates). Please keep this in mind.
This article is a beginner’s How To use the Flash Facebook API (facebook-actionscript-api), in particular in authenticating user and loading the user’s name in Flash applications.
This guide assumes that you are already familiar with the Facebook Graph API and how to create application in Facebook (you need an Application ID to make this example). If not, head over to:
The guide is written for version 1.5 of the facebook-actionscript-api; if you are using 1.0, head over here:.
If you want to jump to the example, go here: /tutorial/as3-open-graph-example-basic-1.5/index.html?test. To download the code for the example, go to the bottom of this page, under Links section.
When you set-up your application, set the application as IFrame (not FBML) application. Below is an example of the settings that I use:
Then, follow these steps to set-up your first project in Flash IDE:
1. Download the facebook-actionscript-api from:. I have included the version I used for this example in the ZIP file at the end of this article. (Note: This tutorial uses version 1.5.)
2. Start up Flash IDE and create a new project (I am using CS3, but this guide should apply to CS4 and 5 too).
3. Extract the GraphAPI Source_1_5.zip into a folder. For now, I suggest just extracting it into the same folder as your FLA
4. Add the classpath of where you extracted the GraphAPI Source_1_0.zip into the project settings.
Ie: ./GraphAPI Source_1_5/api/ in my folder configuration below:
5. Extract GraphAPI Examples_1_5.zip somewhere because we will be using the JavaScript and html file from that example later. Note that The GraphAPI Examples_1_5.zip contains examples for Flex Builder 4, and this guide will show you how to use the API in the Flash IDE.
6. From the GraphAPI Examples_150.zip in FacebookAPI_Dec_06_2010_samplesWebIFrameDemo, create a copy of the index.php (you can rename it to index.html if you are not using php).
Now that we’re done with the set up and copying stuff, let’s go into the Flash movie editor:
7. Create a button on the screen, name it “loginButton.”
8. Create a textfield on the screen, name it “nameTextField”
9. Add code to Frame 1 of the FLA (this code is ported from the GraphAPI Examples_150.zip, FacebookAPI_Dec_06_2010_samplesWebIFrameDemo so you will see a lot of similarities):
First, we load up the Facebook API library:
import com.facebook.graph.Facebook;
import com.facebook.graph.data.FacebookSession;
import com.facebook.graph.net.FacebookRequest;
Next, replace the APP_ID with yours, then call Facebook.init().
//Place your application id here
const APP_ID:String = "130838916986740";

//Initialize the Facebook library
Facebook.init(APP_ID, handleLogin);

function handleLogin(response:Object, fail:Object):void {
    trace("handleLogin");
    if (Facebook.getSession().accessToken)
        changeToLoginState();
}
Facebook.init() will check if the user is already logged in. If the user is not already logged in, it will prompt the user to log in (this process is already handled for you). After the user is logged in, it then checks whether the user has authorized/installed the application and shows the application install/permission dialog (this process is already handled for you).
Facebook.init() takes the callback as its second parameter; here we pass a function named handleLogin(). So, after those two processes, the handleLogin function on the ActionScript side is called, even if the user refused to log in.
Now, if the user chose not to log in, we give him/her another chance to log in by displaying a Login button. The process is slightly different when the user clicks the Login button: now we call the Facebook.login function (which opens a popup). If you want to, you can replace this with the in-same-window login described here: /blog/2011/02/facebook-authenticationlogin-process-in-popup-or-not/, but don't worry about this now.
Right now, let’s hook up the loginButton to log us into Facebook first.
loginButton.addEventListener(MouseEvent.CLICK, onLoginButtonClicked);

function onLoginButtonClicked(event:MouseEvent):void {
    trace("onLoginButtonClicked");
    // We check the accessToken to make sure user is logged in
    if (Facebook.getSession() && Facebook.getSession().accessToken == null) {
        Facebook.login(handleLogin, {perms:'publish_stream'});
    } else {
        Facebook.logout(handleLogout);
    }
}
If you want to request extended permissions, you can pass it as parameters to the login call. For example, below is how to request publish_stream permission:
Facebook.login(handleLogin, {perms:'publish_stream'});
Update
Facebook requires the use of OAUTH2.0 nowadays. Some of the changes required is that perms is now called scope.
Following that, let’s add the changeToLoginState() function, like below. This code is called when user has logged in. In this basic-example, we are just changing the button label to say Logout to allow user to logout.
function changeToLoginState():void {
    //resultTxt.text += 'Logged in';
    loginButton.label = 'Logout';
    loadMyInfo();
}
In a real application, you can start using other Facebook API once the user is confirmed as logged-in. For this example, to demonstrate that we are indeed logged in, I am just going to call a custom function named loadMyInfo() to get the user name.
And here is the loadMyInfo() function. Following the Graphi API documentation, to get your own info, you can call:
So that’s what we will do:
function loadMyInfo():void {
    Facebook.api('/me', onMyInfoLoaded);
}

// Display my name in the text field.
function onMyInfoLoaded(response:Object, fail:Object):void {
    trace("onMyInfoLoaded " + response.name);
    nameTextField.text = response.name;
}
Above, I called the /me API and then printed the user name in the callback function. Yes, it's as simple as that. If you are interested in how the api works, continue here; otherwise, skip to Step 8.
It’s nice to know that you can call most of the Graph API’s () using the Facebook.api() in this manner. Once you get a hang of the process, you should take a look into com.facebook.graph.Facebook.as that comes in the facebook-actionscript-api.
public static function api(method:String, callback:Function = null, params:* = null, requestMethod:String = 'GET'):void
This is a very powerful function and you can do a lot by calling the Graph API using just this one function. If you are already familiar with the JavaScript version of the API, it’s almost effortless. Notice also that the callback parameter is a function that follows this format:
function callback(response:Object, fail:Object):void { ... }
The fail object contains some error informations. If for some reason the api call failed, you should always check the error message to help pinpoint where the problem is, such so:
function myCallback(response:Object, fail:Object):void {
    if (fail && fail.error) {
        trace("error.message=" + fail.error.message);
        trace("error.type=" + fail.error.type);
    }
}
Another question is: How do I know what is inside the response object? You can always do foreach to print each variables, or you can test most of the API from this page to see the return value in the response object :. For instance, the /me call returned this:
{ "id": "9999999999999999999999", "name": "Permadi Permadi", "first_name": "Permadi", "last_name": "Permadi", "link": "", "birthday": "01/01/1990", "gender": "male", "timezone": -8, "locale": "en_US", "verified": true }
Bookmark the Facebook documentation at:, you will refer to it a lot.
You can explore more examples here:
- Posting to user Photo Album
10. Now we are almost ready with the Flash movie. Publish the movie as example.swf
11. Create the html page to host the SWF. You can use the one exported by the FLash IDE, but you will need to change a lot of the code, so just use the template below, which is taken from GraphAPI Examples_150.zip in FacebookAPI_Dec_06_2010_samplesWebIFrameDemoindex.php, which you copied in Step 6 above (if you are not using php, just rename it to index.html since there’s nothing that requires PHP in this example).
<html xmlns="" xmlns: <head> <!-- Include support librarys first --> <script type="text/javascript" src=""></script> <script type="text/javascript" src=""></script> <script type="text/javascript" src=""></script> <script type="text/javascript"> var</div> <div id="ConnectDemo"></div> </body> </html>
Notes
- The REDIRECT_URI tells Facebook where to go back to after the user logs into Facebook from your application. You need to set this to your own url. See the picture below. The REDIRECT_URI should be either the Canvas Page, the Canvas URL, or to some url within your Site URL.
For starting out, I recommend just setting the Canvas URL to be exactly the same as the REDIRECT_URI, and make sure the Canvas URL is within the Site URL (ie: if your Site URL is, you should make sure the Canvas URL is nested within the /example1/ folder.
- Change APP_ID to yours.
- Set PERMS to be the same as the one you use in Facebook.init in the Action Script side (Step 7).
- Change embed.swf with the name of your SWF file if necessary.
- Change the swf dimensions if necessary.
- Alert: The params variable within the Javascript handleLoginStatus() assumes that there's a query-string in your URL. If you are using static html or have no use for the query-string, then you should just remove all references to params (do NOT remove the whole line, just the + params part). E.g.:
top.location = ''+APP_ID+ '&scope='+PERMS+ '&redirect_uri=' + REDIRECT_URI;
Alternatively, you can always append a dummy query string at the end of your url.
So if my page is, I need to call it with, where ?test is the dummy query-string. Of course you can always alter the Javascript to handle this more elegantly. Failure to properly pass the redirect_uri will cause the app not to load after the login process.
- To test the movie, you need to run the swf from the html. You cannot use the test movie feature within Flash IDE because the movie needs to communicate with JavaScript. So you should publish the movie and test by opening the html.
For starting out, I highly recommend to upload all the published files to a server for testing the redirect() (and make sure your Facebook app Site URL and Canvas URL are set to that website folder). If you are testing locally and receive permission error, you can add your local machine to the Flash Settings to allow communication. If you got a blank page after login, make sure your REDIRECT_URI is set correctly and also check the note about params above.
Updates).
Links
- Example page living on my website: /tutorial/as3-open-graph-example-basic-1.5/index.html?test
- Example page (exact same code) living on Facebook: (If you want to redirect to go to the IFramed Facebook version, you should use PHP and add a query-string to indicate where the user comes from, then set the RETURN_URL accordingly).
- Source files of the above example: /tutorial/as3-open-graph-example-basic-1.5/as3-open-graph-example-basic-1.5.zip (note: the FacebookSession.as and FacebookJSBridge.as in the attached Facebook Graph API has been modified to support OAUTH2.0 — OAUTH2.0 is required since Dec 2011).
- Real application example: Easter Your Facebook Friend
Credit to the facebook-actionscript-api to which comes under MIT license.
Hi,
great tut, thank you for it. But it is not working in opera browser. Could you please fix it ?
Hi Prednizon, here’s a post about the Opera issue:
I did that thing with channel and also I have added this code after FB.init :
FB.XD._transport=”postmessage”;
FB.XD.PostMessage.init();
I found it here
It seems that it works in opera only partly. I mean when Im not logged in it prompts me to log, so it seems that script in html is working. But flash part is not working. Flash cant connect to FB and display name of logged user. I think this Facebook.init(APP_ID, handleLogin); is not working in flash. In other browsers it works.
Thank you for help
Hi Prednizon,
Try this example in Opera.
I tested it with Windows Opera and it worked for me (make sure to allow popup windows).
After your response, I added more explanation at the bottom of the article about what exactly needs to be changed:
Consider that the imports are correct and that this is my code:
public function LoadFB() {
Facebook.init(“158033120926253″, FBLoginHandler);
}
public function FBLoginHandler(success:Object, fail:Object):void {
trace(“logged in”);
}
The SWF never seems to get to that trace regardless of what I do. It doesn’t seem like init is doing anything. What gives?
Is it possible to access the Facebook API while developing in the FLASH IDE?
Otherwise I have to upload it every time and test it onine
Are you using Opera by chance? If so, see here for how to handle it:.
Otherwise, make sure you have replaced all the APP_IDs (in the SWF as well as in the html).
If you have a web-server running on your local machine, you can change your hosts setting — set 127.0.0.1 to your server name. Then you can test locally without having to upload every time.
Windows:
Mac:
[…] See also: Browsing Images From Local Computer To Post To Photo Album […]
Thanks for this great tutorial! I have been all over the internet and this was the first tutorial that actually worked with the Graph API in AS3. You rock!
Hi…my page keeps on coming up blank and I don’t know how to fix it. Could you please kindly take a look at my codes?
I apologies if it’s very simple, I’m very new to this but very keen to work on this. Hope this will help other newbies trying out this tutorial.
HTML:
var APP_ID = "XXX";
var REDIRECT_URI = "";
var PERMS = "publish_stream"; //comma separated list of extended permissions
function init() {
FB.init({appId:APP_ID, status: true, cookie: true});
FB.getLoginStatus(handleLoginStatus);
}
function handleLoginStatus(response) {
if (response.session) { //Show the SWF
$('#ConnectDemo').append('You need at least Flash Player 9.0 to view this page.');
swfobject.embedSWF("demo.swf?", "ConnectDemo", "550", "250", "9.0", null, null, null, {name:"ConnectDemo"});
} else { //ask the user to login
var params = window.location.toString().slice(window.location.toString().indexOf('?'));
//top.location = ''+APP_ID+'&scope='+PERMS+'&redirect_uri=' + REDIRECT_URI + params;
}
}
$(init);
muudles,
Why did you remove this line (in your html)?
//top.location = ‘’+APP_ID+’&scope=’+PERMS+’&redirect_uri=’ + REDIRECT_URI + params;
Without this line, you won’t get the prompt to login to Facebook.
Hi permadi,
I must have misinterpreted what you meant in your notes regarding that line. I’ve uncommented that line out now, but still no luck.
Hi muudles,
I am getting this error on your page:
{
“error”: {
“type”: “OAuthException”,
“message”: “Invalid redirect_uri: Given URL is not allowed by the Application configuration.”
}
}
Set the REDIRECT_URI to be the same as the Canvas URL in your Facebook Application settings (see the picture under the Notes section above).
Great tutorial!
Also check out this post for dealing with common problems in facebook login and init:
Thank you so much for this guide, it was very helpful for me!
if you are having trouble with
Facebook.init(APP_ID, handleLogin);
it might be because of the way you put your swf in your html
try using this one
You need at least Flash Player 9.0 to view this page.
swfobject.embedSWF("yourswf.swf", "flashContent", "640", "480", "9.0", null, null, null, {name:"flashContent"});
it worked for me!
Hi,
Great set of tutorials! But the example is not working for me…
Is it the same for other people?
Hi Jay,
The example app has been fixed.
Thanks for the heads up. It was erroneously updated during a recent update to handle OAUTH 2.
Hi,
Excellent
but I think the source files may have the same issues also. Can’t be sure though.
Hi there, first off great articles and tutorial. Much appreciated.
I am having an issue with my authentication in the HTML page though. It seems that whenever I try to access the page, and I am not logged in to FB, I am taken over to facebook to login as normal, but AFTER I log in to facebook, I am redirected back to the page with a giant hash on the end of my redirect url.
ie. “–o5X-q25L-JSz1_kgCz4yrHybUVy6hP3FZy83FUEMmOXqXrMHeXpkZog7Xrja0qwp_NOc1hL2wBal3MzJi8B6B68BWR2sXJ3AgHGrXl_WFub8w5OZUGFFnTa_h4Hw2Twipz-n9DtSMKo27VS1gfiz0#_=_”
When I am taken back to my app, the html page just goes into a crazy loop where it continually reloads the flash into the page and crashes my browser.
Any help or insight you may have would be greatly appreciated!
Thanks in advance
Send me the link to the page and I’ll see what I can find.
You can download a zip file at:[hidden]
Feel free to email me with any thoughts you might have… I am totally stumped!
Also thanks so much for the awesome site and tutorials. Dunno what I would do without you!
Hi Cam,
Change your html to just this (below). Your FacebookLoginForm inside Flash already took care of handling the login:
muudles,
Send a screen of your app settings.
Source: http://permadi.com/2011/02/using-facebook-graph-api-in-flash-as3-1-5/
Re: STL.NET news
- From: "Willy Denoyette [MVP]" <willy.denoyette@xxxxxxxxxx>
- Date: Tue, 14 Feb 2006 18:38:55 +0100
Andrew,
See inline.
Willy.
"Andrew Roberts" <MuayThai@xxxxxxxxxxxxx> wrote in message
news:e4E3V2WMGHA.2712@xxxxxxxxxxxxxxxxxxxxxxx
| Willy,
|
| I can understand where you are coming from but I think you don't
| appreciate the importance of this because you don't understand the STL
| (once you've looked at that, take a good look at Boost too). Any C++
| programmer who keeps their skills reasonably up to date will have come
| to rely on the STL to some extent (the same as any .NET developer relies
| on the BCL; imagine if a new .NET language was created but you only got
| half the BCL...).
|
I don't think you know where I'm coming from, so let me explain first. I
started back in '78 (until I left the dev business a few years ago) using
C/C++ as my primary language; my main task over the years was
designing/writing/porting low-level stuff at DEC for VAX VMS/OVMS,
Ultrix/Unix & NT4/W2K on Alpha AXP in ASM, C and later C++. Later I've done
some application-level development, and I learned to use and appreciate the
STL (and other template-based libraries). I'm not an expert, but I know
quite well what the STL means to the C++ community; now they have a
conformant STL in VC8, so what's the problem?
C++/CLI is a .NET language, and, as you said above, the key class library
for all .NET languages to rely upon is the Framework Class Library. Now, the
FCL is certainly lacking some features, and it's good to see some third
parties releasing rich container classes libraries, but that's not good
enough, what we need is some basic stuff added to the FCL. But, does that
mean that C++/CLI languages becomes "next to useless without" a "managed"
version of the STL (as was said by rellik)? I don't think so.
Now, when we look at the STL/CLR, AFAIK this is gonna be a C++/CLI-only
library; it's a template library, not an MSIL library. This may be great to
have for the C++ community (at least for those who are willing to embrace
C++/CLI), but it lacks cross-(.NET)-language support, which means it can't be
used (directly) from the other languages, so IMO it's not a first-class
citizen in .NET.
| There are a large number of highly talented C++ developers out there
| who are not using C++ for their .NET development, or maybe not doing
| .NET development at all, because they do not have access to their full
| set of standard libraries.
I know a number of C++ devs; some don't need .NET at all, because their
problem domains don't need it (device drivers & high-perf stuff).
But there are others who aren't willing to touch .NET at all; believe me,
not because they are missing their favorite libraries. One of my jobs of
the last two years was trying to make them change their minds, quite a
tough job, but we are moving forward.
| The point is there is no equivalent of the STL in the
| BCL; the Collections.Generic namespace provides a very basic set of classes,
| a very small subset of what the STL does.
|
| Last but not least this is a very very hard problem to solve, largely due
| to type identity issues, and consequently the STL/CLR was almost dropped.
| The reason this decision has been reversed is due to the huge demand for it
| from the community, so even though it's a serious piece of engineering on
| Microsoft's part to achieve, it is being done (as much as can be implemented).
| Now if that doesn't contradict your statement that developers don't care
| about it I don't know what does.
|
Sorry, you must have misunderstood my statement; what I meant to say was
"Microsoft doesn't care whether the vast majority of the C++ developers
embrace C++/CLI" as long as they use one of their products...
I can't comment on the rest of this paragraph; you seem to have other
information than I have.
Willy.
| Andrew
|
|
| "Willy Denoyette [MVP]" <willy.denoyette@xxxxxxxxxx> wrote in message
| news:%23PKs9tNMGHA.3896@xxxxxxxxxxxxxxxxxxxxxxx
| > IMO they don't care: C++ ISO, C++/CLI, C#, VB.NET, you name it, as
| > long as they use one of their products.
| >
| > Willy.
| >
| > "rellik" <nospam@xxxxxxxxxx> wrote in message
| > news:IzXHf.93$m13.60@xxxxxxxxxxxxxxxxxxxxxxx
| > | >
| > | > And how many (of the vast majority) will never embrace C++/CLI?
| > | > Do you think they will do so when STL/CLR becomes available?
| > | >
| > |
| > |
| > | Well, presumably Microsoft's goal is to get the vast majority of C++
| > | programmers on the PC compiling to C++/CLI at some stage.
| > |
| > |
| > |
| > | >
| > | > | 2. Managed types are basically incompatible with the standard
| > | > | library. (Alright, there's the gcroot kludge, and workarounds,
| > | > | but that's what they are: kludges and workarounds.)
| > | > |
| > | >
| > | > Not sure what you mean here; why do you compare "managed types" (or
| > | > CLI types to be exact) to a library like the STL?
| > | >
| > |
| > |
| > | I'm not comparing managed types with the STL. I'm saying managed types
| > | are incompatible with the standard library. The STL is the largest part
| > | of the standard library, therefore managed types are incompatible with
| > | the standard library. From your previous posts I presume you are not a
| > | C++ programmer, as your exposure to the standard library appears to be
| > | somewhat limited; however, one example of this incompatibility is the
| > | inability to use managed types in the standard containers.
| > |
| > |
| > |
| > | -Liam
| > |
| > |
| >
| >
|
|
.
- References:
- STL.NET news
- From: Andrew Roberts
- Prev by Date: Runtime library
- Next by Date: Re: Runtime library
- Previous by thread: Re: STL.NET news
- Next by thread: Re: STL.NET news
- Index(es):
Source: http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.vc/2006-02/msg00314.html
Alright, I made a simple little math game using a lot of what I've learned from the first 3 chapters of "Beginning C++ Game Programming", and it works fine the first time through the loop, but when I tell it that I want to play again it just keeps going through the loop and doesn't stop. I don't know what is wrong with it... I've tried adding cin.ignore(); after all my inputs but it didn't work, so I'm wondering if you guys can figure it out.
ps: didn't add the final part to check if points == 10 yet

Code:
// Simple Math Game
#include <iostream>
#include <ctime>
#include <cstdlib>
using namespace std;

int main()
{
    // seed our random numbers
    srand(time(0));

    int points = 0;
    int guess;
    int again = 'y';

    cout << "\t\t**********Welcome To The Math Quiz Game**********";
    cout << "\n\nHow to play: A random problem will be generated and the player has";
    cout << "\nto guess the correct answer to earn points. The player will get one";
    cout << "\npoint for each answer he/she gets right and will lose one point for";
    cout << "\neach answer he/she gets wrong. If the player scores 10 points he/she";
    cout << "\nwins the game.";

    while (again != 'n')
    {
        int num1 = (rand() % 100) + 1;
        int num2 = (rand() % 100) + 1;
        cout << "\n\nThe problem is: " << num1 << " + " << num2 << " ";
        int answer = num1 + num2;
        cin >> guess;

        if (guess == answer)
        {
            cout << "Correct! you get 1 point";
            points += 1;
        }
        else
        {
            cout << "Wrong! you lose 1 point";
            points -= 1;
        }

        cout << "\n\nyou have " << points << " points";
        cout << "\n\nwould you like to play again (y/n) ";
        cin >> again;
    }

    cout << "Bye";
    cin.ignore();
    cin.get();
}
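The likely culprit, for what it's worth: `again` is declared `int`, so `cin >> again` tries to parse a number. The letter 'y' cannot be parsed, the stream enters a fail state, and every later extraction silently does nothing, so the loop never reads an 'n' and never stops. Declaring `again` as `char` fixes it. A small sketch demonstrating the failure mode (using a stringstream to stand in for cin):

```cpp
#include <sstream>
#include <string>

// Helper mimicking `cin >> dest` against canned input; returns whether
// the extraction succeeded. Reading the letter 'y' into an int fails
// and leaves the stream stuck; reading it into a char succeeds.
template <typename T>
bool extractionSucceeds(const std::string& text, T& dest)
{
    std::istringstream in(text);
    in >> dest;
    return !in.fail();
}
```

So in the game, change `int again = 'y';` to `char again = 'y';` and the y/n prompt behaves as intended.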
Source: https://cboard.cprogramming.com/cplusplus-programming/73119-program-freaking-out-me.html
Getting started with the Office 365 Reporting web service
The Office 365 Reporting web service enables developers to integrate information about email traffic, spam, antivirus activity, and compliance-related events into their custom service reporting applications and web portals. All the reports available in the admin center, inside the downloadable Microsoft Excel spreadsheets, and those accessed through Windows PowerShell cmdlets are accessible using the Reporting web service. Users accessing the Reporting web service must have administrative rights in the organization. This article introduces the Reporting web service, provides an overview of its features and functionality, and includes a code sample to get you started using the service.
Last modified: February 20, 2014
Applies to: Office 365
In this article
Office 365 Reporting web service
Getting started
Accessing and viewing the reports
Your first report request
A little background
Your first reporting application
Reviewing the code
The service description document and MailFilterList report
Additional reports
ODATA query options
Next steps
XAML code for O365RWS_Simple MainWindow.xaml
C# code for O365RWS_Simple MainWindows.xaml.cs
You’ve probably heard that Microsoft Office 365 provides all the tools and capabilities that come with access to a massive server farm for SharePoint Server, Lync, Exchange Server, and always up-to-date clients. The new apps for Office and SharePoint are generating a lot of buzz as well. So if you're a developer new to Office 365, welcome to the proverbial cloud!
What you might not know is that inside the admin center is a set of mail flow, spam, and virus protection reports that show you, among other things, where all that spam is coming from, and who sends the most email in your organization.
If you are an enterprise developer or work in the IT department, you’re going to want these reports in your dashboard. So, let’s get started!
The first step is to be sure your Office 365 account permissions are set correctly, and to test them by using your browser to access the Reporting web service. Next, you’ll build a simple Windows Presentation Foundation (WPF) application in Visual Studio 2012 to issue queries and examine the results. Having a working knowledge of Visual Studio is helpful, but don’t worry if you’re new to it, the code is already written for you.
Once you have your first Reporting web service client up and running, we'll touch on the reports that are available, and tell you where to find information about ODATA2 query options. By the time you’ve read this article, you'll have a basic understanding of the Reporting web service and know enough to start creating your own reports.
To be able to see the reports, you need the right permissions in Office 365. Ask your organization administrator to add you to one of the administrator roles. For now, start at the lowest-level administrator, "Service Admin," because you’re going to store that account's password in a file temporarily. You might also ask your administrator to create a separate administrator account that you can use just for exploring the reporting system. Safer, cleaner, better all around.
In this article, when you see the account name userone@example.onmicrosoft.com, substitute your administrator account.
Once you're up and running with your new administrator account, view the reports in the Office 365 admin center. You'll see more data if your organization is active and has lots of users, but even small organizations can get a surprising amount of spam and malware in their email. The following screen capture shows an example "received email" graph for a small organization.
The first thing to know about the Reporting web service is that it's a REST web service. That stands for "Representational State Transfer." If you're not familiar with REST, think of it as using browser-style URLs and HTTP GET requests to retrieve data. When you call a REST web service, typically it returns a bunch of data rather than a Web page or a downloadable file.
To retrieve the "service description document," browse to the reporting.svc service root (the same https://reports.office365.com/ecp/reportingwebservice/reporting.svc address that the sample code later in this article builds).
The web site prompts you for your administrator credentials. If you're using Internet Explorer and it displays only a line of text at the top, it's probably asking if you want to display all the content. Choose the Show all content button, and you should get something that looks like this.
Congratulations, you've made your first properly-authenticated Reporting web service request!
Each of the collection elements in the XML service document indicates an Office 365 report that your administrator account can access. Which ones show up depends on your permissions. At this time (July, 2013), only a couple of reports are permission-restricted in that way, so as you code your dashboard, only a small fraction of the reports will be hidden from your Service Administrator-privileged account.
As you build your reporting system, the best way to know whether the user can access a particular report is to first check the service document returned for that user’s credentials to be sure the report is listed.
Another important part of the service is the MailFilterList "report." You can think of this as returning the pre-defined enum string constants used in many of the report queries. For example, if you're trying to get a report of all spam emails that were sent to the quarantine mailbox, your report query would include an ODATA query option such as $filter=EventType eq 'Malware'. That text string, Malware, is one of many in the list of named EventTypes, and those are returned by the MailFilterList report. We won't go into any more detail about the MailFilterList report, but as you dig deeper, you'll find yourself needing that information frequently. For more information, see MailFilterList report.
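Putting those pieces together, a request for quarantined malware messages combines a report URL with such a filter. A hypothetical sketch (the report name is left as a placeholder, since which reports accept an EventType filter varies):

```
https://reports.office365.com/ecp/reportingwebservice/reporting.svc/<ReportName>?$filter=EventType eq 'Malware'
```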
At this point, let's take a step back and see what's going on behind the scenes. The admin center also gets data from the Reporting web service.
Which brings up an interesting question: just how many ways are there to get the reporting data? There are four distinct ways you can retrieve reports. You already know about the web service and the admin center, but you can also download a customizable spreadsheet that gets its data from the web service, or use Windows PowerShell cmdlets. The spreadsheet and the admin center both call the Reporting web service, which in turn calls the Windows PowerShell cmdlets; you can also call those cmdlets directly. The only things that access the datamart are the Windows PowerShell cmdlets, which ensures that every report includes the same data, regardless of how you obtain it. Requests are throttled, but you're unlikely to be affected by that unless you're really overloading the reporting system.
Now that you know the basics of requesting reports, the next step is to create a Windows Presentation Foundation (WPF) application in Microsoft Visual Studio 2012. The XAML and C# code for this application are provided in the XAML code for O365RWS_Simple MainWindow.xaml and C# code for O365RWS_Simple MainWindows.xaml.cs sections at the end of the article.
Create a simple report request
Open Visual Studio 2012.
Create a new C# WPF Application project named O365RWS_Simple. Be careful to choose the Visual C# project template, because that's what the code is written in. Also, you need to use that project name; it's referenced in the code you'll copy, and things will break if you use a different project name.
Figure 4. Visual Studio create new WPF project dialog
When the Visual Studio Form Designer opens, go to the MainWindow.xaml file, and replace all the content with the XAML code in the XAML code for O365RWS_Simple MainWindow.xaml section at the end of this article. The form designer should look like the following example. Notice that the report name and options are set for you.
Figure 5. Visual Studio form designer
Open the MainWindow.xaml.cs file from Solution Explorer and replace all that content with the C# code from the C# code for O365RWS_Simple MainWindows.xaml.cs section at the end of this article.
Locate the following lines of code, and change the values of userName and passWord to that for the admin account you set up earlier.
Remove the password from the sample when you can. It’s never a good idea to save any password as text in a file.
Save the project (Ctrl+Shift+S).
Go back and verify that your user name and password are correct. This sample contains no error checking or exception handling, so it’s not going to give you useful information when something goes wrong.
Start the application in the debugger (F5). If all goes well, you will see the following.
Figure 6. Your first application running
As you can see, the report name is pre-populated with the MailTrafficTop report. This report returns information on the top recipients and senders of email in the organization. The options in this case are asking for the name and number of messages received by the users who’ve received the most email on a daily basis for the previous two weeks.
If you're sure the username and password are correct, click Go. Depending on how large your organization is and how much email it generates, this can take anywhere from a couple of seconds to a minute or so; ten seconds is not uncommon, so hang in there. Each report will return a maximum of 2000 entries, so your application will need to determine whether to make additional queries to get the complete report data. When it returns, you should get something that looks like the following.
Figure 7. First results in the sample
The document is just raw XML in Atom (RFC 4287) format. Take a look through it. It might look complicated, but it’s all pretty straightforward. This is an introductory article, so we’re not going to dive deeply into that XML. Feel free to copy it to your favorite editor. Visual Studio handles XML files nicely.
After you’ve reviewed the XML, try some of the other reports, or close the application. If you do try the other reports in the following sections, be sure to clear out the options, because they’re not the same for each report. Remember, this sample has no error reporting.
Let’s take a quick look at the code you copied earlier. First, it creates a .NET Framework Uniform Resource Identifier (URI) builder class UriBuilder. That class provides an easy way to construct the sometimes-complicated web address that’s sent to the Reporting web service.
Some important points to remember:
Always use HTTPS. The Reporting web service will only accept connections by way of HTTPS.
The service endpoint is reports.office365.com. There are some rare scenarios when it might be different, but that’s beyond the scope of this article.
The path portion always starts with ecp/reportingwebservice/reporting.svc/, followed by the report name.
The query contents, taken from our RwsQuery text box, consist of parameters separated by &. In the final URL there will be a ? between the end of the path and the start of the query, but the UriBuilder adds that for us.
Before you pass the string to an HttpRequest object, the query part has to be escaped, where spaces and special characters are turned into the familiar %20; format. That’s what the EscapeUriString method does.
After the URI is constructed, the code creates a new HttpWebRequest object and adds a Credentials object using the userName and passWord. Finally, it makes the call to the Reporting web service.
// Create the request object and add the credentials from above
HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(fullRestURL);
request.Credentials = new NetworkCredential(userName, passWord);

// [lines removed for clarity]

// Make the HTTPS request to the reporting server
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
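With the sample's default report and options, the two stages described above produce roughly the following URLs. This is a sketch: first the unescaped string UriBuilder assembles, then the escaped form EscapeUriString produces, with spaces turned into %20 (the $, &, =, and ' characters are left alone):

```
https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MailTrafficTop?$select=Name,MessageCount&$filter=AggregateBy eq 'Day' and Direction eq 'Inbound'

https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MailTrafficTop?$select=Name,MessageCount&$filter=AggregateBy%20eq%20'Day'%20and%20Direction%20eq%20'Inbound'
```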
The rest of the code formats the resulting XML so it will be readable when displayed in the RwsResults textbox. You might not need to do that in your applications.

For example, several of the reports return information about Exchange and data loss prevention (DLP) policies and rules. The mail administrator can customize and create new DLP policies and rules, and provide names appropriate for the organization. In addition to other pre-defined strings, the MailFilterList report provides a list of those policy and rule names. By using the names from the MailFilterList report, the application ensures that the user can select from the current set of configured policies and rules, and that the string comparisons in their queries will also use the right names. For more details, see MailFilterList report.
While the application you created certainly gets the job done, you might want to use Visual Studio to create a service reference that uses the Windows Communication Foundation (WCF). It’s not difficult, but it’s beyond the scope of this article.
So far we’ve talked about the reporting.svc document, mentioned the MailFilterList report, and even gotten data back from the MailTrafficTop report. That barely scratches the surface. The following table lists the available reports.
Each of these reports returns numerous fields of data and can be filtered, selected, and ordered as needed. For more information about report details, HTTP header, errors, and code samples, see Office 365 Reporting web service reference.
Let's return for a moment to the sample and discuss ODATA query options. ODATA2 is an industry standard with publicly available specifications. The sample contains a separate options box pre-populated with this string:
$select=Name,MessageCount&$filter=AggregateBy eq 'Day' and Direction eq 'Inbound'
That string contains two ODATA2 System Query Options, separated by an &: a select and a filter query option. The option name starts with a dollar sign "$", and must be separated from the value by an equals sign "=". These statements are functionally similar to SQL statements, and are oriented toward standard HTTP GET parameter syntax. The Reporting web service supports the following ODATA2 options:
$select= a list of comma-separated report columns to include in the report output.
$filter= an expression where a true evaluation will include the row in the output. This is a powerful option, but the syntax can be confusing. Read the ODATA2 specifications for this area, as complex reporting will require you to use this option frequently.
$top= a positive integer of the maximum number of rows to include. The maximum number of rows the web service will return in one request is 2000.
$orderby= specifies one or more columns to sort the results by. The desc keyword specifies descending order.
$format= accepts either Atom or JSON, and determines the report output syntax.
The ODATA2 filter options $expand=, $inlinecount=, and $skip= are not supported.
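As an illustration, several of the supported options can be combined in one query; a hypothetical request against the sample's MailTrafficTop report:

```
$select=Name,MessageCount&$filter=Direction eq 'Inbound'&$orderby=MessageCount desc&$top=10
```

This asks for just the Name and MessageCount columns of inbound-traffic rows, sorted with the heaviest recipients first, capped at 10 rows.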
Many reports allow you to precisely define the start and end of the reporting period and the report data granularity. Do this by using the StartDate, EndDate, and AggregateBy fields in the $filter= option. See How to: Specify reporting time spans for details.
In addition to the above system query options, other report settings are handled through HTTP headers:
Accept-Language takes a standard culture identifier and, if available, will return localized column names in the report output.
X-RWS-Version takes an Office 365 service version identifier. Currently the most-recent version is Office 365 service version 2013-V1. This can allow your applications to specify an older version to maintain compatibility as the service moves forward.
For more information, see How to: Use ODATA2 query options.
Now that you know how to get permissions to the Office 365 reporting system, the four ways the system can be accessed (admin center, spreadsheet, Windows PowerShell cmdlets, and the Reporting web service), have seen how to use simple C# code to access the web service, have had a glimpse at the breadth of available reports, and learned some of the ODATA query options for optimizing your system, check out some of the Office 365 Reporting web service SDK content and code samples for more in-depth information.
The user interface definition XAML code.
<Window x:Class="O365RWS_Simple.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <Grid>
        <!-- layout attributes omitted; the x:Name values are the ones the code-behind expects -->
        <TextBox x:Name="RwsReportName" Text="MailTrafficTop"/>
        <Button Content="Go" HorizontalAlignment="Left" Margin="623,37,0,0" VerticalAlignment="Top" Width="75" Click="Button_Click_1"/>
        <TextBox x:Name="RwsQuery" Text="$select=Name,MessageCount&amp;$filter=AggregateBy eq 'Day' and Direction eq 'Inbound'"/>
        <Label Content="Report Name" HorizontalAlignment="Left" Margin="31,10,0,0" VerticalAlignment="Top" Width="112"/>
        <Label Content="options, separated with '&amp;'" HorizontalAlignment="Left" Margin="162,10,0,0" VerticalAlignment="Top" Width="432"/>
        <TextBox x:Name="RwsResults"/>
    </Grid>
</Window>
The request-handling code for the application.
/*
 * Office 365 Reporting Web simple example
 * Copyright Microsoft, All rights reserved.
 *
 * This sample is for demonstration purposes only.
 * Not for use in a production environment.
 */
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Net;
using System.Text;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;
using System.Xml;

namespace O365RWS_Simple
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();
        }

        private void Button_Click_1(object sender, RoutedEventArgs e)
        {
            // IMPORTANT: Change these to your admin user setting
            // set this to your admin reporting account user name
            string userName = "userone@example.onmicrosoft.com";
            // set this to your admin reporting account password
            string passWord = "myPa$$wordI$$ecure!";

            // Construct the full REST request
            UriBuilder ub = new UriBuilder("https", "reports.office365.com");
            ub.Path = "ecp/reportingwebservice/reporting.svc/" + RwsReportName.Text;
            ub.Query = RwsQuery.Text;
            string fullRestURL = Uri.EscapeUriString(ub.Uri.ToString());

            // Create the request object and add the credentials from above
            HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(fullRestURL);
            request.Credentials = new NetworkCredential(userName, passWord);

            try
            {
                // Make the HTTPS request to the reporting server
                HttpWebResponse response = (HttpWebResponse)request.GetResponse();

                // Atom data is returned as XML, so let's make it human-readable
                Encoding encode = System.Text.Encoding.GetEncoding("utf-8");
                StreamReader readStream = new StreamReader(response.GetResponseStream(), encode);
                StringBuilder sb = new StringBuilder();
                XmlDocument doc = new XmlDocument();
                doc.LoadXml(readStream.ReadToEnd());
                TextWriter tr = new StringWriter(sb);
                XmlTextWriter wr = new XmlTextWriter(tr);
                wr.Formatting = Formatting.Indented;
                doc.Save(wr);
                string requestDataReturned = sb.ToString();
                wr.Close();

                // and then display the XML results
                RwsResults.Text = requestDataReturned;
            }
            catch
            {
                RwsResults.Text = "Something went wrong!";
            }
        }
    }
}
Source: http://msdn.microsoft.com/en-us/library/jj984321(v=office.15).aspx
What flexvault said: you should be safe from accidental overwriting if you put your functions in a separate package. However, you still have to trust your users not to do Evil Things; as far as I know there's no way to write-protect a namespace, so an ill-intentioned user can still screw with your code.
Edit: actually it's not "package something.pm" but "package Something", conventionally in a file called "Something.pm". User package names should not be all-lowercase.
In reply to Re: Checking for duplicate subroutine names
by mbethke
in thread Checking for duplicate subroutine names
by SirBones
Source: http://www.perlmonks.org/?parent=998749;node_id=3333
James,
The original code we donated was a wsdl2js tool. That tool dissolved as I
reimplemented its core as code in the rt/javascript project. That new
code in there can generate a Javascript client based on a service model.
In the rest of CXF, we have code to make service models from WSDL and
from Java.
While the ?js URL is a great idea, my feeling is that we need to also
provide tools (command-line and Maven) for these tasks.
I am not very sure of the appropriate modularity. On the javato side, I
can see that this could fit into the existing implementation of java2ws
pretty easily. Would we just add a -javascript argument to that tool?
Personally, I would prefer to have it be a separate command line, even
if it shared 99% of the code with the existing tool. However, that is
not a very well-developed preference, and if everyone else prefers to
just add syntax to java2ws, I will go there.
I haven't looked at wsdl2java, but I believe that it's a very thin layer
on the JAX-WS/JAXB RI, so I'm not sure how much sharing makes sense in
that case.
As for the wsdl extension, I would like to support a CXF-defined
namespace to carry some javascript-specific information around in the
wsdl. I think it would live at the level of a service. It would make
sense to worry about this \last/.
--benson
> -----Original Message-----
> From: James Mao [mailto:james.mao@iona.com]
> Sent: Monday, December 03, 2007 9:51 PM
> To: cxf-dev@incubator.apache.org
> Subject: Re: tools for javascript
>
> Oh, hit 'send' too quickly,
>
>
> > All the code generation is done. You can invoke it currently with
> > the ?js URL.
> >
>
> Is it wsdl2js?
> That's a good thing, we can reuse the rt/js part in code generation
>
> > 'All' I need to add is the command-line 'dump it to a file' in
> > addition to the live URL handling.
> >
>
> You can have your own Generator, it is OK not using the
> velocity templates, you can dump them directly, not a big deal
>
> > I also have a yen to add a new wsdl extension for the case where
> > someone explicitly controls the mapping from URI to javascript prefix.
> >
>
> I don't quite understand this part, but we handle the wsdl
> extensions as plugins as well; e.g. the wsdl:port is an
> extension plugin, so we can handle the port:address equally
>
> James
>
> >
> >> -----Original Message-----
> >> From: Glen Mazza [mailto:glen.mazza@verizon.net]
> >> Sent: Monday, December 03, 2007 3:35 PM
> >> To: cxf-dev@incubator.apache.org
> >> Subject: Re: tools for javascript
> >>
> >> Ouch, that doesn't sound like fun. But I would start with wsdl2js,
> >> look at the wsdl2java Java output velocity templates, and just
> >> re-edit those to output JavaScript instead.
> >>
> >> Next, during testing, I guess, see where such a 1-to-1 mapping will
> >> *not* work, and what workarounds you can create for that.
> >>
> >> However, I don't know what you're going to do for the JAXB-created
> >> classes (in the wsdl:types section). We don't have templates for
> >> those.
> >>
> >> Glen
> >>
> >>
> >> Am Montag, den 03.12.2007, 14:18 -0500 schrieb Benson Margulies:
> >>
> >>> It is time to create wsdl2js and java2js. I confess that I am
> >>> daunted by the task of starting this from zero. Is there any
> >>> possibility that the tool-experts would be willing to create some
> >>> shells for me?
> >>>
> >>
> >>
> >
> >
> >
>
>
As applications grow, a message queue system soon becomes the best way of achieving scalability. It is an obvious candidate for a cloud-based service, and Azure's Service Bus Brokered Messaging service is a robust and well-tried product. Mike Wood provides enough in this article to get you started.
There are many different flavors of queuing system out there: MSMQ, RabbitMQ, Amazon Simple Queue Service, IBM WebSphere and more. Windows Azure Service Bus Brokered Messaging is a queuing system that is a scalable, multi-featured messaging service hosted in Windows Azure, or available as part of the Windows Azure Pack in your own data center.
Microsoft offers two different queuing technologies in Windows Azure, and they can be easily confused. This article will focus on the Windows Azure Service Bus Brokered Messaging service, but there is also Windows Azure Storage Queues. Unless I indicate otherwise, I’ll be describing the Service Bus, or will directly refer to the other queue service as Storage Queues.
The examples in this article are written in C# using the .NET Client Library for Service Bus; however, the majority of the features of the Service Bus Brokered Messaging are exposed through a REST based API. The documentation for the REST API can be found online at WindowsAzure.com and there are also examples for how to use the Service Bus Brokered messaging with Node.js, Python, Java, PHP and Ruby.
What’s in a Queue?
A message queue provides asynchronous, decoupled communication between two or more bits of code. A producer can submit a message to a queue and know that it is guaranteed to be delivered to one or more consumers on the other side. This decoupling is a great way to buffer requests for systems that need to scale, and it introduces some resilience into your solution.
The advantage of using a message queue in an application is that the sender and receiver of the message do not need to interact with the message queue at the same time. They can work asynchronously. Messages placed onto the queue are stored until the recipient is able to retrieve them and act upon them. Messages can originate from one or more sources, often referred to as producers. The messages stay in the queue until they are processed, generally in order, by one or more consumers. If you want to speed up the processing you can usually add more consumers doing the processing.
Take for example an order system which sells widgets. The front end web site provides a catalog of the widgets for sale where a lot of visitors come to look at the widgets, comparing them and reading reviews. The back end code for the system knows how to process orders for widgets. Between the front end and the processing back end sits a queue. If a visitor purchases a widget, a message is sent to a queue and is processed by the back end.
We get several advantages from splitting apart the work of the producer of the messages from the consumer of those messages. Firstly, while it would be great if every visitor that came to the site bought a widget, it’s likely that isn’t the case. We can get better resource management if we can scale the two processes independently. Second, it also means that if, for some reason, our processing system goes down we can rely on the fact that the order messages coming in from the web site will be in the queue when the system comes back online. Finally, our system can also handle sudden spikes in the number of visitors who decide to purchase widgets. The time it takes to actually process the order is decoupled from the user, so if the site experiences a rush of purchases the orders can be captured on the queue and processed as the back end has time. The number of back end processors can even be increased to help speed things up if necessary.
Getting Started
At its core, the Windows Azure Service Bus Brokered Messaging feature is a queue, albeit a queue with quite a few very useful features. To get started, you will need to have a Windows Azure account. You can get a free trial account or, if you have a MSDN Subscription, you can sign up for your Windows Azure benefits in order to experiment with the features detailed in this article. Once you have an Azure account you can then create a queue.
Creating a Queue
To create a queue, log in to the Windows Azure management portal. After you are logged in you can quickly create a Service Bus Queue by clicking on the large New button at the bottom left of the portal.
From the expanding menu, select the ‘App Services’ option, then ‘Service Bus’, then ‘Queue’ and finally, ‘Quick Create’.
When using ‘quick create’, we need to provide a queue name, a location for the data and a namespace. You can look at the advanced options for creating a queue using ‘custom create’ later, but for now the ‘quick create’ will be fine.
Just as in .NET or XML, a namespace in Service Bus is a way of scoping your service bus entities so that similar or related services can be grouped together. Within a Service Bus Namespace, you can create several different types of entities: queues, topics, relays and notification hubs. In this article we will cover Service Bus queues in depth and mention Service Bus topics. The namespace will also be hosted in a given data center, such as East US, West US, etc. The data for that namespace will only be in that location.
If you don’t already have a namespace in the selected location when you use ‘quick create’, it will generate a default namespace name from the queue name plus “-ns”. You can modify that if you wish. If you already had one or more namespaces in the selected location, you can select an existing namespace as well. The namespace name must be globally unique, since it is used as part of the URI to the service; the queue name only has to be unique within the namespace.
Once you’ve filled in the three values, click ‘Create a New Queue’ at the bottom of the screen. The portal will then create the new queue and within a few seconds you should have a queue that is ready to use.
Retrieving the Connection String
Before we interact with the queue in code, we need a little more information from the portal, namely the connection string. The connection string will contain the URI and the credentials that we need to access the queue. In this way it is very similar to a connection string used by SQL Server.
Each Service Bus Namespace that is created is provisioned with a Windows Azure Active Directory Access Control Service (ACS) namespace (yes, that’s an impressively long name). Within this ACS namespace, a default credential called owner is created, but that credential will have full permissions on anything within the namespace. You can see this credential when looking at the namespace (not the queue) in the portal if you click on the ‘Connection Information’ icon in the command bar at the bottom of the screen. This credential is something you don’t want to use unless you are managing the namespace, and is certainly not something you would hand out to your partner who needs to send you messages on a queue. You can use ACS to create other credentials to secure your service bus entities, or you can also use Shared Access Policies which allows you to define the permissions and get a connection string. A discussion of ACS is beyond the scope of this article, so instead we will create a quick Shared Access Policy to use.
To create a Shared Access Policy for a queue you can select the queue from the management portal by clicking on the queue name, then click on the ‘Configure’ tab. On this tab you’ll see the ‘shared access policies’ section. Give the policy a name. You can name this whatever you wish, but it should be something that has some meaning. You could name it after the application that will use the policy, the type of permissions it comprises, etc. At the time of writing, you can create up to only twelve policies per entity (queue, topic, etc.), so naming the policy for a partner or client might not scale. If you need many credentials you should research using ACS credentials for your purposes.
In the screenshot below you can see that a Shared Access Policy is created that is named submitandprocess and was given the rights to Listen and Send. This means that any client using this policy will be able both to send messages to this queue and to listen to the queue, meaning that they can process the messages off the queue. Create a policy like the one you see below and click the ‘Save’ icon at the bottom of the screen.
After the Shared Access Policy is created the screen is updated to show you a primary and secondary key for that policy. These keys are both valid for the policy name and can be regenerated by clicking on the regenerate button next to each key. When you click regenerate you are effectively disabling any client that might be using that policy with the key provided.
These keys should be kept secret; otherwise, anyone with this information has access to whatever permissions were assigned to the policy. Don’t worry, I’ve already regenerated the ones you see above.
To get a copy of the connection string, you’ll need to switch over to the Dashboard tab for the queue and view the connection string information. Click on the ‘Connection Information’ icon at the bottom of the screen. Then, when you hover over the end of the connection string, you’ll get a copy icon. You can use this to get a copy of the full connection string.
It seems like there have been a lot of steps just to get a queue set up, but in reality this doesn’t take long at all. There are actually many different ways you can create a queue. For example, you can create queues on the fly in code as long as you have the credentials with the correct permissions to do so.
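As a sketch of that on-the-fly creation (assuming a connection string that carries Manage rights, which the submitandprocess policy above does not have; the queue name and lock duration below are illustrative):

```csharp
using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class CreateQueueOnTheFly
{
    static void Main()
    {
        string connectionString = "<YourManageConnectionStringHere>";
        NamespaceManager namespaceManager =
            NamespaceManager.CreateFromConnectionString(connectionString);

        // Only create the queue if it doesn't already exist.
        if (!namespaceManager.QueueExists("samplequeue"))
        {
            QueueDescription description = new QueueDescription("samplequeue")
            {
                // The default LockDuration is 30 seconds; 5 minutes is the maximum.
                LockDuration = TimeSpan.FromMinutes(1)
            };
            namespaceManager.CreateQueue(description);
        }
    }
}
```

The QueueExists check avoids an exception if the queue was already provisioned through the portal.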
Let’s get to some Code!
Now that a queue is created out there for us to send to, and receive messages on, we can start in with some code to do just that. For the first example we will use a C# console application that will send a message.
Using Visual Studio, create a C# Console application from the standard template. By default the created project will not have a reference to the Service Bus library, but through the wonders of NuGet we can fix that quickly. Right-click on the project and select ‘Manage NuGet Packages…’ from the context menu.
Once the Package Manager dialog is loaded, select the ‘Online’ tab and search for ‘Service Bus’. Select the Windows Azure Service Bus package from Microsoft and click ‘Install’. As of the time of writing version 2.2.1.1 was the newest stable version.
If you prefer to use the PowerShell Package Manager Console, you can also use the command ‘Install-Package WindowsAzure.ServiceBus’ to install the package. The package will install two assemblies: Microsoft.ServiceBus and Microsoft.WindowsAzure.Configuration.
Sending a Message
Open the program.cs file in your project and add the following statements to the top of the file:
using Microsoft.ServiceBus.Messaging;
Add the following to the Main method, making sure to include the connection string we copied out of the portal (remove the < and > as well):
string connectionString = "<YourConnectionStringHere>";
MessagingFactory factory = MessagingFactory.CreateFromConnectionString(connectionString);
//Sending a message
MessageSender testQueueSender = factory.CreateMessageSender("samplequeue");
BrokeredMessage message = new BrokeredMessage("Test message");
testQueueSender.Send(message);
Console.WriteLine("Message(s) sent.");
Console.WriteLine("Done, press a key to continue...");
Console.ReadKey();
You should be able to execute this and see that a message is sent. We’ll verify it actually went somewhere soon, but for now let’s look at the code. We create a MessagingFactory object using the connection string from the portal. The connection string contains the base URI of the namespace and the credentials in the form of the Shared Access Policy name and key. These credentials are used whenever communication occurs with the Service Bus to verify that the caller actually has rights to interact with the service. All of the communication is also secured at the transport layer, so it is encrypted going over the wire.
We use CreateMessageSender method from this factory object to create a MessageSender instance, which is what is used to actually send a message to the queue. The name of the queue is passed in as a parameter to the CreateMessageSender method.
If you look over the methods that are available to you on the MessagingFactory, you’ll also see a CreateQueueClient method. The MessageSender is an abstraction and we are using it in place of the QueueClient. Unless there is some functionality you absolutely must have from QueueClient, I highly recommend that you use the MessageSender abstraction. We will touch on why this abstraction exists later in the article. Note that creating a MessageSender can be an expensive operation, so it is best to create one and reuse it when possible.
The type we actually send is a BrokeredMessage, so the code next creates one of these. The constructor used here appears to be taking in the string “Test message”, but it’s actually taking in the string as an object and will serialize the object using the DataContractSerializer by default. There are also overloads where you can pass along a different XMLObjectSerializer to use.
The maximum size of message you can send is 256KB, with a maximum of 64KB for the header, including any metadata. The client library will break up the message into 64KB chunks to actually send it over the wire, so be aware that the larger messages will incur more transactions. Be careful when serializing objects into messages in that their serialized size will be larger than the object size in memory. If you need to send larger messages you may want to look at storing the message somewhere and sending a pointer to the data over the service bus.
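You can see this size inflation for yourself by running an object through the DataContractSerializer directly, which is the same serializer BrokeredMessage uses by default. The OrderItem type here is invented purely for illustration:

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;

[DataContract]
public class OrderItem
{
    [DataMember] public string WidgetName { get; set; }
    [DataMember] public int Quantity { get; set; }
}

public class SerializedSizeCheck
{
    public static long MeasureSerializedSize(OrderItem item)
    {
        var serializer = new DataContractSerializer(typeof(OrderItem));
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, item);
            return stream.Length;
        }
    }

    static void Main()
    {
        var item = new OrderItem { WidgetName = "Sprocket", Quantity = 3 };
        long size = MeasureSerializedSize(item);

        // The XML envelope adds element names and namespaces, so the
        // payload is considerably larger than the raw field data.
        Console.WriteLine("Serialized size: {0} bytes", size);

        // Guard against the 256KB Service Bus limit before sending.
        if (size > 256 * 1024)
            throw new InvalidOperationException("Message too large for Service Bus.");
    }
}
```

A pre-send check like this is cheaper than catching the oversized-message error from the service.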
Finally, we call Send on the MessageSender, passing the instance of the message. This method will deliver the message to the queue. If no exception occurs the message is successfully delivered.
Retrieving a Message
The next step is to retrieve the message. We’ll create a new C# Console application to act as the consumer of the messages. In your Visual Studio solution add a new Project for the consumer of the messages. Once the new project is created, then use the NuGet package manager to add the same Service Bus package that you did to the first project.
Just as before, add the following using statement to the top of your program.cs file for the consumer:
using Microsoft.ServiceBus.Messaging;
Add the following code to the main method of the new project, again ensuring to add your connection string:
string connectionString = "<YourConnectionStringHere>";
MessagingFactory factory = MessagingFactory.CreateFromConnectionString(connectionString);
//Receiving a message
MessageReceiver testQueueReceiver = factory.CreateMessageReceiver("samplequeue");
while (true)
{
using (BrokeredMessage retrievedMessage = testQueueReceiver.Receive())
{
try
{
Console.WriteLine("Message(s) Retrieved: " + retrievedMessage.GetBody<string>());
retrievedMessage.Complete();
}
catch (Exception ex)
{
Console.WriteLine(ex.ToString());
retrievedMessage.Abandon();
}
}
}
This code is not that different from the previous code when it comes to the interactions of setting up connections to the Service Bus. The only real difference is that we are creating a MessageReceiver and performing a Receive to retrieve a message from the queue within a loop. The code is performing a Receive within the loop so if you run just this console application you will see it write out the message (or messages) that are in the queue. It is very common to see message processing code in an infinite loop like you see above, or only breaking out of the loop if it has been instructed to stop processing by an outside source.
In the example above we are using the parameterless overload of Receive, which is a synchronous, blocking call. Execution of our code will not continue until either a message is actually received from the service bus or an exception occurs. There is another Receive overload which takes a TimeSpan as a parameter, which allows you to add a client-side timeout as well. This is useful if you want to break out of the receive loop from time to time to verify that you should still be processing messages. For more ways to handle retrieving a message, you can research the async versions of the Receive method and even the more event-driven approach using the OnMessage method of the MessageReceiver.
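A minimal sketch of that timeout-based loop, assuming the factory and queue name from the earlier examples (the shutdown flag is a stand-in for whatever stop signal your service uses):

```csharp
// Poll with a 10-second client-side timeout so the loop can check a
// shutdown flag between receives. Assumes 'factory' was created as in
// the earlier example.
MessageReceiver receiver = factory.CreateMessageReceiver("samplequeue");
bool keepProcessing = true;

while (keepProcessing)
{
    BrokeredMessage message = receiver.Receive(TimeSpan.FromSeconds(10));
    if (message == null)
    {
        // Timed out with no message; a real service might check a
        // cancellation token or configuration flag here.
        continue;
    }

    try
    {
        Console.WriteLine(message.GetBody<string>());
        message.Complete();
    }
    catch (Exception)
    {
        message.Abandon();
    }
}
```

Receive returns null when the timeout elapses without a message, so the null check is what distinguishes "no work" from "work received".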
Once we have the BrokeredMessage instance, we can retrieve the content of the message by using the GetBody<T> method. If we used an XMLObjectSerializer different than the default we would need to pass an instance of the correct serializer into the GetBody call so that the object can be correctly retrieved. In this simple example, the code retrieves the string from the message and writes it to the console; however, this message could represent order details or just about anything.
As a message is retrieved from the queue, it is pulled in one of two modes: either ReceiveAndDelete or PeekLock. You can control which mode you use when you create the MessageReceiver. For example:
MessageReceiver testQueueReceiver = factory.CreateMessageReceiver("samplequeue", ReceiveMode.ReceiveAndDelete);
By default, a message is retrieved using the PeekLock mode of retrieval, which is what the original code example is doing. This means that the message is locked on the queue so that it cannot be retrieved by another consumer and a peek is performed so that the consumer is handed a copy to process. Once the code has finished processing, it calls Complete on the message. This notifies the service bus that this message is completed and can be permanently removed from the queue. Likewise, if something goes wrong during the processing and an exception is thrown, then the code calls Abandon on the message, notifying the service bus that the message could not be processed and it can be retrieved by another consumer (or possibly even the same consumer).
By using PeekLock, we ensure that our message will be on the queue until it is processed at least once. Each message that is removed from the queue using PeekLock is locked for a period of time which is specified by the LockDuration of the queue itself. This value is set at the time that the queue is created. The maximum value is five minutes, but the default value is 30 seconds. Since we quickly created our queue for this sample, it has the default value of a 30 second lock. If you need to, you can change the lock duration for a queue after it is created from the Management portal or from code.
When a BrokeredMessage is pulled from the queue using PeekLock, it is assigned a GUID as a LockToken. This token is only valid during the lock duration. If for some reason the code takes too long to process the message and the lock expires, then when the code calls Complete it will receive a MessageLockLostException, since the token is no longer valid. The message might even have already been handed off to another consumer to process by that time. If necessary, you can also call RenewLock on the message instance while processing, which will extend a still-valid lock by the time set for the LockDuration on the queue. In this way you can keep extending the time until you are finished processing, if necessary.
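A sketch of renewing the lock during long-running work; workSteps and ProcessStep are hypothetical placeholders for whatever multi-stage processing your consumer performs:

```csharp
// Extend the lock while doing work that may outlast the queue's
// 30-second LockDuration. Assumes 'testQueueReceiver' from the
// earlier receive example; 'workSteps' and 'ProcessStep' are
// hypothetical stand-ins for your own processing.
using (BrokeredMessage message = testQueueReceiver.Receive())
{
    for (int step = 0; step < workSteps.Count; step++)
    {
        ProcessStep(workSteps[step]);   // long-running work

        // Re-arm the lock before it can expire; this throws
        // MessageLockLostException if we were already too late.
        message.RenewLock();
    }
    message.Complete();
}
```

Renewing after each step keeps the extension calls cheap relative to the work being protected.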
The second receive mode is ReceiveAndDelete, which does exactly what it sounds like it does: once the message is received by a consumer, it is immediately removed from the queue. This is referred to as ‘At Most Once’ delivery. This saves the overhead of the two-phase update to process a message; however, if something were to happen during processing and the consumer failed to process the message, then that message would be lost. You might use this approach if you decide that you don’t need to process every message that comes through the system and it is acceptable if some messages are lost. If you take this approach, it would be a good idea to have detailed logging to determine just how many messages you might be losing.
A Word about Idempotency
When you were reading the information above you might have questioned the phrase “processed at least once.” That might sound very risky to some people, and if you aren’t careful it is possible that it is very dangerous. In a distributed system many things can fail or run into issues, so it is possible that a message is picked off the queue to be processed multiple times before it is fully completed. If this is the case you have to understand what that means to your system.
The word ‘idempotent’ means that the operation can be performed multiple times, and beyond the first time it is performed the result is not changed. An example of an idempotent operation is a database script that inserts data into a table only if the data isn’t already present. No matter how many times the script is executed beyond the first time the result is that the table contains the data. Idempotency such as this can also be needed when working with message processing if the messages could potentially be processed more than once.
Some message-processing will be inherently idempotent. For example, if a system generates image thumbnails of a larger file stored in BLOB storage it could be that it doesn’t matter how many times the message is processed; the outcome is that the thumbnails are generated and they are the same every time. On the other hand, there are operations such as calling a payment gateway to charge a credit card that may not be idempotent at all. In these cases you will need to look at your system and ensure that processing a message multiple times has the effect that you want and expect.
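One common mitigation is to track the IDs of messages you have already processed and skip duplicates. The sketch below models the idea with plain strings and an in-memory set; in production the processed-ID store would need to be durable, and the IDs could come from the MessageId property of the BrokeredMessage:

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of de-duplicating consumer logic. In production the
// set of processed IDs would live in durable storage, not memory.
public class IdempotentProcessor
{
    private readonly HashSet<string> processedIds = new HashSet<string>();

    // Returns true if the work ran, false if the message was a duplicate.
    public bool Process(string messageId, Action work)
    {
        if (!processedIds.Add(messageId))
            return false; // already handled; charging a card twice would be bad

        work();
        return true;
    }
}

public class Demo
{
    static void Main()
    {
        var processor = new IdempotentProcessor();
        int charges = 0;

        processor.Process("order-42", () => charges++);
        processor.Process("order-42", () => charges++); // redelivery: skipped

        Console.WriteLine(charges); // prints 1
    }
}
```

The same shape works whether the duplicate comes from an expired lock, an Abandon call, or a transient failure after the work completed but before Complete succeeded.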
But Wait, There’s More!
The Service Bus Brokered Messaging is so much more than a simple queuing technology, and it goes well beyond the very basic example above. There are many additional features, each of which could easily have an article dedicated to it. Below is a very brief description of some of these features:
Dead Lettering—Messages that cannot be processed successfully, or that expire before being processed, can be moved to a special sub-queue called the dead-letter queue, where they can be inspected and handled separately rather than simply being lost.
Deferring—You may run into a situation where you want to defer the processing of a given message. You can do this using the Defer method on the BrokeredMessage instance. When you call Defer, the service bus will leave the message on the queue, but it will effectively be invisible until it is explicitly retrieved using the Receive overload that accepts a sequence number. You will need to read the SequenceNumber property from the message instance and keep track of it in your own code in order to retrieve the message later.
Deferring does not work in the ReceiveAndDelete mode as the message has already been removed from the queue.
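The defer-then-retrieve round trip can be sketched as follows, reusing the receiver from the earlier example:

```csharp
// Sketch: defer a message and pick it up again later by sequence number.
// Assumes 'testQueueReceiver' from the earlier receive example.
long deferredSequenceNumber;

using (BrokeredMessage message = testQueueReceiver.Receive())
{
    // Not ready to handle this yet; remember where it is, then defer it.
    deferredSequenceNumber = message.SequenceNumber;
    message.Defer();
}

// ...later, retrieve exactly that message back:
using (BrokeredMessage deferred = testQueueReceiver.Receive(deferredSequenceNumber))
{
    Console.WriteLine(deferred.GetBody<string>());
    deferred.Complete();
}
```

If the sequence number is lost, the deferred message becomes unreachable through normal receives, which is why it must be persisted somewhere by your own code.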
Retry Policies— As with any system, there may be errors when you perform an operation against the service bus. Some of these errors will be transient in nature, like a networking hiccup or timeout. When this happens you don’t want to lose the message, or stop your processing if the error is actually recoverable. In these cases you can configure a Retry Policy on the MessagingFactory, MessageReceiver or MessageSender instances. By default a retry policy is defined for you, but you can substitute your own as well.
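Swapping in an explicit policy is a one-liner; the backoff values below are illustrative, not recommendations, and the sketch assumes the factory from the earlier examples:

```csharp
// Replace the default retry policy with an explicit exponential backoff.
// Assumes 'factory' is the MessagingFactory created earlier.
factory.RetryPolicy = new RetryExponential(
    TimeSpan.FromSeconds(1),   // minimum backoff between retries
    TimeSpan.FromSeconds(30),  // maximum backoff between retries
    5);                        // maximum number of retries
```

Senders and receivers created from the factory can also carry their own RetryPolicy if different operations need different behavior.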
Sessions— In those cases where you wish to send messages greater than 256KB, or if you simply want to send messages that need to be processed together by the same consumer, you can use the sessions feature. This is an advanced feature of Service Bus Brokered Messaging and even includes the ability to store the state of a session as you wait for more messages from that session to arrive.
Topics & Subscriptions— This is an extremely powerful feature of Service Bus Brokered Messaging in which you can create a publish and subscribe distribution. A topic is a special type of queue. Messages delivered to a topic are then delivered to any subscription that is signed up to receive them. You can even apply filters based on metadata in the message headers to provide routing of messages. The message producers deliver to a topic and message consumers receive messages from a subscription. At the code level these are different entities than the QueueClient, which is why the abstraction of the MessageSender and MessageReceiver is useful. When using the abstractions your code doesn’t have to care if it is dealing with a regular queue or a topic/subscription.
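A sketch of the topic/subscription shape; the entity names and the "Region" property are invented for illustration, and the code assumes the connectionString and factory from earlier (with Manage rights for the entity creation):

```csharp
// A topic with a filtered subscription.
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

if (!namespaceManager.TopicExists("orders"))
    namespaceManager.CreateTopic("orders");

// Only messages whose Region property equals 'EU' reach this subscription.
if (!namespaceManager.SubscriptionExists("orders", "euOrders"))
    namespaceManager.CreateSubscription("orders", "euOrders",
        new SqlFilter("Region = 'EU'"));

// The sending code is unchanged: a MessageSender works against a topic too.
MessageSender topicSender = factory.CreateMessageSender("orders");
BrokeredMessage orderMessage = new BrokeredMessage("EU order");
orderMessage.Properties["Region"] = "EU";
topicSender.Send(orderMessage);

// Receivers address a subscription with the 'topic/subscriptions/name' path.
MessageReceiver subscriptionReceiver =
    factory.CreateMessageReceiver("orders/subscriptions/euOrders");
```

Because both sides use MessageSender and MessageReceiver, the consumer code from the queue example runs unchanged against the subscription path.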
Summary
By using queues for distributed systems, you make them more resilient to fluctuating workloads. It is worth your time to get familiar with the features and capabilities provided by Windows Azure Service Bus Brokered Messaging so that you’ll recognize when it’s time to use one.
Brokered Messaging has a lot to offer, far more than we’ve been able to cover in this article. There are also Partitioned Queues, Client Side Paired Namespaces, custom serialization, AMQP support and more. A great place to continue your learning about Service Bus Brokered Messaging after you read this article is watching a deep dive video on Channel 9 by Clemens Vasters titled “Service Bus Messaging Deep-Dive”. It’s about two hours long, so make some popcorn.
What is PaSS
PaSS is a Web content syndication format based on its sister standard RSS.
PaSS is an acronym for Portable and Simple Syndication. RSS stands for Really Simple Syndication and RDF Site Summary.
PaSS is a dialect of XML and is based on RSS 2.0. All documents containing PaSS must conform to the XML 1.0 specification, as published on the World Wide Web Consortium (W3C) website.
RSS 2.0 does not have the ability to include its elements in other XML formats through a namespace. PaSS allows RSS 2.0 elements to exist as vocabularies in other XML documents. PaSS is not an XML file format.
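As an illustration of the idea, RSS 2.0 vocabulary could appear inside another XML document under a namespace prefix. Note that the namespace URI below is a placeholder invented for this example, not the one defined by the PaSS specification:

```xml
<!-- Hypothetical example: the xmlns URI is a placeholder. -->
<report xmlns:pass="http://example.org/pass/1.0">
  <summary>Weekly update</summary>
  <pass:item>
    <pass:title>New widgets released</pass:title>
    <pass:link>http://example.org/widgets</pass:link>
    <pass:description>Details about the release.</pass:description>
  </pass:item>
</report>
```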
PaSS is being proposed by DeveloperDude.
Current Version
Previous Versions
More on PaSS
PaSS notes
From a conversation between DeveloperDude and KenMacLeod.
PaSS leaves RSS issues to be addressed separately.
(corollary, RSS issues are not considered issues, by some.)
PaSS is not an API
PaSS proposes an embeddable RSS so that, for example, it can be passed as a SOAP document literal within an API
If an API were to be developed using PaSS,
it would parallel the Atom API
at this point, it would be a SOAP-only API, which is why PaSS is embeddable RSS
logically, it would be "the same", just using a different document literal
SQLTables
SQLTables can be executed on a static server cursor. An attempt to execute SQLTables on an updatable (dynamic or keyset) cursor will return SQL_SUCCESS_WITH_INFO indicating that the cursor type has been changed.
SQLTables reports tables from all databases when the CatalogName parameter is SQL_ALL_CATALOGS and all other parameters contain default values (NULL pointers).
To report available catalogs, schemas, and table types, SQLTables makes special use of empty strings (zero-length byte pointers). Empty strings are not default values (NULL pointers).
The SQL Server Native Client ODBC driver supports reporting information for tables on linked servers by accepting a two-part name for the CatalogName parameter: Linked_Server_Name.Catalog_Name.
SQLTables returns information about any tables whose names match TableName and are owned by the current user.
When the statement attribute SQL_SOPT_SS_NAME_SCOPE has the value SQL_SS_NAME_SCOPE_TABLE_TYPE, rather than its default value of SQL_SS_NAME_SCOPE_TABLE, SQLTables returns information about table types. The TABLE_TYPE value returned for a table type in column 4 of the result set returned by SQLTables is TABLE TYPE. For more information on SQL_SOPT_SS_NAME_SCOPE, see SQLSetStmtAttr.
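A sketch of toggling that attribute in C, assuming the driver-specific constants from the SQL Server Native Client header (sqlncli.h); error handling is omitted for brevity:

```c
#include <sql.h>
#include <sqlext.h>
#include <sqlncli.h>   /* SQL_SOPT_SS_NAME_SCOPE and its values */

void list_table_types(SQLHSTMT hstmt)
{
    /* Ask the driver to resolve names against table types
       instead of ordinary tables. */
    SQLSetStmtAttr(hstmt, SQL_SOPT_SS_NAME_SCOPE,
                   (SQLPOINTER)SQL_SS_NAME_SCOPE_TABLE_TYPE, SQL_IS_UINTEGER);

    /* Column 4 of this result set will now contain "TABLE TYPE". */
    SQLTables(hstmt, NULL, 0, NULL, 0, NULL, 0, NULL, 0);

    /* Restore the default scope afterwards. */
    SQLSetStmtAttr(hstmt, SQL_SOPT_SS_NAME_SCOPE,
                   (SQLPOINTER)SQL_SS_NAME_SCOPE_TABLE, SQL_IS_UINTEGER);
}
```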
Tables, views, and synonyms share a common namespace that is distinct from the namespace used by table types. Although it is not possible to have a table and a view with the same name, it is possible to have a table and a table type with the same name in the same catalog and schema.
For more information about table-valued parameters, see Table-Valued Parameters (ODBC).
// Get a list of all tables in the current database.
SQLTables(hstmt, NULL, 0, NULL, 0, NULL, 0, NULL, 0);

// Get a list of all tables in all databases.
SQLTables(hstmt, (SQLCHAR*) "%", SQL_NTS, NULL, 0, NULL, 0, NULL, 0);

// Get a list of databases on the current connection's server.
SQLTables(hstmt, (SQLCHAR*) "%", SQL_NTS, (SQLCHAR*)"", 0, (SQLCHAR*)"", 0, NULL, 0);
A strange issue that can easily be filed as a “Ghost in the machine” surfaced with a user in Exchange not too long ago. The problem was peculiar because, without digging deep into the user’s Exchange account, we may not have found the root cause and would have settled for a Band-Aid fix (either that, or settled for recreating the account in Exchange). So, what is this problem?
First, let me clarify a quick term that not everyone is familiar with: OOF. It is an acronym for “Out Of Facility”, better known today as “Out Of Office”. Here’s a link describing this better:
Ok, now back to the mysterious issue…
UserA sets up their Out Of Office reply in Outlook. When someone emails UserA, their OOF notification goes to work and replies to the sender with the OOF message already configured, as it should. Here’s where the fun begins… UserA then receives an NDR (Non-Delivery Receipt) email for every OOF notification that gets sent out. Upon looking at the NDR, it appears that Exchange is attempting to forward the email to an address that does not exist within the environment (UserB) while also taking care of the OOF notification to the original sender.
Well, immediately this sounds like some sort of rule configured at the client is causing the mayhem. However, when troubleshooting the client, no rules were found with this configuration. Using PowerShell in the Exchange Management Console against UserA’s mailbox didn’t yield any results either. The ‘Band-Aid’ here was to create a new rule that would immediately delete the NDR when received… but this was not a resolution, only a ‘Band-Aid’. This required much more in-depth troubleshooting.
Deep-Dive Analysis:
This is where my colleague and I began analyzing using MFCMAPI () to further investigate our Exchange 2013 environment. Now for those who are unfamiliar with MFCMAPI, it “provides access to MAPI stores to facilitate investigation of Exchange and Outlook issues.” (Sounds like the right tool to use in this instance to me!)
After giving our Exchange Administrator account full delegation rights of UserA, we opened up MFCMAPI and pointed it to that mailbox. From there, we right clicked on “Inbox” and
selected the “Open associated contents table” option.
On the new window, sort the columns by “Message Class” and look for any “IPM.Rule.Vers…” In our example, UserA shows to have 2 rules on their Exchange account. One is visible in
Outlook (our ‘Band-Aid’ rule deleting the NDRs), but notice that here in Exchange using MFCMAPI we can see a SECOND rule!
So, clicking on the Message Class rule on the top pane will reveal some very critical information regarding this anomaly. There are two things of interest that we will be looking
for (again, keep in mind that when troubleshooting this type of issue a user can have multiple rules in Exchange! This is why the following part of the analysis is crucial… to select the RIGHT rule!)
The first piece of the puzzle on this example is the Property Name: “PR_RULE_MSG_PROVIDER…”
Notice that the property has a value of: “MSFT:TDX OOF Rules”. Remember the acronym OOF from earlier? Looks like we’re on track! Now on to the next Property Name: “PR_RULE_MSG_STATE…”
Notice, this time, the Smart View column: “Flags: ST_ENABLED | ST_ONLY_WHEN_OOF”. Thinking analytically, this particular view identifies the action of the rule: “only when OOF”.
Let’s go a little further to make sure we’re looking at the right rule. Double click on the Property Name: “PR_RULE_MSG_STATE…” On the new window, look for the following Property
Name: “PR_EXTENDED_RULE_MSG_CONDITION…” and double click the property.
Here it is! On the middle pane is where we found UserB’s name next to SMTP. This confirms the ghost rule! To be safe, we exported the rule before deleting it. After removing, as
expected, the issue went away!
In this example, we were able to find the Ghost rule in Exchange using MFCMAPI. Again, please keep in mind that these types of issues may not be common in every environment. Use
these steps for troubleshooting with caution! Also ensure that you have the correct rule before making any modifications. As always – BACKUP, BACKUP, BACKUP!! You can export the rule in question prior to deletion. If this does not solve the problem, you can
import the rule back. Hope this helps anyone else experiencing “ghosts” in Exchange!
This article was highlighted in the TechNet Wiki Ninja Top Contributors weekly blog , Most Updated Article Award, 15/06/2014 - blogs.technet.com/.../top-contributors-awards-analysing-messages-ghosts-amp-mysteries-a-strange-case-to-solve-plus-small-basic-home-projects-and-asking-for-forgiveness-or-permission.aspx
http://social.technet.microsoft.com/wiki/contents/articles/24792.exchange-oof-ghost-rule.aspx
:electron: Build cross platform desktop apps with ASP.NET Core (Razor Pages, MVC, Blazor).
Build cross platform desktop apps with .NET 5.
Well... there are lots of different approaches to getting a cross-platform desktop app running. We thought it would be nice for .NET devs to use the ASP.NET Core environment and just embed it inside a pretty robust cross-platform environment called Electron. Porting Electron to .NET is not a goal of this project; at least we don't have any clue how to do it. We just combine ASP.NET Core & Electron.
The current Electron.NET CLI builds Windows/macOS/Linux binaries. Our API uses .NET 5, so our minimum base OS is the same as for .NET 5.

WebHostBuilder-Extension:
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseElectron(args);
            webBuilder.UseStartup<Startup>();
        });
Open the Electron Window in the Startup.cs file:
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    ...

    // Open the Electron-Window here
    Task.Run(async () => await Electron.WindowManager.CreateWindowAsync());
}
To start the application, make sure you have installed the "ElectronNET.CLI" package as a global tool:
dotnet tool install ElectronNET.CLI -g
The first time, you need to initialize the Electron.NET project. Type the following command in your ASP.NET Core folder:
electronize init
Complete documentation will follow. Until then, take a look at the source code of the sample application:
Electron.NET API Demos
In this YouTube video, we show you how you can create a new project, use the Electron.NET API, debug an application, and build an executable desktop app for Windows: Electron.NET - Getting Started
Here you need the Electron.NET CLI as well. Type the following command in your ASP.NET Core folder:
electronize build /target win
There are additional platforms available:
electronize build /target win electronize build /target osx electronize build /target linux
Those three "default" targets will produce x64 packages for those platforms.
For certain NuGet packages or certain scenarios you may want to build a pure x86 application. To support those things you can define the desired .NET Core runtime, the electron platform and electron architecture like this:
electronize build /target custom "win7-x86;win32" /electron-arch ia32
The end result should be an electron app under your /bin/desktop folder.
This repository consists of the main parts (API & CLI) and its own "playground" ASP.NET Core application. Both main parts produce local NuGet packages that are versioned with 99.0.0. The first thing you will need to do is run one of the buildAll scripts (.cmd for Windows, the other for macOS/Linux).
If you are looking for pure demo projects, check out the other repositories.
The problem working with this repository is that NuGet has a pretty aggressive cache; see here for further information.
In version 0.0.9 the CLI was not a global tool and needed to be registered like this in the .csproj:
After you have edited the .csproj file, you need to restore the NuGet packages within your project. Run the following command in your ASP.NET Core folder:
dotnet restore
If you still use this version you will need to invoke it like this:
electronize ...
Electron.NET requires Node Integration to be enabled for IPC to function. If you are not using the IPC functionality you can disable Node Integration like so:
WebPreferences wp = new WebPreferences();
wp.NodeIntegration = false;

BrowserWindowOptions browserWindowOptions = new BrowserWindowOptions
{
    WebPreferences = wp
};
ElectronNET.API can be added to your DI container within the Startup class. All of the modules available in Electron will be added as Singletons.
using ElectronNET.API;
public void ConfigureServices(IServiceCollection services)
{
    services.AddElectron();
}
https://xscode.com/ElectronNET/Electron.NET
This is a Mac-specific question, but in essence it is unix/bsd system-level stuff. I suppose
there are people here who can help me.
I'm using fcntl() to lock regions of files. It works well on local volumes, but fails
on shared volumes.
This line produces error 45, ENOTSUP on shared volumes:
Code:
fcntl(fp, F_SETLKW, &lk)

Here is an example I wrote to test it. When I try it in Terminal, if I give it ~ as a param all
works well. If I use some folder on a shared volume (on either Mac or Windows) it fails:
There's even some Tech note by Apple that shows how to test if a volume supports file locking, and all the volumes I checked passed that test.

Code:
#include <stdlib.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <unistd.h>
#include <fcntl.h>
#include <string.h>
#include <errno.h>

int main (int argc, const char * argv[])
{
    int fp, bytesGone, len;
    char tmpStr[256], buff[256];
    struct flock lk;

    if (argc != 2)
        exit (1);

    if (argv[1][strlen(argv[1])-1] == '/')
        sprintf (tmpStr, "%s%s", argv[1], "MyFile");
    else
        sprintf (tmpStr, "%s/%s", argv[1], "MyFile");

    if ((fp = open (tmpStr, O_RDWR)) < 0) {
        if ((fp = open (tmpStr, O_RDWR | O_CREAT, 0666)) < 0) {
            printf ("Open file error!\n");
            exit (1);
        }
    }

    lseek (fp, 0L, SEEK_SET);
    if (write (fp, tmpStr, len = strlen(tmpStr)+1) != len)
        exit (1);
    lseek (fp, 0L, SEEK_SET);
    if (read (fp, tmpStr, len) != len)
        exit (1);
    lseek (fp, 0L, SEEK_SET);

    lk.l_start = 0;
    lk.l_len = 8;
    lk.l_pid = 0;        // getpid ();
    lk.l_type = F_WRLCK;
    lk.l_whence = SEEK_CUR;

    if (fcntl (fp, F_SETLKW, &lk)) {    // or F_SETLK, both give ENOTSUP
        printf ("Lock error: %d\n", errno);
        exit (1);
    }

    printf ("All is well! Len = %d\n", len);
    return (0);
}
Any idea how to solve this or find a workaround?
https://cboard.cprogramming.com/linux-programming/109089-mac-file-locking-fcntl-fails-shared-volumes.html
Today we will learn how to convert a String to a char array, and then a char array back to a String, in Java.
String to char array
A Java String is a stream of characters. The String class provides a utility method to convert a String to a char array in Java. Let's look at this with a simple program.
package com.journaldev.util;

import java.util.Arrays;

public class StringToCharArray {

    public static void main(String[] args) {
        String str = "journaldev.com";
        char[] charArr = str.toCharArray();
        // print the char[] elements
        System.out.println("String converted to char array: " + Arrays.toString(charArr));
    }
}
The above program produces the following output:

String converted to char array: [j, o, u, r, n, a, l, d, e, v, ., c, o, m]
String.toCharArray internally uses the System class arraycopy method, as you can see from the method implementation below:

public char[] toCharArray() {
    char result[] = new char[value.length];
    System.arraycopy(value, 0, result, 0, value.length);
    return result;
}
Notice the use of the Arrays.toString method to print the char array. Arrays is a utility class in Java that provides many useful methods to work with arrays. For example, we can use the Arrays class for array search, sort, and copy operations.
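For instance, here is a quick sketch (my own example, not from the original tutorial) exercising a few of these Arrays helpers on a char array:

```java
import java.util.Arrays;

public class ArraysDemo {

    public static void main(String[] args) {
        char[] letters = {'d', 'b', 'c', 'a'};

        // sort the array in place
        Arrays.sort(letters);
        System.out.println(Arrays.toString(letters)); // [a, b, c, d]

        // binary search requires a sorted array; returns the index of the key
        System.out.println(Arrays.binarySearch(letters, 'c')); // 2

        // copy only the first two elements
        char[] copy = Arrays.copyOf(letters, 2);
        System.out.println(Arrays.toString(copy)); // [a, b]
    }
}
```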
char array to String
Let’s look at a simple program to convert char array to String in Java.
package com.journaldev.util;

public class CharArrayToString {

    public static void main(String[] args) {
        char[] charArray = {'P', 'A', 'N', 'K', 'A', 'J'};
        String str = new String(charArray);
        System.out.println(str);
    }
}
The char array to String program simply prints: PANKAJ
We are using the String class constructor that takes a char array as an argument to create a String from a char array. However, if you look at this constructor implementation, it uses the Arrays.copyOf method internally:

public String(char value[]) {
    this.value = Arrays.copyOf(value, value.length);
}
Again, the Arrays.copyOf method internally uses the System.arraycopy native method:

public static char[] copyOf(char[] original, int newLength) {
    char[] copy = new char[newLength];
    System.arraycopy(original, 0, copy, 0, Math.min(original.length, newLength));
    return copy;
}
So we can clearly see that System.arraycopy() is the method being used in both the String to char array and char array to String operations. That's all for the String to char array and char array to String example programs.
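As a final aside (this snippet is my addition, not from the original programs), the two conversions round-trip cleanly; String.valueOf(char[]) is another common way to build the String, equivalent to the new String(char[]) constructor used above:

```java
public class RoundTrip {

    public static void main(String[] args) {
        String original = "journaldev";            // sample input
        char[] chars = original.toCharArray();     // String -> char[]
        String copy = String.valueOf(chars);       // char[] -> String
        System.out.println(copy.equals(original)); // true
    }
}
```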
Reference: toCharArray API Doc
https://www.journaldev.com/766/string-to-char-array-to-string-java
Full Text:

BRADFORD COUNTY TELEGRAPH
Official County Publication, Established In 1879
Volume Seventy-Three, Starke, Florida, Friday, Aug. 3, 1951, Number Three

Post 56 Loses 2-1 In Area Opener

Post 56 Juniors dropped the first game of the tournament Wednesday night, 2-1. The first game of the tournament, against the Post 9 Generals of Jacksonville at Bradford Park, had to be postponed from Tuesday night because of a muddy infield.

Early Start Is Made On Spring Primary Battleground As Three Hats Are Tossed In Ring For Clerk Vacancy

Things 'Perk Up' In City Election

Larry Redden ... in the top of the second ... he slapped out a triple and a double in his two official times at
of Stark reminded mayor's race and River'sat aero ed il ,t on Paul an error gave them an unearned run tu tie' than week by City Clerk Square Off AgainInriiiiihent also for the pout of chief of police it, third on up the game. The Jacksonville Still, without T \ third Carl' John that Saturday, August opposition are In ,I win\ ants, Starke team gave up two errors but they 't cumbent .city clerk Carl Johns, 11. lit the leadllnn for f th n stole horn well and allowed I j and i,and ', to second I were spaced no ,e ; ,. registering. for eligibility to 1I tWO'CCUijn,12' whose. terma trew d.n,> scoring. expire Oct. IMiVaul L. vote In the elec- Bridgesand Jacksonville September city M >ff a I slat Wm. T. Jacksonilj pick The Area Tournament was , .r. ed to be a three-team affair, with" l!: ?-- Hon.At Jake p. Roberts, one-time*"Wy. scored , nine that limn. vtere will electM l-ksonvllle the winners of the Third and Fifth or and state representative, pald'j I the frurth when Long Districts playing a series with the ' _C major, city .clirk rhlrf of the $60.22 qualifying fee ? :- Tuesdayto { te( right field and pnllre and two Counollliu-u. Hr. triple local team for the Area Champion become a candidate for the post L. second out of Candidate, for election must , in the However, the Third District .J ''i ship. LT- now held by J. R. Wainwright. L "Buck)' Brockirfford qualify with the city clerk onor when failed 'to qualify a team and the ? J Wainwright qualified last . Jack Cars- Friday.Voters out to I ; brforv AIlIe"l In orderto will b& determined - Area Championship J'a . are anticipating a - possible | their ret base by a beat two games out of _.01-_ get nsuneft printed on who lost his ;:, the election ballot. repetition of the heated Thonma tournament play three series, with the Post 9 Gen- \. ;:,. 
Urle \\lgglim (left) pick an : campaign these two contender me In out the Generals a* erals.If t: auto |>itrt from lie fthclf, anCharl ., : I I a staged in 1949 when Wainwright lilimton strike, and Galneavlll in the Generals won again last ".......;"''IjI.J'" .. }" .'" JH rhy (right/ ) iwrveariiHloiiirr a f t tOLD 1 Boards' nosed out Roberta by a narrow 15- ompetltlon. Only four of night (the game "was not complete tI" In his. grocery Htorr. ," ( BudgetsWill ..frk' vote margin. i-onvllla( men went down at press time) they will annex /- I Parking meters, which were In- route WedaesIt the Area title and proceed to atallnd during Roberts' administration - 1 Strikeout!tthllf he struck out 29' the State Tournament In Wauchu- 11118 CLASSMATES WILL HE POLITICAL RUNNING MATES 1I Be AiredCitizens I 4,. were one of the big the la August 6. However, if the local I .. ?'a games he pitched In Issue. back in 1949, but are apparently - ournament. He walked nine took last night's game, a rubber A harbinger of forthcoming fireworks in the "bijr year"primary lint has lived here since. He was now an "accepted insti it hit two batsmen.BRedilen session will be played Friday electiof next string: came this week with three entries -[ -diiraterl, In the public schools ofI of Bradford County tution" and could hardly be revived - was the big man night to determine the champion- in what will undoubtedly prov.e to be a heated contestto Duvnl County, but graduated from vere romi-'H.d this week tha as a vote-Influencing factor. I Site for the local nine as 'ship. fill the vacancy caused by the retirement of longtime .Alurhua High School In 1930.I both the Board of County Commissioners -----r. _' .a--.. ,-'II New issues will doubtless be proJected - _____ 5 Clerk of the Circuit Court A. J. Thomas. I Following his graduation from and the Board of Pub 4'alnwrlghtrTr' however, to make this one iontenu-, 10 uate. are cnanes-,. 
I high, school he worked for the lic Instruction will meet In the of the most colorful contests on ling Meters Pay For ThemselvesShow A. Darby Charles-Hardy, and Orle 1 and was educated In the public I hang 911 mill In Brooke and in court house, Monday, August 6, ... ,''"'' tate ticket. A. Wiggins, all well known and schools here graduating from June, 194O accepted the position for the purpose of bearing any The return of an old timer, form longtime residents of Starke and BUS In 1923. lie attended the I nf Rchnftl Finance Officer with the complaint. and discussing ,the er Chief of Police A L Alvarez. Net Profit In Two Years BnaJford County. University of Florida three years'' ttourd of Public Instruction where I 1931-32:! budget. to the political scene will also - Wiggins, 44, has been a resident and Tulane University In New lie has remained except for the I Each Board will meet In its respective contribute Its share of Interest . - of Starke and Bradford County Orleans for a year and a halt time .spent In service. room at 10 a.m. Kull details In the forthcoming September bal I 42 years hi. family having moved studying medicine. He Is Finance Officer. of the local ', of the new budget. may be obtained loting. here from his) birthplace In Baket He served one term on the City American Legion Post SO an.I' from the office of the Board Alvarez will vie for the position I CIty 'Clerk Carl Johns is County. He Is, at present, parts Council from 1911 to 1943 and i vice-commander' elect, Secretaryof of Public Instruction! and the office lie held some 30 ,years prior to his pictured here operating the manager for the Andrew. MotorCo. served as Mayor from 1943 until, the Starke Lodge. of the B. P. of the Clerk of the Circuit retirement on pension In 1947 He \ automatic counter that totals of Starke and has held that 191T. He Is a member of the Baptlst O. E., a Mason, a member of the Court. I paid his $192 qualifying fee Wednesday - I\C Church and a Mason. 
He hi I 140 8 honor organisa the nickels and pen- position 26 years. He was educat. & Legion ; I to become a full fledgedCididate' up been engaged in the grocery bust tin,, member of the EasternStar : for the in Starke'sparkhig ed In the public schools of Bradford position now nies dropped County and graduated fiom ness In Starke since 1938. Director of District 2 of the Test Well\ Seems held by W. E. Thornton. Thornton meters. BHS In 1923. He married the former Virginia/ School Finance Officer's Association qualified Monday. In September 1940 and I of the Fugate Secretary-Treasurer Ills only political venture heretofore 'has four children, Virginia Jean, II Bradford County Development Other Incumbent who have the To Work 0. K. was one term on City qualified to date are City Clerk 10; Michael, ,7; Phyllis, ..1'anlt Enterprises and Administrative Council from 1911 to 1913. He Is I Carl r----- I Johns and Couucllmen Lee, 2. He has.one i.terfliving| Assistant. l to the Bradford< County: I member of the Bradford Lodgeof Bridges' ; and a Jacluton. In Starke Mrs. Myra PeeK* .j Branch of the Florida Jnatltute.. -I It I The af.' ' Free and Accepted Masons and tesfwetl Sampson LakeisrinUihadryand a L It has probablytieen some TlrnfenUice !He married the former Juanita .! tiI ASsociate Sunday School Superln- t la.ju.wv .draiain at L \ twir men WUM such 4deotiC&jrecords IlItitlle3l"-.r? '8tark in 1940 and 30 4-H Club Girls] Arsf "very ,| tradent-and. a'TrusterW! the I I. satisfactorily" according: ,tc campaigned against each has four children, Joyce 10; Church of Starke. ''Rel an announcement this week' by At Cherry Lake Baptist I:other In the county. Both Darby Charles IV :3 I; Franklin. 2; and I _' 12'L_ atives In this county Include his' Hollia V. I Knight, county attorney -- and Wiggins, are members of the has Donald, 8 weeks. He twc father and mother, Mr. 
and Mrs I: I The 10-Inch well readied a depth Off ton a week's trip to Cherry :name church giaduated from BHSi Itobrrt - brothers In Bradford County I En-Mayor Joe' of 330 feet the Lake are 30 Bradford , H. B. Wiggins; one brother, the time the maximum allowed County 4-H at same serve on Eldon of Starke and Douglas Wiggins and a sister, Mrs. George I i Masonic Examining Hoard and I by the contract and all drilling .., Glut girls who left Monday from I a-.:. :z_ \ i Hampton. ceased with the : the Court House, Williams, all of Hampton shaft ending In a Accompanying served on_the City Council at the I| Incumbent ThVinns, woo has the glrN local large bed of rock. are lenders Miss He Is married to the former same time, porous r held the office for a record-break Knight said that the Board of Olaeta Green, Mrs. Millie! Sim I Laura Lee Hemingway, Stark. I Hardy, of the contend- . : youngest < mon and Home five children, Peggy ing 20 years without oppositionat County Commissioners will take Demonstration and have " i they 'crs, l Is 33 years old and a veteran It' Agent Miss Dorothy Ross. tile matter of \; Ann. 19; Bobby. 15; Jimmy, 10; the polls, has announced that up adding the siphon two and half In the a I,lof years attachment to the well at their These girls were active in 4-H ; Gene Ti: and Susan, 4. he will retire at the end of this .. I Marine Corps He was born in Club work last meeting Monday. This will speed year and have t ':"1tI' """ Darby, 43: Is a life-long resident Jacksonville but moved to Bradford term and will not be a candidate the flow of anxiously - water Into the well by anticipated this annual I and Bradford County 'County (Brooker) In 1936 forTe-electlon. .W. of Starke about tour times* he said a* It event. They are camping with the .. eliminates air space In the casing.A girls from Nassau .and! Putnam . ,_.. ,__4il ;... .. 
counties '-- ', Three of Four Men FALL PRACTICE STARTS AUGUST 13 concrete banked canal from meter the lake into the well will be The camp Is owned and owed the operated amount-still 'Iz'nl The of Starke will prob- , which will be paid Charged With Beating built shortly with a screen acrossIt by the State Agricultural Extension I pieD J to learn that the manufacturer meters adorning Call shortly Is $433.88.Probably : Released Under Bond Tornadoes Face Rugged Schedule to keep out all debris and trash"There's I beautiful Servkand U located In a no doubt but what spot near Madison. [ and Walnut Streets for the biggest single Item two years are about to of the Three of the four men arrestedfor just the test well will help considerably Activities enjoyed by the girls out special their property. Carl Johns to be paid for yet the beating: of W. J. Jordan With Only Six Lettermen To ReportThe during early fall floods" are swimming a fuir recreation fund is the new Mr. meter rrk, said this week that parking the Pelican Knight said. AM to drilling program, crafts one study class. , 'an now easily payoff I "white way lighting" to be Installed three weeks ago at more wells, he said that the coun. stunt night, singingbanquet, and once owed on the meters on Temple Ave. as soon as the Club are out on bond, according to I BHS Tornadoes will face I Hobbs reminded tan that the ty has only enough money to drti: candle llghtlsg. 111 show a net profit on widening Is completed. This involved an announcement thin week by the same teams again this year as Tornadoes i lost 14 lettermen last about one a ,year, but they estimate Purpose of the camp are to Iperntioii since' being In- $3.785.83 and Is quite apiece Cecil Sewell deputy sheriff. they did last year with the excep- year through graduation, leaving It will\ take three or tour provide opportunities for develop ''n June, 1949. 
of change for the 100 metersto The two brothers Earl and Bud I I tion of' two according an announcement only six lettermen to return. It more of the 14-Inch type to completely ing leadership responsibility a h they were originally in- turn over to the city. Austin, were released last week this week by Coach J. should have been the other way eliminate- the flooding FAMILIAR FACE: better understanding of the 4-H to after paying for around he said, graduate.sIx. and Operation of the well wouid program, and [ help relieve some of then However( even C. Rice was released a wide range of and Ellsworth I I C. Hobbs. The two newcomers, replacIng keep 14.Lettermen. have been held . parking problems the the new street lights and payingoff this week._ All were required tc i up due to the sewerage Back In Politics valuable experience. have entirely the City Alachua and Crnc..t'. returning this )'ear' from the Starke sewerage gone above and befall the meter 11.000 bond. the amount set Transportation Is furnished by of duty and turned will still show a net profit of post Judge E. K Perrymau.I I City, are New Smyrna Beach and Include Carl Johns, 190 pound disposal. plant entering the lake. Former' Chief Alvarrs th* school board and l the Boardof f"t profit to the City. The f2.281.41 from their two years of by Jordan County who had been placed In St. Leo of Jacksonville. YVillliitun. fullback; Delano Thomas, end; However, P. L. Bridges, City County Commissioner. over the two year perl- operation. This has been done hospital followingthe I I will also be the Tornadoes' openIng Edward Parker tackle; Arley Mc-, Councilman reported that overhauling . reaihtul I a total of $12.ijIflfti .- with no addition to the a Jacksonville moved home lastweek game opponent again this Rae, halfback; Neil Crawford,: : the sewage disposal working force In keeping the beating and is was reported recuperating I year. 
halfback; and Jimmy Rowe, plant Is now complete and con Fire Department Saved $98,705 I IFor coot of the meters meters operating and figures The 1951: schedule Is: quart.r ack. Hobbs also named tamination of the water! flowing I 00. or $58 apiece not about $30O a month. Now after satisfactorily Sept. 21 Wllliston at Starke Wiley Coley Earl Alvarez, Bobby from Alligator CreekInto Sampson - Installation coats. Then they're paid oft the approximately Sept. 28 Seabreeze at Starke Crews and Ocl McGee aa boys Lake Is now almost Impossible. Property Owners In Past YearThe I .< .ti on the first 8O 16.000 a year that will be taken Rites SaturdayFor Oct. 3 Palatka at Starke who played a ,lot of ball last year I 'al 1320 or $4 apiece. from the meters will be clear profIt Mrs. Clara Peek Oct. 12 P. K. Yonge In Gain. and showed Iota of promise, but The siphon attachment will actually - lr. the City "foxed- them and Olty official hope enable vile didn't play enough to letter. perform two'functions. Beside Strke Fire Department answered ., elimination of volunteer firemen |"t 20 ami Installed them them to leave the ad valorum tax Mra.\ Clara Otwell Pe"k 72, widow -' Oct. 19 New Smyrna Heath Other squad members of last speeding the flow of water :89.calls! and saved $94,703: and the hiring of all fulltlme per". vtnlf approximately $80. off permanently. Peek. Sr. of Starke' there year team expected back are Into the well It will serve as an worth of -property this year according conne: !. Julian of Oct. 26 Open Milton Mott Rand&IIMol'ton. Billy automatic shutoff when the to a statement this week He died, Thursday, July 26. In Newton Nov. 2 Dvland there Weldon, Gene Shaw. Edwin, Pre- water level of the lake gets too by Fire Chief C. W. Pickett. Actually said that this damage could I53 Spent On Major Projects Ga. where she had made her Nov. 9 St. Leo In Starke vail Tom McClane, Benton Futch low to make drainage necessary this means from'.July 1. 
AUG. 3, 1951. BRADFORD COUNTY TELEGRAPH, STARKE, FLORIDA

ONLY $4,295 LOST IN 59 FIRES ANSWERED BY CITY DEPARTMENT DURING TWO-YEAR PERIOD

Though $104,000 worth of property was involved in the 59 fires answered by the Starke fire department during the past two years, only $4,295 worth was lost as a result. This means that approximately $99,705 was saved by the fire department. No fire alarms at all were answered during July of this year.

Of the 59 alarms answered, 14 were house and building fires, four were automobile fires, 20 were grass fires, and 14 were trash and miscellaneous fires; seven of the calls were out-of-town runs. The value of houses and buildings involved totaled $102,000, with $4,927 lost and $97,073 saved by the department's quick work. An automobile valued at $2,000 caught fire, but the actual damage done amounted to only $370. Naturally, no damage was listed on the grass, trash, and out-of-town fires.

Pickett said that Starke, as a result, has the best fire insurance rate obtainable under its present system of fire protection, and that losses would be decreased even more if people would be more prompt and careful about reporting fires and eliminating fire hazards. As to promptness, he cited an instance a few months ago when a young boy came into town at night to report a fire and, instead of ringing the bell at the fire house, went looking for a policeman. By the time the fire was finally reported, the house was half-way burned to the ground.

Pickett said that he would like to resume his inspection of Starke houses and buildings for fire hazards, as was made in 1949, but just doesn't have the personnel to do it. He said the only way it can be done now is to ask one of the two other full-time members of the department to do it on their day off. Pickett praised the work of James Lewis, city councilman in charge of the fire department. He said that Lewis, the only councilman on the fire department committee, has worked hard for the department, and this has had a large bearing on the reduction of fire losses.

CITY SPENDS $48,000 ON MAJOR IMPROVEMENTS

Expenditures for major improvements made by the City from October 1, 1949 to August 1, 1951 included: fire house, $3,497 spent on materials (asphalt, cement pipe, street signs, etc.); sidewalks, $2,242.25; Temple Avenue white way lights, $3,323.30; police car, $1,576.50; park improvements, $1,229.93; grading machine, $3,949; audit of city books, 1949-1950; paving contract note, $943; sewer extensions, $3,931; and water extensions, $7,770. Total expenditures in 1950 were approximately $103,000, of which $48,000 was spent for what the Council considers major improvements; the balance is spent in maintenance, salaries, and routine operating costs.

MRS. S. L. PEEK

Mrs. S. L. Peek was born October 28, 1879, in Rockledge, and was married in 1908. She made her home in Starke for many years, was associated with the Bradford County Chapter, American Red Cross, office in Starke for several years, and was active in other civic affairs.

Surviving are one son, Julian Peek, Jr., of Fort Lauderdale; one brother, Rayburn Otwell of New York City; and two sisters, Mrs. A. Cook of Lake Worth and Mrs. J. [illegible] of Jacksonville.

Funeral services were held Saturday afternoon at 5 o'clock in the DeWitt C. Jones Chapel, with Rev. Thos. G. Mitchell of the Methodist Church officiating. Interment was in the family plot in Kingsley Lake Cemetery. Pallbearers were Comer Perryman, W. N. Long, H. Richard, Hamilton Ritch, [illegible] Sapp, and Darby Peek.

FALL FOOTBALL PRACTICE CALLED FOR AUGUST 13

Fall practice at Bradford High School has been called for Monday, August 13, at 3 p.m. Coach Hobbs said that all boys reporting are required to bring their own shoes and shorts. When asked how this schedule compared with last year's, Coach Hobbs said that this year's is the "roughest" since he took the coaching reins here. Most of the boys played some last year, Hobbs said, and they've all got their fingers crossed. The schedule includes: Nov. 16, Ketterlinus, in Starke; Nov. 21, Green Cove Springs; Nov. 30, Lake Butler.

How much the one well at the Hall plantation will help the farmers east of Starke no one knows, but it will probably be a big help this season.

WM. ENOS HAYES RECEIVES LAW DEGREE; ADMITTED TO BAR

William Enos Hayes, son of Mr. and Mrs. E. W. Hayes of Starke, received his LL.B. degree from the University of Florida Law School in graduation exercises held Wednesday morning of last week. Seven of the group, including Hayes, received their certificates for admission to the bar, and Judge John A. H. Murphree of the Eighth Judicial Circuit administered the oath of admission. Mr. Hayes plans to return to the University for one semester of post-graduate work, specializing in parliamentary law, probating of wills, and court procedure.

LUNCHROOM WORKERS RECEIVE TRAINING AT CAMP O'LENO

Fifteen Bradford County school lunchroom workers attended the school lunchroom training program held at Camp O'Leno near Lake City last week. Mrs. Myrtle Jones of Starke was the only one to stay through the entire two-week course. Those attending were: Starke, Ida Mae Burner, Mattie Creshom, Alice Jenkins, Ruby Cooper, Minnie Johns, Flora Bell Nettles, and Myrtle Jones; Hampton, Helen [illegible] and Louise Norman; and Lawtey, Pearl Campbell, Lola Wyrick, Bessie Terry, and G. J. Phillips.

LIONS HEAR DR. NEAL

Dr. Ralph Neal, pastor of the Community Church, was guest speaker at the last meeting of the Keystone Heights Lions Club, stressing the importance of getting along with other people. In order to do this he advised getting the other person's viewpoint; seeing good instead of bad in others; passing on no malicious gossip; and trying to bring out the best in other people. The Lions are planning a chicken dinner as part of a campaign to raise funds for lighting the football field at Melrose school.

RECENT BIRTH

Mr. and Mrs. J. R. Kenny announce the birth of a son, August 1, at St. Vincent's Hospital, Jacksonville.

BRADFORD COUNTY TELEGRAPH
ESTABLISHED 1879. E. L. MATTHEWS, OWNER-EDITOR
Published every Friday and entered as second class matter at the Post Office at Starke, Fla. Subscription: $2.50 annually; $1.50 for six months.

FISH'N'S FUN
By Bill Baker

Going fish'n, fellows? Haven't you forgot something? Of course, you forgot your rain coat. Yes sir, better take your raincoat and sou'wester with you these days. The old weather man is going crazy trying to keep up with the rain squalls. A fellow can't even spend two hours on a lake without a rain coming up real quick.
For the last hundred or so years fly fishing has been considered strictly for mountain streams and northern ponds. In the last few years this has been proven untrue. Fly fishing has become very popular in our rivers and lakes. Brother fisherman, it is really paying off, too. If you want to have some fun, just hang about a four pounder on the end of a fly rod! The bream or shellcracker will give you some fun, too.

You do not have to have an expensive rig to use either. R. E. Jones of Bradford County and L. M. Gaines of Starke both report very good catches using a fly on the end of a line tied on a 12 foot cane pole. They both say that almost any type of bug will show good results. R. E. has his own creation that is his favorite (maybe you can get him to tell; I can't). Percy Sullivan is using a "Shimmy Popper" bug with great success.

Howard Norman of Heilbronn Springs says that fly fishing is child's play. We don't blame you a bit, Howard; not after the big fellow you weighed in last Thursday night. This monster weighed in at 12 3-4 pounds. All agreed that the fish should have weighed about 15 pounds. Fly tackle would have been almost useless with this whopper.

Fishing around these parts is getting real simple. Just get in your boat, ride around the lake, and pick up the big fish with your hands. Mr. Haynor of Jax picked up a 9 1-2 pounder in Lake Geneva that had choked to death on a speckled perch. Seems the perch tried to run out his gills and stuck. Two days later, Mrs. T. W. Baker of Baker's Fish Camp picked up a 10 3-4 pounder in Lake Hutchinson. She is still wondering what made this big one come to the top and stay. She had to hit him with an oar to subdue him.

Do you want to go fishing? Just hop in your buggy, dump in the gear and get under way. Haven't the time? That's easy; just take the time. Fishing is the best time-passer and mind-rester sport in the world. Hook on a shiner or worm and let the world go by. There will be less nervous breakdowns and heart failures in your history; that we guarantee.

The fishing is getting better with this heat wave finally broken. The fish are like people during the hot weather, just lying up in some cool spot doing nothing. If you can stand the rain these days the fish will bite pretty good.

R. O. Douglas of Dunedin reported in a nice fat 7 pounder caught on a Paul Bunyan 66 in Hillcrest Lake. He says he lost several more nice ones. Ernest Davis of Lake Geneva caught a big fellow weighing in at 9 1-2 pounds. This finny beauty was snagged on a jointed Pikie. These big fellows, along with numbers of small pan size, were reported taken on plugs. Leo Hill of Keystone Heights topped the record this week out this way using a "jigger pole." His prize winner weighed in at 14 1-2 pounds. This is the kind of fish every fisherman is looking for every time he goes out, but seldom catches.

Many fishermen have thrown their arms off trying to catch these fish in the schooling the past few months. Your old fish'n reporter has finally found the plug they seem to go for. The only plug they will take in those clear water lakes around these parts is the Green Scale Eger Dillinger. They will not let this plug alone.

If you go fishing and have good luck, pass your lucky method of fishing on to the next fellow. Any fisherman will admit that he wants new ways of luring fish into the boat every day. Just drop a card in the mail to "Bill Baker, Keystone Heights, Florida" and your name, along with your tip to the fishermen, will be printed in this column.

[Editorial cartoon: OUR DEMOCRACY, by Mat. "Batter up!" Baseball is observing its diamond anniversary: 75 years ago the National League was organized and professional baseball became a national affair. In the days of Christy Mathewson and Babe Ruth there were mighty players, and "there are mighty players still. Let's go out to the ball game and watch the local boys do their stuff."]

The above cartoon sums up our sentiments to a "T," with the Fourth District American Legion Junior Baseball Tournament already played at Bradford Park, and the Area Tournament now in progress. The cartoonist is right: "There are mighty players still," and the Junior Baseball program is doing its part to insure that there will be mighty players in the future.

Attesting to the calibre of baseball played by the Legion teams is the fact that the vast majority of the big league diamond stars are graduates of the junior baseball program. According to latest available figures, 270 players on the rosters of the American and National League teams played Legion junior baseball and give much credit to this program for their success. Exactly 138 graduates of Junior Baseball are now on the rosters of American League clubs, while the National League lists 132 former Legion stars in their organizations. Included on the rosters of clubs making up the 59 leagues of the National Association in 1950 were 4,620 one-time American Legion Junior Baseball players.

The program has been on the upgrade in Florida for the past five years. Previously, Florida teams were the "weak sisters" in the Southern division; but when the Post Nine Generals of Jacksonville went to the finals of the Regional Tournament at Florence, S. C. in 1947, won the Southern Championship in 1948, and were runners-up in the Little World Series held at Indianapolis, the Florida junior teams became feared and respected.

So says the cartoonist: "Let's go out to the ball game!"
KEYSTONE HEIGHTS COLUMN

Mrs. J. M. McDonald entertained with bridge and canasta on Friday evening, and Mrs. Frank Walrath Jr. had a swimming party on Thursday afternoon. Mr. and Mrs. Reynolds and daughter have gone to North Carolina for a short stay. Mr. and Mrs. Vic Glime of Wilmington, Del., who are well known to many Keystone residents, have been staying at the White Feather Court. Mr. and Mrs. Bernkopf, who have been for a time in North Carolina, have returned; the high altitude did not agree with Mr. Bernkopf, and they have gone now to Daytona Beach.

Mr. and Mrs. William P. Wilson announce the coming marriage of her daughter, Betty Jean Rowland, to Mr. Harold Edward Mays, on August 21 at 8 p.m. in the Keystone Heights Community Church. The ceremony will be followed by a reception in the Woman's Club.

At their last meeting on Friday, July 27, the Rotary Club had as their guest speaker Capt. Brown of the Green Cove Springs Naval Air Station, who spoke on some aspects of the Marshall Plan. The Rotary has also been instrumental in bringing to Keystone a program that will be of inestimable value to the community. Beginning Tuesday, July 31, a course of instruction in swimming is being given at the beach pavilion, with classes every weekday morning from 9:30 and qualified instructors. The classes admit anyone from six years up and are given without charge. At the end of the course there will be a graduation ceremony with certificates and insignia ranging from "Beginner" to "Life Saver." The Rotary also made a substantial donation to the new Camp at Immokalee.

CAMP CRYSTAL LAKE: ADVENTURE IN HELPFULNESS

Camp Crystal, near Keystone Heights, is probably the most advanced and useful camp of its kind in Florida. The camp was obtained from the Board of Public Instruction of Alachua County, which provides camping vacations there for approximately 450 children each summer. The campers' days are kept full with games, swimming, nature study, handcraft and boating, and some are taught life-saving. During the winter months the camp is available to school groups and classes that make use of it for varying periods of time; last year almost 6,000 children and their parents visited and enjoyed the camp's facilities. The camp is bordered on one side by Crystal Lake. In addition to facilities for the crippled children of its own county, the camp is host every summer to from 80 to 100 crippled children from all over Florida. "I'm glad they enjoyed it," Miss Wells said, "because budget time is here." Present plans call for expanding the camp as circumstances permit, particularly the facilities for the crippled children's camp.

GRAHAM
By Mrs. John McKinney

Mr. and Mrs. Wright Pope and children spent last week at Jackson, S.C., visiting relatives. Mr. and Mrs. R. S. Surrency of Tampa spent the weekend with Mr. and Mrs. Leon Wynn. Mr. and Mrs. Elijah Strickland and children of Nichols are visiting Mr. and Mrs. Carl Jones this week. Mrs. Nellie Womble of Cairo, Ga., is visiting friends here this week. Mrs. Maud Surrency returned home Sunday from Tampa, where she has been visiting friends. Miss Gwendolyn McRae, who is working at the University in Gainesville, spent the weekend with her parents, Mr. and Mrs. B. D. McRae. Uncle Duncan McRae is visiting relatives here and in Starke this week. Mr. and Mrs. Lavon Alvarez of Orlando, Mr. and Mrs. Tony Contales of Starke, and Mrs. B. M. Myrick and little daughter of Gainesville were dinner guests of Mr. and Mrs. O. R. Alvarez Sunday. We are having Cottage Prayer Meeting in our community this week. Those on the sick list this week are W. C. Burton, Mrs. Hobie Hinson, and Mrs. Lizzie Hines; we wish them a speedy recovery. Bobbie Frazer is home on leave this week. Mr. and Mrs. Victor Willis of Nichols are visiting his parents, Mr. and Mrs. Albert Willis. Mrs. John McKinney and son Ottis were in Jacksonville Tuesday on business.

PINE LEVEL
By Mrs. H. M. McRae

Church services are being well attended at Pine Level this week, conducted by Rev. Kemp of Fort Worth, Texas. There will be dinner on the church grounds next Sunday; everybody welcome. Mrs. Laura McRae of Waldo, Mrs. Margie Underhill of Starke, and Mrs. L. A. Crosby of Sunnybrook were dinner guests of the H. M. McRaes Monday. M. Sgt. and Mrs. H. A. Batcher and children, Hugh and Deana Carol, of Niceville; Rev. and Mrs. Kemp of Fort Worth, Texas; Edward Hall Murrhee of Hilliard; Mr. and Mrs. H. M. McRae; and Mrs. Hall's sister, Mrs. K. A. Mitchell of Macon, Ga., were dinner guests of Rev. and Mrs. W. E. Hall last Sunday. Miss Ida Merritt of Lake Butler, Mrs. W. B. Huckaby of Wildwood, and Mrs. Linnie Drake of Lake [illegible] visited Mrs. W. E. Hall Sunday afternoon.

SEPTIC TANKS TABOO WITHOUT PERMIT, SANITARIAN WARNS

Anyone installing a septic tank in Bradford County must first acquire a permit from the Bradford County Health Department, Jack Trawick, county sanitarian, warned residents this week. Trawick said that an increasing number of people are having tanks installed around their homes without first getting the required permit. The permit requirement is due to state law, Trawick said, and residents are asked to cooperate. He said that specifications and building instructions can also be obtained from the county health department.

HAMPTON

Birthday Supper: Honoring the twenty-eighth birthday anniversary of Eugene Chason, his family gave a spaghetti supper at his home on Saturday evening. Twenty guests enjoyed this festive occasion, wishing "Gene" many happy returns of the day.

New Daughter: Mr. and Mrs. David Brown announce the arrival of their second daughter, born on Tuesday, July 24, at the Gainesville Hospital.

Mr. and Mrs. J. W. Horne have been making frequent trips to Starke this week visiting their daughter, Mrs. R. C. Barksdale, and new grandson, Robert Coty Barksdale, Jr. The Rev. and Mrs. Jay C. Daugherty and three children, Betty, Paul and Jimmy, of Donna, Texas, and Dr. A. Paul Daugherty of Jacksonville Beach were visiting friends here last week. Mrs. Leo Singleton and children have been visiting her mother, Mrs. Nita Johnson; the Singletons are moving from Patterson, La., to Fort Myers. Mrs. Johnson's sister, Mrs. Alice Fancher of Jacksonville, has also been her guest. Dorothy Hardy had the misfortune to fall and break her left arm last week while playing "Circus" with a number of other children at the home of her uncle, "Dick" Hinson. After receiving treatment at the Alachua Hospital, Dorothy seems to be getting along nicely. Mr. and Mrs. Charlie Ousley of Perrine are spending several weeks at their camp on Bedford Lake and are frequent visitors to Hampton. Mr. and Mrs. J. R. Adkins had as dinner guests Tuesday Mr. Adkins' sister, Mrs. Emma Carlton of Jacksonville, and Mr. and Mrs. L. P. Davis of Miami. Visiting Mrs. Thomas F. Brown on Thursday were Mr. and Mrs. J. A. Ivey of Palatka, and Mrs. Clarence Henrold and daughter of Atlanta, Ga. Mr. and Mrs. Glen Bedenbaugh and children were weekend visitors here; their niece, Alta Marie Elland, accompanied them back to Deland for a brief visit. Little Carolyn Ann Simmons of Jacksonville is spending two weeks with Miss June Recher. Mrs. Archie Elland, formerly of Hampton, now living in Jacksonville, spent Sunday with Miss Peggy Hinson. Mr. and Mrs. Grafton Adkins and children of Miami, who are visiting relatives here, spent Thursday and Friday at Jacksonville Beach at the home of Mr. and Mrs. Roy Starr. They intend to visit Mr. Adkins' sister, Mrs. O'June Ellis, and family in South Carolina before they return home, with Mr. and Mrs. Jim Adkins and Joyce Beal accompanying.
Adkin sister, Mr O'June Elllw: and family\ in South WINKLER ELECTRIC SERVICE MOTOR CO Carolina before they returned ANDREWS home, with Mr. and Mr*. Jim Ad kina. and Joyce Beal accompanyluff. 222.224 Temple Ave. On U. S. 301 Starke, Fla. Phone 166LOOK Visiting friend In Brooker on :on & Adams Sts. STARKE, FLORIDA Phone 118 Adkin Sunday Oa night Monday wa Miss she attendedthe Louise TO'H'pTPOINT: J FOR THE FIN BAT -> > K S 7.1Ii , ._ Tobaeeo Market and shopped -- -..... ,la Lake City. --- --- ----- ._ ---- - I I - a ---- . --' -_._-_. -, >.. -". ._. .". __ "' ,..,..____ .., ... ...- ,.-. .... .,-_... -. .. .-. .. '-- ''AUG3-195 1"4 PAG _.FnITR- .. -. BRADFORD. COUNTY- TELF.GHAPfi. HTARKE.. FLORIDA =--fltID\\' ,. r .. ,'. r Clrr.ir", o"'IC'81(......TO. Mr.....r..H.*l.r.. t.<>..(>, null You for ore IMvorce hereby, notified restoration that "a f la.Jj... )BPW Members Attend V "' '" maiden name and othtr relief ha. I In II OUR DEMOCRACY- -by Mat un". Tr"7."rl' lntl "a.QladJl. been filed acalnit you in ''IfIOf (District Meeting 1.. ..kJ y. Defendant the above named Court and , ( C..e No. ,D7, cause and you are required to ull. . Jtollr* T* A e.r.ro of ,. .." ) [in Gainesville nerve a. copy youranswer or b ( .i. ORASS-ROOTS WJS0O2W : Oladvs L.. rakley. who* 1'0.1.| plesdlnir to the complnlnt on the dene !I.., 160 Main titreot.! Vlncen- Plaintiffs Allor".". William 1>, Thl.on. . I Mrs, Pauline Toi'ode, Mrs. DestaiDyiil .*x sS*s rMAKeHAVWHILt town New Jersey. Moriran, HOT Clark Building, J cknonvllle .. Miss Delia Ros<'lb..raoil, / THt&JNSHINti MASATIMCWOKN You are hereby notified that a Florida and file tlm origin.si week. 'n h : I suit for Divorce, car, cumody and In the office of the Clerk of the araph {Mrs. Mildred oyle with her guest, MAXIM WHIN OtIK NATIOM control of minor children to da/end- Circuit Court on or before the 4th I/Mrs. Faye Prsther of Jennings, ant and other relief has been day of September, 1951; otherwise ( 'l.j! IA'L' WAS FOUNDa IT I* STILL VLIO. 
i'Atr ., Co.S >root filed avalnit you In the aboveIIam.d 0.: the Starke Busi'ness th* alleiratlons of ssld Complaint C i.rk , I La, represented : Phone 35 j Court and cause and you sre will be taken as confemted by you. Myrlh T '1 tho CBy \. , Women'sClub required to serve a cTf aud JTofesslonal eopy youranswer Thin notice. shall_ be publishedonce n.ver, .. .. ... .. II at the District 3 and 4 meet- the Plaintiff or plendlngr 1 .s Attorney to the complainton William eacn weeK ror. lour connectitiveweeks Dt'I' Ing held Sunday In the Hotel D. Morcan, 607 Clark Bulldlns:, In the Bradford County Tele. Miss Madge Whigham:I Friendship Class Thomas In Gainesville. original Jacksonville In ,the Florida office and of the file Clerk the cuph.Dated this' Uat day of July, 1951. Mrs Olive Hsrt, President of .. (OFFICIAL SEAL> A. J. Thnma, of the Circuit Court on or nefor the Weds }. R. McDonald Has Dinner Meeting the Gainesville Club, welro-med. the 4th day of Penremher 1961., other.wtn Clerk Deaver of the Circuit Court, By Myrtle T. Deputy Clerk A In Tallahassee At Yarbray Home group which Included members plaint the will allegations b<- taken of as said confessedby Com 8'I| 4t *|24 from Sewing CalneRvllle Malhlnt Rep"" Jacksonville Jacksonville from you. ,"11I11 Alias Madge Whigham, daughter A dinner party and monthly Beach, St. Augustine, Stark and] This notice .hal] be punltaTtedonre NOTICE TO APPEAR Starke On Wodoltda., Let- each week for four oonnefiitlveweek In Clr.-.l. rnr Itradford Coimty, afternoon . of Mrs. Marshall Wnlham offitarka business! meeting of the Gainesville. Mrs. Ksperanzn In the Bradford County Tel*. 1"1..rlrfeii I. Ckaarrry.rteorir Aulrolt I,: Clas of the Socke of Jacksonville Pw*. District To Give F'ree Bible aDd the late Mr. Wtifgham, ship .raph. , Frlend-1 Hunter Carrsdln* Plaintiff, tsUllltte .... I"d over the Pitted this Slut day of Julv 1951 Church was Director, /< .. , make Presbjterlan , was married on Saturday, July 28, pi (OFFICIAl/ PEAL) A. 
J. Ttirmss, Cecil 8 Carradln Defendant tJl1 to Joseph H. McDonald, son of I IMra. Tuesday evenlnjr at the morning leasinn and the State Cl.rk of the Circuit Court Case No. 8671otlce IS'R O\\S 'Mr. and Mrs. Ralph* Yarbray on President. ;'"" Helen Krauss of By Myrtle T JDeaver, Depntv "'I.k81S"'t \ TA Aperar.To CaAh paid II() John F. McDonald and the for Cherry Street.Arrangement. Bt rvtrrsburg wa. the apraker.011'1 | 824| Ceclle 8. Csrrjtdlnft, whnn. old late Mr. McDonald of Hebron, (any residence 'In 89-42 Bird Street CJIendalfl Condltiohl of gladioli and,/ Pearl Godwin of Jacksonville I .nT'"R TO A..r""An CALI. 29 Lons Inland, New York. - Nebraska th.. IJ"" State Finance Chairman, and In ("I,.." Court. Bradford '...., Tf'I. Tile ceremony was performed canna Mile were used In 1 Klnrldm In Cttmmrrrr.Maria \ AII sre herehv notified that a and I..a VI ''..Jlllaamade Mrs. Mabel Craft, also of Ja'k1 1 pwlt for Dlvorc. and 118rne. ! of The Ing room, and pompom V. NavotiR Plaintiff, vj John award *f minor Pl1lto by the Rev John Koohan the centw"*" toT th 110 n vIII... who U chairman. nr. th.. I. ... Nnv.tta, Defendant, thlld to defendant haa been filed and 8ddl'tll.SI'ECI.I. . up --- ------- --- Sacrament Catholic .., .. aR-alnnt you In the above namod Blessed f '/ '- Can No. Church, Tallahassee.Th buffet table. .. State Resolutions committee spoke t,' .iiN1J.X'-I"w: : : \olr<. Tn Af.enr.Tn Court and cause and you are required I ! .. hrlria attended Vv Mr. After ai"r ino group neia a brli'fly. ."ri -"""lI'iJIiIlII"I, >, Jnhnnvetts. whoBA ",*"' .8515// to irerv a oopy of your annwer ornleadlna EI,'",riry \'our * uma n \using meeting In the Sunday During luncheon Dr. Florence II*o .jt''j, ,s \ .I ,./..'"! In 116 Went 76th Street, New to> the comrtlalnt on the :\ .."' fur b, -rUaha; ; ; I I. m Berry Willis, e and m-naol building with the class Bamberger, Columbia University N. Y. Plaintiffs Attorney, Zach VI. DOUB- ---,--- Mr. McDonald had, an hU.,wJJ/J'*""''(president, T. 
H. McNeil), presid graduate, spoke on the subject of! I I I .. -- man, Robert Willis, also or ing. The clas approved the pur "Women In Politic" Mrs. Le- hassee. .. "" br'ao' chose chase of three large oscillating Sorke Introduced four Past Presidents For her man-Is* fan for the church. The teacher, who \ were present Mrs. Llla *" a yellow ow"black acce Horles. Her Charles M. Turney, commented on White of Jacksonville, Jst Florida >> ohvt farange>r was* of miniature orchids the activities and growth of the''I Federation Pualdent.- -. Mr.._--... -Hnrtenae --.- 13 I'rt'sent at the WHS clans. Well and Mr*. Elizabeth .ceremony .." the bride's mother and a few closu George McMaater, supply pan- Heth of Tallahassee and Mr*. Ruth tor, gave a devotional talk on hi* Rich of Jacksonville. relatives and frlendu. recent trip to Montreal, N. C., and Following the luncheon Following the ceremony, .)Ir.. a short singing a program of group was I afternoon business session W, U VanLanclinghain, aunt of the all.Approximately. was enjoyed by held and attractive door bride, entertained the bridal party prizes I 30 member of were won by holder of the lucky at luncheon at the Dutch Kitchen. a ) the clas* were present on'this occasion. ticket. These included Mr*. Doy Tut tKOAOCK. : APPLICATION Of THl PftOVCUB IS TO USE OUK. CVCftV le of the Starke group. OPPORTUNITY TO STOae OP' OMJTHIN FOB THe P'UTUR.K. Mr*. McDonald wan formerly * home demonstration agent for OtP STU.O'COUft**:,BUT MCAUS THC GKCAT MAJOKITV Of THK I( Bradford County, and for the pait York Rite Masons Hercules Personnel AMdtlCAN PtOPlt HAVE'/lMOIt WAV WHIH THl SUN SH" .S.* several yean ha* taught home Will Meet Tuesday Have Annual Picnic THciit SAVINGS AND Lira 'INSUMANCC ARC A MIGHTY FOUNDATION economic In the high school her*. In Gainesville : OP security, roitTMK" NITIONNO ITS FAMILIES. The couple will reside In Stark*. 
Member of the Survey Department I Customers* The of Hercules Powder Co., _ t Adonlram Association of Family Reunion York Rite Freemasons will hold along with their families and ANOTHER MYSTERIOUS EXPLOSION .tffc Corner / Of It* first meeting in the Masonic gueits, enjoyed a barbecue and ! Crosby Clan Temple at Gainesville at 8:00 pm. chicken pileau dinner at Strick GOES UNEXPLAINED In our daily relations ELBERTA Held Here Sunday on Tuesday, August 7, This association land'* Landing Saturday after with our customers we KfiK j jointly sponsored by noon. Tlie affair was In recognition "Did you hear the explosion however, nay that they blasted a* strive always to be honest, ,1 A family reunion was held last Gainesville Chapter No. 2 R, A. M, of the eighth consecutive year last night?" That was the question scheduled, about four o'clock. Monday /air and sincere. Xs.* asked " Sunday at the home of Mr and Gainesville Council! No/27, R. A without a loss-time accident for by the resident of afternoon. some of the : PEACHESI4LBS. ; Mr*. A. L. Crosby, Sr. for th. S. M. and Pilgrim Commandery the survey department.E. eaat Starke Tuesday morning Major W W. Lamar of Camp Here are words rnean tous. aooa No. C. Mann, Jr. of along Call street after a mysterious Blamling, stated that he things these daughter, grandchildren 1. K. T., has as it* principal the Brunswick waa In . , and great grandchildren of Mr objective the promotion of YorK' Ga. office gave a safety talk blast about 8.30: Monday Jacksonville until 11 o'clock that v rvi* *>> r >.< itii and Mrs. Philip A. Cro by. About Rite Freemasonry in this section during the afternoon, after which night sent them scurrying outside nlnht.; but that he had heard nothIng Honesty correct weight f 1,09 people attended ,of Florida. an award wa given to the depart their home to see.. ,what was hap about any explosion out there and correct price. -<- -. . t After meeting at the house, the Dinner will be served at 8 p,m. ment. pening. 
1- at all Fairness satisfaction t group gathered at the armory and Immediately thereafter there About SO guests enjoyed the It was described all being "a nnck In January, the residents guaranteed or your money BU. BASKET 3.4 : building for a picnic lunch will be a short business session outing which was In charge of hock of an explosion" and is reputed of Los Angeles went fleeing from will be cheerfully refunded. - given over to the election of officer /. M. Vtcker. to have lit up the sky considerably their homes when a blast, later 1 *, appointment of committees out towards Camp BlandIng explained as a jet plane breaking Jr. Woman's Glut etc. I and !umphrey.. Starke resident the sonic harrier. rocked_n_. thA.___ area._____ Sincerity no extravagant i To Meet Monday All York Rile Mason are urged HURRICANE will probably remember a It seems that when the plane claims or misleading BEANS 2 Ibs. . to attend, similar explosion a year or so ago broke Into the barrier, it left a : advertising. % , : The next regular monthly meetIng PRECAUTIONS when the source of the blast was terrific vacuum and the air rushIng I If ever feel that we J of the Junior. Woman's Club WCTU never traced In to fill it caused a blat!! : you failed you in any of Large; Ripe Avocado Meeting have 1 will be held Monday evening, Aug .. In a telephone! conversation thuniler.As somewhat, like a loud clap of these ways, please let us PEARS 3 for .. . There i 6, at 8 o'clock in the Woman'* Held At Tabernacle are a number of precaution Charles Hager write : - at Please superintendent , ; Club. that users of electricity I to what caused the blast know* The rtllglouj will . program the llmenite be In charge of Mr. T. Q. Mitchell.Hostesses At the WCTU meeting held in may take preceding and during graph that they mine had, told no the unscheduled Tele seen> by Starkeltes Monday night, CUSTOMEK nELATIONSREFT. 
Fresh Sweet'.Yellow 1 for tile evening are, the Tabernacle last Friday it was heavy windstorm that will save no one knows, but speculation has ,, decided that the trouble. With blasting Monday night and turned up everything from dynamiting special offering ample warning of r Mr. Robert Crern A&P Food Stores CORN 4 a that this chairman, the first was he had taken at the last meeting would an approaching hurricane, electricity fish to the crash of a jet Ayr., ears - Mrs. Don Mr, hoard of any explosion. He did plane. 0 Lexington , Page, M R Jucld be sent to Miss Ethel Hubler users can do some thing to I New York 7. N. V. ? Mrs. Herbert Green Mrs. E. J. publisher of the National Voice be prepared. California PascalCELERY Large Slal JDowllng and Mr*. Larry Gibson and Sam Morris, for prohibition Man has not yet devised an! IF YOU LIKE GOOD SINGING ;'t work.A overhead electric line that can always Then Mark This Date On Your_ Calendar_ _._ ___ .. 1! petition sent by Mrs. Kate withstand hurricane winds, n n J. Alonzo of Orlando, State Director and underground line In rural Bradford County song: lovers will(: lion the Bradford bunch would I of Legislation, was read aid areas are too expensive to buildTherefore . join with those of neighboring join in ,with us, and we'll be lookIntc - signed in protest of liquor ads In during period of high Union County when the letter's for them' ," Brown said. I periodicals, on radio and tele winds we can expect some farmer Singing Convention geta together I T. H. Waters Is vice president vision. and other rural people to be without at the Lake Butler Methodist I of the Union County Convention Fully WIVVTUPICNICS The next meeting will be held electricity for various periods Church, Sunday afternoon, August I which wa* organized last February the last Friday in August. due to storm damage. 5, at 2 o'clock. I and Mr*. Raymond Brown la MASON ib. 4c Farmer with home freezer I Raymond Brown president of ,secretary-treasurer. 
The group , Baptist Primaries should! do these things when the the group, says that everybody's !meet every first Sunday at some electricity goes off: invited, especially the Bradford Allgood Sliced Breakfast church In the . ,rea Enjoy Lake Picnic 1. First, do not open the door! County singers whose convention To do so will only cause the food la Inactive at this time. All who like to sing or who JARS BACON Ib. - The Primary "A" Group of the Inside to thaw faster a.-d may result "The agreement waa that If like to hear good singing are Invited , .. ; Banlst( Chqr\! h Sunday schiol enjnjrwl In spoilage.y. Union County organized a conven- to attend the next session. Golden Shore .... BreadedSHRIMP.Soipkg. ,. ,. ft i ptcM.3 l : '.inch Ot StrlcRWncl's : / The borne freezer will protect I P -... Landing Tuesday. food, 3d hour or longer If the door BUILDING PERMITS NEAR $250,000 I Carton of 12 Qls.87c .. The thirty joui'DriM( feathered la kept closed. Cover freezer with IN FIRST SIX MONTHS . I. KfMINGTOf'PORTAB at the beach earl/ for a swim be- blanket to !give additional insulation. .." .. I tore lunch. -. I Fla. Grade D & D Building permit for the first $66,470 And (Whole) consumed almost al! L Also atten.l.ii. ilie picnic \\ roDie 3. H electricity! la off longer'than -SSc FRYERS Ix month* of this reached of Ib. year the amount spent by school - teacher: Mrs. L E. DuggerMr . 36 hours, obtain some dry Ice a total of slightly more than . TYPEWRITE"Mf. : *. Victor Lamli, Mr*. J. D. Seymour from the nearest town and put It and organization The other U (210,000 according to Carl Johns. S4.DOO went for the building of All Meat SlicedBOLOGNA Mr*. Swcil: Hwnon Mrs a Mason in the freezer to hold the temperature Carton of 12 . I city clerk. A large of thissum part negro Masonic Lodge hall on 10th J. E. Hlldebran and Mr*. Tom board Ib. 59c down. Place It on on however Wai - due to the St. ... Harding anil mothers, Mi*. 
Fred top of the package, not on the granting of permit for severallarge Caps 25c Pellum. Mrs. Johnny Le-> and Mrs. packages. The Big Dad Manufacturing Co. Sliced Assorted business establishment Riley Wasden.Fri. building accounted for $8 ,ooo of . t Farmer with electric: water systems I During the period from January the $94. OO spent on business and 'IU. IoU COLD CUTS, Ib. 69c | could fill containers In the I 1 through June 30. permit granted industrial building. The new IlottleCerto. - home when word la received thai to home builder totaled Quick Frozen ' i 140,650 D & D 12-16 Ib. a>jf. : tourist court, currently under USE construction OUR NEW : torms are on the way, so water, while $3,800 more went for the 8 2Rc. HEN TURKEYS Ib. 69c DAILC for drinking and cooking will be remodeling of older home. on Temple Ave., also t oz. - TYPEWRITEREASY STARKE FLA.Sun. available.Beware. The new six-classroom building I helped considerably. It waj i - of an electric line that at Starke elementary school cost I valued at 20000. : Harris Light Meat IVt..r P_ has blown down. Do not attempt P'nut Butter .. Sat. Aug. 3. 'I to move a wlie or a tree that has each other during periods of storm'| stored to service In some Instances I TUNA 6 DOUHI.E FEATURE fallen on the wire until electric damage. even before those In some of the oz. can 25c Food r PAYMENT maintenance m..n are on hand. Two-way radio are used by all I I I cities hit by the hurricane. Ideal Dog Whispering Skull Always notify the power supplier of Florida'* electric cooperatives Snow White Brand Granulated 31 PLAN promptly when electricity goea off This ha* been the mean of notifying I f Dreft, Iste.; pk . With Tex Hitter During hurricane power suppliers each other of the need for help SUPER l1E.RKISG: XK.I SUGAR 5 lb. pkg. Small Down Payment are alerted to commence repair as when telephone lines .were; out-of I Guaranteed to conform to 17. S. 45cA Lava Soap ,,1 1 Balance AND order. 
radios ' Monthly soon as a line I la reported out Two-way wher' Drpt of Agriculture regulation*. ASK: FOR DETAILS I The Kangaroo Kid Cooperation among the power suppliers trucks can talk to the office and to 4 oc. bottle; 1$". Bradford County & P Small Size Sweet Ivory Soap IgebarjSj 1 other have been electric truck, a great aid( Telegraph. , of Florida In restoring I W Ith J.M k O'MahoiwyAlto In rural electric , service storm ha* been restoring service PEAS 16 Yet, by utlng our new Type* I t'.rto..n and Serial very good. during The 13 rural electric In record tI m.. ,, oz. can .. 25c Personal liar writer dub Plan, a small down I cooperative* have agreement t* In hurricane-damaged areas lastyear Ivory Soap, 4 bars payment will put the world's ., Mon., Tues.AUK. lend material and crew to aid rural electric lines were re- RAJAH finest portable-the All New. i 5, 6, 7 -.-- WELDING SALAD DRESSING Flakes 31 R.mlogtoD-IDto cowl -and yon hare you*a home full I Cavalry Scout I PISa 25c Qis.Of 45C Ivory Rt" lar Bar, S Ha.... year to pay the balance plot a I unad With Rod Cameron 'SweetheartRath Soa ' So carrying charge. why Bright Sail Alno Latest New and CfcrtoonWed. Laundry Bleach wait?-come In and tat type Oil portable to-day' the I : BarCamay .fi beauty - try .. Thurs. AUK. S, 9 Qts. 13e Soap 2 bars new, exclusive ,featumAiBa }l-2Gal.-25c ing Miracle Tab.MFiage Fined DOUBLE FEATURE GENERAL EL ECTRICiZ. We of ft*,the peeple of SUrfce : Keys Simplified Ribbon FATHER of the I ]6 Oz. Pk . Ch.o>..t..AND MORE If BRIDE 01: .'';;;:.' kndlrlsjliy ODe of the most : Sunshine Brand Grapefruit Juice Spic'n Span .. just the rl&hcl1ze foe lattett... .. ',.... $ modern and flnet equippedweldlac belt typing performance-* Elizabeth Taylor BprncerTraxy ,."'.. c.c......... 79 95 shop* anywhere. 8 Oz. can. 3 for 23c 46 Oz. can 19c Jewel Oil, qts. . TRULY THB ONLY OFFICE _: ,.... c- All Work Guaranteed 21 ANDMystery TYPEWRITER IN PER. Meat. 
Baby t 4ONAX.Bradford SIZE I TELEGRAPHStarke r Kan ildl' Jiffy I SMITH'S GARAGE A & P Food Store Swifts Snowdrift, 3Jb. can tAcGRFFu AND MACHINE Bill Boyd and Anxly Clyde J. ... SHOP The Great AnanUe : G. and Qts. Also Latet :New 8.. Temple Av*. Starke E.tt.. ....._f. Pad. fie Tea Co. ,Weason Oil. c.1fot c..Yr1. .... 1_ Starke Fla. 300 East Call St. pkg' 31 Oxydol Ige. ,. , L. ..-. . AUG 3 ';At."l!} () Ii I Iwell """"roaD ()Om... Ttt.1'GRArn. ST tRRE. Ft.QRID., _. ------ - I -- Williitt,. Itreit. NowI dom in S r' Friends of C *. Mr. and Parker Mrs N B. Rosier and daughter tion now. I ... Charley Quick, Sr. will Mrs. T H. MoNeill was visiting Mrs. I Y..rk, N. V. it be happy to hear 11 Itpttln her sister Mrs W L. Turner in moved liti the Dinkln garag"apartml.t1 Linda. spent the past two week Geo and Alvle Werr were IIII I vu.u nri, ;hr'tEc, nntlfli-d. thai fll.rtacalnit aI I bfl troll I stilt for DIM > with along nicely after his opera\\"" I New Smyrna over th' ", \)kend.i nn Jackson atrevt on in Freeman Va. a. the guest of Hampton recently visiting you In the ahovi. nann'di St Luke's Hospital a week ag<> July 21 The apartment was formerly Mr. and Mrs. E A. Raney and Mr and Mrs. Wallace Collins. i Court and CHUM and yOur nrr rtoulr-, Wednesday Hu Carol and Douglas Thomas who 'I I .d to nerve ,u .upy of your unnw.iriiir returned horn. | M*. and Mrs V/illard l Norris 011'upicd by the Walter family The Roaiera and Raneys pleading I.. thi ptimplnliit "n the Saturday. Harry Quick of Wash 1, spent last weak In aul"l"'lIiting Kings who have bought a home In toured the mountains and other arc now making their home with Plnlntirr. Altorn-y., Evan T. Kvan: .. THE NEWS Ington D. C. and Charles Quick, Mr. and: 1A.... C.Price. Keystone Heights. points of interest during the visit their mother' Mrs. Geo. Werre and,' ..OK Nenl..FMIIdlnK." ri. KMIOM, Jai'knonvlll.-Jr., ION JAW. Florl-Kxh , 'IN Jr. of Vafdosta Ga. 
were home I' Were visiting their grandparent., dn, end file the orlwlnnl In the offl.. SS rrs r-Sr. I last week visiting their parents. Mr. and Mrs. Fraud Motion of :Mrs. J. H. Fields' returned Saturday Mr. and Mrs. Alton Dobbs and Mr. and Mrs. II. Thomas of Rising of tli.. Clerk of th.- fir-"It of < i.m, . on or brfor the 4th day Hrptwm- Jacksonville were Sunday gu.m. from Newton. N. C. where daughter Eloise, and Mrs. Inca Sunday. bf.r, IKS1.: ithrwl the alliratliin ami.aylng daughter, Mias Sue Bigg returned home Mr. and Mrs L. S. Hutto. O. P. of Dr. and Mrs. M. B. Herlong he Spent a month with her brother Kelly spent the weekend In Plant Ethan and Mable Supp. Pete of sell' Oomplnlnt will be takeniimfvniifid a.. homesiet..r Monday after a visit with Cu). and Hutto Mrs. Shrllie and and Mrs. and and Mike of by you. at the Hall Miss Friends of Dr. Herlong will be Btntrr-in-law, Mr. City with relatives. Jerry Sapp son Thla notlre han be publlal111"on. J. D. Mrs. Owen Griffon in St. Augustine Sharon and Wayne visited George Pow u. Palatka Mable't .. ronncoutlveK ]Mr,. Kinney happy to hear he 1s setting along are visiting rnch went for four the .. .hn In the Bradford County Te'-l VIr- T. F. Huttos in Jacksonville brother-in-law and sister Qe}. burr, where just fine after his recent illness. The Maxle Carters had as their .r"ph. from Injur- Sunday.Mr. New Shipment Connie lxHeelers. > guests Sunday at Klngsley Lake and Pearl Baker In Birmingham. rioted thin list day of July, 1 5I.OKFICIAI . traUng recent' autoirfbM Mrs Grace Pearce and son, Ver. Mrs. Lloyd II. Lewis of Wil Select your at Mr and Mr*. Frank Webb of Haw. Ala. thla week. They plan to return ( Clerk| PKAt.\ or the) A.Circuit J. Thomas Court %i a non, of Plant City spent Sunday and Mrs. Hugh Hardy and mington. Dela., house guests of Stump's. '. 'ft, thorne. George Baladen of Jack- Sunday. Ty Myrtle T. P.av..r, Deputy Clerk and Monday with Mrs. J. H. Fields children of. Gainesville were Sua- the T. T. 
Longs and her meets \, ,' wmvllle M. D. Carter of Lake Conway Rouse has joined the 83| 4t 1124WOTt M \ I R-: Morse and and family.Mr. day guests of Mr. and Mrs. Chas. Flora and Susan Slade art spending E. C. <( xpect (to leave the Hardy.Mr. several daya this week visitIng Gibson, daughter Madie, and sister and'Ira...*Henry Carter of Max\ It....t 8. He jolnud In Jacksonville II....,.'. Cocci". Florida. July 5*. h'S. tour throughout and Mr*. M. C. Stith and| friends and relatlvea In Albany Becky, of St. 9imon' Island well.. HHarriett ..: and ncored U5: on his examination. "NOTICH ia HKRKnr!: OIVRM: thatplir They will atop off children, Jim and Sandra: of and Mrs. W. K. Thompklns and Baconton, Ga. Ga., formerly of Starke were Visiting 'k.. (J... il for Con w .!.liunl to Hf-rtlon 9 ..f Chapter | ..penda.weekLughter Washington, D. C. are spendingthe of Jacksonville were weekend" friends" here Saturday. and BUT*Powell are Fernle Moody ?iaa' all his tobacco isles. Lawn of Florida Acts of' 137, to and sister, month of August fat the H. L. guests of Mr. and Mrs. J. E. Jerry Lawson spent the weekend visiting their uncle and aunt. Mr. in the wairhnni, to set\). known. *described l.. the Murphy land Act In ,HradforilCounty the following - and go on to Brownlee cottage at St. Augus- Hardy, driving to Zephyrhills Lakeland Pvt Elvin D. Casey of Camp and Mri Glen Wood, In Zephyr.hill .Albert ana ,fo.. Moody l.a\e all Florida will bo offered for (11e for two week on' niile, et publIc outnrv Yur the blirht - the Maine tine. Plckett Va. returned to his base stiles flu Ob-y Moody Montreal and Auburndale whero he ", unit beet rn"h bid KUbj'i-t to the .'.;18, visiting rela- :Maiden Form Bras atStump' visited friends and relatives.Cpl. Monday after spending" two weeks last shevt,, sold Tuesday Peg right of the Truxt.. of the Int '''- Bob and Butch Smith apent the with hi. parents, Mr. and Mrs. T. Mrs. M. C. Ford. Jr. and daughter John has sou: all hi... J. 
B \Milie-IrtHl, lmt'ri.vt.mi'nt fund to rfJ..t anY way sod nil huift. Ht tti' (fVMirthmiH. beginning - past week with Mr. and Mrs. Er- Morris Williams vlslte 5 U Casey Suzanne spent! four day lax head has finishcd'gracling all hi... i Ht IA..,, <""i')<.>..k A M.. on the week" In Jacksonville with the < 4th rtnv r,f H > ,itih.'. 1111 t Onf'tl L Fabrics n4.t Jones in Orland,'. Mrs. Minnie pt - Hazen . of BrookerIs Head Miss Evelyn Newsome Wednesday ::110'1'11'1 TO AIT-FIAIttfc. nf munlclpnlttt-H I i>nt*"m"nrn, L 79c yd Mr. and Mrs. R. G. Dobson and Visiting her sons and families Friends of Mrs. A. C. Durden senior Ford*. In (Irriilt f..nrf. Itrmlford (......P. "n<1 rlirht of WHY :200 f.*..t w rl.. will during a days delay enrout I' . (|. Now Klurliln In ("I..r..,. i .. .....v..1 iron children of Trenton spent Thura- the M. K Hazens and V. V. Haz- will be happy to hear she i la greatly '" nov pnn <.1 l through while transferred from 'I'Wh4.h ti h" 1 flalmirr Kl- ' being Kiln N. Miflehew. VI. .n "doting MIsts day of last week visiting their ens this week. Fort improved after receiving treatments See the :NEW: REMIVOTON: bert N. Mlnxhow, JJifpiHlnnt.Can. I ..nd.*"'. t" all 1nl. title tu' on.- mother Mrs. T. M. Hutching Camp Atterbury Iml. to In the Tampa Municipal I Portable Typewriter at the Bradford No. ft&ftlMrMr hAI' .f Alitptroteun ant thro. . Ann Johns are Bennlng Columbus, Ga. ft ,ppl.r. fourths .r n........ nln"'al" will hereerv.l. an'' Youth Mrs. A. L. Buford of Tallahassee Hospital where she has been con. County Telegraph.Mrs To Elbvrt N. Vlrmhew wh Methodist Mr. and Mrs. L. B. Alvarez and is the guest of Dr. and Mrs. fined the past two weeks.: MJ'1I.Durd..n .. IN: t'nknowh. I received.i.eeriptia: . i Griffin, near Lees- Mrs. V. S. St. John left Thursday of Mrs. O. H. G. Petrle of Jacksonville You are huretir notified. that a I SCi'. T ".C'". Ac.I . returned to their Is the mother children have - R. P. 
Stubbing the latter of l i of line i part tilt for Divorce. and award of minor I ' W cresr for P// M. month's Visit In Darlan Mrs. home after spending the monthof this week. a L. Peasley! and Mrs. JV.. Kin- visited her aunts, Mrs. A. E, children linn teen flied against I LIneln City 82 HA 22 Jn1""$ . her daughter'on July at one of the Plott's cot- Conn with her son-in-law and cald and makes her home at theBt'uley' 'McDonald and Mrs. Jennie Roper, you In the above named Court andcause II I. Trill"1 of Fund' fh. Internalprovement of the Statc.f " will go on to daughter Mr. and Mrs. D. Chris- and l you are required torvi I . je two .. the past w..ek.Mr. PlorMa.fly < . tagea on Klngsley Lake. Mrs. Guy Andrews end son, ..> > copy of your answer ornleadIr I Sour day visit with tlano and son and daughterIn.law < to the complaint on the I A. .1. Thnrnan, Ag-nt Tommy returned home Tuesdayof Triti. rlnlnltrr'M Jo K. Mr. and Mr Edwin St. Attornrv. *ph Skip. * Mr. and Mrs. Charles Meyersof last week after spending a The following Instructors of the and Mr. Steve Simmons P"'. 12llrahitm\ Itulldlnic Jacknunvllli . Jacksonville spent Sunday with week In Radford, Va. where John. l Big Dad Manufacturing Co. are and four children are spending *. Klnrlrin. an.1 III* the nrlirliiHl. I 813 it ... Carl ChamblissIf Mr*. Meyers' family the Junlus tney staying at Avery Inn during their this. week in Miami and Cuba on In the office of the Clerk oC the Clr. drove their HOME-COOKED If were houneguodl, Mrs. : cult ("nurt on or before the 4(1) Say I (.CJOOI I Jacksonville Smiths and other friends and rela- colm Sanders home. Star Brand Dress and Work month's visit here: Mrs. Ruth business. and pleasure. of September: mil ; athi-rwln: the I of the Charlie Dar- tives here.. 1'01:1.1-1' Shoes for Men and Boys at Leonard Mrs. Minnie Clodfrlter, alleCcu inns of unlit Complulnt; willbe MEALS Ijean Darby return-Cham-, Mr. and Mrs. J. R. Cha 't en Stump's.After Mm. Minnie Lee Miller" Rudolph Mr. 
and Mr*. Dewey! Warrenand Thin taken notice an. oonfomied shall be by puMlnhnlonce you SERVED DAILY ills with the Sink.. Roland Hendrlck and Jackie children spent the latter each work. for four eonnccutlvewurfc , visIt Iloleproof Nylon Hose atStump's. and son J. Roy, returned home partof In the Tlrsdforil County Tele. 75c each Week Days ort t i is. iI Monday from two weeks' vacationat I a month's visit with her Cooper, all of Lexington, N. C last week In Eastman. da. visiting *;rnnh. I mother Mrs. W. A. Colley Mr Mr.:and Mrs. Pete Harrell. On fWcrt this l-rdny of Ail u.t. 1111.OFFICIAL. SUNDAY 11' the. Brownlee Herbert Green cottage In St. ( 8KAL.> A. J. Thorn. . Mrs Martha Smith and daughter Sunday the Warren and Mrs. visited In Chicken" Dinners $1 F. K. Blackburn and twc: St. - Q. Brown and had Clerk of the Circuit Court Mr W. their Charles, DP,,and Augustine. They as sand son ;<! children left Tuesday for several Dlanna left Wednesday' for Augustine. By Myrtle T. Beaver Deputy Clorka All Served Family Style and Betty of Talla- guests at Intervals while at the family relunvllle children. Billy anderson Shubuta Miss. where they ;.will >.a. |a 4t 'ia<4NOTICB Saturday. i hassee' spent the weekend at their cottage: Miss Peggy Ann Wiggins weeks visit with relatives in Room & Hoard $11 Weekly and Clearwater before It Mrs. Smith's mother Mrs. J. E. Miss Shirley Scott of Tampa I Is Tampa re Kingsley Lake. Miss Jo Ann Dlnklna Mrs. J. 'W.Browulee TO AI'PKAH cottage on , wa.nl turning to their home In Tangier Toney' Mrs. Smith will also takea the truest of Miss Nina Bishop this $5 rl...... ('earS. Itr.df.r4 Co..f,.. Bradford House W M L Miss Melba Strlngfel- . Florid. I. Che.r.Iucy three" weeks' brush-up course in week. In. D. D. IjwsonJ i Mr. .and Mrs. Russell Normanand low, Argin A. Buggux Jr. of Fitzgerald Morocco.Mr. Orh.rh, 1'lnlntlff "a -HenryOrlmch. 112 So. Cherry St. J of Auburnd&le and Mrs. Ola M. Perkins left Sun- Ga., Miss i Ju.io Caldwell M. S. C. 
at Hattlesburg, Mis, I Little Marjorie Ellen Thomsonof II.. *. .T Tlrfrtiilunt. A....r. Can* N,,. 8B74ll4 Mrs. W. C. !McCord I and W. L. Edwardsand (A. B. Wood am of day morning for a vacation in and Billy Miles. Mrs. Montlcello Is Tot Henry OrbMch, whims resiJn '-, Sale.! Printed Rayon Fa- vUltlnaher grandparent - and vlslawson : Washington D. C., Philadelphiaand daughter, Fredrlka Mr. weekend" - 'pre families. other points In .the ea t. I Mrs. Jesse Newaom* and Miss Mrs. Wm. a. Struth and Mrs brlcs Regular" 98c Now 79c ., Mr. and Mr*. F. F. Stumpand - Evelyn Newsome spent Thursday Leah Bauknlght of Jacksonvillewere yd. at Stump's.' family this week. Friday Saturday visitors of Mr. and Mrs.J. Monday guests of Mrs. W. Sunday afternoon of last week In Jacksonville. Mills Annette [ [ ( UNCING H. DuPre at a. family get- H. Struth and Miss Kate Struth. Mrs. J. A. Griffin of Camp Harper of Jacksonville ONLY I I. spending this. week with J>ENING of together were Mr. and Mrs. C. J. Blandng! spent last week" with Mrs. Janie Avery. Herrington and daughter Jean Maxie Carter, Jr. U In Fort Mr. and Mrs. Sam Gillisple and friends and relative. in Lakeland EN RANCH Mr. and Mrs. W. J. Herringtonand McClellan Ala. for three week's two children of Fort Mead were Over the weekend Mrs. Griffin had I Miss Minnie Bessent was homes 10 I % mSCOUNT BatteriesGOODRICH wn a* Nile Owl children Evelyn and Bill. Mrs. training with the S', Augustine weekend guests of her mother as her guest her sister, ]Miss Verlie to the Canasta Club Tuesday even ON AU. Car . Irving Lltchfleld and sons Irving National Guard Unit. Mrs. W. A. Colley Coker of Washington, D. C. Miss ing. and John all of Palatka, Mr. and I Coker Is leaving In September for and SPITFIRE management. Jin un ChiCken Dinner Mrs. C. C. Smoak of Jacksonville, Friends of Mrs. W. Q. Halle Mr. and Mrs. Drayton Colley Stockholm Sweden where she I Mr. and Mr. W. J. Studatll Mr*. Mrs. 
Billie Hall and son will be to hoar that sho Is son and daughter of Washington, will work with the American Embassy fuuidwk-hea & happy and daughter Cheryl, and Mr. of and Donald .of Orlando and Gene getting along nicely after under D C. are visiting his mother' Mrs. for two years. Mrs. N. n. Rosier and daughter FARM HOME & AUTO SUPPLY Eaatmore of Cocoa. In Riverside W. A. Colley for ten days I , I Dam lug Nightly, going an operation Linda spent Sunday at Jackaon- Mrs. Ben Crosby of Tampa was Hospital a week ago Friday. Mrs villa Beach. >* Mary Rivers Stubblns has had Ha lie returned home Tuesday and Mr. and Mrs. E. S. Matthews the Saturday guest of Mr. W. tie Fri., Aug. 10 as her guest since last Friday fora will remain in bed for several returned from Orlando where they Epperson, Sr. J.\\ Mr. and Mr. Maxie Carter werein ten day visit Kay Partney of spent the past six .weeks with / days. Her sister, Mrs. J. S. Adair Jacksonville on business Tues of Went Palm Beach. Kay and West Palm Beach In making an their son-in-law and daughter, Mr. and Mrs. J. E. Hildebran day. Keep YourTelevl Vacation Riding Mary Rivers spent'Tuesday night Indefinite visit with the Hallos. Dr. and Mrs. II. B. McLendon and children Gary and Linda,. returned - STERfric with France Caldwell. at the They were accompanied home by last Friday from a two-, Sheriff J. D. Reddish and M. VJohn State Farm. Jean- werfc's! vacation spent with rela- Miss I f4telr granddaughter ,. ( Mr. and Mm. Joe I' Tomlmson were In P'nnt City on busl Ives and friends at Chimney Rock rtctte will refrigerator had as their weekend guests their McLendon who spend I ness Tuesday. Mrs. Jacqueline Moorer' and daughters and families Mr. and this month visiting In Starke. I Morgantown and Shelby N. C. I e PHI1CO Mrs. Mural Gardner and children Mrs. H. C. Slaughter and childrenof I visited' Bok Tower at Lake Wales T.r.: Griffin who has been ill Star Brand and Toll I'arrot The Jasper and Mr. and Mrs. H. J. 
Life And TimesOf efatOrVeI7/ Sunday. the past week, left Tuesday for CooK and children .)f hastings Shoes, for Boys and Girls at " Jacksonville where, he will receive New River Folks ! Stump's.Irvin . at Joe Andrews of Fort Bragg N. 'treatment at the home of Ms son- Let us Install Mrs. O. W. Alderman hail Visiting fly Jennie Lee Luxenhy ORTHY'S C. ia visiting his parents. Mr. and in-law and daughter Dr. and Mrs her and Robbina returned this her this week ann set Mrs. :J. W. .Andrews, while waitIng C. C: Mendoza.Col. a orders. daughter-in-law, Mr. aul 1 Mrs. G. week after spending Ills vacationIn Chancy and Eloise, Dannie of comfortable C. Alderman and do-v Bruce, of Cleveland and Now York City. Cowen and Lou and Fred John I- our and Mrs. I D E.: Knight: spent spent Baltimore, Md. Mrs. .Aldermnil also entertained Rev. and Mrs. Hugh Mrs. J. R. ,Kite Wednesday Wednesday In Jacksonville. T had as weekend gunstH, Mr Weekend'! guests of Mr. and Mrs Walter and daughter of High and Thursday visiting Mr. and and Mrs. Geo. C. White of Tallahassee A, L. Crosby. Sr. were Mr. and I' Among the out-of-town friends Spring. and Mr. and Mrs O. H T Seat Covers Mrs. George M. Moody at Flagler Mrs. Richburg Hardy and daughter -{ and relatives attending" Mrs. Clara Patterson of Weston, Ga. with a In Straw or Beach. Susan, of Pompano Beach reek's funeral Saturday afternoon chicken aupper and swimming "Peaches & Cream" Back Mrs. Joe Thrasher of Daytona were' : Mr. and Mrs. Julian Perk party at tarts Beach, Klngsley 2-DOOR Stomach Gas Taxes to school Dresses at Stump's. Beach and Mrs. Sam Bleasdule lof i Fort Lauderdale Mr. and Mrs. Lake Thursday evening. After the The HeartAn and children of Cottondale.Miss :r. P. Wilson of Jacksonville, Mrs.F. I party the Patterson returned'tr Chas. C. Shepherd IILCOiERATORS Mr. and Mrs. Guy Andrew and A. Cook of Lake Worth. Mrs. ,their home. accumulation of gas In the son Tommy, spent a long weekend' I Bernice White of Toccoa,:IC. It Hall. Mr. and Mr. 
…will arrive Monday to be the guest of Miss Edna Nogel for several days.

Mr. and Mrs. P. S. Colson, traveling down the East coast and up the West coast of Florida, are in the State for a two-week vacation and are visiting most of their relatives in various parts of the State.

…and Mrs. Maude Hall, all of El Paso, Tex., and Mr. H. Ritch of Gainesville….

Mr. and Mrs. Pinholster of Brooker were visiting their niece, Mana Wiggins, recently.

Sharon Cluff returned to her home in Clearwater the past weekend after spending two months with her grandparents, Mr. and Mrs. V. S. St. John.

Mrs. Robbie Shaller and Mr. and Mrs. Bill Tilley and son, Steve, accompanied Carl White, Jr. to his home in Nashville, Tenn., last Friday, where Mrs. Shaller and the Tilleys will visit relatives this week. Mr. White had been a guest of the Shallers the preceding week while enroute to Nashville from Miami. Mr. Shaller will call for the families in Nashville this weekend. …visited their sister, Mrs. George McDaniel, in Tallahassee Wednesday of last week.

Don Ray Tilley and Ben Rowland visited at State Farm Friday night.

Mrs. Leaton Morgan spent the past week with her mother, Mrs. Ira Newton, in Auburndale. Mr. Morgan called for her Saturday.

Friends of Miss Madge Middleton will be happy to know she is getting along nicely, although still on crutches from a knee injury received a week ago Sunday while at the Middleton camp in Cedar Key.

Mr. and Mrs. Frank Fegler of Nutley, N. J., are visiting Mr. and Mrs. Ferri and touring south Florida.

Felton Moody, Joe Lazenby, Johnnie Francis, Bobby Crews and Louise Johns attended the weiner roast Saturday night given by Carolyn Crews.

Friends of Junius Smith will regret to hear he broke the middle finger on his right hand last week while at work in the Smith laundry, when the lid of a washing machine fell on it.

Mr. and Mrs. W. S. Colson and son Cecil of Orange Park spent the weekend in Starke with relatives.

Mrs. Lonnie Drake of Lakeland and Mrs. Lottie Huckleby of Inverness were here visiting their sister, Ida Merritt, and Nan Moody. Nan and Mrs. Moody are helping her grade tobacco. Nan has all her tobacco through now and has some to sell.

Mrs. Daisy Perryman is visiting relatives in Georgia this month.

Harold and Janice Shiver of Gainesville were visiting Janice's mother. Mrs. Dora Burton of Rome…, and Cheryl and Carol of Atlanta….

Mr. and Mrs. George Bennett and Kim returned to their home in West Palm Beach Sunday after spending a month with friends and relatives here.

Mr. and Mrs. Carl Bailey and sons, Carl and Frank, of Brunswick, Ga., were Sunday guests of Mrs. J. L. Anderson, who is working in Baxley, Ga., and was home for the weekend; she will return to stay on the 15th. The Andersons also had as Sunday guests Mr. and Mrs. Lyle Anderson and son Mike of Green Cove Springs and Mr. and Mrs. J. C. Wynn of Jacksonville.

Mr. and Mrs. Joe Gray of Jacksonville were Sunday guests of the Jewett Foggs.

Leslie Biggs is expected home the middle of August from Williamsburg, Va., where he has been with the Third Army, employed as a lumber inspector since the early part of July.

Seaman and Mrs. Gene Barrett and children returned to their home in Minneapolis, Minn., last week after a year's residence in the Nollman Court. Seaman Barrett received his discharge from the Green Cove Naval Base last week.

Mr. and Mrs. J. H. Redgrave and children returned last week from a ten-day trip to Niagara Falls, Washington, D. C., and the Blue Ridge Mountains. Mrs. Joe Martinez accompanied the Redgraves to Rochester, N. Y., where she is making an indefinite visit with her son, J. A. Martinez, and family.

Mrs. Bertha … returned Tuesday from a nine-day visit with relatives in Miami, Fort Lauderdale and Winter Haven.

Joe Dowling of Live Oak was visiting his mother, Rhoda Dowling, Saturday night. He was off for a couple of days to go to a doctor; he recently underwent an operation and at intervals has to go back. Elmore Cason, Fonnie Crews and Joe Lazenby went to Live Oak to help gather acres of peas for Joe Dowling while he went to the doctor. It was estimated Joe had 500 hampers of peas, and he had them sold to a freezing concern in Lake City.

Mabel Sapp got busy the other day and put several fryers in her freezer.

For Sale: Zinnia, and chrysanthemum-flowered marigold plants. All colors. R. F. Young, Christian St.

(Adv.) Gas on the stomach forms pressure on the heart and results in bloating, gassy catches, palpitation and shortness of breath. This condition may frequently be mistaken for heart trouble. INNER-TONE is helping such gas "victims" by digesting their food faster and better. Taken before meals, it works with your food. Gas pains go! Bloat vanishes! INNER-TONE contains Nature's herbs plus iron and vitamins B-1, B-2 and B-6; also enriches the blood and strengthens the nerves, and gives you pep. Weak, miserable people soon feel different all over. So don't go on suffering. Get INNER-TONE at KOCH drug store. (Adv.)

Seat Covers . . Upholstery. 307 W. Call St., Starke, Fla.

Dedicated To Community Service.
Mr. and Mrs. … of Wildwood are visiting Mr. and Mrs. J. S. Sapp this week.

Mr. and Mrs. Tommy Gomez of Tampa were Sunday guests of Mr. and Mrs. O. L. Linnin. Gomez will be remembered as receiving boxing recognition during World War II while stationed at Blanding.

Everett McKinney and George Nasworthy of Jacksonville spent the past week in Miami and Key West on business.

Mrs. L. E. Miner, who has spent the past three weeks visiting Mr. and Mrs. Herbert Green, left last week for Providence, where she will visit Mr. and Mrs. T. O. C….

Rev. Dodd and the boys ate dinner Sunday with the Gurney Crews family of the State Farm. Mrs. Dodd had to stay home from church with the two girls, who were sick.

Miss Sue Ann Barnes of Kingsland, Ga., is visiting Miss Carolyn Vickers this week.

Bill Hansen is certainly cleaning up his trailer park grounds. He is putting up picnic tables and still clearing more ground. It is getting to be quite attractive around the filling station.

Mrs. T. M. Hutchings and her houseguests, her sister, Mrs. F. J. Jackman, and great nephew, Myron Hartfield, of Cleveland, Ohio, spent last week in Pensacola visiting relatives.

Mr. and Mrs. R. J. Davis spent Sunday in St. Augustine visiting Mrs. Laura Gene Tatum and Mrs. Lula B. ….

Mr. S. J. Benton and son, Bobby, were called to Baxley, Ga., last week by the serious illness of Mr. Benton's father.

(Adv.) Adequate insurance is the answer. We'll be glad to work with you on a program to give you adequate and well-rounded financial protection. See us.

FLORIDA THEATRE: "Movies Are Better Than Ever." Sat., Aug. 4, only: …. Sun., Mon., Aug. 5, 6: M-G-M's picture with Ezio Pinza and Janet Leigh; also "Spree" cartoon and latest news. Tues., Wed., Aug. 7, 8: double feature; also "The Timid Toreador" cartoon and latest news. Thur., Fri., Aug. 10: ….

1951. BRADFORD COUNTY TELEGRAPH, STARKE, FLORIDA

LAWTEY

Miss June Caldwell returned Saturday from St. Augustine Beach, where she had visited Mr. and Mrs. J. R. Chasteen, Jr., since Thursday.

The Lawtey Methodist Sunday school had their annual picnic at Strickland's on Kingsley Lake last Thursday afternoon.

STATE FARM
By Mrs. Bryan Whitfield

Ezio Pinza Starred In New Film Coming To Florida

"Strictly Dishonorable," a new musical comedy starring Ezio Pinza and Janet Leigh, is scheduled for showing this Sunday and Monday at the Florida Theatre.

Notice To Appear: You are hereby notified that a suit for Annulment has been filed against you in the above named Court and cause, and you are required to serve a copy of your answer or pleading to the complaint on the Plaintiff's Attorneys, Evans T. Evans and Neal D. Evans, Jr., 103 Law Exchange Building, Jacksonville, Florida, and file the original in the office of the Clerk of the Circuit Court; otherwise the allegations of said Complaint will be taken as confessed by you. This notice shall be published once each week for four consecutive weeks in the Bradford County Telegraph. Dated this 26th day of July, 1951. (OFFICIAL SEAL) A. J. Thomas, Clerk of the Circuit Court. By Myrtle T. Deaver, Deputy Clerk.
Pinza, cast as an opera star who cannot resist the women, gets himself into a heap of trouble when he tells an opera aspirant that she has the voice of a fishwife. This lady's husband happens to be the editor of New York's most powerful scandal sheet and makes full use of it in an effort to "frame" him. Janet Leigh enters from way below the Mason-Dixon line and, while acting as a super in one of his operas, inadvertently ruins the performance and is suspected of being a tool of the editor's.

Canasta Party

In honor of Kay Partney of West Palm Beach, Frances Caldwell was hostess at a canasta party at the home of her parents, Mr. and Mrs. J. O. Caldwell, Tuesday evening. A profusion of summer blossoms in a variety of artistically blended hues decorated the party rooms. Invited guests were Mary Rivers Stubbins, Jane Richarde, Jewel Davis, Carolyn Crews, Patricia Willard, Nancy Bates, Ovieda and Lomiah Griffis, and Fredda Dees. At the conclusion of the games, cookies and lemonade were served to the players.

During the weekend the following relatives and friends from out of town visited Mr. and Mrs. E. J. Yarborough: Mr. and Mrs. A. A. Youngblood; Mr. and Mrs. Fred Johnson and son, and William Prevatt, of Waldo; Mrs. M. L. Ennis; Marvin Jackson and George … of Jacksonville; Mrs. Alice …; Mr. and Mrs. William … and children of Gainesville; and Mr. and Mrs. J. B. Johns and their children of Kingsley Village. All enjoyed bathing and other forms of recreation. At five o'clock a picnic supper was served, and after supper those that desired went bathing again. Friends will be glad to know that Mrs. E. J. Yarborough is recuperating nicely following a serious illness.

Mr. and Mrs. Luther Q. Savage and their three children were visiting in Lawtey last Monday afternoon. Mr. Savage taught in the Lawtey school for a term but is now located in Little Rock, Arkansas. They spent their vacation in Florida and were on their way home.

In Circuit Court, Bradford County, Florida: In Chancery. Notice To Appear. To: John P. Harper, whose residence is Mason Street, Henderson, North Carolina, and whose mailing address is Box 629, Henderson, North Carolina. You are hereby notified that a suit for Divorce has been filed against you in the above named Court and cause, and you are required to serve a copy of your answer or pleading to the complaint, and file the original in the office of the Clerk of the Circuit Court, on or before the 30th day of August, 1951; otherwise the allegations of said Complaint will be taken as confessed by you. This notice shall be published once each week for four consecutive weeks in the Bradford County Telegraph.

Notice to creditors: All claimants and creditors of the estate of Julia Johns, deceased, are notified to file their claims, under oath, with E. K. Perryman, County Judge of Bradford County, Florida, within eight calendar months from the first publication of this notice; otherwise said claim or debt will be barred by the Statutes of Florida.

In Circuit Court, Bradford County, Florida: In Chancery. Thelma Ward Moore, Plaintiff, vs. Stephen Elliott Moore, Defendant. Case No. 8543. Notice To Appear. To: Stephen Elliott Moore, whose residence and address is: Unknown. You are hereby notified that a suit for Divorce has been filed against you in the above named Court and cause, and you are required to serve a copy of your answer or pleading to the complaint on the Plaintiff's Attorney, and file the original in the office of the Clerk of the Circuit Court; otherwise the allegations of said Complaint will be taken as confessed by you.
However, the various complications are finally ironed out in a blast of music and humor. The picture was adapted from Preston Sturges' Broadway play of the same name.

Remodeling Permits Issued By City

Remodeling permits totaled $2,100 in Starke this week, according to records in the City Hall. W. T. Marshall was granted a permit for $2,000 worth of remodeling on the old Sparkman home on the N. W. corner of Church and Call Streets. Paul Raymond was granted a permit to add another room and a porch to his garage apartment on South St.

Mr. and Mrs. W. H. Pringle and son of Auburndale visited Mr. and Mrs. P. S. Crews and Mr. and Mrs. C. D. Sellers during the past week.

Mrs. L. G. Teston and Mrs. Carl Prevatt entertained last week with a stork shower for Mrs. Karl Teston at the home of Mrs. Prevatt.

Clifford Wood of Montgomery, Alabama, spent the past weekend with Mr. and Mrs. J. L. Williams. Mr. and Mrs. Marion Crews, Jr. of Lawtey and their little daughter … visited Mr. and Mrs. ….

Gene Griffis, who spent several weeks in Marianna visiting his grandmother, Mrs. Fords, returned to his home here last Saturday.

Mrs. Calvin Williams and her daughters, Sandra and Sherrie, have been visiting relatives in Atlanta and in Ringgold, Georgia, since Friday. Mrs. Williams and the girls will remain in Ringgold until the arrival later in the month of M-Sgt. Calvin Williams, who has been stationed in Alaska for many months.

Miss Enza Blanchard was a business visitor to Jacksonville last Monday.

Mrs. Jewell Royal of Tampa was a recent visitor of the W. F. Moore family.

Blain Teston, who has been confined to Riverside Hospital, Jacksonville, returned to his home here last Wednesday.

Mr. and Mrs. L. B. Crosby, Jr. were weekend guests of Mr. and Mrs. L. H. Futch. On Sunday the Futches and their guests attended the Crosby reunion at the home of Mr. and Mrs. A. L. Crosby, Sr. in Starke.

Mr. and Mrs. W. G. Stringfellow were Sunday visitors of Mr. and Mrs. Manassa in Green Cove Springs.

Mr. and Mrs. Elzie Stafford, their son Steve, and their visitor, Miss Vivian Collins of Macclenny, were guests of relatives in Frostproof and in Jacksonville while on vacation last week.

Last Thursday Mr. and Mrs. George Elston moved to their new home in Gainesville, where Mr. Elston entered the law school of the University of Florida.

NOTICE: To Whom It May Concern: You will take notice that the Board of County Commissioners of Bradford County, Florida, upon the petition of qualified land owners, will at 10:00 o'clock A. M. on the 6th day of August, A. D. 1951, at its office in the County Court House in Starke, Bradford County, Florida, consider and determine whether or not the County will vacate, abandon, discontinue and close the hereinafter described street, and whether or not the County will renounce and disclaim any right of the County and public in and to the land or interests situated in Bradford County, Florida, to-wit: "Pine Street" in B. W. Hayes Sub-Division, in Sections …, Township … South, Range 23 East, said Sub-Division recorded in Plat Book 8, page 48, Public Records of Bradford County, Florida. Board of County Commissioners of Bradford County, Florida. By M. O. Harrell, Chairman.

In Circuit Court, Bradford County, Florida: In Chancery. Glen Frances Whyte, Plaintiff, vs. Charles E. Whyte, Defendant. Case No. 8548. Notice To Appear. To: Charles E. Whyte, whose residence is: Unknown. You are hereby notified that a suit for Divorce has been filed against you in the above named Court and cause, and you are required to serve a copy of your answer or pleading to the complaint on the Plaintiff's Attorney, Joseph E. Skipper, 221 Graham Building, Jacksonville, Florida, and file the original in the office of the Clerk of the Circuit Court on or before the 27th day of August, 1951; otherwise the allegations of said Complaint will be taken as confessed by you. This notice shall be published once each week for four consecutive weeks in the Bradford County Telegraph. Dated this … day of July, 1951. (OFFICIAL SEAL) A. J. Thomas, Clerk of the Circuit Court. By Myrtle T. Deaver, Deputy Clerk.
Bridal Shower

Mrs. W. A. Knight was hostess at a shower complimenting Miss Marilyn Jones of Glen St. Mary, whose wedding was an event of Saturday, July 28. The Knight home was the setting of the party, and garden flowers were effectively arranged in color harmony to enhance its charm. In contests that carried out the bridal theme, Mrs. C. A. Roberts, Sr. and Lorenzo Griffis were prize winners. Assisted by Miss Bernice Bridwell, the honoree opened her lovely gifts, and after their display sandwiches, cake and punch were served. Guests included the bride's intimate friends from the Farm and adjacent towns.

Mrs. O. T. Dresden from Jacksonville spent this week with her parents, Mr. and Mrs. L. Durden.

Visitors at the Methodist Parsonage during the week were Mr. and Mrs. S. L. Sellers, Mrs. Frank P. …, … Wood and Miss Helen Walters of Jacksonville, and Mr. and Mrs. H. H. Johnson of Sarasota, Florida.

Mrs. H. P. Gainey of Brooker and her visitors, Mr. and Mrs. R. B. Raulerson of Pahokee, were supper guests of Mr. and Mrs. R. W. Brannen Friday.

Bryant Reddish of Jacksonville spent the weekend here with his parents, Mr. and Mrs. V. R. Reddish.

Jewel Parker of Miami accompanied Mr. and Mrs. Delbert Crews of Starke to Monticello, where they were dinner guests of Mr. and Mrs. John Crews.

Frances Caldwell was the overnight guest of Mary Rivers Stubbins in Starke Sunday.

Carolyn Crews entertained with a weiner roast Friday evening. This informal affair was given in honor of their visitor, Jewel Parker of Miami. Out-door games afforded entertainment during the party hours. Enjoying the courtesy were about forty boys and girls of the teen-age group from the Farm and near-by towns.

Little Miss Irene Moore, who spent a month in Tampa with her grandfather, Rev. E. D. Boyer, returned to her home here last week.

M. B. Futch of Jacksonville was a business visitor here last Friday.

Mr. and Mrs. Hubert E. Priest and Miss Loraine Priest had as their guests Mrs. Morene Summerlin and sons, Joe and Edmond, and daughter, Mrs. Ernest Williams, of Jacksonville.

In Circuit Court, Bradford County, Florida: In Chancery. … Cotter, Plaintiff, vs. Howard Paul Cotter, Defendant. Case No. 8651. Notice To Appear. To: Howard Paul Cotter, whose residence is: care of Superintendent, Ward 83, Veterans Hospital, Tuscaloosa, Alabama. You are hereby notified that a suit for Divorce and other relief has been filed against you in the above named Court and cause, and you are required to serve a copy of your answer or pleading to the complaint on the Plaintiff's Attorney, … Ripley, 220 Lynch Building, Jacksonville, Florida, and file the original in the office of the Clerk of the Circuit Court on or before the 20th day of August, 1951; otherwise the allegations of said Complaint will be taken as confessed by you. This notice shall be published once each week for four consecutive weeks in the Bradford County Telegraph. Dated this 17th day of July, 1951. (OFFICIAL SEAL) A. J. Thomas, Clerk of the Circuit Court. By Myrtle T. Deaver, Deputy Clerk.

NOTICE TO CREDITORS: In the Court of the County Judge, Bradford County, Florida. In Probate. In re: Estate of J. F. Kickliter, Deceased. To All Creditors and Persons Having Claims or Demands Against Said Estate: You and each of you are hereby notified and required to present any claims and demands which you, or either of you, may have against the estate of J. F. Kickliter, deceased, late of said County, to the County Judge of Bradford County, Florida, at his office in the court house of said County at Starke, Florida, within eight calendar months from the time of the first publication of this notice. Each claim or demand shall be in writing and shall state the place of residence and post office address of the claimant, and shall be sworn to by the claimant, his agent, or his attorney; any such claim or demand not so filed shall be void. A. J. Thomas, as executor of the Last Will and Testament of J. F. Kickliter, deceased. First publication July 13th, 1951.

In Circuit Court, Bradford County, Florida: In Chancery. Nancy Ann Weston, Plaintiff, vs. Cyril L. Weston, Defendant. Case No. 8562. Notice To Appear. To: Cyril L. Weston, whose residence is: 1700 Ransom Street, Philadelphia, Pa. You are hereby notified that a suit for Divorce and restoration of maiden name has been filed against you in the above named Court and cause, and you are required to serve a copy of your answer or pleading to the complaint on the Plaintiff's Attorney, Neal D. Evans, Jr., 103 Law Exchange Building, Jacksonville, Florida, and file the original in the office of the Clerk of the Circuit Court on or before the 27th day of August, 1951; otherwise the allegations of said Complaint will be taken as confessed by you. This notice shall be published once each week for four consecutive weeks in the Bradford County Telegraph. Dated this … day of July, 1951. (OFFICIAL SEAL) A. J. Thomas, Clerk of the Circuit Court. By Myrtle T. Deaver, Deputy Clerk.

In Circuit Court, Bradford County, Florida: In Chancery. Ruby Franklin Hudgins, Plaintiff, vs. Paul Osborne Hudgins, Defendant. Case No. 8640. Notice To Appear. To: Paul Osborne Hudgins, whose residence and address is: Box …, York, …. You are hereby notified that a suit for Divorce has been filed against you in the above named Court and cause, and you are required to serve a copy of your answer or pleading to the complaint on the Plaintiff's Attorney, T. Frank Landrum, Starke, Florida, and file the original in the office of the Clerk of the Circuit Court on or before the 13th day of August, 1951; otherwise the allegations of said Complaint will be taken as confessed by you. This notice shall be published once each week for four consecutive weeks in the Bradford County Telegraph. Dated this 11th day of July, 1951. (OFFICIAL SEAL) A. J. Thomas, Clerk of the Circuit Court. By Myrtle T. Deaver, Deputy Clerk.

In Circuit Court, Bradford County, Florida: In Chancery. Nicholas Delopoulos, Plaintiff, vs. Maria Karakatsani Delopoulos, Defendant. Case No. 8547.
Following visits with their grandparents, Mr. and Mrs. L. Durden, Miss Faye Gordy and her brother, Junior Gordy, have returned to their home in Baltimore, Maryland, and Eva and June Tate to theirs in Jacksonville.

Mr. and Mrs. R. W. Brannen attended the reunion of the descendants of Mr. and Mrs. Phillip ….

Enroute from Tennessee to their home in Fort Lauderdale, Mrs. J. … and … were Tuesday overnight guests of Mr. and Mrs. C. M. Turney. They were attending the tobacco sale in Lake City Wednesday of last week, and later visited the Stephen Foster Memorial near High Springs.

Thursday Mr. and Mrs. W. N. North had as their luncheon guests the former's brother and sister, J. L. North of Longwood and Mrs. Minnie Shields of Orlando.

LONG BRANCH
By Mrs. J. L. Strickland

Preaching services at Padgett Church were well attended last Sunday, with a good message by Vondee Durrance. He is taking this little church over for a while to see what he can do with it. His appointment will be announced later. The dinner was enjoyed by all, and I mean there was some dinner there. We will still meet there every 5th Sunday and take our dinner with us.

The Union Meeting of the Black Creek Association was held at Maxville last Friday and Saturday.

We are still trying to get the Adult Training Union organized here at the Mission of the Long Branch Church. We still need helpers. Some of you people who do not attend a T. U. anywhere else, come on and help us. The church will get credit for having a Training Union, whereas now it is listed without one. This is for all, not just Stricklands.

Minnie Lee Gurr and sons have gone to Jacksonville to visit a few days and will probably be returning to California soon.

LAKE GENEVA
By Blanche Cunningham

Mrs. A. F. Hall and children of Jacksonville, Mrs. H. M. McRae, and Mr. and Mrs. R. Myrick and daughter of Pine Level were recent guests of Mrs. Sue Myrick.

Mrs. T. M. Finley and daughter are newcomers to our town. They formerly lived in Starke.

The condition of "Dad" Forsythe, who is in a Jacksonville nursing home, remains about the same.

Mr. and Mrs. W. O. Watkins of Jacksonville spent Sunday with Mr. Watkins' mother, Mrs. Ida Gentry, and sister, Lillian.

Ernest Davis seems to be doing all right for himself, having caught several large ones, as well as his limit in bass, recently.

Mr. and Mrs. Elmer Forsythe and children are home from a two weeks vacation spent in Connellsville, Pa.

In Circuit Court, Bradford County, Florida: In Chancery. Annabelle Devinney, Plaintiff, vs. Harold Devinney, Defendant. Case No. 85…. Order of Publication And Notice To Appear. STATE OF FLORIDA to: Harold Devinney, whose residence and address is: Unknown, but whose last known residence and address was 653 East Whitehill Street, Rock Hill, South Carolina. You are required to appear on August 27, 1951, in the above named Court and cause, wherein plaintiff seeks divorce. Bradford County Telegraph shall publish this Order in next four weekly issues. Witness my hand and official seal at Starke, Florida, this July 23rd, 1951. (OFFICIAL SEAL) A. J. Thomas, As Clerk of said Court. By Myrtle T. Deaver, Deputy Clerk. …, Starke, Florida, Attys. for Plf.

In Circuit Court, Bradford County, Florida: In Chancery. Lester J. Harper, Plaintiff, vs. Beth H. Corser, Defendant. Case No. …. Notice To Appear. To: Beth H. Corser, whose residence is: … Katherine Street, Medina, New York. You are hereby notified that a suit for Divorce has been filed against you in the above named Court and cause, and you are required to serve a copy of your answer or pleading to the complaint on the Plaintiff's Attorney, Zach H. Douglas, … Lynch Building, Jacksonville, Florida, and file the original in the office of the Clerk of the Circuit Court on or before the 13th day of August, 1951; otherwise the allegations of said Complaint will be taken as confessed by you. This notice shall be published once each week for four consecutive weeks in the Bradford County Telegraph. Dated this … day of July, 1951. (OFFICIAL SEAL) A. J. Thomas, Clerk of the Circuit Court. By Myrtle T. Deaver, Deputy Clerk.

In Circuit Court, Bradford County, Florida: In Chancery. Nancy E. Gambino, Plaintiff, vs. Peter Carl Gambino, Defendant. Case No. 8584. Notice To Appear. To: Peter Carl Gambino, whose residence and address is: 299 … Avenue, Buffalo, New York. You are hereby notified that a suit for Divorce has been filed against you in the above named Court and cause, and you are required to serve a copy of your answer or pleading to the complaint on the Plaintiff's Attorney, T. Frank Landrum, Starke, Florida, and file the original in the office of the Clerk of the Circuit Court on or before the 17th day of August, 1951; otherwise the allegations of said Complaint will be taken as confessed by you. This notice shall be published once each week for four consecutive weeks in the Bradford County Telegraph. Dated this … day of July, 1951. (OFFICIAL SEAL) A. J. Thomas, Clerk of the Circuit Court. By Myrtle T. Deaver, Deputy Clerk.

In Circuit Court, Bradford County, Florida: In Chancery. W. O. Ieler, Plaintiff, vs. Sylvia S. Ieler, Defendant. Case No. …. Notice To Appear. To: Sylvia S. Ieler, whose residence is: … Madison Avenue, New York City 28, New York. You are hereby notified that a suit for Divorce has been filed against you in the above named Court and cause, and you are required to serve a copy of your answer or pleading to the complaint on the Plaintiff's Attorneys, Rothstein & Warren, Smith Building, 211 E. Forsyth Street, Jacksonville, Florida, and file the original in the office of the Clerk of the Circuit Court on or before the 13th day of August, 1951; otherwise the allegations of said Complaint will be taken as confessed by you.

In Circuit Court, Bradford County, Florida: In Chancery. Lois T. Taylor, Plaintiff, vs. Jeff D. Taylor, Defendant. Case No. …. Notice To Appear. To: Jeff D. Taylor, whose residence and address is: Unknown. You are hereby notified that a suit for Divorce has been filed against you in the above named Court and cause, and you are required to serve a copy of your answer or pleading to the complaint on the Plaintiff's Attorney, T. Frank Landrum, Starke, Florida, and file the original in the office of the Clerk of the Circuit Court on or before the … day of August, 1951; otherwise the allegations of said Complaint will be taken as confessed by you. This notice shall be published once each week for four consecutive weeks in the Bradford County Telegraph.

(Adv.) SPECIAL! One week only. Supply limited; order today. 1951 model portable sewing machine with a motor made by DELCO, Div. of General Motors. $19.95. 10-day trial plan. For a free home demonstration, see coupon below. ACME VACUUM STORES, INC., 408 W. Forsyth St., Jacksonville, Florida. Coupon: "Without obligation, I would like a FREE home demonstration of the wonderful NEW PORTABLE Sewing Machine at $19.95. Name …. Address …. City …. If R. F. D., send specific directions."

WANTED: Pulp wood, cross ties and pine timber. Will pay highest cash prices for any size tract. Phone 150, Starke, Fla.

LET US CHECK YOUR FIRE INSURANCE NOW. Representing THE TRAVELERS, Hartford. Charley E. Johns Agency, Phone 13, Starke, Fla.

Dr. Seymour R. Marco, OPTOMETRIST. Eyes examined, glasses fitted. 121 W. Call St., Ph. 219, Starke, Fla.
(Adv.) Let us wash your clothes thoroughly: 9 pounds for 40c! Free bleach if desired! We will also dry your clothes for 20c in new automatic driers. The Washeteria, 301 So. …, Starke, Fla.

(Adv.) Wheels can be sheared off by excessive friction in wheel bearings that are not safely lubricated. Car manufacturers recommend that wheel bearings be repacked every 3,000 or 10,000 miles, depending on make. For safety's sake, drive in today and let us inspect your wheel bearings, clean and repack them with Sinclair Litholine, the superior wheel-bearing lubricant. Service Center Station, Seeber D. Goodman.

Announcing the establishment of CRUMP ENGINEERING & CONST. CORP., 1032 Hendricks Ave., Jacksonville, Florida. Registered land surveyors and civil engineers.

Registered MEAT MARKING INK, guaranteed to conform to U. S. Dept. of Agriculture regulations. 4 oz. bottle, …c. Bradford County Telegraph.

AUG. 3, 1951. BRADFORD COUNTY TELEGRAPH, STARKE, FLORIDA

Dr. Roy Kemp Conducting Revival At Pine Level Church

Dr. Roy A. Kemp of Fort … is conducting a revival at Pine Level Church.

The First Methodist Church of Starke cordially invites you to attend services of worship on Sunday, August 5. The Church School begins at 9:45 a. m., with a class for every age group. The Rev. Thomas G. Mitchell will preach at the 11 o'clock service on the sermon topic, "The Forgiveness of Sins." The Lord's Supper will be served at the close of the service. The Senior and Intermediate Youth Fellowships meet at 7 p. m. each Sunday evening and enjoy a Happy Hour together.

The First Baptist Church of Starke, completely air-conditioned, invites the public to worship. Sunday morning the message will be "…Haystack…," and at eight the message "Sin Reigns." The service will be preceded by a concert of special music by the Youth …, with George A… at the piano. Mrs. Morse and Miss ….

BROOKER
By Mrs. E. J. Lewis

Birthday Celebration: C. C. Deese and sons, James and Kenneth, all of Santa Fe, and O. W. Barry celebrated their birthday anniversaries with a dinner and swimming party at Hampton Lake Sunday. Present to enjoy the occasion with them were Mr. and Mrs. Raymond Crews, Mr. and Mrs. W. C. Tompkins and Miss Mae Hinden of Jacksonville, Mr. and Mrs. H. D. Lynn of Lacoochee, Mr. and Mrs. J. E. Hardy and children of Starke, Mr. and Mrs. C. C. Deese and children of Santa Fe, Mr. and Mrs. L. W. Barry and children, and Mr. and Mrs. O. W. Barry of Brooker.

…spent the weekend with Mr. and Mrs. Lloyd Barry and children, also visiting at Alachua with Mrs. Bill Gainey and family.

Edna and Betty Teston spent last week visiting their grandmother, Mrs. Alice Davis, in Titusville.

Mr. and Mrs. H. D. Lynn of Lacoochee spent several days this week with the O. W. Barrys.

Mr. and Mrs. M. L. Ogden of Pompano visited relatives here last week.

The … Goolsbys of Little River visited Mrs. Doris Kelly last week.

Mr. and Mrs. H. B. Crosby had as guests recently Mr. and Mrs. John D. Anderson of Panama City. Mrs. Anderson, who is a cousin, remained for several days with Mrs. Crosby while Mr. Anderson was on business in Jacksonville.

Bill Gainey of Alachua spent last weekend with Mrs. Eva Harrell.

Miss Mamie Clark is spending this week visiting in Palatka.

Mr. and Mrs. Clarence Jackson and children of Jacksonville were weekend guests of the Malcolm Lewises.

Mr. and Mrs. W. E. Lewis of Plant City were guests one day last week of his sister, Mrs. N. C. Mott, and Mrs. Groover Moore.

The Leo Smiths and little son, Webster, visited the T. W. Markeys during the weekend. Mrs. Smith is the former Miss Blanche Bergdorff, a member of the Brooker school faculty in 1929-30.

Mr. and Mrs. Tom Edwards of Worthington Springs spent Sunday ….

Mr. and Mrs. Seeber Edwards and family of Tampa were Sunday guests of Mrs. Eva Harrell.

Mr. and Mrs. James W. Thomas and sons visited her parents, Rev. and Mrs. C. A. Crosby, at Penney Farms last weekend.

Mr. and Mrs. Warren … and children of St. Petersburg were weekend guests of the S. M. Motts.

Mr. and Mrs. Horace Stokes and Mrs. Bertha Stokes attended the 5th Sunday Primitive Baptist meeting in Jacksonville last Sunday.

Wright Crosby of Greensboro is spending the week here with his ….

… Davis, who was killed in action in Korea.

Weiner Roast: A large crowd of friends and ….

Notice is hereby given to all creditors and claimants of the Estate of …, deceased, late of that county, to file your claims with the County Judge of Bradford County within eight calendar months of the first publication hereof, or the claim will be barred by the Statute of Limitations. This the 20th day of July, 1951. Eugene C. …, Admr.

NOTICE: To Whom It May Concern: This is to notify all whom it may concern that I have filed my Final Returns with the County Judge of Bradford County, and will, on the 30th day of August, 1951, apply for Final Discharge as Executrix of the Estate of P. A. Johanson. This … day of July, 1951. Loyce Hartmann, Executrix of the Estate of P. A. Johanson. First publication July 6th, 1951.
cousin, Richard Crosby.Mlsa ATTENTION IPlld the singing by a "Fellowship H<>ur" In the recrea- Worth, Texas, nationally known relatives attended a weiner roast and Mrs. Philip Crosby and Helen Barry last COLONIALFLORIST Ration of familiar tlonal building at the close of the I evangelist and Bible teacher" 1 18 at Hampton Lake last Friday spent It You Have son, Raymond, and Mr. and Mrs. week In Lt'eaburlr with her grandparents - evening service.At night In honor , of Tetatone preaching Truby nightly the at Pine Level the Warren Crosby and children of Mr. and Mr. J. A. PINE OR CYPRESS 8pm. evening service Baptist Church who celebrated his 23rd birthday: frhool meets at 9:43: aucordlng to anannoun..empnt St. visited\ relatives Petersburg ' the subject will Conerly. "'" M of teachers and ofire Prove be "Can You by Rev. W. E. Hall, anniversary. here last weekend and attendedthe Mr; and Mrs. Wilson Green and TIMBER TO SELL' w Christianity Vacation . pastor of the chruch.Dr. lllble SchoolA for the more than The 11th Crosby family reunion at the children and Mr. and Mrs, Howard ,,,.. Chapter of Hebrews large'crowd of folks Kemp's lectures young Large or Small Tracts Orle Wig- are being fattemters will be the theme of well home of the A. L. Crosbys in Douglas and children are enjoyinga FltO W E Late, Superintendent. the Bible attended. During the week are attending a Vacation Bible R S gather at 6:43: for Study and Prayer Meeting on he plans to lecture on "The End School at the Brooker Baptist Starke. two weelm vacation at Brooklyn We Buy All FOR At2boCASIONSDISH Jons Mrs N. C. Mott spent last week. Lake : ! membership"for Wednesday evening, August 8, at of Time," using the Book of Revelations Church. It began July 30 and will church end In Starke with her son-in-law The R. B. Raulersona af m1jj ;\ . . 7:30: pm. Everyone Is as the last Pahokae \ ., the various age cordially basis for his themes. two weeks. All children are PAY TOP PRICESWrite " , ? 
and daughter, the Edgar Wilkinson the weekend with her - T. Lawson.eervire invited to attend The public Is cordially Invited. urged to attend. spent moth. . |t ted by J. ,- U. M. .S.! Serial MeetingThe a. er, Mrs H. P. Galney and attended e. .... T will be held The Lyon Crosbys and son, the Crosby reunion. ( ) jet T:45 with Rev: Wil- At The Tabernacle Baptist Conclude night of W.last 1S.week. met at Thursday KeystoneInn. Larry, of Greensboro spent several Mrs. Carrie Larrimore of St or Call n i charge in the ab- 'Clarence Successful Vacation The devotional led days here \last week visiting Petersburg spent last week herewith Melton was by and . who will be a group:!, of GARDENS j' Pastor i I relatives and friends. On Wednesday her sister-in-law. Mrs. Mattie -, young people from Emmanuel Bible Mrs. Earl Willis. A missionary: with Rev. School a i revival they and Mr. H. P Sowell. B. THOMAS Baptist Church, Gainesville, program was presented by Mrs.James I I P0TTERYPOTTED presented ' f praham.y Gnlney" the Howard Motts, the H. Mr. and Mrs. Alex William of a fine at the The First \V ThomaX program chairman program welcome awaits thee Baptist Church or B. Croebys; and Mr and Mrs Jacksonville were weekend LUMBER CO. guest: PLANTS Youth for Christ with Miss Lubedla Plnhol- Starke. meeting Saturday Starke has just closed a most successful Baptist First Carl Roberts, of Starke enjoyeda of her parents, the Dave Green, evening In, the Tabernacle. Vacation Bible School ewer and Mrs. Eugene Crosby taking PHONE 211-J and fish fry at Mr. and Mrs. FLOWERS WIREDANYWHERE swimming party G. W. Davis and Featured were musical numberelon There was a faculty of 29 and. an part. Most of the group enJoyed .'. O. Box 735 i Church varied-isntrumente and an l Inspiring :1 enrolment of 150. A dally attendance a swim, after which" a picnic Hampton Lake children spent last weekend! InWaycrl STAKKE FLA. Mrs. P.V.. Gainey and son, I 8, Ga. 
where they attended Iwl message by the young of 144 made a high percent lunch was served. Nona St Stark at' 10 : begins am.Hly speaker, Richard Roberts. age of 96 for tHe school. !Mr. and Mrs. Dan Summers and Richard of Dundee are spending 'I Memorial Services Mil Sunday . hurch. Classed for; children of Wauchula spent the several days hero with her mother for their nephew, M. Sgt. W. M. -'" .:.." .: :. -. :' > The flips. Plan to be In' Another treat-ls In tore for Pastor was principal with " {1 and Church this I Sunday Rev. Hendon morning Brown August and 19 a, when Gos- the following: Beginner" teachers Dept.:, Mrs.and O.olfleers L. ...._, .-.... ., __ S $ WHY PAY MORE SAVE AT STARLINGS $ $j . : "_ -- '. . 'o'hlp! services be- pel Team tom Toccoa Falls Institute Linsln/ Supt. Mrs. J, Ella Hardy ' LoriTs Supper. servo will conduct the 11 a.m. Doris Vickers, Louree Edwards Prices Good We Reserve 'the . I am. service. Ruth Baker, Dot Shepherd Jua _ .i.-rvice begin at 7 p.m. Visitors at the Tabernacle Sunday : nita Harper, Rose Lee, Inez Brice j j r ffl . |]ian Endeavor. Evan- Included Rev. Paul Daugherty Misses Joan S11 cox, Sue Bigg - Mws begin at 8 p.m. I of Jacksonville, his brother, and Julia awsou. Primary Dept. Through Right too I i la Invited to attendWednesday Rev. Jay Daugherty, and their Mrs. T. B. Harding, Supt., teachers . ting at mother, Mrs. William:; Daugherty Mrs. Ruth Morse, Cora Dug. .. i study Is the fifth I of Texas. Also :ittr. and Mrs., Godwin ger, Louise McRae, Thelma Lamb Friday Saturday rar To Limit , (Romans. of Jacksonville.\ Ethel Raymond Mary Lawson " .I I Effle Carter Junior Dept.. Mrs . ..;I Ruth Haines, .Supt., teachers Lau Aug.. 3.4 Quantities rake Wiggins Hazel May Powell Itd < Ors 7 le D'UJHd4e4 I Ritch, and Mrs. Wilbur Scott ..,." ,.,. v v ,.. v. , ) Intermediate Dept., Mrs. L. N 1 Pickering Supt., teachers, Mrs u j Will be the Topic of the Pastor's Sermon j Ada Lawson, Crlatlne Lawson Sunday Nite at 8 P.M. and Mrs. J. P. 
Tomlinson. Schoo! Best Grade 14 OZa organist !Miss Julia Lawson .. [ST BAPTIST CHURCH,.Hampton Frmuello 231 Theressa ChristianA BOB GRAY Pastor fine crowd attended' service Catsup 19CIIIIIto ,Bible School 10 A.M. B.T.U. 7.P.M.: last Friday* night at the Theressa PrayerMeeting Wed. 8: P.M.:: Christian Church. Why not planto attend this Friday 'night at 8 KNTRSEKY OPEN AT ALL SERVICES"T pm.T Bring along the whole family - ... -_- .. QliltU- tf IIiIi 1CIII'II.jIi - and tell your friends to come. I Ir. i. \[ Swluclall again this will Friday bring evening.the message SARDINES tall can 2 for 25c PEACHES No. Can 2 For 35c $T BAPTIST CHURCH, StarkeRev. Presbyterian ChurchSermon Lewis D. Haines, Pastorrson 8: "Who topic Say for Ye Sunday That I Am"August ., U. S. No.1 l 10 Lbs. Flaga Short Grain 3 Lb. Pkg. & Court Streets In The Heart Of Town Sunday School at 0:43: : a.m . PHONES: Office. 234 Pastorium 130 .. Friendship. Class 10 am. Youth Fellowship 7 p.m. J 29C Potatoes D.\Y SERMONS* 11:00: A.M.:: 7:15 P.M. Prayer Meeting each Wednesday c..J Rice 35c School 9:45 &.M. Training Unions 6:15: P.M.\ at 8 p.m. WEEK SERVICE WEDNESDAY, 7:45 P.M. Young Peoples swim party Fri --r*-mi j j f j j **- *-f-f ffrfr **--rmrnir day August 3 at the Klngaley Lake . :Message -' "A Needle In a Haystack" home of Mr. and Mrs. Al Harley Evening Message "Sin Reigns" All are asked to meet at the -S I THE CHURCH WITH THE VISITING PASTOR Church at tf:30: p.m. GRAPE FRUIT JUICE 3 For 25c : CHERRIES Pie No. 2 Can 23c Church of ChristCorner _ 10th and McMahun: : - Bible School-10 A. M. 5 Lbs. Jim Dandy Flat Oil P.M.Preaching A. M. and 7:30: 4 ForSardines25 Prayer Meeting and Bible Study each Wednesday 7:30: P. M. . ANNOUNCINGTHE Everyone welcome. GRITSSPRAY 35c Church of God8th &: St. Clair SU OPENINGOF Sunday School 10 a.m. Sunday Morning Worship 11 am THE Sunday Evening Service 7:30 p m 1' P. E. Service Tuesday 7:30: pm Gulf 39c JCE CREAM " Qt. 2 Pts CHRIS. J. 
NEWBERN Prayer :Meeting Thursday 7:30 p.m 37c St. Edward's. Catholic nmur .g, B I IMH STUDIO tor.Rev Francis J. Dunleavy, pas Tejlow Rose 25 Lbs. Real Good SmokedBacon Lb. Sunday Mas at 9:13: a. m. . V..eekly.Mus. at 8: a. m. Confessions Saturday 5:00: to 3.00: P. ).I. and 7:30 to 8:30: P. 1>r. FLOUR .69 Children's Catechism Classes, 39C Saturday 10:00: a m. Adult Religious Instruction (SATURDAY AUGUST 4TH courses eVery Sunday 7 to 8 P. M.ind .. . by special appointment. Our CHt'KCII OF "F"It' C1IIUST *V Opening Special OF LATTFJl IJ \Y SI.NT.S SNUFF lOc SOAP ' Cashmere Bouquet 3 Masonic Temple, Call Street. 25c [ 8x10 8 Portraits $3.00 morning Sunday 10 School a. m Sacrament every Siftidaj Servo ....--- ----r -------_-... ."." nmipij> cea 6:30: p. m. A cordial Invitation IS extended to aILPF"lTECOST.L Frozen Fillet HOLINESS Apple 2Lbs. For Lbe _ _ 19th St. and Temple Ave. Sunday School:43 a. m.; Morn |lre Looted In Drown Building Close To relcom ng'Services Worship 7:00 11: a JV m.m.; Evangelistic Everyone\/ JELLY 25c Perch 29c Post Office CEMETERY ORKINO . PORTRAITS THE MODERN {WAY There will be a working at they j on Evergreen Friday Cemetery AugustlO. near Everyone Raiforif j COCA COLA 6 bottle carton: 19c. PEPSI COLA, 6 bottle carton 19c 1'10" Deposit t lua Deposit is urged to come and bring tools _ and lunch for the noon hour. 1$ $ WHY PAY MORE SAVE AT STARLINGS $ $ i -- I ..0" .., 'r"1.1 ...J. , ____- __ __ H- - , j' j AUG3-1951 TKVFGR\rn. STABKg. FLORIDA -------- 11111)\) BRADFORD coUNTY -gi-ggg i = = . P,\CF. EIGlfT Thomas Gives Lem Brown Funeral B. T. New Red Cross Makes I. ,4M Held Thursday Benches To LegionFor you t Saturday DancesThe Special AppealFor Lem Brown 69. a Ufelon resident Pictured below are four , Flood ReliefJohn flimits To FHA of Bradford afternoon Cr runty,follow died American Legion' Saturday recently built in Starke. 
T Ile altrllf k;' ; suddenly Tuesday night dance continued Us upward In spite of individual ,_, ,' heart attack at his home enternnse of the I chairman a attend A. Torode, -' tag with a paid of Starke. surge a crying need for small modern housed outskirts , Bradford County Red Cross Chap I oa the ance of 183: people last week. here.er p 8I 4.., 1883 reactivation lIlt was born January pective of BlandmJ M wire from National Camp ter has received a This new series of dances featuring this requesting the support of ? near Starke to the late Jasper the music of Toby Dowdy pult into a serious problem, if and when they! the local organization In the spe- |I and Lou Parrlsh Brown He was have been the most popular dances repopulated.] Starke is ripe for a housing + of the Corinth BaptistI member clal Disaster Fund now being rained a even sponsored in this area. I no one has come forward with a solution Pro'I ta I i Church. to . for flood relief along the MIn- I Strong Chairman Ray Dance SurvIvors, besides his wife, Neta sour 1 and Mississippi Rivera. 1 I announced that the B. T. Thomas tJ Howard Brown, Include) one son, Rtatad In that The telegram part i had donatedten brother, J. E. Lumber Company Brown, Starke; one "$urvpy of flood area Indlcotesdevaala.tlon to -the American and three benches destruction oft I grown Starke; sisters and Mrs Beulah Prescott, Mrs. Ida Legion to be used to help alleviate - homes ports. far Then-fore greater neces-ary than earlier public re l+ ICt, ar' Joodge and Mrs. Myra Dyal l, all of the seating. shortage at these: I dances. for five million be over Starke. t; appeal must Funeral serviced were held yes- Door prizes at the Saturday Each chapter subscribed.Its proportionate share' I 'erday afternoon at the grave. night dance were donated by assume Mr. Torode stated that the loca' I .Ide In Brown Cemetery with Rev, Leila's and the Florida Theatre. I some letter 1. A. Crosby officiating. 
Inter- These dances are held each Sat chapter la mailing out of appeal' and lIe hopes that me neat followed in the family plot urday night beginning at 9 p.m. I quota here will/ be oversubscribed. here at the Staike armory under the that anyone DeWItt C. Jones Funeral Home sponsorship of Jones-Langford- However, Jis urges ( desiringcontribute to this funddo i My w re vas In charge of arrangements White Post 56, American Legion. at once whether or not a letter i A >le received. I Mrs. Doicling Wins EAT MORE MEAL I I 1)onuts'On the lion**' ''. r t? ., P w ll t f tr \ Rotary Ironer AND GRITS! Pictured above Is. the, new home of \\', 11. GIIIJaa iIn i Winkler Contest Blandlng, Road. Though almoHt complete, (the(, IsI\ ill Will Introdvr>* Senator Charley E. Johns announced In until September. ' 'Teenir'' just a winner of the grand prize, a Hotpoint - Vlultors/ to Lovett's In Starke today ability but no money to f finance his Rotary Ironer which cli- from Cecil Webb of Tampa president - will of the Dixie Lily Milling Co.. ( Friday) and tomorrow ..' maxed the Magic Key Contest be treated to a nample. package of plans then I sponsored by Winkler Electric in payment of 10 share of stock in Bradford- County Development I, the new "Teenies" a product of Service for the past few weeks. r'r Lay's potato chip company. I -1,. -.- .'" A>- ..-II..IU.-. .....tL Her name was drawn for the grand, Enterprises, Inc -- "Tell to eat everybody more Teenlrs are small\ donuts, hard prize by Miss Kay Johnson of ly as large as a 50-cent piece, but' Maurice probably Treasurer of the FFA In' 1947 and I program Inaugurated by the i Miami, a disinterested party to Dixie Lily meal and grits," Senator rrtal.i all the characteristic of one of the moat aucieaiiCul Is currently serving a term as Government It provides among the drawing.. tIn I Johns said!, "for we certainly the full-sized donut. 
They will be I breeders and raisers of purebred president of tile Bradford County I' other things, for the alteration and I this contest, visitors to the appreciate what Mr. Webb Is do- stocked by Lovelts from this cattle In the county today and he's Cattlemen' Association. 'I repair of essential farm buildings, More were given a chance to.draw Ing to help build up Starke and week on only 21 years old. A few months I After his graduation from BHS ,|, for farmers who Vant to farm tull.I the/ "magic key" from a large container I Bradford County"Another the former time, the Pankhead-Jones Bill check for $1,000 was ago it was a different Story. He and his marriage to I I provides It the key they drew, un- school of Halford; Maurice for the purchase of farms at also recently received from Setzer'8 Betty Long I had graduated from high chest received - Rev. A. G. Karnes locked tho prize they and bought 110 acres south of realized Jhat his 110 acres wouldn't an interest rate of only tour p..rcent Super Store for ten shares a prize and their name was I Conducts ServicesAt Ft.irke for pasture land, but didn't io very far In making a success and gives. 40 years to com added to the cards In the grand of stock. - addto of Then too, he plete the payments. I The development company was Church have, the money to develop or raising cattle. container. It .was not Bayless It any further. I( needed additional funds for the Other sections provide for the: prize I organized here last fall for the Two doors down from the OrnhaniK I Is I he m-n I hoists to anything, just buy ' So Maurice secured a loan tinder Improvement of the land he already purchase and Installation of facll-: necessary the purpose of erecting a factory W. II. lIart.*Th... house will be totnpli-lp. h, the, 11I".1r. store. visit the provisions of the Farmer's I owned and the constructionof I, It'es for heating, cooking, lighting I building to be leased to the Big back In from the lake, Mr. Hart said! 
Home Administration and his some building, including: aJiouse. "and refrigeration Another II"C'I II Dad Manufacturing Co., Inf., ... . I -- -- -- -- ---- -- rnplil! rise In Ms cho">*n profession tlon provides for making a homesafe \\Neicbern Studio makers of 'work clothes The I I I has been building and I Iii| little short of amazing I So, he went to Clifford Currie, and sanitary by removing \I Will Open Saturday the factory is now In completed production. He derided. while still In high Director of the FHA program in hazards to the health of the app'.l, school that he wanted to raise this district and secured a loan j I cant, his family and the commu Chris J Newbern announces the purebred Brahma" cattle. And even for $11,100. Since obtaining the I nlty. However these loans haveto II opening of her studio In the Brown Troop 70 Scouts: Win 4e *' before graduation had become well i loan, he has bought an additional be paH at the end of ten years building near the post office on Honors At Camp"This known In this section as quite an 208 acres, adjoining his property I So far In the county this year Saturday August 4. t authority 'on the subject. | built an attractive five room con five farmers have taken advantage Mrs. Newbern is well known year's camp was one of Ho ion the Grand Championship Crete block house added to his of the FHA program and secured here, having been employed at the"most successful ever," E. T of the FFA division in the purebred herd, and la 1 just before I I loans totaling almost $ 0,000. Hoover's Studio for 18 years. She Shull assistant Scoutmaster of I Southeastern Fat Stotk Show In planting SO; acres of pasture land "Maurice EthvarJs U a good example now operates studios in Gainesville local Troop 70 said. Shull accompanied AINaH, aSw Ocala In 10J7 won the National in grass. 
I of what a man can do, if her present home, and also the boys to the one Chicken Judging Content In Water.1 I The assistance given Maurice l II I he will just come to us for the heap: In Lake Butler. week camping session at Camp): loo, Iowa In 1048 served as State provided for in the .housing i he needs, "Currle said Echokotee In Orange Park last week. ... <. Shull said that the boys were! <,,"" ... : : ::; : . :7. To Not To - IrrigateOr IrrigateWould very obedient, worked hard and I vy accomplished more this year than Ace;. the highway'"from the Harts U the nee' I .... .. . I Bradford County farmer : .' "" Norman, Wade Gtaklns and Troy ever before However, he said, the (Buddy) Bishop and family. The trii'tMini atrlal tai* have had a quarter of a million .I r Bennett Ferguson said that sc fact that the hoys slept In pup front porch are ample e\Idcme that the family.b I I I dollars more in their pockets this I f fi 'far all of these men have been tents Instead of the customary their new home. summer if they had been able to i wait satisfied with their projects cabins probably accounted for a .. ,.... ,. ..', : , " last I great deal of the extra :: ' Irrigate their crops sprint 1 One farmer told The Telegraph work. . Special services are being held I They might have collected< that r k I Jan Mundorff carried off honor F ,.":i''. '., .' .. '. I that he had used ,. ,, _. < irrigation " every Wednesday night In the I I much more this year," said County I' on a for the local troop when he ac S '..... ,< Prayer Meeting at iiuylvsa / Agent T. 1C. McGlane "but'I C '. strawberry crop and that the cumulated six feathers Each ., ,.:".,, ., t way Missionary Baptist 1I/ilh-//I they, might have spent more than' 4 f ground had crusted up and become feather ia emblematic of a different . 
The; Bible Is portrayed from a that In preceding years," he adJeil hard' as soon as the .water was scouting accomplishment chart called God's Theological/ To irrigate or not to liilgute l 1'1 i 1 1r taken off He said that it didn't during the camping 's8101l. How Diagram. The speaker Rev. A. O. the question In the mind of most him ever, Shull said all the boys from help a bit. When Karnes (above) states that this Bradford County farmers when the Fergusonwas the local troop earned the necessary enables the public to see and understand subject Is brought( up in routine : IKRUIATION SYSTEM:: -provides farmer with "con asked about It, he said that number of feathers to be awarded ' many things that will be conversation. Some say that they trolled" rain. Hut it Is too expensive! for "cheap" crops' the farmer probably didn't use the camp emblem ..' ,.. ., very helpful In days to come. Bev couldn't make a particular crop enough water and use It often Richard. Henderson received the 'f, ,," ' J. E. Ladd Is pastor'of the church. without and others I I Life Saving merit badge while .- ' It coining the costs or benef'ts' of enough. He recalled planting six ,1< ., -\... .. Everyone U welcome to attend couldn't make a crop Irrigation. Whether or not it will acres of Raymond Stefanelll, Don Mc- -.4 .' .,,,."' strawberry plants few a Clane and Glen Reddish pay a farmer to put In an Irrigation all received years ago when the weather their -. -' . .. """ 6 ,I I system depends almost directlyon Will swimming merit .,,. .. ryC I .. ,',. I his location and how lucky 'he extremely dry. He said that by badges Shull said that Freddie i ' I using Moore, Lawtey, also received Here In. the new home of Mr. and Mrs..t. Is, McClane said. Where a farmer Irrigation, he got a 90 percent a : merit badge for swimming of Monroe Streets. Tin'3. ' has access to water from a stand on the plants while most Lafayette amt ) .' ,'- ,. 
drainage ditch lake, or something of the other Though they failed to win, the first week of. July. strawberry growers local troop came In second In al" ou that order and doesn't hare tc were really having trouble activities Shull trast to the recently pim-ti drill a well and put In a great! deal said and almost approximately 6,000 , of expensive pipe; It will pay But Ferguson said that he planted walked off with best troop award cans produced last year. The activity up trunk to bund where a man has to drill a well 20 acres of butter beans about In June accounted\ for most acre farm betwe eight years ago and the New Elk of It when 9,077 number 2 cans Lawtey. put In overhead drought Official sprinklers, etaIt's SO EASY TO U5S: SO Mid thebers had and 8.962 0' almost all but killed them before he number 3 cans were Ferguson always a losing proportion started his "artificial filled The figures for July are of the local l eta"; according to those who have experimented. rainmaking However, he said the beans not available, yet. he said, but there running a caropli1. THI rlIII.miREMINGTON! reacted ...ode'doean.t ' There are exceptlo-s, of course. right away to the water and was very little activity anyway Springs this how t Orange grovel that are worth he had one of the finest crops ol ; i know all * 12.000 per acre can well afford tc butter beans he ever made. k te F.F.A. Chapter Buys as yet.Friday They mean! leave be Irrigated, but a man who )has a -When questioned about the cost ti gate1, PORTABLE Fertilizer sometime SpreaderAnother home crop that is worth only about }173: of operating an irrigation system TYPEWRITERt per acre can't stand the expense Ferguson said that there were nc addition was made to figures KindergartenRegistration and still make after available "There are money year (the growing list of equipment WITH AMAZING year It Is believed. just too many variables Involved,' owned by the Bradford County McClane said that he said. 
However, he added there MIRACtl TAS.lets < even agricultural Is little ETA Chapter this week when they engineers are at a loss doubt but what Irrigation ; purchased a fertilizer Martha SmUhklndl'rKartea IRG spreader, when It cornea to irrigation. 'Many will pay the progressive farmer V R. Ferguson, advisor to the rogiJttl- farmer try to water their crops It hi. crop Is a big "money crop' 4. group, announced.The held on TuesW tginnlning from the well that supplies water and! Ills location is right. spreader' Is also designed et 10 1m 1 >for the house and stock and then He said that there was considerable ?' for use as a lime spreader and building. Regular * ... and clean tabulator wonder why the well: goes dry In objection to Irrigation In this $ seed sower; he said The FFA Just I will begin Sept1" stops from keyboard level .h"I three days. Even engineers can't section because the water had tc . woo a ftu.lt of the auger. tell how much"water .U likely to be pumped at least 12 feet to gel , be In a well lie said until they it to surface height. This cost USE OUR NEW have pumped from" it three or four too much, many farmers told him r > day*. He said that many counties have a ...... TYPEWRITERCLUB :::; Mayor Wainwright U half-way number of natural artesian wells Howard R. Davie of Chicago n %nio NEPAIFD1 coLOREDlEI.P1ItI. . planning on Installing an irrigation and are more fortunate perhaps, Grand Exalted Ruler of i All "I- *- AH "' for "'alnl.no' system Mr. McClaoo said whea .It comes to Irrigation nevolent the Be Hp..hle ......,. a..rt..N.. Kingl.vPEAT p.or:. & cost : Protective Order elf . PLAN of I '" C'. .lfr.PlfF.HI but he's lucky where most farmersaren't. but even It it has to be pump Elks, has announced the appoint r..!_W r ,.... At. ........ PII. "'I1t Ht'IIl'S:' rot' WainwrIght has a drainage from the ground it still ment of pruol"'o"" r'N h Flnley Moore pays < ( I" Ye,by using out Typewriter Club Plan believea. 
above) at WEDDING AND BIRTHDAY CAKES Wnndprft , canal running right in back of District Deputy Grand Exalted -U up. Ordon taken at Dahm.r'. "...ry Ph"" " 'a small down payment will put the the fields he wants to irrigate and The type Irrigation to be em- Ruler for Florida Northwest This fart PhAp|>.. PhnnJ. tf PoP-ST....; ,., 6oese portable-the All New RumiagtoaInto to Install FOR 1'11'"' plans a small dam tc Division Includes foI'I1IlFR> 'I ployed varies lodges In Pensa 8EWINO; Machln. O. bnn. naturally >m- your home now I -am] you hare a .. block up .. much water aa he with. cola Martanna. Lake pang {...... city, Fla. Phon- UI. ...11111< number of factors Ferguson City. Ft will repair your \\ ":I1T I':p.11 ... said f machine In year to pay the balance plus a small needs. When the water la pale_ Walton. Tallahus.ee. too tip to at k. Gainesville Condition r..Dn..bl.. \v. an M charge. So why wait?-come la and test ed. the fall of the ground will that the only system he had used! Panama City Live Oak and Our.rvlr.H". only| .genuine Sinner part .lOon of /i 1 .. make the water flow evenly onto was surface Irrigation and.. Stark. Moore U can will pick up and .nnv.n , this portable beauty to-Jay,It'i Just the the a past Ruler oi 'lllJ'ir s.rvl.e. upon request ill r'a size. for fastest... best typing performance- the crop and eliminate the use of water had been pumped from the the Lake City Lodge and halF Si.i? Singer machines are avail. I.\ ., {.fi any pipeline or "This ground. He said held other nn r.a.nnnal. t-ritu HOME: JIo- truly the only odke typewriter in pen type of irrigation will Mc- Irrigation aystenj WANTED; ' pay", which will per. TO By* FARW-! h ,I tie.Rlandi . .. Clane -jald. form beat for you .is based on such 'r. more or I.ss. on .hard road I uRSr Canning Plant vl"nIL Starke etrk- or La..t. C. W. One farmer In the county had a data a* the amount of water avail Quinn' M* (Coll.. 8t, Jaeki.n. rQ'RANTF.n small berry patch with a road able. 
source of water, distance Will Close August 1 I .11I. Ctp 'lit La.d'" 1\ I" .. . ditch running right alongside .One from water source to field, eleva After I rOR 5ALW Lare two story hou_ Inc. n .t.t'harm ! BRADFORD COUNTY year the farmer dug a hole la'the lion from the water source to the Record SeasonThe containing iare. .three apartments on rnvid.d f earn lot Kmole" situate behind ditch and Irrigated his small acreage irrigated area number of I First acres U local Paptl.t Church Stark.. "ttI . with the water from this ditch be 'Irrigated, frequency of Irriga Plant will\ close.community for canning H,005<< .A. J. Thomas, Jr. AdmtnIntretor. Ins asstother ,'.' ... I However h. said, the Lion kind of the season 4t CI10 tearRot , STARKE. next year crop and type of soil August due to /tl .EONR when the farmer needed, the water. AU systems must be correctly for paying;! the the lack fund WILL bnm.KJ'E.P CHILDREN In my I t the ditch was a* .dry a* his berry engineered and Installed for maxl V R. Ferguson canning director Instructors, o PrU.t during. Starke school. term. Mrs"C "Tr.'". ". 'I'tj.wallrw said ttp fi , this . patch waa.Actually. mum efficiency Ferguson said week However "p very few firmer In but It will pay off In higher yield season' he said this A Church HtllTORT. 0' tb. First l1apU.t right' ..r w "' e production I'Itar.. ' K, t' .;;-.f'- 'ii. : ., : ,: ., the county are using Irrigation at per acre and crop insurance more than three times|Is already an Intsr.etfng pictorial rIa.. con.."tlon.t" alnln. Is. demigl.. ,d. 1ho. aiL According to V. R. Ferguson against drought amount canned the total Iron the pastor Re.. :.., rt> t> . during the entire Lwl" Haln or Mr. Alma Rnr e'r. agricultural teacher at BH9 aDd Meanwhile the question still year last ,ear.ThrouSh Hough Church Historian, .at a * aa avid believer In Irrigation some puzzling Bradford County farmers June 30 nominal prle. 
It FORDS and rttf/ l I of Ue farmer* us lag it In this I tit: ""Doe It pay to Irrigate or not Ferguson of this year ED. W. .''b."' ...*.... said the LOST \ paBt'ha. pottri " f canning lf-Poland China rnrdt. area Include Leon Conner Leon to irrigate r" 4, 1* turned out 18.983 can* la con- slit ..,. about 10. lb.. Notify P. .. n. )tart N 9P.tatk0. D. pij Reddish tip II" .
http://ufdc.ufl.edu/UF00027795/03127
Shared constants/structs that are private to libpattern. More...
#include "config.h"
#include <stdbool.h>
#include "mutt/lib.h"
#include "lib.h"
Go to the source code of this file.
Shared constants/structs that are private to libpattern. private.h.
Function to process pattern arguments.
Values for PatternFlags.eat_arg
Definition at line 36 of file private.h.
Type of range.
Definition at line 74 of file private.h.
Which side of the range.
Definition at line 111 of file private.h.
Lookup the Pattern Flags for an op.
Definition at line 210 of file flags.c.
Evaluate a date-range pattern against 'now'.
Definition at line 499 of file compile.c.
https://neomutt.org/code/pattern_2private_8h.html
Hello! Today we will look at setting up virtual smart card logon using a virtual Trusted Platform Module (TPM) for Windows 10 Hyper-V VM guests.
Here’s a quick overview of the terminology discussed in this post:
Smart cards are physical authentication devices, which improve on the concept of a password by requiring that users actually have their smart card device with them to access the system, in addition to knowing the PIN, which provides access to the smart card.
Virtual smart cards (VSCs) emulate the functionality of physical smart cards in software, though the Microsoft virtual smart card platform is currently limited to the use of the Trusted Platform Module (TPM) chip onboard most modern computers. This blog will mostly concern TPM virtual smart cards.
For more information, read Understanding and Evaluating Virtual Smart Cards.
Trusted Platform Module – (As Christopher Delay explains in his blog) TPM is a cryptographic device that is attached at the chip level to a PC, Laptop, Tablet, or Mobile Phone. The TPM securely stores measurements of various states of the computer, OS, and applications. These measurements are used to ensure the integrity of the system and software running on that system. The TPM can also be used to generate and store cryptographic keys. Additionally, cryptographic operations using these keys take place on the TPM preventing the private keys of certificates from being accessed outside the TPM.
Virtualization-based security – The following information is taken directly from Microsoft's documentation:
One of the most powerful changes to Windows 10 is virtualization-based security. Virtualization-based security (VBS) takes advantage of advances in PC virtualization to change the game when it comes to protecting system components from compromise. VBS is able to isolate some of the most sensitive security components of Windows 10. These security components aren’t just isolated through application programming interface (API) restrictions or a middle-layer: They actually run in a different virtual environment and are isolated from the Windows 10 operating system itself.
VBS and the isolation it provides is accomplished through the novel use of the Hyper-V hypervisor. In this case, instead of running other operating systems on top of the hypervisor as virtual guests, the hypervisor supports running the VBS environment in parallel with Windows and enforces a tightly limited set of interactions and access between the environments. Think of the VBS environment as a miniature operating system: It has its own kernel and processes. Unlike Windows, however, the VBS environment runs a micro-kernel and only two processes, called trustlets:
Local Security Authority (LSA) enforces Windows authentication and authorization policies. LSA is a well-known security component that has been part of Windows since 1993. Sensitive portions of LSA are isolated within the VBS environment and are protected by a new feature called Credential Guard.
Hypervisor-enforced code integrity verifies the integrity of kernel-mode code prior to execution. This is a part of the Device Guard feature.
VBS requires a system that includes:
Windows 10 Enterprise Edition
A 64-bit processor
UEFI with Secure Boot
Second-Level Address Translation (SLAT) technologies (for example, Intel Extended Page Tables [EPT], AMD Rapid Virtualization Indexing [RVI])
Virtualization extensions (for example, Intel VT-x, AMD RVI)
I/O memory management unit (IOMMU) chipset virtualization (Intel VT-d or AMD-Vi)
TPM 2.0
Note: TPM 1.2 and 2.0 provide protection for encryption keys that are stored in the firmware. TPM 1.2 is not supported on Windows 10 RTM (Build 10240); however, it is supported in Windows 10, Version 1511 (Build 10586) and later.
Among other functions, Windows 10 uses the TPM to protect the encryption keys for BitLocker volumes, virtual smart cards, certificates, and the many other keys that the TPM is used to generate. Windows 10 also uses the TPM to securely record and protect integrity-related measurements of select hardware.
Now that we have the terminology clarified, let’s talk about how to set this up.
Setting up Virtual TPM
First we will ensure we meet the basic requirements on the Hyper-V host.
On the Hyper-V host, launch msinfo32 and confirm the following values:
The BIOS Mode should state “UEFI”.
Secure Boot State should be On.
Next, we will enable VBS on the Hyper-V host.
Open up the Local Group Policy Editor by running gpedit.msc.
Navigate to the following settings: Computer Configuration, Administrative Templates, System, Device Guard. Double-click Turn On Virtualization Based Security. Set the policy to Enabled and click OK.
Now we will enable Isolated User Mode on the Hyper-V host.
1. To do that, go to Run, type appwiz.cpl, and in the left pane find Turn Windows Features on or off.
Check Isolated User Mode, click OK, and then reboot when prompted.
This completes the initial steps needed for the Hyper-V host.
Now we will enable support for virtual TPM on your Hyper-V VM guest.
That completes the virtual TPM part of the configuration. We will now move on to the virtual smart card configuration.
Setting up Virtual Smart Card
In the next section, we create a certificate template so that we can request a certificate that has the required parameters needed for Virtual Smart Card logon.
These steps are adapted from the following TechNet article:
Prerequisites and Configuration for Certificate Authority (CA) and domain controllers
Active Directory Domain Services
Domain controllers must be configured with a domain controller certificate to authenticate smartcard users. The following article covers Guidelines for enabling smart card logon:
An Enterprise Certification Authority running on Windows Server 2012 or Windows Server 2012 R2. Again, Chris’s blog covers neatly on how to setup a PKI environment.
Active Directory must have the issuing CA in the NTAuth store to authenticate users to active directory.
Create the certificate template
1. On the CA console (certsrv.msc), right-click on Certificate Templates and select Manage
2. Right-click the Smartcard Logon template and then click Duplicate Template
3. On the Compatibility tab, set the compatibility settings as below
4. On the Request Handling tab, in the Purpose section, select Signature and smartcard logon from the drop down menu
5. On the Cryptography tab, select the Requests must use one of the following providers radio button and then select the Microsoft Base Smart Card Crypto Provider option.
Optionally, you can use a Key Storage Provider (KSP). To choose a KSP, under Provider Category select Key Storage Provider. Then select the Requests must use one of the following providers radio button and select the Microsoft Smart Card Key Storage Provider option.
6. On the General tab: Specify a name, such as TPM Virtual Smart Card Logon. Set the validity period to the desired value and choose OK
7. Navigate to Certificate Templates. Right click on Certificate Templates and select New, then Certificate Template to Issue. Select the new template you created in the prior steps.
Note that it usually takes some time for this certificate to become available for issuance.
Create the TPM virtual smart card
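The card is typically created with the built-in Tpmvscmgr.exe utility from an elevated command prompt on the VM. A sketch of a typical invocation (the /name value is illustrative; /pin prompt is what triggers the PIN prompt described below):

```
tpmvscmgr.exe create /name "TestVSC" /pin prompt /adminkey random /generate
```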
You will be prompted for a PIN. Enter at least eight characters and confirm the entry. (You will need this PIN in later steps.)
Enroll for the certificate on the Virtual Smart Card Certificate on Virtual Machine.
1. In certmgr.msc, right click Certificates, click All Tasks then Request New Certificate.
2. On the certificate enrollment select the new template you created earlier.
3. It will prompt for the PIN associated with the Virtual Smart Card. Enter the PIN and click OK.
4. If the request completes successfully, it will display Certificate Installation results page
5. On the virtual machine, select sign-in options, choose the security device, and enter the PIN
That completes the steps on how to deploy Virtual Smart Cards using a virtual TPM on virtual machines. Thanks for reading!
Raghav Mahajan
Hello Raghav, thank you for this post. I have a question about this in relation to Passport for Work. I think it makes sense to invest time on the design of P4W rather than VSC, what is your take on it? Do you see in the future the need of (v)smart card being replaced by Passport? Thank you
Both have their benefits, and it depends on the organization to make the best of these features
Great !
Good job Raghav!
1st but many more to go..great blog Raghav..
Great stuff guys. I love that I can utilize my internal PKI, Hyper-V, Group Policy and TPM for this! This will help me a lot at work.
But it begs the question: what is the difference between TPM 1.2 and TPM 2.0 hardware modules? And what features do Pro users get out of this?
I purchased a TPM 1.2 chip for my Haswell workstation about a year ago. I’m on 10586 Pro and I just updated the firmware on my Supermicro board. Following that update, when I run a get-wmiobject -namespace root\cimv2\security\microsofttpm -class win32_tpm, I see that my TPM chip is 1.2 but specversion lists 1.2 and 2, as if a 1.2 chip is forwards compatible with the 2.0 spec.
With the Windows 10 Pro sku, can I assign my TPM 1.2 chip to a VM? I understand Virtual Smart Card isn’t supported…but what can I do with Pro and a 1.2 chip?
Great post Raghav, keep up the good work. Really useful info!
Great and details stuff, many thanks for sharing it Raghav.
Is it possible to enable this also on Hyper-V Host running on Windows 10? I cannot make it work on my Win10 Workstation but it works fine on 2016 TP5.
That would help quite a bit with my mobile lab.
thanks!
Yes JJ, it is possible the above demonstration was done on Win 10 itself. Follow the checklist for the same.
Hello,
unfortunately this guide doesn’t work for me.
I got following error message:
Enter PIN:
********
Confirm PIN:
********
Creating TPM Smart Card…
TPM Virtual Smart Card management cannot be used within a Terminal Services session.
(0x800704d3) The request was aborted.
I’m accessed the Hyper-V Guest via Hyper V Manager.
Your help would be much appreciated.
Ronso, to get past that I did 2 things.
1. Added my user to local admin of the machine.
2. Ensure my template was published properly for the user to be able to enroll.
It was probably #1 that fixed that specific issue.
https://blogs.technet.microsoft.com/askds/2016/05/11/setting-up-virtual-smart-card-logon-using-virtual-tpm-for-windows-10-hyper-v-vm-guests/
Keras APIs, SavedModels, TensorBoard, Keras-Tuner and more.

On June 26 of 2019, I will be giving a TensorFlow (TF) 2.0 workshop at the PAPIs.io LATAM conference in São Paulo. Aside from the happiness of representing Daitan as the workshop host, I am very happy to talk about TF 2.0. The idea of the workshop is to highlight what has changed from the previous 1.x version of TF. In this text, you can follow along with the main topics we are going to discuss. And of course, have a look at the Colab notebook for practical code.

Introduction to TensorFlow 2.0

TensorFlow is a general-purpose high-performance computing library open sourced by Google in 2015. Since the beginning, its main focus was to provide high-performance APIs for building Neural Networks (NNs). However, with the advance of time and interest by the Machine Learning (ML) community, the lib has grown to a full ML ecosystem. Currently, the library is experiencing its largest set of changes since its birth. TensorFlow 2.0 is currently in beta and brings many changes compared to TF 1.x. Let's dive into the main ones.

Eager Execution by Default

To start, eager execution is the default way of running TF code. As you might recall, to build a Neural Net in TF 1.x, we needed to define an abstract data structure called a Graph. Also (as you probably have tried), if we attempted to print one of the graph nodes, we would not see the values we were expecting. Instead, we would see a reference to the graph node. To actually run the graph, we needed to use an encapsulation called a Session. And using the Session.run() method, we could pass Python data to the graph and actually train our models.

(TF 1.x code example.)

With eager execution, this changes. Now, TensorFlow code can be run like normal Python code. Eagerly. Meaning that operations are created and evaluated at once.

(TensorFlow 2.0 code example.)

TensorFlow 2.0 code looks a lot like NumPy code.
In fact, TensorFlow and NumPy objects can easily be switched from one to the other. Hence, you do not need to worry about placeholders, Sessions, feed_dictionaries, etc.

API Cleanup

Many APIs like tf.gans, tf.app, tf.contrib, tf.flags are either gone or moved to separate repositories. However, one of the most important cleanups relates to how we build models. You may remember that in TF 1.x we have more than one or two different ways of building/training ML models. tf.slim, tf.layers, tf.contrib.layers, and tf.keras are all possible APIs one can use to build NNs in TF 1.x. That is not to include the Sequence to Sequence APIs in TF 1.x. And most of the time, it was not clear which one to choose for each situation.

Although many of these APIs have great features, they did not seem to converge to a common way of development. Moreover, if we trained a model in one of these APIs, it was not straightforward to reuse that code using the other ones. In TF 2.0, tf.keras is the recommended high-level API. As we will see, the Keras API tries to address all possible use cases.

The Beginners API

From TF 1.x to 2.0, the beginner API did not change much. But now, Keras is the default and recommended high-level API. In summary, Keras is a set of layers that describes how to build neural networks using a clear standard. Basically, when we install TensorFlow using pip, we get the full Keras API plus some additional functionalities.

The beginner's API is called Sequential. It basically defines a neural network as a stack of layers. Besides its simplicity, it has some advantages. Note that we define our model in terms of a data structure (a stack of layers). As a result, it minimizes the probability of making errors due to model definition.

Keras-Tuner

Keras-Tuner is a dedicated library for hyper-parameter tuning of Keras models. As of this writing, the lib is in pre-alpha status but works fine on Colab with tf.keras and TensorFlow 2.0 beta. It is a very simple concept.
First, we need to define a model-building function that returns a compiled Keras model. The function takes as input a parameter called hp. Using hp, we can define a range of candidate values from which to sample hyper-parameter values. Below we build a simple model and optimize over 3 hyper-parameters. For the hidden units, we sample integer values from a pre-defined range. For dropout and learning rate, we choose at random between some specified values.

Then, we create a tuner object. In this case, it implements a Random Search policy. Lastly, we can start optimization using the search() method. It has the same signature as fit().

In the end, we can check the tuner summary results and choose the best model(s). Note that training logs and model checkpoints are all saved in the directory folder (my_logs). Also, the choice of minimizing or maximizing the objective (validation accuracy) is automatically inferred. Have a look at their Github page to learn more.

The Advanced API

The moment you see this type of implementation it goes back to Object Oriented programming. Here, your model is a Python class that extends tf.keras.Model. Model subclassing is an idea inspired by Chainer and relates very much to how PyTorch defines models.

With model subclassing, we define the model layers in the class constructor. And the call() method handles the definition and execution of the forward pass.

Subclassing has many advantages. It is easier to perform a model inspection. We can (using breakpoint debugging) stop at a given line and inspect the model's activations or logits. However, with great flexibility comes more bugs. Model subclassing requires more attention and knowledge from the programmer. In general, your code is more prone to errors (like model wiring).
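Keras-Tuner's random search, described above, boils down to a loop you could write yourself. A minimal library-free sketch (the names random_search, build_and_score, and the toy objective are illustrative, not the Keras-Tuner API):

```python
import random

def random_search(build_and_score, space, trials=20, seed=0):
    """Sample hyper-parameter combinations at random and keep the best.
    `space` maps each hyper-parameter name to a list of candidate values."""
    rng = random.Random(seed)
    best_score, best_params = None, None
    for _ in range(trials):
        params = {k: rng.choice(v) for k, v in space.items()}
        score = build_and_score(**params)  # stands in for "train, then validate"
        if best_score is None or score > best_score:
            best_score, best_params = score, params
    return best_score, best_params

# Toy objective standing in for validation accuracy of a trained model.
score, params = random_search(
    lambda units, lr: 1.0 - abs(units - 96) / 128 - abs(lr - 0.01),
    {"units": [32, 64, 96, 128], "lr": [0.1, 0.01, 0.001]},
)
```

Keras-Tuner adds the important parts this sketch omits: building and training real models per trial, checkpointing to the log directory, and smarter policies than pure random sampling.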
The only adjustment you need to do, if using model Subclassing, is to override the () class method, otherwise, you can through it away. Other than that, you should be able to use with either or standard NumPy nd-arrays as input. fit() fit() compute_output_shape fit() tf.data.Dataset However, if you want a clear understanding of what is going on with the gradients or the loss, you can use the Gradient Tape. That is especially useful if you are doing research. Using Gradient Tape, one can manually define each step of a training procedure. Each of the basic steps in training a neural net such as: Forward pass Loss function evaluation Backward pass Gradient descent step is separately specified. This is much more intuitive if one wants to get a feel of how a Neural Net is trained. If you want to check the loss values w.r.t the model weights or the gradient vectors itself, you can just print them out. Gradient Tape gives much more flexibility. But just like Subclassing vs Sequential, more flexibility comes with an extra cost. Compared to the method, here we need to define a training loop manually. As a natural consequence, it makes the code more prominent to bugs and harder to debug. I believe that is a great trade off that works ideally for code engineers (looking for standardized code), compared to researchers who usually are interested in developing something new. fit() Also, using we can easily setup TensorBoard as we see next. fit() Setting up TensorBoard You can easily setup an instance of TensorBoard using the method. It also works on Jupyter/Colab notebooks. fit() In this case, you add TensorBoard as a callback to the fit method. As long as you are using the method, it works on both: Sequential and the Subclassing APIs. fit() <a href=""></a> If you choose to use Model Subclassing and write the training loop yourself (using Grading Tape), you also need to define TensorBoard manually. 
It involves creating the summary files using tf.summary.create_file_writer(), and specifying which variables you want to visualize.

As a worth noting point, there are many callbacks you can use. Some of the more useful ones are:

EarlyStopping: As the name implies, it sets up a rule to stop training when a monitored quantity has stopped improving.
ReduceLROnPlateau: Reduce the learning rate when a metric has stopped improving.
TerminateOnNaN: Callback that terminates training when a NaN loss is encountered.
LambdaCallback: Callback for creating simple, custom callbacks on-the-fly.

You can check the complete list at TensorFlow 2.0 callbacks.

Extracting Performance from your Eager Code

If you choose to train your model using Gradient Tape, you will notice a substantial decrease in performance. Executing TF code eagerly is good for understanding, but it fails on performance. To avoid this problem, TF 2.0 introduces tf.function.

Basically, if you decorate a Python function with tf.function, you are asking TensorFlow to take your function and convert it to a TF high-performance abstraction.

It means that the function will be marked for JIT compilation so that TensorFlow runs it as a graph. As a result, you get the performance benefits of TF 1.x (graphs) such as node pruning, kernel fusion, etc.

In short, the idea of TF 2.0 is that you can divide your code into smaller functions. Then, you can annotate the ones you wish using tf.function, to get this extra performance. It is best to decorate functions that represent the largest computing bottlenecks. These are usually the training loops or the model's forward pass.

Note that when you decorate a function with tf.function, you lose some of the benefits of eager execution. In other words, you will not be able to set up breakpoints or use print() inside that section of code.

Save and Restore Models

Another great lack of standardization in TF 1.x is how we save/load trained models for production.
TF 2.0 also tries to address this problem by defining a single API. Instead of having many ways of saving models, TF 2.0 standardizes to an abstraction called the SavedModel.

There is not much to say here. If you create a Sequential model or extend your class using tf.keras.Model, your class inherits from tf.train.Checkpoints. As a result, you can serialize your model to a SavedModel object.

SavedModels are integrated with the TensorFlow ecosystem. In other words, you will be able to deploy them to many different devices. These include mobile phones, edge devices, and servers.

Converting to TF-Lite

If you want to deploy a SavedModel to embedded devices like Raspberry Pi, Edge TPUs or your phone, use the TF Lite converter. Note that in 2.0, the TFLiteConverter does not support frozen GraphDefs (usually generated in TF 1.x). If you want to convert a frozen GraphDef to run in TF 2.0, you can use the tf.compat.v1.TFLiteConverter.

It is very common to perform post-training quantization before deploying to embedded devices. To do it with the TFLiteConverter, set the optimizations flag to "OPTIMIZE_FOR_SIZE". This will quantize the model's weights from floating point to 8 bits of precision. It will reduce the model size and improve latency with little degradation in model accuracy.

Note that this is an experimental flag, and it is subject to changes.

Converting to TensorFlow.js

To close up, we can also take the same SavedModel object and convert it to TensorFlow.js format. Then, we can load it using JavaScript and run your model in the browser.

First, you need to install TensorFlow.js via pip. Then, use the tensorflowjs_converter script to take your trained model and convert it to JavaScript-compatible code. Finally, you can load it and perform inference in JavaScript.

You can also train models using TensorFlow.js in the browser.

Conclusions

To close off, I would like to mention some other capabilities of 2.0.
First, we have seen that adding more layers to a Sequential or Subclassing model is very straightforward. And although TF covers most of the popular layers like Conv2D, TransposeConv2D, etc., you can always find yourself in a situation where you need something that is not available. That is especially true if you are reproducing some paper or doing research.

The good news is that we can develop our own custom layers. Following the same Keras API, we can create a class and extend it to tf.keras.Layer. In fact, we can create custom activation functions, regularization layers, or metrics following a very similar pattern. Here is a good resource about it.

Also, we can convert existing TensorFlow 1.x code to TF 2.0. For this end, the TF team created the tf_upgrade_v2 utility.

This script does not convert TF 1.x code to 2.0 idiomatics. It basically uses the tf.compat.v1 module for functions that got their namespaces changed. Also, if your legacy code uses tf.contrib, the script will not be able to convert it. You will probably need to use additional libraries or use the new TF 2.0 version of the missing functions.

Thanks for reading.
https://hackernoon.com/everything-you-need-to-know-about-tensorflow-2-0-b0856960c074
In the Java API there are a number of APIs that rely on being able to walk up the call stack to determine certain properties of the immediate caller of the API. The most common scenario is determining the caller's class or class loader. This information is used to implement security checks or to use the right class loader.
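The caller-sensitive pattern these APIs share is easy to demonstrate outside Java too. A Python analogy (illustrative only: this is not IKVM code, and sys._getframe is a CPython implementation detail):

```python
import sys

def _caller_module_name():
    """Walk two frames up: past this helper and past the public API,
    landing on whoever invoked the API -- the same idea Class.forName()
    uses to locate its immediate caller's class loader."""
    return sys._getframe(2).f_globals.get("__name__")

def for_name_like(name):
    # A library entry point whose behaviour depends on its immediate caller.
    return name, _caller_module_name()
```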
Up until now IKVM used the System.Diagnostics.StackFrame .NET API to walk the stack. There are however two major issues with this API. The first issue is that it is unreliable and IKVM jumps through various hoops to make sure that the CLR doesn't inline methods it shouldn't inline or use tail call optimizations it shouldn't use. The second issue is that it is relatively slow.
I finally got around to implementing an alternative scheme that eliminates the usage of System.Diagnostics.StackFrame in most cases. In part 2 I'll describe the implementation, but for now let's just look at some performance numbers.
I've cooked up two microbenchmarks that clearly demonstrate the difference between the current approach and the new approach.
Class.forName()
In this microbenchmark we look at the Class.forName() API. It walks the call stack to determine the class loader of the immediate caller, because that is the class loader that it needs to use to load the class.
class ForName {
    public static void main(String[] args) throws Exception {
        Class.forName("java.lang.Object");
        long start = System.currentTimeMillis();
        for (int i = 0; i < 100000; i++)
            Class.forName("java.lang.Object");
        long end = System.currentTimeMillis();
        System.out.println(end - start);
    }
}
ForName results:
(The difference between JDK 1.6 x86 and x64 is mostly due to x86 defaulting to HotSpot Client and x64 to HotSpot Server.)
Method.invoke()
The Method.invoke() API uses the caller class to perform the required access checks when you are calling a non-public method or a method in non-public class.
class Invoke {
    public static void main(String[] args) throws Exception {
        java.lang.reflect.Method m = Invoke.class.getDeclaredMethod("foo");
        long start = System.currentTimeMillis();
        for (int i = 0; i < 100000; i++)
            m.invoke(null);
        long end = System.currentTimeMillis();
        System.out.println(end - start);
    }

    private static void foo() { }
}
Invoke results:
To be continued in part 2...
http://weblog.ikvm.net/2008/05/31/IntroducingCallerIDPart1.aspx
import numpy
import scipy.fftpack  # modern SciPy: scipy.fft is a module, so use fftpack's fft/ifft

def bandpass_ifft(X, Low_cutoff, High_cutoff, F_sample, M=None):
    """Bandpass filtering on a real signal using inverse FFT

    Inputs
    =======
    X: 1-D numpy array of floats, the real time domain signal (time series) to be filtered
    Low_cutoff: float, frequency components below this frequency will not pass the filter (physical frequency in unit of Hz)
    High_cutoff: float, frequency components above this frequency will not pass the filter (physical frequency in unit of Hz)
    F_sample: float, the sampling frequency of the signal (physical frequency in unit of Hz)
    M: int, optional number of points for the FFT

    Notes
    =====
    1. The input signal must be real, not imaginary nor complex
    2. The Filtered_signal will have only half of original amplitude. Use abs() to restore.
    3. In Numpy/Scipy, the frequencies go from 0 to F_sample/2 and then from negative F_sample to 0.
    """
    if M is None:  # if the number of points for FFT is not specified
        M = X.size  # let M be the length of the time series
    Spectrum = scipy.fftpack.fft(X, n=M)
    [Low_cutoff, High_cutoff, F_sample] = map(float, [Low_cutoff, High_cutoff, F_sample])
    # Convert cutoff frequencies into points on the spectrum;
    # the division by 2 is because the spectrum is symmetric
    [Low_point, High_point] = map(lambda F: F / F_sample * M / 2, [Low_cutoff, High_cutoff])
    Filtered_spectrum = [Spectrum[i] if Low_point <= i <= High_point else 0.0
                         for i in range(M)]  # Filtering
    Filtered_signal = scipy.fftpack.ifft(Filtered_spectrum, n=M)  # Construct filtered signal
    return Spectrum, Filtered_spectrum, Filtered_signal, Low_point, High_point
Here is an example showing its usage.
First of all, let's generate a signal of different frequencies.
import numpy

N = 400  # signal length in number of points
x = numpy.arange(0, N, 1)  # generate the time ticks
# different frequency components
Sines = [numpy.sin(x * n) * (1 - n) for n in [.9, .75, .5, .25, .12, .03, 0.025]]
y = numpy.sum(Sines, axis=0)  # add them by column, low frequencies have higher amplitudes

import matplotlib.pyplot as plt
plt.plot(x, y)  # visualize the data
The raw signal looks like this:
Then, let's assume the sampling frequency is 500Hz and we only want signals between 5 to 30 Hz to go thru the filter.
Low_cutoff, High_cutoff, F_sample = 5, 30, 500
Spectrum, Filtered_spectrum, Filtered_signal, Low_point, High_point = \
    bandpass_ifft(y, Low_cutoff, High_cutoff, F_sample)

# Below is visualization
fig1 = plt.figure()
plt.stem(numpy.fft.fftfreq(N) * F_sample, abs(Spectrum))
plt.axis([0, F_sample / 2, None, None])  # since the signal is real, the spectrum is symmetric
plt.axvspan(Low_cutoff, High_cutoff, fc='g', alpha=.5)  # the band of frequencies the filter keeps
fig2 = plt.figure()
plt.plot(x, Filtered_signal)
In the first figure plotted, the green box shows frequencies that will be kept by the filter.
Here is the filtered signal. As you can see, high frequencies are gone. The signal looks very smooth.
Of course, this approach is very costly: FFT and iFFT are both $O(n\lg n)$ complexity algorithms. It is much more costly than convolving signal with filter terms (what people normally do in digital signal processing). Oh, and it cannot be done in real-time - you must wait until all data is sampled.
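For comparison, here is the conventional convolution route alluded to above: a windowed-sinc FIR band-pass built as the difference of two low-pass prototypes. This is textbook FIR design; the function name and tap count are my own choices, not from the post.

```python
import numpy as np

def fir_bandpass(low_hz, high_hz, fs, numtaps=101):
    """Hamming-windowed sinc band-pass taps for sampling rate fs (Hz)."""
    n = np.arange(numtaps) - (numtaps - 1) / 2.0
    def lowpass(fc):  # ideal low-pass prototype with cutoff fc (Hz)
        return 2.0 * fc / fs * np.sinc(2.0 * fc / fs * n)
    return (lowpass(high_hz) - lowpass(low_hz)) * np.hamming(numtaps)

# Same band as the post: keep 5-30 Hz at a 500 Hz sampling rate.
fs = 500
t = np.arange(2000) / fs
signal = np.sin(2 * np.pi * 15 * t) + np.sin(2 * np.pi * 100 * t)
taps = fir_bandpass(5, 30, fs)
filtered = np.convolve(signal, taps, mode="same")  # O(N * numtaps) per signal
```

Each output sample depends on only numtaps input samples, so unlike the FFT route this can run sample-by-sample in real time.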
There is also a minor implementation issue here. Both Scipy and Numpy provides FFT/iFFT functions. But their speeds vary [1]. If you wanna go faster, try Python binding to some C/C++-based FFT libraries.
References:
1.
1 comment:
nice work, but it seems that this code set the all the power in [negative F_sample, 0] to zero, including the corresponding power btw [Low_cutoff, High_cutoff]. Doesn't this make some power loss in ifft?
http://forrestbao.blogspot.com/2014/07/signal-filtering-using-inverse-fft-in.html
Improving memory and power use for XNA games for Windows Phone 8
August 19, 2014
XNA Framework games typically use several images, audio files, and other resources that can cause the game to exceed the recommended memory limit for phone applications. There are tools that help you detect the memory use of your game so you can determine whether you need to performance-tune your game. This topic tells you what tools you can use to evaluate the memory use of your game, and provides guidelines to help you reduce it. This topic also describes an API you can use to suppress drawing when your visuals are not animating in order to improve your game’s power consumption.
This topic contains the following sections.
The Windows Phone Performance Analysis tool can help you evaluate your game’s memory use. If your game exceeds the memory limit for lower-memory devices, you can use some of the techniques described in this topic to reduce memory use, or to opt out of lower-memory devices. For more information, see App monitoring for Windows Phone 8. For information about opting out, see App memory limits for Windows Phone 8 or Developing apps for lower-memory phones for Windows Phone 8.
You can reduce memory use in your games by using specific formats for images and sounds. The following sections discuss how you can reduce memory use in your XNA Framework game by using these formats.
Sound Formats
Playback on the phone is lower quality than on a typical computer or television with external speakers, so the difference between sound formats will likely not be noticeable to the user. The memory consumption of a sound depends on its format and on its duration. Sound files with a higher sample rate (48 kHz versus 22 kHz) or higher bit depth (16-bit versus 8-bit) consume more memory than lower sample rate or lower bit-depth files. To minimize memory use, you should use sounds with lower sample rates and bit depths where possible, particularly for long-playing sounds.
Adaptive Differential Pulse Code Modulation (ADPCM) is a lossy compression format that is supported on Windows Phone, and which can significantly reduce the size of sounds you use in your game. ADPCM typically is best suited for UI and ambient sounds because of its lower quality. For more information about Windows Phone support for ADPCM, see Supported media codecs for Windows Phone 8.
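The memory cost of uncompressed PCM audio follows directly from these parameters. As a rough sketch (the durations and rates below are illustrative, not figures from this article):

```python
def pcm_bytes(seconds, sample_rate, bit_depth, channels=1):
    """Approximate in-memory size of uncompressed PCM audio."""
    return int(seconds * sample_rate * (bit_depth // 8) * channels)

# A 10-second mono effect at 48 kHz / 16-bit versus 22.05 kHz / 8-bit:
hi = pcm_bytes(10, 48_000, 16)   # 960,000 bytes
lo = pcm_bytes(10, 22_050, 8)    # 220,500 bytes
```

Dropping both the sample rate and the bit depth cuts the footprint of this hypothetical effect by more than a factor of four, before any ADPCM compression is applied.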
Image Size and Formats
To optimize your game to run on a 256-MB device, you can reduce the size and image quality in your game to an appropriate level for the device. The phone has a smaller display and therefore does not require the level of detail and the high-resolution images you would use on a larger display.
In addition, you can use the DirectX Texture (DXT) compression format for your images to minimize memory use in your game. The graphics unit handles DXT files in hardware, and they remain compressed when they are loaded into memory; the graphics card decompresses them when they render. This is different from how the graphics unit handles PNG and JPG files. Although PNG and JPG files are compressed on disk, they use the same amount of memory as an uncompressed file type. A DXT file has slightly lower image quality than a JPG file of the same image, but the memory footprint is much smaller.
As an example, compare the memory use and file size of a typical 256 × 256 image in each format.
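The arithmetic behind such a comparison is easy to sketch. This is a minimal illustration assuming a 32-bit RGBA uncompressed texture and the DXT1 variant's 4 bits per pixel (DXT1 is one of several DXT block formats, and other variants use more bits per pixel):

```python
def uncompressed_texture_bytes(width, height, bytes_per_pixel=4):
    """PNG/JPG decompress to raw pixels in texture memory (RGBA8 assumed)."""
    return width * height * bytes_per_pixel

def dxt1_texture_bytes(width, height):
    """DXT1 stores each 4x4 pixel block in 8 bytes, i.e. 4 bits per pixel."""
    return (width // 4) * (height // 4) * 8

raw = uncompressed_texture_bytes(256, 256)   # 262,144 bytes (256 KB)
dxt = dxt1_texture_bytes(256, 256)           # 32,768 bytes (32 KB)
```

Under these assumptions the DXT1 texture occupies one eighth of the memory of the same image decompressed from PNG or JPG.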
To convert your images to DXT format
In your Visual Studio project, display the property grid for the image you want to convert to DXT format.
In the property grid, expand the Content Processor section.
Change the Texture Format value to DXT Compressed.
If necessary, change the Resize to Power of Two property to True. DXT images must be sized to powers of two.
One of the easiest things you can do to reduce the memory use of your XNA Framework game is to limit the number of images and sounds that the game uses. A typical XNA Framework game file is composed of less than 1% code and between 70% and 90% image textures and audio files. There is a memory cost for loading all of these resources. The following sections give you ideas about how you can reduce the quantity of textures and sounds that you use in your XNA Framework games.
Use fewer sounds
The following list contains a few ways that you can reduce the number of sounds in your games.
Reuse a sound instead of using different sounds for similar actions in different parts of the game. Alternatively, reuse a sound but use parametric variations at playback to vary it.
Apply the level-of-detail pattern when you create the sounds for your game. For example, use a single sound to represent a group of enemies as opposed to using individual enemy sounds played together to represent the same group.
Call the SoundEffect.Dispose method for SoundEffect objects when you are finished using them. This enables the sound effect to be garbage collected, and this reduces the number of sounds in memory.
Use fewer images and textures
Similar to reducing sounds, you can reduce the use of images and textures by using fewer images and by being creative in how you use your images. If you need to represent a group of enemies, use a few enemy images and repeat those many times to represent a group of enemies. This comes at much lower cost in memory use than if you use several unique enemy images to represent the group.
In addition, you can use a sprite sheet object to store many small images as a single image in your game. This results in faster loading and drawing, and a lower memory footprint because you are loading a single image. Also, you can avoid the power of two restrictions of the DXT format when using a sprite sheet.
The most difficult part about using sprite sheets can be composing them. You can compose them by hand, but the easiest way to create a sprite sheet is to use the sample posted on the Xbox Live Indie Games site. This sample uses a custom content processer to combine several images into a single texture, and then records the location of each individual texture. For more information and to download the sample, go to Sprite Sheet. After you create a sprite sheet, use an overload of the SpriteBatch.Draw method, which takes a sourceRectangle parameter, to indicate the position of the texture you want to draw.
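The sourceRectangle arithmetic for a uniform grid sprite sheet can be sketched in a few lines. This is a language-neutral illustration in Python, not XNA code, and the sprite numbering convention (left to right, top to bottom) is an assumption:

```python
def sprite_rect(index, sheet_columns, sprite_w, sprite_h):
    """Source rectangle (x, y, w, h) of sprite `index` in a uniform grid
    sheet, with sprites numbered left-to-right, top-to-bottom."""
    col = index % sheet_columns
    row = index // sheet_columns
    return (col * sprite_w, row * sprite_h, sprite_w, sprite_h)

# Sprite 5 in a 4-column sheet of 32x32 tiles sits at column 1, row 1:
rect = sprite_rect(5, 4, 32, 32)   # (32, 32, 32, 32)
```

The returned tuple plays the role of the sourceRectangle parameter: it selects one sprite out of the single shared texture at draw time.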
As discussed earlier in this topic, the DXT file format is a compressed image format that usually is sufficient to help smaller games meet the 90-MB memory limit. However, if you are working with detailed models and complex terrains, you might need to compress your vertex data to further reduce your game’s memory use.
By default, XNA Framework stores most vertex data as 32-bit floating point values. These floating point values offer high precision, but often games do not require that much accuracy. For example, the VertexPositionNormalTexture structure is 32 bytes in size. This type stores its Position and Normal property values as Vector3 types, which are 12 bytes each, and its TextureCoordinate property as a Vector2, which is 8 bytes. Typically, less precision also means less memory consumption, and there are other, less precise, formats you can use when you develop your game. The following are the VertexElementFormat values that Windows Phone supports.
Single
Vector2
Vector3
Vector4
Color
Byte4
Short2
Short4
NormalizedShort2
NormalizedShort4
You can use the types in the Microsoft.Xna.Framework.Graphics.PackedVector namespace to generate compressed vectors. The following code shows an example of how you could create your own compressed version of the VertexPositionNormalTexture type. This example extends the ModelProcessor class and overrides the ProcessVertexChannel method to convert normal data to NormalizedShort4 format, and to convert texture coordinates to NormalizedShort2. This results in an 8-byte reduction in vertex size.
For more information about vector compression, see Compressing Vertex Data.
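The idea behind the NormalizedShort formats is to map floats in [-1, 1] onto signed 16-bit integers. The following Python sketch illustrates only the encoding; it is not the XNA implementation, and the 32767 scale factor is the usual convention rather than anything taken from this article:

```python
def to_normalized_short(x):
    """Map a float in [-1, 1] to a signed 16-bit integer (NormalizedShort idea)."""
    x = max(-1.0, min(1.0, x))   # clamp out-of-range inputs
    return int(round(x * 32767))

def from_normalized_short(s):
    """Recover an approximate float from the packed 16-bit value."""
    return s / 32767.0

# Each component shrinks from 4 bytes (float) to 2 bytes (short):
packed = [to_normalized_short(v) for v in (0.0, 0.5, -1.0)]   # [0, 16384, -32767]
```

The round trip loses only about 1/65534 of precision per component, which is usually far below what normals or texture coordinates need.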
On mobile devices like phones, the power consumption of your app is critical to providing a good user experience. When your game is in a state in which the graphics are not animating, such as a static pause screen, you should tell XNA not to redraw the screen. Do this by calling SuppressDraw from within your Update method. When you call this method, XNA will skip calling your Draw method until after the next call to Update. Using this technique to prevent unnecessary redrawing of the screen can result in significant power savings.
title: On Beyond Magpie 1 - Sentiment Analysis
description: An introduction to the Cloud Natural Language API, aimed at Advanced Placement Computer Science classes who have worked on the Magpie lab, but suitable for most people starting with the Cloud Natural Language API. Demonstrates how to make a POST request in Java to access the Cloud Natural Language API and make simple use of the results.
author: Annie29
tags: Cloud Natural Language API, APCS, REST, Magpie, education
date_published: 2017-03-28
The Advanced Placement Computer Science A program provides the Magpie lab for students to practice using basic control structures to parse user input as part of a chatbot. This tutorial is designed to be an additional enrichment exercise (typically used after the AP exam) to go beyond basic parsing and instead use Google's Cloud Natural Language API, a pretrained machine learning model that will do text analysis for the user. The lab demonstrates how to use the Cloud Natural Language API to determine the user sentiment.
The major new skill covered in this lab is how to make HTTP POST requests from Java.
This tutorial is written for an audience of CS teachers who are exposing their students to the Cloud Natural Language API, but should be usable by any interested individual.
Prerequisites
If you've completed On Beyond Magpie, Part 0, you should have all of the prerequisites completed. Otherwise,
- Create a project in the Google Cloud Platform Console.
- Enable billing for your project.
- Ensure the Cloud Natural Language API is enabled by going to the API manager from the main GCP menu.
- Generate an API key for your project.
A simple chatbot
Students who have used the Magpie lab already should have a chatbot with which they can work. Otherwise, the following Java code provides a simple (but not very interesting) chatbot.
Magpie runner
import java.util.Scanner;

/**
 * A simple class to run the Magpie class.
 */
public class MagpieRunner {

    /**
     * Create a Magpie, give it user input, and print its replies.
     */
    public static void main(String[] args) {
        Magpie maggie = new Magpie();
        Scanner in = new Scanner(System.in);
        String statement = in.nextLine();
        while (!statement.equals("Bye")) {
            System.out.println(maggie.getResponse(statement));
            statement = in.nextLine();
        }
    }
}
Magpie class
public class Magpie {

    /**
     * Gives a response to a user statement
     *
     * @param statement the user statement
     * @return a response based on the rules given
     */
    public String getResponse(String statement) {
        String response = "";
        if (statement.indexOf("cats") >= 0) {
            response = "I love cats! Tell me more about cats!";
        } else {
            response = getRandomResponse();
        }
        return response;
    }

    /**
     * Pick a default response to use if nothing else fits.
     */
    private String getRandomResponse() {
        return "Hmmm";
    }
}
Accessing the API
The Cloud Natural Language API can be accessed directly using an HTTP POST request. There are also client libraries created for C#, Go, Java, Node.js, PHP, Python, and Ruby. In order to keep this tutorial simple and as general as possible, it will make its own HTTP requests. Details on how to use the client libraries are available in the Cloud Natural Language API Docs.
Making the HTTP request with Java
For simplicity, this example shows how to make an HTTP request using just the core Java libraries. This should be put in the
getResponse method of the Magpie class.
First, create constants for the API key and URL. Be sure to put your API key in the place indicated.
final String TARGET_URL = "?";
final String API_KEY = "key=YOUR_API_KEY";
Next, create a URL object with the target URL and create a connection to that URL:
URL serverUrl = new URL(TARGET_URL + API_KEY);
URLConnection urlConnection = serverUrl.openConnection();
HttpURLConnection httpConnection = (HttpURLConnection) urlConnection;
This will require the
java.net.HttpURLConnection,
java.net.URL, and
java.net.URLConnection libraries be imported.
The URL constructor may throw a
MalformedURLException. You can handle this with either a try/catch block or adding
throws MalformedURLException to the method header and importing the exception. Different teachers may have different preferences; either works. Similarly, the
openConnection may throw an
IOException that should be handled before moving on. If you opt to throw the error, be sure to also handle it in the runner.
Set the method and Content-Type of the connection:
httpConnection.setRequestMethod("POST");
httpConnection.setRequestProperty("Content-Type", "application/json");
And then prepare the connection for output so that you can write the data portion of the request:
httpConnection.setDoOutput(true);
Create a writer and use it to write the data portion of the request:
BufferedWriter httpRequestBodyWriter =
    new BufferedWriter(new OutputStreamWriter(httpConnection.getOutputStream()));
httpRequestBodyWriter.write("{\"document\": { \"type\": \"PLAIN_TEXT\", \"content\":\""
    + statement + "\"}, \"encodingType\": \"UTF8\"}");
httpRequestBodyWriter.close();
This will require importing
java.io.BufferedWriter and
java.io.OutputStreamWriter. Notice the line being written is the same as the data provided to the API Explorer in Part 0.
Finally, make the request and get the response:
httpConnection.getResponseMessage();
The returned data is sent in an input stream. Build a string containing it.
String results = "";
if (httpConnection.getInputStream() != null) {
    Scanner httpResponseScanner = new Scanner(httpConnection.getInputStream());
    while (httpResponseScanner.hasNext()) {
        String line = httpResponseScanner.nextLine();
        results += line;
    }
    httpResponseScanner.close();
}
Yes, you'll need to import
java.util.Scanner.
Once you have the results in a single string, you can access parts of it to determine the score and potentially the magnitude. (Parsing the JSON is covered in the next part of this tutorial). Recall, if there are multiple sentences, there will be multiple scores. A simple solution to get the first score is below.
int psn = results.indexOf("\"score\":");
double score = 0.0;
if (psn >= 0) {
    int bracePsn = results.indexOf('}', psn); // Find the closing brace
    String scoreStr = results.substring(psn + 8, bracePsn).trim();
    score = Double.parseDouble(scoreStr);
}
You can then use the score in creating your response to the user. If you've done part 0, you should have ideas about what thresholds to use. Otherwise, work with the values for score to find reasonable values. You can also use them in conjunction with other clauses in determining the response. Now that you can tell the sentiment of the user, it's up to you to find creative ways to use it!
String response = "";
if (score > 0.5) {
    response = "Wow, that sounds great!";
} else if (score < -0.5) {
    response = "Ugh, that's too bad";
} else if (statement.indexOf("cats") >= 0) {...
Summary
- If you do not want to use the Java client library, you can make an HTTP POST call to the Cloud Natural Language API from a program.
- Making a POST call requires setting the method and content type and writing the data section.
- The results of the call come back in JSON format, which can be concatenated into a single string. For simple cases, this text can simply be searched for the properties of interest.
Next steps
To use the different features of the Cloud Natural Language API, see the following Community articles.
User-Agent:
Build Identifier: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040210 Firefox/0.8
In xhtml 1.1, the lang attribute has been deprecated. Therefore, using xml:lang
is the normal way of telling which language is used in an element.
Even if xml:lang is recognized on the styling part (:lang() selector works
properly), the fonts used for rendering are still those for the default lang.
Reproducible: Always
Created attachment 141512 [details]
xhtml showing everything
Note that if the content-type given is text/html, xml:lang is not "recognized"
(:lang() stops working for xml:lang'ed elements)
Also note that as the lang attribute is deprecated, it should not be there,
it's only there for demonstration purpose.
I'm also attaching an image showing the rendering result.
Created attachment 141513 [details]
result of attachment #141512 [details].
The font strangeness comes from the fact that my Western font contains hiragana and katakana, but not kanji, which means that when rendering kanji, the font rendering engine falls back to another font, which doesn't render in the same way, therefore creating crappy display.
In the correctly rendered paragraph, all is rendered using the Japanese font.
No relationship to view rendering (that would be painting).
The lang attr is mapped into the pseudo-style "mLang" thingie at
We could modify this to also map xml:lang, but we should really be doing that
for all elements, not just HTML ones.... And we don't really want to duplicate
the code in nsCSSStyleSheet too much. Maybe we could push up GetLang to
nsIContent (it's already on nsIDOMHTMLElement, but....) or something?
Putting it on nsIContent sounds like a good solution to me (though maybe we
should call it something like GetContentLang to avoid 'hides' warnings). At
least if the stylesystem would be able to use that without a performance hit.
I though we fixed this in bug 35768(bug 91190 comment 13 and
bug 35768 - attachment 95593 [details]), but apparently we didn't as far as the font
selection is concerned.
The real problem is that bug 115121 was marked duplicate incorrectly.
sicking, I don't really see how putting it on nsIContent will slow things down
much (I guess it's an extra virtual function call, but compared to the actual
work that needs doing -- getting attrs and all -- that seems minor).
bz, I suspected so, but i thought i'd ask. I look forward for a patch ;-)
Not likely to come from here any time soon, unfortunately. Should be a matter of
implementing GetContentLanguage by copying the CSS code, then making CSS use it
and making attr mapping use it too.
Would need to do attr mapping on nsGenericElement for that, of course...
This bug is confusing at first read. I had to go back to the earlier bugs to
clear things up in my mind. Some extra comments to clarify it a bit from the
font selection perspective:
The font sub-system gets the language from nsFrame.cpp's SetFontFromStyle().
Thus the bug also means that nsStyleVisibility::mLanguage is not reliable. It is
only half of the equation. It does not walk up the tree to find the language.
|SetFontFromStyle| seemed to assume that the language would have been fully
resolved already in nsRuleNode, but that's not the case.
The style system's nsCSSStyleSheet has its own way of getting/processing the
lang. It is quite comprehensive. It also has the advantage that it is lazy (no
extra work unless the :lang() pseudo is encountered).
Apart for adding a nsGenericElement::GetContentLang [along the same lines as
nsGenericElement::GetBaseURI which is already there for xml:base], another way
to fix the bug might be in |SetFontFromStyle|.
But probably the fix is somewhere between the two. That is
1) add nsGenericElement::GetContentLang
2) do the genuflexions for the attr mapping of xml:lang and the HTML's lang in
nsGenericElement, to honor the precedence.
3) make |SetFontFromStyle| do the right thing.
There's more involved than just SetFontFromStyle -- see
if nothing else.
I'd say the first thing we need here is a clear description of what exactly this
code is trying to achieve... First off, if we add this GetLanguageGroup method,
why store this stuff in the style system? SetFontFromStyle and the reflow state
could just call the method on the frame's mContent directly, no?
I think the original reason why |SetFontFromStyle| was made is because people
kept omitting to set the language when passing a font to GFX.
>First off, if we add this GetLanguageGroup method, why store this stuff in the
>style system?
It doesn't have to be saved in the style system, if that is deemed unnecessary.
>SetFontFromStyle and the reflow state could just call the method on the frame's
>mContent directly, no?
Sure. That's what I meant by 'do the right thing' -- whatever is appropriate
after getting the basics.
Probably, it also has to be taken into account that 'Content-Language' (http
header and meta declaration) can set the 'overall' language(s) of a document.
Besides, there's been talk of adding 'View | Set Language' (for the manual
overriding of the language of a document currently in view) although we may not
do that in the near future.
Yep. The style system's nsCSSStyleSheet is quite comprehensive and includes the
case of the 'Content-Language' HTTP header.
(In reply to comment #9)
> Thus the bug also means that nsStyleVisibility::mLanguage is not reliable. It
> is only half of the equation. It does not walk up the tree to find the
> language.
Actually, it does. The values in the visibility struct are all inherited by
default, so if a node doesn't have an explicit language set it automatically
inherits the parent's. The root node's language is inited from
nsIPresContext::GetLanguage() if it's not set explicitly. So we should just
keep this in the style system -- it's simpler that way... The method on
nsIContent should just return the language on that node, if any, not walk up
parents. The tree-walking code should stay in the style resolution code.
(In reply to comment #12)
> Probably, it also has to be taken into account that 'Content-Language' (http
> header and meta declaration) can set the 'overall' language(s) of a document.
Which just means that we should init the prescontext's mLanguage based on what
the document's GetContentLanguage() returns, I would think... (except for the mess
with getting the latter out of prefs). Right now, the context's mLanguage is
set based on the document's character set (see nsPresContext::UpdateCharSet).
>The method on nsIContent should just return the language on that node, if any,
>not walk up parents. The tree-walking code should stay in the style resolution
>code.
I wonder, then, if it is necessary to add such a dummy GetLang to nsIContent.
I had a look and nsDocument::RetrieveRelevantHeaders() [which feeds the
document's GetContentLanguage()] looks up the prefs too.
Oh, I assumed that nsIContent::GetContentLang would walk up the parentchain. If
that's not the case then we should make that clear in the function-name, or add
an |PRBool aCheckParentChain| argument.
Comment 15 seems sensible, although it might be good to consolidate this and the
code we use for :lang()
Oh, except attribute mapping is entirely the wrong mechanism. The language
should come from somewhere in content rather than the style data so that you
*can* map xml:lang easily.
*** Bug 41978 has been marked as a duplicate of this bug. ***
Yeah, I guess we could cache the language on the content node instead of caching
it in the style.. the one nice thing with caching in style is that it's cheaper
because of style context sharing, maybe (maybe).
I kinda also like the saving being achieved via the style system. Why is/was
that important to have nsIContent::GetContentLang()? Is is going to supersede
the one in nsIDOMHTMLElement (or, has is the ambiguity going to be resolved)?
Wouldn't it be possible to simply map the xml:lang in nsGenericElement?
s/has is/how is/
> Why is/was that important to have nsIContent::GetContentLang()?
To avoid the duplication of the "what's the lang set on this content node" logic
(the fact that you have to check xml:lang and then if and only if the node is
HTML check the lang attribute in the null namespace). That way the next time
the (X)HTML WG decides to make some half-assed decisions we will only have to
change one place in the code...
It's not _that_ important. Just nice-to-have.
What about the ambiguity with nsIDOMHTMLElement?
Someone does nsIDOMHTMLElement.setlang(), but gets something different with
nsIDOMHTMLElement.getlang() because there is an old xml:lang that takes precedence?
or, the deal is that there will be two different results with GetLang() and
GetContentLang(), albeit funny?
> Someone does nsIDOMHTMLElement.setlang(), but gets something different with
> nsIDOMHTMLElement.getlang() because there is an old xml:lang that takes
> precedence?
Didn't I mention "half-baked" and "(X)HTML WG" in the same sentence already?
I looked at what it will take to fix this, and here are the results of my
investigation:
1) It is not possible presently to leverage on the mapping code to map xml:lang
to style. This is because the |MapAttributesInto| business is sooo
HTML-specific... How did I forget that?!? It turned out that this is precisely
the same issue that has blocked bug 69409 for MathML attributes.
2) While it is possible to fetch the lang attribute every time, it is much nicer
to just map it into the style data due to these primary reasons:
a) the lang string is actually converted to a langGroupAtom that groups many
languages/scripts. Given a top-element with lang, e.g., <html lang>, there is
only one such atom in one visibility struct of the style system. The atom is
fetched rapidly whenever/wherever it is needed. Plus, the
HasAttributeDependentStyle stuff is kept in sync. (This BTW has to be expanded
to include the namespace as rightly pointed out in an XXX comment there).
b) Take the lang out of the style system, and for every element in the doc,
one will have to walk back to <html>, convert it to an atom, deal with errors in
children (e.g., an unknown lang) rather than just inheriting the most recent
known lang as can be neatly done with the style system at style resolution
(albeit this is not done yet).
c) Trying to map xml:lang to the style data provides a nice exemplar for you
guys to really appreciate why bug 69409 has been blocked all this time :-)
3) Probably this is the most startling observation: despite the recent rework
that Jonas did (which was mostly for space-efficiency) the attribute mapping
code remains over-engineered... What is going on with this |nsMappedAttributes|
class which is cloned, etc? Actually, the mapping functions are _constant_ for
each tag and I have this feeling that a table-driven approach could do the job
pretty well, while providing a way to finally cater for MathML attributes. This
is not the best place to discuss it, but since things are so fresh in mind, I
may as well put them down for the record. Consider the first-level table:
/*static*/
{
{noneNS_ID, &MapHTMLAttributesInto},
{xmlNS_ID, &MapXMLAttributesInto},
#ifdef MOZ_MATHML
{mathNS_ID, &MapMathMLAttributesInto},
#endif
#ifdef MOZ_SVG
{svgNS_ID, &MapSVGAttributesInto}
#endif
}
and then the second-level tables
static void
MapHTMLAttributesInto(...)
{
/*static*/
{nsHTMLAtoms::div, &MapDivAttributesInto},
{nsHTMLAtoms::img, &MapImgAttributesInto},
etc..
}
static void
MapXMLAttributesInto(...)
{
/*static*/
{nsXMLAtoms::lang, &MapLangAttributesInto}
}
etc
Then, propagate the change so that the Style System calls the mapping function
based on the namespace directly, reflecting into the attribute style sheet of
the current document, rather than going to the element in a rather complex and
convoluted manner at present (to get the same _function_ for each tag, clone, etc).
Doing so will suppress the middle-man and the over-engineering that is
happening. It is somewhat reminiscent of what alecf did to convert
HasAttributeDependentStyle into a table-driven code.
> This is because the |MapAttributesInto| business is sooo HTML-specific...
With sicking's changes, this should be pretty simple to fix.
> What is going on with this |nsMappedAttributes| class which is cloned, etc?
If nothing else, it's an nsIStyleRule implementation and the thing the nodes in
the ruletree point to.
> Actually, the mapping functions are _constant_
I'm not sure what this means, exactly...
> based on the namespace directly
Wouldn't that break for XML nodes in the null namespace vs non-XML HTML nodes?
More to the point, what will be the nsIStyleRule objects involved? The current
cloning, etc is because we actually share nsMappedAttributes objects. So if you
have 10 <font size="3"> tags we only have one nsMappedAttributes object for
them. The speed and footprint savings in the ruletree from this sharing are
pretty noticeable.....
> > This is because the |MapAttributesInto| business is sooo HTML-specific...
>
> With sicking's changes, this should be pretty simple to fix.
I hope it is in the plan. I have been waiting for that to happen.
> > Actually, the mapping functions are _constant_
>
> I'm not sure what this means, exactly...
C.f. |GetAttributeMappingFunction|, it returns a function based on the tag. It
is currently designed so that different instances of the object could return
different things. But in practice, different instances return the same thing and
this suggests that a static tag-based table-driven approach is appropriate.
> >based on the namespace directly
>
>Wouldn't that break for XML nodes in the null namespace vs non-XML HTML nodes?
What do you mean? The behavior won't be any different from what is there at the
moment. Attributes are either mappable or they aren't, based on the
(attrNamespaceID, attrName).
Take <p xml:lang>, it will boil down to MapHTMLAttributesInto, which should
include a call to MapXmlLangAttr(p_content, xmlNS_ID, nsHTMLAtoms::lang,
to_ruleData) if the doc is XHTML.
> have 10 <font size="3"> tags
The management is pretty involved at the moment. With the table-driven approach,
there would just be the hard-coded tables as illustrated earlier. The question
is whether that won't work and why.
The style system requires that the style data come from implementations of
nsIStyleRule that meet the requirements in
Filed bug 235342 on discussion started in comment 27
So back to the original issue: How do we want to map xml:lang into style? Do we
want to use the attribute-mapping code and do the same thing as we do for
"lang", or do we want to add GetContentLang to nsIContent?
One problem with using attribute-mapping code is that parts of it currently
can't deal with namespaced mapped attributes. Mostly the functions on
nsIStyledContent only takes an localname-atom argument rather then
localname-atom and namespaceid.
We might need this for other attributes too, but so far neither svg, xul or html
needs this. Don't know about mathml, but i don't think it has any namespaced
attributes at all.
> Mostly the functions on nsIStyledContent only takes an localname-atom
That (and the HasAttributeDependentStyle code) have needed fixing for a while.
We should just do it, imo.
Doing attribute-mapping from nsGenericElement here seems like the way to go.
HasAttributeDependentStyle is just an optimization. It wouldn't surprise me if
the cost of passing an extra 32-bit parameter to that function and checking it
were higher than the benefits gained.
(Also, if this code is to be rewritten, the rules on presentational attributes
in CSS 2.1 should be considered.)
Fair enough. Not much call to change HasAttributeDependentStyle, I guess. ;)
In fact, xml:lang does not seem to be recognised as an alternative to lang at
all. Even with xml:lang set to something, firefox does not recognise the text to
be in that language.
Sorry about that stupid comment, I'm new to this interface. Won't happen again.
Please just ignore it.
Hi all!
I've been bitten by this bug on this page:
This is after fixing it according to the input of the good people on #mozilla.il. Given enough will, I'd also like to work on a patch for that.
Regards,
Shlomi Fish
See: for why I'm using XHTML 1.1 with a content-type of "text/html".
xml:lang is ignored in text/html content, as it should be. That is, you're not seeing this bug.
Any progress with this issue? I'm writing a web app that outputs application/xhtml+xml for Firefox and I would like to use automatic hyphenation for the content. The goal is to do public release within a year. Should I put a workaround in the code for Firefox (for example, repeat the value of xml:lang in the attribute "lang") or will this get fixed soon enough?
I would recommend not using application/xhtml+xml, period. A number of new HTML5 features work much worse with that MIME type than with text/html. And yes, you should probably just put both xml:lang= and lang= in your markup if you do want to keep using XHTML. I certainly don't see us rewriting our attribute code around this edge case (which is what would have to happen) anytime soon.
If so, would it make sense to mark this "WONTFIX"?
This actually doesn't even need attribute mapping. We can make a really simple rule class that stores a string for the language attribute, and have a hashtable of them indexed by the string (so that we only have one per string, so the rule tree works well). (We just need it not to apply when content->IsInHTMLDocument().)
(In reply to David Baron [:dbaron] (don't cc:, use needinfo? instead) from comment #44)
> (We just need it not to apply when
> content->IsInHTMLDocument().)
Actually that's not the case:
[2013-05-29 19:08:30] <annevk> that's actually also a bug for HTML
[2013-05-29 19:08:44] <dbaron> what is?
[2013-05-29 19:08:45] <annevk> HTML requires xml:lang to work (when properly namespaced in the DOM)
[2013-05-29 19:08:58] <dbaron> even in HTML documents?
[2013-05-29 19:09:00] <annevk> (only possible through scripting)
[2013-05-29 19:09:04] <dbaron> ah, ok
[2013-05-29 19:09:09] <dbaron> then we shouldn't condition it
[2013-05-29 19:09:20] <dbaron> (I'm going to just paste this IRC into the bug if that's ok with you)
[2013-05-29 19:09:30] <annevk> yes
annevk also pointed out the relevant part of the spec:
."
IIRC, it’s the HTML parser that doesn’t recognize the namespace prefix syntax and doesn’t put an xml:lang attribute in the right namespace, but it’s still possible to set namespaced attributes through the DOM.
Created attachment 755476 [details] [diff] [review]
Map xml:lang attribute into style so that it's used for font selection and hyphenation.
The code in nsHTMLStyleSheet implements LangRule to map xml:lang into
style and the code to manage its uniqueness.
The change to nsGenericHTMLElement fixes the mapping of the HTML lang
attribute to do cascading the way all other rule mapping does so that
the cascading works correctly.
The tests test that the correct style language is used for hyphenation
by copying over a set of hyphenation reftests that check its basic
response to languages. There are no specific tests for font selection,
but font selection is known to use the same language data from style.
I verified manually (see other attachments to bug) that the rule
uniqueness is being managed correctly.
Created attachment 755478 [details]
testcase used to verify rule uniqueness is working
Created attachment 755482 [details] [diff] [review]
patch (on top of above patch) used to verify rule uniqueness is working correctly
With this patch, loading attachment 755478 [details] gave the output:
Making new lang rule 0x2944840 for language en
Making new lang rule 0x28ba810 for language fr
Making new lang rule 0x3ea2dc0 for language en-US
which is the expected result (only 3 lines, no duplicate languages).
Though, actually, I think I should ditch all the reftests except for -1, -4, -11{a,b}. and -12{a,b}.
Boris, does that make sense to you?
Comment on attachment 755476 [details] [diff] [review]
Map xml:lang attribute into style so that it's used for font selection and hyphenation.
r=me, though it seems like this table can grow without bound for a given document as long as people keep adding new xml:lang values and then removing them...
Also, why do you want to ditch the reftests you want to ditch?
Because some of them were really testing other hyphenation things rather than testing that xml:lang works, which I think only really needs the tests I listed. It feels kinda like a waste of machine time to run more than that. Though I suppose some of them test other interesting characteristics of xml:lang, so it's vaguely useful to have the extras.
As far as the memory use thing... I'm at least hoping that people will use xml:lang for its intended use, and the number of languages used in a document will be reasonably limited. If people really want to drive up memory usage, they can always create lots of content nodes and *not* get rid of them.
Eh, I guess I'll keep the tests. (I'm going to add "-xmllang" to the filenames, though, to make the difference a little clearer rather than just .html vs. .xhtml.)
https://bugzilla.mozilla.org/show_bug.cgi?id=234485
Hi, I'm new to Arduino and I have an issue. I need to create a library that uses at least 3 pins, but the help on creating libraries only shows how to use one pin. How can I declare several pins for a library from the sketch? And finally, how can I declare variables to use in the library?
I presume that you have seen this
Using that as a base I had no problem creating a library to control a motor shield that took 3 pin numbers as arguments, like the LCD library, which uses several. Just pass the pin numbers to the library as arguments to a library function as you would for any other function.
What sort of variables do you have in mind ?
Examples from my motor control library
part of bbMotor.h
#ifndef bbMotor_h
#define bbMotor_h

#include "Arduino.h"

class bbMotor
{
  public:
    bbMotor(int enable, int input_1_pin, int input_2_pin);
    void begin();
    void setSpeed(int speed);
    int getSpeed();
  private:
    int _enable;
    int _input_1_pin;
    int _input_2_pin;
    int _speed;
};

#endif
Part of bbMotor.cpp
#include "Arduino.h" #include "bbMotor.h" bbMotor::bbMotor(int enable, int input_1_pin, int input_2_pin) { _enable = enable; _input_1_pin = input_1_pin; _input_2_pin = input_2_pin; } void bbMotor::begin() //pinMode may not work if called in constructor, so use this { pinMode(_enable, OUTPUT); pinMode(_input_1_pin, OUTPUT); pinMode(_input_2_pin, OUTPUT); digitalWrite(_enable, LOW); } int bbMotor::getSpeed() { return _speed; }
An example of creating 2 instances of bbMotor
bbMotor leftMotor(6, 8, 7);  //enable (PWM), InA, InB
bbMotor rightMotor(3, 2, 4); //enable (PWM), InA, InB
Setting up a motor. Done in setup() rather than in the constructor to ensure that hardware is ready for pinMode()
leftMotor.begin();
Setting the speed of a motor
leftMotor.forward(100);
Getting the current speed of a motor
currentSpeed = leftMotor.getSpeed();
I hope this helps
tanaris12: perhaps I was not clear on my doubt.
The reference only shows one pin, declared as Morse morse(13); when calling the library. How do I declare more than just one pin, and how do I link variables to the library from the sketch? In the case of the pins in the example, is it Morse morse1..2..3..n()? And what is the proper coding for linking variables: are they declared in the sketch or in the library?
I have many years of experience in C programming, but I need to know how it's done in this language since the syntax is different. The variables I need are integers and an array.
Note, it may be that the Morse library is the one I downloaded some time ago, and it is badly written. It does not store the pin values in the constructor within the class, instead it has static variables in Morse.cpp that it copies the values into, so you can only have one device. For my steampunk camera, at times I wanted to generate Morse code for 'Fire' when I trigger the camera. I wanted an option to do either blinky lights or via a buzzer, and control it via a dip switch. I've been meaning to rewrite it so it stores the pin values within the class (so I can have a light morse target and a buzzer one), and also remove the delays, and rewrite it to use blink without delay, so the Arduino is not locked up when doing the morse sequence.
And I'm left with no answer at all.
The motor shield code does not say anything about how to do what I need because it's incomplete, but thanks for trying anyway.
The motor shield code I posted has an example of creating an instance of an object and passing 3 pin numbers to it
bbMotor leftMotor(6,8,7);
It has an example of passing a value to an instance of the object
leftMotor.forward(100);
Variables to be passed to the library object are declared in the normal way The example could just as well be
int speed = 100;
leftMotor.forward(speed);
It has an example of passing a variable from an object to the main code
currentSpeed = leftMotor.getSpeed();
Is that not what you want ?
I am sorry that you are
left with no answer at all.
Can you explain another way what it is you need to know ?
OK, so how do you separate the pins in the library so you can use different data with each one of them?
You should probably head over to cplusplus.com and read the tutorial on classes (). In fact you should probably read the tutorials from the start.
When you get a grip on how to handle object oriented programming your problem will become simpler. I think this is one of those things where examples won't help much without knowledge of the underlying concepts.
And that simple answer could not be given. Thanks for trying.
You've been given examples of how to pass multiple values to a function. What is it about that that you are having issues with?
I'll try my luck somewhere else. Don't worry about this post, I'll leave it here, but don't reply; I don't want to consume any of your time on insignificant issues like mine.
Not everyone "gets it" the same way. Different perspectives are sometimes useful. If you don't like a response, YOU can ignore it, too.
You were already offered very clear examples of the syntax. If you didn't understand them it's because you don't grasp the concept. That tutorial I linked to clearly shows how to define an object and how to pass arguments to the constructor (complete with proper syntax). All you need to do is pass the pin values to the constructor when you instance your object. However, with Arduino you don't want to initialize the pins in the constructor you want to have a begin method called during setup.
If you are clear on the concept that's all you should need to know. I think you should read the tutorial. In about an hour all that would make sense. Including the syntax.
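Pulling the thread together, here is a minimal sketch of the pattern being described: the constructor stores the pin numbers each instance was given, and begin() does the pinMode() calls from setup(). The Blinker3 class name is made up, and pinMode/OUTPUT are stubbed so the example compiles outside the Arduino core:

```cpp
#include <cassert>
#include <vector>

// Stubs standing in for the Arduino core so this compiles as plain C++;
// in a real sketch you would #include "Arduino.h" instead.
const int OUTPUT = 1;
std::vector<int> configured_pins;
void pinMode(int pin, int mode) { (void)mode; configured_pins.push_back(pin); }

// Hypothetical library class taking three pin numbers from the sketch.
class Blinker3 {
  public:
    Blinker3(int a, int b, int c) : _a(a), _b(b), _c(c), _rate(0) {}
    void begin() {              // call from setup(), not the constructor
        pinMode(_a, OUTPUT);
        pinMode(_b, OUTPUT);
        pinMode(_c, OUTPUT);
    }
    void setRate(int rate) { _rate = rate; } // variable passed in from the sketch
    int getRate() const { return _rate; }    // variable read back by the sketch
  private:
    int _a, _b, _c;             // each instance keeps its own pin numbers
    int _rate;
};
```

In a sketch you would create an instance globally, e.g. `Blinker3 left(6, 8, 7);`, and call `left.begin();` inside `setup()`; a second instance with different pins is completely independent.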
https://forum.arduino.cc/t/declaring-multiple-pins-on-include-libraries/152809
Getting Started
This first chapter of the book will get us going with Rust and its tooling. First, we’ll install Rust. Then, the classic ‘Hello World’ program. Finally, we’ll talk about Cargo, Rust’s build system and package manager.
We’ll be showing off a number of commands using a terminal, and those lines all
start with
$. You don't need to type in the
$s, they are there to indicate
the start of each command. We’ll see many tutorials and examples around the web
that follow this convention:
$ for commands run as our regular user, and
#
for commands we should be running as an administrator.
Installing Rust
The first step to using Rust is to install it. Generally speaking, you’ll need an Internet connection to run the commands in this section, as we’ll be downloading Rust from the Internet.
The Rust compiler runs on, and compiles to, a great number of platforms, but is best supported on Linux, Mac, and Windows, on the x86 and x86-64 CPU architecture. There are official builds of the Rust compiler and standard library for these platforms and more. For full details on Rust platform support see the website.
Installing Rust
All you need to do on Unix systems like Linux and macOS is open a terminal and type this:
$ curl -sSf | sh
It will download a script, and start the installation. If everything goes well, you’ll see this appear:
Rust is installed now. Great!
Installing on Windows is nearly as easy: download and run rustup-init.exe. It will start the installation in a console and present the above message on success.
For other installation options and information, visit the install page of the Rust website.
Uninstalling
Uninstalling Rust is as easy as installing it:
$ rustup self uninstall
Troubleshooting
If we've got Rust installed, we can open up a shell, and type this:
$ rustc --version
You should see the version number, commit hash, and commit date.
If you do, Rust has been installed successfully! Congrats!
If you don't, that probably means that the
PATH environment variable
doesn't include Cargo's binary directory,
~/.cargo/bin on Unix, or
%USERPROFILE%\.cargo\bin on Windows. This is the directory where
Rust development tools live, and most Rust developers keep it in their
PATH environment variable, which makes it possible to run
rustc on
the command line. Due to differences in operating systems, command
shells, and bugs in installation, you may need to restart your shell,
log out of the system, or configure
PATH manually as appropriate for
your operating environment.
Rust does not do its own linking, and so you’ll need to have a linker
installed. Doing so will depend on your specific system. For
Linux-based systems, Rust will attempt to call
cc for linking. On
windows-msvc (Rust built on Windows with Microsoft Visual Studio),
this depends on having Microsoft Visual C++ Build Tools
installed. These do not need to be in
%PATH% as
rustc will find
them automatically. In general, if you have your linker in a
non-traditional location you can call
rustc -C linker=/path/to/cc, where
/path/to/cc should point to your linker path.
If you are still stuck, there are a number of places where we can get help. The easiest is the #rust-beginners IRC channel on irc.mozilla.org and for general discussion the #rust IRC channel on irc.mozilla.org, which we can access through Mibbit. Then we'll be chatting with other Rustaceans (a silly nickname we call ourselves) who can help us out. Other great resources include the user’s forum and Stack Overflow.
This installer also installs a copy of the documentation locally, so we can
read it offline. It's only a
rustup doc away!
Hello, world!
Now that you have Rust installed, we'll help you write your first Rust program. It's traditional when learning a new language to write a little program to print the text “Hello, world!” to the screen, and in this section, we'll follow that tradition.
The nice thing about starting with such a simple program is that you can quickly verify that your compiler is installed, and that it's working properly. Printing information to the screen is also a pretty common thing to do, so practicing it early on is good.
Note: This book assumes basic familiarity with the command line. Rust itself makes no specific demands about your editing, tooling, or where your code lives, so if you prefer an IDE to the command line, that's an option. You may want to check out SolidOak, which was built specifically with Rust in mind. There are a number of extensions in development by the community, and the Rust team ships plugins for various editors. Configuring your editor or IDE is out of the scope of this tutorial, so check the documentation for your specific setup.
Creating a Project File
First, make a file to put your Rust code in. Rust doesn't care where your code lives, but for this book, I suggest making a projects directory in your home directory, and keeping all your projects there. Open a terminal and enter the following commands to make a directory for this particular project:
$ mkdir ~/projects
$ cd ~/projects
$ mkdir hello_world
$ cd hello_world
Note: If you’re on Windows and not using PowerShell, the
~may not work. Consult the documentation for your shell for more details.
Writing and Running a Rust Program
We need to create a source file for our Rust program. Rust files always end in a .rs extension. If you are using more than one word in your filename, use an underscore to separate them; for example, you would use my_program.rs rather than myprogram.rs.
Now, make a new file and call it main.rs. Open the file and type the following code:
fn main() {
    println!("Hello, world!");
}
Save the file, and go back to your terminal window. On Linux or macOS, enter the following commands:
$ rustc main.rs
$ ./main
Hello, world!
In Windows, replace
main with
main.exe. Regardless of your operating
system, you should see the string
Hello, world! print to the terminal. If you
did, then congratulations! You've officially written a Rust program. That makes
you a Rust programmer! Welcome.
Anatomy of a Rust Program
Now, let’s go over what just happened in your "Hello, world!" program in detail. Here's the first piece of the puzzle:
fn main() { }
These lines define a function in Rust. The
main function is special: it's
the beginning of every Rust program. The first line says, “I’m declaring a
function named
main that takes no arguments and returns nothing.” If there
were arguments, they would go inside the parentheses (
( and
)), and because
we aren’t returning anything from this function, we can omit the return type
entirely.
Also note that the function body is wrapped in curly braces (
{ and
}). Rust
requires these around all function bodies. It's considered good style to put
the opening curly brace on the same line as the function declaration, with one
space in between.
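For contrast, here is a sketch of a function that does take arguments and return a value; the name `add` and its signature are just for illustration:

```rust
// Parameters are declared inside the parentheses with their types,
// and `-> i32` declares the return type.
fn add(x: i32, y: i32) -> i32 {
    x + y // the final expression (no semicolon) is the return value
}
```

Calling `add(2, 3)` evaluates to `5`.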
Inside the
main() function:
println!("Hello, world!");
This line does all of the work in this little program: it prints text to the screen. There are a number of details that are important here. The first is that it’s indented with four spaces, not tabs.
The second important part is the
println!() line. This is calling a Rust
macro, which is how metaprogramming is done in Rust. If it were calling a
function instead, it would look like this:
println() (without the !). We'll
discuss Rust macros in more detail later, but for now you only need to
know that when you see a
! that means that you’re calling a macro instead of
a normal function.
Next is
"Hello, world!" which is a string. Strings are a surprisingly
complicated topic in a systems programming language, and this is a statically
allocated string. We pass this string as an argument to
println!, which
prints the string to the screen. Easy enough!
The line ends with a semicolon (
;). Rust is an expression-oriented
language, which means that most things are expressions, rather than
statements. The
; indicates that this expression is over, and the next one is
ready to begin. Most lines of Rust code end with a
;.
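Being expression-oriented shows up even in small code: a block `{ ... }` is itself an expression whose value is its final, semicolon-less expression. A sketch (the function name is hypothetical):

```rust
// The block on the right-hand side of `=` is an expression; `a + 3`
// has no trailing `;`, so it becomes the block's value.
fn five() -> i32 {
    let x = {
        let a = 2;
        a + 3
    };
    x
}
```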
Compiling and Running Are Separate Steps
In "Writing and Running a Rust Program", we showed you how to run a newly created program. We'll break that process down and examine each step now.
Before running a Rust program, you have to compile it. You can use the Rust
compiler by entering the
rustc command and passing it the name of your source
file, like this:
$ rustc main.rs
If you come from a C or C++ background, you'll notice that this is similar to
gcc or
clang. After compiling successfully, Rust should output a binary
executable, which you can see on Linux or macOS by entering the
ls command in
your shell as follows:
$ ls
main  main.rs
On Windows, you'd enter:
$ dir
main.exe  main.rs
This shows we have two files: the source code, with an
.rs extension, and the
executable (
main.exe on Windows,
main everywhere else). All that's left to
do from here is run the
main or
main.exe file, like this:
$ ./main # or .\main.exe on Windows
If main.rs were your "Hello, world!" program, this would print
Hello, world! to your terminal.
If you come from a dynamic language like Ruby, Python, or JavaScript, you may
not be used to compiling and running a program being separate steps. In those
languages, one command both compiles and runs your program; in Rust, compilation
happens ahead of time, so you can hand the resulting executable to someone who
doesn't even have Rust installed. Everything is a tradeoff in language design.
Just compiling with
rustc is fine for simple programs, but as your project
grows, you'll want to be able to manage all of the options your project has,
and make it easy to share your code with other people and projects. Next, I'll
introduce you to a tool called Cargo, which will help you write real-world Rust
programs.
Hello, Cargo!
Cargo is Rust’s build system and package manager, and Rustaceans use Cargo to manage their Rust projects. Cargo manages three things: building your code, downloading the libraries your code depends on, and building those libraries. We call libraries your code needs ‘dependencies’ since your code depends on them.
The simplest Rust programs don’t have any dependencies, so right now, you'd only use the first part of its functionality. As you write more complex Rust programs, you’ll want to add dependencies, and if you start off using Cargo, that will be a lot easier to do.
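As a preview, a dependency is declared with one extra line in a `[dependencies]` section of Cargo.toml; the `rand` crate and version number below are only an example:

```toml
[package]
name = "hello_world"
version = "0.0.1"
authors = [ "Your name <you@example.com>" ]

[dependencies]
rand = "0.8"
```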
As the vast, vast majority of Rust projects use Cargo, we will assume that you’re using it for the rest of the book. Cargo comes installed with Rust itself, if you used the official installers. If you installed Rust through some other means, you can check if you have Cargo installed by typing:
$ cargo --version
Into a terminal. If you see a version number, great! If you see an error like
‘
command not found’, then you should look at the documentation for the system
in which you installed Rust, to determine if Cargo is separate.
Converting to Cargo
Let’s convert the Hello World program to Cargo. To Cargo-fy a project, you need to do three things:
- Put your source file in the right directory.
- Get rid of the old executable (main.exe on Windows, main everywhere else).
- Make a Cargo configuration file.
Let's get started!
Creating a Source Directory and Removing the Old Executable
First, go back to your terminal, move to your hello_world directory, and enter the following commands:
$ mkdir src
$ mv main.rs src/main.rs # or 'move main.rs src/main.rs' on Windows
$ rm main # or 'del main.exe' on Windows
Cargo expects your source files to live inside a src directory, so do that first. This leaves the top-level project directory (in this case, hello_world) for READMEs, license information, and anything else not related to your code. In this way, using Cargo helps you keep your projects nice and tidy. There's a place for everything, and everything is in its place.
Now, move main.rs into the src directory, and delete the compiled file you
created with
rustc. As usual, replace
main with
main.exe if you're on
Windows.
This example retains
main.rs as the source filename because it's creating an
executable. If you wanted to make a library instead, you'd name the file
lib.rs. This convention is used by Cargo to successfully compile your
projects, but it can be overridden if you wish.
Creating a Configuration File
Next, create a new file inside your hello_world directory, and call it
Cargo.toml.
Make sure to capitalize the
C in
Cargo.toml, or Cargo won't know what to do
with the configuration file.
This file is in the TOML (Tom's Obvious, Minimal Language) format. TOML is similar to INI, but has some extra goodies, and is used as Cargo’s configuration format.
Inside this file, type the following information:
[package] name = "hello_world" version = "0.0.1" authors = [ "Your name <you@example.com>" ]
The first line,
[package], indicates that the following statements are
configuring a package. As we add more information to this file, we’ll add other
sections, but for now, we only have the package configuration.
The other three lines set the three bits of configuration that Cargo needs to know to compile your program: its name, what version it is, and who wrote it.
Once you've added this information to the Cargo.toml file, save it to finish creating the configuration file.
Building and Running a Cargo Project
With your Cargo.toml file in place in your project's root directory, you should be ready to build and run your Hello World program! To do so, enter the following commands:
$ cargo build
   Compiling hello_world v0.0.1 ()
$ ./target/debug/hello_world
Hello, world!
Bam! If all goes well,
Hello, world! should print to the terminal once more.
You just built a project with
cargo build and ran it with
./target/debug/hello_world, but you can actually do both in one step with
cargo run as follows:
$ cargo run
     Running `target/debug/hello_world`
Hello, world!
The
run command comes in handy when you need to rapidly iterate on a
project.
Notice that this example didn’t re-build the project. Cargo figured out that the file hasn’t changed, and so it just ran the binary. If you'd modified your source code, Cargo would have rebuilt the project before running it, and you would have seen something like this:
$ cargo run
   Compiling hello_world v0.0.1 ()
     Running `target/debug/hello_world`
Hello, world!
Cargo checks to see if any of your project’s files have been modified, and only rebuilds your project if they’ve changed since the last time you built it.
With simple projects, Cargo doesn't bring a whole lot over just using
rustc,
but it will become useful in the future. This is especially true when you start
using crates; these are synonymous with a ‘library’ or ‘package’ in other
programming languages. For complex projects composed of multiple crates, it’s
much easier to let Cargo coordinate the build. Using Cargo, you can run
cargo build, and it should work the right way.
Building for Release
When your project is ready for release, you can use
cargo build --release to compile your project with optimizations. These optimizations make
your Rust code run faster, but turning them on makes your program take longer
to compile. This is why there are two different profiles, one for development,
and one for building the final program you’ll give to a user.
What Is That
Cargo.lock?
Running
cargo build also causes Cargo to create a new file called
Cargo.lock, which looks like this:
[root] name = "hello_world" version = "0.0.1"
Cargo uses the Cargo.lock file to keep track of dependencies in your application. This is the Hello World project's Cargo.lock file. This project doesn't have dependencies, so the file is a bit sparse. Realistically, you won't ever need to touch this file yourself; just let Cargo handle it.
That’s it! If you've been following along, you should have successfully built
hello_world with Cargo.
Even though the project is simple, it now uses much of the real tooling you’ll use for the rest of your Rust career. In fact, you can expect to start virtually all Rust projects with some variation on the following commands:
$ git clone someurl.com/foo
$ cd foo
$ cargo build
Making A New Cargo Project the Easy Way
You don’t have to go through that previous process every time you want to start a new project! Cargo can quickly make a bare-bones project directory that you can start developing in right away.
To start a new project with Cargo, enter
cargo new at the command line:
$ cargo new hello_world --bin
This command passes
--bin because the goal is to get straight to making an
executable application, as opposed to a library. Executables are often called
binaries (as in
/usr/bin, if you’re on a Unix system).
Cargo has generated two files and one directory for us: a
Cargo.toml and a
src directory with a main.rs file inside. These should look familiar;
they’re exactly what we created by hand, above.
This output is all you need to get started. First, open
Cargo.toml. It should
look something like this:
[package] name = "hello_world" version = "0.1.0" authors = ["Your Name <you@example.com>"] [dependencies]
Do not worry about the
[dependencies] line, we will come back to it later.
Cargo has populated Cargo.toml with reasonable defaults based on the arguments
you gave it and your
git global configuration. You may notice that Cargo has
also initialized the
hello_world directory as a
git repository.
Here’s what should be in
src/main.rs:
fn main() {
    println!("Hello, world!");
}
Cargo has generated a "Hello World!" for you, and you’re ready to start coding!
Note: If you want to look at Cargo in more detail, check out the official Cargo guide, which covers all of its features.
Closing Thoughts
This chapter covered the basics that will serve you well through the rest of this book, and the rest of your time with Rust. Now that you’ve got the tools down, we'll cover more about the Rust language itself.
You have two options: Dive into a project with ‘Tutorial: Guessing Game’, or start from the bottom and work your way up with ‘Syntax and Semantics’. More experienced systems programmers will probably prefer ‘Tutorial: Guessing Game’, while those from dynamic backgrounds may enjoy either. Different people learn differently! Choose whatever’s right for you.
Guessing Game
Let’s learn some Rust by building a guessing game. Sound good?
Along the way, we’ll learn a little bit about Rust. The next chapter, ‘Syntax and Semantics’, will dive deeper into each part.
Set up

$ cargo new guessing_game --bin
     Created binary (application) `guessing_game` project
$ cd guessing_game
We pass the name of our project to
cargo new, and then the
--bin flag,
since we’re making a binary, rather than a library.
Cargo.toml:
[package] name = "guessing_game" version = "0.1.0" authors = ["Your Name <you@example.com>"]
Cargo gets this information from your environment. If it’s not correct, go ahead and fix that.
Finally, Cargo generated a ‘Hello, world!’ for us. Check out
src/main.rs:
fn main() {
    println!("Hello, world!");
}
Let’s try compiling what Cargo gave us:
$ cargo build
   Compiling guessing_game v0.1.0 ()
    Finished debug [unoptimized + debuginfo] target(s) in 0.53 secs
Excellent! Open up your
src/main.rs again. We’ll be writing all of
our code in this file.
Remember the
run command from last chapter? Try it out again here:
$ cargo run
   Compiling guessing_game v0.1.0 ()
    Finished debug [unoptimized + debuginfo] target(s) in 0.0 secs
     Running `target/debug/guessing_game`
Hello, world!
Great! Our game is just the kind of project
run is good for: we need
to quickly test each iteration before moving on to the next one.
Processing a Guess
Let’s get to it! The first thing we need to do for our guessing game is
allow our player to input a guess. Put this in your
src/main.rs:
use std::io;

fn main() {
    println!("Guess the number!");

    println!("Please input your guess.");

    let mut guess = String::new();

    io::stdin().read_line(&mut guess)
        .expect("Failed to read line");

    println!("You guessed: {}", guess);
}
There’s a lot here! Let’s go over it, bit by bit.
use std::io;
We’ll need to take user input, and then print the result as output. As such, we
need the
io library from the standard library. Rust only imports a few things
by default into every program, the ‘prelude’. If it’s not in the
prelude, you’ll have to
use it directly. There is also a second ‘prelude’, the
io prelude, which serves a similar function: you import it, and it
imports a number of useful,
io-related things.
fn main() {
As you’ve seen before, the
main() function is the entry point into your
program. The
fn syntax declares a new function, the
()s indicate that
there are no arguments, and
{ starts the body of the function. Because
we didn’t include a return type, it’s assumed to be
(), an empty
tuple.
println!("Guess the number!");
println!("Please input your guess.");
We previously learned that
println!() is a macro that
prints a string to the screen.
let mut guess = String::new();
Now we’re getting interesting! There’s a lot going on in this little line. The first thing to notice is that this is a let statement, which is used to create ‘variable bindings’. They take this form:
let foo = bar;
This will create a new binding named
foo, and bind it to the value
bar. In
many languages, this is called a ‘variable’, but Rust’s variable bindings have
a few tricks up their sleeves.
For example, they’re immutable by default. That’s why our example
uses
mut: it makes a binding mutable, rather than immutable.
let doesn’t
take a name on the left hand side of the assignment, it actually accepts a
‘pattern’. We’ll use patterns later. It’s easy enough
to use for now:
let foo = 5; // `foo` is immutable.
let mut bar = 5; // `bar` is mutable.
Oh, and
// will start a comment, until the end of the line. Rust ignores
everything in comments.
So now we know that
let mut guess will introduce a mutable binding named
guess, but we have to look at the other side of the
= for what it’s
bound to:
String::new().
String is a string type, provided by the standard library. A
String is a growable, UTF-8 encoded bit of text.
The
::new() syntax uses
:: because this is an ‘associated function’ of
a particular type. That is to say, it’s associated with
String itself,
rather than a particular instance of a
String. Some languages call this a
‘static method’.
This function is named
new(), because it creates a new, empty
String.
You’ll find a
new() function on many types, as it’s a common name for making
a new value of some kind.
Let’s move forward:
io::stdin().read_line(&mut guess) .expect("Failed to read line");
That’s a lot more! Let’s go bit-by-bit. The first line has two parts. Here’s the first:
io::stdin()
Remember how we
used
std::io on the first line of the program? We’re now
calling an associated function on it. If we didn’t
use std::io, we could
have written this line as
std::io::stdin().
This particular function returns a handle to the standard input for your terminal. More specifically, a std::io::Stdin.
The next part will use this handle to get input from the user:
.read_line(&mut guess)
Here, we call the
read_line method on our handle.
Methods are like associated functions, but are only available on a
particular instance of a type, rather than the type itself. We’re also passing
one argument to
read_line():
&mut guess.
Remember how we bound
guess above? We said it was mutable. However,
read_line doesn’t take a
String as an argument: it takes a
&mut String.
Rust has a feature called ‘references’, which allows you to have
multiple references to one piece of data, which can reduce copying. References
are a complex feature, as one of Rust’s major selling points is how safe and
easy it is to use references. We don’t need to know a lot of those details to
finish our program right now, though. For now, all we need to know is that
like
let bindings, references are immutable by default. Hence, we need to
write
&mut guess, rather than
&guess.
Why does
read_line() take a mutable reference to a string? Its job is
to take what the user types into standard input, and place that into a
string. So it takes that string as an argument, and in order to add
the input, it needs to be mutable.
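To see why the mutable reference matters, here is a small sketch; `fill_greeting` is a hypothetical stand-in for `read_line` that appends into a caller-owned `String` through `&mut`:

```rust
// Hypothetical helper mirroring how `read_line` fills a buffer:
// it needs `&mut String` so it can append to the caller's string.
fn fill_greeting(buffer: &mut String) {
    buffer.push_str("hello");
}

fn main() {
    let mut guess = String::new();
    fill_greeting(&mut guess); // `&guess` alone would not permit mutation.
    println!("buffer now holds: {}", guess);
}
```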
We’re not quite done with this line of code, though. While it’s a single line of text, it’s only the first part of the single logical line of code:
.expect("Failed to read line");
When you call a method with the
.foo() syntax, you may introduce a newline
and other whitespace. This helps you split up long lines. We could have
done:
io::stdin().read_line(&mut guess).expect("Failed to read line");
But that gets hard to read. So we’ve split it up, two lines for two method
calls. We already talked about
read_line(), but what about
expect()? Well,
we already mentioned that
read_line() puts what the user types into the
&mut String we pass it. But it also returns a value: in this case, an
io::Result. Rust has a number of types named
Result in its
standard library: a generic
Result, and then specific versions for
sub-libraries, like
io::Result.
The purpose of these
Result types is to encode error handling information.
Values of the
Result type, like any type, have methods defined on them. In
this case,
io::Result has an
expect() method that takes the value
it’s called on, and if it isn’t a successful one,
panic!s with a
panic!s with a
message you passed it. A
panic! like this will cause our program to crash,
displaying the message.
If we do not call
expect(), our program will compile, but
we’ll get a warning:
$ cargo build Compiling guessing_game v0.1.0 () warning: unused result which must be used, #[warn(unused_must_use)] on by default --> src/main.rs:10:5 | 10 | io::stdin().read_line(&mut guess); | ^ Finished debug [unoptimized + debuginfo] target(s) in 0.42 secs
Rust warns us that we haven’t used the
Result value. This warning comes from
a special annotation that
io::Result has. Rust is trying to tell you that
you haven’t handled a possible error. The right way to suppress the warning is
to actually write error handling. Luckily, if we want to crash if there’s
a problem, we can use
expect(). If we can recover from the
error somehow, we’d do something else, but we’ll save that for a future
project.
There’s only one line of this first example left:
println!("You guessed: {}", guess); }
This prints out the string we saved our input in. The
{}s are a placeholder,
and so we pass it
guess as an argument. If we had multiple
{}s, we would
pass multiple arguments:
# #![allow(unused_variables)] #fn main() { let x = 5; let y = 10; println!("x and y: {} and {}", x, y); #}
Easy.
Anyway, that’s the tour. We can run what we have with
cargo run:
$ cargo run Compiling guessing_game v0.1.0 () Finished debug [unoptimized + debuginfo] target(s) in 0.44 secs Running `target/debug/guessing_game` Guess the number! Please input your guess. 6 You guessed: 6
All right! Our first part is done: we can get input from the keyboard, and then print it back out.
Generating a secret number
Next, we need to generate a secret number. Rust does not yet include random
number functionality in its standard library. The Rust team does, however,
provide a
rand crate. A ‘crate’ is a package of Rust code.
We’ve been building a ‘binary crate’, which is an executable.
rand is a
‘library crate’, which contains code that’s intended to be used with other
programs.
Using external crates is where Cargo really shines. Before we can write
the code using
rand, we need to modify our
Cargo.toml. Open it up, and
add these few lines at the bottom:
[dependencies] rand = "0.3.0"
The
[dependencies] section of
Cargo.toml is like the
[package] section:
everything that follows it is part of it, until the next section starts.
Cargo uses the dependencies section to know what dependencies on external
crates you have, and what versions you require. In this case, we’ve specified version
0.3.0,
which Cargo understands to be any release that’s compatible with this specific version.
Cargo understands Semantic Versioning, which is a standard for writing version
numbers. A bare number like above is actually shorthand for
^0.3.0,
meaning "anything compatible with 0.3.0".
If we wanted to use only
0.3.0 exactly, we could say
rand = "=0.3.0"
(note the two equal signs).
We could also use a range of versions.
Cargo’s documentation contains more details.
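For illustration, here is a sketch of the requirement forms mentioned above; the `exact-example` and `ranged-example` crate names are placeholders, not real dependencies:

```toml
[dependencies]
rand = "0.3.0"                        # shorthand for ^0.3.0: any compatible 0.3.x release
exact-example = "=0.3.0"              # exactly 0.3.0, nothing newer or older
ranged-example = ">= 0.3.0, < 0.4.0"  # an explicit version range
```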
Now, without changing any of our code, let’s build our project:
$ cargo build Updating registry `` Downloading rand v0.3.14 Downloading libc v0.2.17 Compiling libc v0.2.17 Compiling rand v0.3.14 Compiling guessing_game v0.1.0 () Finished debug [unoptimized + debuginfo] target(s) in 5.88 secs
(You may see different versions, of course.)
Lots of new output! Now that we have an external dependency, Cargo fetches the latest versions of everything from the registry. It then checks our
[dependencies] and downloads
any we don’t have yet. In this case, while we only said we wanted to depend on
rand, we’ve also grabbed a copy of
libc. This is because
rand depends on
libc to work. After downloading them, it compiles them, and then compiles
our project.
If we run
cargo build again, we’ll get different output:
$ cargo build Finished debug [unoptimized + debuginfo] target(s) in 0.0 secs
That’s right, nothing was done! Cargo knows that our project has been built, and that
all of its dependencies are built, and so there’s no reason to do all that
stuff. With nothing to do, it simply exits. If we open up
src/main.rs again,
make a trivial change, and then save it again, we’ll only see two lines:
$ cargo build Compiling guessing_game v0.1.0 () Finished debug [unoptimized + debuginfo] target(s) in 0.45 secs
So, we told Cargo we wanted any
0.3.x version of
rand, and so it fetched the latest
version at the time this was written,
v0.3.14. But what happens when next
week, version
v0.3.15 comes out, with an important bugfix? While getting
bugfixes is important, what if
0.3.15 contains a regression that breaks our
code?
The answer to this problem is the
Cargo.lock file you’ll now find in your
project directory. When you build your project for the first time, Cargo
figures out all of the versions that fit your criteria, and then writes them
to the
Cargo.lock file. When you build your project in the future, Cargo
will see that the
Cargo.lock file exists, and then use that specific version
rather than do all the work of figuring out versions again. This lets you
have a repeatable build automatically. In other words, we’ll stay at
0.3.14
until we explicitly upgrade, and so will anyone who we share our code with,
thanks to the lock file.
What about when we do want to use
v0.3.15? Cargo has another command,
update, which says ‘ignore the lock, figure out all the latest versions that
fit what we’ve specified. If that works, write those versions out to the lock
file’. But, by default, Cargo will only look for versions larger than
0.3.0
and smaller than
0.4.0. If we want to move to
0.4.x, we’d have to update
the
Cargo.toml directly. When we do, the next time we
cargo build, Cargo
will update the index and re-evaluate our
rand requirements.
There’s a lot more to say about Cargo and its ecosystem, but for now, that’s all we need to know. Cargo makes it really easy to re-use libraries, and so Rustaceans tend to write smaller projects which are assembled out of a number of sub-packages.
Let’s get on to actually using
rand. Here’s our next step:
extern crate rand;

use std::io;
use rand::Rng;

fn main() {
    println!("Guess the number!");

    let secret_number = rand::thread_rng().gen_range(1, 101);

    println!("The secret number is: {}", secret_number);

    println!("Please input your guess.");

    let mut guess = String::new();

    io::stdin().read_line(&mut guess)
        .expect("Failed to read line");

    println!("You guessed: {}", guess);
}
The first thing we’ve done is change the first line. It now says
extern crate rand. Because we declared
rand in our
[dependencies], we
can use
extern crate to let Rust know we’ll be making use of it. This also
does the equivalent of a
use rand; as well, so we can make use of anything
in the
rand crate by prefixing it with
rand::.
Next, we added another
use line:
use rand::Rng. We’re going to use a
method in a moment, and it requires that
Rng be in scope to work. The basic
idea is this: methods are defined on something called ‘traits’, and for the
method to work, it needs the trait to be in scope. For more about the
details, read the traits section.
There are two other lines we added, in the middle:
let secret_number = rand::thread_rng().gen_range(1, 101); println!("The secret number is: {}", secret_number);
We use the
rand::thread_rng() function to get a copy of the random number
generator, which is local to the particular thread of execution
we’re in. Because we
use rand::Rng’d above, it has a
gen_range() method
available. This method takes two arguments, and generates a number between
them. It’s inclusive on the lower bound, but exclusive on the upper bound,
so we need
1 and
101 to get a number ranging from one to a hundred.
The second line prints out the secret number. This is useful while we’re developing our program, so we can easily test it out. But we’ll be deleting it for the final version. It’s not much of a game if it prints out the answer when you start it up!
Try running our new program a few times:
$ cargo run Compiling guessing_game v0.1.0 () Finished debug [unoptimized + debuginfo] target(s) in 0.55 secs Running `target/debug/guessing_game` Guess the number! The secret number is: 7 Please input your guess. 4 You guessed: 4 $ cargo run Finished debug [unoptimized + debuginfo] target(s) in 0.0 secs Running `target/debug/guessing_game` Guess the number! The secret number is: 83 Please input your guess. 5 You guessed: 5
Great! Next up: comparing our guess to the secret number.
Comparing guesses
Now that we’ve got user input, let’s compare our guess to the secret number. Here’s our next step, though it doesn’t quite compile yet:

extern crate rand;

use std::io;
use std::cmp::Ordering;
use rand::Rng;

fn main() {
    println!("Guess the number!");

    let secret_number = rand::thread_rng().gen_range(1, 101);

    println!("The secret number is: {}", secret_number);

    println!("Please input your guess.");

    let mut guess = String::new();

    io::stdin().read_line(&mut guess)
        .expect("Failed to read line");

    println!("You guessed: {}", guess);

    match guess.cmp(&secret_number) {
        Ordering::Less => println!("Too small!"),
        Ordering::Greater => println!("Too big!"),
        Ordering::Equal => println!("You win!"),
    }
}
A few new bits here. The first is another
use. We bring a type called
std::cmp::Ordering into scope. Then, five new lines at the bottom that use
it:
match guess.cmp(&secret_number) { Ordering::Less => println!("Too small!"), Ordering::Greater => println!("Too big!"), Ordering::Equal => println!("You win!"), }
The
cmp() method can be called on anything that can be compared, and it
takes a reference to the thing you want to compare it to. It returns the
Ordering type we
used earlier. We use a
match statement to
determine exactly what kind of
Ordering it is.
Ordering is an
enum, short for ‘enumeration’, which looks like this:
# #![allow(unused_variables)] #fn main() { enum Foo { Bar, Baz, } #}
With this definition, anything of type
Foo can be either a
Foo::Bar or a
Foo::Baz. We use the
:: to indicate the
namespace for a particular
enum variant.
The
Ordering
enum has three possible variants:
Less,
Equal,
and
Greater. The
match statement takes a value of a type, and lets you
create an ‘arm’ for each possible value. Since we have three types of
Ordering, we have three arms:
match guess.cmp(&secret_number) { Ordering::Less => println!("Too small!"), Ordering::Greater => println!("Too big!"), Ordering::Equal => println!("You win!"), }
If it’s
Less, we print
Too small!, if it’s
Greater,
Too big!, and if
Equal,
You win!.
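The cmp/match pattern works on any comparable values, not just our game state. A small sketch, with `compare` as a hypothetical helper:

```rust
use std::cmp::Ordering;

// Returns a message for how `a` compares to `b`, mirroring the game's match.
fn compare(a: u32, b: u32) -> &'static str {
    match a.cmp(&b) {
        Ordering::Less => "Too small!",
        Ordering::Greater => "Too big!",
        Ordering::Equal => "You win!",
    }
}

fn main() {
    println!("{}", compare(4, 7));   // prints "Too small!"
    println!("{}", compare(50, 7));  // prints "Too big!"
    println!("{}", compare(7, 7));   // prints "You win!"
}
```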
match is really useful, and is used often in Rust.
I did mention that this won’t quite compile yet, though. Let’s try it:
$ cargo build Compiling guessing_game v0.1.0 () error[E0308]: mismatched types --> src/main.rs:23:21 | 23 | match guess.cmp(&secret_number) { | ^^^^^^^^^^^^^^ expected struct `std::string::String`, found integral variable | = note: expected type `&std::string::String` = note: found type `&{integer}` error: aborting due to previous error error: Could not compile `guessing_game`. To learn more, run the command again with --verbose.
Whew! This is a big error. The core of it is that we have ‘mismatched types’.
Rust has a strong, static type system. However, it also has type inference.
When we wrote
let guess = String::new(), Rust was able to infer that
guess
should be a
String, and so it doesn’t make us write out the type. And with
our
secret_number, there are a number of types which can have a value
between one and a hundred:
i32, a thirty-two-bit number, or
u32, an
unsigned thirty-two-bit number, or
i64, a sixty-four-bit number or others.
So far, that hasn’t mattered, and so Rust defaults to an
i32. However, here,
Rust doesn’t know how to compare the
guess and the
secret_number. They
need to be the same type. Ultimately, we want to convert the
String we
read as input into a real number type, for comparison. We can do that
with two more lines. Here’s our new program: new two lines:
let guess: u32 = guess.trim().parse() .expect("Please type a number!");
Wait a minute, I thought we already had a
guess? We do, but Rust allows us
to ‘shadow’ the previous
guess with a new one. This is often used in this
exact situation, where
guess starts as a
String, but we want to convert it
to an
u32. Shadowing lets us re-use the
guess name, rather than forcing us
to come up with two unique names like
guess_str and
guess, or something
else.
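Shadowing is easy to try on its own. A minimal sketch, with `parse_guess` as a hypothetical helper (not part of the game):

```rust
// Hypothetical helper: trims the input and parses it, crashing on bad input.
fn parse_guess(input: &str) -> u32 {
    input.trim().parse().expect("Please type a number!")
}

fn main() {
    // `guess` starts life as a String, as it would after `read_line`.
    let guess = String::from("  5\n");
    // Shadow it with a parsed u32; the old String binding is no longer visible.
    let guess: u32 = parse_guess(&guess);
    println!("guess plus one is {}", guess + 1);
}
```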
We bind
guess to an expression that looks like something we wrote earlier:
guess.trim().parse()
Here,
guess refers to the old
guess, the one that was a
String with our
input in it. The
trim() method on
Strings will eliminate any white space at
the beginning and end of our string. This is important, as we had to press the
‘return’ key to satisfy
read_line(). This means that if we type
5 and hit
return,
guess looks like this:
5\n. The
\n represents ‘newline’, the
enter key.
trim() gets rid of this, leaving our string with only the
5. The
parse() method on strings parses a string into some kind of number.
Since it can parse a variety of numbers, we need to give Rust a hint as to the
exact type of number we want. Hence,
let guess: u32. The colon (
:) after
guess tells Rust we’re going to annotate its type.
u32 is an unsigned,
thirty-two bit integer. Rust has a number of built-in number types,
but we’ve chosen
u32. It’s a good default choice for a small positive number.
Just like
read_line(), our call to
parse() could cause an error. What if
our string contained
A👍%? There’d be no way to convert that to a number. As
such, we’ll do the same thing we did with
read_line(): use the
expect()
method to crash if there’s an error.
Let’s try our program out!
$ cargo run Compiling guessing_game v0.1.0 () Finished debug [unoptimized + debuginfo] target(s) in 0.57 secs.
Now we’ve got most of the game working, but we can only make one guess. Let’s change that by adding loops!
Looping
The loop keyword gives us an infinite loop. Let’s add that:

extern crate rand;

use std::io;
use std::cmp::Ordering;
use rand::Rng;

fn main() {
    println!("Guess the number!");

    let secret_number = rand::thread_rng().gen_range(1, 101);

    println!("The secret number is: {}", secret_number);

    loop {
        println!("Please input your guess.");

        let mut guess = String::new();

        io::stdin().read_line(&mut guess)
            .expect("Failed to read line");

        let guess: u32 = guess.trim().parse()
            .expect("Please type a number!");

        println!("You guessed: {}", guess);

        match guess.cmp(&secret_number) {
            Ordering::Less => println!("Too small!"),
            Ordering::Greater => println!("Too big!"),
            Ordering::Equal => println!("You win!"),
        }
    }
}
And try it out. But wait, didn’t we just add an infinite loop? Yup. Remember
our discussion about
parse()? If we give a non-number answer, we’ll
panic!
and quit. Observe:
$ cargo run Compiling guessing_game v0.1.0 () Finished debug [unoptimized + debuginfo] target(s) in 0.58 secs Running `target/debug/guessing_game` Guess the number! The secret number is: 59 Please input your guess. quit thread 'main' panicked at 'Please type a number!'
Ha!
quit actually quits. As does any other non-number input. Well, this is
suboptimal to say the least. First, let’s actually quit when you win the game, by changing the final arm of our match:

        match guess.cmp(&secret_number) {
            Ordering::Less => println!("Too small!"),
            Ordering::Greater => println!("Too big!"),
            Ordering::Equal => {
                println!("You win!");
                break;
            }
        }
By adding the
break line after the
You win!, we’ll exit the loop when we
win. Exiting the loop also means exiting the program, since it’s the last
thing in
main(). We have only one more tweak to make: when someone inputs a
non-number, we don’t want to quit, we want to ignore it. We can do that
like this:
These are the lines that changed:
let guess: u32 = match guess.trim().parse() { Ok(num) => num, Err(_) => continue, };
This is how you generally move from ‘crash on error’ to ‘actually handle the
error’, by switching from
expect() to a
match statement. A
Result is
returned by
parse(), this is an
enum like
Ordering, but in this case,
each variant has some data associated with it:
Ok is a success, and
Err is a
failure. Each contains more information: the successfully parsed integer, or an
error type. In this case, we
match on
Ok(num), which sets the name
num to
the unwrapped
Ok value (the integer), and then we return it on the
right-hand side. In the
Err case, we don’t care what kind of error it is, so
we just use the catch all
_ instead of a name. This catches everything that
isn't
Ok, and
continue lets us move to the next iteration of the loop; in
effect, this enables us to ignore all errors and continue with our program.
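The same Ok/Err match works outside the loop. Here is a sketch where `try_parse` is a hypothetical helper that returns `None` instead of `continue`-ing:

```rust
// A sketch of the crash-free parse: `None` here stands in for `continue`.
fn try_parse(input: &str) -> Option<u32> {
    match input.trim().parse() {
        Ok(num) => Some(num),
        Err(_) => None, // any parse error is ignored, whatever its kind
    }
}

fn main() {
    println!("{:?}", try_parse("42"));  // Some(42)
    println!("{:?}", try_parse("foo")); // None
}
```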
Now we should be good! Let’s try:
$ cargo run Compiling guessing_game v0.1.0 () Finished debug [unoptimized + debuginfo] target(s) in 0.57 secs Running `target/debug/guessing_game` Guess the number! The secret number is: 61 Please input your guess. 10 You guessed: 10 Too small! Please input your guess. 99 You guessed: 99 Too big! Please input your guess. foo Please input your guess. 61 You guessed: 61 You win!
Here’s the final program, with the println! that reveals the secret number removed:

extern crate rand;

use std::io;
use std::cmp::Ordering;
use rand::Rng;

fn main() {
    println!("Guess the number!");

    let secret_number = rand::thread_rng().gen_range(1, 101);

    loop {
        println!("Please input your guess.");

        let mut guess = String::new();

        io::stdin().read_line(&mut guess)
            .expect("Failed to read line");

        let guess: u32 = match guess.trim().parse() {
            Ok(num) => num,
            Err(_) => continue,
        };

        println!("You guessed: {}", guess);

        match guess.cmp(&secret_number) {
            Ordering::Less => println!("Too small!"),
            Ordering::Greater => println!("Too big!"),
            Ordering::Equal => {
                println!("You win!");
                break;
            }
        }
    }
}
Complete!
This project showed you a lot:
let,
match, methods, associated
functions, using external crates, and more.
At this point, you have successfully built the Guessing Game! Congratulations!
Syntax and Semantics
This chapter breaks Rust down into small chunks, one for each concept.
Variable Bindings
Virtually every non-‘Hello World’ Rust program uses variable bindings, which bind some value to a name so it can be used later:
fn main() { let x = 5; }
Functions
Every Rust program has at least one function, the
main function:
fn main() { }
This is the simplest possible function declaration. As we mentioned before,
fn says ‘this is a function’, followed by the name, some parentheses because
this function takes no arguments, and then some curly braces to indicate the
body. Here’s a function named
foo:
# #![allow(unused_variables)] #fn main() { fn foo() { } #}
So, what about taking arguments? Here’s a function that prints a number:
# #![allow(unused_variables)] #fn main() { fn print_number(x: i32) { println!("x is: {}", x); } #}
Here’s a complete program that uses
print_number:
fn main() { print_number(5); } fn print_number(x: i32) { println!("x is: {}", x); }
How about multiple arguments? Here’s a function that prints the sum of two numbers:
fn main() { print_sum(5, 6); } fn print_sum(x: i32, y: i32) { println!("sum is: {}", x + y); }
You separate arguments with a comma, both when you call the function, as well as when you declare it.
Unlike
let, you must declare the types of function arguments. This does
not work:
fn print_sum(x, y) { println!("sum is: {}", x + y); }
You get this error:
expected one of `!`, `:`, or `@`, found `)` fn print_sum(x, y) {
Unlike let, the compiler will not infer the types of function arguments. What about returning a value? Here’s a function that adds one to an integer:
# #![allow(unused_variables)] #fn main() { fn add_one(x: i32) -> i32 { x + 1 } #}
Rust functions return exactly one value, and you declare the type after an
‘arrow’, which is a dash (
-) followed by a greater-than sign (
>). The last
line of a function determines what it returns. You’ll note the lack of a
semicolon here. If we added it in:
fn add_one(x: i32) -> i32 { x + 1; }
We would get an error:
error: not all control paths return a value fn add_one(x: i32) -> i32 { x + 1; } help: consider removing this semicolon: x + 1; ^
This reveals two interesting things about Rust: it is an expression-based language, and semicolons are different from semicolons in other ‘curly brace and semicolon’-based languages. These two things are related.
Expressions vs. Statements
Rust is primarily an expression-based language. There are only two kinds of statements, and everything else is an expression.
So what's the difference? Expressions return a value, and statements do not.
That’s why we end up with ‘not all control paths return a value’ here: the
statement
x + 1; doesn’t return a value. There are two kinds of statements in
Rust: ‘declaration statements’ and ‘expression statements’. Everything else is
an expression. Let’s talk about declaration statements first.
In some languages, variable bindings can be written as expressions, not statements. Like Ruby:
x = y = 5
In Rust, however, using
let to introduce a binding is not an expression. The
following will produce a compile-time error:
let x = (let y = 5); // Expected identifier, found keyword `let`.
The compiler is telling us here that it was expecting to see the beginning of
an expression, and a
let can only begin a statement, not an expression.
Note that assigning to an already-bound variable (e.g.
y = 5) is still an
expression, although its value is not particularly useful. Unlike other
languages where an assignment evaluates to the assigned value (e.g.
5 in the
previous example), in Rust the value of an assignment is an empty tuple
()
because the assigned value can have only one owner, and any
other returned value would be too surprising:
# #![allow(unused_variables)] #fn main() { let mut y = 5; let x = (y = 6); // `x` has the value `()`, not `6`. #}
The second kind of statement in Rust is the expression statement, which turns any expression into a statement by adding a semicolon. The semicolon discards the expression’s value, which matters for return values. Recall our add_one function:
# #![allow(unused_variables)] #fn main() { fn add_one(x: i32) -> i32 { x + 1 } #}
Our function claims to return an
i32, but with a semicolon, it would return
() instead. Rust realizes this probably isn’t what we want, and suggests
removing the semicolon in the error we saw before.
Early returns
But what about early returns? Rust does have a keyword for that,
return:
# #![allow(unused_variables)] #fn main() { fn foo(x: i32) -> i32 { return x; // We never run this code! x + 1 } #}
Using a
return as the last line of a function works, but is considered poor
style:
# #![allow(unused_variables)] #fn main() { fn foo(x: i32) -> i32 { return x + 1; } #}
The previous definition without
return may look a bit strange if you haven’t
worked in an expression-based language before, but it becomes intuitive over
time.
Diverging functions
Rust has some special syntax for ‘diverging functions’, which are functions that do not return:
# #![allow(unused_variables)] #fn main() { fn diverges() -> ! { panic!("This function never returns!"); } #}
panic! is a macro, similar to
println!() that we’ve already seen. Unlike
println!(),
panic!() causes the current thread of execution to crash with
the given message. Because this function will cause a crash, it will never
return, and so it has the type ‘
!’, which is read ‘diverges’.
If you add a main function that calls
diverges() and run it, you’ll get
some output that looks like this:
thread ‘main’ panicked at ‘This function never returns!’, hello.rs:2
If you want more information, you can get a backtrace by setting the
RUST_BACKTRACE environment variable:
$ RUST_BACKTRACE=1 ./diverges thread 'main' panicked at 'This function never returns!', hello.rs:2 Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace. stack backtrace: hello::diverges at ./hello.rs:2 hello::main at ./hello.rs:6
If you want the complete backtrace and filenames:
$ RUST_BACKTRACE=full ./diverges
If you need to override an already set
RUST_BACKTRACE,
in cases when you cannot just unset the variable,
then set it to
0 to avoid getting a backtrace.
Any other value (even no value at all) turns on backtrace.
$ export RUST_BACKTRACE=1 ... $ RUST_BACKTRACE=0 ./diverges thread 'main' panicked at 'This function never returns!', hello.rs:2 note: Run with `RUST_BACKTRACE=1` for a backtrace.
RUST_BACKTRACE also works with Cargo’s
run command:
$ RUST_BACKTRACE=full cargo run
A diverging function can be used as any type:
# #![allow(unused_variables)] #fn main() { # fn diverges() -> ! { # panic!("This function never returns!"); # } let x: i32 = diverges(); let x: String = diverges(); #}
Function pointers
We can also create variable bindings which point to functions:
# #![allow(unused_variables)] #fn main() { let f: fn(i32) -> i32; #}
f is a variable binding which points to a function that takes an
i32 as
an argument and returns an
i32. For example:
# #![allow(unused_variables)] #fn main() { fn plus_one(i: i32) -> i32 { i + 1 } // Without type inference: let f: fn(i32) -> i32 = plus_one; // With type inference: let f = plus_one; #}
We can then use
f to call the function:
# #![allow(unused_variables)] #fn main() { # fn plus_one(i: i32) -> i32 { i + 1 } # let f = plus_one; let six = f(5); #}
Primitive Types
The Rust language has a number of types that are considered ‘primitive’, meaning they are built into the language.
Booleans
Rust has a built-in boolean type, named
bool. It has two values,
true and
false:
# #![allow(unused_variables)] #fn main() { let x = true; let y: bool = false; #}
A common use of booleans is in if conditionals. You can find more documentation for bools in the standard library documentation.
char
The char type represents a single Unicode scalar value. You can create chars with a single tick: (')
# #![allow(unused_variables)] #fn main() { let x = 'x'; let two_hearts = '💕'; #}
Unlike some other languages, this means that Rust’s
char is not a single byte,
but four.
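You can check the four-byte claim directly with std::mem::size_of. A small sketch:

```rust
use std::mem::size_of;

fn main() {
    // A `char` is always four bytes: enough for any Unicode scalar value.
    println!("char is {} bytes", size_of::<char>()); // prints "char is 4 bytes"
    println!("u8 is {} byte", size_of::<u8>());      // prints "u8 is 1 byte"

    let two_hearts = '💕';
    println!("{} fits in a single char", two_hearts);
}
```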
You can find more documentation for
chars in the standard library
documentation.
Numeric types
Rust has a variety of numeric types in a few categories: signed and unsigned, fixed-size and variable-size, integer and floating-point. Unannotated integer literals default to i32 and floats to f64:
# #![allow(unused_variables)] #fn main() { let x = 42; // `x` has type `i32`. let y = 1.0; // `y` has type `f64`. #}
The numeric types are i8, i16, i32, i64, u8, u16, u32, u64, isize, usize, f32, and f64, each documented in the standard library.
Let’s go over them by category:
Signed and Unsigned
Fixed-size types have a specific number of bits in their representation. Valid
bit sizes are
8,
16,
32, and
64. So,
u32 is an unsigned, 32-bit integer,
and
i64 is a signed, 64-bit integer.
Variable-size types
Rust also provides types whose particular size depends on the underlying machine
architecture. Their range is sufficient to express the size of any collection, so
these types have ‘size’ as the category. They come in signed and unsigned varieties
which account for two types:
isize and
usize.
Floating-point types
Rust also has two floating point types:
f32 and
f64. These correspond to
IEEE-754 single and double precision numbers.
Arrays
Like many programming languages, Rust has list types to represent a sequence of things. The most basic is the array, a fixed-size list of elements of the same type. By default, arrays are immutable.
# #![allow(unused_variables)] #fn main() { let a = [1, 2, 3]; // a: [i32; 3] #}
Arrays have type [T; N]. You can initialize every element of an array to the same value with a shorthand:
# #![allow(unused_variables)] #fn main() { let a = [0; 20]; // a: [i32; 20] #}
You can get the number of elements in an array
a with
a.len():
# #![allow(unused_variables)] #fn main() { let a = [1, 2, 3]; println!("a has {} elements", a.len()); #}
You can access a particular element of an array with subscript notation:
# #![allow(unused_variables)] #fn main() { let names = ["Graydon", "Brian", "Niko"]; // names: [&str; 3] println!("The second name is: {}", names[1]); #}
Subscripts start at zero, so the third name is names[2]. An out-of-bounds subscript makes the program panic at runtime rather than permit invalid memory access.
Slices
A ‘slice’ is a reference to (or “view” into) another data structure. They are useful for allowing safe, efficient access to a portion of an array without copying. For example, you might want to reference only one line of a file read into memory. By nature, a slice is not created directly, but from an existing variable binding. Slices have a defined length, and can be mutable or immutable.
Internally, slices are represented as a pointer to the beginning of the data and a length.
Slicing syntax
You can use a combo of
& and
[] to create a slice from various things. The
& indicates that slices are similar to references, which we will cover in
detail later in this section. The
[]s, with a range, let you define the
length of the slice:
# #![allow(unused_variables)] #fn main() { let a = [0, 1, 2, 3, 4]; let complete = &a[..]; // A slice containing all of the elements in `a`. let middle = &a[1..4]; // A slice of `a`: only the elements `1`, `2`, and `3`. #}
str
Rust’s str type is the most primitive string type, called a ‘string slice’. We’ll elaborate further when we cover
Strings and references.
You can find more documentation for
str in the standard library
documentation.
Tuples
A tuple is an ordered list of fixed size. Like this:
# #![allow(unused_variables)] #fn main() { let x = (1, "hello"); #}
The parentheses and commas form this two-length tuple. Here’s the same code, but with the type annotated:
# #![allow(unused_variables)] #fn main() { let x: (i32, &str) = (1, "hello"); #}
As you can see, the type of a tuple looks like the tuple itself, but with each position holding a type name rather than a value. Careful readers will also note that tuples are heterogeneous: this one holds an i32 and a &str. For now, read &str as a string slice, and we’ll learn more soon.
You can assign one tuple into another, if they have the same contained types and arity. Tuples have the same arity when they have the same length.
# #![allow(unused_variables)] #fn main() { let mut x = (1, 2); // x: (i32, i32) let y = (2, 3); // y: (i32, i32) x = y; #}
You can access the fields in a tuple through a destructuring let. Here’s an example:
# #![allow(unused_variables)] #fn main() { let (x, y, z) = (1, 2, 3); println!("x is {}", x); #}
Remember before when I said the left-hand side of a let statement was more powerful than assigning a binding? Here we are: we can put a pattern on the left-hand side of the let, and if it matches up to the right-hand side, we can assign multiple bindings at once. You can disambiguate a single-element tuple from a value in parentheses with a comma:
# #![allow(unused_variables)] #fn main() { (0,); // A single-element tuple. (0); // A zero in parentheses. #}
Tuple Indexing
You can also access fields of a tuple with indexing syntax:
# #![allow(unused_variables)] #fn main() { let tuple = (1, 2, 3); let x = tuple.0; println!("x is {}", x); #}
Functions also have a type! They look like this:
# #![allow(unused_variables)] #fn main() { fn foo(x: i32) -> i32 { x } let x: fn(i32) -> i32 = foo; #}
In this case,
x is a ‘function pointer’ to a function that takes an
i32 and
returns an
i32.
Comments
Rust has two kinds of comments you should care about: line comments and doc comments. Line comments are anything after a // and extend to the end of the line. Doc comments use /// instead, and support Markdown notation inside:
# #![allow(unused_variables)] #fn main() { // Line comments are anything after '//' and extend to the end of the line. let x = 5; // This is also a line comment. #}
Here’s a doc comment in action:
# #![allow(unused_variables)] #fn main() { /// Adds one to the number given. /// /// # Examples /// /// ``` /// let five = 5; /// /// assert_eq!(6, add_one(5)); /// # fn add_one(x: i32) -> i32 { /// # x + 1 /// # } /// ``` fn add_one(x: i32) -> i32 { x + 1 } #}
There is another style of doc comment,
//!, to comment containing items (e.g.
crates, modules or functions), instead of the items following it. Commonly used
inside crates root (lib.rs) or modules root (mod.rs):
//! # The Rust Standard Library //! //! The Rust Standard Library provides the essential runtime //! functionality for building portable Rust software.
When writing doc comments, providing some examples of usage is very, very
helpful. You’ll notice we’ve used a new macro here:
assert_eq!. This compares
two values, and
panic!s if they’re not equal to each other. It’s very helpful
in documentation. There’s another macro,
assert!, which
panic!s if the
value passed to it is
false.
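Both macros are easy to try on their own; a minimal sketch:

```rust
fn add_one(x: i32) -> i32 {
    x + 1
}

fn main() {
    // assert_eq! panics if the two values differ; assert! panics on false.
    assert_eq!(6, add_one(5));
    assert!(add_one(0) > 0);
    println!("all assertions passed");
}
```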
You can use the
rustdoc tool to generate HTML documentation
from these doc comments, and also to run the code examples as tests!
if
Rust’s take on if is not terribly complex. Notably, if is an expression, which means you can use it on the right-hand side of a let: let y = if x == 5 { 10 } else { 15 };
Loops
Rust currently provides three approaches to performing some kind of iterative activity. They are:
loop,
while and
for. Each approach has its own set of uses.
loop
The infinite
loop is the simplest form of loop available in Rust. Using the keyword
loop, Rust provides a way to loop indefinitely until some terminating statement is reached. Rust's infinite
loops look like this:
loop { println!("Loop forever!"); }
while
Rust also has a
while loop. It looks like this:
# #![allow(unused_variables)] #fn main() { let mut x = 5; // mut x: i32 let mut done = false; // mut done: bool while !done { x += x - 3; println!("{}", x); if x % 5 == 0 { done = true; } } #}
for
The for loop is used to loop a particular number of times. Rust’s for loop looks like this:
# #![allow(unused_variables)] #fn main() { for x in 0..10 { println!("{}", x); // x: i32 } #}
In slightly more abstract terms,
for var in expression { code }
The expression is an item that can be converted into an iterator using
IntoIterator. The iterator gives back a series of elements, one element per iteration of the loop, and the loop body runs once with each element bound to var. In our example, 0..10 is an expression that takes a start and an end position, and gives an iterator over those values. The upper bound is exclusive, though, so our loop will print 0 through 9, not 10.
Enumerate
When you need to keep track of how many times you have already looped, you can
use the
.enumerate() function.
On ranges:
# #![allow(unused_variables)] #fn main() { for (index, value) in (5..10).enumerate() { println!("index = {} and value = {}", index, value); } #}
Outputs:
index = 0 and value = 5 index = 1 and value = 6 index = 2 and value = 7 index = 3 and value = 8 index = 4 and value = 9
Don't forget to add the parentheses around the range.
On iterators:
# #![allow(unused_variables)] #fn main() { let lines = "hello\nworld".lines(); for (linenumber, line) in lines.enumerate() { println!("{}: {}", linenumber, line); } #}
Outputs:
0: hello 1: world
Ending iteration early
Let’s take a look at that while loop we had earlier:

fn main() { let mut x = 5; let mut done = false; while !done { x += x - 3; println!("{}", x); if x % 5 == 0 { done = true; } } }

We had to keep a dedicated mut boolean variable binding, done, to know when we should exit out of the loop. Rust has a much better way to handle this, with break:
# #![allow(unused_variables)] #fn main() { let mut x = 5; loop { x += x - 3; println!("{}", x); if x % 5 == 0 { break; } } #}
We now loop forever with loop and use break to break out early. Issuing an explicit return statement will also serve to terminate the loop early.

continue is similar, but instead of ending the loop, it goes to the next iteration. This will only print the odd numbers:
# #![allow(unused_variables)] #fn main() { for x in 0..10 { if x % 2 == 0 { continue; } println!("{}", x); } #}
Loop labels
You may also encounter situations where you have nested loops and need to specify which one your break or continue statement is for. Like most other languages, Rust's break or continue apply to the innermost loop. In a situation where you would like to break or continue for one of the outer loops, you can use labels to specify which loop the break or continue statement applies to.
In the example below, we continue to the next iteration of the outer loop when x is even, while we continue to the next iteration of the inner loop when y is even. So it will execute the println! when both x and y are odd.
# #![allow(unused_variables)] #fn main() { 'outer: for x in 0..10 { 'inner: for y in 0..10 { if x % 2 == 0 { continue 'outer; } // Continues the loop over `x`. if y % 2 == 0 { continue 'inner; } // Continues the loop over `y`. println!("x: {}, y: {}", x, y); } } #}
Vectors
A ‘vector’ is a dynamic or ‘growable’ array, implemented as the standard library type Vec<T>. The T means that we can have vectors of any type. Vectors always allocate their data on the heap. You can create them with the vec! macro:

# #![allow(unused_variables)] #fn main() { let v = vec![1, 2, 3, 4, 5]; // v: Vec<i32> #}

(Notice that unlike the println! macro we’ve used in the past, we use square brackets [] with vec!. Rust allows you to use either in either situation, this is just convention.)

There’s an alternate form of vec! for repeating an initial value:
# #![allow(unused_variables)] #fn main() { let v = vec![0; 10]; // A vector of ten zeroes. #}
Vectors store their contents as contiguous arrays of T on the heap. This means that they must be able to know the size of T at compile time (that is, how many bytes are needed to store a T?). The size of some things can't be known at compile time. For these you'll have to store a pointer to that thing: thankfully, the Box type works perfectly for this.
Accessing elements
To get the value at a particular index in the vector, we use []s:
# #![allow(unused_variables)] #fn main() { let v = vec![1, 2, 3, 4, 5]; println!("The third element of v is {}", v[2]); #}
The indices count from 0, so the third element is v[2]. It’s also important to note that you must index with the usize type:
let v = vec![1, 2, 3, 4, 5]; let i: usize = 0; let j: i32 = 0; // Works: v[i]; // Doesn’t: v[j];
Indexing with a non-usize type gives an error that looks like this:
error: the trait bound `collections::vec::Vec<_> : core::ops::Index<i32>` is not satisfied [E0277] v[j]; ^~~~ note: the type `collections::vec::Vec<_>` cannot be indexed by `i32` error: aborting due to previous error
There’s a lot of punctuation in that message, but the core of it makes sense: you cannot index with an i32.
Out-of-bounds Access
If you try to access an index that doesn’t exist:
let v = vec![1, 2, 3]; println!("Item 7 is {}", v[7]);
then the current thread will panic with a message like this:
thread 'main' panicked at 'index out of bounds: the len is 3 but the index is 7'
If you want to handle out-of-bounds errors without panicking, you can use methods like get or get_mut that return None when given an invalid index:
# #![allow(unused_variables)] #fn main() { let v = vec![1, 2, 3]; match v.get(7) { Some(x) => println!("Item 7 is {}", x), None => println!("Sorry, this vector is too short.") } #}
Iterating
Once you have a vector, you can iterate through its elements with for. There are three versions:
# #![allow(unused_variables)] #fn main() { let mut v = vec![1, 2, 3, 4, 5]; for i in &v { println!("A reference to {}", i); } for i in &mut v { println!("A mutable reference to {}", i); } for i in v { println!("Take ownership of the vector and its element {}", i); } #}
Note: You cannot use the vector again once you have iterated by taking ownership of the vector. You can iterate the vector multiple times by taking a reference to the vector whilst iterating. For example, the following code does not compile.
let v = vec![1, 2, 3, 4, 5]; for i in v { println!("Take ownership of the vector and its element {}", i); } for i in v { println!("Take ownership of the vector and its element {}", i); }
Whereas the following works perfectly,
# #![allow(unused_variables)] #fn main() { let v = vec![1, 2, 3, 4, 5]; for i in &v { println!("This is a reference to {}", i); } for i in &v { println!("This is a reference to {}", i); } #}
Vectors have many more useful methods, which you can read about in their API documentation.
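A few of those methods in action — a small sketch using only standard Vec methods:

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    v.push(4);                    // Append to the end.
    assert_eq!(v.len(), 4);
    assert_eq!(v.pop(), Some(4)); // Remove and return the last element.
    assert!(v.contains(&3));
    v.insert(0, 0);               // Insert at an index, shifting later elements.
    assert_eq!(v, vec![0, 1, 2, 3]);
    println!("{:?}", v);
}
```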
Ownership
This is the first of three sections presenting Rust’s ownership system. This is one of Rust’s most distinct and compelling features, with which Rust developers should become quite acquainted. Ownership is how Rust achieves its largest goal, memory safety. There are a few distinct concepts, each with its own chapter:
- ownership, which you’re reading now
- borrowing, and their associated feature ‘references’
- lifetimes, an advanced concept of borrowing

These three chapters are related, and in order. You’ll need all three to fully understand the ownership system.
More than ownership
Of course, if we had to hand ownership back with every function we wrote, things would get tedious. To let a caller keep using a value, a function has to return it, along with any actual result:

fn foo(v1: Vec<i32>, v2: Vec<i32>) -> (Vec<i32>, Vec<i32>, i32) { // Do stuff with `v1` and `v2`. // Hand back ownership, and the result of our function. (v1, v2, 42) } let v1 = vec![1, 2, 3]; let v2 = vec![1, 2, 3]; let (v1, v2, answer) = foo(v1, v2);
Ugh! The return type, return line, and calling the function gets way more complicated.
Luckily, Rust offers a feature which helps us solve this problem. It’s called borrowing and is the topic of the next section!
References and Borrowing
This is the second of three sections presenting Rust’s ownership system. Ownership is how Rust achieves its largest goal, memory safety. There are a few distinct concepts, each with its own chapter:
- ownership, the key concept
- borrowing, which you’re reading now
- lifetimes, an advanced concept of borrowing
Borrowing
At the end of the ownership section, we had a nasty function that looked like this:

fn foo(v1: Vec<i32>, v2: Vec<i32>) -> (Vec<i32>, Vec<i32>, i32) { // Do stuff with `v1` and `v2`. // Hand back ownership, and the result of our function. (v1, v2, 42) } let v1 = vec![1, 2, 3]; let v2 = vec![1, 2, 3]; let (v1, v2, answer) = foo(v1, v2);
This is not idiomatic Rust, however, as it doesn’t take advantage of borrowing. Here’s the first step:
# #![allow(unused_variables)] #fn main() { fn foo(v1: &Vec<i32>, v2: &Vec<i32>) -> i32 { // Do stuff with `v1` and `v2`. // Return the answer. 42 } let v1 = vec![1, 2, 3]; let v2 = vec![1, 2, 3]; let answer = foo(&v1, &v2); // We can use `v1` and `v2` here! #}
A more concrete example:
fn main() { // Don't worry if you don't understand how `fold` works, the point here is that an immutable reference is borrowed. fn sum_vec(v: &Vec<i32>) -> i32 { v.iter().fold(0, |a, &b| a + b) } // Borrow two vectors and sum them. // This kind of borrowing does not allow mutation through the borrowed reference. fn foo(v1: &Vec<i32>, v2: &Vec<i32>) -> i32 { // Do stuff with `v1` and `v2`. let s1 = sum_vec(v1); let s2 = sum_vec(v2); // Return the answer. s1 + s2 } let v1 = vec![1, 2, 3]; let v2 = vec![4, 5, 6]; let answer = foo(&v1, &v2); println!("{}", answer); }
Instead of taking Vec<i32>s as our arguments, we take a reference: &Vec<i32>. And instead of passing v1 and v2, we pass &v1 and &v2. We call the &T type a ‘reference’, and rather than owning the resource, it borrows ownership. A binding that borrows something does not deallocate the resource when it goes out of scope. This means that after the call to foo(), we can use our original bindings again. References are immutable, like bindings. This means that inside of foo(), the vectors can’t be changed at all:
fn foo(v: &Vec<i32>) { v.push(5); } let v = vec![]; foo(&v);
will give us this error:
error: cannot borrow immutable borrowed content `*v` as mutable v.push(5); ^
Pushing a value mutates the vector, and so we aren’t allowed to do it.
&mut references
There’s a second kind of reference: &mut T. A ‘mutable reference’ allows you to mutate the resource you’re borrowing. For example:
# #![allow(unused_variables)] #fn main() { let mut x = 5; { let y = &mut x; *y += 1; } println!("{}", x); #}
This will print 6. We make y a mutable reference to x, then add one to the thing y points at. You’ll notice that x had to be marked mut as well. If it wasn’t, we couldn’t take a mutable borrow to an immutable value.
You'll also notice we added an asterisk (*) in front of y, making it *y; this is because y is a &mut reference. You'll need to use asterisks to access the contents of a reference as well.
Otherwise, &mut references are like references. There is a large difference between the two, and how they interact, though. You can tell something is fishy in the above example, because we need that extra scope, with the { and }. If we remove them, we get an error:
error: cannot borrow `x` as immutable because it is also borrowed as mutable println!("{}", x); ^ note: previous borrow of `x` occurs here; the mutable borrow prevents subsequent moves, borrows, or modification of `x` until the borrow ends let y = &mut x; ^ note: previous borrow ends here fn main() { } ^
As it turns out, there are rules.
The Rules
Here are the rules for borrowing in Rust. First, any borrow must last for a scope no greater than that of the owner. Second, you may have one or the other of these two kinds of borrows, but not both at the same time:
- one or more references (&T) to a resource,
- exactly one mutable reference (&mut T).
You may notice that this is very similar to, though not exactly the same as, the definition of a data race:
There is a ‘data race’ when two or more pointers access the same memory location at the same time, where at least one of them is writing, and the operations are not synchronized.
With references, you may have as many as you’d like, since none of them are writing. However, as we can only have one &mut at a time, it is impossible to have a data race. This is how Rust prevents data races at compile time: we’ll get errors if we break the rules.
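The ‘many readers’ half of the rule, as a minimal sketch:

```rust
fn main() {
    let x = 10;
    // Any number of shared references may coexist: none of them can write.
    let a = &x;
    let b = &x;
    let c = &x;
    assert_eq!(*a + *b + *c, 30);
    println!("{} {} {}", a, b, c);
}
```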
With this in mind, let’s consider our example again.
Thinking in scopes
Here’s the code:
fn main() { let mut x = 5; let y = &mut x; *y += 1; println!("{}", x); }
This code gives us this error:
error: cannot borrow `x` as immutable because it is also borrowed as mutable println!("{}", x); ^
This is because we’ve violated the rules: we have a &mut T pointing to x, and so we aren’t allowed to create any &Ts. It's one or the other. The note hints at how to think about this problem:
note: previous borrow ends here fn main() { } ^
In other words, the mutable borrow is held through the rest of our example. What we want is for the mutable borrow by y to end so that the resource can be returned to the owner, x. x can then provide an immutable borrow to println!.
In Rust, borrowing is tied to the scope that the borrow is valid for. And our scopes look like this:
fn main() { let mut x = 5; let y = &mut x; // -+ &mut borrow of `x` starts here. // | *y += 1; // | // | println!("{}", x); // -+ - Try to borrow `x` here. } // -+ &mut borrow of `x` ends here.
The scopes conflict: we can’t make an &x while y is in scope.
So when we add the curly braces:
# #![allow(unused_variables)] #fn main() { let mut x = 5; { let y = &mut x; // -+ &mut borrow starts here. *y += 1; // | } // -+ ... and ends here. println!("{}", x); // <- Try to borrow `x` here. #}
There’s no problem. Our mutable borrow goes out of scope before we create an immutable one. So scope is the key to seeing how long a borrow lasts for.
Issues borrowing prevents
Why have these restrictive rules? Well, as we noted, these rules prevent data races. What kinds of issues do data races cause? Here are a few.
Iterator invalidation
One example is ‘iterator invalidation’, which happens when you try to mutate a collection that you’re iterating over. Rust’s borrow checker prevents this from happening:
# #![allow(unused_variables)] #fn main() { let mut v = vec![1, 2, 3]; for i in &v { println!("{}", i); } #}
This prints out one through three. As we iterate through the vector, we’re only given references to the elements. And v is itself borrowed as immutable, which means we can’t change it while we’re iterating:
let mut v = vec![1, 2, 3]; for i in &v { println!("{}", i); v.push(34); }
Here’s the error:

error: cannot borrow `v` as mutable because it is also borrowed as immutable v.push(34); ^ note: previous borrow of `v` occurs here; the immutable borrow prevents subsequent moves or mutable borrows of `v` until the borrow ends for i in &v { ^ note: previous borrow ends here for i in &v { println!("{}", i); v.push(34); } ^
We can’t modify v because it’s borrowed by the loop.
Use after free
References must not live longer than the resource they refer to. Rust will check the scopes of your references to ensure that this is true.
If Rust didn’t check this property, we could accidentally use a reference which was invalid. For example:
let y: &i32; { let x = 5; y = &x; } println!("{}", y);
We get this error:
error: `x` does not live long enough y = &x; ^ note: reference must be valid for the block suffix following statement 0 at 2:16... let y: &i32; { let x = 5; y = &x; } note: ...but borrowed value is only valid for the block suffix following statement 0 at 4:18 let x = 5; y = &x; }
In other words, y is only valid for the scope where x exists. As soon as x goes away, it becomes invalid to refer to it. As such, the error says that the borrow ‘doesn’t live long enough’ because it’s not valid for the right amount of time.
The same problem occurs when the reference is declared before the variable it refers to. This is because resources within the same scope are freed in the opposite order they were declared:
let y: &i32; let x = 5; y = &x; println!("{}", y);
We get this error:
error: `x` does not live long enough y = &x; ^ note: reference must be valid for the block suffix following statement 0 at 2:16... let y: &i32; let x = 5; y = &x; println!("{}", y); } note: ...but borrowed value is only valid for the block suffix following statement 1 at 3:14 let x = 5; y = &x; println!("{}", y); }
In the above example, y is declared before x, meaning that y lives longer than x, which is not allowed.
Mutability
Mutability, the ability to change something, works a bit differently in Rust than in other languages. The first aspect of mutability is its non-default status:
let x = 5; x = 6; // Error!
We can introduce mutability with the mut keyword:
# #![allow(unused_variables)] #fn main() { let mut x = 5; x = 6; // No problem! #}
This is a mutable variable binding. When a binding is mutable, it means you’re allowed to change what the binding points to. So in the above example, it’s not so much that the value at x is changing, but that the binding changed from one i32 to another.
You can also create a reference to it, using &x, but if you want to use the reference to change it, you will need a mutable reference:
# #![allow(unused_variables)] #fn main() { let mut x = 5; let y = &mut x; #}
y is an immutable binding to a mutable reference, which means that you can’t bind y to something else (y = &mut z), but y can be used to bind x to something else (*y = 5). A subtle distinction.
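That distinction in a small sketch (the commented-out line is the one the compiler rejects):

```rust
fn main() {
    let mut x = 5;
    {
        let y = &mut x; // `y` is an immutable binding to a mutable reference.
        *y += 1;        // Allowed: mutate `x` through `y`.
        // y = &mut x;  // Error: cannot assign twice to immutable variable `y`.
    }
    assert_eq!(x, 6);
    println!("{}", x);
}
```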
Of course, if you need both:
# #![allow(unused_variables)] #fn main() { let mut x = 5; let mut y = &mut x; #}
Now y can be bound to another value, and the value it’s referencing can be changed.
It’s important to note that mut is part of a pattern, so you can do things like this:
# #![allow(unused_variables)] #fn main() { let (mut x, y) = (5, 6); fn foo(mut x: i32) { # } #}
Note that here, the x is mutable, but not the y.
Interior vs. Exterior Mutability
However, when we say something is ‘immutable’ in Rust, that doesn’t mean that it’s not able to be changed: we are referring to its ‘exterior mutability’, which in this case is immutable. Consider, for example, Arc<T>:
# #![allow(unused_variables)] #fn main() { use std::sync::Arc; let x = Arc::new(5); let y = x.clone(); #}
When we call clone(), the Arc<T> needs to update the reference count. Yet we’ve not used any muts here: x is an immutable binding, and we didn’t take &mut 5 or anything. So what gives?
To understand this, we have to go back to the core of Rust’s guiding philosophy, memory safety, and the mechanism by which Rust guarantees it, the ownership system, and more specifically, borrowing:
You may have one or the other of these two kinds of borrows, but not both at the same time:
- one or more references (&T) to a resource,
- exactly one mutable reference (&mut T).
So, that’s the real definition of ‘immutability’: is this safe to have two pointers to? In Arc<T>’s case, yes: the mutation is entirely contained inside the structure itself. It’s not user facing. For this reason, it hands out &T with clone(). If it handed out &mut Ts, though, that would be a problem.
Other types, like the ones in the std::cell module, have the opposite: interior mutability. For example:
# #![allow(unused_variables)] #fn main() { use std::cell::RefCell; let x = RefCell::new(42); let y = x.borrow_mut(); #}
RefCell hands out &mut references to what’s inside of it with the borrow_mut() method. Isn’t that dangerous? What if we do:
use std::cell::RefCell; let x = RefCell::new(42); let y = x.borrow_mut(); let z = x.borrow_mut(); # (y, z);
This will in fact panic, at runtime. This is what RefCell does: it enforces Rust’s borrowing rules at runtime, and panic!s if they’re violated. This allows us to get around another aspect of Rust’s mutability rules. Let’s talk about it first.
Field-level mutability
Mutability is a property of either a borrow (&mut) or a binding (let mut). This means that, for example, you cannot have a struct with some fields mutable and some immutable:
struct Point { x: i32, mut y: i32, // Nope. }
The mutability of a struct is in its binding:
struct Point { x: i32, y: i32, } let mut a = Point { x: 5, y: 6 }; a.x = 10; let b = Point { x: 5, y: 6 }; b.x = 10; // Error: cannot assign to immutable field `b.x`.
However, by using Cell<T>, you can emulate field-level mutability:
# #![allow(unused_variables)] #fn main() { use std::cell::Cell; struct Point { x: i32, y: Cell<i32>, } let point = Point { x: 5, y: Cell::new(6) }; point.y.set(7); println!("y: {:?}", point.y); #}
This will print y: Cell { value: 7 }. We’ve successfully updated y.
Structs
structs are a way of creating more complex data types. For example, if we were doing calculations involving coordinates in 2D space, we would need both an x and a y value:
# #![allow(unused_variables)] #fn main() { let origin_x = 0; let origin_y = 0; #}
A struct lets us combine these two into a single, unified datatype with x and y as field labels:
struct Point { x: i32, y: i32, } fn main() { let origin = Point { x: 0, y: 0 }; // origin: Point println!("The origin is at ({}, {})", origin.x, origin.y); }
There’s a lot going on here, so let’s break it down. We declare a struct with the struct keyword, and then with a name. By convention, structs begin with a capital letter and are camel cased: PointInSpace, not Point_In_Space. We can create an instance of our struct via let, as usual, but we use a key: value style syntax to set each field. The order doesn’t need to be the same as in the original declaration. Finally, because fields have names, we can access them through dot notation: origin.x.
The values in structs are immutable by default, like other bindings in Rust. Use mut to make them mutable:
struct Point { x: i32, y: i32, } fn main() { let mut point = Point { x: 0, y: 0 }; point.x = 5; println!("The point is at ({}, {})", point.x, point.y); }
This will print The point is at (5, 0).
Rust does not support field mutability at the language level, so you cannot write something like this:
struct Point { mut x: i32, // This causes an error. y: i32, }
Mutability is a property of the binding, not of the structure itself. If you’re used to field-level mutability, this may seem strange at first, but it significantly simplifies things. It even lets you make things mutable on a temporary basis:
struct Point { x: i32, y: i32, } fn main() { let mut point = Point { x: 0, y: 0 }; point.x = 5; let point = point; // `point` is now immutable. point.y = 6; // This causes an error. }
Your structure can still contain &mut references, which will let you do some kinds of mutation:
struct Point { x: i32, y: i32, } struct PointRef<'a> { x: &'a mut i32, y: &'a mut i32, } fn main() { let mut point = Point { x: 0, y: 0 }; { let r = PointRef { x: &mut point.x, y: &mut point.y }; *r.x = 5; *r.y = 6; } assert_eq!(5, point.x); assert_eq!(6, point.y); }
Initialization of a data structure (struct, enum, union) can be simplified when fields of the data structure are initialized with variables of the same names as the fields.
#[derive(Debug)] struct Person<'a> { name: &'a str, age: u8 } fn main() { // Create struct with field init shorthand let name = "Peter"; let age = 27; let peter = Person { name, age }; // Debug-print struct println!("{:?}", peter); }
Update syntax
A struct can include .. to indicate that you want to use a copy of some other struct for some of the values. For example:
# #![allow(unused_variables)] #fn main() { struct Point3d { x: i32, y: i32, z: i32, } let mut point = Point3d { x: 0, y: 0, z: 0 }; point = Point3d { y: 1, .. point }; #}
This gives point a new y, but keeps the old x and z values. It doesn’t have to be the same struct either; you can use this syntax when making new ones, and it will copy the values you don’t specify:
# #![allow(unused_variables)] #fn main() { # struct Point3d { # x: i32, # y: i32, # z: i32, # } let origin = Point3d { x: 0, y: 0, z: 0 }; let point = Point3d { z: 1, x: 2, .. origin }; #}
Tuple structs
Rust has another data type that’s like a hybrid between a tuple and a struct, called a ‘tuple struct’. Tuple structs have a name, but their fields don't. They are declared with the struct keyword, and then with a name followed by a tuple:
# #![allow(unused_variables)] #fn main() { struct Color(i32, i32, i32); struct Point(i32, i32, i32); let black = Color(0, 0, 0); let origin = Point(0, 0, 0); #}
Here, black and origin are not the same type, even though they contain the same values.
The members of a tuple struct may be accessed by dot notation or destructuring let, just like regular tuples:
# #![allow(unused_variables)] #fn main() { # struct Color(i32, i32, i32); # struct Point(i32, i32, i32); # let black = Color(0, 0, 0); # let origin = Point(0, 0, 0); let black_r = black.0; let Point(_, origin_y, origin_z) = origin; #}
Patterns like Point(_, origin_y, origin_z) are also used in match expressions.
One case when a tuple struct is very useful is when it has only one element. We call this the ‘newtype’ pattern, because it allows you to create a new type that is distinct from its contained value and also expresses its own semantic meaning:
# #![allow(unused_variables)] #fn main() { struct Inches(i32); let length = Inches(10); let Inches(integer_length) = length; println!("length is {} inches", integer_length); #}
As above, you can extract the inner integer type through a destructuring let. In this case, the let Inches(integer_length) assigns 10 to integer_length.
We could have used dot notation to do the same thing:
# #![allow(unused_variables)] #fn main() { # struct Inches(i32); # let length = Inches(10); let integer_length = length.0; #}
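A sketch of why the distinct type matters; the Centimeters type here is invented for illustration:

```rust
struct Inches(i32);
#[allow(dead_code)]
struct Centimeters(i32);

// Accepting `Inches` specifically means a `Centimeters` value
// can't be passed by mistake: the two are distinct types.
fn double_length(len: Inches) -> Inches {
    Inches(len.0 * 2)
}

fn main() {
    let doubled = double_length(Inches(10));
    // double_length(Centimeters(10)); // Error: mismatched types.
    assert_eq!(doubled.0, 20);
    println!("length is {} inches", doubled.0);
}
```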
It's always possible to use a struct instead of a tuple struct, and it can be clearer. We could write Color and Point like this instead:
# #![allow(unused_variables)] #fn main() { struct Color { red: i32, blue: i32, green: i32, } struct Point { x: i32, y: i32, z: i32, } #}
Good names are important, and while values in a tuple struct can be referenced with dot notation as well, a struct gives us actual names, rather than positions.
Unit-like structs
You can define a struct with no members at all:
struct Electron {} // Use empty braces... struct Proton; // ...or just a semicolon. // Use the same notation when creating an instance. let x = Electron {}; let y = Proton; let z = Electron; // Error
Such a struct is called ‘unit-like’ because it resembles the empty tuple, (), sometimes called ‘unit’. Like a tuple struct, it defines a new type.
This is rarely useful on its own (although sometimes it can serve as a marker type), but in combination with other features, it can become useful. For instance, a library may ask you to create a structure that implements a certain trait to handle events. If you don’t have any data you need to store in the structure, you can create a unit-like struct.
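For example — a sketch in which the EventHandler trait and its method are invented for illustration:

```rust
// A hypothetical trait a library might ask you to implement.
trait EventHandler {
    fn handle(&self, event: &str) -> String;
}

// A unit-like struct: no data, but it can still implement the trait.
struct Logger;

impl EventHandler for Logger {
    fn handle(&self, event: &str) -> String {
        format!("logged: {}", event)
    }
}

fn main() {
    let handler = Logger;
    println!("{}", handler.handle("startup"));
}
```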
Enums
An enum in Rust is a type that represents data that is one of several possible variants. Each variant in the enum can optionally have data associated with it:
# #![allow(unused_variables)] #fn main() { enum Message { Quit, ChangeColor(i32, i32, i32), Move { x: i32, y: i32 }, Write(String), } #}

The syntax for defining variants resembles the syntaxes used to define structs: you can have variants with no data (like Quit), variants with named data, and variants with unnamed data (like ChangeColor). Unlike struct definitions, however, an enum is a single type, and a value of the enum can match any of the variants. Because of this, a plain let can’t destructure an enum value — the pattern might not match the variant actually stored:

fn process_color_change(msg: Message) { let Message::ChangeColor(r, g, b) = msg; // This causes an error: a `let` pattern must be irrefutable. }
Constructors as functions
An enum constructor can also be used like a function. For example:
# #![allow(unused_variables)] #fn main() { # enum Message { # Write(String), # } let m = Message::Write("Hello, world".to_string()); #}
is the same as
# #![allow(unused_variables)] #fn main() { # enum Message { # Write(String), # } fn foo(x: String) -> Message { Message::Write(x) } #}

This is not immediately useful to us, but when we get to closures, we’ll talk about passing functions as arguments to other functions. For example, with iterators, we can convert a vector of Strings into a vector of Message::Writes:
# #![allow(unused_variables)] #fn main() { # enum Message { # Write(String), # } let v = vec!["Hello".to_string(), "World".to_string()]; let v1: Vec<Message> = v.into_iter().map(Message::Write).collect(); #}
Match
Often, a simple if/else isn’t enough, because you have more than two possible options. Also, conditions can get quite complex. Rust has a keyword, match, that allows you to replace complicated if/else groupings with something more powerful. Check it out:
# #![allow(unused_variables)] #fn main() { let x = 5; match x { 1 => println!("one"), 2 => println!("two"), 3 => println!("three"), 4 => println!("four"), 5 => println!("five"), _ => println!("something else"), } #}

match takes an expression and then branches based on its value. Each ‘arm’ of the branch is of the form val => expression. When the value matches, that arm’s expression will be executed. It’s called match because of the term ‘pattern matching’, which match is an implementation of.

One of the many advantages of match is that it enforces ‘exhaustiveness checking’: if we removed the last arm, the compiler would give us an error telling us we forgot some value. The compiler infers from x that it can have any 32bit integer value; for example -2,147,483,648 to 2,147,483,647. The _ acts as a 'catch-all', and will catch all possible values that aren't specified in an arm of match. As you can see in the previous example, we provide match arms for integers 1-5; if x is 6 or any other value, then it is caught by _.
match is also an expression, which means we can use it on the right-hand side of a let binding or directly where an expression is used:
# #![allow(unused_variables)] #fn main() { let x = 5; let number = match x { 1 => "one", 2 => "two", 3 => "three", 4 => "four", 5 => "five", _ => "something else", }; #}
Sometimes it’s a nice way of converting something from one type to another; in this example the integers are converted to String.
Matching on enums
Another important use of the match keyword is to process the possible variants of an enum:
# #![allow(unused_variables)] #fn main() { enum Message { Quit, ChangeColor(i32, i32, i32), Move { x: i32, y: i32 }, Write(String), } fn quit() { /* ... */ } fn change_color(r: i32, g: i32, b: i32) { /* ... */ } fn move_cursor(x: i32, y: i32) { /* ... */ } fn process_message(msg: Message) { match msg { Message::Quit => quit(), Message::ChangeColor(r, g, b) => change_color(r, g, b), Message::Move { x, y: new_name_for_y } => move_cursor(x, new_name_for_y), Message::Write(s) => println!("{}", s), }; } #}

Again, the Rust compiler checks exhaustiveness, so it demands that you have a match arm for every variant of the enum: if you leave one off, it will give you a compile-time error unless you use _ as a catch-all.
Patterns
Patterns are quite common in Rust. We use them in variable bindings, match expressions, and other places, too. Let’s go on a whirlwind tour of all of the things patterns can do!
A quick refresher: you can match against literals directly, and _ acts as an ‘any’ case:
# #![allow(unused_variables)] #fn main() { let x = 1; match x { 1 => println!("one"), 2 => println!("two"), 3 => println!("three"), _ => println!("anything"), } #}
This prints one. It's possible to create a binding for the value in the any case:
# #![allow(unused_variables)] #fn main() { let x = 1; match x { y => println!("x: {} y: {}", x, y), } #}
This prints:
x: 1 y: 1
Note it is an error to have both a catch-all _ and a catch-all binding in the same match block:
# #![allow(unused_variables)] #fn main() { let x = 1; match x { y => println!("x: {} y: {}", x, y), _ => println!("anything"), // this causes an error as it is unreachable } #}
There’s one pitfall with patterns: like anything that introduces a new binding, they introduce shadowing. For example:
# #![allow(unused_variables)] #fn main() { let x = 1; let c = 'c'; match c { x => println!("x: {} c: {}", x, c), } println!("x: {}", x) #}
This prints:
x: c c: c x: 1
In other words, x => matches the pattern and introduces a new binding named x. This new binding is in scope for the match arm and takes on the value of c. Notice that the value of x outside the scope of the match has no bearing on the value of x within it. Because we already have a binding named x, this new x shadows it.
Multiple patterns
You can match multiple patterns with |:
# #![allow(unused_variables)] #fn main() { let x = 1; match x { 1 | 2 => println!("one or two"), 3 => println!("three"), _ => println!("anything"), } #}
This prints one or two.
Destructuring
If you have a compound data type, like a struct, you can destructure it inside of a pattern:
# #![allow(unused_variables)] #fn main() { struct Point { x: i32, y: i32, } let origin = Point { x: 0, y: 0 }; match origin { Point { x, y } => println!("({},{})", x, y), } #}
We can use : to give a value a different name.
# #![allow(unused_variables)] #fn main() { struct Point { x: i32, y: i32, } let origin = Point { x: 0, y: 0 }; match origin { Point { x: x1, y: y1 } => println!("({},{})", x1, y1), } #}
If we only care about some of the values, we don’t have to give them all names:
# #![allow(unused_variables)] #fn main() { struct Point { x: i32, y: i32, } let point = Point { x: 2, y: 3 }; match point { Point { x, .. } => println!("x is {}", x), } #}
This prints x is 2. You can do this kind of match on any member, not only the first:
# #![allow(unused_variables)] #fn main() { struct Point { x: i32, y: i32, } let point = Point { x: 2, y: 3 }; match point { Point { y, .. } => println!("y is {}", y), } #}
This prints y is 3.
This ‘destructuring’ behavior works on any compound data type, like tuples or enums.
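The same destructuring applied to a tuple, as a quick sketch:

```rust
fn main() {
    let pair = (0, -2);
    // The arms destructure the tuple and bind whichever parts they need.
    match pair {
        (0, y) => println!("first is zero, second is {}", y),
        (x, 0) => println!("first is {}, second is zero", x),
        _ => println!("no zeroes"),
    }
}
```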
Ignoring bindings
You can use _ in a pattern to disregard the type and value. For example, here’s a match against a Result<T, E>:
# #![allow(unused_variables)] #fn main() { # let some_value: Result<i32, &'static str> = Err("There was an error"); match some_value { Ok(value) => println!("got a value: {}", value), Err(_) => println!("an error occurred"), } #}
In the first arm, we bind the value inside the Ok variant to value. But in the Err arm, we use _ to disregard the specific error, and print a general error message.
_ is valid in any pattern that creates a binding. This can be useful to ignore parts of a larger structure:
# #![allow(unused_variables)] #fn main() { fn coordinate() -> (i32, i32, i32) { // Generate and return some sort of triple tuple. # (1, 2, 3) } let (x, _, z) = coordinate(); #}
Here, we bind the first and last element of the tuple to x and z, but ignore the middle element.
It’s worth noting that using _ never binds the value in the first place, which means that the value does not move:
# #![allow(unused_variables)] #fn main() { let tuple: (u32, String) = (5, String::from("five")); // Here, tuple is moved, because the String moved: let (x, _s) = tuple; // The next line would give "error: use of partially moved value: `tuple`". // println!("Tuple is: {:?}", tuple); // However, let tuple = (5, String::from("five")); // Here, tuple is _not_ moved, as the String was never moved, and u32 is Copy: let (x, _) = tuple; // That means this works: println!("Tuple is: {:?}", tuple); #}
This also means that any temporary variables will be dropped at the end of the statement:
# #![allow(unused_variables)] #fn main() { // Here, the String created will be dropped immediately, as it’s not bound: let _ = String::from(" hello ").trim(); #}
You can also use .. in a pattern to disregard multiple values:
# #![allow(unused_variables)] #fn main() { enum OptionalTuple { Value(i32, i32, i32), Missing, } let x = OptionalTuple::Value(5, -2, 3); match x { OptionalTuple::Value(..) => println!("Got a tuple!"), OptionalTuple::Missing => println!("No such luck."), } #}
This prints Got a tuple!.
ref and ref mut
If you want to get a reference, use the ref keyword:
# #![allow(unused_variables)] #fn main() { let x = 5; match x { ref r => println!("Got a reference to {}", r), } #}
This prints Got a reference to 5. Here, the r inside the match has the type &i32. In other words, the ref keyword creates a reference, for use in the pattern. If you need a mutable reference, ref mut will work in the same way:
# #![allow(unused_variables)] #fn main() { let mut x = 5; match x { ref mut mr => println!("Got a mutable reference to {}", mr), } #}
Ranges
You can match a range of values with ...:
# #![allow(unused_variables)] #fn main() { let x = 1; match x { 1 ... 5 => println!("one through five"), _ => println!("anything"), } #}
This prints one through five. Ranges are mostly used with integers and chars:
# #![allow(unused_variables)] #fn main() { let x = '💅'; match x { 'a' ... 'j' => println!("early letter"), 'k' ... 'z' => println!("late letter"), _ => println!("something else"), } #}
This prints something else.
Bindings
You can bind values to names with @:
# #![allow(unused_variables)] #fn main() { let x = 1; match x { e @ 1 ... 5 => println!("got a range element {}", e), _ => println!("anything"), } #}
This prints got a range element 1. This is useful when you want to do a complicated match of part of a data structure:
# #![allow(unused_variables)] #fn main() { #[derive(Debug)] struct Person { name: Option<String>, } let name = "Steve".to_string(); let x: Option<Person> = Some(Person { name: Some(name) }); match x { Some(Person { name: ref a @ Some(_), .. }) => println!("{:?}", a), _ => {} } #}
This prints Some("Steve"): we’ve bound the inner name to a.
If you use @ with |, you need to make sure the name is bound in each part of the pattern:
# #![allow(unused_variables)] #fn main() { let x = 5; match x { e @ 1 ... 5 | e @ 8 ... 10 => println!("got a range element {}", e), _ => println!("anything"), } #}
Guards
You can introduce ‘match guards’ with if:
# #![allow(unused_variables)] #fn main() { enum OptionalInt { Value(i32), Missing, } let x = OptionalInt::Value(5); match x { OptionalInt::Value(i) if i > 5 => println!("Got an int bigger than five!"), OptionalInt::Value(..) => println!("Got an int!"), OptionalInt::Missing => println!("No such luck."), } #}
This prints Got an int!.
If you’re using if with multiple patterns, the if applies to both sides:
# #![allow(unused_variables)] #fn main() { let x = 4; let y = false; match x { 4 | 5 if y => println!("yes"), _ => println!("no"), } #}
This prints no, because the if applies to the whole of 4 | 5, and not to only the 5. In other words, the precedence of if behaves like this:
(4 | 5) if y => ...
not this:
4 | (5 if y) => ...
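To apply a guard to only one of the alternatives, one workaround (a sketch, not from the original text) is to give that alternative its own arm:

```rust
// Sketch: splitting `4 | 5 if y` into separate arms so the guard
// covers only the `5` case. `classify` is a hypothetical helper.
fn classify(x: i32, y: bool) -> &'static str {
    match x {
        4 => "yes: four always matches",
        5 if y => "yes: five, but only when y is true",
        _ => "no",
    }
}

fn main() {
    // With `4 | 5 if y`, (4, false) would give "no";
    // with separate arms, 4 matches unconditionally.
    assert_eq!(classify(4, false), "yes: four always matches");
    assert_eq!(classify(5, false), "no");
    assert_eq!(classify(5, true), "yes: five, but only when y is true");
    println!("ok");
}
```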
Mix and Match
Whew! That’s a lot of different ways to match things, and they can all be mixed and matched, depending on what you’re doing:
match x { Foo { x: Some(ref name), y: None } => ... }
Patterns are very powerful. Make good use of them.
Method Syntax
Functions are great, but if you want to call a bunch of them on some data, it can be awkward. Consider this code:
baz(bar(foo));
We would read this left-to-right, and so we see ‘baz bar foo’. But this isn’t the order that the functions would get called in, that’s inside-out: ‘foo bar baz’. Wouldn’t it be nice if we could do this instead?
foo.bar().baz();
Luckily, as you may have guessed with the leading question, you can! Rust provides
the ability to use this ‘method call syntax’ via the
impl keyword.
Here’s how method call syntax works:
struct Circle { x: f64, y: f64, radius: f64, } impl Circle { fn area(&self) -> f64 { std::f64::consts::PI * (self.radius * self.radius) } } fn main() { let c = Circle { x: 0.0, y: 0.0, radius: 2.0 }; println!("{}", c.area()); }
This prints the circle’s area. We’ve made an impl block that defines a method, area. Methods take a special first parameter, of which there are three variants:
self,
&self, and
&mut self. You can think of this first parameter as
being the
foo in
foo.bar(). The three variants correspond to the three
kinds of things
foo could be:
self if it’s a value on the stack,
&self if it’s a reference, and
&mut self if it’s a mutable reference.
Because we took the
&self parameter to
area, we can use it like any
other parameter. Because we know it’s a
Circle, we can access the
radius
like we would with any other
struct.
We should default to using
&self, as you should prefer borrowing over taking
ownership, as well as taking immutable references over mutable ones. Here’s an
example of all three variants:
# #![allow(unused_variables)] #fn main() { struct Circle { x: f64, y: f64, radius: f64, } impl Circle { fn reference(&self) { println!("taking self by reference!"); } fn mutable_reference(&mut self) { println!("taking self by mutable reference!"); } fn takes_ownership(self) { println!("taking ownership of self!"); } } #}
You can use as many
impl blocks as you’d like. The previous example could
have also been written like this:
# #![allow(unused_variables)] #fn main() { struct Circle { x: f64, y: f64, radius: f64, } impl Circle { fn reference(&self) { println!("taking self by reference!"); } } impl Circle { fn mutable_reference(&mut self) { println!("taking self by mutable reference!"); } } impl Circle { fn takes_ownership(self) { println!("taking ownership of self!"); } } #}
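A sketch of how the three variants behave at the call site; note that calling the by-value method consumes the receiver (the return values here are illustrative additions, and Circle is trimmed to one field):

```rust
struct Circle {
    radius: f64,
}

impl Circle {
    fn reference(&self) -> f64 {
        self.radius // immutable borrow: the receiver is still usable afterwards
    }
    fn mutable_reference(&mut self) {
        self.radius += 1.0 // mutable borrow: may modify the receiver
    }
    fn takes_ownership(self) -> f64 {
        self.radius // moves the value: the caller can no longer use it
    }
}

fn main() {
    let mut c = Circle { radius: 2.0 };
    assert_eq!(c.reference(), 2.0);
    c.mutable_reference();
    assert_eq!(c.reference(), 3.0);
    let r = c.takes_ownership(); // `c` is moved here
    assert_eq!(r, 3.0);
    // Using `c` after this point would be a compile error.
    println!("ok");
}
```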
Chaining method calls
So, now we know how to call a method, such as
foo.bar(). But what about our
original example,
foo.bar().baz()? This is called ‘method chaining’. Let’s
look at an example:
struct Circle { x: f64, y: f64, radius: f64, } impl Circle { fn area(&self) -> f64 { std::f64::consts::PI * (self.radius * self.radius) } fn grow(&self, increment: f64) -> Circle { Circle { x: self.x, y: self.y, radius: self.radius + increment } } } fn main() { let c = Circle { x: 0.0, y: 0.0, radius: 2.0 }; println!("{}", c.area()); let d = c.grow(2.0).area(); println!("{}", d); }
Check the return type:
# #![allow(unused_variables)] #fn main() { # struct Circle; # impl Circle { fn grow(&self, increment: f64) -> Circle { # Circle } } #}
We say we’re returning a
Circle. With this method, we can grow a new
Circle to any arbitrary size.
Associated functions
You can also define associated functions that do not take a self parameter. Here’s a pattern that’s very common in Rust code:
# #![allow(unused_variables)] #fn main() { struct Circle { x: f64, y: f64, radius: f64, } impl Circle { fn new(x: f64, y: f64, radius: f64) -> Circle { Circle { x: x, y: y, radius: radius, } } } #}
This ‘associated function’ builds a new
Circle for us. Note that associated
functions are called with the
Struct::function() syntax, rather than the
ref.method() syntax. Some other languages call associated functions ‘static
methods’.
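A sketch contrasting the two call syntaxes (the new constructor is the conventional name, not required by the language):

```rust
struct Circle {
    x: f64,
    y: f64,
    radius: f64,
}

impl Circle {
    // Associated function: no `self` parameter.
    fn new(x: f64, y: f64, radius: f64) -> Circle {
        Circle { x: x, y: y, radius: radius }
    }
    // Method: takes `&self`.
    fn area(&self) -> f64 {
        std::f64::consts::PI * (self.radius * self.radius)
    }
}

fn main() {
    let c = Circle::new(0.0, 0.0, 2.0); // Struct::function() syntax
    println!("area: {}", c.area());     // ref.method() syntax
}
```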
Builder Pattern
Let’s say that we want our users to be able to create
Circles, but we will
allow them to only set the properties they care about. Otherwise, the
x
and
y attributes will be
0.0, and the
radius will be
1.0. Rust doesn’t
have method overloading, named arguments, or variable arguments. We employ
the builder pattern instead. It looks like this:
struct Circle { x: f64, y: f64, radius: f64, } impl Circle { fn area(&self) -> f64 { std::f64::consts::PI * (self.radius * self.radius) } } struct CircleBuilder { x: f64, y: f64, radius: f64, } impl CircleBuilder { fn new() -> CircleBuilder { CircleBuilder { x: 0.0, y: 0.0, radius: 1.0, } } fn x(&mut self, coordinate: f64) -> &mut CircleBuilder { self.x = coordinate; self } fn y(&mut self, coordinate: f64) -> &mut CircleBuilder { self.y = coordinate; self } fn radius(&mut self, radius: f64) -> &mut CircleBuilder { self.radius = radius; self } fn finalize(&self) -> Circle { Circle { x: self.x, y: self.y, radius: self.radius } } } fn main() { let c = CircleBuilder::new() .x(1.0) .y(2.0) .radius(2.0) .finalize(); println!("area: {}", c.area()); println!("x: {}", c.x); println!("y: {}", c.y); }
What we’ve done here is make another
struct,
CircleBuilder. We’ve defined our
builder methods on it. We’ve also defined our
area() method on
Circle. We
also made one more method on
CircleBuilder:
finalize(). This method creates
our final
Circle from the builder. Now, we’ve used the type system to enforce
our concerns: we can use the methods on
CircleBuilder to constrain making
Circles in any way we choose.
Strings
A ‘string’ is a sequence of Unicode scalar values encoded as a stream of UTF-8 bytes. Unlike in some systems languages, Rust strings are not NUL-terminated and can contain NUL bytes.
Rust has two main types of strings:
&str and
String. Let’s talk about
&str first. These are called ‘string slices’. A string slice has a fixed
size, and cannot be mutated. It is a reference to a sequence of UTF-8 bytes.
# #![allow(unused_variables)] #fn main() { let greeting = "Hello there."; // greeting: &'static str #}
"Hello there." is a string literal and its type is
&'static str. A string
literal is a string slice that is statically allocated, meaning that it’s saved
inside our compiled program, and exists for the entire duration it runs. The
greeting binding is a reference to this statically allocated string. Any
function expecting a string slice will also accept a string literal.
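For instance, here is a sketch of a &str parameter accepting both a literal and a borrowed String (count_bytes is a hypothetical helper):

```rust
// Hypothetical helper: any function taking &str accepts a literal directly.
fn count_bytes(s: &str) -> usize {
    s.len() // length in bytes, not characters
}

fn main() {
    assert_eq!(count_bytes("Hello there."), 12); // string literal
    let owned = String::from("Hello there.");
    assert_eq!(count_bytes(&owned), 12);         // &String coerces to &str
    println!("ok");
}
```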
String literals can span multiple lines. There are two forms. The first will include the newline and the leading spaces:
# #![allow(unused_variables)] #fn main() { let s = "foo bar"; assert_eq!("foo\n bar", s); #}
The second, with a
\, trims the spaces and the newline:
# #![allow(unused_variables)] #fn main() { let s = "foo\ bar"; assert_eq!("foobar", s); #}
Note that you normally cannot access a
str directly, but only through a
&str
reference. This is because
str is an unsized type which requires additional
runtime information to be usable. For more information see the chapter on
unsized types.
Rust has more than only
&strs though. A
String is a heap-allocated string.
This string is growable, and is also guaranteed to be UTF-8.
Strings are
commonly created by converting from a string slice using the
to_string
method.
# #![allow(unused_variables)] #fn main() { let mut s = "Hello".to_string(); // mut s: String println!("{}", s); s.push_str(", world."); println!("{}", s); #}
A String will coerce into a &str with an &. But this coercion does not happen for functions that accept one of &str’s traits instead of &str itself. For example, TcpStream::connect has a parameter of type ToSocketAddrs; a &str is okay, but a String must be explicitly converted using
&*:
# #![allow(unused_variables)] #fn main() { use std::net::TcpStream; TcpStream::connect("192.168.0.1:3000"); // Parameter is of type &str. let addr_string = "192.168.0.1:3000".to_string(); TcpStream::connect(&*addr_string); // Convert `addr_string` to &str. #}
Viewing a
String as a
&str is cheap, but converting the
&str to a
String involves allocating memory. No reason to do that unless you have to!
Indexing
Because strings are valid UTF-8, they do not support indexing:
let s = "hello"; println!("The first letter of s is {}", s[0]); // ERROR!!!
Usually, access to a vector with [] is very fast. But, because each character in a UTF-8 encoded string can be multiple bytes, you have to walk over the string to find the nth letter. This is a significantly more expensive operation, and we don’t want to be misleading. Instead, we can look at the string as individual bytes, or as codepoints:
# #![allow(unused_variables)] #fn main() { # let hachiko = "忠犬ハチ公"; let dog = hachiko.chars().nth(1); // Kinda like `hachiko[1]`. #}
This emphasizes that we have to walk from the beginning of the list of
chars.
Slicing
You can get a slice of a string with the slicing syntax:
# #![allow(unused_variables)] #fn main() { let dog = "hachiko"; let hachi = &dog[0..5]; #}
But note that these are byte offsets, not character offsets. So this will fail at runtime:
# #![allow(unused_variables)] #fn main() { let dog = "忠犬ハチ公"; let hachi = &dog[0..2]; #}
with this error:
thread 'main' panicked at 'byte index 2 is not a char boundary; it is inside '忠' (bytes 0..3) of `忠犬ハチ公`'
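Slicing at an actual character boundary succeeds. As a sketch, each of these characters is three bytes in UTF-8, so valid boundaries fall at multiples of three:

```rust
fn main() {
    let dog = "忠犬ハチ公";
    // '忠' occupies bytes 0..3, so this boundary is valid:
    assert_eq!(&dog[0..3], "忠");
    // char_indices() reports the byte offset where each character starts.
    let offsets: Vec<usize> = dog.char_indices().map(|(i, _)| i).collect();
    assert_eq!(offsets, vec![0, 3, 6, 9, 12]);
    println!("ok");
}
```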
Concatenation
If you have a
String, you can concatenate a
&str to the end of it:
# #![allow(unused_variables)] #fn main() { let hello = "Hello ".to_string(); let world = "world!"; let hello_world = hello + world; #}
But if you have two
Strings, you need an
&:
# #![allow(unused_variables)] #fn main() { let hello = "Hello ".to_string(); let world = "world!".to_string(); let hello_world = hello + &world; #}
This is because
&String can automatically coerce to a
&str. This is a
feature called ‘
Deref coercions’.
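A sketch of the coercion in action (takes_str is a hypothetical helper):

```rust
// Hypothetical helper taking a string slice.
fn takes_str(s: &str) -> usize {
    s.len()
}

fn main() {
    let world = "world!".to_string();
    // `&String` coerces to `&str` via Deref, so both calls work:
    assert_eq!(takes_str("world!"), 6);
    assert_eq!(takes_str(&world), 6);
    // The same coercion is what makes `String + &String` compile:
    let hello_world = "Hello ".to_string() + &world;
    assert_eq!(hello_world, "Hello world!");
    println!("ok");
}
```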
Generics
Sometimes, when writing a function or data type, we may want it to work for multiple types of arguments. Rust supports this with generics, called ‘parametric polymorphism’ in type theory. Rust’s standard library provides a type, Option<T>, that’s generic:
# #![allow(unused_variables)] #fn main() { enum Option<T> { Some(T), None, } #}
The <T> part indicates that this is a generic data type. Wherever we see a T inside the declaration, we substitute the concrete type used in the generic. Here’s an example of using Option<T>, with an extra type annotation:
# #![allow(unused_variables)] #fn main() { let x: Option<i32> = Some(5); #}
In this particular Option, T has the value of i32. On the right-hand side of the binding, we make a Some(T), where T is 5. Since that’s an i32, the two sides match, and Rust is happy. If they didn’t match, we’d get an error. That doesn’t mean we can’t make Option<T>s that hold an f64; they just have
to match up:
# #![allow(unused_variables)] #fn main() { let x: Option<i32> = Some(5); let y: Option<f64> = Some(5.0f64); #}
This is just fine: one definition, multiple uses. Generics don’t have to only be generic over one type. Consider another type from Rust’s standard library that’s similar,
Result<T, E>:
# #![allow(unused_variables)] #fn main() { enum Result<T, E> { Ok(T), Err(E), } #}
This type is generic over two types:
T and
E. By the way, the capital letters
can be any letter you’d like. We could define
Result<T, E> as:
# #![allow(unused_variables)] #fn main() { enum Result<A, Z> { Ok(A), Err(Z), } #}
if we wanted to. Convention says that the first generic parameter should be T, for ‘type’, and that we use E for ‘error’. Rust doesn’t care, however.
Generic functions
We can write functions that take generic types with a similar syntax:
# #![allow(unused_variables)] #fn main() { fn takes_anything<T>(x: T) { // Do something with `x`. } #}
The syntax has two parts: the
<T> says “this function is generic over one
type,
T”, and the
x: T says “x has the type
T.”
Multiple arguments can have the same generic type:
# #![allow(unused_variables)] #fn main() { fn takes_two_of_the_same_things<T>(x: T, y: T) { // ... } #}
We could write a version that takes multiple types:
# #![allow(unused_variables)] #fn main() { fn takes_two_things<T, U>(x: T, y: U) { // ... } #}
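A usage sketch; the compiler infers a different concrete type for each parameter (returning the pair is an illustrative addition):

```rust
fn takes_two_things<T, U>(x: T, y: U) -> (T, U) {
    // Generic code can move values around, but without trait bounds
    // it cannot assume any methods exist on T or U.
    (x, y)
}

fn main() {
    // Inferred as T = i32, U = &str.
    let pair = takes_two_things(1, "one");
    assert_eq!(pair.0, 1);
    assert_eq!(pair.1, "one");
    println!("ok");
}
```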
Generic structs
You can store a generic type in a
struct as well:
# #![allow(unused_variables)] #fn main() { struct Point<T> { x: T, y: T, } let int_origin = Point { x: 0, y: 0 }; let float_origin = Point { x: 0.0, y: 0.0 }; #}
Similar to functions, the <T> is where we declare the generic parameters, and we then use x: T in the type declaration, too. If you want to add an implementation for the generic struct, you
declare the type parameter after the
impl:
# #![allow(unused_variables)] #fn main() { # struct Point<T> { # x: T, # y: T, # } impl<T> Point<T> { fn swap(&mut self) { std::mem::swap(&mut self.x, &mut self.y); } } #}
Resolving ambiguities
Most of the time when generics are involved, the compiler can infer the generic parameters automatically:
# #![allow(unused_variables)] #fn main() { // v must be a Vec<T> but we don't know what T is yet let mut v = Vec::new(); // v just got a bool value, so T must be bool! v.push(true); // Debug-print v println!("{:?}", v); #}
Sometimes though, the compiler needs a little help. For example, had we omitted the last line, we would get a compile error:
let v = Vec::new(); // ^^^^^^^^ cannot infer type for `T` // // note: type annotations or generic parameter binding required println!("{:?}", v);
We can solve this using either a type annotation:
# #![allow(unused_variables)] #fn main() { let v: Vec<bool> = Vec::new(); println!("{:?}", v); #}
or by binding the generic parameter
T via the so-called
‘turbofish’
::<> syntax:
# #![allow(unused_variables)] #fn main() { let v = Vec::<bool>::new(); println!("{:?}", v); #}
The second approach is useful in situations where we don’t want to bind the result to a variable. It can also be used to bind generic parameters in functions or methods. See Iterators § Consumers for an example.
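One common place the turbofish appears is with Iterator::collect, whose result type is generic; a sketch:

```rust
fn main() {
    // `collect` can build many container types, so we name one explicitly.
    let squares = (1..4).map(|x| x * x).collect::<Vec<i32>>();
    assert_eq!(squares, vec![1, 4, 9]);

    // Equivalent, using a type annotation instead of the turbofish:
    let squares2: Vec<i32> = (1..4).map(|x| x * x).collect();
    assert_eq!(squares, squares2);
    println!("ok");
}
```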
Traits
A trait is a language feature that tells the Rust compiler about functionality a type must provide.
Recall the
impl keyword, used to call a function with method
syntax:
# #![allow(unused_variables)] #fn main() { struct Circle { x: f64, y: f64, radius: f64, } impl Circle { fn area(&self) -> f64 { std::f64::consts::PI * (self.radius * self.radius) } } #}
Traits are similar, except that we first define a trait with a method
signature, then implement the trait for a type. In this example, we implement the trait
HasArea for
Circle:
# #![allow(unused_variables)] #fn main() { struct Circle { x: f64, y: f64, radius: f64, } trait HasArea { fn area(&self) -> f64; } impl HasArea for Circle { fn area(&self) -> f64 { std::f64::consts::PI * (self.radius * self.radius) } } #}
As you can see, the trait block looks very similar to the impl block, but we don’t define a body, only a type signature. When we
impl a trait,
impl a trait,
we use
impl Trait for Item, rather than only
impl Item.
Self may be used in a type annotation to refer to an instance of the type
implementing this trait passed as a parameter.
Self,
&Self or
&mut Self
may be used depending on the level of ownership required.
# #![allow(unused_variables)] #fn main() { struct Circle { x: f64, y: f64, radius: f64, } trait HasArea { fn area(&self) -> f64; fn is_larger(&self, &Self) -> bool; } impl HasArea for Circle { fn area(&self) -> f64 { std::f64::consts::PI * (self.radius * self.radius) } fn is_larger(&self, other: &Self) -> bool { self.area() > other.area() } } #}
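A usage sketch of is_larger, trimming Circle to the one field it needs and naming the second parameter:

```rust
struct Circle {
    radius: f64,
}

trait HasArea {
    fn area(&self) -> f64;
    fn is_larger(&self, other: &Self) -> bool;
}

impl HasArea for Circle {
    fn area(&self) -> f64 {
        std::f64::consts::PI * (self.radius * self.radius)
    }
    // `Self` here means Circle, so both shapes are the same concrete type.
    fn is_larger(&self, other: &Self) -> bool {
        self.area() > other.area()
    }
}

fn main() {
    let big = Circle { radius: 2.0 };
    let small = Circle { radius: 1.0 };
    assert!(big.is_larger(&small));
    assert!(!small.is_larger(&big));
    println!("ok");
}
```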
Trait bounds on generic functions
Traits are useful because they allow a type to make certain promises about its behavior. Generic functions can exploit this to constrain, or bound, the types they accept. Consider this function, which does not compile:
fn print_area<T>(shape: T) { println!("This shape has an area of {}", shape.area()); }
Rust complains:
error: no method named `area` found for type `T` in the current scope
Because
T can be any type, we can’t be sure that it implements the
area
method. But we can add a trait bound to our generic
T, ensuring
that it does:
# #![allow(unused_variables)] #fn main() { # trait HasArea { # fn area(&self) -> f64; # } fn print_area<T: HasArea>(shape: T) { println!("This shape has an area of {}", shape.area()); } #}
The syntax <T: HasArea> means “any type that implements the HasArea trait”. Because traits define function type signatures, we can be sure that any type which implements HasArea will have an .area() method. If we try to call print_area with a type that doesn’t implement the trait, such as print_area(5);
We get a compile-time error:
error: the trait bound `_ : HasArea` is not satisfied [E0277]
Trait bounds on generic structs
Your generic structs can also benefit from trait bounds. All you need to
do is append the bound when you declare type parameters. Here is a new
type
Rectangle<T> and its operation
is_square():
struct Rectangle<T> { x: T, y: T, width: T, height: T, } impl<T: PartialEq> Rectangle<T> { fn is_square(&self) -> bool { self.width == self.height } } fn main() { let mut r = Rectangle { x: 0, y: 0, width: 47, height: 47, }; assert!(r.is_square()); r.height = 42; assert!(!r.is_square()); }
is_square() needs to check that the sides are equal, so the sides must be of
a type that implements the
core::cmp::PartialEq trait:
impl<T: PartialEq> Rectangle<T> { ... }
Now, a rectangle can be defined in terms of any type that can be compared for equality.
Here we defined a new struct
Rectangle that accepts numbers of any
precision—really, objects of pretty much any type—as long as they can be
compared for equality. Could we do the same for our
HasArea structs,
Square
and
Circle? Yes, but they need multiplication, and to work with that we need
to know more about operator traits.
Rules for implementing traits
So far, we’ve only added trait implementations to structs, but you can
implement a trait for any type such as
f32:
# #![allow(unused_variables)] #fn main() { trait ApproxEqual { fn approx_equal(&self, other: &Self) -> bool; } impl ApproxEqual for f32 { fn approx_equal(&self, other: &Self) -> bool { // Appropriate for `self` and `other` being close to 1.0. (self - other).abs() <= ::std::f32::EPSILON } } println!("{}", 1.0.approx_equal(&1.00000001)); #}
This may seem like the Wild West, but there are two restrictions around
implementing traits that prevent this from getting out of hand. The first is
that if the trait isn’t defined in your scope, it doesn’t apply. Here’s an
example: the standard library provides a
Write trait which adds
extra functionality to
Files, for doing file I/O. By default, a
File
won’t have its methods:
let mut f = std::fs::File::create("foo.txt").expect("Couldn’t create foo.txt"); let buf = b"whatever"; // buf: &[u8; 8], a byte string literal. let result = f.write(buf); # result.unwrap(); // Ignore the error.
Here’s the error:
error: type `std::fs::File` does not implement any method in scope named `write` let result = f.write(buf); ^~~~~~~~~~
We need to
use the
Write trait first:
# #![allow(unused_variables)] #fn main() { use std::io::Write; let mut f = std::fs::File::create("foo.txt").expect("Couldn’t create foo.txt"); let buf = b"whatever"; let result = f.write(buf); # result.unwrap(); // Ignore the error. #}
This will compile without error.
This means that even if someone does something bad like add methods to
i32,
it won’t affect you, unless you
use that trait.
There’s one more restriction on implementing traits: either the trait
or the type you’re implementing it for must be defined by you. Or more
precisely, one of them must be defined in the same crate as the
impl
you're writing. For more on Rust's module and package system, see the
chapter on crates and modules.
So, we could implement the
HasArea type for
i32, because we defined
HasArea in our code. But if we tried to implement
ToString, a trait
provided by Rust, for
i32, we could not, because neither the trait nor
the type are defined in our crate.
One last thing about traits: generic functions with a trait bound use ‘monomorphization’ (mono: one, morph: form), so they are statically dispatched. What’s that mean? Check out the chapter on trait objects for more details.
Multiple trait bounds
You’ve seen that you can bound a generic type parameter with a trait:
# #![allow(unused_variables)] #fn main() { fn foo<T: Clone>(x: T) { x.clone(); } #}
If you need more than one bound, you can use
+:
# #![allow(unused_variables)] #fn main() { use std::fmt::Debug; fn foo<T: Clone + Debug>(x: T) { x.clone(); println!("{:?}", x); } #}
T now needs to be both
Clone as well as
Debug.
Where clause
Writing functions with only a few generic types and a small number of trait bounds isn’t too bad, but as the number increases, the syntax gets increasingly awkward:
# #![allow(unused_variables)] #fn main() { use std::fmt::Debug; fn foo<T: Clone, K: Clone + Debug>(x: T, y: K) { x.clone(); y.clone(); println!("{:?}", y); } #}
The name of the function is on the far left, and the parameter list is on the far right. The bounds are getting in the way.
Rust has a solution, and it’s called a ‘
where clause’:
use std::fmt::Debug; fn foo<T: Clone, K: Clone + Debug>(x: T, y: K) { x.clone(); y.clone(); println!("{:?}", y); } fn bar<T, K>(x: T, y: K) where T: Clone, K: Clone + Debug { x.clone(); y.clone(); println!("{:?}", y); } fn main() { foo("Hello", "world"); bar("Hello", "world"); }
foo() uses the syntax we showed earlier, and
bar() uses a
where clause.
All you need to do is leave off the bounds when defining your type parameters,
and then add
where after the parameter list. For longer lists, whitespace can
be added:
# #![allow(unused_variables)] #fn main() { use std::fmt::Debug; fn bar<T, K>(x: T, y: K) where T: Clone, K: Clone + Debug { x.clone(); y.clone(); println!("{:?}", y); } #}
This flexibility can add clarity in complex situations.
where is also more powerful than the simpler syntax. For example:
# #![allow(unused_variables)] #fn main() { trait ConvertTo<Output> { fn convert(&self) -> Output; } impl ConvertTo<i64> for i32 { fn convert(&self) -> i64 { *self as i64 } } // Can be called with T == i32. fn convert_t_to_i64<T: ConvertTo<i64>>(x: T) -> i64 { x.convert() } // Can be called with T == i64. fn convert_i32_to_t<T>(x: i32) -> T // This is using ConvertTo as if it were "ConvertTo<i64>". where i32: ConvertTo<T> { x.convert() } #}
This shows off the additional feature of
where clauses: they allow bounds
on the left-hand side not only of type parameters
T, but also of types (
i32 in this case). In this example,
i32 must implement
ConvertTo<T>. Rather than defining what
i32 is (since that's obvious), the
where clause here constrains
T.
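Here is a runnable sketch of the ConvertTo example above, showing the bound on a concrete type being satisfied at the call site:

```rust
trait ConvertTo<Output> {
    fn convert(&self) -> Output;
}

impl ConvertTo<i64> for i32 {
    fn convert(&self) -> i64 {
        *self as i64
    }
}

// The where clause constrains the concrete type i32, not T itself:
// this function compiles for any T that i32 can be converted to.
fn convert_i32_to_t<T>(x: i32) -> T
    where i32: ConvertTo<T>
{
    x.convert()
}

fn main() {
    let big: i64 = convert_i32_to_t(42);
    assert_eq!(big, 42i64);
    println!("ok");
}
```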
Default methods
A default method can be added to a trait definition if it is already known how a typical implementor will define a method. For example,
is_invalid() is defined as the opposite of
is_valid():
# #![allow(unused_variables)] #fn main() { trait Foo { fn is_valid(&self) -> bool; fn is_invalid(&self) -> bool { !self.is_valid() } } #}
Implementors of the
Foo trait need to implement
is_valid() but not
is_invalid() due to the added default behavior. This default behavior can still be overridden as in:
# #![allow(unused_variables)] #fn main() { # trait Foo { # fn is_valid(&self) -> bool; # # fn is_invalid(&self) -> bool { !self.is_valid() } # } struct UseDefault; impl Foo for UseDefault { fn is_valid(&self) -> bool { println!("Called UseDefault.is_valid."); true } } struct OverrideDefault; impl Foo for OverrideDefault { fn is_valid(&self) -> bool { println!("Called OverrideDefault.is_valid."); true } fn is_invalid(&self) -> bool { println!("Called OverrideDefault.is_invalid!"); true // Overrides the expected value of `is_invalid()`. } } let default = UseDefault; assert!(!default.is_invalid()); // Prints "Called UseDefault.is_valid." let over = OverrideDefault; assert!(over.is_invalid()); // Prints "Called OverrideDefault.is_invalid!" #}
Inheritance
Sometimes, implementing a trait requires implementing another trait:
# #![allow(unused_variables)] #fn main() { trait Foo { fn foo(&self); } trait FooBar : Foo { fn foobar(&self); } #}
Implementors of
FooBar must also implement
Foo, like this:
# #![allow(unused_variables)] #fn main() { # trait Foo { # fn foo(&self); # } # trait FooBar : Foo { # fn foobar(&self); # } struct Baz; impl Foo for Baz { fn foo(&self) { println!("foo"); } } impl FooBar for Baz { fn foobar(&self) { println!("foobar"); } } #}
If we forget to implement
Foo, Rust will tell us:
error: the trait bound `main::Baz : main::Foo` is not satisfied [E0277]
Deriving
Implementing traits like
Debug and
Default repeatedly can become
quite tedious. For that reason, Rust provides an attribute that
allows you to let Rust automatically implement traits for you:
#[derive(Debug)] struct Foo; fn main() { println!("{:?}", Foo); }
However, deriving is limited to a certain set of traits: Clone, Copy, Debug, Default, Eq, Hash, Ord, PartialEq, and PartialOrd.
Drop
Now that we’ve discussed traits, let’s talk about a particular trait provided by the Rust standard library, Drop. The Drop trait provides a way to run some code when a value goes out of scope. This is most useful when a value needs to do some kind of cleanup when it is done being used.
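A minimal sketch of implementing the standard library’s Drop trait (the struct and values here are illustrative):

```rust
struct Firework {
    strength: i32,
}

impl Drop for Firework {
    // Called automatically when the value goes out of scope.
    fn drop(&mut self) {
        println!("BOOM times {}!!!", self.strength);
    }
}

fn main() {
    let _firecracker = Firework { strength: 1 };
    let _tnt = Firework { strength: 100 };
    // Values are dropped in reverse declaration order:
    // first "BOOM times 100!!!", then "BOOM times 1!!!".
}
```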
if let
if let permits pattern matching within the condition of an if statement.
This allows us to reduce the overhead of certain kinds of pattern matches
and express them in a more convenient way.
For example, let’s say we have some sort of
Option<T>. We want to call a function
on it if it’s
Some<T>, but do nothing if it’s
None. That looks like this:
# #![allow(unused_variables)] #fn main() { # let option = Some(5); # fn foo(x: i32) { } match option { Some(x) => { foo(x) }, None => {}, } #}
We don’t have to use
match here, for example, we could use
if:
# #![allow(unused_variables)] #fn main() { # let option = Some(5); # fn foo(x: i32) { } if option.is_some() { let x = option.unwrap(); foo(x); } #}
Neither of these options is particularly appealing. We can use
if let to
do the same thing in a nicer way:
# #![allow(unused_variables)] #fn main() { # let option = Some(5); # fn foo(x: i32) { } if let Some(x) = option { foo(x); } #}
If the pattern matches successfully, it binds any appropriate parts of the value to the identifiers in the pattern, then evaluates the expression. If the pattern doesn’t match, nothing happens. If you want something else to happen when the pattern doesn’t match, you can use else:
# #![allow(unused_variables)] #fn main() { # let option = Some(5); # fn foo(x: i32) { } # fn bar() { } if let Some(x) = option { foo(x); } else { bar(); } #}
while let
In a similar fashion,
while let can be used when you want to conditionally
loop as long as a value matches a certain pattern. It turns code like this:
# #![allow(unused_variables)] #fn main() { let mut v = vec![1, 3, 5, 7, 11]; loop { match v.pop() { Some(x) => println!("{}", x), None => break, } } #}
Into code like this:
# #![allow(unused_variables)] #fn main() { let mut v = vec![1, 3, 5, 7, 11]; while let Some(x) = v.pop() { println!("{}", x); } #}
Trait Objects
When code involves polymorphism, there needs to be a mechanism to determine which specific version is actually run. This is called ‘dispatch’. There are two major forms of dispatch: static dispatch and dynamic dispatch. While Rust favors static dispatch, it also supports dynamic dispatch through a mechanism called ‘trait objects’.
Background
For the rest of this chapter, we’ll need a trait and some implementations.
Let’s make a simple one,
Foo. It has one method that is expected to return a
String.
# #![allow(unused_variables)] #fn main() { trait Foo { fn method(&self) -> String; } #}
We’ll also implement this trait for
u8 and
String:
# #![allow(unused_variables)] #fn main() { # trait Foo { fn method(&self) -> String; } impl Foo for u8 { fn method(&self) -> String { format!("u8: {}", *self) } } impl Foo for String { fn method(&self) -> String { format!("string: {}", *self) } } #}
Static dispatch
We can use this trait to perform static dispatch with trait bounds:
# trait Foo { fn method(&self) -> String; } # impl Foo for u8 { fn method(&self) -> String { format!("u8: {}", *self) } } # impl Foo for String { fn method(&self) -> String { format!("string: {}", *self) } } fn do_something<T: Foo>(x: T) { x.method(); } fn main() { let x = 5u8; let y = "Hello".to_string(); do_something(x); do_something(y); }
Rust uses ‘monomorphization’ to perform static dispatch here: the compiler creates a special version of do_something for each concrete type it is called with, so each call site is a direct, static call. This allows inlining and fast code, at the cost of some code bloat from the extra copies.
Dynamic dispatch
Rust provides dynamic dispatch through a feature called ‘trait objects’. Trait objects, like &Foo or Box<Foo>, are normal values that store a value of any type that implements the given trait, where the precise type can only be known at runtime. A trait object can be obtained from a pointer to a concrete type that implements the trait by casting it (&x as &Foo) or coercing it (using &x as an argument to a function that takes &Foo). A function taking a trait object is not specialized for each type: only one copy is generated, at the cost of slower virtual function calls.
Why pointers?
Rust does not put things behind a pointer by default, so types can have different sizes, and the size of a value must be known at compile time when passing it as an argument or moving it around. Because different types implementing a trait can have different sizes, a trait object must live behind a pointer, such as &Foo or Box<Foo>, whose size is always known.
Representation
The methods of the trait can be called on a trait object via a record of function pointers traditionally called a ‘vtable’, created and managed by the compiler. Trait objects are both simple and complicated: their core representation is a pair of pointers, one to the data and one to the vtable:
# #![allow(unused_variables)] #fn main() { # mod foo { pub struct TraitObject { pub data: *mut (), pub vtable: *mut (), } # } #}
Suppose we’ve got some values that implement Foo. A Foo trait object is built by pairing a pointer to the value with a pointer to the vtable generated for that particular (type, trait) combination; calling a method loads the right function pointer out of the vtable and invokes it with the data pointer. (In such a desugared view we can ignore the type mismatches: they’re all pointers anyway.)
Object Safety
Not every trait can be used to make a trait object. For example, vectors implement Clone, but if we try to make a trait object like &Clone, we get an error. A trait is object-safe if both of these are true:
- the trait does not require that
Self: Sized
- all of its methods are object-safe
So what makes a method object-safe? Each method must require that
Self: Sized
or all of the following:
- must not have any type parameters
- must not use
Self
Whew! As we can see, almost all of these rules talk about
Self. A good intuition
is “except in special circumstances, if your trait’s method uses
Self, it is not
object-safe.”
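As a sketch, a trait whose methods mention no type parameters and no bare Self is object-safe and can be used as a trait object (written here with the newer dyn syntax so it compiles on current toolchains):

```rust
trait Describe {
    // Object-safe: takes &self, no type parameters, no bare `Self`.
    fn describe(&self) -> String;
}

impl Describe for u8 {
    fn describe(&self) -> String {
        format!("u8: {}", *self)
    }
}

impl Describe for String {
    fn describe(&self) -> String {
        format!("string: {}", *self)
    }
}

fn main() {
    // Heterogeneous collection via dynamic dispatch.
    let items: Vec<Box<dyn Describe>> =
        vec![Box::new(5u8), Box::new("five".to_string())];
    assert_eq!(items[0].describe(), "u8: 5");
    assert_eq!(items[1].describe(), "string: five");
    println!("ok");
}
```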
Closures
Sometimes it is useful to wrap up a function and free variables for better clarity and reuse. The free variables that can be used come from the enclosing scope and are ‘closed over’ when used in the function. From this, we get the name ‘closures’ and Rust provides a really great implementation of them, as we’ll see.
Syntax
Closures look like this:
# #![allow(unused_variables)] #fn main() { let plus_one = |x: i32| x + 1; assert_eq!(2, plus_one(1)); #}
We create a binding,
plus_one, and assign it to a closure. The closure’s
arguments go between the pipes (
|), and the body is an expression, in this
case,
x + 1. Remember that
{ } is an expression, so we can have multi-line
closures too:
# #![allow(unused_variables)] #fn main() { let plus_two = |x| { let mut result: i32 = x; result += 1; result += 1; result }; assert_eq!(4, plus_two(2)); #}
You’ll notice a few things about closures that are a bit different from regular
named functions defined with
fn. The first is that we did not need to
annotate the types of arguments the closure takes or the values it returns. We
can:
# #![allow(unused_variables)] #fn main() { let plus_one = |x: i32| -> i32 { x + 1 }; assert_eq!(2, plus_one(1)); #}
But we don’t have to. Why is this? Basically, it was chosen for ergonomic reasons. While specifying the full type for named functions is helpful with things like documentation and type inference, the full type signatures of closures are rarely documented since they’re anonymous, and they don’t cause the kinds of error-at-a-distance problems that inferring named function types can.
The second is that the syntax is similar, but a bit different. I’ve added spaces here for easier comparison:
# #![allow(unused_variables)] #fn main() { fn plus_one_v1 (x: i32) -> i32 { x + 1 } let plus_one_v2 = |x: i32| -> i32 { x + 1 }; let plus_one_v3 = |x: i32| x + 1 ; #}
Small differences, but they’re similar.
Closures and their environment
The environment for a closure can include bindings from its enclosing scope in addition to parameters and local bindings. It looks like this:
# #![allow(unused_variables)] #fn main() { let num = 5; let plus_num = |x: i32| x + num; assert_eq!(10, plus_num(5)); #}
This closure,
plus_num, refers to a
let binding in its scope:
num. More
specifically, it borrows the binding. If we do something that would conflict
with that binding, we get an error. Like this one:
let mut num = 5; let plus_num = |x: i32| x + num; let y = &mut num;
Which errors with:
error: cannot borrow `num` as mutable because it is also borrowed as immutable let y = &mut num; ^~~ note: previous borrow of `num` occurs here due to use in closure; the immutable borrow prevents subsequent moves or mutable borrows of `num` until the borrow ends let plus_num = |x| x + num; ^~~~~~~~~~~ note: previous borrow ends here fn main() { let mut num = 5; let plus_num = |x| x + num; let y = &mut num; } ^
A verbose yet helpful error message! As it says, we can’t take a mutable borrow
on
num because the closure is already borrowing it. If we let the closure go
out of scope, we can:
# #![allow(unused_variables)] #fn main() { let mut num = 5; { let plus_num = |x: i32| x + num; } // `plus_num` goes out of scope; borrow of `num` ends. let y = &mut num; #}
If your closure requires it, however, Rust will take ownership and move the environment instead. This doesn’t work:
let nums = vec![1, 2, 3]; let takes_nums = || nums; println!("{:?}", nums);
We get this error:
note: `nums` moved into closure environment here because it has type `[closure(()) -> collections::vec::Vec<i32>]`, which is non-copyable let takes_nums = || nums; ^~~~~~~
Vec<T> has ownership over its contents, and therefore, when we refer to it
in our closure, we have to take ownership of
nums. It’s the same as if we’d
passed
nums to a function that took ownership of it.
move closures
We can force our closure to take ownership of its environment with the
move
keyword:
# #![allow(unused_variables)] #fn main() { let num = 5; let owns_num = move |x: i32| x + num; #}
Now, even though the keyword is
move, the variables follow normal move semantics.
In this case,
5 implements
Copy, and so
owns_num takes ownership of a copy
of
num. So what’s the difference?
# #![allow(unused_variables)] #fn main() { let mut num = 5; { let mut add_num = |x: i32| num += x; add_num(5); } assert_eq!(10, num); #}
So in this case, our closure took a mutable reference to
num, and then when
we called
add_num, it mutated the underlying value, as we’d expect. We also
needed to declare
add_num as
mut too, because we’re mutating its
environment.
If we change to a
move closure, it’s different:
# #![allow(unused_variables)] #fn main() { let mut num = 5; { let mut add_num = move |x: i32| num += x; add_num(5); } assert_eq!(5, num); #}
We only get
5. Rather than taking a mutable borrow out on our
num, we took
ownership of a copy.
Another way to think about
move closures: they give a closure its own stack
frame. Without
move, a closure may be tied to the stack frame that created
it, while a
move closure is self-contained. This means that you cannot
generally return a non-
move closure from a function, for example.
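A sketch of returning a move closure from a function, boxed because a closure’s concrete type cannot be named (shown with the newer dyn syntax):

```rust
// Without `move`, the closure would borrow `num` from factory's stack
// frame, which disappears when factory returns.
fn factory() -> Box<dyn Fn(i32) -> i32> {
    let num = 5;
    Box::new(move |x| x + num)
}

fn main() {
    let f = factory();
    assert_eq!(f(1), 6);
    assert_eq!(f(10), 15);
    println!("ok");
}
```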
But before we talk about taking and returning closures, we should talk some more about the way that closures are implemented. As a systems language, Rust gives you tons of control over what your code does, and closures are no different.
Closure implementation
Rust’s implementation of closures is a bit different than other languages. They are effectively syntax sugar for traits. You’ll want to make sure to have read the traits section before this one, as well as the section on trait objects.
Got all that? Good.
The key to understanding how closures work under the hood is something a bit
strange: Using
() to call a function, like
foo(), is an overloadable
operator. From this, everything else clicks into place. In Rust, we use the
trait system to overload operators. Calling functions is no different. We have
three separate traits to overload with:
Fn
FnMut
FnOnce
There are a few differences between these traits, but a big one is
self:
Fn takes
&self,
FnMut takes
&mut self, and
FnOnce takes
self. This
covers all three kinds of
self via the usual method call syntax. But we’ve
split them up into three traits, rather than having a single one. This gives us
a large amount of control over what kind of closures we can take.
The
|| {} syntax for closures is sugar for these three traits. Rust will
generate a struct for the environment,
impl the appropriate trait, and then
use it.
Taking closures as arguments
Now that we know that closures are traits, we already know how to accept and return closures: the same as any other trait!
This also means that we can choose static vs dynamic dispatch as well. First, let’s write a function which takes something callable, calls it, and returns the result:
# #![allow(unused_variables)] #fn main() { fn call_with_one<F>(some_closure: F) -> i32 where F: Fn(i32) -> i32 { some_closure(1) } let answer = call_with_one(|x| x + 2); assert_eq!(3, answer); #}
We pass our closure,
|x| x + 2, to
call_with_one. It does what it
suggests: it calls the closure, giving it
1 as an argument.
Let’s examine the signature of
call_with_one in more depth:
# #![allow(unused_variables)] #fn main() { fn call_with_one<F>(some_closure: F) -> i32 # where F: Fn(i32) -> i32 { # some_closure(1) } #}
We take one parameter, and it has the type
F. We also return an
i32. This part
isn’t interesting. The next part is:
# #![allow(unused_variables)] #fn main() { # fn call_with_one<F>(some_closure: F) -> i32 where F: Fn(i32) -> i32 { # some_closure(1) } #}
Because
Fn is a trait, we can use it as a bound for our generic type. In
this case, our closure takes an
i32 as an argument and returns an
i32, and
so the generic bound we use is
Fn(i32) -> i32.
There’s one other key point here: because we’re bounding a generic with a trait, this will get monomorphized, and therefore, we’ll be doing static dispatch into the closure. That’s pretty neat. In many languages, closures are inherently heap allocated, and will always involve dynamic dispatch. In Rust, we can stack allocate our closure environment, and statically dispatch the call. This happens quite often with iterators and their adapters, which often take closures as arguments.
Of course, if we want dynamic dispatch, we can get that too. A trait object handles this case, as usual:
# #![allow(unused_variables)] #fn main() { fn call_with_one(some_closure: &Fn(i32) -> i32) -> i32 { some_closure(1) } let answer = call_with_one(&|x| x + 2); assert_eq!(3, answer); #}
Now we take a trait object, a
&Fn. And we have to make a reference
to our closure when we pass it to
call_with_one, so we use
&||.
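The examples above use `Fn`; here is a hedged sketch of the `FnMut` variant, which lets the closure mutate its environment (the function name is illustrative):

```rust
fn call_twice<F>(mut some_closure: F) -> i32
    where F: FnMut(i32) -> i32 {

    // The bound is FnMut, so the closure may change captured state
    // between these two calls.
    some_closure(1);
    some_closure(1)
}

fn main() {
    let mut total = 0;
    // Each call adds into `total`, mutating the captured environment.
    let answer = call_twice(|x| { total += x; total });
    assert_eq!(2, answer);
}
```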
A quick note about closures that use explicit lifetimes. Sometimes you might have a closure that takes a reference like so:
# #![allow(unused_variables)] #fn main() { fn call_with_ref<F>(some_closure:F) -> i32 where F: Fn(&i32) -> i32 { let value = 0; some_closure(&value) } #}
Normally you can specify the lifetime of the parameter to our closure. We could annotate it on the function declaration:
fn call_with_ref<'a, F>(some_closure:F) -> i32 where F: Fn(&'a i32) -> i32 {
However, this presents a problem in our case. When a function has an explicit
lifetime parameter, that lifetime must be at least as long as the entire
call to that function. The borrow checker will complain that
value doesn't
live long enough, because it is only in scope after its declaration inside the
function body.
What we need is a closure that can borrow its argument only for its own
invocation scope, not for the outer function's scope. In order to say that,
we can use Higher-Ranked Trait Bounds with the
for<...> syntax:
fn call_with_ref<F>(some_closure:F) -> i32 where F: for<'a> Fn(&'a i32) -> i32 {
This lets the Rust compiler find the minimum lifetime to invoke our closure and satisfy the borrow checker's rules. Our function then compiles and executes as we expect.
# #![allow(unused_variables)] #fn main() { fn call_with_ref<F>(some_closure:F) -> i32 where F: for<'a> Fn(&'a i32) -> i32 { let value = 0; some_closure(&value) } #}
Function pointers and closures
A function pointer is kind of like a closure that has no environment. As such, you can pass a function pointer to any function expecting a closure argument, and it will work:
# #![allow(unused_variables)] #fn main() { fn call_with_one(some_closure: &Fn(i32) -> i32) -> i32 { some_closure(1) } fn add_one(i: i32) -> i32 { i + 1 } let f = add_one; let answer = call_with_one(&f); assert_eq!(2, answer); #}
In this example, we don’t strictly need the intermediate variable
f,
the name of the function works just fine too:
let answer = call_with_one(&add_one);
Returning closures
It’s very common for functional-style code to return closures in various situations. If you try to return a closure, you may run into an error. At first, it may seem strange, but we’ll figure it out. Here’s how you’d probably try to return a closure from a function:
fn factory() -> (Fn(i32) -> i32) { let num = 5; |x| x + num } let f = factory(); let answer = f(1); assert_eq!(6, answer);
This gives us these long, related errors:
error: the trait bound `core::ops::Fn(i32) -> i32 : core::marker::Sized` is not satisfied [E0277] fn factory() -> (Fn(i32) -> i32) { ^~~~~~~~~~~~~~~~ note: `core::ops::Fn(i32) -> i32` does not have a constant size known at compile-time fn factory() -> (Fn(i32) -> i32) { ^~~~~~~~~~~~~~~~ error: the trait bound `core::ops::Fn(i32) -> i32 : core::marker::Sized` is not satisfied [E0277] let f = factory(); ^ note: `core::ops::Fn(i32) -> i32` does not have a constant size known at compile-time let f = factory(); ^
In order to return something from a function, Rust needs to know what
size the return type is. But since
Fn is a trait, it could be various
things of various sizes: many different types can implement
Fn. An easy
way to give something a size is to take a reference to it, as references
have a known size. So we’d write this:
fn factory() -> &(Fn(i32) -> i32) { let num = 5; |x| x + num } let f = factory(); let answer = f(1); assert_eq!(6, answer);
But we get another error:
error: missing lifetime specifier [E0106] fn factory() -> &(Fn(i32) -> i32) { ^~~~~~~~~~~~~~~~~
Right. Because we have a reference, we need to give it a lifetime. But
our
factory() function takes no arguments, so
elision doesn’t kick in here. Then what
choices do we have? Try
'static:
fn factory() -> &'static (Fn(i32) -> i32) { let num = 5; |x| x + num } let f = factory(); let answer = f(1); assert_eq!(6, answer);
But we get another error:
error: mismatched types: expected `&'static core::ops::Fn(i32) -> i32`, found `[closure@<anon>:7:9: 7:20]` (expected &-ptr, found closure) [E0308] |x| x + num ^~~~~~~~~~~
This error is letting us know that we don’t have a
&'static Fn(i32) -> i32,
we have a
[closure@<anon>:7:9: 7:20]. Wait, what?
Because each closure generates its own environment
struct and implementation
of
Fn and friends, these types are anonymous. They exist solely for
this closure. So Rust shows them as
closure@<anon>, rather than some
autogenerated name.
The error also points out that the return type is expected to be a reference,
but what we are trying to return is not. Further, we cannot directly assign a
'static lifetime to an object. So we'll take a different approach and return
a ‘trait object’ by
Boxing up the
Fn. This almost works:
fn factory() -> Box<Fn(i32) -> i32> { let num = 5; Box::new(|x| x + num) } let f = factory(); let answer = f(1); assert_eq!(6, answer);
There’s just one last problem:
error: closure may outlive the current function, but it borrows `num`, which is owned by the current function [E0373] Box::new(|x| x + num) ^~~~~~~~~~~
Well, as we discussed before, closures borrow their environment. And in this
case, our environment is based on a stack-allocated
5, the
num variable
binding. So the borrow has a lifetime of the stack frame. So if we returned
this closure, the function call would be over, the stack frame would go away,
and our closure is capturing an environment of garbage memory! With one last
fix, we can make this work:
# #![allow(unused_variables)] #fn main() { fn factory() -> Box<Fn(i32) -> i32> { let num = 5; Box::new(move |x| x + num) } let f = factory(); let answer = f(1); assert_eq!(6, answer); #}
By making the inner closure a
move Fn, we create a new stack frame for our
closure. By
Boxing it up, we’ve given it a known size, allowing it to
escape our stack frame.
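The same `move` plus `Box` technique also works for closures that mutate their captured state; here is a sketch using `FnMut`, written with the modern `dyn` spelling of the trait object (the factory name is illustrative):

```rust
fn counter_factory() -> Box<dyn FnMut() -> i32> {
    let mut count = 0;
    // `move` gives the closure ownership of `count`; `Box` gives the
    // anonymous closure type a known size so it can be returned.
    Box::new(move || {
        count += 1;
        count
    })
}

fn main() {
    let mut counter = counter_factory();
    assert_eq!(1, counter());
    assert_eq!(2, counter());
}
```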
Universal Function Call Syntax
Sometimes, functions can have the same names. Consider this code:
```rust
trait Foo {
    fn foo() -> i32;
}

struct Bar;

impl Bar {
    fn foo() -> i32 {
        20
    }
}

impl Foo for Bar {
    fn foo() -> i32 {
        10
    }
}

fn main() {
    assert_eq!(10, <Bar as Foo>::foo());
    assert_eq!(20, Bar::foo());
}
```
Using the angle bracket syntax lets you call the trait method instead of the inherent one.
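The same disambiguation applies to ordinary `&self` methods; a hedged sketch (the trait and type names are made up for illustration): plain method syntax picks the inherent method, while naming the trait selects the trait's implementation.

```rust
trait Greet {
    fn hello(&self) -> &'static str;
}

struct Person;

impl Person {
    fn hello(&self) -> &'static str { "inherent" }
}

impl Greet for Person {
    fn hello(&self) -> &'static str { "trait" }
}

fn main() {
    let p = Person;
    // Method syntax prefers the inherent implementation:
    assert_eq!("inherent", p.hello());
    // UFCS selects the trait's method:
    assert_eq!("trait", Greet::hello(&p));
    // The fully qualified, angle-bracket form:
    assert_eq!("trait", <Person as Greet>::hello(&p));
}
```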
const and static
Rust has a way of defining constants with the
const keyword:
```rust
const N: i32 = 5;
```

Unlike `let` bindings, you must annotate the type of a `const`. Constants live for the entire lifetime of a program; they have no fixed address in memory, because they're effectively inlined at each place they're used.

`static`

Rust also provides a 'global variable' sort of facility in static items. They're similar to constants, but static items aren't inlined upon use: there is exactly one instance for each value, at a fixed location in memory. Here's an example:
# #![allow(unused_variables)] #fn main() { static N: i32 = 5; #}
Unlike
let bindings, you must annotate the type of a
static.
Statics live for the entire lifetime of a program, and therefore any
reference stored in a static has a
'static lifetime:
# #![allow(unused_variables)] #fn main() { static NAME: &'static str = "Steve"; #}
The type of a
static value must be
Sync unless the
static value is
mutable.
Mutability
You can introduce mutability with the
mut keyword:
# #![allow(unused_variables)] #fn main() { static mut N: i32 = 5; #}
Because this is mutable, one thread could be updating
N while another is
reading it, causing memory unsafety. As such both accessing and mutating a
static mut is
unsafe, and so must be done in an
unsafe block:
# #![allow(unused_variables)] #fn main() { # static mut N: i32 = 5; unsafe { N += 1; println!("N: {}", N); } #}
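As an aside, when the mutable global is just a counter, a common safe alternative (a sketch using `std::sync::atomic`, not something this text covers here) avoids `unsafe` entirely:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

static COUNTER: AtomicUsize = AtomicUsize::new(0);

fn next_id() -> usize {
    // fetch_add atomically bumps the counter and returns the previous
    // value, so concurrent readers and writers stay memory safe
    // without any `unsafe` block.
    COUNTER.fetch_add(1, Ordering::SeqCst)
}

fn main() {
    let first = next_id();
    let second = next_id();
    assert!(second > first);
}
```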
Initializing
Both
const and
static have requirements for giving them a value. They must
be given a value that’s a constant expression. In other words, you cannot use
the result of a function call, or anything similarly complex that would be
computed at runtime.
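A quick sketch of the rule: arithmetic on literals and other constants is itself a constant expression, so it's allowed (later Rust versions relax this further for `const fn` calls).

```rust
const BASE: i32 = 10;
// Arithmetic on constants is itself a constant expression:
const DOUBLED: i32 = BASE * 2;

// By contrast, something like `const T: i32 = read_config();` would be
// rejected here, because an arbitrary function call isn't a constant
// expression.

fn main() {
    assert_eq!(20, DOUBLED);
}
```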
Dropping
Types implementing
Drop are allowed in
const and
static
definitions. Constants are inlined where they are used and are dropped
accordingly.
static values are not dropped.
Which construct should I use?
Almost always, if you can choose between the two, choose
const. It’s pretty
rare that you actually want a memory location associated with your constant,
and using a
const allows for optimizations like constant propagation not only
in your crate but downstream crates.
Attributes
Declarations can be annotated with ‘attributes’ in Rust. They look like this:
# #![allow(unused_variables)] #fn main() { #[test] # fn foo() {} #}
or like this:
# #![allow(unused_variables)] #fn main() { # mod foo { #![test] # } #}
The difference between the two is the
!, which changes what the attribute
applies to:
#[foo] struct Foo; mod bar { #![bar] }
The
#[foo] attribute applies to the next item, which is the
struct
declaration. The
#![bar] attribute applies to the item enclosing it, which is
the
mod declaration. Otherwise, they’re the same. Both change the meaning of
the item they’re attached to somehow.
For example, consider a function like this:
# #![allow(unused_variables)] #fn main() { #[test] fn check() { assert_eq!(2, 1 + 1); } #}
It is marked with
#[test]. This means it’s special: when you run
tests, this function will execute. When you compile as usual, it won’t
even be included. This function is now a test function.
Attributes may also have additional data:
# #![allow(unused_variables)] #fn main() { #[inline(always)] fn super_fast_fn() { # } #}
Or even keys and values:
# #![allow(unused_variables)] #fn main() { #[cfg(target_os = "macos")] mod macos_only { # } #}
Rust attributes are used for a number of different things. There is a full list of attributes in the reference. Currently, you are not allowed to create your own attributes; the Rust compiler defines them.
Type Aliases
The
type keyword lets you declare an alias of another type:
# #![allow(unused_variables)] #fn main() { type Name = String; #}
You can then use this type as if it were a real type:
# #![allow(unused_variables)] #fn main() { type Name = String; let x: Name = "Hello".to_string(); #}
Note, however, that this is an alias, not a new type entirely. In other words, because Rust is strongly typed, you’d expect a comparison between two different types to fail:
let x: i32 = 5; let y: i64 = 5; if x == y { // ... }
this gives
error: mismatched types: expected `i32`, found `i64` (expected i32, found i64) [E0308] if x == y { ^
But, if we had an alias:
# #![allow(unused_variables)] #fn main() { type Num = i32; let x: i32 = 5; let y: Num = 5; if x == y { // ... } #}
This compiles without error. Values of a
Num type are the same as values of
type
i32, in every way. You can use a tuple struct to really get a new type.
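A sketch of that tuple-struct 'newtype' pattern (the type names are illustrative):

```rust
// One-field tuple structs create genuinely distinct types,
// unlike `type` aliases:
struct Inches(i32);
struct Centimeters(i32);

fn main() {
    let length = Inches(10);
    let _metric = Centimeters(25);

    // Destructure to reach the inner value:
    let Inches(raw) = length;
    assert_eq!(10, raw);

    // Comparing `Inches(10)` with `Centimeters(10)` would not even
    // type-check, whereas two aliases of i32 compare freely.
}
```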
You can also use type aliases with generics:
# #![allow(unused_variables)] #fn main() { use std::result; enum ConcreteError { Foo, Bar, } type Result<T> = result::Result<T, ConcreteError>; #}
This creates a specialized version of the
Result type, which always has a
ConcreteError for the
E part of
Result<T, E>. This is commonly used
in the standard library to create custom errors for each subsection. For
example, io::Result.
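A short sketch of using such an alias (the `halve` function is made up for illustration):

```rust
use std::result;

#[derive(Debug, PartialEq)]
enum ConcreteError {
    Foo,
}

// The alias pins down the error type, so signatures only mention `T`:
type Result<T> = result::Result<T, ConcreteError>;

fn halve(n: i32) -> Result<i32> {
    if n % 2 == 0 {
        Ok(n / 2)
    } else {
        Err(ConcreteError::Foo)
    }
}

fn main() {
    assert_eq!(Ok(2), halve(4));
    assert_eq!(Err(ConcreteError::Foo), halve(3));
}
```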
Casting Between Types
Rust, with its focus on safety, provides two different ways of casting
different types between each other. The first,
as, is for safe casts.
In contrast,
transmute allows for arbitrary casting, and is one of the
most dangerous features of Rust!
Coercion
Coercion between types is implicit and has no syntax of its own, but can
be spelled out with
as.
Coercion occurs in
let,
const, and
static statements; in
function call arguments; in field values in struct initialization; and in a
function result.
The most common case of coercion is removing mutability from a reference:
- `&mut T` to `&T`

An analogous conversion is to remove mutability from a raw pointer:

- `*mut T` to `*const T`

References can also be coerced to raw pointers:

- `&T` to `*const T`
- `&mut T` to `*mut T`
Custom coercions may be defined using
Deref.
Coercion is transitive.
as
The
as keyword does safe casting:
# #![allow(unused_variables)] #fn main() { let x: i32 = 5; let y = x as i64; #}
There are three major categories of safe cast: explicit coercions, casts between numeric types, and pointer casts.
Casting is not transitive: even if
e as U1 as U2 is a valid
expression,
e as U2 is not necessarily so (in fact it will only be valid if
U1 coerces to
U2).
Explicit coercions
A cast
e as U is valid if
e has type
T and
T coerces to
U.
Numeric casts
A cast
e as U is also valid in any of the following cases:
- `e` has type `T` and `T` and `U` are any numeric types; numeric-cast
- `e` is an enum with no data attached to the variants (a "field-less enumeration"), and `U` is an integer type; enum-cast
- `e` has type `bool` or `char` and `U` is an integer type; prim-int-cast
- `e` has type `u8` and `U` is `char`; u8-char-cast
For example
# #![allow(unused_variables)] #fn main() { let one = true as u8; let at_sign = 64 as char; let two_hundred = -56i8 as u8; #}
The semantics of numeric casts are:
- Casting between two integers of the same size (e.g. `i32` -> `u32`) is a no-op.
- Casting from a larger integer to a smaller integer (e.g. `u32` -> `u8`) will truncate.
- Casting from a smaller integer to a larger integer (e.g. `u8` -> `u32`) will zero-extend if the source is unsigned, or sign-extend if the source is signed.
- Casting from a float to an integer will round the float towards zero.
- Casting from an integer to a float will produce the floating point representation of the integer, rounded if necessary.
- Casting from an `f32` to an `f64` is perfect and lossless.
- Casting from an `f64` to an `f32` will produce the closest possible value.
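These rules can be checked with a few quick casts (a sketch):

```rust
fn main() {
    // Truncation: only the low byte of 1000 (0x3E8) survives.
    assert_eq!(232, 1000i32 as u8);
    // Sign-extension when widening a signed source:
    assert_eq!(-1i32, -1i8 as i32);
    // Zero-extension when widening an unsigned source:
    assert_eq!(255i32, 255u8 as i32);
    // Floats round toward zero when cast to an integer:
    assert_eq!(3, 3.9f64 as i32);
    assert_eq!(-3, -3.9f64 as i32);
}
```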
Pointer casts
Perhaps surprisingly, it is safe to cast raw pointers to and from integers, and to cast between pointers to different types subject to some constraints. It is only unsafe to dereference the pointer:
# #![allow(unused_variables)] #fn main() { let a = 300 as *const char; // `a` is a pointer to location 300. let b = a as u32; #}
e as U is a valid pointer cast in any of the following cases:
- `e` has type `*T`, `U` has type `*U_0`, and either `U_0: Sized` or `unsize_kind(T) == unsize_kind(U_0)`; a ptr-ptr-cast
- `e` has type `*T` and `U` is a numeric type, while `T: Sized`; ptr-addr-cast
- `e` is an integer and `U` is `*U_0`, while `U_0: Sized`; addr-ptr-cast
- `e` has type `&[T; n]` and `U` is `*const T`; array-ptr-cast
- `e` is a function pointer type and `U` has type `*T`, while `T: Sized`; fptr-ptr-cast
- `e` is a function pointer type and `U` is an integer; fptr-addr-cast
transmute
as only allows safe casting, and will for example reject an attempt to
cast four bytes into a
u32:
let a = [0u8, 0u8, 0u8, 0u8]; let b = a as u32; // Four u8s makes a u32.
This errors with:
error: non-scalar cast: `[u8; 4]` as `u32` let b = a as u32; // Four u8s makes a u32. ^~~~~~~~
This is a ‘non-scalar cast’ because we have multiple values here: the four elements of the array. These kinds of casts are very dangerous, because they make assumptions about the way that multiple underlying structures are implemented. For this, we need something more dangerous.
The
transmute function is very simple, but very scary. It tells Rust to treat
a value of one type as though it were another type. It does this regardless of
the typechecking system, and completely trusts you.
In our previous example, we know that an array of four
u8s represents a
u32
properly, and so we want to do the cast. Using
transmute instead of
as,
Rust lets us:
```rust
use std::mem;

fn main() {
    unsafe {
        let a = [0u8, 1u8, 0u8, 0u8];
        let b = mem::transmute::<[u8; 4], u32>(a);
        println!("{}", b); // 256

        // Or, more concisely:
        let c: u32 = mem::transmute(a);
        println!("{}", c); // 256
    }
}
```
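As an aside, later Rust versions added a safe way to do this particular conversion; a sketch using `u32::from_le_bytes`, an API that postdates this text:

```rust
fn main() {
    let a = [0u8, 1u8, 0u8, 0u8];
    // Interpret the array as a little-endian u32: 0x00000100 == 256.
    // Unlike transmute, this is explicit about byte order and fully safe.
    let b = u32::from_le_bytes(a);
    assert_eq!(256, b);
}
```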
Associated Types
Associated types are a powerful part of Rust’s type system. They’re related to
the idea of a ‘type family’, in other words, grouping multiple types together. That
description is a bit abstract, so let’s dive right into an example. If you want
to write a
Graph trait, you have two types to be generic over: the node type
and the edge type. So you might write a trait,
Graph<N, E>, that looks like
this:
# #![allow(unused_variables)] #fn main() { trait Graph<N, E> { fn has_edge(&self, &N, &N) -> bool; fn edges(&self, &N) -> Vec<E>; // Etc. } #}
While this sort of works, it ends up being awkward. For example, any function
that wants to take a
Graph as a parameter now also needs to be generic over
the
Node and
Edge types too:
fn distance<N, E, G: Graph<N, E>>(graph: &G, start: &N, end: &N) -> u32 { ... }
Our distance calculation works regardless of our
Edge type, so the
E stuff in
this signature is a distraction.
What we really want to say is that a certain
Edge and
Node type come together
to form each kind of
Graph. We can do that with associated types:
# #![allow(unused_variables)] #fn main() { trait Graph { type N; type E; fn has_edge(&self, &Self::N, &Self::N) -> bool; fn edges(&self, &Self::N) -> Vec<Self::E>; // Etc. } #}
Now, our clients can be abstract over a given
Graph:
fn distance<G: Graph>(graph: &G, start: &G::N, end: &G::N) -> u32 { ... }
No need to deal with the
Edge type here!
Let’s go over all this in more detail.
Defining associated types
Let’s build that
Graph trait. Here’s the definition:
# #![allow(unused_variables)] #fn main() { trait Graph { type N; type E; fn has_edge(&self, &Self::N, &Self::N) -> bool; fn edges(&self, &Self::N) -> Vec<Self::E>; } #}
Simple enough. Associated types use the
type keyword, and go inside the body
of the trait, with the functions.
These type declarations work the same way as those for functions. For example,
if we wanted our
N type to implement
Display, so we can print the nodes out,
we could do this:
# #![allow(unused_variables)] #fn main() { use std::fmt; trait Graph { type N: fmt::Display; type E; fn has_edge(&self, &Self::N, &Self::N) -> bool; fn edges(&self, &Self::N) -> Vec<Self::E>; } #}
Implementing associated types
Just like any trait, traits that use associated types use the
impl keyword to
provide implementations. Here’s a simple implementation of Graph:
```rust
struct Node;

struct Edge;

struct MyGraph;

impl Graph for MyGraph {
    type N = Node;
    type E = Edge;

    fn has_edge(&self, n1: &Node, n2: &Node) -> bool {
        true
    }

    fn edges(&self, n: &Node) -> Vec<Edge> {
        Vec::new()
    }
}
```
This silly implementation always returns
true and an empty
Vec<Edge>, but it
gives you an idea of how to implement this kind of thing. We first need three
structs, one for the graph, one for the node, and one for the edge. If it made
more sense to use a different type, that would work as well, we’re going to
use
structs for all three here.
Next is the
impl line, which is an implementation like any other trait.
From here, we use
= to define our associated types. The name the trait uses
goes on the left of the
=, and the concrete type we’re
implementing this
for goes on the right. Finally, we use the concrete types in our function
declarations.
Trait objects with associated types
There’s one more bit of syntax we should talk about: trait objects. If you try to create a trait object from a trait with an associated type, like this:
```rust
let graph = MyGraph;
let obj = Box::new(graph) as Box<Graph>;
```
You’ll get two errors:
error: the value of the associated type `E` (from the trait `main::Graph`) must be specified [E0191] let obj = Box::new(graph) as Box<Graph>; ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 24:44 error: the value of the associated type `N` (from the trait `main::Graph`) must be specified [E0191] let obj = Box::new(graph) as Box<Graph>; ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We can’t create a trait object like this, because we don’t know the associated types. Instead, we can write this:
```rust
let graph = MyGraph;
let obj = Box::new(graph) as Box<Graph<N=Node, E=Edge>>;
```
The
N=Node syntax allows us to provide a concrete type,
Node, for the
N
type parameter. Same with
E=Edge. If we didn’t provide this constraint, we
couldn’t be sure which
impl to match this trait object to.
Unsized Types
Most types have a particular size, in bytes, that is knowable at compile time. For example, an `i32` is thirty-two bits big, or four bytes. However, there are some types which are useful to express but do not have a defined size, called 'unsized' or 'dynamically sized' types. One example is `[T]`: it represents a certain number of `T` in sequence, but we don't know how many there are, so the size is not known. Rust understands a few of these types, but they have some restrictions. There are three:
- We can only manipulate an instance of an unsized type via a pointer. An `&[T]` works fine, but a `[T]` does not.
- Variables and arguments cannot have dynamically sized types.
- Only the last field in a struct may have a dynamically sized type; the other fields must not. Enum variants must not have dynamically sized types as data.
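Generic type parameters get an implicit `Sized` bound; the `?Sized` syntax relaxes it, which is how a function can accept unsized types behind a reference (a sketch):

```rust
use std::fmt::Display;

// `?Sized` opts out of the implicit `Sized` bound, so `T` may be an
// unsized type such as `str` -- as long as it stays behind a pointer.
fn show<T: Display + ?Sized>(t: &T) -> String {
    format!("{}", t)
}

fn main() {
    // `str` is unsized, but `&str` is fine thanks to `?Sized`:
    let s: &str = "hello";
    assert_eq!("hello", show(s));
    // Sized types still work:
    assert_eq!("42", show(&42));
}
```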
Operators and Overloading
Rust allows for a limited form of operator overloading. Certain operators are able to be overloaded: to support a particular operator between types, you implement the specific trait from `std::ops` that overloads it. For example, the `+` operator can be overloaded with the `Add` trait:

```rust
use std::ops::Add;

#[derive(Debug)]
struct Point {
    x: i32,
    y: i32,
}

impl Add for Point {
    type Output = Point;

    fn add(self, other: Point) -> Point {
        Point { x: self.x + other.x, y: self.y + other.y }
    }
}

fn main() {
    let p1 = Point { x: 1, y: 0 };
    let p2 = Point { x: 2, y: 3 };

    let p3 = p1 + p2;

    println!("{:?}", p3); // Point { x: 3, y: 3 }
}
```

`Add` also takes a type parameter for the right-hand side, defaulting to `Self`, and an associated `Output` type for the result, so the operands and result may all be different types. For example, this implementation:
# #![allow(unused_variables)] #fn main() { # struct Point; # use std::ops::Add; impl Add<i32> for Point { type Output = f64; fn add(self, rhs: i32) -> f64 { // Add an i32 to a Point and get an f64. # 1.0 } } #}
will let you do this:
let p: Point = // ... let x: f64 = p + 2i32;
Using operator traits in generic structs
Now that we know how operator traits are defined, we can define our
HasArea
trait and
Square struct from the traits chapter more generically:
use std::ops::Mul; trait HasArea<T> { fn area(&self) -> T; } struct Square<T> { x: T, y: T, side: T, } impl<T> HasArea<T> for Square<T> where T: Mul<Output=T> + Copy { fn area(&self) -> T { self.side * self.side } } fn main() { let s = Square { x: 0.0f64, y: 0.0f64, side: 12.0f64, }; println!("Area of s: {}", s.area()); }
For
HasArea and
Square, we declare a type parameter
T and replace
f64 with it. The
impl needs more involved modifications:
impl<T> HasArea<T> for Square<T> where T: Mul<Output=T> + Copy { ... }
The
area method requires that we can multiply the sides, so we declare that
type
T must implement
std::ops::Mul. Like
Add, mentioned above,
Mul
itself takes an
Output parameter: since we know that numbers don't change
type when multiplied, we also set it to
T.
T must also support copying, so
Rust doesn't try to move
self.side into the return value.
Deref coercions
The standard library provides a special trait,
Deref. It’s normally
used to overload
*, the dereference operator:
use std::ops::Deref; struct DerefExample<T> { value: T, } impl<T> Deref for DerefExample<T> { type Target = T; fn deref(&self) -> &T { &self.value } } fn main() { let x = DerefExample { value: 'a' }; assert_eq!('a', *x); }
This is useful for writing custom pointer types. However, there’s a language
feature related to
Deref: ‘deref coercions’. Here’s the rule: If you have a
type
U, and it implements
Deref<Target=T>, values of
&U will
automatically coerce to a
&T. Here’s an example:
# #![allow(unused_variables)] #fn main() { fn foo(s: &str) { // Borrow a string for a second. } // String implements Deref<Target=str>. let owned = "Hello".to_string(); // Therefore, this works: foo(&owned); #}
Using an ampersand in front of a value takes a reference to it. So
owned is a
String,
&owned is an
&String, and since
impl Deref<Target=str> for String,
&String will deref to
&str, which
foo() takes.
That’s it. This rule is one of the only places in which Rust does an automatic
conversion for you, but it adds a lot of flexibility. For example, the
Rc<T>
type implements
Deref<Target=T>, so this works:
# #![allow(unused_variables)] #fn main() { use std::rc::Rc; fn foo(s: &str) { // Borrow a string for a second. } // String implements Deref<Target=str>. let owned = "Hello".to_string(); let counted = Rc::new(owned); // Therefore, this works: foo(&counted); #}
All we’ve done is wrap our
String in an
Rc<T>. But we can now pass the
Rc<String> around anywhere we’d have a
String. The signature of
foo
didn’t change, but works just as well with either type. This example has two
conversions:
&Rc<String> to
&String and then
&String to
&str. Rust will do
this as many times as possible until the types match.
Another very common implementation provided by the standard library is:
# #![allow(unused_variables)] #fn main() { fn foo(s: &[i32]) { // Borrow a slice for a second. } // Vec<T> implements Deref<Target=[T]>. let owned = vec![1, 2, 3]; foo(&owned); #}
Vectors can
Deref to a slice.
Deref and method calls
Deref will also kick in when calling a method. Consider the following
example.
# #![allow(unused_variables)] #fn main() { struct Foo; impl Foo { fn foo(&self) { println!("Foo"); } } let f = &&Foo; f.foo(); #}
Even though
f is a
&&Foo and
foo takes
&self, this works. That’s
because these things are the same:
f.foo(); (&f).foo(); (&&f).foo(); (&&&&&&&&f).foo();
A value of type
&&&&&&&&&&&&&&&&Foo can still have methods defined on
Foo
called, because the compiler will insert as many * operations as necessary to
get it right. And since it’s inserting
*s, that uses
Deref.
Raw Pointers
Rust's raw pointer types, `*const T` and `*mut T`, don't carry the compile-time guarantees that references do. Raw pointers:
- are not guaranteed to point to valid memory and are not even guaranteed to be non-NULL (unlike both `Box` and `&`);
- do not have any automatic clean-up, unlike `Box`, and so require manual resource management;
- are plain-old-data, that is, they don't move ownership, again unlike `Box`, hence the Rust compiler cannot protect against bugs like use-after-free;
- lack any form of lifetimes, unlike `&`, and so the compiler cannot reason about dangling pointers; and
- have no guarantees about aliasing or mutability other than mutation not being allowed directly through a `*const T`.
Basics
Creating a raw pointer is perfectly safe:
# #![allow(unused_variables)] #fn main() { let x = 5; let raw = &x as *const i32; let mut y = 10; let raw_mut = &mut y as *mut i32; #}
However, dereferencing one is not. It must be done inside an `unsafe` block:
# #![allow(unused_variables)] #fn main() { let x = 5; let raw = &x as *const i32; let points_at = unsafe { *raw }; println!("raw points at {}", points_at); #}
For more operations on raw pointers, see their API documentation.
FFI
Raw pointers are useful for FFI: Rust’s
*const T and
*mut T are similar to
C’s
const T* and
T*, respectively. For more about this use, consult the
FFI chapter.
References and raw pointers: it is safe to go from a reference to a raw pointer with `as`:

```rust
let i: u32 = 1;
let p_imm: *const u32 = &i as *const u32;

let mut m: u32 = 2;
let p_mut: *mut u32 = &mut m;
```

However, going the opposite direction, from a raw pointer to a reference, is not safe: a `&T` is always valid, so at a minimum the raw pointer must point to a valid instance of type `T`. That conversion therefore requires a dereference inside an `unsafe` block.
Testing
Program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence.
Edsger W. Dijkstra, "The Humble Programmer" (1972)
Let's talk about how to test Rust code. What we will not be talking about is the right way to test Rust code. There are many schools of thought regarding the right and wrong way to write tests. All of these approaches use the same basic tools, and so we'll show you the syntax for using them.
The
test attribute
At its simplest, a test in Rust is a function that's annotated with the
test
attribute. Let's make a new project with Cargo called
adder:
$ cargo new adder $ cd adder
Cargo will automatically generate a simple test when you make a new project.
Here's the contents of
src/lib.rs:
# // The next line exists to trick play.rust-lang.org into running our code as a # // test: # // fn main # #[cfg(test)] mod tests { #[test] fn it_works() { } }
For now, let's remove the
mod bit, and focus on just the function:
# // The next line exists to trick play.rust-lang.org into running our code as a # // test: # // fn main # #[test] fn it_works() { }
Note the
#[test]. This attribute indicates that this is a test function. It
currently has no body. That's good enough to pass! We can run the tests with
cargo test:
$ cargo test Compiling adder v0.1.0 () Finished debug [unoptimized + debuginfo] target(s) in 0.15 secs Running target/debug/deps/adder-941f01916ca4a642 running 1 test test it_works ... ok test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured Doc-tests adder running 0 tests test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured
Cargo compiled and ran our tests. There are two sets of output here: one for the test we wrote, and another for documentation tests. We'll talk about those later. For now, see this line:
test it_works ... ok
Note the
it_works. This comes from the name of our function:
# fn main() { fn it_works() { } # }
We also get a summary line:
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured
So why does our do-nothing test pass? Any test which doesn't
panic! passes,
and any test that does
panic! fails. Let's make our test fail:
# // The next line exists to trick play.rust-lang.org into running our code as a # // test: # // fn main # #[test] fn it_works() { assert!(false); }
assert! is a macro provided by Rust which takes one argument: if the argument
is
true, nothing happens. If the argument is
false, it will
panic!. Let's
run our tests again:
$ cargo test Compiling adder v0.1.0 () Finished debug [unoptimized + debuginfo] target(s) in 0.17 secs Running target/debug/deps/adder-941f01916ca4a642 running 1 test test it_works ... FAILED failures: ---- it_works stdout ---- thread 'it_works' panicked at 'assertion failed: false', src/lib.rs:5 note: Run with `RUST_BACKTRACE=1` for a backtrace. failures: it_works test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured error: test failed
Rust indicates that our test failed:
test it_works ... FAILED
And that's reflected in the summary line:
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured
We also get a non-zero status code. We can use
$? on macOS and Linux:
$ echo $? 101
On Windows, if you’re using
cmd:
> echo %ERRORLEVEL%
And if you’re using PowerShell:
> echo $LASTEXITCODE # the code itself > echo $? # a boolean, fail or succeed
This is useful if you want to integrate
cargo test into other tooling.
We can invert our test's failure with another attribute:
should_panic:
# // The next line exists to trick play.rust-lang.org into running our code as a # // test: # // fn main # #[test] #[should_panic] fn it_works() { assert!(false); }
This test will now succeed if we
panic! and fail if we complete. Let's try it:
$ cargo test Compiling adder v0.1.0 () Finished debug [unoptimized + debuginfo] target(s) in 0.17
Rust provides another macro,
assert_eq!, that compares two arguments for
equality:
# // The next line exists to trick play.rust-lang.org into running our code as a # // test: # // fn main # #[test] #[should_panic] fn it_works() { assert_eq!("Hello", "world"); }
Does this test pass or fail? Because of the
should_panic attribute, it
passes:
$ cargo test Compiling adder v0.1.0 () Finished debug [unoptimized + debuginfo] target(s) in 0.21
should_panic tests can be fragile, as it's hard to guarantee that the test
didn't fail for an unexpected reason. To help with this, an optional
expected
parameter can be added to the
should_panic attribute. The test harness will
make sure that the failure message contains the provided text. A safer version
of the example above would be:
# // The next line exists to trick play.rust-lang.org into running our code as a # // test: # // fn main # #[test] #[should_panic(expected = "assertion failed")] fn it_works() { assert_eq!("Hello", "world"); }
That's all there is to the basics! Let's write one 'real' test:
# // The next line exists to trick play.rust-lang.org into running our code as a # // test: # // fn main # pub fn add_two(a: i32) -> i32 { a + 2 } #[test] fn it_works() { assert_eq!(4, add_two(2)); }
This is a very common use of
assert_eq!: call some function with
some known arguments and compare it to the expected output.
The
ignore attribute
Sometimes a few specific tests can be very time-consuming to execute. These
can be disabled by default by using the
ignore attribute:
# // The next line exists to trick play.rust-lang.org into running our code as a # // test: # // fn main # pub fn add_two(a: i32) -> i32 { a + 2 } #[test] fn it_works() { assert_eq!(4, add_two(2)); } #[test] #[ignore] fn expensive_test() { // Code that takes an hour to run... }
Now we run our tests and see that
it_works is run, but
expensive_test is
not:
$ cargo test Compiling adder v0.1.0 () Finished debug [unoptimized + debuginfo] target(s) in 0.20 secs Running target/debug/deps/adder-941f01916ca4a642 running 2 tests test expensive_test ... ignored test it_works ... ok test result: ok. 1 passed; 0 failed; 1 ignored; 0 measured Doc-tests adder running 0 tests test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured
The expensive tests can be run explicitly using
cargo test -- --ignored:
$ cargo test -- --ignored Finished debug [unoptimized + debuginfo] target(s) in 0.0 secs Running target/debug/deps/adder-941f01916ca4a642 running 1 test test expensive_test ... ok test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured Doc-tests adder running 0 tests test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured
The
--ignored argument is an argument to the test binary, and not to Cargo,
which is why the command is
cargo test -- --ignored.
The
tests module
There is one way in which our existing example is not idiomatic: it's
missing the
tests module. You might have noticed this test module was
present in the code that was initially generated with
cargo new but
was missing from our last example. Let's explain what this does.
The idiomatic way of writing our example looks like this:
# // The next line exists to trick play.rust-lang.org into running our code as a # // test: # // fn main # pub fn add_two(a: i32) -> i32 { a + 2 } #[cfg(test)] mod tests { use super::add_two; #[test] fn it_works() { assert_eq!(4, add_two(2)); } }
There are a few changes here. The first is the introduction of a
mod tests with
a
cfg attribute. The module allows us to group all of our tests together, and
to also define helper functions if needed, that don't become a part of the rest
of our crate. The
cfg attribute only compiles our test code if we're
currently trying to run the tests. This can save compile time, and also ensures
that our tests are entirely left out of a normal build.
The second change is the
use declaration. Because we're in an inner module,
we need to bring the tested function into scope. This can be annoying if you have
a large module, and so this is a common use of globs. Let's change our
src/lib.rs to make use of it:
# // The next line exists to trick play.rust-lang.org into running our code as a # // test: # // fn main # pub fn add_two(a: i32) -> i32 { a + 2 } #[cfg(test)] mod tests { use super::*; #[test] fn it_works() { assert_eq!(4, add_two(2)); } }
Note the different
use line. Now we run our tests:
$ cargo test Updating registry `` Compiling adder v0.1.0 () Running target/debug/deps/adder-91b3e234d4ed382a running 1 test test tests::it_works ... ok test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured Doc-tests adder running 0 tests test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured
It works!
The current convention is to use the
tests module to hold your "unit-style"
tests. Anything that tests one small bit of functionality makes sense to
go here. But what about "integration-style" tests instead? For that, we have
the
tests directory.
The
tests directory
Each file matching tests/*.rs is treated as an individual crate.
To write an integration test, let's make a
tests directory and
put a
tests/integration_test.rs file inside with this as its contents:
# // The next line exists to trick play.rust-lang.org into running our code as a # // test: # // fn main # # // Sadly, this code will not work in play.rust-lang.org, because we have no # // crate adder to import. You'll need to try this part on your own machine. extern crate adder; #[test] fn it_works() { assert_eq!(4, adder::add_two(2)); }
This looks similar to our previous tests, but slightly different. We now have
an
extern crate adder at the top. This is because each test in the
tests
directory is an entirely separate crate, and so we need to import our library.
This is also why
tests is a suitable place to write integration-style tests:
they use the library like any other consumer of it would.
Let's run it with cargo test. The output now has a section for the integration test crate as well: our new it_works test runs and passes, the unit tests still run, and the doc-tests section still reports: running 0 tests, test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured
Now we have three sections: our previous test is also run, as well as our new one.
Cargo will ignore files in subdirectories of the
tests/ directory.
Therefore, shared modules in integration tests are possible.
For example
tests/common/mod.rs is not separately compiled by cargo but can
be imported in every test with
mod common;
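The shared-module pattern above can be sketched in a single runnable file. This is an illustration only: in a real project the module body lives in tests/common/mod.rs, each file under tests/ declares mod common;, and the setup helper here is hypothetical.

```rust
// Shared test helpers, inlined here for illustration. In practice this
// module body would live in `tests/common/mod.rs`, and each file under
// `tests/` would pull it in with `mod common;`.
mod common {
    // A hypothetical shared fixture used by several integration tests.
    pub fn setup() -> Vec<i32> {
        vec![1, 2, 3]
    }
}

fn main() {
    // Each integration test would call the shared helper like this.
    let fixture = common::setup();
    assert_eq!(fixture, vec![1, 2, 3]);
    println!("fixture ready: {:?}", fixture);
}
```

Because Cargo does not compile subdirectories of tests/ as separate crates, the helper is compiled once into each test crate that declares the module.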
That's all there is to the
tests directory. The
tests module isn't needed
here, since the whole thing is focused on tests.
Note, when building integration tests, cargo will not pass the
test attribute
to the compiler. It means that all parts in
cfg(test) won't be included in
the build used in your integration tests.
Let's finally check out that third section: documentation tests.
Documentation tests
Nothing is better than documentation with examples. Nothing is worse than
examples that don't actually work, because the code has changed since the
documentation has been written. To this end, Rust supports automatically
running examples in your documentation (note: this only works in library
crates, not binary crates). Here's a fleshed-out
src/lib.rs with examples:
# // The next line exists to trick play.rust-lang.org into running our code as a # // test: # // fn main # //! The `adder` crate provides functions that add numbers to other numbers. //! //! # Examples //! //! ``` //! assert_eq!(4, adder::add_two(2)); //! ``` /// This function adds two to its argument. /// /// # Examples /// /// ``` /// use adder::add_two; /// /// assert_eq!(4, add_two(2)); /// ``` pub fn add_two(a: i32) -> i32 { a + 2 } #[cfg(test)] mod tests { use super::*; #[test] fn it_works() { assert_eq!(4, add_two(2)); } }
Note the module-level documentation with
//! and the function-level
documentation with
///. Rust's documentation supports Markdown in comments,
and so triple graves mark code blocks. It is conventional to include the
# Examples section, exactly like that, with examples following.
Let's run the tests again with cargo test. The Doc-tests section now reports: running 2 tests, test add_two_0 ... ok, test _0 ... ok, test result: ok. 2 passed; 0 failed; 0 ignored; 0 measured
Now we have all three kinds of tests running! Note the names of the
documentation tests: the
_0 is generated for the module test, and
add_two_0
for the function test. These will auto increment with names like
add_two_1 as
you add more examples.
We haven’t covered all of the details with writing documentation tests. For more, please see the Documentation chapter.
Testing and concurrency
It is important to note that tests are run concurrently using threads. For this reason, care should be taken to ensure your tests do not depend on each-other, or on any shared state. "Shared state" can also include the environment, such as the current working directory, or environment variables.
If this is an issue it is possible to control this concurrency, either by
setting the environment variable
RUST_TEST_THREADS, or by passing the argument
--test-threads to the tests:
$ RUST_TEST_THREADS=1 cargo test # Run tests with no concurrency ... $ cargo test -- --test-threads=1 # Same as above ...
Test output
By default Rust's test library captures and discards output to standard
out/error, e.g. output from
println!(). This too can be controlled using the
environment or a switch:
$ RUST_TEST_NOCAPTURE=1 cargo test # Preserve stdout/stderr ... $ cargo test -- --nocapture # Same as above ...
However a better method avoiding capture is to use logging rather than raw output. Rust has a standard logging API, which provides a frontend to multiple logging implementations. This can be used in conjunction with the default env_logger to output any debugging information in a manner that can be controlled at runtime.
Conditional Compilation
Rust has a special attribute,
#[cfg], which allows you to compile code
based on a flag passed to the compiler. It has two forms:
# #![allow(unused_variables)] #fn main() { #[cfg(foo)] # fn foo() {} #[cfg(bar = "baz")] # fn bar() {} #}
They also have some helpers:
# #![allow(unused_variables)] #fn main() { #[cfg(any(unix, windows))] # fn foo() {} #[cfg(all(unix, target_pointer_width = "32"))] # fn bar() {} #[cfg(not(foo))] # fn not_foo() {} #}
These can nest arbitrarily:
# #![allow(unused_variables)] #fn main() { #[cfg(any(not(unix), all(target_os="macos", target_arch = "powerpc")))] # fn foo() {} #}
As for how to enable or disable these switches, if you’re using Cargo,
they get set in the
[features] section of your
Cargo.toml:
[features] # no features by default default = [] # Add feature "foo" here, then you can use it. # Our "foo" feature depends on nothing else. foo = []
When you do this, Cargo passes along a flag to
rustc:
--cfg feature="${feature_name}"
The sum of these
cfg flags will determine which ones get activated, and
therefore, which code gets compiled. Let’s take this code:
# #![allow(unused_variables)] #fn main() { #[cfg(feature = "foo")] mod foo { } #}

If we compile with cargo build --features "foo", it will send the --cfg feature="foo" flag to rustc, and the output will have the mod foo in it. If we compile with a regular cargo build, no extra flags get passed on, and so, no foo module will exist.
cfg_attr
You can also set another attribute based on a
cfg variable with
cfg_attr:
# #![allow(unused_variables)] #fn main() { #[cfg_attr(a, b)] # fn foo() {} #}
Will be the same as
#[b] if
a is set by
cfg attribute, and nothing otherwise.
cfg!
The
cfg! macro lets you use these kinds of flags elsewhere in your code, too:
# #![allow(unused_variables)] #fn main() { if cfg!(target_os = "macos") || cfg!(target_os = "ios") { println!("Think Different!"); } #}
These will be replaced by a
true or
false at compile-time, depending on the
configuration settings.
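To make that compile-time substitution concrete, here is a minimal sketch; the function name is ours, chosen for illustration. Because cfg! expands to a plain boolean, it composes with ordinary expressions instead of removing code the way #[cfg] does:

```rust
// cfg! expands to a compile-time `true` or `false`, so it can be used in
// ordinary runtime expressions, unlike #[cfg], which removes code entirely.
fn os_family() -> &'static str {
    if cfg!(windows) {
        "windows"
    } else if cfg!(unix) {
        "unix"
    } else {
        "other"
    }
}

fn main() {
    let family = os_family();
    // Whatever the target, the result is one of the three labels above.
    assert!(family == "windows" || family == "unix" || family == "other");
    println!("compiled for: {}", family);
}
```

Note that all branches must still type-check on every target, since cfg! does not remove the untaken branches from compilation.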
Documentation
Documentation is an important part of any software project, and it's first-class in Rust. Let's talk about the tooling Rust gives you to document your project.
About
rustdoc
The Rust distribution includes a tool,
rustdoc, that generates documentation.
rustdoc is also used by Cargo through
cargo doc.
Documentation can be generated in two ways: from source code, and from standalone Markdown files.
Documenting source code
The primary way of documenting a Rust project is through annotating the source code. You can use documentation comments for this purpose:
/// Constructs a new `Rc<T>`. /// /// # Examples /// /// ``` /// use std::rc::Rc; /// /// let five = Rc::new(5); /// ``` pub fn new(value: T) -> Rc<T> { // Implementation goes here. }
This code generates documentation that looks like this. I've left the implementation out, with a regular comment in its place.
The first thing to notice about this annotation is that it uses
/// instead of
//. The triple slash
indicates a documentation comment.
Documentation comments are written in Markdown.
Rust keeps track of these comments, and uses them when generating documentation. This is important when documenting things like enums:
# #![allow(unused_variables)] #fn main() { /// The `Option` type. See [the module level documentation](index.html) for more. enum Option<T> { /// No value None, /// Some value `T` Some(T), } #}
The above works, but this does not:
/// The `Option` type. See [the module level documentation](index.html) for more. enum Option<T> { None, /// No value Some(T), /// Some value `T` }
You'll get an error:
hello.rs:4:1: 4:2 error: expected ident, found `}` hello.rs:4 } ^
This unfortunate error is correct; documentation comments apply to the thing after them, and there's nothing after that last comment.
Writing documentation comments
Anyway, let's cover each part of this comment in detail:
# #![allow(unused_variables)] #fn main() { /// Constructs a new `Rc<T>`. # fn foo() {} #}
The first line of a documentation comment should be a short summary of its functionality. One sentence. Just the basics. High level.
# #![allow(unused_variables)] #fn main() { /// /// Other details about constructing `Rc<T>`s, maybe describing complicated /// semantics, maybe additional options, all kinds of stuff. /// # fn foo() {} #}
Our original example had just a summary line, but if we had more things to say, we could have added more explanation in a new paragraph.
Special sections
Next, are special sections. These are indicated with a header,
#. There
are four kinds of headers that are commonly used. They aren't special syntax,
just convention, for now.
# #![allow(unused_variables)] #fn main() { /// # Panics # fn foo() {} #}
Unrecoverable misuses of a function (i.e. programming errors) in Rust are usually indicated by panics, which kill the whole current thread at the very least. If your function has a non-trivial contract like this, that is detected/enforced by panics, documenting it is very important.
# #![allow(unused_variables)] #fn main() { /// # Errors # fn foo() {} #}
If your function or method returns a
Result<T, E>, then describing the
conditions under which it returns
Err(E) is a nice thing to do. This is
slightly less important than
Panics, because failure is encoded into the type
system, but it's still a good thing to do.
# #![allow(unused_variables)] #fn main() { /// # Safety # fn foo() {} #}
If your function is
unsafe, you should explain which invariants the caller is
responsible for upholding.
# #![allow(unused_variables)] #fn main() { /// # Examples /// /// ``` /// use std::rc::Rc; /// /// let five = Rc::new(5); /// ``` # fn foo() {} #}
Fourth,
Examples. Include one or more examples of using your function or
method, and your users will love you for it. These examples go inside of
code block annotations, which we'll talk about in a moment, and can have
more than one section:
# #![allow(unused_variables)] #fn main() { /// # Examples /// /// Simple `&str` patterns: /// /// ``` /// let v: Vec<&str> = "Mary had a little lamb".split(' ').collect(); /// assert_eq!(v, vec!["Mary", "had", "a", "little", "lamb"]); /// ``` /// /// More complex patterns with a lambda: /// /// ``` /// let v: Vec<&str> = "abc1def2ghi".split(|c: char| c.is_numeric()).collect(); /// assert_eq!(v, vec!["abc", "def", "ghi"]); /// ``` # fn foo() {} #}
Code block annotations
To write some Rust code in a comment, use the triple graves:
# #![allow(unused_variables)] #fn main() { /// ``` /// println!("Hello, world"); /// ``` # fn foo() {} #}
This will add code highlighting. If you are only showing plain text, put
text
instead of
rust after the triple graves (see below).
Documentation as tests
Let's discuss our sample example documentation:
# #![allow(unused_variables)] #fn main() { /// ``` /// println!("Hello, world"); /// ``` # fn foo() {} #}
You'll notice that you don't need a
fn main() or anything here.
rustdoc will
automatically add a
main() wrapper around your code, using heuristics to attempt
to put it in the right place. For example:
# #![allow(unused_variables)] #fn main() { /// ``` /// use std::rc::Rc; /// /// let five = Rc::new(5); /// ``` # fn foo() {} #}
This will end up testing:
fn main() { use std::rc::Rc; let five = Rc::new(5); }
Here's the full algorithm rustdoc uses to preprocess examples:
- Any leading #![foo] attributes are left intact as crate attributes.
- Some common allow attributes are inserted, including unused_variables, unused_assignments, unused_mut, unused_attributes, and dead_code. Small examples often trigger these lints.
- If the example does not contain extern crate, then extern crate <mycrate>; is inserted (note the lack of #[macro_use]).
- Finally, if the example does not contain fn main, the remainder of the text is wrapped in fn main() { your_code }.
This generated
fn main can be a problem! If you have
extern crate or a
mod
statements in the example code that are referred to by
use statements, they will
fail to resolve unless you include at least
fn main() {} to inhibit step 4.
#[macro_use] extern crate also does not work except at the crate root, so when
testing macros an explicit
main is always required. It doesn't have to clutter
up your docs, though -- keep reading!
Sometimes this algorithm isn't enough, though. For example, all of these code samples
with
/// we've been talking about? The raw text:
/// Some documentation. # fn foo() {}
looks different than the output:
# #![allow(unused_variables)] #fn main() { /// Some documentation. # fn foo() {} #}

Lines that start with # are hidden from the rendered output, but are still compiled when the example is tested. You can use this to your advantage: here, a documentation comment needs to apply to some kind of function, so a small hidden function definition below it satisfies the compiler without cluttering the docs. The same technique lets you explain a longer example in steps while keeping each step testable. For example, to walk through setting x and y and printing their sum:

First, we set `x` to five: ```rust let x = 5; # let y = 6; # println!("{}", x + y); ``` Next, we set `y` to six: ```rust # let x = 5; let y = 6; # println!("{}", x + y); ``` Finally, we print the sum of `x` and `y`: ```rust # let x = 5; # let y = 6; println!("{}", x + y); ```

By repeating all parts of the example, you can ensure that your example still compiles, while only showing the parts that are relevant to that part of your explanation.
Documenting macros
Here’s an example of documenting a macro:
/// Panic with a given message unless an expression evaluates to true. /// /// # Examples /// /// ``` /// # #[macro_use] extern crate foo; /// # fn main() { /// panic_unless!(1 + 1 == 2, "Math is broken."); /// # } /// ``` #[macro_export] macro_rules! panic_unless { ($condition:expr, $($rest:expr),*) => ({ if ! $condition { panic!($($rest),*); } }); }

You'll note three things: we need to add our own extern crate line, so that we can add the #[macro_use] attribute; we need to add our own main() as well (for reasons discussed above); and a judicious use of # comments out those two things, so they don't show up in the output.
Another case where the use of
# is handy is when you want to ignore
error handling. Let's say you want the following,
/// use std::io; /// let mut input = String::new(); /// try!(io::stdin().read_line(&mut input));
The problem is that
try! returns a
Result<T, E> and test functions
don't return anything so this will give a mismatched types error.
/// A doc test using try! /// /// ``` /// use std::io; /// # fn foo() -> io::Result<()> { /// let mut input = String::new(); /// try!(io::stdin().read_line(&mut input)); /// # Ok(()) /// # } /// ``` # fn foo() {}
You can get around this by wrapping the code in a function. This catches
and swallows the
Result<T, E> when running tests on the docs. This
pattern appears regularly in the standard library.
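The same wrapper pattern works outside doc comments too. Since the original example depends on stdin, here is a deterministic sketch of our own using an in-memory Cursor in place of real input, and the ? operator (the operator form of the same early return) in place of try!:

```rust
use std::io::{self, Cursor, Read};

// The fallible code lives in a function returning io::Result, so the
// early return on Err (here via `?`) has a function boundary to return
// through instead of a test harness that returns nothing.
fn read_all<R: Read>(mut reader: R) -> io::Result<String> {
    let mut input = String::new();
    reader.read_to_string(&mut input)?;
    Ok(input)
}

fn main() {
    // A Cursor stands in for stdin so the example is deterministic.
    let text = read_all(Cursor::new("hello")).unwrap();
    assert_eq!(text, "hello");
    println!("read: {}", text);
}
```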
Running documentation tests
To run the tests, either:
$ rustdoc --test path/to/my/crate/root.rs # or $ cargo test
That's right,
cargo test tests embedded documentation too. However,
cargo test will not test binary crates, only library ones. This is
due to the way
rustdoc works: it links against the library to be tested,
but with a binary, there’s nothing to link to.
There are a few more annotations that are useful to help
rustdoc do the right
thing when testing your code:
# #![allow(unused_variables)] #fn main() { /// ```rust,ignore /// fn foo() { /// ``` # fn foo() {} #}

The ignore attribute tells Rust to ignore your code. This is almost never what you want, as it's the most generic. Instead, consider annotating it with text if it's not code, or using #s to get a working example that only shows the part you care about.

# #![allow(unused_variables)] #fn main() { /// ```rust,should_panic /// assert!(false); /// ``` # fn foo() {} #}
should_panic tells
rustdoc that the code should compile correctly, but
not actually pass as a test.
# #![allow(unused_variables)] #fn main() { /// ```rust,no_run /// loop { /// println!("Hello, world"); /// } /// ``` # fn foo() {} #}

The no_run attribute will compile your code but not run it. This is important for examples such as "Here's how to retrieve a web page," which you would want to ensure compiles, but might be run in a test environment that has no network access.
Documenting modules
Rust has another kind of doc comment,
//!. This comment doesn't document the next item, but the enclosing item. In other words:
# #![allow(unused_variables)] #fn main() { mod foo { //! This is documentation for the `foo` module. //! //! # Examples // ... } #}
This is where you'll see
//! used most often: for module documentation. If
you have a module in
foo.rs, you'll often open its code and see this:
# #![allow(unused_variables)] #fn main() { //! A module for using `foo`s. //! //! The `foo` module contains a lot of useful functionality blah blah blah... #}
Crate documentation
Crates can be documented by placing an inner doc comment (
//!) at the
beginning of the crate root, aka
lib.rs:
# #![allow(unused_variables)] #fn main() { //! This is documentation for the `foo` crate. //! //! The foo crate is meant to be used for bar. #}
Documentation comment style
Check out RFC 505 for full conventions around the style and format of documentation.
Other documentation
All of this behavior works in non-Rust source files too. Because comments
are written in Markdown, they're often
.md files.
When you write documentation in Markdown files, you don't need to prefix the documentation with comments. For example:
# #![allow(unused_variables)] #fn main() { /// # Examples /// /// ``` /// use std::rc::Rc; /// /// let five = Rc::new(5); /// ``` # fn foo() {} #}
is:
# Examples ``` use std::rc::Rc; let five = Rc::new(5); ```
when it's in a Markdown file. There is one wrinkle though: Markdown files need to have a title like this:
% The title This is the example documentation.
This
% line needs to be the very first line of the file.
doc attributes
At a deeper level, documentation comments are syntactic sugar for documentation attributes:
# #![allow(unused_variables)] #fn main() { /// this # fn foo() {} #[doc="this"] # fn bar() {} #}
are the same, as are these:
# #![allow(unused_variables)] #fn main() { //! this #![doc="this"] #}
You won't often see this attribute used for writing documentation, but it can be useful when changing some options, or when writing a macro.
Re-exports
rustdoc will show the documentation for a public re-export in both places:
extern crate foo; pub use foo::bar;
This will create documentation for
bar both inside the documentation for the
crate
foo, as well as the documentation for your crate. It will use the same
documentation in both places.
This behavior can be suppressed with
no_inline:
extern crate foo; #[doc(no_inline)] pub use foo::bar;
Missing documentation
Sometimes you want to make sure that every single public thing in your project
is documented, especially when you are working on a library. Rust allows you to
generate warnings or errors when an item is missing documentation.
To generate warnings you use
warn:
#![warn(missing_docs)]
And to generate errors you use
deny:
#![deny(missing_docs)]
There are cases where you want to disable these warnings/errors to explicitly
leave something undocumented. This is done by using
allow:
# #![allow(unused_variables)] #fn main() { #[allow(missing_docs)] struct Undocumented; #}
You might even want to hide items from the documentation completely:
# #![allow(unused_variables)] #fn main() { #[doc(hidden)] struct Hidden; #}
Controlling HTML
You can control a few aspects of the HTML that
rustdoc generates through the
#![doc] version of the attribute:
#![doc(html_logo_url = "", html_favicon_url = "", html_root_url = "")]
This sets a few different options, with a logo, favicon, and a root URL.
Configuring documentation tests
You can also configure the way that
rustdoc tests your documentation examples
through the
#![doc(test(..))] attribute.
# #![allow(unused_variables)] #![doc(test(attr(allow(unused_variables), deny(warnings))))] #fn main() { #}
This allows unused variables within the examples, but will fail the test for any other lint warning thrown.
Generation options
rustdoc also contains a few other options on the command line, for further customization:
--html-in-header FILE: includes the contents of FILE at the end of the <head>...</head> section.
--html-before-content FILE: includes the contents of FILE directly after
<body>, before the rendered content (including the search bar).
--html-after-content FILE: includes the contents of FILE after all the rendered content.
Security note
The Markdown in documentation comments is placed without processing into the final webpage. Be careful with literal HTML:
# #![allow(unused_variables)] #fn main() { /// <script>alert(document.cookie)</script> # fn foo() {} #}
Iterators
Let's talk about loops.
Remember Rust's
for loop? Here's an example:
# #![allow(unused_variables)] #fn main() { for x in 0..10 { println!("{}", x); } #}

Now that you know more Rust, we can talk in detail about how this works. Ranges (the 0..10 here) are 'iterators'. An iterator is something that we can call the .next() method on repeatedly, and it gives us a sequence of things.
A range with two dots like
0..10 is inclusive on the left (so it
starts at 0) and exclusive on the right (so it ends at 9). A mathematician
would write "[0, 10)".
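Those bounds are easy to check by collecting the range, purely for demonstration:

```rust
// 0..10 starts at 0 (inclusive) and stops before 10 (exclusive),
// so it yields exactly the ten values 0 through 9.
fn main() {
    let values: Vec<i32> = (0..10).collect();
    assert_eq!(values.first(), Some(&0)); // inclusive left endpoint
    assert_eq!(values.last(), Some(&9));  // exclusive right endpoint
    assert_eq!(values.len(), 10);
    println!("{:?}", values);
}
```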
A for loop is sugar for repeatedly calling the iterator's next() method; we could write the same loop manually, like this:

# #![allow(unused_variables)] #fn main() { let mut range = 0..10; loop { match range.next() { Some(x) => { println!("{}", x); }, None => { break } } } #}

We make a mutable binding to the range, which is our iterator. We then loop, with an inner match on the result of range.next(), which gives us the next value of the iterator. next returns an Option<i32>: Some(i32) while we have a value, and None once we run out. If we get Some(i32), we print it out; if we get None, we break out of the loop.

Ranges are very primitive, though, and we often can use better alternatives. Consider the following Rust anti-pattern: using ranges like this:
# #![allow(unused_variables)] #fn main() { let nums = vec![1, 2, 3]; for i in 0..nums.len() { println!("{}", nums[i]); } #}
This is strictly worse than using an actual iterator. You can iterate over vectors directly, so write this:
# #![allow(unused_variables)] #fn main() { let nums = vec![1, 2, 3]; for num in &nums { println!("{}", num); } #}

This more directly expresses what we mean: we iterate through the entire vector, rather than iterating through indexes and then indexing the vector. This version is also more efficient, since it avoids a bounds check on every access.

There are three broad classes of things that are relevant when talking about iterators:
- iterators give you a sequence of values.
- iterator adaptors operate on an iterator, producing a new iterator with a different output sequence.
- consumers operate on an iterator, producing some final set of values.
Let's talk about consumers first, since you've already seen an iterator, ranges.
Consumers:
# #![allow(unused_variables)] #fn main() { let one_to_one_hundred = (1..101).collect::<Vec<i32>>(); #}
If you remember, the
::<> syntax
allows us to give a type hint that tells the compiler we want a vector of
integers. You don't always need to use the whole type, though. Using a
_
will let you provide a partial hint:
# #![allow(unused_variables)] #fn main() { let one_to_one_hundred = (1..101).collect::<Vec<_>>(); #}
This says "Collect into a
Vec<T>, please, but infer what the
T is for me."
_ is sometimes called a "type placeholder" for this reason.
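The placeholder works with any collection type; a small sketch of our own, collecting the same range two different ways:

```rust
use std::collections::HashSet;

// The turbofish names only the container; `_` leaves the element type
// for the compiler to infer from the iterator.
fn main() {
    let as_vec = (1..5).collect::<Vec<_>>();
    let as_set = (1..5).collect::<HashSet<_>>();
    assert_eq!(as_vec, vec![1, 2, 3, 4]);
    assert_eq!(as_set.len(), 4);
    assert!(as_set.contains(&3));
    println!("{:?}", as_vec);
}
```

The same iterator feeds both containers; only the outer type annotation changes which FromIterator implementation collect() uses.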
collect() is the most common consumer, but there are others too.
find()
is one:
# #![allow(unused_variables)] #fn main() { let greater_than_forty_two = (0..100).find(|x| *x > 42); match greater_than_forty_two { Some(_) => println!("Found a match!"), None => println!("No match found :("), } #}

find takes a closure, and works on a reference to each element of an iterator. This closure returns true if the element is the element we're looking for, and false otherwise. Because we might not find a matching element, find returns an Option rather than the element itself.

Another important consumer is fold. Here's what it looks like:
# #![allow(unused_variables)] #fn main() { let sum = (1..4).fold(0, |sum, x| sum + x); #}

fold() is a consumer that looks like this: .fold(base, |accumulator, element| ...). It takes two arguments: the first is an element called the base. The second is a closure that itself takes two arguments: the first is called the accumulator, and the second is an element. Upon each iteration, the closure is called, and the result is the value of the accumulator on the next iteration. On the first iteration, the base is assigned the value of the accumulator.
Okay, that's a bit confusing. Let's examine the values of all of these things in this iterator:
We called
fold() with these arguments:
# #![allow(unused_variables)] #fn main() { # (1..4).fold(0, |sum, x| sum + x); #}

With base 0 and elements 1, 2, and 3, the accumulator takes the values 0, 1, and 3 on successive iterations, and the final call of the closure returns 0 + 1 + 2 + 3 = 6. fold then returns that last accumulator value: 6.
Iterators:
# #![allow(unused_variables)] #fn main() { let nums = 1..100; #}
Since we didn't do anything with the range, it didn't generate the sequence. Let's add the consumer:
# #![allow(unused_variables)] #fn main() { let nums = (1..100).collect::<Vec<i32>>(); #}

Now, collect() will demand that the range give it some numbers, so it will do the work of generating the sequence.

Ranges are one of two basic iterators that you'll see. The other is iter(), which can turn a vector into a simple iterator that gives you each element in turn:

# #![allow(unused_variables)] #fn main() { let nums = vec![1, 2, 3]; for num in nums.iter() { println!("{}", num); } #}

Iterator adaptors take an iterator and modify it somehow, producing a new iterator. One of the simplest is take(n), which returns an iterator over the next n elements of the original iterator, with no side effect on the original. Let's try it out with an infinite iterator:
# #![allow(unused_variables)] #fn main() { for i in (1..).take(5) { println!("{}", i); } #}
This will print
1 2 3 4 5
filter() is an adapter that takes a closure as an argument. This closure
returns
true or
false. The new iterator
filter() produces
only the elements that the closure returns
true for:
# #![allow(unused_variables)] #fn main() { for i in (1..100).filter(|&x| x % 2 == 0) { println!("{}", i); } #}

This will print all of the even numbers between one and a hundred. (Note that, unlike map, the closure passed to filter is passed a reference to the element instead of the element itself; the &x pattern here extracts the integer.)

You can chain all three things together: start with an iterator, adapt it a few times, and then consume the result:

# #![allow(unused_variables)] #fn main() { (1..) .filter(|&x| x % 2 == 0) .filter(|&x| x % 3 == 0) .take(5) .collect::<Vec<i32>>(); #}

This will give you a vector containing 6, 12, 18, 24, and 30. This is just a small taste of what iterators, iterator adaptors, and consumers can do: there are a number of really useful built-in iterators, and you can write your own as well. Iterators provide a safe, efficient way to manipulate all kinds of lists.
Concurrency.
Error Handling
Like most programming languages, Rust encourages the programmer to handle errors in a particular way. Generally speaking, error handling is divided into two broad categories: exceptions and return values. Rust opts for return values.
In this section, we will explore those stumbling blocks and demonstrate how to use the standard library to make error handling concise and ergonomic.
Table of Contents
This section starts with the simplest error-handling tool Rust offers: unwrap, which effectively says, "Give me the value, and if something goes wrong, panic and stop the program."
It would be better if we handled the absence of a value explicitly. With Option<T>, the compiler forces us to do exactly that; for example, a find function that searches a string for a character:

# #![allow(unused_variables)] #fn main() { // Searches `haystack` for the Unicode character `needle`. If one is found, // the byte offset of the character is returned. Otherwise, `None` is returned. fn find(haystack: &str, needle: char) -> Option<usize> { haystack.find(needle) } #}

But what about this delegating definition, which we used previously?
There was no case analysis there! Instead, the case analysis was put inside the
unwrap method for you. You could define it yourself if you want:
# #![allow(unused_variables)] #fn main() { enum Option<T> { None, Some(T) } impl<T> Option<T> { fn unwrap(self) -> T { match self { Option::Some(val) => val, Option::None => panic!("called `Option::unwrap()` on a `None` value"), } } } #}

The unwrap method abstracts away the case analysis. This is precisely the thing that makes unwrap ergonomic to use. Unfortunately, that panic! means that unwrap is not composable.
Composing
Option<T> values
In an example from before, we used find to discover the extension in a file name. Of course, not all file names have a . in them, so it's possible that the file name has no extension. This possibility of absence is encoded into the types using Option<T>. In other words, the compiler will force us to address the possibility that an extension does not exist. In our case, we only need to print out a message saying as such.
Getting the extension of a file name is a pretty common operation, so it makes sense to put it into a function:
# #![allow(unused_variables)] #fn main() { # fn find(haystack: &str, needle: char) -> Option<usize> { haystack.find(needle) } // Returns the extension of the given file name, where the extension is defined // as all characters following the first `.`. // If `file_name` has no `.`, then `None` is returned. fn extension_explicit(file_name: &str) -> Option<&str> { match find(file_name, '.') { None => None, Some(i) => Some(&file_name[i+1..]), } } #}
(Pro-tip: don't use this code. Use the extension method in the standard library instead.)

The code stays simple, but the important thing to notice is that the type of find forces us to consider the possibility of absence: the compiler won't let us accidentally forget the case where file_name has no extension, in which case we return None.
Rust has parametric polymorphism, so it is very easy to define a combinator that abstracts this pattern:
# #![allow(unused_variables)] #fn main() { fn map<F, T, A>(option: Option<T>, f: F) -> Option<A> where F: FnOnce(T) -> A { match option { None => None, Some(value) => Some(f(value)), } } #}
Indeed,
map is defined as a method on
Option<T> in the standard library.
As a method, it has a slightly different signature: methods take
self,
&self,
or
&mut self as their first argument.
Armed with our new combinator, we can rewrite our
extension_explicit method
to get rid of the case analysis:
# #![allow(unused_variables)] #fn main() { # fn find(haystack: &str, needle: char) -> Option<usize> { haystack.find(needle) } // Returns the extension of the given file name, where the extension is defined // as all characters following the first `.`. // If `file_name` has no `.`, then `None` is returned. fn extension(file_name: &str) -> Option<&str> { find(file_name, '.').map(|i| &file_name[i+1..]) } #}
One other pattern we commonly find is assigning a default value to the case when an Option value is None. For example, maybe your program assumes that the extension of a file is rs even if none is present. As you might imagine, the case analysis for this is not specific to file extensions; it can work with any Option<T>:
# #![allow(unused_variables)] #fn main() { fn unwrap_or<T>(option: Option<T>, default: T) -> T { match option { None => default, Some(value) => value, } } #}
Like with
map above, the standard library implementation is a method instead
of a free function.
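A quick sketch of the method forms in action: unwrap_or takes an eagerly evaluated default, and its sibling unwrap_or_else takes a closure evaluated only when the value is None.

```rust
// unwrap_or supplies an eagerly evaluated default; unwrap_or_else
// defers the default to a closure that runs only on None.
fn main() {
    let present: Option<&str> = Some("md");
    let absent: Option<&str> = None;

    assert_eq!(present.unwrap_or("rs"), "md"); // default ignored
    assert_eq!(absent.unwrap_or("rs"), "rs");  // default used
    assert_eq!(absent.unwrap_or_else(|| "rs"), "rs");
    println!("defaults applied");
}
```

Prefer unwrap_or_else when computing the default is expensive, since its closure runs only in the None case.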
(Note that unwrap_or is defined as a method on Option<T> in the standard library, so we can use it directly; also check out the more general unwrap_or_else, which takes a closure to compute the default lazily.)

There is one more combinator worth special attention: and_then. It makes it easy to compose distinct computations that admit the possibility of absence. For example, finding the extension of a file path first requires extracting the file name, which may itself be absent. You might think that we could use the map combinator to reduce the case analysis, but its type doesn't quite fit:
fn file_path_ext(file_path: &str) -> Option<&str> { file_name(file_path).map(|x| extension(x)) // This causes a compilation error. }
The
map function here wraps the value returned by the
extension function
inside an
Option<_> and since the
extension function itself returns an
Option<&str> the expression
file_name(file_path).map(|x| extension(x))
actually returns an
Option<Option<&str>>.
But since
file_path_ext just returns
Option<&str> (and not
Option<Option<&str>>) we get a compilation error.
The result of the function taken by map as input is always rewrapped with
Some. Instead, we need something like
map, but which
allows the caller to return a
Option<_> directly without wrapping it in
another
Option<_>.
Its generic implementation is even simpler than
map:
# #![allow(unused_variables)] #fn main() { fn and_then<F, T, A>(option: Option<T>, f: F) -> Option<A> where F: FnOnce(T) -> Option<A> { match option { None => None, Some(value) => f(value), } } #}
Now we can rewrite our
file_path_ext function without explicit case analysis:
# #![allow(unused_variables)] #fn main() { # fn extension(file_name: &str) -> Option<&str> { None } # fn file_name(file_path: &str) -> Option<&str> { None } fn file_path_ext(file_path: &str) -> Option<&str> { file_name(file_path).and_then(extension) } #}
Side note: Since
and_then essentially works like
map but returns an
Option<_> instead of an
Option<Option<_>> it is known as
flatmap in some
other languages.
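A small sketch of chaining (the half function here is ours, for illustration): each and_then step runs only when the previous step produced Some, and the chain stays a flat Option rather than nesting.

```rust
// Halve a number, but only if it is even; odd numbers yield None.
fn half(n: i32) -> Option<i32> {
    if n % 2 == 0 { Some(n / 2) } else { None }
}

fn main() {
    // 8 -> 4 -> 2: every step succeeds.
    assert_eq!(Some(8).and_then(half).and_then(half), Some(2));
    // 6 -> 3, then half(3) is None, and the chain short-circuits.
    assert_eq!(Some(6).and_then(half).and_then(half), None);
    // None propagates without calling the closure at all.
    assert_eq!(None::<i32>.and_then(half), None);
    println!("and_then chains verified");
}
```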
The
Result type
The
Result type is also
defined in the standard library:
# #![allow(unused_variables)] #fn main() { enum Result<T, E> { Ok(T), Err(E), } #}
The
Result type is a richer version of
Option. Instead of expressing the
possibility of absence like
Option does,
Result expresses the possibility
of error. Usually, the error is used to explain why the execution of some
computation failed. This is a strictly more general form of
Option. Consider
the following type alias, which is semantically equivalent to the real
Option<T> in every way:
# #![allow(unused_variables)] #fn main() { type Option<T> = Result<T, ()>; #}

This fixes the second type parameter of Result to always be (), the unit type. Result also has an unwrap method defined in the standard library. Let's define it:

# #![allow(unused_variables)] #fn main() { # enum Result<T, E> { Ok(T), Err(E) } impl<T, E: ::std::fmt::Debug> Result<T, E> { fn unwrap(self) -> T { match self { Result::Ok(val) => val, Result::Err(err) => panic!("called `Result::unwrap()` on an `Err` value: {:?}", err), } } } #}

This is effectively the same as our definition for Option::unwrap, except it includes the error value in the panic! message. This makes debugging easier, but it also requires us to add a Debug constraint on the error type parameter.
Parsing integers

The Rust standard library makes converting strings to integers dead simple with the parse method. To find the concrete error type, look at the FromStr trait (do a CTRL-F in your browser for "FromStr") and its associated type Err. In this case, it's std::num::ParseIntError.

Result has its own set of combinators, much like Option, including unwrap_or and and_then.
Additionally, since
Result has a second type parameter, there are
combinators that affect only the error type, such as
map_err (instead of
map) and
or_else
(instead of
and_then).
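A brief sketch of those error-side combinators, shown on a deliberate parse failure; both are standard library methods:

```rust
// map_err rewrites only the Err value; or_else runs a fallback
// computation only when the Result is an Err.
fn main() {
    let parsed: Result<i32, String> =
        "forty-two".parse::<i32>().map_err(|e| e.to_string());
    assert!(parsed.is_err());

    // Recover from the error by substituting a default Result.
    let recovered: Result<i32, String> = parsed.or_else(|_| Ok(0));
    assert_eq!(recovered, Ok(0));
    println!("recovered: {:?}", recovered);
}
```

An Ok value passes through both combinators untouched, mirroring how map and and_then leave Err values alone.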
The
Result type alias idiom
In the standard library, you may frequently see types like
Result<i32>. But wait, we defined Result to have two type parameters. How can we get away with only specifying one? The key is to define a Result type alias that fixes one of the type parameters, usually the error type, to a particular type:

# #![allow(unused_variables)] #fn main() { use std::num::ParseIntError; type Result<T> = ::std::result::Result<T, ParseIntError>; #}

The most prominent use of this idiom is in the standard library, with
io::Result. Typically, one writes
io::Result<T>, which makes it clear that you're using the
io
module's type alias instead of the plain definition from
std::result. (This idiom is also used for
fmt::Result.)
(A brief interlude: unwrapping isn't always evil. unwrap and its cousin expect are fine in examples and quick prototypes; in serious code, though, we compose the error handling.)

Here is a program that reads its first argument and doubles it, converting both possible failures into a String error:

use std::env; fn double_arg(mut argv: env::Args) -> Result<i32, String> { argv.nth(1) .ok_or("Please give at least one argument".to_owned()) .and_then(|arg| arg.parse::<i32>().map_err(|err| err.to_string())) .map(|n| 2 * n) } fn main() { match double_arg(env::args()) { Ok(n) => println!("{}", n), Err(err) => println!("Error: {}", err), } }
There are a couple new things in this example. The first is the use of the
Option::ok_or
combinator. This is one way to convert an
Option into a
Result. The
conversion requires you to specify what error to use if
Option is
None.
Like the other combinators we've seen, its definition is very simple:
# #![allow(unused_variables)] #fn main() { fn ok_or<T, E>(option: Option<T>, err: E) -> Result<T, E> { match option { Some(val) => Ok(val), None => Err(err), } } #}
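The standard library provides ok_or as a method on Option<T>; a quick usage sketch of our own (first_char is an illustrative helper, not from the original text):

```rust
// ok_or turns the None case into a caller-chosen error value.
fn first_char(s: &str) -> Result<char, String> {
    s.chars().next().ok_or("empty string".to_owned())
}

fn main() {
    assert_eq!(first_char("abc"), Ok('a'));
    assert_eq!(first_char(""), Err("empty string".to_owned()));
    println!("ok_or conversions verified");
}
```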
The other new combinator used here is
Result::map_err.
This is like Option::map, except the function is applied to the error value of a Result; an Ok(...) value is returned unmodified. We use map_err here because, when chaining with and_then, the error types of each step must match, so the ParseIntError is converted to a String.

The next example reads and parses the contents of a file. Its function accepts anything that can be converted to a Path. (This AsRef<Path> bound mirrors the bounds used on std::fs::File::open, which makes it ergonomic to use any kind of string as a file path.)
There are three different errors that can occur here:
- A problem opening the file.
- A problem reading data from the file.
- A problem parsing the data as a number.
The first two problems are described via the
std::io::Error type. We know this
because of the return types of
std::fs::File::open and
std::io::Read::read_to_string.
(Note that they both use the
Result type alias
idiom described previously. If you
click on the
Result type, you'll see the type
alias, and consequently, the underlying
io::Error type.) The third problem is described by the
std::num::ParseIntError type.

There's really no magic that makes all of this work: the try! macro encapsulates the case analysis and the early return. try! is like combinators in that it abstracts case analysis, but unlike combinators, it also abstracts control flow. Namely, it can abstract the early return pattern seen above.
Here is a simplified definition of a
try! macro:
# #![allow(unused_variables)] #fn main() { macro_rules! try { ($e:expr) => (match $e { Ok(val) => val, Err(err) => return Err(err), }); } #}

(This is not its real definition; we will see the real one shortly.) Using try! makes composing fallible operations very easy, but all of the errors that flow through it must share one type. When the failures are genuinely different, such as an I/O error (io::ErrorKind distinguishes its varieties) and a parse error, we can define our own error type with a variant for each:

# #![allow(unused_variables)] #fn main() { use std::io; use std::num; // We derive `Debug` because all types should probably derive `Debug`. #[derive(Debug)] enum CliError { Io(io::Error), Parse(num::ParseIntError), } #}

Two standard traits help tie custom error types together:
std::error::Error and
std::convert::From. While
Error
is designed specifically for generically describing errors, the
From
trait serves a more general role for converting values between two
distinct types.
The
Error trait
The
Error trait is defined in the standard
library:
# #![allow(unused_variables)] #fn main() { use std::fmt::{Debug, Display}; trait Error: Debug + Display { /// A short description of the error. fn description(&self) -> &str; /// The lower level cause of this error, if any. fn cause(&self) -> Option<&Error> { None } } #}

Implementing it for CliError mostly means dispatching to the underlying errors:

# #![allow(unused_variables)] #fn main() { # use std::io; # use std::num; # use std::error; # #[derive(Debug)] # enum CliError { Io(io::Error), Parse(num::ParseIntError) } # impl ::std::fmt::Display for CliError { # fn fmt(&self, f: &mut ::std::fmt::Formatter) -> ::std::fmt::Result { # match *self { # CliError::Io(ref err) => write!(f, "IO error: {}", err), # CliError::Parse(ref err) => write!(f, "Parse error: {}", err), # } # } # } impl error::Error for CliError { fn description(&self) -> &str { match *self { CliError::Io(ref err) => err.description(), CliError::Parse(ref err) => err.description(), } } } #}

The From trait, from std::convert, describes a conversion from one type into another:

# #![allow(unused_variables)] #fn main() { trait From<T> { fn from(T) -> Self; } #}

In particular, the standard library provides an impl that converts any type implementing Error into a Box<Error>, and this is the key to making the try! macro flexible. Recall the simplified try! definition:
# #![allow(unused_variables)] #fn main() { macro_rules! try { ($e:expr) => (match $e { Ok(val) => val, Err(err) => return Err(err), }); } #}
This is not its real definition. Its real definition is in the standard library:
# #![allow(unused_variables)] #fn main() { macro_rules! try { ($e:expr) => (match $e { Ok(val) => val, Err(err) => return Err(::std::convert::From::from(err)), }); } #}
There's one tiny but powerful change: the error value is passed through
From::from. This makes the
try! macro much more powerful because it gives
you automatic type conversion for free.
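A minimal, self-contained sketch of this automatic conversion (using today's ? operator and Box<dyn Error> syntax in place of try! and bare Box<Error>; parse_doubled is a hypothetical helper, not from the book):

```rust
use std::error::Error;

// The ? operator, like the real try!, passes the error through
// From::from, so the ParseIntError produced by parse() is converted
// into Box<dyn Error> automatically.
fn parse_doubled(s: &str) -> Result<i32, Box<dyn Error>> {
    let n: i32 = s.parse()?;
    Ok(2 * n)
}

fn main() {
    assert_eq!(parse_doubled("21").unwrap(), 42);
    assert!(parse_doubled("not a number").is_err());
}
```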
Armed with our more powerful
try! macro, let's take a look at code we wrote
previously to read a file and convert its contents to an integer:
# #![allow(unused_variables)] #fn main() { use std::error::Error; use std::fs::File; use std::io::Read; use std::path::Path; fn file_double<P: AsRef<Path>>(file_path: P) -> Result<i32, Box<Error>> { let mut file = try!(File::open(file_path)); let mut contents = String::new(); try!(file.read_to_string(&mut contents)); let n = try!(contents.trim().parse::<i32>()); Ok(2 * n) } #}
This works because the standard library has one impl that lets From convert any error type into a
Box<Error>:
# #![allow(unused_variables)] #fn main() { impl<'a, E: Error + 'a> From<E> for Box<Error + 'a> #}
We can still obtain a generic description of the error with
description
and find the underlying cause with
cause, but the
limitation remains:
Box<Error> is opaque. (N.B. This isn't entirely
true because Rust does have runtime reflection, which is useful in
some scenarios that are beyond the scope of this
section.):
# #![allow(unused_variables)] #fn main() {
try! and
From.:
# #![allow(unused_variables)] #fn main() { # #:
# #![allow(unused_variables)] #fn main() { #:
# #![allow(unused_variables)] #fn main() { use std::io; use std::num; enum CliError { Io(io::Error), ParseInt(num::ParseIntError), ParseFloat(num::ParseFloatError), } #}
And add a new
From impl:
# #![allow(unused_variables)] #fn main() { impl From<num::ParseFloatError> for CliError { fn from(err: num::ParseFloatError) -> CliError { CliError::ParseFloat(err) } } #}
ErrorKind) or keep it hidden (like
ParseIntError). Regardless
of how you do it, it's usually good practice to at least provide some
information about the error beyond its
String
representation. But certainly, this will vary depending on use cases.
At a minimum, you should probably implement the
Error trait. Where appropriate, provide
From impls as well; for example,
csv::Error
provides
From impls for both
io::Error and
byteorder::Error.
Finally, depending on your tastes, you may also want to define a
Result type alias, particularly if your
library defines a single error type. This is used in the standard library
for
io::Result
and
fmt::Result.
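A sketch of this convention follows; the CliError type and lookup function are hypothetical, invented purely to illustrate the alias:

```rust
use std::fmt;

// A hypothetical library error type, for illustration only.
#[derive(Debug, PartialEq)]
pub enum CliError {
    NotFound,
}

impl fmt::Display for CliError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "no matching item was found")
    }
}

// The alias: callers can now write Result<u32> instead of
// Result<u32, CliError>, just as io::Result does for io::Error.
pub type Result<T> = std::result::Result<T, CliError>;

fn lookup(found: bool) -> Result<u32> {
    if found { Ok(1) } else { Err(CliError::NotFound) }
}

fn main() {
    assert_eq!(lookup(true), Ok(1));
    assert_eq!(lookup(false), Err(CliError::NotFound));
}
```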
Case study: A program to read population data
This section uses the
getopts,
csv,
and
rustc-serialize crates.
Initial setup
We're not going to spend a lot of time on setting up a project with Cargo because it is already covered well in the Cargo section. After parsing the command line arguments with Getopts, the parser returns a struct that records matches
for defined options, and remaining "free" arguments.
From there, we can get information about the flags, for
instance, whether they were passed in, and what arguments they have.
We all write code differently, but error handling is usually the last thing we
want to think about. This isn't great for the overall design of a program, but
it can be useful for rapid prototyping. Because Rust forces us to be explicit
about error handling (by making us call
unwrap), it is easy to see which
parts of our program can cause errors.
use std::fs:]; let file =:
File::open can return an
io::Error.
csv::Reader::decode decodes one record at a time, and decoding a record (look at the
Item associated type on the
Iterator impl) can produce a
csv::Error.
-.
use std::path::Path; struct Row { // This struct remains
try! macro so that errors are returned to the caller instead of panicking the program.
- Handle the error in
main.
Let's try it:
use std::error::Error; // The rest of the code before this is unchanged. fn search<P: AsRef<Path>> (file_path: P, city: &str) -> Result<Vec<PopulationCount>, Box<Error>> { let mut found = vec![]; let file = try!(File::open(file_path)); let mut rdr = csv::Reader::from_reader(file); for row in rdr.decode::<Row>() { let row = try!(row); match row.population { None => { } Some(count) => if row.city == city { found.push(PopulationCount { city: row.city, country: row.country, count: count, }); }, } } if found.is_empty() { Err(From::from("No matching cities with a population were found.")) } else { Ok(found) } }
At the end of
search we also convert a plain string to an error type
by using the corresponding
From impls:
// We are making use of this impl in the code above, since we call `From::from` // on a `&'static str`. impl<'a> From<&'a str> for Box<Error> // But this is also useful when you need to allocate a new string for an // error message, usually with `format!`. impl From<String> for Box<Error>
Since
search now returns a
Result<T, E>,
main should use case analysis
when calling
search:
... match search(data_path, city) { Ok(pops) => { for pop in pops { println!("{}, {}: {:?}", pop.city, pop.country, pop.count); } } Err(err) => println!("{}", err) } ...
Of course we need to adapt the argument handling code:
... let mut opts = Options::new(); opts.optopt("f", "file", "Choose an input file, instead of using STDIN.", "NAME"); opts.optflag("h", "help", "Show this usage message."); ... let data_path = matches.opt_str("f"); let city = if !matches.free.is_empty() { &matches.free[0] } else { print_usage(&program, opts); return; }; match search(&data_path, city) { Ok(pops) => { for pop in pops { println!("{}, {}: {:?}", pop.city, pop.country, pop.count); } } Err(err) => println!("{}", err) } ...
We've made the user experience a bit nicer by showing the usage message,
instead of a panic from an out-of-bounds index, when
city, the
remaining free argument, is not present.
Modifying
search is slightly trickier. The
csv crate can build a
parser out of
any type that implements
io::Read.
But how can we use the same code over both types? There's actually a
couple ways we could go about this. One way is to write
search such
that it is generic on some type parameter
R that satisfies
io::Read. Another way is to use trait objects:
use std::io; // The rest of the code before this is unchanged. fn search<P: AsRef<Path>> (file_path: &Option<P>, city: &str) -> Result<Vec<PopulationCount>, Box<Error>> { let mut found = vec![]; let input: Box<io::Read> = match *file_path { None => Box::new(io::stdin()), Some(ref file_path) => Box::new(try!(File::open(file_path))), }; ... }
use std::fmt; impl fmt::Display for CliError { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { match *self { CliError::Io(ref err) => err.fmt(f), CliError::Csv(ref err) => err.fmt(f), CliError::NotFound => write!(f, "No matching cities with a population were found."), } } } impl Error for CliError { fn description(&self) -> &str { match *self { CliError::Io(ref err) => err.description(), CliError::Csv(ref err) => err.description(), CliError::NotFound => "not found", } } fn cause(&self) -> Option<&Error> { match *self { CliError::Io(ref err) => Some(err), CliError::Csv(ref err) => Some(err), // Our custom error doesn't have an underlying cause, // but we could modify it so that it does. CliError::NotFound => None, } } }
Once we've done that, Getopts does the rest:
... let mut opts = Options::new(); opts.optopt("f", "file", "Choose an input file, instead of using STDIN.", "NAME"); opts.optflag("h", "help", "Show this usage message."); opts.optflag("q", "quiet", "Silences errors and warnings."); ...
Now we only need to implement our “quiet” functionality. This requires us to
tweak the case analysis in
main:
use std::process; ... match search(&data_path, city) { Ok(pops) => { for pop in pops { println!("{}, {}: {:?}", pop.city, pop.country, pop.count); } } Err(CliError::NotFound) if matches.opt_present("q") => process::exit(1), Err(err) => panic!("{}", err), } ...
The Short Story
- If you're writing short example code that might be overwhelmed by error handling, it's probably fine to use
unwrap (whether that's
Result::unwrap,
Option::unwrap or preferably
Option::expect). If you're writing a quick 'n' dirty program and feel ashamed about panicking anyway, use either a
String or a
Box<Error> for your error type.
- Otherwise, in a program, define your own error types with appropriate
From and
Error impls to make the
try! macro more ergonomic.
- If you're writing a library and your code can produce errors, define your own error type and implement the
std::error::Error trait. Where appropriate, implement
Fromto make both your library code and the caller's code easier to write. (Because of Rust's coherence rules, callers will not be able to impl
Fromon your error type, so your library should do it.)
- Learn the combinators defined on
Option and
Result. Using them exclusively can be a bit tiring at times, but I've personally found a healthy mix of
try! and combinators to be quite appealing.
and_then,
map and
unwrap_or are my favorites.
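A small chain using those three combinators together — a sketch, not from the original text — parses a string, keeps the number only if it is positive, doubles it, and falls back to 0 on any failure along the way:

```rust
fn main() {
    let doubled = "21"
        .parse::<i32>()
        .ok() // Result<i32, _> -> Option<i32>
        .and_then(|n| if n > 0 { Some(n) } else { None })
        .map(|n| n * 2)
        .unwrap_or(0);
    assert_eq!(doubled, 42);

    // A failed parse short-circuits the whole chain to the default.
    let fallback = "oops"
        .parse::<i32>()
        .ok()
        .map(|n| n * 2)
        .unwrap_or(0);
    assert_eq!(fallback, 0);
}
```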
Choosing your Guarantees
One important feature of Rust is that it lets us control the costs and guarantees of a program.
There are various “wrapper type” abstractions in the Rust standard library which embody a multitude of trade-offs; when choosing one, analyze the trade-offs as done above and pick
one.
&[T] and
&mut [T] are slices; they consist of a pointer and a length and can refer to a portion of a vector or array.
&mut [T] can have its elements mutated, however its length cannot be touched.
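A short sketch of both slice forms, showing that a &mut [T] lets you change elements while the length stays fixed:

```rust
fn main() {
    let mut v = vec![1, 2, 3, 4];

    // An immutable slice referring to the middle two elements.
    let mid: &[i32] = &v[1..3];
    assert_eq!(mid, &[2, 3][..]);

    // A mutable slice: elements can be mutated, but there is no way
    // to push to or pop from the slice itself — its length is fixed.
    let all: &mut [i32] = &mut v[..];
    all[0] = 10;

    assert_eq!(v, vec![10, 2, 3, 4]);
}
```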
Foreign Function Interface
To follow along with the examples in this section, add a dependency on the
libc crate in your Cargo.toml
and add
extern crate libc; to your crate root.
Calling foreign functions
The following is a minimal example of calling a foreign function which will compile if snappy is installed:
extern crate libc; use libc::size_t; #[link(name = "snappy")] extern { fn snappy_max_compressed_length(source_length: size_t) -> size_t; } fn main() { let x = unsafe { snappy_max_compressed_length(100) }; println!("max compressed length of a 100 byte buffer: {}", x); }
extern crate libc;.
# extern crate libc; #.
# extern crate libc; #.
# extern crate libc; #.
# extern crate libc; #:
#[repr(C)]_variables)] :
extern crate libc; #.
extern crate libc;:
extern crate libc; #.
extern crate libc;)(void (*)(int), int)) { ... }
No
transmute required!
Calling Rust code from C
You may wish to compile Rust code in a way so that it can be called from C. This is fairly easy, but requires a few things:
#[no_mangle] pub extern fn hello_rust() -> *const u8 { "Hello, world!\0".as_ptr() } # fn main() {}
The
extern makes this function adhere to the C calling convention, as
discussed above in "Foreign Calling
Conventions". The
no_mangle
attribute turns off Rust's name mangling, so that it is easier to link to.
FFI and panics
It’s important to be mindful of
panic!s when working with FFI. A
panic!
across an FFI boundary is undefined behavior. If you're writing code that may panic, you should run it in a closure with
catch_unwind.
Representing opaque structs
Sometimes, a C library wants to provide a pointer to something, but not let you know the internal details of the thing it wants. The simplest way is to use a
void * argument:
void foo(void *arg); void bar(void *arg);
We can represent this in Rust with the
c_void type:
To get more type safety, we can instead create opaque types with a private field:
extern crate libc; #[repr(C)] pub struct Foo { private: [u8; 0] } #[repr(C)] pub struct Bar { private: [u8; 0] } extern "C" { pub fn foo(arg: *mut Foo); pub fn bar(arg: *mut Bar); } # fn main() {}
By including a private field and no constructor,
we create an opaque type that we can’t instantiate outside of this module.
An empty array is both zero-size and compatible with
#[repr(C)].
But because our
Foo and
Bar types are
different, we’ll get type safety between the two of them, so we cannot
accidentally pass a pointer to
Foo to
bar().
Borrow and AsRef
The
Borrow and
AsRef traits are very similar, but
different. Here’s a quick refresher on what these two traits mean.
Borrow
The
Borrow trait is used when you’re writing a data structure, and you want to use either an owned or borrowed type as synonymous for some purpose. For example,
HashMap's
get method uses
Borrow:
# #![allow(unused_variables)] #fn main() { fn get<Q: ?Sized>(&self, k: &Q) -> Option<&V> where K: Borrow<Q>, Q: Hash + Eq #}
# #![allow(unused_variables)] #fn main() { use std::borrow::Borrow; use std::fmt::Display; fn foo<T: Borrow<i32> + Display>(a: T) { println!("a is borrowed: {}", a); } let mut i = 5; foo(&i); foo(&mut i); #}
This will print out
a is borrowed: 5 twice.
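The HashMap case mentioned above can be sketched concretely: because String implements Borrow<str>, a map keyed by String can be queried with a plain &str, with no owned String allocated for the lookup:

```rust
use std::collections::HashMap;

fn main() {
    let mut map = HashMap::new();
    map.insert("key".to_string(), 1);

    // String: Borrow<str> lets get() accept a &str directly.
    assert_eq!(map.get("key"), Some(&1));
    assert_eq!(map.get("absent"), None);
}
```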
AsRef
The
AsRef trait is a conversion trait. It’s used for converting some value to
a reference in generic code. Like this:
# #![allow(unused_variables)] #fn main() { let s = "Hello".to_string(); fn foo<T: AsRef<str>>(s: T) { let slice = s.as_ref(); } #}
Which should I use?
We can see how they’re kind of the same: they both deal with owned and borrowed versions of some type. However, they’re a bit different.
Choose
Borrow when you want to abstract over different kinds of borrowing, or when you're building a data structure that treats owned and borrowed values in equivalent ways, such as hashing and comparison. Choose
AsRef when you want to convert something to a reference directly, and you're writing generic code.
Release Channels
The Rust project uses a concept called ‘release channels’ to manage releases. It’s important to understand this process to choose which version of Rust your project should use.
Overview
There are three channels for Rust releases:
- Nightly
- Beta
- Stable
New nightly releases are created once a day. Every six weeks, the latest
nightly release is promoted to ‘Beta’. At that point, it will only receive
patches to fix serious errors. Six weeks later, the beta is promoted to
‘Stable’, and becomes the next release of
1.x.
This process happens in parallel. So every six weeks, on the same day,
nightly goes to beta, beta goes to stable. When
1.x is released, at
the same time,
1.(x + 1)-beta is released, and the nightly becomes the
first version of
1.(x + 2)-nightly.
Choosing a version
Generally speaking, unless you have a specific reason, you should be using the stable release channel. These releases are intended for a general audience.
However, depending on your interest in Rust, you may choose to use nightly instead. The basic trade-off is this: in the nightly channel, you can use unstable, new Rust features. However, unstable features are subject to change, and so any new nightly release may break your code. If you use the stable release, you cannot use experimental features, but the next release of Rust will not cause significant issues through breaking changes.
Helping the ecosystem through CI
What about beta? We encourage all Rust users who use the stable release channel to also test against the beta channel in their continuous integration systems. This will help alert the team in case there’s an accidental regression.
Additionally, testing against nightly can catch regressions even sooner, and so if you don’t mind a third build, we’d appreciate testing against all channels.
Using Rust Without the Standard Library
Rust's standard library provides a lot of useful functionality, but assumes support for various features of its host system: threads, networking, heap allocation, and others. Crates that need to avoid those assumptions can disable the standard library with the
#![no_std] attribute.
Procedural Macros (and custom Derive)
As you've seen throughout the rest of the book, Rust provides a mechanism called "derive" that lets you implement traits easily. For example,
# #![allow(unused_variables)] #fn main() { #[derive(Debug)] struct Point { x: i32, y: i32, } #}
is a lot simpler than
# #![allow(unused_variables)] #fn main() { struct Point { x: i32, y: i32, } use std::fmt; impl fmt::Debug for Point { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!(f, "Point {{ x: {}, y: {} }}", self.x, self.y) } } #}
Rust includes several traits that you can derive, but it also lets you define your own. We can accomplish this task through a feature of Rust called "procedural macros." Eventually, procedural macros will allow for all sorts of advanced metaprogramming in Rust, but today, they're only for custom derive.
Let's build a very simple trait, and derive it with custom derive.
Hello World
So the first thing we need to do is start a new crate for our project.
$ cargo new --bin hello-world
All we want is to be able to call
hello_world() on a derived type. Something
like this:
#[derive(HelloWorld)] struct Pancakes; fn main() { Pancakes::hello_world(); }
With some kind of nice output, like
Hello, World! My name is Pancakes..
Let's go ahead and write up what we think our macro will look like from a user
perspective. In
src/main.rs we write:
#[macro_use] extern crate hello_world_derive; trait HelloWorld { fn hello_world(); } #[derive(HelloWorld)] struct FrenchToast; #[derive(HelloWorld)] struct Waffles; fn main() { FrenchToast::hello_world(); Waffles::hello_world(); }
Great. So now we just need to actually write the procedural macro. At the
moment, procedural macros need to be in their own crate. Eventually, this
restriction may be lifted, but for now, it's required. As such, there's a
convention; for a crate named
foo, a custom derive procedural macro is called
foo-derive. Let's start a new crate called
hello-world-derive inside our
hello-world project.
$ cargo new hello-world-derive
To make sure that our
hello-world crate is able to find this new crate we've
created, we'll add it to our toml:
[dependencies] hello-world-derive = { path = "hello-world-derive" }
As for the source of our
hello-world-derive crate, here's an example:
extern crate proc_macro; extern crate syn; #[macro_use] extern crate quote; use proc_macro::TokenStream; #[proc_macro_derive(HelloWorld)] pub fn hello_world(input: TokenStream) -> TokenStream { // Construct a string representation of the type definition let s = input.to_string(); // Parse the string representation let ast = syn::parse_derive_input(&s).unwrap(); // Build the impl let gen = impl_hello_world(&ast); // Return the generated impl gen.parse().unwrap() }
So there is a lot going on here. We have introduced two new crates:
syn and
quote. As you may have noticed,
input: TokenStream is immediately converted
to a
String. This
String is a string representation of the Rust code for which
we are deriving
HelloWorld. At the moment, the only thing you can do with a
TokenStream is convert it to a string. A richer API will exist in the future.
So what we really need is to be able to parse Rust code into something
usable. This is where
syn comes to play.
syn is a crate for parsing Rust
code. The other crate we've introduced is
quote. It's essentially the dual of
syn as it will make generating Rust code really easy. We could write this
stuff on our own, but it's much simpler to use these libraries. Writing a full
parser for Rust code is no simple task.
The comments seem to give us a pretty good idea of our overall strategy. We
are going to take a
String of the Rust code for the type we are deriving, parse
it using
syn, construct the implementation of
hello_world (using
quote),
then pass it back to the Rust compiler.
One last note: you'll see some
unwrap()s there. If you want to provide an
error for a procedural macro, then you should
panic! with the error message.
In this case, we're keeping it as simple as possible.
Great, so let's write
impl_hello_world(&ast).
fn impl_hello_world(ast: &syn::DeriveInput) -> quote::Tokens { let name = &ast.ident; quote! { impl HelloWorld for #name { fn hello_world() { println!("Hello, World! My name is {}", stringify!(#name)); } } } }
So this is where quote comes in. The
ast argument is a struct that gives us
a representation of our type (which can be either a
struct or an
enum).
There is some useful information there. We are able to get the name of the
type using
ast.ident. The
quote! macro lets us write up the Rust code
that we wish to return and convert it into
Tokens.
quote! lets us use some
really cool templating mechanics; we simply write
#name and
quote! will
replace it with the variable named
name. You can even do some repetition
similar to how regular macros work. You should check out the
docs for a good introduction.
So I think that's it. Oh, well, we do need to add dependencies for
syn and
quote in the
Cargo.toml for
hello-world-derive.
[dependencies] syn = "0.11.11" quote = "0.3.15"
That should be it. Let's try to compile
hello-world.
error: the `#[proc_macro_derive]` attribute is only usable with crates of the `proc-macro` crate type --> hello-world-derive/src/lib.rs:8:3 | 8 | #[proc_macro_derive(HelloWorld)] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Oh, so it appears that we need to declare that our
hello-world-derive crate is
a
proc-macro crate type. How do we do this? Like this:
[lib] proc-macro = true
Ok so now, let's compile
hello-world. Executing
cargo run now yields:
Hello, World! My name is FrenchToast Hello, World! My name is Waffles
We've done it!
Custom Attributes
In some cases it might make sense to allow users some kind of configuration.
For example, the user might want to overwrite the name that is printed in the
hello_world() method.
This can be achieved with custom attributes:
#[derive(HelloWorld)] #[HelloWorldName = "the best Pancakes"] struct Pancakes; fn main() { Pancakes::hello_world(); }
If we try to compile this though, the compiler will respond with an error:
error: The attribute `HelloWorldName` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
The compiler needs to know that we're handling this attribute and to not respond with an error.
This is done in the
hello-world-derive crate by adding
attributes to the
proc_macro_derive attribute:
#[proc_macro_derive(HelloWorld, attributes(HelloWorldName))] pub fn hello_world(input: TokenStream) -> TokenStream
Multiple attributes can be specified that way.
Raising Errors
Let's assume that we do not want to accept enums as input to our custom derive method.
This condition can be easily checked with the help of
syn.
But how do we tell the user, that we do not accept enums?
The idiomatic way to report errors in procedural macros is to panic:
fn impl_hello_world(ast: &syn::DeriveInput) -> quote::Tokens { let name = &ast.ident; // Check if derive(HelloWorld) was specified for a struct if let syn::Body::Struct(_) = ast.body { // Yes, this is a struct quote! { impl HelloWorld for #name { fn hello_world() { println!("Hello, World! My name is {}", stringify!(#name)); } } } } else { //Nope. This is an Enum. We cannot handle these! panic!("#[derive(HelloWorld)] is only defined for structs, not for enums!"); } }
If a user now tries to derive
HelloWorld from an enum they will be greeted with following, hopefully helpful, error:
error: custom derive attribute panicked --> src/main.rs | | #[derive(HelloWorld)] | ^^^^^^^^^^ | = help: message: #[derive(HelloWorld)] is only defined for structs, not for enums!
Syntax Index
Keywords
as: primitive casting, or disambiguating the specific trait containing an item. See Casting Between Types (
as), Universal Function Call Syntax (Angle-bracket Form), Associated Types.
break: break out of loop. See Loops (Ending Iteration Early).
const: constant items and constant raw pointers. See
const and
static, Raw Pointers.
continue: continue to next loop iteration. See Loops (Ending Iteration Early).
crate: external crate linkage. See Crates and Modules (Importing External Crates).
else: fallback for
ifand
if let constructs. See
if,
if let.
enum: defining enumeration. See Enums.
extern: external crate, function, and variable linkage. See Crates and Modules (Importing External Crates), Foreign Function Interface.
false: boolean false literal. See Primitive Types (Booleans).
fn: function definition and function pointer types. See Functions.
for: iterator loop, part of trait
impl syntax, and higher-ranked lifetime syntax. See Loops (
for), Method Syntax.
if: conditional branching. See
if,
if let.
impl: inherent and trait implementation blocks. See Method Syntax.
in: part of
for loop syntax. See Loops (
for).
let: variable binding. See Variable Bindings.
loop: unconditional, infinite loop. See Loops (
loop).
match: pattern matching. See Match.
mod: module declaration. See Crates and Modules (Defining Modules).
move: part of closure syntax. See Closures (
move closures).
mut: denotes mutability in pointer types and pattern bindings. See Mutability.
pub: denotes public visibility in
struct fields,
impl blocks, and modules. See Crates and Modules (Exporting a Public Interface).
ref: by-reference binding. See Patterns (
ref and
ref mut).
return: return from function. See Functions (Early Returns).
Self: implementor type alias. See Traits.
self: method subject. See Method Syntax (Method Calls).
static: global variable. See
const and
static (
static).
struct: structure definition. See Structs.
trait: trait definition. See Traits.
true: boolean true literal. See Primitive Types (Booleans).
type: type alias, and associated type definition. See
type Aliases, Associated Types.
unsafe: denotes unsafe code, functions, traits, and implementations. See Unsafe.
use: import symbols into scope. See Crates and Modules (Importing Modules with
use).
where: type constraint clauses. See Traits (
where clause).
while: conditional loop. See Loops (
while).
Operators and Symbols
!(
ident!(…),
ident!{…},
ident![…]): denotes macro expansion. See Macros.
!(
!expr): bitwise or logical complement. Overloadable (
Not).
!=(
var != expr): nonequality comparison. Overloadable (
PartialEq).
%(
expr % expr): arithmetic remainder. Overloadable (
Rem).
%=(
var %= expr): arithmetic remainder & assignment. Overloadable (
RemAssign).
&(
expr & expr): bitwise and. Overloadable (
BitAnd).
&(
&expr,
&mut expr): borrow. See References and Borrowing.
&(
&type,
&mut type,
&'a type,
&'a mut type): borrowed pointer type. See References and Borrowing.
&=(
var &= expr): bitwise and & assignment. Overloadable (
BitAndAssign).
&&(
expr && expr): logical and.
*(
expr * expr): arithmetic multiplication. Overloadable (
Mul).
*(
*expr): dereference.
*(
*const type,
*mut type): raw pointer. See Raw Pointers.
*=(
var *= expr): arithmetic multiplication & assignment. Overloadable (
MulAssign).
+(
expr + expr): arithmetic addition. Overloadable (
Add).
+(
trait + trait,
'a + trait): compound type constraint. See Traits (Multiple Trait Bounds).
+=(
var += expr): arithmetic addition & assignment. Overloadable (
AddAssign).
,: argument and element separator. See Attributes, Functions, Structs, Generics, Match, Closures, Crates and Modules (Importing Modules with
use).
-(
expr - expr): arithmetic subtraction. Overloadable (
Sub).
-(
- expr): arithmetic negation. Overloadable (
Neg).
-=(
var -= expr): arithmetic subtraction & assignment. Overloadable (
SubAssign).
->(
fn(…) -> type,
|…| -> type): function and closure return type. See Functions, Closures.
.(
expr.ident): member access. See Structs, Method Syntax.
..(
..,
expr..,
..expr,
expr..expr): right-exclusive range literal.
..(
..expr): struct literal update syntax. See Structs (Update syntax).
..(
variant(x, ..),
struct_type { x, .. }): "and the rest" pattern binding. See Patterns (Ignoring bindings).
...(
...expr,
expr...expr) in an expression: inclusive range expression. See Iterators.
...(
expr...expr) in a pattern: inclusive range pattern. See Patterns (Ranges).
/(
expr / expr): arithmetic division. Overloadable (
Div).
/=(
var /= expr): arithmetic division & assignment. Overloadable (
DivAssign).
:(
pat: type,
ident: type): constraints. See Variable Bindings, Functions, Structs, Traits.
:(
ident: expr): struct field initializer. See Structs.
:(
'a: loop {…}): loop label. See Loops (Loops Labels).
;: statement and item terminator.
;(
[…; len]): part of fixed-size array syntax. See Primitive Types (Arrays).
<<(
expr << expr): left-shift. Overloadable (
Shl).
<<=(
var <<= expr): left-shift & assignment. Overloadable (
ShlAssign).
<(
expr < expr): less-than comparison. Overloadable (
PartialOrd).
<=(
var <= expr): less-than or equal-to comparison. Overloadable (
PartialOrd).
=(
var = expr,
ident = type): assignment/equivalence. See Variable Bindings,
type Aliases, generic parameter defaults.
==(
var == expr): equality comparison. Overloadable (
PartialEq).
=>(
pat => expr): part of match arm syntax. See Match.
>(
expr > expr): greater-than comparison. Overloadable (
PartialOrd).
>=(
var >= expr): greater-than or equal-to comparison. Overloadable (
PartialOrd).
>>(
expr >> expr): right-shift. Overloadable (
Shr).
>>=(
var >>= expr): right-shift & assignment. Overloadable (
ShrAssign).
@(
ident @ pat): pattern binding. See Patterns (Bindings).
^(
expr ^ expr): bitwise exclusive or. Overloadable (
BitXor).
^=(
var ^= expr): bitwise exclusive or & assignment. Overloadable (
BitXorAssign).
|(
expr | expr): bitwise or. Overloadable (
BitOr).
|(
pat | pat): pattern alternatives. See Patterns (Multiple patterns).
|(
|…| expr): closures. See Closures.
|=(
var |= expr): bitwise or & assignment. Overloadable (
BitOrAssign).
||(
expr || expr): logical or.
_: "ignored" pattern binding (see Patterns (Ignoring bindings)). Also used to make integer-literals readable (see Reference (Integer literals)).
?(
expr?): Error propagation. Returns early when
Err(_) is encountered, unwraps otherwise. Similar to the
try! macro.
Other Syntax
'ident: named lifetime or loop label. See Lifetimes, Loops (Loops Labels).
…u8,
…i32,
…f64,
…usize, …: numeric literal of specific type.
"…": string literal. See Strings.
r"…",
r#"…"#,
r##"…"##, …: raw string literal, escape characters are not processed. See Reference (Raw String Literals).
b"…": byte string literal, constructs a
[u8] instead of a string. See Reference (Byte String Literals).
br"…",
br#"…"#,
br##"…"##, …: raw byte string literal, combination of raw and byte string literal. See Reference (Raw Byte String Literals).
'…': character literal. See Primitive Types (
char).
b'…': ASCII byte literal.
|…| expr: closure. See Closures.
ident::ident: path. See Crates and Modules (Defining Modules).
::path: path relative to the crate root (i.e. an explicitly absolute path). See Crates and Modules (Re-exporting with
pub use).
self::path: path relative to the current module (i.e. an explicitly relative path). See Crates and Modules (Re-exporting with
pub use).
super::path: path relative to the parent of the current module. See Crates and Modules (Re-exporting with
pub use).
type::ident,
<type as trait>::ident: associated constants, functions, and types. See Associated Types.
<type>::…: associated item for a type which cannot be directly named (e.g.
<&T>::…,
<[T]>::…, etc.). See Associated Types.
trait::method(…): disambiguating a method call by naming the trait which defines it. See Universal Function Call Syntax.
type::method(…): disambiguating a method call by naming the type for which it's defined. See Universal Function Call Syntax.
<type as trait>::method(…): disambiguating a method call by naming the trait and type. See Universal Function Call Syntax (Angle-bracket Form).
path<…>(e.g.
Vec<u8>): specifies parameters to generic type in a type. See Generics.
path::<…>,
method::<…>(e.g.
"42".parse::<i32>()): specifies parameters to generic type, function, or method in an expression. See Generics § Resolving ambiguities.
fn ident<…> …: define generic function. See Generics.
struct ident<…> …: define generic structure. See Generics.
enum ident<…> …: define generic enumeration. See Generics.
impl<…> …: define generic implementation.
for<…> type: higher-ranked lifetime bounds.
type<ident=type>(e.g.
Iterator<Item=T>): a generic type where one or more associated types have specific assignments. See Associated Types.
T: U: generic parameter
T constrained to types that implement
U. See Traits.
T: 'a: generic type
T must outlive lifetime
'a. When we say that a type 'outlives' the lifetime, we mean that it cannot transitively contain any references with lifetimes shorter than
'a.
T : 'static: The generic type
T contains no borrowed references other than
'static ones.
'b: 'a: generic lifetime
'b must outlive lifetime
'a.
T: ?Sized: allow generic type parameter to be a dynamically-sized type. See Unsized Types (
?Sized).
'a + trait,
trait + trait: compound type constraint. See Traits (Multiple Trait Bounds).
#[meta]: outer attribute. See Attributes.
#![meta]: inner attribute. See Attributes.
$ident: macro substitution. See Macros.
$ident:kind: macro capture. See Macros.
$(…)…: macro repetition. See Macros.
//: line comment. See Comments.
//!: inner line doc comment. See Comments.
///: outer line doc comment. See Comments.
/*…*/: block comment. See Comments.
/*!…*/: inner block doc comment. See Comments.
/**…*/: outer block doc comment. See Comments.
!: always empty Never type. See Diverging Functions.
(): empty tuple (a.k.a. unit), both literal and type.
(expr): parenthesized expression.
(expr,): single-element tuple expression. See Primitive Types (Tuples).
(type,): single-element tuple type. See Primitive Types (Tuples).
(expr, …): tuple expression. See Primitive Types (Tuples).
(type, …): tuple type. See Primitive Types (Tuples).
expr(expr, …): function call expression. Also used to initialize tuple
structs and tuple
enum variants. See Functions.
ident!(…),
ident!{…},
ident![…]: macro invocation. See Macros.
expr.0,
expr.1, …: tuple indexing. See Primitive Types (Tuple Indexing).
[…]: array literal. See Primitive Types (Arrays).
[expr; len]: array literal containing
len copies of
expr. See Primitive Types (Arrays).
[type; len]: array type containing
len instances of
type. See Primitive Types (Arrays).
expr[expr]: collection indexing. Overloadable (
Index,
IndexMut).
expr[..],
expr[a..],
expr[..b],
expr[a..b]: collection indexing pretending to be collection slicing, using
Range,
RangeFrom,
RangeTo,
RangeFull as the "index".
Originally published by Andres Vourakis at
Web Scraping Mountain Weather Forecasts using Python and a Raspberry Pi. Extracting data from a website without an API.
Before walking you through the project, let me tell you a little bit about the motivation behind it. Aside from Data Science and Machine Learning, my other passion is spending time in the mountains. Planning a trip to any mountain requires lots of careful planning in order to minimize the risks. That means paying close attention to the weather conditions as the summit day approaches. My absolutely favorite website for this is Mountain-Forecast.com which gives you the weather forecasts for almost any mountain in the world at different elevations. The only problem is that it doesn’t offer any historical data (as far as I can tell), which can sometimes be useful when determining if it is a good idea to make the trip or wait for better conditions.
This problem has been in the back of my mind for a while and I finally decided to do something about it. Below, I’ll describe how I wrote a web scraper for Mountain-Forecast.com using Python and Beautiful Soup, and put it into a Raspberry Pi to collect the data on a daily basis.
If you’d rather skip to the code, check out the repository on GitHub.
In order to figure out which elements I needed to target, I started by inspecting the source code of the page. This can be easily done by right clicking on the element of interest and selecting inspect. This brings up the HTML code where we can see the element that each field is contained within.
Lucky for me, the forecast information for every mountain is contained within a table. The only problem is that each day has multiple sub columns associated with it (i.e. AM, PM and night) and so I would need to figure out a way to iterate through them. In addition, since weather forecasts are provided at different elevations, I would need to extract the link for each one of them and scrape them individually.
Similarly, I inspected the directory containing the URLs for the highest 100 mountains in the United States.
This seemed like a much easier task since all I needed from the table were the URLs and Mountain Names, in no specific order.
After familiarizing myself with the HTML structure of the page, it was time to get started.
My first task was to collect the URLs for the mountains I was interested in. I wrote a couple of functions to store the information in a dictionary, where the key is Mountain Name and the value is a list of all the URLs associated with it (URLs by elevation). Then I used the
pickle module to serialize the dictionary and save it into a file so that it could be easily retrieved when needed. Here is the code I wrote to do that:
def load_urls(urls_filename):
    """ Returns dictionary of mountain urls saved in a pickle file """
    full_path = os.path.join(os.getcwd(), urls_filename)
    with open(full_path, 'rb') as file:
        urls = pickle.load(file)
    return urls
def dump_urls(mountain_urls, urls_filename):
    """ Saves dictionary of mountain urls as a pickle file """
    full_path = os.path.join(os.getcwd(), urls_filename)
    with open(full_path, 'wb') as file:
        pickle.dump(mountain_urls, file)
def get_urls_by_elevation(url):
    """ Given a mountain url it returns a list of its urls by elevation """
    base_url = ''
    full_url = urljoin(base_url, url)
    time.sleep(1)  # Delay to not bombard the website with requests
    page = requests.get(full_url)
    soup = bs(page.content, 'html.parser')
    elevation_items = soup.find('ul', attrs={'class': 'b-elevation__container'}).find_all('a', attrs={'class': 'js-elevation-link'})
    return [urljoin(base_url, item['href']) for item in elevation_items]
def get_mountains_urls(urls_filename='mountains_urls.pickle', url=''):
    """ Returns dictionary of mountain urls

    If a file with urls doesn't exist then create a new one using "url" and return it """
    try:
        mountain_urls = load_urls(urls_filename)
    except:  # Is this better than checking if the file exists? Should I catch specific errors?
        directory_url = url
        page = requests.get(directory_url)
        soup = bs(page.content, 'html.parser')
        mountain_items = soup.find('ul', attrs={'class': 'b-list-table'}).find_all('li')
        mountain_urls = {item.find('a').get_text(): get_urls_by_elevation(item.find('a')['href'])
                         for item in mountain_items}
        dump_urls(mountain_urls, urls_filename)
    finally:
        return mountain_urls
My next task was to collect the weather forecast for each mountain in the dictionary. I used
requests to get the content of the page and
beautifulsoup4 to parse it.
page = requests.get(url)
soup = bs(page.content, 'html.parser')

# Get data from header
forecast_table = soup.find('table', attrs={'class': 'forecast__table forecast__table--js'})

# Get rows from body
days = forecast_table.find('tr', attrs={'data-row': 'days'}).find_all('td')
times = forecast_table.find('tr', attrs={'data-row': 'time'}).find_all('td')
winds = forecast_table.find('tr', attrs={'data-row': 'wind'}).find_all('img')  # Use "img" instead of "td" to get direction of wind
summaries = forecast_table.find('tr', attrs={'data-row': 'summary'}).find_all('td')
rains = forecast_table.find('tr', attrs={'data-row': 'rain'}).find_all('td')
snows = forecast_table.find('tr', attrs={'data-row': 'snow'}).find_all('td')
max_temps = forecast_table.find('tr', attrs={'data-row': 'max-temperature'}).find_all('td')
min_temps = forecast_table.find('tr', attrs={'data-row': 'min-temperature'}).find_all('td')
chills = forecast_table.find('tr', attrs={'data-row': 'chill'}).find_all('td')
freezings = forecast_table.find('tr', attrs={'data-row': 'freezing-level'}).find_all('td')
sunrises = forecast_table.find('tr', attrs={'data-row': 'sunrise'}).find_all('td')
sunsets = forecast_table.find('tr', attrs={'data-row': 'sunset'}).find_all('td')

# Iterate over days
for i, day in enumerate(days):
    current_day = clean(day.get_text())
    elevation = url.rsplit('/', 1)[-1]
    num_cols = int(day['data-columns'])
    if current_day != '':
        # Avoid using date format. Pandas adds 00:00:00 for some reason. Figure out better way to format
        date = str(datetime.date(datetime.date.today().year, datetime.date.today().month,
                                 int(current_day.split(' ')[1])))
        # Iterate over forecast
        for j in range(i, i + num_cols):
            time_cell = clean(times[j].get_text())
            wind = clean(winds[j]['alt'])
            summary = clean(summaries[j].get_text())
            rain = clean(rains[j].get_text())
            snow = clean(snows[j].get_text())
            max_temp = clean(max_temps[j].get_text())
            min_temp = clean(min_temps[j].get_text())
            chill = clean(chills[j].get_text())
            freezing = clean(freezings[j].get_text())
            sunrise = clean(sunrises[j].get_text())
            sunset = clean(sunsets[j].get_text())
            rows.append(np.array([mountain_name, date, elevation, time_cell, wind, summary, rain, snow,
                                  max_temp, min_temp, chill, freezing, sunrise, sunset]))
As you can see from the code, I manually saved each element of interest into its own variable instead of iterating through them. It wasn’t pretty but I decided to do it that way since I wasn’t interested in all of the elements (i.e. weather maps and freezing scale) and there were a few of them that needed to be handled differently than the rest.
Since my goal was to scrape daily and the forecasts get updated everyday, it was important to figure out a way to update old forecasts instead of creating duplicates and append the new ones. I used the
pandas module to turn the data into a DataFrame (a two-dimensional data structure consisting of rows and columns) and be able to easily manipulate it and then save it as a CSV file. Here is what the code looks like:
def save_data(rows):
    """ Saves the collected forecasts into a CSV file

    If the file already exists then it updates the old forecasts as necessary and/or appends new ones. """
    column_names = ['mountain', 'date', 'elevation', 'time', 'wind', 'summary', 'rain', 'snow',
                    'max_temperature', 'min_temperature', 'chill', 'freezing_level', 'sunrise', 'sunset']
    today = datetime.date.today()
    dataset_name = os.path.join(os.getcwd(), '{:02d}{}_mountain_forecasts.csv'.format(today.month, today.year))  # i.e. 042019_mountain_forecasts.csv
    try:
        new_df = pd.DataFrame(rows, columns=column_names)
        old_df = pd.read_csv(dataset_name, dtype=object)
        new_df.set_index(column_names[:4], inplace=True)
        old_df.set_index(column_names[:4], inplace=True)
        # Update old forecasts and append new ones
        old_df.update(new_df)
        only_include = ~old_df.index.isin(new_df.index)
        combined = pd.concat([old_df[only_include], new_df])
        combined.to_csv(dataset_name)
    except FileNotFoundError:
        new_df.to_csv(dataset_name, index=False)
Once the data collection and manipulation was done, I ended up with this table:
The Raspberry Pi is a low cost, credit-card sized computer that can be used for a variety of projects like, retro-gaming emulation, home automation, robotics, or in this case, web-scraping. Running the scraper on the Raspberry Pi can be a better alternative to leaving your personal desktop or laptop running all the time, or investing on a server.
First I needed to install an Operating System on the Raspberry Pi and I chose Raspbian Stretch Lite, a Debian-based operating system without a graphical desktop, just a terminal.
After installing Raspbian Stretch Lite, I used the command
sudo raspi-config to open up the configuration tool and change the password, expand filesystem, change host name and enable SSH.
Finally, I used
sudo apt-get update && sudo apt-get upgrade to make sure everything was up-to-date and proceeded to install all of the dependencies necessary to run my script (i.e. Pandas, Beautiful Soup 4, etc…)
In order to schedule the script to run daily, I used
cron, a time-based job scheduler in Unix-like computer operating systems (i.e. Ubuntu, Raspbian, macOS, etc…). Using the following command the script was scheduled to run daily at 10:00 AM.
0 10 * * * /usr/bin/python3 /home/pi/scraper.py
This is what the final set-up looks like:
Since SSH is enabled on the Raspberry Pi, I can now easily connect to it via terminal (no need for an extra monitor and keyboard) using my personal laptop or phone and keep an eye on the scraper.
I hope you enjoyed this walk through and it inspired you to code your own Web Scraper using Python and a Raspberry Pi. If you have any questions or feedback, I’ll be happy to read them in the comments below :)
Originally published by Andres Vourakis at
Thanks for reading :heart: If you liked this post, share it with all of your programming buddies! Follow me on Facebook | Twitter
Learn More
☞ Machine Learning with Python, Jupyter, KSQL and TensorFlow
☞ Introduction to Python Microservices with Nameko
☞ Comparing Python and SQL for Building Data Pipelines
☞ Python Tutorial - Complete Programming Tutorial for Beginners (2019)
☞ Python and HDFS for Machine Learning
☞ Build a chat widget with Python and JavaScript
☞ Complete Python Bootcamp: Go from zero to hero in Python 3
☞ Complete Python Masterclass
☞ Learn Python by Building a Blockchain & Cryptocurrency
☞ Python and Django Full Stack Web Developer Bootcamp
☞ The Python Bible™ | Everything You Need to Program in Python
☞ Learning Python for Data Analysis and Visualization
☞ Python for Financial Analysis and Algorithmic Trading
☞ The Modern Python 3 Bootcamp
An overview of using Python for data science including Numpy, Scipy, pandas, Scikit-Learn, XGBoost, TensorFlow and Keras. Thanks to the hard work of thousands of open source contributors, you can do data science, too.
If you look at the contents of this article, you may think there’s a lot to master, but this article has been designed to gently increase the difficulty as we go along.
One article obviously can’t teach you everything you need to know about data science with python, but once you’ve followed along you’ll know exactly where to look to take the next steps in your data science journey.
Table of contents:
Python, as a language, has a lot of features that make it an excellent choice for data science projects.
It’s easy to learn, simple to install (in fact, if you use a Mac you probably already have it installed), and it has a lot of extensions that make it great for doing data science.
Just because Python is easy to learn doesn’t mean it’s a toy programming language — huge companies like Google use Python for their data science projects, too. They even contribute packages back to the community, so you can use the same tools in your projects!
You can use Python to do way more than just data science — you can write helpful scripts, build APIs, build websites, and much much more. Learning it for data science means you can easily pick up all these other things as well.
There are a few important things to note about Python.
Right now, there are two versions of Python that are in common use. They are versions 2 and 3.
Most tutorials, and the rest of this article, will assume that you’re using the latest version of Python 3. It’s just good to be aware that sometimes you can come across books or articles that use Python 2.
The difference between the versions isn’t huge, but sometimes copying and pasting version 2 code when you’re running version 3 won’t work — you’ll have to do some light editing.
The second important thing to note is that Python really cares about whitespace (that’s spaces and return characters). If you put whitespace in the wrong place, your programme will very likely throw an error.
There are tools out there to help you avoid doing this, but with practice you’ll get the hang of it.
If you’ve come from programming in other languages, Python might feel like a bit of a relief: there’s no need to manage memory and the community is very supportive.
If Python is your first programming language you’ve made an excellent choice. I really hope you enjoy your time using it to build awesome things.
The best way to install Python for data science is to use the Anaconda distribution (you’ll notice a fair amount of snake-related words in the community).
It has everything you need to get started using Python for data science including a lot of the packages that we’ll be covering in the article.
If you click on Products -> Distribution and scroll down, you’ll see installers available for Mac, Windows and Linux.
Even if you have Python available on your Mac already, you should consider installing the Anaconda distribution as it makes installing other packages easier.
If you prefer to do things yourself, you can go to the official Python website and download an installer there.
Packages are pieces of Python code that aren’t a part of the language but are really helpful for doing certain tasks. We’ll be talking a lot about packages throughout this article so it’s important that we’re set up to use them.
Because the packages are just pieces of Python code, we could copy and paste the code and put it somewhere the Python interpreter (the thing that runs your code) can find it.
But that’s a hassle — it means that you’ll have to copy and paste stuff every time you start a new project or if the package gets updated.
To sidestep all of that, we’ll instead use a package manager.
If you chose to use the Anaconda distribution, congratulations — you already have a package manager installed. If you didn’t, I’d recommend installing pip.
No matter which one you choose, you’ll be able to use commands at the terminal (or command prompt) to install and update packages easily.
Now that you’ve got Python installed, you’re ready to start doing data science.
But how do you start?
Because Python caters to so many different requirements (web developers, data analysts, data scientists) there are lots of different ways to work with the language.
Python is an interpreted language which means that you don’t have to compile your code into an executable file, you can just pass text documents containing code to the interpreter!
Let’s take a quick look at the different ways you can interact with the Python interpreter.
If you open up the terminal (or command prompt) and type the word ‘python’, you’ll start a shell session. You can type any valid Python commands in there and they’d work just like you’d expect.
This can be a good way to quickly debug something but working in a terminal is difficult over the course of even a small project.
If you write a series of Python commands in a text file and save it with a .py extension, you can navigate to the file using the terminal and, by typing python YOUR_FILE_NAME.py, can run the programme.
This is essentially the same as typing the commands one-by-one into the terminal, it’s just much easier to fix mistakes and change what your program does.
An IDE is a professional-grade piece of software that helps you manage software projects.
One of the benefits of an IDE is that you can use debugging features which tell you where you’ve made a mistake before you try to run your programme.
Some IDEs come with project templates (for specific tasks) that you can use to set your project out according to best practices.
None of these ways are the best for doing data science with python — that particular honour belongs to Jupyter notebooks.
Jupyter notebooks give you the capability to run your code one ‘block’ at a time, meaning that you can see the output before you decide what to do next — that’s really crucial in data science projects where we often need to see charts before taking the next step.
If you’re using Anaconda, you’ll already have Jupyter lab installed. To start it you’ll just need to type ‘jupyter lab’ into the terminal.
If you’re using pip, you’ll have to install Jupyter lab with the command ‘pip install jupyterlab’.
It probably won’t surprise you to learn that data science is mostly about numbers.
The NumPy package includes lots of helpful functions for performing the kind of mathematical operations you’ll need to do data science work.
It comes installed as part of the Anaconda distribution, and installing it with pip is just as easy as installing Jupyter notebooks (‘pip install numpy’).
The most common mathematical operations we’ll need to do in data science are things like matrix multiplication, computing the dot product of vectors, changing the data types of arrays and creating the arrays in the first place!
Here’s how you can make a list into a NumPy array:
Here’s how you can do array multiplication and calculate dot products in NumPy:
And here’s how you can do matrix multiplication in NumPy:
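Putting the three operations together in one short, self-contained sketch (the example arrays are invented for illustration):

```python
import numpy as np

# Turn plain Python lists into NumPy arrays.
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

# Element-wise multiplication and the dot product of two vectors.
elementwise = a * b   # array([ 4, 10, 18])
dot = np.dot(a, b)    # 4 + 10 + 18 = 32

# Matrix multiplication with the @ operator.
m1 = np.array([[1, 0], [0, 1]])  # the 2x2 identity matrix
m2 = np.array([[2, 3], [4, 5]])
product = m1 @ m2                # identity @ m2 is just m2

print(elementwise, dot, product, sep="\n")
```

Note the difference: `a * b` multiplies element by element, while `np.dot` (or `@` for matrices) performs the linear-algebra product.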
With mathematics out of the way, we must move forward to statistics.
The Scipy package contains a module (a subsection of a package’s code) specifically for statistics.
You can import it (make its functions available in your programme) into your notebook using the command ‘from scipy import stats’.
This package contains everything you’ll need to calculate statistical measurements on your data, perform statistical tests, calculate correlations, summarise your data and investigate various probability distributions.
Here’s how to quickly access summary statistics (minimum, maximum, mean, variance, skew, and kurtosis) of an array using Scipy:
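A minimal sketch using `scipy.stats.describe` on an invented sample:

```python
import numpy as np
from scipy import stats

# A small, made-up sample standing in for real data.
data = np.array([2, 4, 4, 4, 5, 5, 7, 9])

# describe() bundles the count, min/max, mean, variance, skewness and kurtosis.
summary = stats.describe(data)

print(summary.nobs)      # 8 observations
print(summary.minmax)    # (2, 9)
print(summary.mean)      # 5.0
print(summary.variance)  # sample variance
```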
Data scientists have to spend an unfortunate amount of time cleaning and wrangling data. Luckily, the Pandas package helps us do this with code rather than by hand.
The most common tasks that I use Pandas for are reading data from CSV files and databases.
It also has a powerful syntax for combining different datasets together (datasets are called DataFrames in Pandas) and performing data manipulation.
You can see the first few rows of a DataFrame using the .head method:
You can select just one column using square brackets:
And you can create new columns by combining others:
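A small sketch of all three operations, with a made-up DataFrame standing in for data loaded via `pd.read_csv`:

```python
import pandas as pd

# Invented data standing in for a CSV file read with pd.read_csv(...).
df = pd.DataFrame({
    "city": ["Seattle", "Denver", "Boston"],
    "min_temp": [4, -2, 1],
    "max_temp": [12, 8, 7],
})

print(df.head())    # first few rows (all three here)
print(df["city"])   # select a single column with square brackets

# Create a new column by combining two others.
df["temp_range"] = df["max_temp"] - df["min_temp"]
print(df["temp_range"].tolist())  # [8, 10, 6]
```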
In order to use the pandas read_sql method, you’ll have to establish a connection to a database.
The most bulletproof method of connecting to a database is by using the SQLAlchemy package for Python.
Because SQL is a language of its own and connecting to a database depends on which database you’re using, I’ll leave you to read the documentation if you’re interested in learning more.
Sometimes we’d prefer to do some calculations on our data before they arrive in our projects as a Pandas DataFrame.
If you’re working with databases or scraping data from the web (and storing it somewhere), this process of moving data and transforming it is called ETL (Extract, transform, load).
You extract the data from one place, do some transformations to it (summarise the data by adding it up, finding the mean, changing data types, and so on) and then load it to a place where you can access it.
There’s a really cool tool called Airflow which is very good at helping you manage ETL workflows. Even better, it’s written in Python.
It was developed by Airbnb when they had to move incredible amounts of data around, you can find out more about it here.
Sometimes ETL processes can be really slow. If you have billions of rows of data (or if they’re a strange data type like text), you can recruit lots of different computers to work on the transformation separately and pull everything back together at the last second.
This architecture pattern is called MapReduce and it was made popular by Hadoop.
Nowadays, lots of people use Spark to do this kind of data transformation / retrieval work and there’s a Python interface to Spark called (surprise, surprise) PySpark.
Both the MapReduce architecture and Spark are very complex tools, so I’m not going to go into detail here. Just know that they exist and that if you find yourself dealing with a very slow ETL process, PySpark might help. Here’s a link to the official site.
We already know that we can run statistical tests, calculate descriptive statistics, p-values, and things like skew and kurtosis using the stats module from Scipy, but what else can Python do with statistics?
One particular package that I think you should know about is the lifelines package.
Using the lifelines package, you can calculate a variety of functions from a subfield of statistics called survival analysis.
Survival analysis has a lot of applications. I’ve used it to predict churn (when a customer will cancel a subscription) and when a retail store might be burglarised.
These are totally different to the applications the creators of the package imagined it would be used for (survival analysis is traditionally a medical statistics tool). But that just shows how many different ways there are to frame data science problems!
The documentation for the package is really good, check it out here.
Machine Learning in Python
Now this is a major topic — machine learning is taking the world by storm and is a crucial part of a data scientist’s work.
Simply put, machine learning is a set of techniques that allows a computer to map input data to output data. There are a few instances where this isn’t the case but they’re in the minority and it’s generally helpful to think of ML this way.
There are two really good machine learning packages for Python, let’s talk about them both.
Most of the time you spend doing machine learning in Python will be spent using the Scikit-Learn package (sometimes abbreviated sklearn).
This package implements a whole heap of machine learning algorithms and exposes them all through a consistent syntax. This makes it really easy for data scientists to take full advantage of every algorithm.
The general framework for using Scikit-Learn goes something like this –
You split your dataset into train and test datasets:
Then you instantiate and train a model:
And then you use the metrics module to test how well your model works:
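Those three steps can be sketched as one runnable example; the iris toy dataset and logistic regression below are illustrative stand-ins, not a prescription:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Split the dataset into train and test sets.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Instantiate and train a model.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Use the metrics module to test how well it works.
preds = model.predict(X_test)
print(accuracy_score(y_test, preds))
```

Because every Scikit-Learn estimator follows the same `fit`/`predict` interface, swapping in a different algorithm usually means changing only the import and the constructor call.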
The second package that is commonly used for machine learning in Python is XGBoost.
Where Scikit-Learn implements a whole range of algorithms XGBoost only implements a single one — gradient boosted decision trees.
This package (and algorithm) has become very popular recently due to its success at Kaggle competitions (online data science competitions that anyone can participate in).
Training the model works in much the same way as a Scikit-Learn algorithm.
Deep Learning in Python
The machine learning algorithms available in Scikit-Learn are sufficient for nearly any problem. That being said, sometimes you need to use the most advanced thing available.
Deep neural networks have skyrocketed in popularity due to the fact that systems using them have outperformed nearly every other class of algorithm.
There’s a problem though — it’s very hard to say what a neural net is doing and why it’s making the decisions that it is. Because of this, their use in finance, medicine, the law and related professions isn’t widely endorsed.
The two major classes of neural network are convolutional neural networks (which are used to classify images and complete a host of other tasks in computer vision) and recurrent neural nets (which are used to understand and generate text).
Exploring how neural nets work is outside the scope of this article, but just know that the packages you’ll need to look for if you want to do this kind of work are TensorFlow (a Google contribution!) and Keras.
Keras is essentially a wrapper for TensorFlow that makes it easier to work with.
Once you’ve trained a model, you’d like to be able to access predictions from it in other software. The way you do this is by creating an API.
An API allows your model to receive data one row at a time from an external source and return a prediction.
Because Python is a general purpose programming language that can also be used to create web services, it’s easy to use Python to serve your model via API.
If you need to build an API you should look into the pickle module and Flask. Pickle allows you to save trained models on your hard-drive so that you can use them later. And Flask is the simplest way to create web services.
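A minimal sketch of the pickle half of that workflow: saving a "trained model" to disk and loading it back, the way a web service would at startup before serving predictions. The TinyModel class and its prediction rule are invented purely for illustration; in a real Flask app, a view function would call model.predict on data decoded from the request.

```python
import os
import pickle
import tempfile

class TinyModel:
    """Hypothetical stand-in for a trained model with a predict method."""
    def predict(self, rows):
        # Dummy rule: "predict" the sum of each row's features.
        return [sum(r) for r in rows]

# Persist the "trained" model to disk, as you would after training.
path = os.path.join(tempfile.gettempdir(), "tiny_model.pickle")
with open(path, "wb") as f:
    pickle.dump(TinyModel(), f)

# Later (e.g. when the web service starts), load it back and predict.
with open(path, "rb") as f:
    model = pickle.load(f)

print(model.predict([[1, 2, 3]]))  # [6]
```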
Finally, if you’d like to build a full-featured web application around your data science project, you should use the Django framework.
Django is immensely popular in the web development community and was used to build the first version of Instagram and Pinterest (among many others).
And with that we’ve concluded our whirlwind tour of data science with Python.
We’ve covered everything you’d need to learn to become a full-fledged data scientist. If it still seems intimidating, you should know that nobody knows all of this stuff and that even the best of us still Google the basics from time to time!
Python For Data Analysis - Build a Data Analysis Library from Scratch - Learn Python in 2019
Immerse yourself in a long, comprehensive project that teaches advanced Python concepts to build an entire library
You’ll learn
Can't have jurisdiction here (Score:2)
On the other hand, with my vast knowledge of how these things go, he'll probably wind up facing a stiff penalty of some sort.
Re:Can't have jurisdiction here (Score:5, Informative)
Your knowledge of the law, admittedly scant, is also utterly wrong.
Additionally, it's worth noting that the headline "Sony Must Show It Has Jurisdiction To Sue PS3 Hacker" belies typical /. cluelessness about any and all legal issues. You never have "jurisdiction" to sue somebody. COURTS have jurisdiction, not parties, and the jurisdiction means they have the power to HEAR the lawsuit.
Re:Can't have jurisdiction here (Score:5, Informative)
Re: (Score:2, Informative)
Right. The court is asking Sony to explain why a California court (where they filed) has jurisdiction, because Hotz's lawyer filed a response saying "What's california got to do with it? He lives in New Jersey".
The downside, of course, is that Sony can simply say, "fine, we'll refile in New Jersey, which the defendant has happily admitted has jurisdiction."
TFA presents the issue as Sony's jurisdiction (Score:2)
(I do agree that the headline is poorly worded but it's derived from a technically-correct but also poorly worded line in TFA).
No, its not. TFA -- and not just in the sentence excerpted in TFS, which is technically wrong but which might, on its own, have just been an error in pronoun use -- presents the legal issue in dispute as being whether Sony (not the U.S. District Court for the Northern District of California) has jurisdiction.
In addition to the first sentence in TFS, which might be explained away as an error in pronoun use in a sentence intended to convey the correct issue of the court's jurisdiction, TFA states: "Hotz's law
Re:TFA presents the issue as Sony's jurisdiction (Score:5, Informative)
Interestingly enough Sony's main claim to California jurisdiction is the "choice of venue" clause in their PSN ToS.
Re: (Score:3)
Re: (Score:2)
COURTS have jurisdiction, not parties, and the jurisdiction means they have the power to HEAR the lawsuit.
Sir –
Even more pedantically: Courts and other dispute resolution bodies (e.g. arbitrators) have what we refer to as "jurisdiction" when these bodies have the power to determine the outcome of a dispute i.e. the power to issue an award that is enforceable over the people (in personam), subject-matter, and property (in rem) in issue.
I've been taught through experience that the word "jurisdiction" ought to always be accompanied by an adjective that answers the question "over what?" Determining issues of
e-ttorney at law? (Score:2)
I hope his attorney has insurance against injuries sustained from excessive eye-rolling.
Muhahah (Score:4, Funny)
Re: (Score:2)
Re:Muhahah (Score:4, Funny)
they have the code
int getRandomNumber()
{
return (4);
}
is theirs. only Darl is capable to get with this random representation. and courts already have proof in their notes based on all stupidity he said when he was in court
Great Legal Team! (Score:2)
Isn't that a
Re:Great Legal Team! (Score:4, Insightful)
No.
It just says that the information is out there, not that their client is responsible.
Re: (Score:2)
Re: (Score:2)
It's still missing a car analogy.
Re: (Score:2, Informative)
The milk is spilled on the barn door and the horses are out of the bag! And no use crying over the cars in the yard.
If we can hit that bullseye, the rest of the dominoes will fall like a house of cards. Checkmate. -- Zapp Brannigan
Re:Great Legal Team! (Score:4, Informative)
The defense's argument is not that George Hotz isn't responsible. He is responsible. The question is whether or not what he did was illegal. They're arguing that there is little difference between jailbreaking a phone (legally exempted in the DMCA) and a console (still illegal per the DMCA.)
Voice chat (Score:4, Interesting)
Re: (Score:2)
For what it's worth (not much really), the PSP has Skype on it.
Re: (Score:3)
Re: (Score:2)
...allow voice chat in game? I don't see what you were trying to indicate with that. I mean, I can voice chat with other people around with world using my PC, but that doesn't mean it's a telephone.
Any game with voice chat turns the PS3 into "a telecommunications device that transmits and receives sound, most commonly the human voice" [wikipedia.org]. PS3 with the Eye accessory already does video chat [playstation.com], which includes voice. Can you show that "telephone" is defined in copyright law to require connection to the PSTN, and if so, would that be reason enough to cancel rumored Skype [gamesradar.com]?
Re:Voice chat (Score:4, Insightful)
Seriously Your Honour, all I was doing was trying to do was fix my phone!
Re: (Score:2)
No.
It just says that the information is out there, not that their client is responsible.
It does, however, suggest that there has been damage done. If Sony can show the client is responsible, then the question of whether a trade secret has been destroyed is settled, for example.
Re: (Score:2)
Re: (Score:2)
Okay, so a better analog might be publishing a guide on how to remove the rev limiter from the engine computer.
Re: (Score:2)
Oh come on.... you really expect that argument holds any weight??
"It's my car...house...equipment I bought it therefore I can do what I like with it", including something forbidden by terms of use and by the law? - I think not.
The divide is whether you think that it's you right to backup and protect your media investment, and companies like Sony should not stop you from doing that. - End of the day, Sony are protecting their interests, while we try to protect ours (the right to make backups of fragile media
Re: (Score:2)
Sure hope not, especially if they are doing it in California. They'll get their nuts cut off and fed to them. [wikimedia.org]
Re: (Score:2)
Mr. Hotz manages to open his welded-shut hood using the secret knock he heard about from fail0verflow.
Inside he finds detailed instructions for starting the car printed on an instruction sheet. Like all good instruction sheets, it has an ISBN.
Mr. Hotz writes the ISBN on a poster and puts it up in his yard. Other people start ordering copies using the ISBN.
Sony drives by, screeches to a halt, and fires a cruise missile from their car (full of lawyers) at Mr. Hotz's front door.
The judge
Re: (Score:2)
I once wrote such a guide, but my lawyers advised me that a how-to might be trouble. So I rewrote it as a poem entitled "The Rock and the Window".
Re:Great Legal Team! (Score:5, Interesting)
Re: (Score:3)
"Computer programs that enable wireless telephone handsets to execute software applications, where circumvention is accomplished for the sole purpose of enabling interoperability of such applications, when they have been lawfully obtained, with computer programs on the telephone handset."
Re: (Score:3)
Re: (Score:3)
Technically speaking, the PS3 *IS* a wireless communication device that is locked down to run proprietary software.
It has builtin bluetooth and WiFi. It's just some homebrew away from being a VERY large bluetooth VoIP phone.
Re: (Score:2)
Actually you can already do video calls if you just plug in a webcam. If they have exemptions for phones, they should have it for consoles. They should have it for everything.
Re:Great Legal Team! (Score:4, Informative)
That is not the only exemption, I believe 1201 f should apply as it is providing the information to allow the PS3 OS software to inter-operate with the software generating homebrew images. [harvard.edu]
IANAL
Tune in next week for... (Score:2)
If California does not have jurisdiction then the DMCA is irrelevant to this whole episode.
But it would remain relevant to the next episode, in which Sony lawyers fly to the other side of the U.S. mainland.
Re: (Score:2)
Keys like this cannot be copyrighted. They are not creative works in any way.
Re: (Score:2)
Of course they are speech. how could they not be?
Anything you can say is speech. I could go recite the key on street corners if I like.
Re: (Score:2)
This issue was already brought to bear by a university professor who made a very eloquent case for the DE-CSS algorithm and key to be treated as free speech.
While the case was lost, the judge did rule that object code and source code are forms of speech, so sharing the programs needed to crack the PS3 would naturally follow.
From the person behind the "Gallery of CSS descramblers" [cmu.edu]
Re: (Score:2)
See my post above. A little VOIP software to make use of a 10$ bluetooth cellular handsfree dongle on the PS3, and the console magically becomes a telephone.
Re: (Score:2)
I think if anything went forward the evidence against the guy is clear and they'd end up stipulating that he's the one.
That's why they're fighting jurisdiction and moving against restraining orders instead of taking it to court now and forcing Sony to show proof. If those don't work they should try to negotiate a cheap out, which Sony will piss on, so they probably won't bother to try. When it does get to trial, they'll fight over what it cost them and whether and how much should cost him. No sense pissi
Re:Great Legal Team! (Score:4, Interesting)
If Sony wishes to bring up potential harm instead of actual harm, the court ought to be reminded of the rootkits installed by Sony audio CDs on machines they didn't own or manufacture.
Re: (Score:2)
And who says he was even using his own PlayStation?!
:p
Re: (Score:2)
It's hard to argue that he didn't do what he did, they're going to argue that what he did was legal. Like if I was accused of slander and responded with "Of course I called him an idiot, he is one!". I said what I said and stand by it, but that doesn't mean that what I did was slander.
"On the face of Sony's Motion" (Score:2)
Isn't that a confession?!
No. Read it again. He never admits to doing anything, whether or not it is illegal, so all the pertinent questions of law and fact are still out there. All he does is state the fact that Sony's own filing admits that the information that they are seeking to prevent getting out is already widely distributed on the internet, and argues, based on that, that on the basis of the claims in Sony's own filing requesting a TRO (temporary restraining order) such an order would serve no purpose.
They know that, but that's not the point (Score:5, Insightful)
Re: (Score:2, Funny)
"Ruin him financially" you mean he might have to move out of his dad's garage?
Re:They know that, but that's not the point (Score:5, Funny)
Back in the old days, an organization from Japan would send over Ninjas to take care of the problem. I bet Sony misses those days -sigh-
Re: (Score:2)
Cyberpunk (Score:2)
That's what all those old cyberpunk novels promised us. Big Japanese companies sending cybernetic ninjas to take-out rogue hackers.
Come on, Sony. Make it happen.
Re: (Score:2)
Sony knows that they can't put the cat back in the bag, but that's not the point. The point is to make life as hellish as possible for the person who let the cat out, so the next bloke who considers doing it might find something else to do.
Unfortunately for Sony's motion for a TRO, such orders are allowed only for specified purposes (largely, to prevent irreversible harm), and punitive purposes are not an acceptable reason for a TRO.
Re:They know that, but that's not the point (Score:4, Funny)
In light of your slashID, I thought it important to mention that a corpse is an inanimate object, and even if you purchase it (legally, like for medical research) you cannot do whatever you want to it OR with it.
Just an FYI, in case you were wondering about the specific implications of edge cases to your generalism. You know. Stuff you'd want to do to/with a corpse. For science or something.
Re: (Score:2)
Trade secrets are not protected in this manner. If you find them out they stop being secrets, neat huh?
You can harm Sony legally all you want. At worst he broke some EULA bs contract.
Saw this one coming (Score:5, Insightful)
Re: (Score:3)
That argument won't work.
He should have sued them for taking away the functionality he paid for when he bought his box (if he bought it before they locked it down).
Even if he loses this case, he should still be able to sue them for that.
In fact, he should be using that as a bargaining chip: Drop your suit and let everyone use my code, or I'll counter-sue you and win and you'll have to compensate everyone who owns one of your boxes.
Re: (Score:2)
They'd accept that deal in a heartbeat: they'd rather destroy him and keep a victory that lets them deploy more DRM-friendly stances in the future. I can't see a class action suit in this case as being anything more than a slap on the wrist.
Re:Saw this one coming (Score:5, Informative)
Before using a firmware release to disable OtherOS, Sony has said [ozlabs.org]:
Please be assured that SCE [Sony Computer Entertainment] is committed to continue the support for previously sold models that have the "Install Other OS" feature and that this feature will not be disabled in future firmware releases.
IANAL, but I believe the fact that geohot was using the exploit to re-enable OtherOS will be a vital part of his defense against charges he violated the DMCA. My understanding of the current case law is that if you circumvent a security measure for the sole purpose of violating someone's copyrights then you are liable for prosecution under the DMCA. But if you circumvent a security device in order to exercise a "fair use" then you are safe. A recent example of this was the announcement by the US Government (I forget which department) that it was legal to jailbreak iPhones in order to change carriers.
This then takes us back to the 1984 Supreme Court decision in Sony Corp. of America v. Universal City Studios, Inc [wikipedia.org] where they ruled that "making of individual copies of complete television shows for purposes of time-shifting does not constitute copyright infringement, but is fair use". The idea was that if there were valid (fair) uses of video recorders then video recorders were legal even if they could be used for infringement.
IMO (IANAL), geohot's exploit has fair uses, such as restoring OtherOS, and other uses that would infringe copyright (pirating games). Without the fair uses, geohot might have been in trouble.
Re: (Score:2)
I don't think that "fair use" is the right term; it's too tied up in copyright, and doesn't extend to all kinds of uses that seem fair for anything not copyrightable. The jailbreak thing is actually in the law.
Now that his code is out, people using it are probably not liable for breaking protection on their systems.
Doesn't mean he's not liable for violating the agreement on his to create and distribute the code.
Re: (Score:2)
If one were to buy a PS3 to hack on, and never connected it to the internet, how would one be bound by a terms of service?
Re: (Score:3)
Because it shows you the eula when you first turn it on.
Re: (Score:2)
After that I believe the computer belongs to me, to do with as I wish, seeing as I paid for it.
I should really be able to send Sony a letter asking them to reimburse me for their software licence seeing as I didn't use it.
Re: (Score:2)
you mean this quote from the article you linked to
Why would you say what you said when the link you posted as evidence says the opposite?
Oh you fool! (Score:2, Insightful)
Now they're going to seek injunction against Google. (Yes, I DO think they're THAT RETARDED.)
This has broader implications (Score:2)
Judge Ilston is rightfully concerned about Sony's argument here. If she were to accept their argument, it would be possible for someone to sue you in California if you use PayPal, or have a Twitter account, or a YouTube account, or any other kind of computer account in California. It would effectively create a kind of universal jurisdiction based solely on the fact that you use one of those Internet services. The Federal courts in California are already back-logged enough with just the personal jurisdict
Vroom (Score:5, Informative)
Awesome, Hotz' attorneys used a car analogy in their press release.
Plan B (Score:2)
Ya know, there's nothing stopping sony from filing exactly the same complaint in new jersey and then proceeding as planned.
Re: (Score:2)
Of course that's what will happen, and GeoHot would much prefer that, because now he can show up at the courthouse in his own county to mount his defense instead of having to fly to California.
Re:Plan B (Score:5, Informative)
There's a reason they did this in California. The first is that it harasses the defendant by forcing him to defend a suit in a place where he doesn't live. The second is that California is where most of the USA's IP lawyers keep their crypts, and where SCEA is headquartered, and they can't be bothered to find/hire someone barred in New Jersey or Massachusetts to pursue a case there. Yes, they really are that lazy. The special appearance was absolutely the right thing to do.
When I read Sony's application I knew there would be massive jurisdiction problems. I don't think the Sony Network agreement, even if the court finds that geohot agreed to it (good luck!), was written to cover this kind of litigation, and the rest of the bases for jurisdiction (Youtube? Paypal? Really, overpaid corporate law jerks?) are junk. The response is correct, too, that the Sony action seems to be bootstrapping jurisdiction for everyone else named through poor old geohot, which isn't going to fly. And it's also correct that there just isn't a good legal basis for issuing a TRO, which is supposed to be a TEMPORARY order in emergencies where there is a serious danger of impending harm.
IAAL, but not THIS kind of lawyer.
Re: (Score:2, Informative)
Jury nullification works both ways.
Don't do it.
Re: (Score:2)
Yes, Jury Nullification works both ways. If I'm on a jury, I will exercise my conscience. If I'm ever a defendant, I hope my jury will do the same. The world would be a much better place if people exercised their consciences more often.
jury nullification (Score:3)
Jury nullification works both ways.
Actually, jury nullification works one way (in favor of criminal defendants) and in one context (criminal trials.) A criminal jury trial is the only case where a judge cannot overrule, in either direction, a jury verdict if it is, in the opinion of the judge, not reasonable given the applicable law and the facts presented. And, in a criminal jury trial, it works only in the defendants favor; a judge can throw out a conviction reached by the jury if it is not reasonable (though in practice, if a conviction o
Re: (Score:2)
In practice, anything that reaches a jury is a confused mess, everything else having been decided by the judge or stipulated by the lawyers.
Jury nullification can work both ways, for and against a defendant, or for and against the prosecution. They're both asserting points in the law, and the jury can ignore the law in either direction.
If the judge had enough of a clue as to what was really the answer, he probably wouldn't let it get that far. So a nullification would have to be pretty blatant for him to
Re: (Score:2)
Jury nullification can work both ways, for and against a defendant, or for and against the prosecution.
No, actually, it can't.
They're both asserting points in the law, and the jury can ignore the law in either direction.
Sure, but the jury acting inconsistently with the law in either direction in a civil trial can be overridden by the court (either the trial court or appellate court), and the jury ignoring the law in a way which benefits the prosecution in a criminal trial can similarly be overridden. The power of the jury to nullify the law -- to ignore it in a manner which cannot be overridden -- is restricted to one-direction (favoring the defense) in one context (criminal trials.)
Re: (Score:2)
Actually it doesn't work both ways. It only nullifies laws, it never makes new ones.
Re: (Score:2)
Wouldn't happen anyway. Anyone who knows about that gets screened out.
Re: (Score:2)
Re: (Score:3)
Hey everyone, if you switch the spark plug wires on cylinders 2 and 4 you can turn a Mustang into a Mustang GT with an extra 15 hp.
Would that be illegal? No, even if Ford didn't like it.
Re: (Score:2)
Stealing would indeed be bad. Can you explain how one could use this jailbreak to go into Sony head quarters and steal this IP so that Sony no longer had it?
Otherwise I think you mean copy this IP infringing on Sony's government granted monopoly.
Re: (Score:2)
Like a hammer, there are all sorts of illegal things that can be done with them, yet they are legal to own and talk about...
Re: (Score:2)
Re: (Score:2)
If Ford makes an extra grand off the GT package then they are losing money.
There are legitimate uses for the hack. I do not have to prove that no one is gonna use it illegitimately. It would be like saying Ford cannot sell cars because some people might use them in a robbery.
Someone WILL use this improperly. That's not geohot's fault though.
Re: (Score:2)
Re: (Score:2)
What did he do wrong exactly?
Let people use their own hardware however they see fit. Oh no what a fucking tragedy.
Re: (Score:2)
Since you obviously miss the point, that he has broken NO LAWS.
Just because he did something that others can use to do bad does not mean he is GUILTY OF ANYTHING.
You're the one here who does not seem to understand.
Re: (Score:2)
Re: (Score:2)
It does not have to be 0%. This only has to have significant non-infringing use. Like the VHS case proved. This does since it adds homebrew and the ability to run alternate OSes.
Re: (Score:2)
Re: (Score:2)
Millions of dollars in what now?
In any case your hardware you do with it as you like. Not his problem what people would do with it.
Oh and you are a tool.
Re:Isn't that kinda like saying... (Score:5, Informative)
Because other guys are doing it, it's okay for me to do it, or in this case continue doing it? I don't see that as a particularly good defense.
which was most likely in reference to this comment by geohot's Lawyer:
On the face of Sony’s Motion, a TRO serves no purpose in the present matter. The code necessary to 'jailbreak' the Sony Playstation computer is on the internet. That cat is not going back in the bag.
Note: TRO = Temporary Restraining Order.
Sony's shy^H^H lawyers tried to get the court to give them all of geohot's computer equipment before geohot had time to mount a defense. He was only informed of this attempt a few hours before the hearing.
The "cat is out of the bag" statement wasn't addressing whether geohot is guilty or innocent of any crime. It was addressing the lame attempt by Sony to confiscate all of geohot's computers. Sony said they needed the court to take this rapid and extraordinary action to keep the world from learning how to perform the exploit. Since the information was already on the web, seizing geohot's computers would not stem the tide.
TL;DR: Sony was being an asshole and tried to use a temporary restraining order as an excuse to steal all of geohot's computers. Geohot had already lawyered up and was prepared for this sneak attack.
YAESF (yet another epic Sony FAIL).
Re:Isn't that kinda like saying... (Score:4, Informative)
I suppose, but still seems darn weak.
It directly addresses the legal standard for issuing a temporary restraining order, which is issued before the legalities at issue are determined to prevent ongoing harm that might otherwise occur while the case is progressing. That there is no reasonable basis to believe that the TRO would prevent any ongoing harm is the strongest possible argument against a TRO, not a "darn weak" one.
Re: (Score:2)
Re: (Score:2)
Then I shall stand corrected. I still think it is kinda silly, but I didn't make the laws, and I'm sure in other cases it makes more clear sense.
The important thing to remember about a TRO is that it isn't the final judgement in the case; instead, it's simply something that is available because lawsuits take time, and so courts need a way to restrain actions that might cause irreparable harm while the case is working its way through the system.
It's not primarily about what people ought to be allowed to do in a general, permanent sense (because those questions are what is decided in the resolution of the case), it's about what do we need to do to make i
Re: (Score:3)
In theory. In practice, particularly when there's a large disparity of power between the two parties, getting the TRO is as good as winning. You get the TRO, then drag the case out until the defendant runs ou
Especially because the PS3 is a phone in a way (Score:4, Informative)
Please allow me to summarize replies to similar questions above yours: "Because a PS3 is not a wireless telephone."
And the replies to that: "Yes it is. PS3 has voice chat and Wi-Fi."
Re: (Score:2)
You mean bribe them?
That is what MS did, and as usual microsofties they sold out and they sold out cheap.
Bundling data files with PyInstaller usually works quite well. However, there can be issues with using relative paths, particularly when bundling cross-platform applications, as different systems have different standards for dealing with data files. If you hit these problems they can unfortunately be quite difficult to debug.
Thankfully, Qt comes to the rescue with its resource system. Since we're using Qt for our GUI we can make use of Qt Resources to bundle, identify and load resources in our application. The resulting bundled data is included in your application as Python code, so PyInstaller will automatically pick it up and we can be sure it will end up in the right place.
In this section we'll look at how to bundle files with our application using the Qt Resources system.
The QRC file
The core of the Qt Resources system is the resource file or QRC. The
.qrc file is a simple XML file, which can be opened and edited with any text editor.
You can also create QRC files and add and remove resources using Qt Designer, which we'll cover later.
Simple QRC example
A very simple resource file is shown below, containing a single resource (our application icon).
<!DOCTYPE RCC>
<RCC version="1.0">
    <qresource prefix="icons">
        <file alias="hand_icon.ico">hand_icon.ico</file>
    </qresource>
</RCC>

The alias is the name by which this resource will be referenced internally. For example, if we wanted to use the name
application_icon.ico internally, we could change this line to
<file alias="application_icon.ico">hand_icon.ico</file>
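Since the .qrc format is plain XML, you can also generate it programmatically, which is handy when you have many icon files and don't want to maintain the XML by hand. As a rough illustration (the make_qrc helper below is hypothetical, not part of the tutorial), this sketch builds the same structure with Python's standard library:

```python
import xml.etree.ElementTree as ET

def make_qrc(prefix, files):
    """Build a QRC document mapping (alias, filename) pairs under one prefix namespace."""
    rcc = ET.Element("RCC", version="1.0")
    qresource = ET.SubElement(rcc, "qresource", prefix=prefix)
    for alias, filename in files:
        entry = ET.SubElement(qresource, "file", alias=alias)
        entry.text = filename  # The element text is the on-disk file path.
    return "<!DOCTYPE RCC>\n" + ET.tostring(rcc, encoding="unicode")

qrc = make_qrc("icons", [("hand_icon.ico", "hand_icon.ico")])
print(qrc)
```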
Using a QRC file
To use a
.qrc file in your application you first need to compile it to Python. PySide2 ships with a command line tool for this, pyside2-rcc, which takes the .qrc file as input and outputs a Python module containing the compiled data:

pyside2-rcc resources.qrc -o resources.py

Import the resulting module into your application and you can then reference resources by their resource paths, for example:
app.setWindowIcon(QtGui.QIcon(':/icons/hand_icon.ico'))
The prefix
:/ indicates that this is a resource path. The first name "icons" is the prefix namespace and the filename is taken from the file alias, both as defined in our
resources.qrc file.
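To make the path structure explicit, here is a tiny hypothetical helper (not from the tutorial) that joins a prefix namespace and a file alias into a resource path:

```python
def resource_path(prefix, alias):
    """Join a prefix namespace and a file alias into a Qt resource path."""
    # Qt resource paths always start with ':/' followed by the prefix and alias.
    return ":/{}/{}".format(prefix.strip("/"), alias)

print(resource_path("icons", "hand_icon.ico"))  # :/icons/hand_icon.ico
```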
The updated application is shown below.
from PySide2 import QtWidgets, QtGui

try:
    from ctypes import windll  # Only exists on Windows.
    myappid = 'mycompany.myproduct.subproduct.version'
    windll.shell32.SetCurrentProcessExplicitAppUserModelID(myappid)
except ImportError:
    pass

import sys
import resources  # Import the compiled resource file.


class MainWindow(QtWidgets.QMainWindow):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.setWindowTitle("Hello World")
        l = QtWidgets.QLabel("My simple app.")
        l.setMargin(10)
        self.setCentralWidget(l)
        self.show()


if __name__ == '__main__':
    app = QtWidgets.QApplication(sys.argv)
    app.setWindowIcon(QtGui.QIcon(':/icons/hand_icon.ico'))
    w = MainWindow()
    app.exec()
You can run the build as follows,
pyinstaller --windowed --icon=hand_icon.ico app.py
or re-run it using your existing
.spec file.
pyinstaller app.spec
If you run the resulting application in
dist you should see the icon is working as intended.
The hand icon showing on the toolbar
The advantage of this method is that your data files are guaranteed to be bundled as they are treated as code — PyInstaller finds them through the imports in your source. You also don't need to worry about platform-specific locations for data files. You only need to take care to rebuild the
resources.py file any time you add or remove data files from your project.
Of course, this approach isn't appropriate for any files you want to be readable or editable by end-users. However, there is nothing stopping you from combining this approach with the previous one as needed.
Example Build: Bundling Qt Designer UIs and Icons
We've now managed to build a simple app with a single external icon file as a dependency. Now for something a bit more realistic!
In complex Qt applications it's common to use Qt Designer to define the the UI, including icons on buttons and menus. How can we distribute UI files with our applications and ensure the linked icons continue to work as expected?
Below is the UI for a demo application we'll use to demonstrate this. The app is a simple counter, which allows you to increase, decrease or reset the counter by clicking the respective buttons. You can also download the source code and associated files.
The counter UI created in Qt Designer
The UI consists of a
QMainWindow with a vertical layout containing a single
QLabel and 3
QPushButton widgets. The buttons have Increment, Decrement and Reset labels, along with icons from the Fugue set by p.yusukekamiyamane. The application icon is a free icon from Freepik.
The UI was created in Qt Designer as described in this tutorial.
Resources
The icons in this project were added to the buttons from within Qt Designer. When doing this you have two options —
- add the icons as files, and ensure that the relative path locations of icons are maintained after installation (not always possible, or fun)
- add the icons using the Qt Resource system
Here we're using approach (2) because it's less prone to errors.
The method for Qt Resources in your UI differs depending on whether you're using Qt Creator or Qt Designer standalone. The steps are described below.
Adding Resources in Qt Designer (Preferred)
Qt Designer resource browser
In the resource editor view you can open an existing resource file by clicking on the document folder icon (middle icon) on the bottom left.
Qt Designer resource editor
In the screenshot above we've included the
.ui file in the QResource bundle, but I recommend you compile the UI file to Python instead (see below).
This message will be shown when creating a new Qt Creator project in an existing folder
To add resources to your existing project, select the "Edit" view on the left hand panel. You will see a file tree browser in the left hand panel. Right-click on the folder and choose "Add existing files…" and add your existing
.qrc file to the project.
Qt Creator "Edit" view, showing a list of files in the project
Setting the icon for a button in Qt Designer (or Qt Creator)
The Resource chooser window that appears allows you to pick icons from the resource file(s) in the project to use in your UI.
Selecting a resource in the Qt Designer resource dialog
Selecting the icons from the resource file in this way ensures that they will always work, as long as you compile and bundle the compiled resource file with your app.
Compiling the UI file
The simplest way to bundle your UIs using resources is to compile them into Python. The resulting Python file will be automatically packaged by PyInstaller (once it's imported) and will also itself automatically load the related resources file.
pyside2-uic mainwindow.ui -o MainWindow.py
You may want to wrap this in a function if you're using it a lot.
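For instance, a small wrapper along these lines (hypothetical, not part of the tutorial) builds the pyside2-uic command and only executes it on request:

```python
import shutil
import subprocess

def compile_ui(ui_path, py_path, tool="pyside2-uic", run=False):
    """Build the command to compile a Qt Designer .ui file to Python; optionally run it."""
    cmd = [tool, ui_path, "-o", py_path]
    if run:
        # Fail with a clear error if the compiler isn't installed.
        if shutil.which(tool) is None:
            raise FileNotFoundError(tool + " not found on PATH")
        subprocess.run(cmd, check=True)
    return cmd

# Mirrors the command above: pyside2-uic mainwindow.ui -o MainWindow.py
print(compile_ui("mainwindow.ui", "MainWindow.py"))
```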
The finished app
Below is our updated
app.py which loads the
mainwindow.ui file and defines 3 custom slots to increment, decrement and reset the number. These are connected to signals of the widgets defined in the UI (
btn_inc,
btn_dec and
btn_reset for the 3 buttons respectively) along with a method to update the displayed number (
label for the
QLabel).
from PySide2 import QtWidgets, QtCore, QtGui
import sys

from MainWindow import Ui_MainWindow


class MainWindow(QtWidgets.QMainWindow, Ui_MainWindow):
    def __init__(self, *args, **kwargs):
        super(MainWindow, self).__init__(*args, **kwargs)

        # Load the UI
        self.setupUi(self)

        # Set value of counter
        self.counter = 0
        self.update_counter()

        # Bind
        self.btn_inc.clicked.connect(self.inc)
        self.btn_dec.clicked.connect(self.dec)
        self.btn_reset.clicked.connect(self.reset)

    def update_counter(self):
        self.label.setText(str(self.counter))

    def inc(self):
        self.counter += 1
        self.update_counter()

    def dec(self):
        self.counter -= 1
        self.update_counter()

    def reset(self):
        self.counter = 0
        self.update_counter()


if __name__ == '__main__':
    app = QtWidgets.QApplication(sys.argv)
    app.setWindowIcon(QtGui.QIcon(':/icons/counter.ico'))
    main = MainWindow()
    main.show()
    sys.exit(app.exec_())
If you have made any changes to the
resources.qrc file, or haven't compiled it yet, do so now using
pyside2-rcc resources.qrc -o resources.py
If you run this application you should see the following window.
Counter app, with all icons showing
We'll build our app as before using the command line to perform an initial build and generate a
.spec file for us. We can use that
.spec file in future to repeat the build.
pyinstaller --windowed --icon=resources/counter.ico app.py
PyInstaller will analyse our
app.py file, bundling all the necessary dependencies, including our compiled
resources.py and
MainWindow.py into the
dist folder.
Once the build process is complete, open the
dist folder and run the application. You should find it works, with all icons — from the application itself, through to the icons embedded in our UI file — working as expected.
Counter app, with all icons showing
This shows the advantage of using this approach — if your application works before bundling, you can be pretty sure it will continue to work after.
Building a Windows Installer with Installforge
The applications so far haven't done very much. Next we'll look at something more complete — our custom Piecasso Paint application. The source code is available to download here or in the 15 minute apps repository.
The source code is not covered in depth here, only the steps required to package the application. The source and more info is available in the 15 minute apps repository of example Qt applications. The custom application icons were created using icon art by Freepik.
Prepared for packaging the project has the following structure (truncated for clarity).
. ├── paint.py ├── Piecasso.spec ├── mainwindow.ui ├── MainWindow.py ├── README.md ├── requirements.txt ├── resources.qrc ├── resources_rc.py ├── screenshot-paint1.jpg ├── screenshot-paint2.jpg ├── icons │ ├── blue-folder-open-image.png │ ├── border-weight.png │ ├── cake.png │ ├── disk.png │ ├── document-image.png │ ├── edit-bold.png │ ... └── stamps ├── pie-apple.png ├── pie-cherry.png ├── pie-cherry2.png ├── pie-lemon.png ├── pie-moon.png ├── pie-pork.png ├── pie-pumpkin.png └── pie-walnut.png
The main source of the application is in the
paint.py file.
Packaging Resources
The resources for Piecasso are bundled using the Qt Resource system, referenced from the
resources.qrc file in the root folder. There are two folders of images,
icons which contains the icons for the interface and
stamps which contains the pictures of pie for "stamping" the image when the application is running.
The
icons were added to the UI in Qt Designer, while the stamps are loaded in the application itself using Qt Resource paths, e.g.
:/stamps/<image name>.
The
icons folder also contains the application icon, in
.ico format.
The UI in Qt Designer
The UI for Piecasso was designed using Qt Designer. Icons on the buttons and actions were set from the Qt resources file already described.
Piecasso UI, created in Qt Designer
The resulting UI file was saved in the root folder of the project as
mainwindow.ui and then compiled using the UI compiler to produce an importable
.py file, as follows.
pyside2-uic mainwindow.ui -o MainWindow.py
For more on building UIs with Qt Designer see the introductory tutorial.
This build process also adds imports to
MainWindow.py for the compiled version of the resources used in the UI, in our case
resources.qrc. This means we do not need to import the resources separately into our app. However, we still need to build them, and use the specific name that is used for the import in
MainWindow.py, here
resources_rc.
pyside2-rcc resources.qrc -o resources_rc.py
pyside2-uic follows the pattern
<resource name>_rc.py when adding imports for the resource file, so you will need to follow this when compiling resources yourself. You can check your compiled UI file (e.g.
MainWindow.py) to double-check the name of the import if you have problems.
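That naming convention can be checked mechanically. The following sketch (the rc_module_name helper is hypothetical) derives the module name the compiled UI file will try to import from a .qrc filename:

```python
from pathlib import Path

def rc_module_name(qrc_path):
    """Derive the '<basename>_rc' module name that the compiled UI file imports."""
    return Path(qrc_path).stem + "_rc"

print(rc_module_name("resources.qrc"))  # resources_rc
```

If this doesn't match the name you used when compiling the resources, the compiled UI file will fail with an ImportError at startup.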
Building the app
With all the above setup, we can build Piecasso as follows from the source folder.
pyinstaller --windowed --icon=icons/piecasso.ico --name Piecasso paint.py
If you download the source code, you will also be able to run the same build using the provided
.spec file.
pyinstaller Piecasso.spec
This packages everything up ready to distribute in the
dist/Piecasso folder. We can run the executable to ensure everything is bundled correctly, and see the following window, minus the terrible drawing.
Piecasso Screenshot, with a poorly drawn cat
Creating an installer
So far we've used PyInstaller to bundle applications for distribution. Now that we have a more complex application, we'll next look at how we can take our
dist folder and use it to create a functioning Windows installer.
To create our installer we'll be using a tool called InstallForge. InstallForge is free and you can download the installer from this page.
The InstallForge configuration is also in the Piecasso source folder,
Piecasso.ifp however bear in mind that the source paths will need to be updated for your system.
Another popular tool is NSIS, which is a scriptable installer, meaning you configure its behaviour by writing custom scripts. If you're going to be building your application frequently and want to automate the process, it's definitely worth a look.
We'll now walk through the basic steps of creating an installer with InstallForge. If you're impatient, you can download the Piecasso Installer for Windows. In InstallForge's Files section, add the contents of the
dist/Piecasso folder produced by PyInstaller. The file browser that pops up allows multiple file selections, so you can add them all in a single go; however, you need to add folders separately. Click "Add Folder…" and add any folders under
dist/Piecasso.
InstallForge Files view, add all files & folders to be packaged
Once you're finished, scroll through the list to the bottom and ensure that the folders are listed to be included. You want all files and folders under
dist/Piecasso to be present, but not the
dist/Piecasso folder itself.
Piecasso in the Start Menu on Windows 10
Wrapping up
In this tutorial we've covered how to build your PySide2 applications into a distributable EXE using PyInstaller. Following this we walked through the steps of using InstallForge to build an installer for the app. Following these steps you should be able to package up your own applications and make them available to other people.
|
https://www.pythonguis.com/tutorials/packaging-data-files-pyside2-with-qresource-system/
|
CC-MAIN-2022-40
|
refinedweb
| 2,368
| 57.67
|
Understanding Namespaces in the WordPress Hook System
Hooks are a fundamental concept for WordPress developers. In previous articles on SitePoint, we’ve learned what hooks are and their importance, the two types of hooks:
actions and
filters with code examples of how they work, and an alternative way of firing actions and filters events and how to hook static and non-static class methods to actions and filters.
In this article, I will cover how to hook methods of an instantiated class (object) to actions and filters, how to integrate a namespaced class method into a hook, the caveats of using namespaces in the WordPress hook system, and solutions to them.
Hooking Object Methods
Assume you were tasked by your employer to build an ad manager plugin for a large news website, to make ad insertion into news content seamless. Here is how you might go about building it.
You would create an
AdManager class with a number of methods that contain the various ad-networks ad-code.
class AdManager {

    /**
     * AdSense unit code.
     */
    public function adsense() { ?>
        <script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
        <ins class="adsbygoogle"
             style="display:inline-block;width:336px;height:280px"
             data-ad-client="ca-pub-XXXXXXXXXXXXXXXX"
             data-ad-slot="XXXXXXXXXX"></ins>
        <script>
            (adsbygoogle = window.adsbygoogle || []).push({});
        </script>
    <?php }

    /**
     * Buysellads ad code.
     */
    public function buysellads() {
        // ...
    }
}
Say a website theme has an
action called
before_post_content that is fired before the post content is displayed and you want to hook the
adsense method to it to display the ad before any post content. How would you go about it?
You are trying to hook the method to an action outside of the class unlike the examples we’ve seen in part 2 where it was done in the class constructor like so:
public function __construct() { add_action( 'before_post_content', array( $this, 'adsense' ) ); }
To hook the
adsense method to the
before_post_content action outside of the class (probably in the
functions.php file of the active website theme) in order to display the Google AdSense ads before every post’s content, you will have to replace
$this with an instance of the class.
add_action( 'before_post_content', array( new AdManager(), 'adsense' ) );
And say the class includes a method that returns a singleton instance of the class.
class AdManager {

    // ...

    /**
     * Singleton class instance.
     *
     * @return AdManager
     */
    public static function get_instance() {
        static $instance = null;

        if ( $instance == null ) {
            $instance = new self();
        }

        return $instance;
    }
}
Here is how the
adsense method can be hooked to the
before_post_content action.
add_action( 'before_post_content', array( AdManager::get_instance(), 'adsense' ) );
Namespaces
The WordPress hook system was developed at a time when there was no namespace feature in WordPress. As a result, you might find it difficult to hook a namespaced function and class method to an
action and
filter.
Say your
AdManager class has a namespace of
SitePoint\Plugin as follows.
namespace SitePoint\Plugin; class AdManager { // ... }
To hook the
adsense method of the
AdManager class to the
before_post_content action, you must prepend the class name with the namespace like so:
add_action( 'before_post_content', array( SitePoint\Plugin\AdManager::get_instance(), 'adsense' ) );
If the
class and the
add_action function call are in the same PHP file, namespaced by
SitePoint\Plugin, prepending the namespace to the class name is unnecessary because they are covered by the same namespace.
Enough of class examples, let’s now see a plain function.
Say you have the following namespaced function to be hooked to the
wp_head action.
namespace SitePoint\Plugin; function google_site_verification() { echo '<meta name="google-site-verification" content="ytl89rlFsAzH7dWLs_U2mdlivbrr_jgV4Gq7wClHDUJ8" />'; }
Here’s how it can be done with the namespace prepended to the function:
add_action( 'wp_head', 'SitePoint\Plugin\google_site_verification' );
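A related idiom (my own suggestion, not something quoted from the article's sources) is to use PHP's magic __NAMESPACE__ constant instead of hard-coding the namespace string, so the hook callback stays correct if the namespace is ever renamed. In the sketch below, the add_action stub is a stand-in for WordPress's real function, just so the example runs on its own:

```php
<?php
namespace SitePoint\Plugin;

// Stand-in for WordPress's add_action(), only so this sketch is self-contained.
function add_action( $hook, $callback ) {
    return $callback;
}

function google_site_verification() {
    echo '<meta name="google-site-verification" content="..." />';
}

// __NAMESPACE__ expands to 'SitePoint\Plugin', so the callback string
// always matches the file's current namespace.
$callback = add_action( 'wp_head', __NAMESPACE__ . '\google_site_verification' );

var_dump( $callback );                // the fully qualified function name
var_dump( is_callable( $callback ) ); // bool(true)
```

If the namespace is later renamed, this hook registration keeps working without edits.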
My Namespace Horror with the Hook System
In my Admin Bar & Dashboard Access Control plugin, I registered an uninstall hook that deletes the plugin option when it’s uninstalled.
Something as easy as the following lines of code shouldn’t be a problem where
PP_Admin_Bar_Control is the class name and
on_uninstall is the method called when uninstalled.
register_uninstall_hook( __FILE__, array( 'PP_Admin_Bar_Control', 'on_uninstall' ) );
To be sure it works, I tried uninstalling the plugin to see if the plugin option will be deleted but to my surprise, I got the following error.
The plugin generated 2137 characters of unexpected output during activation.
Mind you, here is how the class and
register_uninstall_hook function are defined with a namespace of
ProfilePress\PP_Admin_Bar_Control.
namespace ProfilePress\PP_Admin_Bar_Control;

register_uninstall_hook( __FILE__, array( 'PP_Admin_Bar_Control', 'on_uninstall' ) );

class PP_Admin_Bar_Control {

    // ...

    /** Callback to run when the uninstall hook is called. */
    public static function on_uninstall() {
        if ( ! current_user_can( 'activate_plugins' ) ) {
            return;
        }

        delete_option( 'abdc_options' );
    }

    // ...
}
Can you spot the reason why the
on_uninstall class method wasn’t triggered when the plugin is uninstalled?
You would think since the
register_uninstall_hook function is defined under the namespace, the class should be covered by it, but that is not the case. You still need to prepend the namespace to the class as follows for it to work.
register_uninstall_hook( __FILE__, array( 'ProfilePress\PP_Admin_Bar_Control\PP_Admin_Bar_Control', 'on_uninstall' ) );
I actually had a hard time figuring out the problem, and I don't want you to go through the same stress and head-banging that I went through.
Conclusion
Quirks like this make some developers cringe and stay away from WordPress. We shouldn’t forget WordPress was developed at the time when PHP lacked all the language improvement and features it has now. I always try to figure out how to circumvent these quirks and then teach it to people.
I hope I have been able to demystify the hook system in WordPress. If you have any questions or contributions, please let us know in the comments.
|
https://www.sitepoint.com/understanding-namespaces-wordpress-hook-system/
|
CC-MAIN-2022-05
|
refinedweb
| 901
| 51.68
|
I am trying to compare two different lists of Data in Excel and do a
comparison on two other columns using VBA Column A in Sheet1 is not in
order and can be longer or shorter than Column A in Sheet2.... If Sheet 2
Days is > Sheet 1 Days I would like to Bold Sheet1 Days. Here is a short
example of the data on the two lists.
Sheet1: Col A, Col B; Sheet2: Col A
View Replies
I got one problem from comparing database schema as i use Red gate SQL
Compare 6 , after initialization of the compare databases error is coming
as following
"Index was outside the bounds of the
array".
Please provide your valuable comments to get
resolve this issue.
Hi can somebody help me? i have a class Node, I want to compare the
field Node.PathCost. I can't not add them to the priority queue to compare.
Here is what I did:
public class Node implements Comparator<Node> {
    // Node fields
    String[] State = new String[2], ParentState = new String[2];
    // State = current state which consist of two string:"","" S
I'm interested in both style and performance considerations. My choice
is to do either of the following ( sorry for the poor formatting but the
interface for this site is not WYSIWYG ):
string value = "ALPHA";
switch ( value.ToUpper() )
{
    case "ALPHA":
        // do something
        break;
    case "BETA":
        // do something else

In both of the above scenarios, B
How can i compare two MS ACCESS 2007 databases.Both databases contain
same tables with same feilds ad structure.i need to compare the record
values between two databases to detect any difference in record
values.
ACCESS 2007 Database1
serial no. | NAME
| ADDRESS
How do I compare two hashes in Perl without using Data::Compare?
I need to compare a string to multiple other constant strings in c. I am
curious which is faster, to hash the string I am going to compare and
compare it to all the other constant string hashes or just compare the
strings as strings. thank you in advance
thank you for the
answers I am going to be doing many comparisons. can anyone give me a good,
fast, low resource intensive algorit
I am trialing Beyond Compare version 3.When I use the compare
folders option and put in 2 existing non-empty folders, it shows no
content? The window is empty. What am I doing wrong?
Let's say I want to (using Visual Studio) run a schema comparison
between two databases, restore one of them from a backup, and run the
schema comparison again. The schema comparison maintains a connection to
the database, and SQL Server won't let me run the restore without removing
all connections. Is there a way I can force the schema comparison to
disconnect without closing it?
|
http://bighow.org/tags/compare/1
|
CC-MAIN-2017-22
|
refinedweb
| 471
| 69.82
|
4 Important Changes In Vue.js 2.4.0
- 2019-04-05 08:17 AM
- 1028
Vue.js 2.4.0 has been released, with an abundance of new features, fixes and optimisations.
In this article, I’ll give you a breakdown of four new features that I think are the most interesting:
- Server-side rendering async components
- Inheriting attributes in wrapper components
- Async component support For Webpack 3
- Preserving HTML comments in components
Note: this article was originally posted here on the Vue.js Developers blog on 2017/07/17
1. Server-Side Rendering Async Components
Before Vue 2.4.0, async components were not able to be server rendered; they were just ignored in the SSR output and left to the client to generate. This gave async components a significant downside, and fixing the issue allows for much better PWAs with Vue.
Async Components
Async components are really handy. If you’ve been following this blog I’ve been writing about them a lot lately. In a nutshell, they allow you to code-split your app so non-essential components (modals, tabs, below-the-fold content, other pages etc) can load after the initial page load, thus allowing a user to see the main page content quicker.
Let’s say you decided to load below-the-fold content asynchronously. Your main component might look like this:
<template>
  <div id="app">
    <!--Above-the-fold-->
    <sync-component></sync-component>
    <!--Below-the-fold-->
    <async-component></async-component>
  </div>
</template>
<script>
import SyncComponent from './SyncComponent.vue';

const AsyncComponent = () => import('./AsyncComponent.vue');

export default {
  components: {
    SyncComponent,
    AsyncComponent
  }
}
</script>
MyComponent.Vue
By using Webpack’s dynamic
import function,
AsyncComponent would be loaded by AJAX from the server after the page loads. The downside is that while it’s loading the user will likely only see a spinner or blank space.
This can be improved with server-side rendering, since the async component markup would be rendered on the initial page load, which is going to be a lot better for UX than a spinner or blank space.
But until Vue 2.4.0, this wasn’t possible. The SSR output of this main component would just look like this:
<div id="app" data-server-rendered="true">
  <!--Above-the-fold-->
  <div>
    Whatever sync-component renders as...
  </div>
  <!--Below-the-fold-->
  <!---->
</div>
index.html
As of Vue 2.4.0, async components will be included in the SSR output, so you can code split your Vue apps to your heart's content, without the UX debt.
2. Inheriting Attributes in Wrapper Components
One annoying thing about props is that they can only be passed from parent to child. This means if you have deeply nested components you wanted to pass data to, you have to bind the data as props to each of the intermediary components as well:
<parent-component :passdown="passdown">
  <child-component :passdown="passdown">
    <grand-child-component :passdown="passdown">
      Finally, here's where we use {{ passdown }}!
index.html
That’s not so bad for one or two props, but in a real project you may have many, many more to pass down.
You can get around this problem using an event bus or Vuex, but Vue 2.4.0 offers another solution. Actually, it’s part of two separate but related new features: firstly, a flag for components called
inheritAttrs, and secondly, an instance property
$attrs. Let’s go through an example to see how they work.
Example
Say we bind two attributes on a component. This component needs attribute
propa for its own purposes, but it doesn’t need
propb; it’s just going to pass that down to another nested component.
<my-component :propa="propa" :propb="propb"></my-component>
index.html
In Vue < 2.4.0, any bound attribute not registered as a prop would simply be rendered as a normal HTML attribute. So if your component definition looks like this:
<template> <div>{{ propa }}</div> </template> <script> export default { props: [ 'propa' ] } </script>
MyComponent.vue
It’ll render like this:
<div propb="propb">propa</div>
index.html
Note how
propb was just rendered as a normal HTML attribute. If you want this component to pass
propb down, you’ll have to register it as a prop, even if the component has no direct need for it:
export default { props: [ 'propa', 'propb' // Only registering this to pass it down :( ] }
script.js
This obscures the intended functionality of the component and makes it hard to keep components DRY. In Vue 2.4.0, we can now add the flag
inheritAttrs: false to the component definition and the component will not render
propb as a normal HTML attribute:
<div>propa</div>
index.html
Passing down
propb
propb doesn’t disappear, though, it’s still available to the component in the instance property
$attrs (which has also been added in Vue 2.4.0). This instance property contains any bound attributes not registered as props:
<template>
  <div>
    {{ propa }}
    <grand-child v-bind="$attrs"></grand-child>
  </div>
</template>
<script>
export default {
  props: [ 'propa' ],
  inheritAttrs: false
}
</script>
MyComponent.vue
Imagine you need to pass hundreds of props from a parent down through several layers of nested components. This feature would allow each intermediary component template to be declared much more concisely in the parent-scope:
<input v-bind="$attrs">
index.html
Oh, and this also works exactly the same when passing data up by binding listeners with
v-on:
<div>
  <input v-on="$listeners">
</div>
index.html
3. Async Component Support For Webpack 3
Scope hoisting is one of the key features of the recently released Webpack 3. Without going into too much detail, in Webpack 1 and 2, bundled modules would be wrapped in individual function closures. These wrapper functions are slow to execute in the browser compared to this new scope hoisting method, which is made possible by the new ES2015 module syntax.
Two weeks ago, vue-loader v13.0.0 was released, and it introduced a change where .vue files would be outputted as ES modules, allowing them to take advantage of the new scope hoisting performance advantages.
Unfortunately, ES modules export differently, so the neat async component syntax you can use for code splitting in a Vue project e.g.:
const Foo = () => import('./Foo.vue');
script.js
Would have to be changed to this:
const Foo = () => import('./Foo.vue').then(m => m.default);
script.js
Vue 2.4.0, however, automatically resolves ES modules’ default exports when dealing with async components, allowing the previous, more terse syntax.
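A rough illustration of what that automatic resolution does. This is a plain Node sketch and a simplification of Vue's internal logic, not its actual code:

```javascript
// When a dynamic import() resolves, a transpiled ES module arrives as an
// object whose real payload sits on `.default`. Vue 2.4+ unwraps this
// automatically for async components; conceptually:
function resolveDefault(moduleOrComponent) {
  return moduleOrComponent && moduleOrComponent.__esModule
    ? moduleOrComponent.default
    : moduleOrComponent;
}

const esModule = { __esModule: true, default: { name: 'Foo' } };
const plain = { name: 'Bar' };

console.log(resolveDefault(esModule).name); // Foo
console.log(resolveDefault(plain).name);    // Bar
```

Either way, the component definition ends up in the same place, which is why the terse arrow syntax works again.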
4. Preserving HTML Comments In Components
Okay, this feature is not too significant, but I still think it’s cool. In Vue < 2.4.0, comments were stripped out of components when they were rendered:
<template> <div>Hello <!--I'm a comment.--></div> </template>
index.html
Renders as:
<div>Hello</div>
index.html
The problem is that sometimes comments are needed in the rendered page. Some libraries might have a need for this, for example, using comments as a placeholder.
In Vue 2.4.0, you can set the comments option to preserve them:
<template> <div>Hello <!--I'm a comment.--></div> </template> <script> export default { comments: true } </script>
MyComponent.vue
|
https://school.geekwall.in/p/H1PZJqVKN/4-important-changes-in-vue-js-2-4-0
|
CC-MAIN-2019-39
|
refinedweb
| 1,170
| 56.86
|
I was reading a Tkinter tutorial and I've noticed they use

Code:
from Tkinter import *

instead of the usual

Code:
import Tkinter

Is there any difference, or is it equivalent?
When you do 'import Tkinter' you will access all the data objects through the Tkinter namespace. When you do a 'from Tkinter import *', you import all non-private data objects into your current namespace. Normally you shouldn't use 'from <something> import *', because it pollutes the namespace; but practicality beats purity...
I haven't hacked in python for a while, but IIRC, the difference between this statement:
from somemodule import *
and this statement:
import somemodule
is that the first one imports all the functions implicitly within the namespace, whereas with the second one only public functions/objects are imported, so you may still have to qualify functions and objects with the module name, when calling them. For example:
Code:
#!/usr/bin/python
from sys import *
print "Hello world"
exit(1)

and

Code:
#!/usr/bin/python
import sys
print "Hello world"
sys.exit(1)

Notice that in the first case, I don't have to qualify exit(), but in the second case, I do.
Last edited by Scorpions4ever; July 21st, 2003 at 01:39
where they both do roughly the same thing (program wise) its really just a personal choice.
from module import functions - imports the module and allows access to its functions/classes without the use of the module name prefix.
import module - import the module for use. My personal favourate simply because its easier to read.
Hope this helps,
Mark.
Just thought I'd add my own opinion here
I'd always go with "import module", since it makes reading the code easier (because you always know that "sys.exit()" is a sys method, not a builtin or one of your own), and because it means you don't accidentally overwrite your own functions/methods when importing lots of modules.
Keeps namespace nice and tidy
And if you do use "from module import *", try to explictly import particular functions/methods instead, since it will take up less memory and again be safer/nicer (i.e. "from module import x, y, z").
Personally, I would do:

Code:
import Tkinter as tk

to make the code a little less excessively expressive.
|
http://forums.devshed.com/python-programming/71389-import-vs-last-post.html
|
CC-MAIN-2017-13
|
refinedweb
| 421
| 53.61
|
Comment on Tutorial - How to Send SMS using Java Program (full code sample included) By Emiley J.
Comment Added by : Chandra Shekhar
Comment Added at : 2009-04-01 00:21:54
Comment on Tutorial : How to Send SMS using Java Program (full code sample included) By Emiley J.
Please let me know the SMSConnector number of any service provider, kindly mention the service provider name.. return type doesn't influence the overloading. Onl
View Tutorial By: Shekhar at 2010-05-17 01:22:04
3. Definitely Struts is great. I have posted an artic
View Tutorial By: Ashish at 2008-09-02 08:04:49
4. Your solution has really helped me and educated me
View Tutorial By: Tziq at 2011-01-25 08:06:06
5. excellent for a beginer in inet address ...
View Tutorial By: Tony at 2009-08-13 22:39:48
6. import java.io.File;
import java.util.Scann
View Tutorial By: sa at 2012-08-02 07:09:30
7. Can you explain with increment and decrement opera
View Tutorial By: Sathish Kumar at 2013-09-01 07:40:42
8. how to read csv file using Java
View Tutorial By: Shireesha at 2013-12-26 07:34:22
9. this code is not working properly buddies. please
View Tutorial By: hammad at 2011-06-10 01:53:39
10. we need to know how to write this coding..
View Tutorial By: lalala at 2009-07-15 20:35:16
|
https://www.java-samples.com/showcomment.php?commentid=33895
|
CC-MAIN-2019-47
|
refinedweb
| 246
| 68.57
|
The Ajax Control Toolkit provides a powerful framework for creating animations, which consists of a set of classes contained in the client AjaxControlToolkit.Animation namespace.
The Animations.js file, which contains the definitions of all the animation classes, is loaded at runtime by the AnimationExtender (or the UpdatePanelAnimationExtender) together with other script files from which it depends.
If we want to take advantage of the Toolkit's Animation framework even without relying on an Extender, we have to add the following script references through the ScriptManager control:
And if we're developing an Extender or a Script Control that makes use of the Animation framework, we can decorate the server class with the following RequiredScript attribute in order to have the Animations script file and all its dependencies automatically loaded at runtime:
The AnimationScripts type is contained in the AjaxControlToolkit namespace, inside the AjaxControlToolkit assembly.
By adding those JS scripts through the ScriptManager control, does that really work? If yes, why don't we need to add "
if (typeof(Sys) !== 'undefined') Sys.Application.notifyScriptLoaded();" at the bottom of those scripts?
Tee+: in ASP.NET AJAX 1.0 the script files are no more loaded asynchronously. Therefore there's no more need for the notification statement, unless the script files are added during a partial postback through the RegisterXXX methods of the ScriptManager.
ASP.NET Tip - Use The Label Control Correctly [Via: Haacked ] Exiting The Zone of Pain - Static Analysis...
Thank you for the invaluable tip that saved me from 10-day struggling with this nasty error:
"'AJAXControlToolkit' is undefined."
I tried all the approaches recommended on the Internet -- none worked. Your tip turned out to be the best because it has fixed the very core of this problem.
I am using the AJAX 3.5 TextBoxWatermark Extender bound to several InsertItemTemplate texboxes on the FormView in ASP 2.0.
Thanks again.
hey, thanks for the tip. def helps out. also, saw your book at the store and i am going to go and purchase it. thanks for the info.
|
http://aspadvice.com/blogs/garbin/archive/2007/02/15/Tip_3A00_-Using-the-Toolkit_2700_s-Animation-framework-without-the-AnimationExtender.aspx
|
CC-MAIN-2013-20
|
refinedweb
| 339
| 55.03
|
Compiler Warning (level 3) C4996
The compiler encountered a function that was marked with deprecated. The function may no longer be supported in a future release. You can turn this warning off with the warning pragma (example below).
C4996 is generated for the line on which the function is declared and for the line on which the function is used.
You will see C4996 if you are using members of the <hash_map> and <hash_set> header files in the std namespace. See The stdext Namespace for more information.
Some CRT and Standard C++ Library functions have been deprecated in favor of new, more secure functions.
|
https://msdn.microsoft.com/en-us/library/ttcz0bys(v=vs.90).aspx
|
CC-MAIN-2017-22
|
refinedweb
| 108
| 66.23
|
The latest version of the book is P1.0, released (14-Jun-18)
PDF page: 26
The docstring for function 'greeting' says it returns 'Hello, username.' including full stop, so the body should really be `(str "Hello, " username "."))`--Radek Kysely
- Reported in: P1.0 (16-May-18)
PDF page: 47
The function:
(defn index-filter [pred coll]
(when pred
(for [[idx elt] (indexed coll) :when (pred elt)] idx)))
Could just be:
(defn index-filter [pred coll]
(for [[idx elt] (indexed coll) :when (pred elt)] idx))
I ran both and both yielded the same results.
- Reported in: P1.0 (16-May-18)
PDF page: 54
The results of the following two functions need to be swapped.
(first {:fname "Aaron" :lname "Bedra"})
-> ([:lname "Bedra"])
The result should be
-> ([:fname "Aaron"])
(rest {:fname "Aaron" :lname "Bedra"})
-> [:fname "Aaron"]
The result should be
-> ([:lname "Bedra"])
- Reported in: P1.0 (07-Aug-18)
PDF page: 97
Paper page: 83
Link to “Understanding Clojure’s PersistentVector Implementation” by Karl Krukow doesn't work.--Radek Kysely
- Reported in: P1.0 (09-Mar-18)
PDF page: 132
The function big is missing the argument: x
(defn big? [ ] (> x 100))--Jonathan
- Reported in: B3.0 (01-Feb-18)
PDF page: 172
On your code starting below “Now let’s put it all together.”
To keep your code consistent with your previous, hard-coded make-reader and make-writer functions given on page 168, you should add the String type - and related functions - to your call to the extend-protocol macro (listing beginning on page 172)
--Stephen E Riley
- Reported in: B3.0 (01-Feb-18)
PDF page: 182
In the def of jaws, the building of a sequence of notes in the text using the construct (Note. pitch 2 duration) fails with the error:
"Error refreshing environment: java.lang.IllegalArgumentException: Unable to resolve classname: Note"
The tests pass when I use (->Note pitch 2 duration) .--Stephen E Riley
- Reported in: B4.0 (11-Feb-18)
PDF page: 215
These observations may be due to the way I am testing examples as they appear in the text. Maybe more explanation is required to introduce the concepts to a reader.
My tests are located as shown by this example, src/examples/multimethods/account.clj functions/methods are tested in test/examples/multimethods/test/account.clj. In the latter I (:require [examples.multimethods.account :refer :all]) as part of the (ns examples.multimethods.test.account) setup.
My first observation is that (alias 'acc ...) does not come through on :refer :all when I am testing. I had to put the same exact alias in my test namespace.
The second problem I encountered was testing the multimethod account-level:
(>= (:balance acct) ...) fails with a NullPointerException when I ran the tests. I got the tests to pass by changing this expression to (>= ::balance acct) ...). When would the original expression ever work when the account is set up with ::balance?
--Stephen E Riley
- Reported in: B4.0 (11-Feb-18)
PDF page: 215
On my previous submission for this page, about changing :balance to ::balance, I realize that the error is probably when you created test-savings and test-checking. I think you didn't want/need ::balance there in your def's, but should have had just :balance, like :id.
All the problems I reported disappear if I just use :balance, except my observation on (alias...)--Stephen E Riley
|
https://pragprog.com/titles/shcloj3/errata/
|
CC-MAIN-2018-34
|
refinedweb
| 564
| 65.83
|
Pixel shaders and stereoscopic controls
- Saturday, October 15, 2011 6:19 PM
I was looking earlier today to see if there was a way of writing custom pixel shaders for Metro apps. I'll explain why in a second. Unfortunately, it appears that the Effects class from WPF (and Silverlight) isn't available and so it's not possible to write pixel shaders and 'attach' them to controls as was possible in previous technologies.
What I'm looking ultimately to do is write controls in C# that would 'pop' out of the screen when using stereoscopic screens. Is there some any way of getting into the render pipeline of controls in Metro that would help me achieve what I want to do?
It appears to be possible to write stereoscopic Direct3D 11.1 applications (see here) though I don't know about the feasibility of attaching these Direct3D concepts to existing controls. I imagine that it is possible in C++ to combine Direct3D and Xaml on a page, but I wonder if it is possible to manipulate the actual per-pixel rendering. The strategy I would really enjoy would be to write the app in C# and be able to manipulate Z-depth on each control with a Dependency Property. I wouldn't mind having to write part of this in C++.
Can anyone suggest a technique for this?
All Replies
- Monday, October 17, 2011 11:47 PMModerator
You would need to write the entire application in Direct3D and implement your own control layer.
Creating a 3D steroscopic image to use with 3d screens and shutter glasses requires doing the 3D in Direct3D. Even if you could apply a pixel shader to a control there is no way to differentiate between the left and right frames.
Windows::UI::Xaml doesn't support pixel shaders on UIElement. Neither is there a way to render a Xaml control to a bitmap for use in Direct3D.
--Rob
- Tuesday, October 18, 2011 7:27 AM
Rob,
Thanks for the reply. What a shame there is no "methodology" for this. I didn't really have my hopes up.
I do hope that pixel effects are instated in the Windows.UI.Xaml namespace before release.
Steve
|
http://social.msdn.microsoft.com/Forums/en-US/winappswithcsharp/thread/e40bec6c-5178-4cb0-ba04-d5aeefd5283e
|
CC-MAIN-2013-20
|
refinedweb
| 371
| 62.88
|
picked up the Agile Web Development book and have been working through
it (apologies if this isn't the correct place for this question but the
pragprog forums don't seem to work for me). I am currently working
through the book and have reached the point where I am adding
functionality to an "Add to Cart" button. Here is the code that was
provided for the 'create' method in the line_item_controller:
def create
@cart = current_cart
product = Product.find(params[:product_id])
@line_item = @cart.line_items.build(product: product)
...
end
This results in the following error when I click "Add to Cart"
Can't mass-assign protected attributes: product
Can anyone shed some light on what is going on here (my search-fu has
failed me)? The only way that I have been able to get this to work is
by changing the code (starting at @line_item) to:
@line_item = @cart.line_items.build
@line_item.product = product
Is this correct, or is it just a band-aid fix that may cause issues going
forward?
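For what it's worth, the usual fix in Rails 3.2 (the version the 4th edition targets) is to whitelist the attribute with attr_accessible :product in the LineItem model; the build-then-assign version works but sidesteps mass-assignment protection rather than configuring it. The plain-Ruby sketch below is not Rails code; the ALLOWED whitelist is a simplified stand-in for what attr_accessible does:

```ruby
# Conceptual sketch of mass-assignment whitelisting (no Rails required).
class LineItem
  ALLOWED = [:product]   # stand-in for attr_accessible's whitelist

  attr_accessor :product

  def initialize(attrs = {})
    attrs.each do |key, value|
      unless ALLOWED.include?(key)
        raise "Can't mass-assign protected attributes: #{key}"
      end
      public_send("#{key}=", value)
    end
  end
end

item = LineItem.new(product: "book")
puts item.product   # book
```

With the real attr_accessible :product in place, @cart.line_items.build(product: product) works as the book shows.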
|
https://grokbase.com/t/gg/rubyonrails-talk/1247xhy5bn/rails-agile-web-development-4th-ed-cant-mass-assign-error
|
CC-MAIN-2022-05
|
refinedweb
| 170
| 65.96
|
For
The SMTP server requires a secure connection or the client was not authenticated.
5.5.1 Authentication Required. Learn more at" <-- seriously, it ends there.
CVertex, make sure to review your code, and, if that doesn't reveal anything, post it. I was just enabling this on a test ASP.NET site I was working on, and it works.
Actually, at some point I had an issue on my code. I didn't spot it until I had a simpler version on a console program and saw it was working (no change on the Gmail side as you were worried about). The below code works just like the samples you referred to:
using System;
using System.Net;
using System.Net.Mail;

namespace ConsoleApplication2
{
    class Program
    {
        static void Main(string[] args)
        {
            var client = new SmtpClient("smtp.gmail.com", 587)
            {
                Credentials = new NetworkCredential("myusername@gmail.com", "mypwd"),
                EnableSsl = true
            };
            client.Send("myusername@gmail.com", "myusername@gmail.com", "test", "testbody");
            Console.WriteLine("Sent");
            Console.ReadLine();
        }
    }
}
I also got it working using a combination of web.config, and code (because there is no matching
EnableSsl in the configuration file :( ).
|
https://codedump.io/share/VUbzwokVHIp6/1/sending-email-through-gmail-smtp-server-with-c
|
CC-MAIN-2017-17
|
refinedweb
| 196
| 54.08
|
Please explain me a strange (to me) behavior of the following piece of code:
#include <iostream>
using namespace std;
#define MIN_VAL -2147483648
typedef enum E_TEST
{
    zero_val = 0,
    one_val = 1
} E_TEST;

void main()
{
    if (zero_val > MIN_VAL)
        cout << "Not a problem!" << endl;
    else
        cout << "It's a problem!" << endl;
}
So, as you can guess, "It's a problem!" is printed. I'd like to understand why. According to MSDN, the limits of a signed int are -(2^32)/2 to (2^32)/2-1, i.e. -2147483648 to 2147483647. Enums are also 4-byte types. So where is the problem? Why isn't the result "Not a problem!"?
Thanks!
|
http://forums.codeguru.com/printthread.php?t=543377&pp=15&page=1
|
CC-MAIN-2018-22
|
refinedweb
| 107
| 86.1
|