This post shows how to create content types and page layouts with Visual Studio 2010. In Part 1, I introduced the concept of content types and page layouts using SharePoint Designer. In this post, I will show how to accomplish the same tasks with Visual Studio 2010, and will show how community tools like CKS:Dev make this even easier.
I have been working on a business problem for a customer this week. We need to have users visit and accept a Terms & Conditions page before they visit content on the site. Not only does this imply the users will read the content on the page, but the legal department needs to be able to quickly and easily edit the page content as the terms & conditions change over time. This is a perfect use for a publishing page.
Export a Content Type from SharePoint
Visual Studio 2010 does not provide a graphical designer for creating a content type; instead, you edit an XML file. Rather than editing the XML by hand, it is much easier to create the content type using SharePoint Designer 2010 or the web UI and then export it with a tool. Following the steps in Part 1, I created the content type in SharePoint. Once you have created the content type, there are a few ways to export it. The easiest that I know of is to use AC’s WCM Custom Commands for STSADM.EXE. To install, you just add the WSP to the solution store and deploy it (directions are on the linked blog post). Once deployed, go to the command line and type (substituting your own site URL):
STSADM.EXE -o GenContentTypesXml -url <site url> -outputFile "c:\contentTypes.xml" -excludeParentFields
Then open your file and find the content type you created.
Create the Content Type in Visual Studio 2010
Open Visual Studio 2010 and create a new SharePoint 2010 project. We will use a farm solution, because there will be a few things in future posts that will require a farm solution, such as edits to web.config. Add a new content type to the project, and choose the Page content type to inherit from.
In the previous step, you generated an XML file based on an existing content type. Open that file and copy the contents of the FieldRefs section and paste into the FieldRefs section of your content type in Visual Studio 2010. My content type now looks like the following:
<?xml version="1.0" encoding="utf-8"?> <Elements xmlns=""> <!-- Parent ContentType: Page (0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF39) --> <ContentType ID=" Name="TermsAndConditionsType" Group="Custom Content Types" Description="Terms and Conditions" Inherits="TRUE" Version="0"> <FieldRefs> <FieldRef ID="{f55c4d88-1f2e-4ad9-aaa8-819af4ee7ee8}" Name="PublishingPageContent" /> </FieldRefs> </ContentType> </Elements>
Note that the content type ID is different from the one you exported from SharePoint; this is because Visual Studio generates a new content type ID.
Create the Page Layout
Once the content type is created in Visual Studio, deploy the solution to your SharePoint server. You want the content type to exist on the server with the same ID as the one defined in Visual Studio, and that ID will not match the one that already exists on the server.
Note: The content type that you are defining in Visual Studio 2010 has a different content type ID than the one that you created in SharePoint. You can copy the content type ID from SharePoint into your definition in Visual Studio 2010 if you like.
Once it is deployed, you can create a page layout using SharePoint Designer 2010. I like this approach because it provides a WYSIWYG designer, whereas Visual Studio 2010 does not have a designer for creating page layouts. Once you create the page layout, copy the contents to Visual Studio (explained in the next section). Another option is to use the CKS – Development Tools Edition (Server) tools, a free add-on you can find in the Visual Studio 2010 Extension Manager:
Install the tools, and they add a bunch of new capabilities for SharePoint 2010 development. For instance, you can now right-click a content type and choose “Create Page Layout”.
Once the page layout is created, it’s easy to edit the HTML just as you would any ASP.NET page.
Deploying the Page Layout in Visual Studio 2010
The next step is to deploy the page layout. There are tools for branding in CKS:Dev, but none specific to creating a page layout, so we’ll walk through how to do this using a Module. In Visual Studio 2010, add a new Module to the project and call it “PageLayoutsModule”. It generates a file called Sample.txt, which we will rename to “TermsPageLayout.aspx”. Then copy the HTML for your page layout into this file.
Another file is created called Elements.xml. Open that file, and edit with the following:
<?xml version="1.0" encoding="utf-8"?> <Elements xmlns=""> <Module Name="PageLayoutsModule" Url="_catalogs/masterpage"> <File Path="PageLayoutsModule\TermsPageLayout.aspx" Url="TermsPageLayout.aspx" Type="GhostableInLibrary"> <Property Name="ContentType" Value="$Resources:cmscore,contenttype_pagelayout_name;"/> <Property Name="PublishingAssociatedContentType" Value=";#TermsAndConditionsType;;#"/> <Property Name="Title" Value="Terms and Conditions Page"/> </File> </Module> </Elements>
Replace the really long string that starts with 0x010100 with the content type ID defined in Visual Studio, taking care to leave the trailing “;#” at the end of the attribute value. These two content type IDs must match for your solution to work.
Adding Code Behind to the Page Layout
A really cool capability of page layouts is that you can code them just like any ASP.NET page, including server-side code. In the same folder as your .ASPX (in the Module folder), add a new class with the file name “TermsPageLayout.aspx.cs”. The body of that file looks like the following:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Web.UI.WebControls;
using System.Web;

namespace TermsAndConditions
{
    [CLSCompliant(false)]
    public class TermsLayout : Microsoft.SharePoint.Publishing.PublishingLayoutPage
    {
        protected Button Button1;
        protected Button Button2;
        protected Label Label1;
        protected Label Label2;

        protected void Page_Load(object sender, EventArgs e)
        {
            if (null != Request.Cookies["TandC"])
            {
                Label2.Text = Request.Cookies["TandC"].Value;
            }
        }

        protected void AcceptButton_Click(object sender, EventArgs e)
        {
            //Write a cookie that indicates the user accepted
            HttpCookie cookie = new HttpCookie("TandC");
            cookie.Value = "Accepted";
            //The cookie is good for 5 years, or until the
            //page is updated.
            cookie.Expires = System.DateTime.Now.AddYears(5);
            Response.Cookies.Add(cookie);
        }

        protected void DeclineButton_Click(object sender, EventArgs e)
        {
            //Log that the user declined
        }
    }
}
This is a simplified version of what I did for my customer. If the user clicks the “Accept” button in the page layout, we use the same code we would in any ASP.NET project to create a cookie. Our updated code for the page layout defined in TermsPageLayout.aspx becomes:
<%@ Page language="C#" Inherits="TermsAndConditions.TermsLayout,$SharePoint.Project.AssemblyFullName$" ... %>
<%@ Assembly Name="$SharePoint.Project.AssemblyFullName$" %>
...
<SharePointWebControls:FieldValue ... />
<br/>
<asp:Button ... ></asp:Button>
<asp:Button ... ></asp:Button>
</PublishingWebControls:EditModePanel>
<PublishingWebControls:EditModePanel ... >
Notice the placeholder $SharePoint.Project.AssemblyFullName$, which is used twice. Visual Studio replaces this token with the four-part name of our assembly when the solution is packaged.
Adding SafeControls
We need to tell SharePoint that it's OK for our page layout to have code-behind, and we do that by adding SafeControls entries to web.config. Click on the Module in Visual Studio 2010 and select the Safe Control Entries option in the Properties pane:
That will bring up a new window that makes it easy to define a new safecontrol entry.
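For reference, the entry written into web.config follows the standard SafeControl schema. A minimal sketch of what such an entry might look like for this project (the token is replaced with the real four-part assembly name when the package is deployed, and the exact attributes Visual Studio emits may differ) is:
<SafeControl Assembly="$SharePoint.Project.AssemblyFullName$"
             Namespace="TermsAndConditions" TypeName="*" Safe="True" />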
Scoping Features
The content type will be deployed to the site collection, while the page layout is deployed to the master page gallery in a single web. Therefore, we need two separate features scoped differently. Right-click the Features node in the project explorer and choose “Add Feature”. I like to name the features similar to how I will deploy them, making it easier to keep track while I am developing them. Here is our new web-scoped feature that contains the module that deploys the page layout.
Similarly, the site-scoped feature deploys the content type.
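For reference, the feature manifests the designer produces at package time look roughly like the following sketch (the titles and element manifest paths here are assumptions based on this project, not copied from it):
<!-- Web-scoped feature: deploys the page layout module -->
<Feature xmlns="http://schemas.microsoft.com/sharepoint/"
         Title="TermsAndConditions Page Layouts" Scope="Web">
  <ElementManifests>
    <ElementManifest Location="PageLayoutsModule\Elements.xml" />
  </ElementManifests>
</Feature>

<!-- Site-scoped feature: deploys the content type -->
<Feature xmlns="http://schemas.microsoft.com/sharepoint/"
         Title="TermsAndConditions Content Types" Scope="Site">
  <ElementManifests>
    <ElementManifest Location="TermsAndConditionsType\Elements.xml" />
  </ElementManifests>
</Feature>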
The package designer now looks like:
Using the new Page Layout
The final step is to create a new page. Create a new page called “Terms”. In the ribbon, click the Page tab and change the Page Layout to our Terms and Conditions page layout.
Enter some values for the page in the provided text fields, and hit save.
Stay tuned for more posts in this series, as there are some really cool things you can do to enhance this solution. The code for this solution is attached to the post.
I used Visual Studio 2010 to deploy a Page Layout but I left the WebPart Zones empty. I then added WebParts directly through Internet Explorer.
Is there a way I can add this default.aspx with all the WebParts to my Visual Studio 2010 solution?
If I "deactivate" the solution I lose all design changes….
| https://blogs.msdn.microsoft.com/kaevans/2011/04/02/code-behind-page-layouts-with-visual-studio-2010/ | CC-MAIN-2016-07 | refinedweb | 1,489 words | Flesch reading ease 54.63 |
In Part One of this series I covered the history of RSS versions and the newer standard for news feeds, Atom, and introduced the capabilities of news reader applications. By now you should have a feel for how important these blogs are, and I hope you have your own blog installed on one of the engines introduced; you can check my blog at Cairo Cafe. In this part we'll analyze the RSS formats, take a quick look at the Atom format, build a custom RSS feed, and (for simplicity) consume the same feed that we develop.
RSS 1.0, RSS 2.0, and Atom are XML-based languages; each adheres to a schema, and every feed is produced against the schema of its format. We'll stick to RSS 2.0 here because it's the simplest version and the most widely used. At the end of this article, you will be able to make your own news feeds and consume other websites' feeds, such as the latest articles provided by Code Project for the ASP.NET category.
<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns="http://purl.org/rss/1.0/">
<channel rdf:about="">
<title>Kareem Shaker Website</title>
<link></link>
<description>Kareem Shaker is an Egyptian developer who likes to
exchange knowledge with developers all over the worlds</description>
<image rdf:resource="" />
<items>
<rdf:Seq>
<rdf:li rdf:resource="" />
<rdf:li rdf:resource="" />
</rdf:Seq>
</items>
</channel>
<!--Declaration for all the items that are used
above at the channel elements-->
<image rdf:about="">
<title>KareemShaker.com</title>
<link></link>
<url></url>
</image>
<item rdf:about="">
<title>DataSet Nitty Gritty</title>
<link></link>
<description>Explains all disconnected environment provided by ADO.NET,
you will be using SQL Server and Oracle to build a simple POS System that
posts all the sales to a central headquarter</description>
<dc:date>2004-01-13T17:16:44.5605908-08:00</dc:date>
</item>
<item rdf:about="">
<title>Custom Controls Revisited</title>
<link></link>
<description>Build a custom control that encapsulates the functionality
of Image gallery</description>
<dc:date>2004-01-13T17:16:44.5605908-08:00</dc:date>
</item>
</rdf:RDF>
As you can see above, RSS 1.0 is based on RDF and is namespace-qualified. You don't need to know much about RDF, but if you want to dig into it you can review the W3C RDF standard. All the referenced items are declared after the closing "channel" element and are referenced in the items collection between the channel opening and closing tags; this provides the flexibility of referencing any item anywhere within the RSS 1.0 document.
<?xml version="1.0" encoding="utf-8" ?>
<rss version="2.0">
<channel>
<title>Kareem Shaker Website</title>
<link></link>
<description>Kareem Shaker is an Egyptian developer who
likes to exchange knowledge with developers all
over the world</description>
<image>
<url></url>
<title>KareemShaker.com</title>
<link></link>
</image>
<item>
<title>DataSet Nitty Gritty</title>
<link></link>
<description>Explains all disconnected environment provided by ADO.NET,
you will be using SQL Server and Oracle to build a simple POS System
that posts all the sales to a central headquarter</description>
<pubDate>Wed, 14 Jan 2004 16:16:16 GMT</pubDate>
</item>
<item>
<title>Custom Controls Revisited</title>
<link></link>
<description>Build a custom control that encapsulates the functionality
of Image gallery</description>
<pubDate>Wed, 14 Jan 2004 20:50:44 GMT</pubDate>
</item>
</channel>
</rss>
RSS 2.0 is the simplest standard and it's widely used. The root element is RSS and the version attribute is mandatory. As you can see the items are just serialized within the channel body and no namespaces are used, RSS 2.0 is simple to consume and produce. Some elements are required for RSS 2.0 document and others are optional. You can review the complete detailed schema definition here.
<?xml version="1.0" encoding="utf-8" ?>
<feed version="0.3" xml:
<title>Kareem Shaker Atom Feeder</title>
<link></link>
<modified>2004-01-13T17:16:45.0004199-07:00</modified>
<tagline>Kareem Shaker is an Egyptian developer who likes to exchange
knowledge with developers all over the world</tagline>
<author>
<name>Kareem Shaker</name>
</author>
<entry>
<title>DataSet Nitty Gritty</title>
<link></link>
<created>Wed, 14 Jan 2004 16:16:16 GMT</created>
<content type="text/html" mode="xml">
<body xmlns="">
<p>Explains all disconnected environment provided by
ADO.NET,you will be using SQL Server and Oracle to build a simple
POS System that posts all the sales to a central headquarter</p>
</body>
</content>
</entry>
<entry>
<title>Custom Controls Revisited</title>
<link></link>
<created>Wed, 14 Jan 2004 16:02:16 GMT</created>
<content type="text/html" mode="xml">
<body xmlns="">
<p>Build a custom control that encapsulates the functionality
of Image gallery</p>
</body>
</content>
</entry>
</feed>
The Atom root element is "feed" and the version attribute is mandatory. The Atom standard sits somewhere between RSS 1.0 and RSS 2.0: it is namespace-qualified, but it's not based on RDF. Here you have an "entry" element instead of "item". For further information you can visit the official Atom website.
Outline Processor Markup Language (OPML) is nothing more than an XML file and is very simple to grasp. The main element is "outline", and you just supply the type, title, description, xmlUrl, and htmlUrl attributes. All news readers support reading OPML files. I find it especially useful when a friend shares some featured feeds: he just exports his channels as an OPML file and passes it to me, and I can import it easily. All news readers support OPML import and export.
<?xml version="1.0" encoding="utf-8" ?>
<opml>
<head>
<title>Kareem Shaker's HotList</title>
</head>
<body>
<outline type="rss" title="Arabic Developers Bloggers"
description="This is a great collection of Arabic programming loggers"
xmlUrl=""
htmlUrl="" />
<outline type="rss"
title="Kareem Shaker ASP.NET Blog"
description="Kareem Shaker's ASP.NET Community Central for Arabs"
xmlUrl=""
htmlUrl="" />
<outline type="rss"
title="MacroCell Developers Blogs"
description="MacroCell is a innovative software house"
xmlUrl=""
htmlUrl="" />
</body>
</opml>
Generating an RSS 2.0 document using the Repeater control
An RSS 2.0 document is nothing more than an XML document that adheres to a schema, so we can generate it with any of the System.Xml classes: XmlTextWriter, the DataSet.WriteXml method, or even the System.IO classes (if you are not familiar with these classes, you can review the article at C# Corner). But because the RSS 2.0 document shown above has such a regular structure, a Repeater control can produce it even more easily; after reviewing many articles and approaches, I found this to be the easiest one. For the sake of simplicity we'll generate the RSS items on the fly, but in the real world you would pull the items from a SQL Server database or point to an RSS file that is regenerated periodically.
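A sketch of the kind of Repeater-based .aspx that produces this output is shown below. The rssProducts ID matches the code-behind used later in this article, but the page directives, class names, and channel values here are illustrative placeholders rather than the article's exact markup:
<%@ Page Language="C#" ContentType="text/xml" AutoEventWireup="false" Codebehind="RssFeed.aspx.cs" Inherits="RssSample.RssFeed" %>
<?xml version="1.0" encoding="utf-8" ?>
<rss version="2.0">
<channel>
<title>MacroCell Products</title>
<link>http://localhost/rssfeed</link>
<description>Latest products feed</description>
<asp:Repeater id="rssProducts" runat="server">
<ItemTemplate>
<item>
<title><%# DataBinder.Eval(Container.DataItem, "title") %></title>
<link><%# DataBinder.Eval(Container.DataItem, "link") %></link>
<description><%# DataBinder.Eval(Container.DataItem, "description") %></description>
<pubDate><%# DataBinder.Eval(Container.DataItem, "pubDate", "{0:R}") %></pubDate>
</item>
</ItemTemplate>
</asp:Repeater>
</channel>
</rss>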
In the Repeater template we hard-code the static lines of the document and bind the item values we want to output. Don't forget to set the contentType attribute of the page directive to "text/xml".
We will be generating the RSS items dynamically, and we hold a variable in web.config called rssItemsNumber. I check this value before generating the RSS items; it is simply the number of items to generate. Add this setting just inside the configuration node, before the system.web node.
<appSettings>
<add key="rssItemsNumber" value="10"></add>
</appSettings>
You should read this value at the page load:
System.Int32 numberOfGeneratedItems =
System.Int32.Parse(
System.Configuration.ConfigurationSettings.AppSettings["rssItemsNumber"]);
rssProducts.DataSource = GenerateRss(numberOfGeneratedItems);
rssProducts.DataBind();
The GenerateRss function is responsible for producing a DataTable that we bind to the Repeater control; we then call the Repeater's DataBind method to bind the data. In GenerateRss we build a DataTable object on the fly by adding the required columns and then fill it in a loop that generates the RSS items. The number of RSS items is read from web.config as shown above:
private DataTable GenerateRss(int numberOfItems)
{
// create new table and call it rssItems
DataTable dtItems = new DataTable("rssItems");
// create all required data columns
// id column
DataColumn dcItem = new DataColumn();
dcItem.ColumnName = "Id";
dcItem.DataType = System.Type.GetType("System.Int32");
dcItem.AutoIncrement= true;
// add column to the datatable
dtItems.Columns.Add(dcItem);
//title column
dcItem = new DataColumn();
dcItem.ColumnName = "title";
dcItem.DataType = System.Type.GetType("System.String");
dtItems.Columns.Add(dcItem);
// description column
dcItem = new DataColumn();
dcItem.ColumnName = "description";
dcItem.DataType = System.Type.GetType("System.String");
dtItems.Columns.Add(dcItem);
// pubDate
dcItem = new DataColumn();
dcItem.ColumnName = "pubDate";
dcItem.DataType = System.Type.GetType("System.DateTime");
dtItems.Columns.Add(dcItem);
// link
dcItem = new DataColumn();
dcItem.ColumnName = "link";
dcItem.DataType = System.Type.GetType("System.String");
dtItems.Columns.Add(dcItem);
// make PK column
DataColumn[] pk = {dtItems.Columns[0]};
dtItems.PrimaryKey = pk;
// get new row to be added to the datatable
DataRow drItem = dtItems.NewRow();
// loop and generate the rss items up to the number
// of items mentioned in web.config
for(int iCounter = 1; iCounter <= numberOfItems; iCounter++)
{
drItem["title"] =
"Product No. " + iCounter.ToString() + " From MacroCell";
drItem["description"] = "Product " + iCounter.ToString() +
" is the most promising product in our wide group";
drItem["pubDate"] = DateTime.Now;
drItem["link"]= ""
+ iCounter.ToString();
// add to table
dtItems.Rows.Add(drItem);
// create new row
drItem = dtItems.NewRow();
}
return dtItems;
}
The code is straightforward and the comments are descriptive. Once the DataTable is returned and bound to the Repeater, you get the resulting RSS 2.0 document as an XML file. Don't forget that we set the contentType attribute to "text/xml".
If you have a news reader application installed, you can add this channel to it and see how the news reader handles the RSS document. If it reads the feed correctly and throws no exceptions or errors, you are emitting a well-formed RSS document, and you can add the RSS channel to your news reader using this URL: "", replacing localhost with your server.
You can consume the RSS feed we have just developed using a few lines of code, and the same code will consume any other RSS feed; it's pretty simple. In the download you will find a web project called RSSReader. It contains just one WebForm with a grid, and the grid is bound to the RSS feed we read. You will find it very easy to grasp.
In the code-behind we read the RSS feed using the DataSet method ReadXml. This is the easiest approach I have seen for consuming an RSS feed. If you look at how the DataSet maps hierarchical data (XML nodes) into tabular DataTables, you will see that each nesting level gets its own DataTable. The item node is the second nested node after the rss and channel nodes, so the table that holds all the items has index "2" (the index is zero-based); binding the grid to that table gives you all the RSS items. If you want to read the channel's title or description, read the table at index "1". Write the following code in the Page_Load event handler:
private void Page_Load(object sender, System.EventArgs e)
{
DataSet dsFeed = new DataSet("Feed");
dsFeed.ReadXml("");
recentProducts.DataSource = dsFeed.Tables[2];
recentProducts.DataBind();
}
You can supply any other URL to the ReadXml method; try the CodeProject latest-articles RSS feed and you will get all the latest articles listed.
In this part, we have looked at the various formats for news feeds, digging into RSS 2.0 as the simplest and most widely used one. You have also seen how to make your own feed and how to consume others' feeds. I plan to write a third part to discuss more advanced topics.
| https://www.codeproject.com/Articles/9563/Blogs-RSS-News-feeders-and-ATOM-Part-Two | CC-MAIN-2018-30 | refinedweb | 2,121 words | Flesch reading ease 55.74 |
Pandas is one of the most common tools used for data analysis and data manipulation in Python. It’s like the Swiss Army Knife of data wrangling. For anyone working on projects in the fields of data science and machine learning, Pandas is a key tool in your toolbox.
In this article, I’ll introduce you to ten Pandas functions that will make your life easier (in no particular order), as well as provide some code snippets to show you how to use them. I’ll be applying the Pandas functions to a Pokemon dataset, so let’s dive into our pokedex!
Before You Start: Install Python & Pandas
To follow along, you can install the Pandas Top 10 environment for Windows or Linux, which contains Python 3.8 and Pandas
In order to download this ready-to-use Pandas Top 10 runtime into a virtual environment:
powershell -Command "& $([scriptblock]::Create((New-Object Net.WebClient).DownloadString(''))) -activate-default Pizza-Team/Pandas-Top-10"
For Linux users, run the following to automatically download and install our CLI, the State Tool along with the Pandas Top 10 runtime into a virtual environment:
sh <(curl -q) --activate-default Pizza-Team/Pandas-Top-10
1–The Read_X Family Functions
Your first task is usually to load the data for analysis, and Pandas offers a large family of functions that can read many different data formats. CSV files are the most common, but Pandas also supports many other formats, including:
- Microsoft Excel
- Fixed-width formatted lines
- Clipboard (it supports the same arguments as the CSV reader)
- JavaScript Object Notation (JSON)
- Hierarchical Data Format (HDF)
- Column-oriented data storage formats like Parquet and ORC
- Statistical analysis packages like SPSS and Stata
- Google’s BigQuery Connections
- SQL databases
To load data from databases, you can use the SQLAlchemy package, which lets you work with a huge number of SQL databases including PostgreSQL, SQLite, MySQL, SAP, Oracle, Microsoft SQLServer, and many others.
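As a quick, hedged illustration of that path (the connection string and table name below are invented for the example and are not part of the Pokemon dataset), loading a query result straight into a DataFrame looks like this:
import pandas as pd
from sqlalchemy import create_engine

# Build an engine for the database you want to query (a SQLite file here for simplicity)
engine = create_engine("sqlite:///pokedex.db")

# read_sql runs the query against the connection and returns the result set as a DataFrame
df_sql = pd.read_sql("SELECT id, name FROM pokemon", engine)
df_sql.head()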
In the following example, a JSON file is loaded as a DataFrame:
import pandas as pd

df = pd.read_json('./pokedex.json')
df.head()
The read_x family of functions is very robust and flexible. However, that doesn’t mean you always get what you want with no effort. In this example, you can see that the pokemon column is fully loaded as individual rows, which is not very useful. Pandas includes another function that will allow you to make more use of this information.
2–The Json_Normalize Function
You can use the json_normalize function to process each element of the pokemon array and split it into several columns. Since the first argument is a valid JSON structure, you can pass the DataFrame column or the json parsed from the file. The record_path argument indicates that each row corresponds to an element of the array:
import json

fObj = open("./pokedex.json")
jlist = json.load(fObj)
df = pd.json_normalize(jlist, record_path=['pokemon'])
df.head()
Meta and record_path give this function great flexibility, but not enough to process json without a unified structure. If you try to process the next_evolution or prev_evolution columns in the same way, you’ll get an error even if you use the errors=’ignore’ argument:
df = pd.json_normalize(jlist['pokemon'], record_path='next_evolution', meta=['id','name'], errors='ignore')
df.head()
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-20-b9ae9df69b00> in <module>
----> 1 df = pd.json_normalize(jlist['pokemon'], record_path='next_evolution', meta=['id','name'], errors='ignore')
      2 df.head()

~/.virtualenvs/fixate/lib/python3.8/site-packages/pandas/io/json/_normalize.py in _json_normalize(data, record_path, meta, meta_prefix, record_prefix, errors, sep, max_level)
    334             records.extend(recs)
    335
--> 336     _recursive_extract(data, record_path, {}, level=0)
    337
    338     result = DataFrame(records)

~/.virtualenvs/fixate/lib/python3.8/site-packages/pandas/io/json/_normalize.py in _recursive_extract(data, path, seen_meta, level)
    307             else:
    308                 for obj in data:
--> 309                     recs = _pull_records(obj, path[0])
    310                     recs = [
    311                         nested_to_record(r, sep=sep, max_level=max_level)

~/.virtualenvs/fixate/lib/python3.8/site-packages/pandas/io/json/_normalize.py in _pull_records(js, spec)
--> 248             result = _pull_field(js, spec)
    249
    250     # GH 31507 GH 30145, GH 26284 if result is not list, raise TypeError if not

~/.virtualenvs/fixate/lib/python3.8/site-packages/pandas/io/json/_normalize.py in _pull_field(js, spec)
    237             result = result[field]
    238         else:
--> 239             result = result[spec]
    240     return result
    241

KeyError: 'next_evolution'
The problem is twofold:
- Not all pokemon have a next_evolution attribute
- The errors=’ignore’ argument only applies to the columns defined in the meta list
To process the lists of values and nested structures, you’ll need to use another approach.
3–The Explode Function
You will notice that there are some columns (type, multipliers, and weaknesses) in the DataFrame that contain lists of values. You can expand those values to new rows using the explode function, which will replicate the row data for each value in the list:
df = df.explode('weaknesses')
df.head()
As you can see, the index is repeated for Bulbasaur along with the other values (except weaknesses). However, the explode function won’t help you if you want new columns instead of new rows. In that case, you will have to write a function to process the values and return new columns.
4–The Apply Function
One of the most important functions of Pandas (which all data analysts should be proficient with) is the apply function. It allows you to work with the rows or columns of a DataFrame, and you can also use lambda expressions or functions to transform data.
Here’s how you expand the weaknesses, next_evolution, and prev_evolution columns using apply:
def get_nums(x):
    try:
        iterator = iter(x)
    except TypeError:
        return None
    else:
        return [c['num'] for c in x]

fObj = open("./pokedex.json")
jlist = json.load(fObj)
df = pd.json_normalize(jlist, record_path=['pokemon'])

df['evolutions'] = df['next_evolution'].apply( lambda x: get_nums(x) )
df['ancestors'] = df['prev_evolution'].apply( lambda x: get_nums(x) )

weaknesses = df['weaknesses'].apply( pd.Series )
evolutions = df['evolutions'].apply( pd.Series )
ancestors = df['ancestors'].apply( pd.Series )

weaknesses.head()
The get_nums function checks whether the cell value passed as an argument is iterable. If so, it extracts the values of the num keys and returns them as a list. The apply call with pd.Series then splits the values of each list into separate columns. This results in three new DataFrames that should be connected to the main DataFrame.
5–The Rename Function
It’s a good idea to rename the columns before merging the results from the previous operation with the main DataFrame, because default names like 1, 2, or n can get confusing and cause problems. The rename function can also be used with indexes, and it can take either a list or a function as an argument:
weaknesses = weaknesses.rename(columns = lambda x: 'wk_' + str(x))
evolutions = evolutions.rename(columns = lambda x: 'ev_' + str(x))
ancestors = ancestors.rename(columns = lambda x: 'an_' + str(x))
weaknesses.head()
6–The Merge Function
At this point, we have four separate DataFrames, but it’s usually better to have just one. You can combine DataFrames using the merge function, which works a lot like the database’s join operation. For example, you can join DataFrames A and B based on a specific type of combination (inner, left, right, outer, and cross) to create DataFrame C. In this case, since the DataFrames contain the same indexes, the inner option is the appropriate choice for a horizontal concatenation:
df = df.merge(weaknesses, left_index=True, right_index=True) \
       .merge(evolutions, left_index=True, right_index=True) \
       .merge(ancestors, left_index=True, right_index=True)
df.head()
The results show that columns wk_0 to wk_6, ev_0 to ev_2, and an_0 to an_1 were merged to the main DataFrame. Merge is flexible enough that it can be used with specific columns as well as indexes. You can also rename the overlapping columns on the fly and automatically validate the operation in case there are one-to-many or many-to-many cases that are not compatible with the type of merge that you are trying to apply.
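As a rough sketch of that column-based flexibility (the toy frames, key column, and validation rule below are invented for illustration, not taken from the Pokemon data):
import pandas as pd

base = pd.DataFrame({'num': ['001', '002'], 'name': ['Bulbasaur', 'Ivysaur']})
stats = pd.DataFrame({'num': ['001', '002'], 'candy_count': [25, 100]})

# Join on an explicit key column instead of the index; how='left' keeps every row
# from `base`, and validate raises if the key relationship is not one-to-one
combined = base.merge(stats, on='num', how='left', validate='one_to_one')
print(combined)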
7–The Iloc Function
Once you’ve completed the basic transformations, you often have to navigate through the data to get to specific slices that might be useful. The first way to do this is with the iloc function, which returns segments based on the index of the DataFrame:
_evens = df.iloc[lambda x: x.index % 2 == 0]
_evens.head()
In this example, we selected the even rows from the DataFrame using a simple lambda expression. Iloc is also flexible, allowing you to pass a specific index, a range of integers, or even slices. Here, we selected two rows and three columns:
df.iloc[1:3, 0:3]
This is usually the fastest way to extract subsets of a DataFrame, but it can also be accomplished by selecting information based on labels instead of positions.
8–The Loc Function
To select information based on labels, you can use the loc function. Below, we used a conditional over the values of the column candy_count to select a subset and return three columns:
slice_candies = df.loc[lambda df: df['candy_count'] > 30, ['name', 'candy_count', 'candy']]
slice_candies.head()
The power of the loc function is that you can combine complex selection criteria over the DataFrame. Another advantage of loc is that it can be used to set values for the resulting subset. For example, we can set the value of the rows without candy_count to zero using the following code:
import numpy as np

df.loc[lambda df: np.isnan(df['candy_count']), ['candy_count']] = 0
df.head()
9–The Query Function
Another way to select a subset of data from a DataFrame is to use the query function, which allows you to operate over columns and refer to external variables in the query definition. Here is what it looks like when you select even rows that contain an above-average candy_count:
mean_candy_count = df['candy_count'].mean()
df.query('(index %2 == 0) and (candy_count > @mean_candy_count)').head()
The query function can even filter the DataFrame in which it is operating, meaning that the original DataFrame will be replaced with the results of the query function. It uses a slightly modified Python syntax, so be sure that you understand the differences before using it.
10–The Sample Function
The sample function will return a random sample from the DataFrame. You can parametrize its behavior by specifying a number of samples to return, or a specific fraction of the total required. The replace flag will allow you to select the same row twice, and the random_state argument will let you use a random seed for reproducible results. By default, every row has the same probability of being selected in the sample, but you can modify this with the weights argument.
samples = df.sample(frac=0.3, replace=True, random_state=42)
samples.head()
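The weights argument isn't shown in that call; as a rough follow-on example (assuming the candy_count column used earlier), rows with more candy would be picked more often:
# Sample 10 rows, with selection probability proportional to candy_count
weighted = df.sample(n=10, weights='candy_count', random_state=42)
weighted.head()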
Proficiency with Pandas is simply one of those must-learn Python skills for any developer.
Pandas is one of the most comprehensive tools you’ll ever use in Python. It’s fast, consistent, and fully charged with robust functions. We’ve covered ten of the most important Pandas functions in this article, but the list could be expanded to include further functions like plotting, indexing, categorizing, grouping, windowing, and many more.
Put your skills to use: Here’s a pre-compiled Pandas Python environment and the GitHub source code related to this post!
- You can find all of the code that we used in this article on GitHub.
- You can sharpen your Pandas skills by installing the Pandas Top 10 runtime environment and trying out the top 10 functions for yourself.
With the ActiveState Platform, you can create your Python environment in minutes, just like the one we built for this project. Try it out for yourself or learn more about how it helps Python developers be more productive.
*Hero banner image source: inprogresspokemon.tumblr.com/post/155399033729/6745-pancham-have-a-lot-of-attitude-for-such
| https://sweetcode.io/the-ten-most-important-pandas-functions-and-how-to-work-with-them/ | CC-MAIN-2021-25 | refinedweb | 1,990 words | Flesch reading ease 53.81 |
skip step
I have a procedure that checks to see if a Control number has been assigned to a request form. If there is a number, then I assume that the request was turned in wrong, they made the changes needed, and they "resubmit" the request. At this point I look for a response = 6, and I also check to see who the current user is.
At that point, I want it to compare the vruser (current user) to a list of names (somehow) and skip this step. If the name is not in the "list", then it should continue with the resubmit function.
Can anyone help me with this step?
'On error resume next
Set Dbe = Application.CreateObject("DAO.DBEngine.35")
If Err.Number <> 0 Then
Msg Err.Description & " Some functions may not work correctly" _
& Chr(13) & "make sure that Dao 3.5 is installed on this machine"
Exit Function
End If
'Check to see if this is from scratch
Set MyDB = Dbe.Workspaces(0).OpenDatabase("C:cigarsroom.mdb")
Set Rst = MyDB.OpenRecordset("dbtask")
If UserProperties.Find("jobnum").Value > 0 Then
response = Msgbox("Is this a Resubmitted Job Request?",vbYesNo,"Initializing")
If response =6 then
CounterStart = 1
Counter = Rst.RecordCount + CounterStart
set nms= application.getnamespace("mapi")
vrUser=nms.currentuser
UserProperties.Find("jobnum").Value = Counter
UserProperties.Find("Reqby").Value = vrUser
UserProperties.Find("txtstatus").Value = "Submit"
UserProperties.Find("jobtype").Value = "Modem Files"
Rst.AddNew
Rst("Tasknum") = Counter
Rst("txtstatus") = "Submit"
Rst("jobtype") = "Modem Files"
Rst.fields(4).value=vrUser
Rst.Update
Rst.Close
MyDB.Close
'Else
' msgbox("This is a Resubmitted job")
End if
Else
Re: skip step
> I want it to compare the vruser (currentuser) to
> a list of names(somehow)
What is your data source?
(1) Existing database: you can query the database for a matching value.
(2) Text file (e.g., exported from Word, Excel, or a database): You can iterate through the file checking for a matching value.
(3) Hardcoded: you can check an array or string for a matching value. (e.g., use Instr(strAllNames, strCurrUser & "^") where the names in strAllNames are each followed by a caret)
Does this help?
Re: skip step
There are just a handful of names that I want to compare against, so I will probably hardcode the names into it.
Do I just add Instr(strAllNames, strCurrUser & "^"), and where would I add it?
Thank you
Re: skip step
At the point in your code where you know the strCurrentUser and want to skip based on that value:
If Instr(1, "Able^Baker^Charlie^", strCurrUser & "^", vbTextCompare) = 0 Then
'This user is not on the list; do what I do
Else
'This user is on the list; do what I do
End If
If the capitalization is guaranteed to be consistent, you can delete the vbTextCompare switch.
| http://windowssecrets.com/forums/showthread.php/2853-skip-step | CC-MAIN-2017-26 | refinedweb | 545 words | Flesch reading ease 64.2 |
@KurtE
I just ran a quick test using the following sketch that I found on the Adafruit forum for using getTextBounds. I tried it with and without Adafruit fonts. The one thing I noticed is that the baseline for the two fonts is off - probably because of the drawing direction for the fonts. Going to do a little more testing and modification of your other test sketch and see what happens.
But in the short term I think it is working fairly well and will help to position the text so going to go ahead and incorporate it into the rewrite version. This is very similar to text align stuff that I added for our fonts.
Hi @mjs513,
I was playing around with the ILI9341_t3n stuff and it gets sort of interesting.
Obviously if we change the text drawing code would also need to change the text bounds code as well to match.
Also I am not sure about how if ever we properly support:: Adafruit_GFX_Button button;
That is we have:
Code:
// To avoid conflict when also using Adafruit_GFX or any Adafruit library
// which depends on Adafruit_GFX, #include the Adafruit library *BEFORE*
// you #include ILI9341_t3.h.
// Warning the implemention of class needs to be here, else the code
// compiled in the c++ file will cause duplicate defines in the link phase.
#ifndef _ADAFRUIT_GFX_H
class Adafruit_GFX_Button {
public:
    Adafruit_GFX_Button(void) { _gfx = NULL; }
    void initButton(ILI9341_t3n *gfx, int16_t x, int16_t y,
        uint8_t w, uint8_t h,
        uint16_t outline, uint16_t fill, uint16_t textcolor,
        const char *label, uint8_t textsize_x, uint8_t textsize_y)
    ...
But now suppose you do include Adafruit_GFX like the comments mentioned...
In Adafruit that class is defined like:
Code:
/// A simple drawn button UI element
class Adafruit_GFX_Button {
public:
    Adafruit_GFX_Button(void);
    // "Classic" initButton() uses center & size
    void initButton(Adafruit_GFX *gfx, int16_t x, int16_t y, uint16_t w,
        uint16_t h, uint16_t outline, uint16_t fill,
        uint16_t textcolor, char *label, uint8_t textsize);
    void initButton(Adafruit_GFX *gfx, int16_t x, int16_t y, uint16_t w,
        uint16_t h, uint16_t outline, uint16_t fill,
        uint16_t textcolor, char *label,
        uint8_t textsize_x, uint8_t textsize_y);
Then in our code we do something like:
Code:
button.initButton(&tft, 200, 125, 100, 40, ILI9341_GREEN, ILI9341_YELLOW,
    ILI9341_RED, "UP", 1, 1);
Which compiles fine if Adafruit_GFX is not included. But if it is included, this will fail to compile, as there is no way to cast the TFT object to the Adafruit_GFX class...
This issue is not new, probably has happened for a long time with ili9341_t3 library... Just not sure how best to resolve...
Probably the simplest is to simply change names of the class?
Currently in the FB test program, I commented out the use of the buttons, until I figure out what to do...
Also had the bounds function draw a rectangle around the calculated rect to see how far off it is...
Code:
void printTextSizes(const char *sz) {
  Serial.printf("%s(%d,%d): SPL:%u ", sz, tft.getCursorX(), tft.getCursorY(), tft.strPixelLen(sz));
  int16_t x, y;
  uint16_t w, h;
  tft.getTextBounds(sz, tft.getCursorX(), tft.getCursorY(), &x, &y, &w, &h);
  Serial.printf(" Rect(%d, %d, %u %u)\n", x, y, w, h);
  tft.drawRect(x, y, w, h, ILI9341_GREEN);
}

void drawTextScreen(bool fOpaque) {
  SetupOrClearClipRectAndOffsets();
  tft.setTextSize(1);
  uint32_t start_time = millis();
  tft.useFrameBuffer(use_fb);
  tft.fillScreen(use_fb ? ILI9341_RED : ILI9341_BLACK);
  tft.setFont(Arial_40_Bold);
  if (fOpaque)
    tft.setTextColor(ILI9341_WHITE, use_fb ? ILI9341_BLACK : ILI9341_RED);
  else
    tft.setTextColor(ILI9341_WHITE);
  tft.setCursor(0, 5);
  tft.println("AbCdEfGhIj");
  tft.setFont(Arial_28_Bold);
  tft.println("0123456789!@#$");
#if 0
  tft.setFont(Arial_20_Bold);
  tft.println("abcdefghijklmnopq");
  tft.setFont(Arial_14_Bold);
  tft.println("ABCDEFGHIJKLMNOPQRST");
  tft.setFont(Arial_10_Bold);
  tft.println("0123456789zyxwvutu");
#endif
  tft.setFont(&FreeMonoBoldOblique12pt7b);
  printTextSizes("AdaFruit");
  tft.println("AdaFruit");
  tft.setFont(&FreeSerif12pt7b);
  printTextSizes("FreeSan12");
  tft.println("FreeSan12");
  tft.setFont();
  tft.setTextSize(1,2);
  printTextSizes("Sys(1,2)");
  tft.println("Sys(1,2)");
  tft.setTextSize(1);
  printTextSizes("System");
  tft.println("System");
  tft.setTextSize(1);
  tft.updateScreen();
  DBGSerial.printf("Use FB: %d OP: %d, DT: %d OR: %d\n", use_fb, fOpaque, use_set_origin, millis() - start_time);
}
Hi @KurtE
You must be reading my mind. I just tried to implement Adafruit Buttons within a sample sketch and got the error: no known conversion for argument 1 from 'ILI9341_t3n*' to 'Adafruit_GFX*', and I was just about to post when I read your post. I know this is going to sound funny, but why can't we just incorporate the two Button functions and the key-pressed handling directly into our code base? Don't know what the impact would be?
Obviously if we change the text drawing code we would also need to change the text bounds code to match.
Also I am not sure how, if ever, we properly support: Adafruit_GFX_Button button;
I tried the simple example with that one change I mentioned, and the effect is just to move it down in the box - it lowers the baseline. That was why I was trying to implement the Adafruit button example, to see what it does.
EDIT> Almost forgot - the st7789 code runs the ILI9341 display as well, but it's mirrored from what it should be - did it by accident as I was changing setups on the same T4 for the display.
And since our class is not derived from Adafruit_GFX, it won't cast down. And if it did, it would blow up as, it would try to call virtual functions which may or may not be in same... Or use member variables that are not there...
I just put a hack in for ILI9341_t3n my Adafruit_font, which looks like it might sort of work...
I defined the button as a different class and then added a #define of the Adafruit_gfx_button to define the new one... Sort of crude, may not catch everything, but test app now compiles with a button...
@KurtE
You mean like this:
Code:
#ifdef _ADAFRUIT_GFX_H
class Display_GFX_Button {
public:
    Display_GFX_Button(void) { _gfx = NULL; }
I should just wait for you
@mjs513 - sort of, I got rid of the #ifdef...
Code:
//#ifndef _ADAFRUIT_GFX_H
#define Adafruit_GFX_Button ILI9341_Button
class ILI9341_Button {
public:
    ILI9341_Button(void) { _gfx = NULL; }
    void initButton(ILI9341_t3n *gfx, int16_t x, int16_t y, uint8_t w, uint8_t h,
    ...
So far the #define worked and did not interfere that I could tell...
Going to give your version a test and see what happens. Yours seem to work better than mine anyway.
@KurtE
Ok - it seems to be working - I didn't initialize the touch screen, as I just wanted to test Adafruit fonts. I did add my change for you to the attached sketch. It does 2 things: adds a 'b' option to draw GFX buttons and a 'g' option to draw just a simple getTextBounds example. Not too bad.... I stole the draw function from an HX display example - I hate reinventing the wheel if I have an example.
Looks good. Think it is getting pretty good.
May call it good, or may play some and see how bad opaque text might be...
Agreed - I know you are going to try opaque so lets hope its not too bad. In the meantime tomorrow I am going to clean up the example.
@mjs513 - Yep I have taken a look at the GFX font output stuff, to see if I can hack up some form of Opaque.
As you know, their fonts are sort of setup to have the setCursor to be at the start of the character at the base line. They then have an X and Y offset to where they start to draw stuff (Upper left corner) as well as width and height that they output from the UL corner... And they have an XOffset to where the cursor should advance to next.
So first pass was to output some debug data in drawGFXFontChar:
Code:
Serial.printf("DGFX_char: %c %u %u %u %u %d %d\n", c, w, h, glyph->xAdvance, gfxFont->yAdvance, xo, yo);
You get some output like:
Code:
DGFX_char: F 14 16 14 29 0 -15
DGFX_char: r 8 11 8 29 0 -10
DGFX_char: e 10 11 11 29 1 -10
DGFX_char: e 10 11 11 29 1 -10
DGFX_char: S 11 16 13 29 0 -15
DGFX_char: a 10 11 10 29 1 -10
DGFX_char: n 11 11 12 29 0 -10
DGFX_char: 1 6 17 12 29 3 -16
DGFX_char: 2 10 15 12 29 1 -14
So for example the F we move up 15 pixels (0,-15) for UL, we then output 14 columns of 16 rows. So the Y extent goes +1 below the base line (-15+16).
And if we get a CR we advance by 29 pixels....
So I was then curious about with these fonts what is the farthest up we went and the farthest down we went in our drawing... So hacked into the setFont (GFX font version)
Code:
void ILI9341_t3n::setFont(const GFXfont *f) {
    font = NULL;    // turn off the other font...
    if(f)  {            // Font struct pointer passed in?
        if(!gfxFont) { // And no current font struct?
            // Switching from classic to new font behavior.
            // Move cursor pos down 6 pixels so it's on baseline.
            cursor_y += 6;
        }
        // Test wondering high and low of Ys here...
        int8_t miny_offset = 0;
        int max_delta = 0;
        uint8_t index_min = 0;
        uint8_t index_max = 0;
        for (uint8_t i=0; i <= (f->last - f->first); i++) {
            if (f->glyph[i].yOffset < miny_offset) {
                miny_offset = f->glyph[i].yOffset;
                index_min = i;
            }
            if ( (f->glyph[i].yOffset + f->glyph[i].height) > max_delta) {
                max_delta = (f->glyph[i].yOffset + f->glyph[i].height);
                index_max = i;
            }
        }
        Serial.printf("Set GFX Font(%x): Y %d %d(%c) %d(%c)\n", (uint32_t)f, f->yAdvance, miny_offset, index_min + f->first, max_delta, index_max + f->first);
    } else if(gfxFont) {  // NULL passed.  Current font struct defined?
    ...
And for this font I see:
Code:
Set GFX Font(600012f0): Y 29 -16($) 6(g)
So the logical question is, what Y range should I use to start the Opaque drawing in? Again the X is easy...
One Guess in this case: Should start at the MinY I detected. So this case -16 and maybe go to +13? (f->yAdvance + min_y) ?
At least that is my first guess... Wonder how bad it would be to compute the MIN value each time we call setFont here? Could also maybe cache a couple...
@KurtE
When I was playing around yesterday with adjusting the yo base I did notice that you may also have to check letters like q and y, since these have tails below the baseline - that may already be accounted for using your method of going from -16 to +13. I didn't save the data but you might want to see what happens if you do FreeSans12qy as a double check.
Hi @mjs513 (and others)
I put up a first pass at allowing Opaque GFX Font text output, in the ili9341_t3n library Adafruit_Fonts branch.
It appears to be sort of working. There is an issue that I may not fully deal with.
That is for example in the one font: FreeMonoBoldOblique12pt7b
When you output the String like: AdaFruit
The font is set up so that the letter F's width is greater than its xAdvance... So it overlaps the next character by a bit or so... probably the upper horizontal line of the F.
When the r is then output, it may set that bit or two to the background color...
Could maybe put in hack to remember that the previous character overflowed, by N pixels and if nothing else changed (like set new cursor..) it would not necessarily set the background color for those first columns... Easyish for Frame buffer code as I just don't update those... As for Non-frame buffer, the code is working like the other fonts, where I setup an updatedate rectangle and output all of the colors, like as if we are doing a writeRect... So would need to maybe remember those bits and/or remember the last character that was output and recompute those bits...
Also I have not put in the stuff yet for handling offset and clip rectangle stuff...
Hi @KurtE, etal.
I just downloaded and gave it a try. Works OK for Adafruit fonts except when you make the textsize > 1 for an Adafruit font. Then it starts overlapping with the previous line whether it's an Adafruit font or not - looks like it's drawing up instead of down.
Just as another note - that hack I put in to adjust the starting point so the lines are a bit more centered, "int16_t yo = glyph->yOffset+gfxFont->yAdvance/2;", messes up the fonts.
This is what I am using for testing with the drawTextScreen:
Code:
void drawTextScreen(bool fOpaque) {
  tft.fillScreen(ILI9341_RED);
  if (fOpaque)
    tft.setTextColor(ILI9341_WHITE, ILI9341_BLACK );
  else
    tft.setTextColor(ILI9341_WHITE);
  tft.setFont();
  tft.setTextSize(1);
  tft.setCursor(0, 5);
  Serial.printf("ASC: %d %d\n", tft.getCursorX(), tft.getCursorY());
  tft.println("AbCdEf");
  //tft.println();
  Serial.printf("A Abcd: %d %d\n", tft.getCursorX(), tft.getCursorY());
  tft.setTextSize(2);
  tft.println("0123456789");
  tft.println();
  Serial.printf("A 01234: %d %d\n", tft.getCursorX(), tft.getCursorY());
  tft.setFont(&FreeMonoBoldOblique12pt7b);
  Serial.printf("A SF: %d %d\n", tft.getCursorX(), tft.getCursorY());
  tft.setTextSize(1);
  tft.println("AdaFruitqy");
  Serial.printf("A Adafruit: %d %d\n", tft.getCursorX(), tft.getCursorY());
  tft.setFont(&FreeSerif12pt7b);
  tft.setTextSize(2);
  Serial.printf("A SF: %d %d\n", tft.getCursorX(), tft.getCursorY());
  tft.println("FreeSan12");
  Serial.printf("A FreeSans: %d %d\n", tft.getCursorX(), tft.getCursorY());
}
@mjs513 - Yep, the code I believe works the same as on Adafruit GFX library, in that the setCursor(x, y) for these fonts is the base line, and so everything above the baseline scales up farther and everything below the baseline well goes down... We could maybe change... But not sure to what...
Just took another pass and added in the libraries Offset and clipping support to the opaque text output. I think it is sort of working OK...
Here is the slightly updated version of my Frame buffer test...
Kurts_ILI9341_t3n_FB_and_clip_tests-191002a.zip
In particular here are the two Adafruit fonts being output again with the:
t and o commands.
If you do a straight: <cr> command it toggles display code to go between use frame buffer and not use frame buffer. So I can see if both cases are working.
Also if you use the: c<cmd>
command it toggles on and off a clipping rectangle.
Which appears to be working...
There is still some cleanup that could be done: for example, if my calculated y_end > _displayclipy2, I could update y_end to be _displayclipy2... Then I don't have to check for both...
Now back to playing
@KurtE
Seems to work ok as long as you keep Adafruit font text sizes set to 1 (or y=1). For instance if you set:
Code:
tft.setTextSize(1,3);
printTextSizes("FreeSan12");
in your sketch, the background box is way off. So if you do "o" w/o frame buffer you get this:
with this output:
If I do my "yo" change it makes it more of a mess so can't use that with opaque yet. Looking at the code now.If I do my "yo" change it makes it more of a mess so can't use that with opaque yet. Looking at the code now.Code:1 Set GFX Font(60001290): Y 24 -16($) 6(_) AdaFruit(0,115): SPL:110 Rect(0, 101, 110 15) Set GFX Font(6000129c): Y 29 -16($) 6(g) FreeSan12(0,139): SPL:102 Rect(0, 91, 102 51) Sys(1,2)(0,220): SPL:48 Rect(0, 220, 48 16) System(0,236): SPL:36 Rect(0, 236, 36 8) Use FB: 1 OP: 1, DT: 0 OR: 37
@mjs513 Good Morning
Will take a look.
Maybe one of the scale factors was wrong... Maybe used X instead of Y or ...
Just pushed up an update to cleanup some of the end of line and end of character testing in opaque mode...
@mjs513 - Fixed (I think) - needed to move an assignment down into inner loop...
@KurtE
Just downloaded the latest and greatest and looks like its working both with and without FrameBuffer.
Sounds good - I decided to go ahead and I merged all of this back into my master branch...
Question is, should I try to merge this back into st7735_t3? Or would you like to have the fun.
Not sure yet if I will "Fix" the overlap opaque text issue or not...
Might be trivial to fix in Frame buffer mode. That is if I know that lets say the first 2 pixels on the left hand edge were output by the previous char.
The output code in that left band not output the new value if it is not going to be the text color. Although I suppose there could be a question of which background color should win if someone does something like:
Code:
tft.setCursor(0, 50);
tft.setTextColor(ILI9341_WHITE, ILI9341_RED);
tft.print("F");
tft.setTextColor(ILI9341_WHITE, ILI9341_GREEN);
tft.print("x");
Should the overlap background color be Red or Green?
We could also fix it in the non frame buffer mode, by remembering a few things like in the above case:
That the previous character output was (an X at position 0, 50, and that there was an overlap). And FG color was ...
With this suppose that the x starts at something like 12, 50 and there was a 2 pixel overlap...
It would not be hard to logically rerun output of previous character and for example logically ask/deduce
what the previous writeGFXChar function would output at position (12, 50)... If it is the FG color and we are about to output a BG color for that pixel, we use the previous FG color...
Question is, is it worth it?
@KurtE
Was going to merge it with the ILI9488 code first and if you want will do the ST7735_t3 code next. Was just waiting until the bugs were worked out.
As for the overlap since the fonts get drawn up - I was working on my yo kludge and almost got it working but ran into problems. Figured out the bounding box and that looked I got that working ok but it would shift the chars down from where drawn if opaque wasn't used:
If opaque wasn't used the char position would be higher by the yAdvance/2 which is what I am adding to yo.
Looks like you are getting close...
I was/am sort of torn between leaving it with Adafruit setup or fix it to logically feel better...
I am sort of hacking up the function to look into previous char output to see how bad it would be.
I have started to hack up function that looks like:
Code:
// some member variables I will set when I detect a character draws larger than its character offset...
unsigned int _gfx_c_last;
int16_t _gfx_last_cursor_x, _gfx_last_cursor_y;
int16_t _gfx_last_x_overlap = 0;

bool ILI9341_t3n::gfxFontLastCharPosFG(int16_t x, int16_t y) {
    GFXglyph *glyph = gfxFont->glyph + (_gfx_c_last - gfxFont->first);

    uint8_t w = glyph->width,
            h = glyph->height;

    int16_t xo = glyph->xOffset; // sic
    int16_t yo = glyph->yOffset;
    if (y < (_gfx_last_cursor_y + (yo*textsize_y))) return false;     // above
    if (y >= (_gfx_last_cursor_y + (yo+h)*textsize_y)) return false;  // below

    // Lets compute which Row this y is in the bitmap
    int16_t y_bitmap = (y - ((_gfx_last_cursor_y + (yo*textsize_y))) + textsize_y - 1) / textsize_y;
    int16_t x_bitmap = (x - ((_gfx_last_cursor_x + (xo*textsize_x))) + textsize_x - 1) / textsize_x;
    uint16_t pixel_bit_offset = y_bitmap * w + x_bitmap;

    return ((gfxFont->bitmap[glyph->bitmapOffset + (pixel_bit_offset >> 3)]) & (0x80 >> (pixel_bit_offset & 0x7)));
}
When you are developing a RhoElements application, one thing that may come in handy is enabling logging to your PC while you are testing the application on the actual device. To do that, you need to do two things:
1) Enable Logging in the Config.XML
Let's first start by changing some settings in the config.xml file, located on the device at \Install Path\Config (the default install path on Windows Mobile is \Program Files\RhoElements\Config):
<Logger>
<LogProtocol value="HTTP"/>
<LogPort value="80"/>
<LogURI value="192.168.0.165/Temporary_Listen_Addresses/"/>
<LogError value="1"/>
<LogWarning value="1"/>
<LogInfo value="1"/>
<LogUser value="1"/>
<LogMemory value="1"/>
<LogMemPeriod value="5000"/>
<LogMaxSize value="10"/>
</Logger>
Notice a few things:
- LogProtocol - instead of FILE, I changed it to HTTP
- LogPort - I kept this at 80 (this needs to match the listener app explained below)
- LogURI - change it to the URL where I will be running the HTTP logger application (in my case it is running on my PC on my local network with a virtual sub path - also explained below)
Also note that I have all logging options set to "1" which means that particular log type will be enabled. Notice the LogUser entry. This one allows you to use the generic.Log('Message',1) method in your code to add custom messages to the log file. This is very helpful in debugging application issues. Save all of these settings and copy it back to the device. Be aware that the config.xml file is only read once during startup, so if you have RhoElements running you will need to quit the application and restart in order for the changes to take effect.
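For instance, a page could write its own entries from JavaScript. This is only a minimal sketch; the function name and message below are made up, but the generic.Log call is the one described above and requires LogUser to be enabled:

<script type="text/javascript">
    function logCheckpoint(step) {
        // Writes a LogUser entry to the RhoElements log
        generic.Log('Checkpoint reached: ' + step, 1);
    }
</script>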
Now let's load up the application that will display the log information. Note: This application is made freely available to use but is not supported by Motorola. Unzip the contents and you will see a Visual Studio project that will allow you to make changes, but if you look in the Bin\Release folder you will see an executable. Launch it and you will see:
Nothing is happening at this point. We need to check some settings first and then start the listener. In this demo application, we are using the .NET HttpListener class to listen for incoming HTTP traffic. Usually this type of thing can be blocked on your laptop. If you are having problems on your particular OS, just do some Google searches on this topic and you may find some solutions. In my case that is exactly what happened to me. I was running a Windows 7 laptop which had a lot of the corporate security policies in place. One of them did not allow the app above to run under the default settings. I kept getting "Access Denied" when I tried to start logging. So I had to download an HttpNamespace Manager app I found on MSDN that allowed me to see what namespaces/ports were available for the HttpListener to listen on. Looking at the settings, I found that port 80/Temporary_Listen_Addresses/ looked like it was granted to "Everyone" on my laptop. So I went with this one. Whatever works, just make sure you have the port and the virtual path the same in the config.xml and the HTTP logger application.
So now I am ready to go. Save the settings and then Click Log/Start to begin listening. Now launch your application and you should see information display in the http logger application.
Note: When logging to HTTP, the local log file will not be created. It is an either/or condition. But this comes in very handy when you are working through debugging your application once the app executes on a real device.
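For reference, here is a stripped-down sketch of how a listener like the demo app can receive these messages with HttpListener. This is not the actual demo source; the prefix is an assumption and must match the LogPort and LogURI values from config.xml:

using System;
using System.IO;
using System.Net;

class HttpLogListener
{
    static void Main()
    {
        var listener = new HttpListener();
        // Must match LogPort (80) and LogURI (/Temporary_Listen_Addresses/) in config.xml
        listener.Prefixes.Add("http://+:80/Temporary_Listen_Addresses/");
        listener.Start();
        Console.WriteLine("Waiting for RhoElements log messages...");

        while (true)
        {
            HttpListenerContext context = listener.GetContext();
            using (var reader = new StreamReader(context.Request.InputStream))
            {
                // Dump whatever the device posted (the log entry) to the console
                Console.WriteLine(reader.ReadToEnd());
            }
            context.Response.StatusCode = 200;
            context.Response.Close();
        }
    }
}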
Unfortunately, I was unable to get it to work. I followed your instructions. My PC has an ip of 192.168.1.5 so I set the config file to <LogURI value="192.168.1.5/Temporary_Listen_Addresses/"/> I used HttpNameSpace Manager to verify *:80/Temporary_Listen_Addresses/ is open to everyone. When I start logging, nothing happens.
Great post, Rob. I have to add that this is a really useful method of debugging your apps, it's definitely worth setting up and having running in the background so you can refer to the data as you code and test your apps.
I left the LogUri value at 192.168.1.1/. Running the HttpListener as Administrator fixes the Access Denied problem. You might have something else listening on port 80 already?
Yeah - I'm having this same problem where I've set it up per the example but nothing logs.
Great post! I have been successful in capturing logs via the provided HTTP Logger application while the device is docked but not while undocked using wifi on a MC9190-G. Is being docked a requirement for this activity?
Also, can you share the actual Logging API (request params) as I would like to post all logging to a specific system.
Thanks!
Red Hat Bugzilla – Bug 1299185
qemu-img created VMDK images lead to "Not a supported disk format (sparse VMDK version too old)"
Last modified: 2016-03-17 12:19:46 EDT
+++ This bug was initially created as a clone of Bug #1299116 +++
Description of problem:
qemu-img only creates broken VMDK image files that can't be used within
an OVA file for importing a virtual machine into VMware vSphere/ESXi. Every
VMDK file created by qemu-img leads during import to the error message:
Not a supported disk format (sparse VMDK version too old)
Version-Release number of selected component (if applicable):
qemu-img-1.5.3-105.el7_2.1.x86_64
qemu-img-2.5.0-3.fc24.x86_64 (rebuilt from Fedora Rawhide for RHEL 7)
VMware vSphere Client, Version 5.1.0, Build 1064113
VMware ESXi, Version 5.1.0, Build 1065491, German-000
How reproducible:
Every time, see above and below.
Steps to Reproduce:
1. cd /tmp/
2. dd if=/dev/zero of=qemu-img-bug.img bs=1M count=1
3. qemu-img convert qemu-img-bug.img -O vmdk \
-o adapter_type=lsilogic,subformat=streamOptimized,compat6 \
qemu-img-bug.vmdk
4. tar cf qemu-img-bug.ova qemu-img-bug.ovf qemu-img-bug.vmdk
5. Try to import qemu-img-bug.ova into VMware -> fails with error above
6. printf '\x03' | dd conv=notrunc of=qemu-img-bug.vmdk bs=1 seek=$((0x4))
7. tar cf qemu-img-bug.ova qemu-img-bug.ovf qemu-img-bug.vmdk
8. Try to import qemu-img-bug.ova into VMware -> works as expected now
Actual results:
qemu-img created VMDK images lead to "Not a supported disk format (sparse
VMDK version too old)".
Expected results:
qemu-img created VMDK images should not lead to any error message during
import into VMware.
Additional info:
Credit for printf/dd combination at point 6 goes to Radoslav Gerganov.
Fix was posted upstream, but isn't in git yet:
This bug appears to have been reported against 'rawhide' during the Fedora 24 development cycle.
Changing version to '24'.
More information and reason for this action is here:
This was fixed with the last qemu builds in feb, just forgot to close this
The io module provides the Python interfaces to stream handling. The built-in open() function is defined in this module.
At the top of the I/O hierarchy is the abstract base class IOBase. It defines the basic interface to a stream. Note, however, that there is no separation between reading and writing to streams; implementations are allowed to raise an IOError if they do not support a given operation. The built-in open() function returns the corresponding stream. If the file cannot be opened, an IOError is raised.
New in version 3.1: The SEEK_* constants. The concrete stream classes described below provide further members in addition to those from IOBase.
In many situations, buffered I/O streams will provide higher performance (bandwidth and latency) than raw I/O streams. Their API is also more usable.
A buffered I/O object giving a combined, higher-level access to two sequential RawIOBase objects: one readable, the other writeable. It is useful for pairs of unidirectional communication channels (pipes, for instance). TextIOWrapper provides one attribute in addition to those of TextIOBase and its parents: line_buffering, whether line buffering is enabled.
An in-memory stream for text. It inherits TextIOWrapper.
The initial value of the buffer (an empty string by default) can be set by providing an initial value.
Example usage:
import io output = io.StringIO() output.write('First line.\n') print('Second line.', file=output) # Retrieve file contents -- this will be # 'First line.\nSecond line.\n' contents = output.getvalue() # Close object and discard memory buffer -- # .getvalue() will now raise an exception. output.close()
From: David Abrahams (abrahams_at_[hidden])
Date: 2001-04-11 17:08:40
----- Original Message -----
From: <williamkempf_at_[hidden]>
> C++ platforms (MSVC) that throw exceptions when an assertion fires?
> Some alternative library assertion packages may do this, but none
> that come with MSVC throw exceptions. I was speaking of other
> languages here.
What does this program do with msvc?
#include <cassert>
#include <iostream>
int main() {
try {
assert(0);
}
catch(...) {
std::cout << "hi there" << std::endl;
}
return 0;
}
Try compiling it as follows:
cl /MLd /GX /EHa foo.cpp
> In any event, the JIT debugger handles exceptions as easily as it
> does assertions, dropping you into the code at the point where the
> exception was thrown. So I don't know what you're trying to say here.
See above.
> > Other languages have different exception models, so its hard to
> make a case
> > about C++ exception use policy on that basis. For example, I would
> have no
> > serious objections to throwing an exception from assert() if the
> exception
> > could carry a snapshot of the program state (before unwinding) with
> it. This
> > sort of thing is possible with LISP systems.
>
> It's possible with C++ as well. There's nothing in the standard
> preventing this that I can see. Granted, it's not required either,
> which may be what you're getting at, but it's also not required for
> an assertion to be able to give you this type of information either.
That's right. Of course there are differences between theory and practice.
In practice, C++ exceptions don't do this (and there's a good reason - it
would be way too expensive), but sometimes assertions do.
Another note: other languages have different error models. For example,
interpreted languages like LISP, Java, and pure Python should never crash,
no matter what bugs a user puts in his program. In principle, it should be
possible to avoid anything like unpredictable behavior (although I think the
latter two fail to acheive this goal due to poor specification). The price
paid for this kind of security is, of course, speed. No real C++
implementation is willing to accept this performance hit, so there are
plenty of ways to crash a program. I think this leads to a different
programming model. For example, in Python, iteration through a sequence is
terminated with an IndexError exception. It isn't incorrect to index off the
end of a Python sequence - it happens all the time. We wouldn't do that in
C++, but not just because of the performance problem: we have a different
culture, a different way of looking at program correctness.
> > One final point:
> >
> > if your function is documented to throw under condition ~X, then X
> is not a
> > really precondition for your function: the behavior under ~X should
> be
> > well-defined. I don't see any point in saying "f() requires X, but
> if ~X, it
> > will throw".
> >
> > If your function is documented to require X, it is allowed to
> throw, assert,
> > or do anything else it likes under condition ~X. Whether or not
> throwing is
> > advisable can be taken on a case-by-case basis. In some
> applications where
> > recoverability is critical, it may make sense to assert in debug
> builds but
> > throw in release builds.
>
> ~X == Y is always a valid possible boolean equivalency, so I don't
> see how this designation can be used.
I never mentioned Y. I don't understand what you're getting at.
> > I don't think it would be fair to say that I violate my own rules:
> I'm
> > saying it's a choice you have to think about. There are tradeoffs.
> Not every
> > programming decision can be made by prescription.
>
> I didn't single out any one person when I said "they" break their own
> rules. By your own admission here you don't have rules but instead
> have what I'd consider loose guidelines.
They're not really as loose as they may seem. I pretty much always assert
invariants and preconditions. I'm just unwilling to pass judgement against
someone who thinks they stand a better chance of recovery in a
mission-critical application by turning all precondition violations into
exceptions for the shipping product. IMO, a precondition violation almost
always means some program invariant is broken, so recovery is unlikely.
These people are gambling... who am I to tell them how to place their bets?
> Every time this subject has
> come up, however, the arguments always state hard and fast rules. To
> further complicate matters, even if you take these rules as loose
> guidelines instead, every person seems to have their own set.
Maybe so.
> I've
> never seen this topic come up where any kind of concensus is
> reached. I'm truly at a loss to understand when and when not to use
> exceptions.
I guess you'll just need to sort through the arguments and see which ones
ring true for you, then. Wish I could offer something more/better...
-Dave
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
Tag Helpers in ASP.NET Core
What are Tag Helpers
Tag Helpers enable server-side code to participate in creating and rendering HTML elements in Razor files. For example, the built-in
ImageTagHelper can append a version number to the image name. Whenever the image changes, the server generates a new unique version for the image, so clients are guaranteed to get the current image (instead of a stale cached image).
LabelTagHelper can target the HTML
<label> element when the
LabelTagHelper attributes are applied. If you're familiar with HTML Helpers, Tag Helpers reduce the explicit transitions between HTML and C# in Razor views. In many cases, HTML Helpers provide an alternative approach to a specific Tag Helper, but it's important to recognize that Tag Helpers don't replace HTML Helpers and there's not a Tag Helper for each HTML Helper. Tag Helpers compared to HTML Helpers explains the differences in more detail.
What Tag Helpers provide
An HTML-friendly development experience For the most part, Razor markup using Tag Helpers looks like standard HTML. Front-end designers conversant with HTML/CSS/JavaScript can edit Razor without learning C# Razor syntax.
A rich IntelliSense environment for creating HTML and Razor markup This is in sharp contrast to HTML Helpers, the previous approach to server-side creation of markup in Razor views. Tag Helpers compared to HTML Helpers explains the differences in more detail. IntelliSense support for Tag Helpers explains the IntelliSense environment. Even developers experienced with Razor C# syntax are more productive using Tag Helpers than writing C# Razor markup.
A way to make you more productive and able to produce more robust, reliable, and maintainable code using information only available on the server
For example, historically the mantra on updating images was to change the name of the image when you change the image. Images should be aggressively cached for performance reasons, and unless you change the name of an image, you risk clients getting a stale copy. Historically, after an image was edited, the name had to be changed and each reference to the image in the web app needed to be updated. Not only is this very labor intensive, it's also error prone (you could miss a reference, accidentally enter the wrong string, etc.) The built-in
ImageTagHelper can do this for you automatically. The
ImageTagHelper can append a version number to the image name, so whenever the image changes, the server automatically generates a new unique version for the image. Clients are guaranteed to get the current image. This robustness and labor savings comes essentially free by using the
ImageTagHelper.
Most built-in Tag Helpers target standard HTML elements and provide server-side attributes for the element. For example, the
<input> element used in many views in the Views/Account folder contains the
asp-for attribute. This attribute extracts the name of the specified model property into the rendered HTML. Consider a Razor view with the following model:
public class Movie { public int ID { get; set; } public string Title { get; set; } public DateTime ReleaseDate { get; set; } public string Genre { get; set; } public decimal Price { get; set; } }
The following Razor markup:
<label asp-for="Movie.Title"></label>
Generates the following HTML:
<label for="Movie_Title">Title</label>
The
asp-for attribute is made available by the
For property in the LabelTagHelper. See Author Tag Helpers for more information.
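As a rough sketch of what authoring a Tag Helper involves (this simplified EmailTagHelper is illustrative; the element name, property, and domain below are assumptions, not part of the built-in helpers):

using Microsoft.AspNetCore.Razor.TagHelpers;

// Targets <email> elements by convention and rewrites them as mailto links.
public class EmailTagHelper : TagHelper
{
    // Bound from markup: <email mail-to="support"></email>
    public string MailTo { get; set; }

    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        output.TagName = "a";                          // replace <email> with <a>
        var address = MailTo + "@contoso.com";         // assumed domain for this sketch
        output.Attributes.SetAttribute("href", "mailto:" + address);
        output.Content.SetContent(address);
    }
}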
Managing Tag Helper scope
Tag Helpers scope is controlled by a combination of
@addTagHelper,
@removeTagHelper, and the "!" opt-out character.
@addTagHelper makes Tag Helpers available
If you create a new ASP.NET Core web app named AuthoringTagHelpers, the following Views/_ViewImports.cshtml file will be added to your project:
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers @addTagHelper *, AuthoringTagHelpers
The
@addTagHelper directive makes Tag Helpers available to the view. In this case, the view file is Pages/_ViewImports.cshtml, which by default is inherited by all files in the Pages folder and subfolders; making Tag Helpers available. The code above uses the wildcard syntax ("*") to specify that all Tag Helpers in the specified assembly (Microsoft.AspNetCore.Mvc.TagHelpers) will be available to every view file in the Views directory or subdirectory. The first parameter after
@addTagHelper specifies the Tag Helpers to load (we are using "*" for all Tag Helpers), and the second parameter "Microsoft.AspNetCore.Mvc.TagHelpers" specifies the assembly containing the Tag Helpers. Microsoft.AspNetCore.Mvc.TagHelpers is the assembly for the built-in ASP.NET Core Tag Helpers.
To expose all of the Tag Helpers in this project (which creates an assembly named AuthoringTagHelpers), you would use the following:
@using AuthoringTagHelpers @addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers @addTagHelper *, AuthoringTagHelpers
If your project contains an EmailTagHelper with the fully qualified name AuthoringTagHelpers.TagHelpers.EmailTagHelper, you can provide the fully qualified name (FQN) of the Tag Helper:
@using AuthoringTagHelpers @addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers @addTagHelper AuthoringTagHelpers.TagHelpers.EmailTagHelper, AuthoringTagHelpers
To add a Tag Helper to a view using an FQN, you first add the FQN (
AuthoringTagHelpers.TagHelpers.EmailTagHelper), and then the assembly name (AuthoringTagHelpers). Most developers prefer to use the "*" wildcard syntax. The wildcard syntax allows you to insert the wildcard character "*" as the suffix in an FQN. For example, any of the following directives will bring in the EmailTagHelper:
@addTagHelper AuthoringTagHelpers.TagHelpers.E*, AuthoringTagHelpers
@addTagHelper AuthoringTagHelpers.TagHelpers.Email*, AuthoringTagHelpers
As mentioned previously, adding the
@addTagHelper directive to the Views/_ViewImports.cshtml file makes the Tag Helper available to all view files in the Views directory and subdirectories. You can use the
@addTagHelper directive in specific view files if you want to opt-in to exposing the Tag Helper to only those views.
@removeTagHelper removes Tag Helpers
The
@removeTagHelper has the same two parameters as
@addTagHelper, and it removes a Tag Helper that was previously added. For example,
@removeTagHelper applied to a specific view removes the specified Tag Helper from the view. Using
@removeTagHelper in a Views/Folder/_ViewImports.cshtml file removes the specified Tag Helper from all of the views in Folder.
Controlling Tag Helper scope with the _ViewImports.cshtml file
You can add a _ViewImports.cshtml to any view folder, and the view engine applies the directives from both that file and the Views/_ViewImports.cshtml file. If you added an empty Views/Home/_ViewImports.cshtml file for the Home views, there would be no change because the _ViewImports.cshtml file is additive. Any
@addTagHelper directives you add to the Views/Home/_ViewImports.cshtml file (that are not in the default Views/_ViewImports.cshtml file) would expose those Tag Helpers to views only in the Home folder.
Opting out of individual elements
You can disable a Tag Helper at the element level with the Tag Helper opt-out character ("!"). For example,
here is a <span> with the Tag Helper opt-out character:
<!span asp-validation-for="Email" class="text-danger"></!span>
You must apply the Tag Helper opt-out character to the opening and closing tag. (The Visual Studio editor automatically adds the opt-out character to the closing tag when you add one to the opening tag). After you add the opt-out character, the element and Tag Helper attributes are no longer displayed in a distinctive font.
Using
@tagHelperPrefix to make Tag Helper usage explicit
The
@tagHelperPrefix directive allows you to specify a tag prefix string to enable Tag Helper support and to make Tag Helper usage explicit. For example, you could add the following markup to the Views/_ViewImports.cshtml file:
@tagHelperPrefix th:
In the code image below, the Tag Helper prefix is set to
th:, so only those elements using the prefix
th: support Tag Helpers (Tag Helper-enabled elements have a distinctive font). The
<label> and
<input> elements have the Tag Helper prefix and are Tag Helper-enabled, while the
<span> element doesn't.
The same hierarchy rules that apply to
@addTagHelper also apply to
@tagHelperPrefix.
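For example, with that prefix in place, markup might look like the following sketch (the element and property names simply follow the earlier examples):

<!-- Tag Helper-enabled: uses the th: prefix -->
<th:label asp-for="Email"></th:label>
<th:input asp-for="Email" />

<!-- Plain HTML: no prefix, so not processed by Tag Helpers -->
<span>Email</span>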
Self-closing Tag Helpers
Many Tag Helpers can't be used as self-closing tags. Some Tag Helpers are designed to be self-closing tags. Using a Tag Helper that was not designed to be self-closing suppresses the rendered output. Self-closing a Tag Helper results in a self-closing tag in the rendered output. For more information, see this note in Authoring Tag Helpers.
IntelliSense support for Tag Helpers
When you create a new ASP.NET Core web app in Visual Studio, it adds the NuGet package "Microsoft.AspNetCore.Razor.Tools". This is the package that adds Tag Helper tooling.
Consider writing an HTML
<label> element. As soon as you enter
<l in the Visual Studio editor, IntelliSense displays matching elements:
Not only do you get HTML help, but the icon (the "@" symbol with "<>" under it).
identifies the element as targeted by Tag Helpers. Pure HTML elements (such as the
fieldset) display the "<>" icon.
A pure HTML
<label> tag displays the HTML tag (with the default Visual Studio color theme) in a brown font, the attributes in red, and the attribute values in blue.
After you enter
<label, IntelliSense lists the available HTML/CSS attributes and the Tag Helper-targeted attributes:
IntelliSense statement completion allows you to enter the tab key to complete the statement with the selected value:
As soon as a Tag Helper attribute is entered, the tag and attribute fonts change. Using the default Visual Studio "Blue" or "Light" color theme, the font is bold purple. If you're using the "Dark" theme the font is bold teal. The images in this document were taken using the default theme.
You can enter the Visual Studio CompleteWord shortcut (Ctrl+Space is the default) inside the double quotes (""), and you are now in C#, just like you would be in a C# class. IntelliSense displays all the methods and properties on the page model. The methods and properties are available because the property type is
ModelExpression. In the image below, I'm editing the
Register view, so the
RegisterViewModel is available.
IntelliSense lists the properties and methods available to the model on the page. The rich IntelliSense environment helps you select the CSS class:
Tag Helpers compared to HTML Helpers
Tag Helpers attach to HTML elements in Razor views, while HTML Helpers are invoked as methods interspersed with HTML in Razor views. Consider the following Razor markup, which creates an HTML label with the CSS class "caption":
@Html.Label("FirstName", "First Name:", new {@class="caption"})
The at (
@) symbol tells Razor this is the start of code. The next two parameters ("FirstName" and "First Name:") are strings, so IntelliSense can't help. The last argument:
new {@class="caption"}
Is an anonymous object used to represent attributes. Because
class is a reserved keyword in C#, you use the
@ symbol to force C# to interpret
@class= as a symbol (property name). To a front-end designer (someone familiar with HTML/CSS/JavaScript and other client technologies but not familiar with C# and Razor), most of the line is foreign. The entire line must be authored with no help from IntelliSense.
Using the
LabelTagHelper, the same markup can be written as:
<label class="caption" asp-for="FirstName"></label>
With the Tag Helper version, as soon as you enter
<l in the Visual Studio editor, IntelliSense displays matching elements:
IntelliSense helps you write the entire line.
The following code image shows the Form portion of the Views/Account/Register.cshtml Razor view generated from the ASP.NET 4.5.x MVC template included with Visual Studio.
The Visual Studio editor displays C# code with a grey background. For example, the
AntiForgeryToken HTML Helper:
@Html.AntiForgeryToken()
is displayed with a grey background. Most of the markup in the Register view is C#. Compare that to the equivalent approach using Tag Helpers:
The markup is much cleaner and easier to read, edit, and maintain than the HTML Helpers approach. The C# code is reduced to the minimum that the server needs to know about. The Visual Studio editor displays markup targeted by a Tag Helper in a distinctive font.
Consider the Email group:
<div class="form-group">
    <label asp-for="Email" class="col-md-2 control-label"></label>
    <div class="col-md-10">
        <input asp-for="Email" class="form-control" />
        <span asp-validation-for="Email" class="text-danger"></span>
    </div>
</div>
Each of the "asp-" attributes has a value of "Email", but "Email" isn't a string. In this context, "Email" is the C# model expression property for the
RegisterViewModel.
The Visual Studio editor helps you write all of the markup in the Tag Helper approach of the register form, while Visual Studio provides no help for most of the code in the HTML Helpers approach. IntelliSense support for Tag Helpers goes into detail on working with Tag Helpers in the Visual Studio editor.
Tag Helpers compared to Web Server Controls
Tag Helpers don't own the element they're associated with; they simply participate in the rendering of the element and content. ASP.NET Web Server controls are declared and invoked on a page.
Web Server controls have a non-trivial lifecycle that can make developing and debugging difficult.
Web Server controls allow you to add functionality to the client Document Object Model (DOM) elements by using a client control. Tag Helpers have no DOM.
Web Server controls include automatic browser detection. Tag Helpers have no knowledge of the browser.
Multiple Tag Helpers can act on the same element (see Avoiding Tag Helper conflicts ) while you typically can't compose Web Server controls.
Tag Helpers can modify the tag and content of HTML elements that they're scoped to, but don't directly modify anything else on a page. Web Server controls have a less specific scope and can perform actions that affect other parts of your page; enabling unintended side effects.
Web Server controls use type converters to convert strings into objects. With Tag Helpers, you work natively in C#, so you don't need to do type conversion.
Web Server controls use System.ComponentModel to implement the run-time and design-time behavior of components and controls.
System.ComponentModelincludes the base classes and interfaces for implementing attributes and type converters, binding to data sources, and licensing components. Contrast that to Tag Helpers, which typically derive from
TagHelper, and the
TagHelperbase class exposes only two methods,
Processand
ProcessAsync.
Customizing the Tag Helper element font
You can customize the font and colorization from Tools > Options > Environment > Fonts and Colors:
Built-in ASP.NET Core Tag Helpers
Distributed Cache Tag Helper
Validation Message Tag Helper
Validation Summary Tag Helper
Additional resources
- Author Tag Helpers
- Working with Forms
- TagHelperSamples on GitHub contains Tag Helper samples for working with Bootstrap.
char *tmpnam( char *str )
char *str; // A pointer to a character string
Synopsis
#include "stdio.h"
The tmpnam function generates a name that can be used to create a temporary file.
Parameters
str is a pointer to a character string that will hold the generated name and will be identical to the name returned by the function.
Return Value
tmpnam returns a pointer to a character string that holds the generated temporary name if successful, or NULL otherwise.
Remarks
The pointer returned by tmpnam is identical to the parameter str. tmpnam returns a name unique in the current working directory.
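A short usage sketch (minimal error handling; L_tmpnam is the standard buffer-size constant from stdio.h):

#include <stdio.h>

int main(void)
{
    char name[L_tmpnam];

    if (tmpnam(name) == NULL) {        /* generate a unique temporary name */
        fprintf(stderr, "tmpnam failed\n");
        return 1;
    }

    FILE *fp = fopen(name, "w");       /* create the temporary file */
    if (fp != NULL) {
        fputs("scratch data\n", fp);
        fclose(fp);
        remove(name);                  /* clean up when done */
    }
    return 0;
}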
See Also
tmpfile, fopen
Add “Launch at Login” functionality to your macOS app in seconds
It's usually quite a convoluted and error-prone process to add this. No more!
This package works with both sandboxed and non-sandboxed apps and it's App Store compatible and used in apps like Plash, Dato, Lungo, and Battery Indicator.
Add in the “Swift Package Manager” tab in Xcode.
Warning: Carthage is not recommended. Support for it will be removed at some point in the future.
github "sindresorhus/LaunchAtLogin"
Add a new “Run Script Phase” below (not into) “Copy Bundle Resources” in “Build Phases” with the following:
"${BUILT_PRODUCTS_DIR}/LaunchAtLogin_LaunchAtLogin.bundle/Contents/Resources/copy-helper-swiftpm.sh"
(I would name the run script
Copy “Launch at Login Helper”)
Add a new “Run Script Phase” below (not into) “Embed Frameworks” in “Build Phases” with the following:
"${PROJECT_DIR}/Carthage/Build/Mac/LaunchAtLogin.framework/Resources/copy-helper.sh"
(I would name the run script
Copy “Launch at Login Helper”)
No need to store any state to UserDefaults.
Note that the Mac App Store guidelines requires “launch at login” functionality to be enabled in response to a user action. This is usually solved by making it a preference that is disabled by default. Many apps also let the user activate it in a welcome screen.
import LaunchAtLogin

print(LaunchAtLogin.isEnabled)
//=> false

LaunchAtLogin.isEnabled = true

print(LaunchAtLogin.isEnabled)
//=> true
This package comes with a
LaunchAtLogin.Toggle view which is like the built-in
Toggle but with a predefined binding and label. Clicking the view toggles “launch at login” for your app.
struct ContentView: View {
    var body: some View {
        LaunchAtLogin.Toggle()
    }
}
The default label is
"Launch at login", but it can be overridden for localization and other needs:
struct ContentView: View {
    var body: some View {
        LaunchAtLogin.Toggle {
            Text("Launch at login")
        }
    }
}
Alternatively, you can use
LaunchAtLogin.observable as a binding with
@ObservedObject:
import SwiftUI
import LaunchAtLogin

struct ContentView: View {
    @ObservedObject private var launchAtLogin = LaunchAtLogin.observable

    var body: some View {
        Toggle("Launch at login", isOn: $launchAtLogin.isEnabled)
    }
}
Just subscribe to
LaunchAtLogin.publisher:
import Combine
import LaunchAtLogin

final class ViewModel {
    private var isLaunchAtLoginEnabled = LaunchAtLogin.isEnabled
    private var cancellables = Set<AnyCancellable>()

    func bind() {
        LaunchAtLogin
            .publisher
            .assign(to: \.isLaunchAtLoginEnabled, on: self)
            .store(in: &cancellables)
    }
}
Bind the control to the
LaunchAtLogin.kvo exposed property:
import Cocoa
import LaunchAtLogin

final class ViewController: NSViewController {
    @objc dynamic var launchAtLogin = LaunchAtLogin.kvo
}
The package bundles the helper app needed to launch your app and copies it into your app at build time.
Please ensure that the LaunchAtLogin run script phase is still below the “Embed Frameworks” phase. The order could have been accidentally changed.
The build error usually presents itself as:
cp: […]/Resources/LaunchAtLoginHelper.app: No such file or directory rm: […]/Resources/copy-helper.sh: No such file or directory Command PhaseScriptExecution failed with a nonzero exit code
Rebuilding LaunchAtLogin when using Carthage
The bundled launcher app is written in Swift and hence needs to embed the Swift runtime libraries. If your project targets macOS 10.14.4 or later, you can avoid embedding the Swift runtime libraries. First, open
./Carthage/Checkouts/LaunchAtLogin/LaunchAtLogin.xcodeproj and set the deployment target to the same as your app, and then run
$ carthage build. You'll have to do this each time you update
LaunchAtLogin.
This is not a problem when using Swift Package Manager.
This is the expected behavior, unfortunately.
This is usually caused by having one or more older builds of your app laying around somewhere on the system, and macOS picking one of those instead, which doesn't have the launch helper, and thus fails to start.
Some things you can try:
- Delete the DerivedData directory.
Some helpful Stack Overflow answers:
CocoaPods used to be supported, but it did not work well and there was no easy way to fix it, so support was dropped. Even though you mainly use CocoaPods, you can still use Swift Package Manager just for this package without any problems.
There's no LaunchAtLogin.bundle in my debug build or I get a notarization error for developer ID distribution
As discussed here, this package tries to determine if you're making a release or debug build and clean up its install accordingly. If your debug build is missing the bundle or, conversely, your release build has the bundle and it causes a code signing error, that means this has failed.
The script's determination is based on the “Build Active Architecture Only” flag in build settings. If this is set to
YES, then the script will package LaunchAtLogin for a debug build. You must set this flag to
NO if you plan on distributing the build with codesigning.
I had a few technical problems with the webcast that pushed our start time back about five minutes, but after switching to a different machine things went fine. I'm now recorded and online here.
I generally don't like giving these webcasts, as I find the lack of a live audience somewhat disconcerting, but I think I'm starting to get the hang of it. Thanks to everyone who listened in and submitted questions/comments. I'd love any additional feedback or questions---please feel free to post here or shoot me an email: isaack@...
Cheers,-Isaac
Published Thursday, November 01, 2007 8:41 PM
by
isaac
Thanks again for the presentation.
My questions relates to you showing Virtual Earth presenting Spatial data from SQL 2008.
Is there a performance hit, and is it significant, in converting from the native storage format to the "Well Known Text" or "Well Known Binary" format?
For Virtual Earth applications where the data is rendered on the client and transmitted over the internet should we be converting to "Well Known Text", do we do this in the query itself or in our application code? And then we would transmit this plain text, then parse into VEShapes objects?
Or are there javascript functions that will convert from the native storage format directly to VEShapes?
Is there a best practice for doing this in both directions, that is data from VE to be stored in the database and from the database to be viewed in VE?
John.
John
Hi John,
Good questions---I should devote a blog post or two to this, I think.
Of course there will be some performance hit when converting formats, the question is how large. We don't currently have evidence to suggest it's very big.
My feeling is now that VE supports GeoRSS (which can contain a GML geometry description) the easiest way to do a mashup with VE will be through GML. I'm going to try and work up some sample code to demonstrate this, but I don't have anything published yet.
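As a rough sketch of the kind of conversion John asked about (the table and column names here are invented; the methods are SQL Server 2008's spatial built-ins):

-- Return shapes as Well Known Text and as GML for use in a Virtual Earth mashup
SELECT StoreName,
       Location.STAsText() AS LocationWkt,  -- Well Known Text
       Location.AsGml()    AS LocationGml   -- GML, suitable for a GeoRSS feed
FROM dbo.Stores
WHERE Location.STDistance(geography::STGeomFromText('POINT(-122.34 47.65)', 4326)) < 5000;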
Cheers,
-Isaac
isaac
Will the VE-based demo application you used be available when you ship the CTP?
You mentioned that there will be .NET types for the spatial datatypes. Will those be hidden away in the SqlServer namespaces? .. and will they have all the same Simple Features methods on them? (Relate, envelope, operations etc...)
SharpGIS
Great presentation, although my company's Internet connection cut out for 15 minutes in the middle (great timing). I had the exact same question as John. Even if GML turns out to be the perfect/right way, how were you displaying the query results in VE in the demo?
Flipping between Enterprise Manager and VE was a bit too fast for me but now I have the video.
Thanks,
Ryan
Ryan
Unfortunately, we can't actually distribute the full demo, as we don't actually own all of the data it relies on. Also, the basic infrastructure is a bit old, and with some of the new features in VE, particularly GeoRSS, the demo should probably be rewritten---it should be much simpler than it is.
I will try to get something up that will demonstrate how to do this kind of mashup soon.
Isaac
Isaac,
You mentioned during the presentation this would be available in a couple weeks. Any updated word on when the new CTP will be available?
Chris
Chris
Chris,
It should be *very* soon. It looks like it's been released internally, but hasn't hit the web site yet.
In this video, we cover a handful of the built-in functions with Python 3. For a full list, see:
We cover absolute value (abs()), the help() function, max() and min() (how to find the maximum and minimum of a list), and how to round a number with round(), as well as ceil() and floor(), even though these last two are NOT built in; it just seemed like a good time to bring them up. Finally, we cover converting floats, ints, and strings to and from each other.
There are still quite a few other built in functions to Python 3, but the others are not really meant for a basics tutorial.
Sample code for the built in functions that are covered in the video:
Absolute Values:
exNum1 = -5
exNum2 = 5

print(abs(exNum1))

if abs(exNum1) == exNum2:
    print('True!')
The Help function:
This is probably one of the most under-utilized commands in Python, many people do not even know that it exists. With help(), you can type it with empty parameters to engage in a search, or you can put a specific function in question in the parameter.
help()

Or...
import time  # will work in a typical installation of Python, but not in the embedded editor
help(time)

Max and Min:
How to find the maximum or highest number in a list...
or how to find the lowest or minimum number in a list.
exList = [5,2,1,6,7]

largest = max(exList)
print(largest)

smallest = min(exList)
print(smallest)
Rounding:
Rounding will round to the nearest whole. There are also ways to round up or round down.
x = 5.622
x = round(x)
print(x)

y = 5.256
y = round(y)
print(y)
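Since ceil() and floor() were mentioned above but are not built in, here is a quick sketch of them (they come from the math module):

import math

z = 5.256
print(math.ceil(z))   # 6 - always rounds up
print(math.floor(z))  # 5 - always rounds down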
Converting data types:
Many times, like reading data in from a file, you might find the datatype is incorrect, like when we mean to have integers, but they are currently in string form, or visa versa.
Converting a string to an integer:
intMe = '55'
intMe = int(intMe)
print(intMe)
Converting and integer to a string:
stringMe = 55
stringMe = str(stringMe)
print(stringMe)
Converting an integer to a float:
floatMe = 55
floatMe = float(floatMe)
print(floatMe)
You can also convert floats to strings, strings to floats, and more. Just make sure you do a valid operation. You still cannot convert the letter h to a float.
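If you try an invalid conversion anyway, Python raises a ValueError, which you can catch; a small sketch:

try:
    bad = float('h')
except ValueError as err:
    print('Could not convert:', err)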
#include <utmp.h>

struct utmp *getutent(void);
struct utmp *getutid(const struct utmp *ut);
struct utmp *getutline(const struct utmp *ut);
struct utmp *pututline(const struct utmp *ut);
void setutent(void);
void endutent(void);
int utmpname(const char *file);
utmpname() sets the name of the utmp-format file for the other utmp functions to access. If utmpname() is not used to set the filename before the other functions are used, they assume _PATH_UTMP, as defined in <paths.h>.
setutent() rewinds the file pointer to the beginning of the utmp file. It is generally a good idea to call it before any of the other functions.
On success pututline() returns ut; on failure, it returns NULL.
utmpname() returns 0 if the new name was successfully stored, or -1 on failure.
setutent(), pututline(), and the getut*() functions can also fail for the reasons described in open(2).
In XPG2 and SVID 2 the function pututline() is documented to return void, and that is what it does on many systems (AIX, HP-UX, Linux libc5). HP-UX introduces a new function _pututline() with the prototype given above for pututline() (also found in Linux libc5).
All these functions are obsolete now on non-Linux systems. POSIX.1-2001, following SUSv1, does not have any of these functions, but instead uses the utmpx interface, declared in:

#include <utmpx.h>
#define _GNU_SOURCE /* or _SVID_SOURCE or _BSD_SOURCE */
#include <string.h>
#include <stdlib.h>
#include <pwd.h>
#include <unistd.h>
#include <utmp.h>
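The body of the original example program was lost in extraction; below is a hedged sketch in the spirit of that example, appending a USER_PROCESS record for the current user and then marking it dead (the tty-name handling is simplistic and only correct for ptys named /dev/tty[pqr][0-9a-z]):

#include <pwd.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <utmp.h>

int main(void)
{
    struct utmp entry;

    system("echo before adding entry:;who");

    memset(&entry, 0, sizeof(entry));
    entry.ut_type = USER_PROCESS;
    entry.ut_pid = getpid();
    strcpy(entry.ut_line, ttyname(STDIN_FILENO) + strlen("/dev/"));
    strcpy(entry.ut_id, ttyname(STDIN_FILENO) + strlen("/dev/tty"));
    entry.ut_time = time(NULL);
    strcpy(entry.ut_user, getpwuid(getuid())->pw_name);
    setutent();
    pututline(&entry);          /* append the login record */

    system("echo after adding entry:;who");

    entry.ut_type = DEAD_PROCESS;
    memset(entry.ut_line, 0, UT_LINESIZE);
    entry.ut_time = 0;
    memset(entry.ut_user, 0, UT_NAMESIZE);
    setutent();
    pututline(&entry);          /* mark the record as dead */

    system("echo after removing entry:;who");

    endutent();
    exit(EXIT_SUCCESS);
}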
Set up Apollo Client in your React app
Out of the box Apollo Client includes packages that we think are essential for building an Apollo app, like our in-memory cache, local state management, error handling, and a React based view layer.
This tutorial walks you through installing and configuring Apollo Client for your React app. If you're just getting started with GraphQL or the Apollo platform, we recommend first completing the full-stack tutorial.
Installation
First, let's install the Apollo Client package:
npm install @apollo/client

Next, create the client in index.js and add the endpoint for our GraphQL server to the
uri property of the client config object.
import { ApolloClient, HttpLink, InMemoryCache } from '@apollo/client'; const client = new ApolloClient({ cache: new InMemoryCache(), link: new HttpLink({ uri: '', }) });
That's it! Now your client is ready to start fetching data. Before we show how to use Apollo Client with React, let's try sending a query with plain JavaScript first. In the same
index.js file, try calling
client.query(). Remember to first import the
gql function for parsing your query string into a query document.
import { gql } from '@apollo/client';
...

Once you've tried a query this way, it's time to use Apollo Client with React so we can start building query components.
Connect your client to React
To connect Apollo Client to React, you will need to use the
ApolloProvider component./client'; { useQuery, gql } from '@apollo/client';.
Did anyone do DS with Python the Missing Numbers assignment
Did someone do the 3rd challenge from the DS with Python course? Mine always causes an error. Here's my code:

import numpy as np
import pandas
lst = [float(x) if x != 'nan' else np.NaN for x in input().split()]
from numpy import nan
lst.fillna(lst.mean(), inplace=True)
print(lst.isnull().sum())

DM please to explain!
12/18/2020 8:52:10 AM - Bat-Ochir Artur
8 Answers
...series....and suddenly it is so easy. Thank you, Lisa!

import numpy as np
import pandas as pd

lst = [float(x) if x != 'nan' else np.NaN for x in input().split()]
ser = pd.Series(lst)
mean = round(ser.mean(),1)
ser = ser.where(ser.notna(), mean)
print(ser)
I have the same problem. The code calculates the values correctly, but when I print it the result is correct, except the output is missing the "dtype: float64" part. Adding a print(df.dtypes) doesn't work either, because the output states dtype object....
Lisa I cannot copy paste the result, nor can I post a screenshot (or can I?), so I am typing the output:

0    3.0
....
....
....
6    3.8
floating64
dtype: object

code:
df = pd.DataFrame(lst, columns=[""])
print(df)
print(df.dtypes)
Trochoide You should be able to copy your code. Then you could save it in playground and link it. I looked it up how I solved it: I converted lst to a pandas series, replaced the nan and only printed the series in the end (it was float64)
import numpy as np
import pandas as pd

lst = [float(x) if x != 'nan' else np.NaN for x in input().split()]
ser = pd.Series(lst)
mean_val = round(ser.mean(),1)
ser1 = ser.fillna(value=mean_val)
print(ser1)
So looking at the replies above, Lisa and Apoorva Datir were able to solve the project by using a Series. When you print the Series, the output is in exactly the right format (with the data type at the end, for example). The project actually says that we should convert the data into a DataFrame. But when you print the DataFrame, the output is NOT in exactly the required format. This is what Bat-Ochir Artur was saying.
Red Hat Bugzilla – Bug 457968
missing requires in preupgrade-0.9.3-3.fc9.noarch
Last modified: 2008-08-12 17:57:13 EDT
minimally installed F9 system. yum install preupgrade. run preupgrade
Fail:
Traceback (most recent call last):
File "/usr/share/preupgrade/preupgrade-gtk.py", line 19, in <module>
import gtk
ImportError: No module named gtk
Cheat and just install system-config-netboot to get these things (and who knows what else), so next:
Fail:
Traceback (most recent call last):
File "/usr/share/preupgrade/preupgrade-cli.py", line 26, in <module>
import preupgrade
File "/usr/lib/python2.5/site-packages/preupgrade/__init__.py", line 27, in <module>
import xf86config
ImportError: No module named xf86config
Install pyxf86config.x86_64
At this point it's just pissed that it has no fonts, but at least it "sorta" starts.
*** This bug has been marked as a duplicate of bug 457803 ***
Normal Static Variables
When we declare normal global/local variables, they can be made static by using the static keyword. Static variables, once created, have a lifetime of the whole program. When local variables are created, they are said to have a lifetime of the block, i.e. they are destroyed when the block is over. But static variables are destroyed only when the program is over, even when they are declared inside a local block.
See following example
#include <iostream> using namespace std; void foo() { static int x = 0; cout<< ++x; } int main() { foo(); cout << endl; foo(); cout << endl; foo(); return 0; }
Output of above program is
1 2 3
See that there is a variable ‘x’ inside
foo() function which is declared static. When
foo() is called first time ‘x’ is initialized with value 0 and then incremented in next statement and printing its value.
If ‘x’ was a local variable, every time
foo() is called ‘x’ would have been created and destroyed when the function hits last curly brace. Output in that case would have been
1 1 1
But here ‘x’ is a static variable, so it is only created once and is destroyed at the end of program. So when first
foo() call ends, ‘x’ is not destroyed and when second
foo() call starts compiler sees that ‘x’ is a static variable and it is still in memory so it kind of ignores the first statement of
foo() and simply increments it and prints its value resulting in the output
1 2 3
Some points to note:
1. Local variables have a block scope and block lifetime: they are created inside the block, can only be accessed inside the block, and are destroyed when the block is over.
2. Global variables have a program scope and program lifetime: they are generally created at the start of the program, can be accessed anywhere inside the program, and are destroyed at the end of the program (when the program finishes execution).
3. Static variables have a program lifetime, and their scope depends on where they are created. Once a static variable is created, it is destroyed only at the end of the program. If the static variable is created inside some block, its lifetime is still the whole program, but it cannot be accessed outside that block; this is called a local static variable. If the static variable is created outside of all blocks, its lifetime is the whole program and it can be accessed anywhere in the program; this is called a global static variable.
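A small sketch contrasting the two kinds (the names are illustrative):

#include <iostream>
using namespace std;

static int globalCounter = 0;     // global static: program lifetime, accessible anywhere in this file

void bump()
{
    static int localCounter = 0;  // local static: program lifetime, visible only inside bump()
    ++localCounter;
    ++globalCounter;
    cout << "local=" << localCounter << " global=" << globalCounter << endl;
}

int main()
{
    bump();   // local=1 global=1
    bump();   // local=2 global=2
    // localCounter is not accessible here, but globalCounter is
    cout << "globalCounter in main: " << globalCounter << endl;
    return 0;
}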
Opened 10 years ago
Closed 7 years ago
#1286 enhancement closed fixed (fixed)
make twisted play nicely with easy_install
Description
Attachments (7)
Change History (77)
comment:1 Changed 10 years ago by icepick
comment:2 Changed 8 years ago by radix
- Component set to conch
- Keywords review added
- Owner set to z3p
READY FOR REVIEW in easy_install-1286
comment:3 Changed 8 years ago by radix
- Component changed from conch to core
- Owner z3p deleted
comment:4 Changed 8 years ago by therve
Cosmetics:
- trailing whitespaces in t.p.dist
- unused import sysconfig in t.p.dist
- trailing whitespaces in test_dist
- tearDown method uses """string""" format
- unused imports in twisted/topfiles/setup.py
- copyright statements should be added to t.p.dist, test_dist, twisted/runner/topfiles/setup.py, and updated to 2007 for twisted/topfiles/setup.py
- dist.Extension has a bit disturbing name, that does make it clear it's different from distutils Extension. I'm not very inspired, maybe ConditionalExtension ?
OK now I'll test the functionality :).
comment:5 Changed 8 years ago by therve
- Cc therve added
- Owner set to radix
- Priority changed from normal to highest
I may miss something, but it seems it doesn't work with the 'sumo' package. I could install each subprojects, but it's not compatible (see first comment).
So, do I miss something?
comment:6 Changed 8 years ago by radix
- Owner changed from radix to therve
Sorry, yeah, it doesn't work to easy_install the full Twisted release. That'll be much more work. This makes it possible to easy_install the individual subprojects.
If you want to test it, you can do ./admin/release-twisted --commands=exportTemp,makeBallAll
(if you want to rerun that command, you will either have to get rid of Twisted.exp or don't specify exportTemp again)
and then try easy_installing some of the tarballs created. (no dependency tracking is done yet, but I consider that a lack of a feature instead of a bug for now).
comment:7 Changed 8 years ago by oubiwann
- Cc oubiwann added
comment:8 Changed 8 years ago by therve
So the problem is with the top-level setup, and I may have a solution. The problem is our hand-made setup.py that allows to install sub-projects. But, we can probably have a simple setup.py that just installs all of Twisted, right ?
What I'd do is import all the subprojects setup.pys, to get extensions (there are no extensions outside of core ?) and other stuff, and just run one setup for all of this. I've quickly tried it, and it seems to work without problem. It installs everything of Twisted... but we may find a way around, until we understand easy_install.
Of course, we can keep current setup.py around under another name for people who cares.
comment:9 Changed 8 years ago by therve
To work around the namespace problem, we must use setuptools:
zope seems to have a nice way to work around the absence of setuptools. See the packaging of zope.interface (it seems their setup.py is generated).
comment:10 Changed 8 years ago by therve
- Keywords review removed
comment:11 Changed 8 years ago by therve
- Owner changed from therve to radix
Changed 8 years ago by therve
comment:12 Changed 8 years ago by therve
I've made a simple, stupid setup.py that install everything in a egg (21M of pure happiness). It detects the extensions, you should have to add __init__.py in twisted/topfiles. I consider that subprojects don't have extensions, which is true for now.
comment:13 Changed 8 years ago by antoine
Just a small thing: your setup script munges sys.path after it has imported copyright and dist. Is it deliberate?
comment:14 Changed 8 years ago by exarkun
- Keywords review added
This seems to be up for review.
comment:15 Changed 8 years ago by korpios
- Cc korpios added
comment:16 Changed 8 years ago by radix
- Keywords review removed
- Owner changed from radix to therve
I guess things have changed in trunk since this setup.py was written, but it doesn't work any more. It doesn't seem to be a trivial fix, either.
comment:17 Changed 8 years ago by therve
comment:18 Changed 8 years ago by therve
- Keywords review added
- Owner changed from therve to radix
I didn't remember the first version, so I tried something else, but this time in a branch: easy_install-1286-2.
THe only real thing to notice is that it doesn't detect extensions anymore, but doesn't fail when with the compilation.
comment:19 Changed 8 years ago by zooko
- Cc zooko added
comment:20 Changed 8 years ago by zooko
comment:21 Changed 8 years ago by therve
It is ready to review, see the keyword.
comment:22 follow-ups: ↓ 24 ↓ 27 Changed 8 years ago by exarkun
- Keywords review removed
- Owner changed from radix to therve
What user-visible functionality should I expect from the easy_install-1286-2 branch?
comment:23 Changed 8 years ago by exarkun
Uggg. As an unrelated thing, the setup.py add/delete causes svn 1.3.2 to corrupt the working copy when merging this branch. Perhaps that file could be M instead of D/A?
comment:24 in reply to: ↑ 22 Changed 8 years ago by therve
comment:25 Changed 8 years ago by korpios
- Cc korpios removed
comment:26 Changed 8 years ago by exarkun
Pretend I've never used easy_install. :) Is easy_install . the only feature I should look for when reviewing this?
comment:27 in reply to: ↑ 22 Changed 8 years ago by zooko
The biggest user-visible change that I'm interested in is that it makes Twisted's dependencies on other packages machine-readable. Wait, that isn't user-visible.
Okay, the biggest user-visible change that I'm interested in is that I can tell the tahoe setup.py that tahoe depends on Twisted, and when someone invokes tahoe's setup.py, then tahoe's setup.py will parse Twisted's dependencies to learn that Twisted depends on zope.interface, and then tahoe's setup.py will make sure that zope.interface is installed before installing twisted.
So the user-visible difference from my perspective is like this:
- Current situation -- tahoe documentation instructs reader to install Twisted and zope.interface before proceeding.
- Twisted packaged with distutils -- user has to manually install zope.interface (because twisted doesn't declare its dependency on zope.interface in a machine-readable way), but user doesn't have to manually install Twisted -- it gets installed automatically because Tahoe depends on it.
- Twisted packaged with setuptools -- Twisted and zope.interface get installed automatically because Tahoe depends on Twisted and Twisted depends on zope.interface.
A way to simulate this without actually running Tahoe's setup.py is to install easy_install and execute "easy_install ." in the twisted dir that contains twisted's setup.py. (I think.)
There are other issues too, but this is the motivating one, for me.
comment:28 Changed 8 years ago by zooko
Note:
It should be possible to use setuptools as your packaging tool and continue to produce packages of the traditional form: .deb's, Windows Installer Thingies, source tarballs, etc.. Setuptools supports all of these (just like distutils does -- perhaps it actually works merely by delegating to distutils). So nobody should think that by gaining the ability to produce eggs, it will become harder to produce other packages. Indeed, this patch will probably make it easier to produce those other packages, by invocations like "./setup.py sdist" and its brethren.
I guess this suggests a couple of more "user visible" things to look at:
Can you run "./setup.py sdist" and get a source tarball that has all the right files in it?
Can you run "./setup.py bdist_wininst" and get some kind of Windows Installer Thingie? I don't know much about this part -- it might not work at all.
comment:29 Changed 8 years ago by zooko
Here's one drawback to using setuptools. It changes the meaning of PYTHONPATH from "Here is a list of places to look for a module such that you will use the first module of that name in this list." to "Here is a set of places to look for modules such that you will use whichever module that appears in this set that setuptools chooses (based on the module's version number).". This is a backwards-incompatible and "silently confusing" change in semantics.
The suggested fix from the setuptools folks is to use "./setup.py develop", which installs the package into your local system with package contents being symlinks back into your current directory.
I sincerely hope that this issue is either work-aroundable or else is considered to be a drawback which is worth accepting in return for the advantages.
comment:30 Changed 8 years ago by therve
FWIW, I don't think we plan using setuptools. The goal of this ticket (I think) is just to make the setup.py compatible with easy_install. easy_install doesn't force you to use setuptools at all.
comment:31 Changed 8 years ago by zooko
Okay, so given that the purpose of this ticket is merely: "make Twisted play nice with easy_install", then all we require is for Twisted to declare its dependencies and version number in a machine-readable and standard way. As far as I know, the easiest way to do that is to use setuptools instead of distutils for packaging, and pass "install_requires=['zope.interface']" as one of the arguments to setup().
If Twisted declares its dependencies and its own version number in the standard way then my tahoe project will be able to automatically satisfy its dependency on Twisted and Twisted's dependency on other projects (zope.interface) without user intervention.
There are of course other ways to accomplish this than invoking setuptools's "setup()". One could construct the appropriate twisted.egg-info/requires.txt files, for example.
comment:32 Changed 8 years ago by pje
Actually, the simplest way to make it "play nice" for the dependency would be to check if setuptools is already in sys.modules. If so, then setup.py will know it is being run under easy_install, and can declare additional options like install_requires.
Personally, I'd be happy to be able to get Twisted to install via easy_install, even without the zope.interface dependency, because then at least people could manually include a zope.interface dependency in their projects as a workaround.
As far as the review/testing goes, I would say the objective is that you should be able to run the distutils "sdist" command to make a source distribution that can be passed on the command line to easy_install.
That is, if you run "setup.py sdist" followed by "easy_install dist/Twisted-whatever.tar.gz", you should get a working Twisted installation on sys.path, with all the extensions, data files, or whatever else included.
Ideally, the aforementioned sdist would also be registered and uploaded to the Cheeseshop, but a "download URL" on the Cheeseshop pointing to a page with *direct* download links would also work.
By the way, easy_install also supports installing from bdist_wininst .exe files, so that's a distribution option as well. However, it can't install arbitrary .exe's, they *have* to be built with bdist_wininst (which includes some extra data in the .exe that easy_install depends on).
Last, but not least, the only other thing that hampered easy_install is inconsistency in package names/distribution file names. That is, easy_install sees a distro like "Twisted_NoDocs-SomeVersion.tar.gz" as a package called "Twisted_NoDocs" of version "SomeVersion", and therefore ignores it as an installation target when someone asks for "Twisted" to be installed.
In other words, when it comes to distribution formats, easy_install really only supports files generated by the distutils (sdist and bdist_wininst) and which retain their distutils-defined filenames. (It also does SVN checkouts and eggs, but those are sort of beside the point at the moment.)
comment:33 Changed 8 years ago by zooko
- Summary Make Twisted Play Nice with setuputils/easy_install deleted
Okay, when I run "./setup.py sdist" on current trunk, it fails. c.f. #2886.
When I run "./setup.py sdist" in the branch, it builds a tarball named "Twisted-2.5.0+r21763". The names of the files in that tarball are attached to this comment as a text file attachment. When I run "easy_install ./Twisted-2.5.0+r21763", then it gives me this failure message:
File "./twisted/__init__.py", line 22, in <module> ImportError: you need zope.interface installed (
If I install zope.interface and try again, then it works:
Processing dependencies for Twisted==2.5.0 Finished processing dependencies for Twisted==2.5.
This is on Mac OS 10.4/Intel. It built some extensions -- c_urlarg, cfsupport -- and it didn't let the failure to build others -- epoll, iocp, -- stop it.
Changed 8 years ago by zooko
listing of the contents of the twisted tarball built by "./setup.py sdist"
Changed 8 years ago by zooko
patch to make twisted's setup.py declare its dependency on zope.interface, but only if it is being evaluated by easy_install
Changed 8 years ago by zooko
patch to make twisted's setup.py declare its dependency on zope.interface, but only if it is being evaluated by easy_install
comment:34 Changed 8 years ago by zooko
- Summary set to make twisted play nicely with easy_install
As per pje's suggestion in this ticket, I wrote the attached patch, which declares Twisted's dependency on zope.interface, but only if setup.py is being evaluated by easy_install. With this patch, then the behavior under normal conditions (not-being-evaluated-by-easy_install) is unchanged, but when I run easy_install ./Twisted-2.5.0+r21763 then it detects the requirement of zope.interface and easy_install automatically satisfies that requirement.
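For readers following along, a conditional dependency declaration of this kind might look roughly like the following sketch (illustrative only; the attached patch is authoritative and may differ in details):
import sys
extra_args = {}
if 'setuptools' in sys.modules:
    # Running under easy_install/setuptools, so the machine-readable
    # dependency declaration will be understood.
    extra_args['install_requires'] = ['zope.interface']
# ...later merged into the normal distutils call, e.g.:
# setup(name="Twisted", version="2.5.0", ..., **extra_args)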
comment:35 Changed 8 years ago by therve
comment:36 Changed 8 years ago by therve
- Branch set to branches/easy_install-1286-2
- Keywords review added
- Owner therve deleted
comment:37 Changed 8 years ago by jml
- Cc radix added
- Keywords review removed
- Owner set to zooko
Here's my review as a non-expert in easy_install & friends.
twisted/python/dist.py
- Need spaces between operators (see PEP 8) in build_ext_no_fail
setup.py
- Needs a module docstring.
- Needs a docstring for main()
- Duplicates significant amounts of data from twisted.topfiles.setup. It would seem more sensible to copy the dict and mutate the values, or at least refer to values in the topfiles setup_args dict.
I also can't find out where setupdist.py is used. grep doesn't help me.
I also think this branch needs RM approval before landing.
comment:38 Changed 8 years ago by zooko
therve: why did you create a setup_args dict in setup.py instead of using the one that was in twisted/topfiles/setup.py? Should one version replace the other?
comment:39 Changed 8 years ago by zooko
The answer to "what is setupdist.py for" is that it is just vestigial since therve mv'ed it aside when he started.
Here is a patch which adds spaces between operators, adds a module docstring and a main() docstring, and rm's setupdist.py.
Changed 8 years ago by zooko
Changed 8 years ago by zooko
comment:40 Changed 8 years ago by zooko
Okay here's a patch which fixes the last of the issues that jml noted:
fix problems noted by jml in
- spaces between operators
- module docstring
- main() docstring
- import setup_args metadata values from twisted.topfiles.setup instead of duplicating them in source code
- remove old setupdist.py
comment:41 Changed 8 years ago by zooko
- Keywords review added
comment:42 Changed 8 years ago by exarkun
comment:43 Changed 8 years ago by exarkun
- Branch changed from branches/easy_install-1286-2 to branches/easy_install-1286-3
comment:44 Changed 8 years ago by exarkun
comment:45 Changed 8 years ago by exarkun
- Keywords review removed
Failing test in the branch: twisted.test.test_doc.DocCoverage.testPackages
The "docstring" on the if-suite in the top-level setup.py looks like it really belongs on the main function in that file.
Lots and lots of files don't end up in the sdist tarball which results from setup.py sdist, which is the intended use of this stuff, as far as I can tell. That means the resulting install can't possibly be completely correct, since it won't start with the necessary files. Lots and lots of tests fail when run against the result of unpacking the tgz created by the sdist subcommand.
I don't really understand what the new setup.py is doing, so I haven't really reviewed it, I've only looked at the resulting behavior, which doesn't seem to be what is desired.
comment:46 Changed 7 years ago by therve
I fixed some stuff in the branch, but I have to look further.
comment:47 Changed 7 years ago by zooko
If you want my help with this branch, you'll have to answer my questions that I posted in this ticket.
In a related story, I might actually go ahead and spend time to implement some automated tests of the Twisted install process, as per #2308 . The basic idea would be to use Nexenta Zones to make a virtual system into which I can install and then rollback the whole system. If I do something like this, then how should the tests be written? My current idea is that they should be in the form of a bash script that initializes the zone, attempts to install Twisted within that virtual operating system, runs the Twisted unit tests therein, and exits with code 0 if the twisted unit tests passed.
comment:48 Changed 7 years ago by zooko
Oh, looking back over the ticket I haven't posted the questions that I need answered.
Basically: what is your strategy? How should the current Twisted setup scripts be changed to enable Twisted to be easy_install'able? As far as I understand, two possible strategies stick out:
- Use setuptools instead of distutils to package Twisted. This strategy is attended by a lot of emotion from some people that I don't quite understand, but I don't think the emotion is that big a deal, compared to the fact that this strategy has technical consequences that can be automatically evaluated as documented in #2308 . For what it is worth, I am maintaining a patch for my own use which does this. The patch is near-trivial, and the result is conveniently usable for my purposes.
- Detect at build time whether pkg_resources is imported. If it is, then we can safely add an install_requires=['zope.interface'] item to the dict which is passed to setup().
Actually, this raises another point about which I am confused: except for the issue about declaring dependencies (on zope.interface) in a programmatic way, aren't all packages which are built with a modern version of distutils automatically easy_install'able? What do the twisted setup scripts do which prevents this from working?
comment:49 Changed 7 years ago by radix
Zooko: To answer your questions about automating the tests, I think the only requirement I care about is that it be a buildslave. Making it debuggable is another concern; if something goes wrong, can we duplicate this setup locally?
To answer your other question, the reason that the current setup.py is not easy_installable is that not only does it not use setuptools, it also doesn't use distutils at all. We need to change the toplevel setup.py to be a regular distutils-based script, instead of the hack it is which runs a bunch of other setup.py files.
The stuff you said about pkg_resources sounds fine.
Changed 7 years ago by zooko
patch to build twisted with setuptools instead of distutils
comment:50 Changed 7 years ago by zooko
I just attached a patch that I've been using for a while to build twisted using setuptools. This causes twisted to be easy_install'able.
comment:51 Changed 7 years ago by radix
- Branch changed from branches/easy_install-1286-3 to branches/easy_install-1286-4
comment:52 Changed 7 years ago by radix
The referenced branch allows Twisted to be easy_installed *and* actually builds all the extension modules. It still needs some basic work, like removing some duplication and unit testing some of the simple changes to dist.py, but really I need to set up a buildslave that runs the setup.py and then runs the tests inside of them. The twistedmatrix.com buildbot configuration is overwhelming to me at this point, so it would be great if someone who's familiar with it would pair with me on that. I think exarkun, therve, and zooko would be the best to help me with this, in that order, based on my understanding of THEIR understanding of our configuration :)
comment:53 Changed 7 years ago by therve
OK I think we should create a buildstep that:
- create a distribution tarball for Twisted (the big one without web2 I guess)
- call easy_install tarball.tar.bz2 --install-dir=something
- run the test suite in something. We can use the epoll reactor to be sure that extensions are built
I'm not sure how hard it would be, though.
I've also tried the branch and it does the right thing for me.
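Concretely, such a build step might boil down to commands along these lines (directory names and reactor choice are placeholders, not the real buildbot configuration):
./setup.py sdist
easy_install --install-dir=/tmp/easy-twisted dist/Twisted-*.tar.gz
PYTHONPATH=/tmp/easy-twisted trial --reactor=epoll twisted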
comment:54 Changed 7 years ago by therve
- Branch changed from branches/easy_install-1286-4 to branches/easy-install-1286-4
comment:55 Changed 7 years ago by therve
- Branch changed from branches/easy-install-1286-4 to branches/easy_install-1286-4
comment:56 Changed 7 years ago by radix
Ok, this is working now, and with the help of therve, exarkun, zooko, and idnar we now have a buildslave: "debian-easy-py2.5-epoll".
There's one thing I'm curious about, and maybe zooko can help. The 'name' field we specify in the setup.py files is not a python package name, as seems to be the convention for setuptools-based packages. Is this necessary? If we switch to package names, we have a problem: both the Twisted core distribution as well as the main Twisted distribution would have to be called "twisted". Is it ok if you have to depend on "Twisted >= 8.0" instead of "twisted >= 8.0"? That way if somehow we ever get our subprojects working with setuptools, one could depend on "Twisted Core >= 8.0 and Twisted Web >= 8.0", or whatever.
I'm not putting this up for review *quite* yet because there's one failing test on the easy install slave related to process.alias.out, so I'm going to investigate that.
comment:57 Changed 7 years ago by radix
seems to be because easy_install isn't maintaining privs on data files. process.alias.sh isn't executable after easy_install, and the test relies on executing it. Any opinions on what I should do, anyone? I checked and this looks like the only case where the test suite is executing a data file or otherwise relying on perms of data files - I could just make the test mark it executable. I could also try to figure out how to make easy_install maintain privs, but obviously that will be harder.
comment:58 Changed 7 years ago by radix
chmodding is bad because then the test won't work in an installed environment, unless I trap and ignore the exception. bleh. good night.
comment:59 Changed 7 years ago by therve
I've looked at the problem and I don't think that expecting rights will remain the same after install is a good idea. So I think we should fix that test: I have a local fix, which create a temporary file in _trial_temp, set it executable, and use it in the test. If there is an agreement about this being the good solution, I'll create a ticket and push the branch.
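In test code, the fix described here might look something like this sketch (names are invented for illustration; the actual branch may differ):
import os, stat

def makeExecutableScript(case):
    # case is a trial TestCase: create a throw-away script in the test's
    # temporary area and mark it executable, instead of relying on the
    # permissions of an installed data file.
    path = case.mktemp()
    script = open(path, "w")
    script.write("#!/bin/sh\necho ok\n")
    script.close()
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
    return path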
comment:60 Changed 7 years ago by radix
That sounds fine, can you please make a branch
comment:61 Changed 7 years ago by radix
- Keywords review added
- Owner zooko deleted
comment:62 Changed 7 years ago by therve
- pyflakes errors:
twisted/python/dist.py:11: 'CCompilerError' imported but unused
twisted/runner/topfiles/setup.py:5: redefinition of unused 'Extension' from line 2
- trailing whitespaces in test_dist
- test_win32Definition should use self.patch
Look a bit under windows.
comment:63 Changed 7 years ago by therve
- Keywords review removed
- Owner set to radix
OK, +1 for me.
comment:64 Changed 7 years ago by radix
- Resolution set to fixed
- Status changed from new to closed
(In [23006]) Merge easy_install-1286-5
Author: radix
Reviewer: therve
comment:65 Changed 7 years ago by radix
- Branch changed from branches/easy_install-1286-4 to branches/easy_install-1286-5
comment:66 Changed 7 years ago by radix
- Resolution fixed deleted
- Status changed from closed to reopened
(In [23007]) Revert r23006: Buildbot failures related to setuptools
It seems to be related to old versions of setuptools not accepting our
setup.py file:
error in Twisted setup command: 'install_requires' must be a string or list of
strings containing valid project/version requirement specifiers
comment:67 Changed 7 years ago by radix
- Keywords review added
- Owner radix deleted
- Status changed from reopened to new
Up for review again.
I changed it to skip dependency specification if we're using a crappy version of setuptools. seems to work on cube.
comment:68 Changed 7 years ago by radix
- Keywords review removed
- Owner set to radix
exarkun has +1ed this on IRC.
comment:69 Changed 7 years ago by radix
- Resolution set to fixed
- Status changed from new to closed
(In [23010]) Merge easy_install-1286-5
Author: radix
Reviewer: therve, exarkun
comment:70 Changed 4 years ago by <automation>
- Owner radix deleted
“I want to go to there!” -Liz Lemon
Come May of 2021 we’ll all deserve a vacation to somewhere with great weather. In this project, you will write a program to analyze publicly available weather data sourced from the National Oceanic and Atmospheric Administration (NOAA). Your program will read in data collected at an airport station over a period of time to produce statistics like average temperature, maximum wind, a listing of readings, and so on. You can use this tool to begin planning your summer escape!
Key pieces you will use in this project: csv.DictReader and sys.argv.
To begin, create a new directory (“Folder”) in your Workspace’s projects directory named pj01. Inside the projects/pj01 directory, create a new Python module named weather.py. Go ahead and add a descriptive docstring to the top of the file and an __author__ field with your PID.
The initial challenge of this project is acquiring a source data CSV and making sense of it. NOAA offers many different data sets of different scales and specificities. For this project’s purposes, we will narrow down the focus to a specific dataset: readings from land-based stations such as airports.
Next you will be presented with a tool for selecting a station. Larger weather stations found at international airports tend to have the most complete and reliable readings and recordings. You are encouraged to select a station at a large airport (e.g. Raleigh-Durham sized or greater) near a city you would want to fly into or out of! (Notice how far back the historical records you have access to go!)
Once you’ve found your station, click “Add to Cart”. Don’t worry, this is all free, publicly accessible data made possible through tax-payer funded science and engineering.
Navigate to your cart either by clicking on the orange bar at the bottom of your screen or by scrolling up to the top-right orange Cart link.
Select an output format of LCD CSV.
Important: select a date range of May 10th, 2020 to May 16th, 2020. Be sure to click apply after selecting your start and end date. Check to be sure the date range is correct. After you complete the core requirements of this project, you are encouraged to explore other date ranges and larger data sets.
Enter your email address (don’t worry, this isn’t a company so you won’t get added to a spam list). Click Submit Order. You will see a screen that tells you the order was submitted and that you’ll receive an e-mail once your requested data set is available for download. You can click “Check order status” and give it a minute before refreshing. For such a small data pull this typically gets processed quickly. Otherwise, if you want to you can wait for the email to come in and follow the email link.
Once your data set is available for download, follow the link and your CSV file will go to your downloads folder or desktop. Its name will be your order number followed by “.csv”, which isn’t too convenient. Go ahead and rename it to 2020-05-10-to-16.csv.
Pro-tip: when naming files that involve specific date ranges, use a format as demonstrated in this file name. Why? Arranging the year first, followed by the month with 0-padding, followed by the date, makes it such that your files will be in chronological order when you sort a directory of files.
Copy or drag your 2020-05-10-to-16.csv file into your pj01 directory in VSCode, alongside weather.py.
NOAA currently down or erroring out? You can use this CSV file in the meantime. You’ll want to find a warmer data set for your project submission, though, so come back to NOAA and try again soon!
Before attempting to write a program to operate on a data source you should understand its contents.
If you open the CSV file in code, you’ll see it in the plaintext format that your program will read and analyze. Notice how difficult it is to browse!
There’s a wonderful, free extension for viewing CSV files in VSCode. In VSCode, navigate to View > Extensions. Search for Edit csv; the top result, “extension to edit csv files” by “janisdd”, is the one to install. Go ahead and install it.
Then, open your CSV file in VSCode again, and you should see the following button appear:
Click it! This will open up the Edit CSV Extension’s view of your data in tabular format.
Pro-tip: Do not make any edits to your CSV files while working on data-driven projects! Ensuring that your program works by reading data produced directly from the sources will make it possible for your program to process any other data file from the same source without any manual intervention needed. Our automatic tests will use an alternate data source directly from NOAA.
Since our data file has a first row which contains column names, the extension has an option to “Freeze” the column headers with the first row’s values. Enable the “Has Header” read option as shown:
Spending five minutes scanning through the source data is time well spent. For the purposes of this project there are a few things to emphasize:
Each row is either an hourly reading or a daily summary. In this project the focus will be on the daily summaries exclusively, not on the hourly readings. The REPORT_TYPE column tells you whether a row is an hourly reading or a summary of day (SOD). Since you downloaded a week’s worth of data, you should be able to find 7 rows with a value of SOD in the REPORT_TYPE column.
Columns whose names start with Daily are for the summary of day readings. The daily columns our examples will highlight are:
- DailyAverageDryBulbTemperature
- DailyAverageWindSpeed
- DailyPrecipitation
The following requirements will drive the assessment of this project.
Your program must run as a Python module from the command-line with three required arguments:
- FILE to process
- COLUMN of focus
- OPERATION to perform (any of list, min, avg, max)
When your program is run without the required options, you should print a usage line. For example:
$ python -m projects.pj01.weather
Usage: python -m projects.pj01.weather [FILE] [COLUMN] [OPERATION]
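One reasonable way to structure that check (a sketch, not the only acceptable design) is to inspect sys.argv at the top of main:
import sys

USAGE: str = "Usage: python -m projects.pj01.weather [FILE] [COLUMN] [OPERATION]"

def main() -> None:
    """Entrypoint: validate the command-line arguments before doing any work."""
    if len(sys.argv) != 4:
        print(USAGE)
        exit()
    file_path: str = sys.argv[1]
    column: str = sys.argv[2]
    operation: str = sys.argv[3]
    # ... hand off to the functions that read the CSV and perform the operation

if __name__ == "__main__":
    main()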
list Operation
This is the first operation you should implement as it should help simplify your implementations of the other operations!
The list operation should produce a List[float] containing the COLUMN argument’s value from each row whose REPORT_TYPE contains the string SOD.
Example usage (your data will likely be different - scroll right to see complete lines):
$ python -m projects.pj01.weather projects/pj01/2020-05-10-to-16.csv DailyAverageWindSpeed list
[5.1, 7.1, 6.8, 6.1, 6.9, 6.5, 5.9]
$ python -m projects.pj01.weather projects/pj01/2020-05-10-to-16.csv DailyAverageDryBulbTemperature list
[53.0, 49.0, 50.0, 49.0, 48.0, 54.0, 50.0]
$ python -m projects.pj01.weather projects/pj01/2020-05-10-to-16.csv DailyPrecipitation list
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
A few notes to be careful of:
- The REPORT_TYPE field’s SOD value is actually a string with two trailing spaces of padding: "SOD  "
- Some SOD data points may be missing for a given column or may use special codes beyond our scope like T. If you attempt to convert an empty string or "T" to a float, a ValueError will be encountered. Our solution will be to ignore these kinds of errors with the following try/except construct that avoids crashing our program:
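The construct in question looks like this:
try:
    results_list.append(float(""))
except ValueError:
    ...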
A brief note on try/except blocks: try/except blocks can be used to prevent programs from crashing when an error is encountered. If you expect certain lines of code to potentially raise an error (a ValueError in our case), put them in a try block. If the error does actually end up occurring, then instead of crashing the program, the code in the except block will run. Since our solution is to simply ignore the error, the ellipsis is sufficient for the except block. If no error is raised when the try block runs, then the except block will be skipped over.
Note the results_list variable will be the variable name of whatever list you’re trying to append your values to. The empty string literal shown above should be replaced with whatever str expression you’re attempting to convert to a float value.
min, max, and avg Operations
After completing list, add the following operations to your program:
min - The lowest value for the given column. Examples using the lists of data shown above (your results will vary based on your data set):
$ python -m projects.pj01.weather projects/pj01/2020-05-10-to-16.csv DailyAverageDryBulbTemperature min
48.0
$ python -m projects.pj01.weather projects/pj01/2020-05-10-to-16.csv DailyAverageWindSpeed min
5.1
$ python -m projects.pj01.weather projects/pj01/2020-05-10-to-16.csv DailyPrecipitation min
0.0
max - The largest value for the given column.
$ python -m projects.pj01.weather projects/pj01/2020-05-10-to-16.csv DailyAverageDryBulbTemperature max
54.0
$ python -m projects.pj01.weather projects/pj01/2020-05-10-to-16.csv DailyAverageWindSpeed max
7.1
$ python -m projects.pj01.weather projects/pj01/2020-05-10-to-16.csv DailyPrecipitation max
0.0
avg - The mean of all values for the given column.
$ python -m projects.pj01.weather projects/pj01/2020-05-10-to-16.csv DailyAverageDryBulbTemperature avg
50.42857142857143
$ python -m projects.pj01.weather projects/pj01/2020-05-10-to-16.csv DailyAverageWindSpeed avg
6.3428571428571425
$ python -m projects.pj01.weather projects/pj01/2020-05-10-to-16.csv DailyPrecipitation avg
0.0
A few notes to consider:
You are encouraged to use the built-in min, max, and sum functions in the implementation of this requirement. See the official Python documentation for how to use each. (A sketch of how these operations can reuse your list helper appears after the error examples below.)
If the user attempts to use any column that does not exist in the CSV, respond with the following error and
exit() the process:
$ python -m projects.pj01.weather projects/pj01/2020-05-10-to-16.csv DailyAverageTemperature avg
Invalid column: DailyAverageTemperature
$ python -m projects.pj01.weather projects/pj01/2020-05-10-to-16.csv DailyRain avg
Invalid column: DailyRain
If the user attempts to use any operation beyond those above, or those which you add as an extension, respond with the following error and
exit() the process:
$ python -m projects.pj01.weather projects/pj01/2020-05-10-to-16.csv DailyAverageWindSpeed average
Invalid operation: average
$ python -m projects.pj01.weather projects/pj01/2020-05-10-to-16.csv DailyAverageWindSpeed total
Invalid operation: total
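Here is the sketch referenced above of how the operations can share the values produced by the list operation (the function and parameter names are only illustrative, not required):
from typing import List

def perform(operation: str, data: List[float]) -> None:
    """Carry out the requested operation on the column's list of values."""
    if operation == "list":
        print(data)
    elif operation == "min":
        print(min(data))
    elif operation == "max":
        print(max(data))
    elif operation == "avg":
        print(sum(data) / len(data))
    else:
        print("Invalid operation: " + operation)
        exit()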
Your program should not have any stylistic errors picked up during linting or type checking errors during static type analysis (with the possible exception of
pyplot issues if you choose to extend the project with PyPlot).
5pts - Your program’s code should minimize redundancy. You should not have lots of functions, or blocks of code, which are exactly the same as others like it with a subtle data / literal changed (for example there should not be column-specific functions). The function(s) / solution you write for the
list operation should be designed such that the other operations can make use of it without having to re-implement the process of looping through your table’s data.
1pt - Use meaningful variable and function names.
1pt - Write descriptive docstrings. In your file’s top-level docstring, include the city/station your data set is sourced from.
1pt - Use named constants rather than magic numbers and strings.
Add an additional chart operation to produce a pyplot chart of your data.
For full credit, produce a visualization other than a bar chart (see examples in this tutorial). Try either a line or scatter plot.
Label the x-axis with the
DATE column’s values from your dataset.
Label the y-axis with the column being charted.
To complete this requirement, you should write a separate helper function for generating the chart and import
pyplot within this function. For example, we could write something like this:
from typing import List

def chart_data(data: List[float], column: str, dates: List[str]) -> None:
    import matplotlib.pyplot as plt
    # plot the values of our data over time
    plt.plot(dates, data)
    # label the x-axis Date
    plt.xlabel("Date")
    # label the y-axis whatever column we are analyzing
    plt.ylabel(column)
    # plot!
    plt.show()
This
chart_data function would only be called if the
chart operation is typed at the command line. If we were to make a plot of DailyAverageWindSpeed, this command would produce the corresponding chart:
$ python -m projects.pj01.weather projects/pj01/2020-05-10-to-16.csv DailyAverageWindSpeed chart
To make the chart more readable, it may be helpful to reformat the dates provided by the DATE column of your CSV. In our case, we just ignored the timestamps (all characters following the "T").
To earn credit in your submission: download a second dataset that is at least a month in size and generate a chart for a column you find interesting. Save the resulting chart as an image stored in your
projects/pj01 folder so that the image gets included in your submission. Also, in order for our grader to correctly grade your work, please only include the
import matplotlib.pyplot statement inside of your charting function.
To prepare your scene for submission, be sure to add a docstring to your module (at the top of the file) and a global
__author__ variable set to a string which contains your 9-digit PID.
To build your submission, run
python -m tools.submission projects/pj01 to build your submission zip for upload to Gradescope. Don’t forget to backup your work by creating a commit and pushing it to GitHub.
Facelet tag library descriptors can be specified in one of two ways:
1. In web.xml, as <context-param>:
<context-param> <param-name>javax.faces...
where javax.faces.FACELETS_LIBRARIES is interpreted as a semicolon (;) separated list of paths, each starting with "/" (without quotes). Each entry in the list is a path relative to the web application root, and is interpreted as a facelet XML tag library descriptor. The parameter facelets.LIBRARIES is an alias to javax.faces.FACELETS_LIBRARIES for backwards compatibility reasons.
2. Via auto-discovery, by placing the XML tag library descriptor within a jar on the web application classpath (for example, under the folder WEB-INF/lib). The file should have a name suffix .taglib.xml, and be placed in the META-INF folder of the JAR file.
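For the web.xml route, a declaration listing two hypothetical descriptors could look like this (the descriptor paths are made up):
<context-param>
  <param-name>javax.faces.FACELETS_LIBRARIES</param-name>
  <!-- semicolon-separated paths, each relative to the web application root -->
  <param-value>/WEB-INF/my.taglib.xml;/WEB-INF/other.taglib.xml</param-value>
</context-param>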
Android application builds use XPath to query AndroidManifest.xml attributes. This is a simple build.xml to demo how it is set up:
<?xml version="1.0" encoding="UTF-8"?> <project...
Of course you'll need a datasource to run your SQL statements. It can be set in web.xml, or with the <sql:setDataSource> tag. In web.xml, the declaration is something like this:
<context-param> <param-name>javax.servl...
where the parameter value is a relative JNDI path, or parameters for a JDBC connection. In the above example, the real JNDI name for the data source would be: java:comp/env/jdbc/myDataSource. You are out of luck if the JNDI path for the data source does not fall under the java:comp/env/ namespace. In the case of JDBC connection parameters, the expected format is:
url[, [driver] [, [user] [,password]]]
For example: jdbc:mysql://localhost/,org.gjt.mm.mysql.Driver, where no user name or password is used.
<sql:setDataSource> exports a data source either as a scoped variable...
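As an illustration (the connection details are made up), the tag-based form might be used like this:
<sql:setDataSource var="myDs"
    driver="org.gjt.mm.mysql.Driver"
    url="jdbc:mysql://localhost/mydb"
    user="dbuser" password="secret" />
<sql:query var="rows" dataSource="${myDs}">
  SELECT id, name FROM items
</sql:query>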
November 7, 2007
This article was contributed by Ulrich Drepper.
Profiling memory operations requires collaboration from the hardware.
It is possible to gather some information in software alone, but this
is either coarse-grained or merely a simulation. Examples of
simulation will be shown in Section 7.2 and
Section 7.5. Here we will concentrate on measurable memory
effects.
Access to performance monitoring hardware on Linux is provided by
oprofile. Oprofile provides
continuous profiling capabilities as first described in
[continuous]; it performs statistical, system-wide profiling
with an easy-to-use interface. Oprofile is by no means the only way the
performance measurement functionality of processors can be used;
Linux developers are working on
pfmon
which might at some point be sufficiently widely deployed to warrant being
described here, too.
Figure 7.1: Cycles per Instruction (Follow Random)
The measurements in Figure 7.1 were made on an Intel Core 2 processor, which is multi-scalar and can work on several instructions at once. For a program which is not limited by memory
bandwidth, the ratio can be significantly below 1.0 but, in this case,
1.0 is pretty good.
Once the L1d is no longer large enough to hold the working set, the CPI rises sharply. Knowing which events to measure to explain such behavior is where oprofile is currently hard to use, irrespective of the simple user interface: the user has to figure out the performance counter details by her/himself. In Section 10 we will see details about some processors.
Figure 7.2: Measured Cache Misses (Follow Random)
All ratios are computed using the number of retired instructions
(INST_RETIRED). This means that instructions not touching memory are
also counted, which, in turn, means that the number of instructions which
do touch memory and which suffer a cache miss is even higher than shown in the
graph.
The L1d misses tower over all the others since an L2 miss implies, for Intel processors, an L1d miss as well. The sequential access case is shown in Figure 7.3.
Figure 7.3: Measured Cache Misses (Follow Sequential)
The fourth line in both graphs is the DTLB miss rate (Intel has
separate TLBs for code and data, DTLB is the data TLB). For the
random access case, the DTLB miss rate is significant and contributes to the
delays. What is interesting is that the DTLB penalties set in before
the L2 misses. For the sequential access case the DTLB costs are basically
zero.
Going back to the matrix multiplication example in Section 6.2.1 and
the example code in Section 9.1, we can make use of three
more counters. The SSE_PRE_MISS, SSE_PRE_EXEC, and
LOAD_HIT_PRE counters can be used to see how effective the software
prefetching is. If the code in Section 9.1 is run we get
the following results:
Description              Ratio
Useful NTA prefetches    2.84%
Late NTA prefetches      2.65%
The low useful NTA (non-temporal aligned) prefetch ratio indicates
that many prefetch instructions are executed for cache lines which are
already loaded, so no work is needed. This means the processor
wastes time to decode the prefetch instruction and look up the cache.
The annotated listings are useful for more than determining the prefetching information. One kind of event which needs no special hardware support at all is page faults. The OS is responsible for resolving page faults and, on those occasions, it also counts them. It distinguishes two kinds of page faults:
Minor page faults: for anonymous (i.e., not backed by a file) pages which have not been used so far, for copy-on-write pages, and for pages whose content is already in memory somewhere.
Major page faults: resolving them requires disk access to retrieve the file-backed (or swapped-out) data.
Obviously, major page faults are significantly more expensive than
minor page faults. But the latter are not cheap either. In either
case an entry into the kernel is necessary, a new page must be found, the page
must be cleared or populated with the appropriate data, and the page
table tree must be modified accordingly. The last step requires
synchronization with other tasks reading or modifying the page table
tree, which might introduce further delays.
The easiest way to retrieve information about the page fault counts is
to use the time tool. Note: use the real tool, not the shell
builtin. The output can be seen in Figure 7.4. {The leading
backslash prevents the use of the built-in command.}
Under the hood, the time tool uses the rusage functionality. The wait4 system call fills in a struct rusage object when the parent waits for a child to terminate, which is exactly what the time tool needs. But a process can also request information about its own resource usage or that of its terminated children.
#include <sys/resource.h>
int getrusage(__rusage_who_t who, struct rusage *usage)
The who parameter specifies which process the information is requested for. Currently, only RUSAGE_SELF and RUSAGE_CHILDREN are defined. The ru_minflt and ru_majflt members of the result contain the process's (or its children's) cumulative minor and major page faults, respectively.
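A minimal sketch of reading these counters for the current process (error handling omitted):
#include <stdio.h>
#include <sys/resource.h>

int
main(void)
{
  struct rusage ru;
  getrusage(RUSAGE_SELF, &ru);
  /* cumulative minor and major page faults of this process */
  printf("minor: %ld  major: %ld\n", ru.ru_minflt, ru.ru_majflt);
  return 0;
}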
While the technical description of how a cache works is relatively
easy to understand, it is not so easy to see how an actual program
behaves with respect to cache. Programmers are not directly concerned
with the values of addresses, be they absolute or relative.
Addresses are determined, in part, by the linker and, in part, at runtime
by the dynamic linker and the kernel. The generated assembly code is
expected to work with all possible addresses and, in the
source language, there is not even a hint of absolute address values
left. So it can be quite difficult to get a sense for how a program is
making use of memory. {When programming close to the hardware this might be
different, but this is of no concern to normal programming and, in any
case, is only possible for special addresses such as memory-mapped
devices.}
CPU-level profiling tools such as oprofile (as described in
Section 7.1) can help to understand the cache use. The
resulting data corresponds to the actual hardware, and it can be collected
relatively quickly if fine-grained collection is not needed. As soon as
more fine-grained data is needed, oprofile is not usable anymore; the
thread would have to be interrupted too often. Furthermore, to
see the memory behavior of the program on different processors,
one actually has to have such machines and execute the program on
them. This is sometimes (often) not possible. One example is the data
from Figure 3.8. To collect such data with
oprofile one would have to have 24 different machines, many of which
do not exist.
The data in that graph was collected using a cache simulator. This program, cachegrind, uses the valgrind framework; it simulates the L1i, L1d, and L2 caches with a given size, cache line size, and associativity. To use the tool a program must be run using valgrind as a wrapper:
valgrind --tool=cachegrind command arg
By default cachegrind uses the cache layout of the processor it runs on; with additional parameters, cachegrind can be instructed to disregard the processor's cache layout and use that specified on the command line. For example:
valgrind --tool=cachegrind --L2=8388608,8,64 command arg
would simulate an 8MB L2 cache with 8-way set associativity and
64 byte cache line size. Note that the --L2 option appears on
the command line before the name of the program which is simulated.
This is not all cachegrind can do. Before the process exits
cachegrind writes out a file named cachegrind.out.XXXXX where
XXXXX is the PID of the process. This file contains the summary
information and detailed information about the cache use in each
function and source file. The data can be viewed using the
cg_annotate program.
The output this program produces contains the cache use summary which was printed when the process terminated, along with the detailed per-function and per-file information about the cache use.
The Ir, Dr, and Dw columns show the total cache use, not cache
misses, which are shown in the following two columns.
This data can be used to identify the code which produces the most
cache misses. First, one probably would concentrate on L2 cache misses,
then proceed to optimizing L1i/L1d cache misses.
cg_annotate can provide the data in more detail. If the name of a
source file is given, it also annotates (hence the program's name) each
line of the source file with the number of cache hits and misses
corresponding to that line. This information allows the programmer to
drill down to the
exact line where cache misses are a problem. The program interface is
a bit raw: as of this writing, the cachegrind data file and
the source file must be in the same directory.
It should, at this point, be noted again: cachegrind is a simulator
which does not use measurements from the processor. The
actual cache implementation in the processor might very well be quite
different. cachegrind simulates Least Recently Used (LRU) eviction,
which is likely to be too expensive for caches with large
associativity. Furthermore, the simulation does not take context
switches and system calls into account, both of which can destroy
large parts of L2 and must flush L1i and L1d. This causes the
total number of cache misses to be lower than experienced in reality.
Nevertheless, cachegrind is a nice tool to learn about a program's
memory use and its problems with memory.
Knowing how much memory a program allocates and possibly where the
allocation happens is the first step to optimizing its memory use.
There are, fortunately, some easy-to-use programs available
which do not even require that the program be recompiled or
specifically modified.
For the first tool, called massif, it is sufficient to not strip the debug information
which the compiler can automatically generate. It provides an overview
of the accumulated memory use over time. Figure 7.7 shows an
example of the generated output.
Figure 7.7: Massif Output
Like cachegrind
(Section 7.2), massif is a tool using the valgrind
infrastructure. It is started using
valgrind --tool=massif command arg
where command arg is the program which is observed and its parameter(s). The program will be simulated and all calls to memory allocation functions are recognized. The call site is recorded along with a timestamp value; the new allocation size is added to both the whole-program total and the total for the specific call site, so the data is organized according to the location which requested the allocation. Before the process is terminated massif creates two files: massif.XXXXX.txt and massif.XXXXX.ps, where XXXXX in both cases is the PID of the process. The .txt file is a summary of the memory use of all call sites; the .ps file contains the graph seen in Figure 7.7.
The second tool is called memusage; it is part of the GNU C library. It is a simplified version of massif (but existed a long time before massif). It only records the total memory use over time; with the -p IMGFILE (or --png=IMGFILE) option a graph of it is written to the file IMGFILE, which will be a PNG file. The tool also records the individual allocation sizes and, on program termination, it shows a histogram of the used allocation sizes. This information is written to standard error.
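A typical invocation might therefore be (file and program names chosen arbitrarily):
memusage --png=mem.png ./myprog arg1 arg2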
Sometimes it is not possible (or feasible) to call the program which is supposed to be observed directly; one example is the compiler stage of gcc, which is started by the gcc driver program. In such cases the name of the program which should be observed can be given to the memusage script with its -n NAME parameter.
Both programs, massif and memusage, have additional options. A
programmer finding herself in the position needing more functionality
should first consult the manual or help messages to make sure the
additional functionality is not already implemented.
Now that we know how the data about memory allocation can be captured,
it is necessary to discuss how this data can be interpreted in the
context of memory and cache use. The main aspects of efficient dynamic
memory allocation are linear allocation and compactness of the used
portion. This goes back to making prefetching efficient and reducing
cache misses.
A program which has to read in an arbitrary amount of data for later
processing could do this by creating a list where each of the list
elements contains a new data item. The overhead for this allocation
method might be minimal (one pointer for a single-linked list) but the
cache effects when using the data can reduce the performance
dramatically.
One problem is, for instance, that there is no guarantee that
sequentially allocated memory is laid out sequentially in memory.
There are many possible reasons for this: the allocator may keep its own bookkeeping data between the blocks, blocks of different sizes may be served from different memory pools, or the address space may simply be fragmented by the time the allocations are made.
If data must be allocated up front for later processing, the linked-list approach is clearly a bad idea. From the massif graph alone, exact candidates for the changes cannot be identified, but the graph can provide an entry point for the search. If many allocations are made from the same location, this could mean that allocation in bulk might help. In Figure 7.7, we can see such a possible candidate in the allocations at address 0x4c0e7d5: from about 800ms into the run onward this call site keeps accumulating memory, which makes it a candidate for bulk allocation.
This all means that memory used by the program is interspersed with
memory only used by the allocator for administrative purposes.
In a picture of such a layout, with each block representing one memory word, only some of the words are supposed to be read from or written to by the application itself. Only the runtime uses the header words, and the runtime only comes into play when the block is freed. If the program instead allocates the memory in bulk and carves it up itself, those headers disappear and the data is packed densely; there might still be holes, but this is also something under the control of the programmer.
In Section 6.2.2, two methods to improve L1i use through branch
prediction and block reordering were mentioned: static prediction
through __builtin_expect and profile guided optimization
(PGO). Correct branch prediction has performance impacts, but here we
are interested in the memory usage improvements.
The use of __builtin_expect (or better the likely and
unlikely macros) is simple. The definitions are placed in a
central header and the compiler takes care of the rest. There is a
little problem, though: it is easy enough for a programmer to use
likely when really unlikely was meant and vice versa.
Even if somebody uses a tool like oprofile to measure incorrect branch
predictions and L1i misses these problems are hard to detect.
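For reference, the conventional definitions (as used, for example, in the kernel sources) are simply:
#define unlikely(expr) __builtin_expect(!!(expr), 0)
#define likely(expr)   __builtin_expect(!!(expr), 1)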
There is one easy method, though. The code in Section 9.2 shows an
alternative definition of the likely and unlikely macros
which measure actively, at runtime, whether the static predictions are correct or not. For profile guided optimization a few prerequisites must be fulfilled. First, all source files must be compiled with the additional -fprofile-generate option. This option must be passed to all compiler runs and to the command which links the program.
Mixing object files compiled with and without this option is possible, but
PGO will not do any good for those that do not have it enabled.
The compiler generates a binary which behaves normally except that it is
significantly larger and slower since it records (and emits) all kinds
of information about branches taken or not. The compiler also emits a
file with the extension .gcno for each input file. This file contains
information related to the branches in the code. It must be preserved
for later.
Once the program binary is available, it should be used to run a
representative set of workloads. Whatever workload is used, the final
binary will be optimized to do this task well. Consecutive runs of
the program are possible and, in general necessary; all the runs will
contribute to the same output file. Before the program terminates, the
data collected during the program run is written out into files with
the extension .gcda. These files are created in the directory which
contains the source file. The program can be executed from any
directory, and the binary can be copied, but the directory with the
sources must be available and writable. Again, one output file is
created for each input source file. If the program is run multiple
times, it is important that the .gcda files of the previous run are
found in the source directories since otherwise the data of the
runs cannot be accumulated in one file.
When a representative set of tests has been run, it is time to
recompile the application. The compiler has to be able to find the
.gcda files in the same directory which holds the
source files. The files cannot be moved since the compiler would not find
them and the embedded checksum for the files would not match anymore.
For the recompilation, replace the
-fprofile-generate parameter with -fprofile-use. It is
essential that the sources do not change in any way that would change the
generated code. That means: it is OK to change white space and edit comments, but adding more branches or basic blocks invalidates the collected data and the compilation will fail.
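In terms of concrete commands, the whole cycle looks roughly like this (file and workload names are placeholders):
gcc -O2 -fprofile-generate prog.c -o prog     # instrumented build, emits prog.gcno
./prog representative-workload                # each run writes/updates prog.gcda next to the source
gcc -O2 -fprofile-use prog.c -o prog          # rebuild using the collected profile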
This is all the programmer has to do; it is a fairly simple process.
The most important thing to get right is the selection of
representative tests to perform the measurements. If the test workloads do not match the way the program is actually used, the optimizations might do more harm than good; in such cases it is better to rely exclusively on static branch prediction using __builtin_expect.
A few words on the .gcno and .gcda files. These are binary files
which are not immediately usable for inspection. It is possible,
though, to use the gcov tool, which is also part of the gcc package, to
examine them.
This tool is mainly used for coverage analysis (hence the name) but
the file format used is the same as for PGO. The gcov tool generates
output files with the extension .gcov for each source file with
executed code (this might include system headers). The files are
source listings which are annotated, according to the parameters given
to gcov, with branch counter, probabilities, etc.
On demand-paged operating systems like Linux, an
mmap call only modifies the page tables. It makes sure that, for
file-backed pages, the underlying data can be found and, for anonymous
memory, that, on access, pages initialized with zeros are provided. No
actual memory is allocated at the time of the mmap call. {If you want to say Wrong!
wait a second, it will be qualified later that there are
exceptions.}
The allocation part happens when a memory page is first accessed, either by
reading or writing data, or by executing code; the ensuing page fault is when the kernel allocates and populates the page. For profiling, what is interesting is not only the specific number of page faults, but the reason why they happen. The pagein tool emits
information about the order and timing of page faults. The output, written to
a file named pagein.<PID>, looks as in Figure 7.8.
The
second column specifies the address of the page which is paged-in.
Whether it is a code or data page is indicated in the third column, which contains
`C' or `D' respectively. The fourth column specifies the number of
cycles which passed since the first page fault. The rest of the line
is valgrind's attempt to find a name for the address which caused the page fault.
The address value itself is correct but the name is not always
accurate if no debug information is available.
In the example in Figure 7.8, execution starts at address
0x3000000B50, which forces the page at address 0x3000000000
to be paged in. Shortly after that, the page after this is also
brought in; the function called on that page is _dl_start. The
initial code accesses a variable on page 0x7FF000000. This
happens just 3,320 cycles after the first page fault and is most
likely the second instruction of the program (just three bytes after
the first instruction). If one looks at the program, one will notice
that there is something peculiar about this memory access. The
instruction in question is a call instruction, which does not
explicitly load or store data. It does store the return address on
the stack, though, and this is exactly what happens here. This is not
the official stack of the process, though, it is valgrind's internal
stack of the application. This means when interpreting the results of
pagein it is important to keep in mind that valgrind introduces some
artifacts.
The output of pagein can be used to determine which code sequences
should ideally be adjacent in the program code. A quick look at the
/lib64/ld-2.5.so code shows that the functions needed first are spread over several pages; pulling them together is exactly what a deliberate process of sorting the functions and variables can achieve.
At a very coarse level, the call sequences can be seen by looking at the
object files making up the executable or DSO. Starting with one or
more entry points (i.e., function names), the chain of dependencies
can be computed. Without much effort this works well at the object
file level. In each round, determine which object files contain
needed functions and variables. The seed set has to be specified
explicitly. Then determine all undefined references in those object
files and add them to the set of needed symbols. Repeat until the
set is stable.
The second step in the process is to determine an order. The various
object files have to be grouped together to fill as few pages as
possible. As an added bonus, no function should cross over a page
boundary. A complication in all this is that, to
best arrange the object files, it has to be known what the linker will
do later. The important fact here is that the linker will put the
object files into the executable or DSO in the same order in which
they appear in the input files (e.g., archives), and on the command
line. This gives the programmer sufficient control.
For those who are willing to invest a bit more time, there have been
successful attempts at reordering made using automatic call tracing
via the __cyg_profile_func_enter and
__cyg_profile_func_exit hooks gcc inserts when
called with the -finstrument-functions option
[oooreorder]. See the gcc manual for more information on these
__cyg_* interfaces. By creating a trace of the program execution, the
programmer can more accurately determine the call chains. The results in
[oooreorder] are a 5% decrease in start-up costs, just through
reordering of the functions. The main benefit is the reduced number
of page faults, but the TLB cache also plays a role—an increasingly
important role given that, in virtualized environments, TLB misses
become significantly more expensive.
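A minimal pair of such hooks might look like this sketch (a real tracer would log to a buffer or file, and must itself not be instrumented):
#include <stdio.h>

void __cyg_profile_func_enter(void *fn, void *caller)
  __attribute__((no_instrument_function));
void __cyg_profile_func_exit(void *fn, void *caller)
  __attribute__((no_instrument_function));

void
__cyg_profile_func_enter(void *fn, void *caller)
{
  fprintf(stderr, "enter %p (called from %p)\n", fn, caller);
}

void
__cyg_profile_func_exit(void *fn, void *caller)
{
  fprintf(stderr, "exit  %p (called from %p)\n", fn, caller);
}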
By combining the analysis of the pagein tool with the call sequence
information, it should be possible to optimize certain phases of the
program (such as start-up) to minimize the number of page faults.
The Linux kernel provides two additional mechanisms to avoid page
faults. The first one is a flag for mmap which instructs the
kernel to not only modify the page table but, in fact, to pre-fault all the
pages in the mapped area. This is achieved by simply adding the
MAP_POPULATE flag to the fourth parameter of the mmap
call. This will cause the mmap call to be significantly more
expensive, but, if all pages which are mapped by the call are being used
right away, the benefits can be large. Instead of having a number of
page faults, which each are pretty expensive due to the overhead
incurred by synchronization requirements etc., the program would have
one, more expensive, mmap call. The use of this flag has a drawback, though: memory might be allocated well before it is used and this might lead to
shortages of memory in the meantime. On the other hand, in the worst case,
the page is simply reused for a new purpose (since it has not been modified
yet), which is not that expensive but still, together with the allocation,
adds some cost.
The granularity of MAP_POPULATE is simply too coarse. And there
is a second possible problem: this is an optimization; it is not
critical that all pages are, indeed, mapped in.
If the system is too busy to perform the operation the
pre-faulting can be dropped. Once the page is really used the program
takes the page fault, but this is not worse than artificially creating
resource scarcity. An alternative is to use the
POSIX_MADV_WILLNEED advice with the posix_madvise function. This is only a hint to the kernel, and its granularity is finer: individual pages or page ranges in any mapped address space area can be pre-faulted.
For memory-mapped files which contain a lot of data which is not used
at runtime, this can have huge advantages over using
MAP_POPULATE.
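Put together, the two approaches look roughly like this sketch (sizes and offsets are placeholders):
#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

void *
prefault_anon(size_t len)
{
  /* pre-fault the whole mapping at mmap time */
  return mmap(NULL, len, PROT_READ | PROT_WRITE,
              MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
}

void
hint_needed(void *base, size_t off, size_t sublen)
{
  /* finer-grained: ask the kernel to pre-fault only a sub-range */
  posix_madvise((char *) base + off, sublen, POSIX_MADV_WILLNEED);
}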
Beside these active approaches to minimizing the number of page faults,
it is also possible to take a more passive approach which is popular
with the hardware designers: use larger page sizes, so that a single page fault (and a single TLB entry) covers far more memory. The page size used for a normal mapping is specified when compiling the kernel and cannot be changed dynamically (at least not at
the moment). The ABIs of the multiple-page-size architectures are
designed to allow running an application with either page size. The
runtime will make the necessary adjustments. Larger pages bring advantages, but also a problem: because the physical memory backing a huge page must be continuous, it might, after a while, not be possible to allocate such pages due to memory fragmentation, and nothing really can prevent this. People are working on memory defragmentation and fragmentation avoidance, but
it is very complicated. For large pages of, say, 2MB the necessary 512
consecutive pages are always hard to come by, except at one time:
when the system boots up. This is why the current solution for
large pages requires the use of a special filesystem,
hugetlbfs. This pseudo filesystem is allocated on request by the system administrator by writing the number of huge pages which should be reserved to /proc/sys/vm/nr_hugepages. This operation
might fail if not enough continuous memory can be located. The
situation gets especially interesting if virtualization is used. A
system virtualized using the VMM model does not directly access physical memory, which makes reserving continuous physical memory for huge pages even harder. Once huge pages have been reserved, a program can make use of them in multiple ways: through the System V shared memory interfaces with the SHM_HUGETLB flag, or through files in the hugetlbfs filesystem.
In the first case, the only extra requirements are the SHM_HUGETLB flag and the choice of the right value for
LENGTH, which must be a multiple of the huge page size for the
system. Different architectures have different values. The use of
the System V shared memory interface has the nasty problem of
depending on the key argument to differentiate (or share) mappings. The
ftok interface can easily produce conflicts which is why, if
possible, it is better to use other mechanisms.
If the requirement to mount the hugetlbfs filesystem is not a problem,
it is better to use it instead of System V shared memory. The only
real problems with using the special filesystem are that the kernel
must support it, and that there is no standardized mount point yet.
Once the filesystem is mounted, for instance at /dev/hugetlb, a
program can make easy use of it:
int fd = open("/dev/hugetlb/file1", O_RDWR|O_CREAT, 0700);
void *a = mmap(NULL, LENGTH, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
By using the same file name in the open call, multiple processes
can share the same huge pages and collaborate. It is also possible to
make the pages executable, in which case the PROT_EXEC flag must also be set in the mmap call. The interface is described in the hugetlbpage.txt file which comes as part of the kernel source tree.
The file also describes the special handling needed for IA-64.
Figure 7.9: Follow with Huge Pages, NPAD=0
For this figure the random Follow test was repeated with the memory
allocated in huge pages. As can be seen the performance
advantage can be huge. For 2^20 bytes the test using huge pages
is 57% faster. This is due to the fact that this size still fits
completely into one single 2MB page and, therefore, no DTLB misses occur.
Beyond 2^27 bytes, the numbers rise significantly again. The reason for
the plateau before that point is that 64 TLB entries for 2MB pages cover 2^27
bytes. As these numbers show, huge pages can bring a
significant speed-up. Databases, since they use large amounts of
data, are among the programs which use huge pages today.
There is currently no way to use large pages to map file-backed data.
There is interest in implementing this capability, but the proposals made so far
all involve explicitly using large pages, and they rely on the
hugetlbfs filesystem. This is not acceptable: large page use
in this case must be transparent. The kernel can easily determine
which mappings are large and automatically use large pages. A big
problem is that the kernel does not always.
Memory part 7: Memory performance tools
Posted Nov 7, 2007 15:10 UTC (Wed) by rossburton (subscriber, #7254)
[Link]
There is a new frontend to OProfile, which isn't as
low-level as the standard oprofile frontend and as a bonus even gives you the ability to
profile remote hosts.
Posted Nov 7, 2007 17:01 UTC (Wed) by james (subscriber, #1325)
[Link]
...since an L1d miss implies, for Intel processors, an L2 miss...
Posted Nov 8, 2007 17:13 UTC (Thu) by mebrown (subscriber, #7960)
[Link]
Actually, since Intel processors use an inclusive cache, the original statement is correct. If
it isn't in L1, it isn't in L2, either, since L2 includes all the contents of L1.
L1/L2
Posted Nov 8, 2007 17:22 UTC (Thu) by corbet (editor, #1)
[Link]
Actually, since Intel processors use an inclusive cache, the original statement is correct. If
it isn't in L1, it isn't in L2, either, since L2 includes all the contents of L1.
L2 contains everything in L1, but, since it's larger, it contains data which is not in L1 as well. If L2 cannot satisfy an occasional L1 cache miss, why does it exist? I have a question into Ulrich on how he really wanted this paragraph to read, stay tuned.
Posted Nov 8, 2007 17:31 UTC (Thu) by mebrown (subscriber, #7960)
[Link]
Right. Doh. In my defense, I had not actually drank any of my coffee before posting that
comment.
Posted Nov 7, 2007 20:32 UTC (Wed) by aleXXX (subscriber, #2742)
[Link]
It would be nice if subscribers could get these articles as nice pdf, one
for each part or also all parts together with a TOC, which would actually
make almost a book.
The "Printable page" link is not that good, it still has the comments at
the bottom and the links at the top.
Alex
PDF version
Posted Nov 7, 2007 20:37 UTC (Wed) by corbet (editor, #1)
[Link]
Posted Nov 8, 2007 6:59 UTC (Thu) by frazier (subscriber, #3060)
[Link]
Like a lot of the kernel stuff, I appreciate that LWN is actively covering these things, even though it doesn't directly benefit me.
Posted Nov 8, 2007 6:37 UTC (Thu) by njs (subscriber, #40338)
[Link]
It's curious that Ulrich doesn't mention callgrind and its extraordinary front-end,
kcachegrind. Stock oprofile and cachegrind are okay for programs written in a style where
each function is reasonably large and performs a relatively discrete task -- this seems to
include most traditional C programs. But IME they become totally useless with, say, anything
written in C++, or really anything with many abstraction layers in it. They tell you that,
say, you are suffering a lot of cache misses in your std::vector<>'s [] (element access)
operator -- which is fine, but tells you nothing about *which* of the thousands of call sites
in your source is actually causing the problem and should have its loops interchanged or
whatever. Callgrind solves this problem, and kcachegrind makes call-graph data understandable
and usable.
Oprofile has a call-graph profiling mode too, but I at least find it very obscure (I only
managed to understand what it was measuring by asking the author on IRC). Also, textual
output just doesn't cut it for this sort of thing, you really need some tools to visualize the
data. So some might find useful a little script I wrote, that converts oprofile call-graph
profiles into something kcachegrind can read (see the oprofile manual to learn how to get
call-graph profiles):
Posted Nov 8, 2007 20:41 UTC (Thu) by oak (guest, #2786)
[Link]
Posted Nov 11, 2007 4:40 UTC (Sun) by intgr (subscriber, #39733)
[Link]
Works fine here
Posted Nov 13, 2007 22:23 UTC (Tue) by khim (subscriber, #9252)
[Link]
Fedora 8, no problem - double-dashes are double-dashes. What browser are you using?
Posted Nov 14, 2007 0:06 UTC (Wed) by corbet (editor, #1)
[Link]
Posted Nov 15, 2007 12:20 UTC (Thu) by Ponto (guest, #49056)
[Link]
I am wondering what performance counters one has to monitor with oprofile
to get cache synchronization events on Opteron and Xeon 64-bit machines. I
could not find any while reading through the available events in oprofile.
I am especially interested in detecting cache-ping-pong situations.
Posted Nov 17, 2007 21:39 UTC (Sat) by anton (subscriber, #25547)
[Link]
perfctr and PAPI
Posted Nov 17, 2007 19:03 UTC (Sat) by anton (subscriber, #25547)
[Link]
As far as I understand oprofile, the difference between perfctr and
oprofile is that oprofile uses a sampling approach, whereas with
perfctr you can get an almost-exact result; perfctr virtualizes the
counters so you don't count events from other processes (although
there is a little bit of fuzz on context switching, because the
counters are not completely synchronous to execution).
Posted Nov 17, 2007 19:13 UTC (Sat) by anton (subscriber, #25547)
[Link]
Shouldn't the number of minor page faults get smaller if I use
MAP_POPULATE? I just tried it on one program, and got one additional
minor page fault instead of the expected 50 less. If the number of
minor page faults is not reduced, then MAP_POPULATE can only reduce
major page faults (I have not tested that).
cachegrind
Posted Nov 23, 2007 11:38 UTC (Fri) by tyhik (subscriber, #14747)
[Link]
"It should, at this point, be noted again: cachegrind is a simulator which does not use
measurements from the processor ... Furthermore, the simulation does not take context switches
and system calls into account, both of which can destroy large parts of L2 and must flush L1i
and L1d. This causes the total number of cache misses to be lower than experienced in
reality."
As cachegrind has no idea of hardware prefetching then, OTOH, it may report the total number
of cache misses to be higher than in reality.
Undefined reference errors when linking
One of the most common errors in compilation happens during the linking stage. The error looks similar to this:
$ gcc undefined_reference.c
/tmp/ccoXhwF0.o: In function `main':
undefined_reference.c:(.text+0x15): undefined reference to `foo'
collect2: error: ld returned 1 exit status
$
So let’s look at the code that generated this error:
int foo(void);

int main(int argc, char **argv)
{
    int foo_val;
    foo_val = foo();
    return foo_val;
}
We see here a declaration of foo (
int foo();) but no definition of it (actual function). So we provided the compiler with the function header, but there was no such function defined anywhere, so the compilation stage passes but the linker exits with an
Undefined reference error.
To fix this error in our small program we would only have to add a definition for foo:
/* Declaration of foo */
int foo(void);

/* Definition of foo */
int foo(void)
{
    return 5;
}

int main(int argc, char **argv)
{
    int foo_val;
    foo_val = foo();
    return foo_val;
}
Now this code will compile. An alternative situation arises where the source for
foo() is in a separate source file
foo.c (and there’s a header
foo.h to declare
foo() that is included in both
foo.c and
undefined_reference.c). Then the fix is to link both the object file from
foo.c and
undefined_reference.c, or to compile both the source files:
$ gcc -c undefined_reference.c
$ gcc -c foo.c
$ gcc -o working_program undefined_reference.o foo.o
$
Or:
$ gcc -o working_program undefined_reference.c foo.c
$
A more complex case is where libraries are involved, like in the code:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(int argc, char **argv)
{
    double first;
    double second;
    double power;

    if (argc != 3) {
        fprintf(stderr, "Usage: %s <denom> <nom>\n", argv[0]);
        return EXIT_FAILURE;
    }

    /* Translate user input to numbers, extra error checking
     * should be done here. */
    first = strtod(argv[1], NULL);
    second = strtod(argv[2], NULL);

    /* Use function pow() from libm - this will cause a linkage
     * error unless this code is compiled against libm! */
    power = pow(first, second);

    printf("%f to the power of %f = %f\n", first, second, power);

    return EXIT_SUCCESS;
}
The code is syntactically correct, declaration for
pow() exists from
#include <math.h>, so we try to compile and link but get an error like this:
$ gcc no_library_in_link.c -o no_library_in_link
/tmp/ccduQQqA.o: In function `main':
no_library_in_link.c:(.text+0x8b): undefined reference to `pow'
collect2: error: ld returned 1 exit status
$
This happens because the definition for
pow() wasn’t found during the linking stage. To fix this we have to specify we want to link against the math library called
libm by specifying the
-lm flag. (Note that there are platforms such as macOS where
-lm is not needed, but when you get the undefined reference, the library is needed.)
So we run the compilation stage again, this time specifying the library (after the source or object files):
$ gcc no_library_in_link.c -lm -o library_in_link_cmd
$ ./library_in_link_cmd 2 4
2.000000 to the power of 4.000000 = 16.000000
$
And it works!
hello,
I want to keep a separate database of sequences for each individual user.
When a user logs in and adds sequences to the database, the web page should show
the list of sequences that user has in the database. But when I try to show the
list of sequences, it shows the whole list of sequences in the database. For example,
the total number of sequences that user A has added is 3 items and the total that
user B has added is 5 items.
When user A logs in and wants to see their own data, the web page shows 8
items; in fact it should show 3 items. How do I fix this problem?
class ComponentsController < ApplicationController
  before_filter :protect, :only => [:show, :edit]

  def index
    @title = "list of sequences"
    @items = Seq.find(:all)
  end

  def databases
    @title = "Database"
    if request.post? and params[:seq]
      params[:seq][:sequence].gsub!(/\s/, "")
      params[:seq][:sequence].upcase!
      @seq = Seq.new(params[:seq])
      if @seq.save
        flash[:notice] = "sequence submitted !"
        redirect_to :action => "show", :id => @seq.id
      end
    end
  end
end
Recommenders have been around since at least 1992. Today we see different flavours of recommenders, deployed across different verticals:
What exactly do they do?
In a typical recommender system people provide recommendations as inputs, which the system then aggregates and directs to appropriate recipients. -- Resnick and Varian, 1997
Collaborative filtering simply means that people collaborate to help one another perform filtering by recording their reactions to documents they read. -- Goldberg et al, 1992
In its most common formulation, the recommendation problem is reduced to the problem of estimating ratings for the items that have not been seen by a user. Intuitively, this estimation is usually based on the ratings given by this user to other items and on some other information [...] Once we can estimate ratings for the yet unrated items, we can recommend to the user the item(s) with the highest estimated rating(s). -- Adomavicius and Tuzhilin, 2005
Driven by computer algorithms, recommenders help consumers by selecting products they will probably like and might buy based on their browsing, searches, purchases, and preferences. -- Konstan and Riedl, 2012
The recommendation problem in its most basic form is quite simple to define:
|-------------------+-----+-----+-----+-----+-----|
| user_id, movie_id | m_1 | m_2 | m_3 | m_4 | m_5 |
|-------------------+-----+-----+-----+-----+-----|
| u_1               |  ?  |  ?  |  4  |  ?  |  1  |
|-------------------+-----+-----+-----+-----+-----|
| u_2               |  3  |  ?  |  ?  |  2  |  2  |
|-------------------+-----+-----+-----+-----+-----|
| u_3               |  3  |  ?  |  ?  |  ?  |  ?  |
|-------------------+-----+-----+-----+-----+-----|
| u_4               |  ?  |  1  |  2  |  1  |  1  |
|-------------------+-----+-----+-----+-----+-----|
| u_5               |  ?  |  ?  |  ?  |  ?  |  ?  |
|-------------------+-----+-----+-----+-----+-----|
| u_6               |  2  |  ?  |  2  |  ?  |  ?  |
|-------------------+-----+-----+-----+-----+-----|
| u_7               |  ?  |  ?  |  ?  |  ?  |  ?  |
|-------------------+-----+-----+-----+-----+-----|
| u_8               |  3  |  1  |  5  |  ?  |  ?  |
|-------------------+-----+-----+-----+-----+-----|
| u_9               |  ?  |  ?  |  ?  |  ?  |  2  |
|-------------------+-----+-----+-----+-----+-----|
Given a partially filled matrix of ratings ($|U|x|I|$), estimate the missing values.
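As a toy sketch of this formulation (the numbers mirror the first three rows of the table above; filling with the item mean is just the simplest imaginable estimator, not what the rest of this tutorial does):
import numpy as np
import pandas as pd

# Tiny user x item matrix; NaN marks a missing rating.
ratings = pd.DataFrame(
    [[np.nan, np.nan, 4, np.nan, 1],
     [3, np.nan, np.nan, 2, 2],
     [3, np.nan, np.nan, np.nan, np.nan]],
    index=['u_1', 'u_2', 'u_3'],
    columns=['m_1', 'm_2', 'm_3', 'm_4', 'm_5'])

# Estimate each missing cell with that item's mean rating.
# Items nobody rated (m_2 here) simply stay NaN.
estimates = ratings.fillna(ratings.mean())
print(estimates)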
Content-based techniques are limited by the amount of metadata that is available to describe an item. There are domains in which feature extraction methods are expensive or time consuming, e.g., processing multimedia data such as graphics, audio/video streams. In the context of grocery items for example, it's often the case that item information is only partial or completely missing. Examples include:
A user has to have rated a sufficient number of items before a recommender system can have a good idea of what their preferences are. In a content-based system, the aggregation function needs ratings to aggregate.
Collaborative filters rely on an item being rated by many users to compute aggregates of those ratings. Think of this as the exact counterpart of the new user problem for content-based systems.
When looking at the more general versions of content-based and collaborative systems, the success of the recommender system depends on the availability of a critical mass of user/item interactions. We get a first glance at the data sparsity problem by quantifying the ratio of existing ratings vs $|U|x|I|$. A highly sparse matrix of interactions makes it difficult to compute similarities between users and items. As an example, for a user whose tastes are unusual compared to the rest of the population, there will not be any other users who are particularly similar, leading to poor recommendations.
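A small helper sketch for this (the function name is made up; it works on any user x item DataFrame with NaN for missing entries, such as the ratings matrix built later with pivot_table):
def sparsity(ratings):
    """Fraction of user/item pairs without an observed rating."""
    total = ratings.shape[0] * ratings.shape[1]
    observed = ratings.count().sum()   # non-NaN cells
    return 1.0 - observed / float(total)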
from IPython.core.display import Image
Image(filename='./imgs/recsys_arch.png')
import pandas as pd

unames = ['user_id', 'username']
users = pd.read_table('./data/users_set.dat', sep='|', header=None, names=unames)

rnames = ['user_id', 'course_id', 'rating']
ratings = pd.read_table('./data/ratings.dat', sep='|', header=None, names=rnames)

mnames = ['course_id', 'title', 'avg_rating', 'workload', 'university', 'difficulty', 'provider']
courses = pd.read_table('./data/cursos.dat', sep='|', header=None, names=mnames)

# show how one of them looks
ratings.head(10)
# show how one of them looks
users[:5]
courses[:5]
Using
pd.merge we get it all into one big DataFrame.
coursetalk = pd.merge(pd.merge(ratings, courses), users)
coursetalk
<class 'pandas.core.frame.DataFrame'> Int64Index: 2773 entries, 0 to 2772 Data columns (total 10 columns): user_id 2773 non-null values course_id 2773 non-null values rating 2773 non-null values title 2773 non-null values avg_rating 2773 non-null values workload 2773 non-null values university 2616 non-null values difficulty 2773 non-null values provider 2773 non-null values username 2773 non-null values dtypes: float64(1), int64(2), object(7)
coursetalk.ix[0]
user_id 1 course_id 1 rating 5 title An Introduction to Interactive Programming in ... avg_rating 4.9 workload 7-10 hours/week university Rice University difficulty Medium provider coursera username patrickdijusto1 Name: 0, dtype: object
To get mean course ratings grouped by the provider, we can use the pivot_table method:
mean_ratings = coursetalk.pivot_table('rating', rows='provider', aggfunc='mean')
mean_ratings.order(ascending=False)
provider None 4.562500 coursera 4.527835 edx 4.491620 codecademy 4.450000 udacity 4.241071 udemy 4.200000 open2study 4.083333 khanacademy 4.000000 novoed 3.281250 mruniversity 3.250000 Name: rating, dtype: float64
Now let's filter down to courses that received at least 20 ratings (a completely arbitrary number); to do this, I group the data by title and use size() to get a Series of group sizes for each title:
ratings_by_title = coursetalk.groupby('title').size()
ratings_by_title[:10]
title 14.73x: The Challenges of Global Poverty 2 2.01x: Elements of Structures 2 3.091x: Introduction to Solid State Chemistry 3 6.002x: Circuits and Electronics 10 6.00x: Introduction to Computer Science and Programming 21 7.00x: Introduction to Biology - The Secret of Life 3 8.02x: Electricity and Magnetism 3 8.MReVx: Mechanics ReView 1 A Beginner's Guide to Irrational Behavior 147 A Crash Course on Creativity 5 dtype: int64
active_titles = ratings_by_title.index[ratings_by_title >= 20]
active_titles[:10]
Index([u'6.00x: Introduction to Computer Science and Programming', u'A Beginner's Guide to Irrational Behavior', u'An Introduction to Interactive Programming in Python', u'An Introduction to Operations Management', u'CS-191x: Quantum Mechanics and Quantum Computation', u'CS188.1x Artificial Intelligence', u'Calculus: Single Variable', u'Computing for Data Analysis', u'Critical Thinking in Global Challenges', u'Cryptography I'], dtype=object)
The index of titles receiving at least 20 ratings can then be used to select rows from mean_ratings above:
mean_ratings = coursetalk.pivot_table('rating', rows='title', aggfunc='mean')
mean_ratings
title 14.73x: The Challenges of Global Poverty 4.250000 2.01x: Elements of Structures 4.750000 3.091x: Introduction to Solid State Chemistry 4.166667 6.002x: Circuits and Electronics 4.800000 6.00x: Introduction to Computer Science and Programming 4.166667 7.00x: Introduction to Biology - The Secret of Life 4.666667 8.02x: Electricity and Magnetism 4.333333 8.MReVx: Mechanics ReView 5.000000 A Beginner's Guide to Irrational Behavior 4.874150 A Crash Course on Creativity 3.500000 A History of the World since 1300 4.318182 A Look at Nuclear Science and Technology 3.000000 A New History for a New China, 1700-2000: New Data and New Methods, Part 1 0.500000 AIDS 5.000000 Aboriginal Worldviews and Education 4.333333 ... The Modern World: Global History since 1760 4.775862 The Modern and the Postmodern 4.777778 The Science of Gastronomy 4.000000 The Social Context of Mental Health and Illness 4.333333 Think Again: How to Reason and Argue 3.815789 Useful Genetics Part 1 4.500000 VLSI CAD: Logic to Layout 4.500000 Vaccine Trials: Methods and Best Practices 5.000000 Vaccines 3.750000 Web Development 4.625000 Web Intelligence and Big Data 3.802326 Women and the Civil Rights Movement 5.000000 Writing for the Web (WriteWeb) 5.000000 Writing in the Sciences 4.000000 jQuery 4.250000 Name: rating, Length: 211, dtype: float64
By computing the mean rating for each course, we will order with the highest rating listed first.
mean_ratings.ix[active_titles].order(ascending=False)

CS188.1x Artificial Intelligence 4.833333
Machine Learning 4.830000
Functional Programming Principles in Scala 4.822581
Gamification 4.796296
An Introduction to Operations Management 4.785714
The Modern World: Global History since 1760 4.775862
Programming Languages 4.770833
CS-191x: Quantum Mechanics and Quantum Computation 4.727273
Cryptography I 4.700000
Discrete Optimization 4.695652
Introduction to Computer Science 4.687500
Learn to Program: Crafting Quality Code 4.585714
Model Thinking 4.578125
Internet History, Technology, and Security 4.541667
Fantasy and Science Fiction: The Human Mind, Our Modern World 4.522727
Learn to Program: The Fundamentals 4.303571
6.00x: Introduction to Computer Science and Programming 4.166667
Critical Thinking in Global Challenges 3.961538
Web Intelligence and Big Data 3.802326
Computing for Data Analysis 3.187500
Introduction to Finance 3.086957
Introduction to Data Science 3.060000
Name: rating, dtype: float64
To see the top courses among Coursera students, we can sort by the 'Coursera' column in descending order:
mean_ratings = coursetalk.pivot_table('rating', rows='title', cols='provider', aggfunc='mean')
mean_ratings[:10]
mean_ratings['coursera'][active_titles].order(ascending=False)[:10]

Programming Languages 4.850000
Machine Learning 4.830000
Functional Programming Principles in Scala 4.822581
Gamification 4.796296
Name: coursera, dtype: float64
Now, let's go further! How about ranking the courses by the percentage of ratings that are 4 or higher (% of ratings 4+)?
Let's start with a simple pivoting example that does not involve any aggregation. We can extract a ratings matrix as follows:
# transform the ratings frame into a ratings matrix
ratings_mtx_df = coursetalk.pivot_table(values='rating', rows='user_id', cols='title')
ratings_mtx_df.ix[ratings_mtx_df.index[:15], ratings_mtx_df.columns[:15]]
Let's extract only the rating that are 4 or higher.
ratings_gte_4 = ratings_mtx_df[ratings_mtx_df >= 4.0]
# with an integer axis index only label-based indexing is possible
ratings_gte_4.ix[ratings_gte_4.index[:15], ratings_gte_4.columns[:15]]
Now picking the number of total ratings for each course and the count of ratings 4+ , we can merge them into one DataFrame.
ratings_gte_4_pd = pd.DataFrame({'total': ratings_mtx_df.count(), 'gte_4': ratings_gte_4.count()})
ratings_gte_4_pd.head(10)
ratings_gte_4_pd['gte_4_ratio'] = (ratings_gte_4_pd['gte_4'] * 1.0) / ratings_gte_4_pd.total
ratings_gte_4_pd.head(10)
ranking = [(title, total, gte_4, score) for title, total, gte_4, score in ratings_gte_4_pd.itertuples()]
for title, total, gte_4, score in sorted(ranking, key=lambda x: (x[3], x[2], x[1]), reverse=True)[:10]:
    print title, total, gte_4, score
Functional Programming Principles in Scala 31 31 1.0 Introduction to Computer Science 24 24 1.0 Programming Languages 24 24 1.0 Web Development 16 16 1.0 6.002x: Circuits and Electronics 10 10 1.0 Compilers 8 8 1.0 Archaeology's Dirty Little Secrets 7 7 1.0 How to Build a Startup 7 7 1.0 Introduction to Sociology 7 7 1.0 Stat2.1X: Introduction to Statistics: Descriptive Statistics 7 7 1.0
Let's now take it easy. Let's count the number of ratings for each course, and order by the courses with the most ratings.
ratings_by_title = coursetalk.groupby('title').size()
ratings_by_title.order(ascending=False)[:10]
title An Introduction to Interactive Programming in Python 575 Design: Creation of Artifacts in Society 191 A Beginner's Guide to Irrational Behavior 147 Modern & Contemporary American Poetry 132 An Introduction to Operations Management 98 Greek and Roman Mythology 81 Critical Thinking in Global Challenges 65 Gamification 54 Machine Learning 50 Web Intelligence and Big Data 43 dtype: int64
Considering this information we can sort by the most rated ones with highest percentage of 4+ ratings.
for title, total, gte_4, score in sorted(ranking, key=lambda x: (x[2], x[3], x[1]) , reverse=True)[:10]: print title, total, gte_4, score
An Introduction to Interactive Programming in Python 572 575 0.994782608696 Design: Creation of Artifacts in Society 190 191 0.994764397906 A Beginner's Guide to Irrational Behavior 146 147 0.993197278912 Modern & Contemporary American Poetry 130 132 0.984848484848 An Introduction to Operations Management 96 98 0.979591836735 Greek and Roman Mythology 80 81 0.987654320988 Critical Thinking in Global Challenges 47 65 0.723076923077 Gamification 52 54 0.962962962963 Machine Learning 48 49 0.979591836735 Web Intelligence and Big Data 26 43 0.604651162791
Finally, using the formula above that we learned, let's find out which courses most often occur with the popular MOOC An Introduction to Interactive Programming in Python, using the method "x + y / x". For each course, calculate the percentage of raters of the Python course who also rated that course. Order with the highest percentage first, and voilà: we have the top 5 MOOCs.
course_users = coursetalk.pivot_table('rating', rows='title', cols='user_id')
course_users.ix[course_users.index[:15], course_users.columns[:15]]
First, let's get only the users that rated the course An Introduction to Interactive Programming in Python
ratings_by_course = coursetalk[coursetalk.title == 'An Introduction to Interactive Programming in Python']
ratings_by_course.set_index('user_id', inplace=True)
Now, for all other courses let's filter out only the ratings from users that rated the Python course.
their_ids = ratings_by_course.index
their_ratings = course_users[their_ids]
course_users[their_ids].ix[course_users[their_ids].index[:15], course_users[their_ids].columns[:15]]
By applying the division: number of ratings who rated Python Course and the given course / total of ratings who rated the Python Course we have our percentage.
course_count = their_ratings.ix['An Introduction to Interactive Programming in Python'].count()
sims = their_ratings.apply(lambda profile: profile.count() / float(course_count), axis=1)
Ordering by the score, highest first, and skipping the first entry, which is the course itself.
sims.order(ascending=False)[1:][:10]
title Machine Learning 0.006957 Cryptography I 0.006957 Web Development 0.005217 Python 0.005217 Learn to Program: Crafting Quality Code 0.005217 Introduction to Computer Science 0.005217 Human-Computer Interaction 0.005217 Gamification 0.005217 Computational Investing, Part I 0.005217 CS-169.1x: Software as a Service 0.005217 dtype: float64
A specialized tree node which allows additional data to be associated with each node. More...
#include <Wt/WTreeTableNode>
A specialized tree node which allows additional data to be associated with each node.
Additional data for each column can be set using setColumnWidget().
Creates a new tree table node.
Returns the widget set for a column.
Returns the widget set previously using setColumnWidget(), or
0 if no widget was previously set.
Inserts a child node.
Inserts the given node at the given index.
Reimplemented from Wt::WTreeNode.
Sets a widget to be displayed in the given column for this node.
Columns are counted starting from 0 for the tree list itself, and 1 for the first additional column.
The node label (in column 0) is not considered a column widget. To set a custom widget in column 0, you can add a widget to the labelArea().
Sets the table for this node.
This method is called when the node is inserted, directly, or indirectly into a table.
You may want to reimplement this method if you wish to customize the behaviour of the node depending on table properties. For example to only associate data with the node when the tree list is actually used inside a table.
Returns the table for this node.
Created on 2016-12-30 02:16 by adamwill, last changed 2017-01-10 22:52 by doko.
I'm not sure if this is really considered a bug or just an unavoidable limitation, but as it involves part of the stdlib operating on Python itself, I figured it was at least worth reporting.
In Fedora we have a fairly simple little script called python-deps:
which is used to figure out the dependencies of a couple of Python scripts used in the installer's initramfs environment, so the necessary bits of Python (but not the rest of it) can be included in the installer's initramfs.
Unfortunately, with Python 3.6, this seems to be broken for the core of Python itself, because of this change:
which changed sysconfig.py from doing "from _sysconfigdata import build_time_vars" to using __import__ . I *think* that modulefinder can't cope with this use of __import__ and so misses that sysconfig requires "_sysconfigdata_m_linux_x86_64-linux-gnu" (or whatever the actual name is on your particular platform and arch).
This results in us not including the platform-specific module in the installer initramfs, so Python blows up on startup when the 'site' module tries to import the 'sysconfig' module.
We could work around this one way or another in the python-deps script, but I figured the issue was at least worth an upstream report to see if it's considered a significant issue or not.
You can reproduce the problem quite trivially by writing a test script which just does, e.g., "import site", and then running the example code from the ModuleFinder docs on it:
from modulefinder import ModuleFinder
finder = ModuleFinder()
finder.run_script('test.py')
print('Loaded modules:')
for name, mod in finder.modules.items():
print('%s: ' % name, end='')
print(','.join(list(mod.globalnames.keys())[:3]))
if you examine the output, you'll see that the 'sysconfig' module is included, but the site-specific module is not.
The limitation is unavoidable, as modulefinder inspects bytecode for its inferencing, so any code that calls __import__() or importlib.import_module() simply will not be traced. So unless sysconfig can reasonably be updated back to a statically defined import, this is just how it will be (I'll let doko comment on whether updating is possible and thus close this issue).
the idea is that we load a different _sysconfigdata module when we are cross building packages. So we don't know the name in advance. An ugly alternative would be a big if statement with conditional imports for all known cross build targets. Not sure if this is the better solution.
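Not part of the report, but one possible workaround sketch on the python-deps side: run ModuleFinder as usual, then ask the running interpreter for the name of its platform-specific module and add it by hand. The helper used below is a private sysconfig function (Python 3.6+), so treat its presence and signature as an assumption:
import sysconfig
from modulefinder import ModuleFinder

finder = ModuleFinder()
finder.run_script('test.py')
deps = set(finder.modules)

# modulefinder cannot see through __import__, so add the platform module manually.
try:
    deps.add(sysconfig._get_sysconfigdata_name())   # private API, assumed zero-arg
except (AttributeError, TypeError):
    deps.add('_sysconfigdata')                      # older interpreters
print(sorted(deps))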
public class Base {
    protected Base() {}
    protected static void StaticMethod() {}
    protected void InstanceMethod() {}
}
public class Derived : Base {
    Derived() : base() // OK: call protected constructor from derived constructor
                       // still allowed in VS 2005
    {}

    void Main() {
        Base b = new Base();  // call protected constructor to instantiate new object
                              // allowed in VS 2003, but error in VS 2005
        Base.StaticMethod();  // OK: call protected static method
        b.InstanceMethod();   // Error: can't call a protected instance method on a Base ...
        Derived d = new Derived();
        d.InstanceMethod();   // ... but can call an inherited protected instance method on a Derived
    }
}

class Malicious : Base {
    public static void ByPassProtectedAccess(Base b) {
        b.InstanceMethod();
    }
}
Peter
C# Guy
Don't use parameter names for kernel prototypes
1: .\" Copyright (c) 1995-2001 FreeBSD Inc. 2: .\": .\" 13: .\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND 14: .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 15: .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE 16: .\" ARE DISCLAIMED. IN NO EVENT SHALL [your name] OR CONTRIBUTORS BE LIABLE 17: .\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 18: .\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 19: .\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 20: .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT 21: .\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY 22: .\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF 23: .\" SUCH DAMAGE. 24: .\" 25: .\" 26: .Dd December 7, 2001 27: .Dt STYLE 9 28: .Os 29: .Sh NAME 30: .Nm style 31: .Nd "kernel source file style guide" 32: .Sh DESCRIPTION 33: This file specifies the preferred style for kernel source files in the 34: .Fx.6 2004/02/25 17:35:29 joerg: When declaring variables in structures, declare them sorted by use, then 183: by size, and then in alphabetical order. 184: The first category normally does not apply, but there are exceptions. 185: Each one gets its own line. 186: Try to make the structure 187: readable by aligning the member names using either one or two tabs 188: depending upon your judgment. 189: You should use one tab if it suffices to align most of the member names. 190: Names following extremely long types 191: should be separated by a single space. 192: .Pp 193: Major structures should be declared at the top of the file in which they 194: are used, or in separate header files if they are used in multiple 195: source files. 196: Use of the structures should be by separate declarations 197: and should be 198: .Ic extern 199: if they are declared in a header file. 200: .Bd -literal 201: struct foo { 202: struct foo *next; /* List of active foo. */ 203: struct mumble amumble; /* Comment for mumble. */ 204: int bar; /* Try to align the comments. */ 205: struct verylongtypename *baz; /* Won't fit in 2 tabs. */ 206: }; 207: struct foo *foohead; /* Head of global foo list. */ 208: .Ed 209: .Pp 210: Use 211: .Xr queue 3 212: macros rather than rolling your own lists, whenever possible. 213: Thus, 214: the previous example would be better written: 215: .Bd -literal 216: #include <sys/queue.h> 217: 218: struct foo { 219: LIST_ENTRY(foo) link; /* Use queue macros for foo lists. */ 220: struct mumble amumble; /* Comment for mumble. */ 221: int bar; /* Try to align the comments. */ 222: struct verylongtypename *baz; /* Won't fit in 2 tabs. */ 223: }; 224: LIST_HEAD(, foo) foohead; /* Head of global foo list. */ 225: .Ed 226: .Pp 227: Avoid using typedefs for structure types. 228: This makes it impossible 229: for applications to use pointers to such a structure opaquely, which 230: is both possible and beneficial when using an ordinary struct tag. 231: When convention requires a 232: .Ic typedef , 233: make its name match the struct tag. 234: Avoid typedefs ending in 235: .Dq Li _t , 236: except as specified in Standard C or by \*[Px]. 237: .Bd -literal 238: /* Make the structure name match the typedef. */ 239: typedef struct bar { 240: int level; 241: } BAR; 242: typedef int foo; /* This is foo. */ 243: typedef const long baz; /* This is baz. 
*/ 244: .Ed 245: .Pp 246: All functions are prototyped somewhere. 247: .Pp 248: Function prototypes for private functions (i.e. functions not used 249: elsewhere) go at the top of the first source module. 250: Functions 251: local to one source module should be declared 252: .Ic static . 253: .Pp 254: Functions used from other parts of the kernel are prototyped in the 255: relevant include file. 256: .Pp 257: Functions that are used locally in more than one module go into a 258: separate header file, e.g.\& 259: .Qq Pa extern.h . 260: .Pp 261: Avoid using the 262: .Dv __P 263: macro from the include file 264: .Aq Pa sys/cdefs.h . 265: Code in the DragonFly source tree is not 266: expected to be K&R compliant. 267: .Pp 268: Changes to existing files should be consistent with that file's conventions. 269: In general, code can be considered 270: .Dq "new code" 271: when it makes up about 50% or more of the file(s) involved. 272: This is enough 273: to break precedents in the existing code and use the current 274: .Nm 275: guidelines. 276: .Pp 277: Function prototypes for the kernel have parameter names associated 278: with parameter types. E.g., in the kernel use: 279: .Bd -literal 280: void function(int fd); 281: .Ed 282: .Pp 283: Prototypes that are visible to userland applications 284: should not include parameter names with the types, to avoid 285: possible collisions with defined macro names. 286: I.e., use: 287: .Bd -literal 288: void function(int); 289: .Ed 290: .Pp 291: Prototypes may have an extra space after a tab to enable function names 292: to line up: 293: .Bd -literal 294: static char *function(int _arg, const char *_arg2, struct foo *_arg3, 295: struct bar *_arg4); 296: static void usage(void); 297: 298: /* 299: * All major routines should have a comment briefly describing what 300: * they do. The comment before the "main" routine should describe 301: * what the program does. 302: */ 303: int 304: main(int argc, char *argv[]) 305: { 306: long num; 307: int ch; 308: char *ep; 309: 310: .Ed 311: .Pp 312: For consistency, 313: .Xr getopt 3 314: should be used to parse options. 315: Options 316: should be sorted in the 317: .Xr getopt 3 318: call and the 319: .Ic switch 320: statement, unless 321: parts of the 322: .Ic switch 323: cascade. 324: Elements in a 325: .Ic switch 326: statement that cascade should have a 327: .Li FALLTHROUGH 328: comment. 329: Numerical arguments should be checked for accuracy. 330: Code that cannot be reached should have a 331: .Li NOTREACHED 332: comment. 333: .Bd -literal 334: while ((ch = getopt(argc, argv, "abn:")) != -1) 335: switch (ch) { /* Indent the switch. */ 336: case 'a': /* Don't indent the case. */ 337: aflag = 1; 338: /* FALLTHROUGH */ 339: case 'b': 340: bflag = 1; 341: break; 342: case 'n': 343: num = strtol(optarg, &ep, 10); 344: if (num <= 0 || *ep != '\e0') { 345: warnx("illegal number, -n argument -- %s", 346: optarg); 347: usage(); 348: } 349: break; 350: case '?': 351: default: 352: usage(); 353: /* NOTREACHED */ 354: } 355: argc -= optind; 356: argv += optind; 357: .Ed 358: .Pp 359: Space after keywords 360: .Pq Ic if , while , for , return , switch . 361: No braces are 362: used for control statements with zero or only a single statement unless that 363: statement is more than a single line in which case they are permitted. 364: Forever loops are done with 365: .Ic for Ns 's , 366: not 367: .Ic while Ns 's . 
368: .Bd -literal 369: for (p = buf; *p != '\e0'; ++p) 370: ; /* nothing */ 371: for (;;) 372: stmt; 373: for (;;) { 374: z = a + really + long + statement + that + needs + 375: two lines + gets + indented + four + spaces + 376: on + the + second + and + subsequent + lines; 377: } 378: for (;;) { 379: if (cond) 380: stmt; 381: } 382: if (val != NULL) 383: val = realloc(val, newsize); 384: .Ed 385: .Pp 386: Parts of a 387: .Ic for 388: loop may be left empty. 389: Do not put declarations 390: inside blocks unless the routine is unusually complicated. 391: .Bd -literal 392: for (; cnt < 15; cnt++) { 393: stmt1; 394: stmt2; 395: } 396: .Ed 397: .Pp 398: Indentation used for program block structure is an 8 character tab. 399: Second level indents used for line continuation are four spaces. 400: If you have to wrap a long statement, put the operator at the end of the 401: line. 402: .Bd -literal 403: while (cnt < 20 && this_variable_name_is_really_far_too_long && 404: ep != NULL) 405: z = a + really + long + statement + that + needs + 406: two lines + gets + indented + four + spaces + 407: on + the + second + and + subsequent + lines; 408: .Ed 409: .Pp 410: Do not add whitespace at the end of a line, and only use tabs 411: followed by spaces 412: to form the indentation. 413: Do not use more spaces than a tab will produce 414: and do not use spaces in front of tabs. 415: .Pp 416: Closing and opening braces go on the same line as the 417: .Ic else . 418: Braces that are not necessary may be left out. 419: .Bd -literal 420: if (test) 421: stmt; 422: else if (bar) { 423: stmt; 424: stmt; 425: } else 426: stmt; 427: .Ed 428: .Pp 429: No spaces after function names. 430: Commas have a space after them. 431: No spaces 432: after 433: .Ql \&( 434: or 435: .Ql \&[ 436: or preceding 437: .Ql \&] 438: or 439: .Ql \&) 440: characters. 441: .Bd -literal 442: error = function(a1, a2); 443: if (error != 0) 444: exit(error); 445: .Ed 446: .Pp 447: Unary operators do not require spaces, binary operators do. 448: Do not use parentheses unless they are required for precedence or unless the 449: statement is confusing without them. 450: Remember that other people may become 451: confused more easily than you. 452: Do YOU understand the following? 453: .Bd -literal 454: a = b->c[0] + ~d == (e || f) || g && h ? i : j >> 1; 455: k = !(l & FLAGS); 456: .Ed 457: .Pp 458: Exits should be 0 on success, or according to the predefined 459: values in 460: .Xr sysexits 3 . 461: .Bd -literal 462: exit(EX_OK); /* 463: * Avoid obvious comments such as 464: * "Exit 0 on success." 465: */ 466: } 467: .Ed 468: .Pp 469: The function type should be on a line by itself 470: preceding the function. 471: .Bd -literal 472: static char * 473: function(int a1, int a2, float fl, int a4) 474: { 475: .Ed 476: .Pp 477: When declaring variables in functions declare them sorted by size, 478: then in alphabetical order; multiple ones per line are okay. 479: If a line overflows reuse the type keyword. 480: .Pp 481: Be careful to not obfuscate the code by initializing variables in 482: the declarations. 483: Use this feature only thoughtfully. 484: DO NOT use function calls in initializers. 485: .Bd -literal 486: struct foo one, *two; 487: double three; 488: int *four, five; 489: char *six, seven, eight, nine, ten, eleven, twelve; 490: 491: four = myfunction(); 492: .Ed 493: .Pp 494: Do not declare functions inside other functions; ANSI C says that 495: such declarations have file scope regardless of the nesting of the 496: declaration. 
497: Hiding file declarations in what appears to be a local 498: scope is undesirable and will elicit complaints from a good compiler. 499: .Pp 500: Casts are not followed by a space. 501: Note that 502: .Xr indent 1 503: does not understand this rule. 504: .Pp 505: For the purposes of formatting, treat 506: .Ic return 507: and 508: .Ic sizeof 509: as functions. In other words, they are not 510: followed by a space, and their single argument 511: should be enclosed in parentheses. 512: .Pp 513: .Dv NULL 514: is the preferred null pointer constant. 515: Use 516: .Dv NULL 517: instead of 518: .Vt ( "type *" ) Ns 0 519: or 520: .Vt ( "type *" ) Ns Dv NULL 521: in contexts where the compiler knows the 522: type, e.g., in assignments. 523: Use 524: .Vt ( "type *" ) Ns Dv NULL 525: in other contexts, 526: in particular for all function args. 527: (Casting is essential for 528: variadic args and is necessary for other args if the function prototype 529: might not be in scope.) 530: Test pointers against 531: .Dv NULL , 532: e.g., use: 533: .Pp 534: .Bd -literal 535: (p = f()) == NULL 536: .Ed 537: .Pp 538: not: 539: .Bd -literal 540: !(p = f()) 541: .Ed 542: .Pp 543: Do not use 544: .Ic \&! 545: for tests unless it is a boolean, e.g. use 546: .Bd -literal 547: if (*p == '\e0') 548: .Ed 549: .Pp 550: not 551: .Bd -literal 552: if (!*p) 553: .Ed 554: .Pp 555: Routines returning 556: .Vt "void *" 557: should not have their return values cast 558: to any pointer type. 559: .Pp 560: Use 561: .Xr err 3 562: or 563: .Xr warn 3 , 564: do not roll your own. 565: .Bd -literal 566: if ((four = malloc(sizeof(struct foo))) == NULL) 567: err(1, (char *)NULL); 568: if ((six = (int *)overflow()) == NULL) 569: errx(1, "number overflowed"); 570: return (eight); 571: } 572: .Ed 573: .Pp 574: Avoid old-style function declarations that look like this: 575: .Bd -literal 576: static char * 577: function(a1, a2, fl, a4) 578: int a1, a2; /* Declare ints, too, don't default them. */ 579: float fl; /* Beware double vs. float prototype differences. */ 580: int a4; /* List in order declared. */ 581: { 582: .Ed 583: .Pp 584: Use ANSI function declarations instead. 585: Long parameter lists are wrapped with a normal four space indent. 586: .Pp 587: Variable numbers of arguments should look like this. 588: .Bd -literal 589: #include <stdarg.h> 590: 591: void 592: vaf(const char *fmt, ...) 593: { 594: va_list ap; 595: 596: va_start(ap, fmt); 597: STUFF; 598: va_end(ap); 599: /* No return needed for void functions. */ 600: } 601: 602: static void 603: usage(void) 604: { 605: /* Insert an empty line if the function has no local variables. */ 606: .Ed 607: .Pp 608: Use 609: .Xr printf 3 , 610: not 611: .Xr fputs 3 , 612: .Xr puts 3 , 613: .Xr putchar 3 , 614: whatever; it is faster and usually cleaner, not 615: to mention avoiding stupid bugs. 616: .Pp 617: Usage statements should look like the manual pages 618: .Sx SYNOPSIS . 619: The usage statement should be structured in the following order: 620: .Bl -enum 621: .It 622: Options without operands come first, 623: in alphabetical order, 624: inside a single set of brackets 625: .Ql ( \&[ 626: and 627: .Ql \&] ) . 628: .It 629: Options with operands come next, 630: also in alphabetical order, 631: with each option and its argument inside its own pair of brackets. 632: .It 633: Required arguments 634: (if any) 635: are next, 636: listed in the order they should be specified on the command line. 
637: .It 638: Finally, 639: any optional arguments should be listed, 640: listed in the order they should be specified, 641: and all inside brackets. 642: .El 643: .Pp 644: A bar 645: .Pq Ql \&| 646: separates 647: .Dq either-or 648: options/arguments, 649: and multiple options/arguments which are specified together are 650: placed in a single set of brackets. 651: .Bd -literal -offset 4n 652: "usage: f [-aDde] [-b b_arg] [-m m_arg] req1 req2 [opt1 [opt2]]\en" 653: "usage: f [-a | -b] [-c [-dEe] [-n number]]\en" 654: .Ed 655: .Bd -literal 656: (void)fprintf(stderr, "usage: f [-ab]\en"); 657: exit(EX_USAGE); 658: } 659: .Ed 660: .Pp 661: Note that the manual page options description should list the options in 662: pure alphabetical order. 663: That is, without regard to whether an option takes arguments or not. 664: The alphabetical ordering should take into account the case ordering 665: shown above. 666: .Pp 667: New core kernel code should be reasonably compliant with the 668: .Nm 669: guides. 670: The guidelines for third-party maintained modules and device drivers are more 671: relaxed but at a minimum should be internally consistent with their style. 672: .Pp 673: Stylistic changes (including whitespace changes) are hard on the source 674: repository and are to be avoided without good reason. 675: Code that is approximately 676: .Fx 677: KNF 678: .Nm 679: compliant in the repository must not diverge from compliance. 680: .Pp 681: Whenever possible, code should be run through a code checker 682: (e.g., 683: .Xr lint 1 684: or 685: .Nm gcc Fl Wall ) 686: and produce minimal warnings. 687: .Sh SEE ALSO 688: .Xr indent 1 , 689: .Xr lint 1 , 690: .Xr err 3 , 691: .Xr sysexits 3 , 692: .Xr warn 3 693: .Sh HISTORY 694: This man page is largely based on the 695: .Pa src/admin/style/style 696: file from the 697: .Bx 4.4 Lite2 698: release, with occasional updates to reflect the current practice and 699: desire of the 700: .Fx 701: project.
linereader 1.0.0
Gives Python the ability to randomly access any chunk of a file quickly, without loading any content into memory, and implements two new dynamic types of file handles.
Overview
linereader is a python package that gives the user the ability to access files with ease. The linereader package offers several new powerful ways of using files.
Two main new types of file handles are added to linereader:
1- copen, a cache based solution to random file access and dynamic processing
2- dopen, a slower but universal way of random file access and dynamic processing
Random file access and processing with a cache
linereader was meant as a direct substitute to python’s built-in linecache module. With linereader, cached entries are loaded using less memory, and are around 12% faster to access than those of linecache. There are extra utility functions added to linereader to aid in the manipulation of the global cache.
If one wants an upgrade of linecache, linereader offers a polymorphic getline function, where:
from linecache import getline
can be replaced by:
from linereader import getline
and both behave the same way, loading a file’s contents into cached memory.
An example of this usage would be as follows:
from linereader import getline

filename = 'C:/Python34/testfile.txt'
line_1 = getline(filename, 1)
line_2 = getline(filename, 2)
print(line_1, line_2)
In addition to getline, linereader also contains getonce, and copen, which are used as solutions to cache based file access.
Random file access and processing without loading into memory
The problem with file-accessing methods that load the entire file into a cache is that they only work on small files. Usually, a 5GB file cannot be loaded into memory without the Python interpreter crashing. Even if a file can be loaded, it slows down the session and eats up useful memory. A new file handle that was added to linereader, linereader.dopen, works around this problem and can access any line from any size text/logging/data file with consistency. The speed at which the file can be accessed is proportional to the number of characters being read. There is a slight Python overhead of around 31 microseconds when accessing any file line. Using a 10 GB test file, a line consisting of one character was returned in 31 microseconds, and a line containing 135 characters was returned in 97 microseconds.
dopen's special internals allow for near-identical return speeds on same-length lines within the same file. This means that if file a was loaded using dopen, and lines 368 and 290 both contained the same number of characters, they would take almost exactly the same time to return. The way the dopen handle was made allows it to quickly jump from one position in a file to the next. Conventional methods of reading from a file have to iterate through all the characters or lines and silently read the content that the user doesn't want, to pass over and get to the content that they need.
A simple example of dopen’s usage results as follows:
import linereader

filename = 'C:/Python34/NEWS.txt'
file = linereader.dopen(filename)
header = file.getline(1) + file.getline(2)
line_500 = file.getline(500)
line_38 = file.getline(38)
from_38_to_500 = file.getlines(38, 500)
The usage of dopen gets very advanced, and is actually completely polymorphic with the regular open() handle:
import linereader

filename = 'C:/Python34/README.txt'
file = linereader.dopen(filename)
file.seek(50)
chars = file.read(10)
file.seek(1337)
chars += file.read(80)
chars += file.readline()
rest = file.readlines()
In addition, dopen also offers powerful methods for the navigation of the file pointers:
from linereader import dopen

file = dopen('C:/Python34/README.txt')
file.seekline(58)
line_58 = file.readline()
next_10_lines = file.readnext(10)
line_67 = file.getline(67)
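The timing claims above are easy to spot-check with a rough micro-benchmark; this is only a sketch, assuming a large local file named bigfile.txt with at least 500 lines, and it uses only the dopen/getline calls shown above:
import timeit

setup = "import linereader; f = linereader.dopen('bigfile.txt')"

# Random access through the dopen handle ...
print(timeit.timeit("f.getline(500)", setup=setup, number=1000))

# ... versus naively re-reading the whole file for every access.
print(timeit.timeit("open('bigfile.txt').readlines()[499]", setup="pass", number=1000))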
If you have any questions or issues regarding linereader, please contact me at:
- Author: Nicholas C Pandolfi
- Bug Tracker:
- Download URL:
- License: The MIT License (MIT)
- Platform: CROSS-PLATFORM
- Categories
- Package Index Owner: nickpandolfi
- DOAP record: linereader-1.0.0.xml
On 08/02/2009 01:00 PM, Aurelien Jacobs wrote:
> On Sun, Aug 02, 2009 at 06:13:32PM +0200, Diego Biurrun wrote:
>> common.h #includes mem.h, which #includes common.h.
>>
>> [...]
>>
>> Index: libavutil/common.h
>> ===================================================================
>> --- libavutil/common.h (revision 19565)
>> +++ libavutil/common.h (working copy)
>> @@ -281,8 +281,6 @@
>> }\
>> }
>>
>> -#include "mem.h"
>> -
>> #ifdef HAVE_AV_CONFIG_H
>> # include "internal.h"
>> #endif /* HAVE_AV_CONFIG_H */
>
> This will work fine when compiling ffmpeg because common.h continues to
> include internal.h, which itself includes mem.h.
> But this will break the public API. I guess a lot of software relies on the fact
> that common.h includes mem.h.

Humm, yes it breaks the API, though I'm not sure much software includes common.h --
why would it? What does common.h provide? I think it would include avutil.h, which
includes common.h and therefore mem.h. In that case including mem.h in avutil.h
might make sense.

--
Baptiste COUDURIER
GnuPG Key Id: 0x5C1ABAAA
Key fingerprint 8D77134D20CC9220201FC5DB0AC9325C5C1ABAAA
FFmpeg maintainer
Walkthrough: Developing and Using a Custom Server Control
The control's source file (a .cs file) begins with the following using directives:

using System;
using System.ComponentModel;
using System.Security.Permissions;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

A tag prefix is the prefix, such as "asp" in <asp:Table />, that appears before a control's type name when the control is created declaratively in a page. To enable your control to be used declaratively in a page, ASP.NET needs a tag prefix that is mapped to your control's namespace. A page developer can provide a tag prefix/namespace mapping by adding a @ Register directive on each page that uses the custom control.
Although the App_Code directory enables you to test your control without compiling it, if you want to distribute your control as object code to other developers, you must compile it. In addition, a control cannot be added to the toolbox of a visual designer unless it is compiled into an assembly.
To compile the control into an assembly
Set the Windows environment PATH variable of your computer to include the path to your .NET Framework installation by following these steps:
In Windows, right-click My Computer, select Properties, click the Advanced tab, and click the Environment Variables button.
In the System variables list, double-click the Path variable.
In the Variable value text box, add a semicolon (;) to the end of the existing values in the text box, and then type the path of your .NET Framework installation. The .NET Framework is generally installed in the Windows installation directory at \Microsoft.NET\Framework\versionNumber.
Click OK to close each dialog box.
Run the following command from the directory you created for source files in the first procedure of this walkthrough... The TagPrefixAttribute attribute is useful because it provides a tag prefix for a visual designer to use if the designer does not find a tag prefix mapping in the Web.config file or in a Register directive in the page. The tag prefix is registered with the page the first time the control is double-clicked in the toolbox or dragged from the toolbox onto the page.
If you decide to use the TagPrefixAttribute attribute, you can specify it in a separate file that is compiled with your controls. By convention, the file is named AssemblyInfo.languageExtension, such as AssemblyInfo.cs or AssembyInfo.vb. The following procedure describes how to specify the TagPrefixAttribute metadata.
To add a TagPrefixAttribute that maps your control's namespace to the prefix aspSample.
Recompile all the source files using the compilation command you used earlier (with or without the embedded resource).
To test the compiled version of your custom control, you must make your control's assembly available to the Web application; otherwise ASP.NET will fail when loading your control, and any page in which the control is used will generate a compiler error.
The assembly that you created in this walkthrough is called a private assembly because it must be placed in the Bin directory of each Web application that uses it. If you create a suite of controls that an ISP makes available to all its customers, you might need to package your controls in a shared (strongly named) assembly that is installed in the global assembly cache. For more information, see Working with Assemblies and the Global Assembly Cache.
Next, you must modify the tag prefix mapping you created earlier so that it references the compiled assembly. Then request the .aspx page in your browser by entering the following URL in the address bar:
If you use your control in a visual designer such as Visual Studio 2005, you will be able to add your control to the toolbox, drag it from the toolbox to the design surface, and access properties and events in the property browser. In addition, in Visual Studio 2005, your control has full IntelliSense support in Source view of the page designer and in the code editor. This includes statement completion in a script block as well as property browser support when a page developer clicks the control's tag.
by Justin Couch revised by Bruce Campbell
IN THIS CHAPTER
You've probably had enough of buttons, menus, and creating a hundred and one pictures
for animations. Now you're looking for something a little different. If you have
been following the computing press, you may have noticed discussions about another
hot Web technology called VRML--the Virtual Reality Modeling Language. VRML is designed
to produce the 3D equivalent of HTML: a three-dimensional scene defined in a machine-neutral
format that can be viewed by anyone with an appropriate viewer. VRML is similar to
HTML in that it can be created with a simple ASCII editor such as Notepad and delivered
across the Web by the same Web server that delivers HTML documents.
The first version of the standard, VRML 1, produced static scenes that could be
visited by anyone with a VRML 1 browser or appropriate VRML 1 plug-in viewer created
for use with the Netscape Navigator or Internet Explorer browser. VRML 1 is a derivative
of Silicon Graphics' Open Inventor file format. In that first version, a user could
visit a 3D scene by walking or flying around, but there was no way to interact with
the scene apart from clicking on the 3D equivalent of hypertext links. This was a
deliberate decision on the part of the designers. VRML 1 was enough to get the art
community interested in creating some beautiful virtual places.
In December 1995, the VRML community decided to drop planned revisions to version
1 and head straight to a fully interactive version, VRML 2. One of the prime requirements
for VRML 2 was the ability to support programmable behaviors. Of the seven proposals,
the Moving Worlds submission by Sony and SGI came out as the favorite among the 2,000
members of the VRML mailing list. Contained in a draft proposal for VRML 2 was a
Java API for creating behaviors for objects in a VRML 2 scene.
Effectively combining VRML and Java requires a good understanding of how both
languages work. This chapter introduces the Java implementation of the VRML API and
shows you how to get the most from a dynamic virtual world. For specifics about VRML
2, Teach Yourself VRML 2 in 21 days, published by Sams.net, is an excellent reference
book.
Interestingly enough, Sony has implemented a Java API in its CommunityPlace VRML
2 viewer. This Java API is somewhat different than the Java API implemented by SGI
in its CosmoPlayer VRML 2 viewer. Although they use the same basic class structure,
you can think of Sony's approach as adding Java scripting to the VRML 2 node structure
and SGI's approach as providing VRML 2 3D graphics output capabilities to any Java
program. Each of these approaches to integrating Java with VRML has its strengths
and weaknesses. Both methods are covered in this chapter with appropriate code examples.
SGI has promoted a VRML-specific scripting language called VRMLscript, which you
can use instead of Java when you are scripting behaviors; VRMLscript is similar to
Sony's interpretation of intranode scripting. Consult a VRML 2 reference for the
specifics of VRMLscript. VRMLscript is an appropriate alternative for Java scripting
on many occasions, but is more similar to JavaScript than it is to the Java language
itself.
Within the virtual reality environment, any dynamic change in a virtual world
is regarded as a behavior. This change can be something as simple as an object changing
color when it is touched or something as complex as autonomous, semi-intelligent
agents that look and act like humans, such as Neal Stephenson's Librarian from his
visionary, sci-fi novel Snow Crash.
To understand how to create behaviors for virtual objects, you have to understand
how VRML works. Although this section won't delve into a lengthy discussion of VRML,
a few basic concepts are reviewed. To start with, VRML is a separate language from
the Java programming language used in scripts. The Java VRML classes interact only
with a preexisting VRML scene contained in a file with a .wrl extension.
The scene is developed using the VRML language to create a collection of VRML nodes;
the scene is then associated with a rectangular region within the browser window.
The VRML browser then provides a consistent method for moving around in the scene
and interacting with it. Although it is possible to create a VRML browser exclusively
in Java, both Sony and SGI have compiled machine-dependent browsers for the purposes
of rendering speed. Even with a VRML browser written in Java, a scene must be created
using VRML, which the Java VRML classes can then dynamically change over time.
Within Sony's CommunityPlace Java API, each virtual object can have its own script
attached to it. Creating a highly complex world in this manner requires writing many
short scripts. For more interesting behaviors, a longer, heavyweight script can combine
the VRML API classes with Java's thread and networking classes.
To minimize the amount of external programming, the VRML specification contains
a number of nodes to create common behaviors within the .wrl file itself.
These nodes are of two types: sensors and interpolators. Sensor nodes initiate events
when the user interacts with the sensor. Sensors are associated with virtual objects,
timers, and locations in three-dimensional space. A location is a 3D volume defined
by a cylinder, disk, plane, sphere, or point. Sensors initiate events based on touch
(such as a mouse click), time, or proximity (to the user's viewpoint). Interpolators
change a virtual object over time between specific end-point values defined by the
VRML author. Interpolators are available for changing color, scale, position, and
orientation over time. By using the specifics of VRML 2, a sensor can turn on a timer
that then interpolates the value of a virtual object's characteristic over time.
An example is a virtual doorbell that senses a user's mouse click and turns on an
orientation interpolator that opens a door slowly over a specific time period. This
door example is easily included in a VRML scene without Java.
The VRML world description uses a traditional hierarchical scene-graph approach
reminiscent of PEX/PHIGS and other 3D toolkits. A scene graph is a data structure
used to express the hierarchical relationships between all the objects in a scene.
The word graph as used in the term scene graph refers to an organization of objects
described by certain mathematical characteristics. In a graph, each object in the
scene is called a node, and each connecting line is called an edge. Each node within
a scene graph has a parent, and each parent can have multiple children. As an example
of a hierarchical VRML scene graph, a body Transform node can include two
leg Transform nodes, each of which can contain a foot Transform
node, which can contain five toe Transform nodes. The toe Transform
node can have children nodes that define its geometry, texture, and color as well
as a sensor to make it interactive.
Surprisingly, VRML nodes can be represented in a semiobject-oriented manner that
meshes well with Java. Each node has a number of fields. These fields can be accessed
by other nodes only if they are explicitly declared to be accessible. You can declare
a field as read or write only, or you can define it to require a specific method
to access its value. In VRML syntax, the four types of field access are described
as follows:
field--A private value belonging to the node. It can be given an initial value in the file, but it cannot send or receive events from other nodes (a Script node's own code can still read and write its fields).
eventIn--A value that can only be written to from outside the node; other nodes send events to it.
eventOut--A value that can only be read from outside the node; the node sends events out through it.
exposedField--A value that can be both read and written. It behaves as a field combined with an implicit set_ eventIn and a _changed eventOut.
The official VRML 2 node definition specifies the standard accessibility of each
field in a node; you must then consider accessibility when you are writing behavior
scripts. Most scripts are written to process a value being passed to the script in
the form of an eventIn field, which then passes the result back through
an eventOut field. Any internal values are kept in fields. The
Script can be defined as a node within the scene graph. These Script
nodes are not permitted to have exposedFields because of the updating and
implementation ramifications within the event system.
Although a node can consist of a number of input and output fields, these fields
do not all have to be connected. Usually, the opposite is the case--only a few of
the available connections are made. VRML requires explicit connection of nodes using
the ROUTE keyword, as shown here:
ROUTE fromNode.fieldname1 TO toNode.fieldname2
The only restriction when connecting fields is that the two fields be of the same
type. No casting of types is permitted.
This route mechanism can be very powerful when combined with scripting. Basically,
both a VRML ROUTE statement and a Java script can send an eventIn
field or process an eventOut field. The specification allows both fan in
and fan out of events. Fan in occurs when many nodes send an event to a single eventIn
field of a node. Fan out is the opposite: one eventOut object is connected
to many other eventIn objects. Fan out is handy when you want one script
to control a number of different objects at the same time; for example, a light switch
eventOut object turning on multiple lights simultaneously.
If two or more events fan in on a particular eventIn object, the results
are undefined. You should be careful to avoid such situations unless the ambiguity
is intended. An example of this situation is when two separate animation scripts
set the position of the same virtual object. To avoid this situation in a complicated
animated scene, create an event graph with arrows showing the direction of events
firing from one node to another. Then make sure that any two or more events coming
in to the same node cannot possibly fire at the same time.
All VRML data types follow the standard programming norms. There are integer,
floating-point, string, and boolean standard types as well as specific types for
handling 3D graphics such as points, vectors, image, and color. To deal with the
extra requirements of the VRML scene, node and time types have been added. The node
data type contains an instance pointer to a particular node in the VRML scene graph
(such as a virtual toe or foot). Individual fields within a node are not accessible
directly. Individual field references are rarely necessary in behaviors programming
because communication is on an event-driven model. When field references are needed
within the API, a node instance and field string description pair are used.
Except for the boolean and time types, which are always single,
values can be either single or multivalued. The distinction is made in the field
name. An SF prefix is used for single-valued fields and an MF prefix
is used for multivalued fields. For example, type SFInt32 defines a single
integer; type MFInt32 defines an array of integers. The Script
node definition in the next section contains an MFString and a SFBool.
The MFString is used to contain a collection of URLs, each kept in its own
separate substring, but the SFBool contains a single boolean flag controlling
a condition.
The Script node provides the way to integrate a custom behavior into
VRML. Behaviors can be programmed in any language supported by the browser and for
which an implementation of the API can be found. In the final specification of VRML
2, sample APIs were provided for Java, C, and also VRML's own scripting language,
VRMLscript--a derivative of Netscape's JavaScript. The Script node is defined
as shown here:
Script {
exposedField MFString url          []
field        SFBool   directOutput FALSE
field        SFBool   mustEvaluate FALSE
# plus any number of the following, declared by the script author:
eventIn      eventType eventName
field        fieldType fieldName initialValue
eventOut     eventType eventName
}
Unlike standard HTML, VRML enables multiple target files of an open location request
to be specified in order of preference. The url field contains any number
of strings specifying URLs or URNs to the desired behavior script. For compiled Java
scripts, this would be the URL of the .class file, but the url
list is not limited to just one script type.
Apart from specifying the behavior file, VRML also enables control over how the
Script node performs within the scene graph. The mustEvaluate field
tells the browser how often the script should be run. If this field is set to TRUE,
the browser must send events to the script as soon as they are generated, forcing
an execution of the script. If the field is set to FALSE, in the interests
of optimization, the browser may elect to queue events until the outputs of the script
are required by the browser. A TRUE setting is most likely to cause browser
performance to degrade because of the constant context-swapping needed; a FALSE
setting queues events to keep the context-swapping to a minimum. Unless you are performing
something the browser is not aware of (such as using a networking or database functionality),
you should set the mustEvaluate field to FALSE.
The directOutput field controls whether the script has direct access
for sending events to other nodes. Java methods require the node reference of other
nodes when setting field values. If, for example, a script is passed an instance
of a Transform node, and the directOutput field is set to TRUE,
the script can send an event directly to that node. To add a new default box to this
group, the script would contain the following code:
SFNode group_node = (SFNode)getField("group_node");
group_node.postEventIn("add_children", (Field)CreateVRMLfromString("Box"));
If directOutput is set to FALSE, it requires the Script
node to have an eventOut field with the corresponding event type specified
(an MFNode in this case), and a ROUTE connecting the script with
the target node.
There are advantages to both approaches. When the scene graph is static in nature,
the second approach (using known events and ROUTE statements) is much simpler.
However, in a scene in which objects are being generated on the fly, static routing
and events do not work and the first approach is required.
The API is built around two Java interfaces defined in the package vrml.
The eventIn and Node interfaces are defined as follows:
interface eventIn {
public String getName();
public SFTime getTimeStamp();
public ConstField getValue();
}
interface Node {
public ConstField getValue(String fieldName)
throws InvalidFieldException;
public void postEventIn(String eventName, Field eventValue)
throws InvalidEventInException;
}
In addition to these two interfaces, each of the VRML field types also has two
class definitions that are subclasses of Field: a standard version and a
restricted, read-only version. The Const* definitions are used only in the
eventIn fields defined in individual scripts. Unless that field class has
an exception explicitly defined, these class definitions are guaranteed not to generate
exceptions.
For nonconstant fields, each class has at least the setValue() and getValue()
methods that return the Java equivalent of the VRML field type. For example, a SFRotation
class returns an array of floats mapping to the x, y, z, and orientation, but the
MFRotation class returns a two-dimensional array of floats. The multivalued
field types also have a set1Value() method, which enables the caller to
set an individual element.
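As a rough sketch of how these accessors are used inside a script (the field names highlight and counts are hypothetical, and the exact signatures varied between the draft Sony and SGI implementations), a script might read and write its fields like this:
// Hypothetical fields declared in the Script node as
//   field SFColor highlight 1 0 0
//   field MFInt32 counts [ 0 0 0 ]
SFColor highlight = (SFColor)getField("highlight");
float[] rgb = highlight.getValue();                  // single value: one float array
highlight.setValue(new float[] {0.0f, 0.0f, 1.0f});  // make it blue
MFInt32 counts = (MFInt32)getField("counts");
int[] all = counts.getValue();                       // multivalue: the whole array
counts.set1Value(2, 42);                             // change just the third element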
SFString and MFString need special attention. Java defines them
as being Unicode characters, but VRML defines them as a subset of the Unicode character
set--UTF-8. Ninety-nine percent of the time, this difference between Java and VRML
should not present any problems. Listing 45.1 includes a complete list of the available
field types as part of the VRML API class hierarchy as implemented by SGI's CosmoPlayer
VRML viewer. Check the latest CosmoPlayer documentation for any new developments.
Note that the classes are divided into three packages:
vrml, vrml.field, and vrml.node. This code is also located
on the CD-ROM that accompanies this book.
java.lang.Object
|
+- vrml.Event
+- vrml.Browser
+- vrml.Field
| +- vrml.field.SFBool
| +- vrml.field.SFColor
| +- vrml.field.SFFloat
| +- vrml.field.SFImage
| +- vrml.field.SFInt32
| +- vrml.field.SFNode
| +- vrml.field.SFRotation
| +- vrml.field.SFString
| +- vrml.field.SFTime
| +- vrml.field.SFVec2f
| +- vrml.field.SFVec3f
| |
| +- vrml.MField
| | +- vrml.field.MFColor
| | +- vrml.field.MFFloat
| | +- vrml.field.MFInt32
| | +- vrml.field.MFNode
| | +- vrml.field.MFRotation
| | +- vrml.field.MFString
| | +- vrml.field.MFTime
| | +- vrml.field.MFVec2f
| | +- vrml.field.MFVec3f
| |
| +- vrml.ConstField
| +- vrml.field.ConstSFBool
| +- vrml.field.ConstSFColor
| +- vrml.field.ConstSFFloat
| +- vrml.field.ConstSFImage
| +- vrml.field.ConstSFInt32
| +- vrml.field.ConstSFNode
| +- vrml.field.ConstSFRotation
| +- vrml.field.ConstSFString
| +- vrml.field.ConstSFTime
| +- vrml.field.ConstSFVec2f
| +- vrml.field.ConstSFVec3f
| |
| +- vrml.ConstMField
| +- vrml.field.ConstMFColor
| +- vrml.field.ConstMFFloat
| +- vrml.field.ConstMFInt32
| +- vrml.field.ConstMFNode
| +- vrml.field.ConstMFRotation
| +- vrml.field.ConstMFString
| +- vrml.field.ConstMFTime
| +- vrml.field.ConstMFVec2f
| +- vrml.field.ConstMFVec3f
|
+- vrml.BaseNode
+- vrml.node.Node
+- vrml.node.Script
java.lang.Exception
java.lang.RuntimeException
vrml.InvalidRouteException
vrml.InvalidFieldException
vrml.InvalidEventInException
vrml.InvalidEventOutException
vrml.InvalidExposedFieldException
vrml.InvalidNavigationTypeException
vrml.InvalidFieldChangeException
vrml.InvalidVRMLSyntaxException
At some point, each VRML Script node is connected with a list of instructions
that brings the VRML objects to life. For simple animations, VRMLscript is an appropriate
language to use for coding behaviors. If you need more complexity, the thread and
networking features of the Java environment make Java a more capable language for
coding behaviors efficiently. Java can be used to create VRML behaviors by extending
the Script class or by tying VRML-aware classes together with a VRML scene
in an HTML document.
Sony's CommunityPlace VRML viewer includes the Script class as part of
its Java API. Remember that SGI's CosmoPlayer Java API does not include the Script
class because you are expected to use VRMLscript for your intranode scripting. The
definition of Sony's Java Script class follows; as this book goes to press,
SGI has not yet included a Script class in its API.
class Script implements Node {
public void processEvents(Event[] events)
throws Exception;
public void eventsProcessed()
throws Exception;
protected Field getEventOut(String eventName)
throws InvalidEventOutException;
protected Field getField(String fieldName)
throws InvalidFieldException;
}
When you create a script, you are expected to subclass the Script class
definition to provide the necessary functionality. The class definition deliberately
leaves the definition of the codes for the exceptions up to you so that you can create
tailored exceptions and handlers.
The getField() method returns the value of the field nominated by the
given string. This method is how the Java script gets the values from the VRML Script
node fields. This method is used for all fields and exposedField fields.
To the Java script, an eventOut looks just like another field. There is
no need to write an eventOut function--the value is set by calling the appropriate
field type's setValue() method.
Every eventIn field specified in the VRML Script node definition
requires a matching public method in the Java implementation. The method
definition takes this form:
public void <eventName>(Const<eventTypeName> <variableName>, ConstSFTime <timestamp>);
The method must have the same name as the matching eventIn field in the
VRML script description. The second field corresponds to the timestamp of when the
event was generated. The SFTime field is particularly useful when the mustEvaluate
field is set to FALSE, meaning that an event may be queued for some time
before finally being processed.
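For example, a hypothetical eventIn SFFloat set_speed declared in the Script node would be matched by a method like the following, where speed and lastUpdate are ordinary instance variables of the script:
// Matches "eventIn SFFloat set_speed" in the Script node declaration
public void set_speed(ConstSFFloat value, ConstSFTime timestamp) {
speed = value.getValue();              // the new value carried by the event
lastUpdate = timestamp.getValue();     // when the event was actually generated
}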
Because Script is an implementation of the Node interface, it
contains the postEventIn() method. Earlier in this chapter, you learned
that you should not call the eventIn methods of other scripts directly.
To facilitate direct internode communication, the postEventIn() method enables
you to send information to other nodes while staying within the VRML event-handling
system. The arguments are a String specifying the eventIn field
name and a Field containing the value. The value is a VRML data type cast
to Field. The use of postEventIn(), as available in Sony's CommunityPlace
API, is shown in the following example and is also used later in this chapter when
a simple dynamic world is constructed.
// The node we are sending the event to contains a translation field
Node translation_node;                    // obtained elsewhere, for example from an SFNode field
float[] translation_details = new float[3];
translation_details[0] = 0.0f;
translation_details[1] = 2.3f;
translation_details[2] = -0.4f;
SFVec3f new_translation = new SFVec3f();
new_translation.setValue(translation_details);
translation_node.postEventIn("translation", (Field)new_translation);
SGI's CosmoPlayer API does not use a postEventIn() method. Instead, it
uses the following syntax that, in effect, does the same thing:
Node box = browser.getNode("box");
EventInSFVec3f set_translation = (EventInSFVec3f) box.getEventIn("set_translation");
float[] val = new float[3];
val[0] = 0.0f;
val[1] = 2.3f;
val[2] = -0.4f;
set_translation.setValue(val);
Remember that SGI's API does not contain the Script class. The event-processing
methods, processEvents() and eventsProcessed(), are described later
in this chapter.
Now you are ready to put this all together to create a color-changer behavior
that toggles a box's color between red and blue. Figure 45.1 shows a simple VRML
box that has turned blue as a result of a user's mouse click (trust me; although
the figure is shown in black and white, the box really is blue!). The following sections
show you how to create the example for both the CommunityPlace and CosmoPlayer VRML
viewers. The CommunityPlace example requires five components: a Box primitive,
a TouchSensor object, a Material node, the Script node,
and the Java script. The CosmoPlayer example requires a Box primitive, a
TouchSensor object, a Material node, the Java script, and an HTML
document to associate the Java script with the VRML scene.
Figure 45.1.
The color-changing box example.
For the CommunityPlace viewer, the basic VRML scene consists of a TouchSensor-enabled,
red box placed at the scene origin (0,0,0) with an associated Script node
and a couple of ROUTE statements. Listing 45.2 shows the complete VRML scene,
which was created using a simple text editor. In fact, all the VRML scenes in this
chapter were typed by hand into the Notepad text editor that is a part of the Windows
operating system. You can find the code in Listing 45.2 on the CD-ROM that accompanies
this book.
#VRML V2.0 utf8
#filename: s_box.wrl
Transform {
bboxSize 1 1 1
children [
Shape {
appearance Appearance {
material DEF cube_material Material {
diffuseColor 1 0 0 #start red.
}
}
geometry Box {size 1 1 1}
} # end of shape definition
# Now define a TouchSensor node. This node takes in the
# geometry of the parent transform. Default behavior OK.
DEF cube_sensor TouchSensor {}
]
}
DEF color_script Script {
url "s_color_changer.class"
# now define our needed fields
field SFBool isRed TRUE
eventIn SFBool clicked
eventOut SFColor color_out
}
ROUTE cube_sensor.isActive TO color_script.clicked
ROUTE color_script.color_out TO cube_material.set_diffuseColor
The Script node acts as the color changer. It takes input from the TouchSensor
object and outputs the new color to the Material node. The Script
node must also keep internal track of the color. This is done by reading in the value
from the Material node, but for demonstration purposes, an internal flag
is included in the script. No fancy processing or event sending to other nodes is
necessary, so both the mustEvaluate and directOutput fields can
be left at their default setting of FALSE.
Finally, the Java script shown in Listing 45.3 is compiled to produce the s_color_changer.class
file that is referenced in Listing 45.2 in the Script node's url
field. You can find this code on the CD-ROM that accompanies this book.
//filename: s_color_changer.java
import vrml.field.*;
import vrml.node.*;
import vrml.*;
public class s_color_changer extends Script {
// declare the field
private SFBool isRed = (SFBool)getField("isRed");
// declare the eventOut
private SFColor color_out = (SFColor)getEventOut("color_out");
// declare color float array
float[] color;
// declare eventIns
public void clicked(ConstSFBool isClicked, ConstSFTime ts) {
// called when the user clicks or touches the cube or
// stops touching/click so first check the status of the
// isClicked field. We will only respond to a button up.
if(isClicked.getValue() == false) {
// now check whether the cube is red or blue
if(isRed.getValue() == true)
isRed.setValue(false);
else
isRed.setValue(true);
}
}
// finally the event processing call
public void eventsProcessed() {
if(isRed.getValue() == true) {
// the flag now says red, so send red to the Material node
color = new float[3];
color[0] = 1.0f;
color[1] = 0.0f;
color[2] = 0.0f;
color_out.setValue(color);
}
else {
// the flag now says blue, so send blue
color = new float[3];
color[0] = 0.0f;
color[1] = 0.0f;
color[2] = 1.0f;
color_out.setValue(color);
}
}
}
For the CosmoPlayer viewer, the basic VRML scene consists of a TouchSensor-enabled,
red box placed at the scene origin (0,0,0) but does not include a Script
node or the ROUTE statements used with the CommunityPlace example. As you
see in Listing 45.4 (which is also located on the CD-ROM that accompanies this book),
the routing is done within the Java script itself.
#VRML V2.0 utf8
#filename: sgi_box.wrl
Transform {
bboxSize 1 1 1
children [
Shape {
appearance Appearance {
material DEF cube_material Material {
diffuseColor 1.0 0 0 #start red.
}
}
geometry Box {size 1 1 1}
} # end of shape definition
# Now define a TouchSensor node. This node takes in the
# geometry of the parent transform. Default behavior OK.
DEF cube_sensor TouchSensor {}
]
}
Instead of connecting the Java script to the VRML scene within the VRML file,
the connection is made in an HTML document. This approach has one big advantage:
You can use the Java AWT to control your VRML scene interaction. The HTML code is
provided in the file Color_changer.htm, as shown in Listing 45.5, and can
be found on the CD-ROM.
<HTML>
<HEAD>
<TITLE>Color Changer Example</TITLE>
</HEAD>
<BODY>
<CENTER>
<EMBED SRC="sgi_box.wrl" BORDER=0 HEIGHT=400 WIDTH=400>
</CENTER>
<APPLET CODE="sgi_color_changer.class" WIDTH=400 HEIGHT=100 MAYSCRIPT>
</APPLET>
</BODY>
</HTML>
In the HTML file, you connect the sgi_box.wrl VRML file to the sgi_color_changer.class
Java class using the <EMBED> and <APPLET> HTML tags.
Note that you can set the HEIGHT and WIDTH of the VRML scene to
whatever size you want; any Java controls you include in the .class file
appear in an area following your VRML scene.
Finally, the Java sgi_color_changer class source code is shown in Listing
45.6 (this code can also be found on the CD-ROM). This code creates the Java controls
and connects them to the VRML scene for visitors to use to interact with the scene.
The code in Listing 45.6 allows a user to interact with the VRML scene using Java
AWT controls such as buttons and checkboxes, which are made available in a separate
control panel. The code in Listing 45.7 allows a user to interact with the VRML scene
by clicking on VRML objects in the scene itself.
//filename: sgi_color_changer.java
import java.awt.*;
import java.applet.*;
import vrml.external.field.EventOut;
import vrml.external.field.EventInSFColor;
import vrml.external.Node;
import vrml.external.Browser;
import vrml.external.exception.*;
import netscape.javascript.JSObject;
public class sgi_color_changer extends Applet {
TextArea output = null;
Browser browser = null;
Node material = null;
EventInSFColor diffuseColor = null;
boolean red = true;
boolean error = false;
public void init() {
add(new Button("Change Color"));
// Get the browser for the embedded VRML scene and then the material
// node defined in sgi_box.wrl
browser = Browser.getBrowser(this);
try {
material = browser.getNode("cube_material");
diffuseColor = (EventInSFColor) material.getEventIn("set_diffuseColor");
}
catch (InvalidNodeException ne) {
showStatus("Failed to get node:" + ne);
error = true;
}
catch (InvalidEventInException ee) {
showStatus("Failed to get EventIn:" + ee);
error = true;
}
catch (InvalidEventOutException ee) {
showStatus("Failed to get EventOut:" + ee);
error = true;
}
}
public boolean action(Event event, Object what) {
if (error)
{
showStatus("Problems! Had an error during initialization");
return true; // Uh oh...
}
if (event.target instanceof Button)
{
Button b = (Button) event.target;
if (b.getLabel().equals("Change Color")) {
// Toggle between red and blue and send the new color to the cube
float[] val = new float[3];
if (red) {
val[0] = 0.0f;
val[1] = 0.0f;
val[2] = 1.0f;
}
else {
val[0] = 1.0f;
val[1] = 0.0f;
val[2] = 0.0f;
}
red = !red;
diffuseColor.setValue(val);
}
}
return true;
}
}
So that you can click the box and change the color from within the VRML scene
instead of from a Java AWT button, SGI's browser sets up an event callback in the
Java class by implementing EventOutObserver. The EventOutObserver
sets up event awareness for the VRML scene. If you want to interact with the cube
within the VRML itself instead of from a Java AWT button, just replace sgi_color_changer.class
with Color_changer.class, compiled from the source code in Listing 45.7.
This code can also be found on the CD-ROM that accompanies this book.
//filename: Color_changer.java
import java.awt.*;
import java.applet.*;
import vrml.external.field.EventOut;
import vrml.external.field.EventInSFColor;
import vrml.external.field.EventOutSFColor;
import vrml.external.field.EventOutSFTime;
import vrml.external.field.EventOutObserver;
import vrml.external.Node;
import vrml.external.Browser;
import vrml.external.exception.*;
import netscape.javascript.JSObject;
public class Color_changer extends Applet implements EventOutObserver {
TextArea output = null;
Browser browser = null;
Node material = null;
EventInSFColor diffuseColor = null;
EventOutSFColor outputColor = null;
EventOutSFTime touchTime = null;
boolean red = true;
boolean error = false;
public void init() {
// Get the browser for the embedded VRML scene and then the material
// node defined in sgi_box.wrl
browser = Browser.getBrowser(this);
try {
material = browser.getNode("cube_material");
diffuseColor = (EventInSFColor) material.getEventIn("set_diffuseColor");
// Get the Touch Sensor
Node sensor = browser.getNode("cube_sensor");
// Get its touchTime EventOut
touchTime = (EventOutSFTime) sensor.getEventOut("touchTime");
// Set up the callback
touchTime.advise(this, new Integer(1));
// Get its diffuseColor EventOut
outputColor = (EventOutSFColor) material.getEventOut("diffuseColor");
// Set up its callback
outputColor.advise(this, new Integer(2));
}
catch (InvalidNodeException ne) {
add(new TextField("Failed to get node:" + ne));
error = true;
}
catch (InvalidEventInException ee) {
add(new TextField("Failed to get EventIn:" + ee));
error = true;
}
catch (InvalidEventOutException ee) {
add(new TextField("Failed to get EventOut:" + ee));
error = true;
}
}
public void callback(EventOut who, double when, Object which) {
Integer whichNum = (Integer) which;
if (whichNum.intValue() == 1) {
// The cube was touched: toggle between red and blue and send the
// new color to the Material node
float[] val = new float[3];
if (red) {
val[0] = 0.0f;
val[1] = 0.0f;
val[2] = 1.0f;
}
else {
val[0] = 1.0f;
val[1] = 0.0f;
val[2] = 0.0f;
}
red = !red;
diffuseColor.setValue(val);
}
if (whichNum.intValue() == 2) {
// Make the new color of the sphere and timestamp
// show up in the textarea.
float[] val = outputColor.getValue();
showStatus("Got color " + val[0] + ", " + val[1] + ", " +
val[2] + " at time " + when + "\n");
}
}
}
Of course, you can combine both the Java controls and the VRML node callback routine
in the same Java file.
That's it. Now you have a cube that changes color when you click it. The code
looks almost identical for changing a virtual object's translation, rotation, or
scale in the VRML scene. All other eventIns can be accessed in similar fashion--including
the current scene viewpoint. Creating more complex behaviors is just a variation
of this scheme, with more Java code and fields. Although the basic user input usually
comes from sensors, in the SGI approach the events can come from any Java program--including
a program that embeds all kinds of physics to handle interactions between the objects
drawn to the screen in the VRML scene. This approach creates many scientific visualization
opportunities.
Even with Sony's approach, scripts are not restricted to input methods based on
eventIn fields. One example is a stock market tracker that runs as a separate
thread. It can constantly receive updates from the network, process them, and then
send the results through a public method to the script, which then puts the appropriate
results into the 3D world.
Behaviors using the methods presented in the color-changer box examples work for
many simple systems. Effective virtual reality systems, however, require more than
just being able to change the color and shape of objects that already exist in the
virtual world. As a thought experiment, consider a virtual taxi: A user should be
able to step inside and instruct the cab where to go. Using the techniques from the
preceding examples, the cab would move off, leaving the user in the same place. The
user does not "exist" as part of the scene graph--the user is known to
the browser but not to the VRML scene-rendering engine. Clearly, a greater level
of control is needed.
The VRML 2 specification defines a series of actions the programmer can access
to set and retrieve information about the virtual world. Within the Java implementation
of the API, world information is provided in the Browser class. The Browser
class provides all the functions a programmer needs that are not specific to any
particular part of the scene graph.
To define a system-specific behavior, the first functions you must define are
these:
public static String getName();
public static String getVersion();
These strings are defined by the browser writer and identify the browser in some
unspecified way. If this information is not available, empty strings are returned.
If you are programming expensive calculations, you may want to know how they affect
the rendering speed (frame rate) of the system. The getCurrentFrameRate()
method returns the value in frames per second. If this information is not available,
the return value is 100.0.
public static float getCurrentFrameRate();
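As a sketch of how a behavior might react to this value (the threshold and the two branches are placeholders, not part of the API):
float fps = Browser.getCurrentFrameRate();
if (fps > 0.0f && fps < 15.0f) {
// rendering is struggling: fall back to a cheaper approximation
}
else {
// plenty of headroom (or the browser reported the 100.0 default)
}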
In systems that use prediction, two more handy pieces of information to know are
what mode the user is navigating the scene in, and at what speed the user is traveling.
Similar to the getName() method, the string returned to describe the navigation
type is browser dependent. VRML defines that, at a minimum, the following four navigation
types must be supported: WALK, EXAMINE, FLY, and NONE.
However, if you are building applications for an intranet and you know what type
of browser is used, this navigation information can be quite handy for varying the
behavior, depending on how the user approaches the object of interest. Information
about navigation is available from the following methods:
public static String getNavigationType();
public static void setNavigationType(String type)
throws InvalidNavigationTypeException;
public static float getNavigationSpeed();
public static void setNavigationSpeed(float speed);
public static float getCurrentSpeed();
The difference between navigation speed and current speed is in the definition.
VRML 2 defines a navigationInfo node that contains default information about
how to act if given no other external cues. The navigation speed is the default speed
in units per second; these units are defined for each browser by the individual browser
developers. There is no specification about what this speed represents, only hints.
A reasonable estimate of the navigation speed is the movement speed in WALK
and FLY mode and the movement speed used in panning and dollying in EXAMINE
mode, encountered when a user has not selected any speed controls. The current speed
is the actual speed at which the user is traveling at that point in time. This is
the speed that the user has set with the browser controls. Speed controls, when they
are provided by the browser developer, allow the user to vary from the default navigation
speed. For example, a browser with a default speed of 70 units/second may slow down
to 40 units/second when the user selects the slow speed control from an available speed menu.
Having two different descriptions of speed may seem wasteful, but it comes in
quite handy when moving between different worlds. The first world may be a land of
giants, where traveling at 100 units per second is considered slow, but in the next
world, which models a molecule that is only 0.001 units across, this speed would
be ridiculously fast. The navigation speed value can be used to scale speeds to something
reasonable for the particular world.
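The following fragment sketches that kind of scaling; doorRate is a hypothetical value consumed by an animation script elsewhere:
// Scale a behavior to the world's default navigation speed so that a door
// opens at "half walking speed" in both the giant world and the molecule.
float worldSpeed = Browser.getNavigationSpeed();
float doorRate = 0.5f * worldSpeed;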
Also contained in the navigationInfo node is a boolean field for a headlight.
The headlight is a directional light that points in the direction the user is facing.
Where the scene creator has used other lighting effects (such as radiosity), the
headlight is usually turned off. In the currently available browsers, the headlight
field has led to software bugs (for example, turning off the headlight results in
the whole scene going black). It is recommended that you do not use the headlight
feature within the behaviors because the browser includes logic to determine when
best to use the headlight. For example, the headlight is usually disabled when the
browser is showing the effects of a single point of light. If you have to access
the headlight, the following functions are provided by the Browser class:
public static boolean getHeadlight();
public static void setHeadlight(boolean onOff);
The methods described in this section enable you to change individual components
of the world. The other approach is to completely replace the world with some internally
generated one. This approach enables you to use VRML to generate new VRML worlds
on the fly--assuming that you already are part of a VRML world (you cannot use this
approach in an application to generate a 3D graphics front-end). Use the following
statement to replace the current world with an internally generated one:
public static void replaceWorld(Node nodes[]);
This is a nonreturning call that unloads the current scene graph and replaces
it with a new one.
There is only so much you can do with what is already available in a scene. Complex
worlds use a mix of static and dynamically generated scenery to achieve their impressive
special effects. You can dramatically change a VRML scene while a user is visiting
it. The fact that nodes are embedded within other nodes in a scene graph makes VRML
very flexible for changing just the specific part of the scene you want to change.
You can query the current world to find out the URL from which it was originally
loaded. Your Java code can then contain different paths based on the current world:
public static String getWorldURL();
getWorldURL() returns the URL of the root of the scene graph rather than the URL
of the currently occupied part of the scene. VRML enables a complex world to be created
using a series of small files that are included in the world--a technique called
inlining in VRML parlance.
You can change your VRML scene dynamically by using the following three browser
methods:
public static void loadWorld(String[] url);
public static Node createVrmlFromString(String vrmlSyntax);
public static void createVrmlFromURL(String[] url,
Node node,
String eventInName);
To completely replace the scene graph, you call the loadWorld() method.
As with all URL references in VRML, an array of strings are passed. These strings
are a list of URLs and URNs to be loaded in order of preference. Should the load
of the first URL fail, the browser attempts to load the second, and so on until a
scene is loaded or the browser reaches the end of the list. If the load fails, the
VRML viewer can notify the user as the developer sees fit. The specification also
states that it is up to the browser whether the loadWorld() call blocks
or starts a separate thread when loading a new scene.
In addition to replacing the whole scene, you may want to add bits at a time to
a scene. You can do this in one of two ways. If you are very familiar with VRML syntax,
you can create strings on the fly and pass them to the createVrmlFromString()
call. The node that is returned can be added to the scene to produce dynamic results.
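Here is a minimal sketch of that approach; the root field is an assumption--an SFNode field in the Script node that USEs a grouping node already defined in the world--and the pattern mirrors the postEventIn() example earlier in this chapter:
SFNode root = (SFNode)getField("root");
Node newBox = Browser.createVrmlFromString(
"Transform { children [ Shape { geometry Box {} } ] }");
root.postEventIn("addChildren", (Field)newBox);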
Perhaps the most useful of the preceding three functions is the createVrmlFromURL()
method. From the definition, you may have noticed that, along with a list of URLs,
the method also takes a node instance and a string that refers to an eventIn
field name. This call is a nonblocking call that starts a separate thread to retrieve
the given file from the URL, converts it into the VRML viewer's internal representation,
and then finally sends the newly created list of nodes to the specified node's eventIn
field. The eventIn field type must be an MFNode. The Node
reference can be any sort of node, not just a part of the script node. This arrangement
enables the script writer to add new nodes directly to the scene graph without having
to write extra functionality in the script.
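For example (the file name is purely illustrative, and root is a Node reference to the grouping node that should receive the result):
String[] urls = new String[1];
urls[0] = "furniture.wrl";                      // hypothetical scenery file
Browser.createVrmlFromURL(urls, root, "addChildren");
// the browser fetches the file in its own thread and later delivers the
// resulting nodes to root's addChildren eventIn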
With both of the create methods, the returned nodes do not become visible until
they have been added to some preexisting node in the scene. Although it is possible
to create an entire scene on the fly within a standalone applet, there is no way
to make the scene visible unless there is an existing node instance to which you can
add the dynamically generated scene.
Once you have created a set of new nodes, you also want to be able to link them
together to get a behavior system similar to the one in the original world. The Browser
class defines methods for dynamically adding and deleting ROUTE statements
between nodes:
public void addRoute(Node fromNode, String fromEventOut,
Node toNode, String toEventIn)
throws InvalidRouteException;
public void deleteRoute(Node fromNode, String fromEventOut,
Node toNode, String toEventIn)
throws InvalidRouteException;
For each of these methods, you must know the node instance for both ends of the
ROUTE. In VRML, you cannot obtain an instance pointer to an individual field
in a node. It is also assumed that if you know you will be adding a ROUTE,
you also know what fields you are dealing with, so a string is used to describe the
field name corresponding to an eventIn or eventOut field. Exceptions
are thrown if either of the nodes or fields does not exist or an attempt to delete
a nonexistent ROUTE is made.
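For example, given Node references to a TimeSensor and a PositionInterpolator that were created on the fly (the variable names are assumptions, and browser stands for the Browser object available to the script), a script could wire and later unwire them:
browser.addRoute(timer, "fraction_changed", mover, "set_fraction");
browser.addRoute(mover, "value_changed", target, "set_translation");
// ... when the animation is no longer needed ...
browser.deleteRoute(timer, "fraction_changed", mover, "set_fraction");
browser.deleteRoute(mover, "value_changed", target, "set_translation");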
You now have all the tools required to generate a world on the fly, respond to
user input, and modify the scene. The only thing that remains is to acquire the wisdom
to create responsive worlds that won't get bogged down in Java code.
When tuning the behaviors in a virtual world, the methods used depend on the execution
model. The VRML API gives you a lot of control over exactly how scripts are executed
and how events passed to it are distributed.
The arrival of an eventIn field at a Script node causes the
execution of the matching method. There is no other way to invoke a method. A script
can start an asynchronous thread, which in turn can call another non-eventIn
method of the script or can even send events directly to other nodes. The VRML 2
specification makes no mention about scripts containing non-eventIn public
methods. Although it is possible to call an eventIn method directly, it
is in no way encouraged. Such programming interferes with the script execution model
by preventing browser optimization and can affect the running of other parts of the
script. Calling an eventIn method directly can also cause performance penalties
in other parts of the world, not to mention reentrancy problems within the eventIn
method itself. If you find it necessary to call an eventIn method of the
script, use the postEventIn() method so that the operation of the browser's
execution engine is not affected.
Unless the mustEvaluate field is set, all the events are queued in timestamp
order from oldest to newest. For each queued event, the corresponding eventIn
method is called. Each eventIn field calls exactly one method. If an eventOut
fans out to a number of eventIns, multiple eventIns are generated--one
for each node. Once the queue is empty, the eventsProcessed() method for
that script is called. The eventsProcessed() method enables any post-event
processing to be performed.
A typical use of this post-processing was shown in the example of the color-changing
cube, earlier in this chapter. In that example, the eventIn method just
took the data and stored it in an internal variable. The eventsProcessed()
method took the internal value and generated the eventOut. This approach
was overkill for such simple behavior. Normally, such simplistic behavior uses VRMLscript
instead of Java. However, the separation of data processing from data collection
is very effective in a high-traffic environment, in which event counts are very high
and the overhead of data processing is best absorbed into a single, longer run instead
of many short runs.
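The following sketch shows the shape of that separation; the sample eventIn, the average_out eventOut, and the total and count instance variables are all hypothetical:
// Each incoming event only stores its value; the arithmetic and the single
// outgoing event happen once per cascade in eventsProcessed().
public void sample(ConstSFFloat value, ConstSFTime ts) {
total += value.getValue();
count++;
}
public void eventsProcessed() {
if (count > 0) {
average_out.setValue(total / count);   // one eventOut for the whole batch
total = 0.0f;
count = 0;
}
}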
Once the eventsProcessed() method has completed execution, any eventOuts
generated as a result are sent as events. If the script generates multiple eventOuts
for a single eventOut field, only one event is sent. All eventOuts
generated during the execution of the script have the same timestamp.
If your script has spawned a thread, and that script is removed from the scene
graph, the browser is required to call the shutdown() method for each active
thread to facilitate a graceful exit.
If you want to maintain static data between invocations of the script, it is recommended
that your VRML Script node have fields to hold the values. Although it is
possible to use static variables within the Java class, VRML makes no guarantees
that these variables will be retained, especially if the script is unloaded from
memory.
If you are a hardcore programmer, you probably want to keep track of all the event-handling
mechanisms yourself. VRML provides the facility to do this with the processEvents()
method. This method is called when the browser decides to process the queued eventIns
for a script. The method is sent an array of the events waiting to be processed,
which you can then do with as you please. Graphics programmers should already be
familiar with event-handling techniques from the Microsoft Windows, Xlib, or Java
AWT system.
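A minimal sketch of taking over the queue follows; it assumes the array element type matches the Event class shown in the hierarchy earlier:
public void processEvents(Event[] events) {
// Only the newest queued event matters to this particular script; the
// older ones are already stale by the time the browser runs the queue.
if (events.length > 0) {
Event latest = events[events.length - 1];
// dispatch on latest.getName() and latest.getValue() here
}
}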
The ROUTE syntax makes it very easy to construct circular event loops.
Circular loops can be quite handy. The VRML specification states that if the browser
finds event loops, it only processes each event once per timestamp. Events generated
as a result of a change are given the same timestamp as the original change because
events are considered to happen instantaneously. When event loops are encountered
in this situation, the browser enforces a breakage of the loop. The following sample
script from the VRML specification uses VRMLscript to explain this process:
DEF S Script {
eventIn SFInt32 a
eventIn SFInt32 b
eventOut SFInt32 c
field SFInt32 save_a 0
field SFInt32 save_b 0
url "data:x-lang/x-vrmlscript, TEXT;
function a(val) { save_a = val; c = save_a+save_b;}
function b(val) { save_b = val; c = save_a+save_b;}
}
ROUTE S.c to S.b
S computes c=a+b with the ROUTE, completing a loop from the
output c back to input b. After the initial event with a=1,
the script leaves the eventOut c with the value of 1.
This causes a cascade effect, in which b is set to 1. Normally,
this generates an eventOut on c with the value 2, but
the browser has already seen that the eventOut c has been traversed
for this timestamp, and therefore enforces a break in the loop. This leaves the values
save_a=1, save_b=1, and the eventOut c=1.
For all animation programming, the ultimate goal is to keep the frame rate as
high as possible. In a multithreaded application like a VRML browser, the less time
spent in behaviors code, the more time that can be spent rendering. Virtual reality
behavior programming in VRML is still very much in its infancy. This section outlines
a few common-sense approaches to keeping up reasonable levels of performance--not
only for the renderer, but also for the programmer.
The first technique is to use Java only where necessary. This may sound a little
strange in a book about Java programming, but consider the resources required to
have not only a 3D-rendering engine but a Java VM loaded to run even a simple behavior;
also consider that the majority of your viewers may be people using low-end PCs.
Because most VRML browsers specify that a minimum of 16M of RAM is required (and
32M is recommended), also loading the Java VM into memory requires lots of swapping
to keep the behaviors going. The inevitable result is bad performance. For this reason,
the interpolator nodes and VRMLscript were created--built-in nodes for common basic
calculations and a small, light language to provide basic calculation abilities.
Use of Java should be limited to the times when you require the capabilities of a
full programming language, such as for multithreading and network interfaces.
When you do have to use Java, keep the amount of calculation in the script to
a minimum. If you are producing behaviors that require either extensive network communication
or data processing, these behaviors should be kept out of the Script node
and sent off in separate threads. The script should start the thread as either part
of its constructor or in response to some event and then return as soon as possible.
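The following sketch shows the pattern; fetchLatestValue() stands in for whatever slow network or database work is required, and result_out is an SFString eventOut obtained with getEventOut():
public void refresh(ConstSFBool clicked, ConstSFTime ts) {
// Hand the slow work to a separate thread and give the CPU straight
// back to the renderer.
Thread worker = new Thread() {
public void run() {
String value = fetchLatestValue();     // hypothetical slow call
result_out.setValue(value);            // one eventOut when the data arrives
}
};
worker.start();
}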
In VR systems, frame rate is king. Don't aim to have a one-hundred percent correct
behavior if it leads to half the frame rate when a ninety-percent correct behavior
will do. It is amazing how users don't notice an incorrect behavior, but as soon
as the picture update slows down, they start to complain. Every extra line of code
in the script delays the return of the CPU back to the renderer. In military simulations,
the goal is to achieve 60fps; even for Pentium-class machines, your goal should be
to maintain at least 20fps. Much of this comes down not only to how detailed the
world is, but also to how complex the behaviors are. As always, the tradeoff between
accuracy and frame rate is up to the individual programmer and the requirements of
the application. Your user will typically accept that a door does not open smoothly
as long as he or she can move around without watching individual frames redraw.
Don't play with the event-processing loop unless you really must. Your behaviors
code will be distributed on many different types of machines and browsers. Each browser
writer knows best how to optimize the event-handling mechanism to mesh with its internal
architecture. With windowing systems, dealing with the event loop is a must if you
are to respond to user input, but in virtual reality, you don't have control over
the whole system. The processEvents() method applies only to the individual
script, not as a common method across all scripts. So although you think you are
optimizing the event handling, you are doing so only for one script. In a reasonably
sized world, another few hundred scripts may also be running, so the optimization
of an individual script isn't generally worth the effort.
Add to the scene graph only what is necessary. If you can modify existing primitives,
do so instead of adding new ones. Every primitive you add to a scene requires the
renderer to convert the scene to its internal representation and then reoptimize
the scene graph to take the new objects into account. When it modifies existing primitives,
the browser is not required to resort the scene graph structure, saving computation
time. A cloudy sky is better simulated using a multiframed texture map image format
(such as MJPEG) on the background node than using lots of primitives that
are constantly modified or dynamically added.
If your scene requires objects to be added and removed on the fly, and many of
these objects are the same, don't just delete them from the scene graph. It is better
to remove them from a node and keep an instance pointer to them so that they can
be reinserted at a later time. At the expense of a little extra memory, you save
time. If you don't take the time now, you may later have to access the objects from
a network or construct them from the ground up from a string representation.
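The following sketch shows that kind of recycling; the node and field names are hypothetical, a java.util.Vector holds the parked nodes, and the postEventIn() pattern follows the one shown earlier in this chapter:
// Park a node instead of throwing it away
root.postEventIn("removeChildren", (Field)bullet);   // take it out of the scene
sparePool.addElement(bullet);                        // remember it for later
// ... later, bring one back instead of rebuilding it from scratch ...
Node recycled = (Node)sparePool.lastElement();
sparePool.removeElementAt(sparePool.size() - 1);
root.postEventIn("addChildren", (Field)recycled);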
Another trick is to create VRML objects but not add them to the visual scene graph.
VRML scripting enables objects to be created but not added to the scene graph. Any
object not added isn't drawn. For node types such as sensors, interpolators, and
scripts, there is no need for these objects to be added because they are never drawn.
Doing so causes extra events to be generated, resulting in a slower system. Normal
Java garbage collection rules apply when these nodes are no longer referenced. VRML,
however, adds one little extra: Adding a ROUTE to any object is the same
as keeping a reference to the object. If a script creates a node, adds one or more
ROUTE statements, and then exits, the node stays allocated and it functions
properly. You can use this approach to set up a VRML node that does not use events
for its visualization.
There are dangers in this approach. Once you lose the node instance pointer, you
have no way to delete the node. You need this pointer if you are to delete the ROUTE.
Deleting ROUTE statements to the object is the only way to remove these
floating nodes. Therefore, you should always keep the node instance pointers for
all floating nodes you create so that you can delete the ROUTE statements
to them when they're no longer needed. You must be particularly careful when you
delete a section of the scene graph that has the only routed eventIn to
a floating node that also contains an eventOut to a section of an undeleted
section. This situation creates the VRML equivalent of memory leaks. The only way
to remove this node is to replace the whole scene or to remove the part of the scene
referenced by the eventOut.
This section develops a framework for creating worlds on the fly. Dynamically
changing worlds add tremendous potential to your VRML scenes. You can develop cyberspace
protocol-based, seamless worlds in which a visitor can take an object from one world
and use or leave it in another world. You can provide a VRML toolkit with which a
visitor creates new VRML objects from existing ones and saves them as separate VRML
files. But for starters, you may just want to add a few new objects or eliminate
objects as a visitor interacts with your scene. The next sections provide an example
of a dynamic world for both CommunityPlace and CosmoPlayer. The example primarily
familiarizes you with the createVrmlFromString(String vrmlSyntax),
createVrmlFromURL(String[] url, Node node, String event),
and loadURL(String[] url, String[] parameter) methods
of the Browser class. Figure 45.2 shows the sample VRML scene before it
dynamically changes.
Figure 45.2.
A simple VRML scene based on the VRML logo.
If you are to add new objects to a VRML scene, a placeholder node must already
exist in the .wrl file. This placeholder need not be anything more sophisticated
than the following empty Transform node:
DEF root_node Transform { }
In fact, this statement can be the entire starting world if the world is to be
built using Java AWT controls. More typically, however, other objects are already
in the world when it is loaded and the Transform node becomes just another
potential node to which you can add new objects. The Transform node has
two eventIn fields--addChildren and removeChildren--that
can be used to add or delete multiple children from within the node.
In the following code example, three objects exist in the VRML scene when the
world is first loaded. Each object enables an event that dynamically changes the
world. Using the three primitive shapes that form the VRML logo, the red cube enables
the createVrmlFromURL() method, the green sphere enables the createVrmlFromString()
method, and the blue cone takes the user to another VRML world by using the loadURL()
method. The cube, sphere, and cone are each children of Transform nodes
to make sure that they are located in different parts of the world (all objects are
located at the Transform node's origin by default). The code that creates
the cube Transform node follows:
DEF cube Transform {
children [
DEF cube_sensor TouchSensor{}
Shape {
appearance Appearance {
material Material {
diffuseColor 1 0 0
}
}
geometry Box { size 1 1 1}
}
# note: script node goes here for CommunityPlace
]
bboxSize 1 1 1
translation -2 0 0
}
Notice that the TouchSensor itself has been defined (with DEF)
as has the whole Transform node. The TouchSensor is the object
that triggers events. Without a defined sensor, the cube has no way to interact with
the rest of the scene. Any mouse click (or touch, if the user has a dataglove) on
the cube does nothing. As you soon see, the other two nodes are similar in definition.
The Shape node contains the appearance of the cube (in this case, a bright
red color) as well as the geometry of the cube (which is 1 unit wide, 1 unit tall,
and 1 unit deep). The bboxSize field defines a bounding box that helps the
VRML viewer render the cube efficiently. The translation field places the
cube two units to the left of the (0,0,0) origin of the VRML scene, along the negative x axis.
Keeping this VRML primer in mind, you are ready to follow the example for both
the CommunityPlace and CosmoPlayer VRML viewers.
The following sections break down the process for creating the dynamically changing
world for the CommunityPlace browser.
NOTE: For demonstration purposes, the
separate scripts are described and grouped with their appropriate objects. It makes
no difference if you have lots of small scripts or one large one. If you are a VR
scene creator, it is probably better to have one large script to keep track of the
scene graph if you want to save a VRML file with the changes. If you are creating
a virtual factory of reusable, plug-and-play component VRML objects, you may prefer
to use many small scripts, perhaps with some "centralized" script to act
as the system controller.
Once the basic VRML file is defined, you must add behaviors. The VRML file stands
on its own at this point. You can click objects, but nothing happens. Because each
object has its own behavior, the requirement for each script is different. Each script
requires one eventIn, which is the notification from its TouchSensor().
The example presented does not have any real-time constraints, so the mustEvaluate
field for each object is left with the default setting of FALSE. For the
cone object, no outputs are sent directly to nodes, so the directOutputs
field is left at FALSE. For the sphere object, outputs are sent directly
to the Group node, so the directOutputs field is set to TRUE.
The directOutputs field for the cube object must also be set to TRUE
for reasons explained in the next section.
In addition to eventIn, the box_script example also needs an
eventOut to send the new object to the grouping node (root_node) that is acting
as the scene root. Good behavior is desirable if the user clicks the cube more than
once, so an extra internal variable is added, keeping the position of the last object
that was added. Each new object added is translated two units from
the previous one. A field is also needed to store the URL of the sample file that
will be loaded. The box_script definition follows:
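(Reconstructed from the fields and events accessed in box_script.java, Listing 45.10; treat the exact field names as assumptions.)
DEF cube_script Script {
 url "box_script.class"
 directOutputs TRUE
 eventIn SFBool isClicked
 eventIn MFNode newNodes
 eventOut MFNode childList
 field SFInt32 zPosition 0
 field MFString newUrl ["cylinder.wrl"]
 field SFNode thisScript USE cube_script
}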
Notice that there is an extra eventIn. Because some processing must be
done on the node returned from the createVrmlFromURL() method, you must
provide an eventIn for the argument. If you do not have to process the returned
nodes, use the root_node.add_children() method instead.
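In that simpler case, a sketch of the call (assuming the script keeps a field SFNode root USE root_node, as the sphere's script does) might be:
// Sketch only: hand the loaded nodes straight to the root node's
// add_children eventIn instead of processing them in the script first.
Node rootNode = (Node) root.getValue();
Browser.createVrmlFromURL(newUrl.getValue(), rootNode, "add_children");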
The other interesting point to note is that the script declaration includes a
field that is a reference to itself. At the time this chapter was written, the draft
specifications for VRML 2 did not specify how a script was to refer to itself when
calling its own eventIns. To play it safe, the method in the preceding declaration
is guaranteed to work. However, it should be possible for the script to specify the
this keyword as the node reference when referring to itself. Check the most
current version of the specification, available at,
for more information.
To show the use of direct outputs, the sphere uses the postEventIn()
method to send the new child directly to root_node. To do this, a copy of
the name that was defined for the Group is taken; when this copy is resolved
in Java, it essentially becomes an instance pointer to the node. Using direct writing
to nodes means that you no longer require the eventOut from the cube's script,
but you do keep the other fields:
DEF sphere_script Script {
url "sphere_script.class"
directOutputs TRUE
eventIn SFBool isClicked
field SFNode root USE root_node
 field SFInt32 zPosition 0
}
The script for the cone is very simplistic. When the user clicks the cone, all
it does is fetch some named URL and set the URL as the new scene graph: a simple
cylinder.
DEF cone_script Script {
url "cone_script.class"
eventIn SFBool isClicked
 field MFString target_url ["cylinder.wrl"]
}
Now that you have defined the scripts, you must wire them together. A number of
routes are added between the sensors and scripts, as shown in Listing 45.8.
#VRML V2.0 utf8
#filename: s_dynamic.wrl
# first the pseudo root
DEF root_node Transform { bboxSize 1000 1000 1000}
# The cube
Transform {
children [
DEF cube_sensor TouchSensor{}
Shape {
appearance Appearance {
material Material {
diffuseColor 1 0 0
}
}
geometry Box { size 1 1 1}
 }
 DEF cube_script Script {
 # reconstructed from the fields accessed in box_script.java (Listing 45.10)
 url "box_script.class"
 directOutputs TRUE
 eventIn SFBool isClicked
 eventIn MFNode newNodes
 eventOut MFNode childList
 field SFInt32 zPosition 0
 field MFString newUrl ["cylinder.wrl"]
 field SFNode thisScript USE cube_script
 }
]
bboxSize 1 1 1
translation 2 0 0
}
ROUTE cube_sensor.isActive TO cube_script.isClicked
ROUTE cube_script.childList TO root_node.add_children
# The sphere
Transform {
children [
DEF sphere_sensor TouchSensor {}
Shape {
appearance Appearance {
material Material {
diffuseColor 0 1 0
}
}
geometry Sphere { radius 0.5 }
}
DEF sphere_script Script {
url "sphere_script.class"
directOutputs TRUE
eventIn SFBool isClicked
field SFNode root USE root_node
field SFInt32 zPosition 0
}
]
 # no translation needed as it is at the origin already
bboxSize 1 1 1
}
ROUTE sphere_sensor.isActive TO sphere_script.isClicked
# The cone
Transform {
children [
DEF cone_sensor TouchSensor {}
Shape {
appearance Appearance {
material Material {
diffuseColor 0 0 1
}
}
geometry Cone {
bottomRadius .5
height 1
}
}
DEF cone_script Script {
url "cone_script.class"
eventIn SFBool isClicked
}
]
bboxSize 1 1 1
translation -2 0 0
}
ROUTE cone_sensor.isActive TO cone_script.isClicked
# end of file
The box sensor adds objects to the scene graph from an external file. This external
file, shown in Listing 45.9, contains a Transform node with a single Cylinder
as a child. Because the API does not permit you to create node types and you have
to place the newly created box at a point other than the origin, you must use a Transform
node. Although you can just load a box from the external scene and then create a
Transform node with the createVrmlFromString() method, this approach
requires more code and slows execution speed. Remember that behavior writing is about
getting things done as quickly as possible; the more you can move to external static
file descriptions, the better.
#VRML V2.0 utf8
#filename: cylinder.wrl
Transform {
children
Shape {
appearance Appearance {
material Material {
diffuseColor 1 1 .85
}
}
geometry Cylinder {}
}
translation 0 4 0
}
# end of file
Probably the most time-consuming task for someone writing a VRML scene with behaviors
is deciding how to organize the various parts in relation to the scene graph structure.
In a simple example like the one in Listing 45.8, there are two ways to arrange the
scripts. Imagine what can happen in a moderately complex file of two or three thousand
objects!
All the scripts in this example are simple. When the node is received back in
newNodes eventIn, the node must be translated to the new position. Ideally,
you would like to do this directly by setting the translation field, but
you are not able to do so because the translation field is encapsulated
within the received node. The only way to translate the node to the new position
is to post an event to the node, naming that field as the destination (which is the
reason you set directOutputs to TRUE). After this is done, you
can then call the add_children() method. Because all the scripts are short,
the processEvents() method is not used because the risk of interrupting browser
execution is minimal. Shorter scripts are less likely to significantly impair
the browser's usual processing. Listing 45.10 shows the complete source code for
the cube script; this code can also be found on the CD-ROM that accompanies this
book.
//filename: box_script.java
import vrml.field.*;
import vrml.node.*;
import vrml.*;
class box_script extends Script {
private SFInt32 zPosition = (SFInt32)getField("zPosition");
private SFNode thisScript = (SFNode)getField("thisScript");
private MFString newUrl = (MFString)getField("newUrl");
// declare the eventOut field
private MFNode childList = (MFNode)getEventOut("childList");
// now declare the eventIn methods
public void isClicked(ConstSFBool clicked, SFTime ts)
{
// check to see if picking up or letting go
if(clicked.getValue() == false)
 // Note: as of the writing of this book, Sony's CommunityPlace
 // Java API had yet to implement the createVrmlFromURL and
 // postEventIn methods
 Browser.createVrmlFromURL(newUrl.getValue(),
 thisScript, "newNodes");
}
public void newNodes(ConstMFNode nodelist, SFTime ts)
{
Node[] nodes = (Node[])nodelist.getValue();
float[] translation={0.0f,0.0f,0.0f};
// Set up the translation
zPosition.setValue(zPosition.getValue() + 2);
translation[0] = zPosition.getValue();
translation[1] = 0;
translation[2] = 0;
// There should only be one node with a transform at the
// top. No error checking.
 // wrap the float[] in an SFVec3f so it can be posted as a Field
 nodes[0].postEventIn("translation",
 (Field) new SFVec3f(translation[0], translation[1], translation[2]));
// now send the processed node list to the eventOut
childList.setValue(nodes);
 }
}
Listing 45.11 (the code can also be found on the CD-ROM) shows the sphere_script
class, which is similar to the cube_script class, except that you have to
construct the text-string equivalent of the cylinder.wrl file. This is a
straightforward string buffer problem. All you have to do is make sure that the Transform
node of the newly added object has an appropriate value for the translation
field to avoid a collision with the existing world objects.
//filename: sphere_script.java
import vrml.field.*;
import vrml.node.*;
import vrml.*;
class sphere_script extends Script {
private SFInt32 zPosition = (SFInt32)getField("zPosition");
private SFNode root = (SFNode)getField("root");
// now declare the eventIn methods
public void isClicked(ConstSFBool clicked, SFTime ts)
{
StringBuffer vrml_string = new StringBuffer();
 MFNode nodes = new MFNode();
// set the new position
zPosition.setValue(zPosition.getValue() + 2);
// check to see if picking up or letting go
if(clicked.getValue() == false)
{
vrml_string.append("Transform { bboxSize 1 1 1 ");
vrml_string.append("translation ");
vrml_string.append(zPosition.getValue());
vrml_string.append(" 0 0 ");
vrml_string.append("children [ ");
vrml_string.append("sphere { radius 0.5} ] }");
nodes.setValue(
Browser.createVrmlFromUrl(vrml_string));
// Note: as of the writing of this book, Sony's CommunityPlace
Java API had yet to implement
// the createVrmlFromURL and postEventIn methods
root.postEventIn("add_children", (Field)nodes);
}
 }
}
The cone_script class is the easiest of the lot. As soon as it receives
a confirmation of a touch, it starts to load another world specified by the URL.
Listing 45.12 reveals the source code; the code is also located on the CD-ROM that
accompanies this book.
//filename: cone_script.java
import vrml.field.*;
import vrml.node.*;
import vrml.*;
class cone_script extends Script {
SFBool isClicked = (SFBool)getField("isClicked");
// The eventIn method
public void isClicked(ConstSFBool clicked, SFTime ts)
{
if(clicked.getValue() == false) {
String s[] = new String[1] ;
s[0] = "cylinder.wrl";
String t[] = new String[1] ;
t[0] = "target=info_frame";
getBrowser().loadURL( s,t );
}
 }
}
By compiling the preceding Java code samples and placing these and the two VRML
source files in your Web directory, you can serve this basic dynamic world to the
rest of the world. The rest of the world will get the same behavior as you do--regardless
of what system individual users are running.
The following sections break down the process for creating the dynamically changing
world for the CosmoPlayer browser.
As Listing 45.13 shows, the VRML 2 file for CosmoPlayer contains no Script
nodes or ROUTE statements. Instead, all scripting is done in the Java class,
which is tied to the .wrl file in the HTML document, Dynamic.htm.
This code is also located on the CD-ROM that accompanies this book.
#VRML V2.0 utf8
#filename: sgi_dynamic.wrl
DEF root_node Transform { },
DEF cube Transform {
children [
DEF cube_sensor TouchSensor{}
Shape {
appearance Appearance {
material Material {
diffuseColor 1 0 0
}
}
geometry Box { size 1 1 1}
}
]
bboxSize 1 1 1
translation -2 0 0
},
DEF sphere Transform {
children [
DEF sphere_sensor TouchSensor{}
Shape {
appearance Appearance {
material Material {
diffuseColor 0 1 0
}
}
geometry Sphere { radius .5 }
}
]
bboxSize 1 1 1
translation 0 0 0
},
DEF cone Transform {
children [
DEF cone_sensor TouchSensor{}
Shape {
appearance Appearance {
material Material {
diffuseColor 0 0 1
}
}
geometry Cone {
bottomRadius .5
height 1
}
}
]
bboxSize 1 1 1
 translation 2 0 0
}
The HTML file in Listing 45.14 is almost identical to the one used in the first
behavior example in Listing 45.5, earlier in this chapter. We only have to change
the document title to Dynamic World Example, the SRC attribute value
to sgi_dynamic.wrl, and the CODE attribute value to sgi_dynamic.class.
With these changes, the HTML document will load the VRML file in a 400x400 pixel
area within the browser window. The Java class is associated with the VRML scene
and even provides an area for any AWT controls instantiated in the Java source code.
<HTML>
<HEAD>
<TITLE>Dynamic World Example</TITLE>
</HEAD>
<BODY>
<CENTER>
<EMBED SRC="sgi_dynamic.wrl" BORDER=0 HEIGHT=400 WIDTH=400>
</CENTER>
<APPLET CODE="sgi_dynamic.class" MAYSCRIPT>
</APPLET>
</BODY>
</HTML>
The source code in Listing 45.15 compiles without any warnings when you use the
JDK 1.1 from Sun and the beta 3 version of CosmoPlayer 1.0. In the listing, I have
commented out the lines that have not yet been implemented by SGI so that you can
compare them to the VRML Consortium's suggested Java API at.
As of this writing, neither the createVrmlFromURL() nor the loadURL()
method had been implemented for the Windows 95/NT 4.0 CosmoPlayer viewer. Note that
SGI has implemented a method declared as replaceWorld(Node node),
which works like the loadURL() method when you use a string parameter that
refers to the URL of a .wrl file. As usual, you can find the code presented
in Listing 45.15 on the CD-ROM that accompanies this book.
//filename: sgi_dynamic.java
import java.awt.*;
import java.applet.*;
import vrml.external.field.*;
import vrml.external.Node;
import vrml.external.Browser;
import vrml.external.exception.*;
import netscape.javascript.JSObject;
public class sgi_dynamic extends Applet implements EventOutObserver{
boolean error = false;
// Browser we're using
Browser browser;
// Root of the scene graph (to which we add our nodes)
Node root=null;
Node sensor[] = {null,null,null};
// Shape group hierarchy
Node[] shape[] = {null,null};
Node[] scene = null;
// EventIns of the TouchSensors
EventOutSFTime touchTime[] = {null,null,null};
// EventIns of the root node
EventInMFNode addChildren;
 EventInMFNode removeChildren;
 public void init() {
 super.init();
 // Get a reference to the CosmoPlayer browser embedded in the HTML page
 JSObject win = JSObject.getWindow(this);
 JSObject doc = (JSObject) win.getMember("document");
 JSObject embeds = (JSObject) doc.getMember("embeds");
 browser = (Browser) embeds.getSlot(0);
try {
// Get root node of the scene, and its EventIns
root = browser.getNode("root_node");
sensor[0] = browser.getNode("cube_sensor");
sensor[1] = browser.getNode("sphere_sensor");
sensor[2] = browser.getNode("cone_sensor");
for(int x=0;x<3;x++) {
touchTime[x] = (EventOutSFTime) sensor[x].getEventOut("touchTime");
touchTime[x].advise(this, new Integer(x));
}
addChildren = (EventInMFNode) root.getEventIn("addChildren");
removeChildren = (EventInMFNode) root.getEventIn("removeChildren");
 // Create shapes to be added on the fly --
 // can re-assign new strings at any time
 //NOTE: as of beta 3 of CosmoPlayer 1.0 for Windows95/NT 4.0,
 //the createVrmlFromURL method was not implemented:
//shape[0] = browser.createVrmlFromURL("cylinder.wrl",root,"addChildren");
shape[0] = browser.createVrmlFromString("Transform {\n" +
" children\n" +
" Shape {\n" +
" appearance Appearance {\n" +
" material Material {\n" +
" diffuseColor 0 1 1\n" +
" }\n" +
" }\n" +
" geometry Cylinder {}\n" +
" }\n" +
" translation 0 -4 0\n" +
"}\n");
shape[1] = browser.createVrmlFromString("Transform {\n" +
" children\n" +
" Shape {\n" +
" appearance Appearance {\n" +
" material Material {\n" +
" diffuseColor 1 1 .85\n" +
" }\n" +
" }\n" +
" geometry Cylinder {}\n" +
" }\n" +
" translation 0 4 0\n" +
"}\n");
 // assign to the scene field declared above so callback() can use it
 scene = browser.createVrmlFromString("Transform {\n" +
" children\n" +
" Shape {\n" +
" appearance Appearance {\n" +
" material Material {\n" +
" diffuseColor 1 1 .85\n" +
" }\n" +
" }\n" +
" geometry Sphere {radius 2}\n" +
" }\n" +
" translation 0 0 0\n" +
"}\n");
}
catch (InvalidNodeException e) {
showStatus("PROBLEMS!: " + e + "\n");
error = true;
}
catch (InvalidEventInException e) {
showStatus("PROBLEMS!: " + e + "\n");
error = true;
}
catch (InvalidVrmlException e) {
showStatus("PROBLEMS!: " + e + "\n");
error = true;
}
if (error == false)
showStatus("Ok...\n");
}
public void callback(EventOut who, double when, Object which) {
Integer whichNum = (Integer) which;
if (whichNum.intValue() == 0) {
addChildren.setValue(shape[0]);
}
else if (whichNum.intValue() == 1) {
addChildren.setValue(shape[1]);
}
else if (whichNum.intValue() == 2) {
//NOTE: as of beta3 of CosmoPlayer 1.0 for Windows95, the loadURL method
//was not implemented:
//loadURL("cylinder.wrl","");
browser.replaceWorld(scene);
}
 }
}
The Java source code contains an object and event declaration and an initialization
section that connects the Java variables to the VRML scene graph nodes and sets up
the callback. Then it defines three nodes: shape[0] is a cyan cylinder,
shape[1] is a yellow cylinder, and scene is a yellow sphere
centered at the origin. The callback listens for any events generated by the user.
When the cube's sensor is activated, shape[0] is added to the VRML scene
graph with the addChildren() method. When the sphere's sensor is activated,
shape[1] is added to the VRML scene graph with the addChildren()
method. When the cone's sensor is activated, the current VRML scene graph is replaced
with the scene node through the replaceWorld() method of the browser
object. Although not used in this example, the removeChildren() method is
called in exactly the same way as the addChildren() method when you want
to remove a single node from the VRML scene graph.
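For example, a minimal sketch (not part of Listing 45.15) that takes the cube's cylinder back out again, reusing the removeChildren EventIn already fetched in init():
// Sketch only: remove the nodes previously added for the cube's sensor.
// removeChildren was obtained in init() exactly like addChildren.
removeChildren.setValue(shape[0]);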
It would be problematic if you had to rewrite this code every time you wanted
to use it in another file. Although you could reuse the Java bytecodes, this means
that you would need identical copies of the script declaration every time you wanted
to use it. Redundancy is not a particularly nice practice from the software engineering
point of view, either. Eventually, you will be caught by the cut-and-paste error
of having extra pieces of ROUTE statements (and extra fields) floating around
that could accidentally be connected to nodes in the new scene, resulting in difficult-to-trace
bugs.
VRML 2 provides a mechanism similar to the C/C++ #include directive and
typedef statements all rolled into one--the PROTO and EXTERNPROTO
statement pair. The PROTO statement acts like a typedef: you use
PROTO with a node and its definition and then you can use that name as though
it were an ordinary node within the context of that file.
If you want to access that prototyped node outside of that file, you can use the
EXTERNPROTO statement to include it in the new file and then use it as though
it were an ordinary node.
Although this approach is useful for creating libraries of static parts, where
it really comes into its own is in creating canned behaviors. A programmer can create
a completely self-contained behavior and, in the best object-oriented tradition,
provide interfaces to only the behaviors he or she wants. The syntax of the PROTO
and EXTERNPROTO statements follow:
PROTO prototypename [ # any collection of
eventIn eventTypeName eventName
eventOut eventTypeName eventName
exposedField fieldTypeName fieldName initialValue
field fieldTypeName fieldName initialValue
] {
# scene graph structure. Any combination of
# nodes, prototypes, and ROUTEs
}
EXTERNPROTO prototypename [ # any collection of
eventIn eventTypeName eventName
eventOut eventTypeName eventName
exposedField fieldTypeName fieldName
field fieldTypeName fieldName
]
"URL" or [ "URN1" "URL2"]
You can then add a behavior to a VRML file by using just the prototypename in
the file. For example, if you have a behavior that simulates a taxi, you may want
to have many taxis in a number of different worlds representing different countries.
The cabs are identical except for their color. Note again the ability to specify
multiple URLs for the behavior. If the browser cannot retrieve the first URL, it
tries the second until it gets a cab.
A taxi can have many attributes (such as speed and direction) that the user of
the cab need not really discriminate. To incorporate a virtual taxi into your world,
all you really care about is a few things such as being able to signal a cab, get
in, tell it where to go, pay the fare, and then get out when it has reached its destination.
From the world authors' point of view, how the taxi finds its virtual destination
is unimportant. A declaration of the taxi prototype file might look like the following:
#VRML V2.0 utf8
# Taxi prototype file taxi.wrl
PROTO taxicab [
exposedField SFBool isAvailable TRUE
eventIn SFBool inCab
eventIn SFString destination
eventIn SFFloat payFare
eventOut SFFloat fareCost
eventOut SFInt32 speed
eventOut SFVec3f direction
field SFColor color 1 0 0
# rest of externally available variables
] {
DEF root_group Transform {
# Taxi shape description here
}
DEF taxi_script Script {
url ["taxi.class"]
# rest of event and field declarations
}
 # ROUTE statements to connect it altogether
}
To include the taxi in your world, the file would look something like the following:
#VRML V2.0 utf8
#
# myworld.wrl
EXTERNPROTO taxi [
exposedField SFBool isAvailable
eventIn SFBool inCab
eventIn SFString destination
eventIn SFFloat payFare
eventOut SFFloat fareCost
eventOut SFInt32 speed
eventOut SFVec3f direction
field SFColor color
# rest of externally available variables
]
[ "", ""]
# some scene graph
#....
Transform {
children [
# other VRML nodes. Then we use the taxi
DEF my_taxi taxi {
color 0 1. 0
}
 ]
}
Here is a case in which you are likely to use the postEventIn() method
to call a cab. Somewhere in the scene graph, you have a control that your avatar
uses to query a nearby cab for its isAvailable field. (An avatar is the
virtual body used to represent you in the virtual world.) If the isAvailable
field is TRUE, the avatar sends the event to flag the cab. Apart from the
required mechanics to signal the cab with the various instructions, the world creator
does not care how the cab is implemented. By using the EXTERNPROTO call,
the world's creator and users can always be sure of getting the latest version of
the taxi implementation and that the taxi will exhibit uniform behaviors regardless
of which world the users are in.
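As a rough sketch of that flagging step (the cab field and the surrounding Java are hypothetical; only the inCab eventIn from the prototype and the postEventIn() call used earlier in this chapter come from the text):
// Hypothetical sketch: flag the prototyped taxi from a script node's Java class.
// Assumes the script declares:  field SFNode cab USE my_taxi
SFNode cab = (SFNode) getField("cab");
Node taxiNode = (Node) cab.getValue();
// Ask the cab to let the avatar in.
taxiNode.postEventIn("inCab", (Field) new SFBool(true));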
CommunityPlace's approach to the VRML/Java API is set up to take advantage of
using Java within prototyped intranode behaviors. CosmoPlayer, on the other hand,
leaves intranode behavior scripting to VRMLscript. You can take advantage of Java
class reusability directly within the CosmoPlayer VRML/Java API. For example, you
can create a bounce class in Java that many different types of VRML objects
could inherit in the Java source code itself. The bounce class would define
how to deform an object as it hits an obstacle and changes direction; the class would
be written only once, yet take advantage of full reusability.
The information in this chapter has so far relied on static, predefined behaviors
available either within the original VRML file or retrievable from somewhere on the
Internet.
One exciting goal of VR worlds is to be able to create autonomous agents that
have some degree of artificial intelligence. Back in the early days of programming,
self-modifying code was common, but it faded away as more resources and higher-level
programming languages removed the need. A VR world can take advantage of self-modifying
code.
Stephenson's Librarian from Snow Crash is just one example of how an independent
agent can act in a VR world. His model was very simple--a glorified version of today's
2D HTML-based search engine that, when requested, searched the U.S. Library of Congress
for information related to a desired topic (the Librarian also has speech recognition
and synthesis capabilities). The next generation of intelligent agents will include
learning behavior as well.
The VRML API enables you to go the next step further--a virtual assistant that
can modify its own behavior to suit your preferences. This is not just a case of
loading in some canned behaviors. By combining VRMLscript and Java behaviors, you
can create customized behaviors on the fly by concatenating the behavior strings
and script nodes, calling the createVrmlFromString() method, and adding
it to the scene graph in the appropriate place. Although doing so is probably not
feasible with current Pentium-class machines, the next generation of processors will
probably make it so.
With the tools presented in this chapter, you should be able to create whatever
you require of cyberspace. There is only so much you can do with a 2D screen in terms
of new information-presentation techniques. The third dimension of VRML enables you
to create experiences that are far beyond what you expect of today's Web pages. 3D
representation of data and VR behaviors programming is still very much in its infancy--so
much so that, at the time of this writing, only Sony's CommunityPlace, SGI's CosmoPlayer,
and Netscape's Live3D viewers were available for testing, and even then, many parts
of these viewers were not implemented. In fact, the Live3D API works only with VRML
1 and is quite different from what is suggested by the VRML 2 standard.
If you are serious about creating behaviors, you must learn VRML thoroughly. Many
little problems can catch the unwary, particularly the peculiarities of VRML syntax
when it comes to ordering objects within the scene graph. An object placed at the
wrong level severely restricts its actions. A book on VRML is a must for this work.
For a good reference on VRML 2, check out Teach Yourself VRML 2 in 21 Days, by Chris
Marrin and Bruce Campbell (published by Sams.net Publishing).
Whether you are creating reusable behavior libraries, an intelligent postman that
brings the mail to you wherever you are, or simply a functional Java machine for
your virtual office, the excitement of behavior programming is catching.
Edit (2008-11-09): Robert Bradshaw posted a patch to my code and the Cython implementation is now a lot faster. Click here to read more.
In a comment on a recent post, Robert Samal asked how Cython compares to C++. The graph below shows a comparison of a greedy critical set solver written in Cython and C++ (both use a brute force, naive, non-randomised implementation of a depth first search):
So things look good until n = 10. In defence of Cython, I must point out that my implementation was a first attempt and I am by no means an expert on writing good Cython code. Also, the Cython code is probably fast enough – in my experience, solving problems (computationally) for latin squares of order 10 is futile, so the code is more convenient for testing out small ideas.
edit: Robert’s code is here
Archived Comments
Date: 2008-03-04 05:40:32 UTC
Author: Mike Hansen
You should post the Cython and C++ code because it looks like there maybe some obvious fixes to the Cython to make it behave better.
Date: 2008-03-04 21:01:39 UTC
Author: Robert Samal
Does somebody else have some experience in how cython compares
with C/C++? Every once in a while I need to do some computation (something NP-complete or worse in general, so it
usually ends up as an ugly backtracking). I’d be happy to do everything from within Sage (and using python/cython), but I’m not sure, if it is fast enough (or if it getting fast enough, I suppose that cython is improving gradually).
Date: 2008-07-07 11:59:22 UTC
Author: Alexandre Delattre
Hi,
After looking quickly into the code, I’m pretty sure some overhead is caused by the __getitem__ and __setitem__ methods, you use to override the [] operator.
When calling L[i, j] (or L[i, j] = x), those special methods are resolved at runtime and hence involve additional python mechanism. While they make the code readable, you lose the interest of “cdef” methods which are called much faster.
IMO, a good compromise would be to put the code in __getitem__ into a regular ‘cdef getitem()’ method, then make __getitem__ as a wrapper of the regular method:
def __getitem__(self, rc):
i, j = rc
return self.getitem(i, j)
cdef int getitem(self, int i, int j):
… # Put your code here
and replace the L[i, j] by L.getitem(i, j) in your cython code.
Also put “void” return type on cdef method that returns nothing could help a bit.
I’ll try to make these changes and run the benchmark again.
Date: 2008-11-08 15:12:21 UTC
Author: Robert Bradshaw
Date: 2008-11-11 17:24:47 UTC
Author: Ben Racine
Any chance that we might see a plot of the improved data… wouldn’t want people to come here and only see the ‘depressing’ data.
Date: 2008-11-11 17:27:12 UTC
Author: Ben Racine
Nevermind, I now see the new results up one level.
Date: 2011-09-14 02:19:56 UTC
Author: Alex Quinn
The link to the improved data is dead:
Same for the link to the motivation (“recent post”):
Are these viewable elsewhere?
Thanks a lot for doing this and posting it! Very helpful in any case.
Date: 2011-09-14 02:22:59 UTC
Author: Alex Quinn
Found it! Here’s the post with the improved data:
Date: 2011-09-14 03:47:40 UTC
Author: Alex Quinn
Code link is still broken:
Date: 2015-10-10 08:12:16.866201 UTC
Author: Mohammad M. Shahbazi
nice challenge. I’ve been using cython for a couple of years. it really sucks
DAPPLE Example: grades
Source code:
// grades -- compute weighted average for homework grades
// written using the Data-Parallel Programming Library for Education (DAPPLE)
//
// This program demonstrates some very simple DAPPLE features. A set
// of students' grades are represented as vectors, and then the
// program computes the weighted averages.
//
// David Kotz 1995
// $Id: grades.cc,v 1.1 95/02/21 18:11:55 dfk CS15 Locker: dfk $

#include
#include
#include "dapple.h"

const int N = 5;   // number of students, presumably large

int main(int argc, char **argv)
{
  // homework grades, each for N students
  floatVector hw1(N), hw2(N), hw3(N);

  // weighted average grade
  floatVector average(N);

  // relative weights of the homeworks (should add to 1)
  const float w1 = 0.40, w2 = 0.25, w3 = 0.35;

  cout << "Enter " << N << " HW1 grades: "; cin >> hw1;
  cout << "Enter " << N << " HW2 grades: "; cin >> hw2;
  cout << "Enter " << N << " HW3 grades: "; cin >> hw3;
  cout << endl;

  cout << "HW1 (" << w1*100 << "%): " << hw1 << endl;
  cout << "HW2 (" << w2*100 << "%): " << hw2 << endl;
  cout << "HW3 (" << w3*100 << "%): " << hw3 << endl;

  average = (hw1 * w1) + (hw2 * w2) + (hw3 * w3);

  cout << "Averages: " << average << endl;
  cout << "Best is " << max_value(average) << endl;
  cout << "Worst is " << min_value(average) << endl;

  return(0);
}
Demonstration:
grades < grades.data
Enter 5 HW1 grades: Enter 5 HW2 grades: Enter 5 HW3 grades:
HW1 (40%): 98 87 94 84 76
HW2 (25%): 90 93 80 88 83
HW3 (35%): 100 85 88 89 85
Averages: 96.7 87.8 88.4 86.75 80.9
Best is 96.7
Worst is 80.9
If we want to use the popular messaging system Kafka with our Elixir projects, we have a few wrappers we can choose from. This blogpost covers integrating one of them, Kaffe, which doesn’t have a lot of resources and therefore can be tricky to troubleshoot.
In this codealong we’ll build a simple Elixir application and use Kaffe to connect it to a locally running Kafka server. Later we’ll cover a couple of variations to connect a dockerized Kafka server or an umbrella Elixir app.
This post assumes basic knowledge of Elixir and no knowledge of Kafka or Kaffe. Here is the repo with the full project: Elixir Kaffe Codealong.
What is Kafka, briefly?
Kafka is a messaging system. It does essentially three things:
- Receives messages from applications
- Keeps those messages in the order they were received in
- Allows other applications to read those messages in order
A use case for Kafka: Say we want to keep an activity log for users. Every time a user triggers an event on your website - logs in, makes a search, clicks a banner, etc. - you want to log that activity. You also want to allow multiple services to access this activity log, such as a marketing tracker, user data aggregator, and of course your website’s front-end application. Rather than persisting each activity to your own database, we can send them to Kafka and allow all these applications to read only what they need from it.
Here’s a basic idea of how this might look:
The three services reading from Kafka would only take the pieces of data that they require. For example, the first service would only read from the
banner_click topic while the last only from
search_term. The second service that cares about active users might read from both topics to capture all site activity.
Basic Kafka terminology
Before we jump into the codealong let’s clarify a few common Kafka terms you’ll run into as you’re learning more about this service:
- consumer: what is receiving messages from Kafka
- producer: what is sending messages to Kafka
- topic: a way to organize messages and allow consumers to only subscribe to the ones they want to receive
- partition: allows a topic to be split among multiple machines and retain the same data so that more than one consumer can read from a single topic at a time
- leader/replica: these are types of partitions. There is one leader and multiple replicas. The leader makes sure the replicas have the same and newest data. If the leader fails, a replica will take over as leader.
- offset: the unique identifier of a message that keeps its order within Kafka
Codealong: basic Elixir app & Kafka running locally
Set up Kafka Server
Follow the first two steps of the quickstart instructions from Apache Kafka:
- Download the code
- Start the servers:
Zookeeper (a service that handles some coordination and state management for Kafka):
bin/zookeeper-server-start.sh config/zookeeper.properties
Kafka:
bin/kafka-server-start.sh config/server.properties
Set up Elixir App
1. Start new project
mix new elixir_kaffe_codealong
- 2. Configure kaffe
- 2.a: In mix.exs add :kaffe to the list of extra applications:
def application do
  [
    extra_applications: [:logger, :kaffe]
  ]
end
- 2.b: Add kaffe to list of dependencies:
defp deps do
  [
    {:kaffe, "~> 1.9"}
  ]
end
- 2.c: Run mix deps.get in the terminal to lock new dependencies.
- 3. Configure producer
In config/config.exs add:
config :kaffe,
  producer: [
    endpoints: [localhost: 9092],
    # endpoints references [hostname: port]. Kafka is configured to run on port 9092.
    # In this example, the hostname is localhost because we've started the Kafka server
    # straight from our machine. However, if the server is dockerized, the hostname will
    # be called whatever is specified by that container (usually "kafka")
    topics: ["our_topic", "another_topic"]
    # add a list of topics you plan to produce messages to
  ]
4. Configure consumer
- 4.a: add /lib/application.ex with the following code:
defmodule ElixirKaffeCodealong.Application do
  use Application # read more about Elixir's Application module here:

  def start(_type, args) do
    import Supervisor.Spec
    children = [
      worker(Kaffe.Consumer, []) # calls to start Kaffe's Consumer module
    ]
    opts = [strategy: :one_for_one, name: ExampleConsumer.Supervisor]
    Supervisor.start_link(children, opts)
  end
end
- 4.b: back in mix.exs, add a new item to the application function:
def application do
  [
    extra_applications: [:logger, :kaffe],
    mod: {ElixirKaffeCodealong.Application, []}
    # now that we're using the Application module, this is where we'll tell it to start.
    # We use the keyword `mod` with applications that start a supervision tree,
    # which we configured when adding our Kaffe.Consumer to Application above.
  ]
end
- 4.c: add a consumer module to accept messages from Kafka as /lib/example_consumer.ex with the following code:
defmodule ExampleConsumer do
  # function to accept Kafka messages MUST be named "handle_message"
  # MUST accept arguments structured as shown here
  # MUST return :ok
  # Can do anything else within the function with the incoming message

  def handle_message(%{key: key, value: value} = message) do
    IO.inspect(message)
    IO.puts("#{key}: #{value}")
    :ok
  end
end
- 4.d: configure the consumer module in /config/config.exs
config :kaffe,
  consumer: [
    endpoints: [localhost: 9092],
    topics: ["our_topic", "another_topic"],      # the topic(s) that will be consumed
    consumer_group: "example-consumer-group",    # the consumer group for tracking offsets in Kafka
    message_handler: ExampleConsumer             # the module that will process messages
  ]
- 5. Add a producer module (optional, can also call Kaffe from the console)
We're going to wrap the functions Kaffe provides us in our own functions in ExampleProducer. Calling on Kaffe directly would also work; the produce_sync function is what ultimately sends our message to Kafka.
Add /lib/example_producer.ex with the following code:
defmodule ExampleProducer do
  def send_my_message({key, value}, topic) do
    Kaffe.Producer.produce_sync(topic, [{key, value}])
  end

  def send_my_message(key, value) do
    Kaffe.Producer.produce_sync(key, value)
  end

  def send_my_message(value) do
    Kaffe.Producer.produce_sync("sample_key", value)
  end
end
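Since Kaffe.Producer.produce_sync/2 takes a topic and a list of {key, value} tuples, sending a batch is just a longer list. A small sketch (the keys and values here are made up):
# Hypothetical batch send to one of the topics configured above.
Kaffe.Producer.produce_sync("our_topic", [
  {"user_1", "logged_in"},
  {"user_2", "clicked_banner"}
])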
- 6. Send and receive messages in the console!
Now we have everything configured and can use the modules we’ve created to send and read messages through Kafka!
- We’re going to call on our producer to send a message to the Kafka server.
- The Kafka server receives the message.
- Our consumer, which we configured to subscribe to the topic called “another_topic”, will receive the message we’ve sent and print it to the console.
Start an interactive elixir shell with
iex -S mix and call the following:
iex> ExampleProducer.send_my_message({"Metamorphosis", "Franz Kafka"}, "another_topic")
...> [debug] event#produce_list topic=another_topic
...> [debug] event#produce_list_to_topic topic=another_topic partition=0
...> :ok
iex> %{
...>   attributes: 0,
...>   crc: 2125760860,        # will vary
...>   key: "Metamorphosis",
...>   magic_byte: 1,
...>   offset: 1,              # will vary
...>   partition: 0,
...>   topic: "another_topic",
...>   ts: 1546634470702,      # will vary
...>   ts_type: :create,
...>   value: "Franz Kafka"
...> }
...> Metamorphosis: Franz Kafka
Variations: Docker & Umbrella Apps
- If you’re running Kafka from a docker container (most common in real applications), you will use that hostname in the config file rather than
localhost
- In an umbrella app you’ll configure Kaffe in the child application running it. If you have apps separated by environment, you can start the consumer by structuring it as a child like this:
children = case args do
  [env: :prod] -> [worker(Kaffe.Consumer, [])]
  [env: :test] -> []
  [env: :dev] -> [worker(Kaffe.Consumer, [])]
  [_] -> []
end
Troubleshooting Errors
- No leader error
** (MatchError) no match of right hand side value: {:error, :LeaderNotAvailable}
Solution: Try again. It just needed a minute to warm up.
- Invalid Topic error
** (MatchError) no match of right hand side value: {:error, :InvalidTopicException}
Solution: Your topic shouldn’t have spaces in it, does it?
The end
This should have given you the basic setup for you to start exploring more of this on your own, but there’s lots more you can do with Kaffe so check out sending multiple messages, consumer groups, etc. If you come up with any more troubleshooting errors you’ve solved, let us know by creating an issue here.
Resources
- Elixir Kaffe Codealong
- Kaffe on Github
- Kaffe on Hexdocs
- Kafka quickstart
- Kafka in a Nutshell
- Application module in Elixir
Caught a mistake or want to contribute to the article? Edit this page on GitHub!
Text updated with Python in CLR doesn't render
On 02/11/2017 at 07:35, xxxxxxxx wrote:
I'm trying to update some values in a scene (text, texture from files and colours) before rendering using the Python SDK.
Updating the textures and colours works fine, but when I update the text it no longer renders.
I'm using a Python plugin to listen for the command line arguments message (ultimately I'll use those values for customising) and update the scene, before rendering it out.
I'm invoking the command line renderer with the -nogui and -debug flags:
/opt/maxon/cinema4d/19.024/bin/Commandline -nogui -debug
Here's how I'm updating the colour:
extrude_object = doc.SearchObject("Extrude")
extrude_object[c4d.ID_BASEOBJECT_COLOR] = c4d.Vector(1, 0, 0.6)
This works fine; when the scene is rendered out the colour change is as I expect.
Here's how I'm updating the text:
text_object = doc.SearchObject("Text")
text_object[c4d.PRIM_TEXT_TEXT] = "Hello"
This works in the Python console in Cinema 4D, and in the plugin it seems fine. Querying back the value and printing it out shows what I've just set. However, when the scene is rendered out, the text is completely missing.
The template is using a spline for text. When using MoText I get the same output (ie, no text at all) but I also get some errors printed during rendering:
CRITICAL: NullptrError [text_object.cpp(885)] [objectbase1.hxx(370)] CRITICAL: NullptrError [text_object.cpp(885)] [objectbase1.hxx(370)] CRITICAL: NullptrError [text_object.cpp(617)] [objectbase1.hxx(370)]
Have any of you experienced this issue before? Have you managed to successfully update text values using Python and the -nogui flag?
Side note: In my real scene I'm using UserData for the values, wired up with XPresso to make the text and colours update. I get the exact same behaviour with that as I'm describing here, but to set the values I use something like:
custom_data[c4d.ID_USERDATA, 1] = c4d.Vector(1, 0, 0.6)
custom_data[c4d.ID_USERDATA, 2] = "Hello"
I've made a very simple scene to demonstrate the issue, you can download it here:
It looks like this
Here's the script I'm using in full to test this:
import c4d
import sys

def PluginMessage(id, data):
    if id == c4d.C4DPL_COMMANDLINEARGS:
        return render_pngs(sys.argv)
    return False

def render_pngs(command_line_args):
    path = "/home/ubuntu/simple-text-scene.c4d"
    c4d.documents.LoadFile(path)
    doc = c4d.documents.GetActiveDocument()

    # customise it
    text_object = doc.SearchObject("Text")
    text_object[c4d.PRIM_TEXT_TEXT] = str("Hello")
    extrude_object = doc.SearchObject("Extrude")
    extrude_object[c4d.ID_BASEOBJECT_COLOR] = c4d.Vector(1, 0, 0.6)

    # render it
    renderData = doc.GetActiveRenderData().GetData()
    xres = int(renderData[c4d.RDATA_XRES])
    yres = int(renderData[c4d.RDATA_YRES])
    bmp = c4d.bitmaps.BaseBitmap()
    bmp.Init(x=xres, y=yres, depth=24)
    renderData[c4d.RDATA_GLOBALSAVE] = True
    renderData[c4d.RDATA_SAVEIMAGE] = True
    renderData[c4d.RDATA_FORMAT] = c4d.FILTER_PNG
    renderData[c4d.RDATA_FRAMESEQUENCE] = c4d.RDATA_FRAMESEQUENCE_MANUAL
    renderData[c4d.RDATA_FRAMEFROM] = c4d.BaseTime(0.5)
    renderData[c4d.RDATA_FRAMETO] = c4d.BaseTime(0.5)
    path = "/home/ubuntu/frames/frame"
    renderData[c4d.RDATA_PATH] = path
    renderData.SetFilename(c4d.RDATA_PATH, path)
    res = c4d.documents.RenderDocument(doc, renderData, bmp,
                                       c4d.RENDERFLAGS_EXTERNAL | c4d.RENDERFLAGS_NODOCUMENTCLONE)
    return True
On 03/11/2017 at 09:12, xxxxxxxx wrote:
Hi,
welcome to Plugin Café forums
We are looking into this, but it might take a few days. I'll get back to you as soon as I have any information.
On 03/11/2017 at 09:32, xxxxxxxx wrote:
Hi Andreas, thanks for the update! I've also found what I think may be a problem with the Python libraries bundled with the R19 command line renderer client – should I start a new thread for that?
On 03/11/2017 at 09:34, xxxxxxxx wrote:
Hi,
please hold on for a moment. We'll get in contact with you.
On 03/11/2017 at 12:06, xxxxxxxx wrote:
It can sound stupid, but in your plugin there is no c4d.EventAdd(). The c4d console automatically adds an event for you, which is why it works in the console.
You may need to add an event before rendering. Launching a render means the scene is copied internally, and I don't know whether data that is set in the scene but not pushed by an event gets picked up in that copy of the scene.
But I could be wrong; I'm not very experienced with the c4d command line plugin.
On 03/11/2017 at 15:06, xxxxxxxx wrote:
Hi gr4ph0s, thanks for the suggestion – and it doesn't sound stupid at all!
While working on the issue I saw some other scripts that used c4d.EventAdd(), and have tried both with and without it, but sadly it didn't make any difference. Your line of thinking matches mine – that somehow the new data isn't being prepared properly for the render, and there's perhaps another command I need to run. Andreas said he's looking into it for me now so hopefully we'll find out :)
RemoteCache.replaceAll infinite loop
Yog Sothoth Oct 31, 2017 5:51 AM
In certain cases the replaceAll method on RemoteCache descends into an infinite loop. Here's a JUnit test that proves it.
package com.test;

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Properties;
import java.util.Set;

import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.client.hotrod.exceptions.HotRodClientException;
import org.infinispan.client.hotrod.impl.ConfigurationProperties;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.BlockJUnit4ClassRunner;

@RunWith(BlockJUnit4ClassRunner.class)
public class InfinispanTest {

    RemoteCacheManager cacheManager;

    @Before
    public void before() {
        Properties hotrodProps = new Properties();
        hotrodProps.setProperty(ConfigurationProperties.SERVER_LIST, "localhost:11222");
        cacheManager = new RemoteCacheManager(new ConfigurationBuilder().withProperties(hotrodProps).build());
    }

    @Test
    public void testRemoveAll() {
        Map<String, Set> localMap = new HashMap<>();
        Map<String, Set> remoteMap = createStorage("test");
        remoteMap.clear();
        populate(localMap);
        populate(remoteMap);

        localMap.replaceAll((k, v) -> {
            v.remove("v3");
            System.out.println(k);
            return v;
        });
        remoteMap.replaceAll((k, v) -> {
            v.remove("v3");
            System.out.println(k);
            return v;
        });

        System.out.println(localMap);
        System.out.println(remoteMap);
    }

    private <T, U> Map<T, U> createStorage(String arg0) {
        try {
            cacheManager.administration().createCache(arg0, null);
        } catch (HotRodClientException hrce) {
            hrce.printStackTrace(System.err);
        }
        return cacheManager.getCache(arg0);
    }

    private static void populate(Map<String, Set> arg0) {
        Set s = new HashSet();
        s.add("v1");
        s.add("v2");
        s.add("v3");
        s.add("v4");
        s.add("v5");
        s.add("v6");
        arg0.put("k1", s);
        arg0.put("k2", s);
        arg0.put("k3", s);
        arg0.put("k4", s);
        arg0.put("k5", s);
        arg0.put("k6", s);
    }
}
The test will print "k6" endlessly. It reproduces on Set, List and Map sub-values.
1. Re: RemoteCache.replaceAll infinite loop
Radim Vansa Oct 31, 2017 2:08 PM (in response to Yog Sothoth)
I don't think this is a problem on Infinispan side; the function passed to replaceAll should not modify its argument(s). See the default implementation in ConcurrentMap; this calls conditional replace(k, v, function.apply(k, v)). As you've modified the v in the function, the conditional replace assumes that the old value is already the modified one, and replace fails.
2. Re: RemoteCache.replaceAll infinite loop
Yog Sothoth Oct 31, 2017 2:21 PM (in response to Radim Vansa)
Map.replaceAll javadoc states "Replaces each entry's value with the result of invoking the given function on that entry until all entries have been processed or the function throws an exception.". It literally implies that the v-argument will change, which is furthermore supported by its default implementation example in those javadocs. Besides, it does work on a HashMap, so it should on a RemoteCache as well, as they claim to implement Map interface.
3. Re: RemoteCache.replaceAll infinite loop
Will Burns Oct 31, 2017 3:27 PM (in response to Yog Sothoth)
What rvansa mentioned is exactly what the issue is. The problem is that the HashMap can use identity equivalence since it is on the same JVM. Unfortunately in a Remote client like this we can only rely on object equality. The easiest way to fix this would just be to make a copy of the Set in the lambda (if changes are required) and return that instead. You can log a JIRA at, so that we override the default interface impl and allow for the value to be modified in place. But to be honest this isn't feasible until we allow for lambdas to be sent to the remote node, which I hope will be added sometime, but I have no idea when it will.
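A sketch of that suggested workaround against the test above (copy the deserialized Set instead of mutating it, so the expected value passed to the conditional replace still equals what the server has stored):
remoteMap.replaceAll((k, v) -> {
    // Work on a copy; leave the value object the client handed us untouched.
    Set copy = new HashSet(v);
    copy.remove("v3");
    return copy;
});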
4. Re: RemoteCache.replaceAll infinite loop
Yog Sothoth Oct 31, 2017 4:54 PM (in response to Will Burns)
>>a copy of the Set in the lambda (if changes are required) and return that instead
Ah, I see what you mean now, but I still don't get how that would explain the effect of iterating endlessly on the last element only.
>>we can only rely on object equality.
>>we override the default interface impl
I would expect Map to have been implemented properly, or otherwise stated using profanely red letters in the RemoteCache's method javadocs where default behaviour is not supported. Again, the Map.replaceAll javadocs clearly explain how it must work, and therefore section 3.2 of the User guide for Infinispan 9.1 is totally misleading: migrating is a huge pain in the back and people like me must test thousands of lines of code like that in the test, to see if your SPI complies with what it's supposed to do.
PS. I will accept my comment as a correct answer, because i needed to know for sure if it's a bug or intended behaviour and i have got that question answered .
5. Re: RemoteCache.replaceAll infinite loop
Radim Vansa Nov 1, 2017 4:34 AM (in response to Yog Sothoth)
You keep referencing Map, but RemoteCache is a ConcurrentMap, and overriden method in implemented interface takes precedence (we could say that JDK itself does break/twist the contract). Any ConcurrentMap that does not implement replaceAll on its own will suffer this problem. What you passed in is not a (pure) function but a mutator.
While Will is wrong regarding the identity (Map.replaceAll does not compare the old vs. new value at all and ConcurrentMap will use equals even in single VM) a phrase "Please educate yourself" sounds very arrogant; please consider your wording more carefully next time.
6. Re: RemoteCache.replaceAll infinite loop
Dan Berindei Nov 1, 2017 5:28 AM (in response to Radim Vansa)
To be fair, the replaceAll documentation doesn't mention that the function parameter is not allowed to modify the value in-place. In fact, modifying the values in-place seems to be supported in all the JDK implementations, so Infinispan should really have a warning about that.
7. Re: RemoteCache.replaceAll infinite loop
Will Burns Nov 1, 2017 1:23 PM (in response to Yog Sothoth)
I am more than aware of how collection equals are implemented. And actually the Set is always equal to itself (it doesn't matter how many times you change it); if it were not, that would break the reflexive property. The lambda is never creating a new Set.
If you dig a little deeper you will see some more details. First off what I said is technically implementation specific, but most implementations of Map will check object identity first and then use object equality if the first doesn't pass. And even most equality methods are implemented in a similar fashion checking if it equals this.
Looking at ConcurrentHashMap.java, when it looks at a node it will do an identity check first, which will be true in a local JVM when the object is the same between replace calls:
if (cv == null || cv == ev || (ev != null && cv.equals(ev))) {
In this case on the local JVM it will not get to the object equals invocation with your lambda unless the identity check fails, since you always return the same object.
The remote case is different though, as the Set is deserialized remotely and the object instance is never the same. Thus it always relies on object equality, which means the expected value passed to replace has to be equal to what is stored, and unfortunately, using your lambda, it isn't:
default void replaceAll(BiFunction<? super K, ? super V, ? extends V> function) {
    Objects.requireNonNull(function);
    forEach((k,v) -> {
        while(!replace(k, v, function.apply(k, v))) { // v changed or k is gone
            if ( (v = get(k)) == null) { // k is no longer in the map.
                break;
            }
        }
    });
}
Note how on line 4 the same value is always passed to the replace method and the function is applied before invoking the replace. This means it is essentially invoking the following every time
replace(k, modifiedv, modifiedv);
Which as you would expect would always fail. This means it will then get stuck in a loop, because the value is never queued again.
As dan.berindei mentioned the next best thing we could do besides what I already mentioned about creating a JIRA to actually implement this the same way is just to update the Javadoc or possibly log a warning etc.
Unfortunately RemoteCache implemented ConcurrentMap before it had the replaceAll method (Java 1.8) and it hasn't been updated since then.
>Because I think 3D is very valuable as a teaching tool,
>whether you call it CPFE or PFI (Programming (for the)
>Fun (of) It), but unless the 3D can happen in Python
>(with decent performance) we lose the benefits of using
>Python as the teaching language (or at least some of the
>benefits).
>
>--Dethe

Maybe some other term than '3D' would more precisely identify the functionality you seek. You seem to be talking about real time, interactive, videogame type experiences -- something consumer-level software/hardware has only gotten good at recently (since the good ol' days of Pong and pre-GUI OSes).

When it comes to learning the nuts and bolts of 3D, it's important to distill generic concepts from the vast smorgasbord of implementations. And even at the generic level, you've got the math frameworks, such as xyz, homogeneous coordinates, transformation matrices and quaternions, and then the more computer-oriented frameworks, such as viewing frustum, clipping rectangles, textures, and the different APIs (e.g. OpenGL, DirectX, Java3D etc.).

In my Python + Povray approach, it's mostly the math frameworks I'm trying to tune in at the conscious level -- mostly by implementing Vector as a Python class. Python is good for playing with Vectors because of the operator overloading feature: you can write __add__ and __mul__ methods and have these pertain to vectors, e.g. (a,b,c) + (d,e,f) = (a+d,b+e,c+f). Or:

>>> from coords import Vector
>>> a = Vector((1,2,3))
>>> b = Vector((4,5,6))
>>> a+b
Vector (5.0, 7.0, 9.0)

When you want to rotate a vector around an axis, that's a method too:

>>> a = Vector((1,0,0))   # pointing along x-axis
>>> a.roty(180)           # rotate 180 deg around y-axis
Vector (-1.0, 0.0, 0.0)

So the point of "pure Python" in my own curriculum writing is to help reinforce vector concepts -- what we normally teach under the heading of "linear algebra"; the roty method is where the rotation matrix lives and needs to _not_ be buried in some library (in this instance, given the goals for the lesson plan). Comprehension, not speed, is what's critical. Yet I'd call all of this '3D' certainly.[1]

Pedagogy, not system performance, is what's behind this strategy. Like, even Povray by itself will scale and rotate an object, once you've defined it and named it. So if I'd just wanted to show an icosahedron, and then another one rotated 30 degrees with respect to the first, there's really no reason I'd have to use Python matrices to do that -- just name the first icosa 'MyIcosa' and tell Povray to rotate it and draw it again -- one line in the .pov script. But I don't do that, because that would hide precisely the implementation details I'm trying to bring to the foreground and "render" (make explicit) in Python.

So the decision to bring a rotation matrix into "pure Python" wasn't at all for speed reasons (in my case) -- unless we're talking about "vs pencil and paper". In that limited sense, of taking the tedium out of manual calcs, I _do_ think automating with a computer language, even while learning it on paper, _is_ partly about getting some decent performance gains, so that applying a rotation to each of 12 vertices does _not_ take a half hour, but under a second. _That_ kind of speed (over pencil and paper manipulation) is what opens up interesting applications, such as rendering rotated polyhedra. A lot of these manipulations just seem "too expensive" if done over and over by hand -- whereas computers let us do them "in bulk" or "in volume" i.e.
very _cheaply_.[2]

Whereas I think Python is a fine little language (like APL in some ways, my first love, but without the funny squiggles), and a good first one, plus may be used in synergy with the various frameworks to elucidate their innards (the tack I took vis-a-vis the Blowfish crypto algorithm -- you'd really want to do that in faster code, or even in hardware, if speed were the core concern), I don't think we should feed the fantasy that "I can get by with Python for everything I'd ever want to do with computers." It's not the "be all end all language" -- no language is. In my paradigm, there's no such thing as "the" computer language, and anyone with pretensions to being a CS type should know at least 3 rather well (preferably from different families, e.g. a LISPish one should be part of the mix).

Python defines a namespace in which to come up for air, get some overview, have your mind suffused with the necessary abstractions, but then where you go from here is very domain-dependent. If you're a budding videogame programmer, you're likely going to move off towards C/C++ and/or assembler and/or other packages and skills. No problem. We're not trying to set up a perimeter to hem students in, making their knowledge strictly coincident with that of any given Python user's. We're trying to open doors, many of which are exits from Python (many drift in and out).

Kirby

Notes:

[1] actually, in my own case, '4D' would be another way of describing '3D', because I've tracked Bucky Fuller into the remote reaches of his off-beat synergetics vocabulary. He didn't "believe" in dimensionality as normally foisted upon us in this culture (in school rooms, one room or otherwise), didn't think "height, width and breadth" are really conceptual primitives, rather that "volume" comes as a package deal, indivisible. Given the tetrahedron is the "room with the fewest walls" (the most primitive container), he was inclined to call volume '4D', thinking of the tetrahedron's 4 vertices or 4 faces (as you prefer). But this is very esoteric -- no one talks this way except Fuller Schoolers. On the other hand, such thinking did inspire David Chako and others to elaborate a 4D coordinate system that uses 4-tuples instead of xyz 3-tuples for points in space. And it doesn't require negative numbers be used for any point's address. I've built this 4-tuple vector into coords.py as a subclass of Vector (called Qvector) i.e.

class Qvector(Vector):
    """Subclass of Vector that takes quadray coordinate args"""

[2] Most high school level approaches to matrices that I've seen make do with the "simultaneous equations" problem, solved by reducing a matrix, and never get into rotation, scaling, translation, as applications of matrix algebra (except maybe briefly, and then usually only in a plane). This is probably because it's labor intensive, using paper and pencil, to apply the same rotation matrix to the 12 or so vertices of a polyhedron. Too much work. But that's precisely where computers come in, giving us the opportunity to do something fun and intellectually stimulating with this matrix concept.

For related reading at my website, see:

(Cowen later responded and said he thought my ideas were on target and interesting).
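To make the Vector class idea from the message above concrete, here is a minimal sketch in Python; it is not Kirby's actual coords.py, and only the __add__ and roty behaviour shown in the interactive session is reproduced:

import math

class Vector:
    def __init__(self, coords):
        self.coords = tuple(float(c) for c in coords)

    def __add__(self, other):
        # (a,b,c) + (d,e,f) = (a+d, b+e, c+f)
        return Vector(tuple(a + b for a, b in zip(self.coords, other.coords)))

    def roty(self, degrees):
        # rotation about the y axis -- this is where the rotation matrix "lives"
        t = math.radians(degrees)
        x, y, z = self.coords
        return Vector((x * math.cos(t) + z * math.sin(t),
                       y,
                       -x * math.sin(t) + z * math.cos(t)))

    def __repr__(self):
        return "Vector " + str(self.coords)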
http://mail.python.org/pipermail/edu-sig/2000-December/000856.html
Opened 9 years ago
Closed 9 years ago
#12081 closed (invalid)
My project is not detecting the geodjango application
Description (last modified by )
Hello .. I want to have google maps in my application, so i created a project and in that made an application to handle the maps and made the required changes in the settings.py for the above. But i am getting the following error when i am trying to issue the command "python manage.py sqlall world"
Error: App with label world could not be found. Are you sure your INSTALLED_APPS setting is correct?
Can some one please help in this issue . And my models file is as follows:
from django.contrib.gis.db import models

class WorldBorders(models.Model):
    # So the model is pluralized correctly in the admin.
    class Meta:
        verbose_name_plural = "World Borders"

    # Returns the string representation of the model.
    def __unicode__(self):
        return self.name
Change History (4)
comment:1 Changed 9 years ago by
comment:2 Changed 9 years ago by
Did you follow the installation directions? Did you put 'world' in your INSTALLED_APPS? I cannot reproduce and there's no indication here of a bug, but rather improper configuration and/or installation.
Please use the mailing list or IRC for help requests; this is not a bug.
comment:3 follow-up: 4 Changed 9 years ago by
This is broken in release 1.1.1 again. It works fine in release 1.0.4.
comment:4 Changed 9 years ago by
Replying to intsangity:
This is broken in release 1.1.1 again. It works fine in release 1.0.4.
Do not reopen tickets that have been closed by a core developer absent a compelling reason. Simply asserting that it's "broken in release 1.1.1 again" is not compelling. Something is wrong with your configuration, as I've independently verified the tutorial code does work with the 1.1.1 release.
Before you reopen, please provide detailed steps that clearly demonstrates this problem -- I'm still sure you are missing 'world' and/or 'django.contrib.gis' from your INSTALLED_APPS.
Please use the preview button.
https://code.djangoproject.com/ticket/12081
in reply to How to write effective modules
Is a global in the $main:: namespace the right way to do it? No, it is not! Sometimes you'll build a set of modules that all assume a certain runtime environment, but even then that environment will be encapsulated behind method calls (or at least subroutine calls), so other modules don't need to know the internals.
Basic rule of thumb: it's fine for one module to depend on the public API of another module, but it's wrong to depend on the internal implementation of that API. And $main::db wouldn't be considered a good public API by most people.
https://www.perlmonks.org/?node_id=761655
Im, others teach that Christ never came. So you have a bag full of opinions and interpretations. Obviously there is one that is right and all others wrong. Now I'm not trying to start a riot, I'm just trying to see everybody's thoughts and thorough analyst of the issue.
Thanks
OriginPlus
It's not that obvious for me, though
I tend to think there are as many truths as people - and that makes this world so interesting.
In other words, whatever you believe is the truth for you, and all your believes form your reality - which is identical to my reality only to the extent of the believes we have in common...
Did not mean to derail the thread, just couldn't let this go unchecked - sorry
Greetings Misha. Hope that we are of a humble and contrite spirit today. According to Roman 10: 1-3 it would be dangerously interesting to even thing about the “many truths” on the minds of everyday people around us. We read that there are people ignorant of God’s righteousness and will go about to establish their own righteous. So who’s righteousness is truth or do the “many truths” bear there own righteousness (Col. 2:8).
Sorry, Nathanael, I will be glad to talk to you if you try to use your own words. I don't feel qualified enough to talk on the level of quoting bible...
This does have the potential to be a great discussion, but I can't keep up. By the time I read a post and think about replying, there's another post. I'll wait and try to respond when I don't have to play catch up.
Come one brother! that the easy part quoting bible. The thing is not to lean toward your own understanding, but lean towards the Lord's understanding and he will direct your paths. We as people tend to get into strive and dessention as Paul did with the Sadducess and the Pharocess when we express our own understanding. So im sorry also, I will not use my own words. Please read Pro. 3:5-6
Cool out Misha, You are discrediting yourself. Staying in practice with the word of God is of a learning experience. Grab a bible and a note pad and read and take notes. Get in the routin with the Word. before you know it you will have the mind of Christ mentioned in Phillipians 2: 5.
Greetings brother origin. I would not want to share my thoughts on the matter, but I would like to share with you and your thread visitors the mind of Christ Jesus. On the subject matter of Matthew 5:17; you ask yourself, are the laws of God done away with? No sir Mr. Origin, they’re not! Romans 3: 31 “Do we then make void the law through faith? God forbid: yea, we establish the law.” Look at Rom. 13: 8-10 and then flip a few pages back and look at Romans 7: 12, 14, 25. Peace in Jesus name. Amen.
Great question, origin. I think this question causes as much division among Christians as any question. The law is unending, yet fulfilled. Here's an analogy. You borrow money from a bank, so you owe the bank. You owe forever until the debt is paid. When your debt is paid in full, your obligation is fulfilled.
The restoration of our "right" relationship with God can only occur through a restoration of our righteousness. This was impossible through the law, and that is among the primary lessons of the Old Testament. The New Testament, on the other hand, is an offer to have your debt wiped clean. When Jesus says, "but to fulfill," I believe He meant to "Pay in Full."
I know other Christians will disagree with this interpretation, and I hope they share as well.
I don't believe Jesus meant that he will pay our debts in full, but rather that he will hold back the full payment of the Law (the full consequence of the Law that was broken) so we can work it out, (pay our debt), by following His examples of how to follow the Law until we get it completely right. I think that is what he meant by "fulfilling the Law". How could we become "like Him" if He does all the work for us? It is a learning process.
Hi Sparkling Jewel! I see where you at with your statements. Jesus death puts us under his grace for us to fulfill our part and that’s keeping his law. Sinners were stoned for sinning and had no chance to do right by faith or no grace to repent, until Jesus came to give us life. Remember the Sabbath breaker picking up sticks on the Sabbath (Num. 15: 32-35). Remember the adulteress brought to Jesus without the adulterer (John 8:3-11). They were candidate for stoning. But, where was the adulterer (the man) in this setting. Jesus knew man was not worthy of judging others because they were sinful themselves. John 8:11 and John 5:14 lets us know we are not to sin no more, lest when we do, a worst thing come on us (probly not stoning). This tells us we are to continue in the laws of God, keep the commandments, under grace that allows us to do so. And if we sin > 1 John 2:1-2. And for the law, we are doers of them (James 1:25). And yes sister, this is a learning process because we are working toward perfection in receiving that glorious change in the likeness of God. Praise the Lord. Amen.
Greetings brother Peter and family it’s very nice to share the mind of Jesus with you. The law is an everlasting covenant but we must define what has been fulfilled. The part that is fulfilled was Jesus coming in the flesh to die for the sins of the people. Read Isaiah 7:10-14 and Matthew 1: 22-23. The Lord fulfilled his part, now we must continue to fulfill our part and keep the faith by our works Eph. 2:10. We know that faith without works is dead and our faith is shown by our works, James 2:18. Also read Heb.13: 20-21. Our works is hearing and doing his word, James 1:22. There are many works we must be doing, but let’s go one to the debt Jesus paid. His work is done concerning the flesh. The bible will tells us that Paul was a debtor to the people (Rom. 1: 14) which he kept the law through his faith. Therefore, we also see that we are debtors to do the whole law (Gal. 5:3), through our faith of course.
Regarding the sacrificial law- it was impossible to restore our righteousness because it couldn’t take away sin (Heb. 10:1-2,4,8-10) (Heb. 8:7-8). Therefore, Jesus put those same laws written by Moses, in our heart and mine that we may do them and keep them as the apple of our eyes (Pro. 4:20-22).
It has been a pleasure to share the word of God with you. Grace & Peace in Jesus name Amen.
It's not my thread, but Misha you are always welcome.
Yea Peter, I see your point. I read in the Book of Hebrews where it said it was Neccesary for Christ to come to die for us because that first sacrifce wasn't able to make us clean. But Christ being a Perfect man -Son of God - was able to take away that cloak. I believe there was a seperations with the ordinances of the Law of sacrifice with the old and new testament. I think that was the hard thing for the pharisees to conceive the thought that it wasnt necessary any more to follow the law of moses, but rather follow a Man. Who indeed was the Christ. So its been a transition that takes place ever since his death.
I see the point your bringin Misha, In terms of each individual - Paul said seek your own soul salvation with fear and trembling. So each individual will have their own faith in whatever belief they have, but as a congregation they all should be of one mind.
These are some good points.
Greetings again brother Origin. And your right, it was not possible for the bulls and goats to take away sin (Heb.10:4). I ask the question, was it the law of Moses or the law of God. When Moses finished writing there was nothing else to write. Where did Moses get those writings from? God had a mission for Moses to do in Exodus chapter 3, remember Moses was commissioned to lead the people. God gave Moses the “laws of God” to give to the children of Israel- see Exodus 25:22.
Furthermore, if we are to seek our own soul salvation, we will be seeking for something to believe in. The verse I like to show you in Philippians 2:12 is “work out your own salvation with fear and trembling.”
If we WORK out our own salvation, we work for the hope of the promises in Christ Jesus. And his will and good pleasure works in as mentioned in Phil. 2:13.
Listen at this brother Origin. You mentioned “Each individual will have their own faith in whatever belief they have, but as a congregation they all should be of one mind.” Different beliefs and ones own faith lets us know that their will be strive and dissension in the congregation. Now we know they should be of one mind, but I say that because to be of one mind you should be of one spirit in the faith of the gospel mentioned in Phil 1:27. If you hear of a group of peoples affairs and they have many view points than they will not be likeminded (Phil.2:2). Remember the Sadducees and the Pharisees (Acts 23: 6-7); these guys were the spiritual counsel of the church. Go ahead, read it! They had two different beliefs. Their was division about the belief of the resurrection. Isn’t this one counsel that should be of one mind? Grace and peace brother Origin. My we continue to edify each other by the word of God in Jesus name. Amen.
The laws of Moses or the laws of God? Leviticus 18: 1-5 You decide!
Dividing the world into right and wrong, good and evil, was Original Sin (the fruit of the tree of the knowledge of good and evil, remember?)
Whenever you ask "who is right and who is wrong", you are sinning.
Greetings sister Inspire. By the way, I like the meaning of your name. Just what to touch on a couple of issues. Look at who was originally in the Garden of Eden (Gen. 2: 8-9). Man, the Tree of Life, and the tree of the knowledge of good and evil. I’m very curious to find the division there. We know the Lord said, don’t eat from the tree of knowledge of good and evil, but we also see good and bad in this world sometimes in one setting. But are we not to discern from these things? Also, we are blessed to have the gift of discernment (1 Cor. 2: 1, 7-10) or we are discerners of spirits knowing both good and evil (Heb. 5:14). Let’s look at something else in 1 cor. 6:2- obviously somebody was wrong or unjust in a matter that needed some attention. We see the works of the flesh and the works of the spirit throughout the bible and we see those people of the bible giving attention to those issues or dealing with those situations. They had discernment about something. Does that make them sinners? If you have a case and point out the bible, please list a couple. Peace sister. May your inspiration be a light to others? In Jesus name. Amen.
He doesn't do all the work for us. Our obligation is faith. Through faith, we are made righteous. I think if Jesus meant He would hold back our debts He would have said something like that. He said to fulfill. I don't know how else to interpret that without putting words in His mouth.
Yes brother Peter, it is our faith in Jesus that justifies us and not our works alone. As we mentioned, Jesus fulfilled his part of prophecy in Isaiah. 7: 10-14 / Matt. 1:22-23. So we know that is done. The Lord said “It is finished”( John 19:30), right! But there is much more prophecy the Lord is fulfilling through men and other things He will fulfill at his return (Matt. 5:18- “til ALL be fulfilled). For example; not to go astray, but Read Ezekiel 37. We know David is still dead and Israel has not been gathered, But chapter 37 will tell us David will be our king and Israel will be gathered, even in vs. 24 we still have to walk in the Lord judgments, statues and commandments and do them; or as brother Peter M. Lopez puts it, “the Law is unending.” Thanks for reading brother. Peace in Jesus name. Amen.
What is the law that Jesus is referring to in Matthew? It seems to me he is referring to the 10 commandments. he goes on to talk about them in matthew. So what he is saying is he is not disputing Gods words or any words of the prophets. He is here only to Follow those laws to fulfill the things asked of us in other religions can try to do away with these laws, but as a christian we too must follow or fulfill the things asked of us in the 10 commandments.
that's just my view.
I may be hijacking a thread here, but doesn't this offer fairly incontroverible evidence that Jesus quite clearly DID NOT SAY that sinners would go to Hell?
Seems to me he's saying they go to Heaven, but are called "least" when they get there.
Which fits with my interpretation (and my understanding of some of the translation issues), but not with some of the fire-and-brimstone stuff we see from some posters to this forum..
that was a great point.
My only question would be about the rapture and how the judgment would play into this statement by Jesus. I suppose you could take it how it is written and say it doesn't say you will go to heaven just that you will be talked about or "called the least" by people in heaven. I can't believe I never noticed what he was saying there.
I am not enough of a biblical scholar to even begin to understand the meaning of that line.
But I look forward to others input.
Don't become anything like a scholar or preacher cause the truth is, and it is the total truth, that you need to know God in your own way, God reflects upon the things that are in your heart. When you are a believer don't let anyone define how you should believe in God, who you should believe God is, or what God wants, because these days being in good faith is knowing who you are to God and not letting anyone change your mind because doing so will turn you into a hater.
The Bible can tell you a lot of things, but it can not tell you how you should love God. There are no conditions to Unconditional love, Jesus did not come to condem people, and there are no laws defining who you are in the eyes of God. Who you are in the eyes of God is who you want to be and God will love you more for every bit of yourself that is not a liar.
Don't be like the ones the Bible calls hypocrites. These are the people who repeat scripture as though they are playing a role, it's right under thier nose the things that certain people do but they boast so much about it that they can't come off thier horses long enough to figure out that "they" are everything Jesus told them not to be. The saddest part about it, is that they will probably never humble themselves enough to see it.
You know what Love is, go create Love.
Greetings sister Sandra. How is your day? Let me know please. I would like to take this time to express 2 Tim. 3:16 to you. For the most part we hope your week was a marvelous and safe one. Look at some of your statements and read some of the bible verses we put beside them.
“God reflects upon the things that are in your heart.” Rev. 22:12 / Ecc. 12:14
“The bible can tell you a lot of things, but it can not tell you how you should love God.” John14: 15 / John 15: 10 / Pro.4:4.
“There are no conditions to Unconditional love.” If- 1Jn. 4:20-21/If- 1Jn 5:1-3/If- Rev. 22:18-19
“Jesus did not come to condemn people.” Matt. 10:34-38/ 2Thes. 2: 10-12
Grace & Peace in Jesus name.
Hey there! I am very good, thanks for asking. Yes, if you love God, you can show it by keeping the comandments. I do live by those rules the 10 commandments. By how to love God I was refering to what God means to you, what God is to you, how much time you want to dedicate to God, as in worship I guess or that a good prayer isn't a rehersal prayer, but a real one with your thoughts and heart attached to it, being who you are to God. I disagree with anyone who defines Jesus as a person whom they souly base thier forgiveness on. I mean, I love that Jesus did that and all, but I don't think it is an excuse to go and do mean or rotten things and then think well Jesus will forgive me. Whether it is truely like that or not, I in my own personal opinion of God, is that youself and only yourself is accountable for your actions. While the world is a crooked place and of course all people make mistakes, I wouldn't ever say that I was justified to do wrong in the name of Jesus Christ. See, to me that is just, well....I don't know what that is, but it's not how I chose to see Christ.
As far as the verses you put up for me to look at, thanks for showing me where you were coming from, but there are also another hundered or so scriptures that talk about things the way I talk about them. Really it's up to how you chose to read it and what you take and learn from it.
And as for the commandments, I don't follow them because that is how I show I love God, I follow them because I believe they are right and justified and they just make sense. Even people who don't believe in God follow these rules, they are ingrained in us. So by definition of just a couple of those verses that you used, I would say that if showing your love for God is by following the commandments, then what would be the point of loving God because you can follow them with our without God. Or, even people who do not believe in God find it logical and right and justified to follow the commandments. Does that make sense?
I would like for you to show me those scriptures to justify your LOVE for God the way you want to Love Him. Please edify me That there is another way to LOVE God other than what he say in the way to LOVE Him. LOOK at what you said here>>>"then what would be the point of loving God because you can follow them with our without God."<<<<< YOU are talking about the Laws of God. People who dont belilieve in God dont follow God laws. They have there own set of Laws. and i will prove it. in the duration of time. Just keep coming back responding to this hub. First of all Look at Hebrews 10: 1-3. Then Go to mattew 15:8-9. Once again 1 John say IF you love me KEEP my commandmets. Thats something aint it. SO, are you talking about the GOD in 2 Corinthians 11:4 ?????????? THAT OTHER jesus with another gospel, because you have your own gospel about things and that dangerous. WE are all trying to work out salvation NOT condem ourselves. I don't want to sugar coat nothing and i pray that im not too str8 forward. Grace & Peace.
Proverbs 3:1 ¶My son, forget not my law; but let thine heart keep my commandments:
Proverbs 4:4 He taught me also, and said unto me, Let thine heart retain my words: keep my commandments, and live.
Proverbs 7:2 Keep my commandments, and live; and my law as the apple of thine eye.
John 14:15 ¶If ye love me, keep my commandments.
John 15:10 If ye keep my commandments, ye shall abide in my love; even as I have kept my Father's commandments, and abide in his love..
Yea, least in the kingdom of heaven , i wouldn't try to intrepret.
In Luke the 24th chapter he goes on to say:
And he said unto them, These are the words which I spake unto you, while I was yet with you, that all things must be fulfilled, which were written in the law of Moses, and in the prophets, and in the psalms, concerning me.
Maybe he is just refering to the prophecies, that were written in the Old testament concerning him. He said the Law of Moses..meaning the 5 first book of the Bible, in the prophets , jeremiah, isaiah, ezequiel, etc. and in the psalms of david and solomon...
I think this would make sense...
because in actuality he did away with the Laws that Moses was given...i think in Hebrews it goes more into details. in Chapter 9 ...
This has been an extremely interesting discussion, although it would be nice if some people could learn how to use paragraph breaks. It would make their posts easier to read. *hint*hint*
Personally I've always taken Jesus's words in this passage: "Think not that I am come to destroy the law, or the prophets: I am not come to destroy, but to fulfill." to mean that Jesus Himself never had any intention of founding a new religion. He saw the corruption of the Pharisees and other powerful men and sought to remind the Jews of what was truly important, namely the Great.) But He Himself lived and died a Jew, and never had any intention of being otherwise.
Or, in the words of Gandhi (astonishingly perceptive Christian theologian, Mr. Gandhi): "Jesus preached not a new religion but a new life. [...] Spiritual life has greater potency than radio waves. When there is no medium between me and my Lord, and I simply become a willing vestment of His influence to flow into it, then I overflow the water of the Ganges at its source. There is no desire to speak when one lives the truth. Truth is the most economical of words."
For the record, I am Christian, or consider myself so, but my view of Jesus is more like that of the Jews and Muslims: a great teacher and prophet (unlike the Jews and Muslims, I consider him the greatest) but a man, nothing more and nothing less, for we are all the children of God.
ARE you saying JESUS never had any intentions being otherwise as to say he is nothing but a Jew. SO, brother, WHo died for the sins of the world? JESUS >>1 Peter 3:22 Who is gone into heaven, and is on the right hand of God; angels and authorities and powers being made subject unto him. Jesus was resurrected and sat on the right hand of the Father and the SON is still interceeding for his people. He is the advocate for us in... 1 John 2:1. When we mess up The son is there.
For this Christian theologian..."There is no desire to speak when one lives the truth. Truth is the most economical of words."<<<< THERE is a lot to be heard about truth, Something have to be said and something have to be heard...but by who....those who walk the truth. They should be teachers to others.
Romans 10:17 So then faith cometh by hearing, and hearing by the word of God.
Revelation 13:9 If any man have an ear, let him hear.
Romans 10:14 How then shall they call on him in whom they have not believed? and how shall they believe in him of whom they have not heard? and how shall they hear without a preacher?
This Teacher is JESUS
The Teacher is the Christ of Jesus, to each individual Christ of each person seeking such
Jesus had solved the human dilemma...being in a human body, that resurrected, overcoming all human aspects by recognizing God fills up all of the spirit, mind, body and soul of himself and potentially each individual, as they seek and find it.
If you would read back through the law in the Old Testament you will find that if you break the law you would be put to death. When Jesus died for the sins of the world, (breaking the law of God), the Law was fulfilled. He was the atoning sacrifice for all sin.
The Law will point a person to Christ while at the same time prove the person guilty of the Law. Paul wrote about it in Romans. I have written a hub on the subject myself, but not sure if I should link it here.
To make it short, we are all guilty according to the Law, but we can be justified by faith in Jesus Christ.
http://hubpages.com/religion-philosophy/forum/3351/are-the-laws-done-away-with
Symptom
- Where can I get the latest transport files for SAP connectivity with Data Services?
- How do I install the Data Services functions in an SAP system?
- Are the functions provided compatible with my version of Data Services?
- Should the functions be installed in /BODS/ or /SAPDS/ namespace?
- I am getting an error or warning when installing the provided transport file in my SAP system.
Environment
- SAP Data Services
- SAP Applications Server source system
Product
SAP Data Services 4.1 ; SAP Data Services 4.2
Keywords
BODS, SAPDS, RFC function, transport files, DS4.2 , KBA , download , EIM-DS-SAP , SAP Interfaces , EIM-DS , Data Services , EIM-IS , Information Steward , How To
https://apps.support.sap.com/sap/support/knowledge/en/2195580
Internet of Things
Introduction: Internet of Things
Measure real world things, turn a knob and move a servo on the other side of the world. This has been possible with a PC but now it is possible using inexpensive boards and low or battery power. This project uses pre-made Arduino boards and no soldering is required.
This project was inspired by this Instructable
Step 1: The Internet of Things - Smaller, Cheaper, Less Power
A standard Arduino board is now under $10. A W5100 Ethernet shield is also under $10. A LCD keypad display shield is under $6. Debugging Arduino can be done via the serial port, but for out in the field there are some advantages to having a display. Adding a display can be done in many ways, and for this project we are looking to use premade boards. There are two small catches - some pins clash with the Ethernet shield and the LCD shield, and also the display shield shorts out a few pins on the metal of the Ethernet plug, so it needs to be raised up higher.
To get a cheap board, search Arduino on ebay and sort on price+postage and “buy it now”. Then for the Ethernet board, search on Arduino Ethernet. For the display board, search Arduino LCD Keypad.
Step 2: A Small Hack - Snap Off a Pin
The Ethernet shield uses pins 10,11,12,13. The standard Arduino display (as per code on the Arduino site) uses pins 2,3,4,5,11,12. The pre- made display shield uses 4,5,6,7,8,9,10. That still clashes with the Ethernet shield, but fortunately pin 10 is not really needed, as it is used to turn off the backlight. There is a transistor controlling this light, and the simple hack is to break off pin 10 under the LCD board. The backlight now is always on, and there are no conflicts with the Ethernet shield. To remove this pin, grab it with some pliers and bend it back and forth until it breaks off. As another option, it may be possible to trace the track on the PCB and cut this.
The next issue is the height of the board. Search on ebay for Arduino Stackable Header. This project needs some 6s and some 8s, and these often come in kits. Get a few more – they are very cheap and will come in handy for other projects.
The hardware is almost done – just plug it all together.
Step 3: Wifi or Ethernet
There are 5 analog inputs on an Arduino and the LCD board uses analog 0 to read all the keypresses, so that leaves 4 analog channels and some digital ones too. The buttons could be disabled by removing analog pin 0 under the board.
It might at this point be worth mentioning wireless vs wifi. There are wifi Arduino boards available, but they are expensive – a quick check on ebay is around $57. A slightly cheaper option is to get a wifi router and configure it as a wifi repeater and use a short Ethernet patch cable. I used a TP Link router (search ebay for TP Link Repeater) which was $38. Add the $10 for the Ethernet board and it is cheaper. It is also a bit more flexible as you can add an Ethernet switch and have multiple Ethernet sockets and hence many Arduino boards. Configuring the router as a repeater is very simple – follow the instructions, log into the router, let it search for your wifi, add the password and save. The only small catch – the router has a dedicated IP address, and if it is then set up as a repeater, it asks the main router to allocate IP addresses. This means if you want to repeat a different main router it is hard to log back into the repeater! Do a factory reset if this happens.
Step 4: Talk to the Cloud
For software, there are many ways to configure things. Arduino has code to create a small webpage server, and other code to read this, so it can be done locally. Or, as in this example, we can upload to the cloud, and download at any location there is internet or wifi. This example uses Xively and their site will display the data in a graph format. This code reads 5 analog values, uploads these, then reads them back and extracts the actual values from the Xively text stream. Xively is free, and you need an account. Log in and click on Develop. Click on Add Device, and there are two numbers. The first is the device feed number which is around 9 numbers. Then there is the API key which is a longer number with numbers and letters. Copy and paste these into the code. Then add some channels – this project uses five channels and I called them Sensor1, Sensor2 etc.
Step 5: Program the Board
There are several important things to change in the code below. The first is the IP address. Each board has to have a unique IP address otherwise the router will get very confused. I started at IPAddress ip(192,168,1,178); and then added one to the last number. Some routers have different numbers eg 192.168.2.x and a quick check on a PC running IPCONFIG in a DOS shell will give the correct first 3 numbers.
The other number to change is the MAC address range. The range byte mac[] = {
0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xEE}; is a default number – maybe change one to a different hex number on each board, eg the last one, count backwards in hex 0xED, 0xEC.
The final thing to change is whether the board is an uploader or downloader. This code does both and about half way down is this
if(!client.connected() && (millis() - lastConnectionTime > postingInterval)) {
sendData(Analog0,Analog1,Analog2,Analog3,Analog4);
// comment out either send or get data
//getData();
which is configured to send data. To fetch back that data, comment out senddata, and uncomment getdata.
There is some leftover code commented out for things like dumping out the entire text string from Xively which is handy for debugging to work out how to cut up the string and extract the individual sensor readings.
Xively can do other things such as send an SMS or other message when certain conditions are met.
Have Fun!
/* for pushbuttons
* Ethernet shield attached to pins 10, 11, 12, 13
The LCD circuit: standard is 12,11,5,4,3,2 change to 8,9,4,5,6,7
and cut the header pin to D10 going to the LCD display (ethernet board needs this, and on the LCD display only used to turn backlight off
Serial debug commented out now and send to LCD instead
Change IP address to a different number for each board
created 15 March 2010
modified 9 Apr 2012
by Tom Igoe with input from Usman Haque and Joe Saavedra
This code is in the public domain.
*/
#include <SPI.h>
#include <Ethernet.h>
#include <LiquidCrystal.h>
#define APIKEY "shsCNFtxuGELLZx8ehqglXAgDo9lkyBam5Zj22p3g3urH2FM" // replace your pachube api key here
#define FEEDID 970253233 // replace your feed ID
#define USERAGENT "Arduino1" // user agent is the project name
// assign a MAC address for the ethernet controller, fill in your address here:
byte mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xEE};
// fill in an available IP address on your network here,
// for manual configuration:
IPAddress ip(192,168,1,178);
// initialize the library instance:
EthernetClient client;
// initialize the lcd library with the numbers of the interface pins
LiquidCrystal lcd(8,9,4,5,6,7);
// if you don't want to use DNS (and reduce your sketch size)
// use the numeric IP instead of the name for the server:
//IPAddress server(216,52,233,122); // numeric IP for api.pachube.com
char server[] = "api.xively.com"; // name address for xively API
unsigned long lastConnectionTime = 0; // last time you connected to the server, in milliseconds
boolean lastConnected = false; // state of the connection last time through the main loop
const unsigned long postingInterval = 10*1000; //delay between updates to Pachube.com
int counter;
String stringOne,stringTwo; // built string when data comes back
boolean stringData = false; // reset when a new block of data comes back
void setup() {
//delay(1000); // in case the serial port causes a latchup
// Open serial communications and wait for port to open:
Serial.begin(9600);
while (!Serial) {
; // wait for serial port to connect. Needed for Leonardo only
}
Serial.println("api.xively.com"); // if go more than ?32s then integer calc doesn't work
lcd.begin(16, 2);
Cls(); // clear the screen
delay(1000);
lcd.setCursor(0,1); // x,y top left corner is 0,0
lcd.print("api.xively.com");
lcd.setCursor(0,0);
lcd.print("Start Ethernet");
////Serial.println("Start Ethernet");
delay(1000);
// start the Ethernet connection:
if (Ethernet.begin(mac) == 0) {
//Serial.println("Failed to configure Ethernet using DHCP");
// DHCP failed, so use a fixed IP address:
//lcd.setCursor(0,1);
//lcd.print("Failed to configure");
Ethernet.begin(mac, ip);
}
Serial.println("Wait 10s");
lcd.setCursor(0,1);
lcd.print("Wait 10s ");
}
void loop() {
// read the analog sensor:
int Analog0 = analogRead(A0); // with a LCD display analog0 is all the buttons
int Analog1 = analogRead(A1);
int Analog2 = analogRead(A2);
int Analog3 = analogRead(A3);
int Analog4 = analogRead(A4);
// int sensorReading = analogRead(A2);
// if there's incoming data from the net connection.
// send it out the serial port. This is for debugging
// purposes only:
if (client.available()) {
char c = client.read();
Serial.print(c);
if (stringData == false)
{
stringData = true; // some data has come in
}
if (stringData == true)
{
stringOne += c; // build the string
}
if ((c>32) and (c<127))
{
lcd.print(c);
counter +=1;
}
if (counter==16)
{
lcd.setCursor(0,1);
//lcd.print(" ");
//lcd.setCursor(0,1);
counter = 0;
//delay(100);
}
}
// if there's no net connection, but there was one last time
// through the loop, then stop the client:
if (!client.connected() && lastConnected) {
//Serial.println();
//Serial.println("Disconnect");
client.stop();
lcd.setCursor(0,0);
lcd.print("Disconnect ");
lcd.setCursor(0,1);
counter = 0;
if (stringData == true)
{
PrintResults(); // extract the values and print out
stringData = false; // reset the flag
stringOne = ""; // clear the string
}
}
// if you're not connected, and ten seconds have passed since
// your last connection, then connect again and send data:
if(!client.connected() && (millis() - lastConnectionTime > postingInterval)) {
//sendData(Analog0,Analog1,Analog2,Analog3,Analog4);
// comment out either send or get data
getData();
}
// store the state of the connection for next time through
// the loop:
lastConnected = client.connected();
}
void PrintResults() // print results of the GET from xively
{
int n = 292; // start at the sensor data
int i;
char lf = 10;
int v;
Cls();
lcd.setCursor(0,0);
stringOne += lf; // add an end of line character
for(i=0;i<5;i++)
{
while (stringOne.charAt(n) != 44) // find first comma
{
n +=1;
}
n += 1;
while (stringOne.charAt(n) != 44) // find second comma
{
n+=1 ;
}
n+=1;
stringTwo = "";
while (stringOne.charAt(n) != 10) // find the end of the line which is a line feed ascii 10
{
//lcd.print(stringOne.charAt(n));
stringTwo+=stringOne.charAt(n);
n+=1;
}
v=stringTwo.toInt();
lcd.print(v);
lcd.print(" "); // space at end
if (i==1)
{
lcd.setCursor(0,1);
}
}
}
void Cls() // clear LCD screen
{
lcd.setCursor(0,0);
lcd.print(" "); // clear lcd screen
lcd.setCursor(0,1);
lcd.print(" ");
}
void PrintValues(int n0,int n1,int n2, int n3, int n4)
{
//Serial.print(n0);
//Serial.print(" ");
//Serial.print(n1);
//Serial.print(" ");
//Serial.print(n2);
//Serial.print(" ");
//Serial.print(n3);
//Serial.print(" ");
//Serial.println(n4);
Cls();
lcd.setCursor(0,0);
lcd.print(n0);
lcd.print(" ");
lcd.print(n1);
lcd.setCursor(0,1);
lcd.print(n2);
lcd.print(" ");
lcd.print(n3);
lcd.print(" ");
lcd.print(n4);
delay(2000);
}
// this method makes a HTTP connection to the server:
void sendData(int data0,int data1,int data2,int data3, int data4) {
PrintValues(data0,data1,data2,data3,data4);
//Serial.println("Connecting...");
lcd.setCursor(0,0);
lcd.print("Connecting... ");
lcd.setCursor(0,1);
lcd.print("No reply "); // if there is a reply this will very quickly get overwritten
lcd.setCursor(0,1);
counter = 0;
// if there's a successful connection:
if (client.connect(server, 80)) {
// send the HTTP PUT request (the headers mirror the GET request in getData() below):
client.print("PUT /v2/feeds/");
client.print(FEEDID);
client.println(".csv HTTP/1.1");
client.println("Host: api.pachube.com");
client.print("X-PachubeApiKey: ");
client.println(APIKEY);
client.print("User-Agent: ");
client.println(USERAGENT);
client.print("Content-Length: ");
// 8 is length of sensor1 and 2 more for crlf
int stringLength = 8 + getLength(data0) + 10 + getLength(data1) + 10 + getLength(data2) + 10 + getLength(data3) + 10 + getLength(data4);
client.println(stringLength);
// last pieces of the HTTP PUT request:
client.println("Content-Type: text/csv");
client.println("Connection: close");
client.println();
// here's the actual content of the PUT request:
client.print("sensor1,");
client.println(data0);
client.print("sensor2,");
client.println(data1);
client.print("sensor3,");
client.println(data2);
client.print("sensor4,");
client.println(data3);
client.print("sensor5,");
client.println(data4);
//Serial.println("Wait for reply"); // xively responds with some text, if nothing then there is an error
lcd.setCursor(0,1);
lcd.print("Wait for reply ");
}
else {
// if you couldn't make a connection:
//Serial.println("connection failed");
//Serial.println();
//Serial.println("so disconnecting.");
client.stop();
//lcd.setCursor(0,1);
//lcd.print("Connect Fail");
}
// note the time that the connection was made or attempted:
lastConnectionTime = millis();
}
// this method makes a HTTP connection to the server:
void getData() {
// if there's a successful connection:
if (client.connect(server, 80)) {
//Serial.println("connecting to request data...");
lcd.setCursor(0,0);
lcd.print("Connect ");
client.print("GET /v2/feeds/");
client.print(FEEDID);
client.println(".csv HTTP/1.1");
client.println("Host: api.pachube.com");
client.print("X-PachubeApiKey: ");
client.println(APIKEY);
client.print("User-Agent: ");
client.println(USERAGENT);
client.println("Content-Type: text/csv");
client.println("Connection: close");
client.println();
//Serial.println("Finished requesting, wait for response.");
lcd.setCursor(0,1);
lcd.print("Finish request");
}
else {
// if you couldn't make a connection:
//Serial.println("connection failed");
//Serial.println();
//Serial.println("so;
}
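One thing to note: the sketch calls a getLength() helper that does not appear in the listing; presumably it just counts the characters needed to print a reading so the Content-Length value can be computed. A minimal version along those lines (an assumption, not the original function) would be:

// Assumed helper, not part of the original listing: returns the number of
// characters needed to print 'value', used when computing the Content-Length.
int getLength(int value) {
  int digits = 1;             // every number has at least one digit
  if (value < 0) {
    digits = digits + 1;      // allow for the minus sign
    value = -value;
  }
  while (value >= 10) {
    digits = digits + 1;      // one more digit per power of ten
    value = value / 10;
  }
  return digits;
}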
Step 6: VB Net and Xively
After testing this out for several months I have come across some problems with the ethernet shield reliability. This is mainly if there are multiple router/repeater hops and might be due to timeout delays. There are problems with coping with semi-reliable connections which of course includes radio links that may be subject to interference. There may also be bugs in the standard Arduino ethernet shield code - there do seem to be a number of fixes on the internet but I am not sure which ones work. It is not the easiest to debug as the whole system will run for several days and then hang.
A hardware hack is to have one Arduino controlling a relay and turning on the power to a second Arduino that has an ethernet shield. Then the whole system can be powered down and then powered back up again.
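A minimal sketch of that relay watchdog idea (the pin number, baud rate and timeout are assumptions, and the second board is assumed to send an occasional serial byte as a heartbeat):

// Watchdog sketch for the first Arduino: power-cycle the second board via a relay
// if no heartbeat byte arrives on the serial port for half an hour.
const int relayPin = 7;                               // assumed relay pin
const unsigned long timeoutMs = 30UL * 60UL * 1000UL; // 30 minutes
unsigned long lastHeartbeat = 0;

void setup() {
  pinMode(relayPin, OUTPUT);
  digitalWrite(relayPin, HIGH);   // relay closed, second board powered
  Serial.begin(9600);             // second board sends a byte now and then
}

void loop() {
  if (Serial.available() > 0) {   // any byte counts as a heartbeat
    Serial.read();
    lastHeartbeat = millis();
  }
  if (millis() - lastHeartbeat > timeoutMs) {
    digitalWrite(relayPin, LOW);  // cut power to the second board
    delay(5000);                  // leave it off for a few seconds
    digitalWrite(relayPin, HIGH); // power it back up
    lastHeartbeat = millis();
  }
}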
Another option might be to look at wifi modules - this year (2014) they have come down to as low as $5, and these might eventually come with code that hopefully fails more gracefully, or perhaps can be reset with software.
Another fix is to use a computer as the internet interface. A small netbook will do. The following code is vb.net and listens to the arduino on a com port and then uploads the data to xively.
Imports System
Imports System.IO
Imports System.Net
Imports System.Text

' create a form. From the toolbox add button1, textbox1, textbox2, timer1, serialport1
' change the timer1 ticks to 4000. Change timer1 enabled to True
' in the opencomport routine, change the com port number
' add checkbox1, name it upload continuously

' Arduino test code
'// sends an increasing number every 5 secs
'int n;
'void setup()
'{
'  Serial.begin(9600); // also talk at a slow 1200 baud - easier debugging if all baud rates the same
'  while (!Serial) {} ; //wait to connect
'}
'void loop() // run over and over
'{
'  Serial.println(n);
'  n += 1;
'  delay(5000);
'}

Public Class Form1
    Public InPacket(0 To 2000) As Byte

    Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
        OpenComPort()
    End Sub

    Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
        xivelyFeedUpdate("shsCNFtxuGELLZx8ehqglXAgDo9lkyBam5Zj22p3g3urH2FM", "970253233", "sensor1", "14")
    End Sub

    Sub xivelyFeedUpdate(ByVal ApiKey As String, ByVal feedId As String, ByVal channel As String, ByVal value As String)
        Dim request As WebRequest = WebRequest.Create("" + feedId + ".csv")
        Dim postData As String
        postData = channel + "," + value ' eg sensor1,5
        ' build string to send
        Dim byteArray As Byte() = Encoding.UTF8.GetBytes(postData)
        request.Method = "PUT" ' PUT or GET
        request.ContentLength = byteArray.Length ' the length of channel and value
        request.ContentType = "text/csv" ' text and comma separated data
        request.Headers.Add("X-ApiKey", ApiKey) ' send the header
        request.Timeout = 5000
        Try
            Dim dataStream As Stream = request.GetRequestStream() ' Get the request stream.
            dataStream.Write(byteArray, 0, byteArray.Length) ' Write the data to the request stream.
            dataStream.Close() ' Close the Stream object.
            Dim response As WebResponse = request.GetResponse() ' Get the response - usually just Ok
            ' need to add a try/catch error routine here in case the internet connection goes down
            TextBox1.Text += CType(response, HttpWebResponse).StatusDescription ' Display the status.
            dataStream = response.GetResponseStream() ' Get the stream containing content returned by the server.
            Dim reader As New StreamReader(dataStream) ' Open the stream using a StreamReader for easy access.
            Dim responseFromServer As String = reader.ReadToEnd() ' Read the content.
            TextBox1.Text += responseFromServer ' Display the content.
            reader.Close() ' close the streams
            dataStream.Close()
            response.Close()
        Catch ex As Exception
            TextBox1.Text = "No connection"
        End Try
    End Sub

    Sub OpenComPort()
        Try
            SerialPort1.PortName = "COM9" ' windows key, "control panel", device manager, serial ports to find the number
            SerialPort1.BaudRate = "9600"
            SerialPort1.Parity = IO.Ports.Parity.None ' no parity
            SerialPort1.DataBits = 8 ' 8 bits
            SerialPort1.StopBits = IO.Ports.StopBits.One ' one stop bit
            'SerialPort1.ReadTimeout = 1000 ' milliseconds so times out in 1 second if no response
            SerialPort1.Open() ' open the port
            SerialPort1.DiscardInBuffer() ' clear the input buffer
            'SerialPort1.Handshake = System.IO.Ports.Handshake.RequestToSend 'handshaking on (or .None to turn off)
        Catch ex As Exception
            MsgBox("Error opening serial port - is another program using the selected COM port?")
        End Try
    End Sub

    Private Sub Timer1_Tick_1(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer1.Tick
        Dim BytesToRead As Integer
        Dim i As Integer
        Dim Character As String
        ' collect bytes from the serial port
        Timer1.Enabled = False
        TextBox2.Clear() ' clear the text box
        If SerialPort1.IsOpen = True Then
            Do
                If SerialPort1.BytesToRead = 0 Then Exit Do ' no more bytes
                BytesToRead = SerialPort1.BytesToRead
                If BytesToRead > 2000 Then BytesToRead = 2000
                SerialPort1.Read(InPacket, 0, BytesToRead) ' read in a packet
                For i = 1 To BytesToRead
                    Character = Strings.Chr(InPacket(i - 1))
                    TextBox2.Text += Character ' add to the text box
                Next
            Loop
            If CheckBox1.Checked = True Then
                TextBox1.Clear()
                xivelyFeedUpdate("shsCNFtxuGELLZx8ehqglXAgDo9lkyBam5Zj22p3g3urH2FM", "970253233", "sensor1", Str(Val(TextBox2.Text)))
            End If
        End If
        Timer1.Enabled = True
    End Sub
End Class
Hello, Dr. James.
I like your Intstructable. It's so cool and well written.
I'm so sorry. I can't find the code uses Arduino Ethernet Shiled for Xively Connection.
Where I can show the source code?
Thank you.
Hello James,
I like your Instructable, simple and straight to the point.
The one thing I would recommend is NOT to break pin 10 off the LCD shield.
Bend it on the stackable header instead: cheaper solution and it saves the integrity of the shield. By bending it, you will also have the option of soldering a wire to it and attach it to an unused pin.
BTW, I also use the TP-Link solution and there is an even cheaper and more versatile solution: TL-WR703N Mini Portable Wireless Router (US $24.50) which can be easily be upgraded to OpenWRT. Just make sure you get the English Firmware or you will have to learn Chinese like I had to do when I received mine...
Excellent idea - bend it rather than breaking it off. Thanks++. And thanks for the router ideas - as time goes on these seem to get cheaper and use less power. Cheers.
Nice instructable - well written! Concerning the router idea - I've already used the WR703N as a wifi bridge for my Arduino projects. If you're interested - I wrote up a short tutorial how to prepare the WR703N.
http://www.instructables.com/id/Internet-of-Things/
Thanks for these Awesome C++ tutorials.
Another exception where the C++ compiler pays attention to whitespace is with // comments. Single-line comments only last to the end of the line. Thus doing something like this will get you in trouble:
-------------------------------------------
This is what you have written above the bold text that says "Basic Formatting".
My question for you sir is whether you're talking about the blank spaces or newlines in comments when you say "Another exception where the C++ compiler pays attention to whitespace ..."?
A single newline is allowed in a single-line comment (it ends the comment). The text on the subsequent line is not considered part of the comment. So in that regard, single-line comments are newline sensitive.
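For instance (a small illustration; the variable names are arbitrary):

#include <iostream>

int main()
{
    // this single-line comment ends at the newline, so the declaration below is normal code
    int x = 5; // this trailing comment also stops at the end of this line
    std::cout << x << '\n'; // prints 5
    return 0;
}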
Thanks
My dear c++ Teacher,
Please let me say that program in paragraph "Basic formatting 4)" works fine even when second operator << in 3rd and 6th line is omitted, as follows:
With regards and friendship.
True. C++ will concatenate sequential string literals, so at least in the case of sequential C-style string literals, the << symbols aren't necessary. I still like to use them, because this concatenation doesn't work for other types.
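For instance, both of these statements print the same text (a small illustration, not taken from the lesson itself):

#include <iostream>

int main()
{
    std::cout << "Hello, " "world!" << '\n';    // adjacent literals are concatenated by the compiler
    std::cout << "Hello, " << "world!" << '\n'; // each literal inserted explicitly
    return 0;
}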
My dear c++ Teacher,
Please let me say, Dev-C++ 5.11 IDE defaults to 3 spaces.
With regards and friendship.
My dear c++ Teacher,
Please let me comment that first example's output clear shows that all statements do the exact same thing, when "endl" is added at the end of each statement as follows:
"Another exception where the C++ compiler pays attention to whitespace is with // comments. Single-line comments only last to the end of the line. Thus doing something like this will get you in trouble"
Shouldn't whitespace be replaced by newlines here?
Thank you for this tutorial.
Hi,
I had tried learning programming through many websites/tutorials/books, but those all generated a kind of fear ,earlier I used to think, programming is like cramming things a lot (Which I hate !), but since you always explain the reason and how things work, it is easy for me to learn. Thank you so much for creating this wonderful course.
My best wishes !!!
Hi! I'm glad to read this post as today my teacher presented me his c++ work and is really... unreadable! No spaces, no comments, no new lines. I've read that there isn't a performance reason to do this, so why?
A little curiosity:
is there a way to remove every empty spaces from whole the code?
I mean
must become
All teacher's code is like the last one.
Assuming that there are many calculations in the code, is there a better way to write them?
could this be fine?
Yes, compacting your code doesn't affect anything in C++. For some people, it's a matter of style/habit.
That line of calculations looks totally fine.
is formatting basically the way it's presented?
Formatting is another word for how you arrange your code.
A quick note about aligning assignments and comments:
Some argue that lining up assignments as in the following:
is a maintenance nightmare. If you change a variable name, you have to re-align every equal sign around it by adding or removing spaces.
Same goes for end of line comments that span several lines, if you change some of the code and the lines become shorter or longer, the comments are out of alignment.
While it is definitely easier to read, chances are it will only be easier to read when the code is first checked in. After being edited by several developers, it will likely become misaligned because it’s a pain to add the relevant spaces every time.
Aligning variable comments is a bit easier if you use tabs instead of spaces, and changing variable names isn't that common.
Ultimately, it's up to you if you want to spend the extra time to align for readability or not.
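For reference, a small illustration of the two layouts being debated (the variable names are arbitrary):

int main()
{
    // aligned: the values are easy to scan, but renaming one variable
    // means re-spacing the lines around it
    int unitsSold    = 5;
    int pricePerUnit = 20;
    int totalCost    = 100;

    // unaligned: slightly harder to scan, but each line can be edited independently
    int itemsShipped = 3;
    int weight = 12;

    return 0;
}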
Hallo evryone,
I'm new learner for this c++ langauge. I want to know what if we want to add white spaces i out text.
For example I want to print like
cout<<" Name : jd <<endl ;
cout<<" Occupation : student;
how can I write it like this. Exactly like this. So that thze symmetry doesn't change. and I can freely give the number of white spaces I want to give.
As far as I know, the font of the command prompt is Lucida Console, this sets a fixed width for all characters including spaces, so you just open Note Pad, set the font to Lucida Console if it wasn't set, paste the text that you want to edit, edit it and then paste it back into your C++ editor.
P_S__/ The code that you posted had one error, you didn't end the string with another double quotes character! If "jd" is a variable then you need to insert another two << (Insertion operators) because you use endl, if you aren't gonna use endl you'll only need one insertion operator.
P_S__/ Some editors have their font set to Lucida Console (In my case, Visual Studio 2015) and allows you to carry out such alignments easily inside the editor, instead of having to resort to external means.
What's the difference between << and +
<< has a couple of meanings, depending on context. But in this case, we use it to send text or a number to std::cout for printing on the console.
+ is mathematical plus -- the same one you use in standard mathematics. We use it to add two things together.
I disagree that aligning assignment operators makes the code more readable overall, as you suggest in the first pair of examples on recommendation 6. It makes the values easier to see (a good reason to use this formatting with comments), but it makes it more difficult to associate them with the identifiers, especially if one of the identifiers is much longer than its neighbors. If code looks scrunched without aligning the assignment operators an extra line between the assignments would go a long way towards making it more readable:
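The example itself was not preserved; the style being described is presumably just a blank line between each assignment, along these lines (the variable names are arbitrary):

int main()
{
    int unitsSold = 5;

    int pricePerUnitIncludingSalesTax = 20;

    int x = 7;

    return 0;
}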
It's funny... in the computer world? I'm tidy as hell. Clear, cleanly defined areas as much as possible, where there may be more time spent on the formatting to make it legible than in the actual written documentation or code itself. Looking around where I sleep, however, one could never predict this. =P
It's nice to see there's actually some sort of more-or-less agreed upon set of rules for legibility, though. I'm sure I'll find exceptions to each and every rule listed, but for the most part, I don't see any real reason to alter what you have set up.
One tiiiiiiiny question though...
With the squiggly brackets each on their own separate lines, and that I like to use set numbers of empty lines of whitespace to break up functions and so on, do these count towards the total line count of a program? =P
I suppose it depends on whether you're using a smart line counting tool or not. In theory, a good line counting tool should ignore whitespace and lines consisting only of curly braces.
For what it's worth, line count is a horrible, horrible metric to judge developers by. :)
your tab should be set to 4 spaces.how?can any one explain it pls
This will depend on what you are using for a code editor.
I use the NetBeans 8.0.1 IDE, and it has a very extensive set of editor options for writing C++ code. The tab size setting is right in there.
In Visual Studio 2013 (it's somewhat buried) go to TOOLS -> Options -> Text Editor -> C/C++ -> Tabs and you'll find the settings there to play around with.
Does the size of program increases by giving a long, readable variable name?
And, i wrote a program, and compiled it through both code::blocks and turbo c++, but what i found is the size of program build by code::blocks is much larger than the same program build with turbo c++, And another thing what i found is the program compiled with code::blocks uses less RAM than the same program compiled with turbo c++. why this is so??
(RAM used is found using task manager).
Different compilers, different usage. The reason why the filesize might be larger, is because you're compiling the program with the debug flag, that will add additional code to your program to help you debug. Switch to release, and your filesizes will probably be the same.
Using long expressive variable names is good, don't worry about it, and that alone will not increase the size of your final executable. Debug compiliation and optimizations are what affect executable file size. Variable name lengths will affect the size of the assembly file that is output by the compiler for the linker. That is no problem. In your source code readability is paramount, and start now training yourself into a good system for naming variables in your code. You will thank yourself when you have to wade through it again for some reason years later.
Well, this is very good. I have a question: Assuming that you have more than one function, main() function should be at the top or not?
Or, it doesn't matter if it's at the bottom (since there's a "Find" dialog box to look for it easily):
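The posted snippet was not preserved; judging from the replies below, it was presumably along these lines, with doubleVal() defined above a main() placed at the bottom (the function body and values are assumptions):

#include <iostream>
using namespace std;

// helper defined above main() so no forward declaration is needed
int doubleVal(int x)
{
    return 2 * x;
}

int main()
{
    int x = 5;
    doubleVal(x); // the return value is simply discarded here (see the replies below)
    cout << doubleVal(x) << endl; // prints 10
    return 0;
}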
Your code looks good. Just one thing though.
In your main() function body you have
main()
doubleVal(x);
and the int value returned by that function is not being assigned to any variable that is local to main(). So essentially you have done nothing constructive by simply calling the function. Comment that line out or remove it and your code will still work.
int
The reason cout << doubleVal(x) << endl; works is because doubleVal(x) is evaluated to the int value returned by the function, before it is inserted into the output stream by the operator<<() member of class ostream.
The mechanics of this will all be revealed later, and since it has been four years since you posted this you probably already know all about it. :)
I removed the comment, because the code didn't look as it should and would have been more confusing than helpful.
When is it necessary to have << endl; ? Will the two following lines of code work the same?
I love these tutorials by the way. Thank you!
Sorry, my mistake. I forgot endl simply moved the cursor down one line of text. I remembered only moments after the countdown ended. Anyways, I'm learning...slowly...but surely. noobs gotta get there eventually.
std::endl does more than just a carriage return to the beginning of the following line. It also forces an immediate flush of the underlying iostream object's stream buffer. This defeats the purpose of the buffer in the first place. To me it seemed better (if you just have to type endl) to simply:
#define endl '\n'
However this creates a problem if you are using the qualified std::endl, the preprocessor changes it to std::'\n' before compiling and that breaks the compilation.
Do it this way:
std::string endl("\n");
This declares a std::string object endl and initializes it to "\n"
Now in code following it whenever you do cout << endl it inserts the \n into the stream instead of std::endl (even if you have used using namespace std; before it).
Here is my solution to one of the quizzes and I implement what I was talking about in previous comment.
THANKS A LOT for this tutorial :) ...I'm actually having fun! ;)
This tutorial is great! Thanks for showing how to make code neat. It's really important and often overlooked.
I have some basic understanding of C++, but I am truly new. I looked at the comments from other users here and at what the tutorial says to do, and after showing it to friends, the tutorial is, to both them and me, perfectly illustrated. I can see why thousands of skilled programmers have developed a solid, clean way of coding.
Best Regards
There are a few other cases where whitespace matters, and they have to do with using two-character operators. Pretty obvious for comparative operators (!= == >= etc.), but I ran into trouble when using multiple templates. For instance, creating a stack of pairs:
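The original code lines were not preserved in this comment, so here is a minimal sketch of the kind of declarations being described (the stack-of-pairs example is assumed from the sentence above):

#include <stack>
#include <utility>

int main()
{
    // First line: before C++11, the trailing ">>" was lexed as the
    // right-shift / extraction operator, so this declaration failed to compile.
    std::stack<std::pair<int, int>> pairStack1;

    // Second line: the space between the closing brackets is unambiguous on
    // older compilers, and arguably easier to read.
    std::stack<std::pair<int, int> > pairStack2;

    return 0;
}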
The first line will not compile (if you are using namespace std) because the compiler reads the trailing >> as the extraction / right-shift operator. The second line is more readable as well!
The issue of a double greater-than symbol used to terminate a nested template list being interpreted as the stream operator has been fixed from C++11 onwards (C++11 released in 2014). Though I do agree the second line is more readable, if less compact.
I'll correct myself; C++11 was released in 2011 (hence the name). C++14 was released in 2014 (are we spotting a pattern here).
Alex, near the beginning of this page, you write:
"One exception where the C++ compiler does pays attention to whitespace is inside quoted text, such as "Hello world!". "Hello world!" is different than "Hello world!" "
the last two strings you compared here are not different at all, did you mean something else?
Yes, the second one was supposed to have more whitespace, which HTML unhelpfully collapsed down to a single space. I've fixed it -- thanks for noticing.
Regarding block statements enclosed between braces, I prefer this style:
Example:
I find it compacts code and hence is more neat. Just a personal view. Experience has also shown that having 2 spaces for tabbing produces well defined code.
The example's 4th line should look like this:
Keeping the opening braces on the same line is the recommended formatting in java's case, for example, where formatting is just as unrestricted as in c++.
I also prefer 2 spaces wide tabs, but it's personal preference and such things as the font or screen resolution may matter. For this reason I use tabs only for indentation, but for alignment on lines strictly spaces. That way a different tab size produces consistent layout.
Another recommendation for C and C++, which might be worth mentioning, is to put the signature (or a part of it) after closing braces as a line comment to help navigation in case the IDE doesn't provide sufficient support. Keeping functions short is helpful too. (If a function is "too long", there are probably parts that can be extracted into separate functions and just be called where their code was originally.) This kind of notation can be useful for nested control statements too.
(I don't know if my suggestions happen to be mentioned later in the tutorials.)
Is it generally acceptable in programming communities to "box in" your code with the curly brackets? To me at least, that looks easier to read and is less of an eyesore. For instance this is what I mean:
Beware, my screen resolution is high so it all fits on my screen, don't know about yours.
Actually, forget I asked that, I got to 1.10 and discovered a problem with doing that, it seems to cause the compiler to choke on preprocessor directives if you use the curly brackets like that. I don't know if this is a problem with just the IDE I'm using (Code::Blocks) or if there is a way around it. Oh well.
You sometimes see people "box" code with curly brackets for one-line functions. For example,
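A small illustrative sketch of what is meant (the example itself was not preserved in this reply):

// A one-line function with its braces kept on the same line:
int getValue() { return 5; }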
But outside of that, not so much.
Thanks for making things easy to understand.
I like the way you are slowly introducing new elements to the language.
Is this format affecting the size or the speed of the compiled code?
No and no. :) Whitespace is ignored by the compiler and does not affect the size or speed of the compiled code.
The code examples which all display "Hello World!" are not all correct. The one at line has no semicolon so it would not work.
[ Fixed! Thanks for the note. -Alex ]
Serial Port Communication in C#
#31
Posted 06 August 2009 - 01:52 PM
#32
Posted 18 August 2009 - 09:50 AM
jano_rajmond, on 23 Mar, 2009 - 10:51 AM, said:
I think that the reception thread may be interrupted by something. I modified the code to use separate windows for Tx and Rx, and I also took out the RichTextBoxes and replaced them with normal TextBoxes, but the same thing happens..
Any idea?
You need to add:
using System.Windows.Forms;
#33
Posted 09 September 2009 - 03:04 AM
Thanks
#34
Posted 10 September 2009 - 04:02 AM
Your article is very good, but I have a small doubt.
For WriteData you are showing how to send.
But if we want to receive data, how do we receive it and how do we display it there? And if we are getting data continuously, how do we get every line at its particular time? Please give a reply, because I need it for my project. And can we run this exe multiple times on one system for different ports?
#35
Posted 27 October 2009 - 01:04 PM
private void cmdOpen_Click(object sender, EventArgs e)
{
...................
comm.PortName = cboPort.Text;
...................
}
But I still did not figure out why the message received from the serial communication (COM11) did not display in the display window.
If any of you know, please post it or send email to me.
Thanks,
YZ
#36
Posted 29 October 2009 - 11:57 AM
#37
Posted 05 November 2009 - 05:15 AM
I did the download at:
and converted the VS2005 project to VS2008. It converted very smoothly, without problems.
I didn't test it yet but it seems very well done.
Thanks again!
The attached file is the original that I downloaded!
Attached File(s)
SerialPortCommunication.zip (101.85K)
Number of downloads: 913
This post has been edited by Jonadabe: 05 November 2009 - 05:17 AM
#38
Posted 18 February 2010 - 08:52 AM
great thanks;
This post has been edited by intell87: 18 February 2010 - 08:52 AM
#39
Posted 28 February 2010 - 05:51 PM
#40 Guest_Jeff*
Posted 03 March 2010 - 09:31 AM
#41 Guest_Yaron*
Posted 09 March 2010 - 01:02 AM
Yaron.
#42 Guest_Steve C*
Posted 11 April 2010 - 10:43 PM
Christianne, on 04 November 2008 - 01:19 AM, said:
Does anyone else notice this????
It's happening to me too Christianne
I've used this (great) class to communicate with an embedded system, but no matter what I do the CPU % crawls up until the app freezes. Anyone have any ideas?
#43 Guest_Clayton B*
Posted 12 April 2010 - 11:03 AM
th4k1dd, on 20 March 2008 - 05:46 AM, said:
However, when I hit the Open Port button, it throws up an error in the Rich Text Box which says that COM1 does not exist. Of course that was not a mistype. I select COM12 and yet it tries to connect to COM1.
I took a look at the code and followed the _portName variable but was unable to see why it would cut off the 2 in 12. It would seem it should not as the variable is setup as a string.
Quick Info:
Windows Vista Business
Microsoft .Net Visual Studio 2008 Express
Arduino USB Microcontroller Card
Port Settings (should be): COM12, 9600, None, 1, 8
Programs Receiving COM12: Arduino Software, Putty, & Realterm
Am I missing something? Your help is greatly appreciated.
One thing I noticed...I went into Device Manager and changed the USB Virtual COM port to use COM1 and for some reason the program works fine. Also works great if changed to COM2 in Device Manager.
Any ideas?
* Needs to read up on variable tracing *
I noticed the same thing, it looks like the form does not set the port name so the default COM1 is always used.
Try adding "comm.PortName = cboPort.Text;" to the "cmdOpen_Click" method in frmMain.cs.
Excellent tutorial, very very helpful!
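For what it's worth, the same default can be demonstrated with a tiny stand-alone sketch using the .NET SerialPort class (the port name is chosen arbitrarily; comm and cboPort above are the tutorial's own objects):

using System;
using System.IO.Ports;

class PortNameDemo
{
    static void Main()
    {
        using (var port = new SerialPort())
        {
            // A SerialPort keeps its default name until PortName is assigned,
            // which is why an unassigned manager always tried to open COM1.
            Console.WriteLine(port.PortName); // "COM1" by default

            port.PortName = "COM12";          // the equivalent of comm.PortName = cboPort.Text;
            Console.WriteLine(port.PortName); // now "COM12"
        }
    }
}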
#44
Posted 27 April 2010 - 06:17?
#45 Guest_jeff*
Posted 06 May 2010 - 01:22 AM
MarcelMonteny, on 27 April 2010 - 05:17 AM, said:
I'm having the same problem. I've used this code on XP and it works. Now when I try to use the same code on Windows 7, receive does not work.
Visual Studio C# IDE QA team!
Suppose you have to build a road to connect two cities on different sides of a lake. How would you plan the road to make it as short as possible?
To simplify the problem statement, a lake is sufficiently well modeled by a polygon, and the cities are just two points. The polygon does not have self-intersections and the endpoints are both outside the polygon. If you have Silverlight installed, you can use drag and drop on the points below to experiment:
Solution description
A shortest path between two points is a segment that connects them. It’s clear that our route consists of segments (if a part of the path was a curve other than a segment, we could straighten it and get better results). Moreover, those segments (which are part of the route) have their endpoints either on polygon vertices or the start or end point. Again, if this were not the case, we would be able to make the path shorter by routing via the nearest polygon vertex.
Armed with this knowledge, let’s consider all possible segments that connect the start and end point and all polygon vertices that don’t intersect the polygon. Let’s then construct a graph out of these segments. Now we can use Dijkstra’s algorithm (or any other path finding algorithm such as A*) to find the shortest route in the graph between start and endpoints. Note how any shortest path algorithm can essentially boil down to a path finding in a graph, because a graph is a very good representation for a lot of situations.
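As a rough illustration of the graph-plus-Dijkstra idea (this sketch is not the dynamic geometry library mentioned below; the visibility test that filters out edges crossing the polygon is omitted and a tiny hand-built graph is used instead):

using System;
using System.Collections.Generic;

class DijkstraSketch
{
    // edges[i] holds (neighbor index, segment length) pairs for node i.
    static double[] ShortestDistances(List<(int, double)>[] edges, int start)
    {
        var dist = new double[edges.Length];
        for (int i = 0; i < dist.Length; i++) dist[i] = double.PositiveInfinity;
        dist[start] = 0;

        var visited = new bool[edges.Length];
        for (int step = 0; step < edges.Length; step++)
        {
            // Pick the unvisited node with the smallest tentative distance.
            int u = -1;
            for (int i = 0; i < edges.Length; i++)
                if (!visited[i] && (u == -1 || dist[i] < dist[u])) u = i;
            if (u == -1 || double.IsPositiveInfinity(dist[u])) break;
            visited[u] = true;

            // Relax every edge leaving u.
            foreach (var (to, length) in edges[u])
                if (dist[u] + length < dist[to]) dist[to] = dist[u] + length;
        }
        return dist;
    }

    static void Main()
    {
        // Nodes: 0 = start point, 1 = a visible polygon vertex, 2 = end point.
        var edges = new List<(int, double)>[]
        {
            new List<(int, double)> { (1, 2.0) },
            new List<(int, double)> { (0, 2.0), (2, 3.0) },
            new List<(int, double)> { (1, 3.0) },
        };
        Console.WriteLine(ShortestDistances(edges, 0)[2]); // prints 5
    }
}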
From the implementation perspective, I used my dynamic geometry library and Silverlight to create a simple demo project that lets you drag the start and end points as well as polygon vertices. You can also drag the polygon and the plane itself. I also added rounded corners to the resulting path and made it avoid polygon vertices to make it look better.
Here is the source code for the sample. Here’s the main algorithm. It defines a data structure to describe a Graph that provides the ShortestPath method, which is the actual implementation of the Dijkstra’s algorithm. ConstructGraph takes care of adding all possible edges to the graph that do not intersect our polygon. SegmentIntersectsPolygon also determines what the name suggests.
I hope to post more about polygon routing in the future, and do let me know if you have any questions.
A couple of readers have posted questions about Visual Studio Extensibility, DTE, your own packages, commands, the experimental hive, etc. To be frank, I'm not an expert in this field, so instead of trying to answer these questions, I will point to some better resources for VSX (VS Extensibility):
I posted some code to start Visual Studio using C# 3.0:
Now here’s the code that does the same in C# 4.0:
using System;
class Program
{
static void Main(string[] args)
{
Type visualStudioType = Type.GetTypeFromProgID("VisualStudio.DTE.10.0");
dynamic dte = Activator.CreateInstance(visualStudioType);
dte.MainWindow.Visible = true;
}
}
At first it looks the same, but in the C# 4.0 version dte is declared as dynamic, so the MainWindow.Visible call is resolved at run time without needing reflection or a compile-time reference to the EnvDTE interop types:
I spent some time searching the web about Remote Desktop, fullscreen and multiple monitors, so I decided to write down my findings to avoid having to search for them again.
/span for multiple monitors
If you pass /span to mstsc.exe, the target session’s desktop will become a huge rectangle that equals to the summary area of your physical monitors. This way the remote desktop window will fill all of your screens. The downside of this approach is that both screens are part of one desktop on the remote machine, so if you maximize a window there, it will span all of your monitors. Also, a dialog that is centered, will show up right on the border between your monitors. There is software on the web to workaround that but I’m fine with keeping my windows restored and sizing them myself. Also Tile Vertically works just fine in this case.
Saving the /span option in the .rdp file
There is a hidden option that isn’t mentioned in the description of the .rdp format:
span monitors:i:1
Just add it at the bottom of the file.
Saving the /f (fullscreen) option in the .rdp file
screen mode id:i:2
(By default it’s screen mode id:i:1, which is windowed).
Sources
NAME
profile - Security profile file syntax for Firejail
USAGE
- firejail --profile=filename.profile
firejail --profile=profile_name
DESCRIPTION
Several command line options can be passed to the program using profile files. Firejail chooses the profile file as follows:
1. If a profile file is provided by the user with --profile option, the profile file is loaded. If a profile name is given, it is searched for first in the ~/.config/firejail directory and if not found then in /etc/firejail directory. Profile names do not include the .profile suffix. Example:
Reading profile /home/netblue/icecat.profile
[...]
Reading profile /etc/firejail/icecat.profile
[...]
2. If a profile file with the same name as the application is present in the ~/.config/firejail directory or in /etc/firejail, the profile is loaded. ~/.config/firejail takes precedence over /etc/firejail.
3. Otherwise, the default profile (default.profile) is loaded. Firejail looks for these files in the ~/.config/firejail directory, followed by the /etc/firejail directory. To disable default profile loading, use the --noprofile command option. Example:
Reading profile /etc/firejail/default.profile
Parent pid 8553, child pid 8554
Child process initialized
[...]
$ firejail --noprofile
Parent pid 8553, child pid 8554
Child process initialized
[...]
Scripting
Scripting commands:
- File and directory names
- File and directory names containing spaces are supported. The space character ' ' should not be escaped.
Example: "blacklist ~/My Virtual Machines"
- # this is a comment
- ?CONDITIONAL: profile line
- Conditionally add profile line.
Example: "?HAS_APPIMAGE: whitelist ${HOME}/special/appimage/dir"
This example will load the whitelist profile line only if the --appimage option has been specified on the command line.
Currently the only conditionals supported are HAS_APPIMAGE, HAS_NODBUS, BROWSER_DISABLE_U2F, and BROWSER_ALLOW_DRM.
The profile line may be any profile line that you would normally use in a profile except for "quiet" and "include" lines.
- include file_name
- Include the file in the current profile. Example: "include /etc/firejail/firefox.profile" will include the "/etc/firejail/firefox.profile" file.
The file name may also be just the name without the leading directory components. In this case, first the user config directory (${HOME}/.config/firejail) is searched for the file name and if not found then the system configuration directory is search for the file name. Note: Unlike the --profile option which takes a profile name without the '.profile' suffix, include must be given the full file name.
Example: "include firefox.profile" will load "${HOME}/.config/firejail/firefox.profile" file and if it does not exist "${CFG}/firefox.profile" will be loaded.
System configuration files in ${CFG} are overwritten during software installation. Persistent configuration at system level is handled in ".local" files. For every profile file in ${CFG} directory, the user can create a corresponding .local file storing modifications to the persistent configuration. Persistent .local files are included at the start of regular profile files.
- noblacklist file_name
- If the file name matches file_name, the file will not be blacklisted.
- ignore command
- Ignore the command in the profile file. Example: "ignore net eth0"
- quiet
- Disable Firejail's output. This should be the first uncommented command in the profile file.
Example: "quiet"
Filesystem
Syslog messages are generated if the sandbox tries to access a blacklisted file or directory. The blacklist-nolog command disables syslog messages for that particular file or directory.
- keep-var-tmp
- /var/tmp directory is untouched.
- mkdir directory
- Create a directory in user home or under /tmp before the sandbox is started. The directory is created if it doesn't already exist.
Use this command for whitelisted directories you need to preserve when the sandbox is closed. Without it, the application will create the directory, and the directory will be deleted when the sandbox is closed. Subdirectories are recursively created.
- private
- Mount new /root and /home/user directories in temporary filesystems. All modifications are discarded when the sandbox is closed.
- private directory
- Use directory as user home.
- private-home file,directory
- Build a new user home in a temporary filesystem, and copy the files and directories in the list in the new home. All modifications are discarded when the sandbox is closed.
- private-cache
- Mount an empty temporary filesystem on top of the .cache directory in user home. All modifications are discarded when the sandbox is closed.
- private-bin file,file
- Build a new /bin in a temporary filesystem, and copy the programs in the list. The same directory is also bind-mounted over /sbin, /usr/bin and /usr/sbin.
- private-dev
- Create a new /dev directory. Only disc, dri, dvb, hidraw, null, full, zero, tty, pts, ptmx, random, snd, urandom, video, log, shm and usb devices are available. Use the options no3d, nodvd, nosound, notv, nou2f and novideo for additional restrictions.
- keep-dev-shm
- /dev/shm directory is untouched (even with private-dev). See man 1 firejail for some examples.
- private-opt file,directory
- Build a new /opt in a temporary filesystem, and copy the files and directories in the list.
- private-cwd
- Set working directory inside jail to the home directory, and failing that, the root directory.
- private-cwd directory
- Set working directory inside the jail.
Resource limits, CPU and control groups
These profile entries define the limits on system resources (rlimits) for the processes inside the sandbox. The limits can be modified inside the sandbox using the regular ulimit command. The cpu command configures the CPU cores available, and the cgroup command places the sandbox in an existing control group.
Examples:
- rlimit-as 123456789012
- Set the maximum size of the process's virtual memory to 123456789012 bytes.
- rlimit-cpu 123
- Set the maximum CPU time available to the processes in the sandbox to 123 seconds.
- nodbus
- Disable D-Bus access. Only the regular UNIX socket is handled by this command. To disable the abstract socket, you would need to request a new network namespace using the net command. Another option is to remove unix from protocol set.
- nosound
- Disable sound system.
- noautopulse
- Disable automatic ~/.config/pulse init, for complex setups such as remote pulse servers or non-standard socket paths.
- notv
- Disable DVB (Digital Video Broadcasting) TV devices.
- nou2f
- Disable U2F devices.
- novideo
- Disable video devices.
- no3d
- Disable 3D hardware acceleration.
Networking
Networking features available in profile files.
- defaultgw address
- Use this address as default gateway in the new network namespace.
- netfilter filename
- Enable the network filter specified in filename.
- net bridge_interface
- Enable a new network namespace and connect it to this bridge interface. Unless specified with option --ip and --defaultgw, an IP address and a default gateway will be assigned automatically to the sandbox. The IP address is verified using ARP before assignment. The address configured as default gateway is the bridge device IP address. Up to four --net bridge devices can be defined. Mixing bridge and macvlan devices is allowed.
- net ethernet_interface|wireless_interface
- Enable a new network namespace and connect it to this ethernet interface using the standard Linux macvlan or ipvlan driver. Unless specified with the --ip and --defaultgw options, an IP address and a default gateway will be assigned automatically to the sandbox.
- net tap_interface
- Enable a new network namespace and connect it to this ethernet tap interface using the standard Linux macvlan driver. If the tap interface is not configured, the sandbox will not try to configure the interface inside the sandbox. Please use ip, netmask and defaultgw to specify the configuration.
- net none
- Enable a new, unconnected network namespace. The only interface available in the new namespace is a new loopback interface (lo). Use this option to deny network access to programs that don't really need network access.
- netmask address
- Use this option when you want to assign an IP address in a new namespace and the parent interface specified by --net is not configured. An IP address and a default gateway address also have to be added.
- deterministic-exit-code
- Always exit firejail with the first child's exit status. The default behavior is to use the exit status of the final child to exit, which can be nondeterministic.
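Putting several of the commands described above together, a small hypothetical profile might look like this (the application name, paths and limit values are purely illustrative):

# ~/.config/firejail/myapp.profile -- illustrative example only
quiet
noblacklist ${HOME}/.config/myapp
mkdir ${HOME}/.config/myapp
whitelist ${HOME}/.config/myapp
private-cache
private-dev
nosound
no3d
net none
rlimit-cpu 120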
FILES
/etc/firejail/filename.profile, $HOME/.config/firejail/filename.profile
LICENSE
Firejail is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.
How To Perform Video Streaming Using Raspberry Pi?
The Raspberry Pi is a small single-board computer with several USB ports and one High Definition Multimedia Interface (HDMI) port. It can do everything you would expect a desktop computer to do, like playing high-definition video, making spreadsheets, playing FM radio stations, gaming, and so on. Live streaming cameras can be used for security or personal purposes. Webcams, camcorders, DSLRs and mirrorless cameras for live video streaming are easily available on the market, but they are costly. In this project, we will make a live streaming camera using a Raspberry Pi, an economical device that can also be accessed over WiFi. We will be able to view the live video stream on our cell phones, tablets, and desktop PCs.
How To Setup Pi Camera For Live Streaming?
The best approach to start any project is to make a list of components because no one will want to stick in the middle of a project just because of a missing component.
Step 1: Components Required
Step 5: Enabling The Raspberry Pi Camera Module
We need to enable the Raspberry Pi camera module before using it. Close the command window after updating the packages and click on the Raspberry icon on the top left corner of the Desktop Screen. Scroll down to the Raspberry Pi Preferences, click on the Interfaces option and enable the Camera from there.
It can also be enabled by typing the following command in the Terminal window:
sudo raspi-config
After typing this command we will see that the Raspberry Pi Software Configuration Tool opens; scroll down to Interfacing Options and press Enter.
A new screen will appear and we would see the Camera mentioned at the top. Press Enter:
After enabling the camera the Pi needs to be rebooted for the changes to take effect. We will reboot our Pi before proceeding further and it can be done by typing the following command.
sudo reboot
Step 6: Noting Down IP Address Of Pi
We need to access the video streaming webserver later hence we need to know the IP address that is assigned to the Raspberry Pi. As we have already found out the IP address while setting up our Pi we will note it down and proceed further. There is an alternative way of finding out IP address too and that is to type the following command in the Terminal window after setting up Wifi on our Pi.
ifconfig
In my case, the IP Address assigned to my Pi is “192.168.1.14“.
Step 7: Connecting The Camera Module
Now, we are ready to plug our camera module into the Pi but before doing so be aware that the camera can be harmed by electricity produced via static charges. Before taking out the camera from its grey packet ensure that you have discharged yourself by touching some earthing material. While installing the camera shut down the Pi and connect the camera to the CSI port of the Pi and ensure the camera is associated in the correct direction with the strip blue letters facing upwards as shown in the figure below.
Step 8: Looking For Suitable Format For Web Streaming
It is a bit tough task because there are no video formats that are universally supported by all of the web browsers. HTTP was designed to serve web pages initially and since its launch, many additions have been made for catering file downloads, live streaming, etc. Hence, keeping in view this issue we would stream our video in a simple format named as MJPEG. The code that is mentioned in the next step uses the built-in module to make video streaming much easier. A suitable format with code can be found out at the official Picamera website.
Step 9: Writing The Script For Video Streaming
We need to write the script for video streaming and it can be found out on the official PiCamera website. Firstly, create a new file named as rpi_video_streaming.py by typing the following the command in the Terminal window:
sudo nano rpi_video_streaming.py
After creating the file copy the code mentioned below or download the Code from Here. If you are downloading the code from the link then scroll down the webpage and check 4.10. Web Streaming part.
import io
import picamera
import logging
import socketserver
from threading import Condition
from http import server

# The HTML PAGE string was truncated in this copy; only its closing tags
# survived. The full page is in section 4.10 ("Web streaming") of the
# picamera documentation linked above.
PAGE = """\
...
</body>
</html>
"""

class StreamingOutput(object):
    def __init__(self):
        self.frame = None
        self.buffer = io.BytesIO()
        self.condition = Condition()

    def write(self, buf):
        if buf.startswith(b'\xff\xd8'):
            # New frame, copy the existing buffer's content and notify all
            # clients it's available
            self.buffer.truncate()
            with self.condition:
                self.frame = self.buffer.getvalue()
                self.condition.notify_all()
            self.buffer.seek(0)
        return self.buffer.write(buf)

class StreamingHandler(server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/':
            # The rest of do_GET (serving the index page and the MJPEG
            # multipart stream) was lost in this copy; see the picamera
            # "Web streaming" recipe for the full handler.
            pass

class StreamingServer(socketserver.ThreadingMixIn, server.HTTPServer):
    # Restored from the linked recipe; this part was dropped in this copy.
    allow_reuse_address = True
    daemon_threads = True

# The resolution argument was truncated here; '640x480' is the value used
# in the official recipe.
with picamera.PiCamera(resolution='640x480', framerate=24) as camera:
    output = StreamingOutput()
    camera.start_recording(output, format='mjpeg')
    try:
        address = ('', 8000)
        server = StreamingServer(address, StreamingHandler)
        server.serve_forever()
    finally:
        camera.stop_recording()
After pasting the code in the new file created press Ctrl+X, type Y and press Enter.
Step 10: Running The Video Streaming
After saving the script we will run it using Python3 by typing the following command:
python3 rpi_video_streaming.py
After writing this command our script will start running, and we can now access our web server at http://<IP Address Assigned To Pi>:8000. We will open the browser and paste the link into it, using the IP address that is assigned to our Pi by the router. We can get to the video stream through our cellphone, tablet, etc., as long as it has a browser installed and is connected to the same network as our Pi. I wrote "192.168.1.14:8000" to open the video stream.
Step 11: Giving Final Touches
As we have tested our Pi camera and came to know that it is working fine we are ready to install it at a suitable place. It may be installed near the gate of the house so that we could monitor every person that is entering or leaving the house. All we need is to power ON our camera by using the adapter and it would be better to put it inside a casing and just leave the camera side open for video streaming.
Applications
- It can be installed in homes for security purposes.
- It can be used in offices for monitoring the employee movement.
- It can be installed in shopping malls, railway stations, etc and can be accessed by the administration staff to have check and balance in a specific area.
Say I have a program that has 5 different options which are displayed depending on what number you press, and I'm using if-else statements.
"press 1 for my age, press 2 for my height, press 3 for my weight, press 4 for my hair color, and press 5 for my favorate food"
would it be improper/sloppy to write the program out like this:
Code:
#include <iostream>
using namespace std;

int main()
{
    double x;
    cout << "Press 1 - 5 for a random fact about me\n";
    cin >> x;

    if (x == 1)
    {
        cout << "I am 18 years old\n";
    }
    if (x == 2)
    {
        cout << "My height is 6 ft.\n";
    }
    if (x == 3)
    {
        cout << "My weight is 195 lbs\n";
    }
    if (x == 4)
    {
        cout << " I have brown hair\n";
    }
    if (x == 5)
    {
        cout << " My favorate food is icecream\n";
    }

    system("pause");
    return 0;
}
module Network.Socket.SendFile.Iter where

import Control.Concurrent (threadWaitWrite)
import Data.Int (Int64)
import System.Posix.Types (Fd)

-- | A single pass of the send loop ends in one of three ways:
--
-- (1) the requested number of bytes for that iteration was sent
-- successfully, there are more bytes left to send.
--
-- (2) some (possibly 0) bytes were sent, but the file descriptor
-- would now block if more bytes were written. There are more bytes
-- left to send.
--
-- (3) all the bytes have been sent, and there is nothing left to do.
data Iter
    = Sent Int64 (IO Iter)          -- ^ number of bytes sent this pass and a continuation to send more
    | WouldBlock Int64 Fd (IO Iter) -- ^ number of bytes sent, Fd that blocked, continuation to send more. NOTE: The Fd should not be used outside the running of the Iter as it may be freed when the Iter is done
    | Done Int64                    -- ^ number of bytes sent, no more to send

-- | A simple function to drive the *IterWith functions.
-- It returns the total number of bytes sent.
runIter :: IO Iter -> IO Int64
runIter = runIter' 0
  where
    runIter' :: Int64 -> IO Iter -> IO Int64
    runIter' acc iter = do
        r <- iter
        case r of
            Sent n cont -> do
                let acc' = acc + n
                -- putStrLn $ "Sent " ++ show acc'
                acc' `seq` runIter' acc' cont
            Done n -> do
                -- putStrLn $ "Done " ++ show (acc + n)
                return (acc + n)
            WouldBlock n fd cont -> do
                threadWaitWrite fd
                let acc' = acc + n
                -- putStrLn $ "WouldBlock " ++ show acc'
                acc' `seq` runIter' acc' cont
plone.protect 3.0.23
Security for browser forms
Introduction
This package contains utilities that can help to protect parts of Plone or applications build on top of the Plone framework.
1. Restricting to HTTP POST
a) Using decorator
If you only need to allow HTTP POST requests you can use the PostOnly checker:
from plone.protect import PostOnly from plone.protect import protect @protect(PostOnly) def manage_doSomething(self, param, REQUEST=None): pass
This checker only operates on HTTP requests; other types of requests are not checked.
b) Passing request to a function validator
Simply:
from plone.protect import PostOnly ... PostOnly(self.context.REQUEST) ...
2. Form authentication (CSRF).
Generating the token
To use the form authenticator you first need to insert it into your form. This can be done using a simple TAL statement inside your form:
<span tal:replace="structure context/@@authenticator/authenticator" />
This will produce an HTML input element with the authentication information.
Validating the token
a) ZCA way
Next you need to add logic somewhere to verify the authenticator. This can be done using a call to the authenticator view. For example:
authenticator=getMultiAdapter((context, request), name=u"authenticator") if not authenticator.verify(): raise Unauthorized
b) Using decorator
You can do the same thing more conveniently using the protect decorator:
from plone.protect import CheckAuthenticator from plone.protect import protect @protect(CheckAuthenticator) def manage_doSomething(self, param, REQUEST=None): pass
c) Passing request to a function validator
Or just:
from plone.protect import CheckAuthenticator ... CheckAuthenticator(self.context.REQUEST) ...
Headers
You can also pass in the token by using the header X-CSRF-TOKEN. This can be useful for AJAX requests.
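For example, a minimal sketch of such a request using the Python requests library (the URL, view name and token value are placeholders; in a real page the token is the value rendered by the @@authenticator view):

import requests

token = "0123456789abcdef"  # placeholder; use the token rendered into the page
response = requests.post(
    "https://example.org/plone/@@my-protected-view",
    headers={"X-CSRF-TOKEN": token},
    data={"value": "42"},
)
print(response.status_code)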
Protect decorator
The most common way to use plone.protect is through the protect decorator. This decorator takes a list of checkers as parameters: each checker will check a specific security aspect of the request. For example:
from plone.protect import protect from plone.protect import PostOnly @protect(PostOnly) def SensitiveMethod(self, REQUEST=None): # This is only allowed with HTTP POST requests.
This relies on the protected method having a parameter called REQUEST (case sensitive).
Customized Form Authentication
If you’d like use a different authentication token for different forms, you can provide an extra string to use with the token:
<tal:authenticator tal:define="authenticator nocall:context/@@authenticator">
  <span tal:replace="structure python: authenticator.authenticator('a-form-related-value')" />
</tal:authenticator>
To verify:
authenticator=getMultiAdapter((context, request), name=u"authenticator") if not authenticator.verify('a-form-related-value'): raise Unauthorized
With the decorator:
from plone.protect import CustomCheckAuthenticator from plone.protect import protect @protect(CustomCheckAuthenticator('a-form-related-value')) def manage_doSomething(self, param, REQUEST=None): pass
Automatic CSRF Protection
Since version 3, plone.protect provides automatic CSRF protection. It does this by automatically including the auth token to all internal forms when the user requesting the page is logged in.
Additionally, whenever a particular request attempts to write to the ZODB, it’ll check for the existence of a correct auth token.
Allowing write on read programmatically
When you need to allow a known write on read, you’ve got several options.
Adding a CSRF token to your links
If you’ve got a GET request that causes a known write on read, your first option should be to simply add a CSRF token to the URLs that result in that request. plone.protect provides the addTokenToUrl function for this purpose:
from plone.protect.utils import addTokenToUrl url = addTokenToUrl(url)
If you just want to allow an object to be writable on a request…
You can use the safeWrite helper function:
from plone.protect.auto import safeWrite safeWrite(myobj, request)
Marking the entire request as safe
Just add the IDisableCSRFProtection interface to the current request object:
from plone.protect.interfaces import IDisableCSRFProtection from zope.interface import alsoProvides alsoProvides(request, IDisableCSRFProtection)
Warning! When you do this, the current request is susceptible to CSRF exploits so do any required CSRF protection manually.
Clickjacking Protection
plone.protect also provides, by default, clickjacking protection since version 3.0.
To protect against this attack, plone employs the use of the X-Frame-Options header. plone.protect will set the X-Frame-Options value to SAMEORIGIN.
To customize this value, you can set it to a custom value for a custom view (e.g. self.request.response.setHeader('X-Frame-Options', 'ALLOWALL')), override it at your proxy server, or you can set the environment variable of PLONE_X_FRAME_OPTIONS to whatever value you’d like plone.protect to set this to globally.
You can opt out of this by making the environment variable empty.
Disable All Automatic CSRF Protection
To disable all automatic CSRF protection, set the environment variable PLONE_CSRF_DISABLED value to true.
WARNING! It is very dangerous to do this. Do not do this unless the zeo client with this setting is not public and you know what you are doing.
Notes
This package monkey patches a number of modules in order to better handle CSRF protection:
- Archetypes add forms, add csrf
- Zope2 object locking support
- pluggable auth csrf protection
If you are using a proxy cache in front of your site, be aware that you will need to clear the entry for ++resource++protect.js every time you update this package or you will find issues with modals while editing content.
Compatibility
plone.protect version 3 was made for Plone 5. You can use it on Plone 4 for better protection, but you will need the plone4.csrffixes hotfix package as well. Otherwise you get needless warnings or errors. See the hotfix announcement and the hotfix page.
Changelog
3.0.23 (2016-11-26)
Bug fixes:
- Allow confirm-action for]
3.0.22 (2016-11-17)
Bug fixes:
- avoid zope.globalrequest.getRequest() [tschorr]
3.0.21 (2016-10-05)
Bug fixes:
- Avoid regenerating image scale over and over in Plone 4. Avoid (unnoticed) error when refreshing lock in Plone 4, plus a few other cases that were handled by plone4.csrffixes. Fixes [maurits]
3.0.20 (2016-09-08)
Bug fixes:
- Only try the confirm view for urls that are in the portal. This applies PloneHotfix20160830. [maurits]
- Removed RedirectTo patch. The patch has been merged to Products.CMFFormController 3.0.7 (Plone 4.3 and 5.0) and 3.1.2 (Plone 5.1). Note that we are not requiring those versions in our setup.py, because the code in this package no longer needs it. [maurits]
3.0.19 (2016-08-19)
New:
- Added protect.js from plone4.csrffixes. This adds an X-CSRF-TOKEN header to ajax requests. Fixes [maurits]
Fixes:
- Use zope.interface decorator. [gforcada]
3.0.18 (2016-02-25)
Fixes:
- Fixed AttributeError when calling safeWrite on a TestRequest, because this has no environ.. [maurits]
3.0.17 (2015-12-07)
Fixes:
- Internationalized button in confirm.pt. [vincentfretin]
3.0.16 (2015-11-05)
Fixes:
- Make sure transforms don’t fail on redirects. [lgraf]
3.0.15 (2015-10-30)
- make sure to always compare content type with a string when checking if we should show the confirm-action view. [vangheem]
- Internationalized confirm.pt [vincentfretin]
- Disable editable border for @@confirm-action view. [lgraf]
- Make title and description show up on @@confirm-action view. [lgraf]
- Allow views to override ‘X-Frame-Options’ by setting the response header manually. [alecm]
- Avoid parsing redirect responses (this avoids a warning on the log files). [gforcada]
3.0.14 (2015-10-08)
- Handle TypeError caused by getToolByName on an invalid context [vangheem]
- You can opt out of clickjacking protection by setting the environment variable PLONE_X_FRAME_OPTIONS to an empty string. [maurits]
- Be more flexible in parsing the PLONE_CSRF_DISABLED environment variable. We are no longer case sensitive, and we accept true, t, yes, y, 1 as true values. [maurits]
- Avoid TypeError when checking the content-type header. [maurits]
3.0.13 (2015-10-07)
- Always force html serializer as the XHTML variant seems to cause character encoding issues [vangheem]
3.0.12 (2015-10-06)
- Do not check writes to temporary storage like session storage [davisagli]
3.0.11 (2015-10-06)
- play nicer with inline JavaScript [vangheem]
3.0.10 (2015-10-06)
- make imports backward compatible [vangheem]
3.0.9 (2015-09-27)
- patch pluggable auth with marmoset patch because the patch would not apply otherwise depending on somewhat-random import order [vangheem]
- get auto-csrf protection working on the zope root [vangheem]
3.0.8 (2015-09-20)
- conditionally patch Products.PluggableAuthService if needed [vangheem]
- Do not raise ComponentLookupError on transform [vangheem]
3.0.7 (2015-07-24)
- Fix pluggable auth CSRF warnings on zope root. Very difficult to reproduce. Just let plone.protect do its job also on zope root. [vangheem]
3.0.6 (2015-07-20)
- Just return if the request object is not valid. [vangheem]
3.0.5 (2015-07-20)
- fix pluggable auth CSRF warnings [vangheem]
- fix detecting safe object writes on non-GET requests [vangheem]
- instead of using _v_safe_write users should now use the safeWrite function in plone.protect.auto [vangheem]
3.0.4 (2015-05-13)
- patch locking functions to use _v_safe_write attribute [vangheem]
- Be able to use _v_safe_write attribute to specify objects are safe to write [vangheem]
3.0.3 (2015-03-30)
- handle zope root not having IKeyManager Utility and CRSF protection not being supported on zope root requests yet [vangheem]
3.0.2 (2015-03-13)
- Add ITransform.transformBytes for protect transform to fix compatibility with plone.app.blocks’ ESI-rendering [atsoukka]
3.0.1 (2014-11-01)
- auto CSRF protection: check for changes on all the storages [mamico]
- CSRF test fixed [mamico]
3.0.0 (2014-04-13)
- auto-rotate keyrings [vangheem]
- use specific keyring for protected forms [vangheem]
- add automatic clickjacking protection(thanks to Manish Bhattacharya) [vangheem]
- add automatic CSRF protection [vangheem]
2.0.2 (2012-12-09)
- Use constant time comparison to verify the authenticator. This is part of the fix for [davisagli]
- Add MANIFEST.in. [WouterVH]
- Add ability to customize the token created. [vangheem]
2.0 - 2010-07-18
- Update license to BSD following board decision. [elro]
2.0a1 - 2009-11-14
- Removed deprecated AuthenticateForm class and zope.deprecation dependency. [hannosch]
- Avoid deprecation warning for the sha module in Python 2.6. [hannosch]
- Specify package dependencies [hannosch]
1.1 - 2008-06-02
- Add an optional GenericSetup profile to make it easier to install plone.protect. [mj]
1.0 - 2008-04-19
- The protect decorator had a serious design flaw which broke it. Added proper tests for it and fixed the problems. [wichert]
1.0rc1 - 2008-03-28
- Rename plone.app.protect to plone.protect: there is nothing Plone-specific about the functionality in this package and it really should be used outside of Plone as well. [wichert]
- Made utils.protect work with Zope >= 2.11. [stefan]
- Author: Plone Foundation
- Keywords: zope security CSRF
- License: BSD
- Categories
- Environment :: Web Environment
- Framework :: Plone
- Framework :: Plone :: 4.3
- Framework :: Plone :: 5.0
- Framework :: Plone :: 5.1
- Framework :: Zope2
- License :: OSI Approved :: BSD License
- Operating System :: OS Independent
- Programming Language :: Python
- Programming Language :: Python :: 2.6
- Programming Language :: Python :: 2.7
- Package Index Owner: wichert, hannosch, esteele, davisagli, optilude, timo, plone, vangheem
- DOAP record: plone.protect-3.0.23.xml
PyWavelets 0.3.0 Release Notes
Contents
PyWavelets 0.3.0 is the first release of the package in 3 years. It is the result of a significant effort of a growing development team to modernize the package, to provide Python 3.x support and to make a start with providing new features as well as improved performance. A 0.4.0 release will follow shortly, and will contain more significant new features as well as changes/deprecations to streamline the API.
This release requires Python 2.6, 2.7 or 3.3-3.5 and NumPy 1.6.2 or greater.
Test suite
The test suite can be run with nosetests pywt or with:
>>> import pywt
>>> pywt.test()
n-D Inverse Discrete Wavelet Transform
The function pywt.idwtn, which provides the n-dimensional inverse DWT, has been added. It complements idwt, idwt2 and dwtn.
Thresholding
The function pywt.threshold has been added. It unifies the four thresholding functions that are still provided in the pywt.thresholding namespace.
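A small usage sketch (the array values are arbitrary, and the keyword form of the mode argument is assumed):

import numpy as np
import pywt

data = np.array([-2.0, -0.5, 0.3, 1.7])
# Soft thresholding zeroes out coefficients whose magnitude is below the
# threshold and shrinks the remaining ones toward zero.
print(pywt.threshold(data, 1.0, mode='soft'))  # [-1.   0.   0.   0.7]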
Backwards incompatible changes
None in this release.
Other changes
Development has moved to a new repo. Everyone with an interest in wavelets is welcome to contribute!
Building wheels, building with python setup.py develop, and many other standard ways to build and install PyWavelets are supported now.
Code covered by the BSD License
by Quant Guy
23 Mar 2011
Mex C++ interpolator routines for general pp-forms in any dimension. Multithreaded.
Piecewise polynomial evaluation routines deserve attention considering their high relative importance in many fields, such as finance or computer graphics. In my own field, efficient polynomial evaluation is as important as efficient FFT is for signal processing engineers.
This package introduces two functions named ppmval and ppuval ('m' for multivariate, 'u' for univariate) which evaluate general polynomials in their pp-form. Representing polynomial in pp-form is the most cost-efficient way to evaluate piecewise polynomials.
As name suggest ppmval is optimized for evaluating any polynomial mappings from R^m to R^n where m > 1 and n >= 1. ppuval on the other hand is optimized for univariate polynomials of any m (constant, linear, quadratic, spline, etc...)
The interfaces are vectorized, meaning that you can fetch many evaluation sites in one call. In addition to that, the algorithms are multithreaded, meaning that you can utilize every core of your computer to do the calculation. The valuation problem is embarrassingly parallel, so you can expect good multi-threading results.
The algorithms are designed to use only single-threaded evaluation for small input sizes, as there is threading overhead involved. For small inputs single-threaded evaluation is faster, as the lump cost of threading dominates. For larger inputs, the speed increase can be seen clearly. The "threshold" limit is macro-defined in interpUtil.cpp so that you can easily change it to fit your environment and purposes. On my own computer I came to the conclusion that multi-threading should be used if you need to evaluate over 1024 points at once.
The multithreading uses the Visual C++ native parallel pattern library that ships with VS 2010. This means that you can't compile and link the libraries with an older VS, as this library is not present there. However, the free Express Edition can be downloaded from Microsoft if you do not already have VS 2010. For older users I tried to attach pre-compiled mexw32 files, but Mathworks did not like the idea (sorry!).
Here are the installation instructions
1. Unzip Interpolators.zip to folder of your desire.
2. Start Matlab
3. Add the unzip folder to your Matlab path.
4. In Matlab prompt type installer(<your install folder goes here>)
5. The installer automatically compiles the library and makes the necessary linkings to generate two mex-files, ppmval and ppuval.
6. Read the docs that explain the function behaviour and contain very simple examples.
7. Run Results.m that does benchmark comparison of this implementation to Matlab's built in evaluators. This also shows examples of the use cases how this interface is supposed to be used. It might be the case that you need to have spline (or curve fit toolbox) to be able to generate general splines in higher dimensions.
8. Enjoy!
When (and IF) I have some spare time, I will make a CUDA implementation of these interpolators.
Very useful, and very quick. My only gripe is that it's MSVC-specific in its current form. I wrapped the MSVC-specific parts in #ifdefs to get it to work on Linux, like:
#ifdef _MSC_VER
#include <ppl.h>
#define nParMin 1024
#endif
You will also need to put the parallel bits of the code in an #ifdef, which obviously means the parallelised parts won't work, but very useful all the same. Thanks.
Regarding my last post:
I call ppuval very often in an optimization. If I write 'clear mex' at the end of one complete optimization run, the mexfile is cleared from memory but the memory is not released. Writing 'clear mex' after each call to the mexfile slows down the computation considerably. How to clear the memory without slowing down the computation? Thank you.
It works fine (almost), but I have a problem with memory. It seems that the memory allocated by ppuval is not freed afterwards. This leads to an increasing Matlab memory demand which finally gives me an 'Out of Memory' error for often repeated evaluations. I'm using Matlab 2011b 32 bit with Windows 7 64 bit.
Do you have any hint how to fix this? Thank you.
@ Travis Storm: I think you are missing some header files. If you download Visual C++ 2010 Express, they should be included though.
Love the idea, but I get the following error when I try to run installer:
interpUtil.cpp
interpUtil.cpp(3) : fatal error C1083: Cannot open include file: 'ppl.h': No such file or directory
C:\PROGRA~1\MATLAB\R2010A\BIN\MEX.PL: Error: Compile of 'interpUtil.cpp' failed.
??? Error using ==> mex at 222
Unable to complete successfully.
Error in ==> installer at 5
mex interpUtil.cpp -c
Is a header file missing?
Syntax bug in 1.8.5? return not (some expr) <-- syntax error vs. return (not (some expr)) <-- fine
Discussion in 'Ruby' started by Good Night Moon.
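A minimal illustration of the difference being reported (the method and condition are made up for the example):

def negative?(x)
  # return not (x > 0)      # syntax error: `not` cannot appear here unparenthesized
  return (not (x > 0))      # parses and works as expected
end

puts negative?(-3)   # => true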
The program below shows that gcc reorders floating point instructions in such a way as to make inexact checking fruitless.
Reading generated assembler I see two problems:
1) the cast to float in the x assignment is executed *after* fetestexcept and not before, as it is written (and as is needed to get the correct result). This infringes the C99 standard's sequence point rules.
2) the second division is not recomputed (because CSE), then inexact flag is not changed after feclearexcept
I guess that the latter is due to the missing #pragma STDC FENV_ACCESS implementation, but the former undermines the whole usability of fetestexcept.
$ cat bug.c
#include <fenv.h>
#include <stdio.h>
double vf = 0x0fffffff;
double vg = 0x10000000;
/* vf/vg is exactly representable as IEC559 64 bit floating point,
while it's not representable exactly as a 32 bit one */
int main() {
double a = vf;
double b = vg;
feclearexcept(FE_INEXACT);
float x;
x = a / b;
printf("%i %.1000g\n", fetestexcept(FE_INEXACT), x);
feclearexcept(FE_INEXACT);
double y;
y = a / b;
printf("%i %.1000g\n", fetestexcept(FE_INEXACT), y);
return 0;
}
$ gcc -O2 bug.c -lm
$ ./a.out
0 1
0 0.9999999962747097015380859375
$
Created attachment 17176 [details]
Assembler generated by gcc -S -O2 bug.c
It is both due to missing #pragma STDC FENV_ACCESS
GCC does not have a way to represent use/def of floating-point status, so
the call to fetestexcept is not a barrier for moving floating-point
operations. In fact, it will be hard to represent this.
I'm trying to store an array of strings within an AWS DynamoDB table. For the most part this array will be populated with at least one string. However there is the case where the array could be empty.
I've created a DynamoDB model in a Java Lambda function that has a set of strings as one of it's properties. If I try to save a DynamoDB model when the set of strings is empty it gives me an error saying I can't store an empty set in DynamoDB.
So my question is, how would I handle removing that set property from my model when it's empty, before I save / update it in DynamoDB?
Here's an example of the model.
@DynamoDBTable(tableName = "group")
public class Group {
private String _id;
private Set<String> users;
@Null
@DynamoDBHashKey
@DynamoDBAutoGeneratedKey
public String getId() {
return _id;
}
public void setId(final String id) {
_id = id;
}
@DynamoDBAttribute
public Set<String> getUsers(){
return users;
}
public void setUsers(final Set<String> users) {
this.users = users;
}
public void addUser(String userId) {
if(this.users == null){
this.setUsers(new HashSet<String>(Arrays.asList(userId)));
}else{
this.getUsers().add(userId);
}
}
}
This is somewhat of an old question but the way I would solve this problem is with a custom DynamoDBMarshaller.
Making use of the @DynamoDBMarshalling annotation, you can decorate the POJO accessor methods in order to dictate to the DynamoDB mapper which marshaller class to use to serialize and deserialize the set of strings. This way you get control over the special use cases.
Here is also a link to an AWS blog post with an example
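As a rough sketch of what such a marshaller could look like with the version-1 SDK's object mapper (the class name, delimiter and bracket convention here are illustrative, not taken from the answer or the AWS post; newer SDK versions favor DynamoDBTypeConverter instead):

import java.util.HashSet;
import java.util.Set;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMarshaller;

public class StringSetMarshaller implements DynamoDBMarshaller<Set<String>> {

    private static final String SEPARATOR = ",";

    @Override
    public String marshall(Set<String> value) {
        // Store the set as one delimited string; an empty or null set becomes
        // the literal "[]", so we never try to write an empty DynamoDB set.
        if (value == null || value.isEmpty()) {
            return "[]";
        }
        return "[" + String.join(SEPARATOR, value) + "]";
    }

    @Override
    public Set<String> unmarshall(Class<Set<String>> clazz, String stored) {
        Set<String> result = new HashSet<>();
        if (stored != null && stored.length() > 2) {
            String body = stored.substring(1, stored.length() - 1);
            for (String item : body.split(SEPARATOR)) {
                result.add(item);
            }
        }
        return result;
    }
}

The getter in the model would then be annotated with something like @DynamoDBMarshalling(marshallerClass = StringSetMarshaller.class) so the mapper routes the users set through it.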
The one caveat with the approach above is that the custom marshaller solution serializes and deserializes to/from a string, so the representation in the database wouldn't be a set per se. However, I wouldn't consider that to be too bad.
Another approach might be to use the Document API, instead of the object mappers, which gives you full control over the items. Though I would still go for custom mapper with string backing.
Using Dynamic Data with EF Code First and NuGet
Note: this post is a bit outdated. Checkout this other post for more up to date information on this topic.
Dynamic Data works out of the box with Entity Framework, but it takes a small trick to get it working with the latest EF Code First bits (known as CTP5).
Here is quick walk through of what you need to do.
As a first step, create a new ASP.NET Dynamic Data Entities Web Application. Then, let’s use NuGet to add EF Code First to your project (I never miss a chance to pitch my new product!). We’ll use it with SQL Compact, and also bring in a sample to get started.
Right click on References and choose ‘Add Library Package Reference’ to bring in the NuGet dialog. Go to the Online tab and type ‘efc’ (for EFCodeFirst) in the search box. Then install the EFCodeFirst.SqlServerCompact and EFCodeFirst.Sample packages:
Now we need to register our context with Dynamic Data, which is the part that requires special handling. The reason it doesn’t work the ‘usual’ way is that when using Code First, your context extends DbContext instead of ObjectContext, and Dynamic Data doesn’t know about DbContext (as it didn’t exist at the time).
I will show you two different approaches. The first is simpler but doesn’t work quite as well. The second works better but requires using a new library.
Approach #1: dig the ObjectContext out of the DbContext
The workaround is quite simple. In your RegisterRoutes method in global.asax, just add the following code (you’ll need to import System.Data.Entity.Infrastructure and the namespace where your context lives):
public static void RegisterRoutes(RouteCollection routes) {
    DefaultModel.RegisterContext(() => {
        return ((IObjectContextAdapter)new BlogContext()).ObjectContext;
    }, new ContextConfiguration() { ScaffoldAllTables = true });
}
So what this is really doing differently is provide a Lambda that can dig the ObjectContext out of your DbContext, instead of just passing the type to the context directly.
And that’s it, your app is ready to run!
One small glitch you’ll notice is that you get this EdmMetadatas entry in the list. This is a table that EF creates in the database to keep track of schema versions, but since we told Dynamic Data to Scaffold All Tables, it shows up. You can get rid of it by turning off ScaffoldAllTables, and adding a [ScaffoldTable(true)] attribute to the entity classes that you do want to see in there.
Another issue is that this approach doesn’t work when you need to register multiple models, due to the way the default provider uses the ObjectContext type as a key. Since we don’t actually extend ObjectContext, all contexts end up claiming the same key.
Approach #2: use the DynamicData.EFCodeFirstProvider library
This approach is simple to use, but just requires getting a library with a custom provider. If you don’t already have NuGet, get it from here.
Then install the DynamicData.EFCodeFirstProvider package in your project:
PM> Install-Package DynamicData.EFCodeFirstProvider
'EFCodeFirst 0.8' already installed.
Successfully installed 'DynamicData.EFCodeFirstProvider 0.1.0.0'.
WebApplicationDDEFCodeFirst already has a reference to 'EFCodeFirst 0.8'.
Successfully added 'DynamicData.EFCodeFirstProvider 0.1.0.0' to WebApplicationDDEFCodeFirst.
After that, this is what you would write to register the context in your global.asax:
DefaultModel.RegisterContext(
    new EFCodeFirstDataModelProvider(() => new BlogContext()),
    new ContextConfiguration() { ScaffoldAllTables = true });
And that’s it! This approach allows registering multiple contexts, and also fixes the issue mentioned above where EdmMetadatas shows up in the table list.
|
http://blog.davidebbo.com/2011/01/using-dynamic-data-with-ef-code-first.html
|
CC-MAIN-2016-40
|
refinedweb
| 594
| 58.38
|
Eslam Farag wrote:i overrided hashCode() and equals() in a class (Dog) in order to store and retrieve it's instances from a hashMap, the code is as follows:
class Dog {
public Dog(String n) {
name = n;
}
public String name;
public boolean equals(Object o) {
if ((o instanceof Dog)
&& (((Dog) o).name == name)) {
return true;
} else {
return false;
}
}
public int hashCode() {
return name.length();
}
}
and the hashMap code is as follows:
public class MapTest {
public static void main(String[] args) {
Map<Object, Object> m = new HashMap<Object, Object>();
m.put("k1", new Dog("aiko"));
Dog d1 = new Dog("clover");
m.put(d1, "Dog key"); // #1
System.out.println(m.get("k1"));
String k2 = "k2";
d1.name = "arthur"; // #2
System.out.println(m.get(d1)); // #3
System.out.println(m.size());
}
}
the problem is that, at 2 i changed the name of the dog object that's stored inside the hashMap at 1, the expected output at 3 is NULL but the actual is Dog Key!! i expect it to fail in the equals() method as clover!=arthur but it succeeds!! i noticed that when the hashCode succeeds (i.e. the length==6) the value stored in the map is retrieved even though the equals() method fails, i changed == and used equals() instead but no change happens, the problem remains.
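For what it's worth, the lookup at #3 succeeds because m.get(d1) is called with the very same object that was stored as the key: the map matches it by identity (and equals() would also return true there, since ((Dog) o).name == name compares the field with itself). It only fails once the hash code itself changes. A small sketch reusing the Dog class from the question (not part of the original post):

import java.util.HashMap;
import java.util.Map;

public class SameKeyDemo {
    public static void main(String[] args) {
        Dog d1 = new Dog("clover");              // Dog class from the question above
        Map<Object, Object> m = new HashMap<Object, Object>();
        m.put(d1, "Dog key");

        d1.name = "arthur";                      // same length (6), so same hash bucket
        // The stored key and the lookup key are the same instance, so the
        // entry is found (reference identity; name == name is also true).
        System.out.println(m.get(d1));           // prints: Dog key

        d1.name = "rex";                         // length 3 -> hashCode() changes
        System.out.println(m.get(d1));           // prints: null (wrong bucket now)
    }
}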
|
http://www.coderanch.com/t/588767/java-programmer-SCJP/certification/override-hashCode-equals-store-object
|
CC-MAIN-2015-40
|
refinedweb
| 219
| 73.98
|
Query Types
Note
This feature is new in EF Core 2.1
In addition to entity types, an EF Core model can contain query types, which can be used to carry out database queries against data that isn't mapped to entity types.
Compare query types to entity types
Query types are like entity types in that they:
- Can be added to the model either in OnModelCreating or via a "set" property on a derived DbContext.
- Support many of the same mapping capabilities, like inheritance mapping and navigation properties. On relational stores, they can configure the target database objects and columns via fluent API methods or data annotations.
However, they are different from entity types in that they:
- Do not require a key to be defined.
- Are never tracked for changes on the DbContext and therefore are never inserted, updated or deleted on the database.
- Are never discovered by convention.
- Only support a subset of navigation mapping capabilities. Specifically:
  - They may never act as the principal end of a relationship.
  - They can only contain reference navigation properties pointing to entities.
  - Entities cannot contain navigation properties to query types.
- Are addressed on the ModelBuilder using the Query method rather than the Entity method.
- Are mapped on the DbContext through properties of type DbQuery<T> rather than DbSet<T>.
- Are mapped to database objects using the ToView method, rather than ToTable.
- May be mapped to a defining query. A defining query is a secondary query declared in the model that acts as a data source for a query type.
Usage scenarios
Some of the main usage scenarios for query types are:
- Serving as the return type for ad hoc FromSql() queries.
- Mapping to database views.
- Mapping to tables that do not have a primary key defined.
- Mapping to queries defined in the model.
Mapping to database objects
Mapping a query type to a database object is achieved using the
ToView fluent API. From the perspective of EF Core, the database object specified in this method is a view, meaning that it is treated as a read-only query source and cannot be the target of update, insert or delete operations. However, this does not mean that the database object is actually required to be a database view - It can alternatively be a database table that will be treated as read-only. Conversely, for entity types, EF Core assumes that a database object specified in the
ToTable method can be treated as a table, meaning that it can be used as a query source but also targeted by update, delete and insert operations. In fact, you can specify the name of a database view in
ToTable and everything should work fine as long as the view is configured to be updatable on the database.
Example
The following example shows how to use Query Type to query a database view.
First, we define a simple Blog and Post model:
public class Blog
{
    public int BlogId { get; set; }
    public string Name { get; set; }
    public string Url { get; set; }
    public ICollection<Post> Posts { get; set; }
}

public class Post
{
    public int PostId { get; set; }
    public string Title { get; set; }
    public string Content { get; set; }
    public int BlogId { get; set; }
}
Next, we define a simple database view that will allow us to query the number of posts associated with each blog:
db.Database.ExecuteSqlCommand(
    @"CREATE VIEW View_BlogPostCounts AS
      SELECT Name, Count(p.PostId) as PostCount
      from Blogs b
      JOIN Posts p on p.BlogId = b.BlogId
      GROUP BY b.Name");
Next, we define a class to hold the result from the database view:
public class BlogPostsCount
{
    public string BlogName { get; set; }
    public int PostCount { get; set; }
}
Next, we configure the query type in OnModelCreating using the
modelBuilder.Query<T> API.
We use standard fluent configuration APIs to configure the mapping for the Query Type:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder
        .Query<BlogPostsCount>().ToView("View_BlogPostCounts")
        .Property(v => v.BlogName).HasColumnName("Name");
}
Finally, we can query the database view in the standard way:
var postCounts = db.BlogPostCounts.ToList();

foreach (var postCount in postCounts)
{
    Console.WriteLine($"{postCount.BlogName} has {postCount.PostCount} posts.");
    Console.WriteLine();
}
Tip
Note we have also defined a context level query property (DbQuery) to act as a root for queries against this type.
|
https://docs.microsoft.com/en-us/ef/core/modeling/query-types
|
CC-MAIN-2018-47
|
refinedweb
| 701
| 51.58
|
CPUs used to perform better when memory accesses are aligned, that is, when the pointer value is a multiple of the alignment value. This differentiation still exists in current CPUs, and some still have only instructions that perform aligned accesses. To take this issue into account, the C standard has alignment rules in place, and compilers exploit them to generate efficient code whenever possible. As we will see in this article, we need to be careful while casting pointers around so as not to break any of these rules. The goal of this article is to be educational, by showcasing the problem and by giving some solutions to easily get over it.
For people who just want to see the solutions and the final code, you can go directly to the library section.
Spoiler: the provided solutions have nothing really disruptive, and are fairly standard ones! Other resources on the Internet [1] [2] also cover this issue.
The problem
Let's consider this hash function, that computes a 64-bit integer from a buffer:
#include <stdint.h>
#include <stdlib.h>

static uint64_t load64_le(uint8_t const* V) {
#if !defined(__LITTLE_ENDIAN__)
#error This code only works with little endian systems
#endif
  uint64_t Ret = *((uint64_t const*)V);
  return Ret;
}

uint64_t hash(const uint8_t* Data, const size_t Len) {
  uint64_t Ret = 0;
  const size_t NBlocks = Len/8;
  for (size_t I = 0; I < NBlocks; ++I) {
    const uint64_t V = load64_le(&Data[I*sizeof(uint64_t)]);
    Ret = (Ret ^ V)*CST;
  }
  uint64_t LastV = 0;
  for (size_t I = 0; I < (Len-NBlocks*8); ++I) {
    LastV |= ((uint64_t)Data[NBlocks*8+I]) << (I*8);
  }
  Ret = (Ret^LastV)*CST;
  return Ret;
}
The full source code with a convenient main function can be downloaded here.
It basically processes the input data as blocks of 64-bit little endian integers, performing a XOR with the current hash value and a multiplication. For the remaining bytes, it fills a 64-bit number with the remaining bytes.
If we want to make this hash portable across architectures (portable in the sense that it will generate the same value on every possible CPU/OS), we need to take care of the target's endianness. We will come back on this topic at the end of this blog post.
Let's compile and run this program on a classical Linux x64 computer:
$ clang -O2 hash.c -o hash && ./hash 'hello world' 527F7DD02E1C1350
Everything runs smoothly. Now, let's cross compile this code for an Android phone with an ARMv5 CPU in Thumb mode and run it. Supposing ANDROID_NDK is an environment variable that points to an Android NDK installation, let's do this:
$ $ANDROID_NDK/build/tools/make_standalone_toolchain.py --arch arm --install-dir arm
$ ./arm/bin/clang -fPIC -pie -O2 hash.c -o hash_arm -march=thumbv5 -mthumb
$ adb push hash_arm /data/local/tmp && adb shell "/data/local/tmp/hash_arm 'hello world'"
hash_arm: 1 file pushed. 4.7 MB/s (42316 bytes in 0.009s)
Bus error
Something went wrong. Let's try another string:
$ adb push hash_arm && adb shell "/data/local/tmp/hash_arm 'dragons'"
hash_arm: 1 file pushed. 4.7 MB/s (42316 bytes in 0.009s)
39BF423B8562D6A0
Debugging
If we grep the kernel logs for details, we have:
$ dmesg | grep hash_arm
[13598.809744] [2: hash_arm:22351] Unhandled fault: alignment fault (0x92000021) at 0x00000000ffdc8977
It looks like we have issues with alignment. Let's look at the assembly generated by the compiler:
The LDMIA instruction is loading data from memory into multiple registers. In our case, it loads our 64-bit integer into two 32-bit registers. The ARM documentation of this instruction [3] states that the memory pointer must be word-aligned (a word is 2 bytes in our case). The problem arises because our main function uses a buffer passed by the libc loader to argv, which has no alignment guarantees.
Why does this happen?
The question we can naturally ask is: why does the compiler emit such an instruction? What makes him/her/it think the memory pointed to by Data is word-aligned?
The problem happens in the load64_le function, where this cast is happening:
uint64_t Ret = *((uint64_t const*)V);
According to the C standard [10]: "Complete object types have alignment requirements which place restrictions on the addresses at which objects of that type may be allocated. An alignment is an implementation-defined integer value representing the number of bytes between successive addresses at which a given object can be allocated." In other words, this means that we should have:
V % (alignof(uint64_t)) == 0
Still according to the C standard, converting a pointer from one type to another without respecting this alignment rule is undefined behavior (page 74, §7).
In our case the alignment of uint64_t is 8 bytes (which can be checked, for instance, with the snippet below), hence we are experiencing this undefined behavior. What happens more precisely here is that the previous cast directly told our compiler "V is a multiple of 8, and so a multiple of 2. You are safe to use LDMIA".
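As a quick, stand-alone illustration (not from the original article; _Alignof requires C11, older compilers have vendor-specific equivalents):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Typically prints 8 on LP64 platforms; the exact value is
       implementation-defined. */
    printf("alignment of uint64_t: %zu\n", _Alignof(uint64_t));

    uint8_t buf[16];
    uint8_t *p = &buf[1];   /* one byte into the buffer, usually misaligned */

    /* Casting p to uint64_t const* and dereferencing it, as load64_le does,
       is only well-defined when this holds: */
    int ok = ((uintptr_t)p % _Alignof(uint64_t)) == 0;
    printf("safe to cast to uint64_t*: %s\n", ok ? "yes" : "no");
    return 0;
}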
The problem does not arise under x86-64 because the Intel mov instruction supports unaligned loads [4] (if alignment checking is not enabled [5], which is something that can only be enabled by operating systems [6]). This is why a non-negligible amount of "old" code has this silent bug: it never showed up on the x86 computers where it was developed. It's actually so bad that the ARM Debian kernel has a mode to catch unaligned accesses and handle them properly [7]!
Solutions
Multiple loads
One classical solution is to "manually" generate the 64-bit integer by loading it from memory byte by byte, here in a little-endian fashion:
uint64_t load64_le(uint8_t const* V) {
  uint64_t Ret = 0;
  Ret |= (uint64_t) V[0];
  Ret |= ((uint64_t) V[1]) << 8;
  Ret |= ((uint64_t) V[2]) << 16;
  Ret |= ((uint64_t) V[3]) << 24;
  Ret |= ((uint64_t) V[4]) << 32;
  Ret |= ((uint64_t) V[5]) << 40;
  Ret |= ((uint64_t) V[6]) << 48;
  Ret |= ((uint64_t) V[7]) << 56;
  return Ret;
}
This code has multiple advantages: it's a portable way to load a little endian 64-bit integer from memory, and does not break the previous alignment rule. One drawback is that, if we just want the natural byte order of the CPU for integers, we need to write two versions and compile the good one using ifdef's. Moreover, it's a bit tedious and error-prone to write.
Anyway, let's see what clang 6.0 in -O2 mode generates, for various architectures:
- x86-64: mov rax, [rdi]. This is what we would expect, as the mov instruction on x86 supports non-aligned access.
- ARM64: ldr x0, [x0]. Indeed, the ldr ARM64 instruction does not seem to have any alignment restriction [8].
- ARMv5 in Thumb mode: this is basically the code we wrote, which loads the integer byte by byte and constructs it. We can note that this is a non-negligible amount of code (compared to the previous cases).
So Clang is able to detect this pattern and to generate efficient code whenever possible, as long as optimisations are activated (note the -O1 flag in the various godbolt.org links)!
memcpy
Another solution is to use memcpy:
uint64_t load64_le(uint8_t const* V) {
  uint64_t Ret;
  memcpy(&Ret, V, sizeof(uint64_t));
#ifdef __BIG_ENDIAN__
  Ret = __builtin_bswap64(Ret);
#endif
  return Ret;
}
The advantages of this version are that we still don't break any alignment rule, it can be used to load integers using the natural CPU byte order (by removing the call to __builtin_bswap64), and it is potentially less error-prone to write. One disadvantage is that it relies on a non-standard builtin (__builtin_bswap64). GCC and Clang support it, and MSVC has equivalents.
Let's see what clang 6.0 in -O2 mode generates, for various architectures:
- x86-64: mov rax, [rdi]. This is what we would expect (see above)!
- ARM64: ldr x0, [x0]
- ARMv5 in Thumb mode: (same as above)
We can see that the compiler understands the semantics of memcpy and optimizes it correctly, as alignment rules are still valid. The generated code is basically the same as in the previous solution.
Helper C++ library
After having written that kind of code a dozen times, I've decided to write a small header-only C++ helper library that allows loading/storing integers of any type in natural/little/big byte order. It's available on GitHub. Nothing really fancy, but it might help and/or save time for others.
It has been tested with Clang and GCC under Linux (x86 32/64, ARM and mips), and with MSVC 2015 under Windows (x86 32/64).
Conclusion
It's a bit sad that we still need to do this kind of "hacks" to write portable code to load integers from memory. The current status is bad enough that we need to rely on compilers' optimisations to generate efficient and valid code.
Indeed, compiler people like to say that "you should trust your compiler to optimize your code". Even if this is generally advice worth following, the big problem of the solutions we described is that they do not rely on the C standard, but on modern C compiler optimisations. Thus, nothing forces compilers to optimize our memcpy call or the list of binary ORs and shifts of the first solution, and a change or bug in any of these optimisations could render our code inefficient. Looking at the code generated in -O0 gives an idea of what this code could be.
In the end, the only way to be sure that what we expect actually happened is by looking at the final assembly, which is not really practical in real-life projects. It could be nice to have a better automated way to check for this kind of optimisation, for instance by using pragmas, or by having a small subset of optimisations that could be defined by the C standard and activated on demand (but the questions are: which ones? how to define them?). Or we could even add a standard portable builtin to the C language to do this. But that's for another story...
On a somewhat related matter, I would also suggest reading an interesting article by David Chisnall about why C isn't a low-level language [9].
Acknowledgment
I'd like to thank all my Quarkslab colleagues that took time to review this article!
|
https://blog.quarkslab.com/unaligned-accesses-in-cc-what-why-and-solutions-to-do-it-properly.html
|
CC-MAIN-2021-39
|
refinedweb
| 1,727
| 60.24
|
Hi,
I'm trying to study mvc and I have found my self dealing with this problem/ question
I have a DB with the table Customer and I have create a Entity data model (with the table customer).
Now I have also made a class named Customer and I'm getting an error saying "
Missing partial modifier on declaration of type 'MvcSite.Models.Customer'; another partial declaration of this type exists
"
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
namespace MvcSite.Models
{
public class Customer
{
public int Id { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public Address Address { get; set; }
}
public class Address
{
public string Street { get; set; }
public string City { get; set; }
public int Zip { get; set; }
}
}
What I'm trying to do is use this class to write new customers to the DB. (I do know how to do so directly, but if I'm not mistaken, using the class is the correct way to do so?)
If I'm changing the class name to customers , how is it suppose to w
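For what it's worth, that error usually means the Entity Data Model designer has already generated a partial class named Customer in MvcSite.Models. A rough sketch of the two usual ways out (names and members are assumptions, not taken from the post):

namespace MvcSite.Models
{
    // Option 1: make your own class partial too, so it merges with the
    // designer-generated Customer instead of colliding with it. Only add
    // members the generated entity does not already define (this sketch
    // assumes it already exposes FirstName and LastName).
    public partial class Customer
    {
        public string FullName
        {
            get { return FirstName + " " + LastName; }
        }
    }

    // Option 2: give your hand-written class a different name and map it
    // to the generated Customer entity before saving.
    public class CustomerInput
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }
}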
|
http://www.dotnetspark.com/links/42563-help-needed-using-class-and-entitydatamodel.aspx
|
CC-MAIN-2017-04
|
refinedweb
| 198
| 56.29
|
Remote EJB 2.1 in Jboss 7.1 - leonardo w, Apr 26, 2012 4:02 PM
I tried to deploy an EJB 2.1 from jboss 4.2.3 to jboss 7.1.
I need to keep the JNDI name as follows because it´s a legacy application.
In old jboss.xml:
<session>
  <ejb-name>WidgetEJB</ejb-name>
  <jndi-name>com.company.app.ejb/WidgetEJB</jndi-name>
</session>
The new way I´ve tried (jboss-ejb3.xml):
<session>
  <ejb-name>WidgetEJB</ejb-name>
  <ejb-ref>
    <ejb-ref-name>WidgetEJB</ejb-ref-name>
    <lookup-name>com.company.app.ejb/WidgetEJB</lookup-name>  (I tried with lookup-name and jndi-name, but no way it worked)
    <jndi-name>com.company.app.ejb/WidgetEJB</jndi-name>
  </ejb-ref>
</session>
In this case the jboss didn´t find the jndi in the lookup command.
Is It possible? How?
I have already researched for it but I have not had success
Thanks in advance
1. Re: Remote EJB 2.1 in Jboss 7.1 - Vatsan Madabushi, Apr 30, 2012 3:57 PM (in response to leonardo w)
Was your lookup prefixed with "ejb:"?
2. Re: Remote EJB 2.1 in Jboss 7.1 - leonardo w, May 1, 2012 7:59 PM (in response to Vatsan Madabushi)
No!
Should I?
3. Re: Remote EJB 2.1 in Jboss 7.1 - Vatsan Madabushi, May 1, 2012 8:13 PM (in response to leonardo w)
Per the link, all EJB lookups need to be prefixed with "ejb:" as the namespace in AS 7.1.x
4. Re: Remote EJB 2.1 in Jboss 7.1 - Vatsan Madabushi, May 1, 2012 8:17 PM (in response to Vatsan Madabushi)
or this. This may be better since you are running a standalone remote client.
5. Re: Remote EJB 2.1 in Jboss 7.1 - leonardo w, May 1, 2012 8:28 PM (in response to Vatsan Madabushi)
Even if I customize the JNDI name?
Using the standard Jboss 7.1 JNDI names I was successful, but I need to customize the JNDI names and it does not work.
Anyway, I will try.
Thanks
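For reference, a remote lookup against the AS 7.1 ejb: namespace typically looks something like the sketch below; the app/module/interface names are placeholders, not values from this thread, and the connection details still come from jboss-ejb-client.properties:

import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;

public class WidgetClient {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Enables the ejb: JNDI namespace on the client side.
        props.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
        Context ctx = new InitialContext(props);

        // Pattern: ejb:<app-name>/<module-name>/<distinct-name>/<bean-name>!<view-class>
        Object home = ctx.lookup(
                "ejb:myapp/myejbjar//WidgetEJB!com.company.app.ejb.WidgetHome");
        System.out.println("Looked up: " + home);
    }
}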
|
https://developer.jboss.org/message/732509
|
CC-MAIN-2016-50
|
refinedweb
| 348
| 71.51
|
<< " "; });
cout << endl;
}
#include <algorithm>
#include <iostream>
#include <iterator>
#include <ostream>
#include <utility>
#include <vector>
using namespace std;
struct Multiplies {
template <typename T, typename U>
auto operator()(T&& t, U&& u) const
-> decltype(forward<T>(t) * forward<U>(u)) {
return forward<T>(t) * forward<U>(u);
}
};
class Watts {
public:
explicit Watts(const int n) : m_n(n) { }
int get() const { return m_n; }
private:
int m_n;
};
class Seconds {
public:
explicit Seconds(const int n) : m_n(n) { }
int get() const { return m_n; }
private:
int m_n;
};
class Joules {
public:
explicit Joules(const int n) : m_
Would decltype(Forward<T>(T()) + Forward<U>(U())) work as a preceding return type?
No, because T and U might not have default constructors.
Couldn’t there be a default for the auto keyword in this case where the return type of the returned expression is used? Instead of having to essentially repeat the code.
Something like this :
template <typename T, typename U>
auto operator()(T&& t, U&& u) const -> decltype(__default__)
{
return forward<T>(t) * forward<U>(u);
}
That was considered, but dropped between N1607 and N1705 due to insufficient support from members of the Evolution Working Group. See N2869 for the whole sequence of papers from decltype’s evolution.
This is a very long comment so I’ve split it into four parts: Two question parts, then some thoughts / rambling, and finally a thank-you. 🙂
** Null-reference trick **
"Technically, decltype(forward<T>(*static_cast<T *>(0)) + forward<U>(*static_cast<U *>(0))) could go on the left, but that’s an abomination."
I agree that is ugly.
When you say you could use that trick to avoid naming particular variables, you mean in that specific example and not in general, right?
As I understand things, the trick doesn’t preserve the lvalue/rvalue-ness of the particular variables, which could be important in determining the return type, so it won’t always work. (It works in this example because the lvalue/rvalue-ness isn’t important, assuming a typical operator*.)
OTOH, if you could always use the trick then you could have something like std::forward which does it and hides the ugliness.
** What if the variable names vary? **
Looking at the examples in this post again makes me wonder, what if the return statement doesn’t always use the same variables? For example, what goes in place of "t???" here:
template <typename T>
auto operator()(T&& t1, T&& t2) const
-> decltype(forward<T>(t???) * forward<T>(t???))
{
if (t1 > t2)
{
return forward<T>(t1) * forward<T>(t1);
}
else
{
return forward<T>(t2) * forward<T>(t2);
}
}
Presumably, if the code will compile at all, then it doesn’t matter which permutation of t1 and t2 you use to fill in the two t???. And if the lvalue/rvalue-ness of t1 and t2 are not the same, and the operator* is such that it matters, then the code won’t compile because the two return statements will have different types. (Or the return statements’ results will be coerced into whatever type the decltype specifies.)
Am I thinking along the right lines, or completely confused?
** Thoughts **
It’s quite odd that two "T&&" in the same context can be different despite being written exactly the same. It’s like we have hidden meta-types now. 🙂 Very useful though, when needed, and something that most people should be able to ignore most of the time.
I don’t mind that the equation stating the return type goes on the right. (That’s consistent with the lambda stuff and presumably dictated by parsing/syntax issues squeezing these features into the existing language.) And I don’t mind that we have to explicitly state the return type/equation. (Fair enough.)
I feel for anyone trying to learn C++ but I would not argue against any of these additions as they solve real problems in a way which makes (non-library) code easier to read and write. Anyone learning C++ needs to realise that the aim is not to hold the entire language in your head. Almost nobody does that. There are some features you should be aware of, but not expert in, where you can re-read the relevant chapter/webpage in the rare cases you need them.
It must be a bit more daunting now than it was before, but that’s life. IMO, C++ has always been a language that it takes years to master, and where mastery cannot come from a book or even from mental capacity alone. You have to trip over all of the caveats with your own foot before you truly understand them. It can be horrible for beginners but it’s still probably my favourite language.
My main complaints about C++ could never be fixed without breaking existing code. (e.g. I hate the fact that you can get default constructors/operators without explicitly asking for them. Anything where the default may be catastrophically wrong should be opt-in, not opt-out, IMO. But it’s too late to change that now.)
** Thanks! **
By the way, I read all three parts of this series over the last two days and I want to say thanks for describing things so well and thinking of examples that strike the right balance between complexity and abstraction*. Only the section on forwarding took me a few reads to grasp and I guess that is inherently complex and difficult to explain. I think you did a great job.
(*That is to say, often when language features are described the examples are so abstract that I have to almost compile them in my head to work out what’s being demonstrated. Or the examples are so complex, with so many unimportant side details, that it takes time to dig into the important bits. Your examples, OTOH, tended to be clear just from skimming the code. Nice one!)
My latest in a series of the weekly, or more often, summary of interesting links I come across related to Visual Studio. US ISV Developer Evangelism Team posted some links to money saving offers for ISVs when purchasing or upgrading Visual Studio or MSDN
"What if the variable names vary?"
Since return type of a function cannot vary depending on the input data, then the operator* should resolve to the same regardless of the order that t1 and t2 are multiplied in. So it shouldn’t matter.
@Nicol Bolas:
"then the operator* should resolve to the same regardless of the order"
The same what, is my question. The two return statements are presumably coerced into the type of the decltype statement, and there’s an error if that isn’t possible. That’s my guess anyway, but I was wondering if the guess is correct.
That is, the return type cannot vary but the type of each return statement can vary, so long as it can be turned into the return type.
A bit like this (which compiles) but where the lvalue/rvalue-ness of the types varies rather than the types themselves.
double test(long a, char b)
{
if (a > b)
{
return a * a;
}
else
{
return b * b;
}
}
[Leo Davidson]
> When you say you could use that trick to avoid naming particular variables, you mean in that specific example and not in general, right?
It works in general, so trailing return types weren’t an absolutely necessary feature. However, programmer sanity is important, which is why trailing return types were added.
> As I understand things, the trick doesn’t preserve the lvalue/rvalue-ness of the particular variables
That information is contained within the deduced types T and U, and preserved by forward<T>(stuff) and forward<U>(stuff) regardless of what you give them (but you have to give them something).
Something like arg<T>() could have been developed, taking nothing and returning T&& to a dereferenced null pointer (this function would be lethal to ever call – but decltype does not evaluate its expression). But instead of putting the return type on the left and teaching people to translate forward<T>(t) into arg<T>() there, putting the return type on the right and teaching people "give decltype exactly what you’re going to return" is easier.
> (It works in this example because the lvalue/rvalue-ness isn’t important, assuming a typical operator*.)
Although lvalueness/rvalueness doesn’t generally affect operator+() and operator*(), my functors have been carefully written to respect them, including in the decltype-powered return type. Saying decltype(t + u) would not respect lvalueness/rvalueness.
> What if the variable names vary?
Then you need to figure out some expression that has the correct type. The function can only have one return type, after all.
C++0x <type_traits> provides common_type which may be of use here.
In your case, you’re taking two T&&, so forward<T>(t1) * forward<T>(t1) and forward<T>(t2) * forward<T>(t2) are guaranteed to have the exact same type (same inputs, same output). Note that taking two T&& doesn’t produce perfect forwarding, and will probably trigger template argument deduction failure in many cases (where the deduced Ts differ).
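As a rough illustration of the common_type suggestion (a sketch only, not code from the post; it only assumes std::common_type from C++0x <type_traits>):

#include <type_traits>
#include <utility>

// When the two return statements may differ in lvalueness/rvalueness,
// returning common_type<T, U>::type by value gives both branches a single,
// well-defined return type.
template <typename T, typename U>
auto pick_first_if(bool first, T&& t, U&& u)
    -> typename std::common_type<T, U>::type {
    if (first) {
        return std::forward<T>(t);
    } else {
        return std::forward<U>(u);
    }
}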
> It’s quite odd that two "T&&" in the same context can be different despite being written exactly the same.
I’m not sure what you’re referring to.
In my examples, I use operator()(T&& t, U&& u). This allows the arguments to have different types.
A different issue is that in one instantiation, T&& might actually be string&&, while in another instantiation, it might be int&. This is covered in Part 2, reference collapsing, and is otherwise not fundamentally different from how in C++98/03, X& might actually be vector<int>& in one instantiation and const list<int>& in other (where X is deduced to be const list<int>).
> I feel for anyone trying to learn C++ but I would not argue
> against any of these additions as they solve real problems
> in a way which makes (non-library) code easier to read and write.
Excellently put.
Also, many C++0x features are simpler than the workarounds you’d have to use in C++98/03 (if they were possible at all). Which can make C++0x easier to learn from scratch (whereas we C++98/03 programmers had to learn the old ways, and now we have to learn the new ways on top of that).
> By the way, I read all three parts of this series over the last two
> days and I want to say thanks for describing things so well and
> thinking of examples that strike the right balance between complexity and abstraction*.
You’re welcome! Yes, coming up with simple but plausible examples is tricky.
Will VS C++ 2010 deliver real SSE3/SSE4 support (compiler optimization /arch:SSE3 ) instead of just intrinsics?
Ok, I don’t get one thing, after all these post, who can be so obsessed with c++ to come up with all these?
The reason the language is still alive today ( next to java/c#/php/… which have greater momentum, easier syntax, extendablity, etc.), because the programmer can more or less tell or check in the .asm output how it will compile to assembly and yet he does not have to write it in assembly! Anything that shifts our beloved close-to-the-metal c/c++ towards the mentioned languages don’t have much "market value" because that area is already full of competition.
I say just keep improving the code generator, add sse/avx/larrabee support, smarter intrinsic translationm, and of course less ICE 😛
We are working on all of that too !
In the coming months, we will share details about the codegen optimizations that will be part of VC10.
BTW, ICEs are quite rare now. If you find some, please report them back to us!
Gabest:
Please note that these features follow C++’s spirit of staying close to the metal while providing modern abstractions.
* Lambdas compile into function objects which benefit from inlining. This is superior to library machinery (like bind() and mem_fn()) which is complicated enough to defeat the inliner.
* auto, static_assert, and decltype are purely compile-time features. They impose no overheads on generated code.
* Rvalue references automatically replace unnecessary dynamic memory allocations with instantaneous pointer twiddling. C++0x mostly eliminates C++98’s biggest overhead compared to C, its tendency to perform unnecessary copies.
The compiler is divided into a "front-end" (FE) and "back-end" (BE). Summarizing how they work, the FE parses C++ while the BE generates assembly. These C++0x Core Language features are FE features, unrelated to BE features like code generation, intrinsics, and so forth. FE optimizations like rvalue references and BE optimizations like smarter code generation happily coexist with each other. Also, completely different developers work on the FE and BE, so you don’t have to worry that time spent implementing lambdas is somehow taking time away from improving code generation, because it’s not.
To answer a comment above: Wouldn’t it be nice to deduce the return type for ‘simple’ functions, in just the same way that lambda is required to?
template< typename T, typename U >
auto plus( T t, U u ) {
return t + u;
}
As Steven says, this was considered early in the C++0x cycle, and ultimately rejected. However, it may yet return as part of the last ongoing piece of work on this part of the standard, see
Now I don’t want to talk up the chances of success at this point! The paper proposes chaning the auto keyword (in new function declarations) to [] so that functions and lambda have a similar syntax. Essentially, anything that is ‘callable’ is introduced by square brackets, and there is no difference between a regular function, and a lambda expression with an empty capture list (meaning you can use lambda for callbacks into many Windows APIs!)
The problem is that many consider this syntax ugly, and we will shortly have two popular compilers shipping with the auto syntax.
So the idea is not dead yet, but will probably be decided one way or the other at the next standards meeting in July.
I’m eagerly awaiting posts on the compiler back-end improvements 🙂
#include <iostream>
#include <ostream>
#include <string>
using namespace std;
template <typename T> void meow(T&& t1, T&& t2) {
cout << t1 << endl;
cout << t2 << endl;
}
string rv() {
return "kittens";
}
int main() {
string lv("fluffy");
meow(lv, rv());
}
C:Temp>cl /EHsc /nologo /W4 meow.cpp
meow.cpp
meow.cpp(18) : error C2782: ‘void meow(T &&,T &&)’ : template parameter ‘T’ is ambiguous
meow.cpp(6) : see declaration of ‘meow’
could be ‘std::string’
or ‘std::string &’
#include <iostream>
#include <ostream>
#include <string>
#include <type_traits>
using namespace std;;
}
string rv() {
return "kittens";
}
int main() {
string lv("fluffy");
purr(lv, rv());
}
C:Temp>cl /EHsc /nologo /W4 purr.cpp
purr.cpp
C:Temp>purr
fluffy
kittens
Attempting to call purr(lv, 1729); fails:
C:Temp>cl /EHsc /nologo /W4 purr.cpp
purr.cpp
purr.cpp(14) : error C2338: purr(t, u) requires t and u to have identical types, but they can have different lvalueness/rvalueness.
purr.cpp(27) : see reference to function template instantiation ‘void. 🙂"
#include <algorithm>
#include <iostream>
#include <ostream>
#include <vector>
using namespace std;
int main() {
int *p= nullptr;
vector<int> v;
for (int i = 0; i < 10; ++i) {
v.push_back(i);
}
for_each(v.begin(), v.end(), [](int n) { cout << n << " "; });
cout << endl;
}
In output window, no problem:
1>f:temptconsoletconsoletconsole.cpp(13): error C2065: ‘nullptr’ : undeclared identifier
unfortunately, what I see first is Error List window, the messages in it are confusing:
Error 1 IntelliSense: identifier "nullptr" is undefined f:temptconsoletconsoletconsole.cpp 13 10 TConsole
Error 2 IntelliSense: expected an expression f:temptconsoletconsoletconsole.cpp 21 34 TConsole
Error 3 error C2065: ‘nullptr’ : undeclared identifier f:temptconsoletconsoletconsole.cpp 13 1 TConsole
When I clicked on "Error 2", the character ‘[‘!
|
https://blogs.msdn.microsoft.com/vcblog/2009/04/22/decltype-c0x-features-in-vc10-part-3/
|
CC-MAIN-2016-44
|
refinedweb
| 2,646
| 59.53
|
Download
FREE PDF
Flex is an open-source framework developed and distributed by Adobe Systems. It is based on the Adobe® Flash Platform and primarily provides a streamlined approach to the development of Rich Internet Applications.
Flex eliminates many of the designer-oriented features of Flash in favor of establishing a development environment that caters more to programmers. As such, you will find that Flex encompasses many of the concepts that you are already familiar with if you have developed front-end systems using JavaScript or, indeed, most other GUI programming environment, allowing you to take advantage of the underlying Flash infrastructure without having to worry about concepts like timelines, assets, and so on.
Flex is multi-platform—this means that, with some exceptions, you can run a Flex application on any platform that supports Adobe Flash Player. If your users run on Windows, OS X or Linux and their browsers have a recent version of the Flash Player plug-in installed, they will also be able to run your Flex applications without a problem.
Because Flex is open source, there is no cost associated with creating and distribution applications that are based on it.
You can download Adobe Flex SDK for free directly from the
Adobe website at
The Adobe Integrated Runtime (Adobe AIR) is a companion technology to the Flex framework that extends the functionality provided by the latter into desktop application development. With AIR, you can build Flex applications that can be deployed as native applications on your user's machines, thus gaining all the advantages of running in a desktop environment.
Like Flex, AIR is also cross-platform, which means that you can write your code once and immediately deploy it across multiple operating systems. Because they run natively rather than in a web browser, AIR applications also gain access to functionality that is usually restricted by the Flash Player's security model, such as local file manipulation, unrestricted access to the network, and so forth.
Flash Builder 4 is Adobe's IDE for developing Flex and AIR applications. Although Flash Builder 4 is not required in order to compile or run a Flex-based application, it significantly simplifies the process of Flex development by providing an integrated environment that includes code intelligence, real-time analysis, compilation support, live debugging and much more.
Flash Builder 4 is based on the open-source Eclipse IDE and can either be downloaded as a standalone product or as a plug-in for the latter. Like Eclipse, Flash Builder 4 is also crossplatform and runs on both Windows and OS X.
You can download a 60-day trial of Flash Builder 4 from the Adobe website at
Even though Flex is based on Flash, you don’t need to be proficient in the latter in order to use the former.
Flex uses a language called ActionScript3 (AS3), which is itself derived from the ECMAScript standard. ECMAScript is the same basic definition on which JavaScript is based—therefore, if you have any familiarity with browser programming, it's likely that you will find your bearings in AS3 very quickly.
Flex applications are based on the concept of component. A component defines a container of behaviors and, optionally, of a user interface representation. Components can be visual or non-visual, depending on whether the provide an interface of some kind (like a button) or just functionality of some kind (like a library for connecting to a remote server).
The visual structure of a component can be easily defined using MXML, Adobe’s specialized brand of XML. Other than the use of specific namespaces, MXML is nothing more than well-formed XML code; by nesting multiple components, you can create complex GUIs without ever writing a line of code.
Flash® Builder 4 makes creating new applications as easy as following the steps of its New Application Wizard. Simple select New Flex® Project from the File menu, then choose a name and type for your application. If you intend to write code that will be executed inside a browser, choose "Web" for your application type; if, on the other hand, you want to build a desktop application, choose "Desktop" instead.
Your newly-created Flex project will contain a component that represents the application's main entry point:
<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               xmlns:mx="library://ns.adobe.com/flex/mx">
    <fx:Declarations>
    </fx:Declarations>
</s:Application>
From here on, you can add more components to your application simply by typing the MXML code that represents them in the appropriate order. However, Flash Builder 4's strength resides in its visual layout editor, which allows you to arrange and wire the different components that make up your user interface using a WYSIWYG approach. For example, you can add a DataGrid object to show the contents of a data structure and then a button to fetch the data–all enclosed in a VGroup object to provide the overall layout:
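A minimal sketch of what the resulting MXML might look like (the IDs and the service call are invented for illustration; Flash Builder's generated code will differ):

<s:VGroup width="100%" height="100%">
    <mx:DataGrid id="resultsGrid" width="100%" height="100%"
                 dataProvider="{getItemsResult.lastResult}"/>
    <s:Button label="Fetch data"
              click="getItemsResult.token = itemService.getItems()"/>
</s:VGroup>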
Flex’s powerful layout capabilities, like the rest of the framework, are also designed to be developer-friendly and are almost entirely based on standard CSS, with a few exceptions designed to make automated positioning easier.
Flex provides a rich ecosystem of components that can be easily expanded to meet your needs. While many of its components are designed to closely mimic their HTML cousins, there are also a number that provide unique functionality. For example:
While Flex Charting and AdvancedDataGrid were only available in the Professional version of Flex Builder 3, they are now included in all versions of Flash Builder 4.
One of the most interesting features of Flex is that it is designed to provide powerful data connectivity capabilities in a variety of formats and protocols, including XML, SOAP, JSON and Adobe’s own AMF. In fact, in most cases you will be able to use Flash Builder 4 to connect your Flex application to a PHP backend without having to write a single line of code in either!
In order to use Flash Builder 4's PHP-aware functionality through AMF, you need to have direct access to the root directory of a copy of your site—either through a network share or on your local computer. To get started, select Connect to Data/Service… from the Data menu, then choose PHP from the list of available connectors. Flash Builder 4 will ask you to confirm that your project type is set to PHP, then specify both the location of your site's server root and its URL:
At this point, you can click "Validate Configuration" to make Flash Builder 4 run a simple test to determine whether it can access your site as expected, then OK to complete the set up process.
Now, Flash Builder 4 will ask you to choose a PHP class that will provide the entry point to which your Flex application will connect. If your code is not encapsulated in classes, you could create some simple stubs for the purpose of providing services to your Flash Builder 4 code. The wizard will automatically fill in the defaults for you based on the name of the PHP file that you select.
In order for this system to work, it is important that the
class you want to import be stored in a file of the same
name [e.g.: Index inside index.php]. Otherwise, Zend_AMF
will be unable to find it.
At this point, if you website does not include a copy of Zend_AMF, a Zend Framework module that Flash Builder 4 uses to marshal data, remote procedure calls and error management, you will be asked to download and install a copy. This is required because Flash Builder 4 makes use of Action Message Format (AMF), a protocol that PHP does not support by default.
Your application does not need to use Zend Framework in
order to take advantage of AMF—Flash Builder 4 will only
use Zend_AMF in order to communicate with your server,
independently of the rest of your code.
Flash Builder 4 will introspect your code and discover which methods the service makes available:
Once you click Finish, the wizard will create a series of classes inside your project that provide all the necessary functionality required to connect to your project and execute your web service.
Remember that it is your responsibility to provide a
security layer where required—for example, by passing a
secure key to the service as appropriate.
Your application is now capable of connecting to the PHP backend, but the data that the latter provides is not yet wired to any of the controls.
Luckily, this, too, is something that can be done without writing a line of code. You can, instead, use the Data/Services panel, visible in the bottom tray of the Flash Builder 4 window, where all the remote data services defined in your application are visible:
All you need to do in order to connect the data returned by a service call to any of your components is to simply drag it from the Data/Services panel to the component. Flash Builder 4 will ask you whether you intend to create a new call, or use an existing one. In the former case, you will first need to specify the type of data returned by the remote service call, because the data connection wizard has no way of determining it by means of static analysis of your code.
Flash Builder 4 can, however, auto-detect the data type returned by a service call by making a call to it, or you can specify it by hand. Where a sample service call will have no adverse effects on your PHP backend, allowing the wizard to determine the return type automatically is usually sufficient and produces all the infrastructure required to handle the data on the Flex side.
In order for your data to be used in a Flex application,
it must conform to all the appropriate AS3 rules—for
example, you cannont return objects with properties
whose names are reserved AS3 keywords like protected or
private, even if those are perfectly acceptable in PHP
Your Flex application now has access to all the data returned by your service. If, for example, you drag a service on to a DataGrid component, the latter will be automatically populated with all the appropriate data columns—all you need to do is remove those you don’t want displayed and rename the header of the others to the proper human-readable format:
Your application is now fully functional—if you execute it, you will see that the data service is automatically called as soon as the DataGrid object finishes loading. If the call is successful, the data is immediately loaded and displayed.
If you prefer to add a manual method of refreshing the information, you can simply drag the appropriate data call on to the button—this will create all the code needed so that, when the user clicks on it at runtime, the service will be called again and all the data automatically updated.
While much of the data types are interchangeable between AS3 and PHP, there are some notable differences.
Integer values in PHP can either be 32- or 64-bit long, whereas, in AS3, they are always 64 bits. Therefore, you must be prepared for the fact that a numeric value passed from AS3 to PHP may be represented as a float even if it is, in fact, just a large integer.
String values in AS3 are always Unicode-compliant. It is up to you to ensure Unicode compliance on the PHP side.
Array values in AS3 can only have contiguous numeric keys starting at zero. If your PHP arrays have string keys or non-contiguous numeric keys, they will be represented as objects in AS3.
You should avoid passing objects into AS3 that contain members whose keys are reserved keywords, as handling them will be inconvenient—and many of Flash Builder 4's facilities will refuse to work with them.
Under most circumstances, these issues are unlikely to affect your application because both AS3 and PHP have a significant amount of flexibility.
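A quick PHP-side illustration of the array rule (illustrative values only):

<?php
// Contiguous, zero-based numeric keys: arrives in AS3 as an Array.
$list = array('red', 'green', 'blue');

// String keys (or gaps in the numeric keys): arrives in AS3 as a plain Object.
$map = array('first' => 'red', 'third' => 'blue');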
AMF is not your only choice when it comes to external connectivity from Flash Builder 4—almost exactly the same functionality can just as easily be used to connect to an XML web service powered by SOAP.
You can start the process by selecting Connect To Data/ Service… from the Data menu and then choosing WSDL as the service type. This will bring up a dialog box that asks you to provide the URL of the service's WSDL specification:
Like before, Flash Builder 4 will fetch the WSDL file from the server and introspect it, extracting all the available remote procedure calls. You will then be able to drag and drop the data into your application like before.
Of course, SOAP is not the only way of retrieving XML data from a remote location. Flash Builder 4 provides facilities for introspecting a remote URL that simply returns a flat XML document and extracting information from it.
Once again, you will need to click on Connect To Data/ Service… from the Data menu and, this time, choose XML as the service type. Flash Builder 4 will ask you to provide the URL of the service you wish to access, invoke it and create a stub class in AS3 to encapsulate the data:
The resulting data provider will become available in the Data/ Services panel of Flash Builder 4's GUI, from where you can connect it to your components like before.
Be mindful of the fact that, when manipulation raw XML,
Flash Builder 4 has no way of determining whether your
service provides data in a consistent format. Therefore,
you should ensure that this is the case, or your service call
may unexpectedly fail at runtime.
JSON (JavaScript Object Notation) has rapidly become a very popular choice for web service development because of its simplicity, lightweight format and ease of use in a number of languages.
While PHP has had built-in support for JSON since version 5.2.0, AS3 does not have any facilities for manipulating JSON data. Luckily, Flex provides a number of different ways for using JSON.
To start, you will need a PHP script that takes zero or more parameters either through a GET or POST HTTP transaction and outputs JSON-formatted data. For example:
<?php
function getTimeline($user) {
    $data = json_decode(
        file_get_contents(". json?screen_name=" . urlencode($user)));
    foreach($data as $v) {
        unset($v->user->protected);
    }
    return $data;
}

echo json_encode(getTimeline($_GET['user']));
The simplest way of connecting to this service consists of once again using the Data/Service Connector wizard to access arbitrary HTTP-based web services. Choosing "HTTP" from the Connect To Data/Service… menu will result in this dialog, where Flash Builder 4 asks for the URL of the service and its parameters:
Once you provide the correct information and click on Finish, the wizard will once again create all the infrastructure required to run your service and make it available as before. The HTTP Data/Service Connection wizard also supports XML data.
In most cases, you will want to develop your application in Debug mode. This causes the Flex compiler to add all sorts of useful information that can be used to debugger to help you address any issues that may occur within your application.
However, when it comes time to deploy your application for production, you will want to switch to a Release build so that you can end up with the most compact and efficient codebase possible. You can do so by selecting Export Release Build... from the Build menu.
Exporting a Release build causes a new directory, called binrelease, which contains a number of different files:
Most of these files play a support role to your application—in fact, the only one you will normally interact with is the host HTML file that contains the code required to display your application.
You can change the template used to generate your host
HTML file by editing the html-template/index.template.
html file in your application's root directory.
It is sometimes useful to pass data, like request parameters, to your Flex application as it is being initialized on the client browser.
This can be accomplished by introducing a special parameter in the HTML code that causes the application to be embedded in the web page. In reality, Flex provides a series of convenient wrappers that make the job even easier; if you look inside your HTML template, you will find a portion of code that looks like:
var flashvars = {};
...
swfobject.embedSWF(
    "Fle.swf", "flashContent",
    "100%", "100%",
    swfVersionStr, xiSwfUrlStr,
    flashvars, params, attributes);
All you need to do is change the content of flashVars to suit your need—that same data will be made available inside the application as the FlexGlobals.topLevelApplication.parameters object, where you can peruse it as needed.
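For example, if flashvars carries a user entry, the application side can read it back roughly like this (a sketch; the names are invented):

// In the host HTML template:
//   var flashvars = { user: "alice" };

import mx.core.FlexGlobals;

// Anywhere after the application has initialized:
var user:String = FlexGlobals.topLevelApplication.parameters.user;
trace("user = " + user);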
When running in the browser, Flash employs a very strict security model that places your code in a sandbox through which all network and disk activities are regulated.
For more information about the Flash security model, read
the "Security" section under "Application architecture" in
the Flash Builder help online at http://help.adobe.com/en_US/Flex/4.0/UsingSDK/
The sandbox is turned off during debugging—therefore, you don't normally become aware of it until you run your code in production mode and find out that your application cannot access any of its remote data.
Flash supports a number of different sandboxes, depending on what kind of data your application needs to deal with. Most of the time, you will want to use the local-with-networking sandbox, which allows your application to access remote locations, but denies all access to local files.
By default, the sandbox model prevents an application from accessing any resources outside of its own domain, unless that domain specifically grants access with a crossdomain.xml file. Therefore, it is important to remember that you may not be able to access information across different domains.
Adobe AIR applications usually run in the local-trusted
sandbox and, therefore, are not subject to connectivity
restrictions.
While much of the functionality provided by Flex can be accessed without writing significant amounts of AS3 code, it is entirely possible to address extremely complex tasks entirely within the Flex runtime — systems as complex as encryption engines and image compression libraries have been built in pure AS3 and are used in production every day (in fact, the entire Flex framework itself is built in AS3 as well).
Through a featured called ASDoc, Flash Builder 4 allows you to write inline comments that can be used to document your entire codebase. The syntax used by ASDoc is very similar to the PHPDoc syntax that is commonly used to comment PHP code; for example:
/**
 * Performs some important function
 *
 * @param event The event dispatched to this method
 */
protected function handler(event:FlexEvent):void {
    // Do something
}
Flash Builder 4 will automatically scan your code and add any information you write as part of your ASDoc blocks to its code intelligence features; these, in turn, will display the information as you use your code, providing you with a handy dynamic reference for your classes and methods:
|
http://refcardz.dzone.com/refcardz/getting-started-php-and-flex
|
CC-MAIN-2014-41
|
refinedweb
| 3,219
| 51.72
|
.... Hi friend,
Please send me code and explain in detail.
Visit for more information.
Thanks.
Can you give me Hibernate one to one example?
Can you give me Hibernate one to one example? Hello there,
Can you give me a Hibernate one-to-one example that explains the concept and the mapping in Hibernate?
Thanks in advance
|
http://www.roseindia.net/tutorialhelp/comment/12902
|
CC-MAIN-2014-35
|
refinedweb
| 1,358
| 58.89
|
Introductory To OpenZeppelin
What Is OpenZeppelin?
OpenZeppelin is a library of reusable smart contracts for use with Ethereum and other EVM and eWASM blockchains. The contracts focus on secure, simple, open source code. They are continuously tested and community reviewed to ensure they follow industry standards and security best practices. As a developer, it is difficult to create any piece of code from scratch, especially a contract. Through the use of OpenZeppelin's inheritable contracts, you have a base to start from and can build complex features with little to no effort.
OpenZeppelin vs ZeppelinOS
Zeppelin solutions provides two different frameworks that are often confused to be the same thing. OpenZeppelin is a series of open source contracts to inherit into your code. In contrast, ZeppelinOS is a platform of utilities to securely manage your smart contracts. Ideally you use them together. In this tutorial series, we focus on OpenZeppelin.
Types of Contracts
OpenZeppelin has a variety of contracts to meet your needs divided into the following categories:
- Access: Roles and privileges.
- Crowdsale: Creating a smart contract for use in a crowdsale.
- Cryptography: Protecting your information.
- Drafts: Contracts that are currently in testing by the OpenZeppelin team.
- Introspection: Interface support.
- Lifecycle: Managing the behaviour of your contract.
- Math: Perform operations without overflow errors.
- Ownership: Manage ownership throughout your contract.
- Payment: How your contract releases tokens.
- Tokens: Creating tokens and protecting them.
- Utilities: Other contracts to assist you.
You inherit or combine OpenZeppelin contracts with your own contracts, serving as a base for you to build from. Later in the series, we will explore the uses of each of these contracts.
How To Download
To begin, you need to have Node.js and Truffle installed on your machine. To work with OpenZeppelin you should be familiar with Solidity, the programming language for smart contracts. The "Remix IDE - Your first smart contract" article is a good place to start.
In a directory of your choice make a new project folder and initialize Truffle in it.
mkdir myproject cd myproject truffle init
Now install the OpenZeppelin library into your project's root directory. Use the --save-exact option to ensure that all dependencies are pinned to an exact version, since breaking changes (changes in software that can potentially make other components fail) might occur when versions are updated.
npm init -y npm install --save-exact openzeppelin-solidity
OpenZeppelin is now installed. The library of contracts is stored in the node_modules/openzeppelin-solidity/contracts folder within your project.
To use the library, add an import statement at the beginning of the contract specifying which one you want to use.
import "openzeppelin-solidity/contracts/ownership/Ownable.sol";
Conclusion
OpenZeppelin allows you to write more complex and secure contracts using their variety of base contracts. Less time spent building the foundation and more time to optimize details.
Documentation and Next Steps:
|
https://kauri.io/openzeppelin-part-1-introductory/c3ef30099d1e404180067ed4f656aad2/a
|
CC-MAIN-2020-34
|
refinedweb
| 476
| 50.33
|
Lately, PostCSS is the tool making the rounds on the front-end side of web development.
PostCSS was developed by Andrey Sitnik, the creator of Autoprefixer. It is a Node.js package developed as a tool to transform all of your CSS using JavaScript, thereby achieving much faster build times than other processors.
Despite what its name seems to imply, it is not a post-processor (nor is it a pre-processor), but rather it is a transpiler to turn PostCSS-specific (or PostCSS plugin-specific, to be more precise) syntax into vanilla CSS.
With that being said, this does not mean that PostCSS and other CSS processors can’t work together. As a matter of fact, if you’re new to the whole world of CSS pre/post-processing, using PostCSS along with Sass can save you many headaches, which we’ll get into shortly.
PostCSS is not a replacement for other CSS processors; rather, look at it as another tool that may come in handy when needed, another addition to your toolset.
Use of PostCSS has recently begun to increase exponentially, with it being used today by some of the biggest tech industry businesses, like Twitter, JetBrains, and Wikipedia. Its widespread adoption and success is largely due to its modularity.
Plugins, Plugins, Plugins
PostCSS is all about plugins.
It allows you to choose the plugins you will use, ditching unneeded dependencies, and giving you both a quick and lightweight setup to work with, with the basic characteristics of other preprocessors. Also, it allows you to create a more heavily customized structure for your workflow while keeping it efficient.
As of the date of writing of this post, PostCSS has a repository of more than 200 plugins, each of them in charge of different tasks. On the PostCSS’ GitHub repository, plugins are categorized by “Solve global CSS problems,” “Use future CSS, today,” “Better CSS readability,” “Images and fonts,” “Linters,” and “Others.”
However, in the plugins directory you will find a more accurate categorization. I advise you take a look there yourself to get a better idea of the capabilities of a few of them; they are quite broad and rather impressive.
You’ve probably heard of the most popular PostCSS plugin, Autoprefixer, which is a popular standalone library. The second most popular plugin is CSSNext, a plugin that allows you to use the latest CSS syntax today, such as the CSS’ new custom properties, for example, without worrying about the browser support.
Not all PostCSS plugins are so groundbreaking though. Some simply give you capabilities that probably come out of the box with other processors. Take the parent selector for example. With Sass, you can start using it immediately as you install Sass. With PostCSS, you need to use the postcss-nested-ancestors plugin. The same could apply to extends or mixins.
So, what’s the advantage of using PostCSS and its plugins? The answer is simple - you can pick your own battles. If you feel like the only part of Sass you’re ever going to use is the parent selector, you can save yourself the stress of implementing something like a Sass library installation in your environment to compile your CSS, and speed up the process by using only PostCSS and the postcss-nested-ancestors plugin.
That is just one example use case for PostCSS, but once you start checking it out yourself, you’ll undoubtedly realize many other use cases for it.
Basic PostCSS Usage
First, let’s cover some PostCSS basics and take a look at how it is typically used. While PostCSS is extremely powerful when used with a task runner, like Gulp or Grunt, it can also be used directly from the command line by using the postcss-cli.
Let’s consider a simple example use case. Assume we’d like to use the postcss-color-rgba-fallback plugin in order to add a fallback HEX value to all of our RGBA formatted colors.
Once we’ve NPM installed postcss, postcss-cli, and postcss-color-rgba-fallback, we need to run the following command:
postcss --use postcss-color-rgba-fallback -o dist/css/all.css src/css/all.css
With this instruction, we’re telling PostCSS to use the postcss-color-rgba-fallback plugin, process whatever CSS is inside src/css/all.css, and output it to dist/css/all.css.
OK, that was easy. Now, let’s look at a more complex example.
Using PostCSS Along with Task-runners and Sass
PostCSS can be incorporated into your workflow rather easily. As mentioned already, it integrates perfectly well with task runners like Grunt, Gulp, or Webpack, and it can even be used with NPM scripts. An example of using PostCSS along with Sass and Gulp is as simple as the following code snippet:
var gulp = require('gulp'),
    concatcss = require('gulp-concat-css'),
    sass = require('gulp-sass'),
    postcss = require('gulp-postcss'),
    cssnext = require('postcss-cssnext');

gulp.task('stylesheets', function () {
  return (
    gulp.src('./src/css/**/*.scss')
      .pipe(sass.sync().on('error', sass.logError))
      .pipe(concatcss('all.css'))
      .pipe(postcss([ cssnext() ]))
      .pipe(gulp.dest('./dist/css'))
  )
});
Let’s deconstruct the above code example.
It stores references to all of the needed modules (Gulp, Concat CSS, Sass, PostCSS, and CSSNext) in a series of variables.
Then, it registers a new Gulp task called stylesheets. This task watches for files that are in ./src/css/ with the extension .scss (regardless of how deep in the subdirectory structure they are), Sass compiles them, and concatenates all of them into a single all.css file.

Once the all.css file is generated, it is passed to PostCSS to transpile all of the PostCSS (and plugin) related code to the actual CSS, and then the resulting file is placed in ./dist/css.
OK, so setting up PostCSS with a task runner and a preprocessor is great, but is that enough to justify working with PostCSS in the first place?
Let’s put it like this: While Sass is stable, mature, and has a huge community behind it, we might want to use PostCSS for plugins like Autoprefixer, for example. Yes, we could use the standalone Autoprefixer library, but the advantages of using Autoprefixer as a PostCSS plugin is the possibility to add more plugins to the workflow later on and avoid extraneous dependencies on a boatload of JavaScript libraries.
This approach also allows us to use unprefixed properties and have them prefixed based on the values fetched from APIs, like the one from Can I Use, something that is hardly achievable using Sass alone. This is pretty useful if we’re trying to avoid complex mixins that might not be the best way to prefix code.
The most common way to integrate PostCSS into your current workflow, if you’re already using Sass, is to pass the compiled output of your .sass or .scss file through PostCSS and its plugins. This will generate another CSS file that has the output of both Sass and PostCSS.

If you’re using a task runner, using PostCSS is as easy as adding it to the pipeline of tasks you currently have, right after compiling your .sass or .scss file (or the files of your preprocessor of choice).
PostCSS plays well with others, and can be a relief for some major pain points we as developers experience every day.
Let’s take a look at another example of PostCSS (and a couple of plugins likes CSSNext and Autoprefixer) and Sass working together. You could have the following code:
:root {
  $sass-variable: #000;
  --custom-property: #fff;
}

body {
  background: $sass-variable;
  color: var(--custom-property);

  &:hover {
    transform: scale(.75);
  }
}
This snippet of the code has vanilla CSS and Sass syntax. Custom properties, as of the day of the writing of this article, are still in Candidate Recommendation (CR) status, and here’s where the CSSNext plugin for PostCSS comes into action.
This plugin will be in charge of turning stuff like custom properties into today’s CSS. Something similar will happen to the transform property, which will be auto-prefixed by the Autoprefixer plugin. The code written earlier will then result in something like:
body {
  background: #000;
  color: #fff;
}

body:hover {
  -webkit-transform: scale(.75);
  transform: scale(.75);
}
Authoring Plugins for PostCSS
As mentioned earlier, an attractive feature of PostCSS is the level of customization it allows. Thanks to its openness, authoring a custom plugin of your own for PostCSS to cover your particular needs is a rather simple task if you’re comfortable writing JavaScript.
The folks at PostCSS have a pretty solid list to start, and if you’re interested in developing a plugin check their recommended articles and guides. If you feel like you need to ask something, or discuss anything, Gitter is the best place to start.
PostCSS has its own API and a pretty active base of followers on Twitter. Along with the other community perks mentioned earlier in this post, this is what makes the plugin creation process so much fun and such a collaborative activity.
So, to create a PostCSS plugin, we need to create a Node.js module. (Usually, PostCSS plugin folders in the node_modules/ directory are preceded by a prefix like “postcss-”, which makes it explicit that they are modules that depend on PostCSS.)

For starters, in the index.js file of the new plugin module, we need to include the following code, which will be the wrapper of the plugin’s processing code:
var postcss = require('postcss');

module.exports = postcss.plugin('replacecolors', function replacecolors() {
  return function(css) {
    // Rest of code
  }
});
We named the plugin replacecolors. The plugin will look for a keyword deepBlackText and replace it with the #2e2e2e HEX color value:
var postcss = require('postcss');

module.exports = postcss.plugin('replacecolors', function replacecolors() {
  return function(css) {
    css.walkRules(function(rule) {
      rule.walkDecls(function(decl, i) {
        var declaration = decl.value;
        if (declaration.indexOf('deepBlackText') !== -1) {
          // Replace the whole declaration with color: #2e2e2e;
          decl.prop = 'color';
          decl.value = '#2e2e2e';
        }
      });
    });
  }
});
The previous snippet just did the following:
- Using walkRules() it iterated through all of the CSS rules that are in the current .css file we’re going through.
- Using walkDecls() it iterated through all of the CSS declarations inside each rule.
- Then it stored the declaration’s value in the declaration variable and checked whether the string deepBlackText was in it. If it was, it replaced the whole declaration with the following CSS declaration: color: #2e2e2e;.
Once the plugin is ready, we can use it like this directly from the command line:
postcss --use postcss-replacecolors -o dist/css/all.css src/css/all.css
Or, for example, loaded in a Gulpfile like this:
var replacecolors = require('postcss-replacecolors');
Should I Ditch My Current CSS Processor in Order to Use PostCSS?
Well, that depends on what you’re looking for.
It is common to see both Sass and PostCSS used at the same time, since it is easier for newcomers to work with some of the tools that pre/post-processors offer out of the box, along with PostCSS plugins’ features. Using them side-by-side can also avoid rebuilding a predefined workflow with relatively new, and most likely unknown, tools, while providing a way to maintain current processor-dependant implementations (like Sass mixins, extends, the parent selector, placeholder selectors, and so on).
Give PostCSS a Chance
PostCSS is the hot (well, sort of) new thing in the front-end development world. It has been widely adopted because it is not a pre/post-processor per se, and it is flexible enough to adapt to the environment it is being inserted into.
Much of the power of PostCSS resides in its plugins. If what you’re looking for is modularity, flexibility, and diversity, then this is the right tool for the job.
If you’re using task runners or bundlers, then adding PostCSS to your current flow will most likely be a piece of cake. Check the installation and usage guide, and you will probably find an easy way to integrate it with the tools you’re already using.
Many developers say it is here to stay, at least for the foreseeable future. PostCSS can have a great impact on how we structure our present-day CSS, and that could potentially lead to a much greater adoption of standards across the front-end web development community.
|
https://www.toptal.com/front-end/postcss-sass-new-play-date?utm_source=CSS-Weekly&utm_campaign=Issue-247&utm_medium=web
|
CC-MAIN-2017-30
|
refinedweb
| 2,045
| 52.7
|
Didier Borel2,837 Points
why is this function not working
can someone tell what is wrong here
def squared(arguement):
    if type(arguement) == int:
        arguement ** 2
    else:
        len(arguement) * len(arguement)
3 Answers
Didier Borel2,837 Points
Behar, thanks for your quick response. I am tied up with something else at the moment, but I will look later. Thanks for you effort, and I will play around with your second answer and let you know
behar10,788 Points
I can definitely see your idea here, but you should be using try and except, because the challenge actually wants you to square strings that can be turned into integers as well.
So say you have squared("2"), it should return 4.
Also squared(2) should return 4. So instead of checking for type, simply try to turn the argument into an integer, and if that can be done, square it; otherwise multiply the length of the string by itself.
behar10,788 Points
behar10,788 Points
Well, I am not going to show the finished code, but I can give you some pointers. There is a function that tries to turn something into an int; it will throw a ValueError if it can't, so look here.
So if we have:
Try to see if you can solve it with that, else feel free to write back!
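For reference, a minimal sketch of the try/except approach described above (an illustration only, not the official challenge solution):

def squared(argument):
    try:
        # Works for ints and for strings such as "2" that can be turned into ints
        return int(argument) ** 2
    except (ValueError, TypeError):
        # Otherwise multiply the length of the argument by itself
        return len(argument) * len(argument)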
|
https://teamtreehouse.com/community/why-is-this-function-not-working-3
|
CC-MAIN-2020-40
|
refinedweb
| 223
| 73.31
|
Write a Java program to input radius of circle and find diameter, circumference and area of circle. Logic to find diameter circumference and area of circle.
Input
Enter radius of circle : 10
Output
Diameter of circle is : 20
Circumference of circle is : 62.8318
Area of circle is : 314.159
Required knowledge
Arithmetic operators, Data types, Basic Input/Output
Components of circle
Before we write Java code, let us recall basic components of circle. And how they are mathematically related with each other.
- A circle has a radius, denoted with letter r in mathematics.
- Circle has a diameter, denoted by letter d. Relation between diameter and radius of circle is given by the equation
d = 2 * r.
- Circumference is the boundary of circle. Formulae for calculating circumference is written as
circumference = 2 * PI * r. Here PI is constant equal to 3.141592653.
- Area of circle is evaluated by equation
area = PI * r * r.
Program to find diameter circumference and area of circle
/**
 * Program to find diameter, circumference and area of circle.
 */
import java.util.Scanner;

public class Circle
{
    public static void main(String[] args)
    {
        // Declare constant for PI
        final double PI = 3.141592653;

        Scanner in = new Scanner(System.in);

        /* Input radius of circle from user. */
        System.out.println("Please enter radius of the circle : ");
        int r = in.nextInt();

        /* Calculate diameter, circumference and area. */
        int d = 2 * r;
        double circumference = 2 * PI * r;
        double area = PI * r * r;

        /* Print diameter, circumference and area of circle. */
        System.out.println("Diameter of circle is : " + d);
        System.out.println("Circumference of circle is : " + circumference);
        System.out.println("Area of circle is : " + area);
    }
}
Here we are declaring a constant variable PI through the final keyword.

Instead of declaring our own constant for PI, we can use Java's predefined constant for PI, defined in the Math class, i.e. Math.PI.

Math is a built-in class in Java, available in the java.lang package. PI is a static final double variable of Math. Because PI is static, we can use it without creating an object of the Math class. And because PI is also final, we cannot change its value in our program.
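For example, the calculation lines of the program could be rewritten like this (a sketch, not part of the original listing):

double circumference = 2 * Math.PI * r;
double area = Math.PI * r * r;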
Output
Please enter radius of the circle :
20
Diameter of circle is : 40
Circumference of circle is : 125.66370612
Area of circle is : 1256.6370612
Happy coding 😉
|
https://codeforwin.org/2018/06/java-program-to-find-diameter-circumference-and-area-of-circle.html
|
CC-MAIN-2019-04
|
refinedweb
| 387
| 51.34
|
Perl comes with a built-in debugger. Although you could use third-party debuggers such as perltkdb and ActiveState's Komodo, which provide a graphical interface, you already have everything you need if you have Perl. In this article, I show you how to use the Perl debugger to execute arbitrary Perl statements, create and examine variables, and step through and set breakpoints in programs so that you can start using the Perl debugger right away. As you get comfortable with the basics, you can start to explore its other features.
The Perl debugger is started by specifying the -d switch on the command
line. The simplest way to run the debugger is with a command-line script using
the -e option to perl.
The script can be anything that is valid Perl. In this case, I use the single statement 0 since it is easy to type and will not get in my way as I explore the debugger. This example script is only half as long as the one used in the perldebug man page.
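A representative invocation of this kind (my own example of the technique described here, not a listing from the original article) looks like this:

$ perl -d -e 0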
The debugger starts up and displays some initial information about itself, including its name (perl5db.pl), its version (1.07), and how you can get more information about the debugger.
The debugger starts at the first executable statement in the script, in this case 0, then waits for my instructions. It has not executed anything yet. I can tell where the debugger is because it displays quite a bit of information in its prompt. The prompt tells me that the current namespace is main::, the name of the script is -e (since I invoked it from the command line), and that the debugger is at line 1, which has the statement 0. The debugger is waiting for a command at the DB<1> prompt, which tells me that I am at the first debugger instruction.
I do not especially care about this simple script since I really want to test some Perl statements without creating a script to go around them. The debugger allows you to enter arbitrary statements. Anything that does not look like an instruction to the debugger is evaled as Perl code.
I created an array, @order, and assigned it a list of values. The debugger accepted that statement, evaled it, and prompted me for another instruction. If I enter something that is not a valid Perl statement, the debugger complains and then continues.
Now that I have created @order, I want to examine it to check its contents. The debugger has several commands to let me see what is happening in my program. The x command allows me to examine the variable that I specify.
The debugger pretty prints the array in two columns. The first column is the element index, and the second column is the corresponding value. If I had done this with a hash, the debugger still pretty prints it as a list, but in key-value pairs. Since a hash is unordered, the indices in the first column do not mean much to me.
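To illustrate, a session fragment along these lines might look like the following (the array values are invented for illustration, and the exact formatting varies slightly between debugger versions):

  DB<1> @order = qw(fred barney betty wilma)

  DB<2> x @order
0  'fred'
1  'barney'
2  'betty'
3  'wilma'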
Even when I create and examine a scalar I see the two column display of the variable when I examine it, although a scalar only ever has one value.
I can even examine more than one variable at a time, even though the debugger makes me remember in which order I specified them.
I can also use the p command, which does the same thing as the perl builtin print() function, to print the values in these variables, rather than dumping the variables as the x command does.
By this time, I think I have forgotten which variables I have defined, but I can use the V command to pretty print all of the variables in a package, defaulting to main::. Try this yourself you may be surprised how many variables are actually defined in main::. I can also limit the variables that V dumps by specifying the package and variables I want to see.
Notice that I do not need to specify the variable symbol, such as $, @, or % in front of the variable name. The V command shows me all values for any variable with that name, including variables without special symbols.
Try opening a file onto a filehandle named pi and see what V reports.
To make things a bit simpler, I created another variable in a different package by specifying the fully qualified name in the declaration. When I use the V command for that package, I only see the variables in that package.
The X command does the same thing as the V command, although it defaults to the current package rather than main::.
For extra credit, create a lexical variable using my() and try the V command again. Where is the lexical variable? *
Now that I have shown you some of the basics for using the Perl debugger, let's invoke it on a real (toy) script, which I call test.pl.
I invoke the debugger as before.
Again, the debugger stops right before the first executable statement and tells me the current package, the name of the script, the current line number, and current statement. The first thing that I probably want to do is to execute this statement, unless I think there are bugs in the program before I even start running it. I can single step through the program with the s command. The debugger only executes the next statement and then prompts me for additional instructions. I can also examine variables with the x command, or use any other debugger commands.
If the next statement includes a subroutine call and I single-step through that statement with the s command, the debugger descends into the subroutine and I can single-step through each subroutine statement. I can skip this descent by using the n command, which executes the subroutine completely and returns control to me at the next program statement after the subroutine call. Simply typing a carriage return at the debugger prompt will repeat the last s or n command.
While I step through the program, I can examine the lines that are next or the lines that I just executed. The w command shows a window of lines around the current line and precedes them with their line numbers. It shows lines that are executable statements with a : after the line number, and shows the current line with a ==>.
If I want to examine a different window of lines, I can specify those lines with the l command. This command has many ways to specify which lines to show, but I am just going to specify a range of lines.
Once I am satisfied that at least some of the statements are executing correctly and as I expect them to, I want to let them execute automatically and stop at the parts of the program that I need to examine more closely. I can set breakpoints at certain lines so that the debugger stops and I can issue debugger commands. There are several ways to set breakpoints, including by line number and by line number if some condition is met (such as a variable having a certain value). All of them are explained in the perldebug man page. Here I am only going to set a breakpoint by line number so I can stop right before the print statement.
When I invoke the debugger, the same thing as before happens. Instead of single-stepping, I use the b command to set a breakpoint at line 9.
Now that I have set a breakpoint, I use the l command to look at the lines around the breakpoint. Notice that line 9, which has the breakpoint, has a special notation next to its line number to indicate the breakpoint.
To run the program up to the breakpoint without stopping for each statement, I use the c command. The program runs up to line 9, then pauses to prompt me for further commands. At this point, I can do any of the things that we have seen so far, including executing arbitrary Perl statements that can influence the state of the program. I can examine and change variable values if I choose.
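Putting the b and c commands together, a session fragment might look like this (the prompt numbers are illustrative, and the statement the debugger prints when it stops at line 9 is elided here):

  DB<1> b 9

  DB<2> c
main::(test.pl:9):   ...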
If I no longer need a breakpoint, I can delete it with the d command. When I look at the window again, I no longer see the breakpoint annotation next to line 9. If I want to clear all breakpoints, I use the D command.
Besides the commands I have shown in this article, you can do many other things with the debugger, including setting actions for a particular line, changing the behavior of the debugger by setting or changing options, and examining the debugger itself. Now that you know enough to start using the debugger right away, you can explore these other features on your own and start finding those bugs in your own programs.
Happy bug squashing. :)
[*] Obviously the V command dumps everything it finds in the symbol table. These are package variables, which is why you can specify a package with this command. Since lexical variables carry no package information and are not stored in the symbol table, the V command does not know they exist.
brian d foy has been a Perl user since 1994. He is founder of the first Perl users group, NY.pm, and Perl Mongers, the Perl advocacy organization. He has been teaching Perl through Stonehenge Consulting for the past three years, and has been a featured speaker at The Perl Conference, Perl University, YAPC, COMDEX, and Builder.com. Some of brian's other articles have appeared in The Perl Journal.
|
http://www.drdobbs.com/using-the-perl-debugger/184404744
|
CC-MAIN-2014-41
|
refinedweb
| 1,609
| 69.62
|
Problem
You need the length of a string.
Solution
Use string's length member function:
std::string s = "Raising Arizona";
int i = s.length( );
Discussion
Retrieving the length of a string is a trivial task, but it is a good opportunity to discuss the allocation scheme for strings (both wide and narrow character). strings, unlike C-style null-terminated character arrays, are dynamically sized, and grow as needed. Most standard library implementations start with an arbitrary (low) capacity, and grow by doubling the capacity each time it is reached. Knowing how to analyze this growth, if not the exact algorithm, is helpful in diagnosing string performance problems.
The characters in a basic_string are stored in a buffer that is a contiguous chunk of memory with a static size. The buffer a string uses is an arbitrary size initially, and as characters are added to the string, the buffer fills up until its capacity is reached. When this happens, the buffer grows, sort of. Specifically, a new buffer is allocated with a larger size, the characters are copied from the old buffer to the new buffer, and the old buffer is deleted.
You can find out the size of the buffer (not the number of characters it contains, but its maximum size) with the capacity member function. If you want to manually set the capacity to avoid needless buffer copies, use the reserve member function and pass it a numeric argument that indicates the desired buffer size. There is a maximum size on the possible buffer size though, and you can get that by calling max_size. You can use all of these to observe memory growth in your standard library implementation. Take a look at Example 4-9 to see how.
Example 4-9. String length and capacity
#include <iostream>
#include <string>

using namespace std;

int main( ) {

   string s = "";
   string sr = "";
   sr.reserve(9000);

   cout << "s.length = " << s.length( ) << '\n';
   cout << "s.capacity = " << s.capacity( ) << '\n';
   cout << "s.max_size = " << s.max_size( ) << '\n';
   cout << "sr.length = " << sr.length( ) << '\n';
   cout << "sr.capacity = " << sr.capacity( ) << '\n';
   cout << "sr.max_size = " << sr.max_size( ) << '\n';

   for (int i = 0; i < 10000; ++i) {
      if (s.length( ) == s.capacity( )) {
         cout << "s reached capacity of " << s.length( )
              << ", growing...\n";
      }
      if (sr.length( ) == sr.capacity( )) {
         cout << "sr reached capacity of " << sr.length( )
              << ", growing...\n";
      }
      s += 'x';
      sr += 'x';
   }
}
With Visual C++ 7.1, my output looks like this:
s.length = 0
s.capacity = 15
s.max_size = 4294967294
sr.length = 0
sr.capacity = 9007
sr.max_size = 4294967294
s reached capacity of 15, growing...
s reached capacity of 31, growing...
s reached capacity of 47, growing...
s reached capacity of 70, growing...
s reached capacity of 105, growing...
s reached capacity of 157, growing...
s reached capacity of 235, growing...
s reached capacity of 352, growing...
s reached capacity of 528, growing...
s reached capacity of 792, growing...
s reached capacity of 1188, growing...
s reached capacity of 1782, growing...
s reached capacity of 2673, growing...
s reached capacity of 4009, growing...
s reached capacity of 6013, growing...
sr reached capacity of 9007, growing...
s reached capacity of 9019, growing...
What is happening here is that the buffer for the string keeps filling up as I append characters to it. If the buffer is full (i.e., length = capacity), a new, larger buffer is allocated and the original string characters and the newly appended character(s) are copied into the new buffer. s starts with the default capacity of 15 (results vary by compiler), then grows by about half each time.
If you anticipate significant growth in your string, or you have a large number of strings that will need to grow at least modestly, use reserve to minimize the amount of buffer reallocation that goes on. It's also a good idea to experiment with your standard library implementation to see how it handles string growth.
Incidentally, when you want to know if a string is empty, don't check length against zero, just call the empty member function. It is a const member function that returns true if the length of the string is zero.
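In other words, prefer something like the following trivial sketch:

#include <string>

bool isBlank(const std::string& s) {
    return s.empty();   // preferred over checking s.length() == 0
}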
|
https://flylib.com/books/en/2.131.1/getting_the_length_of_a_string.html
|
CC-MAIN-2019-04
|
refinedweb
| 707
| 59.8
|
However, even making the credit retroactive may fail to save some smaller biodiesel producers who have thin cash cushions, Frohlich said. Even if the Senate passes the bill in the first few months of 2010, many producers will go out of business as they will be unable to produce the renewable fuel at a cost competitive with petroleum diesel, according to the NBB.
“The smaller biodiesel plants don’t really have the operating capital to continue to produce fuel that no one will buy,” he said. “There is no doubt this is going to sting - there are going to be layoffs.”
The House of Representatives passed a version of the bill on 9 December and referred it to the Senate on the following day, said Steve Tomaszewski, press secretary for the measure's author and co-sponsor, Rep. John Shimkus (Republican-Illinois).
The bill would modify the biodiesel tax credit, making it a production tax credit instead of a blender tax credit, and would extend the credit, which is set to expire on 31 December, for five years.
The House of Representatives plans to start up again on 11 January. The Senate would come back to work in
Additional reporting by Brian For
|
http://www.icis.com/Articles/2009/12/17/9320276/congress-fails-to-extend-us-biodiesel-tax-credit.html
|
CC-MAIN-2014-35
|
refinedweb
| 204
| 57.5
|
A translation from Norvig's PAIP Common Lisp…
""" In [4]: run gps.py Goal: son at school Goal: car works Goal: shop knows problem Goal: in communication with shop Goal: know phone number Action: look up number Action: telephone shop Action: tell shop problem Goal: shop has money Action: give shop money Action: shop installs battery Action: drive son to school State: have phone book and know phone number and in communication with shop and shop knows problem and shop has money and car works and son at school True """
import textwrap

def splitrules(rules):
    return rules and rules.split(" and ")

class op(object):
    def __init__(self, action, preconds, achievements=[], costs=[]):
        self.action = action
        self.precons = splitrules(preconds)
        self.achievements = splitrules(achievements)
        self.costs = splitrules(costs)

school_ops = [op("drive son to school", "son at home and car works",
                 "son at school", "son at home"),
              op("shop installs battery",
                 "car needs battery and shop knows problem and shop has money",
                 "car works", costs="car needs battery"),
              op("tell shop problem", "in communication with shop",
                 "shop knows problem"),
              op("telephone shop", "know phone number",
                 "in communication with shop"),
              op("look up number", "have phone book", "know phone number"),
              op("give shop money", "have money", "shop has money", "have money")]

def achieve(state, goal, ops, indent=0, pending=set()):
    if goal in state:
        return True
    if pending and goal in pending:  # stop oscillating goals
        print "Oscillating subgoal"
        return False
    for op in ops:
        if goal in op.achievements:
            savedstate = state[:]
            for precon in op.precons:
                if not precon in state:
                    print ' '*indent + "Goal:", precon
                    pending.add(goal)
                    if not achieve(state, precon, ops, indent+2, pending):
                        state = savedstate
                        print "Failed goal:", precon
                        return False
                    pending.remove(goal)
            print ' '*indent + "Action:", op.action
            for achievement in op.achievements:
                state.append(achievement)
            for cost in op.costs:
                state.remove(cost)
            return True
    return False

def gps(state="", goals="", operations=school_ops):
    print "Goal:", goals
    state = splitrules(state)
    success = True
    for goal in splitrules(goals):
        success = success and achieve(state, goal, operations, 2)
    print 'State:',
    print textwrap.fill(' and '.join(state), 50)
    return success

print gps(state = "son at home and car needs battery"
                + " and have money and have phone book",
          goals = "son at school")
|
https://rcjp.wordpress.com/2006/08/05/gps-general-problem-solver/
|
CC-MAIN-2017-26
|
refinedweb
| 369
| 58.11
|
On Wed, 2008-08-06 at 08:25 -0700, Greg KH wrote:
> On Wed, Aug 06, 2008 at 10:37:06AM +0100, tvrtko.ursulin@sophos.com wrote:
> > Greg KH wrote on 05/08/2008 21:15:35:
> > > ....
> > > > > How long does this whole process take? Seriously is it worth the
> > > > > added kernel code for something that is not measurable?
> > > >
> > > > Is it worth having 2 context switches for every open when none are
> > > > needed? I plan to get numbers on that.
> > >
> > > Compared to the real time it takes in the "virus engine"? I bet it's
> > > totally lost in the noise. Those things are huge beasts with thousands
> > > to hundreds of thousands of context switches.
> >
> > No, because we are talking about a case here where we don't want to do any
> > scanning. We want to detect if it is procfs (for example) as quickly as
> > possible and don't do anything. Same goes for any other filesystem where
> > it is not possible to store arbitrary user data.
>
> See previous messages about namespaces and paths for trying to figure this
> kind of information out in a sane way within the kernel.

Didn't I already go over this? The patch for FS exclusions would not be
namespace based, rather dentry->d_inode->i_sb->fstype->name matching.
Lets not start name based discussions at this point in time. Those
patches weren't proposed on this go and reading the write up of both of
the name based items (I think number 11 and 12) one I outright reject
and the other will require future discussion.

-Eric
|
http://lkml.org/lkml/2008/8/6/289
|
CC-MAIN-2015-32
|
refinedweb
| 258
| 74.39
|
Answered by:
Xamarin.Forms - conditional compilation not working
Question
- User258412 posted
My App1 is a Xamarin Forms solution.
* In App1.Android no compilation directive is defined.
* In App1.UWP the WINDOWS_UWP directive is defined.
* In App1 (shared code) I am trying to execute platform specific code using:
#if __ANDROID__
    // Android-specific code
#elif WINDOWS_UWP
    // UWP-specific code
#else
    // Other
#endif
..but compiler fails to recognise the directives whatever platform I choose for compilation... I also tried to add a specific extra Shared Project but it gave the same result....
What is wrong, please helpMonday, May 25, 2020 3:35 PM
Answers
-
All replies
- User369979 posted
You define this conditional symbol on each separate project but it doesn't work on the Forms project. Forms has its own conditional compilation symbols. Try right-clicking the Forms project; you will see them in the build section. If you want to trigger platform-specific code, try this code:
if (Device.RuntimePlatform == Device.Android)
{
}
else if (Device.RuntimePlatform == Device.UWP)
{
}
else
{
}

Tuesday, May 26, 2020 3:49 AM
- User258412 posted
Thanks, yes this works for conditional execution of code in runtime but I believe is not the same as during compilation..
I was trying to follow the instructions given here
My cross platform project is about using the USB-serial-port, retrieve some data from external hardware and present in UI. I was planning to reuse 100% of the code for UI and business logic but for the USB-port I need to access platform specific namespaces (API:s) for UWP, Android etc.
Maybe I missed something here, but how could this be achieved if not by conditional compiling..? Are you saying this is not suppose to work at all for Forms, if so when/how could it be used, any example?Tuesday, May 26, 2020 7:32 AM
- User369979 posted
The scenario you described is much like a PCL project. But we now all use .NET Standard Forms projects to establish cross-platform applications. We can't use native iOS or Android code directly in a Forms project. However, the dependency service helps us achieve this. We don't even need to know which platform the application is running on. We only need to call the dependency service to consume different code on each specific platform.

Wednesday, May 27, 2020 8:39 AM
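As a rough sketch of the dependency-service pattern described above (the interface and class names below are invented for illustration, not taken from the Xamarin docs):

// In the shared Forms project
public interface ISerialPortService
{
    string ReadData();
}

// In the Android project
[assembly: Xamarin.Forms.Dependency(typeof(AndroidSerialPortService))]
public class AndroidSerialPortService : ISerialPortService
{
    public string ReadData()
    {
        // Call the Android-specific USB/serial APIs here
        return "...";
    }
}

// Back in shared code, resolve the platform implementation at runtime
var data = Xamarin.Forms.DependencyService.Get<ISerialPortService>().ReadData();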
- User258412 posted
Thanks, this might do the trick!
Last question: does it mean conditional compilation is obsolete..?Wednesday, May 27, 2020 6:36 PM
-
|
https://social.msdn.microsoft.com/Forums/en-US/c4025e34-9fc1-40cf-acda-cd26adab378e/xamarinforms-conditional-compilation-not-working?forum=xamarincrossplatform
|
CC-MAIN-2021-31
|
refinedweb
| 415
| 65.42
|
Hi,

> check even if Yama's ptrace policy is enabled.
> Would a securebits interface be more or less suitable? It would allow
> per-process setting, inherited from parent on fork.

IMHO securebits doesn't belong to such finegrained type of things.

I don't find anything dangerous for privileged process (i.e. with
CAP_SYS_ADMIN and CAP_SYS_PTRACE) to be able to fork from
ptrace-restricted pid namespace, unshare pid namespace and relax ptrace
policy. Because (1) process which is able to do unshare is already able
to ptrace everybody and (2) unshared namespace cannot explicitly
interact with parent namespace.

> It would however
> not allow a root shell on the desktop, after the fact, saying that
> a running gdb should be allowed to access firefox. But, it would be
> able to say "I, from now on, am exempt, so that I can debug the
> running firefox", without the rest of the system having its setting
> changed.

Thanks,

--
Vasiliy Kulikov - bringing security into open computing environments
|
https://lkml.org/lkml/2011/11/22/379
|
CC-MAIN-2017-43
|
refinedweb
| 157
| 56.45
|
This is an excerpt from the Scala Cookbook (partially modified for the internet). This is Recipe 9.5, “How to use closures in Scala (closure examples, syntax).”
Problem
You want to pass a Scala function around like a variable, and while doing so, you want that function to be able to refer to one or more fields that were in the same scope as the function when it was declared.
Solution
You can demonstrate a closure in Scala with the following simple (but complete) example:
package otherscope {
  class Foo {
    // a method that takes a function and a string, and passes the string into
    // the function, and then executes the function
    def exec(f:(String) => Unit, name: String) {
      f(name)
    }
  }
}

object ClosureExample extends App {
  var hello = "Hello"
  def sayHello(name: String) { println(s"$hello, $name") }

  // execute sayHello from the exec method foo
  val foo = new otherscope.Foo
  foo.exec(sayHello, "Al")

  // change the local variable 'hello', then execute sayHello from
  // the exec method of foo, and see what happens
  hello = "Hola"
  foo.exec(sayHello, "Lorenzo")
}
To test this code, save it as a file named ClosureExample.scala, then compile and run it. When it’s run, the output will be:
Hello, Al
Hola, Lorenzo
If you’re coming to Scala from Java or another OOP language, you might be asking, “How could this possibly work?”
Not only did the
sayHello method reference the variable
hello from within the
exec method of the
Foo class on the first run (where
hello was no longer in scope), but on the second run, it also picked up the change to the
hello variable (from
Hello to
Hola). The simple answer is that Scala supports closure functionality, and this is how closures work.
As Dean Wampler and Alex Payne describe in their book Programming Scala (O’Reilly), there are two free variables in the sayHello method: name and hello. The name variable is a formal parameter to the function; this is something you’re used to.

However, hello is not a formal parameter; it’s a reference to a variable in the enclosing scope (similar to the way a method in a Java class can refer to a field in the same class). Therefore, the Scala compiler creates a closure that encompasses (or “closes over”) hello.
You could continue to pass the sayHello method around so it gets farther and farther away from the scope of the hello variable, but in an effort to keep this example simple, it’s only passed to one method in a class in a different package. You can verify that hello is not in scope in the Foo class by attempting to print its value in that class or in its exec method, such as with println(hello). You’ll find that the code won’t compile because hello is not in scope there.
Discussion
In my research, I’ve found many descriptions of closures, each with slightly different terminology. Wikipedia defines a closure like this:
“In computer science, a closure (also lexical closure or function closure) is a function together with a referencing environment for the non-local variables of that function. A closure allows a function to access variables outside its immediate lexical scope.”
In his excellent article, Closures in Ruby, Paul Cantrell states, “a closure is a block of code which meets three criteria.” He defines the criteria as follows:
- The block of code can be passed around as a value, and
- It can be executed on demand by anyone who has that value, at which time
- It can refer to variables from the context in which it was created (i.e. it is closed with respect to variable access, in the mathematical sense of the word “closed”).
Personally, I like to think of a closure as being like quantum entanglement, which Einstein referred to as “a spooky action at a distance.” Just as quantum entanglement begins with two elements that are together and then separated — but somehow remain aware of each other — a closure begins with a function and a variable defined in the same scope, which are then separated from each other. When the function is executed at some other point in space (scope) and time, it is magically still aware of the variable it referenced in their earlier time together, and even picks up any changes to that variable.
As shown in the Solution, to create a closure in Scala, just define a function that refers to a variable that’s in the same scope as its declaration. That function can be used later, even when the variable is no longer in the function’s current scope, such as when the function is passed to another class, method, or function.
Any time you run into a situation where you’re passing around a function, and wish that function could refer to a variable like this, a closure can be a solution. The variable can be a collection, an
Int you use as a counter or limit, or anything else that helps to solve a problem. The value you refer to can be a
val, or as shown in the example, a
var.
A second example
If you’re new to closures, another example may help demonstrate them. First, start with a simple function named
isOfVotingAge. This function tests to see if the
age given to the function is greater than or equal to
18:
val isOfVotingAge = (age: Int) => age >= 18

isOfVotingAge(16)   // false
isOfVotingAge(20)   // true
Next, to make your function more flexible, instead of hardcoding the value
18 into the function, you can take advantage of this closure technique, and let the function refer to the variable
votingAge that’s in scope when you define the function:
var votingAge = 18
val isOfVotingAge = (age: Int) => age >= votingAge
When called,
isOfVotingAge works as before:
isOfVotingAge(16) // false isOfVotingAge(20) // true
You can now pass
isOfVotingAge around to other methods and functions:
def printResult(f: Int => Boolean, x: Int) {
  println(f(x))
}

printResult(isOfVotingAge, 20)   // true
Because you defined
votingAge as a
var, you can reassign it. How does this affect
printResult? Let’s see:
// change votingAge in one scope
votingAge = 21

// the change to votingAge affects the result
printResult(isOfVotingAge, 20)   // now false
Cool. The field and function are still entangled.
Using closures with other data types
In the two examples shown so far, you’ve worked with simple
String and
Int fields, but closures can work with any data type, including collections. For instance, in the following example, the function named
addToBasket is defined in the same scope as an
ArrayBuffer named
fruits:
import scala.collection.mutable.ArrayBuffer

val fruits = ArrayBuffer("apple")

// the function addToBasket has a reference to fruits
val addToBasket = (s: String) => {
  fruits += s
  println(fruits.mkString(", "))
}
As with the previous example, the
addToBasket function can now be passed around as desired, and will always have a reference to the
fruits field. To demonstrate this, define a method that accepts a function with
addToBasket’s signature:
def buyStuff(f: String => Unit, s: String) = { f(s) }
Then pass
addToBasket and a String parameter to the method:
scala> buyStuff(addToBasket, "cherries")
apple, cherries

scala> buyStuff(addToBasket, "grapes")
apple, cherries, grapes
As desired, the elements are added to your
ArrayBuffer.
Note that the
buyStuff method would typically be in another class, but this example demonstrates the basic idea.
A comparison to Java
If you’re coming to Scala from Java, or an OOP background in general, it may help to see a comparison between this closure technique and what you can currently do in Java. (In Java, there are some closure-like things you can do with inner classes, and closures are intended for addition to Java 8 in Project Lambda. But this example attempts to show a simple OOP example.)
The following example shows how a
sayHello method and the
helloPhrase string are encapsulated in the class
Greeter. In the
main method, the first two examples with
Al and
Lorenzo show how the
sayHello method can be called directly.
At the end of the
main method, the
greeter instance is passed to an instance of the
Bar class, and
greeter’s
sayHello method is executed from there:
public class SimulatedClosure {

    public static void main (String[] args) {
        Greeter greeter = new Greeter();

        greeter.setHelloPhrase("Hello");
        greeter.sayHello("Al");       // "Hello, Al"

        greeter.setHelloPhrase("Hola");
        greeter.sayHello("Lorenzo");  // "Hola, Lorenzo"

        greeter.setHelloPhrase("Yo");
        Bar bar = new Bar(greeter);   // pass the greeter instance to a new Bar
        bar.sayHello("Adrian");       // invoke greeter.sayHello via Bar
    }
}

class Greeter {
    private String helloPhrase;

    public void setHelloPhrase(String helloPhrase) {
        this.helloPhrase = helloPhrase;
    }

    public void sayHello(String name) {
        System.out.println(helloPhrase + ", " + name);
    }
}

class Bar {
    private Greeter greeter;

    public Bar (Greeter greeter) {
        this.greeter = greeter;
    }

    public void sayHello(String name) {
        greeter.sayHello(name);
    }
}
Running this code prints the following output:
Hello, Al
Hola, Lorenzo
Yo, Adrian
The end result is similar to the Scala closure approach, but the big differences in this example are that you’re passing around a
Greeter instance (instead of a function), and
sayHello and the
helloPhrase are encapsulated in the
Greeter class. In the Scala closure solution, you passed around a function that was coupled with a field from another scope.
See Also
- The voting age example in this recipe was inspired by Mario Gleichmann’s example in Functional Scala: Closures
- Paul Cantrell’s article, Closures in Ruby
- Recipe 3.18, “Creating Your Own Scala Control Structures”, demonstrates the use of multiple parameter lists
- Java 8’s Project Lambda
|
https://alvinalexander.com/scala/how-to-use-closures-in-scala-fp-examples/
|
CC-MAIN-2022-27
|
refinedweb
| 1,578
| 55.47
|
I am about to do some data processing in C, and the processing part is working logically, but I am having a strange file problem. I conveniently have 32-bits of numbers to consider, so I need a file of 32-bits of 0s, and then I will change the 0 to 1 if something exists in a finite field.
My question is: What is the best way to make a file with all "0s" in C?
What I am currently doing, seems to make sense but is not working. I currently am doing the following, and it doesn't stop at the 2.4GiB mark. I have no idea what's wrong or if there's a better way.
#include <stdlib.h>
#include <stdio.h>
#include <stdint.h>   /* needed for uint8_t / uint32_t */

typedef uint8_t u8;
typedef uint32_t u32;

int main (int argc, char **argv)
{
    u32 l_counter32 = 0;
    u8 l_ubyte = 0;
    FILE *f_data;

    f_data = fopen("file.data", "wb+");
    if (f_data == NULL)
    {
        printf("file error\n");
        return(0);
    }

    for (l_counter32 = 0; l_counter32 <= 0xfffffffe; l_counter32++)
    {
        fwrite(&l_ubyte, sizeof(l_ubyte), 1, f_data);
    }
    fwrite(&l_ubyte, sizeof(l_ubyte), 1, f_data);   /* final byte at 0xffffffff */

    fclose(f_data);
}
I increment my counter in the loop to be 0xFFFFFFFe, so that it doesn't wrap around and run forever.. I haven't waited for it to stop actually, I just keep checking on the disk via ls -alF and when it's larger than 2.4GiB, I stop it. I checked sizeof(l_ubyte), and it is indeed 8-bits.
I feel that I must be missing some mundane detail.
The faster way to create and initialize a file with zeroes (i.e. \0 null bytes) is to use
truncate()/ftruncate(). See man page here
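For illustration, a minimal sketch of that approach; the file name and the 4 GiB size are carried over from the question as assumptions, not part of the original answer:

#include <stdio.h>      /* perror */
#include <fcntl.h>      /* open */
#include <unistd.h>     /* ftruncate, close */

int main(void)
{
    /* Create (or truncate) the file. */
    int fd = open("file.data", O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd == -1) {
        perror("open");
        return 1;
    }

    /* Extend the file to 2^32 zero bytes. On most filesystems this creates
       a sparse file, so it returns almost immediately. On 32-bit systems
       you may need to compile with -D_FILE_OFFSET_BITS=64 so off_t is wide
       enough for this size. */
    if (ftruncate(fd, 0x100000000LL) == -1) {
        perror("ftruncate");
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}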
You are counting up to
0xffffffff, which is equal to 4,294,967,295. You want to count up to
0x80000000 for exactly 2 GB of data.
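In other words, a sketch of the corrected loop bound, keeping the rest of the question's code unchanged:

/* 0x80000000 bytes == 2 GiB */
for (l_counter32 = 0; l_counter32 < 0x80000000; l_counter32++)
{
    fwrite(&l_ubyte, sizeof(l_ubyte), 1, f_data);
}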
User contributions licensed under CC BY-SA 3.0
|
https://windows-hexerror.linestarve.com/q/so34962945-Filling-a-2GiB-file-with-0s-in-C
|
CC-MAIN-2019-47
|
refinedweb
| 312
| 71.55
|
Adding DOCTYPE to a XML File

Related entries on this topic:

- Adding DOCTYPE to a XML File: "In my code the doc type is not working."
- Adding DOCTYPE to a XML File: adding a DOCTYPE to the XML file Employee-Detail.xml using the DOM APIs, with a description of the program.
- Emitting DOCTYPE Declaration while writing XML File: emitting a DOCTYPE declaration for a DOM document with JAXP (Java API for XML Processing), using a DocumentBuilder to obtain a parser for building DOM trees from an XML document.
- Creating XML file: "I went on this page and it shows me how to create an XML file, however there is something I don't understand. I have to create an XML file ..."
- Java DOM Tutorial: adding a DOCTYPE to an XML file using the DOM APIs; counting the elements in an XML file using the DOM APIs.
- XML DOM error (Java Beginners): a snippet that imports org.w3c.dom.* and javax.xml.parsers, reads a file name from System.in, and loads it as a File for the DOM parser.
- Adding an Attribute in DOM Document: this example shows how to add an attribute in a DOM document using the DOM parsers and the methods in the code given below.
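The "Emitting DOCTYPE Declaration" entry above refers to the common JAXP technique of setting the DOCTYPE output properties on a Transformer. A minimal sketch (the DTD name, element name, and output file are assumptions for illustration):

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import java.io.File;

public class EmitDoctype {
    public static void main(String[] args) throws Exception {
        // Build a small DOM document in memory.
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element root = doc.createElement("Employee-Detail");
        doc.appendChild(root);

        // Write it out, asking the serializer to emit a DOCTYPE declaration.
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.DOCTYPE_SYSTEM, "Employee-Detail.dtd");
        t.setOutputProperty(OutputKeys.INDENT, "yes");
        t.transform(new DOMSource(doc), new StreamResult(new File("Employee-Detail.xml")));
    }
}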
|
http://www.roseindia.net/tutorialhelp/allcomments/4334
|
CC-MAIN-2014-10
|
refinedweb
| 240
| 65.62
|
I am reading data from HBase using Spark SQL JDBC. One column has XML data. When the XML size is small, I am able to read correct data in all columns. But if the size of the XML grows too large in a given row, some of the columns in the dataframe become null for that row, even though the XML itself still comes through correctly.
|
https://www.edureka.co/community/10112/getting-values-spark-dataframe-while-reading-data-from-hbase
|
CC-MAIN-2021-10
|
refinedweb
| 183
| 78.65
|
two types specified in one empty declaration
By tdh on Nov 18, 2006
So I'm in the midst of a project to put the /etc/dfs/sharetab into memory to help some minor performance concerns and to avoid the need for a setuid program for zfs when filesystem creation is delegated. I've been cutting and pasting the nfssys system call to a sharefs_sys system call. I've had my back against the wall for the past week as I build in the background of doing bug triage for the group.
I've been hitting one of two error messages:
"../../common/sharefs/sharesvc.h", line 59: invalid type combination cc: acomp failed for ../../common/fs/sharefs/sharetab.c
or
../../common/sharefs/sharesvc.h:60: error: two types specified in one empty declaration
A web search has not helped at all. So, here is the orginal code, the bug is there by the way:
/*
 * Flavors of the system call.
 */
enum sharefs_sys_op {
        SHAREFS_ADD,
        SHAREFS_REMOVE,
        SHAREFS_REPLACE
};

struct sharefs_args {
        struct share *sha_sharetab;
};

#ifdef _SYSCALL32
struct sharefs_args32 {
        caddr32_t sha_sharetab;
}
#endif

#ifdef _KERNEL
union sharefs_sys_args {
        struct sharefs_args *sharefs_args_u; /* sharefs args */
};

struct sharefs_sys_a {
        enum sharefs_sys_op opcode;     /* operation discriminator */
        union sharefs_sys_args arg;     /* syscall-specific arg pointer */
};
And I should mention that I am building on an i386 machine. The non-kernel part builds without complaining (which causes me problems when triaging the issue) and the kernel part always barfs on the union. To troubleshoot, since I had no frigging clue what the compiler was telling me, I added in dummy fields, I used -E to get intermediate code, etc. Nothing helped.
And then, and then, I saw that I was missing a ';' in here:
#ifdef _SYSCALL32
struct sharefs_args32 {
        caddr32_t sha_sharetab;
}
#endif
I'm feeling old, when I taught undergrad courses, I could spot those types of bugs without the compiler. Oh well, I'm documenting this effort to help people searching on the error messages.
Technorati Tags: C OpenSolaris
|
https://blogs.oracle.com/tdh/tags/c
|
CC-MAIN-2015-40
|
refinedweb
| 322
| 64.85
|
I've written a function that rolls (standard) dice. There are a few special cases though: Any sixes rolled are removed and 2*number of sixes new dice are rolled in their place. The initial roll is important to keep track of. I've solved it with the following function:

from random import randint

def roll_dice(count, sides=6):
    """Rolls specified number of dice, returns a tuple with the format
    (result, number of 1's rolled, number of 6's rolled)."""
    initial_mins = None
    initial_max = None
    result = 0
    while count:
        dice = [randint(1, sides) for die in range(count)]
        if initial_mins == None:
            initial_mins = dice.count(1)
        if initial_max == None:
            initial_max = dice.count(sides)
        result += sum(result for result in dice if result != sides)
        count = 2 * dice.count(sides)  # number to roll the next time through.
    return (result, initial_mins, initial_max)

Now, when I test against a target number (this is from a Swedish RPG system, btw -- I just wanted to see if I could write it), several things can happen:

If I roll two sixes (on the initial roll) and below the target number (in total), it's a failure.
If I roll two sixes (on the initial roll) and above the target number (in total), it's a critical failure.
If I roll two ones (on the initial roll) and above the target number (in total), it's a success.
If I roll two ones (on the initial roll) and below the target number (in total), it's a critical success.
If I roll below the target number (in total), it's a success.
If I roll above the target number (in total), it's a failure.

I've written a horrible-looking piece of code to solve this, but I'm sure there are better ways to structure it:

# assuming target number 15
roll = (result, initial_mins, initial_max)
if roll[0] > 15:
    if roll[1] >= 2:
        print("Success")
    elif roll[2] >= 2:
        print("Critical failure!")
    else:
        print("Failure.")
elif roll[0] <= 15:
    if roll[1] >= 2:
        print("Critical success!")
    elif roll[2] >= 2:
        print("Failure")
    else:
        print("Success")

This handles all the test cases I've come up with, but it feels very ugly. Is there a better way to do this?

-- best regards, Robert S.
|
https://mail.python.org/pipermail/tutor/2011-August/085197.html
|
CC-MAIN-2017-17
|
refinedweb
| 374
| 71.34
|
DOCUMENTING CODE IN WEBSPHERE STUDIO APPLICATION DEVELOPER TUTORIAL: JAVADOC

JAVADOC COMMENT CONVENTION

All documentation is written as Java comments. The comment block must start with "/**" and end with "*/". For example:

/**
 * @author Bibhas Bhattacharya
 */

Special keywords are entered after the @ sign to supply specific information, for example @author, as shown above. You are allowed to have HTML tags in the comments. For example:

/**
 * @author Bibhas Bhattacharya<BR>
 * (C) <a href="">Web Age Solutions Inc.</a>
 */

DOCUMENTING THE CLASS

Detailed class information (including the author's name as seen above) goes above the class declaration. Here is a complete example.

/**
 * Here is a detail description of the class. You can use HTML tags. The
 * paragraph and listing tags come in handy.
 *
 * @author Bibhas Bhattacharya<BR>
 * (C) <a href="">Web Age Solutions Inc.</a>
 */
public class MyClass {
}

DOCUMENTING MEMBER VARIABLES

Add a short and long description of the variable separated by an empty line as shown below.

/**
 * A short description.
 *
 * A long description. This is displayed only in the field detail section.
 */
public String firstName;

DOCUMENTING METHODS

The long description of a method must be entered within an HTML paragraph tag. Input parameter information is entered after a @param Javadoc tag. Here is an example.

/**
 * Short description of the method.
 * <p>
 * More detailed description of the method here.
 * </p>
 * @param param1 Description of parameter 1.
 * @param param2 Description of parameter 2.
 * @return Description of the returned object. The calculated area.
 */
public double area(double param1, double param2) {
    //...
}

Tip: WSAD can automatically generate a portion of the method Javadoc for you. You can save time by not having to manually enter the @param or @return tags. Above the method in question, type in /** and then hit enter. WSAD will generate the Javadoc comment. All you have to do is fill in the detail description.

SETUP JAVADOC IN WSAD

Open the preferences window (Window->Preferences). Then select the Java->Javadoc node. Enter the location of the javadoc.exe executable. For example, if you have WebSphere installed, it will be <WAS>\AppServer\java\bin\javadoc.exe.

GENERATE JAVADOC

Right click on a project and select Export. You can also select File->Export from the menubar. Select the Javadoc export type and click on Next. Enter the output directory. Then click on Finish.

It is not entirely a bad idea to export the Javadoc within a docs folder of the project and keep the documentation under version control. WSAD makes the Javadoc generation easy and reliable by saving the export directory as a part of the project definition. You can view and change it from the project properties.

CONCLUSION

Writing documentation should be a standard part of software development. Java makes the process more appealing through the Javadoc tool.
|
https://www.webagesolutions.com/knowledgebase/waskb/waskb009
|
CC-MAIN-2021-04
|
refinedweb
| 467
| 61.53
|
DI and Pervasive services11 Aug 2010
This post was imported from blogspot. I have been learning recently about Dependency Injection for loose coupling (see for example these articles).
DI is really very simple. When a class X is designed for DI, it means that...
- X itself avoids specifying what other classes it depends on. For example, it avoids calling "new Y()" or using a singleton class, and it accesses the services it needs through interfaces (or, occasionally, abstract classes) rather than references to concrete types.
- X's dependencies are exposed explicitly in the constructor, or by properties of X, so that whoever creates/manages X can choose the dependencies.
Often, there are often many levels of dependency. For example, a program for editing documents may have a main window, which contains a document editor, and toolbars/menus that depend on the document editor. The editor in turn depends on a document object and various user interface services. The document may depend on services for undo/redo, various data structures, and disk access, while the user interface services may depend on other data structures, drawing services, spatial analysis algorithms... who knows.
Often, in an app designed with DI, all the different components are wired together in a single place if possible, so that you can look in one place to see how components are connected and dependent on each other.
As the application grows more complex, the code that initializes all these objects via dependency injection also grows more complex. Eventually, you may get to a point where an IoC/DI framework like Ninject or Windsor or (my tentative favorite) Autofac can simplify all that initialization work.
But there are some services that are pervasive, services that you would have to pass to a hundred different constructors if you want to use DI "properly". Some examples are localization (to provide French and Spanish translations), logging (almost any component might want to write a diagnostic message), profiling (to gather performance statistics), and possibly "config options" (so end-users or admins can configure multiple components through a command-line, xml file or other source). In a compiler, a service for error/warning messages might be used all over the place. In Loyc, there might ultimately be hundreds of components that need to create AST Nodes or use other "pervasive" services.
It looks to me like passing such common services to constructors is more trouble than it's worth. If a component of Loyc may produce a warning message, is user-configurable, creates new AST nodes, and needs localization support, that's 4 constructor arguments just for "pervasive" services, never mind the more important, "meaty" services that it might need, like a parser, graph algorithm or whatever. If you use constructor injection, dozens of constructors are starting to smell.
How can we avoid the burden of passing around lots of pervasive service references? I'm not sure what the best way is. I have an idea in mind, but it is not appropriate for all cases. My idea involves a global singleton that can be swapped out temporarily while performing a specific task. I tentatively call it the Ambient Service Pattern.
For instance, in a compiler, the error/warning service might print to the console by default, or to an output window, but certain types of analysis might be "transactional" or "tentative": if an error occurs, the operation is aborted, and no error is printed, although if a warning occurs, it is buffered and printed if the operation succeeds. A concrete example of this scenario is the C++ rule known as SFINAE. Template substitution may produce an error, but if that error occurs during overload resolution, it is not really an error and no message should be printed. Given a global error service, we can model this rule by switching to a special error service, performing the operation, and then switching back afterward.
To implement this pattern, use a thread-local variable alongside the interface to manage the service:
interface IService { ... }

class Service
{
    static ThreadLocalVariable<IService> _cur = new ThreadLocalVariable<IService>();

    // Current service
    public static IService Cur
    {
        get { return _cur.Value; }
    }

    // Use to install a new service temporarily, or to install the
    // initial service when the program begins
    public static PushedTLV<IService> Push(IService newValue)
    {
        return new PushedTLV<IService>(_cur, newValue);
    }
}

Depending on the app, there could be different threads doing independent work, which is why the thread-local variable is needed.
Note: if you need to support threads that fork, there is a problem because .NET thread-local variables cannot inherit their value from the parent thread. To work around this problem in Loyc I made a whole infrastructure for thread creation, with a ThreadEx to wrap the standard Thread class, and a ThreadLocalVariable<T> class to be used instead of [ThreadStatic], which registers itself in a global weak reference collection so that when a thread is created with ThreadEx, the values of all ThreadLocalVariables can be propagated from the parent thread to the child thread (custom propagation behavior is also supported). Obviously, this workaround is a huge pain in the ass since all code must agree to use ThreadEx and the new .NET stuff like the "Parallel Extensions" won't propagate the thread local variables. I guess to properly support .NET 4, I should try to find a new solution based on ThreadLocal<T>.
PushedTLV is a helper class that changes the value of a thread-local variable until it is disposed. Use it like this:
using (var old = Service.Push(newService))
{
    // Perform an operation that will use the new service
}
// old service is automatically restored
An unfortunate consequence of this pattern is that the interface appears to be coupled to the thread-local variable. Sure, you could still write a class X that has a constructor-injected IService, but this might confuse those that indirectly use X: if someone changes Service.Cur, they might assume all code that needs an IService will use Service.Cur, but X could be using a different IService.
While the Ambient Service Pattern doesn't work like traditional dependency injection, it still follows the spirit of DI because components remain independent from a specific implementation of the pervasive service.
In the version of the pattern seen here, the service provided acts as a singleton. Of course, if the service is something that can be instantiated on-demand (e.g. a data structure), the thread-local variable would hold a factory instead of a singleton. The "Push()" method would switch to a different factory instead of a different instance, and there would be a "New()" method instead of a "Cur" property.
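As a rough sketch of that factory-style variant, reusing the same ThreadLocalVariable and PushedTLV helpers (the Widget type and factory delegate are invented for illustration, they are not from the post):

// The thread-local variable holds a factory delegate instead of a singleton.
class WidgetFactory
{
    static ThreadLocalVariable<Func<Widget>> _cur =
        new ThreadLocalVariable<Func<Widget>>();

    // Create an object using whatever factory is currently installed
    public static Widget New() { return _cur.Value(); }

    // Temporarily install a different factory (e.g. for a "tentative" operation)
    public static PushedTLV<Func<Widget>> Push(Func<Widget> factory)
    {
        return new PushedTLV<Func<Widget>>(_cur, factory);
    }
}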
I wonder if the .NET Framework itself should adopt this kind of pattern. Consider what you do to open a file: you call File.Open() or make a FileStream, right? Now what if management decides "we need our app to be able to open files from an ftp site". Rather than change the code that calls File.Open() (and what if a third-party library is calling it?), wouldn't it be more elegant if you could swap in a new file management service that understands FTP sites (and perhaps maintains a local disk cache)? And hey, what if you want to change your console app to use a graphical console with icons and proportional fonts? What if you decide Debug.WriteLine needs to store a log somewhere?
Here's my implementation of PushedTLV, but as I mentioned, it's based on my own special ThreadLocalVariable class.
/// <summary>Designed to be used in a "using" statement to alter a
/// thread-local variable temporarily.</summary>
public class PushedTLV<T> : IDisposable
{
    T _oldValue;
    ThreadLocalVariable<T> _variable;

    public PushedTLV(ThreadLocalVariable<T> variable, T newValue)
    {
        _variable = variable;
        _oldValue = variable.Value;
        variable.Value = newValue;
    }

    public void Dispose()
    {
        _variable.Value = _oldValue;
    }

    public T OldValue { get { return _oldValue; } }
    public T Value { get { return _variable.Value; } }
}

Do you have another idea for managing pervasive services with minimum fuss? Do tell.
- Now on CodeProject
|
http://loyc.net/2010/pervasive-services-and-di.html
|
CC-MAIN-2019-26
|
refinedweb
| 1,327
| 52.8
|
On Sun, 2002-11-10 at 12:01, Earnie Boyd wrote:
> John Levon wrote:
> > We have a C program which links to a C++ library, and we need to
> > tell automake to do the link in C++ mode to avoid undefined references
> > to the C++ standard library. How can we do it ?
> >
> > The Makefile.am looks as follows :
> >
> > ---snip---
> > dist_sources = oprofiled.c opd_stats.c opd_kernel.c opd_image.c
> > opd_sample_files.c \
> > opd_image.h opd_printf.h opd_stats.h opd_kernel.h
> > opd_sample_files.h p_module.h
> >
> > EXTRA_DIST = $(dist_sources)
> >
> > if kernel_support
> >
> > AM_CPPFLAGS=-I ${top_srcdir}/libabi -I ${top_srcdir}/libutil -I
> > ${top_srcdir}/libop -I ${top_srcdir}/libdb
> >
> > bin_PROGRAMS = oprofiled
> >
> > oprofiled_SOURCES = $(dist_sources)
> >
> > if enable_abi
> > oprofiled_LDADD = ../libabi/libabi.a ../libdb/libdb.a ../libop/libop.a
> > ../libutil/libutil.a
> > else
> > oprofiled_LDADD = ../libdb/libdb.a ../libop/libop.a ../libutil/libutil.a
> > endif
> >
> > endif
> > ---snip---
> >
> > libabi is the C++ library. It would be good to have a solution that
> > works for automake 1.5 upwards ...
> >
>
> You could always just add -lstdc++ to the oprofiled_LDADD variable.

This would be a fault, IMO. The problem is trying to link a C program against a C++ library. This doesn't work in general, especially not with g++, unless such a C++ library is specially designed for such purposes. Unresolved references to libstdc++ indicate that your library has not been prepared for this.

A proper solution would be to use g++ to link the application. To achieve this with automake, the easiest way is to convert the main application file to C++. In your case, renaming oprofiled.c to oprofiled.cc would be sufficient.

Ralf
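For concreteness, a sketch of the rename Ralf suggests, leaving the rest of the Makefile.am untouched (the .cc suffix is one of several spellings automake accepts for C++ sources):

# oprofiled.cc (renamed from oprofiled.c) makes automake choose the C++ linker
dist_sources = oprofiled.cc opd_stats.c opd_kernel.c opd_image.c opd_sample_files.c \
        opd_image.h opd_printf.h opd_stats.h opd_kernel.h opd_sample_files.h p_module.h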
|
http://lists.gnu.org/archive/html/automake/2002-11/msg00068.html
|
crawl-003
|
refinedweb
| 258
| 59.9
|
#include <db_cxx.h> int DbMpoolFile::get(db_pgno_t *pgnoaddr, DbTxn *txnid, u_int32_t flags, void **pagep);
The
DbMpoolFile::get() method returns pages from the cache.
All pages returned by
DbMpoolFile::get() will be retained (that is,
latched) in the cache until a subsequent call to
DbMpoolFile::put().
There is no deadlock detection among latches so care must be taken in the application if the DB_MPOOL_DIRTY
or DB_MPOOL_EDIT flags are used, as these get exclusive latches on the pages.
The returned page is size_t type aligned.
Fully or partially created pages have all their bytes set to a nul byte, unless the DbMpoolFile::set_clear_len() method was called to specify other behavior before the file was opened.
The
DbMpoolFile::get() method will return
DB_PAGE_NOTFOUND if the requested page does not exist and DB_MPOOL_CREATE
was not set. Unless otherwise specified, the
DbMpoolFile::get()
method either returns a non-zero error value or throws an
exception that encapsulates a non-zero error value on
failure, and returns 0 on success.
The flags parameter must be set to 0 or by bitwise inclusively OR'ing together one or more of the following values:
DB_MPOOL_CREATE
If the specified page does not exist, create it. In this case, the pgin method, if specified, is called.
DB_MPOOL_DIRTY
The page will be modified and must be written to the source file
before being evicted from the cache. For files open with the
DB_MULTIVERSION
flag set, a new copy of the page will be made if this is the first
time the specified transaction is modifying it.
A page fetched with the
DB_MPOOL_DIRTY flag will be
exclusively latched until
a subsequent call to DbMpoolFile::put().
DB_MPOOL_EDIT
The page will be modified and must be written to the source file
before being evicted from the cache. No copy of the page will be made,
regardless of the
DB_MULTIVERSION
setting. This flag is only intended for use in situations where a
transaction handle is not available, such as during aborts or
recovery.
A page fetched with the
DB_MPOOL_EDIT flag will be
exclusively latched until
a subsequent call to DbMpoolFile::put().
DB_MPOOL_LAST
Return the last page of the source file, and copy its page number into the memory location to which pgnoaddr refers.
DB_MPOOL_NEW
Create a new page in the file, and copy its page number into the
memory location to which pgnoaddr
refers. In this case, the
pgin_fcn callback, if specified on
DbEnv::memp_register(), is
not called.
The
DB_MPOOL_CREATE,
DB_MPOOL_LAST, and
DB_MPOOL_NEW flags are mutually exclusive.
If the flags parameter is set to
DB_MPOOL_LAST or
DB_MPOOL_NEW, the
page number of the created page is copied into the memory location to
which the pgnoaddr parameter refers.
Otherwise, the pgnoaddr parameter is the
page to create or retrieve.
Page numbers begin at 0; that is, the first page in the file is page number 0, not page number 1.
If the operation is part of an application-specified transaction, the txnid parameter is a transaction handle returned from DbEnv::txn_begin(); otherwise NULL. A transaction is required if the file is open for multiversion concurrency control by passing DB_MULTIVERSION to DbMpoolFile::open() and the DB_MPOOL_DIRTY, DB_MPOOL_CREATE or DB_MPOOL_NEW flags were specified. Otherwise it is ignored.
The
DbMpoolFile::get()
method may fail and throw a DbException
exception, encapsulating one of the following non-zero errors, or return one
of the following non-zero errors:
The
DB_MPOOL_DIRTY or
DB_MPOOL_EDIT flag was set and the source file was
not opened for writing.
The page reference count has overflowed. (This should never happen unless there is a bug in the application.)
If the
DB_MPOOL_NEW flag was set, and the source
file was not opened for writing; more than one of
DB_MPOOL_CREATE,
DB_MPOOL_LAST,
and
DB_MPOOL_NEW was set; or if an invalid flag
value or parameter was specified.
For transactions configured with DB_TXN_SNAPSHOT, the page has been modified since the transaction began.
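For orientation, a rough usage sketch. It assumes a DbMpoolFile pointer (mpf) that has already been created and opened without DB_MULTIVERSION, and it assumes the DbMpoolFile::put() signature from the same API family, so treat it as illustrative rather than canonical:

db_pgno_t pgno = 0;     // fetch page 0
void *pagep = NULL;

// Get the page, creating it (zero-filled) if it does not yet exist.
int ret = mpf->get(&pgno, NULL, DB_MPOOL_CREATE, &pagep);
if (ret == 0) {
    // ... read or modify the page contents here ...

    // Release the latch taken by get().
    mpf->put(pagep, DB_PRIORITY_UNCHANGED, 0);
}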
Memory Pools and Related Methods
|
http://idlebox.net/2011/apidocs/db-5.2.28.zip/api_reference/CXX/mempfget.html
|
CC-MAIN-2013-20
|
refinedweb
| 639
| 60.35
|
Created on 2012-11-21 15:16 by r.david.murray, last changed 2013-05-10 16:31 by mjpieters. This issue is now closed.
It looks like the use of the 'args' formal parameter was cut and pasted from the methodcaller docs, when it is not appropriate for itemgetter and attrgetter.
+.. function:: attrgetter(attr[, attr2, attr3, ...])
Why not reword to use the *attr notation? It is even already being used below:
+ The function is equivalent to::
def attrgetter(*items):
if any(not isinstance(item, str) for item in items):
I thought about that, but wanted to make a distinction between the form that accepts only 1 arg and returns an item and the form that receives 2+ args and returns a tuple.
You can also make that distinction using *. For example:
.. function:: attrgetter(attr, *attrs)
or
.. function:: attrgetter(attr)
attrgetter(attr1, attr2, *attrs)
(cf. )
Elsewhere we started to prefer using two signature lines where two or more "behaviors" are possible, which might be good to do in any case. With the "..." notation, this would look like:
.. function:: attrgetter(attr)
attrgetter(attr1, attr2, ...)
Attached an updated patch that uses the double signature.
I left a couple of Rietveld comments. Other than those nitpicks it looks good to me, and I could be convinced otherwise on the nitpicks :)
Also, thanks for catching the extra commas after the "After"s in operator.rst; I had meant to include those in the same patch that took them out of _operator.c, but apparently I missed it.
New changeset 6f2412f12bfd by Ezio Melotti in branch '3.3':
#16523: improve attrgetter/itemgetter/methodcaller documentation.
New changeset c2000ce25fe8 by Ezio Melotti in branch 'default':
#16523: merge with 3.3.
New changeset 5885c02120f0 by Ezio Melotti in branch '2.7':
#16523: improve attrgetter/itemgetter/methodcaller documentation.
Fixed, thanks for the review!
The 2.7 patch shifted the `itemgetter()` signature to above the `attrgetter()` change and new notes.
New patch to fix that in issue #17949:
|
https://bugs.python.org/issue16523
|
CC-MAIN-2020-50
|
refinedweb
| 328
| 59.8
|
Welcome to the second chapter of the Rock Sweeper project, a game created with Pygame. In this article we will create a World module which will later be used to manage game entities such as the spaceship and rocks, as well as render the background image of the game.
We will use NetBeans IDE 8.1 to create this game. First open up your NetBeans IDE if you already have one, or else you can refer to my previous article on how to install the NetBeans IDE on your computer. Start a new project and call it Rock Sweeper.
Create an empty module under the same folder as the existing module and name it World. Below is the script for the World module, which includes the code used to blit the background.
from pygame.locals import *

class World(object):

    def __init__(self, maintile, background_coordinate, screen_size):
        # spritesheet
        self.spritesheet = maintile
        # the rectangle object used in clipping the background area
        self.clip_rect = Rect(background_coordinate, screen_size)
        self.spritesheet.set_clip(self.clip_rect)
        # clip a portion of the spritesheet with that rectangle object
        self.background = self.spritesheet.subsurface(self.spritesheet.get_clip())  # create the background surface

    def render(self, surface):
        surface.blit(self.background, (0, 0))
In the main module we will call the World module to render the background within the while loop.
#!/usr/bin/env python
import pygame
from pygame.locals import *
from sys import exit
from world import World

maintile = 'maintile.png'  # spritesheet file
SCREEN_SIZE = (550, 550)
BACKGROUND_COORDINATE = (0, 32)

pygame.init()
screen = pygame.display.set_mode(SCREEN_SIZE, 0, 32)
pygame.display.set_caption("Rock Sweeper")
spritesheet = pygame.image.load(maintile).convert_alpha()  # spritesheet surface object
pygame.display.set_icon(spritesheet)  # use the entire spritesheet surface as the window icon
world = World(spritesheet, BACKGROUND_COORDINATE, SCREEN_SIZE)

while True:
    for event in pygame.event.get():
        if event.type == QUIT:
            exit()
    world.render(screen)  # render the background
    pygame.display.update()
If you run the main module above, you will see the outcome below.
In the next chapter we will create game entities such as the spaceship and rocks, so make sure you continue reading the tutorial!
|
http://gamingdirectional.com/blog/2016/09/25/create-the-world-module-for-rock-sweeper/
|
CC-MAIN-2019-18
|
refinedweb
| 344
| 59.9
|
{-# LANGUAGE MultiParamTypeClasses, GeneralizedNewtypeDeriving, DeriveDataTypeable, ScopedTypeVariables #-} module Development.Shake.File( need, want, defaultRuleFile, (*>), (**>), (?>) ) where import Control.DeepSeq import Control.Monad.IO.Class import Data.Binary import Data.Hashable import Data.Typeable import System.Directory import Development.Shake.Core import Development.Shake.FilePath import Development.Shake.FilePattern import Development.Shake.FileTime infix 1 *>, ?>, **> newtype File = File FilePath deriving (Typeable,Eq,Hashable,Binary,NFData) instance Show File where show (File x) = x instance Rule File FileTime where validStored (File x) t = fmap (== Just t) $ getModTimeMaybe x {- observed act = do src <- getCurrentDirectory old <- listDir src sleepFileTime res <- act new <- listDir src let obs = compareItems old new -- if we didn't find anything used, then most likely we aren't tracking access time close enough obs2 = obs{used = if used obs == Just [] then Nothing else (used obs)} return (obs2, res) data Item = ItemDir [(String,Item)] -- sorted | ItemFile (Maybe FileTime) (Maybe FileTime) -- mod time, access time deriving Show listDir :: FilePath -> IO Item listDir root = do xs <- getDirectoryContents root xs <- return $ sort $ filter (not . all (== '.')) xs fmap ItemDir $ forM xs $ \x -> fmap ((,) x) $ do let s = root </> x b <- doesFileExist s if b then listFile s else listDir s listFile :: FilePath -> IO Item listFile x = do let f x = Control.Exception.catch (fmap Just x) $ \(_ :: SomeException) -> return Nothing mod <- f $ getModTime x acc <- f $ getAccTime x return $ ItemFile mod acc compareItems :: Item -> Item -> Observed File compareItems = f "" where f path (ItemFile mod1 acc1) (ItemFile mod2 acc2) = Observed (Just [File path | mod1 /= mod2]) (Just [File path | acc1 /= acc2]) f path (ItemDir xs) (ItemDir ys) = mconcat $ map g $ zips xs ys where g (name, Just x, Just y) = f (path </> name) x y g (name, x, y) = Observed (Just $ concatMap (files path) $ catMaybes [x,y]) Nothing f path _ _ = Observed (Just [File path]) Nothing files path (ItemDir xs) = concat [files (path </> a) b | (a,b) <- xs] files path _ = [File path] zips :: Ord a => [(a,b)] -> [(a,b)] -> [(a, Maybe b, Maybe b)] zips ((x1,x2):xs) ((y1,y2):ys) | x1 == y1 = (x1,Just x2,Just y2):zips xs ys | x1 < y1 = (x1,Just x2,Nothing):zips xs ((y1,y2):ys) | otherwise = (y1,Nothing,Just y2):zips ((x1,x2):xs) ys zips xs ys = [(a,Just b,Nothing) | (a,b) <- xs] ++ [(a,Nothing,Just b) | (a,b) <- ys] -} -- | This function is not actually exported, but Haddock is buggy. Please ignore. defaultRuleFile :: Rules () defaultRuleFile = defaultRule $ \(File x) -> Just $ liftIO $ getModTimeError "Error, file does not exist and no rule available:" x -- | Require that the following files are built before continuing. Particularly -- necessary when calling 'system''. As an example: -- -- > "//*.rot13" *> \out -> do -- > let src = dropExtension out -- > need [src] -- > system' ["rot13",src,"-o",out] need :: [FilePath] -> Action () need xs = (apply $ map File xs :: Action [FileTime]) >> return () -- | Require that the following are built by the rules, used to specify the target. -- -- > main = shake shakeOptions $ do -- > want ["Main.exe"] -- > ... -- -- This program will build @Main.exe@, given sufficient rules. 
want :: [FilePath] -> Rules () want xs = action $ need xs -- |Path -> Bool) -> (FilePath -> Action ()) -> Rules () (?>) test act = rule $ \(File x) -> if not $ test x then Nothing else Just $ do liftIO $ createDirectoryIfMissing True $ takeDirectory x act x liftIO $ getModTimeError "Error, rule failed to build the file:" x -- | Define a set of patterns, and if any of them match, run the associated rule. See '*>'. (**>) :: [FilePattern] -> (FilePath -> Action ()) -> Rules () (**>) test act = (\x -> any (?== x) test) ?> act -- | () (*>) test act = (test ?==) ?> act
|
http://hackage.haskell.org/package/shake-0.2.1/docs/src/Development-Shake-File.html
|
CC-MAIN-2016-07
|
refinedweb
| 560
| 61.16
|
{-# LANGUAGE TemplateHaskell , CPP #-} -- | Embed shell commands with interpolated Haskell -- variables, and capture output. module System.ShQQ ( -- * Quasiquoters sh , shc -- * Helper functions -- -- | These functions are used in the implementation of -- @'sh'@, and may be useful on their own. , readShell , readShellWithCode , showNonString ) where import Language.Haskell.TH import Language.Haskell.TH.Quote import Control.Applicative import Control.Exception ( evaluate, throwIO ) import Data.Char import Data.Foldable ( asum ) import Data.Typeable ( Typeable, cast ) import Text.Parsec hiding ( (<|>), many ) import Text.Parsec.String import System.IO import System.Exit import qualified System.Posix.Escape.Unicode as E import qualified System.Process as P #if defined(mingw32_HOST_OS) #error shqq is not supported on Windows. #endif -- | Acts like the identity function on @'String'@, and -- like @'show'@ on other types. showNonString :: (Typeable a, Show a) => a -> String showNonString x = case cast x of Just y -> y Nothing -> show x data Tok = Lit String | VarOne String | VarMany String deriving (Show) parseToks :: Parser [Tok] parseToks = many part where isIdent '_' = True isIdent x = isAlphaNum x -- NB: '\'' excluded ident = some (satisfy isIdent) var = VarOne <$> ident <|> VarMany <$ char '+' <*> ident part = asum [ char '\\' *> ( Lit "\\" <$ char '\\' <|> Lit "$" <$ char '$' ) , char '$' *> ( var <|> between (char '{') (char '}') var ) , Lit <$> some (noneOf "$\\") ] -- | Execute a shell command, capturing output and exit code. -- -- Used in the implementation of @'shc'@. readShellWithCode :: String -> IO (ExitCode, String) readShellWithCode cmd = do (Nothing, Just hOut, Nothing, hProc) <- P.createProcess $ (P.shell cmd) { P.std_out = P.CreatePipe } out <- hGetContents hOut _ <- evaluate (length out) hClose hOut ec <- P.waitForProcess hProc return (ec, out) -- | Execute a shell command, capturing output. -- -- Used in the implementation of @'sh'@. readShell :: String -> IO String readShell cmd = do (ec, out) <- readShellWithCode cmd case ec of ExitSuccess -> return out _ -> throwIO ec mkExp :: Q Exp -> [Tok] -> Q Exp mkExp reader toks = [| $reader (concat $strs) |] where strs = listE (map f toks) var = varE . mkName f (Lit x) = [| x |] f (VarOne v) = [| E.escape (showNonString $(var v)) |] f (VarMany v) = [| showNonString $(var v) |] shExp :: Q Exp -> String -> Q Exp shExp reader xs = case parse parseToks "System.ShQQ expression" xs of Left e -> error ('\n' : show e) Right t -> mkExp reader t baseQQ :: QuasiQuoter baseQQ = QuasiQuoter { quoteExp = error "internal error in System.ShQQ" , quotePat = const (error "no pattern quote for System.ShQQ") #if MIN_VERSION_template_haskell(2,5,0) , quoteType = const (error "no type quote for System.ShQQ") , quoteDec = const (error "no decl quote for System.ShQQ") #endif } {- | Execute a shell command, capturing output. This requires the @QuasiQuotes@ extension. The expression @[sh| ... |]@ has type @'IO' 'String'@. Executing this IO action will invoke the quoted shell command and produce its standard output as a @'String'@. >>> [sh| sha1sum /proc/uptime |] "ebe14a88cf9be69d2192dcd7bec395e3f00ca7a4 /proc/uptime\n" You can interpolate Haskell @'String'@ variables using the syntax @$x@. Special characters are escaped, so that the program invoked by the shell will see each interpolated variable as a single argument. 
>>> let x = "foo bar" in [sh| cat $x |] cat: foo bar: No such file or directory *** Exception: ExitFailure 1 You can also write @${x}@ to separate the variable name from adjacent characters. >>> let x = "b" in [sh| echo a${x}c |] "abc\n" Be careful: the automatic escaping means that @[sh| cat '$x' |]@ is /less safe/ than @[sh| cat $x |]@, though it will work \"by accident\" in common cases. To interpolate /without/ escaping special characters, use the syntax @$+x@ . >>> let x = "foo bar" in [sh| cat $+x |] cat: foo: No such file or directory cat: bar: No such file or directory *** Exception: ExitFailure 1 You can pass a literal @$@ to the shell as @\\$@, or a literal @\\@ as @\\\\@. As demonstrated above, a non-zero exit code from the subprocess will raise an exception in your Haskell program. Variables of type other than @'String'@ are interpolated via @'show'@. >>> let x = Just (2 + 2) in [sh| touch $x; ls -l J* |] "-rw-r--r-- 1 keegan keegan 0 Oct 7 23:28 Just 4\n" The interpolated variable's type must be an instance of @'Show'@ and of @'Typeable'@. -} sh :: QuasiQuoter sh = baseQQ { quoteExp = shExp [| readShell |] } {- | Execute a shell command, capturing output and exit code. The expression @[shc| ... |]@ has type @'IO' ('ExitCode', 'String')@. A non-zero exit code does not raise an exception your the Haskell program. Otherwise, @'shc'@ acts like @'sh'@. -} shc :: QuasiQuoter shc = baseQQ { quoteExp = shExp [| readShellWithCode |] }
|
http://hackage.haskell.org/package/shqq-0.1/docs/src/System-ShQQ.html
|
CC-MAIN-2015-18
|
refinedweb
| 701
| 66.94
|
In Python, we have the star (or "*" or "unpack") operator, that allows us to unpack a list for convenient use in passing positional arguments. For example:
range(3, 6)
args = [3, 6]
# invokes range(3, 6)
range(*args)
In this particular example, it doesn't save much typing, since
range only takes two arguments. But you can imagine that if there were more arguments to
range, or if
args was read from an input source, returned from another function, etc. then this could come in handy.
In Scala, I haven't been able to find an equivalent. Consider the following commands run in a Scala interactive session:
case class ThreeValues(one: String, two: String, three: String)

// works fine
val x = ThreeValues("1","2","3")

val argList = List("one","two","three")

// also works
val y = ThreeValues(argList(0), argList(1), argList(2))

// doesn't work, obviously
val z = ThreeValues(*argList)
Is there a more concise way to do this besides the method used in
val y?
There is something similar for functions:
tupled. It converts a function that takes n parameters into a function that takes one argument, an n-tuple.
See this question for more information: scala tuple unpacking
Such a method for arrays wouldn't make much sense, because it would only work with functions with multiple arguments of same type.
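A small sketch of the tupled approach mentioned above (the names are invented for illustration):

val makeRange = (start: Int, end: Int) => start until end
val args = (3, 6)
makeRange.tupled(args)   // same as makeRange(3, 6)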
There is no direct equivalent in scala.
The closest thing you will find is the usage of
_*, which works on vararg methods only.
By example, here is an example of a vararg method:
def hello( names: String*) { println( "Hello " + names.mkString(" and " ) ) }
which can be used with any number of arguments:
scala> hello()
Hello

scala> hello("elwood")
Hello elwood

scala> hello("elwood", "jake")
Hello elwood and jake
Now, if you have a list of strings and want to pass them to this method, the way to unpack it is through
_*:
scala> val names = List("john", "paul", "george", "ringo")
names: List[String] = List(john, paul, george, ringo)

scala> hello( names: _* )
Hello john and paul and george and ringo
|
https://pythonpedia.com/en/knowledge-base/15034565/is-there-a-scala-equivalent-of-the-python-list-unpack--a-k-a-------operator-
|
CC-MAIN-2020-40
|
refinedweb
| 343
| 61.6
|
Tracker configuration
The Tracker provides a customizable configuration system:
- Standard configuration allows features like publish intervals, sleep settings, etc. to be configured from the console.
- Customized configuration makes it possible to extend this to your own custom parameters and custom tabs in the console!
Additionally:
- Devices that are currently online receive the configuration updates immediately.
- Devices that are offline, because of poor cellular coverage or use of sleep modes, receive configuration updates when they reconnect to the Particle cloud, if there were changes.
- On device, the configuration is cached on the flash file system, so the last known configuration can be used before connecting to the cloud again.
Configuration
Configuration is scoped so you can have:
- Fleet-wide configuration so all devices in your product have a common setting.
- Per-device configuration for certain settings, like geofence settings, that are always specific to a single device. It is also possible to override settings for a specific device in your product, when it is marked as a development device.
In addition to setting values directly from the console, you can get and set values from the Particle Cloud API.
The configuration is hierarchical. The top level items (location, sleep, geofence) are known as "modules."
- location
- radius
- interval_min
- interval_max
- min_publish
- ...
- imu_trig
- ...
- temp_trig
- ...
- rgb
- ...
- sleep
- mode
- exe_min
- conn max
- ...
- geofence
- interval
- zone1
- enable
- shape_type
- lat
- lon
- radius
- inside
- outside
- enter
- leave
- verif
- zone2
- ...
- zone3
- ...
- zone4
- ...
Per-device configuration
Certain configuration modules are per-device only. The geofence configuration is the only built-in module set up this way. You can add your own custom modules that include the
deviceLevelOnly flag which will make your configuration always per-device only. When a configuration module is per-device only it does not appear in the product fleet-wide settings, only per-device.
Additionally, if a device is marked as a development device, then per-device configuration is allowed for all configuration items. The per-device settings will always override the fleet-wide settings if present.
Device level only configuration
- Settings are only configurable for the device, not in fleet settings
- Always device level, regardless of whether a development device or not
- Example: geofence configuration
- Also available for custom configuration schemas
Regular device configuration
- Upon adding a device, the device gets default settings from the fleet settings
- When fleet settings are updated, the device settings will be updated immediately if online
- Or after reconnecting if offline or in sleep mode
Development device configuration
- All modules are editable for a given device while in development mode
- Device settings take precedence in all cases
Note: If you go from development mode back to regular mode, the product settings do not override the settings that were set in development mode until the next time product settings are changed. This is the current behavior, but may change in the future.
Schemas
The configuration format is specified using JSON Schema. This section is not intended to be a complete reference to writing a schema, as there is literally a whole book devoted to this, available to read online for free at the link above. This should be enough to begin writing your own custom schema, however.
The schema serves several purposes:
- Documenting the options
- Instructing the console how to display the options
- Validating the configuration
Using a custom schema, you can add additional configuration elements, either to an existing group (location, sleep, etc.) or to a new custom group for your product (recommended). The new tabs or options are presented to all team members who have access to the product.
If you are familiar with JSON Schema:
- We do not support the pattern or patternProperties keywords
- Remote schemas are not allowed
- additionalProperties is always set to false for top-level fields
- We've added two custom JSON Schema keywords: minimumFirmwareVersion and maximumFirmwareVersion, which can be used to scope given settings to specific versions of the firmware
Console
The settings panels in the fleet configuration and device configuration, including all of the built-in settings, are defined using the configuration schema.
You can also use this technique to create your own custom configuration panels!
This picture shows how elements in the schema directly map to what you can see in the console:
A custom schema replaces the existing schema. This means that as new features are added to Tracker Edge you will want to periodically merge your changes into the latest schema so you will get any new options.
Default Schema
This is the full schema for Tracker Edge, as of version 13. You won't need to understand the whole thing yet, but this is what it looks like:
- The geofence module was added in v13.
- The deviceLevelOnly boolean flag was added in v13. This allows a configuration module to only be configured per-device, regardless of whether it's a development device or not.
- The schema version does not change with every Tracker Edge version, and does not match. For example, Tracker Edge v17 is used with schema v13.
Data types
The schema and support in Tracker Edge can include standard JSON data types, including:
- Boolean values (true or false, a checkbox)
- Integer values (optionally with a range of valid values)
- Floating point values (optionally with a range of valid values)
- Strings
- Enumerations (a string with fixed options, a dropdown menu)
- JSON objects (of the above types, with some limitations)
There is a limit to the size of the data, as it needs to fit in a publish. You should keep the data of a reasonable size and avoid overly lengthy JSON key names for this reason. The publish size varies from 622 to 1024 bytes of UTF-8 characters depending on Device OS version; see API Field Limits.
Adding to the schema
Here's an example from the AN017 Tracker CAN application note. This is the new schema fragment we'll add to the console:
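The fragment itself is not reproduced here. As a rough sketch of what such an engine block could look like, with key names and ranges taken from the firmware snippet later in this section (the exact JSON Schema keywords used in AN017 may differ):

"engine": {
    "type": "object",
    "title": "Engine",
    "properties": {
        "idle": {
            "type": "integer",
            "title": "Idle RPM speed",
            "minimum": 0,
            "maximum": 10000
        },
        "fastpub": {
            "type": "integer",
            "title": "Publish period when running (milliseconds)",
            "minimum": 0,
            "maximum": 3600000
        }
    }
}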
Of note:
- Since adding a custom schema replaces the default schema, you must include all of the elements from the default schema. It does not merge the two automatically for you. The whole file is included below.
- The new engine block goes directly in line and below the tracker block, which is the last configuration block (at the time of writing).
- The new engine configuration includes two elements: Idle RPM speed and Publish period when running (milliseconds); both are integers.
Here's the whole file so you can see exactly where the data goes when merged with the default schema.
If you set this schema you can go to the console and view your fleet configuration with the new panel!
To remove the Engine panel and restore the default schema use the Restore Default Schema button:
Viewing in the console
This is what it looks like in the console:
Manually
You can also do these steps manually:
Getting an access token.
You can also generate a token using oAuth client credentials. You can adjust the expiration time using this method, including making it non-expiring.
Backing up the schema
At this time, the schema can only be set using the Particle Cloud API. Examples are provided using
curl, a common command-line program for making API calls.
It's a good idea to make a backup copy of the schema before you modify it. The feature to delete the custom schema and revert to the factory default is planned but not currently implemented.
curl -X GET '' -H 'Accept: application/schema+json' -o backup-schema.json
- :productId with your product ID
- :accessToken with a product access token, described above.
This will return a big block of JSON data and save it in the file backup-schema.json.
Or, the device-specific schema for a development device:
curl -X GET '' -H 'Accept: application/schema+json' -o backup-schema.json
- :productId with your product ID
- :deviceId with your Device ID that is set as a development device.
- :accessToken with a product access token, described above.
Setting a custom schema
There is no UI for setting the schema in the console, so you will need to set it using curl:
curl -X PUT '' -H 'Content-Type: application/schema+json' -d @engine-schema.json
- :productId with your product ID
- :deviceId with your Device ID that is set as a development device.
- :accessToken with a product access token, described above.
To restore the normal behavior, instead of using
@engine-schema.json, use
@backup-schema.json you saved in the previous step.
Or, for product-wide configuration:
curl -X PUT '' -H 'Content-Type: application/schema+json' -d @engine-schema.json
- :productId with your product ID
- :accessToken with a product access token, described above.
Setting configuration
You can also set the values using the API directly, such as by using curl:
curl -X PUT '' -H 'Content-Type: application/json' -d '{"engine":{"idle":1550,"fastpub":30000}}'.
This sets this configuration object:
{ "engine":{ "idle":1550, "fastpub":30000 } }
You should always get the entire configuration, change values, and set the whole configuration back. In HTTP REST APIs, POST and PUT do not merge changes with the existing data.
Getting configuration
curl -X GET '' -H "Accept: application/json".
Firmware
And finally, this is how you access the data from your application firmware:
// Configuration settings, synchronized with the cloud
int fastPublishPeriod = 0;
int idleRPM = 1600;
Create some global variables for your settings.
// Set up configuration settings
static ConfigObject engineDesc("engine", {
    ConfigInt("idle", &idleRPM, 0, 10000),
    ConfigInt("fastpub", &fastPublishPeriod, 0, 3600000),
});
ConfigService::instance().registerModule(engineDesc);
In setup(), associate the variables with the location in the configuration schema. While just a couple lines of code, this automatically takes care of:
- Loading the saved configuration from the file system during setup(), in case the device is offline.
- When the device comes online, getting any updates that occurred while offline.
- If the device is already online and the settings are changed, they are pushed to the device automatically.
For the full example, see the AN017 Tracker CAN, the CAN bus application note.
Example
Here's an example of how you set up a custom schema and use it from firmware. It includes many of the available types of data.
Schema - Example
Here's the whole schema:
You can also set it using curl or another tool to call the API:
curl -X PUT '' -H 'Content-Type: application/schema+json' -d @test-schema.json
- :productId with your product ID
- :accessToken with a product access token, described above.
Be sure to use the full schema, not just the part with "Mine" as a custom schema replaces the default schema!
To remove the Mine panel and restore the default schema use the Restore Default Schema button:
Console - Example
This is what it looks like in the console.
Getting the Tracker Edge Firmware
You can download a complete project for use with Particle Workbench as a zip file here:
Version:
- Extract tracker-config-example.zip in your Downloads directory
- Open the tracker-config-example.
main.cpp
This is the Tracker Edge main source file. There are only three lines (all containing "MyConfig") added to the default main.cpp.
MyConfig.h
The C++ header file for the custom configuration class.
MyConfig.cpp
The C++ implementation file for the custom configuration class.
Digging In - Example
Member variables in the C++ class
int32_t contrast = 12;
double tempLow = 0.0;
int32_t fruit = (int32_t) Fruits::APPLE;
String message;
bool thing = false;
The settings that you can configure in the console are all added as member variables in the MyConfig class.
Accessing member variables
int32_t contrast = MyConfig::instance().getContrast();
To access configuration settings, get the
MyConfig instance, and call the accessor method
getContrast(). There are accessors for all of the variables above.
int32_t getContrast() const { return contrast; };
double getTempLow() const { return tempLow; };
Fruits getFruit() const { return (Fruits)fruit; };
const char *getMessage() const { return message; };
bool getThing() const { return thing; };
C++ getter functions are provided for convenience in the .h file.
Defining an enumeration
enum class Fruits : int32_t { APPLE = 0, GRAPE, ORANGE, PEAR };
The
Fruits example is a bit more complicated. It's an enumeration. In the console, this shows up as a popup menu (combo box) with a pre-selected list of options. The data sent back and forth between the cloud and device and saved on the cloud side is a string.
However, in device firmware, it's sometimes easier to work with numeric constants instead of strings. The
ConfigStringEnum takes care of mapping between numeric and string enumerations. It's optional - you can work directly with the strings if you prefer.
The declaration above creates a Fruits enumeration.
MyConfig::Fruits::APPLE has a value of 0 as an int32_t.
GRAPE is 1, and so on.
init() function
void MyConfig::init() {
    static ConfigObject mineDesc("mine", {
        ConfigInt("contrast", &contrast, 0, 255),
        ConfigFloat("tempLow", &tempLow, -100.0, 200.0),
        // (the ConfigStringEnum entry for "fruit" goes here; discussed below)
        ConfigString("message",
            [this](const char * &value, const void *context) {
                // Get message from class
                value = message.c_str();
                return 0;
            },
            [this](const char * value, const void *context) {
                // Set message in class
                this->message = value;
                Log.info("set message to %s", value);
                return 0;
            }
        ),
        ConfigBool("thing",
            [this](bool &value, const void *context) {
                // Get thing from class
                value = this->thing;
                return 0;
            },
            [this](bool value, const void *context) {
                // Set thing in class
                this->thing = value;
                Log.info("set thing to %s", value ? "true" : "false");
                return 0;
            }
        )
    });
    ConfigService::instance().registerModule(mineDesc);

    logSettings();
}
The
init() method maps between the member variables and the configuration data. It also registers the module, which also:
- Loads the configuration from the flash memory file system at startup, so the previously configured values are available even before the cloud has connected.
- Upon connecting to the cloud, checks to see if there are configuration updates.
- While connected, if the configuration changes, immediately updates the local configuration and the data saved in the file system.
Simple mappings
ConfigInt("contrast", &contrast, 0, 255),
ConfigFloat("tempLow", &tempLow, -100.0, 200.0),
Some of these are very straightforward. These map the keys to the variables that hold the configuration data.
This is how the string enum is mapped to the actual enumeration contents: the ConfigStringEnum entry for "fruit" lists the allowed option strings and, as its last two items, a getter and a setter that read and write the value in the class as an int32_t (integer) converted from the string. That is C++11 lambda syntax, which allows you to define the function inline; the body of the function gets executed later. The setter looks like this:

[this](int32_t value, const void *context) {
    // Set fruit in class
    this->fruit = value;
    Log.info("fruit updated to %ld!", value);
    return 0;
}
One handy trick is that you can add more code to the setter so you will know when the value is updated by the cloud. In the contrast and tempLow examples above, the underlying value changes, but your code is not notified. Using a custom setter makes it easy to notify your code when a configuration change occurs.
ConfigString
ConfigString("message",
    [this](const char * &value, const void *context) {
        // Get message from class
        value = message.c_str();
        return 0;
    },
    [this](const char * value, const void *context) {
        // Set message in class
        this->message = value;
        Log.info("set message to %s", value);
        return 0;
    }
),
You also need to provide a getter and a setter for String variables to save the data in the underlying class. In this case, we use a String object so it's easy, but you can also use a pre-allocated buffer.
Singleton
The
MyConfig class is modeled after the Tracker Edge classes that are a Singleton: there is only one instance of the class per application.
Like other Tracker Edge classes, you call:

MyConfig::instance().init();

To use it, you get an instance of it using MyConfig::instance() and call the method you want to use, in this case init().
In the C++ class, the variable to hold the instance is declared like this:
static MyConfig *_instance;
And there's an implementation of it at the top of the MyConfig.cpp file:
MyConfig *MyConfig::_instance;
Since it's a static class member, which is essentially a global variable, it's always initialized to 0 at startup.
MyConfig &MyConfig::instance() {
    if (!_instance) {
        _instance = new MyConfig();
    }
    return *_instance;
}
The function to get the instance checks to see if it has been allocated. If it has not been allocated, it will be allocated using
new. This should happen during
setup(). In either case, the instance is returned.
Per-device configuration
Tracker devices with device level only configuration (such as geofencing) and devices that are marked as development devices can have per-device configuration. In addition to using the console or curl, above, this tool makes it easy to view and edit the configuration in JSON format:
Virtualization May Break Vista DRM 294
Nom du Keyboard writes "An article in Computerworld posits that the reason Microsoft has flip-flopped on allowing all versions of Vista to be run in virtual machines, is that it breaks the Vista DRM beyond detection, or repair. So is every future advance in computer security and/or usability going to be held hostage to the gods of Hollywood and Digital Restrictions Management? 'Will encouraging consumer virtualization result in a major uptick in piracy? Not anytime soon, say analysts. One of the main obstacles is the massive size of VMs. Because they include the operating system, the simulated hardware, as well as the software and/or multimedia files, VMs can easily run in the tens of gigabytes, making them hard to exchange over the Internet. But DeGroot says that problem can be partly overcome with .zip and compression tools -- some, ironically, even supplied by Microsoft itself.'"
devil's advocate (Score:2, Interesting)
Re:devil's advocate (Score:5, Insightful)
Another potentially real problem would be that Vista as an actual OS on a computer runs slow as hell. People using virtual machines to 'test' Vista would end up with an even slower, crummier machine and thus taint their perceptions for the negative. Nothing kills a product faster than good old 'word of mouth', and there has been plenty of badmouthing of Vista by all levels of tech support (not sales people though, they gotta sell those Vista pieces of crap any way they can).
In short, the only 'acceptable' virtual environment for Vista would probably be Vista itself. They want to lock you into this crappy and crazy DRM scheme that they probably cooked up with Hollywood and hardware vendors to keep people on the upgrade treadmill indefinitely. (since if you cant watch the latest movies you need to upgrade to a computer that can run Vista, which means probably buying a whole new computer which means whole new hardware...)
Re:devil's advocate (Score:5, Informative)
I have as much reason to hate MS's operating systems as the next guy. No, scratch that, I have vastly more reason to hate MS's OS's than the next guy, having watched them attempt to undermine and destroy OS/2 back in the early 90's, back before it become fashionable to hate MS OS's. I remember having to put up with the constantly shifting Win32s extensions for Windows 3.1, which were modified for the sole purpose of breaking OS/2 compatibility. Or their (then new) "per-processor license agreements". I haven't run a Windows machine as my desktop since 1992, having run OS/2, Linux, and Mac OS X (in that order) since that time.
As such, it really pains me greatly to say -- Vista under virtualization is surprisingly decent and well behaved. I've been running the 64-bit Business Edition of Vista inside VMware Fusion on a new 2.16Ghz Core 2 Duo MacBook with 2GB of RAM, and it's surprisingly quick and agile. Sure, I don't get Aero (which just looks bad to me anyhow -- honestly, how is an alpha-blended window title a good thing?), and I'm not using it to play games, and I don't use it to browse the web or do e-mail or digital media, but overall it has been very well behaved, and has been surprisingly quick to boot and run. I've even experimented with it running digital video, and the performance has been very good.
Now of course, I can see why they'd be worried about their DRM stance. As the VMware audio and video go through a virtualized driver/device to the Mac's hardware, it would be easy to use readily available tools to hijack the stream (like Rogue Amoeba's excellent Audio Hijack Pro [rogueamoeba.com]).
Now there is no way in hell I'd ever run Windows as my primary OS -- still think their UI scheme is garbage, and don't like the fact they have both systematically loaded their systems with crap to appease other corporations while punishing their own end-users (DRM), and that they've frequently promised features they've never delivered (anyone else remember when they promised a stand-alone MS-DOS v7? Or when they promised an OODBMS-based filesystem for Cairo starting back in 1996? That same filesystem they didn't deliver with Vista? Or how about when they finally decided it was time to introduce a new filesystem for the 9X line that instead of using a well-designed FS they owned all the rights to, like HPFS or NTFS, they instead exacerbated the problem with a band-aid solution and invented FAT32?). It's still not what I look for in a desktop OS, but as much as it pains me to say it, on a modern machine (and the latest MacBook is hardly top-of-the-line, although it's certainly quite a capable system), under virtualization, Vista actually runs pretty acceptably. If I had to use it as my day-to-day system (and I don't use it much at all -- it's there to support a development toolset for some embedded programming I'm peripherally involved in), it certainly wouldn't be slow or painful to use -- it's instantly responsive, and has so far behaved very well (i.e.: it hasn't crashed yet).
Strange but true.
Yaz.
Re:devil's advocate (Score:4, Informative)
To run the ATMEL development suite primarily, which I can't run otherwise, to program an ATMEL AT90USB microcontroller. It runs an IDE, compilers/linkers, AT90 simulator environment, Subversion, and the FLiP microcontroller board programmer.
I've experimented with a number of other applications, including IE7, WMP, and several of the other built-in tools. I still don't like how they organize their OS, or the crappy UI, but system responsiveness has not been an issue.
I don't advocate anyone use this as their gaming or media environment -- hell, I don't avocate anyone use Vista for anything. But in response to the GP's claim that someone might want to evaluate Vista under a VM and get a poor opinion of its performance, Vista 64-bit actually stands up quite well under virtualization, at least on my system.
(I will note here that the 64-bit version of Vista appears to run slightly quicker than the 32-bit version on my MacBook, both under VMware Fusion, but I suppose YMMV).
Any other questions?
Yaz
Re:So let me get this straight.. (Score:4, Informative)
Want to reply? Try a little reading comprehension first.
Point 1: I didn't say I'm upset with Vista. What I did say is that I don't like the Windows platform. As such, running my embedded dev tools on XP instead of Vista really makes no difference to me -- I don't like either one, have a free license for 64-bit Vista Business Edition, and so use it in those few instances where I have to.
Secondly, I was defending Vista as actually running quite well under VM. So where do you get the idea that I'm upset with Vista? I dislike Windows because the entire line has been poorly designed, I don't like the UI at all, and MS routinely over-promises and under-delivers (how is WinFS, which was most recently supposed to ship in Vista and was yanked roughly a year ago "10+ years ago"?), but I don't have any particular hatred for Vista beyond it being another flavour of Windows crap.
As for your accusation of hypocrisy, Mac OS X doesn't have anywhere near the level of DRM Vista has, and OS X's DRM is pretty easy to avoid: just don't buy songs from the iTunes Music Store. It doesn't have secured pathways that require handshaking with your video display just to play encoded videos, and it doesn't have a kernel you can only plug signed, vendor-validated extensions/drivers into (and which refuses to play such content if you don't). It simply has a DRM decryption module built into a codec. That's it. It's easy to avoid and remove, and doesn't impinge on developers' ability to develop applications or drivers for the system. Don't like DRM on the Mac? Drag and drop iTunes to the trash and it's effectively gone. Then go and play your media in VLC.
So, before you post, at least use some reading comprehension first before you go foaming at the mouth?
Yaz.
Re: (Score:2)
Windows does have the edge in consumer hardware, but with the exception of high end 3D video acceleration, Linux has excellent support for at least one major player in each consumer hardware category (which is why Linux is now a real contend
Re:devil's advocate (Score:4, Insightful)
This system does have a number of problems (and in its current state is still victim to virtualization), and as mentioned above is very difficult to implement, but Microsoft and others are pushing very hard to make it work.
Re:devil's advocate (Score:5, Funny)
Damn that's hard to say with a straight face.
Nesting VMs (Score:2)
Most likely, this could be defeated by simply adding an additional layer of virtualization beyond the said "approved" virtual machine hosting the OS in question. This is actually not unlike some theoretical viruses proposed a while back that would install thems
Re: (Score:2)
So that creates two possible scenarios:
1. No software emulation of Palladium ever gets signed by the Palladium consortium, and thus every check against a Palladium key fails. Thus no stuff (DRM or otherwise) relying on Palladium runs in the VM.
2. There is an emulation of Palladium that gets a valid c
Re: (Score:3, Informative)
Second, Palladium is based on phoning back to the mother ship. *Every single Palladium key* is
Re:devil's advocate (Score:5, Insightful)
The task is "allow A to send a message to B such that B can read it, but C cannot."
Under DRM, B and C are the same person.
Q.E.D.
The claim that a process will allow a customer to manage digital rights are akin to claims that a chemical process will allow a customer to change lead to gold. They are the claims of a fool, a charlatan, a newborn, or someone desperate. Or a devil's advocate.
Re:devil's advocate (Score:4, Insightful)
DRM can make it very inconvenient and very onerous for A to send a message to B, but it can never secure that message against interception by C where B and C are the same person. Telling worried rights-holders that one protocol is "less insecure", when security is impossible under all protocols, is a way to prey upon those worries and can be profitable, but never correct.
Said before (Score:5, Insightful)
Think about it.
Alice (the publisher of the song) is using encryption to ensure that you and only you (Bob) can receive the message. But Jack (also you) is being prevented from viewing the message.
The only reason that DRM is making any kind of headway is because of the hand-waving around terms like "dual key cryptography" and "license management". When you get right down to it, the content producers exist to deliver content to me. Once I get it, the only thing limiting my distribution of that content is legal in nature - I'm afraid of getting sued or prosecuted, so I don't.
Speakers can be recorded, screens can be videotaped. DRM can make it more difficult to copy content, but it will NEVER make it impossible. And the sad part is, DRM frequently makes it more difficult to VIEW content legitimately.
As a good example, I just set up a Windows XP laptop for one of my sales associates. I spent an ungodly amount of time going thru "Genuine Advantage" this and "Genuine" that, along with some dozen or more reboots. It's ridiculously annoying, especially when updating a new CentOS system takes a single line:
yum -y update; shutdown -r now;
Microsoft has it wrong, and it may well be their undoing to find this out.
Re:Said before (Score:4, Insightful)
That's an extremely common view (as said in your comment title), but it's not true. Bob is your television, and you are Jack. I don't care how much cybernetics has progressed, we're not televisions yet, and we as human beings can't assimilate, store, and regurgitate digital content with any kind of quality.
> "Speakers can be recorded, screens can be videotaped."? (For this paragraph, forget that you are a geek when I use words such as "quality" and when I presume you're a pirate - I'm talking about average users).
> "DRM can make it more difficult to copy content, but it will NEVER make it impossible."
Doesn't need to.
Or to frame the absurdity of that argument in an analogy that I feel works well: "Police can make it difficult to commit crimes (and not get caught), but they'll never make it impossible. Therefore we police are futile. When will they learn?"
> "And the sad part is, DRM frequently makes it more difficult to VIEW content legitimately."
No argument. We should be thankful that they have as difficult a time picking a DRM standard as they do. Fragmentation impedes their progress in locking everything down: CDs versus DVDs for instance.
Re: (Score:3, Interesting)
But it's not hard to create a rig that does.
Both are analog holes. If it's not a digital copy, it's not a quality copy,
Many audiophiles would disagree with you, and would argue that analog presents the be
Re: (Score:2, Insightful)
But it's not hard to create a rig that does [capture DRM limited digital data].
Then where is all this hardware? How do you plan to capture HDCP content with a "not hard to create rig"? The whole point is that DRMing the whole system leaves only analog methods, or exploiting flaws.
Many audiophiles would disagree with you, and would argue that analog presents the best "true" copy.
So an analog copy of a digital file is superior to a *perfect*digital*copy*? How did that make enough sense to you for you to type this?
See above points - it's not some guy with a camcorder of his TV, it's the "pro-sumer" guy who has good quality equipment that can kill DRM.
How? Ok, you get your HD cam out and record a plasma screen viewing of a Blu-ra
Re:Said before (Score:5, Insightful)
The millions of people pirating 128kbit crummy sounding MP3s and horribly compressed DivX copies of movies would seemingly be in complete disagreement with that statement. People downloading pirated content don't care so much about quality. Those who care about quality tend to also be the kind of people who also prefer legitimate copies, DRM or not.
Re:Said before (Score:5, Interesting)
At the rate technology is progressing, somebody with a HD projector, a HD camcorder and a few extra lenses and filters will be able to do an analog capture that easily satisfies the average guy with a 50" LCD display.
It sure helps that even today all of the satellite HD signals are highly degraded, often re-encoding from 1920x1080 to 1280x1080 and the vast majority of the viewers don't give a damn. Even the broadcast networks do shitty job, Fox is bitrate starved for no good reason, running their stuff at roughly 10Mbps when the available bandwidth over the air is just under 20Mbps. NBC and ABC are only a little bit better. Only CBS seems to give a crap about the quality of their broadcasts.
So, either consumer standards are going to have get a LOT higher or pricing on DRM'd products is going to have get a LOT cheaper if they want to compete with the quality level available via "free."
All that assumes that no bored grad students ever take an electron-tunneling microscope to the "tamper-proof" chips in these DRM systems and extracts the keys necessary to do the decrypt at the digital level. Nowadays that's not particularly expensive to do.
Re: (Score:2)
Because for the first time, virtually any copyrighted work can be perfectly copied at the click of a button, and distributed with close to zero effort.
This applies equally to the vendor. Nothing stopping them improving the efficiency of their distribution channels to match pirates.
Copying is a tool; it applies equally to vendor, consumer, pirate, whatever and does not suddenly justify DRM which messes the balance by making the average citizen guilty until proven innocent.
---
DRM'ed content breaks
Re: (Score:2)
DRM makes piracy *harder*. Not impossible, just harder, and that's all it takes to be effective.
The problem with DRM is that it's not only effective at slowing piracy, it's effective at locking consumers out of their own content.
I'd disagree with this. The cost of breaking DRM is a one time fee for pirates; once an unprotected version of the data has been released, the proverbial genie is out of the proverbial bottle. Large content holders, like the organisations that make up the MPAA, want the benefits of distributing their data across a large range of devices, and to the greatest possible proportion of the public, whilst trying to keep a small set of keys secret and hidden. We have problems securing even dedicated data centres f
Re:Said before (Score:4, Insightful)
So your saying that, new technology exists which makes distribution of content much cheaper...
And yet content producers want to charge the same or more for this cheaper-to-distribute content? While also restricting the customer more than they did with earlier distribution methods? It looks like their business model is becoming obsolete, and they're just trying to shore it up by restricting their own customers.
Why not sell a product/service that cannot be easily reproduced, such that you're actually providing value for money... Movies shown in a cinema spring to mind; the cost of a cinema-sized screen and sound system is beyond the means of most people. And then there are live concerts for music.

You can't clone a live concert, because you cannot produce exact replicas of the artists (yet?), and the cost of setting up a bootleg cinema would be too high to be worth the hassle.
If you want to sell movies on dvd, they need to be priced such that copying them is not viable, and yes that is possible. Movie companies have access to factories where DVDs are mass produced at a cost of 1 or 2 cents each, no pirate group would be able to obtain blank media that cheaply, let alone the time and effort needed to write to it.
In short, piracy only exists because the original media is disproportionately priced compared to its production cost. DRM exists not as a solution to piracy, but as a method to wring more money out of their paying customers.
Re: (Score:3, Insightful)
So your saying that, new technology exists which makes distribution of content much cheaper...
Yes, I am. I can get a $2,000.00 computer shipped to me from across the planet for $40. That does not mean the computer should cost $40.
A film (or CD, or book, or whatever), costs something to create, costs something to manufacture, costs something to promote, and costs something to ship. Due to technology, the highlighted items are, or can be, very close to zero (cents, or fractions of cents). The other costs still exist.
The problem is that, once the other costs are paid, *anyone* can just step in and per
Re: (Score:3, Insightful)
Yes, people will pay zero if they can get away with it - Welcome to capitalism.
"The expense ISN'T in the distribution. It's in the initial production but recouped at distribution time."
And there you have a flawed business model, that simply cannot exist in an open market.
"So only physical things have value? Anything that
a rig that does (Score:2)
I wonder occassionally why this seems so hard. I could easily set up a MythTV system (see [mysettopbox.tv]) and use it with any number of cards ( [wifi.com.ar]) as a way to turn output into input. Then I could use my DVD player or CD player or even my main computer as my player. It could be considered an audio hole, but it would be a pretty high quality system, not relying on something like a hand held camcorder or audio recorder. Sure someday it might be
The advantage of digital for piracy (Score:5, Insightful)
The advantage of digital for piracy is not that you can get a perfect copy. Perfection is not the goal in piracy. In many cases a camcorder shooting a screen is fine. Instead, the advantage of digital is that the quality is not degraded further as an infinite number of generations are made. Traditional pirates were limited to making 2 to 5 generations of VHS tapes because after that, almost nothing was left of the original movie. But an analog ripped (not cracked) MPEG file can be traded all over the world without any further single bit errors (although some of that will happen at times). The internet scares the content industry because of the speed (the latest release can be in the hands of millions before the big opening). Digital scares them because it enables the multi generational sharing as we already see in P2P. The problem is, they are fixated on encryption, which is at best going to prevent the average Joe from making a perfect copy and sharing with his neighbor across the street. When Joe finally figures out how to make an analog rip or just shoots it off his screen with a camcorder, his neighbor might reject it because it's not perfect, but you can bet the world will eat it up via the internet.
Re: (Score:3, Informative)?
Hi, I live in Canada. Recently, the MPAA has banned pre-screenings in theaters across *our entire country* because they think they lose too much business to camrips done in Canada.
Take a look at this: [torrentspy.com]
There are thousands upon thousands of people pirating some guy taping the movie theater screen. Yes, people really do want to watch camrips. If DVDs couldn't be digitally ripped, then people would just tape their TVs, and pirates would absolutely download that;
Apt analogy (Score:3, Insightful)
Re: (Score:2)
The economic rights of a copyright holder have long been recognised throughout the West, but moral rights are less clear. They have been recognised in continental Europe since the late 19th century, with the BC dating from 1886, but
Re: (Score:2)
Both are analog holes. If it's not a digital copy, it's not a quality copy,
This is clearly nonsense. It's entirely possible to make a quality analog recording. How do you think they made music recordings before digital audio? That's right, they used analog magnetic tape, which can asound much better than the digital audio on a CD. How do you think they made those "digitally remastered" CD editions of Dark Side of the Moon? They used the analog master tapes, of course.
Likewise, motion picture film is an analog medium, and it has far greater quality than even digital High Defini
Re: (Score:2)
Actually, you are wrong here. Analog copying could be a problem when in the process an analog copy was made multiple times and the content would degrade a bit further each time. However, the first analog copy will be more than acceptable (and might well be indistinguishable from the original by a set of viewers doing a double blind test).
So, if you can make a good analog
Re: (Score:3, Interesting)
A programme I attended at a Canadian east coast university had high international enrollment. One of the guys was from Chechnya. We had a pretty good instructional technology setup in one of the lecture spaces, so we could snag a movie off the Internet and take a break at two in the morning to watch said movie while scarfing popcorn and pop.
We had End of Days* up on the screen one early morning when the Chechnyan Dude comes in and exclaims that 'this is like going to the theatre back home!'. The movie
Re:Said before (Score:4, Insightful)
Point is it's not hard, IMHO crypto as a means to avoid piracy is a joke, there's no point until we DO get that encrypted tap straight into the brain - the reason it's there is to piss off and control the customer
Ridiculously annoying, and sometimes impossible (Score:5, Interesting)
Being a generous IT worker, when an employee's machine goes bad I'll sometimes give them my own machine if they need something fast. Last time I did this, a copy of Vista which I purchased directly from Microsoft's website suddenly became "not genuine". Not wanting to fuss with it, hoping I'd be able to get my machine back and make my copy of Vista genuine again, I ended up passing the time frame (30 days?) allotted for using the OS, then was locked out with a red screen saying "this copy of Microsoft Windows Vista Business is not genuine". This statement was clearly a lie if taken literally, but discussing vocabulary destruction through marketing would be quite a digression.
So, I went back to using my dual-boot linux partition and another spare PC for my day-to-day work.
Fast forward a few weeks...
Last Friday I got my laptop back, put the hard disk back in, and what's this? Vista still said it was not genuine. I tried to re-activate online but it said I couldn't do that because that key had already been activated. (Gee, you think? Maybe when I bought it?) So, taking the only course left, I called Microsoft on the phone and entered a series of numbers about 30 digits long. When the computer couldn't validate my install it forwarded me to some Indian call center, a place I'm familiar with because I've had to do this process more than a few times.
But this time was different... (Don't get your hopes up, it wasn't different in a good way. I was on the phone with a Microsoft offshore call center, remember?) Not only was my personal system down, but apparently their whole call center system was down. They were unable to validate my install and told me I'd need to call back later after they got their system back up and running. Apparently there was no other backup call center online, I simply had to hang up and call back another time when their system was back up.
Back to my trusty dual-boot Linux partition with its `sudo bash -c 'apt-get update && apt-get upgrade && reboot'`, or my Mac with its `sudo bash -c 'softwareupdate -i -a && reboot'`
Oh, and Jim Allchin can kiss my ass. "It's rock solid and we're ready to ship." Rock solid as in paper weight. What good is a stable OS that won't let you use it?
Re: (Score:2)
It should be pretty good for Microsoft's bottom line. If they can force you and eveyone else to rebuy their "O/S" 3 to 6 times over the lifetime of the box/lapbox at $100-400 or more, how can that not help MS's bottom line. That ought to make the stock analysts and the Mini-microsofts of the world happy. Besides, think of Steve Ballmer's starving children! You want them to be warped for life because they don't have two different Mercedes-Benz cars for e
Re: (Score:2)
sudo -H -c "yum update"
You reboot only if the kernel changes.
Re: (Score:2)
I didn't bother fixing it, just booted into whatever distro I had installed at the time (I think it was Slack 9) and got all my important stuff off the windows partition. Then I turned it into swapspace.
Re: (Score:3)
Slightly off-topic, but I'd suggest changing that to yum -y update && shutdown -r now. Using "&&" in lieu of ";" will prevent the system from rebooting if the call to yum isn't successful (can't contact a server, whatever). On many systems you can even replace "shutdown -r now" with simply "reboot".
Re: (Score:2)
yum -y update && shutdown -r now
Other than the fact that your way turns both into a single return (for error checking) is there any particular difference? Both get the job done, both result in a fully updated CentOS. (or RHEL or Scientific Linux or Fedora Core) And, what kind of error-checking are you going to meaningfully get on a system reboot?
Pedanticism for its own sake is wasteful. There are many, many, MANY ways to skin a cat. But in the end, the only thing that matters is wheth
Re: (Score:3, Informative)
Big difference. The shell doesn't evaluate additional arguments of an "and" directive if the first argument evaluates to false. Thus, using && guarantees that the shutdown will not occur if the update fails. That's a good thing for any command in which a failure could potentially leave your system in an unbootable state (e.g. an OS update).
Re: (Score:2)
Using &&, if yum errored out because your internet is down, you dont reboot your system needlessly.
And I agree, the original "correction" was completely unnecessary.
Re: (Score:2)
Using &&, if yum errored out because your internet is down, you dont reboot your system needlessly.
More importantly, if yum crashed and you ended up with a corrupted RPM database or half-installed package that might render the system unbootable, you won't shutdown your system in a potentially unrestartable (or otherwise broken) state.
The correction is not only unnecessary, it is demonstrably a poorer method.
Re: (Score:2)
Actually checking that something worked before proceeding to the next and dangerous step seems to be wi
xkcd has to be mentioned here.. (Score:4, Funny)
Re: (Score:3, Informative)
Come on! Why not link to the xkcd page [xkcd.com] itself? There is an alt text to those comics which will be missed if you directly link to the png.
Tens of Gigs? No way. Try 10kilobytes. (Score:5, Insightful)
I'm sure there's a legal use for this. I just can't think of one...
Re: (Score:3, Interesting)
It sounds like there is a lot of confusion, and admittedly, I'm not going to read the article, because it seems to come from there.
Vista apparently requires an authenticated path from th
Re: (Score:2)
You could simply capture only the portion of the screen which is "protected" from capture by the guest OS. That's where the interesting stuff is going to be.
Re: (Score:2)
With a VM, you'd probably have the easiest time using virtual screen captures (no need to look at the real screen, just look at the right spot in the player's decoded memory). In your described case, you don't need
Re: (Score:2)
End result, everyone can run the VM and watch the movie, then discard the VM again.
I guess that's what they fear. What they really should be worried about though is that they've put themselves in a position they can never hope to win by using DRM.
Re: (Score:2)
I don't know whether it's technically an emulator or not, but it's close enough for me.
No way (Score:4, Funny)
No way. I told my mom and my aunt not to trade those VMs and they listen to me.
I don't want to see them in jail.
you don't have to see them in jail (Score:2)
Man just the blurb drives me nuts (Score:2)
Re: (Score:2)
Very well. But just in case that changes, remember that you can temporarily stun it by uttering 09 f9.
(Note: I didn't include the full number above because I felt it would not have helped the rhythm of the sentence at all, and that the joke itself of inserting it everywhere was by now overdone. But since I care whether people question my geekdom ("I care! I care plenty! I just don't know how to make them stop!"), here it is for goo
Re: (Score:2)
I can watch DVD's on Linux by writing "xine dvd://" or, if I have the DVD image ripped, by writing "xine dvd:///path/to/dvd/image.iso". Is that what you meant ?
Re: (Score:2)
Sad, but true.
Not the whole story (Score:5, Insightful)
Last year I was in Taiwan running WinXP under VirtualPC - with the appropriate upgrades after Microsoft had bought the product from its creators - and I had zero trouble.
This year, I'm in Taiwan again, but this time I'm running WinXP under Parallels. Shortly after my use of the machine here on the internet, I got this message telling me that my hardware had significantly changed since the original installation and that I needed to re-validate - I don't recall the rest of the message, but it involved Genuine Advantage and suggestions of unusability. So, even though I'm not carrying my original box around with the keycode (would you??), I decided to be brave and tapped on the warning from the tray as instructed. Took me right to an MS page at what appeared to be Microsoft-Taiwan, and it was quite persistent that I should continue to be routed to some Chinese language page. Long story short, I got some embedded wizard launched, got the MS phone number for the USA (Bangalore notwithstanding), called in, got re-validated and woot, woot, woot.
It seems - very strongly to me - that the only thing that Microsoft could have detected was my location in a way that didn't make sense to them, and I think I triggered something that decided I had a pirated copy. I really haven't had any use of my machine or anything change in any other way to cause me to suspect anything else.
So, how long before business travellers - and we fill a lot of 747s, virtually all running Windows - picking up VM for one reason or another start pitching fits when they discover that they can go into a full-screen presentation and be tagged publicly as potential software pirates?
I couldn't understand why MS had a real problem with Vista under VM, but if the cause I posited is in fact true, then the problem Microsoft is worried about goes back to the XP codebase. Say anything about Vista's new codebase, but it's all from the same company..... so, I think DRM is a specious explanation but it allows them to hide behind something where they can try to claim some innocence regarding VM - when in fact the OS may be more seriously broken w.r.t. VM than they're admitting.
Re: (Score:2, Interesting)
* use OS X and need Windows
* use Linux/laptop and need Windows
* need or desire to partition an entire OS so that during a presentation, if casually called away from laptop, fewer worries about "innocent" snooping
Business guys adopt tomorrow what the propellerheads did yesterday. Last time I had trouble w/ a net connection for Windows in a hotel in the Bay Area and the drogue started to give me dos
I hope *IAA keeps wasting thier money on DRM (Score:5, Insightful)
These jerks think they define popular culture. They don't.
DRM doesn't work. [freshdv.com] People steal the stuff before it's encoded with the DRM. The key is always distributed with the content or recoverable.
DRM can't work. [wikipedia.org] Their attempts are hilarious. In order to be perceived by a human it has to be rendered in analog format, at which point capturing and encoding it in an open format is trivial in all cases.
DRM shouldn't work. [blogspot.com] If they won't sell me the content for the device I want to play it on when I want to play it where I want to play it, I'll convert it [blogspot.com] and to hell with what they think I should be allowed to do. Fair use.
DRM is a security risk. [slashdot.org] I will not surrender control of my PC to render your content.
The more they annoy people, the more visibility worthy indie acts [harveydanger.com] get. People will listen to their popmart derivative garbage less [magnatune.com].
I am personally opposed to straight pirating the stuff but I have to admit my conviction on the subject is wavering at this point.
Re: (Score:2, Interesting)
Did I say interesting? I meant scary.
You want irony? (Score:5, Funny)
Hazards of monopoly (Score:2)
Microsoft has nothing to do with Hollywood (Score:5, Interesting)
> and Digital Restrictions Management?
Microsoft has nothing to do with Hollywood. There are waiters in Hollywood who have forgotten more about movies than anyone at Microsoft will ever know. Even the accountants use Macs here in California.
Microsoft does not even make a movie player that plays the standard format. Calling Windows Media Player or Zune a movie player is like saying Microsoft Word is a Web browser because it can also display text and images. That is a very unsophisticated view that you can't sell to someone who actually knows how the Web works. Well, in Hollywood, they know how movies work. MPEG-4 was coming for many years, then it was standardized, then it became the format in iTunes+iPod, then the iPod took off. MPEG-4 is also HD DVD and Blu-Ray and AppleTV and iPhone and PSP. MPEG-4 is also the standardization of the QuickTime format which all the content creation tools are built around, even those like Avid that compete with Apple, so it arrived already having mature development tools. One day there was a QuickTime update and all of my tools could now generate MPEG-4 H.264 as if they had always known what it was. Further there is a free open source MPEG-4 streaming server that runs on every Unix and also Windows, it also has no streaming tax. Finally, most of all, MPEG-4 has no "content tax" while Microsoft's Windows Media business model depends on a content tax and everybody in both music and movie industry already knows better than that. All this happened already with sheet music and player pianos 100 years ago. Nobody is going to use an encoder that spits out a file which you can't copy or share without paying a tax to Microsoft, because everybody wants their movie or album to sell 100 million copies (even if it actually has no chance) so when Microsoft says aw it's only a penny per copy, people do the math and say no you are raping me with that, I can buy an MPEG-4 encoder for $20 and use it to make all the copies I want and not owe anybody anything why don't I just do that? And MPEG-4 just happens to already be integrated into all my tools and integrated into the hardware of consumer video playback so there was never any there there with Microsoft and movies. Even if they built a technically sound system or one that had a cost advantage, they would have to overcome the fact that nobody wants to work with the evil typewriter company.
All you are seeing here is another way that Windows sucks. Core computing functionality that customers use and want and even need to stabilize their Windows software on a real operating system is falling victim to Microsoft's lack of focus and hopeless star fucking. Why isn't Windows ready to be a good typewriter today? Because of its magic DRM.
Re: (Score:2, Informative)
> MPEG-4 has no "content tax"
Really? How about that licensing fee that all MPEG-4 use requires [wikipedia.org]? The folks who own the MPEG-4 patents fully intend to make you pay for their use. Personally, I'd call that a "content tax", since anyone who sells an encoder or any device that embeds an MPEG-4 decoder (E.G.: a BluRay player) has to pay it.
> there is a free open source MPEG-4 streaming server
Really? I'd love to know what it's
Re: (Score:3, Interesting)
What I find most interesting about the analogy though,
BZZZT! (Score:3, Interesting)
"Content provider revolt" is a pitiful excuse that no one with a brain really buys.
AH HAH! More hardware (Score:4, Interesting)
"What could MSFT do next to require me to once again throw out my computer and buy the latest and greatest hardware in 2008 or 2009?"
Virtualization. MSFT Vista 4.0 or 3.51 or 95/98 or 2009... Would require:
Min of 1GB of RAM.
1TB HD (supplied by FibreChannel disk).
Quad Core CPU
Dual Core GPU.
All I wanted was to be able to surf the web and play Civ. I now require the computational power of an IBM p590.
Re: (Score:2)
Using &&, if yum errored out because your internet is down, you dont reboot your system needlessly.
You were floored by a ~1GHz P3, 768MB of RAM and a $30 video card? A ~6 year old PC you can get basically for free because companies throw them out? Specs that are basically the same as those for equivalent OSes?
Disappointing (Score:2)
DeGroot? (Score:2)
Choose something else (Score:5, Insightful)
Ok, you've got many PCs most of which run Windows XP [nytimes.com]. They've been crashing every Exploit Wednesday [windowsitpro.com] since October. Every one has a license that was paid for three times (six times under Software Assurance [microsoft.com]). You have seventeen core apps. Some of them are paid for several times. Some have a licensing server so that some people can use them when other people aren't, and come with a utility so that priority users can kick off nonpriority users. A couple of them are free. Four of them are nagware that came with your PCs or that you thought were a good idea at the time. One is an in-house app that only runs in a DOS box and accesses dBase files stored on your server. Every month a couple get pwned [theregister.co.uk] for no detectable reason.
Even if they don't run Windows [theregister.co.uk] you've paid over and over. You have to, because they've made it clear what "enforcement" will happen if you don't. [microsoft.com]
Every software vendor you buy from makes it clear the software you bought is being split [symantecstore.com] into "basic" versions that include most of the features you use, and an "Enterprise" version that includes must have features you can't live without. Both new versions will be annual subscriptions instead of purchases. Naturally, the Premium version you require will cost many times what you already paid and the cost will be annual rather than once each. Of course they're entitled to this conversion of your purchase into a "revenue stream" because they've upgraded their product from an application to a "platform framework" that "optimizes" your "TCO".
You're thinking about investigating this multicore thing that people are talking about, but it seems impossible to reconcile the software licenses with multiple "cores" on one or more CPUs. You want to do server consolidation, but every server app has to be evaluated both by a professional enginner and by a hideously expensive team of lawyers who also want to audit every piece of software you've purchased since 1974. Your CPA wants to know why you licensed the same software 3-6 times for each PC, and why you're buying licenses for software that won't run on the PCs they're purchased for. And what's this entry for "SCO Linux licenses"? You live in dread of being audited [com.com] by jack-booted thugs, [bsa.org] not because you're pirating but because the danger of a paperwork snafu that destroys your budget is nearly certain and the slightest discrepancy is going to get you canned.
I have one question: What the hell are you thinking? Get off the train to crazy town. The free stuff [ubuntu.com] isn't just good, it's better. So much better that you're not going to believe you put up with this crap. If it's truly free you don't have to account for each copy/user/use/year/processor/incidence. It's not free because it's less worthy: it's free because you're not the first person to be disgusted by the experience you're having. Pay for support. Nobody ever got sued for terminating their support contract. Figure it out. The world has changed. The future is open.
Re: (Score:2)
Though you did miss out the bit that the wording of most commercial software licenses is incredibly hard to follow - I sincerely believe that 90% of them are written by lawyers who are briefed to make sure it's practically impossible to understand them, much less follow them to the letter.
However, there remains just one practical problem: IT works for the business, not the other way around.
When you have a free, real alternative to Sage M
It's about the content path, not VM images (Score:3, Informative)
Is stupidity abound or something? The comment from the article about copying multi gigabyte images is ludicrous and makes one ask if the guy has ever used a VM let alone knows anything about the basics of DRM.
First things first. Virtualization means that the physical hardware and virtual hardware are not linked. That means, in the simplest terms, if you want to use a TV, monitor recording device or whatnot to view your VM: you can, and the VM doesn't know. This is a technological threat to DRM implementations inside a VM, because they can't guarantee the path outside the VM.
Why you would copy potentially dangerous VM images from one PC to another when you could simply capture the output, I don't know.
Once upon a time NES ROM carts implemented their own I/O multiplexing - the vast majority still aren't emulated today because it's tedious work. Guest OSes inside VMs will continue to find ways of obfuscating their data (after all the guest inside a VM doesn't even have to be the same architecture as the host!)... its anybody's game once you're outside of the Guest.
MS don't want people to virtualize their software for the same reason DRM is a CEOs best friend: they can charge more for less restrictions.
If you have to pay $100 extra for the Ultimate or Pro versions of Vista to get virtualization, and people want virtualization, it can be seen as a valuable extra. Extras, not to be confused with added value, increase price premiums through added cost to the purchasing party.
However, the meat of the issue is not that people spoke out about DRM in such obvious and clear-cut language, touting the anti-competitive stance MS has taken, but that bloggers and writers are steering the focus to Linux, which is offering a myriad of virtualization options for free. The only sensible stance is to do the same - just like MS did with VirtualPC... MS can't afford to be completely leapfrogged in any area.

The thing that irks me is that people are constantly barking up the wrong tree with regards to industry ties with companies and DRM. The "MAFIAA" (as it's been put) is convincing companies to make DRM provisions, but they can't force the implementation onto end users if companies can't/don't want to/disagree. MS allowing virtualization is nothing more than a technology response to Linux. No one is warming to DRM, and DRM is not dying any time soon. This is market forces at work. Granted, market forces are slow, and cause no end of problems for us now...
Only need the VMs once... (Score:2)
So what? (Score:2, Insightful)
Dont forget Application Virtualization (Score:2)
Re:Whats more likely (Score:5, Informative)
JVC hdtv, name and shame.
Re: (Score:2)
Your money has an HDMI port? Which currency is this, Sealand's?
Re: (Score:2, Informative)
It is possible that content providers can blacklist/revoke the encryption key for a HD-DVD or Bluray player, but this would only brick the disc player, not a TV.
In short, no signal - either junk or deliberate - can permanently disable the hdmi port on a tv unless there is something wrong/faulty with the tv design itself.
Xbox (Score:2)
Re: (Score:2)
Honestly though, doesn't Michael Chertoff look like a necromancer? Google him and find a picture (the first one on wiki is nice) and picture him holding up a skull commanding the zombie hoards. Makes me giddy like a schoolgirl, but I have issues.
Virtualization enables easier migration (Score:2)
Once such virtualization (assuming it is sealed against any DRM exploits and doesn't provide for mobilization of a VM) is allowed, that creates a situation where Microsoft doesn't have control of the hardware. Then another OS could be easily run side by side (since virtualization can create multiple virtual machines). And people might try some free operating system, discover that it almost meets their needs, and will have an opportunity to gradually migrate over to using that free operating system for all
I changed positions too soon and missed two nice opportunities, one in England and another in Ireland, both tremendous. Well, a contract is a contract, and as a professional (please read Uncle Bob's latest writings about that) I have signed, so I have to go, although J2EE seems less attractive to me than it used to be. The Actor model and functional programming appeal to me more. It's a curse being stuck in France for the next five months, but next time I will wait for the right opportunity, six months if necessary, practicing what I like.

I took the time to read and start practicing things like Akka, and kept reading Scala in Depth by Joshua Suereth. That one rocks, specifically when it comes to implicits and the typing formalism. But the domain is hard, and it is a must-read-many-times book, although Joshua Suereth provides clear explanations and examples. I also began practicing Clojure thanks to an amazing book: The Joy of Clojure. I felt the same as when I first discovered the Abelson and Sussman lectures. Some examples are quite hard on the Java eye, and I bumped into a wall last week.

I spent a tremendous amount of time at the end of chapter 7, on a shortest-path search algorithm meant to help you get out of a small maze. Interesting. I noticed once more how some of us (me included) are so bound to our clients' will to integrate open source libraries - in order not to reinvent the wheel - that we may lose the ability to reason about things we ought to know, like how to find a short path in a graph. By the way, I hate the expression "do not reinvent the wheel"; it is proof of laziness.

The proposed algorithm is the so-called A* search algorithm, which extends Dijkstra's algorithm. I was not sure I had understood it all clearly, so I decided to lose(?) one more day and implement the same problem in Scala using Dijkstra's approach to searching for shortest paths. Roughly speaking, A* search adds an estimate function used to sort the remaining paths to be explored.

I took the time to re-read my Introduction to Algorithms chapter on Dijkstra's algorithm. Indeed, the maze problem is typically a shortest-path problem. It is all a question of vocabulary. What are we talking about?

Typically we are moving through a world - the maze - where we have to progress from one point - a spot - to another. The two points can be, respectively, the entrance and the exit of the maze. Each step we make costs us some price. Walls, high obstacles, or any other difficulty cost us a maximum "weight", while a step on flat floor costs the minimum, let us say 1 on an integer scale. The topology and shape of the maze can then easily be represented by an array-of-arrays structure (a matrix) like the following:
val world = World(Array(
  Array(1,  1,  1,  1,  1),
  Array(99, 99, 99, 99, 1),
  Array(1,  1,  1,  1,  1),
  Array(1,  99, 99, 99, 99),
  Array(1,  1,  1,  1,  1)
))
What we have presented here is the structure of a Z-shaped path. The walls - of high cost - are represented by the number 99. So a natural path to leave the maze from the upper-left corner to the bottom-right corner zigzags in a Z: right along the top row, down the right-hand column to the middle row, back left along the middle row, down the left-hand column, and finally right along the bottom row to the exit; the route is written out as a list of spots just below.
The astute reader will have noticed that we have kept the standard x, y notations used in computer science. The x axis values increase from the left to the right, and the y values increase from the top to the bottom.
The upper left corner is represented by the Spot(0,0) while the bottom right is represented by Spot(4,4).
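As a sketch (my own illustration, not something from the original post), that natural route can be written down as a list of spots, assuming one step per horizontally or vertically adjacent cell:

val zPath: List[Spot] = List(
  Spot(0, 0), Spot(1, 0), Spot(2, 0), Spot(3, 0), Spot(4, 0),   // right along the top row
  Spot(4, 1), Spot(4, 2),                                       // down the right-hand column
  Spot(3, 2), Spot(2, 2), Spot(1, 2), Spot(0, 2),               // left along the middle row
  Spot(0, 3), Spot(0, 4),                                       // down the left-hand column
  Spot(1, 4), Spot(2, 4), Spot(3, 4), Spot(4, 4)                // right along the bottom row
)

Every cell on this route costs 1, so the whole walk of 16 steps avoids the 99-weighted walls entirely.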
This working context can be mapped to a graph problem where the Spot abstractions are the nodes and the matrix (array of arrays) holds the weights of the edges linking the nodes. The weight (or cost) at a given position in the matrix expresses how much it costs you to reach that position, wherever you come from. The so-called weight function regularly invoked in graph search algorithms therefore reduces to a lookup of a value in the matrix.

So what is a spot? An abstraction that holds position coordinates. As we ramble through the maze, we will have to find the neighbors of a spot using incremental position deltas (the steps). The test describing the expected behavior is:
import org.junit.Test
import org.scalatest.junit.{ShouldMatchersForJUnit, JUnitSuite}

final class TestSpot extends JUnitSuite with ShouldMatchersForJUnit {

  @Test def spot_WithCoordinates_ShouldBindXCoordinate() {
    Spot(0, 0).x.should(be(0))
    Spot(7, 0).x.should(be(7))
  }

  @Test def spot_WithCoordinates_ShouldBindYCoordinate() {
    Spot(0, 7).y.should(be(7))
    Spot(7, 11).y.should(be(11))
  }

  @Test def spot_WithDelta_ShouldBindCoordinates() {
    (Spot(0, 7) + Delta(1, 1)).x.should(be(1))
    (Spot(0, 7) + Delta(1, 1)).y.should(be(8))
  }

  @Test def spot_InSmallWorld_ShouldNotBeInRange() {
    Spot(7, 7).inBoundaries(5, 5).should(be(false))
  }

  @Test def spot_InBiggerWorld_ShouldBeInRange() {
    Spot(7, 7).inBoundaries(10, 10).should(be(true))
  }
}
Nothing terrible up to this point. We have added tests to check whether spots are within the boundaries of a world (or maze). This set of tests leads to the following implementation:
case class Delta(x: Int, y: Int)

case class Spot(x: Int, y: Int) {
  def inBoundaries(width: Int, height: Int): Boolean = {
    (0 <= x && x < width) && (0 <= y && y < height)
  }

  def + (delta: Delta): Spot = {
    Spot(x + delta.x, y + delta.y)
  }
}

I chose case classes in order to ease spot comparisons and lookups into tables, sets, etc. In the spirit of functional programming, a spot incremented by a delta creates a new Spot. Before digging into a (very) small explanation of the algorithm, a little more code: I already know I need a World abstraction in which to define my costs, one that may also allow me to find the neighbors of a spot during my exploration. In order to create it I defined the following tests:
import org.scalatest.junit.{ShouldMatchersForJUnit, JUnitSuite}
import org.junit.Test
import scala.Array._

final class TestWorld extends JUnitSuite with ShouldMatchersForJUnit {

  @Test def world_WithSingleCell_ShouldFindNoNeighbours() {
    World(Array.ofDim[Int](1, 1)).neighborsOf(Spot(0, 0)).should(be(List()))
  }

  @Test def origin_WithTwoCellWorld_ShouldHaveTwoNeighbours() {
    World(ofDim[Int](2, 2)).neighborsOf(Spot(0, 0)).size.should(be(2))
    World(ofDim[Int](2, 2)).neighborsOf(Spot(0, 0)).contains(Spot(0, 1)).should(be(true))
    World(ofDim[Int](2, 2)).neighborsOf(Spot(0, 0)).contains(Spot(1, 0)).should(be(true))
  }

  @Test def leftBottomCornerSpot_WithFourCellWorld_ShouldHaveTwoNeighbours() {
    World(ofDim[Int](2, 2)).neighborsOf(Spot(0, 1)).size.should(be(2))
    World(ofDim[Int](2, 2)).neighborsOf(Spot(0, 1)).contains(Spot(0, 0)).should(be(true))
    World(ofDim[Int](2, 2)).neighborsOf(Spot(0, 1)).contains(Spot(1, 1)).should(be(true))
  }

  @Test def rightBottomCornerSpot_WithFourCellWorld_ShouldHaveTwoNeighbours() {
    World(ofDim[Int](2, 2)).neighborsOf(Spot(1, 1)).size.should(be(2))
    World(ofDim[Int](2, 2)).neighborsOf(Spot(1, 1)).contains(Spot(1, 0)).should(be(true))
    World(ofDim[Int](2, 2)).neighborsOf(Spot(1, 1)).contains(Spot(0, 1)).should(be(true))
  }

  @Test def rightTopCornerSpot_WithFourCellWorld_ShouldHaveTwoNeighbours() {
    World(ofDim[Int](2, 2)).neighborsOf(Spot(1, 0)).size.should(be(2))
    World(ofDim[Int](2, 2)).neighborsOf(Spot(1, 0)).contains(Spot(0, 0)).should(be(true))
    World(ofDim[Int](2, 2)).neighborsOf(Spot(1, 0)).contains(Spot(1, 1)).should(be(true))
  }

  @Test def cost_WithWeightWorld_ShouldBeProperlyFound() {
    World(Array(Array(1, 2), Array(3, 4))).costAt(Spot(0, 0)).should(be(1))
    World(Array(Array(1, 2), Array(3, 4))).costAt(Spot(1, 0)).should(be(2))
    World(Array(Array(1, 2), Array(3, 4))).costAt(Spot(0, 1)).should(be(3))
    World(Array(Array(1, 2), Array(3, 4))).costAt(Spot(1, 1)).should(be(4))
  }
}
Basically I set up small worlds, asserting on the cost of positions and finding the neighbors of very specific corner points. Standard unit tests for a matrix exploration.
This is where I discovered the useful Array.ofDim() Scala factory method. A wisely chosen import helps write self-explanatory tests like rightTopCornerSpot_WithFourCellWorld_ShouldHaveTwoNeighbours.
The matching implementation of a world is :
final class World(val definition: Array[Array[Int]]) {
  val selector = List(Delta(0, -1), Delta(0, 1), Delta(1, 0), Delta(-1, 0))

  private val width = definition(0).length
  private val height = definition.length

  def neighborsOf(spot: Spot): List[Spot] = {
    selector.map(spot + _).filter(_.inBoundaries(width, height))
  }

  def costAt(spot: Spot): Int = {
    definition(spot.y)(spot.x)
  }
}

object World {
  def apply(definition: Array[Array[Int]]) = {
    new World(definition)
  }
}

A World abstraction accepts a topology definition and exposes two query methods, neighborsOf and costAt, respectively providing the possible Spot neighbors of some input spot and allowing client code to get the cost (or weight) at a certain position indexed by a Spot. We assume we can step only to horizontal and vertical positions, so the selector used to identify the neighbors is composed of only four deltas:
val selector = List(Delta(0, -1), Delta(0, 1), Delta(1, 0), Delta(-1, 0))
The assumption has been made that all the arrays within the main definition array have the same size. The class is immutable. So far so good...
Now we have to find our way in the world abstraction. Graph search rests on a set of lemmas found by smart people like Dijkstra, and these lemmas are nice guides when it comes to searching for shortest paths. These lemmas can be taken for granted, as one can intuitively "feel" their correctness. I invite you to check an algorithms book to become familiar with them.
I got one of these "ah-ah" moments when I understood the following (oversimplified here):

Given a weighted directed graph G with a weight function, let p be a shortest path; then any sub-path extracted from p will itself be a shortest path.

So if at some point in our search we have found a shortest path, then we are sure all its sub-paths will be shortest paths too.
Looking for a shortest path will be a progressive move starting in the vicinity of the starting node. Then, expanding from neighbor to neighbor, we will reevaluate when necessary our estimates of the shortest path measures.

The technique used is called relaxation. Let's start by saying that the shortest paths to all the Spots except the starting one have an infinite cost. If during our progression we can improve these shortest path estimates, we relax these infinite values. Let's say that d[v] is the shortest path to the spot v from the spot s (the start) and that at some time we can estimate d[u], the shortest path from s to u, a neighbor of v. Given w(u,v), the weight (or cost) of the edge (or step) between u and v, it is possible to reduce the value of d[v] this way:
if (d[v] > d[u] + w(u,v)) then
    d[v] = d[u] + w(u,v)
    u becomes a predecessor of v
We have the beginning of a solution. Dijkstra helps us: he proposed a greedy algorithm that always provides a solution. The idea is to enrich a set of already identified shortest paths (S) while progressively emptying a priority queue (Q) of paths to be explored. The paths to be explored are sorted by their (corrected or not) weight (or cost) values. The paths to be explored first in Q are the paths with the lowest weight. The correction of the weight is progressively achieved by relaxation. Dijkstra's algorithm is roughly described by the following pseudo code:
Set all path weights to infinite
Path weight at starting Spot is 0
S is empty
Let Q be all Spots
while Q is not empty do
    let u = peek-first-in(Q)
    let S = S U {u}
    for all neighbors v of u
        relax v with u path estimation and w(u,v)
At the very beginning all the hypothetical paths in Q are supposed to be infinite except the starting point. Each neighbor path cost estimate will be progressively corrected.
I have adapted the starting conditions. The Q of to-do paths will start containing the starting path only (no point in introducing paths at all points with infinite costs). The to-do paths in Q will be corrected, and new corrected paths will be added if they are not in the queue. So at each step we need weighted Path abstractions, each matching a path to a Spot, holding a weight estimate and a list of predecessors of the Spot. These are the tests qualifying the Path abstraction:
import org.scalatest.junit.{ShouldMatchersForJUnit, JUnitSuite}
import org.junit.Test

final class TestPath extends JUnitSuite with ShouldMatchersForJUnit {

  @Test def weight_InPath_ShouldBeBound() {
    Path(3, Spot(4, 5), List()).weight.should(be(3))
  }

  @Test def spot_InPath_ShouldBeBound() {
    Path(3, Spot(4, 5), List()).spot.should(be(Spot(4, 5)))
  }

  @Test def walk_InPath_ShouldBeBound() {
    Path(3, Spot(4, 5), List(Spot(4, 5), Spot(4, 4), Spot(4, 3)))
      .walk.should(be(List(Spot(4, 5), Spot(4, 4), Spot(4, 3))))
  }

  @Test def relax_WithWorsePredecessor_ShouldNotRelax() {
    val pathToRelax = Path(3, Spot(3, 4), List(Spot(3, 4), Spot(3, 5)))
    pathToRelax.relax(Path(2, Spot(2, 4), List(Spot(2, 4), Spot(1, 5))), 7)
      .should(be(pathToRelax))
  }

  @Test def relax_WithBestPredecessor_ShouldNotRelax() {
    val pathToRelax = Path(5, Spot(3, 4), List(Spot(3, 5)))
    pathToRelax.relax(Path(2, Spot(2, 4), List(Spot(2, 4), Spot(2, 3))), 1)
      .should(be(Path(3, Spot(3, 4), List(Spot(3, 4), Spot(2, 4), Spot(2, 3)))))
  }
}
One should note that the held walk is reversed and always contains the current spot to which the path is attached. A natural implementation becomes:
case class Path(weight: Int, spot: Spot, walk: List[Spot]) {

  def chainedTo(previous: Path, withWeight: Int): Path = {
    Path(withWeight + previous.weight, spot, spot :: previous.walk)
  }

  def relax(previous: Path, withWeight: Int): Path = {
    if (weight > previous.weight + withWeight) {
      chainedTo(previous, withWeight)
    } else {
      this
    }
  }
}
I have the bricks. What do I want to test? Well, paths in some mazes. So on to the tests:
import org.scalatest.junit.{ShouldMatchersForJUnit, JUnitSuite}
import com.promindis.graphs.PathFinder._
import org.junit.Test

final class TestPathFinder extends JUnitSuite with ShouldMatchersForJUnit {

  def pathIn(world: World, from: Spot, to: Spot): Path = {
    PathFinder(world).shortestPath(from, to)
  }

  def found(path: Path) = {
    path.walk.reverse
  }

  @Test def path_InFourthCellWorld_ShouldBeTwoSteps() {
    val world = World(Array(
      Array(1, 1),
      Array(99, 1)
    ))
    found(pathIn(world, from(0, 0), to(1, 1)))
      .should(be(List(Spot(0, 0), Spot(1, 0), Spot(1, 1))))
  }

  @Test def path_InDirectedWorld_ShouldMatchUniquePath() {
    val world = World(Array(
      Array(1, 1, 1, 1, 1),
      Array(99, 99, 99, 99, 1),
      Array(1, 1, 1, 1, 1),
      Array(1, 99, 99, 99, 99),
      Array(1, 1, 1, 1, 1)
    ))
    // The walls of weight 99 force a single snake-shaped shortest path.
    found(pathIn(world, from(0, 0), to(4, 4))).should(be(
      List(
        Spot(0, 0), Spot(1, 0), Spot(2, 0), Spot(3, 0), Spot(4, 0),
        Spot(4, 1), Spot(4, 2),
        Spot(3, 2), Spot(2, 2), Spot(1, 2), Spot(0, 2),
        Spot(0, 3), Spot(0, 4),
        Spot(1, 4), Spot(2, 4), Spot(3, 4), Spot(4, 4))
    ))
  }

  @Test def path_InUndirectedCellWorld_ShouldHaveCorrectSize() {
    val world = World(Array(
      Array(1, 1, 1, 2, 1),
      Array(1, 1, 1, 99, 1),
      Array(1, 1, 1, 99, 1),
      Array(1, 1, 1, 99, 1),
      Array(1, 1, 1, 1, 1)
    ))
    found(pathIn(world, from(0, 0), to(4, 4))).size.should(be(9))
  }

  @Test def path_InOtherDirectedCellWorld_ShouldMatch() {
    val world = World(Array(
      Array(1, 1, 1, 2, 1),
      Array(1, 1, 1, 99, 1),
      Array(1, 1, 1, 99, 1),
      Array(1, 1, 1, 99, 1),
      Array(1, 1, 1, 99, 1)
    ))
    found(pathIn(world, from(0, 0), to(4, 4))).should(be(
      List(
        Spot(0, 0), Spot(1, 0), Spot(2, 0), Spot(3, 0), Spot(4, 0),
        Spot(4, 1), Spot(4, 2), Spot(4, 3), Spot(4, 4))
    ))
  }
}
We start with a small world made of four Spots where the shortest path is unique. All the tests except path_InUndirectedCellWorld_ShouldHaveCorrectSize have clearly identified or forced paths. That one must find a path of the correct size even if there are multiple solutions. Here is the matching implementation:
final class PathFinder(val world: World) {

  def excludeFrom(neighbors: List[Spot], spotsInSet: List[Spot]): List[Spot] = {
    neighbors.filter(neighbor => !spotsInSet.contains(neighbor))
  }

  def not(done: Map[Spot, Path]): Spot => Boolean = {
    neighbor: Spot => !done.contains(neighbor)
  }

  def accessibleNeighborsOf(current: Path, criteria: Spot => Boolean): List[Spot] = {
    world.neighborsOf(current.spot).filter(criteria)
  }

  def relax(pathStillToDo: List[Path], currentPath: Path): List[Path] = {
    pathStillToDo.map(path => path.relax(currentPath, world.costAt(path.spot)))
  }

  def spotsIn(pathStillToDo: List[Path]): List[Spot] = {
    pathStillToDo.map(path => path.spot)
  }

  def newPathFrom(current: Path): Spot => Path = {
    neighbor: Spot =>
      val cost = world.costAt(neighbor)
      Path(cost + current.weight, neighbor, neighbor :: current.walk)
  }

  // The relaxed and find methods are detailed fragment by fragment below.

  def shortestPath(from: Spot, to: Spot): Path = {
    find(List(Path(world.costAt(from), from, List(from))), Map(), to)(to)
  }
}

object PathFinder {
  def to(x: Int, y: Int) = Spot(x, y)

  def from(x: Int, y: Int): Spot = Spot(x, y)

  def apply(world: World) = {
    new PathFinder(world)
  }
}
The most important methods are relax and the recursive find, the others being tool methods. I used the companion object
object PathFinder
to define fluent factory methods like from and to. The entry point is the shortestPath method
def shortestPath(from: Spot, to: Spot): Path = {
  find(List(Path(world.costAt(from), from, List(from))), Map(), to)(to)
}
There I start my exploration, creating my list of paths to be improved or settled with the from point. Naturally, at this very moment I pass an empty Map of already done paths. The destination to will be used as a break point to terminate the process. In the find method lies the logic driving my exploration choices: did I find the destination? Is my list empty?
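The body of find did not survive in this write-up; under the description above, a minimal sketch of it (my reconstruction, not necessarily the original; @tailrec assumes import scala.annotation.tailrec, and the relaxed helper is detailed below) could be:

@tailrec
private def find(awaiting: List[Path], alreadyDone: Map[Spot, Path], destination: Spot): Map[Spot, Path] = {
  awaiting match {
    // Keep exploring while the destination has not been settled yet:
    // settle the cheapest candidate and recurse on the relaxed to-do list.
    case current :: remaining if !alreadyDone.contains(destination) =>
      find(relaxed(current, remaining, alreadyDone), alreadyDone + (current.spot -> current), destination)
    // Breaking condition: the to-do list is empty or the destination is settled.
    case _ => alreadyDone
  }
}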
This decision scenario is perfectly supported by a Scala pattern match, where the breaking condition is handled by the second case. I kept the recursion at the end in order to benefit from tail call optimization. The recursion process relies on the relaxed and relax method implementations, shown in fragments below.
This is where the algorithm is implemented.
At that moment I am working with a specific Path, located at some Spot and already weighted. I simply look up the valid spot neighbors. A valid neighbor is a neighbor Spot not already stored in the alreadyDone Map.
In order to proceed using a fluent language I defined the following tool methods
def not(done: Map[Spot, Path]): Spot => Boolean = {
  neighbor: Spot => !done.contains(neighbor)
}

def accessibleNeighborsOf(current: Path, criteria: Spot => Boolean): List[Spot] = {
  world.neighborsOf(current.spot).filter(criteria)
}
The not method is a first-class function closing over the alreadyDone Map (very Lisp-ish, I like that :)). Then I process the paths awaiting analysis:
def relaxed(current: Path, awaiting: List[Path], alreadyDone: Map[Spot, Path]) = {
  //..........
  val awaitingPaths = awaiting.filter(path => neighbors.contains(path.spot))
  val refreshedPaths = relax(awaitingPaths, current)
  //................
}

def relax(pathStillToDo: List[Path], currentPath: Path): List[Path] = {
  pathStillToDo.map(path => path.relax(currentPath, world.costAt(path.spot)))
}
If some of the neighbor Paths are not already awaiting, then I prepare a list of new paths to be processed:
def relaxed(current: Path, awaiting: List[Path], alreadyDone: Map[Spot, Path]) = {
  //..........
  val newPathTodo: List[Path] =
    excludeFrom(neighbors, spotsIn(awaitingPaths)).map{newPathFrom(current)}
  //................
}

def excludeFrom(neighbors: List[Spot], spotsInSet: List[Spot]): List[Spot] = {
  neighbors.filter(neighbor => !spotsInSet.contains(neighbor))
}

def spotsIn(pathStillToDo: List[Path]): List[Spot] = {
  pathStillToDo.map(path => path.spot)
}

def newPathFrom(current: Path): Spot => Path = {
  neighbor: Spot =>
    val cost = world.costAt(neighbor)
    Path(cost + current.weight, neighbor, neighbor :: current.walk)
}

The whole purpose of the tool methods is to apply comprehensions or to create reusable first-class functions so that the algorithm in the relaxed method reads fluently. Finally, the list of refreshed paths to be handled and the list of new paths to be added are built up, and the whole resulting list is assembled and sorted again:
(newPathTodo ::: (refreshedPaths ::: (awaiting -- awaitingPaths))) .sortWith(_.weight < _.weight)
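Assembling the fragments, the whole relaxed method could look like the following sketch (the elided parts are my reconstruction, so the exact wiring is an assumption):

def relaxed(current: Path, awaiting: List[Path], alreadyDone: Map[Spot, Path]): List[Path] = {
  // Neighbors of the current spot that have not been settled yet.
  val neighbors = accessibleNeighborsOf(current, not(alreadyDone))
  // Awaiting paths attached to one of those neighbors get relaxed against the current path.
  val awaitingPaths = awaiting.filter(path => neighbors.contains(path.spot))
  val refreshedPaths = relax(awaitingPaths, current)
  // Neighbors without a queued path yet become new paths chained to the current one.
  val newPathTodo: List[Path] =
    excludeFrom(neighbors, spotsIn(awaitingPaths)).map(newPathFrom(current))
  // Rebuild the to-do list, cheapest candidate first.
  (newPathTodo ::: (refreshedPaths ::: (awaiting -- awaitingPaths)))
    .sortWith(_.weight < _.weight)
}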
Of course there are flaws in the implementation, mainly due to the use of lists and the late sorting after each relax operation. The program is meant to be improved and advice will be welcome.
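One direction for improvement, for instance, would be to replace the sorted list of awaiting paths with a priority queue ordered by weight. A rough sketch of that idea (my own code, not the author's, using the lazy-deletion variant of Dijkstra) could be:

import scala.collection.mutable

object PriorityPathFinder {
  // PriorityQueue is a max-heap, so reverse the ordering to pop the cheapest path first.
  private val byWeight = Ordering.by[Path, Int](_.weight).reverse

  def shortestPath(world: World, from: Spot, to: Spot): Option[Path] = {
    val queue = mutable.PriorityQueue(Path(world.costAt(from), from, List(from)))(byWeight)
    var done = Map[Spot, Path]()
    while (queue.nonEmpty) {
      val current = queue.dequeue()
      if (current.spot == to) return Some(current)
      if (!done.contains(current.spot)) {
        done += (current.spot -> current)
        // Enqueue every unsettled neighbor; stale duplicates are skipped when dequeued.
        world.neighborsOf(current.spot)
          .filterNot(done.contains)
          .foreach(n => queue.enqueue(Path(current.weight + world.costAt(n), n, n :: current.walk)))
      }
    }
    None
  }
}

This avoids re-sorting the whole to-do list after every relaxation, at the price of letting duplicate entries live in the queue.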
As I have less time for Scala because of a new client (remember? J2EE, blah blah blah), I leave you.

Be seeing you!
Update 2011/01/02 – if you read the comments below, you’ll note that Al mentions that the issue I am talking about here, where Init() isn’t called when the Sorting Order / Inquiry Type changes, doesn’t occur when you publish the code – it appears to happen only when you are testing in the Script Tool! :-[
So, Smart Office rocks! The Jscript functionality is awesome. But…
You add a control to a B panel, and then change the “Sorting Order” and your control disappears – somewhat irritating.
This is a challenge that I recently came across and still don’t have a nice solution for; I have yet to find an event that I can subscribe to that allows me to redraw my controls.
Lawson Smart Office does some nifty stuff: when you change the Sorting Order, the controls that sit below the Sort Order ComboBox are removed and destroyed, and then the new controls associated with the new Sorting Order are added – any events that you had tied to those controls are now invalid.
However, the Init() method isn’t called again so your controls don’t get added.
I have a little script which illustrates this with dialog boxes. The script was used against OIS300. We have subscribed to the Sorting Order ComboBox SelectChanged event and the Unloaded event of the ListView (I only did one control to illustrate what was happening).
Getting to the Sort Order ComboBox is a little bit of a challenge as well; we can’t find it with ScriptUtil.FindChild(), so we have to do it the hard way: walk up the tree and then walk down the visual tree until we find WWQTTP – the Sort Order ComboBox.
In this script the following will happen
- Change Sorting Order
- Event: ListView unloaded
- Event: ComboBox Selection Changed -1
- Event: ComboBox Selection Changed -1
- Event: ComboBox Selection Changed new selection
- Event: ComboBox Selection Changed new selection
- Controls are re-added (no event added for this)
The ListView event will only fire once, because we subscribed to the ListView Unloaded event and that ListView no longer exists.
import System; import System.Windows; import System.Windows.Controls; import System.Windows.Media; import System.Windows.Media.Media3D; import MForms; package MForms.JScript { class Sample { var gController; var gContent; var gDebug; var grdLVParent : Grid; var glvListView : ListView; var gbtnPickButton : Button; var gcmbSortOption : ComboBox; public function Init(element: Object, args: Object, controller : Object, debug : Object) { gController = controller; gDebug = debug; var content : Object = controller.RenderEngine.Content; gContent = content; var lcListControl : ListControl = controller.RenderEngine.ListControl; glvListView = lcListControl.ListView; grdLVParent = glvListView.Parent; grdLVParent.add_Unloaded(OnGridUnloaded); glvListView.add_Unloaded(OnListViewUnloaded); goUp(); if(null != gcmbSortOption) { gcmbSortOption.add_SelectionChanged(onComboBoxSelectionChanged) } } public function OnListViewUnloaded(sender : Object, e : RoutedEventArgs) { MessageBox.Show("ListView Unloaded"); } public function OnGridUnloaded(sender : Object, e : RoutedEventArgs) { MessageBox.Show("Grid Unloaded"); } public function onComboBoxSelectionChanged(sender : Object, e : SelectionChangedEventArgs) { MessageBox.Show("ComboBox Selection Changed" + gcmbSortOption.SelectedIndex); } private function findComboBox,"WWQTTP")) { gcmbSortOption = current; break; } // does the current object have any children? if(VisualTreeHelper.GetChildrenCount(current) >= 1) { // recurse down findComboBox(current, depth+1, debug); } } } } } } } } } catch(ex) { debug.WriteLine("!-! Exception: " + ex.Message + " " + ex.StackTrace); } } private function goUp() { var parent : Object = gContent; var lastParent : Object = gContent; //) { if(String.Compare(strName,"ContainerPanel") == 0) { for(var i : int = 0; i < lastParent.Children.Count; i++) { var con = findComboBox(lastParent); if(null != con) { MessageBox.Show("xxFound!"); } break; } break; } } } } } }
Hi Scott.
That does sound strange. I’ve a script attached to PMS100/B under 7.1 Smart Client. I can change the Sorting Order and the controls I’ve added (a button and a text box) remain.
Certainly while you’re developing the jscript Init() will not be called again, but once it is published and attached to the screen, as far as I can tell Init() should be called again when you change the sorting order.
Al.
Hi Al,
well, I’ll be, once published – even publishing locally using the registry entry – Init() does get called when the Sorting Order / Inquiry type changes. One of the idiosyncrasies of Smart [Client | Office] I guess.
I’ll have to keep that in mind when testing.
Thanks for that Al, guess I better update the CASE that I have open 🙂
Cheers,
Scott
XML documents have structure but no format. Extensible Stylesheet Language (XSL) adds formatting to XML documents.
XSL provides a way of displaying XML semantics. It can map XML elements into other formatting languages such as HTML.
The W3C is developing the XSL specification as part of its Style Sheets Activity. XSL has document manipulation capabilities beyond styling. It is a stylesheet language for XML.
The July 1999 W3C XSL specification was split into two separate documents.
The formatting objects used in XSL are based on prior work on Cascading Style Sheets (CSS) and the Document Style Semantics & Specification Language (DSSSL). XSL is designed to be easier to use than DSSSL.
Capabilities provided by XSL as defined in the proposal enable the following functionality.
An implementation is not mandated to provide these as separate processes. Furthermore, implementations are free to process the source document in any way that produces the same result as if it were processed using the conceptual XSL processing model.
A namespace is a unique identifier or name. This is needed because XML documents can be authored separately, and elements from different vocabularies could otherwise collide. XSL stylesheets must include the following syntax:

<xsl:stylesheet>

xmlns:xsl="http://www.w3.org/1999/XSL/Transform" for an XSL namespace indicator, and

xmlns:fo="http://www.w3.org/1999/XSL/Format" for a formatting object namespace indicator.
The W3C Working Group on XSL has just released a document describing the requirements for the XSLT 1.1 specification. The primary goal of the XSLT 1.1 specification is to improve stylesheet portability. The new draft is available on the W3C web site.
This goal will be achieved by standardizing the mechanism for implementing extension functions, and by including in the core XSLT specification two of the built-in extensions that many existing vendors' XSLT processors have added due to user demand.
A secondary goal of the XSLT 1.1 specification is to support the new XML base specification.
Examples on using XSL can be found throughout this manual. In particular, refer to the following chapters in Oracle9i Case Studies - XML Applications:
What is the syntax to compare not an element but the value of the element? So far, the documentation I have read tests for tags but not values within the tags. Here is a portion of my XSL document:
<xsl:template <xsl:for-each <xsl:value-of <xsl:value-of </xsl:for-each> <xsl:template>
I want to construct an
IF statement that will display the information of employees with salaries greater than 5000 in red. How do I insert the value of
sal in the
IF statement?
Here is the
IF statement:
<xsl:if ......... </xsl:if>
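The select and test attributes were stripped from the listings above, so the original answer cannot be restored verbatim. The usual XSLT way to test an element's value, though, is a test expression on xsl:if; assuming the salary element is called SAL and the name element ENAME (both assumptions), it would look like:

<xsl:template match="ROW">
  <xsl:if test="SAL &gt; 5000">
    <!-- Employees earning more than 5000 are rendered in red -->
    <font color="red"><xsl:value-of select="ENAME"/></font>
  </xsl:if>
</xsl:template>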
We are merging an XML document with its XSL stylesheet. However the child attributes are not being returned when we use syntax of type:
<xsl:value-of
in the XSL document. Why not? This seems to work fine in other XML parsers. You would need the syntax:
Foo/Bar/@SomeAttr
to select one attribute and...
Foo/Bar/@*
to select all the attributes of
<Bar>
I am trying to render a simple XML document to an HTML form, using the following XML and XSLT. The transformation fails with the message "Unexpected EOF" using the
XSLSample.java provided with the XML parser for Java V2. When I remove the
<td></td> from the transformation (which contains the XPath expression of the type
{ELEMENT}, the transformation is fine.
Here is the XML:
<ROWSET> <ROW> <ELEM0>Al</ELEM0> <ELEM1>Gore</ELEM1> <ELEM2></ELEM2> <ELEM3></ELEM3> <ELEM4></ELEM4> </ROW> .....
Here is the XSLT:
<xsl:stylesheet <xsl:template <html> <head> <title>Value Upload</title> </head> <body bgcolor="#FFFFFF"> <form method="post" action=""> <xsl:for-each <table border="1" cellspacing="0" cellpadding="0"> <xsl:for-each <tr> <td><input type="text" name="elem0" value="{ELEM0}" size="10" maxlength="20"></td> <td><input type="text" name="elem1" value="{ELEM1}" size="10" maxlength="20"></td> ... </xsl:for-each> </form> </body> </html> </xsl:template> </xsl:stylesheet>
You need to put a slash (/) for the input element, as follows:
<td> <input xxxx /> </td>
Is there a syntax error in the following code?
Here is
djia.xml:
<?xml version="1.0" encoding="Shift_JIS"?> <?xml-stylesheet type="text/xsl" href="djia.xsl"?> <djia> <company>ALCOA</company> <company>ExxonMobil</company> <company>McDonalds</company> <company>American Express</company> </djia>
Here is
djia.xsl:
<?xml version="1.0" encoding="Shift_JIS"?> <xsl:stylesheet xmlns: <xsl:output <xsl:template <page> <xsl:apply-templates/> </page> </xsl:template> <xsl:template <xsl:value-of <xsl:value-of: <xsl:if last one! </xsl:if> </xsl:template> </xsl:stylesheet>
yields the following:
<?xml version="1.0" encoding="Shift_JIS" ?> <page>ALCOA2: ExxonMobil4: McDonalds6: American Express8: last one!</page>
Why the resulting numbers are multiplied by 2?
The answer is whitespace. When your
/djia template does
<xsl:apply-templates/> it selects all child nodes of
<djia>. Since your
djia.xml is nicely indented, that means that child nodes of
<djia> are:
So as the XSLT processor is processing this current node list, the
position() function is the position in the current node list, which are 2, 4, 6, 8 for the
<company> element.
You should be able to fix the problem by adding a top level:
<xsl:strip-space elements="*"/>
However, a bug in XDK for Java currently prevents this from working correctly. One workaround is to use:
<xsl:apply-templates select="company"/>
instead of only:
<xsl:apply-templates/>
I want my XSLT to output
<mytag null="yes"/> when my corresponding source XML is
<mytag /> or
<mytag NULL="YES"/>. How do I specify that within my XSLT?
Use the following syntax:
<xsl:template <!-- If there are no child nodes --> <xsl:if <mytag null="yes"/> </xsl:if> </xsl:template>
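The match and test attributes were stripped from that answer; a plausible reconstruction (the attribute values are assumptions) is:

<xsl:template match="mytag">
  <!-- If there are no child nodes, or the source already carries NULL="YES" -->
  <xsl:if test="not(node()) or @NULL='YES'">
    <mytag null="yes"/>
  </xsl:if>
</xsl:template>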
I need to use XSLT to change my XML code from:
<REF_STATUS> ... </REF_STATUS>
to:
<REF index="STATUS"> ... </REF>
and similar code for
REF_VATCODE and
REF_USFLG. Here is the first attempt I wrote, which works:
<!-- fix REF_STATUS nodes --> <xsl:template <xsl:element <xsl:attributeSTATUS</xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template> <!-- fix REF_USFLG nodes --> <xsl:template <xsl:element <xsl:attributeUSFLG</xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template> <!-- fix REF_VATCODE nodes --> <xsl:template <xsl:element <xsl:attributeVATCODE</xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template>
There are three tag names all beginning with
REF_, that are changed into the
REF tagname with and index attribute equal to the remainder of the original tag name. I'd like to make one rule which matches all of these and does the correct transformation. Here is one attempt:
<xsl:template <xsl:element <xsl:attribute <xsl:value-of </xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template>
Unfortunately, I get this error message:
Error occurred while processing elName.xsl: XSL-1013: Error in expression: 'starts-with(local-name(),'REF_')'.
What is wrong with the above expression?
The following works for me:
Note the
match="starts-with(..)" is illegal because it is not a valid match pattern. You will need:
match="*[starts-with(local-name(.),'REF_')]"
as shown below:
<xsl:stylesheet <!-- Identity Transform --> <xsl:template <!-- Copy the current node --> <xsl:copy> <!-- Including any attributes it has and any child nodes --> <xsl:apply-templates </xsl:copy> </xsl:template> <xsl:template <REF index="{substring-after(local-name(.),'REF_')}"> <xsl:apply-templates/> </REF> </xsl:template> </xsl:stylesheet>
This transforms a document like:
<foo> <bar> <REF_STATUS> <baz/> </REF_STATUS> <zoo> <REF_USFLG> <boo/> </REF_USFLG> </zoo> </bar> </foo>
into the result:
<foo> <bar> <REF index="STATUS"> <baz/> </REF> <zoo> <REF index="USFLG"> <boo/> </REF> </zoo> </bar> </foo>
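Because the attribute values were stripped from the stylesheet listing above, here is a self-contained reconstruction of it; the identity-transform details are assumptions, but the key match pattern is the one quoted earlier:

<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Identity transform: copy nodes and attributes through unchanged -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

  <!-- Any element whose name starts with REF_ becomes <REF index="..."> -->
  <xsl:template match="*[starts-with(local-name(.),'REF_')]">
    <REF index="{substring-after(local-name(.),'REF_')}">
      <xsl:apply-templates/>
    </REF>
  </xsl:template>

</xsl:stylesheet>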
The XML we receive is wrapped with extra code using CDATA notation. My XSL is not picking up the elements in the CDATA section. What do I need to do?
The contents of a <![CDATA[ ]]> section are not elements and attributes available for querying with XPath. They are just literal characters (angle brackets, names, and quotes) that look like elements and attributes, but are not in the infoset tree as separate nodes.
Inside a CDATA there is just a single text node. XSL will not pick up elements in the CDATA. The best you can do is:
In one of your examples, I found an XSL file,
toolbar.xsl, that does what appears to be converting strings to a nodeset by doing the following XSL:
<xsl:variable <xsl:stylesheet <xsl:template <xsl:call-template <xsl:with-param <toolbar> <button name="xxx" url=""/> </toolbar> </xsl:with-param> </xsl:call-template> </xsl:template> ...
Is my observation correct? I have extracted the CDATA section into a variable, but I think I need to convert it to a nodeset. When I tried it using
AsyncTransformSample.java in TransView bean, I get the error:
XSL-1045: Extension function error: Class not found 'oracle.xml.parser.v2.Extensions'
Is this part of the standard packages or do I need to import it. The import statement:
import oracle.xml.parser.v2.*;
is already in AsyncTransformSample.
No. The
ora:node-set() converts result tree fragments to nodesets, not strings to nodesets. To convert strings to nodesets you have to parse the string. XSLT does not have a built-in
parse-string() function, so we can build one as a Java extension function. See Chapter 16 in Developing Oracle XML Applications by Steve Muench (O'Reilly) for details on developing and debugging Java XSLT extension functions.
Here is an example Java class that parses a string and returns a nodeset containing the root node of the parsed XML document in a string. If there is an error during parsing, it returns an empty nodeset.
import org.w3c.dom.*;
import org.xml.sax.SAXException;
import oracle.xml.parser.v2.*;
import java.io.StringReader;

public class Util {
  public static NodeList parse(String s) {
    // Create a new parser
    DOMParser d = new DOMParser();
    try {
      // Parse the string into an in-memory DOM tree
      d.parse( new StringReader(s) );
      // Return a node list containing the root node
      return ((XMLDocument)d.getDocument()).selectNodes("/");
    }
    catch (Exception e) {
      // Return an empty nodelist in case of an error.
      return (new XMLDocument()).getChildNodes();
    }
  }
}
Here is a sample
message.xml file that simulates the scenario you are in with some XML in the body of an XML document enclosed in a CDATA section.
<message> <from>Steve</from> <to>Albee</to> <body><![CDATA[ <order id="101"> <item id="12" qty="10"/> <item id="13" qty="3"/> </order> ]]></body> </message>
Here is a sample stylesheet that processes the
<message> document, parses and captures the subdocument (that is encoded as a CDATA text node in the
<body>) in an XSL variable, and then uses
<xsl:for-each> to select information out of the
$body variable containing the now-parsed message body. Here we just print out the identifiers of the
<order>, but this will give you a general idea.
<xsl:stylesheet <!-- | Above, we've associated the "util" namespace prefix | with the appropriate namespace URI that maps to | the "Util" class. The Util.java class is not in any | package, otherwise the URI would have looked like | +--> <xsl:template <!-- | Use the parse() function in the util namespace | to parse the string value of the <body> child | element of the current <message> element, and | return the root node of the document +--> <xsl:variable <xsl:text>Items Ordered</xsl:text><xsl:text>
</xsl:text> <xsl:for-each <xsl:value-of<xsl:text>
</xsl:text> </xsl:for-each> </xsl:template> </xsl:stylesheet>
I have a question about XSL. My XML document is similar to the following:
<ROW num="1"> <TITLE>New Java Classes</TITLE> <URL>/products/intermedia/</URL> <DESCRIPTION><a href=\"/products/intermedia/\">Java classes for Servlets and JSPs</a>are available. </DESCRIPTION> </ROW>
When I use XSL to display the XML document in HTML, the description is not displaying as a link even though I am specifying it as
"<a href=\"/products/intermedia/\">" in XML.
My XSL file is:
<xsl:template> <P><FONT face="arial" size="4"><B> <xsl:value-of </B> </FONT><BR></BR><FONT size="2"> <xsl:value-of</FONT> </P> </xsl:template>
You can simply build the
<a> tag in your XSL transform. Do something like this:
<?xml version="1.0"?> <xsl:stylesheet xmlns: <xsl:output <xsl:template <xsl:for-each <table border="1"> <th>Category</th> <th>ID</th> <th>Title</th> <th>Thumbnail</th> <xsl:for-each <tr> <td><xsl:value-of </td> <td><xsl:value-of </td> <td><a href="Present.jsp?page=PRES_VIEW_SINGLE&id={id}"><xsl:value-of </a></td> <td><img src="/servlets/thumb?presentation={id}&slide=0" /></td> </tr> </xsl:for-each> </table> </xsl:for-each> </xsl:template> </xsl:stylesheet>
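Applied to the <ROW> document from the question, the same idea (building the link in the stylesheet rather than escaping HTML inside DESCRIPTION) might look like this; the element names come from the question, the surrounding markup is an assumption:

<xsl:template match="ROW">
  <P>
    <!-- Curly braces evaluate the URL element inside the literal href attribute -->
    <FONT face="arial" size="4"><B>
      <a href="{URL}"><xsl:value-of select="TITLE"/></a>
    </B></FONT>
    <BR/>
    <FONT size="2"><xsl:value-of select="DESCRIPTION"/></FONT>
  </P>
</xsl:template>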
I am using
oracle.xml.async.XSLTransformer included in XDK for Java v2 to perform an XSL transformation on an XML document. I need a WML output. My stylesheet contains the following code:
<?xml version="1.0" encoding="ISO-8859-1"?> <xsl:stylesheet <xsl:output <xsl:output
...
When I check the transformation using a servlet, I get the following error in my WAP emulator:
"Received HTTP status: 502 - WML Encoding Error, 1:com.sun.xml.parser/P-076 Malformed UTF-8 char
Is an XML encoding declaration missing? In fact, the WML generated is not including any XML header information. The output starts like this:
<wml> <card id="gastronomia" title="Mis direcciones de gastronomia"><p>Mis direcciones de gastronomia</p> ...
How do I get the transformer to output the XML header:
"<?xml version="1.0" encoding="ISO-8859-1"?>"
Use
oracle.xml.parser.v2.XSLProcessor. Also ensure your stylesheet has:
<xsl:output
just inside the
<xsl:stylesheet>, and outside of any
<xsl:template>
Also ensure that you're using the following API:
processXSL(stylesheet,source,printwriter)
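The attributes of the xsl:output element were lost from the listing above; for WML it would typically look something like the following (the DOCTYPE identifiers are the usual WML 1.1 ones and are an assumption about the original stylesheet):

<xsl:output method="xml"
            encoding="ISO-8859-1"
            doctype-public="-//WAPFORUM//DTD WML 1.1//EN"
            doctype-system="http://www.wapforum.org/DTD/wml_1.1.xml"
            indent="yes"/>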
My BC4J source XML file has the following line that refers to the DTD:
<!DOCTYPE ViewObject SYSTEM "jbo_03_01.dtd">
When transforming the file, this line results in an error, saying it cannot find
jbo_03_01.dtd. The DTD file is in my classpath.
There are two solutions to this.
Extract jbo_03_01.dtd from jbomt.zip and put it in the same directory as your VO files. This is complicated if you have VO files at several different directory levels.
DOMParser d = new DOMParser();

// read DTD as a resource from the classpath
InputStream is = ...getResourceAsStream("/jbo_03_01.dtd");
d.parseDTD( is );
DTD dtd = d.getDoctype();

d.setDoctype( dtd ); // set and cache the DTD to use.
// Now, subsequent calls to d.parse() will
// use the cached version of jbo_03_01.dtd
Then transform the result using XSLStylesheet and
XSLProcessor.process(style,source,printwriter).
My second question relates to namespaces. I have the following piece of code in my stylesheet:
<xsl:attribute <xsl:value-of@ipet:dataBindingObject </xsl:attribute>
At the top of my stylesheet, I have defined the
marlin namespace:
xmlns:data=""
In the resulting XML file (the
marlin UIX file), the namespace definition is repeated for each element:
<messageTextInput id="Status" name="Status" prompt="Status" required="yes"xmlns:
Try defining the data namespace prefix on the document element in your XSLT root template. If it is defined at a higher level in the result tree we may notice that and not output it on each lower level element.
JDeveloper9i has virtual View Objects (VOs) that expose the metadata of a VO, kind of like the database
X$ views. This means that you could use the normal
VO.writeXML() method against one of these virtual metadata views to perform operations like I think you are trying to do to render a data-driven output based on the structure of a given VO.
Is there a way to pass a parameter from a Java program to an XSLT stylesheet using the Oracle XSL processor? The XSLT standard states that "...XSLT does not define the mechanism by which parameters are passed to the stylesheet." So this is possible, but it is a vendor-dependent implementation. However, none of the XSL constructors in the Oracle XSL processor seems to allow for this.
We need to pass in an integer to a stylesheet and use the
xsl:position() function to extract a document fragment from an XML doc. For example:
<xsl: <xsl:if SELECT DISTINCT sp.site_datatype_id FROM ref_hm_site_pcode sp WHERE sp.' AND sp.' </xsl:if> </xsl:template>
However, instead of
position()=1, we need to substitute a parameter, such as
$1.
How can we do this?
If you have a top-level parameter declared in your stylesheet, such as:
<xsl:stylesheet ... > <!-- declare top-level $foo parameter, default value to 5 --> <xsl:param <xsl:template <xsl:if :
Then you can use the following methods on
oracle.xml.parser.v2.XSLStylesheet to control parameters:
resetParams()
setParam()
To set the parameter named
foo to the number 10, use the following:
myStylesheet.setParam("foo","10");
To set
foo to the string
ten, you need to quote it:
myStylesheet.setParam("foo","'ten'");
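Put together, a minimal Java sketch of the whole round trip might look like this (file names are placeholders and constructor details may differ slightly between XDK releases):

import java.io.PrintWriter;
import java.net.URL;
import oracle.xml.parser.v2.*;

public class RunWithParam {
  public static void main(String[] args) throws Exception {
    DOMParser parser = new DOMParser();

    // Parse the source document and the stylesheet
    parser.parse(new URL("file:source.xml"));
    XMLDocument source = parser.getDocument();
    parser.parse(new URL("file:style.xsl"));
    XMLDocument style = parser.getDocument();

    XSLStylesheet stylesheet = new XSLStylesheet(style, null);
    stylesheet.setParam("foo", "10");        // pass a number
    // stylesheet.setParam("foo", "'ten'");  // strings need an extra level of quotes

    // Apply the stylesheet and print the result
    new XSLProcessor().processXSL(stylesheet, source, new PrintWriter(System.out));
  }
}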
If I need to pass parameters to the stylesheet in a Java program, what Java class must I use?
Currently, we use:
processXSL(XSLStylesheet xsl,XMLDocument xml)
What method can I use to pass the parameters?
See:
XSLStylesheet.setParam()
XSLStylesheet.resetParams()
We used Note:104675.1, which explains how to use the XDK to retrieve XML data from Oracle and transform it to HTML.
We can generate the XML output file but when we try to generate the HTML output by using the file,
Emp.xsl, which has the following argument:
<html xmlns:
it shows error XSL-1009 ATTRIBUTE 'XSL VERSION' NOT FOUND IN 'HTML'
We also tried wrapping it in <xsl:stylesheet xmlns:xsl="..."> and it works, but the HTML output does not have any HTML tag at all, just pure data.
What should the HTML output file look like?
You must add
xsl:version="1.0" attribute to your
<html> element.
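In other words, a simplified stylesheet that uses a literal result element as its root must carry both the XSLT namespace declaration and the xsl:version attribute, for example (the element names below are just placeholders):

<html xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xsl:version="1.0">
  <body>
    <xsl:for-each select="ROWSET/ROW">
      <p><xsl:value-of select="ENAME"/></p>
    </xsl:for-each>
  </body>
</html>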
Can you tell me what XPath expression I should use to retrieve only terminal child elements (that is, elements which don't have any child elements) from a specified element. For example, I want to use an XPath expression to return only the
TABLE child elements highlighted in red below:
<TABLE> <ID>1</ID> <NAME>1</NAME> <SIZE>1</SIZE> <COLUMNS> <COLUMN> <ID>1</ID> <NAME>Customers</NAME> <COLUMN> <COLUMN> <ID>c</ID> <NAME>Categories</NAME> <COLUMN> <COLUMNS> <DATE_CREATED>01/10/2000</DATE_CREATED> </TABLE>
A possible solution is the following:
<?xml version='1.0'?> <xsl:stylesheet xmlns: <xsl:template <xsl:apply-templates </xsl:template> </xsl:stylesheet>
The expression you want is:
/TABLE/*[count(child::*) = 0]
or
/TABLE/*[not (child::*)]
You can omit the child axis, so above expression is the same as:
/TABLE/*[count(*) = 0]
or
/TABLE/*[not (*)]
We are merging an XML document with its XSL stylesheet. Child attributes are not being returned when they are using syntax of type:
<xsl:value-of
in the XSL document. This seems to work fine in other XML parsers including XML Spy and Stylus Studio. You'd need the syntax:
Foo/Bar/@SomeAttr
to select one attribute and...
Foo/Bar/@*
to select all the attributes of <Bar>.
Vim (Vi Improved) is a clone of Bill Joy’s vi text editor program for Unix. It was written by Bram Moolenaar based on source for a port of the Stevie editor to the Amiga[4] and first released publicly in 1991. Credit – Wikipedia
$ vim filename.c
It will create a new file and show something like "filename.c" [New File] at the bottom of the screen.
Now, to edit and add something to the file, we have to change to "Insert" mode. To do this, hit the "Insert" key (or "i") on the keyboard, and the file will be ready to edit, with a message like -- INSERT -- at the bottom of the screen.
Now, you can use the below keys to actually write some code.
- Enter – If you want to add new line
- Right, Left Arrow Key – To move cursor from right to left
- Up, Down Arrow Key – To move cursor from one line to another
- Space – To add space
Let's say we will write the code as:
#include <stdio.h> int main(void) { printf("Hello World"); return 0; }
Once you have written the above code and you are sure that you want to save this file, type the below key sequence.
ESCAPE
:wq!
The meaning of the above key sequence is:
- ESCAPE – removes “Insert” Mode
- :w – is for writing the file
- q – is for quitting the file
- ! – is for force quit.
- “:wq!” becomes save and quit.
- “:w” becomes only save
- “q!” becomes quit without saving.
Now, after saving and closing the file, you will see the newly created "filename.c" as:
$ ls -alh -rw-rw-r-- 1 myuser myuser 76 Aug 26 21:47 filename.c
This time, we’ll take a look at the DFA class and its helper class called SubsetMachine.
To understand what’s a DFA, refer to the first post in this series called Regex engine in C# - the Regex Parser.
In the Regex engine in C# - the NFA post we ended with an NFA.
Now we’re going to build a DFA based on such NFA.
Remember that the main difference between a DFA and an NFA is that a DFA doesn’t have epsilon (ε) transitions that represent "nothing" or "no input" between states.
As described in the section DFA versus NFA in the introduction of this series of posts, it may be shown that a DFA is equivalent to an NFA, in that, for any given NFA, one may construct an equivalent DFA, and vice-versa: this is the powerset construction or subset construction.
So, let’s get our hands dirty with some code.
Below I present the DFA class:
/// <summary>
/// Implements a deterministic finite automata (DFA)
/// </summary>
class DFA
{
  // Start state
  public state start;
  // Set of final states
  public Set<state> final;
  // Transition table
  public SCG.SortedList<KeyValuePair<state, input>, state> transTable;

  public DFA()
  {
    final = new Set<state>();
    transTable = new SCG.SortedList<KeyValuePair<state, input>, state>(new Comparer());
  }

  public string Simulate(string @in)
  {
    state curState = start;

    CharEnumerator i = @in.GetEnumerator();

    while(i.MoveNext())
    {
      KeyValuePair<state, input> transition = new KeyValuePair<state, input>(curState, i.Current);

      if(!transTable.ContainsKey(transition))
        return "Rejected";

      curState = transTable[transition];
    }

    if(final.Contains(curState))
      return "Accepted";
    else
      return "Rejected";
  }

  public void Show()
  {
    Console.Write("DFA start state: {0}\n", start);
    Console.Write("DFA final state(s): ");

    SCG.IEnumerator<state> iE = final.GetEnumerator();

    while(iE.MoveNext())
      Console.Write(iE.Current + " ");

    Console.Write("\n\n");

    foreach(SCG.KeyValuePair<KeyValuePair<state, input>, state> kvp in transTable)
      Console.Write("Trans[{0}, {1}] = {2}\n", kvp.Key.Key, kvp.Key.Value, kvp.Value);
  }
}

/// <summary>
/// Implements a comparer that suits the transTable SortedList
/// </summary>
public class Comparer : SCG.IComparer<KeyValuePair<state, input>>
{
  public int Compare(KeyValuePair<state, input> transition1, KeyValuePair<state, input> transition2)
  {
    if(transition1.Key == transition2.Key)
      return transition1.Value.CompareTo(transition2.Value);
    else
      return transition1.Key.CompareTo(transition2.Key);
  }
}
As you see, a DFA has 3 variables: a start state, a set of final states and a transition table that maps transitions between states.
Below I present the SubsetMachine class that is responsible for the hard work of extracting an equivalent DFA from a given NFA:
class SubsetMachine
{
  private static int num = 0;

  /// <summary>
  /// Subset machine that employs the powerset construction or subset construction algorithm.
  /// It creates a DFA that recognizes the same language as the given NFA.
  /// </summary>
  public static DFA SubsetConstruct(NFA nfa)
  {
    DFA dfa = new DFA();

    // Sets of NFA states which are represented by some DFA state
    Set<Set<state>> markedStates = new Set<Set<state>>();
    Set<Set<state>> unmarkedStates = new Set<Set<state>>();

    // Gives a number to each state in the DFA
    HashDictionary<Set<state>, state> dfaStateNum = new HashDictionary<Set<state>, state>();

    Set<state> nfaInitial = new Set<state>();
    nfaInitial.Add(nfa.initial);

    // Initially, EpsilonClosure(nfa.initial) is the only state in the DFA's states and it's unmarked.
    Set<state> first = EpsilonClosure(nfa, nfaInitial);
    unmarkedStates.Add(first);

    // The initial dfa state
    state dfaInitial = GenNewState();
    dfaStateNum[first] = dfaInitial;
    dfa.start = dfaInitial;

    while(unmarkedStates.Count != 0)
    {
      // Takes out one unmarked state and posteriorly mark it.
      Set<state> aState = unmarkedStates.Choose();

      // Removes from the unmarked set.
      unmarkedStates.Remove(aState);

      // Inserts into the marked set.
      markedStates.Add(aState);

      // If this state contains the NFA's final state, add it to the DFA's set of
      // final states.
      if(aState.Contains(nfa.final))
        dfa.final.Add(dfaStateNum[aState]);

      SCG.IEnumerator<input> iE = nfa.inputs.GetEnumerator();

      // For each input symbol the nfa knows...
      while(iE.MoveNext())
      {
        // Next state
        Set<state> next = EpsilonClosure(nfa, nfa.Move(aState, iE.Current));

        // If we haven't examined this state before, add it to the unmarkedStates and make up a new number for it.
        if(!unmarkedStates.Contains(next) && !markedStates.Contains(next))
        {
          unmarkedStates.Add(next);
          dfaStateNum.Add(next, GenNewState());
        }

        KeyValuePair<state, input> transition = new KeyValuePair<state, input>();
        transition.Key = dfaStateNum[aState];
        transition.Value = iE.Current;

        dfa.transTable[transition] = dfaStateNum[next];
      }
    }

    return dfa;
  }

  /// <summary>
  /// Builds the Epsilon closure of states for the given NFA
  /// </summary>
  /// <param name="nfa"></param>
  /// <param name="states"></param>
  /// <returns></returns>
  static Set<state> EpsilonClosure(NFA nfa, Set<state> states)
  {
    // Push all states onto a stack
    SCG.Stack<state> uncheckedStack = new SCG.Stack<state>(states);

    // Initialize EpsilonClosure(states) to states
    Set<state> epsilonClosure = states;

    while(uncheckedStack.Count != 0)
    {
      // Pop state t, the top element, off the stack
      state t = uncheckedStack.Pop();

      int i = 0;

      // For each state u with an edge from t to u labeled Epsilon
      foreach(input input in nfa.transTable[t])
      {
        if(input == (char)NFA.Constants.Epsilon)
        {
          state u = Array.IndexOf(nfa.transTable[t], input, i);

          // If u is not already in epsilonClosure, add it and push it onto stack
          if(!epsilonClosure.Contains(u))
          {
            epsilonClosure.Add(u);
            uncheckedStack.Push(u);
          }
        }

        i = i + 1;
      }
    }

    return epsilonClosure;
  }

  /// <summary>
  /// Creates unique state numbers for DFA states
  /// </summary>
  /// <returns></returns>
  private static state GenNewState()
  {
    return num++;
  }
}
In the first post of this series we see the following line of code:
DFA dfa = SubsetMachine.SubsetConstruct(nfa);
The SubsetConstruct method from the SubsetMachine class receives as input an NFA and returns a DFA.
Inside the SubsetConstruct method we firstly instantiate a new DFA object and then we create two variables markedStates and unmarkedStates that are sets of NFA states which represent a DFA state.
// Sets of NFA states which is represented by some DFA state Set<Set<state>> markedStates = new Set<Set<state>>(); Set<Set<state>> unmarkedStates = new Set<Set<state>>();
From this we see that a DFA state can represent a set of NFA states. Take a look at the introductory post and see Figure 2. It shows two DFA states that represent sets of NFA states, in this particular case the DFA final states represent the NFA states {s2, s3} and {s5, s6}.
The HashDictionary helps us to give a name (to number) each DFA state.
// Gives a number to each state in the DFA HashDictionary<Set<state>, state> dfaStateNum = new HashDictionary<Set<state>, state>();
We declare a variable called nfaInitial that is a set of states. It receives the initial NFA state:
Set<state> nfaInitial = new Set<state>(); nfaInitial.Add(nfa.initial);
We’ll start using the EpsilonClosure function.
// Initially, EpsilonClosure(nfa.initial) is the only state in the DFAs states and it's unmarked. Set<state> first = EpsilonClosure(nfa, nfaInitial);
The EpsilonClosure function receives as parameters the NFA and its initial state and returns a set of states. Take a look at the method signature:
static Set<state> EpsilonClosure(NFA nfa, Set<state> states)
So, what does it do? You may ask. To answer this question let’s debug this first method call:
From the NFA transition table presented in Figure 2 and from the transition graph presented in Figure 3 in the second post of this series we can see how many transitions are represented by eps transitions.
The first time we enter into this function we’ll get as a return value a set of states that contains all the states that are reachable with an eps transition from the start state 0.
For the sake of comparison I’ll show the NFA’s graph representation for the regex (l|e)*n?(i|e)el* that we’ve been studying since the beginning of this series.
Figure 2 - NFA’s graph representation for the regex (l|e)*n?(i|e)el*
If you pay close attention you’ll see that the order the regex parser found the states is the order we visually debug the code looking at the graph above.
With such states found we move next adding this DFA state into the variable unmarkedStates.
We then use a function called GetNewState that is responsible for generating a number that uniquely identifies each state of the DFA:
// The initial dfa state state dfaInitial = GenNewState();
When we pass to the next line of code we add to the dfaStateNum dictionary a key that is the set of states returned by the EpsilonClosure function and a value that is the name of the initial state of the DFA.
dfaStateNum[first] = dfaInitial;

We make the initial state of the DFA be the dfaInitial value we just got.
dfa.start = dfaInitial;
Next we enter the first while loop. In this while we basically extract one of the unmarkedStates and add it to the markedStates set. This means that we have already checked that state.
// Takes out one unmarked state and posteriorly mark it. Set<state> aState = unmarkedStates.Choose(); // Removes from the unmarked set. unmarkedStates.Remove(aState); // Inserts into the marked set. markedStates.Add(aState);
In the next line of code (one of the most interesting parts of the whole code) we check to see if this current DFA state (remember that it is a set of states) we’re on contains the NFA final state, if it holds true, we add it to the DFA’s set of final states:
// If this state contains the NFA's final state, add it to the DFA's set of final states. if(aState.Contains(nfa.final)) dfa.final.Add(dfaStateNum[aState]);
Now it’s time to check against the NFA’s input symbols. To accomplish this we declare an enumerator of type state that does the job of moving through each of the input symbols in the next while code block:
SCG.IEnumerator<input> iE = nfa.inputs.GetEnumerator(); // For each input symbol the nfa knows... while(iE.MoveNext()) { . . .
Now it’s time to create the next DFA state. We do this by declaring a new set of states and we call the EpsilonClosure function again to fill this state, but this time we pass the EpsilonClosure function a different second parameter.
// Next state Set<state> next = EpsilonClosure(nfa, nfa.Move(aState, iE.Current));
Let’s go deeper to take a look at this second parameter.
As you see we call the function Move that is part of the NFA class. This function receives as parameters a set of states and an input symbol to be checked against. It returns a set of states.
What the move function does is: foreach state in the set of states passed as the first parameter we check each transition present in the NFA’s transition table from this state to another state with the input symbol passed as the second parameter.
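The Move method itself belongs to the NFA class from the previous post; based on the description above and on how EpsilonClosure reads the transition table, a sketch of it (my reconstruction, not the original listing) might be:

// Returns the set of NFA states reachable from any state in 'states'
// on the given input symbol 'inp' (epsilon edges are not followed here).
public Set<state> Move(Set<state> states, input inp)
{
  Set<state> result = new Set<state>();

  foreach(state t in states)
  {
    // transTable[t][u] stores the input symbol labeling the edge t -> u.
    for(state u = 0; u < transTable[t].Length; u++)
    {
      if(transTable[t][u] == inp)
        result.Add(u);
    }
  }

  return result;
}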
So, the first time we pass we get the following output from the Move function:
Figure 3 - Result from the NFA’s Move function the 1st time it’s called
If we look at Figure 2 we can assert that from the states present in the first state of the DFA (see Figure 1) we can move to states {5, 16} with the first NFA input that is equal to ‘e’.
With the above result taken from the Move function we’re ready to go the EpsilonClosure function for the second time to create the 2nd DFA state in the SubsetMachine class. This second time we get the following result from EpsilonClosure function:
Figure 4 - Result from the EpsilonClosure function the 2nd time it’s called
Now, if you pay close attention, we can assert that starting at the states {5, 16} we can move with an eps-transition to the states shown above. Remember that the states we pass to the EpsilonClosure function are themselves included in the result returned by the function.
Now that we have created the 2nd DFA state we check to see if it wasn’t examined yet and if it holds true we add it to the unmarkedStates variable and give a new name to this state numbering it with the GenNewState function.
// If we haven't examined this state before, add it to the unmarkedStates and make up a new number for it. if(!unmarkedStates.Contains(next) && !markedStates.Contains(next)) { unmarkedStates.Add(next); dfaStateNum.Add(next, GenNewState()); }
Now the best part of it. :)
We create a new transition that has as key the number of the DFA state we’re checking and as the value the current input symbol we’re after.
KeyValuePair<state, input> transition = new KeyValuePair<state, input>();
transition.Key = dfaStateNum[aState]; transition.Value = iE.Current;
We then add this transition to the DFA’s transition table:
Figure 5 - DFA’s transition table
This has the following meaning: from state 0 with input ‘e’ go to state 1!
These are the subsequent values we get for the first unmarkedState we’re checking:
With input ‘i’ we can go to state { 14 } from which with an eps transition we can go to state { 17 }.
With input ‘l’ we can go to state { 3 } from which with an eps transition we can go to states { 4, 13, 8, 3, 12, 7, 2, 11, 6, 1, 15, 10 }.
With input ‘n’ we can go to state { 9 } from which with an eps transition we can go to states { 12, 9, 13, 15 }.
A point that deserves consideration is that each time you run the regex parser it’s not guaranteed that the numbers that identify the DFA states will remain the same.
I won’t continue debugging because it would consume a lot of space in this blog post.
I think that with the above explanation it’s easy to get the point.
In short we’ll repeat the above steps for each unmarked state that hasn’t been checked yet working with it against each input symbol.
For the regex (l|e)*n?(i|e)el* in one of the times I ran the code, I got the following DFA’s transition table:
Figure 6 - DFA’s transition table for the regex (l|e)*n?(i|e)el*
Below is the DFA’s graph representation:
Figure 7 - DFA’s graph representation for the regex (l|e)*n?(i|e)el*
In the next post I’ll simulate some input strings against this DFA to assert its validity.
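As a quick preview, exercising the DFA is just a matter of calling the Simulate method shown earlier; for the regex (l|e)*n?(i|e)el* (the sample strings below are my own), it would look like:

// Assumes 'dfa' was produced by SubsetMachine.SubsetConstruct(nfa) as shown above.
Console.WriteLine(dfa.Simulate("neel")); // Accepted
Console.WriteLine(dfa.Simulate("lie"));  // Accepted
Console.WriteLine(dfa.Simulate("nil"));  // Rejected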
See you there!
Updated on 5/12/2009 09:57:00 PM
As I finished writing the posts, here goes the list that points to them:
Regular Expression Engine in C# (the Story)
Regex engine in C# - the Regex Parser
Regex engine in C# - the NFA
Regex engine in C# - matching strings