Issue
I would like to do an upsert using the "new" functionality added by PostgreSQL 9.5, using SQLAlchemy Core. While it is implemented, I'm pretty confused by the syntax, which I can't adapt to my needs.
Here is a code sample of what I would like to be able to do:
from sqlalchemy import Column, Integer, MetaData, Table, bindparam, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.dialects.postgresql import insert as psql_insert

Base = declarative_base()

class User(Base):
    __tablename__ = 'test'
    a_id = Column('id', Integer, primary_key=True)
    a = Column("a", Integer)

engine = create_engine('postgres://name:[email protected]/test')
User().metadata.create_all(engine)

meta = MetaData(engine)
meta.reflect()
table = Table('test', meta, autoload=True)
conn = engine.connect()

stmt = psql_insert(table).values({
    table.c['id']: bindparam('id'),
    table.c['a']: bindparam('a'),
})
stmt = stmt.on_conflict_do_update(
    index_elements=[table.c['id']],
    set_={'a': bindparam('a')},
)

list_of_dictionary = [{'id': 1, 'a': 1}, {'id': 2, 'a': 2}]
conn.execute(stmt, list_of_dictionary)
I basically want to insert rows in bulk, and if an id is already taken, I want to update that row with the value I initially wanted to insert.
However, SQLAlchemy throws this error:
CompileError: bindparam() name 'a' is reserved for automatic usage in the VALUES or SET clause of this insert/update statement. Please use a name other than column name when using bindparam() with insert() or update() (for example, 'b_a').
While this is a known issue (see), I didn't find any proper answer that does not require modifying either the keys of list_of_dictionary or the names of your columns.
I want to know whether there is a way of constructing stmt so that its behavior is consistent and does not depend on whether the keys of the variable list_of_dictionary are the names of the columns of the target table (my code works without error in those cases).
Solution
This does the trick for me:
import warnings

from sqlalchemy import create_engine
from sqlalchemy import MetaData, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.inspection import inspect


def upsert(engine, schema, table_name, records=[]):
    metadata = MetaData(schema=schema)
    metadata.bind = engine

    table = Table(table_name, metadata, schema=schema, autoload=True)

    # get list of fields making up the primary key
    primary_keys = [key.name for key in inspect(table).primary_key]

    # assemble base statement
    stmt = postgresql.insert(table).values(records)

    # define dict of non-primary keys for updating
    update_dict = {
        c.name: c
        for c in stmt.excluded
        if not c.primary_key
    }

    # cover the case when all columns in the table comprise the primary key,
    # in which case an upsert is identical to 'on conflict do nothing'
    if update_dict == {}:
        warnings.warn('no updateable columns found for table')
        # we still want to insert without errors
        # (insert_ignore is a helper assumed to be defined elsewhere in the original answer)
        insert_ignore(table_name, records)
        return None

    # assemble new statement with 'on conflict do update' clause
    update_stmt = stmt.on_conflict_do_update(
        index_elements=primary_keys,
        set_=update_dict,
    )

    # execute
    with engine.connect() as conn:
        result = conn.execute(update_stmt)
        return result
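For illustration, here is a minimal usage sketch of the function above; the connection URL, schema, and row values are placeholders rather than values from the question:

from sqlalchemy import create_engine

# Placeholder connection string; assumes a 'test' table with an integer primary
# key 'id' and an integer column 'a' already exists in the 'public' schema.
engine = create_engine('postgresql://user:password@localhost/test')

records = [
    {'id': 1, 'a': 1},
    {'id': 2, 'a': 2},
]

# Inserts the rows, updating 'a' for any 'id' that already exists.
result = upsert(engine, 'public', 'test', records)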
This answer was collected from Stack Overflow and is licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.
Source: https://errorsfixing.com/how-to-do-a-proper-upsert-using-sqlalchemy-on-postgresql/
#include "OrthoBspTree.hpp"
This class represents an orthogonal BSP tree (axis-aligned split planes) that spatially organizes MapElementT instances.
A map element is generally linked in each leaf that it intersects, or instead, whenever possible, in the node closest to the root where the element intersects all children. Note that the latter is an optional feature that yields a tree that is logically equivalent to the pure "store all contents in leaves" strategy (where nodes generally remain empty and store no contents at all), but in comparison saves some storage. Also note the subtle difference to storing some contents in nodes (the elements that intersect a split plane) and some (the rest) in leaves, as ancient versions of our CaBSP did: keeping some contents in nodes like this would make it impossible to determine the contents (the set of all map elements) that are inside a given node, because the set would be grossly oversized. Our "extended leaves-only" approach both saves storage and is able to produce, for each node, the exact set of map elements (required e.g. for view frustum tests).
Another feature of this implementation is that the bounding-box of a node is usually not the tight bounding-box over its contents, but typically larger: The bounding-box of the root node is the maximum bounds of the world, the bounding-box of its children is determined by subdividing it along the split plane, etc. As such, the bounding-boxes in the tree only depend on the split planes, but not directly on the contents of the nodes (and thus the BB member of the NodeT class is const).
Finally, note how these larger bounding-boxes affect the tree structure and size. There are two important differences:
The constructor.
The destructor.
Returns the BSP tree's root node.
Inserts the given element into the tree (the structure of the tree remains unchanged).
Removes the given element from the tree (the structure of the tree remains unchanged).
Removes and re-inserts those map elements whose bounding-box has changed so that it disagrees with the tree structure.
This method is called once per "document frame" in ChildFrameT::OnIdle().
Source: https://api.cafu.de/c++/classOrthoBspTreeT.html
So far we have understood what SpecFlow is and what development model it follows. Now let's try to create our first SpecFlow Selenium C# test. I assume that you have some basic understanding of Selenium WebDriver and its basic commands.
I hope you have been following the complete tutorial, and I expect that by now you have completed the following steps, which are the prerequisites for writing a SpecFlow Selenium test:
- Download and Install Visual Studio
- Set Up Selenium WebDriver with Visual Studio in C#
- Set Up SpecFlow
Let's first take a look at a simple Selenium Test script for LogIn functionality and then convert the same script into SpecFlow script to understand it better.
Selenium Test Script
Selenium script for logging in to the application:
- Launch the Browser
- Navigate to Home Page
- Click on the LogIn link
- Enter UserName and Password
- Click on Submit button
- LogOut from the application
- Close the Browser
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

namespace ToolsQA
{
    class FirstTestCase
    {
        static void Main(string[] args)
        {
            IWebDriver driver = new FirefoxDriver();

            // Launch the Online Store website
            driver.Url = "";

            // Find the element whose ID attribute is 'account' (My Account) and click it
            driver.FindElement(By.XPath(".//*[@id='account']/a")).Click();

            // Find the element whose ID attribute is 'log' (Username)
            // and enter the username into it
            driver.FindElement(By.Id("log")).SendKeys("testuser_1");

            // Find the element whose ID attribute is 'pwd' (Password)
            // and enter the password into it
            driver.FindElement(By.Id("pwd")).SendKeys("[email protected]");

            // Now submit the form
            driver.FindElement(By.Id("login")).Click();

            // Find the element whose ID attribute is 'account_logout' (Log Out) and click it
            driver.FindElement(By.XPath(".//*[@id='account_logout']/a")).Click();

            // Close the driver
            driver.Quit();
        }
    }
}
Note: As I said before, a prerequisite for SpecFlow with Selenium is a basic understanding of Selenium in C#. If you are not familiar with the above script, please go through the small tutorial on Selenium with C#.
To convert the above Selenium test into a SpecFlow test, we need to create a feature file and write the automation test statements in it. Now the question that comes to mind is: what is a feature file?
What is SpecFlow Feature File?
A feature file is an entry point to the SpecFlow test. This is a file where you will describe your tests in a descriptive language (like English). It is an essential part of SpecFlow, as it serves as an automation test script as well as living documentation. A feature file can contain a single scenario or many scenarios, but it usually contains a list of scenarios. Let's create one such file.
Before moving ahead with writing the first script, let's create a nice folder structure for the project.
Create a Feature File Folder
It is always good to have a nice and clean folder structure in the project, where each folder represents the content inside it. So, first, create a folder for the feature files.
- Create a new folder by right-clicking on the 'Project' and navigating to Add -> New Folder.
- Name the folder 'Features' and hit Enter.
Once the folder for feature files is created, we are good to go to create a feature file.
Create a Feature File
Right-click on the Features folder and navigate to Add -> New Item...
Select SpecFlow Feature File in the middle pane and give it a logical name; for the sake of this tutorial, please use the same name, 'LogIn_Feature', referred to in the screenshot below.
Note: In order for SpecFlow to automatically detect the stories (or features, as they’re known in SpecFlow), you need to make sure that they carry the '.feature' file extension. For example, in this case, I’ve named my user story 'LogIn_Feature.feature'. Every '.feature' file conventionally consists of a single feature.
- Write the first feature file for the LogIn scenario.
Feature File
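The original post shows the feature file only as a screenshot. As a rough, hedged sketch (the exact step wording in the screenshot may differ), a LogIn feature file could look like this:

Feature: LogIn_Feature
    In order to access my account
    As a user of the website
    I want to be able to log in and log out

Scenario: Successful LogIn with Valid Credentials
    Given User is at the Home Page
    And Navigate to LogIn Page
    When User enters UserName and Password
    And Click on the Submit button
    Then Successful LogIn message should be displayed
    When Click on the LogOut link
    Then Successful LogOut message should be displayed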
Note: Don't worry about the syntax if it looks strange to you. Ideally, you should be able to understand the intent of the test just by reading it in the feature file. We will discuss this in more detail in the next chapter on Gherkin keywords, but still go through the small intro on Gherkin and its keywords below.
Gherkin
The language above is called Gherkin, and it implements the principles of a business-readable domain-specific language (BRDSL). A domain-specific language gives you the ability to describe your application's behavior without getting into the details of implementation. What does that mean? If we go back to our tutorial on TDD, we saw that we wrote test code before writing any application code. In a way, we described the expected behavior of our application in terms of tests. In TDD those tests were pure Java tests; in your case they might be C++ or C# tests. But the basic idea is that those are purely technical tests.
If we now come back to BDD/BRDSL, we will see that we are able to describe tests in a more readable format. In the above test it's quite clear and evident, just by reading it, what the test will do. While being a test, it also documents the behavior of the application. This is the true power of BDD/BRDSL, and it is also the power of Cucumber, because Cucumber works on the same principles.
Keywords
Moving forward, we have just defined a test. You will notice the highlighted parts of the test (Feature, Scenario, Given, When, And, and Then). These are keywords defined by Gherkin. Gherkin has more keywords, and we will discuss those in the following tutorials. But to start off, we can quickly explain some of the keywords in one line each. Note that this is not a complete list of keywords:
Feature: Defines what feature you will be testing in the tests below
Given: Tells the pre-condition of the test
And: Defines additional conditions of the test
When: Defines the action of the test
Then: States the post-condition. You can say that it is the expected result of the test
In this chapter we are done with the feature file, but we are still not close to running the test using it. I hope you understand that so far we have not provided any implementation for the steps in the feature file. The steps in the feature file are just the body of the car; the engine is yet to be created. In the next chapter on Gherkin keywords, we will go through all the keywords available in SpecFlow for use in the feature file, and in the chapter after that we will create a step definition file, which will hold the implementation of the features.
With this understanding, let's move on to the next topic, where we will talk about Gherkin and the syntax it provides for writing application tests/behavior.
Source: https://www.toolsqa.com/specflow/feature-file/
Writing Snapshot Tests For React Components With Jest
In this tutorial, we will be looking at what snapshot tests are and how we can use snapshot testing to ensure our User Interface does not change without the team knowing about it.
To get started, you will need to familiarize yourself with the following:
- NodeJS - A JavaScript runtime built on Chrome's V8 JavaScript engine.
- React - A JavaScript library for building delightful UI by Facebook
- Jest - A JavaScript testing framework by Facebook.
What Is Snapshot Testing?
Unlike strict Test Driven Development where the standard practice is to write failing tests first then write the code to make the tests pass, Snapshot testing takes a different approach.
To write a snapshot test, you first get your code working (say, a React component), then generate a snapshot of its expected output given certain data. The snapshot tests are committed alongside the component, and every time the tests are run, Jest will compare the snapshot to the rendered output for the test.
If the test does not pass, it may mean that there were some unexpected changes to the component that you need to fix, or that you made some changes to the component and it's about time you updated the snapshot tests.
Snapshot testing is meant to be one of many different testing tools. Therefore, you may still need to write tests for your actions and reducers.
Let's get right into it!
Creating a Simple React Component
To get started, we will create a simple React App using Create React App.
create-react-app ooh-snap
cd ooh-snap
yarn start
We should now have a React app! Let's go ahead and create a component that we can test. The component that we are going to be creating renders the items props it receives as either a list or as a span element, depending on the number of items.
Create a Components folder, then add the following Items component:
import React from 'react';
import PropTypes from 'prop-types';

/**
 * Render a list of items
 *
 * @param {Object} props - List of items
 */
function Items(props) {
  const { items = [] } = props;

  if (!items.length) {
    // No items on the list, render an empty message
    return <span>No items in list</span>;
  }

  if (items.length === 1) {
    // One item in the list, render a span
    return <span>{items[0]}</span>;
  }

  // Multiple items on the list, render a list
  return (
    <ul>
      {items.map(item => <li key={item}>{item}</li>)}
    </ul>
  );
}

Items.propTypes = {
  items: PropTypes.array,
};

Items.defaultProps = {
  items: [],
};

export default Items;
Finally, let's update App.js to render our component:

import React, { Component } from 'react';
import Items from './Components/Items';

class App extends Component {
  render() {
    const items = [
      'Thor',
      'Captain America',
      'Hulk',
    ];

    return (
      <Items items={items} />
    );
  }
}

export default App;
Also, delete App.test.js, because we will be adding our own tests in the next section.
Simple, right? Next, let's go ahead and add our snapshot tests.
Writing Snapshot Tests
To get started, install react-test-renderer, a library that enables you to render React components as JavaScript objects without the need of a DOM.
yarn add react-test-renderer
Great, let's add our first test. To get started, we will create a test that renders the Items component with no items passed down as props.

import React from 'react';
import renderer from 'react-test-renderer';
import Items from './Items';

it('renders correctly when there are no items', () => {
  const tree = renderer.create(<Items />).toJSON();
  expect(tree).toMatchSnapshot();
});
Next, let's run the tests. Thanks to Create React App, we do not need to set anything else up to run our tests.
yarn test
When you run the tests for the first time, notice that a new snapshot file is created inside a __snapshots__ directory. Since our test file is named Items.test.js, the snapshot file is appropriately named Items.test.js.snap, and it looks like this:

// Jest Snapshot v1,

exports[`renders correctly when there are no items 1`] = `
<span>
  No items in list
</span>
`;
Simple, right? The test matches the component's exact output. Jest uses pretty-format to make the snapshot files human readable.
If you are getting the hang of it, go ahead and create tests for the two other scenarios, where there is one item and where there are multiple items, then run the tests.
...

it('renders correctly when there is one item', () => {
  const items = ['one'];
  const tree = renderer.create(<Items items={items} />).toJSON();
  expect(tree).toMatchSnapshot();
});

it('renders correctly when there are multiple items', () => {
  const items = ['one', 'two', 'three'];
  const tree = renderer.create(<Items items={items} />).toJSON();
  expect(tree).toMatchSnapshot();
});
Next, let's make updates to our component.
Updating Snapshot Tests
To understand why we need snapshot tests, we'll go ahead and update the Items component and re-run the tests. This, for your dev environment, is a simulation of what would happen when someone on your team makes a change to a component and your CI tool runs the tests.
We will add class names to the span and li elements, say, to effect some styling.

...

/**
 * Render a list of items
 *
 * @param {Object} props - List of items
 */
function Items(props) {
  const { items = [] } = props;

  if (!items.length) {
    // No items on the list, render an empty message
    return <span className="empty-message">No items in list</span>;
  }

  if (items.length === 1) {
    // One item in the list, render a span
    return <span className="item-message">{items[0]}</span>;
  }

  // Multiple items on the list, render a list
  return (
    <ul>
      {items.map(item => <li key={item} className="item">{item}</li>)}
    </ul>
  );
}

...
Let's run our tests again with the yarn test command in light of our changes. Notice anything different?
That's not good, is it? Jest matched the existing snapshots against the rendered component with the updated changes and failed because there were some additions to our component. It then shows a diff of the changes that were introduced to the snapshot tests.
How to fix this would entirely depend on the changes that were introduced to the snapshot tests.
If the changes are not expected, that's good: you caught them well in advance, before it was too late. If the changes were expected, update your snapshot tests and everything is green again.
While Jest is in interactive mode, you can update the snapshot tests by simply pressing u with the options provided. Alternatively, you can run jest --updateSnapshot or jest -u.
This will update the snapshots to match the updates we made and our tests will effectively pass.
Go ahead and peek into the snapshots folder to see how the snapshot files changed. Here is the updated snippet for the one-item snapshot.

exports[`renders correctly when there is one item 1`] = `
<span
  className="item-message"
>
  one
</span>
`;
Conclusion
In this tutorial, we have been able to write snapshot tests for a React component. We also updated the component to see the failing tests and eventually update the snapshots to fix the tests.
I hope you can now appreciate how easy it is to iterate and debug your UI changes especially when working as part of a big team.
While we have covered the basics of snapshot tests, there is a lot you could learn about writing better snapshot tests. Do take a look at the snapshot best practices from Jest's documentation to learn more about snapshot testing.
Source: http://brianyang.com/writing-snapshot-tests-for-react-components-with-jest/
Spring GemFire 1.0.0.M2 Released for Java and .NET
Dear Spring Community,
We are pleased to announce that the second milestone release of the Spring GemFire 1.0 project is now available for both Java and .NET! The Spring GemFire project aims to make it easier to build Spring-powered, highly scalable applications using GemFire as a distributed data management platform.
The new milestone updates include:
- Native support for GemFire 6.5 (besides 6.0)
- Extensive namespace support for configuring all the major GemFire components: cache, replicated, partitioned and client regions and many more
- New configuration option for region lookup-only
- More documentation (twice the size of the previous release)
To learn more about the project, visit the Spring GemFire homepage.
Download it now: Spring GemFire for Java | Spring GemFire for .NET
We look forward to your feedback!
Source: http://spring.io/blog/2010/12/08/spring-gemfire-1-0-0-m2-released-for-java-and-net
Progress bars
Prompt_toolkit ships with a high-level API for displaying progress bars, inspired by tqdm.
Warning
The API for the prompt_toolkit progress bars is still very new and can possibly change in the future. It is usable and tested, but keep this in mind when upgrading.
Remember that the examples directory of the prompt_toolkit repository ships with many progress bar examples as well.
Simple progress bar
Creating a new progress bar can be done by calling the ProgressBar context manager.
The progress can be displayed for any iterable. This works by wrapping the iterable (like range) with the ProgressBar context manager itself. This way, the progress bar knows when the next item is consumed by the for loop and when progress happens.
from prompt_toolkit.shortcuts import ProgressBar
import time

with ProgressBar() as pb:
    for i in pb(range(800)):
        time.sleep(.01)
Keep in mind that not all iterables can report their total length. This happens with a typical generator. In that case, you can still pass the total as follows in order to make displaying the progress possible:
def some_iterable():
    yield ...

with ProgressBar() as pb:
    # call the generator so that pb() receives an iterable
    for i in pb(some_iterable(), total=1000):
        time.sleep(.01)
Multiple parallel tasks
A prompt_toolkit ProgressBar can display the progress of multiple tasks running in parallel. Each task can run in a separate thread, and the ProgressBar user interface runs in its own thread.
Notice that we set the “daemon” flag for both threads that run the tasks. This is because control-c will stop the progress and quit our application. We don’t want the application to wait for the background threads to finish. Whether you want this depends on the application.
from prompt_toolkit.shortcuts import ProgressBar
import time
import threading

with ProgressBar() as pb:
    # Two parallel tasks.
    def task_1():
        for i in pb(range(100)):
            time.sleep(.05)

    def task_2():
        for i in pb(range(150)):
            time.sleep(.08)

    # Start threads.
    t1 = threading.Thread(target=task_1)
    t2 = threading.Thread(target=task_2)
    t1.daemon = True
    t2.daemon = True
    t1.start()
    t2.start()

    # Wait for the threads to finish. We use a timeout for the join() call,
    # because on Windows, join cannot be interrupted by Control-C or any other
    # signal.
    for t in [t1, t2]:
        while t.is_alive():
            t.join(timeout=.5)
Adding a title and label
Each progress bar can have one title, and for each task an individual label. Both the title and the labels can be formatted text.
from prompt_toolkit.shortcuts import ProgressBar
from prompt_toolkit.formatted_text import HTML
import time

title = HTML('Downloading <style bg="yellow" fg="black">4 files...</style>')
label = HTML('<ansired>some file</ansired>: ')

with ProgressBar(title=title) as pb:
    for i in pb(range(800), label=label):
        time.sleep(.01)
Formatting the progress bar
The visualisation of a ProgressBar can be customized by using a different sequence of formatters. The default formatting looks something like this:
from prompt_toolkit.shortcuts.progress_bar.formatters import *

default_formatting = [
    Label(),
    Text(' '),
    Percentage(),
    Text(' '),
    Bar(),
    Text(' '),
    Progress(),
    Text(' '),
    Text('eta [', style='class:time-left'),
    TimeLeft(),
    Text(']', style='class:time-left'),
    Text(' '),
]
That sequence of Formatter instances can be passed to the formatters argument of ProgressBar. So, we could change this and modify the progress bar to look like an apt-get style progress bar:
from prompt_toolkit.shortcuts import ProgressBar
from prompt_toolkit.styles import Style
from prompt_toolkit.shortcuts.progress_bar import formatters
import time

style = Style.from_dict({
    'label': 'bg:#ffff00 #000000',
    'percentage': 'bg:#ffff00 #000000',
    'current': '#448844',
    'bar': '',
})

custom_formatters = [
    formatters.Label(),
    formatters.Text(': [', style='class:percentage'),
    formatters.Percentage(),
    formatters.Text(']', style='class:percentage'),
    formatters.Text(' '),
    formatters.Bar(sym_a='#', sym_b='#', sym_c='.'),
    formatters.Text(' '),
]

with ProgressBar(style=style, formatters=custom_formatters) as pb:
    for i in pb(range(1600), label='Installing'):
        time.sleep(.01)
Adding key bindings and toolbar
Like other prompt_toolkit applications, we can add custom key bindings by passing a KeyBindings object:
from prompt_toolkit import HTML
from prompt_toolkit.key_binding import KeyBindings
from prompt_toolkit.patch_stdout import patch_stdout
from prompt_toolkit.shortcuts import ProgressBar
import os
import time
import signal

bottom_toolbar = HTML(' <b>[f]</b> Print "f" <b>[x]</b> Abort.')

# Create custom key bindings first.
kb = KeyBindings()
cancel = [False]

@kb.add('f')
def _(event):
    print('You pressed `f`.')

@kb.add('x')
def _(event):
    " Send Abort (control-c) signal. "
    cancel[0] = True
    os.kill(os.getpid(), signal.SIGINT)

# Use `patch_stdout`, to make sure that prints go above the
# application.
with patch_stdout():
    with ProgressBar(key_bindings=kb, bottom_toolbar=bottom_toolbar) as pb:
        for i in pb(range(800)):
            time.sleep(.01)

            # Stop when the cancel flag has been set.
            if cancel[0]:
                break
Notice that we use patch_stdout() to make printing text possible while the progress bar is displayed. This ensures that printing happens above the progress bar.
Further, when "x" is pressed, we set a cancel flag, which stops the progress. It would also be possible to send SIGINT to the main thread, but that's not always considered a clean way of cancelling something.
In the example above, we also display a toolbar at the bottom which shows the key bindings.
Read more about key bindings …
Source: https://python-prompt-toolkit.readthedocs.io/en/latest/pages/progress_bars.html
Other than using a compiler that knows about Legion, such as MPLC (Mentat Programming Language Compiler), or a library that interfaces to Legion, such as MPI or PVM, the only way to interface objects to Legion is to write glue code by hand (how most of the system is done) or to use the stub generator.
The stub generator is a temporary utility which we hope to phase out as soon as MPLC's parser can be rewritten. The stub generator generates both client-side and server-side stubs. Unlike MPLC, it does no dataflow detection or complicated graph building; all calls to other objects are strict RPC calls. On the server side, no monitor semantics are enforced, so whenever you call out you need to be able to deal with incoming methods.
The stub generator also allows you to build "add-in" objects, which are Legion software components. Add-in objects are compiled to .o files, which are then linked with some other object (which is unaware of the add-in). The add-in can manipulate the Legion message stack, as well as add functions to the interface. One example of an add-in object is TrivialMayI(), which both adds a MayI() security check to the message stack and extends the object's interface to allow methods that get and set security information.
Legion's IDL is currently expressed as a C++ header file. Here is a short example:
#include "legion/Legion.h" class AuthenticationObject { private: UVaL_Reference<LegionPackableString> password; public: UVaL_Reference<LegionImplicitParameterList> login (UVaL_Reference<LegionPackableString> p); int set_password (UVaL_Reference<LegionPackableString> newp); };
The stub generator takes this input and outputs a client stub, which allows another Legion object to call these two methods, and a server stub ("trans file"), which accepts Legion method calls and turns them into calls on AuthenticationObject::set_password() and AuthenticationObject::login().
Any object (but especially an add-in object) uses special method names that either override object-mandatory functions such as SaveState or get hooked into the Legion message stack.
Special names that you can use include object mandatory functions:
and Legion message stack hooks:
Note that SaveState and RestoreState can be hooked to in two ways: add-ins get the event, but an object wants to get the method so that it can issue an appropriate reply.
For examples of objects and add-ins using the stub generator, see the AuthenticationObject and TrivialMayI objects, respectively.
There are limits to the amount of C++ which the stub generator can handle, however. If you have a type such as UVaL_PackableSet in your interface, you will have to edit the resulting files in order to call new on UVaL_PackableSet_LinkedList; the stub generator will generate a memory-leak warning. Give a name to every parameter in the interface, although the generator can almost always detect that a name is missing and create one for you. If the new class inherits from other Legion classes, include the list of class names, beginning with the parent and ending with the new class. Note that multiple inheritance is not supported. No client calls are generated for the object-mandatory interface.
- -N integer: Integer at which method numbering should begin. Default is 200.
- -C class_name: Class of the resulting object. Defaults to nothing, which means it will inherit its class ID from its class. But if the new class is a command line class object, you need to specify UVaL_CLASS_ID_COMMANDLINE.
- -A: Generates code for an add-in trans file instead of a main trans file.
- -o nomain: Comma-separated list of options. So far there is only one option, nomain, which generates no main program. This option is intended for programs with custom server loops.
- -I include/path/: Path to the C include files.
- -g: Print debugging messages at run-time for every method invocation.
The Legion-CORBA IDL is an OMG IDL compiler. It uses the legion_generate_idl command to parse CORBA input IDL files from a distributed application and generate Legion stub code for the application (Figure 13).
The stub code is currently written only in C++. The legion_make_idl command compiles the stub code with the client's and server's implementation code and generates an executable Legion client program and Legion server program.
The Legion IDL is an on-going project. It currently supports most IDL language features, such as modules, interfaces, operations, arguments, attributes, etc. It does not currently support the following features:
The Legion IDL is tested for a i386-linux platform, and the generated stub code is tested on i386-linux and Sun Solaris 2.5.1 platforms.
The CORBA IDL files are located in the $LEGION/src/CORBA directory. You may need to compile these files separately. If so, move to the $LEGION/src/CORBA/OMG_IDL directory and type make. The executable legion_generate_idl program will be copied into your $LEGION/bin/$ARCH directory. Usage is as follows:
legion_generate_idl [<flags>]
<input file local path>
(Please see legion_generate_idl in the Reference Manual or the man page for information about the legion_generate_idl flags.)
This command generates Legion stub files for input IDL files. Optional flags allow you to include information about the preprocessor and the back end and to specify whether or not trans code, client stubs, and header files should be created. If you run the command with no flags, Legion will generate client-side and server-side stubs. For example, if apps.idl is your input IDL file name and you run:
$ legion_generate_idl apps.idl
Legion will generate the following stubs files.
apps.client-stubs.new.h apps.client-stubs.new.c apps.mapping.h apps.trans.new.h apps.trans.new.c
On the other hand, if you run:
$ legion_generate_idl -client_stubs -header apps.idl
Legion will generate client-side stubs and header files but no .trans file.
apps.client-stubs.new.h apps.client-stubs.new.c apps.mapping.h
You must prepare the implementation code for the client and server. To build server code for the sample input file apps.idl, copy out the interface mapping from apps.mapping.h generated above to a new file called apps.org.h, derive a server class in apps.org.h from the base class generated in apps.mapping.h, then add private variables and methods to the server class.
The apps.client-stubs.new.h and apps.client-stubs.new.c should be compiled with an apps.client.c file, which must include code to implement the client part of the application. The implementation code of server code methods should be in a file called apps.org.c.
You can then use the legion_make_idl command to compile your stub files. Usage is:
legion_make_idl [-notrans] [-noclient]
[-v] [-noreg] [-s <suffix string>]
[-run] [-help] <application name>
(Please see legion_make_idl in the Reference Manual or the man page for information about the flags.) Continuing our example, if you ran:
$ legion_make_idl -v -s 1stTrial -run apps
the result will be:
apps_Client_1stTrial apps_Class_1stTrial
And, assuming no compilation errors, the application will then run as:
$ apps_Client_1stTrial -c apps_Class_1stTrial
Directory of Legion 1.5 Manuals
Source: http://www.cs.virginia.edu/~legion/documentation/tutorials/1.5/manuals/Developer_1.7.5.html
Running eCos on the OpenRISC ordb2 board
UPDATE: the eCos port for OpenRISC is now tracking mainline eCos and more configurations are tested, so the whole procedure has been simplified a bit!
Downloading the new version of eCos will be described in a new note and linked to here.
Antmicro is maintaining and developing the eCos port for OpenRISC – you can find the wiki page of the port at the opencores server.
So far, the port had been tested in the or1ksim simulator, which is considered a golden model of the OpenRISC 1000 architecture. However, it's always best to see how a port performs on real hardware.
Recently, ORSoC shared the new ordb2 development board with us. The board is based on Altera Cyclone IV E FPGA chip and is equipped with all popular interfaces.
To verify that everything works fine, we tested the eCos port on the board by running a set of eCos tests and simple multi-threaded programs.
Below are the instructions on how to run eCos programs on the ordb2 board. UPDATE: Not all configuration options are yet supported, but the default configuration is well tested and stable, so the .ecc file provided previously is no longer needed.
In order to build eCos for ordb2, follow these instructions:
mkdir ecos_openrisc
cd ecos_openrisc
ecosconfig new orpsoc
ecosconfig tree
make
make tests
After issuing these commands it is possible to run eCos tests on the board. In order to upload the binaries using GDB, the or_debug_proxy program is needed.
Please note that FT4232 support was added to or_debug_proxy on 16 September 2011 – older builds will not work.
To establish a connection with the board, issue the following command:
or_debug_proxy -r 50001
This will open a tcp port for the GDB RSP connection.
It is now possible to open a UART connection. Please note that it is better to open the UART connection after starting or_debug_proxy. This is because or_debug_proxy causes the system to reorder USB devices when connected to the FTDI chip.
The configuration file assumes that the UART is running at a 115200 baud rate.
To open a UART device using picocom, issue the following command:
picocom -b 115200 /dev/ttyUSB1
The number next to ttyUSB may be different across systems.
Everything is ready to upload a test binary. We will use GDB and connect to or_debug_proxy.

or32-elf-gdb [test binary]
target remote :50001
load
spr npc 0x100
c
Now that we know how to connect to the board, let’s make a simple hello world application. A minimal eCos hello world program looks pretty standard:
#include <stdio.h>

int main(void)
{
    printf("hello world\n");
    return 0;
}
To build the program, use the following flags with or32-elf-gcc:

or32-elf-gcc \
    -g \
    -Iecos_openrisc/install/include \
    -Lecos_openrisc/install/lib \
    -nostdlib \
    -Tecos_openrisc/install/lib/target.ld \
    main.c
It is now possible to upload the resulting binary file in the same way as described above. Have fun programming for eCos on the OpenRISC devboard!
Source: https://antmicro.com/blog/2012/03/running-ecos-programs-on-openrisc-ordb2-board/
I am trying to use Unwrapping.GeneratePerTriangleUV to unwrap my mesh created in the editor. In the docs of the function, it says "You'll need to merge [uvs] yourself." What does it mean by "merging uvs" and how would I do said "merging?"
Answer by Max-Bot · Sep 08, 2016 at 05:46 PM
Here is how I assume it should be used. But it doesn't work on my PC, and on each function call there is a crash report from some external program, UnwrapCL.exe :( Maybe the code below helps somebody:
Vector2[] uvs = new Vector2[newVerts.Count];
UnwrapParam paramteters = new UnwrapParam();
int t = 0;
foreach(Vector2 uvPerTri in Unwrapping.GeneratePerTriangleUV(mesh, paramteters)) {
uvs[tris[t]] = uvPerTri;
++t;
}
mesh.uv = uvs;
Answer by CarpeFunSoftware · May 21, 2016 at 05:40 AM
See the documentation on the Mesh object; it shows you the vertex array, the triangles array, and the UV array.
The triangles use the vertex indices in the vertex array. The UVs array has the x,y positions in the texture for each vertex in the vertex array.
So if your mesh was a simple triangle (consisting of only 3 vertices) you would have a vertex array equal to the 3 points of the triangle, a triangles array consisting of indices 0, 1, 2 into the vertex array, which is one triangle, and (this is the important part for your question) you would have UV mappings for each of those vertices onto the texture. Note there are several UV sets (4 in all: .uv[], .uv2[], .uv3[], .uv4) so you could even keep the "old" UV mapping and assign the new one to a different UV set and/or swap 'em around (old in uv2, new in uv1).
Short Answer-->> Basically, the function you asked about above returns a result array to you, but doesn't assign it to the mesh. That array returned result is just "info floating in memory". You have to decide what to do with it. Try assigning it to a UV set in the mesh and see what happens. It may or may not map as you wish it to. Experimentation is in order here. The function does a "best guess" on how to map the UVs given the mesh vertex data.
The mesh UV documentation creates a new UV set in the example out of x and z vertex values and that's just one example of "merging your own uv's in" programmatically:
They assigned it to uv set 1 after the for-loop was finished. That would trash what was there before, if anything. Or, like I said, you could assign it to uv2 or 3 or 4.
Shortest Answer -> Warning: The manual says this is a preliminary interface in UnityEditor namespace.
using UnityEditor;
...
void SomewhereOverTheRainbow() {
myMesh.uv = Unwrapping.GeneratePerTriangleUV(myMesh);
}
Answer by parasiteEvie · Jan 03, 2018 at 11:07 PM
So the output is in relation to the index array (or mesh.triangles) rather than the vertex array. I might have overcomplicated it, but this works for me. Bonus points for anyone that submits a faster-than-n^2 solution.
Here is my solution:
public void UpdateUVs()
{
    SkinnedMeshRenderer meshRenderer = GetComponent<SkinnedMeshRenderer>();
    Vector2[] uvs = new Vector2[meshRenderer.sharedMesh.vertices.Length];
    Vector2[] all = Unwrapping.GeneratePerTriangleUV(meshRenderer.sharedMesh);
    int[] triangles = meshRenderer.sharedMesh.triangles;
    int count = 0;
    while (count < uvs.Length)
    {
        for (int i = 0; i < triangles.Length; i++)
        {
            if (triangles[i] == count)
            {
                uvs[count] = all[i];
                count++;
            }
        }
    }
    meshRenderer.sharedMesh.uv = uvs;
}
Answer by Smaughbeer · Oct 11, 2019 at 06:37 AM
Mesh m = new Mesh();
...
Vector2[] uvPerTriangle = Unwrapping.GeneratePerTriangleUV(m, param);
Vector2[] uvs = new Vector2[m.vertices.Length];
for (int i = 0; i < m.triangles.Length; i++)
// Triangle contents reference to vertex #
uvs[m.triangles[i]] = uvPerTriangle[i];
m.uv = uvs;
m.RecalculateTang.
Source: https://answers.unity.com/questions/1189522/how-do-i-use-the-output-of-unwrappinggeneratepertr.html
Introduction
First, install pip and virtualenv:

sudo apt-get install python-pip
sudo pip install virtualenv
Application Structure
Before we actually start writing code, we need to get a hold of the application structure. We'll first execute several commands that are essential in Django project development.
After installing virtualenv, we need to set the environment up.
virtualenv venv
We are creating a virtual environment named venv here. Now we need to activate it and move into it.

source venv/bin/activate
cd venv
Now that it has been activated, we need to start our project. Feed in the following commands to start a project:

pip install django==1.11.8
mkdir app && cd app
django-admin startproject crudapp

The first line installs Django v1.11.8, the second creates a directory named app (and moves into it), and the third starts a project named crudapp in the app directory. The directory tree should look like this:
app
└── crudapp
    ├── crudapp
    │   ├── __init__.py
    │   ├── settings.py
    │   ├── urls.py
    │   └── wsgi.py
    └── manage.py
We'll see the meaning of each file and what it does one by one. But first, to test that you are going in the right direction, run the following command:

python manage.py runserver

Next, create an app for our blog posts:

python manage.py startapp blog_posts
This will create the necessary files that we require.
First and foremost, create the model of our application.
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import models
from django.urls import reverse


# Create your models here.
class blog_posts(models.Model):
    title = models.CharField(max_length=400)
    tag = models.CharField(max_length=50)
    author = models.CharField(max_length=120)

    def __unicode__(self):
        return self.title

    def get_post_url(self):
        return reverse('post_edit', kwargs={'pk': self.pk})
Now that we have our model ready, we’ll need to migrate it to the database.
python manage.py makemigrations
python manage.py migrate
Now we create our views, where we define each of our CRUD operations.
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.shortcuts import render, redirect, get_object_or_404
from django.forms import ModelForm

from blog_posts.models import blog_posts


# Create your views here.
class PostsForm(ModelForm):
    class Meta:
        model = blog_posts
        fields = ['id', 'title', 'author']


def post_list(request, template_name='blog_posts/post_list.html'):
    posts = blog_posts.objects.all()
    data = {}
    data['object_list'] = posts
    return render(request, template_name, data)


def post_create(request, template_name='blog_posts/post_form.html'):
    form = PostsForm(request.POST or None)
    if form.is_valid():
        form.save()
        return redirect('blog_posts:post_list')
    return render(request, template_name, {'form': form})


def post_update(request, pk, template_name='blog_posts/post_form.html'):
    post = get_object_or_404(blog_posts, pk=pk)
    form = PostsForm(request.POST or None, instance=post)
    if form.is_valid():
        form.save()
        return redirect('blog_posts:post_list')
    return render(request, template_name, {'form': form})


def post_delete(request, pk, template_name='blog_posts/post_delete.html'):
    post = get_object_or_404(blog_posts, pk=pk)
    if request.method == 'POST':
        post.delete()
        return redirect('blog_posts:post_list')
    return render(request, template_name, {'object': post})
Now that we have our views, we create URL mappings in our crudapp/blog_posts/urls.py file. Note that the following are our app-specific mappings.
"""crudapp URL Configuration

The `urlpatterns` list routes URLs to views. For more information please see:
"""
from django.conf.urls import url
from django.contrib import admin

from blog_posts import views

urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^$', views.post_list, name='post_list'),
    url(r'^new$', views.post_create, name='post_new'),
    url(r'^edit/(?P<pk>\d+)$', views.post_update, name='post_edit'),
    url(r'^delete/(?P<pk>\d+)$', views.post_delete, name='post_delete'),
]
Now we create the project-specific mappings in crudapp/crudapp/urls.py:

"""Crudapp URL Configuration

The `urlpatterns` list routes URLs to views. For more information please see:
"""
from django.conf.urls import url, include
from django.contrib import admin

from crudapp.views import home

urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^blog_posts/', include('blog_posts.urls', namespace='blog_posts')),
    url(r'^$', home, name='home'),
]
Now almost everything is done and all we need to do is create our templates to test the operations.
Go ahead and create a templates/blog_posts directory in crudapp/blog_posts/.
templates/blog_posts/post_list.html:
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <link rel="stylesheet" href="">
    <title>Django CRUD application!</title>
</head>
<body>
    <div class="container">
        <h1>Blog Post List!</h1>
        <ul>
            {% for post in object_list %}
            <li>
                <p>Post ID: <a href="{% url "blog_posts:post_edit" post.id %}">{{ post.id }}</a></p>
                <p>Title: {{ post.title }}</p>
                <a href="{% url "blog_posts:post_delete" post.id %}">Delete</a>
            </li>
            {% endfor %}
        </ul>
        <a href="{% url "blog_posts:post_new" %}">New Blog post entry</a>
    </div>
</body>
</html>
templates/blog_posts/post_form.html:
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <link rel="stylesheet" href="">
    <title>Django CRUD application!</title>
</head>
<body>
    <div class="container">
        <form method="post">{% csrf_token %}
            {{ form.as_p }}
            <input class="btn btn-primary" type="submit" value="Submit" />
        </form>
    </div>
</body>
</html>
templates/blog_posts/post_delete.html:
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <link rel="stylesheet" href="">
    <title>Django CRUD application!</title>
</head>
<body>
    <div class="container">
        <form method="post">{% csrf_token %}
            Are you sure you want to delete "{{ object }}"?
            <input class="btn btn-primary" type="submit" value="Submit" />
        </form>
    </div>
</body>
</html>
Now we have all the necessary files and code that we require.
The final project tree should look like the following:
crudapp
├── blog_posts
│   ├── admin.py
│   ├── apps.py
│   ├── __init__.py
│   ├── migrations
│   ├── models.py
│   ├── templates
│   │   └── blog_posts
│   │       ├── post_delete.html
│   │       ├── post_form.html
│   │       └── post_list.html
│   ├── tests.py
│   ├── urls.py
│   └── views.py
├── crudapp
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   ├── views.py
│   └── wsgi.py
├── db.sqlite3
├── manage.py
└── requirements.txt
Execute python manage.py runserver and voilà! You have your Django app ready.
Originally published by Nitin Prakash at zeolearn.com
Django… We all know the popularity of this Python framework. Django has become the first choice of developers for building their web applications. It is a free and open-source Python framework that can easily solve a lot of common development challenges and allows you to build flexible and well-structured web applications.
A lot of common Django features, such as the built-in admin panel, the ORM (object-relational mapping tool), routing, and templating, have made developers' work easier: they do not need to spend so much time implementing these things from scratch.
One of the most powerful features of Django is the built-in admin panel. With it, you can configure a lot of things, such as access control lists, row-level permissions and actions, filters, ordering, widgets, forms, extra URL helpers, etc.
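As a quick, hedged illustration using the blog_posts model from the tutorial above (the ModelAdmin options chosen here are only examples, not part of the original post):

# crudapp/blog_posts/admin.py -- minimal sketch; option values are illustrative
from django.contrib import admin

from blog_posts.models import blog_posts


class BlogPostAdmin(admin.ModelAdmin):
    # Columns shown in the admin change list.
    list_display = ('id', 'title', 'author', 'tag')
    # Adds a search box over these fields.
    search_fields = ('title', 'author')


admin.site.register(blog_posts, BlogPostAdmin)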
Django's ORM works with all major databases out of the box and supports all the major SQL queries you might use in your application. Django's templating engine is also very flexible and powerful at the same time. Even though a lot of features are available in Django, developers still make a lot of mistakes while building an application. In this blog, we will discuss some common mistakes which you should avoid while building a Django application.
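For example, a few hedged ORM query sketches against the blog_posts model defined earlier (the filter values are made up purely for illustration):

from blog_posts.models import blog_posts

# Fetch all posts, ordered by title.
posts = blog_posts.objects.all().order_by('title')

# Fetch posts by a particular author (value is illustrative).
by_author = blog_posts.objects.filter(author='Nitin')

# Update a single row.
post = blog_posts.objects.get(pk=1)
post.tag = 'django'
post.save()

# Delete rows matching a condition.
blog_posts.objects.filter(tag='draft').delete()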
Source: https://morioh.com/p/f4d238611190
When writing code on your own, the only priority is making it work. However, working in a team of professional software developers brings a plethora of challenges. One of those challenges is coordinating many people working on the same code.
How do professional teams make dozens of changes per day while making sure everyone is coordinated and nothing is broken? Enter continuous integration!
In this tutorial you’ll:
- Learn the core concepts behind continuous integration
- Understand the benefits of continuous integration
- Set up a basic continuous integration system
- Create a simple Python example and connect it to the continuous integration system
What Is Continuous Integration?
Continuous integration (CI) is the practice of frequently building and testing each change done to your code automatically and as early as possible. Prolific developer and author Martin Fowler defines it as a practice in which members of a team integrate their work frequently, with each integration verified by an automated build and tests so that errors are detected as quickly as possible. (Source)
Let’s unpack this.
Programming is iterative. The source code lives in a repository that is shared by all members of the team. If you want to work on that product, you must obtain a copy. You will make changes, test them, and integrate them back into the main repo. Rinse and repeat.
Not so long ago, these integrations were big and weeks (or months) apart, causing headaches, wasting time, and losing money. Armed with experience, developers started making minor changes and integrating them more frequently. This reduces the chances of introducing conflicts that you need to resolve later.
After every integration, you need to build the source code. Building means transforming your high-level code into a format your computer knows how to run. Finally, the result is systematically tested to ensure your changes did not introduce errors.
Why Should I Care?
On a personal level, continuous integration is really about how you and your colleagues spend your time.
Using CI, you’ll spend less time:
- Worrying about introducing a bug every time you make changes
- Fixing the mess someone else made so you can integrate your code
- Making sure the code works on every machine, operating system, and browser
Conversely, you’ll spend more time:
- Solving interesting problems
- Writing awesome code with your team
- Co-creating amazing products that provide value to users
How does that sound?
On a team level, it allows for a better engineering culture, where you deliver value early and often. Collaboration is encouraged, and bugs are caught much sooner. Continuous integration will:
- Make you and your team faster
- Give you confidence that you’re building stable software with fewer bugs
- Ensure that your product works on other machines, not just your laptop
- Eliminate a lot of tedious overhead and let you focus on what matters
- Reduce the time spent resolving conflicts (when different people modify the same code)
Core Concepts
There are several key ideas and practices that you need to understand to work effectively with continuous integration. Also, there might be some words and phrases you aren’t familiar with but are used often when you’re talking about CI. This chapter will introduce you to these concepts and the jargon that comes with them.
Single Source Repository
If you are collaborating with others on a single code base, it’s typical to have a shared repository of source code. Every developer working on the project creates a local copy and makes changes. Once they are satisfied with the changes, they merge them back into the central repository.
It has become a standard to use version control systems (VCS) like Git to handle this workflow for you. Teams typically use an external service to host their source code and handle all the moving parts. The most popular are GitHub, BitBucket, and GitLab.
Git allows you to create multiple branches of a repository. Each branch is an independent copy of the source code and can be modified without affecting other branches. This is an essential feature, and most teams have a mainline branch (often called a master branch) that represents the current state of the project.
If you want to add or modify code, you should create a copy of the main branch and work in your new, development branch. Once you are done, merge those changes back into the master branch.
Version control holds more than just code. Documentation and test scripts are usually stored along with the source code. Some programs look for external files used to configure their parameters and initial settings. Other applications need a database schema. All these files should go into your repository.
If you have never used Git or need a refresher, check out our Introduction to Git and GitHub for Python Developers.
Automating the Build
As previously mentioned, building your code means taking the raw source code, and everything necessary for its execution, and translating it into a format that computers can run directly. Python is an interpreted language, so its “build” mainly revolves around test execution rather than compilation.
Running those steps manually after every small change is tedious and takes valuable time and attention from the actual problem-solving you’re trying to do. A big part of continuous integration is automating that process and moving it out of sight (and out of mind).
What does that mean for Python? Think about a more complicated piece of code you have written. If you used a library, package, or framework that doesn't come with the Python standard library (think anything you needed to install with pip or conda), Python needs to know about that, so the program knows where to look when it finds commands that it doesn't recognize.
You store a list of those packages in requirements.txt or a Pipfile. These are the dependencies of your code and are necessary for a successful build.
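For instance, a minimal requirements.txt might look like the following; the package names and pinned versions here are purely illustrative, not taken from this tutorial:

# requirements.txt -- illustrative example
flake8==3.7.9
pytest==5.4.1
requests==2.23.0

The build server installs exactly these versions (for example, with pip install -r requirements.txt), which keeps the build reproducible across machines.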
You will often hear the phrase “breaking the build.” When you break the build, it means you introduced a change that rendered the final product unusable. Don’t worry. It happens to everyone, even battle-hardened senior developers. You want to avoid this primarily because it will block everyone else from working.
The whole point of CI is to have everyone working on a known stable base. If they clone a repository that is breaking the build, they will work with a broken version of the code and won’t be able to introduce or test their changes. When you break the build, the top priority is fixing it so everyone can resume work.
When the build is automated, you are encouraged to commit frequently, usually multiple times per day. It allows people to quickly find out about changes and notice if there’s a conflict between two developers. If there are numerous small changes instead of a few massive updates, it’s much easier to locate where the error originated. It will also encourage you to break your work down into smaller chunks, which is easier to track and test.
Automated Testing
Since everyone is committing changes multiple times per day, it’s important to know that your change didn’t break anything else in the code or introduce bugs. In many companies, testing is now a responsibility of every developer. If you write code, you should write tests. At a bare minimum, you should cover every new function with a unit test.
Running tests automatically, with every change committed, is a great way to catch bugs. A failing test automatically causes the build to fail. It will draw your attention to the problems revealed by testing, and the failed build will make you fix the bug you introduced. Tests don’t guarantee that your code is free of bugs, but it does guard against a lot of careless changes.
Automating test execution gives you some peace of mind because you know the server will test your code every time you commit, even if you forgot to do it locally.
Using an External Continuous Integration Service
If something works on your computer, will it work on every computer? Probably not. It’s a cliché excuse and a sort of inside joke among developers to say, “Well, it worked on my machine!” Making the code work locally is not the end of your responsibility.
To tackle this problem, most companies use an external service to handle integration, much like using GitHub for hosting your source code repository. External services have servers where they build code and run tests. They act as monitors for your repository and stop anyone from merging to the master branch if their changes break the build.
There are many such services out there, with various features and pricing. Most have a free tier so that you can experiment with one of your repositories. You will use a service called CircleCI in an example later in the tutorial.
Testing in a Staging Environment
A production environment is where your software will ultimately run. Even after successfully building and testing your application, you can’t be sure that your code will work on the target computer. That’s why teams deploy the final product in an environment that mimics the production environment. Once you are sure everything works, the application is deployed in the production environment.
Note: This step is more relevant to application code than library code. Any Python libraries you write still need to be tested on a build server, to ensure they work in environments different from your local computer.
You will hear people talking about this clone of the production environment using terms like development environment, staging environment, or testing environment. It’s common to use abbreviations like DEV for the development environment and PROD for the production environment.
The development environment should replicate production conditions as closely as possible. This setup is often called DEV/PROD parity. Keep the environment on your local computer as similar as possible to the DEV and PROD environments to minimize anomalies when deploying applications.
We mention this to introduce you to the vocabulary, but continuously deploying software to DEV and PROD is a whole other topic. The process is called, unsurprisingly, continuous deployment (CD). You can find more resources about it in the Next Steps section of this article.
Your Turn!
The best way to learn is by doing. You now understand all the essential practices of continuous integration, so it’s time to get your hands dirty and create the whole chain of steps necessary to use CI. This chain is often called a CI pipeline.
This is a hands-on tutorial, so fire up your editor and get ready to work through these steps as you read!
We assume that you know the basics of Python and Git. We will use Github as our hosting service and CircleCI as our external continuous integration service. If you don’t have accounts with these services, go ahead and register. Both of these have free tiers!
Problem Definition
Remember, your focus here is adding a new tool to your utility belt, continuous integration. For this example, the Python code itself will be straightforward. You want to spend the bulk of your time internalizing the steps of building a pipeline, instead of writing complicated code.
Imagine your team is working on a simple calculator app. Your task is to write a library of basic mathematical functions: addition, subtraction, multiplication, and division. You don’t care about the actual application, because that’s what your peers will be developing, using functions from your library.
Create a Repo
Log in to your GitHub account, create a new repository and call it CalculatorLibrary. Add a README and .gitignore, then clone the repository to your local machine. If you need more help with this process, have a look at GitHub’s walkthrough on creating a new repository.
Set Up a Working Environment
For others (and the CI server) to replicate your working conditions, you need to set up an environment. Create a virtual environment somewhere outside your repo and activate it:
$ # Create virtual environment
$ python3 -m venv calculator
$ # Activate virtual environment (Mac and Linux)
$ . calculator/bin/activate
The previous commands work on macOS and Linux. If you are a Windows user, check the Platforms table in the official documentation. This will create a directory that contains a Python installation and tell the interpreter to use it. Now we can install packages knowing that it will not influence your system’s default Python installation.
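If you are working on Windows instead, the equivalent steps look roughly like this. This is only a sketch that assumes the same calculator environment name; the exact activation script depends on whether you use cmd.exe or PowerShell:

C:\> rem Create virtual environment
C:\> python -m venv calculator
C:\> rem Activate virtual environment (cmd.exe)
C:\> calculator\Scripts\activate.bat
C:\> rem In PowerShell, run calculator\Scripts\Activate.ps1 instead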
Write a Simple Python Example
Create a new file called
calculator.py in the top-level directory of your repository, and copy the following code:
""" Calculator library containing basic math operations. """ def add(first_term, second_term): return first_term + second_term def subtract(first_term, second_term): return first_term - second_term
This is a bare-bones example containing two of the four functions we will be writing. Once we have our CI pipeline up and running, you will add the remaining two functions.
Go ahead and commit those changes:
$ # Make sure you are in the correct directory
$ cd CalculatorLibrary
$ git add calculator.py
$ git commit -m "Add functions for addition and subtraction"
Your CalculatorLibrary folder should have the following files right now:
CalculatorLibrary/
|
├── .git
├── .gitignore
├── README.md
└── calculator.py
Great, you have completed one part of the required functionality. The next step is adding tests to make sure your code works the way it’s supposed to.
Write Unit Tests
You will test your code in two steps.
The first step involves linting—running a program, called a linter, to analyze code for potential errors.
flake8 is commonly used to check if your code conforms to the standard Python coding style. Linting makes sure your code is easy to read for the rest of the Python community.
The second step is unit testing. A unit test is designed to check a single function, or unit, of code. Python comes with a standard unit testing library, but other libraries exist and are very popular. This example uses
pytest.
A standard practice that goes hand in hand with testing is calculating code coverage. Code coverage is the percentage of source code that is “covered” by your tests.
pytest has an extension,
pytest-cov, that helps you understand your code coverage.
These are external dependencies, and you need to install them:
$ pip install flake8 pytest pytest-cov
These are the only external packages you will use. Make sure to store those dependencies in a
requirements.txt file so others can replicate your environment:
$ pip freeze > requirements.txt
To run your linter, execute the following:
$ flake8 --statistics
./calculator.py:3:1: E302 expected 2 blank lines, found 1
./calculator.py:6:1: E302 expected 2 blank lines, found 1
2     E302 expected 2 blank lines, found 1
The --statistics option gives you an overview of how many times a particular error happened. Here we have two PEP 8 violations, because flake8 expects two blank lines before a function definition instead of one. Go ahead and add an empty line before each function definition. Run flake8 again to check that the error messages no longer appear.
Now it’s time to write the tests. Create a file called
test_calculator.py in the top-level directory of your repository and copy the following code:
""" Unit tests for the calculator library """ import calculator class TestCalculator: def test_addition(self): assert 4 == calculator.add(2, 2) def test_subtraction(self): assert 2 == calculator.subtract(4, 2)
These tests make sure that our code works as expected. It is far from extensive because you haven’t tested for potential misuse of your code, but keep it simple for now.
The following command runs your test:
$ pytest -v --cov

collected 2 items

test_calculator.py::TestCalculator::test_addition PASSED      [ 50%]
test_calculator.py::TestCalculator::test_subtraction PASSED   [100%]

---------- coverage: platform darwin, python 3.6.6-final-0 -----------
Name                                              Stmts   Miss  Cover
---------------------------------------------------------------------
calculator.py                                         4      0   100%
test_calculator.py                                    6      0   100%
/Users/kristijan.ivancic/code/learn/__init__.py       0      0   100%
---------------------------------------------------------------------
TOTAL                                                10      0   100%
pytest is excellent at test discovery. Because you have a file with the prefix
test,
pytest knows it will contain unit tests for it to run. The same principles apply to the class and method names inside the file.
The
-v flag gives you a nicer output, telling you which tests passed and which failed. In our case, both tests passed. The
--cov flag makes sure
pytest-cov runs and gives you a code coverage report for
calculator.py.
You have completed the preparations. Commit the test file and push all those changes to the master branch:
$ git add test_calculator.py
$ git commit -m "Add unit tests for calculator"
$ git push
At the end of this section, your CalculatorLibrary folder should have the following files:
CalculatorLibrary/
|
├── .git
├── .gitignore
├── README.md
├── calculator.py
├── requirements.txt
└── test_calculator.py
Excellent, both your functions are tested and work correctly.
Connect to CircleCI
At last, you are ready to set up your continuous integration pipeline!
CircleCI needs to know how to run your build and expects that information to be supplied in a particular format. It requires a
.circleci folder within your repo and a configuration file inside it. A configuration file contains instructions for all the steps that the build server needs to execute. CircleCI expects this file to be called
config.yml.
A
.yml file uses a data serialization language, YAML, and it has its own specification. The goal of YAML is to be human readable and to work well with modern programming languages for common, everyday tasks.
In a YAML file, there are three basic ways to represent data:
- Mappings (key-value pairs)
- Sequences (lists)
- Scalars (strings or numbers)
It is very simple to read:
- Indentation may be used for structure.
- Colons separate key-value pairs.
- Dashes are used to create lists.
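For example, a tiny YAML document combining all three forms might look like the snippet below; the keys here are made up for illustration and are not part of the CircleCI schema:

# A mapping of scalars
project: calculator
version: 2

# A mapping whose value is a sequence
steps:
  - checkout
  - run tests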
Create the
.circleci folder in your repo and a
config.yml file with the following content:
# Python CircleCI 2.0 configuration file
version: 2
jobs:
  build:
    docker:
      - image: circleci/python:3.7

    working_directory: ~/repo

    steps:
      # Step 1: obtain repo from GitHub
      - checkout
      # Step 2: create virtual env and install dependencies
      - run:
          name: install dependencies
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip install -r requirements.txt
      # Step 3: run linter and tests
      - run:
          name: run tests
          command: |
            . venv/bin/activate
            flake8 --exclude=venv* --statistics
            pytest -v --cov=calculator
Some of these words and concepts might be unfamiliar to you. For example, what is Docker, and what are images? Let’s go back in time a bit.
Remember the problem programmers face when something works on their laptop but nowhere else? Before, developers used to create a program that isolates a part of the computer’s physical resources (memory, hard drive, and so on) and turns them into a virtual machine.
A virtual machine pretends to be a whole computer on its own. It would even have its own operating system. On that operating system, you deploy your application or install your library and test it.
Virtual machines take up a lot of resources, which sparked the invention of containers. The idea is analogous to shipping containers. Before shipping containers were invented, manufacturers had to ship goods in a wide variety of sizes, packaging, and modes (trucks, trains, ships).
By standardizing the shipping container, these goods could be transferred between different shipping methods without any modification. The same idea applies to software containers.
Containers are a lightweight unit of code and its runtime dependencies, packaged in a standardized way, so they can quickly be plugged in and run on the Linux OS. You don’t need to create a whole virtual operating system, as you would with a virtual machine.
Containers only replicate parts of the operating system they need in order to work. This reduces their size and gives them a big performance boost.
Docker is currently the leading container platform, and it’s even able to run Linux containers on Windows and macOS. To create a Docker container, you need a Docker image. Images provide blueprints for containers much like classes provide blueprints for objects. You can read more about Docker in their Get Started guide.
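If you have Docker installed locally and are curious, you can try the same image yourself; the commands below are just an illustration and are not required for the rest of the tutorial:

$ # Pull the pre-built CircleCI Python image and check its Python version
$ docker pull circleci/python:3.7
$ docker run -it --rm circleci/python:3.7 python --version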
CircleCI maintains pre-built Docker images for several programming languages. In the above configuration file, you have specified a Linux image that has Python already installed. That image will create a container in which everything else happens.
Let’s look at each line of the configuration file in turn:
- version: Every config.yml starts with the CircleCI version number, used to issue warnings about breaking changes.
- jobs: Jobs represent a single execution of the build and are defined by a collection of steps. If you have only one job, it must be called build.
- build: As mentioned before, build is the name of your job. You can have multiple jobs, in which case they need to have unique names.
- docker: The steps of a job occur in an environment called an executor. The common executor in CircleCI is a Docker container. It is a cloud-hosted execution environment, but other options exist, like a macOS environment.
- image: A Docker image is a file used to create a running Docker container. We are using an image that has Python 3.7 preinstalled.
- working_directory: Your repository has to be checked out somewhere on the build server. The working directory represents the file path where the repository will be stored.
- steps: This key marks the start of a list of steps to be performed by the build server.
- checkout: The first step the server needs to do is check the source code out to the working directory. This is performed by a special step called checkout.
- run: Executing command-line programs or commands is done inside the command key. The actual shell commands will be nested within.
- name: The CircleCI user interface shows you every build step in the form of an expandable section. The title of the section is taken from the value associated with the name key.
- command: This key represents the command to run via the shell. The | symbol specifies that what follows is a literal set of commands, one per line, exactly like you'd see in a shell/bash script.
You can read the CircleCI configuration reference document for more information.
Our pipeline is very simple and consists of 3 steps:
- Checking out the repository
- Installing the dependencies in a virtual environment
- Running the linter and tests while inside the virtual environment
We now have everything we need to start our pipeline. Log in to your CircleCI account and click on Add Projects. Find your CalculatorLibrary repo and click Set Up Project. Select Python as your language. Since we already have a
config.yml, we can skip the next steps and click Start building.
CircleCI will take you to the execution dashboard for your job. If you followed all the steps correctly, you should see your job succeed.
The final version of your CalculatorLibrary folder should look like this:
CalculatorLibrary/
|
├── .circleci
├── .git
├── .gitignore
├── README.md
├── calculator.py
├── requirements.txt
└── test_calculator.py
Congratulations! You have created your first continuous integration pipeline. Now, every time you push to the master branch, a job will be triggered. You can see a list of your current and past jobs by clicking on Jobs in the CircleCI sidebar.
Make Changes
Time to add multiplication to our calculator library.
This time, we will first add a unit test without writing the function. Without the code, the test will fail, which will also fail the CircleCI job. Add the following code to the end of your
test_calculator.py:
    def test_multiplication(self):
        assert 100 == calculator.multiply(10, 10)
Push the code to the master branch and see the job fail in CircleCI. This shows that continuous integration works and watches your back if you make a mistake.
Now add the code to
calculator.py that will make the test pass:
def multiply(first_term, second_term):
    return first_term * second_term
Make sure there are two blank lines between the multiplication function and the previous one, or else your code will fail the linter check.
The job should be successful this time. This workflow of writing a failing test first and then adding the code to pass the test is called test driven development (TDD). It’s a great way to work because it makes you think about your code structure in advance.
Now try it on your own. Add a test for the division function, see it fail, and write the function to make the test pass.
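If you get stuck, here is one possible solution sketch; the names simply follow the pattern of the existing functions, so try writing your own version before peeking:

# In test_calculator.py
    def test_division(self):
        assert 5 == calculator.divide(10, 2)

# In calculator.py
def divide(first_term, second_term):
    return first_term / second_term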
Notifications
When working on big applications that have a lot of moving parts, it can take a while for the continuous integration job to run. Most teams set up a notification procedure to let them know if one of their jobs fail. They can continue working while waiting for the job to run.
The most popular options are:
- Sending an email for each failed build
- Sending failure notifications to a Slack channel
- Displaying failures on a dashboard visible to everyone
By default, CircleCI should send you an email when a job fails.
Next Steps
You have understood the basics of continuous integration and practiced setting up a pipeline for a simple Python program. This is a big step forward in your journey as a developer. You might be asking yourself, “What now?”
To keep things simple, this tutorial skimmed over some big topics. You can grow your skill set immensely by spending some time going more in-depth into each subject. Here are some topics you can look into further.
Git Workflows
There is much more to Git than what you used here. Each developer team has a workflow tailored to their specific needs. Most of them include branching strategies and something called peer review. They make changes on branches separate from the
master branch. When you want to merge those changes with
master, other developers must first look at your changes and approve them before you’re allowed to merge.
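In day-to-day terms, a minimal version of that workflow might look something like the commands below; the branch name is just an example:

$ # Start a new branch for your change
$ git checkout -b feature/multiplication
$ # ...edit the code, then commit your work
$ git add calculator.py test_calculator.py
$ git commit -m "Add multiplication"
$ # Push the branch and open a pull request for review
$ git push -u origin feature/multiplication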
Note: If you want to learn more about different workflows teams use, have a look at the tutorials on GitHub and BitBucket.
If you want to sharpen your Git skills, we have an article called Advanced Git Tips for Python Developers.
Dependency Management and Virtual Environments
Apart from
virtualenv, there are other popular package and environment managers. Some of them deal with just virtual environments, while some handle both package installation and environment management. One of them is Conda:
“Conda is an open source package management system and environment management system that runs on Windows, macOS, and Linux. Conda quickly installs, runs and updates packages and their dependencies. Conda easily creates, saves, loads and switches between environments on your local computer. It was designed for Python programs, but it can package and distribute software for any language.” (Source)
Another option is Pipenv, a younger contender that is rising in popularity among application developers. Pipenv brings together
pip and
virtualenv into a single tool and uses a
Pipfile instead of
requirements.txt. Pipfiles offer deterministic environments and more security. This introduction doesn’t do it justice, so check out Pipenv: A Guide to the New Python Packaging Tool.
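To give you a feel for it, a typical Pipenv session looks roughly like this; treat it as a sketch and see the linked guide for the details:

$ # Create a Pipfile and a virtual environment, and install dependencies
$ pipenv install flake8 pytest pytest-cov
$ # Run a command inside the environment
$ pipenv run pytest -v
$ # Or open a shell with the environment activated
$ pipenv shell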
Testing
Simple unit tests with
pytest are only the tip of the iceberg. There’s a whole world out there to explore! Software can be tested on many levels, including integration testing, acceptance testing, regression testing, and so forth. To take your knowledge of testing Python code to the next level, head over to Getting Started With Testing in Python.
Packaging
In this tutorial, you started to build a library of functions for other developers to use in their project. You need to package that library into a format that is easy to distribute and install using, for example
pip.
Creating an installable package requires a different layout and some additional files like
__init__.py and
setup.py. Read Python Application Layouts: A Reference for more information on structuring your code.
To learn how to turn your repository into an installable Python package, read Packaging Python Projects by the Python Packaging Authority.
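As a rough idea, a minimal setup.py for this calculator library could look like the sketch below; the package name and version are placeholders, and newer projects may declare the same metadata in pyproject.toml instead:

# setup.py (illustrative sketch)
from setuptools import setup

setup(
    name="calculator-library",   # placeholder name
    version="0.1.0",
    py_modules=["calculator"],   # the single module in this repository
    python_requires=">=3.6",
)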
Continuous Integration
You covered all the basics of CI in this tutorial, using a simple example of Python code. It’s common for the final step of a CI pipeline to create a deployable artifact. An artifact represents a finished, packaged unit of work that is ready to be deployed to users or included in complex products.
For example, to turn your calculator library into a deployable artifact, you would organize it into an installable package. Finally, you would add a step in CircleCI to package the library and store that artifact where other processes can pick it up.
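In CircleCI, that extra step might look roughly like the snippet below, appended to the existing job; the paths and step names are assumptions for illustration:

      # Step 4: package the library and store the artifact
      - run:
          name: build package
          command: |
            . venv/bin/activate
            python setup.py sdist
      - store_artifacts:
          path: dist
          destination: packages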
For more complex applications, you can create a workflow to schedule and connect multiple CI jobs into a single execution. Feel free to explore the CircleCI documentation.
Continuous Deployment
You can think of continuous deployment as an extension of CI. Once your code is tested and built into a deployable artifact, it is deployed to production, meaning the live application is updated with your changes. One of the goals is to minimize lead time, the time elapsed between writing a new line of code and putting it in front of users.
Note: To add a bit of confusion to the mix, the acronym CD is not unique. It can also mean Continuous Delivery, which is almost the same as continuous deployment but has a manual verification step between integration and deployment. You can integrate your code at any time but have to push a button to release it to the live application.
Most companies use CI/CD in tandem, so it’s worth your time to learn more about Continuous Delivery/Deployment.
Overview of Continuous Integration Services
You have used CircleCI, one of the most popular continuous integration services. However, this is a big market with a lot of strong contenders. CI products fall into two basic categories: remote and self-hosted services.
Jenkins is the most popular self-hosted solution. It is open-source and flexible, and the community has developed a lot of extensions.
In terms of remote services, there are many popular options like TravisCI, CodeShip, and Semaphore. Big enterprises often have their custom solutions, and they sell them as a service, such as AWS CodePipeline, Microsoft Team Foundation Server, and Oracle’s Hudson.
Which option you choose depends on the platform and features you and your team need. For a more detailed breakdown, have a look at Best CI Software by G2Crowd.
Conclusion
With the knowledge from this tutorial under your belt, you can now answer the following questions:
- What is continuous integration?
- Why is continuous integration important?
- What are the core practices of continuous integration?
- How can I set up continuous integration for my Python project?
You have acquired a programming superpower! Understanding the philosophy and practice of continuous integration will make you a valuable member of any team. Awesome work!
In Java 8, HashMap replaces the linked list with a binary tree when the number of elements in a bucket reaches a certain threshold.
Q: Is the mentioned improvement nothing more than a safeguard for programmers who don't know how to write an appropriate hashCode() method? Or is it useful in other situations? What are the situations where it's not possible to write a good hashCode() method? In other words, are there situations where even a very good hashCode() method doesn't help against collisions and the tree is viable?
The added complexity of tree bins is worthwhile in providing worst-case O(log n) operations when keys either have distinct hashes or are orderable. Thus, performance degrades gracefully under accidental or malicious usages in which hashCode() methods return values that are poorly distributed, as well as those in which many keys share a hashCode, so long as they are also Comparable.
This improvement prevents denial-of-service attacks where an adversary deliberately picks values that will fall into the same bucket. It's not possible to write a hashCode that is resilient to that and also stable between JVM instances.
If you add enough entries to a HashMap, statistically you’re going to get bucket collisions. Note that a bucket collision is not the same thing as a hashCode collision; while a hashCode collision always results in a bucket collision, any 2 hashCodes have a 1/bucket count chance of hitting the same bucket.
If, by bad luck (many different keys happen to end up in the same bucket) or bad coding (a poorly chosen algorithm generates the same hashCode for different keys), the number of keys in a bucket grows large, the time complexity of a retrieval used to be O(n) but is now O(log n).
Consider that it is not necessarily your hashCode algorithm that is “badly coded”. It could be you are using objects from a 3rd party library for your keys, so this change protects you against other people’s bad code too.
What are the situations where it's not possible to write good hashcode() method?
Well, apart from the use-cases where someone might trying to DOS you by engineering hash collisions ...
There is the case where a full value-based hashcode calculation is too expensive, so you implement a "cheap and cheerful" version. But then this version has some edge cases where you get collisions.
An example would be where you used a wrapper for a big array or a tree of hashmaps as a key. (Clearly there are problems with this approach, but some people will do it anyway.)
Your hashCode might not be interpreted the way you think it will in a
HashMap.
For example when you create a
HashMap like:
Map<String, String> map = new HashMap<>();
There are at least 3 things you should be aware of:
Only the last 4 bits of the hash are taken into consideration to decide which bucket an entry will go to, because the default table size is 16 buckets.
A
HashMap will re-hash your hashCode via:
static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
hashCode is an int and therefore limited, so hash collisions are very common. IIRC, even with hashCodes spread over the whole Integer range, collisions will start to appear after only some tens of thousands of entries (44,000? or something similar, can't remember).
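To make the bucket selection concrete, here is a small illustrative Java snippet; it mimics the spreading and indexing steps rather than calling into the real HashMap internals:

public class BucketIndexDemo {

    // Same spreading step HashMap applies before indexing
    static int spread(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int capacity = 16;               // default table size, a power of two
        String key = "example";
        int h = spread(key);
        int bucket = (capacity - 1) & h; // only the low 4 bits matter here
        System.out.println("bucket index: " + bucket);
    }
}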
How to send files (attachments) with Sinatra 0.9.4.
get '/some/:file' do |file|
  file = File.join('/some/path', file)
  send_file(file, :disposition => 'attachment', :filename => File.basename(file))
end
I found the documentation was misleading in some cases and plain wrong in others. The key lie is the method "signature":
def send_file(path, opts={})
This is confusing because Sinatra only uses options if you specify the disposition to either "inline" or "attachment". Otherwise, it simply throws the File (which doesn't know/care about your options). It is not enough to simply call
send_file('/my/file')
Although the documentation says File.basename will be used as the default name, that simply isn't true. Using Firefox, I got "download" as the filename.
I am having trouble getting an Adafruit 1115 display to work correctly. That is a nifty monochrome 2x16 display that uses an I2C approach to reduce pin count, and it almost works.
The program looks like
import board
import busio
import adafruit_character_lcd.character_lcd_rgb_i2c as character_lcd
lcd_columns = 16
lcd_rows = 2
i2c = busio.I2C(board.SCL, board.SDA)
lcd = character_lcd.Character_LCD_RGB_I2C(i2c, lcd_columns, lcd_rows)
lcd.message = "Hello"
I downloaded the modules from Adafruit (or GitHub) as described in "Python Usage" for the 1115, and everything seemed to download properly. But when I try to run this software from Raspbian's Terminal, it complains about not being able to find the imported modules. The display DOES work when using the same lines in the Thonny IDE, but I need the program to run on its own.
I dare you to do two things in a React codebase of your choosing:
npm install nprogres/nprogress
2. Add this import anywhere in the codebase
import NProgress from 'nprogress'
You’ll find that 5 seconds after loading the page, the app is replaced with
I built this rudimentary piece of malware for a game day. Every month or so, one of our dev team members at Okra Solar runs an exercise to help the rest of the team prepare for what happens if something goes very wrong. In our last game day, our CTO deployed…
I was working on a task today to make Grafana available on two different URLs: grafana.okrasolar.com and grafana.harvest.okrasolar.com
On it’s own, that’s not too tricky:
I just needed to add a alias record:
new ARecord(this, 'Alias', {
  zone: this.props.hostedZone,
  target: route53.RecordTarget.fromAlias(new LoadBalancerTarget(grafanaService.loadBalancer)),
  recordName: 'grafana',
})
The hard part was setting up SSL certs for both domains.
The ApplicationLoadBalancedFargateService only allows you to supply one certificate.
To get it working with two certificates, I had to do:
This does the trick but it leaves me with one problem: I can’t do HTTP to HTTPS redirection because the ApplicationLoadBalancedFargateService already creates a listener on port 80 and I can’t find a way to access it.
Lambda is getting more and more powerful: so much so that you can now run many ETL and ML workloads on it. In this article I’ll show you how to deploy xgboost with lambda so that you can train models and run inference serverlessly.
Let’s begin with why this is a challenge: lambda deployment package size limits. The XGBoost binaries are ~400MB. Lambda will only let you deploy a package up to 250MB. Same deal if you try to use lambda layers. At the time of writing, lambda layers still have a total size limit of 250MB. …
Inspired by I ruin developers’ lives with my code reviews and I’m sorry and Why don’t you just.. I built a little tampermonkey script to give real-time feedback on code review comments. YUSoMean works with GitHub and BitBucket — it analyses the comments you write and helps you craft code reviews that are encouraging rather than demotivating.
The backend team and I at GOFAR recently did a release to staging which we confidently expected would be a crowd pleaser. After much refactoring, we were able to get the app upgraded to version 3 of Loopback, the NodeJS framework we’re using to run the app. This was kind of a big deal because the app had been running on Loopback 2, which is now officially unsupported and therefore a scary security risk.
The deploy seemed to go smoothly..until my boss started complaining about the mobile apps now running really slowly. Eek. I dug into the logs and couldn’t…
I sat the AWS devops professional (beta) exam on Nov 28th 2018 and after a nail-biting 90 day wait during which I was convinced I had failed, I got the news that I passed with a score of 835/1000.
I found the exam really, really hard. The associate level exams were super easy — I blasted through most of them in ~30 mins (read my guides for CSAA, CDA, CSOA) because the questions are black and white with three obviously incorrect answers and one clear correct answer. The Professional exam stretched me — it only had 30 more questions than…
I’ve been really enjoying using Apollo to handle data fetching and updates but have struggled a bit with writing tests. The official Apollo way seems to rely on observing changes in the DOM. In my case, the component that does the mutation does not have any DOM changes: it is a submit button. When the mutation succeeds, the root component renders a different screen based on the result of a query.
The data flow is essentially:
Is it ethical to work for a tobacco company? Most people would probably say no but if you’ve ever contributed code to React, Webpack, jQuery, Bootstrap or Modernizr, you have done exactly that.
Philip Morris International’s website uses jQuery and Bootstrap.
British American Tobacco’s site uses jQuery and Modernizr.
Altadis uses React + Webpack.
I personally feel pretty uncomfortable about the idea that any improvements I contribute to open source projects might go towards giving people lung cancer. For that reason, I was thrilled to come across the Do No Harm License (NoHarm) today.
The idea of NoHarm is to…
The NABERS (National Australian Built Environment Rating System) program began in 1998 as an initiative of the NSW government. Since then, the program has spread across Australia and has recently been adopted in New Zealand and India. The results have been positive: 77% of commercial real estate in Australia has been assessed and cost savings of more than $100 million have been achieved. From July 1st 2017, the program has broadened its reach from office spaces above 2000 sq. m to smaller office buildings. Under the Commercial Building Disclosure (CBD) act, any office premises above 1000 sq. …
From December 2017, the Australian Energy Market Commission (AEMC)’s Power of Choice (PoC) ruling will apply to all Australian households and businesses. The policy is designed to give electricity consumers greater insight into their consumption patterns and allow for “demand side participation” (essentially getting financial incentives to use electricity at times where it’s less stressful for the grid). In practical terms it means that every new meter installed (or any old meter that stops working and needs to be replaced) will be a smart meter, transmitting consumption data to the retailer every 30 minutes. If you’re in Victoria, this won’t…
Creating a Simple Java Message Service (JMS) Producer with NetBeans
and GlassFish
Overview
Purpose
This tutorial demonstrates how to use the JMS API to create a
simple message producer using GlassFish 3.1.2 and NetBeans 7.
Time to Complete
Approximately 45 minutes.
Introduction
Messaging is a method of communicating between software components or applications. Messaging allows loosely-coupled communication between distributed applications. Message clients can send and receive messages by connecting to a messaging agent that facilitates message receipt and delivery. Clients need not know anything about other clients that will consume or produce messages. A message client only needs to know the format of the message to send and the destination. Thus messaging differs from other tightly coupled technologies, such as Remote Method Invocation (RMI), which require an application to know a remote application's methods.
The Java Message Service API was designed by Sun Microsystems
and several other companies to address the need to connect
intra-company applications through enterprise messaging
products, sometimes referred to as Message Oriented Middleware
(MOM). JMS provides a way for Java applications to access
messaging systems. JMS is a set of interfaces and associated
semantics that defines how a JMS client accesses the facilities of
a messaging implementation. In this regard JMS is very much like
JDBC.
Messaging systems are peer-to-peer facilities, allowing clients
to send and receive messages from any other client. Some
messaging systems can also broadcast messages to many
destinations, and clients subscribe to a specific channel or
topic to receive the messages that are broadcast. JMS is
implemented to support both models, depending upon the
implementation of the JMS provider. The figure below illustrates
the participating components of a Java Message Service
implementation.
The JMS API Messaging System Participants
A JMS technology provider (JMS provider) is a messaging system that provides an implementation of the JMS API. For an application server to support JMS technology, you must place the administered objects (connection factories, queue destinations, and topic destinations) in the JNDI technology namespace of the application server. Typically, you would use the administrative tool supplied by the application server to perform this task. However, in this tutorial, you will use the capabilities built into NetBeans to define and create the administered objects.
Specifically, you will define a Queue Destination and
Connection Factory in NetBeans. After deploying the application
once, you can use a feature in NetBeans to generate the code
that uses the Connection Factory you specified to generate a
Connection object. With the Connection object, the generated
code creates a Session object, which is used to create a
MessageProducer object to send the string entered on the JSF page to
the queue as a Message object.
Software Requirements
The following software is required to complete this tutorial on the
Windows platform. You must install the software
in the given order.
- Download and install Java JDK 7 from this link.
- Download and install NetBeans IDE 7.1.2, Java EE version.
- Start the NetBeans IDE.
- Download and unzip the MDBExample.zip file that contains a NetBeans project you need to complete this tutorial.
Note: It is recommended that the location where you unzip the NetBeans projects does not contain spaces or non-alphanumeric characters.
Create a NetBeans Web Application Project
NetBeans provides a number of different project options. In this tutorial, you will create a Web Application project and use the JSF framework.
Create a New Web Application Project.
Select File -> New Project.
From the New Project dialog, select Java Web as the Category and Web Application as the Project. Click Next.
Enter JSFProducer as the Project Name.
Your project location can be anywhere you want.
Click Next.
Select Enable Contexts and Dependency Injection. Click Next.
Select JavaServer Faces as your framework. Click Finish.
NetBeans has created a simple JSF-based Web Application for you, including a simple index.xhtml JSF Facelet.
Create a JMS Producer Managed Bean
In this topic, you will create a managed bean for the JSF Facelet.
Create a new JSF Managed Bean.
Expand the Project you created. Right-click on Source packages and select New -> Other.
Choose JavaServer Faces from Categories and JSF
Managed Bean from File Types. Click Next.
Enter MessageProducerBean as the Class Name.
Enter obe as the Package name.
Select request as the Bean scope.
Click Finish.
Implement the JSF managed bean with a String message field
Add a String message field to the managed bean.
private String message;
Use the NetBeans Insert Code feature to add a getter and setter for the field. Click in the MessageProducerBean file just above the last closing brace and press the Alt-Insert key, and select Getter and Setter from the Generate list.
Choose the message field. Click Generate.
Add an empty send() method with a void
return type (you will fill this method in later) below
the getter and setter.
Save the file.
Implement the JSF page
Add components to the JSF page to write to the message field in the managed bean.
Expand the Web Pages folder and open the index.xhtml file. Add the following markup to the page replacing the Hello from Facelets string:
JMS Message Producer
<h:form>
<h:outputLabel
<h:inputText
<h:commandButton
<h:messages
</h:form>
Change the title of the JSF page to JMS Message Producer. Save the file.
Add a message queue and connection factory to your
project.
Add a JMS queue Admin Object Resource.
Right-click on the project and select New ->
Other.
Choose GlassFish from Categories and JMS Resource from File Types.
Click Next.
Accept the default JNDI name, jms/myQueue and
the default Admin Object Resource. Click Next.
On the JMS Properties screen, enter myQueue
in the value field and press the Enter key.
Click Finish.
Add a JMS Queue Connector Resource
Right-click on the project and select New ->
Other.
Choose GlassFish from Categories and JMS Resource from File Types.
Click Next.
Enter jms/myQueueFactory as the JNDI Name.
Select javax.jms.QueueConnectionFactory as the Connector Resource.
Click Finish.
Start Glassfish Application Server and Deploy the
application.
Open the Services tab (Windows -> Services) and
expand Servers.
Right-click on GlassFish Server 3.1.2 and select Start.
Note: If your instance of GlassFish already has a green triangle beside the fish icon, the server is already started and the Start command will be greyed out.
In the Output Window, you should see the GlassFish Server
3.1.2 console indicating GlassFish started.
Note: Java DB Database also starts automatically.
Select the Projects tab to open it.
Right-click the JSFProducer project and select Deploy.
In the Output Window, you will see a message that the project JSFProducer built successfully.
Open the Services tab. Right-click on the
Applications folder and select Refresh to see that
the the JSFProducer application is deployed.
Expand the Resources folder, and then expand the Connectors folder.
Right-click on Admin Object Resources and select Refresh.
Do the same with the Connector Resources and Connector Connection Pools folders.
You will see that GlassFish has deployed your application, JSFProducer, created a Admin Object Resource, jms/myQueue, and a Connector Resource object, jms/myQueueFactory.
Generate the JMS code in the ManagedBean.
Open the MessageProducerBean class in the Editor
and click in the bottom of the file, just before the closing
curly brace.
Press Alt-Insert to open the NetBeans Code Generator feature and select Send JMS Message...
By default, NetBeans will choose your Admin Resource Object
(jms/myQueue) as the Server Destination and jms/myQueueFactory
as the Connection factory. Click OK.
Scroll to the top of the file to see that NetBeans has
added the proper resource declarations to your code for the
Queue and ConnectionFactory instances.
Scroll down again and you will see that NetBeans has also
added two private methods.
The createJMSMessageForjmsMyQueue method creates and returns an instance of a TextMessage objects.
The sendJMSMessageToMyQueue creates a Connection using the ConnectionFactory, creates a Session from the connection, and a MessageProducer from the session.
The MessageProducer sends the string message (passed into the method as messageData) to the JMS queue destination.
Note: the line numbers in your editor may be different.
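The generated code is roughly of the following shape (a simplified sketch rather than the exact NetBeans output; the myQueue and myQueueFactory fields are assumed to be the injected resources declared at the top of the class):

private Message createJMSMessageForjmsMyQueue(Session session, Object messageData)
        throws JMSException {
    // Wrap the string payload in a JMS TextMessage
    TextMessage textMessage = session.createTextMessage();
    textMessage.setText(messageData.toString());
    return textMessage;
}

private void sendJMSMessageToMyQueue(Object messageData) throws JMSException {
    Connection connection = null;
    Session session = null;
    try {
        connection = myQueueFactory.createConnection();
        session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(myQueue);
        producer.send(createJMSMessageForjmsMyQueue(session, messageData));
    } finally {
        if (session != null) {
            session.close();
        }
        if (connection != null) {
            connection.close();
        }
    }
}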
Modify your send method to call the generated code.
Add the following code to the send() method.
FacesContext facesContext =
FacesContext.getCurrentInstance();
try {
sendJMSMessageToMyQueue(message);
FacesMessage facesMessage = new FacesMessage("Message sent: " + message);
facesMessage.setSeverity(FacesMessage.SEVERITY_INFO);
facesContext.addMessage(null, facesMessage);
} catch (JMSException jmse) {
FacesMessage facesMessage = new FacesMessage("Message NOT sent: " + message);
facesMessage.setSeverity(FacesMessage.SEVERITY_ERROR);
facesContext.addMessage(null, facesMessage);
}
The send() method will attempt to send the String message
to the JMS queue destination you added to the project.
If the message is sent properly, a FacesMessage is added to the FacesContext instance that represents the current view page. The FacesMessage will be displayed to the browser client when the page is rendered.
Fix the missing imports (Ctrl-Shift-I) and save the file.
In the Output Window, in the GlassFish Server 3.1.2 tab,
you will see that the application deployed successfully,
however, the following warnings appear.
Note: The contents of the message between the square braces may be different in your environment.
These warnings are a result of NetBeans attempting to create portable JNDI lookup references for the queue and connection factory you created. Because this example uses a non-portable mappedName lookup for the JMS resources, you can ignore the warnings, or remove the lines shown below from the glassfish-web.xml file.
After removing the lines, save the file and GlassFish automatically redeploys the JSFProducer application, without any warnings.
Test the application.
Open a browser and enter the following URL:
Try typing in some text and click the Send Message
button.
You should see that your messages were successfully sent. For example:
Looking at Message Queue statistics
Although you cannot see the content of the messages on the server, you can determine how many messages have been sent to a destination (and are not yet picked up.)
Using the command line
Open a command window. (Start->Run->cmd).
Type the following command:
"C:\Program Files\glassfish-3.1.2\mq\bin\imqcmd" list dst
Enter admin as the username and admin as the password.
There is one message in myQueue.
Using the admin console
In a browser, start the GlassFish Admin Console
by typing the following URL:
From the left panel, click on server
(Admin Server).
Select the JMS Physical Destinations tab.
Click on View for myQueue.
Scroll down until you see the Number of Messages statistic.
This shows that there is one message in myQueue.
Reading the messages on the queue
A simple Message-Driven Bean (MDB) NetBeans project has been included in this tutorial to allow you to "see" the messages in the queue. In another OBE, you will explore more advanced application of MDBs, including how to take messages from the queue and store them for another application.
Unzip the MDBExample.zip project into directory and open the project in NetBeans.
Open the MDBExample.java file and review the code.
This code uses a Message Driven Bean (MDB) to register a
listener in the application server for messages on the Queue
with the JNDI name, "jms/myQueue". A Message Driven
Bean is deployed to the application server and instantiated
by the container. Once the bean is deployed, it will
continue to "listen" for messages on the destination queue
specified.
When a message is sent to the queue, the container invokes the onMessage method, which casts the Message object to a TextMessage object (the type you put on the queue). With the getText() method, you print the message to the console.
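Stripped down, the listener is roughly the following (a simplified sketch of the idea, not the exact contents of the provided project):

import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

@MessageDriven(mappedName = "jms/myQueue")
public class MDBExample implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            // The producer put a TextMessage on the queue, so the cast is safe here
            TextMessage textMessage = (TextMessage) message;
            System.out.println("Received message: " + textMessage.getText());
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
}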
Right-click on the MDBExample project and select Deploy to deploy the MDB to GlassFish.
In the Output window, click on the GlassFish Server 3.1.2 tab (the system console) and you should see the message(s) you typed in the JSF window appear.
Success!
Summary
In this tutorial, you created a JMS Producer application. The
application uses a JSF page to create a string message and a
managed bean to put the string onto a JMS Message Queue.
Resources
- The Java EE 6 Tutorial: Java Message Service Concepts
- Java Message Service Documentation
- Developing Java EE 6 Applications for the Java EE 6 Platform
- To learn more about using the Java Message Service in Java EE applications, refer to additional OBEs in the Oracle Learning Library.
Credits
- Lead Curriculum Developer: Tom McGinn
- Other Contributors: Matt Heimer.
how to get data position and print it out?
On 24/03/2015 at 22:21, xxxxxxxx wrote:
Hi, my name is yefta. I have a problem i cant solved, and i will be grateful if anyone here can give me help.
I have a roller coaster, and i want to get data position of the moving train and i want to record them continously. the object is the train.
can u help me?
I need to analyze that position of train
thankyou so much
On 25/03/2015 at 07:59, xxxxxxxx wrote:
Hello,
I don't know what you want to achieve in the end or what workflow you want to implement. From what you tell I think you could create a Python Tag and add this tag to the object you want to observe. In that python tag's code you could write something like this:
import c4d

def main():
    print op.GetObject().GetMg().off
In this case "op" is the python tag itself, GetObject() returns the host object and GetMg() returns the object's world space matrix. This Matrix includes a translation part "off". When you now run the scene you will find the current position printed to the console window.
Best wishes,
Sebastian
On 26/03/2015 at 01:10, xxxxxxxx wrote:
thankyou so much s_bach.. thats what i need.
hmm but can u display the vector in degree?
in my mind it will shows like this
(0,0,0)
(10,30,45)
(90,180,270)
(0,180,215)
etc
and actually i want to print it to file. maybe to txt or csv. can you help me out?
On 26/03/2015 at 01:57, xxxxxxxx wrote:
Hello,
Showing the position in degree doesn't make sense. Do you mean the rotation? You can get the rotation vector from the matrix using MatrixToHPB(). The result is a rotation in radians, so you would have to convert the values to degree yourself with Deg().
The Cinema 4D Python API does not provide any specific functions to write files. But of course you can use the standard Python libraries to access files.
best wishes,
Sebastian
On 27/03/2015 at 00:14, xxxxxxxx wrote:
i have tried use MatrixToHPB but it doesnt work. and then i tried VectorToHPB and it works. but the third coloumn always "0". How come? this is my code
import c4d
#Welcome to the world of Python

def main():
    pass #put in your code here
    a = op.GetObject().GetMg().off
    b = c4d.utils.MatrixToHPB(a)
    print a
and how to convert values to deg?
On 27/03/2015 at 00:22, xxxxxxxx wrote:
def main():
    pass #put in your code here
    a = op.GetObject().GetMg().off
    b = c4d.utils.VectorToHPB(a)
    print b
On 27/03/2015 at 00:28, xxxxxxxx wrote:
Vector(3.144, 0.239, 0)
Vector(3.144, 0.256, 0)
Vector(3.144, 0.273, 0)
Vector(3.145, 0.29, 0)
Vector(3.145, 0.306, 0)
Vector(3.145, 0.322, 0)
Vector(3.145, 0.336, 0)
Vector(3.146, 0.35, 0)
Vector(3.146, 0.361, 0)
Vector(3.147, 0.371, 0)
Vector(3.147, 0.379, 0)
Vector(3.147, 0.385, 0)
Vector(3.148, 0.39, 0)
Vector(3.148, 0.394, 0)
Vector(3.148, 0.396, 0)
Vector(3.148, 0.397, 0)
Vector(3.148, 0.397, 0)
Vector(3.148, 0.395, 0)
Vector(3.148, 0.393, 0)
Vector(3.148, 0.389, 0)
Vector(3.148, 0.385, 0)
On 27/03/2015 at 01:52, xxxxxxxx wrote:
Hello,
as the name suggests MatrixToHPB() must be used with a Matrix, not with the translation vector as in your code. And as said before, turning the position into a rotation using VectorToHPB() does not seem to make sense. As you can read in the documentation, VectorToHPB() will always return a rotation with bank set to zero.
best wishes,
Sebastian
On 27/03/2015 at 02:00, xxxxxxxx wrote:
Hello,
the first case:
you'll need to feed it the matrix, not the offset vector from this matrix
a = op.GetObject().GetMg()
b = c4d.utils.MatrixToHPB(a)
the second case:
euler angles always give you zero banking back; it calculates the direction with a fixed rotation around this direction
a = op.GetObject().GetMg().off
b = c4d.utils.VectorToHPB(a)
for a complex track you might think about flipping conditions or quaternions.
rad to degree:
c4d.utils.Deg( r )
or
deg = HPB * 180 / math.pi
Hope this helps?
Best wishes
Martin
On 02/04/2015 at 07:17, xxxxxxxx wrote:
Hello Yefta
was your question answered?
best wishes,
Sebastian
On 05/04/2015 at 23:02, xxxxxxxx wrote:
hello Martin.
yes, it helps. but i still cant transform to degree. it shows "a float is required".
can u help me out?
On 05/04/2015 at 23:14, xxxxxxxx wrote:
hello s_bach
i'm sorry, been off for a while. yes it's answered, but not all.
if i type c4d.utils.Deg(b)
it shows in console "TypeError : a float is required"
On 06/04/2015 at 04:21, xxxxxxxx wrote:
Hello Yefta,
sorry my answer was a little sloppy.
This should answer your question ...
import c4d
import math

rad = c4d.Vector(0,1,1)
deg = c4d.Vector(c4d.utils.Deg(rad.x), c4d.utils.Deg(rad.y), c4d.utils.Deg(rad.z))
print deg

deg2 = 180/math.pi*rad
print deg2
Best wishes
Martin
On 16/04/2015 at 00:35, xxxxxxxx wrote:
thanks martin. :D
There are several ways to enable printing for a custom data:
If the data is a Swing component that extends JComponent and is shown in a TopComponent, the key PRINT_PRINTABLE with the value Boolean.TRUE must be set on the component as a client property. See example:
public class MyComponent extends javax.swing.JComponent {
public MyComponent() {
...
putClientProperty("print.printable", Boolean.TRUE); // NOI18N
}
...
}
The key PRINT_NAME is used to specify the name of the component which will be printed in the header/footer:
putClientProperty("print.name", <name>); // NOI18N
If the key is not set at all, the display name of the top component is used by default. The content of the header/footer can be adjusted in the Print Options dialog.
If the size of the custom component for printing differs from visual dimension, specify this with the key PRINT_SIZE:
putClientProperty("print.size", new Dimension(printWidth, printHeight)); // NOI18N
If the custom data is presented by several components, all of them can be enabled for print preview. The key PRINT_ORDER is used for this purpose, all visible and printable components are ordered and shown in the Print Preview dialog from the left to right:
putClientProperty("print.order", <order>); // NOI18N
If the custom data is presented by another classes, a PrintProvider should be implemented and put in the lookup of the top component where the custom data lives. How to put the Print action on custom Swing tool bar:
public class MyComponent extends javax.swing.JComponent {
...
JToolBar toolbar = new JToolBar();
// print
toolbar.addSeparator();
toolbar.add(PrintManager.printAction(this));
...
}
How does Print action from the main menu decide what to print?
At first, the manager searches for PrintProvider in the lookup of the active top component. If a print provider is found, it is used by the print manager for print preview.
Otherwise, it tries to obtain printable components among the descendants of the active top component. All found printable components are passed into the Print Preview dialog. Note that print method is invoked by the manager for preview and printing the component.
If there are no printable components, printable data are retrieved from the selected nodes of the active top component. The Print manager gets EditorCookie from the DataObject of the Nodes. The StyledDocuments, returned by the editor cookies, contain printing information (text, font, color). This information is shown in the print preview. So, any textual documents (Java/C++/Php/... sources, html, xml, plain text, etc.) are printable by default.
See PrintManager javadoc for details.
Build a Simple Application with .Net RIA Services (Silverlight 3) – Part 1
This is the first post in a series of posts about building applications with Microsoft .Net RIA Services and Silverlight 3. In this post I will create a new application, create a simple data model and use the Domain Service and Domain Context to retrieve data and bind it to a DataGrid.
Before you start, make sure you have Silverlight 3 Beta and .NET RIA Services March 2009 Preview installed, and you have already installed and configured SQL Server. In this sample I am using the Bank Schema I’ve used in the past.
Create a Silverlight Navigation Application
Create a new Silverlight Navigation Application.
After you click OK, the New Silverlight Application Dialog is shown. Click OK again to create an ASP.Net project that links to the new Silverlight Application.
A new solution is created. Notice the new assemblies that the Silverlight project is referencing, and notice the new assemblies among them.
Build a Domain Service
Add a new Data Model to your server side project (BankApp.Web). This data model can be a LINQ to SQL model, an Entity Data Model, or you can use any other business object representation.
In this sample I am using a LINQ to SQL data model based on the Bank Schema.
Make sure to build the project so that Visual Studio will generate the data classes and data context before the next step.
Add a new Domain Service. Add a new Item to the server project, and select the Domain Service template in the Web category.
After you add this item, the New Domain Service Class Dialog is shown. Select the Data Context (BankDataContext in this sample), select the entities you want to expose and whether you want to allow editing and click OK.
This adds the BankDomainService.cs that contains the code that exposes the data to the client, and BankDomainService.metadata.cs that contains additional metadata, mostly for presentation and validation.
A few references were also added:
Build the solution. This executes a build target that generates the client-side code required to consume the Domain Service. If you click the Show All Files button for the client application, you'll notice the generated code.
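For reference, the generated domain service usually has the shape sketched below: one query method per exposed entity, returning an IQueryable. Treat this as an assumption about the March 2009 preview rather than the exact generated code, since the base class, attribute and namespace names varied between preview releases:
// Hypothetical sketch of BankDomainService.cs; the namespaces, the attribute
// and the base class name are assumptions for illustration only.
using System.Linq;
using System.Web.DomainServices;   // assumed preview namespace
using System.Web.Ria;              // assumed preview namespace

[EnableClientAccess]
public class BankDomainService : LinqToSqlDomainService<BankDataContext>
{
    // The client-side BankDomainContext gets a Customers collection and a
    // LoadCustomers() method generated from this query method.
    public IQueryable<Customer> GetCustomers()
    {
        return this.DataContext.Customers;
    }
}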
Display Domain Data in the Application
Add a DataGrid Control to the client application. To do that, add a reference to System.Windows.Controls.Data.dll that contains the DataGrid Control.
Then, open the Views\AboutPage.xaml and add an xmlns prefix for the CLR namespace of the DataGrid.
<navigation:Page x:Class="BankApp.HomePage"
…
xmlns:data="clr-namespace:System.Windows.Controls;
assembly=System.Windows.Controls.Data"
…
>
Add the Xaml markup for the DataGrid Control:
<Grid x:
<StackPanel>
…
<StackPanel Style="{StaticResource ContentTextPanelStyle}">
…
</StackPanel>
<data:DataGrid
</data:DataGrid>
</StackPanel>
</Grid>
Handle the Loaded Event of the Page. First, add a method to handle the Loaded event of the page.
<navigation:Page x:Class="BankApp.HomePage"
…
In the method that handles the event, use the client Domain Context to retrieve data from the service and bind to the DataGrid.
private void Page_Loaded(object sender, RoutedEventArgs e)
{
    BankDomainContext context = new BankDomainContext();
    this.dataGrid.ItemsSource = context.Customers;
    context.LoadCustomers();
}
Now, run the application and let it retrieve the data and present it in the DataGrid.
In this post I created a new application, created a simple data model and used the Domain Service and Domain Context to retrieve data and bind it to a DataGrid. In the next post I’ll explore more controls that ship with the .Net RIA Services.
Enjoy!
Great post. I look forward to part 3
How to do this when you have Oracle as database?
Anybody know how to handle Enums with RIA Services?
I try to execute your SQL code and I've got:
"Msg 208, Level 16, State 1, Line 3
Invalid object name 'Bank.dbo.Customers'."
Is it right? Where I can find the Bank database ?
Thanks
Interesting article, I'll be dropping in here more often now, regards bzerwiusz
Is it possible to have separate (multiple) data classes working with Silverlight RIA? Things work fine when it's all in the same project, but when I create a separate data project in the solution I can't retrieve data from it at all.
context.LoadCustomers() not defined?!?!?!
http://blogs.microsoft.co.il/bursteg/2009/04/04/build-a-simple-application-with-net-ria-services-silverlight-3-part-1/
On Fri, Apr 18, 2008 at 07:44:59AM +0200, Nadia.Derbey@bull.net wrote:
> . echo "LONG XX" > /proc/self/task/<my_tid>/next_id
>   next object to be created will have an id set to XX
> . echo "LONG<n> X0 ... X<n-1>" > /proc/self/task/<my_tid>/next_id
>   next object to be created will have its ids set to XX0, ... X<n-1>
> This is particularly useful for processes that may have several ids if
> they belong to nested namespaces.

Can we answer the following questions before merging this patch:

a) should the mainline kernel have a checkpoint/restart feature at all
b) if yes, should it be done from kernel- or userspace?

Until the agreement is "yes/from userspace", such patches don't make
sense in mainline.
https://lkml.org/lkml/2008/4/22/399
Created on 2017-05-07 19:01 by Daniel Moore, last changed 2017-05-17 14:06 by xiang.zhang. This issue is now closed.
I originally posted this as a question on StackOverflow thinking I was doing something wrong:
But I think I found the solution and answered my own question. I'm pretty sure you need to set:
self._poll = self._reader.poll
in the __setstate__ method in the SimpleQueue class of queues.py from the multiprocessing library. Otherwise, I'd love to know an alternative solution.
Thanks!
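For reference, a sketch of what that change would look like in Lib/multiprocessing/queues.py; the state tuple layout is assumed to mirror __getstate__ and may differ slightly between versions:
# Sketch of the proposed fix, for illustration only.
class SimpleQueue:
    def __setstate__(self, state):
        (self._reader, self._writer, self._rlock, self._wlock) = state
        # Restore the attribute normally set in __init__, so that empty()
        # keeps working after the queue is unpickled in a "spawn"-started
        # child process.
        self._poll = self._reader.poll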
Related commit is bdb1cf1ca56db25b33fb15dd91eef2cc32cd8973. A simple reproduce snippet:
import multiprocessing as mp

def foo(q):
    q.put('hello')
    assert not q.empty()

if __name__ == '__main__':
    mp.set_start_method('spawn')
    q = mp.SimpleQueue()
    p = mp.Process(target=foo, args=(q,))
    p.start()
    print(q.get())
    p.join()
LGTM. But can you convert the reproducer to a test?
Thanks for your reply Serhiy. Test added. :-)
New changeset 6f75bc003ab4d5294b0291289ae03f7a8d305f46 by Xiang Zhang in branch 'master':
bpo-30301: Fix AttributeError when using SimpleQueue.empty() (#1601)
New changeset 9081b36f330964faa4dee3af03228d2ca7c71835 by Xiang Zhang in branch '3.5':
bpo-30301: Fix AttributeError when using SimpleQueue.empty() (#1601) (#1627)
New changeset 43d4c0329e2348540a3a16ac61b3032f04eefd34 by Xiang Zhang in branch '3.6':
bpo-30301: Fix AttributeError when using SimpleQueue.empty() (#1601) (#1628)
https://bugs.python.org/issue30301
Hi there. I am facing a problem about my SDL program run on Android. It dies after pressing POWER button.
To test it, I simplified the code to below:
Code:
#include <android/log.h>
#include <unistd.h>
#include <SDL.h>

#define LOGD(...) __android_log_print(ANDROID_LOG_DEBUG, "mylog", __VA_ARGS__)

int main(int argc, char* argv[]) {
    int c = 0;
    while (1) {
        sleep(1);
        LOGD("%d\n", c++);
    }
    return 0;
}
The code simply counts and outputs to logcat. It works well when I press the HOME button to put it in the background, or resume it to the foreground; from logcat I can see the "onPause" and "onResume" messages as they should appear.

The problem is when I press the POWER button: the program outputs an "onDestroy" log to logcat immediately after the "nativePause" log, and after that my program cannot receive messages any more, no matter whether I press the HOME button or the POWER button, or resume the program. But it still counts and outputs numbers to logcat. It seems that only the Java thread dies while the native C thread lives on.

I am not sure whether this issue is caused by the Android version. My phone runs Android 4.1.1. In "project.properties" I set target=android-19 because my Eclipse ADT bundle (downloaded from developer.android.com in November) supports a minimum of 19. Any ideas? Thanks.
https://discourse.libsdl.org/t/ondestroy-event-received-after-press-power-button-of-android/20251
_________________________________________________
3- Three Key Steps to Sustainability
____________________________________________________________
By Richard Douthwaite
Sustainability needs to be achieved in two time-frames. One is short-
term and largely economic. We need to eat tonight. Employees have to
be paid at the end of the week. Interest has to be paid at the end of
the half-year. The second time-frame seems less urgent but is no less
important. The natural environment has to be preserved. Capital
equipment, buildings and infrastructure have to be kept up. Health
has to be maintained. Knowledge and skills have to be preserved and
passed on. And social structures such as families, friendships and
neighbourhoods have to stay strong.
Unfortunately, the achievement of immediate, short-term
sustainability is often at the expense of the longer-term type. One
reason for this is that the components of long-term sustainability
are far too forgiving for their own good and, eventually, for ours.
They allow themselves to be damaged quite a lot before they turn
around, bite, and force us to pay them some attention by impeding us
economically. We take advantage of their forbearance by ignoring them
whenever we can. Indeed, we have organised our personal lives, our
economies, our companies and our politics in a way which makes it
hard for us to do otherwise. Only when a crisis actually occurs do we
consider changing our habits but by that time, it may already be too
late, as many societies in history have found to their cost. In
Mesopotamia, in the Indus Valley and in the jungles of Mesoamerica,
civilisations collapsed because they had undermined their
environment. So did the Soviet and Roman empires. The people of
Easter Island turned to cannibalism to replace fish protein in their
diets after cutting down all the trees suitable for building fishing
canoes. In New Zealand, the Maori also became cannibals after they
had killed and eaten to extinction all twelve species of the
flightless moa birds. Abel Tasman, the first European navigator to
reach New Zealand, had several of his ship's crew eaten.
The difference between today's sustainability crisis and those in the
past is that this one affects the whole world rather than just
regions or small parts of it. The consequences of our continuing to
set short-term sustainability ahead of the longer-term type will
therefore be global and very grave - some commentators warn of a
drastic fall in human numbers. So, despite the fact that it is hard
to think of any historical precedent for people listening to warnings
of impending disaster and radically changing their way of life to
avert it, can we improve our chances of doing so this time by finding
ways in which the economic system could be reformed to make it easier
to maintain short-term sustainability and thus free enough resources
and enthusiasm to make progress towards the longer-term type?
At present, governments attempt to maintain economic sustainability
by following four short-term indicators: the rate of economic growth,
the balance of payments, the health of the public finances and the
rate of inflation. Let's explore why these indicators are currently
so crucial to see if there is any way of making them easier to
ignore.
National income growth is the world's most widely considered economic
sustainability indicator. It is the percentage by which the amount of
trading in the monetarised part of a national economy has risen ,
usually in the course of a year. Put another way, it is the
percentage increase in the total of all the money incomes generated.
It is an indicator for much more than that, however. Because it
measures the amount of additional incomes and resources the economy
has generated, growth is an excellent indicator for the extra profits
that arose in the economy and hence its attractiveness to investors.
If there is no growth in any given year, the investments made the
previous year have produced no return. Indeed, it's worse than that
because as borrowed money will have been used to part-finance the
investments and as interest will have to be paid on the borrowings,
the failure of an economy to grow means that profits fall in
comparison with the previous year.
Falling profits and unused capacity from last year's investments
obviously discourage companies from making further investments in the
current year. This has serious results. In normal years in OECD
economies, somewhere between 16% (Sweden) and 27% (Japan) of GNP is
invested, and a similar proportion of the labour force employed, in
projects which, it is hoped, will enable the economy to grow the
following year. If the expected growth fails to materialise and
further investments are cancelled, up to a quarter of the country's
workers can therefore find themselves without jobs. With only savings
or social welfare payments to live on, these newly-unemployed people
are forced to cut their spending sharply, which in turn costs other
workers their jobs. The economy enters a downward spiral, with one
set of job losses leading to further ones. The prospect of this
happening terrifies governments so much that they work very closely
with the business sector to ensure that, regardless of any social or
environmental damage, the economy continues to grow.
This is the main reason why short-term sustainability gets in the way
of the longer-term kind. Governments need to be able to be much less
concerned about whether growth occurs or not before they can feel
free to tackle long-term unsustainability. So how might the link
between growth and employment be broken? How can the rate of growth
be made a totally unimportant indicator, at least as far as
politicians and the general public are concerned? After all, as
infinite growth is impossible in a finite world, an economic system
has to have the ability to cease to grow without collapsing before it
has any claim to be considered sustainable.
Despite the above, the main reason why the economy implodes when
investment stops is not that people lose their jobs and consequently
have less to spend. Any market economy that functioned well would
automatically re-allocate a resource (in this case, people) that was
surplus in one area of activity to some other where it could be used.
This is not happening in the present economic system because, as the
rate of investment slows down, the money supply contracts, making it
impossible for trading in the rest of the economy to carry on at even
its former level - and still less expand to take on the newly
redundant workers. What is needed, therefore, is a constant stock of
money rather than one which, like a fair-weather friend, tends to
disappear when times get hard.
Money disappears because almost all the money we use only comes into
being when a company or an individual draws on a loan facility they
have been granted by their bank. Borrowers create money when they
spend their loans and it disappears when the loans are repaid.
Consequently, if people ever repay loans worth more than the total
value of the new ones being taken out, as can happen if the
proportion of national income being invested declines, the amount of
money in circulation will fall. This makes it harder to do business.
Redundancies occur. And that, in turn, destroys the optimism required
for further borrowing.
For example, if enough people begin to fear for their jobs ('Perhaps
I 'd better not take out that car loan just now') or think that house
prices are about to fall so that there's no need for them to rush to
take out a mortgage to secure a place on the property ladder, they
are collectively making self-fulfilling prophesies. Whatever enough
of them fear or expect will come about. They will defer borrowing,
less money will be put into circulation, the property market will
become less buoyant, and, yes, there was no need to rush to get into
it after all. It's the same with business. If enough firms think that
their future prospects are so doubtful that it would be better not to
risk borrowing to expand, they will find that they were right and
there really was no need for a loan to put in that extra equipment.
This mechanism works in the opposite direction too. If people are
optimistic and increase their borrowings, the extra money they put
about enables an increased amount of business to be done. Firms find
that not only are they running into capacity constraints but they are
more profitable when their books are done at the end of the year
because, with extra money in circulation, there was more of it to be
shared around. So they borrow to expand, and this in turn provides
work for other companies who, when they reach their production
limits, borrow to expand as well.
The modern economy therefore constantly moves between boom and bust
because of the way the money system works. There are very few periods
in which there is a happy medium, an in-between. In the booms, the
economy enters a virtuous circle with borrowing leading to more
profits and therefore more borrowing. The only danger is inflation.
In the busts, cuts lead to further cuts and a vicious spiral down.
It is very difficult for governments to control such booms to prevent
them becoming excessively inflationary. Increasing the interest rate
to deter borrowing (and thus limit money creation) is a very blunt
economic tool because it has to do a lot of damage to the economy to
be effective. After all, when an economy is on the way up, how much
does an increase of one or two per cent in the interest rate matter
to a firm which has customers with large orders battering down its
door? Very large rate increases indeed are necessary to stop the
system running away with itself, particularly as, if inflation is
already at, say, 5%, it is reducing the effective interest rate by
that amount. Yet if a central bank over-reacts and pushes up the
interest rate too far, it risks frightening too many potential
borrowers and plunging the economy into a precipitate decline.
Yet interest rates work even less effectively when the economy is on
the way down. If I've surplus capacity in my factory already, why
should I borrow to install more, even if the interest rate is very
low? Unless I am absolutely confident that the market is going to
turn around and there will be a boom again soon, I don't want to trap
myself in a more heavily indebted position with no sure way of
trading my way out. And, of course, as interest rates can't become
negative (though they did become zero in Japan for a time), there is
a limit to how encouraging to borrowers they can become. Indeed, if
things get really bad and the prices of goods and services start to
fall, as they did in Japan in 2001, this has the opposite effect to
inflation and pushes the real rate of interest up.
In such circumstances, firms have to be bribed to invest by being
offered large grants. Alternatively, governments can try to get
investment - and hence borrowing - started again by adopting
Keynesian methods and borrowing and spending themselves. The Japanese
tried this after their property and stockmarket boom burst in the
early 1990s but to little effect. By 2001, a great many under-used
roads, bridges, ports and airports had been built and the amount the
country owed in relation to its national income had become so large
that the scope for further state borrowing was restricted,
particularly when, in 2002, the credit rating agencies reduced their
grading of Japanese government debt to below that of Botswana.
Because it is so difficult to get an economy out of a depression,
governments will do almost anything to keep the economy growing,
regardless of the damage that this might do to the environment, or
through, perhaps, changes in the distribution of income, to society.
As the former British Prime Minister, Edward Heath, once said 'the
alternative to expansion is not an England of quiet market towns
linked only by trains puffing slowly and peacefully through green
meadows. The alternative is slums, dangerous roads, old factories,
cramped schools, and stunted lives.'
Step One: Ending the reign of debt-based money
The replacement of the current bank-debt-based, time-limited currency
by a permanent stock of money would mean that when the economy turned
down and people lost the confidence to borrow, the means to buy and
sell didn't just disappear too. Instead, the money stock would stay
at a constant level so that there was still the same amount of
potential purchasing power about. This would limit the downturn and
make recovery much easier.
What form might such a permanent money take? Gold and silver coins
provided a permanent money stock in the past, of course, but we've no
need to revert to them. With debt-based money, the sum total of all
the debits in people's accounts is equal to the sum of all the
credits. To achieve a permanent money stock, therefore, we simply
need to be able to create credits without the corresponding debts. In
their book, Creating New Money (2000), James Robertson and Joseph
Huber suggest a method for doing just this. They propose that money
whose creation is authorised by commercial banks should be gradually
replaced by money spent into circulation by the government. The
immediate advantage of this is that it would make it very easy to
control the size of the money stock and thus the level of activity in
the economy. The need to encourage investment in order to ensure that
growth takes place would disappear. If unemployment was becoming a
problem, a government could just spend a little more. If prices then
began to rise too much, it would either cut its spending or increase
tax, thus withdrawing money from the system's circular flow.
Government-created money has another big advantage besides ending the
growth compulsion and the economic stability it would bring. It is
that in a growing economy, either taxes could be cut or a higher
level of public services afforded. Between January 1998 and January
1999, the increase in the money supply authorised by the commercial
banks in Britain was £52,600 million. If the government had created
this sum instead of the banks' borrowers and spent it into use, taxes
could have been cut by over 15%.
Huber and Robertson propose that, once the government has started
spending money into circulation, the banks should be limited to
credit broking. In other words, they would simply take in money from
one set of customers and lend it out to others. This would end the
massive subsidy the banks get from charging for authorising money
creation. Robertson and Huber estimate the British banks got a
£21,000 million subsidy from this source in 1998/99 Their estimate
for the US subsidy is $37,000 million and DM30,000 million for the
German one. These sums obviously distort the way the national
economies concerned operate by underwriting the banks' costs and
enabling them to make abnormal profits.
Some of the serious drawbacks to the present system of creating money
are:

1. It generates a growth compulsion which makes it impossible for
countries to build stable, sustainable economies.
2. It is highly unstable and tends to swing from inflationary booms to
deflationary busts.
3. These swings are exceedingly difficult to control.
4. The banking system benefits from a massive subsidy because it does
not have to pay anything for half the money whose use it authorises.
This leads to a misallocation of resources.
5. Taxes are higher (or public services worse) than would be the case
in a growing economy in which the state spent the currency into
circulation.
6. Because a high volume of bank lending is required to keep the
present money system functioning, the banks shape the way the economy
develops. This is because they determine who can borrow and for what
purposes according to criteria which favour those with a strong cash
flow and/or substantial collateral. As a result, the present money
system favours the rich and multinational companies and discriminates
against smaller firms and poorer individuals.
Putting a permanent stock of money into circulation is therefore the
first key step towards building a sustainable, equitable economic
system. Crucially, it would make the growth rate unimportant and
allow governments to set themselves other targets beyond that of
doing everything possible to ensure that commercial investments keep
flowing.
As the three other short-term economic sustainability indicators
track factors that can interfere with growth, they are currently used
as guides to its achievement. This does not mean, however, that they
could be ignored if the achievement of growth became unimportant. As
the sustainability of an economy which had pulled off the trick of
ceasing to grow without a recession emerging could be threatened by
adverse movements in any of them, we ought now to look at each in
turn.
First, the balance of payments. If a country is tending to import a
greater value of goods and services than it is exporting, it can
handle the situation in two ways. One is to allow the exchange rate
between its national currency and those of its trading partners to
fall so that its imports decline (because they cost its citizens
more) while its exports rise (because they become more lucrative for
its exporters in terms of the home currency). This corrects the
incipient imbalance.
The alternative is for the country to attract foreign investment or
to borrow foreign currency to finance the purchase of the excess
imports. In this case, the exchange rate does not have to adjust,
which is good for those with savings who are worried they might be
eroded by inflation. For everyone else, the fact that the inflow of
foreign capital enables the exchange rate to remain higher than it
would otherwise be has undesirable effects. For example, the
country's exporters get less national currency when they convert the
foreign currency they earn. This cuts their profits and might mean
that some have to cease trading altogether. Companies supplying the
home market also suffer because imports stay cheap. This undermines
national self-reliance. In short, the increased availability of
foreign exchange damages companies and costs jobs.
So, apart from pandering to a sectional interest, it is difficult to
see why should any country ever take the second course. After all, if
it takes in overseas capital it has to get itself into a position at
some stage in the future in which its exports exceed its imports so
that it can at least pay the dividends or the interest on that money.
(There is no need for it ever to actually repay its foreign
obligations. All it has to do is to pay the service costs on them
Britain and the US have been importing more than they have been
exporting for the past two decades.) If it doesn't get into that
position itself, sooner or later, those supplying it with foreign
currency will decide that other, less-indebted countries are safer
havens for their money and decline to supply more. The long-delayed
adjustment of the exchange rate will come about - indeed, an serious
over-adjustment is likely.
While this delayed adjustment will at last make foreign goods more
costly and exports more profitable, it is likely to make the country
poorer than it would have been had it made any necessary exchange
rate alterations as it went along. One reason for this is that paying
the interest and dividends on the large foreign obligations the
country will have built up will require resources which could
otherwise have been used to benefit the people of the country
themselves. Indeed, having to service any foreign financial
obligation is a threat to a country's sustainability because of the
pressures it creates for the country to mis-use its resources - its
soils, its forests, its fisheries, its people - to generate the
necessary exports since they have to be sold in competition with
equally desperate countries in exactly the same trap.
While it is hard to think of circumstances in which a net inflow of
foreign capital could be beneficial, a net outflow of capital is just
as bad. True, after the exchange rate has fallen, the country's
exporters will, initially, get more national currency for the goods
they sell overseas. However, if they are to provide increased
employment, they will have to increase their sales and, in the short-
run, they will only be able to do this by cutting their prices enough
to give new purchasers an adequate incentive to abandon long-running
relationships with their existing suppliers. But as these rival
suppliers will not allow their business to be taken away without a fight,
they will reduce their prices too. It is impossible to say what the
final outcome will be, but if it takes a large fall in price to
greatly increase the world's consumption of the commodity, the extra
profit and employment that exporters provide will be very limited.
Significantly, the sales of most of the commodities exported by the
poorer countries of the world do not increase much when prices fall.
So would domestic producers provide more employment instead? The
answer is - it depends. All imported goods will cost more and local
manufacturers will be able to provide substitutes for only some of
them. So their customers, whose incomes in the local currency will
not have risen, will have to pay more for the imported part of their
purchases and this will leave them with less spending power to buy
local goods. Only the switching of purchases from importers to local
suppliers creates extra work and if the extent to which this can be
done is limited, total local employment might fall.
What tends to happen, then, is that when capital flows into a
country, it damages existing exporters and domestic producers,
leaving them in a weaker position to win overseas markets and take
over from imports when it flows out. Then, when capital flows in the
other direction, the local economy might be forced to contract
because too few locally-made substitutes for the now more costly
imports are available.
Step 2: Keeping current account and capital account money flows
apart.
The lesson from all this is that there should be no net capital
flows. Movements of investment money should not be lumped in with
those from imports and exports for reasons of administrative
convenience. If the government, a bank, a company or an individual
wishes to move capital overseas they should be free to do so but they
should not convert their national currency into foreign currency by
purchasing foreign exchange earned by exporters or tourism. Instead,
they should get their foreign exchange from people wishing to move
their capital in the opposite direction. If a lot of people want to
move their capital out and very few in, then the exchange rate for
capital flows adjusts to reflect this without affecting the exchange
rate for current (that is, the import/export) flows. In other words,
there would be two quite different exchange rates and each would
adjust independently of the other to ensure that both accounts,
capital and current, always balance. There would be no net inflow or
outflow of capital to the country, and the value of imports would
always equal exports.
Such a system was used in what was then the Sterling Area from 1947
when Britain passed the Exchange Control Act until May,
1979. Anyone wanting to move capital out of the Area had to pay what
was known as the 'dollar premium', the difference between the
two exchange rates. A similar two-tier currency system was used in
South Africa between September 1985 and March 1995. The
capital currency was known as the financial rand. "The financial rand
system has served South Africa well during the years of the
country's economic isolation" the South Africa minister of finance,
C.F. Liebenburg, said when he announced its abolition on the
grounds that it might discourage foreign investment in the country.
As, of course, it would have done to the extent that it ensured that
there was no net foreign investment, as capital inflows would have
been matched by capital coming out.
From a sustainability perspective, the major advantage to be gained
from operating what are effectively two currencies, one for
capital purposes and the other for normal buying and selling, each
with its own exchange rate, is that policies to promote
sustainability cannot be derailed by investors taking fright at what
is going on and rushing to get their money out of the country.
Consider what happened in Mexico in December 1994. The government
devalued the peso by 13% in an attempt to correct an 8%
deficit on its current account - a deficit caused, of course, by
overseas investors in 'emerging markets' moving their money into the
country to lend short-term at attractive rates or invest in the
stockmarket. But no-one was convinced that the devaluation would
prove large enough and foreign and local investors rushed to get
their money out of the country before another took place. With so
many people panicking, the new rate was abandoned the following day
and the peso was allowed to float, ending almost 40% below
its former level. The higher price of imports naturally caused an
inflation so the Central Bank jacked up interest rates to try to
suppress it. This ruined many companies and 250,000 jobs were lost in
one month, January 1995, alone. The business collapses left
bad debts which in turn ruined the local banks. "With most local
banks reeling, the eight foreign banks operating in the country have
moved quickly to take advantage" The Financial Times wrote at the
time . "The Mexican financial crisis is an object lesson in the
power and caprice of the international capital markets" Will Hutton
wrote in The Guardian . " [Capital] flows can be cut off at will with
hugely destabilising consequences."
A two-tier exchange system would, of course, have prevented the
crisis happening. It would have given the government enormous
freedom to develop policies to suit its people rather than
international investors. It would not have mattered to it at all
whether or not
Intel was going to build a chip fabrication factory in the country as
the only effect that such a massive inward investment would have
would be to make it much more attractive for Mexicans to move their
capital overseas.
The second key step towards sustainability is therefore to keep
capital flows completely apart from those on the current account.
The two remaining conventional economic sustainability indicators,
the rate of inflation and the health of the public finances, can be
discussed quite quickly. In an economy in which the government spent
any additional money into circulation, excessive inflation
would only occur if ministers behaved irresponsibly and tried to
capture a greater proportion of the country's scarce
resources by putting too much extra money into use rather than by
removing those resources from private hands by increasing
taxes. The remedy would be in their hands. Management of the public
finances would be much easier, too. There would be no need
for the state to borrow - ever. If the economy did slow down, it
would be possible for the government to create extra demand by
spending extra money into use. No debt would be incurred. And if the
economy then began to overheat, causing too rapid an
inflation, taxes could be increased and the money they brought in
removed from circulation, dampening overall demand. So while
both indicators would still have to be watched, their management, and
that of the economy, would be very easy.
Step 3: Limiting the supply of money to that of the scarcest
resource.
While the two steps we have identified so far would make it much
easier for governments to pay less attention to short-term
economic sustainability and to follow other objectives, they would
not compel them to do so. Indeed, without step three, steps one
and two could just make the economic system easier to run and, by
freeing it from the credit squeezes, recessions and depressions
that currently slow its expansion down. it could become even more
destructive. Accordingly, the final step involves tying the global
money supply to the availability of the scarcest global environmental
resource so that the world economy automatically functions
within the limits set by that resource and the two are not in
constant conflict with each other. This would mean that, whenever
people
tried to save money, they would automatically be minimising the
stress they were placing on the scarcest aspect of the global
environment rather than denying someone else work, which is what
saving can do at present.
In my view, the scarcest environmental resource is the ability of the
Earth to absorb the greenhouse gases created by humanity's
economic activities. The Intergovernmental Panel on Climate Change
(IPCC) believes that 60-80% cuts in greenhouse gas emissions
are urgently needed to lessen the risk of the catastrophic
consequences of a runaway global warming. Contraction and Convergence
(C&C), the plan for reducing greenhouse gas emissions developed by
the Global Commons Institute in London which has gained the
support of a majority of the nations of the world, provides a way of
linking a global currency with the limited capacity of the planet to
absorb or break down greenhouse gas emissions.
Under the C&C approach,
like the dollar, sterling and the euro were allowed to use them, they
would effectively get the right to use a lot of their extra energy
for
free because much of the money they paid would be used for investing
and trading around the world rather than purchasing goods
from the countries which issued them. To avoid this, Feasta, the
Dublin-based Foundation for the Economics of Sustainability,.
The use of national currencies for international trade would be
phased out. Only the ebcu would be used for trade among participating
countries and any countries which stayed out of the system would have
tariff barriers raised against them. Many indebted countries
would find that their initial allocation of ebcu enabled them to
clear their foreign loans. In subsequent years, they would be able to
import equipment for capital projects with their income from the sale
of SERs.
A major advantage of this system is that it would establish what
would amount to a dealers' ring for the purchase of fossil fuels
similar to those set up by groups of dishonest antique dealers before
an auction. The dealers in the ring decide who is to bid for
each item and the maximum the bidder is to pay and then, afterwards,
they hold a private auction among themselves to determine
who actually gets what. The point of this ploy is to ensure that the
extra money which would have gone to the vendor if the dealers
had bid against each other in the original auction stays within the
group and does not leak away unnecessarily to a member of the
public. By limiting demand, the ebcu/SER system would prevent excess
money going to fossil fuel producers in times of scarcity
and plunging the world into an economic depression. Instead, the
money would go to poor countries after an auction for their surplus
SERs. This money would not have to be lent back into the world
economy as would happen if the energy producers received it. It
would be quickly spent back by people who urgently need many things
which the over-fossil-energy-intensive economies can make.
So, rather than debt growing, demand would grow, constrained only by the
availability of energy. Suppose it was decided to cut emissions
by 5% a year, a rate which would achieve the 80% cut the IPCC urges
in thirty years, the sort of goal we need to adopt if we are to
have any chance of averting a sudden, catastrophic climate change.
Cutting fossil energy supplies at this rate would mean that the
ability of the world economy to supply goods and services would
shrink by 5% a year minus the rate at which energy economies
became possible and renewable energy supplies were introduced.
Initially, energy savings would take the sting out of most of the
cuts - there's a lot of fat around - and as these became
progressively difficult to find, the rate of renewable energy
installations should
have increased enough to prevent significant falls in global output.
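(A quick check of that arithmetic, not in the original text: a 5% cut
compounds, so after thirty years the remaining emissions are

    0.95^{30} \approx 0.21

of today's level, i.e. roughly an 80% reduction, consistent with the
IPCC figure quoted above.)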
The global economy this system would create would be much less liable
to a boom and bust cycle than the present one for two
reasons. One is that, as the shape of every national economy would be
changing rapidly, there would be a lot of investment
opportunities around. The other is that as debt was no longer being
used as the basis for either the world currencies or for national
ones, the supply of the world's money, the ebcu, and of the various
national currencies would no longer fluctuate up and down,
magnifying changes in the business climate.
Everyone, even the fossil fuel producers, would benefit from such an
arrangement and, as far as I am aware, no other course has
been proposed which tackles the problem in a way which is both
equitable and guarantees that emissions targets are met. What is
certain is that the unguided workings of the global market are
unlikely to ensure that fossil energy use is cut back quickly enough
to
avoid a climate crisis in a way that brings about a rapid switch to
renewable energy supplies.
So the three main monetary changes required to ease the world's
passage to sustainability are:
1. The replacement of debt-based national currencies with ones spent
by governments into permanent circulation.
2. The separation of capital flows and current account flows on
foreign exchange markets.
3. Ending the use of the currencies of major nations for
international trade and their replacement by a proper global currency
which
would be given into circulation and whose supply would be controlled
so that the total level of global economic activity was reduced
to one compatible with a sustainable world.
Of course, there are many other changes that would be desirable for
sustainability too. There's even another monetary one - the
introduction of regional currencies to allow local economies to
develop and thrive regardless of the amount of national (or, in the
case of the euro, multinational) currency flowing in from outside.
But these three are the essential ones. Without them, short-term
economic sustainability will always have to be put first and long-
term sustainability will seem an impractical dream.
Contact for this article. Richard Douthwaite, Cloona, Westport,
Ireland. (098) 25313. ric...@douthwaite.net
https://groups.google.com/g/flora.mai-not/c/KqbxMZLRrFM?hl=en
Michael Robertson says Skype should open to Gizmo Project
Michael Robertson tells Andy Abramson his Gizmo Project peers with hundreds of other networks, so Skype should open up too. Robertson contrasts Skype's closed network to Skype's Carterfone petition to the FCC, a plea for mobile phone companies to let customers use phones of their choice. Skype wrote a letter last week
It's a false comparison. How we connect a phone to a mobile network is standardized. How we connect a client to the Skype network is not. How we connect the Skype network to another service is not.
A few interoperability questions for Michael:
-?
- Will you require realtime encryption? Strong enough to prevent live intercepts? Will you require all networks to notify users when their conversations are no longer encrypted?
- Will you agree to strong user authentication? So users can have confidence in the identity of friends and strangers?
- Will you (and everyone you peer with) agree on user profile data structures, white page directory services, and directory search interop?
- Will you support data portability principles? So users can switch to and from you network with their identities, profiles, buddy lists, histories, and preferences?
- Will you peer customer support costs and security? How should customers escalate security and technical issues across multiple networks?
- Will you mandate end-to-end transparency of call quality information?
- What namespaces would you suggest Skype use? Will you support OpenID or some other namespace?
- Will you open Gizmo up to all partners? Your contact page says "Unfortunately, we are not setup to partner at this time with organizations with fewer than one million users."
- How will you make all this work? What industry body or standards process could help Skype and other companies find the sweet spots of commoditized conversation?
You like thinking of yourself as a David against Goliaths (I'm thinking back to SIPphone vs. Vonage), and you cast Skype as one of the giants. It's fine to take a swing at Skype.
I hope you are up for more than talk, Michael.
What will you do to advance Talk 2.0 interop? Will you dig deeper? Reach out? What are the next steps, Mr. Robertson?
tags: skype, sipphone, gizmo5, gizmoproject, interop
Follow Phil Wolff on Twitter or FriendFeed or on Skype.
Labels: architecture, business, competition, freedom, mobile, skype, technology
4 Comments:
I have to feel that the reason Robertson lost his temper had nothing to do with interoperability. He is not stupid. I suspect Gizmo is coming to an end and Robertson is just expressing frustration at not being able to compete. We'll know the specifics shortly.
Well this is good news. Sounds like you agree with Mr. Robertson in principal so all you need to do is work out the technical details in your list. Good for you. Good for Mr. Robertson. Good for all the customers.
Skype, their Cheerleading Squad and their cronies may indeed wish, or dream, that Gizmo is "coming to an end", but there are no facts to support that. But it is easy to understand why they would wish Gizmo (and Mr. Robertson) would go away and stop asking such embarrassing questions in public. It is obvious that the emperor is wearing no clothes...
Interesting list. But here's the thing you fail to mention. Maybe Gizmo doesn't interop at all those levels, but at least they started somewhere - interop of basic voice calls over IP, which is something Skype still has not done. At least I can pass a voice call to Gizmo over IP (I don't even need a "peering agreement" to do so) and Gizmo can call me over IP using published protocols. So, while they don't do all the things on your list, they do one important one, voice calls, and Skype won't even do that - and don't pretend like I'm the only person on earth asking for this, as has been Skype's argument (and parroted by Skype apologists).
http://skypejournal.com/2008/09/michael-robertson-says-skype-should.html
Created on 2020-04-08 09:41 by Mark.Shannon, last changed 2021-11-17 18:08 by vstinner.
C++ and Java support what is known as "zero cost" exception handling.
The "zero cost" refers to the cost when no exception is raised. There is still a cost when exceptions are thrown.
The basic principle is that the compiler generates tables indicating where control should be transferred to when an exception is raised. When no exception is raised, there is no runtime overhead.
(C)Python should support "zero cost" exceptions.
Now that the bytecodes for exception handling are regular (meaning that their stack effect can be statically determined) it is possible for the bytecode compiler to emit exception handling tables.
Doing so would have two main benefits.
1. "try" and "with" statements would be faster (and "async for", but that is an implementation detail).
2. Calls to Python functions would be faster as frame objects would be considerably smaller. Currently each frame carries 240 bytes of overhead for exception handling.
+1! I was going to implement this, but first I wanted to implement support of line number ranges instead of just line numbers (co_lineno). We need to design some compact portable format for address to address mapping (or address range to address mapping if it is more efficient).
Are you already working on this Mark? I would be glad to make a review.
This is an exciting prospect. Am looking forward to it :-)
+1
To clarify, would there be any observable difference in behavior aside from speed? And would there be any limitations in when the speedup can be applied?
The only observable changes will be changes in the code object: new attributes and constructor parameters, changed .pyc format, dis output, etc.
The changes to pyc format aren't user visible so shouldn't matter,
but what about the dis output?
Consider this program:
def f():
    try:
        1/0
    except:
        return "fail"
Currently it compiles to:
  2           0 SETUP_FINALLY            7 (to 16)

  3           2 LOAD_CONST               1 (1)
              4 LOAD_CONST               2 (0)
              6 BINARY_TRUE_DIVIDE
              8 POP_TOP
             10 POP_BLOCK
             12 LOAD_CONST               0 (None)
             14 RETURN_VALUE

  4     >>   16 POP_TOP
             18 POP_TOP
             20 POP_TOP

  5          22 POP_EXCEPT
             24 LOAD_CONST               3 ('fail')
             26 RETURN_VALUE
With zero-cost exception handling, it will compile to something like:
  2           0 NOP

  3           2 LOAD_CONST               1 (1)
              4 LOAD_CONST               2 (0)
              6 BINARY_TRUE_DIVIDE
              8 POP_TOP
             10 LOAD_CONST               0 (None)
             12 RETURN_VALUE

None         14 PUSH_EXCEPT

  4          16 POP_TOP
             18 POP_TOP
             20 POP_TOP

  5          22 POP_EXCEPT
             24 LOAD_CONST               3 ('fail')
             26 RETURN_VALUE
(There are additional optimizations that should be applied, but those are a separate issue)
The problem is that the exception handling flow is no longer visible.
Should we add it back in somehow, or just append the exception jump table?
We can add a new column for the offset or the index of the error handler. Or add pseudo-instructions (which do not correspond to any bytecode) at boundaries of the code with some error handler.
I like Serhiy’s idea.
BTW, what are the three POP_TOP op codes in a row popping?
> BTW, what are the three POP_TOP op codes in a row popping?
When exceptions are pushed to the stack, they are pushed as a triple: (exc, type, traceback)
so when we pop them, we need three pops.
Responding to Serhiy's suggestions:
1 Add another column:
Adding another column makes for lots of repetition in larger try blocks, and pushes useful information further to the right.
2 Add pseudo-instructions
I find those misleading, as they aren't really there, and probably won't even correspond to the original SETUP_XXX instructions.
I've played around with a few formats, and what I've ended up with is this:
1. Use the >> marker for exception targets, as well as normal branch targets.
2. Add a text version of the exception handler table at the end of the disassembly.
This has all the information, without too much visual clutter.
The function `f` above looks like this:
>>> dis.dis(f)
  2           0 NOP

  3           2 LOAD_CONST               1 (1)
              4 LOAD_CONST               2 (0)
              6 BINARY_TRUE_DIVIDE
              8 POP_TOP
             10 NOP
             12 LOAD_CONST               0 (None)
             14 RETURN_VALUE

        >>   16 NOP
             18 PUSH_EXC_INFO

  4          20 POP_TOP
             22 POP_TOP
             24 POP_TOP

  5          26 NOP
             28 POP_EXCEPT
             30 LOAD_CONST               3 ('fail')
             32 RETURN_VALUE

        >>   34 POP_EXCEPT_AND_RERAISE

ExceptionTable:
  2 to 8 -> 16 (depth 0)
  18 to 24 -> 34 (depth 3) lasti
The 'lasti' field indicates that the offset of the last instruction is pushed to the stack, which is needed for cleanup-then-reraise code.
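To make the table format concrete, here is a small illustrative sketch (not CPython's actual implementation, just an assumption-laden model) of how an interpreter could use start/end/target/depth entries like the ones printed above:

# Illustrative sketch only -- not CPython's real data structures or code.
from collections import namedtuple

Entry = namedtuple("Entry", "start end target depth lasti")

# The table printed above, transcribed by hand:
#   2 to 8   -> 16 (depth 0)
#   18 to 24 -> 34 (depth 3) lasti
EXCEPTION_TABLE = [
    Entry(start=2, end=8, target=16, depth=0, lasti=False),
    Entry(start=18, end=24, target=34, depth=3, lasti=True),
]

def find_handler(offset):
    """Return the entry covering `offset`, or None to keep unwinding."""
    for entry in EXCEPTION_TABLE:
        if entry.start <= offset <= entry.end:
            return entry
    return None

# On a raise at offset 6 (the BINARY_TRUE_DIVIDE above) the interpreter would
# pop the value stack down to `depth`, push the exception (plus the offset of
# the last instruction when `lasti` is set) and jump to `target`; the
# non-raising path pays nothing because there is no SETUP_FINALLY to execute.
print(find_handler(6))    # Entry(start=2, end=8, target=16, depth=0, lasti=False)
print(find_handler(30))   # None -> the exception propagates to the caller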
This seems to have broken the address sanitizer buildbot:
Example error:
================================================================
==28597==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x604000011cbd at pc 0x55e7e3cceedc bp 0x7ffc74448490 sp 0x7ffc74448480
READ of size 1 at 0x604000011cbd thread T0
#0 0x55e7e3cceedb in skip_to_next_entry Python/ceval.c:4798
#1 0x55e7e3cceedb in get_exception_handler Python/ceval.c:4866
#2 0x55e7e3cceedb in _PyEval_EvalFrameDefault Python/ceval.c:4465
#3 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#4 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#5 0x55e7e3d1922e in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#6 0x55e7e3d1922e in object_vacall Objects/call.c:734
#7 0x55e7e3d1ec50 in _PyObject_CallMethodIdObjArgs Objects/call.c:825
#8 0x55e7e3f49dd7 in import_find_and_load Python/import.c:1499
#9 0x55e7e3f49dd7 in PyImport_ImportModuleLevelObject Python/import.c:1600
#10 0x55e7e3cd839b in import_name Python/ceval.c:6101
#11 0x55e7e3cd839b in _PyEval_EvalFrameDefault Python/ceval.c:3693
#12 0x55e7e3ed09ea in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#13 0x55e7e3ed09ea in _PyEval_Vector Python/ceval.c:5160
#14 0x55e7e3ed09ea in PyEval_EvalCode Python/ceval.c:1136
#15 0x55e7e420b908 in builtin_exec_impl Python/bltinmodule.c:1065
#16 0x55e7e420b908 in builtin_exec Python/clinic/bltinmodule.c.h:371
#17 0x55e7e4196590 in cfunction_vectorcall_FASTCALL Objects/methodobject.c:426
#18 0x55e7e3d1a592 in PyVectorcall_Call Objects/call.c:255
#19 0x55e7e3cd15f4 in do_call_core Python/ceval.c:6028
#20 0x55e7e3cd15f4 in _PyEval_EvalFrameDefault Python/ceval.c:4283
#21 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#22 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#23 0x55e7e3cd424e in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#24 0x55e7e3cd424e in PyObject_Vectorcall Include/cpython/abstract.h:123
#25 0x55e7e3cd424e in call_function Python/ceval.c:5976
#26 0x55e7e3cd424e in _PyEval_EvalFrameDefault Python/ceval.c:4187
#27 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#28 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#29 0x55e7e3cd4384 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#30 0x55e7e3cd4384 in PyObject_Vectorcall Include/cpython/abstract.h:123
#31 0x55e7e3cd4384 in call_function Python/ceval.c:5976
#32 0x55e7e3cd4384 in _PyEval_EvalFrameDefault Python/ceval.c:4204
#33 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#34 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#35 0x55e7e3ce0934 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#36 0x55e7e3ce0934 in PyObject_Vectorcall Include/cpython/abstract.h:123
#37 0x55e7e3ce0934 in call_function Python/ceval.c:5976
#38 0x55e7e3ce0934 in _PyEval_EvalFrameDefault Python/ceval.c:4219
#39 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#40 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#41 0x55e7e3ce0934 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#42 0x55e7e3ce0934 in PyObject_Vectorcall Include/cpython/abstract.h:123
#43 0x55e7e3ce0934 in call_function Python/ceval.c:5976
#44 0x55e7e3ce0934 in _PyEval_EvalFrameDefault Python/ceval.c:4219
#45 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#46 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#47 0x55e7e3ce0934 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#48 0x55e7e3ce0934 in PyObject_Vectorcall Include/cpython/abstract.h:123
#49 0x55e7e3ce0934 in call_function Python/ceval.c:5976
#50 0x55e7e3ce0934 in _PyEval_EvalFrameDefault Python/ceval.c:4219
#51 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#52 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#53 0x55e7e3cd424e in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#54 0x55e7e3cd424e in PyObject_Vectorcall Include/cpython/abstract.h:123
#55 0x55e7e3cd424e in call_function Python/ceval.c:5976
#56 0x55e7e3cd424e in _PyEval_EvalFrameDefault Python/ceval.c:4187
#57 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#58 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#59 0x55e7e3cd424e in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#60 0x55e7e3cd424e in PyObject_Vectorcall Include/cpython/abstract.h:123
#61 0x55e7e3cd424e in call_function Python/ceval.c:5976
#62 0x55e7e3cd424e in _PyEval_EvalFrameDefault Python/ceval.c:4187
#63 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#64 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#65 0x55e7e3cd71ad in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#66 0x55e7e3cd71ad in PyObject_Vectorcall Include/cpython/abstract.h:123
#67 0x55e7e3cd71ad in call_function Python/ceval.c:5976
#68 0x55e7e3cd71ad in _PyEval_EvalFrameDefault Python/ceval.c:4237
#69 0x55e7e3ed09ea in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#70 0x55e7e3ed09ea in _PyEval_Vector Python/ceval.c:5160
#71 0x55e7e3ed09ea in PyEval_EvalCode Python/ceval.c:1136
#72 0x55e7e420b908 in builtin_exec_impl Python/bltinmodule.c:1065
#73 0x55e7e420b908 in builtin_exec Python/clinic/bltinmodule.c.h:371
#74 0x55e7e4196590 in cfunction_vectorcall_FASTCALL Objects/methodobject.c:426
#75 0x55e7e3d1a592 in PyVectorcall_Call Objects/call.c:255
#76 0x55e7e3cd15f4 in do_call_core Python/ceval.c:6028
#77 0x55e7e3cd15f4 in _PyEval_EvalFrameDefault Python/ceval.c:4283
#78 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#79 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#80 0x55e7e3cd424e in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#81 0x55e7e3cd424e in PyObject_Vectorcall Include/cpython/abstract.h:123
#82 0x55e7e3cd424e in call_function Python/ceval.c:5976
#83 0x55e7e3cd424e in _PyEval_EvalFrameDefault Python/ceval.c:4187
#84 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#85 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#86 0x55e7e3cd4384 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#87 0x55e7e3cd4384 in PyObject_Vectorcall Include/cpython/abstract.h:123
#88 0x55e7e3cd4384 in call_function Python/ceval.c:5976
#89 0x55e7e3cd4384 in _PyEval_EvalFrameDefault Python/ceval.c:4204
#90 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#91 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#92 0x55e7e3ce0934 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#93 0x55e7e3ce0934 in PyObject_Vectorcall Include/cpython/abstract.h:123
#94 0x55e7e3ce0934 in call_function Python/ceval.c:5976
#95 0x55e7e3ce0934 in _PyEval_EvalFrameDefault Python/ceval.c:4219
#96 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#97 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#98 0x55e7e3ce0934 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#99 0x55e7e3ce0934 in PyObject_Vectorcall Include/cpython/abstract.h:123
#100 0x55e7e3ce0934 in call_function Python/ceval.c:5976
#101 0x55e7e3ce0934 in _PyEval_EvalFrameDefault Python/ceval.c:4219
#102 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#103 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#104 0x55e7e3ce0934 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#105 0x55e7e3ce0934 in PyObject_Vectorcall Include/cpython/abstract.h:123
#106 0x55e7e3ce0934 in call_function Python/ceval.c:5976
#107 0x55e7e3ce0934 in _PyEval_EvalFrameDefault Python/ceval.c:4219
#108 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#109 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#110 0x55e7e3cd424e in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#111 0x55e7e3cd424e in PyObject_Vectorcall Include/cpython/abstract.h:123
#112 0x55e7e3cd424e in call_function Python/ceval.c:5976
#113 0x55e7e3cd424e in _PyEval_EvalFrameDefault Python/ceval.c:4187
#114 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#115 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#116 0x55e7e3cd424e in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#117 0x55e7e3cd424e in PyObject_Vectorcall Include/cpython/abstract.h:123
#118 0x55e7e3cd424e in call_function Python/ceval.c:5976
#119 0x55e7e3cd424e in _PyEval_EvalFrameDefault Python/ceval.c:4187
#120 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#121 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#122 0x55e7e3ce0934 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#123 0x55e7e3ce0934 in PyObject_Vectorcall Include/cpython/abstract.h:123
#124 0x55e7e3ce0934 in call_function Python/ceval.c:5976
#125 0x55e7e3ce0934 in _PyEval_EvalFrameDefault Python/ceval.c:4219
#126 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#127 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#128 0x55e7e3cd71ad in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#129 0x55e7e3cd71ad in PyObject_Vectorcall Include/cpython/abstract.h:123
#130 0x55e7e3cd71ad in call_function Python/ceval.c:5976
#131 0x55e7e3cd71ad in _PyEval_EvalFrameDefault Python/ceval.c:4237
#132 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#133 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#134 0x55e7e3ce0934 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#135 0x55e7e3ce0934 in PyObject_Vectorcall Include/cpython/abstract.h:123
#136 0x55e7e3ce0934 in call_function Python/ceval.c:5976
#137 0x55e7e3ce0934 in _PyEval_EvalFrameDefault Python/ceval.c:4219
#138 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#139 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#140 0x55e7e3ce0934 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#141 0x55e7e3ce0934 in PyObject_Vectorcall Include/cpython/abstract.h:123
#142 0x55e7e3ce0934 in call_function Python/ceval.c:5976
#143 0x55e7e3ce0934 in _PyEval_EvalFrameDefault Python/ceval.c:4219
#144 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#145 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#146 0x55e7e3cd4384 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#147 0x55e7e3cd4384 in PyObject_Vectorcall Include/cpython/abstract.h:123
#148 0x55e7e3cd4384 in call_function Python/ceval.c:5976
#149 0x55e7e3cd4384 in _PyEval_EvalFrameDefault Python/ceval.c:4204
#150 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#151 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#152 0x55e7e3cd4384 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#153 0x55e7e3cd4384 in PyObject_Vectorcall Include/cpython/abstract.h:123
#154 0x55e7e3cd4384 in call_function Python/ceval.c:5976
#155 0x55e7e3cd4384 in _PyEval_EvalFrameDefault Python/ceval.c:4204
#156 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#157 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#158 0x55e7e4150719 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#159 0x55e7e4150719 in method_vectorcall Objects/classobject.c:53
#160 0x55e7e3d1a663 in PyVectorcall_Call Objects/call.c:267
#161 0x55e7e3cd15f4 in do_call_core Python/ceval.c:6028
#162 0x55e7e3cd15f4 in _PyEval_EvalFrameDefault Python/ceval.c:4283
#163 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#164 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#165 0x55e7e3ce0934 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#166 0x55e7e3ce0934 in PyObject_Vectorcall Include/cpython/abstract.h:123
#167 0x55e7e3ce0934 in call_function Python/ceval.c:5976
#168 0x55e7e3ce0934 in _PyEval_EvalFrameDefault Python/ceval.c:4219
#169 0x55e7e3ed09ea in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#170 0x55e7e3ed09ea in _PyEval_Vector Python/ceval.c:5160
#171 0x55e7e3ed09ea in PyEval_EvalCode Python/ceval.c:1136
#172 0x55e7e420b908 in builtin_exec_impl Python/bltinmodule.c:1065
#173 0x55e7e420b908 in builtin_exec Python/clinic/bltinmodule.c.h:371
#174 0x55e7e4196590 in cfunction_vectorcall_FASTCALL Objects/methodobject.c:426
#175 0x55e7e3ce0934 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#176 0x55e7e3ce0934 in PyObject_Vectorcall Include/cpython/abstract.h:123
#177 0x55e7e3ce0934 in call_function Python/ceval.c:5976
#178 0x55e7e3ce0934 in _PyEval_EvalFrameDefault Python/ceval.c:4219
#179 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#180 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#181 0x55e7e3ce0934 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#182 0x55e7e3ce0934 in PyObject_Vectorcall Include/cpython/abstract.h:123
#183 0x55e7e3ce0934 in call_function Python/ceval.c:5976
#184 0x55e7e3ce0934 in _PyEval_EvalFrameDefault Python/ceval.c:4219
#185 0x55e7e3ed10d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#186 0x55e7e3ed10d7 in _PyEval_Vector Python/ceval.c:5160
#187 0x55e7e3d1a592 in PyVectorcall_Call Objects/call.c:255
#188 0x55e7e3ceddf2 in pymain_run_module Modules/main.c:293
#189 0x55e7e3cef08b in pymain_run_python Modules/main.c:581
#190 0x55e7e3cf10c4 in Py_RunMain Modules/main.c:666
#191 0x55e7e3cf10c4 in pymain_main Modules/main.c:696
#192 0x55e7e3cf10c4 in Py_BytesMain Modules/main.c:720
#193 0x7f9d8dd5ab24 in __libc_start_main (/usr/lib/libc.so.6+0x27b24)
#194 0x55e7e3ced49d in _start (/buildbot/buildarea/3.x.pablogsal-arch-x86_64.asan/build/python+0x17649d)
0x604000011cbd is located 0 bytes to the right of 45-byte region [0x604000011c90,0x604000011cbd)
allocated by thread T0 here:
#0 0x7f9d8e124459 in __interceptor_malloc /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cpp:145
#1 0x55e7e3cf7770 in _PyBytes_FromSize Objects/bytesobject.c:126
#2 0x55e7e3cf7770 in PyBytes_FromStringAndSize Objects/bytesobject.c:159
#3 0x55e7e3f686f1 in r_object Python/marshal.c:1066
#4 0x55e7e3f6960d in r_object Python/marshal.c:1375
#5 0x55e7e3f67ee0 in r_object Python/marshal.c:1171
#6 0x55e7e3f694c3 in r_object Python/marshal.c:1348
#7 0x55e7e3f6efca in PyMarshal_ReadObjectFromString Python/marshal.c:1562
#8 0x55e7e3f48b1d in PyImport_ImportFrozenModuleObject Python/import.c:1140
#9 0x55e7e3f490cc in PyImport_ImportFrozenModule Python/import.c:1194
#10 0x55e7e3f7e1e6 in init_importlib Python/pylifecycle.c:141
#11 0x55e7e3f7e1e6 in pycore_interp_init Python/pylifecycle.c:811
#12 0x55e7e3f84536 in pyinit_config Python/pylifecycle.c:840
#13 0x55e7e3f84536 in pyinit_core Python/pylifecycle.c:1003
#14 0x55e7e3f855fc in Py_InitializeFromConfig Python/pylifecycle.c:1188
#15 0x55e7e3ced749 in pymain_init Modules/main.c:66
#16 0x55e7e3cf107a in pymain_main Modules/main.c:687
#17 0x55e7e3cf107a in Py_BytesMain Modules/main.c:720
#18 0x7f9d8dd5ab24 in __libc_start_main (/usr/lib/libc.so.6+0x27b24)
SUMMARY: AddressSanitizer: heap-buffer-overflow Python/ceval.c:4798 in skip_to_next_entry
Shadow bytes around the buggy address:
0x0c087fffa340: fa fa 00 00 00 00 00 03 fa fa 00 00 00 00 00 00
0x0c087fffa350: fa fa 00 00 00 00 03 fa fa fa 00 00 00 00 00 03
0x0c087fffa360: fa fa 00 00 00 00 03 fa fa fa 00 00 00 00 00 05
0x0c087fffa370: fa fa 00 00 00 00 03 fa fa fa fd fd fd fd fd fd
0x0c087fffa380: fa fa 00 00 00 00 00 03 fa fa 00 00 00 00 00 05
=>0x0c087fffa390: fa fa 00 00 00 00 00[05]fa fa 00 00 00 00 00 01
0x0c087fffa3a0: fa fa 00 00 00 00 00 00 fa fa 00 00 00 00 00 00
0x0c087fffa3b0: fa fa 00 00 00 00 00 01 fa fa 00 00 00 00 07 fa
0x0c087fffa3c0: fa fa 00 00 00 00 00 00 fa fa 00 00 00 00 07 fa
0x0c087fffa3d0: fa fa 00 00 00 00 00 00 fa fa 00 00 00 00 00 00
0x0c087fffa3e0: fa fa 00 00 00 00 00 00 fa fa 00 00 00 00 00 00
==28597==ABORTING
make: *** [Makefile:1249: buildbottest] Error 1
To reproduce with a modern gcc:
% export ASAN_OPTIONS=detect_leaks=0:allocator_may_return_null=1:handle_segv=0
% ./configure --with-address-sanitizer --without-pymalloc
% make -j -s
% ./python -m test test_statistics
=================================================================
==51490==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6040000113fd at pc 0x564ec89e0edc bp 0x7ffcffffba70 sp 0x7ffcffffba60
READ of size 1 at 0x6040000113fd thread T0
#0 0x564ec89e0edb in skip_to_next_entry Python/ceval.c:4798
#1 0x564ec89e0edb in get_exception_handler Python/ceval.c:4866
#2 0x564ec89e0edb in _PyEval_EvalFrameDefault Python/ceval.c:4465
#3 0x564ec8be30d7 in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#4 0x564ec8be30d7 in _PyEval_Vector Python/ceval.c:5160
#5 0x564ec8a2b22e in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
#6 0x564ec8a2b22e in object_vacall Objects/call.c:734
#7 0x564ec8a30c50 in _PyObject_CallMethodIdObjArgs Objects/call.c:825
#8 0x564ec8c5bdd7 in import_find_and_load Python/import.c:1499
#9 0x564ec8c5bdd7 in PyImport_ImportModuleLevelObject Python/import.c:1600
#10 0x564ec89ea39b in import_name Python/ceval.c:6101
#11 0x564ec89ea39b in _PyEval_EvalFrameDefault Python/ceval.c:3693
#12 0x564ec8be29ea in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
#13 0x564ec8be29ea in _PyEval_Vector Python/ceval.c:5160
#14 0x564ec8be29ea in PyEval_EvalCode Python/ceval.c:1136
....
Seconded, also seeing the same ASAN failure on the fuzzers with a blame for this commit.
I tried some debugging code:
diff --git a/Python/ceval.c b/Python/ceval.c
index f745067069..a8668dbac2 100644
--- a/Python/ceval.c
+++ b/Python/ceval.c
@@ -4864,6 +4864,18 @@ get_exception_handler(PyCodeObject *code, int index)
return res;
}
scan = skip_to_next_entry(scan);
+ if (scan
+ >= (unsigned char *)PyBytes_AS_STRING(code->co_exceptiontable)
+ + PyBytes_GET_SIZE(code->co_exceptiontable))
+ {
+ printf("co_name: --------------------------\n");
+ _PyObject_Dump(code->co_name);
+ printf("co_filename: ----------------------\n");
+ _PyObject_Dump(code->co_filename);
+ printf("co_exceptiontable: -------------\n");
+ _PyObject_Dump(code->co_exceptiontable);
+ printf("\n\n\n\n\n");
+ }
}
res.b_handler = -1;
return res;
It output this:
Python 3.11.0a0 (heads/main-dirty:092f9ddb5e, May 9 2021, 18:45:56) [MSC v.1927 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from test.test_statistics import *
co_name: --------------------------
object address : 00000254B63EFB80
object refcount : 7
object type : 00007FFA1C7E71C0
object type name: str
object repr : '_find_and_load'
co_filename: ----------------------
object address : 00000254B63967A0
object refcount : 76
object type : 00007FFA1C7E71C0
object type name: str
object repr : '<frozen importlib._bootstrap>'
co_exceptiontable: -------------
object address : 00000254B63EB290
object refcount : 1
object type : 00007FFA1C7C9A40
object type name: bytes
object repr : b'\x84\x10"\x03\xa2\x04&\x0b\xa7\x03&\x0b'
>>> unittest.main()
............................................................................................................................................................................................................................................................................................................................................................................
----------------------------------------------------------------------
Ran 364 tests in 24.409s
OK
Here is the disassembly of the offending function:
>>> from dis import dis
>>> from importlib._bootstrap import _find_and_load
>>> dis(_find_and_load)
1024 0 LOAD_GLOBAL 0 (_ModuleLockManager)
2 LOAD_FAST 0 (name)
4 CALL_FUNCTION 1
6 BEFORE_WITH
8 POP_TOP
1025 10 LOAD_GLOBAL 1 (sys)
12 LOAD_ATTR 2 (modules)
14 LOAD_METHOD 3 (get)
16 LOAD_FAST 0 (name)
18 LOAD_GLOBAL 4 (_NEEDS_LOADING)
20 CALL_METHOD 2
22 STORE_FAST 2 (module)
1026 24 LOAD_FAST 2 (module)
26 LOAD_GLOBAL 4 (_NEEDS_LOADING)
28 IS_OP 0
30 POP_JUMP_IF_FALSE 27 (to 54)
1027 32 LOAD_GLOBAL 5 (_find_and_load_unlocked)
34 LOAD_FAST 0 (name)
36 LOAD_FAST 1 (import_)
38 CALL_FUNCTION 2
1024 40 ROT_TWO
42 LOAD_CONST 1 (None)
44 DUP_TOP
46 DUP_TOP
48 CALL_FUNCTION 3
50 POP_TOP
1027 52 RETURN_VALUE
1026 >> 54 NOP
1024 56 LOAD_CONST 1 (None)
58 DUP_TOP
60 DUP_TOP
62 CALL_FUNCTION 3
64 POP_TOP
66 JUMP_FORWARD 11 (to 90)
>> 68 PUSH_EXC_INFO
70 WITH_EXCEPT_START
72 POP_JUMP_IF_TRUE 39 (to 78)
74 RERAISE 4
>> 76 POP_EXCEPT_AND_RERAISE
>> 78 POP_TOP
80 POP_TOP
82 POP_TOP
84 POP_EXCEPT
86 POP_TOP
88 POP_TOP
1029 >> 90 LOAD_FAST 2 (module)
92 LOAD_CONST 1 (None)
94 IS_OP 0
96 POP_JUMP_IF_FALSE 60 (to 120)
1030 98 LOAD_CONST 2 ('import of {} halted; None in sys.modules')
1031 100 LOAD_METHOD 6 (format)
102 LOAD_FAST 0 (name)
104 CALL_METHOD 1
1030 106 STORE_FAST 3 (message)
1032 108 LOAD_GLOBAL 7 (ModuleNotFoundError)
110 LOAD_FAST 3 (message)
112 LOAD_FAST 0 (name)
114 LOAD_CONST 3 (('name',))
116 CALL_FUNCTION_KW 2
118 RAISE_VARARGS 1
1034 >> 120 LOAD_GLOBAL 8 (_lock_unlock_module)
122 LOAD_FAST 0 (name)
124 CALL_FUNCTION 1
126 POP_TOP
1035 128 LOAD_FAST 2 (module)
130 RETURN_VALUE
ExceptionTable:
8 to 38 -> 68 [1] lasti
68 to 74 -> 76 [5] lasti
78 to 82 -> 76 [5] lasti
I don't know whether there just needs to be a sentinel 128 appended to all co_exceptiontable, or if there is a more subtle bug.
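A rough Python sketch of what that scan amounts to (an illustration of the 3.11 entry layout, not the actual ceval.c code): the first byte of each exception-table entry has the high bit (0x80) set, so skipping to the next entry means advancing until such a byte appears. The table dumped above ends in 0x0b with no trailing high-bit byte, so a scan that runs past the last entry reads beyond the buffer unless it is limited by the table length (or a 128 sentinel is appended, as suggested above).

    table = b'\x84\x10"\x03\xa2\x04&\x0b\xa7\x03&\x0b'   # co_exceptiontable dumped above

    def skip_to_next_entry(table, pos, bounded=True):
        # Advance until a byte with the high bit set (the start-of-entry
        # marker) is found.  Without the bound check this models the
        # out-of-bounds read ASAN reports: past the last entry there is
        # no high-bit byte left inside the buffer.
        pos += 1
        while (not bounded or pos < len(table)) and not (table[pos] & 0x80):
            pos += 1
        return pos

    # The last entry starts at offset 8 (0xa7).  A bounded scan stops at
    # len(table) == 12; with bounded=False the same call raises IndexError,
    # the Python analogue of the heap-buffer-overflow.
    print(skip_to_next_entry(table, 8))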
Thanks everyone for the triaging and fixing.
It seems that we have broken the stable ABI of PyCode_NewWithPosOnlyArgs, as it now has another parameter. We need to either create a new private constructor or a new public one, but the ABI cannot change.
I know PyCode_NewWithPosOnlyArgs is declared as "PyAPI_FUNC" but that can't make it part of the ABI unless it has stable behavior.
It can't have stable behavior because its inputs are complex, undefined, have altered semantics and are interlinked in complex ways.
Passing the same arguments to PyCode_NewWithPosOnlyArgs for both 3.9 and 3.10 will cause one or other version to crash (interpreter crash, not just program crash).
We need to stop adding "PyAPI_FUNC" to everything.
Adding a PyAPI_FUNC does not magically make for ABI compatibility, there is a lot more to it than that.
The only sane ways to construct a code object are to load it from disk, to compile an AST, or to use
codeobject.replace(). Any purported ABI compatibility claims are just misleading and a trap.
I can revert the API changes and add a new function, but I think that is dangerously misleading. A compilation error is preferable to an interpreter crash.
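As a small illustration of those "sane ways" (a sketch added here, not code from the issue): a code object can come out of compile() and then be adjusted with CodeType.replace(), which copies across whatever fields the running interpreter's code objects have (co_exceptiontable included) instead of requiring every constructor argument to be spelled out.

    # Compile source to get a code object, then derive a tweaked copy.
    code = compile("x = spam + 1", "<generated>", "exec")
    patched = code.replace(co_filename="chunk.py", co_firstlineno=42)
    # 'patched' keeps any fields newer interpreters add, because replace()
    # copies everything that is not explicitly overridden.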
I agree with you, but we already went through this when I added positional-only arguments: everyone complained that Cython and other projects were broken, and we changed a stable API function. So I am just mentioning that we are here again.
Honestly, I think what you describe makes sense and this constructor should never be stable (as the Python one is not stable). If we add column offsets, that is yet another parameter we would need to add. But in any case that's just my opinion and we should reach some conclusion collectively; that's why I mentioned this here.
In any case, if we decide to let it stay, at the very least this behaviour (that these functions are not stable) needs to be documented everywhere: What's New, the C-API docs, etc. And probably we need to somehow add it to the future deprecations of 3.10 for visibility.
As I mentioned, I sympathise with your argument and I think it makes sense, but we cannot just do it in a BPO issue, I'm afraid.
PyCode_NewWithPosOnlyArgs is not part of the stable ABI. It is OK to break its ABI in a minor version (i.e. 3.11).
The PyAPI_FUNC makes it part of the public *API*. It needs to be source-compatible; the number of arguments can't change. Could you add a new function?
I wouldn't remove PyCode_NewWithPosOnlyArgs from the public C API, which can be CPython-specific and used by projects like Cython that need some low-level access for performance. But PEP 387 applies, so if it is deprecated in 3.11, it can be removed in 3.13.
> The PyAPI_FUNC makes it part of the public *API*. It needs to be source-compatible; the number of arguments can't change. Could you add a new function?
Unfortunately, no: new functions cannot be added easily, because the new field that the constructor receives is needed, and it is a complicated field created by the compiler. The old API is not enough anymore, and a compatibility layer would be hugely complex.
Then, according to PEP 387, "The steering council may grant exceptions to this policy."
I think API breaks like this do need coordination at the project level.
> I think API breaks like this do need coordination at the project level.
Absolutely, that's why I said before:
> As I mentioned, I sympathise with your argument and I think it makes sense, but we cannot just do it in a BPO issue, I'm afraid.
It is very little effort to add back the old function, so that isn't the problem. It won't work properly, but it never did anyway. So I guess that's sort of compatible.
Maybe the best thing is to put a big red warning in the docs and hope that warns away people from using it?
> It is very little effort to add back the old function, so that isn't the problem. It won't work properly, but it never did anyway. So I guess that's sort of compatible.
"It won't work properly" is an incompatible change. Before, if you extracted all fields from a code object and passed them down to the constructor, everything would work.
> Maybe the best thing is to put a big red warning in the docs and hope that warns away people from using it?
I think code object constructors must be part of the private C-API due to what we are experiencing. But again, this is something we cannot decide in this bpo issue. Either a python-dev thread needs to be opened or a Steering Council request in the repo needs to be opened.
Just a comment regarding the change to "PyCode_NewWithPosOnlyArgs()". As Pablo mentioned, this has happened before. And that's OK! Exactly because this has happened before, it's clearly not a part of the API that is meant to be stable.
I can easily adapt Cython to make this work in the next patch-level release of CPython 3.11 (or the current one, since alpha-1 seems not so close), but any adaptation will be patch-level dependent. Meaning, for each such change, there will be a couple of weeks or months until the C preprocessor makes the code compile again. And during that time, people won't be able to test their code to report issues.
So, I'd rather have compatibility broken and stay that way, than going one way now and changing it back later, thus going through the same adaptation period twice.
That being said, any such change means that maintainers will have to rebuild their packages with a new Cython release to adapt them to Py3.11. Many will, but some won't, for whatever reason.
Mark: Can you please document your change on types.CodeType? In:
*
*
The change broke the Genshi project:
in this Genshi function:
def build_code_chunk(code, filename, name, lineno):
    params = [0, code.co_nlocals, code.co_kwonlyargcount,
              code.co_stacksize, code.co_flags | 0x0040,
              code.co_code, code.co_consts, code.co_names,
              code.co_varnames, filename, name, lineno,
              code.co_lnotab, (), ()]
    if hasattr(code, "co_posonlyargcount"):
        # PEP 570 added "positional only arguments"
        params.insert(2, code.co_posonlyargcount)
    return CodeType(*params)
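For comparison, a sketch (not Genshi's actual code) of the same helper written on top of CodeType.replace(); it only overrides the fields that differ, so additions such as co_exceptiontable are carried over automatically:

    def build_code_chunk_via_replace(code, filename, name, lineno):
        # Assumes the template code object has no free or cell variables,
        # as in the positional version above.
        return code.replace(co_argcount=0,
                            co_flags=code.co_flags | 0x0040,
                            co_filename=filename,
                            co_name=name,
                            co_firstlineno=lineno,
                            co_freevars=(),
                            co_cellvars=())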
I think we're waiting here for the release manager to decide, right? Should we roll back the change to PyCode_NewWithPosOnlyArgs() or keep it?
Presumably the requested docs aren't the (beta) release blocker?
> I think we're waiting here for the release manager to decide, right?
As Petr mentions, the release manager doesn't have authority to decide if the backwards compatibility policy can be ignored, only the Steering Council.
> Should we roll back the change to PyCode_NewWithPosOnlyArgs() or keep it?
I don't think that is possible: code objects must be constructed with the new argument, otherwise they are broken. There is not an easy way to have a default for PyCode_New and PyCode_NewWithPosOnlyArgs that somehow creates the field from nothing.
I *personally* think that this case is one good example of an exception to the backwards compatibility rule, but I myself cannot grant that exception as a release manager. I also think these APIs should be removed from the public C-API ASAP because they literally conflict every time we change the code object for optimizations.
Small note, as I mentioned in my correction email (), this is a release blocker for 3.11 (it was not marked in the issue what Python version was associated, I am doing it with this message) so this doesn't block the 3.10 release.
Ah, okay. So we're not on the hook to decide this today. :-)
I'd like to close this, as the exception handling is all done and working correctly.
Is there a separate issue for how we are handling CodeType()?
> Is there a separate issue for how we are handling CodeType()?
No, that's why this is marked as release blocker, because this is the first issue where CodeType was changed.
If you wish to close this one, please open a new issue explaining the situation and mark that one as release blocker.
I propose we declare all APIs for code objects *unstable*, liable to change each (feature) release.
I want to get rid of PyCode_NewWithPosArgs() and just have PyCode_New(). All callers to either one must be changed anyways. (And we’re not done changing this in 3.11 either.)
>> I want to get rid of PyCode_NewWithPosArgs() and just have PyCode_New().
That was added because of PEP 387 and unfortunately removing it is backwards incompatible.
>>
> >> I want to get rid of PyCode_NewWithPosArgs() and just have PyCode_New().
> That was added because of PEP 387 and unfortunately removing it is backwards incompatible.
Is changing the signature allowed? Because it *must* be changed (at the very least to accommodate the exceptiontable, but there are several others too -- your PEP 657 touched it last to add endlinetable and columntable).
I think this was a mistake in PEP 387 and we just need to retract that. Perhaps it could be left as a dummy that always returns an error?
> >>
Yeah that's the crux. :-(
I've started a thread on python-dev.
| https://bugs.python.org/issue40222 | CC-MAIN-2021-49 | refinedweb | 5,075 | 52.26 |
Re: Accessing value of a Variable in parent from custom control
- From: "RobinS" <RobinS@xxxxxxxxxxxxxxx>
- Date: Sun, 19 Nov 2006 11:12:00 -0800
Well, I was just trying to translate from C# to VB for you, but I've gone
back and reread the whole thread. If all you want is to be able to
get some kind of information from the parent form in which the control
resides, I'd think you could just add a public property to the parent form.
If Form1 implements an interface, it means that you must provide the
code for the properties defined in the interface. I think the line
myControl = DirectCast(Me, iUsesMyControl) allows you to access the methods and properties in the interface
from the control. I think what this allows you to do is set the
properties in the form that implement the interface to private.
Then casting myControl to that interface allows it to see the
private properties. So only the control can see the properties,
not the whole project. So in the control, maybe you could access
myControl.FormInt and myControl.FormString and see the
properties from the form.
Can anybody verify that I'm understanding that right?
However, I don't really see the need for something this complicated
if all you are trying to do is provide the ability for the control
to see something in the form. I would think you could just add a
property to the form, and let the control access it. The code
you have for the properties (which are public) should work okay.
Then in your control, I would think you could access the
properties as Form1.FormInt and Form1.FormString.
I guess it depends on whether it's okay to make those
properties public to the rest of the project or not.
Hope that helps. If anybody can shed some wisdom here,
that would be great.
Thanks,
Robin S.
"Zahid Hayat" <ZahidHayat@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote in message
news:9A6BB7EC-1ED6-43D6-BBA8-70328817CF0B@xxxxxxxxxxxxxxxx
My parent Form looks like this:
=======================
Public Class Form1
Implements iUsesMyControl
Private _formInt As Integer
Private _formString As String
Public var1 As Integer
Private mycontrol As ContainerControl
Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As
System.EventArgs) Handles MyBase.Load
_formInt = 123
_formString = "Zahid"
mycontrol = DirectCast(Me, iUsesMyControl)
End Sub
Public Property FormInt() As Integer Implements iUsesMyControl.FormInt
Get
Return _formInt
End Get
Set(ByVal value As Integer)
_formInt = value
End Set
End Property
Public Property FormString() As String Implements
iUsesMyControl.FormString
Get
Return _formString
End Get
Set(ByVal value As String)
_formString = value
End Set
End Property
End Class
============
I do not understand the implements of Control itself. Do we need to
implement another interface 'iUsesMyControl' as we did in the parent from
(Form1)?
"RobinS" wrote:
I think this would be the VB code for that:
In your form, add an Implements for the interface, and add the
properties.
You will want to have private variables to keep the current value,
you are exposing them through a property.
Public Class MyForm
Implements iUsesMyControl
Private _FormInt as Integer
Public Property FormInt() As Integer Implements
IUsesMyControl.FormInt
Get
return _FormInt
End Get
Set(ByVal value As Integer)
_FormInt = value
End Set
End Property
Private _FormString as String
Public Property FormString() As String Implements
IUsesMyControl.FormString
Get
return _FormString
End Get
Set(ByVal value As String)
_FormString = value
End Set
End Property
....(other form code)
End Class
Public Interface IUsesMyControl
Property FormInt as Integer
Property FormString as String
End Interface
In your control:
'**I'm not sure about this; Either I have it wrong,
'** or there's some way to define something as
'** an interface. If this doesn't work, try
'** "implements IUsesMyControl" instead of "as IUsesMyControl".
Private _parentForm as IUsesMyControl
Public WriteOnly Property ParentForm As IUsesMyControl
Set(ByVal value As IUsesMyControl)
_parentForm = value
End Set
End Property
Somewhere early in your parent form
myControl.ParentForm = DirectCast(me, IUsesMyControl)
I think that's right; feel free to correct me if I got any of it wrong.
Robin S.
-------------------------
"Dale" <dale0973@xxxxxxxxxxxxx> wrote in message
news:6AD37C2A-C539-45DB-BE10-E733ACEBC094@xxxxxxxxxxxxxxxx
Unfortunately, I am not a VB.Net developer but you should hopefully be
able
to figure this out from the C#.
Let's say that your control is MyControl and your form is MyForm. The
instance, in my example, of MyControl is named myControl. Assume your
form
needs a string variable we'll call formString and an int variable we'll
call
formInt that are defined in the parent form. Create an interface
called,
for
instance, iUsesMyControl.
In your form class declaration, change
public class MyForm : Form
to
public class MyForm : Form, iUsesMyControl
I think the VB would look like
Public Class MyForm
Inherits Form
Implements iUsesMyControl
Your interface, iUsesMyControl, would look like:
internal interface iUsesMyControl
{
int FormInt { get; set; }
string FormString { get; set; }
}
Basically, we're defining two properties in the interface that all
classes
implementing iUsesMyControl must implement. Check your VB
documentation
for
how to define properties in an interface in VB.
Your implementation of iUsesMyControl in your form would look like
private int formInt; // though this may have been previously
defined
elsewhere.
internal int FormInt
{
get { return formInt; }
set { formInt = value; }
}
private string formString;
internal string FormString
{
get { return formString; }
set { formString = value; }
}
Check your VB documentation for how to define the variables and expose
them
as properties in VB to accomplish the above C# code in VB.
In your control, add a property:
private iUsesMyControl parentForm;
internal iUsesMyControl ParentForm
{
set { parentForm = value; }
}
Somewhere early in your parent form, perhaps the load event, add
myControl.ParentForm = (iUsesMyControl)this;
This passes the MyForm to MyControl but by casting to iUsesMyControl it
tells MyControl only about the two variables defined in iUsesMyControl.
I
think this, in VB would be:
myControl.ParentForm = CType(Me, iUsesMyControl)
Now, in your control, you have available:
parentForm.FormInt
and
parentForm.FormString
--
Dale Preston
MCAD C#
MCSE, MCDBA
"Zahid Hayat" wrote:
First I would like to thank both of you for replying to my question.
As I
do
not have any experience with interfaces, therefore if you can provide
an
example I will be greatfull.
Zahid.
"Dale" wrote:
You can create a property in the control and then, when the parent
form
initializes the control or changes the value of the variable, set
the
value
of the property in the control.
Though, like Ciaran says, it is not very object oriented, I have had
cases
where I had to pass the parent form as a property to a control. To
get
around the issue he identifies of the control being used in a
different
form,
then you create an interface for the data that the control requires
from its
parent and implement that interface in all parents that use that
control. In
that way, by casting to the interface type, you can make any form
that
uses
your control into something you can use inside your control.
HTH
Dale
--
Dale Preston
MCAD C#
MCSE, MCDBA
"Zahid Hayat" wrote:
Is it possible to access value of a variable in Parent Form within
control at
runtime?
Please provide an example (possibly in VB).
.
| http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework/2006-11/msg00469.html | crawl-002 | refinedweb | 1,238 | 52.49 |
A comparative analysis of the performance of conventional and Islamic unit trust companies in Malaysia
Norma Md. Saad, M. Shabri Abd. Majid, Salina Kassim, Zarinah Hamid and Rosylin Mohd. Yusof
Department of Economics, Kulliyyah of Economics and Management Sciences, International Islamic University Malaysia, Kuala Lumpur, Malaysia

Abstract
Purpose - The purpose of this paper is to investigate the efficiency of selected conventional and Islamic unit trust companies in Malaysia during the period 2002 to 2005.
Design/methodology/approach - The paper adopts Data Envelopment Analysis (DEA) to investigate efficiency, as measured by the Malmquist index, which is decomposed into two components: efficiency change and technical change indexes.
Findings - The study indicates that technical efficiency is the main contributor to enhancing the efficiency of the Malaysian unit trust industry. In addition, the larger the size of the unit trust companies, the more inefficient the performance. In comparing the efficiency of unit trust companies, the study finds that some of the Islamic unit trust companies perform better than their conventional counterparts.
Research limitations/implications - The study is limited to five Islamic unit trust companies. Thus, the findings of this study are indicative, but inconclusive for the unit trust industry as a whole.
Practical implications - The results have two important implications for both conventional and Islamic unit trust companies in Malaysia. First, the deterioration of total factor productivity (TFP) in the unit trust industry in Malaysia is due to the deficiency of innovation in technical components. Second, the size of the unit trust companies has an adverse effect on the TFP performance.
Originality/value - The contribution of this study is that it analyzes the efficiency of the two types of unit trust industry, which are important and relevant for Malaysia. This significance arises from the dual financial system, in which the Islamic unit trust companies operate in parallel with their conventional counterparts. The comparison sheds some light on the performance of the Islamic unit trust companies, whose operations are based on profit-sharing, in contrast to the conventional unit trust companies.
Keywords Unit trusts, Capital markets, Indexing, Islam, Economic performance, Malaysia
Paper type Research paper
International Journal of Managerial Finance, Vol. 6 No. 1, 2010, pp. 24-47. © Emerald Group Publishing Limited, 1743-9132. DOI 10.1108/17439131011015779

This paper originates from a research project funded by the Management Centre, International Islamic University Malaysia. The authors would like to thank the Center for generously funding the research. The authors are also grateful to Dr Brian Bloch for his comprehensive editing of the manuscript.

1. Introduction
Information about the efficiency of portfolio investment, in this context, unit trust funds, is important to investors, simply because investors are motivated to ensure the maximum return on their investments. Therefore, information about the efficiency of unit trust funds is one of the major considerations in the fund-selection decision. Information about portfolio investment efficiency is also important to fund managers, to enable better pricing, a greater inflow of funds and improved profitability (Berger et al., 1993). In addition, the ability to measure the efficiency of unit trust investments helps fund managers to gauge their own performance in comparison to their competitors. This ensures that relevant factors are emphasized in efforts to improve fund performance and outperform the relevant benchmarks (Al-Shammari and Salimi, 1998). The efficiency of a portfolio investment can be measured by means of two approaches, parametric and non-parametric. The parametric approach essentially specifies a functional relationship between a performance variable and selected explanatory variables. Among the commonly used parametric approaches are the Stochastic Frontier Approach (see, for example, Yuengert, 1993), Distribution Free Approach (Troutt et al., 2005) and Thick Frontier Approach (Ang and Lin, 2004). However, this approach has been heavily criticized due to its unrealistic assumptions (normality and linearity assumptions) in the specifications of the functional forms to be estimated (Sengupta, 1989). In view of the shortcomings of the parametric approach, there has been an increasing interest in the non-parametric approach to measuring portfolio efficiency. The non-parametric approach is considered as superior to the parametric approach since it is not based on possibly invalid assumptions and is more general and flexible. The two most widely used forms of this approach are the Sharpe index (Sharpe, 1966) and Jensen's alpha (Jensen, 1968). The Sharpe index is essentially a risk-adjusted performance measure based on the reward to variability ratio, while Jensen's alpha is a measure for evaluating a portfolio manager's ability to predict security prices. Continuous efforts are being made to further improve on the techniques for quantifying portfolio efficiency, leading to the development of the Data Envelopment Analysis (DEA) of Charnes et al. (1978). Essentially, the DEA is a linear programming formulation that defines a correspondence between multiple inputs and outputs. While this method was originally used to measure the performance of educational institutions, the DEA has been widely applied to measure the efficiency of various organizations, including banks (Sherman, 1984; Drake and Howcroft, 1994), insurance companies (Berger et al., 1997; Cummins et al., 1999a, b; Meador et al., 2000), hospitals (Banker et al., 1984), and retail sales units (Mahajan, 1991). The application of the DEA analysis to measure unit trust performance has been extensive. For instance, Murti et al. (1997) adopt the DEA analysis to examine the efficiency of the unit trust industry in the United States, by examining the relationship between return (representing benefit) and expense ratio, turnover, risk and loads (representing costs). The results of the study suggest that the efficiency of unit trusts is not related to transaction costs and that the impact of scale effect is mixed. Other studies investigating the efficiency of unit trusts, using a similar approach, include Chang and Lewellen (1984), Land et al. (1993), and Banker and Thrall (1992). However, to the best of our knowledge, there are no studies investigating the efficiency of unit trusts in Malaysia using the DEA approach.
Most of the existing studies on the performance of Malaysian unit trusts rely on the CAPM. This includes Ismail and Shakrani (2003) on Islamic unit trust performance in Malaysia, and Shamsher and Annuar (1995), which uses several benchmark performance measures to assess the performance of 54 unit trust funds in Malaysia over the period 1988 to 1992. The study finds that the return on unit trusts investment in Malaysia is well below the risk free rate and stock market returns. Chua (1985) uses the Sharp Index and Treynor Index to examine the performance of 12 unit trust funds in Malaysia over two sub-periods: 1974-1979 and 1979-1984. He finds that fund characteristics such as size, expense ratio and portfolio turnover are all negatively correlated to performance. Chuan (1995) uses monthly data covering the period 1984-1993 on a sample of 21 unit trust funds, employing several investment measures, namely, the Adjusted Sharpe Index, Treynor Index, Jensen's Alpha and the Adjusted Jensen's Alpha. The results show that the unit trust funds as a whole performed worse than the market and the fund characteristic, namely the expense ratio, correlates negatively with fund performance. Likewise, Tan (1995) and Chuan (1995) use the benchmark model based on Jensen's alpha and the CAPM, to compare the actual portfolio returns against that of the market benchmarks. Later studies with Malaysian data continue to employ the benchmark model on larger samples, including Low and Ghazali (2005) and Low (2007). In view of the above research scenario, this present study intends to fill the gap by applying the DEA to investigate the efficiency of selected conventional and Islamic unit trust companies in Malaysia. Apart from using the DEA and a more recent data set, another innovative aspect of this study is that it compares the efficiency of the conventional unit trust companies with that of the Islamic unit trusts. The performance of the conventional unit trusts and Islamic unit trusts are expected to be different, since the former are subject to the capital market rules, while the Islamic unit trusts are subject to both the capital market rules and shariah principles. Despite the fact that more than 90 percent of the shares listed are shariah-compliant, the remaining 10 percent of the shares listed may comprise highly profitable non-shariah-compliant activities. According to Ghoul et al. (2007), companies which are not acceptable based on Islamic principles include the majority of financial institutions involved in money lending and the charging of interest, such as bank and insurance companies. Other screening criteria prohibit investments involving the production, distribution and/or earning profits from alcohol, pornography, tobacco, gambling, weapons, music, entertainment, processing pork meat or non-halal meat, hotels and airlines which serve alcohol. Comparing and contrasting the efficiency of the two types of unit trust industry is important and relevant for Malaysia, because of its dual financial system, in which Islamic unit trust companies operate parallel with their conventional counterparts. The comparison thus sheds light on the performance of the Islamic unit trust companies, whose operations are limited to selected shariah-compliant companies, as opposed to the conventional unit trust companies which can invest in any suitable companies that can potentially give the highest return. Ultimately, the findings of the study are expected to contribute towards improving the efficiency of the unit trust industry in Malaysia as a whole. The rest of this study is organized as follows. Section 2 presents an overview of the unit trust industry in Malaysia.
Section 3 describes the data and discusses the methodology of the DEA. Section 4 presents the results and analysis, and Section 5 concludes.
2. Overview of the Malaysian unit trust industry
A unit trust fund is a professionally managed, collective investment scheme that pools client money and invests it with a specific objective, as stated in its documentation. Unit trust funds can be invested in a variety of assets or investment classes, which may not be available to an individual investor. These classes may include government bonds and corporate bonds. Such investments require a large amount of funds which are often beyond the capability and affordability of individual investors. Collectively, however, those investments can become accessible. The type of investment portfolios in unit trust funds depends on the nature of the fund, as well as its objectives and investment strategy. For example, a bond fund provides an individual investor with access to the bond market and a potentially steady stream of income (Prudential, 2007). In Malaysia, the unit trust industry had its modest beginnings in 1959, when the first unit trust management company, the Malayan Unit Trusts Limited, was launched in August 1959, by a group of Australian investors. During the 1960s and 1970s, the unit trust industry was dominated by two major players, ASM MARA Unit Trust Management and Asia Unit Trusts Berhad, companies owned by the Majlis Amanah Rakyat Malaysia (MARA) or the Council of Trust for the Indigenous, a body set up by the Malaysian government to improve the socio-economic conditions of the indigenous people. The 1970s also witnessed the launching of state-government sponsored unit trusts, which may have been launched in reaction to the Federal Government's call to mobilize domestic household savings. The 1980s marked an important development in the unit trust industry, when Skim Amanah Saham Nasional (National Unit Trust Scheme), managed by Permodalan Nasional Berhad (PNB), was launched on 20th April 1981. The launching of Skim Amanah Saham Nasional provided the impetus for new growth in the industry and enabled the government to fulfill its objective of mobilizing the savings of the indigenous people over the long-term. The 1980s also witnessed the emergence of unit trust management companies, which are subsidiaries of financial institutions. The establishment of the bank-affiliated unit trust management companies indicated a significant development in the industry, as their involvement had, in many ways, assisted the marketing and distribution of unit trusts through banks' branch networks, thus widening the channels used in reaching potential investors. During the 1990s, most of the unit trusts launched were equity funds. The rapid growth of the unit trust industry could be observed from the number of unit trust management companies, which tripled from 13 in 1992 to 37 in 2002 (Md Taib and Isa, 2007). Prior to the 1997 Asian financial crisis, the size of the approved unit trusts was larger. However, weak demand resulting from the crisis, particularly in 2000, saw smaller unit trusts being launched. The establishment of structured Islamic funds management in Malaysia took place in early 1993, when a private unit trust fund was first launched. The first Islamic trust fund, Tabung Ittikal, by Arab-Malaysian Securities, was established on 12th January 1993 and became the precursor to the development of an Islamic unit trust sector in the country (Barom, 2004). More shariah unit trusts were launched thereafter and by 31 December 2000, there were 13 shariah unit trusts (Permodalan Nasional Berhad, 2001). As at 31 March 2007, there were 99 shariah-based funds, comprising 47 equity funds, 20 balanced funds, 18 bond funds, and 14 other funds (Securities Commission, 2007). The rapid development of the Islamic unit trust sector in the Malaysian capital market signifies another continuous commitment on the part of the Malaysian government in setting up a fully-fledged Islamic financial system in Malaysia. This system conforms to Islamic principles and is intended to be as efficient and competitive as its conventional counterpart in serving the financial needs of the Malaysian community. The Islamic unit trust sector in Malaysia is a subset of the overall unit trust industry and a component of the Malaysian Islamic capital market. The industry is highly regulated by the government through the Securities Commission (SC), in order to safeguard investor interests and guarantee the integrity and systemic stability of the industry. Noordin (2002), as cited in Barom (2004), states that the Islamic unit trust schemes are a group of collective investment funds, which give investors the opportunity to invest in a professionally managed and diversified portfolio of securities that conform to the principles of shariah. Such halal securities do not include the stock of companies involved in conventional financial services (banking and insurance), gambling, alcoholic beverages and non-halal food products. Alhabshi (1994), as cited in Barom (2004), explains that Islamic unit trusts must also avoid involvement with riba or interest, dubious transactions, and other forms of unethical or immoral activities, such as market manipulations, insider trading, short selling, and even excessive exposure of one's financial position by contra deals that cannot be backed by sufficient funds. The returns from an Islamic unit trust fund must go through a process of cleansing or purification from any interest elements. Proceeds (dividends) of permissible securities that originate from mixed sources with non-halal or dubious revenues must also be removed. In addition, returns from securities which were previously permissible, but have subsequently been confirmed non-halal and removed from the updated list of approved shariah securities, and which could not be disposed of due to market conditions, are also excluded (Barom, 2004). In view of the increasing role played by Islamic unit trusts in the Malaysian financial sector, several empirical studies have been conducted to assess various aspects of the industry. For example, Ismail and Shakrani (2003) examine the relationship between betas and returns to Islamic funds, using the unconditional CAPM and conditional CAPM. The results suggest that the relationship between beta and returns depends on market conditions. In particular, there is a highly significant relationship between positive and negative beta coefficients during bull and bear phases, respectively. In addition, the conditional relationship is shown to be stronger in the bear phases than in bull phases, implying that Islamic fund investors are relatively risk averse. Abdullah et al. (2007) evaluate the performance of Malaysian Islamic unit trusts and compare them with conventional ones, by utilizing monthly returns adjusted for dividends and bonuses, for 65 funds over the period of January 1992 to December 2001. Based on non risk-adjusted returns, conventional and Islamic funds perform worse than the market for the total sample period data. However, the returns on Islamic funds are about the same as those of conventional ones. Interestingly, when risk-adjusted returns are considered, the performance of Islamic funds is better than that of conventional funds during financial crisis and post-crisis periods.
3. Data sources and methodology
This study utilizes data in the form of two inputs and one output to investigate the efficiency of the Malaysian unit trust industry. According to Murti et al. (1997), in portfolio management, performance is evaluated in terms of cost-benefit ratios. That is, consumers want funds that simultaneously maximize the benefits (returns) and minimize the costs (expense ratio, portfolio turnover ratio, and loadings). This framework is consistent with notions of market efficiency regarding transaction costs in the unit trust industry. However, because of data availability constraints, our study considers only two inputs: expenses ratio and portfolio turnover ratio. Consistent with previous work by Ippolito (1989), Bauman and Miller (1994), Murti et al. (1997), Sengupta and Zohar (2001), and Daraio and Simar (2006), the inputs used in this study are the management expenses ratio and portfolio turnover ratio. The portfolio turnover ratio (PTR) is defined as:

PTR = \frac{(\text{Total acquisitions for the year} + \text{Total disposals for the year}) / 2}{\text{Average value of the fund calculated on a daily basis}}

while the management expenses ratio (MER) refers to:

MER = \frac{\text{Fees} + \text{Recoverable expenses}}{\text{Average value of the fund calculated on a daily basis}} \times 100 = \frac{A + B + C + D}{E} \times 100

where: A = Annual management fees; B = Annual trustee fees; C = Auditor remuneration; D = Administration expenses and tax agent fees; E = Average net assets value of trust fund.

The output used in this study is returns, as in Ippolito (1989), Droms and Walker (1996), and Murti et al. (1997). In the first instance, the aim was to investigate a larger sample, but a complete data set for the period 2002-2005 is only available for 27 unit trust companies. The data employed in this paper were gathered from the annual reports of these companies. Even though there are some other useful inputs such as brand, marketing, and mode of sales distribution, the information cannot be obtained from the annual report of a particular unit trust company. Incorporating these potential inputs would make the study more comprehensive, but data limitations do not permit their inclusion. In exploring the contributions of technical and efficiency changes to productivity increases in the Malaysian unit trust industry, the study adopts the generalized output-oriented Malmquist index developed by Fare et al. (1989). The Malmquist indexes are constructed using the Data Envelopment Approach (DEA) and estimated using Coelli (1996) DEAP version 2.1. To date, the Malmquist productivity indexes and DEA have been used in a variety of studies. These studies include aggregate comparisons of productivity between countries (Fare et al., 1994a) as well as of various
economic sectors (see for example, Tauer (1998) and Mao and Koo (1996), Alam and Sickless (1995) on airlines; Asai and Nemoto (1999) and Calabrese et al. (2001) on the telecommunications industry; Tulkens and Malnero (1996) on banking; Avkiran (2001) on universities; Cummins et al. (1999a), Abu Mansor and Radam (2000), and Diacon et al. (2002) on insurance). Ali and Seiford (1993) highlighted that DEA is a well-established, non-parametric efficiency measurement technique, which has been used extensively in over 400 studies of efficiency in the management sciences over the last decade. For the purpose of this study, a more appropriate method is the cross-efficiency frontiers technique of Cummins et al. (1999a). However, due to the small sample of Islamic unit trusts, compared to that of their conventional counterparts, the Malmquist total factor productivity (TFP) approach is adopted. One limitation of using the TFP Malmquist approach is that it is sensitive to market conditions, such that a period of declining returns is associated with declining productivity. However, the Malmquist TFP approach takes this into account indirectly. For instance, poor market conditions will affect the output of unit trusts in the country and in turn, affect their productivity. The average total productivity of the unit trusts across the country may fall, but still there are firms on the best-practice (efficiency) frontier. Using the Malmquist TFP approach enables us to measure the efficiency of unit trusts with respect to particular market conditions, relative to a unit trust on the best practice frontier and also to compare the efficiency of the unit trust across different time periods. In short, the Malmquist productivity approach can be used to identify productivity differences between two firms, or one firm over two time periods. However, if the study focuses on unit trusts across countries, then different market conditions could be a major consideration. Nonetheless, this study considers the performance of unit trusts in Malaysia alone. Following Fare et al. (1989), the Malmquist TFP index is written as follows:

M_o(x^{t+1}, y^{t+1}, x^t, y^t) = \frac{D_o^{t+1}(x^{t+1}, y^{t+1})}{D_o^{t}(x^{t}, y^{t})} \left[ \frac{D_o^{t}(x^{t+1}, y^{t+1})}{D_o^{t+1}(x^{t+1}, y^{t+1})} \cdot \frac{D_o^{t}(x^{t}, y^{t})}{D_o^{t+1}(x^{t}, y^{t})} \right]^{1/2}    (1)

where the notation D_o^{t}(x^{t+1}, y^{t+1}) represents the distance from the period t+1 observation to period t technology. The first ratio on the right-hand side of equation (1) measures the change in relative efficiency (i.e. the change in how far observed production is from maximum potential production) between years t and t+1. The second term inside the brackets (the geometric mean of the two ratios) captures the shift in technology (i.e. movements of the frontier function itself) between the two periods, evaluated at x^t and x^{t+1}. Essentially, the change in relative efficiency measures how well the production process converts inputs into outputs (catching up to the frontier) and the latter reflects improvement in technology. According to Fare et al. (1994a), improvements in productivity yield Malmquist index values greater than unity. A deterioration in performance over time is associated with a Malmquist index less than unity. The same interpretation applies to the values derived from components of the overall TFP index. Improvements in the efficiency component yielded index values greater than one, which can be considered evidence of a shift towards the frontier. Values of the technical change component greater than one are considered to be evidence of technological progress.
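The distance functions above are computed in the paper with Coelli's DEAP 2.1. Purely as an illustration of what one such distance involves (our sketch, not the authors' code; the use of numpy/scipy, the function name and the made-up figures are assumptions of this sketch), the output-oriented, constant-returns-to-scale efficiency of a fund is the optimal value of a small linear program, and the four distances entering the index come from pairing the evaluated observation with the reference technology of period t or t+1:

    import numpy as np
    from scipy.optimize import linprog

    def output_efficiency(x_o, y_o, X_ref, Y_ref):
        # Output-oriented CRS DEA for one fund:
        #   max phi  s.t.  sum_j lambda_j x_j <= x_o,
        #                  sum_j lambda_j y_j >= phi * y_o,  lambda >= 0
        # The output distance function is D_o = 1 / phi (1.0 = on the frontier).
        n = X_ref.shape[0]
        c = np.r_[-1.0, np.zeros(n)]                        # maximise phi
        A_in = np.c_[np.zeros((X_ref.shape[1], 1)), X_ref.T]
        A_out = np.c_[y_o.reshape(-1, 1), -Y_ref.T]
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[x_o, np.zeros(Y_ref.shape[1])],
                      bounds=[(0, None)] * (n + 1))
        return res.x[0]

    # Two inputs (PTR, MER) and one output (returns) for three hypothetical funds.
    X_t = np.array([[0.7, 1.5], [0.5, 1.2], [1.1, 2.0]])
    Y_t = np.array([[12.0], [9.0], [7.0]])
    phi = output_efficiency(X_t[0], Y_t[0], X_t, Y_t)
    print(1.0 / phi)   # D_o^t(x^t, y^t) for the first fund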
Consistent with Fare et al. (1994a), this study uses an enhanced decomposition of the Malmquist index, decomposing the efficiency-change component, calculated relative to constant-returns-to-scale technology, into a pure efficiency component (calculated relative to the variable returns to scale (VRS) technology) and a scale-efficiency change component which captures changes in the deviation between the VRS and constant-returns-to-scale (CRS) technology. The subset of pure efficiency change measures the relative ability of operators to convert inputs into outputs, while scale efficiency measures the extent to which the operators can take advantage of returns to scale, by altering its size in the direction of the optimal scale.

4. Empirical results and analysis
4.1 Input and output specifications
Two inputs and one output are utilized to investigate the efficiency of the unit trust industry in Malaysia in this study. The inputs are the portfolio turnover ratio and management expenses ratio, while the output is returns. These inputs and output are used to investigate the efficiency of 27 unit trust companies in Malaysia, of which five are Islamic unit trust companies. The unit trust companies under study are HLG Dana Makmur, KL Ittikal Fund, Mayban Dana Yakin, Pacific Dana Aman, RHB Islamic Bond Fund, Alliance Vision Fund, Apex Small-Cap Fund, APEX CI Tracker Fund, APEX Malaysia Growth Trust (Apex MG Trust), HLB Construction, Infrastructure and Property Sector Fund (CIPSF), HLB Consumer Products Sector Fund (CPSF), HLB Finance Sector Fund (FSF), HLB/HLG Blue Chip Fund, HLB Industrial and Technology Sector Fund (ITSF), HLB Penny Stock Fund, HLB Trading/Service Sector Fund (TSSF), KLCI Tracker Fund, Mayban Income Trust Fund, Mayban Unit Trust Fund, OSK-UOB Equity Trust, OSK-UOB Kidsave Trust, OSK-UOB Small Cap Opportunity (SCO) Unit Trust, Public Industry Fund, Public Small Cap Fund, PB Balanced Fund, RHB Bond Fund, and TA Comet Fund. It is important to stress that the unit trusts included in this study consist of a combination of both passively-managed (tracker) and actively-managed funds. A tracker fund is categorized as passively managed, yet it is one of the most efficient, due to the low fees paid for a simple tracking process. While there may be some differences in the investment approaches between these two groups of unit trusts, the funds selected in this study invest in a wide variety of economic sectors. Moreover, the main focus of the study is to compare the efficiency of Islamic and conventional unit trusts. Even though there may be differences in the investment philosophy between the two groups of funds, it is beyond the scope of this study to consider them. However, this aspect could be a useful and interesting area for future research. The first five funds included in the study are operated on shariah principles, while the rest are based on the conventional practices. Data on inputs and outputs are collected from the period 2002 to 2005. Table I reports the descriptive statistics of the inputs and output of the 27 unit trust companies in Malaysia during the period of study. The average returns within the period are 11.085, while the average portfolio turnover ratio and management expense ratio are 0.718 and 1.633, respectively. Based on the individual firm analysis, the Alliance Vision Fund yields the highest output, which occurred in 2002, while the OSK-UOB Small Cap Opportunity Unit Trust records the lowest output, in 2005.
With respect to the inputs, the Apex Small-Cap Fund and APEX CI Tracker Fund yield the highest portfolio turnover and management expenses in 2002. The HLB CPSF and
RHB Islamic bond funds yield the lowest portfolio turnover and management expenses in 2003 and 2004, respectively[1]. The two inputs and one output can be imposed on the x- and y-axes to show the performance of all 27 mutual unit trusts relative to each other. Figure 1 illustrates this procedure, where the x-axis represents the portfolio turnover ratio to returns, while the y-axis represents the management expenses ratio to returns. The unit trust which is nearer to the point of origin can be regarded as more efficient than those which are farther away from it. As illustrated in Figure 1, with the exception of Maybank Dana Yakin, which is numerically distant from the remainder of the observations, 26 unit trusts are clustered in the lower quadrant of the graph. One of the tracker funds, namely the APEX CI Tracker fund, is shown to be more efficient with respect to the management expenses ratio to returns, than the portfolio turnover ratio to returns. Four out of five Islamic unit trusts, namely KL Ittikal, Pacific Dana Aman, HLG Dana Makmur and the RHB Islamic Bond Fund, are shown to perform better than most of the conventional unit trusts, with their data points being nearer to the origin. Thus, in general, it can be inferred that the Islamic frontier performs better than the conventional frontier.

4.2 Production frontier and efficiency

Since the basic component of the Malmquist productivity index is related to measures of efficiency, Table II reports the efficiency change for the 27 unit trust companies from 2002-2005 for both constant returns to scale (CRS) and variable returns to scale (VRS). The value of unity implies that the firm is on the industry frontier in the associated year, while values less than unity imply that the firm is below the frontier or technically inefficient. Thus, the lower the values than unity, the more inefficient the firm, compared to those firms with values closer to unity.

Table II. Efficiency of the unit trust industry, 2002-2005 [CRS and VRS efficiency scores by firm; the numerical columns are garbled in this extraction and are omitted].

As reported in Table II, for 2002, the Alliance Vision Fund and Mayban Unit Trust Fund are found to be the only two unit trust funds which were consistently efficient, both under CRS and VRS. In 2003, the Alliance Vision Fund and TA Comet Fund were consistently efficient under both CRS and VRS. The KLCI Tracker Fund is the only consistently efficient fund in 2004, while in 2005, four unit trust companies are found to be consistently efficient, namely, RHB Islamic Bond Fund, Mayban Income Trust Fund, HLB CPS Fund and RHB Bond Fund. Although the RHB Bond Fund was only found to be on the industry frontier in 2005 based on CRS, it is found to be on the frontier for three consecutive years, 2003, 2004 and 2005, based on VRS. On the other hand, the Mayban Unit Trust Fund, which was found to be on the industry frontier in 2002, based on CRS, is found to be on the frontier for 2002, 2003 and 2004, based on VRS. These indicate that the unit trust companies have successfully kept pace with
technically feasible production possibilities and improved their distance to the industrial production frontier for both versions of technology. Table II shows the percentage of the realized output level compared to the maximum potential output level for the given input mix. For example, in 2002, the Islamic unit trusts produced 62.3 percent of their potential output level, while the conventional unit trusts produced 51.2 percent of their potential output under CRS. Under VRS of the same year, Islamic unit trust companies produced 70.0 percent of their potential output level, whereas the conventional unit trust companies produced 56.0 percent of their potential output. During all the years of analysis, the efficiency of 17 firms is found to be above average (56.7 and 60.1 percent for CRS and VRS, respectively) based either on CRS or VRS. The most efficient firm based on CRS (89.4 percent) and VRS (87.7 percent) is the Mayban Unit Trust Fund. On the other hand, the least efficient fund is the APEX Small-Cap Fund (42.5 percent) under CRS, and the OSK-UOB SCO Unit Trust (44.8 percent) under VRS. Out of the five unit trust companies, the efficiencies of four Islamic unit trust companies are above average, except for Mayban Dana Yakin, based on both CRS and VRS. On average, the five Islamic unit trusts perform better, with a CRS of 60.9 percent and VRS of 64.3 percent, while conventional unit trusts stand at 56.5 percent and 60 percent, respectively. These findings further indicate that investors with funds in some Islamic unit trust companies may be better off than in some of the conventional unit trust companies. This suggests the financial viability of Islamic unit trust funds in competing with their conventional counterparts in a dual financial system such as that of Malaysia. As indicated by the weighted geometric mean in Table II, the average efficiency for the entire industry increased from 53.1 percent in 2002 to 59.9 percent in 2003, but showed a slight decrease to 58.5 percent in 2004 and declined further to 55.0 percent in 2005. As for VRS, the average efficiency increases from 58.2 percent in 2002 to 61.1 percent in 2003. The average efficiency declined to 59.1 percent in 2004, but increased again to 62.1 percent in 2005. In other words, the efficiency performance of Malaysia's unit trust industry continues to improve, based on VRS rather than CRS.

4.3 Productivity performance of individual companies

Table III reports the performance of the unit trust companies from 2002 to 2005 in terms of TFP change and its two subcomponents, technical change and efficiency change. Note that a value of the Malmquist TFP productivity index and its components of less than one implies a decrease or deterioration in productivity. Conversely, values greater than one indicate improvements in productivity with regard to the relevant aspect. Thus, subtracting 1 from the number reported in the table yields an average increase or decrease per annum for the relevant time period and relevant performance measure. Also note that these measures capture performance relative to the best practice decision-making unit (DMU) in the relevant performance or relative to the best practice in the sample.

Table III. Unit trust firms' relative Malmquist TFP, technical and efficiency changes, 2002-2005 [annual index values by firm; the numerical columns are garbled in this extraction and are omitted].

As reported in Table III, the Apex CI Tracker Fund, Mayban Dana Yakin and HLB CIPSF have the highest average TFP growth, at annual average rates of 477.3 percent, 372.7 percent and 26.3 percent for the periods 2002-2003, 2003-2004 and 2004-2005, respectively.
By contrast, Mayban Dana Yakin recorded the greatest deterioration in TFP for the period 2002-2003, at an annual average rate of -79.3
percent, while the OSK-UOB Small Cap Opportunity Unit Trust is found to have the lowest average TFP growth for the periods 2003-2004 and 2004-2005, at annual average rates of -67.7 percent and -95.2 percent, respectively. The positive average TFP change for all firms (0.6 percent) was recorded only during the 2003-2004 period, while the TFP changes over 2002-2003 and 2004-2005 deteriorated at annual rates of -9.4 percent and -37.4 percent. In terms of technical efficiency changes, the average growth in technical efficiency for all firms was negative during the periods 2002-2003 and 2004-2005, with annual growth rates of -19.7 percent and -33.4 percent. However, for 2003-2004, the growth rate in technical efficiency was positive (3 percent). Individually, the Apex Small-Cap Fund recorded the highest technical progress of 42 percent over the period 2003-2004. By contrast, the RHB Bond Fund showed the highest technical regress of 37 percent for the period 2002-2003, while the TA Comet Fund was found to have the lowest average technical change for the period 2003-2004, at an annual average rate of -92 percent. For the period 2004-2005, the OSK-UOB SCO Unit Trust experienced a technical regress of 51.9 percent. Conversely, the lowest deterioration in technical change was shown by the KL Ittikal Fund (-11.1 percent) in 2002-2003 and the Mayban Income Trust Fund (-2.6 percent) in 2004-2005. However, in 2003-2004, the Apex Small Cap Fund recorded the highest average technical growth at 42 percent annually. Finally, the average efficiency change for the entire industry was only positive in the period 2002-2003, with an annual rate of 12.8 percent, while negative growth rates in efficiency changes were recorded over 2003-2004 and 2004-2005, at annual rates of -2.3 percent and -6 percent, respectively. The Apex CI Tracker Fund Unit Trust is found to have the highest growth rate in efficiency in 2002-2003, with an annual rate of 583.9 percent, followed by Mayban Dana Yakin (358.8 percent) in 2003-2004 and HLB ITSF (106.1 percent) in 2004-2005. Conversely, Mayban Dana Yakin recorded the highest deterioration in efficiency growth in 2002-2003, with an annual rate of -71.9 percent, and the OSK-UOB SCO Unit Trust (-68.6 percent) in 2003-2004 and (-90.2 percent) in 2004-2005. In short, the changes in TFP during the period of study are caused mostly by changes in efficiency, as compared to technical efficiency changes. The TFP growth rates were negative in 2002-2003 and 2004-2005, due to a deterioration in technical efficiency. In contrast, the TFP growth rate was positive in 2003-2004, due to a positive change in technical efficiency. The TFP change, on average, yields only minimal growth in the period of 2003-2004, with 0.6 percent, but it deteriorated over 2002-2003 and 2004-2005 by -9.4 percent and -37.4 percent, respectively. Overall, all the firms recorded an average decrease in TFP over the period of 2002-2005. In order to identify a change in scale efficiency, the efficiency change is decomposed further into two subcomponents, namely pure efficiency change and scale efficiency change. The results in Table IV show that pure efficiency and scale efficiency appear to be equally important sources of growth for efficiency change.

Table IV [caption not recoverable from the extraction]: decomposition of efficiency change into pure efficiency and scale efficiency change by firm, 2002-2005; the numerical columns are garbled and omitted.

The Apex CI Tracker Fund recorded the highest progress in pure efficiency in 2002-2003, with an annual growth rate of 319 percent, Mayban Dana Yakin (358.8 percent) in 2003-2004, and HLB CIPSF (114.7 percent) in 2004-2005.
By contrast, the highest deterioration in pure efficiency was shown by Mayban Dana Yakin (-77.7 percent) in 2002-2003, the OSK-UOB SCO Unit Trust (-68.6 percent) in 2003-2004, and the OSK-UOB SCO Unit Trust (-90.2 percent) in 2004-2005.
Relative to other unit trust firms, the Apex Small-Cap Fund recorded the highest progress in scale efficiency, with an average growth rate of 89 percent per annum in 2002-2003. The Mayban Unit Trust was next, with 39.3 percent in 2003-2004, and the RHB Bond Fund with 17.2 percent in 2004-2005. On the other hand, the Mayban Unit Trust Fund, RHB Bond Fund and Apex MG Trust recorded the highest deterioration in scale efficiency, of -30.3 percent in the period 2002-2003, -9.2 percent in the period of 2003-2004, and -39.4 percent in the period of 2004-2005. Overall, during the entire period of study, 2003-2004 and 2004-2005 are identified as periods of both pure efficiency and scale efficiency deterioration, with average rates of -3.3 percent and -10.6 percent, respectively. By contrast, the years 2002-2003 and 2004-2005 recorded pure efficiency improvements with average rates of 5 percent and 5.2 percent, respectively. The periods 2002-2003 and 2003-2004 show scale efficiency improvements with annual rates of 7.5 percent and 1 percent, respectively.

4.4 Industry productivity

Table V summarizes the performance of the Malmquist TFP index of the unit trust industry in Malaysia between 2002 and 2005.

Table V. Mean summary of Malmquist TFP Index of unit trust companies, 2002-2005 [the numerical columns are garbled in this extraction and are omitted].

On average, only three unit trust firms recorded positive improvements in their TFP performance, i.e. the APEX CI Tracker Fund (36.7 percent), RHB Islamic Bond Fund (3.2 percent) and Mayban Income Trust Fund (1.2 percent), while the Alliance Vision Fund and OSK-UOB SCO Unit Trust recorded the largest deterioration in TFP, with annual rates of -42.8 percent and -70 percent, respectively. Of the five Islamic unit trust companies, only two firms, i.e. the RHB Islamic Bond Fund (the second highest TFP improvement out of 27 firms) and the KL Ittikal Fund, have TFP performances above the industrial average of -16.8 percent, at 3.2 percent and 17.5 percent, respectively. In terms of efficiency changes, 16 firms recorded improvements in their annual average growth rates, with the Apex CI Tracker Fund having the highest efficiency growth of 70.3 percent, followed by HLB CPSF (30.2 percent), RHB Islamic Bond Fund (20 percent), RHB Bond Fund (18.8 percent), and KL Ittikal Fund (18.2 percent). The lowest growth rate in efficiency is recorded by the OSK-UOB SCO Unit Trust, with an annual rate of -59.8 percent. Only two Islamic unit trust firms recorded improvements in efficiency above the industrial average of 1.2 percent, i.e. the RHB Islamic Bond Fund (20 percent) and KL Ittikal Fund (18.2 percent). Finally, the unit trust industry in Malaysia is found to be technically inefficient, with an average rate of -18 percent. The lowest deterioration in technical efficiency was yielded by the Mayban Unit Trust Fund (-10.6 percent), while the OSK-UOB SCO Unit Trust recorded the greatest deterioration (-25.5 percent). Interestingly, three Islamic unit trust companies recorded average deteriorations in technical efficiency lower than the industrial average of -18 percent, i.e. the RHB Islamic Bond Fund (-14 percent), HLG Dana Makmur (-16.2 percent), and Mayban Dana Yakin (-17.9 percent). Overall, the average industry deterioration of TFP of 17.0 percent is caused mainly by the deterioration in technical change (-18 percent). However, efficiency change recorded a positive contribution of 1.2 percent during the period under review. Furthermore, the efficiency change is caused mainly by pure efficiency (2.2 percent), rather than by scale efficiency (-1 percent).
This indicates that the size of the companies negatively affects the unit trust TFP performance. Our finding of substantial regress in technical components suggests that the decline in TFP of the
unit trust industry in Malaysia is due to a lack of technical innovation. This further suggests that Islamic unit trust companies could improve their productivity through technical innovation. Figure 2 reports the average changes in TFP and its components. In 2003, Islamic unit trusts performed better than the conventional ones only in terms of their scale efficiency. The performances of Islamic unit trusts in 2004 improved significantly, compared to the conventional unit trusts, as shown by greater average changes in TFP and its components, i.e. efficiency, technical efficiency and pure efficiency. In this year,
only the scale efficiency of conventional unit trusts is found to be marginally higher than that of their Islamic counterparts. The general performance of the conventional unit trusts continued to decline in 2005, as indicated by the downward trend of their TFP and its components, namely technical efficiency and scale efficiency. Overall, the Islamic unit trusts are found to perform better than their conventional counterparts during the period under review.

5. Conclusion

This paper investigates the efficiency of conventional and Islamic unit trust companies in Malaysia over the period 2002 to 2005. As mentioned earlier, there is a need to compare the efficiency of the conventional and Islamic unit trust companies, since the conventional unit trusts are only subject to potential capital market loss, whereas the Islamic unit trusts are subject to both potential capital market loss and the constraints imposed by the shariah principles. We would, therefore, expect the performance of these two types of unit trust to differ. The input-output data, consisting of a panel of conventional and Islamic unit trust companies, are analyzed in order to measure the efficiencies of these companies using the DEA approach. Overall, the efficiency of the Islamic unit trust companies is found to be comparable to their conventional counterparts and, to a certain extent, some of the Islamic unit trust companies were found to be above average in TFP. Two Islamic unit trust companies, namely the RHB Islamic Bond Fund and KL Ittikal Fund, recorded TFP performances which were above the industrial average. Two of the five unit trust companies included in our analysis were found to experience improvements in efficiency. In addition, three Islamic unit trust companies, i.e. RHB Islamic Bond Fund, HLG Dana Makmur, and Mayban Dana Yakin, recorded average deteriorations in technical efficiency lower than the industry average. These findings should assist the Islamic unit trust companies in improving their technical efficiency, in order to gain a competitive edge over their conventional counterparts. The results have important implications for both the conventional and Islamic unit trust companies in Malaysia. During the period of analysis, on average, the Malaysian unit trust industry experienced a deterioration of TFP, due mainly to a deterioration in technical efficiency. Efficiency change, however, contributed positively to TFP. In addition, the efficiency change is largely caused by pure efficiency, rather than scale efficiency. This indicates that an increasing size of unit trust company exerts an adverse effect on the TFP performance. Our findings of substantial regress in the technical components and positive growth in efficiency imply that the deterioration of TFP in the unit trust industry in Malaysia is due to the deficiency of innovation in technical components. The study is limited to five Islamic unit trust companies and the findings are thus indicative, but inconclusive, of the Malaysian unit trust industry as a whole. Since more Islamic unit trust companies have been launched in the country, further comprehensive studies are needed to examine the efficiency of Islamic unit trust companies vis-à-vis their conventional counterparts.

Note
1. Data are available upon request from the authors.
References

Abdullah, F., Hassan, T. and Mohamad, S. (2007), Investigation of performance of Malaysian Islamic unit trust funds, Managerial Finance, Vol. 33 No. 2, pp. 142-53.
Abu Mansor, S. and Radam, A. (2000), Productivity and efficiency performance of the Malaysian life insurance industry, Jurnal Ekonomi Malaysia, Vol. 34, pp. 93-105.
Al-Shammari, M. and Salimi, A. (1998), Modeling the operating efficiency of banks: a nonparametric methodology, Logistics Information Management, Vol. 11, pp. 5-12.
Alam, I. and Sickless, R. (1995), Long run properties of technical efficiency in the US airline industry, mimeo, Rice University.
Alhabshi, S.O. (1994), Development of capital market under Islamic principles, paper presented at the 1994 Conference on Managing and Implementing Interest-Free Banking/Islamic Financial System, Centre for Management Technology, Kuala Lumpur.
Ali, A.I. and Seiford, L.M. (1993), The mathematical programming to efficiency analysis, in Fried, H.O., Lovell, C.A.K. and Schmidt, S.S. (Eds), The Measurement of Productive Efficiency: Techniques and Applications, Oxford University Press, New York, NY, pp. 120-59.
Ang, J.S. and Lin, J.W. (2004), A fundamental approach to estimating economies of scale and scope of financial products: the case of mutual funds, Review of Quantitative Finance and Accounting, Vol. 16 No. 3, pp. 205-22.
Asai, S. and Nemoto, J. (1999), Measurement of Efficiency and Productivity in Regional Telecommunications Business, Institute for Post and Telecommunications Policy Discussion Paper, No. 3, June 25, Austin, TX.
Avkiran, N. (2001), Investigating technical and scale efficiencies of Australian universities through data envelopment analysis, Socio-Economic Planning Sciences, Vol. 35, pp. 57-80.
Banker, R.D. and Thrall, R.M. (1992), Estimation of returns to scale using data envelopment analysis, European Journal of Operational Research, Vol. 62 No. 1, pp. 74-84.
Banker, R.D., Charnes, A. and Cooper, W.W. (1984), Some models for estimating technical and scale inefficiencies in data envelopment analysis, Management Science, Vol. 30 No. 9, pp. 1078-92.
Barom, M.N. (2004), An overview of Islamic unit trusts in Malaysia, KENMS Occasional Paper No. 3, International Islamic University Malaysia, Kuala Lumpur.
Bauman, W.S. and Miller, R.E. (1994), Can management portfolio performance be predicted?, The Journal of Portfolio Management, Vol. 20 No. 4, pp. 31-40.
Berger, A.N., Cummins, J.D. and Weiss, M.A. (1997), The coexistence of multiple distribution systems for financial services: the case of property-liability insurance, Journal of Business, Vol. 70 No. 4, pp. 261-92.
Berger, A.N., Hunter, W.C. and Timme, S.G. (1993), The efficiency of financial institutions: a review and preview of research past, present and future, Journal of Banking & Finance, Vol. 17 Nos 2/3, pp. 221-50.
Calabrese, A., Campisi, D. and Paolo, M. (2001), Productivity change in the telecommunications industries of 13 OECD countries, International Journal of Business and Economics, Vol. 1 No. 33, pp. 209-23.
Chang, E.C. and Lewellen, W.G. (1984), Market timing and mutual fund investment performance, Journal of Business, Vol. 57, pp. 57-72.
Charnes, A., Cooper, W.W. and Rhodes, E. (1978), Measuring the efficiency of decision making units, European Journal of Operational Research, Vol. 2, pp. 429-44.
Chua, C.P. (1985), The investment performance of unit trusts in Malaysia, MBA dissertation, University of Malaya, Kuala Lumpur.
Chuan, T.H. (1995), The investment performance of unit trusts funds in Malaysia, Capital Markets Review, Vol. 3 No. 2, pp. 21-49.
Coelli, T. (1996), A guide to DEAP version 2.1 Data Envelopment Analysis (Computer) program, CEPA Working Paper 96/98, University of New England, CEPA, Armidale.
Cummins, J.D., Tennyson, S. and Weiss, M.A. (1999a), Consolidation and efficiency in the US life insurance industry, Journal of Banking & Finance, Vol. 23, pp. 325-57.
Cummins, J.D., Weiss, M. and Zi, H. (1999b), Organizational form and efficiency: an analysis of stock and mutual property-liability insurers, Management Science, Vol. 45, pp. 1254-69.
Daraio, C. and Simar, L. (2006), A robust nonparametric approach to evaluate and explain the performance of mutual funds, European Journal of Operational Research, Vol. 175, pp. 516-42.
Diacon, S.R., Starkey, K. and O'Brien, C. (2002), Size and efficiency in European long-term insurance companies: an international comparison, The Geneva Papers on Risk and Insurance, Vol. 27 No. 3, pp. 444-66.
Drake, L. and Howcroft, B. (1994), Relative efficiency in the branch network of a UK bank: an empirical study, OMEGA International Journal of Management Science, Vol. 22 No. 1, pp. 83-90.
Droms, W.G. and Walker, D.A. (1996), Mutual fund investment performance, The Quarterly Review of Economics and Finance, Vol. 36, pp. 347-63.
Fare, R., Shawna, G., Bjorn, L. and Ross, P. (1989), Productivity development in Swedish hospitals: a Malmquist output index approach, in Charnes, A., Cooper, W.W., Lewin, A. and Seiford, L. (Eds), Data Envelopment Analysis: Theory, Methodology and Applications, Kluwer Academic Publisher, Boston, MA.
Fare, R., Shawna, G., Mary, N. and Zhongyang, Z. (1994), Productivity growth, technical progress and efficiency change in industrialized countries, American Economic Review, Vol. 84, pp. 66-83.
Ghoul, W., Azoury, N. and Karam, P. (2007), Islamic mutual funds: how do they compare with other religiously-based and ethically-based mutual funds?, paper presented at the IIUM International Conference on Islamic Banking and Finance (IICiBF), IIUM Institute of Islamic Banking & Finance (IIiBF), Kuala Lumpur, 23-25 April.
Ippolito, R.A. (1989), Efficiency with costly information: a study of mutual fund performance, 1965-1984, Quarterly Journal of Economics, Vol. 104 No. 1, pp. 1-23.
Ismail, A.G. and Shakrani, M.S. (2003), The conditional CAPM and cross-sectional evidence of return and beta for Islamic unit trusts in Malaysia, IIUM Journal of Economics and Management, Vol. 11 No. 1, pp. 1-30.
Jensen, M.C. (1968), The performance of mutual funds in the period 1945-1964, Journal of Finance, Vol. 23, pp. 389-416.
Land, K.C., Lovell, C.A.K. and Thore, S. (1993), Chance-constrained data envelopment analysis, Managerial & Decision Economics, Vol. 14 No. 6, pp. 541-54.
Low, S.W. (2007), Malaysian unit trust funds performance during up and down market conditions: a comparison of market benchmark, Managerial Finance, Vol. 33 No. 2, pp. 154-66.
Low, S.W. and Ghazali, N.A. (2005), An evaluation of the market timing and security selection performance of mutual funds: the case of Malaysia, International Journal of Management Studies, Vol. 12, pp. 215-33.
Mahajan, J. (1991), A data envelopment analytic model for assessing the relative efficiency of the selling function, European Journal of Operational Research, Vol. 53, pp. 189-205.
Mao, W. and Koo, W. (1996), Productivity growth, technology progress and efficiency change in Chinese agricultural production from 1984 to 1993, Agricultural Economics Report, 362, North Dakota State University, Fargo, ND.
Md Taib, F. and Isa, M. (2007), Malaysian unit trust aggregate performance, Managerial Finance, Vol. 33 No. 2, pp. 102-21.
Meador, J.W., Ryan, H.E. and Schellhorn, C.D. (2000), Product focus versus diversification: estimates of X-efficiency for the US life insurance industry, in Harker, P.T. and Zenios, S.A. (Eds), Performance of Financial Institutions: Efficiency, Innovation, Regulation, Cambridge University Press, Cambridge, pp. 175-98.
Murti, B.P.S., Choi, Y.K. and Desai, P. (1997), Efficiency of mutual funds and portfolio performance measurement: a non-parametric approach, European Journal of Operational Research, Vol. 98, pp. 408-18.
Noordin, A.H. (2002), Islamic and conventional funds in comparison: an emphasis on the Islamic corporate governance, paper presented in the Securities Commission Saturday Seminar, Kuala Lumpur.
Permodalan Nasional Berhad (2001), The Malaysian Unit Trust Industry, PNB, Kuala Lumpur.
Prudential (2007), available at: (accessed 24 October, 2007).
Securities Commission (2007), available at: (accessed 3 October, 2007).
Sengupta, J.K. (1989), Measuring economic efficiency with stochastic input-output data, International Journal of Systems Science, Vol. 20 No. 2, pp. 203-13.
Sengupta, J.K. and Zohar, T. (2001), Nonparametric analysis of portfolio efficiency, Applied Economics Letters, Vol. 8, pp. 249-52.
Shamsher, M. and Annuar, M.N. (1995), The performance of unit trusts in Malaysia: some evidence, Capital Market Review, Vol. 3, pp. 51-69.
Sharpe, W.F. (1966), Mutual fund performance, Journal of Business, Vol. 39, pp. 119-38.
Sherman, H.D. (1984), Data envelopment analysis as a new managerial audit methodology test and evaluation, Auditing: A Journal of Practice and Theory, Vol. 4 No. 1, pp. 35-53.
Tan, H.C. (1995), The investment performance of unit trust funds in Malaysia, Capital Market Review, Vol. 3, pp. 21-50.
Tauer, L. (1998), Productivity of New York dairy farms measured by non-parametric indices, Journal of Agricultural Economics, Vol. 49 No. 2, pp. 234-49.
Troutt, M.D., Hu, M.Y. and Shanker, M.S. (2005), A distribution free approach to estimating best response values with application to mutual fund performance modeling, European Journal of Operational Research, Vol. 166 No. 2, pp. 520-7.
Tulkens, H. and Malnero, A. (1996), Non-parametric approach to the assessment of the relative efficiency of bank branches, in David, M. (Ed.), Sources of Productivity Growth, Cambridge University Press, Cambridge.
Yuengert, A. (1993), The measurement of efficiency in life insurance: estimates of a mixed normal-gamma error model, Journal of Banking and Finance, Vol. 17, pp. 483-96.
Further reading

Afriat, S.N. (1972), Efficiency estimation of production functions, International Economic Review, Vol. 13 No. 3, pp. 568-98.
Bawa, V.S. (1976), Admissible portfolios for all individuals, Journal of Finance, Vol. 31, pp. 1169-83.
Fare, R., Shawna, G. and Knox, L. (1994), Production Frontiers, Cambridge University Press, New York, NY.
Forsund, F. (1991), The Malmquist productivity index, paper presented at the 2nd European Workshop on Efficiency and Productivity Measurement, Centre of Operations Research and Econometrics, University Catholique de Louvain, Louvain-la-Neuve.
Jensen, M.C. (1969), Risk, the pricing of capital assets and the evaluation of investment portfolios, Journal of Business, Vol. 42, pp. 67-247.
Mains, N.W. (1977), Risk, the pricing of capital assets and the evaluation of investment portfolios: comment, Journal of Business, Vol. 50, pp. 371-84.
Markowitz, H.M. (1952), Portfolio selection, Journal of Finance, Vol. 7 No. 1, pp. 77-91.
Varian, H.R. (1983), Nonparametric tests of models of investor behavior, Journal of Financial and Quantitative Analysis, Vol. 18, pp. 269-85.

Corresponding author
Norma Md. Saad can be contacted at: norma@iiu.edu.my
|
https://www.scribd.com/document/142325833/A-comparative-analysis-of-the-performance-of-conventional-and-Islamic-unit-trust-companies-in-Malaysia
|
CC-MAIN-2019-43
|
refinedweb
| 11,151
| 51.48
|
Quick skim over it, im impressed john! good to see all those details publicly posted now!
#include <std_disclaimer>
Any comments made are personal opinion and do not reflect directly on the position my current or past employers may have.
hio77:
Quick skim over it, im impressed john! good to see all those details publicly posted now!
john thank you,
So just taking a look now... these are firewall / port forwarding requirements needed in order to get the sureSignal 'back on track' yes? I've just been under the house a bit recently, and have gotten a bunch of nice clean new ethernet cables all hooking everything up, seems to work well, and have cycled the router/dv130/suresignal 'on/off' a few times but definitely the Voda sureSignal needs additional help to get working again, as I type, I have zero bars reception, middle of Takapuna, zero bars. The instructions, referred to, I know are likely very clear, succinct etc, but would there be an interpretation, more sort of in layman terms, so I don't spend all night getting it working?
Right now, a very simple setup, as the guy should be bringing us a VDSL connection this week, telephone ADSL----> DV130 -----> AEBS ----> Vodafone sureSignal (not working)
Routing tables
Internet:
Destination Gateway Flags Refs Use Netif Expire
default 10.0.1.1 UGSc 62 0 en1
10.0.1/24 link#4 UCS 2 0 en1
10.0.1.1/32 link#4 UCS 1 0 en1
10.0.1.1 5c:96:9d:6c:e6:64 UHLWIir 64 8078 en1 1188
10.0.1.6/32 link#4 UCS 0 0 en1
10.0.1.8 f0:7d:68:b:59:7d UHLWI 0 2 en1 118
10.0.1.255 ff:ff:ff:ff:ff:ff UHLWbI 0 1 en1
127 localhost UCS 0 0 lo0
localhost localhost UH 1 506 lo0
169.254 link#4 UCS 0 0 en1
Internet6:
Destination Gateway Flags Netif Expire
localhost localhost UHL lo0
fe80::%lo0 fe80::1%lo0 UcI lo0
fe80::1%lo0 link#1 UHLI lo0
fe80::%en1 link#4 UCI en1
fe80::223:6cff:fe7 0:23:6c:7d:7f:c9 UHLWI en1
harlan-apple-tv3.l 28:cf:da:b:6b:8b UHLWI en1
harlan-air-1.local 5c:96:9d:6c:e6:64 UHLWIi en1
harlan-mbp-13.loca e4:ce:8f:14:12:92 UHLI lo0
ff01::%lo0 localhost UmCI lo0
ff01::%en1 link#4 UmCI en1
ff02::%lo0 localhost UmCI lo0
ff02::%en1 link#4 UmCI en1
Oh God help me,
A fantastic young guy called Russell popped round this morning, VDSL enabled, we have 70/14mb, Harlan had an issue w/getting the DV 130 going perfect but a call to SnapperNet & a very capable gentleman Mark had the latest firmware & a config file to bridge to an AEBS loaded and it goes mint.
Again, still, its Vodafone, everyone tells me the setup is sweet and since all my gates and cams & everything is A1 only the Vodafone SureSignal is FAIL.
On the phone, yet again to Vodafone Support, Technical Support.... I am asking for a Tier II Support Providing Unit, I am speaking with Kain... nicely spoken he is too, but asking me to get a paper-clip, WTF, a 'paper-clip', what for?
Telling him my AEBS etc don't need zero'ing, he's told me to use a paper-clip to access the Askey 9361's reset button, WTF, its got a massive black knob to press, Kain has never seen or used or helped anyone w/a Vodafone SureSignal...
Tell Kain to go ask a Tech, Team Leader Sneh is the man he gets, apparently the SureSignal will only work on a Huawei branded modem, router. Period.
Blood pressure rising...
I say, NO, again directing to:
Now I have a Technical Guru, Tito... he is reading this now.... Tito doesn't understand Johns instructions, he's going to hang up, call John 'the most technically minded person I know' & call me back. It's 1758.
Just waiting, for Tito to finish up his discussion with John..
Honestly, is this stuff relevant or not, everything else works, from the IP Cams to the front gate, just not the 'sure'Signal?
Setting up more complex networks with Sure Signal
This article describes the connectivity required between the Vodafone Sure Signal and the Vodafone network.
IP address
The Vodafone Sure Signal requires a DHCP assigned IP address. This may be a private IP address but if so will need to go through a proxy or NAT function so that it is routeable to over the Internet, i.e. seen by the Vodafone network as a public IP address. This is normal for a standard DSL connection but may need to be ensured in Enterprise or more complicated Consumer IP setups.
VPN Configuration
All traffic between the Vodafone Sure Signal and the Vodafone network, with the exception of the DNS and synchronisation traffic described below, is carried within a VPN tunnel. The VPN is established by the Vodafone Sure Signal on initial setup and is successfully created if the @ LED on the Vodafone Sure Signal housing is lit – the LED flashes during VPN setup.
The following firewall rules will be required as a minimum to allow the VPN to be established between the Vodafone Sure Signal and the Vodafone network..
Source: Vodafone Sure Signal
Destination: 124.6.205.17, 124.6.205.18, 124.6.205.19, 124.6.205.41, 124.6.205.42
Protocol / Port / Description:
IP protocol 50 - ESP
UDP 4500 - IPSEC NAT Traversal
UDP 500 - ISAKMP
Domain Name Resolution
In order to setup the VPN the Vodafone Sure Signal requires the ability to resolve IP addresses for hosts in the initial-ipsecrouter.vag.vodafone.co.nz domain.
These are specified on the Vodafone Public DNS servers. The DNS server used by the Vodafone Sure Signal must be DHCP assigned. Standard DSL connectivity includes DHCP assignment of DNS server.
Synchronisation
In order to function the Vodafone Sure Signal requires a synchronisation signal from the Vodafone network. This is carried outside the VPN tunnel and the following firewall rules will be required..
etc etc etc etc ...... Vodafone Sure Signal to 124.6.205.25 UDP 123 (NTP - for synchronisation) and to 124.6.205.26 UDP 123 (NTP - for synchronisation).'
C'mon please guys, the position I need to be in, means that I have to have the house open, with an extension cord going up the front yard, across the fence, onto the road, I look like a dick, sitting next to my cell phone, with my 1 bar, using my neighbours internet to post, waiting for your call back.
I begged Tito, to not hang up !!! Why did I listen, again... god I am the stupid one.
Getting eaten by mozzies too, eaten.
This is a tad belated but the intention is the same, my issue was solved by johnr, who took some of his valuable off-duty time, his most valuable commodity & used it to educate / encourage us to try something else, which worked.
We'd spent many hours, getting nowhere & it just shows as usual, quality counts - johnr from Vodafone was the only person who actually knew his stuff, he made perfect sense & so provided a solution.
It's been said plenty, will be said again but we want to say now 'thank you johnr'
|
https://www.geekzone.co.nz/forums.asp?forumid=40&topicid=191334&page_no=3
|
CC-MAIN-2018-43
|
refinedweb
| 1,255
| 70.13
|
In one of the previous articles we saw how we can create an XML file using a SELECT statement: SQL SERVER – Simple Example of Creating XML File Using T-SQL. Today we will see how we can read an XML file using the SELECT statement.
Following is the XML which we will read using T-SQL:
Following is the T-SQL script which will be used to read the XML:
DECLARE @MyXML XML
SET @MyXML = '<SampleXML>
<Colors>
<Color1>White</Color1>
<Color2>Blue</Color2>
<Color3>Black</Color3>
<Color4 Special="Light">Green</Color4>
<Color5>Red</Color5>
</Colors>
<Fruits>
<Fruits1>Apple</Fruits1>
<Fruits2>Pineapple</Fruits2>
<Fruits3>Grapes</Fruits3>
<Fruits4>Melon</Fruits4>
</Fruits>
</SampleXML>'
SELECT
a.b.value('Colors[1]/Color1[1]','varchar(10)') AS Color1,
a.b.value('Colors[1]/Color2[1]','varchar(10)') AS Color2,
a.b.value('Colors[1]/Color3[1]','varchar(10)') AS Color3,
a.b.value('Colors[1]/Color4[1]/@Special','varchar(10)') + ' ' +
a.b.value('Colors[1]/Color4[1]','varchar(10)') AS Color4,
a.b.value('Colors[1]/Color5[1]','varchar(10)') AS Color5,
a.b.value('Fruits[1]/Fruits1[1]','varchar(10)') AS Fruits1,
a.b.value('Fruits[1]/Fruits2[1]','varchar(10)') AS Fruits2,
a.b.value('Fruits[1]/Fruits3[1]','varchar(10)') AS Fruits3,
a.b.value('Fruits[1]/Fruits4[1]','varchar(10)') AS Fruits4
FROM @MyXML.nodes('SampleXML') a(b)
Please note that in the above T-SQL statement, XML attributes are read the same way as XML values.
Reference : Pinal Dave ()
you are perfect. very thanks.
Sir
Help me in getting the text of a particular value attribute
Dear
i have one employee table in xml
how to display the all records from xml
like(select *from emp)
Hi Dave,
I have a Stored procedure that reads XML file using the following query
INSERT INTO inv_tempfile(BulkColumn)
select BulkColumn from Openrowset(
Bulk ''' + @FileName + ''', Single_Blob) as tt
This query reads the entire content of the file into inv_tempfile. The BulkColumn is a VARCHAR(MAX) type.
But the problem is, if the XML file contains “?” in any of its elements. Then this query is unable to read the whole file content.
below is the sample XML Element file.
#805 ? 2988 Alder Street, Vancouver tario C0B1C0
Can u please explain me why this is happening and if there is any workaround.
“XQuery [value()]: ‘value()’ requires a singleton (or empty sequence), found operand of type ‘xdt:untypedAtomic *'”
sir when i write the above command it shows me this error.
Use This
SELECT xCol.value('(//author/last-name)[1]', 'nvarchar(50)') LastName
FROM docs
I’m trying to pull a single int from an xml file stored on an app server into a stored proc, but I’m not sure how that would work even after reading all of the examples above. I’d need to reference the file as \\[app server name]\c$\[path]\[filename].xml. Also, the int in the xml file is not stored within a tag, but is instead a key. Here is an excerpt of the xml file up to the point of the int that I need to pull in (I want 22 to be returned to the stored proc):
looks like the xml won’t post. trying again replacing with LT & GT:
LT?xml version=”1.0″ encoding=”utf-8″ GT
LTappSettingsGT
LT!– COPY THIS FILE TO C:\DLR –GT
LT!– Change the pieces to run to true –GT
LT!– InputFeeds –GT
LTadd key=”LoadPLPD” value = “false” /GT
LT!– LoadPLPDCancelDays is used to calculate the cancel date and should be a value greater than zero/GT
–>
LTadd key=”LoadPLPDCancelDays” value = “22” /GT
Hi, how to read the xml in sql server 2005. Below my code is placed,
declare @xml xml
set @xml=
‘
01 first
02 Second
‘
select a.b.value('.', 'nvarchar(50)') as a_value
from @xml.nodes(‘parent/child /subChild’) a(b)
Hi pinal..
I have a serious issues.. we are getting data’s daily in a XML format and we need to insert all these data’s into SQL SERVER. The problem is, we have a column called ‘Amount’ and same name we are maintaining in our SQL server.
Sometimes, we are receiving XML files, with ‘Amount’ column name changed as ‘Amt’ and without checking if we load it, we are inserted with Null values.
So I need to solution, that my table should accept AMOUNT as well as AMT as column name.
Can you let me know the solution please.
Thanks and regards,
Dhinakaran
How can I do if the XML file has namespace?
Hello sir,
how to read xml file from url in sql server 2008 and insert into table?
Thank You,
bhar
Hello, would you explain me why do you use this symbols ‘ [ ‘, ‘ ] ‘ and why do you use the number 1 between them, please
how can alter the type in sql server 2008
Thank you very much! I have 3 hours searching over internet! Finally found your solution! There is a lot examples using common xml structures!
Thanks andreysolis for your comment. Glad that blog was helpful.
Thank you very much! Finally i found what i need!
I am feeling happy after your comment.
Hello great article. quick question im ruining sql server 2012 trying to inset query
INSERT INTO [dbo].[claims]
([clob]
,[processFlag])
VALUES
(
‘
—————————————————————end code —-
however my it is trowing an exception due the special char ññ in the name.
is any way that we can ignore the first line ‘ i think the problem is with the encoding. ?
or change the encoding to ignore. ?
Hi Pinal Dave,
I am looking some of solution to work with xml.
I have one table with two columns : 1. ID (nvarchar) 2. XmlData (xml)
Table have three rows.
The stored xml have company node.
Expected Result:
I need comma separated company against each id .How I can get ?
Thanks
|
https://blog.sqlauthority.com/2009/02/13/sql-server-simple-example-of-reading-xml-file-using-t-sql/?like=1&source=post_flair&_wpnonce=966197e62c
|
CC-MAIN-2017-13
|
refinedweb
| 966
| 67.15
|
I have a tensor with a size of
3 x 240 x 320 and I want to use a window (window size
ws=5) to slide over each pixel and crop out the sub-tensor centered at that pixel. The final output dimension should be
3 x ws x ws x 240 x 320. So I pad the original tensor with window size and use
for loop to do so.
import torch
import torch.nn.functional as F

ws = 5
height, width = 240, 320
image = torch.randn(1, 3, height, width)
image = F.pad(image, (ws // 2, ws // 2, ws // 2, ws // 2), mode='reflect')
patches = torch.zeros(1, 3, ws, ws, height, width)
for i in range(height):
    for j in range(width):
        patches[:, :, :, :, i, j] = image[:, :, i:i+ws, j:j+ws]
Are there any ways to do the cropping of each pixel at the same time? Like without using the
for loop over each pixel? I feel like it’s pretty similar to
convolution operation but I can’t think of ways to crop efficiently. Thanks in advance!
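Edit: for reference, here is a sketch of one loop-free approach using Tensor.unfold (written from memory and untested here; the permute order is my assumption to reproduce the 3 x ws x ws x 240 x 320 layout above):

import torch
import torch.nn.functional as F

ws = 5
image = torch.randn(1, 3, 240, 320)
padded = F.pad(image, (ws // 2, ws // 2, ws // 2, ws // 2), mode='reflect')

# sliding windows of size ws, stride 1, over height (dim 2) and width (dim 3)
# shape after both unfolds: (1, 3, 240, 320, ws, ws)
patches = padded.unfold(2, ws, 1).unfold(3, ws, 1)

# rearrange so that patches[..., i, j] equals padded[:, :, i:i+ws, j:j+ws]
patches = patches.permute(0, 1, 4, 5, 2, 3).contiguous()
print(patches.shape)  # expected: torch.Size([1, 3, 5, 5, 240, 320])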
|
https://discuss.pytorch.org/t/how-to-crop-each-element-of-a-tensor-with-a-window-size-at-the-same-time-without-using-for-loop/96098
|
CC-MAIN-2022-33
|
refinedweb
| 173
| 81.22
|
I can't connect to Gemini Broker using BT and ccxt for live trading.
Hello,
I would like to connect Backtrader on Gemini using CCXT but I can't figure it out.
Here is my code:
Python 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from __future__ import (absolute_import, division, print_function, unicode_literals)
>>> from datetime import datetime, timedelta
>>> import backtrader as bt
>>> from backtrader import cerebro
>>> import time
>>>
>>> config = {'urls': {'api': ''},
...     'apiKey': 'master-xxxxxxxxxxxxxxxxxxxxxxxxxxx',
...     'secret': 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
...     'nonce': lambda: str(int(time.time() * 1000))
...     }
>>> broker = bt.brokers.CCXTBroker(exchange='gemini',
...     currency='USD', config=config)
>>>
>>> cerebro.setbroker(broker)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'backtrader.cerebro' has no attribute 'setbroker'
but when I check my file .local/lib/python3.6/site-packages/backtrader/cerebro.py I can see
def setbroker(self, broker):
    '''
    Sets a specific ``broker`` instance for this strategy, replacing the
    one inherited from cerebro.
    '''
    self._broker = broker
    broker.cerebro = self
    return broker
When I try to connect directly without CCXT and Cerebro I have no issue.
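Looking at the traceback again, I suspect cerebro there is the imported module backtrader.cerebro rather than a Cerebro() instance, since setbroker is defined as an instance method. A minimal sketch of what I think the intended sequence is (untested, reusing the same config dict as above):

import backtrader as bt

cerebro = bt.Cerebro()   # create an instance instead of using the cerebro module
broker = bt.brokers.CCXTBroker(exchange='gemini', currency='USD', config=config)
cerebro.setbroker(broker)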
Does anyone have any Idea?
Also I would like to know how the gemini exchange class gets generated? Maybe if I can have a working sample of how to configure gemini or a similar broker, that would help me. I'm not having issues getting the feed.
|
https://community.backtrader.com/topic/3448/i-can-t-connect-to-gemini-broker-using-bt-and-ccxt-for-live-trading
|
CC-MAIN-2021-10
|
refinedweb
| 253
| 50.73
|
I narrowed the test down to the following code:

function output(c) {
    return (c.direction == 1) ? c.v2 : c.v1;
}
var constraint = {
    v1 : {},
    v2 : {},
    direction: 1
}
for (i=0; i<100; i++){
    output(constraint)
    constraint.direction = -1
    output(constraint);
}

It looks a lot like #715111, but I'm not sure. Therefore I'm creating a new bug for this. If it's the same feel free to mark as duplicate.

Program received signal SIGSEGV, Segmentation fault.
0x080cbe14 in Type (data=<optimized out>, this=<optimized out>) at /home/h4writer/Build/ionmonkey/js/src/jsinfer.h:71
71        Type(jsuword data) : data(data) {}
I could reduce it even more to:

function output(c, dir) {
    return (dir) ? c.v1 : c.v1;
}
var constraint = {
    v1 : {}
}
for (i=0; i<100; i++){
    output(constraint, 0)
    output(constraint, 1);
}
Nice work reducing this, though it looks like it reproduces bug 714727 instead. deltablue is still segfaulting though and I think it is bug 715111.
Created attachment 593598 [details] [diff] [review] testcase Indeed testcase succeeds now. I've created a patch for the testcase. I wasn't sure if it was needed or not. I've also reduced the testcase again, to find why it still segfaults. Could be #715111 like you suggested, but I'm not sure. Therefor I created a new bug report #723271. That way if #715111 is fixed, we can test if it solves that test too.
Comment on attachment 593598 [details] [diff] [review] testcase Review of attachment 593598 [details] [diff] [review]: ----------------------------------------------------------------- No review needed for test cases, you can just checkin.
|
https://bugzilla.mozilla.org/show_bug.cgi?id=716895
|
CC-MAIN-2017-13
|
refinedweb
| 259
| 67.35
|
I’ve seen several good tutorials on the net about jsTree and MVC, but they’re all somehow outdated and don’t work with jsTree’s latest version (which is a free component). It’s been a long time since I wanted to post the experience I had using jsTree (for those who don’t know what jsTree is, please go here:). And what’s better to get it working than making a simple “File Manager”?
In this article, I will focus on two aspects of jsTree. The first, which really blew me away, was the possibility of moving trees by drag and drop, like in Windows Explorer (see Picture 1). The other feature I won't cover (but it is in the code) is using the right mouse button to display a context menu and from there create directories, rename, etc. I will not post a complete solution with rename, cut & copy features in this article, because it is just intended to be a starting point on how to use jQuery and MVC.
First, we have to define our trees as JSON classes with a particular format:
public class JsTreeModel
{
public string data;
public JsTreeAttribute attr;
public string state = "open";
public List<JsTreeModel> children;
}
public class JsTreeAttribute
{
public string id;
}
So in the data attribute, we’ll store the tree’s title, and the complete path will be stored in the ID attribute. The state of the tree will have all child leaves expanded by default, that’s what “open” is for.
We will use recursion to populate our tree, nothing really complicated.
We have to use the following JavaScript to bind the tree using jQuery:
$('#MainTree').jstree({
"json_data": {
"ajax": {
"url": "/Home/GetTreeData",
"type": "POST",
"dataType": "json",
"contentType": "application/json charset=utf-8"
}
},
"themes": {
"theme": "default",
"dots": false,
"icons": true,
"url": "/content/themes/default/style.css"
},
"plugins": ["themes", "json_data", "dnd", "contextmenu", "ui", "crrm"]
});
As you may have already understood, the “dnd” plugin stands for “Drag and Drop”.
And then the following syntax to perform the drag and drop operations:
$('#MainTree').bind("move_node.jstree", function (e, data) {
data.rslt.o.each(function (i) {
$.ajax({
async: false,
type: 'POST',
url: "/Home/MoveData",
data: {
"path": $(this).attr("id"),
"destination": data.rslt.np.attr("id")
},
success: function (r) {
Refresh();
}
});
});
});
On the controller side, we have to use:
[HttpPost]
public ActionResult MoveData(string path, string destination)
{
// get the file attributes for file or directory
FileAttributes attPath = System.IO.File.GetAttributes(path);
FileAttributes attDestination = System.IO.File.GetAttributes(destination);
FileInfo fi = new FileInfo(path);
//detect whether its a directory or file
if ((attPath & FileAttributes.Directory) == FileAttributes.Directory)
{
if((attDestination & FileAttributes.Directory)==FileAttributes.Directory)
{
MoveDirectory(path, destination);
}
}
else
{
System.IO.File.Move(path, destination + "\\" + fi.Name);
}
AlreadyPopulated = false;
return null;
}
I had to find another way to move a directory because Microsoft’s “Directory.Move” does not work in every case. Again, it’s just a starting point, it is not intended to be a complete solution.
One point that annoyed me a bit was the fact of having to refresh my tree for every task (move, create directory). It's the only way I've found to keep the disk's tree and the displayed one in sync. Maybe some MVP could help me out if I just store the leaf's name, I don't know. I had to use a session state variable because otherwise, if I click on the last leaf of the tree, I will have the entire tree populated again. And I couldn't avoid it.
Anyway, I hope this article will help MVC developers wanting to use jsTree to improve the user's experience and who don't want to invest $500 in a professional component.
|
http://www.codeproject.com/Articles/176166/Simple-FileManager-width-MVC-3-and-jsTree?fid=1618439&df=90&mpp=25&sort=Position&spc=Relaxed&tid=4027938
|
CC-MAIN-2014-35
|
refinedweb
| 617
| 54.12
|
Writing few things other than "he" and "she". Latina/Latino has a gender, but it is the exception rather than the norm. In many, many other languages, it is needed far more frequently: for "you" ("Are you sure?"), for imperative verbs ("Upload a media file"), for all past tense verbs ("Jenny thanked you for your edit"), and in other cases. MediaWiki and Facebook are the only pieces of software I know (there may be others) that support adding masculine, feminine, and unknown-gender forms. (In case you wondered, the default is "unknown".) There are some cases when this software feature cannot be used, but very frequently it can, and should be used. -- Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי “We're living in pieces, I want to live in peace.” – T. Moore 2017-04-05 13:52 GMT+03:00 Fæ <fae...@gmail.com>: > * > Defaulting_to_gender_neutral_language_in_the_Commons_namespace > > Hi, > > One of the unplanned outcomes from the Wikimedia Conference in Berlin, > was that the various discussions over /feeling/ more welcoming in our > language presumptions for non-male contributors made me think about > taking some practical steps on my home project. Commons is lucky that > having a standard policy language of English makes it easier to use > neutral gender in policy statements. I'm taking that further by > proposing that we stick to a neutral gender for all our policies and > help pages. In practice this means that policies avoid using "he or > she" and stick to "they" or avoid using a pronoun at all. I'm hoping > that the outcome will feel like a much more natural space for people > like me that prefer to stay gender neutral, possibly give a slightly > safer feeling to the project by the very act of making the effort, as > well as avoiding an over-emphasis on binary gender when it's pretty > easy to simply avoid it. > > Comments are welcome on the specific proposal above, or you may have > ideas for other local projects to do something similar. I'm aware that > this is much more difficult to make progress on in languages such as > German or Spanish that have a presumption of male/female gender within > their vocabulary, so any cases of on-project initiatives in > non-English would be especially interesting. Solving these challenges > is an opportunity to make our projects a leader on gender neutrality, > for example getting a Wikimedia based consensus to adopt terms like > "Latinx".[1] > > Links: > 1. "Latinx" is a reaction against using gendered forms Latino and > Latina, in a language that has no neutral gender. This is becoming an > accepted practice in related forums and academic publications. >- > the-term-latinx_us_57753328e4b0cc0fa136a159 > > Thanks, > Fae > Wikimedia LGBT+ > -- > fae...@gmail.com > > _______________________________________________ >>
|
https://www.mail-archive.com/wikimedia-l@lists.wikimedia.org/msg26979.html
|
CC-MAIN-2022-40
|
refinedweb
| 456
| 59.53
|
Paul Kimmel on VB/VB.Net - May 31, 2001
Array Enumeration in VB.NET, Part III
Last week I finished Part II of this series. Self-satisfied that I had cleverly demonstrated Enumerators, I finished the article and sent it to the editor. No sooner was it printed than someone wrote me to demonstrate the For Each loop.
If you are just getting here, let me catch you up. All arrays in Visual Basic.NET are derivatives of System.Array, a class defined in the System namespace. Because all arrays are instances of System.Array classes, arrays can and do have members and capabilities surpassing VB6 arrays. An example of array methods is the GetLowerBound and GetUpperBound methods. Yes, you can still use LBound and UBound or an index to iterate over elements in an array, or, you can use the new GetLowerBound and GetUpperBound methods. As the reader mentioned you may also use a For Each loop. Assuming we have an array of integers in VB.NET, all of the following work as you might expect.
Dim Ints() As Integer = {0, 1, 2, 3, 4, 5}
Dim I As Integer

' Version 1
For I = 0 To 5
    Debug.WriteLine(Ints(I))
Next

' Version 2
Dim V As Object
For Each V In Ints
    Debug.WriteLine(V)
Next

' Version 3
For I = LBound(Ints) To UBound(Ints)
    Debug.WriteLine(Ints(I))
Next

' Version 4
For I = Ints.GetLowerBound(0) To Ints.GetUpperBound(0)
    Debug.WriteLine(Ints(I))
Next

Ints is defined to be an array of six elements, indexable from 0 to 5. (Index boundaries may vary depending on the beta version of .NET you are currently using. Version 1 will not work correctly in beta 1.) The first loop uses literal lower and upper bound indexes to write the elements to the Output Window. The second loop uses the For Each construct, substituting Object for Variant as required. The third loop shows the older LBound and UBound functions, and the fourth loop shows the new GetLowerBound and GetUpperBound methods. All four of these ways to iterate over the elements of an array are supported. Semantically the last one is the most correct because it demonstrates array self-awareness, which is more object-oriented. Version 1 is not very portable. Version 2 is okay but probably exists to support COM more than anything, and version 3 is portable, but the array should know its own boundaries.
In addition to the four versions of iterating an array already shown, there is a fifth way. The fifth way to iterate an array is to use enumerators as demonstrated last week. The fragment that follows demonstrates using the IEnumerator interface.
Dim Ints() As Integer = {0, 1, 2, 3, 4, 5}
Dim Enumerator As IEnumerator = Ints.GetEnumerator()
While (Enumerator.MoveNext())
    Debug.WriteLine(Enumerator.Current())
End While

The enumerator is a COM interface. The basic IEnumerator implements three methods: MoveNext(), Current(), and Reset().
There are several reasons we want to use enumerators. (First, I must caution you that the best improvements in object-oriented languages are small increments and often pretty simple.) Using enumerators allows us to use a consistent interface for enumerating many kinds of collections of data, including things like recordsets, bit-arrays, collections, and System.Array. The second reason enumerators are important is that enumerators are objects that can be passed as parameters to methods. You cannot pass a For Next or For Each loop, but you can pass an enumerator. Suppose we have a method PrintAll. By defining the method to take an IEnumerator we can pass any instance of an enumerator to the method and it behaves correctly in a polymorphic way. The example demonstrates the PrintAll behavior, writing the elements referred to by any Enumerator to the Output Window.
Private Sub PrintAll(ByVal Enumerator As IEnumerator)
    While (Enumerator.MoveNext())
        Debug.WriteLine(Enumerator.Current())
    End While
End Sub

Private Sub TestPrintAll()
    Dim Strings() As String = {"Some", "text"}
    PrintAll(Strings.GetEnumerator())

    Dim Ints() As Integer = {0, 1, 2, 3, 4, 5}
    PrintAll(Ints.GetEnumerator())
End Sub

PrintAll takes an IEnumerator and iterates over each element, writing the contents to the Output Window. TestPrintAll declares a couple of arrays and has PrintAll display the contents of each array passed.
Clearly the reader was correct: a For Each loop works in VB.NET. An important element of this fact is that data accessed through an enumerator is read-only. The Current method returns the value of the element referenced by the internal mechanism of the enumerator, hence you will need to use another construct if you need to modify the elements of an array. Semantically, using the For Next loop with the GetLowerBound and GetUpperBound method invocation is probably the most appropriate. However, if you have a good reason for using any of the ways to enumerate the elements of an array, then by all means use them. Keep one caveat in mind: the older approaches may eventually be deprecated in future versions of VB.NET.
About the Author
Paul Kimmel is a freelance writer for Developer.com and CodeGuru.com. He is the founder of Software Conceptions, Inc, founded in 1990. Paul Kimmel is available to help you build Visual Basic.NET solutions. You may contact him at pkimmel@softconcepts.com.
|
https://www.developer.com/net/vb/article.php/776131/Paul-Kimmel-on-VBVBNet---May-31-2001.htm
|
CC-MAIN-2017-43
|
refinedweb
| 874
| 58.18
|
Re: Altering imported modules
Discussion in 'Python' started by Tro, Mar 2, 2008.
|
http://www.thecodingforums.com/threads/re-altering-imported-modules.595579/
|
CC-MAIN-2016-07
|
refinedweb
| 173
| 66.98
|
Hello all at Cprogramming. I'm hoping someone can help me with this error. It probably involves changing one line... or I've made a stupid mistake or something. I've never seen the error before though.
Anyways:
Attempt to compile with gcc: C:\>gcc -o ywordf ywordf.cpp
Error:
ywordf.cpp: In function `int main(int, char**)':
ywordf.cpp:30: error: no match for 'operator!=' in 'i != __gnu_norm::multimap<_Key, _Tp, _Compare, _Alloc>::rend() [with _Key = size_t, _Tp = std::string, _Compare = std::less<size_t>, _Alloc = std::allocator<std::pair<const size_t, std::string> >]()'
Code:
#include <set>
#include <map>
#include <fstream>
#include <iostream>
#include <iterator>
#include <utility>
using namespace std;

int main(int ac, char** av)
{
    if(ac != 2)
    {
        cout << "Usage: " << av[0] << " filename" << endl;
        return 1;
    }
    cout << "Reading file " << av[1] << endl;
    ifstream f(av[1]);

    // read and count words
    istream_iterator<string> i(f);
    multiset<string> s(i, istream_iterator<string>());

    // sort by count
    multimap<size_t, string> wordstats;
    for(multiset<string>::const_iterator i = s.begin(); i != s.end(); i = s.upper_bound(*i))
        wordstats.insert( make_pair( s.count(*i), *i ));

    // output in decreasing order
    for( multimap<size_t, string>::const_reverse_iterator i = wordstats.rbegin(); i != wordstats.rend(); ++i)
        cout << " word " << i->second << " found " << i->first << " times " << endl;
}

The error is in line 28, which is:

Code:
for( multimap<size_t, string>::const_reverse_iterator i = wordstats.rbegin(); i != wordstats.rend(); ++i)
What on earth am I doing wrong? What do I need to change in this line?
I've never seen this error before, but I believe it has something to do with having (or not having) a const value in the for loop's multimap iterator parameters.
Help! and Thank You! I would be happy to post the purpose of my code, but it's not necessary. People far more experienced than I can tell exactly what I'm doing.
AA.
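For reference, one common way to make that comparison compile (a sketch, not tested against this exact GCC version) is to keep both sides of the != as the same const iterator type by walking a const reference to the map:

// iterate over a const view so rbegin()/rend() also return const_reverse_iterators
const multimap<size_t, string>& stats = wordstats;
for (multimap<size_t, string>::const_reverse_iterator it = stats.rbegin();
     it != stats.rend(); ++it)
{
    cout << " word " << it->second << " found " << it->first << " times " << endl;
}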
|
http://cboard.cprogramming.com/cplusplus-programming/125480-fix-basic-error-no-match-'operator-%3D'-'i-%3D.html
|
CC-MAIN-2015-18
|
refinedweb
| 316
| 59.09
|
The objective of this post is to explain how to parse JSON messages with the ESP32 and the ArduinoJson library.
Introduction
The objective of this post is to explain how to parse JSON messages with the ESP32 and the ArduinoJson library. We assume a previous installation of the ESP32 support for the Arduino IDE. If you haven’t done it yet, check here how to do it.
Please note that this code will be very similar to the one in the tutorial about parsing JSON on an ESP8266, which you can consult here.
You can install the library via the Arduino IDE library manager, which is the easiest way to do it. Just search for ArduinoJson on the search bar, as shown in figure 1, and you should get the option to install it. At the time of writing, the latest version is 5.9.0, which is the one used for this tutorial.
Figure 1 – ArduinoJson library installation via Arduino IDE library manager.
If you prefer a video tutorial, please check my YouTube channel below.
The code
First of all, we need to include the previously mentioned ArduinoJson library, so we can have access to the JSON parsing functionality. Since we are going to do the actual parsing in the main loop function, we will just open the serial connection on the setup function, in order to print the output of our program.
#include "ArduinoJson.h" void setup() { Serial.begin(115200); Serial.println(); }
After that, on the loop function, we will declare a variable to hold the JSON message to parse. Note that the \ characters are used for escaping double quotes on the declared string. This is needed because JSON names require double quotes.
char JSONMessage[] = " {\"SensorType\": \"Temperature\", \"Value\": 10}"; //Original message
The structure of the JSON message is shown below without the escaping characters. Note that this is a dummy example that shows a possible message structure for sending information from a sensor.
{ "SensorType" : "Temperature", "Value" : 10 }
Important: The JSON parser modifies the string [1] and because of that its content can’t be reused, even though the string will be the same during the whole program execution. So, we declare it inside the main loop and not as a global variable, guaranteeing that when the main loop function returns, it is freed and a new variable is allocated again in the next call to the loop function.
After that, we will declare an object of class StaticJsonBuffer, which corresponds to a pre-allocated memory pool used to store the JSON object tree. Since this is a memory pool, we need to specify its size. This is done in a template parameter (the value between <> in the code below), in bytes. We passed a value of 300 bytes, which is enough for the string to be parsed.
StaticJsonBuffer<300> JSONBuffer; //Memory pool
Next, we call the parseObject method on the StaticJsonBuffer object, passing as parameter the JSON string. This method call returns a reference to an object of class JsonObject, which we will use to obtain the parsed values.
JsonObject& parsed = JSONBuffer.parseObject(JSONMessage); //Parse message
We can call the success method on the JsonObject to confirm that the parsing occurred without errors, as shown below.
if (!parsed.success()) { //Check for errors in parsing
  Serial.println("Parsing failed");
  delay(5000);
  return;
}
Now, we will use the subscript operator to get the parsed values by their names, from the JsonObject. In other words, we use square brackets and the names of the parameters to get their values. Note that the strings to be used are the ones from the original JSON message: “SensorType” and “Value”.
const char * sensorType = parsed["SensorType"]; //Get sensor type value
int value = parsed["Value"];                    //Get value of sensor measurement
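As a side note (not part of this tutorial's message), nested values can be read the same way by chaining the subscript operator. A quick sketch, assuming a hypothetical message such as {"SensorType": "Temperature", "Data": {"Value": 10, "Unit": "C"}}:

int nestedValue = parsed["Data"]["Value"];     // chained lookups walk into nested objects
const char * unit = parsed["Data"]["Unit"];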
The final code, with the printing of these values, can be seen below.
#include "ArduinoJson.h" void setup() { Serial.begin(115200); Serial.println(); } void loop() { Serial.println("Parsing start: "); char JSONMessage[] = " {\"SensorType\": \"Temperature\", \"Value\": 10}"; //Original message.print("Sensor type: "); Serial.println(sensorType); Serial.print("Sensor value: "); Serial.println(value); Serial.println(); delay(5000); }
Testing the code
To test the code, upload it to the ESP32 and open the Arduino IDE serial monitor. You should start getting an output similar to figure 2, which presents the values obtained after parsing the original message.
Figure 2 – Output of the JSON parsing program on the ESP32.
Related content
Related posts
- ESP8266: Parsing JSON
- ESP8266: Parsing JSON Arrays
- ESP8266: Encoding JSON messages
- Python: Parsing JSON
References
[1]
Technical details
- ArduinoJson library: v5.9.0.
7 Replies to “ESP32: Parsing JSON”
Can I control ESP32 pins with a JSON file without prior initialization (e.g. the JSON returned by http.getString() holds the pin number, whether it is INPUT/OUTPUT, and high or low)?
Hi!
Yes you can! Instead of using a statically initialized string, simply pass the result of the HTTP request to the parseObject method.
As long as the JSON contains the information you are looking for, then you should be able to access the field that holds the data of the pin and use that value to control the pin.
Best regards,
Nuno Santos
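A rough sketch of what the reply above describes (the URL and field names are hypothetical; it assumes the ESP32 Arduino core's HTTPClient and ArduinoJson 5):

#include <WiFi.h>
#include <HTTPClient.h>
#include "ArduinoJson.h"

void fetchAndApplyPinState() {
  HTTPClient http;
  http.begin("http://example.com/pins.json");   // hypothetical endpoint
  int httpCode = http.GET();

  if (httpCode == 200) {
    String payload = http.getString();          // JSON body returned by the server

    StaticJsonBuffer<300> JSONBuffer;
    JsonObject& parsed = JSONBuffer.parseObject(payload);

    if (parsed.success()) {
      int pin = parsed["Pin"];                  // assumed field names
      int state = parsed["State"];
      pinMode(pin, OUTPUT);
      digitalWrite(pin, state ? HIGH : LOW);
    }
  }
  http.end();
}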
|
https://techtutorialsx.com/2017/04/26/esp32-parsing-json/
|
CC-MAIN-2018-39
|
refinedweb
| 854
| 63.39
|
Hi J.T.,

* J.T. Conklin wrote on Tue, Feb 14, 2006 at 09:53:56PM CET:
> I am having problems using automake 1.9.3 and libtool when installing
> libraries similar to those reported in a message to the automake list
> with this same subject line from Bob Friesenhahn about a year ago (cf
> ).

Yes.  The general issue is not fixed yet.

> ?

Yes.  Two possibilities:

- Automake 1.9.6 should have this fixed with the patch from 2005-06-24,
  so updating should help.

- You should be able to declare helper variables to enforce order; that
  is, instead of

    lib_LTLIBRARIES = liba.la
    if condition
    lib_LTLIBRARIES += libb.la
    endif
    lib_LTLIBRARIES += libcee.la

  you could write (untested)

    if condition
    bee_libs = libb.la
    else
    bee_libs =
    endif
    lib_LTLIBRARIES = liba.la $(bee_libs) libcee.la

[...]

Cheers,
Ralf
|
http://lists.gnu.org/archive/html/automake/2006-02/msg00033.html
|
CC-MAIN-2019-13
|
refinedweb
| 132
| 77.74
|
Time for another newbie question!
We're trying to use CombineChildren.cs or MeshCombineUnity.cs to reduce the draw call count in an RTS game for the iPhone. However, given a simple scene with three identical buildings, it only merges two of them.
Can anyone educate me on the basic requirements and exceptions I need to be aware of in order to make effective use of these scripts?
Thanks!
Answer by Mike 3 · Jun 23, 2010 at 09:39 PM
it'll merge any mesh with any other mesh(es) using the exact same instance of a material
besides that, you just have to make sure you're adding the script to a parent of all three objects you want combined
Answer by vinod.kapoor · Jul 16, 2012 at 01:15 PM
i have a problem attaching the CombineChiledren.cs script , it is showing an error that "Assets/Downloads/CombineChildren.cs(52,25): error CS0246: The type or namespace name `MeshCombineUtility' could not be found. Are you missing a using directive or an assembly reference?" please help me with this.
Answer by fuzail.india · Jul 30, 2012 at 03:54 AM
Just make sure that you have all the standard scripting assets. If not, re-import them from the assets. You are missing the script.
Answer by SARWAN · Jan 16, 2013 at 11:33 AM
In the CombineChildren script it is possible to change the Hashtable to a Dictionary and the ArrayList to a List. If anyone has done that, please post the link here.
|
https://answers.unity.com/questions/20417/how-to-use-combinechildrencs.html?sort=oldest
|
CC-MAIN-2019-47
|
refinedweb
| 290
| 67.55
|
#include <BatteryCheckControl.h>
List of all members.
The LEDs use the LedEngine::displayPercent() function, with minor/major style. This means the left column (viewing the dog head on) will show the overall power level, and the right column will show the level within the tick lit up in the left column. The more geeky among you may prefer to think of this as a two digit base 5 display.
This gives you pretty precise visual feedback as to remaining power (perhaps more than you really need, but it's as much a demo as a useful tool)
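To make the "two digit base 5" idea concrete, here is a tiny illustrative sketch (not Tekkotsu code; the actual LedEngine mapping may differ) of how a percentage could be split into a coarse tick for one column and a fine tick for the other:

// Split a 0-100 percentage into two base-5 "digits".
void batteryTicks(int percent, int& major, int& minor) {
    int level = (percent * 25) / 101;  // overall level, 0..24 (25 steps)
    major = level / 5;                 // coarse tick, 0..4 (left column)
    minor = level % 5;                 // fine tick within the coarse one, 0..4 (right column)
}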
This is implemented as a Control instead of a Behavior on the assumption you wouldn't want to leave this running while you were doing other things (ie not in e-stop). But it definitely blurs the line between the two.
Definition at line 26 of file BatteryCheckControl.h.
[inline]
Constructor.
Definition at line 30 of file BatteryCheckControl.h.
[inline, virtual]
Destructor.
Definition at line 33 of file BatteryCheckControl.h.
Prints a report to stdio and lights up the face to show battery level.
keeps running until deactivated - will listen for power events and continue to update display
Reimplemented from ControlBase.
Definition at line 37 of file BatteryCheckControl.h.
stops listening for power events and sets display to invalid
Definition at line 43 of file BatteryCheckControl.h.
calls report()
Definition at line 48 of file BatteryCheckControl.h.
Referenced by processEvent().
Definition at line 65 of file BatteryCheckControl.h.
calls refresh() to redisplay with new information if it's not a vibration event
Implements EventListener.
Definition at line 70 of file BatteryCheckControl.h.
Called when the user has triggered an "open selection" - default is to return the highlighted control.
The value which is returned is then activate()ed and pushed on the Controller's stack
Definition at line 74 of file BatteryCheckControl.h.
redisplay text to sout and refresh LED values
Definition at line 78 of file BatteryCheckControl.h.
Referenced by refresh().
|
http://www.tekkotsu.org/dox/classBatteryCheckControl.html
|
crawl-001
|
refinedweb
| 327
| 57.27
|
#define INIT       declarations
#define GETC()     getc code
#define PEEK()     peekc code
#define UNGETC(c)  ungetc code
This page describes general-purpose regular expression matching routines in the form of ed(C), defined in <regexp.h>. Programs such as ed(C), sed(C), grep(C), expr(C), and so on, that perform regular expression matching use this source file. In this way, only this file need be changed to maintain regular expression compatibility.
The advance and step functions do pattern matching given a character string and a compiled regular expression as input.
The interface to this file is complex.
Programs that include this file must have
the following five macros declared before the
#include <regexp.h> statement.
These macros are used by the compile routine.
The syntax of the compile routine is as follows:

compile(instring, expbuf, endbuf, eof)

The first parameter, instring, is not used explicitly by the compile routine; programs that obtain their characters through the GETC/PEEK macros may pass the value ((char *) 0) for this parameter. The next parameter, expbuf, is a character pointer. It points to the place where the compiled regular expression will be placed.
There are other functions in this file which perform actual regular expression matching, one of which is the function step. The call to step is as follows:
step(string, expbuf)

The first parameter to step is a pointer to a string of characters to be checked for a match. The second parameter, expbuf, is the compiled regular expression, which was obtained by a call to the compile routine.
The function advance is called from step with the same arguments as step. The purpose of step is to step through the string argument and call advance until advance returns non-zero, indicating a match, or until the end of string is reached.
If one wants to constrain string to the beginning of the line in all cases, step need not be called; simply call advance. This mechanism is used by ed(C) and sed(C) for substitutions done globally (not just the first occurrence, but the whole line) so, for example, expressions like s/y*//g do not loop forever. The additional external variables sed and nbra are used for special purposes.
X/Open Portability Guide, Issue 3, 1989.
circf, sed and nbra are not part of any currently supported standard; they are an extension of AT&T System V provided by the Santa Cruz Operation.
|
http://osr507doc.xinuos.com/cgi-bin/man?mansearchword=loc2&mansection=S&lang=en
|
CC-MAIN-2020-50
|
refinedweb
| 347
| 63.19
|
At 11:33 AM 4/20/04 -0400, Andrew Koenig wrote:

> > You could, of course, create a statement like "const len" to flag that
> > len will NOT be changed, thus creating true backwards compatibility,
>
> Somehow this idea is getting tangled in my mind with the distinction between
> mutable and immutable objects. When you use an object as a dict key, it
> must not change, in order to allow the optimization that keys can be sought
> through hashing rather than by sequential search. Similarly, making a name
> such as True immutable allows the optimization that "while True:" can be
> determined during compilation to be unconditional.
>
> I understand that there is a difference between the kinds of immutability,
> but still there seems to be a strong connection here.

I think it's a bad idea to confuse a read-only name binding, and the concept of an immutable object. They aren't the same thing, although the former can be implemented by a namespace that is the latter.
|
https://mail.python.org/pipermail/python-dev/2004-April/044469.html
|
CC-MAIN-2016-40
|
refinedweb
| 166
| 52.83
|
now...like...i need to concatenate the two words that the user enters, s1 and s2 and output them...i thought i needed to use strcat but like...i dont really understand how to do that here, do i need to make a new c string variable or...? ah i hate this stuff lol but i need to use c strings
i also somehow need to test to see if the word is a palindrome. same backwards and forwards, like mom, wow, racecar
can i get pointed in the right direction here? i dont understand c++ that well even with extra tutoring every week (networking major :p)
so can i get some help please?
#include "stdafx.h" #include<iostream> #include<cstring> using namespace std; int main( ) { char s1[100], s2[100; int length1, length2; int count = 0; // Initialize the two strings strcpy(s1,"Black"); strcpy(s2,"Black"); while(count < 2) { if(count > 0) // After the first time ask the user to enter them { cout << "Now I let you enter two strings \n"; cout << "Enter the first string, then the second \n"; // Another way to initialize the two strings cin >> s1 >> s2; } // Find their lengths length1 = strlen(s1); length2 = strlen(s2); // Compare their length if(length1 == length2) { cout << "The two strings are the same length, are they the same? \n"; } // See if they are the same if(! strcmp(s1, s2) ) { cout << "The two strings: "; cout << s1 << " and " << s2 << " are the same \n"; } else { cout << "The two strings: "; cout << s1 << " and " << s2 << " are NOT the same \n"; } count++; } system ("PAUSE"); return 0; }
|
https://www.daniweb.com/programming/software-development/threads/183061/confused-with-c-strings
|
CC-MAIN-2018-13
|
refinedweb
| 260
| 69.65
|
I’m sure this has been discussed here recently, but I can’t seem to find
the references. I have two related questions:
I’ve just done a fresh install of RT 3.4.1 on a new system. We’re
currently running 2.0.15 in production on another box. The latter is
using mysql, the new RT is using postgres.
We’re not necessarily interested in importing our tickets from the old to
the new RT. But because we do have a ticket history that affects our
dealings with users, we’d like not to use ticket numbers in the new system
that may have been used in the old. So, my first question is this: can I
set up the new RT so that ticketing will begin with a number greater than
1, e.g., 40000?
Right now, I have nothing much in the new database, so if I have to do
something like a dropdb and initdb, that’s OK. But I would like to know if
that can be done without messing up my already-completed install.
My related question is this: I’ve seen reference in recent postings to
rt2-rt3 conversion tools, but I don’t know where to find them. Perhaps I
won’t really need them, but I’m still interested in case we decide to
import our old tickets after all.
|
https://forum.bestpractical.com/t/rt2-rt3-questions/14772
|
CC-MAIN-2020-40
|
refinedweb
| 245
| 72.16
|
Java Persistence/Tables
Contents
- 1 Tables
- 2 Advanced
- 2.1 Multiple tables
- 2.2 Multiple tables with foreign keys
- 2.3 Multiple table joins
- 2.4 Multiple table outer joins
- 2.5 Tables with special characters and mixed case
- 2.6 Table qualifiers, schemas, or creators
Tables
A table is the basic persistent structure of a relational database. A table contains a list of columns which define the table's structure, and a list of rows that contain the table's data. Each column has a specific type and generally a size. The standard set of relational types is limited to basic types including numeric, character, date-time, and binary (although most modern databases have additional types and typing systems). Tables can also have constraints that define the rules which restrict the row data, such as primary key, foreign key, and unique constraints. Tables also have other artifacts such as indexes, partitions and triggers.
A typical mapping of a persistent class will map the class to a single table. In JPA this is defined through the @Table annotation or <table> XML element. If no table annotation is present, the JPA implementation will auto-assign a table for the class. The JPA default table name is the name of the class (minus the package) with the first letter capitalized. Each attribute of the class will be stored in a column in the table.
Example mapping annotations for an entity with a single table
... @Entity @Table(name="EMPLOYEE") public class Employee { ... }
Example mapping XML for an entity with a single table
<entity name="Employee" class="org.acme.Employee" access="FIELD"> <table name="EMPLOYEE"/> </entity>
Advanced
Although in the ideal case each class would map to a single table, this is not always possible. Other scenarios include:
- Multiple tables : One class maps to two or more tables.
- Sharing tables : Two or more classes are stored in the same table.
- Inheritance : A class is involved in inheritance and has an inherited and local table.
- Views : A class maps to a view.
- Stored procedures : A class maps to a set of stored procedures.
- Partitioning : Some instances of a class map to one table, and other instances to another table.
- Replication : A class's data is replicated to multiple tables.
- History : A class has historical data.
These are all advanced cases, some are handled by the JPA Spec and many are not. The following sections investigate each of these scenarios further and include what is supported by the JPA spec, what can be done to workaround the issue within the spec, and how to use some JPA implementations extensions to handle the scenario.
Multiple tables
Sometimes a class maps to multiple tables. This typically occurs on legacy or existing data models where the object model and data model do not match. It can also occur in inheritance when subclass data is stored in additional tables. Multiple tables may also be used for performance, partitioning or security reasons.
JPA allows multiple tables to be assigned to a single class. The @SecondaryTable and @SecondaryTables annotations or <secondary-table> elements can be used. By default the @Id column(s) are assumed to be in both tables, such that the secondary table's @Id column(s) are the primary key of the secondary table and a foreign key to the first table. If the first table's @Id column(s) are not named the same, the @PrimaryKeyJoinColumn or <primary-key-join-column> can be used to define the foreign key join condition.

In a multiple table entity, each mapping must define which table the mapping's columns are from. This is done using the table attribute of the @Column or @JoinColumn annotations or XML elements. By default the primary table of the class is used, so you only need to set the table for secondary tables. For inheritance the default table is the primary table of the subclass being mapped.
Example mapping annotations for an entity with multiple tables
... @Entity @Table(name="EMPLOYEE") @SecondaryTable(name="EMP_DATA", pkJoinColumns = @PrimaryKeyJoinColumn(name="EMP_ID", referencedColumnName="ID") ) public class Employee { ... @Column(name="YEAR_OF_SERV", table="EMP_DATA") private int yearsOfService; @OneToOne @JoinColumn(name="MGR_ID", table="EMP_DATA", referencedColumnName="ID") private Employee manager; ... }
Example mapping XML for an entity with multiple tables
<entity name="Employee" class="org.acme.Employee" access="FIELD"> <table name="EMPLOYEE"/> <secondary-table <primary-key-join-column </secondary-table> <attributes> ... <basic name="yearsOfService"> <column name="YEAR_OF_SERV" table="EMP_DATA"/> </basic> <one-to-one <join-column </one-to-one> </attributes> </entity>
With the @PrimaryKeyJoinColumn, the name refers to the foreign key column in the secondary table and the referencedColumnName refers to the primary key column in the first table. If you have multiple secondary tables, they must always refer to the first table. When defining the table's schema, typically you will define the join columns in the secondary table as the primary key of the table, and a foreign key to the first table. Depending on how you have defined your foreign key constraints, the order of the tables can be important; the order will typically match the order that the JPA implementation will insert into the tables, so ensure the table order matches your constraint dependencies.
For relationships to a class that has multiple tables, the foreign key (join column) always maps to the primary table of the target. JPA does not allow a foreign key to map to a table other than the target object's primary table. Normally this is not an issue, as foreign keys almost always map to the id/primary key of the primary table, but in some advanced scenarios this may be an issue. Some JPA products allow the column or join column to use the qualified name of the column (i.e. @JoinColumn(referencedColumnName="EMP_DATA.EMP_NUM")) to allow this type of relationship. Some JPA products may also support this through their own API, annotations or XML.
Multiple tables with foreign keys
Sometimes you may have a secondary table that is referenced through a foreign key from the primary table to the secondary table, instead of a foreign key from the secondary table to the primary table. You may even have a foreign key between two of the secondary tables. Consider having an EMPLOYEE and ADDRESS table, where EMPLOYEE refers to ADDRESS through an ADDRESS_ID foreign key, and (for some strange reason) you only want a single Employee class that has the data from both tables. The JPA spec does not cover this directly, so if you have this scenario the first thing to consider, if you have the flexibility, is to change your data model to stay within the confines of the spec. You could also change your object model to define a class for each table, in this case an Employee class and an Address class, which is typically the best solution. You should also check with your JPA implementation to see what extensions it supports in this area.
One way to solve the issue is simply to swap your primary and secondary tables. This will result in having the secondary table reference the primary table's primary key, and is within the spec. This however will have side-effects, one being that you have now changed the primary key of your object from EMP_ID to ADDRESS_ID, and it may have other mapping and querying implications. If you have more than 2 tables this also may not work.
Another option is to just use the foreign key column in the @PrimaryKeyJoinColumn; this will technically be backward, and perhaps not supported by the spec, but may work for some JPA implementations. However, this will result in the table insert order not matching the foreign key constraints, so the constraints will need to be removed, or deferred.
It is also possible to map the scenario through a database view. A view could be defined joining the two tables and the class could be mapped to the view instead of the tables. Views are read-only on some databases, but many also allow writes, or allow triggers to be used to handle writes.
Some JPA implementations provide extensions to handle this scenarios.
- TopLink, EclipseLink : Provides a proprietary API for its mapping model, ClassDescriptor.addForeignKeyFieldNameForMultipleTable(), that allows arbitrarily complex foreign key relationships to be defined among the secondary tables. This can be configured using a @DescriptorCustomizer annotation and DescriptorCustomizer class.
Multiple table joins
Occasionally the data model and object model do not get along very well at all. The database could be a legacy model and not fit very well with the new application model, or the DBA or object architect may be a little crazy. In these cases you may require advanced multiple table joins.
Examples of these include having two tables related not by their primary or foreign keys, but through some constant or computation. Consider having an EMPLOYEE table and an ADDRESS table, where the ADDRESS table has an EMP_ID foreign key to the EMPLOYEE table, but there are several addresses for each employee and only the address with the TYPE of "HOME" is desired. In this case, data from both of the tables is desired to be mapped in the Employee object. A join expression is required where the foreign key matches and the constant matches.
Again this scenario could be handled through redesigning the data or object model, or through using a view. Some JPA implementations provide extensions to handle this scenarios.
- TopLink, EclipseLink : Provides a proprietary API for its mapping model, DescriptorQueryManager.setMultipleTableJoinExpression(), that allows arbitrarily complex multiple table joins to be defined. This can be configured using a @DescriptorCustomizer annotation and DescriptorCustomizer class.
Multiple table outer joins
Another perversion of multiple table mapping is the desire to outer join the secondary table. This may be desired if the secondary table may or may not have a row defined for the object. Typically the object should be read-only if this is to be attempted, as writing to a row that may or may not be there can be tricky.
This is not directly supported by JPA, and it is best to reconsider the data model or object model design if faced with this scenario. Again it is possible to map this through a database view, where an outer join is used to join the tables in the view.
Some JPA implementation support using outer joins for multiple tables.
- Hibernate : This can be accomplished by using the Hibernate @Table annotation and setting its optional attribute to true. This will configure Hibernate to use an outer join to read the table, and it will not write to the table if all of the attributes mapping to the table are null.
- TopLink, EclipseLink : If the database supports usage of outer join syntax in the where clause (Oracle, Sybase, SQL Server), then the multiple table join expression could be used to configure an outer join to be used to read the table.
Tables with special characters and mixed case
Some JPA providers may have issues with table and column names with special characters, such as spaces. In general it is best to use standard characters, no spaces, and all uppercase names. International languages should be ok, as long as the database and JDBC driver supports the character set.
It may be required to "quote" table and column names with special characters or in some cases with mixed case. For example if the table name had a space it could be defined as the following:
@Table("\"Employee Data\"")
Some databases support mixed case table and column names, and others are case insensitive. If your database is case insensitive, or you wish your data model to be portable, it is best to use all uppercase names. This is normally not a big deal with JPA where you rarely use the table and column names directly from your application, but can be an issue in certain cases if using native SQL queries.
Table qualifiers, schemas, or creators
A database table may need to be prefixed with a table qualifier, such as the table's creator, or its namespace, schema, or catalog. Some databases also support linking to a table on another database, so the link name can also be a table qualifier.
In JPA a table qualifier can be set on a table through the schema or catalog attribute. Generally it does not matter which attribute is used, as both just result in prefixing the table name. Technically you could even include the full name "schema.table" as the table's name and it would work. The benefit of setting the prefix in the schema or catalog is that a default table qualifier can be set for the entire persistence unit; also, not setting the real table name may impact native SQL queries.
If all of your tables require the same table qualifier, you can set the default in the orm.xml.
Example mapping annotations for an entity with a qualified table
... @Entity @Table(name="EMPLOYEE", schema="ACME") public class Employee { ... }
Example mapping XML for default (entire persistence unit) table qualifier
<entity-mappings> <persistence-unit-metadata> <persistence-unit-defaults> <schema name="ACME"/> </persistence-unit-defaults> </persistence-unit-metadata> .... </entity-mappings>
Example mapping XML for default (orm file) table qualifier
<entity-mappings> <schema name="ACME"/> ... </entity-mappings>
|
https://en.wikibooks.org/wiki/Java_Persistence/Tables
|
CC-MAIN-2015-32
|
refinedweb
| 2,207
| 53.31
|
The for loop is perhaps the most commonly encountered loop in all programming. Its structure is the same in all programming languages, only its implementation is different. It is quite useful when you need to repeatedly execute a code segment, a certain number of times. The concept is simple. You tell the loop where to start, when to stop, and how much to increase its counter by each loop. The basic structure of a for loop is shown in Figure 5.2C.
Figure 5.2C: The structure of a for loop.
The essence of the for loop is in the counter variable. It is set to some initial value, it is incremented (or decremented) each time the loop executes. Either before or after each execution of the loop the counter is checked. If it reaches the cut-off value, stop running the loop.
The specific implementation of the for loop in C++ is shown here:
for (int i = 0; i < 5; i++) { }
What this code is doing is rather straightforward. First, we create an integer to be used as a loop counter (to count how many times the loop has executed) and we give it an initial value of zero. We then set the condition under which the loop should execute. That's the i < 5 portion of the code. It is essentially saying, "keep going as long as i is less than 5." As soon as that condition is no longer true (i.e., i is equal to or greater than 5), the loop stops executing. Finally, we have a statement to increment the loop counter after each iteration of the loop. Notice the semicolons in the for loop declaration. You should recall that all statements/expressions in C++ end with a semicolon. These code snippets are, indeed, valid statements and could be written as stand-alone expressions. The reason the last one does not have a semicolon is because it is followed by a closing parenthesis, and that terminates the expression. The three parts of the for loop declaration are the following.
Declare and initialize the loop counter.
Set the conditions under which the loop will execute.
Increment the loop counter.
Step 1: Use your favorite text editor to enter the following code.
#include <iostream>
using namespace std;

int main()
{
    for (int j = 0; j < 10; j++)
    {
        cout << "Going Up…." << j << "\n";
    }// end of for loop

    return 0;
}// end of main
Step 2: Compile the code.
Step 3: When you run the code, you will see something similar to what is displayed in Figure 5.3.
Figure 5.3: For Loops.
This simple example shows you how a for loop works. It's rather basic, but it should give you the general idea.
It should be pointed out that it is possible to increment by values greater than one, and even to count backwards, using the decrement operator. The next example illustrates this:
Step 1:
#include <iostream>
using namespace std;

int main()
{
    for (int j = 10; j > 0; j--)
    {
        cout << "count down… " << j << "\n";
    }// end of for loop

    cout << "Blast Off!!! \n";
}// end of main
Step 2: Compile the code.
Step 3: When you run this program you should see something like what is displayed in Figure 5.4A.
Figure 5.4A: For loops 2.
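The countdown above shows the decrement case. For completeness, here is a quick sketch (not one of the book's numbered examples) of stepping by more than one:

for (int j = 0; j <= 10; j += 2)
{
    cout << "counting by twos... " << j << "\n";
}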
You will find many programming situations where the for loop is quite useful, so it’s rather important. Also remember that for loops, like if statements, are found in all programming languages, only their implementation is different.
Loops, like if statements and switch statements, can be nested. You can have a loop inside of another loop. When you study algorithms in Chapter 13, and games in Chapter 14, you will see places where this is practically applied. The most important thing to remember when nesting loops is that the inner loop will execute a number of times equal to its counter multiplied by the outer loop's counter. Put another way, if your outer loop executes 5 times, and you set the inner loop to loop 4 times, the inner loop will actually loop 20 times. It will loop 4 times for each iteration of the outer loop. Let's look at an example of this.
Step 1: Enter the following code into a text editor and save it as 05-04b.cpp.
#include <iostream>
using namespace std;

int main()
{
    for (int i = 0; i < 3; i++)
    {
        for (int j = 0; j < 4; j++)
        {
            cout << "This is inner loop " << j;
            cout << " of outer loop " << i << endl;
        }// end of inner loop
    }// end of outer loop

    return 0;
}// end of main
Step 2: Compile and execute the code. You should see something like what is displayed in Figure 5.4B.
Figure 5.4B: Nested for loops.
Note how often the inner loop printed out. This should prove the earlier statement regarding the number of iterations the inner loop makes. The outer loop, in this case, was set to loop 3 times. The inner loop was set to loop 4 times. Because the inner loop was nested within the outer loop, it actually looped 3 multiplied by 4, or 12 times.
|
https://flylib.com/books/en/2.331.1.45/1/
|
CC-MAIN-2020-50
|
refinedweb
| 856
| 73.58
|
A VBA enumeration is a special type of constant that automatically assigns predefined values to its members. Usually, the members are related. For instance, there's an enumeration named vbDayOfWeek that contains constants for each day of the week. You can see its members by opening a module in the Visual Basic Editor and entering
vbDayOfWeek.
Don't forget the period at the end; that wakes up VBA's IntelliSense, which displays the members.
You can view enumerates specific to an Office application by opening Help in the VBE and typing, "enumerate." (You must be in the VBE.)
VBA is flexible where enumerates are concerned and allows you to create your own. To do so, declare an enumeration type using the Enum statement in the Declarations section of a standard or public class module using the syntax
Private Enum WorkDays
or
Public Enum WorkDays
where WorkDays is the enumeration's name (which you provide). Then list the constant names in order of their appropriate Integer values. VBA will assign the value 0 to the first constant and increase by 1 for each subsequent member. You might never use the Integer values, but keep this in mind when creating your list. For example, VBA would assign the values 0 through 4 to the members Monday through Friday:
Private Enum WorkDays
Monday
Tuesday
Wednesday
Thursday
Friday
End Enum
To use the enumeration, simply declare a variable as the enumeration's type in the form
Dim DayOff As WorkDays
DayOff = workdayconstant
where workdayconstant is Monday, Tuesday, Wednesday, Thursday or Friday or 0, 1, 2, 3, or 4, respectively.
Susan Sales Harkins is an IT consultant, specializing in desktop solutions. Previously, she was editor in chief for The Cobb Group, the world's largest publisher of technical journals.
|
http://www.techrepublic.com/blog/microsoft-office/how-to-use-and-define-vba-enumerate-constants/
|
CC-MAIN-2017-39
|
refinedweb
| 296
| 51.89
|
The Changelog – Episode #375
Gerhard goes to KubeCon (part 2)
talking Prometheus, Grafana, & Crossplane
Square – The Square developer team just launched their new developer YouTube channel. Head to youtube.com/squaredev or search for “Square Developer” on YouTube to learn more and subscribe.
GitPrime – GitPrime helps software teams accelerate their velocity and release products faster by turning historical git data into easy to understand insights and reports. Ship faster because you know more. Not because you’re rushing. Learn more at gitprime.com/changelog.
Notes & Links
Transcript
Click here to listen along while you enjoy the transcript. 🎧
Today we have around this square table, rectangular table, we have Björn from Grafana, we have Fred from Red Hat, and then we have Ben from GitLab. All of them are Prometheus contributors, so this is going to be a technical discussion. We’re going to mention a lot about cool things about Prometheus. Who would like to get us started?
Sure. I’m Ben, I’m a site reliability engineer at GitLab. I’ve been contributing to the project for quite a number of years now. My focus is on getting developers and other systems to integrate with Prometheus. So I don’t work on the core code so much, but I try and help people get their data into Prometheus and then learn how to actually turn that into monitoring.
My name is Björn. I work at Grafana, but that’s quite recent. I now am fortunate enough to be a full-time Promethean. My company pays me to contribute to the project, and I also do internal Prometheus-related things. Previously, until like half a year ago, I was at SoundCloud, where Prometheus had its cradle (that’s how I like to say it). There we kind of had other jobs; we were production engineers, or site reliability engineers, or something… Ben was also there, and we had to create Prometheus for doing our job, as a tool. But it was always like a site business, in a way; it sounds kind of weird now that it’s so popular.
I’m Frederic. I am an architect at Red Hat. I’m basically the architect for everything observability, and I happen to have started with Prometheus, in that space, roughly 3,5 years ago. Even though it’s been 3,5 years, I think I’m the most recent at this table to have joined the Prometheus project.
And one thing which I’d like to add is that this year, for the top contributor in the cloud-native landscape, the award went to Fred, right? Björn, you were mentioning earlier that Prometheus - the contributors got awards in a row every single year… One of the Prometheus contributors got some sort of an award. So there’s like a streak going on here… Is that right?
You might think it’s like a political thing, that we have to get an award, but I think we really have a bunch of awesome people.
[00:03:59.03] I think Prometheus, looking at how it grew – everybody’s looking at Kubernetes and everybody knows Kubernetes, but Prometheus is also a graduated project in the CNCF… And a lot of activities are happening around Prometheus, around observability, around metrics… I find that super-interesting, because it’s not just about the platform, it’s also all the other tooling that goes in the platform. And Prometheus is one of the shining stars of the CNCF.
We were the second graduated project.
There you go.
We almost graduated first, but…
Yeah. But Kubernetes–
Kubernetes had to take that, yeah. They’re also a much bigger project, so there was way more effort. For us it was kind of easy to graduate. But interestingly, I did this for a talk recently [unintelligible 00:04:43.13] and CNCF has this dev stats tools - it’s a Grafana dashboard, shameless plug - where they can plot, they just evaluate activities among companies, among contributors, and you can just draw graphs how actively is this project contributed to. And if you look at the Prometheus graph, it looks like from the moment of graduation you actually got more activity. It’s probably smaller things that are not so visible, but a lot is going on in the Prometheus ecosystem.
Right. And you only just had PromCon not long ago. How was that? Two weeks ago, or one week ago? That was very recent.
Yeah, that was the second week of November. It was great. It’s a very small community gathering; we’re actually sad this year, we wanted to expand the size of it, but we just couldn’t get a venue big enough that was available when we needed… So yeah, it’s a small 220-person conference, and it’s all talks about Prometheus and development of what’s going on, people’s stories and how they use Prometheus.
Tickets are highly sought after. It felt like a rock concert.
Yes. And I think even our livestream was well-visited, right?
Yeah, I think we peaked at something like about 80 people on the livestream. It was a little unreliable this year, but we’ll hopefully do better next time.
All the talks will get proper recordings on the website, so everybody can watch that.
I think what’s super-exciting about PromCon - I believe all of us have been at every official PromCon. I think there was one unofficial –
I was at the first unofficial PromCon 0. [laughter] You were too, right? It was at Soundcloud, most – I mean, we called it PromCon [unintelligible 00:06:43.00] when developers came together to prepare the 1.0 release. But then the real PromCon [unintelligible 00:06:48.10] I was at the first, and this one.
This most recent one.
Yeah.
I think what’s really interesting about how PromCon has evolved over the last couple of years is that in the first 2-3 years I think it was very Prometheus development-focused. Last year also, and already we’ve seen this a lot - I think the entire community is kind of evolving. Prometheus is a very stable project, and we’re now more demonstrating how it can be used in extremely powerful ways. I think that kind of reflects, in some way, the graduated status… Because people can rely on it, we’ve seen all this adoption that is just incredible.
Also, how this ecosystem doesn’t have a strict boundary. You have lots of projects that are not Prometheus projects, but they are closely related, and there are lots of integration points. It’s open source, it’s open community, and I think that really works well.
[00:07:57.23] One thing which I really liked about Prometheus is this emerging standard of OpenMetrics. So it’s less about a specific product and it’s more about a standard, which people and vendors are starting to agree on, and I think that is such an important moment. When you have all these companies saying “You know what - Prometheus is on to something.” So how about we stop calling the exposition format that, and we start calling it OpenMetrics? Did you have any involvement with that?
Yeah, I’m one of the people that started the OpenMetrics project, and as a site reliability engineer, I’m working with my developers to instrument their code and make it so that I can monitor it… And I also have to work with a lot of vendor code. And for a long, long time, the only real proper standard is SNMP. But SNMP for a modern developer is extremely clunky and really hard to use, and it’s not cloud-native, if we wanna use the buzzword.
As an SRE, I don’t actually care if vendors use Prometheus, but we need OpenMetrics as a modern standard to replace SNMP as the transport protocol of metric data.
And I really like how the metrics – OpenMetrics… OpenTelemetry, which is a combination of OpenCensus and Open…
Tracing.
OpenTracing. Thank you very much, Fred. So the combination of these two - how does OpenMetrics fit into OpenTelemetry?
OpenTelemetry, because it comes from the OpenTracing and OpenCensus - OpenCensus was this idea of creating a standard instrumentation library that handles both the tracing and the metrics, and some of the logging pieces… And this is a really great idea, especially when I’m wearing my SRE hat - you have a standard library for instrumenting your code, and the OpenMetrics is (or what I think should be is) the way you get the metric data out of OpenTelemetry. So it’s just kind of the standardized interface. Because the tracing interface is kind of still young, and fast-moving, and it hasn’t settled down, but the Prometheus and the OpenMetrics standard is something that we wanna see last for as long as SNMP has lasted. SNMP has been around since the early ‘90s, and it hasn’t changed much, and the data model is actually quite good… With it being clunky and a little bit designed around 16-bit CPUs, and things like that. But we wanna see the OpenMetrics transport format be this long-term, stable thing that vendors can rely on.
So we have metrics, the story is really good, we have traces, and the story of distributed tracing is really good as well… Where are logs, or events (as some like to call them), where do they fit in in this model? And I’m looking at Björn, because I know that Loki is this up-and-coming project… We’ll be talking later with Tom about Loki, and there was – I forget his name, but he’s the maintainer of Loki, or the head behind Loki [unintelligible 00:11:36.10]
[00:11:44.23] Actually, we have a bunch of people at Grafana working on Loki. It’s like a big deal, obviously. I don’t even feel like I would do them justice if I now tell them. You should probably ask later… I mean, perhaps you should take it from the other way - people see Prometheus, they realize it’s this hot thing that they should use, they see all the success they have, and then they try to shoehorn all their observability use cases into Prometheus, and then they start to use Prometheus for event logging… And Prometheus is a really bad event logging system. That’s a lot where we have to fight – not fight; where we have to convince people that they shouldn’t do this, even if they’re angry at us.
But then there’s also the other – whatever the backlash where the logs processing people try to solve everything… Yeah, we kind of have more this inclusive picture; you need all those tools, you need to combine them nicely, and Loki has this idea where you take some parts of Prometheus, which is like service discovery and labeling, and use the exact same thing for logs collection, and then it’s easy to connect the dots and jump from an alert with certain labels, into the appropriate logs that you have collected. It goes into that area, but I guess you will talk a lot about that with Tom.
Actually, I’m a strong believer of connecting different signals via metadata. Actually, Tom and I did a keynote at KubeCon Barcelona about exactly this topic, so I highly recommend people checking that out.
Okay. Are the videos out yet from Barcelona?
Yeah.
Cool.
It’s not only him recommending himself, I recommend that as well. [laughter]
Okay.
And from the Prometheus project perspective, I see it as – with Prometheus we have a very specific focus, and we kind of follow a bit of the Unix philosophy of "As an engineer, I want a tool that does one thing and one thing well." And I look at some of these large monitoring platform things and I see a lot of vendors - they also combine monitoring and management into the same platform. With Prometheus, we explicitly don't have any kind of management. We don't even have any templating in our configuration file, because different organizations have completely different ideas on what they want their configuration management to look like.
You have things like Kubernetes and config maps and operators, and then you might have another organization that is doing everything with a templating configuration management like Chef, or Ansible, or one of those. So the layering approach to observability is really important to me, because I want a really good logging system, and I want a really good metrics system, and I really want a good tracing [unintelligible 00:14:54.15] and crash dump controls, and profiles… And to me, those are all different pieces of software and I need to combine them, and there’s no one magic solution that’s gonna solve all my problems all at once.
I can see this idea of the building blocks, and having the right building blocks, "right" being a very relative term in this context… Because "right" to me is different than "right" to you. So this choice of selecting whichever building blocks are right for you, and combining them, again, in whichever way is right for you - and then almost everybody gets what they want. The pieces exist, and they can be combined in almost infinite ways.
So Prometheus has grown a lot, Prometheus is on a crazy trajectory right now, from where I’m standing… And I would like to zoom in a little bit in a shorter time span, for example the last six months, just to get a better appreciation of all the change that is happening in Prometheus. Let’s focus on the last six months, the big items that have been delivered, and the impact that they had on the project.
[00:16:09.19] We should also say, there’s so many – we call it “project”, a repository in the Prometheus GitHub org, and there are many projects. AlertManager is probably something very famous, Node_exporter is pretty active and big, and all those things… But every project has new stuff going on, and I think we should restrain ourselves to just the Prometheus server itself, because otherwise we could chat forever about all the new things.
Yeah. And actually, a few of us have been discussing that the core Prometheus code is really reasonably feature-complete, and it's not actually moving that fast. We have lots of small changes that are still important, but the speed of the project is really in how many additional things connected to Prometheus are expanding.
There’s a large momentum about things that are being built around Prometheus, while Prometheus itself is largely stabilizing and optimizing.
Yeah. Should we talk about something new? Now that you say stuff around Prometheus - it was always a very hot topic that Prometheus doesn’t have this idea of having a distributed, clustered storage engine built-in, and we always had that “somebody else’s” problem. Then we provided - and it’s still an experimental interface, officially…
Officially yes, but it works.
Yeah. So we created this kind of experimental write interface, and now we have dozens of vendors or open source projects that integrate against this interface, where Prometheus can send out the metrics that it has collected to something out there. This has seen a lot of improvements recently. I don’t know, does one of you want to talk about details there?
Actually, even commercial vendors, monitoring platform vendors are starting to accept Prometheus remote write as a way to get the data into their observability stack.
I don’t think any of us actually worked on these improvements, but I think the most notable thing that happened in remote write was previously remote write - whenever Prometheus scraped any samples, it immediately queued them up and tried to send them to the remote storage. This had various problems, one of which is we really just keep all these samples in memory until we send them off. So one of the dangers was if the remote storage was down, we would continue to queue up all of this data in memory, and potentially cause out-of-memory kills, for example.
The solution to this was Prometheus has a write-ahead-log where the most recent data is written to, before it gets flushed into an immutable block of data… So instead of doing all of this in-memory, basically we use the write-ahead-log as a persistent on-disk buffer. That write-ahead-log is tailed, and then we send the data off based on that.
This is one of those things – the feature actually hasn’t changed at all in its functionality, it’s just the implementation itself changed to be a lot more robust than it used to be. And I think that’s really exciting, and it kind of shows the details that we’re starting to focus on in Prometheus.
For all those projects that are being built around Prometheus, it’s very important and it’s becoming even more important for the core to be more robust, to be more performant, to be dependable, so that it can support all those extension points and all that growth.
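To make the remote write interface being discussed a bit more concrete: on the wire it is, roughly, an HTTP POST of a snappy-compressed, protobuf-encoded WriteRequest. The following is a hypothetical, minimal receiver sketch in Go - the endpoint path and port are invented, and the exact package paths and field names may differ between Prometheus versions:

```go
package main

import (
	"io/ioutil"
	"log"
	"net/http"

	"github.com/gogo/protobuf/proto"
	"github.com/golang/snappy"
	"github.com/prometheus/prometheus/prompb"
)

// A toy remote-write receiver: Prometheus POSTs a snappy-compressed,
// protobuf-encoded WriteRequest to this endpoint.
func receive(w http.ResponseWriter, r *http.Request) {
	compressed, err := ioutil.ReadAll(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	raw, err := snappy.Decode(nil, compressed)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	var req prompb.WriteRequest
	if err := proto.Unmarshal(raw, &req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	// Each time series carries its label set plus (timestamp, value) samples.
	for _, ts := range req.Timeseries {
		for _, s := range ts.Samples {
			log.Printf("labels=%v value=%v ts=%v", ts.Labels, s.Value, s.Timestamp)
		}
	}
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	http.HandleFunc("/receive", receive)
	log.Fatal(http.ListenAndServe(":9201", nil))
}
```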
[00:20:04.26] Yeah, I guess if it’s still experimental, we should do something about it. [laughs] Shall we talk about the flipside of that, the remote read?
Yeah, yeah.
Because that is the flipside of it; if you have a Prometheus server that has stored stuff in remote storage, often those remote storage providers have their own query engines; sometimes they even support literally PromQL, and you can work on that… But sometimes you just want your Prometheus server to know about that data that has been stored away somewhere, and there is the flipside of the remote write, which is remote read. That’s also kind of still experimental, but there was a similar problem… Who wants to take this?
Go ahead.
Should I go ahead? Actually, we are not the domain experts in that, right…? [laughs] So the problem there was that Prometheus runs a query, and then the query engine has to retrieve the data, and the API was such that it would essentially get all the samples that this query had to act on in one go… So the remote back-end for that had to construct all those samples in memory, on their side, and then send it all over. So Prometheus has to receive it all on its own side, it's all there, and then that could have a huge impact on memory usage in that moment. I mean, that concretely happened. You would [unintelligible 00:21:31.03] both parts.
The back-end would build up all this huge amount of samples in memory, and then Prometheus has to read it. Prometheus has a really efficient way of storing time series data in blocks, in its own storage, so the idea was to just stream the data… Streaming is anyway the hotness, where it’s all in one stream, you don’t have to build it up first and then send it out. And I think it also reuses the exact block format of Prometheus.
The big problem with the remote read was that we have all of this compressed data on disk and in memory, and the remote read would decompress it, serialize it, and then send it out over the wire completely uncompressed, and it was using huge amounts of bandwidth. Actually, was it taking it and then Snappy-compressing it, if I remember correctly?
I believe so, yeah.
Yeah, so it would take a well-compressed time series block, serialize it, and then recompress it with a generic compression… And this was just kind of silly.
In hindsight, though… [laughter]
In hindsight, yes. And this doesn’t just benefit the Prometheus server itself, but this is – again, there are a bunch of integrations around Prometheus that benefit from this.
Yeah. But I think Thanos was – it was a big deal for Thanos, this improvement.
Yes, because Thanos essentially sits next to a Prometheus server, and uses this API to read data from the time series database… So it was a big deal for this component to have this more efficient way of doing it. Because Thanos itself had already this streaming approach; so it loaded everything into memory, and then sent it off in a streaming approach. So now it can actually make use of all of these things.
So why do you think that this remote write and remote read are becoming more and more important these days? Is something happening with Prometheus? Is it getting to a point where this is becoming more and more important? Why is it an important thing now?
[00:23:50.25] As users of Prometheus grow, they grow beyond the capacity of one Prometheus server, and Prometheus was designed from a background of distributed systems… And where Prometheus got its inspiration - we had hundreds or thousands of monitoring mini-nodes, and each of these mini-nodes would watch one specific task and keep track of one small piece of the puzzle. And as people grow their monitoring needs, they’re running into the same exact problems, where a single monitoring server is not powerful enough to monitor a whole entire Kubernetes cluster with tens of thousands of pods, and multiple clusters that are geo-distributed. So they’re running into the same problems. And being able to take Prometheus and turn it into just the core of a bigger system means that you need these in and out data streams in order to make it the spokes of a full platform.
So that’s another hint as to the popularity of Prometheus and the use cases for Prometheus, which - they are like machines, they aren’t big enough to be able to run everything in one machine. So again, it got to the point where you need more than one, and what does that look like. So this is a story in a use case which is becoming more and more relevant.
So there was the remote write, the remote read, important improvements in the last six months… What other things are noteworthy?
It’s actually a little bit longer ago than six months where we decided we go on a strict six-week cadence of releases. Similar to Kubernetes, but they have a longer cadence.
Three months.
Three months. Go has this similar thing… Personally, my ideal is always you should just release when you have something to release, and in the ideal world, that just works… But in the real world, people just procrastinate, and then – we have seen this, that just nobody was bothering to release a new Prometheus server, and then we had way too many things piled up. So we just said “Okay, every six weeks.” And should we ever reach this point where we have a new release and nothing interesting has happened, we can reconsider that. But so far, we have done this now for almost a year, I think.
So we always get a release [unintelligible 00:26:23.15] nominated ahead of time, and then you cut a release candidate, you tell the world that they should try it out, and then usually we get a fairly stable .0 release. What is the current, 2.14.0? I think we didn’t have a bug-fix release for that one, right?
Yup.
That was during PromCon actually when we released that. But that was just a coincidence, because it's a strict six-week cadence. So every time there's something interesting happening… Yeah, so releases go up. But we also have this all built into it, like benchmarking. The benchmarking tooling, our internal benchmarks are way better now, and it's all part of the procedure, to run benchmarks to see regressions. We had a few of them in the past - nice, interesting new features, but also, sadly, a new feature was that everything is a bit slower… [laughs] So that can't really happen anymore; or it happens in a form where we say "Okay, now we have (whatever) staleness handling" and we accept that this has a tiny performance penalty.
Because we have all these tools, we can do these things in a controlled way, as opposed to realizing these things after we’ve already released it and users are opening issues. One thing that personally for my organization is really cool about the regular release schedule is we know exactly when the next release candidate is going to be cut, so the SRE team can plan [unintelligible 00:27:53.26] these kinds of releases, and contribute back with issues, and so on. I think that’s also for us as maintainers really powerful to get more consistent feedback.
[00:28:10.07] Do you see the adoption of new releases? Is there a way of seeing what the adoption is? What I mean by that - maybe number of downloads, maybe something that would tell you “Okay, the users are upgrading, and they’re running these new releases.” Is there such a place that you have? Maybe it’s publicly available…?
Yeah, there are counters for looking at how many downloads we get from the official releases. There’s also how many people pull their Docker images… But we’re not really paying attention to this. We’re more focused on development than marketing numbers.
Do we have like GitHub download counters?
Yes, I believe so.
Okay.
But we mostly don’t even pay attention to them.
But then also, of course, some organizations wouldn't even download directly from GitHub, they just download it into their own repository… So you can never know. We would need to build some phone-home mechanism into Prometheus, and we're not doing that… But Grafana has some mild tracking about their installed instances, and they also report back the number of – like, which data source is being used by that Grafana instance, and every PromCon has a little lightning talk where some Grafana person is telling us how many Grafana instances there are in the world that phone home, and how many of them have Prometheus as a data source. And the Grafana growth is crazy, but the percentage of Grafana instances using Prometheus is also growing like crazy. It's like a second order of growth, and I think this year we hit the point where more than 50% of Grafana instances have a Prometheus data source. That's mind-blowing.
So releasing new versions, having this six-week cycle when users can expect a new version to be cut, a new version to be available… Do you do anything about deprecating old versions, or stopping any support for older versions?
It’s largely on an ad-hoc basis. If there is someone who is willing to backport a fix, I think we genuinely are open to cutting another patch release. Sometimes us at Red Hat we support older versions in our product, for example, and that’s when we do those kinds of things. I don’t think we have a set schedule of when we don’t support anything anymore, but it generally doesn’t happen too often.
Also, we are on major version 2, and we have a few features listed as experimental that can actually have breaking changes, where you could not just seamlessly upgrade… But most features are not experimental. So there’s very few reasons for somebody to not go to the next minor release.
Sometimes we have little storage optimizations, and we learned from some problems in the past - once you had gone to the higher version and the storage had used the new encoding internally, you couldn't go back, because the older versions couldn't act on it… So we are now doing things where you have to switch it on with a flag in the next minor release, and then it becomes default, but you could still switch it off, and then it becomes the only way of doing it. It's very smooth, and I think rarely – I mean, some companies have these very strict procedures to whitelist a new version, but in general it's happening rarely that somebody says "I really still have to run Prometheus 2.12. Could you please have this bug fix release for 2.12?"
[00:32:05.17] Yeah. As a matter of fact, I don’t remember the last time we’ve done anything like this.
Yeah, the releases are always upgradable within the major version… So the incremental upgrade is completely seamless. It’s just dropping the new version, restart, and away you go. There has been no real problem with upgrades.
Interestingly – so I also work on one of the projects that integrate around Prometheus called the Prometheus Operator, and we actually test, to this day, upgrades from Prometheus 1.4, I believe, up until the latest version.
Amazing. Okay.
Should we find something else to talk about?
Yeah.
So we could talk about unit testing rules and alerts.
Alert testing is a big deal, because – I have discussed this actually also quite often recently, how you actually make sure that an alert will fire if you actually have an outage. This is a big, arguably not quite solved problem, but at least in Prometheus you can now unit-test your rules - recording rules, as well as alerting rules; it’s all built-in in promtool, this little command line that’s distributed alongside with the server. And there’s a little, kind of a domain-specific language, if you want, to formulate rules. You can write “This is how the time series looks like, and then I want this alert to fire in that way”, all those things. I think we have a blog post on the project website…
Yeah, I think we have it.
Yeah. That’s pretty cool.
Again, this is one of those things where it shows the maturity of the project and the ecosystem, that people don’t only care about monitoring and alerting, but they also care about actually testing their alerting rules.
So we talked about the big, noteworthy initiatives that have been delivered in the last six months, the most exciting stuff… What about the next six months? What do you have on your roadmap, things which are worth mentioning?
We have a roadmap on the website, but it’s kind of almost obsolete, because I think most of these issues or items there have been almost implemented. So I think it’s time for getting more into more visionary things, but also there’s some things very concretely happening. One thing that will be really visible is a new UI for the Prometheus server. Some people just use Grafana as their interface for Prometheus, but originally, when Prometheus was created, there was no Grafana. We actually had our own little dashboard builder. But Prometheus was really meant to – why are you laughing…? [laughs]
Hey, I’m still a Promdash fan.
Okay, so it still has fans. [unintelligible 00:35:07.15] So we want to talk about the future… The UI on the Prometheus server was always very simplistic, but I totally loved it; it was my daily tool to work with… But yeah, it hasn’t aged that well.
Yeah. So we’re replacing our handwritten JavaScript from 2013 or so with a nice, new React user interface. It’s now in 2.14, and you can go give it a spin. There’s a button you click to try the new UI.
Essentially, at the moment, this is just reconstructing all the features we have… But this will allow modern stuff, like proper autocompletion, and tooltips, and all those things; that will be very easy to include. You get a glimpse of it if you do the Grafana Explore view. It's a lot of stuff… But that's all very much wired into Grafana, and in the Prometheus UI we try to get this in a more generic form. And we also want to be able to do this Language Server Protocol (LSP), which is this generic way where IDEs can inquire from a server what to do with autocompletion, and stuff. So this could work for the Prometheus UI itself, but there's actually an intern at Red Hat, working with Fred [unintelligible 00:36:36.12] he's working on this, just implementing this LSP for PromQL. Then you can point your VS Code to that, and suddenly you get autocompletion in your editor, writing rules. That's so cool.
Yes, I’m really excited about that.
I’m also really excited to finally get those beautiful help strings and all the metrics output, and getting that into the basic user interface… Because this would help all the users of Prometheus to be able to see what does this metric name actually mean, and get the extended help information, and the explicit types that we have. We have this data in Prometheus, and it’s been many years and not exposed to the user.
As a matter of fact, I saw a demo last week showing exactly that.
Oh, nice. I always tell the story of Prometheus as it has started with the instrumentation first, and we always put in there that you have to describe your metrics with a help string and you have to tell that it’s a counter, or gauge, and then Prometheus was just not doing anything with that information… And that was lasting for way too long. But now something is happening.
That actually resonates really well, because you’re right, a lot of effort goes into describing what the metrics are. And then when you consume them, you just consume them as metrics, as values, right? And then a lot of that information - actually, all of that information - gets lost. So I can see a really good opportunity for maybe Grafana (or another UI) to make use of that information, to maybe start explaining what the different metrics are, as the original authors intended them.
There’s a question which I have - I’m wondering what are the limits for describing metrics. When I say “limits”, I mean is it like a single string, and is there a limit of how big that string can be? Can you add any formatting to that string? Because I’m almost thinking markdown. It’s a bit crazy in hell, but why not? It feels like the next step to this.
That might evolve when we actually use it, but at the moment it's a plain text string with no length restrictions. Wasn't that help string – we had an incident [unintelligible 00:38:52.00] where somebody accidentally put a whole HTML source code into a label, and Prometheus could ingest that just fine. [laughs] It looked really weird when you looked at the metric. But we are usually not imposing any fixed limits on anything.
Yeah. Or any formatting. It’s just like plain text.
But formatting - that might evolve; we will see.
It’s actually interesting… We’ve had the metadata API through which you can query help and type information for I think about a year and a half now, but just haven’t actually made use of it just yet. So I think, as Björn started out with the React UI, it’s a really cool thing that we can now, with a modern approach, do all of these things.
Julius did the initial work for this React-based UI, and just within a couple of weeks of having this entry, we’ve had a tremendous amount of contributions to this… Because suddenly, we’ve opened up a pool of engineers that can help us out with these things… Which was kind of the initial point anyways, because nobody was really contributing to the old UI, and suddenly we are just a couple weeks into it and it just validated the point that making this more accessible opens a large pool of contributions.
[00:40:19.05] Which I think is a very interesting point in open source projects - should you go for something with a known, big base of people who [unintelligible 00:40:26.03] got really refurbished a while ago in Elm… Which has a way smaller community, but a very committed community, and we had a bunch of committed contributors. I think they are now obviously not happy that this is happening in React… But I think it’s a really tough decision. You could say it’s the same when we started Prometheus and decided to use Go and not Java, for example. Go is a way technically better language for that, but back then we were early adopters. We also found a lot of bugs in Go, or feature requests that we really needed, but it was a big bet to go into this new language that doesn’t have an established community yet.
I think it’s not a clear cut what way to go, but it speaks volumes that we get new contributors that are super-enthusiastic about code in React. I wouldn’t be enthusiastic, but luckily there are others who like it.
Do you know how that decision was made, like what to choose? Was it like the size of the community, or did someone just say “Oh, this looks cool”, and they started using React?
I think it was largely driven by Julius. Julius wanted to learn React actually, and kind of tried it out here. Obviously, asked everyone in one of our dev summits if people think this is a good idea to actually pursue fully, and we agreed on it.
I think we never had an explicit decision. Often, things just happen, which can be good. Sometimes I think decisions should be explicit, but again, this is not easy to make a call if this should be super top-down, we all sit together in a committee and vote about it, or this should just happen.
I think it’s best to just let it happen, because whoever is willing to do the work is the one that should drive the change. We can make committee decision after committee decision, and then nobody will do anything with it. So doing the decision-making by being willing to do the work and support it is much healthier for a project.
That sounds like such an adult approach, and such a sensible approach. It’s almost like “Of course it makes sense.”
Yeah, you’re right - whoever gets to do the work should decide; whoever is most passionate about it. They’re going to be leading the work anyway, so why don’t you just go ahead and – you know, because we trust you to make the right decision. And as it turns out, it was the right decision, right? The React community joined, and there’s all this new interest that you wouldn’t have had.
I don’t think it’s always that clear. I think a project is sometimes very complex, and some people need some guidance, should they even become active in this area… And I think we also had incidents in the process, where somebody just did something and it kind of steamrolled the others, and then they felt frustrated, or something.
I think this is an actual hard problem. I'm actually reading a paper right now that some of my Grafana colleagues who have worked in bigger open source projects recommended to me - how open source communities make decisions. There's active research going on about that, like should you have a governance structure…?
We have a governance structure now… I think it’s an interesting, but also very hard, or it’s a hard problem, that’s why it’s an interesting problem. And important.
[00:44:02.00] That’s a paper which I would like to read, for sure… And I know that many others will as well, so I will look forward to that link from Björn. Okay, so one of the things which I’m aware of as a Prometheus user is memory use. Is there anything that is being done about that in the next six months, any improvements around improving Prometheus’ use of memory?
Yes. As a matter of fact, we had one of our developer summits just after PromCon, and this was one of the topics that we talked about. The way that the Prometheus time series database works is that there is an active [unintelligible 00:44:38.21] where the inserts are happening, the live inserts of the data that’s being scraped. That builds a block of the most recent two hours of data, and then that’s flushed to disk to an immutable block, and then we use memory mapping, so the kernel takes care of that memory management there.
But that most recent two hours’ worth of data is kept in memory until we do this procedure. So that can potentially make up a large amount of memory that you’re using. So we’re gonna be looking into ways of offloading this from RAM, basically, to other mechanisms. We haven’t fully decided on what that is, but we are actively looking into improvements that we can make.
There are various other mechanisms that we wanna look into. Even within the immutable blocks of data we want to explore, as Björn likes to say, “new old chunk encodings.” Because when we wrote the new time series engine, we kind of made the decision that we’ll for now only look at one type of chunk encoding, and we’ve realized that looking back in hindsight, there’s probably some potential for making better decisions, potentially at runtime, or at compaction time for example, to optimize some of this data in a better way.
Yeah, [unintelligible 00:46:15.15] in Prometheus 1 was essentially hacked together, and when it was working well enough, we would do all the other stuff. Then the Prometheus 2 storage engine was really very carefully designed, but also kind of reworked into just using essentially the classical Gorilla encoding that [unintelligible 00:46:33.16] had a few crazy hacks that we never really evaluated, but now we can compare… [unintelligible 00:46:43.28] is one of those remote storage solutions, but they also use the exact same storage format, and they support everything, all the versions back into the past, and they can directly compare what things look like. And apparently, if you just look at the encoding, the Prometheus 1 encoding is 30% better, or something. So we see we can actually (what's the word…?) recover some of the archaeological evidence from that, and perhaps improve this.
We can forward-port some of the optimizations… [laughter] Yeah, the Prometheus 2 format was very much designed to reduce the CPU needs for ingestion, and that completely succeeded, to the point where we actually have spare CPU. When you look at the CPU to memory ratios of a common server, the Prometheus server will use all of the memory, but only a quarter of the available CPU in the typical ratios you get on servers. So we could spend some more CPU to improve the compression and get us back some of that memory… Because every time we improve our compression, it not only improves the disk storage space, it improves the memory storage. Because we keep the same data in memory as we do on disk.
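To give a flavour of what "chunk encoding" means here: Gorilla-style compression XORs the IEEE-754 bit patterns of consecutive samples, so slowly-changing series produce mostly zero bits that can be stored very compactly. This toy Go sketch shows just that XOR step - it is an illustration of the idea, not the actual Prometheus encoder:

```go
package main

import (
	"fmt"
	"math"
	"math/bits"
)

func main() {
	// A slowly changing series, like most monitoring data.
	samples := []float64{42.0, 42.0, 42.0, 42.5, 42.5, 43.0}

	// XOR the IEEE-754 bit patterns of consecutive samples; identical or
	// near-identical values XOR to (mostly) zero bits, which Gorilla-style
	// encodings then store in just a few bits per sample.
	prev := math.Float64bits(samples[0])
	for _, v := range samples[1:] {
		cur := math.Float64bits(v)
		delta := prev ^ cur
		fmt.Printf("%064b  leading zeros: %2d, trailing zeros: %2d\n",
			delta, bits.LeadingZeros64(delta), bits.TrailingZeros64(delta))
		prev = cur
	}
}
```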
[00:48:05.11] I’m sure that many users will be excited about this. I’m very excited to hear that, and I’m looking forward to what will come out of this. As we are approaching the end of our interview, any other things worth mentioning, or one thing which is really worth mentioning?
No story about the future would be complete without my favorite topic in Prometheus, and that's histograms. I'm probably known as Mr. Histogram, or something.
Histograms in Prometheus is an extremely powerful approach, but it’s kind of half-baked. We introduced them in 2015. A histogram is like a bucketed counter, broadly spoken.
Yeah, from an SRE perspective, histograms are extremely important in getting more detail out of the latency in our applications. Several other monitoring platforms talk very loudly about histograms being important, because we need detailed data on requests coming into the system, and an average is not good enough. Summaries, pre-computed quantiles are also not good enough, because they usually don’t give us the granularity, and also they can’t be compared across instances. So if I’ve got a dozen pods, I need to have super-detailed histogram data in order to do a proper analysis of my request… Because it’s okay to have 10 milliseconds of latency on a request, but it’s not okay when 5% of those are so slow, they’re useless to the user. The typical is 10 milliseconds, but if 5% of them are 10 seconds - I can’t have that from my service SLA perspective. So I need more and more and more histograms, but right now they’re just super-expensive.
And that’s because Prometheus, in the same – like, when we talked about the metadata, where we said Prometheus throws everything away and everything is just like floating-point numbers with timestamps essentially, that’s the same for histograms, where the other part of the information is that this is all buckets belonging to the same histogram; now, every bucketed counter becomes its own time series in the Prometheus server, so every bucket you add comes with the full cost of a new time series with no potential of whatever… Putting this together in some way, or compressing this in some way. And there’s decades of research how to represent distributions in an efficient way… And now that I have more time to work on Prometheus, and my boss also likes this topic a lot - perfect opportunity to really go into this.
I had a little talk at PromCon, where I was giving my current state of research, and now at this conference… So many people and so many companies and organizations are interested in that. It was really exciting. The idea is to get something where we could have way more buckets, or we even have some kind of digest approach to that, that plays well with the Prometheus data model. So it’s a true challenge, and it will be fairly invasive, because it also changes the Prometheus storage engine, how the evaluation model works… Because suddenly you have something that is not just a float, it’s a representation of a distribution. But the idea is that we will have very detailed - and not very expensive - histograms in the not-too-far future, and I’m very hyped about this.
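For readers who haven't used them: a Prometheus histogram today is, as Björn says, a set of cumulative bucket counters defined at instrumentation time. A minimal Go sketch (the metric name and bucket layout are purely illustrative):

```go
package main

import (
	"math/rand"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// Each bucket boundary below becomes its own time series on the server,
// e.g. myapp_request_duration_seconds_bucket{le="0.005"}, plus the _sum and
// _count series - which is exactly the per-bucket cost being discussed.
var requestDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
	Name:    "myapp_request_duration_seconds",
	Help:    "Request latency distribution in seconds.",
	Buckets: prometheus.ExponentialBuckets(0.005, 2, 12), // 5ms .. ~10s
})

func main() {
	prometheus.MustRegister(requestDuration)
	// In a real application these observations would come from request
	// handlers, and the registry would be exposed via a /metrics endpoint.
	for i := 0; i < 100; i++ {
		start := time.Now()
		time.Sleep(time.Duration(rand.Intn(20)) * time.Millisecond) // fake work
		requestDuration.Observe(time.Since(start).Seconds())
	}
}
```

On the query side, quantiles are then computed from those buckets with something like histogram_quantile(0.99, sum by (le) (rate(myapp_request_duration_seconds_bucket[5m]))), and because buckets are plain counters they aggregate cleanly across instances - which is the property Fred is after, and exactly where the per-bucket time series cost comes from.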
[00:52:03.08] That is so cool, that is so cool. You mentioned something there which reminded me of a discussion which you had earlier, and that was around being more open and getting the community more involved in what is happening in Prometheus. You (or maybe Fred) mentioned about the monthly community calls, the virtual calls… Who would like to cover that?
Sure. Yeah, we’re trying to be more open with the wider developer community and our wider user base, and a lot of people have found that the Prometheus developer team is a little closed off and a little opaque… So we’re now doing monthly public meetings and sharing what the developer team is up to, and taking more input from the community in order to be a better open source project.
So how can users join those monthly meetings?
Yes, on our website we have an announcement area for those community meetings.
Yes. They are alternating, so that they are compatible with Asian timezones and American timezones, every other month… That hopefully allows worldwide participation.
Do we announce them on the mailing list, or on Twitter?
We do announce them regularly on Twitter, and the schedule is open. People can come and just ask their questions. We’re super-happy to answer them to the best of our abilities.
Thank you. That’s a great way of ending this, in that there’s no ending; there’s other ways that the people can join this, and not just like – because this is one-sided, people are listening to us… But that’s a way of them participating in Prometheus, getting to know more about Prometheus. When is the next monthly meeting, do you know?
I think we’ve just had one, so it’ll be next month.
Okay, so December.
The 31st of December, I’m sure. [laughter]
No, I believe it’s every first Wednesday of the month.
And then the opposite timezone is the third Wednesday of every month.
Whatever. I think it should be looked up on our website. We should provide a link in the show notes.
Right. We will. Thank you very much Ben, thank you very much Fred, and thank you very much Björn. It was a great pleasure having you, and I’m so excited about what you will do next.
Thank you.
Thanks.
Thank you.
It’s the 21st of November, 2019. It’s the last day of KubeCon North America. It’s been a sunny day, it’s been a great day so far. We had a great number of hosts and guests on this show – no, there was only one; it was just me. [laughter] We had a great number of guests on this show. Just earlier I was talking to Björn from Grafana, Fred from Red Hat, and also Ben from GitLab, and they were all on the Prometheus team, very passionate, a lot of interesting things that they’ve shared with us… Now we have Tom from Grafana, and we have Ed, also from Grafana.
And I’m also one of the Prometheus maintainers.
Oh, thank you. I mean, I have seen your PRs here and there… But yes, another Prometheus maintainer. So the reason why I was very excited to speak with you was I know that you have a very passionate view on observability, on what it means for a system to be observable, and one of the key components in this new landscape, which is Kubernetes, all these stacks, the layers are getting deeper and deeper… So understanding what is happening in this very complex landscape, you need observability tooling, which is mature, which is complete… So tell me a bit about that.
Yeah, I mean… Thank you for having us. Observability is one of these buzzwords that has been going around a lot in the past few years. I’ve been asked a lot in the past few days what is observability, how does Grafana fit into the observability landscape… I think observability was previously kind of defined around these three pillars - metrics, logs and traces. And this past year I think it was trendy to bash that as an analogy. Some of it was rightly so, some of it maybe less so. I still sometimes think about it like that, but I try to avoid thinking about the particular data type, the particular way you’re storing it, the way you’ve collecting that data, and I try and think more about how people are using that data.
[01:00:00.00] For me, observability is about any kind of tooling infrastructure, UIs, anything you build that helps you understand the behavior of your applications and its infrastructure.
I think it’s something really important to emphasize, because at the end of the day, it’s about the stories that we tell. We use data, some form of data, to tell a certain story. And whatever data is relevant for that story, use it. It doesn’t matter what you call it, as long as the focus is “What are you trying to convey? What are you trying for someone to understand, and what point are you trying to make?” It doesn’t matter what you call it, as long as you don’t forget what this is all about.
I’ll give you an example then that I think is really relevant, at least to Ed and I. We were in Munich two weeks ago for the Prometheus conference. Great event, 200 or so people, coming to just focus on Prometheus, and towards the end of the first day, Ed, your pager went off. Our hosted service was having an issue, and it turns out it took two hours to diagnose it. We were using all of our tooling to understand what went wrong. I think at the end of it – well, we still don’t actually know the root cause yet; once we figure it out, we’ll put it on the blog. But the point of the story is more that a few days later, after we’d got back from PromCon, after we all sat – well, we didn’t sit together; after we had a video call with 8 or 9 of the team members on, and we were fishing through all of our metrics, all of our logs and all of our traces to try and figure out what really happened, to try and get to that root cause - that was for me such a valuable experience, dogfooding our own products, dogfooding our own projects that we work on, and using them to try and understand what went wrong, and try and build that picture.
You know, we’ve got graphs, we’ve got log segments, we’ve got everything we can possibly gather together, to try and understand why a node failure, or an Etcd master election, and then a network partition, and everything seemed to go wrong at once, but really what was the root cause. And that was exciting.
We also had David and members of the Grafana team join in to see a live example of how people were using the tools they’re building, and how they could improve the UX of those tools. I think he ended up recording it and showing it to more people on the team, to go like “Look, he wanted to click this, but it wasn’t quite in the right place, or it wasn’t quite the right thing.”
That’s a great story. One thing which I really like about this story is how relevant different elements of observability - for a lack of a better word - how important certain elements are. When you’re trying to dig for root cause analysis, logs are very, very important. So metrics are getting a lot of attention, traces are getting a lot of attention, but I’m not seeing the same thing for logs. So other than Loki, which is an open source project, is there anything else out there that I’m not aware of?
For…
For logs specifically, that integrate with Prometheus, that integrate with Zipkin or Jaeger or whatever else you may have, that will give you this root cause analysis tooling.
Yeah. I think an interesting one here is when I joined Grafana Labs 18 months ago, they were already big users of Zipkin, but not in a traditional use case. They weren’t using it to visualize requests spanning multiple microservices, they were actually using Zipkin mostly for request-centric logging. Because Zipkin has these kinds of basic logging features. I said Zipkin there, didn’t I? I mean Jaeger, didn’t I? Yeah. I meant Jaeger, sorry. They’re big users of Jaeger.
Okay, yeah.
It’s fine, we can edit that out. But yeah, so… They were big users, but not for distributed tracing. We came along and we wanted to use it for the visualization of the request flows for all the microservices, but… But yeah, I’d never really seen Jaeger used primarily for something other than visualizing request flows. So I guess you could think about the tracing tools as like a more request-oriented way of logging.
[01:04:03.23] I mean, obviously, there are a lot of logging vendors out there, and a lot of them were represented at KubeCon. I think the most popular one for Kubernetes has always been Elastic. The Elastic Stack, ELK, that’s what most people use, and it’s a great tool. One of the things that always impressed me about Elastic is you can pretty much do anything with it. I’ve seen people build their whole BI and analytics stack on Elastic; I’ve seen people use it for developer-centric logging, people use it for audit logging, people use it for security analysis… People are using it for actually searching web pages as well, which kind of is fun, because that’s what it was originally used for.
With Loki – I know you said "apart from Loki", but Loki is not like Elastic in that sense. We are just focused on the developer-centric logging flow. We just wanna use basically what you would see in kubectl logs; we wanna give it a better user interface, so you can point and click and see it in Grafana. And honestly – I mean, we have touched on dogfooding already, and I think it's one of our superpowers at Grafana Labs. We build the products we wanna use as developers. And really, the reason I started the Loki project was because you can't kubectl logs a pod that's gone away, and one of the common failure modes was pods would die/disappear/get rescheduled etc, and I wanted to know what was going on in that pod before that happened. That's why we built Loki, and that's why we wanna kubectl logs, but with a bit more attention.
Here’s an interesting one… Kubectl - KubeCuttle, KubeCTL, what do we say?
KubeControl?
There’s so many ways now. Kubectl, from my perspective.
Kubectl, not KubeCuttle?
Wasn’t that an unofficial logo, a cuttlefish?
Yes, there was. There was an unofficial logo in a couple of places, yet the cuttlefish gets mentioned…
I like the cuttlefish one.
I mean, yeah, ctl… Sysctl? Maybe that’s what–
I don’t say sysctl.
Sysctl. But did you use to say sysctl before KubeCuttle?
No, I mean… Maybe not. And it’s definitely ioctl and not IOctl, so…
Okay… [laughter] Earlier, Ben was mentioning about all the different building blocks that exist in the observability landscape in the CNCF. And I can see Loki as one of those building blocks.
The one thing which I really like about Grafana is that it doesn’t limit you what data sources you can use. So if you want to use ELK, you can do that. If you wanna use Stackdriver, you can do that; which is logging from a vendor. Perfectly fine, no problems. And if you wanna use Prometheus - a very popular project, a graduated project, the second graduated project in the CNCF, you can use that as well. And it’s a combination of all these tools, and many others. InfluxDB…
There are 60 different databases in Grafana.
There you go. I don’t even know them all.
I couldn’t name them all…
You can combine them in innovative ways, and you can almost do the right thing, the right thing being relative and being relevant for you. So what is the right thing for you? And if you wanna use Loki, so be it; if you wanna use Splunk, so be it.
The thing I think is even more cool is it’s not just about having these data sources and having all these data into dashboards and the Explore mode, but what we’re working on is, you know, with Loki we’ve built this experience where because we have this consistent metadata between the metrics and the logs, we allow you to switch between them automatically. So given any Prometheus graph, any Prometheus query, we can automatically show you relevant logs for it.
Now, that was a very Loki-specific experience. We've been working really hard to try and bring that to other data sources, so we're now hopefully – as long as you curate your labels correctly, you're able to achieve that kind of experience between Graphite and Elastic.
[01:07:56.18] This is something I didn’t really understand until I joined Grafana Labs - the team is so committed to this big tent philosophy; enabling these kinds of workflows and enabling other systems… And I really think the Grafana project is the only thing out there that really allows you to combine and mix and match, and really is so more additive to the ecosystem than other projects that are like “No, you can only use this data source. You can only talk to this database.”
A bridge to all sorts of things.
[unintelligible 01:08:27.01]
Right. I like that analogy very much. So we have Ed here… I hear that he’s quite involved with Loki, and when you said “we”, Tom, I’m sure you meant the royal we, because it’s mostly Ed, right? Let’s be honest here… [laughter] Loki is mostly Ed. So tell us, Ed, about Loki - why do you like it, what do you like about it, where is it going…?
Yeah. I can still remember probably about ten months ago when I was interviewing with Tom, and we were talking about Loki… It was new to me at the time, and the first question I asked was "Isn't that already a solved problem? Don't we have solutions for logging already?" And then as he explained what I would almost call a simplification of how Loki's store works compared to other systems, I'm like, "Oh, that immediately scratches an itch that I've had." I've been a developer my whole life, and the two things that I do most with logs are: I deploy software and [unintelligible 01:09:21.17] and I look for errors. And then I'm running the software and it's broken, and I've gotta go find where it's broken.
So what Loki does really well is we only index the metadata, the label data that is part of your logs, and not the full text of the logs. So from an operating and overhead perspective it's much leaner, I guess. And as long as you're looking for data and you know the time span, and you know the relevant metadata - the server it was on, the application - you're there; you're looking at your logs. And the tailing aspect is included as well with Grafana. So I'm like "Wow, that's what I wanted."
The big advantage from an operating perspective with Loki now is that the index scales according to the size of your metadata and not your log content. So we're almost a couple orders of magnitude smaller on our index than we are on our stored log data… And then we can take advantage of object stores and compression to store data cheaply. So it's a really nice optimization on log content when you're a developer/operator and you really wanna just "I wanna get to my logs right now. I wanna look at this application's logs from last week", or regularly… Like, "Let's go look at what are the journal logs for this node; what is going on here? Can we add a regex filter on there for TCP: out of memory?" That's a lot of those.
Recently, we’ve been adding support for metric-style queries against your logs. To me, this was like the grep -v -v -v, and then piping into word count. I wanna know how often is this happening. But it gets better, because I can see now in time how often it happens, and it’s like TCP: out of memory - that’s probably wrong, right? That’s probably a problem.
It’s been really exciting, and I feel like that’s resonating with a lot of people we talk to here as well, that are like “This is what I want for my logs.” There’s way more you can do with your logs, absolutely, and some of these other projects are much better suited for the different kinds of queries you might do, where you need a full index. But in a lot of cases, the Loki model is really perfect for that.
I really like that, how you take a really simple, you start as simple as you possibly can, and you start adding more and more functionality, again, as simply as you can. When do you stop? When do you know when it’s enough?
That’s a great question.
[01:11:48.21] Yeah. I think in the ‘90s and 2000’s people built technologies with general building blocks. And I look at Elastic or Lucene probably as a great building block. And I look at a lot of the projects that came out of that as being generally useful in a lot of places… But I don’t think big data ever quite hit its promise. One of the things I’ve always tried to do with everything I’ve done is be very, very focused on a particular story, a particular end user, a particular use case.
With Loki, that use case was the [unintelligible 01:12:28.19] I'm still on call with Grafana Labs. I don't know how Ed feels about that, but… [laughs] I still occasionally get paged at 3 AM, and I really wanted tooling that would help me very quickly, in a sleep-deprived state, get to the problem as quickly as possible. And that's where the focus has always been with Loki.
So you as “Where do we stop?”, well I don’t think we try and make Loki do tracing, we don’t try and make Loki do BI, we don’t try and make Loki do use cases that are beyond that sleep-deprived, 3 AM instant response drill. I think we stay with these tightly focused stories, and that’s how we build great projects. I learned that Prometheus (it still does) is incredibly focused, and incredibly resistant to [unintelligible 01:13:27.20] and scope creep.
So I learned a lot through the Prometheus project, and I’m really keen to apply that to this project and maybe future projects. I’ll caveat it with one thing… What we did with Loki and the way we built Loki so quickly is we actually took all of the distributed systems, algorithms and data structures from another one of my projects, from Cortex. So Loki is really just like a thin – well, maybe not so thin anymore, but it started off as a thin veneer wrapped around the same distributed hash tables, the same inverted indexes and chunk stores that we used in Cortex… And that’s how we got the first project out so quickly.
So I’m all for code reuse, I’m all for reusing data structures and sharing, and this kind of stuff, but I just think the end solution that you build it into should be really, really focused.
Cortex is really cool, and I would like us to go into that soon… But before that, I would like to add an extra insight for those that maybe don’t know you very well; you’re the VP of product for Grafana Labs… So why are you being paged? Because you like it? Because you want to be close to the tooling? Because you want to see what people will be getting? I think that’s possibly the most committed VP of product that I’ve known, and that’s the right way of approaching it, so that you have a first-hand experience yourself of all those products.
We talk at Grafana Labs about authenticity. We try and not spin the stories we’re telling. We try and just tell real stories, authentic stories, and we try and talk about – I remember having a conversation with the CEO, with Raj, about what does it mean to build these empowered, distributed teams of really awesome software engineers? And I think one of the ways we encapsulate it is like - you see it a lot on people’s Twitter bios, you see “Opinions here are my own.” I never want any of my employees to have to caveat their opinions. I trust them all, I want them to feel empowered, to speak on behalf of the projects and the company that they represent, and I want them to speak authentically. A part of that - if you hear me standing up, talking and telling a story about why I built Cortex, why we started Loki, why I use Prometheus, why I use Grafana; these are real stories, from my actual experience. And I do miss not being able to write as much code as I used to. On the fly over to San Diego from London I actually did a PR for Prometheus… Because I’m a software engineer at heart.
[01:16:14.22] I do miss it sometimes, but also I see the work that Ed and the rest of the team were able to do, and I just think as long as I can build an environment for people to be that successful, then I’m happy.
I think that’s a great philosophy to have, and it’s really powerful. We can see how important it is to approach things like that, to really believe in that, and to operate under that mindset.
Yeah, and I try to.
So Cortex - very interesting; another interesting Grafana Labs product… Or project? How would you call it?
Well, interestingly, Cortex isn’t a Grafana Labs project. I started the Cortex project over three years ago, before I worked for Grafana Labs. About a year ago we put it into the CNCF… So it’s actually a CNCF sandbox project, used by a lot of companies. Every time I come to KubeCon I meet new companies who are like “Oh hey, we use Cortex.” I’m like “Wow, I had no idea.” We really just started it for our own needs to begin with. Grafana Labs does use Cortex to power our hosted Prometheus product in Grafana Cloud, so that’s where our vested interest is. We are doing this because it’s the basis of one of our big products. But also, one of the things – I like Cortex; in a previous life I worked on Apache Cassandra, so you’ll see heavy influence in Cortex, in the algorithms and in the data structures, from Cassandra. We do a very similar virtual node scheme, we have a very similar distribution, and consistency, and replication, and these kinds of things to Cassandra.
I liked Cortex mainly because I was learning this new language, it was with Go, and I thought “This should be a great language to do lots of these concurrent, highly distributed systems in.” So I thought, well, what are the algorithms that I hope will be really easy to implement in Go, that would be challenging to implement in other languages? So that was one of my motivations for Cortex.
Also, at the time I was building a different product. It was still in the observability space; I was working on something called Scope. I spent a long time building this, and one of the tools I used whilst building Scope was Prometheus. And I very quickly realized that Prometheus was where it was at, and it was incredibly useful. So yeah, that’s kind of how I got into the Prometheus space. Then I thought “Well, what the world really needs is a horizontally-scalable, clustered version of Prometheus”, mostly because I thought it’d just be cool to build.
So we started it, we built it, and we kind of learned what the actual use cases it applied to were. We learned as we went. And now I’d say – I originally thought long-term storage would be the biggest value of something like Cortex, but now I think really it’s the… You know, we talked about how the Prometheus community and the Prometheus team - we like to keep Prometheus well-defined and tight and small and easy to operate, and this excludes a lot of use cases. This particularly excludes a lot of use cases that involve monitoring over a global fleet of servers. So really, I think the Cortex project’s main value proposition is about monitoring lots of servers deployed around in a global fleet. Maybe you’ve got tens of clusters on multiple different continents, and you wanna bring all of those metrics into a single place, so you can do these queries.
[01:19:48.11] Then we joined Grafana Labs, and they had much larger customers than I’d ever worked with before, and we started to experience query performance issues with Cortex. We hadn’t really at the time had any very large users on it, and as we started to onboard very large users, they started to complain about the query performance.
So I guess the past 18 months of the Cortex project has been almost 100% focused on making it the fastest possible Prometheus query evaluator out there. And that was the talk I gave at KubeCon a couple days ago; it was about how we parallelize and cache and emit parallel partial sums for us to reaggregate… And we do all of these different techniques to really accelerate our PromQL expressions.
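As a rough illustration of the splitting step described here, the following Go sketch breaks a long query range into interval-aligned sub-ranges that could be evaluated in parallel and cached independently. It is a simplified stand-in, not the actual Cortex or Thanos query-frontend code:

package main

import (
	"fmt"
	"time"
)

// splitByInterval breaks a [start, end) query range into interval-aligned
// sub-ranges. Each sub-range can be evaluated in parallel and its partial
// result cached and reused by later queries that overlap it.
func splitByInterval(start, end time.Time, interval time.Duration) [][2]time.Time {
	var out [][2]time.Time
	for cur := start; cur.Before(end); {
		// next boundary on the interval grid (e.g. midnight for 24h splits)
		next := cur.Truncate(interval).Add(interval)
		if next.After(end) {
			next = end
		}
		out = append(out, [2]time.Time{cur, next})
		cur = next
	}
	return out
}

func main() {
	start := time.Date(2019, 11, 20, 7, 30, 0, 0, time.UTC)
	end := time.Date(2019, 11, 22, 12, 0, 0, 0, time.UTC)
	for _, r := range splitByInterval(start, end, 24*time.Hour) {
		fmt.Println(r[0].Format(time.RFC3339), "->", r[1].Format(time.RFC3339))
	}
}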
Then the really interesting thing happened a few months ago… Because Thanos – we can’t not mention Thanos. Thanos started off a year after Cortex, started by Bartek, who also lives in London. He’s a good friend of mine. And it started to solve exactly the same problems that Cortex was solving, but effectively did it in the completely opposite way. Almost every step along the way they chose the opposite. Thanos has become a lot more popular than Cortex for sure, and they did a really good job of making it a really easy to adopt system, great documentation, and they really invested in the community. So I learned a lot… You know, Thanos is more popular than Cortex, but I think one of the things we’ve been able to do recently is take a lot of stuff we’d built and deployed in Cortex to accelerate query performance, and apply it to Thanos. And that’s kind of exciting, because now we can bring these really cool techniques to a much larger community.
I know this was asked before, but the one thing which I kept thinking during your talk is when will you announce that Thanos and Cortex will merge and will become one? And I think you made a great job about it, like “They have. They will merge.” I know that is not happening, or at least not right now; not that we know of. But the inspiration was from Flux and Argo, how two very popular projects in the CI/CD space have merged. I think that’s a great combination of effort, getting the best of both worlds.
I’m sure many are wondering “Will that ever happen?” It would be cool, but I’m sure it also has its own challenges for that to be the case, for Thanos and Cortex to merge… So we’ll watch this space, for sure.
I don’t wanna see merging as like an end goal. I think the end goal should be collaboration. One of the things I like about the Prometheus community is they have been so open to adding maintainers because of their contributions effectively to other projects. So the main reason I’m a Prometheus maintainer is because I started Cortex. And similarly, Bartek has been added to the Prometheus maintainer team recently… So there’s a huge overlap between the Thanos maintainers, the Prometheus maintainers and the Cortex maintainers. And really, I don’t think the end goal should be convergence of these two projects. I think it should be an increased collaboration between them, and that’s what we’re working towards. I really like working with the Thanos guys, I really like working with the Prometheus guys, and if there are any ways in which we can share and collaborate more, share cool examples, try different things in different projects - that sounds awesome to me.
The deployment models for Thanos and Cortex are completely different. Opposite ends of the spectrum. So maybe they’ll never merge, because the deployments are so different. Maybe they’ll stay separate. But I think the technologies and the libraries they share – I mean, both Thanos and Cortex use the same PromQL query engine that Prometheus uses. I mean, it is the Prometheus query engine. Both Cortex and Thanos use the same compression format for their time series data. We share way more stuff in common than our differences, really.
And I look at some of the merges of communities over the past year, and I think they’ve been announced before the communities have really had a chance to gel and demonstrate the benefits of that merger… And so I definitely kind of– I wanna demonstrate the benefits of working together first, and if it turns out… You know, we are already working together and we are having some great success, and if that continues, and if we find even more ways to work together, then maybe a merger makes sense. But I’m more interested in the shared code, the collaboration, and the shared solutions.
[01:24:07.14] That’s a great take, I really like that. It makes a lot of sense… As if you had thought about this long and hard, I would say. You strike me as the person who always has a couple of projects, side projects in his backpocket. Anything that you would like to share with us? Anything interesting that you’re working on, hacking on? And maybe Ed…
What do you reckon… Tanka? Tanka is pretty cool, we should mention Tanka. So this is not really my project… There’s a very young chap called Tom Brach in Germany, who approached us actually at KubeCon. He was 17 at the time… He came up to our booth, spoke to Gotham and I and said “I really like what you’re doing with Jsonnet. I really like the whole Mixins thing, I really like Cortex, I really like Loki. Do you have a summer internship position?” And I’m like, “A 17-year-old kid is talking to me about Jsonnet.” Jsonnet is one of the nichest aspects of this community I’m aware of. So [unintelligible 01:25:03.03] and he did end up doing a summer internship. About the same time Heptio was sold to VMWare, and VMWare discontinued the Ksonnet project; we were big users. I really liked what they were doing with Ksonnet, I really like how it enabled this kind of reusable and composable configuration as code… And when I joined Grafana Labs, we rolled out Ksonnet everywhere.
So to hear it was discontinued was a bit of a problem for us. We continued to use it, we continued to invest in it, and when Tom Brach came along, we actually reimplemented it in this project called Tanka, with a whole bunch of other really cool improvements that he’s done. It’s now much faster, it now uses – it just forks out to Kubectl, so we don’t have a lot of compatibility challenges. It’s got a much more sophisticated diffing mechanism… And this 17-year-old kid has just massively improved the productivity of the engineers in Grafana Labs by really improving the toolchain for our Kubernetes [unintelligible 01:25:59.09] management.
So if anyone here is using Jsonnet, using Ksonnet, and wondering what the future holds, I’d encourage you to check out Tanka. It’s a really, really cool project.
This is something which keeps coming over and over again - the community, the openness, the barrier of entry which is so low, and how everybody is there to help you. Whatever age you have, whatever inclination you have, whatever you wanna do, you can do, and everybody is there to guide you, help you, and accept whichever contribution you wanna bring. This is something so valuable, which over the last three days I keep seeing over and over again; I’m gonna say it’s one of the core values of this new community and this new ecosystem, which has grown so much. 12,000 people. Did you manage to speak to all of them?
Probably about a twelfth of them. It definitely feels that way… I think I would definitely agree, the superpower for the Kubernetes and for the cloud-native community as a whole is this openness, is this acceptance. I really like what the CNCF has done by having multiple competing projects in their incubation. Thanos and Cortex are both in there, and I really look forward to other projects coming in and doing the same thing. I really like how the CNCF are not kingmakers in this respect. I think that openness is great.
And then the whole – you know, no matter what you think about Kubernetes and its complexity, and its adoption, I think the real benefit of Kubernetes is the openness. And if you really want to, and have the time and the effort to make a contribution and make a change - definitely; it will be accepted, and you’ll be embraced with open arms, and eventually you will be put in charge of some huge component, and you’re like “Whaat?!” So yeah, I’m a big fan.
And especially if you’re VP of product, right? PR to Prometheus… [laughs]
[01:27:58.19] Yeah, I mean, I think I’ve had some PRs into Kubernetes. I’m not sure. But I don’t get to do as much coding as I used to. I do miss it. I still get to play, I still do a fair amount of config management work, because I still help with the deployments, and I’m still building dashboards, and occasionally doing PRs to Prometheus, and… I’m still doing a fair amount of code review. Not as much as I used to, but I spend a lot of my time doing all sorts of things now. Doing marketing work, that’s an interesting one.
So as we’re approaching the end of this interview, and also we’re approaching the end of KubeCon, which is an amazing, amazing event… Anything specific that you were impressed by, or you wouldn’t expect to see, and you were very happy to see? Any key takeaways?
My story, as we were talking a little bit before - this is my first KubeCon, and I’m new to the open source community; I’ve worked a lot of enterprise jobs prior to this, and… It is really exciting, I have to say. The people that come up to the booth and talk about like “Hey, we used Grafana. We love it”, being part of that, being part of a project that – I met someone that was a contributor to Loki, and they were really excited. It’s a really cool feeling, to have people see these tools and actually use them, come and talk to you about it… I really enjoy the amount of people interested, the talks they were giving, that are deep-dives into these projects, that people are interested in seeing… It’s such a different experience than the software I’ve done in the past.
I think it’s really neat as a developer even if you’re just using these tools. Because of the tools and their proliferation and their openness, it’s a skillset you can take anywhere with you. These are real skills, and I think companies are starting to see the real value in having toolchains that people know by name. You hear Prometheus more and more and more, and that’s really valuable. And to have that be open source technology is really amazing.
Thank you Ed, thank you Tom. It’s been a pleasure having you. I look forward to the next one. Cheers!
Thank you for having us!
I would like to say that we’ve kept the best for last, but that’s something for you to appreciate. We are definitely ending the KubeCon on a high. Most people are already breaking off, and some have already flown back home. We’re still here, so in this way we are officially ending KubeCon with this last interview. I have around me three gentlemen. Left to right, we have Jared, we have Marques, and we have Dan, all from Upbound. You may recognize them by Crossplane - that’s a very strong name - and also Rook. So they are the ones (some of them) that are behind these great projects.
I’ll let them maybe speak a little bit about their involvement, and also tell us what they’re passionate about, what their takeaways are from the conference… Who would like to start?
[01:32:07.04] I’d be happy to start. This is Jared, and I have been a founder and a maintainer on both the Rook project and the Crossplane project. So I’ve been living in the open source, cloud-native ecosystem for multiple years now… And one of the biggest things for me that I see consistently is that each KubeCon gets that much more crazy, that much more lively, and the amount of new people that are coming into the ecosystem is always a fairly surprising amount. I think anytime that you go to a talk and people ask “Is this your first KubeCon?”, you see a large majority of the room raising their hands, and to me that says that this ecosystem is on to something exciting, and it’s attracting more people and it’s gaining more adoption, and that’s something that consistently excites me a lot. I see it all the time, at every KubeCon.
Yeah, Dan was calling those the second-graders. There were a lot of second-graders at this KubeCon, and some fourth-graders. I really enjoyed that, it was a great analogy.
Yeah, the analogy where he was showing how his son was playing Minecraft and hiding the screen, because that was the way to survive the night… And yes, everyone at the convention, if it was their first year, they were considered second-graders, and everyone else was only fourth-graders, because the project itself is only five years old, so we’re all new in learning this together. Yeah, it’s a great analogy.
Yeah, definitely. I think personally that was a really cool analogy for me, because I actually graduated from college recently, and I’m fairly young in the community… But a lot of people have been extremely welcoming and kind to me. Welcoming into not just the Crossplane and Rook ecosystems, but also in the greater Kubernetes ecosystem. Welcoming onto the actual release team for 1.17, and being part of that was super-cool, and there’s just a lot of people who have been around from the inception of Kubernetes, who are saying “You’re a young person. Come in here, and you’re welcome. We value your thoughts and opinions and your efforts.” So it’s definitely a cool place to be at KubeCon, and being surrounded by really talented people like that.
And actually, I think that’s something that speaks a lot to not only the community and the ecosystem here amongst people that are part of this cloud-native movement, but I think that’s just open source in general; I’ve seen a massive change over the past five years, ten years, and even earlier than that, where you’ve got these communities that are able to form based on these more socialized sites, like GitHub and GitLab, where you’re able to get these communities built and be able to be very collaborative, in a very open environment, that not only is getting these projects more out there in the hands of other people, but it’s attracting people that bring a lot of enthusiasm, that feel welcome because of the way that the community is treating people. But it’s getting more people involved in open source than have ever been involved before. It’s not something just for grey beards anymore; open source is for everybody now, and it’s pretty awesome.
This is something that was mentioned a couple of times, and even I mentioned it a couple of times in these interviews… I’m still surprised by how open and welcoming everybody is. Even though it’s been three packed days, even today everybody was still happy, was still smiling, and really happy to answer any questions… And even though they were really tired, you could see some people – you know, they had three very hard days, and who knows how many months before that… Brian was just saying, a lot of the preparations started six months ago. So some have been at this for a really long time. And yet - open, welcoming, warm. It was great. My first KubeCon - I loved it.
[01:36:04.07] What was your first KubeCon?
This.
Oh, this was your first KubeCon.
Yeah, this was my first KubeCon.
Oh, so you’re experiencing that welcoming attitude first-hand.
Yes, yes.
I love that.
It was amazing. Natasha and Priyanka, they were talking about the process, and especially Natasha, since she has been at CNCF a couple of years before GitLab… She was saying about the processes which they have in place, all the documentation, how that is such an important factor in this welcoming community.
I think that’s really been recognized as a key thing in the success of Kubernetes and the open source ecosystem in general. I think that’s one of the drivers for it. It’s not only the right thing to do to welcome people in and make everyone feel part of the community, it’s also in the best interest of the project, and I’m sure Jared will probably talk about this shortly, but I think that’s been reflected in some of the work we’re doing as well. We’re reliant on a strong community to be successful in what we’re trying to go after, so… Yeah, it’s cool to see that it’s not only the right thing to do to treat people well, but it’s also beneficial for achieving whatever goal you’re searching for.
And speaking about the goals - I think that’s another thing that makes the open source projects work. It has people coming to the booth, being happy to talk about the project… Maybe they don’t understand it at first, but as you start talking to them, they realize and you realize that they have the same concerns and they need the same sort of outcomes that you do… And when there’s a fit between your tool and what their needs are…
The ecosystem of open source is many solutions to the same problem, and each one tackles it a different way. But it’s great when you start explaining what your product does and they latch onto that, and they lead the conversation, because they know how to make what you’ve offered so far more useful to fit their circumstances. It’s good to have those conversations. I think it keeps that positive attitude. If everybody walked up and they’re like “What does your product do? I don’t get it.” [unintelligible 01:38:10.23]
And along with that welcoming nature there - this is a story I really like to share with people, because it highlights how things can go in the completely opposite direction and cause a very toxic environment. I will certainly not mention the project that this happened on, and it’s not in the cloud-native ecosystem at all, it’s certainly not a CNCF project, because all those communities are super-welcoming and kind… But there was an open source project I got really excited about, because it was very aligned with some of my personal interests… And being a maintainer on other open source projects, I know how important it is to have a contributors’ guide to be able to welcome new people into the community, but also have pragmatic, or practical steps of “This is how you build the project, this is how you add unit tests, this is the criteria for opening a pull request and getting it accepted.”
So I opened an issue on a particular open source project, and within five minutes or so one of the maintainers on that project replied back to me for my request to create a contributor guide so that I could start helping them out… He told me that it was the dumbest issue he’s ever seen. He used some explicit language and said that he’s tired of idiots opening issues in his repo. And I cannot imagine that they ever got another contributor to join that project ever again, because of that completely toxic behavior.
So there’s a spectrum of being welcoming, kind, supportive, and then there’s that type of behavior, which I don’t think anyone else has ever had an experience like that. It’s definitely an anomaly, an outlier, but it is the worst way to run a community, ever.
Okay, well… [unintelligible 01:39:45.25] I’m really glad – that’s like a really bad example, and… Because it’s really easy to forget, but these things do happen, even today. We don’t realize, because we’re so privileged to be in such a great community and to have so many genuinely nice people around us. We do forget that things like these do happen.
[01:40:13.06] So what I would say - everybody that had such an experience are more than welcome to join the CNCF community, because we will show them that that is not normal, or show them what normal is; we’ll be more than happy to get as many people as want on board, because this is normal and this is good. And I think that speaks to the success of this approach.
I’m not sure how many people were at the last KubeCon, but this one’s 12,000 people. I know the first one was only 4-5 years ago, with like 500 or 1,000. So how much this community has grown, and maybe this has something to do with it, I think.
And the success of one project can lead to the success of the other projects. Once you’ve modeled how to develop a great community and nurture the community with this sort of support to continue contributing, all the other projects are gonna be able to benefit from that.
I’m really glad that you mentioned that, Marcus, because I would like us to maybe start looking a little bit at Crossplane, and the one thing - which at least that’s what Crossplane is to me, and you can give me your perspectives - is how it’s the embodiment of leveling the playing field, being open, bridges everywhere, everybody is welcome to the party, no vendor lock-in… It’s just the opposite of that. “We’re open, we embrace everybody, we are open to anybody working with us, and this is what we think the future looks like.” So it’s all the bridges between all the vendors, all the [unintelligible 01:41:55.01] all the services… That’s how I see it, but how do you see it, Dan?
Yeah, that’s exactly right, and we pitch the project as the open multicloud control plane. And that’s what it is. We’re really trying to open up all of the different cloud provider managed services to anyone and everyone, and really reduce that barrier of switching between them… And it’s built in such a way that it allows people to add their own extension points to that, so there’s really no one who’s not welcomed there.
You could start a cloud provider in your home lab, in your apartment, and you could add a stack for that with Crossplane, which I’m sure we’ll get to later, and extend that to include that. What that does is it really allows people to pick the best solution for their problem. There’s a variety of scales of cloud providers, and maybe you just provide a managed database service and it has a very specific use case… And in an enterprise setting, that can be really hard to adopt, because it takes a lot of effort and time to bring on new providers and integrate with them… But if you integrate in a consistent way, then you can always choose the best solution for your problem. So it not only helps the users, but also the companies and the groups of people who are providing open source projects that fit certain (maybe) niche needs - those are now a lot easier to use, and you can pick the best thing to fit whatever use case you have.
Yeah, I think that’s a good point, Dan. When you’re trying to level the playing field or provide easy, attainable access to open source software or to proprietary software, whatever it may be, but getting access in a consistent way across a lot of different options, to a lot of different people, and needs, and scenarios - that’s really part of opening the door there for everybody.
[01:43:55.19] So I think that our efforts here are being based on this foundation that Kubernetes itself has started. Because if you take a step back and you look at the design of Kubernetes and some of its goals that it wanted to accomplish and what it enabled, Kubernetes itself has done a fantastic job of providing this abstraction away from the underlying cloud provider or hardware or whatever it may be. It abstracts away the infrastructure in the data center, and allows your applications to run in a very agnostic way. So Kubernetes kind of started pioneering this trail here where your application doesn’t have to worry about the environment it’s running in. It can basically just express itself in a simple way and then run anywhere. That’s a start, but then there’s many ways to take that further.
We’ve heard Dan mention something about stacks… I’m looking at Marques, because I know that he’s been closely involved with various stacks. Can you tell us, Marques, what stacks are and what stacks are currently available in Crossplane?
Sure. Stacks are a package of resources that Crossplane uses to extend the Kubernetes API with knowledge of cloud provider resources, or any sort of infrastructure resource. Additionally applications, but first focusing on the infrastructure resources. There are stacks currently for Google, Azure and AWS. And additional ones, Packet and Rook. All interesting topics…
So taking the example of Google, there’s a cloud MySQL instance, and one can imagine in Kubernetes creating an instance of that resource, specifying in the spec of that resource all of the API parameters that you need to configure that resource in the cloud, and then within Kubernetes, using Kubernetes lifecycle management, you’ve created this resource that will be reconciled, creating a cloud provider resource, and the by-product of that is a secret that you can bind to your application, so that whatever application it is you need that needs MySQL, it has access to your MySQL.
The way that we’ve done this in Crossplane is we’ve abstracted that fact to (currently) five different abstractions - maybe there’s six, I’m losing count - different abstractions. We’ve got one for MySQL, Redis, Postgres, object storage, Kubernetes engines themselves. And if you’re familiar with the concept of the CSI drivers, where there’s persistent volume claims and their storage classes - in that setting you have a deployment with pods that have the intent to be bound to storage (block storage, whatever). And they make a request for, say, 20 gigs of storage attached. They don’t know, they don’t care how that storage is attached to them, to the pods, and somewhere else has been configured a storage class; this storage class dictates that storage will be provided through EBS, or through any other form of storage that the cloud provider is capable of providing. All the other settings, whether it’s faster service or cheaper service, are defined in that storage class.
What Crossplane has done is take that concept and extend it to all of the other resources that you could want to use in your cluster, or for your applications - MySQL, Postgres, and so forth.
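A hypothetical sketch of the separation being described: the class carries the provider-specific decisions, the claim carries only the application's intent, and a binding step matches the two. The Go types and field names below are invented for illustration and are not the real Crossplane API:

package main

import "fmt"

// MySQLInstanceClass is authored by the ops/platform team: it pins down
// the provider-specific decisions (which cloud, which tier, which region).
// All names and fields here are hypothetical.
type MySQLInstanceClass struct {
	Name     string
	Provider string // e.g. "gcp", "aws", "azure"
	Tier     string
	Region   string
}

// MySQLInstanceClaim is authored by the app team: it only states intent
// ("I need a MySQL database") plus a selector pointing at a class.
type MySQLInstanceClaim struct {
	Name          string
	EngineVersion string
	ClassSelector map[string]string
}

// bind simulates the binding step: match the claim to a class; a real
// control plane would then provision the managed resource and write a
// connection secret for the application to consume.
func bind(claim MySQLInstanceClaim, classes []MySQLInstanceClass) *MySQLInstanceClass {
	for i, c := range classes {
		if c.Name == claim.ClassSelector["class"] {
			return &classes[i]
		}
	}
	return nil
}

func main() {
	classes := []MySQLInstanceClass{
		{Name: "prod-gcp", Provider: "gcp", Tier: "db-n1-standard-1", Region: "us-central1"},
		{Name: "dev-aws", Provider: "aws", Tier: "db.t2.small", Region: "us-west-2"},
	}
	claim := MySQLInstanceClaim{
		Name:          "wordpress-db",
		EngineVersion: "5.7",
		ClassSelector: map[string]string{"class": "prod-gcp"},
	}
	if c := bind(claim, classes); c != nil {
		fmt.Printf("claim %q bound to class %q on %s\n", claim.Name, c.Name, c.Provider)
	}
}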
So MySQL, Postgres, and you mentioned Rook as well - these are still relatively low-level building blocks. Do you have higher-level building blocks for someone that for example wants a type of application, so that there’s a bit more that’s done for you out of the box, so you don’t have these blocks to assemble yourself?
[01:48:06.02] Yeah, so one of the things that we’re really focused on as a project is addressing it in layers; starting with the lowest level, and then building on top of that, and also allowing other people in the community to build on top of it. One of the great values of being standardized on the Kubernetes API is that we can integrate with a lot of different things. So as Marques was talking about, we have a lot of infrastructure resources that we talk about… And in some ways, those are abstracted, because they’re managed services, which are a little simpler than running your own MySQL instance on bare metal, or something like that. But you can continue to build on top of that and package those together… And Marques alluded a little bit to a different kind of stack that we support as well, which are application stacks.
A common example that we talk about, just because everyone is usually familiar with it, is a WordPress instance. A WordPress blog - everyone’s pretty much familiar with that. Usually, what it takes to do that is somewhere to run it, so maybe a Kubernetes cluster, and then some sort of deployments into that cluster - have the container running in a pod, or something like that - and then some sort of database (MySQL for WordPress) that you need to provision as well, for that to talk to, and store posts and comments and that sort of thing in.
So what you can do with Crossplane is bundle that up into another sort of custom resource, which is a Kubernetes concept which basically allows you to extend their control plane. All of these infrastructure resources we talked about are deployed through custom resource definitions, and then instances of those other custom resources… So you could extend that to have a WordPress custom resource definition that says “I need these maybe lower-level concepts (as you were alluding to) to be able to run this application”, and someone can just deploy this WordPress instance resource and it will take care of deploying all those resources in an agnostic manner as well, meaning that it can be deployed on GCP, or AWS, or Azure, or any other cloud provider, even your on-prem solution, if so be it…
That allows someone who’s at a higher level - we like to think about a separation of concern in Crossplane between someone who would be on a platform or operations team who defines available infrastructure, and then someone on an applications team… Or if you get to something like a WordPress instance, maybe on a marketing team, or something higher than that, being able to deploy things in a consistent manner, that is something that their organization has deemed appropriate for their use case.
I really like this concept, and one thing off the top of my head which I would really to know if it exists is - you have Crossplane, running in a Kubernetes cluster; can that Crossplane instance stamp out other Kubernetes clusters which maybe have a couple of building blocks already preinstalled, that are all the same? Does this functionality exist?
When you take a philosophy of treating everything as a resource in Kubernetes, then that allows you to do some interesting things where Kubernetes itself can be treated as just another type of resource. Maybe you need a Postgres, maybe you need a Redis cache, but maybe you also need a Kubernetes cluster… So being able to dynamically provision on-the-fly, bring up a Kubernetes cluster with a certain configuration, or certain applications, or certain networking plugins, or policies, whatever it may be, to be able to on-demand bring those up and get them as part of your environment is a consistent experience, like within the other type of resource.
[01:51:49.16] I’ve heard people many times express how Kubernetes is a platform for platforms, and I think that we’re really starting to see that, that a lot of the base problems have been solved in Kubernetes… You know, a declarative API for configuration, active reconciliation controllers that are level-triggered, not edge-triggered… There’s all these different philosophies that went into Kubernetes that have made this platform where we can start building higher-level concepts on top of it. And the higher that you go up the stack, the more opinionated you can become; you become more specific to certain use cases.
But when you have these building blocks and you’ve got a community effort around bringing them into something that’s more useful and higher up the stack, with more functionality or easier to use, then you can end up with cases where I can just bring up Kubernetes itself and start using that, and treat that as maybe clusters as cattle. A lot of things are [unintelligible 01:52:43.14] And somebody used one this week, too… It was something “as cattle”, that I had never heard before, and I wanna remember that and bring that back, because I think it was taking it a little too far. It was like “Okay, not everything has to be cattle.” But maybe I’m just not on board with it yet, so… New things from KubeCon this week that I still need to process.
Well, I did hear that kubectl, however your preferred way of saying that word…
[laughs] Yeah, that’s a tough subject…
KubeCTL, yes… I did hear that pronounced as KubeCattle this week, which is taking that to a whole other level, so… [laughter]
Or like “kubed cattle” [unintelligible 01:53:19.20] [laughter]
That’s a good one. One thing which I don’t know enough about and I would like to know more about is Rook. Where does Rook fit in all of this?
Yes, and I’d be happy to take that one, since I’ve been working on Rook for just over three years now. I believe that where Rook really shines is its focus being on an orchestrator for storage. If you think about the roots of the Rook project, when we started it more than three years ago, something that we saw as Kubernetes was still in very early days is that you would ask people that are using Kubernetes “Oh, okay, so what are you doing for persistent storage?” and almost nobody had a good answer to that. That was a very commonly unanswered question, because they were just running [unintelligible 01:54:06.17] workloads in Kubernetes.
So we started seeing the value of, okay, if we can use these primitives and these patterns that are in Kubernetes, and these best practices that are starting to form around how do you manage an application’s lifecycle, how do you maintain reliability of a distributed system - all these things, these problems were being solved, and then being able to build on top of that with “Okay, let’s do the same thing for storage.” Let’s reuse the Kubernetes best practices and patterns to stop relying on external storage, or storage that’s outside of the cluster; maybe it’s in a NAS device, or a SAN, or maybe a cloud provider’s block storage service, or whatever it may be… But being able to bring those into the cluster and orchestrate them to be able to take advantage of the resources that are already in the cluster - available hard drives, or different classes of service, a regular spinning platter disk, or an SSD, or an NVMe, or whatever it may be… But being able to provide storaged applications in a cloud-native type of way; you’re not going with the full stack there. That’s something that we found that got a lot of traction pretty quickly. And it wasn’t too long - it was only a few early minor releases - before we started getting production usage of it… Which is always very surprising, because it was an alpha-level project, and we were very clear about “This isn’t intended to be used in production yet”, but we got production adoption pretty early on right away, which helped drive the maturity of the project as well.
Wow. Okay… Three years. That’s a long time in the Kubernetes world; Kubernetes itself has been around for like five years, roughly…? So three years - that’s a really long time; enough to mature, to get to a point where it solves a lot of real-world problems. That’s great to hear.
I’m wondering - this is more of a personal interest - does it support LVM? Does Rooks support LVM?
[01:56:04.23] That’s an interesting question, because if you look at the design of the Rook project, it’s basically separated into two distinct layers. One of the layers, which is the core functionality of Rook, is this orchestration layer, this management layer that will do the steps necessary to bring up the data layer that’s underneath it, to get it running, and do day two operations to make sure it’s healthy… So storage providers that Rook performs storage orchestration for within your Kubernetes cluster - it’s up to that data path there to know how to handle LVM or any other type of storage fabrics and storage presentations that you can find in a cluster. So there are a number of storage providers inside of Rook that do work with LVM.
Okay, that’s great. I really have to check that out. Very, very interesting. Just to go back to Marques again, because there’s something which is at the back of my mind… You mentioned support in Crossplane for AWS, GCP and Azure. What about the other providers? There are so many more other providers, and Dan mentioned this - any provider can be part of Crossplane. What does the path for other providers look like, that would like to be part of Crossplane?
Sure. Well, we’ve stamped out the pattern by creating those stacks. In the process of creating those stacks - they were created initially, all of them, within the Crossplane project itself… And it was interesting, even though it’s all inside of one repository, that the different providers were implemented by different developers, at different times, adopting different best practices, or what they thought was the best practice at the time… And it eventually coalesced into one set of design patterns which had been sort of the best of breed.
Around the same time, we decided to extract these (what we call) stacks, extract those providers/stacks out of the Crossplane project, into their own stack repositories. So GitHub.com/crossplaneio/stack-gcp, /stack-azure, and /stack-aws. And we have additional ones - Rook, and Packet… And there’s really an easy way to get that started for any other cloud provider interested in being able to provide their managed services through Crossplane, and having that abstracted away. If you have a managed MySQL or a managed Postgres, then users can create a claim for a MySQL instance, and one day they’re getting RDS, the next day they’re getting GCP, the next day they’re getting your service. Maybe in one namespace it’s resolving to GCP, for some production workload, and in another namespace it’s reconciling to whoever’s cloud providers manage MySQL. And again, not just for MySQL, Postgres, Redis, but all the different types.
Packet is a great example, because before Packet we didn’t have the abstraction for machines. But Packet provides their Devices, where they – Device is the name…?
Yeah, it’s essentially a bare-metal offering that they provide via their cloud provider offering. They came and wanted to have a stack, and we didn’t have support for what we call claim, for machine instances, so we wouldn’t be able to dynamically provision those. So as part of the core Crossplane project, we now had a stack that wanted to be able to dynamically provision and integrate with Crossplane, so we were happy to work with them to add the machine instance, claim type, that now allows an abstraction that can be used by other providers as well. Because obviously, AWS and GCP etc. have VMs like EC2, and that sort of thing, that can also utilize that… So it’s just another opportunity for portability.
[02:00:08.13] Another thing, to kind of build on what Marques was saying, is besides just having those best practices reflected in those stacks in our organization, we also have abstracted out to a library, a Crossplane Runtime, which is based on the Controller Runtime project, which I’m sure a lot of listeners who have built controllers are familiar with. That’s part of the Kubernetes organization. Essentially, what that does is it gives you an interface for building controllers and running those in a Kubernetes cluster, and some best practices for doing that. Well, most of our stacks are using that, but also doing other things, namely interacting with external APIs.
There’s certain patterns that are very common across stacks that do that, so we’ve been able to abstract those out into a library and just say “You just need to tell us for this resource how you want to observe the resource, create the resource, update the resource, and delete it, provide us methods to do that”, and then the logic that’s around that and actually executing those things can happen in the runtime library. So it really lowers the barrier to entry for people implementing new stacks, which I think is really valuable as we see more and more community adoption.
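The contract described here could be pictured roughly like this Go sketch: a stack supplies observe/create/update/delete for its resource, and the runtime library owns the reconcile logic around those calls. Method names and shapes are approximations, not the exact crossplane-runtime API:

package main

import (
	"context"
	"fmt"
)

// ManagedResource stands in for any external resource a stack manages.
type ManagedResource struct {
	Name   string
	Exists bool
	InSync bool
}

// ExternalClient is the rough shape of the contract: a stack only supplies
// these four provider-specific calls for each resource it supports.
type ExternalClient interface {
	Observe(ctx context.Context, r *ManagedResource) (exists, upToDate bool, err error)
	Create(ctx context.Context, r *ManagedResource) error
	Update(ctx context.Context, r *ManagedResource) error
	Delete(ctx context.Context, r *ManagedResource) error
}

// reconcileOnce is the generic logic a runtime library can own on behalf
// of every stack: observe, then create or update as needed.
func reconcileOnce(ctx context.Context, c ExternalClient, r *ManagedResource) error {
	exists, upToDate, err := c.Observe(ctx, r)
	if err != nil {
		return err
	}
	switch {
	case !exists:
		return c.Create(ctx, r)
	case !upToDate:
		return c.Update(ctx, r)
	default:
		return nil // nothing to do
	}
}

// fakeClient is a stand-in provider so the sketch runs end to end.
type fakeClient struct{}

func (fakeClient) Observe(_ context.Context, r *ManagedResource) (bool, bool, error) {
	return r.Exists, r.InSync, nil
}
func (fakeClient) Create(_ context.Context, r *ManagedResource) error {
	r.Exists, r.InSync = true, true
	fmt.Println("created", r.Name)
	return nil
}
func (fakeClient) Update(_ context.Context, r *ManagedResource) error {
	r.InSync = true
	fmt.Println("updated", r.Name)
	return nil
}
func (fakeClient) Delete(_ context.Context, r *ManagedResource) error {
	r.Exists = false
	return nil
}

func main() {
	r := &ManagedResource{Name: "cloudsql-example"}
	_ = reconcileOnce(context.Background(), fakeClient{}, r) // first pass creates
	_ = reconcileOnce(context.Background(), fakeClient{}, r) // second pass is a no-op
}

The point of pushing that switch statement into a shared library is exactly what is described above: a new stack author only has to write the four provider-specific calls.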
I think just today we actually saw a cloud provider in Europe announce that they were using Crossplane and had built a stack for that… And we had very little input on that; we did a little bit of code review, but they were able to take that library and some of the documentation we’ve written and build their own stack, largely isolated from any of the work that the Crossplane community was doing. That was some really strong validation for us, and I think that we’ll start to see that happening a lot more in the next weeks and months.
And it also gets back to the idea of Kubernetes being a platform for platforms. Kubernetes and its architecture has enabled Crossplane to now become a platform for all these other different cloud providers, or independent software vendors, or whoever, to build their application and get more reach and scope of accessing more customer markets/segments, for people to come and start using their software in this open cloud sort of way, with portability and all these different features that enable more people to access more software.
We’ve heard a lot about AWS and GCP and Azure, which would make people think that it’s mostly about infrastructure, or infrastructure like a service… But service, again, which is still tied to the infrastructure. But I know that recently you have started (maybe you’ve even finished) integration with GitLab, so you can get the GitLab resource, which is a completely different type of resource that Crossplane enables. Can someone tell me more about that?
Yeah, I’d be happy to talk about that, because that’s something definitely that I’ve spent a lot of time on recently. We started alluding earlier – Dan was talking about how you can create a Crossplane stack that helps you deploy your applications such as WordPress… And WordPress was a good place to start, because it’s a fairly simple application; it’s just a container, and a MySQL, and then maybe a cluster to run that container on. But then in the KubeCon Barcelona timeframe we’ve put a significant effort into being able to deploy GitLab itself.
So if you look at the architectural components in GitLab, they have a Helm chart, and currently that’s their main supported way that they had started with to deploy GitLab and everything that comes with it into Kubernetes. Once you’ve rendered that out, it’s on the order of like 50 different containers, 20 config maps, all these different resources that speaks to a fairly complicated application set. But if you boil it down, what it really needs is a set of containers to run (microservices), and then Postgres, Redis, object storage. And that’s basically it.
[02:04:04.14] So we – being able to model that and then express in a very portable way that “My application needs these containers, and these databases etc.” and being able to deploy that to any cloud, is a huge step forward. And being able to easily manage applications - not just infrastructure, but higher-level applications such as GitLab, into new environments that maybe they haven’t been able to run in so far.
Yeah. Hearing you talk about that made me think of something else, which may sound crazy…
Oh, I like that.
So I could imagine there being a need for having a Crossplane that manages Crossplane. Updates… Right? Because you have a Crossplane instance that keeps all these other Crossplane instances up to date maybe, or the applications up to date… But maybe I think there will be something else which will keep the applications, because you have the bigger loops, which reconcile maybe less frequently… And then you keep going in and in until you have some very quick loops, which reconcile every 5 seconds, 10 seconds, or whatever. Is this something that you’ve thought about, or did it come up before…?
Yeah, that is not as crazy of an idea as you would think… Or maybe we’re also crazy, too. But either way–
That’s definitely true regardless. [laughter]
Yeah, we can go with that, that’s fine… But if you think about the architecture in general in Kubernetes around controllers that are performing active reconciliation - I mean, it’s a great pattern… It’s an old pattern, too. It’s commonly used in robotics, let’s say, to run in a control loop and sit there, watch the actual states in the environments and compare that to the desired state, see what the delta is there and take an operational step towards minimizing that delta there between actual and desired.
The same exact example there that you brought up, of a Crossplane to manage Crossplanes - that’s entirely within the realm of reason. It’s a set of controllers that can watch the environments and make changes to it to continue to drive it. So if there’s a new update to Crossplane, in this single control plane you could be able to watch that, see that there’s an update, take the imperative steps within this controller’s reconciliation loop to upgrade the application and get it to the newest version. But it’s all just the operator pattern and controller pattern inside of Kubernetes, and you can use that to manage basically any resource… So I think it’s a good idea to be able to manage Crossplanes.
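For readers new to the operator pattern being referenced, a toy level-triggered loop in Go looks something like this; the versions and timings are made up, and a real controller would watch the API server rather than poll a ticker:

package main

import (
	"fmt"
	"time"
)

// A toy level-triggered control loop: on every tick it re-reads the actual
// state, compares it to the desired state, and takes one step to close the
// gap. The "state" here is just a version string for illustration.
func main() {
	desired := "v0.5.0"
	actual := "v0.4.1"

	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()

	for i := 0; i < 3; i++ {
		<-ticker.C
		if actual != desired {
			fmt.Printf("observed %s, want %s -> upgrading\n", actual, desired)
			actual = desired // in reality: apply manifests, roll pods, verify health
		} else {
			fmt.Println("in sync, nothing to do")
		}
	}
}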
Because if you think about it, not everyone’s gonna want to run and manage their own Crossplane, so I think that there’s definitely value in being able to automate that and take some of that effort away from people, and let the controllers and the machines do that for you, so that you can have a Crossplane instance that’s hosted for you as a service, and be able to get all the benefits out of your Crossplane without having to manage it yourself. Let the software do that for you. I think there’s definitely value in that, that we see, for sure.
Okay. So this in my mind set us on a path that requires me to ask the next question, which is what big things do you have on the horizon that you can share?
I think scheduling is one area that we’re looking forward to designing and approaching. When you have these Kubernetes application workloads, the concept that was raised earlier of bundling your application and its managed resources as a sort of single component, you’re gonna need some sort of way to describe where to run that application, what cluster should it be run on, which managed service should it be using?
[02:07:53.20] Currently, the way that these abstract types, these MySQL instances, these Kubernetes clusters - currently, the way that they’re resolved is through label selectors. So you’ve described a class, named that class and set some set of options on that class, but right now you’re referencing it by name… So an area that we’d like to figure out is how we can do that dynamically, so scheduling it based on perhaps cost, perhaps based on the region, the locality, the affinity to another workload. There’s all sorts of areas that we can really go into there. Maybe the performance of a cluster, or an application is failing, so that could lead to an application being bound to another application, in some sense, so… Lots of layers of abstractions here, and lots of fuzzy decision-making that can really provide a better application deployment experience.
And building on what Marques is saying there - if you take a look at what the scheduler does inside of a Kubernetes cluster, the in-cluster scheduler, its job is to know about the topology of the cluster, know about the resources that are available in the cluster, and then make the best decisions about where a pod should be scheduled to, where it should run based on “Is that node overloaded? Do I need to evict some pod somewhere? Does it match the particular hardware resources that are available on a particular Node?”
So then if you take that idea of Kubernetes as a control plane and figuring out where pods should run across nodes in a cluster, and then go a higher level, where you have something like Crossplane, which is a control plane that’s spanning across multiple clouds, multiple clusters, on-premises environments, but it’s a higher level that is aware of the topology of all the resources that are available, and then can make these smart scheduling decisions about “Where should an application run, based on whatever constraints it thinks is the most important?” So this whole idea of scheduling that was done in cluster for Kubernetes can definitely be raised up, like Marques was talking about, to make decisions more at a global scale.
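One way to picture the kind of global scheduling decision being discussed is a simple scoring pass over candidate clusters. The criteria and weights below are invented purely for illustration; nothing like this is implied to exist in Crossplane today:

package main

import (
	"fmt"
	"sort"
)

// Cluster is a candidate placement target; the fields and weights below
// are invented purely to illustrate a scoring-based scheduling pass.
type Cluster struct {
	Name          string
	Region        string
	HourlyCostUSD float64
}

// score prefers cheaper clusters and strongly prefers the requested region.
func score(c Cluster, preferredRegion string) float64 {
	s := -c.HourlyCostUSD // cheaper is better
	if c.Region == preferredRegion {
		s += 10 // locality bonus
	}
	return s
}

func main() {
	clusters := []Cluster{
		{Name: "gke-us-central", Region: "us-central1", HourlyCostUSD: 3.2},
		{Name: "eks-eu-west", Region: "eu-west-1", HourlyCostUSD: 2.9},
		{Name: "aks-us-east", Region: "eastus", HourlyCostUSD: 2.5},
	}
	const want = "eu-west-1"
	sort.Slice(clusters, func(i, j int) bool {
		return score(clusters[i], want) > score(clusters[j], want)
	})
	fmt.Println("placement order:")
	for _, c := range clusters {
		fmt.Printf("  %-15s score %.1f\n", c.Name, score(c, want))
	}
}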
That’s pretty cool. I’m really looking forward to what’s going to come out of this, because it’s super-exciting. And I know that different providers and different teams are tackling this in their own specific way… So whoever gets there first, or even if it’s like multiples, it’ll be a great moment, because it will open up other possibilities. It’s all building blocks, next steps, next steps. This is really, really exciting.
As we are approaching towards the end of this great discussion, which I’m sure we can continue, one thing which I’d like to mention is that the way I got to learn about Crossplane is via your YouTube livestreams, the TBSes, I believe… And Dan was the last one that I’ve seen (I think) on the livestream. It was great to see that in action, so can you tell us more about how that works, where the idea came from, how it feels to be on the other side?
Absolutely. So if anyone out there wants to go watch some very low-quality videos… [laughter] [unintelligible 02:11:21.07] We do a livestream every two weeks. That’s something that we got ramped up shortly after I joined Upbound. It’s really just a time – it’s very informal, and it’s a time for us to talk about new things in the Crossplane community, new things in Kubernetes that are related, and then also to do a lot of really live demo-ing. Actually, someone asked me today, “Why don’t you just record your demos and just post them on there? And then you can make sure that everything goes smoothly, and that sort of thing.” And the reason we don’t do that is because we think there’s a ton of value in messing up.
[02:12:03.16] There’s a lot of different configuration that can happen when you’re provisioning things across cloud providers, on-prem; lots of different services, lots of different plugins… Those are a lot of different ways you can mess up, which is not really a reflection of the system, or even of your own ability. It’s just complicated. And what it does when you provision things and you run into issues with it and you work through it is it shows people how to troubleshoot when they run into those same issues.
It also adds a layer of humanity to it, I think, that allows people who are tuning in, especially live, when they’re dropping comments, and that sort of thing, to be able to talk about what their individual experiences are. We’ve had some other people host as well on some episodes. We’ve actually recently had multiple people hosting a single episode, which - you might wanna skip that one; there were some technical difficulties… I apologize; I’m not a visual engineer. But what I’d like to encourage people to do is talk about something they’re interested in outside of Crossplane… So a lot of times I’ll start a show by talking about the Utah Jazz, which is a basketball team I really love… And I’ll encourage the other people to do the same. Because when it comes down to it, the end users of Crossplane and the people that build Crossplane are gonna have to be really closely integrated, right? Because it is a platform that is going to inherently have to make some architectural decisions, and we wanna be best informed about how users want to use the platform, so that we can build it to meet those specifications… And then encourage them to come in and build parts of it as well.
So I think just building that community, and having fun, and talking about – you know, you can do all these things and we’re excited about them, and we’d like for you to come join us on this journey. I think that’s really the purpose of TBS, which is The Binding Status, which is kind of a play on claims binding the classes. I think that’s the purpose of the show.
We had a couple people come up and mention that they’d watched episodes, which I was astounded by, and I apologized for the time that they had wasted, but… It was personally and as an organization really validating to say “You know what - people care about what’s going on here, and they feel welcome into the community by this style of communication.”
There’s one big downside to this, from my perspective… It’s that I enjoy watching the shows more than trying Crossplane out. [laughter] So the risk there is that I will continue watching all the Crossplane shows forever, and never try Crossplane… Because it’s so exciting to watch, but I spend all the time watching, rather than trying it out. So that’s one of the real risks of this.
Well, I think the solution to that is we just have to have you come on and host, and then you’ll be forced to try it out with hundreds of people watching. [laughter] [unintelligible 02:14:43.28]
Yeah. That’s actually a great idea, I have to say… I don’t know how I’ll get out of that one, but… [laughter] Any last, parting thoughts?
Well, it’s really easy to try it out, so you don’t have an excuse. You just Helm-install it, and as long as you’ve got some cluster somewhere, install it in kind and install it in K3s on your laptop… Docker on Mac includes a Kubernetes engine now. So from there, you can Helm-install your Crossplane, and from there start provisioning more clusters, more managed resources, the Kubernetes applications.
Another piece I’d like to piggyback off the idea is the videos. We have a lot of documentation, we’ve worked hard to update this documentation, both on how to build stacks and how to use Crossplane. We’ve been updating it every version, and we’re trying to get more strict about making sure that our docs are updated with every release, and we’ve been releasing the product faster and faster. The last release was 0.5, and before that was our first minor patch, in 0.4.1. We’ve worked on our build pipeline, so that we can get the updates out there quicker… So with all of this, you have documentation to test it out with, so… I’d like to say that yes, the video is probably one easy way to consume it.
[02:16:20.08] For different people, different things are gonna work. But whether it’s reading the docs, whether it’s installing the product and just trying it out by hand, or whether it’s watching us fumble at the command line. YAML is not the easiest thing to just grok at a distance. Sometimes you need to watch somebody stumble over how to best describe it, or just read thoroughly what we’ve done, or jump in the code; visit the GitHub project, star it… That stuff is really useful to use. Leave issues for any kind of ideas that you would like to see Crossplane expand or delve into.
And a closing thought on that that I strongly believe in is that I consistently see that some of the best feedback and ideas for a project comes from brand new users that have never seen it before. Because you could be a project maintainer, let’s say, and you’re consistently living in that codebase, and you know all the ins and outs and the idiosyncrasies of it, and you get a very specific, myopic view on it almost. But then you have a brand new person try it out for the first time, with fresh eyes, and they see something immediately that you’ve been completely blind to for the past six months.
Some of the best feedback comes from brand new users, so we are super-open to new people trying it out and giving us their ideas, because they’re probably gonna be good ideas as well.
Okay. So on that note - I really like that idea - how about we stop the interview now, and I can start trying some Crossplane stuff out for the first time… [laughter] And watch me, and tell me all the things that I’m doing wrong. I would really like that.
Or maybe you could tell us what we’ve been doing wrong…
Or that, yes. This will get crazy. I’m really looking forward to that. Dan, thank you very much. Marques, thank you very much. Jared, thank you very much. It was a pleasure having you. I’m so excited that you were on the show, and I’m looking forward to what will happen next.
Thank you so much for having us, it was a pleasure.
Thank you.
Yeah, we really love Changelog, we love all the shows. Go Time… Just subscribe to the Master feed and you get everything. It’s the best.
Thank you, Marques, thank you.
Our transcripts are open source on GitHub. Improvements are welcome. 💚
Disappearing shelf buttons in Maya 2016 and up?
Description:
One reason for disappearing shelf buttons is that a shelf's entry exists multiple times in userPrefs.mel,
which confuses the loading. There might be a shelf entry with ; at the end and one without. Both are valid,
but they overwrite each other.
This script runs through the .mel file and tries to remove any duplicated lines.
Besides "shelfName" entries, the associated "shelfVersion", "shelfFile", "shelfLoad" and "shelfAlign" entries are filtered as well.
Args:
path_to_userprefs: file path to userPrefs.mel. Windows usually "C:/Users/username/Documents/maya/20xx/prefs/userPrefs.mel"
searchlines:
Limited to speed up the script; the default values should be good. Whether you need to modify them will depend on your file.
Start and end lines to process for the search. You can search for "shelf" in userPrefs.mel in a text editor to find the section.
How to use:
Run the script, then reopen the shelf (or restart Maya):
import missing_shelf_clean_userprefs as missi  # assumes the script's folder is on Maya's Python path
missi.clean_disappearingshelf_userprefs("C:/Users/monika/Documents/maya/2017/prefs/userPrefs.mel", searchlines=[1000, 3000])
Please use the Feature Requests to give me ideas.
Please use the Support Forum if you have any questions or problems.
Please rate and review in the Review section.
Using singleton classes for object metadata
So you have a bunch of objects - an object graph - the result of some operation or API call. The task: analyze the data and store the results of the analysis as metadata for the graph.
Too abstract? Think of how a compiler works: the parser spits out a parse tree (or Abstract Syntax Tree or AST) that represents the code as tree structure. Then: a lot of algorithms run over the tree in passes: gathering symbols in the symbol table, doing type inference, using these types to do type checking, etc.
But wait: the last two passes show the problem: where does the type inferencing code store its data for the type checker to use? It'd be most convenient to store the data where it's needed - i.e. if an expression node (in the AST) is found to return type Foo, then it'd be best to store that information right in the node.
To illustrate the solution, we'll look at some tools that work on ASTs - not a compiler, but tools with similar requirements. In Ruby, the ParseTree library returns Ruby source code as an AST. Example:
[:vcall, :obj, :say_hello, [:array, [:lit, 42], [:lvar, :foo]]]
This is a bit of Ruby code represented as ParseTree AST. Since this is an article and not a VM, it's a bit difficult to work with objects and object references organized as a graph - so we use ParseTree's s-expr representation of the tree. S-exprs are nested lists; each list represents a node in the tree, with the first element specifying the node type. In the sample, the node represents a call to a virtual method (vcall). The parameters are the receiver (the 'self' in the called method), the name of the method and the parameters.
Ideas for tools would be a type checker, static analysis tools or automatic refactorings. One of the requirements for tools of that kind to work is some kind of type inference, i.e. determining the types of variables or expressions. For instance, this AST represents a call to the to_s method:
[:vcall, :obj, :to_s]
What information can be gathered about this, and what could be done with this information? The type of the return value, for instance. This is hard to determine - but since it's to_s, a method that returns the string representation of an object - let's just say it returns "String". This is not necessarily true - it's a guess. For tools such as a code completion, it's fine enough - 100 % accuracy isn't possible for these matters. Although in some cases, more information could be gathered, and the analyzer could determine more accurate information.
Analyzers work on the tree and annotate the AST with the metadata they have determined. For keeping codebases modular, it's a good idea to separate the analyzers from the metadata consumers, i.e. the code that walks the AST and does something with the metadata. For instance, a Ruby editor could highlight a method that overrides another in its superclass.
The solution
Long story short: here's a solution for annotating ParseTree nodes:
node = [:vcall, :obj, :to_s]
def node.set_metadata(key, value)
@_metadata ||= Hash.new
@_metadata[key] = value
end
def node.metadata
@_metadata ||= {}
end
node.set_metadata(:type, :String)
What does that do?
The secret sauce: singleton classes
Every object in Ruby is the instance of a class. Unlike many other OOP languages, Ruby allows you to change an object's class. Don't confuse this with open classes: in Ruby, it's possible to modify a class - even at runtime. Singleton classes are similar - except that they only affect one object's class. While the difference might seem small - it has the big benefit of limiting the effects of the class modification to the affected object. Open classes, on the other hand, change the class definition for all the code and all the objects of these classes. This is useful, but can also mess with other people's code and if too many pieces of code do it on common classes, name clashes with existing methods can happen. In the case of ParseTree nodes - which are ordinary Ruby Arrays - this means that only the actually used Array objects are affected - not all the Arrays on the heap.
Another way to see this: with singleton classes, the changes to the class stay local to the code that creates and uses them - they never need to be visible outside. Open classes, on the other hand, are global changes: a class name is a global variable, i.e. "String" refers to the class object representing Strings. Just as more scoped variables (local variables, member variables) are preferable over global variables, so are singleton classes to open classes.
Of course: which solution to choose (singleton or open classes) depends on the particular situation. With open classes, the change to the class happens once - after it, all objects of that class have the added methods. With singleton classes, the change to the class (i.e. creating the singleton class and changing the object's class pointer to point to it) happens at every change.
The syntax - as seen above - is simple:
def object_variable.method_name()
# code
end
The variable pointing to the object is prefixed to the method name. A more flexible and modular way is to use Mixins to do it. This allows to collect methods that handle some aspect in a Module and mix them in in one go. E.g.
module Metadata
def set_metadata(key, val)
# initialize the hash on first use, otherwise calling set_metadata first would fail
@_metadata ||= Hash.new
@_metadata[key] = val
end
def metadata
@_metadata ||= {}
end
end
x = [:vcall, :obj, :to_s]
x.extend Metadata
x.set_metadata(:type, :String)
Mixins allow to mix functions defined in a Module into a class - in this sample, it's the node object's singleton class. Again: only this object will have these methods now.
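To see the scoping difference in a quick sketch (reusing the Metadata module defined above; the second array is only there for contrast): the object that was extended gains the methods, every other Array stays untouched.
node = [:vcall, :obj, :to_s]
other = [:lit, 42]
node.extend Metadata
node.set_metadata(:type, :String)
node.respond_to?(:metadata)   # => true
other.respond_to?(:metadata)  # => false, only the extended object gained the method
node.metadata[:type]          # => :String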
Another example - dead code removal
If the static analyzer code is too abstract, let's try another tool: a dead code remover. "Dead code" is simply code that doesn't really do anything and can be removed without changing the behavior of the program. Like this code:
for x in [1,2,3] do
end
With our annotating concept, we can look for this kind of code and annotate it as such:
node.metadata(:dead_code, :true)
But marking the code as dead is only the first step - now we need to remove it. This is where it's easy to see that it makes sense to split up analysis and action part. An IDE might just want to highlight some code as dead code, but it might also offer a simple way (a Quick Fix) that removes the code.
How can this code be removed? Sure - it'd be possible to dump the node from the AST and then use Ruby2Ruby to spit out Ruby source. But that's not a very nice solution: Ruby2Ruby turns ParseTree ASTs into Ruby source code, but it loses a lot of useful information, such as formatting (white space) and comments. Dropping these is not acceptable in an IDE or refactoring/cleanup tools.
Metadata to the rescue: the nodes can be annotated with source locations - ie. where this particular node is found in the original source code. And with that - it's easy: the cleanup code simply marches through the code, finds all nodes marked with :dead_code and removes them in the source file, using the character offsets of the source location metadata. (Of course - once some nodes were removed, the offsets of the rest of the nodes are off. This can be solved by simply tracking the number of deleted characters and subtracting them from the actual offsets. With that, it's not necessary to reparse and re-analyze the source).
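As a rough sketch of that cleanup pass (the :dead_code and :source_location keys and their offset/length layout are assumptions for illustration, not part of ParseTree): the running count of deleted characters keeps later offsets valid without reparsing.
def remove_dead_code(source, annotated_nodes)
  removed = 0
  dead = annotated_nodes.select { |n| n.metadata[:dead_code] }
  # Process nodes in source order so the running offset stays correct.
  dead.sort_by { |n| n.metadata[:source_location][:offset] }.each do |node|
    offset = node.metadata[:source_location][:offset] - removed
    length = node.metadata[:source_location][:length]
    source = source[0, offset] + source[offset + length..-1]
    removed += length
  end
  source
end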
Conclusion
The examples in this article were focussed on language tools - but the ideas are usable for all kinds of object collections. Wherever object graphs need to be annotated, possibly in multiple passes by independently written analyzers, it's convenient to store the annotations with the nodes. If the classes of the object graph are not under the developer's control and are extensively used elsewhere (e.g. the Array class in ParseTree), singleton classes are a good choice. For other situations, open classes might be a better solution.
As for implementing the specific tools mentioned here: ParseTree is available for Ruby 1.8.x, in Rubinius and JRuby (jparsetree). The JRuby version also adds source locations for the individual nodes, so it's even easier to modify source. Have fun coming up with ideas for tools and writing them - it's easy with Ruby.
|
https://www.infoq.com/articles/prototypes-for-metadata
|
CC-MAIN-2018-17
|
refinedweb
| 1,497
| 62.78
|
The QHistoryState class provides a means of returning to a previously active substate. More...
#include <QHistoryState>
Inherits QAbstractState.
This class was introduced in Qt 4.6.
The QHistoryState class provides a means of returning to a previously active substate. QHistoryState is part of The State Machine Framework.
Use the setDefaultState() function to set the state that should be entered if the parent state has never been entered. Example:
QStateMachine machine;
QState *s1 = new QState();
QState *s11 = new QState(s1);
QState *s12 = new QState(s1);
QHistoryState *s1h = new QHistoryState(s1);
s1h->setDefaultState(s11);
machine.addState(s1);

QState *s2 = new QState();
machine.addState(s2);

QPushButton *button = new QPushButton();
// Clicking the button will cause the state machine to enter the child state
// that s1 was in the last time s1 was exited, or the history state's default
// state if s1 has never been entered.
s1->addTransition(button, SIGNAL(clicked()), s1h);
By default a history state is shallow, meaning that it won't remember nested states. This can be configured through the historyType property.
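As a small sketch of the deep variant (building on the example above; the second button is made up for illustration, and the historyType property's setter is assumed to be setHistoryType()):

// Record the nested configuration inside s1, not just its direct child:
QHistoryState *s1DeepHistory = new QHistoryState(s1);
s1DeepHistory->setHistoryType(QHistoryState::DeepHistory);
s1DeepHistory->setDefaultState(s11);

// A transition targeting s1DeepHistory restores the deepest previously
// active configuration inside s1.
QPushButton *deepButton = new QPushButton();
s1->addTransition(deepButton, SIGNAL(clicked()), s1DeepHistory);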
This enum specifies the type of history that a QHistoryState records.
This property holds the default state of this history state.
Access functions:
This property holds the type of history that this history state records.
The default value of this property is QHistoryState::ShallowHistory.
Access functions:
Constructs a new shallow history state with the given parent state.
Constructs a new history state of the given type, with the given parent state.
Destroys this history state.
Reimplemented from QObject::event().
Reimplemented from QAbstractState::onEntry().
Reimplemented from QAbstractState::onExit().
|
http://idlebox.net/2010/apidocs/qt-everywhere-opensource-4.7.0.zip/qhistorystate.html
|
CC-MAIN-2014-10
|
refinedweb
| 258
| 51.44
|
This tutorial explains how Cactus can be used to test JSP.
There are different kinds of tests that you can implement with Cactus for testing JSP:
Testing the result of a JSP page execution: verify the returned page in the endXXX(WebResponse) method as described in the TestCase tutorial. Your test case class will also need to extend ServletTestCase and forward the request to your JSP page, as in the following example:
public class MyTest extends ServletTestCase
{
    [...]
    public void testXXX()
    {
        RequestDispatcher rd = theConfig.getServletContext().
            getRequestDispatcher("/path/to/test.jsp");
        rd.forward(theRequest, theResponse);
    }

    public void endXXX(WebResponse theResponse)
    {
        // Assert result
        [...]
    }
    [...]
}
Testing custom tags: extend JspTestCase. See the Taglib TestCase tutorial.
This type of testing depends mostly on your architecture. The general idea is that you would normally have an MVC implementation with a controller (usually a Servlet) that inspect the HTTP request, potentially gather some other data from the Session, ServletContext or some storage and based on this information decides to call some business code logic, and then forward the call to a given JSP page.
Thus, one solution to unit test your JSP in isolation is to succeed in either bypassing the controller altogether or in telling it to use some mock code logic that you would write for your tests.
public class MyTestCase extends JspTestCase
{
    [...]
    public void beginXXX(WebRequest webRequest)
    {
        webRequest.addParameter("cacheId", "1");
    }

    public void testXXX() throws Exception
    {
        PageBean bean = new PageBean();
        bean.setName("kevin");
        request.setAttribute("pageBean", bean);
        pageContext.forward("/test.jsp");
    }

    public void endXXX(com.meterware.httpunit.WebResponse theResponse)
    {
        WebTable table = theResponse.getTables()[0];
        assertEquals("rows", 4, table.getRowCount());
        assertEquals("columns", 3, table.getColumnCount());
        assertEquals("links", 1, table.getTableCell(0, 2).getLinks().length);
        [...]
    }
}
In testXXX(), we populate a bean with values for our test and put this bean in the request. Normally this would have been performed by the controller but we're bypassing it for the test. Then we call our JSP page, which looks like:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> <HTML> <HEAD> <TITLE>test.jsp</TITLE> </HEAD> <BODY> <P><BR> <jsp:useBean </P> <P> <%= pageBean.getName() %> </P><BR> Place test.jsp's content here </BODY> </HTML>
See the StrutsTestCase project.
|
http://jakarta.apache.org/cactus/writing/howto_jsp.html
|
CC-MAIN-2014-15
|
refinedweb
| 342
| 51.55
|
sem_open()
Create or access a named semaphore
Synopsis:
#include <semaphore.h>
#include <fcntl.h>

sem_t * sem_open( const char * sem_name,
                  int oflags,
                  ... );
Arguments:
- sem_name
- The name of the semaphore that you want to create or access; see below.
- oflags
- Flags that affect how the function creates a new semaphore. This argument is a combination of:
Don't set oflags to O_RDONLY, O_RDWR, or O_WRONLY. A semaphore's behavior is undefined with these flags. The QNX libraries silently ignore these options, but they may reduce your code's portability.
- O_CREAT
- Create the semaphore if it doesn't already exist. If you set this flag, you must also pass the access permissions (mode) and the initial value of the semaphore as additional arguments.
- O_EXCL
- Used together with O_CREAT; the call fails if a semaphore with the given name already exists.
The name is interpreted as follows:
- If the name argument starts with a slash, the semaphore is given that name.
- If the name argument doesn't begin with a slash character, the semaphore is given that name, prepended with the current working directory.
Returns:
A pointer to the created or accessed semaphore, or SEM_FAILED for failure (errno is set).
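A minimal usage sketch (standard POSIX usage; the semaphore name and initial value here are made up for illustration):

#include <semaphore.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    /* Create the named semaphore if it doesn't exist yet, with an initial count of 1. */
    sem_t *sem = sem_open("/my_sem", O_CREAT, 0666, 1);
    if (sem == SEM_FAILED) {
        perror("sem_open");
        return 1;
    }

    sem_wait(sem);          /* enter the critical section */
    /* ... do work ... */
    sem_post(sem);          /* leave the critical section */

    sem_close(sem);
    sem_unlink("/my_sem");  /* remove the name when it's no longer needed */
    return 0;
}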
|
https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/s/sem_open.html
|
CC-MAIN-2020-34
|
refinedweb
| 142
| 58.89
|
React Tutorial A Step-by-Step Guide(2)
In this tutorial, I am going to explain the Folder Structure of React applications, What is a React component in a detailed manner, and How we can use them.
Folder Structure of React Application
It is really important to understand the Files and the Folders involved and How the control flows when you run the application. Because most of the developers don't know how to explain the flow of React applications.
In your project at the root level, you can see 3 folders and four files to begin.
Let’s start with Files,
- package.json File — This contains dependencies and the scripts which we need in the project.
- gitignore File — Maintain git related files
- README.md File
Folders like,
- node_modules — This is the folder in which all the dependencies are installed. It is generated when you run the create-react-app command or when you run the npm install.
- public Folder — This folder contains 3 files, and the only special file which we need is the index.html file. This is the only HTML file you are going to have in your application. We are building a single-page application: the view changes dynamically in the browser, but the server always serves this one HTML file. You are not going to add any code to this file, except maybe some changes in the head tag, but not in the body tag. You want React to control the UI. For that, we have one "div tag" with the id "root".
<div id="root"></div>
At runtime, the React application takes over this "div tag" and is ultimately responsible for the UI.
- Source Folder (src)— Mostly you will be working with this folder during development. The starting point of the React application is index.js. In the index.js we specified the root component which is the App Component and the DOM element which will be controlled by React. The DOM element in our code is an element with an ID of root. This is the element in our index.html file. We called this root element in index.html the root DOM node because everything inside it controls by React.
For this application, the App Component is rendered inside the root DOM node. You can find the App component in the App.js file. The App component represents the view you see in the browser. Also, you can find the App.css file which we can use for styling and the App.test.js file for unit testing. We can find another file called index.css which applies styles to the body tag. Finally, we have the serviceWorker.js file, which is used for progressive web apps.
What is a React Component?
In React a component represents a part of the User Interface. Going back to the example we can say that we have 5 components in our application.
- Root (App) component — which contains whole components
- Header Component
- SideNav Component
- Main Content Component
- Footer Component
React Nesting Components
In React, we can nest components inside within one another. This helps in creating more complex User Interfaces. The components that are nested inside parent components are called child components. Import and Export keywords facilitate the nesting of the components.
Components are reusable. The same component we can use with different properties to display different information. For example side-nav Component can be the left-side-nav or right-side-nav.
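As a quick sketch of that reuse (this SideNav component and its props are made up for illustration, not part of the tutorial project):

import React from "react";

function SideNav(props) {
  // The same component renders on either side, depending on the props it receives.
  return <nav className={"side-nav side-nav--" + props.position}>{props.title}</nav>;
}

function Layout() {
  return (
    <div>
      <SideNav position="left" title="Main menu" />
      <SideNav position="right" title="Related links" />
    </div>
  );
}

export default Layout;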
How does a component translate into code in our application? Component code is placed in a JavaScript file. For example, the App component is placed in App.js. You can also give your component files the ".jsx" extension. There are two types of components.
Component Types
In React we have two component types.
- Stateless Functional Component — JavaScript Functions. It returns HTML which explains the UI.
- Stateful Class Component — Regular ES6 classes that extend the Component class in the react library. They must contain a render method that returns HTML.
As I mentioned earlier you can find AppComponent in the app.js file. Here you can see AppComponent is a Class component. Class named as “App” and it extends Component class from the React library. It contains a render method that returns some HTML. You can have multiple components in your react application. More complex applications have more components.
Functional Components
Functional components are just JavaScript functions. They can optionally receive the object of properties which is referred to as props and returns HTML which describes the UI.
The HTML is also known as JSX.
Let's create a functional component first. Go to your project and remove all HTML code except the outer div tag in the App.js file. Now let’s create our own component. I am going to output a message from this newly created component. The component is nothing but a JavaScript file. Create a new folder called “components”. Inside that folder create a file called Hello.js.
Within the file first import React. Next, create a new function and return HTML elements.
import React from "react";
function Hello() {
return <h1>
Hello Kasun Dissanayake
</h1>
}
You have created your first functional component. But here “Hello Kasun Dissanayake” is not going to be rendered in the browser because the Hello component is not connected with our application. We have to export our Hello function from Hello.js and import it into App.js and then include it in the App component.
export default Hello; //Hello.js
import Hello from "./components/Hello"; //App.js
To include the Hello component in the App component we simply specify the component in the custom HTML tags. There is no content between the tags.
function App() {
return (
<div className="App">
<Hello />
</div>
);
}
Now run the application and you should be able to see Hello Kasun Dissanayake in the browser.
Your first functional component is up and running.
Let’s rewrite this Hello function with Arrow functions syntax.
An arrow function expression is a compact alternative to a traditional function expression, but is limited and can’t be used in all situations.
There are differences between arrow functions and traditional functions, as well as some limitations:
- Arrow functions don’t have their own bindings to this or super, and should not be used as methods.
- Arrow functions don’t have access to the new.target keyword.
- Arrow functions aren’t suitable for the call, apply and bind methods, which generally rely on establishing a scope.
- Arrow functions cannot be used as constructors.
- Arrow functions cannot use yield within their body.
const Hello = () => <h1> Hello Kasun Dissanayake</h1>
Now the output still works as expected.
Class Components
Class components basically ES6 classes. Similar to functional components Class components also can optionally receive props as input and return HTML. Apart from the props Class components can also maintain a private internal state. It can maintain some information that is private to that component and use that information to describe the User Interface.
Let's create Hello Component using Class components. Create a new file called Welcome.js. When we create a class component we need to import two things.
- Import React
- Import “Component” Class from React
import React, {Component} from "react";
Now create the class Welcome. To make this class a React component there are 2 simple steps.
- Extends the component class from React.
- The class has to implement a render method that will return null or some HTML.
class Welcome extends Component {
render() {
return (
<div>
Welcome Kasun Dissanayake
</div>
);
}
}
Now you have created your own Class component. Export this component and import it into App.js to connect with our application.
export default Welcome; //Welcome.js
import Welcome from "./components/Welcome"; //App.js
//inside the App.js function, add the Welcome component
function App() {
return (
<div className="App">
<Hello />
<Welcome />
</div>
);
}
Run the application and you will be able to see the Welcome component in the browser.
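Class components can also hold private state, which functional components (as written in this tutorial) cannot. Here is a hedged sketch of a stateful class component (this Counter is an extra example, not part of the tutorial project):

import React, { Component } from "react";

class Counter extends Component {
  constructor(props) {
    super(props);
    this.state = { count: 0 }; // private internal state
  }

  render() {
    return (
      <div>
        <p>You clicked {this.state.count} times</p>
        <button onClick={() => this.setState({ count: this.state.count + 1 })}>
          Click me
        </button>
      </div>
    );
  }
}

export default Counter;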
Now let's find what is the advantage of one over the other and when exactly should we use which component type.
Functional Components vs Class components
Functional Components are,
- Simple functions accepting props and returning JSX that describes the UI.
- Use functional components as much as possible.
- Absence of “this” keyword.
- Solution without having a state — if you have a number of components, each having its own state, maintaining and debugging your application becomes difficult.
- Mainly responsible for the User Interface.
- Also known as Stateless/ Dumb/ Presentational components
Class Components are,
- More feature-rich components.
- Can also maintain their own private data called States.
- Can contain complicated UI logic.
- Provide lifecycle hooks.
- Also known as Stateful/ Smart/ Container components.
I hope you got a good understanding of the Folder Structure of React applications, What is a React component and How we can use them.
We will discuss Hooks and JSX in the next article.
Thank you!
|
https://kasunprageethdissanayake.medium.com/react-tutorial-a-step-by-step-guide-2-ba62d745ed89?source=user_profile---------4----------------------------
|
CC-MAIN-2022-40
|
refinedweb
| 1,465
| 60.11
|
Integrating OGC Web Mapping Services
Bing Maps is Microsoft’s entry into the mapping realm. Even though Bing Maps is relatively new (it became a public beta in July 2005) Microsoft itself is not new to the mapping business. Microsoft has been involved in mapping for more than 12 years and the experience from these years was a prerequisite to build the service.
Figure 1. Bing Maps provides major road networks worldwide and street-level coverage in 68 regions
Bing Maps not only provides a highly interactive user interface to the traditional form of roadmaps but it also grants developers access to ortho-images, oblique aerial images (so called birdseye imagery) as well as to 3D terrain and city models. Unlike most of the competitive products Bing Maps can do all of this on the Web and it can be automated through one API.
Figure 2. Birdseye images are currently available in the US, UK, FR, MC, IT, DE, NL, ES, CH, and NO
Figure 3. 3D city models are currently available in the US and UK
The latest beta release of Bing Maps contains standard features like geocoding, proximity search and routing, but it is the extensibility and the ability to integrate in different system landscapes that makes Bing Maps so valuable.
Figure 4. Ortho images are currently available in the US, UK, CA, JP, IT, FR, and AU
As part of the API there are already all the necessary methods to create layers and add individual points, polylines and polygons to a layer. There are also methods which allow you to add GeoRSS-feeds or Collections which you or someone else created in Bing Maps. These are excellent means to integrate communities and have near real-time data available. It is also very simple to create AJAX calls to other Web services like the MapPoint Web Service or just one which accesses your database to overlay specific information for this location. In addition, you can also overlay your own raster data on top of Bing Maps. This technique is covered in more detail later in this article.
You can further enhance the user experience by handling events that are fired whenever you zoom or pan the map, change the style or the mode of the map, press a key on your keyboard, or click a mouse button or move the mouse.
If you want to get started with Bing Maps you can do that today. Access to the API is not restricted and is currently free for non-commercial use. If you discover you have an appetite for Bing Maps, look at the Interactive SDK. The Interactive SDK is a great and simple way to build your first Bing Maps application in minutes. However, if you don’t find what you need, don’t give up and have a look at the SDK on the Microsoft Developer Network (MSDN).
Overlay your own Raster-Data on Top of Bing Maps
As mentioned previously there are various types of layers which you can add to Bing Maps. The document discusses the tile layer.
Figure 5. The Bing Maps Tile System
Whenever you load the Bing Maps AJAX Control 6.3 it retrieves tiles from the Microsoft data centers and stitches them together. But how does it know which tiles to retrieve for a specific location?
Joe Schwarz wrote an excellent article about the Bing Maps Tile System which is mentioned in . For now it is important to understand that the Bing Maps tiles are indexed with a quad-key and that every tile has a unique name which determines its position, its neighbors and the zoom-level. Each of these tiles has a size of 256 x 256 pixels.
The Bing Maps AJAX Control 6.3 API likes it simple. If you want to overlay your own tiles on Bing Maps, you point to a URL for a virtual directory which includes your own tiles. Bing Maps then looks for the tiles with the same name as those which are currently used in the Bing Maps and overlay them with the opacity value and at the z-index that you determine. A sample function is shown as follows.
function AddTileLayer(layer, maxlat, maxlon, minlat, minlon, url, minlvl, maxlvl, opac) {
   var bounds = [new VELatLongRectangle(new VELatLong(maxlat, minlon), new VELatLong(minlat, maxlon))];
   var tileSourceSpec = new VETileSourceSpecification(layer, url);
   tileSourceSpec.Bounds = bounds;
   tileSourceSpec.MinZoomLevel = minlvl;
   tileSourceSpec.MaxZoomLevel = maxlvl;
   tileSourceSpec.Opacity = opac;
   map.AddTileLayer(tileSourceSpec);
}
Listing 1. Adding a Tile Layer
In the url we pass the parameter %4.png. %4 passes the quad-key for the current tile (yes, it is inevitable to make a request for each tile which is on the current map view). bounds defines the bounding box in which you want to show the overlay. Together with the MinZoomLevel and MaxZoomLevel it can be used to optimize performance. Also note that even though Bing Maps has a maximum zoom-level of 19 (~30 cm/pixel), we can use higher zoom-levels for our own tile layers. A list of Bing Maps zoom-levels can be found in .
How do we get to the tiles in the first place?
A very nice tool to create your own tile layers for Bing Maps is MapCruncher. This tool was developed by Microsoft Research, is available for free download, and makes it very easy to create tiles.
Figure 6. MapCruncher
You can manually geo-reference any kind of image or PDF document by moving a point in the image below the crosshair in the left window and the corresponding Bing Maps location below the crosshair in the right window. Once you lock the view the Mercator projection is applied to the image. You can then define up to which zoom-level you want to make this image available and have MapCruncher cut it down into pyramid layers and tiles for each layer. MapCruncher will take care for naming the tiles as required by the Bing Maps quad-key structure. Once this is done, configure a virtual directory on your web server and point it to the folder with the rendered tiles.
MapCruncher even allows you to render images and PDFs from the command line so that you could easily set up jobs and create new tile sources whenever you have a new image, for example, for a weather-radar that is updated every 15 minutes. Some examples for the usage of MapCruncher are as follows.
Figure 7. Using MapCruncher to create a floor plan
Figure 8. Using MapCruncher to create your own imagery
Figure 9. Using MapCruncher to create coverage maps
Figure 10. Using MapCruncher to create super-high resolution images
MapCruncher is a great tool, but it only creates a static result: you always have to pre-render the tiles. Conversely there are an increasing number of Web Mapping Services (WMS). This is a standard which has been defined by the Open Geospatial Consortium (OGC) that enables sharing geospatial data between different entities. The result is returned as an image. Although Bing Maps cannot be used as a WMS server without violating the terms of use, it can be used as a WMS client. Using Bing Maps as a WMS client is the focus of the remainder of this article.
Accessing a Web Mapping Service
The first step in accessing a Web mapping service is to find out what capabilities the service offers. This is accomplished by a
GetCapabilities call, as follows:
Listing 2. WMS GetCapabilities request
The URL contains the WMS server (server), parameters for the type of service, the version and the request-type. In this case the request is of type GetCapabilities. The WMS returns an XML-document in the HTTP-response. An example for such an XML-document is shown in Result of a WMS GetCapabilities Request.
A WMS can provide one or more layers and information about the layers. The information we need includes:
- Name
- Spatial reference system (SRS)
- Bounding box
- Format of map
If we examine the following sample layer, we see the name is Geologische_Karte, the SRS is EPSG:4326, the four sides of the bounding box are 8.1048153789, 46.9109504263, 14.1624073557, and 50.6062640122, and the format of the map is image/gif.
Listing 3. WMS layer information from a GetCapabilities request
Bing Maps can only support EPSG:4326 (geographic 4326) SRS directly. The LatLonBoundingBox tells you for which areas this service is available. It is a good idea to call the VEMap.SetMapView method with these values.
Let's create a sample for this service.
Listing 4. A WMS GetMap request
The GetMap request uses information from the GetCapabilities request. The width and height of the image to retrieve is always 256 x 256 pixels as this is the size of a Bing Maps tile. Finally the request includes values for the bounding box. The next task is to use Bing Maps to determine the latitudes and longitudes of the upper left and lower right corners of each tile. The next section describes how to accomplish this task. Figure 11 displays the result of this GetMap request.
Figure 11. Result of the WMS GetMap request
Architecture
To summarize what has been discussed so far, to use a Bing Maps tile layer there is a HTTP-request made for each tile that is retrieved from the tile source, and the request uses a quad-key parameter. If MapCruncher was used to create a tile source, requests are directed to the virtual directory which contains the pre-rendered tiles with corresponding names.
However, retrieving tiles from a WMS requires a different approach. The HTTP-request is sent to a Web service. The Web service acts as a proxy to the WMS-server, retrieving the quad-key from the request-parameter and determining the latitudes and longitudes of the upper left and lower right corner of the tile. Since the Bing Maps Tile System is structured with quad-keys, if the latitude, longitude, and a zoom-level are known, the tile which includes this point can be computed. Similarly, given a tile, the latitude and longitude of the upper left and lower right corner can be computed. Once the latitudes and longitudes of the bounding box are computed, the WMS GetMap-request can be created and it returns the image in the HTTP-response to the Bing Maps AJAX Control 6.3. The Bing Maps AJAX Control 6.3 ensures that this process is repeated for each tile that is currently shown on the control.
Figure 12. How AddTileLayer is mapped to a WMS GetMap request
The process is simple, but keep in mind that the tiles are in a flat Mercator-projection while the coordinates are in a spherical WGS 84 system. The mathematics are more complex and are explained in .
Creating a Web Page with a Bing Maps AJAX Control 6.3
Creating a Web page with a Bing Maps AJAX Control 6.3 is straightforward. The <body> element of the page contains a <div> element that specifies the size of the map and a reference to an onload event handler function, GetMap, that is called when the Web page is loaded. This JavaScript function loads the map with an optionally defined center-point, an optional zoom-level and an optional MapStyle, which in this example is VEMapStyle.Hybrid (a road network overlaid on an aerial image).
The buttons below the map allow you to add the tile layer from the WMS by executing the JavaScript AddTileLayer function. The ID of the Bing Maps tile and the name of the WMS-layer are added to the URL which calls the Web service. The boundaries of the layer and the minimum and maximum zoom-level are also defined. The z-index as well as the opacity can also be defined, if desired. Additional buttons call functions to show or hide the tile layer and to delete it.
Note that the only difference between a pre-rendered tile source, such as those created using MapCruncher, and the WMS tile source is that the former uses a virtual directory and the latter a Web service.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns="" > <head> <title>OGC-Compliant WMS</title> <meta http- <script type="text/javascript" src=""></script> <script type="text/javascript"> var map = null; var Location = new VELatLong(48.13493370228959, 11.578216552734378); function GetMap() { map = new VEMap('myMap'); map.LoadMap(Location, 10, VEMapStyle.Hybrid, false); } function AddTileLayer() { var bounds = [new VELatLongRectangle( new VELatLong(50.50643983210549,9.173583984374993), new VELatLong(47.27177506640826,13.765869140624984))]; // WMSTileServer.ashx is the Web service var tileSourceSpec = new VETileSourceSpecification( 'WMSBavariaGeo', 'WMSTileServer.ashx?id=%4&layer=Geologische_Karte'); tileSourceSpec.Bounds = bounds; tileSourceSpec.MinZoomLevel = 8; tileSourceSpec.MaxZoomLevel = 12; tileSourceSpec.Opacity = 1; map.AddTileLayer(tileSourceSpec); } function HideTileLayer() { map.HideTileLayer('WMSBavariaGeo'); } function ShowTileLayer() { map.ShowTileLayer('WMSBavariaGeo'); } function DelTileLayer() { map.DeleteTileLayer('WMSBavariaGeo'); } </script> </head> <body onload="GetMap();"> <div id='myMap' style="position:relative; width:600px; height:400px;"></div> <input id="btnAdd" type="button" value="Add TileLayer" onclick="AddTileLayer()" style="width: 147px" /> <input id="btnHide" type="button" value="Hide TileLayer" onclick="HideTileLayer()" style="width: 147px" /> <input id="btnShow" type="button" value="Show TileLayer" onclick="ShowTileLayer()" style="width: 147px" /> <input id="btnDel" type="button" value="Delete TileLayer" onclick="DelTileLayer()" style="width: 147px" /> </body> </html>
Listing 5. A Web site with the Bing Maps AJAX Control 6.3 as a WMS client
Creating the Web Service
The Web service requires two namespace references for image-processing, as follows.
The ProcessRequest method fetches the URL-parameters for the quad-key and the WMS-layer, as follows:
The zoom-level relates directly to the length of the quad-key. If the quad-key is 120202113, the length is 9 digits and thus the zoom-level for this tile is 9.
The area covered by this tile can be determined by calculating the binary value of the quad-key. Given the quad-key 120202113, the binary representation (two bits per digit) is 01 10 00 10 00 10 01 01 11.
The conversion code is very simple. It just takes each character in the quad-key and translates that into a two-character output code, as follows.
// Convert Quadkey to binary value
string myQuadKeyBin = "";
char[] myKeyCharArray = requestParam.ToCharArray();
for (int i = 0; i < zoomLevel; i++)
{
    switch (myKeyCharArray[i])
    {
        case '0':
            myQuadKeyBin += "00";
            break;
        case '1':
            myQuadKeyBin += "01";
            break;
        case '2':
            myQuadKeyBin += "10";
            break;
        default:
            myQuadKeyBin += "11";
            break;
    }
}
Now that the binary quad-key value is converted to binary, the next step is to determine the binary X and Y numbers for the tile. Note that these values are not the pixel X and Y of the tile yet but the X and Y number of the whole tile itself in the coordinate system.
Figure 13. Bing Maps coordinates at zoom level 3
Each tile is given an XY coordinates ranging from (0, 0) in the upper left to (2ZoomLevel - 1, 2ZoomLevel - 1) in the lower right. For example, at level 3 the tile coordinates range from (0, 0) to (7, 7) as shown in Figure 13. For further reference, see the article on the Bing Maps Tile System, listed in .
The first digit of the binary quad-key is the first digit of the binary TileY-coordinate, the second digit of the quad-key is the first digit of the binary TileX-coordinate and so on. Therefore, in the example the quad-key 120202113 resolves to the binary TileY-coordinate 010101001 and the binary TileX-coordinate 100000111.
The easiest way to accomplish this is to modify the previous for loop to create these values, as follows.
// Convert Quadkey to binary values
int zoomLevel = tileIdParam.Length;
string myQuadKeyBin = "";
string myXBinValues = "";
string myYBinValues = "";
char[] myKeyCharArray = tileIdParam.ToCharArray();
for (int i = 0; i < zoomLevel; i++)
{
    switch (myKeyCharArray[i])
    {
        case '0':
            myQuadKeyBin += "00"; myXBinValues += "0"; myYBinValues += "0";
            break;
        case '1':
            myQuadKeyBin += "01"; myYBinValues += "0"; myXBinValues += "1";
            break;
        case '2':
            myQuadKeyBin += "10"; myYBinValues += "1"; myXBinValues += "0";
            break;
        default:
            myQuadKeyBin += "11"; myYBinValues += "1"; myXBinValues += "1";
            break;
    }
}
The next step is to convert the binary TileX and TileY to a decimal TileX and TileY coordinate using the following formula.
Binary TileY: 010101001
Decimal TileY: 0∙2^(ZoomLevel−1) + 1∙2^(ZoomLevel−2) + ⋯ + 1∙2^0 = 169

Binary TileX: 100000111
Decimal TileX: 1∙2^(ZoomLevel−1) + 0∙2^(ZoomLevel−2) + ⋯ + 1∙2^0 = 263
Once again, the easiest way to accomplish this is to modify the previous for loop to create these values, as follows.
// Convert Quadkey to binary and decimal values
int zoomLevel = tileIdParam.Length;
string myQuadKeyBin = "";
string myYBinValues = "";
string myXBinValues = "";
int myXDecValue = 0;
int myYDecValue = 0;
char[] myKeyCharArray = tileIdParam.ToCharArray();
for (int i = 0; i < zoomLevel; i++)
{
    // Weight of the current quad-key digit in the decimal tile coordinate
    int tmpVal = (int)Math.Pow(2, zoomLevel - i - 1);
    switch (myKeyCharArray[i])
    {
        case '0':
            myQuadKeyBin += "00"; myYBinValues += "0"; myXBinValues += "0";
            break;
        case '1':
            myQuadKeyBin += "01"; myYBinValues += "0"; myXBinValues += "1";
            myXDecValue += tmpVal;
            break;
        case '2':
            myQuadKeyBin += "10"; myYBinValues += "1"; myXBinValues += "0";
            myYDecValue += tmpVal;
            break;
        default:
            myQuadKeyBin += "11"; myYBinValues += "1"; myXBinValues += "1";
            myXDecValue += tmpVal; myYDecValue += tmpVal;
            break;
    }
}
To determine the latitude and longitude of the upper left and lower right corner of this tile, PixelX and PixelY coordinates of the tile must first be calculated. Since each tile has a size of 256 x 256 pixels and we the decimal TileX and TileY coordinates are known, this is a simple calculation. For the upper left corner, the formula is as follows:
PixelYMin: TileY_dec ∙ 256 = 169 ∙ 256 = 43264
PixelXMin: TileX_dec ∙ 256 = 263 ∙ 256 = 67328
PixelYMax: (TileY_dec + 1) ∙ 256 − 1 = (170 ∙ 256) − 1 = 43519
PixelXMax: (TileX_dec + 1) ∙ 256 − 1 = (264 ∙ 256) − 1 = 67583
Here is the code to make these calculations.
The next step is to calculate the latitude and longitude of the upper left and the lower right corner. The mathematics is a bit complicated since the tile is a flat object but the latitude and longitude are in the WGS 84. The formula to calculate the longitude is as follows.
Longitude = ((PixelX * 360) / (256 * 2^ZoomLevel)) - 180

In the example, these become:

LongMin = ((67328 * 360) / (256 * 2^9)) - 180 ~= 4.921875
LongMax = ((67583 * 360) / (256 * 2^9)) - 180 ~= 5.622253

Before the latitude is calculated, here are some intermediate calculations to help refactor the code.

denom = 256 * 2^ZoomLevel
efactor = e^((0.5 - (PixelY / denom)) * 4 * pi)

The latitude is then calculated as the following.

Latitude = asin((efactor - 1) / (efactor + 1)) * (180 / pi)

The example becomes:

denom = 256 * 2^9 = 131072
efactor_min = e^((0.5 - (43264 / denom)) * 4 * pi) ~= 8.459595
efactor_max = e^((0.5 - (43519 / denom)) * 4 * pi) ~= 8.255283
Latitude_min = asin((efactor_min - 1) / (efactor_min + 1)) * (180 / pi) ~= 52.052490
Latitude_max = asin((efactor_max - 1) / (efactor_max + 1)) * (180 / pi) ~= 51.619721
The code to do all of this looks like this:
// Convert to latitude/longitude // longitude float LongMin = (float)(((PixelXMin * 360) / (256 * Math.Pow(2, zoomLevel))) - 180); float LongMax = (float)(((PixelXMax * 360) / (256 * Math.Pow(2, zoomLevel))) - 180); // Latitude float denom = (float)(256 * Math.Pow(2, zoomLevel)); float eMinNum = (float)(PixelYMin / denom); float eMinFactor = (float)(Math.Pow(Math.E, (0.5 - eMinNum) * 4 * Math.PI)); float minLat = (float)(Math.Asin((eMinFactor - 1) / (eMinFactor + 1)) * (180 / Math.PI)); float eMaxNum = (float)(PixelYMax / denom); float eMaxFactor = (float)(Math.Pow(Math.E, (0.5 - eMaxNum) * 4 * Math.PI)); float maxLat = (float)(Math.Asin((eMaxFactor - 1) / (eMaxFactor + 1)) * (180 / Math.PI));
All the information is now available to prepare and execute the request to the WMS-server. The following code builds the URL, executes the request, and returns the response to the Bing Maps AJAX Control 6.3.
// URL to WMS string myURL = "" + "&VERSION=1.1.1&SRS=EPSG:4326&FORMAT=image/png&LAYERS=" + layerParam + "&WIDTH=256&HEIGHT=256&BBOX=" + LongMin + "," + maxLat + "," + LongMax + "," + minLat; System.Net.WebRequest myRequest = System.Net.WebRequest.Create(myURL); System.Net.WebResponse myResponse = myRequest.GetResponse(); Bitmap myImage = new Bitmap(Image.FromStream(myResponse.GetResponseStream())); WritePngToStream(myImage, Context.Response.OutputStream);
The WritePngToStream method looks like the following.
If you have been paying attention, you probably have noticed that we do not use the variables myQuadKeyBin, myYBinValues, and myXBinValues after the switch statement. Therefore, the for loop should be modified as follows.
// Convert Quadkey to decimal tile coordinates
int zoomLevel = tileIdParam.Length;
int myXDecValue = 0;
int myYDecValue = 0;
char[] myKeyCharArray = tileIdParam.ToCharArray();
for (int i = 0; i < zoomLevel; i++)
{
    int tmpVal = (int)Math.Pow(2, zoomLevel - i - 1);
    switch (myKeyCharArray[i])
    {
        case '0':
            break;
        case '1':
            myXDecValue += tmpVal;
            break;
        case '2':
            myYDecValue += tmpVal;
            break;
        default:
            myXDecValue += tmpVal;
            myYDecValue += tmpVal;
            break;
    }
}
Here are a few examples of the resulting tile layers.
Figure 14. WMS tile layer on top of Bing Maps
Figure 15. Ortho-Images as WMS tile layer on top of Bing Maps
Figure 16. Development plan as WMS tile layer on top of Bing Maps
Figure 17. Maps as WMS tile layer on top of Bing Maps
Performance Considerations
Performance depends largely on the performance of the WMS server. The approach described in this article executes a WMS GetMap request for each Bing Maps tile that is shown in the Bing Maps AJAX Control 6.3. This results in number of roundtrips to the WMS-server. It may be worth considering a solution where only one image, which is as big as all tiles that are shown in the Bing Maps AJAX Control 6.3, is retrieved from the WMS server. This image would need to be separated into a number of tiles that matches the Bing Maps tiles in the Bing Maps AJAX Control 6.3. For further reference on this approach, see the paper Creating Bing Maps Tile-Layers on the Fly.
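As a hedged sketch of that alternative (the method name and surrounding plumbing are illustrative, not part of the sample service above), the proxy would request one large image covering the bounding box of all visible tiles and crop it into 256 x 256 pieces itself:

// bigImage covers the bounding box of all visible tiles; its width and
// height are assumed to be multiples of 256.
List<Bitmap> CropIntoTiles(Bitmap bigImage)
{
    List<Bitmap> tiles = new List<Bitmap>();
    for (int y = 0; y < bigImage.Height; y += 256)
    {
        for (int x = 0; x < bigImage.Width; x += 256)
        {
            Rectangle tileRect = new Rectangle(x, y, 256, 256);
            tiles.Add(bigImage.Clone(tileRect, bigImage.PixelFormat));
        }
    }
    return tiles;
}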
About the Author
Johannes Kebeck is a Microsoft Bing Maps technology specialist based in the UK.
Resources
- Bing Maps powered by Bing Maps,. There are some market-specific features. To discover the full power of Bing Maps go to
- Streetside Technology Preview,
- Interactive SDK,
- Reference SDK,
- Bing Maps Tile System,
- Bing Maps on MSDN
- Forum,
- Via Bing Maps,
- Coverage with Roadmaps,
- MapCruncher,
Zoom Levels
The Mathematical Details
This is not a true scale, but more of a resolution for imagery and graphics. So, in order to calculate the scale it is a bit more complex. The scale is actually based on the settings of the user's monitor and calculated as such. The resolution varies with latitude; following the Bing Maps Tile System article referenced above, the ground resolution in meters per pixel is:

ground resolution = cos(latitude * pi / 180) * 2 * pi * 6378137 / (256 * 2^ZoomLevel)

To convert the map resolution into scale, you need to know (or assume) the screen resolution. Then the formula becomes:

map scale = 1 : (ground resolution * screen dpi / 0.0254)

For example, assuming a screen resolution of 100 pixels/inch, the map scale at level 10 and latitude 40 degrees works out to roughly 117 meters/pixel and approximately 1 : 461,000.
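A small C# sketch of these two formulas (a standalone helper, not part of the Web service above):

double GroundResolution(double latitude, int zoomLevel)
{
    // Meters per pixel at the given latitude and zoom level.
    return Math.Cos(latitude * Math.PI / 180) * 2 * Math.PI * 6378137 /
           (256 * Math.Pow(2, zoomLevel));
}

double MapScaleDenominator(double latitude, int zoomLevel, int screenDpi)
{
    // N in the map scale 1 : N for the given screen resolution.
    return GroundResolution(latitude, zoomLevel) * screenDpi / 0.0254;
}

// MapScaleDenominator(40, 10, 100) is roughly 461,000.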
Result of a WMS GetCapabilities Request
Given the following request:
The response should look something like the following:
<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE WMT_MS_Capabilities SYSTEM ""> <WMT_MS_Capabilities version="1.1.1"> <Service> <Name>OGC:WMS</Name> <Title>Web Map Service gk500_wms</Title> <Abstract>gk500_wms Web Map Service des Bayerischen Landesamt fuer Umwelt (LfU)</Abstract> <KeywordList> <Keyword>Bayerisches Landesamt fuer Umwelt (LfU)</Keyword> <Keyword>Geologie</Keyword> <Keyword>Boden</Keyword> <Keyword>BIS</Keyword> </KeywordList> <OnlineResource xmlns:xlink= xlink:href=? xlink: <ContactInformation> <ContactPersonPrimary> <ContactPerson/> <ContactOrganization>Bayerisches Landesamt fuer Umwelt (LfU)</ContactOrganization> </ContactPersonPrimary> <ContactPosition/> <ContactAddress> <AddressType>postal</AddressType> <Address>Lazarettstr. 67</Address> <City>Muenchen</City> <StateOrProvince>Bayern</StateOrProvince> <PostCode>80636</PostCode> <Country>Deutschland</Country> </ContactAddress> <ContactVoiceTelephone/> <ContactFacsimileTelephone/> <ContactElectronicMailAddress> bisgfa@lfu.bayern.de </ContactElectronicMailAddress> </ContactInformation> <Fees>none</Fees> <AccessConstraints>none</AccessConstraints> </Service> <Capability> <Request> <GetCapabilities> <Format>application/vnd.ogc.wms_xml</Format> <DCPType> <HTTP> <Get> <OnlineResource xmlns: </Get> </HTTP> </DCPType> </GetCapabilities> <GetMap> <Format>image/png</Format> <Format>image/jpeg</Format> <Format>image/gif</Format> <DCPType> <HTTP> <Get> <OnlineResource xmlns:xlink= xlink:href=? xlink: </Get> </HTTP> </DCPType> </GetMap> <GetFeatureInfo> <Format>text/html</Format> <Format>application/vnd.ogc.wms_xml</Format> <Format>text/xml</Format> <Format>text/plain</Format> <DCPType> <HTTP> <Get> <OnlineResource xmlns:xlink= xlink:href=? xlink: </Get> </HTTP> </DCPType> </GetFeatureInfo> </Request> <Exception> <Format>application/vnd.ogc.se_xml</Format> <Format>application/vnd.ogc.se_inimage</Format> <Format>application/vnd.ogc.se_blank</Format> </Exception> <Layer queryable="0" opaque="0" noSubsets="0"> <Title>gk500_wms</Title> <SRS>EPSG:4326</SRS> <SRS>EPSG:31468</SRS> <SRS>EPSG:31467</SRS> <SRS>EPSG:31469</SRS> <LatLonBoundingBox minx="8.1048153789" miny="46.9109504263" maxx="14.1624073557" maxy="50.6062640122"/> <BoundingBox SRS="EPSG:31467" maxx="3881348.6569" maxy="6141610.1915" minx="3255546.0547" miny="5183321.9371"/> <BoundingBox SRS="EPSG:31468" minx="4224070.5" miny="5203193" maxx="4653051.37947922" maxy="5609942.05465011"/> <BoundingBox SRS="EPSG:31469" minx="5068763.048817" miny="5414823.816382" maxx="5247928.922649" maxy="5614593.749362"/> > <Layer queryable="1"> <Name>Bayerngrenze</Name> <Title>Bayerngrenze</Title> <SRS>EPSG:4326</SRS> <LatLonBoundingBox minx="8.9448687314" miny="47.2493919475" maxx="13.9102580841" maxy="50.5652993873"/> </Layer> </Layer> </Capability> </WMT_MS_Capabilities>
|
https://msdn.microsoft.com/en-us/library/cc161076.aspx
|
CC-MAIN-2017-39
|
refinedweb
| 3,955
| 55.03
|
Difference between revisions of "HStringTemplate"
Revision as of 22:43, 6 July 2009
HStringTemplate is a Haskell-ish port of the Java StringTemplate library by Terrence Parr, ported by Sterling Clover. This page is a work in progress, and aims to supplement the API docs with tutorial-style documentation and template syntax documentation.
Contents
- 1 Getting started
- 2 Expression syntax
- 3 Custom formatting
- 4 Template groups
- 5 HTML templates.
There are shortcuts for long attribute chains, such as setManyAttributes and renderf.
Expression syntax
This section follows the original StringTemplate documentation for structure, adapting as appropriate. Syntax not mentioned below should be assumed to be implemented as per the Java version of StringTemplate. (Please add notes if you find anything missing or different).
If you are using a StringTemplate ByteString you can use setNativeAttribute with ByteString objects to avoid the round trip to Strings. This also avoids the encoding function that has been set on the template.
With this change, using ", " as the separator, a value
Person { name = "Joe", age = 23 } will render as:
- age: [1], name: [Joe]
Person { name = "Joe Bloggs", age = 23 } and "names" set to a tuple containing the same name parsed into a first-name, last-name pair
("Joe", "Bloggs"), this would render as:
- Hello Joe,
- Your full name is Joe Bloggs and you are 23 years old.

For escaping HTML, an encoder function like the one below will work.
replace :: Eq a => [a] -> [a] -> [a] -> [a]
replace _ _ [] = []
replace find repl s =
    if take (length find) s == find
        then repl ++ (replace find repl (drop (length find) s))
        else [head s] ++ (replace find repl (tail s))

escapeHtmlString s =
    replace "<" "&lt;" $ replace ">" "&gt;" $ replace "\"" "&quot;" $ replace "&" "&amp;" s
It should be set using the function setEncoder or setEncoderGroup.
In some cases, you will have some data that is already escaped and you don't want it escaped. Since the encoder function has type 'String -> String', it is not used for data that is stored as ByteStrings in the SElem structure. This means that the encoder function can be avoided by setting template attributes using ByteStrings, and by appropriately written SElem instances.
The following SElem instance can be used if you are using Text.XHtml and want to set attributes of type Html directly:
import Data.ByteString.Lazy.Char8 (pack)
import Text.XHtml
import Text.StringTemplate
import Text.StringTemplate.Classes (SElem(..))

instance ToSElem Html where
    toSElem x = BS (pack $ showHtmlFragment x)
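A hedged usage sketch (assuming the usual HStringTemplate entry points newSTMP, setAttribute and render; check them against the version of the API you have):

htmlFragment :: String
htmlFragment =
    let tmpl   = newSTMP "<div>$content$</div>" :: StringTemplate String
        filled = setAttribute "content" (bold << "already escaped") tmpl
    in render filled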
|
https://wiki.haskell.org/index.php?title=HStringTemplate&diff=28873
|
CC-MAIN-2020-34
|
refinedweb
| 379
| 61.36
|
A functional queue is a data structure with three operations:
Unlike a mutable queue, a functional queue does not change its contents when an element is appended. Instead, a new queue is returned that contains the element. The goal of this chapter will be to create a class, which we'll name Queue, that works like this:
scala> val q = Queue(1, 2, 3)
q: Queue[Int] = Queue(1, 2, 3)

scala> val q1 = q append 4
q1: Queue[Int] = Queue(1, 2, 3, 4)

scala> q
res0: Queue[Int] = Queue(1, 2, 3)

If Queue were a mutable implementation, the append operation in the second input line above would affect the contents of q; in fact both the result, q1, and the original queue, q, would contain the sequence 1, 2, 3, 4 after the operation. But for a functional queue, the appended value shows up only in the result, q1, not in the queue, q, being operated on.
Purely functional queues also have some similarity with lists. Both are so called fully persistent data structures, where old versions remain available even after extensions or modifications. Both support head and tail operations. But where a list is usually extended at the front, using a :: operation, a queue is extended at the end, using append.
How can this be implemented efficiently? Ideally, a functional (immutable) queue should not have a fundamentally higher overhead than an imperative (mutable) one. That is, all three operations head, tail, and append should operate in constant time.
One simple approach to implement a functional queue would be to use a list as representation type. Then head and tail would just translate into the same operations on the list, whereas append would be concatenation. This would give the following implementation:
class SlowAppendQueue[T](elems: List[T]) { // Not efficient
  def head = elems.head
  def tail = new SlowAppendQueue(elems.tail)
  def append(x: T) = new SlowAppendQueue(elems ::: List(x))
}

The problem with this implementation is in the append operation. It takes time proportional to the number of elements stored in the queue. If you want constant time append, you could also try to reverse the order of the elements in the representation list, so that the last element that's appended comes first in the list. This would lead to the following implementation:
class SlowHeadQueue[T](smele: List[T]) { // Not efficient
  // smele is elems reversed
  def head = smele.last
  def tail = new SlowHeadQueue(smele.init)
  def append(x: T) = new SlowHeadQueue(x :: smele)
}

Now append is constant time, but head and tail are not. They now take time proportional to the number of elements stored in the queue.
Looking at these two examples, it does not seem easy to come up with an implementation that's constant time for all three operations. In fact, it looks doubtful that this is even possible! However, by combining the two operations you can get very close. The idea is to represent a queue by two lists, called leading and trailing. The leading list contains elements towards the front, whereas the trailing list contains elements towards the back of the queue in reversed order. The contents of the whole queue are at each instant equal to "leading ::: trailing.reverse".
Now, to append an element, you just cons it to the trailing list using the :: operator, so append is constant time. This means that, when an initially empty queue is constructed from successive append operations, the trailing list will grow whereas the leading list will stay empty. Then, before the first head or tail operation is performed on an empty leading list, the whole trailing list is copied to leading, reversing the order of the elements. This is done in an operation called mirror. Listing 19.1 shows an implementation of queues that uses this approach.
class Queue[T](
  private val leading: List[T],
  private val trailing: List[T]
) {

  private def mirror =
    if (leading.isEmpty)
      new Queue(trailing.reverse, Nil)
    else
      this

  def head = mirror.leading.head

  def tail = {
    val q = mirror
    new Queue(q.leading.tail, q.trailing)
  }

  def append(x: T) = new Queue(leading, x :: trailing)
}
What is the complexity of this implementation of queues? The mirror operation might take time proportional to the number of queue elements, but only if list leading is empty. It returns directly if leading is non-empty. Because head and tail call mirror, their complexity might be linear in the size of the queue, too. However, the longer the queue gets, the less often mirror is called. Indeed, assume a queue of length n with an empty leading list. Then mirror has to reverse-copy a list of length n. However, the next time mirror will have to do any work is once the leading list is empty again, which will be the case after n tail operations. This means one can "charge" each of these n tail operations with one n'th of the complexity of mirror, which means a constant amount of work. Assuming that head, tail, and append operations appear with about the same frequency, the amortized complexity is hence constant for each operation. So functional queues are asymptotically just as efficient as mutable ones.
Now, there are some caveats that need to be attached to this argument. First, the discussion only was about asymptotic behavior, the constant factors might well be somewhat different. Second, the argument rested on the fact that head, tail and append are called with about the same frequency. If head is called much more often than the other two operations, the argument is not valid, as each call to head might involve a costly re-organization of the list with mirror. The second caveat can be avoided; it is possible to design functional queues so that in a sequence of successive head operations only the first one might require a re-organization. You will find out at the end of this chapter how this is done.
The implementation of Queue shown in Listing 19.1 is now quite good with regards to efficiency. You might object, though, that this efficiency is paid for by exposing a needlessly detailed implementation. The Queue constructor, which is globally accessible, takes two lists as parameters, where one is reversed—hardly an intuitive representation of a queue. What's needed is a way to hide this constructor from client code. In this section, we'll show you some ways to accomplish this in Scala.
In Java, you can hide a constructor by making it private. In Scala, the primary constructor does not have an explicit definition; it is defined implicitly by the class parameters and body. Nevertheless, it is still possible to hide the primary constructor by adding a private modifier in front of the class parameter list, as shown in Listing 19.2:
class Queue[T] private ( private val leading: List[T], private val trailing: List[T] )
The private modifier between the class name and its parameters indicates that the constructor of Queue is private: it can be accessed only from within the class itself and from its companion object. The class name Queue is still public, so clients can use it as a type, but they can no longer invoke its constructor directly. One way to let clients create queues anyway is to add an auxiliary constructor that takes the initial queue elements as a repeated parameter, as sketched below. Recall that T* is the notation for repeated parameters, as described in Section 8.8.
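A minimal sketch of that auxiliary-constructor idea (a sketch only; the chapter's own listing is not reproduced here):

class Queue[T] private (
  private val leading: List[T],
  private val trailing: List[T]
) {
  // public auxiliary constructor: builds a queue from any number of
  // initial elements, forwarding them to the hidden primary constructor
  def this(elems: T*) = this(elems.toList, Nil)
}

val q = new Queue(1, 2, 3)   // legal: goes through the auxiliary constructor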
Another possibility is to add a factory method that builds a queue from such a sequence of initial elements. A neat way to do this is to define an object Queue that has the same name as the class being defined and contains an apply method, as shown in Listing 19.3:
object Queue { // constructs a queue with initial elements `xs' def apply[T](xs: T*) = new Queue[T](xs.toList, Nil) }
By placing this object in the same source file as class Queue, you make the object a companion object of the class. You saw in Section 13.4 that a companion object has the same access rights as its class. Because of this, the apply method in object Queue can create a new Queue object, even though the constructor of class Queue is private.
Note that, because the factory method is called apply, clients can create queues with an expression such as Queue(1, 2, 3). This expression expands to Queue.apply(1, 2, 3) since Queue is an object instead of a function. As a result, Queue looks to clients as if it was a globally defined factory method. In reality, Scala has no globally visible methods; every method must be contained in an object or a class. However, using methods named apply inside global objects, you can support usage patterns that look like invocations of global methods.
trait Queue[T] { def head: T def tail: Queue[T] def append(x: T): Queue[T] }
object Queue {
def apply[T](xs: T*): Queue[T] = new QueueImpl[T](xs.toList, Nil)
private class QueueImpl[T]( private val leading: List[T], private val trailing: List[T] ) extends Queue[T] {
def mirror = if (leading.isEmpty) new QueueImpl(trailing.reverse, Nil) else this
def head: T = mirror.leading.head
def tail: QueueImpl[T] = { val q = mirror new QueueImpl(q.leading.tail, q.trailing) }
def append(x: T) = new QueueImpl(leading, x :: trailing) } }
Private constructors and private members are one way to hide the initialization and representation of a class. Another, more radical way is to hide the class itself and only export a trait that reveals the public interface of the class. The code in Listing 19.4 implements this design. There's a trait Queue, which declares the methods head, tail, and append. All three methods are implemented in a subclass QueueImpl, which is itself a private inner class of object Queue. This exposes to clients the same information as before, but using a different technique. Instead of hiding individual constructors and methods, this version hides the whole implementation class.
Queue, as defined in Listing 19.4, is a trait, but not a type. Queue is not a type because it takes a type parameter. As a result, you cannot create variables of type Queue:
scala> def doesNotCompile(q: Queue) {} <console>:5: error: trait Queue takes type parameters def doesNotCompile(q: Queue) {} ^
Instead, trait Queue enables you to specify parameterized types, such as Queue[String], Queue[Int], or Queue[AnyRef]:
scala> def doesCompile(q: Queue[AnyRef]) {} doesCompile: (Queue[AnyRef])Unit
Thus, Queue is a trait, and Queue[String] is a type. Queue is also called a type constructor, because with it you can construct a type by specifying a type parameter. (This is analogous to constructing an object instance with a plain-old constructor by specifying a value parameter.) The type constructor Queue "generates" a family of types, which includes Queue[Int], Queue[String], and Queue[AnyRef].
You can also say that Queue is a generic trait. (Classes and traits that take type parameters are "generic," but the types they generate are "parameterized," not generic.) The term "generic" means that you are defining many specific types with one generically written class or trait. For example, trait Queue in Listing 19.4 defines a generic queue. Queue[Int] and Queue[String], etc., would be the specific queues.
The combination of type parameters and subtyping poses some interesting questions. For example, are there any special subtyping relationships between members of the family of types generated by Queue[T]? More specifically, should a Queue[String] be considered a subtype of Queue[AnyRef]? Or more generally, if S is a subtype of type T, then should Queue[S] be considered a subtype of Queue[T]? If so, you could say that trait Queue is covariant (or "flexible") in its type parameter T. Or, since it just has one type parameter, you could say simply that Queues are covariant. Covariant Queues would mean, for example, that you could pass a Queue[String] to the doesCompile method shown previously, which takes a value parameter of type Queue[AnyRef].
Intuitively, all this seems OK, since a queue of Strings looks like a special case of a queue of AnyRefs. In Scala, however, generic types have by default nonvariant (or, "rigid") subtyping. That is, with Queue defined as in Listing 19.4, queues with different element types would never be in a subtype relationship. A Queue[String] would not be usable as a Queue[AnyRef]. However, you can demand covariant (flexible) subtyping of queues by changing the first line of the definition of class Queue like this:
trait Queue[+T] { ... }

Prefixing a formal type parameter with a + indicates that subtyping is covariant (flexible) in that parameter. By adding this single character, you are telling Scala that you want Queue[String], for example, to be considered a subtype of Queue[AnyRef]. The compiler will check that Queue is defined in a way that this subtyping is sound.
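As a quick, hedged illustration of what the annotation buys you (using the standard library's List, which is covariant, rather than the chapter's Queue):

def firstAsAnyRef(xs: List[AnyRef]): AnyRef = xs.head

firstAsAnyRef(List("abc", "def"))   // accepted: List[String] conforms to List[AnyRef]

A covariant Queue[+T] would be usable in exactly the same way, for example when passing a Queue[String] to the doesCompile method shown earlier.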
Besides +, there is also a prefix -, which indicates contravariant subtyping. If Queue were defined like this:
trait Queue[-T] { ... }

then if T is a subtype of type S, this would imply that Queue[S] is a subtype of Queue[T] (which in the case of queues would be rather surprising!). Whether a type parameter is covariant, contravariant, or nonvariant is called the parameter's variance. The + and - symbols you can place next to type parameters are called variance annotations.
In a purely functional world, many types are naturally covariant (flexible). However, the situation changes once you introduce mutable data. To find out why, consider the simple type of one-element cells that can be read or written, shown in Listing 19.5.
class Cell[T](init: T) { private[this] var current = init def get = current def set(x: T) { current = x } }
The Cell type of Listing 19.5 is declared nonvariant (rigid). For the sake of argument, assume for a moment that Cell was declared covariant instead—i.e., it was declared class Cell[+T]—and that this passed the Scala compiler. (It doesn't, and we'll explain why shortly.) Then you could construct the following problematic statement sequence:
val c1 = new Cell[String]("abc")
val c2: Cell[Any] = c1
c2.set(1)
val s: String = c1.get

Seen by itself, each of these four lines looks OK. The first line creates a cell of strings and stores it in a val named c1. The second line defines a new val, c2, of type Cell[Any], which is initialized with c1. This is OK, since Cells are assumed to be covariant. The third line sets the value of cell c2 to 1. This is also OK, because the assigned value 1 is an instance of c2's element type Any. Finally, the last line assigns the element value of c1 into a string. Nothing strange here, as both sides are of the same type. But taken together, these four lines end up assigning the integer 1 to the string s. This is clearly a violation of type soundness.
Which operation is to blame for the runtime fault? It must be the second one, which uses covariant subtyping. The other statements are too simple and fundamental. Thus, a Cell of String is not also a Cell of Any, because there are things you can do with a Cell of Any that you cannot do with a Cell of String. You cannot use set with an Int argument on a Cell of String, for example.
In fact, were you to pass the covariant version of Cell to the Scala compiler, you would get a compile-time error:
Cell.scala:7: error: covariant type T occurs in contravariant position in type T of value x def set(x: T) = current = x ^
It's interesting to compare this behavior with arrays in Java. In principle, arrays are just like cells except that they can have more than one element. Nevertheless, arrays are treated as covariant in Java. You can try an example analogous to the cell interaction above with Java arrays:
// this is Java
String[] a1 = { "abc" };
Object[] a2 = a1;
a2[0] = new Integer(17);
String s = a1[0];

If you try out this example, you will find that it compiles, but executing the program will cause an ArrayStore exception to be thrown when a2[0] is assigned to an Integer:
Exception in thread "main" java.lang.ArrayStoreException: java.lang.Integer
        at JavaArrays.main(JavaArrays.java:8)

What happens here is that Java stores the element type of the array at runtime. Then, every time an array element is updated, the new element value is checked against the stored type. If it is not an instance of that type, an ArrayStore exception is thrown.
You might ask why Java adopted this design, which seems both unsafe and expensive. When asked this question, James Gosling, the principal inventor of the Java language, answered that they wanted to have a simple means to treat arrays generically. For instance, they wanted to be able to write a method to sort all elements of an array, using a signature like the following that takes an array of Object:
void sort(Object[] a, Comparator cmp) { ... }

Covariance of arrays was needed so that arrays of arbitrary reference types could be passed to this sort method. Of course, with the arrival of Java generics, such a sort method can now be written with a type parameter, so the covariance of arrays is no longer necessary. For compatibility reasons, though, it has persisted in Java to this day.
Scala tries to be purer than Java in not treating arrays as covariant. Here's what you get if you translate the first two lines of the array example to Scala:
scala> val a1 = Array("abc") a1: Array[java.lang.String] = Array(abc)
scala> val a2: Array[Any] = a1 <console>:5: error: type mismatch; found : Array[java.lang.String] required: Array[Any] val a2: Array[Any] = a1 ^
What happened here is that Scala treats arrays as nonvariant (rigid), so an Array[String] is not considered to conform to an Array[Any]. However, sometimes it is necessary to interact with legacy methods in Java that use an Object array as a means to emulate a generic array. For instance, you might want to call a sort method like the one described previously with an array of Strings as argument. To make this possible, Scala lets you cast an array of Ts to an array of any supertype of T:
scala> val a2: Array[Object] = a1.asInstanceOf[Array[Object]]
a2: Array[java.lang.Object] = Array(abc)

The cast is always legal at compile-time, and it will always succeed at run-time, because the JVM's underlying run-time model treats arrays as covariant, just as Java the language does. But you might get ArrayStore exceptions afterwards, again just as you would in Java.
Now that you have seen some examples where variance is unsound, you may be wondering which kind of class definitions need to be rejected and which can be accepted. So far, all violations of type soundness involved some reassignable field or array element. The purely functional implementation of queues, on the other hand, looks like a good candidate for covariance. However, the following example shows that you can "engineer" an unsound situation even if there is no reassignable field.
To set up the example, assume that queues as defined in Listing 19.4 are covariant. Then, create a subclass of queues that specializes the element type to Int and overrides the append method:
class StrangeIntQueue extends Queue[Int] {
  override def append(x: Int) = {
    println(Math.sqrt(x))
    super.append(x)
  }
}

The append method in StrangeIntQueue prints out the square root of its (integer) argument before doing the append proper. Now, you can write a counterexample in two lines:
val x: Queue[Any] = new StrangeIntQueue
x.append("abc")

The first of these two lines is valid, because StrangeIntQueue is a subclass of Queue[Int], and, assuming covariance of queues, Queue[Int] is a subtype of Queue[Any]. The second line is valid because you can append a String to a Queue[Any]. However, taken together these two lines have the effect of applying a square root method to a string, which makes no sense.
Clearly it's not just mutable fields that make covariant types unsound. The problem is more general. It turns out that as soon as a generic parameter type appears as the type of a method parameter, the containing class or trait may not be covariant in that type parameter. For queues, the append method violates this condition:
class Queue[+T] { def append(x: T) = ... }

Running a modified queue class like the one above through a Scala compiler would yield:
Queues.scala:11: error: covariant type T occurs in contravariant position in type T of value x def append(x: T) = ^
Reassignable fields are a special case of the rule that disallows type parameters annotated with + from being used as method parameter types. As mentioned in Section 18.2, a reassignable field, "var x: T", is treated in Scala as a getter method, "def x: T", and a setter method, "def x_=(y: T)". As you can see, the setter method has a parameter of the field's type T. So that type may not be covariant.
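The following sketch (with a made-up class name) writes out the getter and setter that the compiler would generate for a reassignable field, which shows exactly where the parameter type ends up. If T were annotated with +, the compiler would reject value_= because its parameter puts T in a negative position:

class Holder[T](initial: T) {
  private var underlying: T = initial
  def value: T = underlying              // getter generated for "var value: T"
  def value_=(y: T) { underlying = y }   // setter generated for "var value: T"
}

val h = new Holder(42)
h.value = 43   // assignment syntax expands to a call of the value_= setter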
In the rest of this section, we'll describe the mechanism by which the Scala compiler checks variance annotations. If you're not interested in such detail right now, you can safely skip to Section 19.5. The most important thing to understand is that the Scala compiler will check any variance annotations you place on type parameters. For example, if you try to declare a type parameter to be covariant (by adding a +), but that could lead to potential runtime errors, your program won't compile.
To verify correctness of variance annotations, the Scala compiler classifies all positions in a class or trait body as positive, negative, or neutral. A "position" is any location in the class (or trait, but from now on we'll just write "class") body where a type parameter may be used. Every method value parameter is a position, for example, because a method value parameter has a type, and therefore a type parameter could appear in that position. The compiler checks each use of each of the class's type parameters. Type parameters annotated with + may only be used in positive positions, while type parameters annotated with - may only be used in negative positions. A type parameter with no variance annotation may be used in any position, and is, therefore, the only kind of type parameter that can be used in neutral positions of the class body.
To classify the positions, the compiler starts from the declaration of a type parameter and then moves inward through deeper nesting levels. Positions at the top level of the declaring class are classified as positive. By default, positions at deeper nesting levels are classified the same as that at enclosing levels, but there are a handful of exceptions where the classification changes. Method value parameter positions are classified to the flipped classification relative to positions outside the method, where the flip of a positive classification is negative, the flip of a negative classification is positive, and the flip of a neutral classification is still neutral.
Besides method value parameter positions, the current classification is also flipped at the type parameters of methods. A classification is sometimes flipped at the type argument position of a type, such as the Arg in C[Arg], depending on the variance of the corresponding type parameter. If C's type parameter is annotated with a + then the classification stays the same. If C's type parameter is annotated with a -, then the current classification is flipped. If C's type parameter has no variance annotation then the current classification is changed to neutral.
As a somewhat contrived example, consider the following class definition, where the variance of several positions is annotated with ^+ (for positive) or ^- (for negative):
abstract class Cat[-T, +U] { def meow[W^-](volume: T^-, listener: Cat[U^+, T^-]^-) : Cat[Cat[U^+, T^-]^-, U^+]^+ }
The positions of the type parameter, W, and the two value parameters, volume and listener, are all negative. Looking at the result type of meow, the position of the first Cat[U, T] argument is negative, because Cat's first type parameter, T, is annotated with a -. The type U inside this argument is again in positive position (two flips), whereas the type T inside that argument is still in negative position.
You see from this discussion that it's quite hard to keep track of variance positions. That's why it's a welcome relief that the Scala compiler does this job for you.
Once the variances are computed, the compiler checks that each type parameter is only used in positions that are classified appropriately. In this case, T is only used in negative positions, and U is only used in positive positions. So class Cat is type correct.
Back to the Queue class. You saw that the previous definition of Queue[T] shown in Listing 19.4 cannot be made covariant in T because T appears as a type of a parameter of the append method, and that's a negative position.
Fortunately, there's a way to get unstuck: you can generalize the append method by making it polymorphic (i.e., giving the append method itself a type parameter) and using a lower bound for its type parameter. Listing 19.6 shows a new formulation of Queue that implements this idea.
class Queue[+T] (private val leading: List[T], private val trailing: List[T] ) { def append[U >: T](x: U) = new Queue[U](leading, x :: trailing) // ... }
The new definition gives append a type parameter U, and with the syntax, "U >: T", defines T as the lower bound for U. As a result, U is required to be a supertype of T.[1] The parameter to append is now of type U instead of type T, and the method's result type is now Queue[U] instead of Queue[T].
This revised definition of append is type correct. Intuitively, if T is a more specific type than expected (for example, Apple instead of Fruit), a call to append will still work, because U (Fruit) will still be a supertype of T (Apple).[2]
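To make the intuition concrete, here is a hedged example that assumes a small Fruit hierarchy (these classes are not defined in the chapter) and the covariant Queue of Listing 19.6:

class Fruit
class Apple extends Fruit
class Orange extends Fruit

val apples = new Queue[Apple](List(new Apple), Nil)
val fruit: Queue[Fruit] = apples.append(new Orange)   // U is inferred as Fruit

Appending an Orange to a Queue[Apple] is allowed because the type parameter U only needs to be a supertype of Apple, and the result is widened to Queue[Fruit].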
The new definition of append is arguably better than the old one, because it is more general. Unlike the old version, the new definition allows you to append an arbitrary supertype U of the queue element type T. The result is then a Queue[U]. Together with queue covariance, this gives the right kind of flexibility for modeling queues of different element types in a natural way.
This shows that variance annotations and lower bounds play well together. They are a good example of type-driven design, where the types of an interface guide its detailed design and implementation. In the case of queues, you would probably not have thought of the refined implementation of append with a lower bound, but you might have decided to make queues covariant. In that case, the compiler would have pointed out the variance error for append. Correcting the variance error by adding a lower bound makes append more general and queues as a whole more usable.
This observation is also the main reason that Scala prefers declaration-site variance over use-site variance as it is found in Java's wildcards. With use-site variance, you are on your own designing a class. It will be the clients of the class that need to put in the wildcards, and if they get it wrong, some important instance methods will no longer be applicable. Variance being a tricky business, users usually get it wrong, and they come away thinking that wildcards and generics are overly complicated. With definition-site variance, you express your intent to the compiler, and the compiler will double check that the methods you want available will indeed be available.
So far in this chapter, all examples you've seen were either covariant or nonvariant. But there are also cases where contravariance is natural. For instance, consider the trait of output channels shown in Listing 19.7:
trait OutputChannel[-T] { def write(x: T) }
Here, OutputChannel is defined to be contravariant in T. So an output channel of AnyRefs, say, is a subtype of an output channel of Strings. Although it may seem non-intuitive, it actually makes sense. To see why, consider what you can do with an OutputChannel[String]. The only supported operation is writing a String to it. The same operation can also be done on an OutputChannel[AnyRef]. So it is safe to substitute an OutputChannel[AnyRef] for an OutputChannel[String]. By contrast, it would not be safe to substitute an OutputChannel[String] where an OutputChannel[AnyRef] is required. After all, you can send any object to an OutputChannel[AnyRef], whereas an OutputChannel[String] requires that the written values are all strings.
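A small sketch of that substitution argument, with a made-up channel instance and method (only the OutputChannel trait above is from the chapter):

val anyChannel: OutputChannel[AnyRef] = new OutputChannel[AnyRef] {
  def write(x: AnyRef) { println(x) }
}

def sendGreeting(out: OutputChannel[String]) { out.write("hello") }

sendGreeting(anyChannel)   // accepted: OutputChannel[AnyRef] conforms to OutputChannel[String]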
This reasoning points to a general principle in type system design: it is safe to assume that a type T is a subtype of a type U if you can substitute a value of type T wherever a value of type U is required. This is called the Liskov Substitution Principle. The principle holds if T supports the same operations as U and all of T's operations require less and provide more than the corresponding operations in U. In the case of output channels, an OutputChannel[AnyRef] can be a subtype of an OutputChannel[String] because the two support the same write operation, and this operation requires less in OutputChannel[AnyRef] than in OutputChannel[String]. "Less" means the argument is only required to be an AnyRef in the first case, whereas it is required to be a String in the second case.
Sometimes covariance and contravariance are mixed in the same type. A prominent example is Scala's function traits. For instance, whenever you write the function type A => B, Scala expands this to Function1[A, B]. The definition of Function1 in the standard library uses both covariance and contravariance: the Function1 trait is contravariant in the function argument type S and covariant in the result type T, as shown in Listing 19.8. This satisfies the Liskov substitution principle, because arguments are something that's required, whereas results are something that's provided.
trait Function1[-S, +T] { def apply(x: S): T }
As an example, consider the application shown in Listing 19.9. In this example, class Publication contains one parametric field, title, of type String. Class Book extends Publication and forwards its string title parameter to the constructor of its superclass. The Library singleton object defines a set of books and a method printBookList, which takes a function, named info, of type Book => AnyRef. In other words, the type of the lone parameter to printBookList is a function that takes one Book argument and returns an AnyRef. The Customer application defines a method, getTitle, which takes a Publication as its lone parameter and returns a String, the title of the passed Publication.
class Publication(val title: String) class Book(title: String) extends Publication(title)
object Library { val books: Set[Book] = Set( new Book("Programming in Scala"), new Book("Walden") ) def printBookList(info: Book => AnyRef) { for (book <- books) println(info(book)) } }
object Customer extends Application { def getTitle(p: Publication): String = p.title Library.printBookList(getTitle) }
Now take a look at the last line in Customer. This line invokes Library's printBookList method and passes getTitle, wrapped in a function value:
Library.printBookList(getTitle)

This line of code type checks even though String, the function's result type, is a subtype of AnyRef, the result type of printBookList's info parameter. This code passes the compiler because function result types are declared to be covariant (the +T in Listing 19.8). If you look inside the body of printBookList, you can get a glimpse of why this makes sense.
The printBookList method iterates through its book list, and invokes the passed function on each book. It passes the AnyRef result returned by info to println, which invokes toString on it and prints the result. This activity will work with String as well as any other subclass of AnyRef, which is what covariance of function result types means.
Now consider the parameter type of the function being passed to the printBookList method. Although printBookList's parameter type is declared as Book, the getTitle we're passing in takes a Publication, a supertype of Book. The reason this works is that since printBookList's parameter type is Book, the body of the printBookList method will only be allowed to pass a Book into the function. And because getTitle's parameter type is Publication, the body of that function will only be able to access on its parameter, p, members that are declared in class Publication. Because any method declared in Publication is also available on its subclass Book, everything should work, which is what contravariance of function parameter types means. You can see all this graphically in Figure 19.1.
The code in Listing 19.9 compiles because Publication => String is a subtype of Book => AnyRef, as shown in the center of the Figure 19.1. Because the result type of a Function1 is defined as covariant, the inheritance relationship of the two result types, shown at the right of the diagram, is in the same direction as that of the two functions shown in the center. By contrast, because the parameter type of a Function1 is defined as contravariant, the inheritance relationship of the two parameter types, shown at the left of the diagram, is in the opposite direction as that of the two functions.
class Queue[+T] private ( private[this] var leading: List[T], private[this] var trailing: List[T] ) {
private def mirror() = if (leading.isEmpty) { while (!trailing.isEmpty) { leading = trailing.head :: leading trailing = trailing.tail } }
def head: T = { mirror() leading.head }
def tail: Queue[T] = { mirror() new Queue(leading.tail, trailing) }
def append[U >: T](x: U) = new Queue[U](leading, x :: trailing) }
The Queue class seen so far has a problem in that the mirror operation might repeatedly copy the trailing into the leading list if head is called several times in a row on a list where leading is empty. The wasteful copying could be avoided by adding some judicious side effects. Listing 19.10 presents a new implementation of Queue, which performs at most one trailing to leading adjustment for any sequence of head operations.
What's different with respect to the previous version is that now leading and trailing are reassignable variables, and mirror performs the reverse copy from trailing to leading as a side-effect on the current queue instead of returning a new queue. This side-effect is purely internal to the implementation of the Queue operation; since leading and trailing are private variables, the effect is not visible to clients of Queue. So by the terminology established in Chapter 18, the new version of Queue still defines purely functional objects, in spite of the fact that they now contain reassignable fields.
You might wonder whether this code passes the Scala type checker. After all, queues now contain two reassignable fields of the covariant parameter type T. Is this not a violation of the variance rules? It would be indeed, except for the detail that leading and trailing have a private[this] modifier and are thus declared to be object private.
As mentioned in Section 13.4, a member marked private[this] is object private: it can be accessed only from within the object in which it is defined. It turns out that accesses to variables from the same object in which they are defined do not cause problems with variance. To construct a case where variance leads to a type error, you need a reference to a containing object that has a statically weaker type than the type the object was defined with, and for accesses to object private values this is impossible.
Scala's variance checking rules contain a special case for object private definitions. Such definitions are omitted when it is checked that a type parameter with either a + or - annotation occurs only in positions that have the same variance classification. Therefore, the code in Listing 19.10 compiles without error. On the other hand, if you had left out the [this] qualifiers from the two private modifiers, you would see two type errors:
Queues.scala:1: error: covariant type T occurs in contravariant position in type List[T] of parameter of setter leading_= class Queue[+T] private (private var leading: List[T], ^ Queues.scala:1: error: covariant type T occurs in contravariant position in type List[T] of parameter of setter trailing_= private var trailing: List[T]) { ^
In Listing 16.1, we showed a merge sort function for lists that took a comparison function as a first argument and a list to sort as a second, curried argument. Another way you might want to organize such a sort function is by requiring the type of the list to mix in the Ordered trait. As mentioned in Section 12.4, by mixing Ordered into a class and implementing Ordered's one abstract method, compare, you enable clients to compare instances of that class with <, >, <=, and >=. For example, Listing 19.11 shows Ordered being mixed into a Person class. As a result, you can compare two persons like this:
scala> val robert = new Person("Robert", "Jones") robert: Person = Robert Jones
scala> val sally = new Person("Sally", "Smith") sally: Person = Sally Smith
scala> robert < sally res0: Boolean = true
class Person(val firstName: String, val lastName: String) extends Ordered[Person] {
def compare(that: Person) = { val lastNameComparison = lastName.compareToIgnoreCase(that.lastName) if (lastNameComparison != 0) lastNameComparison else firstName.compareToIgnoreCase(that.firstName) }
override def toString = firstName +" "+ lastName }
def orderedMergeSort[T <: Ordered[T]](xs: List[T]): List[T] = { def merge(xs: List[T], ys: List[T]): List[T] = (xs, ys) match { case (Nil, _) => ys case (_, Nil) => xs case (x :: xs1, y :: ys1) => if (x < y) x :: merge(xs1, ys) else y :: merge(xs, ys1) } val n = xs.length / 2 if (n == 0) xs else { val (ys, zs) = xs splitAt n merge(orderedMergeSort(ys), orderedMergeSort(zs)) } }
To require that the type of the list passed to your new sort function mixes in Ordered, you need to use an upper bound. An upper bound is specified similar to a lower bound, except instead of the >: symbol used for lower bounds, you use a <: symbol, as shown in Listing 19.12. With the "T <: Ordered[T]" syntax, you indicate that the type parameter, T, has an upper bound, Ordered[T]. This means that the element type of the list passed to orderedMergeSort must be a subtype of Ordered. Thus, you could pass a List[Person] to orderedMergeSort, because Person mixes in Ordered. For example, consider this list:
scala> val people = List(
         new Person("Larry", "Wall"),
         new Person("Anders", "Hejlsberg"),
         new Person("Guido", "van Rossum"),
         new Person("Alan", "Kay"),
         new Person("Yukihiro", "Matsumoto")
       )
people: List[Person] = List(Larry Wall, Anders Hejlsberg, Guido van Rossum, Alan Kay, Yukihiro Matsumoto)

Because the element type of this list, Person, mixes in (and is therefore a subtype of) Ordered[Person], you can pass the list to orderedMergeSort:
scala> val sortedPeople = orderedMergeSort(people) sortedPeople: List[Person] = List(Anders Hejlsberg, Alan Kay, Yukihiro Matsumoto, Guido van Rossum, Larry Wall)
Now, although the sort function shown in Listing 19.12 serves as a useful illustration of upper bounds, it isn't actually the most general way in Scala to design a sort function that takes advantage of the Ordered trait. For example, you couldn't use the orderedMergeSort function to sort a list of integers, because class Int is not a subtype of Ordered[Int]:
scala> val wontCompile = orderedMergeSort(List(3, 2, 1)) <console>:5: error: inferred type arguments [Int] do not conform to method orderedMergeSort's type parameter bounds [T <: Ordered[T]] val wontCompile = orderedMergeSort(List(3, 2, 1)) ^
In Section 21.6, we'll show you how to use implicit parameters and view bounds to achieve a more general solution.
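As a rough preview of that idea (a sketch under the assumption that the standard implicit conversion from Int to Ordered[Int] is in scope; the full treatment is in Section 21.6), a view bound written T <% Ordered[T] accepts any T that either is an Ordered[T] or can be implicitly converted to one:

def msort[T <% Ordered[T]](xs: List[T]): List[T] = {
  def merge(xs: List[T], ys: List[T]): List[T] = (xs, ys) match {
    case (Nil, _) => ys
    case (_, Nil) => xs
    case (x :: xs1, y :: ys1) =>
      if (x < y) x :: merge(xs1, ys) else y :: merge(xs, ys1)
  }
  val n = xs.length / 2
  if (n == 0) xs
  else {
    val (ys, zs) = xs splitAt n
    merge(msort(ys), msort(zs))
  }
}

msort(List(3, 2, 1))   // compiles, because Int can be viewed as an Ordered[Int]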
In this chapter you saw several techniques for information hiding: private constructors, factory methods, type abstraction, and object private members. You also learned how to specify data type variance and what it implies for class implementation. Finally, you saw two techniques which help in obtaining flexible variance annotations: lower bounds for method type parameters, and private[this] annotations for local fields and methods.
[1] Supertype and subtype relationships are reflexive, which means a type is both a supertype and a subtype of itself. Even though T is a lower bound for U, you could still pass in a T to append.
[2] Technically, what happens is a flip occurs for lower bounds. The type parameter U is in a negative position (1 flip), while the lower bound (>: T) is in a positive position (2 flips).
DataTableReader.GetChars Method
Returns the value of the specified column as a character array.
Namespace: System.Data
Assembly: System.Data (in System.Data.dll)
Parameters
- ordinal
- Type: System.Int32
The zero-based column ordinal.
- dataIndex
- Type: System.Int64
The index within the field from which to start the read operation.
- buffer
- Type: System.Char[]
The buffer into which to read the stream of chars.
- bufferIndex
- Type: System.Int32
The index within the buffer at which to start placing the data.
- length
- Type: System.Int32
The maximum length to copy into the buffer.
Return Value
Type: System.Int64
The actual number of characters read.
Implements IDataRecord.GetChars(Int32, Int64, Char[], Int32, Int32).
The actual number of characters read can be less than the requested length, if the end of the field is reached. If you pass a buffer that is null (Nothing in Visual Basic), GetChars returns the length of the entire field in characters, not the remaining size based on the buffer offset parameter.
No conversions are performed; therefore the data to be retrieved must already be a character array or coercible to a character array.
The following example demonstrates the GetChars method. The TestGetChars method expects to be passed a DataTableReader filled with two columns of data: a file name in the first column, and an array of characters in the second. In addition, TestGetChars lets you specify the buffer size to be used as it reads the data from the character array in the DataTableReader. TestGetChars creates a file corresponding to each row of data in the DataTableReader, using the supplied data in the first column of the DataTableReader as the file name.
This procedure demonstrates the use of the GetChars method reading data that was stored in the DataTable as a character array. Any other type of data causes the GetChars method to throw an InvalidCastException.
using System;
using System.Data;
using System.IO;

class Class1
{
    static void Main()
    {
        DataTable table = new DataTable();
        table.Columns.Add("FileName", typeof(string));
        table.Columns.Add("Data", typeof(char[]));
        table.Rows.Add(new object[] { "File1.txt",
            "0123456789ABCDEF".ToCharArray() });
        table.Rows.Add(new object[] { "File2.txt",
            "0123456789ABCDEF".ToCharArray() });

        DataTableReader reader = new DataTableReader(table);
        TestGetChars(reader, 7);
    }

    private static void TestGetChars(DataTableReader reader, int bufferSize)
    {
        // The filename is in column 0, and the contents are in column 1.
        const int FILENAME_COLUMN = 0;
        const int DATA_COLUMN = 1;

        char[] buffer;
        long offset;
        int charsRead = 0;
        string fileName;
        int currentBufferSize = 0;

        while (reader.Read())
        {
            // Reinitialize the buffer size and the buffer itself.
            currentBufferSize = bufferSize;
            buffer = new char[bufferSize];

            // For each row, write the data to the specified file.
            // First, verify that the FileName column isn't null.
            if (!reader.IsDBNull(FILENAME_COLUMN))
            {
                // Get the file name, and create a file with
                // the supplied name.
                fileName = reader.GetString(FILENAME_COLUMN);

                // Start at the beginning.
                offset = 0;
                using (StreamWriter outputStream =
                    new StreamWriter(fileName, false))
                {
                    try
                    {
                        // Loop through all the characters in the input field,
                        // incrementing the offset for the next time. If this
                        // pass through the loop reads characters, write them to
                        // the output stream.
                        do
                        {
                            charsRead = (int)reader.GetChars(DATA_COLUMN, offset,
                                buffer, 0, bufferSize);
                            if (charsRead > 0)
                            {
                                outputStream.Write(buffer, 0, charsRead);
                                offset += charsRead;
                            }
                        } while (charsRead > 0);
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine(fileName + ": " + ex.Message);
                    }
                }
            }
        }
        Console.WriteLine("Press Enter key to finish.");
        Console.ReadLine();
    }
}
Project Euler 43: Find the sum of all pandigital numbers with an unusual sub-string divisibility property.
Project Euler 43 Problem Description
Project Euler 43: The number 1406357289 is a 0 to 9 pandigital number because it is made up of each of the digits 0 to 9 in some order, but it also has a rather interesting sub-string divisibility property. Let d1 be the 1st digit, d2 be the 2nd digit, and so on. In this way, we note the following: d2d3d4=406 is divisible by 2, d3d4d5=063 is divisible by 3, d4d5d6=635 is divisible by 5, d5d6d7=357 is divisible by 7, d6d7d8=572 is divisible by 11, d7d8d9=728 is divisible by 13, and d8d9d10=289 is divisible by 17. Find the sum of all 0 to 9 pandigital numbers with this property.
Analysis
There are only 9 x 9! (3,265,920) possible 0-9 ten-digit pandigital numbers. Simply generate the possibilities and check each three-digit section to be divisible by the appropriate prime number. But even on a fast computer this method may take too long to run.
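For reference, here is a minimal brute-force sketch in Scala (not the blog's solution code) that implements exactly this generate-and-test idea; it is slow but shows the conditions directly:

object Euler43BruteForce {
  def main(args: Array[String]) {
    val primes = List(2, 3, 5, 7, 11, 13, 17)
    def hasProperty(p: String) =
      (1 to 7).forall(i => p.substring(i, i + 3).toInt % primes(i - 1) == 0)
    val answer = "0123456789".permutations          // all orderings of the ten digits
      .filter(p => p.head != '0' && hasProperty(p)) // ten-digit pandigitals with the property
      .map(_.toLong)
      .sum
    println(answer)                                 // sum of the six qualifying numbers listed below
  }
}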
We can make some observations to reduce the number of possibilities. Keep in mind we are dealing with 3 digit numbers that must have unique digits in order to qualify as part of a 0-9 pandigital ten-digit number.
Some observations
Observation 1: d4d5d6 must be divisible by 5 then:
d6 = {0, 5}.
Observation 2: d6d7d8 must be divisible by 11 and, according to Obs. 1, has to start with a {0, 5}. Well, it can't start with 0 as that would yield non-unique digits, so it has to start with 5, i.e., d6 = 5.
d6d7d8 = {506, 517, 528, 539, 561, 572, 583, 594}
Observation 3: d7d8d9 must be divisible by 13, begin with {06, 17, 28, 39, 61, 72, 83, 94} by Obs. 2, contain no 5s and have only unique digits:
d7d8d9 = {286 390 728 832}
Observation 4: d8d9d10 must be divisible by 17, begin with {28, 32, 86, 90} by Obs. 3, contain no 5s and have only unique digits:
d8d9d10 = {289, 867, 901}
Observation 5: d5d6d7 must be divisible by 7, end with {52, 53, 57, 58} by Obs. 2 & 3 and have only unique digits:
d5d6d7 = {357, 952}
We have reduced our possibilities for the last 6 digits using our 5 observations to:
d5d6d7d8d9d10 = {357289, 952867}
Finishing the problem
Observation 6: d3d4d5 must be divisible by 3, not contain {2, 5, 7, 8} since those digits have already been placed, end in {3, 9}, have all unique digits and an even middle number:
d3d4d5 = {063, 309, 603}
This forces d1d2 to {14, 41}
Our final set is: {1406357289, 1430952867, 1460357289, 4106357289, 4130952867, 4160357289}
Afterthoughts
- Reference: The On-Line Encyclopedia of Integer Sequences (OEIS) A050278: Pandigital numbers: numbers containing the digits 0-9. Version 1: each digit appears exactly once.
- 0 to 3 pandigital is 22212
- 0 to 4 pandigital is 711104
2-liner, same code
from itertools import permutations
print(sum(int(''.join(x)) for x in permutations('0123456789', 10) if all([not int(''.join(x)[n[0]:n[1]]) % n[2] for n in [(1, 4, 2), (2, 5, 3), (3, 6, 5), (4, 7, 7), (5, 8, 11), (6, 9, 13), (7, 10, 17)][:7]])))
Dude, that is cool. About as fast too. Thanks for sharing this!
another rule:
Since d3d4d5 has to be divisible by 3 and d5 is {3,9}, (d3+d4) have to be divisible by 3.
When d5d6d7d8d9d10 is 952867,d3d4 can only be 30 ,then we have two numbers with this property.
when d5d6d7d8d9d10 is 357289,d3d4 can be 06 or 60, then we have four numbers with this property.
Sum all these six numbers, we get the answer.
Thanks for finishing the analysis for doing it without a computer. Good solution.
Thanks a lot!!
only 20 seconds for me, before your beautiful explanation.
Bye
Thanks for the comment Bancaldo, glad it could help you.
|
https://blog.dreamshire.com/project-euler-43-solution/
|
CC-MAIN-2017-51
|
refinedweb
| 554
| 70.73
|
/* $NetBSD: msg.mi.en,v 1.179 2013/03/23 15:36:43 gson Exp $ */ /* * Copyright 1997 Piermont Information Systems Inc. * All rights reserved. * * Written by Philip A. Nelson for Piermont Information Systems SYSTEMS. * */ /* MI Message catalog -- english, machine independent */ message usage {usage: sysinst [-D] [-f definition_file] [-r release] } message sysinst_message_language {Installation messages in English} message sysinst_message_locale {en_US.ISO8859-1} message Yes {Yes} message No {No} message All {All} message Some {Some} message None {None} message none {none} message OK {OK} message ok {ok} message On {On} message Off {Off} message unchanged {unchanged} message Delete {Delete?} message install {install} message reinstall {reinstall sets for} message upgrade {upgrade} message hello {NetBSD/@@MACHINE@@ @@VERSION@@. } message thanks {Thank you for using NetBSD! } message installusure ? } message upgradeusure {Ok, let's upgrade NetBSD on your hard disk. As always, this will change information on your hard disk. You should have made a full backup before this procedure! Do you really want to upgrade NetBSD? (This is your last warning before this procedure starts modifying your disks.) } message reinstallusure {Ok, let's unpack the NetBSD distribution sets to a bootable hard disk. This procedure just fetches and unpacks sets onto an pre-partitioned bootable disk. It does not label disks, upgrade bootblocks, or save any existing configuration info. (Quit and choose `install' or `upgrade' if you want those options.) You should have already done an `install' or `upgrade' before starting this procedure! Do you really want to reinstall NetBSD distribution sets? (This is your last warning before this procedure starts modifying your disks.) } message mount_failed {Mounting %s failed. Continue? } message nodisk {I can not find any hard disks for use by NetBSD. You will be returned to the original menu. } message onedisk {I found only one disk, %s. Therefore I assume you want to %s NetBSD on it. } message ask_disk {On which disk do you want to %s NetBSD? } message Available_disks {Available disks} message heads {heads} message sectors {sectors} message fs_isize {average file size (bytes)} message mountpoint {mount point (or 'none')} message cylname {cyl} message secname {sec} message megname {MB} message layout {NetBSD uses a BSD disklabel to carve up the NetBSD portion of the disk into multiple BSD partitions. You must now set up your BSD disklabel. You can use a simple editor to set the sizes of the NetBSD partitions, or keep the existing partition sizes and contents. You will then be given the opportunity to change any of the disklabel fields. The NetBSD part of your disk is %d Megabytes. A full installation requires at least %d Megabytes without X and at least %d Megabytes if the X sets are included. } message Choose_your_size_specifier {Choosing megabytes will give partition sizes close to your choice, but aligned to cylinder boundaries. Choosing sectors will allow you to more accurately specify the sizes. On modern ZBR disks, actual cylinder size varies across the disk and there is little real gain from cylinder alignment. On older disks, it is most efficient to choose partition sizes that are exact multiples of your actual cylinder size. Choose your size specifier} message ptnsizes {You can now change the sizes for the system partitions. 
The default is to allocate all the space to the root file system, however you may wish to have separate /usr (additional system files), /var (log files etc) or /home (users' home directories). Free space will be added to the partition marked with a '+'. } message ptnheaders { MB Cylinders Sectors Filesystem } message askfsmount {Mount point?} message askfssize {Size for %s in %s?} message askunits {Change input units (sectors/cylinders/MB)} message NetBSD_partition_cant_change {NetBSD partition} message Whole_disk_cant_change {Whole disk} message Boot_partition_cant_change {Boot partition} message add_another_ptn {Add a user defined partition} message fssizesok {Accept partition sizes. Free space %d %s, %d free partitions.} message fssizesbad {Reduce partition sizes by %d %s (%u sectors).} message startoutsidedisk {The start value you specified is beyond the end of the disk. } message endoutsidedisk {With this value, the partition end is beyond the end of the disk. Your partition size has been truncated to %d %s. Type enter to continue } message toobigdisklabel { This disk is too large for a disklabel partition table to be used and hence cannot be used as a bootable disk or to hold the root partition. } message fspart {We now have your BSD-disklabel partitions as: This is your last chance to change them. } message fspart_header { Start %3s End %3s Size %3s FS type Newfs Mount Mount point --------- --------- --------- ---------- ----- ----- ----------- } message fspart_row {%9lu %9lu %9lu %-10s %-5s %-5s %s} message show_all_unused_partitions {Show all unused partitions} message partition_sizes_ok {Partition sizes ok} message edfspart {The current values for partition `%c' are, Select the field you wish to change: MB cylinders sectors ------- --------- --------- } message fstype_fmt { FStype: %9s} message start_fmt { start: %9u %8u%c %9u} message size_fmt { size: %9u %8u%c %9u} message end_fmt { end: %9u %8u%c %9u} message bsize_fmt { block size: %9d bytes} message fsize_fmt { fragment size: %9d bytes} message isize_fmt { avg file size: %9d bytes (for number of inodes)} message isize_fmt_dflt { avg file size: 4 fragments} message newfs_fmt { newfs: %9s} message mount_fmt { mount: %9s} message mount_options_fmt { mount options: } message mountpt_fmt { mount point: %9s} message toggle {Toggle} message restore {Restore original values} message Select_the_type {Select the type} message other_types {other types} message label_size {%s Special values that can be entered for the size value: -1: use until the end of the NetBSD part of the disk a-%c: end this partition where partition X starts size (%s)} message label_offset {%s Special values that can be entered for the offset value: -1: start at the beginning of the NetBSD part of the disk a-%c: start at the end of previous partition (a, b, ..., %c) start (%s)} message invalid_sector_number {Badly formed sector number } message Select_file_system_block_size {Select file system block size} message Select_file_system_fragment_size {Select file system fragment size} message packname {Please enter a name for your NetBSD disk} message lastchance {Ok, we are now ready to install NetBSD on your hard disk (%s). Nothing has been written yet. This is your last chance to quit this process before anything gets changed. Shall we continue? } message disksetupdone {Okay, the first part of the procedure is finished. Sysinst has written a disklabel to the target disk, and newfs'ed and fsck'ed the new partitions you specified for the target disk. 
} message disksetupdoneupdate {Okay, the first part of the procedure is finished. Sysinst has written a disklabel to the target disk, and fsck'ed the new partitions you specified for the target disk. } message openfail {Could not open %s, error message was: %s. } message mountfail {mount of device /dev/%s%c on %s failed. } message extractcomplete {The extraction of the selected sets for NetBSD-@@VERSION@@ is complete. The system is now able to boot from the selected harddisk. To complete the installation, sysinst will give you the opportunity to configure some essential things first. } message instcomplete {The installation of NetBSD-@@VERSION@@. } message upgrcomplete {The upgrade to NetBSD-@@VERSION@@. } message unpackcomplete {Unpacking additional release sets of NetBSD-@@VERSION@@ is now complete. You will now need to follow the instructions in the INSTALL document to get your system reconfigured for your situation. The afterboot(8) manpage can also be of some help. At a minimum, you must edit rc.conf for your local environment and change rc_configured=NO to rc_configured=YES or reboots will stop at single-user. } message distmedium {Your disk is now ready for installing the kernel and the distribution sets. As noted in your INSTALL notes, you have several options. For ftp or nfs, you must be connected to a network with access to the proper machines. Sets selected %d, processed %d, Next set %s. } message distset . } /* XXX add 'minimal installation' */ message ftpsource {The following are the %s site, directory, user, and password that will be used. If "user" is "ftp", then the password is not needed. } message email {e-mail address} message dev {device} message nfssource {Enter the nfs host and server directory where the distribution is located. Remember, the directory should contain the .tgz files and must be nfs mountable. } message floppysource {Enter the floppy device to be used and transfer directory on the target file system. The set files must be in the root directory of the floppies. } message cdromsource {Enter the CDROM device to be used and directory on the CDROM where the distribution is located. Remember, the directory should contain the .tgz files. } message Available_cds {Available CDs } message ask_cd {Multiple CDs found, please select the one containing the install CD.} message cd_path_not_found {The installation sets have not been found at the default location on this CD. Please check device and path name.} message localfssource {Enter the unmounted local device and directory on that device where the distribution is located. Remember, the directory should contain the .tgz files. } message localdir {Enter the already-mounted local directory where the distribution is located. Remember, the directory should contain the .tgz files. } message filesys {file system} message nonet {I can not find any network interfaces for use by NetBSD. You will be returned to the previous menu. } message netup {The following network interfaces are active: %s Does one of them connect to the required server?} message asknetdev {I have found the following network interfaces: %s \nWhich device shall I use?} message badnet {You did not choose one of the listed network devices. Please try again. 
The following network devices are available: %s \nWhich device shall I use?} message netinfo {To be able to use the network, we need answers to the following: } message net_domain {Your DNS domain} message net_host {Your host name} message net_ip {Your IPv4 number} message net_srv_ip {Server IPv4 number} message net_mask {IPv4 Netmask} message net_namesrv6 {IPv6 name server} message net_namesrv {IPv4 name server} message net_defroute {IPv4 gateway} message net_media {Network media type} message netok {The following are the values you entered. DNS Domain: %s Host Name: %s Primary Interface: %s Host IP: %s Netmask: %s IPv4 Nameserver: %s IPv4 Gateway: %s Media type: %s } message netok_slip {The following are the values you entered. Are they OK? DNS Domain: %s Host Name: %s Primary Interface: %s Host IP: %s Server IP: %s Netmask: %s IPv4 Nameserver: %s IPv4 Gateway: %s Media type: %s } message netokv6 {IPv6 autoconf: %s IPv6 Nameserver: %s } message netok_ok {Are they OK?} message netagain {Please reenter the information about your network. Your last answers will be your default. } message wait_network { Waiting while network interface comes up. } message resolv {Could not create /etc/resolv.conf. Install aborted. } message realdir {Could not change to directory %s: %s. Install aborted. } message delete_xfer_file {Delete after install} message notarfile {Release set %s does not exist.} message endtarok {All selected distribution sets unpacked successfully.} message endtar {There were problems unpacking distribution sets. Your installation is incomplete. You selected %d distribution sets. %d sets couldn't be found and %d were skipped after an error occurred. Of the %d that were attempted, %d unpacked without errors and %d with errors. The installation is aborted. Please recheck your distribution source and consider reinstalling sets from the main menu.} message abort {Your choices have made it impossible to install NetBSD. Install aborted. } message abortinst {The distribution was not successfully loaded. You will need to proceed by hand. Installation aborted. } message abortupgr {The distribution was not successfully loaded. You will need to proceed by hand. Upgrade aborted. } message abortunpack {Unpacking additional sets was not successful. You will need to proceed by hand, or choose a different source for release sets and try again. } message createfstab {There is a big problem! Can not create /mnt/etc/fstab. Bailing out! } message noetcfstab {Help! No /etc/fstab in target disk %s. Aborting upgrade. } message badetcfstab {Help! Can't parse /etc/fstab in target disk %s. Aborting upgrade. } message X_oldexists {I cannot save %s/bin/X as %s/bin/X.old, because the target disk already has an %s/bin/X.old. Please fix this before continuing. One way is to start a shell from the Utilities menu, examine the target %s/bin/X and %s/bin/X.old. If %s/bin/X.old is from a completed upgrade, you can rm -f %s/bin/X.old and restart. Or if %s/bin/X.old is from a recent, incomplete upgrade, you can rm -f %s/bin/X and mv %s/bin/X.old to %s/bin/X Aborting upgrade.} message netnotup {There was a problem in setting up the network. Either your gateway or your nameserver was not reachable by a ping. Do you want to configure your network again? (No allows you to continue anyway or abort the install process.) } message netnotup_continueanyway {Would you like to continue the install process anyway, and assume that the network is working? (No aborts the install process.) } message makedev {Making device nodes ... 
} message badfs {It appears that /dev/%s%c is not a BSD file system or the fsck was not successful. Try mounting it anyway? (Error number %d.) } message rootmissing { target root is missing %s. } message badroot {The completed new root file system failed a basic sanity check. Are you sure you installed all the required sets? } message fd_type {Floppy file system type} message fdnotfound {Could not find the file on the floppy. } message fdremount {The floppy was not mounted successfully. } message fdmount {Please load the floppy containing the file named "%s.%s". If the set's has no more disks, select "Set finished" to install the set. Select "Abort fetch" to return to the install media selection menu. } message mntnetconfig {Is the network information you entered accurate for this machine in regular operation and do you want it installed in /etc? } message cur_distsets {The following is the list of distribution sets that will be used. } message cur_distsets_header { Distribution set Selected ------------------------ -------- } message set_base {Base} message set_system {System (/etc)} message set_compiler {Compiler Tools} message set_games {Games} message set_man_pages {Online Manual Pages} message set_misc {Miscellaneous} message set_modules {Kernel Modules} message set_tests {Test programs} message set_text_tools {Text Processing Tools} message set_X11 {X11 sets} message set_X11_base {X11 base and clients} message set_X11_etc {X11 configuration} message set_X11_fonts {X11 fonts} message set_X11_servers {X11 servers} message set_X11_prog {X11 programming} message set_source {Source and debug sets} message set_syssrc {Kernel sources} message set_src {Base sources} message set_sharesrc {Share sources} message set_gnusrc {GNU sources} message set_xsrc {X11 sources} message set_debug {Debug symbols} message set_xdebug {X11 debug symbols} message cur_distsets_row {%-27s %3s} message select_all {Select all the above sets} message select_none {Deselect all the above sets} message install_selected_sets {Install selected sets} message tarerror {There was an error in extracting the file %s. That means some files were not extracted correctly and your system will not be complete. Continue extracting sets?} message must_be_one_root {There must be a single partition marked to be mounted on '/'.} message partitions_overlap {partitions %c and %c overlap.} message No_Bootcode {No bootcode for specified FS type of root partition} message cannot_ufs2_root {Sorry, the root file system can't be FFSv2 due to lack of bootloader support on this port.} message edit_partitions_again { You can either edit the partition table by hand, or give up and return to the main menu. Edit the partition table again?} message config_open_error {Could not open config file %s\n} message choose_timezone {Please choose the timezone that fits you best from the list below. Press RETURN to select an entry. Press 'x' followed by RETURN to quit the timezone selection. Default: %s Selected: %s Local time: %s %s } message tz_back { Back to main timezone list} message swapactive {The disk that you selected has a swap partition that may currently be in use if your system is low on memory. Because you are going to repartition this disk, this swap partition will be disabled now. Please beware that this might lead to out of swap errors. Should you get such an error, please restart the system and try again.} message swapdelfailed {Sysinst failed to deactivate the swap partition on the disk that you chose for installation. 
Please reboot and try again.} message rootpw {The root password of the newly installed system has not yet been initialized, and is thus empty. Do you want to set a root password for the system now?} message rootsh {You can now select which shell to use for the root user. The default is /bin/sh, but you may prefer another one.} message no_root_fs { There is no defined root file system. You need to define at least one mount point with "/". Press <return> to continue. } message slattach { Enter slattach flags } message Pick_an_option {Pick an option to turn on or off.} message Scripting {Scripting} message Logging {Logging} message Status { Status: } message Command {Command: } message Running {Running} message Finished {Finished} message Command_failed {Command failed} message Command_ended_on_signal {Command ended on signal} message NetBSD_VERSION_Install_System {NetBSD-@@VERSION@@ Install System} message Exit_Install_System {Exit Install System} message Install_NetBSD_to_hard_disk {Install NetBSD to hard disk} message Upgrade_NetBSD_on_a_hard_disk {Upgrade NetBSD on a hard disk} message Re_install_sets_or_install_additional_sets {Re-install sets or install additional sets} message Reboot_the_computer {Reboot the computer} message Utility_menu {Utility menu} message Config_menu {Config menu} message exit_utility_menu {Back to main menu} message NetBSD_VERSION_Utilities {NetBSD-@@VERSION@@ Utilities} message Run_bin_sh {Run /bin/sh} message Set_timezone {Set timezone} message Configure_network {Configure network} message Partition_a_disk {Partition a disk} message Logging_functions {Logging functions} message Halt_the_system {Halt the system} message yes_or_no {yes or no?} message Hit_enter_to_continue {Hit enter to continue} message Choose_your_installation {Choose your installation} message Set_Sizes {Set sizes of NetBSD partitions} message Use_Existing {Use existing partition sizes} message Megabytes {Megabytes} message Cylinders {Cylinders} message Sectors {Sectors} message Select_medium {Install from} message ftp {FTP} message http {HTTP} message nfs {NFS} .if HAVE_INSTALL_IMAGE message cdrom {CD-ROM / DVD / install image media} .else message cdrom {CD-ROM / DVD} .endif message floppy {Floppy} message local_fs {Unmounted fs} message local_dir {Local directory} message Select_your_distribution {Select your distribution} message Full_installation {Full installation} message Full_installation_nox {Installation without X11} message Minimal_installation {Minimal installation} message Custom_installation {Custom installation} message hidden {** hidden **} message Host {Host} message Base_dir {Base directory} message Set_dir_bin {Binary set directory} message Set_dir_src {Source set directory} message Xfer_dir {Transfer directory} message User {User} message Password {Password} message Proxy {Proxy} message Get_Distribution {Get Distribution} message Continue {Continue} message What_do_you_want_to_do {What do you want to do?} message Try_again {Try again} message Set_finished {Set finished} message Skip_set {Skip set} message Skip_group {Skip set group} message Abandon {Abandon installation} message Abort_fetch {Abort fetch} message Device {Device} message File_system {File system} message Select_IPv6_DNS_server { Select IPv6 DNS server} message other {other } message Perform_IPv6_autoconfiguration {Perform IPv6 autoconfiguration?} message Perform_DHCP_autoconfiguration {Perform DHCP autoconfiguration?} message Root_shell {Root shell} message User_shell {User shell} .if AOUT2ELF message 
aoutfail {The directory where the old a.out shared libraries should be moved to could not be created. Please try the upgrade procedure again and make sure you have mounted all file systems.} message emulbackup {Either the /emul/aout or /emul directory on your system was a symbolic link pointing to an unmounted file system. It has been given a '.old' extension. Once you bring your upgraded system back up, you may need to take care of merging the newly created /emul/aout directory with the old one. } .endif message oldsendmail {Sendmail is no longer in this release of NetBSD, default MTA is postfix. The file /etc/mailer.conf still chooses the removed sendmail. Do you want to upgrade /etc/mailer.conf automatically for postfix? If you choose "No" you will have to update /etc/mailer.conf yourself to ensure proper email delivery.} message license {To use the network interface %s, you must agree to the license in file %s. To view this file now, you can type ^Z, look at the contents of the file and then type "fg" to resume.} message binpkg {To configure the binary package system, please choose the network location to fetch packages from. Once your system comes up, you can use 'pkgin' to install additional packages, or remove packages.} message pkgpath {Enabling binary packages with pkgin requires setting up the repository. The following are the host, directory, user, and password that will be used. If "user" is "ftp", then the password is not needed. } message rcconf_backup_failed {Making backup of rc.conf failed. Continue?} message rcconf_backup_succeeded {rc.conf backup saved to %s.} message rcconf_restore_failed {Restoring backup rc.conf failed.} message rcconf_delete_failed {Deleting old %s entry failed.} message Pkg_dir {Package directory} message configure_prior {configure a prior installation of} message configure {configure} message change {change} message password_set {password set} message YES {YES} message NO {NO} message DONE {DONE} message abandoned {Abandoned} message empty {***EMPTY***} message timezone {Timezone} message change_rootpw {Change root password} message enable_binpkg {Enable installation of binary packages} message enable_sshd {Enable sshd} message enable_ntpd {Enable ntpd} message run_ntpdate {Run ntpdate at boot} message enable_mdnsd {Enable mdnsd} message add_a_user {Add a user} message configmenu {Configure the additional items as needed.} message doneconfig {Finished configuring} message Install_pkgin {Install pkgin and update package summary} message binpkg_installed {Your system is now configured to use pkgin to install binary packages. To install a package, run: pkgin install <packagename> from a root shell. Read the pkgin(1) manual page for further information.} message Install_pkgsrc {Fetch and unpack pkgsrc} message pkgsrc {Installing pkgsrc requires unpacking an archive retrieved over the network. The following are the host, directory, user, and password that will be used. If "user" is "ftp", then the password is not needed. } message Pkgsrc_dir {pkgsrc directory} message get_pkgsrc {Fetch and unpack pkgsrc for building from source} message retry_pkgsrc_network {Network configuration failed. Retry?} message quit_pkgsrc {Quit without installing pkgsrc} message pkgin_failed {Installation of pkgin failed, possibly because no binary packages exist. Please check the package path and try again.} message failed {Failed} message addusername {8 character username to add:} message addusertowheel {Do you wish to add this user to group wheel?}
|
http://cvsweb.netbsd.org/bsdweb.cgi/src/distrib/utils/sysinst/msg.mi.en?rev=1.179&content-type=text/x-cvsweb-markup
|
CC-MAIN-2014-15
|
refinedweb
| 3,574
| 55.84
|
It's hard to deny the fact that the PHP community is excited for Laravel 4. Among other things, the framework leverages the power of Composer, which means it's able to utilize any package or script from Packagist.
In the meantime, Laravel offers "Bundles", which allow us to modularize code for use in future projects. The bundle directory is full of excellent scripts and packages that you can use in your applications. In this lesson, I'll show you how to build one from scratch!
Wait, What's a Bundle?
Bundles offer an easy way to group related code. If you're familiar with CodeIgniter, bundles are quite similar to "Sparks". This is apparent when you take a look at the folder structure.
Creating a bundle is fairly simple. To illustrate the process, we'll build an admin panel boilerplate that we can use within future projects. Firstly, we need to create an 'admin' directory within our 'bundles' folder. Try to replicate the folder structure from the image above.
Before we begin adding anything to our bundle, we need to first register it with Laravel. This is done in your application's
bundles.php file. Once you open this file, you should see an array being returned; we simply need to add our bundle and define a
handle. This will become the URI in which we access our admin panel.
'admin' => array('handles' => 'admin')
Here, I've named mine, "admin," but feel free to call yours whatever you wish.
Once we've got that setup, we need to create a
start.php file. Here, we're going to set up a few things, such as our namespaces. If you're not bothered by this, then you don't actually need a start file for your bundle to work, as expected.
Laravel's autoloader class allows us to do a couple of things: map our base controller, and autoload namespaces.
Autoloader::map(array( 'Admin_Base_Controller' => Bundle::path('admin').'controllers/base.php', )); Autoloader::namespaces(array( 'Admin\Models' => Bundle::path('admin').'models', 'Admin\Libraries' => Bundle::path('admin').'libraries', ));
Namespacing will ensure that we don't conflict with any other models or libraries already included in our application. You'll notice that we haven't opted to not namespace our controllers to make things a little easier.
Publishing Assets
For the admin panel, we'll take advantage of Twitter's Bootstrap, so go grab a copy. We can pop this into a
public folder inside our bundle in order to publish to our application later on.
When you're ready to publish them, just run the following command through artisan.
php artisan bundle:publish admin
This will copy the folder structure and files to the
bundles directory in our
public folder, within the root of the Laravel installation. We can then use this in our bundle's base controller.
Setting up the Base Controller
It's always a smart idea to setup a base controller, and extend from there. Here, we can setup restful controllers, define the layout, and include any assets. We just need to call this file,
base.php, and pop it into our controller's directory.
Firstly, let's get some housekeeping out of the way. We'll of course want to use Laravel's restful controllers.
public $restful = true;
And we'll specify a layout that we'll create shortly. If you're not used to controller layouts, then you're in for a treat.
public $layout = 'admin::layouts.main';
The bundle name, followed by two colons, is a paradigm in Laravel we'll be seeing more of in the future, so keep an eye out.
When handling assets within our bundle, we can do things as expected and specify the path from the root of the public folder. Thankfully, Laravel is there to make our lives easier. In our construct, we need to specify the bundle, before adding to our asset containers.
Asset::container('header')->bundle('admin'); Asset::container('footer')->bundle('admin');
If you're unfamiliar with asset containers, don't worry; they're merely sections of a page where you want to house your assets. Here, we'll be including stylesheets in the header, and scripts in the footer.
Now, with that out of the way, we can include our bootstrap styles and scripts easily. Our completed base controller should look similar to:
class Admin_Base_Controller extends Controller { public $restful = true; public $layout = 'admin::layouts.main'; public function __construct(){ parent::__construct(); Asset::container('header')->bundle('admin'); Asset::container('header')->add('bootstrap', 'css/bootstrap.min.css'); Asset::container('footer')->bundle('admin'); Asset::container('footer')->add('jquery', ''); Asset::container('footer')->add('bootstrapjs', 'js/bootstrap.min.js'); } /** * Catch-all method for requests that can't be matched. * * @param string $method * @param array $parameters * @return Response */ public function __call($method, $parameters){ return Response::error('404'); } }
We've also brought across the catch-all request from the application's base controller to return a 404 response, should a page not be found.
Before we do anything else, let's make the file for that layout,
views/layout/main.blade.php, so we don't encounter any errors later on.
Securing the Bundle
As we're building an admin panel, we're going to want to keep people out. Thankfully, we can use Laravel's built in
Auth class to accomplish this with ease..
First, we need to create our table; I'm going to be using 'admins' as my table name, but you can change it, if you wish. Artisan will generate a migration, and pop it into our bundle's migrations directory. Just run the following in the command line.
php artisan migrate:make admin::create_admins_table
Building the Schema
If you're unfamiliar with the schema builder, I recommend that you take a glance at the documentation. We're going to include a few columns:
- id - This will auto-increment and become our primary key
- name
- username
- role - We won't be taking advantage of this today, but it will allow you to extend the bundle later on
We'll also include the default timestamps, in order to follow best practices.
/** * Make changes to the database. * * @return void */ public function up() { Schema::create('admins', function($table) { $table->increments('id'); $table->string('name', 200); $table->string('username', 32)->unique(); $table->string('password', 64); $table->string('email', 320)->unique(); $table->string('role', 32); $table->timestamps(); }); } /** * Revert the changes to the database. * * @return void */ public function down() { Schema::drop('admins'); }
Now that we've got our database structure in place, we need to create an associated model for the table. This process is essentially identical to how we might accomplish this in our main application. We create the file and model, based on the singular form of our table name - but we do need to ensure that we namespace correctly.
namespace Admin\Models; use \Laravel\Database\Eloquent\Model as Eloquent; class Admin extends Eloquent { }
Above, we've ensured that we're using the namespace that we defined in
start.php. Also, so we can reference Eloquent correctly, we create an alias.
Extending Auth
To keep our bundle entirely self contained, we'll need to extend
auth. This will allow us to define a table just to login to our admin panel, and not interfere with the main application.
Before we create our custom driver, we'll create a configuration file, where you can choose if you'd like to use the
username or
return array( 'username' => 'username', 'password' => 'password', );
If you want to alter the columns that we'll be using, simply adjust the values here.
We next need to create the driver. Let's call it, "AdminAuth," and include it in our libraries folder. Since we're extending Auth, we only need to overwrite a couple of methods to get everything working, as we intended.
namespace Admin\Libraries; use Admin\Models\Admin as Admin, Laravel\Auth\Drivers\Eloquent as Eloquent, Laravel\Hash, Laravel\Config; class AdminAuth extends Eloquent { /** * Get the current user of the application. * * If the user is a guest, null should be returned. * * @param int|object $token * @return mixed|null */ public function retrieve($token) { // We return an object here either if the passed token is an integer (ID) // or if we are passed a model object of the correct type if (filter_var($token, FILTER_VALIDATE_INT) !== false) { return $this->model()->find($token); } else if (get_class($token) == new Admin) { return $token; } } /** * Attempt to log a user into the application. * * @param array $arguments * @return void */ public function attempt($arguments = array()) { $user = $this->model()->where(function($query) use($arguments) { $username = Config::get('admin::auth.username'); $query->where($username, '=', $arguments['username']); foreach(array_except($arguments, array('username', 'password', 'remember')) as $column => $val) { $query->where($column, '=', $val); } })->first(); // If the credentials match what is in the database, we will just // log the user into the application and remember them if asked. $password = $arguments['password']; $password_field = Config::get('admin::auth.password', 'password'); if ( ! is_null($user) and Hash::check($password, $user->{$password_field})) { return $this->login($user->get_key(), array_get($arguments, 'remember')); } return false; } protected function model(){ return new Admin; } }
Now that we've created the driver, we need to let Laravel know. We can use Auth's
extend method to do this in our
start.php file.
Auth::extend('adminauth', function() { return new Admin\Libraries\AdminAuth; });
One final thing that we need to do is configure Auth to use this at runtime. We can do this in our base controller's constructor with the following.
Config::set('auth.driver', 'adminauth');
Routes & Controllers
Before we can route to anything, we need to create a controller. Let's create our dashboard controller, which is what we'll see after logging in.
As we'll want this to show up at the root of our bundle (i.e. the handle we defined earlier), we'll need to call this
home.php. Laravel uses the 'home' keyword to establish what you want to show up at the root of your application or bundle.
Extend your base controller, and create an index view. For now, simply return 'Hello World' so we can ensure that everything is working okay.
class Admin_Home_Controller extends Admin_Base_Controller { public function get_index(){ return 'Hello World'; } }
Now that our controller is setup, we can route to it. Create a
routes.php within your bundle, if you haven't already. Similar to our main application, each bundle can have its own routes file that works identically.
Route::controller(array( 'admin::home', ));
Here, I've registered the home controller, which Laravel will automatically assign to
/. Later , we'll add our login controller to the array.
If you head to
/admin (or whatever handle you defined earlier) in your browser, then you should see 'Hello World'.
Building the Login Form
Let's create the login controller, however, rather than extending the base controller, we'll instead extend Laravel's main controller. The reason behind this decision will become apparent shortly.
Because we're not extending, we need to set a couple of things up before beginning - namely restful layouts, the correct auth driver, and our assets.
class Admin_Login_Controller extends Controller { public $restful = true; public function __construct(){ parent::__construct(); Config::set('auth.driver', 'adminauth'); Asset::container('header')->bundle('admin'); Asset::container('header')->add('bootstrap', 'css/bootstrap.min.css'); } }
Let's also create our view. We're going to be using Blade - Laravel's templating engine - to speed things up a bit. Within your bundles views directory, create a 'login' directory and an 'index.blade.php' file within it.
We'll pop in a standard HTML page structure and echo the assets.
<!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>Login</title> {{Asset::container('header')->styles()}} <!--[if lt IE 9]> <script src=""></script> <![endif]--> </head> <body> </body> </html>
Now, let's make sure that the view is being created in the controller. As we're using restful controllers, we can take advantage of the 'get' verb in our method.
public function get_index(){ return View::make('admin::login.index'); }
Awesome! We're now good to start building our form, which we can create with the
Form class.
{{Form::open()}} {{Form::label('username', 'Username')}} {{Form::text('username')}} {{Form::label('password', 'Password')}} {{Form::password('password')}} {{Form::submit('Login', array('class' => 'btn btn-success'))}} {{Form::token()}} {{Form::close()}}
Above, we created a form that will post to itself (exactly what we want), along with various form elements and labels to go with it. The next step is to process the form.
As we're posting the form to itself and using restful controllers, we just need to create the
post_index method and use this to process our login. If you've never used Auth before, then go and have a peek at the documentation before moving on.
public function post_index(){ $creds = array( 'username' => Input::get('username'), 'password' => Input::get('password'), ); if (Auth::attempt($creds)) { return Redirect::to(URL::to_action('admin::home@index')); } else { return Redirect::back()->with('error', true); } }
If the credentials are correct, the user will be redirected to the dashboard. Otherwise, they'll be redirected back with an error that we can check for in the login view. As this is just session data, and not validation errors, we only need to implement a simple check.
@if(Session::get('error')) Sorry, your username or password was incorrect. @endif
We'll also need to log users out; so let's create a
get_logout method, and add the following. This will log users out, and then redirect them when visiting
/admin/login/logout.
public function get_logout(){ Auth::logout(); return Redirect::to(URL::to_action('admin::home@index')); }
The last thing we should do is add the login controller to our routes file.
Route::controller(array( 'admin::home', 'admin::login', ));
Filtering routes
To stop people from bypassing our login screen, we need to filter our routes to determine if they're authorized users. We can create the filter in our
routes.php, and attach it to our base controller, to filter before the route is displayed.
Route::filter('auth', function() { if (Auth::guest()) return Redirect::to(URL::to_action('admin::login')); });
At this point, all that's left to do is call this in our base controller's constructor. If we extended our login controller from our base, then we'd have an infinite loop that would eventually time out.
$this->filter('before', 'auth');
Setting up the Views
Earlier, we created our
main.blade.php layout; now, we're going to do something with it. Let's get an HTML page and our assets being brought in.
<!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>{{$title}}</title> {{Asset::container('header')->styles()}} <!--[if lt IE 9]> <script src=""></script> <![endif]--> </head> <body> <div class="container"> {{$content}} </div> {{Asset::container('footer')->scripts()}} </body> </html>
You'll notice that I've also echoed out a couple of variables:
$title and
$content. We'll be able to use magic methods from our controller to pass data through to these. I've also popped
$content inside the container
div that Bootstrap will provide the styling for.
Next, let's create the view for our dashboard. As we'll be nesting this, we only need to put the content we want to put into our container.
<h1>Hello</h1> <p class="lead">This is our dashboard view</p>
Save this as
index.blade.php within the
views/dashboard directory inside of your bundle.
We now need to set our controller to take advantage of the layout and view files that we just created. Within the
get_index method that we created earlier, add the following.
$this->layout->title = 'Dashboard'; $this->layout->nest('content', 'admin::dashboard.index');
title is a magic method that we can then echo out as a variable in our layout. By using
nest, we're able to include a view inside the layout straight from our controller.
Creating a Task
In order to speed things up, Laravel provides us with an easy way to execute code from the command line. These are called "Tasks"; it's a good idea to create one to add a new user to the database easily.
We simply need to ensure that the file takes on the name of our task, and pop it into our bundle's tasks directory. I'm going to call this
setup.php, as we'll use it just after installing the bundle.
use Laravel\CLI\Command as Command; use Admin\Models\Admin as Admin; class Admin_Setup_Task { public function run($arguments){ if(empty($arguments) || count($arguments) < 5){ die("Error: Please enter first name, last name, username, email address and password\n"); } Command::run(array('bundle:publish', 'admin')); $role = (!isset($arguments[5])) ? 'admin' : $arguments[5]; $data = array( 'name' => $arguments[0].' '.$arguments[1], 'username' => $arguments[2], 'email' => $arguments[3], 'password' => Hash::make($arguments[4]), 'role' => $role, ); $user = Admin::create($data); echo ($user) ? 'Admin created successfully!' : 'Error creating admin!'; } }
Laravel will pass through an array of arguments; we can count these to ensure that we're getting exactly what we want. If not, we'll echo out an error. You'll also notice that we're using the
Command class to run
bundle:publish. This will allow you to run any command line task built into Laravel inside your application or bundle.
The main thing this task does is grab the arguments passed through to it, hash the password, and insert a new admin into the
Admins table. To run this, we need to use the following in the command line.
php artisan admin::setup firstname lastname username [email protected] password
What Now?
In this tutorial, we created an boilerplate admin panel that is quite easy to extend. For example, the
roles column that we created could allow you to limit what your clients are able to see.
A bundle can be anything from an admin panel, like we built today, to Markdown parsers - or even the entire Zend Framework (I'm not kidding). Everything that we covered here will set you on your way to writing awesome Laravel bundles, which can be published to Laravel's bundle directory.
Learn more about creating Laravel bundles here on Nettuts+.
|
http://code.tutsplus.com/tutorials/build-your-first-admin-bundle-for-laravel--net-28918
|
CC-MAIN-2014-10
|
refinedweb
| 3,008
| 56.15
|
I think I have it figured out. Fink requires installing a package in a temporary root install that is then turned into a deb. This makes the pth files have bad paths.. /sw/bin/easy_install -h is now running so I think I am on the right track. -kurt On 3/29/06, Phillip J. Eby <pje at telecommunity.com> wrote: > > At 06:42 PM 3/29/2006 -0500, Kurt Schwehr wrote: > >Phillip and others, > > > >Does this look like it is installed correctly? Or is it that I am > calling > >setup.py when I should be calling ez_setup? And what should I pass to > >ez_setup? It does look like setuptools are installed incorrectly since > >easy_install -h does not run. So what is the right way? > > If you are packaging setuptools for use with a system packager, you should > use: > > python setup.py install --root=/some/pseudoroot > > If you are using setuptools 0.6a10 or earlier, you *also* need > --single-version-externally-managed for this to work. 0.6a11 (which I > just > released a few minutes ago) automatically sets > --single-version-externally-managed if you specify a --root, so you might > want to just go ahead and upgrade, especially since 0.6a11 has a lot of > other changes to improve compatibility with system packagers. > > For example, while 0.6a10 will complain about system-installed versions of > a package as conflicting with a package that is being installed, 0.6a11 > installs things such that there are no conflicts. 0.6a11 also supports > making system packages for projects containing namespace packages, without > causing inter-package conflicts for the packaging system. > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
|
https://mail.python.org/pipermail/distutils-sig/2006-March/006203.html
|
CC-MAIN-2016-44
|
refinedweb
| 275
| 68.47
|
/>Marco Parenzan November 8, 2016 Share via Twitter Share via Facebook Share via LinkedIn Blog Microsoft Azure search Azure DocumentDb for the Enterprise Market Azure DocumentDb: In today’s database wars, we often find developers and database administrators on opposite sides. Developers often experiment with new libraries, software, and methodologies to write simpler and elegant code, especially when talking about data management, and tend to throw away legacy products quite easily. On the other side, database administrators with skills in the relational space tend to be more conservative and don’t believe that there could be something different from their beloved SQL language. However, there are some good reasons for embracing the new opportunities presented by new NoSQL databases. In this post, we’d like to clear up some of the biggest misconceptions about NoSQL databases and introduce how developers can use document-based databases, specifically, Microsoft Azure’s DocumentDb. ERP, Accounting and the relational approach The 1980s were the pioneering era of Personal Computing (PC), the period when computers first began to show their potential. In the enterprise world, the PC empowered the daily routines of every office by structuring and automating many daily organizational tasks, often through applications such as Enterprise Resource Planning (ERP) and Accounting. If the ’80s were about PCs, the ’90s were the decade of connecting computers within the enterprise using a LAN. Despite the introduction of the Web and the browser, ERP and Accounting are essentially the same as they were 30 years ago. Today, we make relational databases responsible for data integrity by giving them the task of structuring data in normalized tables and relations. We assume that if data is contained in a relational database, we will always have full access to it to do whatever we want. In reality, if we could check the implementations of many applications, we would find that the database is not really responsible for data correctness and integrity. Developers tend to use different code for this. But I don’t want to discuss how a relational database can be used to do what it is supposed to do. I don’t want to move ERP into the NoSQL world because I believe that the relational model is better for ERP. I think that this is also true for Accounting software. Instead, I want to talk about a sort of laziness in how developers approach new technologies and the feeling that we are moving away from relational databases. The Internet Era The biggest misconceptions about databases come from the Internet era. It is not about ERPs and Accounting software, which are fine just the way they are. Instead, many of today’s misconceptions about databases come from a whole new class of applications that are being addressed by a combination of new (web-based) and old (relational database) technologies. These applications are still useful today, but there are other possibilities. For example, suppose you want to implement a (now) classic e-commerce website and you want to design for the purchased order. You create a flash sale event that offers a 50% discount for purchases within an hour. Because you sell worldwide, you could receive up to 10 times the normal number of orders during this time. You have two options. If this is a sporadic event, you can choose to scale up the database server to handle the increased requests. You will pay more for it, but the success of the campaign should cover the additional expense. 
You can also decide to schedule this type of campaign more frequently over time. To optimize the expenses of scaling up, you need to think of a better way for your application to handle this scenario. Modeling database for internet purchase orders With a relational database, this is one way to model the purchase order database: /> You have an InternetOrders entity table that represents the order, with order total and other order-related information. It refers to a Customer table that represents the user that made the orders. Another table, InternetOrderDetails, contains the rows that represent all products purchased inside a single order, with quantity and price. Each detail references a Products table with unit price and other information related to the order. To carry out an order, you need to perform (n+1) insertions in tables and (n+1) check constraints to verify that the data is integral. Once a user has submitted the checkout command from the web server, all operations are executed on the backend. Then, the backend service will carry the order to the relational database. This is just a simplification of the process, as a real world order would require additional operations. Still, you can perceive that in a flash sale, the number of database operations will greatly increase and the database operation will suffer. We can try to develop some hypotheses as to why this is the case. We develop frontend and backend applications where separation is about organizational needs. Therefore, we have control over the data. We already have correct Customer and Product catalogs. We also need to include the current customer and product information inside an InternetOrder, as they can change over time. However, for traceability reasons, we need to do so at the moment of the order. For example, if Customer information changes, we don’t need to modify past orders. In this scenario, if we use a JSON representation, we can write an InternetOrder in the following way: /> Suppose you have a database that can save this document in a single write without separating every single entity (represented by the curly braces) in distinct tables and different insert operations. Also, suppose that you need a daily count of how many T-shirts you sold on a previous day. To do so, you need to aggregate orders on a product basis and discover the total quantity and costs, but you would do it the day after, not while customers are placing their orders. This is what NoSQL databases are good at. Introducing NoSQL Databases SQL language is the best representation of a relational database. A developer writes SQL code to describe the database structure (Data Definition Language – DDL) and database queries (Data Manipulation Language – DML). NoSQL is essentially an alternative to relational SQL databases. The name refers to “No (more) SQL”, or, better, “Not Only SQL”. This kind of database doesn’t use tables and rows. NoSQL databases use JSON as a persistence format, and JavaScript becomes the preferred language for interactions, queries, and performing operations. They use JSON format to persist data in an aggregated format. If the row inside a relational table is an entity, the Data Transfer Object (DTO) is the data aggregation format that JSON prefers. NoSQL is developed with scalability in mind to optimize the ingestion of web data. NoSQL supports eventual consistency, which means that queries will be consistent with write in a non-predictable time. 
It is substantially different from the ACID consistency that relational databases implement. You have an unpredictable amount of time when queries will be consistent with writes. This happens in many scenarios. Think about the Internet of Things. You need to collect lots of data from connected devices all around the world, but you don’t need to analyze it in real time; you don’t need precise data immediately, but perhaps just some statistics. Essentially, you need to be able to reliably collect all data, without any loss. We say that NoSQL databases are partition-aware. They can subdivide the entire data store into partitions and each one can be distributed in different hosts to scale out the ingestion capacity. This is quite different from relational databases that must scale up because they cannot create partitions to support distributed writes. That is what NoSQL databases are good at, but it also requires some operational skills, as partitioning implies handling a complex service infrastructure. In general, developers are afraid to approach a database because they cannot reuse the SQL skills that they probably learned at school. Developers are also focused on code, so they don’t like thinking about operational issues such as distributed services and partitioning. However, they do appreciate a schema-less Db that is resilient to iterative schema changes and that promotes code-first development, without using Object to Relational Mappers. Introducing Azure DocumentDb and its SQL language There are many open source implementations of NoSQL databases, which are created as open source contributions. Microsoft has implemented NoSQL databases only in the Azure space; otherwise, they only offer hosted implementations and no on-premise, packaged NoSQL databases. Azure DocumentDb is the proposition of a NoSQL, JSON document-based database. As a hosted solution, you can interact with it only using the public REST API, other than the Portal. It is a highly reliable service, as Azure DocumentDb is natively deployed with replica support. Every write performed on the primary database is replicated on two secondary distributed replicas, in different fault domains. Every read is distributed over the three copies to avoid bottlenecks. To ensure performance, DocumentDb databases are hosted on machines with SSD drives. To interact with DocumentDb from the Azure Portal, you need to create a Database Account: /> From the Portal, select DocumentDb, where it is a DocumentDb Account: /> To create a new account, we need to specify a name, a resource group, and a location. A database account is mainly two things: unit of authorization unit of consistency As a unit of authorization, a database account provides two sets of keys. A master key is necessary to perform all read/write operations to the database account. A read-only key can be used to perform read-only queries. /> As a unit of consistency, as we can select a global consistency for all databases and collections that it contains. You can select four levels of consistency: Strong: The client always sees completely consistent data. It is the consistency level with the slowest reads/writes, but it can be used for critical applications like the stock market, banking, and airline reservations. Session: The default consistency level: A client reads its own writes, but other clients reading this same data might see older values. Bounded Staleness: The client might see old data, but it can specify a limit for how old that data can be (ex. 2 seconds). 
Updates happen in the order that they are received. It is similar to the Session consistency level, but speeds up reads while still preserving the order of updates. Eventual: The client might see old data for as long as it takes a write to propagate to all replicas. This is for high performance and availability, but a client may sometimes read out-of-date information or see updates that are out of order. Consistency may also be specified at the query level. Future releases will update this area. /> A database account is not a database, while a database account can contain multiple databases. With a recent October 2016 update, portal presents an interface that invites users to create collections and not a database. This is because the notion of a database inside DocumentDb is a sort of placeholder, a namespace. It is now included inside the creation of collections. Azure DocumentDb Collections If we have a relational database in mind, we may recall that we can store entities as rows inside tables. But collections are very different. A table is a static container of homogenous and flat entities. A table ensures that every flat entity is composed of scalar properties as it cannot contain more complex structures, and it guarantees that all entities have the same properties, by name and by type. To create more complex schemas, a relational database uses multiple tables, relations, and referential integrity. Different entities are contained in different tables. In DocumentDb, collections are schema-less containers of JSON documents. This means that a single collection can contain multiple different entities as it does not ensure any schema. DocumentDb cannot ensure that two distinct instances of the same kind of document, say an Internet Order, contain the same properties because of an error or specification updates. Multiple collections inside a database are necessary, as a collection is both a unit of partitioning and a unit of throughput. It is a unit of partitioning because it can contain no more that 10Gb of data; with more than 10Gb of data, you must create multiple partitions. It is a unit of throughput because it can perform a limited number of operations. Operations are measured in terms of Resource Units (RU) and each operation can consume a different quantity of RU. Every collection has a quota of Request Units per second that it can spend in operations. This number is selected where a collection is created and, in some situations, can change if needed. This is a sample reference table containing costs in terms of operations performed on the collection: Operation RU Consumed (estimated) Reading a single 1KB document 1 Reading a single 2KB document 2 Query with a simple predicate for a 1KB document 3 Creating a single 1 KB document with 10 JSON properties (consistent indexing) 14 Create a single 1 KB document with 100 JSON properties (consistent indexing) 20 Replacing a single 1 KB document 28 Execute a stored procedure with two create documents 30 These are only reference values because they depend on the real size of the document handled. An available web tool can help you more precisely calculate expenses to test with your document. If in a second you try making requests for more than, say, 2500RUs, DocumentDb throttles the request. This is due to the multitenancy nature of DocumentDb. It must guarantee the performance declared when you create a collection; no more, no less. Another quota relates to size: A collection cannot be more than 10Gb. 
To use more space, you need to create other collections. This is why a collection is a unit of partitioning: You need to create multiple collections to handle all of the space you may need, and then you need to specify how to distribute documents over multiple collections. There are some recent updates in this field because you can use a client-side partition resolver in every query. As of late, you can also support server-side partitioning with partitioned collections. To create a collection, go to the DocumentDb account: /> Exploring Collections To create our first documents, we can use Document Explorer (available under the left side navigation menu): /> With Document Explorer, you can interactively write JSON documents directly inside a collection. /> Here are a few additional details about writing inside of a collection: JSON syntax uses curly brackets for objects and squared brackets for arrays. IDs are typically GUIDs as there is no tool that can guarantee any number sequence. Information is contained inside the object, so for example, you do not need to connect customers or products to decode the product or customer name. customer_id and product_id probably refer to other documents: This means that information is not always embedded. Information may be embedded or referenced. Calculations can be embedded inside the document. Id is always automatically updated as “id” property if not otherwise specified. “id” must be unique inside of a collection. Now we can try experimenting with Query Explorer. When you click Query Explorer, this opens a new blade that shows the query and the collection over which it is executed. Note that the query is executed inside a collection: It has no knowledge about other collections, so all documents must be contained in that collection. /> Also, note that: You express a query with a SQL-like language. Results are always a JSON array. A query is “charged” inside a Request Unit quota up to the collection. We can perform a more complex query if we look inside inner collections. For example, we can make a join because we need to create a different query focused on order item and not on the order. We want to create a query that lists ordered products independently from the order where they were defined. /> Conclusion DocumentDb is a great opportunity to approach a new database genre because its friction-less approach for developers uses objects that natively map with JSON and can be queried with an SQL-like language. DocumentDb is tailored for new Web and cloud-scale scenarios and for new cloud solutions that use multiple data sources. In future articles, we’ll explore partitioning in more detail using DocumentDb from the REST API and Sdks.
|
https://cloudacademy.com/blog/azure-documentdb-enterprise-market/
|
CC-MAIN-2021-43
|
refinedweb
| 2,754
| 52.49
|
Comparison of characters in C
I have a question about comparing a single char string in C inside a function. The code looks like this:
int fq(char *s1){ int i; for(i=0;i<strlen(s1);i++){ if(s1[i]=="?"){ printf("yes"); } } return 1; }
Even if s1 = "???" it never prints out yes. I was able to solve the problem, but I'm curious why it works in one direction but not the other. This is the piece of code that works:
int fq(char *s1,char *s2){ int i; char q[]="?"; for(i=0;i<strlen(s1);i++){ if(s1[i]==q[0]){ printf("yes"); } } return 1; }
source to share
As the first sample compares addresses instead of characters.
There is
==
no string type in c and operator , when applied to an array or pointer, it compares addresses instead of content.
Your function will be correctly written as follows
int fq(char *s1,char *s2) { int i; for (i = 0 ; s1[i] ; ++i) { if (s1[i] == 'q') printf("yes"); } return 1; }
you can compare
s1[i]
with
'q'
.
source to share
if(s1[i]=="?"){
is not the correct syntax to test if it is a
s1[i]
character
'?'
. It should be:
if(s1[i] == '?'){
You might want to figure out how you can change compiler options so that you get warnings when such expressions exist in your code base.
Using the
-Wall
c option
gcc
, I get the following message:
cc -Wall soc.c -o soc soc.c: In function ‘fq’: soc.c:7:15: warning: comparison between pointer and integer if(s1[i]=="?"){ ^ soc.c:7:15: warning: comparison with string literal results in unspecified behavior [-Waddress]
source to share
In a C character array, that is, the string has the syntax like "". For one character, the syntax is ''.
In your case: it should be:
if(s1[i]=='?')
If you want to compare it in string form, you need strcmp. Since the '==' operator is not capable of comparing strings in C.
To compare two strings, we can use:
if(!strcmp(s1,q))
And for this operation, you need to add a string.h header like:
#include <string.h>
To compare strings with the '==' operator, you need to overload the operator. But C doesn't support operator overloading. You can use C ++ language for this.
source to share
|
https://daily-blog.netlify.app/questions/2221078/index.html
|
CC-MAIN-2021-21
|
refinedweb
| 389
| 76.62
|
(For more resources on this subject, see here.)
Important preliminary points
For this section you will need to have Apache Ant on the machine that you are going to have running Grid instances. You can get this from for Windows and Mac. If you have Ubuntu you can simply do sudo apt-get install ant1.8, which will install all the relevant items that are needed onto your Linux machine. You will also have to download the latest Selenium Grid from.
Understanding Selenium Grid
Selenium Grid is a version of Selenium that allows teams to set up a number of Selenium instances and then have one central point to send your Selenium commands to. This differs from what we saw in Selenium Remote Control (RC) where we always had to explicitly say where the Selenium RC is as well as know what browsers that Remote Control can handle.
With Selenium Grid, we just ask for a specific browser, and then the hub that is part of Selenium Grid will route all the Selenium commands through to the Remote Control you want.
Selenium Grid also allows us to, with the help of the configuration file, assign friendly names to the Selenium RC instances so that when the tests want to run against Firefox on Linux, the hub will find a free instance and then route all the Selenium Commands from your test through to the instance that is registered with that environment. We can see an example of this in the next diagram.
We will see how to create tests for this later in the chapter, but for now let's have a look at making sure we have all the necessary items ready for the grid.
Checking that we have the necessary items for Selenium Grid
Now that you have downloaded Selenium Grid and Ant, it is always good to run a sanity check on Selenium Grid to make sure that we are ready to go. To do this we run a simple command in a console or Command Prompt.
Let's see this in action.
Time for action – doing a sanity check on Selenium Grid
- Open a Command Prompt or console window.
- Run the command ant sanity-check. When it is complete you should see something similar to the next screenshot:
What just happened?
We have just checked whether we have all the necessary items to run Selenium Grid. If there was something that Selenium relied on, the sanity check script would output what was needed so that you could easily correct this. Now that everything is ready, let us start setting up the Grid.
Selenium Grid Hub
Selenium Grid works by having a central point that tests can connect to, and commands are then pushed to the Selenium Remote Control instances connected to that hub. The hub has a web interface that tells you about the Selenium Remote Control instances that are connected to the Hub, and whether they are currently in use.
Time for action – launching the hub
Now that we are ready to start working with Selenium Grid we need to set up the Grid. This is a simple command that we run in the console or Command Prompt.
- Open a Command Prompt or console window.
- Run the command ant launch-hub. When that happens you should see something similar to the following screenshot:
We can see that this is running in the command prompt or console. We can also see the hub running from within a browser.
If we put where nameofmachine is the name of the machine with the hub. If it is on your machine then you can place. We can see that in the next screenshot:
What just happened?
We have successfully started Selenium Grid Hub. This is the central point of our tests and Selenium Grid instances. We saw that when we start Selenium Grid it showed us what items were available according to the configuration file that is with the normal install.
We then had a look at how we can see what the Grid is doing by having a look at the hub in a browser. We did this by putting the URL where nameofmachine is the name of the machine that we would like to access with the hub. It shows what configured environments the hub can handle, what grid instances are available and which instances are currently active.
Now that we have the hub ready we can have a look at starting up instances.
Adding instances to the hub
Now that we have successfully started the Selenium Grid Hub, we will need to have a look at how we can start adding Selenium Remote Controls to the hub so that it starts forming the grid of computers that we are expecting. As with everything in Selenium Grid, we need Ant to start the instances that connect. In the next few Time for action sections we will see the different arguments needed to start instances to join the grid.
Time for action – adding a remote control with the defaults
In this section we are going to launch Selenium Remote Control and get it to register with the hub. We are going to assume that the browser you would like it to register for is Firefox, and the hub is on the same machine as the Remote Control. We will pass in only one required argument, which is the port that we wish it to run on. However, when starting instances, we will always need to pass in the port since Selenium cannot work out if there are any free ports on the host machine.
- Open a Command Prompt or console window.
- Enter the command ant –Dport=5555 launch-remote-control and press Return. You should see the following in your Command Prompt or console:
- And this in the Selenium Grid Hub site:
What just happened?
We have added the first machine to our own Selenium Grid. It has used all the defaults that are in the Ant build script and it has created a Selenium Remote Control that will take any Firefox requests, located on the same machine as the host of Selenium Remote Control Grid. This is a useful way to set up the grid if you just want a large number of Firefox-controlling Selenium Remote Controls.
(For more resources on this subject, see here.)
Adding Selenium Remote Controls for different machines
Selenium Grid is most powerful when you can add it to multiple operating systems. This allows us to check, for instance, whether Firefox on Windows and Firefox on Linux is doing the same thing during a test. To register new remote controls to the grid from a machine other than the one hosting the hub, we need to tell it where the hub is. We do this by passing in the –DhubURL argument when calling the Ant script. We also need to pass in the –Dhost argument with the name of the machine so that we can see where it is being hosted.
Let's see this in action.
Time for action – adding Selenium Remote Controls for different machines
For this Time for action you will need to have another machine available for you to use. This could be the Ubuntu machine that you needed for the previous chapter. I suggest giving the –Dhost argument the name of the machine that it is running on. If you have a small Grid then you can name them according to the operating system that it is run on.
- Open a Command Prompt or console.
- Run the command ant –Dport=9999 –DhubURL= –Dhost=nameofcurrentmachine launch-remote-control.
- When you have run this, your Grid site should appear as follows:
What just happened?
We have added a new remote control to the grid from a machine other than where the Selenium Grid Hub is running. This is the first time that we have been able to set up our remote control instances in a grid. We learnt about the –DhubURL argument and the –Dhost argument that is needed when launching the remote control. We then saw that it has updated the Grid site that is running on the hub.
Now that we have this working as we expect, let us have a look at setting up browsers other than Firefox.
Adding Selenium Remote Control for different browsers
Selenium Grid is extremely powerful when we start using different browsers on the grid, since we can't run all the different browsers on a single machine due to operating systems and browser combinations. There are currently up to nine different combinations that are used by most people, so getting Selenium Grid to help with this can give you the test coverage that you need. To do this we pass in the –Denvironment argument in our Ant call. The value that we assign to this has to be Selenium Grid configuration. The Selenium Grid configuration comes with a number of preset items. This is visible from the Selenium Grid Hub page that we have seen already. Let us now see how we can set the items.
Time for action – setting the Environment when starting Selenium Remote Control
Now that we need to get Internet Explorer Selenium Remote controls added to our grid. We have to add the –Denvironment argument to our call with the target on the configured environments. Since we want an Internet Explorer remote control we can use the IE on Windows targets.
- Open a console or Command Prompt window.
- Run the command ant –Dport=9998 –DhubURL=addressofhub –Dhost=nameofremotehost –Denvironment="IE on Windows" launch-remote-control.
- When it is running, your hub page should appear as follows:
What just happened?
We have just seen how we can create more verbose environment names such as "IE on Windows". We also saw how we can start a remote control for different browsers. This is quite useful when we need to test a large amount of browser and operating system combinations.
Updating the Selenium Grid Configuration
The Configured Environments come with a standard installation but there are times where it would be useful to set up your own targets. This could be when browsers that used to work on a specific operating system now work on multiple operating systems. We have already seen this happen with Google Chrome.
To update the configuration we need to have a look at the grid_configuration.yml. This is a YAML file that contains all the configurations that Selenium grid has. We need to add a new name and browser for new items. For example:
-name: "Google Chrome on Windows"
Browser: "*googlechrome"
Time for action – adding new items to the Grid Configuration
As we saw from the default configuration on that is distributed, it doesn't have the different flavors of Google Chrome. We can add this to the file this time so we can extend the coverage that we need.
- Open grid_configuration.yml in a text editor.
Add the following to the file:
-name: "Google Chrome on Windows"
Browser: "*googlechrome"
- Start the hub.
- In another console or Command Prompt, run the newly created item. While running, your Grid should appear as follows:
What just happened?
We have just added our first item to the Grid Configuration. This could be renaming an item so that it makes a lot more sense, or if there is something missing that is needed. We can give each of the items meaningful names that then point to a specific browser. We can also use this to set up custom browsers if need be.
Pop quiz – doing the thing
- What is the command required to start the Hub?
- What is the URL where one can see what is happening on the Grid?
- How do you specify the port that the Remote Control is running on?
- How do you specify which browser you would like the Remote Control to be registered with?
Running tests against the Grid
Now that we have set up the Grid with different instances, we should have a look at how we can write tests against these Remote Controls on the Grid. We can pass in the value of the target that we can see in the grid and then run the tests. So instead of passing in *firefox you can use "firefox on linux" and then run the tests as usual.
Let's see this in action.
Time for action – writing tests against the grid
- Create a new test file.
- Populate it with a test script that accesses an item on the grid and then works against it. Your script should look similar to the following:
import org.junit.*;
import com.thoughtworks.selenium.*;

public class TestExamples2 {

    Selenium selenium;

    @Before
    public void setUp() {
        selenium = new DefaultSelenium("192.168.157.153", 4444,
                "Google Chrome on Linux",
                "");  // the fourth argument is the base URL of the application under test
        selenium.start();
    }

    @After
    public void tearDown() {
        selenium.stop();
    }

    @Test
    public void ShouldRunTestsAgainstGoogleChromeOnLinux() {
        selenium.open("/");
        selenium.click("link=chapter2");
    }
}
What just happened?
We have just seen how we can write tests that can run against the Grid and then run them. When the tests are running the grid will show which Remote Control is currently in use and which grid items are currently free. We can see this in the following screenshot:
Summary
We learned a lot in this article about how to set up Selenium Grid and all the different arguments needed, as well as running our tests against the Grid.
Specifically, we covered:
- Starting Selenium Grid Hub: In this section we had a look at how we can start up Selenium Grid Hub that is the central point for Selenium Grid.
- Setting up Selenium Grid Remote Controls: We had a look at all the arguments that are needed to add a Remote Control to the Grid so that we can use it. This gives us a more manageable view of our grid so that we can work with it.
We also discussed how we can create tests that use the grid.
Authenticating Users in Nginx Using Both User Password and Client Certificates
In some use cases, you want to protect different parts of a Web application with different approaches. For example, the admin-related resources normally require a stronger mechanism than the user-related ones. In the following I will show how to use Nginx with a client-side certificate for the resources under the /admin namespace for admins, and a user name and password for normal users.
Generating Certificates and Keys
First, let's take a look at how to generate certificates/keys for the client, server, and CA, based on the instructions from this blog. If you want to know the details of these commands, you should definitely check it out.
Note: before you start out, I would recommend using one password for all these commands because it will make your life easier than using multiple passwords. This is not recommended for a production environment though. We also assume you've got nginx installed. If not yet, read this article.
After the above commands, you should have these files in the /etc/nginx/certs directory:
Configuring Nginx
Assume that we'll have two resource groups: one starts with /admin and the other is the rest. All these are protected using HTTPS on port 443. To use basic authentication, you should always use HTTPS instead of HTTP, which can be intercepted by a network sniffer. The user name and password are not encrypted but encoded with BASE64, which can be very easily decoded.
With the resources under /admin, we'll require a CA-signed client certificate. If the certificate is a valid one, the request is proxied through; otherwise, it's blocked. For the rest of the resources, it will ask for a user name and password, which has been covered in my previous article. We assume there is a Tomcat server running at port 8000. If not, you can change it to another server.
Here is the configuration which can be placed at /etc/nginx/conf.d/proxy.conf. You can change the name, but the extension should be .conf.
As shown above, the ssl_verify_client is defined as optional. Within the /admin location, the following logic checks if the ssl client has been successfully verified. (see more at:)
Testing
After you change any nginx configuration, don't forget to run the reload command "nginx -s reload" (sometimes it's not enough and you need to restart nginx). When the new configuration takes effect, you can use curl commands to test it as follows. It assumes the password for the root user is "doublecloud" and the server is 192.168.8.118. Change them to yours when you run them. If you try them on another machine, you want to copy the client key/cert over. You can also use a browser to try them out.
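If you prefer scripting the checks rather than typing curl by hand, the same requests can be made from Python with the requests library. This is only a sketch: the certificate file names (ca.crt, client.crt, client.key) are assumptions standing in for whatever names your own generation step produced under /etc/nginx/certs.

import requests

BASE = "https://192.168.8.118"                  # server address used in this post
CA_CERT = "ca.crt"                              # assumed name of the CA certificate file
CLIENT_CERT = ("client.crt", "client.key")      # assumed client cert/key file names

# Normal resources: HTTP basic auth over HTTPS.
r = requests.get(BASE + "/", auth=("root", "doublecloud"), verify=CA_CERT)
print("user area:", r.status_code)

# Admin resources: present the CA-signed client certificate.
r = requests.get(BASE + "/admin/", cert=CLIENT_CERT, verify=CA_CERT)
print("admin with cert:", r.status_code)

# Admin resources without a client certificate should be rejected.
r = requests.get(BASE + "/admin/", verify=CA_CERT)
print("admin without cert:", r.status_code)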
Which of them work and which do not? I'll leave it for you to try out.
This function returns a dictionary containing the hit counts for each individual IP that has accessed your Apache web server.
This function is quite useful for many things. For one, I often use it in my code to determine how many of my "hits" are actually originating from locations other than my local host. This function was also used to chart which IP's are most actively viewing pages that are served by a particular installation of Apache.
As for the method of "validating" the IP, it is as follows: 1) an IP address will never be longer than 15 characters (4 sets of triplets and 3 periods); 2) an IP address will never be shorter than 7 characters (4 single digits and 3 periods). The whole purpose of this validation is not to enforce stringent validation (for that we could use a regular expression), but rather to avoid the possibility of putting blatantly "garbage" data into the dictionary.
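A minimal sketch of such a function (my own reconstruction of the approach described above, not the original recipe) might look like this. It assumes the client IP is the first whitespace-separated field of each access-log line and that the log lives at a typical path; the upper bound of the length check already uses the "< 16" correction discussed in the comments below.

def hits_per_ip(logfile="/var/log/apache2/access.log"):
    """Return a dictionary mapping each client IP to its hit count."""
    hits = {}
    with open(logfile) as log:
        for line in log:
            Ip = line.split(" ")[0]
            # ensure length of the ip is proper: a dotted-quad IPv4
            # address is between 7 and 15 characters long
            if 6 < len(Ip) < 16:
                hits[Ip] = hits.get(Ip, 0) + 1
    return hits

if __name__ == "__main__":
    for ip, count in sorted(hits_per_ip().items(), key=lambda item: -item[1]):
        print(ip, count)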
line.split ??? Ip = line.split(" ")[0]
AttributeError: 'string' object has no attribute 'split'
Older version of Python.. You must be using Python 1.52, or maybe 1.6, or something?
".split" (without importing 'string') is a feature introduced in Python 2.0+
To make that work in older versions you will need to change it to something like:
import string
Ip = string.split(line, " ")[0]
Strop. In Python 2.0 and above the string module is used for backward compatibility. Instead, a builtin strop module is used.
And now, even strop is obsolete, with the string functions being built-in
If you use the string module nevertheless, there is no overhead.
Thanks for the code, but there is a small bug. The part where the IP is tested for a good length is not wholly correct:

# ensure length of the ip is proper: see discussion
if 6 < len(Ip) < 15:
Again, the small bug:

# ensure length of the ip is proper: see discussion
if 6 < len(Ip) < 15:

should be:

if 6 < len(Ip) < 16:
Logging
This document will explain Ray’s logging system and its best practices.
Driver logs
An entry point of Ray applications that calls ray.init(address='auto') or ray.init() is called a driver. All the driver logs are handled in the same way as normal Python programs.
Worker logs
Ray’s tasks or actors are executed remotely within Ray’s worker processes. Ray has special support to improve the visibility of logs produced by workers.
By default, all of the tasks/actors stdout and stderr are redirected to the worker log files. Check out Logging directory structure to learn how Ray’s logging directory is structured.
By default, all of the tasks/actors stdout and stderr that is redirected to worker log files are published to the driver. Drivers display logs generated from its tasks/actors to its stdout and stderr.
Let’s look at a code example to see how this works.
import ray

# Initiate a driver.
ray.init()

@ray.remote
def task():
    print("task")

ray.get(task.remote())
You should be able to see the string task from your driver stdout.
When logs are printed, the process id (pid) and an IP address of the node that executes tasks/actors are printed together. Check out the output below.
(pid=45601) task
How to set up loggers
When using ray, all of the tasks and actors are executed remotely in Ray’s worker processes. Since Python logger module creates a singleton logger per process, loggers should be configured on per task/actor basis.
Note
To stream logs to a driver, they should be flushed to stdout and stderr.
import ray
import logging

# Initiate a driver.
ray.init()

@ray.remote
class Actor:
    def __init__(self):
        # Basic config automatically configures logs to
        # be streamed to stdout and stderr.
        # Set the severity to INFO so that info logs are printed to stdout.
        logging.basicConfig(level=logging.INFO)

    def log(self, msg):
        logging.info(msg)

actor = Actor.remote()
ray.get(actor.log.remote("A log message for an actor."))

@ray.remote
def f(msg):
    logging.basicConfig(level=logging.INFO)
    logging.info(msg)

ray.get(f.remote("A log message for a task"))
(pid=95193) INFO:root:A log message for a task (pid=95192) INFO:root:A log message for an actor.
How to use structured logging
The metadata of tasks or actors may be obtained by Ray’s runtime_context APIs. Runtime context APIs help you to add metadata to your logging messages, making your logs more structured.
import ray

# Initiate a driver.
ray.init()

@ray.remote
def task():
    print(f"task_id: {ray.get_runtime_context().task_id}")

ray.get(task.remote())
(pid=47411) task_id: TaskID(a67dc375e60ddd1affffffffffffffffffffffff01000000)
Logging directory structure
By default, Ray logs are stored in a /tmp/ray/session_*/logs directory.
Note
The default temp directory is /tmp/ray (for Linux and Mac OS). If you’d like to change the temp directory, you can specify it when ray start or ray.init() is called.
A new Ray instance creates a new session ID to the temp directory. The latest session ID is symlinked to /tmp/ray/session_latest.
Here’s a Ray log directory structure. Note that .out is logs from stdout/stderr and .err is logs from stderr. The backward compatibility of log directories is not maintained.
dashboard.log: A log file of a Ray dashboard.
dashboard_agent.log: Every Ray node has one dashboard agent. This is a log file of the agent.
gcs_server.[out|err]: The GCS server is a stateless server that manages business logic that needs to be performed on GCS (Redis). It exists only in the head node.
log_monitor.log: The log monitor is in charge of streaming logs to the driver.
monitor.log: Ray’s cluster launcher is operated with a monitor process. It also manages the autoscaler.
monitor.[out|err]: Stdout and stderr of a cluster launcher.
plasma_store.[out|err]: Deprecated.
python-core-driver-[worker_id]_[pid].log: Ray drivers consist of CPP core and Python/Java frontend. This is a log file generated from CPP code.
python-core-worker-[worker_id]_[pid].log: Ray workers consist of CPP core and Python/Java frontend. This is a log file generated from CPP code.
raylet.[out|err]: A log file of raylets.
redis-shard_[shard_index].[out|err]: A log file of GCS (Redis by default) shards.
redis.[out|err]: A log file of GCS (Redis by default).
worker-[worker_id]-[job_id]-[pid].[out|err]: Python/Java part of Ray drivers and workers. All of stdout and stderr from tasks/actors are streamed here. Note that job_id is an id of the driver.
io-worker-[worker_id]-[pid].[out|err]: Ray creates IO workers to spill/restore objects to external storage by default from Ray 1.3+. This is a log file of IO workers.
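As an illustration of the directory layout described above (this snippet is not part of the Ray documentation), a short script can walk the latest session's log directory and print the tail of each worker's stdout file. The path assumes the default temp directory; adjust it if you started Ray with a custom one.

import glob
import os

LOG_DIR = "/tmp/ray/session_latest/logs"   # default location documented above

for path in sorted(glob.glob(os.path.join(LOG_DIR, "worker-*.out"))):
    with open(path) as f:
        tail = f.readlines()[-5:]
    print(f"==> {os.path.basename(path)}")
    for line in tail:
        print("   ", line.rstrip())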
Log rotation
Ray supports log rotation of log files. Note that not all components currently support log rotation (raylet, Python/Java worker, and Redis logs are not rotated).
By default, logs are rotated when they reach 512MB (maxBytes), and there can be up to 5 backup files (backupCount). Indexes are appended to all backup files (e.g., raylet.out.1). If you'd like to change the log rotation configuration, you can do it by specifying environment variables. For example,
RAY_ROTATION_MAX_BYTES=1024; ray start --head  # Start a ray instance with maxBytes 1KB.
RAY_ROTATION_BACKUP_COUNT=1; ray start --head  # Start a ray instance with backupCount 1.
Hiya, My name is XXXXX XXXXX I've been repairing and upgrading computers for over 10 years, I will help you to the best of my ability.
Let's try starting the Windows Installer service and see if that helps, but let me know if it says its already started.
Windows Installer is not in the list of programs in the details pane.
That may be the issue!
I assume that you may be able to advise on how to restore Windows Installer?
Hi im back sorry
One moment while I review this
Ok sorry took me a minute to find this
This is where you need to download the windows installer from
If you go down to instructions it will explain a little more
I used the link now I need to make sure that I download the correct file...
Is x64 Platform: Windows6.0-KB942288-v2-x64.msu the correct one for Vista?
x86 Platform: Windows6.0-KB942288-v2-x86.msu
x64 Platform: Windows6.0-KB942288-v2-x64.msu
IA64 Platform: Windows6.0-KB942288-v2-ia64.msu
Okay, so I should download all three of them, correct?
No, click start
Go to control panel
System and Maintenance
then select system
What does it say under system type?
Windows Vista Home Premium, Service Pack 2, 64 Bit OS,
x64 Platform: Windows6.0-KB942288-v2-x64.msu
Okay good, I will proceed to download Windows6.0-KB942288-v2-x64.msu (2.9 MB) only, right?
yes please
Im very sorry for the delay in responses
I received a message that the update does not apply to my system.
Thats odd
try the next one down
okay
Received the same message that it doesn't apply to my system.
Ok thats weird, cause it does
One moment please
Try this one instead
Is it downloading this time?
I downloaded it ... I'll go see if it shows up in the detail pane now.
Windows Installer is still not listed in the detail pane.
Do you want me to try and download the windows updates to see if it works?
Yes lets download all updates
It should fix that automatically but we will see
in process now.
ok thanks
All seven failed again.
The codes given are 80070641 and 641
I just went through the download process again from the last link you provided and received the message that it doesn't apply to my system.
oh dear, allow me a moment to think on this
Okay, do think a restart may activate it?
Im going to open this question up to other experts while I think about this some more, if no other expert come in to help you I will be back with an answer.
We can certainly try that
Lets try and see what happens
I will stay here
Okay, will do.
Save this page to your favorites for me so you can get back
Please download this tool if you come back and the installer doesnt work
Installation errors caused due to incorrect patch registration may be corrected using this tool.
Okay, the restart didn't help. I'll save the link to my favorites.
Which file should I download?
x64
I am hoping this helps
After that I am going to open this up for another expert to look at if this fails as well
I will await your reply
Downloaded, I have what I'll call a system window that appeared saying a product code must be specified and press any key to exit. I exited it.
yeah needs to validate your windows is genuine
It is ... It came preloaded on my Dell
ok
So it wont open either
Not sure what you mean?
the program we just downloaded to fix things
Didnt do anything?
nope, nothing
Ok, i will opt out of this question, I am not feeling very good which does not help me think very clearly. Hopefully the next expert can get this squared away for you
Sorry :-(
Sorry, Hope you feel better ... should I just standby?
Still waiting
wmic /namespace:\\root\default path systemrestore get
You will see a screen similar to this (note the latest Sequence Numbers):
path systemrestore call restore <SequenceNumber>
Regards
Jins
Please ACCEPT My Solution If It Helped ..
POSITIVE Feedback And A BONUS Are Always Appreciated ..
PLEASE CONTACT ME BEFORE LEAVING A NEGATIVE FEEDBACK
Windows Installer was not listed in the services tab of the System Configuration Utility.
Now type the following command:
net user administrator /active:yes
You should see a message that the command completed successfully. Log out, and you’ll now see the Administrator account as a choice.
Are there any other possible actions that I can take short of reformatting my disk and relaoding Vista?
Start the Windows Installer service
Windows Installer Registry Fix
If the above steps do not help or if the Windows Installer service is not listed in the Services applet, follow these steps:
The Windows Installer service was not listed, so I moved on to the second set of instructions. I saved the msiserver.zip file to my desktop. I unzipped it and saved the msiserver.reg file to my desktop. When I right click on it and select MERGE I get an Open File Security Warning window that provides two choices, "Run" or "Cancel". When I run it I get the following opened in Notepad (I wasn't sure what to do with it, so I just performed a restart and tried to download the updates, which failed because Windows Installer cannot be accessed):
\\0000""Count"=dword:00000001"NextInstance"=dword:00000001
IO::Storm - IO::Storm allows you to write Bolts and Spouts for Storm in Perl.
version 0.17
package SplitSentenceBolt;
use Moo;
use namespace::clean;

extends 'Storm::Bolt';

sub process {
    my ($self, $tuple) = @_;
    my @words = split(' ', $tuple->values->[0]);

    foreach my $word (@words) {
        $self->emit([ $word ]);
    }
}

SplitSentenceBolt->new->run;
IO::Storm allows you to leverage Storm's multilang support to write Bolts and Spouts in Perl. As of version 0.02, the API is designed to very closely mirror that of the Streamparse Python library. The exception being that we don't currently support the BatchingBolt class or the emit_many methods.
To create a Bolt, you want to extend the Storm::Bolt class.

package SplitSentenceBolt;
use Moo;
use namespace::clean;

extends 'Storm::Bolt';
To have your Bolt start processing tuples, you want to override the process method, which takes an IO::Storm::Tuple as its only argument. This method should do any processing you want to perform on the tuple and then emit its output.

sub process {
    my ($self, $tuple) = @_;
    my @words = split(' ', $tuple->values->[0]);

    foreach my $word (@words) {
        $self->emit([ $word ]);
    }
}
To actually start your Bolt, call the run method, which will initialize the bolt and start the event loop.

SplitSentenceBolt->new->run;
By default, the Bolt will automatically handle acks, anchoring, and failures. If you would like to customize the behavior of any of these things, you will need to set the auto_ack, auto_anchor, or auto_fail attributes to 0. For more information about Storm's guaranteed message processing, please see their documentation.
To create a Spout, you want to extend the Storm::Spout class.

package SentenceSpout;
use Moo;
use namespace::clean;

extends 'Storm::Spout';
To actually emit anything on your Spout, you have to implement the next_tuple method.

my $sentences = ["a little brown dog",
                 "the man petted the dog",
                 "four score and seven years ago",
                 "an apple a day keeps the doctor away",];
my $num_sentences = scalar(@$sentences);

sub next_tuple {
    my ($self) = @_;
    $self->emit( [ $sentences->[ rand($num_sentences) ] ] );
}
To actually start your Spout, call the run method, which will initialize the Spout and start the event loop.

SentenceSpout->new->run;
If you need to have some custom action happen when your component is being initialized, just override the initialize method, which receives the Storm configuration for the component and information about its place in the topology as its arguments.

sub initialize {
    my ( $self, $storm_conf, $context ) = @_;
}
Use the log method to send messages back to the Storm ShellBolt parent process; these will be added to the general Storm log.

sub process {
    my ($self, $tuple) = @_;
    ...
    $self->log("Working on $tuple");
    ...
}
This software is copyright (c) 2014 by Educational Testing Service.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
Hi
I've made a lib to replace the default Matplotlib toolbar; it looks like this:

It replaces the pan/zoom button with wheels and handles, using the new subplot2grid API of matplotlib 1.0.1.

The reason behind that change is that the people I'm currently writing my application for can hardly handle anything that doesn't look like their 50-year-old potentiostat. Also, I needed some custom event code to interact with the figure (typically to mark a part of the graph), and enabling/disabling pan/zoom mode each time I need to make a change to the figure is not convenient.

Here are the sources:

Currently it supports only the subplot() API, similar to the Matplotlib API with the same name; also it requires quite a lot of boilerplate code - see simpletest.py or test.py for example.

As a bonus you can add your own buttons to that new toolbar, however my designer skills are rather poor, so it looks like an MS Paint drawing.

If anyone is interested I can extend it to the point where you need only to write "import vintage" to get the new controls - it will replace several matplotlib functions like pyplot.figure(), pyplot.subplot() and pyplot.connect() with its own handlers, right inside the pyplot module namespace.
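To make the "import vintage" idea concrete, here is a rough sketch of how such a module could re-bind names inside the pyplot namespace (my own illustration, not code from the library; vintage_figure and the commented-out attach_vintage_toolbar hook are hypothetical):

import matplotlib.pyplot as plt

_original_figure = plt.figure

def vintage_figure(*args, **kwargs):
    fig = _original_figure(*args, **kwargs)
    # attach_vintage_toolbar(fig)  # hypothetical hook that builds the wheel/handle controls
    return fig

# Re-bind the public name inside the pyplot module namespace, so existing
# scripts calling pyplot.figure() transparently get the replacement toolbar.
plt.figure = vintage_figure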
Nate Dogg III Wrote:Getting 'script failed' for this, here's a bit of my debug log:
Any ideas? Downloaded through the addons repo, running from the scripts menu, rev31504 for win7.
Tomkun Wrote:Hello there.
I have to say, that I really love the CDart, but I have been less than disappointed by my own efforts at creating them.
I realised quite quickly that I was having trouble making everything line up, especially when I used scans from websites. Sometimes the perspective is wrong, there is a shadow or whatever.
So, I started cropping only the labels, and removing anything that had didn't have any design on it. At first I was very pleased with the results, but when I saw them in XBMC, they looked odd.The most glaring issue was that each CD had a different sized hole in the middle.
So, with the aid of Gimp, and a couple of plugins, I have found a way to rectify this issue.
First, I created a blank CD image 454x454 and tried my best to make it look shiny like a blank CD does.
Second, I placed the labels I had already created over it and merged the layers.
Finally, I resized back to 450x450. (The reason I made the image slighty larger in the first place is because most CDs have a border around the label).
Now, I am very pleased with the results, although I am sure someone can make a better background image than I.
I'd love to show you the results, but I can't figure out how to upload pics.
Quote:15:12:00 T:2822761328 M:1544048640 INFO: -->Python script returned the following error<--
15:12:00 T:2822761328 M:1544048640 ERROR: Error Type: exceptions.ImportError
15:12:00 T:2822761328 M:1544048640 ERROR: Error Contents: No module named pysqlite2
15:12:00 T:2822761328 M:1544048640 ERROR: Traceback (most recent call last):
File "/home/xbmc/.xbmc/addons/script.cdartmanager/default.py", line 31, in ?
import gui
File "/home/xbmc/.xbmc/addons/script.cdartmanager/resources/lib/gui.py", line 25, in ?
from pysqlite2 import dbapi2 as sqlite3
ImportError: No module named pysqlite2
15:12:00 T:2822761328 M:1544048640 INFO: -->End of Python script error report<--
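For reference, the line that fails in the traceback above is "from pysqlite2 import dbapi2 as sqlite3"; a common way to make such an import robust (shown only as an illustration, not as the addon's actual code) is to fall back to the sqlite3 module bundled with Python:

try:
    from pysqlite2 import dbapi2 as sqlite3
except ImportError:
    import sqlite3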
giftie Wrote:Hooper818 - What version of XBMC are you using?
MacLeod_1980 - What version of XBMC are you using?
giftie Wrote:did you select the option in the settings menu?
giftie Wrote:you need to do the following:
svn up(or git pull --whichever one you use)
./bootstrap
./configure
make
make -C lib/addons/script.module.pil
make -C lib/addons/script.module.pysqlite
make install
Elektrified X.org released
Posted Nov 30, 2004 9:45 UTC (Tue) by marduk (subscriber, #3831)
[Link]
Posted Nov 30, 2004 9:56 UTC (Tue) by evgeny (subscriber, #774)
[Link]
Only if Gnome is run from /etc/inittab ;-)
Posted Nov 30, 2004 10:17 UTC (Tue) by philips (guest, #937)
[Link]
I wouldn't claim that X config is easy. But I believe that it is much easier to parse by human beings than what is proposed.
Anyway, it still comes down to the question "what to put where".
Apache config editors are available in abundance - why can't the same approach be taken for X?
People already developed xmlgrep - so that extracting information from structured storage is easy.
Modification of this configuration must be complicated - so as to fence off people who do not understand what they are doing.
And what is more. I'm strictly opposed to "system-wide" stuff. One thing is death of Gnome due to damaged config. Another thing is "system-wide" death of system due to damaged config. Different apps have different configs and different priorities for configs. For example, you do not want to mess with /etc/inittab - and you definitely do not want users to be able to manipulate it easily. One size never fits all.
Posted Nov 30, 2004 11:51 UTC (Tue) by emkey (guest, #144)
[Link]
Change in heart for M$
Posted Nov 30, 2004 12:44 UTC (Tue) by jimwelch (guest, #178)
[Link]
Posted Nov 30, 2004 13:11 UTC (Tue) by tzafrir (subscriber, #11501)
[Link]
Elektra also does not need its own daemon. The kernel already does the job.
[*] Think of Hans Reiser's ideas.
Posted Nov 30, 2004 13:47 UTC (Tue) by evgeny (subscriber, #774)
[Link]
A single corrupted byte in the filesystem block layer could be as dangerous. It's the fsck maturity that eliminates these problems in almost 100% of cases. Similarly, a smart XML parser can be built that can recover from such errors.
> Think of Hans Reiser's ideas.
Yeah, if reiserfs-4 (or -5?) existed years ago AND under all Unices, that could make a difference.
Posted Nov 30, 2004 15:42 UTC (Tue) by tzafrir (subscriber, #11501)
[Link]
However why replicate so many features that the file system already give you in a database? Or in 2500 implementations of 50 different configuration formats.
Posted Dec 1, 2004 3:03 UTC (Wed) by evgeny (subscriber, #774)
[Link]
I see no reason why a DB should inherently be less stable.
> However why replicate so many features that the file system already give you in a database?
Because FS will never provide me with all the advanced features one expects from a decent configuration system - not without stacking dozens of extra daemons/utilities/... on top of it, at least.
> Or in 2500 implementations of 50 different configuration formats.
Of course. I said it in the beginning, that I like the _idea_ itself - to replace the whole current zoo of config approaches with a single API. It's the _implementation_ that seems to me a dead end.
Posted Dec 1, 2004 9:23 UTC (Wed) by tzafrir (subscriber, #11501)
[Link]
This is an important requirement: user applications need not run as root to read configuration. But there should be a place for the secret data.
But please name one thing a database provides that a filesystem doesn't. What database exactly (daemon? file-based? which one?)
Posted Dec 1, 2004 14:37 UTC (Wed) by evgeny (subscriber, #774)
[Link]
Depends on the implementation. For a DB _service_ (a daemon running) it's solved like for any other similar problem (and there is no need for it to run under a privileged account - e.g. slapd on my server runs as an unprivileged user 'ldap', yet system-wide authentication works fine). For a simple file-based DB like SQLite it can be solved by splitting the whole data into a few files (1 - writable by root, readable by everyone; 2 - r/w by root ony, plus, similarly, two files per user account).
> But please name one thing a database provides that a filesystem doesn't.
Expressions (aka views). E.g. SELECT * FROM user WHERE user.id > 400 - to have a list of "real" users - used, for example, to send an announce email. Etc. Other things are replication, atomicity, true locking, notifications, rollbacks, versioning, transparent remote access, scalability (in _both_ directions). You get all these for free with a decent RDBMS, and NONE of them with an existing filesystem.
Posted Dec 2, 2004 15:55 UTC (Thu) by tzafrir (subscriber, #11501)
[Link]
OTOH, the strict typing of the data will get back at you when you try something like:
grep -rl /home/joe /etc/
> Other things are replication,
Try rsync
> atomicity, true locking,
What about filesystem-level locking?
> notifications,
Change notification for a directory is available.
> rollbacks, versioning,
Put arch/subversion in there (cvs probably won't do as it does not support renames)
> transparent remote access,
Add a remote-access daemon. Quite simple to implement. You have to figure out the authentication method first. Currently practically no existing daemon intgrates well enough with the system.
> scalability (in _both_ directions).
Filesystem access is quite scalable
Posted Dec 3, 2004 12:42 UTC (Fri) by evgeny (subscriber, #774)
[Link]
Sorry, I didn't get what you meant.
> Try rsync [etc etc etc]
I don't want. For the same reason that nobody in sane mind will pipe 'telnet 80 | html2txt | less' to browse the Web. Possible - yes. Great when nothing else is available - yes. But call this THE ultimate browser?!
Posted Nov 30, 2004 15:58 UTC (Tue) by iabervon (subscriber, #722)
[Link]
Differences between Elektra and GConf
Posted Nov 30, 2004 10:08 UTC (Tue) by bkw1a (subscriber, #4101)
[Link]
Compared to Gnome's GConf, Elektra is not a daemon, and is much lighter. GConf uses XML documents as backends, stored in user's home directory. XML based software are memory eaters. GConf seems not to be preoccupied with access permissions, making it a good solution only for personal use in desktop (high level) systems. Also it is heavily Gnome dependent as we can see from the libraries it uses:
$ ldd /usr/bin/gconfd-1
libgconf-1.so.1 => /usr/lib/libgconf-1.so.1 (0x4375b000)
liboaf.so.0 => /usr/lib/liboaf.so.0 (0x4373f000)
libORBitCosNaming.so.0 => /usr/lib/libORBitCosNaming.so.0 (0x00cfd000)
libORBit.so.0 => /usr/lib/libORBit.so.0 (0x00ca9000)
libIIOP.so.0 => /usr/lib/libIIOP.so.0 (0x43735000)
libORBitutil.so.0 => /usr/lib/libORBitutil.so.0 (0x00ce9000)
libm.so.6 => /lib/tls/libm.so.6 (0x00b50000)
libgmodule-1.2.so.0 => /usr/lib/libgmodule-1.2.so.0 (0x00d58000)
libglib-1.2.so.0 => /usr/lib/libglib-1.2.so.0 (0x00d31000)
libdl.so.2 => /lib/libdl.so.2 (0x00b74000)
libc.so.6 => /lib/tls/libc.so.6 (0x00a15000)
libpopt.so.0 => /usr/lib/libpopt.so.0 (0x001ff000)
libwrap.so.0 => /usr/lib/libwrap.so.0 (0x4387c000)
/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x009fd000)
libnsl.so.1 => /lib/libnsl.so.1 (0x4f727000)
This is useless for an early boot stage program (/usr/lib may be still unmounted), and for a very small OS installation that won't require desktop features, like a router, small firewall, or any other appliance. On the other hand, the Elektra database access library is very slim with a minimum of dependencies:
$ ldd /lib/libkdb.so
libc.so.6 => /lib/tls/libc.so.6 (0x00111000)
/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x009fd000)
Posted Nov 30, 2004 10:13 UTC (Tue) by evgeny (subscriber, #774)
[Link]
Posted Nov 30, 2004 10:52 UTC (Tue) by khim (subscriber, #9252)
[Link]
Posted Nov 30, 2004 12:08 UTC (Tue) by elanthis (subscriber, #6227)
[Link]
The "your FS sucks, use the experimental/unstable one I prefer" approach doesn't work. Especially not when you can just fix the system to not *need* a "better filesystem." For example, just store multiple keys in one file. Not that difficult at all. Insanely simple, even.
Multiple files also make transactions IMPOSSIBLE (literally) because there is no way to perform an atomic action touching multiple files. It's as simple as that - with multiple files, the entire database is unsafe, because it can get corrupted and out of sync. Real life applications will have multiple keys that need to be in one of several states, such that certain combinations don't make sense. Without transactions and *guaranteed* atomic commits it would be possible for these states to occur in certain circumstances, resulting in broken applications.
Elektra also lacks change notification (so you still need to restart/signal daemons) and several other important features. As is, the *ONLY* thing it offers is a common API. And given the huge number of existing apps and scripts that will break by switching to Elektra, that common API just isn't worth it. Elektra also lacks features that other apps *need* like change notification or schemas, making it unusable.
The entire Elektra design can be improved upon. In fact, there are better designs already in existence. One (sorry, I forget the name - Config4GNU or something like that) uses multiple backends. So, if you set a key in the X.org namespace, it would load a backend that reads and writes the existing X.org configuration file. The advantage here is that already existing and already in-use configuration utilities continue to work while new applications can use the universal API. New applications would just use the default config backend, which would be some database or file hierarchy or whatever. Because of the backends, the system can inject a daemon that, for example, signals key changes over D-BUS allowing applications to be notified when the user changes something. By using a pluggable backend, the launching of this daemon can be delayed until after the initial system bootup, allowing the config system to be used during early-early boot with reduced features, and then become fully operational once enough of the system is up and running.
Elektra is a nice thought experiment, but it is far from the ideal implementation. It might evolve into a complete solution over time, or it might give way to Config4GNU or some other system.
File systems
Posted Nov 30, 2004 13:00 UTC (Tue) by tarvin (subscriber, #4412)
[Link]
Posted Nov 30, 2004 13:18 UTC (Tue) by tzafrir (subscriber, #11501)
[Link]
you can get nearly-atomic operations by creating a temporary file/directory copy and then mv(1)-ing it to the right place. This does unlink(2) and rename(2).
Anything more requires a specialized daemon,
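A minimal sketch of that temp-file-plus-rename trick in Python (illustration only; it assumes a key-per-file layout and that the temporary file is created on the same filesystem as the target):

import os
import tempfile

def set_key(path, value):
    directory = os.path.dirname(path) or "."
    os.makedirs(directory, exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=directory)   # temp file on the same filesystem
    try:
        with os.fdopen(fd, "w") as f:
            f.write(value)
            f.flush()
            os.fsync(f.fileno())                # push the data to disk first
        os.rename(tmp, path)                    # atomic replacement via rename(2)
    except BaseException:
        os.unlink(tmp)
        raise

set_key("/tmp/demo-registry/system/sw/app/key1", "value1\n")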
Posted Nov 30, 2004 15:33 UTC (Tue) by Ross (subscriber, #4065)
[Link]
My question is why are you doing that?
But if you insist, you can do atomic operations with multiple files as long
as you don't have to have the same directory name for the old and new
version. Yes, it requires hard links or copying entire directories but
really, if a "solution" requires such ugly things then I'm quite happy to
live without it.
Posted Dec 1, 2004 1:33 UTC (Wed) by Wol (guest, #4433)
[Link]
And (certainly from the nix viewpoint) isn't the file system seen as just one large file? So, from that vantage point, updating multiple files IS just one change to one large file :-)
Cheers,
Wol
Posted Dec 1, 2004 9:27 UTC (Wed) by tzafrir (subscriber, #11501)
[Link]
In the filesystem level the kernel does basically the same things.
Posted Nov 30, 2004 12:10 UTC (Tue) by busterb (subscriber, #560)
[Link]
Posted Nov 30, 2004 13:29 UTC (Tue) by tzafrir (subscriber, #11501)
[Link]
All you need is to make sure that there is always an up-to-date copy of the configuration file in the right place.
Posted Nov 30, 2004 14:12 UTC (Tue) by LogicG8 (guest, #11076)
[Link]
I really have to disagree. Revision control systems are complicated, and complications suck. Remember that /usr might not be mounted yet.
$ ldd /usr/bin/svn | wc -l
29
And that doesn't even count the dependencies of svn's dependencies... I
would like to be able to use this system for all the configuration files
in the system from init on up. If you want revision control this can be had
without introducing complications to booting and overhead embedded systems
won't put up with. The trick is to make the configuration directory a
working directory of subversion repository. Edit a file; commit changes; be
done with it. No need to have a revision control system to boot. This even
makes it easy to store the revision history on a another computer.
My reasons for actually liking this system:
1) Ability to hand edit config files. If I'm working over ssh (and I do
quite a bit) I want to be able change configurations.
2) Independence of data. If you've ever had a chunk of the registry
corrupted on Windows(tm) you understand the importance of this.
3) I can easily copy/save specific configuration data. With database
systems this tends to be awkward.
4) I can have comments.
5) This can be combined with MACs (mandatory access controls) or ACLs
(access control lists) for free.
6) This one is only speculation on my part but corruption can only happen
with misbehaving programs. If a program respects flock() two programs won't
step on each other. This can be a "if you don't know what you are doing
thing don't do it." Most users will only use a GUI or kdb so this probably
won't even come up. Even if corruption occurs (power outage, brain damaged
program, PIBCAK, whatever) see reason 2.
7) No new paradigms to learn. I know how get around in a filesystem.
I can use find.
8) Easy program interaction. I'll be thrown out of the *nix geek club for
sure for saying this but I *like* GUI configuration programs. I like things
to be easy.
9) More shared code. One less thing to worry about.
Posted Nov 30, 2004 17:30 UTC (Tue) by emkey (guest, #144)
[Link]
As do most of us. The problem being GUIs don't for the most part scale worth a dang. In general GUIs are fine, but they should always be considered secondary to some sort of CLI interface. The problem is many vendors get it backwards, or worse yet don't implement any sort of CLI.
Posted Nov 30, 2004 13:09 UTC (Tue) by evgeny (subscriber, #774)
.
Posted Nov 30, 2004 15:45 UTC (Tue) by khim (subscriber, #9252)
.
If your filesystem is dumb - it's not possible. If your filesystem is smart - it can be done. True - it should be supplied as addon over simple system but all-in-one solutions are never good.
Posted Dec 1, 2004 3:31 UTC (Wed) by evgeny (subscriber, #774)
[Link]
So we can return to this discussion when every man and his dog have such smart filesystems - including embeded devices etc.
Posted Dec 2, 2004 3:08 UTC (Thu) by szh (guest, #23558)
[Link]
Posted Dec 2, 2004 3:28 UTC (Thu) by evgeny (subscriber, #774)
[Link]
Continuing this nice logics, there is no need for a unified configuration system altogether - just as everyone has been adjusting dozens of different configs during the last twenty years of mainstream Unix usage, so we can do it further on. Right?
Posted Nov 30, 2004 17:45 UTC (Tue) by tzafrir (subscriber, #11501)
[Link]
Adding remote management, versioning on top of that is proably possible.
The main non-standard feature required for Elektra to work propely is symlinks. And even without it most things will work. Other than that all you need is basic file-system interface.
Posted Dec 1, 2004 3:47 UTC (Wed) by evgeny (subscriber, #774)
[Link]
Exactly. It, essentially, implements the day-before-yesterday functionality (INI configs) yet requires facilities not present in all current filesystems.
> Adding remote management, versioning on top of that is proably possible.
Well, the keywords here are "on top" and "probably". ;-) With the same expected level of success as adding security on top of DOS was. It just strikes me that a project with such an ambitious goal as converting _all_ the Unix configuration mess to a single API didn't bother thinking of even yesterday's requirements to system configuration!
Posted Dec 2, 2004 16:07 UTC (Thu) by tzafrir (subscriber, #11501)
[Link]
No, it is not INI config. With INI config you have only a three-level deep tree of configuration (file, section, key). Elektra does not have this silly limitation.
> yet requires facilities not present in all current filesystems.
The *required* facilities are available everywhere. On some systems it will be less efficient. But even ext3 now has support for more scalable directory access. Try tune2fs -O dir_index
> Well, the keywords here are "on top" and
> "probably". ;-) With the same expected level
> of success as adding security on top of DOS was.
Rebuilding everything from scratch has even less expected chance of adding security.
> It just strikes me that a project with such
> an ambitious goal as converting _all_ the Unix
> configuration mess to a single API didn't bother
> thinking of even yesterday's requirements to
> system configuration!
Into a system that is easy to use by eveything and can be used anywhere.
How can you configure the network using this daemon if the networked daemon needs the network configuration just to get going?
Posted Dec 3, 2004 12:49 UTC (Fri) by evgeny (subscriber, #774)
[Link]
In exactly the same way I can run a LDAP server holding the user DB under a given account. Also, think e.g. of gcc bootstrapping, running of fsck on / before anything is mounted etc. We have a lot of similar chicken-or-egg problems.
Posted Nov 30, 2004 10:58 UTC (Tue) by Ross (subscriber, #4065)
[Link]
Posted Nov 30, 2004 11:00 UTC (Tue) by jwb (subscriber, #15467)
[Link]
Posted Nov 30, 2004 13:32 UTC (Tue) by evgeny (subscriber, #774)
[Link]
Why? Do you mean it's tree-like? But it's well known how to map 1:1 a tree structure (or graph in general) to relational tables.
> But furthermore you would not be able to store postgresql's configuration in postgresql
Postgresql can use PAM for authentication. On the other hand, there is a pgsql-based PAM plugin. See the point? Just as PAM has a plain-files fallback backend, so could the config system.
> and even if you could, you're elevating postgresql to the status of required package
Again, this is wrong for a plugin-backend-based system. A SOHO config would use e.g. SQLite which needs neither config of its own, nor a daemon running.
Having said this, I don't insist on RDBMS. It could be e.g. an XML DB. The problem though, there is neither reilable implementation nor widely accepted XML query language... My point is that fs-based config is a dead end.
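For what it's worth, the 1:1 tree-to-table mapping mentioned above is the classic adjacency-list pattern; here is a sketch using the SQLite backend suggested earlier (an illustration only, not an actual Elektra or GConf schema):

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE keys (
                  id     INTEGER PRIMARY KEY,
                  parent INTEGER REFERENCES keys(id),
                  name   TEXT NOT NULL,
                  value  TEXT)""")

def add_key(parent, name, value=None):
    cur = db.execute("INSERT INTO keys (parent, name, value) VALUES (?, ?, ?)",
                     (parent, name, value))
    return cur.lastrowid

root = add_key(None, "system")
sw = add_key(root, "sw")
add_key(sw, "xorg", "")

# Children of any node are one indexed lookup away.
print(db.execute("SELECT name FROM keys WHERE parent = ?", (root,)).fetchall())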
Posted Nov 30, 2004 16:02 UTC (Tue) by khim (subscriber, #9252)
[Link]
Why? Do you mean it's tree-like? But it's well known how to map 1:1 a tree structure (or graph in general) to relational tables.
Possible - yes. Convenient - no. Tree-like information should live in tree and we already have one - filesystem tree.
Again, this is wrong for a plugin-backend-based system. A SOHO config would use e.g. SQLite which needs neither config of its own, nor a daemon running.
Can I use vi over ssh to edit my config then ? No ? Then it's not contender.
Having said this, I don't insist on RDBMS. It could be e.g. an XML DB. The problem though, there is neither reilable implementation nor widely accepted XML query language...
And even if there is you are still with the same problem: you do not have access control.
My point is that fs-based config is a dead end.
I'm struggling to see your point. And can not. With fs tree we are having right from the start two important features:
1. Ability to edit config files with text editor.
2. Ability to fine-tune access rights to every part of system.
You can add remote management, replication, versioning, transactions/rollbacks if you wish - you only need good enough filesystem.
If your argument is translated to simple terms it's this: our filesystems are broken and lack a lot of needed features - but we should not fix them. Instead we should build some complicated system over a raw chunk of disk (== file used by RDBMS or XML DB). My question is: why? Why not fix filesystems instead? Transactions and replication are nice to have for other things. Simple system upgrade is painful without them! You can not replace Apache with a new version before all modules are upgraded, but if you upgrade modules first then the old Apache can not be used anymore. True - the window of opportunity is not so big, but it still exists. I fail to see why the configuration of my system deserves more reliability than my system itself!
Posted Dec 1, 2004 1:53 UTC (Wed) by Wol (guest, #4433)
[Link]
And I'm going to get shafted for this by the relational crew, but relational theory is CRUD! Look at C&D's first rule - "data is stored as rows and columns". What real-world data comes as rows and columns? NONE of it as far as I can see. Pretty much all of C&D's rules have as much mathematical basis in fact as Euclid's rule that "parallel lines never meet".
And that rule of Euclid, by being wrong, held back geometry for 2000 years. Real world data does NOT come in neat two-dimensionsal packages. Therefore real world data does NOT belong in relational databases.
That's not to say relational theory doesn't have its uses - it is very useful. But by trying to force everything into its own (faulty) mould, it does the rest of the world a major dis-service.
Posted Dec 1, 2004 4:33 UTC (Wed) by evgeny (subscriber, #774)
[Link]
Poor Newton. He could discover the general relativity, if not that stupid greek. Was it an attempt at troll?
> Real world data does NOT come in neat two-dimensionsal packages. Therefore real world data does NOT belong in relational databases.
Do some text-book reading before posting, please.
Posted Dec 1, 2004 4:18 UTC (Wed) by evgeny (subscriber, #774)
[Link]
Configurations are NOT trees. They're graphs (the symlinks used by Elektra are a poor man's solution to this). Plus, they're _dynamic_ ones.
> Can I use vi over ssh to edit my config then ? No ? Then it's not contender.
Gosh. Why not `ed' then? There is a userfs plugin that allows to navigate/edit a postgres DB like a directory, if you insist on that. I also assume that you code exclusively in assembler and use `telnet 80 lwn.net' to read my reply.
> And even if there is you are still with the same problem: you do not have access control.
RDBMS have no access control?!
> You can add remote management, replication, versioning, transactions/rollbacks if you wish - you only need good enough filesystem.
U-hu. Just a tiny obstacle.
> Why not fix filesystems instead ?
Well, go fix them. IMHO, filesystem developers have more urgent tasks than tailoring their filesystems to an ambitious configuration project. So we can wait years and years - and still have a dumb (no logical expressions etc) configuration mechanism.
Posted Nov 30, 2004 10:28 UTC (Tue) by alex (subscriber, #1355)
[Link]
For example my exim.conf file is well commented with the changes I have made over time to handle more complex mail setups. It gives a lot of context as to why you changed a value from one thing to another, which is useful when you want to tweak stuff 6 months down the road.
It would be nice if there was a more standardised way of doing config files so each app didn't have its own peculiarities. But until then I want to keep my configs human readable!
It *is* human readable!
Posted Nov 30, 2004 10:47 UTC (Tue) by dank (guest, #1865)
[Link]
Posted Nov 30, 2004 11:46 UTC (Tue) by alex (subscriber, #1355)
[Link]
# This director handles forwarding using traditional .forward files.
# It also allows mail filtering when a forward file starts with the
# string "# Exim filter": to disable filtering, uncomment the "filter"
# option. The check_ancestor option means that if the forward file
# generates an address that is an ancestor of the current one, the
# current one gets passed on instead. This covers the case where A is
# aliased to B and B has a .forward file pointing to A.
# For standard debian setup of one group per user, it is acceptable---normal
# even---for .forward to be group writable. If you have everyone in one
# group, you should comment out the "modemask" line. Without it, the exim
# default of 022 will apply, which is probably what you want.
userforward:
driver = forwardfile
file_transport = address_file
pipe_transport = address_pipe
reply_transport = address_reply
no_verify
check_ancestor
file = .forward
modemask = 002
filter
/userforward/driver = forwardfile # the driver is a forward file
/userforward/file_transport = address_file # the file transport is an address file
/userforward/pipe_transport = address_pipe # the pipe transport is an address pipe
..
..
Re: It *is* human readable!
Posted Nov 30, 2004 13:40 UTC (Tue) by sdalley (subscriber, #18550)
[Link]
However, surely the same result could be obtained by putting .Comment file(s) judiciously in appropriate directories. A viewer tool would be capable of merging everything to do with exim (or any other program) back into a view that looks like a single file with comments. A sufficiently smart one could even edit that view and save the keys/values back in one hit.
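One possible shape of such a viewer, sketched in Python under the assumption of a one-key-per-file layout with an optional .Comment file per directory (the layout is an assumption for illustration, not Elektra's actual backend format):

import os

def render(tree_root):
    """Print a single-file-style view of a key-per-file configuration tree."""
    for dirpath, dirnames, filenames in os.walk(tree_root):
        comment = os.path.join(dirpath, ".Comment")
        if os.path.exists(comment):
            with open(comment) as c:
                for line in c:
                    print("#", line.rstrip())
        for name in sorted(filenames):
            if name.startswith("."):
                continue                      # skip .Comment and friends
            full = os.path.join(dirpath, name)
            with open(full) as f:
                value = f.read().strip()
            key = os.path.relpath(full, tree_root).replace(os.sep, "/")
            print(f"{key} = {value}")

render("/tmp/demo-registry")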
viewer tool
Posted Nov 30, 2004 15:07 UTC (Tue) by alex (subscriber, #1355)
[Link]
Posted Nov 30, 2004 15:28 UTC (Tue) by tzafrir (subscriber, #11501)
[Link]
In the mean time Elektra has a tool to edit a subtree as an XML file.
Posted Nov 30, 2004 15:47 UTC (Tue) by JamesErik (subscriber, #17417)
[Link]
Posted Dec 1, 2004 1:21 UTC (Wed) by Duncan (guest, #6647)
[Link]
Posted Nov 30, 2004 11:23 UTC (Tue) by bk (guest, #25617)
[Link]
I have absolutely zero interest in making my systems: a) less Unix like, b) more complex to administer, and c) less free.
Posted Nov 30, 2004 11:40 UTC (Tue) by rahulsundaram (subscriber, #21946)
[Link]
system-config-display can be considered a "ISV" for xorg configuration file. Not all ISV's sell proprietary software. Another example was Ximian Gnome..
Posted Nov 30, 2004 11:50 UTC (Tue) by juanjux (guest, #11652)
[Link]
Posted Nov 30, 2004 11:54 UTC (Tue) by tzafrir (subscriber, #11501)
[Link]
Elektra allows some cool stuff at the distro level. You can have a separate {rpm|deb} package of a certain configuration variant for a specific software.
Remming out a whole subtree can be done by prefixing its name with a '.' . The usage of symlinks allows many cool tricks (try to emulate that with XML). You can give different permissions to different parts of your config file.
Is that worth the price of making the system more accessible to proprietary software packages?
Posted Nov 30, 2004 14:05 UTC (Tue) by evgeny (subscriber, #774)
[Link]
Refer by ID?
Posted Nov 30, 2004 16:06 UTC (Tue) by khim (subscriber, #9252)
[Link]
Posted Dec 1, 2004 3:26 UTC (Wed) by evgeny (subscriber, #774)
[Link]
Expat and libxml can do this. And, BTW, what d'ya'think kernel is doing when resolving symlinks?
Posted Dec 1, 2004 10:51 UTC (Wed) by tzafrir (subscriber, #11501)
[Link]
Now what about caching those small changes?
Posted Dec 1, 2004 13:22 UTC (Wed) by evgeny (subscriber, #774)
[Link]
1. Which is the "whole XML file"?
2. There is no need to keep the "whole XML" (whatever it means) in memory. There are alternative methods of parsing XML, e.g. SAX.
3. You probably should argue with another person anyway. My suggestion was to use an RDBMS, not XML file(s). I did say an XML-based DB (notice the word _database_ - I never meant plain XML files, especially not one huge file) could be used as well, in principle.
> Now what about caching those small changes?
Same here.
Posted Dec 2, 2004 16:15 UTC (Thu) by tzafrir (subscriber, #11501)
[Link]
Any software that needs to resolve IDs to nodes.
Posted Dec 3, 2004 13:27 UTC (Fri) by evgeny (subscriber, #774)
[Link]
> Any software that needs to resolve IDs to nodes.
So do you say the kernel keeps the whole inode table/whatever in memory? And why memory requirements are different than that for similar task for a filesystem?
Posted Nov 30, 2004 13:13 UTC (Tue) by tarvin (subscriber, #4412)
[Link]
My fear is that Linux configuration will become a nasty mess of XML files (note: just because something is XML-based doesn't automatically make it easily understood). On the other hand, I'm not really fond of the current state where lots of different configuration formats have to be used.
Some people immediately start screaming "argh! - it's the _registry_", referring to Windows' configuration system. But it's not: It's not like your whole configuration system will go down the drain if "_the_ registry" file becomes corrupted. And it's not a large binary file which is unfriendly to revision control systems.
It would be great to have a standard configuration API, so that writers of new software don't have to invent their own configuration parsers again and again, but can concentrate on documenting the configuration options.
I really hope that some people will be able to look through the fog of fear and conservatism and give this initiative a chance.
Not so good
Posted Nov 30, 2004 13:21 UTC (Tue) by xav (subscriber, #18536)
[Link]
X needs
Posted Nov 30, 2004 13:37 UTC (Tue) by tarvin (subscriber, #4412)
[Link]
Posted Nov 30, 2004 16:09 UTC (Tue) by khim (subscriber, #9252)
[Link]
Posted Nov 30, 2004 13:35 UTC (Tue) by libra (guest, #2515)
[Link]
Rights management
Posted Nov 30, 2004 13:41 UTC (Tue) by tarvin (subscriber, #4412)
[Link]
I don't see any need for a ReiserX file system.
Posted Nov 30, 2004 16:14 UTC (Tue) by khim (subscriber, #9252)
[Link]
If you'll think about it, we are going the way of Elektra already. On my system I have /etc/apache2/conf/modules.d, /etc/conf.d, /etc/xinetd.d and so on. This is exactly what Elektra is proposing - just that every application is doing this in its own way.
Posted Nov 30, 2004 17:30 UTC (Tue) by petegn (guest, #847)
[Link]
Posted Nov 30, 2004 18:09 UTC (Tue) by sbergman27 (subscriber, #10767)
[Link]
One of my pet peeves with Linux/Unix is the total balkanization of config file formats. I would really prefer to see all these moved into XML, though. But I really don't care, as long as there is some predictability and universality to it all.
And if someone could just XMLize or Elektrify my sendmail.cf, I'd be eternally grateful. I'd even spring to send you to the mental rehabilitation center of your choice after completing the task (if required). ;-)
Posted Nov 30, 2004 19:03 UTC (Tue) by bk (guest, #25617)
[Link]
As a side note, it probably would be almost trivial to write a backend that writes the config back into the application's native format (ie, goes from the nested hierarchy -> xorg.conf), making this project a simple translation layer for those who prefer a registry-like layout. That seems like a better way of doing this, as opposed to shoving a completely un-Unix configuration system down the throat of the free software community.
Posted Dec 1, 2004 3:44 UTC (Wed) by dh (subscriber, #153)
[Link]
Posted Dec 1, 2004 8:05 UTC (Wed) by hppnq (subscriber, #14462)
[Link]
Generally, it would ease the job of an admin tremendously if
different packages could agree on some general syntax rules for config
files, and Elektra tries to present a light-weight solution exactly for
this purpose.
I'm not so sure. I can see that a simple, uniform configuration syntax helps automating configuration, but to me it looks like Elektrify will make hand-editing configuration files harder rather than simpler. I am not quite sure what problem it tries to solve, and whether the chosen path will bring any joy. Things might be entirely different for you, but my main problem with configuring software is usually choosing the correct values, not understanding the configuration file syntax.
Saying: "But now you can easily sort your key value pairs!" begs the question: do I need that? I don't think so.
I would appreciate a meta-configurator that has plugins for handling native formats like fstab and httpd.conf, so no patching would be needed for the filesystem utilities and Apache.
Posted Dec 1, 2004 11:36 UTC (Wed) by Ross (subscriber, #4065)
[Link]
One unified config format is *impossible*
Posted Dec 1, 2004 13:25 UTC (Wed) by nix (subscriber, #2304)
[Link]
If you can turn my XEmacs configuration (here)
into something Elektra-based, I'll be impressed.
But I really, really doubt it's possible.
You can map most configuration files two-ways into this format, but not all.
Posted Dec 1, 2004 15:43 UTC (Wed) by sokol (guest, #4383)
[Link]
Posted Dec 2, 2004 3:24 UTC (Thu) by hppnq (subscriber, #14462)
[Link]
Posted Dec 2, 2004 4:03 UTC (Thu) by sokol (guest, #4383)
[Link]
Posted Dec 2, 2004 10:13 UTC (Thu) by nix (subscriber, #2304)
[Link]
That's been known for as long as general-purpose computers have existed.
Posted Mar 1, 2005 21:52 UTC (Tue) by aviram (guest, #24411)
[Link]
Just to make it bold:
- Elektra is not a revolutionary lightweight new file format.
- Elektra, as an initiative, does not care about storage details.
- Elektra is not concerned about read/write transactions, just because current text configuration files aren't either.
- Elektra is not a network configuration system, just because current text configuration files aren't either. Go look for LDAP, SOAP, etc in the rare cases current Unix apps will need (or support) some.
- Elektra provides a way to vi/ed/sed/awk/OpenOffice.org/whatever keys-values only because Unix sysadmins (me included) are so addicted to this specialized practice. Average computer users (the entire human race) don't even know what "configuration" is, but they do know what buttons, checkboxes and comboboxes are.
Elektra is just what the 'dh' user above wrote. We are concerned about the problem that the concept of Elektra (but not only Elektra) can solve, more than the implementation details (which can evolve over time if the ecosystem grows -- yes, chicken and egg problem). If you don't like Elektra, we are sorry, but this was the way we found to solve the configuration superzoo issue in the most pragmatic way. Config4GNU, GConf, Hiveconf, UniConf, "etCeteraConf" currently can't do it, by design.
If you guys want to keep going deeper and deeper discussing implementation details of such a simple thing that Elektra is, fine. Just keep in mind that the general business world is way more practical than this discussion, and Linux doesn't have a real future without the interest, energy and investment of that world.
Avi Alkalay
Elektra Initiative
Posted Dec 2, 2004 6:06 UTC (Thu) by sokol (guest, #4383)
[Link]
I like the idea of elektra. It scratches an "itch" which happens to be quite general in the Linux world. With a little bit of luck, the elektra approach will generalize (with or without a filesystem backend).
Talking about backends, I have my two cents to add. Some time ago, I asked myself what was the simplest file format to handle hierarchical key-value pairs. Just this. No read-write permissions, no validators, no multiple data types. Just a hierarchy of key-values. If someone needs the mentioned features, he has to extend this simplest format at the cost of reserved keys, particular syntax for references and so on. Thus a user will pay only for what he really uses. So, I came up with the KVH format, which uses the simplest parsing rules for handling a key-value hierarchy I have ever seen. This format is program oriented but remains human readable and editable with a plain text editor without any need of additional APIs. APIs may come later for GUI editors and Co.
Posted Dec 2, 2004 10:03 UTC (Thu) by kfiles (subscriber, #11628)
[Link]
Argh. This falls victim to the same problems as the old syslog.conf: visual confusion in whitespace use. A naive user, editing such a config file, might not know that the indentation by \t (tab character) was mandatory for proper parsing. Substitution of spaces for tabs in syslog.conf provided no end of headaches for sysadmins trying to track down configuration problems. Some editors (like emacs) may even unhelpfully assist in creating these problems by substituting strings of spaces for tabs.
Using arbitrary whitespace to imply parsing semantics has almost always led to problems (cf. syslog.conf, /etc/fstab). Unless you intend for this format to only be parsed and written by automata, please don't do this.
I much prefer the gated.conf C-style self-descriptive hierarchical config blocks, which use curly-braces for closure, and do not require the pseudo-SGML-isms of httpd.conf to indicate hierarchy.
--kirby
Posted Dec 2, 2004 16:09 UTC (Thu) by sokol (guest, #4383)
[Link]
Serguei.
Hiveconf is another alternative
Posted Dec 4, 2004 3:47 UTC (Sat) by astrand (subscriber, #4908)
[Link]
http://lwn.net/Articles/113279/
Answered
The code cleanup can sort and optimize usings, but I like the way VS2019 groups usings with spaces like this:
using System;
using System.Diagnostics;
using System.Linq;
using System.Runtime.Caching;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Mvc;

using Microsoft.Identity.Client;
using Newtonsoft.Json;
Is it possible to configure R# like this, or have it call the VS* command as part of the cleanup?
Cheers!
Hello Johan Danforth
Thank you for your question! Please try to set this setting's value to "1":
ReSharper | Options | Code Editing | C# | Formatting Style | Blank Lines | Blank lines in declarations | Between different "using" groups
Amazing! Thanks! There are just too many options... :D
https://resharper-support.jetbrains.com/hc/en-us/community/posts/360010492039-Code-Cleanup-Group-Usings
I am learning Python, and taking several online courses on computer science and algorithms. One of my assigments is to write a Selection Sort in Python. Selection sort is one of the more intuitive sort routines, because it logically works how one would probably sort a small list of items manually.
Selection Sort scans the list and picks the smallest item and puts it at the beginning of the list. It then scans for the next smallest item and puts it next. It continues this same process until every item in the list is sorted.
Using asymptotic notation, Selection Sort is O(n²). If you're looking for something faster, check out my article on Merge Sort.
Selection Sort
There are lots of different coding examples of Selection Sort using Python. Below is my code, which uses nested for loops.
I also include some unit tests. I plan to include Doctests or unit tests for all my examples, because I want the practice. It also helps me refactor the code if I discover a bug or better implementation.
import unittest


def selection_sort(integers):
    """Performs in-place Selection Sort on a list of integers."""
    length = len(integers)
    if length < 2:
        return

    for i in range(length - 1):
        # set initial position as min val, index
        minimal_value = integers[i]
        minimal_index = i
        for j in range(i + 1, length):
            if integers[j] < minimal_value:
                minimal_value = integers[j]
                minimal_index = j
        integers[i], integers[minimal_index] \
            = integers[minimal_index], integers[i]


class selection_sort_tests(unittest.TestCase):

    def sort(self, lst):
        copy = lst[:]
        selection_sort(copy)
        return copy

    def test_empty_list(self):
        lst = []
        sorted_lst = self.sort(lst)
        self.assertEqual(lst, sorted_lst)

    def test_single_item(self):
        lst = [1]
        sorted_lst = self.sort(lst)
        self.assertEqual(lst, sorted_lst)

    def test_two_items_sorted(self):
        lst = [1, 2]
        sorted_lst = self.sort(lst)
        self.assertEqual(lst, sorted_lst)

    def test_two_items_unsorted(self):
        lst = [2, 1]
        sorted_lst = self.sort(lst)
        self.assertEqual(sorted_lst, [1, 2])

    def test_zero_in_list(self):
        lst = [10, 0]
        sorted_lst = self.sort(lst)
        self.assertEqual(sorted_lst, [0, 10])

    def test_odd_number_of_items(self):
        lst = [13, 7, 5]
        sorted_lst = self.sort(lst)
        self.assertEqual(sorted_lst, [5, 7, 13])

    def test_even_number_of_items(self):
        lst = [23, 7, 13, 5]
        sorted_lst = self.sort(lst)
        self.assertEqual(sorted_lst, [5, 7, 13, 23])

    def test_duplicate_integers_in_list(self):
        lst = [1, 2, 2, 1, 0, 0, 15, 15]
        sorted_lst = self.sort(lst)
        self.assertEqual(sorted_lst, [0, 0, 1, 1, 2, 2, 15, 15])

    def test_larger_integers(self):
        lst = [135604, 1000000, 45, 78435, 456219832, 2, 546]
        sorted_lst = self.sort(lst)
        self.assertEqual(sorted_lst, [2, 45, 546, 78435, 135604, 1000000, 456219832])


if __name__ == '__main__':
    unittest.main()
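A quick usage sketch (the list values here are made up just to show the in-place behavior):
numbers = [13, 5, 7, 2]
selection_sort(numbers)   # sorts the list in place and returns None
print(numbers)            # prints [2, 5, 7, 13]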
As I mentioned on Twitter, I wrote this using Pythonista on my iPad Pro. It's nice to get away from the office and complete my assignments on my iPad without having to bring the MacBook Pro.
Pythonista is a great app for developing Python scripts on the iPad Pro.
I hope you find the article useful.
I'll be writing tutorials daily and many of them will be on my homework assignments or personal experiments as I learn Python, data structures, and algorithms. If you want to chat, catch me on Twitter.
https://www.koderdojo.com/blog/python-training-selection-sort-with-sample-unit-tests
#include <OP_OperatorTable.h>
Definition at line 42 of file OP_OperatorTable.h.
Convenience functions for the second callback in setScriptCreator().
Builds (or rebuilds) the operator type namespace hierarchy. The optype precedence is given by the environment variable HOUDINI_OPTYPE_NAMESPACE_HIERARCHY, which is processed by this method.
Creates a new node of a given type inside a parent and names it with a given name.
Definition at line 97 of file OP_OperatorTable.h.
Obtains a list of available operator names that have the same base (core) name as the given operator. If scope network name is not NULL, the list includes only operators whose nodes can be created in that network (otherwise all operators are included). The list is sorted according to the descending precedence order.
Definition at line 236 of file OP_OperatorTable.h.
Definition at line 117 of file OP_OperatorTable.h.
Obtains the value of the environment variable used to construct the hierarchy.
Obtains the preferred operator name that matches the given op_name. Any name component included in the op_name must match the returned op type name, and any component not present in op_name is assumed to match the returned op type. For example 'hda' will match any scope, namespace, or version, while 'userA::hda' will match any scope and version, but the namespace must be 'userA'. For global namespace use '::hda' and for versionless opname use 'hda::'. If the scope_network_stack is also given (ie, non-null) then the returned opname must match one of the scopes listed in that array too. Returns the name of the highest precedence operator that matches the given op_name.
Definition at line 68 of file OP_OperatorTable.h.
Definition at line 119 of file OP_OperatorTable.h.
Definition at line 110 of file OP_OperatorTable.h.
Called once all basic operator types are loaded to call the python code which will initialize node color and shape themes.
Returns true if the provided node name is "close enough" to the operator type name, English name, or first name to imply what the operator type is.
Definition at line 238 of file OP_OperatorTable.h.
Definition at line 370 of file OP_OperatorTable.h.
https://www.sidefx.com/docs/hdk/class_o_p___operator_table.html
Besides NPV and IRR, what other criteria do companies use to evaluate investments, and how does a change in the cost of capital impact the investment decision process?
Thank you in advance.
Solution Preview
a. Besides net present value (NPV) and internal rate of return (IRR), what other criteria do companies use to evaluate investments?
The investment decisions of a firm are generally known as capital budgeting, or capital expenditure, decisions.
Other criteria
The payback rule is how long it takes to recover the initial investment. This rule does not involve discounting, which means that the time value of money is disregarded; it fails to consider risk differences, and an accurate cutoff period cannot be picked.
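As a rough illustration of how NPV and the payback rule differ, here is a small Python sketch; the cash flows and the 10% rate are made-up numbers, not figures from the original solution:
# Hypothetical project: 1,000 outlay today, then four years of inflows
cash_flows = [-1000, 300, 400, 500, 200]
rate = 0.10  # assumed cost of capital

# NPV discounts every cash flow back to today
npv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# The payback rule just counts years until cumulative cash flow turns positive
cumulative, payback_year = 0, None
for year, cf in enumerate(cash_flows):
    cumulative += cf
    if payback_year is None and cumulative >= 0:
        payback_year = year

print(round(npv, 2), payback_year)  # 115.57, 3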
ARR is defined as an investment's average net income divided by its average book value. The projects can be ranked according to the excess of ARR compared to the target ARR. Once again, this is a flawed ...
Solution Summary
This discusses the pros and cons of NPV and IRR, impact of change in cost of capital on investment decision process
https://brainmass.com/business/capital-budgeting/180660
Redux helps you manage state by setting the state up at a global level. In the previous tutorial, we had a good look at the Redux architecture and the integral components of Redux such as actions, action creators, the store, and reducers.
In this second post of the series, we are going to bolster our understanding of Redux and build on top of what we already know. We will start by creating a realistic Redux application—a contact list—that's more complex than a basic counter. This will help you strengthen your understanding of the single store and multiple reducers concept which I introduced in the previous tutorial. Then later we'll talk about binding your Redux state with a React application and the best practices that you should consider while creating a project from scratch.
However, it's okay if you haven't read the first post—you should still be able to follow along as long as you know the Redux basics. The code for the tutorial is available in the repo, and you can use that as a starting point.
We're going to build a basic contact list with the following features:
Here's what our application is going to look like:
Covering everything in one stretch is hard. So in this post we're going to focus on just the Redux part of adding a new contact and displaying the newly added contact. From a Redux perspective, we'll be initializing the state, creating the store, adding reducers and actions, etc.
In the next tutorial, we'll learn how to connect React and Redux and dispatch Redux actions from a React front-end. In the final part, we'll shift our focus towards making API calls using Redux. This includes fetching the contacts from the server and making a server request while adding new contacts. Apart from that, we'll also create a search bar feature that lets you search all the existing contacts.
You can download the react-redux demo application from my GitHub repository. Clone the repo and use the v1 branch as a starting point. The v1 branch is very similar to the create-react-app template. The only difference is that I've added a few empty directories to organise Redux. Here's the directory structure.
.
├── package.json
├── public
├── README.md
├── src
│   ├── actions
│   ├── App.js
│   ├── components
│   ├── containers
│   ├── index.js
│   ├── reducers
│   └── store
└── yarn.lock
Alternatively, you can create a new project from scratch. Either way, you will need to have installed a basic react boilerplate and redux before you can get started.
It's a good idea to have a rough sketch of the state tree first. In my opinion, this will save you lots of time in the long run. Here's a rough sketch of the possible state tree.
const initialState = {
  contacts: {
    contactList: [],
    newContact: {
      name: '',
      surname: '',
      email: '',
      address: '',
      phone: ''
    }
  },
  ui: {
    //All the UI related state here. eg: hide/show modals,
    //toggle checkbox etc.
  }
}
Our store needs to have two properties—contacts and ui. The contacts property takes care of all contacts-related state, whereas the ui handles UI-specific state. There is no hard rule in Redux that prevents you from placing the ui object as a sub-state of contacts. Feel free to organize your state in a way that feels meaningful to your application.
The contacts property has two properties nested inside it—contactlist and newContact. The contactlist is an array of contacts, whereas newContact temporarily stores contact details while the contact form is being filled. I am going to use this as a starting point for building our awesome contact list app.
Redux doesn't have an opinion about how you structure your application. There are a few popular patterns out there, and in this tutorial, I will briefly talk about some of them. But you should pick one pattern and stick with it until you fully understand how all the pieces are connected together.
The most common pattern that you'll find is the Rails-style file and folder structure. You'll have several top-level directories like the ones below:
The image below demonstrates how our application might look if we follow this pattern:
The Rails style should work for small and mid-sized applications. However, when your app grows, you can consider moving towards the domain-style approach or other popular alternatives that are closely related to domain-style. Here, each feature will have a directory of its own, and everything related to that feature (domain) will be inside it. The image below compares the two approaches, Rails-style on the left and domain-style on the right.
For now, go ahead and create directories for components, containers, store, reducers, and action. Let's start with the store.
Let's create a prototype for the store and the reducer first. From our previous example, this is how our store would look:
const store = createStore(
  reducer,
  {
    contacts: {
      contactlist: [],
      newContact: { }
    },
    ui: {
      isContactFormHidden: true
    }
  })

const reducer = (state, action) => {
  switch(action.type) {
    case "HANDLE_INPUT_CHANGE":
      break;
    case "ADD_NEW_CONTACT":
      break;
    case "TOGGLE_CONTACT_FORM":
      break;
  }
  return state;
}
The switch statement has three cases that correspond to three actions that we will be creating. Here is a brief explanation of what the actions are meant for.
HANDLE_INPUT_CHANGE: This action gets triggered when the user inputs new values into the contact form.
ADD_NEW_CONTACT: This action gets dispatched when the user submits the form.
TOGGLE_CONTACT_FORM: This is a UI action that takes care of showing/hiding the contact form.
Although this naive approach works, as the application grows, using this technique will have a few shortcomings.
To fix the single reducer issue, Redux has a method called combineReducers that lets you create multiple reducers and then combine them into a single reducing function. The combineReducers function enhances readability. So I am going to split the reducer into two—a contactsReducer and a uiReducer.
In the example above, createStore accepts an optional second argument which is the initial state. However, if we are going to split the reducers, we can move the whole initialState to a new file location, say reducers/initialState.js. We will then import a subset of initialState into each reducer file.
Let's restructure our code to fix both the issues. First, create a new file called store/createStore.js and add the following code:
import {createStore} from 'redux';
import rootReducer from '../reducers/';

/*Create a function called configureStore */
export default function configureStore() {
  return createStore(rootReducer);
}
Next, create a root reducer in reducers/index.js as follows:
import { combineReducers } from 'redux'
import contactsReducer from './contactsReducer';
import uiReducer from './uiReducer';

const rootReducer = combineReducers({
  contacts: contactsReducer,
  ui: uiReducer,
})

export default rootReducer;
Finally, we need to create the code for the contactsReducer and the uiReducer.
import initialState from './initialState';

export default function contactReducer(state = initialState.contacts, action) {
  switch(action.type) {

    /* Add contacts to the state array */
    case "ADD_CONTACT": {
      return {
        ...state,
        contactList: [...state.contactList, state.newContact]
      }
    }

    /* Handle input for the contact form. The payload (input changes)
       gets merged with the newContact object */
    case "HANDLE_INPUT_CHANGE": {
      return {
        ...state,
        newContact: {
          ...state.newContact,
          ...action.payload
        }
      }
    }

    default:
      return state;
  }
}
import initialState from './initialState';

export default function uiReducer(state = initialState.ui, action) {
  switch(action.type) {

    /* Show/hide the form */
    case "TOGGLE_CONTACT_FORM": {
      return {
        ...state,
        isContactFormHidden: !state.isContactFormHidden
      }
    }

    default:
      return state;
  }
}
When you're creating reducers, always keep the following in mind: a reducer needs to have a default value for its state, and it always needs to return something. If the reducer fails to follow this specification, you will get errors.
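In other words, every reducer should follow a skeleton along these lines (a generic sketch, not code from the demo project):
function someReducer(state = someDefaultValue, action) {
  switch (action.type) {
    // cases that build and return a *new* state object go here
    default:
      // unknown action type: always fall back to returning the current state
      return state;
  }
}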
Since we've covered a lot of code, let's have a look at the changes that we've made with our approach:
A combineReducers call has been introduced to tie together the split reducers.
The ui object will be handled by uiReducer and the state of the contacts by the contactsReducer.
The initial state is no longer passed directly to createStore. Instead, we've created a separate file for it called initialState.js. We're importing initialState and then setting the default state by doing state = initialState.ui.
Here's the code for the reducers/initialState.js file.
const initialState = {
  contacts: {
    contactList: [],
    newContact: {
      name: '',
      surname: '',
      email: '',
      address: '',
      phone: ''
    },
  },
  ui: {
    isContactFormHidden: true
  }
}

export default initialState;
Let's add a couple of actions and action creators for adding handling form changes, adding a new contact, and toggling the UI state. If you recall, action creators are just functions that return an action. Add the following code in actions/index.js.
export const addContact = () => {
  return {
    type: "ADD_CONTACT",
  }
}

export const handleInputChange = (name, value) => {
  return {
    type: "HANDLE_INPUT_CHANGE",
    payload: { [name]: value }
  }
}

export const toggleContactForm = () => {
  return {
    type: "TOGGLE_CONTACT_FORM",
  }
}
Each action needs to return a type property. The type is like a key that determines which reducer gets invoked and how the state gets updated in response to that action. The payload is optional, and you can actually call it anything you want.
In our case, we've created three actions.
The TOGGLE_CONTACT_FORM action doesn't need a payload because every time the action is triggered, the value of ui.isContactFormHidden gets toggled. Boolean-valued actions do not require a payload.
The HANDLE_INPUT_CHANGE action is triggered when the form value changes. So, for instance, imagine that the user is filling the email field. The action then receives "bob@example.com" as input, and the payload handed over to the reducer is an object that looks like this:
The reducer uses this information to update the relevant properties of the newContact state.
The next logical step is to dispatch the actions. Once the actions are dispatched, the state changes in response to that. To dispatch actions and to get the updated state tree, Redux offers certain store actions. They are:
dispatch(action): Dispatches an action that could potentially trigger a state change.
getState(): Returns the current state tree of your application.
subscribe(listener): Adds a change listener that gets called every time an action is dispatched and some part of the state tree is changed.
Head to the index.js file and import the configureStore function and the three actions that we created earlier:
import React from 'react';
import {render} from 'react-dom';
import App from './App';

/* Import Redux store and the actions */
import configureStore from './store/configureStore';
import {toggleContactForm, handleInputChange} from './actions';
Next, create a store object and add a listener that logs the state tree every time an action is dispatched:
const store = configureStore();

//Note that subscribe() returns a function for unregistering the listener
const unsubscribe = store.subscribe(() =>
  console.log(store.getState())
)
Finally, dispatch some actions:
/* toggles isContactFormHidden to false */
store.dispatch(toggleContactForm());

/* toggles isContactFormHidden back to true */
store.dispatch(toggleContactForm());

/* updates the state of the contacts.newContact object */
store.dispatch(handleInputChange('email', 'manjunath@example.com'))

unsubscribe();
If everything is working right, you should see this in the developer console.
That's it! In the developer console, you can see the Redux store being logged, so you can see how it changes after each action.
We've created a bare-bones Redux application for our awesome contact list application. We learned about reducers, splitting reducers to make our app structure cleaner, and writing actions for mutating the store.
Towards the end of the post, we subscribed to the store using the store.subscribe() method. Technically, this isn't the best way to get things done if you're going to use React with Redux. There are more optimized ways to connect the react front-end with Redux. We'll cover those in the next…
https://www.4elements.com/nl/blog/read/getting-started-with-redux-learn-by-example/
Using PInvoke To Call An Unmanaged DLL From Managed C++
If you've been maintaining a large and complex C++ application for several years, it probably serves as a mini-museum of technology all by itself. There will be code that interacts with the user, business logic, and data access logic - and I hope it's not all together in one giant source file. Chances are that you've used several different techniques to split that code into modules.
One popular technique is COM, and creating and using COM components in Visual C++ 6 is relatively easy. But it's not the only way to reuse code: DLLs are relatively easy to write and use from Visual C++ 6, and many developers found them easier than COM for code that was only to be called from one place. Chances are your system has a few COM components and also a few DLLs.
So let's go on past the COM Interop story to the PInvoke story. The P stands for platform and it's a reference to the underlying platform on which the Common Language Runtime is running. Today that will be some flavor of Windows, but in the future, who knows? There are projects underway to recreate the CLR on non-Windows platforms.
Calling a function in a DLL
I'm going to use examples of calling functions from the Win32 API, even though you would never actually call these functions from a managed C++ application - they are encapsulated very conveniently in the Base Class Library. It's just that these examples are sure to be on your machine, so using them spares you the trouble of writing a DLL just to call it.
For my first example, I'm going to use GetSystemTime(). The documentation tells you that it takes an LPSYSTEMTIME, which it fills with the current UTC time. The documentation tells you the name of the function, the parameters it takes, and the fact that it's in kernel32.lib. (You can't use the version in the .lib file, but there's a corresponding kernel32.dll that you can use.) This is everything you need to declare the function prototype in an unmanaged application, like this:
using namespace System::Runtime::InteropServices; [DllImport("kernel32.dll")] extern "C" void GetSystemTime(SYSTEMTIME* pSystemTime);
The DllImport attribute tells the runtime that this function is in a DLL, and names the DLL. The extern C takes care of the rest. Now you need to teach the compiler what a SYSTEMTIME pointer is. By clicking on LPSYSTEMTIME on the help page for GetSystemTime(), you can find a definition of the SYSTEMTIME struct:
typedef struct _SYSTEMTIME {
    WORD wYear;
    WORD wMonth;
    WORD wDayOfWeek;
    WORD wDay;
    WORD wHour;
    WORD wMinute;
    WORD wSecond;
    WORD wMilliseconds;
} SYSTEMTIME, *PSYSTEMTIME;
If you copy and paste this struct definition into a Managed C++ application, it won't compile - WORD is a typedef that isn't automatically usable in .NET applications. It just means a 16-bit integer, anyway, and that's a short in C++. So paste it in (before the prototype for GetSystemTime()) and change all the WORD entries to short.
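For reference, after that substitution the pasted-in definition ends up looking like this:
typedef struct _SYSTEMTIME {
    short wYear;
    short wMonth;
    short wDayOfWeek;
    short wDay;
    short wHour;
    short wMinute;
    short wSecond;
    short wMilliseconds;
} SYSTEMTIME, *PSYSTEMTIME;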
Now here's some code that calls this function:
SYSTEMTIME* pSysTime = new SYSTEMTIME();
GetSystemTime(pSysTime);
Console::WriteLine("Current Month is {0}", __box(pSysTime->wMonth));
What could be simpler? When this code runs, it prints out:
Current Month is 9
The documentation is clear that months are 1-based in the SYSTEMTIME structure, so yes indeed September is month 9.
Remember, this example is here to show you how to declare the function type and the structures it uses, not how to get the current time and date. A .NET application gets the current time and date with DateTime::Now():
System::DateTime t = DateTime::Now;
Console::WriteLine( "Current Month is {0}", __box(t.get_Month()));
(The trick here is to remember, or to be reminded by Intellisense, that DateTime is a value class, which means that C++ applications don't have to work with a pointer to a DateTime, but can just use a DateTime object as you see here. Also, Now is a property, not a function, of the DateTime class. Let Intellisense help you, because these things are non-trivial to keep track of yourself. It shows different symbols for properties and functions.)
Dealing with Strings
The GetSystemTime() function involved a struct, and once you made a managed-code version of the struct definition, it was simple enough to work with. Things are little trickier when the DLL function you want to call works with a string. That's because string manipulation is a bit of a technology museum itself.
Strings come in two basic flavors: ANSI, or single-byte, and wide, or double-byte. Strings in .NET are all Unicode which are compatible with wide strings in the Win32 world. Many of the Win32 functions that use strings actually come in two versions: one for ANSI strings and one for wide strings. For example there is no MessageBox function really: there is a MessageBoxA and a MessageBoxW. This is hidden from you when you program in Visual C++ 6 by a #define that changes function names based on your Unicode settings. It's hidden from you in .NET by the Interop framework.
If you wish, you can declare MessageBox with a DllImport attribute that only specifies the DLL. The declaration from the online help is:
int MessageBox( HWND hWnd, LPCTSTR lpText, LPCTSTR lpCaption, UINT uType);
This translates into the following .NET signature:
MessageBox( int hWnd, String* lpText, String* lpCaption, unsigned int uType);
How do I know? Handles and the like generally map to integers. Pointers to various kinds of strings become String*. UINT is an unsigned int. If you're stuck, do a search in the online help for one of the many tables of data types that are scattered about. One of them will have the mapping you need.
Here's the prototype for MessageBox:
[DllImport("user32.dll")] extern "C" void MessageBox( int hWnd, String* lpText, String* lpCaption, unsigned int uType);
And here's a simple way to use it:
MessageBox(0,"Hi there!","",0);
This is, however, a little wasteful. You know (or you should know) that "Hi There" is a Unicode string, and that MessageBoxW is the function you really want. Here's a revamp of the prototype that makes it clear:
[DllImport( "user32.dll", EntryPoint="MessageBoxW", CharSet=CharSet::Unicode)] extern "C" void MessageBox( int hWnd, String* lpText, String* lpCaption, unsigned int uType);
You call the function the same way, but the DllImport attribute is now mapping directly to the right entry point, and using the right marshaling technique, without having to make these decisions on the fly every time you call this function from the DLL.
Don't forget that MessageBox() is just an example of a function call that takes a string and that most developers are familiar with. If you really want to display a message box from within your code, use the MessageBox class in System::Windows::Forms (remember to add a #using statement to the top of your source code mentioning System.Windows.Forms.dll, where this class is implemented). Here's an example:
System::Windows::Forms::MessageBox::Show(NULL, "Hi from the library!","", System::Windows::Forms::MessageBoxButtons::OK, System::Windows::Forms::MessageBoxIcon::Information);
Conclusion
Do you have useful, even vital functionality wrapped up in a DLL? Would you like to be able to access that DLL from your new .NET applications? Well, go ahead. All you have to do is declare a prototype of the function with a DllImport attribute and an extern C linkage. Declare any structures the function uses, and just call it as though it were managed code. (Remember that it isn't, though, and that means your application must have permission to execute unmanaged code.) Give the framework a hand and tell it you're using wide strings, and away you go. Old technologies can never die as long as they can be called from new.
What about using unmanaged DLLs exporting classes? Posted by controlcraft on 11/25/2006 10:31pm
I have a regular DLL, written in VC++, exporting a class rather than individual methods. Can I use this in C#?
https://www.codeguru.com/cpp/com-tech/complus/managed/article.php/c3947/Using-PInvoke-To-Call-An-Unmanaged-DLL-From-Managed-C.htm
Portions Copyright © 2002 W3C® (MIT, INRIA, Keio), All Rights Reserved. W3C liability, trademark, document use, and software licensing rules apply.
The SLink language provides a data model and syntax for XML linking, suitable for use in XHTML 2.0 and related languages.
This document is a private skunkworks and has no official standing of any kind, not having been reviewed by any organization in any way.
This draft was assembled by Micah Dubinko with inspiration from the [XLink] Editors (Steve DeRose; Eve Maler; David Orchard), the [HLink] Editors (Steven Pemberton; Masayasu Ishikawa), the editor of [XArc] (Gabe Beged-Dov), and the other panel members at the XML 2002 Hypertext Town Hall (Norm Walsh; Ron Daniel; Liam Quin; Eve Maler; Simon St. Laurent). There should be no suggestion that anybody other than Micah Dubinko approves of the content or even the existence of the present document.
1 Introduction
2 Requirements
3 Terminology and Data Model
4 Inline Markup
4.1 The xml:href Attribute
4.2 The xml:src Attribute
5 Out-of-line Markup
6 Conformance
A References
A.1 Normative References
A.2 Informative References
B Questions and Answers (non-normative)
XML linking has been a subject of frequent discussion. Lists of contributors may be found in these specifications. This document may be distributed freely, as long as all text and legal notices remain intact.
The following goals are specifically not addressed in SLink 1.0. For these features, either application-specific markup, XLink 1.0, or a successor to SLink is the appropriate technology.
Links can be described in terms of a simple data model. The fundamental unit of the data model is the arc. [Definition: An arc consists of the following: a reference to a starting resource (optionally with a reference to a sub-resource), an ending resource (optionally with a reference to a sub-resource), an arc-type, and a prominence.] In some cases, such as URI fragment identifiers, a sub-resource might actually be a view or interpretation of the resource. When this is not possible, or when no specific sub-resource is specified, the sub-resource is the same as the resource.
[Definition: The possible arc-types defined in this document are: href, src, and none.] The arc-type describes the intent of the author in providing the link. An arc-type of href indicates the intention that, when the link is traversed at the user's option, the presentation context will be changed to the ending (sub-)resource. An arc-type of src indicates the intention that, without requiring any special action of the user, the ending (sub-)resource will be presented in place of the starting (sub-)resource. An arc-type of none indicates that the author did not intend to create an arc. Implementations are free to process SLink links in any suitable manner. For example, a bandwidth-limited device might prompt the user before traversing any link, even if the arc-type is src, or a desktop browser with certain configuration settings might open a new window, even when the arc-type is href.
[Definition: The prominence is a zero-based non-positive integer that describes the author's intent of how readily a link is to be presented to the user for activation.] A larger prominence indicates that an arc is to be presented to the user for activation before related arcs of a smaller prominence. Prominence is important for multi-ended links, where typically a single "default" arc might have a prominence of 0, and additional arcs will have smaller prominences. For example, in a suitably constructed link, a browser might traverse an arc with prominence 0 by a single click; with prominence -1 by a context menu; with prominence -2 with a 2nd-level context menu, and so on. When a link contains multiple arcs with the same prominence, the arcs in question are intended as alternatives for selection by the end user. As with the arc-type, implementations are free to interpret the prominence in any suitable manner.
The notion of resources is universal to the World Wide Web. [Definition: As discussed in [RFC 2396], a resource is any addressable unit of information or service.] Examples include files, images, documents, programs, and query results. SLink links, as defined abstractly here, may appear in non-XML documents (such as PDF or Flash), and are able to associate all kinds of resources, not just XML-encoded ones.
One of the common uses of SLink is to create hyperlinks. [Definition: A hyperlink is a link that is intended primarily for presentation to a human user.] Nothing in SLink's design, however, prevents it from being used with links that are intended solely for consumption by generic link-aware software.
[Definition: An SLink link is a collection of one or more arcs, intended to express an explicit relationship between resources or portions of resources.] For [XML], a SLink link is realized by one of the syntax options defined later in this document.
This section describes markup that can appear inline in an [XML] document to define a SLink link.
The xml:href attribute turns the element that bears it into the starting resource of an arc whose ending resource is identified by the attribute's URI value; the arc-type is href. The prominence is equal to -1 times the number of ancestor elements that represent arcs of arc-type href.
NOTE: In XML documents conforming to [XML Names], the prefix xml is by definition bound to the namespace name http://www.w3.org/XML/1998/namespace, and does not require any additional declaration markup on the part of the author.
Examples: The following example shows how a custom-defined language can contain a hyperlink detectable by generic link processors.
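A minimal sketch of such markup (the element names and URI below are invented purely for illustration) might be:
<para xmlns="http://example.org/mylang">
  For details, see the
  <ref xml:href="http://example.com/specs/widget.xml">widget specification</ref>.
</para>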
This example shows how the a element might be defined in a future version of XHTML.
This example shows how the nl element could be defined as a multi-ended link in a future version of XHTML.
This example shows how a commonly occurring idiom in news and press releases could be defined as a multi-ended link in a future version of XHTML.
This example shows alternative links presented to the user, based on a language preference.
The xml:src attribute turns the element that bears it into the starting resource of an arc whose ending resource is identified by the attribute's URI value; the arc-type is src. The prominence is equal to -1 times the number of ancestor elements that represent arcs of arc-type href.
Examples: The following example shows how an XML language could bear both xml:href and xml:src links at the same time.
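Again as an illustrative sketch only, with invented element names:
<figure xml:src="images/chart.png"
        xml:href="http://example.com/reports/full-report.xml">
  Quarterly results (select for the full report)
</figure>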
The following example shows how the object element might be defined in a future version of XHTML.
The following example shows how the html element might be modified in a future version of XHTML to allow an automatic client-side 'redirect' to a new document.
This document does not define a syntax for out-of-line markup, along the lines of [HLink]. It would, however, be straightforward to define a mapping from RDF, meta elements, or CSS syntax, to the SLink data model.
An XML element conforms to SLink if it contains one or more attributes defined in this specification, and the value of any such attributes conforms to the lexical constraints of xsd:anyURI. This specification imposes no particular constraints on DTDs or schemas; conformance applies only to elements and attributes.
An XML application conforms to SLink if it recognizes the attributes defined in this specification, processes relative URIs in accordance with [XML Base], and provides a suitable user interface to realize the intent indicated by the attributes.
The following informative set of questions and answers discusses some common concerns with this approach.
Q: I don't see any means to provide link metadata, such as a title. Also, several of my favorite XLink/HLink features, like linkbases, are missing. How do I provide these things, and how can an implementation render links without them?
Q: The xml: prefix seems like a conspicuous choice. How do you justify using it?
A: The xml: prefix is reserved for things that are potentially useful in nearly any XML document on the Web. If linking doesn't fit this criteria, I can't imagine what does.
Q: Isn't it a slippery slope to introduce xml: prefixed attributes?
http://dubinko.info/writing/skunklink/
About jQuery and jQuery UI
Oracle Application Express includes the jQuery 1.7.1, jQuery UI 1.8.22, and jQuery Mobile 1.1.1 RC1 JavaScript libraries. This section discusses the features available in jQuery UI and best practices when referencing jQuery libraries in your JavaScript code.
About Available jQuery UI Features
Referencing the jQuery Library in Your JavaScript Code
Oracle Application Express only loads the components of jQuery UI that are required for base Oracle Application Express functionality. Oracle Application Express does not include the entire jQuery UI library since doing so would significantly add to download and processing time for each page load. Oracle Application Express includes these components by default:
jQuery UI Core - Required for all interactions and widgets.
Query UI Widget - The widget factory that is the base for all widgets.
jQuery UI Mouse - The mouse widget is a base class for all interactions and widgets with heavy mouse interaction.
jQuery UI Position - A utility plug-in for positioning elements relative to other elements.
jQuery UI Draggable - Makes any element on the page draggable.
jQuery UI Resizable - Makes any element on the page resizable.
jQuery UI Dialog - Opens existing mark up in a draggable and resizable dialog.
jQuery UI Datepicker - A datepicker than can be toggled from a input or displayed inline.
jQuery UI Effects - Extends the internal jQuery effects, includes morphing, easing, and is required by all other effects.
jQuery UI Effects Drop - A drop out effect by moving the element in one direction and hiding it at the same time.
For more information about jQuery UI and these specific components, see the jQuery UI site:
Oracle Application Express does not include the entire jQuery UI library. You can easily activate other components of jQuery UI by just including the relevant files. For example, to include the Tabs jQuery UI Widget in your application, you would include the following in your Page Template, Header within the <head>...</head> tags, as shown in the following example:
<link href="#IMAGE_PREFIX#libraries/jquery-ui/1.8.22/themes/base/minified/jquery.ui.tabs.min.css" rel="stylesheet" type="text/css" /> <script src="#IMAGE_PREFIX#libraries/jquery-ui/1.8.22/ui/minified/jquery.ui.tabs.min.js" type="text/javascript"></script>
Tip:You do not need to include the Dependencies for Tabs of UI Core and UI Widget since these are included by default with Oracle Application Express, as shown in the previous list of default components.
Determining when to use the $, jQuery, and apex.jQuery references to the jQuery library in your own JavaScript code depends on where you use it.
Newer versions of Oracle Application Express may include updated versions of the jQuery libraries. When new versions of jQuery are released, they sometimes introduce changes that could break existing functionality. These issues are documented in the Change Log for a particular version's Release Notes. To minimize your risk when upgrading to a newer version of Oracle Application Express, Oracle recommends the best practices described in the sections that follow.
Managing JavaScript Code in Your Application
Using JavaScript Code in a Plug-In
If you want to use jQuery in your own JavaScript code in an application, Oracle recommends you use jQuery or the shortcut $ for the following reasons:
If you upgrade Oracle Application Express, there is the potential that the newer jQuery version may break your existing code.
To avoid intensive testing and rewriting of your application to the new jQuery version, you can include the old jQuery version that was shipped with Oracle Application Express release 4.1 in your Page Template, Header within the <head>…</head> tags after the #HEAD# substitution string. Consider the following example:
...
#HEAD#
...
<script src="#IMAGE_PREFIX#libraries/jquery/1.6.2/jquery-1.6.2.min.js" type="text/javascript"></script>
...
In this example, the references to the jQuery library '$' and 'jQuery' point to jQuery version 1.6.2 (the version included in Oracle Application Express release 4.1) and not the newer version 1.7.1 included with Oracle Application Express release 4.2
Tip:Oracle Application Express release 4.2 includes jQuery version 1.6.2 so you do not need to add this to your web server.
Note that no additional code changes necessary. This approach minimizes the risk of breaking custom jQuery code when upgrading to a newer version of Application Express.
If jQuery plug-in you use requires a different version of jQuery and you want to use that version in other code in your application.
In this case, you can include that version of the jQuery library (the newer or older one) in your page template as described in the previous step (that is, after the #HEAD# substitution string). The variables $ and jQuery then point to the appropriate jQuery version.
If a jQuery plug-in you use requires a different version of jQuery and you do not want to use that version in other code in an application.
For example, suppose a jQuery plug-in requires jQuery 1.6.2, but you want to use jQuery 1.7.1, which is included with Oracle Application Express. Include jQuery version 1.6.2 in your Page Template, Header, within the <head>…</head> tags, before the #HEAD# substitution string.
In this case, you also need a JavaScript snippet to define a new variable to store the plug-in specific jQuery version (for example, jQuery_1_6_2) and then assign it. Consider the following example:
...
<script src="#IMAGE_PREFIX#libraries/jquery/1.6.2/jquery-1.6.2.min.js" type="text/javascript"></script>
<script type="text/javascript">
  var jQuery_1_6_2 = jQuery;
</script>
...
#HEAD#
...
You must also modify the initialization code of the jQuery plug-in to use the jQuery_1_6_2 variable, for example:
(function($) {
  ... plugin code ...
})(jQuery_1_6_2);
For more information about jQuery plug-in initialization, see "Using JavaScript Code in a Plug-In".
Tip: In all of these scenarios, use the variables $ and jQuery in your application to specify the version you require. Note that the variable apex.jQuery still points to the version that ships with Oracle Application Express to support built-in functionality, which depends on that jQuery version.
If you want to use jQuery in an Oracle Application Express plug-in, Oracle recommends using the apex.jQuery reference to the jQuery library. This reference should even include a modification of the initialization code of the included jQuery plug-in to use the apex.jQuery reference.
If you look at the JavaScript code of a jQuery plug-in, notice that most have the following code structure:
(function($) {
  ... plugin code ...
})(jQuery);
This structure declares an anonymous JavaScript function with a parameter $, which is immediately called and passed the parameter value of the current jQuery variable. The use of $ as the jQuery shortcut is implemented in a safe manner without having to rely on the fact that $ is still used to reference the main jQuery library.
Oracle recommends copying the jQuery plug-in file and prefixing it with apex (for example, apex.jquery.maskedinput-1.2.2.js). This makes it obvious to Oracle Application Express that the file has been modified. Then, change the reference from jQuery to apex.jQuery in the initialization code as shown in the following example.
(function($) {
  ... plugin code ...
})(apex.jQuery);
As a plug-in developer, you want to minimize your testing effort and create an environment where you have full control. Assume you are not following the recommendations described in the previous section. What happens if your plug-in is using a different version of jQuery?
This may result in strange plug-in behavior. Furthermore, it may be difficult to reproduce that behavior because you will not know if your plug-in is actually referencing a different version of jQuery. Using the apex.jQuery namespace reduces your risk and also the risk to your users. This approach enables you to test your plug-ins with the Oracle Application Express versions you want to support.
http://docs.oracle.com/cd/E37097_01/doc/doc.42/e35125/app_comp001.htm
This is the first in a series of mini-tutorials on Windows Phone 7 Programming for Silverlight Programmers.
Audience: Silverlight Programmers who want to learn to program the new Windows Phone 7. Once the fundamentals are covered, the target audience will narrow to Silverlight Programmers who wish to participate in the Silverlight HVP Phone project
Getting Started
There are two paths to installing what you need
- Get the Windows Phone Developer Kit
- Or… Download Visual Studio and the Silverlight Developer tools, and then install the Windows Developer tools
The First Three Applications
This tutorial will demonstrate how to create a simple Win7Phone application. The follow on will extend that application to create a second page, consisting of a simple form whose text fields are data bound.
The third tutorial, building on an application started by Tim Heuer, will demonstrate how to bind to live data through a web service, how to implement the panorama view and how to integrate both animation and live Bing maps.
Silverlight Versions
First Silverlight Phone Application
The goal of this very first phone application is to become comfortable with the development of Win7Phone applications and to notice that you already know how to do this.
To crack through the difficult barrier of writing a very first application, let’s dive in and open Visual Studio 2010, bring up the templates and click on Silverlight For Windows Phone. On the right hand side select Silverlight Phone Application, and after giving your application a name and clicking OK, Visual Studio will open.
The most significant changes you may notice immediately are that the design surface is already populated with an image of a phone and that MainPage.xaml is split vertically rather than horizontally.
Some of the magic of making this look so much like a standard Windows Phone 7 Application can be found in App.xaml which contains, literally, hundreds of lines of resource code: essentially a complete style set for creating Windows Phone 7 applications.
The default layout created by VisualStudio is a Grid (LayoutRoot) with at least two inner grids, TitleGrid and ContentGrid. The former is populated for you, and to get started you’ll want to change the Text fields of its two TextBlock controls.
<TextBlock Text="First Phone App" x: <TextBlock Text="Events" x:
The ContentGrid is a placeholder for your own content and in here we’ll place a button, just as we would in a standard (web) Silverlight Application. That is, you can drag it from the toolbox onto the design surface (placing it where you like on the image of the phone) or you can drag it into the HTML, and you can set its properties either in the Properties Window or in the Xaml directly.
(For that matter, we could also instantiate the button dynamically, at run time!)
To keep the design simple, I’ll add a row, half way down the content grid, and place a button within it, using the Properties window to set its Location, and then just for fun, switching to the Xaml to set the remainder of its properties:
<Grid x:Name="ContentGrid">
    <Grid.RowDefinitions>
        <RowDefinition Height="1*" />
        <RowDefinition Height="1*" />
    </Grid.RowDefinitions>
    <Button Name="SetText" Content="Button" HorizontalAlignment="Center" Margin="0" VerticalAlignment="Center" FontFamily="Segoe WP" FontSize="32" Foreground="Yellow" />
</Grid>
As is true in Silverlight Web applications, I could put the event into the Xaml, but as a rule, I don’t. Thus, my code-behind will look like this,
using System.Windows;
using Microsoft.Phone.Controls;

namespace WindowsPhoneApplication2
{
    public partial class MainPage : PhoneApplicationPage
    {
        public MainPage()
        {
            InitializeComponent();
            SupportedOrientations = SupportedPageOrientation.Portrait | SupportedPageOrientation.Landscape;
            Loaded += new RoutedEventHandler( MainPage_Loaded );
        }

        void MainPage_Loaded( object sender, RoutedEventArgs e )
        {
            SetText.Click += new RoutedEventHandler( SetText_Click );
        }

        void SetText_Click( object sender, RoutedEventArgs e )
        {
            // todo
        }
    }
}
The first two lines in the constructor were added by Visual Studio, the third I added to handle the event when MainPage is fully loaded. Within that event handler I set up the event handler for the button click.
Again, in the interest of keeping everything simple, let’s have the event handler just change textBlockListTitle from Events to Clicked.
void SetText_Click( object sender, RoutedEventArgs e )
{
    textBlockListTitle.Text = "Clicked!";
}
When you debug the application the Windows Phone 7 emulator will start. Since this takes a little while, you’ll be happy to know that you can stop and start debugging repeatedly without closing or restarting the emulator. Once your application comes up, clicking on the button will cause the event handler to be called and the text to change.
Any plans to update samples and text to suit the Beta version of the tools? (eg the App.xaml file no longer has hundreds of lines of resources)
What is the phone used by this image ? I’m just curious.
Pingback: DotNetShoutout
Pingback: Windows Phone 7 For Silverlight programmers « Windows 7 Phone
Pingback: You Already Are A Windows Phone 7 Programmer | Jesse Liberty
Pingback: Windows Phone 7 For Silverlight programmers « Windows Phone Secrets
Hi, I am finding it difficult for creating nice transition between two pages. Do we have any toolkit controls available for WP7 silverlight development that takes care of navigation transition between pages. I have found iphone developers using Tableview and Subviews. Do we have any similar controls?
I noticed the Developer node is under C#. Is there also a VB vesion planed?
There is a vb version planned, but we’ve not provided a timetable yet, I’m afraid.
Pingback: Dew Drop – May 17, 2010 | Alvin Ashcraft's Morning Dew
http://jesseliberty.com/2010/05/16/windows-phone-7-for-silverlight-programmers/
Path to image in xcassets folder (iOS)
Hi!
Hope you can help me with this... I have a large number of images that I want to use in an iOS app. Adding them to a .qrc does not work ("xcode build failed"), so I made a .xcassets file in Xcode to which I want to add images. However, no images are shown when running on an iPhone (but it does work in the simulator). I followed the advice here.
How do I find the correct path to my images?
Thanks in advance!
QT += core gui multimediawidgets

greaterThan(QT_MAJOR_VERSION, 4): QT += widgets

TARGET = AssetsTest
TEMPLATE = app

SOURCES += main.cpp\
        mainwindow.cpp

HEADERS += mainwindow.h

FORMS += mainwindow.ui

ios {
    assets_catalogs.files = $$files($$PWD/*.xcassets)
    QMAKE_BUNDLE_DATA += assets_catalogs
}
And my .cpp file:
#include "mainwindow.h" #include "ui_mainwindow.h" MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent), ui(new Ui::MainWindow) { ui->setupUi(this); QPixmap pixmap("//Users/JCROHNER/Programmering/QT Projekt/AssetsTest/Media.xcassets/Photo 2014-05-10 11 33 08.imageset/Photo 2014-05-10 11 33 08.jpg"); ui->label->setPixmap(pixmap); }
- SGaist Lifetime Qt Champion last edited by
Hi,
You hardcoded the path to your image on your computer so it won't be valid at all on your device. You should rather use QCoreApplication::applicationDirPath and move to the right folder from there.
Thanks a lot! Your suggestion solved my problem.
/JC
Here is the code that works:
QString path = QCoreApplication::applicationDirPath() + "/Photo 2014-05-10 11 33 08.jpg"; QPixmap pixmap(path); ui->label->setPixmap(pixmap);
Source: https://forum.qt.io/topic/67214/path-to-image-in-xcassets-folder-ios
Most of you know the KISS-principle, although some of you rather link it to the Ninja like painted rock band.
It usually stands for "keep it stupid simple", "keep it short and simple", "keep it simple sir", "keep it super simple", ..."keep it simple and straightforward" or "keep it simple and sincere".
In fact it comes down to the "Less is more" approach, which should be one of the core principles when developing and maintaining your Wiki articles.
Keep it simply structured
Most Wiki articles start small but quickly grow to a more elaborated document.
It's a good practice to build some structure from the beginning.
A basic structure can include
- [TOC] tag to get an idea of the content at first sight
- introduction to explain what the article is about
- Chapters (with header layout), to build the TOC
- References ("See also" section), References help to build credibility and provide additional information from other sources.
Keep it simply structured
Although a fancy layout is attractive and nice, keep it limited.
The more complex the layout, the higher the risk that other contributors scramble your article.
Don't go extreme on colours and exotic fonts...
Use tables only when really needed, don't box your article with tables.
Kick in swiftly and straightforward
Don't hesitate to start an article when you feel there is a need for it.
If you have information to share, give it a go!
BTW
If you think, for whatever reason, that the article has no reason to exist anymore, you can clean it and mark it as a "candidate for deletion".
Or ping the Wiki moderator Ninjas directly to remove it.
Keep information shared and social
In one of my recent Wiki blog posts I mentioned the fair use and privacy rights.
Be fair: if you publish copyrighted information, refer to the proper source and make sure it's OK to publish it.
When you publish to the Wiki, be aware that your article can be edited by other contributors.
Keep invisibility simple and sincere
A while ago, Ana Paula devoted an interesting article to "The Principle of Invisibility"; see Wiki Life: The Principle of Invisibility.
"Who" should receive the biggest highlight in an article?
In a Wiki it is article itself and the information it contains.
"When editing a page, main namespace articles should not be signed, because the article is a shared work, based on the contributions of many people, and one editor should not be singled out above others." (from Wikipedia: Signatures)
[Ka-jah Shakaah!]
The Security & Identity Ninja.
Peter Geelen
peter@fim2010.com
Premier Field Engineer - Security & Identity at Microsoft
CISSP, CISA, MCT
I agree to keep it very simple in structure IF you don't have any content. It's a little lame to return to a bunch of articles two years later with all sorts of sections laid out and zero content! If you're going to get very detailed in your structure, then you should also try to build a team to help you fill it out, rather than abandoning it.
Excellent Peter!
I agree with you too!
I was thinking of my articles: How i facilitate the insertion of content through other authors.
In fact, when creating an article, we need to think that he should keep himself alive (regardless of the author).
Ana,
This is the quote of my month: "In fact, when creating an article, we need to think that he should keep himself alive (regardless of the author)."
Simply, clever and it resumes all the work of wiki in few words !
Meaning that the article is like a person who survives off the contributions of Wiki revisors? Hmm. Should we give our articles names? Like George.
Source: https://blogs.technet.microsoft.com/wikininjas/2013/02/21/community-win-kiss-the-wiki-ninja/
#include <stdio.h>
#include <stdlib.h>
#include <conio.h>

int main ()
{
    int i,j;

    puts ("Trying to execute command CD ");
    i = system ("cd c:\text");
    if (i==-1)
        puts ("Error executing CD");
    else
        puts ("Command successfully executed");

    puts ("Trying to execute command del ");
    j = system ("del *.txt");
    if (j==-1)
        puts ("Error executing del");
    else
        puts ("Command successfully executed");

    getchar();
    return 0;
}
It won't work. How do I make this go to a specific directory? I know if i just make it i = system ("cd"); that it will print the current directory. Is there a way to change directories this way?
Source: https://www.daniweb.com/programming/software-development/threads/896/i-system-cd-c-text
It is not only properties that can be declared static. As of PHP 5, you can declare a method static:
static function doOperation() {
//...
Some classes make static the utility methods that do not depend upon member variables, to make the tool more widely available. We might supply a calcTax() method in Item, for example:
class Item {
public static $SALES_TAX=10;
private $name;
public static function calcTax( $amount ) {
return ( $amount + ( $amount/(Item::$SALES_TAX/100)) );
}
}
The calcTax() method uses the static $SALES_TAX property to calculate a new total given a starting amount. Crucially, this method does not attempt to access any standard properties. Because static methods are called outside of object context (that is, using the class name and not an object handle), they cannot use the $this pseudo-variable to access methods or properties. Let's use the calcTax() method:
$amount = 10;
print "given a cost of $amount, the total will be";
print Item::calcTax( $amount );
// prints "given a cost of 10, the total will be 110"
The benefit of using a static method in this example was that we did not need to create or acquire an Item object to gain access to the functionality in calcTax().
Let's look at another common use for static methods and properties. In Listing 17.2, we create a Shop class. Our system design calls for a central Shop object. We want client code to be able to get an instance of this object at any time, and we want to ensure that only one Shop object is created during the life of a script execution. All objects requesting a Shop object will be guaranteed to get a reference to the same object and will therefore work with the same data as one another.
1: <?php
2:
3: class Shop {
4: private static $instance;
5: public $name="shop";
6:
7: private function __construct() {
8: // block attempts to instantiate
9: }
10:
11: public static function getInstance() {
12: if ( empty( self::$instance ) ) {
13: self::$instance = new Shop();
14: }
15: return self::$instance;
16: }
17: }
18:
19: // $s = new Shop();
20: // would fail because __construct() is declared private
21:
22: $first = Shop::getInstance();
23: $first-> name="Acme Shopping Emporium";
24:
25: $second = Shop::getInstance();
26: print $second -> name;
27: // prints "Acme Shopping Emporium"
28: ?>
Listing 17.2 shows an example of a design pattern called singleton. It is intended to ensure that only one instance of a class exists in a process at any time and that any client code can easily access that instance. We declare a private static property called $instance on line 4. On line 5, we declare and assign to a property called $name. We will use it to test our class later. Notice that we declared the constructor private on line 7. This declaration makes it impossible for any external code to create an instance of the Shop object. We declare a static method called getInstance() on line 11. Because it is static, getInstance() can be called through the class rather than the object instance:
Shop::getInstance();
As a member function, getInstance() has privileged status. It can set and get the static $instance property. It can also create a new instance of the shop object using new. We test $instance and assign a Shop object to it if it is empty. After the test, we can be sure that we have a Shop object in the $instance property, and we return it to the user on line 15.
We use the self keyword to access the $instance property. self refers to the current class in the same way that $this refers to the current object.
We call getInstance() on line 22, acquiring a Shop object. To test the class, we change the $name property on line 23 and then call getInstance() once again on line 25. We confirm that the $second variable contains a reference to the same instance of Shop on line 26 by printing $second->name.
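To make the single-instance guarantee concrete, we could append a small check to the script (this is not part of the listing itself):
$first = Shop::getInstance();
$second = Shop::getInstance();
if ( $first === $second ) {
    print "Both calls returned the same Shop instance";
}
Because getInstance() always hands back the object stored in the static $instance property, the comparison with === succeeds.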
Source: http://books.gigatux.nl/mirror/php24hours/0672326191_ch17lev1sec3.html
The Open Source Release of NWFS 2.2 for the Linux 2.0 and 2.2 kernels is posted to our site and at 207.109.151.240. Included are the release notes. 2.4 will be posted Wednesday, March 29, 2000 at 7:00 a.m. Eastern Time.
Jeff Merkey
CEO, TRG

NWFS 2.2 RELEASE NOTES
----------------------
NWFS is a work in progress. TRG will continue to develop enhancements and new features to NWFS in the future. You are encouraged to report bugs or requests for feature enhancements to info@timpanogas.com
This release supports Linux Kernels 2.0 and 2.2. 2.3 support will be released Wednesday, January 29, 2000 at 7:00 a.m. Eastern Time and will also be available via FTP at 207.109.151.240.

New To This Release:
1. Full Asynch IO Manager (SMP)
2. NetWare-ish LRU Mirrored Block Cache.
3. Handle Based Virtual Partition Mirroring and Hotfixing Engine (NWVP).
4. Full SMP Support (and we even tested it).

BUGS
----
There is a bug related to the interaction between NWFS and the 2.2 VFS during "rm -r <directory>" operations where readdir from within rm is getting confused and reading partial dentry blocks during recursive delete operations. The VFS interface for NWFS is contained in create.c, dir.c, inode.c, file.c, mmap.c, nwvfs.c, and super.c. The side effect is that you will get a "directory not empty" message and have to repeat the "rm -r" operation a couple of times until all the files have been deleted. Anyone out there with VFS 2.2 knowledge is welcome to point out any apparent defects in the NWFS VFS interface for Linux. 2.3 also is manifesting similar behavior at present.
There still may be deadlocks in the file system when boundary or low memory conditions occur. If you encounter one, rebuild with DEBUG_DEADLOCKS set to 1 in the globals.h file. You will be able to hit Control-C and get a message about where the deadlock occurred. Email your /var/log/messages file to jmerkey@timpanogas.com (after compressing it first) and we will get it fixed if anyone runs across any deadlocks our testing scenarios did not expose.
At present, we are implementing hard links linux-style. The method used by NetWare NFS is extremely awkward, and implementations between NetWare versions are different. There is a HARD_LINKED flag however for files, which allows vrepair under NetWare to fix up hard-linked directory records during mount.
The NetWare NFS implementation of hard links does not allow the data stream to "float" between inodes as is done in Linux. NetWare creates a root directory record, chains the data stream to it, and if it gets deleted, all the hard links lose their links to the data. I have implemented hard links linux-style, but probably should create some mapping layer to move the root around when someone deletes it.

BUILDING NWFS
-------------
The globals.h file contains the following table of options:
#define WINDOWS_NT_RO 0
#define WINDOWS_NT 0
#define WINDOWS_NT_UTIL 0
#define WINDOWS_CONVERT 0
#define WINDOWS_98_UTIL 0
#define LINUX_20 0
#define LINUX_22 1
#define LINUX_24 0
#define LINUX_UTIL 0
#define DOS_UTIL 0
The LINUX_20 and LINUX_22 File System driver options are the only driver versions covered under this particular release. Select either LINUX_20 or LINUX_22 and set it to 1 in globals.h (you can only select one at a time).
There are makefiles included for different kernel configurations. To make the NWFS driver for Linux, select one of the following. The makefiles support modversioned kernels and naked kernels.
make -f nwfs.mak      This will make an NWFS driver SMP-no MODVER-no
make -f nwfsmod.mak   This will make an NWFS driver SMP-no MODVER-yes
make -f nwfssmp.mak   This will make an NWFS driver SMP-yes MODVER-no
make -f nwmodsmp.mak  This will make an NWFS driver SMP-yes MODVER-yes

TO-DO List
----------
The list has gotten very short, and few items remain (less platform optimization work) for closure for NWFS on Linux relative to providing all the features of the Native NetWare File System. Tuning and performance work is always on-going.
1. Implement Macintosh Data Fork Support and integrate with HFS code on 2.2 and 2.3 for the Linux Macintosh File Service.
2. Implement 255 character name support in the extended directory for NFS and LONG namespaces (current is 80 character).
3. Implement extensible parent hash skip lists for rapid positional numeric probes of the parent hash (like readdir and lookup like to do).
4. Add deleted block sequence number engine for salvageable file system (rename command needs to support -2 deleted file directory).
5. Implement splay tree for hash buckets on name hashes instead of linked lists for better search times.
6. Implement hash in aio manager for rapid indexing during add io requests (will do today).
7. Finish NetWare hard link support NetWare NFS style instead of linux style (very awkward implementation).
8. Implement cpu_to_le32(), etc. macros and test on IA64, Sparc64, and Alpha64.
Source: http://lkml.org/lkml/2000/3/27/67
Struggling with naming things
Since I discovered Python and started programming as a hobby in my early forties, I've often found it difficult to come up with "the right name". My first real program followed in the footsteps of Guido van Robot (aka GvR, one of the best ever named Python computer programs), and was uninspiringly named RUR-PLE, meant as an acronym. In spite of this poorly constructed name, RUR-PLE/rur-ple became somewhat popular, having been downloaded well over 50000 times - but I always wondered if it would have been more easily found and popular if its name had been a short phrase with either the word "robot" or "karel" in it.
I remember reading comments about Crunchy (originally known as Crunchy Frog, after a famous Monty Python sketch, and later shortened since another open source program was named Crunchy Frog) indicating that such a silly name should not be taken seriously.
My best-named experimental program was probably docpicture, which was meant to easily enhance docstrings so that, using an alternative to Python's help() function, one could see docstrings with relevant embedded images. As an example, here was an example for a turtle module, and another one for sequence diagrams. When using docpicture, a browser would be launched to view the resulting enhanced docstring. This felt somewhat unnatural when dealing with Python code and contributed to my abandoning this idea. Nowadays, with the Jupyter notebook available, such an enhancement to traditional doctrings might be worth revisiting.
However, docpicture was an exception: I do not find the names of other programs I have written inspiring in any way.
This brings me to "experimental/nonstandard". When I first started looking at ways to easily explore modifications/additions to Python syntax, having a construct like
from __experimental__ import some_syntax
seemed a logical choice. When I found out that this construct had been suggested before (in a different context), and even though it had not been adopted, I thought I should come up with an alternative. Thus __experimental__ became __nonstandard__.
However, I believe that this new name gives the wrong impression. If I come across some code in a repository and I see something like
from __nonstandard__ import something
I would likely see the word "nonstandard" as a warning and interpret this as "the authors are doing their own thing, straying away from the standard Python code; it's probably better to stay away."
On the other hand, if I read
from __experimental__ import something
I would likely interpret this as "the authors are trying out something different; it might be worth having a look as I may learn something new".
This is what I would like experimental to be: a Python package that lets people try out new syntactic constructs easily, coming up with new ideas or testing ideas proposed by others (in PEPs, for instance); not something to be used in production code, but something to experiment with: hence why I am reverting back to using the first name I had chosen.
1 comment:
You're missing a gorilla. The point is that you shouldn't use a dunder name for such things. Please read, last paragraph.
Source: https://aroberge.blogspot.com/2017/05/whats-in-name.html
Jean-Louis,
I have a question: did you fix the DCM in the library or in the code?
thank you,
Balloon
Hello Balloon,
The DCM fix is in the library (AP_Math and AP_DCM). I have only used my current flying version (AC241xp2)... The main bug corrected about the DCM drift very well done by Tridge is located only in these two lib.
So, if you have a flying well version you may only update these two libs...
Jean-Louis
Thank you for your answer.
Re Hello Jean Louis
I followed your advice lines (# include "APM_PID_HIL_JLN.h" / file / Config for quadcopter model for HIL
/ / # Include "APM_PID_HELI.h" / file / config for the H1 type of helicopter mode on Aerosim HIL / /
# Include "APM_PID_JLN.h" / file / config for the real model quadcopter BRQ) are present in "APM_Config.h". Mission Planner 1.1.46 refuses to connect: the CLI position is OK, but it cannot connect in the Fly position. I'll settle for the official 2.4.1, which does not do this to me; my meager skills do not allow me to determine the cause.
Once again thank you for your answers.
Daniel
Hello Chapelat Daniel, are you using an APM1 or APM2 ? the APM_PID_JLN (used for real flight is configured by default for APM2, you must edit and change it for APM1 if you're using APM1 and upload firmware again). If you're using APM2 I think you don't have to do anything to it.
Hello Rui Manuel
I use APM1. I have just returned from a flight I made with 2.4.1; unfortunately, on landing I was unable to control a gust of wind and broke two propellers, so I cannot comment on 2.4.1 yet, I have not had the time. This morning my idea was to test JLN 2.4.1 xp2, but I will do better to wait!
thank you for your comment I will try again to run JLN 2.4.1 xp2 with your index
thank you
Daniel cordially
Hello Jean Louis
two questions:
I am compiling with Arduino (ArduCopter 242xp2), but when I want to use APM I do not get the connection. Do you have any idea?
- I just bought AeroSimRC and followed your tutorial, but the plugin does not appear in the plugins window, so I cannot access it even though it is in the folder. Do you have any idea?
Thank you good day
Hello Daniel,
Have you set this as follow in the APM_Config.h
#include "APM_PID_HIL_JLN.h" // Config file for the quadcopter model for HIL
//#include "APM_PID_HELI.h" // Config file for the Helicopter type H1 in HIL mode on AeroSIM //
#include "APM_PID_JLN.h" // Config file for the real QRO quadcopter model
Ps: This setup for HIL is set for the APM v1
Regards,
Jean-Louis
So I had similar issues JL, and since I was loading the HIL firmware through MP I didn't really have to reconfigure anything in APM_Config.h. The problem was that the AeroSim plugin included with MP didn't get recognized by AeroSim 3.8... basically AeroSim would say that the plugin is not compatible. Based on Marco's advice and help, I managed to get a hold of a much older version of MP (2.0-something) and tried that plugin, which worked. Continuing to follow the instructions on the dev team wiki, it would work, but the quad would just fly on its own before I could even load a mission script. Any ideas there?
Here is my APMHil plugin that I am currently using successfully with AeroSIM.
You need only to copy the APMHil folder into:
c:\Program Files (x86)\AeroSIM-RC\Plugin\
and then on AeroSIM select:
PlugIn -> APMHill
I hope this will work for you...
Regards, Jean-Louis
Hi JL :) I was flying this version today and behaved very well, autolanding marvelous. I have just one question, in system.pde, line 438, there's the following line :
// debug to Serial terminal
Serial.println(flight_mode_strings[control_mode]);
The Serial.println would be better commented out, right? To avoid consuming cycle time?
Source: http://diydrones.com/forum/topics/arducopter-2-4-released?commentId=705844%3AComment%3A798556&xg_source=activity
So, after a long time without posting (been super busy), I thought I’d write a quick Bollinger Band Trading Strategy Backtest in Python and then run some optimisations and analysis much like we have done in the past.
It’s pretty easy and can be written in just a few lines of code, which is why I love Python so much – so many things can be quickly prototyped and tested to see if it even holds water without wasting half your life typing.
So as some of you may be aware, Yahoo Finance have pulled their financial data API, which means that we can no longer use Pandas Datareader to pull down financial data from the Yahoo Finance site. Rumour has it that Google are pulling theirs too, although I’m yet to see that confirmed. Why they have both chosen to do this, I really don’t know but it’s a bit of a pain in the backside as it means lots of the code I’ve previously written for this blog no longer works!!! Such is life I guess…
Anyway, onto bigger and better things – we can still use the awesome Quandl Python API to pull the necessary data!
Let’s start coding…
#make the necessary imports
import pandas as pd
from pandas_datareader import data, wb
import numpy as np
import matplotlib.pyplot as plt
import quandl
%matplotlib inline

#download Dax data from the start of 2015 and store in a Pandas DataFrame
df = quandl.get("CHRIS/EUREX_FDAX1", authtoken="[enter-your-token-here]", start_date="2015-01-01")
We now have a Pandas DataFrame with the daily data for the Dax continuous contract. We can take a quick look at the structure of the data using the following:
df.head()
and we get the following:
So next we get to the code for creating the actual Bollinger bands themselves:
#Set number of days and standard deviations to use for rolling lookback period for Bollinger band calculation
window = 21
no_of_std = 2

#Calculate rolling mean and standard deviation using number of days set above
rolling_mean = df['Settle'].rolling(window).mean()
rolling_std = df['Settle'].rolling(window).std()

#create new DataFrame columns to hold the rolling mean and the upper and lower Bollinger bands
df['Rolling Mean'] = rolling_mean
df['Bollinger High'] = rolling_mean + (rolling_std * no_of_std)
df['Bollinger Low'] = rolling_mean - (rolling_std * no_of_std)
Let’s plot the Dax price chart, along with the upper and lower Bollinger bands we have just created.
df[['Settle','Bollinger High','Bollinger Low']].plot()
Now let’s move on to the strategy logic…
#Create an "empty" column as placeholder for our position signals
df['Position'] = None

#Fill our newly created position column - set to sell (-1) when the price hits the upper band,
#and set to buy (1) when it hits the lower band
for row in range(len(df)):
    if df['Settle'].iloc[row] > df['Bollinger High'].iloc[row]:
        df['Position'].iloc[row] = -1
    if df['Settle'].iloc[row] < df['Bollinger Low'].iloc[row]:
        df['Position'].iloc[row] = 1

#Forward fill our position column to replace the "None" values with the correct long/short positions
#to represent the "holding" of our position forward through time
df['Position'].fillna(method='ffill',inplace=True)

#Calculate the daily market return and multiply that by the position to determine strategy returns
df['Market Return'] = np.log(df['Settle'] / df['Settle'].shift(1))
df['Strategy Return'] = df['Market Return'] * df['Position']

#Plot the strategy returns
df['Strategy Return'].cumsum().plot()
So not particularly great returns at all…in fact pretty abysmal!
Let’s try upping the window length to use a look-back of 50 days for the band calculations…
But first, let's define a "Bollinger Band trading strategy" function that we can easily run again and again while varying the inputs:
def bollinger_strat(df,window,std):
    rolling_mean = df['Settle'].rolling(window).mean()
    rolling_std = df['Settle'].rolling(window).std()

    df['Bollinger High'] = rolling_mean + (rolling_std * std)
    df['Bollinger Low'] = rolling_mean - (rolling_std * std)

    df['Position'] = None

    for row in range(len(df)):
        if df['Settle'].iloc[row] > df['Bollinger High'].iloc[row]:
            df['Position'].iloc[row] = -1
        if df['Settle'].iloc[row] < df['Bollinger Low'].iloc[row]:
            df['Position'].iloc[row] = 1

    df['Position'].fillna(method='ffill',inplace=True)

    df['Market Return'] = np.log(df['Settle'] / df['Settle'].shift(1))
    df['Strategy Return'] = df['Market Return'] * df['Position']

    df['Strategy Return'].cumsum().plot()
Great, now we can just run a new strategy backtest with one line! Let’s use a 50 day look back period for the band calculations…
bollinger_strat(df,50,2)
Which should get us a nice looking plot:
Well those returns are at least better than the previous back-test although definitely still not great.
If we want to get a quick idea of whether there are any lookback periods that will create a positive return we can quickly set up a couple of vectors to hold a series of daily periods and standard deviations, and then just “brute force” our way through a series of backtests which iterates over the two vectors, as follows…
#Set up "daily look back period" and "number of standard deviation" vectors
#For example the first one creates a vector of 20 evenly spaced integer values ranging from 10 to 100
#The second creates a vector of 10 evenly spaced floating point numbers from 1 to 3
windows = np.linspace(10,100,20,dtype=int)
stds = np.linspace(1,3,10)

#And iterate through them both, running the strategy function each time
for window in windows:
    for std in stds:
        bollinger_strat(df,window,std)
This gets us the following plot at the end:
Granted at this point we can’t be sure exactly which combination of standard deviations and daily look back periods produce which results shown in the chart above, however the fact that there are only a couple of equity curves that end up in positive territory would suggest to me that this may not be a great strategy to pursue…for the Dax at least. That’s not to say Bollinger bands are not useful, just that used in such a simple way as outlined in the above strategy most likely isn’t going to provide you with any kind of real “edge”.
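If you did want to pin down which parameter pair produced which curve, one quick way (just a sketch, reusing the bollinger_strat function defined above) is to record the final cumulative return for each combination instead of only plotting:
results = []
for window in windows:
    for std in stds:
        bollinger_strat(df, window, std)
        # store the end value of the cumulative strategy return for this parameter pair
        results.append({'window': window, 'std': std,
                        'final_return': df['Strategy Return'].cumsum().iloc[-1]})

results_df = pd.DataFrame(results).sort_values('final_return', ascending=False)
print(results_df.head())
That at least tells you which corner of the parameter grid the few positive equity curves live in.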
Oh well..perhaps we’ll find something better next time.
until then!
I like your explanation. It’s a pity this strategy doesn’t provide good returns. Keep on looking 🙂
Thanks! I will continue the eternal search for that “winning strategy” 😉
Hey thanks for your posts, hope you keep them coming. These are great!
Thanks for the comment! Also – I replied to your other comment about running code using data stored locally on your PC but didn’t hear back from you. i am happy to try to help you refactor the code to use locally held data – I just need to know a bit more about what format it is stored in. 😀
Hello Sir! I have a question regarding how you download the Dax data. What does authtoken=”5GGEggAyyGa6_mVsKrxZ” mean?? Is it a website? Also, if I want to get the data for a certain stock such as Apple, what will be the corresponding code? Thank you in advance!
The “authtoken” is something that is used when downloading data from the Quandl API.
Pandas have fixed their DataReader so that the Yahoo Finance API can be used again now – so the simplest solution to your problem is to use something in the following format:
from pandas_datareader import data, wb
dax_data = data.DataReader("^GDAXI", 'yahoo', '01/01/2000', '07/09/2017')
That will get you Dax cash index data between those two dates; alter the dates as necessary. Hope that helps!
Hey! Thanks for these posts. Really enjoy them. Shouldn’t it be: df[‘Strategy Return’] = df[‘Market Return’] * df[‘Position’].shift(1) in order to be correct?
Hi Aleksander – I think you are indeed correct; because we are using settle price to determine whether the bollinger bands have been hit, the position determined the day before will be the position that affects the next day’s returns. So in short, yes i think you are correct. I shall change the code when I have a moment. Well spotted – thanks for that!
Thank you very much for this. Helps beginners like myself port this over to several other domains like Cryptocurrencies. Keep on posting this great content! Very helpful to see how easy python can test an idea.
No problem, thanks for the comment – I’ll be trying to post an article very soon along the same lines but this time using the Stochastic Oscillator as a signal.
The best one!
When you execute a sale, shouldn’t the following rows be filled with 0 after -1? You effectively stopped making a loss/gain then? If you continue multiplying by -1 in my understanding you will continue ‘making a loss’ in the sum even though you sold?
Hi Jed – actually this strategy operates on the assumption that you go “long” when the price hits the lower band and go “short” when the prices hits the upper band, rather than just selling your long position when the upper band is hit. SO there is always a position being held, whether long or short. Hope that explains it.
Hi, thanks for the post, it really helped me learn algorithmic trading in Python from your website. I have a small question: when I run the for loop to calculate the position, my Position column doesn't change; it still shows None as set earlier. Can you help me out with this? It would really help, thanks.
Hi Hardik – are you using the code I have written above or are you adapting it and changing it in any way?
I follow your strategies and they are very well explained. I saw you use lot of looping. We can create additional 3 columns for ‘Settle’, ‘Bollinger High’, ‘Bollinger Low’ using shift(1) on the existing columns in the dataframe. This will avoid looping and vectorized operation is much faster. This is just a suggestion.
Hi Gurudutt, that is a good point – I should really try to avoid looping operations and go for more vectorised operations for sure!
Hi S666,
I have very much enjoyed reading your posts about using pandas for quickly testing simple trading strategies. I have one question:
It seems to me that you are able to change your position before you get the signal.
In the code you calculate the strategy return by multiplying the (log) return with the current position:
df[‘Market Return’] = np.log(df[‘Settle’] / df[‘Settle’].shift(1))
df[‘Strategy Return’] = df[‘Market Return’] * df[‘Position’]
I’m wondering if it shouldn’t be
df[‘Market Return’] = np.log(df[‘Settle’] / df[‘Settle’].shift(1))
df[‘Strategy Return’] = df[‘Market Return’] * df[‘Position’].shift(1)
Let me give an example:
t | price | returns | pos
1 | 100 | | 1
2 | 99 | -1 | -1
at t = 1 we get the signal to enter a long position and buy one instrument.
at t = 2 we get the signal to enter a short position sell the instrument we already bought and short sell another.
Using the code without shifting the position, the strategy makes a (non log) return of -1*-1 = 1, but in reality we bought 1 at a price of 100 at t= 1 and sold it for 99 at t=2 giving a strategy return of -1 (then we also short sold another in order to enter a short position, but that first influence the strategy return at t=3 when we see which way the price went).
Am I completely off here, i’m very curious of what i’m missing?
best regards
Nich
I’m sorry for the messed up formatting of my post, the forum seems to eat white spaces. I hope you still can read it, otherwise shoot me an email.
Hi Nich, you are correct it seems – I have failed to shift the position forward to align with the correct signal as you have shown. Apologies for that – always happy for people to point out errors of logic like that – thanks very much!
Seems so easy … so good explanation … thanks you Sir
Love your articles and the brute force method!
Hey Chris – Glad you like the content 😉 – if there is anything in particular you would like to see a post about, just shout and I will see what I can do.
Hey very helpful article! Any suggestions on if it’s possible to automate the execution as well?
Source: https://www.pythonforfinance.net/2017/07/31/bollinger-band-trading-strategy-backtest-in-python/?utm_source=rss&utm_medium=rss&utm_campaign=bollinger-band-trading-strategy-backtest-in-python
When you trigger a signal you can pass it a payload. This payload is the starting point of the props to the signal. Given the signal:
[ actionA, actionB ]
someSignal({ foo: 'bar' })
Both actions in the chain now receive this payload on their props. Actions can also read and change application state through the state API, which is available to every action:
function setSomething ({state}) { state.set('some.path.foo', 'bar') }
All common state operations are available as a method. Instead of first pointing to a value and then operate, you operate first and give the path to the value.
// Traditional approach someArray.push('newItem') // With Cerebral state.push('path.to.array', 'newItem')
This is the one core concept of Cerebral that gives all its power. This simple approach allows for a few important things:
The path on the context is only available if there is actually expressed a path after the action in question:
import actionA from '../actions/actionA' import actionB from '../actions/actionB' import actionC from '../actions/actionC' export default [ actionA, actionB, { foo: [actionC] } ]
In this scenario only actionB has the path on its context. As explained in Chains and paths, the path allows you to diverge execution of the signal.
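As a minimal sketch of what that looks like (based on the chain above, not copied verbatim from elsewhere in the docs), actionB simply returns one of the declared paths:
function actionB ({ path }) {
  // returning the result of a path method tells Cerebral which branch to execute next
  return path.foo()
}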
When you ramp up your game with Cerebral you will most certainly take more advantage of tags and computed in your actions, typically related to action factories.
You have access to the controller instance on the context:
function someAction ({controller}) {}
You have access to function tree execution as well. This holds information about the current execution, mostly used by the devtools to inform the debugger.
function someAction ({execution}) {}
Source: http://cerebraljs.com/docs/api/action.html
If we use print instead of return, why does the word None come up in the console?
I also have the same question.
In previous questions, when I wrote an if statement and returned a string, it would print to the console.
Now, in this question, it seems like we need to add print() and call the function again?
E.G.
def greater_than(x, y):
    if x > y:
        return x
    if y > x:
        return y
    if x == y:
        return "These numbers are the same"

# New function
def graduation_reqs(credits):
    if credits >= 120:
        return "You have enough credits to graduate!"

print(graduation_reqs(120))
A function will print to the console only if you explicitly call print().
We use print() when we want to see the data on the screen.
We use return when Python needs to have the data - to assign to a variable, perhaps, or pass to another function
In general, you’ll find that you use print() less and return more as you go on. (One exception is in development and debugging, where print() is invaluable.)
To view returned output, call the function within a print function, so, the famous “hello world” should be something like:
def hello(name): return "Hello " + name + "!" print(hello("world"))
Ok. So return and then “Strng” will never print. Thanks!
Correct. The returned value can be assigned to a variable for future use, passed directly as an argument to another function (in the example above, the print() function), or used as one value within an expression. If the calling expression does nothing with the return value, it will be lost.
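For example (an illustrative snippet, reusing the greater_than function from above):
result = greater_than(3, 7)        # assign the returned value to a variable
print(greater_than(3, 7))          # pass it directly to another function
total = greater_than(3, 7) + 10    # use it as one value within an expression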
If you want to print - “These numbers are the same”
You have to type something like:
print(greater_than(2, 2))
I don’t get it. Why can’t a print be part of the function?
It can be, but if you check the reply above by patrickd314 you will understand.
Check below.
So yes, in our exercises here you can certainly use it, but normally in real-life apps you will depend more on return, unless your program is required to print something on screen once the function is called.
Source: https://discuss.codecademy.com/t/do-i-need-to-be-utilizing-return-or-print/452105
In my last post I allowed people to register but the registration process was not very robust. The account controller just accepted whatever you put in. It didn’t even attempt to validate that your email was correct. Today I am going to correct that oversight and see about doing the normal confirmation. Firstly, I will validate that the email address is of the correct form, then I’ll wire up a process whereby the user has to confirm their email by clicking on a link in an email that is sent to them.
Let's take a look at the HTTP Post controller for /Account/Register. I need to validate the email address first. To do this, I changed the top of the method as follows:
public async Task<IActionResult> Register(RegisterViewModel model)
{
    if (ModelState.IsValid)
    {
        if (!IsValidEmail(model.Email))
        {
            ModelState.AddModelError("", "A valid email address is required");
            return View(model);
        }
        // ... the rest of the existing registration logic
I’ve delegated the checking of the email form to another method in the same class:
public bool IsValidEmail(string s)
{
    if (string.IsNullOrEmpty(s))
        return false;

    // Return true if s is in valid e-mail format (pattern from the Microsoft email-validation sample).
    try
    {
        return Regex.IsMatch(s,
            @"^(?("")("".+?(?<!\\)""@)|(([0-9a-z]((\.(?!\.))|[-!#\$%&'\*\+/=\?\^`\{\}\|~\w])*)(?<=[0-9a-z])@))" +
            @"(?(\[)(\[(\d{1,3}\.){3}\d{1,3}\])|(([0-9a-z][-0-9a-z]*[0-9a-z]*\.)+[a-z0-9][\-a-z0-9]{0,22}[a-z0-9]))$",
            RegexOptions.IgnoreCase, TimeSpan.FromMilliseconds(250));
    }
    catch (RegexMatchTimeoutException)
    {
        return false;
    }
}
That long regular expression is from Microsoft and validates the email address. The method returns true or false depending on whether the string matches or not.
I now need to generate a link that the user can utilize to validate their email address. To do this, I change the POST /Account/Register routine again:
[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Register(RegisterViewModel model)
{
    if (ModelState.IsValid)
    {
        if (!IsValidEmail(model.Email))
        {
            ModelState.AddModelError("", "A valid email address is required");
            return View(model);
        }

        Debug.WriteLine("Register: Creating new ApplicationUser");
        var user = new ApplicationUser { UserName = model.Email, Email = model.Email };
        Debug.WriteLine(string.Format("Register: New Application User = {0}", user.UserName));

        var result = await UserManager.CreateAsync(user, model.Password);
        Debug.WriteLine(string.Format("Register: Registration = {0}", result.Succeeded));
        if (result.Succeeded)
        {
            Debug.WriteLine("Register: Sending Email Code");
            var code = await UserManager.GenerateEmailConfirmationTokenAsync(user);
            Debug.WriteLine(string.Format("Register: Email for code {0} is {1}", model.Email, code));
            var callBackUrl = Url.Action("ConfirmEmail", "Account",
                new { userId = user.Id, code = code }, protocol: Context.Request.Scheme);
            await SendEmailAsync(model.Email, "Confirm your account",
                "Please confirm your account by clicking this link: <a href=\"" + callBackUrl + "\">link</a>");
            ViewBag.Link = callBackUrl;
            return View("RegisterEmail");
        }
        foreach (var error in result.Errors)
        {
            Debug.WriteLine(string.Format("Register: Adding Error: {0}:{1}", error.Code, error.Description));
            ModelState.AddModelError("", error.Description);
        }
        return View(model);
    }

    // Something went wrong, but we don't know what
    return View(model);
}
When the user is created successfully, I use the Identity Framework to generate an email confirmation token. I then generate a Url for the request and send it via email. There is a flip side of this – the /Account/ConfirmEmail action is run when the user clicks on the link:
/** * GET: /Account/ConfirmEmail */ [HttpGet] [AllowAnonymous] public async Task<IActionResult> ConfirmEmail(string userId, string code) { Debug.WriteLine("ConfirmEmail: Checking for userId = " + userId); if (userId == null || code == null) { Debug.WriteLine("ConfirmEmail: Invalid Parameters"); return View("ConfirmEmailError"); } Debug.WriteLine("ConfirmEmail: Looking for userId"); var user = await UserManager.FindByIdAsync(userId); if (user == null) { Debug.WriteLine("ConfirmEmail: Could not find user"); return View("ConfirmEmailError"); } Debug.WriteLine("ConfirmEmail: Found user - checking confirmation code"); var result = await UserManager.ConfirmEmailAsync(user, code); Debug.WriteLine("ConfirmEmail: Code Confirmation = " + result.Succeeded.ToString()); return View(result.Succeeded ? "ConfirmEmail" : "ConfirmEmailError"); }
This method looks up the user and then checks with the Identity Framework (specifically the user manager) to see if the code is correct. It it’s correct, then the user gets a friendly “Please log in” notification. If not, an error message is generated.
I now need three views:
- Message that an email has been sent
- Message confirming successful activation
- Message indicating failed activation
I’ve created three crude views as follows:
RegisterEmail.cshtml
@{ <h4>Check your email</h4> <p> We just tried to send you an email with an activation link. When you get the email, click the link in the email to activate your account. </p> </div>
ConfirmEmail.cshtml
@{ <h4>Confirmation Successful</h4> <p> Thank you for confirming your email. Please @Html.ActionLink("click here to log in", "Login", "Account", routeValues: null) </p> </div>
ConfirmEmailError.cshtml
@{ <h4>Confirmation Failed</h4> <p> We could not confirm your account. </p> </div>
The project still won’t compile. That’s because I need a method in the AccountController for sending email async. I’ve got the following initially just to test things out:
public static Task SendEmailAsync(string email, string subject, string message) { Debug.WriteLine("SendEmailAsync: " + message); return Task.FromResult(0); }
This version outputs the link that the user would be sent in the Debug Output window. I can now run the project and go through the process of registration. When it comes time to actually send the link, I can cut-and-paste it from the Debug Output window into my browser.
Sending Email via outlook.com
I have to pick an email provider for this next bit. I have an account on outlook.com so I will use that. The email settings for outlook.com are written in their site (see the settings for IMAP and SMTP) and are listed here:
- Host: smtp-mail.outlook.com
- Port: 587 or 25
- Authentication: Yes
- Security: TLS
- Username: Your email address
- Password: Your password
I’ve created a new section in the config.json file according to this specification:
"From": "Your-Account@outlook.com",
"Host": "smtp-mail.outlook.com",
"Port": "587",
"Security": "TLS",
"Username": "Your-Account@outlook.com",
"Password": "Your-Password"
}
Outlook.com doesn’t allow you to send generic emails through their system so the From: has to be your username. If you are using other systems, this may not be a restriction.
I’ve created a new namespace called AspNetIdentity.Services and created a new class called EmailService in there. First task is to take in a Configuration object and store the email data for later. I’m going to use a standard Singleton pattern for this. A Singleton pattern basically tells the system to create the object once and then return the same instance whenever asked. This means I can set the data once in the Startup routine and then access the same data all over the place. A typical Singleton pattern is like this:
private static EmailService instance;

private EmailService() { }

public static EmailService Instance
{
    get
    {
        if (instance == null)
        {
            instance = new EmailService();
        }
        return instance;
    }
}
I’ve also created a bunch of properties and a SetConfiguration() routine that allows the startup object to set the configuration for me. Since it’s a lot of repetitive code for each property, I won’t repeat it here. You can check it out in the files on GitHub. I’ve added a single line to the Startup.cs ConfigureServices() method.
// Configure the Email Service
EmailService.Instance.SetConfiguration(Configuration);
Finally, I've removed the SendEmailAsync() method in the AccountController and changed the call to SendEmailAsync() in the Register() post function to be:
try { await EmailService.Instance.SendEmailAsync(model.Email, "Confirm your account", "Please confirm your account by clicking this link: <a href=\"" + callBackUrl + "\">link</a>"); ViewBag.Link = callBackUrl; return View("RegisterEmail"); } catch (SmtpException ex) { Debug.WriteLine("Could not send email: " + ex.InnerException.Message); ModelState.AddModelError("", "Could not send email"); return View(model); }
Now that I’ve got my configuration in the right place, I can concentrate on the task at hand – sending an email asynchronously with all the configuration. I’m going to do this using the standard System.Net.Mail.SmtpClient class:
public Task SendEmailAsync(string email, string subject, string message) { if (!this.IsConfigured) { Debug.WriteLine("EmailService is not configured"); Debug.WriteLine("SendEmailAsync: " + message); return Task.FromResult(0); } SmtpClient client = new SmtpClient(this.Hostname, this.Port); client.EnableSsl = this.Encryption.Equals("TLS"); if (this.Authenticated) { client.Credentials = new System.Net.NetworkCredential(this.Username, this.Password); } MailAddress fromAddr = new MailAddress(this.FromAddress); MailAddress toAddr = new MailAddress(email); MailMessage mailmsg = new MailMessage(fromAddr, toAddr); mailmsg.Body = message + "\r\n"; mailmsg.BodyEncoding = System.Text.Encoding.UTF8; mailmsg.Subject = subject; mailmsg.SubjectEncoding = System.Text.Encoding.UTF8; Debug.WriteLine("SendEmailAsync: Sending email to " + email); Debug.WriteLine("SendEmailAsync: " + message); return client.SendMailAsync(mailmsg); }
Troubleshooting Tips
Did you get it to work straight away? I didn’t. There are a bunch of things that could go wrong but they are all tied up in the SmtpException that gets returned. I set a breakpoint on the Debug.WriteLine statement immediately after the catch(SmtpException). Examine the InnerException which is usually where the problem lies. Some interesting issues:
- Cannot connect to remote host: Indicates that port 25 is blocked generally. My ISP blocks outbound port 25, so I have to use port 587 instead.
- Mailbox is inaccessible: No authentication: This indicates that the authentication failed or that you didn’t include authentication in your settings.
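If you would rather see the whole story in the Output window than poke around in the debugger, a small (hypothetical) tweak to the catch block walks the exception chain for you:
catch (SmtpException ex)
{
    // log every level of the exception - the root cause usually sits in an InnerException
    for (Exception e = ex; e != null; e = e.InnerException)
    {
        Debug.WriteLine("SendEmail failed: " + e.Message);
    }
    ModelState.AddModelError("", "Could not send email");
    return View(model);
}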
I’ve included enough debugging with the files to indicate the problem. You can take a look at the files on my GitHub Repository.
Now I’ve got some final clean-up to do, which I’ll do before the next post. I want to clean up the CSS and display of the various error messages that come back so that they look nicer. I also want to disable the Register button and put a spinner up when I click on Register. Finally, I need to clean up the email message that goes out so that the link is clickable and the email looks good.
In the next post, I’ll look at what to do about forgotten passwords.
Source: https://shellmonger.com/2015/04/06/asp-net-vnext-identity-part-4-registration-confirmation/
README
ADAPT DS Icons
This package basically takes a folder of svg icons and converts them to exported React components.
Import and use
import { IconEmail } from '@adapt-design-system/icons'; <Text fontSize="small"> Inbox <IconEmail /> </Text>;
Set size and color
By default, the icons are designed to inherit the fontSize and color of their parent. This is to serve the usual case where an icon is used to accompany text and it should generally be the same size and color as the text.
Pass style values to the customStyle prop to add new properties / override the defaults.
Note: The size of the icons is set to 1.1em by default, as generally icons look better when they're slightly larger than their accompanying text. To override this, you can do the following:
<IconEmail customStyle={{width: '1em', height: '1em' }} />
Why is there a negative top margin on the icons?
This is to create a better visual alignment with the text.
To remove it,
<IconEmail customStyle={{marginTop: 0 }} />
Adding New Icons
If the design team have added new icons to the Figma library, you should get them to export SVGs of those icons as 24x24 svgs and drop those files in the icons directory. Note that the React component name will be generated from the filename:
activity-fill.svg -> <IconActivityFill />
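Once the package is rebuilt and published (see the next section), the generated component can be imported like any other icon. An illustrative example:
import { IconActivityFill } from '@adapt-design-system/icons';

<Text fontSize="small">
  Activity <IconActivityFill />
</Text>;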
Publishing a new version to npm
To publish a new version run yarn publish-package. If you introduce a breaking change, bump the major version (2.0.3 -> 3.0.0). If you add new (non-breaking) functionality, bump the minor version (2.0.3 -> 2.1.0). Otherwise bump the patch when adding new icons (2.0.3 -> 2.0.4). You will be asked for a new version.
⚠️ Important!
Don't forget to upgrade the consuming packages, in this case /docs, to make use of the new version. Like so:
yarn upgrade @adapt-design-system/icons@^2.0.4
Source: https://www.skypack.dev/view/@adapt-design-system/icons
In this post, I want to explain how you change the way that build information is displayed on the Build Details View. You can do some pretty crazy things if you really want to. We are just going to change the color of Build Messages. If you want more details on customizing the Log View, see my last post on the subject.
In the Log View of the Build Details View, messages are displayed as normal black text on the white background. So, they don’t stand out much. What I want to do is show you how to change the color of the messages so that they do stand out against all the other text.
Scenario:
Change the text of build messages from black to LightSteelBlue.
Steps:
We don’t have to do anything for messages to show up in the view. They already do that. All we need to do is change the way they show up. To do that we need to create a custom converter for the build information type “BuildMessage”.
There are a couple of things I would like to note here about the Convert method. The IBuildDetailView object gives you access to some additional information like IBuildServer if you need it. The IBuildInformationNode object is the build information that you are converting. The parentIndent value is how much you should probably indent your paragraph to keep the "tree-like" feel of the Log View. In the example below, I simply set the Margin of the new Paragraph to have a left margin equal to the parentIndent. This value is based on the Build Information hierarchy depth at which the build information is located.
Here is the code for the converter:
using System.Windows;
using System.Windows.Documents;
using System.Windows.Media;
using Microsoft.TeamFoundation.Build.Client;
using Microsoft.TeamFoundation.Build.Controls;
namespace CustomBuildInformationConverters
{
class MessageConverter : IBuildDetailInformationNodeConverter
{
public object Convert(IBuildDetailView view, IBuildInformationNode node, double parentIndent)
{
Paragraph p = new Paragraph();
p.Margin = new Thickness(parentIndent, 0, 0, 0);
p.Foreground = Brushes.LightSteelBlue;
p.Inlines.Add(node.Fields["Message"]);
return p;
}
public string InformationType
{
get { return "BuildMessage"; }
}
}
}
But first, we need an AddIn (or VSP Package) that we can load into our Visual Studio Client at start up. I used the wizard to generate a new AddIn project that loaded at start up.
Here is the code I added to register/unregister my converter:
public void OnStartupComplete(ref Array custom)
{
IVsTeamFoundationBuild VsTfBuild = (IVsTeamFoundationBuild)_applicationObject.DTE.
GetObject("Microsoft.VisualStudio.TeamFoundation.Build.VsTeamFoundationBuild");
if (VsTfBuild != null)
{
_converter = new MessageConverter();
}
}
And Here is how the Log View looks now:
So, you may be wondering what other build information types already have converters. In the Microsoft.TeamFoundation.Build.Common assembly, you can find a list of all predefined Build Information Types. Just look for the class InformationTypes. Not all of those types, however, have existing converters. This is mostly because we only wanted to show you the important information on the Log View.
Here’s the list of Information Types that do have converters for TFS 2010:
Enjoy!
Source: http://blogs.msdn.com/b/jpricket/archive/2009/12/22/tfs2010-changing-the-way-build-information-is-displayed.aspx
I'm doing some practice with Lists in C# to get more experience, so I thought I could do some statistics with data stored in a database.
My idea:
I have a Sqlite DB with some data about customer calls: who called, when they called, how long the call lasted, and other values that I do not care about. Now I would like to get these values out and calculate how many times every customer called per month and per year. Afterwards, calculate how much time was invested in every customer per month and per year.
How to do it:
The calculation itself is in fact not difficult, and neither is getting everything out of the DB; I did that already. What I want, though, is to create a well-structured list to fill out.
How would you structure your list?
As there are over 200000 rows in the DB I thought it would be best to insert every customer only once in the list and than update the specified fields to increase the desired values. With this in mind I began with creating a class for my list:
public class KdStats { public Int32 customernumber { get; set; } public String customername { get; set; } public Int32 calls { get; set; } public Decimal time { get; set; } public Int32 month { get; set; } public Int32 year { get; set; } }
but I realized soon that this way I cannot make a calculation per month/year. So my first thought was: is there a possibility to define an array within my list's class? This would solve my problem, wouldn't it? I tried to define for example "public Int32[] calls { get; set; }" but afterwards I do not know where to initialize the array (in the class, or in the function where I modify the list's values). I did not find anything while googling...
Do you have other suggestions on how to do something like that? Or would it really be possible to do it with an array in my list?
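For example, something like this is roughly what I imagine, although I have no idea whether it is a good approach:
public class KdStats
{
    public Int32 customernumber { get; set; }
    public String customername { get; set; }
    public Int32 year { get; set; }

    // one slot per month: index 0 = January ... index 11 = December
    public Int32[] callsPerMonth { get; set; }
    public Decimal[] timePerMonth { get; set; }

    public KdStats()
    {
        // initialize the arrays once, when the object is created
        callsPerMonth = new Int32[12];
        timePerMonth = new Decimal[12];
    }
}

// while reading the DB rows (month is 1-12):
// stats.callsPerMonth[month - 1]++;
// stats.timePerMonth[month - 1] += callDuration;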
This post has been edited by Anthonidas: 19 January 2013 - 06:38 AM
Source: http://www.dreamincode.net/forums/topic/308010-best-way-to-create-statistics-out-of-a-db-with-listst/page__p__1786410
This post is about my own spin-off from the first project in
9 Projects you can do to become a Frontend Master in 2020
The fact that the challenge involves using the new hooks feature in React interests me particularly, because I recently revisited React, after I learnt it one year and a half ago, then I left frontend completely all this time. The base for my spin off comes from Samuel Omole’s tutorial at freecodecamp.org
The Header and Movie components are almost identical to the ones Samuel wrote, but I took a few liberties in the Search and App components. Also, I prefer .jsx extensions for components and .js for plain old JavaScript code. However, I encourage you to read his tutorial because he goes deeper into the logic behind the app and explains how the hooks work; it is not my goal to repeat him.
What I do different?
- OMDB API key is not exposed in the React app
- A OpenFaaS server-less function acts as proxy to OMDB
- keywords for narrowing the search and limiting the number of results (a sketch of the parsing idea follows this list):
- title:“movie title”
- type:movie, series, episode
- limit:12
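A simplified sketch of the idea behind that keyword parsing (an illustration only, not the exact code used in the project):
// hypothetical helper: turns 'title:"blade runner" type:movie limit:12' into request parameters
function parseSearchKeywords(input) {
  const params = { title: "", type: "", limit: 10 };
  const pattern = /(title|type|limit):("[^"]*"|\S+)/g;
  let match;
  while ((match = pattern.exec(input)) !== null) {
    const key = match[1];
    const value = match[2].replace(/"/g, "");
    params[key] = key === "limit" ? parseInt(value, 10) : value;
  }
  return params;
}

parseSearchKeywords('title:"blade runner" type:movie limit:12');
// -> { title: "blade runner", type: "movie", limit: 12 }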
create-react-app is the tool I am more used to bootstrap React projects, honestly I haven’t tried many others besides Gatsby. I became a patron at CodeSandbox so why not use it for this project? The
codesandbox cli made a breeze to export the project I had already bootstrapped from my laptop to the web editor. You can also start your project directly in CodeSandbox and then export it to a Github repository and or publish it to Netlify. Go and check them out!
npx create-react-app avocado npm install -g codesandbox cd avocado codesandbox ./
The component components/Search.jsx introduces Hooks, the new React feature that allows handling state in functional components. I really do not like the idea of classes in JavaScript, and since I am not a long-time React developer I very much welcome Hooks.
import React, { useState } from "react"; const Search = props => { const [searchValue, setSearchValue] = useState(""); //...
In the Search component, the useState hook takes the initial state as a parameter and returns an array with the current state (similar to this.state, but it does not merge old and new states together) and a function to update the state, setSearchValue in this case, which plays the role of this.setState.
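To make that flow concrete, here is a rough TypeScript rendering of such a Search component. The prop name search matches the function App passes down later in this post, but the handler names and the reset-after-submit behaviour are assumptions, not code lifted from the original app:

import React, { useState } from "react";

type SearchProps = {
  // assumed contract: the parent passes down the function that queries the API
  search: (value: string) => void;
};

const Search = ({ search }: SearchProps) => {
  const [searchValue, setSearchValue] = useState("");

  const handleChange = (e: React.ChangeEvent<HTMLInputElement>) => {
    setSearchValue(e.target.value); // updating state triggers a re-render
  };

  const handleSubmit = (e: React.FormEvent<HTMLFormElement>) => {
    e.preventDefault();    // keep the browser from reloading the page
    search(searchValue);   // hand the query up to the parent component
    setSearchValue("");    // clear the box afterwards (an assumption)
  };

  return (
    <form className="search" onSubmit={handleSubmit}>
      <input value={searchValue} onChange={handleChange} type="text" />
      <button type="submit">SEARCH</button>
    </form>
  );
};

export default Search;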
The App component features useEffect, which lets you perform side effects in your components, like fetching data or changing the DOM after render; it runs after every render of the component, and in this case it only logs to the console. The other hook is useReducer, which works much like a Redux reducer: you give it a reducer function (which takes the current state and an action) plus an initial state, and it returns the current state together with a dispatch method.
//...
const initialState = { loading: false, movies: [], errorMessage: null };

const reducer = (state, action) => {
  switch (action.type) {
    case "SEARCH_MOVIES_REQUEST":
      return { ...state, loading: true, errorMessage: null };
    // ..
  }
};

// const buildRequestBody = function(v) { ...

const App = function() {
  const [state, dispatch] = useReducer(reducer, initialState);

  useEffect(() => {
    console.log("I am a side effect!");
  });

  const search = searchValue => {
    dispatch({ type: "SEARCH_MOVIES_REQUEST" });
    let body = buildRequestBody(searchValue);
    fetch(MOVIE_API_URL, {
      method: "POST",
      body: body,
      headers: { "Content-Type": "application/json" }
    })
    //...
There are a couple of unbreakable rules for using hooks in React: only call hooks at the top level of function components; do not call hooks inside loops, conditions, or nested functions. And only call hooks from React function components or from custom hooks; do not call hooks from regular JavaScript functions.
Of course, you can define your own custom hooks if what ships by default does not fit; read more about Hooks in the React docs.
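As an illustration of that idea (this hook is not part of the movie app, just a hypothetical example), a debounced value can be wrapped in a custom hook so a search only fires once the user pauses typing:

import { useEffect, useState } from "react";

// Hypothetical custom hook: returns `value`, but only after it has stopped
// changing for `delayMs` milliseconds. It follows the rules above: it is
// called at the top level and simply composes the built-in hooks.
function useDebouncedValue<T>(value: T, delayMs: number): T {
  const [debounced, setDebounced] = useState(value);

  useEffect(() => {
    const id = setTimeout(() => setDebounced(value), delayMs);
    return () => clearTimeout(id); // cleanup when value changes or on unmount
  }, [value, delayMs]);

  return debounced;
}

export default useDebouncedValue;

Usage would be a single line inside a component, for example const debouncedSearch = useDebouncedValue(searchValue, 300), and the effect that queries the API would then depend on debouncedSearch instead of searchValue.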
You can try and explore the code for the working movie app. Beware! Since this is a demo, the server-less function behind it is not set up to scale, and any moderate load of requests may take it down temporarily.
|
https://dev.to/celisdelafuente/path-to-frontend-master-i-4h8e
|
CC-MAIN-2020-50
|
refinedweb
| 655
| 62.27
|
For my entry in the WinPHP Challenge I need to use some .NET assemblies I wrote a while ago. It wasn't clear to me how this can be done, so here's an example of how to do it. In short: first we create an assembly in Visual Studio, then we sign it, add it to the Global Assembly Cache (GAC), and access it from there using PHP.
Inside Visual Studio, create a new project. For the purposes of this explanation I named the project DotNetTest.
Add the following method to the newly created Class1 class. Make sure the class is declared public.
public class Class1
{
    public string SayHello()
    {
        return "Hello from .NET";
    }
}
Now, go ahead and see if this compiles by clicking Build->Build Solution in the menu, or by hitting Ctrl+Alt+B on your keyboard. It may come as no surprise that it does… ;)
To be able to use it through the GAC, the assembly has to be signed. Signing can be done in Visual Studio. Go to the project properties by right-clicking the project and clicking Properties all the way at the bottom. Go to the Signing tab. Check the Sign the assembly checkbox and select <New…> from the dropdown list beneath it. Give the key file a nice name, like DotNetTestKey, and uncheck Protect my key file with a password, because security isn't an issue in this demo. Click OK to close the window and finish the signing.
Because PHP uses COM, even for .NET, we have to make sure the assembly is COM-visible. Go to the Application tab in the Properties window and click the Assembly Information button. Check the box next to Make assembly COM-Visible and click OK.
Build the solution again to make sure the assembly we’re about to add to the GAC is signed and configured correctly.
Next, we have to register the assembly in the GAC. The easiest way to do this is through the command line. Visual Studio provides a shortcut to the command prompt: right-click the project in the Solution Explorer and click Open Command Prompt. Go to the debug folder by typing cd bin/debug. Enter the following command to install the assembly into the GAC:
gacutil -i DotNetTest.dll
The utility should tell you that the assembly has successfully been added to the cache.
Lastly, open your favorite PHP editor and create a new PHP file with the code below.
<?php
$class1 = new DOTNET("DotNetTest,"
."Version=1.0.0.0,"
."Culture=neutral,"
."PublicKeyToken=????????????????"
,"DotNetTest.Class1");
echo($class1->SayHello());
?>
The DOTNET class instantiates a class from a .NET assembly. The full assembly name must be provided to the constructor in place of the '?' characters in the example. It can be found by browsing to C:\Windows\Assembly in Windows Explorer or by opening the .dll file in a tool like Reflector.
Place the file in a location accessible through the browser and go there. If everything went well, it should say "Hello from .NET".
|
http://geekswithblogs.net/tkokke/archive/2009/04/24/how-to-use-.net-assemblies-in-php.aspx
|
CC-MAIN-2015-22
|
refinedweb
| 510
| 75.4
|
Hi all,
I've been using the spark parser generator from jython (2.2a_something if that
matters) - no problems at all. Spark uses docstrings to annotate the semantic
actions with the grammar rules these apply to - like this:
def p_expr(self, args):
    """
    expr := expr or aexpr
    expr := aexpr
    """
    if len(args) == 3:
        return Or(args[0], args[1])
    return args[0]
Consider my surprise when I found out that jythonc-ing my whole app made the
docstrings disappear.
I can (and will for now) work around that - but I wonder why that is the case?
I frequently use docstrings for meta-information, at least in pre-2.4-python.
Regards,
Diez
|
http://sourceforge.net/p/jython/mailman/message/12806684/
|
CC-MAIN-2014-49
|
refinedweb
| 113
| 68.1
|