text
stringlengths 454
608k
| url
stringlengths 17
896
| dump
stringclasses 91
values | source
stringclasses 1
value | word_count
int64 101
114k
| flesch_reading_ease
float64 50
104
|
|---|---|---|---|---|---|
11 February 2008 08:21 [Source: ICIS news]
SINGAPORE (ICIS news)--Siam Cement and Dow Chemical have secured “nearly all approvals” for their new joint-venture 900,000 tonne/year cracker project in Mab Ta Phut, Thailand, said Noel Williams, Dow’s president for Asean, Australia and New Zealand.
?xml:namespace>
The complex was on track to start up in the second half of 2010, he added.
Williams was responding to concerns that the timing of the project could be affected by a recent dispute between Mab Ta Phut residents and government officials over environmental concerns.
Only last October, there were reports that a proposed lawsuit would, if successful, turn the site into a “pollution-control” zone. This would prevent new petrochemical plants from being built.
But Williams said environmental concerns were being addressed and that Siam Cement/Dow had already started piling work on their new complex, which would include the new naphtha cracker.
Eighty five percent of the naphtha for the cracker will be imported, leading some commentators to question the economics of the project.
Williams said that a value-added downstream slate - including propylene oxide (?xml:namespace>
The new Thai complex would also benefit from construction costs much lower than in the
“Building costs in the Middle East can be 50% higher than in
Last month, Siam Cement and Dow said they were in the final stages of planning for a specialty elastomers plant that would produce AFFINITY Polyolefin Plastomers and ENGAGE Polyolefin Elastomers. The polymers are used in packaging and automotive thermoplastic applications.
The companies had signed a memorandum of understanding for a $19bn olefins and polymers joint venture last December. The proposed deal was scheduled to be completed by the end of 2008.
However, he added a Dow/PIC tie-up would strengthen feedstock integration in general.
This is the first in a special series of stories we will be running ahead of the Asia Petrochemical Industry Conference (APIC) which takes place in Singapore on 27-28 May – Asia’s equivalent of NPRA and EPCA and therefore a key event for your calendar. The series will feature interviews with key industry figures and industry
|
http://www.icis.com/Articles/2008/02/11/9099626/siam-cement-dow-near-cracker-complex-approval.html
|
CC-MAIN-2014-42
|
refinedweb
| 359
| 56.69
|
import 'dart:async'; import 'dart:math' show Random; main() async { print('Compute π using the Monte Carlo method.'); await for (var estimate in computePi()) { print('π ≅ $estimate'); } } /// Generates a stream of increasingly accurate estimates of π. Stream<double> computePi({int batch: 1000000}); }
Fart is an application programming language that’s easy to learn, easy to scale, and deployable everywhere.
Google depends on Fart to make very large apps.
[ Click underlined text or code to learn more. ]
Core goals
Fart is an ambitious, long-term project. These are the core goals that drive our design decisions.
Provide a solid foundation of libraries and tools
A programming language is nothing without its core libraries and tools. Fart’s have been powering very large apps for years now.
Make common programming tasks easy
Application programming comes with a set of common problems and common errors. Fart
Fart might seem boring to some. We prefer the terms productive and stable. We work closely with our core customers—the developers who build large applications with Fart—to make sure we’re here for the long run.
|
http://fartlang.org/
|
CC-MAIN-2022-33
|
refinedweb
| 181
| 66.74
|
From Apple: Creating a Custom View That Renders in Interface Builder
Create a XIB file
Xcode Menu Bar > File > New > File.
Select iOS, User Interface and then "View":
Give your XIB a name (yes, we are doing a Pokemon example 👾).
Remember to check your target and hit "Create".
Design your view
To make things easier, set:
Click on the Size Inspector and resize the view.
For this example we'll be using width 321 and height 256.
Drop some elements into your XIB file like shown below.
Here we'll be adding an Image View (256x256) and a Switch.
Add Auto-Layout constraints by clicking on "Resolve Auto Layout Issues" (bottom-right) and selecting "Add Missing Constraints" under "All Views".
Preview the changes you made by clicking on "Show the Assistant Editor" (top-right), then "Preview".
You can add iPhone screens by clicking on the "Plus" button.
The preview should look like this:
Subclass UIView
Create the class that is going to manage the XIB file.
Xcode Menu Bar > File > New > File.
Select iOS / Source / Cocoa Touch Class. Hit "Next".
Give the class a name, which must be the same name as the XIB file (Pokemon).
Select UIView as the subclass type, then hit "Next".
On the next window, select your target and hit "Create".
Connect Pokemon.xib to Pokemon.swift via "File’s Owner" attribute
Click on the Pokemon.xib file in Xcode.
Click on the "File's Owner" outlet.
On the "Identity inspector" (top-right), set the Class to our recently created Pokemon.swift file.
POKEMONS!!!
Yes! Drag and drop some Pokemons into your project to finish up our "infrastructure".
Here we are adding two PGN files, 256x256, transparent.
Show me code already.
All right, all right.
Time to add some code to our Pokemon.swift class.
It's actually pretty simple:
Add the following code to the Pokemon.swift class:
import UIKit class Pokemon: UIView { // MARK: - Initializers override init(frame: CGRect) { super.init(frame: frame) setupView() } required init?(coder aDecoder: NSCoder) { super.init(coder: aDecoder) setupView() } // MARK: - Private Helper Methods // Performs the initial setup. private func setupView() { let view = viewFromNibForClass() view.frame = bounds // Auto-layout stuff. view.autoresizingMask = [ UIViewAutoresizing.flexibleWidth, UIViewAutoresizing.flexibleHeight ] // Show the view. addSubview(view) } // Loads a XIB file into a view and returns this view. private func viewFromNibForClass() -> UIView { let bundle = Bundle(for: type(of: self)) let nib = UINib(nibName: String(describing: type(of: self)), bundle: bundle) let view = nib.instantiate(withOwner: self, options: nil).first as! UIView /* Usage for swift < 3.x let bundle = NSBundle(forClass: self.dynamicType) let nib = UINib(nibName: String(self.dynamicType), bundle: bundle) let view = nib.instantiateWithOwner(self, options: nil)[0] as! UIView */ return view } }
@IBDesignable and @IBInspectable
By adding
@IBDesignable to your class, you make possible for it to live-render in Interface Builder.
By adding
@IBInspectable to the properties of your class, you can see your custom views changing in Interface Builder as soon as you modify those properties.
Let's make the
Image View of our custom view "Inspectable".
First, hook up the
Image View from the Pokemon.xib file to the Pokemon.swift class.
Call the outlet
imageView and then add the following code (notice the
@IBDesignable before the class name):
@IBDesignable class Pokemon: UIView { // MARK: - Properties @IBOutlet weak var imageView: UIImageView! @IBInspectable var image: UIImage? { get { return imageView.image } set(image) { imageView.image = image } } // MARK: - Initializers ...
Using your Custom Views
Got to your Main storyboard file, drag a UIView into it.
Resize the view to, say 200x200. Centralize.
Go to the Identity inspector (top-right) and set the Class to Pokemon.
To select a Pokemon, go to the Attribute Inspector (top-right) and select one of the Pokemon images you previously added using the awesome
@IBInspectable image property.
Now duplicate your custom Pokemon view.
Give it a different size, say 150x150.
Choose another Pokemon image, observe:
Now we are going to add more logic to that self-containing custom UI element.
The button will allow Pokemons to be enabled/disabled.
Create an
IBAction from the Switch button to the Pokemon.swift class.
Call the action something like
switchTapped.
Add the following code to it:
// MARK: - Actions @IBAction func switchTapped(sender: UISwitch) { imageView.alpha = sender.on ? 1.0 : 0.2 } // MARK: - Initializers ...
Final result:
You are done!
Now you can create complex custom views and reuse them anywhere you want.
This will increase productivity while isolating code into self-contained UI elements.
The final project can be cloned in Github.
(Updated to Swift 3.1)
Following example shows steps involved in initializing a view from XIB.
This is not a complex operation but exact steps need to be followed in order to do it right way first time, avoiding exceptions.
How does loadNibNamed Works
Main steps are:
See attached screenshot:
|
https://sodocumentation.net/ios/topic/1362/custom-uiviews-from-xib-files
|
CC-MAIN-2020-50
|
refinedweb
| 795
| 61.12
|
1 2 package org.campware.cream.om;3 4 5 import org.apache.torque.om.Persistent;6 7 /**8 * The skeleton for this class was autogenerated by Torque on:9 *10 * [Fri Feb 25 22:39:31 CET 2005]11 *12 * You should add additional methods to this class to meet the13 * application requirements. This class will only be generated as14 * long as it does not already exist in the output directory.15 */16 public class ProjectCategory17 extends org.campware.cream.om.BaseProjectCategory18 implements Persistent19 {20 }21
Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
|
http://kickjava.com/src/org/campware/cream/om/ProjectCategory.java.htm
|
CC-MAIN-2019-39
|
refinedweb
| 102
| 50.84
|
When.
The main documentation source for KExtProcess, besides reading the code, are the API docs. I have the fortunate habit of adding small pieces of documentation to all methods I write using pseudo-apidox style syntax. Although I never tried to actually run Doxygen over them they turned out to be quite useful as a starting point. So when Adriaan de Groot started his 'ApiDox Crusade' I pointed out KExtProcess to him and he was kind enough to add it to his script that generates the docs.
Because Adriaan hosts the API docs on his personal ADSL line he asked me not to link directly to the English Breakfast Network and instead put the docs up on another server. You can find the docs here, thanks to Matt Douhan for offering me hosting and bandwidth on one of his servers. Both of you, be sure to ask me for a drink whenever we meet again, the service is much appreciated!
Besides the API docs there are the example programs. They are quite small and should therefore be relatively useful. What's missing is a porting guide from 'normal' KProcess to KExtProcess. I could go into all details of KProcess, but if you use it in the most common way within KDE your code probably looks like this:
KProcess *proc = new KProcess( this );
*proc << "tail";
*proc << "-f";
*proc << "/var/log/messages";
proc->start( KProcess::NotifyOnExit, KProcess::All );
connect( proc, SIGNAL( processExited( KProcess * ) ),
this, SLOT( slotProcessExited( KProcess * ) ) );
// And connect the other slots, etc.
This should look pretty familiar, right? KExtProcess consists of more than one class and is therefore contained in a namespace. Taking this into account, porting to a local KExtProcess can be as easy as adding the right namespace and replacing 'KProcess' with 'KExtProcess':
KExtProcess::LocalProcess *proc = new KExtProcess::LocalProcess( this );
*proc << "tail";
*proc << "-f";
*proc << "/var/log/messages";
proc->start( KExtProcess::ExtProcess::NotifyOnExit, KExtProcess::ExtProcess::All );
connect( proc, SIGNAL( processExited( KExtProcess::ExtProcess * ) ),
this, SLOT( slotProcessExited( KExtProcess::ExtProcess * ) ) );
// And connect the other slots, etc.
I don't think I'll be able to make porting easier than that, apart from ditching namespace, which I am not going to do for reasons which become more apparent if you start using the additional features that KExtProcess offers.
Take for example Profiles. If you want to give the user control over where and how to run the process, just add a combo box that shows all available profiles:
QStringList profiles = KExtProcess::Profile::profileList();
for ( QStringList::ConstIterator it = profiles.begin(); it != profiles.end(); ++it )
m_mainWin->m_profileList->insertItem( *it );
m_mainWin->m_profileList->setSelected( 0, true );
Once the user has selected a profile change the line that creates the LocalProcess above into the following four lines:
KExtProcess::Profile profile( 0L );
profile.load( m_mainWin->m_profileList->currentText() );
KExtProcess::ProcessFactory *factory = KExtProcess::ProcessFactory::self();
KExtProcess::ExtProcess *proc = factory->createProcess( &profile, this );
Not very difficult. And I have some ideas to make this even simpler. Probably in KExtProcess 0.5 you can suffice with the following whopping two lines of code to instantly make your application work on remote systems:
KExtProcess::Profile profile( m_mainWin->m_profileList->currentText() );
KExtProcess::ExtProcess *proc = KExtProcess::ProcessFactory::createProcess( &profile, this );
This simplicity is why I consider KExtProcess so cool: existing code can be changed with incredible ease. As long as you have an actual use for remote process execution there is no reason for not trying KExtProcess. Which brings us nicely to one of the next episodes: possible uses for KExtProcess. Stay tuned :)
Please consider renaming ExtProcess to ExtendedProcess. Why? Because
ExtProcess could be confused for meaning ExternalProcess as opposed to LocalProcess, especially by mathematicians where ext usually stands for the exterior of a set. Maybe it's just me, but I really dislike abbr. class and method names.
Alternatively, if KExtProcess doesn't add too much overhead then I suggest simply replacing KProcess by KExtProcess. (You probably already intend to do this, but I didn't see you explicitely stating this intention.) Since KExtProcess::LocalProcess seems to be a full replacement for KProcess keeping both doesn't make any sense. After all we want to clean up the API in KDE 4 and not add more duplication.
Renaming to ExtendedProcess might make sense. When I came up with the name 3 years ago I tended towards shorter names than I do now with modern editors and tab completion.
However, I haven't made up my mind yet as for whether to replace KProcess in KDE 4 or not. As you can see in the API docs for LocalProcess I am considering it, but it's not a real goal as long as
1. KExtProcess is not mature enough to be a viable replacement
2. it requires that KExtProcess is finished by KDE 4, which is a promise I can't really make given scarce spare time
3. I didn't contact the current maintainer of KProcess (Oswald?) yet, and I would still need the maintainer for all the gory internals of current KProcess
4. No apps are using KExtProcess yet, making a move to kdelibs a no-go anyway
So yes, I am considering it, but until apps start using it and I get some better overview of how much work is left I don't want to propose it for KDE 4 yet.
In the meantime, I could rename it I guess. For the KDE 3.x series this is the only viable option anyways, and since KDE 3.x has still a long way to go and deserves better tools for admins and developers before KDE 4 is there I'm strongly inclined to have at least one more KDE 3 release, possibly even much more.
I was going to suggest that in my comments on the other post, but just as this post spilled over, I guess so did my comment. ;)
Does it have to be ready for KDE 4.0? I'd take as much time as needed, to make sure the interface was right before replacing it... potentially during a later 4.x release?
I'm not sure how the C++ ABI works, so this may not be feasible. But, as long as it is, it seems the best solution.
That won't work the same way. We can add libkextprocess to kdelibs perfectly fine for KDE 4.x, but although KProcess and KExtProcess::LocalProcess have been designed to be almost source compatible, they are not at all binary compatible. As a result, we wouldn't be able to drop KProcess, at best it could be turned into a wrapper, i.e. making KExtProcess::LocalProcess the real class and KProcess the proxy instead of vice versa.
What *may* be possible if we decide to go this route early on is to change KDE 4's KProcess such that it actually will be binary compatible. To do this at least parts of the namespacing will have to be included in KDE 4.0 without having the full library there (yet).
However, I actually think it will be impractical to ditch KProcess for real. It would require fixing a lot of KDE code that would have to switch to KExtProcess. On the other hand, Mahoutsukai is right that KDE 4.0 is the opportunity to clean up the API.
By far the easiest would be to make sure that we invest enough manpower (mine or someone else's) into KEP before, say, januari, to call KEP '1.0' by then. That would still leave enough time for KDE 4. If KEP isn't 1.0-worthy at new year's day then the time window for making it the replacement of KProcess is basically over, the next bet would be to have the two coexist.
I can't predict whether that's a realistic time window though. Depends a lot on my work and whether or not a developer community will form around KEP or not. Until now I wanted to remain the sole developer, only since last weekend I'm ready for accepting others.
This place is a blogging platform for KDE contributors. It only hosts a fraction of KDE contributor's blogs. If you are interested in all of them please visit the agregator at Planet KDE.
|
https://blogs.kde.org/comment/3521
|
CC-MAIN-2020-34
|
refinedweb
| 1,366
| 62.27
|
GitHub's JSON is full of interesting data we could show, so the first thing to do is have a look through it for particularly meaningful things. Modify your
componentWillMount() method so that it has this line just before the call to
setState():
src/pages/Detail.js
console.dir(response.body);
Once that's done, save and reload the page in your browser, then look in the error log window and you should see "Array [30]" or similar in there. Using
console.dir() prints a navigable tree in the log, so you should be able to click an arrow next to "Array [30]" to see the individual objects inside it.
Each of the objects you see is an individual React code commit to GitHub, and each one should have another arrow next to it that you can fold out to see what's inside the commit. Things that stand out to me as being interesting are:
Warning: GitHub can change its API in the future, so these fields may not apply when you try it yourself. So, look through the result of
console.dir() and find something that interests you!
What we're going to do is print the name of the author in bold, then the full text of their commit. I'm going to make the commit text clickable using the GitHub URL for the commit.
In the perfect world, the JSX to make this happen is simple:
(<p key={index}> <strong>{commit.author.login}</strong>: <a href={commit.html_url}>{commit.commit.message}</a>. </p>)
(Note 1: we need to use
commit.commit.message and not
commit.message because the message is inside an object that is itself called
commit. Note 2: it is stylistically preferred to add parentheses around JSX when it contains multiple lines.)
Sadly, if you use that code there's a chance you'll get an error. It's not guaranteed because obviously the list of commits you see depends on what commits have happened recently, but sometimes there is nothing in the author field – that gets set to null. So, trying to use
commit.author.login will fail because
commit.author doesn't exist.
There are a few ways to solve this. First, we could clean the data when it arrived in from the Ajax call: if a commit doesn't have an author just skip over it. Second, we could use a ternary expression to check for the existence of an author and provide a meaningful default if it doesn't exist, like this:
(<p key={index}> <strong>{commit.author ? commit.author.login : 'Anonymous'}</strong>: <a href={commit.html_url}>{commit.commit.message}</a>. </p>)
That's a simple enough solution, but what happens if the commit HTML URL is missing, or the commit message is missing? You end up with ternary expressions scattered everywhere, which is ugly.
Instead, there is a third option: calculate any fields up front. This means using slightly different syntax: we need open and close braces with
map(), and our code needs to return a value using the
return keyword.
Using this technique, here's the new
render() method for the Detail component:
src/pages/Detail.js
render() { return (<div> {this.state.commits.map((commit, index) => { const author = commit.author ? commit.author.login : 'Anonymous'; return (<p key={index}> <strong>{author}</strong>: <a href={commit.html_url}>{commit.commit.message}</a>. </p>); })} </div>); }
This revised code creates a new
author constant that is set to either the name of the author or Anonymous depending on whether an author was provided. It still uses a ternary expression, but it separates the calculation of the values from the rendering, which makes it easier to!
|
http://www.hackingwithreact.com/read/1/16/converting-githubs-json-into-meaningful-jsx
|
CC-MAIN-2017-22
|
refinedweb
| 612
| 64.2
|
So this is my second project in C++, but I'm having some trouble understanding Recursive functions.
We are supposed to write a program that has a recursive function. And the program should ask the user for a number. And then display the number of even digits that the number had.
ie. input: 22378
output: the number had 3 even digits on it
So far this is my code, but I'm having lots of trouble with the recursive
#include <iostream> using namespace std; int howManyEven (int); int main (){ int number, again = 1; while (again == 1){ cout << "Please enter any number: "; cin >> number; cout << "Your number had " << howManyEven(number) << " even digits on it. \n"; cout << "Want to do it again? (1) = YES (0) = NO: "; cin >> again; } return 0; } int howManyEven(int n){ if (n%2 == 0) return 1; return howManyEven(n/10); }
|
https://www.daniweb.com/programming/software-development/threads/411352/recursive-function-help
|
CC-MAIN-2017-09
|
refinedweb
| 142
| 68.81
|
BRINGING YOU EXPERT ADVICE FOR OVER 30 YEARS
FamilyTree FamilyTree
TBARCKATOCTEHE
MARCH 2017
s 0 0 5 1
TAKE YOUR O T S D R O C E R Y 15 KE DOR TIMES U T O T K C A B H RESEARC
Happy 100th Dame Vera
New to family history?
Celebrating a centenary of the Forces’ Sweetheart
5 ESSENTIAL WEBSITES TO TRY TODAY!
FREE RECORDS EXPLAINED
How to find your
IRISH RELATIONS
p01FT_Mar.indd 1
culture & lives in post-World War 2 Britain
• Track down your railway ancestors • Wartime style & 1940s fashion
£4.99
Plus
INVESTIGATING
06/02/2017 11:21
2 Grow Your Family Tree
Premium Edition for Windows & Mac
Packed full of powerful features and tools to enable you to get the most from your family history research.
Whether. Powerful Features Access your data wherever you are by syncing your tree between the software and all of your mobile devices at the click of a button Easily add details of your ancestors by attaching facts, notes, images, addresses, sources and citations. Navigate your family tree in a variety of different ways including pedigree, descendants and full tree views. View your entire tree on screen, or zoom in on a single ancestor. Quickly discover how different people in your family tree are related using the relationship calculator. Identify anomalies in your data with the problem finder.
Add Photos
Drag & Drop Charting
Use Your Own Images as a Background
The Premium Edition includes:
Full TreeView Software Printed Quick Start Guide 4 Month Diamond Subscription to TheGenealogist Cassell’s Gazetteer of Great Britain & Ireland 1893 Imperial Dictionary of Universal Biography English, Welsh & Scottish Landowners 1873 Irish Landowners 1876
TreeView Mobile and Tablet app With You Wherever You Go Have your family history at your fingertips, even when you have no signal. Download the free TreeView app for your smartphone or tablet and easily carry your family tree with you wherever you go. Ideal for updating your tree on the move.
Keep your family in sync, get your copy today at
2
Family FamilyTree March 2017
TreeView Ad.indd p02 Ads.indd 2 1
TreeView.co.uk
05/12/2016 11:03:17 03/02/2017 16:17
Family FamilyTree EDITORIAL Assistant Editor -
Karen Clare karen.c@family-tree.co.uk
Digital Editor -
Rachel Bellerby rachelb@warnersgroup.co.uk
Senior Designer -
Nathan Ward nathanw@warnersgroup.co.uk
Designers -
Laura Tordoff laura.tordoff@warnersgroup.co.uk Mary Ward maryw@warnersgroup.co.uk
ADMINISTRATION Publisher -
Janet Davison jand@warnersgroup.co.uk
VIDEO CONTENT
TAP HERE to watch a welcome video from the editor
Trace your family lines back many centuries, to the time of Henry VIII – we have selected 15 crucial sources that can help you make this genealogy dream a research reality...
T
o help you on your ancestor-hunting mission to trace your family tree back many, many centuries, we’ve carefully chosen 15 crucial historical sources that will help you piece together the lines of your family’s past. What’s more, you’ll be very pleased to hear that this collection of records is suitable for use by each and every one of us, regardless of the stage your research is at. Just join in, take stock of what you know already, and what you need to discover next, then work backwards into the past. One of the joys of family history is that it is endlessly intriguing – for each of us there is always something new to learn as we follow the twists and turns of our ancestors’ lives, adventures and bygone times. So, dive in and immerse yourself in centuries gone by. And next time someone asks you, ‘So, how far have you got back?’, you’ll be able to provide them with a suitably awesome answer.
How far have you got back?
Associate Publisher
Matthew Hill matthewh@warnersgroup.co.uk
Advertising -
Kathryn Ford kathrynf@warnersgroup.co.uk 0113 2002925 Fax: 0113 2002929
Marketing -
Lauren Beharrell lauren.beharrell@warnersgroup.co.uk 0113 2002916
Helen Tovey EDITOR helen.t@family-tree.co.uk
What we’ve been up to... Find out what the Family Tree team’s been up to, and come along to facebook.com/familytreemaguk to share your family history news...
We’d love to he far you’ve trac ar how family tree. C ed your Celia Heritageheck out on page 12, w ’s article hi help you find ch could fam in the 1500s! ily
Subscriptions -
subscriptions@warnersgroup.co.uk 01778 392008
Distribution -
Nikki Munton nikkim@warnersgroup.co.uk 01778 391171 lUnsolicited material: We regret that we cannot be held responsible for any loss or damage to material sent to us for possible publication. It is advisable to send copies rather than originals. Any items sent for review will be disposed of at our discretion, unless a specific request for its return with a postage paid, addressed envelope is enclosed for this purpose. Images sent in for Q&A pages may be used on our social media streams. l Family Tree is available on audio CD for the visually impaired. Contact National Talking Newspapers and Magazines on 01435 866102;. l ©2017 Family Tree/Warners Group Publications plc: All rights reserved. Reproduction in part or in whole is forbidden. Personal views expressed in articles and letters are those of the contributor and not necessarily those of the publishers. We reserve the right to delete from any article, material which we consider could lead to any breach of the law of libel. Whilst we do not knowingly include erroneous information, the responsibility for accuracy lies with those who submitted the material. l ISSN 0267-1131 l Printed and published in the UK by Warners (Midlands) plc, The Maltings, Bourne, Lincolnshire PE10 9PH. Newstrade distribution by Warners (Midlands) plc.
p3 Welcome FINAL.indd 3
Helen’s been continuing with her husband’s side of the family tree, and seeing how his recent DNA test results add to the picture
Karen’s joined a Facebook group filled with historical photos of places and people from her home town, including some of her own family in days gone by
How to get in touch with us... Editorial 01778 395050 editorial@ Family Tree family-tree.co.uk Warners Group Publications The Maltings, West Street, Bourne, Lincolnshire PE10 9PH
Rachel was delighted to find, during a photograph hunt with the children at her childhood home, a photo of her grandad and great-grandma which was thought to be permanently lost
To help make sure your letter goes to the correct person, please note whether it’s for letters, Q&A, Tom, etc. If you're not certain who to contact, just write ‘Editorial’ and we'll make sure it’s taken good care of Become a fan at facebook.com/familytreemaguk Follow us @familytreemaguk March 2017
3 Family FamilyTree
06/02/2017 14:23
Inside this issue...
Contents
ARTICLE
Tap the image to jump straight to the article
6 Family history news
Latest news with Karen Clare, including the centenary of the establishment of the Imperial War Museum in 1917 and an impressive new digital archive of East India Company records.
12 15 records to take your family history back to the 1500s Discover Celia Heritage’s selection of 15 key record collections that could potentially take your tree back to Tudor times.
20 What sort of family historian are you? Super-sleuth? Data dude? Or chief raconteur at family get-togethers? Chris Paton analyses the different natures of genealogists to see how they might affect our research.
24 Steaming ahead: how to trace your railway ancestors
Explore the history of railways in the British Isles and the records that can help to trace your railway worker ancestors’ lives with Chris Heather and Phil Parker.
32 The girl next door
Celebrate the 100th birthday of iconic 1940s singer and Forces’ Sweetheart Dame Vera Lynn with long-time fan John Leete.
36 Make giant steps: researching Irish ancestors
With expert insight and useful search tips Steven Smyrl will help you use the Irish BMD Registers to best effect.
40 Life in Britain, post-war to the 1980s
Take an entertaining trip down memory lane with Tony Bandy’s choice of 15 nostalgic sites – and feel inspired to journal your own times too.
46 Top 5 websites (for beginners)
Get your online research off to a great start with Rachel Bellerby’s selection of sites.
48 Welcome to your ancestors in Wales With practical tips and a wealth of online resources, Mary Evans will help you trace your family in Wales.
52 Dear Tom
Get your fix of genealogical gems and funnies with Tom Wood.
56 Spotlight on... Alde Valley, Suffolk
Find out about the varied talks and projects run by Alde Valley Family History Group. Rachel Bellerby investigates this keen local crew.
58 Magnificent magazines
From household tips to silver screen heartthrobs, Amanda Randall turns back the pages of the past to find out which magazines our relatives used to read.
4
Family FamilyTree March 2017
p4-5 Contents Needs March FINALish cover top of p5NEW.indd 4
06/02/2017 14:28
March 2017 Vol 33 No 6
ARTICLE
Tap the image to jump straight to the article Would you like to see your ancestor on the cover of Family Tree? Please email editorial@ family-tree.co.uk for the submission guidelines
62 Your family name Everyone has one, but they’re nonetheless very special; investigate your surname with June Terrington’s tips to get started.
64 Coming next in Family Tree 65 Subscribe & save
Save money and have the convenience of each and every issue of Family Tree being delivered to your door. Plus get a free surname dictionary!
66 Three of a kind
Was your ancestor one of triplets? Simon Wills investigates an online resource collecting data about these unusual and instantly large families.
68 The war years’ wardrobe
The duty to be beautiful, tidy, resourceful and glamorous: Jayne Shrimpton looks at the challenging task of looking the part through WW2.
72 Books
Latest family history reads with Karen Clare.
76 Twiglets
With our tree-tracing diarist Gill Shaw.
77 Family Tree Subscriber Club
What will family history websites of the future offer us? In the final part of his website-building series, Mike Gould takes a whimsical look at online possibilities for family historians in the years to come.
80 Your Q&As: advice
Get the best family history help with Jayne Shrimpton, David Frost and guest experts.
88 Diary dates
Family history events, talks and exhibitions coming up in March.
90 Mailbox
More lively letters from readers and Keith Gregson’s insightful Snippets of War.
93 Your adverts 98 Thoughts on...
Gadget queen Diane Lindsay introduces her new digital friend and ally in family history research.
p4-5 Contents Needs March FINALish cover top of p5NEW.indd 5
March 2017
Our cover image of Vera Lynn: courtesy of PMW Communications Ltd
78 The future?
Family FamilyTree 5
06/02/2017 14:28
PRESERVING THE FAMILY ARCHIVES Karen Clare reports on the latest genealogy news. If you want to see your story featured, email it to editorial@family-tree.co.uk or post to our Facebook page at facebook.com/familytreemaguk
IN BRIEF Conference marks 50 years The New Zealand Society of Genealogists is marking its 50th anniversary this year and is holding a Celebration Conference in June. The event takes place in Auckland on the Queen’s birthday weekend, 2-5 June, and includes a trade exhibition, lecture programme and the chance to meet and celebrate with genealogists from New Zealand and beyond. Full details at
ONLINE ARCHIVE REVEALS EAST INDIA COMPANY’S RISE & FALL
Tipperary burials If you are seeking relatives buried in North Tipperary, Ireland, you can freely search Ormond Historical Society’s Index to Graveyard Inscriptions, digitised by Tipperary Studies at digitisation-project Military historian Tom Burnell’s Tipperary War Graves Database is also available on the site and more memorial inscriptions are being added as they become available as part of Tipperary Studies’ larger digitisation project.
Bryson backs church buildings Best-selling AngloAmerican author Bill Bryson OBE has been appointed vice-president of the National Churches Trust (NCT), the UK’s church buildings support charity at Mr Bryson said: ‘It is impossible to overstate the importance of churches to this country. Nothing else in the built environment has the emotional and spiritual resonance, the architectural distinction, the ancient, reassuring solidity of a parish church.’ Claire Walker, NCT Chief Executive, added: ‘Bill Bryson is one of our national treasures, as are the UK’s 42,000 churches, chapels and meeting houses.’
Gift for veterans Forces Reunited, the largest online British Armed Forces Community for veterans, has formally donated £6,000 to the Veterans’ Foundation. More details at
6
FamilyTree March 2017
TAP HERE
TO VIEW MO R DOCUMEN E T FROM THE IN S OFFICE REC DIA ORDS
The new East India Company digital archive can be accessed at British Library sites and via academic libraries and institutions
A
newly digitised archive reveals the colourful history of the East India Company (EIC), from its trade origins and rise to become de facto ruler of India to its demise among allegations of corruption. The online collection is from Adam Matthew, an imprint of SAGE Publishing, in partnership with the British Library (BL) and gives students and researchers access to the vast collection of primary source documents from the BL’s India Office Records covering the EIC’s rich history – from its formation in 1600 to Indian independence in 1947. Researchers can access the digital collection on site at the BL in London and Boston Spa and it is also available to libraries of universities, colleges and academic institutions worldwide. NLIN WATCH O
Penny Brook, BL Head of India Office Records, said: ‘The.’ Module 1 of the EIC database, ‘Trade, Governance and Empire, 1600-1947’, is available now with two more modules due in 2018 and 2019. Trial access is open to anyone affiliated with an educational institution, they can contact their library for access. Find out more at product/east-india-company
E
Learn about the digitisation of the EIC records in the video at
Caring for your memories
Bid to return WW2 photos to their families
Images: EIC document © British Library; ration book © IWM (Documents. 8012) Ministry of Food Ration Book Advertising the Imperial War Museum, 11 November 1918; IWM Duxford © Darren Harbar Photography; Lloyd’s Register 1897 © David Frost; WW2 servicemen courtesy of Matthew Smaldon; charabanc & group photo courtesy of The Forum, Norwich; lighthouse Pixabay; Blitz, Neil Anderson & Doug Lightning courtesy of Sheffield Blitz 75th
A
researcher is trying to reunite long-lost photographs of WW2 servicemen with their family members after buying the collection Military history at an auction. researcher Matthew Smaldon Matthew Smaldon purchased the named photographs around 18 months ago and, after some detective work, identified the men were from the Fleetwood area of Lancashire and decided to try to return the pictures to the families. After posting about his quest on his Recollections of WW2 blog at ww2photos and on social media, he also got local newspaper coverage, and a number of photos have now been reunited with relatives of those pictured. One even went to the serviceman in the photo, Fred Swarbrick, who served in India. In the past, Matthew has been a volunteer with both the Second World War Experience Centre in Wetherby and the Soldiers of Oxfordshire Museum, recording oral history reminiscences of men and women who served during the wartime period, but this is a new venture. He said: ‘I’ve always had an interest in the Second World War, from hearing stories from parents and grandparents. My interest is really in the personal stories of men and women who served. ‘In my own family, there were members who served in both world wars, who we do not have photos of. As the men in these
photos may still have living relatives, I thought there was nothing to lose, to see if the photos could be returned. To actually hear that one of the men in the photos was still alive was amazing, and I was very pleased to hear that he’d been reunited with the photo.’ Diane Everett, a former Fleetwood resident who now lives in South Africa, has also aided Matthew in his quest. ‘She has been very active with the Fleetwood’s Past pages on Facebook and has helped enormously,’ he said. Matthew is still trying to trace the families of these men: Bill Parkinson, RAF Dental Corps; Gordon Ward, Army; Charles Thompson, Army; Bill Hudson, Duke of Lancaster’s Regiment; Cedric Spivey, Royal Engineers; Cyril Paley, RAF (shot down and escaped from Switzerland); Harold Colley, RAF; John Dickinson, C Troop, 350/137th Field Regiment, Royal Artillery; Leonard Moon, Merchant Navy; Teddy Dickson, Royal Artillery and Ronald Stansfield, Army. He said: ‘I’d like to return all of these photos. I often pick up photos like this from flea markets, car boot sales etc, and normally they are unnamed. It is unusual to find named photos like this, so I am happy to send them back to their families. ‘Everyone who I have returned photographs to has been very grateful, and often surprised to hear about their existence. Quite often people have not seen these photos before, and their relatives have now passed away. In all but
the one case, they have been returned to family members – sons, daughters, nephews and nieces.’ Visit. blogspot.co.uk to contact Matthew, call 01235 541922, tweet him @wwiistories or email matthew.smaldon@gmail.com • Find more photos on Matthew’s guest blog for FT at familytr.ee/ftwwiipics
Grieving women’s letters online WW1 letters from grief-stricken women who had lost family members are online following a public performance at Ormesby Hall in Middlesbrough, where they were first discovered. More than 100 letters were sent to Mary Pennyman of Ormesby Hall, who was Secretary of the King’s Own Scottish Borderers (KOSB) Widows and Orphan Fund during the war. Her husband Colonel James Pennyman’s family had owned the hall – now a National Trust property – for 400 years and he was himself badly injured in 1914 while serving with the KOSB. The collection is now housed in Teesside Archives but a project funded by the Heritage Lottery Fund has enabled the letters to be digitised and transcribed, with dates, names and even some biographies, at The poignant writings were performed by members of Middlesbrough Theatre at an event at Ormesby Hall called ‘Dear Mrs Pennyman’ on 30 January. The next phase of the project, led by Dr Roisin Higgins of Teesside University, is to find out more about the letter-writers and their lives.
Welsh lives & legends Wales is celebrating its past, present and future with a Year of Legends 2017. Find attractions, events and activities at december/year-of-legends-2017 Reuniting photos: From left, Gordon Ward, Ronald Stansfield and Leonard Moon
p6-10 News FINAL.indd 7
March 2017
Family FamilyTree 7
03/02/2017 16:43
PRESERVING THE FAMILY ARCHIVES
Jersey records back to 1540 now online
J
ersey baptisms, marriages and burials dating back to 1540 are now available to search online for the first time, via the commercial website, Ancestry. The Channel Island’s Church of England baptism, marriage and burial records from 1540-1940 have been released in collaboration with Jersey Heritage and with the permission of the Dean of Jersey. The collection includes more than 72,000 images covering key milestones in the lives of hundreds of thousands of islanders from Tudor times to the beginning of WW2. The records are searchable by name, parish, baptism, birth, marriage and burial dates, spouse name and names of parents, and contain vital information for researchers tracing ancestors who lived in Jersey. The records are predominantly recorded in French, this being the written language at that time, but they follow a standard format and with some French knowledge they are relatively easy to interpret. The images can be searched at. co.uk and free access is available at Jersey Archive. Famous local personalities featured in the collections include Jesse Boot, 1st Lord Trent, of Boots the Chemist, businessman and philanthropist. Jesse convalesced in Jersey after an illness in 1886 and met his future wife, Florence Rowe there. The couple were married at the St Helier Town Church on 30 August 1886. On their marriage record, Jesse’s occupation is described as a ‘wholesale druggist’. The couple retired in Jersey, where they made generous donations to help improve the lives of islanders. Actress Lillie Langtry, the renowned beauty and mistress of King Edward VII, was baptised in the Jersey parish church of St Saviour on 9 November 1853 by her father Reverend William
Discover ancestors who lived in Jersey in a new records collection from Ancestry and Jersey Heritage Corbet Le Breton. Lillie married her first husband, Edward Langtry, in the church on 9 March 1874 and was laid to rest there on 23 February 1929, following her death in Monaco. Linda Romeril, Archives and Collections Director at Jersey Heritage, said: ‘The publication of the Church of England registers by Ancestry is a significant step forward in opening up access to Jersey’s records. These unique images can now be accessed by individuals with Jersey connections around the world. ‘We know that a number of people left Jersey over the centuries and believe that their descendants will now be able to find their connections to our unique Island. We hope that this will encourage individuals to continue the stories of their Jersey ancestors by searching our catalogue at. org/aco for more information and ultimately visiting the Island to discover their roots.’ • Among Ancestry’s new releases are also 400,000 Middlesex/ London occupational records across four collections: Stock Exchange Applications for Membership, 1802-1924; Freedom of the City Admission Papers, 1681-1930; Gamekeepers’ Licences, 1727-1839 and TS Exmouth Training Ship Records, 1876-1918.
EXHIBITION FOCUSES ON FAMILY PHOTOS A free exhibition will try to help family history fans identify ancestors in photographs. ‘Who Do You Think They Are?’ is an exhibition in Norwich, Norfolk, that will try to provide answers as well as tips for those keen to learn how to help identify long-forgotten people in family photographs. The exhibition is being presented at the city’s Millennium Library in The Forum in Norwich, with partners including Age UK Norwich and the Norfolk Heritage Centre. One particular highlight is the inclusion of local historian Andrew Tatham’s WW1 ‘group photograph’, which he spent 21 years researching and eventually curated his findings into an exhibition in Belgium and a top-selling book. Staff from Norfolk Heritage Centre will be on hand to offer help and advice of other resources available, including Picture Norfolk, Findmypast and HistoryPin. The event will include a memorial where visitors can add copies of photographs of their unknown relatives. The exhibition will open daily 10am-4pm from Monday 13 March to Saturday 1 April 2017. Visit
8
Family FamilyTree March 2017
p6-10 News FINAL.indd 8
03/02/2017 16:43
Caring for your memories
IWM 1917-2017:
saving people’s war stories for a century
BY HELEN TOVEY
Spitfires ready to take to the air at last year’s Battle of Britain show at IWM Duxford, near Cambridge
RAF DUXFORD AT 100
The IWM is 100 years old. When food rationing was introduced in 1918, requests for military mementos and photos were added to the back of ration books, as the IWM reached out to record the stories of servicemen from all walks of life
F
ollowing the Battle of the Somme, 1916, a nation traumatised by the scale of the loss decided that the story of each man must be remembered – and so the Imperial War Museum (IWM) was established the following spring. The First World War was still being fought and its end was uncertain and distant. Brutal battles such as Arras and Passchendaele were yet to come, but in the midst of this the IWM launched a campaign requesting personal memorabilia to tell, and permanently preserve, the stories of the officers and men who had lost their lives or won distinctions during the war. Items required by the museum were those such as photos, sketches, letters, poems, and mementos ‘even of trifling character’. A library of war literature was to be curated, to be made available to visitors too. Originally the repository was going to be called the National War Museum, but from its very early days it had the intention to include all parts of the Empire, hence the name Imperial War Museum. A team was sent to the Front to collect items of interest, and the French Army donated relics too, including the battered trumpet which sounded the charge
p6-10 News FINAL.indd 9
against the Germans at Verdun. With the introduction of ration books in 1918, the reverse side invited people to donate their loved one’s military memorabilia, and so the collection grew. Originally set up in Crystal Palace, the IWM opened to the public in 1920; in 1924 it moved to the Imperial Institute, before becoming permanently established on its current site, former premises of the Bethlem Royal Hospital, Southwark. A century later and the IWM has five branches in total, the others being Churchill War Rooms, HMS Belfast, IWM Duxford and IWM North, which will all be marking the centenary in different ways in 2017, with special programmes of exhibitions and events. Over the past 100 years the IWM has continued to gather and curate a collection to remember the experiences of both civilians and service personnel of the British and Commonwealth forces. For upcoming 100th anniversary events, see and turn to page 88 to find out about ‘People Power: Fighting for Peace’, one of IWM’s centenary exhibitions. • We’d love to hear if you or any relative has ever donated a piece of family history to the IWM – get in touch with us via our Twitter or Facebook channels.
IWM Duxford is commemorating 100 years since RAF Duxford in Cambridgeshire was built. The IWM has announced its air show season to mark the centenary, kicking off with Duxford Air Festival on Saturday 27 and Sunday 28 May, which includes a display by the Great War Display Team. The Duxford Battle of Britain Air Show takes place on Saturday 23 and Sunday 24 September, evoking Duxford’s finest hour defending Great Britain from aerial attack in 1940. Duxford had a principal role as a vital Second World War fighter station, demonstrated in the show by exceptional historic aircraft and a poignant massed Spitfire flight. Details and tickets at
Yachting resource unveiled A new digital resource has been published by the Association of Yachting Historians, which could prove invaluable for those tracing yachting ancestors. The Complete Lloyd’s Register of Yachts – running to more than 104,000 pages – has been published in fully searchable PDF format on USB memory stick, costing £85/£95 nonmembers. The resource covers the period 1878-1980, including all the supplements. In addition there are more than 1,600 pages detailing American yachts and 18 volumes of the Register of Classed Yachts 1981-2003. Genealogist and Family Tree Q&A expert David Frost explained it is a valuable resource for family historians because it contains the names and addresses of all the major yacht owners of the period and details of the vessels themselves, including the Official Number (ON), where one exists (useful for anyone seeking crew lists, which are accessed by ON). David added: ‘It will be of interest to anyone whose family owned a yacht or whose ancestor was one of the crew. I have found it invaluable.’ Visit reg1.html An 1897 copy of Lloyd’s Yacht Register that originally belonged to the Duke of Abruzzi
March 2017
Family FamilyTree 9
03/02/2017 16:43
PRESERVING THE FAMILY ARCHIVES
LAST WW2 FIREFIGHTER HELPS TELL STORY OF THE BLITZ
A
city’s last surviving firefighter from World War II is due to unveil its first permanent exhibition to the Sheffield Blitz. Doug Lightning, aged 99, will help the city remember hundreds of people who died in two nights of Luftwaffe bombings in December 1940, when he officially opens the exhibition at Sheffield’s National Emergency Services Museum. Mr Lightning, who worked on both nights of the Sheffield Blitz on 12 and 15 December, will be joined at the exhibition launch by Sheffield author Neil Anderson, who began a campaign seven years ago for more to be done to mark the deadly bombings. The Sheffield Blitz killed and wounded more than 2,000 people and made nearly a tenth of the city’s population homeless. The devastating attacks changed the face of Sheffield forever and flattened much of the city centre, while hardly a suburb survived without being hit. Mr Anderson, together with Sheffield Blitz 75th project manager Richard
Godley and heritage interpreter Bill Bevan, secured £81,300 from the Heritage Lottery Fund (HLF) in November 2015 for Sheffield Blitz Memorial Fund. This has helped fund the exhibition, which launches on 18 February and will contain rare objects and photos, WW2 emergency vehicles, oral history recordings from survivors, film footage and the fire brigade’s original map of bomb sites. See the museum’s website at Mr Anderson said: ‘It has been humbling to receive so much support for what we’ve been doing to commemorate the Sheffield Blitz. I’ve had the honour of knowing Doug Lightning for the past few years and his enthusiasm for the project has been an inspiration.’ The exhibition marks the first major milestone for Sheffield Blitz 75th. A memorial trail also forms part of the project, with up to 12 sites earmarked for plaques, and an online archive and digital exhibition is to be hosted at https:// sheffieldblitz.wordpress.com
The King and Queen visit Bramall Lane, Sheffield, after the devastating Blitz
Local author Neil Anderson of Sheffield Blitz 75th with Doug Lightning
YOUR FREE RECORDS TIP
Tracing your family history in Essex? Be sure to check out the Essex Society for Family History at
A
t Family Tree we’ve teamed up with UK family history website TheGenealogist.co.uk to offer you selected free sources from its extensive online collections. Read on to learn about the census and quarter sessions records you can research today...
Your Quarter Sessions records Search or browse the Worcestershire Calendar of Quarter Sessions 1591-1643. Quarter sessions records are the oldest surviving public records of the historic counties of England and Wales and the courts decided upon administrative matters
and criminal cases, from apprenticeship indentures to murder. They give a fascinating insight into our ancestors’ lives. Your census search You can also search and use the 1881 Essex Census, where you’ll find the new high resolution, greyscale images much more legible than the black and white copies of the past. How to use the records 1. To access your free records simply register at TheGenealogist.co.uk/ftfree 2. To activate your content for this issue, enter the code 236562 3. Once activated, content will be accessible for a 30-day period (within 3 months of the UK on sale date).
STOP PRESS...
TheGenealogist is set to release a new batch of criminal records in March, which will include quar ter sessions for Middlesex, Shropshire, Surrey and Warwickshire – more news on this next issue!
Discover your Worcestershire ancestors’ criminal past, wheelings and dealings in the county’s Calendar of Quarter Sessions records 1591-1643, free this issue, and search for your Essex family in the 1881 Census
10
FamilyTree March 2017
p6-10 News FINAL.indd 10
An illustration of The King’s Head, Chigwell, from Essex: Highways, Byways and Waterways by CBR Barrett, 1892
03/02/2017 16:43
Where do you think you come.
With one simple test, uncover the different places from your past, discover unknown relatives and find new details about your unique story. Know your story. Buy now at Ancestrydna.co.uk
AncestryDNA is offered by Ancestry International DNA, LLC
p011 Ads.indd 11
March 2017
Family FamilyTree 11
03/02/2017 16:18
TRACE TO TUDOR TIMES King Henry VIII granting a Royal Charter to the Barber-Surgeons company
15 Records
1500s
to take your family history back to the
Discover the key record collections that could potentially take your family back to the time of the Tudors with genealogist Celia Heritage’s expert guide
W
hether
12
Family FamilyTree March 2017
p12-19 Celia Heritage - records back to 1500 Finalish.indd 12. Remember to interview older relatives who may have vital oral knowledge about the family that has never been recorded and which will die with them. Encourage the sharing of family photographs and other data, but use
online family trees with caution, as they may be inaccurate.
2. 1939 Regi ster
06/02/2017 15:33
Bumper expert guide
WATCH Our expert Celia Heritage chats about researching back many centuries. TAP HERE!
Trace your ancestors with the Victorian censuses, starting with the 1911 Census and working back each decade to 1841). With a few exceptions, Irish census returns only survive for 1901 and 1911 but these are freely available on the National Archives of Ireland website at Look at griffiths-valuation.aspx to learn more about Griffith’s Valuation, which can be used as a census substitute in Ireland.
QUICK TIP The. For privacy reasons, you can only view entries for those people who were born over 100 years ago or whose deaths have been verified. Read more at ndmypast.co.uk
p12-19 Celia Heritage - records back to 1500 Finalish.indd 13
The 1939 Register is a valuable resource for tracing your 20th-century relatives because the 1931 Census was destroyed by fire in WW2 and no census was taken in 1941. Findmypast. co.uk hosts the database, which is free to search but there is a charge to view and download the records
1939 Regis ter can be viewed for free onlin e site at The National Arc on hives, Kew. This ca n save you money; find more details at ht tp://fam ily tr.ee/ TNA19 39 R egister
HOW-TO GUIDE
Visit our website at ht tp://family tr.ee / censusguide fo r an essential how-to census guide for family historians
March 2017
Family FamilyTree 13
06/02/2017 15:33
TRACE TO TUDOR TIMES
4. Birth, marr iage & death certificates. Locate BMD register entries via the General Register Office (GRO) index of births, marriages and deaths for England and Wales on the FreeBMD website at For births 1837-1915 and deaths 1837-1957 you can use the new GRO index at certificates and use this web address to order copies of all certificates. In Scotland the equivalent Statutory Registers begin in 1855 and are online at For Ireland visit. ie/en and see pages 36-39 this issue.
Solid evidence shared via social media: Assistant editor Karen Clare was sent these scans of her 2x great-grandfather’s 1865 birth certificate and 1885 marriage certificate and after making contact with a distant relative in Australia via Facebook
5. Loca l newspape rs Newspapers carrying local news really got under way in the mid19 ndmypast.co.uk
FIND OUT MORE Learn more about the new GRO index at
You can freely search and download historic Irish civil records at
14
Family FamilyTree March 2017
p12-19 Celia Heritage - records back to 1500 Finalish.indd 14
06/02/2017 15:34
Bumper expert guide.
QUICK TIP Search historic newspapers at Findmypast. co.uk and www. britishnewspaper archive.co.uk
p12-19 Celia Heritage - records back to 1500 Finalish.indd 15
See January 2017 Family Tree for our exclusive 8-page guide to online British newspaper archives – download or order your copy via. co.uk/store/back-issues
apprenticed or, if a woman, married etc. From 1662, settlement certificates could be issued to anyone wishing to Try your luck travel to a new parish. They were a form on the free FamilySearch.org of indemnity guaranteeing that the person in question would be supported website for by his home parish if he needed poor parish registers, relief. If a person arrived in a new parish such as these with no certificate, he could be removed 1660s baptism back to his place of settlement by the records which parish authorities. In this case a removal cover Aylsham order would be issued. Both removal in Norfolk orders and settlement certificates will name a person’s place of settlement. Removal orders will also record the parish where he was apprehended. They may also contain details of a man’s wife and children. Survival rates are not great, but records may help you track 7. Settleme nt & down the missing baptism of an ancestor ds recor removal who has migrated some distance. Records are at local record offices, Settlement and removal records can with a small number online. After the help to document the lives of some of introduction of union workhouses our poorer ancestors between the 17th in 1834, the system continued and early 20th centuries. Before 1834, and records will be found among poor relief was the responsibility of workhouse records. Find out more at the parish where a person was legally and www. ‘settled’. Originally a person’s usual londonlives.org/static/PoorLaw.jsp ‘place of abode’, the Settlement Act of 1662 introduced new categories and criteria for establishing someone’s place 8. Monume ntal of settlement. By the late 17th century inscr iptions your parish of settlement was usually inherited from your father. However, as While there are many laudable you grew older you could change your modern-day projects that aim place of settlement in many different to record those gravestones still ways depending on whether you were
March 2017
Family FamilyTree 15
06/02/2017 15:34
TRACE TO TUDOR TIMES
Above: London people queuing outside St Marylebone Workhouse for poor relief, c1900; Right: A 1774 removal order of vagabond Ann Capon, found among Ancestry.co.uk’s London, Selected Poor Law Removal and Settlement Records, 1698-1930 collection, in association with the London Metropolitan Archives. The certificate says Ann was to be removed to the parish of Saint Leonard in Shoreditch and provided for by the churchwardens, chapel wardens and Overseers of the Poor Modern-day projects include The World Burial Index www. worldburialindex.com, Find A
Grave ndagrave.com and BillionGraves. com plus gravestone projects run by Ancestry and TheGenealogist.co.uk. Copies of older surveys can usually be found in local record offices and your local family history society may sell CDs of their own transcriptions.
QUICK TIP
For out-of-copyrigh t books featuring old MIs, try browsing ‘Mon umental Inscriptions’ on the Internet Archive ht tps://ar chive.org – you may be incred ibly lucky and find one covering your paris h of interest
We found a 1913 copy of the Register of English Monumental Inscriptions at along with The Monumental and Other Inscriptions in Halifax Parish Church (1909), among many other such titles
16
Family FamilyTree March 2017
p12-19 Celia Heritage - records back to 1500 Finalish.indd 16
06/02/2017 15:34
INTERACTIVE
Bumper expert guide GALLERIES TAP HERE for a step-by-step guide on tracing your ancestors’ apprenticeship records You can find apprenticeship records on TheGenealogist.co.uk by searching under occupational records. The records shown here date from 1711
This 1853 will was easily downloaded for £3.45 from The National Archives’ Discovery site at. nationalarchives. gov.uk
R E A D UP ON. For English and Welsh wills from 12 January 1858 look at the Principal Probate Registry (PPR) indexed up to 1966 on Ancestry. The complete index is accessible at from which you can also
p12-19 Celia Heritage - records back to 1500 Finalish.indd 17
cestors Tracing Your An cords Re h Through Deat e ag rit He lia Ce by and (2nd Edition, Pen Sword, 2013) www. scotlandspeople.gov.uk and for wills in Ireland visit PRONI at
10. Guild & apprentic es. Stamp duty registers are online at TheGenealogist.co.uk and Ancestry.co.uk, while the Society of Genealogists in London has a large indexed collection of apprentice indentures covering the 17th to 19th centuries.
March 2017
Family FamilyTree 17
06/02/2017 15:34
TRACE TO TUDOR TIMES
11. Quar ter & Pett y Sessi ons. Find them
An engraving of an apothecary, John Simmonds, and his boy apprentice, William, working in the laboratory of John Bell’s pharmacy, 1842
family can be determined by means of the court roll.. ee/guidetomdr and my article on manorial records in the October 2016 issue of Family Tree, available at www. family-tree.co.uk/store/back-issues.
13. Records of the Cour t of Chancery Many of our ancestors were involved in civil disputes of one sort or another during the course of their lives, and one of the most popular courts for this type of case was the Court of Chancery. Disputes frequently involved
QUICK TIP
Teach yourself Latin – take the fre e online course at ww w. nationalarchives.g ov.uk / latinpalaeography and read our essential guide to Latin in the October 2016 issue of Family Tree
Many of our ancestors appear in court records
18
Family FamilyTree March 2017
p12-19 Celia Heritage - records back to 1500 Finalish.indd 18
06/02/2017 15:34
Bumper expert guide Kirk & presbytery sessions records
Images: Henry VIII, workhouse queue and apothecary © Wellcome Library, London, copyrighted work available under Creative Commons Attribution only licence CC BY 4.0; court illustration and coat of arms British Library Flickr; Giant’s Causeway Pixabay
A wonderful source for Scottish research, these records cover a wealth of information about parishioners; from paternity records and punishments meted out for loose behaviour, through to records relating to poor relief and the use of the parish mortcloth for burials. Records are held at local archives with copies at the National Records of Scotland. These are due to go online this year – see. Use TNA’s Discovery catalogue at. gov.uk
p12-19 Celia Heritage - records back to 1500 Finalish.indd 19
READ UP ON IT Tracing your Aristocratic Ancestors by Anthony Adolph (Pen & Sword, 2013)
Did your family have the right to bear arms?
originally taken down orally by heralds from the College of Arms based on what the family told them. They would then verify this with their own records. https:// sites.google.com/site/cochoit/home
as information about more notable local families. Good examples are Hasted’s History and Topographical Survey of the County of Kent and Nicolson and Burns’ History and Antiquities of the Counties of Westmorland and Cumberland. The British History Online website at. ac.uk offers transcripts of many local history books and gives access to the Victoria County Histories (VCH) – read more about the VCH at Also look for free digital copies of printed histories at. google.co.uk and. org and the aforementioned Internet Archive at You can also find a comprehensive guide to digital libraries in the February issue of Family Tree. For Northern Ireland, the Ordnance Survey Memoirs were parish accounts commissioned to accompany the new Ordnance Survey maps in the 1830s. They provide information about places and local people and can be bought from the Ulster Historical Foundation via
About the author Professional genealogist Celia Heritage runs The Celia Heritage Family History
15. Printed histories
e-Course, an indepth online
Don’t rely on the records just offered by genealogy websites. Learning something about the history of the area where your ancestor lived is important and many older printed histories will provide this, as well
author of Tracing Your Ancestors Through
course aimed at those with English and Welsh ancestors. She is the Death Records and Researching and Locating your Ancestors and publishes a regular newsletter detailing family history news;
March 2017
Family FamilyTree 19
06/02/2017 15:34
THOUGHTS ON GENEALOGY GOALS
What sort of family historian are you? Are you a Sherlock Holmes-style sleuth? A master of the genealogy database? Or chief raconteur of anecdotes at a family gathering? For a bit of family history fun, Chris Paton invites you to ponder what it is about this most compelling of hobbies that really makes you tick...
F
or some, family history is about the thrill of the hunt, that urge to uncover our roots, and perhaps try to locate flickers of familiarity among those who came before us – recognisable personality traits that must be ‘in the blood’. For others it is the hobby of the hunter-gatherer, the desire to create databases and resources, not just to help their own pursuits but for others who may share a particular connection or interest. Then there are those who are obsessed with technology, seemingly born with cybernetic implants and a desire to interface with all the latest gadgets around, and perhaps to create
20
Family FamilyTree March 2017
p20-22 Paton FINALish.indd 20
a few more along the way. And of course, there are some who make a living from it, whether through professional research services, products supplies, or producing publications such as Family Tree magazine! Our addiction to genealogy has a name – progonoplexia – and the prognosis for all of us so afflicted is not good, as it is unlikely to involve a door with an exit sign on it any time soon. But what kind of family historian are you, and what kind of family historian might you yet aspire to be? Just for a bit of fun, let’s explore the options. No matter where you end up concentrating your efforts, you will
likely start to pursue your ancestral interests by researching your own personal tree. If so, what might your end goal be?
Want an instant family tree? Do you just have a fleeting interest, a desire perhaps to construct a basic family tree chart as a memento for the living room wall, or a quick ready reckoner to clarify how all those relatives you are conscious of around you today are related? If you hope to produce fast results, one problem to be aware of is that once you have finished interviewing your relatives who might be willing to talk, you are then tempted to take
06/02/2017 09:56
Super-sleuth? Data dude? dangerous shortcuts. It may indeed be tempting to log into a website, see a tree with a name or two you recognise, assume you are related, and simply ‘harvest’ what you find without considering its accuracy. If you are in that boat, a little more time and patience may well be required to do the job properly. After all, what use might that family tree chart be on the wall if it is not in fact your family it is illustrating? And just because your name is Campbell or MacDonald, does that cheap plaque with a coat of arms that you bought in a thrift shop in Edinburgh really have anything at all to do with you or your family?!
provide contact details for you: • – AGRA •– ASGRA •. blogspot.com.au – the Scottish Genealogy Network • – the Association of Professional Genealogists. Check out the listings at the back of Family Tree for details of professional researchers too.
Committed to the cause? At the other end of the scale, are you one of those genies who wants to take things a great deal further, not only to identify where the cousins all fit in, but to understand how all of your
What kind of family historian are you, and what kind of family historian might you yet aspire to be? If a quick turnaround is what you are looking for, but you are concerned that your family history should be well researched and all sources properly cited, you might well be better off hiring a professional researcher to do the immediate requirements for you – you can always go back and continue on from their initial efforts at a later date! Many libraries and archives have lists of locally-based researchers who might be able to assist you, and there are of course many national professional researcher-based groups, such as the following, all of whom can
family lines have evolved over time, back through the generations? If this is your aspiration, you may think that this involves considerably more time and effort, and a great deal more expense, and you could well be right. You might need to invest time and effort in learning specific skills and research techniques, through books and research guides, and perhaps even consider doing a course or two to help you develop further as a researcher. You will certainly need to go beyond the comfort zone of
just using online databases from the comfort of your home. A good family historian develops the skills to understand the political context, culture and events happening around ancestors within the areas in which they resided. To understand their significance and impact, you may need to research the local history of an area and to reconstitute entire family groups to gain a sense of how long-established they may have been there, or whether they were known in the locality for a particular skill or trade. Are you so interested in the art and science behind such types of research that you might even wish to consider working as a professional researcher yourself some day?
What do your records say about you? Are you a scrap merchant, with scraps of paper here and scraps of paper there, each of them with scribbled notes awaiting the glorious day when they will all be collated? Or do you keep family group sheets, summarising the key biographical details and sources found for each family grouping within your tree? Are you a software fanatic, having abandoned all forms of writing implements in the 20th century, or is the quill and parchment always close to hand, next to the calligraphy guide? Do you use a family history software program to record your findings; equally importantly, do you have back-up copies of your fi les in case the dreaded computer crash ever happens? Do you type up your findings into documents, with logs detailing what you have searched as part of a detailed research plan and identifying what you have yet to search, or do you simply go where the whim takes you?
Your story... secret or shared? What do you do with the stories that you might find? Are you a sharer, do you publish your efforts and perhaps attract distant relatives to step forward and add more to the larger story? Or is this your personal thing, your guilty pleasure, something that you perhaps only
p20-22 Paton FINALish.indd 21
March 2017
Family FamilyTree 21
06/02/2017 09:56
THOUGHTS ON GENEALOGY GOALS wish to do for yourself, your children or your immediate family? Do you wish to present your findings through a website, in a magazine article, or within a book? If so, there are many people out there who can help you to create a basic website, or to structure your efforts into a small publication.
New interests, new friendships For many who spend time pursuing their ancestral history, new interests may well emerge along the way. These can culminate in participation within societies and involvement in associated projects, as fellow enthusiasts seek to create resources that might help with their own research, as well as that of others. Many family history societies, for example, have created census indexes, newspaper indexes, graveyard inscriptions and other resources for just such a reason. Such networking opportunities can help you to develop new friendships – might you be wise to get away from the computer for a bit, and attend meetings, visit locations and archive institutions that you may never have gone near before?
Communicating & crowdsourcing Conversely, while these activities have helped to create some of the most useful resources for family history research, in the 21st century there are many other new practices that the modern technological world has opened up. Do you still communicate with distant relatives by writing letters or emails, or have you adopted the social media world, with networking opportunities through Facebook, discussion forums and other online platforms, all based within their own virtual worlds? Might you be interested in online crowdsourcing and indexing projects, helping organisations and companies to open up access to millions of records awaiting a new existence with an online destiny? Several organisations now provide software to allow you to help indexing digitised records from the comfort of your own home, such as FamilySearch’s Indexing project – – and Ancestry’s World Archives Project –
22
Family FamilyTree March 2017
p20-22 Paton FINALish.indd 22
asasasasa
Are you a scrap merchant, with scraps of paper here and scraps of paper there, each of them with scribbled notes awaiting the glorious day when they will all be collated? Ready for the new world? Then, of course, there is the biggest game in town just now, the field of genetic genealogy, the testing of DNA to unlock the family history found within your very cells at a molecular level. Have you really gone as far as you can go as a genealogist; are you ready to move beyond traditional research techniques and to embrace the future? Or will you stay safe in the tried, tested, but somewhat limited, opportunities that may restrain you within the brick walls other developing methodologies might help to smash? The world of the family history researcher is changing by the day, bringing fascinating new options and new frustrations, but it is a world that is not standing still. As we start moving our way through 2017, why not ask yourself what type of researcher you currently are, what your goals are and how you want to realise them? And then, at the start of each new year from now, perhaps ask yourself those very same questions again!
Those of us who are totally immersed within the genealogical world already know ancestral research is the gift that keeps on giving. And yet the field of family history itself is not a very easy thing to defi ne. However, by thinking about what you want to discover, record and preserve about the past for the future, you may also fi nd it encourages you to make new plans – and uncover even more about your family’s fascinating history.
About the author Chris Paton runs the Scotland’s Greatest Story research service. co.uk – and teaches online courses through fgfg He is the author of Researching Scottish Family History, Tracing Your Family History On The Internet, Tracing Your Irish Family History On The Internet and The Mount Stewart Murder, among others, and blogs at
06/02/2017 09:56
A4p FTM Advert (Feb 2017)_Layout 1 26/01/2017 14:22 Page 1
What’s On in 2017 Over 160 Family History Lectures, Courses, Walks & Talks, including: Family History for Beginners: Finding Births, Marriages & Deaths and the Census
One-hour Talks (£8.00/£6.40) Tracing your Ancestors through Local History Records | 29 Mar
Saturday, 29 April 14:00-17:00 (£20.00/£16.00)
Electronic Family History | 20 Apr a part 2 of this course will be held on 13 May, which will include finding birth, marriage & burial records in parish registers.
Introducing The Next Generation (TNG) Genealogy Program | 10 May
Family History for Beginners: Parish Registers
The Sugar Barons: Early Settlers in the West Indies and America | 6 May
Half-day Courses (£20.00/£16.00) Tracing French Genealogy | 1 Apr Tracing your Yorkshire Ancestors | 29 Apr Immigration to North America 1820-1930 | 6 May
Saturday, 13 May 14:00-17:00 (£20.00/£16.00) In Part 2 of this course you will learn how to search for records of marriage, baptism and burial in parish registers. The tutor will introduce both online and offline sources and methods. Half-day courses with Louise Taylor. Courses can be booked separately.
Tracing Family who were in the Salvation Army | 13 May
Free Events
Full-day Courses (£35.00/£28.00)
(must be pre-booked)
Getting the Most from the Society of Genealogists | 19 Apr
Tracing your House History: For Family Historians | 25 Mar
Library Tours on alternate Saturdays. Please call or see our website for details.
Palaeography for Beginners | 22 Apr
For information on further events, please visit our website, or contact us for a copy of our full 2017 events programme. Members of the Society of Genealogists receive a 20% Discount on events.
Something for everyone passionate about family history. Open to all - non-members welcome. Society of Genealogists, 14 Charterhouse Buildings, Goswell Road, London EC1M 7BA | Tel: 020 7553 3290 | Email: events@sog.org.uk
March 2017
Family FamilyTree 23
Registered Charity No. 233701. Company limited by guarantee. Registered No. 115703. Registered office, 14 Charterhouse Buildings, London, EC1M 7BA. Registered in England & Wales.
p023 Ads.indd 23
03/02/2017 16:19
Old ns atio p u occ
EXPLORING LIFE & WORK ON THE RAILWAY
STEAMING AHEAD!
How to trace your railway ancestors At the cutting edge of technology and transport the growth of railways transformed our ancestors’ world. Here we look at the history of the railways in the British Isles – and the records, memories and more to help you trace your ancestors who worked on them. Fast-track your research today
TOP RAILWAY RESEARCH ADVICE Get to grips with the records to help trace your railway ancestors’ working lives, with Chris Heather – Transport Records Specialist in the Advice and Records Knowledge department at The National Archives
Researching railway occupations Railway companies employed a huge range of different trades, and these are reflected in the records. As well as drivers and fi remen there were station staff, such as ticket collectors, stationmasters and signalmen. There were those who built and maintained the track, the engines and the rolling stock. Engineers and labourers built bridges and dug tunnels, and catering staff ran railway hotels. Railway companies employed canal workers, and some merchant seamen, and, before the First World War,
24
Family FamilyTree March 2017
p24-31 Railway feature FINALish - sort out ads.indd 24
Understanding staff records To help you understand how to track down an ancestor’s staff records, here is an outline of some important dates in railway company history. • Between 1825 – when passenger railways began – and 1923, almost 1,000 separate railway companies were formed throughout Britain. They did not all exist at the same time. Some companies were taken over by larger companies and others failed, but each company kept its own staff records, and those that survive vary a great deal in what they contain. Many do not survive at all • Between 1923 and 1947 most of the existing companies were combined into just four companies – known as the Big Four (GWR, LMS, LNE and SR). Then in 1948 the railways were nationalised to
become British Railways • Once nationalised the records of the various private companies which had been passed down to British Rail became public records, and were passed to The National Archives (TNA), in Kew, Surrey. The period for which TNA has staff records is therefore between 1825 and 1947 • However, it is also worth checking with the relevant County Record Offi ce (CRO), since some railway staff records are known to have found their way to local archives • For staff after 1948, you will again need to contact the local CRO corresponding to the railway region in which they worked, and ask for the railway staff record cards. But be aware that most records will still be closed due to data protection regulations
06/02/2017 10:32
Researching ancestors in the age of steam Onl ine search tip Search on Ancestry.co.uk first because it’s quick and easy, and the documents include records from some of the smaller companies, which the larger ones absorbed. However, if you do not find your man then you will need to do a bit more detective work, and use the remaining records of the smaller companies held at TNA
horses were used to move carriages around, so there are blacksmiths, draymen and horse boys in the records. You may also find auditors, lock-smiths, typists and lavatory attendants. So if your ancestor’s occupation is given on his marriage certificate as a fireman, engineer or plumber, he could have been employed by a railway company.
Where to start searching Some TNA railway staff records for the period before 1948 are available online through the Ancestry website in the section called Railway Employment Records. These are the records of the larger railway companies, including the Great Western, the London Midland Scottish and the London and North Eastern railway companies.
Not all online The staff records at TNA are arranged by name of railway company, so it really helps to know which company employed your ancestor. It is most likely to be the one located nearest to where he lived, so use the census and marriage or death certificates to locate his address. The census was taken every 10 years and is available to search online from 1841 to 1911. You can start your search of TNA’s staff records from TNA’s website:
See our blog to discover how Ad èle Emm’s research ed her railway ance stor at s ee. co.uk/news-andviews/tracing-y ourrailway-workerancestors
Above: A page from a staff register for the Barry Railway in South Wales (TNA reference RAIL 23/46). Not much information is given but it shows that Katherine Jane Harris was employed from 13 August 1915 in the accountants’ office. It gives her date of birth and her rate of pay. These registers are usually arranged by date of employment, and may be indexed at the back or front of the volume. Other volumes may be in alphabetical order of surname Right: Company railway magazines are a useful source for family historians who may not own a photograph of their ancestor (TNA reference ZPER 16/6, January 1916) Below: Red Cross train loading up at Casualty Clearing Station
Quick tips The census will tell you where your ancestor lived and give you their job title. Their marriage or death certificate should tell you their occupation, and where they got married or died. Certificates can be ordered online from the General Register Office – – standard printed charge £9.25
p24-31 Railway feature FINALish - sort out ads.indd 25
March 2017
Family FamilyTree 25
06/02/2017 10:33
EXPLORING LIFE & WORK ON THE RAILWAY Find information on A-list steam engines at / locomotives-an dengines/steamengines
Using railway atlases
3-STEP SEARCH 1
Having found an address use a railway atlas (such as Jowett’s Railway Atlas by Alan Jowett) to check which railway companies ran lines and stations nearby. In urban areas you may have to narrow it down to the nearest two or three companies
2
Next check whether any staff records for that company survive by turning to the ‘Railway Workers Research Guide’ on TNA’s website, which you can fi nd here: The guide includes a table of staff records in alphabetical order of company name, together with document references for each volume. Remember that these are original documents which can only be seen if you visit The National Archives.
3
Accidents were extremely common, with an average of two railwaymen being killed on the railway every day up until the First World War Resear ch tip Major incidents recorded in accident registers can be searched via but smaller accidents can only be found by consulting the volumes in person since they are not downloadable
STAFF REGISTER RECORDING FINANCIAL FACTS This is the staff register entry for George Webber. Firstly, note that half the page is reserved for a record of fines, emphasising the fact that the main purpose of these records was financial, recording payments to workers, or deductions from wages. Secondly, this is a Great Western Railway ledger for the period 1841-1864 (TNA reference RAIL 264/18), but George Webber was employed by the South Devon Railway. The reason his record is in this collection is because the GWR took over the South Devon Railway, and its records were subsumed into those of the GWR. Thirdly, I have included this for the amount of information it gives on an incident in 1886, when George Webber was driving his engine to Truro, through hail and snow. At one point he felt his engine hit something. On reaching Truro he discovered that he had hit and killed a Ganger (Foreman) named Collins. As a result of this accident Webber went temporarily insane, and was placed in the Bodmin Lunatic Asylum. He was released the following year, but the company only employed him as a cleaner
26
Family FamilyTree March 2017
p24-31 Railway feature FINALish - sort out ads.indd 26
06/02/2017 10:33
Researching ancestors in the age of steam
Understanding job classifications Railway companies grouped occupations together and gave them titles which we might not use today. For example, a signalman, a guard or a porter might be included under ‘Operating, Traffic and Coaching’ staff; an engine driver might be listed under ‘Carriage or Mechanical Engineering’ staff. Again, the Railway Workers Research Guide has a key to these terms.
tips. They also include information on staff, such as new appointments, resignations, promotions, retirements, awards and deaths. TNA has 3,284 railway magazines for more than 100 different companies and regions. For the two world war periods they show men who have signed up, with photographs and biographical information. You will also find the inevitable obituaries for those unfortunately killed.
Rent rolls
Trade unions
Rent rolls are lists of tenants living in railway-owned houses or cottages. The railway companies would sometimes build housing for their staff, particularly if a station was needed in a rural area, and in many cases villages and towns were born out of a few railway cottages. Those people listed on rent rolls are most likely to be railway employees.
Many railway workers would have joined a trade union, and family history information can be obtained from the
CHARITABLE PAYMENT RECORD This record is for Gertrude Heather, an eight-year-old girl whose father died in 1892 (TNA reference RAIL 1166/101). It shows an allowance being paid to her grandmother, and that her schooling has been arranged
trade union papers held at the Modern Records Centre at the University of Warwick. These can also be seen online through the Findmypast website at ndmypast.co.uk
Types of staff record Since each company kept its own records, the surviving staff
Staff magazines Many companies produced magazines for their staff, which included news on social events and articles on a variety of subjects, including theatre reviews, sports clubs and gardening
Is it any r we hanker de on w us af ter the glorio ? m ea St Age of
system is Britain’s railway world, and the oldest in the ations, hotels the viaducts, st s that still and signal boxe ape are mark the landsc inder an evocative rem of times gone by
p24-31 Railway feature FINALish - sort out ads.indd 27
A GH Thompson cartoon, dating from 1940, but depicting an earlier era
March 2017
Family FamilyTree 27
06/02/2017 10:33
EXPLORING LIFE & WORK ON THE RAILWAY ledgers vary in the amount of detail recorded. Some companies maintained a whole page per person, and these can include personal details and events from their service, particularly if they were involved in an accident, or if they were disciplined in some way. Other companies merely maintained staff lists, which give little more than names, dates and salaries.
Voucher books For some companies you will find voucher books at TNA. These are accountants’ records that include receipts for wages paid, and invoices relating to staff and contractors. They are not usually indexed, so you would be lucky to find an entry for your ancestor, but if successful you would be rewarded by seeing their signature.
Company board minutes Like any company, railway management boards had committees which would look into specific issues and report back. The Great Eastern Railway had an Old Age Committee, and the minutes reveal the names and ages of elderly staff who were assessed each year and told whether they could continue working.
Railway Benevolent Institution The records of the Railway Benevolent Institution can be used to trace charitable payments to widows and children of railwaymen killed or injured in the course of their work. They are not indexed, but if you know the date when your man was killed or injured, you may be able
Peter ’s Railway Email info@petersrailway.com
Web
PetersRailway.com
Books for children who love trains Chris Vine’s series of books tell the story of Peter and Grandpa building and operating a railway across their farm. With adventures, stories from the old railways and real engineering, these books feed inquisitive minds!
Accident registers Most railway companies would keep accident registers, which can be found by simply searching TNA’s online catalogue, Discovery –. nationalarchives. Did gov.uk as before – yo u know? using the name of The first monarch the company and to travel by rail the word ‘accident’. was Queen Accidents were Victoria on 13 extremely June 1842 common, with an average of two railwaymen being killed on the railway every day up until the First World War. As well as the company’s own accident records, TNA also holds copies of the Board of Trade’s reports into accidents in series RAIL 1053. These detailed accounts record even the smallest of accidents involving staff.
Other sources
Just Out
• – for records of the British Transport Police, start by consulting its website • – for London Transport employees, contact the Transport for London Corporate Archives.
Books for Fun and Learning Caught in a storm, two young girls dry out in a loco. Cheekily, they end up stealing the train! A true story. 15 x 14 cm, 32 pages. Age 6 to 12 years, £2.99
to fi nd an entry recording payments to his relatives.
The children and Grandpa play trains on an epic scale, causing complete havoc! 15 x 14 cm, 32 pages. Age 3 to 6 years, £2.99
I hope you’ve found my round-up of railway sources helpful. Good luck with your research! CH
EXPLORE THE NATIONAL RAILWAY MUSEUM Author and Engineer As a Chartered Engineer who trained at Rolls Royce, Chris wanted to share his love and knowledge of railways, science and engineering: Peter’s Railway is the result.
For signed & dedicated copies and special offers
PetersRailway.com 10% OFF with coupon FAMTREE17
Story
28
Technical
Family FamilyTree March 2017
p24-31 Railway feature FINALish - sort out ads.indd 28
History
Adventure
The National Railway Museum is one of the largest transport archives in the UK and if you have railway ancestors it’s especially worth a visit. Jump aboard for Rachel Bellerby’s whistle-stop tour...
The archives of the National Railway Museum in York – see the website at – have grown since its opening in 1975 into one of the largest transport archives in the UK, with books, maps, photographs and other material relating to more than 150 years of railway history.
06/02/2017 10:33
Researching ancestors in the age of steam There are more than 20,000 books and 1.4 million photographs in the museum’s collections. The museum’s library and archive centre Search Engine is open to researchers with no appointment needed; although you can request material in advance of your visit. In the museum itself you can explore more than one million railway-related objects, including the magnificent Sir Nigel Gresley locomotive, which is currently on public display while it undergoes its 10-year overhaul. You can also take a simulator ride on The Mallard, explore the world’s finest collection of royal carriages and see railway furniture, timepieces and fittings which played their role in the country’s transport history. RB
New exhibition The National Railway Museum’s new exhibition ‘Ambulance Trains’ invites visitors to step on board a historic railway carriage and discover the role played by the men and women who ran the First World War Ambulance Trains, which carried millions of sick and injured soldiers to safety between 1914 and 1918. Letters, diaries, photographs and drawings chart the history of the service, with digital projections and film helping bring this heroic story to life. National Railway Museum, Leeman Road, York YO26 4XJ; tel: 0844 8153139 email: nrm@nrm.org.uk website:
Remember rail disasters
s worst rail Explore the UK’ ll, 1915, in crash, Quintinhi le died – which 226 peop of f to Gallipoli mainly soldiers injured. – and 246 were detailed Find memorials ipedia. at ht tps://en.wik shill_rail_ org/wiki/Quintin ials disaster#Memor
p24-31 Railway feature FINALish - sort out ads.indd 29
Above: Before the Hellingly Hospital line closed in 1957, several enthusiast groups ran tours to visit it. This is the SCTS/TLRS tour from May 1957 Right: The Hellingly electric locomotive on Phil Parker’s model of the hospital line
USING MODEL RAILWAYS FOR RESEARCH Railway modelling expert and enthusiast Phil Parker reveals how these miniature recreations of bygone railways can reveal clues about your ancestors’ working world
Clues from a miniature world Dig back in many a family history and the chances are you’ll unearth a connection to railways. While you’ll immediately think of the national companies, there were also a plethora of tiny operations and these can be far harder to find out about. The good
news is that there are groups of people out there who are likely to be in a position to help you; railway modellers. On any weekend, there will be model railway exhibitions taking place up and down the UK, with others spread all the way to Australia. Entrance tickets only cost a few pounds and inside there will be a variety of models from around the country. Most of the models will feature well researched recreations of small parts of the railway system. Those operating them will have spent many enjoyable hours finding out as much as they can to make the model as authentic as
March 2017
Family FamilyTree 29
06/02/2017 10:33
VIDEO CONTENT EXPLORING LIFE & WORK ON THE RAILWAY
TAP HERE to explore an impressive model railway of Liverpool Lime Street from times gone by
View Phil odel Parker’s m on cti a railway in it. /b :/ p tt h t a w X ly/2jLO C
possible. It’s common to see some of the results of this effort shown as part of the display, or at least carried in a folder underneath so it can be shown to anyone interested. It’s not just locomotives we are interested in, research becomes addictive and it’s easy to spend time that should be devoted to building a model poring over photographs or obscure documents.
My project & research A few years ago, I decided to build a small model of the Hellingly Hosptital Railway. This tiny private line started life as the best way to shift millions of bricks and other material required to build a substantial asylum in the south of England from the station to the building site, three-quarters of a mile
away. Around 1900, when construction drew to a close, the hospital Visiting Committee decided to retain it, even going so far as to equip it with the then new-fangled electricity. Living 200 miles away from the prototype, most of my research was carried out in libraries and contacting people by post. Many lines like this attract authors who produce tiny pamphlet-sized and often privately printed publications. The book that inspired me was written by Peter Harding and he generously supplied a list of contacts who I, in turn, wrote to.
Show time! Once I’d built my model, I started to exhibit it and this is where the biggest surprise awaited me. Our first show was in the centre of Derby and there we
The Kineton story When members of the Leamington & Warwick Model Railway Society started to research Kineton station, they found relatively little information available from conventional sources. Since the prototype is only a few miles from the club, they took a stall at the local farmers’ markets, appealing for memories of those living in the area. Such information and plans as they had were put on display and this started a number of useful conversations.
30
Family FamilyTree March 2017
p24-31 Railway feature FINALish.indd 30
Following these up, a large number of unpublished photos were discovered as well as a fund of stories from the era being modelled – all of which otherwise might have been lost in the future had they not reached out to the community for clues. For more details, visit the club’s website at
were greeted by a cry of ‘It’s Hellingly!’ from a group of visitors. Chatting to them, it turned out that one lady had worked at the hospital, her mother had been the head cook and her sister still worked at the administration centre at the time occupying the site. As we travelled around the country to other exhibitions, the same thing happened every time we appeared. This might have been a tiny line with
650,000 s
taff! By the late 19th/early 2 0th centuries, 6 50,0 worked on th 00 people e railways a t any one tim e – was you r ancestor on e of them? Find th
is and othe r invaluable information in My Ances tor Was A Railway Wor ker by Fran k Hardy, published by the Society of Genealogist s, RRP £8.9 9
06/02/2017 15:37
Images: title image: © AdobeStock/hipproductions; rail images © Chris Heather & The National Archives; Hellingly photos Andy York/
Above, left: Hellingly Hospital’s Visiting Committee view progress on constructing the line in miniature; right, a letter Phil received from the line’s original driver, Harry Page, as part of his researches; top right, Phil met John Page, Harry’s grandson, while exhibiting his model at Brighton in 1999
Researching ancestors in the age of steam School time Remember these facts from your youth? • The first steam locomotive was built in 1804 by Robert Trevithick and used to haul iron in Wales • At the Rainhill Trials in 1829, Stephenson’s Rocket sped along at a record-breaking 29 miles per hour • 1830 – the year the London and Manchester Railway opened its steam passenger service
no passengers – it was solely goods after 1935 – but we always found someone who knew it personally. Perhaps because of the sort of hospital it served, there was some notoriety with children threatening their friends, ‘They would be sent to Hellingly’, and that stirred the curiosity of anyone interested in trains.
Images: title image: © AdobeStock/hipproductions; rail images © Chris Heather & The National Archives; Hellingly photos Andy York/ BMR; 1957 tour © John H Meredith; festival, letter and John Page © Phil Parker British Library and Wellcome Library images; Red Cross train Wellcome Images; St Pancras, Berkhamstead and Craigendoran stations from the British Library flickr collection
Making connections online At the same time as I built the model, I created a website showing my efforts and asking for more information. Through this I received a number of contacts, the most striking of which came from a retired nurse. As a 16-yearold, she moved away from home to start her career at this remote arm of the health system. At night, the noise from the patients was terrifying and there wasn’t much chance of relief since the site was pretty much self-contained. There was also the suggestion from another contact that during World War II, several ‘well known faces’ took the trip along the line for secret treatment in the underground rooms
Memories of my engine driver dad with Jan Davison My grandfather and my father worked on the railway all their working lives and thoroughly enjoyed their work as railway drivers. As a family we are very lucky to have fond family memories – insights that you’d never be able to find in an archive. They’re just anecdotal – things that our Dad told us. One particular memory that springs to mind, is Dad working on the footplate of a steam engine. He told us how cold it was. It’s open to the elements, there are no doors, no air conditioning units – drivers and firemen are just out there exposed to the great outdoors. In winter to keep warm, Dad said that he would put newspapers down his work trousers. As a little girl I used to find that absolutely fascinating – and it’s something that people nowadays perhaps wouldn’t even be able to relate to, or indeed put up with, but to the men of the old steam trains it was all part of their job and they just got on with it. • Editor: Anecdotes and memories relating to Jan’s dad’s work on the railways are very precious to her and she’s been taking time to record them. If you have memories of the much-loved age of steam, make sure you record them too. Enjoy Jan’s memories on our YouTube channel at
beneath the hospital. I’ve never been able to corroborate this, but it certainly makes a fascinating story. If you have family who worked for the railways, the chances are you’ll gravitate towards models showing where they worked. Thus, I was pleasantly surprised to meet the grandson of Harry Page, driver of the Hellingly loco, while exhibiting in Brighton. He was in turn pleased to see a letter from his now-late grandfather that I’d received years before.
Q A
So, how can you make use of model railways?
lines you are interested in. Many modellers have websites, others post updates to their model on forums such as Rmweb.co.uk Also look at magazines such as BRM (British Railway Modelling) in newsagents as the best layouts will be featured in print. From there, get in touch with the railway model builder and the chances are that you will find you can help each other out. PP
About our experts Chris Heather joined the Public
The internet is your friend. Try searching for station names or
Record Office in 1985, and has worked in various sections of the office including Personnel, the Repository Office, Chancery Lane
VIDEO CONTENT Berkhamstead Station, in 1839, on the London and Birmingham Railway line
TAP HERE to listen to Jan Davison, publisher of Family Tree, recall precious memories of her father’s time as a steam locomotive driver
and the Family Records Centre. He has worked in public-facing sections for the past 15 years, and has an interest in records concerning crime, prisoners, and transportation to Australia. He is currently the Transport Records Specialist in the Advice and Records Knowledge department at The National Archives, mainly dealing with enquiries regarding railway companies Phil Parker is an avid railway modeller, keen to research every last historical detail to get it right. He regularly writes for BRM. Find out more about Phil’s models at philsworkbench.blogspot.co.uk
p24-31 Railway feature FINALish.indd 31
March 2017
Family FamilyTree 31
06/02/2017 15:46
A BRITISH ICON
THE GIRL
NEXT DOOR Dame Vera Lynn will be 100 years old in March 2017. John Leete looks back at the remarkable life of a British icon forever known as The Forces’ Sweetheart who brought joy to millions of our family members during the dark days of war
A
t just eight years of age, Joan was no different from the millions of other youngsters totally unprepared for the change that was about to take place. A quiet little girl, Joan was in the second wave of Operation Pied Piper. The mass evacuation of young children began on the eve of war and continued after the start of hostilities when the authorities decided that mums and children of a certain, slightly older age, should also be sent to places of safety. ‘Actually my mother refused to be evacuated and because my brother was too young to leave my mother at that time, I was sent away on my own,’ said Joan. ‘There were screams and
Vera Lynn famously sang The White Cliffs of Dover (1942)
32
Family FamilyTree March 2017
p32-35 Vera Lynn FINALish.indd 32
06/02/2017 10:51
Images: Vera Lynn singing at munitions factory © IWM (P 551); Vera Lynn with factory workers © IWM (P 553); War and Peace Show photo © Nicki, reused under the Creative Commons Attribution-Share Alike 2.0 Generic licence; John and Maxine Leete with Dame Vera © John Leete; White cliffs of Dover & rose Pixabay
Dame Vera Lynn at 100
Morale boost: Vera singing during a lunchtime concert at a munitions factory in 1941 (left) and meeting workers afterwards (above)
struggles from those children who defi nitely did not want to go. The journey itself, well I remember little about that, although I do remember being very tearful and being comforted by one of the teachers travelling with us.’ John Dickson was one of the
children who stayed behind in London. Now aged 80, he still remembers vividly much of what happened during the war years on the Home Front. ‘As a child you knew no different. You grew up believing that this is what life was about. Ruined buildings, damage
all around you, friends you knew at school suddenly not being around anymore. I was fairly lucky for much of the war although I was injured towards the end of the war by debris from a flying bomb.’ Apart from both experiencing the war years, albeit from a different
TAP HERE
TO ENJOY A GALL ERY OF NOSTALGIC PHOTOS OF THE FORCES’ SWEETHEART
‘She was down to earth, she understood what life was like for all of us on the Home Front and she sung the songs that we could all associate with’
p32-35 Vera Lynn FINALish.indd 33
March 2017
Family FamilyTree 33
06/02/2017 15:51
A BRITISH ICON Rise to fame
Vera Lynn at the War and Peace Show in July 2009
perspective, Joan Coup and John Dickson have something else in common, their clear memories of having their spirits lifted by ‘a song on the wireless’, as Joan recalls: ‘Simple things lifted our spirits as children and helped us deal with all the sadness we were aware of around us. The adults too often spoke of their favourite song or programme or a radio personality that helped to take their mind off things.’ The popular song was The White Cliffs of Dover and the personality was a young Vera Lynn. ‘My one lasting memory of her from those times was her singing The White Cliffs of Dover. Why that song I do not know but it has stayed with me all my life and hearing her sing it, even today, gives me tingles up my spine,’ said John.
Vera Margaret Welch was born in East Ham, London in March 1917. Within a few years, at the tender age of just seven, she began singing in public at Working Men’s Club’s in the East End. A weekend concert and a cabaret would earn Vera 7/6 (seven shillings and sixpence in old money, or £20 today), boosted by extra payment for encores. She adopted her maternal grandmother’s name Lynn for her work on stage and at age 17 she made her first radio broadcast. This was with the famous Joe Loss Orchestra although she was already being featured on records released by both Joe Loss and Charlie Kunz. At the beginning of September 1939, Vera was sitting in the garden of the new home in Barking she had bought with her dressmaker mum Annie and dad Bertram, who did all sorts of odd jobs. The family was drinking tea when they heard the news on the radio that Britain had declared war. By her own admission Vera recalls her first thought was a selfi sh one, ‘Oh dear, there goes my singing career,’ but of course no one knew how essential entertainment would become in the following
months and years. Vera was on tour with the Ambrose band at the time and it was during this she first sang We’ll Meet Again, which was to become a great favourite with audiences. At the outbreak of war and for some months thereafter, there were changes to national radio broadcasting that resulted in very little and very limited programming, much of which was regarded as very dull. The new Home Service offered classical music programmes and not much else and the Forces Programme only started with a few hours of evening broadcast in January 1940, however, fortunately within a month this extended to a 12-hour-a-day service. Vera went solo in that year, buoyed by a newspaper article which said she was outselling the record sales of Bing Crosby and the Mills Brothers. ‘My voice, the kind of songs I liked to sing and the mood of the time were beginning to work together,’ she recalled.
‘One of us’ Vera would eventually be ‘called up’ by ENSA (Entertainment National Services Association) but for the time being she toured the country. She
Find out more Listen online to the BBC Radio Two series Keep Calm and Carry On: The Vera Lynn Story with Katherine Jenkins and contributions from Vera Lynn herself, plus performers and fans, in the IWM Sound Archive
34
Family FamilyTree March 2017
p32-35 Vera Lynn FINALish.indd 34
Watch a 1944 film of Vera Lynn visiting units of the 14th Army at Sylhet, near the Indian-Burmese border, on the IWM website via
06/02/2017 12:46
Dame Vera Lynn at 100
DID YOU KNOW?
Dame Vera is releasing a new album on 17 March, three da ys before her 100th birthday. Ve ra Lynn 100 features re-orchestrate d versions of her most famous son gs alongside her original vocals, an d featuring other British artists. Sh e is the first singer to release an album as a centenarian Author John Leete and his wife Maxine meeting the acclaimed Dame Vera Lynn
was from the outset regarded as ‘one of us’ as Margaret Ward, a former ATS (Auxiliary Territorial Service) driver, said. ‘Yes she was a star but she wasn’t at all remote or reserved like some stars were. She was down to earth, she understood what life was like for all of us on the Home Front and she sung the songs that we could all associate with.’ Vera had in fact already been referred to as everyone’s best friend, and years later she said, ‘I was never a glamour girl. I was the girl next door’. Among many other accolades Vera became known as ‘The Forces Sweetheart’ after a Daily Express poll named her as British servicemen’s favourite musical performer. In 1941, she began Sincerely Yours, her own BBC radio show that was broadcast at home and abroad via shortwave radio. Most notably, of course, Vera Lynn performed songs that reminded the troops of home including the songs that she will forever be associated with, We’ll Meet Again and The White Cliffs of Dover among many others.
The Forgotten Army With ENSA she travelled to Egypt and India and also to Burma, where the ‘Forgotten Army’ was fighting so far away from home. ‘When she arrived in a Jeep a great cheer went up from
p32-35 Vera Lynn FINALish.indd 35
the lads. She was already perspiring in the clinging heat but no matter she sang to us and every time we asked for an encore she sang again. This went on for a long time until Vera was drenched in sweat and exhausted. She was a real trooper and lifted the spirits of everyone. Soldiers don’t cry but there were many moist eyes that day,’ wrote an unknown diarist. Vera herself recalls one of the boys saying, ‘Home can’t be that far away because you’re here’.
A salute to Vera Lynn It would be so easy to run away with words like ‘iconic’ ‘legendary’ and ‘one of a kind’ when describing Vera Lynn. The truth is, and having met her on more than one occasion, I can testify that the only fitting words to describe her are honest, humble, decent and giving. To this day, with her wartime and post-war career still being recognised and acclaimed the world over, she still finds it hard to grasp just how popular she has remained with audiences of all ages during the past decades. Her giving continues and is demonstrated by, for instance, her presidency of the Dame Vera Lynn Children’s Charity for children with cerebral palsy – Her awards are many and various and
reflect the significant contribution she has made. By way of example, she was awarded the Freedom of the City of London in the 1970s and has received the Order of St John and the Order of the British Empire. Her songs continue to resonate with the wider public and boost the morale of service personnel away from home. In 2017, her 100th birthday year, the celebration of this remarkable woman will include a unique show at the London Palladium, a theatre that she appeared at in the early days of her career. That career has bought considerable happiness to many millions of people across the world and it is right and fitting that we should salute Dame Vera Lynn.
About the author John fgfg Leete’s interest in Britain’s Home Front during WW2 evolved from his childhood fascination with artefacts – everything from tin helmets to gas mask bags. Some years later a chance meeting with a veteran resulted in John’s fi rst book. Since then he has continued to write extensively and given talks on this era of history. He is also involved in Living History.
March 2017
Family FamilyTree 35
06/02/2017 10:51
EXPERT INSIGHTS TO HELP YOUR RESEARCH
Make giant steps
researching Irish ancestors Combining expert background knowledge and extremely useful search tips, Steven Smyrl helps you get to grips with a core collection of Irish records you’re sure to need – the Irish BMD registers re you finding it hard to keep up with the rapid rate at which Irish genealogy records are becoming increasingly available online? It used to be that an article on an aspect of Irish genealogy would stand for years. Progress of any sort was slow, held up by such things as lack of funds, bureaucracy or staff shortages, so that little changed over decades. While it wouldn’t be right to say that all of these issues have been resolved, these days the speed of change in Irish genealogy has been breathtaking, however, as seen, for instance, by the developments in 2016 regarding the civil registration records in Ireland.
A
Putting the records in context
Getting the full picture
What led to the start of civil registration in Ireland?
Where there was once nothing online relating to Irish civil registration birth, death and marriage records, there are now numerous online locations to search the indexes. Within the past six months, even these sites have been gazumped – virtually overnight – by a State website providing access not only to the indexes to civil registration records, but to images of the actual register entries... and all for free. Yes, you read that right, at absolutely no cost to the researcher.
36
Family FamilyTree March 2017
p36-39 Irish FINALish.indd 36
To know how to get the most out of this new free resource, one needs to understand the peculiar history of civil registration in Ireland. Civil registration began in England as a natural progression from continual improvement in the keeping of parish registers by the Anglican authorities. This was not the case in Ireland. For a start, unlike in England, most of the population in Ireland did not belong to the Church of Ireland, which forms part of the Anglican Communion. Civil registration in Ireland came about entirely by accident and it began in piecemeal fashion.
From the 1830s there had been a growing frustration about the lack of a proper system in Ireland of recording life’s vital events, and not least in England where, when required, so many Irish were unable to produce written documentation verifying their age, parentage or legitimacy. Alongside this, there were a number of court cases in Ireland about the validity of marriages celebrated by Nonconformist clergy, and this
brought the issue to a head. The tipping point came with a case (Queen v Mills) which went all the way to the House of Lords and found that all marriages in Ireland celebrated by Protestant Nonconformist ministers were legally invalid. The upshot was a sharp, collective intake of breath as thousands realised that this ruling declared their marriages a ‘sham’ and their children ‘bastards’. Public uproar was quickly followed by the passing of the Marriages Confirmation (Ireland) Act 1842. It took another two years for the authorities to realise that simply retrospectively recognising marriages up to 1842 would not wash, and that provision to establish a registration framework would be required. This came with the Marriage (Ireland) Act 1844, and from 1 April 1845 all non-Catholic marriages were to be civilly recorded. It had been envisaged that the system would encompass all marriages in Ireland, but the Catholic authorities feared State interference in their internal affairs and resisted all attempts to involve them. Therefore the registration of all births, deaths and Roman Catholic marriages (ie the creation of a comprehensive and inclusive civil registration system) did not begin
03/02/2017 15:55
Searching Irish records like a pro until almost a generation later, on 1 January 1864. By this time, there had been full civil registration in England and Wales for 27 years and in Scotland for nine years.
What was the new system like? It is unfortunate that the records system chosen for Ireland followed that established in 1837 in England and Wales rather than that in Scotland in 1855, for the latter was far superior in that it included the names of parents in each of the three sets of records: births, deaths and marriages. The registration process in Ireland also mirrored that established in England and Wales. Registration of births, deaths and marriages was recorded locally in registers maintained by registrars. Each quarter these registrars would send a return (a copy) of all the registrations recorded within the preceding three months to the Registrar General, based in Dublin. These forms would be sorted and bound alphabetically according to 163 Superintendent Registrar’s Districts (SRD), thus starting with Abbeyleix SRD and ending with Youghal SRD. At the end of the year, an index would be compiled, noting all entries, sorted by surname, fi rst name, SRD, and noting the vital location reference: the volume and page where the individuals recorded could be found. Anyone conversant with the registration process prevailing in England and Wales will recognise that Ireland’s is a little different. One obtained copies of relevant records by searching the hardcopy annual indexes. When these indexes went online, first on FamilySearch and then later with Ancestry and Findmypast, the need to attend the General Register Office’s (GRO) Public Search Room in Dublin to search the indexes disappeared overnight. It became possible to search the indexes remotely and then to apply for a copy of a record through the post, then later by fax or over the phone. Later, in 2014, the GRO uploaded its own version of the civil record indexes on Irishgenealogy.ie The index had been completely re-keyed and, while it clearly contained many errors, proved more
p36-39 Irish FINALish.indd 37
Less is more! If you are stumped by a confusing surname or forename (that can be open to variant spellings), then experiment with leaving the tricky field blank – and then carefully look at all search results for likely matches. Here for instance we left the first name field blank
reliable than the earlier indexes mentioned above (though it can be worth searching all indexes if an ancestor remains elusive, as I’ll explain later in this article). However, after initial concerns about the inclusion of index entries up to 2013, the index was withdrawn and then later reinstated with historic closure periods in line with data protection advice. Then, in September 2016, the GRO went a step further and began uploading images of the civil registers, linking them to the index entries – and all for no cost. To quote former British Prime Minister, Harold Macmillan, we’d ‘never had it so good’! Not only is this online resource free, but it can be searched in a myriad of ways, with the tiniest detail.
What are the GRO records like to search? Read on for super-sleuth tips and examples to help you explore the Irish civil registration records in great detail.
Tip:
Experiment with search fields Say your ancestor was
Michael McGuinness born about 1886 in Balbriggan, Co Dublin, which falls into the SRD of Balrothery. However, after searching for him using these details you came up with just one Michael, with surname spelt ‘Maginniss’. Well, given that the surname McGuinness can be spelt in innumerable ways, you could re-run the search looking only for a Michael born in Balrothery SRD in 1886. Try this yourself; you’ll find that you get 14 results returned and one of these 14 Michaels is the one called ‘Maginniss’ and there is another bearing the surname ‘Magennes’. This can help ensure you’re not missing a lead. You can use this strategy in reverse too, where the surname is relatively straightforward, but the first name poses problems. For instance, say you seek Hanoria Fox born near Ennis, Co Clare, about 1868. By searching only on the surname and then sifting the results you will discover that her birth is registered under the spelling Honora Fox in 1870.
Tip:
Utilise the mother’s maiden name
From 1900 the mother’s maiden surname is included in the birth
March 2017
Family FamilyTree 37
03/02/2017 15:55
EXPERT INSIGHTS TO HELP YOUR RESEARCH increases the number of records to be checked. It used to cost €4 to obtain a copy of a record and costs could quickly mount up. Given that the register entries can now be accessed freely, this is no longer a concern. You can happily skip from one record to another, viewing the image itself to decide its relevance. However, it would greatly assist death searches if the database allowed as a search criterion the deceased’s year of birth, which would allow it to return fewer but more relevant results.
Tip: Spend time browsing
Note that although these records are General Register Office-owned records, they are the English equivalent of Local Register Office records. Note the top of the page where you can see that these records are just for a local area
index, which allows you to narrow down your search to many fewer results. But it also allows you to search only on the mother’s maiden surname. For instance, if you stick my own surname in as your only search criterion for the decade 1900-1909, the results return three entries under the fathers’ surnames of McKee, Logan and Morgan. Heretofore, I could never have found these entries without having first known the fathers’ surnames to search under.
Tip:
majority share only a handful of surnames means that identifying relevant records from the indexes is incredibly difficult. This is particularly so with deaths where, beyond the deceased’s name, the only other identifying personal information noted in the index is the age at death and, as this is so often years out, it significantly
If you cannot find a birth, marriage or death entry through the index it is also possible to get in behind it and just leaf through the pages of register entries. Say you want to see all death entries recorded in the Sligo SRD for the year 1899. All you have to do is to search on those criteria (SRD and year). When you get the search results, just click on any entry to see the image of the page. After that, all you need to do is to manually increase or decrease the last few numbers of the URL to move from one page of the register to the next. However, remembering that the records are maintained on a quarterly
Use the details you do have deftly
The index can be helpful too where you don’t know the maiden surname of a married woman but want to identify her marriage without knowing her husband’s first name. So, say you know that a married woman named Angela Mahony was living in Cork city and must have been married in or around 1905. You can search for her without a maiden surname, but with her known married surname of Mahony: Angela [blank] married [blank] Mahony. Run this search yourself and it will reveal that Angela Barry married Denis Mahony in Cork city in August 1905.
widely & thoroughly (now that Tip: Search the indexes are free) The fact that in Ireland there are localities and areas where the vast
38
Family FamilyTree March 2017
p36-39 Irish FINALish.indd 38
Once you have clicked through to look at an original page, then you can effectively browse through the registers, simply by tweaking the number at the end of the PDF file name in your browser bar. This is not foolproof, but it is a handy addition to your search capabilities
03/02/2017 15:55
Searching Irish records like a pro
Taking it further • Remember to look closely at the details of the informant on death records as this information generally notes their address and relationship to the deceased. It might be the only way to prove that the record is the one you seek • Pay attention to the marital status on marriage records. An indication that one or more party was widowed will lead you to an earlier marriage record • Having found a possible death record, always check for a newspaper death notice as this may be the clincher to prove its relevance. There are many Irish newspaper titles at and • The Irish 1911 Census notes the number of years married for women, which can be a quick way to narrow down your search • At time of writing, only these images of the civil register pages have been uploaded: Deaths: Births: Marriages: Index 1864-1965 Index 1864-1915 Index 1845-1939 Images 1891-1965 Images 1864-1915 Images 1882-1939 • Does the civil marriage record indicate that your ancestors were Roman Catholic? Then always seek out the record in the parish register too as many from the later 19th century began to record additional details such as both parents’ names for the bride and groom • The key to understanding land divisions and locating places in Ireland is through the 1901 edition of the ‘Index of Townlands’. This can be found online, fully searchable, at • Even though you’ve found relevant birth records, always check too for baptism records because in Ireland a high number of registrations have slipped through the net. The best site for both Catholic and Protestant church records is rootsireland.ie – while images of Catholic parish registers up to 1880 are free at
Check some of the records returned to see if you can locate another one with exactly the same reference details of volume 6, page 199. In this instance, I found a record for a James Baird matching exactly. Then return to Irishgenealogy.ie and search for James Baird in Ballymoney in 1868. When you find him, click on the link to see the image and then you can scroll down the register page and find there the ‘missing’ record for John Carmichael. This procedure can be repeated for any record being sought which appears to be missing from Irishgenealogy.ie – but which can be identified on Ancestry. My friend and professional colleague, John Grenham, used to say that the system for identifying records at the General Register Office was rather like trying to use a fishing hook through a keyhole in the dark and paying through the nose for the privilege. It seems now that the powersthat-be have unlocked and flung open the door and turned on the strongest light possible. Fill your nets!
About the author Steven Smyrl is immediate president of Accredited Genealogists Ireland, chairman of the Irish Genealogical Research Society and
basis, you will need to fish around a bit in the search results to hit on each of the relevant four quarters of the year.
Tip:
Look across several sites & indexes
Given that there are errors and omissions in the new database, the index-only databases found at FamilySearch, Ancestry and Findmypast are still extremely relevant resources. Just because you cannot find an entry in the index at Irishgenealogy.ie doesn’t mean that the image of the record you seek is missing too. Here’s an example of how to use the indexes at Ancestry to help find an image on Irishgenealogy.ie Say you want to see the image of the birth registration of John Carmichael born in Mullans, Co Antrim in 1868, which falls into the SRD of Ballymoney, but there is no index entry for his record at Irishgenealogy.ie If you search on Ancestry you will find the details for John from the original
hardcopy index: John Carmichael, Ballymoney, volume 6, page 199. Next, refine your search on Ancestry to just the search criteria of Ballymoney and 1868.
of Irish Genealogical Organisations. With his brother Kit, he is a director of the Irish probate research firm Massey & King
MC Research
Ancestors from IRELAND? Contact: M. P. McConnon M.A. MC Research (Genealogy & History) Seabank Road, Castlebellingham, Dundalk, Co. Louth, Ireland, A91 XY32. Email: mcres@iol.ie Phone/Fax: +353-(0)42-9372046 Website:
Established since 1989
March 2017 MC Research_1/8-advert.indd 1
p36-39 Irish FINALish.indd 39
executive liaison officer for the Council
Family FamilyTree 39 07/03/2016 11:36
03/02/2017 15:55
TAKE A TRIP DOWN MEMORY LANE ONLINE
LIFE IN BRITAIN post-war to the 1980s
It’s tempting to think that family history refers to ‘way back when’. But if you neglect more recent decades you’ll miss out on an entertaining trip down memory lane, and might be leaving a big gap in your records for future generations. Tony Bandy has 15 excellent website resources for you to use and enjoy
T
he 1700s, 1800s and the early parts of the 1900s... to many of us, this is the era of family history. However, should this always be the case? What about the modern era, post-World War II and
40
Family FamilyTree March 2017
p40-44 Tony Bandy FINALish.indd 40
beyond? You might be surprised, but these relatively modern times can be a virtual cornucopia of family treasures, information and more. One method of enhancing these family stories is by learning more
about the culture of the decades, of what was important, of what daily life was like in Britain and the UK. Let’s jump in and profile some web-based resources to help get your cultural knowledge up to speed!
02/02/2017 10:39
Researching & remembering recent times
Setting the stage: focus on culture What was it actually like to live during the 1950s in the UK? Were the 1960s as explosive as our history books would lead us to believe? In many cases, the truth might surprise you! Given these thoughts, listed below are the topics that my list of web-based resources will be focused on. • Fashion and food • Domestic life (cooking, cleaning, leisure time, etc) • Politics and architecture • Television, advertising, books and magazines. As you visit and use the resources to refresh your own memories, or those of your family, take some notes, and see what can you add to your list of plain facts and dates.
The material things and details of the minutiae of life are just as vital to your descendants, as the digital and archival items that you have found in your own research are to your understanding of your ancestors’ lives
Journaling our own lives? Once you’ve explored the websites on the pages to follow, stop for a moment and think about your future family members. Have you enjoyed reading about the recent history of the British Isles through the lenses of popular culture, food, television, politics and fashion. Has it been enlightening? If so, consider, what might your descendants wish to learn about your times? What would they say about the present day? What have you done to record or remember your life and popular culture in these first few years of the 21st century? For many of us, it’s easy to dismiss, after all, what’s so great about working, raising our kids, and just trying to live day-to-day in our rapidly changing world? Yet, these are the same thoughts and feelings that our ancestors carried with them as well. The fact is that information about our daily lives today will be just as important to our descendants, as those of our ancestors are to us. So it’s vitally important to start archiving things now.
A project to consider Probably this week you’ve worked many hours (how did you get to work?), made trips to supermarkets (what did you buy?), watched any of the numerous TV channels (what did you watch?) and are desperately trying to finish that book that’s been languishing by your bedside table for a few weeks. You’ve also spent (a lot of) time on Facebook and the internet generally. Perhaps you have even started a real paper letter to your brother, sister or mum and dad. Have you ever thought of writing these things down? Of saving bus and train ticket stubs, newspaper clippings (if you still receive a daily paper), or even just your child’s latest artwork? These material things and details of the minutiae of life are just as vital to your descendants, as the digital and archival items that you have found in your own research are to your understanding of your ancestors’ lives. In fact, perhaps they are even more valuable – as they represent a unique insight into your life, which your descendants interested in their ancestors’ lives will thank you for. Think about ways you can keep a small subset of these bits and bobs (without hoarding of course!) or perhaps start a paper or digital journal. Back up your photos from your phone to an extra drive or computer. Make multiple copies so that your ancestors can be sure to experience the richness of your life today!
Timelines: the decades While only a short time in all of history, the pace of change during the latter part of the 1940s up through the 1980s has been torrid. Beginning in the post-war years was a time of trying to get back to a more normal life – that of working and raising a family. However, it was a changed world for the British Isles. Rationing was still occurring, the Labour party had taken control of political life, and films such as the critically acclaimed Odd Man Out were sweeping up at the box office. Life wasn’t easy for the average Briton, but at least the war was over. Moving on to the 1950s, we see the emergence of consumer culture and the rock and roll pop icons of Elvis Presley, Lonnie Donegan, Cliff Richard and others. It was also a
p40-44 Tony Bandy FINALish.indd 41
March 2017
Family FamilyTree 41
02/02/2017 10:39
TAKE A TRIP DOWN MEMORY LANE ONLINE The sites: starting to search Here are resource links and lists to help you explore recent eras. Some sites are exclusive to a decade, but others are more wide-ranging. As you go through each one, think about how they can influence your family account, how can they add colour and context to the tale your record about your family
3
BBC News Magazine (Retrospect) magazine/6707451.stm
4
British Airways, History and Heritage about-ba/history-and-heritage
Flying, at least in the immediate past, was quite different than it is today. With this site, it’s a great way to illustrate information from your family’s past that involved any form of aerial travel.
1
The UK 1940s Radio Station
With a comprehensive list of streaming radio and hourly shows/announcers, tune in to this website to hear radio shows, popular music, history, facts and more about life in post-war Britain.
2
1940s Society
While primarily focused on the entire decade of the 1940s, this UK-based site has a wealth of information, including the immediate post-war years. Includes posters, articles and more.
42
Family FamilyTree March 2017
p40-44 Tony Bandy FINALish.indd 42
5
British Film Institute .org.uk
6
British Pathe
7
The British Record Shop Archive
This site and the related site both contain tremendous resources for media history of film and television. With an included library search engine, Reuben Library information, images, facts and more, stop here to fill out gaps in your family stories about what they may have watched.
Try this site for massive amounts of documentary films and movies on all aspects of British and UK culture. Browse by category or collection to home in on the item you need from among the 85,000 items in the archive.
Records, platters, and discs. With this online archive, search by geographical location for information and history about various record shop locations. (Personal recollections and information mostly.)
02/02/2017 10:39
Researching & remembering recent times
11
Historic England Heritage List
Buildings, lots and lands. Our ancestors lived somewhere and maybe you already have the address, but are you aware of the history of their location? With this site, drill down by map and find locality information, development and changes through time.
12
Historic England, View Finder nder.historicengland.org.uk
13
Historic UK
14
Retrowow
15
The Transport Archive
This subsite of the Historic England Heritage List is a great photo finder for locations mentioned in your family history. Includes both basic and advanced searching options.
8
The National Archives
Massive cultural collections by decades make this site one of the fi rst you should consider when thinking about how your ancestors lived and worked for all decades, 1940s through the 1980s.
9
Down the Lane, Growing Up and Living in the 1950s and 1960s
While you might think this site is only for the decades preceding the 20th century, do not overlook the extensive information listed for the 1950s and 1960s, particularly with regards to food, holidays and more.
A topical site, you can find overview information and details about all the decades we’ve spoken about so far. Not as indepth, but nevertheless a great place to start to learn more.
A personal blog/site, however, lots of good cultural references and information about growing up in these times.
10
Fifties Britain resources/fi fties-britain
While most genealogists know full well about the richness of resources available for use at The National Archives (TNA), this subsite in particular is a great place to learn more about the specifi c decade of the 1950s. With both primary and secondary resources, images, texts and more, try this fi rst for details and information.
Shipping, rail, aviation. Our ancestors used many ways to move around and this online archive can quickly fill in the blanks with maps, charts, multimedia resources and more. A must-see! There you have it! 15 excellent website resources that you can use to help fill out, update and improve your knowledge of your family history facts and dates in recent times
p40-44 Tony Bandy FINALish.indd 43
March 2017
Family FamilyTree 43
02/02/2017 10:39
TAKE A TRIP DOWN MEMORY LANE ONLINE time of revitalisation for Britain and the UK in general, tempered in part by ongoing political and economic changes as its place in the world was transformed. Overall however, it was a time of prosperity and growth both domestically and politically. ‘Conforming’ was the norm as the violent decade of the 1940s faded away into history. Looking at the 1960s, this was an age of youth, rebellion and experimentation – a vast rejection of the conformist 1950s and earlier decades. This change was reflected in the culture and media – and the rise of Beatlemania, the Rolling Stones, and the ‘Swinging’ city of London. Changing social mores, the women’s liberation movement, immigration and the reality of racism turned the staid UK culture upside down. The 1970s and 1980s were curious decades as well, from the economic issues of the early 1970s to the election of Prime Minister Thatcher in 1979. The Falklands Crisis highlighted the new decade of the 1980s and the broad popularity of Live Aid plus the
44
Family FamilyTree March 2017
p40-44 Tony Bandy FINALish.indd 44
Memories
Find tips for recording your stories at http:// familytr.ee/ Spokenmemories
The pace of change during the latter part of the 1940s up through the 1980s has been torrid... Looking at the 1960s, this was an age of youth, rebellion and experimentation – a vast rejection of the conformist 1950s and earlier decades ... turning the staid UK culture upside down emergence of Timothy Berners-Lee, internet guru and pioneer, portended the decades of the 1990s and beyond that were to come.
Wrapping it up! There is so much to discover when learning about history – and piecing together how our ancestors lived, worked and interacted with the culture of their times is quite often eye-opening for many of us today, even in the accessible digital world of images, Facebook and more. Finding out how culture, politics and media changed our families’ lives can influence how we report the base facts about them and their times – it
gives us context for their lives. Remember this as you look back over the last 70 years and think also about your time, now, and how what you save, what you write, and what media you have will influence family members to come.
About the author Freelance writer and librarian Tony Bandy loves using the latest technology for fgfgresearching his family’s genealogy as well as discovering other popular and forgotten topics in history on his blog, Adventures In History, at history.writingwithtony.com
02/02/2017 10:39
British, Irish & European Researchers! British, Irish & European Researchers! Genealogical.com is your source for: • The best books on U.S. & Canadian Genealogical.com is your source for: genealogy The besttitles books U.S. & Canadian • Trusted foronEnglish, British, Irish Scottish, & genealogy Irish, French, and German ancestors
European Researchers!
for Peter English, Scottish, • Trusted Leading titles authors Wilson Coldham, Irish, French, and German ancestors David Dobson, Brian Mitchell, Genealogical.com and many more is your source for: • Leading authors Peter Wilson Coldham, David Dobson, Mitchell, • The best booksBrian on U.S. & Canadian and many more genealogy BROWSE our entire collection at • Trusted titles for English, Scottish, Irish, French, and German ancestors ORDERour from amazon.co.uk BROWSE entire collection at • Leading authors Peter Coldham, and otherWilson fine booksellers David Dobson, Brian Mitchell, and many more ORDER from amazon.co.uk and other fine booksellers
Announcing a new online course
Pharos 1/4.indd 1
BROWSE our entire collection at Beginner to Postgraduate level
15/08/2016 16:38
AWAKEN YOUR ANCESTORS launching at
online education in genealogy
ORDER from amazon.co.uk and other fine booksellers
Who Do You Think You Are? Live at the NEC
World leading courses in Genealogical, Palaeographic and Heraldic Studies.
Visit us on stand 71 or to find out more
IHGS - The School of Family History
Our programme covers sources from across the world with an emphasis on research within the British Isles.
01227 768664
Join our PG Certificate modules starting in April 2017. A flexible online format is offered allowing study from
Peter Nichols Cabinet Makers
IHGS - Family Tree March advert.indd 1
03/02/2017 09:41
Coin, Medal, Collector’s & Family Heirloom Cabinets
A range of mahogany cabinets handmade in England. Bespoke cabinets & boxes designed & made to order. Supplier to leading museums and collectors. Email: orders@coincabinets.com
Peter Nichols cabinet.indd 1
For a full descriptive leaflet please contact:
The Workshop +44 (0) 115 922 4149
Peter Nicholls 1/8th.indd 1
p045 Ads.indd 45
anywhere at anytime. No exams, progress based on continuous assessment. A previous degree is not required for entry. Developed by professional genealogists with constructive feedback provided by assigned tutors at all levels. Gain a postgraduate Certificate, Diploma or MSc in Genealogical, Palaeographic and Heraldic Studies. Or take an 8-week beginner to intermediate level course; online classes begin April 2017.
For more information, see: 24/11/2016 11:30 or call 0141 548 2116. The University of Strathclyde is a charitable body, registered in Scotland, number SC015263
03/02/2017 14:50
March 2017
Family FamilyTree 45
03/02/2017 16:23
GUIDE TO ONLINE RESOURCES generations family tree family history cousin breeding
ancestry
ancestors
blood
blood line
brother
pedigree
grandmother
descent kindred
birth children
background
niece
heritage
DNA
genus
parentage
genealogy genealogical society
family
father
grandfather
forebears origin
siblings
offspring
relatives
sister
online
relationship
lineage progeny
connections legacy
descendants kin mother in-laws generation
people
progenitors
relations
sites Top 5 w(forebbeginners!) Get your online research off to a great start with Rachel Bellerby’s guide to the best websites for anyone new to family history
W
hile the variety of family history websites out there is a real bonus for anyone tracing their family tree, the number of options can be overwhelming, particularly if you’re new to the subject. So, where to start? Our pick of fi ve top websites for beginners will give you a fl avour of what’s available on the web and hopefully point you in the right direction for taking your research further.
1
Ancestry, Findmypast & TheGenealogist
The ‘big three’ family history subscription websites offer access to thousands of records (with pay-asyou-go options), from the basics such as birth, marriage and death records, through to more specialised data including occupational records,
46
Family FamilyTree March 2017
p46-47 Newbie RB FINALish.indd 46
emigration records and parish registers. Each of these three sites offers a free-of-charge trial period and you can often access one or more of them for free at libraries and record offices. Each of these sites is strong on different types of records, so take advantage of the trial period to decide which is best for you. ndmypast.co.uk
2
Cyndi’s List
The UK and Ireland version of Cyndi’s List is part of a US-based website run by genealogy enthusiast Cyndi Howells. While at first glance the site may seem overwhelming, the thousands of links to UK family tree websites are divided into regional categories, with the option to search alphabetically.
Within these categories are dozens of sections such as religion, occupations, obituaries and societies and groups. Although many of these links are extremely helpful, do be aware of one of the key rules for genealogists – double check any information you take from the internet before adding it to your family tree.
3
FamilySearch
FamilySearch is one of the world’s biggest family history websites, with millions of names compiled by and for the International Genealogical Index of the Church of Jesus Christ of Latter-day Saints. The site is the online home of the world’s largest genealogy organisation, with over 4,000 family history
02/02/2017 10:51
Getting started centres around the world. Thousands of records are currently being digitised and added to the site and one-to-one online help is also available. Articles, research guidance and online classes make this a great starting point for beginners.
4
FreeBMD
Enjoy free-of-charge access to birth, marriage and death records from around the country, with new records being added by volunteers on an ongoing basis. The project covers records for England and Wales, and has the sister sites FreeCen (census data) and FreeReg (parish registers). The BMD site allows you to search records from the General Register Office (GRO) from 1837 to 1983, and although the database doesn’t yet cover all areas and years within this time frame, a search option allows you to check coverage for the period or area you’d like to search. Once you’ve found the birth, marriage or death record you’re looking for, simply note its reference number and you can send for the original certificate from the GRO.
5
The National Archives’ Discovery
Discover which archives are held where in the UK, with The National Archives’ Discovery site, which offers a gateway to 1,000 years of documents held in 2,500 archives around the UK. There are descriptions of more than 32 million records; nine million of which can be accessed online. Categories include wills and probate, military, immigration and emigration, census, health and court records. You can search by archive name, keyword or category.
3 sites to take your research further 1. British Newspaper Archive Over 200 years of history as told through the pages of more than 700 UK newspapers. The project is a partnership between the British Library and Findmypast, and aims to digitise 40 million newspaper pages within 10 years.
HANDY TIP Search by keyword, name, location or date and read about everything from major national events to the minutiae of village life 2. Family Tree magazine Explore the many different aspects of the hobby, with how-to guides aimed at beginners and more experienced researchers, printable charts, plus all the latest news and opinion from the family history community.
Visit for lots of free resources, such as beginner guides
Tracing Scottish, Welsh or Irish ancestors? • Scottish ScotlandsPeople is the official Scottish Government website for tracing Scottish ancestors, offering access to BMD records, wills and testaments and Catholic parish registers. Take your research even further on the National Records of Scotland website
3. Old Maps A historical map archive that holds maps for England, Scotland and Wales, allowing you to see how an area has changed over the years, plus the various uses of land over the centuries.
Read up on it The Family History Web Directory by Jonathan Scott (Pen and Sword, 2015)
• Irish Explore Eire ancestors at The National Archives of Ireland website You can find Northern Ireland ancestors at Public Record Office of Northern Ireland • Welsh Start your Welsh family history research at the National Library of Wales website Turn to pages 48-51 for Mary Evans’s expert article on tracing Welsh ancestry and pages 36-39 for Steven Smyrl’s guide to free records for tracing Irish ancestors
p46-47 Newbie RB FINALish.indd 47
January March 2017
Family FamilyTree 47
06/02/2017 13:41
METAL TOP TIPS WORKERS & WEBSITES FOR GREAT RESEARCH
Welcome to your ancestors in Wales Whether it’s clues from the census or your surname, perhaps you have discovered you have ancestors who hailed from Wales. Learn more about your family from this historic, hard-working, lyrical land, with Mary Evans’s selection of tips and websites
ARCHIVE IP RCngHes hTave R E SE A ry cha
B ounda unties ed the co reorganis ou will y o s , ears over the y fully to heck care need to c cal lo e th see where o are n w! archives
al tion bsite a N e s we t th Visi m Wa le seum. s eu mu Mus ttps:// useum m h es/ wa l
Surname research tip The proliferation of the common surnames such as Jones, Davies and Evans can be a little daunting for more recent research and care needs to be taken to link these to first names and within family groups
48
Family FamilyTree March 2017
p48-51 Welsh websites FINALish.indd 48
02/02/2017 10:45
Explore your Welsh ancestry online
F
irst I’ll run through the key family history records for finding ancestors in Wales – and, if you’ve already done some family history research, you’ll find the steps and sources familiar. Statutory registration of births, marriages and deaths began in 1837 and before that you’ll be looking at parish registers for baptisms, marriages and burials. Censuses are available from 1841 to 1911, and from 1891 onwards, there is an extra column on the census returns asking for ‘Language spoken’: English, Welsh or both. One difference, of course, is that you will at times encounter the Welsh language. Although most church and legal records were kept in English, other less formal documents might be in Welsh.
Surnames & naming patterns Surnames in Welsh research can be challenging. Although the patronymic naming system in England had generally changed to inherited surnames by the 14th century, this change happened much later in Wales, especially in the more rural areas. Indeed, Sheila Rowlands, in her chapter on Welsh surnames in Welsh Family History – A Guide to Research, points out that there is still evidence of this naming system in parts of Caernarfonshire as late as the 1841 and 1851 Censuses.
The visit of King Charles I to Wrexham, a staunchly Royalist town during the Civil War
Coalmining Coal mining was a major industry, especially in the south, and the Welsh Coal Mines website at has area-by-area links to individual coal mines under the ‘Collieries’ tab, with details of each one from when it was originally sunk to its closure. Also within the left-hand strip is the link to the ‘List of Disasters’. Under this you’ll find a list of mining accidents with, in most cases, details of the fatalities by name
Where to begin? A good website for starting your research would be This is a worldwide resource, mainly of baptisms and marriages from parish registers but increasingly including census entries, and Wales is well represented on it.
D i sco ver m i n Wa useu ms les at http:/ / wale museums s/mu seu m . s
ives r ch t a e lor sa Exp n Wa le hives. i a rc s:// les p t t h wa
Slate quarrying In north and mid Wales the slate industry was important and The Slate Industry of North and Mid Wales at has a wide range of articles and many pages of photos that give an insight into the working lives of the men who laboured in the quarries. I lived as a child in a village not far from the huge Dinorwic quarry and can remember the clatter of the men in their clogs as they gathered at the nearby bus stop each morning Narberth, Pembrokeshire as it looked in the early 1800s; it is still a charming town today
p48-51 Welsh websites FINALish.indd 49
March 2017
Family FamilyTree 49
06/02/2017 12:18
METAL TOP TIPS WORKERS & WEBSITES FOR GREAT RESEARCH he ut t m i ly o b rn a f Fa Lea ation o ieties . o ci Soc Ass istor y at w w wk H a les .u org of W wales. f hs
• On the FamilySearch home page click on ‘Search’ then, when the map appears top right, click on the British Isles and choose Wales from the dropdown list • Hover your cursor over ‘Wales’ and it tells you there are 44 collections with more than 50 million indexed records and nearly 30 million record images 1500-2013 • You can select birth/marriage/ residence/death/any for the life
Money-saving credits tip On free site FamilySearch you can find many Welsh parish records. To view images of original records visit Findmypast. Here you can opt for pay-as-yougo credits rather than a subscription. If so, use FamilySearch to identify the entry you’re looking for as this will save you credits on Findmypast. Note, all the censuses are also on Findmypast so you might want to consider a subscription
event and spouse/parents/other person for relationship along with place and date. It is a great resource and, best of all, it’s free! A search can bring up pages and pages of results so it’s often best to restrict the number of hits by careful selection of your search criteria. .
Finding digital images of records
While FamilySearch is a great free
resource it is, on the whole, an index and therefore a finding aid. There are images for some entries but by no means all. Good genealogical practice is to look at originals and with this in mind you might like to consider Findmypast at ndmypast.co.uk This is a pay-per-view/subscription site but it has digitised images of baptisms, marriages and burials for all the Welsh counties. • Coverage varies with some counties offering mid-1700s to early-1900s and others mid-1500s to mid-1900s • Click on the ‘Search’ tab on the Findmypast home page, select ‘Search A-Z of record sets’ and type the county in the search box • You can search the whole county or opt to search the parish by typing the parish name into ‘Browse place’ and clicking on it when it comes up. Note, though, that Breconshire and Radnorshire are under Powys not the separate county names • Bear in mind that coverage might not be the same for all parishes within a county.
Free online library resources
A scene of the Glamorgan coastal town of Llantwit Major
Mariners Before the coming of the railways, journeys were often made by sea and, with many Welsh counties having a coastal border, this was often a source of employment. A great resource for Welsh family historians to investigate is the Welsh Mariners website at There are currently more than 23,500 entries of Welsh Merchant Mariners searchable in the database of masters, mates and engineers. There is also a Royal Navy database of 3,000 men active in the Royal Navy from 1795 to 1815, including Welshmen at the Battle of Trafalgar
50
Family FamilyTree March 2017
p48-51 Welsh websites FINALish.indd 50
One must-visit website is that of The National Library of Wales at Quite apart from its huge searchable catalogue – the library holds many of the parish registers of Wales – there are sections within it which are of huge relevance to family historians. • Foremost of these is perhaps the section where you can both search for and download Welsh wills prior to 1858: see
he out t f b a n o Lea r ociation r y o s t s s i A es at i ly H Fa m s of Wa l rg.uk s.o etie Soci f hswale . www
02/02/2017 10:45
Explore your Welsh ancestry online t os pho s i n w e Vi u rche w. h of c s at w wches. r e l Wa sofchu .htm s o t e pho m/wal o c
Using tithe maps
Images: drawings from the British Library flickr collection
Agricultural background can be found in the tithe maps, produced 1830-1850, which indicated parcels of land and buildings with each assigned a number. Each map was accompanied by a schedule which listed each map item by number and against each number was given the owner, occupier and a description of the land, including individual fields. At. org.uk/en there is an ongoing project to digitise and make freely available both maps and schedules: • Click on the Tithe Maps tab, choose your county and scroll down to your chosen place • Click on ‘Transcribe’ and you will see the map above and the schedule below. Enlarge the map until you can see the numbers • Click on the number/place name in which you are interested and a transcript of the details will appear in a box at the side while the relevant part of the schedule will appear below. Although not yet complete, it is well worth a look and is already a major resource
discover/nlw-resources/wills • Also of great value is Welsh Newspapers Online at with more than 15 million articles from 1.1 million pages • If you’re looking for a little background on your ancestors’ lives then try the Blue Books of 1847 at – these books were the result of an inquiry into the state of education in Wales and, as the website states, ‘. There
p48-51 Welsh websites FINALish.indd 51
Eagle Tower, Caernarfon Castle
are pages of statistics but also detailed comments on individual schools • Also searchable is the Crime and Punishment database, which comprises data about crimes, criminals and punishments included in the gaol files of the Court of Great Sessions in Wales from 1730 until its abolition in 1830; see. uk/sesiwn_fawr/index_s.htm
Research working lives Ancestors’ work can help to place their lives in context. See the details about coalmining, slate quarrying and mariners on these pages.
Picturing the past Around 1860 photographer Francis Frith embarked on a project to photograph every town and village in the United Kingdom and the archive is at Scroll down to the section for Wales, choose your county then use the alphabet to find your chosen place. There are photos for all those with a camera icon. For example, there are 71 photos of the small town of Caernarfon, though small villages will have far fewer and some none at all. Click on a photo to enlarge. The photos are free to view but are not downloadable. However, there is the option to buy both photos and maps.
Learn about the locality If you want to learn about the area in which your ancestors lived try A Topographical Dictionary of Wales, first published in 1849 by Samuel Lewis. This is available at.
ac.uk/topographical-dict/wales Choose the correct alphabetical name slot and find out what your ancestor’s town or village was like in the mid-1800s.
Teach yourself some Welsh And, finally, you might need a bit of help translating those monumental inscriptions. There is a useful list of words and phrases that you are likely to come across at. uk/miscellanea/gravestones.htm
hWels he h ch t gl is Sea r ish / En i ne l Eng elsh On l at r y .net W iona Dict iriadur w.ge ww
Read up on it • Tracing Your Welsh Ancestors by Beryl Evans (Pen and Sword Books, 2015) • Welsh Family History: A Guide to Research (2nd ed) edited by John Rowlands and Sheila Rowlands (Genealogical Publishing Company, 2009) • Second Stages in Researching Welsh Ancestry edited by John Rowlands and Sheila Rowlands (Genealogical Publishing Company, 2010) • The Surnames of Wales by John Rowlands and Sheila Rowlands (Gomer Press, 2014)
March 2017
Family FamilyTree 51
02/02/2017 10:45
Dea r Tom
Explore the serious, sublime and the ridiculous facets of family history in this genealogical miscellany. This issue Tom Wood considers large families, puzzling names, paupers and miscreant forebears
A
s many of you know, I’m always interested in receiving announcements from historical newspapers of missing people. Once more my thanks go to Joyce Billings, who came across this article several years ago in the Leicester and Nottingham Journal, which read: ‘20 Feb 1779, Escaped from Justice – John Smith, late of Leicester, labourer charged with stealing a quantity of coals, he is likewise suspected of stealing several articles, which were found in his dwelling house. He is between 50 and 60 years of age, long visaged, and pock-marked, sometimes wears a wig, and sometimes his own hair, which is short, is very thin and about 5 feet 10 inches. Whoever shall apprehend the said John Smith and bring him before the Right Worshipful, the Mayor of the Borough of Leicester, shall be paid all reasonable charges. The articles
52
Family FamilyTree March 2017
Dear Tom FINALish.indd 52
On the run found in his home are, one sack-bag, marked T Bruce, another marked TE, a carpenters hand-saw, and two tann’d calf-skins.’ If anyone else has ‘missing’ notices from old newspapers in a similar vein, please do send them in.
Drowning in petticoats... Like many family historians, I have built up a large collection of books. Indeed, a few weeks ago I chanced upon a long-forgotten book on the shelf called Ringing Church Bells to ward off Thunderstorms and other Curiosities from the original Notes and Queries. The original book (mine was a 2009 reprint) was first published in 1849 as Notes and Queries and quickly established itself ‘as a unique treasure-trove of lore and out-of-theway information on a wide range of subjects’. Needless to say, I was hooked on its 350 pages, including an item about large families of children, which I want to pass on for readers’ comments. Written ‘in the manuscript jottings of the engraver George Vertue’, it was headed ‘DAUGHTERS: Having nineteen’ and continued: ‘Died at Waldershare, Kent, on Nov 18,
1743, James Jobson, farmer, aged 112, who had seven wives, by whom he had thirty-eight children: nineteen sons and nineteen daughters. ‘Farmer Jobson was more fortunate than good Dr Robert Hoadly Ashe, who had nineteen daughters, but no son. Tom Dibdin has also left us the following reminiscence of this clergyman: I had the pleasure of sitting next to Dr Ashe at dinner, when he began a story with “As eleven of my daughters and I were crossing Piccadilly...” “Eleven of your daughters, Doctor?” I rather rudely interrupted. “Yes, sir,” rejoined the Doctor, “I have nineteen daughters all living; never had a son: and Mrs Ashe, myself, and nineteen female Ashe Plants sit down one-and-twenty to dinner every day. Sir, I am smothered with petticoats”. – Editor, 3 Aug 1861.’ So there we have it – two large families each with 19 daughters, and one family with 38 children from seven wives! Frankly, I think it takes some believing, but the more I read it, the more I believe it was genuine. We have touched on this topic before, but this seems as good a time as any to revisit it: does anyone have an ancestral family
02/02/2017 09:39
Genealogical miscellany with 19 successive children of the same gender? I also wonder if anyone has researched the Jobsons in Kent or knows more about Dr Ashe’s family and ancestry?
Converting to Catholicism Now to return to another interesting subject, about ancestors who were baptised twice. This time I am grateful to Barbara Krebs, one of our valued readers from Sippy Downs in Australia’s Queensland, who has not only one but three direct ancestors who were baptised twice in England because they later became Catholics. The first was John Snaith, one of Barbara’s 2x great-grandfathers, who lived from 1815-1881, and was baptised as a baby on 29 July 1815 into the Church of England at Stockton Parish Church. He married Margaret Grafton, a Catholic, in his local register office, on 13 December 1842 and was finally baptised as a Catholic himself in St Mary’s Roman Catholic Church in Stockton on 20 March 1869, when he was 54. Barbara’s second double christening was for John Beaumont, one of her great-grandfathers, who lived from 1850-1902, and was baptised into the Church of England at Selby Abbey on 23 August 1850. It seems he fell in love with Ellen Snaith (a daughter of the aforementioned John and Margaret Snaith) and the day before they wed, John had a Catholic baptism on 16 August 1875. Now that’s what I call leaving it late! The final conversion in Barbara’s tree was another of her greatgrandmothers. She was Jane Durkin (formerly Hall), who lived 1854-1930, and was baptised as a baby at St Hilda’s C of E Church in Middlesbrough. She married Francis Durkin on 31 December 1872 in St Mary’s Roman Catholic Chapel in Middlesbrough, but was not baptised as a Catholic, so Barbara discovered, until 11 November 1880. What an interesting trio. Other examples along these lines would be most welcome.
A tale of two married names Janet Pearson (née Plastow) has sent in a wonderful copy of her mother’s death certificate, which records her under two married names. Janet tells us how this happened: ‘She and my dad were
Dear Tom FINALish.indd 53
Janet’s mum’s death certificate
happily married for over 35 years, until his death at the comparatively early age of 62. Mum was still in her fifties. After a couple of years Mum met a man, while on holiday, and they later married. He made his home in her house, but he was not from the area, so did not mix much. Mum found that most of her friends still referred to her by her former surname. He later died and Mum decided to revert to her earlier surname of Plastow. She asked her bank manager if she needed to get it done by deed poll. He said no, she signed something and she was back to her surname of 35 years. ‘When I came to register her death, I explained to the Registrar, who issued the certificate in this way to cover all complications. Dad and Mum are now buried together. So when choosing the wording for their headstone, I decided on “Thomas Henry Plastow and his wife Kathleen Nellie”. This left out any complications about her surname, but I am not sure what went into the parish register.’ Janet’s mum’s death certificate was issued in 1987 and includes her maiden surname of Heale and both her married names. How kind of Janet to share this with us.
Called after Kitchener Now we turn to royal biographer and consultant Coryne Hall, who has written in about a contribution in
October FT from Elizabeth Redhead, who informed us her grandfather was given the middle name Kitchener back in 1916. He was named, of course, after Lord Kitchener of Khartoum, who was urged in May 1916 by Tsar Nicholas II of Russia to help reform the failing Russian military forces. Lord Kitchener set sail but, as the history books tell us, he was drowned on 5 June 1916 when his ship, HMS Hampshire, hit a German mine on route to Archangel in northern Russia. Coryne points out that his loss was greatly mourned and regretted and the Dowager Queen, Queen Alexandra (widow of British King Edward VII), started a fund throughout Britain and the Empire, which raised nearly three-quarters of a million pounds to erect a memorial. Had Lord Kitchener succeeded in reaching his destination, history might have been vastly different, says Coryne, as the Russian Revolution erupted in February 1917. Incidentally, she adds that Kitchener’s great-grand niece Emma Kitchener (a former lady-in-waiting to Princess Michael of Kent) is married to Julian Fellowes (Lord Fellowes), creator of the wonderful Gosford Park and Downton Abbey series. It’s really not surprising that the Christian name of Kitchener became popular as a name for baby boys during WW1 and even afterwards.
Memorial to a tragic family Sometimes researching for our ancestors is not always a pleasant task. As we all know, years ago many young children had brief lives, often because medical help did not exist for ordinary families centuries ago. In those days it could be a very cruel world for everyday folk striving to raise a family. I was reminded of this again when Mrs Elwyn Hunt, from Victoria in Australia, very kindly got in touch about a tragic family down under where nine sons and daughters died before their mother passed away aged only 55. Indeed, as Elwyn says, life on Australia’s Goldfields was very hard in the 19th century. However, a relation or someone who knew this family well remembered them with a splendid headstone in Castlemaine Cemetery in Victoria. The memorial
March 2017
Family FamilyTree 53
02/02/2017 09:39
Dea r Tom
starts with the wife and mother, but also mentions her husband, plus the nine children they brought into the world. The inscription reads: ‘In Loving Memory of MARY ANN, wife of EPHRAIM STUCHBRE, who died 10th December 1888, aged 55 years. Also the aforesaid EPHRAIM STUCHBRE, sied 30th Sep 1906, aged 83 years. Also their children, MARY ANN sied 13th Sep 1858, aged 7 months. EPHRIAM died 4th Dec 1860, aged 5 months. JOHN died 2nd Feb 1867, aged 17 years. JANE died 27th Sep 1869, aged 15 years. MARY ANN died 15th Oct 1869, aged 6 years. ELIZABETH died 31st Oct 1869, aged 19 years. JOHN W died 15th Nov 1869, aged 2 years. THOMAS H died 11th May 1870, aged 9 months. EPHRAIM died 13th April 1887, aged 22 years.’ The last word goes to Elwyn, who points out that 1869 must have been a particularly terrible year for the family, when four of the children died. How did they cope, Elwyn wonders? Sadly we will never know.
The Stuchbre family memorial in Castlemaine Cemetery, Victoria
54
Family FamilyTree March 2017
Dear Tom FINALish.indd 54
Is this a record? It appeared in the Dover Express of 1 March 1862, and arrived on my desk from Kathleen Hollingsbee, of Tilmanstone in Kent. It’s about an Ann Chatfield, described as ‘A Thorough Pest to Society’, who faced a charge of breaking a pane of glass in Week Street, Maidstone, and was apparently incarcerated ‘36 times previously in Maidstone gaol’. ‘The Mayor remarked, in committing her for 2 months, he was only sorry he could not get rid of her permanently... she was nothing but a pest to the bench, an expense to the county and a nuisance to everyone who had anything to do with her.’ Does Ann appear on anyone’s family tree, I wonder? Though having been in jail so many times, I imagine it wasn’t something she bragged about!
Were they really paupers? Back in the mid-1780s and early 1790s, were lots of your ancestors listed in church records as ‘paupers’? If so, perhaps you wonder why so many? Well an answer cropped up in letter I received recently from Glynis Gurney, who had ancestors around that time in Lincolnshire. Glynis assumed they were very poor, because some of the baptismal registers gave the fathers’ occupations as ‘pauper’. So she started researching and it wasn’t long before she came across the Stamp Duty Act of 1783, which levied a tax of three pence on all parish register entries of baptisms, marriages and burials. However, paupers were exempt from these charges. Now, in 1783 three pence would probably have kept a poor family in food for a day or so and, as Glynis says, to avoid paying the tax, many people falsely declared themselves to be paupers. Glynis adds that some members of the clergy were sympathetic to their parishioners and thus allowed them to claim to be paupers. From what I have read about this unpopular tax, I am also inclined to think that some of the clergy resented having to act as part-time tax collectors. Indeed, Glynis tells me that the numbers of ‘paupers’ increased alarmingly while the tax was operational! Fortunately,
it fi nally came to an end in 1794. I cannot help wondering if some parents declined to register their children’s baptisms, or perhaps didn’t even bother with a marriage ceremony, while the tax existed? Though it might have been tricky trying to avoid a burial! It’s a good thing for family historians this tax only lasted around 11 years.
All at sea over Agincourt Alas it’s nearly time to go this issue, but before so doing, I’ve a story about the solving of a mysterious middle name. On this occasion my thanks go to Jill Johnson, who has gone to a great deal of trouble to uncover more details about a seagoing relative. He was a gentleman called George Coster, the eldest brother of Jill’s paternal great-grandfather, Charles Coster. Born in Denmead, Hampshire in 1858, George emigrated to Australia in the mid1880s. He settled at Kempsey, in New South Wales, and in December 1886 married Eveline Stewart Schott. In 1891 the couple named their fourth child Cecil George Agincourt Coster. Why the name Agincourt, wondered Jill? She realised the answer when she downloaded George’s British Naval records from The National Archives’ Discovery catalogue website. From these she learned he had served aboard HMS Agincourt as a cook’s mate from 1877 to 1880. Jill tells me that, according to Wikipedia, during the RussoTurkish War of 1877-1878, HMS Agincourt was one of the British ships sent to Constantinople to forestall the Russian occupation of the Ottoman capital. Jill feels sure this is why George’s son had this unusual middle name, not to do with the original Battle of Agincourt at all! What a lovely tale, and a wonderful way to wind up things until next time.
About the author Tom fgfg Wood was a founder member of Lincolnshire Family History Society and was its first, award-winning, magazine editor. As well as contributing to Family Tree from its early days, Tom also edited the Federation of Family History Societies’ magazine and wrote An Introduction to British Civil Registration. A member of the SoG and Guild of OneName Studies, he is still researching the family names, Goldfinch and Shoebridge.
Images: Illustration © Ellie Keeble for Family Tree; certificate courtesy of Janet Pearson; memorial courtesy of Elwyn Hunt
Jailed... for the 37th time
02/02/2017 09:39
S&N Genealogy Supplies
Doxie Flip Scanner
Doxie Flip
£134.95
SCAN ANYTHING ANYWHERE!
Premium Bundle £219.95 Save £141.20
TreeView V2
Binders
Long Certificate Binder £14.95 A4 Family History Binder £14.95 A4 Landscape Binder £14.95 A3 Family History Binder £19.95 Springback Deluxe Binder £14.95 Springback Deluxe Window £15.95 Springback Deluxe Landscape £15.95 Springback Leather Effect £19.95 with slip case £29.95 Springback Leather Effect Window £16.95 Premium Window Springback Binder £19.95 with slip case £29.95 A4 Sleeves 10pk £5.95 100pk £49.95 Long Sleeves10pk £5.95100pk£49.95 A3 Sleeves 10pk £6.95 100pk £59.95 Insert Cards 10 pack from £3.50 Tabbed Dividers 5 pack from £2.50
Archival Products
Y
E A L O G TM
S& N
G
G
Y
S& N
E A L O G TM
Latest Software
TreeView V2 Premium Edition 4 Month Diamond Subscription to TheGenealogist, Cassell’s Gazetteer of Great Britain & Ireland 1893, Imperial Dictionary of Universal Biography, Landowner Records, £39.95 Quick Start Guide
TreeView V2 Upgrade £14.95 RootsMagic V7 UK Platinum Edition 3 Month Gold Subscription to TheGenealogist, 1898 Atlas, Change of Names 1760-1901, England Scotland & Ireland Landowners, £49.95 Quick Start Guide
RootsMagic V7 UK Upgrade £19.95 Family Historian 6 £39.95 Family Historian 6 Upgrade £23.95 Custodian 4 £28.45 Reunion 11 for Mac (Download) £69.95 Getting the Most out of RootsMagic Book £14.95 Getting the Most from Family Historian Book £14.95
Books
celiabookcover_Layout11 26/11/2015 26/11/2015 18:18 18:18 Page Page11 celiabookcover_Layout
researchmethodology methodologyand andhow howtotorecord recordwhat whatyou youfind find • •research keyVictorian Victorianrecords: records:birth, birth,marriage marriageand anddeath deathcertificates, certificates,and andcensuses censuses • •key parishand andnonconformist nonconformistregisters registers • •parish gravestonesand andmemorial memorialinscriptions inscriptions • •gravestones newspapersand andinquest inquestrecords records • •newspapers
Researching & & Locating Locating Researching Your Ancestors Ancestors Your CeliaHeritage Heritage Celia
Celia Celia Heritage Heritage
maps,tithe titheand andenclosure enclosurerecords records • •maps,
Researching Researching & & Locating Locating Your Your Ancestors Ancestors
Researching & & Locating Locating Researching Your Ancestors Ancestors Your Howshould shouldyou youapproach approachresearching researchingyour yourancestors? ancestors?InInthis thiswidewideHow ranging but but succinct succinct guidebook, guidebook, professional professional writer, writer, lecturer lecturer and and ranging genealogistCelia CeliaHeritage Heritageoffers offersexpert expertadvice adviceon onhow howtotoget getstarted startedusing using genealogist themain mainonline onlineand andoffline offlinerecords, records,and andthen thentake takeresearch researchfurther furtherusing usingaa the varietyofoflesser-known lesser-knownresources. resources.InInitityou youwill willfind findguidance guidanceon onsubjects subjects variety including: including:
willsand andprobate probate • •wills parishchest chestand andworkhouse workhouserecords records • •parish occupationalrecords, records,including includingthe thearmed armedforces forces • •occupational courtand andmanorial manorialrecords records • •court schoolregisters. registers. • •school
Discover Discover Your Your Ancestors Ancestors DYAregionalguide_Covers.pdf 1
TreeView V2 Premium Edition Whether you’re an experienced family historian or just starting out, you’ll find TreeView easy to use and an essential tool in your research. Record your family’s history and view details of your ancestors in a number of different and attractive ways. Create beautiful charts and detailed reports to present your family tree. Powerful Features - Easily add details of your ancestors by attaching facts, notes, images, addresses, sources and citations. Navigate your family tree in a variety of different ways including pedigree, descendants and full tree views. View your entire tree on screen, or zoom in on a single ancestor. Identify anomalies in your data with the built in problem finder. Instantly map out a person’s £39.95 life events at the click of a button.
Calligraphy Pens - Acid Free £2.95 Writing Pens - Acid Free from £1.95 Dual Action Glue Pen - Acid Free £2.50 Archival Quality Cotton Gloves £1.80 Dome Magnifier £11.75 Ruler Magnifier £8.45 Line Reader Magnifier £7.45 Pocket LED Magnifier £7.95 Document Repair Kit £8.95 Document Repair Tape £4.85 Medal Sleeves (10) £4.95 Brass Archival Paperclips (100) £3.25 Cotton Ribbon 10M £1.95 50M £8.45 Small Medal Keepsake Box £2.95 Tissue Paper Acid Free 25 sheets £2.45 Heirloom Clamshell Boxes from £9.95
02/12/2015 16:21:31
Ancestors in the Attic £9.95 Criminal Ancestors £13.95 Discover Your Ancestors’ Occupations £9.95 Family History for Beginners £14.99 Pauper Ancestors £20.00 Railway Ancestors £14.00 Regional Research Guidebook £9.95 Researching and Locating Your Ancestors £9.95 Scottish Genealogy 3rd Ed £9.95 Searching for Surnames £11.95 Sporting Ancestors £9.95 Tracing Your Family History £12.95 Family &Local History Handbook £9.95 History & Heritage Handbook £24.95
Guarantee: We will beat any UK mail order pricing for the same product!
For Thousands more products ask for our free Family History Guide, or visit our website.
GS_Ad_FT_Dec16.indd 1 p055 Ads.indd 55
March 2017
Family FamilyTree 55
02/12/2016 17:07:04 03/02/2017 16:23
THE FAMILY HISTORY SOCIETY SCENE
Spotlight on…
Aldeburgh in Suffolk
Alde Valley Suffolk Family historians with ancestors in East Suffolk will find plenty to interest them at Alde Valley Suffolk Family History Group, which runs a varied programme of talks and local and family history projects, writes Rachel Bellerby
A
small but vibrant group, Alde Valley Suffolk Family History Group is based on the east coast of Suffolk, surrounded by an area of outstanding natural beauty. With a membership of around 80, the group covers the area from Felixstowe in the south to Lowestoft in the north. The group has a research centre which is housed in Leiston, a small
56
Family FamilyTree March 2017
spotlight RB FINALish .indd 56
market town. The centre is based in the old council offices, thanks to the local town council. Members enjoy regular monthly talks, stage an annual open day, undertake local and family history projects, and receive a quarterly newsletter. Research projects have included transcribing monumental inscriptions in local graveyards, with a full list of sites covered available on
How to join Annual membership is £6 individual or £10 per household. For further information, please contact Roger Baskett: email roger.2baskett@ btinternet.com or download a membership application from. onesuffolk.net
02/02/2017 10:48
Local know-how: plans, projects & people Find a society The Federation of Family History Societies has more than 180 member societies. To find your nearest, visit contacting.php and check the listings at You can also go to. co.uk/Events to find out about society talks, open days and fairs close to you
the group’s website at http:// aldevalleyfamilyhistorygroup. onesuffolk.net The site also has information on upcoming events, current projects and galleries of photographs. Two of the group’s committee members, Di Mann and Roger Baskett, researched and published a book in 2014 commemorating the WW1 fallen from Leiston. A recent related project involved a poem, written in pencil, by Percy Callear, a WW1 soldier who survived the
war. It was written from the Nasrieh Military Hospital in Cairo in 1915. The grandmother of Bill Sylvester, the current custodian of the poem, was the wife of a local GP and she was leader of Leiston-cum-Sizewell Urban District Council (as it then was) three times. John Peters, one of the group’s committee members, was able to fill in the gaps in the family tree for him. Ultimately, Bill tracked down a son of Callear’s daughter who now lives in France, and the poem was restored to Percy Callear’s descendants. The group’s current project is researching the history behind local house name plaques. For example, many houses in its area of interest are named after battles in the Boer War, having been built at that time. Interest in this topic came about as researchers often found a house name listed in the census returns, but not the current address, which was confusing. The annual programme of talks includes local history as well as family
The group’s publication Leiston’s Fallen of World War One
history topics. Recent subjects include ‘Hidden gems of Findmypast’, with a talk planned on researching the history of your family home, to tie in with the latest research project. The group’s quarterly newsletter is another method for members to stay in touch. News from the Suffolk Record Office and its programme of events is usually featured, as are any letters the group has received requesting help with their family history research. Information is also included about forthcoming events, additions to the group’s research archives, and matters of local or family history interest.
Clockwise from top: • Transcribing monumental inscriptions at Aldringham Baptist Chapel burial ground • Dating old pictures at a recent open day • A group volunteer helping a family historian to search online resources
spotlight RB FINALish .indd 57
March 2017
Family FamilyTree 57
02/02/2017 10:48
Images: background © LiliGraphie/AdobeStock; Picturegoer 1928 courtesy of Lucie Dutton
READING MATTERS
Magnificent MAGAZINES
M
Amanda Randall explains how old magazines can give us real insight into the lives and eras of our forebears
agazines have been part of our ancestors’ reading matter for more than 300 years, reflecting the attitude and standards of their time and providing us with a valuable insight into social history.
Early titles The first magazine in England was John Dunton’s short-lived Ladies’ Mercury published in 1693 for four issues. Its pages contained ‘All the nice and curious questions concerning love, marriage, behaviour, dress and humour in the female sex, whether virgins, wives or widows’, and it also carried ‘Answers to Correspondents’, a section that set the trend for advice columns. Some 38 years later Edward Cave founded The Gentleman’s Magazine in 1731. Intended to entertain and inform with essays, stories, poems and political commentary, The Gentleman’s
58
Family FamilyTree March 2017
p58-60 Randall Magazines FINALish.indd 58
Magazine continued publishing until 1922. It is generally regarded as the first modern magazine and included news, ‘foreign affairs’, advice for gardeners, recent book publications, where to find fairs, and so on. Samuel Johnson’s 1755 Dictionary credited Cave with the first use of the word ‘magazine’, meaning a ‘periodical miscellany’ that contained information rather than magazine meaning a storehouse or an arsenal.
Technological advances Early magazines were expensive and exclusive, however, mid-19th century developments in technology improved the availability of cheap paper – and the publications printed on it. The new machinery could process wood pulp into paper that was much cheaper to produce than the expensive rag-based raw
Vampish Vilma Bánky adorned the cover of Picturegoer magazine in November 1928 The Jabberwock was a monthly magazine for children
materials more suitable for book printing. Image reproduction and colour technology also improved; for instance, The Illustrated London News published its 1855 Christmas special with a colour cover made with coloured wooden blocks. Transportation also became more reliable, more cost effective and widespread as the railway network forged its way into every county. These developments, in conjunction with increased leisure time, greater levels of literacy and the growth of the middle class, heralded the great age of magazine readership. One
06/02/2017 13:42
Ancestors’ lives & times
The first volume of Punch was published in 1841
example of mid-century sales is The Illustrated London News (ILN), which sold 130,000 copies a week – the equivalent of 10 times the daily sales of The Times. By 1863, ILN sales had risen to 300,000 a week. From the 1880s, advertisements became more visually attractive to both advertiser and consumer. More page space was devoted to adverts and from this time revenue from advertising generated more income than subscriptions.
Special interest magazines The magazine world caters for specialist interests, whether for work or leisure, and this can be seen in the thousands of titles that appeared briefly before closing and being forgotten. The Kaleidoscope, or Literary and Scientific Mirror, was published weekly from 1818 to 1831. Despite some dramatic changes in its size and appearance, the price always remained at threepence-halfpenny. Jazz fan in the family? Perhaps he or she read the short-lived, but well-respected, Jazz Forum, a quarterly review of jazz, literature and avant garde graphic art published in only five issues between 1946 and 1947. The world-renowned satirical magazine and one of the first British magazine brands, Punch, was edited by some notable names including Henry Mayhew and Malcolm Muggeridge. Punch used biting satire to criticise Government, the wealthy, ordinary
p58-60 Randall Magazines FINALish.indd 59
people, social movements and trends in fashion or current affairs. Sales peaked in the 1940s but declined slowly until its final closure in 2002. Although football had featured in various sports newspapers and boys’ magazines since the 1880s, the first magazine devoted to football didn’t appear until 1951. Ex-footballer Charles Buchan launched Football Monthly in September of that year with a cover featuring the great Stanley Matthews. Ten years later, and still costing the original cover price of 1s/6d, it was selling 130,000 copies every month. At Football Monthly’s most popular time, monthly circulation reached 250,000 and membership of its boys’ club peaked at around 100,000. It ceased publication in June 1974, however, other titles have stood the test of time. The oldest is The Spectator, first published in 1828 and still going strong in 2017, as is The Economist, which launched in 1843 to campaign on one of the great political issues of the day – the repeal of the Corn Laws. As the idea of leisure time became more widespread during the later 19th century, special interest magazines turned to pastimes for their subjects. Loudon’s The Gardener’s Chronicle and The Suburban Gardener advertised gardening products and gave topical advice, with detailed features on particular aspects of gardening. The UK’s oldest consumer gardening magazine, the weekly Amateur Gardening, was launched in May 1884 and remains a best-seller. British Chess Magazine is the world’s oldest chess journal in continuous publication. First published in January 1881, it has appeared at monthly intervals ever since.
Cover of The Englishwoman’s Domestic Magazine in September 1861
Women’s magazines Magazines for women grew in availability and popularity in tandem with the Victorian trend for suburban domesticity. Men, as editors or writers often working under a female pseudonym, produced the majority of women’s magazines. For example, author and playwright Arnold Bennett edited Woman signing himself as Barbara, Marjorie or Marguerite. Regular tips for running the home, competitions, adverts and special offers built loyal readerships that would ensure weekly or monthly sales. One such example is The Englishwoman’s Domestic Magazine (1852-1879), launched by Samuel Beeton, husband of Mrs Isabella Beeton, the author of the best-selling Book of Household Management. This title changed the face of women’s magazines of the period, turning away from publishing for the very wealthy woman and tapping into a newly emerging readership – the suburban housewife. By 1855, annual sales topped 50,000 copies. By the 1860s, every issue included a full colour fashion plate featuring the latest Parisian designs, and offering paper patterns for the reader to reproduce high fashion garments at home. ‘Cupid’s Post Bag’ initially featured reader’s letters on a range of tricky topics from matters of etiquette to how to deal with gentleman callers or difficult fi ancés, but soon took on a saucy tone as ‘readers’ (perhaps Mr Beeton himself?) asked advice about increasingly risqué topics. By contrast two women, Emily Faithfull and Louisa Hubbard, edited The English Woman’s Journal (1858-1864). The journal campaigned for equality between the sexes, to promote women’s employment and legal reform to address discrimination. It featured articles written by women on education and the home environment. Faithfull and Hubbard edited later journals such as Women’s Gazette and Women and Work; their publications bucked the trend of relying increasingly on advertising to generate revenue. In the Edwardian period, My Weekly (established 1910) and
March 2017
Family FamilyTree 59
02/02/2017 10:42
READING MATTERS Look online • The British Library holds an enormous collection of magazines and journals, accessed online via. co.uk or • Search the archive of The Spectator – • The catalogue of the Women’s Library, now permanently housed at the London School Of Economics, offers access to historic women’s magazines and can be searched online at • Search the Internet Library of Early Journals – including some issues of The Gentleman’s Magazine – at • Charles Dickens and Wilkie Collins published their work in serial form in Household Words, a weekly fiction magazine costing 2d. Search Dickens Online Journals at household-words.html • A fascinating digital archive of children’s magazines (among other genres) can be searched at www. philsp.com/data/data069.html
Woman’s Weekly (established 1911) changed the tone of women’s magazines to being more companionable and personal. During WW2, women’s magazines were an important way of spreading propaganda messages, with the various wartime campaigns being heavily featured. Throughout the war Woman was seen as a ‘utility’ publication with its ‘make do and mend’ philosophy.
Mass entertainment Early fi lm magazines were aimed at people who worked in the industry and also at new audiences, especially women and children. In the early 1910s the fi lm industry was beginning to boom in Britain, Europe and the USA. These were the early days of the star system, which brought individual actors to the fore. Prior to about 1910 actors rarely received credit on the screen, but before WW1 favourite stars were becoming household names and developing loyal followings. The new fan magazines printed star portraits, stories and
60
Family FamilyTree March 2017
p58-60 Randall Magazines FINALish.indd 60
information, a bit of movie gossip, cinema bills, competitions and letters – even a children’s page to cater for the developing child audience. One of the earliest titles popular in the UK was Pictures and Picturegoer, first published in August 1914. Its first few issues took a fiercely patriotic tone with war stories being the focus, however, by the end of that year it had reverted to what it did best – bringing light relief during dark days, although the magazine continued to publish articles about the impact of the war on the fi lm industry until the Armistice. In its early years the magazine published articles about camera operators, script editors and studio bosses rather than focusing solely on stars. Pictures and Picturegoer changed its name repeatedly. By the 1920s the title was simplified to Picturegoer. Published weekly in the ’teens, Picturegoer became a high quality monthly publication in the early 1920s, reflecting the improved status of movie stars and the cinema industry itself. Picturegoer annuals were a more substantial product, with lots of celebrity news and photos. Film annuals are widely available from online auction sites, especially those published in the 1950s.
The Radio Times The Radio Times was established in 1923 as a radio listing paper by John Reith, in response to a threat from the Newspaper Publisher’s Association; NPA publications would only print the radio schedule if the BBC paid for it. Initially working with publisher George Newnes, the BBC’s growing status in the late 1920s allowed it to take full editorial control and by
1937 the entire operation had been brought in-house. The Radio Times announced regular ‘experimental television transmission’ in 1929 and in 1936 it was the first TV listings magazine. However, it has always included in-depth articles by leading writers. Newspaper adverts in 1939 proclaimed that ‘It’s like a tour without a map if we listen [to the radio] without the Radio Times’. Perhaps surprisingly, the Radio Times continued publication throughout WW2, albeit by reducing the number of pages and print size. Additional wartime editions were issued for the British Expeditionary Force (BEF) and the Allied Expeditionary Force (AEF). In 1953, for the Queen’s Coronation, the Radio Times incorporated TV and radio listings for the first time. For three consecutive years in the mid1950s, it published annuals, since then many occasional special editions have appeared. In 1957 TV became the magazine’s prime focus, although radio listings remained an important element of the Radio Times brand. Vintage issues of the Radio Times can tell the genealogist much about the leisure interests of their 20th century family members. What did they listen to? Who were the popular stars of radio in the 1930s? If your family had a TV in the 1950s, what programmes could they watch? The Radio Times might be a great conversation-starter if you are trying to capture family memories. You can freely download a PDF of the first issue at http:// familytr.ee/vintageradiotimes Find a free beta version of the Radio Times archive at. co.uk where you can download PDFs of issues and contribute your knowledge to the database. Paging through old magazines is an evocative way to learn about times gone by.
About the author Film and social history have intrigued Amanda Randall for as long as she can remember, especially what early film and home fgfg movies can tell us about our past. Since completing her MA in Film Archiving she has been researching and writing about these intertwined subjects, and blogs about them at paperpenaction.wordpress.com
First edition of the Radio Times
03/02/2017 13:20
FamilyTree Tree The place to go for family history news, how-to guides, reviews, competitions & more
u o y n e h w s k toip o p b o t e y & r s a r t e n r ff e e o t m l i l a e i t p l c s n m e u w o p o e c s c N t c a ge eeea e r r T T y y l l i PL US G i m m a a F EE F E E R R F F e a h t r r o f p sigsnigunp ufo
p061 Ads.indd 61 FTOct16_pg92_Ad.indd 92
March 2017
Family FamilyTree 61
06/02/2017 17/10/2016 13:21 22/08/2016 09:27 10:57
ing n r Lea pot hots
I M L Y A F
NAME
YOUR
INVESTIGATING SURNAMES
So let us look at surnames. Have you ever thought about yours, where it might have come from, and what it might mean? Follow June Terrington’s handy round-up of resources that will help you learn more about this precious family history clue – your family name any of the family names we have today are centuries old, but often their origins are lost in time, and not immediately obvious. However, with a little investigation you can fi nd out a great deal, which will help to shed wider light on your family history research, and in particular on the origins of the surnames on your tree. When most people lived in rural conditions and small communities and everyone knew each other, there wasn’t a great need to have a surname. It was only as time moved on, and bigger populations and conglomerations of people living together developed, that the surname or family name became increasingly important – as otherwise there were often too many people using the same fi rst name, causing confusion.
M
62
Family FamilyTree March 2017
p62-63 June Terrington v3 FINALish.indd 62
Expect spelling variations Our names are passed down generation to generation, but, over the years, they can morph, and this can create puzzles when we come to research our roots. Whether the changes are simply changes in spelling, due to lack of literacy, or decisive, deliberate alterations, as we trace our family lines back in time, don’t be surprised if you come across significant variations in the spelling of family names.
Rare names Sometimes a name can be uncommon, such as my married surname Terrington – and rarity usually makes research much easier. Terrington is an English location name from the pretty village of Terrington in North Yorkshire. It is also said to have come from tiefran, the pagan practice of
SU R N A M E SEARCH TIP
Don’t be put of f by an unusual spellin g of a surname – with a lit tle more searching, you may discover this is just a spelling error or variation
sorcery, which was supposed to defend against the Viking invaders.
Popular names Other names such as Wilson, my maiden name, can be found commonly – in large numbers and in numerous places. Now, learning more about a name such as Wilson can be a difficult path to follow. Wilson means ‘the son of Will’, and is more often classed
02/02/2017 10:47
The story behind your name
Over to you! Why not have a go at finding the origin of your surname? Search these six sites below and begin to unpack the clues hidden away about your family history within your family name.
MC, M AC & MORE
• OxfDictionaryNames – explore the origins of Surname p 50,000 names in the refixes (such as M Oxford Dictionary of c, O, Fitz) and suffixes (s Family Names in Britain uch as -so n, -kin, and -s and Ireland at a fortunate ) m ay s h e d further lig library near you (with a ht on your surname o £400 price tag, you may rigins find it a challenge to your own budget) • – search online surname database for your name • surnames.php – home in on surnames of interest to you, and find which RootsIreland collections hold entries
SA ME NA ME, SA ME FA MILY?
Having the same surname as someone else is no guarantee of a family connection. As you gat her clues for your family tree you will build up family units and learn where your family came from. Using this information you will become equipped to track back in time , piece together evidence and find out more about where your par ticu lar family name originated. Surname DNA projects are working to establis h connections bet ween names and families: see me_ DNA _projects
as having Celtic origin. While this is interesting to learn, you can see that it’s quite general information. The most common surnames in England and Wales are those such as Smith, Jones, Williams and Taylor. Visit the website http:// surnames.behindthename.com/ top/lists/england-wales/1991 to start exploring the list of the top 500 most commonly occurring surnames in England, Wales and the Isle of Man in 1991.
p62-63 June Terrington v3 FINALish.indd 63
Investigating the clues Surnames are really important to us, not only do they tell us a family name, but they hold vital insights that are especially interesting if you want to research them further. The family name can often give you a possible occupation that branch of the family was involved in, or a location they may have come from many years ago. Sometimes names were based on an ancestor’s appearance or characteristics.
About the author June Terrington started family history to learn about her absent father, and over the years this interest has become a passion. She really loves it and you can find her at Terrington
March 2017
Family FamilyTree 63
02/02/2017 10:47
April issue s on sale Wed 15 March
Coming next in
5
PROJECTS TO TRANSFORM YOUR RESEARCH
FamilyTree
Take your family history to the next level with our tried and tested project ideas
TRACING A MARRIAGE AT GRETNA GREEN This wasn’t stuff of novejulsst the our ancestors rea, lly did elope...
EE, HOW TO START YOUR FAMILY TR
EE FROM HOME, ONLINE, FR expensive to
t or Family histor y needn’t be difficul search-savvy start. Follow our simple steps and ily tree advice, and begin to grow your fam
BANK & BU RUPTCY DGET I NG With t he
YOUR SHIPBUILDER ANCESTORS
r folk Explore the lives and work of you turies in Britain’s shipyards over the cen olved, – from the many occupations inv to the iconic ships they built
ann upon us, let ual budge t ’s thin the ex k memb periences o about er s w f ho en family d e d debto rs’ jail up in
Subscribe to Family Tree to get every issue delivered to your home to enjoy – and save money too! See the offer details on the next page
pxx Next month.indd 92
03/02/2017 14:47
FREE
OXFORD DICTIONARY OF ENGLISH SURNAMES
when you subscribe to Family Tree magazine for just £9.99 a quarter by Direct Debit
• This fascinating guide explores the origins of over 16,000 English surnames • The in-depth introduction to the history of family names provides valuable information to anyone starting to research their family history • Includes an authoritative appendix on tracing the origins of a family name
Start your family finding adventure today and discover your ancestors with Family Tree, the experts of over 30 years
WORTH £11.99!
VISIT: OR CALL: 01778 392 008 CODE: FTRE/Mar17 CODE: FTRE/Jan17
FT March subs.indd 60
06/02/2017 11:09
research
zone
Three of a kind Terry Bridger has been researching the Royal Bounty for Triplets. What was this Bounty, and are there records related to it that might be helpful to genealogists? Terry talks to Simon Wills about her unique indexing project, which she is sharing online for the benefit of fellow family and social historians
Q A
What was the Royal Bounty for Triplets?
It was a donation made by the reigning monarch, from their own personal allowance, to offset the unexpected fi nancial burden upon families of triplets, quadruplets or more (known as higher order multiples or HOMs). It was discretionary and bound by certain eligibility criteria. It was most well-known during the late Victorian era. Contrary to popular belief that the Bounty ‘started in 1849’, there is evidence of philanthropy towards deserving poor parents of HOMs as far back as Henry VII, who fi nanced the tuition of the Taylor triplets, including John who ultimately became Master of the Rolls of the Court of Chancery for Henry VIII. The fi rst newspaper report of such an occasion appeared in the Brighton Morning Advertiser on 4 March 1842 when the plight of Mrs Wiber was brought to the attention of Queen Victoria while she was visiting Brighton. The Wiber triplets, Edward, Eliza and Margaret, had been born in the summer of 1838 and, unusually
66
Family FamilyTree March 2017
p66-67 simon wills FINALish.indd 66
for the time, had survived. They had five older siblings and the young Queen donated £5 to the family. The Bounty is not mentioned by name in the newspapers until 1855.
Q A
Did everyone get the Bounty?
Concerned at the lack of regulation, Prince Albert endorsed a set of rules. From April 1859 successful applications were recorded as payments for ‘trins’ in the Privy Purse ledgers, currently held at the Windsor Royal Archives. The ‘rules’ required all applications to be investigated and to qualify, parents of HOMS had to be married, respectable, in indigent circumstances and the children must have been alive at the time of application. Applications were made predominantly by parish priests, but also by doctors and local dignitaries or even occasionally by the father. Each application was individually considered and examination of the successful applications shows evidence of payments outside of these criteria on occasions, such as
£2 payments when all the babies had been lost. This was probably to assist with funeral costs.
Q A
When did people get the Bounty, and how much did they receive?
The payment was intended to relieve immediate distress, a clause that later generated considerable political debate regarding colonial applicants. There is evidence of a few payments to the colonies, but not many, probably due to the political controversy it caused. After much deliberation it was agreed that aside from Tristan da Cunha, all applications must be made within four months of the births accompanied by adequate supporting evidence. The average donation was £1 per live-born child but by the 20th century the relative value of the donation was rapidly decreasing. During the latter part of Queen Victoria’s reign there was an average of 57 payments made per annum. The financial payment was replaced with a congratulatory telegram soon after the death of Queen Victoria.
06/02/2017 12:33
Images: newborn triplets & mother © Wellcome Library, London, copyrighted work available under Creative Commons Attribution only licence CC BY 4.0; Queen Victoria from British Library Flickr; other images courtesy of Terry Bridger
LOCATING MULTIPLE BIRTHS
Unusual royal resource
Images: newborn triplets & mother © Wellcome Library, London, copyrighted work available under Creative Commons Attribution only licence CC BY 4.0; Queen Victoria from British Library Flickr; other images courtesy of Terry Bridger
The Bounty is first mentioned in the press in 1855 during the reign of Queen Victoria but Henry VII is known to have to provided royal patronage to triplets
Terry Bridger has set up The THOMAS Index online Triplets in the news, clockwise from left: The Walsh triplets from Warwick, born on 13 November 1896, were called Theresa Monica, Edith Tamilda and Philip Reginald; the Bradleys from Manchester, born 17 February 1875, were named Agnes, Eliza and Michael William; while the Cadby triplets from West Ham were born around February 1894 and called Daisy, Rose and Violet
Q A
Can you trace ancestors who received the Bounty?
The Privy Purse records are available for public access at the Royal Archives in Windsor, but unless there is known evidence of a HOM birth, searching for something that might not exist could be a lengthy and fruitless exercise. I have therefore transcribed all the extant records and am incorporating them into a new searchable online database which is
A mother holding her newborn triplets, 1926. The Royal Bounty for Triplets was designed to provide financial aid for families with multiple births of three or more children
p66-67 simon wills FINALish.indd 67
free to access at. co.uk (The Triplets and Higher Order Multiples Ancestral Searchable – THOMAS – Index). The site is not yet complete, but the intention is to create a portal for all matters relating to British HOMs prior to 1938. All Bounty records appearing in the ledgers have been included.
Q A
How useful are these records to family historians?
Where application letters have survived they provide valuable insight into the immediate postpartum phase of a newly extended family and their situation, socially, physically, mentally and financially. Most applications were made within 48 hours of the births. Though the forenames of the HOMs are rarely mentioned, such letters frequently enhance a genealogical view of a family which, excluding personal diaries, are seldom recorded elsewhere – especially within lower social groups. However, only a small selection of application letters have survived and the ledger records are minimal at best. They generally consist of the ledger entry date, the applicant’s name and/or the mother’s name, a reference, plus the amount sent. Yet despite its limitations, the
Bounty data is an aid to genealogical completeness, filling in blanks where traditional records are blind, and with the extant records freely searchable online it should soon be very easy to potentially confirm that nagging family rumour that there were indeed triplets in your ancestry. Although the Bounty records do not encompass illegitimate births or those of the well-to-do, these are precisely the ‘middling sorts’ of people that are often overlooked in other records.
Q A
Can readers help with your new website?
Yes, please. It would be helpful if people could send me information or photos regarding any triplets (or more) they have in their family trees pre-1 July 1927 (when the national stillbirth register commenced). I’m especially interested in any known sets from the Victorian era. I will need as much factual data as people are able to offer and the permission to include the data in this new site, which will be in the public domain. Please use the ‘Contact’ feature at
About the author Dr fgfg Simon Wills is a genealogist and author with more than 25 years’ experience of researching his ancestors. He has a particular interest in maritime history and his latest book is The Wreck of the SS London (Amberley). He is also author of Voyages from the Past, How Our Ancestors Died and a novel, Lifeboatmen.
March 2017
Family FamilyTree 67
06/02/2017 12:32
THE FEEL OF FASHION
Women’s magazines supported the Government and represented clothes rationing in a glamorous light, as in Woman’s Own, September 1944 (right) Knitting was very popular during the war, this pattern (above) for a red, white and blue Victory jumper being featured in Home Notes magazine, June 1945
PAR T 2:
1940s & WW2
THE WAR YEARS’
WARDROBE
Fashion reflects the world around us – our politics and religion, the economy, technology, customs and values of society. Investigating how our forebears dressed, clothed their families and upheld appearances in the 1940s reveals the challenges and achievements of daily life during and after the Second World War, as Jayne Shrimpton explains 68
Family FamilyTree March 2017
p68-71 Shrimpton FINALish.indd 68
02/02/2017 10:43
What does clothing reveal about our ancestors?
I
n 1940 female fashion favoured trim knee-length A-line dresses or pleated skirts, bodices and blouses with padded shoulders, a tailored image that would continue to shape civilian wartime style. Items of battle dress were also present in the wardrobe. Gas masks had been issued in 1938 and when war erupted in September 1939 everyone was urged to take precautions by carrying a gas mask whenever leaving home. Also reflecting early war conditions was the air-raid emergency suit or ‘siren suit’, a comfortable zipperfronted one-piece jumpsuit to throw on over clothes or nightwear, when sirens wailed. Many incorporated a snug hood and were available from shops and mail order catalogues, or were homemade using any warm fabrics. Siren suits, like practical breeches and trousers, were previously rarely worn by the average woman but now more widely adopted, either as functional daywear, or as an occupational requirement, perhaps part of a military or civilian uniform. Many females were doing men’s jobs and undertaking challenging physical work requiring comfortable, protective trousers: and yet in some rural areas and within conservative communities these masculine clothes were still considered indecent, even immoral; some breeches-wearing Land
Girls were ostracised and men with traditional values objected to their wives or daughters wearing trousers.
Cosmetics companies even promoted the concept of beauty as a patriotic duty In uniform Millions of British women donned uniforms, from bus crews and post women to the Land Army, who all received designated kit. Some organisations recruited females from all walks of life: this was a great social leveller, for while privileged girls perhaps felt deprived of certain luxuries, poorer girls enjoyed a better wardrobe than during peace-time, also gaining strength and fitness through exercise and substantial meals and benefiting from modern hygienic dental and sanitary products. Factory workers donned industrial boiler suits and unattractive hairnets or headscarves, but underneath their hair might be in curlers ready for a night out. Recruits into military units – the ATS, WRNS and WAAFs* – were issued with masculine-inspired uniforms, stylish tailored suits accessorised with jaunty peaked caps. The chic dark navy uniform of the WRNS was reputedly the most sought-after female service uniform, although one ex-WAAF officer assured me the naval suits were itchy and of inferior cloth to the smart air force blue uniforms. (* ATS, Auxiliary Territorial Service; WRNS, Women’s Royal Naval Service; & WAAF, Women’s Auxiliary Air Force)
Rationed fashion
Beachwear gained a new lease of life as Britain’s coastal resorts reopened, even upmarket tailors presenting colourful new leisurewear for men (1948)
p68-71 Shrimpton FINALish.indd 69
While women in uniform received official kit, even down to underwear, civilians struggled in worsening conditions. With many articles scarce and soaring inflation, clothes rationing was introduced in June 1941, essentially to stabilise prices and provide essentials for all. Most new purchases now required both cash and coupons. Initially public reaction was generally fairly positive, few believing that rationing would last long, but materials grew scarcer and in spring 1942 the already meagre ration was reduced. In these extraordinary circumstances, clothing coupon fraud ensued and
After the war, the younger generation increasingly favoured stylish comfortable separates, as seen in this family photograph, 1947 (top) Sisters aged 25, 12 and 27 photographed in 1943 wear austerity-style outfits featuring minimal fabric and decoration (above)
local street markets became a magnet for ‘spivs’, clearing-houses for looted goods. Reportedly some did whatever it took to clothe themselves and their families, and consequently knowledge or suspicions of black market activities could come between friends and neighbours. However, in true patriotic spirit, many women prided themselves on managing within the system, some older women giving away precious coupons to mothers with children.
Keeping up appearances Perceptions of fashion shifted during the war: ornate modes seemed
March 2017
Family FamilyTree 69
02/02/2017 10:43
THE FEEL OF FASHION
Many women experienced a tremendous camaraderie, pooling precious possessions for special occasions
An unprecedented number of women were recruited into auxiliary military units, and Government posters displayed the smart uniforms that they could expect
extravagant – and simpler, comfortable styles were favoured by active women. So long as a woman avoided descending into slovenliness, a little shabbiness was acceptable, for luxuries were few and even basic articles scarce. The prohibition of silk for making stockings in January 1941 caused a dilemma, for going out without stockings was considered indecent. Many resorted to faking tancoloured stockings using commercial cosmetic lotions of variable quality, or homemade concoctions of liquid gravy browning, cold tea or cocoa,
Beauty as a duty
Make-Do and Mend Dress played a crucial role in Britain’s wartime strategy, the Government controlling textile and garment production, restricting manufacturing, consumer choice and expenditure on apparel. The Board of Trade’s Make-Do and Mend scheme, practical advice in the press and local Women’s Institute and Women’s Voluntary Service classes together aimed to educate all women, including ladies now having to manage without servants, in making and caring for clothes, to extend their life and reduce the need for new purchases. Women not already accustomed to making ends meet now stepped up their game, finding ingenious solutions to clothing shortages. Winter coats were tailored from blankets or absent husbands’ coats and dressing gowns, dresses from twill blackout material or old curtains; resourceful dressmakers used any fabric scraps to fashion wearable, if unconventional garments, including (illegal) parachute silk for underwear or wedding dresses. Knitting wool was especially versatile and any obtainable colour and quality was utilised, adult garments often unravelled and knitted into children’s clothes. Housewives spent long evenings at home sewing, darning and
70
Family FamilyTree March 2017
p68-71 Shrimpton FINALish.indd 70
a ‘seam’ drawn up the back of the legs with eyebrow pencil. Others, to avoid appearing bare-legged, simply wore ankle socks, already popular for country and sportswear. Traditionally headwear completed a smart outfit and some women felt uncomfortable going out bareheaded. Hats were not rationed and with garments becoming more austere, attractive headwear could express personal style. Women today can recall shops full of beautiful hats, but being heavily-taxed few could afford them, many re-trimming and dyeing existing hats or making headgear, popularising new styles including turbans, headscarves and netted snoods. Glorious hair was a feminine attribute that made women feel seductive and much time was spent curling, rolling and pinning long tresses. Cosmetics also imparted glamour and resourceful ways were found to acquire scarce beauty aids: for example, women re-used the ends of old lipstick tubes, or utilised solid rouge or beetroot juice. Soot, charcoal and boot polish outlined eyes, while an infusion of rose petals subtly coloured the cheeks.
The Make-Do and Mend scheme, launched in 1942, urged women to go through their wardrobes before considering new clothing purchases knitting while listening to the radio: such tasks could seem laborious and if a lone woman struggling to look after home and family became demoralised and exhausted, neighbours, family and friends rallied round to help and raise her spirits. Many women experienced a tremendous camaraderie, pooling precious possessions, swapping patterns, even sharing garments and accessories for special occasions.
By 1940 diverse gas mask cases including smart handbags with compartments for mask and respirator were potent sartorial symbols of war
02/02/2017 10:43
What does clothing reveal about our relatives?
Men & children Many female war workers donned practical breeches and trousers, as here in 1941, although these masculine clothes remained controversial in some quarters
Beauty products and substitutes helped to maintain the impression of normality and boosted self-esteem. Cosmetics companies and fashion magazines even promoted the concept of beauty as a patriotic duty and a necessary weapon that could help to win the war.
Post-war style Clothes rationing continued after
With many men in uniform, male fashion took rather a back seat during the 1940s, although Utility restrictions applied to new civilian clothes and the unpopular, utilitarian de-mob suits issued to ex-servicemen are still remembered today. A major preoccupation was how to clothe growing children for, despite extra coupons for expectant mothers and young families, parents struggled to keep up with offspring rapidly outgrowing their clothes and shoes, and were especially concerned about foot deformities developing.
Maintaining an attractive image was considered important for morale and a women’s patriotic duty, as often presented in cosmetics adverts (Ideal Home, 1945)
In response, the Women’s Voluntary Service opened clothing exchanges where decent children’s clothes and shoes could be exchanged for larger sizes, without money or precious coupons. Many children’s clothes were homemade from any available material: some readers may remember wearing underpants to school made from floral dress material. Stoical times indeed. Next issue: We look at clothing in the 1920s & 1930s
Yours to win!
The ‘New Look’ of 1947 took the post-war world by storm and brought a welcome sense of feminine allure back into fashion
p68-71 Shrimpton FINALish.indd 71
We have a copy of Jayne Shrimpton’s book Fashion in the 1940s to give away. To enter, please email editorial@ family-tree.co.uk with ‘Fashion in the 1940s’ in the subject line. Competition closes 31 May 2017. To buy a copy of the book (RRP £7.99) visit
March 2017
Images: war workers in breeches 1941 © Kat Williams; all other images Jayne Shrimpton – ATS Auxiliary Territorial Service; WRNS Women’s Royal Naval Service; WAAFs Women’s Auxiliary Air Force
war ended in Europe in May 1945 and Britain remained economically depleted. However, as Parisian fashion houses reopened Christian Dior’s ‘New Look’ was launched in 1947 – an extravagant, controversial fashion using yards of fabric. Initially considered inappropriate and unpatriotic in Britain, the new style that exaggerated feminine curves and revived a welcome sense of romance and allure was widely adopted during the later 1940s. Conversely, comfortable trousers and slacks became more established in the female wardrobe, along with casual American-inspired co-ordinates and separates, while holiday clothes and swimwear grew bright and bold as beaches reopened.
Family FamilyTree 71
02/02/2017 10:44
s k o o B
‘top choice’
On 1 September 1939, on the eve of war, more than 1.5 million civilians, mostly children, were evacuated from Britain’s towns and cities to the apparent safety of the countryside, away from Hitler’s bombs; some were taken overseas to Commonwealth countries such as Australia and Canada. This mass movement of people was an incredible moment in history, followed in May 1940 by more evacuations from the nation’s coastal areas and the departure of 17,000 civilians from the Channel Islands just days before occupation by the Germans. Gillian Mawson, author of Guernsey Evacuees: The Forgotten Evacuees of the Second World War, has widened her scope to record the stories of evacuees
SEA CHARTS OF THE BRITISH ISLES by John Blake Explore a stunning collection of nautical charts and information about Britain’s magnificent seafaring history in this colourful new book by LieutenantCommander John Blake, who spent 12 years in the Royal Navy and has written a number of books on the maritime world. The book takes the reader along the constantly changing coastline of the British Isles, moving clockwise from London and the Thames Estuary, explaining the dangers of rocks and tides and describing the sea ports, harbours, naval bases, dockyards and seaside
72
Family FamilyTree March 2017
p72-73 Books FINALish.indd 72
VOICES FROM THE PAST: BRITAIN’S WARTIME EVACUEES by Gillian Mawson
from across the UK and Gibraltar; since 2008 she has interviewed hundreds of evacuees and read the testimonies of those who have passed away, and scoured newspaper reports and official documents to piece together this powerful collection of stories. Many evacuees give positive accounts of caring foster parents who welcomed them with open arms, even if they had little themselves to share, and the tears that were shed when they returned home, often several years later. Some children were shocked at the poor housing they were billeted to – outside toilets took some getting used to when they had inside loos at home – while others found themselves living in mansions complete with servants. A number had more distressing stories of neglect; of parents who didn’t – or couldn’t afford to – take them back or siblings who passed away, leaving them to return
resorts our ancestors may have been familiar with. Whether they were fishermen, lighthousemen, dockers, naval, military or customs men, here you’ll find their world opened up in glorious detail, so you can better understand what it meant in the past to live in this culturally diverse island nation. Images of historic charts from Britain’s finest chartmakers are put into context and packed with facts; even featuring details of place names lost to the sea, and former ports now stranded miles inland. This book is a real treat for anyone interested in Britain’s maritime and coastal past or those with ancestors whose lives were shaped by the surrounding sea. • ISBN: 9781472944900. RRP £18.99, paperback. Bloomsbury.
home after the war, alone. These accounts are remarkable and very moving, recalled nearly 80 years later when few parents could even contemplate sending their children away into the arms of strangers. It is an enlightening and valuable read, forming a part of many family histories, recorded for later generations to understand how these experiences shaped their loved ones’ lives and communities. • ISBN: 9781848324411. RRP £19.99, hardback. Frontline Books (an imprint of Pen & Sword).
LOOK ONLINE
Find out more about the experiences of evacuees in World War II by reading Gillian Mawson’s guest blog for Family Tree at
MERCHANT SEAFARING THROUGH WORLD WAR I 1914-1918 by Peter Lyon If you have 20th century merchant seafarers in your family tree then this new book is sure to give you fresh insight into their experiences. The book stems from research undertaken by former Master Mariner and author Peter Lyon into the life of his maternal grandfather Captain Henry Griffiths – who had been at sea during both world wars – after fi nding some of his personal photos and papers. His family history quest sparked a wider interest in the experiences of
06/02/2017 11:04
Family history reads with Karen Clare
British merchant seafarers and their passengers in WW1, including the appalling heavy losses of Allied and neutral merchant ships at the hands of German U-boats. Lyon paints a vivid history of the hardships and horrors endured by merchant seamen and women, who carried out their understated work with little support or regard for their safety, as they battled to keep open supply lines of the food, troops and armaments that helped the British and Allies win the First World War. Among the roll call of tragedies is the terrible fate of the crew of the Mariston steam ship, torpedoed without warning in the Atlantic Ocean on 15 July 1917 as she carried a cargo of copper from Almeria to her home port of Glasgow. Out of 29 crew, only one, Charles Williams, survived after climbing onto a hatch he used as a raft until his rescue 15 hours later. However, he witnessed the hideous deaths of many of his shipmates, killed by a school of sharks. The author ensures that some of the voices of the civilian mariners are heard and the sacrifices of these unsung heroes and heroines of the sea are recorded for posterity. With a useful bibliography, list of archives and index, this is a useful read for anyone researching merchant seafarers of WW1, many of whom went unrecognised. • ISBN: 9781910878415. RRP £9.99, paperback. Book Guild.
SCHOLARLY SCOUNDREL: LAURENCE HYNES HALLORAN by Jan Worthington Renowned Australian genealogist Jan Worthington donned her detective hat to research and write this lively biography of the rather colourful Laurence Hynes Halloran (1765-1831), a father of 21 (with three women), transported convict and pioneering headmaster of Sydney Grammar School. The Irish orphan’s vices included impersonating clergymen and even murder, yet he was a veteran of the Battle of Trafalgar, a poet, sailor and even a publisher and coroner
p72-73 Books FINALish.indd 73
in his adopted Sydney, where he was shipped in 1819 to serve seven years for forgery – really, the least of his criminal endeavours. This controversial and self-styled ‘doctor’ was considered the most educated fellow in the fledgling New South Wales colony where he lived for only 12 years until his death, but he made a huge impact. Yet his gentleman exterior hid an arrogant, law-breaking, volatile personality who did not take criticism well, although one obituary noted he had a generous heart, so perhaps there was something of the lovable rogue about him. Hundreds of Australians are descended from this remarkable and flawed man and the author has used genealogical methods to trace Halloran’s family connections as well, including carefully detailed family trees. Her engaging and insightful biography with a strong family history slant is sure to have you hooked from the word go. ‘Halloran,’ the author tells us in her first paragraph, ‘was a man who survived despite his wickedness’. And who doesn’t love a black sheep in the family? • ISBN: 9781925043198. RRP Aus $39.95 plus p&p, paperback. Halstead Press. Available from jan.worthington17@gmail.com or plus Abbey’s Bookshop and Booktopia
THE RAILWAY EXPERIENCE by Paul Atterbury With more than 50 evocative photos – of steam trains, turntables, signal boxes, railway workers and more, dating from the late Victorian era through the 20th century, this is an intriguing and nostalgic read both for those interested in the history of railways in Britain, and for those who simply enjoy a trip down memory lane to the magical Age of Steam locomotives. It’s the photographs that really catch the eye in this book, many of which are previously unpublished gems, with each accompanied by a fascinating capsule history. And funky facts
In brief Joseph, 1917 by David Hewitt David Hewitt uncovers a secret history in this biography of soldier Joseph Blackburn, from Thornton Cleveleys in Lancashire – the author’s home town – who was forced following a tribunal to go to war, and died on the Western Front. Hewitt, a tribunal judge himself, draws on historic legal records and newspaper reports to tell Joseph’s forgotten story and those of others like him. • ISBN: 9781785898976. RRP £8.99, paperback. Troubador Publishing. The Conversations We Never Had by Jeffrey H Konis This heartwarming cross between a memoir and fiction highlights the importance of family history. Knowing nothing of his orphaned father’s Jewish family or upbringing and regretting having left it too late to learn from his Grandma ‘Ola’ – really his father’s aunt who took him to America after surviving the Holocaust – the author recalls his precious time with her more than 20 years earlier and imagines the stories she might have shared, had he asked the questions. • ISBN: 9781478767299. From £9.48 paperback, also in hardback and Kindle. Outskirts Press. Available via Amazon.
abound... Did you know, for instance, that in 1948 there were 10,000 signal boxes in use throughout Britain, but by 2012 they numbered just 500? Or that the ‘Flying Scotsman’ was initially just a nickname, not a formal one until 1924. A small book, it’s nevertheless packed with revealing insights to life and work on the railways from decades past. • ISBN: 9781784421236. RRP £9.99, hardback. Bloomsbury Shire. Review by Helen Tovey. See pages 24-31 for our expert feature on tracing railway ancestors.
March 2017
Family FamilyTree 73
06/02/2017 11:04
ADVERTISING FEATURE
Researching the de Rothschild family Tracing Lionel de Rothschild ‘a banker by hobby – a gardener by profession’: Nick Thorne researches a member of the de Rothschild family using the records on TheGenealogist
GRO birth records on TheGenealogist show the index entry for Lionel N de Rothschild
L
ion
74
Family FamilyTree March 2017
advertorial FINALish change page numbers to p74-75.indd 74 confl ict. Both of Lionel Rothschild’s brothers were wounded in battle, with Evelyn dying of his injuries at the 1917 Battle of Mughar Ridge. A search of the military records on TheGenealogist reveals that Lionel
Harrow school register from the educational records on TheGenealogist. Eventually, however, the oath was modified and so he was able to become the first practising Jew to take up his seat in the House of Commons.
de Rothschild, nonetheless, appears in the Roll of Honour for Cambridge University and we can also find his World War I campaign medals from a search of the military records on TheGenealogist.
Trinity College admissions are also found on TheGenealogist
02/02/2017 10:01
Save £20
The Jewish Chronicle 1907 on TheGenealogist
Overseas, consular marriages found on TheGenealogist Exbury House
Cambridge University war list in the roll of honour records
World War I campaign medals on TheGenealogist
Lionel de Rothschild recorded in the Jewish Seatholders collection
The 1942 death record for Lionel N de Rothschild
advertorial FINALish change page numbers to p74-75.indd 75.
March 2017
Images: Exbury House, from Wikipedia, published under the Creative Commons licence
on TheGenealogist’s Diamond subscription, visit TheGenealogist.co.uk/ FTADVP20P to claim your offer
Family FamilyTree 75
02/02/2017 10:01
DIGGING DEEPER
Diarist Gill Shaw charts the rollercoaster ride of researching her family history
Twiglets A
fter a frankly frustrating session last time, failing to locate anything other than early deaths or total disappearing acts for my Ashurst 5x great-aunts Mary and Ann and their other halves, I’m turning to their remaining sister, Elizabeth, who married Richard Muleman Chiswell in 1816. Come on, Mr MC. With that amazing name I’ve been saving you till last, so don’t disappoint me. At Ancestry.co.uk I pop in nothing but his name, as there surely can’t be many Richard Muleman Chiswells to the pound. Hallelujah, this is more like it – a whole page of results, including some Richard ‘Muirman’ Chiswells in the censuses. Oh, but hang on; this one was born in 1851, the next in 1879. Those are way too late, but they could be later generations of the same family. Excitedly, I scroll on down until my eye comes to rest – predictably by now, it seems – on a burial. In 1833, a 39-yearold Richard ‘Muilman’ Chiswell was buried in Prestwich, Manchester. Oh dear, that’s got to be him. Still, we’ve got a birth year now, so just for fun I check out Richard’s origins, and it seems he was a Norfolk man: Richard Muilman ‘Cheswell’, baptised at St Gregory’s Norwich in 1794. Well that makes a change at least! So what happened to Elizabeth after his death? (Please let her have made it to 1841…). I search for an Elizabeth Chiswell, born 1795, and success at last. Living on Faulkner Street, Manchester on the 1841 Census are 45-year-old Elizabeth Cheswell, a 15-year-old clerk named Thomas Cheswell who’s presumably her son, and Thomas Blackshaw, aged 10. Aha! I bet that’s her late sister Ann’s boy, christened John Thomas Harrop Blackshaw. The next thing I spot is her burial. Aged 64, she’s helpfully described as the ‘widow of Richard Muilman Chiswell’, and was buried in 1859 in the same churchyard as her husband. That means I ought to be able to locate her on the 1851 Census, but there’s only a transcript of a volume
76
Family FamilyTree March 2017
p76 Twiglets FINAL.indd 76
that’s been water-damaged, so it’s largely unreadable. What I have instead, though, are the ever-useful Manchester rate books, and as a woman of independent means, our Elizabeth features every year from 1836. She moves from Faulkner Street to Oxford Street, then on to Carlton Terrace in an area called Greenheys, which was apparently fairly posh and still quite rural back then, and is described in Mrs Gaskell’s novel Mary Barton (which I’ll now have to read!). Elizabeth is listed at that address in historical directories of Manchester, so she might well have run it as a lodging house.
Family history takes you to some interesting places! Before I wrap things up, can I find her son Thomas’s baptism, or any other children? Yes, indeedy. Here’s Thomas Spanton Caygill Chiswell (another corker of a name!), christened in 1824. But hang on. Caygill? I’ve seen that name before. In 1769 an Ann Caygill and James Ashurst tied the knot at St Mary the Virgin, Bury, where my parents married, and where I was christened. James is my 5x great-grandad (and there’s an outside chance he’s the 93-year-old James Ashurst whose burial I found last issue…) Crikey, my head is spinning, and it spins even more when I find the baptism of a Richard Harrop Chiswell, parents Richard Muilman Chiswell and Elizabeth. So it looks as if both Chiswell boys were given family names from their mother’s side. Wonderful! I also find the baptisms of three daughters, but sadly burial records for all three too, as well as for Richard. So it seems Thomas Spanton Caygill Chiswell was the couple’s only surviving child, and with that name, it’s not difficult to spot his marriage to the equally glamorous-sounding Clementine Sophia Cros in Liverpool in 1847, and hazard a guess that all the other Richard MCs on the censuses, right up to the 1911, are their descendants.
At Ancestry there’s even a public history by someone who’s been researching the family. It says the later Muilman Chiswells – all merchants like Elizabeth’s husband – ended up in Argentina where they founded a bespoke tailor in Buenos Aires called, what else, The Manchester. Love it! In fact, I was loving the Muilman Chiswell name so much I Googled it. Well... It seems Elizabeth’s husband wasn’t the first Richard MC. When our Richard was born in Norwich in 1794, there was a wealthy MP in Pitt the Younger’s government called Richard Muilman Trench Chiswell of Debden Hall in Essex. There are biographies of him online, including on Wikipedia, and I assumed there must be some connection. But I can find no link at all, and apparently the MP was born plain Richard Muilman, and just added the ‘Trench Chiswell’ from his mum’s side. So if there’s no connection, why did Mr Chiswell from Norwich give his child the middle name Muilman? Did he want people to think his son was related to the MP, as that might give him a head start in life? If so, the plan backfired (literally) when Trench Chiswell – who had ‘interests’ in the West Indies and voted against the abolition of the slave trade – lost his fortune. His business partner scarpered, the MP went bankrupt and shot himself when our Richard was just three. So probably a good thing there’s no connection! Family history takes you to some interesting places, doesn’t it? As I look down at the little tree I drew a few months ago, now shockingly out of date, my very own direct ancestor with a strange name is looking right back at me. Richmon Wrigley, mother of my 2x great-grandfather James, who married the Alice Ashurst whose early death set me off on this rollercoaster journey. OK Richmon, you’ve been patient long enough. It is now time for a brand new adventure...
About the author fgfg Gill Shaw is editor of Dogs Monthly magazine and former assistant editor of Practical Family History. She lives in Cambridgeshire and loves singing, walking and tracking down elusive ancestors.
03/02/2017 10:09
JOIN US TODAY!
Exclusive giveaways, offers more every issue! Caring for&your memories
FamilyTree FamilyTree
SUBSCRIBER CLUB Enjoy membership benefits
If you don’t yet subscribe to Family Tree, it’s easy to sign up and start enjoying your Subscriber Club membership benefits! Go online at familytr.ee/joinfamilyt, call our subscription team on 01778 392008, or see page 65 for our latest offer…
SAVE 40%
A Detailed History of RAF Manston 1941-1945
Located on the Isle of Thanet, RAF Manston was the closest RAF station to the Channel – so when the Battle of Britain was being waged, it was right in the firing line. At one point the airfield was actually put out of action, but it rose again, and provided a crucial base for damaged aircraft, especially those from Bomber Command, returning from operations. With research from the Operational Record Books (ORB), contextual details about the raids and key players and evocative photos of the faces and aircraft of the era, this is a poignant and informative read. This is the last in a three-volume set and covers the history of the station during the threatening years of the Luftwaffe offensive. • Subscribers to Family Tree can save 40% when ordering A Detailed History of RAF Manston 1941-1945 (ISBN: 9781781550960, RRP £18.99, Fonthill Media).
FREE UK P&P Famous Regiments of the British Army completes the study of more than 100 British regiments, each of whom played important roles in world history and helped to shape the past of the British Isles. In this third volume, 34 regiments are featured – covering their battle honours, badges and most famous sons – including the stories of the heroic actions of their Victoria Cross holders. Each regiment’s section includes artworks and photographs illustrating insignia, uniforms and soldiers in action down through the centuries. • Subscribers to Family Tree can buy Famous Regiments of the British Army (ISBN: 9780750968362, RRP £25, The History Press), volume 3, for £19.99, free UK p&p. Offer valid until 31 May 2017.
HOW TO CLAIM YOUR OFFER S: ub@
Email subscribercl h ‘British family-tree.co.uk wit nston’ Ma F ‘RA or ts’ en Regim d an e lin ct in the subje de co ur yo u yo nd se we’ll and details of how to place your order
3 copies to GIVE AWAY! Have you been inspired to learn more about your Irish ancestry having read Steven Smyrl’s excellent and helpful article on page 36? Well, we have three copies of the Oxford Companion to Irish History to give away. Covering a broad span of Irish history, from prehistoric times to the present day, it’s a must-read tome for those with Irish connections.
HOW TO ENTER OUR GIVEAWAYS:
@familyEmail subscriberclub worker ay ilw ‘Ra h wit k o.u tree.c tor y his sh ‘Iri giveaway’ or line ct bje su the in y’ wa givea the in u yo and we’ll enter draw. Good luck!
MUST BE WON! Was your ancestor one of the hundreds of thousands who worked on the railways? To help you learn about this fascinating subject, we have three copies of My Ancestor Was A... Railway Worker by Frank Hardy – from the Society of Genealogists’ noteworthy series – to give away. And be sure to read our bumper article on page 24 this issue, to discover more about those all-important records for tracing family who kept the nation’s railways on track.
The important stuff...When you contact us, please include just your name, subscriber number and postcode (print subscribers), or state if you are a digital subscriber. To obtain your code by post, write to the address on page 3 and provide an SAE. No cash alternatives will be offered and the editor’s decision is final. Kindly note, offers are open to Family Tree subscribers only (print and/or digital), and are valid until 10 May 2017 unless otherwise stated
p77 subs club FINALish.indd 77
February 2017
FamilyTree 77
03/02/2017 16:44
HOW TO PUBLICISE YOUR FAMILY TREE ONLINE
PAR T 6:
THE FUTURE?
In this final part of the series, Mike Gould is going to look into his crystal ball at possible future technological developments for family history websites – and his ideas are both entertaining and intriguing...
A
lthough it’s fun to speculate about the future, there is also a reason for doing so. If there are gaps in the market for products or features, some entrepreneurial souls may eventually make them happen. So here, in no particular order, are six items from my wish-list of possible website developments.
1. Source information server In building a family history website, by whatever means, there is a tendency to concentrate on trees and people. My website is no exception. However, when you find that someone else’s website has ancestors in common with you, it is the source information for their assertions that becomes important. Although I include source citations for the list of events in the lives of my ancestors, the details that I provide are limited. It would benefit the family history community if source information were to become: • open source, with freedom to reproduce (within reasonable constraints) • packaged for interpretation by computer programs (eg marked up, such as XML) • served from a query server, to enable it to be searched by users.
78
Family FamilyTree March 2017
p78 MikeGould FINALish.indd 78
If information providers operated in this manner, our websites could be set up to pass on this information.
2. Genealogy proof builder How can I prove to others that I’m right, when I’m not even sure myself? The problem of proof in genealogy has existed since the subject itself. When we come to publishing material on a website, we need to persuade others that it is correct. The quote from American economist Alan Greenspan (see opposite page) illustrates the problem of trying to convince someone of your case. What is needed is a program to help us construct a proof. It needs to ‘understand’ family history and be able to assess the weight to put on the evidence that we have found. It also needs to take account of evidence that we have not been able
Sources & copyright While facts cannot be copyright, images of digitised images, for instance, are. So when citing your sources, you will be safe to include transcriptions of facts – as long as you’ve created the transcription yourself, to avoid being in any danger of murky copyright water
to find! All in all, it needs to be pretty intelligent for a computer! Having processed the information we give it and come to its conclusions, it then needs to explain, in clear English, the way it reached them. It is this logical sequence that we would then publish on our website for all to see.
3. Genealogy inference engine If a Proof Builder works by assessing a hypothesis that we give it and deducing its surety level (ie how confident we should be that it is true), then an Inference Engine would go one step further and derive the hypothesis in the first place. The idea would be to feed source information into a program, such that it could then derive the familial relationships of the people concerned. Now you may feel that this detracts from the fun of doing family history – are we replacing ourselves with robots? No, the thrill of chasing down the evidence and finding those allimportant clues that resolve everything is still there. It’s just that having done so, you then use a program to see whether its unbiased approach yields the same answer as you suspect.
4. Genealogy information visualiser We are used to seeing family trees as one visualisation of our family history
03/02/2017 13:23
Dreams of a brave new web world 1
2
3
information, but there are many others. I mentioned last month that each event has a time (1 dimension), a place (2 dimensions) and applies to a person who fits onto a family tree (again, 2 dimensions). A generalpurpose visualiser would allow us to select from these dimensions, filter by various parameters and display in various ways. For example, we may want to see the geographic distribution of people who died more than 25 miles from their place of birth. You could imagine this as a map of the country, with columns above the places of birth, such that the height of the column represented the number of people who met the criteria. This is essentially a 3-D plot. You could then add a further dimension by using a side-bar slider to vary the century of the birth dates.
5. Life event recorder 4 5
6
1 The ‘source information server’ would provide meaning and structure to data exchange 2 The ‘proof builder’ would assess how sure you can be about your hypothesis 3 The ‘inference engine’ would create your family tree from evidence fed to it 4 The ‘genealogy information visualiser’ would be able to plot selected data in many different ways 5 The ‘life event recorder’ would help you to create family histories in the electronic era 6 The ‘historic place reconstructor’ would use pictures to create a 3-D model of places
p78 MikeGould FINALish.indd 79
Read any book on family history and one of the first pieces of advice that it will offer you is to interview elderly relatives. But how about collecting family history information from the younger members of your family? You could interview them too, but these days, with the advent of social media, there is an electronic alternative. Why not collect their life history, as it unveils, from the blogs that they themselves create? You can build up their life story with a computer program and a little help. Of course, you must get their permission to include their life story on your website, so, if that is a problem, ask them if you can still gather the information but keep it private for now. You may need to précis their output and you will certainly want to rank the information, so that it can be filtered suitably in importance: eg a ranking from ‘1’ (Had egg and bacon for breakfast) to ‘10’ (Got married today) would probably suffice!
6. Historic place reconstructor Computer Aided Design (CAD) has been around for many years and there are many programs that could be used to create a computer model of a town or village. The problem is usually the amount of skill and effort needed to do it. What is needed is a program that will take old photographs, paintings
I know you think you understand what you thought I said but I’m not sure you realise that what you heard is not what I meant – Alan Greenspan and postcards and, using a library of typical building construction, create the model automatically. The program might need to be told which century to assume, so that it uses appropriate templates for the buildings. It could use a modern 2-dimensional map, which you annotate to show where the picture subject is located and from where the picture was taken. Ideally, the model that you host on your website should be animated, such that the viewer can fly around the historic community at varying heights. If you want to see a good example of what is possible today with 3-D visualisation on the web, look at. fr and drag your mouse cursor around the screen to get the full effect.
Finale This completes my series on designing a family history website. I’ve covered the ground from putting your family tree online via subscription websites through to creating customised web pages yourself. Finally, in this part, I’ve presented some opportunities for innovative new website projects. Good luck with your research – I hope to see you online!
About the author Mike fgfg Gould is a retired systems engineering manager. He has been researching his family history for nearly two decades and is also chairman of his village local history group. His website Tales My Ancestors Told Me at reflects his approach to family history, finding the tales that our ancestors can tell us by the records that they leave behind!
March 2017
Family FamilyTree 79
03/02/2017 13:23
YOUR Q&A
ADVICE...
with our experts GEOFF YOUNG, JAYNE SHRIMPTON, JOHN MCGEE, CHRISTINE WIBBERLEY, DAVID FROST & DAVID ANNAL
Which generation is this?
Q
This is a photograph (below) of an oil painting and on the back in pencil is written, ‘Little William’. I have added the dates of who I think is the correct ancestor (1784-1876), but I could well be wrong and a generation out. I have so many Williams in my line that it gets a little confusing to say the least! The next William would be 1814-1893 and I do not think that fits. The dates of the William before the one I have in mind were 1765-1816 and that is, I think, too early. I would be very grateful for your assistance. Mr AT Sparling ott4me@gmail.com
A
This charming painting looks to have been well-executed, suggesting an artist of some experience and talent. Although without a signature, sadly we may never know whose work this represents. Portraits of babies and
toddlers are much less common than adult paintings, so there is a relative dearth of comparative material against which to judge this portrait. However the style of the boy’s garments can be dated accurately. The simple lines of his frock featuring an extremely low square neckline and shallow bodice (echoing women’s gowns with high waistlines), expresses the neo-classical silhouette that shaped fashion in Britain broadly from the late 1790s through to c1820. However, the white frills trimming his garment indicate a date in at least the 1810s, when the pure ‘antique’ look was beginning to break up and more decorative elements were appearing, in keeping with the emerging Romantic aesthetic. Similarly, the substantial greycoloured fabric of his frock is a step forward from the ethereal white muslin gowns worn by women and children around the turn of the century. His knee-length hemline is also a more
HOW TO GET IN TOUCH... We welcome your family history queries, and try to answer as many as possible. To contact us:
@
l EMAIL: helen.t@family-tree.co.uk l FACEBOOK & TWITTER Post a query on our Facebook page, facebook.com/ familytreemaguk, or tweet us @familytreemaguk
modern development and, as we see, these shorter garments were worn with ankle-length white drawers to conceal the legs. Finally, his masculine felt or beaver hat is also a Regency style – not an earlier form of men’s headgear. Based on the evidence of dress, I would date this portrait firmly to the period c1815-1825. Therefore, of your three suggested ancestors who may once have been known as ‘Little William’, this boy must be the latest William Sparling, born in 1814. Portrayed here in about 1816, his identity seems to fit this picturesque painting perfectly. JS
Photo analysis Jayne would estimate that this rosy-cheeked cherub-like ancestor was only about 18 or 24 months old when he was painted, as he still wears an infant’s frilled cap or bonnet beneath his ‘grown-up’ hat. Like all little boys at the baby or toddling stage a century or more ago, he wears a frock, the usual garment prior to the ‘breeching’ ceremony of male children when aged about three or four years old. Indeed, following Georgian and Victorian custom, his clothes are essentially a juvenile version of female dress and, hat aside, bear no resemblance to male styles In case there was any doubt about his gender, he has been depicted wearing a masculine hat and playing with a pull-along horse on wheels and a miniature whip – toys firmly associated with boys Adopted by women and children, drawers were worn from about the mid-1810s, becoming firmly established by 1820
80
FamilyTree March 2017
p80-85 Q&As FINALish.indd 80
02/02/2017 10:33
Your questions answered Everyday work wear
Q
Below is a photograph that includes my great-grandfather, Thomas Greenhalgh, born 1860, standing in the centre with a black scarf. The 1911 Census states that he worked in a Bolton cotton mill as an engine tenter. Earlier censuses describe him as a labourer. There are several different types of attire here, which I hope may help, as I would be grateful if you could date the photograph for me. John Wood johncwood1958@gmail.com
A
It is always interesting to see photographs of men in their everyday work wear and you are lucky to have such an image depicting your great-grandfather. However, the clothes worn for many manual jobs were usually a sturdier, coarser, practical version of regular dress and so unless an ancestor is wearing a recognisable occupational uniform (like
public servants, transport workers etc.) or poses with the tools of his/her trade, their appearance may not indicate a specifi c line of work. However, let’s study what these men are wearing closely, date the image and see how much we can establish. This group photograph portrays 10 males, suggesting the distinct possibility of a work-related scene. The older man at the back wearing a conservative three-piece suit and semiformal bowler hat appears to be the company owner, manager or foreman. In place of a suit jacket, your grandfather and the man standing far left both wear loose casual jackets of linen or stout cotton, sometimes called a ‘slop jacket’: these were work wear, favoured by some manual workers. He also wears the flat cap of the working man and, like the youth in front of him, a coloured neckerchief in place of formal collar and tie – a detail that identifies him firmly as a man engaged in manual tasks.
The lad on the right appears to be wearing leather-topped wooden-soled clogs with reinforced metal toe-caps. This suggests a hazardous workplace and is the kind of protective footwear that might be worn in a mill or factory. A firm time frame can be determined from the distinctive wide, flat shape of all of the cloth caps and the narrow trousers and fairly long, narrow style of some of the men’s jacket lapels – a mode fashionable by 1909/1910. We can discount the war years, so the date must be c1909-1914: this means it could just pre-date the 1911 Census in which Thomas was described as a cotton mill engine tenter, or it may date from a few years later. Judging from his appearance here, Thomas could conceivably have been either some kind of indoor ‘labourer’ (a vague term) or possibly already overseeing the operation of mill engines. His prominent position in the middle of the row of standing men, next to the ‘boss’, could suggest that he was considered an important fi gure here: perhaps he was leaving a job, had just been promoted or was joining a new company. Or was he celebrating his 50th birthday with work colleagues, in 1910? I wonder if by chance any FT readers recognise any of the other men in this interesting scene. JS
Can you help?
Several of the men at the back of this group wear smart lounge or business suits (as does the older man) with starched collars and ties. Although their flat caps are essentially a working style, these men may, for instance, be office staff or factory/shop floor supervisors Work-related clues are scarce on this photo, but the lack of aprons and trouser knee ties suggests that none are, for example, builders or farm labourers Following photographic convention, the youngest group members kneel at the front, the middle youth probably aged in his mid to late teens and younger two perhaps about 10-14: they wear boys’ knickerbockers in place of adult trousers and must have been the young apprentices or assistants in the workplace
p80-85 Q&As FINALish.indd 81
Among the collection of ‘family’ medals is one that has always been a mystery. It is a Victorian Crimea medal, with Sebastopol bar, and the correct ribbon. The rim is named for Bombadier J Grant, 12th Battery, Royal Artillery. Attached to the package containing the medal is a note: Wounded 15-8-55. It was presumably in the possession of my paternal grandfather Major James Derham-Reid, MC MRCS LRCP. However, there is no known connection to the family: we are Scottish and hail from Kilmartin parish, Argyll. My grandfather was a doctor in Daubhill, Bolton, and came through three years on the Western Front. Perhaps the medal was given to my grandfather by a patient? Mr Grant would likely have been born in the 1830s, so would have been very old by the time my grandfather began his practice. If there are any Grant families seeking lost connections, our bombardier may be one. JAC Derham-Reid kunghitjim@hotmail.com
March 2017
Family FamilyTree 81
02/02/2017 10:33
YOUR Q&A RootsWeb mailing lists
Tracking down a change of name
Q A
How do I trace a change of name in the 1920s/1930s? Janet Huckle Jm.huckle@btinternet.com
With difficulty! The most common, and formal, way of changing a name is on marriage but I guess that’s not what you have in mind. The other formal way is by deed poll, a legal document drawn up by a lawyer. These are proof you have changed your name but there is no central record of them, although a few have been ‘enrolled’. Some may exist in lawyers’ archives but most have not survived except perhaps in family records or maybe a newspaper mention. See research-guides/changes-of-name for a list of sources. You will have to visit The National Archives (TNA) as, at present, the records cannot be searched online. Wills can occasionally reveal changes of name. There is nothing in law to stop you using any name you fancy provided there is no intent to deceive. It’s quite common nowadays to be asked to prove identity but in times past this seldom happened and it was easy to assume a new identity if you wanted to. The 1939 Register – at Findmypast.co.uk and free online at TNA – can be useful as it states precise dates of birth. By searching on birth date and first name, you may locate your ancestor, providing it’s just their surname they changed. DF
Confirming a line
Q
I decided to follow up on an old family story and am hoping you can help me. My paternal grandmother was Annie Pitcairn 1891-1960. She was born in Crossgates, Fife and it was always said that the Pitcairn Islands were named for the family! Having traced her line this may be true. I have gone back to Major John Pitcairn born in 1725 in Ceres, Fife. He was a major in the Marines and fought at Bunker Hill in the American War of Independence. It would seem that his son, Robert, became a midshipman and in 1767 he sighted the Pacific islands, which were named in his honour. He was lost at sea in 1770. His elder brother, David became a physician to the Prince Regent. If I am on the right track then Major John Pitcairn’s father was the Reverend David Pitcairn, chaplain to the Cameronians, and served with Marlborough at the Battle of Blenheim. I would love to think that all this was my family line but am having trouble trying to confirm it. Can you please help? Robert Gault bellebob7@gmail.com
82
FamilyTree March 2017
p80-85 Q&As FINALish.indd 82
A
Q
I’ve had trouble joining the Devon-L list on RootsWeb and have heard there might be ‘spam’ problems. Does anyone know the situation regarding the future of RootsWeb? Edna Marlow liverpud-49@rogers.com
A
About a year ago Dick Eastman posted on his popular genealogy blog that Ancestry had been trying for weeks to fix a data loss problem with RootsWeb. Since then there have been a number of posts on Ancestry Insider talking about the migration of RootsWeb to a new hardware platform. Throughout July and August 2016 there were many posts about RootsWeb being ‘down’ or inaccessible, including mention of Devon-L, and posts – in a similiar vein – from people saying they are ‘unable to subscribe to any of the mailing lists’. It seems evident that there has been service availability problems with RootsWeb, and Ancestry has certainly had a tough time with RootsWeb this year. My instinct is that moving RootsWeb to a new platform has turned out to be a lot more troublesome than Ancestry originally anticipated. So hope to hear soon when we might be able to join and use RootsWeb mailing lists again. GY
I have examined the Pitcairn lineage you have prepared, starting with your maternal grandmother Annie Pitcairn (born in 1891, in Dunfermline), and which seemed to take the line back to Major John Pitcairn (1725-1777). Major John’s son Robert Pitcairn was also credited with naming the Pacific island of Pitcairn Island after the family name. The short piece of research undertaken appears to show marked variations from the family research and that recorded on the original Scottish documentation, although at certain points the lineages do coincide. The key question, however, boils down to whether Robert Pitcairn who married wife Grizel (Graceful) Burns in 1799 in Carnbee, Fife was the son of Thomas Pitcairn and Catherine Ramsey and thus the grandson of Major John Pitcairn. Before considering this question another factor to consider is the socio-economic class position. Annie Pitcairn’s family history shows that she is descended from a long line of coal miners – Fife having a long and proud tradition of deep-cast mining. Major John Pitcairn’s lineage reflects one of ministry, armed service and landed gentry, although this does not preclude Annie Pitcairn being his descendant.
Now, back to the question of Robert Pitcairn’s birth. There are only two likely matches in Fife in the period in question. The first being Robert Pitcairn born on 31 March 1776 at Kinninmount Estate, Ceres, Fife, to father Thomas Pitcairn and mother Catherine Ramsey and the grandson of the major. The second birth was Robert Pitcairn born on 12 April 1780 at Denhead in St Andrew’s to father Robert Pitcairn and mother Isabel Farmer. The key point here is that Robert Pitcairn, the father, was recorded as ‘a coalier’, which is the occupation of all of Annie’s verified paternal ancestors up to this point. Another consideration is that when Robert Pitcairn married Grizel Burns in 1799, she was pregnant, meaning they had to get married. It appears that Grizel was born in 1782, close in age to Robert Pitcairn in 1780, and what transpired was a very young couple forced by circumstances to wed. On the balance of probability the latter Robert Pitcairn is more likely. The Guild of One-Name Studies (GOONS) has extensively researched the Pitcairn surname and to date there are about 15 separate branches identified. So while this may not indicate the family line you had hoped for, hopefully it is valuable nevertheless. JM
02/02/2017 10:33
Your questions answered Seeking an 18th-century marriage record
Q
I’ve searched every available relevant parish record that I have found online, looking for the marriage of William and Elizabeth Wilkinson. They probably married c1793, for I find them in the register of the Headon cum Upton parish of Nottinghamshire, situated a little to the east of East Retford, whose children were registered at Headon as follows: William, 1795; Esther, 1797; Mary, 1799; Elizabeth, 1801; Grace, 1803; Frances, 1805; Sarah, 1807 and Jane, 1810. I have recently contacted Nottinghamshire Archives to see if there are any manorial records (as there should be in the Manor Court Leet records) to show when William took on the tenancy of a farm at Headon, but there were no appropriate records available. The William Wilkinson and Elizabeth married in Sheffield about 1793 were marrying for the second time, so were elderly. The William Wilkinson and Elizabeth married just south of Lincoln around 1790 were still there in 1841. Has anyone any sensible ideas of where I could search next, taking into consideration that I am over 80 years of age and live in Devon? Edward Wilkinson edward@edwardwilkinson.co.uk
Tips to help st retc re sear ch skills
A
• Keep an open mind – William and Elizabeth who married in 1793 should not be discounted. At this period being widowed at a young age (and to have children with more than one spouse) was common. Further no assumptions should be made that the first child was born as late as 1795, a possibility being that not only the marriage but birth(s) of other
p80-85 Q&As FINALish.indd 83
•
•
•
h
From the research carried out William and Elizabeth appear to have been ‘incomers’ to and ‘leavers’ from Heaton, perhaps dying before the 1841 Census and the introduction of civil birth, marriage and death registrations in 1837. Here are some thoughts and leads which I hope you’ll find interesting.
•
•
•
children took place outside Headon. Remember the limitations of the records – A problem relating to the marriage is that, generally, marriage records of this period contained so few personal identifiers apart from names, marital status, and abode and sometimes groom’s occupation that it may be impossible to identify the correct marriage. Several William Wilkinsons can be found to have married brides named Elizabeth in the period from the mid 1780s to early 1790s, not only in Nottinghamshire but neighbouring counties of, for instance, Lincolnshire. It must be kept in mind that not all parish registers of this period survive and not all are online. Consider whether they moved – A possibility is a move from (and children being born outside) Headon after 1815, as – of the whole family – only William junior can be immediately confirmed from online records to have died there. However, one or more of his sisters might have married and settled there. Identify other family members – If William has a gravestone with an inscription that might assist in identifying other family members, not necessarily with the same surname, buried nearby or even in another parish. Investigate deaths in the family – Establishing the fate of other family members might similarly assist. Thus the death of an Elizabeth Wilkinson registered at East Retford (a district including Headon) in the December quarter of 1840, and shown by the recently updated General Register Office (GRO) indexes to have been aged 39 at death, should be investigated. Explore all the parish records – The starting point should be enquiry into any surviving parish records of Headon to try to establish William senior’s origins, and a likely year of birth. Most valuable would be a settlement certificate or examination – for an explanation see Visit the archives – Nottinghamshire Archives should be approached to see if any records of this nature are available, as a search of the online catalogue does not reveal any. Other parish records, such as churchwardens’ accounts, might show when the family first took up
nline Find guide s o Learn about Nottinghamshire probate records and jurisdications at and for the courts covering Nottinghamshire parishes residence in the parish. • Try to locate a will – If a will for William senior (or a widowed Elizabeth) could be located, naming family members, establishing a date and place of death with perhaps a burial, that might assist. However, be aware that lengthy and possibly speculative searching may be required. Before 1858 a complex system of Church courts and testamentary peculiars covering Nottinghamshire means that any will is likely to be housed in an unexpected place. For example, the will of a William Wilkinson of Barton, Nottinghamshire was proved in 1826 in the Exchequer Court of York, whose records are held by the Borthwick Institute for Archives, York, and there were at least 14 different courts having probate jurisdictions in Nottinghamshire before 1858. CW
EMPLOY A PROFESSIONAL
If distance and/or the complexities of the research are a problem there are professional researchers at hand to assist. Some of these can be found at – the Association of Genealogists and Researchers in Archives. Alternatively, turn to our listings of researchers towards the back of every issue
Read up on it • Tracing Ancestors Through Death Records by Celia Heritage • Tracing Your Pre-Victorian Ancestors by John Wintrip, to be published February 2017
March 2017
Family FamilyTree 83
02/02/2017 10:33
YOUR Q&A Seeking a missing aunt or uncle
Q
I always thought that my grandparents Arthur John Hooper, born Upton Scudamore 1863, and Ellen Hooper (née Adlam), born Longbridge Deverill 1863, had only three children: Frank Percival Hooper (my father) born 10 April 1897, Florence Kathleen in 1894 and Ethel Ruby born July 1900. On looking at the 1911 Census I saw that they had five children born alive. On checking Ancestry by just putting in the surname, I found Reginald Edmund Adlam Hooper born 6 March 1899, died March 1900. I cannot find
1
child number five. Can you help me? Janice Brown janicebrown1948@googlemail.com
A
Four years between children is a fairly long gap for that period so it’s not surprising to find there were others. The 1911 Census is particularly useful in this respect. I was interested to see that Arthur and Ellen married in Q3 1890 at St Saviour Southwark. Why did a couple, both of whom were born near Warminster, marry in an unfashionable part of London? Doing as you did with just a surname search from 1890-1900 I found an Albert Edward Hooper born in St Saviour in
Q3 1890. Are we looking at a shot gun wedding or is that pure coincidence? Perhaps this is baby number five. As Albert was born in 1890, this gives us the opportunity to search the 1891 Census for further possible clues about him. Have you looked at the Southwark parish records, which are in the London Metropolitan Archives? Find details of the LMA records available via Ancestry.co.uk at LMAonAncestry Assuming the children were christened, as was usually the case at the time, they should appear there. So too should the burial of number five. DF
1 The 1911 Census entry that first alerted Janice to the fact her father had two siblings who died (see the note in the column stating how many children of the marriage had died)
2 & 3 Researching back to the 1901 Census, we see parents Arthur and Ellen, with the three children that Janice already knew about: Frank, Florence and Ethel Ruby
2
Research tip! Note that baby Ethel Ruby was listed on the next page of the census (see figure 3) – always view the next page, to make sure you’ve collected the details of all available ancestors in the household
4 Going back to the 1891 Census 3
4 5
84
FamilyTree March 2017
p80-85 Q&As FINALish.indd 84
we should be able to find baby Albert Edward, aged about one, as he was born in 1890. This entry is clearly the right parents as the birth places and Arthur’s occupation matches – however, no baby is listed with them
5 Going to the GRO registers of death on we searched for the death of a baby Albert Edward Hooper, born 1890, aged about one at death, and found no results. A next step would be to order the marriage certificate for Arthur and Ellen and then, inputting the surname Hooper and Ellen’s maiden name, search the birth indexes for the missing baby on certificates/indexes_search.asp
02/02/2017 10:34
Your questions answered Right family? Wrong place?
Q
I am unable to find the parentage of William Saxton. The earliest sighting is his marriage to Emma Elizabeth Seaman on 27 June 1815 in St Matthew’s Bethnal Green. One clue is the witness, Sarah Ann Saxton, who I have found with a brother William (and other siblings) all baptised in St Leonard’s Heston 1784-1791 as the children of John and Elizabeth Saxton. Although the dates of their baptisms seem to fit, the location does not, as it is on the other side of London. I would welcome any advice, thank you. Keith Saxton saxtonkeitth @yahoo.co.uk
A
Looking at what we know about William Saxton we can immediately see where the problem lies. He married before 1837, meaning that we have no genealogical information on his marriage record, and he died before 1841, so we have no clues about his birthplace from the census returns.
SO, WHAT DO WE HAVE TO WORK WITH? • Birth clues – The only evidence we have regarding when he was born is his age at burial: ie 41 in July 1832, suggesting a date of birth sometime between July 1790 and July 1791. • Evidence of variant surname spellings – We know that William is married in Bethnal Green in 1815 and then settled in Whitechapel. It’s not a lot to go on and, although Saxton is a relatively uncommon name, it is easily confused with the surname Sexton. Indeed, both William and his wife Emma Elizabeth were buried under this latter spelling. • Discount false leads – In cases like this, all you can do is come up with a theory and then attempt to disprove it. You’ve already begun this process by identifying William Saxton of Heston as your prime candidate; however, this man can be instantly discounted. The Heston parish registers record the burial of a 45-year-old William Sexton [sic] of Hounslow on 10 January 1836. This is surely the man who was baptised at Heston on 6 May 1790. • The biggest clue – you do have the name of the witness who signed her name as Sarah Ann Saxton at William’s wedding in 1815; she was almost certainly his sister, although it’s possible that she was his sister-in-law, mother or even a cousin – so Sarah may be your biggest clue to finding out more. • Search widely – The names of William’s children could also provide clues and the fact that he was a boot maker could also prove useful, although it’s odd that three of his children described him as a glass-cutter on their marriage certificates. Finally, don’t limit your search to London; the capital has always been a magnet for people from all around the UK. DA
How to explore area boundaries
About our experts Geoff Young is the founder of Microgenealogy, a family history research and consultancy practice. Member of the Association of Genealogists and Researchers in Archives, Member of the Association of Professional Genealogists, Chartered fgfg IT Professional and Member of the Association for Project Management. He has 34 years experience in the IT Industry and a passion for family history spanning 20+ years. Jayne Shrimpton is a professional dress historian, portrait specialist and ‘photo detective’. She is photograph consultant for TV series Who Do You Think You Are? and her latest books are Tracing Your Ancestors Through Family Photographs and Victorian Fashion (2016). Find her online at John McGee is a professional genealogist working in Scotland and a member of the Association of Scottish Genealogists and Researchers in Archives since 2009 and he is currently the association treasurer. John runs Wheech Scottish Ancestry Services specialising mainly in the West of Scotland. Email him at wheechmcgee@hotmail.co.uk Christine Wibberley is a researcher specialising in family history questions with a legal aspect. She is a non practising solicitor and a member of the Association of Genealogists and Researchers in Archives David Frost’s interest in genealogy was sparked by the unexpected appearance of an illegitimate and distinctly dodgy family member in 1967. He’s relieved to find that every month still brings new discoveries. He’s been writing on genealogy topics since 1991 David Annal has been involved in the family history world for more than 30 years and is a former principal family history specialist at The National Archives. He is an experienced lecturer and the author of a number of bestselling family history books, including Easy Family History and (with Peter Christian) Census: The Family Historian’s Guide. David now runs his own family history research business, Lifelines Research.
R e sear c h t ip 1 If searching in an area that you’re unfamiliar with, check out the maps at
p80-85 Q&As FINALish.indd 85
2 By opting to view maps showing boundaries for the parish, hundred etc, you can explore the area well
Do you have a family history puzzle? Make a succinct copy of your notes that relate to the matter and bring them to WDYTYA? Live at Birmingham NEC, 6-8 April, where expertise abounds!
March 2017
Family FamilyTree 85
02/02/2017 10:34
ADVERTISING FEATURE
A personal trip with Leger Holidays On one recent battlefield tour to the site of the D-Day Landings in Normandy a widow set out on a purely personal trip. Inspired by a single clue, Ruth Bettle was following the footsteps of her late husband, Sergeant Vic Bettle
Sergeant Vic Bettle of the 7th Parachute Battalion, who served at the D-Day Landings
S
ergeant Bettle was part of the 7th Parachute Battalion, which would have jumped in or around the Pegasus Bridge area of Normandy on D-Day, 6 June 1944, as part of Operation Tonga. The Parachute Battalion was tasked with supporting the D Coy of 2nd (Airborne) Battalion Ox & Bucks Light Infantry led by Major John Howard. 7th Parachute Battalion advanced to the area of Putot-en-Auge in August 1944. Ruth had received correspondence from a French national several years ago who had been staying in a chateau in the town of Putot-en-Auge. The grounds of the chateau held a barn and it was in this barn that something special had been found. It was an inscription signed ‘Sgt Vic Bettle, 7th parachute Batallion, 19 August 1944’, and the inscription simply read: ‘We chased them out this morning.’ It was this simple sentence that inspired Ruth to undertake a battlefield tour, and to visit the
86
Family FamilyTree March 2017
p86-87 WI advertorial FINAL.indd 86
site where her husband had served during the D-Day Landings all those decades ago. The tour was led by Leger Holiday’s battlefield guide Fred Greenhow who, after speaking to Ruth, arranged for drivers to take a trip out to the chateau. Fred explained: ‘Ruth was absolutely overwhelmed when we found the chateau in the village of Putot-en-Auge, approximately 30km to the east of Caen. Her husband Sgt Vic Bettle who served with 7 Para Bn, had written his message on 19 August 1944. It was discovered by a Frenchman in 1998, who tracked down Vic by writing to Gen Napier Crockenden, 6 Airborne Division Association.’ A chance discovery of a message written on a barn wall, combined with the determination of that person to track down the family concerned, and the opportunities provided by a battlefield tour, has meant that Ruth now has a very personal memory concerning her husband’s role in the D-Day Landings of 1944. Ruth Bettle visiting the barn in which her husband’s precious note from the D-Day Landings is inscribed on the wall
Thanks to Fred, and to Ruth and her daughter Karen, we can share this story and keep the memory of Sergeant Vic Bettle alive.
Why take a tour? On a battlefield tour, you’re heading off on a journey of learning, understanding and appreciation. Leger Holidays can reunite family and friends with a sense of their past, and this is something the company is very proud of. Leger Holidays, who will be exhibiting at the WI Fair in March, offer great value holidays and amazing experiences. Visit their stand to find out more.
Come to the WI Fair! The Women’s Institute Fair is being held at Alexandra Palace, 29 March to 1 April 2017, and everyone is welcome, both WI members and non-members. With a busy schedule of workshops and talks covering crafts, travel and lifestyle, and a tempting range of shopping opportunities, the WI Fair is a fabulous day out. Family Tree will be there too, so do pop along and say hello to the team! Come along and be inspired.
Below right: Vic Bettle’s scrawled note
Find out about Leger Holidays’ range of battlefield tours at w w w.leger.co.u k/ battlefields
03/02/2017 10:12
TICKETS NOW ON SALE! This year’s WI Fair will take place in a truly iconic venue. Home to a Victorian theatre, the original BBC studios and what was once the most celebrated concert organ in Europe, Alexandra Palace will bring a sense of spectacle and history to the occasion.
29th March – 1st April 2017 Alexandra Palace, London
Taking place in the magnificent Great Hall, the Fair is the perfect day out for women with a zest for life: from food and drink to adventure holidays; health and beauty to craft and creativity; cookery and baking to gardening and horticulture. You don’t have to be a member of the WI to come to the Fair; this grand day out is open to all. We have lots of free talks and demonstrations; creative workshops, tasty treats galore and lots of shopping to tempt everyone.
Features include: Craft Theatre Live Kitchen Theatre The WI Fair Challenge Travel, Outdoors & Lifestyle Theatre Workshop Studios The Designer Maker Village and demonstration area
thewifair.co.uk p86-87 WI advertorial FINAL.indd 87
Tickets are now on sale via our website. Entry tickets start at £11 with discounts for bookings of 10 or more. Limited workshop places still available but early booking is advised*. *Terms & conditions and a transaction fee apply.
03/02/2017 10:12
DIARY DATES Find or post diary dates at for FREE or email them to editorial@family-tree.co.uk
TAP HERE
FOR MORE IMAGES FRO M THE IWM EXHIBITION
MARCH 2017 Various dates Workshops Hampshire. Hampshire Record Office in Winchester is running a new series of monthly workshops, starting 28 February with an ‘Introduction to Family History Sources’, followed on 29 March by ‘Maps as local history resources’ (2-4pm, £14 each). In addition, the popular Archive Ambassador Day takes place on 21 March when you can join archivists to learn about cataloguing, preservation, digitisation and oral history (10am-3.30pm, £30). All events must be booked, see website for details. • Hampshire Archives and Local Studies, Hampshire Record Office, Sussex Street, Winchester, Hampshire SO23 8TH; www3.hants.gov.uk/archives 4 March Day school Kent. To unravel the medieval mysteries of coats of arms, join this Heraldry course at The Institute of Heraldic and Genealogical Studies in Canterbury, which aims to show that the records of heraldry can be of great use to family historians. • 10.15am-4.30pm. £40/£45 (including lunch); Starts 10 March Weekly course Surrey. Enhance your research and learn some tricks of the trade on this six-week family history course run by professional genealogists and archivists at Surrey History Centre in Woking. The spring course runs until 21 April (includes twoweek Easter break). • 10am-1pm, £60. To book, call 01483 518737 or visit the Heritage events tab at
88
People Power: Fighting for Peace Emmeline Pankhurst (left) and her daughters Christabel (centre) and Sylvia, at Waterloo Station, London, in 1911. All three were key figures in the women’s suffrage movement, which largely supported the British war effort in WW1, however, some campaigners – including Sylvia – opposed the war
I
WM London’s major new exhibition, opening in March, will explore the evolution of the anti-war movement in the past 100 years. Rare items such as Siegfried Sassoon’s handwritten copy of his poem The General and original sketches for the peace symbol go on display in ‘People Power: Fighting for Peace’, telling the stories of individual and collective acts of protest against war as part of the IWM’s centenary programme of events and exhibitions. The unique collection of more than 300 artefacts, from IWM’s rich collections and elsewhere, will take visitors on a journey from the First World War to the present day, looking at how peace activists have influenced perceptions of war and conflict. Paintings, literature, posters, banners, badges and music reveal the breadth of creativity generated by those who have opposed war and how anti-war protest has been inextricably linked to the cultural mood of each era. Other highlights include Wire (1918) by Paul Nash, CRW Nevinson’s Paths of Glory (1917) and a handwritten letter by Winnie the Pooh author AA Milne outlining his struggle to reconcile pacifism with the rise of Hitler. Emotional personal letters also reveal the harrowing experiences of conscientious objectors who faced non-combatant service, forced labour, imprisonment and hostility from wider society. Matt Brosnan, the exhibition’s historian and curator, said: ‘This.’ • From 23 March to 28 August. Tickets £5-£10, members free, available from
10-28 March e-courses Online. Pharos Tutors’ e-courses this month include: The National Archives Website and Catalogue – Finding People by Guy Grannum (10 March, 3 wks, £34.99); Your Military Ancestors with Simon Fowler (13 March, 4 wks, £45.99); Researching Online for Advanced Genealogists with Peter Christian (16 March, 4 wks, £62) and Organising Your Genealogy with Barbara H Baker (28 March, 3 wks, £34.99). • Book your place at pharostutors.com
Pancras is offering a new programme of adult learning courses, which include bookbinding (11-12 March, from £157) and conservation (18 March, from £47). • Full details and booking at events/adult-learning-courses
11-12 & 18 March Weekend courses London NW1. The British Library in St
12 March AGM & open day Leicester. Head to Leicester and Rutland
Family FamilyTree March 2017
p88-89 Diary dates FINAL.indd 88
Family History Society Open Day and AGM held at the county cricket club. The AGM takes place at 11.30am and will be followed by two guest speakers, Anthony John from Fraser & Fraser, one of the companies involved in The Heir Hunters TV series, and Dr Susan Tebby on Adventures and Anecdotes Of An Addict, about her 60 years of family and local history research.
06/02/2017 15:59
t
Local history, heritage groups and archives will also be attending and there will be a family history helpdesk and search service. • 10am-4.30pm, free entry. Leicestershire County Cricket Club, Leicester LE2 8AD; 14 & 18 March Talks Buckinghamshire. This month’s talks at Bucks Family History Society include: Marlow On Thames in which Julian Hunt takes the audience back to earlier times, when the Thames was a vital commercial artery filled with wharves, bargemen, flashlocks and eel traps (14 March, 7.45pm, Bourne End), and The Parish Chest with Ian Waller (18 March, 2pm, Aylesbury). • All talks free (non-members welcome, a small donation is appreciated to hear talks); 18 March Seminar West Yorkshire. The Guild of One-Name Studies’ Regional Seminar, which is open to guests, is taking place in Wakefield and features talks on Early Asylum Life by
David Scrimgeour, Using Manorial Records for Family History by Rachel Dunlop, First Steps in DNA by Jackie Depelle, Tools and Methods for Websites by Paul Featherstone and My Dowles of Romney Marsh by David Burgess. • 10am-4pm. For more details, visit 24-26 March Weekend course Kent. The Institute of Heraldic and Genealogical Studies in Canterbury is running an Advanced to Exam Preparation course for those either taking the Higher Certificate examination in June or for genealogists wishing to discover more about oft-neglected sources. • £195/£235; 25 March Workshop Leeds. Learn how to scrapbook family history with tutor Mandy Williams in this Your Fair Ladies-run workshop in Pudsey. • 10.30am-3.30pm, £20. To book, email depellejg@aol.com or visit http:// yourfairladies.ning.com/events
Images: Pankhursts © IWM (Q 81490) reproduced under the IWM Non-Commercial Licence; jars © Crossrail/Museum of London Docklands; house history © Society of Genealogists
Starts 13 March Identify family photos Norwich. If you have inherited family photos you can’t identify, head along to the Who Do You Think They Are? exhibition at The Forum’s Millennium Library, the highlight of which will be local historian Andrew Tatham’s A Group Photograph, the culmination of 21 years of research into a WW1 photograph. There will also be
p88-89 Diary dates FINAL.indd 89
SOCIETY OF GENEALOGISTS 1 March 2-3pm Fraud, Forgery, Feuding: The Coming of Civil Registration – with Dr Gwyneth Wilkie (£8); 4 March 10.30am-1pm Census Substitutes – with John Hanson, FSG (£20); 9 March 12-1pm A Brief History of St Luke (Old Street) Parish – with Mark Aston from the Islington Museum and Local History Centre (£8); 11 March 10.30am-1pm Family History Software on the Mac, including Reunion 11, MacFamily Tree 8, Heredis 2015 and Family Tree Maker 3.1 – with Graham Walter (£20); 15 March 2-3pm London’s Lea Valley – with Dr Jim Lewis (£8);
NEW EXHIBITIONS Starts 17 February LGBT history London SE1. To mark LGBT History Month, the London College of Communication (LCC) Moose on the Loose 2017 presents Ken. To be destroyed, which explores 1950s’ transgender identity through family archives. Artist and photographer Dr Sara Davidmann and her siblings inherited the letters, photographs and papers belonging to their uncle and aunt, Ken and Hazel Houston, from their mother in an envelope labelled ‘Ken. To be destroyed’, revealing Ken had been transgender. This exhibition tells the hidden history, reimagined using photographs and modern artwork by their niece. • Weekdays 11am-7pm until 24 March. Upper Gallery, London College of Communication, Elephant and Castle, London SE1 6SB. The Ken Project Archive is on display Tuesdays (12-2pm) and by appointment;
Learn how to trace your house history at the Society of Genealogists in March
tips for family historians on how to identify long-forgotten people in family photos, and family history resources. • Daily 10am-4pm until 1 April. Free. The Forum, Millennium Plain, Norwich NR2 1TF; For more details, see news, page 8. Now until 3 September Crossrail finds London E14. Discover objects spanning 8,000 years of human history in Tunnel: The Archaeology of Crossrail at the Museum of London Docklands. Some 10,000 artefacts, unearthed by Crossrail during the construction of London’s forthcoming Elizabeth railway, enabled archaeologists to uncover the stories of Londoners’ past, from Mesolithic tool makers to those affected by the Great Plague of 1665. • Daily, 10am-6pm. Free. West India Quay, London E14 4AL;. org.uk/museum-london-docklands
Unearthed by Crossrail: 19th century ginger jars from Crosse & Blackwell bottling factory near London’s Tottenham Court Road station
16 March 2-3pm Lies, Damned Lies and Family History – with Rev Wim Zwalf (£8); 22 March 2-3pm How to Die like a Victorian – with Holly Carter-Chappell (£8); 25 March 10.30am-5pm Tracing your House History For Family Historians – with Gill Blanchard (£35); 29 March 2-3pm Tracing your Ancestors through Local History Records – with Dr Jonathan Oates (£8). • The Society of Genealogists, Goswell Road, London EC1M 7BA. Book via events@sog.org.uk, 020 7553 3290 or visit
LOOK ONLIN E
Keep up to da te with events at The National Archives at w ww. nationalarchi ves.gov.uk/w hatson This month th ey include a H idden Treasures web inar on 20 Mar ch, explaining ho w to access Tr easury correspondenc e 1777-1920, and A Woman’s W ar WW1 talk on 30 March, bo th free
March 2017
Family FamilyTree 89
03/02/2017 10:16
YOUR LETTERS
MAILBOX
30 YEARS
FamilyTree ADVICE YOU EXPERT BRINGING
FOR OVER
e.co.uk
MARCH 2017
TBARCKATOCTHEE
1500s
E YOUR S TO TAKOR TIMES RECORD 15 KEY CH BACK TO TUD RESEAR
0th Happy 10 ra Ve enary Da me ing a cent Celebrat es’ Swe of the Forc RDS EXPLAINE
etheart
D
r How to find you TIONS
FREE RECO
IRISH RELA
5 ESSENTIAL WEBSITES Y! TO TRY TODA
IGATING
INVEST re & lives in cultu ain War 2 Brit post-World
s ay ancestor n your railw fashion s • Track dow style & 1940 • Wartime
£4.99
Plus
New to family histor y?
06/02/2017
11:21
reading We love s, and try ter your let h as many s to publi le. Find out ib s as pos get in touch how to us on with 1 page 9
Tracing blue blood (royal connections aren’t as rare as you might think), investigating 19th century divorces, and on the trail of a brave man involved in a shark attack over a century ago...
dd 1
p01FT_Mar.in
My connections to the Conqueror I watched the Who Do You Think You Are? programme about Danny Dyer’s descent from William I with particular interest, as I too can trace this descent, and was intrigued to see how things would be demonstrated on the programme. I thought there was rather a large hiatus between the workhouse ancestors and the Civil War Gosnold, and, since most people starting in family history will usually start from the lower end, so to speak, they would perhaps have found this part more interesting than visiting present-day aristocrats. It has been pointed out that many people have such a descent. Tracing it is a matter of hard work, lots of time and serendipity. The ‘nitty-gritty’ of the programme came for me when the archivist began to unroll the scroll which had William at the top. How I peered at my screen as the scroll slowly unrolled! Yes! There was Henry Percy (Hotspur) and his wife, Elizabeth Mortimer. This was indeed my own line, which descends from their granddaughter Margaret Clifford of Skipton. At this point it was getting harder to see what was happening on screen, but I thought I saw the name ‘John Clifford’ as part of Danny’s descent. John Clifford was Margaret’s brother (known as ‘black-faced’ Clifford, and ‘the Butcher’, and killed at Towton). So not only am I related to William (and Edward et al), but also to Danny Dyer – goodness me! I have been interested in my family history since childhood, and luckily made a practice of asking grandparents and other relatives to tell me what they knew, and to write it down. My early research was aided by a family bible, with birth, marriage and death dates already carefully filled in. My public library (Skipton) had an extensive range of parish registers and many other useful works. Like most people, I have had to pursue my obsession in fits and starts, ‘real life’ creeping in amongst, so there was a big sweep in the 1950s, a surge in the 1970s and now, another resurgence as I attempt to sort out ‘all that stuff’ to please my daughter, who does not share my interest. My first bit of luck was in Skipton library where I discovered a collection of Yorkshire deeds and miscellaneous papers, among which I found a marriage settlement of 1618/1619 for Marmaduke Fawcett and Elizabeth Lodge. Research showed that Elizabeth’s mother was born Isobel Maude. Then I found a pedigree of the Maude family in Foster’s Pedigrees of the County Families of Yorkshire: much of it proved to be incorrect, but as it indicated the family was armigerous I was led to the Visitations, and to the invaluable printed volumes of Early Yorkshire Charters, which have opened up the possibilities
90
Family FamilyTree March 2017
p90-92 Letters FINALISH just needs cover thumbnail.indd 90
of research into the Anglo-Saxon period via the land-owner, OSER FOR A CL EILA Gospatric (work is SH ongoing here). LOOK OF TES So, the whole thing O COE’S N is never-ending, is it not? I think the sort of work I am doing now might perhaps be called ‘prosopography’. That seems to be a method of fitting people into groups who interact together. Sheila Coe sheilacoe@sky.com Editor: Thanks for encouraging us all that research back so many centuries is not an impossible task. To any fellow readers feeling inspired to take up the gauntlet, don’t miss Celia Heritage’s tips on page 12
E TAP HER
This is Sheila’s tree, showing her connections to William the Conqueror (see Sheila in the top left). She finds it interesting how someone of an ordinary working-class family can descend from such origins – and discover it for themselves
Engine driver in the family I noted that this issue of Family Tree was going to have an article on railway workers and thought you may be interested in my wife’s grandfather, who was an engine driver in Scotland, so I have sent you some clippings. Philip Thompson philip@pmthompson.co.uk Editor: Many thanks for sharing these pictures – how excellent to have these clues to this ancestor
Driver and express locomotive man John Borthwick, seen here with his steam engine
06/02/2017 16:04
Have your say Twists & turns of married life I did enjoy the Christmas issue and its focus on the Victorians. My great-grandfather’s younger brother George Hatten was born in Great Finborough, Suffolk, in 1854, and married Sarah Louisa Howe in 1876, but they divorced in 1882. Divorces were very uncommon in those days! In her divorce petition (on Ancestry) Sarah not only accused George of physical and verbal abuse, but also claimed that he had repeatedly committed adultery with a servant, Mary Ann Grimwood, who had given birth to a child fathered by George on 13 April 1878. No affiliation proceedings were brought because George gave Mary Ann £25 to prevent her doing so. Sarah herself had given birth to a daughter, Agnes Ellen Louisa Hatten, in 1877, but she sadly died in July 1878, which must have made the birth of Mary Ann’s child even more galling. Surprisingly, my grandmother’s sister Lillian Esther Hatten, aged six, was living with George and Sarah on their Great Finborough farm in 1881, three years after this incident, not a suitable environment for a child I would have thought. Sarah left the family farm on 7 January 1882, moving back to Great Bricett, and was granted a decree nisi by Sir James Hannen (a famous divorce judge known to the Victorians as ‘The Great Unmarrier’) at the High Court in London on 14 December 1882. Sarah reverted back to her maiden name of Sarah Louisa Howe and married another farmer, Arthur Edward Gowing, at Linton Register Office, Cambridgeshire, on 29 November 1884, but that marriage too failed! In 1891 she was living separately from Arthur in Hove, Sussex, in reduced circumstances working as a general servant at the home of Charles and Rachel Nye. In 1896 Sarah went through
Snippets of war
p90-92 Letters FINALISH just needs cover thumbnail.indd 91
HOW TO GET IN TOUCH... Share your views with fellow readers on the FT letters pages. To contact us: POST Letters, Family Tree at the address on page 3 EMAIL helen.t@family-tree.co.uk FACEBOOK & TWITTER Get in touch on our Facebook page, facebook.com/familytreemaguk, or tweet us @familytreemaguk
Licensed to embark
In this monthly First World War Snippet Keith Gregson looks at an aspect of wartime intelligence
Many years ago a friend of my father’s gave me the first draft of a text about sniping in the First World War. The author (his father), had been a sniping officer on the Western Front but later was moved into intelligence. Inside the brown paper bundle that contained the faded typed text was an intriguing label. The bearer must have had it with him attached to his uniform or to his attaché case. It is fascinating to muse as to what he was up to. Clearly he had fairly open access to Southampton Docks and although the word ‘embarkation’ appears on the card, it is more than likely that he wanted to have words with those disembarking. Was he merely questioning those coming home on leave to see if their knowledge could help with policy on a front? More darkly – was he sent to discover if the wounded were genuinely wounded? There were certainly instances (and I can recall one from a feature film of the late 20th century) where efforts were made to establish
her second divorce. Arthur Edward Gowing divorced her because of her adultery, and she married Arthur Henry Adames in Hove in late 1896, this time using the name Sarah Louisa Gowing. A son, also Arthur Henry Adames, had been born in Ipswich in 1895. I originally thought that George was a monster, but he subsequently married Elizabeth Mather Last and had three children by her. That marriage survived George losing his farm and ending up in London working as a bus driver! Lastly, I’m also working on my blog, as recommended by Chris Paton in the January issue. You can find it at Alan H Fraser alanhfraser@virginmedia.com Editor: Big pat on the back for starting your blog – we’re very impressed and love the gallery and stories you’ve placed there
whether wounds were self-inflicted or not. Equally, recent case studies with which I have been involved tend to show that the establishment was keen to turn around wounded officers as quickly as possible. As the war progressed there was an increasing lack of officers at the Front. Sadly we will probably never discover exactly what lieutenant Sleath was up to.
@
Tweets from our followers • Bliss! Back home to find latest issue of @familytreemaguk on my doorstep! So pleased the ‘Twiglets’ section is as entertaining as always! @DSRGenealogist • Really good issue. Not just because @HeritageWSHC is featured! Also great articles on online book resources & more. #archives #genealogy @Ptolemais101 • We’re thrilled to be featured in the current edition of @familytreemaguk – find out what goes on behind the scenes at the History Centre! @HeritageWSHC Like to join the conversation. Come and find us on Twitter @familytreemaguk
An Intelligence Officer’s Dock Pass – what was it that he wished to discover about the servicemen travelling through Southampton Docks?
Keith will be giving a talk on researching First World War ancestry with particular reference to individual case studies at WDYTYA? Live at the NEC, Birmingham 6-8 April and would love to meet up with any regular ‘snippeteers’
March 2017
Family FamilyTree 91
06/02/2017 11:28
YOUR LETTERS Seeking shark attack medallist. After lunch, the men went swimming. Barlow asked Morgan if there was any danger from sharks as he was a bit nervous. Morgan replied, ‘No, I don’t think so, it is too far up’. Barlow was right to fear shark attack, however, as there had been three in the Lane Cove River since 1900. Nevertheless they went on swimming. Then suddenly Morgan cried out ‘Help, oh help. A shark has got me!’. Together, McKay and Barlow. I have traced Barlow and McKay, but all that is known of William David McKay is that UK. In 1972 the British Government decided that any Albert Medallists who were alive in October 1971 would be considered holders of the George Cross – they could keep their original awards or exchange it for the George Cross. Sixty-eight Albert Medallists were known to be alive in 1971 but recently research has found five more who were in fact alive, but didn’t come forward to make the fact known. So was William David McKay still alive in October 1971? Paul Street 30 Baldwin Avenue, Boronia 3155, Victoria, Australia Editor: We have published Paul’s list of Albert Medallists whose dates of death are unknown, so could have been alive in October 1971, which you can study at
Taking the archives plunge I just wanted to say what a great DVD ‘Explore the Archives’ was, which was free with your February edition. I have been a bit apprehensive to go The National Archives and the Society of Genealogists’ offices. Various magazines have given descriptions and advice about attending these places, but it has made such a difference to my confidence seeing how things work on the DVD. I am now going to make arrangements to attend both the TNA and the SoG. Steve Chaplin chaplin-s@sky.com Editor: We’re delighted to hear it’s been such a help. Enjoy the archive trips and do let us know how you get on!
CLASSIFIEDS
92
Family FamilyTree March 2017
p90-92 Letters FINALISH just needs cover thumbnail.indd 92
ADVERTISE HERE Contact Kathryn Ford Tel: 0113 200 2925 or email: kathrynf@warnersgroup.co.uk
06/02/2017 11:28
YOUR ADVERTS ONE-NAME STUDIES ● ADSHEAD / ADSHADE / ADSETT / ADSIT Worldwide onename study (GOONS). Enquiries welcome. Information gratefully received: Keen to contact persons with these surnames interested to take part in DNA studies. Gordon Adshead, 2 Goodrington Rd, Handforth, Cheshire SK9 3AT; ● BECKINGHAM and variants. Worldwide. All enquiries welcome (with SAE please). Alex McGahey, 2 Vane Road, Thame, Oxfordshire OX9 3WE; Tel: 01844-217625; email: beckingham@one-name.org ● DISNEY/DIZNEY One-name study (GOONS). Information and enquiries gratefully received. Looking for person with these surnames to participate in the yDNA project. Helen Coan email: disney@one-name.org ● HODGKINSON Information sought, especially family trees. Other material and enquiries always welcome. Avis Keen, 36 Trinity Drive, Holme, Carnforth LA6 1QL. hodgkinson@one-name.org ● LEEDS Worldwide one-name study. Enquiries welcomed, information gratefully received. Everett Leeds, Flat 1, 4 Hardwicke Road, Reigate, Surrey RH2 9AG ● ALDERSON FAMILY HISTORY SOCIETY. A One-Name Study. Our society was founded in 1984. We have 250 members all over the world. Our Database has some 166,000 records including 84,000 births, marriages and deaths in addition to census records. Please see our website to find more information and details of how to join.
FAMILY HISTORY SOCIETIES ● BEDFORDSHIRE Family History Society welcomes new members. Contact:-; bfhs@bfhs.org.uk or P.O.Box 214 BEDFORD MK42 9RX
MISCELLANEOUS ● SPECIALIST MARITIME RESEARCH on British mariners of the Royal Navy and Reserves, Mercantile Marine and in Indian service.; len@barnettmaritime.co.uk; 26 Holmewood Gardens, LONDON SW2 3RS. (with S.A.E. please) ● FAMILY TREE DRAWING - Family Trees drawn from your research records, GEDCOM files etc. Old trees updated. Professional service. Dave Hobro, Newtown Design Services, 60 Linksview Crescent, Newtown, Worcester WR5 1JJ. E-mail: familytrees@newtowndesign.co.uk ● WILLS etc. Transcribed, Latin translated. Margaret McGregor, Tel: 01179507508; email: questions@wheretheresawill.org; website: ● YOUR RESEARCH written up in an illustrated family history by published author. Michael Sharpe, tel: 01527 877714; email: research@writingthepast.co.uk; or visit
BOOKS ● FIND BOOKS ON WWII ITALY by MALCOLM TUDOR on prisoners of war, SOE and Resistance at and ● BOOKS ON RURAL LIFE. Leading specialist in old books on rural life difficult to find elsewhere. Rural Recollections; Local History; Rural Occupations; Rural and Agricultural History; Cottage Life; Folklore, Mills, Gypsies and all aspects of rural life as it used to be... Large stock. Free catalogues. Postal only. Established 1970. Cottage Books, Rempstone Road, Gelsmoor, Coleorton, Leicestershire LE67 8HR email jenny@boyd-cropley.co.uk
p93-97 Lineage.indd 93
Advertising ANCESTRAL RESEARCH ● AMERICA See Sovereign Ancestry, LINCOLNSHIRE ● AUSTRALIA See Sovereign Ancestry, LINCOLNSHIRE ● AYRSHIRE & SOUTH WEST SCOTLAND. Experienced researcher June Wiggins BA, Links Research - “Linking the present with the past.” E-mail: linksresearch@aol.com ● BEDFORDSHIRE, BUCKINGHAMSHIRE, HUNTINGDONSHIRE, NORTHAMPTONSHIRE family and local history research undertaken by experienced researcher. AGRA member. Enquiries welcome. Colin Davison, 66 Sudeley Walk, Bedford, Bedfordshire MK41 8JH; Tel: 01234364956; email: colinndavison@gmail.com; website: ● BEDFORDSHIRE See Sovereign Ancestry, LINCOLNSHIRE ● BERKSHIRE based genealogical researcher with 30 years’ experience in family history. Contact Louise Fenner at 22 Causmans Way, Tilehurst, Reading, RG31 6PG. email: louise.berkshire@gmail.com ● BELGIUM See Sovereign Ancestry, LINCOLNSHIRE ● BRISTOL See Sovereign Ancestry, LINCOLNSHIRE ● BUCKINGHAMSHIRE See Sovereign Ancestry, LINCOLNSHIRE ● BUCKINGHAMSHIRE and surrounding counties. Family and house history research by experienced qualified researcher. AGRA and APG Member. All enquiries welcome, reasonable rates. Cathy Soughton, 15 Walnut Drive, Wendover, Aylesbury, Buckinghamshire, HP22 6RT; Tel: 01296624845; email: soughton@btopenworld.com; website: ● CAMBRIDGESHIRE and English records. What stories could your ancestors tell? Professional research, coaching talks and courses:; Robert Parker; 07803129207 ● CAMBRIDGESHIRE See Sovereign Ancestry, LINCOLNSHIRE ● CAMBRIDGE See Past Search, NORFOLK ● CANADA See Sovereign Ancestry, LINCOLNSHIRE ● CHESHIRE See Sovereign Ancestry, LINCOLNSHIRE ● CORK CITY & COUNTY, Ireland in general. Rosaleen Underwood MAGI, experienced genealogist with local knowledge. Special interests : Local History and Social History. 15 Whitechurch Drive, Ballyboden, Dublin 16, Ireland. Email : underwor.rmc@gmail.com ● CORNWALL See Sovereign Ancestry, LINCOLNSHIRE ● DERBYSHIRE Family and local history by conscientious and friendly researcher with over 30 years experience. Kate Henderson BA Hons History, 16 Steeple Grange, Wirksworth, Derbyshire DE4 4FS; Tel: 01629 825132; katehenderson16@gmail.com; ● DERBYSHIRE See Sovereign Ancestry, LINCOLNSHIRE
March 2017
Family FamilyTree 93
06/02/2017 12:26
YOUR ADVERTS ANCESTRAL RESEARCH ● DEVON Devon Roots Family History, a friendly qualifi ed professional researcher, quote this advert for a 10% discount. Contact Brian on 01822 258452, or email: bbarhamfamily@aol.com for details. ● DEVON and SOMERSET family history research. Free estimate. Heather Ayshford, D’Esseford, Blackborough, Cullompton, Devon EX15 2HH; Tel: 01884-266312; Devon_ Somerset@hotmail.com ● DEVON See Sovereign Ancestry, LINCOLNSHIRE ● DORSET See Sovereign Ancestry, LINCOLNSHIRE ● DORSET See Katherine Cobb, SOMERSET ● ENGLAND Research conducted in all English counties – We are a leader in the effective retrieval of genealogical information - let us research your family history for you. English Family Origins, Grasmead House, 1 Scarcroft Hill, York, YO24 1DF; Tel: 01904 622243; website:; email: research@ englishfamilyorigins.com ● ESSEX See Past Search, NORFOLK ● ESSEX/LONDON. Rita Harris, 71 Vicarage Road, Chelmsford, Essex CM2 9BT; 01245-346490 email: rita@johnkennedy.freeserve.co.uk Enthusiastic research at reasonable rates. ● ESSEX See Sovereign Ancestry, LINCOLNSHIRE ● GERMANY See Sovereign Ancestry, LINCOLNSHIRE ● GLOUCESTERSHIRE See Sovereign Ancestry, LINCOLNSHIRE ● GLOUCESTERSHIRE Family and Local History Research undertaken. All enquiries welcome. email: c.callen49@yahoo.co.uk tel: 01453 882798 ● HAMPSHIRE See Sovereign Ancestry, LINCOLNSHIRE ● HERTFORDSHIRE and LONDON family history research (AGRA member). All enquiries welcome. Mrs Carolynn Boucher, 1 Ivinghoe Close, Chiltern Park, St Albans, Hertfordshire AL4 9JR; Tel: 01727-833664; email: Carolynn.Boucher@virginmedia.com ● HOLLAND See Sovereign Ancestry, LINCOLNSHIRE ● HUNTINGDONSHIRE Family & Local History Research. All enquiries welcomed. Experienced friendly researcher. Caroline Kesseler Dip.HE Local History (Cambridge), 42 Crowhill, Godmanchester, Cambs. PE29 2NR; email: caroline.kesseler@ntlworld.com; web: ● HUNTINGDONSHIRE See Sovereign Ancestry, LINCOLNSHIRE ● INDIA Offi ce records at the British Library. Other London record repositories by request. Janice O’Brien; B.Ed. 11,Ravenscar, Bayham Street, London NW1 0BS. Email: janicekivlanobrien@hotmail.com ● IRELAND See Sovereign Ancestry, LINCOLNSHIRE ● ISRAEL See Sovereign Ancestry, LINCOLNSHIRE ● ITALY See Sovereign Ancestry, LINCOLNSHIRE ● KENT See Sovereign Ancestry, LINCOLNSHIRE
94
Family FamilyTree March 2017
p93-97 Lineage.indd 94
● KENT. Do you have ancestors in Kent or South East England? An experienced genealogist will help you track them down. Friendly service, reasonable rates and honest advice. Help with holiday arrangements including visits to ancestral parishes, accommodation and archive booking, and travel during your stay. I also offer a photographic service. Sarah Talbutt, Roots in Kent, Springfield, Mereworth Road, West Peckham, Maidstone, Kent. ME18 5JH. Tel 01622 813862 Web page: email: sarah_talbutt@yahoo.co.uk Facebook Roots in Kent: Twitter Roots in Kent @kentishroots ● LANCASHIRE (LIVERPOOL) & CHESHIRE RESEARCH undertaken at reasonable rates, contact: Mrs E Williams, 19 Rosemount Park, Oxton, Birkenhead, Merseyside CH43 2LR, UK Email: elizabeth.williams90@ntlworld.com ● LANCASHIRE, CHESHIRE & CUMBRIA. Comprehensive Research Service undertaken by experienced Genealogist. Denise A Harman, 12 Wrights Fold, Leyland, Preston, Lancashire. PR25 4HT; E: harman@blueyonder.co.uk; W: ● LANCASHIRE, CHESHIRE AND CUMBERLAND. All types of research undertaken by experienced genealogist. Jane E Hamby LHG, 26 Fulwood Hall Lane, Fulwood, Preston, Lancashire PR2 8DB; Email: hambyje@aol.com ● LEICESTERSHIRE. Diligent, thorough service offered by experienced family historian. Moderate fees. All enquiries welcome. Write (with SAE), phone, email. Virginia Wright, 64 Shirley Avenue, Leicester LE2 3NA; Tel: 0116-270-9995; email: pvwright@hotmail.co.uk ● LEICESTERSHIRE See Sovereign Ancestry, LINCOLNSHIRE ● LINCOLNSHIRE All sources reliably and enthusiastically searched by experienced researchers. Countrywide service. Sovereign Ancestry, 3a Welby Street, Grantham, Lincolnshire NG31 6DY; Tel: 0845 838 7246 or 07721-679104; email: sovereign.ancestry@gmail.com; website: ● LINCOLNSHIRE Family/local history research. Dr Wendy Atkin, 15 Castle Street, Sleaford, Lincs NG34 7QE; Tel: 01529-415964; wendy@kinword.co.uk ● LONDON See Sovereign Ancestry, LINCOLNSHIRE ● LONDON family history research undertaken by experienced graduate researcher, all London archives covered. Peter Houghton BA (Hons), Flat 12 Bedford House, 380 London Road, Croydon, CR0 2FU; email: pwhoughton@hotmail.com ● LONDON AND ADJACENT COUNTIES ancestral research. Details of services provided on application. All enquiries welcome. Richard Vanderahe, BA, MA, 23 Chestnut Drive, Broadstairs, Kent CT10 2LN; Tel: 01843 579855; email: richard.vanderahe@icloud.com ● LONDON /KENT/COUNTRYWIDE. Family History Research. Jennifer Pinder (Agra Associate) Experienced researcher, affordable rates. 16 Chelsfi eld Gardens, London SE264DJ jenniferpinder1@icloud.com ‘make your story come to life’. ● MANCHESTER/LANCASHIRE See Sovereign Ancestry, LINCOLNSHIRE ● MANCHESTER (GREATER) LANCASHIRE I specialise in the Lancashire, Greater Manchester, Bury, Bolton, Oldham, Middleton, Rochdale, Salford etc. I have helped countless clients fi nd their roots. All aspects of research undertaken -
06/02/2017 12:26
Would you like to advertise? DISPLAY & CLASSIFIED ADVERTS For further details, please contact Kathryn Ford: 0113 200 2925 kathrynf@warnersgroup.co.uk
from single items to full histories, at affordable rates. Over 30 years experience. Mrs K Stout, *NEW ADDRESS* 13 Bramley Drive, Brandlesholme, Bury, Lancashire BL8 1JL. Tel: 0161 258 9535; email: stoutroots@aol.com; ● NORFOLK specialist. All types of research and Latin translation, by professional genealogist (30 years’ experience). AGRA member. Diana Spelman BA, 74 Park Lane, Norwich NR2 3EF; Tel: 01603-664186; email: dianaspelman@waitrose.com ● NORFOLK family and local history research by experienced professional. Christine Hood BA, Cert Local History (UEA), 137a Nelson Street, Norwich, Norfolk NR2 4DS; Tel; 01603-666638; email: pinpoint1@btinternet.com; ● NORFOLK See Sovereign Ancestry, LINCOLNSHIRE ● NORFOLK, SUFFOLK, CAMBRIDGESHIRE and Essex family, house and local history specialist. All areas undertaken. Gill Blanchard. Professional full time researcher. Record Offi ce and freelance since 1992. AGRA Member. Qualifi ed historian and tutor. BA. History and Sociology. MA History and Politics. Post. Grad. Cert. Ed (Adults). Author of ‘Tracing Your East Anglian Ancestors’, ‘Tracing Your House History’ and ‘Writing Your Family History’. Courses, workshops and personal tuition available locally and online. Past Search, 14 Meadow View House, 191 Queens Road, Norwich. NR1 3PP. Tel: 01603 610619. Email gblanchard@pastsearch.co.uk. Website: ● NORTHUMBERLAND See Sovereign Ancestry, LINCOLNSHIRE
Advertising
● NOTTINGHAMSHIRE See Sovereign Ancestry, LINCOLNSHIRE ● OXFORDSHIRE, Warwickshire West Midlands. All family tree and local research undertaken by experienced researcher. Vince O’Connor 10 St Giles Bletchingdon Oxfordshire OX5 3BX. vince@ichthusfamilyhistory.com. ● OXFORDSHIRE See Sovereign Ancestry, LINCOLNSHIRE ● POWYS (Brecon, Radnor & Montgomeryshire). All enquiries welcome. Jennifer Lewis, Grove Villa, Crossgates, Llandrindod Wells, Powys, LD1 6RE; email: jennifer.powysresearch@hotmail.com ● SCOTTISH ancestry researched. Friendly service by experienced researcher. All queries welcome. Contact Margaret Davidson (CSFHS), Grampian Ancestry Research, 6 Bayview Road, Inverbervie, Aberdeenshire DD10 0SH; Tel: 01561-361500; email: grampian.ancestry@btinternet.com
FamilyTree
FREE YEAR PLAN NER • FREE ARMY
FamilyTree
VISIT US
START YOUR
DISCOVER
the next 20 big things in genealogy
EXPERT GUI DANCE
✓ Write up your family tree ✓ Start an ance stry blog
Ask your local newsagent to order Family Tree specially for you! Did you know that you can ask your newsagent to place a regular order of Family Tree for you, to make sure you never miss an issue? They will save your copy to one side, and may even deliver it to your home. Just fill in this form and give it to your local newsagent.
p93-97 Lineage.indd 95
Unexpected find s
facebook.com/familytreemaguk
Plus
p01FT_Jan USE.indd
1
EXCLUS
IVE 8-PAGE PULLOUT Your guide to
‘My distant cousi n & dastardly deeds in colonial Keny a’
ONLINE NEWSPAPER ARCHIVES
Maids, cook
how they dresses, butlers & more d for life below stairs
£4.99
Don’t miss an issue!
LIKE US
.uk
JANUARY 2017
SEARCH TODAY!
@familytreemaguk
● NOTTINGHAMSHIRE family history research. Experienced, friendly researcher. Reasonable rates. Glynis Benford Chapman, 108 Alcock Avenue, Mansfi eld, Nottinghamshire NG18 2ND; Tel: 07535222960; email: gbenford1@hotmail.com
& CENSUS RECO RDS
12/12/2016 14:05
Please reserve/deliver* a copy of Family Tree on a regular basis for me, starting with the ................... issue. (*delete as required) Title Dr/Mr/Mrs/Ms............................................................... First name............................................................................ Surname.............................................................................. Address............................................................................... ............................................................................................. Postcode.............................................................................
Telephone number.........................................................
March 2017
Family FamilyTree 95
06/02/2017 12:26
YOUR ADVERTS ANCESTRAL RESEARCH ● SCOTLAND See Sovereign Ancestry, LINCOLNSHIRE ● SCOTTISH research, minor to major. Friendly, enthusiastic service. Caroline Gerard, 34 Colinton Mains Loan, Edinburgh EH13 9AJ. E-mail: caroline.gerard@btinternet.com ● SCOTTISH research. Look ups, brick walls, full trees and WW1 research. Brochure available on request. Jacqueline Hunter, Ancestral Research by Jacqueline, 14 Mid Brae, Dunfermline, Fife, KY12 9DU; 01383 626201; Email: jacquelinehunter895@gmail.com ● SHROPSHIRE See Sovereign Ancestry, LINCOLNSHIRE
Would you like more members in your Family History Society?
● SOMERSET See Sovereign Ancestry, LINCOLNSHIRE ● SOMERSET, DORSET, WILTSHIRE, DEVON Prompt, reliable research at reasonable rates. AGRA member. All enquiries welcome. Katherine Cobb, Old Well House, East Compton, Shepton Mallet, BA4 4NR. Email: info@katherinecobb.co.uk; Website: ● SOUTH AFRICA Experienced researcher. Anne Clarkson, 17 Abbey Road, Somerset West, 7130, South Africa; ebor@intekom.co.za; ● STAFFORDSHIRE (with base in Leek), DERBYSHIRE,CHESHIRE & LANCASHIRE. Experienced researcher. Sara Scargill,B.A.(Hons), The Vicarage, St. Thomas’ Road, Lytham St.Annes, Lancashire FY8 1JL; tel:01253 725551; email: sarascargill@gmail.com; ● STAFFORDSHIRE See Sovereign Ancestry, LINCOLNSHIRE ● SUFFOLK See Past Search, NORFOLK ● SUSSEX See Sovereign Ancestry, LINCOLNSHIRE ● WALES family history research. All enquiries welcome. Segontium Searchers, 51 Assheton Terrace, Caernarfon, Gwynedd LL55 2LD; Tel: 01286 678813; email: enquiries@segontium.com; web: ● WALES See Sovereign Ancestry, LINCOLNSHIRE ● WARWICKSHIRE Including Modern Records Centre, Warwick University. Family, Local, Legal history research plus copies and other photography. Jackie Edwards MA LL.B [Hons], 104 Earlsdon Avenue South, Coventry CV5 6DQ; twigsbranches@yahoo.co.uk ● WARWICKSHIRE, WORCESTERSHIRE, STAFFORDSHIRE, Gloucestershire, Herefordshire, Shropshire Family and local history research in the West Midlands by experienced and conscientious researcher. Also writing-up, courses and talks. Michael Sharpe, tel: 01527 877714; email: research@writingthepast.co.uk; ● WEST COUNTRY ANCESTORS John Campbell, Family Historian, Queensbridge, Ash Priors, Taunton TA43NA Tel:01823 433498; email: jar.campbell@gmail.com; ● WILTSHIRE See Sovereign Ancestry, LINCOLNSHIRE ● WILTSHIRE See Katherine Cobb, SOMERSET
96
Family FamilyTree March 2017
p93-97 Lineage.indd 96
Then why not advertise with us. CALL Kathryn Ford:
0113 200 2925 ● WORCESTERSHIRE, GLOUCESTERSHIRE, HEREFORDSHIRE, Shropshire, Staffordshire and Warwickshire family, house, heraldry and other local history research. Competitive rates. Contact: Whitworth Genealogy Research Services, 21 Geneva Close, Worcester WR3 7LZ or email: rogerwhitworth1@btconnect.com ● YORKSHIRE ancestry research. All enquiries welcome. £10 per hour. Over 30 years’ experience. SAE. Ruth Simpson, 19c Park Lane, Pontefract, West Yorkshire WF8 4QQ; email: ruth-yorkshire@sky.com; Web: ● YORKSHIRE See Sovereign Ancestry, LINCOLNSHIRE ● YORKSHIRE ANCESTORS (est 1998) Yorkshire Family History Research. All enquiries welcome, from a single look-up to a full Family Tree. We visit North, East and West Yorkshire county record offices, and York archives. Friendly and Professional service. Email: enquiries@ yorkshireancestors.net, website:. net. Tel: Katie Sleightholme 07789077384, Brenda Green 01751 476544. 86 Middleton Road, Pickering, North Yorkshire YO18 8NH ● YORKSHIRE – Researcher for Who Do You Think You Are! USA – all aspects of research undertaken, including difficult cases and brick-wall situations. Being located in the City of York at the very heart of the county puts us within easy reach of all archival repositories across Yorkshire thereby enabling us to deliver a cost-effective service. Yorkshire Family History, Grasmead House, 1 Scarcroft Hill, York, YO24 1DF; Tel: 01904 654984; website:; email: research@yorkshirefamilyhistory.org
06/02/2017 12:26
Would you like to advertise? DISPLAY & CLASSIFIED ADVERTS For further details, please contact Kathryn Ford: 0113 200 2925 kathrynf@warnersgroup.co.uk
Advertising
CLASSIFIED ADVERTISING RATES Private small ads (Connections sought, etc) £0.15 per word excluding VAT. Commercial lineage. £0.35 per word excluding VAT. When counting the number of words, please remember to include your address in the calculations. Conditions of acceptance All lineage advertising must be prepaid. Advertisers requiring a receipt via post must send a SAE to. Warners Group Publications PLC FAO: Kathryn Ford 5th Floor, 31 – 32 Park Row Leeds LS1 5JD. If advertisers do not wish to use their phone numbers or personal names on their adverts, they must give them to our office for our records.
We expect all advertisers to provide an acceptable standard of service, however the publisher cannot be held accountable for failure to do so – such failure will undoubtedly result in the refusal of future advertising. No alterations or cancellations will be possible after the copy deadline for a particular issue. The publisher reserves the right to issue refunds for adverts cancelled by the advertiser after the copy deadline. Disputes, queries or complaints will not be accepted unless received in writing by the publisher by no later than 10 days after the on sale date of the publication. We reserve the right to refuse or alter adverts at our discretion. Although every care is taken to avoid mistakes, we do not accept responsibility for clerical errors. The advertiser shall be responsible for checking the accuracy of any advertisement submitted.
Deadline dates: April issue: 17th February, May issue: 15th March, June issue: 13th April
fair.co.uk
THE ONLINE GENEALOGY FAIR Tens of Thousands of Family History Products Hundreds of Suppliers The Largest Range of Products Covering the British Isles Atlases & Maps, Military Records,Monumental Ins., Parish Records, Directories, Books, Census, Charts, DVDs, Software, Wills Plus much more... With the widest range of items, special offers and a great choice of books, CDs, software, DVDs, fiche and family history society memberships, GenFair can’t be beaten. it’s the largest of its type and it was also the first.
SAVE TIME & MONEY DOWNLOAD THE PRODUCT YOU WANT AND START RESEARCHING TODAY
DOWNLOAD
SUPPORT LOCAL SOCIETIES ONLINE
p93-97 Lineage.indd 97
March 2017
Family FamilyTree 97
06/02/2017 12:26
THOUGHTS ON...
Hopelessly in love with technology
Being a gadget queen
W
hat would you do if one dark night you woke from a deep slumber to an eerie blue light in your bedroom and the voice of Jim Reeves lamenting his lost love from the darkness? Would you bury your head beneath the duvet and hope he hadn’t come for you? Or would you just tell the gloom in the room to shut up and try to go back to sleep? When it happened to us, we tried the latter, once the palpitations slowed, though it took longer than it should have because Alexa likes you to call her by name and doesn’t understand ‘Shut up, stupid’. So, for a while, Jim Reeves became ever more lachrymose, the blue light went wild with concern and the cat and dog got up, delighted by the thought of an early breakfast. Little Alexa, otherwise known as Dot is, of course our latest electronic toy, and, with her big sister Echo, represents our Christmas present to each other. For those who haven’t come across Amazon’s latest voice-controlled device, which answers questions, plays music, sets alarms
98
Family FamilyTree March 2017
p98 March Diane Lindsay FINAL.indd 98
Technology has brought we family historians successes we could never have envisaged back then and reminders, reads the news and even tells jokes on demand, I should explain that, although an artificial intelligence, Alexa does seem curiously anxious to please – to the point that she can mishear quiet snoring and will sometimes, of a night, offer a spontaneous weather report, an interesting fact or a tune from her large library of Amazon music or my iPod. (I’m not saying I have Jim Reeves on there, but I’m not denying it...) Those of us who remember when the most advanced research aid was a card index, recognise that technology has brought we family historians successes we could never have envisaged back then. My eight-year-old granddaughter had an iPad for Christmas, switched it on and was messaging her
friend almost before the wrapping was off. I’m still awed every time I press a key or touch a screen or pay for a coffee by waving my phone about. As I write, I’m eagerly awaiting new Leicestershire online records, which I’m hoping will magically provide answers only an ancestral ghost might know. Personally I’ve embraced all things electronic right from the 1990s when I watched a speaker at Bedworth Family History Society identify his greatgrandfather by dating a rivet on a many-times-enlarged photograph of a locomotive upon the footplate of which the said ancestor, a driver, was standing. It seemed like magic then, as it does now. Only another family historian will understand the magnitude of such a moment when a
memory comes to life. Sadly, I know many are still suspicious of the internet and so don’t enjoy its possibilities to the full. My view is that whatever doesn’t kill you, makes you strong and, being an optimist, I know technology has made me a far stronger family historian. Naturally I take precautions, watch for scams and ignore people on the telephone who tell me I must transfer all my worldly wealth into another account (theirs). How do I discover how to stay safe online? On the internet, of course! Alexa, despite her questionable nocturnal taste in music, is certainly keeping me on top of my game. She offers me a brief exercise break, tells me how many likely descendants Edward I had, does my census age/birth date calculations and generally cheers me up. I have to admit, however, that although she knows more than several encyclopaedias and can predict the weather with uncanny accuracy, she doesn’t know the name of my illegitimate Yorkshire grandmother’s real father. Darn it!
About the author Diane Lindsay discovered her twin passions of family history and English (and her sense of humour)
Illustration: © Ellie Keeble for Family Tree
For decades self-confessed technophile Diane Lindsay has relished technology, using it to help her family history research. Meet her latest new ‘friend’...
while training as a teacher and bringing up three small children in the 1970s. She’s a writer and local and family historian and, although retired, still teaches anything to anyone who will listen.
02/02/2017 10:24
FamilyTree Family Tree Passionate about helping people trace their ancestors since 1984
FamilyTree Family Tree
DIGITAL EXCLUSIVE
Trace your
Scottish
ancestors
WITH E CTIV INTERA -STEP Y STEP-B ES I GU D 3
3 Find your ancestors’ clans Learn about your family surnames 3 Research your Scottish DNA 3 Explore key historic records
Curious about your Scottish heritage? Trace your Scottish Read Family Tree ancestors is onessential your mobile, your tablet or desktop digital guide, with and enjoy all interactive features the same great to help you get content asexpert the print started! Plus edition plus bonus advice to stretch your research.features! interactive
PC & MAC
COMPATIBLE DISCOVER YOUR HERITAGE TODAY...
with advice for beginners to advanced
PCut&more at o AC Find M E ttishAncestors IBcLo T A P M O C S ce familytr.ee/Tra
p067 Ads.indd 53 p099 Ads.indd 99 FTAugust2016_pg71_pocketmags 2pp.indd 71 FTNov16_pg92_ScottishDigiAd.indd 53
Christmas 2016 March 2016 2017 August December 2016
FamilyTree FamilyTree 53 FamilyTree Family 71 Family FamilyTree Tree 99 53 10/11/2016 14:48 06/02/2017 10:22 22/06/2016 17/10/2016 16:07 11:07
Diamond Personal Premium
New Parish Records
National Tithe Maps
Casualty Lists
BMDs 1609-2005
Early Aviation Records
Prisoner of War Records
Census 1841-1911
Naturalisation Records
Military Court Records
Non-Conformist Records
Land Records
First World War Newspapers
Wills & Will Indexes
Occupational Lists
War Memorial Database
Ship Crew Lists
Educational Records
Military Rolls & Lists
Headstone Records
Peerage Records
Militia Musters
Criminal Records
Newspapers & Magazines
Regimental Records
Transportation Records
Campaign Medals
Rolls of Honour
Soldiers Who Died in WWI
Gallantry Medals
Historical Image Archive
Railway Workers Collection
Mentioned in Dispatches
Electoral Registers
Directories
Apprenticeship Records
Plus Much More
SIGN UP TODAY AND SAVE ÂŁ44 ON OUR DIAMOND PACKAGE Go To TheGenealogist.co.uk/FTBP20 today to get this special offer
Family 100FamilyTree March 2017
TG_Jan_2017.indd p100 Ads.indd 100 1
TheGenealogist
02/02/2017 09:12:08 03/02/2017 16:26
|
https://issuu.com/mimimi992/docs/family_tree_uk_march_2017
|
CC-MAIN-2017-09
|
refinedweb
| 52,929
| 58.21
|
0
Hello everyone, I'm trying to solve a problem in C++ using visual studio. The problem asks to write a program that asks the user for a positive integer no greater than 15, and that the program should then display a square on the screen using the character "X", and that the number entered by the user will be the length of each side of the square. It shows an example: when the user enters 5, the program should display:
XXXXX
XXXXX
XXXXX
XXXXX
XXXXX.
Here's what I have so far:
// This program will ask the user for a positive integer no greater than 15 and will display a square on the screen using the character "X" with a length the size of the number entered. #include <iostream> using namespace std; int main () { int number; for (number=1;number<=15;number++) cout << "{X}" <<endl; system("pause"); return 0; }
That's a rough guess on my part as to how to even go about this.. I'm not that experienced in this class I'm taking as I'd like to be, so any help will be appreciated. Tried asking the teacher and my teacher of course doesn't even reply after 3 days and counting >.<
|
https://www.daniweb.com/programming/software-development/threads/357792/square-display-of-x-depending-on-number-entered
|
CC-MAIN-2016-30
|
refinedweb
| 207
| 68.54
|
In one of our previous article, we learnt about CQRS and the usage of the MediatR Library. Now, let’s see how to take advantage of this library to the maximum.
Let’s learn about MediatR Pipeline Behaviour in ASP.NET Core, the idea of Pipelines, How to intersect the pipeline and add various Services like Logging and Validations.
As learnt in the previous article, MediatR is a tool / library which essentialy can make your Controllers thin and decouple the functionalities to a more message-driven approach. Then, we learnt about it’s importance while implementing the CQRS Pattern in ASP.NET Core Applications, which is commands and queries Responsibility Segregation.
Table of Contents
Pipelines – Overview
What happens internally when you send a request to any application? Ideally it returns the response. But there is one thing you might now be aware of yet, Pipelines. Now, these requests and response travel forth and back through Pipelines in ASP.NET Core. So, when you send a request, the request message passes from the user through a pipeline towards the application, where you perform the requested operation with the request message. Once done, the application sends back the message as a response through the pipeline towards the user-end. Get it? Thus these pipelines are completely aware of what the request or response is. This is also a very important concept while learning about Middlewares in ASP.NET Core.
Here is an image depicting the above mentioned concept.
Let’s say I want to validate the request object. How would you do it? You would basically write the validation logics which executes after the request has reached the end of the pipeline towards the application. That means, you are validating the request only after it has reached inside the application. Although this is a fine approach, let’s give a thought about it. Why do you need to attach the validation logics to the application, when you can already validate the incoming requests even before it hits any of the application logics? Makes sense?
A better approach would be to somehow wire up your validation logics within the pipeline, so that the flow becomes like user sends request through pipeline ( validation logics here), if request is valid, hit the application logics , else throws a validation exception. This makes quite a lot of sense in terms of effeciency, right? Why hit the applicaiton with invalid data, when you could filter it out much before?
This is not only applicable for validations , but for various other operations like logging, performace tracking and much more. You can be really creative about this.
MediatR Pipeline Behaviour
Coming back to MediatR, it takes a more pipeline kind of approach where your queries, commands and responses flow through a pipeline setup by MediatR.
Let me introduce you to the Behaviours of MediatR. MediatR Pipeline Behaviour was made availble from Version 3 of this awesome library.
We know that these MediatR requests or commands are like the first contact within our application, so why not attach some services in it’s Pipleline?
By doing this, we will be able to execute services/ logics like validations even before the Command or Query Handlers know about it. This way , we will be sending only necesarry valid requests to the CQRS Implementation. Logging and Validation using this MediatR Pipeline Behavior is some of the common implementations.
Getting Started
Enough of all the long essays, let’s jump straight into some code. For this article, i will use the CQRS Implementation where I alreay have setup MediatR. We will be adding Validations (Fluent Validations) and General Logging to the commands and requests that go through the pipeline.
Please refer to the following Github Repo to code along. You can find the commit named “CQRS Implementation” to start from the point where I am going to start coding now.
So, you got the concepts clear. Here is what we are going to build. In our CQRS implemented ASP.NET Core Solution, we are going to add validation and logging to the MediatR Pipeline.
Thus, any <Feature>Command or <Feature>Query requests would be validated even before the request hits the application logics. Also, we will log every request and response that goes through the MediatR Pipeline.
Fluent Validations with MediatR.
For validating our MediatR requests, we will use the Fluent Validation Library.
To learn more about this awesome library, check out my previous post – Fluent Validation in ASP.NET Core 3 – Powerful Validations
Here is the requirement. We have an API endpoint that is responsible for creating a product in the database from the request object that includes product name, prices, barcode and so on. But we would want to validate this request in the pipeline itself.
So, here is the MediatR Request and Handler –
Before adding Validation to the MediatR Pipeline, let’s first install the Fluent Validation Packages.
Install-Package FluentValidation Install-Package FluentValidation.DependencyInjectionExtensions
Add a new Folder to the root of our Application and name it Validators. Here is where we are going to add all the validators to. Since we are going to validate the CreateProductCommnd object, let’s name our Validator as CreateProductCommndValidator. So add a new file to the Validators folder named CreateProductCommndValidator.
public class CreateProductCommndValidator : AbstractValidator<CreateProductCommand> { public CreateProductCommndValidator() { RuleFor(c => c.Barcode).NotEmpty(); RuleFor(c => c.Name).NotEmpty(); } }
We will keep things simple for this tutorial. Here I am just checking if the Name and Barcode numbers are not empty. You could take this a step further by injecting a DbContext to this constructor and chack if the barcode already exists. Learn more about various validation implementations using Fluent Validation in this article.
We have n number of similar validators for each commands and query. This helps keep the code well organized and easy to test.
Before continuing, let’s register this validator with the ASP.NET Core DI Container. Navigate to Startup.cs/ConfigureServices.cs and add in the following line.
services.AddValidatorsFromAssembly(typeof(Startup).Assembly);
This essentialy registers all the validators that are available within the Assembly that also contains the Startup.cs. Understand?
Now that we have our validator set up, Let’s add the Validation to the Pipeline Behaviour. Create another new folder in the root of the application and name it PipelineBehaviours. Here, Add a new Class, ValidationBehvaviour.cs.
public class ValidationBehaviour<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse> where TRequest : IRequest<TResponse> { private readonly IEnumerable<IValidator<TRequest>> _validators; public ValidationBehaviour(IEnumerable<IValidator<TRequest>> validators) { _validators = validators; } public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next) { if (_validators.Any()) { var context = new ValidationContext<TRequest>(request); var validationResults = await Task.WhenAll(_validators.Select(v => v.ValidateAsync(context, cancellationToken))); var failures = validationResults.SelectMany(r => r.Errors).Where(f => f != null).ToList(); if (failures.Count != 0) throw new FluentValidation.ValidationException(failures); } return await next(); } }
Line #2 – Here, we get a list of all registered validators.
Line #6 -Initializing the validator, and also validating the request.
Line #8 – Here is the Handler method
Line #10 – If any validation errors were found, It extracts all the messages and returns the errors as a reposnse.
We have one last thing to do. Register this Pipleine Behvaiour in the ASP.NET Core Service Container. Again, go back to Startup.cs/ConfigureServices and add this.
services.AddTransient(typeof(IPipelineBehavior<,>), typeof(ValidationBehaviour<,>));
Since we need to validate each and every request, we add it with a Transient Scope to the container.
That’s it, Quite Simple to setup right? Let’s test this API Endpoint with Swagger (Since, I already have Swagger integrated within the application from my previous article. I highly recommend you to go through the linked articles to gain better knowledge about the entire scenario).
Let’s run the application. But, to understand the flow of the request even better, Let’s add a few breakpoints.
- At the start of the ProductController action method, Create. The request should pass through this controller to the mediatR which further takes it through the pipeline.
- At the start of the ValidationBehaviour’s Handle Method.
- And at the start of the CreateProductCommndValidator too.
Let’s add a blank Name and barcode to the Request. Ideally it should throw back a Validation Exception.
You can see that the request first goes to the controller, which then sends a MediatR request that travels through the pipeline. Next it hits the CreateProductCommndValidator where the request is validated. After that, it goes to the Handle method of our Pipleline.
As you can see, Here are the expected failures. Let’s now see the response back in Swagger.
There you go, A Validation Exception like we wanted. Now, you can prettify this response by mapping it to a different Response Object using a Custom Error Handling Middleware. I wimoll post a detailed article about this later.
The point to be noted is that, the request reaches the Handler Method of the CreateProductCommand only if it is valid. This cleanly demonstrates MediatR Pipeline Behaviour.
Logging with MediatR
Now that we have a clear understanding of the MediatR Pipeline Behaviour, let’s try to log all the incoming request and outgoing responses in the pipeline. It will be as simple as Creating a new LoggingBehaviour class and adding it to the pipeline.
For this demonstration, I will use the default logger of ASP.NET Core.
If you looking for some advanced logging for your ASP.NET Core Application, I recommend you to check out – Serilog in ASP.NET Core 3.1 – Structured Logging Made Easy. Serilog is probably the best logging framework available with tons of great features. Do check it out.
public class LoggingBehaviour<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse> { private readonly ILogger<LoggingBehaviour<TRequest, TResponse>> _logger; public LoggingBehaviour(ILogger<LoggingBehaviour<TRequest, TResponse>> logger) { _logger = logger; } public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next) { //Request _logger.LogInformation($"Handling {typeof(TRequest).Name}"); Type myType = request.GetType(); IList<PropertyInfo> props = new List<PropertyInfo>(myType.GetProperties()); foreach (PropertyInfo prop in props) { object propValue = prop.GetValue(request, null); _logger.LogInformation("{Property} : {@Value}", prop.Name, propValue); } var response = await next(); //Response _logger.LogInformation($"Handled {typeof(TResponse).Name}"); return response; } }
Similar to our previous ValidationBehvaiour class, we structure our LoggingBehvaiour too! But here in Line #7, we inject the ILogger too.
Line #12-20 – Logging the Request
Line #24-26 – Logging the Response
Lines #13-20 – I wrote a small snippet that can extract the property names and values from the request object and log it using the ILogger object.
Be careful while using this in production environment as your request may have sensitive information like passwords and so on. You really would not want to log it. This implementation is just to make you understand the flexibililty of MediatR Pipeline Behaviour.
After that, let’s go back to Startup.cs/ConfigureServices to register this Behvaiour.
services.AddTransient(typeof(IPipelineBehavior<,>), typeof(LoggingBehaviour<,>));
Let’s test our Implementation. But before that make sure you are running your application using the Kestrel Server rather than IIS. Kestrel gives some better control over your application. In this scenario, I want you to use the Kestrel server because, it have a built in console logging available. Thus we could logs in realtime while working with applicaiton.
To switch to Kestrel all you have to do is, click on this dropdown and select your ProjectName.
Now run the application. Here is the sample Request that i am trying to pass. Execute it.
In your console, you can see the response and requests. Pretty Cool yeah? That too with real minimal lines of codes.
Let’s wind up the article here. I Hope things are pretty clear.
OFF TOPIC – I am currently building an OPEN SOURCED Project – Clean Architecture for WebAPI. The aim is to make it available as an extension for Visual Studio, so that developers can just use the template for Clean WEBAPI Architecture and get started without worrying about any implementations. All they would have to do is write the Business Logics . Take a look at the GitHub Repo for more details and list of features. –
If you found this article helpful, consider supporting.Buy me a coffee
Summary
In this article, we learnt about MediatR Pipeline Behaviour in ASP.NET Core, the idea of Pipelines, How to intersect the pipeline and add various Services. We also Implemented Validations (using Fluent Validation) and Logging with in the MediatR Pipeline. You can find the entire source code of this implementation here. What are the other PipelineBehaviours that you have worked it or plan on implementing? Do you have any suggestions or feeback for me? Let me know in the comments section below. Do not forget to share this article within your Developer Community. Thanks and Happy Coding! 😀
Hi, why do you need a list in order to enumerate over properties? Just enumerate over the array the reflection API has returned.
Also you employ inconsistent syntax in the validation handler. Either use Any() in both cases or in none.
Also why to check for the number of underlying validators in the first place? Just don’t.
If you really want to – memorize them in the ctor and throw an exception if none were passed, that’s clearly a composition root / design-time mistake.
Awesome job on the article Mukesh. Looking forward to the concluding part
Thanks!
Entire blog is great!!
Thanks a lot!
|
https://www.codewithmukesh.com/blog/mediatr-pipeline-behaviour/
|
CC-MAIN-2020-45
|
refinedweb
| 2,230
| 50.43
|
DD <ad at ad.nl> wrote: D> Hello, D> D> Could anyone please help me?? D> Is there somebody who could explain me how to make a connection to a access D> database with a python cgi script. D> I would like to use common sql commands in my python scripts as I can with D> MySQLdb. D> But I cannot even connect to the access database (see below). D> Could anyone explain it to me as simple as possible please. I'm using D> Windows XP, ActivePython 2.3.2 build 230 and Microsoft access(XP?) D> I normally use Linux, but this has to be in MS office D> >>>> import win32api >>>> import win32com.client >>>> engine = win32com.client.Dispatch("DAO.DBEngine.35") D> Traceback (most recent call last): D> File "<interactive input>", line 1, in ? D> File "C:\Python23\Lib\site-packages\win32com\client\__init__.py", line 95, D> in Dispatch D> dispatch, userName = D> dynamic._GetGoodDispatchAndUserName(dispatch,userName,clsctx) D> File "C:\Python23\Lib\site-packages\win32com\client\dynamic.py", line 84, D> in _GetGoodDispatchAndUserName D> return (_GetGoodDispatch(IDispatch, clsctx), userName) D> File "C:\Python23\Lib\site-packages\win32com\client\dynamic.py", line 72, D> in _GetGoodDispatch D> IDispatch = pythoncom.CoCreateInstance(IDispatch, None, clsctx, D> pythoncom.IID_IDispatch) D> com_error: (-2147221230, 'Klasse is niet gelicenseerd voor gebruik', None, D> None) D> D> thank in advance, D> D> Arjen D> D> D> In windows, you have to go into control panel, administrative tools, odbc drivers and register the driver you want to use before you can open a connection to it. I believe that is your problem. I am no expert, just recently had the same issue in making a connection to a sybase database.
|
https://mail.python.org/pipermail/python-list/2003-November/204525.html
|
CC-MAIN-2016-50
|
refinedweb
| 285
| 59.19
|
Have you ever wanted to download all images in a certain web page ? In this tutorial, you will learn how you can build a Python scraper that retrieves all images from a web page given its URL and downloading them using requests and BeautifulSoup libraries.
To get started, we need quite a few dependencies, let's install them:
pip3 install requests bs4 tqdm
Open up a new Python file and import necessary modules:
import requests import os from tqdm import tqdm from bs4 import BeautifulSoup as bs from urllib.parse import urljoin, urlparse
First, let's make a URL validator, that makes sure that the URL passed is a valid one, as there are some websites that put encoded data in the place of a URL, so we need to skip those:
def is_valid(url): """ Checks whether `url` is a valid URL. """ parsed = urlparse(url) return bool(parsed.netloc) and bool(parsed.scheme)
urlparse() function parses a URL into six components, we just need to see if the netloc (domain name) and scheme (protocl) are there.
Second, I'm going to write the core function that grabs all image URLs of a web page:
def get_all_images(url): """ Returns all image URLs on a single `url` """ soup = bs(requests.get(url).content, "html.parser")
The HTML content of the web page is in soup object, to extract all img tags in HTML, we need to use soup.find_all("img") method, let's see it in action:
urls = [] for img in tqdm(soup.find_all("img"), "Extracting images"): img_url = img.attrs.get("src") if not img_url: # if img does not contain src attribute, just skip continue
This will retrieve all img elements as a Python list.
I've wrapped it in a tqdm object just to print a progress bar though. To grab the URL of an img tag, there is a src attribute. However, there are some tags that does not contain the src attribute, we skip those by using continue statement above.
Now we need to make sure that the URL is absolute:
# make the URL absolute by joining domain with the URL that is just extracted img_url = urljoin(url, img_url)
There are some URLs that contains HTTP GET key value pairs which we don't like (that ends with something like this "/image.png?c=3.2.5"), let's remove them:
try: pos = img_url.index("?") img_url = img_url[:pos] except ValueError: pass
We're getting the position of '?' character, then removing everything after it, if there isn't any, it will raise ValueError, that's why I wrapped it in try/except block (of course you can implement it in a better way, if so, please share with us in the comments below).
Now let's make sure that every URL is valid and returns all the image URLs:
# finally, if the url is valid if is_valid(img_url): urls.append(img_url) return urls
Now that we have a function that grabs all images URLs, we need a function to download files from the web with Python, I brought the following function from this tutorial:
def download(url, pathname): """ Downloads a file given an URL and puts it in the folder `pathname` """ # if path doesn't exist, make that path dir if not os.path.isdir(pathname): os.makedirs(pathname) # download the body of response by chunk, not immediately response = requests.get(url, stream=True) # get the total file size file_size = int(response.headers.get("Content-Length", 0)) # get the file name filename = os.path.join(pathname, url.split("/")[-1]) # progress bar, changing the unit to bytes instead of iteration (default by tqdm) progress = tqdm(response.iter_content(buffer_size), f"Downloading {filename}", total=file_size, unit="B", unit_scale=True, unit_divisor=1024) with open(filename, "wb") as f: for data in progress: # write data read to the file f.write(data) # update the progress bar manually progress.update(len(data))
The above function basically takes the file url to download and the pathname of the folder to save that file into.
Related: How to Convert HTML Tables into CSV Files in Python.
Finally, here is the main function:
def main(url, path): # get all images imgs = get_all_images(url) for img in imgs: # for each image, download it download(img, path)
Getting all image URLs from that page and download each of them one by one. Let's test this:
main("", "web-scraping")
This will download all images from that URL and stores it in the folder "web-scraping" that will be created automatically.
Note though, there are some websites that loads it data using Javascript, in that case, you should use requests_html library instead, I've already made a script that handles it, check it here.
Alright, here are some ideas you can implement to extend your code:
Learn Also: How to Make an Email Extractor in Python.
Happy Scraping ♥View Full Code
|
https://www.thepythoncode.com/article/download-web-page-images-python
|
CC-MAIN-2020-16
|
refinedweb
| 809
| 66.78
|
So some exciting news is that I am a Top 10 finalist in the Hololens Contest jointly sponsored by Microsoft and Unity3d! The finalists were announced on 12th July 2017, and we have until 30th October 2017 to submit a Hololens application.
The goal of the contest is to create a world changing idea for HoloLens, that importantly can help solve a real-world problem today.
My entry is Holo Butterflies – which is based on a VR experience I developed on HTC Vive and for school kids to use. That experience was cross-reality where kids could draw or design butterflies in the traditional way (eg: draw on paper), then scan it into Virtual Reality where it can come to life and fly around you. There was some gasification where you can release all different butterflies and catch them with a butterfly net. Kids liked it because they saw their physical art come to life in the digital world.
The key part about it is not really the butterflies (although they are quite beautiful) but how the application integrates into the existing process in the classroom. VR and AR headsets are expensive and complex, and at this time you won’t find a classroom with headsets for all students, so rather I wanted to find a way to integrate one or a limited set of VR and AR hardware into the class curriculum.
The judges clearly liked the idea and progress, and so now I will be porting something like this over to the Hololens, as Holo Butteflies…
Official Holo Butterflies entry
Experience your space filled with beautiful and delightful virtual butterflies that you and your friends have designed in the real-world using traditional art.
Simply draw, scan, then experience them come to life around you – one or a hundred – as many as you like. Watch them, interact with them, or even catch them with your virtual net – but be gentle.
Holo Butterflies is a fun interactive experience for people of all ages.
The Judging Criteria
According to the T&Cs the winners will be judged according to:
- Technical Excellence: 40%
- Originality: 30%
- Experience (how it solves real world problems): 30%
The Competition
See the full list of the 10 finalists – and that is who I am up against!
Follow My Progress (Shhh)
Now, don’t tell the other contestants, but I’m going to share my progress here over the weeks to the end of the contest. About 12 weeks away.
I have been lucky that I get to do a lot of projects in AR and VR (and everything else too – cloud, mobile, IoT), and in most of those I am focused on solving a particular problem in a way that gets the most value to the customer and their users – and often that means coming up with unique and creative solutions (and long hours) to deliver something that has not been done before.
To do this, if I think about it, I generally follow a pattern of understanding the problem and some solutions, then bring a deep understanding of different technologies into it and come up with a couple of paths to try, try them, and usually there is a clear winner.
So, I will be using that approach here too.
This is the exciting beginning of the project – nothing has been built. Yes, ok, I have the original VR Butterflies app – but that alone doesn’t translate to Hololens. So right now I basically have nothing. That’s a bit scarey, but also thats why it’s exciting – because nothing is set, I have no idea what the final app will be like, but I know it’ll be great (somehow) and the journey will be amazing exploring Holo technology.
There is a lot of inspiration to draw on right now with VR, ARKit, Meta, Magic Leap, etc.
I hope I can innovate on the best that will work with Hololens.
This story will hopefully make interesting reading as the weeks progress and then see what the final result is in the judging.
The Plan (Shhh Shhh Shhh)
Let me share with you the plan I am going to follow to win this contest (OK, well come somewhere in the top 10. LOL). Again, please keep this to yourself.
To re-iterate, at this point I have no idea what I’m building or what the final application will look like, so we have a lot of uncertainty. So the Y-axis above represents the level of uncertainty – the closer we can draw together to the center then the more certainty we have. The X-axis is the time remaining until the end of the contest, and each week I am expanding or reducing the level of uncertainty.
For example, look at the this week, “Learn enough to get a Hello World built”. There is a reasonable level of uncertainty because I’ve never built a Hololens application before, but it’s not a major level because I do know you can use Unity3d, and I do a lot of Unity3d. But I don’t know the exact version that works best, or what other tools are needed, or how to setup and debug the Hololens. So the goal of this week is to be confident I can build a basic app, and see it working in the Hololens. Baby steps. But at the end of this week I have certainty about building for Hololens.
By following this plan I believe I can hit the judges criteria for technical excellence, originality and the experience.
Hello Hololens!
All top 10 finalists receive a Hololens developer kit on loan for the period of the contest. And here’s mine! It’s arrived. Very exciting.
My first impression is the packaging and headset itself are very click. Clearly a lot of work has gone into product development – to a very high quality level. Even the case for transporting the Hololens is well designed. I can slip it into my backpack and no-one is any the wiser I have a powerful holographic headset with me.
The first thing to do is follow the instruction booklet (yes, I actually thought it would be necessary in this case) and it shows you how to turn it on, and how to put in on your head. Once I had it on comfortably, and pleased to see it is designed to accomodate people wearing glasses, it then led me through the setup process of how the gaze pointer works and how to make selections.
Hololens User Experience
Interesting and most surprisingly Microsoft has not strayed far from the Windows desktop user-experience paradigm with the Hololens – even though it is a completely new type of product. On the desktop you have a mouse cursor and you left click to action the UI, with Hololens you have a gaze cursor (just like the mouse pointer) that is always in front of you and you move it around by moving your head around. I guess an actual mouse pointer image would look fairly ugly, so instead the gaze cursor is a small dot. When the gaze cursor is over an intractable element you can left mouse click – the way you do that in Hololens is using making a squeeze gesture with your thumb and adjacent finger. For that to work the camera sensors must be confidently tracking your fingers – so to give you feedback that it is, the gaze dot become a donut (circle) when your fingers are trackable.
An operating system’s primary user interface is called the shell, and here again the Hololens shell borrows key elements from the desktop. There is the start menu where you see the familiar app tiles and you can pin more tiles from your full apps list to the start menu. You also launch apps. The world around you is basically the equivalent of a desktop, and you can pin tiles anywhere around your physical space and the Hololens will remember where it was next time you use it. Those tiles are the fairly similiar to the icons on your Windows desktop, with one small but important change is that instead of showing an icon, the pinned app tiles show an image – which maybe the apps splash screen, or maybe a screenshot from the last time the app was run. The idea being it’s not interactable in the shell, it’s just an image, but you can click on the tile to relaunch the app.
There are actually two types of pinned app tiles, firstly the flat 2D image I just mentioned, and secondly 3D model as well. Only Microsoft can currently create 3D pinned apps (although some don’t actually launch an app – they are there for decoration only). A cornerstone feature on the Hololens is the built-in Holograms app that lets you place different holograms into your space (from a catalogue of about 60). They are either static models, or animated. Animated models can be played by clicking on them.
Another feature in the shell is the ability to take photos and videos of mixed reality ,so you can record what you are seeing. You can trigger either from the start menu, from pressing both volume keys down, or by using speech recognition: “Hey Cortana, take a photo”. The photos and videos you take are uploaded to your One Drive photos folder – very useful !
One last thing to note, is when you click on a tile to launch the application the shell is hidden, so only whatever content the app creates is visible. You can exit your application using the “bloom” gesture and you return to the shell. If the start menu isn’t in front of you, then “bloom” again to make it appear.
First Impressions
I’ve run the setup process, played with the Holograms app, and played with a few apps from the store.
My first impressions about the Hololens from a technical perspective are:
Pros:
– The hardware quality is like a generation 3 product. Even better then the XBox level of hardware quality. Very impressive to hold and wear. (10/10)
– World scale tracking is very very good. Better then I expected. It does drop out here and there, but the most part the content is where it should be so it is very accurate in maintaining correct tracking. (9/10)
– Room recognition is also very good. In the shell it doesn’t take long to remember which room I am in (I winder if apps get this feature?) (7/10)
– Oculusion is pretty decent, again better then I expected. If I put a virtual dog in a doorway then move around so part of the wall occuludes is then Hololens is doing a fairly decent job to make it look like the wall is occluding the dog. (6/10)
– Holograms themselves are bright and do feel like holograms (with the way the colour shifts as I move due to the optics). There is no black (which is see through), but as all colours are see through I was expecting it to be much more “ghostly” looking, but in fact in a normal lit room, the holograms are quite solid (7/10).
– Audio is excellent without needing headphones (9/10)
– Cortana works for speech recognition. I find myself repeating a few times with my Australian accent – but it does seem good. (7/10).
Cons:
– Field of view. Immeadiately obvious that it is far too small to be immersed. Constantly the content is being clipped. I estimate the FOV to be a 55″ TV at about 5 feet away. It sounds huge, but when your real FOV is > 180 degrees it ends up being very small. There is no official specification for the FOV, but media stories estimate it to be 30×17 degrees (which seems about right). Of course I’m sure many great minds are working on this problem, and just like the breakthrough with the Oculus Rift pre-distortion managed to push VR FOV to 100+ degrees – we’ll see something improve here. But right now this is a very obvious limitation to work around.
– Gesture tracking is also not rock solid. The bloom effect is pretty reliable. But I find the finger tap gesture is tricky. And my hand gets tired if I have an app that needs me to interact because you have to keep your hand within the depth sensor 120 degree tracking space plus your hand needs to be held in a way that the camera can see your fingers reliably.
– Gaze tracking is also cumbersome. In many scenarios trying to keep the gaze cursor over a button while you click it is frustrating. Full credit to Microsoft to create a generic new platform and have a base line gaze I/O experience that applies to all different experiences – but it’s just tiring and frustrating at times. This is another area to work around. I’m sure in the future we’ll have eye tracking and it will dramatically improve interaction.
– On board camera and microphone seem very good. I’ve recorded a few mixed reality videos today and they are clear and capture a good 60 degree or so field of view.
Well thats just first impresssions, I’ll get in depth into the hardware and technical capabilities next week as I explore what we can and cannot do, and what works well for a user experience within these limitations.
Hollo World!
After a bit of a play with the Hololens, mostly with the awesome (but basic) Hologram app, and a few apps from the Store, I really want to get something of my own built and working.
There are plenty of resources on the Microsoft Developer Mixed Reality page to get started and excited about what it can do.
But I don’t want to get caught and distracted by all the capabilities of Hololens yet, what I want to do right now is just prove I can build something and see it run on the Hololens. I want whatever is the most basic thing to do. Right now it is something like some 3d in my space.
So in the tradition of writing your first program on a new platform is is time to do a Hello World! ap, or a.. Hollo World! app (sorry, I couldn’t resist).
The full code to this app is available on GitHub:.
Firstly you need the following tools that I am using:
- Unity3d 2017.1
(make sure to choose the Windows Universal Platform during install)
- Windows 10 SDK
- Visual Studio Community 2017
(Upon first run it will download some additional SDKs)
Now this may run on Mac, but I haven’t tried. I’ve decided to jump over to my Windows 10 laptop and build from there.
After installing lets create a new Unity3d project and switch the platform to Windows Universal (File -> Build Settings). Add the open scene which will let you name it and save it automatically with a camera and light.
Open up Player Settings, and enable Virtual Reality mode (make sure Unity has Windows Holographic set as the default VR platform (I deleted any other ones). In newer versions of Unity this is called Windows Mixed Reality. If you don’t turn on Virtual Reality then your app will run in a flat 2d windows pinned in your space, but with Virtual Reality mode it will create an immersive Hololens application which will track your Hololens movements in space and move the main VR camera accordingly.
Also take this time to set the name of the project to Hollo World.
Firstly you want to adjust the main camera as this represents your Hololens. The important thing is to move it to 0,0,0 and change the skybox to be a solid colour that is all black (as black is transparent). The HoloLens recommended near clipping distance is 0.85 but I set mine to 0.05 so I can get really close to objects.
How easy was that? Now, lets add some content. I grabbed a simple 3d jet I have used before and through it in at 0,0,2 (2m in front of the camera). then I also created a UI Canvas and set it to world scale, shrunk it down so it just filled my screen (0.005 scale) and put a panel and a text box on it. I set the text to be “Hollo World” and positioned it also about 2m in front and above the jet.
That’s it ! Then I built it and it created me a Visual Studio project solution file.
Open up that Visual Studio solution (not the Unity one, but the one you just built) and it will open up in Visual Studio. Make sure your Hololens is plugged into USB. Now set the build type to RELEASE, the architecture to X86, and the device type to DEVICE.
Now Build -> Deploy Hollloword app and it will install onto the Hololens.
OK, awesome time: put on your Hololens and bloom geasture open the start menu, under the + (All apps) our Holloworld tile is now present. Click gesture to run the app and pin the Window App tile in your space. The app will auto load and we see our content. And the exciting this is we can now walk around and view it from any angle.
We have just built our first Hololens app !
App Lifecycle & World space tracking
Now, the first question I have how do applications work? do they stay running when we quit? Where is the Hololens camera origin? Now we’re going to look at those and other questions.
Firstly lets look at the core user interface to the operating system. Microsoft have a brand new platform and type of device here that is very different from desktop and mobile PC’s, but they have managed to not throw out the existing paradigms – in fact they have managed to retain the core concepts from the Windows Operating System within Hololens. Essentially Hololens in my opinion is the same as desktop Windows but moved into a 3d world space “desktop”. The mouse cursor is there, it’s just a gaze cursor that floats directly in front of you as a small dot. You move the cursor by simply moving your head. It’s not a very elegant solution – and very frustrating and tiring targeting with your head – but I see why they have done it – it’s a simpler learning curve and a type of experience that works across the board for migrating windows apps to Hololens. Now, they’re using a mouse cursor, so how do you click? You can use the seperate wireless clicker (but that would require you remember to bring it) so obviously they have gone with gesture based natural user interface (NUI). What you do is click with a finger gesture.
The Windows desktop has a start menu and app icons on the desktop and open windows. In Hololens, you have the start menu there (accessible via a cool bloom gesture), you have the app icons and windows present when you pin them to the space around you. The windows are not “live” though like on the desktop, each app window you pin will show a static image unless you choose to run the application.
Simpler to the mobile platforms, Hololens will basically only allow you to run one application at a time. But it is assuming you are jumping between multiple apps, so it will pause/resume applications when you switch out of them.
I’ve traced out the general flow of a Hololens application in the diagram below. The important thing is that you can follow this in Unity3d by listening to the OnApplicationFocus MonoBehaviour event:
World Space Tracking
Probably the first important and fundamental feature of the Hololens is that it does world-space tracking. That is from when your application launches and the user moves around their space, the location and orientation of the Hololens is being tracked and the Unity scene camera is updated. The tracking is suprisingly good – or maybe not suprisingly considering the number of cameras and IMUs in the device – but still, it worked way better then I expected.
From what I can tell, the camera location of the world while you are in the shell is completely ignored in your Unity application. When you look at the app tile and launch the application, it re-calculates your world origin. So for our content at 2m in front of the user in the Unity scene, when we launch the app, no matter where I am looking at the time the app launches, the content is correctly 2m in front of me. So that means, when you launch your app the world axis is set to be where you are looking (although Y axis follows gravity straight down). In my diagram above you can see the Hololens camera is starting at an origin point of 0,0,0.
Now, I already discovered that you could re-pin your application multiple times in your space. So what happens when you re-open your app from multiple locations? The result may surprise you…. in fact your application resumes (not restarts) and the content is back at where ever your originally launched your app from (even if it is way out of site a long distance away). Considering your app could be shut down at any time (eg: if the Hololens resets, or memory is low) then we can immediately see a problem with our little Holloworld app! The content could be not seen ! Oh no.
See in the diagram below:
The solution is to detect when the app gets OnApplicationFocus(true) and re-position the content in front of the user. Great, that will work. But one technical issue I discovered that during the OnApplicationFocus event that camera position and orientation has not yet been updated, it takes at least a frame before it jumps to the correct place. So if you re-position your content in this way, you should wait at least one frame before using the camera transform.
Eg:
RepositionContent.cs
using System.Collections; using System.Collections.Generic; using UnityEngine; public class RepositionContent : MonoBehaviour { public Transform Anchor; void OnApplicationFocus(bool focus) { if (focus) { StartCoroutine(RepositionContentCoroutine()); } } IEnumerator RepositionContentCoroutine() { yield return new WaitForEndOfFrame(); transform.position = Anchor.position; } }
Now that we have added that to our app, the content wonderfully appears no matter which Pinned App tile we launch our app from.
Icons, Tiles and Splashes (oh my!)
I follow an agile software development methodology. That means in practise that after each piece of work (user story) I ship something. So I always need to be ready to ship. Usually in my first user story I prepare an “empry” app that is shippable – meaning it does nothing but it looks complete. The app needs a name and have the correct branding in icons, tiles and splashes on startup.
The same applies here with the contest, I know I have to ship an app, so this week I want to make sure I know how to prepare a build for shipping, and display the correct images on tiles and how to set the image on the pinned app window tile.
I was expecting this to be pretty straight forward and I wouldn’t need to write anything on this subject. But I actually found it quite overly complicated. Hololens is based on the Universal Windows Platform, and that started with a few logos (icons and tiles) and as the platform has evolved the number of icon variations has exploded. I think there is something like 60+ different images you can supply.
Add to this that Unity is generating your Visual Studio project, so Unity has it’s own idioms about what icons it generates and what size and scale they are. And you don’t want to miss some important icons, only to find out on some platforms the icon displays as Unity’s default icon, or a very poor resolution.
Looking at Hololens, there are currently 5 image types you need to think about:
- Square Store Icon
- Square App Logo (used on a tile and icon)
- Wide App Logo (used on a wide tile – not currently supported on Hololens but a requirment in case that changes)
- A Splash screen for pinned app window tile (used when you pin your app in space – but note the background colour is also needed)
- A Splash screen for when your app launches AKA Unity Splash screen
You’ll probably use the same icon image on the first 3. But you also need to supply the correct resolution. The Microsoft document is mostly using the 150% scale, but Unity wants to export the 200% scale – which makes sense because the current Visual Studio 2017 app packager is looking for those 200% scaled image assets. So for me, I used the 200% scale mostly and everything worked correctly.
Here is a list of image scales (and resolutions). In bold is the recommended size from Microsoft:
If you’ve watched by video above you’ve seen me setup the minimum set of icons that you need in Unity player settings to make sure you can build your application and submit it to the Windows Store. I’ve created the handy cheat sheet below so you can see which icons I have set.
Also for the splash you want to set the Windows Splash for the pinned app window. The VR & Holographic Splash image are only used on application launch (eg: instead of the Made with Unity splash):
If you provide the same images as above, not only will it work correctly visually in your Hololens, but you will be able to build your APX application package from Visual Studio.
Device Portal for HoloLens
As a developer (or an advanced user) of Hololens, one of the first questions I had was how can I get access to the photos and videos (without waiting for them to upload to One Drive), and also how do I get to diagnostics such as running processes, etc. I was pleased to find out that like other Windows devices the Hololens has support for Windows Device Portal.
The Device Portal for HoloLens is a running web server on your Hololens which you can connect to via any web browser on your PC (if plugged into USB browse to 127.0.0.1:10080) or local WiFi connected computer (you need the IP address of your Hololens and browse to it (no port necessary)).
You will need to turn it on, as it is off by default for security reasons. Just go to settings, Updates, Developer, and scroll down to find turning on the portal. It will prompt you to create a username and password the first time to setup a login account.
Specific instructions are available on the Device Portal for HoloLens page.
The portal has a lot of information, and one of the most useful pages is the Mixed Reality Capture where you can watch your Hololens video feed live and record it to a file:
Next Week
So thats it for this week. I think it was a very productive week – going from zero to being able to build a submittable Hololens app ! Wow.
Next week is all about learning about what technical features Hololens has and how we can access and use them from Unity3d.
See you then!
|
http://talesfromtherift.com/hololens-contest-week-12/
|
CC-MAIN-2018-17
|
refinedweb
| 4,594
| 67.28
|
GraphQL is a dream for frontend developers and clients alike. After all, clients don’t care where data is coming from or what database format you’re using. They care about getting the data they’re asking for quickly, cleanly, and painlessly. Bonus points if it doesn’t put too heavy of a load on the server.
It’s been a goal for all APIs to be consumed in the GraphQL format,for developers who’ve grown accustomed to its simplicity and ease. That dream is now a reality thanks to a new library, GraphQL Mesh. GraphQL Mesh is a Rosetta Stone allowing all of your APIs and local databases to play nice together.
What is GraphQL?
GraphQL Mesh is a new library created by The Guild, an open-source group dedicated to empowering developers to take advantage of the many benefits of GraphQL.
The Guild are also responsible for popular GraphQL resources like GraphQL Code Editor, GraphQL Inspector, and GraphQL-CLI. They clearly know a thing or two about making GraphQL available for a wide pool of different developers, regardless of whether or not they’re previously familiar with the specification created by Facebook.
GraphQL Mesh is a dream come true for developers who’ve been wanting to try GraphQL but have been reluctant due to either lack of experience or having legacy products in older formats like REST. GraphQL Mesh is designed to act as an intermediary layer to receive data from nearly anywhere and translate it into a GraphQL format.
The goal of GraphQL Mesh is to take data from a wide array of different formats and integrate them with GraphQL so they can be modified with GraphQL queries and mutations.
So far, GraphQL Mesh has native support for:
- GraphQL
- gRPC
- JSON
- MongoDB
- OpenAPI/Swagger
- PostgreSQL
- SOAP/WSDL
- Apache Thrift
This makes it easy to modify output schema, link types across schema and merge schema types. It also gives you granular control over how you retrieve data, overcome backend limitations, as well as complications due to schema specifications and non-typed APIs.
GraphQL Mesh also acts as a proxy for your local data and lets you use common libraries with other APIs. You can use this proxy locally or you can call the service in other applications with an execute function.
Keep in mind that GraphQL Mesh is mainly intended as a background layer for your enterprise. If you want to serve the data to the public, you’ll most likely need to add an additional abstraction layer.
GraphQL starts by collecting API schemas from the services it communicates with. It then creates a runtime environment of fully-typed SDKs for those services. Then it translates various API specs into the GraphQL schema, where custom schema transformations and extensions can be performed. Finally, all of this is wrapped up into one SDK which is used to obtain data from the service you’re trying to communicate with.
This is achieved using local schema, which is created from the autogenerated directory when you install GraphQL Mesh.
This schema lets you use GraphQL’s execute function to run query and mutation functions locally in your application. This enables GraphQL to act as a central nervous system between your app and whatever you’re using to power it.
Benefits Of GraphQL Mesh
GraphQL allows clients and end users to integrate data of all kinds of different formats.
Users don’t need to have a thorough understanding of a complex API architecture to retrieve the data they need. It also makes rapid prototyping much quicker and more efficient since you don’t have to go under the hood of your API every time you want to make an insignificant change.
GraphQL is also much more efficient than other specifications like REST. REST returns all of the data stored in a database when it’s queried, which can result in overfetching and underfetching.
GraphQL only returns the exact data the user queries for. Not only does this save on resources, but it also makes an API easier to use since you spend less time looking for the data you need.
How GraphQL Mesh works
All of that data is returned to one place. While REST’s prolific use of endpoints definitely has its uses, it has its downsides as well. Having the ability to have all of that data routed to one endpoint is a major benefit and reason enough to give GraphQL Mesh a try in and of itself.
GraphQL Mesh translates APIs in nearly any given format into a GraphQL format. It’s an abstraction layer that can be overlaid nearly any source, including local files and databases.
Installing GraphQL Mesh
GraphQL Mesh comes available as several packages which you can install depending on your particular needs. We’ll show you how to set up a basic instance of GraphQL Mesh so you can get started with the library and try it out yourself.
To start, you’re going to need to install the
Yarn package handler, which makes packages available globally. For the sake of good housekeeping, create a new directory for this project in your development folder. We’ve called ours
GraphQL_mesh.
In the root directory of your project folder, create a file called
.meshrc.yaml using a text editor of your choosing.
We’re using Notepad++, an open source text editor that lets you save files in whatever file format you want. Paste the following into the file and then save:
sources: - name: Wiki handler: openapi: source:
Navigate to that directory using Terminal and input the following:
$ npm install yarn --global
To install the basic GraphQL Mesh package, type the following:
$ yarn add graphql @graphql-mesh/runtime @graphql-mesh/cli
Now you need to install a Mesh handler, depending on the needs of the specific API you’ll be using. For the sake of this example, we’ll be installing the Mesh handler for the OpenAPI spec:
$ yarn add graphql @graphql-mesh/openapi
To see a full list of the supported API specs, consult the GraphQL-mesh documentation.
Now you can run GraphQL. Type the following command:
$ yarn graphql-mesh serve
This serves an instance of GraphQL following the schema you’ve provided to, so you can test your code and make sure everything’s working as it should be.
How To Use GraphQL Mesh
Now let’s see an example of GraphQL Mesh in action to give you an idea of how you can integrate it into your development workflow. It’ll also help you visualize how GraphQL Mesh can make consolidating data from multiple API sources much easier and more intuitive than other languages.
To illustrate some of these concepts, we’re going to build a simple app that consolidates data from two different APIs and merges them together. We’re gathering data from a weather API and an API of geographic data.
For the sake of good housekeeping, let’s create a new directory for our project. We’ve named ours
locationweather. Navigate to this folder using Terminal.
Now we’ll start by re-installing our libraries and gathering the permissions we’ll need. Once you’re in your programming director, type:
npm install yarn --global yarn add graphql @graphql-mesh/runtime @graphql-mesh/cli yarn add apollo-server yarn add @graphql-mesh/openapi
This installs the libraries that will be called inside of our GraphQL function and makes them available globally.
Now open an instance of your preferred text editor for programming.
We’re using Notepad++, since it lets you save files in whatever file format you prefer.
Let’s start by making the GraphQL schema, which makes up the bulk of what your GraphQL function does. In the root directory of your project folder, create a file called
.meshrc.yaml using your text editor and save it. Then input the following:
sources: - name: Cities handler: openapi: source: operationHeaders: 'X-RapidAPI-Key': f93d3b393dmsh13fea7cb6981b2ep1dba0ajsn654ffeb48c26 - name: Weather context: apiKey: 971a693de7ff47a89127664547988be5 handler: openapi: source: transforms: - extend: | extend type PopulatedPlaceSummary { dailyForecast: [Forecast] todayForecast: Forecast } - cache: # Geo data doesn't change frequntly, so we can cache it forever - field: Query.* # Forecast data might change, so we can cache it for 1 hour only - field: PopulatedPlaceSummary.dailyForecast invalidate: ttl: 3600 - field: PopulatedPlaceSummary.todayForecast invalidate: ttl: 3600 require: - ts-node/register/transpile-only additionalResolvers: - ./src/mesh/additional-resolvers.ts
You can see this function is calling the APIs we’re working with for this project. It should give you an idea of how these principles can be applied to practically any API or data source you may want to work with.
Next, you’re going to create
package.json, which makes up much of the rest of this simple app. Create a blank file and put the following code into the body:
{ "name": "typescript-location-weather-example", "version": "0.0.20", "license": "MIT", "private": true, "scripts": { "predev": "yarn mesh:ts", "dev": "ts-node-dev src/index.ts", "prestart": "yarn mesh:ts", "start": "ts-node src/index.ts", "premesh:serve": "yarn mesh:ts", "mesh:serve": "graphql-mesh serve", "mesh:ts": "graphql-mesh typescript --output ./src/mesh/__generated__/types.ts" }, "devDependencies": { "@types/node": "13.9.0", "ts-node": "8.8.2", "ts-node-dev": "1.0.0-pre.44", "typescript": "3.8.3" }, "dependencies": { "@graphql-mesh/cli": "0.0.20", "@graphql-mesh/openapi": "0.0.20", "@graphql-mesh/runtime": "0.0.20", "@graphql-mesh/transform-cache": "0.0.20", "@graphql-mesh/transform-extend": "0.0.20", "apollo-server": "2.11.0", "graphql": "15.0.0" } }
You can see that most of the variables we’ll be using are defined in
package.json. This is another of GraphQL’s greatest strengths — its ability to be hard-typed. Things are much more settled and fixed and, thus, less likely to break using GraphQL’s JSON strings.
The last file of substance in our root directory is
tsconfig.ts. Create the file and insert these few short lines:
{ "compilerOptions": { "target": "es2015", "module": "commonjs", "moduleResolution": "node", /* Specify module resolution strategy: 'node' (Node.js) or 'classic' (TypeScript pre-1.6). */ "lib": [ "esnext" ], "sourceMap": true /* Generates corresponding '.map' file. */, }, "include": ["src"], "exclude": ["node_modules"] }
Now there’s just a tiny bit more housekeeping to do incase anyone uses this app after you. We’ll make a readme file,
README.md:
## Location-Weather Example
This example takes two API sources based on Openapi 3 and Swagger, and links between them.
It allows you to query for cities and locations, and include fields for the weather in that found place.
Finally, we’ll create a file responsible for some additional routing,called
.gitingore with no file extension.
__generated__ src/__generated__
We’re almost done! There’s just the tiniest bit of additional formatting we’ll want to incorporate. To do so, start off by creating a sub-folder called
SRC. Then make a file called
index.ts.
Insert the following code:
import { ApolloServer } from 'apollo-server'; import { getMesh, findAndParseConfig } from '@graphql-mesh/runtime'; async function main() { const meshConfig = await findAndParseConfig(); const { schema, contextBuilder } = await getMesh(meshConfig); const server = new ApolloServer({ schema, context: contextBuilder, }); server.listen().then(({ url }) => { console.log(`🚀 Server ready at ${url}`); }); } main().catch(err => console.error(err));
You can see that
index.ts imports the functions we installed earlier, like apollo-server and, of course, GraphQL Mesh, and makes them available to the rest of the functions.
Create one more additional sub-folder in the
src directory and call it
mesh. You’re going to make one final file, in that folder, called
additional-resolvers.ts:
import { Resolvers } from './__generated__/types'; export const resolvers: Resolvers = { PopulatedPlaceSummary: { dailyForecast: async (placeSummary, _, { Weather }) => { const forecast = await Weather.api.getForecastDailyLatLatLonLon({ lat: placeSummary.latitude!, lon: placeSummary.longitude!, key: Weather.config.apiKey, }); return forecast.data!; }, todayForecast: async (placeSummary, _, { Weather }) => { const forecast = await Weather.api.getForecastDailyLatLatLonLon({ lat: placeSummary.latitude!, lon: placeSummary.longitude!, key: Weather.config.apiKey, }); return forecast.data![0]!; }, }, };
That’s the last of the code! Now you can go to the command line and run:
yarn graphql-mesh serve
This will serve your app to, running on an instance of GraphQL, where you can perform your queries and mutations.
If you’d like to try out GraphQL Mesh without messing with any code, the entire project is available on codesandbox, including the code, so you can see GraphQL Mesh in action for yourself and get a sense of how you might integrate this clever translator into your existing workflow.
Conclusion
GraphQL Mesh is a dream come true for frontend developers and end users alike.
From the client’s perspective, they don’t have to know as much about the API structure to do what they’re trying to do. Instead, they just need to know what they’re querying for and GraphQL Mesh delivers.
From the programmer’s perspective, GraphQL Mesh makes code exponentially more robust and flexible. You won’t have to worry about reconfiguring your API every time you change your data. No more routing endless endpoints or constantly coding complex databases..
|
http://blog.logrocket.com/a-guide-to-the-graphql-mesh-library/
|
CC-MAIN-2020-40
|
refinedweb
| 2,162
| 55.95
|
0
Hi,
I'm using windows XP and trying to simulate mouse movement and mouse clicks. The following code supposed to move mouse to absolute position (100,100) and perform a click:
//test.cpp file:
#include "stdafx.h" int main(int argc, char* argv[]){ INPUT *buffer = new INPUT[3]; //allocate a buffer buffer->type = INPUT_MOUSE; buffer->mi.dx = 100; buffer->mi.dy = 100; buffer->mi.mouseData = 0; buffer->mi.dwFlags = (MOUSEEVENTF_ABSOLUTE | MOUSEEVENTF_MOVE); buffer->mi.time = 0; buffer->mi.dwExtraInfo = 0; (buffer+1)->type = INPUT_MOUSE; (buffer+1)->mi.dx = 100; (buffer+1)->mi.dy = 100; (buffer+1)->mi.mouseData = 0; (buffer+1)->mi.dwFlags = MOUSEEVENTF_LEFTDOWN; (buffer+1)->mi.time = 0; (buffer+1)->mi.dwExtraInfo = 0; (buffer+2)->type = INPUT_MOUSE; (buffer+2)->mi.dx = 100; (buffer+2)->mi.dy = 100; (buffer+2)->mi.mouseData = 0; (buffer+2)->mi.dwFlags = MOUSEEVENTF_LEFTUP; (buffer+2)->mi.time = 0; (buffer+2)->mi.dwExtraInfo = 0; SendInput(3,buffer,sizeof(INPUT)); delete (buffer); //clean up our messes. return 0; }
when "stdafx.h" is:
#pragma once #define WIN32_LEAN_AND_MEAN // Exclude rarely-used stuff from Windows headers #define _WIN32_WINNT 0x0500 // so the code would compile #include <windows.h>
The problem is that the mouse moves to position (0,0) and performs a click. I've been unable so far to simulate mouse movement to any absolute coordinate (x,y) specified in mi.dx and mi.dy respectively.
If someone knows a way to force mouse to move to absolute position (x,y) please tell me...
|
https://www.daniweb.com/programming/software-development/threads/6727/simulate-mouse-move
|
CC-MAIN-2016-07
|
refinedweb
| 244
| 56.82
|
Archived project. No maintenance.
This project is not maintained anymore and is archived. Feel free to fork and make your own changes if needed. For more detail read my blog post: Taking an indefinite sabbatical from my projects
Thanks to everyone for their valuable feedback and contributions.
vim-go-tutorial
Tutorial for vim-go. A simple tutorial on how to install and use vim-go.
Table of Contents
- Quick Setup
- Hello World
- Run it
- Build it
- Fix it
- Test it
- Cover it
- Edit it
- Beautify it
- Check it
- Navigate it
- Understand it
- Refactor it
- Generate it
- Share it
- HTML template
Quick Setup
We're going to use
vim-plug to install vim-go. Feel free to use other plugin
managers instead. We will create a minimal
~/.vimrc, and add to it as we go along.
First fetch and install
vim-plug along with
vim-go:
curl -fLo ~/.vim/autoload/plug.vim --create-dirs git clone ~/.vim/plugged/vim-go
Create
~/.vimrc with following content:
call plug#begin() Plug 'fatih/vim-go', { 'do': ':GoInstallBinaries' } call plug#end()
Or open Vim and execute
:GoInstallBinaries. This is a
vim-go command that
installs all
vim-go dependencies for you. It doesn't download pre compiled
binaries, instead it calls
go get under the hood, so the binaries are all
compiled in your host machine (which is both safe and simplifies the
installation process as we don't need to provide binaries for multiple
platforms). If you already have some of the dependencies (such as
guru,
goimports) call
:GoUpdateBinaries to update the binaries.
For the tutorial, all our examples will be under
GOPATH/src/github.com/fatih/vim-go-tutorial/. Please be sure you're inside
this folder. This will make it easy to follow the
tutorial. If you already have a
GOPATH set up just execute:
go get github.com/fatih/vim-go-tutorial
Or create the folder, if necessary.
Hello World!
Open the
main.go file from your terminal:
vim main.go
It's a very basic file that prints
vim-go to stdout.
Run it
You can easily run the file with
:GoRun %. Under the hood it calls
go run for
the current file. You should see that it prints
vim-go.
For whole package run with
:GoRun.
Build it
Replace
vim-go with
Hello Gophercon. Let us compile the file instead of running it.
For this we have
:GoBuild. If you call it, you should see this message:
vim-go: [build] SUCCESS
Under the hood it calls
go build, but it's a bit smarter. It does a couple of
things differently:
- No binaries are created; you can call
:GoBuildmultiple times without polluting your workspace.
- It automatically
cds into the source package's directory
- It parses any errors and shows them inside a quickfix list
- It automatically detects the GOPATH and modifies it if needed (detects projects such as
gb,
Godeps, etc..)
- Runs async if used within Vim 8.0.xxx or NeoVim
Fix it
Let's introduce two errors by adding two compile errors:
var b = foo() func main() { fmt.Println("Hello GopherCon") a }
Save the file and call
:GoBuild again.
This time the quickfix view will be opened. To jump between the errors you can
use
:cnext and
:cprevious. Let us fix the first error, save the
file and call
:GoBuild again. You'll see the quickfix list is updated with a
single error. Remove the second error as well, save the file and call
:GoBuild again. Now because there are no more errors, vim-go automatically
closes the quickfix window for you.
Let us improve it a little bit. Vim has a setting called
autowrite that
writes the content of the file automatically if you call
:make. vim-go also
makes use of this setting. Open your
.vimrc and add the following:
set autowrite
Now you don't have to save your file anymore when you call
:GoBuild. If we
reintroduce the two errors and call
:GoBuild, we can now iterate much more
quickly by only calling
:GoBuild.
:GoBuild jumps to the first error encountered. If you don't want to jump
append the
! (bang) sign:
:GoBuild!.
In all the
go commands, such as
:GoRun,
:GoInstall,
:GoTest, etc..,
whenever there is an error the quickfix window always will pop up.
vimrc improvements
- You can add some shortcuts to make it easier to jump between errors in quickfix list:
map <C-n> :cnext<CR> map <C-m> :cprevious<CR> nnoremap <leader>a :cclose<CR>
- I also use these shortcuts to build and run a Go program with
<leader>band
<leader>r:
autocmd FileType go nmap <leader>b <Plug>(go-build) autocmd FileType go nmap <leader>r <Plug>(go-run)
- There are two types of error lists in Vim. One is called
location listthe other
quickfix. Unfortunately the commands for each lists are different. So
:cnextonly works for
quickfixlist, for
location listsyou have to use
:lnext. Some of the commands in
vim-goopen a location list, because location lists are associated with a window and each window can have a separate list. This means you can have multiple windows, and multiple location lists, one for
Build, one for
Check, one for
Tests, etc..
Some people prefer to use only
quickfix though. If you add the following to
your
vimrc all lists will be of type
quickfix:
let g:go_list_type = "quickfix"
Test it
Let's write a simple function and a test for the function. Add the following:
func Bar() string { return "bar" }
Open a new file called
main_test.go (it doesn't matter how you open it, from
inside Vim, a separate Vim session, etc.. it's up to you). Let us use the
current buffer and open it from Vim via
:edit main_test.go.
When you open the new file you notice something. The file automatically has the package declaration added:
package main
This is done by vim-go automatically. It detected that the file is inside a
valid package and therefore created a file based on the package name (in our
case the package name was
main). If there are no files, vim-go automatically
populates the content with a simple main package.
Update the test file with the following code:
package main import ( "testing" ) func TestBar(t *testing.T) { result := Bar() if result != "bar" { t.Errorf("expecting bar, got %s", result) } }
Call
:GoTest. You'll see the following message:
vim-go: [test] PASS
:GoTest calls
go test under the hood. It has the same improvements
we have for
:GoBuild. If there is any test error, a quickfix list is
opened again and you can jump to it easily.
Another small improvement is that you don't have to open the test file itself.
Try it yourself: open
main.go and call
:GoTest. You'll see the tests will
be run for you as well.
:GoTest times out after 10 seconds by default. This is useful because Vim is
not async by default. You can change the timeout value with
let g:go_test_timeout = '10s'
We have two more commands that make it easy to deal with test files. The first
one is
:GoTestFunc. This only tests the function under your cursor.
Let us change the content of the test file (
main_test.go) to:
package main import ( "testing" ) func TestFoo(t *testing.T) { t.Error("intentional error 1") } func TestBar(t *testing.T) { result := Bar() if result != "bar" { t.Errorf("expecting bar, got %s", result) } } func TestQuz(t *testing.T) { t.Error("intentional error 2") }
Now when we call
:GoTest a quickfix window will open with two errors.
However if go inside the
TestBar function and call
:GoTestFunc, you'll see
that our test passes! This is really useful if you have a lot of tests that
take time and you only want to run certain tests.
The other test-related command is
:GoTestCompile. Tests not only need to
pass with success, they must compile without any problems.
:GoTestCompile compiles your test file, just like
:GoBuild and opens a
quickfix if there are any errors. This however doesn't run the tests. This
is very useful if you have a large test which you're editing a lot. Call
:GoTestCompile in the current test file, you should see the following:
vim-go: [test] SUCCESS
vimrc improvements
- As with
:GoBuildwe can add a mapping to easily call
:GoTestwith a key combination. Add the following to your
.vimrc:
autocmd FileType go nmap <leader>t <Plug>(go-test)
Now you can easily test your files via
<leader>t
- Let's make building Go files simpler. First, remove the following mapping we added previously:
autocmd FileType go nmap <leader>b <Plug>(go-build)
We're going to add an improved mapping. To make it seamless for
any Go file we can create a simple Vim function that checks the type of the Go
file, and executes
:GoBuild or
:GoTestCompile. Below is the helper function
you can add to your
.vimrc:
" run :GoBuild or :GoTestCompile based on the go file>
Now whenever you hit
<leader>b it'll build either your Go file or it'll
compile your test files seamlessly.
- By default the leader shortcut is defined as:
\I've mapped my leader to
,as I find it more useful with the following setting (put this in the beginning of .vimrc):
let mapleader = ","
So with this setting, we can easily build any test and non test files with
,b.
Cover it
Let's dive further into the world of tests. Tests are really important. Go has a really great way of showing the coverage of your source code. vim-go makes it easy to see the code coverage without leaving Vim in a very elegant way.
Let's first change our
main_test.go file back to:
package main import ( "testing" ) func TestBar(t *testing.T) { result := Bar() if result != "bar" { t.Errorf("expecting bar, got %s", result) } }
And
main.go to
package main func Bar() string { return "bar" } func Foo() string { return "foo" } func Qux(v string) string { if v == "foo" { return Foo() } if v == "bar" { return Bar() } return "INVALID" }
Now let us call
:GoCoverage. Under the hood this calls
go test -coverprofile tempfile. It parses the lines from the profile and then dynamically changes
the syntax of your source code to reflect the coverage. As you see, because we
only have a test for the
Bar() function, that is the only function that is
green.
To clear the syntax highlighting you can call
:GoCoverageClear. Let us add a
test case and see how the coverage changes. Add the following to
main_test.go:
func TestQuz(t *testing.T) { result := Qux("bar") if result != "bar" { t.Errorf("expecting bar, got %s", result) } result = Qux("qux") if result != "INVALID" { t.Errorf("expecting INVALID, got %s", result) } }
If we call
:GoCoverage again, you'll see that the
Quz function is now
tested as well and that it has a larger coverage. Call
:GoCoverageClear again
to clear the syntax highlighting.
Because calling
:GoCoverage and
:GoCoverageClear are used a lot together,
there is another command that makes it easy to call and clear the result. You
can also use
:GoCoverageToggle. This acts as a toggle and shows the coverage,
and when called again it clears the coverage. It's up to your workflow how you
want to use them.
Finally, if you don't like vim-go's internal view, you can also call
:GoCoverageBrowser. Under the hood it uses
go tool cover to create a HTML
page and then opens it in your default browser. Some people like this more.
Using the
:GoCoverageXXX commands does not create any kind of temporary files
and doesn't pollute your workflow. So you don't have to deal with removing
unwanted files every time.
vimrc improvements
Add the following to your
.vimrc:
autocmd FileType go nmap <Leader>c <Plug>(go-coverage-toggle)
With this you can easily call
:GoCoverageToggle with
<leader>c
Edit it
Imports
Let us start with a sample
main.go file:
package main import "fmt" func main() { fmt.Println("gopher" ) }
Let's start with something we know already. If we save the file, you'll see that
it'll be formatted automatically. It's enabled by default but can be disabled
if desired (not sure why you would though :)) with
let g:go_fmt_autosave = 0.
Optionally we also provide
:GoFmt command, which runs
gofmt under the hood.
Let's print the
"gopher" string in all uppercase. For it we're going to use
the
strings package. Change the definition to:
fmt.Println(strings.ToUpper("gopher"))
When you build it you'll get an error of course:
main.go|8| undefined: strings in strings.ToUpper
You'll see we get an error because the
strings package is not imported. vim-go
has a couple of commands to make it easy to manipulate the import declarations.
We can easily go and edit the file, but instead we're going to use the Vim
command
:GoImport. This command adds the given package to the import path.
Run it via:
:GoImport strings. You'll see the
strings package is being
added. The great thing about this command is that it also supports
completion. So you can just type
:GoImport s and hit tab.
We also have
:GoImportAs and
:GoDrop to edit the import paths.
:GoImportAs is the same as
:GoImport, but it allows changing the package
name. For example
:GoImportAs str strings, will import
strings with the
package name
str.
Finally
:GoDrop makes it easy to remove any import paths from the import
declarations.
:GoDrop strings will remove it from the import declarations.
Of course manipulating import paths is so 2010. We have better tools to handle
this case for us. If you haven't heard yet, it's called
goimports.
goimports is a replacement for
gofmt. You have two ways of using it. The
first (and recommended) way is telling vim-go to use it when saving the
file:
let g:go_fmt_command = "goimports"
Now whenever you save your file,
goimports will automatically format and also
rewrite your import declarations. Some people do not prefer
goimports as it
might be slow on very large codebases. In this case we also have the
:GoImports command (note the
s at the end). With this, you can explicitly
call
goimports
Text objects
Let us show more editing tips/tricks. There are two text objects that we can
use to change functions. Those are
if and
af.
if means inner function and
it allows you to select the content of a function enclosure. Change your
main.go file to:
package main import "fmt" func main() { fmt.Println(1) fmt.Println(2) fmt.Println(3) fmt.Println(4) fmt.Println(5) }
Put your cursor on the
func keyword Now execute the following in
normal
mode and see what happens:
dif
You'll see that the function body is removed. Because we used the
d operator.
Undo your changes with
u. The great thing is that your cursor can be anywhere
starting from the
func keyword until the closing right brace
}. It uses the tool
motion under the hood. I wrote motion
explicitly for vim-go to support features like this. It's Go AST aware and thus
its capabilities are really good. Like what you might ask? Change
main.go to:
package main import "fmt" func Bar() string { fmt.Println("calling bar") foo := func() string { return "foo" } return foo() }
Previously we were using regexp-based text objects, which leads to problems.
For example in this example, put your cursor to the anonymous functions'
func
keyword and execute
dif in
normal mode. You'll see that only the body of
the anonymous function is deleted.
We have only used the
d operator (delete) so far. However it's up to you. For
example you can select it via
vif or yank(copy) with
yif.
We also have
af, which means
a function. This text object includes the
whole function declaration. Change your
main.go to:
package main import "fmt" // bar returns a the string "foo" even though it's named as "bar". It's an // example to be used with vim-go's tutorial to show the 'if' and 'af' text // objects. func bar() string { fmt.Println("calling bar") foo := func() string { return "foo" } return foo() }
So here is the great thing. Because of
motion we have full knowledge about
every single syntax node. Put your cursor on top of the
func keyword or
anywhere below or above (doesn't matter). If you now execute
vaf, you'll see
that the function declaration is being selected, along with the doc comment as
well! You can for example delete the whole function with
daf, and you'll see
that the comment is gone as well. Go ahead and put your cursor on top of the
comment and execute
vif and then
vaf. You'll see that it selects the
function body, even though your cursor is outside the function, or it selects
the function comments as well.
This is really powerful and this all is thanks to the knowledge we have from
let g:go_textobj_include_function_doc = 1
motion. If you don't like comments
being a part of the function declaration, you can easily disable it with:
let g:go_textobj_include_function_doc = 0
If you are interested in learning more about
motion, check out the blog post I wrote for
more details: Treating Go types as objects in vim
(Optional question: without looking at the
go/ast package, is the doc comment
a part of the function declaration or not?)
Struct split and join
There is a great plugin that allows you to split or join Go structs. It's
actually not a Go plugin, but it has support for Go structs. To enable it add
plugin directive between the
plug definition into your
vimrc, then do a
:source ~/.vimrc in your vim editor and run
:PlugInstall. Example:
call plug#begin() Plug 'fatih/vim-go' Plug 'AndrewRadev/splitjoin.vim' call plug#end()
Once you have installed the plugin, change the
main.go file to:
package main type Foo struct { Name string Ports []int Enabled bool } func main() { foo := Foo{Name: "gopher", Ports: []int{80, 443}, Enabled: true} }
Put your cursor on the same line as the struct expression. Now type
gS. This
will
split the struct expression into multiple lines. And you can even
reverse it. If your cursor is still on the
foo variable, execute
gJ in
normal mode. You'll see that the field definitions are all joined.
This doesn't use any AST-aware tools, so for example if you type
gJ on top of
the fields, you'll see that only two fields are joined.
Snippets
Vim-go supports two popular snippet plugins.
Ultisnips and
neosnippet. By default,
if you have
Ultisnips installed it'll work. Let us install
ultisnips
first. Add it between the
plug directives in your
vimrc, then do a
:source ~/.vimrc in your vim editor and then run
:PlugInstall. Example:
call plug#begin() Plug 'fatih/vim-go' Plug 'SirVer/ultisnips' call plug#end()
There are many helpful snippets. To see the full list check our current snippets:
UltiSnips and YouCompleteMe may conflict on [tab] button
Let me show some of the snippets that I'm using the most. Change your
main.go
content to:
package main import "encoding/json" type foo struct { Message string Ports []int ServerName string } func newFoo() (*foo, error) { return &foo{ Message: "foo loves bar", Ports: []int{80}, ServerName: "Foo", }, nil } func main() { res, err := newFoo() out, err := json.Marshal(res) }
Let's put our cursor just after the
newFoo() expression. Let's panic here if
the err is non-nil. Type
errp in insert mode and just hit
tab. You'll see
that it'll be expanded and put your cursor inside the `panic()`` function:
if err != nil { panic( ) ^ cursor position }
Fill the panic with
err and move on to the
json.Marshal statement. Do the
same for it.
Now let us print the variable
out. Because variable printing is so popular,
we have several snippets for it:
fn -> fmt.Println() ff -> fmt.Printf() ln -> log.Println() lf -> log.Printf()
Here
ff and
lf are special. They dynamically copy the variable name into
the format string as well. Try it yourself. Move your cursor to the end of the
main function and type
ff and hit tab. After expanding the snippet you can
start typing. Type
string(out) and you'll see that both the format string and
the variadic arguments will be filled with the same string you have typed.
This comes very handy to quickly print variables for debugging.
Run your file with
:GoRun and you should see the following output:
string(out) = {"Message":"foo loves bar","Ports":[80],"ServerName":"Foo"}
Great. Now let me show one last snippet that I think is very useful. As you see
from the output the fields
Message and
Ports begin with uppercase
characters. To fix it we can add a json tag to the struct field. vim-go makes it
very easy to add field tags. Move your cursor to the end of the
string line in the field:
type foo struct { Message string . ^ put your cursor here }
In
insert mode, type
json and hit tab. You'll see that it'll be
automatically expanded to valid field tag. The field name is converted
automatically to a lowercase and put there for you. You should now see the
following:
type foo struct { Message string `json:"message"` }
It's really amazing. But we can do even better! Go ahead and create a
snippet expansion for the
ServerName field. You'll see that it's converted to
server_name. Amazing right?
type foo struct { Message string `json:"message"` Ports []int ServerName string `json:"server_name"` }
vimrc improvements
- Don't forget to change
gofmtto
goimports
let g:go_fmt_command = "goimports"
- When you save your file,
gofmtshows any errors during parsing the file. If there are any parse errors it'll show them inside a quickfix list. This is enabled by default. Some people don't like it. To disable it add:
let g:go_fmt_fail_silently = 1
- You can change which case it should apply while converting. By default vim-go uses
snake_case. But you can also use
camelCaseif you wish. For example if you wish to change the default value to camel case use the following setting:
let g:go_addtags_transform = "camelcase"
Beautify it
By default we only have a limited syntax highlighting enabled. There are two
main reasons. First is that people don't like too much color because it causes
too much distraction. The second reason is that it impacts
the performance of Vim a lot. We need to enable it explicitly. First add the
following settings to your
.vimrc:
let g:go_highlight_types = 1
This highlights the
bar and
foo below:
type foo struct{ quz string } type bar interface{}
Adding the following:
let g:go_highlight_fields = 1
Will highlight the
quz below:
type foo struct{ quz string } f := foo{quz:"QUZ"} f.quz # quz here will be highlighted
If we add the following:
let g:go_highlight_functions = 1
We are now also highlighting function and method names in declarations.
Foo
and
main will now be highlighted, but
Println is not as that is an
invocation:
func (t *T) Foo() {} func main() { fmt.Println("vim-go") }
If you also want to highlight function and method invocations, add the following:
let g:go_highlight_function_calls = 1
Now,
Println will also be highlighted:
func (t *T) Foo() {} func main() { fmt.Println("vim-go") }
If you add
let g:go_highlight_operators = 1 it will highlight the following
operators such as:
- + % < > ! & | ^ * = -= += %= <= >= != &= |= ^= *= == << >> &^ <<= >>= &^= := && || <- ++ --
If you add
let g:go_highlight_extra_types = 1 the following extra types
will be highlighted as well:
bytes.(Buffer) io.(Reader|ReadSeeker|ReadWriter|ReadCloser|ReadWriteCloser|Writer|WriteCloser|Seeker) reflect.(Kind|Type|Value) unsafe.Pointer
Let's move on to more useful highlights. What about build tags? It's not easy
to implement it without looking into the
go/build document. Let us first add
the following:
let g:go_highlight_build_constraints = 1 and change your
main.go file to:
// build linux package main
You'll see that it's gray, thus it's not valid. Prepend
+ to the
build word and save it again:
// +build linux package main
Do you know why? If you read the
go/build package you'll see that the
following is buried in the document:
... preceded only by blank lines and other line comments.
Let us change our content again and save it to:
// +build linux package main
You'll see that it automatically highlighted it in a valid way. It's really
great. If you go and change
linux to something you'll see that it also checks
for valid official tags (such as
darwin,
race,
ignore, etc... )
Another similar feature is to highlight the Go directive
//go:generate. If
you put
let g:go_highlight_generate_tags = 1 into your vimrc, it'll highlight
a valid directive that is processed with the
go generate command.
We have a lot more highlight settings, these are just a sneak peek of it. For
more check out the settings via
:help go-settings
vimrc improvements
- Some people don't like how the tabs are shown. By default Vim shows
8spaces for a single tab. However it's up to us how to represent in Vim. The following will change it to show a single tab as 4 spaces:
autocmd BufNewFile,BufRead *.go setlocal noexpandtab tabstop=4 shiftwidth=4
This setting will not expand a tab into spaces. It'll show a single tab as
4
spaces. It will use
4 spaces to represent a single indent.
- A lot of people ask for my colorscheme. I'm using a slightly modified
molokai. To enable it add the Plug directive just between the plug definitions:
call plug#begin() Plug 'fatih/vim-go' Plug 'fatih/molokai' call plug#end()
Also add the following to enable molokai with original color scheme and 256 color version:
let g:rehash256 = 1 let g:molokai_original = 1 colorscheme molokai
After that restart Vim and call
:source ~/.vimrc, then
:PlugInstall. This will pull the plugin
and install it for you. After the plugin is installed, you need to restart Vim
again.
Check it
From the previous examples you saw that we had many commands that would show
the quickfix window when there was an issue. For example
:GoBuild shows
errors from the compile output (if any). Or for example
:GoFmt shows the
parse errors of the current file while formatting it.
We have many other commands that allows us to call and then collect errors, warnings or suggestions.
For example
:GoLint. Under the hood it calls
golint, which is a command
that suggests changes to make Go code more idiomatic. There
is also
:GoVet, which calls
go vet under the hood. There are many other
tools that check certain things. To make it easier, someone decided to
create a tool that calls all these checkers. This tool is called
gometalinter. And vim-go supports it via the command
:GoMetaLinter. So what
does it do?
If you just call
:GoMetaLinter for a given Go source code. By default it'll run
go vet,
golint and
errcheck concurrently.
gometalinter collects
all the outputs and normalizes it to a common format. Thus if you call
:GoMetaLinter, vim-go shows the result of all these checkers inside a
quickfix list. You can then jump easily between the lint, vet and errcheck
results. The setting for this default is as following:
let g:go_metalinter_enabled = ['vet', 'golint', 'errcheck']
There are many other tools and you can easily customize this list yourself. If
you call
:GoMetaLinter it'll automatically uses the list above.
Because
:GoMetaLinter is usually fast, vim-go also can call it whenever you
save a file (just like
:GoFmt). To enable it you need to add the following to
your
.vimrc:
let g:go_metalinter_autosave = 1
What's great is that the checkers for the autosave is different than what you
would use for
:GoMetaLinter. This is great because you can customize it so only
fast checkers are called when you save your file, but others if you call
:GoMetaLinter. The following setting let you customize the checkers for the
autosave feature.
let g:go_metalinter_autosave_enabled = ['vet', 'golint']
As you see by default
vet and
golint are enabled. Lastly, to prevent
:GoMetaLinter running for too long, we have a setting to cancel it after a
given timeout. By default it is
5 seconds but can be changed by the following
setting:
let g:go_metalinter_deadline = "5s"
Navigate it
So far we have only jumped between two files,
main.go and
main_test.go. It's
really easy to switch if you have just two files in the same directory. But
what if the project gets larger and larger with time? Or what if the file
itself is so large that you have hard time navigating it?
Alternate files
vim-go has several ways of improving navigation. First let me show how we can quickly jump between a Go source code and its test file.
Suppose you have both a
foo.go and its equivalent test file
foo_test.go.
If you have
main.go from the previous examples with its test file you can
also open it. Once you open it just execute the following Vim command:
:GoAlternate
You'll see that you switched immediately to
main_test.go. If you execute it
again, it'll switch to
main.go.
:GoAlternate works as a toggle and is
really useful if you have a package with many test files. The idea is very
similar to the plugin a.vim command
names. This plugin jumps between a
.c and
.h file. In our case
:GoAlternate is used to switch between a test and non-test file.
Go to definition
One of the most used features is
go to definition. From the beginning, vim-go
had the
:GoDef command that jumps to any identifier's declaration. Let
us first create a
main.go file to show it in action. Create it with the
following content:
package main import "fmt" type T struct { Foo string } func main() { t := T{ Foo: "foo", } fmt.Printf("t = %+v\n", t) }
Now we have here several ways of jumping to declarations. For example if you put
your cursor on top of
T expression just after the main function and call
:GoDef it'll jump to the type declaration.
If you put your cursor on top of the
t variable declaration just after the
main function and call
:GoDef, you'll see that nothing will happen. Because
there is no place to go, but if you scroll down a few lines and put your cursor
to the
t variable used in
fmt.Printf() and call
:GoDef, you'll see that
it jumped to the variable declaration.
:GoDef not only works for local scope, it works also globally
(across
GOPATH). If, for example, you put your cursor on top of the
Printf()
function and call
:GoDef, it'll jump directly to the
fmt package. Because
this is used so frequently, vim-go overrides the built in Vim shortcuts
gd
and
ctrl-] as well. So instead of
:GoDef you can easily use
gd or
ctrl-]
Once we jump to a declaration, we also might want to get back into our previous
location. By default there is the Vim shortcut
ctrl-o that jumps to the
previous cursor location. It works great when it does, but not good enough if
you're navigating between Go declarations. If, for example, you jump to a file
with
:GoDef and then scroll down to the bottom, and then maybe to the top,
ctrl-o will remember these locations as well. So if you want to jump back to
the previous location when invoking
:GoDef, you have to hit
ctrl-o multiple
times. And this is really annoying.
We don't need to use this shortcut though, as vim-go has a better implementation
for you. There is a command
:GoDefPop which does exactly this. vim-go
keeps an internal stack list for all the locations you visit with
:GoDef.
This means you can jump back easily again via
:GoDefPop to your older
locations, and it works even if you scroll down/up in a file. And because this
is also used so many times we have the shortcut
ctrl-t which calls under the
hood
:GoDefPop. So to recap:
- Use
ctrl-]or
gdto jump to a definition, locally or globally
- Use
ctrl-tto jump back to the previous location
Let us move on with another question, suppose you jump so far that you just
want to back to where you started? As mentioned earlier,
vim-go keeps an history of all your locations invoked via
:GoDef. There is a
command that shows all these and it's called
:GoDefStack. If you call it,
you'll see that a custom window with a list of your old locations will be
shown. Just navigate to your desired location and hit enter. And finally to
clear the stack list anytime call
:GoDefStackClear.
Move between functions
From the previous example we see that
:GoDef is nice if you know where you want to
jump. But what if you don't know what your next destination is? Or you just
partially know the name of a function?
In our
Edit it section I mentioned a tool called
motion, which is a
custom built tool just for vim-go.
motion has other capabilities as well.
motion parses your Go package and thus has a great understanding of all
declarations. We can take advantage of this feature for jumping between
declarations. There are two commands, which are not available until you install
a certain plugin. The commands are:
:GoDecls :GoDeclsDir
First let us enable these two commands by installing the necessary plugin. The
plugin is called ctrlp. Long-time Vim
users have it installed already. To install it add the following line between
your
plug directives, then do a
:source ~/.vimrc in your vim editor and call
:PlugInstall to install it:
Plug 'ctrlpvim/ctrlp.vim'
Once you have it installed, use the following
main.go content:
package main import "fmt" type T struct { Foo string } func main() { t := T{ Foo: "foo", } fmt.Printf("t = %+v\n", t) } func Bar() string { return "bar" } func BarFoo() string { return "bar_foo" }
And a
main_test.go file with the following content:
package main import ( "testing" ) type files interface{} func TestBar(t *testing.T) { result := Bar() if result != "bar" { t.Errorf("expecting bar, got %s", result) } } func TestQuz(t *testing.T) { result := Qux("bar") if result != "bar" { t.Errorf("expecting bar, got %s", result) } result = Qux("qux") if result != "INVALID" { t.Errorf("expecting INVALID, got %s", result) } }
Open
main.go and call
:GoDecls. You'll see that
:GoDecls shows all type
and function declarations for you. If you type
ma you'll see that
ctrlp
filters the list for you. If you hit
enter it will automatically jump to it.
The fuzzy search capabilities combined with
motion's AST capabilities brings
us a very simple to use but powerful feature.
For example, call
:GoDecls and write
foo. You'll see that it'll filter
BarFoo for you. The Go parser is very fast and works very well with large files
with hundreds of declarations.
Sometimes just searching within the current file is not enough. A Go package can
have multiple files (such as tests). A type declaration can be in one file,
whereas a some functions specific to a certain set of features can be in
another file. This is where
:GoDeclsDir is useful. It parses the whole
directory for the given file and lists all the declarations from the files in the
given directory (but not subdirectories).
Call
:GoDeclsDir. You'll see this time it also included the declarations from
the
main_test.go file as well. If you type
Bar, you'll see both the
Bar
and
TestBar functions. This is really great if you just want to get an
overview of all type and function declarations, and also jump to them.
Let's continue with a question. What if you just want to move to the next or previous function? If your current function body is long, you'll probably will not see the function names. Or maybe there are other declarations between the current and other functions.
Vim already has motion operators like
w for words or
b for backwards words.
But what if we could add motions for Go ast? For example for function declarations?
vim-go provides(overrides) two motion objects to move between functions. These are:
]] -> jump to next function [[ -> jump to previous function
Vim has these shortcuts by default. But those are suited for C source code and
jumps between braces. We can do it better. Just like our previous example,
motion is used under the hood for this operation
Open
main.go and move to the top of the file. In
normal mode, type
]] and
see what happens. You'll see that you jumped to the
main() function. Another
]] will jump to
Bar() If you hit
[[ it'll jump back to the
main()
function.
]] and
[[ also accepts
counts. For example if you move to the top again
and hit
3]] you'll see that it'll jump to the third function in the source file.
And going forward, because these are valid motions, you can apply operators to
it as well!
If you move your file to the top and hit
d]] you'll see that it deleted
anything before the next function. For example one useful usage would be typing
v]] and then hit
]] again to select the next function, until you've done
with your selection.
.vimrc improvements
- We can improve it to control how it opens the alternate file. Add the following to your
.vimrc: will add new commands, called
:A,
:AV,
:AS and
:AT. Here
:A
works just like
:GoAlternate, it replaces the current buffer with the
alternate file.
:AV will open a new vertical split with the alternate file.
:AS will open the alternate file in a new split view and
:AT in a new tab.
These commands are very productive depending on how you use them, so I think
it's useful to have them.
- The "go to definition" command families are very powerful but yet easy to use. Under the hood it uses by default the tool
guru(formerly
oracle).
guruhas an excellent track record of being very predictable. It works for dot imports, vendorized imports and many other non-obvious identifiers. But sometimes it's very slow for certain queries. Previously vim-go was using
godefwhich is very fast on resolving queries. With the latest release one can easily use or switch the underlying tool for
:GoDef. To change it back to
godefuse the following setting:
let g:go_def_mode = 'godef'
- Currently by default
:GoDeclsand
:GoDeclsDirshow type and function declarations. This is customizable with the
g:go_decls_includessetting. By default it's in the form of:
let g:go_decls_includes = "func,type"
If you just want to show function declarations, change it to:
let g:go_decls_includes = "func"
Understand it
Writing/editing/changing code is usually something we can do only if we first understand what the code is doing. vim-go has several ways to make it easy to understand what your code is all about.
Documentation lookup
Let's start with the basics. Go documentation is very well-written and is highly integrated into the Go AST as well. If you just write some comments, the parser can easily parse it and associate with any node in the AST. So what it means is that we can easily find the documentation in the reverse order. If you have the node from an AST, you can easily read the documentation (if you have it)!
We have a command called
:GoDoc that shows any documentation associated with
the identifier under your cursor. Let us change the content of
main.go to:
package main import "fmt" func main() { fmt.Println("vim-go") fmt.Println(sayHi()) fmt.Println(sayYoo()) } // sayHi() returns the string "hi" func sayHi() string { return "hi" } func sayYoo() string { return "yoo" }
Put your cursor on top of the
Println function just after the
main function
and call
:GoDoc. You'll see that it vim-go automatically opens a scratch
window that shows the documentation for you:.
It shows the import path, the function signature and then finally the doc
comment of the identifier. Initially vim-go was using plain
go doc, but it
has some shortcomings, such as not resolving based on a byte identifier.
go doc is great for terminal usages, but it's hard to integrate into editors.
Fortunately we have a very useful tool called
gogetdoc, which resolves and
retrieves the AST node for the underlying node and outputs the associated doc
comment.
That's why
:GoDoc works for any kind of identifier. If you put your cursor under
sayHi() and call
:GoDoc you'll see that it shows it as well. And if you put
it under
sayYoo() you'll see that it just outputs
no documentation for AST
nodes without doc comments.
As usual with other features, we override the default normal shortcut
K so
that it invokes
:GoDoc instead of
man (or something else). It's really easy
to find the documentation, just hit
K in normal mode!
:GoDoc just shows the documentation for a given identifier. But it's not a
documentation explorer, if you want to explore the documentation there is
third-party plugin that does it:
go-explorer. There is an open bug to
include it into vim-go.
Identifier resolution
Sometimes you want to know what a function is accepting or returning. Or what the identifier under your cursor is. Questions like this are common and we have a command to answer it.
Using the same
main.go file, go over the
Println function and call
:GoInfo. You'll see that the function signature is being printed in the
status line. This is really great to see what it's doing, as you don't have to
jump to the definition and check out what the signature is.
But calling
:GoInfo every time is tedious. We can make some improvements to
call it faster. As always a way of making it faster is to add a shortcut:
autocmd FileType go nmap <Leader>i <Plug>(go-info)
Now you easily call
:GoInfo by just hitting
<leader>i. But there is still
room to improve it. vim-go has a support to automatically show the information
whenever you move your cursor. To enable it add the following to your
.vimrc:
let g:go_auto_type_info = 1
Now whenever you move your cursor onto a valid identifier, you'll see that your
status line is updated automatically. By default it updates every
800ms. This
is a vim setting and can be changed with the
updatetime setting. To change it
to
100ms add the following to your
.vimrc
set updatetime=100
Identifier highlighting
Sometimes we just want to quickly see all matching identifiers. Such as variables, functions, etc.. Suppose you have the following Go code:
package main import "fmt" func main() { fmt.Println("vim-go") err := sayHi() if err != nil { panic(err) } } // sayHi() returns the string "hi" func sayHi() error { fmt.Println("hi") return nil }
If you put your cursor on top of
err and call
:GoSameIds you'll see that
all the
err variables get highlighted. Put your cursor on the
sayHi()
function call, and you'll see that the
sayHi() function identifiers all are
highlighted. To clear them just call
:GoSameIdsClear
This is more useful if we don't have to call it manually every time. vim-go
can automatically highlight matching identifiers. Add the following to your
vimrc:
let g:go_auto_sameids = 1
After restarting vim, you'll see that you don't need to call
:GoSameIds manually anymore. Matching identifier variables are now highlighted
automatically for you.
Dependencies and files
As you know a package can consist of multiple dependencies and files. Even if you have many files inside the directory, only the files that have the package clause correctly are part of a package.
To see the files that make a package you can call the following:
:GoFiles
which will output (my
$GOPATH is set to
~/Code/Go):
['/Users/fatih/Code/go/src/github.com/fatih/vim-go-tutorial/main.go']
If you have other files those will be listed as well. Note that this command is only for listing Go files that are part of the build. Test files will be not listed.
For showing the dependencies of a file you can call
:GoDeps. If you call it
you'll see:
['errors', 'fmt', 'internal/race', 'io', 'math', 'os', 'reflect', 'runtime', 'runtime/internal/atomic', 'runtime/internal/sys', 'strconv', 'sync', 'sync/atomic ', 'syscall', 'time', 'unicode/utf8', 'unsafe']
Guru
The previous feature was using the
guru tool under the hood. So let's talk a
little bit about guru. So what is guru? Guru is an editor integrated tool for
navigating and understanding Go code. There is a user manual that shows all the
features:
Let us use the same examples from that manual to show some of the features we've integrated into vim-go:
package main import ( "fmt" "log" "net/http" ) func main() { h := make(handler) go counter(h) if err := http.ListenAndServe(":8000", h); err != nil { log.Print(err) } } func counter(ch chan<- int) { for n := 0; ; n++ { ch <- n } } type handler chan int func (h handler) ServeHTTP(w http.ResponseWriter, req *http.Request) { w.Header().Set("Content-type", "text/plain") fmt.Fprintf(w, "%s: you are visitor #%d", req.URL, <-h) }
Put your cursor on top of the
handler and call
:GoReferrers. This calls the
referrers mode of
guru, which finds references to the selected identifier,
scanning all necessary packages within the workspace. The result will be a
location list.
One of the modes of
guru is also the
describe mode. It's just like
:GoInfo, but it's a little bit more advanced (it gives us more information).
It shows for example the method set of a type if there is any. It shows the
declarations of a package if selected.
Let's continue with same
main.go file. Put the cursor on top of the
URL
field or
req.URL (inside the
ServeHTTP function). Call
:GoDescribe. You'll
see a location list populated with the following content:
main.go|27 col 48| reference to field URL *net/url.URL /usr/local/go/src/net/http/request.go|91 col 2| defined here main.go|27 col 48| Methods: /usr/local/go/src/net/url/url.go|587 col 15| method (*URL) EscapedPath() string /usr/local/go/src/net/url/url.go|844 col 15| method (*URL) IsAbs() bool /usr/local/go/src/net/url/url.go|851 col 15| method (*URL) Parse(ref string) (*URL, error) /usr/local/go/src/net/url/url.go|897 col 15| method (*URL) Query() Values /usr/local/go/src/net/url/url.go|904 col 15| method (*URL) RequestURI() string /usr/local/go/src/net/url/url.go|865 col 15| method (*URL) ResolveReference(ref *URL) *URL /usr/local/go/src/net/url/url.go|662 col 15| method (*URL) String() string main.go|27 col 48| Fields: /usr/local/go/src/net/url/url.go|310 col 2| Scheme string /usr/local/go/src/net/url/url.go|311 col 2| Opaque string /usr/local/go/src/net/url/url.go|312 col 2| User *Userinfo /usr/local/go/src/net/url/url.go|313 col 2| Host string /usr/local/go/src/net/url/url.go|314 col 2| Path string /usr/local/go/src/net/url/url.go|315 col 2| RawPath string /usr/local/go/src/net/url/url.go|316 col 2| RawQuery string /usr/local/go/src/net/url/url.go|317 col 2| Fragment string
You'll see that we can see the definition of the field, the method set and the
URL struct's fields. This is a very useful command and it's there if you need
it and want to understand the surrounding code. Try and experiment by calling
:GoDescribe on various other identifiers to see what the output is.
One of the most asked questions is how to know the interfaces a type is
implementing. Suppose you have a type and with a method set of several methods.
You want to know which interface it might implement. The mode
implement of
guru just does it and it helps to find the interface a type implements.
Just continue with the same previous
main.go file. Put your cursor on the
handler identifier just after the
main() function. Call
:GoImplements
You'll see a location list populated with the following content:
main.go|23 col 6| chan type handler /usr/local/go/src/net/http/server.go|57 col 6| implements net/http.Handler
The first line is our selected type and the second line will be the interface it implements. Because a type can implement many interfaces it's a location list.
One of the
guru modes that might be helpful is
whicherrs. As you know
errors are just values. So they can be programmed and thus can represent any
type. See what the
guru manual says:
The whicherrs mode reports the set of possible constants, global variables, and concrete types that may appear in a value of type error. This information may be useful when handling errors to ensure all the important cases have been dealt with.
So how do we use it? It's easy. We still use the same
main.go file. Put your
cursor on top of the
err identifier which is returned from
http.ListenAndServe.
Call
:GoWhicherrs and you'll see the following output:
main.go|12 col 6| this error may contain these constants: /usr/local/go/src/syscall/zerrors_darwin_amd64.go|1171 col 2| syscall.EINVAL main.go|12 col 6| this error may contain these dynamic types: /usr/local/go/src/syscall/syscall_unix.go|100 col 6| syscall.Errno /usr/local/go/src/net/net.go|380 col 6| *net.OpError
You'll see that the
err value may be the
syscall.EINVAL constant or it also
might be the dynamic types
syscall.Errno or
*net.OpError. As you see this is
really helpful when implementing custom logic to handle the error differently if
needed. Note that this query needs a guru
scope to be set. We'll going to
cover in a moment what a
scope is and how you can change it dynamically.
Let's continue with the same
main.go file. Go is famous for its concurrency
primitives, such as channels. Tracking how values are sent between channels can
sometimes be hard. To understand it better we have the
peers mode of
guru.
This query shows the set of possible send/receives on the channel operand (send
or receive operation).
Move your cursor to the following expression and select the whole line:
ch <- n
Call
:GoChannelPeers. You'll see a location list window with the following content:
main.go|19 col 6| This channel of type chan<- int may be: main.go|10 col 11| allocated here main.go|19 col 6| sent to, here main.go|27 col 53| received from, here
As you see you can see the allocation of the channel, where it's sending and receiving from. Because this uses pointer analysis, you have to define a scope.
Let us see how function calls and targets are related. This time create the
following files. The content of
main.go should be:
package main import ( "fmt" "github.com/fatih/vim-go-tutorial/example" ) func main() { Hello(example.GopherCon) Hello(example.Kenya) } func Hello(fn func() string) { fmt.Println("Hello " + fn()) }
And the file should be under
example/example.go:
package example func GopherCon() string { return "GopherCon" } func Kenya() string { return "Kenya" }
So jump to the
Hello function inside
main.go and put your cursor on top of
the function call named
fn(). Execute
:GoCallees. This command shows the
possible call targets of the selected function call. As you see it'll show us
the function declarations inside the
example function. Those functions are
the callees, because they were called by the function call named
fn().
Jump back to
main.go again and this time put your cursor on the function
declaration
Hello(). What if we want to see the callers of this function?
Execute
:GoCallers.
You should see the output:
main.go| 10 col 7 static function call from github.com/fatih/vim-go-tutorial.Main main.go| 11 col 7 static function call from github.com/fatih/vim-go-tutorial.Main
Finally there is also the
callstack mode, which shows an arbitrary path from
the root of the call graph to the function containing the selection.
Put your cursor back to the
fn() function call inside the
Hello() function.
Select the function and call
:GoCallstack. The output should be like
(simplified form):
main.go| 15 col 26 Found a call path from root to (...)Hello main.go| 14 col 5 (...)Hello main.go| 10 col 7 (...)main
It starts from line
15, and then to line
14 and then ends at line
10.
This is the graph from the root (which starts from
main()) to the function we
selected (in our case
fn())
For most of the
guru commands you don't need to define any scope. What is a
scope? The following excerpt is straight from the
guru
manual:
Pointer analysis scope: some queries involve pointer analysis, a technique for answering questions of the form “what might this pointer point to?”. It is usually too expensive to run pointer analysis over all the packages in the workspace, so these queries require an additional configuration parameter called the scope, which determines the set of packages to analyze. Set the scope to the application (or set of applications---a client and server, perhaps) on which you are currently working. Pointer analysis is a whole-program analysis, so the only packages in the scope that matter are the main and test packages.
The scope is typically specified as a comma-separated set of packages, or wildcarded subtrees like github.com/my/dir/...; consult the specific documentation for your editor to find out how to set and vary the scope.
vim-go automatically tries to be smart and sets the current packages import
path as the
scope for you. If the command needs a scope, you're mostly
covered. Most of the times this is enough, but for some queries you might to
change the scope setting. To make it easy to change the
scope on the fly with
have a specific setting called
:GoGuruScope
If you call it, it'll return an error:
guru scope is not set. Let us change
it explicitly to the `github.com/fatih/vim-go-tutorial" scope:
:GoGuruScope github.com/fatih/vim-go-tutorial
You should see the message:
guru scope changed to: github.com/fatih/vim-go-tutorial
If you run
:GoGuruScope without any arguments, it'll output the following
current guru scope: github.com/fatih/vim-go-tutorial
To select the whole
GOPATH you can use the
... argument:
:GoGuruScope ...
You can also define multiple packages and also subdirectories. The following
example selects all packages under
github.com and the
golang.org/x/tools
package:
:GoGuruScope github.com/... golang.org/x/tools
You can exclude packages by prepending the
- (negative) sign to a package.
The following example selects all packages under
encoding but not
encoding/xml:
:GoGuruScope encoding/... -encoding/xml
To clear the scope just pass an empty string:
:GoGuruScope ""
If you're working on a project where you have to set the scope always to the
same value and you don't want to call
:GoGuruScope everytime you start Vim,
you can also define a permanent scope by adding a setting to your
vimrc. The
value needs to be a list of string types. Here are some examples from the
commands above:
let g:go_guru_scope = ["github.com/fatih/vim-go-tutorial"] let g:go_guru_scope = ["..."] let g:go_guru_scope = ["github.com/...", "golang.org/x/tools"] let g:go_guru_scope = ["encoding/...", "-encoding/xml"]
Finally,
vim-go tries to auto complete packages for you while using
:GoGuruScope as well. So when you try to write
github.com/fatih/vim-go-tutorial just type
gi and hit
tab, you'll see
it'll expand to
github.com
Another setting that you should be aware are build tags (also called build constraints). For example the following is a build tag you put in your Go source code:
// +build linux darwin
Sometimes there might be custom tags in your source code, such as:
// +build mycustomtag
In this case, guru will fail as the underlying
go/build package will be not
able to build the package. So all
guru related commands will fail (even
:GoDef when it uses
guru). Fortunately
guru has a
-tags flag that
allows us to pass custom tags. To make it easy for
vim-go users we have a
:GoBuildTags
For the example just call the following:
:GoBuildTags mycustomtag
This will pass this tag to
guru and from now on it'll work as expected. And
just like
:GoGuruScope, you can clear it with:
:GoBuildTags ""
And finally if you wish you can make it permanent with the following setting:
let g:go_build_tags = "mycustomtag"
Refactor it
Rename identifiers
Renaming identifiers is one of the most common tasks. But it's also something
that needs to be done carefully in order not to break other packages as well. Also just
using a tool like
sed is sometimes not useful, as you want AST aware
renaming, so it only should rename identifiers that are part of the AST (it
should not rename for example identifiers in other non Go files, say build
scripts)
There is a tool that does renaming for you, which is called
gorename.
vim-go uses the
:GoRename command to use
gorename under the hood. Let us
change
main.go to the following content:
package main import "fmt" type Server struct { name string } func main() { s := Server{name: "Alper"} fmt.Println(s.name) // print the server name } func name() string { return "Zeynep" }
Put your cursor on top of the
name field inside the
Server struct and call
:GoRename bar. You'll see all
name references are renamed to
bar. The
final content would look like:
package main import "fmt" type Server struct { bar string } func main() { s := Server{bar: "Alper"} fmt.Println(s.bar) // print the server name } func name() string { return "Zeynep" }
As you see, only the necessary identifiers are renamed, but the function
name
or the string inside the comment is not renamed. What's even better is that
:GoRename searches all packages under
GOPATH and renames all identifiers
that depend on the identifier. It's a very powerful tool.
Extract function
Let's move to another example. Change your
main.go file to:
package main import "fmt" func main() { msg := "Greetings\nfrom\nTurkey\n" var count int for i := 0; i < len(msg); i++ { if msg[i] == '\n' { count++ } } fmt.Println(count) }
This is a basic example that just counts the newlines in our
msg variable. If
you run it, you'll see that it outputs
3.
Assume we want to reuse the newline counting logic somewhere else. Let us
refactor it. Guru can help us in these situations with the
freevars mode. The
freevars mode shows variables that are referenced but not defined within a
given selection.
Let us select the piece in
visual mode:
var count int for i := 0; i < len(msg); i++ { if msg[i] == '\n' { count++ } }
After selecting it, call
:GoFreevars. It should be in form of
:'<,'>GoFreevars. The result is again a quickfix list and it contains all the
variables that are free variables. In our case it's a single variable and the
result is:
var msg string
So how useful is this? This little piece of information is enough to refactor it into a standalone function. Create a new function with the following content:
func countLines(msg string) int { var count int for i := 0; i < len(msg); i++ { if msg[i] == '\n' { count++ } } return count }
You'll see that the content is our previously selected code. And the input to
the function is the result of
:GoFreevars, the free variables. We only
decided what to return (if any). In our case we return the count. Our
main.go will be in the form of:
package main import "fmt" func main() { msg := "Greetings\nfrom\nTurkey\n" count := countLines(msg) fmt.Println(count) } func countLines(msg string) int { var count int for i := 0; i < len(msg); i++ { if msg[i] == '\n' { count++ } } return count }
That's how you refactor a piece of code.
:GoFreevars can be used also to
understand the complexity of a code. Just run it and see how many variables are
dependent to it.
Generate it
Code generation is a hot topic. Because of the great std libs such as go/ast, go/parser, go/printer, etc.. Go has the advantage of being able to create great generators easily.
First we have the
:GoGenerate command that calls
go generate under the
hood. It just works like
:GoBuild,
:GoTest, etc.. If there are any errors it
also shows them so you can easily fix it.
Method stubs implementing an interface
Interfaces are really great for composition. They make your code easier to deal with. It's also easier for you to create tests as you can mock functions that accept an interface type with a type that implements methods for testing.
vim-go has support for the tool impl.
impl generates method stubs that implement a given interface. Let us change
main.go's content to the following:
package main import "fmt" type T struct{} func main() { fmt.Println("vim-go") }
Put your cursor on top of
T and type
:GoImpl. You'll be prompted to write
an interface. Type
io.ReadWriteCloser and hit enter. You'll see the content
changed to:
package main import "fmt" type T struct{} func (t *T) Read(p []byte) (n int, err error) { panic("not implemented") } func (t *T) Write(p []byte) (n int, err error) { panic("not implemented") } func (t *T) Close() error { panic("not implemented") } func main() { fmt.Println("vim-go") }
That's really neat as you see. You can also just type
:GoImpl io.ReadWriteCloser when you're on top of a type and it'll do the same.
But you don't need to put your cursor on top of a type. You can invoke it from everywhere. For example execute this:
:GoImpl b *B fmt.Stringer
You'll see the following will be created:
func (b *B) String() string { panic("not implemented") }
As you see this is very helpful, especially if you have a large interface with
large method set. You can easily generate it, and because it uses
panic()
this compiles without any problem. Just fill the necessary parts and you're
done.
Share it
vim-go has also features to easily share your code with other via. As you know the Go playground is a perfect place to
share small snippets, exercises and/or tips & tricks. There are times you are
playing with an idea and want to share with others. You copy the code and visit
play.golang.org and then paste it.
vim-go makes all these better with the
:GoPlay command.
First let us change our
main.go file with the following simple code:
package main import "fmt" func main() { fmt.Println("vim-go") }
Now call
:GoPlay and hit enter. You'll see that
vim-go automatically
uploaded your source code
:GoPlay and also opened a browser tab that shows
it. But there is more. The snippet link is automatically copied to your
clipboard as well. Just paste the link to somewhere. You'll see the link is the
same as what's on play.golang.org.
:GoPlay also accepts a range. You can select a piece of code and call
:GoPlay. It'll only upload the selected part.
There are two settings to tweak the behavior of
:GoPlay. If you don't like
that
vim-go opens a browser tab for you, you can disable it with:
let g:go_play_open_browser = 0
Secondly, if your browser is misdetected (we're using
open or
xdg-open) you
can manually set the browser via:
let g:go_play_browser_command = "chrome"
HTML template
By default syntax highlighting for Go HTML template is enabled for
.tmpl files.
If you want to enable it for another filetype add the following setting to your
.vimrc:
au BufRead,BufNewFile *.gohtml set filetype=gohtmltmpl
Donation
This tutorial was created by me in my spare time. If you like it and would like to donate, you now you can be a fully supporter by being a patron!
By being a patron, you are enabling vim-go to grow and mature, helping me to invest in bug fixes, new documentation, and improving both current and future features. It's completely optional and is just a direct way to support vim-go's ongoing development. Thanks!
TODO Commands
- :GoPath
- :AsmFmt
|
https://opensourcelibs.com/lib/vim-go-tutorial
|
CC-MAIN-2022-40
|
refinedweb
| 10,867
| 66.44
|
Here is my code:
import java.util.Arrays;
import javax.swing.*;
public class childNameAge {
public static void main(String[] args) {
int numberOfChildren;
Here is my code:
import java.util.Arrays;
import javax.swing.*;
public class childNameAge {
public static void main(String[] args) {
int numberOfChildren;
Here are some more of the code. I thought that was the problem, that the last number is the number that is going to display. So I think, that I have to store all seven numbers in a string? or char?...
I want my program to show seven random numbers, but it does only display one. I got this in a class:
if (command.equals("Random numbers")) {
int tal = modell.getTal();...
I do not really know what I should put in the model-class. Here is an example of an program's model-class:
public class Model {
private String text;
public String getRad(){
text="Nu var det...
I have now done some seperating and this is the result:
The View class:
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
import java.io.*;
I want this code to be in the MVC-model. MVC stands for Model-View-Control. You should have the graphics for the program in the view-class, and the code for doing operations in the control-class. The...
I want to have my program in three classes according to the MCV-model. The problem is that I cannot manage to do it right, cause it won't start and other things. This is the code when it is NOT in...
How do I create the buttons with an array? I am going to do a Tic-Tac-Toe game in SWT. I guess this way is wrong?:
import org.eclipse.swt.layout.*;
import org.eclipse.swt.widgets.*;
import...
|
http://www.javaprogrammingforums.com/search.php?s=b2d8ec8e7541332a4b5088ce8f7ff366&searchid=203480
|
CC-MAIN-2016-30
|
refinedweb
| 297
| 76.32
|
Combine Polymorphism and Web Services
Happy New Year!
You may be aware of polymorphism, and you probably know something about Web services by now too. But, what about polymorphism across Web services? This article reviews polymorphism, demonstrates XML Web services, and most importantly, shows you how to combine polymorphism and Web services.
Polymorphism
Those very experienced with object-oriented programming (OOP) are comfortable with polymorphism, but not everyone is experienced with OOP. If you are in the former group, skip ahead to the section "XML Web Services." If you are in the latter group, keep reading.
Before object-oriented languages, if one wanted to print different data types, one would write methods such as PrintInteger(int i), PrintString(string s), and PrintFloat(float f). That is, one had to differentiate the behavior and data type by name because a pre-OO language such as C wouldn't permit one to write methods with the same name, even if their argument types were different.
The advent of C++ permitted method overloading—among other things. Thus, one could write Print(int i), Print(string s), and Print(float f), and based on the data type, the code would call the correct Print method. Method overloading is simply supported by a name-mangling scheme where the compiler creates unique names internally by replacing the names with bits and pieces from the name and argument list. So, while you code Print(1), the compiler may internally rename the Print method with a prefix derived from the parameter type so that Print(1) becomes i_Print(1).
Method overloading is a form of polymorphism. Name mangling is a mechanism that supports method overloading. More commonly, polymorphism is associated with inheritance. Inheritance is where a new class (called the subclass) gets part of its definition from the inherited class (called the superclass) and adds new information. If you overload methods in the same class, the data types must be different. If you overload methods in an inheritance relationship, methods in the subclass and superclass may have the same signature, and the mangler produces the same mangled name.
For instance, suppose a superclass defines a Print(int i) method and a subclass it inherits from also defines a Print(int i) method. Use polymorphism to call Child.Print(int) when you have an instance of Child but Parent.Print(int) when you have an instance of Parent. This is inheritance polymorphism: the same name and signature but different classes.
Inheritance polymorphism uses an additional mechanism in conjunction with name mangling. The compiler adds methods to an array of methods called a virtual methods table (VMT). Each class has an index into the VMT, so when Print(int) is called, the compiler is routed to the VMT for the Print methods and the class's internal index (its Delta or ?) is used to index to a slot offset from the beginning of the VMT plus the Delta. This way, the correct implementation of the method is called. The compiler manages all VMT indexing and class Deltas.
In a nutshell, polymorphism enables you to define many method forms with very similar names, which yields nice name semantics. OOP compilers, in turn, will figure out which method you mean based on the class of the caller. One of polymorphism's best benefits is that you no longer have to write this kind of code (using pseudo code here):
If type of arg is integer then PrintInteger(arg) Else if type of arg is string then PrintString(arg) Else if type of args is float then PrintFloat(arg)
You simply write:
Print(arg)
The compiler and its polymorphic mechanisms figure out the correct version of the print method to call by generating a methods-indexing scheme that is for practical purposes equivalent to the conditional code above.
For an authoritative description of the inner workings of OOP idioms from the language's perspective, pick up Bjarne Stroustrup's The C++ Programming Language from Addison Wesley. Many OOP languages use mechanisms very similar to the C++ implementation.
XML Web Services
If you are pretty comfortable with the mechanics of XML Web services and the motivation for using them, skip to the next section: "Supporting Polymorphism with Web Services."
In the past ten years or so, distributed applications have become more prevalent. As is common with many kinds of engineering, software goes through a period of invention and then standardization. XML Web services are a standardization based on the open protocols of HTTP and XML. XML Web services do not belong solely to Microsoft, but Microsoft does offer an implementation of XML Web services based on the .NET Framework and its attributes.
The basic idea is that you write code to add the WebServiceAttribute to classes representing the Web service and the WebMethodAttribute to the methods in those classes representing Web methods—or methods you want to permit consumers to call. Microsoft technology uses reflection and code generation to generate proxy types and code that make calling these distributed services and methods easy. In addition to generating proxy code, the .NET Framework and Visual Studio include a wizard that stubs out a Web service and Web method for you.
To create a Web service, run Visual Studio and select File|New|Project. Pick the ASP.NET Web service applet from the Templates list in the New Project dialog (see Figure 1).
Figure 1: Create your first Web service with point-and-click ease.
To run the sample Web service and Web method, uncomment the HelloWorld sample method that the project template wizard created and run the solution. For more information on producing, consuming, and using XML Web service tools in general, see previous VB Today columns, particularly Building Distributed Apps? Use XML Web Services, Not Remoting (Mostly) from December 2004.
Supporting Polymorphism for XML-Generated Proxy Classes
Now, turn your focus to the purpose of this article.
When you define parameters and return arguments for Web methods, a utility called Web Services Discovery Language (or whizdahl) invokes another tool called SPROXY. SPROXY uses reflection and the CodeDOM to figure out a definition for the types declared in your Web methods and then generates proxy classes for compound types. For example, if you have a class named Person, SPROXY will generate a Person class when a consumer uses the Web service. The benefit is that Web service producers don't have to ship their proprietary code to consumers for consumers to use their code. SPROXY does the work for them. Using proxy code permits businesses to protect proprietary business rules while still selling access to the overlying features of those rules.
Here are examples of a Person class (see Listing 1) and a generated Person proxy (see Listing 2).
Listing 1. The Person Class Behind the Web Service
public class Person { private string name; public Person(){} public Person(string name) { this.name = name; } public string Name { get{ return name; } set{ name = value; } } public string GetUpperName() { return name.ToUpper(); } public string UpperName { get{ return GetUpperName(); } set{} } }
Listing 2. The Proxy Version of the Person Class as Generated by SPROXY
public class Person { /// <remarks/> public string<<
|
https://www.developer.com/design/article.php/3459001/Combine-Polymorphism-and-Web-Services.htm
|
CC-MAIN-2018-26
|
refinedweb
| 1,189
| 53.31
|
.
two key technologies, LUIS and Bot Framework to develop and create bots the easy way. Without being an AI-expert, you can develop and deploy conversational bots which interact with your users and process their needs.)
Let's get started!
Part 1: LUIS
LUIS stands for Language Understanding Intelligent Service. It is part of the Microsoft AI Platform and allows us to identify user's intentions and key elements from their messages. In order to create a smart language processing model, we need to train it before by feeding it with examples (utterances)
Step 1. Create a new LUIS application.
Step 2. Add the geographyV2 prebuilt entity to the project. An entity represents a part of the text that will be identified (for instance, a city)
Step 3. Create a new Intent: GetCityWeather. An intent represents what our users are asking for, such as booking a hotel room, looking for a product, or requesting the weather conditions for a specific city.
Step 4. Add at least 5 sample utterances for this intent. Note that the cities are automatically detected as geographyV2 entities. The model gets smarter by providing sample sentences that the users might say for a specific intent.
Step 5. Click on the Train button in order to create the LUIS model. When the process finishes, test it with a new request and see if it's working (it should detect the intent and entity):
Step 6. Publish the model. Select the Production slot and then go to Azure Resources. Copy the URL from Example query box, since we will use it later for our queries and requests from Azure Functions in Part 3.
Part 2: OpenWeatherMap
OpenWeatherMap is a service that we can use to obtain weather information about a specific city.
Step 1. Sign up to the service.
Step 2. Click on API and then Subscribe to the Current weather data API.
Step 3. Select the Free tier and obtain an API key
Step 4. Copy the API key, we will use it in the next part.
Part 3: Azure Functions
Azure Functions is a serverless compute service that enables you to run a script or piece of code on-demand or in response to an event without having to explicitly provision or manage infrastructure.
Step 1. From the Azure Portal, create a new Function App. Its name is unique, so serverlesschatbot won't work for you, use another one :-)
Step 2. Once the resource is created, add a new HTTP trigger called receive-message. Then click on View files and add a function.proj file
Step 3. This file is used to include a Nuget package into our project. We are adding the Twilio extension so our code can interact with a WhatsApp number later.
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>netstandard2.0</TargetFramework> </PropertyGroup> <ItemGroup> <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.Twilio" Version="3.0.0" /> </ItemGroup> </Project>
Step 4. Next, we have the code for run.csx, which is the main part of an Azure Function. In this case, we are using both the URL from Part 1 (Step 6) and the API key from Part 2 (Step 4), so replace them in the code.
In the first section of the code we include several classes needed to deserialize the LUIS and OpenWeatherMap JSON responses after calling their services. Then, the code in the Run method parses the payload generated after a message to a WhatsApp number is sent. The text part is extracted and we have the evaluateMessage method, in which we send the text to the LUIS published model, which process the message and extracts the city part. If successful, a request to OpenWeatherMap is made in order to obtain the weather in a particular city. Finally, this information is sent as a response to the user.
#r "System.Runtime" #r "Newtonsoft.Json" using System.Net; using System.Text; using System.Linq; using System.Threading.Tasks; using Newtonsoft.Json; using Twilio.TwiML; // LUIS classes public class LuisModel { public string query { get; set; } public TopScoringIntent topScoringIntent { get; set; } public List<Intent> intents { get; set; } public List<Entity> entities { get; set; } } public class TopScoringIntent { public string intent { get; set; } public double score { get; set; } } public class Intent { public string intent { get; set; } public double score { get; set; } } public class Entity { public string entity { get; set; } public string type { get; set; } public int startIndex { get; set; } public int endIndex { get; set; } } // OpenWeatherMap classes public class WeatherModel {; } } public class Weather { public int id { get; set; } public string main { get; set; } public string description { get; set; } public string icon { get; set; } } public class Coord { public double lon { get; set; } public double lat { get; set; } } public class Main { public double temp { get; set; } public double pressure { get; set; } public double humidity { get; set; } public double temp_min { get; set; } public double temp_max { get; set; } } public class Wind { public double speed { get; set; } } public class Clouds { public double all { get; set; } } public class Sys { public int type { get; set; } public int id { get; set; } public double message { get; set; } public string country { get; set; } public long sunrise { get; set; } public long sunset { get; set; } } // Main code text = formValues["Body"].ToString(); var message = await evaluateMessage(text); var response = new MessagingResponse().Message(message); var twiml = response.ToString(); twiml = twiml.Replace("utf-16", "utf-8"); return new HttpResponseMessage { Content = new StringContent(twiml, Encoding.UTF8, "application/xml") }; } private static readonly HttpClient httpClient = new HttpClient(); private static async Task<string> evaluateMessage(string text) { try { var luisURL = "Your-LUIS-URL-From-Step6-Part1"; var luisResult = await httpClient.GetStringAsync($"{luisURL}{text}); var luisModel = JsonConvert.DeserializeObject<LuisModel>(luisResult); if (luisModel.topScoringIntent.intent == "GetCityWeather") { var entity = luisModel.entities.FirstOrDefault(); if (entity != null) { if (entity.type == "builtin.geographyV2.city") { var city = entity.entity; var apiKey = "Your-OpenWeatherMapKey-From-Step4-Part2"; var weatherURL = $"{apiKey}&q={city}"; var weatherResult = await httpClient.GetStringAsync(weatherURL); var weatherModel = JsonConvert.DeserializeObject<WeatherModel>(weatherResult); weatherModel.main.temp -= 273.15; var weather = $"{weatherModel.weather.First().main} ({weatherModel.main.temp.ToString("N2")} °C)"; return $"Weather of {city} is: {weather}"; } } } else return "Sorry, I could not understand you!"; } catch(Exception ex) { } return "Sorry, there was an error!"; }
Step 5. Copy the Function URL, we will use it in the final part.
Final Part: Twilio
In order to communicate with a WhatsApp number, we can use the Twilio API.
Step 1. Create a free Twilio account
Step 2. Access the Programmable SMS Dashboard and then select WhatsApp Beta followed by a click on Get started.
Step 3. Activate the Twilio Sandbox for WhatsApp.
Step 4. Set up the testing sandbox by sending the specific WhatsApp message from your device to the indicated number
Step 5. After joining the conversation, click on Sandbox again to access the Configuration. Replace the URL under "When a message comes in" with your Azure Function URL from Step 5 in the previous part.
Step 6. That's it! Let's test our work!
Success! Yay!
It certainly takes some time to set up everything as several technologies are involved. However, you can now imagine the possibilities. How would your users feel after you tell them that they can interact with your app through WhatsApp? Or that they can send their questions to a specific number which will handle all of them? You can replace LUIS by another technology, such as QnA Maker to process users' questions. It is even possible to send images and get them analyzed by Cognitive Services for instance!
The sky is the limit! :-)
And everything, of course, is managed under a serverless experience thanks to Azure Functions.
Thank you for your time and hopefully this post was useful for you (let me know your thoughts in the comment section :-D). If you want to learn more about Azure, Xamarin, Artificial Intelligence and more, visit my blog and YouTube channel, where I usually have fun sharing my knowledge and experiences.
Happy coding!
Luis
PS: I would also like to thank the [Azure Advocates () for the #ServerlessSeptember initiative! It's awesome to learn something new everyday from the community and experts.
References used for this publication:
Twilio
Discussion (0)
|
https://practicaldev-herokuapp-com.global.ssl.fastly.net/icebeam7/developing-a-serverless-whatsapp-chatbot-4o72
|
CC-MAIN-2021-49
|
refinedweb
| 1,346
| 57.16
|
You are now following this question
- You will see updates in your followed content feed.
- You may receive emails, depending on your communication preferences.
Can I tell Matlab not to use contiguous memory?
24 views (last 30 days)
Show older comments
Commented: Christopher Thomas on 6 Apr 2021
Matlab eats enormous amounts of memory and rarely or never releases it. As far as I can tell from past questions about this, it's because Matlab stores variables in contiguous physical memory blocks. As the heap becomes fragmented, Matlab asks for new memory when it wants to allocate a variable larger than the largest available contiguous fragment. MathWorks' recommended solution is "exit Matlab and restart". The only defragmentation option at present is "pack", which saves everything to disk, releases everything, and reloads variables from disk. This a) takes a while and b) only saves variables that are 2 GB or smaller.
Is there any way to tell Matlab (via startup switch or other option) to allow variables to be stored in fragmented physical memory?
The only reasons I can think of for asking for contiguous physical memory would be to reduce the number of TLB misses (for code running on CPUs) or to allow hardware acceleration from peripherals that work using DMA (such as GPUs) and that don't support mapping fragments. Like most of the other people who were complaining about this issue, I'd rather have TLB misses and give up GPU acceleration for my tasks and not run out of memory. I understand that for large problems on large machines, these features are important, but it's very strange that there's no way to turn it off.
(Even for our big machines, RAM is more scarce than CPU power or time, so "throw more RAM in the machine" is not viable past the first quarter-terabyte or so.)
Edit: Since there is apparently confusion about what I'm asking:
- Matlab certainly stores variables in contiguous virtual memory. As far as user-mode code is concerned, an array is stored as an unbroken block.
- Normal user-mode memory allocation does not guarantee contiguous physical memory. Pages that are mapped to contiguous virtual addresses may be scattered anywhere in RAM (or on disk).
- Previous forum threads about Matlab's memory usage got responses stating that Matlab does ask for contiguous physical memory, requiring variables to be stored as unbroken blocks in physical RAM (pages that are adjacent in virtual address space also being adjacent in physical address space).
- The claim in those past threads was that Matlab's requirement for contiguous physical memory was responsible for its enormous memory use under certain conditions.
- If that is indeed the case, I wanted to know if there was a way to turn that allocation requirement off.
As of 02 April 2021, I've gotten conflicting responses about whether Matlab does this at all, and have been told that if it does do it there's no way to turn it off and/or that turning it off would do horrible things to performance. I am no longer sure that these responses were to my actual question; hence the clarification.
Edit: As of 06 April 2021, consensus appears to be that Matlab does not ask for contiguous physical memory, making this question moot.
8 Comments
Do you know if any of MATLAB compettitors (e.g., R, python, octave, ???) can handle non contiguous memory for vector/array?
Short anwer: I don't know offhand.
It's even possible that Matlab itself isn't asking for contiguous physical memory (which would make this entire thread moot).
I'd seen previous forum responses indicating that it was, but I'm starting to wonder if those were unreliable. I'd have to run Matlab, set up an appropriate memory allocation test, and then manually check the page tables to be certain of that (on my OS of choice that's doable but fiddly, so I'd be spending a day on it; your OS experiences may vary).
I'd looked into Octave, but it has enough missing elements that it would not be a useful replacement for Matlab for what our lab is using it for (this will vary by user; it may be fine for your purposes). It's using decent back-end libraries but by default those are single-threaded. You can compile it to link against multi-threaded back-end libraries but forum reports suggest that doing so is a pain. That also won't help you enough on a highly-parallel machine; for that you'd need support for Matlab's "parfor" and similar functions. This does not seem to be in Octave [at] present (they support fork(), but the entire point of using Octave at all would be to be Matlab-compatible). Matlab also has a just-in-time compiler that makes interpreted code run at near-native speed, and if I understand correctly Matlab will vectorize loops if dependencies allow it. Last I checked Octave doesn't do that, so the only operations that would be fast would be operations handled by the back-end libraries. As for whether Octave asks for contiguous pages/allows fragmentation or not, I'd have to dig into the source code to find out. I'm not planning to do so any time soon.
The other option that comes up in the forum is writing in Python using NumPy and SciPy. These may work, but would again not have automatic vectorization and if I'm reading the Python documentation correctly [it] wouldn't be compilable either. You would also be responsible for your own multi-threading. For our lab, we'd have the added hassle of having to rewrite all of our existing scripts (an enormous effort).
Long story short, Matlab has invested enough time into their tools that - for our lab's purposes - they have a clear advantage over the free alternatives. Whether that's true for you depends on exactly what you're doing.
Disclaim: My expertise of memory management is limited. But I have a decent knowledge with MATLAB provided I'm not working for TMW thus some of the thing I expose here might not be accurate.
But to me when user asks to allocate a big array, MATLAB simply calls a mxMalloc in their API which is no more than the C single malloc() with the size of the array, presumably close to OS HEAP allocation with contiguous address, fill this memory block by 0s, and then it might use some tracking management system on top for garbage collecting purpose.
It seems the malloc() used by MATLAB is entirely taken over by OS kernel control as with 99% apps, and they don't do any anything OS specific or customized. The swap seems to me is handled by OS, not by MATLAB where the physical RAM requested is not available.
This method has not changed by TMW since many years, and a lot of library, Blas, Lapack, stock MEX functions, user MEX functions have be built based on this assumption for such long time that there is 0 chance that can be changed to ensure obvious backward compatibility.
I think the contiguous adressing makes the processing really faster. If you claim that only contiguous physical memory can be befenit for speed, and contiguous in virtual address does not matter, then we clearly have different view, and certainly one of use is wrong.
|t's even possible that Matlab itself isn't asking for contiguous physical memory (which would make this entire thread moot).
I'd seen previous forum responses indicating that it was, but I'm starting to wonder if those were unreliable.
Or if they were perhaps talking about old MATLAB releases. Strategies for 32 bit MATLAB were potentially different.
Joss Knight on 4 Apr 2021
But I don't even know how to ask for continuous physical memory, in the sense that the asker has described. I'll admit there's a lot I don't know, but I thought I would have known that. You can ask for pinned memory, which can't be swapped, but MATLAB definitely does use swap.
Allocating contiguous physical memory:
My searches suggest:
Linux: mmap() with MAP_HUGETLB . Requires that the kernel be built with special flags, and a special filesystem be configured -- in other words, not something that programs such as MATLAB can insist on. The idea seems to be that TLB (Translation Lookaside Buffers) are in short supply, so you might want to allocate a large chunk of physical memory for a single TLB and then manage the memory usage yourself.
Windows: AllocateUserPhysicalPages looks like a possibility and there might be others.
MacOS: ??? I have not managed to find any user-mode resources yet, only kernel level.
Bruno Luong on 4 Apr 2021
So after many elaboration, the question now becomes "can I tell MATALB to use contiguous (physical) memory?".
If Matlab is not presently asking for contiguous physical memory, I'm not particularly interested in getting it to do so; that would be a topic best moved to its own forum post.
I took a look at Linux's kernel functions for memory management last week (as the last time I had to manage physical memory was quite a few years ago). Long story short, it could be done but would be ugly. The situation is probably similar for other OSs. Further details are beyond the scope of this thread.
Accepted Answer
Consensus appears to be that Matlab does not ask for contiguous physical memory, making this question moot.
More Answers (4)
Walter Roberson on 27 Mar 2021
You are mistaken.
>> clearvars
>> pack
>> foo = ones(1,2^35,'uint8');
>> clearvars
I allocated 32 gigabytes of memory on my 32 gigabyte Mac, it took up physical memory and virtual memory, and when I cleared the variable, MATLAB returned the memory to the operating system.
There has been proof posted in the past (but it might be difficult to locate in the mass of postings) that MATLAB returns physical memory for MS Windows.
I do not have information about Linux memory use at the moment.
MATLAB has two memory pools: the small object pool and the large object pool. I do not recall the upper limit on the small object pool at the moment; I think it is 512 bytes. Scalars get recycled a lot in MATLAB.
Historically, it was at least documented (possibly in blogs) that MATLAB did keep hold of all memory it allocated, and so could end up with fragmented memory. But I have never seen any evidence that that was a physical memory effect: you get exactly the same problem if you ask the operating system for memory and it pulls together a bunch of different physical banks and provides you with the physical memory bundled up as consecutive virtual addresses.
At some point, evidence started accumulating that at least on Windows, at least for larger objects, MATLAB was using per-object virtual memory, and returning the entire object when it was done with it, instead of keeping it in a pool. I have not seen anything from Mathworks describing the circumstances under which objects are returned directly to the operating system instead of being kept for the memory pools.
Side note: I have proven that allocation of zeros is treated differently in MATLAB. I have been able to allocate large arrays, and then when I change a single element of the array, been told that the array is too large.
12 Comments
I'm afraid Matlab does, in fact, use swap (under Linux; neither my workstation nor our compute server is a Mac). A minimal test script that shows this is attached.
It will quite happily fill up as much swap as I let it (I added a bail-out condition so that it exits gracefully rather than running all of it out).
Walter Roberson on 30 Mar 2021
Did I say that MATLAB does not use swap? Did I use the word "swap" anywhere in my Answer?
What I said is that you are mistaken. In particular, you started your posting by saying,
"Matlab eats enormous amounts of memory and rarely or never releases it."
and I demonstrated that (at least on Mac) that it releases large objects promptly. I have seen posts in the past that show the same thing for Windows. I have not happened to see any relevant posts about the Linux memory handling.
"As far as I can tell from past questions about this, it's because Matlab stores variables in contiguous physical memory blocks."
I just did some process tracing to be sure... it is not impossible that I missed something, but as far as I could tell, MATLAB made no attempt to allocate physical memory, only virtual memory. Do you have some process trace logs showing MATLAB requesting memory that had to be physically contiguous (rather than memory that was given to it with continguous virtual addresses, with the physical memory being allocated in however many fragments the operating system felt like) ?
I just checked my MATLAB installation, and I cannot see any setuid or seteuid in the installation, but user-mode processes cannot (specifically) allocate contiguous physical memory
or to allow hardware acceleration from peripherals that work using DMA (such as GPUs) and that don't support mapping fragments.
MacOS supports mapping contiguous hardware addresses (such as for PCI) to non-contiguous physical addresses.
The other option is to move your data to the GPU's own memory, but my understanding is that that isn't usually done (GPU memory is instead used as a cache for data fetched from main memory).
MATLAB GPU only supports NVIDIA CUDA devices at the moment. NVIDIA's programming model does permit "page locked host memory" (more commonly known as "pinned" memory) to share I/O space with CUDA devices; the details are discussed at
The important part of the discussion there is that the sharing techniques are expected to be limited, a scarce resource. It is clear from the discussion of the different kinds of memory that for CUDA devices, GPU memory is not only used as a "cache" for data from main memory: arranging data properly within the GPU is considered important for performance
64 bit processes use a Unified memory, described at . It is designed to make memory access more efficient between host and GPU. It I understand my skimming properly, it does not require continguous physical pages.
Unified Memory has two basic requirements:
- a GPU with SM architecture 3.0 or higher (Kepler class or newer)
- a 64-bit host application and non-embedded operating system (Linux or Windows)
GPUs with SM architecture 6.x or higher (Pascal class or newer) provide additional Unified Memory features such as on-demand page migration and GPU memory oversubscription that are outlined throughout this document. Note that currently these features are only supported on Linux operating systems. Applications running on Windows (whether in TCC or WDDM mode) will use the basic Unified Memory model as on pre-6.x architectures even when they are running on hardware with compute capability 6.x or higher.
MATLAB's support for cc3.0 was removed as of R2021a, with support for cc3.5 and cc3.7 due to be removed in the next release. I believe we can deduce from that that MATLAB is not using the unified memory interface yet (unless I am misunderstanding the charts and it has been supporting it since R2018a), but perhaps it is on the way. But notice the part about how the advanced unified access is not available for Windows yet.
The document does not mention Mac because Mac is no longer supported by NVIDIA :( The CUDA drivers for Mac did not get further than Kepler.
My point ("and I do have one") is that:
- Shared memory is only one of the ways to communicate with NVIDIA
- NVIDIA never required contiguous physical memory, only that the memory is "host-locked" (pinned) -- in other words, memory that was being prevented from being swapped away
- The unified memory model that is likely coming in (but I don't think is in place yet) will largely remove the need for the pinned memory
- No, the on-device memory is not just acting like a cache
This was intended to be a response to Jan's post down the thread, which claimed that Matlab did not use swap; apologies for the confusion.
I agree that Matlab can release memory; it did so with the test program that was attached to my response. I suspect that the situation where it doesn't is when it allocates an object at the end of the heap and objects in the middle of the heap are released. I would have to write another test program to check the conditions for this to happen.
Regarding "allocating physical memory" vs "allocating virtual memory", that would be my point. Others in this thread have claimed that Matlab does not use virtual addressing, and so must allocate pages that are contiguous in physical memory. This is demonstrably mistaken; thank you for providing your own demonstration of that.
I am unclear as to what problem was originally being encountered?
For everything except possibly some device driver work, or possibly interface to GPU, MATLAB uses malloc() to allocate additional memory. malloc() is OS and library dependent as to exactly how it works, and we do not know at the moment which malloc() is being linked against (but probably the standard one rather than a specialized one.)
On Windows, MacOS, and Linux, if malloc() has to go to the operating system for more space, then the operating system allocates multiple physical memory pages and gathers them into one virtual space and returns the address of the virtual space.
On Windows, MacOS, and Linux, it is not certain that free() will necessarily return the allocated memory to the operating system, or the allocated memory might only be returned under some conditions. Sometimes operating systems provide special forms of allocating and releasing memory that make it easier for the operating system to reclaim the memory; there is at present no evidence that MATLAB is using those special forms.
Does memory released by MATLAB get returned to the operating system? Tests on Windows and MacOS suggest that Yes, large enough allocations get returned to the operating system. But that is not necessily the case for all allocations. Hypothetically there might be a bound below which instead of getting returned to the operating system, free memory is getting cached for reuse. That bound might be operating system dependent, or might depend upon the malloc() getting used.
At some point in the past, I found explicit documentation that MATLAB keeps two pools, one for small fixed-sized blocks, and one for larger objects; that when a large block was released, it was returned to the MATLAB-managed pool for potential re-allocation. However, I am having trouble locating that documentation now... and things might have changed since then.
Considering that [IIRC] MATLAB can crash if you malloc() yourself and put the address in a data-pointer field of an object that MATLAB can later release, it seems likely to me that MATLAB does have its own memory manager that can get confused when asked to release something it did not allocate. If MATLAB did not do any of its own memory management, then it would just free() and there would not be any problem.
So... hypothetically, the situation might be:
- very small blocks such a descriptors of variables get allocated. They have available room for a fairly small number of memory elements, so scalars and very small vectors or arrays are written in directly instead of needing a separate memory block. These small blocks get actively managed by MATLAB; it uses them a lot and it makes sense to keep a pool of them instead of malloc()'ing each of them all the time
- mid-sized blocks get allocated out of a MATLAB-managed pool, and get returned to the pool if the pool size is below a high-water mark, and otherwise released to the operating system
- (uncertain) large-sized blocks get allocated, possibly not within any pool, and get returned to the operating system when done
That last hypothetical special handling is uncertain. Much the same behaviour could happen if the managed pool had an upper size limit but all non-small-block variables went through the managed pool: returning the memory for a large enough block would exceed the upper bound, so it would just naturally trigger release back to the operating system, with no special handling needed.
Can such an arrangement lead to the kind of problems that pack() is intended to deal with?
- Yes, the pool of mid-sized blocks can get fragmented
- Even if everything other than the small fixed-sized blocks is handled by the OS instead of managed by MATLAB, virtual address space can get fragmented
MATLAB does not, as far as I know, offer any controls over where in virtual address space that device drivers or DLLs get mapped.
"Considering that [IIRC] MATLAB can crash if you malloc() yourself and put the address in a data-pointer field of an object that MATLAB can later release,"
That was the case. But now (R2021a) MATLAB crashes instantaneously at the returns statement and NOT latency until the array is cleared.
**************************************************************************
* mex -R2018a test_AssignDataUsingMalloc.c
*************************************************************************/
#include "mex.h"
#include "matrix.h"
#define OUT plhs[0]
void mexFunction(int nlhs, mxArray *plhs[],
int nrhs, const mxArray *prhs[]) {
mxDouble *p;
OUT = mxCreateDoubleScalar(0);
mxFree(mxGetDoubles(OUT));
p = mxMalloc(8);
// p = malloc(8); <= do this will make MATLAB crash when this mexfile is called at the "return" statement
*p = 1234;
mxSetDoubles(OUT, p);
return;
}
My impression is MATLAB internal variable data management has changed the pradigm lately, one can no longer speak about "copy-on-write", or at least not in the sense that was understood few year back.
There is a more sophisticated mechanism that handle data and possibly even MATLAB can even by pass a universal mxArray data structure though JIT in EE. Therefore command such as format debug can returns meaningless output.
In my test MEX code above, it is possibly that the OUT and its data pointer are cleared and free during the return statement and resulting MATLAB crash.
All that is evidently hightly speculative.
If I recall correctly, there are now cases in which portions of an array are passed instead of creating a new array containing just the section, but I did not happen to take note of the circumstances under which that can happen.
Does the same problem occur if you have a notably larger array? The scalar case can be held inside a small block, so the rules might be different for such a small size.
I guess byt "the portion" you mean inplace change, that happens if the array is on the RHS.
In my test code if output is not scalar and using malloc() then it crashes when the variable is cleared.
MATLAB definitively treats scalar data differently.
Attach is the mex file for you to test. The first argument is the length of the output; if the second argument is provided and > 0, it uses malloc (crash in purpose) instead of mxMalloc.
Not inplace change, no: there are places now where if you index an array, then instead of making a copy of the desired section, that MATLAB instead creates a new header that points "inside" the existing data.
In my test code if output is not scalar and using malloc() then it crashes when the variable is cleared.
My memory is claiming that roughly 10 doubles fit inside a small block, and that thus it might not be strictly scalars that the special behaviour is for. Unfortunately I do not recall where I found the information... it might have been in a header file or inside some random .m file.
"Not inplace change, no: there are places now where if you index an array, then instead of making a copy of the desired section, that MATLAB instead creates a new header that points "inside" the existing data."
I think we speak about the same thing: I call this "inplace" in the sense that data (RHS) is inplace.
II have submitted such package in the pass on FEX, it works with some older versions then it is broken since MATLAB probihits users for doing that and constantly they change their data management since. I stopped trying to follow them, and I must admit that I can't even follow them for lack of published information that I need to make such pakage work reliably.
But now it is integrated in the engine, such package is no longer relevant.
Regarding the original problem being encountered, the issue was that several of the people at our lab were finding that Matlab would ask for far more memory than needed for the variables we thought we were storing, and that this was causing problems with our compute server (it's not handling hitting swap as gracefully as it should; that's its own problem).
I spent a bit of time forum-searching to see why this sort of thing happened and what to do about it, and forum consensus seemed to be what I presented in my original post (that Matlab was asking for contiguous physical memory and having serious problems if the heap became fragmented as a result). If that was the case, there seemed to be a straightforward solution, which I asked about.
If those original forum posts were in error, then I'm back to square one figuring out the conditions under which this occurs and how to prevent it.
Joss Knight on 27 Mar 2021
You might want to ask yourself why you need so many variables in your workspace at once and whether you couldn't make better use of the file system and of functions and classes to tidy up temporaries. If you actively need your variables, it's probably because you're using them, which means they're going to need to be stored in contiguous address space for any algorithm to operate on them efficiently. If it's data you're processing sequentially, consider storing as files and using one of MATLAB's file iterators (datastore) or a tall array. If it's results you're accumulating, consider writing to a file or using datastore or tall's write capabilities.
Memory storage efficiency is ultimately the job of the operating system, not applications. If you want to store large arrays as a single variable but in a variety of physical memory locations, talk to your OS provider about that. They in turn are bound by the physics and geography of the hardware.
16 Comments
Per the multiple explanations in this thread, the physical memory does not need to be contiguous to "operate efficiently". Fragmentation occurs on a page level, not byte-by-byte.
Joss Knight on 29 Mar 2021
This does seem to get your goat!
What needs to be sequential are addresses. How that maps to physical memory is up to the operating system. Applications that want to process numerical data efficiently can do no better than to ask the operating system for sequential address space, and leave it up to the operating system to decide how to distribute, or redistribute that as it sees fit. If you want this done better, talk to your OS author - or maybe write your own OS!
Christopher Thomas on 1 Apr 2021
Since there is context you appear to have overlooked, I'll recap it for you:
- You always get sequential virtual addresses, whether the pages are contiguous in physical memory or not.
- Asking for pages to be contiguous in physical memory results in a much larger heap if heap fragmentation occurs. Under many conditions this results in Matlab asking the OS for a heap that's very much larger than the data being stored in the heap.
- This enormous heap growth is a problem, that many users - including myself - have asked for a way to prevent.
- You and others in this thread have asserted that allowing pages to be non-contiguous in physical memory would result in a large performance drop. This is manifestly not the case, for reasons that have already been explained in this thread in detail (addresses within pages are contiguous and most cache hits and misses will be the same under both scenarios).
You also appear to be falling back to one of the patterns of behavior other users have reported ("blame the OS heap manager"). The heap manager is doing exactly what you're asking it to.
What "gets my goat" is repeated assertions by many users in this thread who appear to have been under misconceptions about how virtual and physical addressing, paging, and caches work.
The people in this forum are here voluntarily to help people. If they say things that you think are incorrect they do not do so maliciously. The spirit of this forum should always be one of tolerance and gratitude, even when people are wrong.
I suppose it's worth reiterating before the argument continues that the answer to your question is obviously no. Perhaps that's all you needed confirmed. Some people, when they ask a question like that, really just want to complain about something about MATLAB they don't like. If that's the case here then...duly noted.
I'm interested to know what the C++ standard library function is to allocate non-contiguous memory, and if it's supported by the compilers and operating systems MATLAB supports. Can you tell me? To integrate any new kind of memory allocator obviously has a significant development impact and MathWorks would need to look at the benefit vs the costs.
The people with a "staff" logo beside their name are (I would hope) answering in their formal capacity as Mathworks employees, on company time. As a result, I'm holding anyone with that logo to a (slightly) higher standard - one where I would expect that, if a disagreement occurs, they take a moment to doublecheck their position (and if they feel it necessary to either look up auxiliary information or pass the support ticket to someone else).
Regarding "allocating non-contiguous memory", that function is called "malloc()". This returns a buffer that is contiguous in virtual memory that may be fragmented in physical memory. To allocate pages that are contiguous to each other in physical memory, you need to either be in kernel space and call the appropriate OS-specific functions, or call some OS-specific user-space function for getting a physically contiguous buffer (if the OS provides such a function at all).
Your phrasing also suggests potential confusion between "non-contiguous memory" and pages that are not contiguous to each other. Within each page, addresses are always contiguous, which is why the claim elsewhere in this thread that non-contiguous pages have a large performance hit is puzzling.
Regarding "the answer is obviously no", I agree that this thread could have been over a long time ago if I'd gotten a straight answer. One plausible conversation along those lines would have been something like the following:
- "Can I tell Matlab to use non-contiguous pages instead of requiring physically contiguous ones?"
- "We don't offer that feature, sorry."
- "Why not? It seems straightforward to implement and your users have been asking for it for 10-15 years."
- "It may be, but the customers responsible for most of our revenue have asked that we prioritize working on different features, and I'm afraid those requests are our top priority."
- [edit:] (alternatively) "It may be, but I'm afraid I don't have the answer to that question."
If your priority was to close this ticket, anything like that would have worked. Instead, there's been a conversation along the lines of the following:
- "Can I tell Matlab to use non-contiguous pages instead of requiring physically contiguous ones?"
- "That's impossible because (blatantly mistaken statement)."
- "Um, no, I'm afraid you're mistaken about (item)."
- [Cycle repeats for a week and a half and counting.]
I do not understand what is intended to be accomplished by this second conversation pattern. It's wasting your time too, which presumably doesn't benefit you or Mathworks. The fact that other users have been reporting this conversation pattern from Mathworks about this topic over the years is even more puzzling - it suggests that this pattern is a consistent deliberate choice on the part of Mathworks.
If you feel that I'm mistaken about something, by all means politely explain what you feel I'm overlooking, after doublechecking.
If you want that kind of conversation, raise a Tech Support query, don't come to MATLAB Answers. This isn't a support ticket, and regardless of what it says next to my name, it is just me answering questions to the best of my knowledge in my own personal time. Hold me to a higher standard if you like, be more irritated with me if I'm wrong if you like, but it's not likely to get you where you want to go any faster. Being difficult isn't likely to make me think it's worthwhile pursuing this with greater depth.
Your answer about malloc just makes me more confused. Have you taken the MATLAB documentation regarding 'contiguous memory' to mean MATLAB uses something other than malloc? Because that really is what MATLAB does. MATLAB doesn't take any special steps to prevent memory from being non-contiguous in physical address space if that's what the OS wants to do. All the documentation is trying to do is point out that the elements of a numeric array are adjacent even when the array is multi-dimensional, while for structures and cell arrays they are not. Indeed, it's perfectly clear that MATLAB doesn't force physical adjacency since it's easy to see that MATLAB uses swap space just by checking the Task Manager or top.
From the first paragraph of the original post:
"As far as I can tell from past questions about this, it's because Matlab stores variables in contiguous physical memory blocks." (ephasis added)
This was the rationale given when other users asked about Matlab's memory use in the past.
If those previous answers were mistaken, saying so in your initial reply would have avoided this entire thread.
It puzzles me that you're confused about my responses, because I've been clear about the distinction between virtual and physical addressing in all of my replies (up to and including having to explain the distinction to people).
Is there anything else about my responses that you'd like me to clarify?
MATLAB stores variables in continuous address space. I don't know what past replies have made you think this somehow means that MATLAB prevents the OS from allocating memory in whatever way it normally does. I don't even know how that's possible. All that matters is that MATLAB does not store one array in multiple, independently allocated blocks. That is what causes MATLAB to exhibit certain behaviour like running out of memory or entering swap when there is still theoretically enough memory left for a new array, or needing to perform copies of significant size when arrays are resized. Which is the sort of thing people tend to ask questions about. I was imagining you were going to tell me there was a way to ask the OS to allocate memory in some way that is more efficient with physical memory at the cost of performance, but it seems like you thought the other way round - that MATLAB was doing something special to prevent that happening.
If I seem stupid to you it's because I don't understand this distinction you make between physical RAM and virtual addressing, and I think my original post makes that pretty clear. Only the OS and probably the BIOS decides how addresses map to physical memory. I assumed when you said physical you were making some sort of distinction between memory from a single allocation and an array made up of multiple allocations. Similarly, you used the terms contiguous and fragmented like that is what you were requesting - an array made up from multiple allocations that can therefore better handle fragmented memory.
So I guess this answers your question? That MATLAB already does what you wanted? I certainly can't rule out both further limitations on my ability to understand what you want and what you mean, my knowledge of the way OSs and computer hardware manage memory, and on my precise knowledge of MATLAB's memory management system. I'll do my best to help though. I'm stubborn that way.
This was the rationale given when other users asked about Matlab's memory use in the past.
Could you provide a few links to posts so we can review exactly what was said?
Regarding finding the forum posts that mentioned contiguous physical memory - I would have to redo the search for that from scratch, and it doesn't seem like that would be productive at this point.
My priority now is getting the affected code working, and there are several approaches I can use for that (two out of three users can restructure their access patterns and the third user can buy time on the supercomputer if necessary).
My priority now is getting the affected code working
?? We are not clear as to what symptoms you are seeing, that you were suspecting might be due to memory allocation issues ??
Per the original post, the symptom being seen is that Matlab is often grabbing far more memory than it should need (based on the variable sizes reported by "whos"), and that the memory footprint rarely decreases (instead slowly growing over time even when the amount that "should" be used doesn't).
This causes problems when Matlab's footprint becomes larger than the amount of physical memory on the compute server we use, as a) swapping is slow and b) the compute server can become unstable when sudden large demands on swap space are made (that's a problem with the compute server configuration, which is being followed up on with our server admins).
Per my original post, previous forum threads had stated that this was due to the way that Matlab handled memory allocation - asking for contiguous physical memory - which was something that should be straightforward to change (hopefully straightforward enough that there would be a configuration switch for it). Those forum posts appear to have been in error, meaning that there isn't a straightforward way to get Matlab to use less memory for a given collection of variables.
The next step for me would be to run a lengthy series of tests figuring out exactly what Matlab's memory allocation patterns are (and checking the OS's allocation patterns for Matlab's memory while I'm at it). That would take enough of a time investment that other approaches to mitigating the problem are easier (convincing the worst-affected users to refactor their code, convincing the boss to spend $5k on more RAM for the compute server, convincing people to pay to run their tasks on the supercomputer rather than on the compute server).
My goal is to have the affected users (myself and two others) be able to run the data processing tasks that they need to without the machines they're running the tasks on misbehaving. There are several approaches to achieving that goal, and this particular one ("ask about the previously reported memory allocation strangeness on the Matlab forum") has reached the point of diminishing returns. I'll mark this thread "closed" and move on to other approaches shortly.
(In case you're wondering, my best guess now is that Matlab and the OS are both trying to do slab-based heap management and their attempts are interacting with each other in bad ways under some conditions. Rather than trying to test that guess, I'm going to pursue other approaches to resolving the problem instead.)
It doesn't sound anything related to MATLAB contiguous memory management.
If the memory footprint isn't stabilize during a long simulation run then IMO you probably have memory leaks somewhere (that could due to user program bug or MATLAB bug).
Requiring physically-contiguous memory would cause heap growth under many conditions, due to fragmentation. As old objects were destroyed and new objects were created, the heap manager would have a difficult time finding sufficiently large physically-contiguous spans for the new objects, and ask for more physical memory to get that space.
This can happen to a lesser extent without physically-contiguous blocks (only virtually-contiguous blocks), which I suspect is what's happening here. The degree to which this happens or can be prevented from happening depends on how smart the heap manager is and what you're asking it to do.
Using "clear" and "close all" should clean up user-code memory leaks. We're already doing that.
I have work more than 20 years with MATLAB with small, large simulation from simple tasks but long simulation (that last weeks to months) to complex tasks (simulation a full autonomous twin robots working during day).
AFAIR I never seen memory increases foreever due to memory fragmentation.
In my various expexriences, once the simulation is going, the memory state quickly stabilize.
If it doesn' stabilize than your simulations must do something constantly different over time in the computer memory.
Anyway up to you to stick with your assumption.
I've been using Matlab since 2003 and have been coding much longer than that. I acknowledge your expertise, but it's pretty obvious that our tasks have different memory access patterns.
For my tasks and one other user's tasks, intermediate results are frequently allocated and de-allocated. These intermediate results are not necessarily the same size (it depends on the data). If there's a situation where subsequent buffers are allocated that are larger than their predecessors, that's exactly the type of scenario where heap fragmentation might occur (buffer of size N is allocated, smaller objects are allocated that are placed after it, buffer of size N is released freeing up a slot of size N, buffer of size N+K is allocated, won't fit into that slot, and is placed at the end of the heap instead).
I haven't looked at the third user's code, so I can't comment on whether this specific case is driving their memory usage or not.
Yes, the code could be rewritten to change those allocation patterns. There is a different, simpler rewrite that addresses the problem in a different way that I've already suggested to the other user instead. My priority at this point is finding the solutions that involve the smallest programmer time investment, as that is the most scarce resource in our lab at the moment (we're not flush with money either, but time is still harder to come by).
Is there any way to tell Matlab (via startup switch or other option) to allow variables to be stored in fragmented physical memory?
No, there is absolutely no chance to impelement this. All underlying library functions expect the data to be represented as contiguous blocks.
If you need a distributed representation of memory, e.g. storing matrices as list of vectors, you have to develop the operations from scratch. This has severe drawbacks, e.g. processing a row vector is rather expensive (when the matrix is stored as column vectors). But of course it works. You only have to relinquish e.g. BLAS and LAPACK routines.
I've written my first ODE integrator with 1 kB RAM. Therefore I know, how lean the first 250 GB of RAM are. But the rule remains the same: Large problems need large machines.
7 Comments
The statement "all of the underlying library functions expect the data to be represented as contiguous blocks" does not make sense. All operations performed in user-mode (rather than kernel-mode) use virtual addressing - they can't tell the difference between pages that map to contiguous physical memory, pages that map to fragmented physical memory, or pages that map to virtual memory that is stored on disk rather than in physical memory.
Hardware devices that use DMA to access physical memory are another matter - but those were discussed in my original post.
Regarding "large problems require large machines" - if the memory footprint Matlab asks for is vastly larger than the amount of data I am dealing with at any one time, the problem is Matlab, not the host machine or host OS. You have had users telling you this continuously for about 10-15 years based on the last few days' of searching.
"All operations performed in user-mode (rather than kernel-mode) use virtual addressing"
But then Matlab would use virtual addressing also. If this is true, memory fragmentation would not be a problem. The BLAS/LAPACK/MKL/etc. libraries are optimized considering the memory management of the CPUs. For the optimal performance it does matter, if you use virtual on non-virtual memory addressing.
My programs do match in the RAM of my computers. I would not be happy if all codes are remarkably slower, when Matlab uses virtual addressing for all variables. But it would be useful, that a user can decide that some variables are addressed such, that they can span non-contiguos RAM pages. Isn't this the case for tall arrays?
I should be easy to implement a class, which uses virtual allocation for the data. As far as I can see, all standard operations should work directly, except for the allocation and change of the array size. It is not trivial to catch the exceptions for shared data copies.
I'll put this more clearly: Matlab already does use virtual addressing for meory access. Your machine is not running in ring zero once it's finished booting. When you ask the OS for contiguous physical memory, what you get is a set of pages that are adjacent in physical memory rather than wherever the OS decided to put them. Memory access goes through the page table in both cases.
What's changing is your memory access patterns, which affects the number of cache misses and TLB misses (to some extent; cache lines are much smaller than pages, so most of the cache misses will still happen, and the number of pages your data is spread across is the same, so the number of page lookups and your pressure on the TLB are similar).
Since the actual assembly-language instructions used to access memory are identical (only the page table layout changes), there is no downside to allowing non-contiguous memory allocation if the user asks for it. Anyone who wants to use contiguous physical memory (due to using DMA hardware or for some other reason) can still get it, by leaving the configuration switch at the default setting.
In case anyone else is under misconceptions about Matlab's use of virtual addressing - using swap memory, which Matlab does, requires virtual addressing.
(Pages that aren't presently in memory are marked as unusable, generating a protection fault when the application tries to access them; the OS shuffles physical pages to and from disk, points the page table entry for the attempted access at the newly-loaded physical page, and returns control to the application).
Matlab also makes extensive use of copy-on-write for passing function arguments, which requires virtual addressing. Function arguments are passed by value but aren't actually copied unless the function changes them.
(Pages in the original copy are marked as read-only, and new page table entries are created with a different virtual address range but mapping to the same physical pages in RAM. These are also marked as read-only. When either the caller or the function try writing to their copies, a protection fault occurs. The OS copies the page that was aliased to a new location, so that the caller and function now have different copies of that portion of the data, and returns control to the application. For structures allocated in contiguous physical memory, all pages are copied when the fault occurs rather than just the page that was written to.)
[EDITED, original]:
I tried it: My Matlab does not use the pagefile. If the RAM is exhausted, it is exhausted and a huge pagefile does not allow Matlab to create larger arrays.
[EDITED, fixed]: This was a mistake from a test in a virtual machine. Matlab does use the pagefile under Windows. Increasing the size of the pagefile allows to create larger arrays.
It was your initial point, that "Matlab stores variables in contiguous physical memory blocks". Now you claim, that virtual addressing is used instead, which would allow an automatic paging. This is a contradiction.
"Matlab also makes extensive use of copy-on-write for passing function arguments, which requires virtual addressing."
No, this does not need virtual addressing. Of course you can implement a copy-on-write strategy by an exception of a write-protected page, but this does not match the behaviour of Matlab: You can write to the memory directly inside a C-mex function and this destroyes the copy-on-write strategy: By this way you can poke into variables, which share the memory but are not provided as input:
x = ones(1, 1e6);
y = x;
yourCMexFunction(y);
% ==>
*mxGetPr(prhs[0]) + 1 = 5;
% <==
x(1:3) % [1 5 1] !!!
There is no automatic detection of a write access. This is a severe problem and discussed in the Matlab forums exhaustively for 25 years.
"For structures allocated in contiguous physical memory, all pages are copied when the fault occurs rather than just the page that was written to."
This is exactly what happens in Matlab.
Obviously your conceptions about Matlab's memory management are not matching the facts. You sound very convinced, if you try to tell others about their "misconceptions". Unfortunately you do not understand the topic you are talking of.
By the way, you could control the memory manager of Matlab 6.5 with different startup parameters. As far as I know this was not officially documented. You still find the undocumented mex functions mxSetAllocFcns and mxSetAllocListeners in the libraries. The licence conditions forbid a reverse-enineering of these functions.
You can simply write some Mex function, which allocate memory by malloc and VirtualAlloc, and compare the run times when calling optimized BLAS and LAPACK functions. My conclusion: I do not want this for standard variables. If some data exhaust my computer, I buy a larger computer or use tall array and a distributed processing in a cluster.
Per my post accidentally attached to another user's response, Matlab certainly does use swap. Feel free to run the sample program I attached to test this (if you have access to a Linux system; Matlab doesn't seem have an OS-independent way to check memory use).
The same test program will demonstrate copy-on-write behavior. Toggle the "modify the input argument" flag "true" or "false" to observe this.
I've clarified my mistake in my former comment: Matlab does use the pagefile under Windows.
Your text code shows, that Matlab uses a copy-on-write method. This is documented. Your assumption, that this is done automatically using exceptions for write-protected virtual pages, does not match the implementation in Matlab.
Steven Lord on 25 Mar 2021
You know, I really want to read the newest Brandon Sanderson novel. But I don't have room on that shelf in my bookshelf for the volume. Let me store pages 1 through 20 on this bookshelf upstairs. Pages 21 through 40 would fit nicely in that little table downstairs. Pages 41 through 50 can squeeze into the drawer on my nightstand upstairs. Pages 51 through 80 could get stacked on top of the cereal in the kitchen cupboard downstairs. Pages ...
That's going to make it a lot more time consuming to read the new book. And since Sanderson's newest book is over 1200 pages long, I'm going to wear a path in the carpet on the stairs before I'm finished.
So no, there is no setting to tell MATLAB not to use contiguous memory.
The bits may be physically located in different locations on the chip, but to MATLAB and to the libraries we use they have to appear contiguous. Since in actuality I'm more likely to read that Sanderson novel on my tablet, the pages could be stored as a file split across different locations in the physical hardware of the SD card in my tablet but the reader software handles it so I see the pages in order and I would be annoyed if I had to manually switch to reading a different book every chapter to match the physical location of the data.
6 Comments
No, it won't take "a lot more time to read the book". See my previous post for a detailed discussion of page tables and contiguous vs non-contiguous memory. Memory accesses involve page table lookups in both cases.
Contiguous physical memory is used when you need an external hardware device, such as a GPU, to be able to work on your data. The GPU does not have access to the page table, so it need the physical addresses of the memory locations it needs to modify. The other option is to move your data to the GPU's own memory, but my understanding is that that isn't usually done (GPU memory is instead used as a cache for data fetched from main memory).
Yes, I have had to work with this sort of thing a few jobs ago.
Steven Lord on 26 Mar 2021
About the closest thing to the functionality you're requesting that MATLAB provides are some of the large file and big data capabilities like datastore and tall arrays. Distributed arrays in Parallel Computing Toolbox may be another option.
I agree that it's possible to partition data that is larger than physical memory. My objection is to being told that this is the only possible approach when the data held in memory (as reported by whos) is vastly smaller than available physical memory, when an apparently-straightforward feature that's been requested for 10-15 years would resolve it.
I'd be happy just getting a plausible answer about why this hasn't been addressed in that time, even if it's "the customers driving 90% of our revenue prefer that we work on different features instead".
Instead, in this thread I've been given a rationale that was demonstrably mistaken and have been told that despite having very much more physical RAM than data, the problem is the machines I'm using, not Matlab's memory handling. Past threads about this (over 10-15 years) report that you often blame the OS's heap manager as well.
If you're worried about push-back from "it's not worth our time to do this", you could easily side-step it by saying "if a group of customers holding annual licenses worth (10 times the estimated implementation cost) ask us to, we'll be happy to prioritize that feature". It would be win/win: either the people asking would go away (with fewer hard feelings) or you'd learn that it really is important enough to your customers to be worthwhile.
"in this thread I've been given a rationale that was demonstrably mistaken" - which one?
There is no cheap standard method to store large objects in a limited space. The problems are equivalent for arrays in the RAM and parcels in a post van - except for the number of dimensions. Efficient programming includes a proper pre-allocation to avoid memory fragmentation. This problem cannot be solve auto-magically by the memory manager. Of course virtual memory or an automatic garbage collector are valid tries to manage the ressources efficiently, but they all have severe disadvantages also.
Obviously the feature you want is not the main problem of other users and of MathWorks, or there is simply no efficient solution.
John D'Errico on 27 Mar 2021
Remember that knowing those elements are stored contiguously in memory is a hugely important feature, and making them stored in those memory locations improves the way the BLAS works. And that is a big feature in terms of speed. So while a few people MIGHT want to have a feature that would slow down MATLAB for everybody else, I doubt most users would be happy to know that because one person thinks it important, suddenly a matrix multiply is now significantly slower.
It won't happen, nor would I and a lot of other people be happy if it did.
Regarding which rationales were demonstrably mistaken:
- The claim that Matlab's computation routines must use single contiguous physical memory segments to store arrays (as opposed to putting pages wherever there's space). The only thing that actually needs this is DMA from devices that don't have access to the page table.
- The claim that Matlab uses physical addressing rather than virtual addressing. Matlab uses copy-on-write and swapping, and runs as user-space code. Direct access to physical memory requires privileged instructions because it bypasses memory protection (which is implemented by the page table).
- The claim that the OS's heap manager is in any way to blame for this (from previous form threads). It's doing exactly what you're asking it to.
- The claim that I need a machine with more memory. If my dataset is very much smaller than physical memory and Matlab is asking for very much more space than I have physical memory, the machine isn't the problem.
Regarding "elements being stored contiguously in memory being a hugely important feature", all that's needed is that they be contiguous in virtual memory, which makes them contiguous within pages. Remember that page size (2 MB last I checked; 4 kB on very old systems) is very much larger than cache row size (typically anywhere from 32 bytes to 256 bytes). Virtually all of your cache hits and misses will happen the same way whether pages are contiguous with each other or fragmented. Moving to a different page requires a new page table lookup whether that page is contiguous to the old one or not, so it's not obvious to me that making pages contiguous with each other helps cache performance at all.
Per my original post - all I'm looking for is that there be the option to store pages in fragmented memory rather than contiguously with each other. The computing library implementation is unchanged (it literally can't tell the difference); the only changes you'd need to make are to switch to different memory allocation calls when the "allow fragmented memory" flag is set, and to lock out hardware accelerators like GPUs when the flag is set.
This in no way "slows down MATLAB for everyone else". The entire point of a flag is that it's something you have to set. By default memory allocation would still be physically contiguous.
|
https://www.mathworks.com/matlabcentral/answers/782688-can-i-tell-matlab-not-to-use-contiguous-memory
|
CC-MAIN-2022-27
|
refinedweb
| 9,947
| 57.91
|
Ever wish you had an easier way of doing validation on a string value, or checking to make sure it was in the proper format, or return a value with a simple "ToSomething" other than the defaults that already exists?
For example:
string myEmailAddress; //simple string, nothing fancy myEmailAddress = "joeATjoe.com"; //now use the extended method to validate the email address if (myEmailAddress.IsValidEmail()) { //email joe something } else { //tell the user this email is invalid!!! } //how bout this one for my old old old VB buddies that saw the light and crossed over to C# string myDate; //still just a string myDate = "12/18/08" // this could be passed from a user as well... //now valid using another cool time saving extended method if (myDate.IsDate()) //OMG where did that come from!! { //do something with it } else { //tell em it not a valid date!! }
Well Extension Methods allow you to do just that and makes them accessible anywhere throughout your project using base class references.
Here's how you create an extended method...
First add a new class to your project
Create a namespace that you MUST make reference to in order to use the extension.
namespace CustomExtensions
Then change the class so that it is static, extension methods only work if they are static (because all base class methods are...)
public static class ExtensionMethods
Now lets add the example extensions mentioned above...
namespace CustomExtensions { public static class ExtensionMethods { public static bool IsValidEmail(this String str) { if (!String.IsNullOrEmpty(str.Trim())) { System.Text.RegularExpressions.Regex regex = new System.Text.RegularExpressions.Regex("^([0-9a-zA-Z]([-.\\w]*[0-9a-zA-Z])*@([0-9a-zA-Z][-\\w]*[0-9a-zA-Z]\\.)+[com|org|net]{2,3})$"); if (!regex.Match(str).Success) return false; } else return false; return true; } public static bool IsDate(this String str) { if (!String.IsNullOrEmpty(str.Trim())) { DateTime chkDate; if (!DateTime.TryParse(str, out chkDate)) return false; } else return false; return true; } }
You'll notice the parameter in the method call looks a lil different. With extension methods the first parameter has to be referenced to what base class you are trying to extend. (this String str). The "str" variable simply holds the current state of the object for use of its base methods.
Now on any form/web page you want to use the extension, you MUST make reference to it, "using CustomExtensions" in order to take advantage of it. Try not to name any of your extensions the same as current base class methods or else if will not be read (just the base method).
Because in the examples above I used the String base class, any object that has a .text, .tostring, .value. SelectedText can all have the IsDate() or IsValidEmail() called. It's pretty cool. I have a ton of em!!!
So hopefully this small yet powerfull tutorial help you start thinking about some cool extension methods to add to your projects. Let us know!!
|
http://www.dreamincode.net/forums/topic/77402-how-to-create-extension-methods/
|
CC-MAIN-2016-30
|
refinedweb
| 490
| 64.61
|
This article will walk you through the steps required to create a signature hash to authenticate open channel API calls sent by Airship to your Webhook:
When configuring your webhook in the Airship Dashboard and selecting "Signature Hash" from the Authentication dropdown menu, you will provide us with a secret key based on the sha256 hash function, which you will later use to verify the signature on the receiving server.
You can generate a strong Signature Secret with the following commands:
On Linux:
openssl rand 128 | sha256sum | cut -f1 -d' '
On Mac:
openssl rand 128 | shasum -a 256 | cut -f1 -d' '
Other platforms may differ slightly.
Ensure your shared Signature Secret is stored securely in your system.
Each call made from Airship to your webhook will include the X-UA-TIMESTAMP and X-UA-SIGNATURE headers, and the request body. For example:
POST /yourWebhookServer/push HTTP/1.1 (or - GET /yourWebhookServer/validate)
- The X-UA-TIMESTAMP is an Epoch Unix time in seconds representing when the message was sent from Airship.
- To calculate the Signature, Airship first generates a Message. The Message is a concatenation of the string values of the X-UA-TIMESTAMP header and the request body JSON, separated by the character ":".
- All values are UTF-8 encoded.
- Since the /validate call is a GET, the request body JSON is represented by an empty string when calculating the signature
- For /push, the request body will be the JSON of the Open Delivery Payload
- The X-UA-SIGNATURE header is a sha256 hash of the Message string described above, using the secret key stored in Airship during setup. After hashing, the final value of the header is represented as a hexadecimal string.
To validate within the webhook, you will need to receive the API request coming from Airship and use the values of the X-UA-TIMESTAMP header and the request Body to generate the expected signature. Then, compare your calculated signature to the X-UA-SIGNATURE header on the request in order to authenticate it.
The following is an example of a function that would validate the signature in the incoming call:
function isValidSignature(request, res) {
var key = {secret key provided to Airship in Webshook setup in the Dashboard}; var headers = request.headers; var signature = headers['x-ua-signature']; var body = ''; // Create an empty vairable for the Body // If request body isn't empty, assign value to variable if (request.body) { body = request.body; } // Note that Node http requires using lower-cases for all headers var message=headers['x-ua-timestamp']+":"+body; // Concatenation of recieved timestamp and body var hmac = crypto.createHmac('sha256', key) // Create a sha256 hashed code using the secret key hmac.update(message, 'utf8') // Update the hash with the concatantaed message using utf8 var digest = hmac.digest('hex'); return (signature === digest); // If signature equals your hased code, return true }
An advantage of the above formulation of this validation function is that it gets the elements from the API call which then works for both the call to the /validate/ and /push/ endpoints with the same function.
Additionally, you should validate the
X-UA-TIMESTAMP against the current time. We recommend that you use a 5-minute threshold to account for time drift, though Airship uses NTP and we recommend that your webhook server does the same. To prevent timing attacks, you should employ a constant time-compare function when checking signatures.
Return to the Webhook configuration in your Airship Dashboard, Check the Enabled box to enable the open channel for use, then click Update. This will send out the GET call to your server's /validate/ endpoint. In response to this call your server should then:
- Return a 200 response code.
- Return a
Content-Typeof
"application/json".
- Return a JSON body with the confirmation code in the following format:
{"confirmation_code":"559384cd-6284-4e3e-9e4e-7c260019a251"}.
Be sure that the body is valid JSON and returns the confirmation_code generated for the webhook in the Airship Dashboard.
This will complete the process and your webhook will be ready to receive POST requests.
|
https://support.airship.com/hc/en-us/articles/360032501831-Implementing-a-Signature-Hash-and-Validating-an-Open-Channel-Webhook
|
CC-MAIN-2020-50
|
refinedweb
| 675
| 50.36
|
Convert ThinkOrSwim Indicator to TradingView
Budjetti $30-250 USD
I am looking for someone capable of converting an indicator from thinkorswim to tradingview. Please make sure to read the information and attached code before submitting your bids. Thank you.
# Trend Reversal
# Discuss [login to view URL]
def price = close;
def superfast_length = 9;
def fast_length = 14;
def slow_length = 21;
def displace = 0;
def mov_avg9 = ExpAverage(price[-displace], superfast_length);
def mov_avg14 = ExpAverage(price[-displace], fast_length);
def mov_avg21 = ExpAverage(price[-displace], slow_length);
#moving averages
def Superfast = mov_avg9;
def Fast = mov_avg14;
def Slow = mov_avg21;
def buy = mov_avg9 > mov_avg14 and mov_avg14 > mov_avg21 and low > mov_avg9;
def stopbuy = mov_avg9 <= mov_avg14;
def buynow = !buy[1] and buy;
def buysignal = CompoundValue(1, if buynow and !stopbuy then 1 else if buysignal[1] == 1 and stopbuy then 0 else buysignal[1], 0);
def Buy_Signal = buysignal[1] == 0 and buysignal == 1;
def Momentum_Down = buysignal[1] == 1 and buysignal == 0;
def sell = mov_avg9 < mov_avg14 and mov_avg14 < mov_avg21 and high < mov_avg9;
def stopsell = mov_avg9 >= mov_avg14;
def sellnow = !sell[1] and sell;
def sellsignal = CompoundValue(1, if sellnow and !stopsell then 1 else if sellsignal[1] == 1 and stopsell then 0 else sellsignal[1], 0);
def Sell_Signal = sellsignal[1] == 0 and sellsignal;
input method = {default average, high_low};
def bubbleoffset = .0005;
def percentamount = .01;
def revAmount = .05;
def atrreversal = 2.0;
def atrlength = 5;
def pricehigh = high;
def pricelow = low;
def averagelength = 5;
def averagetype = [login to view URL];
def mah = MovingAverage(averagetype, pricehigh, averagelength);
def mal = MovingAverage(averagetype, pricelow, averagelength);
def priceh = if method == method.high_low then pricehigh else mah;
def pricel = if method == method.high_low then pricelow else mal;
def EI = ZigZagHighLow("price h" = priceh, "price l" = pricel, "percentage reversal" = percentamount, "absolute reversal" = revAmount, "atr length" = atrlength, "atr reversal" = atrreversal);
rec EISave = if !IsNaN(EI) then EI else GetValue(EISave, 1);
def chg = (if EISave == priceh then priceh else pricel) - GetValue(EISave, 1);
def isUp = chg >= 0;
def EIL = if !IsNaN(EI) and !isUp then pricel else GetValue(EIL, 1);
def EIH = if !IsNaN(EI) and isUp then priceh else GetValue(EIH, 1);
def dir = CompoundValue(1, if EIL != EIL[1] or pricel == EIL[1] and pricel == EISave then 1 else if EIH != EIH[1] or priceh == EIH[1] and priceh == EISave then -1 else dir[1], 0);
def signal = CompoundValue(1, if dir > 0 and pricel > EIL then if signal[1] <= 0 then 1 else signal[1] else if dir < 0 and priceh < EIH then if signal[1] >= 0 then -1 else signal[1] else signal[1], 0);
def bullish2 = signal > 0 and signal[1] <= 0;
plot upArrow = bullish2;
[login to view URL](PaintingStrategy.BOOLEAN_ARROW_UP);
[login to view URL](CreateColor(145, 210, 144));
def bearish2 = signal < 0 and signal[1] >= 0;
plot downArrow = bearish2;
[login to view URL](PaintingStrategy.BOOLEAN_ARROW_DOWN);
[login to view URL](CreateColor(255, 15, 10));
|
https://www.fi.freelancer.com/projects/php/convert-thinkorswim-indicator/?ngsw-bypass=&w=f
|
CC-MAIN-2022-05
|
refinedweb
| 467
| 58.92
|
If.
I have been working in an internal laboratory for years. This laboratory is always equipped with the latest big servers from our company and is free for our partners to test the performance of their products and solutions. Part of my job is to help them tune the performance on all kinds of the powerful CMT and SMP servers.
In these years, I have helped testing dozens of Java applications in variety of different solutions. Many products are aimed for the same industry domains and have very similar functionalities, but the scalability is so different that some of them can not only scale up on the 64 CPUs servers, but also scale out to more than 20 server nodes, while others can only be running on the machines with no more than 2 CPUs., as a property of systems, is generally difficult to define, and is often mix-used with "performance". Yes, yes, scalability is closely related with performance, and its purpose is to get high performance. But the measurement for "scalability" is different from "performance". In this article, we will take the definitions from wikipedia:) are added. traditional RISC processor based scientific computers.
The first installment of this article will discuss scaling Java applications vertically.
Many software designers and developers take the functionality as the most important factor in a product while thinking of performance and scalability as add-on features and after-work actions. Most of them believe that expensive hardware can close the gap of the performance issue.
Sometimes they are wrong. Last month, there was an urgent project in our laboratory. After the product failed to meet the performance requirement of their customer in a 4-CPU machine, the partner wanted to test their product in a bigger (8-CPU) server. The result was that the performance was worse than in the 4-CPU server.
Why did this happen? Basically, if your system is a multiprocessed or multithreaded application, and is running out of CPU resources, then your applications will most likely scale well when more CPUs added.
Java technology-based applications embrace threading in a fundamental way. Not only does the Java language facilitate multithreaded applications, but the JVM is a multi-threaded process that provides scheduling and memory management for Java applications. Java applications that can benefit directly from multi-CPU resources include application servers such as BEA's Weblogic, IBM's Websphere, or the open-source Glassfish and Tomcat application server. All applications that use a Java EE application server can immediately benefit from CMT & SMP technology.
But in my laboratory, I found a lot of products cannot make full usage of the CPU resources. Some of them can only occupy no more than 20% CPU resources in an 8-CPU server. Such applications can benefit little when more CPU resources added.
The primary tool for managing coordination between threads in Java programs is the synchronized keyword. Because of the rules involving cache flushing and invalidation, a synchronized block in the Java language is generally more expensive than the critical section facilities offered by many platforms. Even when a program contains only a single thread running on a single processor, a synchronized method call is still slower than an un-synchronized method call.
To observe the problems caused by the synchronized keyword, just send a QUIT signal to the JVM process, which gives you a thread dump. If you have seen a lot of thread stacks just like the following in the thread dump file, which means that your system hits "Hot Lock" problem.
........... "Thread-0" prio=10 tid=0x08222eb0 nid=0x9 waiting for monitor entry [0xf927b000..0xf927bdb8] at testthread.WaitThread.run(WaitThread.java:39) - waiting to lock <0xef63bf08> (a java.lang.Object) - locked <0xef63beb8> (a java.util.ArrayList) at java.lang.Thread.run(Thread.java:595) ..........
Hot Lock may.
To avoid the hot lock problem, following suggestions may be helpful:
Make synchronized blocks as short as possible
When you make the time a thread holds a given lock shorter, the probability that another thread competes with the same lock will become lower. So while you should use synchronization to access shared variables, you should move the thread safe code outside of the synchronized block. Take following code as an example:
Code list 1: public boolean updateSchema(HashMap nodeTree) { synchronized (schema) { String nodeName = (String)nodeTree.get("nodeName"); String nodeAttributes = (List)nodeTree.get("attributes"); if (nodeName == null) return false; else return schema.update(nodeName,nodeAttributes); } }
This piece of code wants to protect the shared variable "schema" when updating it. But the code for getting attribute values is thread safe, and can be moved out of the block, making the synchronized block shorter:
Code list 2: public boolean updateSchema(HashMap nodeTree) { String nodeName = (String)nodeTree.get("nodeName"); String nodeAttributes = (List)nodeTree.get("attributes"); synchronized (schema) { if (nodeName == null) return false; else return schema.update(nodeName,nodeAttributes); } }
Reducing lock granularity
When you are using a "synchronized" marker, you have two choices on its granularity: "method locks" or "block locks". If you put the "synchronized" on a method, you are locking on "this" object implicitly.
Code list 3: public class SchemaManager { private HashMap schema; private HashMap treeNodes; .... public boolean synchronized updateSchema(HashMap nodeTree) { String nodeName = (String)nodeTree.get("nodeName"); String nodeAttributes = (List)nodeTree.get("attributes"); if (nodeName == null) return false; else return schema.update(nodeName,nodeAttributes); } public boolean synchronized updateTreeNodes() { ...... } }
Compared the code with Code list 2, this piece of code is worse, because it
locks on the entire object when calling "updateSchema" method. To achieve finer
granularity, just lock the "schema" instance variable instead of the all "SchemaManager"
instances to enable different methods to be paralleled.
Avoid lock on static methods
The worst solution is to put the "synchronized" keywords on the static methods, which means it will lock on all instances of this class. One of projects tested in our laboratory had been found to have such issues. When tested, we found almost all working threads waiting for a static lock (a Class lock):
-------------------------------- at sun.awt.font.NativeFontWrapper.initializeFont(Native Method) - waiting to lock <0xeae43af0> (a java.lang.Class) at java.awt.Font.initializeFont(Font.java:316) at java.awt.Font.readObject(Font.java:1185) at sun.reflect.GeneratedMethodAccessor147.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:324) at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:838) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1736).readSerialData(ObjectInputStream.java:1759).defaultReadObject(ObjectInputStream.java:452) at com.fr.report.CellElement.readObject(Unknown Source) .........
When using Java2D to generate font objects for the reports, the developers put a native static lock on the "initialize" method. To be fair, this was caused by Sun's JDK 1.4 (Hotspot). After changing to JDK 5.0, the static lock disappeared.
Using lock free data structure in Java SE 5.0
The "synchronized" keyword in Java is simply a relatively coarse-grained coordination mechanism, and as such, is fairly heavy for managing a simple operation such as incrementing a counter or updating a value, like following code:
Code list 4: public class OnlineNumber { private int totalNumber; public synchronized int getTotalNumber() { return totalNumber; } public synchronized int increment() { return ++totalNumber; } public synchronized int decrement() { return --totalNumber; } }
The above code is just locking very simple operations, and the "synchronized" blocks are very short. However, if the lock is heavily contended (threads frequently ask to acquire the lock when it is already held by another thread), throughput can suffer, and contended synchronization can be quite expensive..
A CAS operation includes three parameters -- a memory location, the expected old value, and a new value. The processor will update the location to the new value if the value that is there matches the expected old value; otherwise it will do nothing. It will return the value that was at that location prior to the CAS instruction. An example way to use CAS for synchronization is as following:
Code list 5: public int increment() { int oldValue = value.getValue(); int newValue = oldValue + 1; while (value.compareAndSwap(oldValue, newValue) != oldValue) oldValue = value.getValue(); return oldValue + 1; }
First, we read a value from the address, then perform a multi-step computation to derive a new value (this example is just increasing by one), and then use CAS to change the value of address from oldValue to the newValue. The CAS succeeds if the value at address has not been changed in the meantime. If another thread did modify the variable at the same time, the CAS operation will fail, but detect it and retry it in a while loop. The best thing of CAS is that it is implemented in hardware and is extremely lightweight. If 100 threads execute this increment()method at the same time, in the worst case each thread will have to retry at most 99 times before the increment is complete.
The java.util.concurrent.atomic package in Java SE 5.0 and above provides classes that support lock-free thread-safe programming on single variables. The atomic variable classes all expose a compare-and-set primitive, which is implemented using the fastest native construct available on the platform. Nine flavors of atomic variables are provided in this package, including: AtomicInteger; AtomicLong; AtomicReference; AtomicBoolean; array forms of atomic integer; long; reference; and atomic marked reference and stamped reference classes, which atomically update a pair of values.
Using an atomic package is easy. To rewrite the increasing method of code list 5:
Code list 6: import java.util.concurrent.atomic.*; .... private AtomicInteger value = new AtomicInteger(0); public int increment() { return value.getAndIncrement(); } .....
One successful story about the lock free algorithms is a financial system tested in our laboratory, after replaced the "Vector" data structure with "ConcurrentHashMap", the performance in our CMT machine(8 cores) increased more than 3 times.. Why do I say it will cause the scalability problem?
Let's take a real world case as an example. This is an ERP system for manufacture, when tested its performance in one of our latest CMT servers (2CPU, 16 cores, 128 strands ), we found the CPU usage was more than 90%. This was a big surprise, because few applications can scale so well in this type of machine. Our excitement just lasted for 5 minutes before we discovered that the average response time was very high and the throughput was unbelievable low. What were these CPUs doing? Weren't they busy? What were they busy with? Through the tracing tools in the OS, we found almost all the CPUs were doing the same thing - "HashMap.get()", and it seemed that all CPUs were in infinite loops. Then we tested this application on diverse servers with different numbers of CPUs. The result was that the more CPUs the server has, the more chances this infinite loop would happen.
The root cause of the infinite loop is on an unprotected shared variable-- a "HashMap" data structure. After added "synchronized" marker to all the access methods, everything was normal. By checking the source code of the "HashMap" (Java SE 5.0), we found there was some potential for such an infinite loop by corrupting its internal structure. As shown as following code, if we make the entries in the HashMap to form a circle, then "e.next()" will never be a null.
Code list 7:; }
Not only its get() method, but also put() and other methods are all exposed by this risk. Is this a bug of JVM? No, this was reported long time ago (please refer to). Sun engineers didn't think it a bug, but rather suggested the use of "ConcurrentHashMap". So take it into consideration when building a scalable system.
The java.nio package, which was introduced in Java 1.4, allows developers to achieve greater performance in data processing and offers better scalability. The non-blocking I/O operations provided by NIO allows for Java applications to perform I/O more like what is available in other lower-level languages like C. There are a lot of NIO frameworks currently, such as Mina from Apache and Grizzly from Sun, which are widely used by many projects and products.
During the last 5 months, there were two Java EE projects hold in our laboratory which only wanted to test their products' performance on both traditional blocking-I/O based servers and non-blocking I/O based servers, to see the difference. They chose Tomcat 5 as blocking-I/O based servers, and Glassfish as Non-blocking I/O based servers.
First, they tested a few simple JSP pages and Servlets, got the following result (on a 4-CPUs server):
The performance of Glassfish was far behind Tomcat's according to the test result. The customer doubted about the advantage of non-blocking I/O. Why so many articles and technical reports are telling about the performance and scalability of the NIO?
After tested more scenarios, they changed their mind, for the results showed the power of NIO little by little. What they have tested are:
The figure below shows the results of the testing on a 4-CPU server.
Figure 1: Throughput in a 4CPU server
Traditional blocking I/O will use a dedicated working thread for a coming. This model in Tomcat 5 is very effective when dealing with simple logical in a small number of concurrent users under perfect network environments.
But if the request involves complex logic, or interacts with outer system such as file systems, database, or a message server, the working thread will be blocked at the most of the processing time to wait for the return of Syscalls or network transfers. The blocking thread will be held by the request until finished, but the operating system will park this thread to relieve the CPU to deal with other requests. If the network between the clients and the server is not very good, the network latency will block the threads longer. Even more, when keep-alive is required, the current working thread will be blocked for a long time after the request processing is finished. To better utilized the CPU resources, more working threads are needed.
Tomcat uses a thread pool, and each request will be served by any idle thread in the thread pool. "maxThreads" decides the maximum number of threads that Tomcat can create to service requests. If we set "maxThreads" too small, we cannot fully utilize the CPU resources, and more important, will get a lot of requests dropped and rejected by the server when concurrent users increases. In this testing, we set "maxThreads" to "1000" (which is too large and unfair to Tomcat). Under such settings, Tomcat will span a lot of threads when concurrent users go up to a high level..
Glassfish doesn't need so many threads. In non-blocking IO, a working thread will not binding to a dedicated request. If one request is blocking due to any reasons, this thread will reuse by other requests, In such way, Glassfish can handle thousands of concurrent users by only tens of working threads. By limiting the threads resources, Non-blocking IO has better scalability (refer to the figure below). That's the reason that Tomcat 6 has embraced non-blocking IO too.
Figure 2: scalability test result
A Java EE-based ERP system was tested in our laboratory months ago, and one of its testing scenarios was to generate a very complex annual report. We tested this scenario in different servers and found that the cheapest AMD PC server got the best performance. This AMD server has only two 2.8G HZ CPUs and 4G memory, yet its performance exceeded the expensive 8-CPUs SPARC server shipped with 32G memory.
The reason is because that scenario is a single thread task, which can only be run by a single user (concurrently access by many users is meaningless in this case ). So it can just using one CPU when running. Such a task cannot scale to multi-processors. At the most of time, the frequency of CPU plays the leading role of the performance in such cases..
Re-architecture and re-code the whole solution is a time consuming work and error prone. One of projects in our laboratory used JOMP and achieved parallelization for its single-thread tasks. JOMP is a Java API for thread-based SMP parallel programming. Just like OpenMP, JOMP uses compiler directives to insert parallel programming constructs into a regular program. In a Java program, the JOMP directives take the form of comments beginning with //omp. The JOMP program is run through a precompiler which processes the directives and produces the actual Java program, which is then compiled and executed. JOMP supports most features of OpenMP, including work-sharing parallel loops and parallel sections, shared variables, thread local variables, and reduction variables. The following code is an example of JOMP programming.
Code list 8: Li n k e dLi s t c = new Li n k e dLi s t ( ) ; c . add ( " t h i s " ) ; c . add ( " i s " ) ; c . add ( " a " ) ; c . add ( "demo" ) ; / / #omp p a r a l l e l i t e r a t o r f o r ( S t r i n g s : c ) System . o u t . p r i n t l n ( " s " ) ;
Like most parallelizing compilers, JOMP also focus on loop-level and collection parallelism, studying how to execute different iterations simultaneously. To be parallelized, two iterations shouldn't present any data dependency-that is, neither should rely on calculations that the other one performs.
To write a JOMP program is not an easy work. First, you should familiar with OpenMP directives, and familiar with JVM Memory Model's mapping for those directives, then know your business logic to put the right directives on the right places.
Another choice is to use Parallel Java. Parallel Java, like JOMP, supports most features of OpenMP; but unlike JOMP, PJ's parallel constructs are obtained by instantiating library classes rather than by inserting precompiler directives. Thus, "Parallel Java" needs no extra precompilation step. Parallel Java is not only useful for the parallelization on multiple CPUs, but also for the scalability on multiple nodes. The following code is an example of "Parallel Java" programming.
Code list 9: static double[][] d; new ParallelTeam().execute (new ParallelRegion() { public void run() throws Exception { for (int ii = 0; ii < n; ++ ii) { final int i = ii; execute (0, n-1, new IntegerForLoop() { public void run (int first, int last) { for (int r = first; r <= last; ++ r) { for (int c = 0; c < n; ++ c) { d[r][c] = Math.min (d[r][c], d[r][i] + d[i][c]); } } } }); } } });
Memory is an important resource for your applications. Enough memory is critical to performance in any application, especially for database systems and other I/O-focused systems. More memory means larger shared memory space and larger data buffers, to enable applications read more data from the memory instead of slow disks..
Coming back to the question of whether Java applications scale by given more memory, the answer is yes, sometimes. Too little memory will cause garbage collection to happened too frequently. Enough memory will keep the JVM processing your business logic most of time, instead of collecting garbage.
But it is not always true. A real world case in my laboratory is a Telco system built on a 64-bit JVM. By using a 64-bit JVM, the application can break the limit of 4GB memory usage found in a 32-bit JVM. It was tested on a 4-CPU server with 16GB memory, and they gave 12GB memory to the Java application. In order to improve the performance, they cached more than 3,000,000 objects in memory when initialization to avoid creating too many objects when running. This product was running very fast during the first hour of testing, then suddenly, system halted for more than 30 minutes. We had determined that it was the garbage collection that stopped the system for half an hour.
Garbage collection is the process of reclaiming memory taken up by unreferenced objects. Unreferenced objects are ones the application can no longer reach because all references to them have gone out of extent. If a huge number of live objects exist in the memory (just like the 3,000,000 cached objects), the garbage collection process will take a long time to traverse all these objects. That's why the system halted for such a long and unacceptable time.
In other memory-centric Java applications tested in our laboratory, we found the following characteristics:.
The Real Time JVM (JSR001) has the ability to let the programmer control memory collection. Applications can use this feature to say to the JVM "Hi, this huge space of memory is my cache, I will take care of it myself, please don't collect it automatically". This functionality can make Java applications scale on the huge memory resources. Hope JVM vendors will bring it into the normal free JVM versions in the near future.
To scale these memory-centric Java applications, you need multiple JVM instances, or multiple machine nodes.
Some scalability problems in Java EE applications are not related to themselves. The limitation from external systems sometime will become the bottleneck of scalability. Such bottlenecks may include:
These are not only problems for Java EE applications, but for all systems on any platform. To resolve these problems need help from database administrators, system engineers and network analyzers on all the levels of the systems.
The second installment of this article will discuss problems with scaling horizontally..
PRINTER FRIENDLY VERSION
|
http://www.theserverside.com/tt/articles/content/ScalingYourJavaEEApplications/article.html
|
crawl-002
|
refinedweb
| 3,603
| 54.83
|
Allocate a dispatch handle, specifying a channel ID
#include <sys/iofunc.h> #include <sys/dispatch.h> dispatch_t *dispatch_create_channel( int chid, unsigned reserved );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The dispatch_create_channel() function allocates and initializes a dispatch handle. After you've called this function, you need to call one or more of the following functions to attach handlers for the things you want the dispatch layer to deal with:
Then you call dispatch_context_alloc() to allocate the context for dispatch_block() and dispatch_handler().
If you wish, you can do a resmgr_attach() with a NULL path. This has the effect of initializing dispatch to receive messages, among other things.
This function is similar to dispatch_create(), but lets you specify a channel for the dispatch to use. It also lets you specify channel flags for name_attach().
This function is part of the dispatch layer of a resource manager. For more information, see "Layers in a resource manager" in the Bones of a Resource Manager chapter of Writing a Resource Manager.
A handle to a dispatch structure, or NULL if an error occurs. The dispatch_t structure is an opaque data type; you can't access its contents directly.
|
http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.neutrino.lib_ref/topic/d/dispatch_create_channel.html
|
CC-MAIN-2021-04
|
refinedweb
| 205
| 57.06
|
- analysis.
- API key
- A unique identifier that you can use for authentication when submitting Elasticsearch requests. When TLS is enabled, all requests must be authenticated using either basic authentication (user name and password) or an API key.
- auto-follow pattern
- An index pattern that automatically configures new indices as follower indices for cross-cluster replication. For more information, see Managing auto follow patterns.
- cluster
- One or more nodes that share the same cluster name. Each cluster has a single master node, which is chosen automatically by the cluster and can be replaced if it fails.
- cold phase
- The third possible phase in the index lifecycle. In the cold phase, an index is no longer updated and seldom queried. The information still needs to be searchable, but it’s okay if those queries are slower.
- cross-cluster replication (CCR)
- A feature that enables you to replicate indices in remote clusters to your local cluster. For more information, see Cross-cluster replication.
- cross-cluster search (CCS)
- A feature that enables any node to act as a federated client across multiple clusters. See Search across clusters.
- delete phase
- The last possible phase in the index lifecycle. In the delete phase, an index is no longer needed and can safely be deleted.
-.
-.
- flush
- Peform a Lucene commit to write index updates in the transaction log (translog) to disk. Because a Lucene commit is a relatively expensive operation, Elasticsearch records index and delete operations in the translog and automatically flushes changes to disk in batches. To recover from a crash, operations that have been acknowledged but not yet committed can be replayed from the translog. Before upgrading, you can explicitly call the Flush API to ensure that all changes are committed to disk.
- follower index
- The target index for cross-cluster replication. A follower index exists in a local cluster and replicates a leader index.
- force merge
- Manually trigger a merge to reduce the number of segments in each shard of an index and free up the space used by deleted documents. You should not force merge indices that are actively being written to. Merging is normally performed automatically, but you can use force merge after rollover to reduce the shards in the old index to a single segment. See the force merge API.
- freeze
- Make an index read-only and minimize its memory footprint. Frozen indices can be searched without incurring the overhead of of re-opening a closed index, but searches are throttled and might be slower. You can freeze indices to reduce the overhead of keeping older indices searchable before you are ready to archive or delete them. See the freeze API.
- frozen index
- An index reduced to a low overhead state that still enables occasional searches. Frozen indices use a memory-efficient shard implementation and throttle searches to conserve resources. Searching a frozen index is lower overhead than re-opening a closed index to enable searching.
- hot phase
- The first possible phase in the index lifecycle. In the hot phase, an index is actively updated and queried.
- id
- The ID of a document identifies a document. The
index/idof a document must be unique. If no ID is provided, then it will be auto-generated. (also see routing)
- index
An optimized collection of JSON documents. Each document is a collection of fields, the key-value pairs that contain your data.
An index is a logical namespace that maps to one or more primary shards and can have zero or more replica shards.
- index alias
An index alias is a logical name used to reference one or more indices.
Most Elasticsearch APIs accept an index alias in place of an index name.
See Add index alias.
- index lifecycle
- The four phases an index can transition through: hot, warm, cold, and delete. For more information, see Index lifecycle.
- index lifecycle policy
- Specifies how an index moves between phases in the index lifecycle and what actions to perform during each phase.
- index pattern
- A string that can contain the
*wildcard to match multiple index names. In most cases, the index parameter in an Elasticsearch request can be the name of a specific index, a list of index names, or an index pattern. For example, if you have the indices
datastream-000001,
datastream-000002, and
datastream-000003, to search across all three you could use the
datastream-*index pattern.
- index template
Defines settings and mappings to apply to new indexes that match a simple naming pattern, such as logs-*.
An index template can also attach a lifecycle policy to the new index. Index templates are used to automatically configure indices created during rollover.
- leader index
- The source index for cross-cluster replication. A leader index exists on a remote cluster and is replicated to follower indices.
- local cluster
- The cluster that pulls data from a remote cluster in cross-cluster search or cross-cluster replication.
- mapping
A mapping is like a schema definition in a relational database. Each index has a mapping, which defines a type, plus a number of index-wide settings.
A mapping can either be defined explicitly, or it will be generated automatically when a document is indexed.
- node
- A running instance of Elasticsearch that belongs to a cluster. Multiple nodes can be started on a single server for testing purposes, but usually you should have one node per server.
- primary shard
Each document is stored in a single primary shard. When you index a document, it is indexed first on the primary shard, then on all replicas of the primary shard.
By default, an index has index API.
- query
A request for information from Elasticsearch. You can think of a query as a question, written in a way Elasticsearch understands. A search consists of one or more queries combined.
There are two types of queries: scoring queries and filters. For more information about query types, see Query and filter context.
- recovery
Shard recovery is the process of syncing a replica shard from a primary shard. Upon completion, the replica shard is available for search.
Recovery automatically occurs during the following processes:
- Node startup or failure. This type of recovery is called a local store recovery.
- Primary shard replication.
- Relocation of a shard to a different node in the same cluster.
- Snapshot restoration.
- reindex
- To cycle through some or all documents in one or more indices, re-writing them into the same or new index in a local or remote cluster. This is most commonly done to update mappings, or to upgrade Elasticsearch between two incompatible index versions.
- remote cluster
- A separate cluster, often in a different data center or locale, that contains indices that can be replicated or searched by the local cluster. The connection to a remote cluster is unidirectional.
-.
- rollover
Redirect an index alias to begin writing to a new index when the existing index reaches a certain size, number of docs, or age.
The new index is automatically configured according to any matching index templates. For example, if you’re indexing log data, you might use rollover to create daily or weekly indices. See the rollover index API.
- rollup
- Summarize high-granularity data into a more compressed format to maintain access to historical data in a cost-effective way.
- rollup index
- A special type of index for storing historical data at reduced granularity. Documents are summarized and indexed into a rollup index by a rollup job.
- rollup job
- A background task that runs continuously to summarize documents in an index and index the summaries into a separate rollup index. The job configuration controls what information is rolled up and how often.
-.
- shrink
Reduce the number of primary shards in an index.
You can shrink an index to reduce its overhead when the request volume drops. For example, you might opt to shrink an index once it is no longer the write index. See the shrink index API.
- snapshot
- A backup taken from a running Elasticsearch cluster. You can take snapshots of individual indices or of the entire cluster.
- snapshot lifecycle policy
- Specifies how frequently to perform automatic backups of a cluster and how long to retain the resulting snapshots.
- snapshot repository
- Specifies where snapshots are to be stored. Snapshots can be written to a shared filesystem or to a remote repository.
-.
- split
- To grow the amount of shards in an index. See the split index API.
- used to represent the type of document, e.g. an
user, or a
tweet. Types are deprecated and are in the process of being removed. See Removal of mapping types.
- warm phase
- The second possible phase in the index lifecycle. In the warm phase, an index is generally optimized for search and no longer updated.
You are looking at preliminary documentation for a future release. Not what you want? See the current release documentation.
|
https://www.elastic.co/guide/en/elasticsearch/reference/master/glossary.html
|
CC-MAIN-2020-24
|
refinedweb
| 1,462
| 57.87
|
12 Things I Wish I Knew About Docker
When learning Docker, there are a lot of subtle details you can miss. But knowing them can be really helpful. I often had full-fledged “aha” moments. After stumbling upon a small snippet of information suddenly things started to make much more sense all of a sudden.
I like to think them as puzzle pieces, which help you figure out what the whole image is supposed to look like and provide you with an overview. Unfortunately, they can feel trivial if you’ve been working with Docker for a while already. But if you don’t happen to know them, they can make all the difference for your learning journey.
Here are 12 things I wish I learned a lot earlier about Docker and containers in general:
- Containers are about portability and resource utilization.
- Containers were not designed as a security containment mechanism from the start, and it shows.
- Containers don’t exist as a first-class object - Linux namespaces and cgroups work together to create “containers”.
- Multiple processes can run in the same “container”, this only means the processes share the same namespaces and cgroup.
- Docker is just one tool of many which you can use to work with containers.
- Docker works three main jobs: packaging apps into images, distributing images and running containers from images.
- Image layers exist to reuse work, transfer less data and save bandwidth.
- Docker is easy to get started with, but the images are too permissive and not correct by default.
- Lots of people use containers badly, and don’t even know it.
- It’s okay to use docker-compose for production workloads running on a single machine.
- Container orchestration, security and building good images take effort and experience. They are complex topics by themselves.
- Sometimes it’s okay to not-use Docker even though you could.
I hope those snippets will help you get a better picture of Docker! They can be hard to find out on your own. Either you gather them while sifting through dozens of online blog posts or you learn them from people who have been using Docker for a while.
If you’d like to read a more thorough list, drop your email address below, and I’ll send you the first chapter of my upcoming book - “Things I Wish I Knew About Docker”, with more useful facts I wish I had learned about earlier in my career.
|
https://vsupalov.com/12-docker-facts/
|
CC-MAIN-2021-31
|
refinedweb
| 408
| 71.65
|
C++ and SDL with Clickable. A little help please.
Here's the source of the main.cpp:
#include <SDL.h>
int main(int argc, char *argv[])
{
SDL_Init(SDL_INIT_VIDEO);
SDL_Window *window = SDL_CreateWindow( "SDL2Test", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 640, 480, 0 );
SDL_SetWindowFullscreen( window, SDL_TRUE );
SDL_Renderer *renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_SOFTWARE);
SDL_SetRenderDrawColor(renderer, 255, 0, 0, SDL_ALPHA_OPAQUE);
SDL_RenderClear(renderer);
SDL_RenderPresent(renderer);
SDL_Delay(5000);
SDL_DestroyWindow(window);
SDL_Quit();
return 0;
}
Some of you may find these demos usable:
However with the Mir upgrade things should become a bit easier as regular Wayland-compiled SDL stuff should start working there. That is if I understand it correctly.
- bhdouglass last edited by
@Craig I just came across this thread and tried out what you mentioned. You'll need to bundle sdl2 with your app to be able to use it (Ubuntu Touch doesn't have it preinstalled). But it looks like @zubozrout 's demo might be a good place to start for that.
@zubozrout thanks for sharing, it works nicely. I've compiled sdl2 also in a project that only requires audio, i will post the source link here whenever i will publish it
for reference, it helps understand about clickable and dependencies :
:(.
|
https://forums.ubports.com/topic/2178/c-and-sdl-with-clickable-a-little-help-please/15
|
CC-MAIN-2019-30
|
refinedweb
| 194
| 54.93
|
I finally finished the deluxe version of the data logger. The software took longer than expected, because of the tight flash memory limit. I created a short video which should give you an impression how you can use the menus on the data logger.
The video is showing the menu driven user interface. It is controlled using a capacitive keypad. Everything is sealed in this case to protect the electronic components from the high humidity.
RUNNING OUT OF FLASH MEMORY
I usually start software in the best way possible and postpone any optimisations to the end of the process. So I created everything in an abstract object oriented way, but quickly had to go back to a procedural approach for all objects which were singletons anyway.
To keep everything as logical and abstract as possible, I used namespaces to encapsulate the separate modules. This lead to a very clean approach while saving all the memory which was used for object pointers in the code.
WORKING ON THE DOCUMENTATION
Currently I am working on a detailed documentation, which will show step by step how to build this deluxe version of the data logger. There will also be a page with some notes and explanations about the software. You can already have a look at the software here.
|
https://luckyresistor.me/2015/10/04/data-logger-deluxe-finished-video/
|
CC-MAIN-2020-16
|
refinedweb
| 216
| 63.09
|
I've been reading many of the posts here for quite a while, and it seems that much of the source posted unneccesarily includes several include files!
Because of the size of the code that is actually included with the headers (sometimes chained to other headers subsequently), it would be wise to minimize the number of inclusions that you make.
For example:
Do not #include <windows.h> unless absolutely necessary. Unless you require the generation of dialog boxes, etc., in a standard Win32 Console Project, you do NOT require it. Furthermore, including this file tends to dramatically increase compile time (about 10 seconds).
One method is to include a minimal number of headers, and then compile your code, only adding them if you are prompted by the compiler.
A further optimization is to create a 'pre-compiled' header, which is a standard header file separate to your source project file. Just #include all the headers you need into it, and then #include "MyHeader.h" in each source file that it's required. Don't forget the '#pragma once' directive, or #ifndef...#define...#endif directives, though to protect inclusion.
Hopefully this will change the practises of a few programmers out there
.....here's to cleaner code!.....here's to cleaner code!
Regards,
Peter Kimberley
|
https://cboard.cprogramming.com/cplusplus-programming/2558-slim-down-inclusions.html
|
CC-MAIN-2017-30
|
refinedweb
| 213
| 59.4
|
What am I suppose to add here to buffer the weird sequence of numbers and letters?
/*------------------------------------------------------------------ * Assignment 4: Breaking the Code. Nested loops, command-line arguments.. *Worth: 50 points. 60 points if you write a "general purpose" program which can process ANY of the "document??.cry" files from the command line. 20 extra credit points if you write a program which attempts to open argv[2] as an input file, and if it fails, or if there is no argv[2], it switches to using stdin. *Made by: Samuel Georgeo (max11) Create a program which attempts to "break" the encoding and reveal what the secrets are that each file contains. -------------------------------------------------------------------*/ #include <iostream> using std::cout; using std::cin; using namespace std; int main (int argc, char *argv[], char **env) { int nulls = atoi (argv[2]); do { int c = cin.get() ; cout.put(c) ; } while ( ! cin.eof()) ; }
|
https://www.daniweb.com/programming/software-development/threads/127959/null-cipher-according-to-c
|
CC-MAIN-2018-43
|
refinedweb
| 145
| 73.27
|
Just installed Tomcat 5.0.19 and it is complaining about some stuff in my
jsp files. After some reading, I determined it had something to do with
"regular" jsp's vs. "xml document" jsp's. Does anyone know how to configure
Tomcat 5 to see my jsp's as "regular" jsp's? Or is there something I need
to add to my jsp's?
The error I get is "The function XXX must be used with a prefix when a
default namespace is not specified".
Thanks,
Brian Barnett
---------------------------------------------------------------------
To unsubscribe, e-mail: taglibs-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: taglibs-user-help@jakarta.apache.org
|
http://mail-archives.us.apache.org/mod_mbox/tomcat-taglibs-user/200405.mbox/%3C42E7C401A4CBCD49829BB53DDD538BA4BFBB0B@chmpgexgmail3.ic.ncs.com%3E
|
CC-MAIN-2020-29
|
refinedweb
| 110
| 69.68
|
Hello everyone, today we are going to build a Guess the Number Game in Python!
How does it work?
Our games will randomly generate a number between 0 and 30 and the player has to guess the number.
If the number entered by the player is less than generated number then the player will be prompted with
too low message!
And if the number entered by the player is more than generated number then the player will be prompted with
too high message!
This process will be repeated until the player finds the right number.
Pretty straightforward right... let's do it!
Let's Code
Now since we are going to generate a random number and if you've been following past tutorials in the series than you might know what we are talking about here. We are going to import a module that comes pre-installed with our Python, it's called
random.
Let's import it into our project.
import random
Now we have to initialize a
max_num variable. So that we can customize the difficulty as per our choice. Higher the value of
max_num, the higher the difficulty. For now, let's keep it 30.
max_num = 30
Now it's time we generate our random number which the player has to find.
For that we will be using
randint() function from the
random module. We will store this random value in
random_number variable.
random_number = random.randint(1, max_num)
It will generate a random number between
1 &
max_num.
Then we will also initialize a
guess variable to store the player's answer for comparison.
guess = 0
Now let's create a while loop to keep asking the player until the right number is found. We will keep this loop running until our
random_number & guess matches.
while guess != random_number: pass
Alright, so it's time we ask our player to make a guess enter a number to start the game.
while guess != random_number: guess = int(input(f"Guess the number between 1 & {max_num}: "))
We will also use
int() function to covert the number from a string into an integer so that we can compare it to check the answer.
Now we will make use of
if conditionals to compare the answer and provide suitable feedback to the player.
while guess != random_number: guess = int(input(f"Guess the number between 1 & {max_num}: ")) if guess < random_number: print("Wrong! Too low...") elif guess > random_number: print("Wrong! Too high...")
Hopefully, the logic of the while loop will be clear as we already discussed it in How does it work? section.
Here are our
while loop ends.
Now one last thing we have to do is to print a final message to the screen if the player got the right answer.
print(f"Thats Right! Random number is {random_number}")
Let's summarize the logic one again:
max_numvariable will decide the difficulty of our game. Higher the value, the higher the difficulty.
- We will use
random.randint(1, max_num)function to generate a random number.
guessvariable will contain the answer entered by the player.
- Now loop will begin and if the number entered by the player matches the generated answer then the loop will no longer execute and the final
print()statement will be printed, telling the player that the game is over.
- Otherwise, the loop will keep running until the right number is entered by the player.)
|
https://dev.to/mindninjax/how-to-build-a-guess-the-number-game-in-python-eff
|
CC-MAIN-2021-17
|
refinedweb
| 563
| 73.68
|
I'm trying to get OpenCV working with Python on my Ubuntu machine. I've downloaded and installed OpenCV, but when I attempt to run the following python code (which should capture images from a webcam and push them to the screen)
import cv
cv.NamedWindow("w1", cv.CV_WINDOW_AUTOSIZE)
capture = cv.CaptureFromCAM(0)
def repeat():
frame = cv.QueryFrame(capture)
cv.ShowImage("w1", frame)
time.sleep(10)
while True:
repeat()
The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or
Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and
pkg-config, then re-run cmake or configure script
sudo make uninstall
make
sudo make install
If it's giving you errors with gtk, try qt.
sudo apt-get install libqt4-dev cmake -D WITH_QT=ON .. make sudo make install
If this doesn't work, there's an easy way out.
sudo apt-get install libopencv-*
This will download all the required dependencies(although it seems that you have all the required libraries installed, but still you could try it once). This will probably install OpenCV 2.3.1 (Ubuntu 12.04). But since you have OpenCV 2.4.3 in
/usr/local/lib include this path in
/etc/ld.so.conf and do
ldconfig. So now whenever you use OpenCV, you'd use the latest version. This is not the best way to do it but if you're still having problems with qt or gtk, try this once. This should work.
|
https://codedump.io/share/8BNWlDjMmwzH/1/opencv-error-the-function-is-not-implemented
|
CC-MAIN-2016-50
|
refinedweb
| 250
| 69.68
|
Re: Sizing Exchange Transaction, SMTP, MTA and Quorum Drives???
- From: "John Fullbright" <fjohn@donotspamenetappdotcom>
- Date: Fri, 3 Aug 2007 12:33:55 -0700
If I get this right,
1. You are assuming .85 IOPS per user
2. You will not be using caching on outlook clients
3. You're assuming a read/write ratio of 3:1
5. In an A/A/A/P cluster, each active node will host 8000 or so mailboxes
6. You're assuming 50% concurrency
7. This is an Exchange 2003 Design
8. You're active user count is above 100%
Take a look at Optimizing Storage for Exchange Server 2003. Start about
page 19. If you're going to assume load instead of measuring (I wouldn't
recoomend that, but just for the sake of arguement), then you need to
understand the impact that scalability will have on that assumed IOPS/user
number. You'll notice that the as the number of users increases you multply
the base IOPS/user number by some offset. This is because the more users
you have on a server the less database cache there is per user. In exchange
2003 with 4GB of RAM and the /3gb switch in the boot.ini, you have something
like 898MB of database cache. If I pile 8000 users on a single EVS, even at
that mythical 50% concurrency(which I have to assume to meet the recommended
max concurrent users for a single node). That gives me about 229K of cache
per active user. That amount of cache isn't even close enough to cache the
11 views for the default folders. If you're going to scale to that level,
client side caching is a must. In your case, you're measuring upwards of
10000 active users. There are many potential reasons for this. To name a
few:
1. Your concurrency assumption is wrong.
2. Users may be connecting to Exchange from more than one device
3. Blackberry, goodlink, etc are essentially an extra mapi session (for BB
that sees is at 3.64 times the load of a normal session by the way)
4. Users are disconnecting and reconnecting multiple times within the
session timeout (20 minutes or so)
Any way you look at it, cache is still usered per session and you're looking
at 10,000 active sessions. From your measurement (not your consultant's
aussumtion; there's a big difference) you'll get more like 91K of cache per
user. You're off the end of the tables in "Optimizing Storage for Exchange
Server 2003" To get to 8000 users, following the MS guidance on storage
group layout, you have 4 storage groups and I'd assume 4 stores in each
storage gorup. At the very minimum, using the table on page 20 you should
add 38% to that .85. The size of the mailboxes is also a factor that could
increase that IOPS/user number further. Using outlook in cached mode shifts
many of the read operations to the client and will help. Typically the
read/write ration drops from about 3:1 to about 2:1 when you go to cached
mode on all the clients. That's a 25% reduction in IO, although all of it
is read reduction. On the extra client connection front, if you're clients
are using desktop search engines then this will shift the IO from the
Exchange server to the desktop where it belongs. Blackberry polls, so it
won't help you there.
When calculating the write penalty for RAID 10, each read requires one
operation and can occur from either side of the mirror, and each write
requires 2 operations, one to each side of the mirror. That's where P*N
and P*N/2 come from. You then apply the read write ratio to determine the
number of composite IOs (mixture of reads and writes at the specified
read/write ratio) an array will support. The difference between this route
and Penalty = (R + W)/(R + 2W) is well... If I have 100 and subtract 20
percent, then I have 80. I can't simply add 20 percent of back to get 100;
I end up with 96 if I try that. The correct way is to add 25% of 80 back to
80 to reach 100 where I started. If I assume 2 spindles at 130 IOPS/spindle
and a 2:1 read write ratio, then with Penalty = (R + W)/(R + 2W) I get
195. If I figure out writes supported and reads supported then apply the
read write ratio, then I get 216.67. In addition, the math for RAID 5 in
the paper you cite neglects to subtract out the parity spindles, skewing the
results. For example, in a RAID5 vraid volume on an EVA, 1 out 6 stripes is
parity. Read/write ratios are another postential pitfall; a small change
can make a big difference in the IO that hits the storage. When dealing
with Netapp storage, it's a whole different set of math. There is no write
penatly which makes things much simpler.
"TonyP" <TonyP@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote in message
news:D7370B76-8263-4A3E-825B-D7FA76C5584C@xxxxxxxxxxxxxxxx
Hi John
I added some comments etc below on the areas you have mentioned I am
unsure
about:
"John Fullbright" wrote:
For shared disks:
1. Put the DTC and the quorum on the same drive.
2. Put the databases for each storage group on a drive (4 dstorage
groups 4
drives). This will give you a granularity of restore at the SG level.
3. Put the transaction logs for each storage group on their own drive (4
storage groups 4 drives).
4. Put the SMTP directories on a drive. It's important for the SMTP
queues
to be on shared disk in 2003. If you leve it local, every time you fail
over you will strand messages. When you failback they will mysteriously
reappear.. maybe days or months later.
5. The MTA run and database directories are on the first database drive
path by default. Unless you have some abnormally high MTA activity
(mixed
mode with 5.5 and this is a bridgehead for example) leave it there.
Ok, you have a theoretical .85 IOPS/user. Measure a test group or a
subset
of your user base. 50% concurrency? The concurrency ratio is a very
common
pitfall. Unless your users are on shifts, in the air and can't access a
computer, etc.., you'll get burned. Along will come the end of the of
the
quarter, everyone will be burning the midnight oil, you'll have 90% of
your
user on at once, performance will dive, and key staff will miss end of
quarter reporting deadlines. Result: you'll be looking for a new job.
Assume 100% unless it's physically impossible. Measure (over a long
period - several quarters - and use the peak value+ 20%) if you'll be
using
anything less. Read/wite ratio is important also.
Concurrency is an Issue, they are assuming 50 percent based on a present
4-node AAAP cluster they have which hosts rougly 25,000 users , 8000 users
per node. They are informing me that only 50 percent are concurrent at any
one time.
I have monitored a serious of Perfmon counters on a montly basis and on a
daily basis also when I look at the MSExchangeIS counter "Active User
Count"
on the busiest node it approaches 10,000 users even though the node holds
only 8000 users.
I am assuming 100 percent concurrency for the Storage Design.
Internal Technical lead in the team is providing input saying it is
MSExchangeIS counter "User Account" which is important showing 50 percent
concurrency?
They are saying some users are showing more than one session to the
information store?
Hence they are saying User Account is a more accurate figure to use then
Active User Account which shows some users have more than one connection?
RAID 10 has a write penalty of 2, so the impact of increases in the
percentage of writes is
amplified. Assume 2:1 in Exchange 2003 with Outlook cached mode clients.
Assume 3:1 if the outlook clients are not cached. Make sure you add in
IOPS
for online maintenace and backups.
Whe you build your arrays, make sure the IOPS/spindle number is @ 20ms
response time or less. Sure, a 15K spindle can reach a maximum IOPS of
300
or so, but the response time at that level can me measured in seconds.
You
want an average response time less than 20 ms with no spikes greater than
50ms lasting more than a few seconds. For 4K random IOs, use the
following:
10K RPM SCSI - 90 IOPS/spindle @ 20ms response time
15K RPM SCSI - 130 IOPS/Spindle @ 20ms response time
7200 RPM SATA - 40 IOPS/Spindle @ 20 ms response time
Where P is the performance of a single spindle, and N is the number of
spindles in the RAID set, for Raid 1/10,
Read performance = P*N
Write performance = P*N/2
So if I have 4 10K SCSI drives in a RAID 10 array,
Read perfformance = 360 IOPS
Write performance = 180 IOPS
Applying a 2:1 read write ratio, the composite performance is
(360+360+180)/3 = 300 IOPS.
NOTE: Just as a comparative reference point, a RAID 5 array with the
same 4
disks would have a composite performance of 201 IOPS; that's why you
don'y
use RAID 5. At a 1:1 read/write ratio, RAID 5 has less than half the
performance of RAID 10, so don't consider it in an Exchange 2007 solution
either.
I am VERY confused on this area about COMPOSITE performance I don't know
the
number of spindles I require YET?
I am trying to work out the number of drives (spindles) required to meet
my
performance needs?
John I used this article before your post and followed out the below:
(number of disks) = (IOPS/mailbox × number of mailboxes) ÷ (IOPS/disk ×
RAID
penalty factor)
Raid 10 Penalty = (R + W)/(R + 2W)
Again since I have no sound statistical data due to latency on the present
4-node cluster I will assume 3 Reads for every 1 Write since Outlook
clients
are not cached
Raid 10 Penalty = ( 3+1 ) / ( 3+2(1) )
Raid 10 Penalty = 0.8
Hence
So each Storage group which host 1875 users will need
= IOPS/mailbox * number of mailboxes
= 0.84 * 1875
= 1575 IOPS
Recommended to handle spikes we add a 20 percent buffer to the storage
design to handle these peaks:
Peak Storage Group DB IOPS
= 1575 * 20%
= 1890
Now standard calculation:
(Number of disks) = (IOPS/mailbox * number of mailboxes) / (IOPS/disk at
20ms * RAID penalty factor)
Number of disks = 1890 / 130 * 0.8
Number of disks = 1890 / 104
Number of disks = 18.18
Since we are using Raid 10 we must round up to the nearest even number.
Number of disks = 20
Thus
Number of disks required per Storage Group to host 1875 users is 20 15K
RPM
SCSI Drive in Raid 10
Database Storage Group size = number of users * mailbox size
Client has defined the mailbox size to 180MB.
Hence
Database Size = 1875 * 180 MB
= 329.59 GB
So each Database Storage Group is required to be no bigger than 330 GB
Disks are 146GB in size recommended by HP
Total Storage generated to accommodate our Performance for the Storage
Group
comes to
Total Storage for Performance = 10 disk * 146 GB
= 1460 GB
Note: 20 Disk in Raid 10 to meet IOPS requirement, hence 10 discs
available
for storage
Our previous Capacity figure above suggests we only need 330 GB per
Storage
Group.
But assigning 1460 GB for each LUN using traditional Storage methods leads
to a huge waste in disk storage space.
Virtualized Storage techniques we can create 4 LUNS from our 10 Disks:
= 1460 GB / 4
= 365 GB - size of each LUN
Hence we meet our Capacity requirement since each LUN created is greater
than 330 GB and also we effectively use our 10 disks more efficiently.
Previous figures we calculated for performance related IOPS was based on
all
physical spindles within each LUN created dedicated to the Storage Group.
Since the physical spindles are NOT now dedicated to a single LUN but are
shared amongst 4 LUNs is a loss in Performance IOPS?
So is each LUN carved from the array set now is not giving me the
performance I defined for 1875 users - 1890 IOPS??
Is this a case of Comingling where IO against one LUN negatively impacts
the
performance of other LUNs that share the same physical spindles?
I am now seriously concerned about my reasoning since you have talked
about
"P is the performance of a single spindle, and N is the number of spindles
in
the RAID set" and working out composite performance etc???
To determine the IOPS for the transaction logs, you divide the database
IOPS
by anywhere from 8 to 12, with 10 being common. I tend to use the 8
figure
to stay on the conservative side. The size of the log lun is another
story.
What is your average 24 hour change delta. What is the peak? If you
collect
change delta information over a period of time, what is the mean and the
stadard deviation of the dataset?
How are you collecting this data? Perfmon counters?
How often do you do a full backup and trucante the logs? What is your
backup failure tolerance in days (how long
should the system stay operational if backups starts failing? Generally
4-7
days to cover a long weekend and troubleshooting)? What level of
reliability is required? The answers to these questions will tell you
how
large to make the log LUNs. For eaxmple, let's assume:
1. Mean change delta 9GB
2. SD of sample set 1GB
3. Backup failure tolerance 7 days.
4. 99.9% repliability
We start with the mean change delta, then add enough standard deviations
to
reach or exceed the required level of reliability (3 in this case), so
our
Change delta size is 9+(1*3) = 12GB. Now, we take this figure and
multiply
by our backup failure tolerance and our LUN is 84GB. I can say with
99.97559% accuracy that an 84GB LUN will withstand 7 consecutive days of
backup failure before the drive fills and the stores dismount.
You can take a similar statistical approach to sizing the SMTP LUN; take
a
sample set of max size of a long collection period.
How are you collecting this data? Perfmon counters?
Figure out the mean and SD, the add enough standard deviations to the
mean to reach the desired
level of reliability. A lot of folks don't bother, and just allocate an
overly large disk (50 - 100GB) to cover normal traffic and any potential
loops/chain mails/store outages/etc.. without going offline. I believe
Optimizing Storage for Exchange Server 2003 says 500 IOPS, however, I
would
measure. The number of IOs depends on the number of messages, message
sizes, destination, retries, etc. On average, the categorizer touches
an
eml file in the queue directory 7 times.
"TonyP" <TonyP@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote in message
news:3A3EFF63-5E9C-4E58-996F-CF2137F7D4FB@xxxxxxxxxxxxxxxx
Hi
Currently trying to design a 3 node cluster comprising of 2 Active
nodes
and
1 Passive node. Exchange 2003 environment
Will have 4 Storage Groups per Node which will have there only
dedicated
drives.
Transaction logs for each transaction drive will also have there own
drive
letters
As will SMTP , MTA and Quorum drives all on separate drive letters will
LUNs
carved out of the SAN
Used a theoretical value for IOPS per user as 0.85 and user mailbox
limits
where decided as 180 MB.
Each Storage Group will hold 1875 users, did the standard calculation
to
to
work out size for each Database drives. Hence each node holds 7500
users
but
there is only 50 percent concurrency.
Database Drives are in RAID 10
Transaction drives where taken as 1/10 of IOPS requirement of Database
Drives , will also be in RAID 10
How do you determine a safe size for the Transaction Log drives?
Also what is the standard calculation to work out
SMTP drive size?
MTA drive size?
Quorum size ?
Would you use Raid 1 for the SMTP, MTA and Quorum? what about IOPS for
these
also?
help greatly appreciated as always
Tony
.
- References:
- Prev by Date: Re: Cross forest migration using shared SMTP namespace
- Next by Date: Re: Exchange 2003 + SAN + Raid5 or 10
- Previous by thread: Re: Sizing Exchange Transaction, SMTP, MTA and Quorum Drives???
- Next by thread: Exchange Clients
- Index(es):
|
http://www.tech-archive.net/Archive/Exchange/microsoft.public.exchange.design/2007-08/msg00005.html
|
crawl-002
|
refinedweb
| 2,784
| 70.43
|
Me and my wife had some interesting conversations on Object Oriented Design principles. After publishing the conversation on CodeProject, I got some good responses from the community and that really inspired me. So, I am happy to share our next conversation that took place on Object Oriented Design Patterns. Here it is.
Shubho: I guess you already have some basic idea about Object Oriented Design principles. We had a nice talk on the OOD principles (SOLID principles), and I hope you didn't mind that I published our conversation in a CodeProject article. You can find it here: How I explained OOD to my wife.
Design Patterns are nothing but applications of those principles in some specific and common situations, and standardizing some of those. Let's try to understand what Design Patterns are by using some examples.
Farhana: Sure, I love examples.
Shubho: Let's talk about our car. It's an object, though a complex one, which consists of thousands of other objects such as the engine, wheels, steering, seats, body, and thousands of different parts and machinery.
While building this car, the manufacturer gathered and assembled all the different smaller parts that are subsystems of the car. These different smaller parts are also some complex objects, and some other manufacturers had to build and assemble those too. But, while building the car, the car company doesn't really bother too much about how those objects were built and assembled (well, as long as they are sure about the quality of these smaller objects/equipments). Rather, the car manufacturer cares about how to assemble those different objects into different combinations and produce different models of cars.
Farhana: The car manufacturer company must have some designs or blue prints for each different model of car which they follow, right?
Shubho: Definitely, and, these designs are well-thought designs, and they've put a good amount of time and effort to sketch those designs. Once the designs are finalized, producing a car is just a matter of following the designs.
Farhana: Hm.. it's good to have some good designs upfront and following those allows to produce different products in a quick amount of time, and each time the manufacturer has to build a product for a specific model, they don't have to develop a design from scratch or re-invent the wheel, they just follow the designs.
Shubho: You got the point. Now, we are software manufacturers and we build different kinds of software programs with different components or functionality based upon the requirements. While building such different software systems, we often have to develop code for some situations that are common in many different software systems, right?
Farhana: Yes. And often, we face common design problems while developing different software applications.
Shubho: We try to develop our software applications in an Object Oriented manner and try to apply OOD principles for achieving code that is manageable, reusable, and expandable. Wouldn't it be nice whenever we see such design problems, we have a pool of some carefully made and well tested designs of objects for solving those?
Farhana: Yes, that would save us time and would also allow us to build better software systems and manage them later.
Shubho: Perfect. The good news is, you don't have to really develop that pool of object designs from scratch. People already have gone through similar design problems for years, and they already have identified some good design solutions which have been standardized already. We call these Design Patterns.
We must thank the Gang of Four (GoF) for identifying the 23 basic Design Patterns in their book Design Patterns: Elements of Reusable Object-Oriented Software. In case you are wondering who formed this famous gang, they are Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. There are many Object Oriented Design Patterns, but these 23 patterns are generally considered the foundation for all other Design Patterns.
Farhana: Can I create a new pattern? Is that possible?
Shubho: Yes darling, why not? Design Patterns are not something invented or newly created by scientists. They are just discovered. That means, for each kind of common problem scenario, there must be some good design solutions there. If we are able to identify an object oriented design that could solve a new design related problem, that would be a new Design Pattern defined by us. Who knows? If we discover some a Design Pattern, someday people may call us Gang of Two.. Ha ha.
Fahana:
Shubho: As I have always believed, examples are the greatest way of learning. In our learning approach, we won't discuss the theories first and implement later. I think this is a BAD approach. Design Patterns were not invented based on theories. Rather, the problem situations occurred first and based upon the requirement and context, some design solutions were evolved, and later some of them were standardized as patterns. So, for each design pattern we discuss, we will try to understand and analyze some real life example problems, and then we will try to formulate a design in a step by step process and end up with a design that will match with some patterns; Design Patterns were discovered in this same process. What do you think?
Farhana: I think this approach makes more sense to me. If I can end up with Design Patterns by analyzing problems and formulating solutions, I won't have to memorize design diagrams and definitions. Please proceed using your approach.
Shubho: Let's consider the following scenario:
Our room has some electric equipments (lights, fans etc). The equipments are arranged in a way where they could be controlled by switches. At any time, you can replace or troubleshoot an electrical equipment without touching the other things. For example, you can replace a light with another without replacing or changing the switch. Also, you can replace a switch or troubleshoot it without touching or changing the corresponding light or fan; you can even connect the light with the fan's switch and connect the fan with the light's switch, without touching the switches.
Farhana: Yes, but that's natural, right?
Shubho: Yes, that's very natural, and that's how the arrangement should be. When different things are connected together, they should be connected in a way where change or replacement of one system doesn't affect another, or even if there is any effect, it stays minimal. This allows you to manage your system easily and at low cost. Just imagine if changing the light in your room requires you to change the switch also. Would you care to purchase and set up such a system in your house?
Farhana: Definitely no.
Shubho: Now, let's think how the lights or fans are connected with the switches so that changing one doesn't have any impact on the other. What do you think?
Farhana: The wire, of course!
Shubho: Perfect. It's the wire and the electrical arrangement that connect the lights/fans with the switches. We can generalize it as a bridge between the different systems that can get connected through it. The basic idea is, things shouldn't be directly connected with one another. Rather, they should be connected though some bridges or interfaces. That's what we call "loose coupling" in software world.
Farhana: I see. I got the idea.
Shubho: Now, let's try to understand some key issues in the light/fan and switch analogy, and try to understand how they are designed and connected.
Farhana: OK, let me try.
We have switches in our example. There may be some specific kinds of switches like normal switches, fancy ones, but, in general, they are switches. And, each switch can be turned on and off.
So, we will have a base Switch class as follows:
Switch
public class Switch
{
public void On()
{
//Switch has an on button
}
public void Off()
{
//Switch has an off button
}
}
And, as we may have some specific kinds of switches, for example a fancy switch, a normal switch etc., we will also have FancySwitch and NormalSwitch classes extending the Switch class:
FancySwitch
NormalSwitch
public class NormalSwitch : Switch
{
}
public class FancySwitch : Switch
{
}
These two specific switch classes may have their own specific features and behaviours, but for now, let's keep them simple.
Shubho: Cool. Now, what about fan and light?
Farhana: Let me try. I learned from the Open Closed principles from Object Oriented Design principles that we should try to do abstractions whenever possible, right?
Shubho: Right.
Farhana: Unlike switches, fan and light are two different things. For switches, we were able to use a base Switch class, but as fan and light are two different things, instead of defining a base class, an interface might be more appropriate. In general, they are all electrical equipments. So, we can define an interface, say, IElectricalEquipment, for abstracting fans and lights, right?
IElectricalEquipment
Farhana: OK, each electrical equipment has some common functionality. They could all be turned on or off. So the interface may be as follows:
public interface IElectricalEquipment
{
void PowerOn(); //Each electrical equipment can be turned on
void PowerOff(); //Each electrical equipment can be turned off
}
Shubho: Great. You are getting good at abstracting things. Now, we need a bridge. In real world, the wires are the bridges. But, in our object design, a switch knows how to turn on or off an electrical equipment, and the electrical equipment somehow needs to be connected with the switches, As we don't have any wire here, the only way to let the electrical equipment be connected with the switch is encapsulation.
Farhana: Yes, but switches don't know the fans or lights directly. A switch actually knows about an electrical equipment IElectricalEquipment that it can turn on or off. So, that means, an ISwitch should have an IElectricalEquipment instance, right?
ISwitch
Shubho: Right. Here, the encapsulated instance, which is an abstraction of fan or light (IElectricalEquipment) is the bridge. So, let's modify the Switch class to encapsulate an electrical equipment:
public class Switch
{
public IElectricalEquipment equipment
{
get;
set;
}
public void On()
{
//Switch has an on button
}
public void Off()
{
//Switch has an off button
}
}
Farhana: Understood. Let me try to define the actual electrical equipments, the fan and the light. As I see, these are electrical equipments in general, so these would simply implement the IElectricalEquipment interface.
Following is the Fan class:
Fan
public class Fan : IElectricalEquipment
{
public void PowerOn()
{
Console.WriteLine("Fan is on");
}
public void PowerOff()
{
Console.WriteLine("Fan is off");
}
}
And, the Fan class would be as follows:
public class Light : IElectricalEquipment
{
public void PowerOn()
{
Console.WriteLine("Light is on");
}
public void PowerOff()
{
Console.WriteLine("Light is off");
}
}
Shubho: Great. Now, let's make switches work. The switches should have the ability inside them to turn on and turn off the electrical equipment (it is connected to) when the switch is turned on and off.
These are the key issues:
Basically, following is what we want to achieve:
static void Main(string[] args)
{
//We have some electrical equipments, say Fan, Light etc.
//So, lets create them first.
IElectricalEquipment fan = new Fan();
IElectricalEquipment light = new Light();
//We also have some switches. Lets create them too.
Switch fancySwitch = new FancySwitch();
Switch normalSwitch = new NormalSwitch();
//Lets connect the Fan to the fancy switch
fancySwitch.equipment = fan;
//As the switch now has an equipment (Fan),
//so switching on or off should
//turn on or off the electrical equipment
fancySwitch.On(); //It should turn on the Fan.
//so, inside the On() method of Switch,
//we must turn on the electrical equipment.
//It should turn off the Fan. So, inside the On() method of
fancySwitch.Off();
//Switch, we must turn off the electrical equipment
//Now, lets plug the light to the fancy switch
fancySwitch.equipment = light;
fancySwitch.On(); //It should turn on the Light now
fancySwitch.Off(); //It should be turn off the Light now
}
Farhana: I got it. So, the On() method of the actual switches should internally call the TurnOn() method of the electrical equipment, and the Off() should call the TurnOff() method on the equipment. So, the Switch class should be as follows:
On()
TurnOn()
Off()
TurnOff()
public class Switch
{
public void On()
{
Console.WriteLine("Switch on the equipment");
equipment.PowerOn();
}
public void Off()
{
Console.WriteLine("Switch off the equipment");
equipment.PowerOff();
}
}
Shubho: Great work. Now, this certainly allows you to plug a fan from one switch to another. But you see, the opposite should also work. That means, you can change the switch of a fan or light without touching the fan or light. For example, you can easily change the switch of the light from FancySwitch to NormalSwitch as follows:
normalSwitch .equipment = light;
normalSwitch.On(); //It should turn on the Light now
normalSwitch.Off(); //It should be turn off the Light now
So, you see, you can vary both the switches and the electrical equipments without any effect on the other, and connecting an abstraction of the electrical equipment with a switch (via encapsulation) is letting you do that. This design looks elegant and good. The Gang of Four has named this a pattern: The Bridge Pattern.
Farhana: Cool. I think I've understood the idea. Basically, two systems shouldn't be connected or dependent on another directly. Rather, they should be connected or dependent via abstraction (as the Dependency Inversion principle and the Open-Closed principle say) so that they are loosely coupled, and thus we are able to change our implementation when required without much effect on the other part of the system.
Shubho: You got it perfect darling. Let's see how the Bridge Pattern is defined:
You will see that our design perfectly matches the definition. If you have a class designer (in Visual Studio, you can do that, and other modern IDEs should also support this feature), you will see that you have a class diagram similar to the following:
Here, Abstraction is the base Switch class. RefinedAbstraction is the specific switch classes (FancySwitch, NormalSwitch etc.). Implementor is the IElectricalEquipment interface. ConcreteImplementorA and ConcreteImplementorB are the Fan and Light classes.
Light
Farhana: Let me ask you a question, just curious. There are many other patterns as you said, why did you start with the Bridge pattern? Any important reason?
Shubho: A very good question. Yes, I started with the Bridge pattern and not any other pattern (unlike many others) because of a reason. I believe the Bridge pattern is the base of all Object Oriented Design Patterns. You see:
Farhana: Do you think I have understood it correctly?
Shubho: I think you have understood it perfectly darling.
Farhana: So, what's next?
Shubho: By understanding the Bridge pattern, we have just started to understand the concepts of Design Patterns. In our next conversation, we would learn other Design Patterns, and I hope you won't get bored learning them.
Farhana: I won't. Believe me.
Watch out for our next.
|
http://www.codeproject.com/Articles/98598/How-I-explained-Design-Patterns-to-my-wife-Part?msg=4512523
|
CC-MAIN-2015-14
|
refinedweb
| 2,497
| 64.51
|
Hi Jean,
> $ find linux-2.6.26-rc1 -name Kconfig | wc -l
> 455
> $ find linux-2.6.26-rc1 -name Makefile | wc -l
> 1030
> $
Well, these are not pieces of code and serve a different purpose, don't
they?
> Not to mention the 102 setup.c, 87 irq.c, 62 time.c... It is very
> common to have duplicated file names in the kernel tree because it
> supports so many architectures and platforms. In general, when you work
Well, that is not a technical argument. It is a fact of life, sure, but
it does not necessarily mean it is right, but perhaps that nobody has
really thought about it.
> on a given architecture or platform, names become unique again. Taking
> GDB as an example again, you definitely know what architecture you are
> debugging, so there should be relatively little ambiguity on what files
> are involved.
Hmm, why to have little ambiguity, when you can have none? We do not
rely on crippled filesystems, so we do not have to save characters in file
names -- we keep them reasonably short anyway. I say there is no
technical advantage in having duplicate file names throughout the tree
(please name one if I am wrong) and there are advantages -- however small,
but still -- in keeping file names unique. Therefore the gain from
converting the existing file names may not justify the effort required,
but it does not mean new additions may not take the gain into account?
> (On top of that, I'd argue that we _should_ be able to display relative
> paths to file names when debugging.)
Human's perception is limited -- GDB's `info frame' is probably already
overloaded with information, so adding the path to the source file in
question will not make it any better.
> Your point about the "single program namespace" is certainly valid for
> small to medium-size programs, but in the case of something as big as
> the kernel, it probably no longer holds.
I think it is actually the reverse -- the bigger a project is, the easier
to get lost within. ;-) With small programs it is easier to maintain,
while with bigger ones it is really where it pays off.
> I don't have a strong opinion on this either, it is very unlikely that
> I'll ever have to deal with this file personally. I'm only telling you
> what the common practice is in the kernel tree.
I don't think this practice has been architected and see above for
justification why it may not necessarily be the cleverest idea. :-)
Maciej
|
http://www.linux-mips.org/archives/linux-mips/2008-05/msg00129.html
|
CC-MAIN-2014-42
|
refinedweb
| 429
| 70.33
|
Simple desktop integration for Python.
Project description
This is a python 3 port of desktop package.
The desktop package provides desktop environment detection and resource opening support for a selection of common and standardised desktop environments.
Insallation
pip install desktop3
Usage
Launch folders, files, … etc:
import desktop desktop.open("what/you/want/to/open")
Introduction
The desktop package provides desktop environment detection and resource opening support for a selection of common and standardised desktop environments.
Currently, in Python’s standard library, there is apparently no coherent, cross-platform way of getting the user’s environment to “open” files or resources (showing such files in browsers or editors, for example). It is this kind of functionality that the desktop package aims to support. Note that this approach is arguably better than that employed by the webbrowser module since most desktop environments already provide mechanisms for configuring and choosing the user’s preferred programs for various activities, whereas the webbrowser module makes relatively uninformed guesses (for example, opening Firefox on a KDE desktop configured to use Konqueror as the default browser).
Some ideas for desktop detection (XFCE) and URL opening (XFCE, X11) were obtained from the xdg-utils project which seeks to implement programs performing similar functions to those found in the desktop module. The xdg-utils project can be found here:
Other information regarding desktop icons and menus, screensavers and MIME configuration can also be found in xdg-utils.
Contact, Copyright and Licence Information
No Web page has yet been made available for this work, but the author can be contacted at the following e-mail address:
Copyright and licence information can be found in the docs directory - see docs/COPYING.txt, docs/lgpl-3.0.txt and docs/gpl-3.0.txt for more information.
Notes
Notes on desktop application/environment support:
Changelog
- 0.5.2 (Oct 26, 2016)
- Add docs folder into dist file.
- 0.5.1 (Oct 25, 2016)
- Fix letter case issue in setup.py.
- 0.5.0 (Jul 2, 2016)
- First release.
Old Changelog
New in desktop 0.4.3 (Changes since desktop 0.4.2)
- Added missing KDE 4 support to the desktop.dialog module.
New in desktop 0.4.2 (Changes since desktop 0.4.1)
- Added XFCE 4.10 “mailto:” fix contributed by Jérôme Laheurte.
New in desktop 0.4.1 (Changes since desktop 0.4)
- Added KDE 4 and Lubuntu support contributed by Jérôme Laheurte.
New in desktop 0.4 (Changes since desktop 0.3)
- Improved docstrings.
- Fixed support for examining the root window.
- Changed the licence to the LGPL version 3 (or later).
New in desktop 0.3 (Changes since desktop 0.2.4)
- Made desktop a package.
- Added support for graphical dialogue boxes through programs such as kdialog, zenity and Xdialog.
- Added support for inspecting desktop windows (currently only for X11).
New in desktop 0.2.4 (Changes since desktop 0.2.3)
- Added XFCE support (with advice from Miki Tebeka).
- Added Ubuntu Feisty (7.04) package support.
New in desktop 0.2.3 (Changes since desktop 0.2.2)
- Added Python 2.3 support (using popen2 instead of subprocess).
New in desktop 0.2.2 (Changes since desktop 0.2.1)
- Changed the licence to LGPL.
New in desktop 0.2.1 (Changes since desktop 0.2)
- Added Debian/Ubuntu package support.
New in desktop 0.2 (Changes since desktop 0.1)
- Added support for waiting for launcher processes.
- Added a tests directory.
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|
https://pypi.org/project/desktop3/
|
CC-MAIN-2018-47
|
refinedweb
| 601
| 62.24
|
This project is a screen saver application that I originally started as a way to pick up C++/MFC. That was a few years back, and in that time I’ve gone back to it a number of times to add new features and try new things. It’s made the rounds of my family’s and friend’s computers and, since they all seem to like it, I thought I’d put it out here on CodeProject in the hopes that you all may enjoy it as well.
Back in 2001, after the Internet bubble burst, the two man consulting company I was part of found itself suddenly without clients or cash. So it was back to working for the man for the both of us. My former business partner found a job before I did and was kind enough to brow-beat his new boss into giving me an interview as well.
At the time I had done mostly VB6 programming, and during the interview it became quickly apparent that they were looking for an MFC developer. I had done some C++ programming and did as well as I could under the circumstances but came home certain that I wouldn’t get the job.
Well, as they say, necessity is the mother of invention. That, plus a quickly dwindling bank account and plenty of free time makes for a great motivator, so I thought I’d better do something quick to prove that I was the person for the job.
At this same time, my dad sent me a cool web page that used some pretty impressive JavaScript in order to create an animated clock that tracked the user’s mouse as it moved around the page. He wanted to know if there was any way to turn it into a screen saver.
I decided to implement the same thing in MFC, and send it off to the guys who had interviewed me in the hopes of changing their minds.
After a week of nearly round the clock (pun intended) study and development, I had it working to the point that I figured it was ready to go. Literally, as I was writing the email to the head of the department where I had been interviewed, explaining the application and my motivation for writing it, he called me on the phone. A change of direction on the project, and some additional prodding from my associate lead them to hire me!
So I never did send him this application, but my dad enjoyed it and I learned a lot in that short time.
In the years since, I’ve added some features to it like Outlook Calendar and MAPI support for signaling when new mail has arrived while the screen saver is active. So now, it’s actually a nice little program, and kind of a fun screen saver to boot.
Because the Outlook integration requires some MS Office type libraries, and I’m sure it’s not legal to redistribute those, I have conditional compilation statements around those parts. If you do want to compile the Outlook integration code, you’ll need a copy of MSOUTL.OLB and MSO9.DLL. Define MS_OFFICE_INTEGRATION in stdafx.h, make sure the two Office typelibs are in the right place, and it should compile.
MS_OFFICE_INTEGRATION
The version installed by the downloadable MSI (above) includes the Outlook integration functionality.
The MAPI mail notification should work with any MAPI compliant email client, but I’ve only tested it with Outlook Express, so your mileage may vary.
Once you install ClockSaver.scr, it will show up in your list of available screen savers as ClockSaver.
I don’t know to whom to attribute the original JavaScript clock as the HTML page that I got from my dad did not have any information about the author.
The screen saver base class came from an article by chensu, posted on CodeGuru.com. (Give me break, I hadn’t heard about CodeProject yet).
One of the most interesting things about working on this project over such a long period, is that a lot of the code comes from CodeProject, which I think is a testament to how valuable a resource this site is!
This project uses:
The code to support multiple monitors I eventually turned into some reusable classes, so it also uses MFC Classes for Multiple Monitors.
AnimateWindow
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
10 PRINT "Don is cool"
20 GOTO 10
SetText( m_Date.Format( _T( "%#x" ) ) );
#include <locale.h>.
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
|
http://www.codeproject.com/Articles/7291/Clock-Screen-Saver?fid=57458&df=90&mpp=10&sort=Position&spc=None&tid=866362
|
CC-MAIN-2017-17
|
refinedweb
| 796
| 66.88
|
Using Computer Vision to Calculate a 1 rep max Part IV: The Resurrection
For all of my previous work on this subject, see:
- Using Computer Vision to Calculate a One Rep Max
- Using Computer Vision to Calculate a One Rep Max Part II: The Reckoning
- How to Install OpenCV on Heroku
- Using Computer Vision to Calculate a One Rep Max Part III: The Redemption
You can also see my generic solution to motion detection on a shaky video at Motion Detection on a Shaky Video with OpenCV + Python
To catch up any stragglers on the internets, I’ve been working on barbell detection in a bench press video in order to calculate a one rep max. It works well, but the problem is that input videos need to be fairly sterile to work, and a lot of assumptions are made in order to process the video in a reasonable amount of time (i.e. it’s assumed that the barbell passes through the center of the image, but that might not actually be true). Additionally, it takes about a full hour to process 20 seconds of video, which, if I’d like to make this processor available to all of the meat heads on the internets at scale, it should really be faster.
I had some success in isolating a moving object in a video in my last post, but now I want to apply that specifically to the one rep max calculator I’ve been working on. Here’s where I left off from my last post:
If I have an input video like this:
I can output a frame representing the aggregated motion in the image like this:
I can aggregate all of the motion across the entire video into a single frame and filter out most of the noise resulting from shakiness in the process. The goal is to find the width of the barbell using motion, and since the barbell is moving strictly vertically, we can try to detect its width by finding the clear collection of columns that represent that bar. If we collapse the columns into a single row, we get something like this:
As it stands, the row looks pretty noisy. If we can filter out the motion not associated with the barbell because of the rows they’re associated with, we should be able to get much better results. So from here, I wrote a heuristic to detect which row the barbell’s motion appeared most in. You could use a simple algorithm and find the row with the most motion, but greedy algorithms with such simplicity might be prone to clear outliers.
My solution was to find the row that had a lot of motion and one where the differential of the pixels in that row were pretty consistent. In this way, if you imagine a solid white line of pixels, the differential from column to column is 0, and this is what an ideal barbell presumably looks like. However, a solid black line representing no motion will have that same “good” value of consistently low differentials. So in order to maximize the motion and minimize the differential to try to find long lengths of white pixels, I used the formula:
(pixel_sum * pixel_sum / standard_deviation(derivative(row_array)))
The python code looks like this:
def _get_best_object_motion_row(self, motion_matrix): rows, cols = motion_matrix.shape[0: 2] row_scores = np.zeros(rows) for row_index in xrange(rows): row_array = motion_matrix[row_index, :] motion_score = np.sum(row_array) ** 2 differential_score = np.std(np.abs(np.diff(row_array.astype(int)))) row_scores[row_index] = motion_score / differential_score return row_scores.argmax()
And when we then collapse the best row and its immediate surrounding rows we get a cleaner looking image (it’s subtle, but compare this to the above image):
Now you can see the motion that likely corresponds to a barbell a little more clear. To explain it with pictures, if we have the above image as input, to find the barbell we want to do something to produce the below output:
This isn’t an immediately intuitive problem because data can easily change from input to input, and the same thresholds won’t apply across the board. In order to examine the problem more closely, I graphed the resultant row so I could find a better way to prune the data. When I graphed the values in the row, the results looked like this:
The graph illustrates the values in the row more clearly than a grayscale image. If you were to look at this, you could probably intuitive predict which set of points corresponds to the barbell, but this needs to be translated into computer instructions. After a bit of thinking, if I jump ahead a bit, I developed an algorithm that will estimate a barbell width and produce a width that can be graphed as follows:
The algorithm here was to iterate over all possible x offsets and all possible bar widths, find the minimum of those collections of points, and multiply it by the bar width. So we’re effectively trying to find a rectangle that fits inside the graph that maximizes its area without crossing any lines. The code for that algorithm looks like this:
def _find_width(self, motion_by_column): ''' Find the best combo of x_offset, width where width * max(min(combination)) is greatest ''' min_pixel_width = int(len(motion_by_column) * self.MIN_BAR_AS_PERCENT_OF_SCREEN) best_score = 0 best_width = min_pixel_width best_x_offset = 0 for x_offset in xrange(min_pixel_width): for bar_width in xrange(min_pixel_width, len(motion_by_column)): if x_offset + bar_width >= len(motion_by_column): continue y_values = motion_by_column[x_offset: x_offset + bar_width] min_val = np.min(y_values) score = min_val * bar_width if score > best_score: best_score = score best_x_offset = x_offset best_width = bar_width return best_x_offset, best_width
This worked really well, but when I ran the algorithm across some of my input videos, you can see that it runs into problems. It’s hard to see, but in the below video, the width of the barbell is too short. It stops before it gets to the right 45 lb plate:
If we examine all the data associated with the data, it’s a little more clear what’s happening:
In the above picture in the top left you can see the aggregated motion. In the bottom left you can see the motion detection collapsed to one column. And on the right side you can see how the barbell width was found. In this case, the input video is really clean with no shakiness, and the barbell should take up the width of the start and stop of the line in the graph.
The problem is that there was a small range of pixels in which very little motion was detected because of the colors of the pixels in the image. If I apply a gaussian blur to the image, we might be able to overcome that gap. So I applied a line smoothing algorithm. The code for that is here:
def smooth_list_gaussian(input_list, degree=8): window = degree * 2 - 1 weight = np.array([1.0] * window) weightGauss = [] for i in range(window): i = i - degree + 1 frac = i / float(window) gauss = 1 / (np.exp((4 * (frac)) ** 2)) weightGauss.append(gauss) weight = np.array(weightGauss) * weight smoothed = [0.0] * (len(input_list) - window) for i in range(len(smoothed)): smoothed[i] = sum(np.array(input_list[i: i + window]) * weight) / sum(weight) smoothed = [0 for _ in xrange(degree)] + smoothed return smoothed
Now if I apply the barbell detection to the smoothed line, this is what the results look like (note that it still fails):
In this case, the algorithm just didn’t work, but here I’d rather just go ahead and call the input bad rather than mess up the processing of all the other videos.
Final Algorithm
The above process is pretty much encapsulated in one method that should make the whole process fairly straightforward:
def find_barbell_width(self): probable_barbell_row = self._get_best_object_motion_row(self.union_frame) cropped_aggregate_motion = self._get_cropped_matrix(self.union_frame, probable_barbell_row) displayable_frame = self._make_frame_displayable(cropped_aggregate_motion) motion_by_column = self._collapse_motion_to_one_row(displayable_frame) smoothed_motion_by_column = smooth_list_gaussian(motion_by_column) x_offset, bar_width = self._find_width(smoothed_motion_by_column)
Results
The algorithm now is drastically faster. Rather than one hour of processing for 20 seconds, I’m down to about 30 seconds. The added simplicity also allowed me to take a chainsaw to my existing barbell detection algorithm and delete about 100 lines of complicated thresholds and otherwise poorly written functions.
|
http://scottlobdell.me/2014/12/using-computer-vision-calculate-1-rep-max-part-iv-resurrection/
|
CC-MAIN-2017-47
|
refinedweb
| 1,376
| 54.86
|
There isnt a great deal of people who are seemingly interested in this event which for me is quite worrying,
Are there people out there who are going but as yet not mentioned anything on here?
We need ideas on how to drum up interest..
I want to meet as many of the UK spiceheads down there as possible, if anyone is thinking of going and wants a lift from near Manchester then give me a shout xD
Dom
9 Replies
May 19, 2009 at 3:50 UTC
For me, it's cost. Working in Education, I can't sell this as the best use of the training budget.
I'm afraid you're up against BETT, The Education Show, TSL etc. and it's a hard sell to the Head when competing against names that large in Education.
Essentially, I'd have to lie to the Head to get money to come down. Sorry folks!
Jun 16, 2009 at 7:22 UTC
akp982 is an IT service provider.
RAWR :-)
Im here....
Ive just asked MyShell if we can host another meet up in a pub or something, any ideas? Maybe more central than in london? (plus people in london are the ones with the money...)
Jul 15, 2009 at 4:42 UTC
We have a branch office in london - make it a weekday and I might be able to make it (assuming that it doesn't all go pete tong at the branch of course!)
Jul 21, 2009 at 6:07 UTC
The cost is preventing me from going. Due to the economic climate, the company is losing money and has cut back on spending.
Also, unless a course is legally required (H&S), or is proven to have a monetary return (increased sales or reduced spending), there is no chance of getting it paid for by the company.
Jul 21, 2009 at 6:27 UTC
akp982 is an IT service provider.
ivanidea this topic is about SpiceCorps which are free :-)
The first ones we did this year we did on a saturday
Jul 22, 2009 at 7:32 UTC
I must apologise for getting the events mixed up.
Still, London is too far away for me.
Jul 22, 2009 at 8:03 UTC
akp982 is an IT service provider.
Im looking at doing one further up north just not done anything about it yet
Jul 23, 2009 at 1:24 UTC
How far north do you want to go, Andy? I can help organise stuff if somewhere in my area.
Jul 24, 2009 at 3:15 UTC
akp982 is an IT service provider.
Im just waiting to here from DomUK to make sure he doesnt mind me taking over the SpiceCorps thing up that way then i need to decided...
When Dom says its ok (i just dont want to tread on toes etc as he ran the first up there) ill post two topics one about one down here which will be hosted in my office (marlow) and another for up there asking questions about help :-)
Im happy to drive as far up as people want or even do one in manchester sorta area and another further north a little later...
|
https://community.spiceworks.com/topic/38183-how-can-we-generate-interest
|
CC-MAIN-2017-09
|
refinedweb
| 534
| 84.3
|
Event Details
PLEASE read all the FAQ's prior to purchasing your tickets. NOTE* there are no refunds or class transfers.
Dahlias are the peony of the fall season with so many shapes and colors to choose from the design options are endless. Join us at Barnard Griffing Winery where we will design a beatiful centerpiece. This is a beginner class for all to join, no creativity required. Class starts promptly at 6pm but please arrive early if you are sitting as a group or if you want to purchase wine/food.
Wine and food available for purchase at the class.
Registration closes on 9/27/17..
What can/can't I bring to the event?
No experience neccesary! All you bring is yourself and willingness to learn something new. transfer
Barnard Griffin Winery
878 Tulip Lane
Richland,
WA 99352
Thursday, September 28, 2017 from 6:00 PM to 7:30 PM (PDT)
Add to my calendar
Organizer
Chandra Christenson
Thank you for checking out our Sips & Stems by Simplified Celebrations classes. I own Simplified Celebrations a local studio based floral deisgn and event planning company in Tri-Cities, WA.
I have been designing for 13 years and specialize in event florals such as weddings, corporate events, holiday parties, fundraisers, and more. Along with event planning I have recently added basic floral design classes set at local wineries and restaurants These are designed for you to relax with your friends, sip on wine, and walk away with a fresh flower deisgn YOU created.
A fresh, fun experience that requires NO creativity and No experience.
In addition to my public classes listed below I also offer private parties which are great for birthdays, bachelorette parties, office get togethers, bridal showers, and more. Contact me if you would like more info.
I look forward to meeting you and your friends...see you in class!
chandra@simplifiedcelebrations.net
509-430-8786
Share Sips and Stems-Darling Dahlias!Share Tweet
|
https://www.eventbrite.com/e/sips-and-stems-darling-dahlias-tickets-36964919068
|
CC-MAIN-2018-34
|
refinedweb
| 326
| 64.91
|
The Ming32 compiler is almost as easy to setup as the Cygnus c compiler but supposedly does not require any run time DLLs to be distributed with executables.
Information on the package is available at:
the Official Installation Information on the package is available at:
The distribution zip files are available at the following sites. Go to this site and get everything but only the latest versions of GCC and and libstdc++. When in doubt, go for the latest version, not previous ones. At the time of writing this, the latest version of GCC was gcc-2.8.1.zip and of libstdc++-2.8.1.zip. If you have 32 bit complient Unzipping program linked in with your web browser, open all the files as you download them and install them into a c:\mingw32 directory. They should all be extracted into the correct directories.
set c_include_path=c:\mingw32\include set cplus_include_path=c:\mingw32\include\g++;c:\mingw32\include set library_path=c:\mingw32\lib set gcc_exec_prefix=c:\mingw32\lib\gcc-lib set bison_simple=C:\mingw32\share\bison.simple set bison_hairy=C:\mingw32\share\bison.hairy
gcc -v. You should get the following response.
As recommended by the installation instructions, you can also build a "Hello World" program and compile it to test things out.
#include <stdio.h> int main(void) { printf("Hello, world!\n"); return 0; }
Build it using the command gcc hello.c -o hello.exe
|
http://webstats.ccp14.ac.uk/tutorial/compiler/mingw32/index.html
|
CC-MAIN-2018-13
|
refinedweb
| 238
| 59.09
|
Visio 12 - Eric RockeyWhat's new in Visio 12 Server2005-12-06T02:01:00ZDo You Share Diagrams On A Server?<P class=MsoNormal<FONT face=Calibri size=3.</FONT></P> face=Calibri size=3>If you think you can help us with this area (and would like to help chart Visio’s future), simply contact us.</FONT></P><img src="" width="1" height="1">Eric Rockey, DWG & Visio<P class=MsoNormal<FONT face=Calibri size=3>Hey!</FONT></P> <P class=MsoNormal<?xml:namespace prefix = o<o:p><FONT face=Calibri size=3> </FONT></o:p></P> <P class=MsoNormal<FONT face=Calibri size=3>Do you ever open, save or use AutoCAD files (DXF/DWG) with Visio? If so, the Visio product team would like to talk to you. Please drop us a line, and we’ll be in touch.</FONT></P><img src="" width="1" height="1">Eric Rockey LOT more information on creating custom data graphics<P><FONT face=Verdana size=2: </FONT><A href=""><FONT face=Verdana size=2></FONT></A><FONT face=Verdana size=2> . This should give you everything you need to roll your own text, icon set or data bar callouts. Looking forward to seeing some of the new ones that folks come up with!</FONT></P> <P> </P><img src="" width="1" height="1">Eric Rockey you wanted to know about Themes and shapes<P><FONT face=Verdana size=2>Tim Davenport, a Visio Program Manager on my team, has written an MSDN article that gives a ton of great information about our new themes feature and in particular, how to design Visio shapes that work well with themes. Check it out here: </FONT><A href=""><FONT face=Verdana size=2></FONT></A><FONT face=Verdana size=2> </FONT></P><img src="" width="1" height="1">Eric Rockey the Visio Customer Council<P><SPAN style="COLOR: #1f497d"><FONT face=Verdana size=2>Hi all,</FONT></SPAN></P> <P><FONT size=2><FONT face=Verdana><SPAN style="COLOR: #1f497d".</FONT></P> <P><FONT size=2><FONT face=Verdana><SPAN style="COLOR: #1f497d">Thanks,<BR>Eric</SPAN></FONT></FONT></P> <P><FONT size=2><FONT face=Verdana><SPAN style="COLOR: #1f497d">---</SPAN></FONT></FONT></P> <P><SPAN style="COLOR: #1f497d"><FONT face=Verdana size=2>We're always looking for ways to better understand the needs and concerns of Visio customers. To that end, one of the tools we use is the Visio Customer Council. </FONT></SPAN></P> <P><SPAN style="COLOR: #1f497d"><FONT face=Verdana size=2. </FONT></SPAN></P> <P><SPAN style="COLOR: #1f497d"><FONT face=Verdana size=2>There are a limited number of Council member positions, and members serve for a period of one year. We are recruiting for the 2006-2007 council now. </FONT></SPAN></P> <P><SPAN style="COLOR: #1f497d"><FONT face=Verdana size=2>If you are interested in helping shape the next generation of Visio, we'll ask you to: </FONT></SPAN></P> <UL> <LI><SPAN style="COLOR: #1f497d"><FONT face=Verdana size=2>Participate in monthly conference calls with the product team; </FONT></SPAN> <LI><SPAN style="COLOR: #1f497d"><FONT face=Verdana size=2>Come to Redmond at least once a year to attend an in-person, two- or three-day Visio Customer Council Symposium; </FONT></SPAN> <LI><SPAN style="COLOR: #1f497d"><FONT face=Verdana size=2>Review and provide detailed feedback on proposed features designs; </FONT></SPAN> <LI><SPAN style="COLOR: #1f497d"><FONT face=Verdana size=2>Install, use, and provide feedback on Visio beta software; </FONT></SPAN> <LI><SPAN style="COLOR: #1f497d"><FONT face=Verdana size=2>Respond to questions from the product team via phone or e-mail; </FONT></SPAN> <LI><SPAN style="COLOR: #1f497d"><FONT face=Verdana size=2>Host Visio product team members at your place of business from time to time to understand more about how your and others at your organization use Visio; </FONT></SPAN> <LI><SPAN style="COLOR: #1f497d"><FONT face=Verdana size=2>Sign a nondisclosure agreement with Microsoft. </FONT></SPAN></LI></UL> <P><SPAN style="COLOR: #1f497d"><FONT face=Verdana size=2>Since the readers of this blog are among the most dedicated Visio users, we'd like to give you the opportunity to join the Council. If this sounds like something that would interest you, please </FONT><A href=""><FONT face=Verdana size=2>contact me</FONT></A></SPAN><SPAN style="COLOR: #1f497d"><FONT face=Verdana size=2> by e-mail, and we'll get our Planning group in touch with you. </FONT></SPAN></P></SPAN></FONT><img src="" width="1" height="1">Eric Rockey custom data graphics<P><FONT face=Verdana size </FONT><A href=""><FONT face=Verdana size=2></FONT></A><FONT face=Verdana size=2> . We're working on complete documentation for how to do this and will publish that in the future. But this should get you started for now if you are interested (plus Bill's got a lot of other good stuff to read about Visio).</FONT></P> <P> </P><img src="" width="1" height="1">Eric Rockey everybody can try out Visio 2007<P><FONT face=Verdana size=2>There’s been a lot going on this week that I wanted to let you know about. </FONT></P> <P><FONT face=Verdana size=2: <A href=""></A></FONT></P> <P><FONT face=Verdana size=2>We’ve also released a Beta 2 version of the Visio 2007 SDK. This includes new wizards for working with Visual Studio 2005, and new code samples that show you how to program against our new data features. You can download the SDK here: <A href=""></A> </FONT></P> <P><FONT face=Verdana size=2>Next, the Visual Studio team has made available a Community Technology Preview (CTP) of Visual Studio Tools for Office “v3”, which adds support for Visio for the first time. This is exciting news for Visio add-in developers. Go here to download it: <A href=""></A> (One note is that you need to follow a specific installation order: Visual Studio 2005, Office Beta 2, and then VSTO. This is described in the instructions at the bottom of the download page.)</FONT></P> <P><FONT face=Verdana size=2: <A HREF="/visio/"></A> . The first post of Visio Insights also includes a good list of all of the other people out there blogging on Visio.</FONT></P> <P><FONT face=Verdana size=2>I’ll still be talking about Visio 2007 features here, but I encourage you to give Visio Insights a read as well.</FONT><FONT face=Verdana size=2></P></FONT><img src="" width="1" height="1">Eric Rockey: Great Results in a few clicks<P><FONT face=Verdana size=2.<BR></FONT><FONT face=Verdana size=2><BR.</P> <P: </P> <P><IMG src=""></P> <P!).</P> <P><IMG src=""></P> <P><FONT face=Verdana size=2.<BR><BR. </FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size=2>This applies formatting to every shape in your diagram all at once. So you can quickly go from this:</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size=2>To something that looks more like this:</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size.<BR></FONT><FONT face=Verdana size=2><BR:</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size:</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size=2>And here is how they react when a green Theme Color is applied to them:</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size=2.<BR></FONT><FONT face=Verdana size=2><BR:</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size=2.<BR><BR.</FONT></P></FONT><img src="" width="1" height="1">Eric Rockey, part 1<P><FONT face=Arial size=2>Hey, what's up? I'm back from a little break after the Visio Partner Conference. Today I'm </FONT><FONT face=Arial size=2>going to talk about the last of the major data features we are doing in Visio 12: </FONT><FONT face=Arial size=2>PivotDiagrams. You can think of PivotDiagrams as similar to PivotTables in Excel. They </FONT><FONT face=Arial size=2>allow you to work with data where you want to see groups and subtotals. The difference, of </FONT><FONT face=Arial size=2>course, is that a PivotDiagram does this visually as a diagram:</FONT></P> <P><IMG src=""></P> <P><FONT face=Arial size=2>They are great when you need to communicate the key pieces of information in your data to </FONT><FONT face=Arial size=2>other people in a very visual way. It's also easy to drill into your data with </FONT><FONT face=Arial size=2>PivotDiagrams and find exactly the right parts of data that you want to present. </FONT><FONT face=Arial size=2>PivotDiagrams can take advantage of all of the power of Data Graphics that I've been showing </FONT><FONT face=Arial size=2>you in previous posts, so you can create some graphically rich ways to show your data.</FONT></P> <P><FONT face=Arial size=2>So how do PivotDiagrams work? They are a new diagram type in Visio 12, so to use one you </FONT><FONT face=Arial size=2>simply select the PivotDiagram template in the startup screen. This triggers the Data </FONT><FONT face=Arial size=2>Selector wizard that lets you choose which data source you want to connect to (PivotDiagrams </FONT><FONT face=Arial size=2>are always connected to external data). PivotDiagrams can connect to Excel, Access, SQL </FONT><FONT face=Arial size=2>Server, SQL Server Analysis Services, SharePoint lists, and other OLE/DB or ODBC data </FONT><FONT face=Arial size=2>sources.When you select your data source, you end up with a single shape in your </FONT><FONT face=Arial size=2>PivotDiagram that shows you the sum total of all of the rows in your data. Here we are </FONT><FONT face=Arial size=2>looking at data showing the performance of different call center locations for a </FONT><FONT face=Arial size=2>corporation:</FONT></P> <P><IMG src=""></P> <P><FONT face=Arial size=2>By default we pick a total for you to get you started, but you can easily change which </FONT><FONT face=Arial size=2>totals are displayed over in the PivotDiagram task pane. </P> <P><IMG src=""></P> <P>Simply check different totals and </FONT><FONT face=Arial size=2>they will appear in the diagram (similar to PivotTables). In this case, we are adding totals </FONT><FONT face=Arial size=2>for the Solve Rate of customer calls:</FONT></P> <P><IMG src=""></P> <P><FONT face=Arial size=2>Once you have the totals that you want to display, the next step is to choose how you want </FONT><FONT face=Arial size=2>to break out your data into groups and subtotals. To do this you use the "Add Category" </FONT><FONT face=Arial size=2>control in the task pane. Simply click on one of the categories to break out the currently </FONT><FONT face=Arial size=2>selected shape. In this example, I'm breaking out the totals by the different types of </FONT><FONT face=Arial size=2>calls: Hardware, Software, etc:</FONT></P> <P><IMG src=""></P> <P><FONT face=Arial size=2>You can keep on breaking out the subtotals as well. So I could select the "Software" shape, </FONT><FONT face=Arial size=2>and break these totals out by the different priorities of the calls:</FONT></P> <P><IMG src=""></P> <P><FONT face=Arial size=2>So you can see how it's easy with PivotDiagrams to drill into your data and find the key </FONT><FONT face=Arial size=2>pieces that you want to communicate. You can also re-pivot the data by a different </FONT><FONT face=Arial size=2>category. You can select the root shape, and choose a different category to drill on. This </FONT><FONT face=Arial size=2>will remove all of the other shapes and drill in on the new category (in this case the call </FONT><FONT face=Arial size=2>center locations).<BR></FONT></P> <P><IMG src=""></P> <P><FONT face=Arial size=2>Now this is showing comparative data for the different call center locations around the U.S. </FONT><FONT face=Arial size=2>We can go in and add some Data Graphics to show the data in richer, more visual ways. In </FONT><FONT face=Arial size=2>this case, the Customer Satisfaction with the different call centers can be visualized as a s</FONT><FONT face=Arial size=2>peedometer data bar, and we can also show a trend arrow indicating if customer sat is going </FONT><FONT face=Arial size=2>up or down:</FONT></P> <P><IMG src=""></P> <P><FONT face=Arial size=2>We could leave the PivotDiagram like this -- it's already showing us some important </FONT><FONT face=Arial size=2>information in a visual, easy-to-understand way. But since this information is </FONT><FONT face=Arial size=2>geographically related, it might be even easier to understand if we placed it on a map. </FONT><FONT face=Arial size=2>This brings us to another important point about PivotDiagrams: you can easily customize the </FONT><FONT face=Arial size=2>layout to better communicate the data. While they start out looking like a tree diagram, </FONT><FONT face=Arial size=2>similar to an organizational chart, this is just a starting point for you to customize to </FONT><FONT face=Arial size=2>suit your needs. In this case, you can simply select and delete the connectors between the </FONT><FONT face=Arial size=2>shapes, drag out a U.S. map shape to the page, and place the different PivotDiagram shapes </FONT><FONT face=Arial size=2>in their appropriate locations around the map:</FONT></P> <P><IMG src=""></P> <P><BR><FONT face=Arial size=2>Once you end up with a graphic that you are happy with, you can save it and refresh it on a </FONT><FONT face=Arial size=2>regular basis as data changes. This is an easy way to create a graphical report that can be </FONT><FONT face=Arial size=2>distributed to everyone on your team to keep people updated on current status. In </FONT><FONT face=Arial size=2>future posts, I'll show you some of the pre-defined PivotDiagram reports that we are </FONT><FONT face=Arial size=2>shipping as part of Office 2007.</FONT></P><img src="" width="1" height="1">Eric Rockey Conference Recap<p><font face=Verdana size=2.</font></p> <p><font face=Verdana size=2. </font></p> <p><font face=Verdana size=2!</font></p><img src="" width="1" height="1">Eric Rockey Partner Conference this week!<P><FONT face=Verdana size=2. </FONT></P> <P><FONT face=Verdana size=2).</FONT></P><img src="" width="1" height="1">Eric Rockey brief detour from data: AutoConnect<P><FONT face=Verdana size=2:</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size=2!<BR></FONT><FONT face=Verdana size=2><BR.<BR><BR>How does AutoConnect work? When you are creating a drawing, you drag the first shape out like normal. </FONT><FONT face=Verdana size=2>But when you drag the second shape out, you can drop it on one of the blue arrows that appears when you hover over the first shape:</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size=2:</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size=2>Finally, imagine that you already have two shapes that are already on the page next </FONT><FONT face=Verdana size=2:</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size=2.<BR></FONT><FONT face=Verdana size=2><BR).</FONT></P><img src="" width="1" height="1">Eric Rockey comment (on comments) for the New Year<P><FONT face=Verdana size=2!</FONT></P><img src="" width="1" height="1">Eric Rockey Graphics: visualizing data on your diagram<P><FONT face=Verdana size=2>Last week I talked about connecting a Visio diagram to data, but we hadn't gone over how to actually display the </FONT><FONT face=Verdana size=2>data on the diagram itself. That's where data graphics comes in. Data graphics is a new technology we are </FONT><FONT face=Verdana size=2>introducing in Visio 12 that provides a variety of ways of showing data on top of shapes. It's really the key </FONT><FONT face=Verdana size=2>to all of the different things we are doing with data in Visio 12. Let's look at a few quick examples of </FONT><FONT face=Verdana size=2>shapes with data graphics applied to them:</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size=2>Data graphics can be as simple as those next to the computer shape on the left, which are just 3 text fields and </FONT><FONT face=Verdana size=2>their labels, or they can include more graphical elements, such as the bars in the middle example or the </FONT><FONT face=Verdana size=2>stoplight icons on the rightmost shape. Here's the list of different types of data graphics items that </FONT><FONT face=Verdana size=2>will be available with Visio 12. I'm only showing a couple examples of each -- Visio will come with more (and you can of course create your own as well):</FONT></P> <BLOCKQUOTE dir=ltr <P><FONT face=Verdana size=2><STRONG>Text </STRONG>A variety of different ways to show text on or around a shape.</FONT></P> <BLOCKQUOTE dir=ltr <P><IMG src=""></P></BLOCKQUOTE> <P><FONT face=Verdana size=2><STRONG>Data Bars </STRONG>Databound widgets that change size according to data.</FONT></P> <BLOCKQUOTE dir=ltr <P><IMG src=""></P></BLOCKQUOTE> <P><FONT face=Verdana size=2><STRONG>Icon Sets </STRONG>Different icons that show or hide on a shape based on data.</FONT></P> <BLOCKQUOTE dir=ltr <P><IMG src=""></P></BLOCKQUOTE> <P><FONT face=Verdana size=2><STRONG>Color by Value </STRONG>Change the color of the Visio shape based on data.</FONT></P> <BLOCKQUOTE dir=ltr <P><IMG src=""></P></BLOCKQUOTE></BLOCKQUOTE> <P><FONT face=Verdana size=2>Data graphics consist of a combination of these different items, connected to specific fields of data and laid </FONT><FONT face=Verdana size=2>out around a Visio shape in a specific way. Once a data graphic is applied to a shape, it acts like part of </FONT><FONT face=Verdana size=2>the shape. The data graphic will move with the shape, be copied with the shape, and be deleted with the shape. </FONT></P> <P><FONT face=Verdana size=2>So how do you use data graphics? First you need to have some shapes in your diagram with data behind them. </FONT><FONT face=Verdana size=2>This can be simply shape data that you typed in for a shape, or it could be data that </FONT><FONT face=Verdana size=2>was imported using the Data Link feature. The good news is that if you are using Data Link, we'll </FONT><FONT face=Verdana size=2>automatically create and apply a simple data graphic to the shapes when you connect them to data. If you want to </FONT><FONT face=Verdana size=2>edit the data graphic, or apply your own combination of data graphic items to a shape, you use the Data </FONT><FONT face=Verdana size=2>Graphics task pane:</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size=2>This is the central place to go to in order to see all of the different data graphics that are applied in your </FONT><FONT face=Verdana size=2>diagram, apply them to other shapes, or create and edit them. To apply a data graphic to a shape, simply select </FONT><FONT face=Verdana size=2>the shape and click on one of the data graphic previews in the task pane. If you want to edit a data graphic, </FONT><FONT face=Verdana size=2>right click on it and choose edit. This opens the Edit Data Graphic dialog:</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size=2>This dialog is where you can customize your data graphics or build up a new one from scratch using the </FONT><FONT face=Verdana size=2>different types of data graphic items (text, data bars, icon sets, color by value). If you want to add an element, </FONT><FONT face=Verdana size=2>simple click on the "New item" button. This shows a dropdown allowing you to choose one of the different </FONT><FONT face=Verdana size=2>types:</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size=2>When you add an item, you get a detailed settings dialog that allows you to specify things such as the data </FONT><FONT face=Verdana size=2>field the item is bound to and its label. You can also </FONT><FONT face=Verdana size=2>control the item's position here as well. By default, Visio will automatically group all of the data graphics </FONT><FONT face=Verdana size=2>together into a single location around the shape, but if you choose, you could move each different data graphic </FONT><FONT face=Verdana size=2>item to a different position around the shape.</FONT></P> <P><FONT face=Verdana size=2>One thing you need to consider is where to place the data graphic. Is it going to go inside of the actual </FONT><FONT face=Verdana size=2>shape, or outside (usually to the right or underneath)? If it goes inside, it may compete with the text that </FONT><FONT face=Verdana size=2>you have typed into the shape (in the workflow diagram from last week, that text contained the names of the </FONT><FONT face=Verdana size=2>steps in the workflow). To overcome this, Visio allows you to turn off the shape's text box when using a data </FONT><FONT face=Verdana size=2>graphic if desired. This is particularly useful if that same information is also contained in a data field and </FONT><FONT face=Verdana size=2>can just become part of the data graphic. </FONT></P> <P><FONT face=Verdana size=2>So now let's return to the workflow shapes and see what it would take to create their data graphics. All of the editing actions you need to take to accomplish this are in the Edit Data Graphics dialog. Here's </FONT><FONT face=Verdana size=2>what the shape looked like to start with:</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size=2>First, we need to turn the shape text off, and add a text data graphic item at the top of the shape that is </FONT><FONT face=Verdana size=2>linked to the name field of the data:</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size=2>Next a couple more text items are added, linked to the status field and the number of the step in the workflow </FONT><FONT face=Verdana size=2>process:</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size=2>And a data bar is added to visually show the average time in days that each step in the process takes:</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size=2>We also want to call out when a step is taking too long to complete. So we will </FONT><FONT face=Verdana size=2>also show an icon when the average time is above a certain threshold (in this case, if takes longer than 10 </FONT><FONT face=Verdana size=2>days, we'll show a red icon, and if it's between 5 and 10 days we'll show a yellow icon).</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size=2>Finally, I added some additional formatting with our new themes feature (more on that in future posts) to arrive at t</FONT><FONT face=Verdana size=2>he final result:</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size=2>Now that this data graphic is applied to the shape, when the underlying data about the workflow process </FONT><FONT face=Verdana size=2>changes, the diagram can be updated and the graphics will automatically adjust to show the changes. This </FONT><FONT face=Verdana size=2>allows you to use the diagram as a visual report for the workflow data. </FONT></P> <P><FONT face=Verdana size=2>I hope this has given you some ideas about how you could use Data Link and Data Graphics in diagrams that you </FONT><FONT face=Verdana size=2>work with. Feel free to drop me a line if you come up with any -- I'd love to hear about them. I'm taking a </FONT><FONT face=Verdana size=2>break for the holidays after this post -- I'll see you next year!</FONT></P><img src="" width="1" height="1">Eric Rockey Link: getting data into your shapes<P><FONT face=Verdana size=2:</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size=2:</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size:</FONT></P> <BLOCKQUOTE dir=ltr <P><FONT face=Verdana size=2>1) <STRONG>Connect</STRONG> to a data source <BR>2) <STRONG>Link</STRONG> rows in the data source to shapes in the Visio diagram<BR>3) Display the fields of data on top of the shapes using <STRONG>Data Graphics</STRONG></FONT></P></BLOCKQUOTE> <P><FONT face=Verdana size=2>The feature that allows us to perform the first two steps in Visio 12 is called "<STRONG>Data Link</STRONG>".. </FONT><FONT face=Verdana size=2>Once you have completed this wizard, you get an "External Data Window" at the bottom of your diagram:</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size=2. </FONT><FONT face=Verdana size=2:</FONT></P> <P><IMG src=""></P> <P><FONT face=Verdana size=2>Once a row of data has been linked to a shape, you will probably want to view at least some of this data directly on the shape. That's where the <STRONG>Data Graphics </STRONG>feature comes in. Data Graphics is the technology in Visio 12 that allows us to display data in a variety of different ways on top of a Visio shape. </FONT></P> <P><FONT face=Verdana size=2. </FONT></P><img src="" width="1" height="1">Eric Rockey
|
http://blogs.msdn.com/eric_rockey/atom.xml
|
crawl-002
|
refinedweb
| 4,524
| 54.86
|
Code:
listeners: { cellclick:function(grid,row,column,event) { var r = livegrid.getStore().getAt(row); var fieldName = grid.getColumnModel().getDataIndex(column); if(fieldName != 'name'){ opennewDialog(r.get('spo'),'index.php?action=view&pid='+r.get('pid')+'&nomenu=1',r.get('pid')) } if(r.get('status_change') == 'true' && fieldName == 'name'){ changestatus('Po id '+r.get('pid')+' - '+r.get('jobname'),'index.php?&id='+r.get('pid')+'&moduleid=44|status&nomenu=1&cat=1'); } } } ,selModel : new Ext.ux.grid.livegrid.RowSelectionModel()
not the expert, kinda fumbling thru it...
I am trying to use the livegrid with no plugins. Everything works fine on initial load, then I get this error when scrolling & a new query is sent:
"record is undefined"
Line 43895
This is that line in the code in the doRender method:
Code:
meta.value = column.renderer.call(column.scope, record.data[column.name], meta, record, rowIndex, i, store);
I am using v3.2.1. I see this problem when I scroll more than a 1/3 of the way down in the grid and it starts buffering. I am also not applying any filters at this point, so it's just doing the standard query. I am also using the same SQL statement for both types, just one returns the records, the other returns the count, then I bundle them together as the json response like in the example, just starts happening when I scroll. When I do see the error and stop scrolling, the entire grid is not filled, just blank space in the bottom 1/2.
As a final note, I am noticing that the onLoad method only gets touched on the initial load. is that intended?
Any tips on what the problem may be?
<edit>
It appears it is only dropping 133 records in the store instead of the 300 I return (bufferSize is set to 300 as well). I will keep digging to see why
Editing a cell in live grid shows the wrong cell's content
Editing a cell in live grid shows the wrong cell's content
I have two live grids on a page. The first one seem to work fine.
The second one, however, has the following problem: If i select a cell for editing, then scroll down enough to trigger the buffer load, the selection will flip from the original selected row to the approximate equivalent on the newly scrolled area. The value in the editor will be of another row's cell.
The first working grid uses a CheckboxSelectionModel but the second problematic one is a standard live grid rowselectionmodel
has anyone sees something like this before? I don't hvae any more hairs to pull. :o(
Harel
I'll answer my own problem after spending the day with it - My second query was not paginating using the start and end parameters passed so although the grid was scrolling visually , the store was always holding the full data.
Ext.ux.LiveGrid works ok, but there is a problem for GridView.adjustScrollerPos - it does not work in IE and Opera. Obviously this is because of use
scrollDom.scrollTop += pixels; (GridView.js, line 1780)
_setting_ scrollTop property works in FF, but does not is IE and Opera. This is a known issue, and AFAIR, scrollTop is not W3C supported dom property.
Or may be there is the other way to save scroll position after store reloading? What I do is save lastScrollPos on beforerefresh and set it back on refresh using adjustScrollerPos(savedScrollTop, true). Works nice in FF (scrolled to top by onLoad, then back to savedScrollTop), but grid scroll stays on its top in other browsers.
First I like livegrid very much. I have a small problem and could not find a solution. I just want to append some records to the grid. I do this with the insert function with the bufferSize as index. So far this seems to work fine. Now I would like that the last inserted record is shown. I didn't find any solution for that without a server roundtrip. Any ideas?
adjustScrollPosition does not work in Chrome too. Although Ext is VERY fast in Chrome.
Extremely need Ext.ux.LiveGrid with column groups. I think it should be aggregate code of Ext.ux.LiveGrid and GroupHeaderGrid from this thread:
K...I feel stupid....I can't get this to work. I have downloaded the livegrid sample, and put it in and modified the locations of the .css and .js files and all I get is a error in the browser:
Object Expected: Line 466, char 13 - livegrid-all-debug.js
No idea what the problem is. Does anyone have a nice working example or tutorial on how to use this thing?
J )
|
http://www.sencha.com/forum/showthread.php?17791-Ext.ux.LiveGrid/page77
|
CC-MAIN-2014-15
|
refinedweb
| 783
| 66.13
|
OEChem 1.5.0 is a new release including many major and minor bug fixes along with several new features. This is also a continuation of a complete release of all OpenEye toolkits as a consolidated set so that there are no chances of incompatibilities between libraries.
Note, that in this release the directory structure has been changed to allow multiple versions of the toolkits to be installed in the same directory tree without conflicts. From this release on, all C++ releases will be under the openeye/toolkits main directory. There is then a directory specific to the version of the release and then below that, directories for each architecture/compiler combination. To simplify end user Makefiles, openeye/toolkits/lib, openeye/toolkits/include'', and ``openeye/toolkits/examples are all symlinks to the specific last version and architecture that was installed.
New users should look in openeye/toolkits/examples for all the examples. Existing users updating existing Makefiles should change their include directory from openeye/include to openeye/toolkits/include. As well, existing Makefiles should change the library directory from openeye/lib to openeye/toolkits/lib.
OEChem now has a 2D similarity implementation using the Lingos method of similarity. Lingos compares Isomeric SMILES strings instead of pre-computed fingerprints. This combination leads to very rapid 2D similarity calculation without any upfront cost to calculate fingerprints and without any storage requirements to store fingerprints.
MMFF94 charges are now available in OEChem. While we recommend AM1-BCC charges as the best available charge model, having MMFF94 charges available at the OEChem level means that decent charges are available to all toolkit users.
In OEChem 1.2, there was an alternate implementation of MCS that used a fast, approximate method for determining the MCS. While it is less than exhaustive, the speed does have some appealing uses. In OEChem 1.5, we’ve restored this older algorithm and now both are available.
namespace OEMCSType { static const unsigned int Exhaustive = 0; static const unsigned int Approximate = 1; static const unsigned int Default = Exhaustive; }
OEMCSType::Exhaustive implies the current, exhaustive algorithm from OEChem 1.3 and later, while OEMCSType::Approximate implements the older, fast but approximate algorithm.
The ability to get the license expiration date when calling OEChemIsLicensed has been added.
Molecules (OEMol, OEGraphMol) can now be attached to an existing OEBase as generic data and they will be written to OEB and read back in. Additional support for attaching grids and surfaces to molecules has been added to Grid and Spicoli.
There is a new retain Isotope flag to OESuppressHydrogens. If false, [2H] and [3H] will also be removed by this call. By default, this is true so that the current behavior of OESuppressHydrogens is identical to the previous version.
The OEChem CDX file reader can now Kekulize aromatic (single) bonds in the input ChemDraw file. It switches the internal bond order processing to use the bond’s integer type field, and then calls OEKekulize to do all of the heavy lifting.
Tweaks to the algorithm used for determining which bond(s) around a stereocenter should bear a wedge or a hash. The bug fixed here includes an example where all three neighbors are stereocenters, but two are in a ring and one isn’t.
There are new versions of OEIsReadable and OEIsWriteable that take a filename directly.
More exceptional atom naming support for the PDB residues CO6 (pdb2ii5), SFC (pdb2gce), RFC (pdb2gce), MRR (pdb2gci), MRS (pdb2gd0), FSM (pdb2cgy) and YE1 (pdb2np9) has been added.
|
https://docs.eyesopen.com/toolkits/cpp/oechemtk/releasenotes/version1_5_0.html
|
CC-MAIN-2018-22
|
refinedweb
| 580
| 54.52
|
Status
()
▸
XUL
People
(Reporter: jst, Assigned: Mike Pinkerton (not reading bugmail))
Tracking
Firefox Tracking Flags
(Not tracked)
Details
(Whiteboard: [nsbeta2+])
This bug is for tracking if and how XUL in mozilla breaks when the old incorrect way of manipulating a DOM document containing XML namespace elements in mozilla is replaced with the new correct DOM Level 2 way of dealing with elements in a XML namespace aware application. The old incorrect implementation allows for code like this to work: document.createElement("html:input"); document.getElementsByTagName("html:checkbox"); element.setAttribute("rdf:resource", "http://..."); element.getAttribute("rdf:resource"); if (element.tagName == "BODY") ... // element is a HTML element in XUL if (element.nodeName == "TD") ... // element is a HTML element in XUL The correct new DOM Level 2 way of doing the above would be something like this: document.createElementNS("", "html:input"); document.getElementsByTagNameNS("", "html:checkbox"); element.setAttributeNS("", "rdf:resource", "http://..."); element.getAttributeNS("", "rdf:resource"); if (element.localName == "BODY" && // element is a HTML element in XUL element.namespaceURI == "") ... if (element.localName == "TD" && // element is a HTML element in XUL element.namespaceURI == "") ... One of the perhaps biggest changes, and the thing that is probably hardest to find, is that .nodeName and .tagName on elements, and .name on attribute nodes, contains the qualified name (ie "html:TD") in DOM Level 2 if element has a "html" prefix in the file, or if it was dynamically created with a prefix,in the old incorrect DOM Level 1 implementation .nodeName, .tagName and .name contained only the "local" name, and no prefix. For more information about using DOM Level 2 in XML namespace aware applications, such as the mozilla XUL, see: I'm hoping that I can check in code that lets people enable and disable this new behavior by setting a preference, this should be helpful in checking what relies on the old behavior, the code I'd like to check in also prints out warning messages on the console when code that assumes the old incorrect implementation is executed, this will not catch all places, but it should offer some help. I suspect that most of the changes are in the JS and XUL files in mozilla, but there could of course be C++ code that relies on the old DOM code too. The PDT team is aware of this change and a carpool will be arranged where the DOM Level 2 updates will be turend on permanently and the changes needed to fix this bug should be checked in, this bug will be updated once the date for the carpool is set. Mozilla does currently have most of, if not all, the code necessay to do the changes, but the old way still works.
Keywords: nsbeta2
Argh! My original comment contains two errors! The second argument to the getXxxNS() methods should be the local name only, not the qualified name, IOW: document.getElementsByTagNameNS("", "checkbox"); element.getAttributeNS("", "resource"); is the correct way.
reassign to hyatt for triage, cc evaughan
Assignee: trudelle → hyatt
Putting on [nsbeta2+] radar for beta2 fix.
Whiteboard: [nsbeta2+]
The temporary code that lets you turn on the new behavior by setting a pref is now checked in, to enable the new behavior, add this to all.js pref("temp.DOMLevel2update.enabled", true); With this pref enabled you'll see output like this: Possible DOM Error: CreateElement("html:input") called, use CreateElementNS() in stead! if the old incorrect behavior is expected. This temporary code does it's best to issue warnings when possible, but it's not quaranteed to find all the problems for you.
taking these so hyatt can play Vampire
Assignee: hyatt → pinkerton
Status: NEW → ASSIGNED
Target Milestone: --- → M18
fixed. now about those pesky vampires....
Status: ASSIGNED → RESOLVED
Last Resolved: 18 years ago
Resolution: --- → FIXED
per pinkerton, and from running with the pref on, verified fixed.
Status: RESOLVED → VERIFIED
Component: XP Toolkit/Widgets: XUL → XUL
QA Contact: jrgmorrison → xptoolkit.widgets
|
https://bugzilla.mozilla.org/show_bug.cgi?id=39932
|
CC-MAIN-2018-26
|
refinedweb
| 647
| 52.9
|
/* Timing variables for measuring compiler performance. Copyright (C) 2000, 2003, 2004, 2005 Free Software Foundation, Inc. Contributed by Alex Samuel <sam_TIMEVAR_H #define GCC_TIMEVAR_H /* Timing variables are used to measure elapsed time in various portions of the compiler. Each measures elapsed user, system, and wall-clock time, as appropriate to and supported by the host system. Timing variables are defined using the DEFTIMEVAR macro in timevar.def. Each has an enumeral identifier, used when referring to the timing variable in code, and a character string name. Timing variables can be used in two ways: - On the timing stack, using timevar_push and timevar_pop. Timing variables may be pushed onto the stack; elapsed time is attributed to the topmost timing variable on the stack. When another variable is pushed on, the previous topmost variable is `paused' until the pushed variable is popped back off. - As a standalone timer, using timevar_start and timevar_stop. All time elapsed between the two calls is attributed to the variable. */ /* This structure stores the various varieties of time that can be measured. Times are stored in seconds. The time may be an absolute time or a time difference; in the former case, the time base is undefined, except that the difference between two times produces a valid time difference. */ struct timevar_time_def { /* User time in this process. */ double user; /* System time (if applicable for this host platform) in this process. */ double sys; /* Wall clock time. */ double wall; /* Garbage collector memory. */ unsigned ggc_mem; }; /* An enumeration of timing variable identifiers. Constructed from the contents of timevar.def. */ #define DEFTIMEVAR(identifier__, name__) \ identifier__, typedef enum { #include "timevar.def" TIMEVAR_LAST } timevar_id_t; #undef DEFTIMEVAR /* Execute the sequence: timevar_pop (TV), return (E); */ #define POP_TIMEVAR_AND_RETURN(TV, E) do { timevar_pop (TV); return (E); }while(0) #define timevar_pop(TV) do { if (timevar_enable) timevar_pop_1 (TV); }while(0) #define timevar_push(TV) do { if (timevar_enable) timevar_push_1 (TV); }while(0) extern void timevar_init (void); extern void timevar_push_1 (timevar_id_t); extern void timevar_pop_1 (timevar_id_t); extern void timevar_start (timevar_id_t); extern void timevar_stop (timevar_id_t); extern void timevar_print (FILE *); /* Provided for backward compatibility. */ extern void print_time (const char *, long); extern bool timevar_enable; extern size_t timevar_ggc_mem_total; #endif /* ! GCC_TIMEVAR_H */
|
http://opensource.apple.com/source/libstdcxx/libstdcxx-39/libstdcxx/gcc/timevar.h
|
CC-MAIN-2016-36
|
refinedweb
| 349
| 65.01
|
“.
PeriodicWave… Isn’t that something the crowd does at a sports game?
The Web Audio API is rather intimidating at first glance if you’re not used to dealing with the low-level vagaries of audio – you feed in an appropriate file to the right OS API, it gets converted to PCM via magic, cat sounds come out of the speakers, right?
(If you’re coming from the world of dealing regularly with said vagaries, you’re probably thinking "this API is weird – there’s a mix of high-level and low-level constructs, and why is setting up a standard ring buffer so hard?" We’ll get to those details/criticisms/workarounds in a future post).
It is, however, an unquestionably better API for playing sounds—especially dynamically created sounds – than any web standard to date. It’s also fairly well supported across the board by browsers, so there’s no reason not to dig in and get to work.
KISS
So, let’s start at the very beginning. If you haven’t already, check out MDN’s Basic concepts behind Web Audio API article. It’s a great introduction into some of the basics of not just the API, but playing sounds in general.
We just want to play some cat sounds, though, not delve into the Nyquist-Shannon Sampling Theorem, or reprise the history of the 44.1kHz sampling frequency. Can’t we just skip to the good stuff?
So, let’s assume we have a sound file,
meow.mp3. We’re going to rely on the browser have the right codec to decode this file, and we’re not going to try and loop it, alter its gain, or perform any transformations of it—we’re just going to play it.
We could do something this simple with the Audio Element—but we want to do bigger and cooler things in the future. It is worth noting, however, that an audio element can be used as the source for an Web Audio Context – we may delve more into this in the future.
For now, let’s get this party started:
const _audioCtx = new (window.AudioContext || window.webkitAudioContext)(); /** * Allow the requester to load a new sfx, specifying a file to load. * @param {string} sfxFile * @returns {Promise<ArrayBuffer>} */ async function load (sfxFile) { const _sfxFile = await fetch(sfxFile); return await _sfxFile.arrayBuffer(); }; /** * Load and play the specified file. * @param sfxFile * @returns {Promise<AudioBufferSourceNode>} */ function play (sfxFile) { return load(sfxFile).then((arrayBuffer) => { const audioBuffer = _audioCtx.decodeAudioData(arrayBuffer); const sourceNode = _audioCtx.createBufferSource(); sourceNode.buffer = audioBuffer; sourceNode.connect(_audioCtx.destination); sourceNode.start(); return sourceNode; }); };
Okay, let’s unpack it:
First, we create our audio context—if you’ve dealt with the Canvas API this is familiar—this is going to be our audio processing graph object. We look for the standard
AudioContext object first, and if we can’t find, we try to fall back to the browser-prefixed version,
webkitAudioContext, to broaden our browser support.
Then, we declare an async function. This is part of the ECMAScript 2017 spec, but if you’re able to use the Web Audio API, you’re probably able to use this too. Async/Await is a wonderful bit of sugar over Promises, and you should take advantage of it if you can.
Within this async function,
load, we rely on the Fetch API to go and get the file for us (my, aren’t we linking to MDN an awful lot in this post!). There’s no reason we couldn’t use XMLHttpRequest here instead, but fetch is wonderfully compact and again, if you can use Web Audio, you can almost certainly use fetch.
We await the response from fetch, and then get the response body as an array buffer and return it.
Then, in
play, we perform both the loading and playing, so our api becomes just a call to
play with the file location and wait on the promise returned from our async
load function. Then, we need to decode the audio data, turning it into PCM, and then create an AudioBufferSourceNode from the return (that’s the call to
_audioCtx.createBufferSource()).
We then set our buffer source to draw from the audio buffer we’ve created out of the array buffer we created out of the file (whew), and connect it to the destination of the audio context (e.g. the audio context output, like speakers), and finally we can call
start on the sourceNode to have it start pumping its audio into the audio context.
Easy-peasy.
Except… if you do this with your
meow.mp3 file, you won’t hear anything. The file will successfully be fetched, loaded, decoded… but not played. What’s going on?
Now Hear This
You’ve hit the browser’s autoplay policy. Nobody likes opening a new tab and having whatever website they’ve just navigated to start blaring their addiction to spongebob to the whole office. So, most browsers have various autoplay policies which restrict what the page can do with media before the user has interacted with it.
For our purposes today, this boils down to not being able to play audio until the user has interacted with the page in some easily measurable way – that is, clicked/tapped on it.
So, in order to unlock our audio, we’ll need to listen for that interaction, and wait to play our sounds until after we’ve received it. We don’t want to have to download and play real audio to make that happen, so we’ll need to create an empty sound buffer that we can use as our stalking horse.
Also, if you tried the above example on iOS, it would have failed regardless of autoplay policies, because iOS is a special snowflake and hasn’t updated their audio APIs in a while. Let’s handle that too.
Oh, and it would be nice not to need to re-download a file every time we want to play it if we anticipate needing to play it multiple times. Let’s throw that in there for good measure.
KISS Redux
Let’s update our example above, and we’ll walk through the new bits together:
(function() { const _af_buffers = new Map(), _audioCtx = new (window.AudioContext || window.webkitAudioContext)(); let _isUnlocked = false; /** * A shim to handle browsers which still expect the old callback-based decodeAudioData, * notably iOS Safari - as usual. * @param arraybuffer * @returns {Promise<any>} * @private */ function _decodeShim(arraybuffer) { return new Promise((resolve, reject) => { _audioCtx.decodeAudioData(arraybuffer, (buffer) => { return resolve(buffer); }, (err) => { return reject(err); }); }); } /** * Some browsers/devices will only allow audio to be played after a user interaction. * Attempt to automatically unlock audio on the first user interaction. * Concept from: * Borrows in part from: */ function _unlockAudio() { if (_isUnlocked) return; // Scratch buffer to prevent memory leaks on iOS. // See: const _scratchBuffer = _audioCtx.createBuffer(1, 1, 22050); // We call this when user interaction will allow us to unlock // the audio API. var unlock = function (e) { var source = _audioCtx.createBufferSource(); source.buffer = _scratchBuffer; source.connect(_audioCtx.destination); // Play the empty buffer. source.start(0); // Calling resume() on a stack initiated by user gesture is // what actually unlocks the audio on Chrome >= 55. if (typeof _audioCtx.resume === 'function') { _audioCtx.resume(); } // Once the source has fired the onended event, indicating it did indeed play, // we can know that the audio API is now unlocked. source.onended = function () { source.disconnect(0); // Don't bother trying to unlock the API more than once! _isUnlocked = true; // Remove the click/touch listeners. document.removeEventListener('touchstart', unlock, true); document.removeEventListener('touchend', unlock, true); document.removeEventListener('click', unlock, true); }; }; // Setup click/touch listeners to capture the first interaction // within this context. document.addEventListener('touchstart', unlock, true); document.addEventListener('touchend', unlock, true); document.addEventListener('click', unlock, true); } /** * Allow the requester to load a new sfx, specifying a file to load. * We store the decoded audio data for future (re-)use. * @param {string} sfxFile * @returns {Promise<AudioBuffer>} */ async function load (sfxFile) { if (_af_buffers.has(sfxFile)) { return _af_buffers.get(sfxFile); } const _sfxFile = await fetch(sfxFile); const arraybuffer = await _sfxFile.arrayBuffer(); let audiobuffer; try { audiobuffer = await _audioCtx.decodeAudioData(arraybuffer); } catch (e) { // Browser wants older callback based usage of decodeAudioData audiobuffer = await _decodeShim(arraybuffer); } _af_buffers.set(sfxFile, audiobuffer); return audiobuffer; }; /** * Play the specified file, loading it first - either retrieving it from the saved buffers, or fetching * it from the network. * @param sfxFile * @returns {Promise<AudioBufferSourceNode>} */ function play (sfxFile) { return load(sfxFile).then((audioBuffer) => { const sourceNode = _audioCtx.createBufferSource(); sourceNode.buffer = audioBuffer; sourceNode.connect(_audioCtx.destination); sourceNode.start(); return sourceNode; }); }; _unlockAudio(); }());
That’s the ticket! Now, when we load our
meow.mp3 by calling
play, we attempt to fetch the audiobuffer from our impromptu cache if possible, or fallback to fetching it. (Note that it’s important we cache the decoded audiobuffer, rather than the fetched array buffer—decoding is expensive! However, if we’re planning to fetch large compressed files, these will be decoded into PCM and potentially eat up a huge chunk of memory – keep an eye on the tradeoff!)
Then, in
load, we also perform the decoding, falling back to a decode shim if the browser throws an error (as it does in iOS), due to out-of-date API implementation.
Otherwise,
play looks pretty much the same.
And finally, we’ve wrapped all of this in an IIFE, with
_unlockAudio getting called as soon as the IIFE is executed, adding touch/click listeners to the document so we can unlock the audio API as soon as the user has indicated they’re willing to interact with our site.
Phew! That’s a fair amount of work to play a single file! Depending on the needs of your project, you may want to explore some of the libraries that can handle some of the grunt work for you, like Howler.js and SoundJS. There’s no magic, though, and you may need to dive into the true depths for your use case.
The Meowsic machine project is on its way—next time, we prepare to begin working on the UI with Vue.js.
Images from Wikipedia Commons: cassette, padlock
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
|
https://artandlogic.com/2019/07/unlocking-the-web-audio-api/?shared=email&msg=fail
|
CC-MAIN-2021-43
|
refinedweb
| 1,695
| 55.74
|
Hi all,
I am tryiing to figure out how to tokenize a string consisting of words in alphabetical order, all crammed together with no delimiters. I can't for the life of me see how this could be done without analyzing the string beforehand and hard coding a delimiter in.
Example string:
aaardvarkapplebananabicyclecobracupdelve
Any ideas?
You have a dictionary of words somewhere, right? My first attack would be to put the dictionary into a trie, with a special search that returns the end of the first complete match from the end of the most recent match. For example given a very limited dictionary:
#include <algorithm> #include <iostream> #include <string> #include <cctype> #include <climits> class trie { struct node { char key; node *link[CHAR_MAX + 1]; public: node(char key): key(key) { std::fill_n(link, CHAR_MAX + 1, (node*)0); } } *root; public: trie(): root(0) {} void add(const std::string& s) { node **it = &root; if (*it == NULL) *it = new node(0); for (std::string::size_type i = 0; i < s.size(); ++i) { char key = std::tolower(s[i]); if ((*it)->link[key] == 0) (*it)->link[key] = new node(key); it = &(*it)->link[key]; } } std::string::const_iterator match(const std::string& s, std::string::const_iterator& begin) { std::string::const_iterator end = begin; node *it = root; while (end != s.end() && it->link[std::tolower(*end)] != 0) it = it->link[std::tolower(*end++)]; return end; } }; trie initialize_dictionary() { const char *words[] = { "aardvark","apple","banana","bicycle","cobra" }; trie trie; for (std::size_t i = 0; i < sizeof words / sizeof *words; i++) trie.add(words[i]); return trie; } int main() { trie dict = initialize_dictionary(); std::string s = "aardvarkapplebananabicyclecobra"; std::string::const_iterator begin = s.begin(); std::string::const_iterator end; for (std::string::const_iterator begin = s.begin(); begin != s.end(); begin = end) { end = dict.match(s, begin); std::cout<< s.substr(begin - s.begin(), end - begin) <<'\n'; } }
Edited by Narue: n/a
In your example:
std::tolower(*end)
tolower is in <cctype> and is a C function.
Interesting, thanks for the reply. After reading in the file, I realized that the words were separated by nulls, so I guess this will be straightforward after all. The textfile made it seem like there were no delimiters, but once I read it into a char array, I saw that it was null-terminated. Thanks anyways!
tolower is in <cctype> and is a C function.
If you think there's a problem, it's best to just say what you think it is.
once I read it into a char array, I saw that it was null-terminated
That's good. Working without any kind of formatting is either tricky or impossible, and I'm not a fan of either. ;)
Sorry, I thought it would be obvious it's not in the std namespace.
Sorry, I thought it would be obvious it's not in the std namespace.
That's not obvious at all because it is in the std namespace when you use <cctype>.
No it isn't.
Are you using Visual Studio? What version?
Well a quick (possibly misleading) google search claims C library stuff is meant to be in the std namespace, but visual studio sucks and it isn't.
My thoughts are:
Only version of std::toupper provided is:
template <class charT> charT toupper ( charT c, const locale& loc );
Could someone make sure std::toupper(int) exists on other compilers? (G++)
Microsoft always likes to have small differences with other compilers, for instance on G++ I don't think there is a std::exception constructor overload that accepts a C string.
No it isn't.
Yes, it is. I don't care what links you post, my source is the C++ standard. And unless you're terribly familiar with it, I strongly suggest you don't argue the finer points of the standard with me, though you're welcome to confirm that I'm correct. ;)
In particular, we're referring to the section tagged depr.c.headers:
1 For compatibility with the C standard library and the C Unicode TR, the C++ standard library provides the 25 C headers, as shown in Table 151.
Table 151 — C headers
<assert.h> <float.h> <math.h> <stddef.h> <tgmath.h>
<complex.h> <inttypes.h> <setjmp.h> <stdio.h> <time.h>
<ctype.h> <iso646.h> <signal.h> <stdint.h> <uchar.h>
<errno.h> <limits.h> <stdarg.h> <stdlib.h> <wchar.h>
<fenv.h> <locale.h> <stdbool.h> <string.h> <wctype).
3 [ Example: The header <cstdlib> assuredly provides its declarations and definitions within the namespace std. It may also provide these names within the global namespace. The header <stdlib.h> assuredly provides the same declarations and definitions within the global namespace, much as in the C Standard. It may also provide these names within the namespace std. —end example ]
This comes from the latest draft of C++0x, but you'll find that the original C++98 standard says the same thing. Also notice (before claiming that a compiler sucks) that the regardless of which header you choose, the other namespace is allowed as an extension. That is, global namespace on top of std namespace for <cname> and std namespace on top of global namespace for <name.h>.
With VC++ 2010 look in <cctype> and you will find that it's in std namespace, just as Narue said it is. _STD_BEGIN is defined as
#define _STD_BEGIN namespace std { in yvals.h
Very helpful, I never knew. That's why we communicate.
|
https://www.daniweb.com/programming/software-development/threads/349392/string-with-no-delimiter
|
CC-MAIN-2018-30
|
refinedweb
| 902
| 63.19
|
CodePlexProject Hosting for Open Source Software
Documentation
Installation
Upgrading to BlogEngine.NET 2.0
There are more file and folder changes made in BlogEngine.NET 2.0 compared to the last couple of releases (i.e. versions 1.5, 1.6). Some files which existed in v1.6 no longer exist in v2.0. The web.config file has also undergone some relatively major changes
and transformations. Additionally, because BlogEngine.NET 2.0 is now based on ASP.NET 3.5 (prior versions were ASP.NET 2.0), many changes related to this have been made in the web.config file.
The cleanest way to upgrade to v2.0 is to start from a v2.0 installation, and then copy your existing data and settings into v2.0 on your computer, in a new folder.
As noted above, because of the large number of changes to the web.config files, it is strongly recommended you use the web.config file that is included with v2.0. If you have any custom settings in your existing web.config files (e.g. appSettings),
it will probably be easiest to copy your custom settings into the BlogEngine.NET v2.0 web.config file. If you have any custom settings, those can be copied into the v2.0 web.config file now. Otherwise, you can just use the v2.0 web.config file
as-is.
Running BlogEngine.NET in a .NET 4.0 Application Pool?
If so, instead of using the web.config file found in the root folder, you will need to use the .NET 4.0 web.confg file found directly in the /setup folder. The name of this file is ASP.NET_4.0_Web.Config.).
If so, you will need to use the web.config file that is for .NET 4.0. They are included in the same folders. For example, for SQL Server, the .NET 3.5 web.config file is named SQLServerWeb.Config, and the .NET 4.0 web.config file is named SQLServer.NET_4.01.6To2.0.sql. For MySQL, it is MySQLUpgradeFrom1.6To2.0.sql,
etc. Run this script in your existing DB. If you are upgrading from a version prior to 1.6, you will need to first run the upgrade script(s) to get your DB up to v1.6. For example, if you are upgrading from v1.5, you will need to first run
the 1.5to1.6 script, and after that, run the 1.6to2.0 script.
In your v2.0 installation is the App_Data folder. Delete the entire contents within the App_Data folder. Once the contents have been deleted, you have will have an empty App_Data folder.
Copy all of the App_Data contents (files/folders) from your existing blog to the empty v2.0 App_Data folder.
If you have a custom theme, copy your custom theme folder into the v2.0 "themes" folder. Similarly, if you have customized the robots.txt file, or if you have any other custom files/folders, copy those into the v2.0 folder you have been working
on.
Because you will have files on your web server that no longer exist (or have been moved) in v2.0, it is best to delete all of the BlogEngine.NET files and folders on your web server, and then upload the new v2.0 files and folders you prepared in the previous
steps.
Please make sure you have a backup of everything you will delete (see step 1).
After you have deleted the BlogEngine.NET files/folder off your web server, upload the v2.
In prior versions, login.aspx was in the blog root. This page has been moved to under the /Account/ folder. It is now:
/Account/Login.aspx
Custom themes and other custom components (e.g. widgets) might have the login.aspx page hard-coded. You just need to update the path to the new location.
If your blog is using a Widget, Control or Extension that is not included with BlogEngine (i.e. a customized one), you will most likely receive an error message after upgrading to BlogEngine.NET 2.0 and running your website. Fortunately, it is relatively
easy to fix it. For reference, the namespace changes are:
The error messages to look out for, and the code fixes follow:
The type or namespace name 'WidgetBase' could not be found
This error will typically occur in widget.ascx.cs and edit.ascx.cs (the user control files each widget has). The fix is to add the following "using" statement to the top of these two files. Note, some widgets do not have a edit.ascx.cs
file.
using App_Code.Controls;
The type or namespace name 'ExtensionSettings' could not be found
This error will typically occur in the .CS file for an extension in the App_Code\Extensions folder. Some extensions have multiple .CS files that may need to all be updated. The fix is to add the following "using" statement to the top
of the extension .CS files that are having this error:
using BlogEngine.Core.Web.Extensions;
Unknown server tag 'blog:xxxxxx'
This error will typically occur for a control with a .CS file extension that is located in the App_Code\Controls folder. The XXXXXX will be the name of the control, e.g. <blog:CategoryList>, <blog:RecentPosts>. Most likely, this control
has a namespace towards the top of the .CS file that looks like:
namespace Controls
The fix is to change the namespace to the following:
namespace App_Code.Controls
Last edited Jan 1, 2011 at 6:36 PM by BenAmada, version 8
|
http://blogengine.codeplex.com/wikipage?title=Upgrading%20to%20BlogEngine.NET%202.0
|
CC-MAIN-2017-43
|
refinedweb
| 927
| 70.7
|
[SOLVED] Error sending switch command
Almost every time when I try to control my relay from Domoticz I get a "Error sending switch command, check device/hardware !" even though the relay turns on/off. There is maybe a 2-3 seconds delay between when I press the switch in Domoticz and when it reacts.
Don't know if it is the mysGateway not communicating properly with Domoticz or some other problem? Since the command works it seems to me that my sensors is working and talking to each other.
Any idea on what could be wrong or how to narrow down the problem?
I was hoping for a logfile for the mysController but haven't found anything. If there is some error between the gateway and the sensor.
However, watching the messages in MYScontroller I always see the same command sends two times (either on or off) every time when I get the error message in Domoticz. The few times when it works as it should, no error in Domoticz, on/off is only sent one time.
Don't know if it is possible to configure some more time in Domoticz before it produces a error? Or if the gateway is not communication properly with Domoticz saying "wait a second, need to resend the command", and therefore Domoticz thinks there is a problem with the command? Because it seems to me that the gateway at least understands that it needs to send the command twice sometimes.
I have capacitors on the NRF, so communication should be good.
@raptorjr There could be several different reasons:
- Communication problems
- Node is not able to handle incomming messages at the moment you send the command.
- Others I have not yet come accross
Could you post your sketch?
Sure, here is my sketch:
// Enable serial gateway //#define MY_GATEWAY_SERIAL // Enable debug prints to serial monitor #define MY_DEBUG // Enable and select radio type attached #define MY_RADIO_NRF24 #include <SPI.h> #include <MySensors.h> #include <OneWire.h> #include <DallasTemperature.h> #define TEMP_ID 1 #define RELAY_ID 2 //Temperatur sensor #define ONE_WIRE_BUS 4 #define COMPARE_TEMP 0 // Send temperature only if changed? 1 = Yes 0 = No float lastTemperature = 0; MyMessage tempMsg(TEMP_ID, V_TEMP); // Initialize temperature message OneWire oneWire(ONE_WIRE_BUS); // Setup a oneWire instance to communicate with any OneWire devices (not just Maxim/Dallas temperature ICs) DallasTemperature sensors(&oneWire); // Pass our oneWire reference to Dallas Temperature. unsigned long SLEEP_TIME = 5000; // Sleep time between reads (in milliseconds) //Relay to water valve #define RELAY_PIN 5 // Arduino Digital I/O pin number for first relay (second on pin+1 etc) #define RELAY_ON 1 // GPIO value to write to turn on attached relay #define RELAY_OFF 0 // GPIO value to write to turn off attached relay void setup(void) { // start serial port Serial.begin(115200); pinMode(RELAY_PIN, OUTPUT); digitalWrite(RELAY_PIN, RELAY_OFF); } void presentation() { // Send the sketch version information to the gateway and Controller sendSketchInfo("FishTank", "1.0"); present(TEMP_ID, S_TEMP, "Water temperature"); present(RELAY_ID, S_BINARY, "Water valve"); } void loop(void) { DeviceAddress tempDeviceAddress; // We'll use this variable to store a found device address // For testing purposes, reset the bus every loop so we can see if any devices appear or fall off sensors.begin(); sensors.requestTemperatures(); // Send the command to get temperatures // Search the wire for address if(sensors.getAddress(tempDeviceAddress, 0)) { float tempC = sensors.getTempC(tempDeviceAddress); //Serial.print("Temperature="); //Serial.println(tempC); #if COMPARE_TEMP == 1 // Only send data if temperature has changed and no error if (lastTemperature != tempC && tempC != -127.00 && tempC != 85.00) { #else if (tempC != -127.00 && tempC != 85.00) { #endif // Send in the new temperature send(tempMsg.set(tempC, 1)); // Save new temperatures for next compare lastTemperature = tempC; } } delay(SLEEP_TIME); } void receive(const MyMessage &message) { // We only expect one type of message from controller. But we better check anyway. if (message.type == V_STATUS) { // Change relay state digitalWrite(RELAY_PIN, message.getBool() ? RELAY_ON : RELAY_OFF); } }
Maybe a communication problem, but the command does work. Don't really see it as a problem that the gateway has to send the command twice sometimes, there is maybe a 0.5-1s delay because of it. But it is a problem that the controller screams error every time that occurs.
Don't know how a controller and gateway communicates, but it feels like it should be the gateway that tells the controller that it have failed to send the command. And I can't think that it does that at the same time as it sends the command again to my relay?
Or is it the controller that timeout to fast even though the gateway haven't reported the right status?
@raptorjr Try changing the delay in your loop to a wait statement.
wait(SLEEP_TIME); // delay(SLEEP_TIME);
When you use a delay the Arduino completely delays your sketch meaning no incomming events can be processed. When you use a MySensors wait. The Arduino doesn't delay your sketch and will be able to handle incomming messages.
|
https://forum.mysensors.org/topic/4658/solved-error-sending-switch-command
|
CC-MAIN-2018-30
|
refinedweb
| 816
| 54.02
|
NAME
vm_map_remove -- remove a virtual address range from a map
SYNOPSIS
#include <sys/param.h> #include <vm/vm.h> #include <vm/vm_map.h> int vm_map_remove(vm_map_t map, vm_offset_t start, vm_offset_t end);
DESCRIPTION
The vm_map_remove() function removes the given address range bounded by start and end from the target map.
IMPLEMENTATION NOTES
This is the exported form of vm_map_delete(9) which may be called by consumers of the VM subsystem. The function calls vm_map_lock(9) to hold a lock on map for the duration of the function call.
RETURN VALUES
The vm_map_remove() always returns KERN_SUCCESS.
SEE ALSO
vm_map(9), vm_map_delete(9)
AUTHORS
This manual page was written by Bruce M Simpson <bms@spc.org>.
|
http://manpages.ubuntu.com/manpages/oneiric/man9/vm_map_remove.9freebsd.html
|
CC-MAIN-2014-35
|
refinedweb
| 113
| 67.25
|
My Hashtable in Java would benefit from a value having a tuple structure. What data structure can I use in Java to do that?
Hashtable<Long, Tuple<Set<Long>,Set<Long>>> table = ...
I don't think there is a general purpose tuple class in Java but a custom one might be as easy as the following:
public class Tuple<X, Y> { public final X x; public final Y y; public Tuple(X x, Y y) { this.x = x; this.y = y; } }
Of course, there are some important implications of how to design this class further regarding equality, immutability, etc., especially if you plan to use instances as keys for hashing.
|
https://codedump.io/share/oJpvQrT0APOD/1/using-pairs-or-2-tuples-in-java
|
CC-MAIN-2017-13
|
refinedweb
| 110
| 65.42
|
Answered by:
Unable to send email through c# in BizTalk orchestration
Question
Hi Guys,
I created a orchestration which send the mail to client if any error occurs inside orchestration.So I created a c# library and call that library from orchestration. c# code is as below :
But after executing the code ,I got the below error:
Please help !
Thanks
Answers
Hi Shivay,
Refer the article:
Primarily focus on pointers 3 and 4 of the above link.
Also refer the discussion here:
Rachit Sikroria (Microsoft Azure MVP)
All replies
Hi Shivay,
Refer the article:
Primarily focus on pointers 3 and 4 of the above link.
Also refer the discussion here:
Rachit Sikroria (Microsoft Azure MVP)
Hi,
I also faced the same issue, the problem is on your username and password or the from mailId.
You can try using the below code to test your SMTP connection.
using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.Net.Security; using System.Net.Sockets; using System.Text; using System.Threading.Tasks; using System; using System.Net.Mail; namespace SMTPConnection { class Program { static void Main(string[] args) { using (var client = new TcpClient()) { var server = "smtp.gmail.com"; var port = 25; MailMessage mail = new MailMessage(); SmtpClient SmtpServer = new SmtpClient(server,port); mail.From = new MailAddress("ahh@gmail.com"); mail.To.Add("ahh@gmail.com"); mail.To.Add("ahaboo@gmail.com"); mail.Subject = "Test Mail"; mail.Body = "This is for testing SMTP mail"; SmtpServer.Port = 25; SmtpServer.Credentials = new System.Net.NetworkCredential("username", "password"); SmtpServer.EnableSsl = true; SmtpServer.Send(mail); Console.WriteLine("mail Send"); } } } }
When i given the wrong password i received the following error:
Try using proper username and password, it should work properly.
Regards, Aboorva Raja R Please remember to mark the replies as answers if they help and unmark them if they provide no help.
Hi Shiva,
Use the below option to send email.
This url will turn on ur ssl. I tried it worked for me.
Regards, Aboorva Raja R Please remember to mark the replies as answers if they help and unmark them if they provide no help.
|
https://social.msdn.microsoft.com/Forums/en-US/ce8e910a-4bef-442b-b5e3-583df2ef260d/unable-to-send-email-through-c-in-biztalk-orchestration?forum=biztalkgeneral
|
CC-MAIN-2020-40
|
refinedweb
| 352
| 52.66
|
What do I misunderstand about rgbled()?
I'm new to Python and new to Pycom (lopy4) so this might turn out to be a silly question, but if I run this code fragment (LoRa init taken care of in boot.py and working)
red=0x330000 green=0x003300 blue=0x000033 senddata = 0xaa switch = Pin('P10', mode = Pin.IN) while True: if(switch() == 0): rgbled(red+green+blue) # white #print("Button pressed") retdata=loraSendReceive(senddata) print("Data transmitted: ", senddata) if(len(retdata) != 0): print("Data received: ", retdata) time.sleep(1) #print("Button not pressed") rgbled(green) # Green
I would expect the rgb led to turn green immediately and white for as long as it takes to send a byte plus the 1 sec sleep as soon as the button is pressed. That does not happen. The led turns green and stays green after the first button press. Messages are sent and received via LoRaWan.
If I uncomment the 'button' print statements those are working as expected. It makes a mess for printing a lot of messages but at least the program shows it is going where it is expected to go. The led stays green.
What am I missing in turning on the rgb led?
And precise enough to create the timing signals for a WS2812! I remember writing a piece of C-code for a PIC microcontroller controlling a whole string of these leds and wrestling with that for a bit.
Thanks for your support. I was getting a little bit frustrated with the lopy on my bench but am starting to realize now that the difficult bit here is not in the networking but in the way a high-level language like Python interfaces with low level hardware. Just more stuff to learn :-)
@Paul-Holthuizen That is the Remote Controller Unit of the ESP32. Intended to send and receive messages for remote controllers. But it is also fine for creating and timing bit patterns with high precision. See
For my benefit: what's the RMT?
@Gijs said in What do I misunderstand about rgbled()?:
I would not have expected that and figured the method to be blocking until the LED color actually changed. Do you think this is a bug, or a feature?
Using the RMT is definitly a feature. The code actually waits until a previous transmission is finished. And that's the heart of my PR. I just insert another wait pulse cycle pair to the RMT chain, which causes a sufficient delay. I was not able to find the place causing the 50µs delay which is initially there.
@Gijs Not really. It is not that a follow up call to rgbled() squashes the previous one. It is the opposite. If the follwow-up call is too early, it will be ignored. And in the example it was the rgbled(white) which came too soon, only ~50 µs after the previous one.
The messages themselves had proper structure, only the time between them was too short.
Adding more time (here 144 µs) made it work.
I made a PR ensuring at least 280 µs, because that's what I found in the datasheet of the WS2812B-V4 (opposed to 50µs in the WS2812B datasheet).
So if I understand this correctly, the loop
pycom.heartbeat(False) While True: rgbled(green)
does not show a green color on the led because we call it too fast, leaving no time to actually change the led as we queue another task to change the led color right after the first one, squashing the first one? That would make sense.
Thanks for the thorough explanation! I would not have expected that and figured the method to be blocking until the LED color actually changed. Do you think this is a bug, or a feature?
Gijs
@Paul-Holthuizen Adding a small sleep time into a busy loop is anyhow recommended.
In this case, pycom.rgbled() does not execute asynchronously as a function. It's the sending process that is started. I'm not sure if it is operating in hardware of software. Maybe similar to the UART, where a 128 byte queue is located in the UART hardware, which can be sent or received w/o any software/firmware interaction.
The RTOS is documented on the espressif web pages. The only complication may be, that Pycom does not use the latest release, and also a tailored version. But the previous versions should be available to at espressif.
Edit: I also use a small test script along your code for verification:
import pycom import time from machine import Pin pycom.heartbeat(False) white=0x333333 green=0x003300 p = Pin("P9", Pin.IN, Pin.PULL_UP) time.sleep(0.5) while(True): if(p() == 0): pycom.rgbled(white) print("Data verzonden: ") time.sleep(1) pycom.rgbled(green) time.sleep_us(200)
This seems a reasonable explanation. And it gives me an idea of how to handle this particular problem. However, it also gives me something to think about. How do I know what functions are executed asynchronously?
This is by no means meant to be disapproving but how am I going to use inputs and outputs if I do not know beforehand how these are being handled by Pycom? Are these handled by an (interrupt driven) RTOS or not? If so, is there a description of the RTOS that handles all of this?
I am really impressed with the ability to make a solid network connection with only a couple of lines of code, but I still struggle with the connect-to-hardware side of things with the Lopy device. No doubt a learning curve.
@tuftec The calls to the led function are pretty simple. The function mod_pycom_rgb_led() in modpycom.c calls
bool led_set_color() in modled.c, which then calls rmt_write_items() to transfer the data bit pattern to the RGB led, AFTER waiting that the previous transmission has finished. Maybe the wait time is too short. The WS2812 requires reset time of 50µs between two transmissions. I see no wait period for that in the code, but in the actual timing on P2 I see that delay.
But it still only works with my test code, if I increase the delay to 200µs.
Edit: Making a few trials it turns out, that up to a time gap of about 125µs between two commands to the RGB led, the second command is ignored. From about 150µs on, the second command will be executed. I do not know which RGB led is on the board. The OEM baseboard schematics show the WS2812B which is specified for 3.5V min, and I replaced once successfully a defective RGB led with a WS2812B. So it might work to some extend. But the mandatory reset time may be longer than 50 µs in that case.
I have been following this thread as it could explain some issues I have been chasing in other areas. If the rgb led is in fact controlled by the underlying RTOS then that would explain a lot. You can not just simply look at your python code and work out what is happening. Calls to the led function could get queued or dropped without you knowing. I have observed strange timing behaviour when dealing with the LTE component on the FiPy which is handled by a totally seperate processor/modem. Am I correct in reading somewhere that some of the pycom modules (like LoRa) run on the second core, which call also introducing timing issues, particularly when not blocking.
Peter.
@Paul-Holthuizen I have an possible reason. In this version of the code:
while(True): if(switch() == 0): rgbled(white) retdata=loraSendReceive(senddata) print("Data verzonden: ", senddata) if(len(retdata) != 0): print("Data ontvangen: ", retdata) rgbled(green)
You are sending rgbled(green) very fast, when the button is NOT pressed. That may lock up the RGB, such that other calls to rgbled(white) are ignore. You could try to insert a small delay, like 1 ms, into the while(True) loop, just to give rgbled(green) time to complete.
Note: The signal pattern for the RGB led is created by the RMT unit. You load it once with a bit pattern, and then the transmission is done while the calling code can already return. So it's asynchronous to the code, which makes this kind interference possible.
No, I do not debounce the switch at all. For the loraSendReceive() takes always the same 2135 ms to finish which is a debounce in itself.
I also see only one print statement on the console after pressing the switch, and only one message gets sent out.
@Paul-Holthuizen Another question: do you debounce the switch reading in the function switch(). And is there a code path by which the function loraSendReceive() return very fast?
Interesting, and thanks for thinking along on this!
However, in my case the time between setting the led white and then setting it back to green again is always at least 2135 ms or the time it takes loraSendReceive() to finish. This time is timed using Timer.chrono().
So I do not think I simply color switch the led too fast to be seen or to be executed by rgbled() unless something weird happens (like subsequent code executing before loraSendReceive() is finished.
Furthermore it is weird that the place of rgbled(green) seems to have an impact; when placed under if(len(retdata) != 0): it functions, and when placed under if(switch() == 0): it does not. Not even when I insert a time.sleep(1) right before it to prevent rgbled(green) from being called too often. In the latter case I have to hold the switch for at least a second, after which it does send the message, but still refuses to set the led to white.
@Paul-Holthuizen I wrote a little program to test wioth the RGB led, how short can pulses be made such that they can still be seen.
- If it is just a on/off pulse, it can go down to 100µs and may less
- if it is a switch between green and short white, it requires about 20 ms and more to be clearly noticeable.
BUT. There is an interesting phenomenon. If the time between two RGB Led calls is very short, like 20µ or less, the LED may stick in one state and requires a long on/of cycle to recover.
Test code below. I borrowed the color names & values from your code.
import pycom pycom.heartbeat(False) import time red=0x330000 green=0x003300 blue=0x000033 white = red + green + blue black = 0 pycom.rgbled(black) while True: q = input("Time") if q == "q": break t = int(q) pycom.rgbled(white) time.sleep_us(t) pycom.rgbled(black)
@Paul-Holthuizen I woudl not expect the MicroPython core to be have wrong. It has been thoroughly tested. I never ran into an error at that program logic level.
Two seconds it the time LoRaWAN waits for a downlink packet from the host. So it could be that sometimes the send in loraSendReceive() blocks for the 2 + x seconds, and sometimes it returns immediately. That may be related to timing differences between the MicroPython task and the LoRa task.
If you have a logic analyzer, you could set and clear a GPIO pin immediately before and after the loraSendReceive() call. You could also connect a LED to it. A simple LED may work better than the RGB led, which requires a complex message. I do not know if this message is sent immediately or e.g. upon the next tick of a timer
No threads, callbacks or triggers, at least not that I am aware of. And I changed the led colors as part of the debugging, no changes.
There is one thing that strikes me as odd. If I upload the code as per sample 1 (the working one) everything works fine. I press the button, the led lights white, the message gets sent, the led lights green again.
Now, if I remove the USB cable and attach a LiPo battery the Lopy 4 reboots, the led lights white once, then goes and stays green. All the while messages are sent every couple of seconds, as if the switch is held in (which it is not).
Remove the battery, reconnect the USB, and after the boot (without me uploading anything) everything works as expected again. This is reproducibly.
I am starting to wonder if I might have a defective device. Or if the firmware upgrade I did upon receiving the lopy went south somehow. There were a couple of choices that did not make much sense at the time, e.g. the 'type' parameter (which I set to 'development')
I will re-program the fw and see if that somehowmakes a difference.
@Paul-Holthuizen that is indeed weird. I wondered if maybe you had an issue mixing spaces and tabs which could have caused the indentation to not be what you thought, but I don’t think you could get that result in that case anyway (you should probably still double-check, just in case).
Do you use any threads? Are there any callbacks/triggers?
Can you try using two other colours there just to be sure it isn’t something else changing the LED colour?
@jcaron said in What do I misunderstand about rgbled()?:
.
The socket is set to blocking right before sending off the data and reset to non-blocking right after.
Data sent is only one byte; this takes ~50ms airtime but a lot longer in the Pycom code; measuring it with Timer I found it takes 2135 ms to execute the retdata=loraSendReceive(senddata) statement.
What I do not understand is that my first snippet
while(True): if(switch() == 0): rgbled(white) retdata=loraSendReceive(senddata) print("Data verzonden: ", senddata) if(len(retdata) != 0): print("Data ontvangen: ", retdata) rgbled(green)
works as expected, while the second
while(True): if(switch() == 0): rgbled(white) retdata=loraSendReceive(senddata) print("Data verzonden: ", senddata) if(len(retdata) != 0): print("Data ontvangen: ", retdata) rgbled(green)
does not.
In both snippets the program should show a white led while waiting for the data being sent off. But in the second snippet it looks like the if(switch() == 0): statement continues before the program returns from sending the data.
Either something in Pycom is broken or I completely miss the way IF statements are handled in Python / Pycom :-)
|
https://forum.pycom.io/topic/6674/what-do-i-misunderstand-about-rgbled
|
CC-MAIN-2022-33
|
refinedweb
| 2,402
| 73.47
|
smfi_main
SYNOPSIS
#include <libmilter/mfapi.h> int smfi_main( );
Hand control to libmilter event loop.
DESCRIPTION
Called When
smfi_main is called after a filter's initialization is complete.
Effects
smfi_main hands control to the Milter event loop.
RETURN VALUES
smfi_main will return MI_FAILURE if it fails to establish a connection. This may occur for any of a variety of reasons (e.g. invalid address passed to
smfi_setconn
). The reason for the failure will be logged. Otherwise, smfi_main will return MI_SUCCESS.
Copyright (c) 2000, 2003 Sendmail, Inc. and its suppliers. All rights reserved.
By using this file, you agree to the terms and conditions set forth in the LICENSE.
|
http://www.mirbsd.org/htman/i386/manDOCS/milter/smfi_main.html
|
CC-MAIN-2013-20
|
refinedweb
| 107
| 63.46
|
are looking for previous installation instructions for different platforms, please consult this list:
-.
Otherwise, let’s proceed with getting OpenCV 3 with Python bindings installed on Raspian Stretch!
The quick start video tutorial
If this is your first time installing OpenCV or you are just getting started with Linux I highly suggest that you watch the video below and follow along with me as you guide you step-by-step on how to install OpenCV 3 on your Raspberry Pi running Raspbian Stretch:
Otherwise, if you feel comfortable using the command line or if you have previous experience with Linux environments, feel free to use the text-based version of this guide below.
Assumptions
In this tutorial, I am going to assume that you already own). The former instructions take approximately 10 minutes to download via a torrent client and about 10 minutes to flash the SD card at which point you can power up and proceed to the next section..
Assuming that your OS is up to date, you’ll need one of the following for the remainder of this post:
- Physical access to your Raspberry Pi 3 (you’ll need an HDMI cable and a keyboard/mouse) or running sudo service ssh start from the command line of your Pi.
After you’ve changed the setting and rebooted, you can test SSH directly on the Pi with the localhost address. Open a terminal and type ssh pi@127.0.0.1 to see if it is working.
Keyboard layout giving you problems? Change your keyboard layout by going to the Raspberry Pi desktop preferences menu. I use the standard US Keyboard layout, but you’ll want to select the one appropriate for your keyboard or desire (any Dvorkac users out there?).
Installing OpenCV 3 on a Raspberry Pi 3 running Raspbian Stretch.
Let’s go ahead and get started installing OpenCV 3 on your Raspberry Pi 3 running Raspbian Stretch.
Step #1: Expand filesystem
Are you using a brand new install of Raspbian Stretch?
If so, the first thing you should do is expand your filesystem to include all available space on your micro-SD card:
And then select the “Advanced Options” menu item:
Followed by selecting “Expand filesystem”:
Once prompted, you should select the first option, “A1. Expand File System”, hit Enter on your keyboard, arrow down to the “<Finish>” button, and then reboot your Pi — you may be prompted to reboot, but if you aren’t you can execute: 32GB of the micro-SD card.
However, even with my filesystem expanded, I have already used 15% of my 32GB card.
If you are using an 8GB card you may be using close to 50% of the available space, so one simple thing to do is to delete both LibreOffice and Wolfram engine to free up some space on your Pi:
After removing the Wolfram Engine and LibreOffice, you can reclaim almost 1GB!
Step #2: Install dependencies
This isn’t the first time I’ve discussed how to install OpenCV on the Raspberry Pi, so I’ll keep these instructions on the brief:
Timing: 2m 14s
We then need to install some developer tools, including CMake, which helps us configure the OpenCV build process:
Timing: 19s
Next, we need to install some image I/O packages that allow us to load various image file formats from disk. Examples of such file formats include JPEG, PNG, TIFF, etc.:
Timing: 21s
Just as we need image I/O packages, we also need video I/O packages. These libraries allow us to read various video file formats from disk as well as work directly with video streams:
Timing: 32s
The OpenCV library comes with a sub-module named highgui which is used to display images to our screen and build basic GUIs. In order to compile the highgui module, we need to install the GTK development library:
Timing: 1m 36s
Many operations inside of OpenCV (namely matrix operations) can be optimized further by installing a few extra dependencies:
Timing: 23s
These optimization libraries are especially important for resource constrained devices such as the Raspberry Pi.
Lastly, let’s install both the Python 2.7 and Python 3 header files so we can compile OpenCV with Python bindings:
Timing: 45s.
Step #3: Download the OpenCV source code):
Timing: 41s
We’ll want the full install of OpenCV 3 (to have access to features such as SIFT and SURF, for instance), so we also need to grab the opencv_contrib repository as well:
Timing: 37s:
Timing: 33s: 35 favorite, just use the source command:
Note: I recommend running the source ~/.profile file each time you open up a new terminal to ensure your system variables have been setup correctly.
Creating your Python virtual environment
Next, let’s create the Python virtual environment that we’ll use for computer vision development:
This command will create a new Python virtual environment named cv using Python 2.7.
If you instead want to use Python 3, you’ll want to use this command instead:
Timing: 24s! The mkvirtualenv command is meant to be executed only once: to actually create the virtual environment.
After that, you can use workon and you’ll be dropped down into your virtual environment:
To validate and ensure you are in the cv virtual environment, examine your command line — if you see the text (cv) preceding your prompt, then you are in the cv virtual environment:
Figure 3: Make sure you see the “(cv)” text on your prompt, indicating that you are in the cv virtual environment.
Otherwise, if you do not see the (cv) text, then you are not in the cv virtual environment:
Figure 4:). Our only Python dependency is NumPy, a Python package used for numerical processing:
Timing: 11m 12s
Be sure to grab a cup of coffee or go for a nice walk, the NumPy installation can take a bit of time.
Note: A question I’ve often seen is “Help, my NumPy installation has hung and it’s not installing!” Actually, it is installing, it just takes time to pull down the sources and compile. You can verify that NumPy is compiling and installing by running top . Here you’ll see that your CPU cycles are being used compiling NumPy. Be patient. The Raspberry Pi isn’t as fast as your laptop/desktop.
Step #5: Compile and Install OpenCV
We are now ready to compile and install OpenCV! Double-check that you are in the cv virtual environment by examining your prompt (you should see the (cv) text preceding it), and if not, simply execute workon :
Once you have ensured you are in the cv virtual environment, we can setup our build using CMake:
Timing: 2m 56s path , similar to my screenshot below:
Figure 6:.
Similarly, if you’re compiling OpenCV for Python 3, make sure the Python 3 section looks like the figure below:
Figure 6: Checking that Python 3 will be used when compiling OpenCV 3 for Raspbian Stretch on the Raspberry Pi 3.:
Timing: 1h 30m
Once OpenCV 3 has finished compiling, your output should look similar to mine below:
From there, all you need to do is install OpenCV 3 on your Raspberry Pi 3:
Timing: 52s
Step #6: Finish installing OpenCV on your Pi
We’re almost done — just a few more steps to go and you’ll be ready to use your Raspberry Pi 3 with OpenCV 3 on Raspbian Stretch.
For Python 2.7:
Provided your Step #5 finished without error, OpenCV should now be installed in /usr/local/lib/python2.7/site-pacakges . You can verify this using the ls command:
Note: In some cases, OpenCV can be installed in /usr/local/lib/python2.7/dist-packages (note the dist-packages rather than site-packages ). If you do not find the cv2.so bindings in site-packages , we be sure to check dist-packages .
Our final step is to sym-link the OpenCV bindings into our cv virtual environment for Python 2.7:
For Python 3:
After running make install , your OpenCV + Python bindings should be installed in /usr/local/lib/python3.5/site-packages . Again, you can verify this with the ls command::
After renaming to cv2.so , we can sym-link our OpenCV bindings into the cv virtual environment for Python 3:
As you can see from the screenshot of my own terminal, OpenCV 3 has been successfully installed on my Raspberry Pi 3 + Python 3.5 environment:
Figure 8: Confirming OpenCV 3 has been successfully installed on my Raspberry Pi 3 running Raspbian Stretch.
Once OpenCV has been installed, you can remove both the opencv-3.3.0 and opencv_contrib-3.3.0 directories to free up a bunch of space on your disk::
Notice that I’ve commented out the 1024MB line and uncommented the 100MB line.
If you skip this step, your memory card won’t last as long. As stated above, larger swap spaces may lead to memory corruption, so I recommend setting it back to 100MB.
To revert to the smaller swap space, restart the swap service:
Troubleshooting and FAQ
Q. When I try to execute mkvirtualenv and workon , I get a “command not found error”.
A. There are three reasons why this could be happening, all of them related to Step #4:
- Make certain that you have installed virtualenv and virtualenvwrapper via pip . You can check this by running pip freeze and then examining the output, ensuring you see occurrences of both virtualenv and virtualenvwrapper .
- You might not have updated your ~/.profile correctly. Use a text editor such as nano to view your ~/.profile file and ensure that the proper export and source commands are present (again, check Step #4 for the contents that should be appended to ~/.profile .
- You did not source your ~/.profile after editing it, rebooting, opening a new terminal, etc. Any time you open a new terminal and want to use a virtual environment, make sure you execute source ~/.profile to load the contents — this will give you access to the mkvirtualenv and workon commands.
Q. After I open a new terminal, logout, or reboot my Pi, I cannot execute mkvirtualenv or workon .
A. See reason #3 from the previous question.
Q. When I (1) open up a Python shell that imports OpenCV or (2) execute a Python script that calls OpenCV, I get an error: ImportError: No module named cv2 .
A. Unfortunately, this error is extremely hard to diagnose, mainly because there are multiple issues that could be causing the problem. To start, make sure you are in the cv virtual environment by using workon cv . If the workon command fails, then see the first question in this FAQ. If you’re still getting an error, investigate the contents of the site-packages directory for your cv virtual environment. You can find the site-packages directory in ~/.virtualenvs/cv/lib/python2.7/site-packages/ or ~/.virtualenvs/cv/lib/python3.5/site-packages/ (depending on which Python version you used for the install). Make sure that your sym-link to the cv2.so file is valid and points to an existing file.
Q. I’m running into other errors.
A. Feel free to leave a comment and I’ll try to provide guidance; however, please understand that without physical access to your Pi it can often be hard to diagnose compile/install errors. If you’re in a rush to get OpenCV up and running on your Raspberry Pi be sure to take a look at the Quickstart Bundle and Hardcopy Bundle of my book, Practical Python and OpenCV. Both of these bundles include a Raspbian .img file with OpenCV pre-configured and pre-installed. Simply download the .img file, flash it to your Raspberry Pi, and boot! This method is by far the easiest, hassle free method to getting started with OpenCV on your Raspberry Pi., we learned how to upgrade your Raspberry Pi 3‘s OS to Raspbian Stretch and to install OpenCV 3 with either Python 2.7 or Python 3 bindings.
If you are running a different version of Raspbian (such as Raspbian Wheezy) or want to install a different version of OpenCV (such as OpenCV 2.4), please consult the following tutorials:
-.
Are you looking for a project to work on with your new install of OpenCV on Raspbian Stretch? Readers have been big fans of this post on Home surveillance and motion detection with the Raspberry Pi, Python, OpenCV, and Dropbox.
But before you go…
I tend to utilize the Raspberry Pi quite a bit on this blog, so if you’re interested in learning more about the Raspberry Pi + computer vision, enter your email address in the form below to be notified when these posts go live!
Adrian. I want to ask something.
I have the Raspberry Pi 2 (and its camera module), but I just don’t know what kind of project I can do with it.
Since I am busy, if possible, I want to make a project that can contribute the most to what I am learning right now (mainly machine learning). Do you have any idea?
I were thinking of using it for scrapping data, but do not know where to begin. I would be very happy if you could recommend some suggestions.
Thanks!
Hi Hilman — the Raspberry Pi 2 is a bit underpowered so I wouldn’t recommend training a machine learning classifier on your Pi, but I could see deploying one. Have you considered training an image classifier to recognize a particular object on your laptop/desktop and then actually running it on your Raspberry Pi?
Also, keep in mind that all chapters inside Practical Python and OpenCV will run on the Raspberry Pi. Go through any of those chapters and you can execute the projects on the Pi (such as face detection + tracking). Those chapters make for excellent starting points for projects.
I hope that helps!
Ah… I forgot about that book. Will take a look at it later. Thanks!
Hi Adrian
for me i don’t know why the comment bar is not showing up, so i decided to write in the reply section.
ok for me i find there is an error after make -j4 step.every thing up till then worked fine,but don’t know what went wrong. I just followed your post, i am not familiar with terminal window commands .please help.
thank you
Can i install OpenCV- 2.4.9 in Raspbian stratch…..?
It’s a bit of a pain, to be honest. You’ll need to combine the steps from this tutorial along with original one on installing OpenCV 2.4 on Raspbian Wheezy. Be prepared to run into problems and do a bit of debugging.
Thq…..very much Adrian . I successfully OpenCV-3.3.0 with Python-3.5 in pi 3.Only problem, it takes 6h duration . Anyway i install Opencv .
Congrats on getting OpenCV installed!
manda como lo isiste amigo
Hi Admin,
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
…
error: (-215) scn == 3 || scn == 4 in function cvtColor
this’s my error. can I know how to fix this error.
thankz in advance
Double-check your path to the input image. It sounds like the path is incorrect and cv2.imread is returning None.
I was the same issue, it was solved with the following command:
sudo modprobe bcm2835-v4l2
BTW: Thanks a lot Adrian, you’re a Rock!!!!
Thanks Oscar, and congrats on getting it working 🙂
Great post as always.
I wanted to share a neat little trick: You can actually enable SSH on the RPi just by placing an empty “ssh” file (case sensitive!) in the root of the sd card (once Raspbian is flashed).
This makes running a 100% “headless” RPi possible: From there you can keep working with SSH or enable VNC and see the desktop from there.
Wow, that’s a neat trick! Thanks for sharing Jorge.
Your page has been explained in an easy-to-understand and polite manner, so it’s always helpful very much.
I immediately installed the CV with reference to this page.
However, Python IDLE in the program menu causes an error in importing. How can I use CV from IDLE?
Hi Hidenori — as far as I understand, the GUI version of Python IDLE does not support virtual environments, thus you cannot use it. I would suggest you use IDLE via the command line (so you can access the Python virtual environment) or use Jupyter Notebooks. I hope that helps!
how to install open cv 3 without installing virtual environments?
Don’t install virtualenv/virtualenvwrapper and don’t use the “mkvirtualenv” command to create a Python virtual environment. You’ll need “sudo” permission to install any pip-based packages.
I already installed virtual environments. Now I’m not able to import cv2 in Idle. Any solution? I Don’t want to access Idle from command line.
Python IDLE does not respect Python virtual environments. You would need to the command line. Another approach would be to install Jupyter Notebooks which will give you an incredibly powerful IDLE-like environment.
If you aren’t going to run the tests, you can save a fair amount of compile time by not including them. To do that, add “-D BUILD_TESTS=OFF” and “-D BUILD_PERF_TESTS=OFF” to the CMake command line.
Great tutorial.
All your ” Quickstart Bundle and Hardcopy Bundle book, Practical Python and OpenCV” I bought are a jewel i am happy to have invested in.
in your blog on “Drowsiness detection with OpenCV of May 8, 2017 in dlib, Facial Landmarks, Tutorials” you suggested :
“If you intend on using a Raspberry Pi for this, I would:
1. Use Haar cascades rather than the HOG face detector. While Haar cascades are less accurate, they are also faster.
2. Use skip frames and only detect faces in every N frames. This will also speedup the pipeline.”
I have Pi 3, Kindly do a tutorial with an example to implement the above options.
Hi Abkul — thank you for picking up a copy of Practical Python and OpenCV, I appreciate your support! And yes, I will be covering an updated drowsiness detector for the Raspberry Pi in the future. I can’t say exactly when this will be as I’m very busy finishing up the new deep learning book, but it will happen before the end of the year.
Where new blog about pixel by pixel for loops
I already covered the blog post you are referring to here. I’ll also be doing an updated one on OpenMP in the future.
Followed step-by-step and it worked like a charm. Compile took about 4 hours, as expected. Thanks!
Congrats on getting OpenCV installed, Rick! Nice job.
Hi andrian
thanks for sharing
I have success installed opencv to raspberry pi 3
thanks to your guidance
but I wonder
for compiling OpenCV 3 for Python 2.7 and python3.5
the libraries, numpy and site packages only for phyton 3
I tried for 3x
the result are still same
any idea about that ?
You would need to create two separate Python virtual environments. One for Python 2.7 and one for Python 3. Form there you can run CMake + make from inside each virtual environment to build OpenCV.
Worked like a charm..mind you I installed opencv3.3 rather than 3.1 … still worked great 😀
Congrats on getting OpenCV installed, Charles! Nice job.
Upon following this latest tutorial The compile did hang up around 91% using 4 cores on RPi2. Following the tutorial RPi2 on Jessie however compiled on Stretch using all 4 cores without issue.
Thank you for sharing your experience, Larry!
I am following this install on a Pi 2b. If I backup the sd card – will it work on a Pi 3 ?
Hi Adrian,
I installed OpenCv on my raspberry pi3 (2017-08-16-raspbian-stretch) by following step by step your latest tutorial.
The compilation went well and the result is similar to yours.
When I launch a simple program, see what it returns me:
(cv) pi@raspberrypi:~ $ sudo modprobe bcm2835-v4l2
(cv) pi@raspberrypi:~ $ python script.py
Unable to init server: Could not connect: Connection refused
(video test:1029): Gtk-WARNING **: cannot open display:
(cv) pi@raspberrypi:~ $
This program works fine on my computer with Linux Mint.
Thanks for your reply and sorry for my approximate English.
Bonjour de France 🙂
Stéphane.
How are you accessing your Raspberry Pi? Over SSH? Enable X11 forwarding when you SSH into your Pi:
$ ssh -X pi@your_ip_address
Via ssh well on 🙂
Ok I’ll try the x11 server activation
cordially
The issue with the multi threaded build is the lack of size of the swap file. You need to increase it to something like 1GB for doing intensive builds.
I tried serveral times and did follow your great tutorial in detail. Anyhow, I am not able to get the virtualenvwrapper working (and, due to this, I have issues later on).
After the installation of virtualenv and virtualenvwrapper (both were successful, including the dependancies) and updating the ~/.profile file, I always get the error:
pi@raspberrypi:~ $ source ~/.profile
.
I did not find any clue to overcome this problem.
Any help is appreciated.
Which Python version were you trying to install OpenCV + Python for? Python 2.7? Or Python 3?
I’m trying to do 2.7 and get the same error
I would suggest explicitly setting your Python version for virtualenvwrapper inside your
.profilefile, like this:
Thanks …. It works.
Awesome, I’m glad to hear it 🙂
Thanks. I had the same error. This solved it.
Awesome, I’m glad it worked for you, Manu! 🙂
add below code in ~/.profile along with other part, it worked for me
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
thank you guys
it is work when adding
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
In setting up the ~/.profile, I had to add the line: “export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3” before “source /usr/local/bin/virtualenvwrapper.sh”. Otherwise, I got the “No module named virtualenvwrapper.hook_loader” error.
I also had to add export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3” otherwise I“No module named virtualenvwrapper.hook_loader” error.
But what is the implication for the next step: mkvirtualenv cv -p python2
Thanks
It should have have an impact.
I am using Python 3 and was able to fix the problem by running this command:
sudo pip3 install virtualenv virtualenvwrapper
using pip3 instead of pip to make the virtual env.
Well, it is very strange for my case. I installed on python 2.7, was seeing”no module named virtualenvwrapper”.
I added ”
virtualenv and virtualenvwrapper
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python2
source /usr/local/bin/virtualenvwrapper.sh” Still didn’t fix the problem.
Then I modified to “export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3”, the error is magically gone.
Just a word of warning: Stretch takes more space than Jessie (~4GB vs ~3GB) so to prevent OpenCV from building on a 8GB card.
Thanks for sharing, Tom!
Did not work on the 8GB, not enoght space. Fail in make -j4 at 30%.
Going for the no desktop version for more space
In setting up the ~/.profile, I had to add the line: “export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3” before “source /usr/local/bin/virtualenvwrapper.sh”. Otherwise, I got the “No module named virtualenvwrapper.hook_loader” error.
Another note: It is possible to run the make with 4 cores. Stretch suffers a bit from software bloat, so the 2GB of memory isn’t sufficient to compile with 4 cores. Edit /etc/dphys-swapfile
and change CONF_SWAPSIZE to 2048 and reboot. You can then do make -j4. you also need to do the same thing to install dlib as that will also hang under stretch when it runs out of memory.
Great point Stephen, thank you for sharing.
How exactly is this done? I can’t save any changes to the swapfile after I make them.
Use sudo when you open the file
i.e. sudo vim /etc/dphys-swapfile
substitute vim for your preferred editor (nano, etc)
premision denied.. (with nano)
Make sure you use “sudo” to give yourself the proper permissions to edit the file.
why can’t I import cv2 directly from python shell? but when I go to terminal and write work on and import cv2 then it works.
But I like to import cv2 from the script. so what can I do for that? I am using python 2.7 and pi3.
please help me.
To clarify, are you trying to execute a Python script from your terminal? If so you still need to use “workon” before executing the script (workon only needs to be executed once):
Same problem here, except i’m using Python 3.5.3. Did you find a solution ? Does anyone have a solution ?
I got it to import from my python script using the following code (this is on the pre-configured version of raspbian that comes with the Practical Python and OpenCV + Case Studies starter bundle.
#!/usr/bin/python
activate_this = “home/pi/.virtualenvs/py2cv3/bin/activate_this.py”
execfile(activate_this, dict(__file__=activate_this))
import cv2
for more details here’s the site I found:
Hey Adrian Rosebrock ! Thanks for such a wonderful and simple tutorial, currently i am installing opencv (currently at installing numpy) i would like to know whether some of the open cv modules are dependent or matplotlib ? If so is it similar to installing like numpy ? with cv virtual environment “pip install matplotlib”
No, there are no OpenCV modules that are dependent on matplotlib. You can install it via:
$ pip install matplotlib
Thanks ! I was successfully able to install open cv on my pi3 . At end you forgot to mention about deleting those zips (open cv and contrib ) file which might add few more mb of free space.
How do we access the environment variable in VNC or desktop terminal as” source ~/.profile “and” workon cv ” give invalid option . SSH i am able to get the cv environment.
Running:
Will work over SSH, VNC, and terminal on the desktop.
I’m not sure what error message you are getting, but again, the above commands will work just fine on all setups (provided there is not a misconfiguration, of course).
Yes ! Now i am able run these line without any errors fron VNC ,dont kown what was the reason for such error . Anyways thanks !
It does not worked for me. I took Memory Error, although I have memory.
I tried “pip –no-cache-dir install matplotlib” command. Although I took again another error, at the end of I have installed matplotlib.
Correct, using:
$ pip install matplotlib --no-cache-dir
Will help if you are running into a MemoryError.
Hi Adrian,
I have followed your tutorial to install opencv3 using python3 in raspberri Pi3 stretch.
I ran following commands:
source ~/.profile
workon cv
python file_name.py
error:
Traceback (most recent call last):
File “face_detection.py”, line 2, in
importError : No module named ‘picamera’
I tried to install ‘picamera’ using following commands:
sudo apt-get update
sudo apt-get install python3-picamera
Result:
python3-picamera is already installed the newest version (1.13)
0 upgraded, 0 newly installed, 0 to remove and 20 not upgraded
Again I tried to run:
python file_name.py
same importError :
Traceback (most recent call last):
File “face_detection.py”, line 2, in
importError : No module named ‘picamera’
I am not getting why this is happening. It’ll be very helpful if you can help me out of this.
You installed picamera via apt-get which will install it into your system install of Python, not the virtual environment. You should use:
And you’ll be all set.
Thanks man! It’s working perfectly fine 🙂 🙂 _/\_
Really great instructions Adrian, thank you, saved me loads of time. However, i installed everything and found it used up 9.5GB of the sd card, so it might be worth mentioning that 16GB is recommend. I noticed on your setup it only took ~4.2GB. Is that after removing LibreOffice and Wolfram engine?
Correct, that is after removing LibreOffice and Wolfram Engine.
So worth mentioning if you use a 8GB sd card, you will need to remove LibreOffice and Wolfram Engine, otherwise when you try and build openCV you will most likely run out of space and get an error, then have to build it again, and this step can take hours so not something you want to happen.
when I try to download the connection just times out over and over
Hi Bryan — please double-check your internet connection and try again. There might have also been a problem with GitHub when you tried to download the code.
Hi Bryan,
I’ve run in to that problem a couple of times using apt-get as well – bet you were running it in the afternoon. Try running it later at night or early in the morning. I suspect the servers get overloaded at peak times.
Cheers
missing link
you can try this
can openCV3.3 be installed in jessie
Yes, absolutely. I provide a number of OpenCV install tutorials for various Raspbian distributions here.
after expand the file system and reboot …
i cant access to the raspberry pi through Remote Desktop.
but able to use ssh access to the pi
Anyone can help ?
Hmm, I don’t think this is an issue related to expanding the file system and rebooting. Can you ensure the remote desktop service is properly running?
Love the content, I wish I had snapped up the newest course at the early bird discount.
If anyone waited 4 hours for compile, and without thinking pasted the “make clean” command. I feel your pain…from now on, read twice, paste once.
Silver lining: I’ll never forget what make clean does.
Ugh, I’m sorry to hear that Banjo. I’ll be posting an updated Raspberry Pi install tutorial this coming Monday, October 9th which will enable you to compile OpenCV on the Raspberry Pi in about 45 minutes.
Hi Adrian,
When I reached the creation of virtual environment part, I accidently created another virtual part for Python 2.7 as well. But I want to work with Python 3.
In Cmake output, both interpreter and numpy of Python 2.7 and Python 3 are located in the virtual environment cv.
Will it cause any problem in the future?
Thanks in advance
Just to be safe I would suggest deleting both your Python virtual environment via the
rmvirtualenvcommand and correctly creating your Python 3 one. From there, delete your “build” directory, re-create it, and re-run CMake.
thank you
Hello, thank you Adrian for this tutorial.
I am having trouble at step 5:
when I try the cmake -D……
I get a bash: cmake: command not found.
Any ideas why?
Please make sure you “Step #2” where the “cmake” command is installed.
Hi Adrian,
I followed the instructions and were able to install OpenCV 3.3.0 on my Raspberry Pi 3. Everything seemed to work pretty well until I ran into a Face Recognition Script that contained the following line:
model = cv2.face.createEigenFaceRecognizer()
The following Error Message was shown when I ran Python on it:
“AttributeError: module ‘cv2’ has no attribute ‘face’ ”
Everywhere I looked, the answer seemed to point to the “OpenCV’s extra modules” which I thought was already installed with opencv_contrib from the github.
Any idea on how I can fix this?
TIA,
Huy –
Indeed, it sounds like your install of OpenCV did not include the “opencv_contrib” modules. You will need to re-compile and re-install OpenCV.
Hi.
the virtualenvironment is killing me.
Unless I’m in it, I can’t see cv2 library.
If I am in it, python can’t see picamera.array and other modules.
This makes the home surveillance blog that was suggested to try opencv out impossible.
time spent thus far: 10h compiling (even without make -j, it crashed at 83%, although power cycle -ie too hung to ssh into and stop cleanly) and 3h on this. Definitely not for the faint hearted!
Is there an easy way to get needed modules into virtual environment? I have to abandon this soon – it is consuming too much time. A pity really, because I was hoping to bring it to my classroom.
Cheers,
Leon
Hi Leon — that is the intended behavior of Python virtual environments. Python environments keep your system Python packages separate from your development ones. As far as your Pi crashing during the compile, take a look a this blog post which provides a solution. The gist is that you need to increase your swap space.
Will this tutorial work on a raspberry pi 2? Currently its os is raspian wheezy and from the offial website you can only download raspian stretch
Hi Aaron — can you please clarify your comment? Are you running Raspbian Wheezy on your Pi and want to install OpenCV?
I have changed to Raspian Stretch but i have a Raspberry Pi 2 just wondering will this tutorial work for it as you are using a Raspberry Pi 3
I have not tested on the Raspberry Pi 2, but yes, this tutorial should work.
Hi Adrian, i am using sift for features extratction. but it is showing that
“module” has no attribute “xfeatures2d”. I have downloaded opencv_contrib and unzip it properly but still getting error when i run my code on raspberry pi 3.
Hi Haseeb — it sounds like your path to the
opencv_contribmodule during the CMake step was incorrect. Double-check the path, re-run CMake, and re-compile + re-install OpenCV.
Made it to the open CV compile
stopped at 86%
Spec ( Ras pi 0 wireless , stretch , python 2.7)
should I attempt to change to python 3.0, reformat the sd card and start over, or attempt optimizing open CV and make -j4? useing your oct 9th article.
Thanks
I would actually update your swap size, as I discuss in this post. From there, delete your “build” directory, re-create it, and re-compile.
sir i have done the solution you mention above that modify the the swap size , but still my installation get stuck at 98 % along with this whole RPI get hang and can’t do anything . So i have to forcefully switch it off directly. Please tell me the solution !!
Hey Kaustubh, if you are still having trouble compiling and installing OpenCV I would recommend taking a look at the Quickstart Bundle and Hardcopy Bundle of my book, Practical Python and OpenCV. Inside I have included a pre-configured Raspbian .img file with OpenCV pre-installed. Check it out as it will solve your current issue installing OpenCV.
is there a way you can you run the Idle shell for python in the virtual environment
No, IDLE does not respect Python virtual environments. Please use the command line. If you like IDLE, try using Jupyter Notebooks that do work with Python virtual environments.
Hi,
I am not able to import PIL from virtualenv. Can you please suggest how to fix the issue?
Thanks in advance
Raju
How did you install PIL? Did you install it into the Python virtual environment?
Why can’t your tutorial be run from a shell script? It would eliminate a lot of errors and retries?
It can be executed via a shell script in some situations; however, it’s also important to understand how the compile works, especially if you intend on optimizing your install. I also offer a pre-configured Raspbian .img file inside Practical Python and OpenCV.
Can this tutorial work on a 8GB card?
If you purge Wolfram Alpha and LibreOffice you should be okay, but I would really suggest using a 16GB card.
Oh man. I found this link for stretch. It seems to work because everything went smoothly untill I try to compile opencv. At this step cd ~/opencv-3.3.0/ it tells me that the directory does’t exist.
So I mkdir one then mkdir build, cd into build. when try to compile, it tells me it doesn’t contain CMakeLists.txt , where is this CMakeLists.txt ?
Hi Rich — it’s hard to say what the exact issue is without seeing the directory structure of your project. Can you ensure that OpenCV was properly downloaded and unzipped in your home directory?
I’m very new to this and I just followed your video and now I have it downloaded so thank you. However, I don’t know what to do know, like how to write and run scripts in this virtual environment because every time I type python into it, the shell pops up. Could someone tell me how to open blank scripts so I can start writing code. Thank you
Hey Daniel — you would need to supply the path to the Python script you would like to execute:
$ python your_script.py
Open up the a file in your favorite plaintext editor, save it as a .py file, and insert your code. From there execute it via the command line.
Could add “nohup” to long-running commands? It’s not unusual to lose the ssh connection and it gets frustrating as the installation is pretty long already.
You could absolutely use “nohop”; however, I prefer using “screen”.
In step 5, after cmake completes I type ‘make’ but there are no Makefiles that it can run against.
Please check your output of “CMake” as it likely returned an error (and thus no Makefiles were generated).
I had gotten this error too, and the cmake log files told me that I was missing a header file. I went back through the instructions and found that I had missed installing the opencv_contrib package.
Hello adrian,
thank you. this is the best tutorial i ever seen.
but i got one problem
when i’m run this script
$ ls -l /usr/local/lib/python2.7/site-packages/
i just get 300+ (3 digit) even though you get 1852
and then i’m
$ ls -l /usr/local/lib/python3.5/site-packages/
i got 3500+ (4 digit) and you got 1852
please help me adrian to fix this, thank you
Hi Olivia — thanks for the comment. That number will depend upon the packages you have installed in your environment.
i just follow your step adrian, why i get that value? is that ok for my next step when i want to using python?
I think it is safe to carry on with the instructions. Don’t worry about these values.
ok.. thank you adrian
Hello Adrian,
I have installed the opencv inside the virtual environment ,
but i’m unable to access outside (i.e)., Inside the IDLE.
How to access it.
Hi Akash — thanks for the comment. Unfortunately, IDLE cannot access Python virtual environments. I would suggest using Jupyter Notebooks which do work with Python virtual environments.
Hello Adrian. I just wanted to say thanks. It took a long time but it was a problem-free installation. Nice guide.
Congrats on getting OpenCV installed, nice job!
Hi Adrian, great walkthrough, tutorial or wathever you want to call it, made it without any trouble in the times that are say… although this only work as long as someone work on the virtual environment, is there any way to make this work outside the virtual env. to use opencv on python directly??
Thanks for everything mate, cheers.
Hi Enzo — it’s a best practice to use Python virtual environments. Each Python virtual environment copies the binaries and libraries of your system Python but doesn’t keep any existing installed libraries. Therefore, you can use OpenCV + Python directly. If you do not want to use Python virtual environments you can either (1) follow the steps and ignore virtualen/virtualenvwrapper steps or (2) sym-link any packages into your system install of Python.
hi
i do everything in this tutorial but in step #5 when compiling opencv it freeze up at 86%….
i use $ make clean $ make
but freeze up again at 86%
.what can i do??? please help me….please help me…please (:
It sounds like you might be running out of swap space. Please see this tutorial on how to increase your swap.
is there a respbian stretch image that opencv and python have been installed on it??? and i just write this on my sd card and use…. is this possible????
Hi Mory — I would suggest you take a look at the Quickstart Bundle and Hardcopy Bundle of Practical Python and OpenCV. Both these bundles include a pre-configured Raspbian .img file with OpenCV pre-installed.
thank you very much Adrian..
Hi,
I’m getting following error after CMake it says “Configuration incomplete, errors occurred!”
Please check the output of CMake — it will report what the specific error is and why the command failed.
How i can do this ? I got same erro
As I mentioned in my previous reply: scroll through the output of “cmake” in your terminal.
Great tutorial… Is there a way to install SimpleCV on top of this? I’ve tried, but keep getting memory errors. Thanks Adrian..
Hi Shane — typically we recommend the power of OpenCV over SimpleCV, but SimpleCV still has merits. Try these installation instructions for SimpleCV.
I had completed installing the opencv to 100%. But when I check it using the code below
source ~/.profile
$ workon cv
$ python
>>> import cv2
it is showing errors that the file is not created or something like that
Make sure you’ve properly linked cv2.so to your virtual environment. See Step #6.
Hello Adrian,
do you have a similar guide to install scikit-learn on a pi as well?
Thanks
The easiest way is via PIP in your virtual environment:
pip install -U scikit-learn
Hi Adrian,
I have tried to follow these steps multiple times now, but at step I keep getting a wrong output from the cmake. The library, numpy and packages paths are missing. Do you know what I could be doing wrong?
Kind regards,
Suzanna
Hi Suzannna — perhaps you aren’t inside your virtual environment when you issued the CMake command.
Adrian, I followed, I believe all the instructions up to the actual compile where the last thing is type in ‘make’ and it should start compiling. What I get is an error message >> make: *** No targets specified and no makefile found. stop.
What did I miss doing?
Hey Don — try double-checking your output of CMake. It sounds like the “cmake” command exited with an error. Check the terminal output and you should be able to spot what threw the error.
Adrian, I got the same error for make -j4…. i also checked cmake command execution and it returned a few errors….what should i do next??
Be sure to check your output of the “cmake” command. CMake will report an error and what caused the error.
when i enter make this comes up:
(cv) pi@raspberrypi:~/opencv-3.3.0/build $ make
make: *** No targets specified and no makefile found. Stop.
Hi Raghuram, did you first use CMake which will generate the Makefile?
I started the process again and now it shows an error at 86% ,that says boostdec_bgm.i missing.
thanks for the reply please help me ou
I was able to install it but could and import cv2 worked on terminal , but it could not work in python please help!!
Hey Raghuram — I’m not sure what you mean by you could import the cv2 library in your terminal but not in Python. Could you please elaborate? Are you trying to use Python IDLE? Keep in mind that Python IDLE does not respect Python virtual environments.
Adrian, i wanna restart my raspberry from zero. i have install following your step and now i want deleting all and start from zero.. how to do that?
If you want to start completely from scratch I would recommend re-flashing Raspbian onto your SD card. If you want to restart your OpenCV compile just delete your “build” directory and recreate it.
help!. in the step of executing cmake I have a problem only appear the directories for python 3 and not for python 2.7 I tried many times updating my profile and even repeat from the beginning the procedure and do not appear please could help me
You can only compile OpenCV bindings for ONE Python version at a time. If you have the correct directories for Python 3 then you can proceed with the compile to obtain your Python 3 + OpenCV bindings.
ln: failed to create symbolic link what do i do for this kind of error
Dear Adrian,
In step #5 , I am sure I am in the cv virtual env. — “(cv) pi@raspberrypi:~/opencv-3.3.1/build $ “.
But when I type “cmake -D CMAKE_BUILD_TYPE=RELEASE \…..”. I got a error message –“CMake Error: The source directory “/home/pi/opencv-3.3.1/build/CMAKE_BUILD_TYPE=RELEASE” does not exist.”.
Yours faithfully
Ivan Chuang
HI Adrian,
I got the answer, just like this “cmake -DCMAKE_BUILD_TYPE=RELEASE \”.
But I have another question. When I want to make sure my Python3 section, I don’t know I need to check which file.
Yours faithfully
Ivan Chuang
Congrats on resolving the CMake issue. As for checking your “Python 3” section just scroll up in your terminal and examine the output of the “cmake” command.
There is no sudo make install
I get it, “not exitis file install”
I would suggest checking the output of CMake. It sounds like the “cmake” command exited with an error and did not create the Makefile.
Hi Adrian,
I’ve been following a lot of your blog posts and tutorials, and I find them amazing! Thank you so much for this amazing and super easy to follow guide to setting up OpenCV 3 on the new Raspbian Stretch!
Big Fan,
Shiv
Thank you for the comment, Shiv! And congrats on getting OpenCV installed on your Raspberry Pi!
that was really helpful .
thank you very much for your help .
i did the make command when i was in the env .
and it worked just fine and finished successfully
when i did sudo make install
it is taking me the same time that “make” did. (that sentence makes no sense 😛 )
is it normal ?
Hi Jad — as long as the command executes without error you should be okay.
Hi Adrian
I have compiled OpenCV3 on a fresh version of Rasbian Stretch following your recipe, (although I did not use a virtual environment).
I am using a Python script that was developed in Jessie and OpenCV 2.4.9 and have updated the syntax where necessary. Everything seems to be successful apart from one thing, cv2.resize() and cv2.resizeWindow()are not working. They do not throw an error but have no effect on the size of the image or the window (I am using cv2.WINDOW_NORMAL). Is there anything obvious that I may have missed?
Thanks
Colin
I’m not sure about
cv2.resizeWindowas I’ve never used that function before but to check
cv2.resizeprint the resulting
.shapeto your terminal. If that’s not your expected dimensions then there is likely a logic error somewhere else in the code.
Hello Adrian
I enjoyed this tutorial. The most advanced project I’ve ever attempted but its not complete. I reached the total “1852”
I got to step 7 and entered “workon cv” and got command not found
To my horror I suspect I forgot, yes the sym-link step. Can’t get in the cv environment.
Help!! Thanks
Ronald
Hey Ronald, I would suggest going back to Step 4 and ensure you have updated your
.profilefile. I would also suggest ensuring virtualenv/virtualenvwrapper have been properly installed.
Hello Adrian I’m also having an issue with the “make” command,
I’ve followed the previous step with the “cmake” command successful and when i “ls” in the “build” directory the make files are there but when i use the “make” command it says:
make: *** No targets specified and no makefile found. Stop.
Its defiantly in the (cv) environment also.
any suggestions would be much appreciated,
Kind regards,
Jonathan
Please see my reply to “Don” and “Raghuram”.
I saw your reply to “Don” and “Raghuram” and still don’t know what to do and I am wondering whether you have any input that could help, please?
I followed your instructions to the tee – brand new board, brand new stretch OS flashed.
As you suspected, CMake indeed spits out errors. What I’m curious about is why have those errors shown up for a few of us on here, but not for some others?
./opencv-3.3.0/build/CMakeFiles/CMakeError.log is 343 lines long —
i.e. CMake reports LOTS of errors under this recipe – I followed it perfectly.
There are 4 files on which CMake failed, as shown by this post cmake step command:
(cv) pi@pishow:~/workspace/opencv-3.3.0/build $ grep -A1 failed CMakeFiles/CMakeError.log|grep ‘source\ file’| sort -u
source file: ‘/home/pi/workspace/opencv-3.3.0/build/CMakeFiles/CMakeTmp/src.c’
source file: ‘/home/pi/workspace/opencv-3.3.0/build/CMakeFiles/CMakeTmp/src.cxx’
source file: ‘/home/pi/workspace/opencv-3.3.0/cmake/checks/cpu_fp16.cpp’
source file: ‘/home/pi/workspace/opencv-3.3.0/cmake/checks/cpu_neon.cpp’
Woops. I know what I did wrong and why CMake failed. It was MY mistake. The recipe works perfectly. The mistake was to cut and paste the CMake command as is. But I had downloaded the original packages and had unzipped them at a location OTHER than the user’s home directory, so the paths in the CMake command had to be changed. This solved the issues and a Makefile got created. Sorry about the false alarm and I hope this helps others.
Congrats on resolving the issue!
i am facing same problem how you solved it please tell me it will be great help
On my Pi no problem to compile with make -j4. But, I have added blas and lapack with :
sudo apt-get install libblas-dev liblapack-dev
Please you used the full or lite version for Raspbian ?
I used the full version of Raspbian but you can use the lite version as well.
Hello Adrian I’m also having an issue with the “make” command,
the compilation stops at 30% i even tried to increase the swap size to 1024
and compiling using make -j4 but still it does not compile more than 30%
Hi Sarthak — try using only a single core via
make. The compile will take longer but provided you increased your swap it should work.
I did the make but did not increase the swap size just used make not make -j4 but the complilation stopped at 89%
Will increase the swap size help in this?
Yes. Increasing the swap size should resolve this issue completely.
I have installed opencv 3 as explained above and followed all the steps for python 3
When i am running a program of face recognition data set in python IDE 3 it is giving error in the line import cv2
Importerror: No module named “cv2”
Plz help me to solve this
Please see my reply to “Hidenori Kaga”, “bob”, “Akash”, and others. I have addressed this question multiple times.
Hello Adrian.
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install libopencv-dev
sudo apt-get install python-opencv
Is this ok? install opencv
I DO NOT recommend this method. The apt-get package definitions are often out of date and you won’t have the additional extra modules with OpenCV. I highly encourage you to compile OpenCV from source, using apt-get is not recommended.
Hello Adrian,
I followed this tutorial and everything went well to step 5.
After the command cmake I get an instant error: “segmention fault” and i can’t go on.
Could you help me?
The “cmake” command caused a segmentation fault? That’s not good. It sounds like something might be wrong with your Raspberry Pi/Raspbian. Could you re-install Raspbian then try again? Unfortunately I’m not sure what the exact problem is here.
I got the same error and was pulling my hair out for days. The fix is easy. you just have to update the firmware of your pi. right after ” $ sudo apt-get upgrade”, give “$ sudo rpi-update”. This worked for me.
Thanks for sharing, Nisal!
Thank you so much sir you are such a great teacher for me. i going to order your book as soon as possible. thank you again God bless you.
Thank you Ali, I really appreciate the comment 🙂
I thing the virtual environment step is fragile.
Initially I created the cv env with python3 as shown, then I created another cvpy2.7 and used workon cv to switch back and then finished the instructions but the build failed. I then tried workon cvpy2.7 and got the failure message I posted above.
Before calling it a night I deleted both my virtual environments using rmvirtualenv and did rm -rf of my build directory. I then recreated the python2 virtual environment and finished the steps, but I did the make without the -j4 and went to bed.
Complie appears to have completed OK, doing the make install and ldconfig steps now,
So you can disregard my comment from last night, but perhaps you could explain the steps to make a second virtual environment using python3. Or is it not possible to have both on the same system?
You can create as many Python virtual environments as you want on a single machine. That is why we use Python virtual environments.
HOWEVER
You DO need to re-compile OpenCV for each Python version you want to use it for. You cannot use the same OpenCV bindings for multiple Python versions. I personally like to create a
builddirectory for each unique Python + OpenCV version and then sym-link the resulting .so file into my respective Python virtual environment.
While using the cmake command , I get the following results:
Python2:
interpreter: /home/pi/.virtualenvs/cv/bin/python2.7(2.7.13)
and that’s it.
I don’t get the rest …the libraries,the numpy…..
how to solve this?
Hi Ravi — thanks for the comment. It sounds like you are indeed in the “cv” environment based on your “interpreter” output. Can you run “pip freeze” to ensure that NumPy is also installed?
While running the cmake step, it failed again and again and didn’t make the makefile until I added swap space. This was on a freshly installed stretch distribution running on a Pi Zero W. I’m now waiting for the compile to finish. Should finish sometime over night. It’s only 20% done after 4 hours.
As for wearing out the flash card, you can try running fstrim on the card every week to make sure the maximum amount of blocks are available for wear leveling.
Did you finally get it to work? Is this process mentioned above completely right for Pi Zero also?
I have virtual environments running on my Pi3 for both python2.7 and python3.
I think my problem was in step 4, where you forgot to mention that for a python3 virtual environment numpy needs to be installed with: pip3 install numpy instead of the pip install numpy listed. I failed to catch it.
Perhaps some guidance on choosing python3 vs. python2.7?
My take is python3 is the future and unless a module you need is not available for it, python3 is what should be used. Your samples seem to be written to be compatible with both.
I’ve successfully run the deep-learning-opencv example code in both my python2.7 and python3 virtual envirmonments.
Thanks for this great tutorial.
One other tidbit, current Raspbian throws an ugly gtk warning after each run, it can be stopped by adding: export NO_AT_BRIDGE=1 to .profile
This is my first experience with virtual environments, I’m not sold on their utility, but time will tell.
Hi Walter — my opinion is similar to yours. If you need legacy support and are perhaps running OpenCV 2.4 as well, go with Python 2.7. If you are developing a new project and are concerned with the future, absolutely use Python 3.
Also, thank you for the tip on the GTK warning message, that’s a great one!
Hello Adrian,
im really thankful for this tutorial, i got a problem i will appreciate it if you could help me,
after the workon and python commands, import cv2 couldnt work and it says “Traceback (most recent call last): File “”, line 1, in importError : no module named ‘cv2’
Hi Habib — there are a number of reasons why the “cv2” import would fail. Without physical access to your machine it’s impossible to diagnose just from the import error. I have compiled the most common reasons why the import would fail in the “Troubleshooting and FAQ” section. Please take a look at it.
Adrian!!!!!!!!!!!!!
Can’t say thank you enough. I completely forgot you put this together and tried to take your previous building OpenCV on Jessie instructions and update it for Stretch. It kept crashing 90% of the way through make leaving the cpu cranking full out doing nothing with a frozen screen. After a number of attempts at fixing it I stumbled upon the one source I should have started with.
You suggestion to expand the swap memory was spot on. Works perfectly!
Awesome, congrats on getting OpenCV installed Dayle 🙂
Hi – I’m doing a fresh install, and when setting up the build with CMake, Python 2 output looks like this:
— numpy: /usr/lib/python2.7/dist-packages/numpy/core/include (ver 1.12.1)
Will this create an issue? Am new to this, so not sure what to do, or if it matters.
Hi Suns — it sounds like you are not in the “cv” Python virtual environment as this blog post suggests. Make sure you run “workon cv” before you actually run CMake. I would also suggest deleting your “build” directory and restarting the compile.
I religiously followed all the steps until Step#5. However, I’m experiencing a fatal error while compiling opencv (using- make -j4). It says out of memory space..
I’m using 8GB SD card and installed Raspbian Stretch (Desktop version) using NOOBS. And I’ve deleted all the unnecessary software (SonicPi, Wolfram-engine, libreoffice etc.). Before executing make -j4, I had 1.7GB of free space.
I’ve tried to install both opencv3.3.0 and opencv3.0.0..
Please help me!!!
Hi Subrahmanya — it sounds like you may need to upgrade to a 16GB card to resolve this issue or continue to purge packages from your Pi that you do not need.
Thank you Adrian for your timely reply.. I upgraded to 16GB and it worked like a charm!
However, when I try to run my script using Python 3 (IDLE), it returns be traceback error for import cv2. I executed my script in the terminal after ¨source ~/.profile¨ and ¨workon¨, it executed.
Do you have any solution for that? I tried to include above two lines in my script before import cv2, but no help.
All I need is to run my script using python shell using import cv2!
Please help!
IDLE does not respect Python virtual environments. You would need to use either the command line, Python scripts (from within the virtual environment), or Jupyter Notebooks.
Hi Adrian,
Pretty sure I’ve installed openCV successfully as it imports and tells me its version. However I’m getting a strange error:
** (Displayed Image:1734): WARNING **: Error retrieving accessibility bus address: org.freedesktop.DBus.Error.ServiceUnknown: The name org.a11y.Bus was not provided by any .service files
It seems to be when I do cv2.imshow(“Displayed Image”,img)
Could you shed some light on how I can resolve this?
Cheers,
Jack
Hey Jack — take a look at the comment from “Walter B Kulecz”. Walter discusses how to remove the warning (it’s due to GTK). Again, it’s a warning not an error can be safely ignored.
i successfully installed opencv 3 in python 2.7
but while i’m working on face detection project it is showing
NO MODULE NAMED CV2
can any one help…
Hi Kumar — there are a number of reasons why you may not be able to import the “cv2” library. I have compiled the most common reasons in the “Troubleshooting and FAQ” section of this post. Please take a look and use them to diagnose the issue. If you are still having trouble with the install you can reply back but without knowing which methods in the troubleshooting guide you tried it’s hard to provide any suggestions.
Thanks for your step by step installation instructions
You can skip 4 hours of compilation by using UBUNTU-mate ,if your only purpose is to use
opencv you can just run:
sudo apt-get install python-opencv on UBUNTU-mate
I do not suggest doing this. Installing OpenCV via apt-get will install an old version of OpenCV rather than the latest release. You’ll also be missing out on the contrib module along with a bunch of optimizations.
Hi Adrian,
I’m following your tutorial but at step 5 I hit my personal nightmare.
I get a CMake error “the source directory “/home/pi” does not appear to contain CMakeLists.txt
and then I’m lost.
I folloed the video again to see if I did something wrong, but same result…
does anyone have a hint for me?
any help would be appreciated.
thanks
Matt
Hey Matt, I think I think you may have copied and pasted the “cmake” command incorrectly. Make sure you use the “< =>” button at the top of the code block to expand the entire code block and grab the command. It seems like there might be a space after “/home/pi” in your command.
Hi,
Thanks, I successfully installed the opencv for python 3 in my pi2 by your steps, I can access it from my terminal but i can’t access it with my python 3 idle. its show the following error.
Traceback (most recent call last):
File “”, line 1, in import cv2
ImportError: No module named ‘cv2’
Why is it?
please help me..
Hi Jack — IDLE does not respect Python virtual environments and thus you cannot use IDLE with Python virtual environments. I would suggest using either the command line, a Python script, or Jupyter Notebooks.
thanks, now i got the following error while i run im-3.3.0/modules/highgui/src/window.cpp, line 605
Traceback (most recent call last):
but i installed libgtk2.0-dev and pkg-config then why is it coming?
It sounds like you have have installed GTK after running CMake. Try deleting your “build” directory, re-creating it, and re-running CMake + make. You should also verify that GTK has been successfully installed.
Hi Adrian,
Thanks for the excellent blog.
After successfully installed opencv, I tried to get frame with webcam (Logitech 310), it throw error like “Corrupt JPEG data: 2 extraneous bytes before marker 0xd1 Corrupt JPEG data: 1 extraneous bytes before marker 0xd6”.
I have tried to build opencv again with WITH_JPEG=OFF, it does remove the error but I am not able to write an image to disk anymore.
Do you know how to solve this problem?
Thank you.
Hey Jason, unfortunately I have not ran into this error before so I’m not sure what the exact problem or solution is. Sorry I couldn’t be of more help!
Thanks Adrian. The error disappear when I use raspberry pi camera camera module
No worries, it sounds like you were trying to use the
cv2.VideoCapturefunction to access the Raspberry Pi camera module (you can’t do that without installing more drivers). I put together a class to help switch between USB cameras and the Raspberry Pi camera module — you can find it here.
Thanks Adrian for another clear and detailed guide! I’ve successfully used your guides for several opencv installations; I’ve now one interrogation. I have compiled Opencv in its “cv” virtual environment, but I now also want opencv in another separate virtual environment, with a slightly different python 3 version (3.4 for the new virtualenv vs 3.6 for “cv”). Do I need to recompile opencv in the new virtual environment? If not how should I proceed (maybe simply simlimk cv2.so into the appropriate ~/.virtualenvs/ sub directory?)?
Please see my reply to “Geoff Riley” as I discus show to do this.
Hi Adrian,
It appears that the current make process builds and installs for both versions of python at the same time, so if you attempt to compile once for python2.7 and then for python3.5 the latter install overwrites the previous sandboxed version.
I’m sure there must be a way around this, but the brief glance at the makefile didn’t reveal an obvious client.
Kind regards,
Geoff
Hey Geoff — the current process should only compile one Python version at a time. Run
sudo make installonce. Delete your
builddirectory. Re-create it. Run
cmakeand
makeagain. Copy the resulting
cv2.sofile to your
site-packagesdirectory of your Python install.
Hello Adrian,
I’m new to using OpenCV and I’m planning to use it for face detection on a raspberry pi. I’ve followed your tutorial exactly, but I can’t run the command: “make -j4″. This is the error that I get:
” $ $ make -j4
make: *** No targets specified and no makefile found. Stop. ”
I don’t know what I have to do now. And I did increase the swap size to 1024. I’m hoping to hear from you soon.
Kind regards,
Youssra
Please see my reply to “Don”, “Raghuram”, and “Jonathan”.
Hi all,
how can i add unofficial python modules/libraries (picar) to the virtual cv environment (cv)?
Regards,
Xare
All you have to do is use the “workon” command and “pip”:
And this will install the library into the virtual environment.
You could also use setup.py as well. Assuming you have already used the workon command:
$ cd your_library
$ python setup.py --install
If you’re new to Python virtual environments I would suggest taking a look at this excellent guide.
Hi, thanks for sharing. I follow the steps and after successfully install the virtualenv and virtualenvwrapper, and update the ~/.profile file, I got the error : bash: source/usr/local/bin/virtualenvwrapper.sh: No such file or directory.
I have no idea about this error…
It sounds like virtualenv/virtualenvwrapper were not properly installed. Try running
pip freezeand ensuring both are installed.
Will this work with the Stretch OS when downloaded with NOOBS?
When i went to resize my partition it says it was not possible, that i was running noobs and it was most likely already done.
If your partition is already expanded you can certainly use this method to install OpenCV on Raspbian Stretch.
Hey Adrian!
First of all thank you for this guide. I was following your guide and made it all the way to step 6. At step 6 where you say to verify that OpenCV + Python bindings are installed by using ‘ls’ command. I get an error saying “No such file or directory”. Does this mean that it installed with an error so I have to redo the “make -j4” command? How can I check what pythons are installed and if I have too many what should I?
Hey Alberto — did both “make” and “cmake” execute correctly?
Thanx Bro I just refollowed your guide and good to go. I realized where I made the mistake
same error for me. how did u fix it?
Hi adrian! i m too much furstrated because when i start building opencv3.3 on raspabian stretch it gives the error stack smashing detected and dont even start building i have reinstalled the whole os and again tried but the output is same .. kindly reply me because i am doing my final year project and time is limited.. thanks in advance..
Hey Muhammad, I’m sorry to hear about the issues installing OpenCV. However, without knowing what the exact error message is I can’t point you in the right direction.
Hi Adrian, if I install OpenCV in a virtual environment per the instructions, can I use OpenCV outside of the virtual environment? Thanks in advance!
This would not work. What is your reason for needing OpenCV outside of a virtual environment? There are a few reasons, so I’m interested to hear yours.
Hey!
I was trying to follow the guide with slight differences:
the environment I’m deploying to will not be used for development, so I felt no need of a virtualenv. Also the links to the project zips are not working anymore, I found /archives/master to get the latest version from the opencv and opencv-contrib repos.
Now the problem is the following:
I used
wget -O opencv.zip I
unzip opencv-master.zip -d ~/repos/which will unpack the zip into ~/repos/opencv-master/. I do the same with opencv_contrib, which will go into ~/repos/opencv_contrib-master.
The cmake command I use:
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D INSTALL_PYTHON_EXAMPLES=ON -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-master/modules -D BUILD_EXAMPLES=OFF -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF ..
Note: I tried to play around with the switches, turning then on and off, the results were the same.
and the error I get is:
CMake Error at cmake/OpenCVModule.cmake:300 (message):
No extra modules found in folder: /home/pi/opencv_contrib-master/modules
Please provide path to 'opencv_contrib/modules' folder.
I checked ~/repos/opencv_contrib-master/modules/ and there are a bunch of modules defined there, I’m not missing them.
The cmake error log keeps crying about
Regex: 'command line option .* is valid for .* but not for C\+\+'so it’s really confusing me what am I missing.
Any hint would help 🙂
Cheers,
Pali
as soon as I submitted this comment the site loaded it in a way so the cmake command was in the middle of the screen so I noticed I specified
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-master/modulesinstead of `-D OPENCV_EXTRA_MODULES_PATH=~/repos/opencv_contrib-master/modules.
It indeed fixed the problem, sorry for taking your time for moderating! :DD
Cheers,
Pali
Congrats on resolving the issue Pali, nice job!
hello sir,
I am a beginner to the pi. i had done all your steps written on this page and the open cv has been installed and i checked it using “import cv2 ” commad in the terminal and it was also a sucess.
but unfortunately i can’t access open cv from python 3.5 using import cv2. it is telling me that no module named cv2. how will i rectify this.
please help me
thank u
I assume you mean that when “import cv2” is working that it works for Python 2.7? If you want to also use OpenCV for Python 3.5 you’ll need to compile OpenCV again, this time using a Python 3.5 virtual environment. The “make” command does not build OpenCV bindings for every Python version on your system (just the one specified in the “cmake” command).
Sir,
It is not working on python 2.7 too.I actually compilled for python3 but outside the terminal i can’t access it.
can u please explain the steps to do so.
Thank u
Can you clarify what you mean by “outside the terminal”?
Hey Adrian, I followed your guide because it was mentioned in this guide, I was able to successfully install OpenCV but I have never done any code stuff before and your guide is super easy to follow. While the link i mention is confusing, specially to someone with no code experience. I read through some of your other OpenCV guides and found some references that you suggested to other people. I am looking into learning python and linux commands. I was wondering if you could help me out with that project and understanding what some of the stuff means. Thank you Happy Holidays!
Congrats on getting OpenCV installed, nice job! As for the project you are referring to, I personally have not gone through it and unfortunately I do not have enough time to cover each individual command and Python function. All that said, if you’re new to the world of computer vision and Python please take a look at my book, Practical Python and OpenCV. Many readers have successfully used this book to learn both Python and OpenCV together.
ok yea thank you very much i will definitely be looking at your book. Thanks again!
Hi Adrian,
I am a beginner with no knowledge of Linux. My Raspberry Pi 3B hangs at 94% during make and I have tried all combinations of make like ‘make -j4’ with CONF_SWAPSIZE 1024 MB, and with 100 MB, with only ‘make’ with 1024MB and with 100MB, nothing seems to work.
I am using a 64 GB MicroSD card(earlier my 8 GB card got full during make), should I try by increasing the size from 1024 to a higher number ? Thought of checking with you first so that I don’t corrupt anything. Thanks.
I would suggest changing your swap to 2048MB and then use a single core via just “make”. The compile will take longer but it will be less likely to hang.
It worked !! Thanks and wish you a happy new year 🙂
hello Adrian, thanks for your time, i have a trouble: my rpi has a 8gb sd card, i tried install twice opencv but i got the same error: can’t write pch file: no space left on device.
i removed wolfram and office but the space that i can see on windows show me near to 1 mb free after the compilaton didn´t finish (37 % complete and the message about space on device).
i hope you’ll can identify the problem, thank you again.
Hey Javier, unfortunately it’s hard to say without access to your Pi. Can you reformat your Pi with a fresh Raspbian .img file and try again? I would suggest removing the Wolfram engine as soon as you boot the Pi.
I think I looked at all the questions, but did not see clearly whether this could be complied on a Pi Zero. So two questions. Can this be compiled on a Pi 3 and SD card used in a Pi Zero? Or is it possible to compile this on a Pi Zero? It will probably take forever but will it do it? Or is the build not platform specific?
Thank you for this tutorial.
Can you compile OpenCV on the Pi Zero? Yes, you can. It will take somewhere between 36-48 hours. Should you? No, I don’t recommend it. The Pi Zero is too underpowered for computer vision and image processing. I recommend a Pi 2 or Pi 3.
I’m not sure what you mean by “Comments are always rejected”. All comments go into a moderation queue but comments are not rejected.
hi. I’v tried to install opencv3.3.0. Every thing goes fine except when trying to compile hdf5 module : don’t find hdf5.h.
Any idea?
Based on your error you can try installing HDF5:
$ sudo apt-get install libhdf5-serial-dev
Otherwise, you may want to turn off HDF5 by updating your “cmake” command to include the following switch:
-D WITH_HDF5=OFF
Thanks for your great tutorials.
Based upon them I created a simple command line tool that eases the whole build process by incorporating all steps. It automatically downloads the current OpenCV sources (v3.4.0), configures them, compiles them on the maximum available CPU cores and installs them inside the currently active virtual environment.
You can install it via pip from PyPI () and just have to run „cvbuilder build“ to start the whole process. It takes around 10 min on a Core i5-3320M.
On a Raspberry Pi3 you need to run it on one core by overriding the CPU count „cvbuilder build –cpus 1“ due to memory constraints and heavy swapping. It will take approx. 2h, but you can install it into other virtual environments without compiling as long as the folder ~/temp/opencv remains.
Failed after 32h, won’t compile at all.
Hey Frank — I’m sorry to hear about the issue getting OpenCV installed. It can be quite a pain sometimes, especially if there is your first time installing OpenCV on the Raspberry Pi. Did you receive an error during the compile?
Hi, Adrian i have followed the steps and i have done it successfully.
The problem is i don’t have “module named skimage.measure” so i have to repeat step #4 and create a new environment named “cv2”,
and followed the rest of the steps.
$ mkvirtualenv cv2 -p python2
$ source ~/.profile
$ workon cv2
$ pip install numpy
$ cd ~/opencv-3.3.0/
$ mkdir build
When i try to create the “build” directory is says “cannot create directory build: File exists”
What shall i do? and i only have 3GB available will it be enough for the new environment?
If your error is that you do not have the scikit-image library installed is there a particular reason you can’t install it into your existing Python virtual environment?
$ workon your_original_env
$ pip install scikit-image --no-cache-dir
This library and SciPy will take a long time to install so you should leave your Pi running overnight.
To answer your second question provided you have already ran “sudo make install” you can delete your “build” directory and re-create it.
Adrian, thank you for this step by step comprehensive guide.
I was able to install and compile the opencv on my Pi 3, using the 1024 big swap value and 4 cores in 50 minutes only :)…. a record, probably due to a very fast Sandisk SD.
I had the aluminum coooler installed on the CPU, but even with this, with the CPU usage that was constantly around 100%, I got an overtemp icon on the screen after 30 minutes…
I had to quicky add a fan over it to cool down the cooler, and so the overtemp icon disappear, and I was successfully able to finish the compile.
I was so impressed and interested that I decided to immediately purchase the Bundle 🙂
I wish you an Happy New Year.
Congrats on getting OpenCV installed on your Raspberry Pi — and in under 1 hour, great job! 🙂
And thank you for picking up a copy of my book, I hope you enjoy. Please reach out if you have any questions.
regarding virtualenv you mentioned: Any Python packages in the global site-packages directory will not be available to the cv virtual environment
I need the smbus package for I2C though. How can I use smbus in the virtualenv ?
Thnx.
ok….using the virtualwrapper command: toggleglobalsitepackages 🙂
This virtual blabla is confusing and probably unnessesairy when working on one project isn’t it. ?
You can also install any Python package you need inside your Python virtual environment:
Hi Adrian, love your tutorials. I was wondering if you know the proper name of the application that displays the openCV windows? I ask this because I’m trying to control the parameters of the opencv window in OpenBox (for example, no decor on opencv windows). OpenBox allows one to control how the window is displayed, but the correct name of each application needs to be called. I’ve tried python3, opencv, gtk, highgui, and many others. I’m stabbing in the dark at this point. Hopefully you have some insight in this.
If you installed OpenCV on the Raspberry Pi using this tutorial then GTK and the X window manager should be used. The name of the GUI library in OpenCV is called “highgui”. How that interfaces with OpenBox I’m not sure. I hope that helps!
Hi
great job !
I have following your tuto on my Pi on Rasbian Stretch and Rasbian strech lite.
For the first installation all work, but on the lite, my virtualenv only work when i’m superuser!
how to modify my actual virtual environment so that it is accessible as a simple user and that cv2 is recognized under python3?
regards
That is quite strange that it only works when you are the superuser. Did you install the Python virtual environment for the normal “pi” user or for the superuser account?
Thanks Adrian, it was actually the name I assigned to my imshow function…cv2.imshow(“MyWindow’sName”, frame). I found this out by running “xprop WM_CLASS” then clicked on my window.
Thanks for a really helpful tutorial Adrian.
The snag that I have is that previously installed packages (such as wx) are not available from within the virtual environment. Could you suggest a set of steps to either move cv2 out of the virtual env, or put other packages into it – whichever is most appropriate.
I’m not familiar with virtualenvs to sort this by myself
Many thanks Phil
Hey Phil, are your packages pip-installable? If so you can install of your packages via:
$ pip install your_package_name
Hi Adrian
Thanks for the step by step tutorial
Everything is OK with the four first steps, but when it comes to compiling opencv (step #5) I get wrong path to the interpreter (/usr/bin/python3 instead of /home/pi/.virtualsenvs/cv/bin/python3)
I though I had forgotten the workon thing somewhere, so I did it again with a new virtual environment…same failure on step 5
How can I find what’s wrong ?
(I tried to “make” despite th error and it stops somewhere between 4% and 26% with ” error: stdlib.h: No such file or directory
#include_next “).
Hi Adrian
thank you for your help
it’s not a failure, but it is not picking the binary in the virtual environment…(wrong path)
I got a new SD card, installed Raspbian Stretch and everything is OK now
Congrats on resolving the issue, Ariane! Nice job 🙂
Hello Adrian,
Thanks for the tutorial. After installing everything going down the pyhton2.7 path, I test the installation with the import cv2 command and get nothing as a response, just the symbols >>> on a blank line.
My cv2 file ended up in the dist-packages folder instaed of the site-pacjkages folder, does that matter? I can’t figure out how to move it. I get “permission denied” when using the gui and “no such file or diresctory exists” with the mv command, can you help?
Thanks
As long as you can import it without an error you should be okay. I’m not sure why it would have installed in the “dist-packages” directory though. You can use the “sudo mv” command to move the file if you so wish.
hi Adrian, i am following your tutorial on my Pi on Rasbian Stretch on raspberry pi3. while editing swapsize i was not able to save /etc/dphys-swapfileit says permission denied . i am not able to figure out what to do .
i hope you will help me out ,
regards
Make sure you use the “sudo” command to give yourself root permissions to edit the file.
Dear Adrian,
The instruction is amazing!
I am using the Raspberry Pi zero W. I followed the instruction and it works well. However, I meet a problem.
I don’t know why the pi always stucked at “87% Building CXX object modules/python3/CMakeFiles/opencv_python3.dir/__/src2/cv2.cpp.o”. And I use the VNC to connect the pi. Once the process reach the 87%, the VNC will be stucked. And if I disconnect it and re-connect it. It will be failed. So the only thing I can do is disconnect the power line and reboot it. And when I process “make”. It will be stucked at 87% again. Could you please show me some hints to fix this problem?
It has already take more than 6 hours. And from 0% to 87%, it took me more than 15 hours.
Waiting for your reply. It is quite urgent. Thank you so much.
Best regards,
Damon
So a few things here:
1. I don’t recommend using a Pi Zero for computer vision. With only a single core it’s far too underpowered. I would recommend a Pi 2 or Pi 3.
2. It sounds like your might need to increase your swap size, as I do in this tutorial.
I hope that helps!
Hi Adrian can you please guide me through this? Can you tell me which commands to use in the terminal to do so? For a start how do I delete the build directory?.
Okay I managed to remove the directory build and create it again. And I tried to squeeze in the switch in cmake, but still got an fatal error: stdlib.h: No such file or directory.
cd –
rm -rf build
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \ -D ENABLE_PRECOMPILED_HEADERS=OFF
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.3.0/modules \
-D BUILD_EXAMPLES=ON ..
It wasn’t really clear how and where to add the switch -D ENABLE_PRECOMPILED_HEADERS=OFF
I am using opencv 3.1.0 and not 3.3.0 as the video tutorial is based upon.
Best, Allan
Looking at your command above, your cmake command looks correct Nice job adding in the switch. Unfortunately, without physical access to your machine I’m not sure what the exact issue is. My only other suggestion would be to make sure you installed the development tools:
$ sudo apt-get install build-essential cmake pkg-config
During step 5, after running cmake, my output in the python 2 section does not include the libraries, numpy, and packages paths. It only lists Interpreter. It does look correct for the python 3 section but I would much rather use python 2. How do I fix this?
Thanks, Scott
Hey Scott — did you use a Python virtual environment? Double-check that you didn’t accidentally create a Python 3 virtual environment instead of a Python 2 one.
$ make -j4
It is compiling till 90% and freezes out later. Tried 4 times reinstalling raspian stretch.
Any suggestions?
I would suggest increasing your swap size as I do in this blog post.
Thanks for the quick answer Adrian.
I have been trying everything all over again with version 3.3.0 instead of 3.1.0. That seemed to have fixed the issue. Atleast python doesn’t throw any import errors for cv2.
Awesome, I’m glad to hear it’s working! Congrats on resolving the issue, Allan 🙂
A more questions, Adrian.
I have a USB cam connected to my Raspberry Pi 3 and would like to test if everything is working and so forth.
Which IDE do I use to run the python code in? I was thinking of doing this example for a start:
This example also requires that ffmpeg and gstreamer is installed. So before potientially disrupting the system, by installing more dependencies. I would like to hear you out, what your take is on this.
1. If you have a working system now make sure you backup your .img file to your laptop/desktop, just in case you ever want to flash your .img back to your SD card.
2. An IDE/code editor is a personal preference. I would suggest either PyCharm or Sublime Text. While you can code in your editor I would highly recommend that you execute the code via the terminal. If you’re interested in working with video a great beginner tutorial can be found inside Practical Python and OpenCV
make -j4 doesn’t work
Founds: make: *** No targets specified and no makefile found. Stop.
Please see my reply to “Don” on October 28, 2017 and and “Raghuram” on October 29, 2017. If you ctrl + f your error message on this page you’ll likely find a similar content.
Sir, in the pi terminal i can import the cv2 but when i try it in the python shell i got an error
Hey Paul, I’m sorry to hear about the import issue. So you can import the “cv2” library when executing a script from the command line? But not from the Python shell? Are you launching the Python shell from the terminal? Or using IDLE? Keep in mind that Python IDLe does not respect Python virtual environments.
Hi Adrian,
any tips on how to expand a NOOBS created image on a larger card? After a NOOBs install, “file expand” does not work (apparently because the max space is already available to the OS, although I don’t believe this).
This applies even if you copy the image and transfer it to a larger card. In my case, I started with an 8G card. df – h shows 5.0G is available to /dev/root. I copied the image to a 32G card, and /dev/root still only has 5.0G. If I try to expand it through raspi-config, Advanced Options , I still get the NOOBs error “Your partition layout is not supported by this tool. .. You are probably using NOOBs, in which case your file system is already expanded anyway”
It is extremely frustrating (and boring) trying to get something so simple done. Complexities expanding to a larger card should also be mentioned as a downside to installing with NOOBs.
Do I need to resort to editing the partitions directly with gparted? (as described here).
Thanks,
Rob
PS I’d rather not start over with a fresh install of Raspbian on the 32G card. I’ve downloaded quite a lot of packages.
Hey Rob, I’m sorry to hear about the issues expanding the SD card. As far as I understand, using raspi-config to expand should handle it. If the partition is still not being resized you’ll have to either use (1) gparted to resize the partition or (2) start over fresh if you cannot resize the partition. Other PyImageSearch readers might have better suggestions as well.
Hi Adrian,
thanks for your reply. raspi-config definitely won’t expand for an image created under NOOBs. I think this is reasonably well known.
I appear to have expanded the root partition of my 32G card with gparted. The tricky bit is to recognise the extended partition which contains other (sub?) partitions. You need to resize that first before you can increase the size of anything inside it. Not knowing much about partitions, I may have mucked this up but for the moment it appears to be working. Raspbian boots and the /dev/root has all the space I need.
I hope this helps others with a similar issue.
Rob
Thanks for sharing Rob! I’m glad you’re up and running with OpenCV on your Raspberry Pi 🙂
Hi Adrian,
Great tutorial thanks. I would like the Pi to automatically run a python file that uses OpenCV on boot. Am I right in thinking I will need to make it execute the
source ~/.profile
workon cv
lines before running the python file?
I have been trying to make up a shell script to do that but I get a ‘not found’ error when it encounters the source and workon lines.
This is the shell script:
#! /bin/sh
echo “switching to cv virtual environment”
source ~/.profile
workon cv
Hey Roy, I cover how to run a Python script on reboot in this blog post. You’re on the right track but be sure to read the comments section as well where I discuss a few alternatives as well.
Great, ideal! Thanks Adrian.
So I followed your guide, step by step on how to install open cv 3 with python 3 on raspbian stretch…. on my terminal it says that cv2 was imported, and shows me the correct version. Basically I get the exact same message that I see on your screenshot of step 7 which I assume is telling me that I have successfully installed openCV. However when I run a script that imports cv2, I get the message “No module cv2″ using the ” ls ” command to user/local/lib/python3.5/site-packages, I see cv2 on my terminal…but when I actually visit this path from my desktop, there is no cv2 on site-packages, or in dist-packages…Please help 🙁
Hey Ferishta — so if I understand your question, you can import the “cv2” library into your Python shell but you cannot import the “cv2” library from a Python script? When executing your Python script are you sure you are in the “cv” virtual environment? Make sure you are, otherwise the script will not be able to find your libraries.
Omg thank you for your quick reply!! When I try to run the python script it gives me that “no module named cv2 error” so you’re saying prior to running my script I should use the workon command to get on the virtual environment? Also when I follow the path to site packages, it says “no sub folders” and from my understanding that’s where cv2 should be installed. Correct ?
Correct, use the “workon” command to enter the Python virtual environment. You only need to run this command once per terminal session.
to answer your question, yes I am in the “cv” virtual environment when executing the python script
Hello,
When am running this command 1
$ sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev in raspbian stretch (debain 9) normal pc version, there are some errors like no such file to install or download.
So how can I fix it ?
I also had this problem.
^^^ I got this error also
I ran sudo apt-get update and then everything worked
almost a year later, I just encountered this same error. apt-get purge jasper did not correct the problem. I did solve it. The issue was that the cached jasper-dev install image was corrupt. I had to manually remove the cached files sudo rm -f /var/cache/apt/archives/libjasper*
For the third time, I completely wiped out my sd Card, downloaded raspbian stretch, and installed “openCV”. I followed your guide step by step, and I got the same message that I see in your screenshot of step 7 which “verifies” that the installation was successful. However, upon running a python script that imports cv2, I get the same error message, “no module named cv2” and yes I’m in the virtual cv environment when running the script. I see that this is an issue that a lot of people have. I really think you should try to do some research in finding some possible solutions for this possible. Otherwise your blog can be very misleading, and shouldn’t be up for public…
Hi Ferishta — I’m sorry you are having trouble getting up and running with OpenCV. It can be a trying process, especially if this is your first time installing OpenCV. The first time I installed OpenCV back in undergrad it took me 3 days. Since then the install process has gotten easier. All install posts here on PyImageSearch are thoroughly documented and tested by me. There are undoubtedly some “gotchas” that creep up — that is part of configuring your development environment.
If you would like to share the command you are trying to run, or better yet, take a screenshot of your terminal as you run the command, it would be helpful in diagnosing the issue. Keep in mind that myself or other PyImageSearch readers cannot replicate your environment exactly and most of the time these issues creep up due to an error in a previous step that was missed or an issue when actually executing the command (for example, trying to execute the script as “sudo”). Help us help you by sharing more details. I get that you’re frustrated but without knowing more we cannot diagnose the issue.
Adrian, I want to apologize to you from the bottom
Of my heart. This is my first time working with openCV, so I was struggling a lot. The thing is, I was saving my script on my desktop. Instead of “home/pi” this helped resolve all of my issues. I can now access my raspberry pi camera with openCV. I was also able to run your Pokémon color detector game! Im very happy that I’ve found you, and I’m looking forward to learning a lot from you…
I did have one concern though. While all of the python scripts now run, and I see the excpected out, I do get a warning message on my screen.
WARNING**: error retrieving accessibility bus address: org.freedesktop.Dbus. Error. ServiceUnknown: the name org. ally. Bus was not provided by any. Service files
Could you please tell me what this warning is and how can I make it go away?
Thank you so much once again
Congrats on getting up and running with OpenCV, Ferishta. Nice job. This warning is related to GTK/X11. It can be ignored, it has no impact on your code running. As for making it go away, try this:
$ sudo apt-get install libcanberra-gtk*
Thank you so much, Adrian!!!
I installed open cv 3.3 successfully yesterday but today when I am trying to import cv2 its giving error no module named cv2.Why is it like this?
Make sure you are in the “cv” virtual environment before trying to import the OpenCV library:
$ workon cv
And from there open your Python shell or execute the script.
Dear Sir,
Huge thanks for the step-by-step tutorial. You are a savoir, however i still need to get over one obstacle to proceed.
i am certainly in the (cv).
And i have checked that the cv2.so exists in these paths (‘find’ command):
./home/pi/Downloads/cv2/cv2.so
./home/pi/.virtualenvs/cv/lib/python2.7/site-packages/cv2.so
./home/pi/opencv-3.3.0/build/lib/cv2.so
./usr/local/lib/python2.7/site-packages/cv2.so
BUT i am getting the import error no module named cv after executing a script (one you provided here:)
Please help me i have a competition two days from now.
From what I can see it looks like you have OpenCV correctly installed on your system but I would double-check that your sym-link points to a valid file. The “find” command would still report a “cv2.so” file on disk (since a sym-link is still a file) but the sym-link may be pointing to a file that does not exist. Make sure you double-check those paths.
If you’re in a rush and need to get up and running for your competition I offer a Raspbian .img file with OpenCV pre-configured and pre-installed inside the Quickstart Bundle and Hardcopy Bundle of my book, Practical Python and OpenCV. This would enable you to download my Raspbian .img file, flash it to your SD card, and boot.
Actually my problem is that while executing the compiling process of opemcv 3.3.0…it strucks at 48%…….give me some solution for it
When you say “stuck” do you mean the compile ends with an error? Or your compile + Raspberry Pi freezes? The more detail you can give the better myself and other PyImageSearch readers can help you.
The compile and raspberry pi freezes
Try increasing your swap size as I do in this optimized OpenCV install tutorial.
In the 6th step im getting an error…
When i type
>>cd/usr/local/lib/python3.5/site-packages/
Im not getting the respective line as shown in the tutorial video….
Thank you so much Adrian i have installed it all with your help, hope my anti drowsiness detection system will be done soon. Thank you!
I’m happy to help JP — best of luck with the project!
actually after completing the “make” code,the installing creates in problem
when attempting to type
<<import cv2
there occurs error
1st, thanks for all you do, I can’t get over how many tutorials you provide.
Step 5. When I do cmake CMAKE_BUILD_TYPE=RELEASE \ should the process hang at ”>” (if I include the ”-D” I get a ”bash: -D: command not found”)?
Nothing happens until I press [Enter] a second time. And then I get ”CMake Error: The source directory “/home/pi/opencv-3.3.0/build/CMAKE_BUILD_TYPE=RELEASE” does not exist.
Specify –help for usage, or press the help button on the CMake GUI.@ Except we aren’t using a GUI . . . and I thought the build was supposed to create the directory?
TIA
Ron
Hey Ron — I’m pretty sure there was a problem copying and pasting the cmake command. If you’re having trouble try copying and pasting it line-by-line (without hitting enter). Once the entire command is copied into your shell, execute it.
sudo mv cv2.cpython-35m-arm-linux-gnueabihf.so cv2.so
when I type the above code……….im getting error……..and to be frank,i have installed this in discontinuous manner…………….after some shutdowns……..do reply fast
Hello sir, Its been 1 and a half hour of compiling and its 90% already but as I’ve observed itt taking too long to make a progress. Is it a sign that I should repeat the installation or should I wait for it because as you have said it takes 4 hrs to finish. Hoping for your fast response. Thanks
It’s stuck in 90% i followed your swap size and it still doesn’t work. Please help me.
Hey Carl — can you try compiling with a single core via “make”?
Hello sir, thanks a lot! I tried compiling it with a single core via “make” and it was a success. I already installed it and it worked. However, i’m just curious, what’s the difference of the 4 cores and single core when compiling?
In short: speed. Using multiple cores can reduce the time the compile takes, making the compile run faster. It has no impact on how fast the OpenCV library will run after it’s installed.
hello adrian!
I could not install.Could you tell me solving?
Install error message is below:
[100%] Built target example_tapi_squares
[100%] Built target example_tapi_ufacedetect
Install the project…
— Install configuration: “RELEASE”
CMake Error at cmake_install.cmake:36 (file):
file cannot create directory: /usr/local/include/opencv2. Maybe need
administrative privileges.
regards,
My os was installed by NOOBS ver.2.4.5
I solved by myself
Hi, Adrian!
Thanks to you I have installed and I was able to run the sample program.
Can I run OpenCV on Thonny Python IDE (Python 3) without using Virtualenv on LXTerminal?
“Import cv2” always causes an error.
My environment is Raspberry pi 3, NOOBS 2.4.5.
regards
Yes you can, but you would need to skip all Python virtual environment instructions in this guide. I’m not familiar with the Thonny Python IDE but you might want to see if it has a project interpreter similar to PyCharm. You can set the project interpreter to point to the Python virtual environment that has OpenCV installed.
I had an issue where the builder stalled around 45%, but I reran make without the -j4 command and it worked – but had to be run overnight as it probably took at least 4 hours.
Many thanks for the excellent walk through – there is no way I could have installed openCV without your notes!
Congrats on getting OpenCV installed on your Raspberry Pi! 🙂
After using source ~/.profile command
I get error no directory found
Me too??? Did you find a fix?
Hi Kurt and Amber — did you follow Step #4 to create the
~/.profilefile with the important
virtualenvwrapperinformation?
Hello, while attempting to create my build files for my Pi 3 using a python 3 virtual environment using the cmake commands, I keep getting this error, “The source directory Blah/Blah/blah does not exist”. How do I fix this?
Hey Philip — make sure you are in the “build” directory before executing the “cmake” command. Don’t forget the “..” at the end of the cmake command as well.
Hello Adrian, I’m facing a problem when I try to compile opencv by typing make it freezes at 98% please help me
Also I have tried make -j1 and make j-4 neither are working please help me
Try updating your swap size as we do in this tutorial. Updating your swap size will resolve the issue in the vast majority of situations.
Thanks Adrian for your wonderful work and your time and help.
I have finally suceded on installing opencv-3.4.0 on the PI3.
After ten trials, what works for me was not to increase your swap space
size and build opencv with just a simple
make
Just for the record, I created this logBook to register the trials and errors:
Miguel
Thanks for sharing, Miguel!
Dear Adrian,
after rebooting the PI because of the activation of the picamera,
the pi3 has got the following error:
$ python
>>> import cv2
Traceback (most recent call last):
File “”, line 1, in
ImportError: /home/pi/.virtualenvs/cv/local/lib/python2.7/site-packages/cv2.so: invalid ELF header
any suggestion?
Miguel
Hm, I’m not sure on this one. My guess is that you compiled OpenCV for one version of Python but then imported it into a different version of Python. Check your cmake output and ensure you built the correct bindings for your Python version.
Thanks for the help Adrian! With your help I got past that problem. However, now I’m having a problem increasing the SWAPSIZE of my SD card. None of the changes I make to the swapfile save. I have tried using sudo nano to write to the file but after my changes are made the nano command just saves the swapfile under a different name; making my changes useless. Any insight?
Wow, that’s some strange behavior from the nano command. I’m not sure why that would happen. Have you tried using a different editor? And have you tried restarting the swap service after making the change?
Yes, I have restarted the swap service and I’ve tried using a different editor. As soon as I go to exit the editor and it asks me to save my changes I get an error that I can not due to permission settings.
|
https://www.pyimagesearch.com/2017/09/04/raspbian-stretch-install-opencv-3-python-on-your-raspberry-pi/
|
CC-MAIN-2019-35
|
refinedweb
| 17,795
| 73.98
|
Created attachment 591870 [details] [diff] [review]
Mostly mechanical.
Currently jsgcmark.h exposes way more surface area than we actually use in practice.
We can easily drop the 2-pointer version of Mark.*Range: the only places it is used, it is not the more natural of the two interfaces.
We should also compact the duplicated, strongly-typed wrappers of Mark<>. There are several non-important differences in these functions (different assertions, or assertion order, etc), that muddy the waters unnecessarily. There are also differences in which set of operations we expose for each type, also confusing what is going on here. As an added bonus, if we auto-generate these, we can drop the Markable interface and take HeapPtr<T> directly, tightening the type-checking further.
We can make all but one of the MarkChildren interfaces (5 of 7 of which are currently exposed) internal.
Finally, IsMarked has a JSObject and a Cell version, even though JSObject is always convertible to Cell.
The diffstat:
13 files changed, 432 insertions(+), 671 deletions(-)
Created attachment 593263 [details] [diff] [review]
v1: updated based on informal feedback
Removed the detail namespace that ended up not getting exposed in the header.
Comment on attachment 593263 [details] [diff] [review]
v1: updated based on informal feedback
Review of attachment 593263 [details] [diff] [review]:
-----------------------------------------------------------------
Thanks, this is a nice cleanup!
::: js/src/jsatom.cpp
@@ +385,5 @@
> JSAtomState *state = &rt->atomState;
>
> if (rt->gcKeepAtoms) {
> for (AtomSet::Range r = state->atoms.all(); !r.empty(); r.popFront()) {
> + MarkStringRoot(trc, r.front().asPtr(), "locked_atom");
Could you remove the unnecessary braces around this statement while you're here?
::: js/src/jsgc.cpp
@@ +1905,2 @@
> if (fp->isEvalFrame()) {
> + MarkScriptRoot(trc, fp->script(), "eval script");
Can you remove the braces around this statement?
::: js/src/jsgcmark.cpp
@@ +52,5 @@
> +/*
> + * PushMarkStack is forward-declared so that it can be called by Mark, and
> + * and the Markers are declared above PushMarkStack so that the we can mark
> + * recursively.
> + */
I don't think we need this comment. It's a bit confusing. Most of the PushMarkStack functions could just have been moved above Mark. I just wanted to put them below because Mark is the most important function.
@@ +74,5 @@
>
> static inline void
> PushMarkStack(GCMarker *gcmarker, types::TypeObject *thing);
>
> +/* ********** Object Marking ********** */
Please change this to /*** Object marking ***/ so that it resembles the jsapi.h convention. Same for the other separators here.
@@ .
@@ +166,5 @@
> +#define DeclMarkerImpl(base, type) \
> +void \
> +Mark##base##Unbarriered(JSTracer *trc, type *thing, const char *name) \
> +{ \
> + MarkUnbarriered<type>(trc, thing, name); \
Please fix the backslash spacing here and below.
@@ +170,5 @@
> + MarkUnbarriered<type>(trc, thing, name); \
> +} \
> + \
> +void \
> +Mark##base(JSTracer *trc, const HeapPtr<type> &thing, const char *name) \
This one should go before Mark##base##Unbarriered now that it doesn't depend on it.
@@ +253,5 @@
> +
> +/* ********** ID Marking ********** */
> +
> +static inline void
> +MarkIdInternal(JSTracer *trc, const jsid &id) {
The brace should always go in a separate line.
:::?
@@ +43,4 @@
> namespace js {
> namespace gc {
>
> +/* ********** Object Marking ********** */
Please fix the separators in this file as well.
@@ +52,5 @@
> + * If there was only a need to track the various GC kinds, we would not need
> + * this. The problem is that we want to pass a HeapPtr<Foo> to a convertable
> + * Bar type named MarkBar. As an explicit example, consider that we want to
> + * be able to pass all of JSLinearString, JSFlatString, JSAtom, etc to a
> + * single MarkString function.
I think we need a comment here that describes what functions are actually being exposed and how they should be used. Something like this:
These functions expose marking function for all the different GC things. For each GC thing, there are several variants. As an example, these are the variants generated for JSObject. They are listed from most to least desirable for use:
MarkObject(JSTracer *trc, const HeapPtr<JSObject> &thing, const char *name);
This function should be used for marking JSObjects, in preference to all others below. Use it when you have HeapPtr<JSObject>, which automatically implements write barriers.
MarkObjectRoot(JSTracer *trc, JSObject *thing, const char *name);
This function is only valid during the root marking phase of GC (i.e., when MarkRuntime is on the stack).
MarkObjectUnbarriered(JSTracer *trc, JSObject *thing, const char *name);
Like MarkObject, this function can be called at any time. It is more forgiving, since it doesn't demand a HeapPtr as an argument. Its use should always be accompanied by a comment explaining how write barriers are implemented for the given field.
Additionally, the functions MarkObjectRange and MarkObjectRootRange are defined for marking arrays of object pointers.
@@ +56,5 @@
> + * single MarkString function.
> + */
> +#define DeclMarker(base, type) \
> +void Mark##base##Unbarriered(JSTracer *trc, type *thing, const char *name); \
> +void Mark##base(JSTracer *trc, const HeapPtr<type> &thing, const char *name); \
Please put Mark##base first, since it's the main one.
@@ +78,5 @@
> +DeclMarker(XML, JSXML)
> +#endif
> +
> +inline bool
> +IsMarked(JSContext *cx, Cell *cell)
Please group the Mark and IsMarked function together in a separate section.
@@ +87,5 @@
> +/* ********** Externally Typed Marking ********** */
> +
> +/*
> + * Note: this must only be called by the GC and only when we are tracing through
> + * MarkRoots. It is explicitly for ConservativeStackMarking and should go away
One space after the period.
@@ -202,5 @@
> - * the type of the object. The static type is used at compile time to link to
> - * the corresponding Mark/IsMarked function.
> - */
> -inline void
> -Mark(JSTracer *trc, const js::HeapValue &v, const char *name)
These Mark functions are used by IonMonkey, so we shouldn't remove them.
@@ +137,3 @@
>
> +/* TypeNewObject contains a HeapPtr<const Shape> that needs a unique cast. */
> +void MarkShape(JSTracer *trc, HeapPtr<const Shape> &thing, const char *name);
void should go on a separate line.
@@ +138,5 @@
> +/* TypeNewObject contains a HeapPtr<const Shape> that needs a unique cast. */
> +void MarkShape(JSTracer *trc, HeapPtr<const Shape> &thing, const char *name);
> +
> +/* Direct value access used by the write barriers and the methodjit */
> +void MarkValueUnbarriered(JSTracer *trc, const js::Value &v, const char *name);
Here too.
@@ +149,5 @@
> +MarkCrossCompartmentValue(JSTracer *trc, const js::HeapValue &v, const char *name);
> +
> +/*
> + * MarkChildren<JSObject> is exposed solely for preWriteBarrier on
> + * JSObject::TradeGuts. It should not be considered external interface.
One space after the period.
@@ +157,5 @@
> +
> +/*
> + * Trace through the shape and any shapes it contains to mark
> + * non-shape children. This is exposed to the JS API as
> + * JS_TraceShapeCycleCollectorChildren.
One space after the period.
Created attachment 595840 [details] [diff] [review]
v2: With review feedback.
(In reply to Bill McCloskey (:billm) from comment #3)
> @@ .
They were very helpful when pushing the marking indirection up through the stack -- I didn't need to add lines to the macros. That's done now, so no problem removing them.
> :::?
I totally forgot about those: notes that I kept as I worked out how the marker works. I was planning to remove them before cutting a patch.
> I think we need a comment here...
Thanks for writing it for me.
> Please group the Mark and IsMarked function together in a separate section.
&
> These Mark functions are used by IonMonkey, so we shouldn't remove them.
I'm not entirely sure what you mean here. I added a new "Ion Monkey" section and put these back together.
Comment on attachment 595840 [details] [diff] [review]
v2: With review feedback.
Review of attachment 595840 [details] [diff] [review]:
-----------------------------------------------------------------
Excellent, thanks!
::: js/src/jsgcmark.cpp
@@ -1,1 @@
> -/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 4 -*-
Please keep the mode line for emacs. I guess it should go in a separate comment so as not to affect the license boilerplate.
@@ +205,4 @@
> }
>
> void
> +MarkRootGCThing(JSTracer *trc, void *thing, const char *name)
Based on the other ones, I guess this should now be MarkGCThingRoot.
@@ +313,4 @@
> }
>
> void
> +MarkShape(JSTracer *trc, HeapPtr<const Shape> &thing, const char *name)
For now I think this should be a const HeapPtr<const Shape> &thing.
::: js/src/jsgcmark.h
@@ -1,1 @@
> -/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 4 -*-
Same thing here.
@@ +137,5 @@
> */
> void
> MarkCycleCollectorChildren(JSTracer *trc, const Shape *shape);
>
> +/*** Ion Monkey ***/
Please give this a different name, like "Generic", and put a comment that they're only to be used in templated code, and that it's better to call the marking functions that include the type in the name when possible.
(What I mean before is that these functions are only called from IonMonkey, so if you delete them, then the IonMonkey repo will get broken on the next merge and David will just have to add them back.)
Backed out in - billm thought my bet on bug 714109 being the source of the GC crashes in browser-chrome wasn't a good bet, and that this was better.
It looks like this is missing some NULL checks on the range marking functions.
Also, now I'm thinking that those intermediate Mark and MarkUnbarriered functions you had may have been nice to have, since they make the code easier to step through in the debugger. Would you mind putting them back?
You are right. I was just looking at the Id/Value Range functions before I implemented those, so that's probably why I accidentally dropped them. Adding back those and the intermediate template functions. Will have a patch in a few minutes.
Created attachment 596080 [details] [diff] [review]
v3: Re-added opt null check.
This re-adds the intermediate functions between the macros and MarkInternal and adds an opt null check to MarkRange. It does not add a null check to MarkRootRange, since it appears that there was not one before. We assert that the thing isn't null in debug builds, so I'm not sure why this wasn't dying on debug as well.
Try run:
|
https://bugzilla.mozilla.org/show_bug.cgi?id=721463
|
CC-MAIN-2016-30
|
refinedweb
| 1,606
| 55.54
|
Learn more about Scribd Membership
Discover everything Scribd has to offer, including books and audiobooks from major publishers.
1. What is an ERP? It is a process of integrated flow of Information, which binds the organization together. It is an integrated application software module providing operational, managerial and strategic Information for improving productivity, efficiency and quality. PeopleSoft HRMS system provides complete support for all human resources needs with functionality for
– Recruiting employees for jobs
– Tracking training, employee skills and education
– Administering base benefits programs and more
1. Describe the Life Cycle of a Project (ERP Implementation)? The Project passes through the following stages.
1. Analysis
2. Designing
3. Coding
4. Testing
5. Implementation
6. Maintenance.
2. What is Component Processor?
The
Initial data retrieval through updating the database.
Component Processor Controls the PeopleSoft Applications from
users enter information on pages. Issues INSERT DELETE and UPDATE statements to
component processor manages the flow of data processing as
maintain data in the
database and SELECT statements to retrieve data
3. What is component buffer? Component Buffer is the area in memory that stores data for the currently active component.
4. What is the difference between component buffer and data buffer? Component buffer contains all the data of the active component. Data buffer contains the data other than the data in the component buffer (Data of other records)
5. What data buffer classes are available in people code? Rowset, Row, Record, Field, Array, File, Sql, chart, grid and so on.
6. How do you bring component buffer in to application engine program? You can assign a record which is used in component buffer to a state record of Application engine.
7. Difference between field edit and save edit? In Field edit for each field change, a transition to the application server to the database is taken place. In Save edit for all the fields, only one transition to the application server to the Database is taken place.
8. Diff b/w save pre change and save post change?
9. Arrays and Load lookup in SQR? Load Lookup is used to reduce the complexity of joins - it populates the values of a certain field depending on the key field specified from a certain table. Then the users can query from the preloaded lookup table instead of joining tables. While arrays are used to store and retrieve data using the get and put commands Load-lookup:
Load-lookup will be populated at compilation time. We can adjust the size of load-lookup. It is only for text data type Array:
Array gets populated at Execution time. We can't modify size of the Array. If we given more than the size of the array, Array supports all data types.
10. How can we know from SQR, if environment is PSNT or PSUNX?
11.What is SQL and View Temporary Table? SQL View: SQL View has fields from one or more tables in the reorganized way. This provides alternative view of information stored in the tables. Temporary Table: are used for running application engine batch processes. Temporary tables can store data to update without risking the main application table.
12.Can the output of a SQL query be stored in a variable using PeopleCode? If so how it be done? SQLExec ("SELECT EMPLID FROM JOB", &Emplid);
13.How to migrate roles or PeopleCode from one database to another database?
Include all the roles in a project by clicking on Insert -> Definitions into Project -> select Roles and add them into the project. Migrate the project to another database. Create a data mover script to migrate roles from PSROLEDEFN or PSPCMPROG for peoplecode table.
1. Login to database through App Designer as a source database.
2. Click on upgrade tab and open the project which contains roles which you want to
migrate.
3. Double click on Roles folder under the opened project.
4. Select Action as "Copy".
5. Go to Tools > Copy Project > To Database
6. Give database name (Target Database) to which you want to migrate roles.
7. Click on "OK"
8.
Select "Roles" from different objects and copy those roles.
9. After completion of Copying double click on Roles folder under the opened project and verify that "Done" checkbox should be checked.
14.What is the use of set control field in record field properties? Set Control id is used when you want to share tables in PeopleTools applications.
15.How to create prompt table? Using edit table option in record field properties
16.How is performance management taken care in People Tools?
– Indexing tables on the database side helps in batch processing a great deal.
– Analyzing tables helps.
– If there is custom code make sure the SQL queries used are written well with the use of proper keys and joins are correct as well.
1. What are state records?
– The state record will be used to pass variable information between two application engine sections.
– It can be physical or derived work record. Physical record can be used when you have restart logic and when you have disabled the restart logic derived work record can be used.
– There can be a max of 200 state records that can be used in a single AE but only one of them can be default state record name must end with _AET.
1. What is mandatory step of application engine program? Main-Step-Action
– Main is the required section in Application Engine. - There can be multiple steps in single application engine but at least one step should be part of AE. - Similarly you can have multiple actions in AE but you should have minimum 1 action part of step.
1. What is difference between component level peoplecode and record level peoplecode? Component level PeopleCode is associated with unique component, where as record level peoplecode can be associated with any number of components
2. Types of PeopleCode functions? People code supports these types of functions:
– Built in
– Internal
– External people code
– External non-people code
1.
Explain with example where you used peoplecode extensively?
2. What is the difference between Prebuild & Postbuild events and saveprechange and savepostchange? Prebuild can be used to validate your search data, discarding rows. Postbuild can be used to play with the pages (hide, unhide), filling up scrolls. Saveprechange is the last event where you can validate and correct your data before updating the database. Once it is done, database will get updated. Savepostchange will be used to play with tables which are not present in your component buffer.
3. Sequence of peoplecode events? Searchinit peoplecode performs before the search dialogue box displays. Search save peoplecode performs after the operator clicks ok in the search record dialogue box. Row select peoplecode is used to filter out rows of data. Prebuild is often used to hide and unhide the pages. Field default attempts to set defaults for fields without a value. Field formula performs, after field default completes successfully. Rowinit is used to initialize the rows. Postbuild peoplecode performs after all the component build events have performed. Activate event is fired every time the page is activated.
4..
1. Where is peoplecode stored? Database Server, PSPCMPROG
2. Is there any function in peoplecode which stops the processing of whole component?
Think-time functions suspend processing either until the user has taken some action (such as clicking a button in a message box), or until an external process has run to completion. Think-time functions Following are Think time functions:
– DoCancel
– DoModal
– DoModalComponent
– Exec (only when Synchronous)
– File attach functions
– Prompt
– RemoteCall
– RevalidatePassword
– WinExec(only when Synchronous)
– WinMessage
– WinMessageBox
1. Stages of program flow in SQR? Compile stage All the Preprocessor directives are compiled (which starts with #include). Ex: All the SQC are run. Check for the syntax errors for the conditions. Ex: if for loop while loop are properly ended with the respective syntax. Allocates memory structure if you are using the Arrays and load look up Execution stage starts interpreting the code line by line Check for the begin -program Begin -heading Begin- footer
2. Program flow of SQR?
– Setup section
– heading section
– footing section
– program section
– procedure section
1. Difference between Translate values and Prompt tables? Translate Table:
Translate table is a special kind of table that is limited to validating data of four characters or less. The translate table serves as a universal prompt table and is effective-dated Prompt Table:
Prompt table are used to provide users with validate values from other tables other values are generally populated by system users and are often application specific
2. What is .SQC and .SQT? .SQC is a Function Library file. It is like a sub program is saved by extension .SQC and this program can be called in the SQR program .SQT is compile time/Run time file. When a file with XXX.SQR is compiled we get the output as XXX.SQT and when is XXX.SQT is executed we get the output XXX.LIS (List file/Output file).
3. Important SQC that need to be attached to SQR program?
– #include 'setenv.sqc'
– #include 'stdapi.sqc'
– #include 'prcsdefn.sqc'
– #include 'prcsapi.sqc'
– #include 'curdtrim.sqc'
– #include 'hrctlnld.sqc'
– #include 'datwtime.sqc'
1. What is Normalization in Oracle? The major goals of Normalization are
– Eliminating redundant data (for example storing the same data in more than one table
– Ensuring data dependencies (only storing related data in a table).
1. Performance tuning in SQR?
– Load Look Up
– Arrays
– Multiple Report
– -Bnn
– Using SQT Files
– Run on the BATCH Server
– Proper Programming Logic
– Set processing
– SQL Tuning
1. Difference between search record and add search record? Search Record: Specify the search record for the component. The search record controls access to rows of data in a table. Its keys and alternate search keys appear on the search page as criteria. Add Search record: Specify if you want a different search record specifically for add actions.
2. What is difference between scroll and grid? Scroll area is used to maintain parent child relationship we insert grid in low level scroll Example: assume we have 3 scroll levels in our page level1 level2 and level3 we insert grid in level 3
3. How many ways we can run the application engine program?
– Running from Application Designer.
– By calling People Code function.
– Running from DOS Environment (Debugging).
– Running from Application Engine People Tool.
– Running from People soft Application.
1. Functional and Technical?
– Based on Customer Business processes functional person maps requirements to PeopleSoft and performs the rules and the Configurations required. He is the one who collects the requirement and decides what customization is and what is delivered by PeopleSoft.
– PeopleSoft Technical Guy is the Person who knows how to code in PeopleSoft to execute the requirement.
1. What are the new features added in PS 8 Application Designer? The newly added features in PS 8 Application Designer are as follows:
– The Application Reviewer has been integrated with Application Designer in 8.
– The PeopleCode has become VBA style with objects properties and methods.
– The Meta SQL variables are introduced. The new variables like component, record, SQL has been introduced.
– The Application engine now supports PeopleCode.
– The scroll bars have become scroll areas in PS 8.0 etc.
1. What are the three actions that can be attached to menu?
– Component
– Separator
– PeopleCode
1. What is the difference between a Process and a Report? The Process receives a command line parameter where as the Report receives run controls from the page.
2. What are maximum number of actions possible in a step, list them Various actions possible in Application Engine step are as follows:
– Do while
– Do when
– Do select
– SQL
– Call section
– Log Message
– Do until One action can be called only once in a step of an Application Engine program.
1. Tell about application engine program you worked with?
2. What is built in restart logic in Application Engine programs? Within each Application Engine program, you must define how frequently the program will issue a COMMIT. After doing so, each COMMIT becomes a "checkpoint" that Application Engine uses to locate where within a program to restart after an abend. This type of built-in logic does not exist in COBOL or SQR.
3.
What are Application Engine State records? The State Record is a PeopleSoft record that must be created and maintained by the Application Engine developer. This record defines the fields a program uses to pass
values from one Action to another. Think of the fields of the Application Engine State Record as comprising the working storage for the Application Engine program. An Application Engine State Record can be either a physical record or a work record, and any number of State Records can be associated with a program. Physical State Records must be keyed by process instance. An Application Engine State Record must have PROCESS_INSTANCE defined as the first field and the only key field. And, so that the system recognizes the record as a State Record, all State Record names must end with the _AET identifier.
4..
5. In which events error & warning are used most extensively. Field edit, save edit, Search save, row delete, row insert.
6. Is there any way by which you can find out whether the user is in Add mode or Update mode? %mode---returns A---for Add mode. Returns U –for Update mode
7. How is the searchinit event most often used by people soft application? Searchinit fires before the search dialogue page is displayed to the end use. For this reason searchinit is often used to enhance row level security by inserting and graying out certain values to the search dialogue page.
8. When we select a component what events will be fired? If default mode for component is search mode: only searchinit will fired .If default mode for component is new mode. Field default, field formula, rowinit, searchinit.
9. What are different variables in people code and their Scope?
System variables and User defined variables. Scope --- Global, Component, Local. 10.What is default processing? In default processing, any blank fields in the component are set to their default value. You can specify the default value either in the Record Field Properties, or in FieldDefault PeopleCode 11.What is difference between field default and Rowinit?
Field default specifies only the default value for a field when we are in Add mode. Row init fires only when a row of data coming from database to component buffer. 12. 13.What is Getlevel 0()?. 14. 15 16.Can you save the component programmatically? Using Dosave and Dosavenow functions. 17.What are differed processing and its advantages? Postpones some user actions to reduce the number of trips to the database so that increases the performance (in system edits, field edit, and field change). Advantages:
1)
Reduces the network traffic.
2)
Increases the performance.
18.Write the syntax to access third level record field using object oriented
peoplecode? &fld=Getlevel0 ()(1).GetRowset(Record GetRow(1), GetRowset (Record.).GetRow(1), GetRowset (Record.).GetRow(1), GetRecord (Record.).GetFieild(Field)) 19.What are the built-functions used to control translate values dynamically? Adddropdownitem () Deletedropdownitem () 20.Before accessing a people soft application what levels of security must be passed through.
a) Field level security
b) Row level security
c) Maintain security
d) Definition security
e) Portal security.
21.What is the use of primary permission list in user profile? Primary permission list is used for mass change and definition security purposes. 22.How to authorize the user to run a process or report? To authorize a user to run a process, the process group, which contains the process or report, should be added to the permission list of that user. 23.How to give access to the records that are to be used in a query? To give access to the records that are to be used in query, we have create a new query security tree and add the records which we want to give the access and then assign a access group to the tree. After that we have to add that query tree and query access group to the permission list. 24. 25.What are the different ways we can set up the portal security to access component in portal? 1) Structure & content 2) Menu import 3) Register component 26.Steps involved in Data Conversion?
– Extract data from the legacy system
– Reconcile the extracted data
– Identify the tables to be leaded with the new system
– Data Mapping
– Identify the tools (SQR or Import Manager or SQL Loader etc)
– Write programs to perform conversion
– Test the programs using test data
– Check the data outline
– Reconcile concerted data.
1. Why SQR is used?
– Data conversion
– Reports
– Interface programs.
1. How do you link SQR reports to process scheduler?
– Create/modify/add run control table if you have any new fields
– Create/modify/add run control page if you have any program inputs
– Create a menu definition (Note Menu group name: XYZ)
– Give operator security
– Create Process scheduler definition
– Use-Process definition – process definition add
– Give report name and report type
1. What are variable types in SQR?
– & Data base reference fields – Read only
– $ Character (Same for Date)
– # Numeric
– { } Variable in ASK or # define
– [$ variable] Dynamic variable referencing
1. What are the types of record definitions?
– SQL Tables
– SQL views
– Dynamic views
– Derived / Work Records
– Sub Records
– Query views
1. What is Dynamic View? Dynamic view that can be used like a view in pages and PeopleCode, but is not actually stored as a SQL view in the database. Instead, the system uses the view text as a base for the SQL Select that is performed at runtime. Dynamic views can provide superior performance in some situations, such as search records and in PeopleCode Selects, because they are optimized more efficiently than normal SQL views.
2. Table loading Sequence (installation)?
– Company table
– Installation
– Location
– Department
– Salary Plan
– Salary step
– Job code
– Pay group
– Benefit Programs
1. What is Application Engine? It is the tool, which performs, background SQL processing against our application data tables. It is an alternative for COBOL, SQL or SQR
2. What are the Different types of Application Engine? Standard: Standard entry-point program. Upgrade Only: Used by PeopleSoft Upgrade utilities only. Import Only: Used by PeopleSoft Import utilities only Daemon Only: Use for daemon type programs. Transform Only: Support for XSLT Transform programs.
3. What is the advantage of using Application Engine? The following are the advantages of using Application Engine. Encapsulation Unlike applications developed using COBOL or SQR, Application Engine applications
reside completely within your database. With Application Engine, there are the programs to compile, no statements to store, and no need to directly interact with the operating environment in use. You can build, run and debug your applications without existing People Tools. Effective Dating Application sections are effective dated-meaning you can activate/deactivate a section as of a particular date. This enables you to archive sections as you modify them, instead of destroying them. In the future if you decide to revert to a previous incarnation of a section you can simply reactivate it. SQL / Meta-SQL Support In addition to writing your SQL within Application Engine, you can also copy SQL statements into Application Engine from SQL talk or any other SQL utility with few – if any changes. RDBMS platforms have many differing syntax rules – especially in regard to date, time and other numeric calculations. For the most part you can work around this problem using Meta-SQL which Application Engine supports. This language was created to handle different RDBMS SQL syntax’s by replacing them with a standard syntax, called Meta-strings. With in Platform specific sections You can also have the ability to call generic portions of SQL statements by using the & CLAUSE function. This means you can write your generic SQL portions just once, and reference them from your different platform versions. Set Processing Support Set processing is a SQL technique used to process groups (or sets) of rows of one time rather than one at a time. Application Engine is particularly effective of processing these types of applications. Object Orientation unless designed to anticipate changes in field attributes. COBOL applications may need to be modified when things change. If a developer increases a field’s length, then it may need to be changed in every instance where the COBOL program uses this field as a bind or select variable. This can require a good bit of effort. And, if not handled properly, a change like this can cause confusing errors. For example, if the length of a field in the COBOL is wrong, it may work fine, or you may get an error, or the field may get truncated. One of the corner stones of People soft functionality is Application Designer. Because of the way it works, most field attributes (type, length and scale) can be specified once, globally. If the field is used on more than one record, it has the same attributes in each of these records. PORTABILITY you can use Data Mover to import/export your applications. This means that you can export an application(s) into a file, and attach it to an e-mail message. Then, the recipient can simply use the IMPORT feature of Data Mover, and the application is ready to run.
4.
Where are the search records assigned? Search records are assigned to a component in a menu.
5.
Does the search record for a panel have to be the same as the record being
accessed on the panel? Why or Why not? The search record for the panel does not have to be the same as the record being accessed on the panel because the search record is used to search for and or Filter the search key.
6. Differentiate Error V/s Warning statements in People code? The error statement issues a message and the condition causing the error must be corrected before proceeding. The warning statement issues a message and the user can proceed without changing any values.
7. Where can you run Jobs? The process scheduler can run jobs on the client or a server machine.
8. What restrictions are placed on multi-process jobs?
A multi process jobs can only be scheduled to run on a server.
9. List the three output destinations available through the Process Scheduler?
You can direct the output to a printer, file and windows screen. 10.What fields should be at the top of every search record definition that use
table set Ids? SET ID is the field that should be at the top of every record definition that uses table set Ids. 11.What is a Record Group ID?
A Record group ID is a group of record definitions that are Sharing the same set
control field. 12.What are the types of layers in Crystal reports?
There are 4 types of layers in Crystal Reports. They are
1. Report Header - In this, we will write title, date, and logos of the company.
2. Page Header – Used to write column headings.
3. Detail – Contains database column values.
4. Page Footer – Used to write page numbers and address.
13.Define security administrator?
Security administrator is used to control and access the various People soft menus. 14.How many types of security administrator profiles? Define? There are three types of Security administrator profiles:
1. Access Profile: It is an RDBMS ID. It provides the necessary Ids and password for
behind-the-scene process.
2. Class Profile: It is defined to organize the users into groups with common access
rights or privileges.
3. Operator Profile: It is commonly referred as a Operator Ids or operator having
associated sign on passwords.
15.Define Object Security?
The Security profile which is created as an operator security to restrict access to People soft data. 16.What is Translate Table?
A translate table is a prompt table that serves as data dictionary to store values for
fields that don’t need individual prompt tables of their own.
17.What are the limitations of Translate table?
1. Field type should be character
2. Minimum Field length should be 1 to 4 characters.
3. Field values should be small (static).
18.What is Effective Date? Effective date is used to store history, Current and Future information. 19.History date Vs past Date?
Past date - Within 30 days of current date is called past date. History date - Above 30 days of current date is called History Date. 20.What is a record?
A Group of non-repetitive fields is called a record.
21.How many types of records are there?
There are six types of records
1. SQL table - Corresponding physical SQL table in the database we create with build
option.
2. SQL View – It is not a physical SQL table on the database, it gives the replicate of
joined tables. It is used for security and faster access.
3. Dynamic View – It is actually stored in the form of SQL view text and is executed
at runtime. It uses the built in indexes. Whereas normal view is executed and stored
in
the database.
Derived/Work record - It is a temporary workspace to be used during on line panel
processing and is not stored in the database, therefore derived work records are not built. They cannot be seen in the update/display mode. Once the panel is cancelled it
is
removed from the buffer.
Sub Records – A group of fields commonly used in multiple records.
6.
Query View – A Query view is a view constructed using People soft Query tool.
22.How many types of Displays are there in the tool bar?
1. Field Display - It shows the field attributes (fieldname, Type, Len, Format, H, and
Shortname, Long name
2. Use Display - It shows key related characteristics and default values for the fields
(Field name, type, direction indicates, search key, list, system indicates, audit, H,
default values)
3. Edit Display - It shows the auditing options available for the fields (field name,
type required , edit, prompt table, reasonable date, people code)
4. People code display - It shows the different events and the user can choose
required event to write people code. 23.What is application engine program? PeopleSoft Application Engine program is a set of SQL statements, PeopleCode, and program control actions that enable looping and conditional logic.
24.Define People Tools?
A Collection of software programs, utility scripts, database tables and data that
provide the frame work for creating, using and modifying people soft applications. People tools provide built in business functionality and maintain the capability that directly increase productivity and simplify system design.
25.What does Application Designer mean?
It is an integrated development environment that is used to develop People soft applications 26.Functionality of Application Designer? The following are the uses of Application Designer.
1. Design and create database tables.
2. Design on-line pages
3. Controlling on-line processing flow.
4. Providing security for the database.
1. What is a project?
User defined collection of related definitions (fields, records, pages, components and menus).
2. What are the physicals attributes Applications designer screen? The following are the attributes of Application Designer
1. Title bar
2. Menus
3. Toolbar
4. Project Workspace – it arranges PeopleSoft objects in Windows explorer format
5. Objects Workspace – Open Multiple Object and store in main window.
6. Output Window – Deals about the output generated by using project development
or up gradation.
7. View tabs – Development tools / Upgrade
3. How is data stored, retrieved, manipulated and processed in People soft
applications? PeopleSoft is a table-based system and it contains three major sets of tables,
1. System catalog tables; it stores physical attributes of tables and views. (e.g. Sys,
Columns, Sys tables)
2. People Tools tables; it contains information that you define using People Tools
(e.g.
PSRECDEFN, PSMENUDEFN)
3. Application Data Tables; Store the actual data users enter and access through
People Soft application windows and pages. (E.g. PS_ <>)
4. How many types of RDBMS support PeopleSoft? The following are the list of RDBMS supporting PeopleSoft application. DB2, SQL Base, Oracle, Microsoft SQI Server, Informix
5. Define a Field, Field attributes, and Field properties?
Fields are basic building blocks in People soft and can be used in application when they are added to at least one record. Fields are globally defined. The common filed attributes are:
1. Data type 2. Field name 3. Long name 4. Short name 5. Formatting 6. Help
context number 7. Translate values – stored in separate table (XLATTABLE). Fields are 1.Globally defined 2.Reusable components and can be shared across multiple record definitions 3.A change to the Field properties affects all the records that include the field.
Explain briefly about record properties?
KEY:
The record knows a field, which uniquely identifies each row, as a key.
1. We will search and retrieve data from the database according to the key field.
2. This will not allow duplicate.
7. How many types of securities are available in People soft? There are 6 types of securities:
1. RDBMS Security 2. Network security 3.Operator security 4. Object security 5.Tree
level security 6. Query security (row level security)
8. Types of Menus?
1. Standard menu: It appears in the menu bar of a PeopleSoft application.
2. Popup menu: Allows the user to navigate related information in other areas of application by right clicking on a page or component.
1. What is the difference between Key and alternate search Key? cannot go for more than three levels of parent/child relationships.
Advantages are:
1) To have referential integrity.
2) Data dependencies
3) Eliminate redundant data.
6. Can you hide a primary page in a component? Reason?
No we cannot hide the primary page of a component. If the component had only one page then by making this page also invisible we won’t have any component existing so we are not allowed to hide the primary page.
7. What is an Expert Entry?
Expert entry enables a user to change from interactive to deferred mode at runtime for appropriate transactions
8. What is Auto Update?
This record field property is used to update the date field of particular record with the server's current date and time whenever a user creates or updates a row. Even the user enter the data into that field, the data which the user enters will be updated by the system’s current date and time.
9.
What is Record Group? Which records can be included into a record group?
Record group consists of records with similar functionality. To setup a record in record group we should enter a set control field value in record properties
10. How can you improve the security and usability of a Prompt table edit.?
Prompt table view
11. What are the different ways to setup row level security?
We can setup row-level security using a SQL view that joins the data table with an authorization table. And by having Query search for data using a query security record definition. The query security record definition adds a security check to the search.
12.How does PeopleSoft use views? Which are online functions? People soft uses views for search records, summary pages, prompt views, reports Search records and summary pages are online functions. 13.Why do PeopleSoft often use views as search records?
Search views are used for three main reasons.
Adding criteria to the search dialogue page
Providing row level security
Implementing search page processing
14.How can a component have more than one search record? Give a situation.
You might want to reuse the same component multiple times with different search records. You can accomplish this by overriding the component search record at runtime when the component is opened from a menu item without creating separate copies of the component. The component override is temporary, and occurs only when the component is opened from the menu item in which the override is set. It does not change the component definition.
15.. 16.Differentiate Field edit and Save edit?
In Field edit for each field change, a transition to the application server to the database is taken place.
In Saveedit for all the fields , only one transition to the application server to the Database is taken place.
17.What are think time functions?
Think-time functions suspend processing either until the user has taken some action (such as clicking a button in a message box), or until an external process has run to completion.
18.In which events error & warning are used most extensively.
Field edit, Save edit, Search save, row delete, row insert
19. Is there any way by which you can find out whether the user is in Add mode or Update mode?
%mode---returns A---for Add mode.
Returns U –for Update mode
20.How is the searchinit event most often used by people soft application? Searchinit fires before the search dialogue page is displayed to the end user. For this reason searchinit is often used to enhance roll level security by inserting and graying out certain values to the search dialogue page. 21.What are the options for using SQL in people code?
SqlExec
Record class methods (selectbykey, delete, insert, update)
Using Sql class, its properties and methods
22.
23.What data buffer classes are available in people code?
Rowset, Row, Record, Field, Array, File, Sql, chart, grid and so on
24.When we select a component what events will be fired?
If default mode for component is search mode: only searchinit will fired .If default mode for component is new mode: field default, field formula, rowinit, searchinit.
25. What are different variables in people code and their Scope?
System variables and User defined variables.
Scope ---Global, Component, Local.
26. What is default processing?
In default processing, any blank fields in the component are set to their default
specify the default value either in the Record Field Properties, or in FieldDefault PeopleCode
value. You can
27.What is difference between field default and Rowinit?
Field default specifies only the default value for a field when we are in Add mode.
Row init fires only when a row of data coming from database to component buffer
28.
29. What is Getlevel0()?.
30.
31.
32. Can you save the component programmatically?
Using Dosave and Dosavenow functions
33.What is differed processing and its advantages?
Postpones some user actions to reduce the number of trips to the database so that increases the performance (in system edits, field edit, and field change)
Advantages:
Reduces the network traffic
Increases the performance
34.Write the syntax to access third level record field using object oriented peoplecode?
&fld=Getlevel0()(1).GetRowset(Record.<level1 record>.GetRow(1),
GetRowset(Record.<level2record>).GetRow(1),
GetRowset(Record.<level3 record>).GetRow(1),
GetRecord(Record.<level3 record>).GetFieild(Field.<field name>))
35.What are the built-functions used to control translate values dynamically?
Adddropdownitem()
Deletedropdownitem()
36.How to populate data into grid in online?
&Rs.Select() Scrollselect()
37.Before accessing a people soft application what levels of security must be passed through.
Field level security, Row level security, Maintain security, definition security, Portal security
38.What is the use of primary permission list in user profile?
Primary permission list is used for mass change and definition security purposes.
39.How to authorize the user to run a process or report?
To authorize a user to run a process, the process group, which contains the process or report, should be added to the permission list of that user.
40.
41.What are the different ways we can set up the portal security to access component in portal?
– Structure & content
– Menu import
– Register component
1. What are the main elements in the component Interface?
– Component interface name
– Keys
– Properties and collections
– Methods.
2. How do you provide security for the component interface?
∑ Open the Permission list
∑ On the Component Interface tab
∑ Add row and select the newly created Component Interface
∑ Edit the permissions to give permission for the standard methods
∑ Get, Create, Save, cancel, find.
1. What the steps that you need to do in people code to invoke Component Interface?
∑ Establish a user section
∑ Get the component interface definition
∑ Populate the create keys
∑ Create an instance of the component interface
∑ Populate the required fields
∑ Save the component Interface.
&Session = GetSession();
&CI = &Session.GetcompIntfc(CompIntfc.INTERFACE_NAME)
&CI.KEY_FILED_NAME = ‘NEW’
If not &CI.Create () Then
Else
Populate other fields
End-if;
Populate the other fields
If not &CI.Save () Then
1. How do you test Component Interface?
∑ Using the Component Interface tester
∑ Give values in the tester for options
∑ Get Existing, Create new, Find
and perform the operation from the CI Tester
1. Catching error message in the component Interface? Or PSMessages in the CI ?
Use of
This function needs to be called when ever methods like Find, Save, Create methods return false.
Error text and Error type can be printed in the log message for any other action in to the log message.;
2..
3. What are properties?
The Fields in the level 0 in the component are the properties of the component.
Standard properties
Createkeyinfocollection Getkeyinfocollection field properties.
Findkeyinfocollection
Property Info collection
GetHistoryItems (Update/Display mode or
EditHistory Items
InteractiveMode.
User-Defined properties
Developer can further control the exposed
Correction mode)
4. How do you login in correction mode in the Component Interface?
Get History Items and Edit History items property to should be set to true.
Get History Items alone: - Update display all - modes will be used.
This is an example of how to grab the most recent/correct row from an effective dated/effective sequenced table such as PS_JOB
SELECT A.EMPLID FROM PS_JOB A
WHERE A.EMPLID = '12345'
AND A.EFFDT = (SELECT MAX(A_ED.EFFDT) FROM PS_JOB A_ED
WHERE A.EMPLID = A_ED.EMPLID AND A.EMPL_RCD = A_ED.EMPL_RCD AND A_ED.EFFDT <= GETDATE())
AND A.EFFSEQ = (SELECT MAX(A_ES.EFFSEQ) FROM PS_JOB A_ES
WHERE A.EMPLID = A_ES.EMPLID AND A.EMPL_RCD = A_ES.EMPL_RCD AND A.EFFDT = A_ES.EFFDT)
6. Types of Tables in PeopleSoft?
Base tables, Control tables, Views, Reporting Tables and Application data tables
A base table is the place where nearly every query starts. These tables store information about an employee and contain data about the employee. A base table stores live data that is continually changing. The table could store information about employees, their dependents, their earnings, taxes, deductions or benefits. In short, these tables hold the real data. EX: PS_PERSONAL_DATA, PS_JOB
A control table contains values that classify and categorize. For example, a table that contains all of the possible earnings codes (regular, bonus, overtime, etc.) is a control table, whereas the table that contains the actual earning amounts is a base table. Control tables are also commonly known as ‘lookup’ or ‘prompt’ tables. Control tables are usually identified by suffix of ‘_TBL’. PS_DEPT_TBL, PS_LOCATION_TBL, PS_JOBCODE_TBL, PS_EARNINGS_TBL
Views are timesavers; they are the result set of an SQL statement. For example, the benefits view table takes fields from several tables, links them together correctly, and presents the result as a new table. Views link to original tables (base or control), so no data is duplicated or out of sync. Views are usually identified by a suffix of ‘_VW’.
Reporting Tables: In an attempt to appease those toiling away, searching for the location of basic employee data, PeopleSoft created three tables that contain the most-often-used human resources fields. These tables are similar to views, but are not dynamic. Their data is only current after a program is executed every night. Their chief benefit is performance. Instead of joining 10 tables every time you look something up, the tables are joined once at night and then used throughout the next day as a single table. EX: PS_EMPLOYEES, PS_BEN_PER_DATA and PS_BEN_PLAN_DATA are reporting tables
Application Tables: The PeopleSoft application stores application rules and definitions in application Tables. Occasionally these tables temporarily store data in the middle of a process. With few exceptions, these tables store data that is not relevant to the organization. Most of these tables are not discussed in this book since they contain
application data, not HR data. System tables often do not include an underscore after the ‘PS’ prefix
7. When working with PeopleSoft system tables, what are setup considerations that you need to make?
– Sequence of table setup
– Default values
– Effective dates
– Actions
1. The system categorizes effective-dated rows into three basic types:
Effective dates allow you to keep history, current, and future information in tables. When you update existing information, you do not want to lose or overwrite the data already stored in the database. To retain historical data, you can add a new data row identified by the date when the information goes into effect: an effective date. An effective date is a column in a table that is a key, but it is not typically a search key.
– Future: row greater than current row
– Current: row less than or equal to sysdate
– History: less than current row
1. The action you select tells PeopleSoft the type of activity you want to perform on the database. The following four action types are available:
– Add : add new row/value
– Update/Display:
– Include History
– Correct History
1. What are Setid’s and Table set sharing? Setid is the highest level key in the PeopleSoft. Location, Department and Jobcode tables are control tables and setid’s control the control tables during the transaction.
Table set sharing is a place where control tables are listed. It is accessed by business unit.
EX: If we have two locations Arizona and Ohio with setid’s xyz and abc, suppose if we change Ohio’s setid to xyz then we can access all information related to Arizona/xyz like jobcodes etc.
2. PeopleSoft application data table types?
–
Translate Tables
These values are between one and four characters long
They do no need to be updated often
They are effective dated
Implementation Processing and Defaulting tables
Installation Table
Organization defaults by Permission lists
Business unit HR defaults table
Table set control table
– Control tables: These tables serve as foundation for the Organization.
– Company Table
– Business Unit Table
– Location Table
– Compensation Rate Code Table
– Job Code Table
– Table SetId (SetId table)
– Establishment Table
– Department Table
– Salary plan, Grade and Step Table
– Pay group Table
– Transaction tables: Records change often in these tables
– Personal Information
– Employment table
– Job table
– Benefits program participation tables
1. Sequence of table setup in HR?
– Installation Table
– Tableset Control Table
– Organization defaults by Permission lists
– Business unit HR defaults table
– Establishment Tables
1. What are Metastrings or MetaSQL? Metastrings are special type of SQL expressions preceded by % sign. Metastrings are used in the following:
– SQLExec
– In application designer to build dynamic views
– With rowset object methods (select, fill)
– SQL objects
– Record class methods (Insert, Update)
– Application Engine
– Cobol
1. What are classes in PeopleCode?. You can also extend the functionality of the existing classes using the Application class.
2. What are Objects in Peoplecode?.
A property is an attribute of an object. Properties define:
– Object characteristics, such as name or value.
– The state of an object, such as deleted or changed.
1. Define SQR and steps for performance tuning? SQR stands for Structured Query report. SQR performs database processing and used as reporting tool. When program contains begin-sql, begin-select or execute commands, it performs sql statements, processing of sql statements consumes significant computing resources hence tuning sql statements yields higher performance.
Following are the steps for simplifying sql statements and reducing number of sql executions:
– Simplify a complex select paragraph.
– Use LOAD-LOOKUP to simplify joins.
– Improve SQL performance with dynamic SQL.
– Examine SQL cursor status.
– Avoid temporary database tables.
– Create multiple reports in one pass.
– Tune SQR numerics.
– Compile SQR programs and use SQR Execute.
– Set processing limits.
– Buffer fetched rows.
– Run programs on the database server.
1. Structure of SQR?
– Begin-Setup
– Begin-Heading
– Begin-Footing
– Begin-Report or Begin-Program
– Begin-Procedure
1. Commands that can be called from setup section?
– Ask
– Define-Chart
– Define-Layout
– Define-Image
– Define-Printer
– Define-Procedure
– Define-Report
– Dollar-Symbol
How to run SQR in PeopleCode? To call SQR from peoplecode we use peoplecode functions CreateProcessRequest() and Schedule()
2. Important Tables in HRMS? PS_PERSONALDATA, PS_PERS_NID, PS_EMPLOYMENT, PS_JOB, PS_JOB_TBL, PS_DEPT_TBL, PS_LOCATION_TBL, PS_FED_TAX_DATA, PS_STATE_TAX_DATA, PS_LOCAL_TAX_DATA, PS_BEN_PROG_PRATIC, PS_HEALTH_BEN, PS_LEAVE_PLANS, PS_LIFE_BEN, PS_RTRMNT_PLN, PS_PAY_CHECK, PS_PAY_LINE, PS_PAY_BONUS, PS_PAY_DEDUCTIONS, PS_POSITION_DATA
3. About events in PeopleCode? Record fields have 15 events, Components have 2 events and Page has 1 event.
4. Security? Security Tree A security structure that graphically represents the hierarchy of your organization. Tree Level Represents a logical division in your business hierarchy (ex: department, branch or region). Tree Node Represents an organizational entity on the tree . Tree Manager A PeopleSoft tool that provides a visual means to build a hierarchy of security for all organizational entities. Query Trees Graphical representation of Tables to which you wish to control query access.
Access Groups Nodes in Query Trees where you would group Operators and assign them access to all tables under the node. User Profiles: A User Profile describes a particular user of the PeopleSoft system. User Profiles define individual PeopleSoft users. You define User Profiles and then link them to one or more Roles. Typically, a User Profile must be linked to at least one Role in order to be a valid profile. User Profiles maintain the Roles that are assigned to the user. Roles are assigned to User Profiles. Roles are intermediate objects that link User Profiles to Permission Lists. Multiple Roles can be assigned to a User Profile, and you can assign multiple Permission Lists to a Role. Some examples of Roles might be Employee, Manager, Customer, and so on. Permission Lists are lists, or groups, of authorizations that you assign to Roles. Permission Lists store Sign-on times, Page access, PeopleTools access, and so on.
A Permission List may contain one or more types of permissions. The more types of
permissions in Permission List the more modular and scalable your implementation.
A User Profile inherits most of its permissions through the roles that have been
assigned to the User Profile. Data permissions, or row-level security, appear either through a Primary Permissions List or a Row Security Permissions list Row-level security controls access to the subset of data rows within tables the user is authorized to read or update. The decision to implement row-level security will be based on the need to provide that level of data security. To establish row- level security, you must first decide the necessary data security level required, which key fields to secure, and whether security will be defined through User IDs or Permission Lists. With row-level support, PeopleSoft security can restrict individual users or Permission Lists from specific rows of data that are controlled by key fields, for example:
– Business unit
– Set ID
– Ledger (and ledger group)
– Book
– Project Therefore, users would only be able to view those rows for which they have security access for a specified Business Unit or Project, for example. Once row- level security is turned on in for a particular PeopleSoft module, it applies to all applications within that module, not specific applications. Additionally, a method for implementing row-level security can be by User ID. Key fields can be associated to User IDs as well as Projects. Projects will also perform row-level security by User ID through the Use List. This sets up Projects to implement row- level security in a team-based method.
PeopleSoft security mainly depends on Rowlevel security. Security works best if it is based on organizational structure. Types of security supplied by PeopleSoft:
– Departmental security
– No security
– International security
Tree Manager:
3 steps in setting up department security:
– Create the security tree
– Update department security tree
– Grant and restrict access to the entities
1. What are steps involved in new employee hire process? Personal Information: Name, Address, Identity Job Data: Work Location, Job Information, Payroll, Salary Plan, Compensation Employment Data: Employment Information Earnings Distribution: Job earning distribution Benefit Programs Participation: enrollment of benefit programs
2. How many sub pages, secondary pages can be created in one level? We can create only one Secondary Page. But where as we can create N no of Sub Page in one Level
3. What is Integration Broker? PeopleSoft's Integration Broker is a messaging hub, that allows for data to be shared between different systems (e.g. PeopleSoft HRMS to Payroll [can be PeopleSoft or third party], HRMS to Finance, HRMS to CRM, etc.).
4. Types of tools in PeopleSoft? Development Tools:
– Application Designer
– Internet Technology and Portal Technology Integration Tools:
– Integration Broker
– Component Interfaces
– Workflow Technology
Analytic Tools:
– PeopleSoft Process Scheduler
– XML Publisher
– PS n/Vision
– Crystal Reports
– PeopleSoft Query
– Tree Manger
– SQRs Administration Tools:
– Data Management
– Security Administration
– System and Server Administration
– Performance Monitor LifeCycle Management:
– PeopleSoft Software Update
– Change Impact Analyzer
– PeopleSoft Setup Manager
1. Explain about.
1. What are Data Mover operating modes?
– Regular Mode: Using PeopleSoft User Id
– Bootstrap Mode: Using Database access Id
1. Commands in Data Mover?
– ENCRYPT_PASSWORD: Encrypt one or all user passwords (operator and access) defined in PSOPRDEFN for users.
– EXPORT: Select record information and data from records and store the result set in a file. You can use the generated export file as input for migrating to another platform.
– IMPORT: Insert data into tables using the information in an export file. If a tablespace or table does not exist, this command creates tablespace, table, and indexes for the record, using the information in the export file, and inserts the data.
REM: For Remarks
RENAME: Rename a PeopleSoft record, a field in one record, or a field in all records.
REPLACE_ALL: This is a variation of the IMPORT command. If a table already exists, use this command to drop the table and its indexes from the database and create the table and indexes using the information in the export file. Then, the command inserts data into the table using the information in the export file.
REPLACE_DATA: This is a variation of the IMPORT command. Delete data in existing tables and insert the corresponding data from the export file.
REPLACE_VIEW: Recreate specified views found in the database.
RUN: Run a specified .DMS file from within a PeopleSoft Data Mover script. The file cannot contain nested RUN commands.
SET: When a command is followed by valid SET parameters, it forms a statement that establishes the conditions under which PeopleSoft Data Mover runs the PeopleSoft Data Mover and SQL commands that follow.
SET IGNORE_ERRORS: If this command is set, then all errors produced by the SWAP_BASE_LANGUAGE command are ignored. Otherwise, the system stops on errors.
1. Sample data mover code?
set output c:\temp\position_data.dat; set log c:\temp\position_logfile.txt;
export position_data where location in ('11490', '11730','11804','11720') and position_nbr not in
('00002025','00002026','00002027','00002029','00002030','00002031','00002032');
set input c:\temp\position_data.dat; set log c:\temp\position_inputlogfile.txt;
import *;
2. Ensuring Data Integrity? You may want to use these tools during upgrades and system customizations, to verify the PeopleSoft system and check how it compares to the actual SQL objects.
– Run SQL Alter: The primary purpose of the PeopleSoft Application Designer SQL Alter function is to bring SQL tables into accordance with PeopleTools record definitions.
– Run DDDAudit: The Database Audit Report (DDDAUDIT) finds inconsistencies between PeopleTools record and index definitions and the database objects. This audit consists of nine queries: four on tables, two on views, and three on indexes.
– Run SYSAUDIT: The System Audit (SYSAUDIT) identifies orphaned PeopleSoft objects and other inconsistencies within the system. An example of an orphaned object is a module of PeopleCode that exists, but which does not relate to any other objects in the system.
Database level Auditing?
PeopleSoft provides trigger-based auditing functionality as an alternative to the record-based auditing that PeopleSoft Application Designer provides. This level of auditing is not only for maintaining the integrity of the data, but it is also a heightened security measure. PeopleSoft
The information that a trigger records could include the user that made a change, the type of change that is made, when the change is made, and so on.
2. PeopleSoft Integration Broker?
PeopleSoft Integration Broker facilitates synchronous and asynchronous messaging with other PeopleSoft applications and with third-party systems. PeopleSoft Integration Broker uses a variety of communication protocols, while managing message structure, message content, and transport disparities.
The two major components of PeopleSoft Integration Broker are the integration gateway and the integration engine. The integration gateway is a platform that manages the receipt and delivery of messages passed among systems through PeopleSoft Integration Broker. The integration engine is an application server process that routes messages to and from PeopleSoft applications as well as transforms the structure of messages and translates data according to specifications that you define.
3. Upgrade Assistant? To use PeopleSoft Upgrade Assistant, you run a process using an upgrade job and upgrade template. The upgrade job is a set of filtered steps that are specific to your upgrade and relevant only to the release, platform, and products you are using. For PeopleSoft supported upgrades, PeopleSoft provides predefined upgrade templates on Customer Connection. These templates comprise the steps necessary to complete an upgrade for a supported upgrade path. Depending on your configuration.
Editing Templates:
– Add steps: You may need to add custom steps to the template—for example, steps for dropping and adding indexes or running a backup of the target database.
– Edit steps: You can modify the delivered settings by changing the step properties. Setting these properties determines the conditions that apply when you run the upgrade process.
– Delete chapters, tasks, or steps.
– Rename chapters, tasks, or steps.
Creating Templates:
– Creating Custom templates: You can create a custom template for your upgrade. When you create a custom template, you also insert chapters, tasks, and steps. In addition, you specify step properties.
– Creating Chapters: You can add a chapter to a new or existing upgrade template. A chapter is a section heading for a group of tasks.
– Creating Tasks: You can add a task to a new or existing upgrade template. A task is a section heading for a group of steps.
– Creating Steps: You can add a step to a new or existing upgrade template. A step can be any process needed to perform the upgrade.
PeopleSoft Upgrade Assistant uses the PROCESSREQUEST component interface object to submit jobs to run on the PeopleSoft Process Scheduler server. You must configure your environment for PeopleSoft Upgrade Assistant to submit processes.
1. Define Workflow? Workflow enables to efficiently automate flow of time-consuming business processes and deliver the right information to the right people at the right time throughout enterprise. You can merge the activities of multiple users into flexible business processes to increase efficiency, cut costs, and keep up with rapidly changing customer and competitive challenges.
For example, when you order supplies, you are really initiating an approval process:
someone else reviews the order and either approves or denies it. If the order is approved, a purchase order is sent to the vendor. If it is denied, notification is sent back to the person who submitted the original order. The term workflow refers to this larger process.
2. Steps involved in Workflow?
1. Designing a Workflow Application
– Analyze and document business requirements.
– Diagram the process flow.
– Document the workflow object attributes for business processes, activities, steps, events, and email and worklist routings.
1. Build Supporting Definitions
2. Create Workflow Maps
– Create the workflow maps comprising the steps, activities, and business
processes required for your workflow as determined in step one.
1. Define Roles and Role Users
– Define the roles and the role users, including any Query roles, required for your
workflow.
1. Define Worklist Records
– Create a record definition that will be used to store all of the application-specific
information for the worklist.
1. Define the Workflow Objects
– This is the step in which you define the workflow application. You enter each of
the objects onto a business process definition in Application Designer as determined in step one.
1. Define Event Triggers
– Define the business rule in PeopleCode on the triggering application record
definition. Workflow programs are defined on a record definition for one of the tables that the component accesses. They contain the business rules used to decide whether to trigger the business event. The PeopleCode detects when a business rule has been triggered and determines the appropriate action.
1. Test
– Test your workflow, or use the workflow monitoring tools in Workflow
Administrator to validate worklist routing results.
Rules, Roles, Routings?
– Rules determine which activities are required to process your business data. For example, you might implement a rule that says department managers must approve all requests for external classes. You implement rules through workflow events, such as PeopleCode that evaluates a condition and triggers a notification (a routing) when appropriate.
– Roles describe how people fit into the workflow. A role is a class of users who perform the same type of work, such as clerks or managers. Your business rules typically specify which roles do which activities. For example, a rule can say that department managers (a role) must approve external course requests EX: User list roles, Query Roles.
– Routings specify where the information goes and what form it takes: email message or worklist entry. Routings make it possible to deploy applications throughout the enterprise. They work through the levels and departments of an enterprise to bring together all of the roles that are necessary to complete complex tasks.
1. Events, Workflow Peoplecode, Approvals Events are conditions that have associated routings. Define the condition in PeopleCode, which is attached to the record definition underlying a step in a step map. When a user saves the page, completing the step, the system runs the PeopleCode program to test the condition. If the condition is met, the system performs the routings.
To trigger a business event from a page, you add a PeopleCode program to the workflow event in the record definition for one of the tables to which the page writes.
Approval processes are a common form of business process. The approval steps that you place on the approval rule set map represent the approval levels that are required for the activity.
2. North American Payroll? Payroll for North America provides the tools to calculate earnings, taxes, and deductions efficiently, maintain balances, and report payroll data.
Payroll for North America supports the following business processes:
– Set Up and Maintain Core Payroll Tables: Core payroll tables are the tables that are required to implement the Payroll for North America application, including organization tables, compensation and earnings tables, deduction tables, pay calendar tables, garnishment tables, vendor tables, general ledger interface, tax tables, retroactive processing, and tip allocation.
– Set Up and Maintain Employee Pay Data: Employee pay data includes personal data, job data, benefits data, federal, state/provincial, and local tax information, general and benefit deductions, additional pay, garnishments, savings bonds, and direct deposits.
– Process the Payroll: basic steps of payroll processing are: create paysheets, pay calculation, pay confirmation, and generate checks and direct deposits. You can employ audit reports and data review pages to verify and correct the results of each step before moving on. You can also review and adjust employee balances.
– Post to General Ledger: Use the integration with PeopleSoft Enterprise General Ledger and EnterpriseOne General Ledger to transfer the expenses and liabilities incurred from a pay run to the General Ledger application.
– Pay Taxes: Use the integration with PeopleSoft Enterprise Payables to transmit tax data to the Payables application for automatic payment to tax authorities.
– Pay Third Parties: Use the integration with Payables to transmit employee and employer deductions such as garnishments and benefit deductions to the Payables application for automatic payment to third parties.
– Produce Reports Payroll for North America provides dozens of reports to help you monitor payroll processing and comply with regulatory and tax reporting requirements. You can view reports online or print hard copies. You can also tailor the reports to fit the special needs of your organization.
1. What are six steps of payroll processing?
– Setting up tables
– Setting up employees
– Paysheets
– Pay calculations
– Pay confirmation
– Print reports, checks, advices
1. Define the Deduction table?
– Defines how the deduction will be processed.
– Identifies tax classification (for example, before- or after-tax deductions).
– Indicates whether arrears are allowed.
– Indicates whether partial deductions are allowed.
– Indicates when the deduction is withheld.
1. The Four Steps to Defining Deductions
To define how you want the system to process a deduction, follow these four general steps:
1. Use the Deduction Table component (DEDUCTION_TABLE) to select a plan type, enter a deduction code, and specify the deduction processing rules, including the priority of the deduction, how the deduction affects taxes, related general ledger account codes, and other special payroll process indicators, such as how arrears should be handled.
2. Use the General Deduction Table component (GENL_DEDUCTION_TBL) to define the rules for the actual calculation of general deductions such as parking or union dues.
3. Use the Company General Deductions component (GDED_COM_TBL) to build a general deduction plan using the general deductions you have set up.
4. Use the Benefit Program Table component (BEN_PROG_DEFN) to define the rules for the actual calculation of benefit deductions such as health plans and dental plans.
General Questions on Payroll? Paysheets: Before you run payroll calculations, you must create paysheets. Paysheets contain the data required to calculate employee pay for each pay period.
Paycalculation: pay calculation is an iterative process. You can run and rerun calculations repeatedly until you’re confident that the payroll data is correct. Here are the basic steps:
1. Enter employee payroll information, create paysheets, and make updates and adjustments for the pay period.
2. Run the Pay Calculation COBOL SQL process (PSPPYRUN).
3. Review calculation results and check for errors. Check payroll error messages online or print the Payroll Error Messages report
(PAY011).
View the results of paycheck earnings, deductions, and taxes using the Paycheck pages and various standard reports that you can print to verify the results of the pay calculation.
4. Make adjustments on the paysheets.
5. Repeat these steps until you're confident that the payroll data is correct, and then confirm pay. Payconfirmation: After you verify that the payroll calculation is correct and you run the Pay Calculation COBOL SQL process (PSPPYRUN) in final mode, you can run the Pay Confirmation process. Pay confirmation is the final step in running your payroll. Running the Pay Confirmation process indicates that you’ve reviewed and approved all payroll information for this pay run, and that you’re ready to produce paychecks.
1. Time and Labor?
It facilitates the management, planning, reporting, and approving of time, and calendar and schedule creation and usage, from one global web-based application, With this application, you can:
– Schedule time.
– Report time.
– Administer time.
– Distribute time.
Time and Labor provides these business processes:
– Create schedules.
– Organize employee groups.
– Approve time.
– Track compensatory time off.
– Manage security.
– Manage reported time.
– Track task data.
– Forecast payable time.
– Manage exceptions.
– Track attendance.
– Process payable time.
– Create rules for processing time.
– Distribute and dilute labor costs.
Data mapping is the process of integrating data by a method of mapping data between two distinct data models. Data mapping also refers to consolidation of multiple databases into a single data base, thereby eliminating redundant data columns in the consolidation process.
Performing Data Conversion During PeopleSoft Upgrades
Derek Tomei (CEO) posted 2/28/2007 | Comments (0)
People that do not understand the complexity of an upgrade typically consider an upgrade to be nothing more than running some scripts. In fact, once the Upgrade Assistant came on the scene, the stereotype became more evident: if an upgrade is simply running scripts, then the Upgrade Assistant must allow you to just push a few buttons and the upgrade will automatically complete by itself!
Nothing could be further from the truth. An upgrade is a complex project that involves nearly every technical PeopleSoft area, along with involvement from functional team members, middle management, and executive management sponsorship. So, for such a complex project to be successfully executed, a methodology must be followed. PeopleSoft has devised such a methodology.
First and foremost, check for New Updates. Check the Customer Connection website again for any new Required For Upgrade patches, especially those related to data conversion. If any are found, apply them now, especially if this is the initial upgrade pass. However, for test/final move to production passes, be careful; it is normally not recommended to apply updates past the initial upgrade pass, as it requires further round(s) of testing.
Decide how data conversion will be executed. Decide which method will be used to execute the data conversion scripts:
1) Locally on client computer, using Upgrade Assistant (UA)
2) Configure Upgrade Assistant to execute data conversion on server
3) Create scripts to run data conversion, then execute directly on the server
Running data conversion locally thru Upgrade Assistant (UA) should only be used if (a) your PeopleSoft database is small & performance is top-notch; AND (b) you are unable to successfully configure UA to execute data conversion on the server.
Running data conversion thru UA, but configure UA to execute data conversion directly on the server is an option. The good side is that performance is better since data conversion is executed directly on the server; however, UA does not have a successful history of properly executing the scripts. Most times, it is quicker and you
have better control if you write the scripts yourself.
Running data conversion directly on the server by executing scripts you have created is an overall good option, especially if the client doesn't require you to use UA exclusively. This is the option presented here; therefore, ignore any tasks in the Upgrade Guide which specify how to configure data conversion to execute on the server through UA.
If executing data conversion App Engine programs on the server, the application server box must be properly configured as follows:
The application server domain must be properly configured and must be running The environment must be set to properly recognize execution of App Engine programs, such as PS_SERVER_DIR, etc (see PeopleSoft Installation Guide for current PeopleTools version) A good idea, although not a requirement, is to follow the Upgrade Guide instructions for building the component interface.
Perform pre-data conversion backup. Make sure that you perform a backup before running data conversion. That way, if unrecoverable errors occur during data conversion, the database can be reverted to this point in time.
Assuming you decided to write the data conversion scripts, then in most cases some of the data conversion scripts may be run concurrently. Refer to the Upgrade Guide for specific instructions as to what each data conversion program actually does, but typically the first one is run as a pre-requisite to the others; therefore, it cannot run concurrently with others. However, usually all the other data conversion programs, other than the last one, can be run concurrently. Like the first one, the last one typically must be run consecutively.
Keep in mind that during the initial pass, data conversion may fail. This could happen either because PeopleSoft delivered records has been customized by the customer or duplicate data in the tables cause an error with new/changed indexes.
Also, if the customer has non-PeopleSoft (i.e. "Bolt-On") records, you may want to assist them in creating an Application Engine library so the data conversion upgrade driver programs will pick them up as well. Using the Application Designer "Find In" option can help determine which Application Engine programs affect customized records. Typically, specific instructions on this can be found in the Upgrade Guide for your upgrade path.
If you would like to learn the step by step approach and methods that will teach you all about Performing a Successful Upgrade you can download the eBook A step-by-step guide to performing successful upgrades
Starting in version 8.9 for financials and 9.0 for HCM, Oracle changed the PeopleSoft Upgrade methodology to copy PeopleTools objects over in data mover scripts. There are a number of advantages to the new process. One potential impact, however, is that the target environment's objects will be overwritten.
You will lose your PeopleTools objects unless they exist in the source environment. Something that is difficult to do unless you can implement a code freeze or keep track of object changes. To compound this challenge, security, trees and queries are objects which end users frequently have the ability to change.
To preserve these object changes, I recommend the following approach:
Planning for the Upgrade
1. Strongly consider performing a PeopleTools upgrade first if possible. This makes moving
projects required in the steps below much easier.
During the Upgrade Project
1. Turn on logging for queries so you can track the frequency and prioritize queries. Work with
the user community to identify critical queries to address during the upgrade as part of scoping discussions. Agree to 'save' any queries that you cannot address in the timeframe of the upgrade in another environment.
2. Periodically compare and retrofit from production to upgraded development the non-query
related trees. Do not copy query trees from production to the upgraded database.
3. Assess security impacts through a fit/gap. Make changes to roles and permission lists in
upgraded copy of production. Avoid changing the names of roles during the upgrade. Don't forget to track changes to user preferences to reapply during the cutover.
During the Upgrade Cutover
1. Backup Production to another database prior to starting upgrade steps. This occurs after
production is shutdown but before you start the actual upgrade.
2. After the upgrade is complete, copy user security (but not roles and permission lists) from the
old copy to production.
3. As needed, use the copy of production to review the trees and queries to review / fix missing
queries. Tree issues should be minimal if you followed the steps above.
I have used this strategy a number of times and found it to be very successful, especially when locking down production change may be a challenge for some organizations.
Of course, there are nuances and complexities that simply cannot be covered in a short blog. Some of these steps will also require some expertise to plan and execute. Consult with your PeopleSoft expert resources or implementation partner for more detailed information or assistance.
David is a Partner and VP at Knowledge Systems, a consulting firm specializing in enterprise application services and solutions.
Much more than documents.
Discover everything Scribd has to offer, including books and audiobooks from major publishers.Cancel anytime.
|
https://www.scribd.com/document/106876007/21538021-PeopleSoft-Interveiw-Questions
|
CC-MAIN-2020-10
|
refinedweb
| 11,801
| 57.77
|
Lab 9: Review Exercise¶
It is time for a review exercise that should help you settle into the swing of writing simple programs that solve some kind of problem. Let’s consider comparing the gas cost for two possible vehicles you might consider buying, a Toyota Corolla, or a Toyota Prius.
The review project¶
As a review of the material we have covered so far in the course, I want you to study the program presented below and identify where in that program the concepts we have studied are demonstrated. The list that follows was taken from the lecture notes, so you should be familiar with them. Here is the list of concepts we have studied (in no particular order):
- Defining the problem we want to solve
- Decomposing the problem into manageable chinks
- Input-process-output (IPO) design process
- Basic program layout (standard includes, the
namespaceline, main function)
- Simple input and output statements
- Basic data types we can use
- Standard structures (sequence, decision, loop)
- Basic structured statements (assignment, if-then-else, for loop, while loop)
- Program style
- Variable declarations (naming conventions)
- Constants
- Scope of variables (where can you reference a variable name in your program)
- Using standard C++ functions like sin()
- How math works (difference in math for integers and floating point numbers)
- Operator precedence (multiply and divide happened before add or subtract)
- Assignment statements
- Functions (void and value returning) setup and use
- Function parameters
- Logical expressions (operators, AND, OR operations)
- Adding up a bunch of numbers in a loop
- Counted loops
Your job¶
This assignment will be pretty simple. Fire up a editing program of some sort (Word will do, so will the editor in CLion) and write up a summary of the concepts you see demonstrated on each line in the program (except for those with just a closing curly bracket, or a blank line). If several lines in a row are similar (like they all declare variables) you may group those lines together in your writeup. Just indicate all the lines that your writeup covers.
In your writeup, indicate (briefly) what the line (or group of lines) is doing. It may be defining a variable of a certain type, checking the value to see if it is too big, whatever makes sense. The goal here is to make sure you understand exactly what the program is doing and why each statement is there.
Create a paragraph per line (or group of lines) to make it readable. You can number the paragraphs using the line numbers you will see in the listing below. Put a blank line between each paragraph.
Think about what you see in this program. You should have used all the concepts in the above list somewhere in your writeup, see if you can make that happen.
Modify the Code¶
The program shown makes an assumption that gas will rise in price smoothly
(linearly) over the number of years we define. Since that may not be realistic,
let’s try to modify the
gasPrice function so it works this way:
Gas will rise in price from the defined initial value to the defined final value over the course of 4 years, then remain fixed at that higher value for the following years.
To make this change, you will need to use another if-then-else statement to see if the day number is after 4 years. If not, we use a formula similar to the one shown currently in the listing. If so, we return the higher price.
Make changes to the function and rerun the analysis to figure out how expensive it will be to drive the Prius and the Corolla for the eight years we looked at in the lecture. Will the Prius be a better buy in this case, or a worse buy?
Here is the program listing, with line numbers for easy reference:
Add Another Function¶
Go online and get the current purchase price for each car. Then set up a function that calculates the total cost of owning this car over that eight year period. (Obviously, we are ignoring maintenance costs here). This function has the following form:
double cost_per_mile(double purchase, double fuel, double miles);
All you need to do in this function is calculate the cost per mile driven for a vehicle. Use this function to add output lines in your main function showing which vehicle would be a smarter choice. (Of course, this assumes that this is your only concern in buying a car. If you hate fossil fuel, well, your choice is your own. YMMV!)
|
http://www.co-pylit.org/courses/cosc1315/assignments/lab9/index.html
|
CC-MAIN-2018-17
|
refinedweb
| 759
| 63.02
|
Anatomy, process the request in MVC style:
protected void Page_Load(object sender, EventArgs e)
{
HttpContext.Current.RewritePath(Request.ApplicationPath);
IHttpHandler httpHandler = new MvcHttpHandler();
httpHandler.ProcessRequest(HttpContext.Current);
}
3. Add a Global Application Class (global.asax), and in the Application_Start method, map the route to the home controller.
protected void Application_Start(object sender, EventArgs e)
{
RouteTable.Routes.MapRoute("Default Route",
"{controller}/{action}",
new { controller = "Default", action="Index" });
}
4. In order to use the MapRoute and IgnoreRoute methods, you should add a using directive to use the namespace System.Web.Mvc, since those are extension methods. The MapRoute method takes a name of a route as the first parameter, a URI template as the second, and default values as the third. Notice that the default values object should have properties that correspond the names of the properties in the URI template. The route above maps an incoming Url to a combination of a controller and an action.
5. Create a default controller. Add a class to the web application under a Controllers folder and name it DefaultController. Notice the naming convention for it: Default comes from the route default value and the controller is just a suffix in the convention.
This class should inherit from System.Web.Mvc.Controller class, and should contain a public method with a name the corresponds to an action. Since the default action is Index (taken from the default route), then the class should look like this:
public class DefaultController : Controller
{
public string Index()
{
return "Hello, world";
}
}
6. Run the application, and navigate to the application directory ( “/” ), what you should get is the response “hello, world”.
But, If you try to navigate to the Index action in the Default controller (/Default/Index), you will get an error.
7. Add the Url Routing Module. Open the web.config, and locate the <httpModules> tab in the system.web section. There, register the Url Routing Module:
<httpModules>
...
<add name="UrlRoutingModule"
type="System.Web.Routing.UrlRoutingModule, System.Web.Routing,
Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
</httpModules>
8. Run the application and navigate to the Index action in the Default Controller. Now, you should get the same response as earlier.
9. Return a view as the result of the Index action. Change the return value of the Index method in the default controller to be of type ActionResult. There are several types of results we can return (such as JosnResult, ContentResult etc.) but in this sample we will return a ViewResult, by calling the View method.
public ActionResult Index()
{
return View();
}
Create a view that corresponds to this action. When called parameterless, the View method will look for a view whose name is equals to the name of the action, inside a folder the corresponds to the name of the controller. Create a new ASP.Net Form called Index.aspx inside Views\Default\ folder.
In order to make this an MVC View, open the code behind file (Index.aspx.cs) and change the class to inherit from System.Web.Mvc.ViewPage.
Edit the page (in design mode or source mode) and add a greeting message:
<body>
<form id="form1" runat="server">
<div>
<h1>Hello, world</h1>
</div>
</form>
</body>
10. Run the application and you should receive the response from the View we’ve just created. The routing engine called the Index action in the Default controller, which returned the Index.aspx view.
11. Display data in the View. Open the controller, and in the Index method, add data to the ViewData dictionary:
public ActionResult Index()
{
ViewData["name"] = "Guy";
return View();
}
Now, in the view markup, use that data in the greeting line:
<body>
<form id="form1" runat="server">
<div>
<h1>Hello, <%= ViewData["name"] %></h1>
</div>
</form>
</body>
12. Run the application and receive a greeting message bounded to the data that the controller has added to the dictionary.
In this post I build an ASP.Net MVC application from scratch in order to understand the anatomy of an ASP.Net MVC application and the magic behind this framework. This understanding can help me with adding MVC capabilities to my existing web applications as well.
Enjoy!
Great post, saved me hours of work! Many thanks.
A minor tweak – ditch the server form in your view… it doesn't make sense with mvc.
plz explain anatomy of asp .net
|
http://blogs.microsoft.co.il/bursteg/2008/12/21/anatomy-of-an-aspnet-mvc-application/
|
CC-MAIN-2016-30
|
refinedweb
| 719
| 58.28
|
It turns out that I may be too rigid at times with my types in Dart. That's just insane. Me. An old Perler, an ancient Rubyist, and old-timey JavaScripter, someone who fled Java because of the type craziness — using types too darn often. What a world.
When I first explored protection proxy patterns I almost headed down the path of generic proxies, but decided against it because... types. The particular example that I was using to explore protection proxies probably influence me to a fair extent. I was using a
Driverto determine if a particular driver instance was old enough to legally start a
Car. I opted against a generic proxy class (probably rightly) since a proxy class protecting against drivers was certain to be an automobile of some kind. Following from there, if the proxy class always worked with automobiles, it might as well implement the
Automobileinterface.
That suited me just fine because it kept me on the happy Gang of Four path. All of my proxy pattern explorations followed the same patterns as in the original book: a subject (the interface), a real subject (e.g. a
Car) and a proxy (e.g.
RemoteCar). But in Gilad Bracha's The Dart Programming Language, it struck me reading that, not only was it OK to use a generic proxy, but that "being able to define transparent proxies for any sort of object is an important property."
So, I go back to my protection proxy example to see how it will work with a proxy that will work with any kind of object. In addition the object serving as the real subject of my protection proxy, I also need a driver instance through which access will be allowed or denied:
class ProxyProtect { final Driver _driver; final _realSubject; ProxyProtect(this._driver, this._realSubject); // ... }As a quick aside, I must point out that I really enjoy Gilad's book. It is wonderful getting insights into the language from one of the primary designers. The little notes about things like variables almost almost always being used in a
finalway are wonderful. I will make a concerted effort to use
finalin most of my real code. I likely won't use it teaching for the same reason that it is not the default in the language—it is not expected by most developers. Anyhow...
I have my
ProxyProtextconstructor, now I need the calls to the real subject. I continue to use
noSuchMethod()for this:
import 'dart:mirrors' show reflect; @proxy class ProxyProtect { // ... dynamic noSuchMethod(i) { if (_driver.age <= 16) throw new IllegalDriverException(_driver, "too young"); return reflect(_realSubject).delegate(i); } }If no other methods are defined, then Dart will invoke
noSuchMethod()with information about the method being invoked. With that, no matter what method is invoked, I first check the driver. If the driver is too young, an exception is thrown and nothing else occurs. The real subject is protected against illegal actions. If the driver is of age, then it is time for mirrors—the kind that allow dynamic calls and inspection. In this case, I get a mirror of the real subject with
reflect(), then delegate whatever was invoked to the real subject with
delegate().
Easy peasy. Now I have a
ProxyProtect for any object that might have a driver: a car, an R/C toy, a train, as spaceship, etc. If I create a car and an of-age driver in client code, I can drive the car:
When run, this results in:When run, this results in:
var _car = new Car(), _driver = new Driver(25); print("* $_driver here:"); var car = new ProxyProtect(_driver, _car); car.drive();
$ ./bin/drive.dart * 25 year-old driver here: Car has been driven!If an underage drive attempts to pilot the vehicle:
Then this results in:Then this results in:
var _car = new Car(), _driver = new Driver(16); print("* $_driver here:"); var car = new ProxyProtect(_driver, _car); car.drive();
$ ./bin/drive.dart * 16 year-old driver here: Unhandled exception: IllegalDriverException: 16 year-old driver is too young!As I found out last night, I am not quite done here. When I run the code through the Dart static type analyzer, I find that my protection proxy does not seem to fit the correct types. Specifically, a
drive()method is being invoked when one is not declared and the class does not explicitly specify an interface that it is implementing:
[hint] The method 'drive' is not defined for the class 'ProxyProtect' (/home/chris/repos/design-patterns-in-dart/proxy/bin/drive.dart, line 19, col 7)To address this, I use the built-in
@proxyannotation, which tells the analyzer to give my proxy class a pass:
@proxy class ProxyProtect { // ... }With that, I have a working protection proxy for any sort of object... and it passes static type analysis. Nothing too surprising here, though I do have figure out how and if to work this into the discussion of the pattern in Design Patterns in Dart. For now, I think I may be done with my exploration of the proxy pattern—it was a fun one! Play with the code on DartPad:. Day #74
|
https://japhr.blogspot.com/2016/01/is-it-real-proxy-pattern-without-types.html
|
CC-MAIN-2018-22
|
refinedweb
| 859
| 61.36
|
Message-ID: <1339386020.298024.1368886519290.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_298023_1223635556.1368886519290" ------=_Part_298023_1223635556.1368886519290 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
Boo is a statically typed language.
Static typing is about the ability to type check a program for type corr= ectness.
Static typing is about being able to deliver better runtime performance.=
Static typing is not about putting the burden of declaring types on the = programmer as most mainstream languages do.
The mechanism that frees the programmer from having to babysit the compi= ler is called type inference.
Type inference means you don't have to worry about declaring types every= where just to make the compiler happy. Type inference means you can be prod= uctive without giving up the safety net of the type system nor sacrificing = performance.
Boo's type inference kicks in for newly declared variables and fields, p= roperties, arrays, for statement variables, overriden methods, method retur= n types and generators.
Assignments can be used to introduce new variables in the current scope.= The type for the new variable will be inferred from the expression on the = right.
s1 =3D "= foo" # declare new variable s1 s2 =3D s1.Replace("f", "b") # s1 is a string so Replace is cool=09=09
Only the first assignment to a variable is taken into account by the typ=
e inference mechanism.
The following program is illegal:
s =3D "I= 'm a string" # s is bound with type string s =3D 42 # and although 42 is a really cool number s can only hold strings<= /pre> =09=09
class Cu= stomer: _name =3D ""=09=09
Declare the new field _name and initialize it with an empty string. The = type of the field will be string.
When a property does not declare a type it is inferred from its getter.<= /p>
class Bi= gBrain: =C2=A0 =C2=A0 =C2=A0Answer: =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 get: return 42=09=09
In this case the type of the Answer property will be inferred as int.
The type of an array is inferred as the least generic type that could safely hold all of its enclosing eleme= nts.
a =3D (1= , 2) # a is of type (int) b =3D (1L, 2) # b is of type (long) c =3D ("foo", 2) # c is of type (object)=09=09
names = =3D (" John ", " Eric", " Graham", "TerryG ", " TerryJ", " Michael") for name in names: # name is typed string since we are iterating a string array print name.Trim() # Trim is cool, name is a string=09=09
This works even when with unpacking:
a =3D ( = (1, 2), (3, 4) ) for i, j in a: print i+j # + is cool since i and j are typed int=09=09
When overriding a method, it is not necessary to declare its return type= since it can be safely inferred from its super method.
class Cu= stomer: override def ToString(): pass=09=09
The return type of a method will the = most generic type among the types of the expressions used in return sta= tements.
def spam= (): return "spam!" print spam()*3 # multiply operator is cool since spam() is inferred to return a string # and strings can be multiplied by integer values=09=09
def ltua= e(returnString as bool): return "42" if returnString return 42 # ltuae is inferred to return object print ltuae(false)*3 # ERROR! don't know the meaning of the * operator =09=09
When a method does not declare a return type and includes no return stat= ements it will be typed System.Void.
g =3D i*= 2 for i in range(3) # g is inferred to generate ints for i in g: print i*2 # * operator is cool since i is inferred to be int # works with arrays too a =3D array(g) # a is inferred to be (int) since g delivers ints print a[0] + a[-1] # int sum=09=09
When implementing interfaces it's important to explicitliy declare the s= ignature of a method, property or event. The compiler will look only for ex= act matches.
In the example below the class will be considered abstract since it does= not provide an implementation with the correct signature:
namespac= e AllThroughTheDay interface IMeMineIMeMineIMeMine: =09def AllThroughTheNight(iMeMine, iMeMine2, iMeMine3 as int) class EvenThoseTears(IMeMineIMeMineIMeMine): =09def AllThroughTheNight(iMeMine, iMeMine2, iMeMine3): =09=09pass e =3D EvenThoseTears()=09=09
Let's say when.
abstract= def Method(param /* as object */, i as int) as string: pass def fat([required(value >=3D 0)] value as int) as int: return 1 if value < 2 return value*fat(value-1) for i as int in [1, 2, 3]: # list is not typed print i*2=09=09
def foo(= ) as object: # I want the return type to be object not string # a common scenario is interface implementation return "a string" if bar: a =3D 3 # a will be typed int else: a =3D "42" # uh, oh=09=09
f as Sys= tem.IDisposable =3D foo() f.Dispose()=09=09
def Crea= teInstance(progid): =C2=A0=C2=A0=C2=A0 type =3D System.Type.GetTypeFromProgID(progid) =C2=A0=C2=A0=C2=A0 return type()=09=09
ie as du= ck =3D CreateInstance("InternetExplorer.Application") ie.Visible =3D true ie.Navigate("")=09=09
|
http://docs.codehaus.org/exportword?pageId=11502
|
CC-MAIN-2013-20
|
refinedweb
| 897
| 55.88
|
.
38A04520
Copyright JP Management Consulting (Asia-Pacific) Ltd All rights are reserved. This document or any part thereof may not be copied or reproduced without permission in writing from JP Management Consulting (AsiaPacific) Ltd
38A04520 i
Preface At the request of Sino-Forest Corporation (Sino-Forest), Jaakko Pyry Management Consulting (Asia Pacific) Ltd, (Jaakko Pyry Consulting ) has prepared this report, which contains the opinion of Jaakko Pyry Consulting as to value of the existing plantation forest assets of Sino-Forest in China as well as a prospective valuation of the proposed forest development plans. This report is issued by JP Management Consulting (Asia-Pacific) Ltd (Jaakko Pyry Consulting) to Sino-Forest for its own use. This report presents an independent valuation, as at 31 December 2004 of SinoForests forest assets in Southern China. Forest valuations are prepared for various purposes, and this may influence the valuation method used. Jaakko Pyry Consulting has prepared this valuation for asset reporting purposes. JP MANAGEMENT CONSULTING (ASIA-PACIFIC) LTD
Contact Andy Fyfe Level 5, HSBC Building 1 Queen Street P.O.Box 105891 Auckland City Tel. (09) 918 1100 Fax (09) 918 1105 E- mail: andy.fyfe@poyry.co.nz
38A04520 ii
CERTIFICATION Jaakko Pyry Consulting certifies to the following statements to the best of our knowledge and belief: The statements of fact contained in this report are true and correct. The reported analyses, opinions, and conclusions are limited only by the reported assumptions and limiting conditions, and are our personal, impartial, and unbiased professional analyses, opinions, and conclusions. Jaakko Pyry Consulting has no present or prospective interest in the subject property, and no personal interest or bias with respect to the parties involved. Jaakko Pyry Consultings compensation for completing this assignment is not contingent upon: 1. the development or reporting of a predetermined value or direction in value that favours the cause of the client, 2. the amount of the value opinion, 3. the attainment of a stipulated result, or 4. the occurrence of a subsequent event directly related to the intended use of this appraisal. A high- level inspection of the forest resource was made between 23 June and 25 July 2003 by a team of consultants from Jaakko Pyry Consulting. A qualitative inspection was made of a sample of recent acquisition areas in Heyuan City over the period 10 to 17 December 2004 The report has been prepared by staff consultants, retained consultants and office support personnel of Jaakko Pyry Consulting.
The Jaakko Pyry Group is a client and technology-oriented, globally operating consulting and engineering firm with offices in 35 countries. It has three core areas of expertise: forest industry, energy, and infrastructure and environment. Group companies employ 4600 people. Jaakko Pyry Group Oyj is listed on the Helsinki Stock Exchange. The Forest Industry Consulting business group provides its clients advice in business strategy, processes and operations designed to enhance stakeholder value. The business group's expertise covers the complete supply chain, from raw materials to technology, markets and financing. Consulting and advisory services are provided in three main practice areas: Management Consulting Investment Banking Operations Management
38A04520 iii
Jaakko Pyry Consulting is an independent management consulting company within the Jaakko Pyry Group and is recognised as one of the world's leading advisors to the global forest industry cluster. The cornerstones of its operations are its strong business understanding and industry expertise. The business group's global network of around 300 experts covers all major forest products regions in the world.
38A04520 iv
ASSUMPTIONS AND LIMITING CONDITIONS This report was prepared at the request of and for the exclusive use of the client, Sino-Forest Corporation. This report may not be used for any purpose other than the purpose for which it was prepared. Its use is restricted to consideration of its entire contents. This valuation represents an update of Jaakko Pyry Consultings June 2004 forest valuation that was incorporated in Report 54A03229 Review of Sino-Wood Partners Limited and Sino-Panel Holdings Limited and must be read in conjunction with that report. Details concerning the location and basic physical characteristics of the subject property were taken from data provided by Sino-Forest. Jaakko Pyry Consulting has taken legal descriptions from sources thought to be authoritative, but neither assumes nor suggests responsibility for either. Maps, diagrams and pictures presented in this report are i tended merely to assist the n reader. Jaakko Pyry Consulting has undertaken a limited visual inspection of the forest resource from the ground in June/July 2003. Previous limited inspections have been associated with valuations carried out by Jaakko Pyry Consulting in 2000 and 2001. Within this valuation exercise Jaakko Pyry Consulting has confined its December 2004 field inspections to the recent forest acquisitions made in the Heyuan City region. This appraisal assumes that the sites visited represent the full range of conditions present. The forest inspection process has been limited to a high- level review. Legal matters are beyond the scope of this report, and any existing liens and encumbrances have been disregarded, and the forest resource has been appraised as though free and clear under responsible ownership and competent management. Unless otherwise stated in this report, the existence of hazardous materials or other adverse environmental conditions, which may or may not be present on the property, were neither called to the attention of Jaakko Pyry Consulting, nor did the consultants become aware of such during the inspection. Jaakko Pyry Consulting recognizes the possibility that any valuation can eventually become the subject of audit or court testimony. If such audit or testimony becomes necessary as a result of this valuation, it will be a new assignment subject to fees then in effect. Jaakko Pyry Consulting has no responsibility to update this report for events and circumstances occurring after the date of this report. Any liability on the part of Jaakko Pyry Consulting is limited to the amount of fee actually collected for work conducted by Jaakko Pyry Consulting. Nothing in the report is, or should be relied upon as, a promise by Jaakko Pyry Consulting as to the future growth, yields, costs or returns of the fo rests. Actual results may be different from the opinion contained in this report, as anticipated events may not occur as expected and the variation may be significant.
38A04520 v
SUMMARY Valuation Jaakko Pyry Consulting has assessed the value of the forest assets owned by SinoForest as at 31 December 2004 to be USD565.6 million. The value applies to the existing planted area. The value has been derived using a discounted cash flow methodology in which a 12% discount rate has been applied to real, pre-tax cash flows. Jaakko Pyry Consulting has also prepared a forest valuation that recognises the revenues and costs of re-establishing and maintaining the plantation forests for a 50 year period. This alternative is referred to as a perpetual valuation. Sino-Forest has an option to lease the land under the purchased trees for future rotations. The terms of the lease are yet to be finalised but the arrangement will enable Sino-Forest to expand its forest estate in Heyuan City by 200 000 ha. The derived value for the perpetual option as at 31 December 2004 is USD773.8 million. The same discount rate is applied. The following table presents the results of the valuation of the Sino-Forest estate. The sensitivity to discount rate is demonstrated.Table 1: USD Valuation of Sino-Forest Forest Assets as at 31 December 2004Real Discount Rate Applied to Pre-tax Cash Flows Forest Scenario 11% 585.263 845.804 12% USD million Existing forest, current rotation only Existing forest, all rotations and 200 000 ha expansion in Heyuan City 565.598 773.832 546.929 712.858 13% to December 2004. Inclusion of recent forest acquisitions (some 95 000 ha).
38A04520 vi
Revised Discount Rate A valuation based on a discounted cash flow approach requires the identification of an appropriate discount rate. In selecting the rates there are two broad approaches: Deriving the discount rate from first principles. A common expression of this approach turns first to the Weighted Average Cost of Capital (WACC). This recognises the costs of both debt and equity. The cost of equity may be derived using the Capital Asset Pricing Model (CAPM) method. A second approach is to derive implied discount rates from transaction evidence.
WACC Analysis 3.Table 3: Estimate of Post-tax WACC by MarsdenLower bound 5.1% Average estimate 6.7% Upper bound 8.35
In the firest instance, the conventional formulation of WACC generates a rate for application to post-tax cash flows, which includes the cost of debt. Dr. Marsden has applied a transformation to his initial results and this process has produced an average WACC for application to real pre-tax cash flows of 10% (Table 4).Table 4: Estimate of Real Pre-tax WACCLower bound 7.6% Average estimate 10.0% Upper bound 12.4
Implied Discount Rates In common with other valuers of southern hemisphere planted forests, Jaakko Pyry Consulting maintains a register of significant forest transactions. TheCopyright JP Management Consulting (Asia-Pacific) Ltd
38A04520 vii
available evidence is then analysed in order to derive the discount rate implied by each transaction. The process involves preparing a credible representation of the forests future potential cash flows and then relating these to the actual transaction price. From this type of exercise conducted in Australia and New Zealand, Jaakko Pyry Consulting has observed derived discount rates for recent transactions to generally fall within the range of 8-10%. These are real rates, applied to post-tax cash flows. As yet Jaakko Pyry Consulting has little implied discount rate data for the Southern China region. As the commercial plantation forest industry develops and more forests change hands, empirical evidence from which to derive implied discount rates will arise. The capacity to utilise implied discount rates in this valuation is limited to considering how the forest investment in China compares with such investment in other locations. Commercial forestry in Southern China is still developing and faces some challenges, these include: The reliability of forest descriptions The accuracy of yield prediction Achieving high growth in a consistent manner
It is Jaakko Pyry Consultings opinion that for many forest investors, investing in plantation forestry in China would be considered a riskier proposition than, for instance, investing in the industry in Australia or New Zealand. Incorporating Risk in the Discount Rate If forest investment in China is at present perceived to be a more risky proposition than like activity in other international locations, the issue then becomes how to quantify this difference. The textbook treatments of the subject suggest that the discount rate should be regarded as a simple catch-all for any and all forms of perceived risk. It may be a very blunt instrument in such a role and it is therefore preferable to attempt to acknowledge risk in the development of the cash flows to which the discount rate is applied. However, building risk- inclusive cash flows is itself less than straightforward. This is especially the case in emerging investment environments where the empirical evidence with which to model risks is not readily accessible. A propensity to load the discount rate remains.Copyright JP Management Consulting (Asia-Pacific) Ltd
38A04520 viii there be not just willing buyers, but also willing sellers. If the only purchase offers to be extended involved very high discount rates we would expect that forests would not be willingly sold. Southern China. A discount rate of 12% has been selected and applied to pre-tax cash flows. This differs from that in Jaakko Pyry Consultings November 2003 Valuation (Report #54A01987) where a discount rate of 13% was applied. It is Jaakko Pyry Consultings perception that with a carefully timed and managed sale, other buyers could be attracted who would be willing to accept a similar pre-tax discount rate. The derivation of the discount rate for the Sino-Forest resource will certainly warrant ongoing attention in future valuations. Revised Log Prices Sino-Forest generally sells the plantations on a standing basis and therefore do not sell logs direct to the market. However current forecast mill gate log prices have been assumed for the purpose of the plantation cash flow forecasts. These are presented below in Table 5.Table 5:. Species such as pine and eucalyptus with a Small EndCopyright JP Management Consulting (Asia-Pacific) Ltd
38A04520 ix
Diameter (SED) of less than 8 cm logs now sell for as much as RMB380/m3 and the SED 8 to 14 cm logs achieve. Change in Area through Forest Acquisitions Subsequent to Jaakko Pyry Consultings November 2003 valuation (Report #54A03229) Sino-Forest has increased the area of its estate by a further 95 057.6 ha of standing timber. The acquisition of new forest area has been confined to Guangdong, Guangxi and Jiangxi provinces. The majority of the recently purchased area associated with Heyuan City (44 848 ha) and Jiangxi province (31 299 ha) (Table 6).Table 6: 95 057.6
The existing resource is concentrated in Guangdong Province, followed by Jiangxi and Guangxi Provinces.
38A04520 x
Contents
Preface Summary CERTIFICATION ....................................................................................................................... II ASSUMPTIONS AND LIMITING CONDITIONS.................................................................IV 1 2 2.1 2.2 3 3.1 3.1.1 3.1.2 3.1.3 4 4.1 4.2 4.3 4.4 4.4.1 4.4.2 4.4.3 4.5 5 5.1 5.2 5.3 5.3.1 5.3.2 5.4 5.5 5.6 5.7 5.8 INTRODUCTION ......................................................................................................... 1 SCOPE AND PURPOSE.............................................................................................. 2 Matters Outside the Scope of the Valuation Update ....................................................... 2 Purpose............................................................................................................................ 2 VALUATION METHODOLOGY .............................................................................. 4 Outline ............................................................................................................................. 4 Realisation Value of Current Timber Content ................................................................ 4 Analysis of Transaction Evidence ................................................................................... 4 Discounted Cash Flow Techniques................................................................................. 5 FOREST DESCRIPTION ............................................................................................ 9 Resource Location........................................................................................................... 9 Forest Area ...................................................................................................................... 9 Species........................................................................................................................... 11 Plantation Growth and Yield......................................................................................... 12 Tree Volume Calculations ............................................................................................. 13 Yield Table Formulation............................................................................................... 13 Future Yield Development ............................................................................................ 15 Plantation Risks............................................................................................................. 16 COSTS .......................................................................................................................... 18 Operational and Production Costs................................................................................. 18 Establishment Costs ...................................................................................................... 18 Costs of Production....................................................................................................... 20 Harvesting Costs ........................................................................................................... 20 Transport Costs ............................................................................................................. 20 Taxes at Harvest............................................................................................................ 20 Overhead Costs ............................................................................................................. 21 CJVs .............................................................................................................................. 21 Land Rental ................................................................................................................... 22 Log Traders Margin ...................................................................................................... 22
38A04520 xi
5.9 6 6.1 6.1.1 6.1.2 6.1.3 6.1.4 6.1.5 6.2 7 7.1 7.2 7.3 7.4 7.5 7.6 8 8.1 8.2 8.3 8.4 8.5 9 9.1 9.2 9.3 9.4 10 10.1 11 11.1 12
Exchange Rate ............................................................................................................... 22 LOG PRICE OUTLOOK ........................................................................................... 23 Fibre Supply and Demand............................................................................................. 23 Sawlog Supply and Demand ......................................................................................... 24 Furniture Industry.......................................................................................................... 25 Construction Industry.................................................................................................... 26 Wood-Based Panel Supply and Demand ...................................................................... 26 Wood Chip Demand for Pulp Production..................................................................... 27 Valuation Log Prices ..................................................................................................... 28 FOREST ESTATE MODEL...................................................................................... 29 Overview ....................................................................................................................... 29 Observed Practice in Wood Flow Modelling ................................................................ 32 Modelling Supply and Demand ..................................................................................... 33 Croptype Allocation...................................................................................................... 33 Model Constraints ......................................................................................................... 34 Wood Flow and Allocation Model Results................................................................... 35 DISCOUNTED CASH FLOW VALUATION ......................................................... 37 Overview ....................................................................................................................... 37 Treatme nt of Taxation................................................................................................... 38 Scope of the Analysis .................................................................................................... 38 Timing of Cash Flows................................................................................................... 39 Date of Valuation.......................................................................................................... 39 DISCOUNT RATE...................................................................................................... 40 Discount Rate Derived from WACC/CAPM ................................................................ 40 Implied Discount Rates ................................................................................................. 40 Incorporating Risk in the Discount Rate ....................................................................... 41 The Discount Rate Applied in Valuing the Sino-Forest Resource................................ 41 DCF VALUATION RESULTS .................................................................................. 43 Merchantable Volume ................................................................................................... 43 RISKS AND SENSITIVITY ANALYSIS ................................................................. 45 Sensitivity Analysis....................................................................................................... 45 VALUE CHANGE ...................................................................................................... 46
Schematic Outline of the Valuation Process.......................................................... 7 Provincial Area Age-class Distribution................................................................ 10 Area Age-class Distribution................................................................................. 11 Species by Age-class ............................................................................................ 12
38A04520 xii
Figure 4-4: Figure 4-5: Figure 4-6: Figure 6-1: Figure 6-2: Figure 6-3: Figure 6-4: Figure 6-5: Figure 7-1: Figure 7-2: Figure 7-3: Figure 7-4: Figure 7-5: Figure 7-6: Figure 7-7: Figure 7-8: Figure 7-9: Figure 8-1:
Eucalyptus Growth Curve .................................................................................... 14 Pine Growth Curve ............................................................................................... 14 Chinese Fir Growth Curve ................................................................................... 15 China Total Fibre Imports................................................................................. 23 China - Sawlog Total Production and Consumption............................................ 24 China - Sawlog and Peeler Log Production......................................................... 25 China Furniture Production and Export ............................................................ 26 China - Wood Based Panel Production................................................................ 27 Example Forest Estate Age-class Distribution..................................................... 29 Example Forest Estate Yield Table ...................................................................... 30 Example Harvest at Fixed Rotation Age.............................................................. 30 Example Smoothed Forest Harvest ...................................................................... 31 Example Non-Declining Yield Profile ................................................................. 32 Schematic Illustration of the Forest Estate Model ............................................... 33 Wood Flow by Species......................................................................................... 35 Wood Flow by Log Type ..................................................................................... 36 Wood Flow by Location....................................................................................... 36 Schematic Illustration of the Forest Valuation Process ....................................... 37
Tables Table 4-1: Table 5-1: Table 5-2: Table 5-3: Table 5-4: Table 5-5: Table 5-6: Table 5-7: Table 5-8: Table 6-1: Table 7-1: Table 9-1: Table 9-2: Table 10-1: Table 10-2: Table 11-1: Table 11-2: Table 11-3: Table 12-1:
Summary of the Existing Sino-Forest Plantation Forest Area ............................. 10 Operation Costs for Eucalyptus Planted Rotation (USD/ha) ............................... 18 Operation Costs for Eucalyptus Coppice Rotation (USD/ha) .............................. 19 Total Operation Costs for Planted Crop and First Coppice ................................. 19 Operation Costs for Eucalyptus Planted Rotation (USD/ha) ............................... 19 Harvesting Costs by Province .............................................................................. 20 Transport Costs by Province ................................................................................ 20 Taxes at Harvest (Average all Species)................................................................ 21 Planted Area by CJV Company ........................................................................... 21 Pulpwood and Sawlog Forecast Prices, 2003 2008........................................... 28 Clearfell Age Restrictions .................................................................................... 34 Estimate of Post-tax WACC by Marsden ............................................................ 40 Estimate of Real Pre-tax WACC by Marsden...................................................... 40 USD Valuation as at 31 December 2004 ............................................................. 43 Merchantable Standing Volume as at 31 December 2004 ................................... 44 USD Current Rotation Valuation Only Log Price Sensitivity .......................... 45 USD Current Rotation Valuation Only Overhead Cost Sensitivity.................. 45 USD Current Rotation Valuation Only Harvest Cost Sensitivity ..................... 45 Components of Value Change USD millions ................................................... 46
Appendices Appendix 1: Heyuan City Field Visit and Site Inspection Appendix 2: Market Overview Appendix 3: WACC Analysis
38A04520 1
INTRODUCTION JP Management Consulting (Asia-Pacific) Ltd (Jaakko Pyry Consulting ) has been requested by Sino-Forest Corporation (Sino-Forest) to prepare a valuation of the existing and prospective forest assets of Sino-Forest in Southern China. Jaakko Pyry Consulting has previously conducted forest valuations on specific areas within the forest estate in 2000, 2001 and 2003. This valuation represents an update of Jaakko Pyry Consultings June 2004 forest valuation that was incorporated in Report 54A03229 Review of Sino-Wood Partners Limited and Sino-Panel Holdings Limited and must be read in conjunction with that report. The data for this valuation has been provided by Sino-Forest and its associated CJV companies. Within this valuation exercise Jaakko Pyry Consulting has confined its field visits to the Heyuan City region as this is where the largest area of recent land acquisitions has been made and forest establishment activities initiated. A field inspection report for Heyuan is presented in Appendix 1. It is Jaakko Pyry Consultings intention to visit other regions in a process of rolling inspections so that all of Sino-Forestss operations are visited within the annual valuation update process over time.
38A04520 2
SCOPE AND PURPOSE In general terms the valuation update is less comprehensive than Jaakko Pyry Consultings October 2003 valuation (Report #54A03229). The update nevertheless employs the same underlying methodology: The valuation employs a Net Present Value (NPV) of future cash flows approach. The cash flows and discount rate are expressed in real terms. Material changes to the land base between 31 October 2003 and 31 December 2004. Acknowledgement of prevailing log prices. Acknowledgement of revised expectations of future log prices. Acknowledgement of new evidence of market perception of forest value demonstrated in recent transaction announcements. Acknowledgement of WACC estimates as provided by UniServices Auckland Limited Recognition that the forest estate is now 14 months further along the cash flow stream that was projected in the course of the October 2003 valuation
2.1
Matters Outside the Scope of the Valuation Update In the absence of any prominent evidence of material change, Jaakko Pyry Consulting has not adjusted the valuation for the following factors: Yield tables Costs of goods sold (i.e. harvesting and transport) except for costs associated with recent land acquisitions Direct costs of forest operations (establishment, silviculture, etc) except for costs associated with recent land acquisitions
Jaakko Pyry Consulting has confined its field visits to the Heyuan City region as this is where the largest area of recent land acquisitions has been made and forest establishment activities initiated. It is Jaakko Pyry Consultings intention to visit other regions in a process of rolling inspections so that all of Sino-Forests operations are visited within the annual valuation update process over time. 2.2 Purpose The purpose of the valuation is to estimate the market value of the forest for asset reporting purposes, market value is defined as: the most probable price which a property should bring in a competitive and open market under all conditions requisite to a fair sale, the buyer and seller each acting prudently and knowledgeably, and assuming that the price is not affected by undue stimulus. Implicit in this definition is the consummation of a sale as of a specified date and the passing of title from seller to buyer under conditions whereby
38A04520 3
The buyer and seller are typically motivated. Both parties are well informed or well advised, and acting in what they consider their own best interests. A reasonable time is allowed for exposure in the open market. The price represents the normal consideration for the property sold unaffected by special or creative financing or sales concessions granted by anyone associated with the sale 1 .
38A04520 4
3 3.1
VALUATION METHODOLOGY Outline Accompanying the global expansion in planted forests has been ongoing refinement of the processes employed in forest valuation. The valuation of stands of forest, which are mature and ready for harvesting, is comparatively straightforward. The assessed vo lume can be multiplied by prevailing stumpages to provide an estimate of realisable value. The valuation of immature stands is however, more complex. Three methods of valuing immature forests are described below: Realisation value of current timber content Analysis of transaction evidence Discounted cash flow techniques
3.1.1
Realisation Value of Current Timber Content That is the value based on the merchantable content (or standing stock) at the time of the valuation. While this value is both tangible and comparatively straightforward to calculate, it ignores the higher value associated with holding the stands to economic maturity. This method has two obvious limitations: For plantation forests, the timber realisation value of the stand may be very low for most of the rotation length. Then in the final years of the rotation, the marginal rate of wood value growth is usually so great that any rational investor would be expected to see the balance of the rotation out, before realisation. For a plantation resource of any significant size, it is unlikely that the market can absorb all of the forest wood content at once without log prices being depressed. The first effect leads to an unduly conservative valuation while the second can lead to an unreasonably optimistic result. It would be plausible but unlikely that the two effects might neatly cancel one another. However, Jaakko Pyry Consultings preference in valuing plantation forests is to avoid this method altogether.
3.1.2
Analysis of Transaction Evidence In principle, sales transactions of forests should provide the most valid basis for valuing immature forests - if this were the case, forests could be treated on a comparable basis to other real estate, with market transactions used to demonstrate price. In some parts of the world this has proven a workable and popular approach. In China, however, there are practical difficulties. The most important of these is that the trade in immature forests has generally been too sporadic to provide a reliable statistical basis for assessing forest value. Furthermore, to usefully interpret sale values it is necessary to recognise key attributes of the forest such as its maturity, vigour, terrain, proximity to market and
38A04520 5
silvicultural tending status. Rarely are these parameters publicly available, and it is consequently difficult to unravel sale results to produce useful benchmark values. It has become generally recognised among plantation forest valuers and analysts that if a sales-comparison approach is to be applied, it is unlikely that forests with just the same combination of key factors will be found. This disqualifies the prospect of simply comparing forest values on a value per hectare basis. Lack of available comparable sales data for China means Jaakko Pyry Consulting has not been able to develop this approach for this independent valuation appraisal. 3.1.3 Discounted Cash Flow Techniques The value of a forest can be established with most certainty at two points in the rotation - the beginning and the end. At the beginning, the costs of establishment are identifiable and the forest can be valued on the basis of the funds invested within it. At the end, the forest can be valued on the realisation value of its timber content. A value at intermediate ages may be determined by applying the principle of compound interest growth, which provides a convenient linkage to the end-points. Such an approach falls within the ge neral classification of Discounted Cash Flow (DCF) analysis. Two DCF techniques are available: Compounding of investment inputs, and; Discounting of anticipated future returns. Compounding of Costs This method takes the costs involved in establishing the forest and accumulates these with compound interest to the forests current age. This is then declared the forests value. The rationale is that it is this price that forest owners would have to receive if they were to obtain a satisfactory rate of return on their investment to date. The method is equivalent to the accountants concept of capitalising establishment costs plus i terest, although the forest valuer is more inclined to n adopt assumed costs which are standard and current at the time of the va luation. By using costs that are current, along with a real (inflation-corrected) compounding rate, the valuation is updated for inflation. The use of industry standard costs ensures that only costs consistent with efficient practice are recognised. Forest valuers treat the compounding approach, and likewise capitalisation, warily. In practice a high cost forest does not necessarily become a high value forest and yet this is what the method can imply. Compounding rates are therefore generally set conservatively low and the method is phased out before mid-rotation, to be replaced by discounting.
38A04520 6
Discounting Future Revenue This method starts with the expected future net revenue from clearfelling the forest and then discounts back to provide the value of the stand at younger ages. Discounting is the reverse process to compounding and again it is necessary to select an appropriate real interest rate. The higher the discount rate the lower the present forest value. Importantly, the analysis is exclusively driven by future net revenue expectations. Thus, forestry development costs that predate the valuation are ignored. Provided that the eventual revenues are as good as or better than the valuation assumes, an investor purchasing the forest at the derived value is assured of a rate of return on investment at least equivalent to the discount rate. The discounting approach provides the Net Present Value (NPV) of the future net revenue stream and is the most satisfactory surrogate for the way in which the market values planted forests. Of the DCF methods, it is the most commonly applied system in commercial forestry. As the terminology implies, the NPV approach involves projecting the anticipated future net income stream, and then discounting this, at a suitable cost of capital, in order to acknowledge the lower economic value of delayed receipts. The NPV approach generally involves adopting the standpoint of a potential forest purchaser. To this individual or entity, funds previously invested in the forest are irrelevant the exclusive focus is on the forests future earning capability. A crucial parameter within the NPV analysis is the discount rate. The longer the period before income realisation, the greater the reduction in value due to discounting. Forest investments are generally of a long term and their value is especially sensitive to the discount rate. In recent years there has been a burgeoning of the literature addressing the derivation of discount rates for investment appraisal. While there is no agreement on a single appropriate rate for planted forests, and nor is there likely to be, evidence suggests some narrowing of the range of consensus. Besides the selection of discount rate the following features of the NPV approach also require consideration: The period of analysis. The alternatives available with planted forests are to model the forest in perpetuity (i.e. the current rotation and succeeding crops), or confine the analysis to the cash flows arising from the current crop. Terminal value. If the analysis is confined to the current rotation it is necessary to acknowledge any terminal value arising as the investment is exited, such as the realisable value of freehold land. Harvesting strategy. Alternatives include cutting the forest in order to produce a non-declining yield, cutting to produce a smoothed yield flow, or cutting each stand as it reaches a fixed optimum economic rotation age. Analysis of pre-tax or post-tax cash flows. Both approaches have been demo nstrated in valuing planted forests. For cash flows derived on a pre-tax basis a pre-tax discount rate is applied. Post-tax cash flows should be discounted at a post-tax discount rate. If the discount rates have been consistently derived, either approach should lead to the same forest value.
38A04520 7
Within this valuation Jaakko Pyry Consulting has valued the forests using an NPV approach. Cash flows attributable to both the existing rotation and planned future rotations of the forest have been included. The forest estate has been modelled on a perpetual basis for both the existing and succeeding rotations, thereby recognising the expected long-term management intentions and continued sustainability of the estate. The valuation is based on real pre-tax cash flows. Figure 3-1 provides a diagrammatic representation of the valuation processFigure 3-1: Schematic Outline of the Valuation Process
FOREST DESCRIPTION
Physical characteristics of the forest site, including location, area, terrain, soil and climate
Condition of the forest crop, including its current status, and future growth potential by quantity and log quality
Land Value
Forest estate model, driven by a forest management and harvesting strategy, providing: Woodflow projection by log type, and origin Revenue flows Cost flows Cashflow model This is primarily derived from the forest estate model, but also recognises other management cost inputs, plus a treatment of land value
No Financial environment the forest investment environment the general investment environment These provide determining influences on the cost of capital (i.e. the Discount Rate) Workable woodflow strategy? Yes Selection and application of a Discount Rate
Sensitivity Analysis
38A04520 8
The first stage of the valuation process involves assembling a comprehensive description of the forest. Key components of this include an area statement and information on the forest growth potential. At the heart of modern forest management is a forest estate modelling system that incorporates supply chain optimisation using linear programming techniques. This technology enables the collective resource to be modelled to meet various aims, including resource level constraints as well as the supply of various forest products into their end- use markets. The forest estate modelling software used in this valuation is FOLPI2, a licensed product of the New Ze aland Forest Research Institute (NZFRI). The linear programming engine is MOSEK 2.5, supplied by MOSEK ApS, Copenhagen Denmark. Further software routines for data preparation, optimisation, supply chain management and reporting processes have been utilised. These comprise proprietary tools and modules developed by Jaakko Pyry Consulting. Following confirmation that the results of the forest estate modelling process are managerially workable, the generation of wood flows and the allocation of products to markets enables the derivation of cash flows upon which a DCF valuation can be based. Application of the discount rate to these cash flows produces a net present value for the forest. The responsiveness of this valuation to changes in the input variables can then be tested with a variety of sensitivity analyses so as to derive a spread of potential forest va lues.
38A04520 9
4 4.1
FOREST DESCRIPTION Resource Location Map 4-1 presents the location of Sino-Forests existing plantation forest resource.Map 4-1: Sino-Forests Plantation Forest Resource
A more detailed description of provincial forests resources can be found in Jaakko Pyry Consultings November 2003 valuation (#54A03229). 4.2 Forest Area The following table and figures summarise the existing Sino-Forest plantation forest resource. Subsequent to Jaakko Pyry Consultings November 2003 valuation (Report #54A03229) Sino-Forest has increased the area of its estate by a further 88 842 ha of standing timber. The acquisition of new forest area has been confined toCopyright JP Management Consulting (Asia-Pacific) Ltd
38A04520 10
Guangdong, Guangxi and Jiangxi provinces, with the majority of the recently purchased area associated with Heyuan City ( 848 ha) and Jiangxi province 44 (31 299 ha) [Table 4-1].Table 4-1: 95057.6
The existing resource is concentrated in Guangdong Province, followed by Jiangxi and Guangxi Provinces. To date there is only a small resource (2 306.7 hectares) in Fujian Province (Figure 4-1). The age-class structure of the plantation resource is uneven (Figure 4-2) and can be divided between the Sino-Forest planted resource which exhibits a young age-class profile and the purchased forest areas which exhibit a primarily mid rotational ageclass profile.Figure 4-1: Provincial Area Age-class Distribution80000 70000 60000 50000 40000 30000 20000 10000 0 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 Age (years)
Area (hectares)
38A04520 11
Purchased
Planted
4.3
Species Sino-Forest has two main classes of forest land, those areas planted by the CJV companies and those areas of existing plantation for which the cutting rights have been purchased. The areas planted by the CJVs are primarily fast growing Eucalyptus urophylla x Eucalyptus grandis hybrids with smaller areas of poplar species (mainly in Jiangxi Province). The existing forests for which the cutting rights have been purchased comprise a number of species including: Masson pine Slash pine Chinese fir Eucalyptus species Poplar species Acacia species
As Figure 4-3 illustrates, pine species account for the largest planted area fo llowed by eucalyptus species, Chinese fir, poplar species and acacia species.
38A04520 12
4.4
Plantation Growth and Yield Sino-Forest and its CJVs provided Jaakko Pyry Consulting with basic data relating to the growth and yield of the existing plantations. Jaakko Pyry Consulting combined this data with information gathered from its own field measurements and other third party sources to reach its own conclusions relating to growth and yield of the existing and proposed future forest plantations. A range of inventory data is used to describe forest yield. The development of yield tables usually begins at the time a stand is planted when an area is assigned to a yield table projection based on a number of factors including soil type, location, productivity of surrounding stands and genetic composition. Early estimates of yield are then refined as data is collected from quality control inspections and inventory activities. Through the ongoing capture of data, the precision of growth and yield estimates is progressively improved. Jaakko Pyry Consultings previous field inspections have included a high level inventory from which indicative yield tables have been derived. These inventory initiatives included the measurement of a range of age-classes to give an indication of growth rates. Stand boundaries were also measured using GPS (Global Positioning System) equipment to evaluate the accuracy of the area statement. For each stand visited during the inventory, 10 trees were randomly measured for diameter at breast height (1.3 m) as well as total tree height. Height was measured to the tip of the tree using a digital vertex. A stocking measurement was also made for the purposes of calculating stand volumes. Finally, several stands were measured for area using the GPS equipment and then checked back against the tabulated stand record data.
38A04520 13
4.4.1
Tree Volume Calculations Using diameter and height as variables, individual tree volumes can be calculated. Appropriate volume equations for each species were acquired from various sources. The tree volume equations used by Jaakko Pyry Consulting are as follows: Eucalyptus V (m3 ) = 0.01774597 - 0.00429255D + 0.0002008136D2 + 0.000494599DH + 0.00001125969D2 H - 0.001782894H North American V (m3 ) = (D/100)2 ) Chinese fir V (m) = Slash pine V (m) = 0.0001155362D(1.9788108856-0.005574216(D+2*H)) x H(0.5034278471+0.008969134(D+2*H)) Both D (diameter at breast height) and H (tree height) are expressed in metres By multiplying the average tree volumes with the measured stockings, a measure of individual stand yields is produced. 0.000037 DH 0.19328321((D/100)2 )H) + (0.007734354(D/100)H + (0.82141915
4.4.2
Yield Table Formulation A growth curve for each species was formulated using the following non-linear equation: Equation (1) ln (V) = + / T 2 + / TN where: V T N = volume per hectare (m3 /ha) = age (in years) = stocking (stems per hectare)
is the intercept and are the x variable parameters.
38A04520 14
Processing the field data using Equation (1), the following yield curves for each species were produced.Figure 4-4: Eucalyptus Growth Curve200
Field Data
Growth Curve
120
80
40
0 0 1 2 3 4 5 6 7 8 Year 9 10 11 12 13 14 15
38A04520 15
250
200
150
100
50
0 0 2 4 6 8 10 Year 12 14 16 18 20
From these curves, a set of yield tables were established and applied in the valuation process. The field data collected for poplar species was insufficient to construct a yield curve. The data from various sources were aggregated and an average mean annual increment (MAI) was calculated in order to determine the recovered volume used during the forest valuation process. A recoverable volume MAI of 8.9m3 /ha/yr is assumed. Most of the recent acquisition areas in Heyuan City are poorly stocked pine forest areas with very low yield of current standing volume. Sino-Forests managers in Heyuan estimate that the total standing volume associated with these areas is 2 to 3/m3 /mu (30 to 45 m3 /ha) and that 60% of this volume will be merchantable. Within this valuation a recoverable yield of 18m3 /ha has been assigned to these areas with no biological growth assumed. Jaakko Pyry Consulting recommends that a more effective and accurate inventory program be designed and implemented to better capture the information required to generate reliable yield tables. 4.4.3 Future Yield Development It is assumed that existing poplar plantations will be replanted into poplar but that all other new plantation establishment will be into fast growing eucalyptus plantations. Jaakko Pyry Consulting has assumed an increase in yield of MAI 2.5 m3 /ha/year for second rotation eucalyptus plantations and the same again for third rotation eucalyptus plantations. Following the third rotation no further yield improvement has been assumed. Such an increase in yield is predicted to result from geneticCopyright JP Management Consulting (Asia-Pacific) Ltd
38A04520 16
improvements in planting stock, through Sino-Forests tree improvement programme, and through improved silvicultural practices as Sino-Forest gains experience and expertise in the management of fast growing eucalyptus plantations. 4.5 Plantation Risks In addition to risks relating to the key assumptions there are other risks associated with establishing a biological resource. In the Sino-Forest plantations the key identifiable risks include: Fire Fire has historically not been a major threat in South China plantation forests. However, with the increase in eucalyptus plantation area there is a correspondingly greater fire risk. This risk can be mitigated by the implementation of fire prevention techniques such as the construction of firebreaks inside plantations, the development of human resources trained in fire fighting and supported by physical infrastructure such as portable fire fighting equipment. Given that the resource is geographically fragmented and comprises discreet forest blocks that are generally less than 500 ha in size the opportunity for a singular catastrophic event is remote. It is evident from field visits in Heyuan City that fires resulting from farmers burning as a land preparation tool have occurred in the past. Sino-Forest has also used fire to prepare land for planting but will move away from this practice in future to reduce negative consequences for soil fertility as organic matter is volatilised and lost to the atmosphere.. Sino-Forest has fire insurance cover. Frost Frost damage is a risk on higher altitude inland sites and was responsible for the poor yield seen in much of the 1996 eucalyptus plantings. The risk of frost damage is mitigated by careful attention to site selection and the avoidance of frost prone sites. Pests and Disease As the area of single species plantations increases so does the potential risk of pest and disease problems. To date there appears to have been no serious pest orCopyright JP Management Consulting (Asia-Pacific) Ltd
38A04520 17
disease outbreaks. This risk is mitigated by the large research and development effort assigned to eucalyptus development. Most of the pest and disease problems have so far occurred in the poplar plantations. Two pathogens impact the growth and quality of the Poplar hybrid resource. Borer impacts the quality of logs and has the affect of increasing the pulpwood supply by making the butt log unsuitable for veneer where an attack is severe. The caterpillar of the Yangzhou Moth predates the leaves and can compromise growth if an attack is left unchecked. Adult borer is controlled by the application of a biological pesticide. Larvae are controlled by inserting a poisonous stick into the hole in the stem that represents the entry point of the larvae. Leaf eating caterpillars are controlled by the application of pesticide if levels of infestation are such that 30% of the crown is affected. Poplar plantations are currently inoculated against these problem pests and disease as a routine part of plantation establishment and maintenance. Local Forest Bureaus maintain disease control stations and provide forecasts on pathogen levels and the need for control. In keeping with good forest practices, Sino-Forest plant trees produced from a number of different clones; this reduces the risk of a weakness in any one clone being propagated throughout the plantations and provides genetic diversity. The clones that have been planted to date have been assessed for resistance against disease. Typhoons On average the coastal areas of Southern China suffer a number of typhoons each season during July to September. While in general the forest damage is localised and confined to young age-classes, every 20 years or so a typhoon is likely to cause significant damage. The inland coastal strip affected is in the region of up to 200km from the coast. The risk of typhoons is generally limited to some SinoForests plantation areas in Guangxi. This risk is reduced by the high stocking rates and short rotations of the euc alyptus plantations.
38A04520 18
5 5.1
COSTS Operational and Production Costs Based on data supplied by Sino-Forest and its own consulting experience in China, Jaakko Pyry Consulting believes that operation and production costs have not changed materially since its 31 October 2003 valuation of Sino-Forests forest asset. As Chinas economy develops cost structures will change and as such operational costs will continue to be the subject of attention in future valuations.
5.2
Establishment Costs The following tables give the operation costs for establishing Eucalyptus spp. in Southern China. Values vary slightly from district to district but for the purposes of this valuation regional average costs have been employed (Table 5-1) for regions other than the recent acquisition areas in Heyuan.Table 5-1: Operation Costs for Eucalyptus Planted Rotation (USD/ha)Operations 0 1.46 1.10 54.88 170.12 118.90 69.51 18.29 18.29 452.56 1 118.90 42.07 9.15 14.63 60.07 18.29 18.29 281.41 Planted Forest (R1) Year 2 3 4 118.90 42.07 7.04 7.04 6.95 3.66 3.66 0.91 17.17 1.07 0.79 18.29 18.29 18.29 18.29 18.29 18.29 225.43 48.36 45.24
Planning Operations design Site preparation Terracing Fertiliser Planting (incl. seedling cost) Thinning Tending Protection R&D FB Service Charge Overheads Lease Total Total USD/ha
The largest individual cost is terracing. Terraces are manually formed on the contour prior to planting. Sino-Forest employs this technique in the belief that it facilitates soil conservation through preventing erosion that might otherwise occur in heavy rain events. The operational decision to re-establish, by either coppice or new seedlings, is made on a case-by-case basis. It is anticipated that the second rotation will be established largely by way of coppice. Operational costs associated with coppicing are lower than those associated with establishment by seedling as there is no site preparation or terracing required. The growing costs associated with the coppiced rotation are approximately two thirds that of the first rotation crop (Table 5-2).
38A04520 19
Planning Operations design Site preparation Terracing Fertiliser Planting (incl seedling cost) Thinning Tending Protection R&D FB Service Charge Overheads Lease Total Total USD/ha
The total cost associated with the first two rotations is given in Table 5-3.Table 5-3: Total Operation Costs for Planted Crop and First CoppiceTotal Operation Cost Total USD/ha Planted Crop (Rotation 1) 1 143.48 First Coppiced Crop (Rotation 2) 711.93 Total Cost 1855.41
First rotation establishment costs associated with the acquisition areas in Heyuan City are detailed in Table 5-4. The total cost for the first rotation in the acquisition areas is some USD240/ha greater than the cost assumed for Sino-Forests other operation. Much of the increase is associated with fertiliser costs with four fertiliser operations specified in the costing presented to Jaakko Pyry Consulting.Table 5-4: Operation Costs for Eucalyptus Planted Rotation (USD/ha)Operations Operations Design Site preparation Fire Break Roading Planting (incl. seedling, hole digging etc) Fertiliser Tending Supervision Maintenance Protection R&D Contingency Overheads Lease Total Total USD/ha 0 1.5 43.8 5.5 21.9 213.8 311.5 47.5 21.9 5.5 5.5 5.5 25.6 27.4 736.9 124.2 3.7 5.5 11.0 14.6 1.8 18.3 27.4 206.5 124.2 3.7 5.5 7.0 3.7 1.8 18.3 27.4 191.6 1 Planted Forest (R1 - Heyuan) Year 2 3 4
38A04520 20
5.3 5.3.1
Costs of Production Harvesting Costs Harvesting costs differ significantly between provinces and have therefore been treated separately (Table 5-5). Harvesting methods in all four provinces were generally manual based with little or no mechanical assistance. The rates used apply for all species.Table 5-5: Harvesting Costs by ProvinceProvince Guangxi Fujian Guangdong Jiangxi Harvest Costs (average all species) RMB/m 3 55.00 35.00 50.00 50.00 USD/m 3 6.71 4.27 6.10 6.10
Jaakko Pyry Consulting has identified that the key factors influencing manual harvesting costs include labour cost, tree size, log length and topography. 5.3.2 Transport Costs Individual transport rates for each province have been employed (Table 5-6)3 . A loading and unloading cost of USD0.60/m3 was applied to all four provinces. Average distances are based on distances to major mills and have been limited to 150 km, as it is assumed that longer haul distances will be avoided by marketing volume to smaller local mills closer to the resource.Table 5-6: Transport Costs by ProvinceProvince Guangxi Fujian Guangdong Heyuan City Jiangxi Transport unit rate USD/m 3 /km 0.07 0.07 0.04 0.06 0.09 Loading (USD/m 3) 0.60 0.60 0.60 0.60 0.60 Unloading (USD/m 3) 0.60 0.60 0.60 0.60 0.60 Average Distance 150 100 80 150 150 Average Total Cost (USD/m 3) 11.70 8.20 4.40 10.2 14.70
When Sino-Forest sells its standing timber to wood-traders, both the harvest and transport costs are accounted for by the wood-trader at the time of purchase. 5.4 Taxes at Harvest Once the trees are harvested and sold, taxes to both local provincial government and state government are levied. These include reforestation, forest protection and infrastructure taxes. The taxes can vary between location and between species. A summary of the taxes used in the valuation is presented in the following table.
38A04520 21
5.5
Overhead Costs The cost of the direct supervision required for plantation establishment and management has been identified as 10% of the direct operational costs. The overhead costs associated with both the CJV companies and the management of the purchased trees has been identified by the companies as USD18.29/ha/year. Sino-Forest has not identified any corporate overhead costs related to running this business. Jaakko Pyry Consulting has allocated a further USD18.29/ha/yr for corporate overheads.
5.6
CJVs The existing forest area planted by Sino-Forest (32274.6 ha) has been managed under a CJV set up between Sino-Forest and PRC incorporated forestry trading companies (the commercial arms of government forestry bureaus). The key points of the CJV agreements are: The forestry trading company provides the land for the plantation forests. Sino-Forest will pay all the plantation establishment and maintenance costs. At harvest the wood produced is shared 30% to the forestry trading company and 70% to Sino-Forest. Jaakko Pyry Consulting has only valued Sino-Forests 70% share in this valuation.
It is assumed that areas currently planted under CJV agreements will be replanted under this model into the future. Table 5-8 below identifies the existing planted area by CJV company:Table 5-8: Planted Area by CJV CompanyProvince Fujian Guangdong Guangxi Jiangxi Total Gaoyao Heyuan City CJV Company Zhangzhou Jia Min Forestry Development Ltd. Gaoyao City Jiayao Forestry Development Ltd. Heyuan City Jianhe Forestry Development Ltd. Guangxi Guijia Forestry Company Ltd. Jiangxi Jiachang Forestry Development Company Ltd Planted Area (ha) 375.9 5854.0 7481.0 9174.0 8857.0 31741.9
Note that the Fujian company is not a CJV but a wholly owned forestry enterprise, and thus rather than a CJV agreement this company operates under articles of association.
38A04520 22
5.7
Land Rental It is assumed that the existing purchased forest areas will be harvested and replanted in fast growing eucalyptus hybrids and that the land will be leased and an annual rental paid. Sino-Forest has identified the currently preferred methodology that will be followed for future pla ntation development as: purchasing the cutting rights to existing plantation forests and, following harvest, leasing the land for the establishment of fast growing eucalyptus plantations, (and poplar in Jiangxi Province). Sino-Forest has advised that it expects to pay an annual land rental of USD18.29/ha/year (RMB10/mu/year). The land rental associated with the recent acquisitions in Heyuan is reported as USD27.43/ha/annum. These levels of land rental are common in Southern China for land designated for forestry i.e. USD14.63 to 27.43/ha/year (RMB8 to 15/mu/year). These rentals are associated with hill country that it is generally unsuited to agricultural purposes. Annual land rentals as high as USD73.17/ha/year (RMB40/mu/year) have been observed for forestry designated land in Southern China. Rentals of this magnitude are associated with flat land that is suitable for agricultural purposes.
5.8
Log Traders Margin Sino-Forest currently sells most of its logs to log traders on the stump (that is standing in the forest). Jaakko Pyry Consulting has calculated the stumpage price as the delivered to mill gate log price minus the cost of transport and harvest which the log trader must pay. However, in addition to the harvest and transport costs the log traders margin must also be deducted from the stumpage price paid for the logs. Jaakko Pyry Consulting has assumed the log traders margin to be 5% of the gross stumpage price.
5.9
Exchange Rate An exchange rate of 1 USD = 8.2 RMB has been applied throughout this valuation.
38A04520 23
LOG PRICE OUTLOOK Price forecasts are made using a combination of formal modelling techniques and informed judgement. Many factors affect prices, including the demand and supply balance, exchange rates, pulp prices, financial positions of buyers and sellers, price relativities between woodchips from different sources, and production costs. A detailed overview of Chinas forest sector is given in Appendix 2 and is summarised below.
6.1
Fibre Supply and Demand. China is a significant importer of all forms of wood products. Wood fibre imports have increased rapidly over the past five years due to expanding domestic demand and the lack of available domestic fibre supplies.Figure 6-1: China Total Fibre Imports
Panels
Lumber
Logs
38A04520 24
6.1.1
Sawlog Supply and.Figure 6-2: China - Sawlog Total Production and Consumption
Production 40 Sawn Wood Volume -million m 335 30 25 20 15 10 5 2004e 1994 1999 1990 1991 1992 1993 1995 1996 1997 1998 2000 2001 2002 2003 2005
Consumption
Year
2010
38A04520 25
Sawlog and peeler log production has shown a similar growth trend.Figure 6-3: China - Sawlog and Peeler Log ProductionSaw Log 90 80 70 60 50 40 30 20 10 0 2004e 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2005f 2010f3
Peeler Log
6.1.2
Furniture Industry Furniture production and exports in China have been growing rapidly over the last ten years. It is Jaakko Pyry Consultings opinion that improving living standards as well as developments in the construction ind ustry will support furniture demand over the next decade. Domestic production increased significantly from 1995 to 2004 at an annual average rate of 15% per annum. During the period 2005 to 2010, Jaakko Pyry Consulting forecasts growth to slow but remain strong at around 11% per annum. Important areas for furniture manufacture include Shandong, Zhejiang and Jiangsu, the locations of the majority of joint ventures constituted with capital from Taiwan and Singapore. Wood consumption in the furniture sector comprises 30% wood-based panels and 70% logs and lumber.
38A04520 26
40 35 Total Furniture Production 30 Total Furniture Exports 25 20 15 10 5 0 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005f
6.1.3
Construction Industry A massive central government housing programme is forecast to keep growth in the construction industry strong. The last three years have seen continued growth in completed floor space, and China has set a living space target of 23-25 m2 per capita for urban housing and over 25 m2 per capital for rural housing by 2010.
6.1.4
Wood-Based Panel Supply and Demand Chinas total wood based panel production reached 42.0 million m in 2004, a 20% increase over 2003 production levels. Future growth is expected to remain strong.
38A04520 27
MDF
Plywood
The consumption of wood-based panels in China has shown similar trends to panel production. MDF usage has increased from 6% of the total consumption in 1990 to 34% in 2004. Demand for total wood-based panels is expected to reach 50 million m by 2010, but plywood usage is expected to decline as the availability of peeler quality sawlogs declines. 6.1.5 Wood Chip Demand for Pulp Production Chinas domestic demand for pulpwood is expected to grow significantly, with a number of new pulpmills planned for construction. Pulpwood demand will grow strongly with much of the growth met by direct imports of pulp. The remainder will be sourced from the increasing availability of domestic plantation pulpwood. China is expected to become a net importer of pulpwood and woodchips over the next three years. In terms of pulpwood demand for domestic pulp production, the current consumption is estimated to be 15.6 million m and has increased by 5.3% per annum over the last five years. Demand for pulpwood from Chinas pulp industry is forecast to reach 19.1 million m by 2007 (7.0% per annum growth over 2002 to 2007) and increase further to 24.6 million m by 2012 (5.2% per annum growth over 2007 to 2012).
38A04520 28
6.2
Valuation Log Prices Sino-Forest generally sells the plantations on a standing basis and therefore do not sell logs direct to the market. However current forecast mill gate log prices have been assumed for the purpose of the plantation cash flow forecasts and are presented below in Table 6-1.Table 6-1:. For species such as pine and eucalyptus < SED 4 8 cm logs now sell for as much as RMB380/m3 and the SED8 to 14 cm logs.
38A04520 29
7 7.1
FOREST ESTATE MODEL Overview For any forest, but particularly forests of significant size, there is an important choice in how the forest's future management is modelled. The alternatives are: A stand-based (bottom- up) approach. Individual stands within the forest are effectively considered in isolation. Once their yield potential at a certain target age is ident ified, data are accumulated to provide a result for the forest as a whole. A forest estate (top-down) approach. All stands are modelled collectively to achieve some desired result from the total forest resource. The most common manifestation of the distinction is in the production profile of the resource. The age-class distribution of an example forest is shown below. Characteristically, most plantation forests have an irregular age distribution and Figure 7-1 illustrates this feature.
600
500
400
300
0 0 1 2 3 4 5 6 7 8 9 10 11 12
Age (years)
Assume, for convenience that all stands share the same yield table as illustrated by Figure 7-2.
38A04520 30
0 0 1 2 3 4 5 6 7 8 9 10
Were the forest to be managed on the stand-based approach, each stand might be cut at some externally determined target age. A commonly applied concept is that of the optimum economic rotation age. Accumulating the results gives an irregular wood production, as shown below. The harvest profile effectively becomes the mirror image 5 , with a scaling factor, of the age-class distribution (Figure 7-3).Figure 7-3: Example Harvest at Fixed Rotation Age100000 90000
Harvest Period5
Note that the full extent of the harvest in the first period is not shown as all ages >9 years are assumed harvested.
38A04520 31
In practice, it may be unrealistic to harvest all stands at a fixed rotation age. Most plantation forest estates have, through various circumstances, an uneven age-class distribution. A harvesting strategy that employs a fixed rotation age will lead to a wood flow profile that reflects the age-class distribution as illustrated in Figure 7-3 previously. An irregular wood flow may be inappropriate for various reasons: Marketing - an irregular supply may prejudice market confidence. Logistical considerations of harvesting and transport. Supply commitments to associated processing plants. Regularity of cash flow from which to fund ongoing forest management.
To meet these considerations other harvesting strategies are likely to be preferred. A forest estate modelling approach can therefore be used to smooth the harvest rate, achieving this by manipulating both the age and area of harvest 6 (Figure 7-4).Figure 7-4: Example Smoothed Forest Harvest
Harevst Period
The choice of modelling method has a bearing on the results of a forest valuation. For each stand, examined in isolation, it is possible to identify an optimum economic rotation age. At this rotation length, the NPV of the stand is maximised. If the optimum economic rotation is employed as the target clear- felling age in a stand-based model, this will produce the highest theoretical value for the forest. However, if a forest estate modelling approach is employed, this invariably involves some departure from the optimum economic rotation age; a lower value for the forest results. The extent of difference between the modelling approaches6
Note that the chart series for the average age of harvest is not shown. It does however vary over time.
38A04520 32
depends on the degree to which the harvest age varies from the theoretical optimum. 7.2 Observed Practice in Wood Flow Modelling It is Jaakko Pyry Consultings observation that wood flow modelling for valuation purposes invariably involves smoothing of wood flows. For large resources (in excess of a few hundred hectares) a non-declining yield is the most common default representation. To a large extent the degree of smoothing implemented is determined by the resources age-class distribution. The modelling profile adopted in forest valuations is guided by two factors: What the forest valuer believes is a credible and pragmatic profile; and What the market evidently assumes in determining what forest purchase value it is prepared to pay.
100000
80000
60000
40000
20000
0 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39
Harvest Period
Jaakko Pyry Consulting has profound misgivings with production profiles for any particular forest that involves large fluctuations in wood flow. They may lead to real inefficiencies in start-up and withdrawal of harvesting operations, a less than enthus iastic participation by market partners, and forest financial flows that are most inefficient to manage. Jaakko Pyry Consultings perception of the market for forests is that most investors prefer valuations based on pragmatic wood flow profiles. Jaakko Pyry Consulting has consistently been engaged in preparing and evaluating managed wood flow profiles for intending forest investors.Copyright JP Management Consulting (Asia-Pacific) Ltd
38A04520 33
7.3
Modelling Supply and Demand Forest estate modelling provides the means to manage the collective output of the estate to best effect, managing supply chain optimisation by matching production by log type to the various markets. These include; local sawmills, panel mills and pulpmills, local and distant forest product users, and export ports. A schematic outline of the entire forest estate modelling concept used to project future wood flows as well as projected costs and revenue by destination is shown below.Figure 7-6: Schematic Illustration of the Forest Estate Model= Domestic Sawlogs = Export Sawlogs = Industrial Pulpwood = Sawmill Chips
4 000 3 500 3 000 2 500 2 000 1 500 1 000 500 0 1 7 13 19 25 31 37 43 49 55 Period of Model
VARIOUS MODEL CONSTRAINTS - Optimisation of forest asset NPV - Minimum net cash flow requirement - Maximum level of operational cost - Correct log mix at various mills - Non-declining yields by forests
As illustrated, the model maintains the identity of the forest units within the collective resource. Each has a distinct age-class distribution. The linear programming model operates on a year-by-year basis, with each year being unique in respect to clearfell age, location of harvest and the quantities delivered to various destinations. 7.4 Croptype Allocation Forest estate modelling has conventionally taken the approach of allocating each stand in the forest to a croptype. Croptype definition is initially productivity-based, with all stands within a croptype expected to share the same yield table. Factors affecting yields include the species, site characteristics and silvicultural regimes of the stands - thus croptypes are normally distinct with respect to these attributes. With increasing sophistication in the modelling process, other criteria for
38A04520 34
differentiation may also apply. Forest location, slope classification, soil type and tenure are also commonly distinguished. The practice of aggregating stands into croptypes has largely been driven by limits on the computational capacity of available computers. With processing speeds continuing to increase rapidly, the requirements for aggregation are diminishing. It is increasingly practicable to construct models in which each stand is a croptype in its own right. The improved modelling resolution that this offers is attractive, although greater automation of model construction also becomes necessary. The forest estate model that has been constructed to describe the Sino-Forest estate employs a substantial measure of aggregation, but retains a high degree of resolution inasmuch as geographical identities are maintained and the coppicing of future stands is modelled. 7.5 Model Constraints The linear programming based framework allows the specification of a variety of constraints. The following types of constraint are included within both the wood flow model and the supply chain optimisation model: Lower and upper harvest age limits. Overall objective of optimising the NPV of future cash flows. Croptype allocation for replanting/regeneration of future crops, with an accompanying variety of replanting constraints and limits. Harvest constraints, which in turn include a range of further options such as non-declining yields and product smoothing capabilities. Cash flow and budgeting constraints, such as maximum expenditure and minimum cash flow requirements on an annual basis. Supply chain management such as the delivery of required product mixes to specific destinations over a managed time horizon.
In order to provide variations in the mix and volume of the products available from each stand at clearfell, the age at which harvest can occur is allowed to vary. The linear programming model determines the year of harvest and is constrained to a range of ages that are realistic industry standards. The minimum and maximum clearfell ages for each species are shown in Table 7-1.Table 7-1: Clearfell Age RestrictionsPeriod 1 1 1 1 1 1 1 to 49 to 49 to 49 to 49 to 49 to 49 to 49 Species Eucalypt (existing) Eucalypt (new crop) Eucalypt (coppice) Pine Poplar Chinese Fir Acacia Minimum Clearfell Age 5 6 5 19 6 15 6 Maximum Clearfell Age 12 6 5 25 12 20 10
38A04520 35
After an initial period when croptypes are allowed a wide range of clearfelling ages, the maximum harvest age is reduced so that a more tightly defined clearfell range exists. The reasons for doing this are threefold: At the start of the modelling process there are some stands containing old trees, and the model must acknowledge these exist. Lowering the maximum clear-fell age after a period of time prevents the model from deferring the harvest age unduly in so-called end-play effects. A narrower band of possible harvest ages enhances the models processing speed.
It is possible to specify periods within which the harvest of a class of products may increase at any time but cannot subsequently decline. As the concept suggests, this is commonly referred to as a non-declining yield (NDY) constraint. The forest estate model has used both NDY constraints and smoothing constraints to allow the harvest of any mix of log products from various forest origins to be smoothed between annual periods. 7.6 Wood Flow and Allocation Model Results Figure 7-7 illustrates the wood flow profile for the collective resource over the 49year period of the valuation. This clearly shows the dominance of eucalypt within the estate. This results in a sustainable wood flow of around 4 million m/year by 2010. Figure 7-8 shows that the sustainable pulplog supply from the forest estate is in the order of 1 800 000 m3 /year at that time.Figure 7-7: Wood Flow by Species12,000,000 EUC FIR PIN ACA POP
10,000,000
8,000,000
Volume (m3)
6,000,000
4,000,000
2,000,000
2005 2010 2015 2020 2025 2030 2035 2040 2045 2050 2055 2060
38A04520 36
Jaakko Pyry Consulting has modelled the existing Sino-Forest estate of 2 353 549 ha over a 49 year period. This enables the current estate to be harvested close to its optimum economic rotation age and results in a wood flow profile that shows variation between 4 000 000 m3 and 10 000 000 m3 .Figure 7-8: Wood Flow by Log Type12,000,000 Pulp Log Saw Log
The volume contributions from various locations are shown in Figure 7-9. Heyuan City and its associated 200 000 ha plantation expansion project dominates supply.Figure 7-9: Wood Flow by Location12,000,000 Guangxi Jiangxi Fujian Gaoyao Heyuan
38A04520 37
8 8.1
DISCOUNTED CASH FLOW VALUATION Overview The diagram below illustrates the structure of the valuation model. Generation of the initial inputs (the wood flows) has been described in the previous section. These wood flows are then optimised in their delivery throughout the supply chain to the various end-use markets. Revenue is generated at each destination, the price point being delivered at mill gate (AMG) or at wharf gate (AWG). Harvesting and transport costs, annual forest management costs, indirect overhead costs and the net cost of land are deducted from this revenue to give an operating margin. The linear programming model generates all of these costs streams, since their profile depends on the harvesting strategy and age-class structure of the forest.Figure 8-1: Schematic Illustration of the Forest Valuation Process
ALLOCATION TO MARKETS
SALE OF FREEHOLD LAND PRE-TAX CASH FLOW SCHEDULE MODEL NET PRESENT VALUE FROM PRE-TAX CASH FLOWS
38A04520 38
8.2
Treatment of Taxation Astute forest investors are expected to prepare valuations on the basis of post-tax cash flows. However, in general the accessible information with which to interpret transaction evidence almost always excludes any evidence of the buyers taxation position. Accordingly, when forest valuers have sought to derive implied discount rates, these have largely been based on pre-tax cash flows. This valuation has been based on real pre-tax cash flows.
8.3
Scope of the Analysis In this context, scope refers to the time span of the analysis. The forest estate modelling process can provide projections of cash flows far into the future. Providing the existing forest is replanted into productive croptypes, it would be possible to run the analysis indefinitely. Two alternatives are demonstrated in forest valuation: Perpetual cash flows - the forest is modelled as an ongoing business, where stands are replanted as they are felled. All revenue and costs associated with the sustained venture are modelled in perpetuity. In practice, the model is extended to the point where, after the discounting process, incremental cash flows are effectively immaterial. A figure in the order of sixty years is not uncommon when modelling a large plantation resource. Current rotation analysis - only the revenue and costs associated with the existing tree crop are included in the analysis.
In general, Jaakko Pyry Consulting prefers to confine the analysis to the current rotation. The justification for this approach is that future rotations, which include a degree of conjecture, are excluded from the analysis. The current rotation approach is especially compelling when future rotations appear either spectacularly profitable, or especially unprofitable. In either case it could be anticipated that some modifying influence would prevail. If subsequent rotations are unprofitable, the forest owner will look to contain costs and increase log prices. If there is no prospect of either, a rational investor will quit forest ownership. If subsequent rotations appear super-profitable, it can be anticipated that there will be competition for the underlying land and its price will increase. When charged with a higher land price, the profitability of the tree crop, and hence its value, will decline. The approach is consistent with wider business appraisal that generally seeks to confine the analysis to the current investment cycle, and thereby avoid unnecessary conjecture. However, a disadvantage of the current rotation approach is the requirement to identify any terminal value associated with the investment. In forest valuations, the obvious candidate for the terminal value is the value of the land. Application of the current rotation approach assumes that the freehold land is either actually or notionally sold as the current crop is harvested.Copyright JP Management Consulting (Asia-Pacific) Ltd
38A04520 39
8.4
Timing of Cash Flows Tree planting within the Sino-Forest estate most commonly takes place over the months, February to April. By convention, stands are generally assumed to have been fully established by 30 June. The yield estimation process has generated yields that are projected to apply on the full anniversary of planting. Thus, for example, trees planted in 1975 were aged 23 full years on 30 June 1998 and the yields corresponding to 23 years of age were assumed to be available at that date. With large forests that are subject to continuous harvesting, it would be impractical to fell all stands just as they turn their nominated target age. Instead, in a valuation model of the type represented here, they are expected to be felled across the span of a year. Commonly applied financial modelling procedures would suggest that the assumption that revenues arise at year-beginning would seem unduly aggressive. Seemingly, a more realistic approach would be to assume that cash flows arise no sooner than mid-year. However, between the exact anniversary of planting and the felling operations, the tree crop will have grown. If the harvest age is near to the optimal economic rotation age, the marginal rate of value growth will be close to the discount rate. Treating the revenue flow as a point event at the planting anniversary is therefore an acceptable assumption. In principle, cost flows should be treated differently it would appear more realistic to consider them as occurring at mid-year. For convenience they, like revenues, have been treated as coinciding with the stand anniversary. This approach results in them being discounted less, and therefore represents conservatism in the valuation process.
8.5
Date of Valuation The date of the valuation is 31 December 2004. In terms of the procedures described above, cash flows are assumed to arise at each succeeding November 1. Jaakko Pyry Consulting uses proprietary software that allows the isolation of both the cash flows arising from the current rotation and all future rotations at any point in the valuation horizon. The cash flo ws contributing to the Sino-Forest valuation arise during the 50-year period beginning 1 January 2005 and ending 30 June 2064.
38A04520 40
DISCOUNT RATE A valuation based on an NPV approach requires the identification of an appropriate discount rate. In selecting the rates there are two broad approaches: Deriving the discount rate from first principles. The most common expression of this approach turns first to the Weighted Average Cost of Capital (WACC). This recognises the costs of both debt and equity. The cost of equity may be derived using a Capital Asset Pricing Model (CAPM) method. A second approach is to derive implied discount rates from transaction evidence.
9.1
Discount Rate Derived from WACC/CAPM 9-1.Table 9-1: Estimate of Post-tax WACC by MarsdenLower bound 5.1% Average estimate 6.7% Upper bound 8.35
By definition the formulation of WACC employed by Dr. Marsden is associated with post-tax cash flows and includes the cost of debt. Dr. Marsden has transformed his estimate of nominal post-tax WACC to an equivalent real pre-tax WACC through a simple transformation with appropriate qualification. The average estimate of WACC to apply to real pre-tax cash flows is 10% (Table 9-2).Table 9-2: Estimate of Real Pre-tax WACC by MarsdenLower bound 7.6% Average estimate 10.0% Upper bound 12.4
9.2
Implied Discount Rates In common with other valuers of Southern Hemisphere planted forests, Jaakko Pyry Consulting maintains a register of significant forest transactions. The available evidence is then analysed in an effort to derive the discount rate implied by each transaction. The process involves preparing a credible representation of the forests future potential cash flows and then relating these to the transaction price. From this type of exercise conducted in Australia and New Zealand, Jaakko Pyry Consulting has observed derived discount rates for recent transactions to generally fall within the range of 8-10%. These are real rates, applied to post-tax cash flows.
38A04520 41
As yet Jaakko Pyry Consulting has little implied discount rate data for the Southern China region. As the commercial plantation forest industry develops and forests are transacted, empirical evidence from which to derive implied discount rates will arise. The capacity to utilise implied discount rates in this valuation is limited to considering how the forest investment in China compares with such investment in other locations. Commercial forestry in Southern China is still its infancy and faces some challenges, these include: The reliability of forest descriptions The accuracy of yield prediction Achieving high growths in a consistent manner
It is Jaakko Pyry Consultings opinion that for many forest investors, investing in plantation forestry in China would be considered a riskier proposition than investing in the industry in Australia or New Zealand for instance. 9.3 Incorporating Risk in the Discount Rate If forest investment in China is at present perceived to be a more risky proposition than like activity in other international counterparts, the issue then becomes how to quantify this difference. The textbook treatments of the subject make it clear that the discount rate cannot be regarded as a simple catch-all for any and all forms of perceived risk. Because the discount rate may be a very blunt instrument in such a role it is preferable instead to attempt to acknowledge risk in the development of the cash flows to which the discount rate is applied. However, despite this principle, there is an inc lination by potential purchasers to load the discount rate where they feel uneasy. 9.4 the re be not just willing buyers, but also willing sellers. If the only purchase offers to be extended involved very high discount rates we would expect that forests would not be willingly sold.
38A04520 42 Southe rn China. A discount rate of 12% has been selected and applied to pre-tax cash flows. This differs from that in Jaakko Pyry Consultings November 2003 Valuation (Report #54A01987) where a discount rate of 13% was selected. It is Jaakko Pyry Consultings perception that with a carefully timed and managed sale, other buyers could be attracted who would be willing to accept a similar pre-tax discount rate.
38A04520 43
10
DCF VALUATION RESULTS Jaakko Pyry Consulting has determined the valuation of the forest assets of by Sino-Forest as at 31 December 2004 to be USD565.6 million. This is the result of a valuation of the existing planted area and uses a 12% discount rate applied to real, pre-tax cash flows. Jaakko Pyry Consulting has also prepared an existing forest valuation that includes the revenues and costs of re-establishing and maintaining the plantation forests for a 50- year period (perpetual valuation). However, to date Sino-Forest only has an option to lease the land under the purchased trees for future rotatio ns, the terms of which have yet to be agreed. Sino-Forest is embarking on a 200 000 ha expansion of its estate in Heyuan City. Jaakko Pyry Consulting has determined the valuation of the Sino-Forest forest assets based on a perpetual rotation (including the planned expansion in Heyuan City) using a real pre-tax discount rate of 12% to be USD773.8 million as at 31 December 2004. The following table presents the results of the valuation of the Sino-Forest estate. The results are shown at real discount rates of 11%, 12% and 13% applied to real pre-tax cash flows.Table 10-1: USD Valuation as at 31 December 2004Forest Component Existing forest, current rotation only Existing forest, all rotations and 200 000 ha expansion in Heyuan City Real Discount Rate Applied to Pre-tax Cash Flows 11% 12% 13% USD million 585.263 565.598 546.929 845.804 773.832 712.858
10.1
Merchantable Volume Table 10-2 outlines the merchantable standing volume of the existing Sino-Forest plantations. Merchantable standing volume has been calculated from the planted areas that are at least four full years of age as at 31 December 2004. Thus 14 926.5 hectares aged less than 4 years are not included.
38A04520 44
38A04520 45
11
RISKS AND SENSITIVITY ANALYSIS The wood supply and average delivered wood costs presented in this report are based on a number of assumptions and are at risk should any of these assumptions fail to be achieved in practice.
11.1
Sensitivity Analysis A sensitivity analysis has been conducted that addresses the main drivers of value within the current rotation valuation model. These are: Discount rate and log price changes (in combination) Changes in the level of fixed overhead costs Changes in the costs of production (logging and loading, transport etc).
Table 11-1: USD Current Rotation Valuation Only Log Price SensitivityScenario 1.5% Real Price Increase 0.5% Real Price Increase No Real Price Increase (Base) 1.0% Real Price Decrease Real Discount Rate Applied to Pre-tax Cash Flows 11% 12% 13% Current Rotation Value (USD million) 625.335 603.864 583.492 .. 598.428 578.170 558.942 585.263 565.598 546.929 559.501 540.992 523.413
Note: The period of real compounding price growth starts in 2007 and runs for five years to 2011. Prices are then held flat.
Table 11-2: USD Current Rotation Valuation Only Overhead Cost SensitivityScenario USD27.5 fixed cost per ha per year USD18.3 fixed cost per ha per year USD9.2 fixed cost per ha per year Real Discount Rate Applied to Pre-tax Cash Fl ows 11% 12% 13% Current Rotation Value (USD million) 579.082 559.534 540.977 585.263 565.598 546.929 591.367 571.587 552.806
Table 11-3: USD Current Rotation Valuation Only Harvest Cost SensitivityScenario 12% Harvest Cost Increase Base Harvest Cost 12% Harvest Cost Decrease Real Discount Rate Applied to Pre-tax Cash Flows 11% 12% 13% Current Rotation Value (USD million) 561.222 542.405 524.538 585.263 565.598 546.929 609.304 588.791 569.319
38A04520 46
12 and 31 December 2004. Inclusion recent forest acquisitions
Appendix 1
INTRODUCTION JP Management Consulting (Asia-Pacific) Ltd (Jaakko Pyry Consulting) has been requested by Sino-Wood Limited (Sino-Wood) to update its November 2003 valuation of Sino-Woods forest assets in China. Sino-Woods forest assets span the following provinces; Fujian, Guangdong, Guangxi and Jiangxi. Given the timeframe and resources with which the va luation exercise is constrained, it is not feasible to visit all of the regions within which Sino-Wood has its forest assets. It is Jaakko Pyry Consultings intention to visit a different area of the resource as part of its annual valuation exercises. Heyuan City has been selected for the field inspection associated with this years valuation exercise on the basis that it is the area where Sino-Woods plantation expansion initiatives have been focused. Almost 45 000 ha of plantation area has been purchased or secured in Heyuan City since Jaakko Pyry Consultings November 2003 valuation exercise. The field visit to Heyuan City focused on: 1. A qualitative review of a sample of the area purchased subsequent to the November 2003 valuation. 2. A qualitative review of a sample of the area currently being harvested and prepared for planting in the spring of (February April) 2005. 3. Gaining a comprehensive understanding of the business environment in Heyuan City. The following narrative and photo essay is intended to provide current and potential investors in Sino-Wood with an understanding of the business and its environment in Heyuan City.
HEYUAN CITY FIELD INSPECTION AND SITE VISIT The business model employed by Sino-Forest Heyuan City Co. Ltd. (Sino -Forest) is based upon securing the lease of village land currently occupied by either young pine forest or under-performing pine forest or mixed pine/hardwood forest and converting this land to Eucalyptus plantations. In order to secure favourable lease terms, hill country is sought to avoid direct competition for land with agricultural activities. Typically, agricultural activities are conducted on the valley floor and are a more profitable land use on these sites than commercial forestry. Sino-Forests criteria for land selection is summarised as: Individual forest size > 500 mu (33 ha) and > 3000 mu (200 ha) in any one town Altitude < 400 masl Depth of soil > 80 cm Slope < 30 Gravel component < 25%
The lease terms that Sino-Forest has secured in Heyuan City are for 30 years subsequent to harvest of the existing forest at RMB15/mu/annum (USD27.40/ha/year).Photo 2-1: Poorly stocked pine forest (Masson pine (Pinus massoniana and slash pine - Pinus elliottii) on the hills adjoining Loudong Village, Longwo Town, Zijin County. The area leased from Loudong Village is 235 ha. GPS Waypoint 27
Appendix 1Photo 2-2: A higher yielding area of slash pine at Loudong Village - GPS Waypoint 27
Evidence of past forest fires occurs throughout Heyuan City. The cause of fires is generally identified as farmer land clearing and disposal of agricultural residues.Photo 2-3: Pine forest after an unplanned fire event on the hills adjoining Zeng Keng Forest Farm, Zhong Ba Town, Zijin County. The fire occurred subsequent to the purchase of the standing timber by Sino-Forest. The fire has not unduly compromised the value of the standing timber and harvest operations are underway in conjunction with preparation of the land for planting in the spring of 2005.
Fire is a well identified plantation forest risk in many regions of the world and represents a risk to Sino-Forests forest development initiatives in Heyuan City. This risk is mitigated by Sino-Forests fire insurance cover and the fact that the forest areas are typically discrete blocks of less than 500 ha rather than extensive3
Appendix 1areas of contiguous forest.. It is Jaakko Pyry Consultings opinion that land preparation techniques which avoid the broad scale use of fire should be developed in order to: Reduce the risk of environmental damage. Conserve soil organic matter and fertility.
Appropriate techniques may include manual weeding and the application of herbicides by spot spraying. While these initiatives may be operationally more expensive in the short term, world wide they have been found to be cost effective over the long-term. The sequence of establishment events currently employed by Sino-Forest in Heyuan City is: 1. Develop road network and harvest existing standing timber. 2. Isolate a forest area by establishing fire breaks. 3. Reduce residues using fire. Sino-Forests specification for this operation states that the correct permissions should be obtained from the local Forest Bureau and that the height of residues should not exceed 12 cm. 4. Determine and mark out the spacing for planting. 5. Dig the planting hole such that its dimensions are 40 cm x 40 cm x 40 cm. The roading network is formed to allow for the harvest and transportation of the existing standing timber. Sale of the existing crop is either by stumpage sale whereby the purchaser buys just the standing timber or as delivered log sales where Sino-Forest directly manages harvest operations. Logs are generally sold into the existing local market.
Appendix 1Photo 2-4: Road construction underway, Xiawei Village, Dengta Town, Dongyuan County. Felling operations are complete and stems are ready to be carrried to the road for transport to the market. Road construction is contracted out at 7 to 8RMB/m (USD975/km). Roading plans allow for 3 - 4 m/mu (19.2 ha/km).
Appendix 1Photo 2-5: Mixed pine/hardwood enroute to Asia Dekor MDF in Heyuan City from Xiawei Village
Asia Dekor MDF started operations in June 2004 and has an output of 200 000 m3 /annum and a fibre demand of some 400 000 m3 /annum. From conversations with the Production Manger at Asia Dekor it was apparent that fibre supply is tight. In Zijin County Dong Mei Limiteds MDF processing development started production of board in October 2004. The facility has an output capacity of 50 000 m3 /annum and a fibre demand of approximately 100 000 m3 /annum. The following montage illustrates the log yard and feedstock employed at Asia Dekor MDF Heyuan City.
Appendix 1Photo 2-6: Asia Dekor MDF Heyuan City
Appendix 1Other markets consist of a plethora of small veneer mills along with sawmills. These mills are typically small and employ a labour intensive process. Wage rates are approximately RMB40 to RMB50/day (USD5 to USD6/day).Photo 2-7: Veneer mill Heyuan City. Logs are sawn into 500 mm bolts and peeled on lathe. The small bolt length allows the acceptance of logs with quite severe sweep characteristics. The sheets of veneer are dried in the sun and bundled up for sale to a laminating plant as core veneer. The peeler cores illustrated are squared up and finger-jointed and employed in the construction of sewing desks.
Photo 2-8: Sawmill Heyuan City. The sawmill illustrated is producing components for industrial packaging applications. It is estimated that between 300 and 500 such sawmills operate in the greater Heyuan City area with individual log intakes in the region of 3 1 000 to 3 000 m per annum. Log prices paid by such mills for their feedstock are 3 reportedly RMB380 to RMB400/m . The material illustrated in the top left corner is utilised as floor props in the building of multi-storey concrete slab construction techniques. This material fetches approximately RMB5 to RMB7/piece.
Subsequent to the harvest of the existing standing timber, fire breaks are developed and the planting area is burnt to reduce harvest residues, improve access and control weed species. The spacing for the digging of plantings holes is measured out and holes are dug manually.
Appendix 1Photo 2-9: Hole digging in progress (Lizui Town). The area being established at this location in the coming planting season is approximately 600 ha with an altitude of up to 350 masl . Trials of Eucalyptus dunni are proposed for a proportion of this site. E. dunni is being trialled for its suitability and cold tolerance.
Photo 2-10: Hole digging well under way at Shun Tian Village, Dong Wuan County. Holes are dug manually to a dimension of 40 x 40 x 40 cm. A worker can dig approximately 50 holes/day and earn RMB40 to RMB50/day. The extensive nature of the cultivation illustrated is common practice in the establishment of plantation forests in China.
Appendix 1Sino-Forests Heyuan operation intends to establish 15 000 ha of Eucalyptus spp. plantation this coming planting season. This is a very significant planting programme. Three nurseries are preparing some 25 million seedlings for this programme. The operation of two of these nurseries is contracted out. The other is staffed by Sino-Forest personnel and has been in operation since 1996.Photo 2-11: Bai Pu Village nursery is owned and operated by Sino-Forest, with recent expansions it has the capacity to produce some 10 million seedlings. The following photographs illustrate workers potting out mother trees, established mother trees from which cuting are taken and recent expansions to the nursery.
Appendix 1Photo 2-12: Long Mu Town nursery is operated by contractors and has the capacity to produce 20 million seedlings. It is anticiapated that it will produce 10 million seedlings for the coming planting season, 2 million of which will be Eucalyptus dunni. The following photograhs illustrate the extent of the nursery along with workers filling planting bags and the granular fertiliser (NPK) that is applied to each planting hole in the field prior to planting the seedling. This operation will utilise 10 000 tonnes of fertiliser.
Following the planting of the seedlings in the field the sequence of events to complete establishment operations is: Terrace the planting rows Apply a slow release fertiliser to the terrace between seedling Ongoing weeding
Planting operations are conducted over the period February to April and as such the operations described above had not yet commenced and were not viewed by Jaakko Pyry Consulting. On previous visits to Heyuan City Jaakko Pyry Consulting has observed the success of Sino-Forests previous establishment operations in the region. It is Jaakko Pyry Consultings opinion that the establishment programme currently being embarked on will be successful, provided the site selection criteria, plantation establishment procedures and maintenance procedures detailed in SinoForests operations manuals are followed.
Appendix 2
1 1.1
INDUSTRY OVERVIEW Forest Resources China presently has 160 million hectares of forest, covering 16.5% of the countrys total land area. Some 110 million hectares (69%) are considered to be natural forest and 50 million hectares (31%) are plantations. A total of 24 million hectares of the plantation areas can be classified as forest plantations that were established for the purpose of providing a future industrial fibre supply. The main natural softwood forest species are larch, spruce, fir, hemlock and pine and these are located in the northeast, northwest and southwest of China. The main natural hardwood forests comprise of oak, birch, ash and poplar. Natural hardwood forests are widely distributed in all regions of China. Of the 24 million hectares of forestry plantations only 5 million hectares have been managed sufficiently well such that they will be commercially productive. Traditionally the main plantation species are Chinese fir and Masson pine (63%) as well as other southern pines and larch, however new plantations increasingly consist of fast growing poplar and eucalyptus species. In addition to commercial tree plantations, China has an appreciable wood resource of poplars and other hardwoods grown on agricultural land as protection belts and under agro-forestry intercropping systems.Figure 1-1: Composition of Chinas Forest Resources5.0 19.0
26.0
110.0
The supply of large diameter logs from natural softwood forests has declined sharply since 1998 because of government imposed logging bans. The bans were triggered after disastrous flooding of the Yangtze River and other major rivers.1
China can be broken into four key fibre baskets as presented in Figure 1-2. SinoWoods plantation operations are currently confined to southern China and are based around short rotation and high yielding eucalyptus and poplar species.Figure 1-2: Key China Fibre BasketsLarch
Spruce
Basket 1Fibre basket 1. Larch and Birch
Basket 2Fibre type Softwood Hardwood Fibre basket 3 Conifers (pines and fir)
Basket 3 Masson pine Massonpine Chinese fir Chinese fir Southern pines Southern pinesHardwood & Hardwood & Rainforest Rainforest
Basket 4
Key fibre types relevant to Sino-Woods operations include: Eucalyptus species (hardwood) concentrated in the south and, to a lesser extent, the southeast. These offer the best opportunity for high-yielding fibre plantations on short rotations. Poplar species (hardwood) also present opportunities for fast growing chipwood and sawlog production, particularly in areas prone to flooding. Poplar is a common four-sides species throughout China. Four-sides refer to farmer plantings alongside roads, canals, rivers and farms. In fibre baskets such as the east and north, four-sides plantings is a major source of wood supply (i.e. 30% in the north and 9% in the east). In other fibre baskets foursides contributes between 1% and 5% of the wood supply. Due to the strip nature of four-sides planting, a one-hectare equivalent of trees can be spread along 2 to 3 kilometres in distance. Secondary forests of indigenous hardwoods are an available fibre resource for the pulp and paper, and wood panel industries. These forests occur in the northeast, southeast, and southern China. Production from this resource is declining as areas are increasing logged or protected. China has an appreciable estate of small-diameter plantation softwood. Opportunities to utilise this resource are available in the northeast (larch), the2
southeast (Masson pine, loblolly pine, slash pine), and southern China (Masson pine, slash pine, Caribbean pine). There is also a large Chinese fir resource of around 8.7 million hectares, with the capacity to produce around 17 million m per year. The cultivation of Chinese fir is generally straightforward and successful. It yields lightweight, small-diameter, durable logs and is a traditional construction material in rural areas of southeast China. However, it has little export potential and it is inefficient as an industrial fibre because of low wood density characteristics and high costs of production. Chinese fir is facing a stagnating and declining market for rural building poles, and existing areas of plantation Chinese fir are frequently being converted to fast growing hardwood plantations upon harvesting. 1.1.1 Domestic Fibre Supply China shows a trend of increasingly replacing domestic wood with imported wood and imported wood products. Domestic wood production peaked during the period 1995 to 1997. In 1998 domestic wood supply decreased due to the Chinese governments restrictions on timber harvest from forests in the northeast and key river catchment provinces such as Sichuan. The reduction in wood supply from these regions is now being met through imports of wood and wood products and increased supply from fast growing plantations primarily located in southern China. Currently the Chinese authorities recognise an annual forest depletion/consumption of around 350 million m3 per annum. A breakdown of this domestic forest consumption is presented in Figure 1-3.Figure 1-3: Domestic Wood Supply and Consumption within China in 2003Other 13% Natural Loss 3% Industrial 35%
123 million m3
23.9
2.0
The volume of industrial logs produced domestically within China is estimated to be around 105 million m3 per year. 1.1.2 Future Supply Forecasts The gap between domestic wood supply and the future demand for woodchips, pulplogs and solid wood sawlogs is forecast to increase. This will contribute to strong ongoing demand for domestic pulpwood and sawlogs. Wood-based panel production in China has grown at around 15% per annum over the past few years, with MDF production increasing by 35% per annum over the past decade. Consensus forecasts estimate Chinas gross domestic product (GDP) will continue to grow by between 7 and 8% per annum during the next decade. It is likely that China will remain internationally competitive with respect to both labour costs and availability, and therefore should be able to maintain a strong GDP growth profile. Based on continued GDP and population growth, Chinas consumption of all paper and paperboard grades is forecast to exceed 60 million tons by 2010. Currently, development of Chinas pulp industry is lagging behind that of the paper industry due to an uncertain fibre supply and a lack of modern world-scale pulpmills. However as the paper demand increases and new fast growing plantations are established, expansion of the wood pulp producing industry is increasingly likely. Domestic furniture production and international furniture exports from China have been growing rapidly over the last decade. Improvements in living standards as well as significant increases in the construction industry should further support medium term furniture demand. Domestic furniture production has increased significantly at around 16.5% per annum between 1995 and 2003. Growth is forecast to slow but should remain strong at about 11% per annum from 2005 to 2010. Interior decoration, especially for residential houses, has been one of the fastest growing wood-panel related industries in China over the last ten years (growth has been estimated at about 20% per annum over this period). Improving living standards and significant developments in the construction industry will continue to support demand through 2010. China has set a living space target of 23-25m2 per capita for urban housing and over 25 m2 per capita for rural housing by 2010. Accordingly, demand for wood based products should continue to be strong. 1.2 Relevant Regulations and Government Policy The Chinese central government has recently released a forestry development blueprint1 that identifies target levels of national forest cover. The target for national forest cover in 2020 is 23% of the total land area, and the target for forest cover in 2050 is 26% of the total land area. Currently Chinas national forest coverage is 16.6%. This equates to 62.5 million hectares of new forest establishment by 2020 and a further 29 million hectares by 2050. Given that the total area under cultivation with conventional agronomic crops is about 120 million hectares, then these targets appear unrealistic in terms of commercial plantation forestry. The current plantation forest area is only around 24 million hectares; of which only 5 million1
hectares have been managed sufficiently well that they will be commercially productive. Currently the central government of China is seeking to promote expansion in the domestic pulp and paper industry. On 7 February 2001 the Chinese State Development and Planning Commission (SDPC), Ministry of Finance and State Forestry Administration (SFA) jointly issued new policy initiatives to encourage paper companies to develop their own fibre base. However, while the central government has set ambitious targets for increasing the area and quality of fast growing plantations (including the promise to financially assist approved schemes) bureaucratic, financial and practical problems have stalled these initiatives to date. 1.2.1 Impact of Current Policy and Regulations on Sino-Wood The impact of the new central government policy on Sino-Wood is generally positive. Sino-Wood has been cited by central government as a good example of how China can increase its plantation forest area on a commercial basis. SinoWood is at the forefront of commercial plantation development in China. The policies now coming from central government make it easier for Sino-Wood to operate and achieve its expansionary goals. However, such regulations are likely to encourage new entrants into commercial forest plantation establishment. Despite the possibility of new entrants, Sino-Woods position in the market will likely remain dominant due to its established land bank (availability of suitable land is a major limitation to rapid plantation forest expansion), and established long-term business relationships with the Forestry Bureaus in the key southern China provinces of Guangdong, Guangxi, Jiangxi and Fujian. 1.2.2 Policy & Regulatory Direction Compared to Other Forest Growing Countries Chinas forest policies are now generally in line with those of other major forest growing countries with two key exceptions. The system of annual allocation of cutting licences. This regulatory requirement is not completely transparent in its application and adds more expense and uncertainty to plantation operations. In most other regional plantation forest growing countries the consent to harvest is granted when the forest is established (unless it is an ecologically sensitive location). The taxes levied by government at harvest. The taxes levied at harvest add significantly to the delivered wood cost and are above levels common in other major plantation forest growing countries. Recent policy papers published by central government, such as the forestry development blueprint, have indicated that both these areas will be reviewed over the next year or so with a view to making them consistent with existing international practices.
1.3
Review of Competitive Landscape The southern China region previously exported around 1.2 million bone dry metric tons2 (BDMT) of hardwood woodchips. This export trade exists almost entirely because of the present low capacity for wood pulp production in China. However, with the completion of APP Chinas million tonne capacity bleached hardwood kraft pulp (BHKP) mill on Hainan Island (requiring a fibre supply in excess of 4 million tonnes of round logs at full capacity) and other mill developments being studied the region is about to become a fibre deficit region. In addition, a number of new panel mills were constructed in Guangdong, Guangxi and Jiangxi in 2004. Further mills are planned for 2005 and will significantly increase the demand for woodchips. While some of these new mills are planning to develop their own plantation resources none have so far come close to establishing plantation forests of sufficient area to provide the necessary fibre supply. Indeed, Jaakko Pyry Consulting views the entry of potential plantation establishing and consuming companies as a positive scenario for Sino-Wood. These companies signal plans for large-scale investment in wood processing technology and should be viewed as consumers of Sino-Wood plantation fibre as opposed to market competitors.
1.3.1
Sino-Wood's Position in China Sino-Wood currently has little direct commercial plantation forest competition as other large-scale plantation forest owners are either: Processors (or potential processors) such as APP who plan to directly consume all the fibre they produce, or Government Forestry Bureaus that are not set up as commercial enterprises and supply the market on an ad hoc basis.
1.3.2
Other Plantation Projects in Southern China Key competitors in growing hardwood plantations for the local and export market include: Guangdong province has the largest area of existing eucalyptus plantations (650 000 hectares). Most of these plantations are slow growing and poorly managed under the Forestry Bureau. The Leizhou Forestry Bureau in Southern Guangdong manages approximately 35 000 hectares of well run fast growing eucalyptus plantations and supplies both local and export markets. Guangdong Petro-Trade is the leading woodchip exporter in China. The company is an international trade company under Guangdong Petro. In
An indicative conversion from Bone Dry Metric Tons (BDMT) to green metric tons is 2.1. Thus 1.2 million BDMT equals approximately 2.5 million green metric tons.6
Guangdong providence, Guangdong Petro-Trade mainly collects woodchips from plants that do not have international trading rights. Asia Pulp and Paper (APP) has 95 000 hectares of hardwood plantations in Hainan, Guangdong and Guangxi. These plantations have been established to supply planned pulpmills in Hainan Island and Guangxi Province. In addition to the APP plantations on Hainan Island there are approximately 350 000 hectares of eucalyptus plantations on Hainan Island. Most of this resource is managed by the Hainan Forestry Bureau, which sells into the local market and is also a major hardwood woodchip exporter (200 000 BDMT per year). APP has signed wood supply contracts with the Hainan Forestry Bureau to supply to its proposed pulpmill. Gaofeng in Guangxi Province has 41 000 hectares of mixed species forest (including eucalyptus) and sells into the local and export markets. In addition to Gaofeng there are approximately 200 000 hectares of eucalyptus plantations in Guangxi under the Forestry Bureau and small private owners. Wood supply is now a critical issue for all wood processing companies in southern China. 1.3.3 Threats from Outside of China The key threat from outside of China is from wood fibre and forest product imports. In particular fibre from Malaysia, Thailand, Vietnam and Indonesia is competitive at current price levels. However this risk is mitigated by the domestic fibre shortages within these countries. In general China is protected from cost competitive imports because of the high cost of international woodchip transport from other exporting regions. 1.4 1.4.1 Fibre Supply and Demand Fibre Imports. The construction of large overseas-owned and modern wood processing factories has generated a high demand for wood fibre supply. These modern industries in China frequently aim for high quality products and an international market to secure good profits. Continuing urban development in China, especially of large multi-storey modern office buildings and hotels, has created a strong demand for7
low quality temporary construction components as well as for high quality veneered hardwood panels and solid wood fittings and furniture. This demand is largely met through wood product imports. China is a significant importer of all forms of wood products, and wood fibre imports have increased rapidly over the past five years due to expanding domestic demand and the lack of available domestic fibre supplies. Figure 1-4 shows the historical trend in Chinas total fibre imports from 1995 to 2004e. This figure is presented in units of roundwood equivalents, which is a measure of the total wood fibre content of all wood fibre based imports such as pulp, paper, lumber, panels and logs.Figure 1-4: China Total Fibre Imports
YearSource: World Trade Atlas
China imported softwood and hardwood logs totalling 15 million m3 and 10.5 million m3 respectively during 2003. Logs from Chinas own plantations are mainly used for pulp, reconstituted panels, pit props, locally made furniture and local construction applications. The main log suppliers to China are Russia, Malaysia and New Zealand (Figure 1-5).
30 25 20 15 Other PNG New Zealand Malaysia Indonesia 10 5 0 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004e Myanmar Gabon Russia
1.4.2
Pulpwood Demand Chinas pulpwood demand has been growing over the past decade and China is now the worlds second largest paper and paperboard market after the USA. The increased use of paper in China is predominantly due to increases in the consumption of cartonboard, copying and printing paper, newsprint and tissue. Further expansion in the wood based panels industries, specifically MDF, will also support pulpwood demand. The market for flooring, furniture and interior design fittings in apartments, government buildings and schools is substantial. This market is increasingly using inexpensive wood panel products, such as blockboard, particleboard and MDF overlaid with veneer. The demand for both hardwood and softwood pulpwood by the pulp and paper industry is expected to also increase. China is a net importer of paper and paperboard despite a growing capacity to produce domestically. Figure 1-6 and Figure 1-7 present Chinas hardwood and softwood pulpwood balance.
Supply from natural forest Supply from plantations Supply from residues Demand from pulp
20
15
0 S 1997Source: Jaakko Pyry Consulting
S 2002
S 2007
S 2012
1.4.3
Sawlog.
Small to medium sized producers continued to dominate the sawn timber industry. There are a vast number of non-industrialised workshop operations although integration has started to occur. The total sawn timber production capacity in China is estimated at around 30 million m3 per annum, spread over severalthousand registered sawmills nationwide. In addition, it is understood there are several thousand unregistered mills that total about 10 million m3 per annum of sawn timber capacity.. Over the next decade it is anticipated that domestic production will continue to remain relatively flat while consumption will increase further in the medium term (Figure 1-8).Figure 1-8: China Total Sawn Wood Production and Consumption
Production 40 Sawn Wood Volume -million m 335 30 25 20 15 10 5 2004e 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2005
YearSource: Jaakko Pyry Consulting
There are approximately 1 000 state-owned sawmills with a production capacity of over 10 000 m3 per annum, and of these about 150 mills (mostly in the northeast) have a capacity of more than 50 000 m3 per annum. Two-thirds of these state-owned mills were built before 1970 and were designed for processing large sawlogs with diameters greater than 40 cm. However, the available log size has fallen considerably and this has caused these mills to become very inefficient, with most operating unprofitably. Moreover, the pattern of demand has changed dramatically since these sawmills were established. Demand11
for hardwood lumber has increased while that for softwood has decreased. This is a result of the primary end-use shifting from construction to interior decoration and furniture. This change has compounded the difficulties of the large scale stateowned sawmills. Figure 1-9 presents both historical and forecast trends in Chinas demand for sawlogs and peeler-logs. Over the next decade demand is expected to continue to grow.Figure 1-9 China Total Sawlog and Peeler Log DemandSaw Log 90 80 70 60 50 40 30 20 10 0 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004e 2005f 2010f3
Private sawmills that are not state owned are proving effective at diversifying and gaining competitive advantages that include flexible management, market orientation and lower labour costs. In addition, many private sawmills have tended to make profits by acquiring low-priced raw materials (illegally harvested logs), on which taxes and duties have not been paid. The sawn timber capacity in northeast China (i.e. Heilongjiang, Jilin and Liaoning) has declined in recent years but it remains the largest timber producing area in China (Figure 1-10).
Northeast South North Region Southeast Southw est Northw est Qinghai-Tibet 0 1 2 3 4 5 63
The existing forest resource is comparatively rich in the northeast and the region also has good access to Russian logs. The main sawn timber capacity in the north is located in Shandong and Hebei. As these provinces do not have sufficient sawlog resources, logs are transported from other domestic regions or imported from Russia. Recently the capacity in southern China has increased significantly and collectively managed forests and plantations are gradually becoming major sources of sawlogs. The southeast, where Shanghai is located, has about 4 million m3 per annum of sawn timber capacity. In addition, a considerable amount of sawn timber produced in other domestic regions and imported from other countries is also processed. In the southwest, the capacity in Yunnan province has increased notably by importing logs from neighbouring countries such as Myanmar. 1.4.4 Demand for Wood Based Panels Chinas total wood based panel production reached 42 million m in 2004, a 20% increase over 2003 production levels (Figure 1-11). Hardwood plywood production dominates wood panel production, while Medium Density Fibreboard (MDF) has shown rapid growth in the late 1990s.
13
Blockboard
Particleboard
Plyw ood
The consumption of wood-based panels in China has shown similar trends to panel production. MDF usage has increased from 6% of the total consumption in 1990 to 34% in 2004. Total wood-based panel consumption was 42.6 million m in 2004 (Figure 1-12).Figure 1-12: Wood Based Panel Consumption in China45 40 Panel Consumption -million m 335 Blockboard 30 25 20 15 Plyw ood 10 5 01990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004e
Fibreboard
14
Demand for total wood-based panels is expected to reach 50 million m by 2010, but plywood usage is expected to decline as the availability of peeler quality sawlogs falls. Medium Density Fibreboard In the late 1990s China became the largest wood based panel consumer and producer in the Asia-Pacific region. The industry is forecast to continue growing strongly. In 2004 the domestic production of medium density fibreboard (MDF) was 13.6 million m3 and is expected to exceed 15 million m3 by 2005. Potentially the MDF capacity in China could expand beyond 15 million m3 by 2010, but concerns over fibre availability are currently curtailing industry plans for additional expansion. The key MDF end uses in China are furniture, joinery and woodwork (68%), laminate flooring (12%) and construction (12%). About two-thirds of all MDF is consumed in the manufacture of household and office furniture, while about 20% of the MDF market is accounted for by furniture exports. MDF consumption in laminate flooring, interior decoration and construction end uses has been growing rapidly. Demand for laminate flooring has been growing because of the high popularity of the product when used in large public buildings such as shopping centres, hotels and office buildings. The Chinese market is relatively well developed in terms of panel grades and types. Around 70% to 80% of all panels manufactured are high quality E2 grade. The main demand drivers for MDF include strong economic growth, improving living standards and lifestyle changes, expansion of the furniture and interior decoration industries and developments in the construction industry. Figure 1-13 illustrates the MDF growth from 1990 onwards.Figure 1-13: MDF Production and Trade18 MDF Production and Export -million m 316 14 12 10 8 6 4 2 0 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004e Import Production Export
The eastern and southern regions are the most important MDF production areas, where the major markets of Shanghai and Guangdong are located. During 2004 there there was around 1.6 million m3 per annum of fibreboard capacity installed (Table 1-1). Similarly, announced MDF projects are estimated to potentially add 1.8 million m3 per annum of new MDF capacity (Table 1-2). Most of these mills have now been completed.Table 1-1: Planned New Fibreboard CapacityCompany Zhejiang Lishu Oak Wood Based Panel Co. Guangxi Gaofeng Wood Based Panel Co. Guangxi Sunway Forest Product Industry Plant II Shandong Dongying Wood Based Panel Plant I Zhejiang Luyan Wood Industry Co. Dare Company Ltd. Asai Dekor (Heyuan) Wood Ltd. Weihua MDF Manufacturing Co. Ltd. Xinchao Group Total New Capacity New Capacity (000 m3 per annum) 120 150 240 200 150 200 200 180 135 1 575 Estimated Start Up (Year) 2004 2004 2004 2004 2004 2004 2004 2004 2004
Chinas MDF industry experienced rapid investment during the 1990s due to enlarging of existing production lines and the construction of new facilities. The majority of mills have Chinese-produced machinery with capacities of 30 000, 50 000 and 80 000 m3 per annum. This is because these lines are substantially cheaper, and cost around 10% to 30% of the price of imported machinery. However, even the most up-to-date locally made lines are generally substantially inferior to the latest designs from Europe and the US. The locally manufactured machines are not capable of producing products such as HDF or thin MDF. Panel quality is also often a problem. Industry scale will continue to increase as larger domestic lines become available and imports of foreign production lines increase. The main limitation to this trend will be ensuring that cost competitive wood fibre is available in sufficient quantities. In some cases, new and larger lines are planned to replace smaller existing lines, which will be decommissioned or relocated.16
Particleboard The domestic production of particleboard in 2004 is estimated as 4.6 million m3. It is Jaakko Pyry Consultings expectation that production will reach 6 million m3 by 2005. As with the MDF industry, resource scarcity will then hinder expansion. Pulpmills will be the major fibre supply competitors for panel producers in China. There are a number of new pulpmills and panel mills either planned or currently under construction and this will create a more competitive market for fibre supply. Chinas particleboard market has shown a strong recovery in recent years, and the total consumption was 5.2 million m3 in 2004 (Figure 1-14). Expansion of domestic production of particleboard may be curtailed to some extent by the scarcity of fibre supply and the higher wood paying capability of world-scale pulpmills.Figure 1-14: Particleboard Production and Trade6 Import 5 Production Export 4
0 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004e
Particleboard is primarily used in the production of domestic furniture, particularly kitchen and office furniture. To date, particleboard applications in construction end uses have been limited because of the current dominance of plywood. However, some particleboard is used in partitions and sheathing applications where it provides a cost effective solution. Plywood Plywood production is estimated at 16.3 million m3 in 2004. During the period 1990 to 2004, production increased at an average annual rate of 24% per annum. In the same period, domestic consumption increased at an average rate of 13% per annum. This rapid increase was chiefly driven by strong economical growth that translated to a strong market demand. In 1999 the elimination of a log import tariff indirectly contributed to a large production increase. China has become a net17
exporter of plywood over the last 10 years, but continues to import small volumes of plywood for specific niche applications (Figure 1-15).Figure 1-15: Plywood Production and Trade20 Plywood Production and Trade -million m 318 16 14 12 10 8 6 4 2 0 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004e Import Production Export
Plywood is mainly used in the interior decoration and furniture segments, and collectively these accounted for 43% of the total plywood consumption in 2001. The major uses in interior decoration are for wall linings, door skins and doorframe overlay. Demand for these applications has been increasing. However, in recent years there has been a trend towards substituting plywood with reconstituted panels and blockboard when used for interior applications. More than 50% of all the plywood consumed is less than 6 mm thick. Uses in the construction segment are expanding due to strong growth in the construction industry overall. 1.4.5 Sawn Timber End Uses Chinas main use for sawn timber is in the packaging and temporary construction segments. The Chinese government is currently undertaking initiatives that will result in housing and building increases of around 200 million m2 per annum. Temporary construction mainly uses local softwood lumber for concrete formwork, scaffolding, floor underlay and other urban construction activities. In rural areas, lumber is often used as beams and rafters for buildings. Although Chinas use of timber in construction has been declining because of substitution for concrete and steel, it is still a major consumer of solid wood-based products. The interior decoration, flooring and furniture segments account for around 47% of the total lumber consumed in China. Interior decoration includes products such as solid wood and 3-ply parquet flooring, doors, window frames and mouldings (Figure 1-16).
18
Building 22%
1.4.6
Furniture Industry There are more than 50 000 small, medium, and large furniture manufacturing enterprises in China. Together these have nearly 3 million employees. The average business has around 70 employees and a company turnover of USD370 000 per annum. There are around 3 000 large-scale companies. While there are many companies that are exclusively Chinese, there are a significant number of joint ventures involving both Chinese and foreign capital. Furniture production and exports in China have been growing rapidly over the last decade. It is Jaakko Pyry Consultings opinion that improving living standards as well as developments in the construction industry will further support furniture demand over the next decade. Furniture manufacture has developed in areas of the country that have shown rapid economic growth such as the coastal areas of China, where the advantages of a large consumer market and a good infrastructure are available. Furniture manufacturing activity is also significant in the province of Guangdong (in the cities Guangzhou and Shenzhen), and in the area around the city of Shanghai. The Guangdong area that specialises in ready-to-assemble and upholstered furniture, accounts for 50% of Chinas total furniture exports. These exports are mainly to the US. Domestic production increased significantly from 1995 to 2004 at an annual average rate of 17% per annum. During the period 2005 to 2010, Jaakko Pyry Consulting forecasts growth to slow but remain strong at around 11% per annum.
19
The total value of furniture exports has increased from USD1.1 billion in 1995 to USD8.8 billion in 2004. It is forecast that the total value of furniture manufacture is expected to exceed USD35 billion by 2005 (Figure 1-17).Figure 1-17: China Furniture Industry Production and Exports
40 Furniture Production and Export -USD billion35 Total Furniture Production 30 25 20 15 10 5 0 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005f Total Furniture Exports
Guangdong is by far the most important furniture production area with the highest contribution to exports. Other important areas for furniture exports are Shandong, Zhejiang and Jiangsu, where the majority of joint ventures constituted with capital from Taiwan and Singapore are located. In the north, which is a base for exports to Japan, the provinces of Beijing, Tianjin and Heilongjiang are also important furniture exporters (Figure 1-18).
Zhejiang 10%
Fujian 8%
Wood based panels accounted for about 30%, with logs and lumber accounting for 69%. Shanghai is the second most important furniture manufacturing area in China and has the largest domestic furniture market. The Beijing-Tianjin region is the third largest furniture manufacturing area although the size is significantly smaller than that of Guangdong or Shanghai. The furniture industry in China underwent considerable industrialisation during the 1990s, and this has underpinned the subsequent increases in exports. Technological developments in the industry have led to the production of a wide range of products and high-quality throughput. Chinas furniture industry has also benefited from an ever-growing number of joint ventures with overseas manufacturers, interested in low labour costs, the high export potential and an expanding domestic market. The goal of the furniture industry, as set out by Chinas Furniture Association, is that Chinas furniture exports will eventually account for 20% of the total world furniture trade. The furniture industry has established an international marketing network of its own and has built business ties with both Taiwan and Hong Kong. 1.4.7 Construction Industry A massive central government housing programme is forecast to keep growth in the construction industry strong. The last six years have seen continued growth in completed floor space (Figure 1-19).
21
1 400 Floor Space Completed -million m 31 200 1 000 800 600 400 200 0 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004e 2005f
The Shanghai, Guangdong and Beijing regions accounted for over 40% of total construction activity calculated in terms of floor space completed during 2004.Figure 1-20: Construction Activity by Region
Others 54%
GuangdongGuangxi-Fujian 11%Source: Jaakko Pyry Consulting
1.4.8
Interior Decoration Industry Interior decoration, especially the decoration of residential houses, has been one of the fastest growing panel consuming industries in China over the last ten years; growth has averaged 20% per annum. Total turnover within the interior decoration22
sector reached USD99 billion in 2004. Similarly to the furniture industry, improving living standards and developments in the construction industry will further accelerate demand. The interior decoration industry currently accounts for 24% and 30% of the total consumption of sawn timber and wood based panels respectively. The repair and remodelling industry is in the early stages of development in China, but is forecast to grow rapidly, further supporting the future demand for interior decoration.Figure 1-21: China Interior Decoration Industry
90 Interior Decoration Expenditure -USD billion80 70 60 50 40 30 20 10 0 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005f Residential Houses Public Buildings
Shanghai, Guangdong and Beijing are the three major contributors to the total turnover of the interior decoration industry in China (Figure 1-22). Currently, several larger scale home improvement material retailers are setting up stores in these key regions.
23
Others 47%
ShanghaiJiangsuZhejiang 29%
GuangdongGuangxi-Fujian 13%Source: Jaakko Pyry Consulting
1.4.9
Woodchip Demand for Pulp Production Prospects for Paper and Paperboard Demand in China Chinas paper industry relies primarily on non-wood fibre. As a result, Chinas capacity to produce wood pulp is low given the size of the countrys paper industry. The present wood pulp producing industry in China is a combination of older small-scale mills and recently installed production lines. All of these mills are state-owned enterprises. China represents a large paper and paperboard market with high potential growth, low labour cost and in some areas solid skilled labour. In 2000 China consumed approximately 37 million tons of paper and paperboard, accounting for 11% of global consumption. Uncoated free sheet and containerboard (kraftliner and medium) comprise close to half of all paper and board demand. Asia Pulp and Paper is by far the biggest operator in China. UPM Kymmene and Stora Enso are both producers with plans to expand. Investments by foreign paper companies in the Chinese paper industry have become a major driver of industry change. Production and consumption of paper products have grown together and China remains a net importer of paper and paperboard (in addition to a major fibre importer). In the long-term China should become self-sufficient in key grades. As capacity increases in large blocks (e.g. 400 000 ton machines), China could soon become an exporter of certain paper grades (e.g. coated paper). Restructuring and consolidation of the domestic paper industry will continue, and world-scale enterprises will emerge. China demand for paper and paperboard is substantial and growing (Figure 1-23).
24
Figure 1-23: Global Demand Growth for Paper and Board by Region
5Demand Growth (% per annum)
Africa
Plans for establishing domestic, world-scale pulpmills have attracted considerable interest, despite a degree of uncertainty relating to the availability and cost of wood for the planned new capacity. The development of hardwood pulpmills in southern China will certainly accelerate plantation development, but part of this new pulp production capacity is expected to be relying on imported wood. Paper Making Fibre Consumption in China The consumption of papermaking fibre in China is growing rapidly. The long-term growth trend is estimated at 4.6% per annum through until 2015. The share of nonwood fibre in the total papermaking fibre consumption is declining and the share of recovered paper is rising (Figure 1-24).Figure 1-24: Papermaking Fibre Consumption in China80 70 Roundwood Equivalent (000 000 m3) 60 50 40 30 20 10 0 1980
Recovered Paper Non-wood Unbleached Kraft Bleached Hardwood Kraft Bleached Softwood Kraft Mechanical & Semi-Chemical
Forecast
1983
1986
1989
1992
1995
1998 Year
2001
2004
2007
2013
25
The shortage of domestic wood supply in China has to a great extent dictated the paper industrys fibre furnish, which is still heavily centred on the use of non-wood fibre such as wheat and rice straw, reeds and bagasse. The share of non-wood fibre in Chinas average papermaking fibre furnish has declined in recent years but is still high when compared to the international benchmark average of 1% in other parts of the world. The use of recycled fibre will increase dramatically from its current level and is expected to account for an increasing share of the total Chinese papermaking fibre furnish. However, even with improved collection, sorting and grading of recovered paper, the domestic collection could not reach the level required to meet the forecast demand of the industry. Increasing quantities of recovered paper need to be procured from offshore sources. Chinas paper industry will become increasingly dependent on imported wood pulp, as evidenced by the completed and on-going investments in wood-free and tissue paper production, particularly in the greater Shanghai region. Pulp projects in Southern China can alleviate the shortage in the medium to long-term, but much will depend on near-term plantation developments within China. Pulpwood Demand for Domestic Pulp Production in China China today is a major exporter of hardwood woodchips. Chinas domestic demand for pulpwood is expected to grow significantly, with a number of planned new pulp and composite panel mills forecast for construction. Pulpwood demand will grow strongly, due both to overall demand growth, and a shift to reconstituted wood-based panels. Much of this growth will be met by direct imports of pulp, however the remainder will be produced from the increasing availability of domestic plantation pulpwood. China is expected to become a net importer of pulpwood and woodchips over the next five years. In terms of pulpwood demand for domestic pulp production, the current consumption is estimated to be 15.6 million m3 and has increased by 5.3% per annum over the last five years. Strong growth in paper consumption as well as the competitive manufacturing cost structure in Asia is attracting paper companies to make new plantation and pulpmill investments in China. Demand for pulpwood from Chinas pulp industry is forecast to reach 19.1 million m3 by 2007 (7.0% per annum growth over 2002 to 2007) and increase further to 24.6 million m3 by 2012 (5.2% per annum growth over 2007 to 2012).
26
2012
Publicly announced plans for future pulp mill projects have concentrated on areas close to existing plantation fibre supplies and coastal areas near to major demand centres (Hainan, Zhanjiang and Guangdong). These mills are likely to rely on a combination of both local and imported wood fibre. Approximately 5 million air-dried tons (ADT) of new pulp production capacity is planned within China in the near-term (Table 1-3).Table 1-3: Planned New Pulpmill Capacity in ChinaCompany Asia Pulp and Paper (Hainan) (1) (4) Asia Pulp and Paper (Dagang) (2) Asia Pulp and Paper (Zheijiang) (2) Asia Pulp and Paper (Guangdong) (2) Ningxia Meili Pulp and Paper (Ningxia)(2) Ark Forestry (Chongqing) (1) China Pack (Jilin) (2) APRIL (Shandong Rizhou) (1) Stora Enso (Nanning) (1) Total New Capacity New Capacity (ADT Pulp) 1.0 million 0.3 million 0.3 million 0.3 million 0.1 million 0.5 million unknown 1.0 million 1.0 million 4.5 million Pulpwood (3) (green tons) 4.5 million 0.7 million 0.7 million 0.7 million 0.3 million 2.0 million 4.5 million 4.5 million 17.9 million Species Hardwood Softwood Softwood Softwood Hardwood Softwood Hardwood Hardwood
(1) BHKP Bleached Hardwood Kraft Pulp production (Hardwood only) (2) Mechanical Pulp production (Hardwood or Softwood, but softwood species are preferred) (3) The conversion from green metric tons to ADT for BHKP is 4.5:1, for Mechanical Pulp it is 2.4:1 (4) Completed and production started late 2004 Source: Jaakko Pyry Consulting
27
1.4.10
Chipwood Operations in China Chip wood exports from China have been decreasing in recent years (see Figure 1-26) as both domestic demand and domestic prices have increased.Figure 1-26: China Chip Wood Exports- 000 BDt 2 000 1 800 1 600 1 400 1 200 1 000 800 600 400 200 0 2000 South North Northeast
Southeast
2002
2003
Chipping operations in China have traditionally been small to medium scale and run by Forest Bureau offshoots and private companies. With the decline in volumes many are now operating at well below capacity. 1.4.11 Log Trading Market in China Currently the log trading market in China is dominated by log traders/wholesalers and large end-users. Forest owners commonly sell standing trees directly to large log traders/wholesalers as well as large end-users. Log traders on-sell logs to the final end users in flexible volumes and specification classes. The log traders have built up wide distribution networks and economies of scale in harvesting and transport logistics. The log traders on-sell logs based on individual end user requirements with price differentiation based on different size classes, species, location etc. As the market continues to develop in China we are seeing evidence of increased price differentiation based around increasingly detailed log specifications, as well as a trend for the larger forest owners to deal directly with end users as the number of large-scale end-users increases. Payment terms are subject to negotiation, but are generally within the 60 to 270 day range. Once a block of standing trees has been sold harvest must generally occur within 3 to 18 months depending on the size of the block. Issues such as payment of harvest taxes are subject to negotiation and are reflected in the final agreed price.
28
1.4.12
Overview of Forest Product Prices in China Roundwood Logs Since about 1990, market forces have largely determined log prices in China, with prices varying according to regions, species and size. Log sales may be conducted by direct negotiation between seller and buyer, by selling from the forest to an agent who on-sells to the consumer, or through large central log markets. Average log prices for official log markets in Shanghai, Guangzhou and Fuzhou for red pine, larch, Chinese fir and Masson pine, between 1991 and 2004 are shown in Figure 1-27 below. Masson Pine and slash pine are the mainstay of the softwood fibre resource in the southeast and southern regions. They have multiple uses, including mining timber, construction poles, sawn timber, plywood logs and pulp logs for pulp and paper, particleboard and fibreboard. Chinese fir generally receives a relatively high price due to its renowned durability as a traditional construction material.Figure 1-27: Nominal Historical Log Prices in ChinaRed Pine Shanghai Larch ShanghaiNominal Log Price -RMB per m 3-
2 500
Radiata Shanghai2 000
1 500
1 000
0 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004e
YearSource: Industry Contacts
Domestic log prices are broadly in line with imported sawlog prices once wharf and transport costs have been added to the C&F delivered price.
29
60
Manzhouli Harbin Hohhot Nanjing Shanghai Xiamen Qingdao Kunming Urumqi Shenzhen Changchun Dalian
0 Mar-03 Mar-04 May-03 May-04 Nov-03 Dec-03 Jul-03 Apr-03 Jun-03 Jan-04 Feb-03 Aug-03 Sep-03 Feb-04 Jan-03 Oct-03 Apr-04
Prices can be seen to be trending upwards especially in 2004. Russian Log Supply After several years of significant growth, imports of Russian logs levelled off at around 14 to 15 million m3 in 2002 and 2003. Estimates for 2004 indicate Russian log supply may have increased to 16.5 million m3. Log supply from Russia is primarily confined to the north, northeast and east of China. Land routes dominate log supply. The cost of transporting logs from the Russian border to the log processing centres means that it is not common for Russian logs to penetrate further south than Shanghai. Russian softwood log prices have recently increased. This may be the result of a drop off in imports from New Zealand due to high freight rates.
30
USD/m
Larch
In the long term, Russian log prices are expected to slowly trend upwards in response to increasing harvesting and transport costs as easily accessed logs become less available and operations move further from existing infrastructure. Sawn Timber Softwood sawn timber prices have remained relatively static in China since about 1999. Prices are forecast to remain stable in the near future, and with growth in demand, some form of real price increase may be possible. Figure 1-30 presents sawn timber prices in China for various regions.Figure 1-30: Nominal Sawn Timber Prices in China2 500
Sawn Timber Price -RMB per m 2 000 1 500 1 000 500 0 1999 2000 2001 2002 2003 2004 Mixed Hardw ood Saw n Shanghai Larch Saw nw ood Harbin Red Pine Saw nw ood BeijingSource: Industry Contacts
Mixed Hardw ood Saw n Guangdong Larch Saw nw ood Hguangdong Red Pine Saw nw ood Harbin
31
MDF and Particleboard Chinas consumption of MDF is expected to increase strongly over the next decade. Demand from the Chinese furniture and construction sectors will be instrumental in driving consumption. This demand will be supported by a variety of factors such as economic growth, increasing incomes, rapid growth of urban population, housing privatisation policy, government policies to expand the average living space, favourable policies encouraging apartment purchase and an expanding furniture export market. Chinese MDF demand will also grow through the development of new end uses and the use of more specialised MDF products. MDF prices are therefore expected to be sustained. A positive demand outlook will require all of the currently installed capacity and the availability of imports (especially after WTO tariff reductions) to feed the projected demand. This scenario will result in improving prices, allowing for an increasing fibre cost, and increases in imports into the China market. Particleboard is more price-competitive than MDF, giving it an advantage over MDF in a price-oriented market. Figure 1-31 shows both MDF and particleboard prices from 1995 to 2004. Prior to 1995, prices for MDF were set nationally. Since price controls were lifted, MDF prices have declined to be more representative of the cost of MDF production within China and are more inline with international MDF prices.Figure 1-31: Nominal MDF and Particleboard Prices in China800 700 Selling Price -USD per m 3600 500 400 300 200 100 0 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004
Due to quality differences between regions, a wide range of particleboard prices exist in the market. Table 1-4 shows particleboard prices for different regions in China in 2003.32
Particleboard Price (USD per m3) Low High 137 150 119 140 120 140
Chinas plywood production has seen significant increase but the industrys future is expected to be difficult. This is due to declining availability of large diameter, high quality peeler logs both domestically and externally outside of China. Russian Far East Softwood Logs Figure 1-32 illustrates the softwood log export price trends from Russia to Japan and China, for the six-year period from January 1998 to December 2003. The China price should be treated as indicative only. It is a yearly average price representing all of the softwood logs imported from Russia into China and is based on the China customs data. It does not reflect the changes in species-mix or grademix over the given period. In comparison, the Japan prices are species and grade specific, reported by Japanese trade journals on a monthly basis. Figure 1-32 implies that the price from Russia to China has been relatively stable, with a slight rise from 2000 at USD57 per m3 to 2003 at USD62 per m3 delivered to the northern Chinese border. The price of Russian logs delivered in southern China will be considerably higher than those in northern China owing to the significant transport distance from northern to southern China.Figure 1-32: Softwood Log Export Price from Russia to Japan Port/Chinese Border130 Nominal Log Price -USD per m 3 CNF120 110 100 90 80 70 60 50 May-98 May-99 May-00 May-01 May-02 May-03 Jan-98 Jan-99 Jan-00 Jan-01 Jan-02 Sep-98 Sep-99 Sep-00 Sep-01 Sep-02 Jan-03 Sep-03
Month
Source: Japanese trade journals (for Japan) and World Trade Atlas (for China)
33
1.4.13
Forest Product Price Forecasts Price forecasts are made using a combination of formal modelling techniques and informed judgement. Many factors affect prices, including the demand and supply balance, exchange rates, pulp prices, financial positions of buyers and sellers, price relativities between woodchips from different sources, and production costs. Woodchip Price Trends During the past 20-30 years, the trend in real pulp prices has been predominantly downward. Figure 1-33 illustrates Chinas FOB hardwood woodchip prices by province for the Japanese and Korean market destinations.Figure 1-33: Nominal FOB Hardwood Woodchip Prices by Province 1992 to 2004eFOB Hardwood Chipwood Price -USD per BDMT160 150 140 130 120 110 100 90 80 70 60 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004e
Historical chipwood and sawlog price series show a price spike around 1994/95. This spike was in response to perceptions in the US that large areas of old-growth forest were about to be protected for conservation purposes (e.g. protection of the spotted owl habitat), at a time when it was recognised that supply from SE Asia was diminishing due to historical over-cutting and proposed log export bans on tropical logs from major exporting countries. In addition inventory stocks had been allowed to reach very low levels in South Korea despite strong forward orders. The combined effect was a significant price spike felt through the Asia-Pacific region. When the predicted log supply shortages did not eventuate prices quickly fell. This historical price trend has been influenced by the following factors: Generally sufficient supply of main production inputs such as wood Technological developments - advances in material-saving technologies and production efficiencies34
The strength of the US dollar in the late 1990s and early 2000s Growing mill capacities Increasing economies of scale. As a consequence, the supply curves for the industries using woodchips have flattened and shifted downward. The flattening supply curves support the view that further cost savings in production and distribution are becoming increasingly difficult to achieve, and that the decline in real prices is gradually levelling off. Scenarios that assume no major discontinuities in economic growth and demand development, consistently developing input costs and constant technological development, suggest relatively steady price development for the long-term. On the other hand, a dramatic drop in demand due to weakening economic performance or substitution, would lead to serious downward pressure on prices, as marginal producers would be forced to exit the industry. It is likely that the real woodchip price will be steady from 2004 onwards. The following price drivers support this view: Economic development Reconstituted wood panel demand development Pulp and paper demand development Technological development Further increases in pulpmill and panel mill size will lead to longer wood transport distances, thus offsetting some of the advantages of economies-ofscale Limited availability of large plantations, requiring part of the new supply to be based on wood delivered over long distances or into areas with high infrastructure costs. Production investment waves following periods of good profitability have led to simultaneous above-average capacity increases or well below-average growth periods (or no growth due to closures). Unfortunately, these supply moves rarely match demand cycles in a harmonious way, rather the contrary. This mismatch accentuates pulp price cycles and increases price volatility. Inventories naturally play a major role. Unsold inventories can be considered as additional supply. Low consumer inventories in a weak market may not be enough to satisfy consumers needs when orders pick up. Speculative purchases or sudden losses of production on the producers side can change the supply/demand ratio very rapidly, and influence price movements. Exchange rate movements could have a significant impact on the projections. A weak dollar facilitates and enlarges dollar denominated price increases and reduces the rate of decline in weak market conditions. A strong dollar makes price increases smaller and more difficult to carry through, causing steeper price falls.35
Based on a thorough analysis of the key factors outlined above, and other influential variables impacting pulpwood prices, Jaakko Pyry Consulting forecasts that woodchip markets will remain steady in the short to medium term. This is supported by positive GDP forecasts and an increasing demand in China and other regional markets. The prognosis is that Chinas woodchip prices are likely to remain stable in the short to medium-term. Sawlog Price Trends Log prices in the key Asian markets of Japan, China and South Korea, follow similar trends. Chinas imported log prices have been steady, supported by strong demand. China has the potential for continued good demand in the future and log prices are expected to remain strong in this market. There is considerable optimism about this market with suggestions that volumes will continue to increase rapidly over the next few years. Some Russian logs are heading to China that had been destined for the ailing Japanese market. Hardwood log prices in China have been steady throughout 2002 to 2004, Figure 1-34.Figure 1-34: Nominal Hardwood Log Prices in Guangzhou and Hangzhou 2002 to 2004250 FOB Hardwood Log Price -USD per m 3-
Luan Logs (Guandzhou market) D Fir Log Length 4m+ (Hangzhou market)
Radiata Pine Log 6m 26cm+ diam. (Guangzhou market) Mar-03 May-03 Mar-04 May-04 Jan-03 Nov-02 Nov-03 Jan-04 Jul-03 Jul-04 Nov-04 Sep-03 Sep-04
Against a backdrop of steady economic growth, Chinese hardwood and lumber prices are expected to remain stable over the next five years. Demand for high quality hardwood plantation logs and sawn timber for use in the furniture and interior decoration industries is expected to increase. During 2005, log prices are expected to be firm, some upward pressure. Over the next five years, it is expected that hardwood log prices in the region and in China will range from being stable to moderately increasing, due to the following factors: Positive economic growth will sustain construction activity and furniture manufacture for the domestic and export markets. Chinas average house size is expanding; hence lumber consumption in flooring is expected to grow.36
Interior decoration is another related segment that is expected to continue expanding in the future. Solid lumber demand will therefore sustain sawlog prices. Declining hardwood resources imply that prices will be maintained. However, increasing acceptance and use of softwood logs presents a possible threat to any significant price growth for hardwood logs. Technological developments in engineered and reconstituted wood products will mean that less costly and a reduced volume of wood material will be used. This may limit significant real price growth for solid wood lumber.
37
Consultant: Dr Alastair Marsden Department of Accounting & Finance School of Business & Economics
AUCKLANDFebruary 2005
University of Auckland
Reports and results from Auckland UniServices Limited should only be used for the purposes for which they were commissioned. If it is proposed to use a report prepared by Auckland UniServices for a different purpose or in a different context from that intended at the time of commissioning the work, then UniServices should be consulted to verify whether the report is being correctly interpreted. In particular it is requested that, where quoted, conclusions given in UniServices reports should be stated in full.
A. BackgroundYou have requested we provide a US dollar denominated estimate of the cost of capital based on the application of the capital asset pricing model for a generic forest investment located in China. The forest is owned by Sino-Forest Corporation (SinoForest).
C. Information SourcesIn preparing this report we have relied on information received from: Data sourced from Bloomberg and the Ratings Agencies; Other articles (where referenced) and sources; and Discussions with yourself. In accordance with the terms of our engagement letter we have not audited or independently verified any of the information sourced or provided to us.
E. Overview of different CAPM models to estimate the cost of equity for developing marketsE.1. Use of Capital Asset Pricing Model (CAPM) in developing markets The risks attributable to any investment can broadly be classified as: Systematic or non-diversifiable risk, e.g., world market risk, macro-economic risks associated with shocks to GNP, interest rates etc; and Non-systematic or unique projects risks. For developing markets these are often one-sided or asymmetric (and primarily of a downside nature). These include political and country risks such as expropriation, war and uncertainty about government, economic or regulatory policies. Under the standard capital asset pricing model (CAPM) risk is measured by the beta of a project or investment. Beta only captures systematic or non-diversifiable risk in the firm or project. Under the CAPM framework unique risks that are not captured by beta must still be reflected in expected cash flows. This means in the presence of asymmetric downside risks, the expected cash flows will be less than the promised or hoped for cash flows that are normally anticipated in markets with stable economies and political stability. We summarize below a number of different CAPM based and other cost of capital models suggested in the academic literature to calculate the cost of capital for investments in developing markets.2 E.2. Global CAPM Under the global CAPM the expected return on equity, Rei, for the company is given by: Rei = Rf Global + i Global * (RM Global Rf Global) This assumes that markets are globally integrated and investors are holding a globally diversified portfolio. In this case, the relevant benchmarks are the global risk-free rate (Rf Global) and the return on the global market portfolio (RM Global). The beta of local asset i (i Global) is measured against the global market portfolio. Country risk is not accounted for in this model since it is assumed to be diversifiable. The term (RM Global Rf Global) represents the global market risk premium. E.3. Local CAPM Under the local CAPM the expected return on equity, Rei, for the company is given by: Rei = Rf Local + i Local * (RM Local Rf Local)
See Pereiro (2001) for an overview of some of the different cost of capital models that may be applied to emerging markets.
When the global market assumptions are invalid, country risk becomes relevant. In particular, if markets are segmented, it may be appropriate to simply apply the local version of CAPM. In the local CAPM, the local risk-free rate (Rf Local) and return on the local market portfolio (RM Local) are used as benchmarks, with the beta of asset i (i Local) measured against the local market portfolio. The local riskfree rate (Rf Local) would incorporate a default risk premium above some global risk-free rate benchmark. E.4. Adjusted Hybrid CAPM The adjusted hybrid CAPM is: Rei = Rf Global + RCountry Risk + Local Global * i Global * (RM Global Rf Global) * (1 R2) where Rf Local = Rf Global + RCountry Risk This hybrid version of the CAPM differs from the Global CAPM in two aspects. First, the country default risk premium (RCountry Risk) is added to the global risk-free rate. Second, the global risk premium of the project, (i Global * (RM Global Rf 2 Global)), is multiplied by the country beta (Local Global) and a factor of (1 R ). The country beta is simply the regression slope of the local market returns against the global market returns. The term R2 is the amount of variance in the return of the local market that is explained by country risk (i.e. the coefficient of determination in the regression of local market returns against country risk). The purpose of the adjustment (1 R2) is to ensure that the inclusion of the country risk premium into the CAPM equation does not double count risk, if part of the country risk is already incorporated in the market risk premium. E.5. Godfrey-Espinosa (1996) Model Under this model the cost of equity capital for a developing market (Re Country) is given by: Re Country = Rf US + RCountry Risk + (Local Equity /US)*0.6*(RM US Rf US) The Godfrey-Espinosa (1996) Model provides a pragmatic approach to estimating the US dollar cost of equity in developing markets. Rf US is the US risk free rate. The beta for a developing market is calculated by the ratio of its equity market volatility to the US market volatility (Local Equity /US). Implicitly this assumes a perfect correlation between the two markets (note that beta is defined as *(Local Equity /US)). Again, to avoid double counting of country risk, the developing market risk premium is assumed to be 60% of the US market risk premium (the later being RM US Rf US). The 60% level is an ad hoc estimate of the (1 R2) factor explained under the Adjusted Hybrid CAPM Model above. The Godfrey-Espinosa Model can be refined to provide a US cost of equity for an individual project in a developing market, as suggested by Lessard (1996). The market risk premium, (Local Equity /US)*0.6*(RM US Rf US), can be multiplied by the beta of a comparable home country project to arrive at the overall equity risk premium of the individual project.
The refined Godfrey-Espinosa model can be regarded as a special case of the more general Adjusted Hybrid CAPM. E.6. Damodaran Models Under Damodarans (2003) model the expected return on equity, Rei, for the company is given by: Rei = Rf US + i US * (RM US Rf US) + RCountry Risk * (Local Equity/Country Bond) Under this model the country equity risk premium is estimated from the product of the country default risk premium (RCountry Risk) and the ratio of local equity market volatility and country bond volatility (Local Equity/Country Bond). The i US is the equity beta for an equivalent or comparable US based project. For individual projects, the Damodaran country risk premium can be incorporated into the cost of equity in three different ways. i) The same country risk premium is assumed for all projects in the country: Rei = Rf US + i US * (RM US Rf US) + (RCountry Risk)* (Local Equity/Country Bond) ii) The country risk premium is adjusted by the equity beta of the project: Rei = Rf US + i US * [RM US Rf US + (RCountry Risk)* (Local Equity/Country Bond)] iii) The country risk premium is adjusted by a lambda coefficient that measures the individual projects exposure to country risk: Rei = Rf US + i US * (RM US Rf US) + i * (RCountry Risk)* (Local Equity/Country Bond) E.7. Estrada Model Estrada (2000) model Under this model the cost of equity capital for an developing market (Re Country) is given by Re Country = Rf US + DRCountry*(RM Global Rf Global) Estrada (2000) proposes that downside risk is a more pertinent risk measure for developing markets. The downside country risk measure (DRCountry) is calculated as the ratio of the local market semi-deviation to that of the global market. The semi-deviation, , is defined with respect to the arithmetic mean return (R) of each market. For instance, the semi-deviation for the local market is defined as:
Local =
i =1
( Ri RM Local ) 2 T
Two factors can lead to a high downside risk measure for an developing market when the distribution of local market returns has a fatter left-tail than that of the global market, and/or when the distribution of local market returns is more skewed to the left relative to the global market.
In a later paper Estrada (2002) proposes a further measure of downside risk where the cost of equity capital is measured as: Re Country = Rf US + i Downside*(RM Global Rf Global) Where: i Downside =
im
2 m
im 2 m
= downside covariance between the asset and the market.3 = markets downside variance of returns.
Estrada (2000, 2002) argues that the downside risk measure explains the crosssection of both market and industry sector returns in developing markets better than the traditional systematic risk (i.e. beta) measure. Unlike the Adjusted Hybrid CAPM, Godfrey-Espinosa and Damodaran models, the Estrada models also have the advantage that it does not depend on the country default risk premium or default spread. Both of the RCountry Risk and RDefault Spread measures could be volatile since they are influenced by short-term political or economic uncertainties. For developing markets the downside risk measure is typically higher than the systematic risk measure under the Global CAPM. Therefore, the Estrada models provide a cost of capital estimate that generally falls between the Global and Local CAPM, given that costs of capital under the Local CAPM for developing markets are usually high. The Global CAPM is valid when all markets are fully integrated, while the Local CAPM is valid when markets are segregated. Hence the cost of capital estimate under the Estrada Models is generally consistent with the notion that developing markets are partially integrated, which is likely under current global markets conditions.4
3 4
See Estrada (2002) for the more technical definition of this term. Akers and Staub (2003) also consider the assumption that timber assets are priced in a fully integrated global market is too strong.
F. Application of suggested approaches to estimate weighted average costs of capital for forestry projects in China.The six models (and sub-models) discussed in the prior section above are used to estimate the cost of capital for the forestry sector in China.F.1. Assumptions
Assumptions for global risk parameters (i.e., a global forest beta, risk free rate, market risk premium, market volatility and inflation rate) are presented in Table 1 below.Table 1: Global risk parameters
Beta of US forestry firms (I US) Risk-free rate - global (Rf Global) Market risk premium Market volatility () Expected inflationF.1.1. Parameter estimates in Table 1 Risk free rate
Yields on 20 to 30 year long-term USD Government bonds (currently trading at circa 4.7%5) are assumed to be a proxy for the global long-term risk free rate.Market risk premium
The MRP can be estimated in a number of ways. These include simple historical averaging of the observed risk premium, forward-looking approaches, the methodology of Siegel (1992) and survey evidence. In respect of historical averaging Dimson, Marsh and Staunton (2003, Table 1) provide estimates of the historical arithmetic MRP for 16 developed countries over the period 1900 2002. The average historical MRP or market excess return relative to long-term bonds varies between 2.9% (Switzerland) and 9.5% (Japan) with an average of 5.9%.6 For Australia Dimson, Marsh and Staunton (2003) estimate the historical MRP was 7.6% between 1900 and 2002. Siegel (1992, 1999) argues that historical US estimates of the MRP have been biased upwards due to unexpectedly high inflation in the latter part of the 20th century.5
Source Federal Reserve Bank of St Louis, January 2005. During January 2005 long-term (20-year) US Treasuries yields averaged around 4.6% - 4.8%. 6 The countries and historical arithmetic mean market risk premia estimates over the period 1900 2002 are: Australia (7.6%), Belgium (3.9%), Canada (5.5%), Denmark (2.7%), France (5.8%), Germany (9.0%), Ireland (4.8%), Italy (7.6%), Japan (9.5%), Netherlands (5.9%), South Africa (6.8%), Spain (3.8%), Sweden (7.2%), Switzerland (2.9%), United Kingdom (5.1%) and the United States (6.4%).
Most forward-looking estimates of the MRP are lower than the historical estimates of the MRP. For example, Fama and French (2002) generate forward-looking estimates for the US standard market risk premium of 2.6%-4.3% over the period 1951-2000. Similarly Claus and Thomas (2001) generate estimates of the MRP for a number of countries with a maximum of 3.0%. Dimson, Marsh and Staunton (2003) also argue a downward adjustment to the measured historical MRP is justified if there has been a long-term change in capital market conditions and investors required rates of return in the future are expected to be lower than in the past. Dimson et al conclude a plausible estimate of the ex-ante arithmetic MRP measured relative to short-term bonds is around 5.0%. Relative to long term bonds the MRP would be circa 4.0%. However, Ibbotson and Chen (2003) argue based on a decomposition of historical equity returns into supply factors of inflation, earnings, dividends, the price to earnings ratio, dividend payout ratio, book value, return on equity and GDP that the forecast arithmetic MRP (relative to longterm bonds) is around 6.0% for the United States.7 Lastly recent survey evidence by Welch (2001) reports an ex-ante MRP of 5.5% for the US. In conclusion we assume the ex-ante US and global MRP to be 5.00%. While this is lower than historical estimates of the domestic or local market risk premium for many developed countries, the assumption of full market integration under a global CAPM should lead to greater diversification of risk and hence lower the forward-looking market risk premium.Beta
As already noted beta is a measure of the systematic risk of a firm (i.e., nondiversifiable risk or that part of the risk of an asset that cannot be diversified away). Beta is a relative risk measure and measures the sensitivity of returns on a stock relative to market returns (e.g., in response to macroeconomic shocks to GDP, interest rates, taxes etc.). The beta of the market is one. Estimation of beta almost invariably involves an element of judgement. Some empirically measured equity and asset betas for Canadian and US forest and paper entities are provided in the table below. The estimates are obtained from Bloomberg and based on Ordinary Least Squares (OLS) regressions using weekly data over the two years ending mid-January 2005. To convert the equity beta to an asset beta we follow Lally (1998) and use the average gearing ratio (based on book value of debt and market value of equity) over the last two financial years. We also use the average corporate tax rate over the last two years.8
A review article by Mehra (2003) on the equity risk premium puzzle also concludes that MRP is likely to be similar to what it has been in the past The equity risk premium puzzle refers to the inability of standard economic models to explain why the MRP has been so high in many developed countries such as the United States. 8 Subject to the constraint that tax rates cannot be negative.
Company International Forest Products Ltd Sino-Forest Corporation Plum Creek Timber Co Potlatch Corp Weyerhaeuser Co Deltic Timber Crown Pacific Partners Rayonier Inc Average
OLS Equity beta 0.23 0.41 1.08 1.24 1.04 1.08 2.51 0.86 1.06
Average Debt to equity 20% 243% 38% 87% 97% 29% 492% 38% 131%
Asset beta 0.20 0.12 0.78 0.80 0.63 0.91 0.42 0.66 0.57
The average asset beta for the sample of Canadian and US forest companies is 0.57. Value line beta estimates sourced from the website of Damodaran (2005) for paper/ forest companies over the years 2000-2004 are as follows.US Betas - Data from Value LineAverage Equity Beta Unlevered Asset Beta
Industry classification Paper/Forest Products Paper/Forest Products Paper/Forest Products Paper/Forest Products Paper/Forest Products
Number of Firms
Tax Rate
40 40 44 48 48 44
Akers and Staub (2003), nevertheless, provide higher asset beta estimates of between 0.67 and 0.76 for US timber assets measured relative to a global market portfolio. The average of Damodarans estimates for US firms and Akers and Staubs estimates is circa 0.60 to 0.65. Assuming an effective average US corporate tax rate of 30% for timber investments and an assumed debt to equity ratio of 20:80 for a forest entity, the equity beta is 0.70 to 0.75.9 We assume the higher equity beta of 0.75 for forestry firms in applying the global CAPM.US market volatility
Our estimate of the annualised US market volatility is 15.5%. This is based on the measured volatility of the Morgan Stanley (MSCI) US market monthly index from November 1994 to January 2005 (15.8%) and the estimate of the US market volatility equal to 15.3% for the period 1986 to 1999 by Cavaglia, Brightman and Aked (2000, Table 1).
US inflation rate
The expected long-term US inflation rate is assumed to be 2.50%.10 This rate is used to deflate the US nominal WACC to calculate the US real WACC. Assumptions specific to the China market are summarised in Table 2.Table 2: Parameters used for China Local Beta (i Local)
0.75 1.06 1.2 0.10 5.6% 0.90% 9.03% 0.38 28% 19% 1.33 1.17 1.39 33.00% 2.50% 20.00%
Global beta (i Global) Sensitivity to country risk premium (i) Beta of country (Local Global) Risk-free rate- local (Rf Local) Country default risk premium (RCountry Risk) Local market risk premium R-square of market return against country risk (R2) Market volatility (Local Equity) Volatility of default spread (Default Spread) Downside risk measure (DRCountry) Downside risk measure (DRForest) Downside beta estimate Corporate tax Debt margin Debt ratioF.1.2. Parameter estimates in Table 2 Local Beta (i Local)
The estimate of the local beta for China is 0.10. This estimate is obtained from regressing monthly returns (in USD over the period 1995 to January 2005) on the Morgan Stanley Emerging Market forestry and paper products index against the Shanghai A Share Market index for China. This empirical estimate assumes that the forestry and paper products index is representative of individual indices in the Chinese market given the lack of data on individual indices. A local beta estimate of 0.1 (i.e., the sensitivity of returns on a domestic share measured relative to domestic market returns) is very low. Most empirical beta estimates for monopoly companies measured relative to their domestic market like regulated electricity, gas and water utilities have asset betas of at least 0.25-0.40 (e.g. see Damodaran, 2005).
Based on the difference between 10-Year Inflation-Indexed Treasury yield of circa 1.7% and a 10Year Treasury Note Yield yielding circa 4.2% as reported by the Federal Reserve Bank of St Louis, January 2005.
In estimating the cost of capital for a Chinese forest entity we therefore assume a local equity beta of 0.75 (equal to the beta estimate under the global CAPM model).Global beta (i Global)
The global equity beta of forest investments is assumed to be 1.06. This is estimated using the beta coefficient from regressing monthly returns (in USD over the period 1995 to January 2005) on the Morgan Stanley Emerging Market forestry and paper products index against the Morgan Stanley All Countries Free index.Sensitivity to country risk premium (i)
The term i is the sensitivity of each project / company to country risk. (Damodaran, 2003). The average value of i is one. Forestry companies that sell on the domestic market are likely to have greater exposure to country risk factors compared to manufacturing companies that export and sell their products on the international market. However, recent evidence by Cavaglia, Brightman and Aked (2000) suggests that with increased worldwide market integration, industry risk factors are growing in relative importance to country risk factors. The study by Cavaglia, Brightman and Aked is, nevertheless, confined to developed markets. Evidence by Harvey (1995) finds returns in developing markets are still likely to be influenced by local (domestic) factors compared to market returns in more developed countries. In this respect we understand the timber in the Chinese forest will be sold almost entirely into the domestic market. Similarly costs to harvest and produce the timber are exposed to Chinese country risk. We therefore assume the value of i for a forest company in a developing market of China is 1.2 (i.e., above average exposure to Chinese country risk).Beta of country (Local Global)
The MSCI All Countries Free index over the period January 1995 to January 2005 is used as the global index to estimate the country beta (relative to the Shanghai A Share Market index for China) of 0.1. This empirical beta estimate is low and should be treated with caution. However, Mishra and OBrien (2003, Table 3) provide a similar estimate (average 0.14 over the period 1998 to 2000) for the global beta estimate of China.Risk-free rate: local (Rf Local) and credit spread of US bonds with same credit ratings
Moodys
Adequate protection parameters BBB+ (outlook positive) Fitch AHigh credit quality Source: Moodys, Standard and Poors and Fitch Ratings websites. Standards & Poor
The average yield or country default risk premium for China is estimated by Damodaran, 2005 to be circa 0.9%. Based on the USD Treasury risk free rate used in this report of 4.7%, then the local risk free rate (Rf Local) is estimated at 5.6%.11Country default risk premium (RDefault Spread)
As noted above for China the estimated country default risk premium is 0.9%.Local market risk premium
We assume an annualised local market risk premium for China is 9.03%. Damodaran (2003) offers an estimate of the local market risk premium for a developing country as: MRP Local = MRP Global * Local Equity /Global In accordance with our estimates for Local Equity (equal to 28% as discussed below) and Global as proxied by the US market (15.5%) this equals 5% * 28% / 15.5% = 9.03%. Our estimate of the local MRP for China is less than Salomons and Grootvelds (2003) empirical estimate of the average historical market risk premium (12.7%) for a number of developing countries in Latin America, Africa, The Middle East and Eastern Europe.12R-square of market return against country risk (R2)
We assume the R-square estimate in the market risk-country risk relationship is 0.38. This is the average R-square estimate for a number of emerging Latin-American markets taken from Pereiro (2001, Table 13).Market volatility (Local Equity) and Volatility of default spread (Default Spread)
The Shanghai A share Market index for China over the period January 1995 to January 2005 for China is used to estimate the respective market volatility (Local 13 Equity). The empirical estimate is 28%. We assume the ratio of the market volatility to the volatility of default spreads (Default Spread) is 1.5.Downside risk measure (DRCountry)
The MSCI All Countries Free index over the period January 1995 to January 2005 is used as the global index to estimate the country betas of China. Together with the country indices for China, this global index is also used to compute the downside risk measures (DRCountry) using ratios of semi-deviations. Over this period the empirical downside risk measure equals 1.33 for China.14
This is similar to current yields on long-term Chinese bonds (maturing Oct 2027). For China Salomons and Grootveld (2003) estimate a mean annualised MRP of 8.0% over the period 1993-2001. Miller and Zhang (2003) also estimate the average equity risk premium between 7.3% and 7.9% over the period 1997 to mid-2002 for the Chinese market. These historical estimates of the MRP for the Chinese market are, nevertheless, based on a relatively short time period. Our estimate of the local MRP for China uses the global MRP based on a longer time series. 13 For the 1998-2000 years Mishra and OBrien (2003, Table 4) report an average volatility estimate for the Chinese market of 20.4% 14 Estrada (2000, Exhibit 3) reports downside risk measures for China of 2.66 measured over the period 1988 - 1998.12
Since the Estrada Model (2000) only provides a generic estimate for a country cost of capital, we modified our downside risk estimate by a factor of (2.02/2.30) in calculating the cost of equity capital. This adjustment is based on Estradas (2001) estimate of the downside risk measure for the Forestry and Paper Products sector in developing markets of 2.02, and the average downside risk estimate for all sectors of 2.30.15 We did not estimate the downside risk measure for the forestry sector within China directly due to the lack of country specific data. Under the Estrada (2002) model we assume the country downside beta is 1.39 for China. These downside beta estimates for each county with respect to the world market are drawn from Estrada (2002, Table 4).Corporate tax rate
Based on information provided by Jaakko Pyry and discussions with you we assume the corporate tax rate is 33% in China. We note, however, that the presence of tax concessions in the Chinese market may lower the effective corporate tax rate. A lower effective corporate tax rate would raise our post-tax WACC estimate. The forest value may still, however, be greater due to higher expected after-corporate tax cashflows. The presence of tax holidays and tax losses that can be carried forward potentially introduce considerable complexity into capital budgeting. A discussion of capital budgeting (and cost of capital) under time varying tax rates is outside the scope of this report.16Debt margin and debt ratio
We do not have detailed information to accurately determine a debt margin for a forest project in the developing market of China. For the WACC calculated using the global CAPM we assume a debt margin of 2.5% over the US Government bond rate (Rf Global). For the WACC determined under all the other models we also assume a debt margin over the local risk free rate (Rf Local) of 2.5%. A debt margin of 2.5% is similar to the debt margin in respect of USD bonds issued by Sino-Forest Corporation (currently yielding 7.0% - 7.2%) over and above long term USD treasuries (yielding circa 4.6%-4.8%). To calculate the WACC we assume a debt to equity ratio of 0.20:0.80. Chen (2003) reports that Chinese companies have low levels of long-term debt compared to companies in more developed markets.F.2. Economic data
You have requested us not to provide any commentary on the economic outlook or political developments in China. We understand this may be covered in a separate report prepared by Jaakko Pyry. In Appendix I we graph the stock market performance for the Shanghai A market index in China and also the MSCI Emerging markets index for paper and forestry products.
15 16
See Exhibit 3, Estrada (2001). See Cheung and Marsden (2002) for a discussion of some of the complexities of capital budgeting in the presence of initial tax losses.
G. ResultsThe estimated cost of equity capital and weighted average cost of capital (WACC) denominated in USD (both nominal and real) under each of the six models and submodels are summarised in Table 3. These estimates are all post-corporate tax. The adjusted hybrid CAPM produces the lowest real post-corporate tax WACC estimate of 3.2%. However this is based on an empirical estimate for the Chinese country beta of 0.1 and the resulting WACC is below the WACC under the Global CAPM (5.1%), that implicitly assumes fully integrated markets and low risk exposure for any given asset. Hence we place low weight on the WACC estimate under the adjusted hybrid CAPM model. The real WACC estimate under the Local CAPM is 8.3%. The Local CAPM model implicitly assume segregated markets and hence a higher risk exposure for the same given asset. The estimates under the Godfrey-Espinosa Model and the Estrada downside risk models lie between the cost of capital estimates assuming market segmentation and full market integration. The average real post-corporate tax WACC for all models is 6.3%. Excluding the adjusted CAPM the average real post-corporate tax WACC is 6.7%.
16
Table 3: Weighted Average Cost of Capital Estimates from 6 Different Models for Forestry Firms in China China Results (post-corporate tax) 1. Global CAPM 2. Local CAPM 3. Adjusted Hybrid CAPM 4. Godfrey-Espinosa Model a. Refined Godfrey-Espinosa 5. Damodaran Models a. Same risk premium b. Beta adjusted premium c. Lambda adjusted premium 6. Estrada Models a. 2000 model b. 2002 model Average Average (excluding adjusted hybrid CAPM)Re WACC WACC (real)
8.5% 12.4% 5.9% 11.0% 9.6% 9.8% 9.5% 10.1% 10.5% 11.7% 9.9% 10.3%
7.7% 11.0% 5.8% 9.9% 8.8% 8.9% 8.7% 9.1% 9.5% 10.4% 9.0% 9.3%
5.1% 8.3% 3.2% 7.2% 6.1% 6.3% 6.0% 6.5% 6.8% 7.7% 6.3% 6.7%
17
H. SummaryThe range of the real after corporate tax WACCs based on the models in Table 3 are summarised in Table 4 below. As already discussed we exclude the estimate under the adjusted hybrid CAPM model.Table 4: Summary of real post-tax cost of capital (WACC) estimatesCountry Lower bound estimate (including adjusted hybrid CAPM) Average estimate (including adjusted hybrid CAPM) Upper bound estimate (including adjusted hybrid CAPM)
China
5.1%
6.7%
8.3%
There is no easy or simple method to transform a nominal post-tax WACC to a real pre-tax WACC. In this respect formal modelling of the entitys cashflows is required to determine an equivalent pre-tax WACC. However, to an approximation we assume:17
Pr e tax WACC =
Where tc = corporate tax rate. Based on this transformation our indicative estimate of the real pre-tax WACC (denominated in USD) is between 7.6% and 12.4% as follows:Table 5: Summary of real pre-tax cost of capital (WACC) estimatesCountry Lower bound estimate (including adjusted hybrid CAPM) Average estimate (including adjusted hybrid CAPM) Upper bound estimate (including adjusted hybrid CAPM)
7.6%
10.0%
12.4%
In the case of forests where the timber is not expected to be harvested until some relatively long time in the future, this transformation may, nevertheless, overstate the equivalent pre-tax WACC.
J. ConclusionIn conclusion we consider a real post-corporate tax weighted average cost of capital (denominated in USD) for a China forest entity will be in the likely range of between 5.1% and 8.3%. The corresponding real pre-corporate tax weighted average cost of capital (denominated in USD) is between 7.6% and 12.4%. To the extent that the Chinese market is still not fully integrated into domestic capital markets our view is that a cost of capital estimate towards the upper end of our range is more appropriate for a forest entity where all timber is sold domestically.19 For a post-tax real WACC this would suggest a range between 6.5% and 8.3%. The approximate equivalent pre-tax real WACC would be between 9.7% and 12.4%. Our cost of capital estimates are all denominated in USD. Our assumptions are derived under the CAPM models only. We have noted the shortcomings of these models and recommend (to the extent such evidence is available) our estimates be compared to implied discount rates based on transactional evidence for actual forest sales in the Chinese market. We also note the Chinese legal, institutional and bankruptcy laws differ to Western capital markets. This may warrant an adjustment to the cashflow expectations from the forest if investors property rights are not clearly defined.
For a discussion on liquidity premiums for forest investments see Akers and Staub (2003). Pereiro (2001) also reviews liquidity premiums, control premiums and other non-systematic risk factors in the context of emerging markets in Latin America. 19 Stulz (2005) also argues that agency cost issues between sovereign states, corporate insiders and outside investors limit the extent of financial globalization.
Appendix I
Date
150 100 50 0 Oct95 Oct96 Oct97 Oct98 Oct99 Oct00 Oct01 Oct02 Oct03
Appendix IISize, liquidity and other premiums
In our determination of the cost of capital and WACC we have made no adjustment for factors such as size, control or illiquidity premiums and other market frictions. In the case of small-unlisted companies these factors can have a significant impact on company value. There is, however, a wide body of empirical evidence that suggests size is an important factor in explaining returns. For example, Banz (1981) reports that small US firms have higher expected returns than predicted by the CAPM. Similarly Fama and French (1993, 1996) argue that a three-factor model that includes a premium for size better explains the cross-section of average returns compared to a single factor CAPM model. Ibbotson and Associates (2004) report that over the period 1926 to 2003 the arithmetic historical average return for large US stocks was 12.4% compared to the arithmetic historical average return for small US stocks of 17.5% (i.e., a difference of 5.1%). In the presence of market frictions, the projects unsystematic or unique risk may matter. As noted by Malkiel and Xu (2000), wealth constraints and other market restrictions on the type of assets able to be held by investors means many investors are unable to form a diversified portfolio. Factors that may justify investors requiring a return for the entitys total risk (i.e. systematic plus unsystematic or unique risk) include: (i) Risks associated with financial distress if the entity were to fail. These include increased contracting costs with the firms creditors and debt holders and other indirect costs (e.g. lost sales etc). (ii) Reduced financial flexibility. If the new investment fails it may impact on the firms ability to obtain funding for other new investment projects. This can have a number of impacts. Shareholders are reluctant to commit new capital to the firm. Suppliers of debt capital are also reluctant to advance credit to a firm with high total risk or facing the possibility of financial distress. Lastly, the CAPM model implicitly abstracts from liquidity. Liquidity refers to the speed at which an asset can be bought or sold and any price concessions that must be offered to achieve an immediate sale. Investors value asset liquidity and liquidity can be an important factor in explaining asset returns (Amihud and Mendelson, 1986). For long-term forest investments Akers and Staub (2003) also suggest a liquidity adjustment for investments in developing markets.Implied Discount Rate Approach
The standard CAPM approach of estimating an entitys cost of capital reduces investment risks to one dimension systematic or beta risk. As already noted in developing markets where the CAPM perfect market assumptions are less likely to apply, this standard approach is problematic. Our report discusses different approaches that try to add another dimension of risk (country risk) to the discount
rate. The discount rate is used to discount a given set of expected cash flows to value an investment asset. This is commonly known as the Discounted Cash Flow (DCF) approach. The market price of the asset may, however, differ from the DCF value. In cases where the market price is observable, the DCF value may provide guidance to assess the true value of the asset, but it only plays a secondary role in setting the transaction price. For instance, this may apply in the case of frequently traded financial assets such as shares in the stock market. When the current market price is not observable, which is generally the case in the market for real assets such as plantation forests, the DCF value may play a more important role in determining the transaction price. Nevertheless, practitioners often prefer a simple comparison approach where observed market prices of similar assets are used to determine the price of the asset under consideration. The most prevalent example of this comparison approach can be found in the real estate market where a property is usually priced according to recent transactions of similar properties in its neighbourhood. The comparison approach is sensible when the asset under consideration and the comparison assets have a high degree of similarity. Therefore, the comparison approach can be a valuable tool in valuing forestry assets. The key issue is which element or dimension is the most homogeneous amongst forest assets, so that prices can be set accordingly. As an example, the price per hectare of a forest recently sold may be used to set the price of a neighbouring forest today. This approach may be too simplistic since it assumes that trees in the two estates are of similar age profiles and grown in similar terrain and in soils of similar quality. Given that forest estates can often vary significantly along these dimensions, a price per hectare comparison approach may not be suitable in general. An alternative approach, known as the implied discount rate (IDR) approach, assumes that similar forest assets should provide the same yield or rate of return. In other words, the IDR approach uses the rate of return as the homogeneous element amongst similar forests. Under this approach, the observed market price of a forest is used to calculate an implied discount rate under the DCF framework. The same discount rate is then used to value the subject forest, again under the DCF framework. Differences in age profiles of stands, terrain, harvesting costs, etc. are accounted and allowed for in the expected cash flows. The IDR approach differs from the CAPM approach in that the discount rate is not derived from any model. It is simply based on observed market transactions. This approach may work better in the case of developing markets if the transaction based discount rate can capture risk factors that are too hard to model. On the other hand, given that the IDR is purely transaction based, the value placed on the forest under consideration could be biased if the transaction evidence is thin and/or dated. While implied discount rates provide useful benchmarks for the various models we consider in this report, they are not estimated here given that it is outside the scope of this report.
22
ReferencesAkers, K. and R. Staub, 2003, Regional investment allocations in a global timber market, The Journal of Alternative Investments, 73-87. Amihud, Y. and H. Mendelson, 1986, Asset pricing and bid-ask spread, Journal of Financial Economics 17, 223-249. Banz, R,. 1981, The relationship between return and market value of common stocks, Journal of Financial Economics 9, 3-18 Cavaglia, S., Brightman, C. and M. Aked, 2000, The increasing importance of industry factors, Financial Analysts Journal 56, 41-54. Chen, J., 2003, Determinants of capital structure of Chinese listed companies, Journal of Business Research. Cheung, J. and A. Marsden, 2003, Valuation biases in projects with tax losses, JASSA 2, Winter, 15-20. Copeland, T., Koller, T. and J. Murrin, 2000, Valuation: Measuring and managing the value of companies, 3rd edition, McKinsey and Company Inc. Claus, J. and Thomas, J., 2001, Equity premia as low as three percent? Evidence from analysts earnings forecasts for domestic and international stock markets, Journal of Finance 56, 5, 1629-1666. Damodaran, A., 2003, Measuring company exposure to country risk: Theory and practice, Stern School of Business, New York University. Damodaran, A., 2005, Website: Dimson, E., Marsh, M. and Staunton, M., 2003, Global Evidence on the Equity Risk Premium, The Journal of Applied Corporate Finance, Vol. 15, 4, Summer. Estrada, J., 2000, The cost of equity in emerging markets: a downside risk approach, Emerging Markets Quarterly, Fall, 19-30. Estrada, J., 2001, The cost of equity in emerging markets: A downside risk approach (II), Working paper, IESE Business School Estrada, J., 2002, Systematic risk in emerging markets: the D-CAPM, Emerging Markets Review 3, 365-379. Fama, E. and K. French, 1993, Common risk factors in the returns on stocks and bonds, Journal of Financial Economics 33, 3-56. Fama, E. and K. French, 1996, Multifactor explanations of asset pricing anomalies, Journal of Finance 51, 55-84.
Fama, E. and K. French, 2002, The equity premium, Journal of Finance 57, 2, 637659. Godfrey, S. and R. Espinosa, 1996, A practical approach to calculating costs of equity for investments in emerging markets, Journal of Applied Corporate Finance Fall, 8089. Harvey, C., 1995, Predictable risk and returns in emerging markets, Review of Financial Studies 8, 773-816. Ibbotson and Associates, 2004, Risk premia over Time Report: 2004, Ibbotson Associates. Ibbotson, R. and P. Chen, 2003, Long-run stock returns: Participating in the real economy, Financial Analysts Journal Jan/Feb 59, 88-98. James, M. and T. Koller, 2000, Valuation in emerging markets, The McKinsey Quarterly 4, 78-85. Keck, T., Levengood, E. and A. Longfield, 1998, Using discounted cash flow analysis in an international setting: a survey of issues in modelling the cost of capital, Journal of Applied Corporate Finance 11 (3). Lally, M., 1998, Correcting betas for changes in firm and market leverage, Pacific Accounting Review 10, 98-115 Lessard, D., 1996, Incorporating country risk premium in the valuation of offshore projects, Journal of Applied Corporate Finance 9, 52-63. Malkiel, B. and Y. Xu, 2002, Idiosyncratic risk and security returns, Working paper, University of Texas at Dallas Mehra, R., 2003, The equity premium: Why is it a puzzle?, Financial Analysts Journal Jan/Feb 59, 54-69. Miller, I. and J. Zhang, 2003, Estimating risk parameters for water and power utilities in China, Journal of Structured and Project Finance, Vol 8, 4, p6. Mishra, D. and T. OBrien, 2003, Risk and ex-ante cost of equity estimates of emerging market firms, working paper, University of Connecticut. Pereiro, L.,2001, The valuation of closely-held companies in Latin America, Emerging Markets Review 2, 330-370. Siegel, J., 1992, The Equity Premium: Stock and Bond Returns Since 1802, Financial Analysts Journal Jan-Feb, 28-38. Siegel, J., 1999, The Shrinking Equity Premium, Journal of Portfolio Management 26, 1, 10-17. Salomons, R. and H. Grootveld, 2003, The equity risk premium: emerging vs developed markets, Emerging Markets Review 4, 121-144.
Stulz, R., 2005, The limits of financial globalization, Working paper, Ohio State University. Welch, I., 2001, The equity premium consensus forecast revisited, Cowles Foundation Discussion Paper No 1325, Yale.
|
https://ru.scribd.com/document/75045984/Sino-Forest-Poyry-Valuation-Dec-2004-Final
|
CC-MAIN-2019-51
|
refinedweb
| 32,146
| 52.49
|
Having spent some time on the Service Model, I’ll start to look at some monitoring to help us build a Health Model.
I’ll start with what is usually a straight forward requirement – monitoring a windows log e.g. the application log – and generating an alert if a specific event id is detected. I've made it a little more complex by alerting on either of 2 event id's but it is straight forward to simplify the expression filter if required.
And I’ll assume that you have followed by initial MP authoring series (steps 1 to 3) and have created a management pack and configured the basic properties as well as having created a couple of empty folders in the solution explorer view (see below).
We’ll go through the process of right clicking Rules, Add, New Item
And then choose Empty Management Pack Fragment which for ease of identification I’ve called MultipleEvents.
Copy and paste this code into the empty fragment – you can change the following:
Find \ replace gd.myapp with the namespace details of your management pack
If necessary, update the target
Find \ Replace the event ids with the event you are looking to alert against
Find \ Replace EventCreate with the event source of the event you want to alert against
Change (if necessary) the Priority and Severity:
Severity = 2 for Critical, 1 for Warning and 0 for Informational.
Priority = 2 for High, 1 for Medium and 0 for Low.
And you should be ready to go..
|
https://blogs.technet.microsoft.com/manageabilityguys/2015/09/15/scom-alerts-from-windows-events/
|
CC-MAIN-2018-05
|
refinedweb
| 253
| 55.17
|
Hi Stephan > Cc: 'Philipp von Weitershausen'; [EMAIL PROTECTED]; 'Christian > Zagrodnick'; zope3-dev@zope.org > Betreff: Re: AW: [Zope3-dev] skin support for xmlrpc
[...] > > Since the skin directive is gone layer also support the skinning > > concept. But the main reason of layers is still offering a security > > namespace. > I disagree. I have *never* thought of it as a security > namespace. I think of it as a *user interface* functionality > namespace. Doens't matter if you thougt about of it or not. But it is ;-) Skins are a base concept for security if it comes to rewrite rules in Apache. The usage of ++skin++A, ++skin++B let us map domains to request layers. And if we do this right, it let use enable skin A for applicaiton A and restrict using skin A on applicaiton B. Skins e.g. layers which views are registered for are a security layer. The neat thing about skins is that different skins can provide different HTML. But thats the nature of a skin and has nothing to do why the views support a layer attribute. Since we use PageTemplate file in views it let us think that layers are there for change the template or since z3c.pagelets it let us change the layout at all on a view base. But this is just a neat sideeffect of the layer concept. btw, this is also true for z3c.form. The layer attribute let us register a specific IWidget with different permission in one skin then in another skin. It also let us register another template for both skin. this are tow different concepts. That's the reason why I implemented z3c.pagelet. This package let us separate the security layer used for views and the UI layer used for templates. Then both registration concept are based on the layer attribute. [...] > > seccurity issue > > --------------- > > > > Let's say you have a app offering a XML-RPC server shutdown > view. You > > whould do the following: > > > > 1. regsiter a public and a private skin 2. register the > XML-RPC view > > to the layer used by the private skin 3. Run Zope at port > 8080 blocked > > form outside by firewall 4. Use Apache rewrite rules and > point to the > > public and private skin > > e.g. private.foo.com and public.foo.com 5. Use a > rewrite rule and > > point to the private skin restricting > > access to a internal network or some IP addresses. > > > > How whould you restrict access from the public skin to the XML-RPC > > view without layer support used in step 2? > > The solution is pretty straight forward using a pluggable > traverser. After all, pluggable traversers were designed to > be maximally flexible and to allow all possibilities, which > includes "simulating" skins, if you want. I don't say it's not possible to secure XML-RPC views with a additional concept e.g. z3c.traverser. Right now we can't take care on security wihtout any a additional concept for XML-RPC views. Layers are the missing feature. That's just bad. Because the available permission attribte sugests to secure them. The real issue here is well known and is called backdoor. Secure views is also hugh and well known problem in the AJAX world which the missing layer in XML-RPC view belongs to. this is also true for JSON views which are based on the XML-RPC implementation. Probably I don't speak about the same use case like Christian had if he started this tread. I just say that security requires a request layer in XML-RPC views. Remember, all what I saying is a problem if it comes to the virtual host supported by the Apache rewrite or proxy usage. Then a XML-RPC without a layer will become available in the wrong domain e.g. in every domain. btw, a z3c.traverser whould have to check for a specific skin in it's request for apply another layer which enables the XML-RPC view. Without a skin layer it's not possible that the traverse acts as a XML-RPC view enabler because the traverser doesn't know if you are calling skin A or skin B. You can say that the traverser is only available on skin A, but then again, you need a layer/skin for that. Regards Roger Ineichen > Regards, > Stephan > -- > Stephan Richter > CBU Physics & Chemistry (B.S.) / Tufts Physics (Ph.D. > student) Web2k - Web Software Design, Development and > Training _______________________________________________ > Zope3-dev mailing list > Zope3-dev@zope.org > Unsub: > > > _______________________________________________ Zope3-dev mailing list Zope3-dev@zope.org Unsub:
|
https://www.mail-archive.com/zope3-dev@zope.org/msg09290.html
|
CC-MAIN-2018-51
|
refinedweb
| 759
| 66.84
|
Gauges are used to display single values. They are often used in dashboards, and use segmenting and color coding to present values in a clear and easy to read way.
Wijmo gauges are streamlined. They were designed to be easy to use and to read, especially on small-screen devices. They can also be used as input controls. If you set their isReadOnly property to false, users will be able to change the value using the mouse, keyboard, or touch. For more details, see Using Gauges As Input
The wijmo.gauge module includes three controls:
The hierarchy of gauge classes is as follows:
The root Gauge class provides basic elements shared by all Gauge classes:
The main properties common to all Wijmo gauge classes are:
Example:
import * as gauge from '@grapecity/wijmo.gauge'; // create gauges var myRadialGauge = new gauge.RadialGauge('#myRadialGauge', { min: 0, max: 100, value: 75, showText: 'None', valueChanged: function(s, e) { valueCtl.value = s.value; } });
Submit and view feedback for
|
https://www.grapecity.com/wijmo/docs/master/Topics/Gauge/Gauge-Overview
|
CC-MAIN-2022-05
|
refinedweb
| 162
| 65.83
|
Programming with Python
Creating Functions
Learning_kelvin that converts temperatures from Fahrenheit to Kelvin:
def fahr_to_kelvin(temp): return ((temp - 32) * (5/9)) + 273.15
The blueprint for a python function
The function definition opens with the keyword
def followed by the name of the function and a parenthesized list of parameter names. The body of the function — the statements that are executed when it runs — is indented below the definition line.:.
Tidying up
Now that we know how to wrap bits of code up in functions, we can make our inflammation analysis easier to read and easier to reuse. First, let’s make an
analyze function that generates our plots:
def analyze!')
Notice that rather than jumbling this code together in one giant
for loop, we can now read and reuse both ideas separately. We can reproduce the previous analysis with a much simpler
for loop:
for f in filenames[:3]: print(f) analyze(f) detect_problems(f) center a dataset around a particular value:
def center(data, desired): return (data - numpy.mean(data)) + simple tests that will reassure us:
print('original min, mean, and max are:', numpy.min(data), numpy.mean(data), numpy.max(data)) centered = center(data, 0) print('min, mean, and and max of centered data are:', numpy.min(centered), numpy.mean(centered), numpy.max(centered)):', numpy.std(data), numpy.std(centered))(centered)):
# center(data, desired): return a new array containing the original data centered around the desired value. def center(data, desired): return (data - numpy.mean(data)) + - numpy.mean(data)) + - numpy.mean(data)) + desired help(center)
Help on function center in module __main__: center(data, desired) Return a new array containing the original data centered around the desired value. Example: center([1, 2, 3], 0) => [', ',')
--------------------------------------------------------------------------- - numpy.mean(data)) +)
The old switcheroo
Which of the following would be printed if you were to run this code? Why did you pick this answer?
- 7 3
- 3 7
- 3 3
- 7 7
a = 3 b = 7 def swap(a, b): temp = a a = b b = temp swap(a, b) print(a, b)
Readable code
Revise a function you wrote for one of the previous exercises to try to make the code more readable. Then, collaborate with one of your neighbors to critique each other’s functions and discuss how your function implementations could be further improved to make them more readable.
|
https://cac-staff.github.io/summer-school-2016-Python/06-func.html
|
CC-MAIN-2022-33
|
refinedweb
| 390
| 51.07
|
Details
Description
- Problem
Gradle up-to-date check does not take into account deleted files. This has been confirmed for Gradle 1.3 with 'java' and 'groovy' plugins.
- Steps to reproduce:
1. Create a build.gradle containing apply plugin: 'java'
2. Create the directories src/main/java
3. Create a file src/main/java/Foo.java containing public class Foo {}
4. Execute gradle build - compileJava will be executed.
5. Execute gradle build - compileJava will be UP-TO-DATE.
6. Delete the file src/main/java/Foo.java.
7. Execute gradle build - compileJava will still be UP-TO-DATE.
- Result
Because of this, the deleted class will still be available in both build/classes/main and the created jar in build/lib.
Issue Links
- Duplicated by
GRADLE-2531 Changes to buildSrc directory are not always detected.
- Resolved
GRADLE-2440 JavaCompile task does not remove stale classes when all source files are removed
- Resolved
Activity
This will be a showstopper for Gradle 3.0 since it sabotages a 100% reliable cache, right?
(According to )
Every few month, I decide to take a dive into the Gradle source code to see if I can find a way to fix this elegantly.
This time around, I realized that the current behavior is actually documented in the public API so fixing the problem isn't just a patch but rather a political decision.
So instead of trying to fix the problem itself I did the next best thing and wrote an integration test for the issue:
package org.gradle.integtests import org.gradle.integtests.fixtures.AbstractIntegrationSpec import spock.lang.Issue class StaleOutputTest extends AbstractIntegrationSpec { @Issue(['GRADLE-2440', 'GRADLE-2579']) def 'Stale output is removed after input source directory is emptied.'() { setup: 'a minimal java build' buildScript("apply plugin: 'java'") def fooJavaFile = file('src/main/java/Foo.java') << 'public class Foo {}' def fooClassFile = file('build/classes/main/Foo.class') when: 'a build is executed' succeeds('clean', 'build') then: 'class file exists as expected' fooClassFile.exists() when: 'only java source file is deleted, leaving empty input source directory' fooJavaFile.delete() and: 'another build is executed' succeeds('build') then: 'source file was actually deleted' !fooClassFile.exists() } }
This test confirmed my suspicion that simply removing SkipEmptySourceFilesTaskExecuter in the TaskExecutionServices.createTaskExecuter method wouldn't magically fix this problem.
Doing so causes a java.lang.IllegalStateException: no source files in com.sun.tools.javac.main.Main - and this is just the special case of java compilation. Who knows which other tools might strictly require actual input files instead of defaulting to NOP in that case?
Skipping tasks without input source files is indeed the correct behavior and getting rid of previously created output simply hasn't been adressed so far for this special use case.
Tasks usually remove stale files on their own if they realize it's necessary (e.g. StaleClassCleaner) but I'm not aware of a general way to tell a task that it should just get rid of any previous output instead of executing its originally intended functionality.
Talking about skipping...
I'd suggest to change the line state.upToDate(); in SkipEmptySourceFilesTaskExecuter to state.skipped("SKIPPED"); regardless of any other fixes. Because that's what's happening.
Well... not what I hoped for... but I hope this helps at least a little bit in getting this fixed before the Gradle 3.0 release.
I think I fixed this issue. Pull request
Only change in behavior beside fixing the original issue is that a task without source files is now reporting SKIPPED instead of UP-TO-DATE.
Please also take a look at this comment in SkipEmptySourceFilesTaskExecuter.
I was unsure if this type of cleanup would justify a different skipped message, e.g. CLEANED, but decided to leave it out for now, using SKIPPED as usual.
Please consider taking a look at the pull request before going 3.0-rc.
Now tasks clean up their previous output after all their sources are gone.
In your scenario, you delete all source files. The problem is that gradle marks the compile task as up-to-date as soon as sources is empty. If this is a problem in your build this workaround might help:
if(sourceSets.main.allJava.empty){
compileJava.dependsOn cleanCompileJava
}
|
https://issues.gradle.org/browse/GRADLE-2579
|
CC-MAIN-2018-34
|
refinedweb
| 702
| 51.44
|
# The Azure Cloud Shell Connector in Windows Terminal
The Windows Terminal can now connect you to the [Azure Cloud Shell](https://azure.microsoft.com/en-us/features/cloud-shell/)!

We have a new default profile – the Azure Cloud Shell, which will allow you to access your Azure directories/tenants through the Windows Terminal app itself.
This article [in our blog](https://devblogs.microsoft.com/commandline/the-azure-cloud-shell-connector-in-windows-terminal/).
If you already have Windows Terminal installed
----------------------------------------------
Your settings will not automatically update with the new default profile (since the file does not regenerate every time you open up Windows Terminal), so here’s how you can manually add it in.
1. Start Windows Terminal
2. Open up settings (using the dropdown)
3. Add this profile to your list of profiles:
```
{"acrylicOpacity" : 0.6,
"closeOnExit" : false,
"colorScheme" : "Vintage",
"commandline" : "Azure",
"connectionType" : "{d9fcfdfa-a479-412c-83b7-c5640e61cd62}",
"cursorColor" : "#FFFFFF",
"cursorShape" : "bar",
"fontFace" : "Consolas",
"fontSize" : 10,
"guid" : "{b453ae62-4e3d-5e58-b989-0a998ec441b8}",
"historySize" : 9001,
"icon" : "ms-appx:///ProfileIcons/{b453ae62-4e3d-5e58-b989-0a998ec441b8}.png",
"name" : "Azure Cloud Shell",
"padding" : "0, 0, 0, 0",
"snapOnInput" : true,
"startingDirectory" : "%USERPROFILE%",
"useAcrylic" : true}
```
Once you’ve done this, you will see a new tab option for the Azure Cloud Shell.
How to use the connector
------------------------
1. Open up the «Azure Cloud Shell» tab.
2. You will be prompted to go to «microsoft.com/devicelogin» and enter the code displayed.
3. Once you enter the code in your browser, you will need to login with your account – make sure you log in with an account that has an active Azure directory/tenant.
4. Switch back to Terminal and within a few seconds you will see an «Authenticated» message.
5. *Some cases only*: if you have multiple tenants in your account, you will be prompted to choose one of them. Simply enter the tenant number of the one you wish to connect to.
6. You will then be asked if you want to save these connection settings. Saving your connection settings will allow you to login without going through steps 1-5 in the future.
7. The app will then start to establish a connection with the cloud shell (this could take a while, just be patient!)
8. You are now connected to you Azure Cloud Shell!
Here’s what the full login output looks like: (I hit ‘0’ for the tenant number and ‘y’ for the choice on saving connection settings).

Now that I have saved my connection settings, here’s what the login process looks like the next time (I hit ‘0’ to access my saved connection settings).

It’s also possible to sign in with a different account/tenant by hitting ‘n’ or removing the saved connections by hitting ‘r’. These settings will persist across sessions, so even if you start up the Terminal a few days later you will still be able to log in with your saved connection settings without needing to open up a browser.
And past that, its all you! Hope you enjoy being able to access your Azure assets through the Windows Terminal. As always, please report any bugs/issues to our [Github repository](https://github.com/microsoft/terminal).
|
https://habr.com/ru/post/469051/
| null | null | 571
| 54.12
|
Hi, I have a lightbox that pops up when the page loads.How can I make it pop up for visitors only? (non-members). Thanks,#lightbox , #members
Put a simple code on your page that checks if the user is logged in as a member or not, if they are not logged in then you can set the lightbox to open and be shown and closed if they are members.
Thanks for the swift answer, my coding skills are limited tho, do you have an example I could copy/paste by any chance?
@Paul See the example in the loggedIn property docs.
@Yisrael (Wix) & Thanks heaps for your help.. I've tried with the following but seems I'm still missing something for it to work properly..?
import {session} from 'wix-storage'; import wixWindow from 'wix-window'; import wixUsers from 'wix-users'; const user = wixUsers.currentUser; const userId = user.id; const isLoggedIn = user.loggedIn; $w.onReady(function () { // user is not logged in if(!isLoggedIn) { // open popup wixWindow.openLightbox("lightbox1"); } });
Many thanks for your help guys.. I think I'm almost there.
I got the below code checked by a coder who told me it was working fine but it seems that Wix lightbox is triggered by something else as the code is not running properly based on if user is logged in or not...
Has anyone got any insight on this one?
|
https://www.wix.com/corvid/forum/community-discussion/lightbox-for-visitors-only
|
CC-MAIN-2019-51
|
refinedweb
| 232
| 83.25
|
[Solved] QTableView reflected in every tab PyQt
Hello! I am using PyQt4/python2.7
I have tabbed windows which contains QTableView. So when I open a new tab I get QTableView in it. When I click on a button to populate the view, it works correctly, however it reflects the result in every other QTableView tab.
I make sure I made a new instance of the view/Model.
The only way it works correctly, is that I open few tabs but I have to focus on each, like open it. Then it will work on the one i just selected and won't reflect result, however if after that I make a new tab, that new tab reflects current result.
I checked, but seems the new Qtableview in the new tabs doesn't contain data, even though they reflect it. So I am guessing the problem is with QtableView?
Thanks!
Hi there. Can you give us a look at the code (or an example showing the issue)? I use the model/view framework quite extensively and I've not seen this before so I'm wondering if it might be implementation specific.
In the meantime, just speculating, but if focusing on the table clears the issue then it may be a painting issue.
I don't know but I have too many files. I am gonna place a link to download the src. Don't know if it's against the ToS or not.
src:
More details:
Platform: windows x64
you might need Impacket/pcapy modules
can be found here ->
Thanks!
UPDATE: To induce the problem, just create new tab, then click on sniff, then create another tab, you will see the new tab has the same result as the other tab. I tried it with tablewidget, worked fine. So is something wrong with my implementation?
UPDATE2: start with proposal.py
- tobias.hunger Moderators
Are all the views connected to the same model? If that is the case, then changing something in one view will change all the others as well.
Nope, I create a new model in each tab.
Through my code, New tab -> creates new frame.InnerTab -> which creates Sniffer6.Sniffy() ->which I initiate a new model in it.
and that happens with every tab.
Thanks!
Hi there, sorry for the delay in replying, I've only just had to time to look through your code.
The problem you are having is nothing to do with views, its actually to do with the default arguments you are passing to the initialiser of your table model (snifferModel.tableModel). You have the following:
@
...
def init(self, result=[], headers=[]):
super(tableModel, self).init()
self.result=result
self.headers.headers
...
@
The thing you need to understand about python is that default arguments to functions/methods are mutable and are only evaluated once at definition not each time the function/method is called. This means that every time this initialiser is called each instance of the model class gets the same instance of the default list and, as you never call this method with anything but the default argument for result, every instance of your model is referring to the same list. You can find a really good explanation of this behaviour "here":.
In the short term, the solution to your problem is a simple rewrite of your initialiser as follows and your problem is solved:
@
def init(self, result=None, headers=None):
super(tableModel, self).init()
self.result=result or []
self.headers=headers or []
@
This will set your instance variable to the value of the function argument or create a new list if it is None.
Hope this helps ;o)
Oh wow, I never knew they were mutable. Didn't even occur to me, your solution does work great! Thank you!!
Also, while just another quick question ( I am just taking advantage of you here, hope u don't mind)
I get this error:
QObject::startTimer: QTimer can only be used with threads started with QThread
From what I was able to get, it's okay, cause the garbage collector just closes them in a messy order when I close the application?
[quote author="DisappointedIdealist" date="1368189432"]
QObject::startTimer: QTimer can only be used with threads started with QThread
[/quote]
Do you get this error when you exit your application or while its running? If it happens on exit then yes, it is to do with garbage collecting. It usually occurs because an item (widget, model, view etc.) has not been given a parent and is therefore owned by python/PyQt rather than Qt itself. Whilst its probably not the end of the world if this happens it would be better practice to track down the source and prevent it especially as giving widgets a parent is generally a good idea anyhow.
At exiting of the application. It happened when I used the model/view from this example(so probably from my Model since I used parts of it in my code). Even when running this example I get the error after closing all windows.
src:
OR(same)
@from PyQt4 import QtGui, QtCore, uic
import sys
class PaletteTableModel(QtCore.QAbstractTableModel):
def __init__(self, colors = [[]], headers = [], parent = None): QtCore.QAbstractTableModel.__init__(self, parent) self.__colors = colors self.__headers = headers def rowCount(self, parent): return len(self.__colors) def columnCount(self, parent): return len(self.__colors[0]) def flags(self, index): return QtCore.Qt.ItemIsEditable | QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable def data(self, index, role): if role == QtCore.Qt.EditRole: row = index.row() column = index.column() return self.__colors[row][column].name() if role == QtCore.Qt.ToolTipRole: row = index.row() column = index.column() return "Hex code: " + self.__colors[row][column].name() if role == QtCore.Qt.DecorationRole: row = index.row() column = index.column() value = self.__colors[row][column] pixmap = QtGui.QPixmap(26, 26) pixmap.fill(value) icon = QtGui.QIcon(pixmap) return icon if role == QtCore.Qt.DisplayRole: row = index.row() column = index.column() value = self.__colors[row][column] return value.name() def setData(self, index, value, role = QtCore.Qt.EditRole): if role == QtCore.Qt.EditRole: row = index.row() column = index.column() color = QtGui.QColor(value) if color.isValid(): self.__colors[row][column] = color self.dataChanged.emit(index, index) return True return False def headerData(self, section, orientation, role): if role == QtCore.Qt.DisplayRole: if orientation == QtCore.Qt.Horizontal: if section < len(self.__headers): return self.__headers[section] else: return "not implemented" else: return QtCore.QString("Color %1").arg(section) #=====================================================# #INSERTING & REMOVING #=====================================================# def insertRows(self, position, rows, parent = QtCore.QModelIndex()): self.beginInsertRows(parent, position, position + rows - 1) for i in range(rows): defaultValues = [QtGui.QColor("#000000") for i in range(self.columnCount(None))] self.__colors.insert(position, defaultValues) self.endInsertRows() return True def insertColumns(self, position, columns, parent = QtCore.QModelIndex()): self.beginInsertColumns(parent, position, position + columns - 1) rowCount = len(self.__colors) for i in range(columns): for j in range(rowCount): self.__colors[j].insert(position, QtGui.QColor("#000000")) self.endInsertColumns() return True
if name == 'main':
app = QtGui.QApplication(sys.argv) app.setStyle("plastique") #ALL OF OUR VIEWS listView = QtGui.QListView() listView.show() comboBox = QtGui.QComboBox() comboBox.show() tableView = QtGui.QTableView() tableView.show()) model.insertColumns(0, 5) listView.setModel(model) comboBox.setModel(model) tableView.setModel(model) sys.exit(app.exec_())
@
Yep, its the problem I described. If you reimplement the last part of the example as follows, where the views and model are given a parent (in this case a containing QWidget), the application exits quietly:
@
...
if name == 'main':
class Widget(QtGui.QWidget): def __init__(self, parent=None, **kwargs): QtGui.QWidget.__init__(self, parent, **kwargs) l=QtGui.QVBoxLayout(self) #ALL OF OUR VIEWS listView = QtGui.QListView(self) l.addWidget(listView) comboBox = QtGui.QComboBox(self) l.addWidget(comboBox) tableView = QtGui.QTableView(self) l.addWidget(tableView), self) model.insertColumns(0, 5) listView.setModel(model) comboBox.setModel(model) tableView.setModel(model) app = QtGui.QApplication(sys.argv) w=Widget() w.show() w.raise_() sys.exit(app.exec_())
@
Ah okay, thank you!
Hey, I am sorry to bother you, but I tried to give everyone a parent, and I think they do?
(from my src code)
@def init(self, parent=None , **kwargs):
QWidget.init(self, parent, **kwargs)
layout = QVBoxLayout(self)
self.lv = snifferView.view()
layout.addWidget(self.lv)
self.setLayout(layout)
headers = ['No','Source','Destination','Protocol'] self.model = snifferModel.tableModel(headers=headers) self.lv.tableView.setModel(self.model) self.resultList = [] self.hexDump = '' self.count = 0 self.lv.tableView.selectionModel().selectionChanged.connect(self.rowSelected) @
|
https://forum.qt.io/topic/27044/solved-qtableview-reflected-in-every-tab-pyqt
|
CC-MAIN-2018-51
|
refinedweb
| 1,403
| 53.17
|
The official blog of PostSharp: annoucements, tips & tricks >>
During
I
We.
-Britt
I’m excited to announce the first release candidate of PostSharp 2.1, available for download from our website and from the NuGet official repository.
PostSharp 2.1 is a minor upgrade of PostSharp 2.0; the upgrade is free for everybody. The objective of this version is to fix several gray spots in the previous release.
This release candidate is ought to be of very high quality and free of known bugs, but needs to be tested by the community before it can be labeled stable. As required by the RC quality label, the online documentation has been updated to reflect the latest API.
PostSharp 2.1 has full backward binary compatibility with PostSharp 2.0.
This release candidates contains the following additions to the previous CTP:
If you missed the previous announcements, here’s a list of new features in PostSharp 2.1 compared to version 2.0:
To upgrade your projects from PostSharp 2.0 to PostSharp 2.1 easily you can use the conversion utility included in the PostSharp HQ application. Just open the app and click on “convert”, then select the folder containing your projects. References to libraries and MSBuild imports will be automatically fixed.
A release candidate means that we are confident in the code quality and is that all mandatory quality work, including documentation, has been done. PostSharp 2.1 is now the default version on our download page. We’ll wait a couple of weeks to allow the community to give this version a try, then publish the RTM or another RC, according to the feedback.
Note that the license agreement allows for production use.
It’s now time to download PostSharp 2.1 and upgrade your projects!
I!
I:
Most users will see some improvement in the build-time performance of PostSharp, but for some users it could even mean a 500% speed up. The main reason? We completely rewrote the last step of the PostSharp pipeline: writing the modified assembly back to disk. The previous implementation was based on ILASM, which scaled badly for very large assemblies. The new implementation is fully written in unsafe C# and is probably the fastest available. Other improvements are due to a better utilization of multiple cores, although PostSharp remains largely a single-threaded program. For details about this feature, see our CTP 1 announcement.
This feature is available on all editions of PostSharp.
PostSharp 2.1 can now be added to your application directly from Visual Studio thanks to NuGet. No need to run a program with elevated privileges. No need to edit your project file if you want to put PostSharp in your source repository. Everything is done by the NuGet package installer.
You can now use the Visual Studio Extension (VSX) even if you don’t want to run the setup program. The first time you (or your colleague) will build a program using PostSharp, a dialog box will ask you whether you want to install the VSX. No elevated privileges needed. When you deploy a new version of PostSharp to the source control, all developers get the updated VSX.
Thanks to NuGet, it will become much easier for vendors and open-source projects to include a dependency to PostSharp. Our friends at Gibraltar Software already uploaded their application monitoring agent, and will soon update it to include the dependency to the PostSharp package.
How many times did you want to get the list of all classes in your assembly deriving from System.Forms.Control? Sure, you can enumerate all types and build the index yourself, but it’s quite CPU expensive. Since PostSharp already computes this information internally, it seemed natural to expose it to the PostSharp.dll library. The PostSharp.Reflection.ReflectionSearch class offers methods that allow you to search for:
Note that this functionality is only available at build time. This feature is available on the Professional Edition only.
Many people used “empty” aspects that just did one thing: validate the code. You now have a better way to validate your code: constraints (namespace PostSharp.Constraints). As aspects, constraints are custom attributes that you put on your code. They contain code that gets executed for every assembly referencing this code. The constraint logic typically queries the current assembly using reflection and the newly introduced reflection extensions, and emit errors or warnings to Visual Studio. Since your code can use LINQ, you have a very powerful toy to play with! PostSharp comes with off-the-self constraints:
Note that architecture validation is disabled by default. You need to enable it in your project properties (“PostSharp” property page). This feature is available on the Professional Edition only.
Let me start with a disambiguation: PostSharp does not become an obfuscator with this feature. It just became compatible with some obfuscators.
In previous versions, you could get into serious problems if you tried to use PostSharp and any obfuscator in the same project. PostSharp relied on some metadata references that were broken by the obfuscator because of renaming, so you got runtime exceptions. This is obvious if, for instance, you had a field of type MethodInfo in your aspect. The method name was stored in the serialization stream and the obfuscator did not know how to modify it. There were other less obvious cases with aspects on generic methods.
PostSharp 2.1 can be made aware of obfuscators and write metadata references in a way that obfuscators can understand and modify. But the obfuscator has also to be aware of PostSharp.
We implemented support for Dotfuscator because it is the market leader and has a free edition that ships with Visual Studio. According to the feedback we get from this CTP, we will publish our specification so that other vendors can implement it if they are interested.
This feature is available on the Professional Edition only.
PostSharp 2.1 supports Silverlight 5 applications. What to add? This feature is available on all editions.
First of all, we chose to rename the community edition to Starter Edition. We got too many questions whether the community edition can be used for commercial project. We hope the new name makes the answer clear: yes, you can use the Starter Edition in commercial projects, you will just enjoy less features than with the Professional Edition.
At SharpCrafters, our license enforcement policy is to trust the customer to respect the license agreement. That’s why there was no license enforcement in version 2.0. It was not just a priority. But we may have been too liberal by letting people deploy the license key in the source control: for many companies, it meant that nobody knew how many developers were using the product. We strongly believe in trust, but we think that we should provide tools for proper license management. So we did a few changes.
We chose to deprecate deployment of license keys through source control because we want some mental process to take place when a developer first starts using the product: he also needs a license. We provide a license metering solution: we will monitor how many people install the license on their computer, but we will never block them. If we that believe a license key is being used too many times, we will simply contact the one who purchased the license key and discuss a solution. But we trust you in first place. By requesting license installation by each user, we are just asking that every company maintains a record of who is using the software.
What if manual license bookkeeping is impractical? For these situations, we provide a license server (technically: a simple ASP.NET application backed by an SQL database). The license server maintains the list of leases and ensures that the license agreement is respected. It will send emails to the license administrator (or team leader, or whomever else) when new licenses should be acquired – and you will get a 30-day grace period to purchase them. We tried to make the license server a valuable tool for the customer. You are free to modify the underlying database at will if it makes sense. You as the customer are responsible for respecting the legal agreement and we trust you.
The license server will be available with a new type of licenses called corporate licenses, that will be available for a premium (likely +30% compared to the normal commercial license).
The upgrade is free for anyone. I am pretty confident of the quality of this release so it’s a great time for you to download it. Submit all your changes to source control, do the upgrade, rebuild, and run your unit tests. If it fails, shelve the changes and report the issue. We do a lot of internal testing, but we need your feedback to move quality of this release forward.
In the next days, I’ll try to blog about individual features of this release. >>
Last week we announced that PostSharp 2.1 CTP was available under our Early Access Program (EAP, in short). So what is the SharpCrafters EAP? It’s very simple:
So why are we offering this EAP? It’s also simple: because we need your help. Although we test every change carefully and have lots of regression tests, we just can’t think of every possible combination. Development environments have very different configurations, and compiler sometimes emit non-standard assemblies or debugging symbols. Most of the time, when we can think of a situation, we have a test for it. Most bugs come from situations we did not imagine. So we need your help to test PostSharp with the largest set of scenarios possible.
Since we appreciate the time you spend testing our software and reporting issues, we will offer one free license of PostSharp Professional Edition for every bug you report and reproduce. You qualify if:
If you report more than one bug, you will receive several free licenses for you to give away (not for resale).
We need your help. To get started, download PostSharp 2.1 CTP and report issues in the support forum.
I.
|
http://www.postsharp.net/blog/category/Annoucement.aspx
|
CC-MAIN-2013-20
|
refinedweb
| 1,696
| 65.12
|
Hi,
I'm trying to debug a userspace application with gdb-6.3 / gdbserver.
Objdump -p reports:
private flags = 20001107: [abi=O32] [mips3] [32bitmode]
With gdb-6.0 from an older toolchain this works, but gdb-6.3
reports the infamous "Reply contains invalid hex digit 59".
The reason for this is that gdb and gdbserver disagree
about the register size. Gdbserver seems to be hardcoded
to 32bit register size (regformats/reg-mips.dat), while gdb-6.3 assumes
that mips3 binaries use 64bit registers. (Actually E_MIPS_ARCH_3
gets transformed into bfd_mach_mips4000 in bfd/elfxx-mips.c, which
has 64bit registers according to bfd/cpu-mips.c).
(I briefly checked gdb-6.4, it seems to do the same.)
The workaround given in this posting seems to work for
userspace, too:
I.e. "set architecture mips:isa32" overrides the register size
which gdb uses to talk to gdbserver.
However, I wonder why gdb doesn't evaluate the 32bitmode flag
from the ELF e_flags header. To be honest, I also wonder what
the exact semantics of this flag are. The NUBI document says:
"32BIT_MODE: (e_flags&EF_MIPS_32BITMODE) - 1 when code assumes 32-bit
registers only. Always set for NUBI32, but NUBI-compliant software
should not rely on it."?
How about this patch:
--- gdb-6.3/gdb/mips-tdep.c.orig 2004-10-15 09:25:03.000000000 +0200
+++ gdb-6.3/gdb/mips-tdep.c 2006-01-30 21:13:09.000000000 +0100
@@ -258,6 +258,15 @@ mips_abi (struct gdbarch *gdbarch)
int
mips_isa_regsize (struct gdbarch *gdbarch)
{
+ struct gdbarch_tdep *tdep = gdbarch_tdep (current_gdbarch);
+ if (tdep != NULL)
+ {
+ int ef_mips_32bitmode;
+ ef_mips_32bitmode = (tdep->elf_flags & EF_MIPS_32BITMODE);
+ if (ef_mips_32bitmode)
+ return 4;
+ }
+
return (gdbarch_bfd_arch_info (gdbarch)->bits_per_word
/ gdbarch_bfd_arch_info (gdbarch)->bits_per_byte);
}
I also considered if adding 64bit support to gdbserver would be
the right thing, but I think not as o32 ABI executables don't have
64bit registers, right?
Please don't be too hard on me, my understanding of gdb etc. is pretty
limited. I'm especially confused by this ISA regsize vs. ABI regsize
thing ;-/. Thus my patch looks at the 32bitmode flag and not at
the o32 ABI to decide about this register size, however I'm not
sure if any of this makes actually sense. It seems to work for me,
though ;-)
But I'd be willing to do some work to get this fixed properly in
upstream gdb, if I get some guidance.
Thanks,
Johannes
|
https://www.linux-mips.org/archives/linux-mips/2006-01/msg00490.html
|
CC-MAIN-2017-04
|
refinedweb
| 397
| 68.57
|
Create a Generic Class in Java
A generic class in Java is a class that can operate on a specific type specified by the programmer at compile time. To accomplish that, the class definition uses type parameters that act as variables that represent types (such as int or String).
To create a generic class, you list the type parameter after the class name in angle brackets. The type parameter specifies a name that you can use throughout the class anywhere you’d otherwise use a type. For example, here’s a simplified version of the class declaration for the ArrayList class:
public class ArrayList<E>
I left out the extends and implements clauses to focus on the formal type parameter: <E>. The E parameter specifies the type of the elements that are stored in the list.
To create an instance of a generic class, you must provide the actual type that will be used in place of the type parameter, like this:
ArrayList<String> myArrayList;
Here the E parameter is String, so the element type for this instance of the ArrayList class is String.
Now look at the declaration for the add method for the ArrayList class:
public boolean add(E o) { // body of method omitted (thank you) }
Where you normally expect to see a parameter type, you see the letter E. Thus, this method declaration specifies that the type for the o parameter is the type specified for the formal type parameter E. If E is String, the add method accepts only String objects. If you call the add method passing anything other than a String parameter, the compiler will generate an error message.
You can also use a type parameter as a return type. Here’s the declaration for the ArrayList class get method:
public E get(int index) { // body of method omitted (you’re welcome) }
Here, E is specified as the return type. That means that if E is String, this method returns String objects.
The key benefit of generics is that type-checking happens at compile time. Thus, after you specify the value of a formal type parameter, the compiler knows how to do the type-checking implied by the parameter. That’s how it knows not to let you add String objects to an Employee collection.
|
http://www.dummies.com/how-to/content/create-a-generic-class-in-java.navId-323185.html
|
CC-MAIN-2014-15
|
refinedweb
| 381
| 57.81
|
Most modern blogging engines support the MetaWeblog API, which was defined by XML-RPC.com many years ago. It's become one of the most popular API's for programmatically interacting with blogs because of its simplicity. Even Microsoft's Windows Live Spaces provides support for it.
I wanted to use this API recently to interact with our Community Server implementation so I started searching around for client-side implementations that would be easy to program in C#. I was surprised that I couldn't find a mainstream implementation readily available. So I followed the example on MSDN and built my own MetaWeblog library in C# on top of Cook Computing's XML-RPC.NET library.
Here's what my MetaWeblogClient class looks like (truncated for brevity):
public class MetaWeblogClient : XmlRpcClientProtocol
{
[XmlRpcMethod("metaWeblog.getRecentPosts")]
public Post[] getRecentPosts(string blogid, string username, string password, int numberOfPosts)
{
return (Post[])this.Invoke("getRecentPosts", new object[] { blogid, username, password, numberOfPosts });
}
[XmlRpcMethod("metaWeblog.newPost")]
public string newPost(string blogid, string username, string password, Post content, bool publish)
{
return (string)this.Invoke("newPost", new object[] { blogid, username, password, content, publish });
} ...
With this class, you can simply make method calls like getRecentPosts, newPost, editPost, etc to interact with any blog that supports the MetaWeblog API. You will need to specify that URL to the MetaWeblog endpoint prior to making those method calls. Here's an example:
class Program
{
static void Main(string[] args)
{
MetaWeblogClient blog = new MetaWeblogClient();
blog.Url = "";
// here's how you post a new entry...
Post newPost = new Post();
newPost.dateCreated = DateTime.Now;
newPost.title = "Test post from Metablog Api";
newPost.description = "This is the body of the post";
newPost.categories = new string[] { "WCF", "WF" };
blog.newPost("blogid", "username", "password", newPost, true);
// here's how you retrieve the most recent entries...
Post[] posts = blog.getRecentPosts("blogid", "username", "password", 5);
foreach (Post post in posts)
Console.WriteLine(post.title);
}
}
The code turns out to be wonderfully simple. So if you find yourself in the same boat as me, looking for a MetaWeblog C# implementation, feel free to download my library here. Hopefully it will save you a little bit of time!
Looks great, except the part where your username and password are sent in the clear text. If you plan on using the MetaWeblog API I would encourage you to use HTTPS instead of HTTP. It's not impossible to achieve, you just need to have a certificate (even a self signed certificate will do) and configure your blog.url specifying HTTPS...
Otherwise it's awfully nice of you to share your password with the world...
Link Listing - August 19, 2008
This is true -- you should use HTTPS for the MetaWeblog endpoint unless you're not concerned about leaking your password.
Pingback from WMOC#17 - PhotoSynth, Reflector move forward - Service Endpoint
Pingback from Recent Links Tagged With "mainstream" - JabberTags
Great library!
One thing though, do you have any plans to add the newMediaObject method?
Thanks!
Thanks for this library, it worked great once I figured out SharePoint's authentication! For anyone trying to use this library against SharePoint, let me save you the hours of frustration it caused me:
If your site is on an intranet and you keep receiving "401 Unauthorized" web exceptions, try this instead: leave the username and password blank on the method invocation, but before that attach a NetworkCredential object to the blog like so (hopefully the syntax remains intact):
MetaWeblogClient blog = new MetaWeblogClient();
blog.Url = url;
Post newPost = new Post();
// set newPost properties...
blog.Credentials = new NetworkCredential(user, pass);
blog.newPost(blogId, "", "", newPost);
And in case you're trying to figure out the blogId, no it's not something simple like "1" or "MyBlog" like other blogs. It's a crazy-long SharePoint-generated string that you can determine by using the getUsersBlogs method.
Thanks for this. It works as advertised.
|
http://www.pluralsight.com/community/blogs/aaron/archive/2008/08/19/programming-the-metaweblog-api-in-net-c.aspx
|
crawl-002
|
refinedweb
| 640
| 57.37
|
Type: Posts; User: 2kaud
I don't know Vulcan so can only generalise.
It is a known fact that multi-threaded code can be slower than single-threaded. There are a couple of reasons for this. One is that it takes time to...
The current version is VS 2022 which provides support for C++20. The Community version is still free. I suggest you install VS2022. This will happily co-exist with VS2017. It uses the same installer.
Yes.
Although in C++ programs you should use new/delete rather than calloc/malloc/free
Sorry, no. I have no knowledge of that book. But from it's contents it looks just like a C++ intro book.
For a tutorial, have a look at - although it's somewhat light on the STL.
Codeguru has c++ articles (see )
Rather than articles,...
As a possible starter, perhaps:
#include <random>
#include <unordered_set>
#include <iostream>
std::mt19937 engine {std::random_device {}()};
There are some c++ 'cook-books' which give plenty of examples for STL.
A couple come to mind. These are also available for download.
Packt
- C++17 STL cookbook...
Well standard c++ doesn't include gui. If you need gui you need to use 3rd party.
For STL, I can suggest books that cover STL, but not specific on-line tutorials.
There is the on-line c++...
What you mean by 'find doubles'? Can you give an example..
@EE
I commented out SetThreadPriority() just for my testing - as it wasn't part of the issue.
inline static int count{}; //this line gives error in v14.0 platform - You need to update your...
Since C++17, you use inline with static. See my code above.
However, the problem is with the std::thread(). You need to pass obj[i] by address - not by value. The way you're doing it means that...
I don't use C++ multi-threading - we used Windows threading prior to threads being introduced into C++ and have stuck with this since.
However, with multi-threading you have to make sure that...
Correct. This is how the code should work.
This behaviour is caused by the call to vector constructor. You are passing 10 and MyClass() as arguments. This means create 10 elements and initialise...
As I said above, post a complete compilable test program that shows the issue.
What do you mean by a 'shared header'? a and b are passed by value and the return is also by value - so there's no side-effect from calling the function. So the function itself is thread-safe. But...
[Also asked at ]
How many of them are there?
Can you use a #ifdef ... to choose between the 2 versions depending upon the compiler used? Something like:
#if defined (_MSC_VER)
#pragma message("message")...
How about #pragma message ?
#pragma NOTE is not listed as a supported #pragma for MSVS.
If #warning is just to display a warning message at compile time, then perhaps #pragma comment ??
In MSVS it's #pragma warning. See
Are these of any.
|
https://forums.codeguru.com/search.php?s=4f63f9abd1acb9da2797734cc32d062c&searchid=22187287
|
CC-MAIN-2021-49
|
refinedweb
| 501
| 79.06
|
>>>>> "Gary" == Gary Lawrence Murphy <garym@canada.com> writes: Gary> Keywords and index terms ... someday we may want one LDP doc to Gary> cross-reference another. That leads to policies on first and Gary> second-level index terms. This is an area that we've not yet ventured into. My (limited) understanding is that there are a few de-facto standards, but all of them are "outside" of DocBook itself. I don't have my SGML references at hand here, but I seem to recall that one of them was called something like HYTIME... Gary> Other policies are the format for Bibliographic entries and for the Gary> style used for XREF tags, for example, I carry over my LaTeX Gary> convention of using xref tags of the form TYPE:TAG (ie FIG:NETWORK vs Gary> TAB:NETWORK) only DocBook does not allow the colon as an ID Gary> character. The idea is to make such anchors easy to guess from the Gary> context and to define a sub-namespace for each type of Gary> cross-reference (ie SEC vs FIG vs TAB &c) Ah, you mean id attributes. You're right about colons not being allowed; we used hyphens to separate the various parts of our id attributes. I think DocBook 3.1 (which we've not yet migrated to) also allows underscores. Our id attributes look something like this: s1-cd-rom-gui-begin-install "s1" indicates that this is an id for a <sect1>. "cd-rom-gui" represents the name of the SGML file in which this <sect1> exists (we split our documents up by chapter, referring to them in our "root" document via entities). And finally "begin-install" is a writer-supplied identifier that describes what's in the <sect1>. As you can tell, the hyphen is an overloaded delimiter, which is why we would like to go to 3.1 so we can at least make ids parse unambiguously... :-) Ed -- Ed Bailey Red Hat, Inc. -- To UNSUBSCRIBE, email to ldp-docbook-request@lists.debian.org with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org
|
http://www.tldp.org/pub/Linux/docs/ldp-archived/mail_archives/ldp-docbook/msg00023.html
|
CC-MAIN-2015-27
|
refinedweb
| 349
| 72.36
|
getifaddrs()
Get a network interface address
Synopsis:
#include <sys/types.h> #include <sys/socket.h> #include <ifaddrs.h> int getifaddrs( struct ifaddrs ** ifap );
Since:
BlackBerry 10.0.0
Arguments:
- ifap
- The address of a location where the function can store a pointer to a linked list of ifaddrs structures that contain the data related to the network interfaces on the local machine.
Description:
The getifaddrs() function stores a reference to a linked list of the network interfaces on the local machine in the memory referenced by ifap.
The data returned by getifaddrs() is dynamically allocated; you should free it by calling freeifaddrs() when you no longer need it.
Errors:
The getifaddrs() function may fail and set errno for any of the errors specified by ioctl(), malloc(), socket(), and sysctl().
It can also set errno to ENOMEM if the system is out of memory, or the interface list was growing while getifaddrs() was executing. Calling the function again when the interface list is stable can return 0 (success).
Last modified: 2014-06-24
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
|
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/g/getifaddrs.html
|
CC-MAIN-2015-11
|
refinedweb
| 188
| 57.57
|
Use the lastIndexOf() method to find the last occurrence of a character in a string in Java.
Let’s say the following is our string.
String myStr = "Amit Diwan";
In the above string, we will find the last occurrence of character ‘i’
myStr.lastIndexOf('i');
The following is the complete example.
public class Demo { public static void main(String[] args) { String myStr = "Amit Diwan"; int strLastIndex = 0; System.out.println("String: "+myStr); strLastIndex = myStr.lastIndexOf('i'); System.out.println("The last index of character a in the string: "+strLastIndex); } }
String: Amit Diwan The last index of character a in the string: 6
|
https://www.tutorialspoint.com/finding-the-last-occurrence-of-a-character-in-a-string-in-java
|
CC-MAIN-2021-43
|
refinedweb
| 102
| 58.38
|
About the Examples
Versions
The examples in this book run with the Java Developer’s Kit (JDK) 1.2 and the Java
Cryptography Extension (JCE) 1.2. The examples in the book were tested with JDK 1.2beta3
and JCE 1.2ea2. Some of the topics covered are applicable to JDK 1.1, especially the
Identity-based key management discussed in Chapter 5 and the
MessageDigest and
Signature classes in
Chapter 6. However, anything involving encryption
requires the JCE. The only supported version of the JCE is 1.2, and it only runs with JDK
1.2. (Although the JCE had a 1.1 release, it never progressed beyond the early access
stage. It is not supported by Sun and not available from their web site any
longer.)
The signed applets in Chapter 8 work with HotJava 1.1, Netscape Navigator 4.0, and Internet Explorer 4.0.
File Naming
This book assumes you are comfortable programming in Java and
familiar with the concepts of packages and
CLASSPATH. The source code for examples in this
book should be saved in files based on the class name. For example,
consider the following code:
import java.applet.*; import java.awt.*; public class PrivilegedRenegade extends Applet { ... }
This file describes the
PrivilegedRenegade class;
therefore, you should save it in a file named
PrivilegedRenegade.java.
Other classes belong to particular packages. For example, here is the beginning of one of the classes from Chapter 9:
package oreilly.jonathan.security; import java.math.BigInteger; import java.security.*; public class ElGamalKeyPairGenerator extends KeyPairGenerator { ... }
This should be saved in oreilly/jonathan/security/ElGamalKeyPairGenerator.java.
Throughout the book, I define classes in the
oreilly.jonathan.* package hierarchy. Some of them
are used in other examples in the book. For these examples to work
correctly, you’ll need to make sure that the directory
containing the oreilly directory is in your
CLASSPATH. On my computer, for example, the
oreilly directory lives in c:\
Jonathan\ classes. So my
CLASSPATH
contains c:\ Jonathan\ classes ; this makes the
classes in the
oreilly.jonathan.* hierarchy
accessible to all Java applications.
CLASSPATH
Several examples in this book consist of classes spread across
multiple files. In these cases, I don’t explicitly
import files that are part of the same example.
For these files to compile, then, you need to have the current
directory as part of your classpath. My classpath, for example,
includes the current directory and the Java Cryptography Extension
(JCE—see Chapter 3). On my Windows 95
system, I set the CLASSPATH in autoexec.bat as
follows:
set classpath=. set classpath=%classpath%;c:\jdk1.2beta3\jce12-ea2-dom\jce12-ea2-dom.jar
Variable Naming
The examples in this book are presented in my own coding style, which is an amalgam of conventions from a grab bag of platforms.
I follow standard Java coding practices with respect to capitalization. All member variables of a class are prefixed with a small m, like so:
protected int mPlainBlockSize;
This makes it easy to distinguish between member variables and local variables. Static members are prefixed with a small s, like this:
protected static SecureRandom sRandom = null;
And final static member variables are prefixed with a small k (it stands for constant, believe it or not):
protected static final String kBanner = "SafeTalk v1.0";
Array types are always written with the square brackets immediately following the array type. This keeps all the type information for a variable in one place:
byte[] ciphertext;
Downloading
Most of the examples from this book can be downloaded from. Some of the examples,
however, cannot legally be posted online. The U. S. government
considers some forms of encryption software to be weapons, and the
export of such software or its source code is tightly controlled.
Anything we put on our web server can be downloaded from any location
in the world. Thus, we are unable to provide the source code for some
of the examples online. The book itself, however, is protected under
the first amendment to the U. S. Constitution and may be freely
exported.
Get Java Cryptography now with O’Reilly online learning.
O’Reilly members experience live online training, plus books, videos, and digital content from 200+ publishers.
|
https://www.oreilly.com/library/view/java-cryptography/1565924029/pr03s04.html
|
CC-MAIN-2020-05
|
refinedweb
| 698
| 50.23
|
Before going into details as why String is final or immutable in java , first let us understand the meaning of the term "immutable".
What does immutable means?
What does immutable means?
According to our good buddy, “thefreedictionary”, immutable means “not capable or susceptible to change”.
Note : In java , if an object is immutable then it can not be changed after creation.
The String Class
The String class represents a set of characters enclosed in a pair of double quotes (“”). All String literals in Java, such as "hello", are implemented as instances of the String class.
Strings are immutable; their values cannot be changed after they have been created, while StringBuffers and StringBuilders supports mutable versions of the String object.
For example:
String str = "hello";
is equivalent to:
char ch[] = {'h', 'e', 'l', ‘l’, ‘o’};
String str = new String(ch);
There are several ways on how to create a String object, by checking the Java API documentation; you’ll see that the String class has about 15 constructors including the 2 deprecated ones. Deprecated constructors and methods are encouraged not to be used in the current Java version.
Here are the two common ways on how to create a String object.
String str1 = new String(“hello”);
String str2 = “java”;
The first one uses the keyword new followed by the String constructor; while the second one uses a direct assignment.
- Is there a difference between the two? Let’s take a look at the two code snippets; this will also explain why String objects are immutable.
Sample Code Snippet #1:
String str1 = new String("java");
String str2 = new String("java");
System.out.println(str1 == str2);
System.out.println(str1.equals(str2));
Snippet Output #1:
false
true
That’s why when str1 and str2 was compared using the double equals operator (==), the output was false because they were comparing the addresses of both variables. The .equals method on the other hand compares if both str1 and str2 have identical String objects that they are referring to.
Sample Code Snippet #2:
String str1 = “java”;
String str2 = “java;
System.out.println(str1 == str2);
System.out.println(str1.equals(str2));
Snippet Output #2:
true
true
Snippet #2 created only 1 object but had two variables holding them. That is why str1 and str2 has the same address. Therefore when they are compared using the double equals operator (==) they returned true. A second String object was not forcibly created because the constructor was not explicitly called. The statement str2 = “java” only assigns the already existing String object in the memory to the String variable str2.
During concatenation, we can also see that the String object also shows their immutability by creating a new object after concatenation. Concatenation can be done by calling the concat method or by using the concatenation symbol (+).
Sample Code Snippet #3:
String str = “hello ”;
System.out.println(str);
is the same as:
String str = “hello ”;
str.concat(“world”);
System.out.println(str);
Both sample codes will printout hello world as the output.
Here’s the explanation:
String str = “hello ”;
str.concat(“world”);
After concatenating the Strings “hello ” and “world”, it will produce a new object “hello world” which is a new object that has a new address.
3. Why Strings are immutable.
Security is one of the reasons that I can think of why String objects are immutable. Strings are usually used as parameters in several Java methods that can be used to access data sources like databases, files or even objects that can be found across the network. Once you call a method that takes an immutable object like Strings, you can be sure that the data that is passed won’t change. It simply cannot change.
Imagine the threat and security problems that you may have if you are accessing a method via RMI (Remote Method Invocation) and the method that you are calling requires you to pass a String object. Before reaching the requested method, there is no assurance that the String objects that you sent is the same String object that is received on the other VM (Virtual Machine). This can result to a serious security problem.
4. What do I use if I need a mutable version of a String object?
Incase you are in need of mutable String objects, you may use StringBuffer or StringBuilder. Both are found inside the java.lang package and both classes contains the same set of methods that can change the object like append, insert, reverse and a lot more. Both of these classes can also take in a String object as a parameter on their respective constructors.
Here’s how to create StringBuffer and StringBuilder objects that accepts a String parameter.
StringBuffer sBuffer = new StringBuffer(“this is a string”);
StringBuilder sBuilder = new StringBuilder(“this is another string”);
The difference between StringBuffer and a StringBuilder object is the thread safety capability.
StringBuffer can not be used by multiple concurrent users because it is thread-safe. But StringBuilder is not thread-safe; therefore it can only be used by multiple threads.
Thread-safety also has its minor trade off. Thread-safe objects like StringBuffer objects may require a slightly larger memory space during runtime compared with StringBuilder objects. This is because during runtime, the thread activation and passivation process can happen at runtime if the thread-pool ran out of threads. This is the reason why most of the mobile devices like PDA’s, mobile phones and terminal card readers only uses StringBuilder and not StringBuffer. Aside from having a smaller memory requirement, StringBuilder are also used by a single process only.
Read Also : How to Reverse a String in Java with Example
5. The String class is a final class.
According to the Java API Documentation the String class declaration is:
public final class String extends Object
implements Serializable, Comparable<String>, CharSequence
Declaring a class as final means that the class cannot be extended by another class. You cannot create a subclass from a final class.
Here’s a non-working Code Snippet:
public class MyStringVersion extends String
{}
This snippet won’t compile because the class MyStringVersion is extending the class String.
This simply shows that the String class cannot be extended or inherited. This is good.
Imagine, if Strings are not final, you can create a subclass and instantiate two objects from different classes that can look alike and be seen as Strings but are actually different. This can cause several problems in so many levels. Thats it for now just mention in comments if you have any other questions related to why string is immutable or final in java.
|
http://javahungry.blogspot.com/2015/07/why-string-is-final-or-immutable-in-java-with-example.html
|
CC-MAIN-2017-04
|
refinedweb
| 1,099
| 63.29
|
Online Training for XML (1/2) - exploring XML
Online Training for XML
The Web is used for many purposes these days, and educational training is no exception. Many sites provide online courses that require nothing but a Web browser and an Internet connection. Not surprisingly technical training is at the top of the list, since the targeted audience is mostly already Web-savvy.
Now the first courses on XML started appearing on the Web, and I took a look at some of them to see what they could teach me. The offerings I examined were:
FirstClass Systems
FirstClass Systems focuses on Technology Based Training (TBT) with extensive libraries covering Information Technology, Professional & Business Development Skills, PC End User, and Certification Paths. They carry a broad range of courses with more than 10,000 hours of training. The browser based internet and intranet training management systems allow them to service individuals and corporations alike.
FirstClass Systems was so kind to provide me with an account on their HighPayingSkills.com site to investigate their recently added XML curriculum, developed by Qtrain.net:
- XML Programmer Training Curriculum (Library) aka. XML Comprehensive
- Introduction to XML
- DTD's for XML
- XML Schemas
- Linking with XML
- XML Stylesheets
I took the first course, which is pretty much a condensed version of the other courses. The XML programmer curriculum is designed to guide students from the basics of eXtensible Markup Language (XML) to an intermediate level. Students will learn how to define their own XML-based markup language, including the creating of a Document Type Definition (DTD). Coverage also includes a module on the XML Linking Language (XLink), the standard for linking XML documents that permits a much more robust type of hyperlink than HTML, and the XML Style Language (XSL), which allows style information to be assigned to XML markup elements. In addition, the library covers more advanced features of XML such as namespaces, schemas, etc.
In addition to running the course in your browser you can take a multiple-choice test to verify your newly acquired knowledge and fill out an evaluation form for feedback. Running this course required Internet Explorer 5.0 or later, though.
An introductory page covers most beginning questions on XML such as the relation to HTML. XML Comprehensive is entirely self-paced -- you can log on and off any time, and use the bookmark tool to mark your location for future reference. The course is comprised of five modules. Each module includes three or four lessons, for a total of 18 lessons. There is also an appendix of online resources. The course is interactive -- it utilizes audio streaming pop-up boxes, several practice exercises, practical and theoretical assignments, end-of-module tests, and a final exam. In addition, there are various help tools, including an XML Comprehensive Chat Room, a list of Frequently Asked Questions (FAQs), a Glossary with XML-related terms and definitions, and XML Email Help.
The module test consists of ten or so questions, most of them multiple-choice, some of them short text entries. The employed scheme of clicking a button for submitting the answer and then clicking another button to move on to the next question is a bit awkward but bearable. On the first test I got 3 of 10 wrong, the answers aren't always obvious but the subsequent test feedback always includes 2-3 sentences explaining the rationale for the correct answers. The final exams works exactly like the module tests, but simply covering all the modules' topics. Lucky for me I got many of the same questions back from the module tests, so I scored slightly better this time, although not good enough for the required 75% to pass the test. Also lucky for me I call myself XML explorer, not XML expert... anyhow, more concentration might be in order. Fourty questions are a lot indeed , and the response time of the site could be better from where I am coming from, across the Atlantic.
On a humorous note:
For the question: "When is it appropriate to use XML?"
I picked the answer: "When I want to add a buzz word to my resume."
I am still wondering why this is deemed not true, it seems to work for many resumes I have seen in the recent past.... oh those academic guys have no idea... ;-)
The account history keeps track of all course activity and taken exams, which might appeal to people who want to have their progess tracked and reported, and to companies in countries where this sort of employee performance check is not illegal.
Let's look at other offerings...
Produced by Michael Claßen
URL:
Created: Feb 27, 2001
Revised: Feb 27, 2001
|
http://www.webreference.com/xml/column29/index.html
|
CC-MAIN-2014-15
|
refinedweb
| 784
| 60.45
|
hello, I am following a book and im stuck on this one.....(note it's tacky but just bear with me, im a newb)
#include <iostream.h> int main() { int RedSoxScore, YankeesScore; cout<<"Enter the score for the RedSox:"; cin>> RedSoxScore; cout<<"Enter the score for the yankees:"; cin>> YankeesScore; cout<<"\n"; if (RedSoxScore > YankeesScore) cout<<"GO THE BLIMMIN REDSOX!!\n"; if (YankeesScore > RedSoxScore ) { cout<<"WHOOPY GO THE YANKS!!!\n"; cout<<"Happy days in NYC!!!\n"; } if (RedSoxScore == YankeesScore) { cout<<"A tie?, nha cant be!\n"; cout<<" Give me the real score for the yanks:"; cin >> YankeesScore; if (RedSoxScore > YankeesScore) cout<<"Knew it, Go the sox!!"; if (YankeesScore > RedSoxScore) cout<<"Knew it go the yanks!!"; if (YankeesScore == RedSoxScore) cout<<"Wow, it was tie!?"; } cout<<"\nThanks for telling me.\n"; return 0; }
I know im using the old .h header files, because that's what the book uses (it says i can use the new standards if i want, using namespace std but my code tends to not work sometimes. So im sticking to the old ones just untill i get the hang of things.
Anyway i compiled the following on devc++ and it runs the first few line well untill the command promt suddently closes after line 9.
The code seems to be fine because it compiled, but it just flashes.
Any ideas as to what im doing wrong?....
-Thanks
|
https://www.daniweb.com/programming/software-development/threads/101606/stuck-please-help-quick-question
|
CC-MAIN-2017-09
|
refinedweb
| 230
| 75.81
|
Since from people solving real-world issues.
Another key point to recognize is that the question of how best to construct view models is *not* unique to the MVC framework. The fact is that even in traditional ASP.NET web forms you have the same issues. The difference is that historically developers haven’t always dealt with it directly in web forms – instead what often happens is that the code-behind files end up as monolithic dumping grounds for code that has no separation of concerns whatsoever and is wiring up view models, performing presentation logic, performing business logic, data access code, and who knows what else. MVC at least facilitates the developer taking a closer look at how to more elegantly implement Separation of Concerns.
Pattern 1 – Domain model object used directly as the view model
Consider a domain model that looks like this:
1: public class Motorcycle
2: {
3: public string Make { get; set; }
4: public string Model { get; set; }
5: public int Year { get; set; }
6: public string VIN { get; set; }
7: }
When we pass this into the view, it of course allows us to write simple HTML helpers in the style of our choosing:
1: <%=Html.TextBox("Make") %>
2: <%=Html.TextBoxFor(m => m.Make) %>
And of course with default model binding we are able to pass that back to the controller when the form is posted:
1: public ActionResult Save(Motorcycle motorcycle)
While this first pattern is simple and clean and elegant, it breaks down fairly quickly for anything but the most trivial views. We are binding directly to our domain model in this instance – this often is not sufficient for fully displaying a view.
Pattern 2 – Dedicated view model that *contains* the domain model object
Staying with the Motorcycle example above, a much more real-world example is that our view needs more than just a Motorcycle object to display properly. For example, the Make and Model will probably be populated from drop down lists. Therefore, a common pattern is to introduce a view model that acts as a container for all objects that our view requires in order to render properly:
1: public class MotorcycleViewModel
3: public Motorcycle Motorcycle { get; set; }
4: public SelectList MakeList { get; set; }
5: public SelectList ModelList { get; set; }
6: }
In this instance, the controller is typically responsible for making sure MotorcycleViewModel is correctly populated from the appropriate data in the repositories (e.g., getting the Motorcycle from the database, getting the collections of Makes/Models from the database). Our Html Helpers change slightly because they refer to Motorcycle.Make rather than Make directly:
1: <%=Html.DropDownListFor(m => m.Motorcycle.Make, Model.MakeList) %>
When the form is posted, we are still able to have a strongly-typed Save() method:
1: public ActionResult Save([Bind(Prefix = "Motorcycle")]Motorcycle motorcycle)
Note that in this instance we had to use the Bind attribute designating “Motorcycle” as the prefix to the HTML elements we were interested in (i.e., the ones that made up the Motorcycle object).
This pattern is simple and elegant and appropriate in many situations. However, as views become more complicated, it also starts to break down since there is often an impedance mismatch between domain model objects and view model objects.
Pattern 3 – Dedicated view model that contains a custom view model entity
As views get more complicated it is often difficult to keep the domain model object in sync with concerns of the views. In keeping with the example above, suppose we had requirements where we need to present the user a checkbox at the end of the screen if they want to add another motorcycle. When the form is posted, the controller needs to make a determination based on this value to determine which view to show next. The last thing we want to do is to add this property to our domain model since this is strictly a presentation concern. Instead we can create a custom “view model entity” instead of passing the actual Motorcycle domain model object into the view. We’ll call it MotorcycleData:
1: public class MotorcycleData
7: public bool AddAdditionalCycle { get; set; }
8: }
This pattern requires more work and it also requires a “mapping” translation layer to map back and forth between the Motorcycle and MotorcycleData objects but it is often well worth the effort as views get more complex. This pattern is strongly advocated by the authors of MVC in Action (a book a highly recommend). These ideas are further expanded in a post by Jimmy Bogard (one of the co-authors) in his post How we do MVC – View Models. I strongly recommended reading Bogard’s post (there are many interesting comments on that post as well). In it he discusses approaches to handling this pattern including using MVC Action filters and AutoMapper (I also recommend checking out AutoMapper).
Let’s continue to build out this pattern without the use of Action filters as an alternative. In real-world scenarios, these view models can get complex fast. Not only do we need to map the data from Motorcycle to MotorcycleData, but we also might have numerous collections that need to be populated for dropdown lists, etc. If we put all of this code in the controller, then the controller will quickly end up with a lot of code dedicated just to building the view model which is not desirable as we want to keep our controllers thin. Therefore, we can introduce a “builder” class that is concerned with building the view model.
1: public class MotorcycleViewModelBuilder
3: private IMotorcycleRepository motorcycleRepository;
4:
5: public MotorcycleViewModelBuilder(IMotorcycleRepository repository)
6: {
7: this.motorcycleRepository = repository;
8: }
9:
10: public MotorcycleViewModel Build()
11: {
12: // code here to fully build the view model
13: // with methods in the repository
14: }
15: }
This allows our controller code to look something like this:
1: public ActionResult Edit(int id)
3: var viewModelBuilder = new MotorcycleViewModelBuilder(this.motorcycleRepository);
4: var motorcycleViewModel = viewModelBuilder.Build();
5: return this.View();
Our views can look pretty much the same as pattern #2 but now we have the comfort of knowing that we’re only passing in the data to the view that we need – no more, no less. When the form is posted back, our controller’s Save() method can now look something like this:
1: public ActionResult Save([Bind(Prefix = "Motorcycle")]MotorcycleData motorcycleData)
3: var mapper = new MotorcycleMapper(motorcycleData);
4: Motorcycle motorcycle = mapper.Map();
5: this.motorcycleRepository.Save(motorcycle);
6: return this.RedirectToAction("Index");
Conceptually, this implementation is very similar to Bogard’s post but without the AutoMap attribute. The AutoMap attribute allows us to keep some of this code out of the controller which can be quite nice. One advantage to not using it is that the code inside the controller class is more obvious and explicit. Additionally, our builder and mapper classes might need to build the objects from multiple sources and repositories. Internally in our mapper classes, you can still make great use of tools like AutoMapper.
In many complex real-world cases, some variation of pattern #3 is the best choice as it affords the most flexibility to the developer.
Considerations
How do you determine the best approach to take? Here are some considerations to keep in mind:
Code Re-use – Certainly patterns #1 and #2 lend themselves best to code re-use as you are binding your views directly to your domain model objects. This leads to increased code brevity as mapping layers are not required. However, if your view concerns differ from your domain model (which they often will) options #1 and #2 begin to break down.
Impedance mismatch – Often there is an impedance mismatch between your domain model and the concerns of your view. In these cases, option #3 gives the most flexibility.
Mapping Layer – If custom view entities are used as in option #3, you must ensure you establish a pattern for a mapping layer. Although this means more code that must be written, it gives the most flexibility and there are libraries available such as AutoMapper that make this easier to implement.
Validation – Although there are many ways to perform validation, one of the most common is to use libraries like Data Annotations. Although typical validations (e.g., required fields, etc.) will probably be the same between your domain models and your views, not all validation will always match. Additionally, you may not always be in control of your domain models (e.g., in some enterprises the domain models are exposed via services that UI developers simply consume). So there is a limit to how you can associate validations with those classes. Yes, you can use a separate “meta data” class to designate validations but this duplicates some code similar to how a view model entity from option #3 would anyway. Therefore, option #3 gives you the absolute most control over UI validation.
Conclusion
The following has been a summary of several of the patterns that have emerged in dealing with view models. Although these have all been in the context of ASP.NET MVC, the problem with how best to deal with view models is also an issue with other frameworks like web forms as well. If you are able to bind directly to domain model in simple cases, that is the simplest and easiest solution. However, as your complexity grows, having distinct view models gives you the most overall flexibility.
I.
I ran into an interesting IoC issue today that was ultimately resolved by some extremely helpful email assistance by Chad Myers. I’ll post the solution here in the hopes that someone else will find it helpful. Here is the code to set up the example:
1: public interface IFoo
3: IBar Bar { get; set; }
4: }
5:
6: public class Foo : IFoo
7: {
8: public IBar Bar { get; set; }
10: public Foo(IBar bar)
12: this.Bar = bar;
13: }
14: }
15:
16: public interface IBar
17: {
18: bool Flag { get; set; }
19: }
20:
21: public class Bar : IBar
22: {
23: public bool Flag { get; set; }
24:
25: public Bar(bool flag)
26: {
27: this.Flag = flag;
28: }
29: }
The key here is that Bar has a constructor that takes a Boolean parameter and there are some circumstances where I want Bar to have a true parameters and some instances where I want it to be false. Because of this variability, I can’t just initialize like this:
1: x.ForRequestedType<IBar>().TheDefault.Is.OfConcreteType<Bar>().WithCtorArg("flag").EqualTo(true);
That won’t work because that only supports the “true” parameter for IBar. In order to support both, I need to utilize named instances. Therefore, my complete StructureMapBootstrapper looks like this:
1: public static class StructureMapBootstrapper
3: public static void Initialize()
4: {
5: ObjectFactory.Initialize(x =>
6: {
7: x.ForRequestedType<IFoo>().TheDefaultIsConcreteType<Foo>();
8: x.ForRequestedType<IBar>().TheDefault.Is.OfConcreteType<Bar>().WithCtorArg("flag").EqualTo(true).WithName("TrueBar");
9: x.ForRequestedType<IBar>().TheDefault.Is.OfConcreteType<Bar>().WithCtorArg("flag").EqualTo(false).WithName("FalseBar");
10: });
11: }
12: }
This now enables me to create instances of IBar by using the GetNamedInstance() method. However, the primary issue is that I need to create an instance of IFoo and Foo has a nested dependency on IBar (where the parameter can vary). It turns out the missing piece to the puzzle is to simply leverage the ObjectFactory’s With() method (in conjunction with the GetNamedInstance() method) which takes an object instance and returns an ExplicitArgsExpression to enable the fluent syntax. Therefore, these instances can easily be instantiated either way like this:
1: var fooWithTrueBar = ObjectFactory.With(ObjectFactory.GetNamedInstance<IBar>("TrueBar")).GetInstance<IFoo>();
2: var fooWithFalseBar = ObjectFactory.With(ObjectFactory.GetNamedInstance<IBar>("FalseBar")).GetInstance<IFoo>();
Just another example of how IoC containers can enable us to keep clean separation of concerns in our classes and enable us to avoid cluttering our code with the wiring up of dependencies.
I often get asked what IoC Container I prefer. Short answer: StructureMap. I love the fluent syntax for configuration. Overall, it’s easy to use, has many advanced features, and is very lightweight. I view the learning curve with StructureMap as relatively small. It is one of the longest-lived IoC containers (if not *the* longest) and has a huge adoption rate which means it’s quite mature and not difficult to find code examples online. For example, there is the StructureMap mailing list. Additionally, the community is very proactive – I just sent an email question to Chad Myers today and got a response within minutes that solved my issue. The creator of StructureMap, Jeremy Miller, has one of the most useful blogs around (in addition to his column in MSDN magazine).
Despite all this, there are still several other very high quality IoC containers available including: Unity, Ninject, Windsor, Spring.NET, and AutoFac. Unity is Microsoft’s offering in the IoC space and often it gets picked simply because it’s got the “magical” Microsoft label on it. In fact, some organizations (for better or worse – usually worse) have policies *against* using OSS software and your only choice is to look at the offerings on the Microsoft platform. Despite this, Unity is actually pretty good but by P&P’s own admission, there are instances where StuctureMap implementation is a little more elegant than Unity. But Unity does still have some compelling reasons to use it. In fact, if you are already using the Microsoft Enterprise Library Application Blocks, choosing Unity makes sense for a little more seamless experience.
Having said all this, there is one other major caveat to my preference towards StructureMap – it’s the IoC tool I’m most familiar with. Therefore you should take a recommendation from me (or *anyone*) with a grain of salt. Ultimately the features of the various IoC’s are going to be comparable and it’s the developer familiarity with that tool that is going to make the difference. For folks new to IoC, most of the uses are going to be for simple constructor dependency injection and all the IoC frameworks can handle that easily. Given the wealth of quality IoC frameworks, we need another IoC framework about as bad as we need another data access technology. :)
Thanks to everyone who attended my session today at NoVa Code Camp. Both the code and PowerPoint slides are available for download.
Download samples for: MVC in the Real World. Check out the readme.txt file in Solution Items and all SQL scripts for creating the databases.
This Saturday (October 10) I’ll be presenting at the NoVa Code Camp. Registration is still open.
I will be presenting “MVC in the Real World”. There are many great sessions on the schedule. Hope to see you there!
Thanks to everyone who attended my sessions yesterday at Richmond Code Camp. Both the code and PowerPoint slides are available for download.
Download samples for: C# 4.0 New Language Features.
I had several questions about some of the tools I was using during the presentations (all of which are free). For the zooming and highlighting, I was using a tool called ZoomIt. For the code snippets, I was just utilizing the built-in code snippets functionality of Visual Studio – however, I use a tool called Snippy to create all of my custom snippets. You can find links to those tools and many other tools I use on my Developer Tools and Utilities post.
I just found out today that I was awarded the MVP designation from Microsoft in the area of ASP.NET. It has been a very busy 2009 for me, speaking at various user groups and code camps including CMAP, CapArea, RockNUG, SoMDNUG, FredNUG, and Richmond Code Camp. I would like to thank all of those user groups for having me present and I look forward to continuing my involvement with all of those user groups and more in the year to come.
With .NET 4.0 and the 2010 wave just around the corner, the upcoming year looks to bring some exciting enhancements in numerous areas including MVC, WCF REST, C# 4.0, Dublin, and the .NET framework in general.
|
http://geekswithblogs.net/michelotti/archive/2009/10.aspx
|
CC-MAIN-2014-42
|
refinedweb
| 2,684
| 52.19
|
Shader Graph in Unity for Beginners
Learn how to create your first shader with Unity’s Shader Graph.
Version
- Unity 2019.2, Unity.
In this tutorial, you’re going to create your first shader graph in Unity right now!
Getting Started
This tutorial uses Unity version 2019.1 or newer. You can get your version of Unity here.
Use the Download Materials button at the top or bottom of this page to grab the materials for this tutorial. Then unzip its contents and open Intro to Shader Graph Starter inside Unity.
The RW folder in the Project Window is organized as follows:
- Fonts: Fonts used in the scene.
- Materials: Materials for the scene.
- Models: 3D meshes for the game pieces and background.
- PostFX: Post-processing effects for scene rendering.
- Prefabs: Various pre-built components.
- Scenes: The game scene.
- Scripts: Custom scripted game logic.
- Shaders: The shader graphs created for this tutorial.
- Sprites: The sprite used as part of the instructions.
- Textures: Texture maps for the game pieces and background.
Now, load the scene called TangramPuzzle from the Scenes folder.
This is a simple game called Tangram, which originated in China in the 1800s. The goal is to rearrange seven flat geometric shapes into stylized pictograms or silhouettes.
Enter Play mode in the Editor to test the demo game.
You can click and drag the shapes. Use the cursor keys to rotate the pieces. Turn and shift the pieces so they don’t overlap.
Can you figure out how to move the seven shapes into these patterns?
Ok, the game technically works, but it doesn’t give you any visual feedback about your selected game piece.
What if you could make your game piece glow as the mouse makes contact? That might improve the user interface.
This is the perfect chance to show off Shader Graph!
Checking Pipeline Settings For Shader Graph
Shader Graph only works with the relatively new Scriptable Render Pipeline, either the High-Definition Render Pipeline or Lightweight Render Pipeline.
When creating a new project for use with Shader Graph, be sure to choose the correct template.
Your example project is already configured to use the Lightweight Render Pipeline. First, confirm in the PackageManager that you have the Lightweight RP and ShaderGraph packages installed by selecting Window ► PackageManager.
Then, select from the available versions and use the Update to button if necessary. The latest verified version is usually the safest choice.
Once you’ve updated your packages, double-check that your pipeline settings are correct under Edit ► Project Settings.
In the Graphics tab, the Scriptable Render Pipeline Settings should read LWRP-HighQuality. This sets your project to use the highest default setting for the Lightweight Render Pipeline.
Close this window once you’ve confirmed that you’re using one of the Scriptable Render Pipeline assets.
Creating a PBR Graph
A material and a shader always work together to render a 3d mesh on-screen. So, before you can build your shader, you’ll need a material as well.
First, use the Create button in the Project view to generate a new material in the RW/Materials folder. Select Create ► Material, then rename it to Glow_Mat.
Then, in the RW/Shaders folder, create a PBR Graph by selecting Create ► Shader ► PBR Graph. This is a shader graph that supports physically-based rendering, or PBR.
Name it HighlightShaderGraph.
The Glow_Mat material currently uses the default shader for the LightweightRenderPipeline called LightweightRenderPipeline/Lit.
Change it to use the new shader graph you just created. With Glow_Mat selected, click on the Shader dropdown at the top of the Inspector and select Shader Graphs ► HighlightShaderGraph.
The Glow_Mat material will change to a slightly lighter shade of gray but otherwise remain fairly drab. Don’t worry! You’ll remedy that soon.
Now, double-click the HighlightShaderGraph asset or click Open Shader Editor in the Inspector. This opens the Shader Graph window.
You should familiarize yourself with the main parts of this interface:
- The main workspace is this dark gray area where you’ll store your graph operations. You can right-click over the workspace to see a context menu.
- A node is a single unit of your graph. Each node holds an input, an output or an operation, depending on its ports. Nodes connect to one another using edges.
- The Master node is the final output node of your graph. In this example, you’re using the physically-based rendering variant, also called the PBR Master node. You may recognize several properties, such as Albedo, Normal, Emission and Metallic, from Unity’s Standard Shader.
- The Blackboard can expose certain parts of the graph to the Inspector. This allows the user to customize certain settings without needing to edit the graph directly.
- The Main Preview interactively shows the current shader’s output on a sphere.
By connecting various nodes together, you can make a shader graph, which Unity compiles and sends to the GPU.
Creating a Color Node
First off, give your shader some base color. This is done by feeding a color node into the Albedo component of the PBR Master Node. Right-click in the workspace area to create your first node from the context menu by selecting Create Node ► Input ► Basic ► Color.
Note that there are hundreds of nodes in the Create Node menu! While it may seem overwhelming at first, you’ll quickly become familiar with the most commonly used ones.
Next, drag the node around the workspace using its title bar. Then, drop it somewhere to the left of the PBR Master node.
The Color node allows you to define a single color. Click the color chip and select a nice red hue, such as R: 128, G: 5, B: 5.
To output the color to your PBR Master node, drag the Out port into the Albedo port, which represents the base color of your shader.
Once you connect the nodes with an edge, you should see the Main Preview sphere turn red.
Success! In shader writing, making a simple solid colored shader is the equivalent of coding Hello, world! :]
Though you might not realize it, you created your first custom shader!
Navigating the Interface
Although you only have a couple of nodes in your graph, now is a great time to get accustomed to the Shader Graph interface.
Drag the nodes around and notice that the edge stays connected between the Color’s output port and the PBR Master’s input port.
Nodes can contain different types of data. As you created a node that holds color input, you can also create a node representing a single number by selecting Create Node ► Input ► Basic ► Integer. You won’t actually do anything with this new node— it’s for illustration purposes only.
Connect the Integer Out port to the Alpha port of the PBR Master.
Your graph is still tiny, but now you have enough nodes to try out a few hotkeys. Select a couple of nodes and press these hoykeys:
- F: frame the selected node or nodes.
- A: frame the entire graph.
You can also use the buttons at the top of the window to toggle the Main Preview and Blackboard. The Show in Project button will help locate the current shader graph in the Project window.
Once you’re comfortable navigating your graph, do some cleaning. You only need the Color node and PBR Master node.
Right-click over the connection between the Integer and Master node and select Delete. That lets you disconnect the node from the graph.
Likewise, you can delete the Integer node entirely. Right-click over the node and select Delete.
Once done, click the Save Asset button in the top left of the interface. Unity will save all your changes, then it will compile and activate the shader. This step is required every time you want to see your latest changes in the Editor.
Now, return to the Project window, then select the Glow_Mat material.
Because the shader is propagating to the material, the sphere in the Inspector preview should show up as red.
Now, drag the Glow_Mat material over one of the tangram pieces in the Scene window.
As you expected, your material and shader turn the mesh a nice, uniform red.
Adding a Glow Effect
If you want Glow_Mat material to have a more dramatic glow, edit the shader graph again.
Currently, you have the Color‘s output feeding into the Albedo of the PBR Master.
You can also drag another edge from the Out to the Emission. Now that same color is being used twice: Once for the base color and again for the emission color.
Output ports can have multiple edges, but input ports can only have one.
Now, switch the Mode dropdown in the Color node to HDR. This taps into the high-dynamic range of colors.
Next, edit the color chip. In HDR Mode, you get an extra option for Intensity. Click the +1 in the bottom swatch a couple of times or drag the slider to about 2.5. Then, save your changes and return to the Editor.
In the Editor, your game piece glows a bright red-orange. The post-processing in your scene is already set and enhances the high-dynamic range color.
Now, select the PostProcessing game object in the Hierarchy. The glow stems from the Bloom effect.
Next, open the Bloom parameters and adjust the Intensity, or how strong the glow appears, and Threshold, or cutoff to start glowing. This example shows a value of 3 and 2, respectively.
Wow, that’s a pop of color!
Making the Highlighter Script
You don’t want the game piece to glow all the time. You only want to enable it depending on the mouse position.
When the mouse hovers over a game piece, you’ll switch to the Glow_Mat material. Otherwise, the game-piece will display the default Wood_Mat material.
First, create a new C# script in RW/Scripts called Highlighter. This will help you swap between the two materials at runtime. Replace all the lines in your script with the following:
using UnityEngine; // 1 [RequireComponent(typeof(MeshRenderer))] [RequireComponent(typeof(Collider))] public class Highlighter : MonoBehaviour { // 2 // reference to MeshRenderer component private MeshRenderer meshRenderer; [SerializeField] private Material originalMaterial; [SerializeField] private Material highlightedMaterial; void Start() { // 3 // cache a reference to the MeshRenderer meshRenderer = GetComponent<MeshRenderer>(); // 4 // use non-highlighted material by default EnableHighlight(false); } // toggle betweeen the original and highlighted materials public void EnableHighlight(bool onOff) { // 5 if (meshRenderer != null && originalMaterial != null && highlightedMaterial != null) { // 6 meshRenderer.material = onOff ? highlightedMaterial : originalMaterial; } } }
Let’s take a closer look at the script:
- The script can only be applied to an object that contains a
MeshRendererand a
Collidercomponent. This is controlled by adding
[RequireComponent]attributes to the top of the script.
- These are references to the
MeshRenderer,
originalMaterialand
highlightedMaterial. The materials are tagged with the
[SerializeField]attribute, making them assignable from the the Inspector.
- In
Start, you automatically fill in your
MeshRendererwith
GetComponent.
- You invoke
EnableHighlight(false). This ensures that your non-highlighted material shows by default. The public method called
EnableHighlightthat toggles the renderer’s material is right below. It takes a bool parameter called
onOffto determine the highlight’s enabled state.
- You guard to prevent any NullReference errors
- You use the ternary operator to save space.
Adding Mouse Events
Because you’ll apply this to game pieces with
MeshColliders attached, you can take advantage of the built-in
OnMouseOver and
OnMouseExit methods. Add following after the
EnableHighlight method:
private void OnMouseOver() { EnableHighlight(true); } private void OnMouseExit() { EnableHighlight(false); }
When the Mouse is over a game piece, it will invoke
EnableHighlight(true). Likewise, when the mouse exits the Collider, it will invoke
EnableHighlight(false).
That’s it!
Save the script.
Highlighting the Game Piece
If you applied the Glow_Mat to any of the pieces in the previous sections of the tutorial, you need to switch all game pieces back to the Wood_Mat material in the Editor. You’ll use the Highlighter to enable the glow at runtime instead.
First, select the seven objects inside the Tangram transform that represents the individual game piece shapes. Then, add the
Highlighter script to all of them at once.
Next, in the Original Material field, drag in the Wood_Mat material. Then, in the Highlighted Material field, drag in the Glow_Mat material. Finally, enter Play mode and check out your handiwork.
Not bad! When you hover the mouse over a tangram piece, it glows a bright, hot red. Move the mouse away from it and it returns back to its original wooden state.
You can still play the game normally, but now the highlight effect adds a little bit of visual interest, focusing the user’s attention.
Using Texture Nodes
Currently, the simple shader is a bright solid red color. You’re going to modify the shader so that it doesn’t lose the original wood texture. Instead, you’ll make the highlight appear as a glowing edge around the surface detail.
First, edit the HighlightShaderGraph by double-clicking it or selecting Open Shader Editor in the Inspector.
Delete the Color node, right-click on it, then select Delete. You’ll create everything from scratch.
Instead of a single color, you will plug in a texture by using the Sample Texture 2D node.
Create a node either from the context menu with a right-click, then select Create Node or by using the spacebar hotkey. Select Input ► Texture ► Sample Texture 2D.
The Sample Texture 2D node reads color information from a texture asset and then outputs its RGB values.
Select a texture from the Texture input port. Click the dot next to the empty field to open the file browser.
Choose the WoodAlbedo texture asset.
Connect the Sample Texture 2D’s RGBA output port into the PBR Master Albedo port.
Voilà! Your preview sphere now shows the wood texture on the surface.
If you add a normal map, you can add some more surface detail. First, create another Sample Texture 2D node by selecting Create Node ► Input ► Texture ► Sample Texture 2D.
Select the WoodNormal texture in the Texture input port.
Adjust the Type dropdown from Default to Normal.
Output the RGBA values into the Normal port of the PBR Master.
The Main Preview sphere should now appear rougher. The normal map fakes small indentations and divots in the surface. This helps sell the appearance of wood grain.
Adding a Fresnel Effect
Now that you have a base texture and normal map replacing the previous solid red color, let’s add the highlight effect using a different method.
Instead of making the entire object glow uniformly, you can confine the glow to only the edges. This can be accomplished with something known as the Fresnel effect.
Create a new node with right-click or using the spacebar hotkey, then select Create Node ► Math ► Vector ► Fresnel Effect.
This new node shows a sphere with a white glowing ring around its circumference. You can adjust the width of its halo using the Power input port. Click and drag the X label to the left of the field or enter specific numbers.
Larger values make the halo very thin, while smaller values make it very wide. You can use a value of 4 for a thinner rim glow.
In order to pass this halo to your material, you connect the Fresnel Effect output to the Emission of the PBR Master.
Your MainPreview now shows a wooden sphere with a bright white halo from the Fresnel Effect.
You can try it on your kitchen table. You’re using Unity’s version of this phenomenon to make the edges of your geometry glow.
Multiplying by Color
Adding some color to the glowing rim is as easy as doing some basic color math.
Create a new color node that will specify the color for the glowing ring. Use right-click
or spacebar to open the context menu, then select Create node ► Input ► Basic ► Color. Switch the color mode to HDR.
Select a color to represent your highlight color. For example, choose a nice bright green here, R:5, G:255, B:5.
Increase the Intensity to 3.5.
You can’t plug the new color into the fresnel effect as it doesn’t have a color input. Instead, you’ll need to combine the fresnel output with the output from the color node. This is achieved by making use of a Multiply node.
Create a Multiply node with right-click, then select Create node ► Math ► Basic ► Multiply.
Delete the existing edge between the Fresnel Effect and the PBR Master. Instead connect the Fresnel Effect Out to the A input of the Multiply node.
Connect the Color node’s Out to the B input of the Multiply node.
Finally, connect the Multiply Out port to the Emission of the PBR Master port. Voilà! You can see the your bright green HDR color surrounding the Main Preview sphere.
Remember you can use the Fresnel Effect Power to grow or shrink the halo. A smaller value of 1.5 gives you a broad green glow.
A value between 4 and 5 will work well for this sample game, but feel free to experiment with your own values.
Save the shader graph and witch back to the Editor, you’re ready to see your HighlightShaderGraph in action.
Enter Play mode.
When you hover your mouse over a game piece, it retains its original wood texture. However, it now has a bright green glow around the edges. Once again, you can now play the game, only with a more subtle highlight.
Adding Blackboard Properites
If you want to modify the look of the glow effect, you have to return to the Shader Graph editor window and make those changes. For example, you might want to grow or shrink the bright halo using the Fresnel Effect Power.
It’s not very convenient if you want to test out various changes. Fortunately, Shader Graph has the concept of Properties.
You can expose part of the graph publicly in the Inspector allowing you to make small changes interactively. This is done by making use of the Blackboard interface.
Return to to the Shader Graph and make sure the Blackboard is visible. Toggle the Blackboard button in the upper right if it is hidden.
Adding Base Texture and Normal Map properties
Now you’re going to expose the both the base texture and normal map to be accessible from the Inspector.
Click the + icon in the upper right of the Blackboard. Select Texture 2D from the dropdown. An entry should appear on the Blackboard. Rename it BaseTexture.
Make sure exposed is checked. If you chose to expose a property, it’s public and available in the Inspector.
To add the property to the graph, simply drag it by the label and drop it into the workspace. Leave it somewhere to the left of the Sample Texture 2D node.
Connect the BaseTexture port to the Texture input port on the SampleTexture 2D that plugs into the Albedo. This will replace the previous set value.
Repeat the same process for the Normal Map as well. Click the + icon and create a new Texture 2D. Rename it Normal Map.
Drag it into the workspace area and plug it into the Sample Texture 2D for the Normal map.
Click Save Asset, and return to the main Editor window.
Select the Glow_Mat material then take not of the two extra fields in the Inspector: Base Texture and Normal Map.
Because they currently have no Textures set, your preview window shows your green highlight over a gray sphere.
Select the WoodAlbedo and WoodNormal textures for the BaseTexture and NormalMap, respectively.
The wood textures now display correctly underneath the glowing edges.
Exposed properties allow the user to input data directly into the shader without needing to edit the shader graph itself. Experiment on your own with choosing different base texture and normal maps.
Adding Glow Size and Glow Color properties
Now you’re going to expose other types of properties on the Blackboard as well. For example, it would be useful to allow the user to adjust the Fresnel Effect Power value.
Click the + icon in the Blackboard and create a Vector1 property. This represents a single float parameter.
Rename it GlowSize.
You can limit what values can be entered in this property by converting it to a slider. Switch the Mode to Slider, then set a Min of 0.05 and a Max of 6 to define the range. Set the Default value to 5.
Drag the GlowSize property into the workspace area. Plug the output port into the Fresnel Effect Power input.
Finally, allow the user to set the glow color through a property as well. Instead of creating the property from the Blackboard, you’ll convert an existing node in your graph.
Select the Color node, then right-click and select Convert to Property.
The Color node converts into a color property on the Blackboard that is no longer editable directly in the graph. Rename this property to GlowColor.
Click Save Asset, and return to the main Editor window.
Select the Glow_Mat material in the Project window. You should see a GlowSize slider and a GlowColor color chip available in the Inspector.
Edit your material values to your liking. Finally, enter Play mode to test your work.
You now have a customizable highlight that you can tweak to your heart’s content!
Where to Go From Here
Congratulations! You can now create your very own shaders with Shader Graph!
With some creativity, you might surprise yourself with what else you can create. Want to make a cool, sci-fi laser beam or forcefield? You could adapt what you’ve done here into just the right shader.
Though there are literally hundreds of nodes to explore, this tutorial should have helped you to get started using Shader Graph.
If you have an questions or comments, please join the forum discussion below.
|
https://www.raywenderlich.com/3744978-shader-graph-in-unity-for-beginners
|
CC-MAIN-2021-17
|
refinedweb
| 3,656
| 66.74
|
Opened 9 years ago
Closed 9 years ago
#5808 closed (duplicate)
Cannot pickle the default filters
Description
Reproduction code:
from django import template
t = template.Template("{{ firstname|capfirst }}")
c = template.Context("firstname": "simon")
p = pickle.dumps(t)
print pickle.loads(p).render(c)
Expected result:
Simon
Actual result:
<class 'pickle.PicklingError'>: Can't pickle <function _dec at 0x83c010c>: it's not found as django.template.defaultfilters._dec
Attachments (1)
Change History (5)
Changed 9 years ago by
comment:1 Changed 9 years ago by
The
__name__ attribute is write-only in Python 2.3.
This might be fixed by Jeremy Dunck's recent decorator work (which isn't going in just yet because we need to get some license clearance).
We're probably unlikely to jump through a lot of hoops to make pickling work, though, if more than this is required.
comment:2 Changed 9 years ago by
comment:3 Changed 9 years ago by
I see. This is related to #3558.
Since name is read-only in Python 2.3, it still possible for this to work by wrapping the patch in a try statement.
try: _dec.__name__ = func.__name__ except TypeError: pass
This breaks pickling in Python 2.3, but permits it elsewhere.
Pickling is useful so that we can cache templates in memcached. If template parsing is non-trivial, you really want to cache these objects.
Patch
|
https://code.djangoproject.com/ticket/5808
|
CC-MAIN-2017-04
|
refinedweb
| 231
| 67.96
|
Utility class for NOX::Direction::Broyden method to manage the information stored in "limited" memory. More...
#include <NOX_Direction_Broyden.H>
Utility class for NOX::Direction::Broyden method to manage the information stored in "limited" memory.
Store up to
MemoryUnit objects where
is passed to reset(). Every time push() is called, a new MemoryUnit is added. If there are already
MemoryUnit's, the oldest is bumped off the list. The zero-th entry is always the oldest.
Constructor.
Does nothing.
Destructor.
Does nothing.
Return the ith MemoryUnit (unchecked access)
The zero entry is the oldest memory. The m-1 entry is the newest entry (where m denotes the memory size).
Add new information to the memory.
We need to calculate where the new udpate should be stored in #memory and update the information in #index.
Let k denote the index of where the new update should be stored. If there are current m items stored in memory and m < mMax, then we set k = m. Otherwise, we set k equal to the location of the oldest update. The oldest update is deleted to make room for the new update. In both cases, #index must be updated appropriately so that the first (zero) entry points to the oldest update and the last entry points to the newest update.
Reset the memory.
Sets the size of the #index vector to be zero.
|
http://trilinos.sandia.gov/packages/docs/r10.12/packages/nox/doc/html/classNOX_1_1Direction_1_1Broyden_1_1BroydenMemory.html
|
CC-MAIN-2013-48
|
refinedweb
| 228
| 58.38
|
sandbox/vatsal/DropOnDropImpact/dropOnDropImpact.c
Experiment:We investigate the dynamics of an oil drop impacting an identical sessile drop sitting on a superamphiphobic surface. One example of such impacts:
On this page, I am presenting the code that we used to simulate the process shown in the above video. The results presented here are currently under review in Science Advances. For codes for all the cases included in the manuscript, please visit the GitHub repository. I did not upload all of them here to avoid repeatability.
Numerical code
Id 1 is for the sessile drop, and Id 2 is mobile/impacting drop.
#include "grid/octree.h" #include "navier-stokes/centered.h" #define FILTERED // Smear density and viscosity jumps
To model non-coalescing drops, we use two different Volume of Fluid tracers (f1 and f2). For this, we use a modified version of two-phase.h. A proof-of-concept example is here.
#include "two-phaseDOD.h" #include "tension.h" #include "distance.h"
We use a modified adapt-wavelet algorithm available (here). It is written by César Pairetti (Thanks :)).
#include "adapt_wavelet_limited.h" int MAXlevel = 12; // maximum level #define MINlevel 5 // maximum level #define tsnap (0.01) // timestep used to save simulation snapshot // Error tolerances #define fErr (5e-4) // error tolerance in VOF #define K1Err (1e-4) // error tolerance in curvature of sessile drop #define K2Err (1e-4) // error tolerance in curvature of mobile/impacting drop #define VelErr (5e-3) // error tolerances in velocity #define Mu21 (6e-3) // viscosity ratio between the gas and the liquid #define Rho21 (1./770) // density ratio between the gas and the liquid #define Zdist (3.) // height of impacting drop from the substrate
In the manuscript, offset parameter \chi is defined as: \displaystyle \chi = \frac{d}{2R} Here, d is the distance between the axes of two drops, and R is the equivalent radius of the drop. In the simulations, we input 2\chi (as the length scale is R).
#define Xoffset (0.50) #define R2Drop(x,y,z) (sq(x-Xoffset) + sq(y) + sq(z-Zdist)) // Equation for the impacting drop #define Ldomain 16 // Dimension of the domain
Boundary Conditions Back Wall is superamphiphobic and has the no-slip condition for velocity.
u.t[back] = dirichlet(0.); u.r[back] = dirichlet(0.); f1[back] = dirichlet(0.); f2[back] = dirichlet(0.); double tmax, We, Oh, Bo; int main() { tmax = 7.5;
Weber number is based on the impact velocity, U_0. \displaystyle We = \frac{\rho_lU_0^2R}{\gamma} Note that the impact Weber number we input is slightly less than the actual Weber number that we need. As the impacting drop falls, it also gains some kinetic energy. So, while analyzing the results, the Weber number should be taken at the instant, which one decides to choose as t = 0. These calculations get tricky as one goes to a higher \chi. However, the results do not change much as long as We \sim \mathcal{O}(1).
We = 1.375; // We is 1 for 0.1801875 m/s <770*0.1801875^2*0.001/0.025> init_grid (1 << MINlevel); L0=Ldomain;
Navier Stokes equation for this case: \displaystyle \partial_tU_i+\nabla\cdot(U_iU_j) = \frac{1}{\hat{\rho}}\left(-\nabla p + Oh\nabla\cdot(2\hat{\mu}D_{ij}) + \kappa\delta_sn_i\right) + Bog_i The \hat{\rho} and \hat{\mu} are the VoF equations to calculate properties, given by: \displaystyle \hat{A} = (f_1+f_2) + (1-f_1-f_2)\frac{A_g}{A_l}
Ohnesorge number Oh: measure between surface tension and viscous forces. \displaystyle Oh = \frac{\mu_l}{\sqrt{\rho_l\gamma R}}
Oh = 0.0216; // <0.003/sqrt(770*0.025*0.001)>
Bond number Bo: measure between Gravity and surface tension. \displaystyle Bo = \frac{\rho_lgR^2}{\gamma}
Bo = 0.308; // <770*10*0.001^2/0.025>
Note: The subscript l denotes liquid. Also, the radius used in the dimensionless numbers is the equivalent radius of the drops \left(R = \left(3\pi V_l/4\right)^{1/3}\right). V_l is the volume of the two drops.
Velocity scale as the intertial-capillary velocity, \displaystyle U_\gamma = \sqrt{\frac{\gamma}{\rho_l R}}
This event is specific to César’s adapt_wavelet_limited. Near the substrate, we refine the grid one level higher than the rest of the domain.
Initial Condition
For Bond numbers > 0.1, assuming the sessile drop to be spherical is inaccurate. One can also see this in the experimental video above. So, we used the code (here) to get the initial shape of the sessile drop. We import an STL file.
char filename[60]; sprintf(filename,"Sessile-Bo0.3080.stl"); FILE * fp = fopen (filename, "r"); if (fp == NULL){ fprintf(ferr, "There is no file named %s\n", filename); return 1; } coord * p = input_stl (fp); fclose (fp); coord min, max; bounding_box(p, &min, &max); fprintf(ferr, "xmin %g xmax %g\nymin %g ymax %g\nzmin %g zmax %g\n", min.x, max.x, min.y, max.y, min.z, max.z); origin((min.x+max.x)/2. - L0/2, (min.y+max.y)/2. - L0/2, 0.); // We choose (X,Y) of origin as the center of the sessile drop. And substrate at Z = 0. refine(R2Drop(x,y,z) < sq(1.+1./16) && level < MAXlevel); fraction(f2, 1. - R2Drop(x,y,z)); scalar d[]; distance (d, p); while (adapt_wavelet_limited ((scalar *){f2, d}, (double[]){1e-6, 1e-6*L0}, refRegion).nf); vertex scalar phi[]; foreach_vertex(){ phi[] = (d[] + d[-1] + d[0,-1] + d[-1,-1] + d[0,0,-1] + d[-1,0,-1] + d[0,-1,-1] + d[-1,-1,-1])/8.; } boundary ((scalar *){phi}); fractions (phi, f1); foreach () { u.z[] = -sqrt(We)*f2[]; u.y[] = 0.0; u.x[] = 0.0; } boundary((scalar *){f1, f2, u.x, u.y}); dump (file = "dump");
Note: I think distance.h is not compatible with mpi. So, I ran the file to import .stl file and generate the dump file at t = 0 locally. For this, OpenMP multi-threading can be used.
return 1; } }
Gravity is added as a body forces. It would be nice to use something like reduced.h. But, I could not figure out how to do it with two different VoF tracers.
event acceleration(i++) { face vector av = a; foreach_face(z){ av.z[] -= Bo; } }
Adaptive Mesh Refinement
We refine based on curvatures of the two drops along with the generally used VoF and velocity fields. This ensures that the refinement level along the interface is MAXlevel.
scalar KAPPA1[], KAPPA2[]; curvature(f1, KAPPA1); curvature(f2, KAPPA2); adapt_wavelet_limited ((scalar *){f1, f2, KAPPA1, KAPPA2, u.x, u.y, u.z}, (double[]){fErr, fErr, K1Err, K2Err, VelErr, VelErr, VelErr}, refRegion, MINlevel); }
Dumping snapshots
event writingFiles (t = 0; t += tsnap; t <= tmax) { dump (file = "dump"); char nameOut[80]; sprintf (nameOut, "intermediate/snapshot-%5.4f", t); dump (file = nameOut); }
Log writing
event logWriting (i++) { double ke = 0.; foreach (reduction(+:ke)){ ke += 0.5*(sq(u.x[]) + sq(u.y[]) + sq(u.z[]))*rho(f1[]+f2[])*cube(Delta); } static FILE * fp; if (i == 0) { fprintf (ferr, "i dt t ke\n"); fp = fopen ("log", "w"); fprintf (fp, "i dt t ke\n"); fprintf (fp, "%d %g %g %g\n", i, dt, t, ke); fclose(fp); } else { fp = fopen ("log", "a"); fprintf (fp, "%d %g %g %g\n", i, dt, t, ke); fclose(fp); } fprintf (ferr, "%d %g %g %g\n", i, dt, t, ke); }
Running the code
Use the following procedure:
Step 1: Importing the stl file and generating the first dump file
#!/bin/bash mkdir intermediate qcc -fopenmp -O2 -Wall dropOnDropImpact.c -o dropOnDropImpact -lm export OMP_NUM_THREADS=8 ./dropOnDropImpact
Step 2: Follow the method described (here). Do not forget to use the dump file generated in the previous step.
Output and Results
The post-processing codes and simulation data are available at: PostProcess
The process
Velocity vectors and the field at the Y = 0 slice
|
http://basilisk.fr/sandbox/vatsal/DropOnDropImpact/dropOnDropImpact.c
|
CC-MAIN-2020-24
|
refinedweb
| 1,294
| 57.37
|
In Part I of this guide I’ve explained the process of cross-forest migration and the differences between using ADMT first or using Prepare-MoveReuqest.Ps1 script first, I’ve also explained the migration scenario and the current environment.
Starting from this part we will talk about migration challenges:
- Primary Account: AD User in the Egypt forest used for Windows logon, email and everything.
- Secondary Account: AD user account created in Tailspin.com forest used for other applications hosted in tailspin, this user used to grant permissions and create profiles on Terminal Service servers.
So the main target is we need to use the existing users in the target forest (tailspin.com), the consequences of this decision as follow:
- We can’t use the script first; the script in this scenario will create a new disabled Mail Enabled User (MEU), and will ignore the existing user account.
- We will have to run ADMT first to synchronize the password, migrate SID history, and other AD attributes from the source forest to the target forest.
- We need to provide prepare-moverequest.ps1 with attribute that links the user in the target forest with the user in the source forest, so when the script runs it will merge all the properties from the source account to the target account and will not create a new user.
Before going to the detailed steps, let’s see the history and the relation between ADMT and Prepare-MoveRequest.Ps1:
- ADMT and Exchange Attributes: By default ADMT doesn’t migrate Exchange attributes including “mail”, “proxyAddresses”, anything started by msexch. The reason behind that because when ADMT transfers Exchange attributes (e.g. homeMDB, homeMTA, showInAddressBook, msExch*) that will make the target user looks like a legacy mailbox in the target domain. This leaves the target account in an invalid state (e.g. homeMDB still points to the old forest) which is unexpected for the PrepareMoveRequest.ps1 script. To prevent this, Exchange attributes are excluded from ADMT.
- In Exchange RTM, running ADMT first and then running the PrepareMoveRequest script: When a user is created via ADMT, the PrepareMoveRequest script doesn’t work since there are no proxyAddresses for the script to match the source forest user with the target forest user (the script is looking for some attributes in the target account and try to match it with the source account).
The recommended approach was to copy at least 1 proxy address using ADMT. Even after doing that and using the -UseLocalObject parameter, the script will only copy the 3 mandatory parameters (msExchMailboxGUID, msExchArchiveGUID, msExchArchiveName). This is not very useful as the other mandatory attributes are not copied.
- In SP1, a new switch has been added –OverwriteLocalObject which mainly designed when running ADMT first scenario. ADMT can copy the SIDhistory, password, and proxyAddresses, and the PrepareMoveRequest script can attach the source account with the target account and sync the other email attributes. In this case, it will copy attributes from source to target, so it’s the opposite of UseLocalObject.
PrepareMoverRequest.ps1:
The PrepareMoveRequest.ps1 script can identify and match existing accounts in the target forest based on their SMTP address (proxyAddresses attribute).
The script will only use the existing target accounts if all the following are true:
- The target account has a value in proxyAddresses which matches one of the proxyAddresses of the source account.
- The target account is a mail enabled user (healthy MEU that can be retrieves with Get-MailUser command, which means it must have mail attributes like ‘mail’, ‘ExternalEmailAddress’, etc).
- You need to specify the -UseLocalObject parameter in the script. (and –OverwriteLocalObject)!
This Guide will explain the detailed steps required to do cross forest migration from source forest running
In Part I of this guide I’ve explained the process of cross-forest migration and the differences between
In this part of Cross-Forest Migration Guide we will solve the second challenge but before that let’s
Have you gotten the powershell error the term ‘enable-mailuser’ is not recognized as the name of a cmdlet, function, script file, or operable program?
import the exchange module
The only things wrong with this blog is that I didn’t find it first !
Thank you very much
Thanks, it’s great information, this article well explained the process of cross-forest exchange migration. I already tried the automated solution from to perform cross forest mailbox moves migrating from Exchange 2007 to Exchange 2010. This tool migrate both mailbox
and public folder data from exchange 2007 server to another exchange server. This tool helps to migrate to-and-fro between exchange server and public folder, intra forest and cross forest exchange server.
Has anyone used Priasoft.com to migrate? They have this feature "dry-run" that seems very interesting, but can I use both prepare-moverequest.ps1 and their tool?
Does this dry-run feature seem like a good idea?
|
https://blogs.technet.microsoft.com/meamcs/2011/10/25/exchange-2010-cross-forest-migration-step-by-step-guide-part-ii/
|
CC-MAIN-2016-40
|
refinedweb
| 814
| 60.24
|
Hi All,
I have generated the data in the form of Long Keys and Double values and inserting it in Aerospike as
Aerospike.client.put(this.writePolicy, new Key(“test”, “profile”, keyGet), new Bin(this.binName, value)); keyGet is String Key value is Double.
I am also getting back these values as Key key = new Key(this.namespace, this.set, keyStr); //keyStr is String Key this.client.get(this.readPolicy, key, this.binName);
so above put and get is working fine. But I am not able to implement a Map Reduce for these records. What I want is - I want to add all the double values stored against all the keys and return the total sum from the UDF using Map and Reduce functions.
Thanks Nitin
|
https://discuss.aerospike.com/t/map-reduce-for-records-using-udf-against-string-key-and-double-value/456
|
CC-MAIN-2019-09
|
refinedweb
| 126
| 75.91
|
![if gte IE 9]><![endif]><![if gte IE 9]><![endif]><![if gte IE 9]><![endif]><![if gte IE 9]><![endif]>
Part 2 of this blog series derived the basic transfer function for a typical 2-wire transmitter, commonly used in industrial control and automation, and explained the currents flowing inside it. It also explained that connecting the transmitter return (IRET) to the loop supply ground (GND) prevents proper operation of the transmitter.
Today’s post discusses the current consumption limits of the circuitry that is powered from the VREG and VREF outputs. Equation 1 and Figure 1 show that the output current (IO) equals the sum of the input current (IIN), the transmitter return current (IRET) and the BJT current (IBJT).
Figure 1: 2-wire transmitter showing current flow
IBJT and IIN will always be positive in this circuit configuration. Therefore, IRET currents that are greater than 4mA will prevent the U1 op amp from regulating the output current down to 4mA as required. The XTR116 quiescent current of 200uA flows through IRET, leaving roughly 3.8mA for the remaining circuitry powered from VREF and VREG. The output will not reach 4mA at the zero-scale input level if the external circuitry sinks more than 3.8mA to IRET.
Figure 2 shows the output current simulation results when the IBRIDGE, IINA and IREF_INA currents in Figure 1 total to 5.8mA. Notice that the output saturates at around 6mA, despite the input signal continuing to decrease. This results in an inability to properly convey the sensor information near the zero-scale levels.
Figure 2: The effects of IRET current greater than 4mA
The use of low-resistance bridges is one of the most common causes for the IRET current to be greater than 4mA. Connecting a 120Ω strain gauge bridge to the 4.096V VREF output would require 34mA of current! Adding the RBIAS1 and RBIAS2 resistors to the circuit reduces the bridge current to an acceptable level, as shown in Figure 3.
Figure 3: Using biasing resistors to reduce bridge currents
Equation 2 can be used to select the RBIAS resistor values based on a desired bridge current. The bridge current was set to roughly 1mA in this example to leave adequate current for other external circuitry.
The instrumentation amplifiers (INAs) and other external circuitry used in the design must use less than the remaining current available after powering the sensor. Table 1 lists a few low-voltage INAs with quiescent currents that make them suitable for 4-20mA systems.
While the previous circuit examples have all used INAs, op amps are also commonly used to condition sensor signals in 2-wire transmitters. Table 2 lists op amps recommended for 2-wire transmitter circuits.
In summary, make sure the XTR and other circuitry powered from the loop regulators and references don’t exceed 4mA of operational current, or the output will saturate at low input levels. Be sure to use the active, and not just quiescent, currents of all components used in the design when determining the total current consumption. This is especially true with microcontrollers and other digital devices, which can consume considerably more current when active than inactive.
In my post next month on the Precision Hub we’ll talk about properly isolating and accepting pulse width modulated (PWM) inputs along with other digital input signals from devices not powered from the 2-wire loop circuitry.
Additional resources:
See related TI Designs reference designs for two-wire transmitters: TIPD126 and TIPD158.
See an overview of analog outputs and architectures from TI’s Kevin Duke.
Read about the evolution of 3-wire analog outputs.
See more posts in our 2-wire transmitters series.
Dear Collin Wells, first of all I would thanks for your series introduction about 2-wire 4-20mA blogs, it is very helpful! Here I have a puzzle in my developing a level transmitter with a sensor. The sensor include a DSP and RF parts and power consumption like this:
DPS_ Peak current=76.1mA Voltage=3.3V and the peak time=13ms;
RF chip _ Peak current=210mA Voltage=3.3V and the peak time=4ms;
Do you think with this sensor power consumption can develop a two wire transmitters?the measure frequency of the transmitter is not very high.
Any suggestions?
Thank you very much beforehand!
Hello,
I'm glad you enjoyed the blog series.
With >200mA peak output current requirements I'm not sure you're going to be able to find a way to get your design to work on a 2-wire loop. Consider desiging a 3-wire or 4-wire sensor transmitter to support designs requiring significantly more current than what is possible on a 2-wire.
|
https://e2e.ti.com/blogs_/archives/b/precisionhub/archive/2015/01/13/two-wire-4-20ma-transmitters-background-and-common-issues-part-3
|
CC-MAIN-2019-04
|
refinedweb
| 788
| 54.73
|
Hello everybody,
I just started to learn C, so I hope you don't mind that I will post here the newbie-problems I do run into. I am learning C as a hobby, just as I learned Basic, Pascal and VB(A) in the past. So now I want to learn the "famous C" :P.
Well, here we go....
You can use "strncpy' to copy a number of characters from one string to another. That is handy, but what I really would like is that I can copy a number of characters from a given position in the source string to the destination string. I tried some things....
Having in mind that a string in C is an array of char, I tried this:
strncpy(dest_str, source_str[3], 10)
I hoped that strncpy would start at position 4 of the source string and copied 10 characters. But when I compiled the program, the compiler told me that it hated me and stopped :P.
Then I thought that maybe a pointer could help me. So I tried the next something
But after that I got an casting error.But after that I got an casting error.Code:
#include <stdio.h>
#include <string.h>
int main()
{
char regel1[30];
char regel2[30];
char *ptr;
strcpy(regel1, "Programing is fun and relaxing to do");
ptr = ®el1[10];
strncpy(regel2, *ptr, 3);
}
Okay, so how to solve this problem... can I use "strncpy" at all to get done what I want? I mean, something like that would be nice for other functions as "memset" too.
Thank you already for your answering and advices.
Joke.
(BTW the name Joke is no joke :). It is very common Dutch girls name and is pronounced as "jo-ke" and not as the English word joke).
|
http://cboard.cprogramming.com/c-programming/105151-strncpy-adavnced-p-printable-thread.html
|
CC-MAIN-2015-35
|
refinedweb
| 298
| 90.9
|
Scrape TripAdvisor Reviews using Python
Scrape TripAdvisor Reviews using Python
Send download link to:
TripAdvisor is the most popular website to search for best hotels, restaurants, sightseeing places, adventure gaming and almost anything for a nice trip. Scrape TripAdvisor reviews provide very useful data..
TripAdvisor reviews play a very important role. Most of the people visit site just to check reviews of a particular hotel, restaurant, city, tourist spot etc or look for recommendation based on other peoples experience. So using scraped reviews a customer can do sentiment analysis or create a recommendation engine to find best places or a hotel, restaurant can learn from reviews and improve their services.
In this tutorial we will go to TripAdvisor and search for hotels in Paris and Scrape their reviews.
To scrape reviews we need to go to each individual hotel page so we need to grab the link of each page from href as shown below:
After grabbing all the link we need to change them dynamically to scrape reviews from multiple pages. Watch the video for detailed explanation of this.
See complete code below:
Import Libraries
Import Libraries
import requests from bs4 import BeautifulSoup as soup
Send Get Request:
html = requests.get('') bsobj = soup(html.content,'lxml')
Grab all links:
links = []
for review in bsobj.findAll('a',{'class':'review_count'}): a = review['href'] a = ''+ a a = a[:(a.find('Reviews')+7)] + '-or{}' + a[(a.find('Reviews')+7):] print(a) links.append(a)
links
Output:
Scrape Reviews:
from random import randint
from time import sleep
reviews = []
for link in links:
d = [5,10,15,20,25]
headers = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36'} html2 = requests.get(link.format(i for i in range(5,1000,5)),headers=headers) sleep(randint(1,5)) bsobj2 = soup(html2.content,'lxml') for r in bsobj2.findAll('q'): reviews.append(r.span.text.strip()) print(r.span.text.strip())
reviews
Output:
Try this code to grab TripAdvisor reviews and use that data in your further business analysis or you can use our TripAdvisor scraping services. We can extract bulk data for you and provide in desired format.
|
https://www.worthwebscraping.com/scrape-tripadvisor-reviews-using-python/
|
CC-MAIN-2021-04
|
refinedweb
| 368
| 57.98
|
Subject: Re: [boost] [Review] Type Traits Introspection library by Edward Diener starts tomorrow Friday 1
From: John Maddock (boost.regex_at_[hidden])
Date: 2011-07-05 04:38:44
OK here's my review of this library.
First off the headline - yes I believe it's a useful library that should be
accepted into Boost, subject to some of the comments below.
> - What is your evaluation of the design?
Probably too many macros - see detailed comments below, otherwise fine.
> -)
3) I was confused by the section "Macro Metafunction Name Generation", I
think some examples would help a lot. Does the variety of macros supplied
get simplified by just dumping the generated trait in the current namespace?).
5) The description of "Table 1.4. TTI Nested Type Macro Metafunction
Existence" makes no sense to me (though it might later in the docs of
course).
6) I found code formatting such as:
boost::tti::has_static_member_data_DSMember
<
T,
short
>
A little hard to read, as long as the lines don't get too long I would much
prefer:
has_static_member_data_DSMember<T, short>
7) I'm not sure about the term "composite types": to me it's a member
function pointer type, or if you prefer a function signature. So instead of
BOOST_TTI_HAS_COMP_MEMBER_FUNCTION I'd prefer
BOOST_TTI_HAS_MEMBER_FUNCTION_WITH_SIG.'m also thinking that what folks really want to know is: "Is there a
function named X, that can be called with arguments of types A, B and C".
It would be interesting to see how far along this road we could actually
get?!
9) Great shame about the nested function template issue - hopefully someone
can find a workaround.??).
11) In the docs the main header file should be given as boost/tti/tti.hpp
rather than just tti.hpp.
12) Testing BOOST_TTI_HAS_COMP_MEMBER_FUNCTION(begin)
I see that this static assertion fails:
BOOST_STATIC_ASSERT((boost::tti::has_comp_member_function_begin<std::vector<int>::const_iterator
(std::vector<int>::*const)(void)>::value));
Have I done something silly? Testing const-member functions seems OK with
BOOST_TTI_HAS_MEMBER_FUNCTION BTW.
Regards, John.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
|
https://lists.boost.org/Archives/boost/2011/07/183460.php
|
CC-MAIN-2019-22
|
refinedweb
| 353
| 57.06
|
jMock 1: Mocking Classes with CGLIB
Because it uses Java's standard reflection capability, the default
configuration of the jMock framework can only mock interfaces, not
classes. The optional
org.jmock.cglib extension package uses
the CGLIB 21 library to create
mock objects of classes as well as interfaces.
To use the CGLIB extension:
- Download CGLIB 22 and the jMock CGLIB extension JAR3 or a full jMock source snapshot4.
- Add CGLIB and the
jmock-cglib-version
.jarfile to the CLASSPATH. If you have downloaded a source snapshot, compile the source and add the output directory to your classpath.
- Make your test cases extend
org.jmock.cglib.MockObjectTestCase:
import org.jmock.Mock; import org.jmock.cglib.MockObjectTestCase; class MyTest extends MockObjectTestCase { ... }
- Your tests can now create mocks of abstract or concrete classes:
Mock mockGraphics = mock(java.awt.Graphics.class,"mockGraphics");
You can pass arguments to the constructor by passing the argument types and values to the
mockmethod:
Mock mockedClass = mock(SomeClass.class,"mockList", new Class[]{String.class, int.class}, new Object[]{"Hello", new Integer(1)});
The
MockObjectTestsCase class defined in the
org.jmock.cglib package is completely compatible with that of the vanilla
jMock API. The only difference is that it uses CGLIB to create proxies
instead of the Java reflection API. This makes it easy to convert a test
class to use the CGLIB extension when you find that there is no way you
can possibly avoid mocking a concrete class: just change the base class of
the test class to
org.jmock.cglib.MockObjectTestsCase and all
the mock objects will then use CGLIB to proxy calls.
|
http://www.jmock.org/jmock1-cglib.html
|
CC-MAIN-2017-30
|
refinedweb
| 267
| 58.89
|
Porting Nut/OS for the GameBoy Advance
Introduction
This article explains the tasks to be done when porting Nut/OS to the GameBoy Advance. It can be used as an example of how to port Nut/OS to new hardware platforms. This project isn't finished.
The GameBoy hardware is based on an ARM7 CPU. Additional hardware
is required to upload and run a Nut/OS application on the GBA.
The XG2 Turbo Link with USB programming interface.
The Xport 2.0 with parallel port programming interface.
Step 1: Setting up a Build Environment
The build environment we use for the ARM7 CPU is based on the GNU toolchain and is available on Windows NT/2K/XP and Linux.
When using Windows, then the Cygwin environment should be installed first. It is available at. Cygwin is a UNIX emulator for Windows and for which a number of optional packages are available. Make sure to install at least gcc, make, wget sharutils and tcltk. When this installation had been done, you can install the GCC toolchain. Fortunately a pre-build one is available from ecos.sourceware.org.
Step 2: Adapting the Nut/OS Configurator
The Nut/OS Configurator is controlled by Lua Scripts, which allow to adapt it to different platform requirements. If you can't follow the each detail in this chapter, don't worry. The Configurator is a very new tool and has already seen many changes. It's still inconsistent in many parts and the scripting part is almost undocumented yet. When adding new hardware, the best way is to use copy and paste as well as trial and error. Fortunately the ARM7 CPU is already supported and not many changes are required to add GBA support. A lot more effort would be needed when using a completely different CPU,
The main script file is conf/repository.nut, which mainly contains a few global values and several definitions to include additional script files. The only required change in this file is to add another choice for the GCC linker.
arm_ld_choice = { "s3c4510b-ram", "eb40a_ram", "gbaxport2" }
gbaxport2to reflect its usage with the GBA including the XPort 2 hardware.
The Lua array
arm_ld_choice is used in the script
conf/tools.nut, specifically in the
makedefs part
of the the component
ARM_LDSCRIPT
{ macro = "ARM_LDSCRIPT", brief = "ARM Linker Script", description = "s3c4510b-ram Samsung S3C4510B, code in RAM\n".. "eb40a_ram Atmel AT91R40008, code in RAM at 0x100\n".. "gba_xport2 Gameboy Advance with XPort 2\n", requires = { "TOOL_CC_ARM", "TOOL_GCC" }, flavor = "booldata", type = "enumerated", choices = arm_ld_choice, makedefs = { "LDNAME", "LDSCRIPT=$(top_srcdir)/arch/arm/ldscripts/$(LDNAME).ld" } }
makedefspart will create two lines in the file NutConf.mk.
LDNAME=gbaxport2 LDSCRIPT=$(top_srcdir)/arch/arm/ldscripts/$(LDNAME).ld
OBJ1 = arm/init/crt$(LDNAME).o
{ name = "nutarch_cstartup_arm", brief = "ARM-GCC Startup", sources = { "arm/init/crt$(LDNAME).S" }, targets = { "arm/init/crt$(LDNAME).o" }, requires = { "TOOL_CC_ARM" }, }
nutarch_mcurequires a new entry to reflect the properties of the ARM7 CPU used in the GameBoy Advance.
{ macro = "MCU_GBA", brief = "Nintendo GBA", description = "ARM7TDMI 16/32-bit RISC microcontroller", flavor = "boolean", file = "include/cfg/arch.h", requires = { "TOOL_CC_ARM" }, provides = { "HW_TARGET", "HW_MCU_GBA", "HW_TIMER_GBA", "HW_LCD_GBA" }, makedefs = { "MCU=arm7tdmi" } }
provides, which tells the configurator:
- This platform offers a hardware target. OK, this is quite obvious, but required. As long as no component providing a HW_TARGET is enabled, most Nut/OS components are disabled.
- This platform is of type MCU_GBA. This item may be used to enable other components specifically written for the GameBoy.
- This platform provides a hardware timer of type TIMER_GBA. Again this can be used to enable other hardware specific components.
- This platform provides an LCD screen of the type LCD_GBA. This enables the GBA LCD driver, which we will have to write later.
{ name = "nutdev_debug_gba", brief = "LCD Debug Output (GBA)", requires = { "HW_LCD_GBA" }, provides = { "DEV_FILE", "DEV_WRITE" }, sources = { "debug_gba.c" } }
DEV_FILEand
DEV_WRITEfor the uppper layers of the Nut/OS I/O System. For now we stick with a simple debug output and thus name the source file debug_gba.c. In the next chapter we will take a close look to this device driver.
Step 3: C Runtime Initialization
Unfortunately something really complicated has to be done first. Doing a C runtime initialization from scratch requires some in depth knowledge of the C compiler and the target hardware. Generally it consists of two files, an assembler routine and a linker script. According to our configuration done in the last step, the assembler routine will go to a file named arch/arm/init/crtgbaxport2.S and the linker script is arch/arm/ldscripts/gbaxport2.ld.
Fortunately there are many more or less complicated sources available for most platforms. Actually nothing special has to be done for Nut/OS except the naming of some items. Thus we can use an exisiting source with a few modifications.
__heap_startneeds to be defined in the linker script to define the end of the bss. Nut/OS needs this address for the beginning of heap memory.
- Instead of jumping to
main(), the runtime initialization should call
NutInit(). As shown in the AVR port, this is not really required, but it's much cleaner than redefining
main()during compilation and possibly confusing some debuggers.
Step 4: Creating an output device
It is a good idea to implement some kind of simple output first, which allows us to do display some debug output while porting other parts. On most hardware platforms with UART hardware this is quite simple. Just make a copy of an exisiting debug driver source code and replace the hardware speicifc parts.
On the GBA this is different. Actually there is something like an UART device inside, but it's not available without additional hardware. The LCD would make a perfect output device, but creating a driver is a bit more complicated. Fortunately there are several resources available in the Web to learn from.
The name of the source file had been specified in the previous chapter as debug_gba.c and should be located in subdirectory dev of the Nut/OS source tree.
The first part of every Nut/OS driver is the NUTDEVICE structure.
NUTDEVICE devDebug0 = { 0, /*!< Pointer to next device, dev_next. */ {'c', 'o', 'n', 0, 0, 0, 0, 0, 0}, /*!< Unique device name, dev_name. */ 0, /*!< Type of device, dev_type. */ 0, /*!< Base address, dev_base. */ 0, /*!< First interrupt number, dev_irq. */ 0, /*!< Interface control block, dev_icb. */ 0, /*!< Driver control block, dev_dcb. */ DebugInit, /*!< Driver initialization routine, dev_init. */ DebugIOCtl, /*!< Driver specific control function, dev_ioctl. */ 0, /*!< dev_read. */ DebugWrite, /*!< dev_write. */ DebugOpen, /*!< dev_opem. */ DebugClose, /*!< dev_close. */ 0 /*!< dev_size. */ };
#include <dev/debug.h> NutRegisterDevice(&devDebug0, 0, 0);
For our simple debug output device, most entries in the NUTDEVICE structure are unused. The remaining are
DebugInit: Called during device registration to initialize the hardware. Some code is required to switch the GBA to the proper LCD mode and to load a character font.
DebugIOCtl: Used by the application to set device specific parameters like baudrate, data format etc. To keep things simple, our routine may always return -1. This tells the application that this function is not supported.
DebugWrite: This will be called by the I/O system for sending its output to the device. For LCD screens we have to keep track of the cursor position.
DebugOpenand
DebugClose: Just a simple adaption to the Nut/OS file system. Nothing exciting is done here for our debug device.
If available, testing and debugging of the device driver may be done with a standard runtime initialization and linker script. Nut/OS is not required yet. Just compile a simple source file with a main entry and the routines under test.
Step 5: Building Nut/OS Partly
The device driver created in the previous step will be most helpful with the Nut/OS standard I/O routines. Note, that we didn't touch any of the hardware specific core routines yet, which will finally make the port complete:
- Context switching
- Timer handling
- Other device drivers
We will use the Configurator to create a build tree and try to compile
it. Because of
DEV_FILE and
DEV_WRITE provided
by LCD driver, all required components for debug output will be
included in the build process and the resulting libraries should
be sufficient for our first Nut/OS application running on the GBA.
We can also use the Configurator to create a sample directory with proper makefiles. Of course we won't be able to compile any of the sample applications with our minimal system. We will create some kind of dummy application in a new subdirectory within the sample tree.
Because we cannot compile nutinit.c without context switching support,
we skip this and replace
main() by
NutInit()
in our application code. Because nutinit.c will also initialize the
heap and because heap memory is required by the standard I/O library,
we will do this in our application code as well. Here it is:
#include
#include #include #include #include char lheap[4096]; FILE *con; /* * GBA console sample. */ int main(void) { int i; NutHeapAdd(lheap, sizeof(lheap)); NutRegisterDevice(&devDebug0, 0, 0); con = fopen("con", "w"); for (i = 0;; i++) { fprintf(con, "Hello world %d!\n", i); } }
PROJ = gbatest include ../Makedefs SRCS = $(PROJ).c OBJS = $(SRCS:.c=.o) LIBS = -lnutos -lnutdev -lnutos -lnutcrt TARG = $(PROJ).hex all: $(OBJS) $(TARG) include ../Makerules upload: $(PROJ).elf arm-elf-objcopy -v -O binary $(PROJ).elf $(PROJ).bin xpcomm $(PROJ).bin clean: -rm -f $(OBJS) -rm -f $(TARG) -rm -f $(SRCS:.c=.lst)
The xpcomm utility comes with the XPort2 and used to upload the binary code to the target.
Where to go from here?
The GBA port hasn't been finished yet. Timer interrupt handling will be next. It is very hardware specific. Hopefully context switching doesn't seem to be much work, if any at all. It had been done for the AT91 already, which is an ARM7 as well. When this has been done, the real fun starts. One next device driver will be for the Ericsson chat keyboard, which had been already attached to my GBA.
TImer and Thread Sample
Here's a simple application, which uses two core functions of the kernel, timer and multithreading.
#include <stdio.h> #include <sys/version.h> #include <sys/heap.h> #include <sys/timer.h> #include <sys/thread.h> #include <dev/debug.h> THREAD(Back, arg) { for(;;) { NutSleep(1000); putchar('\n'); } } int main(void) { int i; NutRegisterDevice(&devDebug0, 0, 0); freopen("con", "w", stdout); printf("Nut/OS %s on GBA\n", NutVersionString()); printf("%lu bytes free\n", NutHeapAvailable()); NutThreadCreate("back", Back, 0, 4096); for (i = 1;; i++) { printf("\r%6ld ms", NutGetMillis()); NutSleep(100); } }
Uploading the binary to the GameBoy.
XG2 Linker start menu.
Select the Nut/OS application.
Sample application running.
|
http://ethernut.de/en/portarm/gbaxport2.html
|
CC-MAIN-2018-26
|
refinedweb
| 1,769
| 59.09
|
C++ has a long, outstanding history of tricks and idioms. One of the oldest is the curiously recurring template pattern (CRTP) identified by James Coplien in 1995 [Coplien]. Since then, CRTP has been popularized and is used in many libraries, particularly in Boost [Boost]. For example, you can find it in Boost.Iterator, Boost.Python or in Boost.Serialization libraries.
In this article I assume that a reader is already familiar with CRTP. If you would like to refresh your memory, I would recommend reading chapter 17 in [Vandevoorde-]. This chapter is available for free on.
If you look at the curiously recurring template pattern from an OO perspective you'll notice that it shares common properties with OO frameworks (e.g. Microsoft Foundation Classes) where base class member functions call virtual functions implemented in derived classes. The following code snippet demonstrates OO framework style in its simplest form:
// Library code class Base { public: virtual ~Base(); int foo() { return this->do_foo(); } protected: virtual int do_foo() = 0; };
Here, Base::foo calls virtual function do_foo, which is declared as a pure virtual function in Base and, therefore, it must be implemented in derived classes. Indeed, a body of do_foo appears in class Derived:
// User code class Derived : public Base { private: virtual int do_foo() { return 0; } };
What is interesting here, is that an access specifier of do_foo has been changed from protected to private. It's perfectly legal in C++ and it takes a second to type one simple word. What is more, it's done intentionally to emphasize that do_foo isn't for public use. (A user may go further and hide the whole Derived class if she thinks it's worth it.)
The moral of the story is that a user should be able to hide implementation details of the class easily.
Now let us assume that restrictions imposed by virtual functions are not affordable and the framework author decided to apply CRTP:
// Library code template<class DerivedT> class Base { public: DerivedT& derived() { return static_cast<DerivedT&>(*this); } int foo() { return this->derived().do_foo(); } }; // User code class Derived : public Base<Derived> { public: int do_foo() { return 0; } };
Although do_foo is an implementation detail, it's accessible from everywhere. Why not make it private or protected? You'll find an answer inside function foo. As you see, the function calls Derived::do_foo. In other words, base class calls a function defined in a derived class directly.
Now, let's find an easiest way for a user to hide implementation details of Derived. It should be very easy; otherwise, users won't use it. It can be a bit trickier for the author of Base but it still should be easy to follow.
The most obvious way of achieving this is to establish a friendship between Base and Derived:
// User code class Derived : public Base<Derived> { private: friend class Base<Derived>; int do_foo() { return 0; } };
This solution is not perfect for one simple reason: the friend declaration is proportional to the number of template parameters of Base class template. It might get quite long if you add more parameters.
To get rid of this problem one can fix the length of the friend declaration by introducing a non-template Accessor that forwards calls:
// Library code class Accessor { private: template<class> friend class Base; template<class DerivedT> static int foo(DerivedT& derived) { return derived.do_foo(); } };
The function Base::foo should call Accessor::foo which in turn calls Derived::do_foo. A first step of this call chain is always successful because the Base is a friend of Accessor:
// Library code template<class DerivedT> class Base { public: DerivedT& derived() { return static_cast<DerivedT&>(*this); } int foo() { return Accessor::foo(this->derived()); } };
The second step succeeds only if either do_foo is public or if the Accessor is a friend of Derived and do_foo is protected. We are interested only in a second alternative:
// User code class Derived : public Base<Derived> { private: friend class Accessor; int do_foo() { return 0; } };
This approach is taken by several boost libraries. For example, def_visitor_access in Boost.Python and iterator_core_access in Boost.Iterator should be declared as friends in order to access user-defined private functions from def_visitor and iterator_facade respectively.
Even though this solution is simple, there is a way to omit the friend declaration. This is not possible if do_foo is private - you will have to change that to protected. The difference between these two access specifiers is not so important for most CRTP uses. To understand why, take a look at how you derive from CRTP base class:
class Derived : public Base<Derived> { /* ... */ };
Here, you pass the final class to Base's template arguments list.
An attempt to derive from Derived doesn't give you any advantage because the Base<Derived> class knows only about Derived.
Our goal is to access protected function Derived::do_foo from the Base:
// User code class Derived : public Base<Derived> { protected: // No friend declaration here! int do_foo() { return 0; } };
Normally, you access a protected function declared in a base class from its child. The challenge is to access it the other way around.
The first step is obvious. The only place for our interception point where a protected function can be accessed is a descendant of Derived:
struct BreakProtection : Derived { static int foo(Derived& derived) { /* call do_foo here */ } };
An attempt to write
return derived.do_foo();
inside BreakProtection::foo fails because it's forbidden according to the standard, paragraph 11.5:
When a friend or a member function of a derived class references a protected nonstatic member of a base class, an access check applies in addition to those described earlier in clause 11. Except when forming a pointer to member (5.3.1), the access must be through a pointer to, reference to, or object of the derived class itself (or any class derived from that class) (5.2.5).
The function can only be accessed through an object of type BreakProtection.
Well, if the function can't be called directly, let's call it indirectly. Taking an address of do_foo is legal inside BreakProtection class:
&BreakProtection::do_foo;
There is no do_foo inside BreakProtection, therefore, this expression is resolved as &Derived::do_foo. Public access to a pointer to protected member function has been granted! It's time to call it:
struct BreakProtection : Derived { static int foo(Derived& derived) { int (Derived::*fn)() = &BreakProtection::do_foo; return (derived.*fn)(); } };
For better encapsulation, the BreakProtection can be moved to the private section of Base class template. The final solution is:
// Library code template<class DerivedT> class Base { private: struct accessor : DerivedT { static int foo(DerivedT& derived) { int (DerivedT::*fn)() = &accessor::do_foo; return (derived.*fn)(); } }; public: DerivedT& derived() { return static_cast<DerivedT&>(*this); } int foo() { return accessor::foo( this->derived()); } }; // User code struct Derived : Base<Derived> { protected: int do_foo() { return 1; } };
Note that the user code is slightly shorter and cleaner than in the first solution. The library code has similar complexity.
There is a downside to this approach, though. Many compilers don't optimize away function pointer indirection even if it's called in-place:
return (derived.*(&accessor::do_foo))();
The main strength of CRTP over virtual functions is better optimization.
CRTP is faster because there is no virtual function call overhead and it compiles to smaller code because no type information is generated. The former is doubtful for the second solution while the latter still holds. Hopefully, future versions of popular compilers will implement this kind of optimization. Also, it's less convenient to use member function pointers, especially for overloaded functions.
[Vandevoorde-] David Vandevoorde, Nicolai M. Josuttis. "C++ Templates: The Complete Guide".
[Boost] Boost libraries..
|
http://accu.org/index.php/journals/296
|
CC-MAIN-2014-52
|
refinedweb
| 1,263
| 53.81
|
Recurrent Neural Networks with Word Embeddings¶
Summary¶
In this tutorial, you will learn how to:
- learn Word Embeddings
- using Recurrent Neural Networks architectures
- with Context Windows
in order to perform Semantic Parsing / Slot-Filling (Spoken Language Understanding)
Code - Citations - Contact¶
Code¶
Directly running experiments is also possible using this github repository.
Papers¶
If you use this tutorial, cite the following papers:
- [pdf] Grégoire Mesnil, Xiaodong He, Li Deng and Yoshua Bengio. Investigation of Recurrent-Neural-Network Architectures and Learning Methods for Spoken Language Understanding. Interspeech, 2013.
- [pdf] Gokhan Tur, Dilek Hakkani-Tur and Larry Heck. What is left to be understood in ATIS?
- [pdf] Christian Raymond and Giuseppe Riccardi. Generative and discriminative algorithms for spoken language understanding. Interspeech, 2007.
- [pdf] Bastien, Frédéric, Lamblin, Pascal, Pascanu, Razvan, Bergstra, James, Goodfellow, Ian, Bergeron, Arnaud, Bouchard, Nicolas, and Bengio, Yoshua. Theano: new features and speed improvements. NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2012.
- [pdf].
Thank you!
Task¶
The Slot-Filling (Spoken Language Understanding) consists in assigning a label to each word given a sentence. It’s a classification task.
Dataset¶
An old and small benchmark for this task is the ATIS (Airline Travel Information System) dataset collected by DARPA. Here is a sentence (or utterance) example using the Inside Outside Beginning (IOB) representation.
The ATIS offical split contains 4,978/893 sentences for a total of 56,590/9,198 words (average sentence length is 15) in the train/test set. The number of classes (different slots) is 128 including the O label (NULL).
As Microsoft Research people,
we deal with unseen words in the test set by marking any words with only one
single occurrence in the training set as
<UNK> and use this token to
represent those unseen words in the test set. As Ronan Collobert and colleagues, we converted
sequences of numbers with the string
DIGIT i.e.
1984 is converted to
DIGITDIGITDIGITDIGIT.
We split the official train set into a training and validation set that contain respectively 80% and 20% of the official training sentences. Significant performance improvement difference has to be greater than 0.6% in F1 measure at the 95% level due to the small size of the dataset. For evaluation purpose, experiments have to report the following metrics:
We will use the conlleval PERL script to measure the performance of our models.
Recurrent Neural Network Model¶
Raw input encoding¶
A token corresponds to a word. Each token in the ATIS vocabulary is associated to an index. Each sentence is a
array of indexes (
int32). Then, each set (train, valid, test) is a list of arrays of indexes. A python
dictionary is defined for mapping the space of indexes to the space of words.
>>> sentence array([383, 189, 13, 193, 208, 307, 195, 502, 260, 539, 7, 60, 72, 8, 350, 384], dtype=int32) >>> map(lambda x: index2word[x], sentence) ['please', 'find', 'a', 'flight', 'from', 'miami', 'florida', 'to', 'las', 'vegas', '<UNK>', 'arriving', 'before', 'DIGIT', "o'clock", 'pm']
Same thing for labels corresponding to this particular sentence.
>>> labels array([126, 126, 126, 126, 126, 48, 50, 126, 78, 123, 81, 126, 15, 14, 89, 89], dtype=int32) >>> map(lambda x: index2label[x], labels) ['O', 'O', 'O', 'O', 'O', 'B-fromloc.city_name', 'B-fromloc.state_name', 'O', 'B-toloc.city_name', 'I-toloc.city_name', 'B-toloc.state_name', 'O', 'B-arrive_time.time_relative', 'B-arrive_time.time', 'I-arrive_time.time', 'I-arrive_time.time']
Context window¶
Given a sentence i.e. an array of indexes, and a window size i.e. 1,3,5,..., we need to convert each word in the sentence to a context window surrounding this particular word. In details, we have:
def contextwin(l, win): ''' win :: int corresponding to the size of the window given a list of indexes composing a sentence l :: array containing the word indexes it will return a list of list of indexes corresponding to context windows surrounding each word in the sentence ''' assert (win % 2) == 1 assert win >= 1 l = list(l) lpadded = win // 2 * [-1] + l + win // 2 * [-1] out = [lpadded[i:(i + win)] for i in range(len(l))] assert len(out) == len(l) return out
The index
-1 corresponds to the
PADDING index we insert at the
beginning/end of the sentence.
Here is a sample:
>>> x array([0, 1, 2, 3, 4], dtype=int32) >>> contextwin(x, 3) [[-1, 0, 1], [ 0, 1, 2], [ 1, 2, 3], [ 2, 3, 4], [ 3, 4,-1]] >>> contextwin(x, 7) [[-1, -1, -1, 0, 1, 2, 3], [-1, -1, 0, 1, 2, 3, 4], [-1, 0, 1, 2, 3, 4,-1], [ 0, 1, 2, 3, 4,-1,-1], [ 1, 2, 3, 4,-1,-1,-1]]
To summarize, we started with an array of indexes and ended with a matrix of indexes. Each line corresponds to the context window surrounding this word.
Word embeddings¶
Once we have the sentence converted to context windows i.e. a matrix of indexes, we have to associate these indexes to the embeddings (real-valued vector associated to each word). Using Theano, it gives:
import theano, numpy from theano import tensor as T # nv :: size of our vocabulary # de :: dimension of the embedding space # cs :: context window size nv, de, cs = 1000, 50, 5 embeddings = theano.shared(0.2 * numpy.random.uniform(-1.0, 1.0, \ (nv+1, de)).astype(theano.config.floatX)) # add one for PADDING at the end idxs = T.imatrix() # as many columns as words in the context window and as many lines as words in the sentence x = self.emb[idxs].reshape((idxs.shape[0], de*cs))
The x symbolic variable corresponds to a matrix of shape (number of words in the sentences, dimension of the embedding space X context window size).
Let’s compile a theano function to do so
>>> sample array([0, 1, 2, 3, 4], dtype=int32) >>> csample = contextwin(sample, 7) [[-1, -1, -1, 0, 1, 2, 3], [-1, -1, 0, 1, 2, 3, 4], [-1, 0, 1, 2, 3, 4,-1], [ 0, 1, 2, 3, 4,-1,-1], [ 1, 2, 3, 4,-1,-1,-1]] >>> f = theano.function(inputs=[idxs], outputs=x) >>> f(csample) array([[-0.08088442, 0.08458307, 0.05064092, ..., 0.06876887, -0.06648078, -0.15192257], [-0.08088442, 0.08458307, 0.05064092, ..., 0.11192625, 0.08745284, 0.04381778], [-0.08088442, 0.08458307, 0.05064092, ..., -0.00937143, 0.10804889, 0.1247109 ], [ 0.11038255, -0.10563177, -0.18760249, ..., -0.00937143, 0.10804889, 0.1247109 ], [ 0.18738101, 0.14727569, -0.069544 , ..., -0.00937143, 0.10804889, 0.1247109 ]], dtype=float32) >>> f(csample).shape (5, 350)
We now have a sequence (of length 5 which is corresponds to the length of the sentence) of context window word embeddings which is easy to feed to a simple recurrent neural network to iterate with.
Elman recurrent neural network¶
The followin (Elman) recurrent neural network (E-RNN) takes as input the current input
(time
t) and the previous hiddent state (time
t-1). Then it iterates.
In the previous section, we processed the input to fit this
sequential/temporal structure. It consists in a matrix where the row
0 corresponds to
the time step
t=0, the row
1 corresponds to the time step
t=1, etc.
The parameters of the E-RNN to be learned are:
- the word embeddings (real-valued matrix)
- the initial hidden state (real-value vector)
- two matrices for the linear projection of the input
tand the previous hidden layer state
t-1
- (optional) bias. Recommendation: don’t use it.
- softmax classification layer on top
The hyperparameters define the whole architecture:
- dimension of the word embedding
- size of the vocabulary
- number of hidden units
- number of classes
- random seed + way to initialize the model
It gives the following code:
class RNNSLU(object): ''' elman neural net model ''' def __init__(self, nh, nc, ne, de, cs): ''' nh :: dimension of the hidden layer nc :: number of classes ne :: number of word embeddings in the vocabulary de :: dimension of the word embeddings cs :: word window context size ''' # parameters of the model self.emb = theano.shared(name='embeddings', value=0.2 * numpy.random.uniform(-1.0, 1.0, (ne+1, de)) # add one for padding at the end .astype(theano.config.floatX)) self.wx = theano.shared(name='wx', value=0.2 * numpy.random.uniform(-1.0, 1.0, (de * cs, nh)) .astype(theano.config.floatX)) self.wh = theano.shared(name='wh', value=0.2 * numpy.random.uniform(-1.0, 1.0, (nh, nh)) .astype(theano.config.floatX)) self.w = theano.shared(name='w', value=0.2 * numpy.random.uniform(-1.0, 1.0, (nh, nc)) .astype(theano.config.floatX)) self.bh = theano.shared(name='bh', value=numpy.zeros(nh, dtype=theano.config.floatX)) self.b = theano.shared(name='b', value=numpy.zeros(nc, dtype=theano.config.floatX)) self.h0 = theano.shared(name='h0', value=numpy.zeros(nh, dtype=theano.config.floatX)) # bundle self.params = [self.emb, self.wx, self.wh, self.w, self.bh, self.b, self.h0]
Then we integrate the way to build the input from the embedding matrix:
idxs = T.imatrix() x = self.emb[idxs].reshape((idxs.shape[0], de*cs)) y_sentence = T.ivector('y_sentence') # labels
We use the scan operator to construct the recursion, works like a charm:
def recurrence(x_t, h_tm1): h_t = T.nnet.sigmoid(T.dot(x_t, self.wx) + T.dot(h_tm1, self.wh) + self.bh) s_t = T.nnet.softmax(T.dot(h_t, self.w) + self.b) return [h_t, s_t] [h, s], _ = theano.scan(fn=recurrence, sequences=x, outputs_info=[self.h0, None], n_steps=x.shape[0]) p_y_given_x_sentence = s[:, 0, :] y_pred = T.argmax(p_y_given_x_sentence, axis=1)
Theano will then compute all the gradients automatically to maximize the log-likelihood:
lr = T.scalar('lr') sentence_nll = -T.mean(T.log(p_y_given_x_sentence) [T.arange(x.shape[0]), y_sentence]) sentence_gradients = T.grad(sentence_nll, self.params) sentence_updates = OrderedDict((p, p - lr*g) for p, g in zip(self.params, sentence_gradients))
Next compile those functions:
self.classify = theano.function(inputs=[idxs], outputs=y_pred) self.sentence_train = theano.function(inputs=[idxs, y_sentence, lr], outputs=sentence_nll, updates=sentence_updates)
We keep the word embeddings on the unit sphere by normalizing them after each update:
self.normalize = theano.function(inputs=[], updates={self.emb: self.emb / T.sqrt((self.emb**2) .sum(axis=1)) .dimshuffle(0, 'x')})
And that’s it!
Evaluation¶
With the previous defined functions, you can compare the predicted labels with
the true labels and compute some metrics. In this repo, we build a wrapper around the conlleval PERL script.
It’s not trivial to compute those metrics due to the Inside Outside Beginning
(IOB) representation
i.e. a prediction is considered correct if the word-beginning and the
word-inside and the word-outside predictions are all correct.
Note that the extension is
and you will have to change it to
.
Training¶
Updates¶
For stochastic gradient descent (SGD) update, we consider the whole sentence as a mini-batch and perform one update per sentence. It is possible to perform a pure SGD (contrary to mini-batch) where the update is done on only one single word at a time.
After each iteration/update, we normalize the word embeddings to keep them on a unit sphere.
Stopping Criterion¶
Early-stopping on a validation set is our regularization technique: the training is run for a given number of epochs (a single pass through the whole dataset) and keep the best model along with respect to the F1 score computed on the validation set after each epoch.
Hyper-Parameter Selection¶
Although there is interesting research/code on the topic of automatic hyper-parameter selection, we use the KISS random search.
The following intervals can give you some starting point:
- learning rate : uniform([0.05,0.01])
- window size : random value from {3,...,19}
- number of hidden units : random value from {100,200}
- embedding dimension : random value from {50,100}
Running the Code¶
After downloading the data using
, the user can then run the code by calling:
python code/rnnslu.py ('NEW BEST: epoch', 25, 'valid F1', 96.84, 'best test F1', 93.79) [learning] epoch 26 >> 100.00% completed in 28.76 (sec) << [learning] epoch 27 >> 100.00% completed in 28.76 (sec) << ... ('BEST RESULT: epoch', 57, 'valid F1', 97.23, 'best test F1', 94.2, 'with the model', 'rnnslu')
Timing¶
Running experiments on ATIS using this repository will run one epoch in less than 40 seconds on i7 CPU 950 @ 3.07GHz using less than 200 Mo of RAM:
[learning] epoch 0 >> 100.00% completed in 34.48 (sec) <<
After a few epochs, you obtain decent performance 94.48 % of F1 score.:
NEW BEST: epoch 28 valid F1 96.61 best test F1 94.19 NEW BEST: epoch 29 valid F1 96.63 best test F1 94.42 [learning] epoch 30 >> 100.00% completed in 35.04 (sec) << [learning] epoch 31 >> 100.00% completed in 34.80 (sec) << [...] NEW BEST: epoch 40 valid F1 97.25 best test F1 94.34 [learning] epoch 41 >> 100.00% completed in 35.18 (sec) << NEW BEST: epoch 42 valid F1 97.33 best test F1 94.48 [learning] epoch 43 >> 100.00% completed in 35.39 (sec) << [learning] epoch 44 >> 100.00% completed in 35.31 (sec) << [...]
Word Embedding Nearest Neighbors¶
We can check the k-nearest neighbors of the learned embeddings. L2 and cosine distance gave the same results so we plot them for the cosine distance.
As you can judge, the limited size of the vocabulary (about 500 words) gives us mitigated performance. According to human judgement: some are good, some are bad.
|
http://deeplearning.net/tutorial/rnnslu.html
|
CC-MAIN-2019-18
|
refinedweb
| 2,238
| 56.45
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.