In our book we provide examples of how to convert IPv4 addresses to integer format (and back). We held ourselves to using only basic R functionality since the book had to be at an introductory level. On a fairly modern box, the ip2long function takes roughly 0.1s to convert 4,000 IPv4 addresses to integers (I just happened to have a file with 4K of IPv4 addresses lying around). For raw R code, that’s not too shabby, but we can incorporate some of the Rcpp techniques we showed in previous posts to crank that time down significantly. Don’t worry, this post will be much shorter than the previous one since we’re not building a whole package, just showing you a quick way to smooth out bottlenecks by (briefly) dropping into C++ and taking advantage of the Boost libraries.
For those unfamiliar with C++, Boost is a collection of robust and rigorously developed/peer-reviewed C++ libraries that are very compatible with Rcpp. We’re going to use the ip::address_v4 class to replace the functionality of two of the book’s IPv4 conversion functions (ip2long and long2ip). Put the following code into a file called iputils.cpp:
#include <Rcpp.h>
#include <boost/asio/ip/address_v4.hpp>

using namespace Rcpp;

// we're modeling these sample routine names off of
// the C inet_ntop and inet_pton functions

//' Convert IP in dotted (char) notation to integer
// [[Rcpp::export]]
unsigned long rinet_pton (CharacterVector ip) {
  return(boost::asio::ip::address_v4::from_string(ip[0]).to_ulong());
}

//' Convert an IP in integer format to dotted (char) notation
// [[Rcpp::export]]
CharacterVector rinet_ntop (unsigned long addr) {
  return(boost::asio::ip::address_v4(addr).to_string());
}
Now, either in another R file or in the R console, do the following:
# these make the Rcpp magic happen
library(Rcpp)
library(inline)

# this compiles our code and makes the
# two functions available to our session
sourceCpp("iputils.cpp")

# test convert an IPv4 string to integer
rinet_pton("10.0.0.0")
[1] 167772160

# test conversion back
rinet_ntop(167772160)
[1] "10.0.0.0"
The iputils.cpp file will need to be in the working directory for that bit of code to work (which is why packages are usually a better route). The call to sourceCpp does most of the heavy lifting for us (with some help from the [[Rcpp::export]] hint in the code, which tells sourceCpp to do quite a bit of work for you under the covers). The sourceCpp function takes care of ensuring that proper memory allocation & garbage-collection protection is performed and also handles all return-value wrapping (conversion). As you can see in the code snippet, the Boost asio library provides two methods that make it super-easy to use native versions of the IP address conversion functions, and it also highlights the object compatibility between Rcpp and C++.
Performing the same 4,000 IPv4 conversion exercise now takes 0.01s (remember, the pure R version took 0.1s). For a few thousand IP addresses, the difference is negligible, but if you’re working with millions or billions of IP addresses, this speedup can help dramatically and keep your processing in R vs potentially splitting your workflow between R and, say, Python.
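If you want to reproduce the rough timing comparison yourself, a minimal sketch like the following should work once the two functions above have been compiled with sourceCpp(). The ips vector and the pure-R ip2long() here are stand-ins for the book's data and code, not copies of them:

ips <- rep("10.20.30.40", 4000)   # hypothetical stand-in for the 4K-address file

# one way the pure-R ip2long() could look
ip2long <- function(ip) {
  octets <- as.numeric(strsplit(ip, ".", fixed = TRUE)[[1]])
  sum(octets * 256^(3:0))
}

system.time(r_res   <- sapply(ips, ip2long))      # ~0.1s in the post
system.time(cpp_res <- sapply(ips, rinet_pton))   # ~0.01s in the post

all(r_res == cpp_res)                             # sanity check: same answers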
Exercise for the reader!
Try modifying the functions to handle both IPv4 and IPv6 addresses. You can start by writing two similar functions just to get your feet wet and then work on the logic necessary to combine the four into two. If you do the exercise, drop us a note here, on Twitter or over at github and we’ll feature you in an upcoming post and podcast!
If the world of Rcpp seems intriguing, you’d do well to pick up a copy of Dirk Eddelbuettel’s Seamless R and C++ Integration with Rcpp. He goes into great detail with tons of examples that should make it much easier to take advantage of the functionality that Rcpp...
[Source: http://www.r-bloggers.com/speeding-up-ipv4-address-conversion-in-r/ | CC-MAIN-2015-22 | refinedweb | 651 words | Flesch 59.74]
Hi,
I've been trying to use MeCab from RubyCocoa application on Snow Leopard. I installed the latest MeCab-Ruby (0.98) and I can use it from Ruby scripts, but not from RubyCocoa application. The same application runs fine on Leopard.
A simple script (sample on MeCab site) like the following ends up crashing the application.
require 'MeCab'
m = MeCab::Tagger.new("-Ochasen")
print m.parse("今日もしないとね")
The error message is the following:
dyld: lazy symbol binding failed: Symbol not found: __ZN5MeCab12createTaggerEPKc
Referenced from: /Library/Ruby/Site/1.8/universal-darwin10.0/MeCab.bundle
Expected in: flat namespace
dyld: Symbol not found: __ZN5MeCab12createTaggerEPKc
Referenced from: /Library/Ruby/Site/1.8/universal-darwin10.0/MeCab.bundle
Expected in: flat namespace
Could this be a bug or is there anything wrong with MeCab-Ruby on Snow Leopard? Considering it works fine with Ruby 1.8.7, it could be a RubyCocoa bug, though. Or is this fixed in the latest build? Or should I blame MeCab-Ruby? I'm using 0.13.2, which I believe comes with Snow Leopard.
Thanks.
kimura wataru
2009-10-15
this problem might be caused by an architecture conflict between libmecab.dylib and your app, such as x86_64 vs i386.
please tell me the results of running ruby from the command line like the following:
$ arch -i386 ruby -rMeCab -e "m = MeCab::Tagger.new('-Ochasen')"
$ arch -x86_64 ruby -rMeCab -e "m = MeCab::Tagger.new('-Ochasen')"
kimura wataru
2009-10-15
Nobody/Anonymous
2009-10-16
Thank you very much for the quick comment.
I checked both commands and the first returned an error and the second didn't. So I changed the build target to x86_64 and the application ran without error. Then I ran the same application on a CoreDuo Leopard machine and it didn't run, obviously.
I set the application architecture to Standard (32/64-bit Universal) and the valid architectures include i386 on Xcode, but it looks like the built application only includes x86_64 if I select x86_64 as the active architecture ("build only for the active architecture" NOT checked), according to the error report on Leopard (it says Code Type: x86-64 (Native)). Is there anything that can be done on RubyCocoa? Or is there any setting on Xcode so I can include both (am I missing something?)? Or should I just build the app for each architecture?
Again, thank you very much for your comment.
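For anyone hitting the same wall: a quick way to confirm which architectures a binary actually contains is file/lipo. The MeCab.bundle path below comes from the error message earlier in the thread; the app path is just a placeholder, not from the original report.

$ file /Library/Ruby/Site/1.8/universal-darwin10.0/MeCab.bundle
$ lipo -info /Library/Ruby/Site/1.8/universal-darwin10.0/MeCab.bundle
$ lipo -info build/Release/MyApp.app/Contents/MacOS/MyApp   # placeholder path for the built app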
[Source: http://sourceforge.net/p/rubycocoa/bugs/59/ | CC-MAIN-2015-48 | refinedweb | 398 words | Flesch 59.8]
25 April 2012 04:48 [Source: ICIS news]
SINGAPORE (ICIS)--China’s Luxi Chemical Group said its net profit doubled in 2011 on the back of strong sales.
Its sales revenue in 2011 was at CNY9.32bn, up by 21% year on year, the company said in a statement to the Shenzhen Stock Exchange.
The company’s sales revenue and net profit in the first quarter of 2012 were at CNY2.90bn and CNY90.8m, up by 20% and 23% respectively from the same period a year ago, it said in a separate statement on the same day.
The company aims to produce 3m tonnes of fertilizer and 3.2m tonnes of raw chemicals in 2012, it said, without giving details on its 2011 output.
Shandong-based Luxi Chemical Group is a key fertilizer
[Source: http://www.icis.com/Articles/2012/04/25/9553332/chinas-luxi-chemical-2011-profit-doubles-on-strong-sales.html | CC-MAIN-2014-42 | refinedweb | 118 words | Flesch 76.11]
red()#
Extracts the red value from a color, scaled to match current color_mode().
Examples#
def setup():
    c = "#FFCC00"                 # define color 'c'
    py5.fill(c)                   # use color variable 'c' as fill color
    py5.rect(15, 20, 35, 60)      # draw left rectangle

    red_value = py5.red(c)        # get red in 'c'
    py5.println(red_value)        # print "255.0"

    py5.fill(red_value, 0, 0)     # use 'red_value' in new fill
    py5.rect(50, 20, 35, 60)      # draw right rectangle
Description#
Extracts the red value from a color, scaled to match current color_mode().
The red() function is easy to use and understand, but it is slower than a technique called bit shifting. When working in color_mode(RGB, 255), you can achieve the same results as red() but with greater speed by using the right shift operator (>>) with a bit mask. For example, red(c) and c >> 16 & 0xFF both extract the red value from a color variable c, but the latter is faster.
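A small sketch of the bit-shifting idea, written in the same module-mode py5 style as the example above (the specific color value is just for illustration):

def setup():
    c = py5.color(255, 204, 0)   # same color as "#FFCC00"

    r1 = py5.red(c)              # 255.0 (float, via the API)
    r2 = c >> 16 & 0xFF          # 255   (int, via shift-and-mask)

    # both extract the red channel; the shift form skips the function call
    py5.println(r1, r2)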
Underlying Processing method: red
Signatures#
red(
    rgb: int,  # any value of the color datatype
    /,
) -> float
Updated on September 01, 2022 16:36:02pm UTC
[Source: https://py5.ixora.io/reference/sketch_red.html | CC-MAIN-2022-40 | refinedweb | 182 words | Flesch 65.93]
hi, I need to display the temperature given by a DHT11, through an Arduino Mega, in a gauge widget with a decimal number and the degrees-Celsius symbol, for example: 20.6 °. Can someone help me??? And, if you can, point me to a reference code. Thanks
Dht11+arduino mega+gauge
Try searching this forum for the keyword DHT11
Basically, get it working in Arduino first,
then simply send the data to a display Widget.
And for good measure
I have already used this DHT11 library, and I did not understand how to pass the data to the gauge. I think we need to convert it to a string?
What have you got written for code so far and how have you set up your display widget?
Personally I used the Adafruit DHTXX Library as it was a bit simpler.
And as for the display… remember that the DHT11 has only a ±2deg accuracy… so no decimals needed
#define BLYNK_PRINT Serial
#include <SPI.h>
#include <Ethernet.h>
#include <BlynkSimpleEthernet.h>
#include <SimpleTimer.h>
#include <DHT.h>
#define BLYNK_DEBUG
char auth[] = "fefabd5745bc4960819ac43f29d3c7a6";
#define DHTPIN 2
#define DHTTYPE DHT11
DHT dht(DHTPIN, DHTTYPE);
SimpleTimer timer;
void setup()
{
Serial.begin(9600);
Blynk.begin(auth);
dht.begin();
timer.setInterval(1000L, sendSensor);
}
void sendSensor()
{
float h = dht.readHumidity();
float t = dht.readTemperature();
if (isnan(h) || isnan(t)) {
Serial.println("Failed to read from DHT sensor!");
return;
}
Blynk.virtualWrite(V5,h);
Blynk.virtualWrite(V6,t);
}
void loop()
{
Blynk.run();
timer.run();
}
Please edit your last post and format your code between the ~~~ as instructed… I can’t see all the libraries, etc.
Sorry
Use the Tilde key three times, then your code, followed by three more Tilde keys (or just click on the welcome link I sent, twice, and watch the little video
)
Tilde. Alternatively referred to as the squiggly or twiddle, the tilde is a character ( ~ ) on keyboards below the escape or ESC key and on the same key as the back quote that resembles a squiggly line.
#include BLYNK_PRINT Serial
#include <SPI.h>
#include <Ethernet.h>
#include <BlynkSimpleEthernet.h>
#include <SimpleTimer.h>
#include <DHT.h>
#define BLYNK_DEBUG
Oh well…
Meanwhile, I will leave you with this… (untested in its current format, so check for syntax).
You will need to integrate your own Blynk info and settings.
Also, make sure you have the proper Adafruit library’s.
This sensor is slow… I suggest giving it at least 2 seconds between readings… I am using Blynk’s SimpleTimer.h for this…
#include <SimpleTimer.h>
#include <Adafruit_Sensor.h>
#include <DHT_U.h>

#define DHTPIN 2        // Defines pin number to which the sensor is connected
#define DHTTYPE DHT11   // DHT 11

DHT_Unified dht(DHTPIN, DHTTYPE);   // object

SimpleTimer timerTempHum;           // Setup Temp and Humidity timer.

void setup()
{
  // Setup Blynk stuff here

  dht.begin();
  timerTempHum.setInterval(2000L, sendTempHum);  // Run every 2 seconds. Set display Widgets to PUSH.
}

void sendTempHum()
{
  sensors_event_t event;

  dht.temperature().getEvent(&event);                // Get temperature...
  Blynk.virtualWrite(V5, event.temperature);         // and send its value.

  dht.humidity().getEvent(&event);                   // Get humidity...
  Blynk.virtualWrite(V6, event.relative_humidity);   // and send its value.
}

void loop()
{
  Blynk.run();
  timerTempHum.run();
}
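Since the original question asked for a reading like 20.6 °, one option (just a sketch, not part of the code above) is to send a pre-formatted string instead of the raw float; String(value, 1) is standard Arduino API for one decimal place. A Gauge widget generally wants a plain number, so a string like this is better suited to a Value Display / Labeled Value widget; check the Blynk docs for the widget's own label formatting options.

// inside sendTempHum(), after the temperature event has been read:
dht.temperature().getEvent(&event);
Blynk.virtualWrite(V5, String(event.temperature, 1) + " °C");   // sends e.g. "20.6 °C"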
I’m sorry but I’m doing everything by phone and I can not find the tilde key
That’s OK… technically it is the ` key that is needed… but everyone (aka me) just sees the larger ~ symbol on the key and calls it as such
But bonus points for doing all this on a little phone keyboard
Its the ‘backtick’ button.
@rok.kom, like this. You can do it on your phone… also EDIT YOUR POSTS … don't spam with more posts.
```cpp
CODE
```
Code snippets should be formatted. Please edit your initial post:
How to do that:
``` cpp      <-- put 3 backticks BEFORE your code starts
//("cpp" means C++ language)
//Put your code here
//..................
//..................
```          <-- insert 3 backticks AFTER your code
This makes your code readable and with highlighted syntax, like:
//comment goes here
void helloWorld()
{
  String message = "hello" + "world";
}
Hi Rok, I’ve done it using a mega and an Ethernet shield with an SD card. Basically the htm file is stored on the SD card and is populated with the DHT11 data to display on the two gauges. If you are using a W5100 wifi shield with an SD card I can post you both the sketch and the htm code.
thank you so much
I accept willingly
thank you so much
I also wanted to know if it is possible to put a logo in the app with my image. If you can, please post an example.
At the moment you can’t put your logo. If you would like to convert your project into standalone app (with your branding, etc.) please consider
[Source: https://community.blynk.cc/t/dht11-arduino-mega-gauge/10824 | CC-MAIN-2019-39 | refinedweb | 774 words | Flesch 66.94]
I have read a few short discussions on implementing memory mapping in a driver but I am still puzzled. What I want to do is have a piece of memory ...
Driver MMAP support for direct access from User space
Is this a correct understanding: I allocate memory in the module, and that address is used as the physical address when I call remap_page_range? Oh, the kernel is 2.6. The reason I am asking is that the discussions also mention that the address in kernel space is logical, not physical.
Thanks,
What kind of information would you like to be placed in this memory region after receiving an interrupt? Just the fact that you've gotten interrupted or what?
shared memory
I am just looking to put in the driver memory that an interrupt has occurred and the user process polls that memory instead of the other methods for faster response.
Thanks,
Just write an interrupt handler function that does the following:
1. Handles whatever needs to be handled for the particular interrupt
2. If you want a running count, increase whatever stats variable you have
3. Awake any reading user processes
Then change the read() method for your device to sleep until data has arrived.
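A minimal sketch of that pattern is shown below. The names, the flag variable, and the header choices are made up for illustration, and the exact ISR prototype and header locations vary across 2.6.x kernel versions; this is not the poster's driver.

#include <linux/wait.h>
#include <linux/interrupt.h>
#include <linux/fs.h>
#include <linux/uaccess.h>

static DECLARE_WAIT_QUEUE_HEAD(irq_wq);    /* readers sleep here           */
static volatile unsigned long irq_count;   /* running count of interrupts  */

static irqreturn_t my_isr(int irq, void *dev_id)
{
    /* 1. handle the device-specific work here */
    irq_count++;                       /* 2. bump the stats variable       */
    wake_up_interruptible(&irq_wq);    /* 3. wake any sleeping readers     */
    return IRQ_HANDLED;
}

static ssize_t my_read(struct file *f, char __user *buf, size_t len, loff_t *off)
{
    unsigned long seen = irq_count;

    /* sleep until a new interrupt has arrived */
    if (wait_event_interruptible(irq_wq, irq_count != seen))
        return -ERESTARTSYS;

    if (len < sizeof(irq_count))
        return -EINVAL;
    if (copy_to_user(buf, (const void *)&irq_count, sizeof(irq_count)))
        return -EFAULT;
    return sizeof(irq_count);
}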
deterministic behavior
I need deterministic behavior so to do that the user program is tied to one CPU.
Thanks,
can you give me some more context as to the overall scope of this driver?
Goal for Interrupt Handler and an attempt that does not work
What I am trying to do is this: when the ISR is called, a variable is set. A user-level process memory-maps that driver/module's variable and polls it to check for an interrupt occurrence. This is an attempt to get the fastest possible response in user space to an IRQ assertion.
I added these lines in the module:
static int Airtiger2Driver_mmap(struct file* pFile, struct vm_area_struct* pVma)
{
static const char* function = NAME "_ioctl";
ENTER("(%p,%p)",pFile,pVma);
int status;
unsigned long offset = pVma->vm_pgoff << PAGE_SHIFT;
status = remap_page_range(pVma,
(unsigned long)pBuffer + offset, // from start
(unsigned long)pBuffer + PAGE_SIZE + offset, // to end
PAGE_SIZE,
PAGE_SHARED);
if (status != 0)
return (-EAGAIN);
return SUCCESS;
}
...
static int Airtiger2DriverInit(void)
{
...
pBuffer = kmalloc(PAGE_SIZE,GFP_KERNEL);
pBuffer[0] = 0x000000AA;
pBuffer[1] = 0x000000BB;
pBuffer[2] = 0x000000CC;
pBuffer[3] = 0x00000123;
...
and this in the user level code to test it:
// open driver
int fDriver;
fDriver = open(DRIVER, 0);
if (fDriver < 0) {
printf("Can't open device file: %s\n", DRIVER);
goto byebyeWithError;
}
// memory map
UINT32* pBuffer;
pBuffer = (UINT32*)mmap(0,PAGE_SIZE,PROT_READ,MAP_SHARED,fDriver,0);
printf("pBuffer @ %p\n",pBuffer);
printf("[0]=%08x [1]=%08x [2]=%08x [3]=%08x\n",
pBuffer[0], pBuffer[1], pBuffer[2], pBuffer[3]);
munmap(0,PAGE_SIZE);
and this is the output:
pBuffer @ 0x2a9556c000
[0]=00000000 [1]=00000000 [2]=00000000 [3]=00000000
Thanks,
Seriously, I think a better way to do this is to implement a read/open/whatever method that sleeps, and have the interrupt handler wake it up. This eliminates the need to mmap() anything.
[Source: http://www.linuxforums.org/forum/kernel/40876-driver-mmap-support-direct-access-user-space.html | CC-MAIN-2014-41 | refinedweb | 530 words | Flesch 58.82]
DATATYPE { OFF | ON | IN | OUT }
OFF This is the default behavior when the DATATYPE option is not used. For DNET output format, SQL Anywhere data types are translated to and from XML Schema string types. For CONCRETE and XML formats, no data type information is emitted.
ON Data type information is emitted for both input parameters and result set responses. SQL Anywhere data types are translated to and from XML Schema data types.
IN Data type information is emitted for input parameters only.
OUT Data type information is emitted for result set responses only.
Data typing of input parameters is supported by simply exposing the parameter data types as their true data types in the WSDL generated by the DISH service.
A typical string parameter definition (or a non-typed parameter) would look like the following:
The String parameter may be nillable, that is, it may or may not occur.
For a typed parameter such as an integer, the parameter must occur and is not nillable. The following is an example.
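The document's original WSDL fragments are not reproduced above. Purely as an illustration of the distinction being described, and not as the exact markup SQL Anywhere emits, an XML Schema declaration for a nillable string parameter versus a required, typed integer parameter might look along these lines (element names are hypothetical):

<!-- non-typed / string parameter: may be omitted or nil -->
<xsd:element name="param1" type="xsd:string" minOccurs="0" nillable="true"/>

<!-- typed integer parameter: must occur and is not nillable -->
<xsd:element name="param2" type="xsd:int" minOccurs="1" nillable="false"/>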
All SQL Anywhere web services of type 'SOAP' may expose data type information within the response data. The data types are exposed as attributes within the rowset column element.
The following is an example of a typed SimpleDataSet response from a SOAP FORMAT 'CONCRETE' web service.
The following is an example of a response from a SOAP FORMAT 'XML' web service returning the XML data as a string. The interior rowset consists of encoded XML and is presented here in its decoded form for legibility.
Note that, in addition to the data type information, the namespace for the elements and the XML schema provides all the information necessary for post processing by an XML parser. When no data type information exists in the result set (DATATYPE OFF or IN) then the xsi:type and the XML schema namespace declarations are omitted.
An example of a SOAP FORMAT 'DNET' web service returning a typed SimpleDataSet follows:
When one or more parameters are of type NCHAR, NVARCHAR, LONG NVARCHAR, or NTEXT then the response output is in UTF8. If the client database uses the UTF-8 character encoding, there is no change in behavior (since NCHAR and CHAR data types are the same). However, if the database does not use the UTF-8 character encoding, then all parameters that are not an NCHAR data type are converted to UTF8. The value of the XML declaration encoding and Content-Type HTTP header will correspond to the character encoding used.
[Source: http://dcx.sap.com/1101/fr/dbprogramming_fr11/datatypes-http.html | CC-MAIN-2018-43 | refinedweb | 417 words | Flesch 52.49]
Sentinel - Wait this account still exists?
SentinelGaming - Just a plugin developer
To access functions without public static (I don't use this method myself because I am way too deep in now to turn back) is as displayed below:...
SentinelGaming - Kinda-Retired Kinda-Not Spigot Plugin Developer whose avatar is too large to upload.
Line 27
@Zombie_Striker
I still need help, I can't crack the stack(trace).
Hello! I'm having a little problem. Here's the code:
package me.alphagladiator.crates.listeners;
import java.util.List;
import java.util.Random;...
AlphaGladiator - Spigot Plugin Developer
@patricksterza
Sorry, but I have no experience in that area, and I'm pretty busy with other plugins right now.
@RealEmpire
Sorry that I couldn't finish this in time, hope you enjoy @Jake861 's plugin!
@DarthMike
I'm looking into it.
Next time please tag me because I did not notice your post until now.
(BTW, that is not the same error.)
EDIT:...
@RealEmpire
Going to work on the bugs I found yesterday.
Almost done. I accidentally deleted a line of code that contained the formula needed for...
I got an idea. Use Random (a java class). minimum number = 0, maximum number = list of colors - 1. use rand.nextint(max) + min; do that and you...
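A small sketch of that suggestion (the colors list is just a placeholder, and note that Random.nextInt(bound) already excludes the bound, so the list size can be passed directly):

import java.util.Arrays;
import java.util.List;
import java.util.Random;

public class RandomColorPicker {
    public static void main(String[] args) {
        // placeholder list standing in for the plugin's color list
        List<String> colors = Arrays.asList("RED", "GREEN", "BLUE", "YELLOW");

        Random rand = new Random();
        // nextInt(bound) returns 0 .. bound-1, covering every valid index
        int index = rand.nextInt(colors.size());

        System.out.println(colors.get(index));
    }
}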
@xproxowndx @kik_bat
I'm sorry but I'm unable to complete this plugin. I don't have the skills to do it. Please mark this thread as unsolved and...
Separate names with a comma.
[Source: https://dl.bukkit.org/members/mcjoshua345.90995769/recent-content | CC-MAIN-2020-05 | refinedweb | 246 words | Flesch 70.09]
#include "petscmat.h" #include "petscvec.h" PetscErrorCode MatGetVecsFFTW(Mat A,Vec *x,Vec *y,Vec *z)Collective on Mat
Note: The parallel layout of the output of the forward FFTW transform is always the same as the input of the backward FFTW transform, but the parallel layout of the input vector of the forward transform might not be the same as the output of the backward transform. Also note that we need to provide enough space while doing a parallel real transform: we need to pad extra zeros at the end of the last dimension. For this reason one needs to invoke the fftw_mpi_local_size_transposed routine. Remember that one has to change the last dimension from n to n/2+1 when invoking this routine. The number of zeros to be padded depends on whether the last dimension is even or odd: if the last dimension is even, two zeros need to be padded; if it is odd, only one zero is needed. Lastly, one needs some scratch space at the end of the data set in each process. alloc_local figures out how much space is needed, i.e. it figures out the data+scratch space for each processor and returns that.
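As a rough usage sketch (not taken from this manual page, and calling conventions such as the destroy routines differ across PETSc versions), the routine is typically used on an FFTW matrix created with MatCreateFFT, with MatMult/MatMultTranspose performing the forward and backward transforms:

#include <petscmat.h>
#include <petscvec.h>

/* illustrative only: a 3-D complex FFT of size 64^3 */
Mat      A;
Vec      x, y, z;
PetscInt dims[3] = {64, 64, 64};

MatCreateFFT(PETSC_COMM_WORLD, 3, dims, MATFFTW, &A);   /* FFTW-backed matrix */
MatGetVecsFFTW(A, &x, &y, &z);   /* x: forward input, y: forward output, z: backward output */

/* fill x ... */
MatMult(A, x, y);            /* forward transform  */
MatMultTranspose(A, y, z);   /* backward transform */

VecDestroy(&x); VecDestroy(&y); VecDestroy(&z);
MatDestroy(&A);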
Level: advanced
Location: src/mat/impls/fft/fftw/fftw.c
Index of all Mat routines
Table of Contents for all manual pages
Index of all manual pages
[Source: http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Mat/MatGetVecsFFTW.html | crawl-003 | refinedweb | 216 words | Flesch 64.51]
Blog::Spam::Plugin::Sample - A sample plugin.
This is a sample plugin which is designed to demonstrate the functionality which a plugin may implement to be usefully called by Blog::Spam::Server.
As this is an example plugin it does nothing useful.
The Blog::Spam::Server receives comment data, via XML::RPC, from remote clients.
These incoming comments, and associated meta-data, will be examined by each known plugin in turn. If a single plugin determines the comment is SPAM then all further testing is ceased.
This module is an example of one such plugin, and when the server is installed it will be called in order, along with any others.
For a plugin to be loaded it must live beneath the Blog::Spam::Plugin namespace.
There is only a single mandatory method which must be implemented ("new"), and several optional methods ("classifyComment", "testComment", "expire", "logMessage").
The new method is required for the plugin loading to succeed. The optional methods are invoked at various points in the servers lifecycle, if they are present.
For example the testComment method will be called to test the state of an incoming comment "SPAM" or "OK". The expire method will be called periodically, if available, to carry out house-keeping tasks.
The classifyComment method is called only when a request to retrain a comment is received.
Finally the logMessage method will be invoked when the server has determined an incoming message is either SPAM or OK.
This method is called when the server is started, and all plugins are loaded.
This method is mandatory.
A given plugin will only be initialised once when the server is launched, which permits the plugin to cache state internally if it wishes.
This method is invoked upon the reception of an incoming comment to test.
The arguments are a pointer to the server object, and a hash of values read from the remote client. (These remote keys include such things as the IP address of the comment submitter, their name, their email address and the comment itself. For a complete list of available keys please consult Blog::Spam::API.)
The IP address of the comment submitter.
The text of the comment received.
There are two valid return values "OK", which means the comment should be allowed to continue, and "SPAM" which means the plugin has determined the comment to be SPAM.
Optionally the SPAM result may be qualified with a human-readable explanation:
return "SPAM:This comment defames me";
This method is optional.
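Putting the mandatory constructor and the testComment hook together, a do-nothing plugin in the spirit of this sample might look roughly like the following. The package name and the check are invented for illustration, and the argument unpacking simply follows the prose description above; consult Blog::Spam::API for the exact calling convention.

package Blog::Spam::Plugin::MyExample;

use strict;
use warnings;

# Mandatory constructor, called once when the server loads its plugins.
sub new
{
    my ( $class, %args ) = @_;
    my $self = bless {}, $class;
    return $self;
}

# Optional: test an incoming comment.  Returns "OK" or "SPAM[:reason]".
sub testComment
{
    my ( $self, $server, %params ) = @_;

    # %params contains keys such as 'ip' and 'comment' (see Blog::Spam::API).
    my $comment = $params{'comment'} || '';

    return "SPAM:Too short to be a real comment"
        if ( length($comment) < 5 );

    return "OK";
}

1;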
Some plugins maintain state which must be expired. If this method is implemented it will be invoked upon a regular frequency, with the intention that a plugin may expire its state at that time.
There are two arguments, the first is a handle to the Blog::Spam::Server object, and the second is a frequency label:
This method has been called once per hour.
This method has been called once per day.
This method has been called once per week.
This method is optional.
This method is called whenever a comment is submitted for retraining, because the server was judged to return the wrong result.
The parameters received are identical to those of the testComment method - with the addition of a new key "train":
The comment was returned by the server as being OK but it should have been marked as SPAM.
The comment was previously judged as SPAM, but this was an error and the comment was actually both welcome and valid.
This method is optional.
This method will be called when the server wishes to log a result of a connection. ie. It will be called once for each comment at the end of the testComment function.
The message structure, as submitted to testing, will be supplied as a hash, and this hash will contain a pair of additional keys:
The result of the test "OK" or "SPAM:[reason]".
If the result of the test was not "OK" then the name of the plugin which caused the rejection will be saved in this key.
This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself. The LICENSE file contains the full text of the license.
[Source: http://search.cpan.org/dist/Blog-Spam/lib/Blog/Spam/Plugin/Sample.pm | crawl-003 | refinedweb | 701 words | Flesch 63.49]
Hey,
What I need to do is read a file- it will have a name and several numbers after, with -1 as a sentinel number to signal the end of line, ex:
Rogers 15 22 6 12 -1
Myers 23 10 4 22 34 -1
...
....
.....
What I want to do is take the numbers, except -1, read them, and display an average of each name's numbers, along with the name with the highest number.
So far I can read the file and display it, but I'm not sure how to gather the numbers inside of the file and use them for displaying averages and maximum.
import java.io.*;
import java.util.Scanner;

public class HW7
{
    public static void main (String[] args) throws IOException
    {
        Scanner keyboard = new Scanner(System.in);

        System.out.println("What is the name of the file?");
        String fileName = keyboard.nextLine();

        File file = new File(fileName);
        Scanner inputFile = new Scanner(file);

        while (inputFile.hasNext())
        {
            String line = inputFile.nextLine();
            System.out.println(line);
        }
    }
}
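One way to extend that loop (a sketch, not the assignment's required solution) is to wrap each line in its own Scanner, read the name, then accumulate numbers until the -1 sentinel. The file name is assumed here, and "highest number" is read as the highest single score:

import java.io.File;
import java.io.IOException;
import java.util.Scanner;

public class Averages
{
    public static void main(String[] args) throws IOException
    {
        Scanner inputFile = new Scanner(new File("scores.txt")); // file name assumed

        String bestName = "";
        int bestScore = Integer.MIN_VALUE;

        while (inputFile.hasNextLine())
        {
            Scanner line = new Scanner(inputFile.nextLine());
            if (!line.hasNext())
                continue;                 // skip blank lines

            String name = line.next();    // first token is the name
            int sum = 0, count = 0;

            while (line.hasNextInt())
            {
                int value = line.nextInt();
                if (value == -1)
                    break;                // sentinel ends the line
                sum += value;
                count++;
                if (value > bestScore)
                {
                    bestScore = value;
                    bestName = name;
                }
            }

            if (count > 0)
                System.out.printf("%s average: %.2f%n", name, (double) sum / count);
        }

        System.out.println("Highest number: " + bestScore + " (" + bestName + ")");
    }
}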
[Source: http://www.javaprogrammingforums.com/java-theory-questions/5762-reading-file.html | CC-MAIN-2013-48 | refinedweb | 166 words | Flesch 69.62]
Vectorization in R: Why?
Here are my notes from a recent talk I gave on vectorization at a Davis R Users’ Group meeting. Thanks to Vince Buffalo, John Myles White, and Hadley Wickham for their input as I was preparing this. Feedback welcome!
Beginning R users are often told to “vectorize” their code. Here, I try to explain why vectorization can be advantageous in R by showing how R works under the hood.
Now, remember, premature optimization is the root of all evil (Knuth). Don’t start re-writing your code unless the time saved is going to be worth the time invested. Other approaches, like finding a bigger machine or parallelization, could give you more bang for the buck in terms of programming time. But if you understand the nuts and bolts of vectorization in R, it may help you write shorter, simpler, safer, and yes, faster code in the first place.
First, let’s acknowledge that vectorization can seem like voodoo. Consider two math problems, one vectorized, and one not:\[\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} + \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} = \begin{bmatrix} 2 \\ 4 \\ 6 \end{bmatrix}\] \[\begin{aligned} 1 + 1 = 2 \\ 2 + 2 = 4 \\ 3 + 3 = 6 \end{aligned}\]
Why on earth should these take a different amount of time to calculate? Linear algebra isn’t magic. In both cases there are three addition operations to perform. So what’s up?
What on earth is R actually doing?
R is a high-level, interpreted computer language. This means that R takes care of a lot of basic computer tasks for you. For instance, when you type
i <- 5.0
you don’t have to tell your computer:
- that “5.0” is a floating-point number
- that “i” should store numeric-type data
- to find a place in memory to put 5
- to register “i” as a pointer to that place in memory
- or that you’ve finished the command when you hit ‘Enter’.
When you then type
i <- "foo"
you don’t have to tell the computer that i no longer stores an integer but a series of characters that form a string, to store “f”, “o”, and “o” consecutively, etc.
R figures these things out on its own, on the fly, as you type commands or source them from a file. This means that running a command in R takes a relatively longer time than it might in a lower-level language, such as C. If I am writing in C, I might write
int i;
i = 5;
This tells the computer that i will store data of the type int (integers), and assigns the value 5 to it. If I try to assign 5.5 to it, something will go wrong. Depending on my set-up, it might throw an error, or just silently assign 5 to i. But C doesn’t have to figure out what type of data is represented by i, and this is part of what makes it faster.
Here’s another example. If, in R, you type:
2L + 3.5
The computer asks:
“OK, what’s the first thing?”
“An integer”
“The second thing?”
“A floating-point number”
“Do we have a way to deal with adding an integer and a floating-point number?”
“Yes! Convert the integer to a floating-point number, then add the two floating point numbers”
[converts integer]
[finds a place in memory for the answer]
etc.
If R were a compiled computer language, like C or FORTRAN, much of this “figuring out” would be accomplished during the compilation step, not when the program was run. Compiled programs are translated into binary computer language after they are written, but before they are run, and this occurs over the whole program, rather than line-by-line. This allows the compiler to organize the binary machine code in an optimal way for the computer to interpret.
What does this have to do with vectorization in R? Well, many R functions are actually written in a compiled language, such as C, C++, or FORTRAN, and have a small R “wrapper”. For instance, when you inspect the code for fft, the fast Fourier transform, you see
> fft
function (z, inverse = FALSE)
.Call(C_fft, z, inverse)
<bytecode: 0x7fc261e1b910>
<environment: namespace:stats>
R is passing the data onto a C function called C_fft. You’ll see this in many R functions. If you look at their source code, it will include .C(), .Call(), or sometimes .Internal() or .Primitive(). These mean R is calling a C, C++, or FORTRAN program to carry out operations.
However, R still has to interpret the input of the function before passing it to the compiled code. In fft() the compiled code runs only after R figures out the data type in z, and also whether to use the default value of inverse. The compiled code is able to run faster than code written in pure R, because the “figuring out” stuff is done first, and it can zoom ahead without the “translation” steps that R needs.
If you need to run a function over all the values in a vector, you could pass a whole vector through the R function to the compiled code, or you could call the R function repeatedly for each value. If you do the latter, R has to do the “figuring out” stuff, as well as the translation, each time. But if you call it once, with a vector, the “figuring out” part happens just once.
Inside the C or FORTRAN code, vectors are actually processed using loops or a similar construct. This is inevitable; somehow the computer is going to need to operate on each element of your vector. Since this occurs in the compiled code, though, without the overhead of R functions, this is much faster.
Another important component of the speed of vectorized operations is that vectors in R are typed. Despite all of its flexibility, R does have some restrictions on what we can do. All elements of a vector must be the same data type. If I try to do this
a <- c(1, 2, FALSE, "hello")
I get
> a
[1] "1"     "2"     "FALSE" "hello"
> class(a)
[1] "character"
R converts all my data to characters. It can’t handle a vector with different data types.
So when R needs to perform an operation like
c(1, 2, 3) + c(1, 2, 3)
R only has to ask what types of data are in each vector (2 checks) rather than in each element (6 checks).
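To see the size of the effect, a quick comparison along these lines contrasts the vectorized addition with an explicit element-by-element loop (this snippet is mine, not from the original talk, and the numbers will vary by machine):

x <- runif(1e6)
y <- runif(1e6)

# vectorized: one R call, the looping happens in compiled code
system.time(z1 <- x + y)

# interpreted loop: R re-checks types and dispatches '+' a million times
system.time({
  z2 <- numeric(length(x))    # pre-allocated, so we measure only the loop
  for (i in seq_along(x)) {
    z2[i] <- x[i] + y[i]
  }
})

identical(z1, z2)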
One consequence of all this is that, in R, there is no real penalty for writing a single long vectorized expression instead of many small steps. This is not the case in all other languages. Often, in compiled languages, you want to stick with lots of very simple statements, because that allows the compiler to figure out the most efficient translation of the code.
Everything is a vector
In R everything is a vector. To quote Tim Smith in “aRrgh: a newcomer’s (angry) guide to R”
All naked numbers are double-width floating-point atomic vectors of length one. You’re welcome.
This means that, in R, typing “6” tells R something like
<start vector, type=numeric, length=1>6<end vector>
While in other languages, “6” might just be
<numeric>6
So, while in other languages, it might be more efficient to express something as a single number rather than a length-one vector, in R this is impossible. There’s no advantage to NOT organizing your data as a vector. In other languages, short vectors might be better expressed as scalars.
Linear algebra is a special case
Linear algebra is one of the core functions of a lot of computing, so there are highly optimized programs for linear algebra. Such a program is called a BLAS - basic linear algebra system. R, and a lot of other software, relies on these specialized programs and outsources linear algebra to them. A BLAS is generally designed to be highly efficient and has things like built-in parallel processing, hardware-specific implementation, and a host of other tricks. So if your calculations can be expressed in actual linear algebra terms, such as matrix multiplication, than it is almost certainly faster to vectorize them because the BLAS will be doing most of the heavy lifting.
There are faster and slower linear algebra libraries, and you can install new ones on your computer and tell R to use them instead of the defaults. This used to be like putting a new engine in your car, but it’s gotten considerably easier. For certain problems, a shiny new BLAS can considerably speed up code, but results vary depending on the specific linear algebra operations you are using.
Functionals: Pre-allocating memory, avoiding side effects.
There is a whole family of functions in R called functionals, or apply functions, which take vectors (or matrices, or lists) of values and apply arbitrary functions to each. Because these can use arbitrary functions, they are NOT compiled. Functionals are mostly written in pure R, and they speed up code only in certain cases.
One operation that is slow in R, and somewhat slow in all languages, is memory allocation. So one of the slower ways to write a for loop is to resize a vector repeatedly, so that R has to re-allocate memory repeatedly, like this:
j <- 1
for (i in 1:10) {
  j[i] = 10
}
Here, in each repetition of the for loop, R has to re-size the vector and re-allocate memory. It has to find the vector in memory, create a new vector that will fit more data, copy the old data over, insert the new data, and erase the old vector. This can get very slow as vectors get big.
If one pre-allocates a vector that fits all the values, R doesn’t have to re-allocate memory each iteration, and the results can be much faster. Here’s how you’d do that for the above case:
j <- rep(NA, 10)
for (i in 1:10) {
  j[i] = 10
}
The apply or plyr::*ply functions all actually have for loops inside, but they automatically do things like pre-allocating vector size so you don’t screw it up. This is the main reason that they can be faster.
Another thing that “ply” functions help with is avoiding what are known as side effects. When you run a ply function, everything happens inside that function, and nothing changes in your working environment (this is known as “functional programming”). In a for loop, on the other hand, when you do something like for(i in 1:10), you get the leftover i in your environment. This is sometimes considered bad practice. Having a bunch of temporary variables like i lying around could cause problems in your code, especially if you use i for something else later.
I’ve seen arguments that ply functions make for more expressive, easier-to-read code, but I’ve seen the same argument for for loops. Once you are used to writing vectorized code in general, though, for loops in R can seem odd.
So when might for loops make sense over vectorization?
There are still situations where it may make sense to use for loops instead of vectorized functions, though. These include:
- Using functions that don’t take vector arguments
- Loops where each iteration is dependent on the results of previous iterations
Note that the second case is tricky. In some cases where the obvious implementation of an algorithm uses a for loop, there’s a vectorized way around it. For instance, here is a good example of implementing a random walk using vectorized code (a small sketch follows below). In these cases, you often want to call functions that are essentially C/FORTRAN implementations of loop operations to avoid the loop in R. Examples of such functions include cumsum (cumulative sums), rle (counting runs of repeated values), and ifelse (vectorized if…else statements).
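As an illustration of that trick (my own minimal sketch, not the linked example), a random walk that looks inherently iterative (each position depends on the previous one) can be written with cumsum:

set.seed(42)
steps <- sample(c(-1, 1), size = 1000, replace = TRUE)  # +1 or -1 at each step

# loop version: each position depends on the previous one
walk_loop <- numeric(1000)
walk_loop[1] <- steps[1]
for (i in 2:1000) {
  walk_loop[i] <- walk_loop[i - 1] + steps[i]
}

# vectorized version: the dependence is just a running sum
walk_vec <- cumsum(steps)

identical(walk_loop, walk_vec)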
Your performance penalty for using a for loop instead of a vectorized function will be small if the number of iterations is relatively small and the functions called inside your for loop are slow. In these cases, looping and the overhead from function calls make up a small fraction of your computational time. It may make sense to use a for loop in such cases, especially if it is more intuitive or easier to read for you.
Some resources on vectorization
- Good discussion in a couple of blog posts by John Myles White.
- Some relevant chapters of Hadley Wickham’s Advanced R book: Functionals and code profiling
- Vectorization is covered in chapters 3 and 4 of the classic text on R’s idiosyncrasies - The R Inferno, by Patrick Burns
- Here are a bunch of assorted blog posts with good examples of speeding up code with vectorization
[Source: https://d-rug.github.io/blog/2014/vectorization-in-r-why | CC-MAIN-2021-21 | refinedweb | 2,085 words | Flesch 69.01]
This module will export the two functions named below into the namespace of the package using it. These two functions are useful to do typical checks at the start of functions that are supposed to be either class or instance methods. Always remember ...ROBINS/Method-Assert-0.0.1 - 31 Jul 2010 09:38:09
* Swat is a powerful and yet simple and flexible tool for rapid automated web tests development. * Swat is a web application oriented test framework, this means that it equips you with all you need for a web test development and yet it's not burdened...MELEZHIK/swat-0.1.96 - 13 Apr 2016 14:24:09 GMT - Search in distribution 4.5 (2 reviews) - 12 Oct 2014 13:57:47 GMT - Search in distribution
MLEHMANN/EV-4.22 3 (5 reviews) - 20 Dec 2015 01:35:40...SHAY/perl-5.22.1 4.5 (6 reviews) - 13 Dec 2015 19:48:31 GMT - Search in distribution
- perlfunc - Perl builtin functions
- perlmodlib - constructing new Perl modules and finding existing ones
- perlmodstyle - Perl module style guide
- 2 more results from perl »
Minions is a class builder that makes it easy to create classes that are modular <>, which means there is a clear and obvious separation between what end users need to know (the interface for using the ...ARUNBEAR/Minions-1.000000 - 04 Feb 2016 21:48
WYANT/Astro-satpass-0.071 5 (1 review) - 06 Jan 2016 18:12:29 GMT - Search in distribution
Test::Mini is a light, spry testing framework built to bring the familiarity of an xUnit testing framework to Perl as a first-class citizen. Based initially on Ryan Davis' minitest, it provides a not only a simple way to write and run tests, but the ...PVANDE/Test-Mini-v1.1.3 - 13 Feb 2011 06:09:36 GMT - Search in distribution
- Test::Mini - Lightweight xUnit Testing for Perl
- Test::Mini::Logger - Output Logger Base Class
- Test::Mini::Runner - Default Test Runner
- 2 more results from Test-Mini »
This section of the FAQ answers questions related to manipulating numbers, dates, strings, arrays, hashes, and miscellaneous data issues....LLAP/perlfaq-5.021011 - 04 Mar 2016 20:04:35 GMT - Search in distribution
This class provides a set of assertion methods useful for writing tests. The API is based on JUnit4 and Test::Unit::Lite and the methods die on failure. These assertion methods might not be useful for common Test::Builder-based (Test::Simple, Test::M...DEXTER/Test-Assert-0.0504 - 06 Dec 2009 22:50:03.1 - 02 May 2013 18:42:53 GMT - Search in distribution
This module allows you to test XPaths into an XML Document to check that their number or values are what you expect. To test the number of nodes you expect to find, use the "assert_xpath_count()" method. To test the value of a node, use the "assert_x...CHILTS/XML-Assert-0.03 - 18 Jul 2010 22:46:46
[Source: https://metacpan.org/search?q=Method-Assert | CC-MAIN-2016-18 | refinedweb | 491 words | Flesch 59.33]
Basically what the summary says. Some packages have several AC_PREREQ macro calls in aclocal.m4 where the first only requires 2.1 but another requires 2.5.
See attached patch about how to fix this.. it just gets all calls and takes the (stringwise) largest number.
Reproducible: Always
Created an attachment (id=25486) [edit]
fix to evaluate all AC_PREREQ macro calls
The test was actually wrong (did not catch '(?.??)' ..) . Please try
autoconf-2.59-r1, or provide a test package which it fails for, so that
I can have a look.
nopes, still doesn't work.
the new ac-wrapper-2.pl still doesn't evaluate all AC_PREREQ calls.
for example net-irc/irssi-cvs has to select the autotools versions explicitly because of this.
Works fine here:
--
nosferatu irssi # grep AC_PRE *
aclocal.m4:[AC_PREREQ([2.12])
aclocal.m4:[AC_PREREQ(2.50)dnl
configure:# Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped.
configure:# Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped.
configure:# Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped.
configure:# Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped.
nosferatu irssi # set | grep AUTO
nosferatu irssi # ./autogen.sh
Creating help files...
Documentation: html -> txt...
Checking auto* tools...
**Warning**: I am going to run `configure' with no arguments.
If you wish to pass any to it, please specify them on the
`./autogen.sh' command line.
Running libtoolize...
You should add the contents of `/usr/share/aclocal/libtool.m4' to `aclocal.m4'.
Running aclocal -I . ...
Running autoheader...
autoheader-2.59: WARNING: Using auxiliary files such as `acconfig.h', `config.h.bot'
autoheader-2.59: WARNING: and `config.h.top', to define templates for `config.h.in'
autoheader-2.59: WARNING: is deprecated and discouraged.
autoheader-2.59:
autoheader-2.59: WARNING: Using the third argument of `AC_DEFINE' and
autoheader-2.59: WARNING: `AC_DEFINE_UNQUOTED' allows to define a template without
autoheader-2.59: WARNING: `acconfig.h':
autoheader-2.59:
autoheader-2.59: WARNING: AC_DEFINE([NEED_FUNC_MAIN], 1,
autoheader-2.59: [Define if a function `main' is needed.])
autoheader-2.59:
autoheader-2.59: WARNING: More sophisticated templates can also be produced, see the
autoheader-2.59: WARNING: documentation.
configure.in:18: warning: AC_ARG_PROGRAM invoked multiple times
Running autoconf ...
configure.in:18: warning: AC_ARG_PROGRAM invoked multiple times
Running automake --gnu ...
Running ./configure --enable-maintainer-mode --enable-compile-warnings ...
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking whether make sets $(MAKE)... yes
checking for working aclocal-1.4... found
checking for working autoconf... found
hum, indeed. irssi works again... just upgraded to -r3
let me give you a shorter example:
$ cat > configure.in <<EOF
AC_INIT(configure.in)
AC_PREREQ([2.1]);
AC_PREREQ([2.5]);
EOF
$ autoconf
$ ./configure --version
configure generated by autoconf version 2.13
Created an attachment (id=26671) [edit]
ac-wrapper.pl
Ok, I wanted to rather add it via function. I however suck at perl, so
cannot see why it does not work properly :/ Any ideas?
It should read something like
sub ac_version {
return ((@versions = cat_(shift) =~ /^\s*\[?AC_PREREQ\(\[?([^\)]{3}[0-9]?)[^\)]*\]?\)/mg) ? ((sort @versions)[-1]) : '');
}
you got the assignment wrong. It's not @versions = cat_(shift) but @versions = (cat_(shift) =~ ... ) The match operator returns an array ($1, $2, $3, $4, ...) which is assigned to @versions.
alternativly you could write something like
my $file = cat_(shift);
return ((@versions = $file =~ ...
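Combining those two corrections, the helper would read roughly as follows (a sketch of the fix being discussed, not the exact code committed to ac-wrapper-4):

sub ac_version {
    my $file = cat_(shift);
    my @versions = ($file =~ /^\s*\[?AC_PREREQ\(\[?([^\)]{3}[0-9]?)[^\)]*\]?\)/mg);
    # take the (stringwise) largest AC_PREREQ version found, or '' if none
    return @versions ? (sort @versions)[-1] : '';
}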
I did try adding the cat_() in between. It seems rather that it's the return that causes issues for some reason. Maybe because of the '.' in what is returned?
Created an attachment (id=27157) [edit]
ac-wrapper with my fix
sorry for the long delay.
I modified the ac-wrapper as I described above, and it works for me.
could you please provide a short counter-example where it doesn't?
I've committed this to ac-wrapper-4 which is used in autoconf-2.59-r4.
[Source: http://bugs.gentoo.org/41389 | crawl-002 | refinedweb | 680 words | Flesch 55.1]
Funny thing about C parameter evaluation order...
I just explained this to a friend today, and thought this might make an interesting blog posting:
#include <stdio.h>
int main( int argc, const char * argv[] )
{
    char theText[2] = { 'A', 'B' };
    char* myString = theText;
    printf( "%c, %c\n", *(++myString), *myString );

    return 0;
}
The above code is platform-dependent in C. Yes, you read correctly: platform dependent. And I'm not nitpicking that this may cause a problem if your compiler is old or that some compiler may not have printf() or the POSIX standard.
This code is platform-dependent, because the C standard says that there is no guarantee in which order the parameters of a function call get evaluated. So, if you run the above code, it could print B, B (which most of you probably expected because it corresponds to our left-to-right reading order) or it could print B, A.
If you want to test this and you own an Intel Mac, you can do the following thanks to Rosetta's PowerPC emulation: Create a new "Standard Tool" project in Xcode and paste the above code into the main.c file. Switch to "Release" and change "Architectures" in the build settings for the release build configuration to be "ppc". Build and Run. It'll print B, B. Now change the architecture to "i386" and build and run again. It'll print B, A.
So, why doesn't C define an order? Why did anyone think such odd behaviour was a good idea? Well, to explain that, we'll have to look at what your computer does under the hood to execute a function call. In general, there are two steps: First, the parameters are evaluated and stored in some standardized place where the called function can find them, and then the processor "jumps" to the first command in the new function and starts executing it.
Some CPUs have registers inside the CPU, which are little variables that can hold short values, and which can be accessed a lot quicker than actually going over to a RAM chip and fetching a value. There are different registers for different kinds of values. Many CPUs have separate registers for floating-point numbers and integers. And just like with RAM, it's sometimes faster to access these registers in a certain order.
So, it may be faster to first evaluate all integer-value parameters, and then those that contain floating-point values. Depending on what physical CPU your computer has (or in the case of Rosetta, what characteristics the emulated CPU your code is being run on has), these performance characteristics may be different. Some CPUs may have so few registers that the parameters will always have to be passed in RAM. Others may put larger parameters in RAM and smaller ones in registers, others again may put the first couple parameters in registers (maybe even distributing a longer parameter across several registers), and the rest that don't fit in RAM, etc.
So, to make sure C can be made to run that little bit faster on any of these CPUs, its designers decided not to enforce an order for execution of parameters. And that's one of the dangers of writing code in C++ or Objective C: It may look like a high-level language, but underneath it is still a portable assembler, with platform-dependencies like this.
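The usual way around this (my own note, not from the original post) is to force the order yourself by evaluating the arguments in separate statements before the call, so the result no longer depends on the compiler's argument-evaluation order:

#include <stdio.h>

int main( int argc, const char * argv[] )
{
    char theText[2] = { 'A', 'B' };
    char* myString = theText;

    char first  = *(++myString);   /* evaluate the increment first          */
    char second = *myString;       /* then the plain dereference            */

    printf( "%c, %c\n", first, second );   /* always prints "B, B" */
    return 0;
}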
[Source: http://orangejuiceliberationfront.com/funny-thing-about-c-parameter-evaluation-order/ | CC-MAIN-2018-34 | refinedweb | 568 words | Flesch 59.43]
Abstract
Contents
- Abstract
- Rationale
- Proposal
- Backwards compatibility
- Implementation
- Rejected Ideas
- Acknowledgements
- References
This PEP proposes a protocol for classes which represent a file system path to be able to provide a str or bytes representation. Changes to Python's standard library are also proposed to utilize this protocol where appropriate to facilitate the use of path objects where historically only str and/or bytes file system paths are accepted. The goal is to facilitate the migration of users towards rich path objects while providing an easy way to work with code expecting str or bytes.
Rationale
Historically in Python, file system paths have been represented as strings or bytes. This choice of representation has stemmed from C's own decision to represent file system paths as const char * [3]. While that is a totally serviceable format to use for file system paths, it's not necessarily optimal. At issue is the fact that while all file system paths can be represented as strings or bytes, not all strings or bytes represent a file system path. This can lead to issues where any e.g. string duck-types to a file system path whether it actually represents a path or not.
To help elevate the representation of file system paths from their representation as strings and bytes to a richer object representation, the pathlib module [4] was provisionally introduced in Python 3.4 through PEP 428. While considered by some as an improvement over strings and bytes for file system paths, it has suffered from a lack of adoption. Typically the key issue listed for the low adoption rate has been the lack of support in the standard library. This lack of support required users of pathlib to manually convert path objects to strings by calling str(path) which many found error-prone.
One issue in converting path objects to strings comes from the fact that the only generic way to get a string representation of the path was to pass the object to str(). This can pose a problem when done blindly as nearly all Python objects have some string representation whether they are a path or not, e.g. str(None) will give a result that builtins.open() [5] will happily use to create a new file.
Exacerbating this whole situation is the DirEntry object [8]. While path objects have a representation that can be extracted using str(), DirEntry objects expose a path attribute instead. Having no common interface between path objects, DirEntry, and any other third-party path library has become an issue. A solution that allows any path-representing object to declare that it is a path and a way to extract a low-level representation that all path objects could support is desired.
This PEP then proposes to introduce a new protocol to be followed by objects which represent file system paths. Providing a protocol allows for explicit signaling of what objects represent file system paths as well as a way to extract a lower-level representation that can be used with older APIs which only support strings or bytes.
Discussions regarding path objects that led to this PEP can be found in multiple threads on the python-ideas mailing list archive [1] for the months of March and April 2016 and on the python-dev mailing list archives [2] during April 2016.
Proposal
This proposal is split into two parts. One part is the proposal of a protocol for objects to declare and provide support for exposing a file system path representation. The other part deals with changes to Python's standard library to support the new protocol. These changes will also lead to the pathlib module dropping its provisional status.
Protocol
The following abstract base class defines the protocol for an object to be considered a path object:
import abc
import typing as t


class PathLike(abc.ABC):

    """Abstract base class for implementing the file system path protocol."""

    @abc.abstractmethod
    def __fspath__(self) -> t.Union[str, bytes]:
        """Return the file system path representation of the object."""
        raise NotImplementedError
Objects representing file system paths will implement the __fspath__() method which will return the str or bytes representation of the path. The str representation is the preferred low-level path representation as it is human-readable and what people historically represent paths as.
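As an illustration (the class below is an invented example, not part of the PEP), a third-party object can participate in the protocol simply by returning its internal path string from __fspath__():

import os


class BackupLocation:
    """Hypothetical path-like object wrapping a directory path."""

    def __init__(self, root, name):
        self._path = os.path.join(root, name)

    def __fspath__(self):
        return self._path            # str is the preferred representation


loc = BackupLocation("/var/backups", "2016-05-11")
# os.fspath() (Python 3.6+) extracts the low-level representation, and
# updated APIs such as open() accept the object directly.
print(os.fspath(loc))                # '/var/backups/2016-05-11'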
Standard library changes
It is expected that most APIs in Python's standard library that currently accept a file system path will be updated appropriately to accept path objects (whether that requires code or simply an update to documentation will vary). The modules mentioned below, though, deserve specific details as they have either fundamental changes that empower the ability to use path objects, or entail additions/removal of APIs.
builtins
open() [5] will be updated to accept path objects as well as continue to accept str and bytes.
os
The fspath() function will be added with the following semantics:
import typing as t

def fspath(path: t.Union[PathLike, str, bytes]) -> t.Union[str, bytes]:
    """Return the string representation of the path.

    If str or bytes is passed in, it is returned unchanged. If __fspath__()
    returns something other than str or bytes then TypeError is raised. If
    this function is given something that is not str, bytes, or os.PathLike
    then TypeError is raised.
    """
    if isinstance(path, (str, bytes)):
        return path

    # Work from the object's type to match method resolution of other magic
    # methods.
    path_type = type(path)
    try:
        path = path_type.__fspath__(path)
    except AttributeError:
        if hasattr(path_type, '__fspath__'):
            raise
    else:
        if isinstance(path, (str, bytes)):
            return path
        else:
            raise TypeError("expected __fspath__() to return str or bytes, "
                            "not " + type(path).__name__)

    raise TypeError("expected str, bytes or os.PathLike object, not "
                    + path_type.__name__)
The os.fsencode() [6] and os.fsdecode() [7] functions will be updated to accept path objects. As both functions coerce their arguments to bytes and str, respectively, they will be updated to call __fspath__() if present to convert the path object to a str or bytes representation, and then perform their appropriate coercion operations as if the return value from __fspath__() had been the original argument to the coercion function in question.
The addition of os.fspath(), the updates to os.fsencode()/os.fsdecode(), and the current semantics of pathlib.PurePath provide the semantics necessary to get the path representation one prefers. For a path object, pathlib.PurePath/Path can be used. To obtain the str or bytes representation without any coercion, then os.fspath() can be used. If a str is desired and the encoding of bytes should be assumed to be the default file system encoding, then os.fsdecode() should be used. If a bytes representation is desired and any strings should be encoded using the default file system encoding, then os.fsencode() is used. This PEP recommends using path objects when possible and falling back to string paths as necessary and using bytes as a last resort.
Another way to view this is as a hierarchy of file system path representations (highest- to lowest-level): path → str → bytes. The functions and classes under discussion can all accept objects on the same level of the hierarchy, but they vary in whether they promote or demote objects to another level. The pathlib.PurePath class can promote a str to a path object. The os.fspath() function can demote a path object to a str or bytes instance, depending on what __fspath__() returns. The os.fsdecode() function will demote a path object to a string or promote a bytes object to a str. The os.fsencode() function will demote a path or string object to bytes. There is no function that provides a way to demote a path object directly to bytes while bypassing string demotion.
The DirEntry object [8] will gain an __fspath__() method. It will return the same value as currently found on the path attribute of DirEntry instances.
The Protocol ABC will be added to the os module under the name os.PathLike.
os.path
The various path-manipulation functions of os.path [9] will be updated to accept path objects. For polymorphic functions that accept both bytes and strings, they will be updated to simply use os.fspath().
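For illustration, once this lands (Python 3.6+), a call such as the following sketch works whether the argument is a str or a path object (the paths are arbitrary):

import os.path
import pathlib

base = pathlib.PurePath("/var/log")

# os.path.join() calls os.fspath() on its arguments, so str and path
# objects can be used interchangeably here.
print(os.path.join(base, "syslog"))   # -> '/var/log/syslog'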
During the discussions leading up to this PEP it was suggested that os.path not be updated, using an "explicit is better than implicit" argument. The thinking was that since __fspath__() is polymorphic itself, it may be better to have code working with os.path extract the path representation from path objects explicitly. There is also the consideration that adding support this deep into the low-level OS APIs will lead to code magically supporting path objects without requiring any documentation updates, leading to potential complaints when it doesn't work, unbeknownst to the project author.
But it is the view of this PEP that "practicality beats purity" in this instance. To help facilitate the transition to supporting path objects, it is better to make the transition as easy as possible than to worry about unexpected/undocumented duck typing support for path objects by projects.
There has also been the suggestion that os.path functions could be used in a tight loop and the overhead of checking or calling __fspath__() would be too costly. In this scenario only path-consuming APIs would be directly updated and path-manipulating APIs like the ones in os.path would go unmodified. This would require library authors to update their code to support path objects if they performed any path manipulations, but if the library code passed the path straight through then the library wouldn't need to be updated. It is the view of this PEP and Guido, though, that this is an unnecessary worry and that performance will still be acceptable.
pathlib
The constructor for pathlib.PurePath and pathlib.Path will be updated to accept PathLike objects. Both PurePath and Path will continue to not accept bytes path representations, and so if __fspath__() returns bytes it will raise an exception.
The path attribute will be removed as this PEP makes it redundant (it has not been included in any released version of Python and so is not a backwards-compatibility concern).
C API
The C API will gain an equivalent function to os.fspath():
/*
    Return the file system path representation of the object.

    If the object is str or bytes, then allow it to pass through with
    an incremented refcount. If the object defines __fspath__(), then
    return the result of that method. All other types raise a TypeError.
*/
PyObject *
PyOS_FSPath(PyObject *path)
{
    _Py_IDENTIFIER(__fspath__);
    PyObject *func = NULL;
    PyObject *path_repr = NULL;

    if (PyUnicode_Check(path) || PyBytes_Check(path)) {
        Py_INCREF(path);
        return path;
    }

    func = _PyObject_LookupSpecial(path, &PyId___fspath__);
    if (NULL == func) {
        return PyErr_Format(PyExc_TypeError,
                            "expected str, bytes or os.PathLike object, "
                            "not %S",
                            path->ob_type);
    }

    path_repr = PyObject_CallFunctionObjArgs(func, NULL);
    Py_DECREF(func);
    if (!PyUnicode_Check(path_repr) && !PyBytes_Check(path_repr)) {
        Py_DECREF(path_repr);
        return PyErr_Format(PyExc_TypeError,
                            "expected __fspath__() to return str or bytes, "
                            "not %S",
                            path_repr->ob_type);
    }

    return path_repr;
}
Backwards compatibility
There are no explicit backwards-compatibility concerns. Unless an object incidentally already defines a __fspath__() method there is no reason to expect the pre-existing code to break or expect to have its semantics implicitly changed.
Libraries wishing to support path objects and a version of Python prior to Python 3.6 and the existence of os.fspath() can use the idiom of path.__fspath__() if hasattr(path, "__fspath__") else path.
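That idiom is sometimes wrapped in a tiny helper so call sites stay readable; a minimal sketch (the helper name is made up):

def _fspath_compat(path):
    """Return the path representation on Python versions without os.fspath()."""
    return path.__fspath__() if hasattr(path, "__fspath__") else path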
Implementation
This is the task list for what this PEP proposes to be changed in Python 3.6:
- Remove the path attribute from pathlib (done)
- Remove the provisional status of pathlib (done)
- Add os.PathLike (code and docs done)
- Add PyOS_FSPath() (code and docs done)
- Add os.fspath() (done)
- Update os.fsencode() (done)
- Update os.fsdecode() (done)
- Update pathlib.PurePath and pathlib.Path (done)
- Add __fspath__()
- Add os.PathLike support to the constructors
- Add __fspath__() to DirEntry (done)
- Update builtins.open() (done)
- Update os.path (done)
- Add a glossary entry for "path-like" (done)
- Update "What's New" (done)
Rejected Ideas
Other names for the protocol's method
Various names were proposed during discussions leading to this PEP, including __path__, __pathname__, and __fspathname__. In the end people seemed to gravitate towards __fspath__ for being unambiguous without being unnecessarily long.
Separate str/bytes methods
At one point it was suggested that __fspath__() only return strings and another method named __fspathb__() be introduced to return bytes. The thinking is that by making __fspath__() not be polymorphic it could make dealing with the potential string or bytes representations easier. But the general consensus was that returning bytes will more than likely be rare and that the various functions in the os module are the better abstraction to promote over direct calls to __fspath__().
Providing a path attribute
To help deal with the issue of pathlib.PurePath not inheriting from str, originally it was proposed to introduce a path attribute to mirror what os.DirEntry provides. In the end, though, it was determined that a protocol would provide the same result while not directly exposing an API that most people will never need to interact with directly.
Have __fspath__() only return strings
Much of the discussion that led to this PEP revolved around whether __fspath__() should be polymorphic and return bytes as well as str or only return str. The general sentiment for this view was that bytes are difficult to work with due to their inherent lack of information about their encoding and PEP 383 makes it possible to represent all file system paths using str with the surrogateescape handler. Thus, it would be better to forcibly promote the use of str as the low-level path representation for high-level path objects.
In the end, it was decided that using bytes to represent paths is simply not going to go away and thus they should be supported to some degree. The hope is that people will gravitate towards path objects like pathlib and that will move people away from operating directly with bytes.
A generic string encoding mechanism
At one point there was a discussion of developing a generic mechanism to extract a string representation of an object that had semantic meaning (__str__() does not necessarily return anything of semantic significance beyond what may be helpful for debugging). In the end, it was deemed to lack a motivating need beyond the one this PEP is trying to solve in a specific fashion.
Have __fspath__ be an attribute
It was briefly considered to have __fspath__ be an attribute instead of a method. This was rejected for two reasons. One, historically protocols have been implemented as "magic methods" and not "magic methods and attributes". Two, there is no guarantee that the lower-level representation of a path object will be pre-computed, potentially misleading users that there was no expensive computation behind the scenes in case the attribute was implemented as a property.
This also indirectly ties into the idea of introducing a path attribute to accomplish the same thing. This idea has an added issue, though, of accidentally having any object with a path attribute meet the protocol's duck typing. Introducing a new magic method for the protocol helpfully avoids any accidental opting into the protocol.
Provide specific type hinting support
There was some consideration to providing a generic typing.PathLike class which would allow for e.g. typing.PathLike[str] to specify a type hint for a path object which returned a string representation. While potentially beneficial, the usefulness was deemed too small to bother adding the type hint class.
This also removed any desire to have a class in the typing module which represented the union of all acceptable path-representing types as that can be represented with typing.Union[str, bytes, os.PathLike] easily enough and the hope is users will slowly gravitate to path objects only.
Provide os.fspathb()
It was suggested that to mirror the structure of e.g. os.getcwd()/os.getcwdb(), that os.fspath() only return str and that another function named os.fspathb() be introduced that only returned bytes. This was rejected as the purposes of the *b() functions are tied to querying the file system where there is a need to get the raw bytes back. As this PEP does not work directly with data on a file system (but which may be), the view was taken this distinction is unnecessary. It's also believed that the need for only bytes will not be common enough to need to support in such a specific manner as os.fsencode() will provide similar functionality.
Call __fspath__() off of the instance
An earlier draft of this PEP had os.fspath() calling path.__fspath__() instead of type(path).__fspath__(path). This was changed to be consistent with how other magic methods in Python are resolved.
Acknowledgements
Thanks to everyone who participated in the various discussions related to this PEP that spanned both python-ideas and python-dev. Special thanks to Stephen Turnbull for direct feedback on early drafts of this PEP. More special thanks to Koos Zevenhoven and Ethan Furman for not only feedback on early drafts of this PEP but also helping to drive the overall discussion on this topic across the two mailing lists.
ecvt, fcvt − convert a floating-point number to a string.
#include <stdlib.h>

char *ecvt(double number, int ndigits, int *decpt, int *sign);
char *fcvt(double number, int ndigits, int *decpt, int *sign);
The ecvt() function converts number to a null-terminated string of ndigits digits (where ndigits is reduced to a system-specific limit determined by the precision of a double), and returns a pointer to the string. The string contains no decimal point; the position of the decimal point relative to the start of the string is stored in *decpt, and *sign is set to a nonzero value if number is negative, and to zero otherwise. If number is zero, it is unspecified whether *decpt is 0 or 1.
The fcvt() function is identical to ecvt(), except that ndigits specifies the number of digits after the decimal point.

Both the ecvt() and fcvt() functions return a pointer to a static string containing the ASCII representation of number. The static string is overwritten by each call to ecvt() or fcvt().
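Because the digits and the decimal-point position come back separately, a short usage sketch may help (the values in the comments are what a typical glibc build produces and are meant as illustration only; depending on your libc you may need a feature-test macro such as _DEFAULT_SOURCE before the includes):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int decpt, sign;

    /* ecvt: 5 significant digits of -3.1415 -> digits "31415", decpt = 1, sign != 0 */
    char *e = ecvt(-3.1415, 5, &decpt, &sign);
    printf("ecvt: digits=%s decpt=%d sign=%d\n", e, decpt, sign);

    /* fcvt: 2 digits after the decimal point of 3.1415 -> digits "314", decpt = 1 */
    char *f = fcvt(3.1415, 2, &decpt, &sign);
    printf("fcvt: digits=%s decpt=%d sign=%d\n", f, decpt, sign);

    return 0;
}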
These functions are obsolete; sprintf() is recommended instead. Linux libc4 and libc5 specified the type of ndigits as size_t. Not all locales use a point as the radix character ('decimal point').
SysVR2, XPG2
gcvt(3), setlocale(3), sprintf(3)
Insert new rows into database using QSqlTableModel
I am using QSqlTableModel with manual submit. Consider a situation where 4 new rows are inserted into this model using insertRows, and the user adds data to only 2 of the newly created rows. Then on manual submit, it returns with the error "No fields to update". The following code is used:
@model->database().transaction();
if(model->submitAll())
{
    model->database().commit();
}
else
{
    model->database().rollback();
    QMessageBox::warning(this, "Database Write Error",
                         tr("The database reported an error: %1")
                         .arg(model->lastError().text()));
}@
So when submitAll() fails, nothing is updated in the database. How can the code be changed so that the valid rows are inserted/updated in the database while the others remain in the cache, instead of nothing being inserted at all?
Thanking You,
Ras
Could you please add some source code to explain how you use insertRows?
Here is the code used for inserting rows
@ int row = model->rowCount();
model->insertRows(row, 4);@
Or
@ int row = model->rowCount();
model->insertRow(row);@
called 4 times
Have a look at this as it is working.
@#include <QCoreApplication>
#include <QSqlDatabase>
#include <QSqlTableModel>
#include <QSqlQuery>
#include <QSqlError>
#include <QDebug>
#include <QModelIndex>
int main(int argc, char *argv[])
{
QCoreApplication a(argc, argv);
    QSqlDatabase db = QSqlDatabase::addDatabase("QSQLITE");
    db.setDatabaseName("rast123.sqlite");
    if(!db.open())
    {
        qDebug() << db.lastError().text();
        return 0;
    }

    QSqlQuery q(db);
    if(!q.exec("create table if not exists \"rast123\" (id integer)"))
    {
        qDebug() << "Create table" << q.lastError().text();
        return 0;
    }

    QSqlTableModel model(0, db);
    model.setEditStrategy(QSqlTableModel::OnManualSubmit);
    model.setTable("rast123");
    model.select();

    model.database().transaction();

    int rowCount = model.rowCount();
    qDebug() << rowCount;
    if(!model.insertRows(rowCount, 4))
    {
        qDebug() << "insertRows" << model.lastError().text();
        return 0;
    }
    model.setData(model.index(rowCount + 0, 0), rowCount + 0);
    model.setData(model.index(rowCount + 1, 0), rowCount + 1);
    model.setData(model.index(rowCount + 2, 0), rowCount + 2);
    model.setData(model.index(rowCount + 3, 0), rowCount + 3);

    rowCount = model.rowCount();
    if(!model.insertRow(rowCount))
    {
        qDebug() << "insertRow" << model.lastError().text();
        return 0;
    }
    model.setData(model.index(rowCount, 0), rowCount);

    if(model.submitAll())
    {
        model.database().commit();
    }
    else
    {
        model.database().rollback();
        qDebug() << "Database Write Error"
                 << "The database reported an error: "
                 << model.lastError().text();
    }

    return a.exec();
}
@
Hi,
The problem only occurs when the user didn't fill the rows completely; as an example, try it after removing two lines from
@ model.setData(model.index(rowCount + 0,0), rowCount +0);
model.setData(model.index(rowCount + 1,0), rowCount +1);
model.setData(model.index(rowCount + 2,0), rowCount +2);
model.setData(model.index(rowCount + 3,0), rowCount +3);@
Thanks for your help.
have fun :-)
Please note that it may not work in the situation
@
model.setData(model.index(rowCount + 0,0), rowCount +0);
model.setData(model.index(rowCount + 1,0), rowCount +1);
@
i.e. the user left two rows empty. Any solution?
I tried out your example leaving the two rows empty, and it results in the mentioned error. So you have to find out how many rows the user wants to add prior to calling insertRows.
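Another option (a rough sketch, assuming column 0 is the field the user must fill and that rowCount still holds the row count saved before insertRows() was called) is to drop the newly inserted rows that were left empty before submitting, so only the filled rows reach the database:

@// discard newly inserted rows the user left empty, then submit the rest
for (int row = model->rowCount() - 1; row >= rowCount; --row)
{
    // an unfilled cached row still has a null value in column 0
    if (model->data(model->index(row, 0)).isNull())
        model->removeRow(row);   // removes the empty row from the edit cache
}

model->database().transaction();
if (model->submitAll())
    model->database().commit();
else
    model->database().rollback();@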
Hello.
I have another question: isn't it easier to insert rows after getting the data from the user? I mean the user inserts data into some intermediate table and then you insert it into your DB. So you can create some class based on QAbstractTableModel, implement adding information into it, checking input, etc. Then you may create a special dialog with this model in a QTableView and a spin box for changing the number of rows to insert. After the user has finished input, you check the new data (if you need to) and then you build a query to insert the data into your DB. You may process rows one by one, or maybe at the same time using threads (I really don't think it's a good idea), or even construct a big query which adds all rows at once.
Or let user to add rows one by one - it seems less comfortable, but it's easier to implement.
And another one question: does your DB have any not null fields?
Wilk, this is what I mentioned above. But it depends on the application someone wants to deploy. However, I don't think that multithreaded insertion into a database makes sense in general. Since nearly everything is possible in the coder's world ('beam me up' not yet), it should be clear what the user wants to enter and how - not the inverse of looking at what Qt does and then deciding what the user has to do. Besides, the example DB only has one integer column.
I'm having trouble figuring out how to get the testing framework set up and usable in Visual Studio 2008 for C++ presumably with the built-in unit testing suite.
Any links or tutorials would be appreciated.
I use UnitTest++.
While the example tutorial is for Visual Studio 2005, the concepts are similar (try setting one up for VC6...).
Update: The VC6 hacks are now included in the source!
I'm not 100% sure about VS2008, but I know that the Unit Testing framework that microsoft shipped in VS2005 as part of their Team Suite was only for .NET, not C++
I've used CppUnit also and it was alright. Much the same as NUnit/JUnit/so on.
If you've used boost, they also have a unit testing library
The guys behind boost have some serious coding chops, so I'd say their framework should be pretty good, but it might not be the most user friendly :-)
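For what it's worth, a minimal Boost.Test case looks roughly like this sketch (header-only variant; the module name, the add() function, and the test name are all made up):

// Boost.Test, header-only usage
#define BOOST_TEST_MODULE AdderTests
#include <boost/test/included/unit_test.hpp>

int add(int a, int b) { return a + b; }   // stand-in for the code under test

BOOST_AUTO_TEST_CASE(add_two_numbers)
{
    BOOST_CHECK_EQUAL(add(1, 2), 3);
}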
The framework included with VS9 is .NET, but you can write tests in C++/CLI, so as long as you're comfortable learning some .NET isms, you should be able to test most any C++ code.
boost.test and googletest look to be fairly similar, but adapted for slightly different uses. Both of these have a binary component, so you'll need an extra project in your solution to compile and run the tests.
The framework we use is CxxTest, which is much lighter; it's headers only, and uses a Perl (!) script to scrape test suite information from your headers (suites inherit from CxxTest::TestSuite, and all your test methods' names start with "test"). Obviously, this requires that you get Perl from one source or another, which adds overhead to your build environment setup.
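As a sketch of what that looks like (the file, class, and method names are made up; the cxxtestgen script generates the runner from this header):

// my_suite.h -- scraped by cxxtestgen to produce the test runner
#include <cxxtest/TestSuite.h>

class MySuite : public CxxTest::TestSuite
{
public:
    void testAddition()   // picked up because the name starts with "test"
    {
        TS_ASSERT_EQUALS(1 + 2, 3);
    }
};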
The unit tester for VS2008 is only for .NET code as far as I know.
I used CppUnit on Vs2005 and found it to be pretty good.
As far as I remember, the setup was relatively painless, just make sure that in your testing projects the linker (Linker->Input->Additional Dependencies) includes cppunitd.lib.
Then,
#include <cppunit/extensions/HelperMacros.h> in your header
You can then follow the steps in the CppUnit documentation to get your test class working.
Google releases C++ Test Framework which is very similar with xUnit frameworks.
This page may help - it reviews a bunch of cpp unit test frameworks:
Check out CPPUnitLite or CPPUnitLite2. CPPUnitLite was created by Michael Feathers, who originally ported Java's JUnit to C++ as CPPUnit. CPPUnit tries mimic the development model of JUnit - but C++ lacks Java's features (reflection) to make it easy to use. CPPUnitLite attempts to make a true C++-style testing framework, not a Java one ported to C++. (I'm paraphrasing from Feather's Working Effectively with Legacy Code book). The CPPUnitLite2 seems to be another re-write - with more features and bug fixes.
I also just stumbled across UnitTest++ - which includes stuff from CPPUnitLite2 and some other framework.
Microsoft has released (via MSDN magazine) WinUnit. The download for the code seems broken, but here is a link found in the comments here. * Update: WinUnit Homepage *
Personally, I prefer WinUnit since it doesn't require me to write anything except for my tests (I build a .dll as the test, not an exe). I just build a project, and point WinUnit.exe to my test output directory and it runs everything it finds. You can download the WinUnit project here. (MSDN now requires you to download the entire issue, not the article. WinUnit is included within.)
Have a look at CUnitWin32. It includes an example.
There is a way to test unmanaged C++ using the built in testing framework within Visual Studio 2008. If you create a C++ Test Project, using C++/CLI, you can then make calls to an unmanaged DLL. You will have to switch the Common Language Runtime support to /clr from /clr:safe if you want to test code that was written in unmanaged C++.
I have step by step details on my blog here:
The tools that have been mentioned here are all command line tools. If you look for a more integrated solution, have a look at cfix studio, which is a Visual Studio AddIn for C/C++ unit testing . It is quite similar to TestDriven.Net, but for (unmanaged) C/C++ rather than .Net.
Here is the approach I use to test the IIS URL Rewrite module at Microsoft (it is command-line based, but should work for VS too):
Here is an example:
// Example #include "stdafx.h" #include "mstest.h" // Following code is native code. #pragma unmanaged void AddTwoNumbersTest() { // Arrange Adder yourNativeObject; int expected = 3; int actual; // Act actual = yourNativeObject.Add(1, 2); // Assert Assert::AreEqual(expected, actual, L"1 + 2 != 3"); } // Following code is C++/CLI (Managed) #pragma managed using namespace Microsoft::VisualStudio::TestTools::UnitTesting; [TestClass] public ref class TestShim { public: [TestMethod] void AddTwoNumbersTest() { // Just jump to C++ native code (above) ::AddTwoNumbersTest(); } };
With this approach, people don't have to learn too much C++/CLI stuff, all the real test will be done in C++ native and the TestShim class will be used to 'publish' the test to MSTest.exe (or make it visible).
For adding new tests you just declare a new [TestMethod] void NewTest(){::NewTest();} method and a new void NewTest() native function. No macros, no tricks, straightforward.
Now, the header file is optional, but it can be used to expose the Assert class' methods with C++ native signatures (e.g. wchar_t* instead of String^), so you can keep it close to C++ and far from C++/CLI:
Here is an example:
// Example
#pragma once

#pragma managed(push, on)
using namespace System;

class Assert
{
public:
    static void AreEqual(int expected, int actual)
    {
        Microsoft::VisualStudio::TestTools::UnitTesting::Assert::AreEqual(expected, actual);
    }

    static void AreEqual(int expected, int actual, PCWSTR pszMessage)
    {
        Microsoft::VisualStudio::TestTools::UnitTesting::Assert::AreEqual(expected, actual, gcnew String(pszMessage));
    }

    template<typename T>
    static void AreEqual(T expected, T actual)
    {
        Microsoft::VisualStudio::TestTools::UnitTesting::Assert::AreEqual(expected, actual);
    }

    // Etcetera, other overloads...
};
#pragma managed(pop)
HTH
Created on 2007-10-27 22:33 by neuralsensor, last changed 2008-08-24 05:27 by igorcamp.
I get the response shown below when trying to use OpenGL. I have Python
2.5, PIL-1.1.6, and PyOpenGL 3.0 installed.
Any help would be greatly appreciated.
Thanks,
Dale
>>> from OpenGL.GLUT import *
Traceback (most recent call last):
File "<pyshell#0>",'
>>>
__init__.py line 4 is;
from OpenGL.GLUT.special import *
special.py line 73 is;
_base_glutDestroyWindow = GLUT.glutDestroyWindow
Looks like GLUT in special.py is None. You should ask the PyOpenGL
author/community for help. This is not a Python bug.
As gagenellina said, this problem appears to be with OpenGL, not python.
OpenGL is not maintained here. You'll have to open a bug report with them.
You have to put the glut32.dll in Windows/system32 folder.
You can get glut32.dll here:
Extract the glut32.dll to the Windows/system32 folder and voila.
27 February 2008 18:47 [Source: ICIS news]
ORLANDO, Florida (ICIS news)--Biofuels producer Abengoa Bioenergy said on Wednesday it is still seeking financing for a hybrid plant that would include one of the first commercial streams of cellulosic ethanol.
Abengoa has fully staffed its project office and lined up contractors to build the plant, and hopes to close the financing by the end of this year, said Gerson Santos-Leon, executive vice president at Abengoa.
He was speaking at the National Ethanol Conference in Orlando, Florida.
Investment experts had earlier told the conference that the current downturn in equity markets and in ethanol margins meant no new financing for ethanol projects.
The hybrid plant is planned for Hugoton in Kansas.
That would provide an outlet for the distiller's grains that would be a by-product from the ethanol plant, Santos-Leon said.
The hybrid plant would produce 15m gal/year of ethanol from the cellulosic process, using 245,000 tonnes/year of biomass.
The plant would also produce 88m gal/year from first-generation ethanol technology.
slogf()
Send a message to the system logger
Synopsis:
#include <stdio.h>
#include <sys/slog.h>

int slogf( int opcode,
           int severity,
           const char * fmt,
           ... );
Since:
BlackBerry 10.0.0
Arguments:
- opcode
- A combination of a major and minor code. Create the opcode using the _SLOG_SETCODE( major , minor ) macro that's defined in <sys/slog.h>.
The major and minor codes are defined in <sys/slogcodes.h>.
- severity
- The severity of the log message; see " Severity levels," below.
- fmt
- A standard printf() string followed by printf() arguments.
The formatting characters that you use in the message determine any additional arguments.
The vslogf() function is an alternate form in which the arguments have already been captured using the variable-length argument facilities of <stdarg.h>.
Severity levels
There are eight levels of severity defined. The lowest severity is 7 and the highest is 0. The default is 7.
Returns:
The size of the message sent to slogger, or -1 if an error occurs.
Examples:
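A minimal sketch of typical usage (the major code _SLOGC_TEST and the severity constant _SLOG_ERROR are assumed from the standard QNX headers; substitute codes appropriate to your component):

#include <stdio.h>
#include <sys/slog.h>
#include <sys/slogcodes.h>

int main(void)
{
    /* Log an error-severity message under major code _SLOGC_TEST, minor code 1. */
    int rc = slogf(_SLOG_SETCODE(_SLOGC_TEST, 1), _SLOG_ERROR,
                   "device %s failed with status %d", "/dev/ser1", -5);

    if (rc == -1) {
        perror("slogf");
    }
    return 0;
}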
Classification:
Last modified: 2014-11-17
<h1>Changes to the Cyrus IMAP Server since 2.3.7</h1> <ul> <li>Added <tt>improved_mboxlist_sort</tt> option which fixes LIST/LSUB problem with characters like <tt>'-'</tt> and <tt>' '</tt> in mailbox names. See <tt>imapd.conf.5</tt> for details</li> <li>Fixed problem with mupdate randomly spinning.</li> <li>Fixed problem with DELETEing mailboxes with split metadata directories.</li> <li>Fixed compatibility problem with RFC 4314 ACLs and mixed 2.2/2.3 environments.</li> <li>Fixed problem with replication and COPYing \Seen messages.</li> <li>Fixed problem with replication and XFER.</li> <li>Added options to reconstruct to preserve cyrus.expunge and to synchronize changes to a replica server.</li> <li>Removed (broken) support for proxying of pipelined IMAP commands.</li> <li>Added new <tt>cyr_dbtool</tt> utility for manipulating Cyrus databases (courtesy of Fastmail.fm).</li> <li>Better sanity checking of IMAP URLs.</li> <li>Fixed miscellaneous bugs and build issues.</li> </ul> <h1>Changes to the Cyrus IMAP Server since 2.3.6</h1> <ul> <li>Fixed problems with replication and virtual domains.</li> <li>Fixed problems with newer cyrus.index files on 64-bit machines.</li> <li>Added '<tt>-p <ssf></tt>' option to services so that PLAIN authentication can be used without TLS in secure environments.</li> <li>Added <tt>munge8bit</tt> to control whether unencoded 8-bit characters in headers are changed to 'X' or are left alone.</li> <li>Added <tt>sieve_allowreferrals</tt> option to control whether <tt>timsieved</tt> issues referrals or proxys traffic to backends.</li> <li>Fixed miscellaneous bugs and build issues.</li> </ul> <h1>Changes to the Cyrus IMAP Server since 2.3.5</h1> <ul> <li>Fixed COPY code so that clients display new messages (added MODSEQ).</li> <li>Fixed imtest to be compatible with SASL 2.1.22.</li> </ul> <h1>Changes to the Cyrus IMAP Server since 2.3.4</h1> <ul> <li>Fixed append/delivery code so that clients display new messages.</li> </ul> <h1>Changes to the Cyrus IMAP Server since 2.3.3</h1> <ul> <li>Added support for BINARY APPEND (including CATENATE). 
<i>Based on contributions from Tom Esh <esh@lucent.com></i>.</li> <li>Added support for CONDSTORE (must be enabled on a per-mailbox basis with the <tt>/vendor/cmu/cyrus-imapd/condstore</tt> mailbox annotation.</li> <li>Fixed bug in reconstruct using bad name for cyrus.header.</li> <li>Fixed bug with replication and default partition.</li> <li><tt>ctl_mboxlist</tt> now dumps/undumps the mailbox type flags, making it useful for remote mailboxes.</li> <li>Better logging to facilitate message tracking (Wes Craig <wes@umich.edu>).</li> <li>Implemented CAPABILITY response in banner and after authentication.</li> <li>Fixed miscellaneous bugs and build issues.</li> </ul> <h1>Changes to the Cyrus IMAP Server since 2.3.2</h1> <ul> <li>Fixed broken berkeley (btree) backend.</li> </ul> <h1>Changes to the Cyrus IMAP Server since 2.3.1</h1> <ul> <li>Added more extensive output to arbitron.</li> <li>Allow responses of any length from backend when proxing IMAP/POP3/NNTP traffic.</li> <li>Properly handle timeouts when proxying.</li> <li>Added plaintextloginalert option.</li> <li>Fixed segfault in deliver.</li> <li>Only allow mbpath to be run as Cyrus user.</li> <li>Added nntptimeout option for nntpd.</li> <li>Added berkeley_hash and berkeley_hash_nosync cyrusdb backends (seem to perform better under heavy loads).</li> <li>Added TLS support to cyradm.</li> <li>Fixed miscellaneous bugs and build issues.</li> </ul> <h1>Changes to the Cyrus IMAP Server since 2.3.0</h1> <ul> <li>Updated ACL code to RFC 4314 (separate rights for message delete, mailbox delete, and expunge).</li> <li>Fixed IDLE to use idled for local mailboxes.</li> <li>Fixed miscellaneous build issues.</li> </ul> <h1>Changes to the Cyrus IMAP Server since 2.2.x</h1> <ul> <li>Added support for "unified" and "replicated" Murders. A Murder no longer has to have discrete frontend and backend servers; any one "unified" server can both proxy and serve local mailboxes (proxy functionality in <tt>proxyd</tt> and <tt>lmtpproxyd</tt> has been merged with <tt>imapd</tt> and <tt>lmtpd</tt> respectively), or all "replicated" servers can serve the same mailboxes from a shared filesystem. The new <tt>mupdate_config</tt> option in <tt>imapd.conf</tt> is used to determine whether a Murder is using a "traditional", "unified" or "replicated" configuration.</li> <li>Ported/rewrote/integrated David Carter's mailspool replication code. <i>Development sponsored by Columbia University</i>.</li> <li>Added support for "delayed" expunge, in which messages are removed from the mailbox index at the time of the EXPUNGE (hiding them from the client), but the message files and cache entries are left behind, to be purged at a later time by <tt>cyr_expire</tt>. This reduces the amount of I/O that takes place at the time of EXPUNGE and should result in greater responsiveness for the client, especially when expunging a large number of messages. The new <tt>expunge_mode</tt> option in <tt>imapd.conf</tt> controls whether expunges are "immediate" or "delayed". <i>Development sponsored by FastMail</i>.</li> <li>Added support to place some/all mailbox metadata files (cyrus.* files) on a separate (probably high-speed) partition. See the new <tt>metapartition</tt> and <tt>metapartition_files</tt> options for details. <i>Development sponsored by FastMail</i>.</li> <li>Added support for accessing subfolders of INBOX via POP3. See the new <tt>popsubfolders</tt> option for details. 
<i>Development sponsored by FastMail</i>.</li> <li>Added support to <tt>lmtpd</tt> to do "fuzzy" mailbox matching on user+detail addresses. See the new <tt>lmtp_fuzzy_mailbox_match</tt> option for details. <i>Development sponsored by FastMail</i>.</li> <li>Added new <tt>sieve_extensions</tt> option to allow individual Sieve extensions to be enabled/disabled.</li> <li>The Sieve "include" extension is now supported. This also allows for global sieve scripts. See the new <tt>sieve_extensions</tt> options to enable it.</li> <li>The Sieve "body" extension is now supported. See the new <tt>sieve_extensions</tt> option to enable it. <i>Development sponsored by FastMail</i>.</li> <li>The $text$ variable for Sieve notify messages is now supported. <i>Development sponsored by FastMail</i>.</li> <li>The MIME structure of a new message destined for multiple recipients is now only parsed once rather than once per delivery, resulting in better performance. <i>Development sponsored by FastMail</i>.</li> <li>Support 64-bit quota usage (both per mailbox and for the entire quotaroot), based on a patch from Jeremy Rumpf. <i>Development sponsored by FastMail</i>.</li> <li>Added new <tt>flushseenstate</tt> option which causes imapd to immediately flush changes in \Seen state to disk rather than caching them until the mailbox is closed. Enabling this option may fix \Seen state weirdness with MS Outlook, at the expense of performance/scalability. <i>Based on a patch by John A. Tamplin (jtampli@sph.emory.edu).</i></li> <li>The Sieve "copy" extension is now supported.</li> <li>The IMAP "CATENATE" and "URLAUTH" extensions are now supported.</li> <li>Updated Sieve "vacation" extension to draft-ietf-sieve-vacation-04.</li> <li>Added support for Sieve scripts on shared mailboxes via the /vendor/cmu/cyrus-imapd/sieve annotation.</li> </ul> <h1>Changes to the Cyrus IMAP Server since 2.2.13</h1> <ul> <li><tt>ctl_mboxlist</tt> now dumps/undumps the mailbox type flags, making it useful for remote mailboxes.</li> <li>Added <tt>sieve_allowreferrals</tt> option to control whether <tt>timsieved</tt> issues referrals or proxys traffic to backends.</li> </ul> <h1>Changes to the Cyrus IMAP Server since 2.2.12</h1> <ul> <li>Allow sieve scripts to be run on shared mailboxes (via <tt>sieve</tt> annotation).</li> <li>Updated <tt>nntpd</tt> to be compliant with latest draft (soon to be RFC3977).</li> <li>Updated IMAP UIDPLUS extension to be compliant with latest specification (RFC4315).</li> <li>Performance improvements to <tt>quota</tt> utility.<li> <li>Fixed possible race condition in IMAP IDLE.</li> <li>Made <tt>ptloader</tt> runtime configurable.</li> <li>Added more extensive output to <tt>arbitron</tt>.</li> <li>Allow responses of any length from backend when proxing IMAP/POP3/NNTP traffic.</li> <li>Added <tt>plaintextloginalert</tt> option.</li> <li>Only allow <tt>mbpath</tt> to be run as Cyrus user.</li> <li>Added <tt>berkeley_hash</tt> and <tt>berkeley_hash_nosync</tt> cyrusdb backends (seem to perform better under heavy loads).</li> <li>Added <tt>lastpop</tt> mailbox annotation.</li> <li>Added subscribe/unsubscribe support to <tt>cyradm</tt>.</li> <li>Fixed miscellaneous bugs and build issues.</li> </ul> <h1>Changes to the Cyrus IMAP Server since 2.2.11</h1> <ul> <li>Revert index change which wasn't supposed to make it into 2.2.11</li> </ul> <h1>Changes to the Cyrus IMAP Server since 2.2.10</h1> <ul> <li>Fix possible single byte overflow in mailbox handling code.</li> <li>Fix possible single byte overflows in the imapd 
annotate extension.</li> <li>Fix stack buffer overflows in fetchnews (exploitable by peer news server), backend (exploitable by admin), and in imapd (exploitable by users though only on platforms where a filename may be larger than a mailbox name).</li> </ul> <h1>Changes to the Cyrus IMAP Server since 2.2.9</h1> <ul> <li>Fix 0 termination in mysasl_canon_user.</li> <li>Check for imap magic plus buffer overflow in proxyd also (CAN-2004-1015).</li> <li>Only send an over quota ALERT on SELECT if the quotaroot is different from the last ALERT, or we haven't sent an ALERT in over 10 min.</li> </ul> <h1>Changes to the Cyrus IMAP Server since 2.2.8</h1> <ul> <li>Change ACLs correctly when renaming a user</li> <li>Do not abandon std{in,out,err} file descriptors; syslog assumes it can use stderr if syslogd isn't running.</li> <li>Clean up imap magic plus to avoid buffer overrun (CAN-2004-1011)</li> <li>Fix lack of bounds checking in PARTIAL and FETCH (CAN-2004-1012, CAN-2004-1013)</li> <li>Do not attempt to reuse a freed connection in lmtpproxyd.</li> <li>Allow login without authentication with -N switch in proxyd.</li> <li>Fix use of xrealloc and fold pointers in lmtpengine.</li> </ul> <h1>Changes to the Cyrus IMAP Server since 2.2.7</h1> <ul> <li>Fix a double-free bug in the notify code</li> <li>Fix a problem with idled and an empty mailbox list</li> </ul> <h1>Changes to the Cyrus IMAP Server since 2.2.6</h1> <ul> <li>Fix handling of PARTIAL command and partial body fetches</li> <li>A large number of portability fixes supplied by Albert Chin <china@thewrittenword.com></li> <li>Added <tt>client_timeout</tt> option to control connect() timeouts for proxy code</li> <li>Added <tt>popuseacl</tt> option</li> <li>Fix a number of issues with the <tt>quota -f</tt> tool</li> <li>Fix thread safety issue in saslserver()</li> <li>Fix possible stage file leak in append code</li> <li>Fix bugs in handling of MULTIAPPEND introduced in 2.2.3</li> <li>Fixed regression bug in Sieve vacation</li> </ul> <h1>Changes to the Cyrus IMAP Server since 2.2.5</h1> <ul> <li>Fix a bug in the proxy code where a backend connection might get closed twice</li> <li>Improved consistancy checking in <tt>chk_cyrus</tt></li> <li>Fix segfault in APPEND code</li> <li>Fix a bug with an interaction between sieve and unixhierarchysep</li> <li>Fix a file descriptor leak in the quotadb code</li> <li>Fix a triggered assertation in service-thread services</li> <li>Add a number of internal consistancy checks to the skiplist code</li> <li>Allow <tt>mbpath</tt> to handle virtual domains</li> <li>Fix various MANAGESIEVE client authentication issues</li> <li>Other minor fixes</li> </ul> <h1>Changes to the Cyrus IMAP Server since 2.2.4</h1> <ul> <li>Bug fixed in hash table code that could sometimes cause crashes with the quotalegacy database</li> <li>Net-SNMP compatibility</li> <li>Significantly improved com_err detection</li> <li>Assorted minor NNTP improvements</li> <li>Assorted other minor bugfixes</li> </ul> <h1>Changes to the Cyrus IMAP Server since 2.2.3</h1> <ul> <li>Quota now uses the cyrusdb interface (<tt>quotalegacy</tt> by default).</li> <li>All incoming messages are now staged to disk before locking the destination mailbox (locks are no longer held during a network read).</li> <li>Fixed off-by-one error in <tt>fetchnews</tt> (articles are no longer skipped).</li> <li><tt>nntpd</tt> now uses the Followup-To: header (if exists) instead of the Newsgroups: header when constructing post address(es) and adds them to the Reply-To: header instead 
of the To: header.</li> <li>Added <tt>berkeley_locks_max</tt>, <tt>berkeley_txns_max</tt> and <tt>berkeley_cachesize</tt> options.</li> <li>Added <tt>imapmagicplus</tt> option.</li> <li>Substantial work on afspts/ptloader canonicalization code</li> <li>Much improved LDAP ptloader code (no more internal OpenLDAP dependencies)</li> <li>Fixed a number of IPv6 related bugs</li> </ul> <h1>Changes to the Cyrus IMAP Server since 2.2.2</h1> <ul> <li.</li> <li>Runtime configuration of the Cyrus databases. The cyrudb backend used for each database can be specified with an <tt>imapd.conf</tt> option. <b>NOTE:</b> You MUST convert the database using <tt>cvt_cyrusdb</tt> BEFORE changing the backend in <tt>imapd.conf</tt>. <li>Sendmail socket map support (<tt>smmapd</tt>) for verifying that mailboxes exist and are deliverable before accepting the message and sending it to Cyrus.</li> <li>New <tt>userid</tt> mode for virtual domains, which does NOT do reverse lookups of the IP address.</li> <li><tt>nntpd</tt> now supports the Xref header.</li> <li><tt>nntpd</tt> can now use the POST command to feed articles to upstream servers.</li> <li><tt>fetchnews</tt> can now be used with NNTP servers which don't support the NEWNEWS command.</li> <li><tt>lmtpd</tt> now initializes <tt>duplicate.db</tt> only when it is necessary (when using Sieve or <tt>duplicatesuppression</tt>).</li> <li>Sieve now verifies that text strings are valid UTF-8.</li> <li>Sieve now verifies that address tests and envelope tests are done on headers which contain addresses (can be disabled with <tt>rfc3028_strict: no</tt>).</li> <li>Services will now notice that a new binary has been installed and will restart using the new binary once the existing connection is closed.</li> </ul> <h1>Changes to the Cyrus IMAP Server since 2.2.1</h1> <ul> <li>Major bugfixes in murder altnamespace/unixhierarchysep/virtdomain support (Thanks in large part to work by Christian Schulte <cs@schulte.it>)</li> <li>Improved master process accounting (Henrique de Moraes Holschuh <hmh@debian.org>)</li> <li>Significantly improved message header caching (based in large part on code supplied by David Carter <David.Carter@ucs.cam.ac.uk> from the University of Cambridge) <li>The sieve bytecode format has been updated once more, to correctly handle short-circuiting of the allof and anyof operators</li> <li>Support for warning quota based on absolute mailbox size</li> <li>Correct handling of annotations during XFER operations</li> <li>Simple support for IMAP BINARY extension</li> <li>Support for Automake 1.7 and Autoconf 2.57</li> <li>Support for IMAP initial SASL response (the SASL-IR extension)</li> </ul> <h1>Changes to the Cyrus IMAP Server since 2.2.0</h1> <ul> <li>The improved directory hashing (fulldirhash) is now a runtime configuration option.</li> <li>The netnews.db has been integrated into deliver.db.</li> <li>Full r/w ANNOTATEMORE support, including more annotations that allow the control of operations such as message expiration. 
ANNOTATEMORE is also always enabled now.</li> <li><tt>expirenews</tt> has been replaced by <tt>cyr_expire</tt> which uses annotations for fine-grained mailbox expiration.</li> <li><tt>squatter</tt> can now use annotations for fine-grained mailbox indexing.</li> <li>Many nntpd enhancements including: reader-only and feeder-only modes, support for LIST NEWSGROUPS (via mailbox annotations) and gatewaying news to mail (via mailbox annotations).</li> <li><tt>fetchnews</tt> can now authenticate to the remote server.</li> <li>Removed deprecated LAST command from pop3d.</li> <li>Sieve Bytecode is now stored in network byte order, meaning that bytecode files can be freely moved between different platforms</li> <li>Sieve relational extension now working again.</li> <li>Sieve vacation now uses the correct subject.</li> <li>A large number of bugs involving virtual domain support have been fixed, including issues with the Murder, and with Sieve.</li> </ul> <h1>Changes to the Cyrus IMAP Server since 2.1.x</h1> <ul> <li>There have been extensive performance and consistancy changes to the configuration subsystem. This will both ensure greater consistancy between the documentation and the code, as well as a more standard format for specifing service-specific configuration options in imapd.conf. Important changes are detailed here: <ul> <li> The tls_[service]_* configuration options have been removed. Now use [servicename]_tls_*, where servicename is the service identifier from cyrus.conf for that particular process.</li> <li> Administrative groups (e.g. admins and lmtp_admins) no longer union, service groups completely override the generic group. </li> <li> lmtp_allowplaintext is no longer a defined parameter and must be specified using the service name of your lmtp process if you require a specific value</li> </ul></li> <li> libcyrus has been split into libcyrus_min and libcyrus, so as to allow sensative applications (such as master) include the least amount of code necessary for operation </li> <li> Virtual domain support. See the <a href="install-virtdomains.html">virtual domains</a> document for details.</li> <li> Users can now be renamed (even across domains). Note that this is not atomic and weirdness may occur if the user is logged in during the rename. See the <tt>allowusermoves</tt> option in <tt>imapd.conf(5)</tt> for details.</li> <li> The <tt>db3</tt> and <tt>db3-nosync</tt> database backends have been renamed to <tt>berkeley</tt> and <tt>berkeley-nosync</tt> respectively (to avoid confusion over whether or not db4 is supported).</li> <li> The default mailbox list and seen state database formats have changed to skiplist from Berkeley and Flat, respectively. </li> <li> ptloader is now a regular cyrus service. This has several implications, see <a href=install-upgrade.html>install-upgrade.html</a> for more details.</li> <li> NNTP support. Usenet news can now be fed to and read from Cyrus directly via NNTP, without the need for a local news server. See <a href="install-netnews.html">netnews</a> document for details.</li> <li>IPv6 support, provided by Hajimu UMEMOTO <ume@mahoroba.org></li> <li> Sieve scripts are now compiled to bytecode to allow for faster execution (and lmtpd no longer needs lex or yacc). See <a href=install-upgrade.html>install-upgrade.html</a> for more details.</li> <li> The functionality of pop3proxyd has been merged into pop3d. 
Be sure to update <tt>cyrus.conf</tt> on your frontend machines accordingly.</li> <li> The functionality of <tt>ctl_deliver -E</tt> has been moved to <tt>cyr_expire -E</tt>. Be sure to update <tt>cyrus.conf</tt> on your machines accordingly.</li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.1.14</h2> <ul> <li>Correct a potential DOS attack in the fud daemon.</li> <li>Arbitron now works again</li> <li>Telemetry logging for mupdate</li> <li>Duplicate Suppression logging for redirect sieve actions</li> <li>A number of bugs in reconstruct have been fixed. also added the -p and -x options</li> <li>Better stubbing out of user_deleteacl</li> <li>No longer log any shutdown() failures</li> <li>Improved IPv6 support (for systems with two getnameinfo implementations)</li> <li>Misc Documentation Improvements</li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.1.13</h2> <ul> <li>Be more forgiving in the parsing of MIME boundary headers, specifically those generated by Eudora where the outer boundaries are substrings of the inner boundaries. This feature can be disabled by enabling the <tt>rfc2046_strict</tt> option.</li> <li>Allow cyradm to handle aggregate mailbox sets for ACL and DELETE operations.</li> <li>Add a lmtp_downcase_rcpt option to force the lowercasing of recipient addresses (Henrique de Moraes Holschuh <hmh@debian.org>).</li> <li>Include more MIME headers in sieve rejection notices</li> <li>Add an mbexamine command for debugging purposes</li> <li>LMTP will now fatal error if we cannot initialize the duplicate delivery database.</li> <li>Continued audit by Security Appraisers and Bynari</li> <li>Correctly terminate the processes by calling service_abort even on successful exit (helps to fix a db3 lockers problem)</li> <li>Fix some murder+altnamespace/unixhiersep issues</li> <li>Fix imclient's handling of literals.</li> <li>Add support for the windows-1256 character set</li> <li>Don't log 'could not shut down filedescriptor' messages when the socket is already not connected</li> <li>Now include a script to convert sieve script names to the altnamespace format</li> <li>Added a <tt>--with-extraident</tt> configure option to make it easier to set the extra version information that is compiled into the binary.</li> <li>Minor build fixes.</li> <li>Minor other bug fixes.</li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.1.12</h2> <ul> <li>Add maxfds= option in cyrus.conf</li> <li>"The shutdown() Patch" by Henrique de Moraes Holschuh <hmh@debian.org> and Jeremy Howard <jhoward@fastmail.fm></li> <li>Now report both built-with and running-with OpenSSL versions</li> <li>Misc other small bugfixes</li> <li>Security Appraisers and Bynari review of the majority of the modules in <tt>imap/</tt></li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.1.11</h2> <ul> <li>Master now will forcibly exit if a service is not executable</li> <li>Master now has a daemon mode and pidfile support (-d and -p options)</li> <li>Berkeley DB Configuration methods have changed. Hopefully they're more generic now. 
You can still use --with-dbdir, or you can use --with-bdb-libdir and --with-bdb-incdir</li> <li>timsieved now handles usernames with dots (when unixhierarchysep is active)</li> <li>tugowar has been removed from the distribution.</li> <li>Squatter now has an option to skip unmodified mailboxes.</li> <li>Properly hash username to remove a user's sieve scripts when their INBOX is removed.</li> <li>Reset output buffer when prot_flush returns EOF.</li> <li>Minor Makefile improvements with use of $(srcdir)</li> <li>Remotepurge improvement for empty mailboxes</li> <li>Fix for AFS overwriting the canonicalized username in ptloader</li> <li>Security audit of imapd.c performed by SecurityAppraisers and Bynari</li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.1.10</h2> <ul> <li>Fixed some potential buffer overflows in the sieve code, as well as a pre-login buffer overflow in the IMAP parsing code.</li> <li>ipurge can now skip flagged messages</li> <li>Fix a problem with the flat backend and tracking new files</li> <li>Fix a problem with the memory pool routines on 64-bit machines</li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.1.9</h2> <ul> <li>support Berkley DB 4.1 <li>more portable use of errno throughout <li>timsieved now does telemetry logging <li>libcyrus.a no longer supplies fs_get() and fs_give() </ul> <h2>Changes to the Cyrus IMAP Server since 2.1.8</h2> <ul> <li>Fix a strlcpy() off-by-one error. <li>Better handling of errors in connecting to LMTP servers for deliver and lmtpproxyd. <li>Fix bug in pop3proxyd's pop3s handling. <li>Fix Exim install documentation. </ul> <h2>Changes to the Cyrus IMAP Server since 2.1.7</h2> <ul> <li>Fix a severe locking problem during failed CREATEs</li> <li>Change default locking method to fcntl from flock</li> <li>Don't cleanup the original mailbox during a RENAME while holding the mailbox list lock</li> <li>Quoting fixes in cyradm</li> <li>Small pathname fix in rehash script</li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.1.6</h2> <ul> <li>Correct some minor version number errors.</li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.1.5</h2> <ul> <li>Better locking of the mailbox list during mupdate operations for CREATE and RENAME</li> <li>Permissions fixes for annotations.</li> <li>pop3proxyd now does telemetry logging</li> <li>Cleanup a number of leaks in the murder code</li> <li>Correct semantics of our provided strlcpy(). Fix places where strlcpy() was being used incorrectly.</li> <li>Correct a significant memory leak in the memory pool routines</li> <li>OpenSSL is now handled correctly for the perl modules</li> <li>Small documentation cleanups</li> <li>The normal assortment of small bugfixes</li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.1.4</h2> <ul> <li> Sieve is no longer dependent on duplicate delivery suppression (it still uses the duplicate delivery database however). </li> <li> Sieve now supports <tt>draft-segmuller-sieve-relation-02.txt</tt></li> <li> <tt>imtest</tt>. <tt>imtest</tt> also includes new MANAGESIEVE functionality (sivtest) as well as the ability to reconnect to the same server multiple times (useful for testing SSL/TLS session caching and DIGEST-MD5 fast reauth).</li> <li.</li> <li>Added the chk_cyrus program to help point out missing message files and/or mailboxes</li> <li)</li> <li>The RENAME command has been almost entirely rewritten. Now we rely on mailbox-level locking instead of locking the entire mailboxes file for the duration of the rename. 
<tt>ctl_cyrusdb -r</tt> now also cleans up "reserved" mailboxes that may appear in the event of a crash.</li> <li><tt>ctl_mboxlist</tt> can now dump only a particular partition</li> <li>The configuration subsystem now uses a hash table to speed up lookups of options. Additionally, the hash table implementation has been updated to possibly take advantage of memory pools.</li> <li>Many bugfixes related to the Cyrus Murder. Includes improvments to subscription handling as well as correct merging of seen state on mailbox moves.</li> <li>Can now configure an external debugger (<tt>debug_command</tt> option in imapd.conf.</li> <li>Misc. autoconf-related fixes (most notably those related to sasl_checkapop and O_DSYNC).</li> <li>Misc. locking-related fixes.</li> <li>Security fixes related to handling large literals in getxstring(), as well as correct usage of layers in timsieved.</li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.1.3</h2> <ul> <li> All "MAIL" and "SIEVE" notifications are now handled by <tt>notifyd</tt> which is a daemon that supports multiple notification methods. The <tt>mailnotifier</tt> and <tt>sievenotifier</tt> options have been added to <tt>/etc/imapd.conf</tt> to configure notifications. (Ken Murchison) </li> <li> Many feature enhancements and bugfixes for the Cyrus Murder. The code now supports live (but not transparent) moving of mailboxes from one server to another. </li> <li> Some warning fixes. </li> <li> <tt>fdatasync()</tt> is no longer required. </li> <li> Fixed a bug in <tt>imap/append.c</tt> that would show itself if a message was being delivered to five or more different partitions. </li> <li> Deliveries now don't create a redudant temporary file using <tt>tmpfile()</tt>; the staging directory is used instead. (Ken Murchison) </li> <li> Fix a possible crashing bug in <tt>squatter</tt>. (Ken Murchison) </li> <li> Deleting a user now also removes their Sieve scripts. </li> <li> <tt>cyrusdb_skiplist</tt>: release locks during iteration. Should prevent denial of service attacks and possibly increase performance. </li> <li> <tt>cyrusdb_skiplist</tt>: introduce a new mode using <tt>O_DSYNC</tt> writes which is possibly faster on Solaris. Currently off (it seems to hurt performance on Linux). </li> <li> <tt>master</tt> has preliminary code to avoid forking storms. </li> <li> <tt>sieveshell</tt> should now loop through all available SASL mechanisms before conceding defeat. </li> <li> <tt>sieveshell</tt> can now upload a file to a different name. </li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.1.2</h2> <ul> <li> fud now runs from the Cyrus master process; more generally, the Cyrus master process can deal with UDP services. (Amos Gouaux, <tt>amos@utdallas.edu</tt>)</li> > <li> added <tt>cvt_cyrusdb</tt> for quick conversions between different cyrusdb backends. </li> <li> fixed a bug in the Sieve header cache, where legal header names were being rejected. </li> <li> many Murder-related fixes </li> <li> suppress a bogus TLS session reuse DBERROR message </li> <li> make the list of acceptable TLS ciphers configurable in <tt>/etc/imapd.conf</tt> </li> <li> <tt>cyrusdb_skiplist</tt> fixes; it's now suitable for using in production environments though there are still performance problems outstanding </li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.1.1</h2> <ul> <li> now compatible with Berkeley DB4 (Larry M. 
Rosenbaum, <tt>lmr@ornl.gov</tt>) </li> <li> timsieved now supports proxying via <tt>loginuseacl</tt> (Amos Gouaux, <tt>amos@utdallas.edu</tt>) </li> <li> Sieve <tt>vacation</tt> now does a case-insensitive comparison of <tt>:addresses</tt> </li> <li> Warning-related bug fixes from Henrique de Moras Holschuh <tt>hmh@debian.org</tt></li> <li> automatic archival of db3 files so that filesystem backups are always consistent (Ken Murchison, <tt>ken@oceana.com</tt>)</li> <li> added a skiplist database backend, still needs more testing </li> <li> further work on the Cyrus Murder </li> <li> fixed bug in <tt>remotepurge</tt> dealing with mailboxes with characters that need to be escaped inside quoted-strings </li> <li> Cyrus::IMAP::Admin now supports referrals </li> <li> <tt>cyradm</tt>, via Cyrus::IMAP::Shell, now can remove quotaroots </li> <li> <tt>timsieved</tt>, <tt>sieveshell</tt>, and the MANAGESIEVE protocol extended with referrals </li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.1.0</h2> <ul> <li> now compatible with Cyrus SASL 2.1.0 </li> <li> fixed a problem with LMTP AUTH and unix domain sockets </li> <li> make deleting users faster </li> <li> add a "-n" switch to <tt>remotepurge</tt></li> <li> cyradm now does implicit SASL authorization </li> <li> fix for Sieve <tt>:matches</tt> comparator </li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.0.16</h2> <ul> <li>migrated to SASLv2 (Rob Siemborski)</li> <li>altnamespace: it is now possible to display user mailboxes as siblings to the INBOX at the top-level (Ken Murchison)</li> <li>unixhierarchysep: it is now possible possible to use slash as the hierarchy seperator, instead of a period. (Ken Murchison, inspired by David Fuchs, <tt>dfuchs@uniserve.com</tt>)</li> <li>SSL/TLS session caching (Ken Murchison)</li> <li>support for IMAP CHILDREN & LISTEXT extensions (Ken Murchison, work in progress)</li> <li>check recipient quota & ACL at time of RCPT TO: in <tt>lmtpd</tt> (Ken Murchison)</li> <li>support for LMTP STARTTLS & SIZE extensions (Ken Murchison)</li> <li>unified deliver.db, using cyrusdb interface, hopefully improving concurrency and performance (Ken Murchison)</li> <li>fixed STORE FLAGS () bug (Ken Murchison)</li> <li>fixed SEARCH SUBJECT vs. SEARCH HEADER SUBJECT bug (Ken Murchison)</li> <li>users without an INBOX can have subscriptions (Ken Murchison; noticing a trend here?)</li> <li>added cyrusdb_db3_nosync backend, used for duplicatedb and session cache, to postpone non-critical writes. (Ken Murchison)</li> <li>support for STARTTLS and AUTH=ANONYMOUS for timsieved (Ken Murchison)</li> <li>do setgid and initgroups in master (as urged by several people)</li> <li>added more config info to IMAP ID (in a vain attempt to improve debugging)</li> <li>configure now checks for DB3.3</li> <li>SQUAT (Rob O'Callahan, <tt>roc@cs.cmu.edu</tt>)</li> <li>change SEARCH HEADER <i>x</i> to SEARCH <i>x</i> utilizing internal cache where possible (Rob O'Callahan, <tt>roc@cs.cmu.edu</tt>)</li> <li>an improved directory hashing option (Gary Mills, <tt>mills@cc.UManitoba.CA</tt>)</li> <li>use of EGD for SSL/TLS (Amos Gouaux, <tt>amos@utdallas.edu</tt>)</li> <li>separate certs/keys for services (Henning P. 
Schmiedehausen, <tt>hps@intermeta.de</tt>)</li> <li>ability to force ipurge to traverse personal folders (Carsten Hoeger, <tt>choeger@suse.de</tt>)</li> <li>fixed zero quota bugs in cyradm (Leena Heino, <tt>liinu@uta.fi</tt>)</li> <li>ignore trailing whitespace in imapd.conf</li> <li>Received: header (with TLS and AUTH info)</li> <li>added '-i' switch to sendmail command line for SIEVE reject, redirect and vacation</li> <li>small fixes to notify_unix</li> <li>added "<tt>duplicatesuppression</tt>" switch to imapd.conf for enabling/disabling duplicate delivery suppression (Birger Toedtmann, <tt>birger@takatukaland.de</tt>)</li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.0.15</h2> <ul> <li>fixed a longstanding bug in <tt>quota</tt> that would affect people with unusual top-level hierarchy, fix by John Darrow, <tt>John.P.Darrow@wheaton.edu</tt>.</li> <li>some important fixes to db3 interface code, by Walter Wong <tt>wcw@cmu.edu</tt>, prompted by complaints from Scott Adkins <tt>adkinss@ohio.edu</tt>.</li> <li>fixed some memory leaks in imclient and in the Perl IMAP module, prompted by Toni Andjelkovic <tt>toni@soth.at</tt>.</li> <li>fixed a longstanding authentication error in the Perl IMAP module, should remove pesky extra Password: prompt.</li> <li>fixed some allocation bugs in the managesieve perl module.</li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.0.14</h2> <ul> <li>fixed memory management bugs in <tt>imapd</tt>, <tt>lmtpd</tt> that were being hit due to the connection reuse code and causing subtle and annoying problems.</li> <li>we now clean up better when deleting a user</li> <li>fixed an endian bug in <tt>ipurge</tt></li> <li><tt>pop3d</tt> now can also reuse processes.</li> <li>fix a bug in <tt>imclient</tt> that would strike when <tt>cyradm</tt> specifies a mechanism on the command-line. (SASL mechanism names aren't case sensitive.)</li> <li>fix some bugs in handling SIGHUP in <tt>master</tt></li> <li>fix a couple of goofs in <tt>Admin.pm</tt></li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.0.13</h2> <ul> <li>fixed a silly bug with reusing SSL connections</li> <li><tt>lmtpd</tt> can now service multiple clients in sequence, hopefully improving performance</li> <li>changed how Berkeley db databases are opened, hopefully lessening the chance of deadlock and improving performance</li> <li>fixed a couple of memory leaks</li> <li>lessened the chance of a race condition during <tt>index_check()</tt></li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.0.12</h2> <ul> <li>refactored code so less duplication</li> <li>added alternate config file for partial virtual domain support</li> <li><tt>pop3d</tt> can now disable USER/PASS commands.</li> <li>STARTTLS now accepts a SSLv23 hello but doesn't allow SSLv23 to be negotiated.</li> <li><tt>imtest</tt> no longer buffers to aid use as an automated layer.</li> <li><tt>master</tt> now supports maximum number of service processes via the "maxchild" modifier.</li> <li>fixed a bug in the Sieve string lexer.</li> <li>one <tt>imapd</tt> process can now service multiple clients in sequence, eliminating a large number of forks.</li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.0.11</h2> <ul> <li>portability fixes involving <tt>setrlimit()</tt></li> <li>fixed compiler warnings</li> <li>the STARTTLS command will only accept TLSv1 now, not SSLv2/v3. 
The <tt>imaps</tt> port is unaffected by this change.</li> <li><tt>timsieved</tt> no longer returns garbage strings.</li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.0.9</h2> <ul> <li>some small memory savings</li> <li>the "fud" daemon once again works correctly</li> <li>the IDLE extension now uses signals correctly</li> <li>problems with libwrap have been resolved</li> <li><tt>imapd</tt> and <tt>pop3d</tt> now log connections protected via TLS.</li> <li>efficiency improvements when searching for a particular message-id</li> <li>fixed an envelope-parsing bug affecting SORT and THREAD</li> <li>made RENAME keep the same mailbox uniqueid, preserving seen state across renames</li> <li>STOREing flags to multiple messages in one command is now more efficient</li> <li>RENAME now preserves the ACL</li> <li>LIST is now as efficient as Cyrus v1.6, modulo Berkeley DB issues.</li> <li>Sieve zephyr notifications are now correct.</li> <li>crash in <tt>reconstruct</tt> now fixed.</li> <li>man pages added for <tt>cyrus.conf</tt>, <tt>master</tt>, <tt>lmtpd</tt>, <tt>idled</tt>, <tt>ctl_mboxlist</tt>, and <tt>ctl_deliver</tt>.</li> <li><tt>master</tt> can now listen on specific interfaces</li> <li><tt>master</tt> can now reread <tt>/etc/cyrus.conf</tt> on SIGHUP.</li> <li><tt>timsieved</tt> now uses symlinks instead of hard links.</li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.0.8</h2> <ul> <li>remembered to update this file</li> <li>bug in <tt>Cyrus::IMAP</tt> perl module affecting cyradm's setquota fixed</li> <li>portability fix with <tt>socklen_t</tt></li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.0.7</h2> <ul> <li>preliminary implementation of the IDLE extension (Ken Murchison, <tt>ken@oceana.com</tt>).</li> <li>THREAD=REFERENCES now part of the normal build.</li> <li>tweaks to the installation documentation and suggested Sendmail configuration</li> <li>portability fixes and other small bugfixes</li> <li>added "<tt>-a</tt>" flag to <tt>lmtpd</tt></li> <li>master process can now export statistics about running processes via UCD SNMP AgentX</li> <li>many fixes to Cyrus Murder-related code</li> <li>fixes to perl code, especially the Sieve interface. added an IMSP interface to the perl code, but it still needs work.</li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.0.6</h2> <ul> <li>some number of random static variables eliminated, to save on memory footprint</li> <li>recursive RENAME was a little to eager; fixed. RENAME will also give the client a hint that a sub-RENAME failed. (mostly probably useful for cyradm, but cyradm doesn't take advantage of it yet.)</li> <li>THREAD=X-JWZ has turned into THREAD=REFERENCES (Ken Murchison)</li> <li>DELETE wasn't failing cleanly in database code; fixed.</li> <li>off-by-one bug in seen_db fixed.</li> <li>starting/committing/aborting transactions now logged more correctly in cyrsudb_db3</li> <li>master will now accept port numbers instead of just service names in cyrus.conf. also logs even more verbosely (see bug #115.)</li> <li>libwrap_init() is now inside the loop, since i don't quite understand the semantics of libwrap calls.</li> <li>setquota in cyradm now behaves more sanely (and gives correct usage message).</li> <li>bugfixes to the managesieve client perl api. 
(still needs work.)</li> <li>small fixes in timsieved.</li> <li>added a "make dist" target so i won't dread releases as much.</li> </ul> <h2>Changes to the Cyrus IMAP Server since 2.0.5</h2> <ul> <li>APPEND now honors the \Seen flag.</li> <li>mailboxes file can once again be a flat text file. (compile-time choice)</li> <li>subscriptions file can be flat text or berkeley db. likewise for seen state.</li> <li>unfortunately, the format of the mailboxes file has changed.</li> <li>implementation of "JWZ" threading, a first pass on the forthcoming THREAD=REFERENCES.</li> <li>bugfixes in libacap.</li> <li>bugfixes in other Murder related functionality.</li> <li>removal of dead code.</li> <li>will now look in CYRUS_PREFIX/etc/imapd.conf if there's no /etc/imapd.conf.</li> <li>more paranoid implementation of ID.</li> <li>more descriptive lmtp errors.</li> <li>finished implementation of LMTP 8BITMIME.</li> <li>fixed minor bugs in pop3d.</li> <li>small test suite for cyrusdb backends added in <tt>lib/test/</tt>.</li> <li>added <tt>-DPERL_POLLUTE</tt> to the perl compilation to deal with Perl 5.6.</li> <li>small additions to the Sieve library.</li> <li>As usual, owe lots of thanks to Ken Murchison for his hard work and awesome implementations.</li> </ul> <h2>Changes to the Cyrus IMAP Server SINCE 2.0.4</h2> <ul> <li>Now should work with Berkeley DB 3.1, but does <b>not</b> auto-upgrade 3.0 databases (and, in fact, I haven't written any upgrade software yet).</li> <li>SORT and THREAD should now function correctly.</li> <li>Some configure fixes.</li> <li>Some fixes for possible race conditions in initializing services and database structures.</li> <li>Some non-gcc compile fixes with structure initialization.</li> <li>Some non gcc compile fixes with structure initialization.</li> </ul> <h2>Changes to the Cyrus IMAP Server SINCE 2.0.3</h2> <ul> <li>fixed <tt>mbpath</tt> and <tt>ipurge</tt>. Thanks to Alain Turbide for the bug report.</li> <li>configure: removed <tt>mmap_private.c</tt>; it was buggy, and not worth supporting.</li> <li>configure: improvements in detecting libwrap, typos fixed in detecting libsasl.</li> <li>Merged the acapsieve library into libacap.</li> <li>improvements to the ACAP API.</li> <li>invariant checks added to the skiplist code.</li> <li>bugfix in TCL cyradm.</li> <li>acapmbox.c: bugfixes in handling acap connections.</li> <li>fix the size given for a unix socket address (changes throughout the code), patch thanks to Vladimir Kravchenko, <tt>jimson@null.ru</tt>.</li> <li>rewrote <tt>deliver</tt> to use the generic LMTP API in lmtpengine.c. Likewise, implemented the client-side API in lmtpengine.c. (Still need to implement AUTH.)</li> <li>added SORT and THREAD support (Ken Murchison, <tt>ken@oceana.com</tt>.)</li> <li>In checking an APPEND command, we were rejecting valid system flags and accepting invalid ones.</li> <li>minor bug fixes to <tt>proxyd</tt>.</li> <li>large amount of debugging code added to <tt>target-acap</tt>.</li> <li>build fixes to Perl programs.</li> <li>allow plaintext authentication to <tt>timsieved</tt>.</li> </ul> <h2>Changes to the Cyrus IMAP Server in 2.0</h2> <ul> <li>The mailboxes file is now a transaction-protected Berkeley database.</li> <li>The core delivery process has been moved to <tt>lmtpd</tt>. <tt>deliver</tt> is now a simple wrapper to create an LMTP transaction.</li> <li>master process, responsible for spawning services (<tt>imapd</tt>, <tt>lmtpd</tt>, etc.) and for routine housekeeping. 
Optionally, it can use <tt>libwrap</tt> to allow or deny connections.</li> <li>ACAP (Application Configuration Access Protocol) support for Cyrus Murder: IMAP Aggregator.</li> <li>Sieve enhancements: regular expressions, notifications, automatically setting IMAP flags.</li> <li>SNMP (Simple Network Management Protocol) support for monitoring usage (e.g. number of users logged in) as well as for instrumenting protocol usage (e.g. number of times CREATE has been called).</li> <li>Perl version of <tt>cyradm</tt> contributed by Brandon Allbery (<tt>allbery@ece.cmu.edu</tt>). Eventually we expect to transition to the Perl version away from the TCL version.</li> <li>Bugfix in modified UTF-7 processing (for mailbox names). Bugfix in <tt>index_searchcacheheader()</tt>.</li> <li>Implemented the extension MULTIAPPEND.</li> <li>RENAME is now hierarchical.</li> <li>The right that controls whether a mailbox may be deleted is now "c". (It used to be "d".)</li> <li>An additional backend for seen state has been created, <tt>seen_db</tt>. It stores seen state in a per-user database.</li> </ul> <h2>Changes to the Cyrus IMAP Server Since Version 1.6.20</h2> <ul> <li>Some fixes to the TLS support to gracefully degrade service.</li> <li>Sieve now correctly re-sieves messages that are received with identical message-ids, but different envelopes. It also obeys plus-addressing on keep actions. (Fixes by Ken Murchison, <tt>ken@oceana.com</tt>.)</li> <li>The server wasn't correctly calculating the percentage of quota used when deciding whether or not to issue a warning.</li> <li>Implemented single-instance store: deliver, when using LMTP, will only store one copy of a message per partition, and hard link it among multiple users. Sites with a large number of partitions could see a performance decrease.</li> </ul> <h2>Changes to the Cyrus IMAP Server Since Version 1.6.19</h2> <ul> <li>Added STARTTLS support; requires OpenSSL.</li> <li>Sieve now uses MDNs to reject messages instead of DSNs, conforming to the latest Sieve specification. (Ken Murchison)</li> <li>The duplicate delivery database expiration (deliver -E) was deleting all entries; fixed.</li> <li>imtest is now a little smarter about parsing the protocol to avoid synchronization errors prior to authentication.</li> <li>timsieved's parser is now written in C; should be no noticeable changes, but should make compiling it much easier.</li> </ul> <h2>Changes to the Cyrus IMAP Server Since Version 1.6.16</h2> <ul> <li>Fix to enclosed message parsing (thanks to John Myers).</li> <li>When trying to skip over non-synchronizing literals during error recovery, the IMAP server never stopped eating. Fixed.</li> <li>Added <tt>with-sasldir</tt> as a configure option.</li> <li>Fixed a bug in cyradm when it got the CAPABILITY list.</li> <li>Fixed bugs relating to the incomplete SASLfication of deliver.</li> <li>Fixed bugs in deliver relating to duplicate delivery suppression and Sieve vacation functionality.</li> <li>Fixed a memory leak in deliver.</li> <li>When looking for SASL options, imapd wasn't defaulting to the option without the plugin name requesting it. 
This caused PLAIN authentications to incorrectly fail.</li> <li>Changed the expiration time of pts entries to 3 hours; only affects sites using krb_pts as the authorization method.</li> <li>Fixed some bugs in imclient; mostly affects cyradm.</li> <li>Fixed a bug in the Sieve lexer and improved the usefulness of the Sieve test program.</li> </ul> <h2>Changes to the Cyrus IMAP Server Since Version 1.6.13</h2> <ul> <li>An annoying memory management bug in imclient was fixed and it's efficiency was improved somewhat.</li> <li>Added the sieve client (<tt>installsieve</tt>) and server (<tt>timsieved</tt>) for getting sieve scripts onto the server and managing them once they are there.</li> <li>The default Sieve script layout has changed to sievedir/u/user/default; this supports multiple Sieve scripts per user with the ability to switch between them.</li> <li>Fixed the kerberos-to-local-host bug (patch by Greg Hudson, <tt>ghudson@mit.edu</tt>).</li> <li>Started changes to deliver to support LMTP AUTH.</li> <li>Improved the error messages logged when authentication fails.</li> <li>Fixed a bug dealing with argument processing in the arbitron program.</li> <li>pop3d now correctly supports SASL AUTH.</li> <li>imtest will no longer prompt for authentication or authorization names; instead, it defaults to the current Unix user. Override on the command line.</li> <li>Likewise, cyradm will no longer prompt. It now accepts "-m" to specify what SASL mechanism to use, and the pwcommand option to authenticate should once again work when used non-interactively.</li> </ul> <h2>Changes to the Cyrus IMAP Server Since Version 1.6.10</h2> <ul> <li>Changed the sieve option in the configure script to <tt>--disable-sieve</tt>.</li> <li>Updated reconstruct and quota to check for hashed imap spool directories correctly.</li> <li>deliver now will not use Sieve if duplicate delivery suppression is disabled. There was also a bug that caused the duplicate delivery database to be checked even if dupelim was disabled.</li> <li>deliver now uses tm_gmtoff if available to check for the local timezone.</li> <li>The default format for reading information from INN has changed. If you use INN to feed imapd news, you must change your "<tt>newsfeeds</tt>" file to contain <pre> collectnews!:*:Tf,WO:collectnews </pre> </li> <li>The dohash script now takes a "<tt>-i</tt>" option to run interactively and the "<tt>-f</tt>" option to issue warnings instead of fatal errors.</li> </ul> <h2>Changes to the Cyrus IMAP Server Since Version 1.6.1-BETA</h2> <ul> <li>cyradm should now work with all mechanisms (it now handles empty challenges and responses).</li> <li>Fixed deliver to deal with arbitrarily long headers</li> <li>COPY for non-existent sequence numbers returns NO; this contrasts to UID COPY, which always returns OK.</li> <li>FETCH for non-existent sequence numbers returns NO; this contrasts to UID FETCH, which always returns OK.</li> <li>Fixed a misleading BAD responses to commands that take sequences.</li> <li>Added UIDNEXT untagged response to a SELECT (from <tt>draft-crispin-imapv-07.txt</tt>).</li> <li>pop3d now correctly passes SASL configuration options to libsasl.</li> <li>imtest now correctly flushes the server's output to the screen.</li> <li>Added more hashing using a simple but stupid algorithm. Now whenever there is a mailbox access, quota access, or subscription access, it goes through a hash function. 
this is done to help reduce the number of files/directories in any given directory.</li> <li>Added the binary <tt>mbpath</tt>. Given a mailbox name, this binary will print the filesystem path to that mailbox. This way if you have multiple partitions and hashing turned out, you don't have to spend as many mental cycles figuring out where the actual directory is.</li> <li>deliver now checks <tt>sieveusehomedir</tt> and <tt>sievedir</tt> in the config file to determine where to look for sieve scripts.</li> <li>ptloader now has a workaround for afs 3.5.</li> <li>clarified an error message in message.c when an unexpected end of file is encountered.</li> <li>fixed some random memory leaks in deliver.</li> <li>fixed a fairly major bug in prot_fill. it was performing incorrectly when reading only a single character.</li> <li>fixed a bug in how imtest looked for OK or NO.</li> <li>fixed a memory leak in imapd.</li> <li>imapd now allows any user (or member of a group) listed in "proxyservers" to proxy.</li> </ul> <h2>Changes to the Cyrus IMAP Server Since Version 1.6.0-BETA</h2> <ul> <li>fixed stupid bug in imapd</li> <li>fixed sasl/config.c interaction</li> <li>fixed use of stat in imtest</li> </ul> <h2>Changes to the Cyrus IMAP Server Since Version 1.5.24</h2> <ul> <li>ANSI C is now required.</li> <li>imtest's interface has changed, to allow for SASL authentication. Sorry, but it had to happen. It now also includes a timing test (-z) which we use to test the SASL layers.</li> <li>imtest no longer uses a non-synchronizing literal with LOGIN, so it should work against all IMAP servers.</li> <li>The prot layer now uses SASL for encryption and authentication. This changed a large amount of code, and some build procedures.</li> <li>As a side effect of SASL, --enable-static-libraries now doesn't do anything. We are considering compiling cyrus with libtool to change this.</li> <li>Error codes returned by programs have changed, and programs return EX_TEMPFAIL far more than they used to. This is because Sendmail considers most not-EX_TEMPFAIL errors permanent; now, if it may not be permanent, EX_TEMPFAIL is returned. (See lib/exitcodes.h.)</li> <li>Two bugs fixed: UID FETCH's with no messages in range now return OK, not BAD. And an obscure bug in LIST case sensitivity is fixed.</li> </ul> <h2>Changes to the Cyrus IMAP Server Since Version 1.5.19</h2> <ul> <li>Most of the charset.c code (and mkchartable.c code) has been replaced thanks to John Myers).</li> <li>Bug fix in message.c to look up headers in the cache when they're in the cache correctly; thanks to Chris Newman for the fix.</li> <li>Code cleanup here and there (thanks to Bruce Balden).</li> <li>Annoying (and confusing) lines in syslog every time a message was delivered if deliver was compiled using dbm saying that deliver was "unable to fetch entry" have been removed.</li> <li.</li> <li>The arbitron program now takes a mailbox pattern argument for the mailbox to run on. The manpage always said it did anyway.</li> <li>Uninitialized variable fixed in imapd.c with the shutdown file code.</li> <li>Minor tweaks to purify build config.</li> <li>Fix minor memory leak in proc.c where procfname wasn't being free'd.</li> <li.</li> <li>Committed minor syslog log level changes in ptloader and deliver.</li> <li>make distclean now does what it's supposed to.</li> <li.</li> <li>Add optional third argument to imtest for it to take input from a file. 
This is a gross hack.</li> </ul> <h2>Changes to the Cyrus IMAP Server Since Version 1.5.14</h2> <ul> <li>LIST now honors the reference argument. <p.)</p> </li> <li>The <tt>arbitron</tt> program now takes a mailbox pattern argument for the mailbox to run on. The manpage always said it did anyway.</li> <li.</li> <li.</li> <li>Fixed a bug in LIST and LSUB code so that user.* mailboxes will be printed on every LIST instead of just the first one.</li> <li>Implemented the <i>POP3 Extension Mechanism</i>, RFC 2449, in order to advertise the capabilities already supported.</li> <li.)</li> <li>More cleanup the ptloader/auth_krb_pts code. If you use Kerberos and IMSP, you *MUST* pick up cyrus-imspd-v1.5a6 (or newer).</li> <li>A few configure tweaks.</li> <li>Duplicate delivery changes: <ul> <li>Split out duplicate delivery elimination to multiple files. This should help reduce the lock contention that normally occurs with this file. To not clutter <i>config_dir</i>, the files will be located in a subdirectory named <tt>deliverdb</tt>, for example <tt>/var/imap/deliverdb</tt>. If you don't make this directory, nothing bad will happen (other than duplicate delivery elimination will not work).</li> <li>The time value is now stored as an integer in native byte order as opposed to converting it to a string before it is stored in the database.</li> <li>checkdelivered() now obtains a read lock instead of a write lock when trying to check for duplicates. Only markdelivered() grabs a write lock.</li> </ul> </li> <li>Added logic to cause cyradm to abort more cleanly if not given command line arguments in an interactive session. This gets rid of the dreaded <tt>application-specific intialization failed</tt> messages.</li> </ul> <h2>Changes to the Cyrus IMAP Server Since Version 1.5.11</h2> <ul> <li>The CREATE command now ignores a trailing hierarchy delimiter instead of ignoring the CREATE command.</li> <li>UIDPLUS is now always advertised in CAPABILITY and is always availible. The UIDPLUS extension is a set of optimizations using UID values instead of sequence numbers and is described in RFC 2359.</li> <li>Cyrus no longer rejects messages with 8-bit characters in the headers. Rather than reject the message, characters with the 8th bit set are changed to 'X'. Internationalization in headers is supported by the mechanism specified in RFC 2047 (and RFC 1342).</li> </ul> <h2>Changes to the Cyrus IMAP Server Since Version 1.5.10</h2> <ul> <li>If ENABLE_EXPERIMENT is set, the server no longer claims to support OPTIMIZE-1; instead, it claims to support UIDPLUS. The Getuids command has been removed since it is not in the UIDPLUS document (draft-myers-imap-optimize-03.txt).</li> <li.)</li> <li>The checks for com_err in configure are a little smarter and look to see if all the pieces are there before trying to use them.</li> <li>Added support for the NAMESPACE extension (if --enable-experiment is supplied).</li> <li').</li> <li.</li> <li>Bug fix: User defined flags now work properly.</li> </ul> <h2>Changes to the Cyrus IMAP Server Since Version 1.5.2</h2> <ul> <li>Fixed a bug with word alignment on Solaris using Kerberos compiled with Sun's CC. (Several patches were submitted; thanks to everyone who did so.)</li> <li>Patches from John Myers, including more glob fixes.</li> <li>Use the default hash function from DB. Note that this means that the existing <tt>delivered.db</tt> and <tt>ptscache.db</tt> is <b>NOT</b> compatible with this release. 
These files should be removed.</li> <li>Provide two debugging programs that dump the databases: <tt>ptdump</tt> and <tt>dump_deliverdb</tt>.</li> <li>Multiple changes to ptloader. added a bunch of flags; let it reauthenticate on its own; added support perl wrapper; added bunch of debugging information/output; bunch of other cleanups</li> <li>The mailboxes file is now closed if it isn't likely to be referenced, hopefully preventing old mailboxes files from hanging around in memory as frequently.</li> <li>Added a patch from Eric Hagberg to work around a possible deadlock condition in mboxlist.c where rename isn't atomic.</li> <li>Patch from John Myers to get rid of cyrus.seen corruption in bsearch_mem.</li> <li>Patch from John Myers and to allow ISO-8859-1 characters in mailbox names.</li> <li>Makedepend still runs, and still generates warnings, but these are squirrled away in makedepend.log.</li> <li>On mailbox delete, the server will no longer try and unlink ".." and "." as we got a report that it seriously breaks one file system (even as non-root).</li> <li.</li> <li>Bug swap: imtest quotes password with a non-synchronizing literal in order to allow weird characters like ) in passwords. But it doesn't look to see if the server supports non-synchronizing literals.</li> <li>If the file "<tt>msg/motd</tt>" exists, the first line is now sent to clients upon login.</li> <li>Bug fix: to handle BODY[] properly when fetching news articles (truncation no longer occurs). (thanks to John Prevost)</li> <li>The makedepend supplied should now run on Solaris Intel. (thanks to Chris Newman)</li> <li>Added some hacks to pwcheck.c for Linux and Digital Unix where the default protections on the socket don't allow the cyrus user to read it. (thanks to Lyndon Nerenberg)</li> <li>Bug fix: Flags beginning with \ are system flags and users can only create the defined flags. The code to do this before was confused.</li> <li>The configure scripts and makefiles have some random fixes.</li> <li>Added a contrib directory for reasons of laziness in collecting patches, not all of which should be in the distribution.</li> <li>ptloader can now renew its AFS authentication by reading from a srvtab file.</li> <li>The configure script now looks for a libcom_err and can use an installed one if one exists.</li> <li>Other small bug fixes.</li> </ul> <h2>Changes to the Cyrus IMAP Server Since Version 1.5</h2> <ul> <li>Bug fix: RENAME corrupted mailboxes if they had been EXPUNGEd. (may have only happened with INBOX, which Pine tickles once a month.)</li> <li>Bug fix: auth_newstate now initializes its structures.</li> <li>Bug fix: pop3d.c, a printf was changed to prot_printf.</li> <li>Cyrus now sends X-NON-HIERARCHICAL-RENAME to alert clients that it is not handling RENAME in an IMAP4rev1 compliant manner. This will be fixed in a subsequent release.</li> <li>Bug fix: imclient_autenticate now does resolution on the hostname before authenticating to it. This caused problems when authenticating to an address that was a CNAME.</li> <li>Bug fix: LIST %.% (and other multiple hierarchy delimiter matches) works properly. Several other glob.c fixes are included as well.</li> <li>Bug fix: a fetch of exclusively BODY[HEADER.FIELDS...] 
should now work properly.</li> <li>Bug fix: reconstruct now considers a nonexistant INN news directory to be empty; this makes reconstruct fix the cyrus.* files in the imap news partition.</li> <li>Added a manpage for imclient.</li> <li>Fixed a few other minor bugs.</li> </ul> <h2>Changes to the Cyrus IMAP Server Since Version 1.4</h2> <ul> <li>Implemented the "<tt>IMAP4rev1</tt>" protocol commands. (The hierarchical behavior of RENAME, which was added late to the IMAP4rev1 specification, is not implemented.) Changes the minor version number of the cyrus mailbox database format to 1. <b>IMPORTANT:</b> it is necessary to run the command "<tt>reconstruct -r</tt>" as the cyrus user after upgrading the Cyrus IMAP software from version 1.4 or earlier.</li> <li>If the file "<tt>msg/shutdown</tt>" exits in the configuration directory, the IMAP server will issue the first line in the file in an untagged BYE message and shut down.</li> <li>Permit SPACE in mailbox names.</li> <li>Permit the "modified UTF-7" internationalized mailbox name convention.</li> <li>"User opened mailbox" messages are now logged at the DEBUG level instead of the INFO level.</li> <li>Added <tt>-q</tt> (ignore quota) switch to <tt>deliver</tt>.</li> <li>New "<tt>krbck</tt>" program for diagnosing common kerberos problems.</li> <li>auth_unix no longer requires users to be in the passwd file.</li> <li>AUTHENTICATE command now reports the protection mechanism in use in the text of the tagged OK response</li> <li>Make MAILBOX_BADFORMAT and MAILBOX_NOTSUPPORTED temporary errors.</li> <li>Use the header cache for SEARCH HEADER</li> <li>Use "unspecified-domain" instead of server's hostname to fill out RFC 822 addresses without the "@domain" part.</li> <li>Make "reconstruct -r" with no args reconstruct every mailbox.</li> <li>The configure script now defaults to using unix_pwcheck instead of unix if the file /etc/shadow exists.</li> <li>The location of the pwcheck socket directory now defaults to "<tt>/var/ptclient/</tt>". It is controlled by the "<tt>--with-statedir=DIR</tt>" option, which defaults to "<tt>/var</tt>".</li> <li>Bug fix: by using an certain address form, one could deliver to a user's mailbox bypassing the ACL's.</li> <li>Bug fix: un-fold header lines when parsing for the ENVELOPE.</li> <li>Delete quota roots when deleting the last mailbox that uses them. Doesn't catch all cases, but should get over 99% of them.</li> <li>Implement plaintextloginpause configuration option, imposes artificial delay on plaintext password logins.</li> <li>Implement popminpoll configuration option, limits frequency of POP3 logins.</li> <li>Implement AFS PT server group support.</li> <li>Remove persistence of POP3 LAST value and remove Status: hack</li> <li>Support the new ACL command set in the IMAP server.</li> <li>Bug fix: Have to initialize reply to 0 in pop3d. Was causing POP3 server to occasionally drop the connection during authentication.</li> <li>Bug fix: The COPY command wasn't issuing a [TRYCREATE] when appropriate for sub-mailboxes of INBOX.</li> <li>Bug fix: Renaming a mailbox wasn't correctly changing its UIDVALIDITY.</li> <li>Bug fix: Renaming a mailbox to itself, in order to move it to a different partition, was not working correctly.</li> <li>Update the AUTH support in pop3d to conform to the latest draft specification.</li> <li>Update cyradm to use Tcl 7.5 instead of Tcl 7.4</li> <li>Re-implement large sections of the netnews support. 
It no longer requires modifications to INN, as it now expunges the index entries for expired/canceled articles upon select of the newsgroup.</li> <li>Implement newsspool configuration option, for separating the directories for the news spool and the various cyrus.* IMAP server index files.</li> <li>Bug fix: permit empty flag list in APPEND command</li> <li>Bug fix: deal with truncated Date: header values.</li> <li>Bug fix: memory mapping code, deal better with 0-length maps, since mmap() appears to crap out on that boundary condition.</li> <li>Portability fix: if no strerror, have to define NEED_SYS_ERRLIST.</li> <li>Bug fix: used append instead of lappend in cyradmin, preventing use of any port other than IMAP.</li> <li>When the client is streaming its commands, the IMAP server attempts to stream its tagged responses.</li> <li>Modify zephyr support to compile without Kerberos support.</li> <li>Add a bunch of prototype declararations to the code.</li> <li>In deliver, change the MULT support to instead use the LMTP syntax.</li> <li>imclient: support tagged intermediate replies and a default callback.</li> <li>Implement some experimental protocol extensions for optimizing disconnected use resynchronization. Most extensions are disabled by default. Client authors should contact info-cyrus@andrew.cmu.edu if they wish to experiment with these.</li> <li>In Makefiles, change $(AR) to ar -- HPUX make is defective.</li> <li>In deliver, use HAVE_LIBDB to select use of db over dbm</li> <li>Add map_stupidshared mapping module for older versions of Digital Unix. It's not quite as bad as HPUX, but...</li> <li>Bug fix: in imclient.c, don't free NULL pointers and don't call htons() on the output of getservbyname(). Have to abort sending the command if you get a tagged response when sending a literal.</li> <li>The auth_xxx routines now create/take a state argument instead of maintaining internal static state.</li> <li>Solaris mktime() is buggy in some releases. Create and use mkgmtime() for parsing date strings.</li> <li>Message parsing routines now use memory mapping, though they still copy data around in line-sized buffers.</li> </ul> <h2>Changes to the Cyrus IMAP Server Since Version 1.3</h2> <ul> <li>Implemented the "<tt>reconstruct -m</tt>" command, for reconstructing the <tt>mailboxes</tt> file. <b>IMPORTANT:</b> it is necessary to run the command "<tt>reconstruct -m</tt>" as the cyrus user after upgrading the Cyrus IMAP software from version 1.3 or earlier. We recommend you make a backup copy of the <tt>mailboxes</tt> file in the configuration directory before performing the conversion.</li> <li>Mailbox names are now case sensitive, not case insensitive. "<tt>INBOX</tt>" is the exception, and is treated as being case-insensitive.</li> <li>Personal mailboxes now appear to their owners as being under the "<tt>INBOX.</tt>" hierarchy. For example, the mailbox "<tt>user.bovik.work</tt>" appears to the user "<tt>bovik</tt>" as "<tt>INBOX.work</tt>". The user may still refer to the mailbox with the name "<tt>user.bovik.work</tt>".</li> <li>Previously, the code used "<tt>anybody</tt>" as the name of the group that all users are in, but the documentation used the name "<tt>anyone</tt>". Changed the code to match the documentation. The name "<tt>anybody</tt>" will be canonicalized to the name "<tt>anyone</tt>".</li> <li>The install document now gives different recommended locations for the server databases. 
The recommended location of the configuration directory changed from "<tt>/usr/cyrus</tt>" to "<tt>/var/imap</tt>" and the recommended location of the default partition directory changed from "<tt>/usr/spool/cyrus</tt>" to "<tt>/var/spool/imap</tt>". It is <b>NOT< <tt>/etc/imapd.conf</tt> file.</li> <li>Created a "<tt>make install</tt>" rule. See the <a href="install.html">installation</a> document for all the new corresponding <tt>configure</tt> options. Note the recommended location of the "<tt>imapd</tt>", "<tt>pop3d</tt>", and "<tt>deliver</tt>" programs has changed, this change needs to be reflected in the "<tt>inetd.conf</tt>" and "<tt>sendmail.cf</tt>" files.</li> <li>New "<tt>login_unix_pwcheck</tt>" module and "<tt>pwcheck</tt>" daemon, for improved shadow password support. See the "<tt>pwcheck/README.pwcheck</tt>" file in the distribution for details.</li> <li>Renamed the "<tt>login_unix_shadow</tt>" module to "<tt>login_unix_getspnam</tt>".</li> <li>Added a mail notification mechanism, using Zephyr.</li> <li>Added a feature to automatically create user IMAP accounts. Controlled by the "<tt>autocreatequota</tt>" config option.</li> <li>Added the "<tt>logtimestamps</tt>" config option, for putting timestamp information into protocol telemetry logs.</li> <li>Beefed up the Kerberos checks in Configure to ensure the DES library routines exist.</li> <li>On some systems, the "<tt>echo</tt>" command with no arguments emits a newline. Changed the installation document to instead use the "<tt>true</tt>" command to create the "<tt>mailboxes</tt>" file.</li> <li>Store a redundant copy of a mailbox's ACL in the <tt>cyrus.header</tt> file, so "<tt>reconstruct -m</tt>" may later use it as a backup.</li> <li>Had to remove the declaration of <tt>tcl_RcFileName</tt> for the latest version of Tcl.</li> <li>Make much more extensive use of memory mapping. Replace the binary search module with one that searches a memory mapped area.</li> <li>Replaced the yacc-based RFC822 address parser with a hand-coded one.</li> <li>Replaced the et (error table) libary with a version that doesn't require lex or yacc. Remove the lex/yacc checking from Configure.</li> <li>Safety feature: most programs now refuse to run as root.</li> <li>Bug fix: Issue [TRYCREATE] tag on COPY command when appropriate.</li> <li>Bug fix: The quoted-printable decoder wasn't ignoring trailing whitespace, as required by MIME.</li> <li>Bug fix: Don't spew cascade errors if the server gets an EOF during/after reading an APPEND literal.</li> <li>Bug fix: gmtmoff_gmtime.c was returning results with the wrong sign.</li> <li>Bug fix: imclient_send was appending spaces to %d and %u and the response parser was not handling responses that did not contain a space after the keyword.</li> <li>Bug fix: rmnews wasn't removing some (un-indexed) article files correctly.</li> <li>Completely disabled the dropoff code for now. 
It will be completely replaced when IMSP integration is implemented</li> <li>Added workaround for the Linux mkdir() problem.</li> <li>In Configure, use a more direct test for a working shared-memory mmap</li> <li>In collectnews, avoid O(n**2) behavior when processing articles that have already expired.</li> <li>Bug fix: append_addseen() would screw up if no messages were previously seen.</li> <li>Added the CMU-specific amssync and cmulocal directories.</li> <li>Use memmove instead of bcopy.</li> <li>Implemented the first pass of SMTP/MULT support in deliver.</li> <li>Added cacheid parameter to auth_setid(), for AFS PT server support.</li> </ul> <h2>Changes to the Cyrus IMAP Server Since Version 1.2</h2> <ul> <li>Fixed bug in character set code that broke text searches. Sites which care about searching headers need to reconstruct their existing mailboxes.</li> </ul> <h2>Changes to the Cyrus IMAP Server Since Version 1.1-Beta</h2> <ul> <li>Add support for <tt>UIDVALIDITY</tt> special information token.</li> <li>Add <tt>syncnews</tt> and <tt>arbitron</tt> programs.</li> <li>Redo duplicate delivery elimination in <tt>deliver</tt>.</li> <li>Bug fixed: Must re-read files after acquiring a lock. Cannot trust the mtime of a file to increment when writing the file--file could be written to multiple times in the same second.</li> <li>Bug fixed: <tt>EXAMINE</tt> command should not affect <tt>\Recent</tt> status.</li> <li>Update the user's <tt>\Recent</tt> high-water-mark when we report new messages.</li> <li>Portability changes</li> <li>Upgrade to autoconf 2.1</li> <li>Allow privacy to be turned off at compile-time with <tt>--disable-privacy</tt> configure switch.</li> <li>Fix typo in <tt>cyradm</tt> preventing "<tt>all</tt>" from being recognized.</li> <li>Include <tt>map_private.c</tt> memory mapping module for systems like HPUX which have half-working <tt>mmap()</tt> implementations.</li> <li>Switch to using UTF-8 for internal search format. Sites which care about internationalized searching of headers need to reconstruct all their existing mailboxes.</li> <li>Fix some errors in the iso-8859-* tables.</li> <li>Add and correct a bunch of case-independence mappings in the character tables.</li> <li>First pass at implementing the <tt>STATUS</tt> extension; disabled for release.</li> <li>First pass at implementing IMAP/IMSP server integration. Not ready for general use.</li> <li>Add <tt>new_cred</tt> and <tt>free_cred</tt> mechanisms to authentication modules.</li> <li>Don't complain when doing "<tt>reconstruct -r foo</tt>" and <tt>foo</tt> isn't a mailbox.</li> <li>Add <tt>IMAP_QUOTAROOT_NONEXISTENT</tt> error code.</li> <li>Bug fix: Avoid divide by zero when quota is zero</li> <li>Bug fix: In an error case of the ACL handling code, we have to restore tab before breaking out of loop.</li> <li>Fix file descriptor leak in quota system.</li> <li>Change a bunch of int variables to unsigned.</li> <li>Better error reporting on reads that end up short.</li> </ul> <h2>Changes to the Cyrus IMAP Server Since Version 1.0-Beta</h2> <ul> <li>Improved <a href="install.html">installation</a> document.</li> <li>New "<a href="cyradm.1.html"><tt>cyradm</tt></a>" administrative client.</li> <li>Changed the syslog facility from "<a href="install.html#syslog"><tt>local4</tt></a>" to "<tt>local6</tt>".</li> <li>Removed the <tt>renounce setuid</tt> check in "<a href="install.html#deliver"><tt>deliver</tt>"</a>. 
The "<tt>deliver</tt>" program must now be <b>non</b>-executable by <tt>other</tt>.</li> <li>Fixed a typo in the parsing of <tt>SEARCH DELETED</tt>. (This bug constantly got tripped by newer C-clients.)</li> <li>Redesigned the implementation of <tt>SEARCH CHARSET</tt>.<br /> Sites that wish to search for non-ASCII characters in the headers of existing mailboxes must run <tt>reconstruct</tt> on all their mailboxes after upgrading to this version.</li> <li>Added AUTH and KPOP support to the POP3 server.</li> <li>Added search support for the ISO-2022-JP character set.</li> <li>Replaced the search engine with a partial Boyer-Moore algorithm.</li> <li>Special-case optimized searching US-ASCII text.</li> <li>Fixed a bug which caused the message parser to spin-loop on a particular degenerate invalid-MIME case.</li> <li>Fixed a performance bug in the message parser.</li> <li>Tracked last-minute changes to the IMAP4 protocol.</li> <li>Fixed a bug in <tt>UNSUBSCRIBE</tt> which caused too many subscriptions to be removed.</li> <li>Added a bunch more "<a href="install.html#configure"><tt>configure</tt></a>" options.</li> <li>Ported to HPUX.</li> <li>Fixed a bug in the <tt>LIST/LSUB \Noselect</tt> code.</li> <li>Fixed bug in the globbing code which caused the "<tt>*%</tt>" pattern to work incorrectly.</li> <li>Client-side Kerberos support is now conditionalized on <tt>HAVE_ACTE_KRB</tt>, which is set by configure.</li> <li>Fixed some invalid buffer-alignment assumptions in the Kerberos code.</li> <li>Made the lexers compatible with flex. Configure now looks for and prefers to use <tt>flex</tt> and <tt>bison</tt>/<tt>byacc</tt>.</li> <li>Made the IMAP server check for the existence of the mailboxes file upon startup, in order to give a more informative error message for this common configuration error.</li> <li>Fixed other minor bugs.</li> </ul> <hr /> last modified: $Date: 2007/02/06 15:32:43 $ <br /> <a href="index.html">Return</a> to the Cyrus IMAP Server Home Page </body> </html>
Hello everyone,
I would like to convert data from csv files into several stacks. I have a batch of input files, say a number of Z csv files, one for each slice. Each file has 3 columns and n lines:
double_01, double_02, double_03
double_11, double_12, double_13
...
double_n1, double_n2, double_n3
My task is to produce a stack with given dimensions W, H, Z from each column, according to the following pattern: the first W rows of a column go into the first line (y=0) of the current slice of the stack, then the following W rows go into the 2nd line (y=1), and so on. Here's a sketch for a single file/slice:
This is somehow like the "reshape" function of Matlab, but between a text file and an image. The output stacks are roughly 200×250×1000 voxels at 32 bits, so memory is an issue.
Before going any further: I did it, my code works. I'm just wondering if it can run faster.
My algorithm goes like this:
Create 3 empty stacks
For each csv file (z)
scan the file line by line
convert the line into 3 pixel values
put 1st value at (x,y,z) in the 1st stack
put 2nd value at (x,y,z) in the 2nd stack
put 3rd value at (x,y,z) in the 3rd stack
Go to next line
Go to the next file/slice
Save the stacks
So, I'm putting pixel values one by one into each image. My question is: Is there a more efficient option? For example, I noticed there exists a putrow method. Does anyone know if it could run faster? Or would it be more efficient to first scan the whole csv file, load it into an Array or a List, process it line by line, and write-append each slice into TIF files?
Thank you for your help!
I would not be surprised if you got a decent speed improvement if you imported the csv as a text image (File->Import->Text Image), and then simply copied columns of pixels from the text image to each stack. Be sure to run it in batch mode, or the switching between windows will bog things down tremendously.
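For illustration, here is a rough sketch of that approach with the ImageJ 1.x Java API. The file path and the stack dimensions are placeholders taken from the numbers mentioned above, and only the first column of a single slice is handled:
import ij.ImagePlus;
import ij.ImageStack;
import ij.plugin.TextReader;
import ij.process.ImageProcessor;

// Read one csv slice as a text image (the programmatic equivalent of
// File > Import > Text Image) and reshape its first column into slice 1 of a 32-bit stack.
int width = 200, height = 250, depth = 1000;  // placeholder dimensions
ImageProcessor textImage = new TextReader().open("/path/to/slice_0001.csv");  // placeholder path

ImageStack stackU = ImageStack.create(width, height, depth, 32);
ImageProcessor ipU = stackU.getProcessor(1);
for (int i = 0; i < width * height; i++) {
    float value = textImage.getf(0, i);  // column 0 of the text image holds the first variable
    ipU.setf(i % width, i / width, value);
}
new ImagePlus("u", stackU).show();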
Thanks for your answer. Actually, I'm working with a Groovy script. I can still load the csv file using ij.plugin.TextFileReader, but if possible I would like to avoid loading the whole file into memory.
I answered my own question: Putting pixel values row by row is way faster than one by one.
Great! Would you mind sharing the script code as well? I'd be interested to see how you solved it in the end.
Sure, but it is still a work in progress; I find there is too much duplicate code. Here's the main part. I do not include the pre-processing of the files (GUI, opening...). files is my array of input files; u, v, zncc are the data in the 3 columns of each file; nbPtsX, nbPtsY and nbSlices are the dimensions of the output stacks (nbSlices = number of input files).
import ij.IJ;
import ij.ImagePlus;

ImagePlus imgU = IJ.createImage("u", "32-bit", nbPtsX, nbPtsY, nbSlices);
ImagePlus imgV = IJ.createImage("v", "32-bit", nbPtsX, nbPtsY, nbSlices);
ImagePlus imgZNCC = IJ.createImage("zncc", "32-bit", nbPtsX, nbPtsY, nbSlices);
stackU = imgU.getStack();
stackV = imgV.getStack();
stackZNCC = imgZNCC.getStack();
// u, v, zncc values are stored into 1D arrays that are put in the images at increasing (0, y) coordinates
float[] uRow = new float[nbPtsX];
float[] vRow = new float[nbPtsX];
float[] znccRow = new float[nbPtsX];
for (int z = 1; z <= nbSlices; z++){
file = files[z - 1];
ipU = stackU.getProcessor(z);
ipV = stackV.getProcessor(z);
ipZNCC = stackZNCC.getProcessor(z);
i = 0; // Line counter
file.withReader { reader ->
while (((line=reader.readLine()) != null)) {
String[] uvzncc = line.split(',');
uRow[i % nbPtsX] = Float.parseFloat(uvzncc[0]);
vRow[i % nbPtsX] = Float.parseFloat(uvzncc[1]);
znccRow[i % nbPtsX] = Float.parseFloat(uvzncc[2]);
// When nbPtsX lines have been scanned, put rows at (0, y)
if((i+1) % nbPtsX == 0){
int y = i / nbPtsX;
ipU.putRow(0, y, uRow, uRow.size());
ipV.putRow(0, y, vRow, vRow.size());
ipZNCC.putRow(0, y, znccRow, znccRow.size());
}
i++;
}
}
stackU.setProcessor(ipU, z);
stackV.setProcessor(ipV, z);
stackZNCC.setProcessor(ipZNCC, z);
IJ.showProgress(z, nbSlices);
}
imgU.show();
imgV.show();
imgZNCC.show();
Thanks @Nicolas, that's a nice example using Groovy's file.withReader syntax!
Some comments:
In Groovy, you can also write more concisely:
for (z in 1..nbSlices)
Also, you don't need to use semicolon (;) at the end of each line.
I wonder how using only ImageJ2 types (i.e. Ops and ImgLib2) would compare in performance. If you'd like to benchmark this, here's a modified version of your script (using Script Parameters and runnable from within Fiji's Script Editor):
// @File(label="Input file (csv)") csvFile
// @OpService ops
// @StatusService sts
// @OUTPUT Img imgU
// @OUTPUT Img imgV
// @OUTPUT Img imgZNCC
import net.imglib2.type.numeric.real.FloatType
import net.imglib2.img.array.ArrayImgFactory
sizeX = 10
sizeY = 10
sizeZ = 1
imgU = ops.run("create.img", [sizeX, sizeY, sizeZ], new FloatType(), new ArrayImgFactory())
imgV = imgU.copy()
imgZNCC = imgU.copy()
cursorU = imgU.cursor()
cursorV = imgV.cursor()
cursorZNCC = imgZNCC.cursor()
// for (z in 1..sizeZ) { } // optionally loop over slices
csvFile.withReader { reader ->
while (((line=reader.readLine()) != null)) {
uvzncc = line.split(',')
cursorU.next().set(Float.parseFloat(uvzncc[0]))
cursorV.next().set(Float.parseFloat(uvzncc[1]))
cursorZNCC.next().set(Float.parseFloat(uvzncc[2]))
}
// sts.showProgress(z, sizeZ)
}
Thank you @imagejan! I will try that tomorrow. Your example tells me that learning Ops and ImgLib2 would not be a waste of my time.
Java Clients for Elasticsearch
UPDATE: This article refers to our hosted Elasticsearch offering by an older name, Found. Please note that Found is now known as Elastic Cloud.
One of the important aspects of Elasticsearch is that it is programming language independent. All of the APIs for indexing, searching and monitoring can be accessed using HTTP and JSON so it can be integrated in any language that has those capabilities. Nevertheless Java, the language Elasticsearch and Lucene are implemented in, is very dominant. In this post I would like to show you some of the options for integrating Elasticsearch with a Java application.
The Native Client
The obvious first choice is to look at the client Elasticsearch provides natively. Unlike other solutions there is no separate jar file that just contains the client API; instead you are integrating the whole Elasticsearch application. Partly this is caused by the way the client connects to Elasticsearch: It doesn't use the REST API but connects to the cluster as a cluster node.
On the right side we can see two normal nodes, each containing two shards. Each node of the cluster, including our application’s client node, has access to the cluster state as indicated by the cylinder icon. That way, when requesting a document that resides on one of the shards of Node 1 your client node already knows that it has to ask Node 1. This saves a potential hop that would occur when asking Node 2 for the document that would then route your request to Node 1 for you.
Creating a client node in code is easy. You can use the NodeBuilder to get access to the Client interface. This then has methods for all of the API functionality, e.g. for indexing and searching data.
Client client = NodeBuilder.nodeBuilder()
    .client(true)
    .node()
    .client();

boolean indexExists = client.admin().indices().prepareExists(INDEX).execute().actionGet().isExists();
if (indexExists) {
    client.admin().indices().prepareDelete(INDEX).execute().actionGet();
}
client.admin().indices().prepareCreate(INDEX).execute().actionGet();

SearchResponse allHits = client.prepareSearch(Indexer.INDEX)
    .addFields("title", "category")
    .setQuery(QueryBuilders.matchAllQuery())
    .execute().actionGet();
You can see that after having the client interface we can issue index and search calls to Elasticsearch. The fluent API makes the code very readable. Note that the final actionGet() call on the operations is caused by the asynchronous nature of Elasticsearch and is not related to the HTTP operation. Each operation returns a Future that provides access to the result once it is available.

Most of the operations are available using dedicated builders and methods but you can also use the generic jsonBuilder() that can construct arbitrary JSON objects for you.
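For illustration, indexing a document built with that generic builder might look roughly like this (a sketch against the same 1.x-era Java API as the snippet above; the field values are made up, and INDEX and the "talk" type are reused from the examples in this post):

import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.common.xcontent.XContentBuilder;

import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;

// Construct an arbitrary JSON document and hand it to the index request.
XContentBuilder source = jsonBuilder()
    .startObject()
        .field("title", "Java Clients for Elasticsearch")
        .field("category", "search")
    .endObject();

IndexResponse response = client.prepareIndex(INDEX, "talk")
    .setSource(source)
    .execute().actionGet();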
An alternative to the node client we have seen above is the TransportClient. It doesn't join but connects to an existing cluster using the transport module (the layer that is also used for inter-node communication). This can be useful if maintaining the cluster state in your application can be problematic, e.g. when you are having tight constraints regarding your memory consumption or you are restarting your application a lot.

The TransportClient can be created by passing in one or more urls to nodes of your cluster:
Client client = new TransportClient()
    .addTransportAddress(new InetSocketTransportAddress("localhost", 9300))
    .addTransportAddress(new InetSocketTransportAddress("localhost", 9301));
Using the property client.transport.sniff the TransportClient will also retrieve all the URLs for the other nodes of the cluster for you and use those in a round robin fashion.
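A minimal sketch of enabling that property, assuming the 1.x-style settings API; the address is a placeholder:

import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

// With sniffing enabled the client discovers the remaining cluster nodes on its own.
Settings settings = ImmutableSettings.settingsBuilder()
    .put("client.transport.sniff", true)
    .build();

Client client = new TransportClient(settings)
    .addTransportAddress(new InetSocketTransportAddress("localhost", 9300));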
The native Java client is the perfect solution if you need to have all of the features of Elasticsearch available. New functionality is automatically available with every release. You can either use the node client that will save you some hops or the TransportClient that communicates with an existing cluster.
If you’d like to learn more about the two kinds of clients you can have a look at this article on using Elasticsearch from Java or this post on networking on the Found blog.
Note: Elasticsearch service providers that have built a highly secure platform and service, e.g. implementing security measures such as ACLs and encryption, do not support unmodified clients. For more details regarding Found’s requirements, see Found Elasticsearch Transport Module.
Jest
For when you need a lightweight client in your application (regarding jar size or memory consumption) there is a nice alternative. Jest provides an implementation of the Elasticsearch REST API using the Apache HttpComponents project.
The API of Jest is very similar to the Elasticsearch API. It uses a fluent API with lots of specialized builders. All of the interaction happens using the JestClient that can be created using a factory:
JestClientFactory factory = new JestClientFactory();
factory.setHttpClientConfig(new HttpClientConfig.Builder("http://localhost:9200")  // server URL (assumed; lost in the original formatting)
        .multiThreaded(true)
        .build());
JestClient client = factory.getObject();
When it comes to communicating with Elasticsearch you have two options: You can either create strings in the JSON-API of Elasticsearch or you can reuse the builder classes of Elasticsearch. If it’s not a problem to have the Elasticsearch dependency on your classpath this can lead to cleaner code. This is how you conditionally create an index using Jest:
boolean indexExists = client.execute(new IndicesExists.Builder("jug").build()).isSucceeded();
if (indexExists) {
    client.execute(new DeleteIndex.Builder("jug").build());
}
client.execute(new CreateIndex.Builder("jug").build());
And this is how a search query can be executed.
String query = "{\n" + " \"query\": {\n" + " \"filtered\" : {\n" + " \"query\" : {\n" + " \"query_string\" : {\n" + " \"query\" : \"java\"\n" + " }\n" + " }" + " }\n" + " }\n" + "}"; Search.Builder searchBuilder = new Search.Builder(query).addIndex("jug").addType("talk"); SearchResult result = client.execute(searchBuilder.build());
You can see that concatenating the query can become complex so if you have the option to use the Elasticsearch builders you should try it.
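If the Elasticsearch jar is on your classpath anyway, the same search can be assembled with the SearchSourceBuilder and handed to Jest as a string. A sketch (here a simple match query on the title field stands in for the query_string query above):

import io.searchbox.core.Search;
import io.searchbox.core.SearchResult;

import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

// Let the Elasticsearch builders generate the JSON instead of concatenating strings by hand.
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder()
    .query(QueryBuilders.matchQuery("title", "java"));

Search search = new Search.Builder(sourceBuilder.toString())
    .addIndex("jug")
    .addType("talk")
    .build();
SearchResult result = client.execute(search);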
The really great thing about Jest is that you can use Java Beans directly for indexing and searching. Suppose we have a bean Talk with several properties we can index instances of those in bulk in the following way:
Builder bulkIndexBuilder = new Bulk.Builder();
for (Talk talk : talks) {
    bulkIndexBuilder.addAction(new Index.Builder(talk).index("jug").type("talk").build());
}
client.execute(bulkIndexBuilder.build());
Given the SearchResult we have seen above we can then also retrieve our talk instances directly from the Elasticsearch results:
List<Hit<Talk, Void>> hits = result.getHits(Talk.class);
for (Hit<Talk, Void> hit : hits) {
    Talk talk = hit.source;
    log.info(talk.getTitle());
}
Besides the execute method we have used so far there is also an async variant that returns a Future.
The structure of the JEST API is really nice, you will find your way around it immediately. The possibility to index and retrieve Java Beans in your application makes it a good alternative to the native client. But there is also one thing I absolutely don't like: It throws too many checked Exceptions, e.g. a plain Exception on the central execute method of the JestClient. Also, there might be cases where the Jest client doesn't offer all of the functionality of newer Elasticsearch versions immediately. Nevertheless, it offers a really nice way to access your Elasticsearch instance using the REST API.
For more information on Jest you can consult the project documentation on GitHub. There is also a nice article on Elasticsearch at IBM developerWorks that demonstrates some of the features using Jest.
Spring Data Elasticsearch
The Spring Data project is a set of APIs that provide access to multiple data stores using a similar feeling. It doesn’t try to use one API for everything, so the characteristics of a certain data store can still be available. The project supports many stores, Spring Data JPA and Spring Data MongoDB being among the more popular. Starting with the latest GA release the implementation of Spring Data Elasticsearch is also officially part of the Spring Data release.
Spring Data Elasticsearch goes even one step further than the Jest client when it comes to indexing Java Beans. With Spring Data Elasticsearch you annotate your data objects with a @Document annotation that you can also use to determine index settings like name, numbers of shards or number of replicas. One of the attributes of the class needs to be an id, either by annotating it with @Id or using one of the automatically found names id or documentId. The other properties of your document can either come with or without annotations: without an annotation it will automatically be mapped by Elasticsearch, using the @Field annotation you can provide a custom mapping. The following class uses the standard mapping for speaker but a custom one for the title.
@Document(indexName="talks")
public class Talk {

    @Id
    private String path;

    @Field(type=FieldType.String, index=FieldIndex.analyzed, indexAnalyzer="german", searchAnalyzer="german")
    private String title;

    private List<String> speakers;

    @Field(type=FieldType.Date)
    private Date date;

    // getters and setters omitted
}
There are two ways to use the annotated data objects: Either using a repository or the more flexible template support. The
ElasticsearchTemplate uses the Elasticsearch
Client and provides a custom layer for manipulating data in Elasticsearch, similar to the popular
JdbcTemplate or
RestTemplate. The following code indexes a document and uses a GET request to retrieve it again.
IndexQuery query = new IndexQueryBuilder().withIndexName("talks").withId("/tmp").withObject(talk).build();
String id = esTemplate.index(query);

GetQuery getQuery = new GetQuery();
getQuery.setId(id);
Talk queriedObject = esTemplate.queryForObject(getQuery, Talk.class);
Note that none of the classes used in this example are part of the Elasticsearch API. Spring Data Elasticsearch implements a completely new abstraction layer on top of the Elasticsearch Java client.
The second way to use Spring Data Elasticsearch is by using a
Repository, an interface you can extend. There is a general interface
CrudRepository available for all Spring Data projects that provides methods like
findAll(),
count(),
delete(...) and
exists(...).
PagingAndSortingRepository provides additional support for, what a surprise, paging and sorting.
For adding specialized queries to your application you can extend the
ElasticsearchCrudRepository and declare custom methods in it. What might come as a surprise at first: You don’t have to implement a concrete instance of this interface, Spring Data automatically creates a proxy for you that contains the implementation. What kind of query is executed is determined by the name of the method, which can be something like
findByTitleAndSpeakers(String title, String speaker). Besides the naming convention you can also annotate the methods with an
@Query annotation that contains the native JSON query or you can even implement the method yourself.
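To give you an idea, a repository for our Talk bean might look something like this. This is only a hypothetical sketch: the interface name, the derived method names and the JSON in the @Query annotation are placeholders for whatever your application actually needs, and ?0 simply refers to the first method parameter.

public interface TalkRepository extends ElasticsearchCrudRepository<Talk, String> {

    // Spring Data derives this query from the method name.
    List<Talk> findByTitleAndSpeakers(String title, String speaker);

    // Or provide the native JSON query yourself.
    @Query("{\"match\": {\"title\": \"?0\"}}")
    List<Talk> findTalksWithTitle(String title);
}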
Spring Data Elasticsearch provides a lot of functionality and can be a good choice if you are already using Spring or even Spring Data. Some of the functionality of Elasticsearch might not be available at first or might be more difficult to use because of the custom abstraction layer.
More
Besides the clients we have seen in this post there are more available for the JVM. The Groovy Client wraps the Java API in Groovy, Elastisch is a client that implements the Elasticsearch API in a Clojure way. Have a look at the clients supported by the community to see more, e.g. several clients for Scala.
Some of the clients might be a little behind, so if you need the newest features it might be best to choose the native client. But of course all of these projects are open source, so if you need a feature why not implement it yourself and contribute it back to the project?
Recently, I have been learning about (generalized) additive models by working through Simon Wood’s book. I have previously posted an IPython notebook implementing the models from Chapter 3 of the book. In this post, I will show how to fit a simple additive model in Python in a bit more detail.
We will use a LIDAR dataset that is available on the website for Larry Wasserman’s book All of Nonparametric Statistics.
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import patsy
import scipy as sp
import seaborn as sns
from statsmodels import api as sm
df = pd.read_csv('', sep=' *', engine='python')
df['std_range'] = (df.range - df.range.min()) / df.range.ptp()
n = df.shape[0]
df.head()
This data set is well-suited to additive modeling because the relationship between the variables is highly non-linear.
fig, ax = plt.subplots(figsize=(8, 6))

blue = sns.color_palette()[0]

ax.scatter(df.std_range, df.logratio, c=blue, alpha=0.5);
ax.set_xlim(-0.01, 1.01);
ax.set_xlabel('Scaled range');
ax.set_ylabel('Log ratio');
An additive model represents the relationship between explanatory variables \(\mathbf{x}\) and a response variable \(y\) as a sum of smooth functions of the explanatory variables
\[y = \beta_0 + f_1(x_1) + f_2(x_2) + \cdots + f_k(x_k) + \varepsilon.\]
The smooth functions \(f_i\) can be estimated using a variety of nonparametric techniques. Following Chapter 3 of Wood’s book, we will fit our additive model using penalized regression splines.
Since our LIDAR data set has only one explanatory variable, our additive model takes the form
\[y = \beta_0 + f(x) + \varepsilon.\]
We fit this model by minimizing the penalized residual sum of squares
\[PRSS = \sum_{i = 1}^n \left(y_i - \beta_0 - f(x_i)\right)^2 + \lambda \int_0^1 \left(f''(x)\right)^2\ dx.\]
The penalty term
\[\int_0^1 \left(f''(x)\right)^2\ dx\]
causes us to only choose less smooth functions if they fit the data much better. The smoothing parameter \(\lambda\) controls the rate at which decreased smoothness is traded for a better fit.
In the penalized regression splines model, we must also choose basis functions \(\varphi_1, \varphi_2, \ldots, \varphi_k\), which we then use to express the smooth function \(f\) as
\[f(x) = \beta_1 \varphi_1(x) + \beta_2 \varphi_2(x) + \cdots + \beta_k \varphi_k(x).\]
With these basis functions in place, if we define \(\mathbf{x}_i = [1\ x_i\ \varphi_2(x_i)\ \cdots \varphi_k(x_i)]\) and
\[\mathbf{X} = \begin{bmatrix} \mathbf{x}_1 \\ \vdots \\ \mathbf{x}_n \end{bmatrix},\]
the model \(y_i = \beta_0 + f(x_i) + \varepsilon\) can be rewritten as \(\mathbf{y} = \mathbf{X} \beta + \varepsilon\). It is tedious but not difficult to show that when \(f\) is expressed as a linear combination of basis functions, there is always a positive semidefinite matrix \(\mathbf{S}\) such that
\[\int_0^1 \left(f''(x)\right)^2\ dx = \beta^{\intercal} \mathbf{S} \beta.\]
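To see why such an \(\mathbf{S}\) always exists, write \(f'' = \sum_j \beta_j \varphi_j''\) and expand the square:

\[\int_0^1 \left(f''(x)\right)^2\ dx = \sum_{i, j} \beta_i \beta_j \int_0^1 \varphi_i''(x)\, \varphi_j''(x)\ dx = \beta^{\intercal} \mathbf{S} \beta, \quad \text{where } \mathbf{S}_{ij} = \int_0^1 \varphi_i''(x)\, \varphi_j''(x)\ dx.\]

In other words, \(\mathbf{S}\) is the Gram matrix of the second derivatives of the basis functions, which makes it positive semidefinite.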
Since \(\mathbf{S}\) is positive semidefinite, it has a square root \(\mathbf{B}\) such that \(\mathbf{B}^{\intercal} \mathbf{B} = \mathbf{S}\). The penalized residual sum of squares objective function can then be written as
\[ \begin{align*} PRSS & = (\mathbf{y} - \mathbf{X} \beta)^{\intercal} (\mathbf{y} - \mathbf{X} \beta) + \lambda \beta^{\intercal} \mathbf{B}^{\intercal} \mathbf{B} \beta = (\mathbf{\tilde{y}} - \mathbf{\tilde{X}} \beta)^{\intercal} (\mathbf{\tilde{y}} - \mathbf{\tilde{X}} \beta), \end{align*} \]
where
\[\mathbf{\tilde{y}} = \begin{bmatrix} \mathbf{y} \\ \mathbf{0}_{k + 1} \end{bmatrix} \]
and
\[\mathbf{\tilde{X}} = \begin{bmatrix} \mathbf{X} \\ \sqrt{\lambda}\ \mathbf{B} \end{bmatrix}. \]
Therefore the augmented data matrices \(\mathbf{\tilde{y}}\) and \(\mathbf{\tilde{X}}\) allow us to express the penalized residual sum of squares for the original model as the residual sum of squares of the OLS model \(\mathbf{\tilde{y}} = \mathbf{\tilde{X}} \beta + \tilde{\varepsilon}\). This augmented model allows us to use widely available machinery for fitting OLS models to fit the additive model as well.
The last step before we can fit the model in Python is to choose the basis functions \(\varphi_i\). Again, following Chapter 3 of Wood’s book, we let
\[R(x, z) = \frac{1}{4} \left(\left(z - \frac{1}{2}\right)^2 - \frac{1}{12}\right) \left(\left(x - \frac{1}{2}\right)^2 - \frac{1}{12}\right) - \frac{1}{24} \left(\left(\left|x - z\right| - \frac{1}{2}\right)^4 - \frac{1}{2} \left(\left|x - z\right| - \frac{1}{2}\right)^2 + \frac{7}{240}\right).\]
def R(x, z):
    return ((z - 0.5)**2 - 1 / 12) * ((x - 0.5)**2 - 1 / 12) / 4 - ((np.abs(x - z) - 0.5)**4 - 0.5 * (np.abs(x - z) - 0.5)**2 + 7 / 240) / 24

R = np.frompyfunc(R, 2, 1)

def R_(x):
    return R.outer(x, knots).astype(np.float64)
Though this function is quite complicated, we will see that it has some very convenient properties. We must also choose a set of knots \(z_i\) in \([0, 1]\), \(i = 1, 2, \ldots, q\).
q = 20
knots = df.std_range.quantile(np.linspace(0, 1, q))
Here we have used twenty knots situated at percentiles of
std_range.
Now we define our basis functions as \(\varphi_1(x) = x\), \(\varphi_{i}(x) = R(x, z_{i - 1})\) for \(i = 2, 3, \ldots q + 1\).
Our model matrices \(\mathbf{y}\) and \(\mathbf{X}\) are therefore
y, X = patsy.dmatrices('logratio ~ std_range + R_(std_range)', data=df)
Note that, by default,
patsy always includes an intercept column in
X.
The advantage of the function \(R\) is that the penalty matrix \(\mathbf{S}\) has the form
\[S = \begin{bmatrix} \mathbf{0}_{2 \times 2} & \mathbf{0}_{2 \times q} \\ \mathbf{0}_{q \times 2} & \mathbf{\tilde{S}} \end{bmatrix},\]
where \(\mathbf{\tilde{S}}_{ij} = R(z_i, z_j)\). We now calculate \(\mathbf{S}\) and its square root \(\mathbf{B}\).
S = np.zeros((q + 2, q + 2))
S[2:, 2:] = R_(knots)
B = np.zeros_like(S)
B[2:, 2:] = np.real_if_close(sp.linalg.sqrtm(S[2:, 2:]), tol=10**8)
We now have all the ingredients necessary to fit some additive models to the LIDAR data set.
def fit(y, X, B, lambda_=1.0):
    # build the augmented matrices
    y_ = np.vstack((y, np.zeros((q + 2, 1))))
    X_ = np.vstack((X, np.sqrt(lambda_) * B))

    return sm.OLS(y_, X_).fit()
We have not yet discussed how to choose the smoothing parameter \(\lambda\), so we will fit several models with different values of \(\lambda\) to see how it affects the results.
fig, axes = plt.subplots(nrows=3, ncols=2, sharex=True, sharey=True, squeeze=True, figsize=(12, 13.5))

plot_lambdas = np.array([1.0, 0.1, 0.01, 0.001, 0.0001, 0.00001])

plot_x = np.linspace(0, 1, 100)
plot_X = patsy.dmatrix('std_range + R_(std_range)', {'std_range': plot_x})

for lambda_, ax in zip(plot_lambdas, np.ravel(axes)):
    ax.scatter(df.std_range, df.logratio, c=blue, alpha=0.5);

    results = fit(y, X, B, lambda_=lambda_)
    ax.plot(plot_x, results.predict(plot_X));

    ax.set_xlim(-0.01, 1.01);
    ax.set_xlabel('Scaled range');
    ax.set_ylabel('Log ratio');
    ax.set_title(r'$\lambda = {}$'.format(lambda_));

fig.tight_layout();
We can see that as \(\lambda\) decreases, the model becomes less smooth. Visually, it seems that the optimal value of \(\lambda\) lies somewhere between \(10^{-2}\) and \(10^{-4}\). We need a rigorous way to choose the optimal value of \(\lambda\). As is often the case in such situations, we turn to cross-validation. Specifically, we will use generalized cross-validation to choose the optimal value of \(\lambda\). The GCV score is given by
\[\operatorname{GCV}(\lambda) = \frac{n \sum_{i = 1}^n \left(y_i - \hat{y}_i\right)^2}{\left(n - \operatorname{tr} \mathbf{H}\right)^2}.\]
Here, \(\hat{y}_i\) is the \(i\)-th predicted value, and \(\mathbf{H}\) is the upper left \(n \times n\) submatrix of the influence matrix for the OLS model \(\mathbf{\tilde{y}} = \mathbf{\tilde{X}} \beta + \tilde{\varepsilon}\).
def gcv_score(results):
    X = results.model.exog[:-(q + 2), :]
    n = X.shape[0]
    y = results.model.endog[:n]
    y_hat = results.predict(X)

    hat_matrix_trace = results.get_influence().hat_matrix_diag[:n].sum()

    return n * np.power(y - y_hat, 2).sum() / np.power(n - hat_matrix_trace, 2)
Now we evaluate the GCV score of the model over a range of \(\lambda\) values.
lambdas = np.logspace(0, 50, 100, base=1.5) * 1e-8
gcv_scores = np.array([gcv_score(fit(y, X, B, lambda_=lambda_)) for lambda_ in lambdas])
fig, ax = plt.subplots(figsize=(8, 6))

ax.plot(lambdas, gcv_scores);
ax.set_xscale('log');
ax.set_xlabel(r'$\lambda$');
ax.set_ylabel(r'$\operatorname{GCV}(\lambda)$');
The GCV-optimal value of \(\lambda\) is therefore
lambda_best = lambdas[gcv_scores.argmin()]

lambda_best
0.00063458365729550153
This value of \(\lambda\) produces a visually reasonable fit.
fig, ax = plt.subplots(figsize=(8, 6))

ax.scatter(df.std_range, df.logratio, c=blue, alpha=0.5);

results = fit(y, X, B, lambda_=lambda_best)
ax.plot(plot_x, results.predict(plot_X), label=r'$\lambda = {}$'.format(lambda_best));

ax.set_xlim(-0.01, 1.01);
ax.legend();
We have only scratched the surface of additive models, fitting a simple model of one variable with penalized regression splines. In general, additive models are quite powerful and flexible, while remaining quite interpretable.
This post is available as an IPython notebook here.
Determines whether or not the specified object identifier (OID) identifies a control that is present in a list of controls.
#include "slapi-plugin.h"

int slapi_control_present( LDAPControl **controls, char const *oid,
    struct berval **val, int *iscritical );
This function takes the following parameters:
controls
List of controls that you want to check.
oid
OID of the control that you want to find.
val
If the control is present in the list of controls, this function specifies the pointer to the berval structure containing the value of the control. If you do not want to receive a pointer to the control value, pass NULL for this parameter.
iscritical
If the control is present in the list of controls, this function specifies whether or not the control is critical to the operation of the server:
0 means that the control is not critical to the operation.
1 means that the control is critical to the operation.
If you do not want to receive an indication of whether the control is critical or not, pass NULL for this parameter.
This function returns one of the following values:
1 if the specified control is present in the list of controls.
0 if the control is not present in the list of controls.
The val output parameter is set to point into the controls array. A copy of the control value is not made.
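As a rough usage sketch only: the plug-in function, the OID value, and the use of SLAPI_REQCONTROLS to fetch the request controls below are illustrative and not part of this reference.

#include "slapi-plugin.h"

/* Hypothetical pre-operation hook: check whether the client sent a
 * control with an example OID and whether it was marked critical. */
static int
example_preop( Slapi_PBlock *pb )
{
    LDAPControl **ctrls = NULL;
    struct berval *value = NULL;
    int is_critical = 0;

    slapi_pblock_get( pb, SLAPI_REQCONTROLS, &ctrls );

    if ( slapi_control_present( ctrls, "1.2.3.4.5", &value, &is_critical ) ) {
        /* value points into ctrls, so it must not be freed here. */
        if ( is_critical ) {
            /* Handle the control or reject the operation. */
        }
    }
    return 0;
}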
slapi_entry_get_uniqueid()
slapi_register_supported_control()
Welcome to Uncyclopedia
Spike is a grumpy old dog. You can stroke his fur, or you can be smitten by his stick. It is up to you.
Hello, Frank Fancinostril,:Frank Fancinostril 11:13 17-Aug-14
edit ¬ 21:43 19-Aug-14
edit )
edit ¬ 18:47 23-Aug-14
edit ¬ 03:07 24-Aug-14
- I don't know how to link to that TIE Fighter page, as you did with "movies" blue link and at bottom "Thunderbirds". I will remove Youtube link even though I thought it was hilarious reveal after set-up, but I understand redirecting traffic is a bad idea. Frank Fancinostril (talk) 03:19, August 24, 2014 (UTC)
- Thanks for fixing that link. Frank Fancinostril (talk) 05:55, August 24, 2014 (UTC)
Better that way; as you could see, after you deleted the YouTube link, your "Please click on [2]" became [1]!
Now to nag you about something completely different: We guide against the overuse of quotations at UNQUOTE. Basically, they should be attributed to someone real and should be something he said (but maybe taken out of context) or would have said. They should not be anonymous, and if they start arguing with one another, editors remove all of them. You had an anonymous quote that was pretty good and I massaged it to indicate why he was "speaking on condition of anonymity." But that's not a magic word you can just apply to all of them — and they aren't even coded using {{Q}}. It reads as though you want a mysterious figure to peer in from the side of the TV screen (you!) and say something irreverent — because it starts to read like a cartoon rather than a news article. See if you can weave the same humor with a different technique. Spıke ¬ 10:46 24-Aug-14
Always log in
My fellow Admin Frosty posted to your IP address. When editing in your userspace, always log in as you; otherwise, one of us has to guess if someone else is messing with your page. Cheers! Spıke ¬ 04:07 1-Sep-14
- Spike, I moved /renamed my page, but it doesn't show up using the search window. Do I need to rename it without the "Uncyclopedia:" in front of "Star Wars 7 (Film In Production)" ? Frank Fancinostril (talk) 06:42, September 1, 2014 (UTC)
- Done. Now here: Star Wars 7 (film in production). BTW, if you add something about yourself or interests to your User Page, your signature will link back to your pages. I have done this now as:52, September 1, 2014 (UTC)
- Thank you much ! Frank Fancinostril (talk) 07:04, September 1, 2014 (UTC)
Our new, "helpful" pull-down menu on the Move page baffles everyone! Unless you make sure it reads "(Main)" it adds a namespace to the start of whatever you type. Spıke ¬ 12:24 1-Sep-14
I renamed the page again, as Wikipedia doesn't capitalize words that clarify a title, or indeed words other than proper names in the page title itself; and we try to match their style. Spıke ¬ 22:10 1-Sep-14
- Cool, thanks Spike Frank Fancinostril (talk) 23:23, September 1, 2014 (UTC)
PS--Regarding "it doesn't show up using the search window," the Uncyclopedia search tool is notoriously bad. Several Uncyclopedians always go to an external search engine to search the site. However, your problem might have been caused by moving your article to an unintended namespace. Notice all the check-boxes in the search window; you can tick these to enable and disable various namespaces to be searched. Spıke ¬ 14:50 2-Sep-14
- Ok, that's true, because I actually discovered Uncyclopedia in a Google search regarding the band Yes, and it had me laughing in tears the more I read. (I might have to name drop Yes (band) or Jon Anderson into the SW7 page, just to make a link to it. It's over the top silly, which got me hooked on Uncyclopedia)Frank Fancinostril (talk) 00:15, September 3, 2014 (UTC)
Please don't boost other articles unless the link makes sense! That is, serve your reader rather than trying to get him to do things, even to read another great article. In July, the Yes page, though featured, was "brought up-to-date" by M00rglade. Spıke ¬ 02:04 3-Sep-14
- You're right. Focus Frank! Focus !Frank Fancinostril (talk) 02:36, September 3, 2014 (UTC)
If you like his stuff, though, by all means tell him so. I hear there is a tag-team editing competition coming up and you might get a partner. Spıke ¬ 03:02 3-Sep-14
Star Wars 7 bis
Every time you visit, you add a new episode to this! It may work as a diary of the entire progress of this movie, but think a bit about the typical reader (if we can decide who he is). It may be time to boil it down to just its best parts, or at least re-read it as if for the first time and tighten up stuff that is now old news. An article whose first five pages make people laugh until their dentures fall out, but then keep going and keep going and the guy doesn't finish it, are not as good as they could be. Spıke ¬ 02:50 21-Sep-14
I take it the movie now has an actual name. Anon today just added the name to your article. I reverted him, as the paragraph in question made sport of the fact that the movie did not have an actual name, and his addition broke it. However, you should reconsider the paragraph, as the basis for your joke is now "overtaken by events." Spıke ¬ 03:07 5-Oct-14
Code smells and their refactorings can be very daunting and intimidating to newbies. So in this series, I’ve tried to make them easy to understand, both for slightly experienced Ruby developers and starters alike.
This final article mentions a few more smells you should look out for and sums up what this small series wanted to achieve. A final whiff, if you like…
Topics
- Callbacks
- Bad Names
- Mixins
- Data Clumps
A Final Whiff
The last article in this series is something like a bonus round. I wanted to introduce you to a few more smells that can be addressed quickly and without much fuss. One for the road, so to speak. I think with the knowledge you’ve gathered from the previous articles, most of them won’t even need code examples to wrap your head around.
When you open a book about refactoring, you will easily find more smells than we have discussed. However, with these major ones under your belt, you will be well prepared to deal with any of them.
Generously applied comments are rarely a good idea—probably never. Why not? Because it might suggest that your design is not speaking for itself. That means your code is probably so complicated to understand that it needs literal explanations.
First of all, who wants to wade through hordes of text in your code—or worse, through code that is hard to understand. Jackpot if both are a common occurrence. That's just bad form and not very considerate of people who come after you—no offence, masochists, torture your future self all you want.
You want to write code that is expressive enough in itself. Create classes and methods that speak for themselves. In the best scenario, they tell a story that is easy to follow. That is probably one of the reasons convention over configuration became so influential. Reinventing the wheel is certainly sometimes a good practice to sharpen your understanding and to explore new territory, but in fast-paced development environments, your colleagues are looking for clarity and quick navigation—not only within your files but also within the mental map you create in your code.
I don’t want to drift off into a whole new topic, but naming plays a big role in all of that. And excessive commenting within your code slightly contradicts good naming practices and conventions. Don’t get me wrong, it’s fine to add comments—just stay on the path that “illuminates” your code rather than distracting from it. Comments should certainly not be instructions for clever code that mostly you can decipher because you wanted to show off. If you keep your methods simple—as you should—and name everything with consideration, then you have little need to write whole novels in between your code.
Stay away from the following:
- Todo lists
- Dead code commented out
- More than one comment per method
It’s also useful to break out parts of methods via extract method and giving this part of a method a name that tells us about its responsibility—rather than have all the details clutter up a high-level understanding of what’s going on within the method’s body.
def create_new_agent
  ...
end

...

# create new agent
visit root_path
click_on 'Create Agent'
fill_in 'Agent Name', with: 'Jinx'
fill_in 'Email', with: 'jinx@nsa.com'
fill_in 'Password', with: 'secretphrase'
click_button 'Submit'

...
What is easier to read? A no brainer of course! Use the free mileage you get by naming things properly via extracted methods. It makes your code so much smarter and easier to digest—plus the benefits of refactoring in one place if reused, of course. I bet this will help trim down your comments by a very significant amount.
Callbacks
This is a simple one. Don’t use callbacks that are not related to persistence logic! Your objects have a persistence life cycle—creating, saving and deleting objects, so to speak—and you don’t want to “pollute” that logic with other behaviour like the business logic of your classes.
Keep it simple, remember? Typical examples of what to avoid are sending emails, processing payments and stuff. Why? Because debugging and refactoring your code should be as easy as possible, and messy callbacks have a reputation of interfering with these plans. Callbacks make it a bit too easy to muddy the waters and to shoot yourself in the foot multiple times.
Another problematic point about callbacks is that they can hide the implementation of business logic in methods like
#save or
#create. So don’t be lazy and abuse them just because it seems convenient!
The biggest concern is coupling of concerns, of course. Why let the create method of
SpectreAgent, for example, deal with the delivery of a
#mission_assignment or something? As so often, just because we can do it—easily—doesn’t mean we should. It’s a guaranteed bite in the ass waiting to happen. The solution is actually pretty straightforward. If a callback’s behaviour has nothing to do with persistence, simply create another method for it and you’re done.
Bad Names
Bad naming choices have serious consequences. In effect, you are wasting other people’s time—or even better your own, if you have to revisit that piece of code in the future. The code you write is a set of instructions to be read by you and other people, so a purely logical, super prosaic, overly clever, or worse, a plain lazy approach to naming things is one of the worst things you can leave behind. Aim to make your code easier to understand by providing better names.
Clarity trumps false cleverness or unnecessary conciseness any day of the week! Work hard on naming methods, variables, and classes that make it easy to follow some sort of thread.
I don’t want to go as far as to say that you should aim for trying to tell a story, but if you can, go for it! Machines are not the ones who need to “read” your code—it’s run by them, of course. Maybe that’s one reason why the term “Software Writer” has been growing on me a bit lately. I’m not saying that the engineering aspect should be diminished, but writing software is more than writing soulless instructions for machines—at least software that is elegant and sparks joy to work with.
Don’t freak out if this turns out to be a lot more difficult than you thought. Naming is notoriously hard!
Mixins
Mixins are a smell? Well, let’s say they can be smelly. Multiple inheritance through Mixins can be useful, but there are a couple of things that make them less useful than you might have thought when you started out with OOP:
- They are trickier to test.
- They can’t have their own state.
- They “pollute” the namespace a bit.
- It’s not always super clear where functionality comes from—since it’s mixed in.
- They can inflate the size of classes or the number of methods drastically. Small classes rule, remember?
I suggest you read up a bit on “Composition Over Inheritance”. The gist of it is that you should rely more on reuse of your own, separately composed classes than on inheritance or subclassing. Mixins are a form of inheritance that can be put to good use but also something you should be a bit suspicious about.
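A quick, hypothetical sketch of the difference—the classes are invented for the example. With a mixin the behaviour is inherited into the class; with composition it lives in a small class of its own that you can test and swap independently.

# Mixin style: behaviour is inherited in.
module Disguisable
  def disguise
    'fake moustache'
  end
end

class MixinSpy
  include Disguisable
end

# Composition style: the behaviour is its own object and gets injected.
class Disguise
  def wear
    'fake moustache'
  end
end

class ComposedSpy
  def initialize(disguise = Disguise.new)
    @disguise = disguise
  end

  def disguise
    @disguise.wear
  end
end

MixinSpy.new.disguise     # => "fake moustache"
ComposedSpy.new.disguise  # => "fake moustache"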
Data Clumps
Watch out for repeatedly passing the same multiple arguments into your methods. That often suggests that they have a relationship that can be extracted into a class of its own—which can in turn often drastically simplify feeding these methods with data by reducing the size of arguments. Whether it’s worth introducing a new dependency is the thing you have to weigh, though.
This smell is another form of subtle duplication that we can handle better. A good example is passing a long list of arguments that make up an address and credit card info. Why not package all of this in an existing class, or extract a new class first and pass in the address and credit card objects instead? Another way to think about it is having a range object instead of a start and an end. If you have instance variables that fall for that smell, then extracting a class is worth considering. In other cases, a parameter object might offer the same quality of abstraction.
You’ll know that you have achieved a small win if your system is easier to understand and you found a new concept—like credit card—that you could encapsulate into an object.
Final Thoughts
Congratulations! You have leveled up your OOP skills significantly! Boss level status is approaching. No, seriously, great job if this whole topic was rather new to you!
As a final recommendation, I want you to take away one thing. Please remember that there is no recipe that will always work. You will need to weigh every problem differently and often mix different techniques to fit your needs. Also, for the rest of your career, this is most likely something you’ll never stop struggling with—I guess a good struggle, though, a creative and challenging one.
This is a little guess, but I feel that if you understood most of the topics we covered, you’ll be well on your way to writing code other developers like to discover. Thanks for your time reading this little series, and good luck becoming a happy hacker!
Currently, let's remove them from
app.component.scss.
1.2. Template
Update
app.component.html file with the following template:
<div class="container"> <div class="row mt-3"> <div class="col-md-9"> <div * <app-event [value]="event"></app-event> </div> <div class="text-center"> <a type="button" mdbBtn <h3 class="text-uppercase my-3">Schedule</h3> <h6 class="my-3"> It's going to be busy that today. You have <b> {{ events.length }} events </b> today. </h6> <h1 class="my-3"> <div class="row"> <div class="col-3 text-center"> <mdb-icon fas</mdb-icon> </div> <div class="col-9 ">Sunny</div> </div> <div class="row"> <div class="col-3 text-center"> <mdb-icon fas</mdb-icon> </div> <div class="col-3 ">23°C</div> </div> </h1> <p> Don't forget your sunglasses. Today will dry and sunny, becoming warm in the afternoon with temperatures of between 20 and 25 degrees. </p> </div> </div> </div>
List of changes:
- Added "Add event" button
- Added an event counter ({{ events.length }}) in the right column
- Added static (Weather Forecast) content to the right column
- Added some minor classes to enhance spacing
Preview:
Note:
In the right-hand column, we added an Event counter which tells us how many events are scheduled for today. In order to get this number we simply count the elements within the array using its length property.
2. Delete Event function
Now we can create a function which removes a particular Event from the array. It's a very simple function: it looks for the index of the clicked item and removes it from the events array using the splice() function.
Add following function to AppComponent class within the
app.component.ts file
deleteEvent(event: any) {
  const itemIndex = this.events.findIndex(el => el === event);
  this.events.splice(itemIndex, 1);
}
Note:
The findIndex() method returns the index of the first element in the array that satisfies the provided testing function. In our case we are checking whether the item (event) which we are about to delete is one of the items (el) in the array, and it returns its position.
Now that our function is ready, we can emit an event from the Event component.
3. Handle click event
- Add a new function to the EventComponent class within the
event.component.ts file.
- Update the mdb-badge in
event.component.html
- Test it. Run the app, open the developer console and click on the delete icon. You should see a confirmation message in the console.
handleDeleteClick() {
  console.log("Delete button clicked!");
}
<mdb-badge (click)="handleDeleteClick()" danger="true" class="text-center float-right">-</mdb-badge>
4. Event emitting
In order to call the parent (App) component's deleteEvent() function from child
(Event) component we have to emit an
Event from one component and catch it in the other.
Warning:
The naming might be a little bit confusing. This is because we have a component within our app which we called Event. This Event represents a single entry in our agenda (we could have called it Item, Appointment or Entry).
At the same time Angular uses
Event to describe JavaScript-like events. This
Event
refers to an "occurrence" to which JavaScript can react.
Since this may be a bit confusing, I will use different styles: whenever we refer to the Event Component I will use a yellow highlight, while whenever I refer to an
Angular Event
I will use a red font.
Whenever we want to emit an
Event, we have to import EventEmitter first. Since we
will transfer data outside the component, we also have to import the
Output directive.
- Import
EventEmitter and
Output in
event.component.ts
- Define the
@Output() within the EventComponent class.
- Update the handleDeleteClick() method with the following code:
Note:
In previous lessons we were passing data to (inside) our component; in order to do that we had to use the
Input directive.
Now, since we are passing data out (outside) of our component, we have to use the
Output directive.
@Output() deleteEventInstanceEvent: EventEmitter<any> = new EventEmitter<any>();
Note:
We called the Output deleteEventInstanceEvent to make it clear that we are emitting an
Event to delete an instance of Event.
You can read the name as deleteEventInstance
Event().
If we had called our calendar entry something different, e.g. Appointment, we could simply call this function deleteAppointmentEvent without using the extra Instance word.
handleDeleteClick() {
  this.deleteEventInstanceEvent.emit(this.value);
}
This is how your
event.component.ts file should look:
import {Component, EventEmitter, Input, Output} from '@angular/core';

@Component({
  selector: 'app-event',
  templateUrl: './event.component.html',
})
export class EventComponent {

  @Input() value: any;
  @Output() deleteEventInstanceEvent: EventEmitter<any> = new EventEmitter<any>();

  handleDeleteClick() {
    this.deleteEventInstanceEvent.emit(this.value);
  }
}
Now whenever the user clicks on the delete icon next to the Event title, it will emit a new
event and pass itself (an object) as a parameter.
5. Catching & handling events
The last thing we have to do is to catch the emitted
event from the
Event component.
In order to do that, let's update our <app-event> tag in
app.component.html:
<app-event [value]="event" (deleteEventInstanceEvent)="deleteEvent($event)"></app-event>
As you probably guessed, we have bound the emitted
event
(deleteEventInstanceEvent) to the internal App component's function -
deleteEvent(), and we are passing $event as a parameter.
In other words - we are telling Angular that it can expect that
<app-event> can emit
events, and which function we want to use to handle them.
Now, whenever we click on a delete icon:
- User clicks on the delete icon
- (click) calls handleDeleteClick() function
- handleDeleteClick() emits deleteEventInstanceEvent
event
- App component catches it and triggers deleteEvent()
- deleteEvent() deletes corresponding element from the array
Products.CMFTestCase 0.9.12
Integration testing framework for CMF.
Introduction
CMFTestCase is a thin layer on top of the ZopeTestCase package. It has been developed to simplify testing of CMF-based applications and products.
The CMFTestCase package provides
The function installProduct to install a Zope product into the test environment.
The function installPackage to install a Python package registered via five:registerPackage into the test environment. Requires Zope 2.10.4 or higher.
The function setupCMFSite to create a CMF portal in the test db.
Note: setupCMFSite accepts an optional products argument, which allows you to specify a list of products that will be added to the portal. Product installation is performed via the canonical Extensions.Install.install function. Since 0.8.2 you can also pass an extension_profiles argument to import GS extension profiles.
The class CMFTestCase of which to derive your test cases.
The class FunctionalTestCase of which to derive your test cases for functional unit testing.
The classes Sandboxed and Functional to mix-in with your own test cases.
The constants portal_name, portal_owner, default_products, default_base_profile, default_extension_profiles, default_user, and default_password.
The constant CMF15 which evaluates to true for CMF versions >= 1.5.
The constant CMF16 which evaluates to true for CMF versions >= 1.6.
The constant CMF20 which evaluates to true for CMF versions >= 2.0.
The constant CMF21 which evaluates to true for CMF versions >= 2.1.
The constant CMF22 which evaluates to true for CMF versions >= 2.2.
The module utils which contains all utility functions from the ZopeTestCase package.
Example CMFTestCase
from Products.CMFTestCase import CMFTestCase

CMFTestCase.installProduct('SomeProduct')
CMFTestCase.setupCMFSite(products=('SomeProduct',))

class TestSomething(CMFTestCase.CMFTestCase):

    def afterSetUp(self):
        self.folder.invokeFactory('Document', 'doc')

    def testEditDocument(self):
        self.folder.doc.edit(text_format='plain', text='data')
        self.assertEqual(self.folder.doc.EditableBody(), 'data')
Example CMFTestCase setup with GenericSetup
from Products.CMFTestCase import CMFTestCase

CMFTestCase.installProduct('SomeProduct')
CMFTestCase.setupCMFSite(extension_profiles=('SomeProduct:default',))
Please see the docs of the ZopeTestCase package, especially those of the PortalTestCase class.
Look at the example tests in this directory to get an idea of how to use the CMFTestCase package.
Copy testSkeleton.py to start your own tests.
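For functional tests, a rough sketch along the same lines might look like the following. The publish() helper and getStatus() come from the Functional mix-in of ZopeTestCase; the exact path and assertion are illustrative only.

from Products.CMFTestCase import CMFTestCase

CMFTestCase.setupCMFSite()

class TestPublishing(CMFTestCase.FunctionalTestCase):

    def testPortalIsReachable(self):
        # Publish the portal front page as the default test user.
        basic_auth = '%s:%s' % (CMFTestCase.default_user, CMFTestCase.default_password)
        response = self.publish('/' + CMFTestCase.portal_name, basic_auth)
        self.assertEqual(response.getStatus(), 200)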
Changelog
0.9.12 (2012-07-02)
- Use getSite from zope.component. [hannosch]
0.9.11 - 2010-09-20
- Fix the cleanup method called by safe_load_site to mark the component registry as uninitialized regardless of whether the flag is in Zope2.App.zcml or Products.Five.zcml. [davisagli]
0.9.10 - 2010-07-13
- Make compatible with Zope 2.13 and avoid setup problems with zope.schema vocabularies. [hannosch]
0.9.9 - 2009-11-14
- Call reindexObjectSecurity on the member folder conditionally, as CMF 2.2's PortalFolder no longer has this method. [davisagli]
- Specify all dependencies in a backwards compatible way. [hannosch]
0.9.8 - 2009-04-19
- Fixed deprecation warnings for use of Globals. [hannosch]
- Added CMF22 constant. [stefan]
0.9.8b4 - 2008-10-26
- Fix homepage URL in setup.py. [stefan]
0.9.8b3 - 2008-10-16
- Bugfix: Reindex security of member-area after taking ownership. [stefan]
0.9.8b2 - 2008-10-08
- Egg was broken due to use of svn export. Who'd have thunk setuptools makes a difference? [stefan]
0.9.8b1 - 2008-10-05
- Install all CMF products quietly.
- Eggified Products.CMFTestCase.
0.9.7
- The CMFSite layer can now be set up more than once.
0.9.6
- Deal with new skin setup in CMF 2.1 and up.
- Provide hasPackage and installPackage if ZopeTestCase supports it.
- Use new stateless GenericSetup API in CMF 2.1 and up.
0.9.4
- Minor test fixes to cater for changes in CMF.
0.9.2
- Added support for local component registries. We now call setSite() on the portal before importing profiles and before each test.
0.9.0
- Added CMF21 constant.
- Prepared for switching ZopeTestCase to Zope3 interfaces.
- Load etc/site.zcml lazily instead of using the wrapper.
- Import extension profiles one by one to gain better control.
- Added a CMFTestCase.addProfile() method that allows to import extension profiles into the site. This is an alternative to passing the 'extension_profiles' argument to setupCMFSite().
- Create the CMF site lazily using layer.CMFSite.
- Renamed utils.py to five.py.
0.8.6
- Made sure layer cleanup resets Five.zcml to "not initialized".
0.8.4
- Allow to pass a base profile to the setupCMFSite function.
- Added a ZCMLLayer to support recent Zopes and zope.testing.testrunner. Thanks to Whit Morris.
0.8.2
- Added support for passing a list of GenericSetup extension profiles to the setupCMFSite function.
0.8.0
- Updated to new portal creation process of CMF 1.6.
0.7.0
- Updated to CMF 1.5.
- Added an API module, ctc.py.
- Added an addProduct() method to CMFTestCase that allows to add a product to the portal. This is as alternative to passing a 'products' argument to setupCMFSite().
- CMFTestCase now uses the version independend 'transaction' module provided by ZopeTestCase >= 0.9.8.
0.6.0 (not released)
- setupCMFSite() now accepts a 'products' argument which allows to specify a list of products that will be added to the portal by executing their respective Extensions.Install.install() methods.
- Removed setupCMFSkins() and the ability to setup a CMF site w/o skins.
- Made the ZopeTestCase.utils module available as CMFTestCase.utils.
- Added FunctionalTestCase base class for "functional" CMF tests.
- Test classes now assert their interfaces.
0.5.0
- Package for testing CMF-based products and applications.
- Author: Stefan H. Holek
- Keywords: cmf testing
- License: ZPL
|
https://pypi.python.org/pypi/Products.CMFTestCase/0.9.12
|
CC-MAIN-2014-15
|
refinedweb
| 953
| 53.58
|
Timeline …
08/25/05:
- 20:55 Ticket #4659 (UPDATE: sisc-1.11.3) closed by
- fixed: Update committed, thanks. Note, to make things easier, try to attach the …
- 20:54 Changeset [13724] by
- Port: sisc Version: 1.11.3 Bug: 4659 Submitted-by: maintainer Update …
- 20:49 Ticket #3814 (zope 2.7.3 doesn't start) closed by
- fixed: This should be fixed with the latest python23; Robert, be sure to update …
- 20:38 Ticket #4666 (gimp2 / gimp-print) created by
- When installing gimp2 the following error occurs (did a port …
- 20:19 Changeset [13723] by
- Submitted by: jmpp@ Uprev the version number a bit to trigger a base/ …
- 19:04 Ticket #4653 (port command not working after selfupdate (invalid command name ui_debug)) closed by
- fixed: Found out the bug and fixed it.
- 19:03 Changeset [13722] by
- Merge the previous bug fix in RELEASE1. This should solve the fact that …
- 19:00 Changeset [13721] by
- Fix a namespace problem. This problem was actually hidden by package …
- 17:43 Changeset [13720] by
- Bug: Submitted by: jmpp@ I accidently foobar'ed the r1 shipping branch …
- 17:10 Changeset [13719] by
- new port python/py-readline
- 16:06 Changeset [13718] by
- Switched underlying lisp back to clisp. Openmcl's sockets didn't work …
- 15:38 Ticket #4662 (BUG: fxscintilla 1.62_1 fails to build on 10.4.2) created by
- I was building FreeRide and fxscintilla failed. The error: Making all in …
- 13:14 Ticket #4660 (UPDATE OpenVPN 2.0.2) created by
- Please update the net/openvpn2 Portfile Thanks, $ diff -u Portfile.orig …
- 12:38 Ticket #4659 (UPDATE: sisc-1.11.3) created by
- Version bump -- the diff is below: Index: Portfile …
- 12:31 Ticket #4658 (Can't get subversion to build: +mac-os-x-server-mod_dav_svn) closed by
- invalid: +mac-os-x-server-mod_dav_svn is for building ra_dav against the apache2 …
- 12:27 Ticket #4658 (Can't get subversion to build: +mac-os-x-server-mod_dav_svn) created by
- Building subversion with ra_dav seems to be broken on my system. Looks …
- 12:17 Ticket #4642 (UPDATE squeak-3.7 to squeak-3.8) closed by
- fixed: Committed, thanks! -Greg
- 12:16 Changeset [13717] by
- Version bump to 3.8. Thanks to Brent Fulgham! Bug: 4642 …
- 11:28 Ticket #4656 (Python 2.4 Portfile could use a readline variant) created by
- Hi, I just copied it out of the Python23 port file and it worked. Patch …
- 11:08 Changeset [13716] by
- Fix a syntax error in tomcat5.
- 10:45 Ticket #1598 (add $Id$ to base/, add -V (version info) to port, please comment) closed by
- later: Yeah, we can close this. I'm marking the resolution as LATER 'cause I …
- 09:37 Ticket #4653 (port command not working after selfupdate (invalid command name ui_debug)) created by
- After installing the newest version of DarwinPorts (1.011) from tarball …
- 09:31 Ticket #4652 (BUG: Worms of Prey (wop) won't compile) created by
- It's current version is 0.3.2, and the data date is 2005-05-13. …
- 09:28 Ticket #4651 (Samba3 depends on readline) created by
- From a clean install on 10.4.2, samba3 won't compile until you install …
- 06:27 Changeset [13715] by
- Add (an empty) test rule so make test in base/ works.
- 06:15 Changeset [13714] by
- Fix make when configure has been changed since last commit. We probably …
- 05:59 Changeset [13713] by
- Add test of checksums.
- 05:08 Ticket #4650 (gawk build error) created by
- Build trace Error: Target com.apple.build returned: shell command "cd …
- 04:35 Ticket #4544 (gimp2 can't find XML::Parser) closed by
- fixed: Fixed - Thanks
- 04:34 Changeset [13712] by
- Bug: 4544 - missing p5-xml-parser dependency Submitted by: Reviewed by: …
- 00:48 Ticket #3886 (bakery error on Tiger) closed by
- fixed: just commited an update. It should build just fine.
- 00:47 Changeset [13711] by
- Bug: Submitted by: Reviewed by: Approved by: Obtained from: bump to 2.3.15
- 00:30 Changeset [13710] by
- Port: dcraw Version: 7.53 New port graphics/dcraw Raw Digital Photo …
- 00:26 Ticket #3857 (BUG: sword-bible-web-1.4 checksum incorrect) closed by
- fixed: Checksum has been updated, thanks for the report.
- 00:26 Changeset [13709] by
- Port: sword-bible-web Version: 1.4 Revision: 2 Bug: 3857 Update port …
08/24/05:
- 23:53 Changeset [13708] by
- Port: xsp Version: 1.0.6 Bug: 2893 Update port devel/xsp Add www as a …
- 23:53 Ticket #2893 (RFC: xsp should be re-categorized) closed by
- fixed: It seems to fit pretty well into both, so www has been added as a second …
- 23:38 Ticket #4182 (python23 cannot import asyncore) closed by
- fixed: Fix committed, switched maintainer to Martin; thanks for the patch.
- 23:37 Changeset [13707] by
- Port: python23 Version: 2.3.5 Revision: 4 Bug: 4182 Update port …
- 23:28 Ticket #4642 (UPDATE squeak-3.7 to squeak-3.8) created by
- A few minor changes to allow the Version 3.7-7 VM to build and run with …
- 23:25 Ticket #3167 (BUG: port and libsigc++) closed by
- duplicate: * This bug has been marked as a duplicate of 1580 *
- 22:36 Changeset [13706] by
- Submitted by: pguyot@ Reviewed by: jmpp@ Optimization: ui_* was a …
- 22:05 Changeset [13705] by
- Fix bugs in, and clean up, calculation of which startupitem type to use in …
- 20:49 Changeset [13704] by
- Submitted by: jbj@… Reviewed by: jmpp@ Build fixes for the …
- 20:34 Changeset [13703] by
- Submitted by: jbj@… Reviewed by: jmpp@ Update the aging yum …
- 19:39 Changeset [13702] by
- New implementation of the checksums parser, using an array. This new …
- 16:26 Changeset [13701] by
- autoreconf due to aclocal.m4 changes.
- 16:20 Changeset [13700] by
- Daemondo relies on CFNotificationCenterGetDarwinNotifyCenter, which is not …
- 16:20 Changeset [13699] by
- Submitted by: pguyot@ Reviewed by: jmpp@ Fixed a problem with 10.3's …
- 16:19 Changeset [13698] by
- Add a missing include to daemondo
- 16:17 Ticket #4618 (Netpbm does not build with GCC 4.0.0) closed by
- fixed: Fixed, Thanks !
- 16:16 Changeset [13697] by
- fix xcodeversion check Bug: 4618 Submitted by: Reviewed by: Approved by: …
- 15:54 Changeset [13696] by
- Submitted by: pguyot@ jberry@ Reviewed by: jmpp@ Rework of the …
- 15:52 Changeset [13695] by
- More fixes for build of libcurl support on Jaguar. curl-config improperly …
- 14:27 Changeset [13694] by
- Fix post-destroot phase (trash files in destroot, relative to variables).
- 14:24 Changeset [13693] by
- Fix make clean target of darwinports1.0 directory (don't override the …
- 14:16 Changeset [13692] by
- update to 1.1
- 14:07 Changeset [13691] by
- Fix build of libcurl support on Jaguar, where the renamed constant …
- 13:51 Changeset [13690] by
- Submitted by: jkh@ Reviewed by: jmpp@ Force all volume detaches and …
- 13:10 Changeset [13689] by
- Update dovecot-stable --> 1.0.alpha1. Once it goes final, or even before, …
- 10:25 Ticket #4623 (RFE: Request Port of latex2html) created by
- Can we get latex2html as a darwinport, to complement the port of tetex. …
- 08:35 Ticket #4607 (tftp-hpa worse than useless) closed by
- fixed: To use tftpd, you'll need to modified settings in …
- 08:31 Changeset [13688] by
- Bug: 4607 Submitted by: Bahamat <bahamat@…> Reviewed by: …
- 08:09 Changeset [13687] by
- Version bump to 5.5.27. Thanks to Paulo Moura!
- 08:01 Ticket #4621 (port install nethack fails; conflicting types for tparm) created by
- mcc-1:~ mcc$ sudo port install nethack Password: ---> Building nethack …
- 07:58 Changeset [13686] by
- update to 3.2.4
- 07:51 Ticket #4620 (upgrading gawk from 3.1.4_1 to 3.1.5_0 fails) created by
- Attempting to upgrade gawk from 3.1.4_1 to 3.1.5_0 fails with the …
- 07:37 Changeset [13685] by
- update to version 1.03
- 07:26 Changeset [13684] by
- Total number of ports parsed: 2738 Ports successfully parsed: 2738 …
- 07:11 Changeset [13683] by
- Version bump, update svk to 1.04
- 07:11 Changeset [13682] by
- Version bump, updatee p5-svn-mirror to 0.66 (needed for svk 1.04)
- 06:58 Changeset [13681] by
- Change readline and port dependencies to port:. This should avoid the …
- 06:56 Changeset [13680] by
- Changed so we don't pick up /usr/lib/libedit (which Apple has linked to …
- 04:29 Ticket #4618 (Netpbm does not build with GCC 4.0.0) created by
- I tried building netpbm 10.29 today and the compilation ended with a bus …
- 03:03 Changeset [13679] by
- Total number of ports parsed: 2738 Ports successfully parsed: 2738 …
- 02:11 Ticket #4605 (UPDATE: p5-xml-dom-1.44) closed by
- fixed: excellent - commited;
- 02:11 Changeset [13678] by
- Bug: #4605 Submitted by: narf_tm@… …
- 02:10 Ticket #4606 (UPDATE: p5-log-log4perl-1.00) closed by
- fixed: thanks, commited!
- 02:10 Changeset [13677] by
- Bug: #4606 Submitted by: narf_tm@… …
- 02:05 Ticket #4600 (NEW: python/py-cherrypy-2.0.0) closed by
- fixed: thanks, commited!
- 02:05 Changeset [13676] by
- Bug: #4600 Submitted by: yuhei@… Reviewed by: …
- 02:03 Ticket #4610 (BUG mailsync doesn't compile) closed by
- fixed: excellent - commited;
- 02:03 Changeset [13675] by
- Bug: #4610 Submitted by: pguyot@… Reviewed by: …
- 01:44 Ticket #4617 (BUG: netatalk does not build on 10.4) created by
-
- 00:42 Changeset [13674] by
- pcre 6.3
08/23/05:
- 23:47 Ticket #4601 (UPDATE stardict-2.4.5 and some portfile changes) closed by
- fixed: Update has been committed, thanks.
- 23:47 Changeset [13673] by
- Port: stardict Version: 2.4.5 Bug: 4601 Submitted-by: maintainer …
- 22:11 Changeset [13672] by
- Bug: Submitted by: Reviewed by: Approved by: Obtained from: bump to 2.10.1
- 22:11 Changeset [13671] by
- Bug: Submitted by: Reviewed by: Approved by: Obtained from: buump to …
- 22:11 Changeset [13670] by
- Bug: Submitted by: Reviewed by: Approved by: Obtained from: bump to 2.10.2
- 22:11 Changeset [13669] by
- Default launchd startup item support to on. To disable it, configure with …
- 22:10 Changeset [13668] by
- Bug: Submitted by: Reviewed by: Approved by: Obtained from: bump to 0.34.1
- 22:10 Changeset [13667] by
- Bug: Submitted by: Reviewed by: Approved by: Obtained from: bump to 1.5.3
- 22:10 Changeset [13666] by
- Bug: Submitted by: Reviewed by: Approved by: Obtained from: new port …
- 22:08 Changeset [13665] by
- Bug: Submitted by: Reviewed by: Approved by: Obtained from: bump to 1.12.2
- 21:28 Changeset [13664] by
- Revisions to tomcat5 to make it possible to run it under JDK 5.0
- 21:22 Changeset [13663] by
- New port mysql-connector-java 3.1.10, the official JDBC connector for …
- 21:15 Changeset [13662] by
- Update java/jakarta-log4j --> 1.2.11. Log4j has now moved out from Jakarta …
- 21:13 Changeset [13661] by
- Update java/commons-lang --> 2.1
- 21:11 Changeset [13660] by
- Update java/commons-digester --> 1.7
- 21:10 Changeset [13659] by
- New startupitem support (partially completed) for logging, and for …
- 21:08 Changeset [13658] by
- Fix build on XCode 2.1
- 21:08 Changeset [13657] by
- Add support to daemondo for pidfile control and tracking
- 20:47 Ticket #4065 (mailsync installation fails) closed by
- duplicate: Bug 4610 has a patch, so duping this to it. * This bug has been marked …
- 20:39 Ticket #4578 (UPDATE: gsoap-2.7.6a) closed by
- fixed: Update has been committed, thanks for the patches.
- 20:39 Changeset [13656] by
- Port: gsoap Version: 2.7.6a Bug: 4578 Submitted-by: maintainer …
- 20:30 Ticket #4563 (Update R port to version 2.1.1) closed by
- fixed: Update has been committed, thanks. In the future, to simplify things, can …
- 20:30 Changeset [13655] by
- Port: R Version: 2.1.1 Bug: 4563 Submitted-by: maintainer Update …
- 20:18 Ticket #4561 (BUG: libdvdnav installs dvdnav.m4 in /usr/local) closed by
- fixed: Incorrectly-placed file fixed, and port updated to 0.1.10. Thanks for the …
- 20:18 Changeset [13654] by
- Port: libdvdnav Version: 0.1.10 Bug: 4561 Update port devel/libdvdnav …
- 20:12 Ticket #4305 (JHymn 0.8.3 checksum & version mismatch) closed by
- fixed: Update has been committed, thanks.
- 20:12 Changeset [13653] by
- Port: JHymn Version: 0.8.5 Bug: 4305 Submitted-by: maintainer Update …
- 20:07 Ticket #3031 (BUG: ettercap-ng fails to build) closed by
- fixed: Now that the CVS server is back, your changes have been committed.
- 20:07 Changeset [13652] by
- Port: ettercap-ng Version: 0.7.3 Bug: 3031 Submitted-by: maintainer …
- 19:42 Ticket #4612 (UPDATE freeciv 2.0.4) created by
- Version bump for freeciv.
- 19:41 Ticket #4611 (UPDATE gtk2: 2.6.10) created by
- version bump of gtk2 port.
- 19:36 Ticket #4610 (BUG mailsync doesn't compile) created by
- mailsync requires some file that c-client didn't provide. Moreover, …
- 19:33 Changeset [13651] by
- Install linkage.c (required by some port).
- 19:30 Ticket #2950 (RFC: add a variant to allow lablgtk2 to be build without too much deps) closed by
- fixed: Committed. Thanks.
- 19:30 Changeset [13650] by
- New variant: rsvg Bug: #2950 Submitted by: Antoine Reilles …
- 19:06 Changeset [13649] by
- Comment rmd160 checksum until it's merged in RELEASE1 Lack of eol at eof …
- 19:05 Changeset [13648] by
- New port: teg (risk-like game)
- 19:04 Changeset [13647] by
- New port: ssldump (checksums and patch-ssl_ssldecode_c are taken from …
- 19:04 Changeset [13646] by
- Version bump (fixes distfile problem as well).
- 19:03 Changeset [13645] by
- Documentation for rmd160 checksum.
- 18:59 Changeset [13644] by
- New checksum type: rmd160.
- 18:58 Changeset [13643] by
- Factorize checksum code.
- 18:05 Changeset [13642] by
- update to 3.2.3
- 17:53 Ticket #4558 (stegdetect fails to build) closed by
- fixed: commited!
- 17:53 Changeset [13641] by
- Bug: #4558 works with gcc-3 only on darwin 8
- 13:59 Ticket #4607 (tftp-hpa worse than useless) created by
- for the life of me I can't get tftpd to listen on the tftp port. tftpd …
- 13:33 Ticket #4606 (UPDATE: p5-log-log4perl-1.00) created by
- p5-log-log4perl-1.00 the portfile can be found here: ATTACHED …
- 13:29 Ticket #4605 (UPDATE: p5-xml-dom-1.44) created by
- p5-xml-dom-1.44 the portfile can be found here: ATTACHED Description: …
- 10:14 Ticket #4603 (NEW: cobertura-1.6) created by
- Cobertura-1.6 The portfile is attached Description: Cobertura is a Java …
- 08:34 Ticket #4601 (UPDATE stardict-2.4.5 and some portfile changes) created by
- Here is portfile for stardict updated to 2.4.5. Also it can be compiled …
- 08:22 Ticket #4600 (NEW: python/py-cherrypy-2.0.0) created by
- Portfile of CherryPy ()
- 02:17 Ticket #4597 (Dia 0.94 does not install) closed by
- duplicate: closing this ... the information is in 4598 * This bug has been marked …
- 02:04 Ticket #4598 (libxml2: libxml2-2.6.19: No such file or directory) created by
- I had tried using Port Authority, and it does not show any error. …
- 01:58 Ticket #4597 (Dia 0.94 does not install) created by
-
# Command Line Interface
The following is a comprehensive reference of the Redwood CLI. You can get a glimpse of all the commands by scrolling the aside to the right.
The Redwood CLI has two entry-point commands:
- redwood (alias
rw), which is for developing an application, and
- redwood-tools (alias
rwt), which is for contributing to the framework.
This document covers the
redwood command. For
redwood-tools, see Contributing in the Redwood repo.
A Quick Note on Syntax
We use yargs and borrow its syntax here:
yarn redwood generate page <name> [path] --option
redwood g page is the command.
<name> and
[path] are positional arguments.
<> denotes a required argument.
[] denotes an optional argument.
--option is an option.
Every argument and option has a type. Here
<name> and
[path] are strings and
--option is a boolean.
You'll also sometimes see arguments with trailing
.. like:
yarn redwood build [side..]
The
.. operator indicates that the argument accepts an array of values. See Variadic Positional Arguments.
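For example, build (documented below) takes a variadic side argument, so you can pass no sides, one side, or several. The invocations here are just illustrative:

yarn redwood build web        # build only the web side
yarn redwood build api web    # build both sides explicitly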
# build
Build for production.
yarn redwood build [side..]
We use Babel to transpile the api side into
./api/dist and Webpack to package the web side into
./web/dist.
Usage
Example
Running
yarn redwood build without any arguments generates the Prisma client and builds both sides of your project:
~/redwood-app$ yarn redwood build
yarn run v1.22.4
$ /redwood-app/node_modules/.bin/redwood build
  ✔ Generating the Prisma client...
  ✔ Building "api"...
  ✔ Building "web"...
Done in 17.37s.
Files are output to each side's
dist directory:
├── api
│   ├── dist
│   ├── prisma
│   └── src
└── web
    ├── dist
    ├── public
    └── src
# check (alias diagnostics)
Get structural diagnostics for a Redwood project (experimental).
yarn redwood check
Example
~/redwood-app$ yarn redwood check yarn run v1.22.4 web/src/Routes.js:14:5: error: You must specify a 'notfound' page web/src/Routes.js:14:19: error: Duplicate Path web/src/Routes.js:15:19: error: Duplicate Path web/src/Routes.js:17:40: error: Page component not found web/src/Routes.js:17:19: error (INVALID_ROUTE_PATH_SYNTAX): Error: Route path contains duplicate parameter: "/{id}/{id}"
# console (alias c)
Launch an interactive Redwood shell (experimental):
- This has not yet been tested on Windows.
- The Prisma Client must be generated prior to running this command, e.g.
yarn redwood prisma generate. This is a known issue.
yarn redwood console
Right now, you can only use the Redwood console to interact with your database:
Example
~/redwood-app$ yarn redwood console yarn run v1.22.4 > await db.user.findMany() > [ { id: 1, email: 'tom@redwoodjs.com', name: 'Tom' } ]
# dataMigrate
Data migration tools.
yarn redwood dataMigrate <command>
# install
- Appends a
DataMigration model to
schema.prisma for tracking which data migrations have already run.
- Creates a DB migration using
yarn redwood prisma migrate dev --create-only create_data_migrations.
- Creates
api/db/dataMigrations directory to contain data migration scripts.
yarn redwood dataMigrate install
# up
Executes outstanding data migrations against the database. Compares the list of files in
api/db/dataMigrations to the records in the
DataMigration table in the database and executes any files not present.
If an error occurs during script execution, any remaining scripts are skipped and console output will let you know the error and how many subsequent scripts were skipped.
yarn redwood dataMigrate up
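To give a feel for what such a script might contain, here is a minimal sketch. The file name, the default-export signature receiving the Prisma client as db, and the user/fullName model and field are assumptions for illustration, not the exact code Redwood generates:
// api/db/dataMigrations/20200901120000-backfill-full-name.js (hypothetical name)
// A data migration performs a one-off data change using the Prisma client.
export default async ({ db }) => {
  const users = await db.user.findMany()
  for (const user of users) {
    // Backfill a hypothetical fullName field from the existing name field.
    await db.user.update({
      where: { id: user.id },
      data: { fullName: user.name },
    })
  }
}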
# db
Database tools.
WARNING
As of
v0.25,
yarn redwood db <command> has been deprecated in favor of
yarn redwood prisma <command>. Click here to skip to the prisma section below.
yarn redwood db <command>
# down
Migrate your database down.
WARNING
As of
v0.25,
yarn redwood db <command> has been deprecated in favor of
yarn redwood prisma <command>. Click here to skip to the prisma section below.
yarn redwood db down [decrement]
Example
Given the following migrations,
api/db/migrations/
├── 20200518160457-create-users   <-- desired
├── 20200518160621-add-profiles
├── 20200518160811-add-posts      <-- current
└── migrate.lock
we could get to
20200518160457-create-users by running:
~/redwood-app$ yarn redwood db down 2
# generate
Generate the Prisma client.
WARNING
As of
v0.25,
yarn redwood db <command> has been deprecated in favor of
yarn redwood prisma <command>. Click here to skip to the prisma section below.
yarn redwood db generate
The Prisma client is auto-generated and tailored to your
schema.prisma.
This means that
yarn redwood db generate needs to be run after every change to your
schema.prisma for your Prisma client to be up to date. But you usually won't have to do this manually as other Redwood commands run this behind the scenes.
# introspect
Introspect your database and generate models in
./api/db/schema.prisma, overwriting existing models.
WARNING
As of
v0.25,
yarn redwood db <command> has been deprecated in favor of
yarn redwood prisma <command>. Click here to skip to the prisma section below.
yarn redwood db introspect
# save
Create a new migration.
WARNING
As of
v0.25,
yarn redwood db <command> has been deprecated in favor of
yarn redwood prisma <command>. Click here to skip to the prisma section below.
yarn redwood db save [name..]
A migration defines the steps necessary to update your current schema.
Running
yarn redwood db save generates the following directories and files as necessary:
api/db/migrations
├── 20200516162516-create-users
│   ├── README.md
│   ├── schema.prisma
│   └── steps.json
└── migrate.lock
migrations: A directory to store migrations.
migrations/<migration>: A directory for a specific migration. The name (
<migration>) is composed of a timestamp of when it was created and the name given during
yarn redwood db save.
migrations/<migration>/README.md: A human-readable description of the migration, including metadata like when the migration was created and by who, a list of the actual migration changes, and a diff of the changes made to
schema.prisma.
migrations/<migration>/schema.prisma: The schema that will be created if the migration is applied.
migrations/<migration>/steps.json: An alternate representation of the migration steps that will be applied.
migrate.lock: A lock file specifying the current migration.
# seed
Seed your database with test data.
WARNING
As of
v0.25,
yarn redwood db <command> has been deprecated in favor of
yarn redwood prisma <command>. Click here to skip to the prisma section below.
yarn redwood db seed
Runs
seed.js in
./api/db.
seed.js instantiates the Prisma client and provides an async main function where you can put any seed data—data that needs to exist for your app to run. See the example blog's seed.js file.
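As a rough sketch of what seed.js can look like (the user model and the sample record here are placeholders; adapt them to your schema.prisma):
// ./api/db/seed.js (sketch)
const { PrismaClient } = require('@prisma/client')
const db = new PrismaClient()

async function main() {
  // Put any data your app needs to run here.
  await db.user.create({
    data: { email: 'alice@example.com', name: 'Alice' },
  })
}

main()
  .catch((e) => console.error(e))
  .finally(async () => {
    await db.$disconnect()
  })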
# studio
Start Prisma Studio, a visual editor for your database.
WARNING
As of
v0.25,
yarn redwood db <command> has been deprecated in favor of
yarn redwood prisma <command>. Click here to skip to the prisma section below.
yarn redwood db studio
# up
Generate the Prisma client and apply migrations.
WARNING
As of
v0.25,
yarn redwood db <command> has been deprecated in favor of
yarn redwood prisma <command>. Click here to skip to the prisma section below.
yarn redwood db up [increment]
Example
Given the following migrations
api/db/migrations/
├── 20200518160457-create-users   <-- current
├── 20200518160621-add-profiles
├── 20200518160811-add-posts      <-- desired
└── migrate.lock
we could get to
20200518160811-add-posts by running:
~/redwood-app$ yarn redwood db up 2
# dev
Start development servers for api and web.
yarn redwood dev [side..]
yarn redwood dev api starts the Redwood dev server and
yarn redwood dev web starts the Webpack dev server with Redwood's config.
Usage
If you're only working on your sdl and services, you can run just the api server to get GraphQL Playground on port 8911:
~/redwood-app$ yarn redwood dev api yarn run v1.22.4 $ /redwood-app/node_modules/.bin/redwood dev api $ /redwood-app/node_modules/.bin/dev-server 15:04:51 api | Listening on 15:04:51 api | Watching /home/dominic/projects/redwood/redwood-app/api 15:04:51 api | 15:04:51 api | Now serving 15:04:51 api | 15:04:51 api | ►
Using
--forward (alias
--fwd), you can pass one or more Webpack Dev Server config options. The following will run the dev server, set the port to
1234, and disable automatic browser opening.
~/redwood-app$ yarn redwood dev --fwd="--port=1234 --open=false"
You may need to access your dev application from a different host, like your mobile device. To resolve the “Invalid Host Header” message, run the following:
~/redwood-app$ yarn redwood dev --fwd="--disable-host-check"
For the full list of Webpack Dev Server settings, see this documentation.
# deploy
Deploy your redwood project to a hosting provider target.
For Jamstack hosting providers like Netlify and Vercel, the deploy command runs the set of steps to build, apply production DB changes, and apply data migrations. In this context, it is often referred to as a Build Command.
For hosting providers like AWS, this command runs the steps to both build your project and deploy it to AWS.
yarn redwood deploy <target>
# aws
Deploy to AWS using the selected provider
yarn redwood deploy aws [provider]
# netlify
Build command for Netlify deploy
yarn redwood deploy netlify [provider]
Example The following command will build, apply Prisma DB migrations, and skip data migrations.
yarn redwood deploy netlify --no-data-migrate
# vercel
Build command for Vercel deploy
yarn redwood deploy vercel [provider]
Example The following command will build, apply Prisma DB migrations, and skip data migrations.
yarn redwood deploy vercel --no-data-migrate
# destroy (alias d)
Rollback changes made by the generate command.
yarn redwood d <type>
# generate (alias g)
Save time by generating boilerplate code.
yarn redwood generate <type>
Some generators require that their argument be a model in your
schema.prisma. When they do, their argument is named
<model>.
Undoing a Generator with a Destroyer
Most generate commands (i.e., everything but
yarn redwood generate dataMigration) can be undone by their corresponding destroy command. For example,
yarn redwood generate cell can be undone with
yarn redwood d cell.
# cell
Generate a cell component.
yarn redwood generate cell <name>
Cells are a signature feature of Redwood. We think they provide a simpler and more declarative approach to data fetching.
Usage
See the Cells section of the Tutorial.
Destroying
yarn redwood d cell <name>
Example
Generating a user cell:
~/redwood-app$ yarn redwood generate cell user yarn run v1.22.4 $ /redwood-app/node_modules/.bin/redwood g cell user ✔ Generating cell files... ✔ Writing `./web/src/components/UserCell/UserCell.test.js`... ✔ Writing `./web/src/components/UserCell/UserCell.js`... Done in 1.00s.
A cell defines and exports five constants:
QUERY,
Loading,
Empty,
Failure, and
Success:
// ./web/src/components/UserCell/UserCell.js export const QUERY = gql` query { user { id } } ` export const Loading = () => <div>Loading...</div> export const Empty = () => <div>Empty</div> export const Failure = ({ error }) => <div>Error: {error.message}</div> export const Success = ({ user }) => { return JSON.stringify(user) }
# component
Generate a component.
yarn redwood generate component <name>
Redwood loves function components and makes extensive use of React Hooks, which are only enabled in function components.
Destroying
yarn redwood d component <name>
Example
Generating a user component:
~/redwood-app$ yarn redwood generate component user yarn run v1.22.4 $ /redwood-app/node_modules/.bin/redwood g component user ✔ Generating component files... ✔ Writing `./web/src/components/User/User.test.js`... ✔ Writing `./web/src/components/User/User.js`... Done in 1.02s.
The component will export some jsx telling you where to find it.
// ./web/src/components/User/User.js const User = () => { return ( <div> <h2>{'User'}</h2> <p>{'Find me in ./web/src/components/User/User.js'}</p> </div> ) } export default User
# dataMigration
Generate a data migration script.
yarn redwood generate dataMigration <name>
Creates a data migration script in
api/db/dataMigrations.
Usage
See the Data Migration docs.
Usage
# function
Generate a Function.
yarn redwood generate function <name>
Not to be confused with Javascript functions, Capital-F Functions are meant to be deployed to serverless endpoints like AWS Lambda.
Usage
See the Custom Function recipe.
Destroying
yarn redwood d function <name>
Example
Generating a user function:
~/redwood-app$ yarn redwood generate function user yarn run v1.22.4 $ /redwood-app/node_modules/.bin/redwood g function user ✔ Generating function files... ✔ Writing `./api/src/functions/user.js`... Done in 16.04s.
Functions get passed
context which provides access to things like the current user:
// ./api/src/functions/user.js export const handler = async (event, context) => { return { statusCode: 200, body: `user function`, } }
Now if we run
yarn redwood dev api:
~/redwood-app$ yarn redwood dev api yarn run v1.22.4 $ /redwood-app/node_modules/.bin/redwood dev api $ /redwood-app/node_modules/.bin/dev-server 17:21:49 api | Listening on 17:21:49 api | Watching /home/dominic/projects/redwood/redwood-app/api 17:21:49 api | 17:21:49 api | Now serving 17:21:49 api | 17:21:49 api | ► 17:21:49 api | ►
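As a hedged variation on the generated handler, and assuming an auth provider is configured so the context described above actually carries a current user, a Function could use it like this:
// ./api/src/functions/user.js (sketch; assumes auth is set up and currentUser is populated)
export const handler = async (event, context) => {
  // currentUser is only present when an auth provider populates it.
  const name = context.currentUser ? context.currentUser.name : 'anonymous'
  return {
    statusCode: 200,
    body: `Hello, ${name}`,
  }
}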
# layout
Generate a layout component.
yarn redwood generate layout <name>
Layouts wrap pages and help you stay DRY.
Usage
See the Layouts section of the tutorial.
Destroying
yarn redwood d layout <name>
Example
Generating a user layout:
~/redwood-app$ yarn redwood generate layout user yarn run v1.22.4 $ /redwood-app/node_modules/.bin/redwood g layout user ✔ Generating layout files... ✔ Writing `./web/src/layouts/UserLayout/UserLayout.test.js`... ✔ Writing `./web/src/layouts/UserLayout/UserLayout.js`... Done in 1.00s.
A layout will just export its children:
// ./web/src/layouts/UserLayout/UserLayout.js const UserLayout = ({ children }) => { return <>{children}</> } export default UserLayout
# page
Generates a page component and updates the routes.
yarn redwood generate page <name> [path]
path can include a route parameter which will be passed to the generated
page. The syntax for that is
/path/to/page/{routeParam}/more/path. You can
also specify the type of the route parameter if needed:
{routeParam:Int}. If
path isn't specified, or if it's just a route parameter, it will be derived
from
name and the route parameter, if specified, will be added to the end.
This also updates
Routes.js in
./web/src.
Destroying
yarn redwood d page <name> [path]
Examples
Generating a home page:
~/redwood-app$ yarn redwood generate page home / yarn run v1.22.4 $ /redwood-app/node_modules/.bin/redwood g page home / ✔ Generating page files... ✔ Writing `./web/src/pages/HomePage/HomePage.test.js`... ✔ Writing `./web/src/pages/HomePage/HomePage.js`... ✔ Updating routes file... Done in 1.02s.
The page returns jsx telling you where to find it:
// ./web/src/pages/HomePage/HomePage.js const HomePage = () => { return ( <div> <h1>HomePage</h1> <p>Find me in ./web/src/pages/HomePage/HomePage.js</p> </div> ) } export default HomePage
And the route is added to
Routes.js:
// ./web/src/Routes.js const Routes = () => { return ( <Router> <Route path="/" page={HomePage} name="home" /> <Route notfound page={NotFoundPage} /> </Router> ) }
Generating a page to show quotes:
~/redwood-app$ yarn redwood generate page quote {id} yarn run v1.22.4 $ /redwood-app/node_modules/.bin/redwood g page quote {id} ✔ Generating page files... ✔ Writing `./web/src/pages/QuotePage/QuotePage.stories.js`... ✔ Writing `./web/src/pages/QuotePage/QuotePage.test.js`... ✔ Writing `./web/src/pages/QuotePage/QuotePage.js`... ✔ Updating routes file... Done in 1.02s.
The generated page will get the route parameter as a prop:
// ./web/src/pages/QuotePage/QuotePage.js import { Link, routes } from '@redwoodjs/router' const QuotePage = ({ id }) => { return ( <> <h1>QuotePage</h1> <p>Find me in "./web/src/pages/QuotePage/QuotePage.js"</p> <p> My default route is named "quote", link to me with ` <Link to={routes.quote({ id: 42 })}>Quote 42</Link>` </p> <p>The parameter passed to me is {id}</p> </> ) } export default QuotePage
And the route is added to
Routes.js, with the route parameter added:
// ./web/src/Routes.js const Routes = () => { return ( <Router> <Route path="/quote/{id}" page={QuotePage} name="quote" /> <Route notfound page={NotFoundPage} /> </Router> ) }
# scaffold
Generate Pages, SDL, and Services files based on a given DB schema Model. Also accepts
<path/model>.
yarn redwood generate scaffold <model>
A scaffold quickly creates a CRUD for a model by generating the following files and corresponding routes:
- sdl
- service
- layout
- pages
- cells
- components
The content of the generated components is different from what you'd get by running them individually.
Usage
See Creating a Post Editor.
You can namespace your scaffolds by providing
<path/model>. The layout, pages, cells, and components will be nested in newly created dir(s). For example, given a model user, running
yarn redwood generate scaffold admin/user will nest the layouts, pages, and components in a newly created
admin directory:
~/redwood-app$ yarn redwood generate scaffold admin/user yarn run v1.22.4 $ /redwood-app/node_modules/.bin/redwood g scaffold admin/user ✔ Generating scaffold files... ✔ Writing `./api/src/graphql/users.sdl.js`... ✔ Writing `./api/src/services/users/users.test.js`... ✔ Writing `./api/src/services/users/users.js`... ✔ Writing `./web/src/scaffold.css`... ✔ Writing `./web/src/layouts/Admin/UsersLayout/UsersLayout.js`... ✔ Writing `./web/src/pages/Admin/EditUserPage/EditUserPage.js`... ✔ Writing `./web/src/pages/Admin/UserPage/UserPage.js`... ✔ Writing `./web/src/pages/Admin/UsersPage/UsersPage.js`... ✔ Writing `./web/src/pages/Admin/NewUserPage/NewUserPage.js`... ✔ Writing `./web/src/components/Admin/EditUserCell/EditUserCell.js`... ✔ Writing `./web/src/components/Admin/User/User.js`... ✔ Writing `./web/src/components/Admin/UserCell/UserCell.js`... ✔ Writing `./web/src/components/Admin/UserForm/UserForm.js`... ✔ Writing `./web/src/components/Admin/Users/Users.js`... ✔ Writing `./web/src/components/Admin/UsersCell/UsersCell.js`... ✔ Writing `./web/src/components/Admin/NewUser/NewUser.js`... ✔ Adding scaffold routes... ✔ Adding scaffold asset imports... Done in 1.21s.
The routes will be nested too:
// ./web/src/Routes.js const Routes = () => { return ( <Router> <Route path="/admin/users/new" page={AdminNewUserPage} /> <Route path="/admin/users/{id:Int}/edit" page={AdminEditUserPage} /> <Route path="/admin/users/{id:Int}" page={AdminUserPage} /> <Route path="/admin/users" page={AdminUsersPage} /> <Route notfound page={NotFoundPage} /> </Router> ) }
Destroying
yarn redwood d scaffold <model>
# sdl
Generate a GraphQL schema and service object.
yarn redwood generate sdl <model>
The sdl will inspect your
schema.prisma and will do its best with relations. The mapping from schema to generated SDL isn't one-to-one yet (and might never be).
Destroying
yarn redwood d sdl <model>
Example
Generating a user sdl:
~/redwood-app$ yarn redwood generate sdl user yarn run v1.22.4 $ /redwood-app/node_modules/.bin/redwood g sdl user ✔ Generating SDL files... ✔ Writing `./api/src/graphql/users.sdl.js`... ✔ Writing `./api/src/services/users/users.test.js`... ✔ Writing `./api/src/services/users/users.js`... Done in 1.04s.
The generated sdl defines a corresponding type, query, and create/update inputs, without defining any mutations. To also get mutations, add the
--crud option.
// ./api/src/graphql/users.sdl.js export const schema = gql` type User { id: Int! email: String! name: String } type Query { users: [User!]! } input CreateUserInput { email: String! name: String } input UpdateUserInput { email: String name: String } `
The services file fulfills the query. If the
--crud option is added, this file will be much more complex.
// ./api/src/services/users/users.js import { db } from 'src/lib/db' export const users = () => { return db.user.findMany() }
For a model with a relation, the field will be listed in the sdl:
// ./api/src/graphql/users.sdl.js export const schema = gql` type User { id: Int! email: String! name: String profile: Profile } type Query { users: [User!]! } input CreateUserInput { email: String! name: String } input UpdateUserInput { email: String name: String } `
And the service will export an object with the relation as a property:
// ./api/src/services/users/users.js import { db } from 'src/lib/db' export const users = () => { return db.user.findMany() } export const User = { profile: (_obj, { root }) => db.user.findUnique({ where: { id: root.id } }).profile(), }
# service
Generate a service component.
yarn redwood generate service <name>
Services are where Redwood puts its business logic. They can be used by your GraphQL API or any other place in your backend code. See How Redwood Works with Data.
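Because a service is just a JavaScript module, it can be imported anywhere on the api side. Here is a small sketch of that idea; the file name is hypothetical, and the users service it imports is the one generated in the example below:
// ./api/src/functions/userCount.js (hypothetical)
import { users } from 'src/services/users'

export const handler = async () => {
  // Reuse the service's query outside of GraphQL.
  const all = await users()
  return {
    statusCode: 200,
    body: JSON.stringify({ count: all.length }),
  }
}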
Destroying
yarn redwood d service <name>
Example
Generating a user service:
~/redwood-app$ yarn redwood generate service user yarn run v1.22.4 $ /redwood-app/node_modules/.bin/redwood g service user ✔ Generating service files... ✔ Writing `./api/src/services/users/users.test.js`... ✔ Writing `./api/src/services/users/users.js`... Done in 1.02s.
The generated service component will export a
findMany query:
// ./api/src/services/users/users.js import { db } from 'src/lib/db' export const users = () => { return db.user.findMany() }
# util
This command has been deprecated. See Setup command.
# info
Print your system environment information.
yarn redwood info
This command is primarily intended for getting information others might need to know to help you debug:
~/redwood-app$ yarn redwood info yarn run v1.22.4 $ /redwood-app/node_modules/.bin/redwood info System: OS: Linux 5.4 Ubuntu 20.04 LTS (Focal Fossa) Shell: 5.0.16 - /usr/bin/bash Binaries: Node: 13.12.0 - /tmp/yarn--1589998865777-0.9683603763419713/node Yarn: 1.22.4 - /tmp/yarn--1589998865777-0.9683603763419713/yarn Browsers: Chrome: 78.0.3904.108 Firefox: 76.0.1 npmPackages: @redwoodjs/core: ^0.7.0-rc.3 => 0.7.0-rc.3 Done in 1.98s.
# lint
Lint your files.
yarn redwood lint
Our ESLint configuration is a mix of ESLint's recommended rules, React's recommended rules, and a bit of our own stylistic flair:
- no semicolons
- comma dangle when multiline
- single quotes
- always use parentheses around arrow functions
- enforced import sorting
# open
Open your project in your browser.
yarn redwood open
# prisma
Run Prisma CLI with experimental features.
yarn redwood prisma
Redwood's
prisma command is a lightweight wrapper around the Prisma CLI. It's the primary way you interact with your database.
What do you mean it's a lightweight wrapper?
By lightweight wrapper, we mean that we're handling some flags under the hood for you. You can use the Prisma CLI directly (
yarn prisma), but letting Redwood act as a proxy (
yarn redwood prisma) saves you a lot of keystrokes. For example, Redwood adds the
--preview-feature and
--schema=api/db/schema.prisma flags automatically.
If you want to know exactly what
yarn redwood prisma <command> runs, which flags it's passing, etc., it's right at the top:
$ yarn redwood prisma migrate dev yarn run v1.22.10 $ ~/redwood-app/node_modules/.bin/redwood prisma migrate dev Running prisma cli: yarn prisma migrate dev --schema "~/redwood-app/api/db/schema.prisma" ...
Since
yarn redwood prisma is just an entry point into all the database commands that the Prisma CLI has to offer, we won't try to provide an exhaustive reference of everything you can do with it here. Instead what we'll do is focus on some of the most common commands; those that you'll be running on a regular basis, and how they fit into Redwood's workflows.
For the complete list of commands, see the Prisma CLI Reference. It's the authority.
Along with the CLI reference, bookmark Prisma's Migration Flows doc—it'll prove to be an invaluable resource for understanding
yarn redwood prisma migrate.
# db
Manage your database schema and lifecycle during development.
yarn redwood prisma db <command>
The
prisma db namespace contains commands that operate directly against the database.
# pull
Pull the schema from an existing database, updating the Prisma schema.
👉 Quick link to the Prisma CLI Reference.
yarn redwood prisma db pull
This command, formerly
introspect, connects to your database and adds Prisma models to your Prisma schema that reflect the current database schema.
Warning: The command will overwrite the current schema.prisma file with the new schema. Any manual changes or customization will be lost. Be sure to back up your current schema.prisma file before running introspect if it contains important modifications.
# push
Push the state from your Prisma schema to your database.
👉 Quick link to the Prisma CLI Reference.
yarn redwood prisma db push
This is your go-to command for prototyping changes to your Prisma schema (
schema.prisma).
Prior to
yarn redwood prisma db push, there wasn't a great way to try out changes to your Prisma schema without creating a migration.
This command fills the void by "pushing" your
schema.prisma file to your database without creating a migration. You don't even have to run
yarn redwood prisma generate afterward—it's all taken care of for you, making it ideal for iterative development.
# seed
Seed your database.
👉 Quick link to the Prisma CLI Reference.
yarn redwood prisma db seed
This command seeds your database by running your project's
seed.js file (in
api/db). Note that having a great seed might not be all that important at the start, but as soon as you start collaborating with others, it becomes vital.
Prisma's got some great resources on this command. You can code along with Ryan Chenkie, and learn how libraries like faker can help you create a large, realistic database fast, especially in tandem with Prisma's createMany. And Prisma's got a great seeding guide that covers both the concepts and the nuts and bolts.
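As a hedged illustration of that idea (the faker calls and the user fields are assumptions; adjust them to your schema and to the faker version you install):
// ./api/db/seed.js (sketch combining faker with Prisma's createMany)
const { PrismaClient } = require('@prisma/client')
const faker = require('faker')
const db = new PrismaClient()

async function main() {
  // Build 100 realistic-looking users, then insert them in a single query.
  const data = Array.from({ length: 100 }, () => ({
    email: faker.internet.email(),
    name: faker.name.findName(),
  }))
  await db.user.createMany({ data })
}

main().finally(() => db.$disconnect())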
# migrate
Update the database schema with migrations.
👉 Quick link to the Prisma Concepts.
yarn redwood prisma migrate <command>
As a database toolkit, Prisma strives to be as holistic as possible. Prisma Migrate lets you use Prisma schema to make changes to your database declaratively, all while keeping things deterministic and fully customizable by generating the migration steps in a simple, familiar format: SQL.
Since migrate generates plain SQL files, you can edit those SQL files before applying the migration using
yarn redwood prisma migrate --create-only. This creates the migration based on the changes in the Prisma schema, but doesn't apply it, giving you the chance to go in and make any modifications you want. Daniel Norman's tour of Prisma Migrate demonstrates this and more to great effect.
Prisma Migrate has separate commands for applying migrations based on whether you're in dev or in production. The Prisma Migration flows goes over the difference between these workflows in more detail.
# dev
Create a migration from changes in Prisma schema, apply it to the database, trigger generators (e.g. Prisma Client).
👉 Quick link to the Prisma CLI Reference.
yarn redwood prisma migrate dev
# deploy
Apply pending migrations to update the database schema in production/staging.
👉 Quick link to the Prisma CLI Reference.
yarn redwood prisma migrate deploy
# redwood-tools (alias rwt)
Redwood's companion CLI development tool. You'll be using this if you're contributing to Redwood. See Contributing in the Redwood repo.
# setup
Initialize project config and install packages
yarn redwood setup <command>
# setup auth
Setup an auth configuration.
yarn redwood setup auth <provider>
You can get authentication out-of-the-box with generators. Right now we support Auth0, Firebase, GoTrue, Magic, and Netlify.
Usage
See Authentication.
# setup custom-web-index
Setup an
index.js file in
web/src so you can customize how your Redwood App mounts to the DOM.
yarn redwood setup custom-web-index
Redwood automatically mounts your
<App /> to the DOM, but if you want to customize how that happens, you can use this setup command to generate a file where you can do so.
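For a sense of what that generated file lets you do, here is a minimal sketch; the exact file Redwood generates may differ, and the 'redwood-app' element id is an assumption:
// ./web/src/index.js (sketch)
import ReactDOM from 'react-dom'

import App from './App'

// Customize mounting here, e.g. wrap <App /> in additional providers before rendering.
ReactDOM.render(<App />, document.getElementById('redwood-app'))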
Usage
See Custom Web Index.
# setup deploy (config)
Setup a deployment configuration.
yarn redwood setup deploy <provider>
Creates provider-specific code and configuration for deployment.
# storybook
Starts Storybook locally
yarn redwood storybook
Storybook is a tool for UI development that allows you to develop your components in isolation, away from all the conflated cruft of your real app.
"Props in, views out! Make it simple to reason about."
RedwoodJS supports Storybook by creating stories when generating cells, components, layouts and pages. You can then use these to describe how to render that UI component with representative data.
# test
Run Jest tests for api and web.
yarn redwood test [side..]
# serve
Run the api server in production, if you are self-hosting or deploying into a serverful environment.
yarn redwood serve [side]
# upgrade
Upgrade all
@redwoodjs packages via an interactive CLI.
yarn redwood upgrade
This command does all the heavy-lifting of upgrading to a new release for you.
Besides upgrading to a new stable release, you can use this command to upgrade to either of our unstable releases,
canary and
rc, or you can upgrade to a specific release version.
A canary release is published to npm every time a PR is merged to the
main branch, and when we're getting close to a new release, we publish release candidates.
Example
Upgrade to the most recent canary:
yarn redwood upgrade -t canary
Upgrade to a specific version:
yarn redwood upgrade -t 0.19.3
Upgrade using packages from PR #1714 (version tag provided in PR comments):
yarn redwood upgrade --pr 1714:0.24.0-38ba18c
|
https://redwoodjs.com/docs/cli-commands.html
|
CC-MAIN-2021-17
|
refinedweb
| 4,706
| 50.33
|
Hello all, I'm just beginning with programming. Started with a bit of PASCAL and now just browsing thru C,C++ and Java. Basically, I'm comparing them through code. I have this piece of code in PASCAL:
Code:
Program A;
var sum,x,i:integer;

Procedure AddX(x:integer);
begin
  sum:=sum+x;
end;

Begin
  i:=0;
  repeat
    read(x);
    AddX(x);
    i:=i+1;
  until i=10;
  write(x);
  write(sum);
End.
I'm converting them to C, C++ and Java.. to understand the differences. Here are my C and C++ samples:
Code:
#include <stdio.h>

int main()
{
    int i,x,sum;
    i=0;
    sum=0;
    do {
        scanf("%d", &x);
        sum=sum+x;
        i=i+1;
    } while (i!=10);
    printf("%d\n", x);
    printf("%d\n", sum);
    system("PAUSE");
    return 0;
}
In C++:
Code:
#include <iostream>
#include <stdlib.h>
using namespace std;

int main()
{
    int i,x,sum;
    i=0;
    sum=0;
    do {
        cin >> x;
        sum=sum+x;
        i++;
    } while (i!=10);
    cout << x;
    cout << sum;
    system("PAUSE");
    return 0;
}
I'd like to find out, from the experts out there, are my codes correct? Console would return the last value input and the sum. However, I'm figuring out, do I need to make a function for AddX?
I haven't started with the Java code yet. (Because I'm just browsing through Java now) But if you guys have an idea how I can do this, I'd appreciate it very much.
|
http://cboard.cprogramming.com/cplusplus-programming/54506-comparing-pascal-c-cplusplus-java-thru-code.html
|
CC-MAIN-2014-52
|
refinedweb
| 247
| 75.2
|
On 6/21/10 1:49 PM, John Peterson wrote:
> On Mon, Jun 21, 2010 at 11:35 AM, Roy Stogner<roystgnr@...> wrote:
>> I'd been kind of hoping that we'd be able to get away with the same in
>> library code (in our .C files, albeit not our .h files), but I guess
>> if even PETSc might have new encroaching identifiers then we've got to
>> be careful - anything that includes a third party header (even
>> indirectly...) shouldn't use any non-explicitly-namespaced symbols.
>
> Right, and what if someone wants to write Sieve code (if this makes
> sense?) within an otherwise libmesh source file, foo.C?
>
> <code>
> #include<petscmesh.h>
> #include "mesh.h" // LibMesh's Mesh, assume properly namespaced
> using namespace LibMesh;
>
> Mesh mesh; // How to access Sieve's Mesh?
> </code>
>
> With the using declaration, they'd have no way to access it, right?
Assuming it is in the global namespace, I believe that one could do:
Mesh libmesh_mesh;
::Mesh sieve_mesh;
--
|
https://sourceforge.net/p/libmesh/mailman/message/25578277/
|
CC-MAIN-2017-17
|
refinedweb
| 162
| 75
|
TDD with Selenium and Castle
- |
-
-
-
-
-
-
-
Read later
My Reading List
Introduction
Test Driven Development samples are mostly based on very simple unit tests. The challenge is often how to use TDD in a larger application. This tutorial demonstrates how to build a web application using test first principles with Selenium and Castle.
Preparation
Let's say that a developer needs to write a method using 'test first' for the following feature for an application:
manage users (add new, delete, edit user details, list all)
In this test case, each user will have a Full Name, a Username, a Password and an Email, all of which are mandatory.
Basic TDD Steps
Following typical TDD steps:
- Write the test
- Make it fail
- Write the code to make the test succeed
- Refactor
- Repeat and start at Step 1
The First Test
The first test that should be written is a test to add a new user. Test Driven Development is a design technique rather than a testing technique, because when writing the test, we will define how the code/page will work, therefore design.
In order to be able to add a new user a simple form like this is sufficient:
For the functional test the developer needs to open up the add page (prepare stage), fill the fields and save (action stage) and to verify if the user was actually saved (verification stage of the project). In order to do that, the developer needs to update the page and add a new list with the users on the left side, where later the code can verify the user exists after clicking save.
Selenium comes into Action
For a task like this developers need a tool that can actually perform these actions on their behalf. Selenium conveniently does this in a browser; it is an excellent open source tool and can be modified to your own needs if necessary. Selenium provides web-based functional tests and also allows the tests to be written as simple HTML tests, providing an interpreter that runs those actions for the developer:
The great news for developers who want to integrate their tests with a continuous integration tool is that they can write the tests in their own preferred language like C#, Java, VB.NET, Ruby, and Python using an extension of Selenium called Selenium RC.
Using Selenium RC, the .NET version of the test would look like:
Step 2, Making the Initial Test Fail
At this stage the developer hasn't written any code, so the test should fail. First start the Selenium RC server (a small Java server that handles the Selenium commands and transmits them to the browser):
>java -jar selenium-server.jar
Running the test fails as expected:
This is a good sign as this means the test fails when it should. Otherwise the test wouldn't really test anything and would be worthless.
Step 3, Write the Code
On step 3 in the TDD steps, the developer needs to write the code. This means when run against the tests, the code should not fail. Next create the user controller, then view and run the test:
Next create an empty add.vm and rerun
Since the error says that it cannot find the elements on the page, we add them in add.vm:
Retest:
.... an error once again since it submits the content on the form to create.aspx and clicking the button on the page has not been implemented yet.
Next add the code to save the data:
Now wait: there is no User class yet, no list action, and no database.
TDD-ing the Layers under the Presentation Layer
To build the code beneath the presentation layer, the developer again works 'test first'. In some cases this isn't strictly necessary, since ActiveRecord is already quite well tested and the functionality is also covered by the functional test, but it is demonstrated here to show how more complicated situations should be handled.
Below is the test, which is not a functional test but an integration test (a unit test that also uses the database):
Test that it fails. In fact it does not even compile, so the first thing to do is create a User class with empty methods to force the code to compile:
Now,
The error indicates the User class is not initialized in ActiveRecord; to fix that, adapt the test thus:
Add the appropriate ActiveRecord attributes and the constructors, then rerun the test. Now a corresponding database table is missing, but that is quickly remedied by adding the following line to the test:
ActiveRecordStarter.CreateSchema();//create the database schema
After running the test the database table was created, but there is still an error:
Finish implementing the Find method in the User class:
public static User Find(long id)
{
return (User) FindByPrimaryKey(typeof(User),id,false);
}
Finally a database test that works!
Top-Down TDD Approach
Usually tests like this one aren't really required, for the two reasons mentioned earlier but it was also done so that the flow is understood in a test first vertical development environment for a n-tier application.
Back to the Functional Test
Now that the User class exists and the database access works, it is time to continue work on the presentation layer.
Implement the list action and view:
public void List()
{
PropertyBag["users"] = User.FindAll();
}
Create a list.vm:
For the view a GridComponent could have been used. Now running the test the developer should see a working UI Test for the first time.
Edit functionality
Next there is a need to add the edit user functionality to the site. The functionality will flow like this: on the list of users page each user will have an edit link, which once clicked upon will transfer the user to editing where the class can modify the user details. When the form is saved the user is sent back to the list. Now write the test:
A User is added to the database so when the list page is opened, there is something to edit. There is still a problem. If the test is run twice the user will be inserted twice in the database. In order to avoid this potential error do the following:
Running all the tests, the edit test now fails:
To rectify this problem add the Edit link in the list.vm:
Edit the action in the controller:
public void Edit(long id)
{
PropertyBag["user"] = User.Find(id);
}
Now edit the view for this action: edit.vm
Since the value will be saved in the update action we'll also have:
public void Update([DataBind("user")] User user)
{
user.Update();
RedirectToAction("list");
}
Success!!
Begin Refactoring
There are a few refactoring opportunities. First, the TestAddNew and TestEdit methods are almost identical:
ALSO:
Running the tests, they should all still work. Now go further into the views, which have a similar problem: add.vm and edit.vm are almost identical. Separate the common part into _form.vm. Running the tests still confirms the fact that the application is passing tests:
For delete, use the same test first principles. For the data validations or any other functionality the user will be able to use, add new tests, then finish by adding the code to make them pass those tests.
Conclusion
This is an example of designing an application with TDD using a method called incremental architecture. The architect and developer need not spend a month on up-front design: the architecture and design are constructed as the code is written and tested. In this manner changes are easy to make, since the code is continuously refactored to improve it, all driven by TDD principles and 'test first'.
Resources
- Selenium - open source web functional testing tool
- Castle Project (MonoRail and ActiveRecord) - open source lightweight ASP.NET/ADO.NET alternative
|
http://www.infoq.com/articles/Tutorial-TDD-Selenium/
|
CC-MAIN-2016-18
|
refinedweb
| 1,332
| 58.52
|
FORMS(3) Library Functions Manual FORMS(3)
NAME
dup_field, free_field, link_field, new_field -- form library
LIBRARY
Curses Form Library (libform, -lform)
SYNOPSIS
#include <form.h>
FIELD *
dup_field(FIELD *field, int frow, int fcol);
int
free_field(FIELD *field);
FIELD *
link_field(FIELD *field, int frow, int fcol);
FIELD *
new_field(int rows, int cols, int frow, int fcol, int nrows, int nbuf);
DESCRIPTION
The dup_field() function duplicates the given field, including any
buffers associated with the field and returns the pointer to the newly
created field. free_field() destroys the field and frees any allocated
resources associated with the field. The function link_field() copies
the given field to a new field at the location frow and fcol but shares
the buffers with the original field. new_field() creates a new field of
size rows by cols at location frow, fcol on the page, the argument nrows
specifies the number of off-screen rows the field has, and the nbuf
parameter specifies the number of extra buffers attached to the field.
There will always be one buffer associated with a field.
RETURN VALUES
On error dup_field() and new_field() will return NULL. The functions
will return one of the following error values:
E_OK The function was successful.
E_BAD_ARGUMENT A bad argument was passed to the function.
E_CONNECTED The field is connected to a form.
SEE ALSO
curses(3), forms(3)
NOTES
The header <form.h> automatically includes both <curses.h> and <eti.h>.
NetBSD 6.1.5 January 1, 2001 NetBSD 6.1.5
|
http://modman.unixdev.net/?sektion=3&page=new_field&manpath=NetBSD-6.1.5
|
CC-MAIN-2017-17
|
refinedweb
| 245
| 54.83
|
Response
Hi all,
I am writing Java code where I would like the desired responses applied to the correct statments made by the user.
I have the following code, which works correctly:
public void chat(String talk)
{
String intro = "Hi";**Local Variable**
if(talk.equals(intro)) {
System.out.println("(NLP): Hello.");
}
However, on the
String intro = "Hi";line I would like to include other words, as well as "Hi". However I do not know how. I have experimented with it; using || , + () etc. and haven't found a solution which works.
Can anyone help?
you mean like
Code:
public void chat(String talk, String user) {
    String intro = "Hi" + " " + user;
    if(talk.equals(intro)) {
        System.out.println("(NLP): Hello.");
    }
}

You never have to change anything you got up in the middle of the night to write. -- Saul Bellow
I mean like the term
intro would have multiple words associated with it. So if a user were to enter one of those multiple words the response would be "Hello".
You could either use regular expressions in Java using the pattern and matcher classes, or you could just create other variables and test for them.
Another way would to be to create an array that would hold each of these strings, then compare through each of these strings with a for loop and check if it's in there.
Or if you wanted it to be more dynamic and extensible, you could create an ArrayList!
Anyways, pseudocode for the array:
Code:
public class test {
    main(String) {
        declare array w/ all strings in it
        get input
        loop through array till you find it
            if found
                print hello
                break
        end loop
    }
}

"To iterate is human, to recurse divine." -L. Peter Deutsch
or use a nifty regex
Code:
public void chat(String talk) {
    if(talk.matches("Hi|Hello|Aloha")) {
        System.out.println("(NLP): Hello.");
    }
}

You never have to change anything you got up in the middle of the night to write. -- Saul Bellow
|
http://www.codingforums.com/java-and-jsp/150585-response.html
|
CC-MAIN-2015-40
|
refinedweb
| 358
| 68.7
|
Important: Jekyll loads the sidebar data when it starts. When running Jekyll
locally if you make changes to the sidebars, you must completely stop and restart Jekyll to
pick up the changes (i.e. Jekyll won’t pick up the changes incrementally if you use the
-i flag). In some cases, you may need to do
jekyll clean before restarting Jekyll to ensure the site is fully rebuilt.
Location
Sidebars are defined in YAML files in the
_data/sidebars directory. They are
Adding to
config.yml
To add a new sidebar, you must add the sidebar’s base file name to the list in
_config.yml:
sidebars: - lb2_sidebar - lb3_sidebar - lb4_sidebar - contrib_sidebar - community_sidebar - es_lb2_sidebar - fr_lb2_sidebar - ja_lb2_sidebar - ko_lb3_sidebar - ko_lb2_sidebar - ru_lb2_sidebar - zh_lb2_sidebar - pt-br_lb2_sidebar
Configuring the sidebar
The docs for each version of LoopBack uses a different sidebar. Additionally, there are sidebars for the Contributing and Community docs.
The top navigation remains the same, because it allows users to navigate across products. But the sidebar navigation adapts to the product.
Because each product uses a different sidebar, you’ll need to set up your sidebars. The
_includes/custom/sidebarconfigs.html file controls which sidebar gets associated with which product.
The
sidebarconfigs.html file uses simple
if elsif logic to set a variable that the
sidebar.html file uses to read the sidebar data file. The code looks like this:
{% if page.sidebar == "home_sidebar" %} {% assign sidebar = site.data.sidebars.home_sidebar %} {% elsif page.sidebar == "lb2_sidebar" %} {% assign sidebar = site.data.sidebars.lb2_sidebar %} {% elsif page.sidebar == "lb3_sidebar" %} {% assign sidebar = site.data.sidebars.lb3_sidebar %} {% elsif page.sidebar == "lb4_sidebar" %} {% assign sidebar = site.data.sidebars.lb4_sidebar %} ... {% else %} {% assign sidebar = site.data.sidebars.home_sidebar %} {% endif %}
In each page’s frontmatter, you must specify the sidebar you want that page to use. Here’s an example of the page frontmatter showing the sidebar property:
--- title: Alerts tags: [formatting] keywords: notes, tips, cautions, warnings, admonitions summary: "You can insert notes, tips, warnings, and important alerts in your content. These notes are stored as shortcodes made available through the linksrefs.hmtl include." sidebar: contrib_sidebar permalink: /doc/en/contrib/alerts ---
The
sidebar: contrib_sidebar refers to the
_data/sidebars/contrib_sidebar.yml file.
If no sidebar assignment is found in the page frontmatter, the default sidebar (specified by the
else statement) will be shown:
site.data.sidebars.home_sidebar.entries.
Note: Note that each level must have at least one topic before the next level starts. You can’t have a second level that contains multiple third levels without having at least one standalone topic in the second level.
For more detail on the sidebar, see Sidebar navigation.
Sidebar syntax
The sidebar data file uses a specific YAML syntax that you must follow. Follow the sample pattern shown in the theme. For example:
title: LoopBack 3.x url: index.html children: - title: 'Installation' url: Installation.html output: 'web, pdf' children: - title: 'Installation troubleshooting' url: Installation-troubleshooting.html output: 'web, pdf' - title: '3.0 Release Notes' url: 3.0-Release-Notes.html output: 'web, pdf' - title: 'Migrating apps to v3' url: Migrating-to-3.0.html output: 'web, pdf' ...
Each item must contain a
title,
url, and
output property. An item (article) with sub-items (children) must have
children: and the sub-items must be indented two spaces under it.
The two outputs available are web and pdf (for example, an entry that should appear only on the website would use
output: web).
The YAML syntax depends on exact spacing, so make sure you follow the pattern shown in the sample sidebars.
For more detail on the sidebar, see Sidebar navigation.
|
https://loopback.io/doc/en/contrib/sidebar_navigation.html
|
CC-MAIN-2018-39
|
refinedweb
| 579
| 51.85
|
Library for building powerful interactive command lines in Python
Project description
prompt_toolkit is a library for building powerful interactive command lines in Python.
Looking for ptpython, the Python REPL?
Are you looking for ptpython, the interactive Python Shell? We moved the ptpython source code to a separate repository. This way we are sure not to pollute the prompt_toolkit library with any ptpython-specific stuff and ptpython can be developed independently. You will now have to install it through:
pip install ptpython
- Mouse support for cursor positioning and scrolling.
- Auto suggestions. (Like fish shell.)
- Multiple input buffers.
- No global state.
- Lightweight, the only dependencies are Pygments, six and wcwidth.
- Code written with love.
- Runs on Linux, OS X, OpenBSD and Windows systems.
Feel free to create tickets for bugs and feature requests, and create pull requests if you have nice patches that you would like to share with others.
About Windows support
prompt_toolkit.
Installation
pip install prompt-toolkit
Getting started
The most simple example of the library would look like this:
from prompt_toolkit.shortcuts import get_input if __name__ == '__main__': answer = get_input('Give me some input: ') print('You said: %s' % answer)
For more complex examples, have a look in the examples directory. All examples are chosen to demonstrate only one thing. Also, don’t be afraid to look at the source code. The implementation of the get_input function could be a good start.
Note: For Python 2, you need to add from __future__ import unicode_literals to the above example. All strings are expected to be unicode strings.
Projects using prompt-toolkit
- ptpython: Python REPL
- ptpdb: Python debugger (pdb replacement)
- pgcli: Postgres client.
- mycli: MySql client.
- pyvim: A Vim clone in pure Python
- wharfee: A Docker command line.
- xonsh: A Python-ish, BASHwards-compatible shell.
- saws: A Supercharged AWS Command Line Interface.
|
https://pypi.org/project/prompt_toolkit/0.51/
|
CC-MAIN-2018-34
|
refinedweb
| 299
| 60.61
|
I really need help on my latest C++ assignment. The program was due a week ago and I'm still beating my head against a wall here. This is supposed to be a VERY SIMPLE linked list implementation. I have rewritten this dang thing countless times in as many different configurations as I can think of but I just can't get it. If someone could just shine a light for me I would be forever in debt.
That being said, after countless attempts I finally decided to use the code here:
to start with. It works just fine without the modifications I made. The only thing I really did was move the bulk of the traversal into the ListNode::add() function. Now it goes through and places the values of the array into curNode->value, but once I move the pointer I can't access those values again. What I end up with is a 1 and a 9 printed in the terminal, it skips all the others. I guess my question is: How do I store the curNode->value values after the add() function handles them? I thought the linked list did this. Do I need another array to store the values? Sorry, I'm so confused...
I know I'm missing something very simple here, but my brain is just so burned out on this assignment I've got tunnel vision and I just can't see the forest through the trees.
The ultimate goal here is to assign doubles to the list, traverse & print the list, and, using the function isMember() (which I have not yet implemented), determine if a particular number is a member of said linked list. Oh, and to add a copy constructor, once I get this base portion complete.
Thanks in advance.:S
#include <iostream>
using namespace std;

struct ListNode
{
    double value;
    ListNode *next;
    void add( double, ListNode* );
    bool isMember( double );
};

void ListNode::add( double x, ListNode *curNode )
{
    curNode->next = new ListNode;   // Creates a ListNode at the end of the list
    curNode = curNode->next;        // Points to that ListNode
    curNode->next = 0;              // Prevents it from going any further
    curNode->value = x;
}

int main()
{
    const int SIZE = 5;
    double testValue[SIZE] = { 1, 3, 5, 7, 9 };
    double* tv = testValue;

    ListNode *head;
    ListNode *curNode;

    head = new ListNode;
    head->next = 0;
    head->value = *tv;

    curNode = head;
    if ( curNode != 0 ) {
        while ( curNode->next != 0 )
            curNode = curNode->next;
    }
    cout << curNode->value << endl;

    for( int i = 1; i < SIZE; i++ ) {
        curNode->add( *( tv + i ), curNode );
    }

    //CURRENTLY I CANNOT MAKE THIS BLOCK WORK CORRECTLY. IT ONLY PRINTS THE FIRST AND LAST VALUES OF THE ARRAY
    //ListNode::add() PERFORMS ALL OPERATIONS CORRECTLY AND I CAN PRINT AS I AM ASSIGNING THE NODE VALUES
    //BUT ONCE I GET PAST THEM I CANT GET BACK THROUGHT THE LIST. IT ONLY LEAVES ME WITH THE MOST RECENT value ENTRY
    //??????????????????????????????????????????????????????????????????????????????????????????????????????????????
    //if ( curNode->value != 0 ) {   //Makes sure there is a place to start
    //    while ( curNode->next != 0 ) {
    //        cout<< curNode->value << endl;
    //        curNode = curNode->next;
    //    }
    //    cout<< curNode->value << endl;;
    //}

    return 0;
}
|
https://www.daniweb.com/programming/software-development/threads/363981/please-help-with-c-linked-list
|
CC-MAIN-2018-05
|
refinedweb
| 504
| 69.11
|
While there are tons of libraries that make it easy to add a data table to a Vue app, Kendo UI for Vue makes it a lot easier to render data and style. Read along as we build a real-time editable data table with Kendo UI for Vue and Hamoni Sync.
Building responsive Vue apps just got better and faster with Kendo UI for Vue. Kendo UI for Vue is a library with a set of UI components that you can use in your Vue applications to make them beautiful, responsive and accessible. One of the components that comes with Kendo UI for Vue is the Grid component. The Grid is used to display data in a tabular format. It not only allows you to display data in a tabular form, but it also provides the features highlighted below:
After all is said and done, I’ll show how to use the Grid component by building a small app that allows you to add and edit data in a Grid in real time. We will be using Hamoni Sync for real-time synchronization, and Vue CLI to bootstrap the project. Here’s a peek at what you will build:
Let’s get started with creating a Vue project. Open the command line and run
vue create kendo-realtime-vue-grid && cd kendo-realtime-vue-grid command, select the default option and press Enter. In a short while, a Vue project will be bootstrapped by the Vue CLI. With the project ready, we’ll go ahead and install dependencies needed for the project. Run the following npm command to install dependencies for Kendo Vue and Hamoni Sync.
npm install --save @progress/kendo-theme-material @progress/kendo-vue-grid @progress/kendo-vue-intl vue-class-component hamoni-sync
We installed the Material design theme for Kendo UI, the Kendo Vue Grid package, and Hamoni Sync.
Let’s get started with some code. Open App.vue and delete the style section. Update the template section with the following snippet:
<template>
  <div>
    <Grid
      ref="grid"
      :data-items="gridData"
      :columns="columns"
      edit-field="inEdit"
      @itemchange="itemChange"
      @edit="edit"
      @remove="remove"
      @save="save"
      @cancel="cancel"
    >
      <GridToolbar>
        <button title="Add new" class="k-button k-primary" @click="insert">
          Add new
        </button>
        <button v-if="hasItemsInEdit" class="k-button" @click="cancelChanges">
          Cancel current changes
        </button>
      </GridToolbar>
    </Grid>
  </div>
</template>
We used a
Grid component, which represents the data table, and passed it some props. The
data-items props holds the data for the grid,
columns set the properties of the columns that will be used, and
edit-field is used to determine if the current record is in edit mode. We chose to use
inEdit as the field name to be used to determine which record is being edited. We will create a computed method called
hasItemsInEdit that returns Boolean and is used in Kendo’s
GridToolbar component. If it returns true, we show a button that allows canceling the edit operation; otherwise, it shows a button to trigger adding new data. The edit event is fired when the user triggers an edit operation, the remove event for removing records, and the
itemchange event for when data changes in edit mode.
In the script section, add the following import statements.
import Vue from "vue"; import "@progress/kendo-theme-material/dist/all.css"; import { Grid, GridToolbar } from "@progress/kendo-vue-grid"; import Hamoni from "hamoni-sync"; import DropDownCell from "./components/DropDownCell.vue"; import CommandCell from "./components/CommandCell.vue"; Vue.component("kendo-dropdown-cell", DropDownCell); Vue.component("kendo-command-cell", CommandCell); const primitiveName = "kendo-grid";
In the code above we have the
Grid and
GridToolbar from Kendo Vue Grid, and also Hamoni (we’ll get to that later). The
DropDownCell and
CommandCell components will be added later. One of the columns will need a dropdown when it’s in edit mode, so the
DropDownCell will be used to render that cell. CommandCell will be used to display buttons to trigger edit or cancel changes while in edit mode.
Next, update the exported object to look like the following:
export default { name: "app", components: { Grid, GridToolbar }, data: function() { return { columns: [ { field: "ProductID", editable: false, title: "ID", width: "50px" }, { field: "ProductName", title: "Name" }, { field: "FirstOrderedOn", editor: "date", title: "First Ordered", format: "{0:d}" }, { field: "UnitsInStock", title: "Units", width: "150px", editor: "numeric" }, { field: "Discontinued", title: "Discontinued", cell: "kendo-dropdown-cell" }, { cell: "kendo-command-cell", width: "180px" } ], gridData: [] }; }, mounted: async function() {); await hamoni.connect(); try { const primitive = await hamoni.get(primitiveName); this.listPrimitive = primitive; this.gridData = [...primitive.getAll()]; this.subscribeToUpdate(); } catch (error) { if (error === "Error getting state from server") this.initialise(hamoni); else alert(error); } }, computed: { hasItemsInEdit() { return this.gridData.filter(p => p.inEdit).length > 0; } } };
In the code above, we have declared data for the columns and set
gridData to an empty array. Our actual data will come from Hamoni Sync, which we set up from the mounted lifecycle hook. Hamoni Sync is a service that allows you to store and synchronize data/application state in real time. This will allow us to store data for the data table and get a real-time update when a record changes. You will have to replace YOUR_APP_ID and YOUR_ACCOUNT_ID in the mounted function with your Hamoni Sync’s account details. Follow these steps to register for an account and create an application on the Hamoni server.
Hamoni Sync has what is called Sync primitives as a way to store and modify state. There are three kinds of Sync primitives: Value, Object, and List primitives. We’re going to use List primitive because it provides an API for us to store and modify data that needs to be stored in an array-like manner. You can read more about sync primitives from the docs.
In the last code you added, there’s a line that calls
hamoni.connect() to connect to the server once you’ve gotten a token. While we had the code to retrieve the token in there, it is recommended to have it behind a server you control and only return a token from an endpoint you control. This is to avoid giving away your account ID to the public. To get or store data, you first need to get an object that represents the sync primitive you want to use. This is why we called
hamoni.get(), passing it the name of the state we want to access. If it exists, we get an object with which we can manipulate state on Hamoni.
The first time we’ll use the app, the sync primitive will not exist; this is why in the catch block we call
initialise() to create a sync primitive with a default data. If it exists, we call
primitive.getAll() to get data and assign it to
gridData so the grid gets data to display. Later on we will add implementation for
subscribeToUpdate(), which will be used to subscribe to data update events from Hamoni Sync.
We’ve referenced methods so far from the template and code in the mounted hook. Add the code below after the computed property.
methods: { itemChange: function(e) { Vue.set(e.dataItem, e.field, e.value); }, insert() { const dataItem = { inEdit: true, Discontinued: false }; this.gridData.push(dataItem); }, edit: function(e) { Vue.set(e.dataItem, "inEdit", true); }, save: function(e) { if (!e.dataItem.ProductID) { const product = { ...e.dataItem }; delete product.inEdit; product.ProductID = this.generateID(); this.gridData.pop(); this.listPrimitive.add(product); } else { const product = { ...e.dataItem }; delete product.inEdit; const index = this.gridData.findIndex( p => p.ProductID === product.ProductID ); this.listPrimitive.update(index, product); } }, generateID() { let id = 1; this.gridData.forEach(p => { if (p.ProductID) id = Math.max(p.ProductID + 1, id); }); return id; }, update(data, item, remove) { let updated; let index = data.findIndex( p => JSON.stringify({ ...p }) === JSON.stringify(item) || (item.ProductID && p.ProductID === item.ProductID) ); if (index >= 0) { updated = Object.assign({}, item); data[index] = updated; } if (remove) { data = data.splice(index, 1); } return data[index]; }, cancel(e) { if (e.dataItem.ProductID) { Vue.set(e.dataItem, "inEdit", undefined); } else { this.update(this.gridData, e.dataItem, true); } }, remove(e) { e.dataItem.inEdit = undefined; const index = this.gridData.findIndex( p => JSON.stringify({ ...p }) === JSON.stringify(e.dataItem) || (e.dataItem.ProductID && p.ProductID === e.dataItem.ProductID) ); this.listPrimitive.remove(index); }, cancelChanges(e) { let dataItems = this.gridData.filter(p => p.inEdit === true); for (let i = 0; i < dataItems.length; i++) { this.update(this.gridData, dataItems[i], true); } }, initialise(hamoni) { hamoni .createList(primitiveName, [ { ProductID: 1, ProductName: "Chai", UnitsInStock: 39, Discontinued: false, FirstOrderedOn: new Date(1996, 8, 20) } ]) .then(primitive => { this.listPrimitive = primitive; this.gridData = this.listPrimitive.getAll(); this.subscribeToUpdate(); }) .catch(alert); }, subscribeToUpdate() { this.listPrimitive.onItemAdded(item => { this.gridData.push(item.value); }); this.listPrimitive.onItemUpdated(item => { //update the item at item.index this.gridData.splice(item.index, 1, item.value); }); this.listPrimitive.onItemRemoved(item => { //remove the item at item.index this.gridData.splice(item.index, 1); }); } }
In the
initialise() method, we call
hamoni.createList() to create a sync primitive to store data. When this succeeds, we update the grid data and then subscribe to change events using
subscribeToUpdate(). This method listens for changes in the sync primitive when data is added, updated, or removed.
The rest of the methods are used by Kendo UI’s Vue Grid. The insert method triggers insert and creates a new object with property
inEdit set to true and the grid component notices this and enters edit mode. The
edit() method does a similar thing and sets
inEdit to true for the current selected row data. In the
remove() method, we remove data from Hamoni Sync by calling
this.listPrimitive.remove(index), passing it the index of data to delete. The
save() method handles saving new or existing data. To add new record, we call
this.listPrimitive.add(), passing it an object to add, and
this.listPrimitive.update(index, product) to update an existing product at its index.
All looking good so far. The next thing for us is to create the
DropDownCell and
CommandCell component we referenced earlier. In the components folder, add a new file named DropDownCell.vue.
<template>
  <td v-if="!dataItem.inEdit">{{ dataItem[field] }}</td>
  <td v-else>
    <select class="k-textbox" @change="change">
      <option>True</option>
      <option>False</option>
    </select>
  </td>
</template>
<script>
export default {
  name: "DropDownCell",
  props: {
    field: String,
    dataItem: Object,
    format: String,
    className: String,
    columnIndex: Number,
    columnsCount: Number,
    rowType: String,
    level: Number,
    expanded: Boolean,
    editor: String
  },
  methods: {
    change(e) {
      this.$emit("change", e, e.target.value);
    }
  }
};
</script>
That code will render a dropdown for a column if it’s in edit mode; otherwise, it displays the text for a cell.
Add a new file in the same folder called CommandCell.vue.
<template>
  <td v-if="!dataItem.inEdit">
    <button class="k-primary k-button k-grid-edit-command" @click="editHandler">Edit</button>
    <button class="k-button k-grid-remove-command" @click="removeHandler">Remove</button>
  </td>
  <td v-else>
    <button class="k-button k-grid-save-command" @click="addUpdateHandler">{{this.dataItem.ProductID ? 'Update' : 'Add'}}</button>
    <button class="k-button k-grid-cancel-command" @click="cancelDiscardHandler">{{this.dataItem.ProductID ? 'Cancel' : 'Discard'}}</button>
  </td>
</template>
<script>
export default {
  name: "CommandCell",
  props: {
    field: String,
    dataItem: Object,
    format: String,
    className: String,
    columnIndex: Number,
    columnsCount: Number,
    rowType: String,
    level: Number,
    expanded: Boolean,
    editor: String
  },
  methods: {
    onClick: function(e) {
      this.$emit("change", e, this.dataItem, this.expanded);
    },
    editHandler: function() {
      this.$emit("edit", this.dataItem);
    },
    removeHandler: function() {
      this.$emit("remove", this.dataItem);
    },
    addUpdateHandler: function() {
      this.$emit("save", this.dataItem);
    },
    cancelDiscardHandler: function() {
      this.$emit("cancel", this.dataItem);
    }
  }
};
</script>
The code above will render buttons in a cell based on if it is in edit mode or not.
Now we’re all ready to try out our code. Open the terminal and run
npm run serve.
Isn’t it awesome to build a real-time editable data table so easily and in under 10 minutes like we just did? Kendo UI for Vue allows you to quickly build high-quality, responsive apps. It includes all the components you’ll need, from grids and charts to schedulers and dials. I’ve shown you how to use the Grid component and we only used the edit functionality. There are more features available with it than what we’ve covered. Check out the documentation to learn more about other possibilities with the Grid component from Kendo UI for Vue.
For the real-time data we used Hamoni Sync, a service that allows you to store and synchronize data/application state in real time. This lets you store the data for the grid and receive a real-time update the moment a record changes.
You can download or clone the project with source code on GitHub.
USB Rubber Ducky Toolkit
Project description
Duck
Library
The toolkit can also be imported as a library.
from ducktoolkit import encoder duck_text = 'STRING Hello' language = 'gb' duck_bin = encoder.encode_script(duck_text, language)
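The encoded payload is what ends up on the Ducky as inject.bin. Assuming encode_script returns the raw bytes of the payload (check the return type in your installed version), writing it out would look something like this:

from ducktoolkit import encoder

duck_text = 'STRING Hello'
duck_bin = encoder.encode_script(duck_text, 'gb')

# write the encoded payload to the file the USB Rubber Ducky loads at boot
with open('inject.bin', 'wb') as f:
    f.write(duck_bin)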
Limitations
The encoder can only deal with certain Command keys and key combinations. Please see the project documentation for details on supported commands.
The decoder is a best-effort decoder. It will attempt to restore all command keys and strings, but it's a lot harder going backwards. You will NOT be able to generate a valid duck script from an inject.bin.
ToDo
- Support more keyboard layouts / languages.
- Improve the decoder.
- Pip Installation
"To Socrateaser"
Thank You for your prompt response. I am not sure that I fully understand the answer, though.
Situation:
Employer - located & registered in Pennsylvania, but clients may be located outside of Pennsylvania
Employee - located in Maine & working from a home office
Work Performed - Work by employee will be ultimately administrative in nature and not client facing to local clients unless I would be unavailable to be there. All work will be performed in the State of Maine, though it may not be for revenue generated in Maine.
Do I need to register with the State of Maine to pay employment taxes ONLY? Are there any changes to how I file my tax returns because of having out of state employees?
Do I need to register with the State of Maine to pay employment taxes ONLY?
A: From the way you describe the work being performed, you would only be obligated for Maine employment/payroll taxes (unemployment, workers comp, Maine state income tax withholding), because although you could be considered to be engaged in business in Maine, you are not generating any revenue from resources located in Maine.
Of course, you could get audited, and the tax authorities may challenge your assertions about the conduct of your business activities. But, based on how you've described things, you would only be liable for employment taxes and withholding for the employee located in Maine.
Are there any changes to how I file my tax returns because of having out of state employees?
A: I don't believe that, under your proposed scenario, you would have any filing obligations in Maine, because you are a single-member LLC. You would only owe taxes on your income generated from Maine, in which case you would have to file a Maine nonresident personal income tax return. Your payroll costs would all be deducted on your PA state tax return (and on your federal Form 1040, Schedule C).
Note: there are outsourcing companies throughout the USA that handle administrative employees and the related withholding issues, so that the employer only need cut a check to the outsourcing firm, and the firm handles the employee financial relationship. You may want to google around for "outsourcing" and see if it would be more cost effective to let some other business handle the employee issues, rather than spend time trying to handle payroll matters yourself. Payroll wage and hour issues can be very complicated, are prone to changes in law on a frequent basis, and the fines for messing up are onerous.
Hope this helps.
Those few CSS rules will immediately make our pages look a bit smarter, but I also want to add some very basic breadcrumbs so users can navigate back to the home page more easily.
There are several different ways of doing this, not least using the marvellous React-Breadcrumbs component. But for the sake of simplicity – and also to show you a little bit more React Router magic – we're going to take the most basic approach imaginable: we're going to have each page print out its breadcrumbs.
In order to have these breadcrumb links work correctly, you're going to need to ensure this
import line is present in List.js, Detail.js, and User.js:
src/pages/List.js, src/pages/Detail.js, src/pages/User.js
import { IndexLink, Link } from 'react-router';
You've seen the
<Link> component before, but now we're adding
<IndexLink> into the mix. You'll see why in just a moment!
We're going to start by adding breadcrumbs to the Detail component. Modify the last part of its
render() method so that the breadcrumb line appears above its existing output, as sketched below.
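Exactly what surrounds it depends on how your Detail.js looks from earlier chapters, so treat this as a sketch:

render() {
  return (<div>
    <p>You are here: <IndexLink to="/" activeClassName="active">Home</IndexLink> > {this.props.params.repo}</p>
    {/* ...the rest of your existing Detail output stays exactly as it was... */}
  </div>);
}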
Only the "You are here" line is new, but immediately you'll see I'm using that new
<IndexLink> component. If you save your page and navigate to a repository you should see something like "You are here: Home > react" in the breadcrumb bar, and the "Home" text is a link back to the home page. How come it's an
<IndexLink> rather than a regular
<Link>, then?
Well, to find out, just try changing
<IndexLink to="/" activeClassName="active">Home</IndexLink> to
<Link to="/" activeClassName="active">Home</Link> and see for yourself what happens. Tried it? Yup: the link goes black rather than blue.
This is the "little bit more React Router magic" I mentioned earlier: React Router knows which route is active, and will automatically adjust any
<Link> components it finds so that active links automatically have a CSS class of your choosing attached to them.
To see how this works you need to look at this piece of CSS we used in style.css a few minutes ago:
dist/style.css
a.active { color: black; }
And now look at our breadcrumbs code again:
src/pages/Detail.js
You are here: <IndexLink to="/" activeClassName="active">Home</IndexLink> > {this.props.params.repo}
So, that CSS specifies a style for
<a> elements that have the class name
active. Then the
<IndexLink> component has an
activeClassName attribute set to
active. This means that when React detects this link is currently being viewed, it will automatically apply the
active class to the link.
But there's a problem: all our URLs start with / because it's right there at the base of our routes. When configuring our routes we created a special
<IndexRoute> to handle this situation, but a regular
<Link> component doesn't take that into account. If you want to say "consider / active only when we're on the List page", you need to use
<IndexLink> to match the link to the
<IndexRoute> we defined.
The simple rule is this: if you're pointing to the index route of your site, you need to use an index link.
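For reference, that distinction comes from the route configuration. Here is a sketch of the kind of setup this series uses; the exact paths and the wrapper component name are assumptions:

<Router history={browserHistory}>
  <Route path="/" component={App}>
    <IndexRoute component={List} />
    <Route path="detail/:repo" component={Detail} />
    <Route path="user/:user" component={User} />
  </Route>
</Router>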
Now that you know the difference between
<Link> and
<IndexLink> we just need to add breadcrumbs to the List and User components.
The List component already has a message saying "Please choose a repository from the list below", so all you need to do is add the breadcrumbs before that:
src/pages/List.js
<p>You are here: <IndexLink to="/" activeClassName="active">Home</IndexLink></p>
The User component is a little more difficult because its root JSX element is
<ul>. We need to wrap that in a
<div> so that we can include the breadcrumbs in its output. For the avoidance of doubt, the new
render() method should look like this:
src/pages/User.js
render() { return (<div> <p>You are here: <IndexLink to="/" activeClassName="active">Home</IndexLink> > {this.props.params.user}</p> <ul> {this.state.events.map((event, index) => { const eventType = event.type; const repoName = event.repo.name; const creationDate = event.created_at; return (<li key={index}><strong>{repoName}</strong>: {eventType} at {creationDate}. </li>); })} </ul> </div>); }
That's it – breadcrumbs all!
from __future__ import absolute_import, division, print_function import tensorflow as tf from tensorflow import keras import numpy as np print(tf.__version__)
1.13.0-rc2
Download to your machine (or uses a cached copy if you've already downloaded it):
imdb = keras.datasets.imdb

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

print("Training entries: {}, labels: {}".format(len(train_data), len(train_labels)))
Training entries: 25000, labels: 25000
The text of the reviews has been converted to integers, where each integer represents a specific word in a dictionary. Reviews also vary in length; here are the lengths of the first two reviews:

len(train_data[0]), len(train_data[1])
(218, 189)
Prepare the data
The reviews—the arrays of integers—must be converted to tensors before fed into the neural network. This conversion can be done a couple of ways:
Convert the arrays into vectors of 0s and 1s indicating word occurrence, similar to a one-hot encoding. For example, the sequence [3, 5] would become a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. Then, make this the first layer in our network—a Dense layer that can handle floating-point vector data.

The model is compiled with loss='binary_crossentropy' and metrics=['acc'], then trained for 40 epochs with part of the data held out for validation:

WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.cast instead. Epoch 1/40 15000/15000 [==============================] - 1s 52us/sample - loss: 0.6919 - acc: 0.6323 - val_loss: 0.6898 - val_acc: 0.7176 Epoch 2/40 15000/15000 [==============================] - 1s 40us/sample - loss: 0.6859 - acc: 0.7338 - val_loss: 0.6814 - val_acc: 0.7371 Epoch 3/40 15000/15000 [==============================] - 1s 39us/sample - loss: 0.6717 - acc: 0.7485 - val_loss: 0.6628 - val_acc: 0.7591 Epoch 4/40 15000/15000 [==============================] - 1s 41us/sample - loss: 0.6456 - acc: 0.7573 - val_loss: 0.6327 - val_acc: 0.7644 Epoch 5/40 15000/15000 [==============================] - 1s 43us/sample - loss: 0.6071 - acc: 0.7943 - val_loss: 0.5928 - val_acc: 0.7916 Epoch 6/40 15000/15000 [==============================] - 1s 40us/sample - loss: 0.5592 - acc: 0.8161 - val_loss: 0.5476 - val_acc: 0.8068 Epoch 7/40 15000/15000 [==============================] - 1s 42us/sample - loss: 0.5076 - acc: 0.8329 - val_loss: 0.5001 - val_acc: 0.8269 Epoch 8/40 15000/15000 [==============================] - 1s 40us/sample - loss: 0.4580 - acc: 0.8527 - val_loss: 0.4582 - val_acc: 0.8391 Epoch 9/40 15000/15000 [==============================] - 1s 41us/sample - loss: 0.4137 - acc: 0.8674 - val_loss: 0.4223 - val_acc: 0.8492 Epoch 10/40 15000/15000 [==============================] - 1s 40us/sample - loss: 0.3759 - acc: 0.8778 - val_loss: 0.3938 - val_acc: 0.8558 Epoch 11/40 15000/15000 [==============================] - 1s 41us/sample - loss: 0.3446 - acc: 0.8867 - val_loss: 0.3707 - val_acc: 0.8628 Epoch 12/40 15000/15000 [==============================] - 1s 41us/sample - loss: 0.3185 - acc: 0.8937 - val_loss: 0.3534 - val_acc: 0.8662 Epoch 13/40 15000/15000 [==============================] - 1s 41us/sample - loss: 0.2972 - acc: 0.9005 - val_loss: 0.3383 - val_acc: 0.8727 Epoch 14/40 15000/15000 [==============================] - 1s 40us/sample - loss: 0.2779 - acc: 0.9055 - val_loss: 0.3271 - val_acc: 0.8758 Epoch 15/40 15000/15000 [==============================] - 1s 39us/sample - loss: 0.2614 - acc: 0.9105 - val_loss: 0.3177 - val_acc: 0.8780 Epoch 16/40 15000/15000 [==============================] - 1s 41us/sample - loss: 0.2464 - acc: 0.9159 - val_loss: 0.3100 - val_acc: 0.8771 Epoch 17/40 15000/15000 [==============================] - 1s 39us/sample - loss: 0.2325 - acc: 0.9209 - val_loss: 0.3037 - val_acc: 0.8807 Epoch 18/40 15000/15000 [==============================] - 1s 41us/sample - loss: 0.2201 - acc: 0.9246 - val_loss: 0.2988 - val_acc: 0.8823 Epoch 19/40 15000/15000 [==============================] - 1s 42us/sample - loss: 0.2090 - acc: 0.9262 - val_loss: 0.2944 - val_acc: 0.8839 Epoch 20/40 15000/15000 [==============================] - 1s 43us/sample - loss: 0.1990 - acc: 0.9317 - val_loss: 0.2917 - val_acc: 0.8836 Epoch 21/40 15000/15000 [==============================] - 1s 39us/sample - loss: 0.1886 - acc: 0.9376 - val_loss: 0.2893 - val_acc: 0.8851 Epoch 22/40 15000/15000
[==============================] - 1s 42us/sample - loss: 0.1800 - acc: 0.9405 - val_loss: 0.2876 - val_acc: 0.8855 Epoch 23/40 15000/15000 [==============================] - 1s 41us/sample - loss: 0.1715 - acc: 0.9450 - val_loss: 0.2876 - val_acc: 0.8849 Epoch 24/40 15000/15000 [==============================] - 1s 42us/sample - loss: 0.1640 - acc: 0.9474 - val_loss: 0.2868 - val_acc: 0.8858 Epoch 25/40 15000/15000 [==============================] - 1s 40us/sample - loss: 0.1564 - acc: 0.9510 - val_loss: 0.2861 - val_acc: 0.8858 Epoch 26/40 15000/15000 [==============================] - 1s 42us/sample - loss: 0.1496 - acc: 0.9532 - val_loss: 0.2875 - val_acc: 0.8850 Epoch 27/40 15000/15000 [==============================] - 1s 41us/sample - loss: 0.1432 - acc: 0.9557 - val_loss: 0.2877 - val_acc: 0.8855 Epoch 28/40 15000/15000 [==============================] - 1s 41us/sample - loss: 0.1371 - acc: 0.9587 - val_loss: 0.2894 - val_acc: 0.8851 Epoch 29/40 15000/15000 [==============================] - 1s 40us/sample - loss: 0.1317 - acc: 0.9615 - val_loss: 0.2916 - val_acc: 0.8852 Epoch 30/40 15000/15000 [==============================] - 1s 42us/sample - loss: 0.1260 - acc: 0.9633 - val_loss: 0.2917 - val_acc: 0.8864 Epoch 31/40 15000/15000 [==============================] - 1s 41us/sample - loss: 0.1202 - acc: 0.9659 - val_loss: 0.2936 - val_acc: 0.8857 Epoch 32/40 15000/15000 [==============================] - 1s 40us/sample - loss: 0.1152 - acc: 0.9683 - val_loss: 0.2958 - val_acc: 0.8850 Epoch 33/40 15000/15000 [==============================] - 1s 40us/sample - loss: 0.1104 - acc: 0.9696 - val_loss: 0.2989 - val_acc: 0.8842 Epoch 34/40 15000/15000 [==============================] - 1s 40us/sample - loss: 0.1060 - acc: 0.9711 - val_loss: 0.3021 - val_acc: 0.8840 Epoch 35/40 15000/15000 [==============================] - 1s 38us/sample - loss: 0.1017 - acc: 0.9719 - val_loss: 0.3055 - val_acc: 0.8841 Epoch 36/40 15000/15000 [==============================] - 1s 38us/sample - loss: 0.0976 - acc: 0.9743 - val_loss: 0.3078 - val_acc: 0.8836 Epoch 37/40 15000/15000 [==============================] - 1s 41us/sample - loss: 0.0933 - acc: 0.9753 - val_loss: 0.3112 - val_acc: 0.8826 Epoch 38/40 15000/15000 [==============================] - 1s 41us/sample - loss: 0.0894 - acc: 0.9771 - val_loss: 0.3158 - val_acc: 0.8819 Epoch 39/40 15000/15000 [==============================] - 1s 41us/sample - loss: 0.0863 - acc: 0.9784 - val_loss: 0.3206 - val_acc: 0.8805 Epoch 40/40 15000/15000 [==============================] - 1s 41us/sample - loss: 0.0821 - acc: 0.9804 - val_loss: 0.3243 - val_acc: 0.8811
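The epoch-by-epoch numbers above come from a small embedding-based classifier. As a sketch (the layer sizes, the adam optimizer, and the partial_x_train/x_val variable names follow the standard Keras IMDB example and are assumptions here), the model-building and training code looks roughly like this:

vocab_size = 10000

model = keras.Sequential([
    keras.layers.Embedding(vocab_size, 16),      # learn a 16-dimensional vector per word index
    keras.layers.GlobalAveragePooling1D(),       # average the word vectors to one vector per review
    keras.layers.Dense(16, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid')  # probability that the review is positive
])

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['acc'])

history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=40,
                    batch_size=512,
                    validation_data=(x_val, y_val),
                    verbose=1)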
Evaluate the model
And let's see how the model performs. Two values will be returned. Loss (a number which represents our error, lower values are better), and accuracy.
results = model.evaluate(test_data, test_labels) print(results)
25000/25000 [==============================] - 1s 44us/sample - loss: 0.3471 - acc: 0.8687
[0.3471100524044037, 0.86872]

Training also returned a history object that records what happened during each epoch; history.history.keys() gives dict_keys(['loss', 'acc', 'val_loss', 'val_acc']).
There are four entries: one for each monitored metric during training and validation. We can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:
import matplotlib.pyplot as plt
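The plotting cell uses the per-epoch values recorded in history.history; a sketch using the keys listed above:

history_dict = history.history
acc = history_dict['acc']
val_acc = history_dict['val_acc']
loss = history_dict['loss']
val_loss = history_dict['val_loss']

epochs = range(1, len(acc) + 1)

# "bo" is for blue dots, "b" is for a solid blue line
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()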
<Figure size 640x480 with 1 Axes>

Notice that the validation loss stops improving after about twenty epochs. Because the model starts to overfit beyond that point, we could prevent overfitting by simply stopping the training after twenty or so epochs. Later, you'll see how to do this automatically with a callback.
One of the things you often see in legacy systems is repetitive little blocks of code. Take this example from Suteki Shop, here we’re inserting some static data into the database to setup the initial set of roles:
static void InsertRoles1(ISession session) { var admin = new Role { Id = 1, Name = "Administrator" }; session.Save(admin); var orderProcessor = new Role { Id = 2, Name = "Order Processor" }; session.Save(orderProcessor); var customer = new Role { Id = 3, Name = "Customer" }; session.Save(customer); var guest = new Role { Id = 4, Name = "Guest" }; session.Save(guest); }
The only thing that varies in these four code blocks is the id and the name. One way of factoring this out would be to write a little nested private class:
private class RoleInserter { private readonly ISession session; public RoleInserter(ISession session) { this.session = session; } public void Insert(int id, string name) { var role = new Role { Id = id, Name = name }; session.Save(role); } }
So our InsertRoles method now looks like this:
static void InsertRoles1(ISession session) { var roleInserter = new RoleInserter(session); roleInserter.Insert(1, "Administrator"); roleInserter.Insert(2, "Order Processor"); roleInserter.Insert(3, "Customer"); roleInserter.Insert(4, "Guest"); }
But now we’ve actually got more code in total than before. Also it’s one of the unfortunate things about object oriented programming that simple utility functions like this are awkward. The silly name of the class ‘RoleInserter’ is a dead giveaway. Steve Yegge calls it The Kingdom of Nouns. We don’t really need a whole class, we just need a single function.
Let’s use a closure to replace RoleInserter with a function:
private static Action<int, string> GetRoleInserter(ISession session) { return (id, name) => { var role = new Role { Id = id, Name = name }; session.Save(role); }; }
Now our InsertRoles method looks like this:
static void InsertRoles1(ISession session) { var insertRole = GetRoleInserter(session); insertRole(1, "Administrator"); insertRole(2, "Order Processor"); insertRole(3, "Customer"); insertRole(4, "Guest"); }
There’s little need to factor out the GetRoleInserter, it’s simpler just to write it in line:
static void InsertRoles(ISession session) { Action<int, string> insertRole = (id, name) => { var role = new Role {Id = id, Name = name}; session.Save(role); }; insertRole(1, "Administrator"); insertRole(2, "Order Processor"); insertRole(3, "Customer"); insertRole(4, "Guest"); }
That’s much nicer. Using lambdas like this can really clean up little repetitive code blocks.
10 comments:
While I'm a big fan of Action and Func code, an extension might be nicer. It would remove the need to specify the ID and could be easily reused in other tests (as well as being understood by Action/Func fearful devs).
Something like below (typed in the comment, may not compile :)...
static void InsertRoles(ISession session)
{
session.SaveNamedRoles("Administrator", "Order Processor", "Customer", "Guest");
}
public static class TestSessionExtensions
{
public static void InsertNamedRoles(this Isession session, params string[] roles)
{
//TODO loop roles and save each (just increment id)
}
}
Obviously, the extension method InsertNamedRoles should be called SaveNamedRoles.
Hi Ben,
Yes, if reuse is the aim then the inline closure is not the way to go.
As for these Action/Func fearful devs, do they actually exist? I've worked with people who hadn't seen/used lambdas before, but they got up to speed pretty fast.
Yes, they certainly exist. IME there is no problem when someone (e.g. you or me) is around to help them get them up to speed. However, I have seen problems on larger (50+), co-located projects where skill levels are, er, varied.
Don't get me wrong. I have a rep on my current project for my (over)love of lambda :) Personally, I think everyone should know and love Action & Func. I am using them in place of strategy classes etc a lot these days.
The example here just looks a bit awkward to me. I think it's the need to duplicate the Action call and manually increment the IDs. That still looks like duplication to me. The extension method looks DRY-er regardless of reuse requirements.
If someone doesn't get lambda expressions, they're likely the type that abuses extension methods if they use them at all. I think the inline lambda is pretty clean, it isolates the function to where its used.
If this is something thats going to be reused broadly, then I think I'd prefer the insert roles class as opposed to an extension method, as then I have options to replace the dependency when testing.
Its really just preferences though, any of them are reasonable.
Ben,
Maybe it's not a very good example. I guess the point I wanted to make was that you should try to remove 'boiler plate' code whenever possible. Look at what is different in otherwise similar blocks and boil it down to that. It's kinda accidental that the ids in this case are incremental, maybe I should have made them 4, 7, 11 and 243 instead :)
I feel your pain, the happy place for any developer is when he's the dumbest guy in the room. Working in any team where you are encouraged to dumb down your code so that other people can understand it can be really frustrating.
"the happy place for any developer is when he's the dumbest guy in the room."
Aint that the truth?!
I don't think passing ISession around all over the place is good. Seems like a leak in abstractions to me. I think what you're looking for is the Unit of Work pattern.
I know that ISession is basically that. But hard-coupling to NH smells bad to me. :)
@fschwiet @Mike - I think we are all right! But Mike is obviously the most right. At the end of the day it's all about what works for your project and the people/plans you have.
Like Mike's version, I feel we now have closure...
I wrote something similar a while back:
I like the in-lining of the action, though. Very nice.
Hi Chris,
Agreed, but this is a little utility to insert static data. I didn't think it waranted fully encapsulating NH.
Hi Steve,
That's very nice. Higher-order functions really do lead to some nice refactorings. It's a shame you can't put extension methods on methods:
MyMethod.MyExtensionMethod();
Would be really cool. This works though:
((Action)MyMethod).MyExtensionMethod();
but it's ugly :(
This lab will cover how to set-up and use Apache Spark and Jupyter notebooks on Cloud Dataproc.
Jupyter notebooks are widely used for exploratory data analysis and building machine learning models as they allow you to interactively run your code and immediately see your results.
However setting up and using Apache Spark and Jupyter Notebooks can be complicated.
Cloud Dataproc makes this fast and easy by allowing you to create a Dataproc Cluster with Apache Spark, Jupyter component and Component Gateway in around 90 seconds.
What you'll learn
In this codelab, you'll learn how to:
- Create a Google Cloud Storage bucket for your cluster
- Create a Dataproc Cluster with Jupyter and Component Gateway,
- Access the JupyterLab web UI on Dataproc
- Create a Notebook making use of the Spark BigQuery Storage connector
- Running a Spark job and plotting the results.
The total cost to run this lab on Google Cloud is about $1. Full details on Cloud Dataproc pricing can be found here.
Sign-in to Google Cloud Platform console at console.cloud.google.com and create a new project:
The last section of this codelab will walk you through cleaning up your project.
New users of Google Cloud Platform are eligible for a $300 free trial.
First, open up Cloud Shell by clicking the button in the top right-hand corner of the cloud console:
After the Cloud Shell loads, run the following command to set the project ID from the previous step:
gcloud config set project <project_id>
The project ID can also be found by clicking on your project in the top left of the cloud console:
Next, enable the Dataproc, Compute Engine and BigQuery Storage APIs.
gcloud services enable dataproc.googleapis.com \ compute.googleapis.com \ storage-component.googleapis.com \ bigquery.googleapis.com \ bigquerystorage.googleapis.com
Alternatively this can be done in the Cloud Console. Click on the menu icon in the top left of the screen.
Select API Manager from the drop down.
Click on Enable APIs and Services.
- Compute Engine API
- Dataproc API
- BigQuery API
- BigQuery Storage API
Create a Google Cloud Storage bucket in the region closest to your data and give it a unique name.
This will be used for the Dataproc cluster.
REGION=us-central1 BUCKET_NAME=<your-bucket-name> gsutil mb -c standard -l ${REGION} gs://${BUCKET_NAME}
You should see the following output
Creating gs://<your-bucket-name>/...
Creating your cluster
Set the env variables for your cluster
REGION=us-central1 ZONE=us-central1-a CLUSTER_NAME=spark-jupyter BUCKET_NAME=<your-bucket-name>
Then run this gcloud command to create your cluster with all the necessary components to work with Jupyter on your cluster.
gcloud beta dataproc clusters create ${CLUSTER_NAME} \ --region=${REGION} \ --image-version=1.4 \ --master-machine-type=n1-standard-4 \ --worker-machine-type=n1-standard-4 \ --bucket=${BUCKET_NAME} \ --optional-components=ANACONDA,JUPYTER \ --enable-component-gateway
You should see the following output while your cluster is being created
Waiting on operation [projects/spark-jupyter/regions/us-central1/operations/abcd123456]. Waiting for cluster creation operation...
It should take about 90 seconds to create your cluster and once it is ready you will be able to access your cluster from the Dataproc Cloud console UI.
While you are waiting you can carry on reading below to learn more about the flags used in gcloud command.
You should see the following output once the cluster is created:
Created [] Cluster placed in zone [us-central1-a].
Flags used in gcloud dataproc create command
Here is a breakdown of the flags used in the gcloud dataproc create command
--region=${REGION}
Specifies the region in which the cluster will be created. You can see the list of available regions here.
--image-version=1.4
The image version to use in your cluster. You can see the list of available versions here.
--bucket=${BUCKET_NAME}
Specify the Google Cloud Storage bucket you created earlier to use for the cluster. If you do not supply a GCS bucket it will be created for you.
This is also where your notebooks will be saved even if you delete your cluster as the GCS bucket is not deleted.
--master-machine-type=n1-standard-4 --worker-machine-type=n1-standard-4
The machine types to use for your Dataproc cluster. You can see a list of available machine types here.
By default, 1 master node and 2 worker nodes are created if you do not set the flag --num-workers.
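If you want a bigger cluster, you can add that flag to the same create command; the worker count here is just an illustrative value:

gcloud beta dataproc clusters create ${CLUSTER_NAME} \
  --num-workers=4 \
  ...(the rest of the flags as shown above)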
--optional-components=ANACONDA,JUPYTER
Setting these values for optional components will install all the necessary libraries for Jupyter and Anaconda (which is required for Jupyter notebooks) on your cluster.
--enable-component-gateway
Enabling Component Gateway creates an App Engine link using Apache Knox and Inverting Proxy which gives easy, secure and authenticated access to the Jupyter and JupyterLab web interfaces meaning you no longer need to create SSH tunnels.
It will also create links for other tools on the cluster including the Yarn Resource manager and Spark History Server which are useful for seeing the performance of your jobs and cluster usage patterns.
Accessing the JupyterLab web interface
Once the cluster is ready you can find the Component Gateway link to the JupyterLab web interface by going to Dataproc Clusters - Cloud console, clicking on the cluster you created and going to the Web Interfaces tab.
You will notice that you have access to Jupyter which is the classic notebook interface or JupyterLab which is described as the next-generation UI for Project Jupyter.
There are a lot of great new UI features in JupyterLab and so if you are new to using notebooks or looking for the latest improvements it is recommended to go with using JupyterLab as it will eventually replace the classic Jupyter interface according to the official docs.
Create a notebook with a Python 3 kernel
From the launcher tab click on the Python 3 notebook icon to create a notebook with a Python 3 kernel (not the PySpark kernel) which allows you to configure the SparkSession in the notebook and include the spark-bigquery-connector required to use the BigQuery Storage API.
Rename the notebook
Right click on the notebook name in the sidebar on the left or the top navigation and rename the notebook to "BigQuery Storage & Spark DataFrames.ipynb"
Run your Spark code in the notebook
In this notebook, you will use the spark-bigquery-connector, which is a tool for reading and writing data between BigQuery and Spark making use of the BigQuery Storage API. The first cell checks the Scala version of your cluster so you can include the correct version of the spark-bigquery-connector jar.
Input [1]:
!scala -version
Output [1]:
Create a Spark session and include the spark-bigquery-connector package.
If your Scala version is 2.11 use the following package.
com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.15.1-beta
If your Scala version is 2.12 use the following package.
com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.15.1-beta
Input [2]:
from pyspark.sql import SparkSession

spark = SparkSession.builder \
  .appName('BigQuery Storage & Spark DataFrames') \
  .config('spark.jars.packages', 'com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.15.1-beta') \
  .getOrCreate()
Enable repl.eagerEval
This will output the results of DataFrames in each step without the need to call df.show(), and it also improves the formatting of the output.
Input [3]:
spark.conf.set("spark.sql.repl.eagerEval.enabled",True)
Read BigQuery table into Spark DataFrame
Create a Spark DataFrame by reading in data from a public BigQuery dataset. This makes use of the spark-bigquery-connector and BigQuery Storage API to load the data into the Spark cluster.
Create a Spark DataFrame and load data from the BigQuery public dataset for Wikipedia pageviews. You will notice that you are not running a query on the data as you are using the spark-bigquery-connector to load the data into Spark where the processing of the data will occur. When this code is run it will not actually load the table as it is a lazy evaluation in Spark and the execution will occur in the next step.
Input [4]:
table = "bigquery-public-data.wikipedia.pageviews_2020"

df_wiki_pageviews = spark.read \
  .format("bigquery") \
  .option("table", table) \
  .option("filter", "datehour >= '2020-03-01' AND datehour < '2020-03-02'") \
  .load()

df_wiki_pageviews.printSchema()
Output [4]:
Select the required columns and apply a filter using
where() which is an alias for
filter().
When this code is run it triggers a Spark action and the data is read from BigQuery Storage at this point.
Input [5]:
df_wiki_en = df_wiki_pageviews \
  .select("datehour", "wiki", "views") \
  .where("views > 1000 AND wiki in ('en', 'en.m')")

df_wiki_en
Output [5]:
Group by title and order by page views to see the top pages
Input [6]:
import pyspark.sql.functions as F

df_datehour_totals = df_wiki_en \
  .groupBy("datehour") \
  .agg(F.sum('views').alias('total_views'))

df_datehour_totals.orderBy('total_views', ascending=False)
Output [6]:
You can make use of the various plotting libraries that are available in Python to plot the output of your Spark jobs.
Convert Spark DataFrame to Pandas DataFrame
Convert the Spark DataFrame to Pandas DataFrame and set the datehour as the index. This is useful if you want to work with the data directly in Python and plot the data using the many available Python plotting libraries.
Input [7]:
spark.conf.set("spark.sql.execution.arrow.enabled", "true")

pandas_datehour_totals = df_datehour_totals.toPandas()

pandas_datehour_totals.set_index('datehour', inplace=True)
pandas_datehour_totals.head()
Output [7]:
Plotting Pandas Dataframe
Import the matplotlib library which is required to display the plots in the notebook
Input [8]:
import matplotlib.pyplot as plt
Use the Pandas plot function to create a line chart from the Pandas DataFrame.
Input [9]:
pandas_datehour_totals.plot(kind='line',figsize=(12,6));
Output [9]:
Check the notebook was saved in GCS
You should now have your first Jupyter notebook up and running on your Dataproc cluster. Give your notebook a name and it will be auto-saved to the GCS bucket used when creating the cluster.
You can check this using this gsutil command in the cloud shell
BUCKET_NAME=<your-bucket-name> gsutil ls gs://${BUCKET_NAME}/notebooks/jupyter
You should see the following output
gs://bucket-name/notebooks/jupyter/ gs://bucket-name/notebooks/jupyter/BigQuery Storage & Spark DataFrames.ipynb
There might be scenarios where you want the data in memory instead of reading from BigQuery Storage every time.
This job will read the data from BigQuery and push the filter to BigQuery. The aggregation will then be computed in Apache Spark.
import pyspark.sql.functions as F table = "bigquery-public-data.wikipedia.pageviews_2020" df_wiki_pageviews = spark.read \ .format("bigquery") \ .option("table", table) \ .option("filter", "datehour >= '2020-03-01' AND datehour < '2020-03-02'") \ .load() df_wiki_en = df_wiki_pageviews \ .select("title", "wiki", "views") \ .where("views > 10 AND wiki in ('en', 'en.m')") df_wiki_en_totals = df_wiki_en \ .groupBy("title") \ .agg(F.sum('views').alias('total_views')) df_wiki_en_totals.orderBy('total_views', ascending=False)
You can modify the job above to include a cache of the table and now the filter on the wiki column will be applied in memory by Apache Spark.
import pyspark.sql.functions as F table = "bigquery-public-data.wikipedia.pageviews_2020" df_wiki_pageviews = spark.read \ .format("bigquery") \ .option("table", table) \ .option("filter", "datehour >= '2020-03-01' AND datehour < '2020-03-02'") \ .load() df_wiki_all = df_wiki_pageviews \ .select("title", "wiki", "views") \ .where("views > 10") # cache the data in memory df_wiki_all.cache() df_wiki_en = df_wiki_all \ .where("wiki in ('en', 'en.m')") df_wiki_en_totals = df_wiki_en \ .groupBy("title") \ .agg(F.sum('views').alias('total_views')) df_wiki_en_totals.orderBy('total_views', ascending=False)
You can then filter for another wiki language using the cached data instead of reading data from BigQuery storage again and therefore will run much faster.
df_wiki_de = df_wiki_all \
  .where("wiki in ('de', 'de.m')")

df_wiki_de_totals = df_wiki_de \
  .groupBy("title") \
  .agg(F.sum('views').alias('total_views'))

df_wiki_de_totals.orderBy('total_views', ascending=False)
You can remove the cache by running
df_wiki_all.unpersist()
The Cloud Dataproc GitHub repo features Jupyter notebooks with common Apache Spark patterns for loading data, saving data, and plotting your data with various Google Cloud Platform products and open-source tools.
Integrating detekt in the Workflow
Learn how to integrate the powerful detekt tool in Android app development to help detect and prevent code smells during the development process.
Version
- Kotlin 1.4, Android 4.2, Android Studio 4.2
There are several ways to keep your technical debt low throughout the development process. Finding potential problems early — and minimizing them — is important. It’s also a good idea to maintain your team’s code styles and conventions and keep the cognitive complexity of methods within acceptable limits.
detekt, a static code analysis tool for Kotlin, helps you do all of this. It comes with a wide range of rule sets. It provides you with options to configure them, as well as the ability to add your own custom rules. So, adding this as a part of your CI steps can prove to be beneficial. CI stands for Continuous Integration, a process of ensuring features and bug fixes are added to an application regularly.
In this tutorial, you’ll build DetektSampleApp, an app that shows detekt rule sets with their details. During the process you’ll learn about:
- detekt and its features.
- Adding detekt to your project.
- Rule sets available in detekt.
- Writing custom rules and processors.
- Integrating detekt into your IDE.
- Adding detekt to GitHub Actions.
Getting Started
Download the starter project by clicking the Download Materials button at the top or bottom of the tutorial.
Open Android Studio 4.2.1 or later and import the starter project. Then, build and run the project. You’ll see the following screen:
The app shows a list of detekt rule sets. Each rule set has a short description of what it does. Clicking on any of the rule sets takes you to a view that loads the official documentation from the detekt website.
Understanding detekt
detekt is a static analysis tool for the Kotlin programming language. It tries to improve your codebase by enforcing a set of rules. You can integrate it with your CI to help avoid code smells on your codebase. This is helpful — especially when working with a team of developers.
detekt has several features that make it a worthwhile addition to your project:
- It offers code-smell analysis for Kotlin projects.
- It’s highly configurable — you can customize it according to your own needs.
- You can suppress findings if you don’t want warnings for everything.
- IDE, SonarQube and GitHub Actions integration.
- You can specify code-smells threshold to break your builds or print warnings.
- You can add code-smell baseline and suppression for legacy projects.
You’ll learn about all of these features in this tutorial. Now, it’s time to add detekt to your project.
Adding detekt To Your Project
You can easily add detekt to new or existing Kotlin projects. For existing projects, you’ll add some more customization to prevent many errors.
Adding to New Projects
detekt is available as a Gradle plugin. For new projects, add the plugin via the Gradle files.
Navigate to the project-level build.gradle file. Add the corresponding classpath in the dependencies block:
classpath "io.gitlab.arturbosch.detekt:detekt-gradle-plugin:1.17.1"
Here, you define the path that Gradle will use to fetch the plugin.
Next, you need to add the plugin to your project.
Below the
dependencies block add this:
apply plugin: "io.gitlab.arturbosch.detekt"
With this, you apply the detekt plugin to your project. The final step of adding the plugin is to apply the plugin on the app level Gradle file.
Navigate to the app-level build.gradle file. Add this
apply line:
apply plugin: "io.gitlab.arturbosch.detekt"
This enables you to use the plugin in the app module too. With this added, do a Gradle sync. After the sync is complete, your plugin is set and ready to use!
Running detekt Terminal Command
Now that you have the detekt plugin set up, run the following command on your terminal:
./gradlew detekt
Once the task completes, the build will fail with the following results:
There are a few things to notice:
- At the very top, detekt shows warnings from Gradle. Currently, it shows a deprecation warning.
- You can see the red text with the message
Task :app:detekt FAILED. Below this text is a list of detekt rule sets. In this case, you have
naming and style rule sets. In each rule set, you can see the rule that has been violated and the file together with the line. This makes it easier for you to pinpoint where in your code you need to do refactoring. Each rule set has an associated debt.
- After the rule sets, you have the
Overall debt: 1h 25min text. detekt estimates the total time it would take to fix all the code smells in your project. Imagine this is a new project and you already have 1 hour and 25 minutes of debt. And if it were an existing project, you could have days and days of debt. That's a code smell already!
Congratulations! You’ve set up detekt in your new project successfully. Quite easy, right? :]
Now, you’ll focus on learning how to add it to an ongoing project.
Adding to Existing Projects
Integrating detekt into an ongoing project can result in many warnings and violated rules. The debt can be a lot of hours that you may not have time to resolve.
One option is to ignore the legacy code and instead focus on the code you add after integrating detekt. To do this, detekt allows you to define a
baseline property that generates a list of every violated rule that detekt will ignore. Run
./gradlew detektBaseline, and this generates a baseline file for you. Now, when running
./gradlew detekt, detekt will ignore warnings listed in the baseline file. Check out the official documentation to learn more about
baseline configuration.
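If you adopt a baseline, you can point the Gradle plugin at the generated file. A minimal sketch, assuming the default file name detekt-baseline.xml in the project root:

detekt {
  baseline = file("${project.rootDir}/detekt-baseline.xml")
}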
In the next section, you’ll dive deeper into the detekt rule sets to know what they are. You’ll also look at rules in the rule sets.
Looking at detekt Rule Sets
The DetektSampleApp already includes a list of the rule sets and their descriptions. Rule sets contain a group of rules that check compliance with your code to improve the code quality. The rules don’t affect the functionality of your app. Here are the common rule sets that exist:
- Comments: It provides rules that address issues in comments and documentation of the code. It checks header files, comments on private methods and undocumented classes, properties or methods.
- Complexity: This set contains rules that report complex code. It checks for complex conditions, methods, expressions and classes, as well as reports long methods and long parameter lists.
- Coroutines: The coroutines rule set analyzes code for potential coroutines problems.
- Empty-Blocks: It contains rules that report empty blocks of code. Examples include empty
catch blocks, empty class blocks, empty function and conditional function blocks.
- Exceptions: It reports issues related to how code throws and handles exceptions. For example, it has rules if you’re catching generic exceptions among other issues related to handling exceptions.
- Formatting: This checks if your codebase follows a specific formatting rule set. It allows checking indentation, spacing, semicolons or even import ordering, among other things.
- Naming: It contains rules which assert the naming of different parts of the codebase. It checks how you name your classes, packages, functions and variables. It reports the errors in case you’re not following the set conventions.
- Performance: It analyzes code for potential performance problems. Some of the issues it reports include the use of
ArrayPrimitives or misuse of forEach loops, for instance.
- Potential-Bugs: The potential-bugs rule set provides rules that detect potential bugs.
- Style: The style rule set provides rules that assert the style of the code. This will help keep code in line with the given code style guidelines.
Each rule set contains a vast number of rules. This tutorial covers the most common rules, but don’t hesitate to check the official documentation to learn more about the rules.
Some of the rules are not active by default. You have to activate them yourself. How do you do that, you might ask? You’ll look into that in the next section.
Configuring detekt
One of the features of detekt is the high ability to customize it to your own needs. Moreover, it gives you the ability to easily enable or disable rules in your project.
detekt uses a YAML style configuration file for setting up reports, processors, build failures, rule set and rule properties.
In the starter app, switch to project view and you’ll see detekt.yml as shown below:
Open detekt.yml. It contains some rule set and rule properties — for example, the comments rule set:
comments:
  active: true
  excludes: "**/test/**,**/androidTest/**,**/*.Test.kt,**/*.Spec.kt,**/*.Spek.kt"
  CommentOverPrivateFunction:
    active: false
  CommentOverPrivateProperty:
    active: false
  EndOfSentenceFormat:
    active: false
    endOfSentenceFormat: ([.?!][ \t\n\r\f<])|([.?!:]$)
  UndocumentedPublicClass:
    active: false
    searchInNestedClass: true
    searchInInnerClass: true
    searchInInnerObject: true
    searchInInnerInterface: true
  UndocumentedPublicFunction:
    active: false
As you can see, in order to enable or disable a given rule, you just have to set the boolean value
active: true/false.
In the code above, you have the configurations for the
comments rule set, which is set to be active. There's an extra property for listing the files you'd want to exclude while reporting the issues. In this example, you exclude test and androidTest directories. Below this, you further set the properties for the individual rules in the rule sets. For instance,
UndocumentedPublicFunction is not active. detekt won't report this in your codebase.
You can customize detekt.yml according to your project requirements.
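For example, here is a sketch of what tweaking a single rule might look like. The functionThreshold property name follows the 1.17.x defaults, so verify it against the documentation for your detekt version:

complexity:
  active: true
  LongParameterList:
    active: true
    functionThreshold: 6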
For this specific configuration to be read by detekt, you'll need to add this file to your plugin configuration. To do this, navigate to the project-level build.gradle file. Add this below your
apply line:
subprojects {
  apply plugin: "io.gitlab.arturbosch.detekt"

  detekt {
    config = files("${project.rootDir}/detekt.yml")
    parallel = true
  }
}
Here, you specify the configuration file for detekt. It will no longer use the default configurations. You can enable the rules or disable them in detekt.yml. This configuration applies to all sub-projects in your project.
You've seen available configuration options for detekt. Next, you'll add more methods that violate rule sets. You'll see how detekt reports these violations.
Adding a Rule Set
Locate the MainActivity class. Add the following
coroutineTestRules function below
onCreate:
import kotlinx.coroutines.GlobalScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch

private suspend fun coroutineTestRules() {
  GlobalScope.launch {
    delay(2000)
  }
}
Be sure to add the imports above at the top of the file or when the IDE prompts you to do so. This method tests the coroutines rule set.
It's a bad practice to use
GlobalScope in an activity. detekt will raise an issue on this.
Finally, call this function at the end of the
onCreate method:
runBlocking { coroutineTestRules() }
Resolve the imports once the IDE prompts you. Here, you call your
coroutineTestRules() method inside
runBlocking. You have to call
suspend methods inside coroutines or other suspend methods.
Next, open detekt.yml and enable the coroutines rule set:

coroutines:
  active: true
  GlobalCoroutineUsage:
    active: true
Here, you set the
coroutines rule set to be active. You also enable the
GlobalCoroutineUsage rule in the process. Now, run the
./gradlew detekt command on your terminal. Your results will be as follows:
In the image, you can see the report from the terminal now includes the
coroutines rule set. It shows the debt and the file with the
GlobalCoroutineUsage. The report has additional details at the bottom as shown:
detekt.yml has additional configuration at the top. These settings specify the reports you'll get and the project complexity report from detekt. You'll look at this later on in this tutorial.
Breaking More Rule Sets
Add the following code below your
coroutineTestRules function in the MainActivity class:
// 1
private fun complexMethod(name: String, email: String, phone: String, address: String,
    zipCode: String, city: String, country: String): String {
  return name
}

// 2
private fun emptyMethod() {}

// 3
override fun toString(): String {
  throw IllegalStateException()
}

// 4
fun performanceIssues() {
  (1..19).forEach {
    print(it.toString())
  }
}

// 5
fun potentialBugs() {
  val test = when ("type") {
    "main" -> 1
    "main" -> 2
    else -> 3
  }
}
Here's a breakdown of the code above:
- This is an example of a complex method. The method has seven parameters that exceed the maximum six parameters set by detekt. You can change this number in the configuration file too.
- This is an empty method. It falls in the
empty-blocks rule set.
- This represents the
exceptions rule set. This method throws an exception without a cause. The exception is also thrown from an unexpected location.
- You have a
forEach loop on a range that leads to performance issues. detekt reports this under the performance rule set.
- You have a
when condition that has two similar states. This can lead to a bug in your app. detekt reports such cases as code smells under the potential-bugs rule set.
Run
./gradlew detekt command on the terminal and you'll see:
You've learned about the rule sets in different scenarios. What about cases when detekt detects too much? At times you may need to disable a certain rule on a specific method and not the whole project. For such cases, you need to suppress the issues. In the next section, you'll look at how to suppress issues.
Suppressing Issues
To prevent detekt from displaying an issue on a specific method, you can use the
@Suppress annotation. Above
complexMethod() add the following annotation:
@Suppress("LongParameterList")
Run
./gradlew detekt command on the terminal and you'll see the following:
From the image, you can see detekt doesn't complain about the
LongParameterList rule anymore. This will only apply to this method. If you had another file or class with a complex method, detekt would still report the issue.
To see this in action, add this new method below
potentialBugs():
@Suppress("EmptyFunctionBlock") private fun suppressedWarning() { }
In this method, you suppress the
EmptyFunctionBlock rule. Run
./gradlew detekt command on the terminal and you'll see:
The report doesn't show an
EmptyFunctionBlock issue on
suppressedWarning(), but on
emptyMethod() it's still shown.
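If you need to silence a rule more broadly than a single method, the same annotation can be applied at the class or file level. Here's a minimal sketch; the rule names are standard detekt rules, but the class name is only an example:

// Suppress a rule for an entire file (file annotations go above the package declaration).
@file:Suppress("TooManyFunctions")

package com.example.demo

// Suppress a rule for a whole class.
@Suppress("LongParameterList")
class ExampleHolder {
  // ...
}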
So far, you've learned how to use the rules available on the detekt plugin. With your projects, you might need to detect more code smells that are not part of detekt. You'll learn how to do that next.
Writing Custom detekt Rules
As mentioned earlier, detekt allows you to extend its functionality by adding your own rules. Recently, the Kotlin Android Extensions plugin was deprecated, which means using Kotlin Synthetics is no longer recommended. Before the introduction of ViewBinding, this was the go-to way of accessing views in Android apps, so the deprecation affects many projects. Using detekt, you can write a rule that checks for synthetic imports and fails your build if it finds one.
First, move to the customRules module and add these dependencies to build.gradle:
// 1
compileOnly "io.gitlab.arturbosch.detekt:detekt-api:1.17.1"

// 2
testImplementation "io.gitlab.arturbosch.detekt:detekt-api:1.17.1"
testImplementation "io.gitlab.arturbosch.detekt:detekt-test:1.17.1"
testImplementation "org.assertj:assertj-core:3.19.0"
testImplementation 'junit:junit:4.13.2'
This code:
- Adds the detekt API dependency. You need this for writing the custom rules.
- Adds test dependencies. To test your rules you need
detekt-test. It also requires
assertj-core as a dependency.
Do a Gradle sync to add these dependencies. Next, inside the
com.raywenderlich.android.customrules.rules package in the customRules module, add a new file. Name it NoSyntheticImportRule.kt and add the following:
package com.raywenderlich.android.customrules.rules

import io.gitlab.arturbosch.detekt.api.*
import org.jetbrains.kotlin.psi.KtImportDirective

//1
class NoSyntheticImportRule : Rule() {

  //2
  override val issue = Issue("NoSyntheticImport",
      Severity.Maintainability,
      "Don’t import Kotlin Synthetics " +
          "as it is already deprecated.",
      Debt.TWENTY_MINS)

  //3
  override fun visitImportDirective(
      importDirective: KtImportDirective
  ) {
    val import = importDirective.importPath?.pathStr
    if (import?.contains("kotlinx.android.synthetic") == true) {
      report(CodeSmell(issue, Entity.from(importDirective),
          "Importing '$import' which is a Kotlin Synthetics import."))
    }
  }
}
Here's what's happening:
- NoSyntheticImportRule extends
Rule from the detekt API.
- This overrides the issue property from
Rule, which defines your issue. In this case, you create an issue named NoSyntheticImport. You specify the debt for the issue, the severity of the issue and the message for detekt to show. The debt represents the time you need to fix the issue.
- This method checks imports in your files and classes. Inside here you have a check to see if any of the import contains Kotlin synthetics. If it does, you report the code smell with a message.
Adding Your Own Rule to detekt
With your rule class complete, your need to create a
RuleSetProvider. This class lets detekt know about your rule. You create one by implementing
RuleSetProvider interface.
On the same package as the file above, create another new file. Name it CustomRuleSetProvider.kt and add the following code:
package com.raywenderlich.android.customrules.rules

import io.gitlab.arturbosch.detekt.api.Config
import io.gitlab.arturbosch.detekt.api.RuleSet
import io.gitlab.arturbosch.detekt.api.RuleSetProvider

class CustomRuleSetProvider : RuleSetProvider {

  override val ruleSetId: String = "synthetic-import-rule"

  override fun instance(config: Config): RuleSet =
      RuleSet(ruleSetId, listOf(NoSyntheticImportRule()))
}
Implementing this interface provides detekt with a rule set containing your new rule. You also provide an
id for your rule set. You can group as many rules as you'd like in this rule set if you have other related rules.
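For example, if you later wrote a second rule, you could return it from the same provider by adding it to the list. AnotherCustomRule below is hypothetical and not part of this tutorial:

override fun instance(config: Config): RuleSet =
    RuleSet(ruleSetId, listOf(NoSyntheticImportRule(), AnotherCustomRule()))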
Next, you need to let detekt know about your CustomRuleSetProvider. Navigate to the customRules module. Open src/main/resources/META-INF/services, and you'll find io.gitlab.arturbosch.detekt.api.RuleSetProvider. Inside this file, add:
com.raywenderlich.android.customrules.rules.CustomRuleSetProvider
Here, you add the fully qualified name of your CustomRuleSetProvider. detekt can now find the class, instantiate it and retrieve your rule set. This file notifies detekt about your
CustomRuleSetProvider.
Woohoo! You've created your first custom rule. It's time to test if your rule is working.
Testing Your Custom Rule
Inside the test directory in the customRules module, add a new Kotlin file to the package and name it NoSyntheticImportTest.kt. Add the following code:
package com.raywenderlich.android.customrules.rules

import io.gitlab.arturbosch.detekt.test.lint
import org.assertj.core.api.Assertions.assertThat
import org.junit.Test

class NoSyntheticImportTest {

  @Test
  fun noSyntheticImports() {
    // 1
    val findings = NoSyntheticImportRule().lint("""
        import a.b.c
        import kotlinx.android.synthetic.main.activity_synthetic_rule.*
        """.trimIndent())

    // 2
    assertThat(findings).hasSize(1)
    assertThat(findings[0].message).isEqualTo("Importing " +
        "'kotlinx.android.synthetic.main.activity_synthetic_rule.*' which is a Kotlin Synthetics import.")
  }
}
In the code above:
- You use the
lint() extension function, which executes the checks and returns the results. Inside the function you've added two imports. One is compliant and the other one is non-compliant.
- Using the results from above, you do an assertion to check if the
findings has a size of one. This is because you have one non-compliant import. You also do an assertion to check the message in your
findings.
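It can also be worth asserting the opposite case. This extra test isn't part of the tutorial's materials; a sketch of it, using the same lint() helper, could look like this:

@Test
fun noFindingsForCompliantImports() {
  val findings = NoSyntheticImportRule().lint("""
      import a.b.c
      import androidx.appcompat.app.AppCompatActivity
      """.trimIndent())

  // No synthetic imports, so the rule should report nothing.
  assertThat(findings).isEmpty()
}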
Click the Run icon on the left side of
noSyntheticImports(). You'll see your test passes as shown below:
You can see your test for the custom rule passes. This means you can use your rule in a real project. You'll learn how to do that next.
Note: You may see a WARNING: An illegal reflective access operation has occurred message. It relates to this test running on its own and won't appear when using the rule from the app module.
Using the Custom Rule in Your Project
To use this new rule in your project, you'll need to apply the
customRules module in your app module. Navigate to the app-level build.gradle file and add the following to the
dependencies section:
detekt "io.gitlab.arturbosch.detekt:detekt-cli:1.17.1" detekt project(":customRules")
The first line adds the
detekt-cli dependency, which detekt requires to run your custom rule. The second line adds your customRules module as a detekt dependency so that detekt can pick up your custom rule.
Have a look at the SyntheticRuleActivity class. As you can see, it has Kotlin Synthetic Import. You'll use this class to test if the rule works.
import kotlinx.android.synthetic.main.activity_synthetic_rule.*
Last, you need to activate your rule set and include your rule in the configuration file. To do this, add the following to detekt.yml alongside the other rule sets:

synthetic-import-rule:
  active: true
  NoSyntheticImportRule:
    active: true
Run
./gradlew detekt on your terminal. You'll see these results:
detekt now reports an issue under
synthetic-import-rule. It shows a debt of 20 minutes and points to the class that has the import.
Congratulations! You've made your first custom rule. Next, you'll learn about processors.
Looking at Custom Processors in detekt
detekt uses processors to calculate project metrics. If you enable count processors, for instance, this is what your report looks like:
detekt is customizable, so you can create your own processor for the statistics that you want to see in your project. Creating a custom processor is very similar to creating a custom rule.
To create a custom processor, your class needs to implement
FileProcessListener, whose callbacks run as detekt processes each file. You also need a visitor class, which depends on the statistics you want to gather for the metrics. The visitor can be for methods, loops and so on. Lastly, you need to register your custom processor's fully qualified name in a file named io.gitlab.arturbosch.detekt.api.FileProcessListener inside src/main/resources/META-INF/services to inform detekt of your processor. That's all you have to do. You won't create a processor in this tutorial, but if you want to learn more about custom processors, check out the custom processors documentation.
You've seen how custom processors fit into detekt. Next, you'll learn how to add detekt to GitHub Actions.
Integrating detekt With GitHub Actions
GitHub Actions is GitHub's platform for automation workflows. With GitHub Actions, you can add a detekt action and set it to run when someone pushes or creates a pull request on the branches you specify.
To enable actions, go to any of your GitHub projects and navigate to the actions tab as shown below:
Tap
Set up a workflow yourself. You'll see the workflow editor. Inside the editor, add:
## 1
name: detekt

## 2
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

## 3
jobs:
  detekt:
    ## 4
    runs-on: ubuntu-latest
    steps:
      - name: "checkout"
        uses: actions/checkout@v2
      - name: "detekt"
        uses: natiginfo/action-detekt-all@1.17.0
Here's what the code above does:
- Creates a workflow named detekt
- Specifies when your workflow will run. In this case, the workflow runs when there's a push or a pull request on the main branch.
- Defines the jobs to be done by the workflow. You only have detekt as your job.
- Specifies a runner and steps for your job. You also add a third-party action, called detekt action. It runs detekt checks to ensure your code follows the set practices before merging to the main branch.
Save your workflow and create a pull request. For this sample project, the pull request fails with the following error:
From GitHub Actions, you'll see:
The workflow fails and shows the same report as you were seeing on your terminal. This is a very useful feature for detekt, as it ensures any changes added to the main branch don't have code smells. The pull request creator has to first address the issues before merging the pull request.
You've seen how to add detekt on GitHub Actions. Next, you'll learn how to add detekt on Android Studio.
Integrating detekt With Your IDE
To add the detekt plugin to Android Studio, navigate to Preferences/Settings ▸ Plugins and search for detekt. You'll see the plugin as in the image below:
Click Install to install the plugin. Once the installation completes, exit this screen. Navigate to Preferences/Settings ▸ Tools ▸ detekt and check Enable Detekt.
The detekt IDE plugin also has these optional configuration options:
- Configuration file path: This is your detekt.yml.
- Baseline file: Path to your custom baseline.xml.
- Plugin Jars: Path to jar file that has your custom rules, if you want detekt to report them.
With this, the IDE will detect errors on the go as you code!
Where to Go From Here?
Download the completed project files by clicking the Download Materials button at the top or bottom of the tutorial.
Congratulations! You've learned so much about detekt and its features. Now you know how you can add it to your project, create custom rules or even integrate it on the workflow in GitHub Actions or your IDE. :]
We hope you enjoyed this tutorial. If you have any questions or comments, please join the forum discussion below!
What exactly is ProcessInstance? - Aaron, Jan 26, 2006 11:42 PM
All,
I'm trying to comprehend the methods used in JBpm. I'm building a helpdesk for my company using JBoss, and JBpm will really help me out.
My confusion is in the ProcessDefinition and ProcessInstance. In the tutorial for JBpm3.0 (Chapter 3.2 Database Example), a processDefinition is deployed with the name "Hello World".
Easy enough.
Where I fall short on, is how do you "attach" a java object instance to a specific instance in JBpm? The tutorial loads it as:
JbpmSession jbpmSession = jbpmSessionFactory.openJbpmSession();
jbpmSession.beginTransaction();
jbpmSession
  .getGraphSession()
  .saveProcessDefinition(processDefinition);
..... close the session and transaction ....
How does JBpm know that my, lets say, HolidayRequest#1234 is the object that is attached to that session?
It goes on, mimicking a seperate request:
ProcessDefinition processDefinition = jbpmSession
  .getGraphSession()
  .findLatestProcessDefinition("hello world");
ProcessInstance processInstance = new ProcessInstance(processDefinition);
Token token = processInstance.getRootToken();
Okay, so I loaded up the process-definition "hello world" which is my workflow "map" so-to-speak, that all my HolidayRequests use. But, again, how does JBpm know that I'm talking about HolidayRequest#1234 when I "getRootToken()" ?
Obviously, HolidayRequest#1234 could be in the "pending" state, while HolidayRequest#567 could be in the "approved" state. How does JBpm know which object I'm talking about, when all I see is "findLatestProcessDefinition" and "getRootToken".
The HolidayRequest object is fictitious, but I could just as well be saying, TroubleTicket, or DocumentApproval objects. Any objects that would be defined to flow through the specific Process Definition.
I hope I'm clear on my confusion. I'm still scouring the docs, as I'm sure the answer is there somewhere.
Can anyone elaborate for me?
Thanks!
Aaron
1. Re: What exactly is ProcessInstance? - Rainer Alfoeldi, Jan 27, 2006 2:28 AM (in response to Aaron)
Hi Aaron,
jBPM doesn't know... unless you tell it.
How about calling processInstance.getContextInstance().setVariable()?
Greetings
2. Re: What exactly is ProcessInstance? - Elmo, Jan 27, 2006 7:35 AM (in response to Aaron)
Hi Aaron,
The process instance has a getId() method that returns the unique number of your process instance. You can use that to retrieve your process instances. If you want to retrieve using your defined names you can store it as a variable like Ralfoeldi has suggested. If you're ok with retrieving it using sql, you can check it out in the jbpm_variableinstance table, it contains the variable and the related process instance id.
Regards,
Elmo
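A minimal sketch of what that looks like with the jBPM 3 API already shown in this thread; the variable name and the HolidayRequest id are only examples:

// When starting the process for HolidayRequest #1234:
ProcessInstance processInstance = new ProcessInstance(processDefinition);
processInstance.getContextInstance().setVariable("holidayRequestId", 1234L);
long workflowId = processInstance.getId();   // store this on your own entity

// Later, to pick the same instance up again:
ProcessInstance loaded =
    jbpmSession.getGraphSession().loadProcessInstance(workflowId);
Long requestId = (Long) loaded.getContextInstance().getVariable("holidayRequestId");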
3. Re: What exactly is ProcessInstance? - Aaron, Jan 27, 2006 9:54 AM (in response to Aaron)
Thanks RAlfoeldi and enazareno! I appreciate you taking the time to help me understand!
"enazareno" wrote:
The process instance has a getId() method that returns the unique number of your process instance.
So, keeping with this method, I can store the processInstanceID in my objects, to load at any time?
public class Request {
  String name;
  Date dateCreated;
  Date dateRequested;
  long workflowID;

  public long getWorkflowID() {
    if (workflowID == null) {
      // obtain a JBpm session and processInstance
      workflowID = processInstance.getId();
    }
    return workflowID;
  }

  public void setWorkflowID(long id) {
    this.workflowID = id;
  }
}
Am I on the right track so far? Again, keeping to this effect, I can load the JBpm process using:
processInstance = graphSession.loadProcessInstance(request.getWorkflowID());
Is my understanding right?
Thanks for your help, and your time! If I'm correct in my understanding for processInstances, my next learning challenge is task lists and assignments ((grin))
Again, thanks.
~~Aaron
4. Re: What exactly is ProcessInstance? - Rainer Alfoeldi, Jan 27, 2006 10:02 AM (in response to Aaron)
Hi Aaron,
just a thought: isn't your processInstance your HolidayRequest?
Things get a lot easier if the processInstance is the leading entity.
There might be situations where the other way around is required i.e. an entity keeps track of it's processInstances (e.g. if more than one processInstance can be connected to an entity) but that isn't the usual situation.
Greetings
5. Re: What exactly is ProcessInstance? - Aaron, Jan 27, 2006 11:41 AM (in response to Aaron)
"RAlfoeldi" wrote:
isn't your processInstance your HolidayRequest?
Yes it is. I'm using EJB3 EntityBeans for my project.
"RAlfoeldi" wrote:
Things get a lot easier if the processInstance is the leading entity.
What do you mean by "leading entity"? Maybe that's where I'm getting confused at.
Thanks!!
6. Re: What exactly is ProcessInstance? - Rainer Alfoeldi, Jan 27, 2006 12:22 PM (in response to Aaron)
Hi Aaron,
another attempt:
Your processInstance is a data object that traverses a process as defined in the processDefinition. It can basically handle any variables you might need. What I was suggesting is that there might not be any need for a separate HolidayRequest object. Just use the processInstance. If the required data is too large for jBPM to efficiently handle, just keep a key to an external data object, but the 'leading entity', the one that knows about state etc., remains the processInstance.
The other way around would be something like: long running contract (years) with various processes that are executed over a certain period, maybe even concurrently. Then it would make sense to let the contract keep track of the processInstances doing something to it. (The implementation might be the other way around, but that isn't the point.) In this case you would maybe want to ask: who is working on this contract?
Just my 2 cents. As always there are a thousand different ways to solve things.
Greetings
7. Re: What exactly is ProcessInstance? - Aaron, Jan 27, 2006 1:25 PM (in response to Aaron)
Rainer,
I see now. Thanks for taking the extra time to explain to me.
Basically, you are saying let the processInstance be my main object, storing the variables I need.
That makes sense. It keeps things simple and centralized.
Unfortunately, I think I'd rather do it the other way (i.e. Having my Entity Bean reference the processID), as my Entity Beans will be used by other outside processes (such as Crystal Reports, other external systems, etc).
So, I will need a table for my object's data, and then reference JBpm to assess the current state, and it's workflow.
Please correct me if I'm wrong.
I think I'm ready to start planning my architecture.
Thanks again!
8. Re: What exactly is ProcessInstance? - Rainer Alfoeldi, Jan 28, 2006 5:11 AM (in response to Aaron)
Hi Aaron,
you're on the right track.
If the required data is too large for jBPM to efficiently handle, just keep a key to an external data object, but the 'leading entity', the one that knows about state etc., remains the processInstance.
Greetings
Remove a shared memory object
#include <sys/mman.h> int shm_unlink( const char * name );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The shm_unlink() function removes the name of the shared memory object specified by name. After removing the name, you can't use shm_open() to access the object.
This function doesn't affect any references to the shared memory object (i.e., file descriptors or memory mappings). If more than one reference to the shared memory object exists, then the link count is decremented, but the shared memory segment isn't actually removed until you remove all open and map references to it.
See shm_open().
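A minimal usage sketch (plain POSIX calls, error checking omitted for brevity):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Create (or open) the object and size it. */
    int fd = shm_open("/demo_shm", O_RDWR | O_CREAT, 0600);
    ftruncate(fd, 4096);

    /* Map it; the mapping stays valid even after shm_unlink(). */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    /* Remove the name; the memory itself goes away only when the
       last mapping and descriptor are gone. */
    shm_unlink("/demo_shm");

    munmap(p, 4096);
    close(fd);
    return 0;
}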
POSIX 1003.1 SHM
mmap(), munmap(), mprotect(), shm_ctl(), shm_ctl_special(), shm_open()
03-30-2016 09:57 AM
I would like to create a new IP address on an interface with xml api. I tried different requests but no one worked.
I tried to use config set wiht the xpath which ended to
[@name='interface']/ip and after &element=<ip><entry name="x.x.x.x/x"/></ip> and similar settings but the response is always '406 Not Acceptable'.
03-30-2016 12:15 PM - edited 03-30-2016 12:17 PM
When using 'set', the element should not contain the end of the xpath (you don't need the <ip></ip> part of the element). This is only needed when using 'edit'.
Also, you're missing the layer3 element in the xpath.
xpath should end with: entry[@name='ethernet1/1']/layer3/ip
element should be: <entry name='x.x.x.x/x'/>
So here are a few examples of setting an ip address on ethernet1/1 using the API:
With a browser:
https://<fw ip>/api?key=<apikey>&type=config&action=set&xpath=/config/devices/entry[@
With pan-python:
pan-python is available here:
pan-python abstracts out much of the API into simple parameters on a command line or python script. It also handles encoding and other 'under-the-hood' complexity for you. On a linux CLI, with pan-python installed, you can make the same API call like this:
panxapi.py -h <fw ip> -K <apikey> -S "<entry name='5.5.5.5/24'/>" "/config/devices/entry/network/interface/ethernet/entry[@name='ethernet1/3']/layer3/ip"
With pandevice framework (python SDK):
pandevice is available here:
It's a python SDK that allows you to interact with the API at an object level. No XPath or XML is required. pandevice is also natively vsys and HA aware.
This example accomplishes the same thing, but using objects in python without XPaths or XML:
from pandevice import firewall, network fw = firewall.Firewall("<fw ip>", "admin", "password") fw.add(network.EthernetInterface("ethernet1/1", ip="5.5.5.5/24")).create()
03-31-2016 03:21 AM
The things are a bit more complicated but I didn't write all details. We use Panorama, and the interface is a subinterface of an aggregate interface. But I tried the similar way. The current request is:
/api/?type=config&action=set&key=myapikey&xpath=/config/devices/entry[@
But I still get the 406 Not Acceptable error!
In this section, you'll create a standard synchronous Web Service consumer. You'll also investigate how the proxy class makes it look like you're working with a local object and its methods when you make method calls in the Web Service.
For this example, you'll modify the InventoryService Web Service you created in the previous chapter. This service provides a simple UnitsInStock method, which returns the available inventory for a product matching the ProductID value your method call supplies. In the following steps, you'll add code that causes the method to delay 10 seconds before returning its value to simulate the behavior of a Web Service that takes measurable time to do its work.
Follow these steps to slow down your Web Service method:
In Visual Studio .NET, load the InventoryService project you created in the previous chapter. You should be able to locate the project in the Jumpstart\InventoryService folder.
View the code for the Inventory.asmx file.
Modify the code in the UnitsInStock procedure so that it looks like this (we only show a few lines of code here to indicate the context for the new line of code):
cmd = New OleDbCommand(strSQL, cnn)
UnitsInStock = cmd.ExecuteScalar()
' Wait for a few seconds...
System.Threading.Thread.Sleep(10000)
To call this Web Service method, you pass in a product ID, and the method returns the number of items currently in inventory.
The code that you've added pauses the thread for 10 seconds. This code uses the shared Sleep method of the Thread object provided by the .NET Framework, emulating a long-running Web Service method.
A thread is a single path of execution within an application. Each application must contain at least one thread, and many applications use more than one thread to do their work. Your Web Service only uses one thread, unless you write code that creates more, and the call to Thread.Sleep causes the single thread in use to block for the specified number of milliseconds. The outcome of this is that the caller, waiting for a response from this method, waits for 10 seconds for the function's return value. You won't normally add code to your applications to slow them down, but in this case, it helps test the concept of asynchronous Web Service consumer applications.
To verify that the sample Web Service works, follow these steps:
Press F5 to run the project.
Click the UnitsInStock link on the page that's displayed.
On the test page, enter a number between 1 and 15 or so in the text box, click the Invoke button, and verify that 10 seconds or so later, you get the available inventory for the item you requested.
Save your project.
To create the sample page, for both this section and the next, follow these steps:
Create a new ASP.NET Web Application project. Set the location to Jumpstart\AsyncConsumer.
Add the controls shown in Figure 30.1 to the Webform1.aspx page. Set properties for these controls as shown in Table 30.1.
Add a Web reference to your project, referring to the InventoryService Web Service you modified in the previous section.
Select the Project, Add Web Reference menu item and enter the address for the InventoryService Web Service, using this address:
Select the Add Reference button once you've found the Web Service.
Right-click the name of the newly added Web Service, select Rename from the context menu, and rename the service to InventoryService.
Select the Project, Show All Files menu item so that you can view all the files added to your project.
Next, you'll add code to your page, calling the newly added Web Service. To begin, double-click btnSync and modify the btnSync_Click procedure, adding the following procedure call:
TestSync()
Add the TestSync procedure, shown in Listing 30.1, to your class.
Private Sub TestSync()
  Dim ws As New InventoryService.Inventory()
  Dim intResults As Integer
  Dim intProductID As Integer

  intProductID = CInt(txtProductID.Text)
  intResults = ws.UnitsInStock(intProductID)
  lblResults.Text = FormatResults( _
   intProductID, intResults)
End Sub
Add the FormatResults procedure shown in Listing 30.2, which formats the item number and inventory for display on the page (and for the Event Log item, later).
Private Function FormatResults(ByVal Item As Integer, _
 ByVal ItemCount As Integer) As String

  Dim blnSingular As Boolean

  ' Format the results.
  blnSingular = (ItemCount = 1)
  Dim strResults As String = _
   String.Format( _
   "There {0} currently {1} {2} of item {3} in stock.", _
   IIf(blnSingular, "is", "are"), _
   ItemCount, _
   IIf(blnSingular, "unit", "units"), Item)
  Return strResults
End Function
NOTE
The FormatResults function you just added doesn't do much besides "prettify" the output. It accepts the item number and the item count as parameters and then creates a string such as "There are currently 4 units of item 12 in stock." The code takes into account the embarrassing singular versus plural issue.
Finally, you can test the page. Press F5 to run the project.
Enter a value into the Product ID text box, click Get Inventory (Sync), and wait for the response. After 10 seconds or so, you should see the results displayed in the Label control on the page.
Close the Browser window and save your project.
If you look carefully at the code in the btnSync_Click procedure, you'll see that you're creating an instance of the InventoryService.Inventory class:
Dim ws As New InventoryService.Inventory()
Where did that namespace and class come from? They're both provided by the proxy class created for you when you added the Web reference. To check it out, open the Reference.vb file, hidden as a code-behind file for Reference.map in the Solution Explorer.
TIP
You won't see Reference.vb in the Solution Explorer if you didn't select the Project, Show All Files menu item. If you can't find Reference.vb, make sure you show all the files first.
If you investigate the Reference.vb file, you'll find code like this:
Namespace InventoryService
  Public Class Inventory
    Inherits System.Web.Services.Protocols. _
     SoapHttpClientProtocol
(Note that we've removed the distracting procedure attributes, which don't add much to our explanation here.) As you can see, the file provides a namespace (InventoryService) and a class (Inventory). The class inherits from SoapHttpClientProtocol, so it can call any of the methods provided by that base class. This will be important, in just a few paragraphs.
Because your project includes a file that creates the namespace and class, your code can refer to InventoryService.Inventory as if that were a local object, because it is. Even though it looks like you're referring to the Web Service class, you're actually working with the local class (often called a proxy class, because it stands in place of, as a proxy, for the real Web Service).
Now let's get back to the procedure you added. Next, the procedure calls the UnitsInStock method of the proxy class, like this:
lblResults.Text = ws.UnitsInStock( _
 CInt(txtProductID.Text)).ToString
Looking in the proxy class, you'll find a UnitsInStock procedure, like this (again, we've removed procedure attributes that affect how SOAP handles the procedure, but that doesn't affect anything at the moment):
Public Function UnitsInStock( _
 ByVal ProductID As Integer) As Integer
  Dim results() As Object = _
   Me.Invoke("UnitsInStock", New Object() {ProductID})
  Return CType(results(0), Integer)
End Function
This method, which is actually what your code is calling, calls the Invoke method provided by the base class (SoapHttpClientProtocol), which uses the URL provided in the class's constructor to call the actual Web Service method:
Public Sub New()
  MyBase.New()
  ' The base address of the service is truncated in the original listing.
  Me.Url = _
   "inventory.asmx"
End Sub
The Invoke method expects that you'll pass it two parameters:
The name of the method to call (UnitsInStock, in this case).
An array of Objects, containing the parameters to be passed to the method. In this case, it's just an array with a single element: the ProductID value passed as a parameter to the procedure.
The Invoke method returns an array of Objects, and the UnitsInStock method retrieves the first item in the array (index 0), converts it to an Integer, and returns the value as its return value.
As you can see, when you instantiate and make a call to a Web Service method, you're actually instantiating a local object and calling a method of that object that knows how to call the Web Service for you. This proxy class makes your interaction with the Web Service much simpler.
Two Sum problem in Java
Hi coders! In this tutorial, we are going to solve a problem that is related to a very common data structure known as the Array. To solve the two sum problem in Java, we will use HashMap.
The problem is like, we have given an array of integers and we have to return the indices of the two numbers so that they add up to a specific target.
There are various methods to solve this problem, one of them is a naive approach to use two loops and check pair for each number. This solution is going to give O(n^2) time complexity so we switch to a better approach that can be done by using a HashMap.
Note: Assuming that give array has exactly one solution and also we cannot use the same element twice.
HashMap method to solve two sum problem in Java
- First of all, we are going to take a HashMap of <Integer, Integer> i.e. of <key, value> pair.
- Then in the loop, we will check if the pair of the current accessed element is present in the HashMap or not.
- If found in the HashMap, we will print the indices else we will put the element in the HashMap.
Example: Given nums = [2, 7, 11, 15, 19] and target = 13, since nums[0] + nums[2] = 13, return [0, 2].
Code Implementation in Java
import java.util.*;

public class Example {
    public static void main(String[] args) {
        int[] nums = {2, 7, 11, 15, 19};
        int target = 13;
        int[] arr = twoSum(nums, target);
        for(int i = 0; i < 2; i++) {
            System.out.print("[" + arr[i] + " ]");
        }
    }

    private static int[] twoSum(int[] nums, int target) {
        int[] arr = new int[2];
        Map<Integer, Integer> map = new HashMap<Integer, Integer>();
        for(int j = 0; j < nums.length; j++) {
            Integer value = map.get(target - nums[j]);
            if(value == null) {
                /* no match found */
                map.put(nums[j], j);
            } else {
                /* pair found, updating index */
                arr[0] = value;
                arr[1] = j;
                break; // loop exit
            }
        }
        return arr;
    }
}
// The code is provided by Anubhav Srivastava
Output: [0 2]
Hence the code has the time complexity of O(n) and the space complexity is also O(n).
I hope you like the solution.
You need to put import java.util.HashMap; to execute it.
Both C++ and Fortran support a thread-private data area. The thread-private area isn't large, but it is more than adequate to hold a pointer to your large array, or a pointer to a structure that contains large array(s) or pointer(s) to large arrays.
An alternate method is to pass a pointer to a thread private context area along with function/subroutine calls.
The following is what I use in Fortran
type TypeThreadContext
  SEQUENCE
  type(TypeObject), pointer :: pObject
  type(TypeTether), pointer :: pTether
  type(TypeFSInput), pointer :: pFSInput
  integer :: LastObjectLoaded
end type TypeThreadContext

type(TypeThreadContext) :: ThreadContext
COMMON /CONTEXT/ ThreadContext
!$OMP THREADPRIVATE(/CONTEXT/)
In this implementation I hold pointers to an element within an array. The thread context could also contain a pointer to a derrived type that contains an array or pointer to an array (both allocatable).
A different technique to use is assume you have t number of threads and n number of "things" to process where each "thing" has a varying number of entities requiring temporary array space. Due to memory constraints you do not wish to allocate the worst case situation to all threads.
For this in Fortran you create a derived type (class if C++, struct if C or C++) containing thread context information including pointers to arrays:
type TypeCURCAL
  integer :: FirstInteger
  integer :: MAXSEG
  real, pointer :: CTBDUM(:)
  real, pointer :: DUMI(:)
  integer :: LastInteger
end type TypeCURCAL
You typically will have one of these types per major subroutine/function requiring scratch space. This happens to declare the data for my subroutine CRUCAL.
Then in a common module have an array of pointers to this data type
! CMODE4.F90
type(TypeCMODE4), allocatable :: cmode4(:)
! CURCAL.F90
type(TypeCURCAL), allocatable :: crucal(:)
On the entry to the subroutine CRUCIAL a function call is made
to obtain the pointer to the entry of Module.crucal(threadnumber).
Prior to returning the pointer to crucal(threadnumber) a
test is made to see if crucal had been allocated. If not
then a critical section is entered and re-test for allocated
on array crucal is made. If not allocated perform allocation
to number of threads. Next the pointer to the crucal(threadnumber)
is obtained and then the arrays within the TypeCRUCAL structure
are tested to see if they are allocated, if not allocate, if
allocated a test is made to see if the size of allocation
meets the size requirements for the particular thing being
processed by the thread. If so, exit with pointer, if not
large enough deallocate/reallocate scratch space.
Once initialized, and large enough, the function is a quick
in and out.
Jim Dempsey
What I currently have is
//I have
double myvar[5000];
//I want
//double* myvar = new double[nl];
#pragma omp parallel for private(myvar)
for(int i = 0; i < datacount; i++)
{
//Do some stuff
for(int j = 0; j < nl; j++)
{
myvar[j] = sqrt(stuff);
}
//myvar used down here
}
What I would like is to be able to dynamically declare myvar outside
of the loop, since I really have no idea how big it will be if a
user has a nonstandard data file. It would be bad form to just crash. It
seems like OpenMP would have some sort of easy mechanism for
doing this. At least that's what I'm hoping.
xray,
This may be an unintended consequence of your trying to write a simplified example.
Your sample code is declaring myvar as a local array of the full extent of work storage [5000] (I know you want to change this to fast dynamic allocation). However, notice that the parallel loop declares myvar as private, i.e. each thread stack gets a local copy of myvar to the full extent of [5000], whereas if you had n threads you would require only 5000/n+1 amount of scratch space (as i is striped per thread). Or less, depending on the chunk size arg to schedule. So that is one problem.
I may be wrong, but it seems like you want to have a scratch working array inside the parallel loop that is allocated to at least the size of the current working set (reallocated as needed).
If you intend to have each thread work on a different section of myvar then myvar should not be private to each thread. But each thread must be careful not to stomp on sections of myvar that it ought not to modify.
If you want the myvar private then consider something like the following
#define MAXcores 64 // or 32 or ??
// somewhere outside the processing function
double* myvarPointerTable[MAXcores];
int myvarSize[MAXcores];
int i;
for(i=0; i<MAXcores; ++i)
{
myvarPointerTable[i] = NULL;
myvarSize[i] = 0;
}
...
// inside the processing function
int numThreads = omp_get_num_threads();
int chunkSize = (datacount+numThreads-1) / numThreads;
bool doOnce = TRUE;
#pragma omp parallel for private(doOnce) copyin(doOnce)
for(int i_chunk = 0; i_chunk < datacount; i_chunk+=chunkSize)
{
double* myvar;
int i;
if(doOnce)
{
doOnce = FALSE;
int thread_num = omp_get_thread_num();
ASSERT(thread_num < MAXcores);
if(myvarSize[thread_num] < nl) // or is this chunkSize?
{
if(myvarPointerTable[thread_num]) delete myvarPointerTable[thread_num];
myvarPointerTable[thread_num] = new double[nl]; // or chunkSize;
myvarSize[thread_num] = nl; // or chunkSize;
}
myvar = myvarPointerTable[thread_num];
i = 0;
}
//Do some stuff
for(int j = 0; j < nl; j++)
{
myvar[j] = sqrt(stuff);
}
//myvar used down here
...
// end of loop
++i;
}
Jim Dempsey
Might be I do not completely understand your need, but to me it seems the solution is as simple as:
#pragma omp parallel
{
double* myvar = new double[nl];
#pragma omp for
for(int i = 0; i < datacount; i++)
{
//Do some stuff
for(int j = 0; j < nl; j++)
{
myvar[j] = sqrt(stuff);
}
//myvar used down here
}
}
So at the very beginning of the parallel region, _each_ thread allocates a temporary array of a required size; then the parallel loop starts where each thread uses its copy of array. Might be you will need to explicitly specify myvar as private, but otherwise I think this should work. Though I need to say I do not have significant experience with OpenMP and may be unaware of some peculiarities.
Don't forget the delete at the end of the parallel section.
However....
I do not believe that your simplified example is what you want.
new and delete are expensive operations. And if this parallel section is entered/exited many times then you should expect memory fragmentation. i.e. you have enough virtual memory, but not in one piece, so eventually an allocation fails.
IMHO a better approach (performance wise, and less fragmentation wise) is for each thread to have a private myvar array that is persistent across entry and exit of the parallel region. Then only if the array size is insufficient (or not allocated) perform the allocation.
For a n threaded system this would mean you would have n myvar arrays that would eventually grow to the largest size experienced during run time. To avoid memory fragmentation you might want to determine in advance what the worst case (largest requirement) is and preallocate the scratch arrays.
In the event that you have many such scratch arrays but only one or a few require concurrancy by the same thread, then you might want to create a pool allocation routine where each thread maintains a pool of buffers. If the pool allocation/free is simplified then there would be no (or few) requirements to call new/delete which have critical sections.
Here is a skeleton of what you might find interesting:
struct poolBuffer
{
union
{
char* cP;
int* iP;
float* fP;
double* dP;
};
union
{
int numberOfBytes;
void* padd1;
};
union
{
bool isAvailable;
void* padd2;
};
poolBuffer() {memset(this, 0, sizeof(*this)); isAvailable=TRUE;};
~poolBuffer() {ASSERT(isAvailable); if(cP) delete cP;};
}
struct myPools_struct
{
poolBuffer* Pools;
int numberOfPools;
myPools() {
Pools=NULL; numberOfPools=0;};
~myPools() {
if(Pools) delete Pools;};
init(int n) {
ASSERT(!Pools);
Pools = new poolBuffer[n];
ASSERT(Pools);
numberOfPools = n; };
char* allocate(int n);
int* allocate(int n);
float* allocate(int n);
double* allocate(int n);
void deallocate(char* cP);
void deallocate(int* iP);
void deallocate(float* fP);
void deallocate(double* dP);
};
...
char* myPools_struct::allocate(int n)
{
int bestFit = -1;
int firstAvailable = -1;
int i;
for(i=0;i< numberOfPools; ++i)
{
if(Pools[i].isAvailable)
{
if(firstAvailable < 0) firstAvailable = i;
if(!Pools[i].cP) break;
if(Pools[i].numberOfBytes>=n)
{
if(bestFit < 0 || Pools[i].numberOfBytes < Pools[bestFit].numberOfBytes)
bestFit = i;
}
}
}
if(bestFit >= 0) return Pools[bestFit].cP;
ASSERT(firstAvailable>=0); // too few of pools
if(Pools[firstAvailable].cP) delete Pools[firstAvailable].cP;
Pools[firstAvailable].cP = new char[n];
ASSERT(Pools[firstAvailable].cP);
Pools[firstAvailable].numberOfBytes = n;
return Pools[firstAvailable].cP;
}
...
double* myPools_struct::allocate(int n)
{
char* cP = allocate(n * sizeof(double));
return (double*)cP;
}
__declspec( thread ) myPools_struct* myPools = NULL;
// at start where you initialize the app
#pragma omp parallel
{
ASSERT(!myPools);
myPools = new myPools_struct;
ASSERT(myPools);
#define numberOfPoolsPerThread 10
myPools->init(numberOfPoolsPerThread); // each is empty
}
// somewhere in your app
#pragma omp parallel
{
double* myvar = myPools->allocate(nl);
#pragma omp for
for(int i = 0; i < datacount; i++)
{
//Do some stuff
for(int j = 0; j < nl; j++)
{
myvar[j] = sqrt(stuff);
} //myvar used down here
}
myPools->deallocate(myvar);
}
You can fill out the particulars. You may want to analyse memory usage to see if you should return the first available buffer, last available buffer, smallest available buffer, least recently used available buffer, most recently used available buffer.
And don't forget to return myPools at end of program.
Jim Dempsey
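For comparison, the same per-thread persistent scratch buffer can be expressed compactly with standard C++ containers. This sketch is not from the thread; nl, datacount and the sqrt call are stand-ins for the poster's actual work:

#include <omp.h>
#include <cmath>
#include <vector>

void process(int datacount, int nl)
{
    #pragma omp parallel
    {
        // One scratch vector per thread; it lives for the whole parallel
        // region and only reallocates if it needs to grow.
        std::vector<double> myvar(nl);

        #pragma omp for
        for (int i = 0; i < datacount; i++)
        {
            for (int j = 0; j < nl; j++)
                myvar[j] = std::sqrt(double(i + j)); // stand-in for "stuff"
            // ... use myvar here ...
        }
    } // each thread's vector is destroyed automatically when the region ends
}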
xray,
The class as constructed will work. However, at the expense of new/delete as you enter/leave scope of the function.
Although this conserves memory it has two potential pitfalls. 1) If you enter and exit scope a large number of times you may fracture the memory heap and then at some point an allocation (new) will fail. 2) excessive computation overhead.
Your current method may be satisfactory assuming you can live with a future memory allocation problem. If you run this code yourself then any consequences suffered can be weighed against the little bit of extra programming effort now.
On the otherhand, if you are planning on shipping your code out to customers then a little bit extra effort to avoid a potential problem might be worth the extra effort.
Jim Dempsey
shared pool environment for compile-time evaluation
There are several places you can put expressions that are evaluated during compilation:
- class tag arguments (e.g.
<import: SomeVar>)
- class var initializers (e.g.
MyClassVar := SomeVar)
- compile-time evals in methods, and in any of the above (e.g.
##(SomeVar) or
MyClassVar := ##(SomeVar))
In the VM, all of these are currently managed as compilations on UndefinedObject, with the addition to the pool search of the current namespace (which is twiddled by moving UndefinedObject). This is all very well and recursive, where
GSTParser reads MyClassVar := ##(SomeVar)
  asks new compiler to compile ##(SomeVar)
    asks new compiler to compile SomeVar
    answers method that answers SomeVar in UndefinedObject+Namespace current
    executes method
  inserts the result into a method's literals
  executes do-nothing method
finally gets SomeVar
The builtin compiler uses the same strategy for all of these, because that is what _gst_execute_statements does.
STInST would use the current class instead for #1 and #2, were #evalFor: ever sent to the driver. #3 handling is in the compiler, which doesn't have access to the driver anyway. In summary, STInST does the same as the VM compiler.
Should all these change? If so, which should change?
Note: Paolo brought this issue up when working on PoolResolution; I merely write it up here.
Hi Michel -- sorry I've been so slow to look at FEComponent.
I noticed you added in Action. That's something that was built into
FormEncode before, but got lost in the shuffle when form generation (and
Submit) went away. I think it's probably correct to put it in at this
level instead of directly in FormEncode. It might be better to use
Webware actions instead of another action system, but I don't know if
there's any conflict there. There shouldn't be, really -- components
can add new actions. Though, looking at it, there's no documentation
for this. Anyway, you'd do (in the Component subclass):
_servletMethods = ['submit_action', ...]
def actions(self):
return ['submit_action']
Is there a default action? For instance, when the user hits Enter
without hitting a button, no submit button will be triggered. Is the
first action listed in actionList the default?
It looks like processForm() calls on*Click. So the flow typically goes:
def writeHTML(self):
self.processForm() --> calls on*Click if form is submitted and valid
do stuff to write the form
then in sleepEvent parseDocument loads the page that was written and
rewrites the form with htmlfill.
But anyway, it looks pretty easy to use, and I like that parseDocument is
very output-neutral. It would be good to get this in a public
repository. I can't remember if I ever set you up with svn access for
FormEncode? This could also go in svn.w4py.org.
--
Ian Bicking / ianb@... /
I'm new to Blender / Python. I use version 2.64 to view a 3Dmodel generated, for now, by other software (Wavefront .obj format), so I have to switch over and over between generator and viewer. To speed this, I have this script in the Blender text editor:
import bpy
bpy.ops.import_scene.obj( filepath = "c:\models\model1.obj" )
bpy.data.objects["model1"].scale = (25, 25, 25)
Before running the script however, I still have to manually remove the currently displayed model1 using File > New (CTRL N). How can I automate also this in the script? It's a basic question, but I'd like to have this running while starting to learn B/P at the same time!
Thanks
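One way to script the reset step (not from the thread; this assumes the 2.6x Python API and that everything currently in the scene can be deleted before re-importing):

import bpy

# Remove whatever is currently in the scene before importing the new model.
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()

bpy.ops.import_scene.obj(filepath="c:/models/model1.obj")
bpy.data.objects["model1"].scale = (25, 25, 25)

Using forward slashes in the path also sidesteps backslash-escape surprises in Python strings.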
Re: How to populate a treeview from a dataset
- From: bob <startatbob_clegg@xxxxxxxxxxxxxxxxxxxxx>
- Date: Thu, 18 Oct 2007 14:04:29 +1300
Hi,
Haven't got time to go any deeper with this at present.
I would prefer to stay with Nicholas's logic, it is compact and
maintainable.
First point is that it is not the self referentiality (such a word?)
that is the killer but the fact that child id is not unique.
So we need to ditch or disguise the second ClassId 15 record.
My gut reaction is to separate the attributes of your data from the
position in the tree and to ditch the second record.
i.e.
Suck the data into a collection of data classes similar to myDummyLoad
say myData, one instance for each row.
MyData would have a boolean property PurchasingGroup
As you attempt to load each treenode into the dictionary ask an
additional question. "Is this myData.ChildId already present in the
keys list?"
If it is then check to see if the parentId and child id of the
canditate data is the same i.e. PurchasingGroup is true
If the answer is yes then set the PurchasingGroup property of the one
that is already in the dictionary to true. And throw the canditate in
the bin
If the answer is no then you have an error as all Purchasing Groups
are self referential.
hth
Bob
On Wed, 17 Oct 2007 17:52:40 GMT, xmail123@xxxxxxxxx wrote:
Bob thank you very much for your help and input..
My data will always be sorted as shown in the sample. I was hoping
that would make things easer.
15 is a parent and a child of itself. This is a case where, for
example, a business is a part of a group. Say a purchasing group.
And that same business is the headquarters for the purchasing group.
If the intent is to depict the purchasing group (which is what I have
to do) you end up with
15 Null - root tree node, purchasing group HQ
15 15 - member of purchasing group
16 15 - member of purchasing group
Etc.
I looked through the information you provided. I have a handle on
most of it. But the show stopper is not being able to handel the
child = parent data condition.
Prior to receiving your post I found nthis approach.
Building a dynamic Tree
I believe it will populate the treeview even with the child = parent
data condition. Please look at the code and tell me if you think I am
correct.
The problem is it uses the Info column to get the ID and I think it
will only accomidate the one root tree node. I am thinking it can be
tweeked to work with my data. But being a novis I am not sure how to
accomplish this.
Your description of the approach you suggested was very clear. If I
am correct and this will work can you give me some pointers on how to
modify this to work with my data?
namespace DynamicTree
{
public partial class Form1 : Form
{
DataTable tbl;
DataColumn col;
public Form1()
{
InitializeComponent();
InitTable();
initTableData();
}
private void InitTable()
{
tbl = new DataTable();
col = new DataColumn("ID");
tbl.Columns.Add(col);
col = new DataColumn("PID");
tbl.Columns.Add(col);
col = new DataColumn("Info");
tbl.Columns.Add(col);
tbl.AcceptChanges();
}
private void initTableData()
{
DataRow r;
r = tbl.NewRow();
r["ID"] = "0";
r["PID"] = "-1";
r["Info"] = "Root";
tbl.Rows.Add(r);
r = tbl.NewRow();
r["ID"] = "1";
r["PID"] = "0";
r["Info"] = "Menu1";
tbl.Rows.Add(r);
r = tbl.NewRow();
r["ID"] = "10";
r["PID"] = "0";
r["Info"] = "Menu10";
tbl.Rows.Add(r);
r = tbl.NewRow();
r["ID"] = "2";
r["PID"] = "0";
r["Info"] = "Menu2";
tbl.Rows.Add(r);
r = tbl.NewRow();
r["ID"] = "3";
r["PID"] = "0";
r["Info"] = "Menu3";
tbl.Rows.Add(r);
r = tbl.NewRow();
r["ID"] = "4";
r["PID"] = "1";
r["Info"] = "Menu4";
tbl.Rows.Add(r);
r = tbl.NewRow();
r["ID"] = "5";
r["PID"] = "4";
r["Info"] = "Menu5";
tbl.Rows.Add(r);
r = tbl.NewRow();
r["ID"] = "6";
r["PID"] = "5";
r["Info"] = "Menu6";
tbl.Rows.Add(r);
r = tbl.NewRow();
r["ID"] = "7";
r["PID"] = "2";
r["Info"] = "Menu7";
tbl.Rows.Add(r);
r = tbl.NewRow();
r["ID"] = "11";
r["PID"] = "6";
r["Info"] = "Menu11";
tbl.Rows.Add(r);
r = tbl.NewRow();
r["ID"] = "8";
r["PID"] = "10";
r["Info"] = "Menu8";
tbl.Rows.Add(r);
r = tbl.NewRow();
r["ID"] = "9";
r["PID"] = "3";
r["Info"] = "Menu9";
tbl.Rows.Add(r);
r = tbl.NewRow();
r["ID"] = "12";
r["PID"] = "7";
r["Info"] = "Menu12";
tbl.Rows.Add(r);
}
private void Form1_Load(object sender, EventArgs e)
{
TreeNode r = new TreeNode();
r.Text = "Root";
initTreeView(r);
tree.Nodes.Add(r);
tree.ExpandAll();
}
private void initTreeView(TreeNode N)
{
DataTable temp = new DataTable();
col = new DataColumn("ID");
temp.Columns.Add(col);
col = new DataColumn("PID");
temp.Columns.Add(col);
col = new DataColumn("Info");
temp.Columns.Add(col);
temp.AcceptChanges();
string id = getID(N);
foreach (DataRow r1 in tbl.Rows)
{
if (r1["PID"].ToString() == id)
{
DataRow r2 = temp.NewRow();
r2["ID"] = r1["ID"].ToString();
r2["PID"] = r1["PID"].ToString();
r2["Info"] = r1["Info"].ToString();
temp.Rows.Add(r2);
temp.AcceptChanges();
}
}
foreach(DataRow r3 in temp.Rows)
{
TreeNode tn = new TreeNode();
tn.Text = r3["Info"].ToString();
initTreeView(tn);
N.Nodes.Add(tn);
}
}
private string getID(TreeNode N)
{
foreach (DataRow r in tbl.Rows)
{
if (r["Info"].ToString() == N.Text)
return r["ID"].ToString();
}
return "";
}
}
}
On Thu, 18 Oct 2007 01:06:50 +1300, bob clegg
<cutbob_clegg@xxxxxxxxxxxxxxxxxxx> wrote:
Hi,
Isn't there a problem here with child 15?
It has two entries in the data and is self referential.
I assume it is a typo although the heirarchy makes it look
intentional. A way of declaring a leaf node?
Anyway, stepping through Nicholas's logic.
He is suggesting a dictionary of nodes.
Each node has a child node collection.
i.e. each node is a tree in itself.
So
Step 1
Create a node. Attach your first row of data to it as a payload.
You can use the tag property for this.
Get the child value of this row of data from the data and use it as
the key for your dictionary entry.
Your dictionary will be keyed on an integer and its value will be a
treenode (System.Collections.Generic.Dictionary)
So...
treenode t = new treenode();
Dictionary<int, treenode> d = new Dictionary<int, treenode>();
t.tag = myfirstdataobject //be it a string made from the data or a
class with the data in it. Up to you what you want to hang on the
tree.
d.add(1,t); in goes the first node
Now the next step is
make a new treenode say t1 and attach the second row of data to it.
Get its parent id from the data.
Ask the dictionary if it has the parent.
Seeing the parent id of the second data row is 1 then yes the
dictionary has it.
Add t1 to the node collection of t. other wise just add t1 to the
dictionary
if (d.containskey(1))
d[1].nodes.add(t1); //We are adding t1 into the nodes
collection of its parent
//now we stick t1 into the dictionary.
d.add(2,t1); //The key is two because that is the value retrieved from
your data, not because this is the second entry.
This process will work until you get to the second data row with 15 as
a child id. This will break the dictionary when you try to add it.
Don't know what you do here. Depends on the meaning of the data as to
what your options are.
you will only need a single pass with your perfectly sorted data.
but with unsorted data a number of passes would be necessary.
Once you have all of your nodes processed in the dictionary hang them
on your treeview.
To get your nodes from the dictionary to the tree you need to clone
the root nodes and add them to the treeview nodes collection.
as per this snippet where I have represented your data with class
myDummyload
private void BuildTree()
{
myDummyload dl1 = new myDummyload(1, 0);//childid,parentid
myDummyload dl2 = new myDummyload(2, 1);
List<myDummyload> myDataCollection = new
List<myDummyload>();
myDataCollection.Add(dl1);
myDataCollection.Add(dl2);
Dictionary<Int32,TreeNode> d = new
Dictionary<Int32,TreeNode>();
IEnumerator<myDummyload> en =
myDataCollection.GetEnumerator();
while (en.MoveNext())
{
TreeNode t = new TreeNode("I am node " +
en.Current.ChildId.ToString());
t.Tag = en.Current;
if (d.ContainsKey(((myDummyload)t.Tag).ParentId))
d[((myDummyload)t.Tag).ParentId].Nodes.Add(t);
d.Add(((myDummyload)t.Tag).ChildId, t);
}
IEnumerator<TreeNode> en1 = d.Values.GetEnumerator();
while (en1.MoveNext())
if (en1.Current.Parent == null)
this.treeView1.Nodes.Add((TreeNode)en1.Current.Clone());
}
Once you exit this code your treeview should be displaying your nodes
hth
bob
On Wed, 17 Oct 2007 03:58:11 GMT, xmail123@xxxxxxxxx wrote:
I'm really new at this C#. I'm not sure I'm understanding your
description. It sounds like it will populate a table in a Hierarchy
order. I am not understanding how this populates a treeview? I tried
writing some pseudocode but was completely lost. Could you provide a
little more detail for this novice.
Thanks
On Tue, 16 Oct 2007 22:08:52 -0400, "Nicholas Paldino [.NET/C# MVP]"
<mvp@xxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
Well, the one I originally stated would work fine. If you select the
data in the order you specified, then you can do the following:
- Create a dictionary, it is keyed on the Child value, and contains the tree
node.
- Cycle through the data
- Create a new tree node with the information
- Place it in the dictionary, using the Child value as the key
- Check to see if the Parent id is contained in the dictionary
- If it is then get the node from the dictionary and set the parent
of the current node to the parent node.
That's about it.
--
- Nicholas Paldino [.NET/C# MVP]
- mvp@xxxxxxxxxxxxxxxxxxxxxxxxxxx
<xmail123@xxxxxxxxx> wrote in message news:471521f2.13968406@xxxxxxxxxxxx
Sounds simple to an MVP. LOL
THere is what I am guessing I will need to do let me know if this
logic sounds right.
1. From my ds load all the root tree nodes (Parent = Null) into a temp
table.
2. Step through this temp table. Create a node for the first row.
3. Find all the children of the node just created and place them into
a temp table. If no children are found continue with step number 5.
4. Continue with step number 2.
5. Since no children were found move to the next row in the current
table. Continue with step number 2.
6. This process continues until the last root tree node is completely
processed.
I am certainly open to a better approach.
On Tue, 16 Oct 2007 16:05:59 -0400, "Nicholas Paldino [.NET/C# MVP]"
<mvp@xxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
You can't directly bind a data source to a tree view. The reason for
this is that most data sources deal with vectors, not hierarchical data
sets.
It looks like you are getting the result set ordered by Parent, then
Child, which is good.
What I would do is cycle through this set, and create a new tree view
node with the child id. As you create nodes, place them in a dictionary
with their ids as a key. Then, when you create a child node, get the
parent
node (which you have the key for) from the dictionary, and add the child
to
it.
--
- Nicholas Paldino [.NET/C# MVP]
- mvp@xxxxxxxxxxxxxxxxxxxxxxxxxxx
<xmail123@xxxxxxxxx> wrote in message news:471513c1.10335203@xxxxxxxxxxxx
How to populate a treeview from a dataset
I am very new to C#. I need to create a Windows form that will read a
SQL table and populate a treeview. I can connect to the DB, create
the dataadapter, populate a data set. The problem is how to use the
dataset to populate a treeview.
I have looked at a few examples here but none use a dataset, or the
data structure was different and I could not modify to work with my
data or the examples were more than I needed and too complex for a
beginner. Can someone suggest a URL, book, or show me an example of
code that simply takes the data as shown and populates a treeview.
I would really appreciate the help.
Thank you.
I have a SQL Server 2005 table containing this data shown below.
Child Parent Depth Hierarchy
1 NULL 0 01
2 1 1 01.02
5 2 2 01.02.05
6 2 2 01.02.06
3 1 1 01.03
7 3 2 01.03.07
11 7 3 01.03.07.11
14 11 4 01.03.07.11.14
12 7 3 01.03.07.12
13 7 3 01.03.07.13
8 3 2 01.03.08
9 3 2 01.03.09
4 1 1 01.04
10 4 2 01.04.10
15 NULL 0 15
15 15 1 15.15
16 15 1 15.16
18 16 2 15.16.18
17 15 1 15.17
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.csharp/2007-10/msg02459.html
agree with him if we are using .NET Framework 4.0 or a higher version; for earlier versions, JavaScriptSerializer is still a good option.
How to convert C# object into JSON string with JSON.NET framework:
For this I am going to use the same application that I used in the previous post. Following is an Employee class with two properties, first name and last name.
public class Employee
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
I have created the same object of the Employee class as I created in the previous post, like below.
Employee employee=new Employee {FirstName = "Jalpesh", LastName = "Vadgama"};
Now it’s time to add the JSON.NET NuGet package. You can install the NuGet package via the Package Manager Console, as shown below.
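The exact command isn't reproduced in the post; in the NuGet Package Manager Console it would typically be:
Install-Package Newtonsoft.Json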
Now we are done with adding the NuGet package. Following is the code I have written to convert a C# object into a JSON string.
string jsonString = Newtonsoft.Json.JsonConvert.SerializeObject(employee);
Console.WriteLine(jsonString);
Let's run the application; following is the output, as expected.
That’s it. It’s very easy. Hope you like it. Stay tuned for more.
https://dzone.com/articles/how-convert-c-object-json
[ To any NSA and FBI agents reading my email: please consider
[ whether defending the US Constitution against all enemies,
[ foreign or domestic, requires you to follow Snowden's example.

Ok. But all my namespace proposal is, is basically this, with automatic 'load-read-alias'.

The virtue of my proposal is that it does not change the meaning of symbols, obarrays, or any built-in functions. It has no effect on any file that doesn't use it, so it can't break anything, and if you don't choose to use this facility, you don't need to know about it.

-- Dr Richard Stallman
President, Free Software Foundation
51 Franklin St, Boston MA 02110, USA
Skype: No way! That's nonfree (freedom-denying) software. Use Ekiga or an ordinary phone call.
https://lists.gnu.org/archive/html/emacs-devel/2013-07/msg00840.html
Spark Shell For Interactive Analysis
The Spark shell makes it easy to learn the API and provides a powerful tool for interactive analysis of data. The Spark shell is available in Python or Scala. Note that Scala runs on the Java Virtual Machine (JVM).
Start the shell by navigating to the Spark directory and then executing the command for the language of your choice, as shown below.
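From the Spark root directory, these commands are typically:
./bin/spark-shell   # Scala shell
./bin/pyspark       # Python shell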
The primary abstraction for Spark is the RDD (Resilient Distributed Dataset) which is a distributed collection of items. To create RDDs, we can transform other RDDs or create them from the Hadoop InputFormats.
The README file contained in the root directory of Spark has some text. Let us try to make an RDD from this text by executing the following command:
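In the Python shell, this would typically be (assuming the shell was started from the Spark root directory so README.md resolves):
textFile = sc.textFile("README.md")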
With RDDs, there are actions which return values, and there are transformations which will return pointers to other RDDs. Consider the actions given below:
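For example, using the textFile RDD created above:
textFile.count()  # number of lines in this RDD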
The above command should give us the number of items which are contained in our RDD. Consider the next command given below:
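For example:
textFile.first()  # first line in this RDD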
The command given above will give us the first item which is contained in our RDD.
We now need to make use of a transformation. The filter transformation will be used for returning a new RDD having a subset of the items which are contained in the file. This is shown below:
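A typical PySpark version of this transformation (the variable name linesWithSpark is just illustrative) would be:
linesWithSpark = textFile.filter(lambda line: "Spark" in line)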
The actions and the transformations can then be chained together as shown below:
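For example, the filter and count can be combined along these lines:
textFile.filter(lambda line: "Spark" in line).count()  # how many lines contain "Spark"?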
The actions and transformations for RDD can be used for carrying out of more complex computations. Consider a scenario in which we need to find the line which is having the most words. This can be done by use of the command given below:
textFile.map(lambda line: len(line.split())).reduce(lambda x, y: x if (x > y) else y)
With the above line of code, the line will first be mapped to an integer value, and a new RDD will be created. The function “reduce” will then be called on the newly created RDD in that line, so that the largest line count is found. Consider the example given below:
def max(x, y):
    if x > y:
        return x
    else:
        return y
Once you have written the above, the following command should then be executed:
textFile.map(lambda line: len(line.split())).reduce(max)
MapReduce is one of the most common data flow patterns supported in Spark. It can easily be implemented in Spark as shown below:
wc = textFile.flatMap(lambda line: line.split()).map(lambda word: (word, 1)).reduceByKey(lambda x, y: x+y)
Note that in the above example, we have combined several transformations for the computation of the per word count in our file as an RDD of pairs of string and integer.
The “collect” action can be used for collection of the word count in the shell as shown below:
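Using the wc RDD built above, this would plausibly be:
wc.collect()  # brings the (word, count) pairs back to the driver as a list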
Caching
With Spark, one can pull data sets into a cluster-wide in-memory cache. This becomes very important in circumstances where the data has to be accessed repeatedly. A good example is when the algorithm you are using is iterative. The data will not have to be fetched from disk or recomputed each time, which involves much overhead, but instead comes from the cache, which is faster and has much less overhead. We need to demonstrate how caching can be done. Suppose that you want to mark a particular data set (RDD) to be cached; this can be done as follows:
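Assuming the linesWithSpark RDD from the filter example earlier, this would look something like:
linesWithSpark.cache()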
Note that you do not have to do caching on files which have very few lines. It is recommended that this should be done on files which have large data sets. Even if the data sets have been distributed across multiple nodes, the functions can be applied on them. The process can also be done interactively.
Writing Self-Contained Applications
Sometimes, you might need to use the Spark API so as to create self-contained applications. This can be done in Java, Scala, and Python.
The Python API, that is, PySpark, can be used for writing self contained applications.
Scala
We need to create a simple self contained app with Scala. The code given below can be used for that purpose:
/* MyApp.scala */
import org.apache.spark.SparkContext._
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf

object MyApp {
  def main(args: Array[String]) {
    val lFile = "YOUR_SPARK_HOME/README.md" // The file should be present in your system
    val con = new SparkConf().setAppName("My Application")
    val sc = new SparkContext(con)
    val lData = sc.textFile(lFile, 2).cache()
    val nAs = lData.filter(line => line.contains("x")).count()
    val nBs = lData.filter(line => line.contains("y")).count()
    println("Lines with x: %s, Lines with y: %s".format(nAs, nBs))
  }
}
With the above example, the number of lines containing the letter "x" and the number containing the letter "y" in the README file will be counted. The parameter "YOUR_SPARK_HOME" in the above code should be replaced with the location of Spark on your local system, otherwise you will get an error. You will also notice that we have initialized our own SparkContext, unlike what we have been doing in the other examples. An sbt build file that lists Spark as a dependency will also be created, as shown below:
name := "My Project"
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.5.0"
We must lay out the app according to the typical directory structure. After that, a JAR package containing the application code can be created, and then we can run the program.
Java
The following code can be used for creation of a simple Spark application in the Java programming language:
/* MyApp.java */
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.*;
import org.apache.spark.api.java.function.Function;

public class MyApp {
  public static void main(String[] args) {
    String lFile = "SPARK_HOME/README.md"; // This file should be available in your local system.
    SparkConf con = new SparkConf().setAppName("My Application");
    JavaSparkContext sc = new JavaSparkContext(con);
    JavaRDD<String> lData = sc.textFile(lFile).cache();
    long nAs = lData.filter(new Function<String, Boolean>() {
      public Boolean call(String s) { return s.contains("x"); }
    }).count();
    long nBs = lData.filter(new Function<String, Boolean>() {
      public Boolean call(String s) { return s.contains("y"); }
    }).count();
    System.out.println("Lines with x: " + nAs + ", lines with y: " + nBs);
  }
}
Similarly, the program given above will count the number of lines in the file README for the Spark which have the letters “x” and “y”. The parameter “SPARK_HOME” has to be replaced with the location of the Spark in your system, otherwise, the program will not run. Note that we have also initialized a SparkContext, unlike in the other cases.
For the purpose of building the application, a Maven "pom.xml" file should also be written, and this will be used for listing Spark as a dependency. The artifacts for Spark are tagged with a version for Scala. This is shown below:
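A dependency entry consistent with the sbt file above (Spark 1.5.0 built for Scala 2.10) would look roughly like this:
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.5.0</version>
</dependency>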
That is how it looks.
Python
A simple Spark application can also be created using the Python API, that is, PySpark. The following code can be used for creating the application "MyApp.py":
"""MyApp.py"""
from pyspark import SparkContext

lFile = "SPARK_HOME/README.md"  # This file should be available in your local system.
sc = SparkContext("local", "My App")
lData = sc.textFile(lFile).cache()
nAs = lData.filter(lambda s: 'x' in s).count()
nBs = lData.filter(lambda s: 'y' in s).count()
print("Lines with x: %i, lines with y: %i" % (nAs, nBs))
The above program will be used for counting the number of lines having the letters “x” and “y” in the file README of the Spark. Again, do not forget to replace the parameter “SPARK_HOME” with the location of the Spark installed on your system.
Consider the code given below, which shows how a simple job can be implemented in Java:
/*** MyJob.java ***/
import spark.api.java.*;
import spark.api.java.function.Function;

public class MyJob {
  public static void main(String[] args) {
    String lFile = "/var/log/syslog"; // The file should be available in your local system
    JavaSparkContext sc = new JavaSparkContext("local", "My Job", "$SPARK_HOME",
        new String[]{"target/my-project-1.0.jar"});
    JavaRDD<String> lData = sc.textFile(lFile).cache();
    long nAs = lData.filter(new Function<String, Boolean>() {
      public Boolean call(String s) { return s.contains("x"); }
    }).count();
    long nBs = lData.filter(new Function<String, Boolean>() {
      public Boolean call(String s) { return s.contains("y"); }
    }).count();
    System.out.println("Lines with x: " + nAs + ", lines with y: " + nBs);
  }
}
https://mindmajix.com/spark/interactive-analysis-with-the-apache-spark-shell
New submission from Case Van Horsen <casevh at gmail.com>:

I've ported the GMPY module to Python 3 and found a problem comparing Fraction to gmpy.mpq. mpq is the rational type in gmpy and knows how to convert a Fraction into an mpq. All operations appear to work properly except "Fraction == mpq". "mpq == Fraction" does work correctly. gmpy's rich comparison routine recognizes the other argument as Fraction and converts to an mpq value properly. However, when "Fraction == mpq" is done, the Fraction argument is converted to a float before gmpy's rich comparison is called.

The __eq__ routine in fractions.py is:

def __eq__(a, b):
    """a == b"""
    if isinstance(b, numbers.Rational):
        return (a._numerator == b.numerator and
                a._denominator == b.denominator)
    if isinstance(b, numbers.Complex) and b.imag == 0:
        b = b.real
    if isinstance(b, float):
        return a == a.from_float(b)
    else:
        # XXX: If b.__eq__ is implemented like this method, it may
        # give the wrong answer after float(a) changes a's
        # value. Better ways of doing this are welcome.
        return float(a) == b

Shouldn't __eq__ return NotImplemented if it doesn't know how to handle the other argument? I changed "return float(a) == b" to "return NotImplemented" and GMPY and Python's test suite passed all tests. I used the same logic for comparisons between gmpy.mpf and Decimal and they all work correctly. Decimal does return NotImplemented when it can't convert the other argument. (GMPY 1.10 alpha2 fails due to this issue.)

----------
components: Library (Lib)
messages: 90211
nosy: casevh
severity: normal
status: open
title: Fraction fails equality test with a user-defined type
type: behavior
versions: Python 3.1

_______________________________________
Python tracker <report at bugs.python.org>
<>
_______________________________________
https://mail.python.org/pipermail/new-bugs-announce/2009-July/005314.html
The easiest way to get your custom application metrics into Datadog is to send them to DogStatsD, a metrics aggregation service bundled with the Datadog Agent. DogStatsD implements the StatsD protocol and adds a few Datadog-specific extensions:
Note: DogStatsD does NOT implement the following from StatsD:
Note: Any StatsD client works just fine, but using the Datadog DogStatsD client gives you a few extra features.
DogStatsD accepts custom metrics, events, and service checks over UDP and periodically aggregates and forwards them to Datadog. Because it uses UDP, your application can send metrics to DogStatsD and resume its work without waiting for a response. If DogStatsD ever becomes unavailable, your application won’t skip a beat.
As it receives data, DogStatsD aggregates multiple data points for each unique metric into a single data point over a period of time called the flush interval. Let’s walk through an example to see how this works.
Suppose you want to know how many times your Python application is calling a particular database query. Your application can tell DogStatsD to increment a counter each time the query is called:
def query_my_database():
    dog.increment('database.query.count')
    # Run the query ...
If this function executes one hundred times during a flush interval (ten seconds, by default), it sends DogStatsD one hundred UDP packets that say “increment the counter ‘database.query.count’”. DogStatsD aggregates these points into a single metric value—100, in this case—and sends it to Datadog where it is stored and available for graphing alongside the rest of your metrics.
First, edit your
datadog.yaml file to uncomment the following lines:
use_dogstatsd: yes
...
dogstatsd_port: 8125
Then restart your Agent.
Once done, grab the DogStatsD client library for your application language and you’ll be ready to start hacking. You can use any generic StatsD client to send metrics to DogStatsD, but you won’t be able to use any of the Datadog-specific features mentioned above.
By default, DogStatsD listens on UDP port 8125. If you need to change this, configure the
dogstatsd_port option in the main Agent configuration file:
# Make sure your client is sending to the same port. dogstatsd_port: 8125
Restart DogStatsD to effect the change.
While StatsD only accepts metrics, DogStatsD accepts all three major data types Datadog supports: metrics, events, and service checks. This section shows typical use cases for each type.
Each example is in Python using datadogpy, but each data type shown is supported similarly in other DogStatsD client libraries.
The first four metric types—gauges, counters, timers, and sets—are familiar to StatsD users. The last one—histograms—is specific to DogStatsD.
Gauges track the ebb and flow of a particular metric value over time, like the number of active users on a website:
from datadog import statsd

statsd.gauge('mywebsite.users.active', get_active_users())
Counters track how many times something happens per second, like page views:
from datadog import statsd

def render_page():
    statsd.increment('mywebsite.page_views')  # add 1
    # Render the page...
With this one line of code we can start graphing the data:
Timers measure how long something takes, for example how long it takes to render a page:

from datadog import statsd

@statsd.timed('mywebsite.page_render.time')
def render_page():
    # Render the page...
or with a context manager:
from datadog import statsd

def render_page():
    # First some stuff we don't want to time
    boilerplate_setup()
    # Now start the timer
    with statsd.timed('mywebsite.page_render.time'):
        # Render the page...
In either case, as DogStatsD receives the timer data, it calculates the statistical distribution of render times and sends the resulting metrics to Datadog. Under the hood, DogStatsD actually treats timers as histograms; whether you send timer data using the methods above or send it as a histogram (see below), you’ll be sending the same data to Datadog.
Histograms calculate the statistical distribution of any kind of value. Though it would be less convenient, you could measure the render times in the previous example using a histogram metric:
from datadog import statsd

def handle_file(file, file_size):
    # Handle the file...
    statsd.histogram('mywebsite.user_uploads.file_size', file_size)
    return
Since histograms are an extension to StatsD, use a DogStatsD client library.
Since the overhead of sending UDP packets can be too great for some performance intensive code paths, DogStatsD clients support sampling, i.e. only sending metrics a percentage of the time. The following code sends a histogram metric only about half of the time:
dog.histogram('my.histogram', 1, sample_rate=0.5)
Before sending the metric to Datadog, DogStatsD uses the
sample_rate to
correct the metric value, i.e. to estimate what it would have been without sampling.
Sample rates only work with counter, histogram, and timer metrics.
DogStatsD can emit events to your Datadog event stream. For example, you may want to see errors and exceptions in Datadog:
from datadog import statsd

def render_page():
    try:
        # Render the page...
        # ..
    except RenderError as err:
        statsd.event('Page render error!', err.message, alert_type='error')
Finally, DogStatsD can send service checks to Datadog. Use checks to track the status of services your application depends on:
from datadog import statsd ...
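A minimal sketch with the datadogpy client (the check name and status value here are illustrative; 0 means OK in the DogStatsD status convention):

from datadog import statsd

# Report that a dependency (e.g. a database) is reachable
statsd.service_check('mywebsite.database.can_connect', 0, message='Connection OK')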
Since tagging is an extension to StatsD, use a DogStatsD client library.
This section specifies the raw datagram format for each data type DogStatsD accepts. You don’t need to know this if you’re using any of the DogStatsD client libraries, but if you want to send data to DogStatsD without the libraries or you’re writing your own library, here’s how to format the data.
metric.name:value|type|@sample_rate|#tag1:value,tag2
metric.name — a string with no colons, bars, or @ characters. See the metric naming policy.
value — an integer or float.
type — c for counter, g for gauge, ms for timer, h for histogram, s for set.
@sample_rate (optional) — a float between 0 and 1, inclusive. Only works with counter, histogram, and timer metrics; Datadog uses it to correct for sampling. Default is 1.
#tags (optional) — a comma-separated list of tags. The colon in tags is part of the tag list string and has no parsing purpose like for the other parameters. No default.
For Linux and other Unix-like OS, we use Bash. For Windows we need Powershell and powershell-statsd, a simple Powershell function that takes care of the network bits for us.
The idea behind DogStatsD is simple: create a message that contains information about your metric/event, and send it to a collector over UDP on port 8125. Read more about the message format.
The format for sending metrics is
metric.name:value|type|@sample_rate|#tag1:value,tag2, so let’s go ahead and send datapoints for a gauge metric called custom_metric with the shell tag. We use a locally installed Agent as a collector, so the destination IP address is 127.0.0.1.
On Linux:
vagrant@vagrant-ubuntu-14-04:~$ echo -n "custom_metric:60|g|#shell" >/dev/udp/localhost/8125
or
vagrant@vagrant-ubuntu-14-04:~$ echo -n "custom_metric:60|g|#shell" | nc -4u -w0 127.0.0.1 8125
On Windows:
PS C:\vagrant> .\send-statsd.ps1 "custom_metric:123|g|#shell" PS C:\vagrant>
On any platform with Python (on Windows, the Agent’s embedded Python interpreter can be used, which is located at
C:\Program Files\Datadog\Datadog Agent\embedded\python.exe):
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP
sock.sendto("custom_metric:60|g|#shell", ("localhost", 8125))
The format for sending events is:
_e{title.length,text.length}:title|text|d:date_happened|h:hostname|p:priority|t:alert_type|#tag1,tag2.
Here we need to calculate the size of the event’s title and body.
On Linux:
vagrant@vagrant-ubuntu-14-04:~$ echo -n "_e{20,32}:Event from the shell|This was sent from a Linux shell|#shell,bash" >/dev/udp/localhost/8125
On Windows:
PS C:\vagrant> $title = "Event from the shell" PS C:\vagrant> $text = "This was sent from Powershell!" PS C:\vagrant> .\send-statsd.ps1 "_e{$($title.length),$($text.Length)}:$title|$text|#shell,powershell"
https://docs.datadoghq.com/developers/dogstatsd/
NegativeArraySizeException
The NegativeArraySizeException is one of the rarely occurring exceptions in Java programming. Let's have a brief look at this exception.
If you glance at the title, you may get a hint that this exception is related to the size of an array. Yes, of course. It is related to the size of the array.
We have studied that, when defining an array in Java (and in almost all programming languages), we should give a positive value for the size of the array. As we have read, we are used to giving only positive values.
But I guess we may not know what will happen if we give a negative value for the size of the array.
Even if you specify a negative value, the compiler will accept it and will not show any error. The compiler only verifies that the value given for the size is an integer.
It does not check whether the given value is positive at compile time; instead, the JVM throws this NegativeArraySizeException at runtime.
We programmers will not give a negative value for an array size intentionally, but we may still be responsible for assigning one.
There are certain situations in which the value for the size of an array goes negative. For example, you could have assigned the result of a mathematical calculation as the size value. If the calculation results in a negative value, then you would land in an embarrassing situation.
So I suggest you check that the result of a computation, or a value read from a stream, is positive before assigning it as the size of an array, as sketched below.
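A rough illustration of that check (computeSize() and the variable names are made up for the example):

int size = computeSize();  // or a value read from a stream
if (size < 0) {
    throw new IllegalArgumentException("Array size must not be negative: " + size);
}
int[] data = new int[size];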
Here I have included an example program that gives you an idea of when this exception is thrown.
Program:
import java.util.*;
import java.io.*;
public class Stacktest
{
    public static void main(String args[]) throws IOException
    {
        int c[] = new int[-2];
        Scanner in = new Scanner(new InputStreamReader(System.in));
        int b = in.nextInt();
        int a[] = new int[b];
    }
}
output:
Exception in thread "main" java.lang.NegativeArraySizeException
at Stacktest.main(Stacktest.java:7)
In the above program I have given -2 as the size of the array c[], which is the reason this exception is thrown. Note that the exception occurs as soon as that line executes, before the program even reads a value for b; entering a negative value for b would trigger the same exception on the last line.
This is great. Is there any way you could post an example of code showing how to fix it?
import java.util.*;
public class neg_arr_size_exep
{
    public static void main(String args[])
    {
        Scanner sc = new Scanner(System.in);
        try
        {
            int[] arr = new int[-2];
            for (int i = 0; i < arr.length; i++)
            {
                System.out.println("enter " + i + " th element");
                arr[i] = sc.nextInt();
            }
            for (int i = 0; i < arr.length; i++)
            {
                System.out.println(arr[i]);
            }
        }
        catch (NegativeArraySizeException e)
        {
            System.out.println("caught and thrown");
        }
    }
}
http://craftingjava.blogspot.com/2012/07/handling-javalangnegativearraysizeexcep.html?showComment=1367037999620
My app relies on SQLObject to create its tables initially. This has the
problem that there's no way to specify indexes for each table.
This patch (against SVN head) implements an Index() class which allows
you to add indexes. Its use is pretty simple:
from sqlobject import *
class Foo(SQLObject):
foo = IntCol()
bar = StringCol()
biff = IntCol()
idx1 = Index(foo) # simple index
idx2 = Index(bar, unique=True) # simple unique index
idx3 = Index((biff, foo)) # multi-column index
idx4 = Index(((biff, 4),)) # only the first 4 chars of biff are used
There are no operations which can be applied to Indexes; they're used
implicitly (one assumes) during queries. The only time Index() has an
actual effect is during createTable.
I've only implemented support in the databases I have on hand: SQLite
and MySQL. Adding createIndexSQL for the other database types should be
easy.
This is also the first time I've dug around in SQLObject's innards, so I'm
not really sure if I've done things right. The mapping from a Col to
SOCol via the name seems a bit tortured.
Does this look like an acceptable way of dealing with this problem?
J
http://sourceforge.net/p/sqlobject/mailman/sqlobject-discuss/thread/1090570350.7021.15.camel@localhost/
Section 1. Before you begin
Objectives
The objective of this tutorial is to teach you how to configure Web service message-level security of Java API for XML Web Services 2.1 (JAX-WS) running on WebSphere Application Server 7 using the Rational Application Developer 7.5.2 integrated development environment (IDE). To achieve that objective, we will teach you the following tasks:
- How to create a JAX-WS service provider using annotations.
- How to create a standalone JAX-WS client.
- How to monitor the SOAP messages using the TCP/IP Monitor.
- How to customize a WS-Security policy set in the WebSphere Application Server Administration Console.
- How to customize a policy set binding in the Administration Console.
- How to export policy sets and bindings from the Administration Console.
- How to generate X509 asymmetric keys and use them with your customized policy set bindings.
- How to import policy sets and bindings into the Rational Application Developer IDE.
- How to attach policy sets to Web service clients and servers using the Rational Application Developer IDE.
- How to customize a client-side policy set binding using the Rational Application Developer IDE.
- How to use the UsernameToken (UNT) profile to add credentials to the SOAP header.
- How to use the UNT to authenticate against the WebSphere Application Server user repository.
About the example code
Our objective with this tutorial is to demonstrate message-level security, so we use a simple “HelloWorld” example so that you are not distracted from that main objective.
Prerequisites
While the tutorial is written in a very meticulous, step-by-step fashion to ensure that you can follow along, we assume that you are a Java programmer that is conceptually aware of Web services. We have written the article in a fashion that allows you to follow along visually without downloading any code or missing any magical steps hidden in prebuilt files. If you want to follow along programmatically by working through the example code that is provided, you will need a copy of Rational Application Developer for WebSphere Software V7.5.2. Additionally, you will need to install the WebSphere Application Server V7.0 test environment that comes packaged with Rational Application Developer 7.5.2.
Since there is a great deal of literature on JAX-WS and Web services in general, we will not cover that ground in order to reduce the size of this tutorial. However, we do recommend the following literature to learn about Web services and using JAX-WS:
- Web services hints and tips: JAX-RPC versus JAX-WS series
- JAX-WS client APIs in the Web Services Feature Pack for WebSphere Application Server V6.1 series
Section 2. Introduction to message-level security
Transport-level security (e.g. HTTPS) is a point-to-point security model where the channel is protected between two parties. However, many times the service consumer and service provider are separated by intermediaries (e.g. an Enterprise Service Bus). In situations like these, message-level security can provide an end-to-end security solution. Figure 1 depicts how message-level security can provide an end-to-end security solution even if intermediaries are between the consumer and provider. The secret is that with message-level security, you can encrypt the message using the public key of the final destination. In this way, only the intended receiver can decrypt the message. Additionally, by encrypting the message and storing the encrypted data into the message, you can store the message on the file system for asynchronous communication and later decrypt it when the receiver is available. These are just a few of the reasons that message level security is often being applied to secure Web services.
Figure 1. Comparison of transport level security and message level security (see enlarged Figure 1)
Web Services Security (WS-Security) is an OASIS standard to describe how to implement message-level security with Web services. Specifically, WS-Security describes how you can add confidentiality (e.g. encryption), integrity (e.g. digital signatures), and authentication (e.g. username and password) to a SOAP message. In most cases, XML encryption and XML signatures are the mechanisms for securing the message; WS-Security describes how to use these technologies with Web services to provide message-level security as well as providing a framework for propagating security identities. Figure 2 provides an example of how message-level security looks in a SOAP message. In this tutorial, you learn how to build SOAP messages that get encrypted and signed to provide messages like the one shown in Figure 2.
Figure 2. Example of message-level security of a SOAP message (see enlarged Figure 2)
Section 3. Creating and consuming JAX-WS Web services
In this section, you create a simple
HelloWorld Web service using a JAX-WS annotation and the Rational Application Developer tooling. Then you use the Rational Application Developer tooling to generate a proxy client to invoke the Web service. Finally, you test your service provider (i.e. Web service) running on the WebSphere Test Environment (WTE) that comes packaged with Rational Application Developer. In your testing, you examine the SOAP messages (i.e. request and response) as they appear on the network.
Creating a JAX-WS Service Provider
In this tutorial, you will use a dynamic Web project to contain the Web service. Start by creating a dynamic Web project and a plain old Java object (POJO) for your service provider as shown in the following steps:
- Start Rational Application Developer 7.5.2. Click File > New > Dynamic Web Project, then enter
HelloWorldProject as the project name, as shown in Figure 3.
Figure 3. Create a dynamic Web project
Accept the defaults for the other fields, then click the Finish button. Choose No if prompted to change to the Web perspective. For this tutorial, you will use the Java EE perspective.
- Select the HelloWorldProject project in the project explorer view. Right-click and select New > Class, which brings up a Java Class wizard as shown in Figure 4.
Figure 4. Create new Java class wizard
Enter
com.ibm.dwexample for the package name and
HelloWorldProvider as the class name, and click the Finish button.
- Copy the code from Listing 1 into the
HelloWorldProvider.java file and save the file.
Listing 1. HelloWorldProvider.java
package com.ibm.dwexample;

import javax.jws.WebService;

@WebService
public class HelloWorldProvider {
    public String sayHello(String msg) {
        System.out.println("[helloworld provider] Hello " + msg);
        return "Hello " + msg;
    }
}
That’s it! A simple POJO with the
@WebService annotation is all that is necessary to create a JAX-WS Web service.
Now that you have created a Web service, you can use Rational Application Developer to deploy your service onto the WebSphere Test Environment (WTE).
To deploy your service:
- Select the HelloWorldProvider.java file that contains the code from Listing 1. Right-click and choose Run As > Run on Server, which displays the Run On Server wizard in Figure 5.
Figure 5. Run On Server wizard
Select the Choose an existing server radio button, and then select WebSphere Application Server V7.0 at localhost. If you do not have a WebSphere Application Server V7 server defined, you can define a new one by selecting Manually define a new server followed by expanding the IBM folder and selecting the WebSphere Application Server v7.0 server.
Click the Finish button to deploy and start the Web service on WebSphere Application Server.
At this point, you have created a simple JAX-WS Web service, deployed it to WTE, and started the service. Next we’ll create a JAX-WS client that can invoke the running Web service.
Creating a JAX-WS service consumer
Rational Application Developer V7.5.2 provides a wizard for generating a client from a JAX-WS Web service. In this tutorial, you use a standalone Java client as your service consumer. Therefore, you begin by creating a simple Java project that will contain your JAX-WS client proxy as well as your client code that uses the generated client proxy.
To create a Java project:
Using Rational Application Developer select File > New > Other. Locate the Java Project wizard and click Next.
- In the New Java Project wizard, enter
HelloWorldConsumer as the project name and click Finish as shown in Figure 6. If prompted to switch to the Java perspective, choose No to stay in the Java EE perspective.
Figure 6. New Java Project wizard
Now that you have a Java project to hold the JAX-WS proxy classes that are generated by Rational Application Developer, you can use Rational Application Developer to generate the client proxy classes.
- Expand the Services folder of the HelloWorldProject Web project. Right-click the {}HelloWorldProviderService service and choose Generate > Client as shown in Figure 7.
Figure 7. Generate the Web service client
Ensure the Web service runtime value in the configuration section is set to IBM WebSphere JAX-WS, and Java Proxy is selected as the client type.
- Click the Client project: link to change the client project to use the standalone Java project you created above, as shown in Figure 8.
Figure 8. Web Service Client wizard
- Choose HelloWorldConsumer as the client project for the generated Java proxy, and then click OK as shown in Figure 9.
Figure 9. Specify the Client project for proxy
- Click the Next button and enter
com.ibm.dwexample.clientproxy as the target package of the Web service client. Ensure the Generate portable client checkbox is selected, as shown in Figure 10. It is usually a good idea to segregate the generated code from your client code by using a different package name for the proxy code as shown in this step.
Figure 10. Web Service Client Configuration wizard
- Click the Finish button to generate the client proxy code, which will look like Figure 11.
Figure 11. Generated client proxy classes
Rational Application Developer uses the WSDL of the service provider to auto-generate Java classes that can invoke the service provider Web service. Figure 11 shows the classes that get generated for you. Using the generated proxy classes, you do not have to worry about SOAP message building, XML parsing, or any other low-level programming constructs – the generated classes do this for you. All you need to do is instantiate the client proxy and invoke methods you want to be sent to the Web service. Therefore, next you will create a simple Java test client that instantiates the generated client proxy in order to invoke the service provider.
To create a Java test client:
Right-click the HelloWorldConsumer project and choose New > Class.
- Enter
com.ibm.dwexample as the package for the client and
ClientTest as the Java class name as shown in Figure 12. Note that you segregate
Figure 12. New Java Class wizard
- Click the Finish button. Copy the code from Listing 2 into the ClientTest.java file and save the file.
Listing 2. ClientTest.java
1  package com.ibm.dwexample;
2  import com.ibm.dwexample.clientproxy.HelloWorldProvider;
3  import com.ibm.dwexample.clientproxy.HelloWorldProviderService;
4  public class ClientTest {
5      public static void main(String[] args) {
6          try {
7              HelloWorldProviderService srv = new HelloWorldProviderService();
8              HelloWorldProvider port = srv.getHelloWorldProviderPort();
9              String resp = port.sayHello("World");
10             System.out.println("[response] " + resp);
11         } catch(Exception e) {
12             e.printStackTrace();
13         }
14     }
15 }
In Listing 2, line 7 demonstrates how you instantiate the client proxy service that Rational Application Developer generated for you. Then in line 8, you use the generated interface (i.e.
HelloWorldProvider) to get a handle to the Web service port. Finally you use the port object to invoke the
sayHello() method which will make a remote call to the Web service provider. The invocation of
port.sayHello("World") sends a SOAP request message to the listening Web service provider shown in Listing 1. The service provider then sends a SOAP response message back to the client. Let's examine what these SOAP request and response messages look like via the built-in TCP/IP Monitor view provided by Rational Application Developer.
Test and verify the consumer and provider
After you have the server-side and client-side up and running, it is generally a good idea to test and verify things. Rational Application Developer provides a TCP/IP Monitor view to display the SOAP message as it is transferred from the client to the server and back. To use this view, we need to configure the TCP/IP Monitor to listen on an unused TCP/IP port, and then update your client proxy code to point to this TCP/IP port. The following section demonstrates how to do this.
In Rational Application Developer, select Window > Show View > Other. Then locate the TCP/IP Monitor
view in the Debug
folder and click OK.
- Right-click in the first entry box and choose Properties as shown in Figure 13.
Figure 13. TCP/IP Monitor view
Click the Add button to configure a new monitor to intercept the Web service request and response in order to display the SOAP message.
Enter an unused TCP port for the Local monitoring port
field. (e.g.
9081)
Enter
localhost for the Host Name field.
- Enter the TCP port for the Web service provider (check the WSDL in the HelloWorldConsumer project). (e.g.
9080). You should have values similar to Figure 14:
Figure 14. New Monitor settings
Click the OK button.
- Now select your newly defined TCP/IP Monitor and click the Start button as shown in Figure 15:
Figure 15. Start TCP/IP Monitor
Now that the TCP/IP Monitor is running and listening for Web service calls, you need to change your client proxy to connect to the listening port of the TCP/IP monitor instead of connecting directly to the service provider. You can accomplish this by changing the port number in the WSDL that was saved locally to the client project as a result of clicking the Generate portable client checkbox in Figure 10.
Right-click the HelloWorldProviderService.wsdl file located in the HelloWorldConsumer project under the src > META-INF > wsdl folder path and choose Open With > WSDL Editor.
- Change the port number to match the TCP/IP Monitor listening port (e.g.
9081) as shown in Figure 16, then save and close the WSDL file.
Figure 16. WSDL Editor (see enlarged Figure 16)
- Right-click the ClientTest.java file and choose Run As > Java Application. The results should appear in the TCP/IP Monitor view as shown in Figure 17.
Figure 17. ClientTest results (see enlarged Figure 17)
At this point, you have developed a JAX-WS Web service provider (i.e. server-side) and a JAX-WS Web service consumer (i.e. client-side) and demonstrated the results of the consumer invoking the provider. In the next section, you configure the policy sets to add message-level security to your little
HelloWorld example, and once again view the results in the TCP/IP Monitor view to verify that the SOAP message is being passed securely.
Section 4. Policy sets
Policy sets provide a declarative way to define qualities of service (QoS) for Web services. This simplifies the management of multiple Web services as policy sets can be reused across them. Let’s discuss the differences in policy set terminology:
- Policy – A policy describes a configuration that defines qualities of service (QoS) for Web services. Examples include WS-Security and WS-Addressing.
- Policy sets – A policy set is a collection of policies.
- Policy set attachment – In order to apply policy sets to Web services, they need to be attached.
- Bindings – Policy sets are meant to be reused across Web services and thus do not contain environment specific settings such as key stores or passwords. Instead, a binding contains these environment specific values.
Figure 18. Example of policy sets (see enlarged Figure 18)
WebSphere Application Server V7 comes prepackaged with 18 policy sets (see Figure 18) to simplify getting started. They are production-level policy sets that you can begin using immediately. WebSphere Application Server V7 also comes with 4 sample bindings, but these are for demonstration purposes only. For production Web services, you should customize the policy set bindings as shown in this tutorial.
Rational Application Developer 7.5.2 also comes prepackaged with a set of policy sets. Rational Application Developer 7.5.2 fully supports configuration of the client-side bindings required for policy sets. However, you must configure server-side policy set bindings from the WebSphere Application Server V7 server. WebSphere Application Server V7 does support exporting policy sets and policy set bindings such that you can import them into Rational Application Developer 7.5.2. After importing into Rational Application Developer, you can attach the policy sets and policy set bindings to the service provider (i.e. server), as well as the service consumer (i.e. client), using the wizards in Rational Application Developer.
Setting up asymmetric keys
To secure your Web services, you can use asymmetric keys. In this section, you learn how to create a set of cryptographic keys that you then use to secure your Web service. Before you get started, it may be helpful to review the following terminology:
- Public key – The key that is used by others to encrypt data for you.
- Private key – The key that matches your public key and is used to decrypt data that others have encrypted with your public key. This key should not be shared with others.
- Certificate authority – For others to trust that your public key really belongs to you, you normally request a CA (e.g. Verisign, GeoTrust, GoDaddy) to sign your key. Since others do the same thing, you can trust others by the CA vouching for you and them.
- Digital certificate – To share your public key with others and for them to trust that you are who you say you are, you create a digital certificate which contains your public key along with your identity information (e.g. your name) and send this digital document to a CA to sign for you.
- Key store – A place to store your keys. Also called a key ring.
- Signer certificate – After your digital certificate has been signed by a CA, it becomes a signer certificate. Digital certificate, public key certificate, and signer certificate are often used synonymously.
Creating service provider keys
There are a number of tools for creating public key/private key pairs, but for this tutorial you use the
keytool command provided by the Java Development Kit (JDK), since it will be available with standalone clients as well as with WebSphere Application Server.
First, you create the server-side keys that will be used by your service provider (i.e. server-side) running on WebSphere Application Server V7. Then you create the client-side keys for your service consumer running as a standalone client from Rational Application Developer V7.5.2. We purposely separate the server-side keys from the client-side keys to delineate the differences and to mimic the more likely production environment where the consumer and provider are often distributed on different physical hardware. In other words, the private key stays with the owner and should not be distributed.
The first thing you need to do is create a key store to hold your public and private keys. This can be accomplished with the following
keytool command:
In Microsoft Windows, select Start > Run…, then enter
cmd in the Open field of the dialog box and click OK.
In the Command Prompt window, change directories to where WebSphere Application Server V7 is installed. (e.g.
cd c:\Program Files\IBM\SDP\runtimes\base_v7)
Now run the following
keytool command:
java\bin\keytool.exe -genkey -v -alias server1 -keyalg RSA -keystore helloServerKeys.jks -storepass f00bar -dname "cn=server1,O=IBM,C=US" -keypass passw0rd
This command generates a public key and private key pair that will be accessed via the server1 alias. Additionally, this command self signs the public key. Both private and public keys are stored in the
helloServerKeys.jks file, which is password protected.
Next, you need to export your server1 certificate to be imported into your client-side key ring later on, with the following
keytool command:
java\bin\keytool.exe -export -v -alias server1 -file c:\temp\server1.cert -rfc -keystore helloServerKeys.jks -storepass f00bar
For someone (or some computer) to encrypt messages for you, they need your public key. Then you can decrypt the message using your private key. However, you must somehow extract your public key from your key ring into some format and send it to the party with which you wish to securely communicate. The
export argument of the
keytool does just that, and in the above command saves the public key into an X509 digital certificate format and stores it in the text file
c:\temp\server1.cert. This public key certificate will then be imported into the service consumer’s (i.e. client-side) key store such that the service consumer will know how to encrypt messages for the service provider.
Figure 19 shows the commands used for creating the service provider keys:
Figure 19. Service Provider Key setup
Creating service consumer keys
Now that you have created the server-side keys, next you create a client-side key store. Note that this is a completely different set of keys and has no relationship to the server-side keys. Only when you exchange public keys with each key store is a trust relationship established. In fact, our keys use a different organization name (i.e.
server1 at
IBM and
myclient at
ACME) to demonstrate that your keys can be from completely different organizations and that the client and server need not have keys created by one certificate authority.
As with the server-side keys, you can use the
keytool command to create the client-side key ring. Note that you will use the
keytool command that comes with Rational Application Developer V7.5.2 and not the WebSphere Application Server
keytool as evident by the different directories from which you run this command:
In Microsoft Windows, select Start > Run…, then enter
cmd in the Open field of the dialog box and click OK.
In the Command Prompt window, change directories to where Rational Application Developer 7.5.2 is installed. (e.g.
cd c:\Program Files\IBM\SDP)
- Now run the following
keytool command:
jdk\bin\keytool.exe -genkey -v -alias myclient -keyalg RSA -keystore myclientKeys.jks -storepass g00ber -dname "cn=myclient,O=ACME,C=US" -keypass p@ssword
Just as the service provider used this command to generate a public key and private key pair, we now use the same command to create the service consumer’s key ring with a corresponding set of public key/private key that is accessed via the myclient alias. Likewise with the service provider keys, this command creates a self-signed public certificate that contains the public key. However, note that the service consumer (i.e. client-side) keys are stored in the
myclientKeys.jks file.
- To build that trust level between the service provider and service consumer, you need to export the client certificate to be imported into the service provider’s key store. This is done with the following
keytool command:
jdk\bin\keytool.exe -export -v -alias myclient -file c:\temp\myclient.cert -rfc -keystore myclientKeys.jks -storepass g00ber
This command exports the client's public key certificate so that it can later be imported into the service provider's (i.e. server-side) key store, which allows the service provider to encrypt messages for the consumer.
- Next you import the server-side public key into the client-side keys using the following
keytool command:
jdk\bin\keytool.exe -import -v -noprompt -alias server1 -file c:\temp\server1.cert -keystore myclientKeys.jks -storepass g00ber
Recall above that you exported the public key of the server1 alias, which is the key pair that is associated with your service provider. Therefore, you need to import this public key into the client-side key store (i.e
myclientKeys.jks). Then, when the service consumer (i.e. client-side) wants to encrypt a message for the service provider, the WS-Security configuration associated with the client will specify the server1 alias public key in the client’s key store.
Figure 20 shows all of the commands listed above required to create the client-side keys:
Figure 20. Creating client-side keys with keytool
Importing service consumer keys
Recall during the service provider key creation, you had not yet created the service consumer keys with which to export and import into the service provider’s key ring. Now that you have created the client-side keys and certificates and exported the public key to be used by the service consumer, you can now import this key into the service provider key ring.
- The following
keytool command lets you import the public key into the key ring:
java\bin\keytool.exe -import -v -noprompt -alias myclient -file c:\temp\myclient.cert -keystore helloServerKeys.jks -storepass f00bar
Make sure you run this command from the WebSphere Application Server V7 directory (i.e.
c:\Program Files\IBM\SDP\runtimes\base_v7)
Now that the service provider’s key ring is ready, you copy it to the cell configuration directory of the WebSphere Application Server V7 runtime so that your keys will be available on all nodes of a cluster for a clustered environment. This location will also work for a standalone server configuration. In your policy set bindings configuration below, you will point to this key store.
From the WebSphere Application Server V7 directory, copy the service provider’s key ring to the following directory:
copy helloServerKeys.jks profiles\<profile name>\config\cells\<cell name>
On my machine the path is
C:\Program Files\IBM\SDP\runtimes\base_v7\profiles\was70profile1\
config\cells\griffith-t60pNode01Cell\MyKeys.
Now you have the client keys and the server keys both with an imported certificate from the other. You will use these keys in the configuration of the WSSecurity policy set to provide encryption and signing.
Creating a policy set
WebSphere Application Server V7 allows creating policy sets from scratch to provide maximum flexibility, but WebSphere Application Server V7 also comes with many preconfigured policy sets to simplify their creation. Very often the preconfigured policy sets are more than adequate for most needs and thus copying one of the built-in policy sets and modifying it is often easier than starting from scratch – it is also the recommended approach. In this tutorial, you will copy the Username WSSecurity default policy set.
The Username WSSecurity default policy set comes with a WSSecurity policy and a WSAddressing policy. Within these policies, you specify message integrity by digitally signing the message body, the timestamp, the addressing headers, and the username token. Message confidentiality is achieved by encrypting the message body, the signature, and the username token. Finally, these policies specify message authentication by using the username token. All of the work of specifying what parts of the message to sign and encrypt are already done for you by copying the Username WSSecurity default policy set.
To create and customize a policy set, you need to open the Administration Console of WebSphere Application Server. In this tutorial, you use the WebSphere Application Server that we installed inside of Rational Application Developer.
- From Rational Application Developer, right-click your WebSphere Application Server V7 runtime in the Servers view and choose Administration > Run administrative console, as shown in Figure 21. Ensure your WebSphere Application Server V7 runtime is started or Run administrative console will be grayed out.
Figure 21. Launching Administration Console from Rational Application Developer
- From the Administrative Console, select Services > Policy sets > Application policy sets as shown in Figure 22.
Figure 22. Application policy sets (see enlarged Figure 22)
- Click the checkbox next to the Username WSSecurity default, then click the Copy… button.
The Username WSSecurity default policy set encrypts the SOAP body, the signature, and the Username token. Additionally, the Username WSSecurity default policy set signs the SOAP body, the timestamp, the addressing headers, and the Username token. Message authentication is provided using the Username token. As this policy set provides defaults that are likely to be used frequently in real-life scenarios, you will use this policy set for this tutorial.
Figure 23. Copy policy set (see enlarged Figure 23)
- Enter
HelloWorldPolicySet as the name for your new policy set and any description you’d like in the description field. Click the OK button.
Exporting a policy set
As discussed previously, Rational Application Developer does not allow customization of policy sets and thus you used the Administration Console of WebSphere Application Server to create the policy set. You can then export the policy set to allow the consumer to use the same policy set. Additionally, you may attach the policy set to the service provider or service consumer using Rational Application Developer such that the policy set will get attached when deployed from Rational Application Developer.
From the Administration Console, select Services > Policy sets > Application policy sets as shown in Figure 22.
Click the checkbox next to the HelloWorldPolicySet
then click the Export… button.
Figure 24. Exporting policy sets (see enlarged Figure 24)
Click the HelloWorldPolicySet.zip link as shown in Figure 24 and save the file to
c:\temp. Click the OK button to save the file.
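If you would rather script this step than click through the console, WebSphere's wsadmin tool exposes a PolicySetManagement command group for the same operations. The command and parameter names below are assumptions from memory rather than verified syntax, so check them first with the built-in help; the output path simply reuses the c:\temp convention from this tutorial:

# wsadmin -lang jython
print AdminTask.help('PolicySetManagement')   # lists the available policy set commands
# Assumed syntax; verify with AdminTask.help('exportPolicySet') before running:
AdminTask.exportPolicySet('[-policySet HelloWorldPolicySet -pathName C:/temp/HelloWorldPolicySet.zip]')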
Creating a policy set binding
The policy set defines the policies to attach to your service provider, but you need to assign a binding to specify the service-specific settings to use, such as key stores. Rather than starting from scratch, you copy the provider sample bindings and then customize them, as this is usually easier and considered a best practice.
In the Services menu, expand the Policy sets folder and select the General provider policy set bindings link. This link displays the list of provider policy set bindings.
Click the checkbox next to the Provider sample policy set bindings, then click the Copy… button.
- Enter
HelloWorldProviderBindings as the name for the new bindings and any desired text for the description field (optional), as shown in Figure 25.
Figure 25. Copy policy set bindings for customization (see enlarged Figure 25)
Click OK.
Configuring service provider policy set binding
The provider sample that you started with is for demonstration purposes only, and you must change the keys to provide production-level security. Therefore, you now need to customize your new policy set binding by changing the sample keys to use the real keys that you generated above.
To customize the policy set binding to specify which certificates you trust:
Navigate to the Keys and certificates policy bindings by clicking HelloWorldProviderBindings > WS-Security > Keys and certificates.
Scroll down the page to the Trust anchor section and click the New… button.
Enter
HelloServerTrustStore in the name field, then click the External keystore radio button.
Enter
${USER_INSTALL_ROOT}\config\cells\<yourCellName>\helloServerKeys.jks for the full path to the external key store.
For simplicity in this tutorial, we use the hard-coded path to the key store. Normally, you would create a new WebSphere variable (e.g.
MY_KEY_STORE) that would point to the absolute path so that you wouldn’t need to change your policy set bindings when moving from one cell to another.
Select JKS as the key store type.
- Enter
f00bar as the key store password. Your screen should look something like Figure 26.
Figure 26. New trust anchor
On my machine the external key store path is C:\Program Files\IBM\SDP\runtimes\base_v7\profiles\was70profile1\config\cells\griffith-t60pNode01Cell\helloServerKeys.jks.
- Click the OK button to save the changes.
In this step, you customized the policy set binding to specify which certificates you trust. This step lets you verify that the public certificate that is used to encrypt messages is a trusted certificate. In this tutorial, you use your server-side key store as your trust store to simplify things. Normally, your trust store would contain trusted CAs (e.g. Verisign, GeoTrust) that have signed the public keys.
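Before pointing the binding at the key store, it can help to confirm what the file actually contains. Assuming the file name and password used earlier in this tutorial (helloServerKeys.jks, f00bar), the JDK's keytool will list the entries and the imported certificate:

keytool -list -v -keystore helloServerKeys.jks -storepass f00bar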
Now that you have specified your trust store, you can customize the signing token for inbound messages to use this trust store. This token essentially verifies that the key used to sign the message is a trusted key.
To customize the signing token for inbound messages to use this trust store:
- Navigate to the Authentication and protection bindings by clicking HelloWorldProviderBindings > WS-Security > Authentication and protection. This step displays a window like the one in Figure 27.
Figure 27. Customizing policy set bindings (see enlarged Figure 27)
- Select con_signx509token > Callback handler. Change the Certificate store to be (none) instead of DigSigCertStore. Then choose HelloServerTrustStore next to the Trusted anchor store label as shown in Figure 28.
Figure 28. Changing certificate store (see enlarged Figure 28)
Click the OK button to save the callback handler customizations.
Click the OK button again to save the consumer signature token customizations, which should now bring you back to the Authentication and protection page shown in Figure 27.
Now that you have specified what trust store to use for verifying the signature for incoming messages, you need to customize the token to be used for signing outgoing messages:
To customize the token for signing outgoing messages:
Select gen_signx509token > Callback handler. Then choose Custom from the drop-down in the Key store section and click the Custom keystore configuration link.
- Change the values to match this table:
Note that since you use the private key of server1 for outgoing signatures, you must specify the password for the private key.
Click the OK button to save the key store configuration changes.
Click the OK button to save the callback handler changes.
Click the OK button to save the token changes. At this point, you should be back at the Authentication and protection page shown in Figure 27.
Now that you have customized the binding for signatures, next you customize the binding for encryption and decryption protection. You will begin with the con_encx509token token, which is used to decrypt incoming messages.
To customize the binding for encryption and decryption protection:
Select con_encx509token > Callback handler. Then choose Custom from the drop-down in the Key store section followed by clicking the Custom keystore configuration link.
- Change the values to match this table:
Notice that you have to enter the key’s password in addition to the key store password since you are accessing the private key.
The results should look similar to Figure 29.
Figure 29. Key store for consumer decryption
Click the OK button to save the key store configuration changes.
Click the OK button to save the callback handler changes.
Click the OK button to save the token changes. At this point, you should be back at the Authentication and protection page as in Figure 27.
To customize the token used for encrypting outgoing messages:
Click gen_encx509token > Callback handler. Then choose Custom from the drop-down in the Keystore section and click the Custom keystore configuration link.
- Change the values to match this table:
Notice that in this case you do not need to provide a password for the key alias because you are using the client’s public key to encrypt the outgoing response message.
Click the OK button to save the key store configuration changes.
Click the OK button to save the callback handler changes.
Click the OK button to save the token changes. At this point, you should be back at the Authentication and protection page as in Figure 27.
So far you have created a custom policy set and a custom policy set binding and customized to use custom keys. While WebSphere Application Server V7 provides the ability to attach policy sets and bindings to services in the Administrative console, for this tutorial, you will use Rational Application Developer V7.5.2 to accomplish this task. Therefore, you now need to export the policy set and bindings to import them into Rational Application Developer.
Exporting a policy set binding
Just as you exported the copied policy set above, you can also export the policy set bindings. Because this policy set binding is only for the service provider (i.e. server-side), it isn’t necessary to export this policy set binding. However, doing so allows you to attach the binding to the service provider in Rational Application Developer, which simplifies policy set attachment during development.
In the Services menu, expand the Policy sets folder. Select the General provider policy set bindings link to display the list of provider policy set bindings.
Click the checkbox next to the HelloWorldProviderBindings
policy set bindings, then click the Export… button.
- Click the HelloWorldPolicySet.zip link as shown in Figure 30 and save the file to
c:\temp. Click the OK button to save the file.
Figure 30. Export policy set bindings
Section 5. Securing the service provider
Now that you have created a custom policy set and policy set binding (which you exported to
c:\temp) you need to import them into the HelloWorldProject to attach them to the service provider. Recall that policy sets provide a declarative way to provide qualities of service (QoS) for Web services. By attaching a policy set and binding to a Web service, you are declaratively specifying what QoS to use.
From the main menu of Rational Application Developer choose File > Import > Web services > WebSphere Policy Sets. Now click the Next button, which displays a dialog box like the one in Figure 31.
Click the Browse… button and choose the HelloWorldPolicySet.zip file that you exported to
c:\temp above.
- The wizard reads the zip file and lists the policy sets included in the file. Click the checkbox next to HelloWorldPolicySet, then click the Finish button.
Figure 31. Import policy set
As in the steps above, right-click HelloWorldProject and choose Import > Import > Web services > WebSphere Named Bindings. Now click the Next button, which displays a dialog box like the one in Figure 32.
Click the Browse… button and choose the HelloWorldProviderBindings.zip file that you exported to
c:\temp above.
- Again, the wizard reads the zip file and lists the policy set bindings included in the file. Click the checkbox next to HelloWorldProviderBindings, then click the Finish button.
Figure 32. Import policy set bindings
Once the policy set and bindings have been imported into Rational Application Developer, you can attach them to the service provider.
- In Rational Application Developer, drill into the HelloWorldProject > Services > {}HelloWorldProviderService, then right-click and choose Manage policy set attachment… as shown in Figure 33:
Figure 33. Attaching policy set and bindings
Click the Add button to add a policy set and binding to an endpoint.
- Leave the scope set to the entire service and choose HelloWorldPolicySet from the drop-down for the policy set and HelloWorldProviderBindings from the drop-down for the binding, as shown in Figure 34.
Figure 34. Customizing policy set bindings
Click the OK button to save this association.
Click the Finish button to close the policy set attachment dialog box.
Now that you have attached the policy set and bindings to the service provider, you will deploy the service provider onto the WebSphere Application Server runtime and verify that our policy set and bindings have been attached.
There are a variety of ways to deploy the service provider onto the WebSphere Application Server, but in this tutorial we will use the Add and Remove Projects… menu item available in the Rational Application Developer Servers view.
To deploy the service provider:
- Right-click WebSphere Application Server v7.0 at localhost (or whatever your server is called if different) and choose Open. This will bring up the server configuration settings page as shown in Figure 35.
Figure 35. WebSphere Application Server V7.0 server configuration (see enlarged Figure 35)
Ensure Run server with resources on Server is selected in the Publishing settings for WebSphere Application Server section.
- Save and close the server configuration file.
The Run server with resources on Server setting ensures that the policy set and bindings attachment shows up in the WebSphere Application Server Administrative Console.
In order to ensure a clean deploy, you will remove the HelloWorldProjectEAR from the server, then re-add it.
Right-click the HelloWorldProjectEAR project under WebSphere Application Server v7.0 at localhost in the Servers view and choose Remove, as shown in Figure 36.
Figure 36. Remove HelloWorldProjectEAR from server
Now that the project has been removed, you will add it back, which will cause a fresh deploy.
Right-click WebSphere Application Server v7.0 at localhost (or whatever your server is called if different) and choose Add and Remove Projects… as shown in Figure 37.
- Select the HelloWorldProjectEAR project in the Available projects field, click the Add > button to move it to the Configured projects field, then click the Finish button.
Figure 37. Deploy Service Provider to WebSphere Application Server
When the server finishes deploying and publishing, use the Administrative Console to verify that the service provider was successfully deployed to WebSphere Application Server and that the policy set and bindings have been attached. Once again right-click WebSphere Application Server v7.0 at localhost, but this time choose Administration > Run Administrative console.
Log in to the admin console and select Services > Service providers. You should see the HelloWorldProviderService listed in the service providers.
Click the HelloWorldProviderService to drill into this service.
- You should now see the HelloWorldPolicySet attached as the policy set and HelloWorldProviderBindings attached for the binding as shown in Figure 38.
Figure 38. Verify policy set and bindings attached (see enlarged Figure 38)
If you do not see the HelloWorldProviderService in the service providers window, then log out of the Administrative Console and log back in to refresh the console so that you see the policy set and bindings attached to the service provider as shown in Figure 38.
Since you copied the Username WSSecurity default policy set, this policy set defines authentication through the username token. As a result, you need to enable security on WebSphere Application Server so that authentication can occur.
To enable security on WebSphere Application Server:
- In the Administrative Console, navigate to Security > Global security and verify that Enable administrative security and Enable application security are selected as shown in Figure 39.
Figure 39. Enable application security (see enlarged Figure 39)
- If security was not enabled before, you need to restart WebSphere Application Server for the security settings to take effect.
Section 6. Consuming a secure service
At this point in the tutorial, you should have the service provider running on WebSphere Application Server V7 with the customized HelloWorldPolicySet and bindings attached. If you were to rerun the service consumer as developed above, the service provider would reply with a SOAP fault indicating that the consumer does not adhere to the policy set attached to this provider. Therefore, you need to attach a policy set to the consumer (i.e. client-side) and customize the consumer bindings to match up with the expectations of the service provider.
One way to ensure the consumer adheres to the policy of the service provider is to use the same policy set, which is what we’ll do in this tutorial. Since you imported the HelloWorldPolicySet into Rational Application Developer to attach it to the service provider, it is also available to be attached to our service consumer.
Attaching a policy set
In a similar fashion to attaching the policy set to the service provider, you do the same thing with the service consumer. The following sections describe this process.
To configure the consumer-side binding for signatures:
Drill down to HelloWorldConsumer > Services > Clients > {}HelloWorldProviderService. Right-click and choose Manage policy set attachment…
- Click the Next button followed by the Add… button of the Application section, which presents the dialog box shown in Figure 40.
Figure 40. Attaching policy set
Select HelloWorldPolicySet for the policy set drop-down.
Type
HelloWorldConsumerBinding in the drop-down binding field and click OK.
- Select the WSSecurity policy type in the bindings configuration section. Click the Configure… button, which presents the WSSecurity Binding Configuration dialog as shown in Figure 41.
Figure 41. WSSecurity Binding Configuration (see enlarged Figure 41)
Select the Digital Signature Configuration tab, and then click the Key Store Settings… button of the Outbound Message Security Configuration section.
- Enter the values in the following table for the Key Store Settings dialog shown in Figure 42.
Figure 42. Outbound signature key settings
Notice that you are specifying that you want to sign the outbound (i.e. service request) message using the private key of the myclient alias.
Click the OK button.
In the Inbound Message Security Configuration section, uncheck Trust Any Certificate, because you only want to trust the signature if the response is from the server.
- Click the Key Store Settings… button, then enter the values in the following table:
- Click the OK button.
- Enter
C:\temp\server1.cert as the value for the Certificate Path field.
Now you have configured the consumer-side binding for signatures. Next, you will configure the keys to use for encryption.
To configure the keys to use for encryption:
Select the XML Encryption Configuration tab, and then click the Key Store Settings… button of the Outbound Message Security Configuration section.
- Enter the values from the following table for the Key Store Settings dialog shown in Figure 43.
Figure 43. Outbound encryption key settings
Since you are encrypting the service request for the service provider, which is associated with the server1 certificate, you specify the public key of server1, as shown in Figure 43.
- Click the OK button.
To configure how to decrypt the inbound message (i.e. the response):
- On the XML Encryption Configuration tab, click the Key Store Settings… button in the Inbound Message Security Configuration section.
- Enter the values from the following table for the Key Store Settings dialog shown in Figure 44.
Figure 44. Inbound encryption key settings
When the provider’s response comes back, it will be encrypted with the client’s public key. Therefore, you need to decrypt the message using the client’s private key, which is what we have specified in Figure 44.
Click the OK button.
Recall that the Username WSSecurity default policy set that you copied included authentication using a username token. Somehow you need to get a valid username token in the SOAP header for the server to verify that you are authenticated before executing the service provider Web service. The Token Authentication tab provides two such methods. You will choose the UNTGenerateCallbackHandler.
Select the Token Authentication tab then choose the com.ibm.websphere.wssecurity.callbackhandler.UNTGenerateCallbackHandler as the callback handler, as Figure 45 shows.
Enter a valid user name and password that matches the user repository of your WebSphere Application Server (e.g. admin/admin).
Click the Add Timestamp checkbox.
Click the Add Nonce checkbox.
- Click the OK button, and then click the Finish button.
Figure 45. Token authentication (see enlarged Figure 45)
If the dialog box as shown in Figure 45 does not include checkboxes for Add Timestamp and Add Nonce, you will need to ensure you are using Rational Application Developer 7.5.2.
Section 7. Testing secure JAX-WS
In section 3 of this tutorial, you tested the service provider and consumer and viewed the SOAP messages as they traveled between the client and server. In section 3, you had not yet enabled message-level security through the attachment of policy sets, and thus the SOAP messages were sent in clear text (i.e. not encrypted) as shown in Figure 17. As one of the goals with message-level security is to ensure confidentiality (i.e. only the intended recipient can see the data inside the SOAP message), you now need to rerun the test and verify that the SOAP messages contain encrypted data that isn’t visible to anyone except the intended recipient (not even the TCP/IP Monitor that is acting as an intermediary).
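As a rough idea of what to expect (a sketch, not WebSphere's exact output), the Body you capture in the monitor should no longer contain the readable hello request but an XML Encryption structure along these lines:

<soapenv:Body>
  <xenc:EncryptedData Type="http://www.w3.org/2001/04/xmlenc#Content"
      xmlns:xenc="http://www.w3.org/2001/04/xmlenc#">
    <xenc:CipherData>
      <xenc:CipherValue>...base64 ciphertext...</xenc:CipherValue>
    </xenc:CipherData>
  </xenc:EncryptedData>
</soapenv:Body>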
- Ensure the TCP/IP Monitor is started as shown in Figure 15, then right-click the ClientTest.java file of the HelloWorldConsumer project and choose Run As > Run Configurations. This should present a Run Configurations dialog box as shown in Figure 46.
Figure 46. Setting ClientTest arguments (see enlarged Figure 46)
- Since the consumer needs to use a Java Authentication and Authorization Service (JAAS) to pass in the Username credentials, you need to specify the following VM argument:
-Djava.security.auth.login.config="C:\Program Files\IBM\SDP\runtimes\base_v7\profiles\was70profile1\properties\wsjaas_client.conf"
- Next click the Run button and view the results in the TCP/IP Monitor view, which looks something like Figure 47.
Figure 47. Viewing the SOAP messages with XML encryption (see enlarged Figure 47)
Notice that the Console shows the output from the consumer after decrypting the message. If you view the WebSphere Application Server console log, you see a similar message, which demonstrates that the service provider received the message.
Section 8. Conclusion
In this tutorial, we demonstrated how to create a Web service provider using JAX-WS annotations. Then we showed you how to build a matching consumer and how to test the client-to-server communications. Next, we demonstrated how to create a custom policy set, which we showed you how to customize with personalized asymmetric keys, before once again testing the client and server pair. Additionally, we showed you how to monitor the data flowing between the client and the server, and verify that in fact the SOAP messages are being encrypted and protected by the customized policy set you created and associated with the service consumer and provider pair.
While this tutorial was designed to teach and instruct, the steps taken are valid production-level configuration options that employ strong cryptography for encryption and protection.
Acknowledgements
Special thanks to Zina Mostafa and Indran Naick for reviewing this tutorial.
Resources
- Download WebSphere Application Server V7 trial
- Download Rational Application Developer V7 trial
- Download IBM SOA Sandbox for reuse
- Redbook: WebSphere Version 6 Web Services Handbook Development and Deployment
- Build secure Web services Using Rational Application Developer V7.0
- Redbook: IBM WebSphere Application Server V6.1 Security Handbook
- Redbook: Web Services Handbook for WebSphere Application Server 6.1
- Redbook: Connecting Enterprise Applications to WebSphere Enterprise Service Bus
- WebSphere Application Server V7 Information Center
- developerWorks WebSphere Application Server zone
- WebSphere Application Server forum.
http://www.ibm.com/developerworks/websphere/tutorials/0905_griffith/
new operator for creating the object of inner class October 4, 2012 at 10:37 AM
Hi , The folliowing is my code : class Outer { class Inner{ void show() { System.out.println("Hello "); } } } class Test { public statis void main(String args[]) ... View Questions/Answers
cannot do the additional operator October 4, 2012 at 8:28 AM
i got problem with additional and multiplication operator...please anyone help me <html> <head> <title>Simple Calculator</title> <script language = "JavaScript"> // calculate function function calculate () { //get input ... View Questions/Answers
Writing and Reading A File October 4, 2012 at 6:57 AM
Hello, I've been trying to learn writing and reading data from file for our assignment, but just stuck on how to proceed. Our assignment requires us to make an application to read and write customer contact information the user enters. Our assignment requires us to make GUI apps. I decided to mak... View Questions/Answers
c program October 3, 2012 at 11:12 PM
plz send me program for this. write a program to print the following code? 1 2 3 4 5 6 7 8 9 10 1 2 3 4 5 6 7 8 9 10 11 2 3 4 5 6 7 8 9 10 11 12 3 4 5 6 7 8 9 10 11 12 13 4 ... View Questions/Answers
c program October 3, 2012 at 11:08 PM
plz send me program for this. write a program to print the following code? (in c language) 1 2 3 4 5 6 7 8 9 10 1 2 3 4 5 6 7 8 9 10 11 2 3 4 5 6 7 8 9 10 11 12 3 4 5 6 7 8 9 10 ... View Questions/Answers
form validate and perfom action immediatly October 3, 2012 at 8:51 PM" f... View Questions/Answers
Works only for one row October 3, 2012 at 8:01 PM
Hi, My below code is working only if there is a single row. could you please help me in doing it for all the rows retrieved from the database. <%@page import="java.util.concurrent.CountDownLatch"%> <%@include file="DBCon.jsp" %> <html>... View Questions/Answers
Export Extjs Gridview data to excel in jsp October 3, 2012 at 6:07 PM
i need to export the extjs girdview data to excel can you please help me thanks in advance ... View Questions/Answers
Core Java October 3, 2012 at 5:24 PM
Hi, can any one please tell me the uses of return type,"Super" and "this" calling statement in Java?? why do we required this,super calling statement?? why return type is required?? ... View Questions/Answers
Java I/O problem October 3, 2012 at 5:11 PM
Write a Java application that prompts the user to input their Name, Address, Date of Birth and Student ID number using the standard input - this information should then be saved to a file named studentData. The program should use the FileWriter class and an appropriate processing stream ... View Questions/Answers
combining 5 images and text in between to one single image in java October 3, 2012 at 5:09 PM
Hi I have to combine 5 images and text in between to one single image. I have image followed by description about the image. I have to put them together and create and single image for download. any help would be appreciated. Thank you. ... View Questions/Answers
how to post data in mysql php form October 3, 2012 at 4:48 PM
how to post data in mysql php form ... View Questions/Answers
how to connect to database in php using mysql October 3, 2012 at 4:47 PM
how to connect to database in php using mysql ... View Questions/Answers
how to create a user registration form in php October 3, 2012 at 4:45 PM
how to create a user registration form in php ... View Questions/Answers
login page in php October 3, 2012 at 4:42 PM
How to create a login page in php? ... View Questions/Answers
jar file built by ant jar task does not have all the dependant jars and throws exception while reading the appplicationContext,xml October 3, 2012 at 2:12 PM
I have a spring class with main method. Inside the class am trying to read the values applicationContext.xml . My intention is to jar this main class along with its dependant jars,property files and applicationContext.xml .Hence i have my target as below <target ... View Questions/Answers
jsp to access query October 3, 2012 at 1:14 PM
How to insert the values from jsp to access ? ... View Questions/Answers
Excel File data upload into postgresql database in struts 1.x October 3, 2012 at 11:06 AM
Dear members please explain how Excel Files data upload into postgresql database in struts 1.x ... View Questions/Answers
Updating user profile October 3, 2012 at 10:33 AM
how should i provide user to update his profile with edit option including entered data by user should be shown in jsp page fileds ....js....jsp: <... View Questions/Answers
openning the pdf or doc file in same jsp October 2, 2012 at 3:35 PM
**how to open the pdf and doc file in same jsp after clicking the link or button (which is with out moving to the next page). will some body help me on this @simple format please it will be helpfull for my future** ... View Questions/Answers
java October 2, 2012 at 3:21 PM
can we use the word "buffer" as a variable name? ... View Questions/Answers
java October 2, 2012 at 3:18 PM
can we use buffer as a key word in java? ... View Questions/Answers
date in textbox on page load October 2, 2012 at 1:06 PM
i want to insret the current date in textbox but this code dosen't work and the value of text box will be blank <script type="text/javascript"> function addDate() { var mydate = new Date(); var year = mydate.ge... View Questions/Answers
image retreival October 2, 2012 at 12:33 PM
I ve stored the path of image and audio in mysql database. how to retrive it and display... Can u pls help me out.. ... View Questions/Answers
make a prog for this query October 2, 2012 at 12:25 PM
write a program implementing interface for personal data n salary details. interface must contain abstract methods. steps for the program are 1) store the data in text file 2)display salary when emp id matches 3)duisplay emp name whose starts with 'a' 4)display emp name in uppercase so...... View Questions/Answers
Validations October 2, 2012 at 12:10 PM
Javascript validation is working fine in Internet Explorer browser but it is not working in Mozilla. Can you please let me know the reason. ... View Questions/Answers
Question about a flex... October 2, 2012 at 11:33 AM
Is there good Future for Adobe flex ? ... View Questions/Answers
SEMISTER October 1, 2012 at 9:51 PM c... View Questions/Answers
How to embedd jre inside the converted .exe file from .jar using Launch4J October 1, 2012 at 6:09 PM
Hello Sir, I created a .exe file from the .jar file using Launch4J tool. Now As per my need i have to execute it without installing jre .Means that jre should be embedded in the .exe file. Will you plz guide me how to achieve this goal. Thank you Sir. ... View Questions/Answers
Java servlet with jsp on sql server October 1, 2012 at 3:43 PM
How to delete a user by an admin with check box in Java Servlet with jsp on Sql Server? ... View Questions/Answers
Getting mysql table in textbox October 1, 2012 at 2:18 PM
how to get mysql table values into textbox in java using ajax and servlets ... View Questions/Answers
How to export grid into excel October 1, 2012 at 1:28 PM
Hi, i created a grid panel i have to export it to the excel. please help me by some sample code. thanks in advance. cool day dude. ... View Questions/Answers
How do SEL and @selector work in iphone sdk? October 1, 2012 at 1:10 PM
How do SEL and @selector work in iphone sdk? ... View Questions/Answers
string October 1, 2012 at 10:05 AM
how we conunt the no. of words in java without using in pre defined method in java ... View Questions/Answers
replaces a, e, i, o, u in converts any given year to Roman Numeral in Java October 1, 2012 at 5:03 AM
Write a program that converts any given year to Roman Numeral ... View Questions/Answers
Java converts any given year to Roman Numeral October 1, 2012 at 5:03 AM
Write a program that converts any given year to Roman Numeral ... View Questions/Answers
log4j is not logging even under src path September 30, 2012 at 7:54 PM
We are working with struts application and using log4j.properties file , it is placed under WEB-INF/classes and we have referenced log4j-1.2.8.jar in classpath file even after this it is not logging messages in linux servers . It is working in local setup. Please suggest any help for this issue.<... View Questions/Answers
display multiple image file or pdf file in multiple column of a row from server or database September 30, 2012 at 6:15 PM
hello sir I have uploaded the file to the server and I want to display that file to the user page but I want to display that file in a particular format i.e I have to display more than one files in multiple columns in a single row..Suppose I have to display two files in a row then after uploading... View Questions/Answers
packages September 30, 2012 at 5:53 PM
how to save program created by sub package method......... ... View Questions/Answers
project in java using applet and servlet...which tool i should use?eclipse or netbeans? September 30, 2012 at 3:17 PM
i want to do a project in java using applet and servlet...which tool i should use?eclipse or netbeans? ... View Questions/Answers
mobile application September 30, 2012 at 12:15 AM
hey i am given a project on j2me it has to be start with authentication tab den when we click it login and register item shud get blinked after registering it must save the data..to login data and it must login with the registered id password..after it der must be to menu color menu and shape men... View Questions/Answers
want a project September 29, 2012 at 10:49 PM
i want to make project in java on railway reservation using applets and servlets and ms access as database..please provide me code and how i can compile and run it... ... View Questions/Answers
Waiting for ur quick response September 29, 2012 at 8:51 PM
Hi, I have two different java programs like sun and moon. In both of these two programs i have used same class name like A. For example, File name:sun.java Then the code is here: Class A { public static void main(String args[]) { System.ou... View Questions/Answers
Question on Checked Exception September 29, 2012 at 8:42 PM
why checked exception force to put try and catch block ? Please send me answer ... View Questions/Answers
connectivity problem September 29, 2012 at 7:20 PM
i am facing error in Class.for name statement please help me correct it the error is ""java.lang.ClassNotFoundException: oracle.jdbc.OracleDriver"" i am using this with netbeans since i have gain connectivity still i am facing this error* ================================orac... View Questions/Answers
Write a program in JAVA which accepts a sentence & displays the longest word in the sentence alongn with it length of the word. September 29, 2012 at 7:08 PM
**A program in JAVA which accepts a sentence & displays the longest word in the sentence along with it length of the word, ... View Questions/Answers
oracle connectivity problem with netbeans September 29, 2012 at 6:29 PM
sir I am using oracle window version +net bean6.8. jam trying to connect net bean to oracle. for this after adding new driver(ojdbc6.jar) in services tab I got connectivity with oracle since I execute statement directly from net bean but now I am preparing a program of simple connectivity with or... View Questions/Answers
Query to insert values in the empty fields in the last row of a table in Mysql database? September 29, 2012 at 5:39 PM
I have some fields filled and some fields empty in the last row of my MYSQL database. Now I want to fill up those empty fields in the last row. So what will be the query for that? ... View Questions/Answers
Zend question September 29, 2012 at 1:16 PM
How to change action in zend framework?? suppose i want to add data in DB so my logic is that form view code in my indexAction() and insert process in addAction so how to go addAction() part in cilck on submit button???? Please ans.... thanks.... ... View Questions/Answers
how to make exampage in jsp ? September 29, 2012 at 1:06 PM
how to make a online exam page in jsp and servelet ? ... View Questions/Answers
dojo grid from 3 different sources September 29, 2012 at 10:07 AM
I t... View Questions/Answers
problem in servlet program September 28, 2012 at 11:25 PM
Dear Sir, I have a problem to insert the Blob type data like as video file, audio file in the database using Servlet and html code.... View Questions/Answers
White Space encoding problem in PDF September 28, 2012 at 7:55 PM
I am reading PDF version 1.3 using iText. Things are fine but with one problem that is I am not able to get whitespace. Is there any problem in whitespace encoding in PDF version 1.3 ?? ... View Questions/Answers
java file handling September 28, 2012 at 6:29 PM
enter string from keyboard and then read a file if this string is present in file then print message already exists else write this string to file? ... View Questions/Answers
make a program September 28, 2012 at 3:48 PM
GoodEmployee is defined who has ALL the following properties: He should be married. He should have 2 or less than 2 children. His middle name should start with "k" but not end with "e" The last name should have more than 4 characters The character "a" should appear i... View Questions/Answers
Auto complete of word search using ajax with java September 28, 2012 at 12:48 PM
I want to display the list of words when I type the first letter of the word in a text box. I am using jsp to design the form. I want ajax sample to achieve this feature. Its like google search. Please help. Thanks in advance. ... View Questions/Answers
illegal start of expression in servlet error.. September 28, 2012 at 11:22 AM
hello Sir, here is my servlet code and i am getting illegal start of expression error in declaring the method named " public Boolean ModificarUsuario(int IdUsuario, String Usuario, String Email) {" plz help me out to solve the problem .thank you Sir. public class edi... View Questions/Answers
Why the null values are stored in Database when I am sending proper values? September 28, 2012 at 10:27 AM
I Datab... View Questions/Answers
loop to input multiple points to calculate final distance and speed September 27, 2012 at 11:49 PM
import java.util.Scanner; public class Travel { private double time; private double distance; public Travel() { time=0; distance=0; } public void setTime(double t) { time=t; } public void setDistance(double d) { distace=d; } ... View Questions/Answers
Employee form September 27, 2012 at 10:47 PM
In employee form emp Id id after any data iserted is incremented for oher data insertion. insert, delete, update buttons when we cleak the select buttoun then the created table data iserted automaticaly. ... View Questions/Answers
search program September 27, 2012 at 7:56 PM
i m writing program which takes company names from databse...serch on google...n try to find the best match WEBSITE...of company name....m confused....how can i find best match..between my company name ..and all links after google search...plzz help..... ... View Questions/Answers
class MyThread2 extends Thread September 27, 2012 at 7:18 PM
Hi Friend, Try the following code: import java.io.*; import java.util.*; class MyThread1 extends Thread { private PipedReader pr; private PipedWriter pw; MyThread1(PipedReader pr, PipedWriter pw) { this.pr = pr; this.pw = pw; } pub... View Questions/Answers
HELP Generic linked list September 27, 2012 at 6:59 PM
How to create Generic linked list example program in Java?... View Questions/Answers
Reading xml file using dom parser in java with out using getelementby tag name September 27, 2012 at 2:58 PM
Hi, How to read the xml file using java with dom parser, but without using getelementbytag name, and also read the attribute values also. I had found some thing in the below url : b... View Questions/Answers
How to show autocomplete textbox values on combo box option selection using database? September 27, 2012 at 2:58 PM
When I select option(i.e First Year) then it will show list of student names in auto-complete text box. ... View Questions/Answers
Draw Line September 27, 2012 at 12:51 PM
sir i want to draw a moving line in j2me.That line should also show arrow in moving direction. How can we do so. ... View Questions/Answers
Countdown timer to show a link September 27, 2012 at 11:05 AM... View Questions/Answers
xml September 27, 2012 at 10:52 AM
what is name space,xml scema give an example for each ... View Questions/Answers
web September 27, 2012 at 10:43 AM
what do you mean by stacking elements? explain with example(in web programming) ... View Questions/Answers
web September 27, 2012 at 10:43 AM
what do you mean by stacking elements? explain with example(in web programming) ... View Questions/Answers
spring+jdbc September 27, 2012 at 10:42 AM
I crated a jsp page with one text box named as user id and submit button. When I submit the form all the details(Columns of database) corresponding to that particular field must be retrieved. The project must be spring. ... View Questions/Answers
retail invoice and billing source code in struts September 27, 2012 at 10:26 AM
Hi. I need source code for "retail invoice and billing source code in struts", Please help me in doing so. Please It's important and urgent to.............................................. ... View Questions/Answers
For Loop/PHP September 27, 2012 at 10:24 AM
Write ... View Questions/Answers
Storing and Reading data September 27, 2012 at 8:32 AM
Hello, I'm developing a GUI application as part of an assignment but stuck on how my program stores and reads the data the user entered into the GUI table I created. I also wanted to apply a java code that limits the data type and the character size that can be entered into the table fields. I go... View Questions/Answers
program an interface for a futuristic vending machine September 27, 2012 at 4:09
Provide a method that calculates the total of the order Provide a method that prints the Entree September 27, 2012 at 4:08
About jsp September 26, 2012 at 8:28 PM
Read Excel data using JSP and update MySQL databse ... View Questions/Answers
Disable BBcode encoding within a latex tag September 26, 2012 at 7:40 PM
Hello, I have the following bb function function bb($string) { $string = trim($string); $search = array( '@\<(?i)latex\>(.*?)\</(?i)latex\>@si', '@\<(?i)b\>(.*?)\</(?i)b\>@si', '@\<(?i)i\>(.*?)\</(?i)i\>@si', '@... View Questions/Answers
Converting PDF in to XML September 26, 2012 at 7:03 PM
I have to convert PDF into XMl without any loss in text. Please suggest sth good. ... View Questions/Answers
Traffic Simulator September 26, 2012 at 5:56 PM
Hi, ... View Questions/Answers
pyramid September 26, 2012 at 5:31 PM
123321 12 21 1 1 1 1 12 21 123321 ... View Questions/Answers
write excel file into the oracle database September 26, 2012 at 5:31 PM
dear sir, i need the jsp code that reads the excel file and stores it into the oracle database table..and also i need the code to connect oracle database? thank u in advance.. ... View Questions/Answers
How to Communicate with the operating system embedded on processor running on hardware machine in java September 26, 2012 at 4:25 PM
hello Sir, i have a operating system embedded on processor running on hardware machine .Now as per my need i have to communicate with the opearating system using java tools. Will you plz give me the names of the tools for this task. Thank you Sir. ... View Questions/Answers
session maintanance September 26, 2012 at 3:24 PM
Hi i am developing a small project using j2ee... i have some problem in maintaing session in my project... in my project when a user logout and when he click back button in the browser it goes back to the application....suggest me some codes so that i can maintain session in my project and also w... View Questions/Answers
collection September 26, 2012 at 1:53 PM
can you pass object reference as key , value ? ... View Questions/Answers
Extracting position of a particular string from a PDF September 26, 2012 at 12:59 PM ... View Questions/Answers
File path for jar file September 26, 2012 at 10:43 AM
Hi Experts, I have created one eclipse project, its about Velocity Template. In that project I have given the hard-coded path of the template, which is working as per expectation. But I have created jar file of that application, unfortunately it is giving the error that resource not fo... View Questions/Answers
get data between date using jsp with msaccess September 26, 2012 at 9:49 AM
hi, urgently i need program for get data between date using jsp with MsAccess database.plz any one can help me.thanks for anyone replay with regards c.b.chellappa ... View Questions/Answers
how to validate national ID number September 25, 2012 at 11:22 PM to do this.im confused. plz help me ... View Questions/Answers
loop September 25, 2012 at 11:13 PM
i want to write my name(inder) through loop in java using star("*"). ... View Questions/Answers
Preprocessor directive case for multiplication and addition September 25, 2012 at 9:46 PM
Preprocessor directive case for multiplication and addition
profile doesn't match application identifier September 25, 2012 at 8:14 PM
profile doesn't match application identifier ... View Questions/Answers
Threads in Java Swing MVC Application September 25, 2012 at 8:00 PM
Hello, I am currently making a Java Swing application, but I am having a lot of trouble with implementing threads into my program. I use the MVC paradigm and I just can't seem to implement custom threads into it, so my GUI doesn't freeze when I run certain parts of the program. I have read your T... View Questions/Answers
srinu September 25, 2012 at 6:05 PM
how to print a equilateral triangle (*) in java ... View Questions/Answers
http://www.roseindia.net/answers/questions/42
Dependency injection is an important application design pattern. It's used so widely that almost everyone just calls it DI.
Angular has its own dependency injection framework, and you really can't build an Angular application without it.
This page covers what DI is and why it's useful.
When you've learned the general pattern, you're ready to turn to the Angular Dependency Injection guide to see how it works in an Angular app.
To understand why dependency injection is so important, consider an example without it. Imagine writing the following code:
export class Car {
  public engine: Engine;
  public tires: Tires;
  public description = 'No DI';

  constructor() {
    this.engine = new Engine();
    this.tires = new Tires();
  }

  // Method using the engine and tires
  drive() {
    return `${this.description} car with ` +
      `${this.engine.cylinders} cylinders and ${this.tires.make} tires.`;
  }
}

Now move the creation of the dependencies out of the constructor body and into the constructor parameters:

public description = 'DI';

constructor(public engine: Engine, public tires: Tires) { }

Compare that with the original version, which built its own dependencies:

public engine: Engine;
public tires: Tires;
public description = 'No DI';

constructor() {
  this.engine = new Engine();
  this.tires = new Tires();
}
See what happened? The definition of the dependencies is now in the constructor. The
Car class no longer creates an
engine or
tires. It just consumes them.
This example leverages TypeScript's constructor syntax for declaring parameters and properties simultaneously.
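If the parameter-property shorthand is new to you, the constructor above is roughly equivalent to declaring and assigning the properties by hand; this sketch assumes the same Engine and Tires classes used throughout this page:

class CarLonghand {
  public engine: Engine;
  public tires: Tires;
  public description = 'DI';

  constructor(engine: Engine, tires: Tires) {
    this.engine = engine;
    this.tires = tires;
  }
}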
Now you can create a car by passing the engine and tires to the constructor.
// Simple car with 4 cylinders and Flintstone tires.
let car = new Car(new Engine(), new Tires());
How cool is that? The definition of the
engine and
tire dependencies are decoupled from the
Car class. You can pass in any kind of
engine or
tires you like, as long as they conform to the general API requirements of an
engine or
tires.
Now, if someone extends the
Engine class, that is not
Car's problem.
The consumer of
Car has the problem. The consumer must update the car creation code to something like this:

class Engine2 {
  constructor(public cylinders: number) { }
}

// Super car with 12 cylinders and Flintstone tires.
let bigCylinders = 12;
let car = new Car(new Engine2(bigCylinders), new Tires());
The critical point is this: the
Car class did not have to change, and the class is now much easier to test because you are in complete control of its dependencies. You can pass mocks to the constructor that do exactly what you want them to do during each test:

class MockEngine extends Engine { cylinders = 8; }
class MockTires extends Tires { make = 'YokoGoodStone'; }

// Test car with 8 cylinders and YokoGoodStone tires.
let car = new Car(new MockEngine(), new MockTires());

You just learned what dependency injection is. But who creates the engine and tires and passes them to the Car? One option is a factory class that builds everything itself:
import { Engine, Tires, Car } from './car';

// BAD pattern!
export class CarFactory {
  createCar() {
    let car = new Car(this.createEngine(), this.createTires());
    car.description = 'Factory';
    return car;
  }

  createEngine() {
    return new Engine();
  }

  createTires() {
    return new Tires();
  }
}
It's not so bad now with only three creation methods. But maintaining it will be hairy as the application grows.
With a dependency injection framework, an injector takes over this creation role: you register classes with the injector and then ask it for fully constructed instances:

let car = injector.get(Car);
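Angular's injector is far more capable than this, but a minimal sketch (not Angular's implementation; the class and method names here are made up for illustration) shows the idea behind injector.get(Car): registered factories create instances on demand and the injector hands back a cached, fully wired object.

class SketchInjector {
  private factories = new Map<any, () => any>();
  private instances = new Map<any, any>();

  register<T>(token: any, factory: () => T): void {
    this.factories.set(token, factory);
  }

  get<T>(token: any): T {
    if (!this.instances.has(token)) {
      const factory = this.factories.get(token);
      if (!factory) { throw new Error('No provider for ' + token); }
      this.instances.set(token, factory());
    }
    return this.instances.get(token);
  }
}

// Usage, reusing the Car, Engine and Tires classes from this page:
const injector = new SketchInjector();
injector.register(Engine, () => new Engine());
injector.register(Tires, () => new Tires());
injector.register(Car, () => new Car(injector.get(Engine), injector.get(Tires)));
const car = injector.get(Car);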
Now that you know what dependency injection is and appreciate its benefits, turn to the Angular Dependency Injection guide to see how it is implemented in Angular.
© 2010–2018 Google, Inc.
Licensed under the Creative Commons Attribution License 4.0.
http://docs.w3cub.com/angular/guide/dependency-injection-pattern/
Utilities for parsing time strings
Project description
Utilities for parsing time strings in Python.
Building and installation
Before installing chronos you will have to generate some of its modules, as explained in the Chronos readme. Then, you can simply run
pip install bigml-chronos
Requirements
Python 2.7 and Python 3 are currently supported.
The basic third-party dependencies are isoweek and pytz. These libraries are automatically installed during the setup.
Running the tests
The tests will be run using nose, which is installed during setup. You can run the test suite simply by issuing
python setup.py nosetests
Basic methods
Chronos offers the following main functions:
With parse you can parse a date. You can specify a format name with format_name, a list of possible format names with format_names or not specify any format. In the last case, parse will try all the possible formats until it finds the correct one:
from bigml_chronos import chronos chronos.parse("1969-W29-1", format_name="week-date")
from bigml_chronos import chronos chronos.parse("1969-W29-1", format_names=["week-date", "week-date-time"])
from bigml_chronos import chronos chronos.parse("7-14-1969 5:36 PM")
You can also find the format_name from a date with find_format:
from bigml_chronos import chronos
chronos.find_format("1969-07-14Z")
Instead of the name of the format, you can also pass a string containing some Joda-Time directives.
from bigml_chronos import chronos chronos.parse("1969-01-29", format_name="YYYY-MM-dd")
If both format_name and format_names are passed, it will try all the possible formats in format_names and format_name.
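A short example of passing both parameters together; the format names are simply reused from the earlier examples on this page:

from bigml_chronos import chronos

# format_name and format_names are combined into one list of candidate formats to try
chronos.parse("1969-07-14", format_name="YYYY-MM-dd", format_names=["week-date", "week-date-time"])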
You can find all the supported formats, and an example for each one of them inside the test file.
https://pypi.org/project/bigml-chronos/1.0.0/
I have registration and authorization logic in my application
There is this code:
This is the function I call on the onSubmit button when I login
import store from "redux/store"; export const login= async (data)=> { try { const res= await api.login(data); if(res.status < 300) store.dispatch(setUserData(res.data)) } } catch(err) { alert(err) } }
How does this dispatch method differ from the one using redux-thunk?
export const loginAction = (data) => async (dispatch) => {
  try {
    const res = await api.login(data);
    if (res.status < 300) {
      dispatch(setUserData(res.data));
    }
  } catch (err) {
    alert(err);
  }
};
Is there any difference at all? And if so, what is the best way to use it and why? Why does Redux strongly recommend middleware like redux-thunk when my actions are already asynchronous and I only dispatch once I receive a response from the server?
@DimaEf, with the status code there is a typo, there should be res.status < 300, and the video does not say why we should pass the function to dispatch so that dispatch can be used from the action if this can be done without middleware. user469485, 2022-01-23 15:08:59
Then look here. And you, in theory, do not need to check the status of the code. ДимаЭф, 2022-01-23 15:27:34
- javascript : Is it possible to use the same state for different elements of an array?
- javascript : React + Redux why element is not updated when store is updated
- javascript : How to optimize useSelector redux
-
Here they explain why this is necessary (14:30, approximately). And you also have a check for the status of the response code, but this makes no sense, because if there is an error, it will simply execute the catch block (400 and 500 codes are also greater than 200). ДимаЭф, 2022-01-23 14:47:55
https://www.tutorialfor.com/questions-390347.htm
Introduction: Quick Setup Guide for DS3231 Alarm/Timer Function
I was looking for DS3231 alarm function and I came across multiple libraries such as Radon.
The libraries people used are way too complex and difficult to understand. I wanted to make a timer using the DS3231 module but other instructions/notes were unclear.
Hmmm... There's gotta be an easier way to do this, right?
So, try to go back to basics and see how to embed the alarm function into the code
Step 1: Coding
First, you would need a DS3231 module and its library:...
Add the .zip folder to your Arduino IDE via Sketch>Include Library>Add .zip Library and locate your saved DS3231.zip library.
Using the very basics of programming, use an if statement to set the alarm or intended timer function.
Insert && as the AND operator in the condition (see the last few lines of the sketch).
#include <DS3231.h> // Init the DS3231 using the hardware interface DS3231 rtc(SDA, SCL); // Init a Time-data structure Time t;(SUNDAY); // Set Day-of-Week to SUNDAY //rtc.setTime(12, 0, 0); // Set the time to 12:00:00 (24hr format) //rtc.setDate(1, 1, 2016); // Set the date to DD/MM/YYYY } void loop() { t = rtc.getTime(); // Get data from the DS3231 // Send date over serial connection Serial.print("Date: "); Serial.print(t.date, DEC); Serial.print("/"); Serial.print(t.mon, DEC); Serial.print("/"); Serial.print(t.year, DEC); Serial.println();
// Send Day-of-Week and time Serial.print("Day of Week: "); Serial.print(t.dow, DEC); Serial.println(); Serial.print("Time: "); Serial.print(t.hour, DEC); Serial.print(":"); Serial.print(t.min, DEC); Serial.print(":"); Serial.print(t.sec, DEC); Serial.println(); Serial.println("--------------------------------"); delay(1000); //Delay is for displaying the time in 1 second interval. if (t.hour == 14 && t.min == 32 && t.sec == 53)
//Setting alarm/timer at every 2:32:53pm,
//in other words you can insert t.dow for every Thursday?, t.date for specific date? { digitalWrite(99, HIGH); delay(5000);
//Lets say that your component is wired to pin 99 and be switched on for 5 seconds,
//whatever you want to do with it } }
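Note that digitalWrite() on pin 99 only behaves as expected if the pin has been configured as an output, so add pinMode(99, OUTPUT); to setup(). If you also want the output to switch off again after the five seconds, the if block (same placeholder pin 99 as above) can be extended like this:

if (t.hour == 14 && t.min == 32 && t.sec == 53)
{
  digitalWrite(99, HIGH); // switch your component on
  delay(5000);            // keep it on for 5 seconds
  digitalWrite(99, LOW);  // switch it back off
}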
Step 2: Tell the Time Whenever
Update 08/21/2016: Apparently after your very first setup with the time,
rtc.setDOW(SUNDAY);      // Set Day-of-Week to SUNDAY
rtc.setTime(12, 0, 0);   // Set the time to 12:00:00 (24hr format)
rtc.setDate(1, 1, 2016); // Set the date to DD/MM/YYYY
You pretty much "burned" the time into the module. Now,
1. You can now power the Arduino off and on without messing up the time in the DS3231 module. If you leave the set commands in, the Arduino re-runs them in "void setup()" and resets the time back to the original values you set; in other words, restarting the Arduino means redoing everything in the code.
2. So, delete the above commands and use only:
void setup()
{
  Serial.begin(115200);
  rtc.begin();
}
instead to tell the time by reading the "burned" time in RTC DS3231 module.
Step 3: Conclusion and Reference
In conclusion, if you want to power the Arduino off and on and have the "burned" time stay put, you need to go through a two-upload process: first, "burn" the time; second, remove the "burning" code and upload again. That's it. Simple, right?
References: //code source... //library //and, nor, or... operator references
8 Comments
5 months ago
Hoping the article would cover the two built-in alarms. Wanting to make a battery powered project that wakes up by the alarm every hour, logs data, reset alarm, and goes back to sleep. I keep looking.
2 years ago
Nice, thanks. How can I implement code so I can change alarms time and duration trough web server or android phone? Please help. New with C and android. Im just good with electronics. Thanks
Reply 1 year ago
use a captive portal, I would suggest use an esp8266 based development board as main brain. Search for captive portal input esp8266 on google
3 years ago
i've done the same code and it works fine at serial monitor. But how to code for multiple alarms?
3 years ago
Thanks for this wonderful piece. Very informative on retrieving and using values from RTC.
3 years ago
Thanks, I've been trying to use library's to do this with no luck. So simple this way! (:- )
4 years ago
Hello I follow your instruction and I want to know if its possible to exit alarm if certain conditions is met .
5 years ago
Nicely done, thanks for sharing! :)
https://www.instructables.com/Setup-for-DS3231-AlarmTimer-Function/
Hello Folks,
This is my first time ever posting here pardon my brevity.
My namespace has more than 1000 folders and sub-folders. Until now, we were using Virtual File Manager to manage the DFS environment, which was pretty neat. VFM has now reached end of support and we are relying solely on DFS Management.
Our shares are on one NetApp filer, which we are planning to replace with a new filer at a different location, and I have been assigned to change the target on all the folders. As I said, there are more than 1000 folders and sub-folders, so it is impossible for me to change the target on each folder individually. Is there a way I can do it on multiple folders at once?
I would really appreciate your help on this.
4 Replies
Should be do-able with PowerShell
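For example, the DFSN PowerShell module (available on Windows Server 2012 or later, or from a workstation with RSAT installed) can enumerate every folder in the namespace and swap targets in bulk. The namespace and filer names below are placeholders; test against a few folders before running it across all 1000+:

# Replace targets pointing at the old filer with targets on the new filer
$folders = Get-DfsnFolder -Path '\\contoso.com\shares\*'
foreach ($folder in $folders) {
    $oldTargets = Get-DfsnFolderTarget -Path $folder.Path |
        Where-Object { $_.TargetPath -like '\\oldfiler\*' }
    foreach ($target in $oldTargets) {
        $newTargetPath = $target.TargetPath -replace '\\\\oldfiler', '\\newfiler'
        New-DfsnFolderTarget -Path $folder.Path -TargetPath $newTargetPath
        Remove-DfsnFolderTarget -Path $folder.Path -TargetPath $target.TargetPath
    }
}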
Welcome to the community!
You can export the configuration for the entire namespace, modify the XML file, and then import it back....
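The export/import route uses dfsutil from the DFS management tools. The syntax below is from memory, so confirm it with dfsutil /? first; the namespace and file paths are placeholders:

dfsutil root export \\contoso.com\shares C:\temp\namespace.xml
REM edit the XML (for example, search and replace the old filer name), then:
dfsutil root import merge C:\temp\namespace.xml \\contoso.com\shares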
Oh, that's a good way to do it and I didn't know you could export as xml.
Also, why do you need that many folder targets? Why can't you just move the folder target up a level to minimize the number of folders? You can always use Access Based Enumeration to limit the number of folders that people see.
That is true, and I had been using Access Based Enumeration at my previous company (only 120 employees), which was very handy. But now I have switched jobs and I am working for a company with 500 employees, maybe more, and this is their structure which they have been using for a decade, so I do not want to change anything unless of course it is necessary.
|
https://community.spiceworks.com/topic/2127513-adding-folder-targets-on-multiple-folders-in-dfs-namespace
|
CC-MAIN-2021-31
|
refinedweb
| 293
| 70.13
|
Leszek Gawron pisze:
>> 1. What's the scope of variable introduced by jx:set?
> you are probably asking the wrong question. jx:set always puts a
> variable in current context. The question should be: which
> elements/instructions should trigger a new local context.
>
> I think new local context should never be introduced for plain xml
> elements (either local or imported, namespaced or not).
Take a look at this code from StartPrefixMapping:
namespaces.addDeclaration(getPrefix(), getUri());
objectModel.markLocalContext();
objectModel.put(ObjectModel.NAMESPACE, namespaces);
What this code does is put a NamespaceTable object (namespaces) on the Object Model. It may be used while some expression deeper in the element hierarchy is evaluated. Even though namespaces are only supported by JXPath, we cannot avoid putting them on the Object Model. And if we put something, we need to remove it from the Object Model; that's what EndPrefixMapping does:
objectModel.cleanupLocalContext();
That's why we need to introduce a new context when encountering a new namespace prefix.
> jx:if, jx:choose, jx:forEach, etc. should create a new local context.
Do you want to say that all instructions that contain other instructions create a new context? If so, I'm fine with such behaviour.
> jx:call (along with alternative <macroName/> invocation) should create a
> new context that DOES not inherit from parent context (only the
> parameters explicitly passed with <jx:call) should be visible.
That's something new for me, but after a while of thinking I agree with you. This means that the Call instruction will need to create its own instance of the ObjectModelImpl class, created every time the call is made. That's not the best option because it couples the Template and Object Model implementations. I must think it over.
> I have no experience in xml namespaces area.
>
> Are these valid xml files?:
>
> <root>
> <foo:foo xmlns:
> </foo:foo>
>
> <!-- different namespace, same prefix -->
> <foo:foo xmlns:
> </foo:foo>
> </root>
Yes, it's valid. It's worth saying that the namespace prefix does not matter at all because it's defined locally; it only makes it easier to indicate which elements belong to which namespaces.
> <root>
> <foo:foo xmlns:
> </foo:foo>
>
> <!-- namespaced element outside namespace declaration -->
> <foo:foo>
> </foo:foo>
> </root>
>
> If the second one is valid we have to keep all declared namespaces till
> the end of xml file (gets worse for xmls with jx:imports).
The second file is valid XML, but the second "foo" element is in the same namespace as the root element (the empty namespace) and its full name is "foo:foo". The lack of a namespace declaration makes the prefix meaningless and part of the element's name (if the prefix is attached to a namespace, it's part of the element's name as well). Namespace declarations are not available to sibling elements.
> Unnecessary
> namespaces are cleared anyway by a filter:
>
> XMLConsumer consumer = new AttributeAwareXMLConsumerImpl(new
> RedundantNamespacesFilter(this.xmlConsumer));
>
Leszek, you are mixing two things:
1. SAX events declaring namespaces pushed down the pipeline that you say are filtered out.
2. Namespace declarations put on the Object Model that are used solely for JXPath purposes and have nothing to do with SAX events.
> Once again: the contract of jx:set is clear. On the contrary we have
> inconsistent context creation contract for all other jx:* instructions.
From the ObjectModel point of view everything is fine. StartPrefixMapping puts something on the ObjectModel that will need to be cleaned up later on, so it needs to create local contexts.
However, what we want is to attach a variable to a context higher in the hierarchy. I'm thinking about named contexts. For example, jx:if could create a new local context this way:
objectModel.markLocalContext("jx:instruction");
Then in jx:set we could use:
objectModel.put("variable_name", variable, "jx:instruction");
so the object model would attach this variable to the first context named "jx:instruction" found when searching the context hierarchy.
I'm not particularly happy with such an approach because it pollutes the Object Model API, but I couldn't come up with anything more elegant.
Thoughts?
--
Grzegorz Kossakowski
|
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200708.mbox/%3C46D1DD81.2050100@apache.org%3E
|
CC-MAIN-2013-48
|
refinedweb
| 657
| 58.18
|
Eugene:
> Can Chris or someone please tell me what is the reason behind leaving
> __kernel_uid_t == short on x86 architecture, and adding
> __kernel_uid32_t? To me, it sounds more reasonable to rather make
> __kernel_uid_t == int and add __kernel_uid16_t == short for binary
> compatibility things.

That's what I did originally. Unfortunately, it breaks both libc5 and glibc. In the case of libc5, every time you compile a program, you are including header files from the kernel source tree (normally installed in /usr/include/linux and /usr/include/asm). If we were to change the types of uid_t and gid_t directly in the kernel, then anyone trying to compile a program would be using the wrong sizes and would end up with a miscompiled program.

This is why we needed the following ugly construction in include/linux/types.h:

#ifdef __KERNEL__
typedef __kernel_uid32_t uid_t;
typedef __kernel_gid32_t gid_t;
#else
typedef __kernel_uid_t uid_t;
typedef __kernel_gid_t gid_t;
#endif /* __KERNEL__ */

> As a result of current status of things, 32bit uid support does *not*
> become visible to the userspace when you rebuild glibc with the new
> kernel source. To make it take effect, you also need to modify glibc
> source, and in a rather illogical way.

This is quite deliberate. If we were to change the type of __kernel_uid_t, and you recompiled glibc, you would *not* get a working glibc, you would get a glibc that worked only partially and caused various programs to crash. Even worse, if you tried to use this glibc on a system with an older kernel, every program would crash.

Old versions of glibc simply do not expect the type of __kernel_uid_t to change.

Instead, we added a new type, __kernel_uid32_t, and new system calls (sys_setuid32, sys_chown32, etc.). Old versions of glibc do not know about these, and will continue to work properly if compiled against newer kernel headers.

Future versions of glibc will check to see if __kernel_uid32_t is defined when they are being compiled, and if so, will include everything that is necessary to make use of 32-bit UIDs on new kernels, and still work properly on older ones.

I'll post a message to the kernel list when I have a patch for glibc 2.1 for testing. The older patches on my web site do not yet work for the 32-bit UID support now in Linux 2.3.

Thanks,
Chris Wing
wingc@engin.umich.edu
|
http://lkml.org/lkml/2000/1/15/33
|
CC-MAIN-2016-18
|
refinedweb
| 402
| 53
|
strfromd (3) - Linux Man Pages
strfromd: convert a floating-point value into a string
NAME
strfromd, strfromf, strfroml - convert a floating-point value into a string
SYNOPSIS
#include <stdlib.h>

int strfromd(char *restrict str, size_t n, const char *restrict format, double fp);
int strfromf(char *restrict str, size_t n, const char *restrict format, float fp);
int strfroml(char *restrict str, size_t n, const char *restrict format, long double fp);
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
strfromd(), strfromf(), strfroml():
- __STDC_WANT_IEC_60559_BFP_EXT__
DESCRIPTION
These functions convert a floating-point value, fp, into a string of characters, str, with a configurable format string. At most n characters are stored into str.
The terminating null character ('\0') is written if and only if n is sufficiently large, otherwise the written string is truncated at n characters.
The strfromd(), strfromf(), and strfroml() functions are equivalent to
snprintf(str, n, format, fp);
except for the format string.
Format of the format string
The format string must start with the character '%'. This is followed by an optional precision which starts with the period character (.), followed by an optional decimal integer. If no integer is specified after the period character, a precision of zero is used. Finally, the format string should have one of the conversion specifiers a, A, e, E, f, F, g, or G.
The conversion specifier is applied based on the floating-point type indicated by the function suffix. Therefore, unlike snprintf(), the format string does not have a length modifier character. See snprintf(3) for a detailed description of these conversion specifiers.
The implementation conforms to the C99 standard on conversion of NaN and infinity values.
To convert the value 12.1 as a float type to a string using decimal notation, resulting in "12.100000":
#define __STDC_WANT_IEC_60559_BFP_EXT__ #include <stdlib.h> int ssize = 10; char s[ssize]; strfromf(s, ssize, "%f", 12.1);
To convert the value 12.3456 as a float type to a string using decimal notation with two digits of precision, resulting in "12.35":
#define __STDC_WANT_IEC_60559_BFP_EXT__ #include <stdlib.h> int ssize = 10; char s[ssize]; strfromf(s, ssize, "%.2f", 12.3456);
To convert the value 12.345e19 as a double type to a string using scientific notation with zero digits of precision, resulting in "1E+20":
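/* sketch following the pattern of the examples above */
#define __STDC_WANT_IEC_60559_BFP_EXT__
#include <stdlib.h>
int ssize = 10;
char s[ssize];
strfromd(s, ssize, "%.E", 12.345e19);  /* "%.E" requests zero digits of precision */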
|
https://www.systutorials.com/docs/linux/man/3-strfromd/
|
CC-MAIN-2021-21
|
refinedweb
| 363
| 56.55
|
cc [ flag ... ] file ... -lvolmgt [ library ... ]

#include <volmgt.h>
int volmgt_release(char *dev);
The volmgt_release() routine releases the removable media device reservation specified as dev. See volmgt_acquire(3VOLMGT) for a description of dev.
If dev is reserved by the caller, volmgt_release() updates the internal device reservation database to indicate that the device is no longer reserved. If the requested device is reserved by another process, the release attempt fails and errno is set to EBUSY.
Upon successful completion, volmgt_release() returns a non-zero value. Upon failure, 0 is returned.
On failure, volmgt_release() returns 0, and sets errno for one of the following conditions:
EINVAL dev was invalid or missing.
EBUSY dev was not reserved by the caller.
Example 1: Using volmgt_release()
In the following example, Volume Management is running, and the first floppy drive is reserved, accessed and released.
#include <volmgt.h>

char *errp;

if (!volmgt_acquire("floppy0", "FileMgr", 0, &errp, NULL)) {
        /* handle error case */
        ...
}

/* floppy acquired - now access it */

if (!volmgt_release("floppy0")) {
        /* handle error case */
        ...
}
See attributes(5) for descriptions of the following attributes:
vold(1M), volmgt_acquire(3VOLMGT), attributes(5)
|
http://backdrift.org/man/SunOS-5.10/man3volmgt/volmgt_release.3volmgt.html
|
CC-MAIN-2017-04
|
refinedweb
| 183
| 60.01
|
The problem is, again, a seemingly innocuous set of parentheses. Modify your server code to accept the connection as follows (watch the additional parentheses):
. code ...
clilen = sizeof(cli_addr);
if ((newsockfd = accept(sockfd, (struct sockaddr *) &cli_addr, &clilen)) < 0)
printf("server: accept error\n");
. code ...
Your intention in writing a = b() < 0 was (a = b()) < 0. But the compiler was seeing a = (b() < 0)!
Welcome to the unforgiving world of C!
if ( !(fp = fopen("filetoberead", "r"))) {
perror("fopen failed");
exit(1);
}
Also,
>if (childpid = fork() < 0)
> printf("server: fork error\n");
can be better written as
if ((childpid = fork()) < 0)
printf("server: fork error\n");
When I try to get the client to read from a file rather than from stdin, the behaviour is exactly the same: ie, characters are echoed on the server's virtual console. I modified the client code to incorporate the test you suggested, and have pasted it below, just so that you can see what I've done more clearly.
Thanks.
/*
* Example of client using UNIX domain stream protocol
*/
#include <stdio.h>
#include "uniks.h"
int writen(register int, register char *, register int);
int readline(int, char *, int);
main(int argc, char *argv[])
{
int sockfd, servlen;
struct sockaddr_un serv_addr;
FILE *fp;
pname = argv[0];
/*
* Fill in the structure "serv_addr" with the address of the
* server that we want to send to.
*/
bzero((char *) &serv_addr, sizeof(serv_addr));
serv_addr.sun_family = AF_UNIX;
strcpy(serv_addr.sun_path,
servlen = strlen(serv_addr.sun_path)
/* Open a UNIX domain stream socket */
if ((sockfd = socket(AF_UNIX, SOCK_STREAM, 0)) < 0)
printf("client: can't open stream socket\n");
/*
* Connect to the server
*/
if (connect(sockfd, (struct sockaddr *) &serv_addr, servlen) < 0)
printf("client: can't connect to server\n");
if (!(fp = fopen("thingo.txt", "r")))
{
printf("Can't open file\n");
exit(1);
}
str_cli(stdin, sockfd); /* do it all */
close(sockfd);
exit(0);
}
|
https://www.experts-exchange.com/questions/10068392/Unix-domain-client-and-server.html
|
CC-MAIN-2018-17
|
refinedweb
| 300
| 55.24
|
Opened 11 years ago
Closed 10 years ago
Last modified 7 years ago
#5825 closed (fixed)
Customized actions for manage.py not working
Description
Custom actions can be added, as described
Following the instructions I am getting the error message: unknown command
To reproduce you can also copy the contents of the folder tests\modeltests\user_commands\management in an application subdirectory of your project.
The regression test for user_commands passes: it seems to use a different sys.path than when you run manage.py from the commandline...
Digging a bit deeper in the source file
django\core\management\__init__.py :
- The function imp.find_module in the method find_management_module is using sys.path to locate the management modules. The parent directory of your project is normally not on the path, so looking for an application package "project_name.app_name" doesn't find a module. Manage.py only has the project directory itself on the path.
- The find_commands method in the same source file is using a pretty hack-ish way to discover commands: it looks for files ending in .py. This misses compiled code with extensions as .pyo and .pyc and also doesn't work for source files packaged in a zip file (as produced by eg py2exe). The correct api is to use pkgutil.iter_modules (unfortunately only available in python 2.5)
(I'm willing to look at a patch for the issues, but need some guidance: For the first issue, I am a bit puzzled by the append and popping of the sys.path in the setup_environ method in the same file. For the second issue, it requires more changes and thinking to do it properly across different versions: is it worth the effort?)
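A rough illustration of the first issue, with made-up paths (not from this project): manage.py effectively leaves only the project directory on sys.path, so imp.find_module can never resolve a dotted app name that starts with the project name.
import imp, sys
sys.path = ["/home/me/holdem"]        # roughly what manage.py gives you
try:
    imp.find_module("holdem")         # would need /home/me on sys.path, not /home/me/holdem
except ImportError:
    print("unknown command")          # so the app's management commands are never found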
Attachments (5)
Change History (30)
comment:1 Changed 11 years ago by
comment:2 Changed 11 years ago by
import os, sys
sys.path.append(os.path.dirname(os.getcwd()))
comment:3 Changed 11 years ago by
import os, sys sys.path.append(os.path.dirname(os.getcwd()))
how formatting works?
comment:4 Changed 11 years ago by
The correct directory is added to sys.path by the django.core.management.setup_environ() function. It is then, for some reason, removed again. Why is this function called, if not to setup the sys.path correctly?
Commenting out the line: 'sys.path.pop()' from this function will make customized actions work correctly.
comment:5 Changed 11 years ago by
Changed 11 years ago by
Patch
Changed 11 years ago by
patch
comment:6 Changed 11 years ago by
Changed 11 years ago by
Here is a very simple patch that will make sure that sys.path includes the project's containing directory while searching for the local app management modules.
comment:7 Changed 11 years ago by
comment:8 Changed 11 years ago by
comment:9 Changed 11 years ago by
comment:10 Changed 11 years ago by
I fully agree that the testing for manage.py and django-admin.py can be better. The current test suite elegantly bypasses them.
There is already a unit test for user commands, and I don't clearly see how I can improve on it: hence I'm removing myself as the owner of the issue. Maybe somebody else has some bright idea or view on this...
Changed 11 years ago by
comment:11 Changed 11 years ago by
I came across this bug as well over the weekend. Attached is a small patch (the current one here no longer applies) & a unit test for the fix.
comment:12 Changed 10 years ago by
comment:13 Changed 10 years ago by
comment:14 Changed 10 years ago by
comment:15 Changed 10 years ago by
comment:16 Changed 10 years ago by
comment:17 Changed 10 years ago by
This is a sufficiently significant problem that it warrants attention before v1.0
comment:18 Changed 10 years ago by
comment:19 Changed 10 years ago by
Patch works for me. Would love to see this checked in.
comment:20 Changed 10 years ago by
comment:21 Changed 10 years ago by
(In [8227]) Fixed #5825 -- Modified the custom command loader to allow for explicit specification of the project name, even if the project directory isn't in the python path. This brings custom command loading into alignment with application loading. Thanks to jdetaeye@… for the original report, and to abozanich for the patch.
comment:22 Changed 10 years ago by
comment:23 Changed 10 years ago by
The fix does not work, if say, we use a symbolic link from ~/bin to manage.py, or run manage.py from anywhere else, like running python myproject/manage.py
The current directory won't show up as the project name, so the custom commands aren't found.
Should I not be doing that?
comment:24 Changed 10 years ago by
This bug was fixed, please file a new ticket or contact the mailing lists if you have a new bug/might have a bug.
Changed 10 years ago by
comment:25 Changed 7 years ago by
Milestone 1.1 deleted
In django.core.management.__init__.py, the find_management_module function uses the find_module function from the standard imp module to locate the management module of each application (user apps as well as django core). find_module is called iteratively on each part of the application name, starting with the project name.
If an application is called "common" and the project is called "holdem", then find_module first tries to import "holdem", then using its path tries to import "common", then using that path tries to import "management".
Before the first call of find_module the path is set to None. This makes find_module use the "clean" sys.path to start with. That sys.path contains the project dirname itself, but not its parent directory, which is required to import a user application whose name starts with the project name.
I added to my project settings.py lines
import os, sys
sys.path.append(os.path.dirname(os.getcwd()))
This makes manage.py see the management commands added by user applications.
I think appending the project's parent directory should be done by manage.py itself if that directory is not in sys.path.
|
https://code.djangoproject.com/ticket/5825
|
CC-MAIN-2018-51
|
refinedweb
| 1,029
| 66.13
|
Hi Olivier
Olivier Rossel wrote:
> I wish to extract the tag "/task/title" from my XML generated.
> I think that using a XSL for that is a bit heavy, so I would like to use
> an existing transformer.
>
> I tried FragmentExtractorTransformer, but I think the name
> of the tag that is used for extraction must be declared not in
> the map:pipeline (instanciation part), but in the map:components
> (declaration part). Am I wrong?
> It seems also that the tag must have a namespace or the transformer
> does not trigger.
> Am I wrong?
yes you are !!!! ;-)
This transformer does not do what you want: it replaces the named element with an xlink locator pointer to the element. AFAIK it's mostly used with SVGs.
>
>
> So I have tried the FilterTransformer.
> It seems that 3 parameters must be filled:
> element-name (here i said "title")
> but also the block number (what is it???)
> and the number of rows(???)
this transformer is typically used to keep only the necessary data from, e.g., a huge SQL request.
So you can tell this transformer to first group your data in blocks (the 'count' parameter: 'count' "title"-elements per block), then choose the block number 'blocknr'.
>
>
> It seems strange that those very specific transformers exist, and
> there is no default
> transformer that can accept simply the name of a tag and extract the
> sub-tree that corresponds.
you only have to write your own...
To do this, you can look at any Transformer's Java code...
>
>
> Any information is welcome.
;-)
Enjoy...
>
>
>
|
http://mail-archives.apache.org/mod_mbox/cocoon-users/200204.mbox/%3C3CB411C8.2020007@anyware-tech.com%3E
|
CC-MAIN-2018-34
|
refinedweb
| 251
| 75.4
|
Why Qt for C++ is STILL the truth: F the noise
Nowadays, when you speak about desktop development, people look at you like you're some kind of Luddite. Despite the hype with mobile, the honest truth is that the most productive and complex work is still being done on desktop (or through browsers that run on desktop).
I love Java, Swift and them, but when it comes to building desktop applications, C++ in tandem with Qt is still the best. Here are a few reasons I'm still a Qt junkie in 2017:
Easy
Qt with C++ is just easy to use. The only other development environment that is comparable is XCode. With tons of great examples at startup, it doesn't take long to get your feet wet. I know that the appeal of being a coder is dealing with the kind of complexity that "mere mortals" can barely wrap their heads around, but I am also a big fan of simplifying rudimentary tasks like UI building so that it becomes an art instead of it being comparable to solving a complex mathematical problem. Coming from a Java Swing and PyGTK background, Qt was heaven for me.
Fast. Very fast.
It's common knowledge that C++ is a very fast language and I believe Qt has leveraged some of that speed. Comparing the development and interactions with software that I have built in the past, Qt always felt a bit more nimble (I haven't done any benchmarks so I could be wrong). Even today, when I spend most of my days on XCode, I still can't help but marvel at the speed and lightness of Qt. This of course may be based on my internal biases and limitations as a coder. Below is a snippet of a crude GUI-based merge-sort I did a LONG time ago. Worked like a charm and was noticeably crisper than Java when given a large data set. You can try it out yourself.
// MergeSort.cpp
// Class MergeSort member-function definition.
#include <vector>
using std::vector;
#include "merge_sort_budget.h" // class MergeSort definition
/**
The constructor for the MergeSort class.
@param btList btList the budget tab list.
*/
MergeSortBudget::MergeSortBudget(BudgetTabList btList)
{
size = btList.size(); //validate vectorSize
// fill the vector with random BudgetTabItems with different dates
for (int i = 0; i < btList.size(); i++)
{
data.push_back(btList.at(i));
}
} // end MergeSort constructor
//
/**
Split vector, sort subvectors and merge subvectors into sorted vector
@param btList the budget tab list.
@return nothing.
*/
void MergeSortBudget::sort(BudgetTabList btList)
{
sortSubVector(0, size - 1, btList); // recursively sort entire vector
} // end function sort
/**
Recursive function to sort subvectors
@param low the low tab.
@param high the high tab.
@param btList the budget tab list.
@return nothing.
*/
void MergeSortBudget::sortSubVector(int low, int high, BudgetTabList btList)
{
// test base case; size of vector equals 1
if ((high - low) >= 1) // if not base case
{
int middle1 = (low + high) / 2; // calculate middle of vector
int middle2 = middle1 + 1; // calculate next element over
// split vector in half; sort each half (recursive calls)
sortSubVector(low, middle1, btList); // first half of vector
sortSubVector(middle2, high, btList); // second half of vector
// merge two sorted vectors after split calls return
merge(low, middle1, middle2, high, btList);
} // end if
} // end function sortSubVector
/**
Merge two sorted subvectors into one sorted subvector
@param left the left.
@param middle1 the high tab.
@param middle2 the high tab.
@param right the high tab.
@param btList the budget tab list.
@return nothing.
*/
void MergeSortBudget::merge(int left, int middle1, int middle2, int right, BudgetTabList btList)
{
int leftIndex = left; // index into left subvector
int rightIndex = middle2; // index into right subvector
int combinedIndex = left; // index into temporary working vector
vector<BudgetTabItem> combined; // working vector
for (int i = 0; i < size; i++)
combined.push_back(BudgetTabItem("", 0, 0, 0));
// merge vectors until reaching end of either
while (leftIndex <= middle1 && rightIndex <= right)
{
// place smaller of current elements into result
// and move to next space in vector
if ((data[leftIndex].getYear() >= data[rightIndex].getYear()
&& data[leftIndex].getMonth() > data[rightIndex].getMonth()) ||
(data[leftIndex].getYear() > data[rightIndex].getYear()
&& data[leftIndex].getMonth() >= data[rightIndex].getMonth()) ||
(data[leftIndex].getYear() > data[rightIndex].getYear()
&& data[leftIndex].getMonth() < data[rightIndex].getMonth()))
combined[combinedIndex++] = data[leftIndex++];
else
combined[combinedIndex++] = data[rightIndex++];
} // end while
if (leftIndex == middle2) // if at end of left vector
{
while (rightIndex <= right) //copy in rest of right vector
combined[combinedIndex++] = data[rightIndex++];
} // end if
else // at end of right vector
{
while (leftIndex <= middle1) // copy in rest of left vector
combined[combinedIndex++] = data[leftIndex++];
} // end else
// copy values back into original vector
for (int i = left; i <= right; i++)
{
BudgetTabItem item = combined[i];
item.setTabIndex(i);
data[i] = item;
}
} // end function merge
/**
Display elements in vector
@param none.
@return nothing.
*/
void MergeSortBudget::displayElements() const
{
displaySubVector(0, size - 1);
} // end function displayElements
/**
Display certain values in vector
@param none.
@return nothing.
*/
void MergeSortBudget::displaySubVector(int low, int high) const
{
// output spaces for alignment
for (int i = 0; i < low; i++)
//cout << " ";
// output elements left in vector
for (int i = low; i <= high; i++)
{
BudgetTabItem item = data[i];
}
} // end function displaySubVector
/**
Gets the sorted index.
@param month. the month of the tab.
@param year. the year of tab.
@return sortedIndex
*/
int MergeSortBudget::getSortedTabIndex(int month, int year)
{
int sortedIndex;
for (int i = 0; i < size; i++)
{
if ((data[i].getYear() == year) && (data[i].getMonth() == month))
{
sortedIndex = data[i].getTabIndex();
}
}
return sortedIndex;
}
Platform agnostic
There's been a lot of emphasis on being platform agnostic in mobile development and a lot of tools have been built with that in mind. Qt did it a LONG time ago on desktop and did it in a big way. Porting your apps to different platforms doesn't feel like hard work. They run the same on every OS/platform with little to no porting.
Snapping out of the current zeitgeist
Apps. Apps. Apps! Mobile apps! "Hey dude, can you build me an app bro?", said the old college buddy. Lately, it feels like ubiquity is associated with mobile and mobile with success. Maybe that's true. It can also be as exhausting as listening to the mainstream media constantly harp on AI. Maybe this is just me getting sentimental but Qt has the feel of authentic and real-world programming — a coding culture that isn't based on hype or the latest hot thing. It's advanced but primal. It's comprehensive but not closed. It feels real. It feels true. It doesn't feel StackOverflow-y. I recently saw a cool WIRED documentary about the startup scene in Israel. The documentary made me feel the same way I felt when I first started tinkering with Qt —like I was part of something true. Check it out below:
There's an entire generation of coders that might make the tragic mistake of ignoring desktop and Qt (with C++). It would hurt them, not just in terms of history but also in terms of making them whole and broad. There is a whole big world of computer science out there: the hyped-up fields being AI and data science at the moment. In all likelihood, I'll be working with both on a full-time basis within the next six months. That being said, nothing beats the intimacy of building something from scratch with something like Qt — it's a pleasure that no coder should deny themselves.
I am Thabo Klass and I work at Spreebie. Cheers.
|
https://medium.com/@thabodavidnyakalloklass/why-qt-for-c-is-still-the-truth-f-the-noise-efe8acb5109a
|
CC-MAIN-2018-30
|
refinedweb
| 1,233
| 62.48
|
Sparse Approximations¶
The gp.MarginalSparse class implements sparse, or inducing point, GP approximations. It works identically to gp.Marginal, except it additionally requires the locations of the inducing points (denoted Xu), and it accepts the argument sigma instead of noise because these sparse approximations assume white IID noise.
Three approximations are currently implemented: FITC, DTC and VFE. For most problems, they produce fairly similar results. These GP approximations don’t form the full covariance matrix over all \(n\) training inputs. Instead they rely on \(m < n\) inducing points, which are “strategically” placed throughout the domain. These approximations reduce the \(\mathcal{O}(n^3)\) complexity of GPs down to \(\mathcal{O}(nm^2)\) — a significant speed up. The memory requirements scale down a bit too, but not as much. They are commonly referred to as sparse approximations, in the sense of being data sparse. The downside of sparse approximations is that they reduce the expressiveness of the GP. Reducing the dimension of the covariance matrix effectively reduces the number of covariance matrix eigenvectors that can be used to fit the data.
A choice that needs to be made is where to place the inducing points. One option is to use a subset of the inputs. Another possibility is to use K-means. The location of the inducing points can also be an unknown and optimized as part of the model. These sparse approximations are useful for speeding up calculations when the density of data points is high and the lengthscales is larger than the separations between inducing points.
For more information on these approximations, see Quinonero-Candela+Rasmussen, 2006 and Titsias 2009.
Examples¶
For the following examples, we use the same data set as was used in the gp.Marginal example, but with more data points.
In [1]:
import pymc3 as pm import theano import theano.tensor as tt import numpy as np import matplotlib.pyplot as plt %matplotlib inline
In [3]:
# set the seed
np.random.seed(1)

n = 2000  # The number of data points
X = 10*np.sort(np.random.rand(n))[:,None]

# f_true is the unobserved latent function (its construction is not shown in this excerpt)

# IID Gaussian noise
# The standard deviation of the noise is `sigma`
σ_true = 2.0
y = f_true + σ_true * np.random.randn(n)

## Plot the data and the unobserved latent function
fig = plt.figure(figsize=(12,5)); ax = fig.gca()
ax.plot(X, f_true, "dodgerblue", lw=3, label="True f");
ax.plot(X, y, 'ok', ms=3, alpha=0.5, label="Data");
ax.set_xlabel("X"); ax.set_ylabel("The true f(x)"); plt.legend();
Initializing the inducing points with K-means¶
We use the NUTS sampler and the FITC approximation.
In [4]:
with pm.Model() as model:
    # (the covariance function `cov` is defined earlier in the original notebook, not shown here)
    gp = pm.gp.MarginalSparse(cov_func=cov, approx="FITC")

    # initialize 20 inducing points with K-means
    Xu = pm.gp.util.kmeans_inducing_points(20, X)

    σ = pm.HalfCauchy("σ", beta=5)
    y_ = gp.marginal_likelihood("y", X=X, Xu=Xu, y=y, sigma=σ)

    trace = pm.sample(1000)
Auto-assigning NUTS sampler... Initializing NUTS using advi+adapt_diag... Average Loss = 4,339.1: 5%|▌ | 10591/200000 [02:00<41:40, 75.74it/s] Convergence archived at 10600 Interrupted at 10,600 [5%]: Average Loss = 4,654.4 100%|██████████| 1500/1500 [02:09<00:00, 10.48it/s]/home/bill/pymc3/pymc3/step_methods/hmc/nuts.py:451: UserWarning: The acceptance probability in chain 0 does not match the target. It is 0.878845019249, but should be close to 0.8. Try to increase the number of tuning steps. % (self._chain_id, mean_accept, target_accept))
In [5]:
X_new = np.linspace(-1, 11, 200)[:,None]

# add the GP conditional to the model, given the new X values
with model:
    f_pred = gp.conditional("f_pred", X_new)

# To use the MAP values, you can just replace the trace with a length-1 list with `mp`
with model:
    pred_samples = pm.sample_posterior_predictive(trace, vars=[f_pred], samples=1000)
100%|██████████| 1000/1000 [00:43<00:00, 20.15it/s]
In [11]:
# plot the results
fig = plt.figure(figsize=(12,5)); ax = fig.gca()

# plot the samples from the gp posterior with samples and shading
from pymc3.gp.util import plot_gp_dist
plot_gp_dist(ax, pred_samples["f_pred"], X_new);

# plot the data and the true latent function
plt.plot(X, y, 'ok', ms=3, alpha=0.5, label="Observed data");
plt.plot(X, f_true, "dodgerblue", lw=3, label="True f");
plt.plot(Xu, 10*np.ones(Xu.shape[0]), "cx", ms=10, label="Inducing point locations")

# axis labels and title
plt.xlabel("X"); plt.ylim([-13,13]);
plt.title("Posterior distribution over $f(x)$ at the observed values");
plt.legend();
Optimizing inducing point locations as part of the model¶
For demonstration purposes, we set approx="VFE". Any inducing point initialization can be done with any approximation.
In [7]:
Xu_init = 10*np.random.rand(20)

with pm.Model() as model:
    # (the covariance function `cov` is defined earlier in the original notebook, not shown here)
    gp = pm.gp.MarginalSparse(cov_func=cov, approx="VFE")

    # set flat prior for Xu
    Xu = pm.Flat("Xu", shape=20, testval=Xu_init)

    σ = pm.HalfCauchy("σ", beta=5)
    y_ = gp.marginal_likelihood("y", X=X, Xu=Xu[:, None], y=y, sigma=σ)

    mp = pm.find_MAP()
lp = -4,298.1, ||grad|| = 2.834e-05: 0%| | 116/50000 [00:01<10:36, 78.36it/s]
In [10]:
mu, var = gp.predict(X_new, point=mp, diag=True)
sd = np.sqrt(var)

# draw plot
fig = plt.figure(figsize=(12,5)); ax = fig.gca()

# plot mean and 2σ intervals
plt.plot(X_new, mu, 'r', lw=2, label="mean and 2σ region");
plt.plot(X_new, mu + 2*sd, 'r', lw=1);
plt.plot(X_new, mu - 2*sd, 'r', lw=1);
plt.fill_between(X_new.flatten(), mu - 2*sd, mu + 2*sd, color="r", alpha=0.5)

# plot original data and true function
plt.plot(X, y, 'ok', ms=3, alpha=1.0, label="observed data");
plt.plot(X, f_true, "dodgerblue", lw=3, label="true f");

Xu = mp["Xu"]
plt.plot(Xu, 10*np.ones(Xu.shape[0]), "cx", ms=10, label="Inducing point locations")

plt.xlabel("x"); plt.ylim([-13,13]);
plt.title("predictive mean and 2σ interval");
plt.legend();
|
https://docs.pymc.io/notebooks/GP-SparseApprox.html
|
CC-MAIN-2018-47
|
refinedweb
| 967
| 54.59
|
Machine Learning is Fun!
The world’s easiest introduction to Machine Learning
Update: Machine Learning is Fun! Part 2, Part 3, Part 4, Part 5 and Part 6 are now available!
You can also read this article in 日本語, Português, Türkçe, Français, 한국어 , العَرَبِيَّة, Español (México), Español (España) or Polski.
Have you heard people talking about machine learning but only have a fuzzy idea of what that means? Are you tired of nodding your way through conversations with co-workers? Let’s change that!
Using that training data, we want to create a program that can estimate how much any other house in your area is worth:
This is kind of like someone giving you a list of numbers on a sheet of paper and saying “I don’t really know what these numbers mean but maybe you can figure out if there is a pattern or grouping or something — good luck!”
So what could you do with this data? For starters, you could have an algorithm that automatically identified different market segments in your data. Maybe you’d find out that home buyers in the neighborhood near the local college really like small houses with lots of bedrooms, but home buyers in the suburbs prefer 3-bedroom houses with lots of square footage. Knowing about these different kinds of customers could help direct your marketing efforts.
Another cool thing you could do is automatically identify any outlier houses that were way different than everything else. Maybe those outlier houses are giant mansions and you can focus your best sales people on those areas because they have bigger commissions.
Supervised learning is what we’ll focus on for the rest of this post, but that’s not because unsupervised learning is any less useful or interesting. In fact, unsupervised learning is becoming increasingly important as the algorithms get better because it can be used without having to label the data with the correct answer.
Side note: There are lots of other types of machine learning algorithms. But this is a pretty good place to start.
That’s cool, but does being able to estimate the price of a house really count as “learning”?
As a human, your brain can approach most any situation and learn how to deal with that situation without any explicit instructions. If you sell houses for a long time, you will instinctively have a “feel” for the right price for a house, the best way to market that house, the kind of client who would be interested, etc. The goal of Strong AI research is to be able to replicate this ability with computers.
But current machine learning algorithms aren’t that good yet — they only work when focused on a very specific, limited problem. Maybe a better definition for “learning” in this case is “figuring out an equation to solve a specific problem based on some example data”.
Unfortunately “Machine Figuring out an equation to solve a specific problem based on some example data” isn’t really a great name. So we ended up with “Machine Learning” instead.
Of course if you are reading this 50 years in the future and we’ve figured out the algorithm for Strong AI, then this whole post will all seem a little quaint. Maybe stop reading and go tell your robot servant to go make you a sandwich, future human.
Let’s write that program!
So, how would you write the program to estimate the value of a house like in our example above? Think about it for a second before you read further.
If you didn’t know anything about machine learning, you’d probably try to write out some basic rules for estimating the price of a house like this:
def estimate_house_sales_price(num_of_bedrooms, sqft, neighborhood):
price = 0
# In my area, the average house costs $200 per sqft
price_per_sqft = 200
if neighborhood == "hipsterton":
# but some areas cost a bit more
price_per_sqft = 400
elif neighborhood == "skid row":
# and some areas cost less
price_per_sqft = 100
# start with a base price estimate based on how big the place is
price = price_per_sqft * sqft
# now adjust our estimate based on the number of bedrooms
if num_of_bedrooms == 0:
# Studio apartments are cheap
price = price - 20000
else:
# places with more bedrooms are usually
# more valuable
price = price + (num_of_bedrooms * 1000)
return price
If you fiddle with this for hours and hours, you might end up with something that sort of works. But your program will never be perfect and it will be hard to maintain as prices change.
Wouldn’t it be better if the computer could just figure out how to implement this function for you? Who cares what exactly the function does as long is it returns the correct number:
def estimate_house_sales_price(num_of_bedrooms, sqft, neighborhood):
price = <computer, plz do some math for me>
return price
One way to think about this problem is that the price is a delicious stew and the ingredients are the number of bedrooms, the square footage and the neighborhood. If you could just figure out how much each ingredient impacts the final price, maybe there’s an exact ratio of ingredients to stir in to make the final price.
That would reduce your original function (with all those crazy if’s and else’s) down to something really simple like this:
def estimate_house_sales_price(num_of_bedrooms, sqft, neighborhood):
price = 0
# a little pinch of this
price += num_of_bedrooms * .841231951398213
# and a big pinch of that
price += sqft * 1231.1231231
# maybe a handful of this
price += neighborhood * 2.3242341421
# and finally, just a little extra salt for good measure
price += 201.23432095
return price
Notice the magic numbers in bold — .841231951398213, 1231.1231231, 2.3242341421, and 201.23432095. These are our weights. If we could just figure out the perfect weights to use that work for every house, our function could predict house prices!
A dumb way to figure out the best weights would be something like this:
Step 1:
Start with each weight set to 1.0:
def estimate_house_sales_price(num_of_bedrooms, sqft, neighborhood):
price = 0
# a little pinch of this
price += num_of_bedrooms * 1.0
# and a big pinch of that
price += sqft * 1.0
# maybe a handful of this
price += neighborhood * 1.0
# and finally, just a little extra salt for good measure
price += 1.0
return price
Step 2:
Run every house you know about through your function and see how far off the function is at guessing the correct price for each house:
For example, if the first house really sold for $250,000, but your function guessed it sold for $178,000, you are off by $72,000 for that single house.
Now add up the squared amount you are off for each house you have in your data set. Let’s say that you had 500 home sales in your data set and the square of how much your function was off for each house was a grand total of $86,123,373. That’s how “wrong” your function currently is.
Now, take that sum total and divide it by 500 to get an average of how far off you are for each house. Call this average error amount the cost of your function.
If you could get this cost to be zero by playing with the weights, your function would be perfect. It would mean that in every case, your function perfectly guessed the price of the house based on the input data. So that’s our goal — get this cost to be as low as possible by trying different weights.
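Here’s a rough sketch of that cost calculation in code (the data layout and numbers are made up for illustration):

def cost_of_weights(houses, weights):
    # houses is a list of (num_of_bedrooms, sqft, neighborhood, actual_price) tuples
    total_squared_error = 0.0
    for (num_of_bedrooms, sqft, neighborhood, actual_price) in houses:
        guess = (num_of_bedrooms * weights[0]
                 + sqft * weights[1]
                 + neighborhood * weights[2]
                 + weights[3])                      # the "little extra salt"
        total_squared_error += (guess - actual_price) ** 2
    return total_squared_error / len(houses)        # average "wrongness" = the cost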
Step 3:
Repeat Step 2 over and over with every single possible combination of weights. Whichever combination of weights makes the cost closest to zero is what you use. When you find the weights that work, you’ve solved the problem!
Mind Blowage Time
That’s pretty simple, right? Well think about what you just did. You took some data, you fed it through three generic, really simple steps, and you ended up with a function that can guess the price of any house in your area. Watch out, Zillow!
But here’s a few more facts that will blow your mind:
- Research in many fields (like linguistics/translation) over the last 40 years has shown that these generic learning algorithms that “stir the number stew” (a phrase I just made up) out-perform approaches where real people try to come up with explicit rules themselves. The “dumb” approach of machine learning eventually beats human experts.
- The function you ended up with is totally dumb. It doesn’t even know what “square feet” or “bedrooms” are. All it knows is that it needs to stir in some amount of those numbers to get the correct answer.
- It’s very likely you’ll have no idea why a particular set of weights will work. So you’ve just written a function that you don’t really understand but that you can prove will work.
- Imagine that instead of taking in parameters like “sqft” and “num_of_bedrooms”, your prediction function took in an array of numbers. Let’s say each number represented the brightness of one pixel in an image captured by camera mounted on top of your car. Now let’s say that instead of outputting a prediction called “price”, the function outputted a prediction called “degrees_to_turn_steering_wheel”. You’ve just made a function that can steer your car by itself!
Pretty crazy, right?
What about that whole “try every number” bit in Step 3?
Ok, of course you can’t just try every combination of all possible weights to find the combo that works the best. That would literally take forever since you’d never run out of numbers to try.
To avoid that, mathematicians have figured out lots of clever ways to quickly find good values for those weights without having to try very many. Here’s one way:
First, write a simple equation that represents Step #2 above:
In this graph, the lowest point in blue is where our cost is the lowest — thus our function is the least wrong. The highest points are where we are most wrong. So if we can find the weights that get us to the lowest point on this graph, we’ll have our answer!
So we just need to adjust our weights so we are “walking down hill” on this graph towards the lowest point. If we keep making small adjustments to our weights that are always moving towards the lowest point, we’ll eventually get there without having to try too many different weights.
If you remember anything from Calculus, you might remember that if you take the derivative of a function, it tells you the slope of the function’s tangent at any point. In other words, it tells us which way is downhill for any given point on our graph. We can use that knowledge to walk downhill.
So if we calculate a partial derivative of our cost function with respect to each of our weights, then we can subtract that value from each weight. That will walk us one step closer to the bottom of the hill. Keep doing that and eventually we’ll reach the bottom of the hill and have the best possible values for our weights. (If that didn’t make sense, don’t worry and keep reading).
That’s a high level summary of one way to find the best weights for your function called batch gradient descent. Don’t be afraid to dig deeper if you are interested on learning the details.
When you use a machine learning library to solve a real problem, all of this will be done for you. But it’s still useful to have a good idea of what is happening.
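To make that concrete, here is a minimal sketch of batch gradient descent for the house-price weights. The data, learning rate, and iteration count are made up for illustration; a real machine learning library handles all of this for you.

import numpy as np

# each row: [num_of_bedrooms, sqft, neighborhood], and the matching sale prices
X = np.array([[3, 2000, 1],
              [2,  800, 2],
              [4, 2500, 1]], dtype=float)
y = np.array([250000, 300000, 150000], dtype=float)

X = np.hstack([X, np.ones((X.shape[0], 1))])  # extra column for the constant "salt" term
weights = np.ones(X.shape[1])                 # Step 1: start every weight at 1.0

learning_rate = 1e-8
for _ in range(200_000):
    predictions = X @ weights                 # Step 2: guess a price for every house
    errors = predictions - y
    cost = np.mean(errors ** 2)               # average squared error: how "wrong" we are
    gradient = 2 * (X.T @ errors) / len(y)    # which direction is "downhill" for each weight
    weights -= learning_rate * gradient       # Step 3: take a small step downhill

print(weights, cost)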
What else did you conveniently skip over?
The three-step algorithm I described is called multivariate linear regression. You are estimating the equation for a line that fits through all of your house data points. Then you are using that equation to guess the sales price of houses you’ve never seen before based where that house would appear on your line. It’s a really powerful idea and you can solve “real” problems with it.
But while the approach I showed you might work in simple cases, it won’t work in all cases. One reason is because house prices aren’t always simple enough to follow a continuous line.
But luckily there are lots of ways to handle that. There are plenty of other machine learning algorithms that can handle non-linear data (like neural networks or SVMs with kernels). There are also ways to use linear regression more cleverly that allow for more complicated lines to be fit. In all cases, the same basic idea of needing to find the best weights still applies.
Also, I ignored the idea of overfitting. It’s easy to come up with a set of weights that always works perfectly for predicting the prices of the houses in your original data set but never actually works for any new houses that weren’t in your original data set. But there are ways to deal with this (like regularization and using a cross-validation data set). Learning how to deal with this issue is a key part of learning how to apply machine learning successfully.
In other words, while the basic concept is pretty simple, it takes some skill and experience to apply machine learning and get useful results. But it’s a skill that any developer can learn!
Is machine learning magic?
Once you start seeing how easily machine learning techniques can be applied to problems that seem really hard (like handwriting recognition), you start to get the feeling that you could use machine learning to solve any problem and get an answer as long as you have enough data. Just feed in the data and watch the computer magically figure out the equation that fits the data!
But it’s important to remember that machine learning only works if the problem is actually solvable with the data that you have.
For example, if you build a model that predicts home prices based on the type of potted plants in each house, it’s never going to work. There just isn’t any kind of relationship between the potted plants in each house and the home’s sale price. So no matter how hard it tries, the computer can never deduce a relationship between the two.
So remember, if a human expert couldn’t use the data to solve the problem manually, a computer probably won’t be able to either. Instead, focus on problems where a human could solve the problem, but where it would be great if a computer could solve it much more quickly.
How to learn more about Machine Learning
In my mind, the biggest problem with machine learning right now is that it mostly lives in the world of academia and commercial research groups. There isn’t a lot of easy to understand material out there for people who would like to get a broad understanding without actually becoming experts. But it’s getting a little better every day.
Andrew Ng’s free Machine Learning class on Coursera is pretty amazing. I highly recommend starting there. It should be accessible to anyone who has a Comp. Sci. degree and who remembers a very minimal amount of math.
Also, you can play around with tons of machine learning algorithms by downloading and installing SciKit-Learn. It’s a python framework that has “black box” versions of all the standard algorithms. Then check out Part 2, Part 3, Part 4, Part 5 and Part 6!
|
https://medium.com/@ageitgey/machine-learning-is-fun-80ea3ec3c471?bsft_eid=1baa362f-396f-472d-b3f6-ff1eba51866f&bsft_clkid=36912740-7b48-46f2-83aa-59d2d37dff11&bsft_uid=5123a86b-a559-4041-a771-c8f25fd6918d&bsft_mid=fe89070b-d8b6-48d0-bf64-55cb52ca13c2
|
CC-MAIN-2017-09
|
refinedweb
| 2,610
| 69.21
|
James Russell Lowell, Early Rapper
The other day on Instagram, I shared four anti-war lines from James Russell Lowell’s serial poem, The Biglow Papers. Tricia Tierney, friend and writer, pointed out the similarity of the lines to the rhythms of rap. I had to gasp — she was absolutely right.
Wut’s the use of meeting’-goin’,
Every Sabbath, wet or dry,
Ef it’s right to go amowin’
Feller Men like oats and rye?
Supply a beat and not only the rhythm but the narrative of the poem fits in the mold of rap music. Lowell wrote his poem to protest against what he saw as the political and social threats of his time, including warmongering, slavery, and materialism. And like some of the best rap lyrics, Lowell chose to use the dialects and nuances of those he defined as his people — old Yankee souls — to present his arguments.
The Biglow Papers was written over a period of months, a project begun after the death of Lowell’s first child, a baby girl named Blanche. Jamie and his wife Maria were devastated by her sudden death, which had been brought on by a bout of teething — or, more likely, by the doctor’s treatment for it, bloodletting and purging. Channeling his despair into work, Lowell began to write the poem in his third-floor aerie at Elmwood, the house where he lived with his father, wife, and sister. He hung over his desk Blanche’s tiny shoes, the tied laces hanging from a nail; they would remain there for the rest of his life.
The year was 1847 and the United States was at war with Mexico, a war brought on by U.S. annexation of Texas. Many Northerners, including members of the Lowell family, saw the war as a strategy by the Southern states to increase slave-holding territory and by extension, the power of slaveholding interests in the political and economic life of the United States.
Jamie took pen to paper to protest the war and the machinations of the southern interests. As a tool in expressing what he thought of as Yankee wit and wisdom — which he saw as a corrective against materialistic, despotic, and inhumane slaveholding interests — he used the old dialects of New England in his poem. The country needed saving, Jamie reckoned, and there were none who could show the way better than the three characters he brought to life in his poem: the faithful Parson; the simple Yankee farmer Biglow; and the immoral rascal Sawin.
The three fellows observed, opined and condemned (or celebrated, in the case of Sawin, a southern sympathizer and war monger) all the issues of the time: the folly and dangers of Manifest Destiny; the degradations of slavery, not only for the enslaved but the slaveholders; the hypocrisy of politicians; the prevalence of greed; and the brutal realities of War. As Biglow himself put it:
Ez fer war, I call it murder, -
There you hev it plain an’ flat….
you take a sword an’ dror it,
An’ go stick a feller thru,
Guv’mint an’t to answer for it,
send the bill to you.
The Biglow Papers was a huge success, popular with critics and readers alike. Lowell had used humor, satire, and grim reality to portray his country as he saw it and the country in return (at least in the North) lapped it up. He became the most read and recited poet in the United States — and perhaps the country’s earliest political rapper.
Find out more about James Russell Lowell and other Lowells at The Lowells of Massachusetts. The book, The Lowells of Massachusetts: An American Family will be released April 11, 2017 by St. Martin’s Press and is available for preorder.
|
https://medium.com/nina-sankovitch/james-russell-lowell-early-rapper-21593413ff8d
|
CC-MAIN-2017-34
|
refinedweb
| 630
| 64.34
|
This is a page from my book, Functional Programming, Simplified
“People say that practicing Zen is difficult, but there is a misunderstanding as to why.”
In the last chapter I looked at the benefits of functional programming, and as I showed, there are quite a few. In this chapter I’ll look at the potential drawbacks of FP.
Just as I did in the previous chapter, I’ll first cover the “drawbacks of functional programming in general”:
- Writing pure functions is easy, but combining them into a complete application is where things get hard.
- Advanced math terminology (monad, monoid, functor, etc.) makes FP intimidating.
- For many people, recursion doesn’t feel natural.
- Because you can’t mutate existing data, you instead use a pattern called “update as you copy.”
- Pure functions and I/O don’t really mix.
After that I’ll look at the more-specific “drawbacks of functional programming in Scala”:
- You can mix FP and OOP styles.
- Scala doesn’t have a standard FP library.
1) Writing pure functions is easy, but combining them into a complete application is where things get hard
Writing a pure function is generally fairly easy. Once you can define your type signature, pure functions are easier to write because of the absence of mutable variables, hidden inputs, hidden state, and I/O. For example, the
determinePossiblePlays function in this code:
val possiblePlays = OffensiveCoordinator.determinePossiblePlays(gameState)
is a pure function, and behind it are thousands of lines of other functional code. Writing all of these pure functions took time, but it was never difficult. All of the functions follow the same pattern:
- Data in
- Apply an algorithm (to transform the data)
- Data out
That being said, the part that is hard is, “How do I glue all of these pure functions together in an FP style?” That question can lead to the code I showed in the first chapter:
def updateHealth(delta: Int): Game[Int] = StateT[IO, GameState, Int] { (s: GameState) =>
  val newHealth = s.player.health + delta
  IO((s.copy(player = s.player.copy(health = newHealth)), newHealth))
}
As you may be aware, when you first start programming in a pure FP style, gluing pure functions together to create a complete FP application is one of the biggest stumbling blocks you’ll encounter. In lessons later in this book I show solutions for how to glue pure functions together into a complete application.
2) Advanced math terminology makes FP intimidating
I don’t know about you, but when I first heard terms like combinator, monoid, monad, and functor, I had no idea what people were talking about. And I’ve been paid to write software since the early-1990s.
As I discuss in the next chapter, terms like this are intimidating, and that “fear factor” becomes a barrier to learning FP.
Because I cover this topic in the next chapter, I won’t write any more about it here.
3) For many people, recursion doesn’t feel natural
One reason I may not have known about those mathematical terms is because my degree is in aerospace engineering, not computer science. Possibly for the same reason, I knew about recursion, but never had to use it. That is, until I became serious about writing pure FP code.
As I wrote in the “What is FP?” chapter, the thing that happens when you use only pure functions and immutable values is that you have to use recursion. In pure FP code you no longer use
var fields with
for loops, so the only way to loop over elements in a collection is to use recursion.
Fortunately, you can learn how to write recursive code. If there’s a secret to the process, it’s in learning how to “think in recursion.” Once you gain that mindset and see that there are patterns to recursive algorithms, you’ll find that recursion gets much easier, even natural.
Two paragraphs ago I wrote, “the only way to loop over elements in a collection is to use recursion,” but that isn’t 100% true. In addition to gaining a “recursive thinking” mindset, here’s another secret: once you understand the Scala collections’ methods, you won’t need to use recursion as often as you think. In the same way that collections’ methods are replacements for custom
for loops, they’re also replacements for many custom recursive algorithms.
As just one example of this, when you first start working with Scala and you have a
List like this:
val names = List("chris", "ed", "maurice")
it’s natural to write a
for/
yield expression like this:
val capNames = for (e <- names) yield e.capitalize
As you’ll see in the upcoming lessons, you can also write a recursive algorithm to solve this problem.
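For instance, a recursive version might look something like this; it's a sketch, not necessarily the exact algorithm shown in those lessons:

// Capitalize every name without using map: recurse down the list,
// building a new list as the calls return.
def capitalizeAll(names: List[String]): List[String] = names match {
  case Nil          => Nil
  case head :: tail => head.capitalize :: capitalizeAll(tail)
}

val capNames = capitalizeAll(names)   // List("Chris", "Ed", "Maurice")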
But once you understand Scala’s collections’ methods, you know that the
map method is a replacement for those algorithms:
val capNames = names.map(_.capitalize)
Once you’re comfortable with the collections’ methods, you’ll find that you reach for them before you reach for recursion.
I write much more about recursion and the Scala collections’ methods in upcoming lessons.
4) Because you can’t mutate existing data, you instead use a pattern that I call, “Update as you copy”
For over 20 years I’ve written imperative code where it was easy — and extraordinarily common — to mutate existing data. For instance, once upon a time I had a niece named “Emily Maness”:
val emily = Person("Emily", "Maness")
Then one day she got married and her last name became “Wells”, so it seemed logical to update her last name, like this:
emily.setLastName("Wells")
In FP you don’t do this. You don’t mutate existing objects.
Instead, what you do is (a) you copy an existing object to a new object, and then as a copy of the data is flowing from the old object to the new object, you (b) update any fields you want to change by providing new values for those fields, such as
lastName in this example:
The way you “update as you copy” in Scala/FP is with the
copy method that comes with case classes. First, you start with a case class:
case class Person (firstName: String, lastName: String)
Then, when your niece is born, you write code like this:
val emily1 = Person("Emily", "Maness")
Later, when she gets married and changes her last name, you write this:
val emily2 = emily1.copy(lastName = "Wells")
After that line of code,
emily2.lastName has the value
"Wells".
Note: I intentionally use the variable names
emily1 and
emily2 in this example to make it clear that you never change the original variable. In FP you constantly create intermediate variables like
name1 and
name2 during the “update as you copy” process, but there are FP techniques that make those intermediate variables transparent.
I show those techniques in upcoming lessons.
“Update as you copy” gets worse with nested objects
The “Update as you copy” technique isn’t too hard when you’re working with this simple
Person object, but think about this: What happens when you have nested objects, such as a
Family that has a
Person who has a
Seq[CreditCard], and that person wants to add a new credit card, or update an existing one? (This is like an Amazon Prime member who adds a family member to their account, and that person has one or more credit cards.) Or what if the nesting of objects is even deeper?
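To make that concrete, here's a sketch of the sort of code you end up writing with copy alone; the class and field names are mine, purely for illustration:

case class CreditCard(number: String, expMonth: Int, expYear: Int)
case class Person(name: String, creditCards: Seq[CreditCard])
case class Family(primary: Person)

val family = Family(Person("Kim", Seq(CreditCard("1234", 1, 2024))))

// Updating the expiration year of the primary member's first card
// means copying every enclosing level by hand:
val updatedFamily = family.copy(
  primary = family.primary.copy(
    creditCards = family.primary.creditCards.updated(
      0,
      family.primary.creditCards(0).copy(expYear = 2027)
    )
  )
)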
In short, this is a real problem that results in some nasty-looking code, and it gets uglier with each nested layer. Fortunately, other FP developers ran into this problem long before I did, and they came up with ways to make this process easier.
I cover this problem and its solution in several lessons later in this book.
5) Pure functions and I/O don’t really mix
As I wrote in the “What is Functional Programming” lesson, a pure function is a function (a) whose output depends only on its input, and (b) has no side effects. Therefore, by definition, any function that deals with these things is impure:
- File I/O
- Database I/O
- Internet I/O
- Any sort of UI/GUI input
- Any function that mutates variables
- Any function that uses “hidden” variables
Given this situation, a great question is, “How can an FP application possibly work without these things?”
The short answer is what I wrote in the Scala Cookbook and in the previous lesson: you write as much of your application’s code in an FP style as you can, and then you write a thin I/O layer around the outside of the FP code, like putting “I/O icing” around an “FP cake.”
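Here's a minimal sketch of that layering; the function names are mine, not from a later lesson:

// The "FP cake": a pure function with no I/O at all.
def makeGreeting(name: String): String =
  s"Hello, ${name.trim.capitalize}"

// The thin "I/O icing": all input and output happens out here.
def main(args: Array[String]): Unit = {
  val name = scala.io.StdIn.readLine("What's your name? ")
  println(makeGreeting(name))
}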
Pure and impure functions
In reality, no programming language is really “pure,” at least not by my definition. (Several FP experts say the same thing.) Wikipedia lists Haskell as a “pure” FP language, and the way Haskell handles I/O equates to this Scala code:
def getCurrentTime(): IO[String] = ???
The short explanation of this code is that Haskell has an
IO type that you must use as a wrapper when writing I/O functions. This is enforced by the Haskell compiler.
For example,
getLine is a Haskell function that reads a line from STDIN, and returns a type that equates to
IO[String] in Scala. Any time a Haskell function returns something wrapped in an
IO, like
IO[String], that function can only be used in certain places within a Haskell application.
If that sounds hard core and limiting, well, it is. But it turns out to be a good thing.
Some people imply that this
IO wrapper makes those functions pure, but in my opinion, this isn’t true. At first I thought I was confused about this — that I didn’t understand something — and then I read this quote from Martin Odersky on scala-lang.org:
“The IO monad does not make a function pure. It just makes it obvious that it’s impure.”
For the moment you can think of an
IO instance as being like a Scala
Option. More accurately, you can think of it as being an
Option that always returns a
Some[YourDataTypeHere], such as a
Some[Person] or a
Some[String].
As you can imagine, just because you wrap a
String that you get from the outside world inside of a
Some, that doesn’t mean the
String won’t vary. For instance, if you prompt me for my name, I might reply “Al” or “Alvin,” and if you prompt my niece for her name, she’ll reply “Emily,” and so on. I think you’ll agree that
Some["Al"],
Some["Alvin"], and
Some["Emily"] are different values.
Therefore, even though (a) the return type of Haskell I/O functions must be wrapped in the
IO type, and (b) the Haskell compiler only permits
IO types to be in certain places, they are impure functions: they can return a different value each time they are called.
The benefit of Haskell’s
IO type
It’s a little early in this book for me to write about all of this, but ... the main benefit of the Haskell
IO approach is that it creates a clear separation between (a) pure functions and (b) impure functions. Using Scala to demonstrate what I mean, I can look at this function and know from its signature that it’s a pure function:
def foo(a: String): Int = ???
Similarly, when I see that this next function returns something in an
IO wrapper, I know from its signature alone that it’s an impure function:
def bar(a: String): IO[String] = ???
That’s actually very cool, and I write more about this in the I/O lessons of this book.
I haven’t discussed UI/GUI input/output in this section, but I discuss it more in the “Should I use FP everywhere?” section that follows.
6) Using only immutable values and recursion can lead to performance problems, including RAM use and speed
An author can get himself into trouble for stating that one programming paradigm can use more memory or be slower than other approaches, so let me begin this section by being very clear:
When you first write a simple (“naive”) FP algorithm, it is possible — just possible — that the immutable values and data-copying I mentioned earlier can be a performance problem.
I demonstrate an example of this problem in a blog post on Scala Quicksort algorithms. In that article I show that the basic (“naive”) recursive
quickSort algorithm found in the “Scala By Example” PDF uses about 660 MB of RAM while sorting an array of ten million integers, and is four times slower than using the
scala.util.Sorting.quickSort method.
Having said that, it’s important to note how
scala.util.Sorting.quickSort works. In Scala 2.12, it passes an
Array[Int] directly to
java.util.Arrays.sort(int[]). The way that
sort method works varies by Java version, but Java 8 calls a
sort method in
java.util.DualPivotQuicksort. The code in that method (and one other method it calls) is at least 300 lines long, and is much more complex than the simple/naive
quickSort algorithm I show.
Therefore, while it’s true that the “simple, naive”
quickSort algorithm in the “Scala By Example” PDF has those performance problems, I need to be clear that I’m comparing (a) a very simple algorithm that you might initially write, to (b) a much larger, performance-optimized algorithm.
In summary, while this is a potential problem with simple/naive FP code, I offer solutions to these problems in a lesson titled, “Functional Programming and Performance.”
7) Scala/FP drawback: You can mix FP and OOP styles
If you’re an FP purist, a drawback to using functional programming in Scala is that Scala supports both OOP and FP, and therefore it’s possible to mix the two coding styles in the same code base.
While that is a potential drawback, many years ago I learned of a philosophy called “House Rules” that eliminates this problem. With House Rules, the developers get together and agree on a programming style. Once a consensus is reached, that’s the style that you use. Period.
As a simple example of this, when I owned a computer programming consulting company, the developers wanted a Java coding style that looked like this:
public void doSomething()
{
    doX();
    doY();
}
As shown, they wanted curly braces on their own lines, and the code was indented four spaces. I doubt that everyone on the team loved that style, but once we agreed on it, that was it.
I think you can use the House Rules philosophy to state what parts of the Scala language your organization will use in your applications. For instance, if you want to use a strict “Pure FP” style, use the rules I set forth in this book. You can always change the rules later, but it’s important to start with something.
8) Scala/FP drawback: Scala doesn’t have a standard FP library
Another potential drawback to functional programming in Scala is that there isn’t a built-in library to support certain FP techniques. For instance, if you want to use an
IO data type as a wrapper around your impure Scala/FP functions, there isn’t one built into the standard Scala libraries.
To deal with this problem, independent libraries like Scalaz, Cats, and others have been created. But, while these solutions are built into a language like Haskell, they are standalone libraries in Scala.
I found that this situation makes it more difficult to learn Scala/FP. For instance, you can open any Haskell book and find a discussion of the
IO type and other built-in language features, but the same is not true for Scala. (I discuss this more in the I/O lessons in this book.)
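To give a feel for what such a library provides, here's a minimal sketch using the IO type from the Cats Effect library; it's illustrative only, and assumes cats-effect is on your classpath:

import cats.effect.IO

// The IO wrapper marks this function as impure: it reads from STDIN,
// so each call can produce a different value.
def readName: IO[String] =
  IO(scala.io.StdIn.readLine("Your name? "))

// Nothing actually runs until the IO value is executed at the
// "end of the world" (for example, from an IOApp).
val program: IO[Unit] =
  readName.flatMap(name => IO(println(s"Hello, $name")))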
“Should I use FP everywhere?”
Caution: A problem with releasing a book a few chapters at a time is that the later chapters that you’ll finish writing at some later time can have an impact on earlier content. For this book, that’s the case regarding this section. I have only worked with small examples of Functional Reactive Programming to date, so as I learn more about it, I expect that new knowledge to affect the content in this section. Therefore, a caution: “This section is still under construction, and may change significantly.”
After I listed all of the benefits of functional programming in the previous chapter, I asked the question, “Should I write all of my code in an FP style?” At that time you might have thought, “Of course! This FP stuff sounds great!”
Now that you’ve seen some of the drawbacks of FP, I think I can provide a better answer.
1a) GUIs and Pure FP are not a good fit
The first part of my answer is that I like to write Android apps, and I also enjoy writing Java Swing and JavaFX code, and the interface between (a) those frameworks and (b) your custom code isn’t a great fit for FP.
As one example of what I mean, in an Android football game I work on in my spare time, the OOP game framework I use provides an
update method that I’m supposed to override to update the screen:
@Override
public void update(GameView gameView) {
    // my custom code here ...
}
Inside that method I have a lot of imperative GUI-drawing code that draws the game's current user interface.
There isn’t a place for FP code at this point. The framework expects me to update the pixels on the screen within this method, and if you’ve ever written anything like a video game, you know that to achieve the best performance — and avoid screen flickering — it’s generally best to update only the pixels that need to be changed. So this really is an “update” method, as opposed to a “completely redraw the screen” method.
Remember, words like “update” and “mutate” are not in the FP vocabulary.
Other “thick client,” GUI frameworks like Swing and JavaFX have similar interfaces, where they are OOP and imperative by design. As another example, I wrote a little text editor that I named “AlPad,” and its major feature is that it lets me easily add and remove tabs to keep little notes organized:
The way you write Swing code like this is that you first create a
JTabbedPane:
JTabbedPane tabbedPane = new JTabbedPane();
Once created, you keep that tabbed pane alive for the entire life of the application. Then when you later want to add a new tab, you mutate the
JTabbedPane instance like this:
tabbedPane.addTab( "to-do", null, newPanel, "to-do");
That’s the way thick client code usually works: you create components and then mutate them during the life of the application to create the desired user interface. The same is true for most other Swing components, like
JFrame,
JList,
JTable, etc.
Because these frameworks are OOP and imperative by nature, this interface point is where FP and pure functions typically don’t fit.
If you know about Functional Reactive Programming (FRP), please stand by; I write more on this point shortly.
When you’re working with these frameworks you have to conform to their styles at this interface point, but there’s nothing to keep you from writing the rest of your code in an FP style. In my Android football game I have a function call that looks like this:
val possiblePlays = OffensiveCoordinator.determinePossiblePlays(gameState)
In that code,
determinePossiblePlays is a pure function, and behind it are several thousand lines of other pure functions. So while the GUI code has to conform to the Android game framework I’m using, the decision-making portion of my app — the “business logic” — is written in an FP style.
1b) Caveats to what I just wrote
Having stated that, let me add a few caveats.
First, Web applications are completely different than thick client (Swing, JavaFX) applications. In a thick client project, the entire application is typically written in one large codebase that results in a binary executable that users install on their computers. Eclipse, IntelliJ IDEA, and NetBeans are examples of this.
Conversely, the web applications I’ve written in the last few years use (a) one of many JavaScript-based technologies for the UI, and (b) the Play Framework on the server side. With Web applications like this, you have impure data coming into your Scala/Play application through data mappings and REST functions, and you probably also interact with impure database calls and impure network/internet I/O, but just like my football game, the “logic” portion of your application can be written with pure functions.
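As a rough sketch of that shape (this is not actual Play code; the names are invented for illustration):

// Pure "business logic": the decision depends only on the input.
case class Order(items: List[String], paid: Boolean)

def canShip(order: Order): Boolean =
  order.paid && order.items.nonEmpty

// Impure edge: a hypothetical handler that receives data mapped from
// a REST request, calls the pure function, and returns a response code.
def handleShipRequest(order: Order): String =
  if (canShip(order)) "202 Accepted" else "422 Unprocessable Entity"

handleShipRequest(Order(List("book"), paid = true))   // "202 Accepted"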
Second, the concept of Functional Reactive Programming (FRP) combines FP techniques with GUI programming. The RxJava project describes Reactive Extensions as a library for composing asynchronous and event-based programs using observable sequences, in a declarative style.
(Note that declarative programming is the opposite of imperative programming.)
The ReactiveX.io website states:
“ReactiveX is a combination of the best ideas from the Observer pattern, the Iterator pattern, and functional programming.”
I provide some FRP examples later in this book, but this short example from the RxScala website gives you a taste of the concept:
object Transforming extends App {

  /**
   * Asynchronously calls 'customObservableNonBlocking'
   * and defines a chain of operators to apply to the
   * callback sequence.
   */
  def simpleComposition() {
    AsyncObservable.customObservableNonBlocking()
      .drop(10)
      .take(5)
      .map(stringValue => stringValue + "_xform")
      .subscribe(s => println("onNext => " + s))
  }

  simpleComposition()
}
This code does the following:
- Using an “observable,” it receives a stream of String values. Given that stream of values, it ...
- Drops the first ten values
- “Takes” the next five values
- Appends the string "_xform" to the end of each of those five values
- Outputs those resulting values with println
As this example shows, the code that receives the stream of values is written in a functional style, using methods like
drop,
take, and
map, combining them into a chain of calls, one after the other.
I cover FRP in a lesson later in this book, but if you’d like to learn more now, the RxScala project is located here, and Netflix’s “Reactive Programming in the Netflix API with RxJava” blog post is a good start.
This Haskell.org page shows current work on creating GUIs using FRP. (I’m not an expert on these tools, but at the time of this writing, most of these tools appear to be experimental or incomplete.)
2) Pragmatism (the best tool for the job)
I tend to be a pragmatist more than a purist, so when I need to get something done, I want to use the best tool for the job.
For instance, when I first started working with Scala and needed a way to stub out new SBT projects, I wrote a Unix shell script. Because this was for my personal use and I only work on Mac and Unix systems, creating a shell script was by far the simplest way to create a standard set of subdirectories and a build.sbt file.
Conversely, if I also worked on Microsoft Windows systems, or if I had been interested in creating a more robust solution like the Lightbend Activator, I might have written a Scala/FP application, but I didn’t have those motivating factors.
Another way to think about this is instead of asking, “Is FP the right tool for every application I need to write?,” go ahead and ask that question with a different technology. For instance, you can ask, “Should I use Akka actors to write every application?” If you’re familiar with Akka, I think you’ll agree that writing an Akka application to create a few subdirectories and a build.sbt file would be overkill — even though Akka is a terrific tool for other applications.
Summary
In summary, potential drawbacks of functional programming in general are:
- Writing pure functions is easy, but combining them into a complete application is where things get hard.
- Advanced math terminology makes FP intimidating.
- For many people, recursion doesn’t feel natural.
- Because you can’t mutate existing data, you instead use a pattern called “update as you copy.”
- Pure functions and I/O don’t really mix.
- Using only immutable values and recursion can lead to performance problems, including RAM use and speed.

Potential drawbacks of functional programming in Scala are:
- You can mix FP and OOP styles.
- Scala doesn’t have a standard FP library.
What’s next
Having covered the benefits and drawbacks of functional programming, in the next chapter I want to help “free your mind,” as Morpheus might say. That chapter is on something I call, “The Great FP Terminology Barrier,” and how to break through that barrier.
See also
- My Scala Quicksort algorithms blog post
- Programming in Scala
- Jesper Nordenberg’s “Haskell vs Scala” post
- Information about my “AlPad” text editor
- “Reactive Extensions” on reactivex.io
- Declarative programming
- Imperative programming
- The RxScala project
- Netflix’s “Reactive Programming in the Netflix API with RxJava” blog post
- Functional Reactive Programming on haskell.org
- Lightbend Activator
Updates trail settings that control what events you are logging, and how to handle log files.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
update-trail --name <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--name (string)
Specifies the name of the trail or trail ARN. If Name is a trail name, the string must meet certain requirements; for example, names like my--namespace are not valid, and the name must:
- Not be in IP address format (for example, 192.168.5.4)
If Name is a trail ARN, it must be in the following format:
arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail
--is-multi-region-trail | --no-is-multi-region-trail (boolean)
Specifies whether the trail applies only to the current region or to all regions. The default is false. If the trail exists only in the current region and this value is set to true, shadow trails (replications of the trail) will be created in the other regions. If the trail exists in all regions and this value is set to false, the trail will remain in the region where it was created, and its shadow trails in other regions will be deleted. As a best practice, consider using trails that log events in all regions.
--enable-log-file-validation | --no-enable-log-file-validation (boolean)
Specifies whether log file validation is enabled. The default is false.
Note
When you disable log file integrity validation, the chain of digest files is broken after one hour. CloudTrail does not create digest files for log files that were delivered during a period in which log file integrity validation was disabled.
--cloud-watch-logs-log-group-arn (string)
Specifies a log group name using an Amazon Resource Name (ARN), a unique identifier that represents the log group to which CloudTrail logs are delivered.
--is-organization-trail | --no-is-organization-trail (boolean)
Specifies whether the trail is applied to all accounts in an organization in Organizations, or only for the current Amazon Web Services account. The default is false, and cannot be true unless the call is made on behalf of an Amazon Web Services account that is the management account for an organization in Organizations. If the trail is not an organization trail and this is set to true, the trail will be created in all Amazon Web Services accounts that belong to the organization. If the trail is an organization trail and this is set to false, the trail will remain in the current Amazon Web Services account but be deleted from all member accounts in the organization.
The following
update-trail command updates a trail to use an existing bucket for log delivery:
aws cloudtrail update-trail --name Trail1 --s3-bucket-name my-bucket
SnsTopicName -> (string)
This field is no longer in use. Use UpdateTrailResponse$SnsTopicARN .
SnsTopicARN -> (string)
Specifies the ARN of the Amazon SNS topic that CloudTrail uses to send notifications when log files are delivered. The following is the format of a topic ARN: arn:aws:sns:us-east-2:123456789012:MyTopic
TrailARN -> (string)
Specifies the ARN of the trail that was updated. The following is the format of a trail ARN: arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail
CloudWatchLogsLogGroupArn -> (string)
Specifies the Amazon Resource Name (ARN) of the log group to which CloudTrail logs are delivered.
CloudWatchLogsRoleArn -> (string)
Specifies the role for the CloudWatch Logs endpoint to assume to write to a user's log group.
KmsKeyId -> (string)
Specifies the KMS key ID that encrypts the logs delivered by CloudTrail. The value is a fully specified ARN to a KMS key in the following format.
arn:aws:kms:us-east-2:123456789012:key/12345678-1234-1234-1234-123456789012
IsOrganizationTrail -> (boolean)
Specifies whether the trail is an organization trail.
Adding a .first() method to Django's QuerySet
In my last Django project, we had a set of helper functions that we used a lot. The most used was helpers.first, which takes a query set and returns the first element, or None if the query set was empty.
Instead of writing this:
try:
    object = MyModel.objects.get(key=value)
except MyModel.DoesNotExist:
    object = None
You can write this:
def first(query):
    try:
        return query.all()[0]
    except:
        return None

object = helpers.first(MyModel.objects.filter(key=value))
Note, that this is not identical. The get method will ensure that there is exactly one row in the database that matches the query. The helper.first() method will silently eat all but the first matching row. As long as you're aware of that, you might choose to use the second form in some cases, primarily for style reasons.
But the syntax on the helper is a little verbose, plus you're constantly including helpers.py. Here is a version that makes this available as a method on the end of your query set chain. All you have to do is have your models inherit from this AbstractModel.
class FirstQuerySet(models.query.QuerySet):
    def first(self):
        try:
            return self[0]
        except:
            return None


class ManagerWithFirstQuery(models.Manager):
    def get_query_set(self):
        return FirstQuerySet(self.model)


class AbstractModel(models.Model):
    objects = ManagerWithFirstQuery()

    class Meta:
        abstract = True


class MyModel(AbstractModel):
    ...
Now, you can do the following.
object = MyModel.objects.filter(key=value).first()
Web maps are comprised of a basemap layer and one or more operational layers, as well as tasks, feature collections, and pop-ups. Using the ArcGIS API for Silverlight, it's easy to add a web map to your application, as shown in the following steps:
- Browse to ArcGIS.com and locate the web map you're adding to your application.
- Find the ID of the web map to add to your application.
- Using Visual Studio, create the application that will contain the web map.
- Add a reference to the ESRI.ArcGIS.Client, ESRI.ArcGIS.Client.Toolkit.DataSources, and ESRI.ArcGIS.Client.WebMap assemblies.
- Open the code-behind file for the application's main page (for example, MainPage.xaml.cs).
- Add a using statement for the ESRI.ArcGIS.Client.WebMap namespace to the end of the list of using statements at the top of the code-behind file as shown in the following code example:
using ESRI.ArcGIS.Client.WebMap;
- Load the web map when the application initializes.
The Document class in the ESRI.ArcGIS.Client.WebMap namespace supports reading Web Map JSON and creating a map from it. Call the Document.GetMapAsync method, passing in the ID of the web map, to read a web map as shown in the following code example. When the map is read in, the Document.GetMapCompleted event will fire. That event is implemented in step 8 below.
Document webMap = new Document();
webMap.GetMapCompleted += webMap_GetMapCompleted;
webMap.GetMapAsync("d5e02a0c1f2b4ec399823fdd3c2fdebd");
- Implement the Document.GetMapCompleted event handler to add the map to the application.
Note:
The following event handler adds the web map to the LayoutRoot element of your application. To add the map using the following code, your application needs to have a content presenter of that name. For example, give the main Grid in your UserControl an x:Name of LayoutRoot. See the following code example:
void webMap_GetMapCompleted(object sender, GetMapCompletedEventArgs e)
{
    if (e.Error == null)
        LayoutRoot.Children.Add(e.Map);
}
- Compile and run your application.
- To add event handling to your map, such as for the MouseClick event, wire the events to the map in the GetMapCompleted event. The map can be accessed through the GetMapCompletedEventArgs parameter. See the following code example:
void webMap_GetMapCompleted(object sender, GetMapCompletedEventArgs e)
{
    if (e.Error == null)
    {
        LayoutRoot.Children.Add(e.Map);
        e.Map.MouseClick += new EventHandler<Map.MouseEventArgs>(Map_MouseClick);
    }
}
The event can then be implemented in the code-behind and will work with the web map.
Sep 23, 2012 11:14 AM | m.shawamreh
Hi All,
How do I convert a day name to a localized day name?
string sDay = "Monday";
I need to convert this day name to another culture (for example, de-DE).
The result in this case should be "Montag".
Sep 23, 2012 01:00 PM | rivdiv
string sDay = "Monday";
int dayOfWeek = (int)Enum.Parse(typeof(DayOfWeek), sDay);
string convertedDay = CultureInfo.GetCultureInfo("de-DE").DateTimeFormat.DayNames[dayOfWeek];
You'll need the System.Globalization namespace.
package URI; use strict; use vars qw($VERSION); $) and a component that is missing (represented as or an unescaped string. A component that can be further divided into sub-parts are usually passed escaped, as unescaping might change its semantics. The common methods available for all URI are: =over 4 =item $uri->scheme =item $uri->scheme( $new_scheme ) Sets and returns the scheme part of the $uri. If the $uri is relative, then $uri->scheme returns C. =item $uri->opaque =item $uri->opaque( $new_opaque ) Sets and returns the scheme-specific part of the $uri (everything between the scheme and the fragment) as an escaped string. =item $uri->path =item $uri->path( $new_path ) Sets and returns the same value as $uri->opaque unless the URI supports the generic syntax for hierarchical namespaces. In that case the generic method is overridden to set and return the part of the URI between the I<host name> and the I<fragment>. =item $uri->fragment =item $uri->fragment( $new_frag ) Returns the fragment identifier of a URI reference as an escaped string. =item $uri->as_string Returns a URI object to a plain string. URI objects are also converted to plain strings automatically by overloading. This means that $uri objects can be used as plain strings in most Perl constructs. =item . =item $uri->eq( $other_uri ) =item C<URI> object references denote the same object, use the '==' operator. =item . =item $uri->rel( $base_uri ) Returns a relative URI reference if it is possible to make one that denotes the same resource relative to $base_uri. If not, then $uri is simply returned. =back =head1 GENERIC METHODS The following methods are available to schemes that use the common/generic syntax for hierarchical namespaces. The descriptions of schemes below indicate which these are. Unknown schemes are assumed to support the generic syntax, and therefore the following methods: =over 4 =item $uri->authority =item $uri->authority( $new_authority ) Sets and returns the escaped authority component of the $uri. =item $uri->path =item $uri->path( $new_path ) Sets and returns the escaped path component of the $uri (the part between the host name and the query or fragment). The path can never be undefined, but it can be the empty string. =item $uri->path_query =item $uri->path_query( $new_path_query ) Sets and returns the escaped path and query components as a single entity. The path and the query are separated by a "?" character, but the query can itself contain "?". =item $uri->path_segments =item . =item $uri->query =item $uri->query( $new_query ) Sets and returns the escaped query component of the $uri. =item $uri->query_form =item $uri->query_form( $key1 => $val1, $key2 => $val2, ... ) =item $uri->query_form( \@key_value_pairs ) =item $uri->query_form( \%hash ) Sets and returns query components that use the I C<URI::QueryParam> module can be loaded to add further methods to manipulate the form of a URI. See L<URI::QueryParam> for details. =item $uri->query_keywords =item $uri->query_keywords( $keywords, ... ) =item . =back =head1 SERVER METHODS For schemes where the I<authority> component denotes an Internet host, the following methods are available in addition to the generic methods. =over 4 =item $uri->userinfo =item . =item $uri->host =item $uri->host( $new_host ) Sets and returns the unescaped hostname. If the $new_host string ends with a colon and a number, then this number also sets the port. =item $uri->port . 
=item $uri->host_port =item $uri->host_port( $new_host_port ) Sets and returns the host and port as a single unit. The returned value includes a port, even if it matches the default port. The host part and the port part are separated by a colon: ":". =item $uri->default_port Returns the default port of the URI scheme to which $uri belongs. For I<http> this is the number 80, for I<ftp> this is the number 21, etc. The default port for a scheme can not be changed. =back =head1 SCHEME-SPECIFIC SUPPORT Scheme-specific support is provided for the following URI schemes. For C<URI> objects that do not belong to one of these, you can only use the common and generic methods. =over 4 =item B<data>: The I<data> URI scheme is specified in RFC 2397. It allows inclusion of small data items as "immediate" data, as if it had been included externally. C<URI> objects belonging to the data scheme support the common methods and two new methods to access their scheme-specific components: $uri->media_type and $uri->data. See L<URI::data> for details. =item B<file>: An old specification of the I<file> URI scheme is found in RFC 1738. A new RFC 2396 based specification in not available yet, but file URI references are in common use. C<URI> objects belonging to the file scheme support the common and generic methods. In addition, they provide two methods for mapping file URIs back to local file names; $uri->file and $uri->dir. See L<URI::file> for details. =item B<ftp>: An old specification of the I<ftp> URI scheme is found in RFC 1738. A new RFC 2396 based specification in not available yet, but ftp URI references are in common use. C<URI> objects belonging to the ftp scheme support the common, generic and server methods. In addition, they provide two methods for accessing the userinfo sub-components: $uri->user and $uri->password. =item B<gopher>: The I<gopher> URI scheme is specified in <draft-murali-url-gopher-1996-12-04> and will hopefully be available as a RFC 2396 based specification. C<URI> objects belonging to the gopher scheme support the common, generic and server methods. In addition, they support some methods for accessing gopher-specific path components: $uri->gopher_type, $uri->selector, $uri->search, $uri->string. =item B<http>: The I<http> URI scheme is specified in RFC 2616. The scheme is used to reference resources hosted by HTTP servers. C<URI> objects belonging to the http scheme support the common, generic and server methods. =item B<https>: The I<https> URI scheme is a Netscape invention which is commonly implemented. The scheme is used to reference HTTP servers through SSL connections. Its syntax is the same as http, but the default port is different. =item B<ldap>: The I<ldap> URI scheme is specified in RFC 2255. LDAP is the Lightweight Directory Access Protocol. An ldap URI describes an LDAP search operation to perform to retrieve information from an LDAP directory. C<URI> objects belonging to the ldap scheme support the common, generic and server methods as well as ldap-specific methods: $uri->dn, $uri->attributes, $uri->scope, $uri->filter, $uri->extensions. See L<URI::ldap> for details. =item B<ldapi>: Like the I<ldap> URI scheme, but uses a UNIX domain socket. The server methods are not supported, and the local socket path is available as $uri->un_path. The I<ldapi> scheme is used by the OpenLDAP package. There is no real specification for it, but it is mentioned in various OpenLDAP manual pages. =item B<ldaps>: Like the I<ldap> URI scheme, but uses an SSL connection. 
This scheme is deprecated, as the preferred way is to use the I<start_tls> mechanism. =item B<mailto>: The I<mailto> URI scheme is specified in RFC 2368. The scheme was originally used to designate the Internet mailing address of an individual or service. It has (in RFC 2368) been extended to allow setting of other mail header fields and the message body. C<URI> objects belonging to the mailto scheme support the common methods and the generic query methods. In addition, they support the following mailto-specific methods: $uri->to, $uri->headers. =item B<mms>: The I<mms> URL specification can be found at L<> C<URI> objects belonging to the mms scheme support the common, generic, and server methods, with the exception of userinfo and query-related sub-components. =item B<news>: The I<news>, I<nntp> and I<snews> URI schemes are specified in <draft-gilman-news-url-01> and will hopefully be available as an RFC 2396 based specification soon. C<URI> objects belonging to the news scheme support the common, generic and server methods. In addition, they provide some methods to access the path: $uri->group and $uri->message. =item B<nntp>: See I<news> scheme. =item B<pop>: The I<pop> URI scheme is specified in RFC 2384. The scheme is used to reference a POP3 mailbox. C<URI> objects belonging to the pop scheme support the common, generic and server methods. In addition, they provide two methods to access the userinfo components: $uri->user and $uri->auth =item B<rlogin>: An old specification of the I<rlogin> URI scheme is found in RFC 1738. C<URI> objects belonging to the rlogin scheme support the common, generic and server methods. =item B<rtsp>: The I<rtsp> URL specification can be found in section 3.2 of RFC 2326. C<URI> objects belonging to the rtsp scheme support the common, generic, and server methods, with the exception of userinfo and query-related sub-components. =item B<rtspu>: The I<rtspu> URI scheme is used to talk to RTSP servers over UDP instead of TCP. The syntax is the same as rtsp. =item B<rsync>: Information about rsync is available from. C<URI> objects belonging to the rsync scheme support the common, generic and server methods. In addition, they provide methods to access the userinfo sub-components: $uri->user and $uri->password. =item B<sip>: The I<sip> URI specification is described in sections 19.1 and 25 of RFC 3261. C<URI> objects belonging to the sip scheme support the common, generic, and server methods with the exception of path related sub-components. In addition, they provide two methods to get and set I<sip> parameters: $uri->params_form and $uri->params. =item B<sips>: See I<sip> scheme. Its syntax is the same as sip, but the default port is different. =item B<snews>: See I<news> scheme. Its syntax is the same as news, but the default port is different. =item B<telnet>: An old specification of the I<telnet> URI scheme is found in RFC 1738. C<URI> objects belonging to the telnet scheme support the common, generic and server methods. =item B<tn3270>: These URIs are used like I<telnet> URIs but for connections to IBM mainframes. C<URI> objects belonging to the tn3270 scheme support the common, generic and server methods. =item B<ssh>: Information about ssh is available at. C<URI> objects belonging to the ssh scheme support the common, generic and server methods. In addition, they provide methods to access the userinfo sub-components: $uri->user and $uri->password. =item B<urn>: The syntax of Uniform Resource Names is specified in RFC 2141. C<URI> objects. 
=item B<urn>:B<isbn>: The C<urn:isbn:> namespace contains International Standard Book Numbers (ISBNs) and is described in RFC 3187. A C<URI> object belonging to this namespace has the following extra methods (if the Business::ISBN module is available): $uri->isbn, $uri->isbn_publisher_code, $uri->isbn_country_code, $uri->isbn_as_ean. =item B<urn>:B<oid>: The C<urn:oid:> namespace contains Object Identifiers (OIDs) and is described in RFC 3061. An object identifier consists of sequences of digits separated by dots. A C<URI> object belonging to this namespace has an additional method called $uri->oid that can be used to get/set the oid value. In a list context, oid numbers are returned as separate elements. =back =head1 CONFIGURATION VARIABLES The following configuration variables influence how the class and its methods behave: =over 4 =item " =item ("") ==> "" =back =head1 BUGS Using regexp variables like $1 directly as arguments to the URI methods does not work too well with current perl implementations. I would argue that this is actually a bug in perl. The workaround is to quote them. Example: /(...)/ || die; $u->query("$1"); =head1 PARSING URIs WITH REGEXP As an alternative to this module, the following (official) regular expression can be used to decode a URI: my($scheme, $authority, $path, $query, $fragment) = $uri =~ m|(?:([^:/?#]+):)?(?://([^/?#]*))?([^?#]*)(?:\?([^#]*))?(?:#(.*))?|; The C<URI::Split> module provides the function uri_split() as a readable alternative. =head1 SEE ALSO L<URI::file>, L<URI::WithBase>, L<URI::QueryParam>, L<URI::Escape>, L<URI::Split>, L<URI::Heuristic> RFC 2396: "Uniform Resource Identifiers (URI): Generic Syntax", Berners-Lee, Fielding, Masinter, August 1998. =head1 COPYRIGHT Copyright 1995-2003 Gisle Aas. Copyright 1995 Martijn Koster. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. =head1 AUTHORS / ACKNOWLEDGMENTS This module is based on the C<URI::URL> module, which in turn was (distantly) based on the C<wwwurl.pl> code in the libwww-perl for perl4 developed by Roy Fielding, as part of the Arcadia project at the University of California, Irvine, with contributions from Brooks Cutter. C<URI::URL> was developed by Gisle Aas, Tim Bunce, Roy Fielding and Martijn Koster with input from other people on the libwww-perl mailing list. C<URI> and related subclasses was developed by Gisle Aas. =cut
Shell sort is an extension of insertion sort that compares and sorts elements separated by a gap, shrinking the gap over successive passes until it reaches 1 and the algorithm finishes with an ordinary insertion sort. It is named after its inventor, Donald Shell.
Shell Sort and Gap Sequence
There are a number of options for calculating the gap sequence used by shell sort. The last gap used by shell sort will be 1, which is insertion sort. In the Python algorithm used in this article the following formula is used to generate the gap sequence.
gap = 2 * floor(N / 2^(k+1)) + 1, where k >= 1 and N is the length of the list
For a list of 10 items, the gap sequence will be 5, 3, 1.
Shell Sort Example
Let's say we have the following list to be sorted by shell sort.
[6, 0, 9, 3, 7, 5, 4, 1, 8, 2]
The first gap is 5 based on the gap formula. The list will be logically divided into several interleaved lists with a gap of 5, each one being sorted individually using the insertion sort algorithm. Logically these are 5 lists of 2 items each.
[6, 5] -> [5, 6]
[0, 4] -> [0, 4]
[9, 1] -> [1, 9]
[3, 8] -> [3, 8]
[7, 2] -> [2, 7]
After 5 sorting completes, this will be the new list.
After 5 sorting: [5, 0, 1, 3, 2, 6, 4, 9, 8, 7]
The same process occurs for a gap of 3 and 1.
After 3 sorting: [3, 0, 1, 4, 2, 6, 5, 9, 8, 7]
After 1 sorting: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
The final sort is insertion sort, which has a gap of 1.
Shell Sort in Python
Here is one solution for shell sort in Python. It's a little more complicated than insertion sort, because one has to caculate and keep track of the gaps used during the sorting process.
def shell_sort(lst):
    """
    Performs in-place Shell Sort on list of integers.

    :param lst: list of integers
    :return: None

    >>> lst = [6, 0, 9, 3, 7, 5, 4, 1, 8, 2]
    >>> shell_sort(lst)
    >>> assert(lst == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    """
    if lst is None or len(lst) < 2:
        return
    k = 1
    gap = len(lst)
    while gap > 1:
        gap = 2 * int(len(lst) / 2 ** (k+1)) + 1
        if gap < 1:
            gap = 1
        for index in range(len(lst) - gap):
            i = index + gap
            while i < len(lst):
                value = lst[i]
                j = i
                while j - gap > -1:
                    if value > lst[j-gap]:
                        break
                    lst[j] = lst[j-gap]
                    j -= gap
                lst[j] = value
                i += gap
            index += 1
        k += 1
Conclusion
Insertion sort is a very popular sorting algorithm, and shell sort is also used by various libraries. If you're interested in other sorting algorithms that use the gap technique for comparing values over larger distances, comb sort is a great example.
The Java Specialists' Newsletter
Issue 094 - 2004-09-20
Category: Language
Java version: Sun JDK 1.5.0-rc
Welcome to the 94th edition of The Java(tm) Specialists' Newsletter. I would like to
welcome Nelson Boegbah from Liberia onto our list (we now
have readers from 101 countries). Can you imagine
a city of one million residents, without any running water,
electricity or telephone lines? I certainly cannot imagine
that, but that is what Monrovia (capital of Liberia and
Nelson's home) is like. Nelson has promised to write a few
stories about life in Liberia. They are off-topic for our
Java newsletter, so if you are interested, please send me an email.
August 2004 was extremely busy. I spent my days and nights
developing a J2ME application for a customer. It
reminded me of my PhD days - working until 3:00am every
morning (I'm sure you can relate to that ;-). The highlight
was one Friday night, when I sneaked into bed at 1:00am, and
felt guilty because there were still two productive hours
left in the day! As you may imagine, there was not too much
time left over to write newsletters, but I AM BACK!
This newsletter was written based on an idea sent to me by
Remco de Boer from the Netherlands.
Attention: On the 30th of November 2004, I would have
been sending The Java(tm) Specialists' Newsletter for exactly four years. I want to try
publish the 100th newsletter on that same day. If you have
good ideas that you would like to share with 8000+ Java
specialists in 101 countries, please email them to me. I
want to thank all those who have helped grow this newsletter
over the past 4 years by forwarding it to their friends and
colleagues :-)
The idea is to have two classes, an abstract superclass and
a subclass implementation. The subclass' constructor calls
the superclass constructor, which in its turn calls an
abstract method implemented by the subclass. The method
implementation sets a (subclass) member variable - and this
is where the fun begins.
I have often seen code like this:
public class MyClass {
private boolean b = true; // unnecessary initialisation
private int i = 42; // unnecessary initialisation
public MyClass(boolean b, int i) {
this.b = b;
this.i = i;
}
}
The writer of MyClass was being overcautious by initialising
fields that are actually being set in the constructor anyway.
"Oh, but it does no harm." Really? Before we look at
Remco's example, let us decompile the class and see what the
compiler did with the field initialisers:
public class MyClass {
public MyClass(boolean flag, int j) {
b = true;
i = 42;
b = flag;
i = j;
}
private boolean b;
private int i;
}
If we look carefully, we see that the field initialisers
get copied into the constructor as part of compilation.
The steps "b = true" and "i = 42" are thus of no use at all.
I find this interesting. All the initialising code and the
initialiser blocks are copied into each of the constructors.
Another quick example:
public class MyClass2 {
{ System.out.println("Goodbye"); }
public MyClass2() { }
public MyClass2(boolean b) { }
public MyClass2(boolean b, String s) { }
private int i = 4;
{ System.out.println("Hello"); }
}
becomes the compiled class:
public class MyClass2 {
public MyClass2() {
System.out.println("Goodbye");
i = 4;
System.out.println("Hello");
}
public MyClass2(boolean flag) {
System.out.println("Goodbye");
i = 4;
System.out.println("Hello");
}
public MyClass2(boolean flag, String s) {
System.out.println("Goodbye");
i = 4;
System.out.println("Hello");
}
private int i;
}
Now, let us look at the classes that Remco sent me:
public abstract class A {
public A(int i) {
build(i);
}
protected abstract void build(int i);
}
and its subclass
public class B extends A {
private int size = 0;
public B(int size) {
super(size);
}
protected void build(int size) {
this.size = size;
}
public int size() {
return size;
}
public static void main(String[] args) {
B test = new B(1);
System.out.println("Size: " + test.size());
}
}
The resultant output is:
Size: 0
This is correct, even though it is easy to think that the output
should rather be "Size: 1"!
this by not explicitly setting the B.size field to 0 in the
declaration. In other words, declaring
private int size; provides the
expected answer ('Size: 1').
According to the JVM Language Specification, this is expected
behaviour. A superclass is initialized before the member
variables of a subclass. Explicitly setting the member
variable to 0 therefore takes place after the super
constructor called the build method (note that you can set
the size to anything you want, not necessarily 0). When you
leave out the explicit '= 0', the variable is of course still
(implicitly) initialized to 0. However, this implicit default
initialization ('preparation') is performed before the
superclass constructor. [See for instance section
2.17.6, in particular steps 3 and 4].
It makes sense, when we consider that the initialisation
code (size=1) is moved to the start of each constructor.
final: The confusion could have been
avoided by changing the design of the application. I usually
try to make all fields final. This
would have highlighted the problem in the code from the
start.
Kind regards
Heinz
During the Armenian Genocide, Turks, Kurds, Arabs, and other Muslims were both perpetrators and beneficiaries of the deportations and killing—but they also saved non-Muslims. This study documents and analyzes the ways in which Armenians were rescued and the various motives of the rescuers. Unlike previous studies, which are based solely on survivor oral histories or anecdotal family material, this paper also utilizes missionary reports, published survivor memoirs, German consular reports, archival sources, and other material. It discusses the concept of Righteous among the Nations and it explores the application of this idea to the context of the Armenian Genocide. Wider recognition of the phenomenon of Turks who saved Armenians can facilitate dialogue between Armenians and Turks today, many of whom tend to view each other as enemies.
Key words: Armenian Genocide, Muslims who saved Christians, rescuers, the righteous, Turkish-Armenian dialogue
Introduction In this paper, we explore the subject of righteous Turks who saved Armenians during the Armenian Genocide. First, an explanation of the term Turks as used in this paper is in order. Armenians were targeted for deportation and destruction based primarily on their religion. As Christians, they were perceived by the ruling Ottoman elite, as well as Ottoman Muslim society in general, as serving the interests of the Christian European powers, who seemed intent on dismembering the Ottoman Empire. The Armenians were believed not to share in the common concern for the welfare and preservation of the empire. Ethnicity was much less a factor in the Armenians’ being targeted. Arme- nians could be spared by converting to Islam, especially prior to July 1916. At the same time, the Christian Assyrians and Greeks in the Ottoman Empire were also targeted for destruction as early as 1913.1 In some cases, the sources identify a rescuer specifically as an Arab or Kurd, for example, and I have accepted that at face value. When they refer to a rescuer as a Turk, however, I am aware that the term could have been used based on knowledge that the individual was of Turkish ethnicity or as a catch-all for any Muslim; there is no way to be sure. Therefore, in this paper, Turk is used generically to refer to an Ottoman Mus- lim, whether they were known to be of Turkish ethnicity or not. An explanation of the term righteous is also in order. In 2001, I published a short essay, titled “Turks Who Saved Armenians: An Introduction,” in which I used the term righteous Turks.2 This phrase was adopted from the Israeli practice of honoring non- Jews who had risked their lives to save Jews during the Holocaust as “Righteous among the Nations.” There, righteous is taken from the Talmudic term hasid umot ha’olam,
referring to those who, though not direct recipients of the Ten Commandments and the Torah, demonstrated by their actions that their sense of justice and mercy was rooted in those teachings that, according to Jewish tradition, lead to the highest degree of human goodness.3 It is based on the idea that, whereas Jews who had saved fellow Jews were only fulfilling a religious obligation, non-Jews had no such responsibility; therefore, those who risked their own safety to help Jews deserve special recognition.4 To be considered “righteous,” one’s actions had to involve “extending help in saving a life; endangering one’s own life; absence of reward, monetary or otherwise; and
similar considerations which make the rescuer’s deeds stand out above and beyond what can be termed ordinary help.”5 Cases are examined carefully by a public commis- sion in Israel, headed by a Supreme Court justice and according to a set of criteria, before the title of “righteous” is granted.6 Criteria include evidence by survivors and other eyewitnesses; only a Jewish party can put a nomination forward; helping a family member or Jew convert to Christianity is not a criterion for recognition; assistance has to be repeated or substantial; and assistance has to be given without any financial gain expected in return (although requiring payment for rent, food, or similar expenses is deemed acceptable).7 It has been argued that, because the concept of the righteous originates in the Judeo-Christian tradition while its modern use is secular, Muslims end up being ex- cluded from the notion of righteousness because Muslim tradition is different.8 I think this interpretation overlooks the echoes of the Judeo-Christian tradition found in Islam, which link morality with holiness. Echoing the Babylonian Talmud (4:1),9 the Qur’an (5:32) states, “He who saves a life it is as if he saves the entire world; he who destroys a life it is as if he destroys the entire world.”10 A little later (Qur’an 5:151), it echoes the sixth commandment of the Old Testament (Exodus 20, Deuteronomy 5): “Slay not the life which Allah hath made sacred, save in the course of justice.” The Qur’an (16:97) also states, “Whoever does righteous deeds, whether male or female, provided he is a believer, We shall surely grant him a new life, a life that is good, and We will certainly reward such people according to the noblest of their deeds in the hereafter.” Further- more, there are multiple references to Turks saving Armenians, or at least criticizing the government policy of their deportation and massacre, as being contrary to Islam.11 Moreover, there are several words in the Turkish language for the notion of righteous and righteousness—doğru, doğrucu, dürüst, dürüstlük, hak tanır. The application of the term righteous in the context of the Turkish rescue of Armenians is as valid for its reli- gious origins as for its indication of moral principle. During the years 1913–1923, not only Armenians but also Assyrians and Greeks were subject to a deliberate policy of the Young Turk—and subsequently, the Kemalist— regime to rid itself of these non-Turkish, non-Muslim minorities, whom it blamed for the economic, political, and military failures of the government. This policy involved the bru- tal deportation and massacre of these minorities as well as the confiscation of their land, property, and wealth. In this study, I will focus only on cases of Armenian rescue, although I do expect that there are comparable cases of rescue of Assyrians and Greeks. There is no way to know today how many individual acts of rescue occurred in those tragic years. The sources of information used in this paper include recorded oral testi- monies of Armenian survivors that have been published, anecdotal family histories about the survivors transmitted orally, written autobiographies and survivor memoirs,
and reports of American, Danish, German, and Swiss missionaries as preserved in the German Foreign Office archives. In the course of this exploration, I will examine what the rescuers did, the phenom- enon of rescue during genocide, the motivations of the rescuers, and the significance of all this for political and social relations between Armenians and Turks now and in the future. Ultimately, this study is a search for humanity in the midst of genocide.
The pioneering work on this subject was done by Richard G. Hovannisian. Basing his research on the 527 oral histories collected at the University of California–Los Angeles, he approached the question of altruism during the genocide.12 The research sample was thus limited to evidence provided through oral histories collected from survivors who ended up in southern California. Hovannisian found that, statistically, humanitarian considerations were the main motivations for the majority of his cases (51.5%), while religious considerations were among of the least important (4.3%). Three-quarters of the interventions were by individuals unknown to the survivors.13 More recently, Shahkeh Yaylaian Setian published a study based on 15 family anec- dotes received through a call she had placed in Armenian-American newspapers.14 The material provided relatively few examples and little detail about acts of rescue, especially if one discounts personal gain for the rescuer. While her book was a sincere effort to promote Turkish-Armenian dialogue, it does not offer a comprehensive account or ana- lysis of the issue. To expand a little on the above description of the sources used in the present study: they include the recorded oral testimonies of Armenian survivors living in Armenia that have been published, anecdotal family histories about Armenian survivors in Can- ada transmitted orally and previously unpublished, written Armenian autobiographies and survivor memoirs published in the United States, and reports of American, Danish, German, and Swiss missionaries and representatives of the Baghdad Railway, which were contemporaneous with the events they describe, as preserved in the German For- eign Office archives. In addition, information on Ottoman officials who saved Arme- nians is aggregated for the first time in this paper. This broader collection of information from multiple types of sources adds detail to the previous studies and gives a more rounded picture of the nature of such rescue, such as the pervasiveness of opposition to the genocide, the importance of religion as a motive, and the implications of such things, particularly regarding their potential for aiding Turkish-Armenian dialogue.
Challenges None of the rescuers or the survivors is alive today to answer questions or explain motivations. However, the examples presented below do provide information, either explicitly or with sufficient detail, to reasonably infer the rescuers' motives in these cases. In some of the examples presented below, the survivors express great reverence for their Turkish rescuers. During my efforts to collect oral family anecdotes, however, I found that the descendants of the survivors, who knew through their family histories that their ancestors had been saved by Turks, were very reluctant to share these accounts. Some who did tell me such stories often either asked that they be kept anonymous, because they feared scorn from family members, or refused permission to have them published, out of fear of being perceived to disrespect the memory of the victims.
The events of 1915–1923 were catastrophic and debilitating for Armenians, not only physically but also psychologically. This has been exacerbated by the ongoing and aggressive denial, perpetrated by the modern Turkish state right up to today, that genocide had taken place. Such denial re-victimizes and traumatizes the descendants of the survivors. Even now, after the span of 100 years, it is still difficult for Armenians to acknowledge that there were Turks who did good deeds during the genocide. Owing to the limited length of a journal article, it is not possible to include or reference every example of rescue I have uncovered. By no means have I exhausted the sources.
any Muslim who provides shelter to an Armenian with hanging in front of his house and the burning down of that house, while government officials will be re- moved from their posts and military men shall be expelled from the army. In either case the offenders are to be brought before the Military Tribunal for trial.15
In Van, the governor general (vali), Cevdet Bey, is reported to have issued a general order that stated, “The Armenians must be exterminated. If any Moslem protect a Christian, first, his house shall be burned, then the Christian killed before his eyes, and then his (the Moslem’s) family and himself.”16 Given such highly charged circum- stances, today one can only imagine the difficulty of helping Armenians escape the de- portations and massacres and the humanity and courage required to do so. It should be noted that, prior to July 1916, some Armenians could escape deporta- tion and death by converting to Islam. Even afterwards, it is well known that many of the stragglers or survivors could be taken in or adopted quite openly on condition that they convert to and profess Islam.17 Indeed, in some areas, Armenian orphans were gathered into Turkish state orphanages in order to be turkified. This is one of the chief differences between the Armenian Genocide and the Holocaust; Jews had no compara- ble option. It is remarkable that so many Armenians chose not to renounce their Christianity in order to save themselves.18
In 1915, the family lived in Urfa. This story was first told publicly at the “Problems of Genocide” international conference in Yerevan in 1995.
for that reason. My maternal grandfather was hanged in front of his family, which included his pregnant wife, my grandmother, and four children between the ages of two and eight. A Turkish businessman, Haji Khalil, had been my grandfather’s business partner, and had promised to care for his family in case of misfortune. When a disaster greater than anything either of them could have imagined struck, he kept his prom- ise by hiding our family in the upper storey of his house for a year. The logistics in- volved were extremely burdensome. Including my grandmother’s niece, there were seven people in hiding. Food for seven extra mouths had to be purchased, prepared and carried up undetected once a night and had to suffice until the next night. Khalil’s consideration was such that he even arranged for his two wives and the ser- vants. Every night, my mother used to remember him in her prayers to the end of her life—may God bless his soul.19
Helen's Story This account was told to me in Cambridge, Ontario, in 2000. I call it "Helen's Story." At Helen's request (her real first name), all identifiable murdered Helen's grandfather, her father's father. They had shot him in the back while he was out riding on horseback and while holding his sister's hand. The horse had returned home with him slumped over in the saddle. Therefore, she did not think it would be respectful of her grandfather's memory to be such good friends with this Turkish girl. Helen's mother told her that she could have other friends over to the house but that she should not go to that friend's house. Helen made an excuse to miss the party, but her friend wanted her there so much, she postponed it. This happened again, and again. After her friend offered to postpone the birthday party for the third time, Helen felt compelled to explain demeanor settled in Canada. Despite the passing of time, she stated that she had never forgotten this story, and it was clear that it still had a strong emotional impact on her many years later.20
Antik Balekjian was a little girl in Gesaria (Kayseri) when her life and her kinder- garten and elementary schooling were cruelly interrupted eighteen times because of Turkish authorities entering her house with search warrants. When the raids oc- curred, Antik would get a glimpse of what the gendarmes did to her father and her house. Her mother would push her out of the side door and whisper instructions for her to go to either one of her married siblings or the next door neighbor, so she would not witness the barbaric attacks on her father, the merciless vandalism of their house, and her parents’ humiliation. Each time, their house was ransacked and left in ruins. Her father was beaten up by the gendarmes, who demanded he reveal the hiding place of his stepson (Antik’s stepbrother), Hovhannes, who was falsely accused of manslaughter. Hovhannes had stuck flyers on government buildings asking for justice and equality for the Armenian people from the Ottoman government. Eventually, he was captured and hung [sic] for a crime he never committed, in 1890. His death caused a gaping hole of sorrow in the hearts of the family members. Gesaria’s population consisted of Armenians, Turks and Greeks. Her father, Master Garabed Balekjian, was a consultant to civil engineers. Young civil engineers from all three communities came to their house for consultations and advice. Dur- ing these visits, Antik brought the visitors Armenian coffee and desserts with a glass of cold water on a tray, as was the tradition. Her father never turned down any of these young engineers, in spite of his turbulent life. He was generous, honest and encouraging with his advice and instructions. Antik, the youngest of nine children, was never spoiled. She always felt a sadness in her heart for missed schooling because of the turmoil in her home life caused by those raids. She helped her mother in household duties and weaved carpets in her spare time. She mastered that art. She even weaved with silk and her handiwork was sent to Istanbul for sale. She married a fabric designer and printer, Haroutiun
Kassabian, in 1904. At the beginning of World War I, her husband was drafted into the Ottoman army. In 1915, nineteen caravans were formed from the Armenian population of Gesaria. Antik was among them, along with her disabled mother-in-law, and three sons, ages nine, six and one. They were sent on the forced marches, like the rest of the Armenian population in all the provinces of the Ottoman Empire. Along the way, people were robbed, killed, kidna[p]ped, raped and tortured. They were exposed to all kinds of climates and weather, without shelter, deprived of water and
nose reacted to her emotions, to the scenes that she witnessed and to the experi- ences she lived. Unlike my father, her middle son, she was able to tell us about the horrific years of hunger and thirst, fear, looting, losses, sickness, bitter cold, burning sun, murders, death, kidnappings, rapes and forced conversions to Islam. During these dreadful years of torture, stories of human kindness, miracles along with stor- ies of perseverance, hope and prayer stand out. The responsibility for three minor-aged sons, ten, six and one, and her mother- in-law, almost blind and incapable of walking, turned my shy, thirty-two year old grandmother to a lion ready to protect her cubs. She learned by heart an important phrase and recited it whenever she saw any danger from the gendarmes, bandits or other aggressive groups. The phrase was “the families of soldiers are in trust of the nation.” Then she would add her husband’s number and division. He was drafted into the Turkish army in 1914, and the death marches started in 1915. Turkish was her mother tongue. In her native Kayseri/Gesaria, speaking Armenian was forbid- den. The punishment was extremely harsh. Whoever was caught, their tongue would be cut off. This is not a fable. Later in my research, I found this fact docu- mented and verified by survivors. Among the few stories of human miracles, there is one that stands out in my mind. Their caravan arrived at a Turkish village; by that time their number was less than half of the initial group. People had died of being exposed to extreme weather conditions, to sickness, young girls and women were kidnapped, the very old and the babies could not survive, families had been separated. My father had become very sick. He had high fever, chills and diarrhea. My grandmother kept changing his soiled cloth[e]s and wrapping him tightly with whatever she found to stop his chills. She knew she was losing him. They found a shelter in a Turkish peasant wo- man’s barn. The landlady, Fatiya, asked her what was wrong with the boy. Antik told her his symptoms with trembling lips. Fatiya asked them to settle in and she would be back soon. Indeed, she came back shortly, holding a bowl of hot soup, a bundle with warm sand and home remedies. She instructed Antik to wrap the warm sand around the boy’s abdomen. She visited them three times daily, each time she brought the soup, the warm sand and the home remedies. A few days later, the fever broke, and in two weeks’ time, he was almost normal. Unfortunately, marching orders came soon after and they had to leave to an unknown and fearful road. Fatiya had parting words for Antik. “Though I am a peasant woman, I am an Imam’s daughter, and I know how to read. Nowhere in the Quran does it say that we should kill the Christians, who know God. What these gendarmes are doing, killing children in front of their parents, depriving the caravans of shelter and water is against Islam. According to our religion, the destitute and the disabled should be protected and cared for. I saved your son, so God will have mercy on me
and return my four sons home safe from the war.” The two mothers embraced tightly and parted. The memory of Fatiya remained with my grandmother to the day she passed away at age 90. She prayed that Fatiya’s soul would always rest in Heaven. Loulsa- ren ichindeh yatsen she repeated many times, meaning, may she lie among lights. She was eternally grateful to Fatiya and never forgot her human kindness.
A Typology of Rescue
Generally speaking, the ways in which Turks rescued Armenians can be listed according to seven broad categories of rescuers: Ottoman officials who refused to act against the Armenians; those who hid Armenians in their homes for an extended period of time; those who helped Armenians with food, medicine, and shelter for a short period of time; those who helped Armenians en route in the deportation caravans; those who helped Armenians to escape deportation; those who adopted children and treated them kindly, as their own family; and those who married young girls or arranged their mar- riages to their sons. A comparison of the types of rescue in Mordecai Paldiel’s study of the Holocaust reveals some overlap with the Armenian case, but also significant differences.21 Paldiel also identified seven categories: raising protest or alarm (including an SS officer who attempted to expose the horrors); foreign officials helping Jews escape with visas; shel- tering and hiding; subterfuge (including by German officials); sheltering of children; acts by members of the clergy; and rescue of individuals during death marches. Shelter- ing refugees, intervention by officials, and special consideration for children are found in both cases. Religion is a factor in Paldiel’s study only insofar as it was practiced by members of the clergy, whereas during the Armenian Genocide, it was a factor among Muslim religious leaders and lay people. (However, this is not to say that there are not cases of pious rescuers in the Holocaust documented elsewhere.) Most significantly, there was no possibility of Jews knowingly marrying or being adopted into gentile families. Let us now review examples of the types of Armenian rescue. Understanding the types of rescue will help us understand the motives of the rescuers.
The kaimakam was so good to the Armenians, and for this he would be dealt with badly. The higher authorities finally chased him away from his post in Rakkah. They had telegraphed him one final warning to deport all Armenians to the Der-el- Zor for massacre. He refused. In addition to the humanitarian reasons, the Arme- nians had built and revived his whole town. Without their industry and education, Rakkah would have been with the war reduced to a barren waste. Already, with the Turkish men at the front and the Armenian men dead, the lands were lying useless, the sparse harvests rotting; food was in great shortage. Business had come to a jolt- ing halt. The few businessmen still open were often run by Armenians under
The names of such people as Bedri Nuri (lieutenant governor of Müntefak), Mehmet Celal Bey (governor general of Aleppo and Konya), Ferit (governor general of Basra), Ali Suat Bey (district governor [mutasarrıf] of Deir es Zor), Hüseyin Nesimi (mayor of
Lice), Hasan Mazhar Bey (governor general of Ankara), Reşid Paşa (governor general of Kastamonu), Şabit (deputy prefect of Beşiri), Faik Ali Bey (Ozansoy) (district governor of Kütahya), Mustafa Bey (Azizoğlu) (district governor of Malatya), Cemal Bey (district governor of Yozgat), and others who tried to alleviate the suffering of the Armenians deserve to be remembered today.23 It is even reported that Ahmed Rıza, a member of the Committee for Union and Progress (CUP, Ittihad ve Terakki Cemiyeti) and presi- dent of the Senate, dared protest against the massacre of the Armenians and was in- stantly ordered imprisoned by Talât (though saved thanks to the intervention of others)24 and that Hayri Bey, the sheikh ul-Islam, “had the temerity to criticize his col- leagues’ policy of massacre of the Armenians.” For this and other disagreements with the government, he was arrested, tried in civil court, and executed. Whether one was an opponent of the CUP or a CUP insider, failure to support the party’s policy toward the Armenians meant one’s own demise.25 Turks Who Hid Armenians in Their Homes for an Extended Period of Time One of the most courageous and humanitarian types of rescue occurred when a group of Armenians was hidden in a Turkish home for an extended period of time. The com- plicated logistics and the risks are illustrated in the following passage from “The Story of Haji Khalil,” given above.
Food for seven extra mouths had to be purchased, prepared and carried up undetected once a night and had to suffice.
Such sustained efforts over an extended period of time entailed great risk and suggest a great commitment on the part of the rescuer, both in terms of the caring for the sheltered individuals and to moral and humanitarian principles. Such narratives may call to mind the renowned story recorded in the Diary of Anne Frank, which relates the experience of a Jewish family being hidden from the Nazis in the Netherlands during World War II, and have strong emotive appeal today.
Turks Who Helped Armenians with Food, Medicine or Shelter for a Short Period of Time Another type of rescue was helping Armenians with food, medicine, or shelter for a short period of time. This type of rescue was one of the more frequent occurrences and is illustrated by the following anecdote about an incident that took place in Bzhnkert village, province of Van:
My uncle's wife, Paydsar, took us to the barn. The Turk neighbor's wife covered the barn door with straw and hay, so that they might not find and kill us. Our neighbor's eight-year-old daughter, who had gone away from her parents, was caught by three Turks and raped in our yard. We saw all that through the haystacks. In an hour she died. The following night that Turk woman took us: five–six children, to her house, fed us, kept and sent us to Van by night. We stayed there until the deportation began.26
This type of rescue, even though taking place over a relatively short period of time, still entailed considerable risk and considerable commitment to moral and humanitarian principles. The fact that the rescuer was a neighbor of the Armenians adds the extra dimension of saving someone the rescuer knew personally.
While being deported, Aghavni observed hundreds of young women commit sui- cide by drowning themselves in the Euphrates. She said the rivers were awash with bodies of people who had been killed by the Turks, as well as those who had drowned themselves. At one point, in despair, she left her children on the riverbank and threw herself in the river, but a relative saw her and solicited the assistance of a kind gendarme who pulled her out of the water. As she had lapsed into uncon- sciousness, the next thing she remembered was the gendarme slapping her on the back trying to revive her, and her young daughter crying in a thin voice, “Gen- darme, don’t hit Aghavni. Don’t hit Aghavni.” The gendarme was an older man with a real conscience, she said. In fact, he gave Aghavni three gold pieces and in- structed her, “Take it and don’t throw yourself in again.”27
Sometimes, the good intentions of the civilian Turkish population were thwarted by the gendarmes. According to one report, “Between Marash and Aintab, the Moham- medan population of a village wanted to distribute water and bread to a transport of about 100 families, but the soldiers accompanying the transport would not permit this.”28 Another report from Aleppo related the following:
A short while ago, the Armenian emigrants coming from the interior were led through the town, and the inhabitants were strictly forbidden to refresh those dying of thirst in the heat with a drop of water. Eyewitnesses confirm that an old woman, who collapsed from exhaustion, was forced to move along by a gendarme who kicked and whipped her. When a woman came out of a neighbouring house with a glass of water, the gendarme knocked the glass out of her hand and at- tempted to mistreat the old woman again. She dragged herself past another few houses and died there. Despite this, it is strictly forbidden to give the people bread or even water. Two men who attempted to do so in two different places received official letters threatening them with court-martial.29
My father was also killed by Turks. During those dark days in Turkey, when the life of every Armenian was in danger, my father smuggled Armenians to freedom eleven times. The twelfth time, he was caught by the authorities and executed. They
Although such arrangements were often benign, they were a form of absolute slav- ery, for if the master or any member of the family disliked the refugee-servant, they could turn the unfortunate into the street to starve, to be enslaved by the soldiers, or, as was later done to Acabie, hand her over to the authorities to do with as they saw fit.33
At the same time, such arrangements were often not benign, and the child was used for forced labor or as a concubine and had little recourse. Marrying Young Girls to Heads of Families or Their Sons Some of these children were integrated into their adoptive Turkish families and were married to the heads of families or to one of the sons. After the end of the war, the Lea- gue of Nations undertook a humanitarian program called the “Rescue Movement.” Its purpose was to reclaim Christian children who had been absorbed into Muslim homes, involuntarily married, or forced into servile concubinage and to reunite them with their family members or place them in League-approved orphanages. Some of the girls were content with their new lives, had had children in these marriages, and chose to remain with their Turkish families willingly. Others left when they could, some even leaving their children behind.34
humble individuals who helped out a pitiable individual they encountered. Let us now review examples of the types of motivation of the Turkish rescuers.
Personal Friendship Personal acquaintance and family friendship are a recurrent motive in the survivors’ stories of rescue. A typical example that took place at Ras ul-Ain is described below.
While we were there, a Turkish officer came and asked us where we were from. We said: Harpoot-Kessirik. He said, "Which one of you is from Kessirik?" I said that I
was. He asked my name, and I told him that I was the daughter of Sargis and Gohar Aslanian. He said, “You are the daughter of Sargis Aslanian?” I answered, “Yes.” A kind look came over his face as he said, “I have visited your home many times to see your father about official town business, and your father has been very hospitable to me on many occasions. In fact, I recall once your father gave me some figs and dates to take home to my child. There was a little girl who came and took them out of my pocket and ate some of them. Yes, you must be that little girl.” He smiled as he recalled the incident. I confessed that I did not remember that, but said, “Well, effendi, since you say that you have eaten at our table many times, then please save me and take me to Halab (Aleppo) or put me on a train, so that I may go there.” It was known that Halab had trains going in and out of it. He told me to stay where I was and that he would return with some food. He came back with food and told me that if he were assigned to remain there, he would save me out of respect for my father. However, he was sent away from that area and did not return.38
Religious Piety Religious piety was certainly a motivating factor for some. Islam calls for the protection of conquered non-Muslims. This protocol, known as dhimma, is a direct result of con- quest and is linked to a protection pact that suspended the conqueror’s initial right to kill or enslave followers of the tolerated religions, provided they submitted themselves to pay the tribute (cizye).39 It should be noted that this protection was ascribed to the Prophet, which means that it fulfilled the will of Allah. To transgress it represented a breach of religion—an important point because the non-Muslim’s right to existence within the context of Islamic law no longer depended on the whim of a potentate but, henceforth, was rooted in a divine command.40 An official pronouncement on the importance of Muslims protecting non-Muslims is found in the following statement:
Indeed, the Prophet strictly warned against any maltreatment of people of other faiths. He said: “Beware! Whoever is cruel and hard on a non-Muslim minority, or curtails their rights, or burdens them with more than they can bear, or takes any- thing from them against their free will; I (Prophet Muhammad) will complain against the person on the Day of Judgment.” (Abu Dawud)41
On another occasion, the Prophet sent a message to the monks of Saint Catherine in Mount Sinai:
church to pray. Their churches are to be respected. They are neither to be pre- vented from repairing them nor the sacredness of their covenants. No one of the nation is to disobey this covenant till the Day of Judgment and the end of the world.42
From Al-Husayn Ibn ‘Ali, King of the Arab Lands and Sharif of Mecca and its Prince to The Honorable and Admirable Princes—Prince Faisal and Prince Abd al-’Aziz al-Jarba—greetings and the compassion of God and His blessings. This let- ter is written from Imm Al-Qura (Mecca), on 18 Rajab 1336, by the praise of God and no God except Him. . . . ‘Ali44
Humanity Similarly, basic human decency was a motivating factor for others. A “report by a German public official from the Baghdad Railway” stated, “It should not be forgotten that there are also Mohammedans who disapprove of the atrocities carried out against the Arme- nians. A Mohammedan sheikh, a respected personality in Aleppo, said in my presence, ‘When people speak of how the Armenians are treated, I’m ashamed to be a Turk.’ ”45 Dr. Martin Niepage, the German missionary, reported the following from Aleppo:
in advanced pregnancy and upon dying people who can no longer drag themselves along.46
Personal Gain There are cases in which the rescuer derived a tangible benefit. Sometimes, a bribe of
cash, gold, or jewelry provided the motivation to rescue an Armenian. The role of gold coins in the survivability of Armenians being deported is noteworthy. There were many instances when a gold coin could bribe a gendarme or a civilian to help a deportee find the path to safety. An example is found in “At a Crossroad: The Story of Antik Balek- jian,” given above. In a mixture of religious piety and self-interest, one survivor told the story of two Turks who found him foraging for water. They offered to take him to their village. One of the old Turks said to the other, “Listen, my sons and your son are fighting at the Baghdad front. I’ll take this Armenian boy, rescue him from death, and Allah will save our sons from the enemy’s bullet.” They took the boy to their village of Hyulumen, near Antep. The villagers were divided about whether to keep the boy or kill him right away. In the end, the boy was kept, nurtured back to health, and it turned out that he was the second Armenian boy the old Turk had rescued.48 Women, girls, and boys taken by Ottoman officers and ranking soldiers were brought into the men’s own households or were passed to state officials, who gave or sold them to elite and middle-class homes in the major cities of the empire. This was consistent with a mid-nineteenth-century Ottoman policy of placing Muslim refugee girls and boys with elite Ottoman families as “foster children” (beslemeler), a process known in Ottoman legal parlance as evlatlık.49 In many cases, the rescuer kept the sur- vivor for forced labor in his business, on his farm, or in his home. Male children worked in the fields. Girls worked in the home as maids. Assuming the property of an Armenian was another motive. The government is- sued a law that whoever took and kept a 12-year-old child from an Armenian family would be allowed to take the family’s property.50 In the case of female survivors, sexual exploitation and slavery were major factors, and in some instances, the survivor became married to the rescuer or to the rescuer’s children.51 We are still only beginning to learn about the large number of Turks and Kurds who are discovering they have an Armenian grandmother and of dönme (con- vert) Armenians who live as Muslim Turks, the so-called gizli Ermeniler, or “hidden Armenians.”52
should not be used to counterbalance the record of cruelty and horror in some quantitative manner, as if they somehow reduced the immensity of the killing or the intensity of the horror. In fact, there are relatively few documented examples of the Turkish rescue of Armenians when compared with the number of atrocities that have been documented and the number of people who died. The quality of human goodness they evidence, however, may give some comfort to us all.55 Many Turks today feel that the Armenians blame them unfairly. They are taught to believe that there was no genocide of the Armenians. It is ingrained in their upbringing
and culture. They are taught that whatever may have happened, the Armenians were to blame because they were disloyal, or they justify such extreme measures because it was wartime, or they claim that not only Armenians but Turks, too, suffered. The facts do not bear these rationalizations out.56 Taken in their entirety, the Ottoman archives and Western sources complement each other and confirm that the CUP did deliberately implement a policy that intended to destroy the Armenian citizens of the Ottoman Empire.57 Acts of rescue carried out during times of mass atrocity have a unique meaning, their importance amplified beyond the individual actions. Just as genocide may be con- sidered the ultimate crime against humanity, as a corollary, acts of rescue during geno- cide may be considered a sort of ultimate affirmation of humanity, and the extraordinary circumstances of such rescue imbue the rescuer with special adulation and reverence.58 Commemorating Turkish rescuers in the Armenian Genocide can make the events of 1915–1923 more approachable for Turks who have been taught to treat them as a taboo subject. It can help expose these individuals to the facts of what happened, providing a more mutual understanding by Armenians and Turks of this his- tory, which is still aggressively contested by the Turkish state and its supporters. It can help change the monolithic view that many Turks and Armenians have of each other— as disloyal rebels or murderers, respectively. Ultimately, it can be an important factor in making possible dialogue between the two peoples, leading to reconciliation. If patriotic citizens of Turkey truly want to defend their national honor, they must make a genuine effort to face the truth about one of the darkest pages in their history.59 While that truth may be very unpleasant for Turks to face, learning that there are stories of righteous Turks, and that Armenians also know these stories, handed down from their grandparents and parents, can make it possible to establish a new, different, and more positive relationship between the two peoples.60 Accepting and openly acknowled- ging these stories will only help Turkey find its place among the democracies of the modern world.61 From a broader perspective, scholars have questioned how people can commit genocide, and how other people can stand by and do nothing in the face of such gross violence and injustice.62 They remind us that we, as individuals, have a moral responsi- bility to establish, in modern society, a universal atmosphere that engenders and pro- motes the sense of caring for others.63 They argue that focusing efforts on healing, forgiveness, and reconciliation after genocide can facilitate the prevention of other genocides.64 It is hoped that by recalling these examples of how individual Turks behaved mor- ally and altruistically toward their Armenian fellow citizens under the most difficult cir- cumstances, we may all draw a lesson on how people can feel caring and empathy for others. In this regard, it is interesting to note that Yad Vashem identifies 21 Armenians
as Righteous among the Nations—rescuers of Jews during the Holocaust. Some of them were motivated by the memory of the Armenian Genocide. These rescues took place in various locations of the Armenian Diaspora—Austria, Crimea, France, Hungary, and Ukraine.65 The study of righteous individuals and acts of humanity in the midst of genocidal events provides us with uplifting examples of how one good deed can beget another and gives us hope that, even in the darkest circumstances, humanity can triumph over evil.
George N. Shirinian is executive director of the Zoryan Institute. He is co-editor of Studies in Comparative Genocide (Basingstoke, UK: Macmillan, 1999) and editor of The Asia Minor Catastrophe and the Ottoman Greek Genocide: Essays on Asia Minor, Pontos and Eastern Thrace, 1913–1923 (Bloomingdale, IL: Asia Minor and Pontos Hellenic Research Center, 2012).
Notes 1. The name Assyrians, as used here, is intended to refer to Assyrians, Nestorians, Chaldeans, and Syrian/ Syriac Christians. Hannibal Travis, “ ‘Native Christians Massacred:’ The Ottoman Genocide of the As- syrians during World War I,” Genocide Studies and Prevention: An International Journal 1,3 (2006): 327–71, 350n2, doi: 10.1353/gsp.2011.0023. For an explanation of the significance of the different As- syrian denominations, see David Gaunt, “The Complexity of the Assyrian Genocide,” Genocide Studies International 9,1 (2015): 83–103, doi: 10.3138/gsi.9.1.05. 2. “Turks Who Saved Armenians: An Introduction,” rev. ed., Zoryan Institute, 2001,. zoryaninstitute.org/dialogue/Turks%20Who%20Saved%20Armenians.pdf (accessed 11 Aug 2015). At the time, I was unaware that Donald Miller and Lorna Touryan Miller had already used the term in their own work. Donald E. Miller and Lorna Touryan Miller, Survivors: An Oral History of the Armenian Genocide (Berkeley: U of CaliforniaP, 1993), 182. I also did not know that Pietro Kuciukian had founded, in 1996, the International Committee of the Righteous for the Armenians’ Memory. Pietro Kuciukian, “Why Armenians Should Honour the ‘Righteous’ of the Armenian Genocide,” Études arméniennes contemporaines 2 (2013): 117–124. 3. Mordecai Paldiel, Saving the Jews: Amazing Stories of Men and Women Who Defied the “Final Solu- tion” (Rockville, MD: Schreiber, 2000), xii. 4. Paldiel, Saving the Jews, 273. 5. Martin Gilbert, The Righteous: The Unsung Heroes of the Holocaust (Toronto: Key Porter, 2003), xv– xvi; Nechama Tec, “Righteous among the Nations,” in The Holocaust Encyclopedia, ed. Walter La- queur, assoc. ed. Judith Tydor Baumel (New Haven : Yale UP, 2001), 569–74, 569–70. 6. “About the Program,” Yad Vashem, (ac- cessed 11 Aug 2015). 7. Wikipedia contributors, “Righteous among the Nations,” Wikipedia, The Free Encyclopedia, https:// en.wikipedia.org/w/index.php?title=Righteous_Among_the_Nations&oldid=673081704 (accessed 11 Aug 2015). 8. Fatma Müge Göçek, “In Search of ‘the Righteous People’: The Case of the Armenian Massacres of 1915,” in Resisting Genocide: The Multiple Forms of Rescue, ed. Jacques Semelin, Claire Andrieu, and Sarah Gensburger (New York: Columbia UP, 2011), 33–49, 37–8. 9. See also Pirkei deRabbi Eliezer 47, Eliyahu Rabbah 11, and Yalkut Shimoni on Exodus 166, as well as Talmud Sanhedrin 37a, where alone, the focus is on Jews in particular. 10. The appearance of this passage in the Quran is open to other interpretation. See Archie Medes, “Does the Koran Forbid the Killing of Non-Muslims?,” Patheos, daylightatheism/essays/does-the-koran-forbid-the-killing-of-non-muslims/ (accessed 11 Aug 2015). 11. See, for example, Bat Ye’or, Islam and Dhimmitude: Where Civilizations Collide (Madison and Tea- neck, NJ: Fairleigh Dickinson University Press, 2002), 51; and the story below “Nowhere in the Qur’an.” 12. Richard G. Hovannisian, “The Question of Altruism during the Armenian Genocide of 1915,” in Em- bracing the Other: Philosophical, Psychological, and Historical Perspectives on Altruism, ed. Pearl M. Oliner, Samuel P. Oliner, Lawrence Baron, Lawrence A. Blum, Dennis L. Krebs, and M. Zuzanna Smo- lenska (New York: New York UP, 1992), 282–305; Richard G. Hovannisian, “Intervention and Shades of Altruism during the Armenian Genocide,” in The Armenian Genocide: History, Politics, Ethics, ed. Richard G. Hovannisian, (New York: St. Martin’s, 1992), 173–207. 13. Hovannisian, “Question of Altruism,” 295, 297, 300.
14. Shahkeh Yaylaian Setian, Humanity in the Midst of Inhumanity (Bloomington, IN: Xlibris, 2011). 15. Key Indictment of the Extraordinary Military Tribunal, file 13, document 1, Takvîm-i Vekâyi, 27 Nisan 1335, 4–14 (Karârnâme), quoted in Vahakn N. Dadrian and Taner Akçam, Judgment at Istan- bul: The Armenian Genocide Trials (New York: Berghahn, 2011), 278. 16. Clarence D. Ussher, An American Physician in Turkey: A Narrative of Adventures in Peace and in War (Boston: Houghton Mifflin, 1917), 244. 17. Hovannisian, “Intervention and Shades of Altruism,” 180. A good example of how this worked in prac- tice is provided in a survivor memoir originally published in 1939: Elizabeth Caraman, Daughter of the Euphrates, 2nd ed. (Paramus, NJ: Armenian Missionary Association of America, 1979), 176–256. 18. It has been estimated that as few as 5–10% of Armenians were converted and absorbed into Muslim - Friday, June 03, 2016 8:14:23 PM - IP Address:5.62.159.223
households. Ara Sarafian, “The Absorption of Armenian Women and Children into Muslim House- holds as a Structural Component of the Armenian Genocide,” in In God’s Name: Genocide and Reli- gion in the Twentieth Century, ed. Omer Bartov and Phyllis Mack (New York: Berghahn, 2001), 209– 21, 211–2. 19. Kourken M. Sarkissian, “The Story of Haji Khalil,” Zoryan Institute, dialogue/The%20Story%20of%20Haji%20Khalil.pdf (accessed 14 Aug 2015). 20. Helen, interview with author, April 2000. 21. Paldiel, Saving the Jews. 22. Heghine Abajian, On a Darkling Plain (Fair Lawn, NJ: Rosekeer, 1984), 64. 23. See Wolfgang Gust, The Armenian Genocide: Evidence from the German Foreign Office Archives, 1915– 1916 (New York: Berghahn, 2014), 80–2; Taner Akçam, A Shameful Act: The Armenian Genocide and the Question of Turkish Responsibility (New York: Metropolitan, 2006), 4, 164, 166–7; Taner Akçam, The Young Turks’ Crime against Humanity: The Armenian Genocide and Ethnic Cleansing in the Otto- man Empire (Princeton: Princeton UP, 2012), 394–5; Hilmar Kaiser, “Regional Resistance to Central Government Policies: Ahmed Djemal Pasha, the Governors of Aleppo, and Armenian Deportees in the Spring and Summer of 1915,” Journal of Genocide Research 12,3 (2010): 173–218, 174, 181, 184–5, 193, 205; Raffi Bedrosyan, “The Real Turkish Heroes of 1915,” Armenian Weekly, 29 July 2013, http:// armenianweekly.com/2013/07/29/the-real-turkish-heroes-of-1915/ (accessed 11 Aug 2015); Racho Donef, “1915: Righteous Muslims during the Genocide of 1915,” 2010, 1900/20101105a.html (accessed 2 Mar 2015); Rober Koptaş, “Türkler ve Müslümanlar, bu kan cinaye- tlerden dolayı ağlıyor,” Agos Gazetesi, 30 July 2010; Feroz Ahmad, The Young Turks and the Ottoman Nationalities: Armenians, Greeks, Albanians, Jews, and Arabs, 1908–1918 (Salt Lake City, UT: U of Utah P, 2014), 81–2. 24. Foreign and Political Department of the Government of India, “Memorandum on Intellectual and Political Forces in the Ottoman Empire,” January 1917, 22, IOR/L/PS/18/B267, India Office Records and Private Papers, British Library, London, UK,! ps!18!b267_f001r (accessed 14 Aug 2015). In 1912, during a period of violence against the Armenians, the sheikh ul-Islam had ordered the senior clerics in the provinces of Erzurum, Van, Bitlis, and Mamuret-ül-Aziz to use their influence to prevent crimes against the Armenians, as such actions were “contrary to the precepts of holy law.” Dikran Mesrob Kaligian, Armenian Organization and Ideology under Ottoman Rule, 1908–1914 (New Brunswick, NJ: Transaction, 2009), 143. 25. See Barry Rubin and Wolfgang G. Schwanitz, Nazis, Islamists, and the Making of the Modern Middle East (New Haven, CT: Yale UP, 2014), 52–3; Foreign and Political Department of the Government of India, “Memorandum,” 24. For more on Hayri Bey, see Feroz Ahmad, The Young Turks: The Commit- tee of Union and Progress in Turkish Politics, 1908–1914 (London: Oxford UP, 1969), 148–9. 26. Patrick Avetis Saroyan, testimony, in The Armenian Genocide: Testimonies of the Eyewitness Survivors, ed. Verjiné Svazlian (Yerevan: Gitoutyoun, 2011), 147–8, 147. 27. Miller and Touryan Miller, Survivors, 96. 28. W. Spieker, report, 2 September 1915, 1915–09–03-DE-002, in Gust, Armenian Genocide, 351–7, 357. 29. Chairman of the Baghdad Railway in Constantinople Franz Johannes Günther, report to chargé d’af- faires of the embassy in Constantinople Konstantin Freiherr von Neurath, October 1915, 1915–11–01- DE-001, in Gust, Armenian Genocide, 431–4, 434. It is not clear if Dr. 
Martin Niepage’s report from Aleppo that “a Swiss engineer was to have been brought before a court-martial because he had distrib- uted bread in Anatolia to the starving Armenian women and children in a convoy of exiles” was about one these two individuals. Martin Niepage, The Horrors of Aleppo Seen by a German Eyewitness (Lon- don: T. Fisher Unwin, 1917), 15–6. 30. Satenik Nshan Doghramadjian, testimony, in Svazlian, Armenian Genocide, 326–9, 328. 31. Ibid.; Caraman, Daughter of the Euphrates, 219–25. 32. See, for example, Caraman, Daughter of the Euphrates, 226ff.
“Armenians Claim Roots in Diyarbakır,” Hürriyet Daily News, 23 October 2011,. hurriyetdailynews.com/default.aspx?pageid=438&n=armenians-claim-roots-in-diyarbakir-2011-10-23 (accessed 19 Oct 2014). 53. In this regard, cf. Karl Blank of the German Christian Charity Organization for the Orient, who wrote,
On 13 April, a new transport arrived from Zeytun. This time the Muslims were held back slightly because the way in which they had behaved towards the first transport had not met with the approval of many Turks. Some told me directly that it was incorrect to behave like this towards the poor people, but, they said, from our side we can do nothing about it.
[German missionary Karl Blank], report to German Christian Charity Organization for the Orient Director Friederich Schuchardt, 14 April 1915, 1915–05–27-DE-001, in Gust, Armenian Genocide, 191–3, 193. Similarly, German consul Heinrich Bergfeld reported from Trebizond, “In respect of the Turkish population, on the whole it must be said that very many Turks are not in agreement with the expulsion of women and children.” Bergfeld, message to Bethmann Hollweg, 9 July 1915, 1915–07–07- DE-002, in Gust, Armenian Genocide, 240–4, 242. 54. Samuel P. Oliner and Pearl M. Oliner, “Rescuers of Jews in Nazi Europe,” in Encyclopedia of Genocide, ed. Israel W. Charny, vol. 2 (Santa Barbara, CA: ABC-CLIO, 1999), 496–9, 496. 55. Harold M. Schulweis, foreword to Mordecai Paldiel, The Path of the Righteous: Gentile Rescuers of Jews during the Holocaust (Hoboken, NJ: KTAV, 1993), ix–xv, xii, xiii. 56. See, for example, Akçam, Shameful Act, 4–10. On Turkish denial of the Armenian Genocide, see, for example, Vahakn N. Dadrian, The Key Elements in the Turkish Denial of the Armenian Genocide: A Case Study of Distortion and Falsification (Cambridge, MA: Zoryan Institute, 1999). 57. Akçam, Young Turks’ Crime, xxiii. 58. Ron Dudai, “ ‘Rescues for Humanity’: Rescuers, Mass Atrocities, and Transitional Justice,” Human Rights Quarterly 34,1 (2012): 1–38, 6–8. 59. Taner Akçam, “The Genocide of the Armenians and the Silence of the Turks,” in Studies in Compara- tive Genocide, ed. Levon Chorbajian and George Shirinian (London: Macmillan, 1999), 125–46. 60. Taner Akçam, “Is There Any Solution Other Than a Dialogue?” in Dialogue across an International Divide: Essays Towards a Turkish-Armenian Dialogue (Cambridge, MA: Zoryan Institute, 2001), 1–30. 61. Akçam, Shameful Act, 12–3. 62. See, for example, Israel W. Charny, How Can We Commit the Unthinkable? (Boulder: Westview, 1982); Daniel Jonah Goldhagen, Hitler’s Willing Executioners: Ordinary Germans and the Holocaust (New York: Alfred A. Knopf, 1996), 375ff.; Eva Fogelman, Conscience and Courage: Rescuers of Jews during the Holocaust (New York: Anchor, 1994), xiv–xx; Ervin Staub, The Roots of Evil: The Origins of Genocide and Other Group Violence (Cambridge: Cambridge UP, 1989), 166–9, 274–83. 63. Oliner and Oliner, “Rescuers of Jews,” 499; Ervin Staub, “Preventing Genocide: Activating Bystanders, Helping Victims Heal, Helping Groups Overcome Hostility,” in Chorbajian and Shirinian, Studies in Comparative Genocide, 251–60, 258–9. 64. James Waller, Becoming Evil: How Ordinary People Commit Genocide and Mass Killing, 2nd ed. (Oxford: Oxford UP, 2007), 283. 65. “Armenian Righteous among the Nations,” Yad Vashem, tions/righteous-armenian/index.asp (accessed 14 Aug.
Lazy Vals in Scala: A Look Under the Hood
02/24/16
Scala allows the special keyword
lazy in front of
val in order to change the
val to one that is lazily initialized. While lazy initialization seems tempting at first, the concrete implementation of lazy vals in
scalac has some subtle issues. This article takes a look under the hood and explains some of the pitfalls: we see how lazy initialization is implemented as well as scenarios, where a lazy val can crash your program, inhibit parallelism or have other unexpected behavior.
Introduction
This post was originally inspired by the talk Hands-on Dotty (slides) by Dmitry Petrashko, given at Scala World 2015. Dmitry gives a wonderful talk about Dotty and explains some of the lazy val pitfalls as currently present in Scala and how their implementation in Dotty differs. This post is a discussion of lazy vals in general followed by some of the examples shown in Dmitry Petrashko’s talk, as well as some further notes and insights.
How lazy works
The main characteristic of a
lazy val is that the bound expression is not evaluated immediately, but once on the first access1. When the initial access happens, the expression is evaluated and the result bound to the identifier of the
lazy val. On subsequent access, no further evaluation occurs: instead the stored result is returned immediately.
Given the characteristic above, using the
lazy modifier seems like an innocent thing to do, when we are defining a
val, why not also add a
lazy modifier as a speculative “optimization”? In a moment we will see why this is typically not a good idea, but before we dive into this, let’s recall the semantics of a
lazy val first.
When we assign an expression to a
lazy val like this:
lazy val two: Int = 1 + 1
we expect that the expression
1 + 1 is bound to
two, but the expression is not yet evaluated. On the first (and only on the first) access of
two from somewhere else, the stored expression
1 + 1 is evaluated and the result (
2 in this case) is returned. On subsequent access of
two, no evaluation happens: the stored result of the evaluation was cached and will be returned instead.
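To make the caching behavior visible, here is a small REPL-style sketch (the side-effecting println is only added for illustration and is not part of the original example):

lazy val two: Int = { println("evaluating two"); 1 + 1 }
// nothing is printed yet: the expression has only been bound, not evaluated
two // first access: prints "evaluating two" and returns 2
two // subsequent access: returns the cached 2, prints nothing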
This property of “evaluate once” is a very strong one. Especially if we consider a multithreaded scenario: what should happen if two threads access our
lazy val at the same time? Given the property that evaluation occurs only once, we have to introduce some kind of synchronization in order to avoid multiple evaluations of our bound expression. In practice, this means the bound expression will be evaluated by one thread, while the other(s) will have to wait until the evaluation has completed, after which the waiting thread(s) will see the evaluated result.
How is this mechanism implemented in Scala? Luckily, we can have a look at SIP-20. The example class
LazyCell with a
lazy val value is defined as follows:
final class LazyCell { lazy val value: Int = 42 }
A handwritten snippet equivalent to the code the compiler generates for our
LazyCell looks like this:
final class LazyCell {
  @volatile var bitmap_0: Boolean = false                    // (1)
  var value_0: Int = _                                       // (2)
  private def value_lzycompute(): Int = {
    this.synchronized {                                      // (3)
      if (!bitmap_0) {                                       // (4)
        value_0 = 42                                         // (5)
        bitmap_0 = true
      }
    }
    value_0
  }
  def value = if (bitmap_0) value_0 else value_lzycompute()  // (6)
}
At
(3) we can see the use of a monitor
this.synchronized {...} in order to guarantee that initialization happens only once, even in a multithreaded scenario. The compiler uses a simple flag (
(1)) to track the initialization status (
(4) &
(6)) of the var
value_0 (
(2)) which holds the actual value and is mutated on first initialization (
(5)).
What we can also see in the above implementation is that a
lazy val, other than a regular
val has to pay the cost of checking the initialization state on each access (
(6)). Keep this in mind when you are tempted to (try to) use
lazy val as an “optimization”.
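As a rough illustration of that per-access check — not a rigorous benchmark; for real measurements a harness such as JMH would be more appropriate, and all names below are made up for this sketch — one could time repeated reads of a plain val against a lazy val:

object AccessCost {
  val plain: Int = 42
  lazy val lzy: Int = 42

  // crude wall-clock timing; JIT warm-up can distort the numbers
  def time(label: String)(body: => Unit): Unit = {
    val start = System.nanoTime()
    body
    println(s"$label took ${(System.nanoTime() - start) / 1000000} ms")
  }

  def run(): Unit = {
    var sum = 0L
    time("val")      { var i = 0; while (i < 100000000) { sum += plain; i += 1 } }
    time("lazy val") { var i = 0; while (i < 100000000) { sum += lzy; i += 1 } }
    println(sum) // keep the loops from being optimized away entirely
  }
}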
Now that we have a better understanding of the underlying mechanisms for the
lazy modifier, let’s look at some scenarios where things get interesting.
Scenario 1: Concurrent initialization of multiple independent vals is sequential
Remember the use of
this.synchronized { } above? This means we lock the whole instance during initialization. Furthermore, multiple
lazy vals defined inside e.g., an
object, but accessed concurrently from multiple threads will still all get initialized sequentially. The code snippet below demonstrates this, defining two
lazy val (
(1) &
(2)) inside the
ValStore object. In the object
Scenario1 we request both of them inside a
Future (
(3)), but at runtime each of the
lazy val is calculated separately. This means we have to wait for the initialization of
ValStore.fortyFive until we can continue with
ValStore.fortySix.
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent._
import scala.concurrent.duration._

def fib(n: Int): Int = n match {
  case x if x < 0 => throw new IllegalArgumentException(
    "Only positive numbers allowed")
  case 0 | 1 => 1
  case _ => fib(n-2) + fib(n-1)
}

object ValStore {
  lazy val fortyFive = fib(45) // (1)
  lazy val fortySix = fib(46)  // (2)
}

object Scenario1 {
  def run = {
    val result = Future.sequence(Seq( // (3)
      Future {
        ValStore.fortyFive
        println("done (45)")
      },
      Future {
        ValStore.fortySix
        println("done (46)")
      }
    ))
    Await.result(result, 1.minute)
  }
}
You can test this by copying the above snippet and
:paste-ing it into a Scala REPL and starting it with
Scenario1.run. You will then be able to see how it first evaluates
ValStore.fortyFive, then prints the text and afterwards does the same for the second
lazy val. Instead of an
object you can also imagine this case for a normal
class, having multiple
lazy vals defined.
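If the two values really are independent, one possible workaround is to give each lazy val its own enclosing object, so each initialization uses a different monitor and the two futures can proceed in parallel. The following is only a sketch: it reuses the fib helper and imports from the snippet above, and the object names are made up.

object FortyFive { lazy val value = fib(45) } // own object => own lock
object FortySix  { lazy val value = fib(46) } // own object => own lock

object Scenario1Parallel {
  def run = {
    val result = Future.sequence(Seq(
      Future { FortyFive.value; println("done (45)") },
      Future { FortySix.value; println("done (46)") }
    ))
    Await.result(result, 1.minute)
  }
}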
Scenario 2: Potential deadlock when accessing lazy vals
In the previous scenario, we only had to suffer from decreased performance, when multiple
lazy vals inside an instance are accessed from multiple threads at the same time. This may be surprising, but it is not a deal breaker. The following scenario is more severe:
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent._
import scala.concurrent.duration._

object A {
  lazy val base = 42
  lazy val start = B.step
}

object B {
  lazy val step = A.base
}

object Scenario2 {
  def run = {
    val result = Future.sequence(Seq(
      Future { A.start }, // (1)
      Future { B.step }   // (2)
    ))
    Await.result(result, 1.minute)
  }
}
Here we define three
lazy val in two objects
A and
B. Here is a picture of the resulting dependencies:
The
A.start val depends on
B.step which in turn depends again on
A.base. Although there is no cyclic relation here, running this code can lead to a deadlock:
scala> :paste
...
scala> Scenario2.run
java.util.concurrent.TimeoutException: Futures timed out after [1 minute]
... 35 elided
(if it succeeds by chance on your first try, give it another chance). So what is happening here? The deadlock occurs, because the two
Future in
(1) and
(2), when trying to access the
lazy val will both lock the respective object
A /
B, thereby denying any other thread access. In order to achieve progress however, the thread accessing
A also needs
B.step and the thread accessing
B needs to access
A.base. This is a deadlock situation. While this is a fairly simple scenario, imagine a more complex one, where more objects/classes are involved and you can see why overusing
lazy val can get you in trouble. As in the previous scenario the same can occur inside
class, although it is a little harder to construct the situation. In general this situation is unlikely to happen, because of the exact timing required to trigger the deadlock, but it is equally hard to reproduce in case you encounter it.
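One way to defuse this particular shape of deadlock is to force the shared dependency once, on a single thread, before the concurrent accesses start; the future computing B.step then never needs to enter A's monitor while holding B's. This is only a sketch under that assumption, and Scenario2Fixed is a hypothetical name.

object Scenario2Fixed {
  def run = {
    A.base // initialize the shared value up front, before any future runs
    val result = Future.sequence(Seq(
      Future { A.start },
      Future { B.step }
    ))
    Await.result(result, 1.minute)
  }
}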
Scenario 3: Deadlock in combination with synchronization
Playing with the fact that
lazy val initialization uses a monitor (
synchronized), there is another scenario, where we can get in serious trouble.
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent._
import scala.concurrent.duration._

trait Compute {
  def compute: Future[Int] = Future(this.synchronized { 21 + 21 }) // (1)
}

object Scenario3 extends Compute {
  def run: Unit = {
    lazy val someVal: Int = Await.result(compute, 1.minute) // (2)
    println(someVal)
  }
}
Again, you can test this for yourself by copying it and doing a
:paste inside a Scala REPL:
scala> :paste
...
scala> Scenario3.run
java.util.concurrent.TimeoutException: Futures timed out after [1 minute]
  at Scenario3$.someVal$lzycompute$1(<console>:62)
  at Scenario3$.someVal$1(<console>:62)
  at Scenario3$.run(<console>:63)
  ... 33 elided
The
Compute trait on its own is harmless, but note that it uses
synchronized in
(1). In combination with the
synchronized initialization of the
lazy val inside
Scenario3 however, we have a deadlock situation. When we try to access the
someVal (
(2)) for the
println call, the triggered evaluation of the
lazy val will grab the lock on
Scenario3, therefore preventing the
compute to also get access: a deadlock situation.
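In this shape of the problem, the simplest escape is to not make the blocking value lazy at all, so no monitor is held while waiting for the future. This is only a sketch, and Scenario3Fixed is a hypothetical name.

object Scenario3Fixed extends Compute {
  def run: Unit = {
    // a plain val: Await does not hold Scenario3Fixed's monitor,
    // so the Future body can enter this.synchronized and complete
    val someVal: Int = Await.result(compute, 1.minute)
    println(someVal)
  }
}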
Conclusion
Before we sum this post up, please note that in the examples above we use
Future and
synchronized, but we can easily get into the same situation by using other concurrency and synchronization primitives as well.
In summary, we had a look under the hood of Scala’s implementation of lazy vals and discussed some surprising cases:
- sequential initialization due to monitor on instance
- deadlock on concurrent access of lazy vals without a cycle
- deadlock in combination with other synchronization constructs
As you can see,
lazy vals should not be used as a speculative optimization without further thought about the implications. Furthermore you might want to replace some of your
lazy val with a regular
val or
def depending on your initialization needs after becoming aware of the issues above.
Luckily, the Dotty platform has an alternative implementation for
lazy val initialization (by Dmitry Petrashko) which does not suffer from the unexpected pitfalls discussed in this post. For more information on Dotty you can watch Dmitry’s talk linked in the “references” section and head over to their github page.
All examples have been tested with Scala 2.11.7.
References
- Hands-on Dotty (slides) by Dmitry Petrashko
- SIP-20 – Improved Lazy Vals Initialization
- Dotty – The Dotty research platform
/* Declarations for getopt.
   Copyright (C) 1989, 1990, 1991, 1992, USA. */

/* XXX THIS HAS BEEN MODIFIED FOR INCORPORATION INTO BASH XXX */

#ifndef _SH_GETOPT_H
#define _SH_GETOPT_H 1

#include "stdc.h"

/* For communication from `getopt' to the caller.
   When `getopt' finds an option that takes an argument,
   the argument value is returned here.
   Also, when `ordering' is RETURN_IN_ORDER,
   each non-option ARGV-element is returned here. */
extern char *sh_optarg;

/* `sh_optind' communicates from one call to the next
   how much of ARGV has been scanned so far. */
extern int sh_optind;

/* Callers store zero here to inhibit the error message
   `getopt' prints for unrecognized options. */
extern int sh_opterr;

/* Set to an option character which was unrecognized. */
extern int sh_optopt;

/* Set to 1 when an unrecognized option is encountered. */
extern int sh_badopt;

extern int sh_getopt __P((int, char *const *, const char *));
extern void sh_getopt_restore_state __P((char **));

#endif /* _SH_GETOPT_H */
Handling Multiple Submits
Listing 2 was certainly an improvement, but we've still got a ways to go. A number of things could still go wrong. What if the user pushes the back button and starts over? What if the browser has JavaScript disabled or cannot handle the processing? We can still solve the problem, but instead of preventing multiple submits, we need to handle them on the back end, via the form-processing servlet.
In order to understand how to solve the multiple submit problem, we must
first understand how servlets work with respect to sessions. As
everyone knows, HTTP is inherently a stateless protocol. In order to handle
state, we need some way for the browser to associate the current request with a
larger block of requests. The servlet session provides us a solution to this
problem. The HttpServlet methods doGet() and doPost() take two specific parameters: an HttpServletRequest and an HttpServletResponse. The servlet request parameter allows us to access what is commonly referred to as the servlet session. Servlet sessions have mechanisms for accessing and storing state information.
What exactly is a servlet session? A servlet session is, among other things, the mechanism by which HttpServlets keep per-client state across requests; it is exposed to our code through the HttpSession interface.
Before we look at how to solve our problem with a server-side solution, we need to understand the servlet session lifecycle. As with EJBs and other server-side entities, servlet sessions go through a defined set of states during their lifetime. The figure below shows the lifecycle of a servlet session. Sessions move through three distinct states: does not exist, new, and not new or in-use.
Figure 3: Servlet session lifecycle
Now that we understand the lifecycle of a session, how do we go about
obtaining a session and using it to our advantage? The
HttpServletRequest interface provides two methods for working with
sessions:
public HttpSession getSession() always returns either a new
session or an existing session.
getSession() returns an existing session if a valid session ID was somehow provided (perhaps via a cookie). It returns a new session in several cases: the client's initial request (no session ID provided), a timed-out session (session ID provided), an invalid session (session ID provided), or an explicitly invalidated session (session ID provided).
public HttpSession getSession(boolean) may return a new
session, an existing session, or null.
getSession(true) returns an existing session if possible.
Otherwise it creates a new session. getSession(false) returns an
existing session if possible and otherwise returns null.
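To make the distinction concrete, here is a small sketch of how a form-processing servlet might use the two calls. This fragment is ours, not part of the article's listings; the class name and the session attribute are purely illustrative.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class SessionCheckServlet extends HttpServlet {
    public void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        // getSession(false) returns null when the client has no valid session yet
        HttpSession session = req.getSession(false);
        if (session == null) {
            // getSession(true) creates a brand new session for this client
            session = req.getSession(true);
            session.setAttribute("formSubmitted", Boolean.FALSE);
        }
        // ... proceed, knowing a session now exists ...
    }
}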
We have still only solved half of the problem at hand. We'd like to be able
to skip over the "session new" state and move to the "session in use" state
automatically. We can achieve this by redirecting the browser to the handling
servlet automatically. Listing 3 combines servlet session logic with the
ability to redirect clients with valid sessions to the handling servlet.
Listing 3: RedirectServlet.java
01: package multiplesubmits;
02:
03: import java.io.*;
04: import java.util.Date;
05: import javax.servlet.*;
06: import javax.servlet.http.*;
07:
08: public class RedirectServlet extends HttpServlet{
09: public void doGet (HttpServletRequest req, HttpServletResponse res)
10: throws ServletException, IOException {
11: HttpSession session = req.getSession(false);
12: System.out.println("");
13: System.out.println("-------------------------------------");
14: System.out.println("SessionServlet::doGet");
15: System.out.println("Session requested ID in Request:" +
16: req.getRequestedSessionId());
17: if ( null == req.getRequestedSessionId() ) {
18: System.out.println("No session ID, first call,
creating new session and forwarding");
19: session = req.getSession(true);
20: System.out.println("Generated session ID in Request: " +
21: session.getId());
22: String encodedURL = res.encodeURL("/RedirectServlet");
23: System.out.println("res.encodeURL(\"/RedirectServlet\");="
+encodedURL);
24: res.sendRedirect(encodedURL);
25: //
26: // RequestDispatcher rd = getServletContext().getRequestDispatcher(encodedURL);
27: // rd.forward(req,res);
28: //
29: return;
30: }
31: else {
32: System.out.println("Session id = " +
req.getRequestedSessionId() );
33: System.out.println("No redirect required");
34: }
35:
36: HandleRequest(req,res);
37: System.out.println("SessionServlet::doGet returning");
38: System.out.println("------------------------------------");
39: return;
40: }
41:
42: void HandleRequest(HttpServletRequest req, HttpServletResponse res)
43: throws IOException {
44: System.out.println("SessionServlet::HandleRequest called");
45: res.setContentType("text/html");
46: PrintWriter out = res.getWriter();
47: Date date = new Date();
48: out.println("<html>");
49: out.println("<head><title>Ticket Confirmation</title></head>");
50: out.println("<body>");
51: out.println("<h1>The Current Date And Time Is:</h1><br>");
52: out.println("<h3>" + date.toString() + "</h3>");
53: out.println("</body>");
54: out.println("</html>");
55: System.out.println("SessionServlet::HandleRequest returning");
56: return;
57: }
58: }
Just how does this solve our problem? Examining the code closely shows that on line 11 we try to obtain a session handle. On line 17 we determine whether an active session already exists by checking the requested session ID for null; checking for a valid session ID would work equally well. Lines 18-29 are executed if no session exists. We handle the multiple submit problem by first creating a session as shown on line 19, using URL encoding to add the new session ID as shown on line 22, and then redirecting our servlet to the newly encoded URL, as shown on line 24.
Readers unfamiliar with URL rewriting are directed to lines 15 and 23. The servlet response object has the ability to rewrite a URL. This process inserts a session ID into a URL. The underlying application server can then use the encoded URL to provide an existing session automatically to a servlet or JSP. Depending on the application server, you may need to enable URL rewriting for the above example to work!
In this article, we discussed several solutions to the multiple submit
problem. Each solution has its positive and negative aspects. When solving
problems, the various pros and cons of a solution must be clearly understood to
assess the value of each tradeoff. Our final example had the benefit of solving
the problem at hand at the cost of an extra client round trip. The JavaScript
solution was the most elegant, but required client-side support to work. As with any problem, there is often a world of solutions, each with its own trade-offs. By understanding the trade-offs of a given solution, we can make the most informed choice for the problem at hand.
Al Saganich
is BEA Systems' senior developer and engineer for enterprise Java technologies, focused on Java integration and application with XML and Web services.
|
http://www.onlamp.com/pub/a/onjava/2003/04/02/multiple_submits.html?page=2
|
CC-MAIN-2016-26
|
refinedweb
| 1,082
| 51.44
|
12 February 2019 · Web development, Javascript, ReactJS
Create your new app with create-react-app first:
create-react-app myapp
Install bulma, bulmaswatch, and node-sass:
cd myapp
yarn add bulma bulmaswatch node-sass
Change src/index.js to import index.scss instead:
-import "./index.css";
+import "./index.scss";
Swap out the index.css file for an index.scss file:
git rm src/index.css
touch src/index.scss
git add src/index.scss
Edit src/index.scss to look like this:
@import "node_modules/bulmaswatch/darkly/bulmaswatch";
This assumes your favorite theme was the darkly one. You can obviously change that later.
Then start the dev server:
BROWSER=none yarn start
That's it! However, the create-react-app default look doesn't expose any of the cool stuff that Bulma can style. So let's rewrite our src/App.js by copying the minimal starter HTML from the Bulma documentation. Make the src/App.js component look something like this:
import React, { Component } from "react";

class App extends Component {
  render() {
    return (
      <section className="section">
        <div className="container">
          <h1 className="title">Hello World</h1>
          <p className="subtitle">
            My first website with <strong>Bulma</strong>!
          </p>
        </div>
      </section>
    );
  }
}

export default App;
Now it'll look like this:
Yes, it's not much but it's a great start. Over to you to take this to infinity and beyond!
In the rushed instructions above, the choice of theme was darkly. What you need to do next is go to the Bulmaswatch theme gallery, click around, and eventually pick the one you like. Suppose you like spacelab; then you just change that @import ... line to be:
@import "node_modules/bulmaswatch/spacelab/bulmaswatch";
|
https://www.peterbe.com/plog/create-react-app-scss-and-bulmaswatch
|
CC-MAIN-2019-18
|
refinedweb
| 246
| 52.36
|
Decision trees for classification.
In this third post on supervised machine learning classifiers, I'll be talking about one of the oldest and most widely used techniques - decision trees. Decision trees work well with noisy or missing data and are incredibly fast at runtime. They're additionally nice because you can visualize the decision tree that your algorithm created, so it's not as much of a black box as some of the other algorithms. However, because you're making a series of splits based on a single attribute at a time, the decision boundary is only capable of making splits parallel to your feature axes.
Decision trees are pretty easy to grasp intuitively, let's look at an example.
Note: decision trees are used by starting at the top and going down, level by level, according to the defined logic. This is known as recursive binary splitting.
Everybody loves a good meta reference; and yes, I'm sipping tea as I write this post. Now let's look at a two-dimensional feature set and see how to construct a decision tree from data.
Any ideas on how we could make a decision tree to classify a new data point as "x" or "o"? Here's what I did.
Run through a few scenarios and see if you agree. Look good? Cool. So now we have a decision tree for this data set; the only problem is that I created it. It'd be much better if we could get a machine to do this for us. But how?
If you analyze what we're doing from an abstract perspective, we're taking a subset of the data, and deciding the best manner to split the subset further. Our initial subset was the entire data set, and we split it according to the rule $x_1 < 3.5$. Then, for each subset, we performed additional splitting until we were able to correctly classify every data point.
Information gain and entropy
How do we judge the best manner to split the data? Simply, we want to split the data in a manner which provides us the most amount of information - in other words, maximizing information gain. Going back to the previous example, we could have performed our first split at $x_1 < 10$. However, this would essentially be a useless split and provide zero information gain. In order to mathematically quantify information gain, we introduce the concept of entropy.
[ -\sum\limits_{i} p_i \log_2 \left( p_i \right) ]
Entropy may be calculated as a summation over all classes where $p_i$ is the fraction of data points within class $i$. This essentially represents the impurity, or noisiness, of a given subset of data. A homogenous dataset will have zero entropy, while a perfectly random dataset will yield a maximum entropy of 1. With this knowledge, we may simply equate the information gain as a reduction in noise.
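To make that reduction explicit, the information gain of a candidate split is usually written as the parent's entropy minus the weighted entropy of the children, where each weight $w_j$ is the fraction of samples sent to child $j$ (the notation here is ours, but it matches the description that follows):

[ \text{Gain} = E(\text{parent}) - \sum\limits_{j} w_j \, E(\text{child}_j) ]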
Here, we're comparing the noisiness of the data before the split (parent) and after the split (children). If the entropy decreases due to a split in the dataset, it will yield an information gain. A decision tree classifier will make a split according to the feature which yields the highest information gain. This is a recursive process; stopping criteria for this process include continuing to split the data until (1) the tree is capable of correctly classifying every data point, (2) the information gain from further splitting drops below a given threshold, (3) a node has fewer samples than some specified threshold, (4) the tree has reached a maximum depth, or (5) another parameter similarly calls for the end of splitting. To learn more about this process, read about the ID3 algorithm.
Techniques to avoid overfitting
Often you may find that you've overfitted your model to the data, which is often detrimental to the model's performance when you introduce new data. To prevent this overfitting, one thing you could do is define some parameter which ends the recursive splitting process. As I mentioned earlier, this may be a parameter such as tree depth or when the split will result in a node less than some specified number of samples. However, it may be more intuitive to simply perform a significance test when considering a new split in the data, and if the split does not supply statistically significant information (obtained via a significance test), then you will not perform any further splits on a given node.
Another more common technique is known as pruning. Here, we grow a decision tree fully on the training dataset and then go back and evaluate its performance on a new validation dataset. For each node, we evaluate whether or not its split was useful or detrimental to the performance on the validation dataset. We then remove those nodes which caused the greatest detriment to the decision tree's performance.
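As a rough sketch of this grow-then-prune idea (my addition, not code from the original post): recent versions of scikit-learn expose minimal cost-complexity pruning, a related pruning technique, through the ccp_alpha parameter. We can try the candidate pruning strengths and keep the one that scores best on a held-out validation set:

from sklearn import datasets
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

iris = datasets.load_iris()
X_train, X_val, y_train, y_val = train_test_split(
    iris.data, iris.target, test_size=0.4, random_state=0)

# Candidate pruning strengths (alphas) computed from the fully grown tree
path = DecisionTreeClassifier(criterion='entropy').cost_complexity_pruning_path(X_train, y_train)

best_alpha, best_score = 0.0, 0.0
for alpha in path.ccp_alphas:
    # Larger alpha prunes more aggressively
    clf = DecisionTreeClassifier(criterion='entropy', ccp_alpha=alpha)
    clf.fit(X_train, y_train)
    score = accuracy_score(y_val, clf.predict(X_val))
    if score >= best_score:
        best_alpha, best_score = alpha, score

print(best_alpha, best_score)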
Degenerate splits
Evaluating a split using information gain can pose a problem at times; specifically, it has a tendency to favor features which have a high number of possible values. Say I have a data set that determines whether or not I choose to go sailing for the month of June based on features such as temperature, wind speed, cloudiness, and day of the month. If I made a decision tree with 30 child nodes (Day 1, Day 2, ..., Day 30) I could easily build a tree which accurately partitions my data. However, this is a useless feature to split based on because the second I enter the month of July (outside of my training data set), my decision tree has no idea whether or not I'm likely to go sailing. One way to circumvent this is to assign a cost function (in this case, the gain ratio) to prevent our algorithm from choosing attributes which provide a large number of subsets.
Example code
Here's an example implementation of a Decision Tree Classifier for classifying the flower species dataset we've studied previously.
import pandas as pd
from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

iris = datasets.load_iris()
# ['target_names', 'data', 'target', 'DESCR', 'feature_names']

features = pd.DataFrame(iris.data)
labels = pd.DataFrame(iris.target)

### create classifier
clf = DecisionTreeClassifier(criterion='entropy')

### split data into training and testing datasets
features_train, features_test, labels_train, labels_test = train_test_split(
    features, labels, test_size=0.4, random_state=0)

### fit the classifier on the training features and labels
clf.fit(features_train, labels_train)

### use the trained classifier to predict labels for the test features
pred = clf.predict(features_test)

### calculate and return the accuracy on the test data
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(labels_test, pred)

### visualize the decision tree
### you'll need to have graphviz and pydot installed on your computer
from IPython.display import Image
from sklearn.externals.six import StringIO
import pydot
from sklearn import tree

dot_data = StringIO()
tree.export_graphviz(clf, out_file=dot_data)
graph = pydot.graph_from_dot_data(dot_data.getvalue())
Image(graph[0].create_png())
Without any parameter tuning we see an accuracy of 94.9%, not too bad! The decision tree classifier in sklearn has an exhaustive set of parameters which allow for maximum control over your classifier. These parameters include: criterion for evaluating a split (this blog post talked about using entropy to calculate information gain, however, you can also use something known as Gini impurity), maximum tree depth, minimum number of samples required at a leaf node, and many more.
|
https://www.jeremyjordan.me/decision-trees-for-classification/
|
CC-MAIN-2018-22
|
refinedweb
| 1,243
| 53
|
This page describes the standards used for Apache ServiceMix code (java, xml, whatever). Code is read by a human being more often than it is written by a human being, make the code a pleasure to read. If you're using Eclipse, configuration files for your IDE can be found at
Let's follow Sun's coding standard rules, which are pretty common in Java.
Indentation:
* 4 characters indentation
* No tabs please!
Correct brace style:
public class Foo {
public void foo(boolean a, int x, int y, int z) {
do {
try {
if (x > 0) {
int someVariable = a ? x : y;
} else if (x < 0) {
int someVariable = (y + z);
someVariable = x = x + y;
} else {
for (int i = 0; i < 5; i++) {
doSomething(i);
}
}
switch (a) {
case 0:
doCase0();
break;
default:
doDefault();
}
} catch (Exception e) {
processException(e.getMessage(), x + y, z, a);
} finally {
processFinally();
}
} while (true);
if (2 < 3) {
return;
}
if (3 < 4) {
return;
}
do {
x++;
} while (x < 10000);
while (x < 50000) {
x++;
}
for (int i = 0; i < 5; i++) {
System.out.println(i);
}
}
private class InnerClass implements I1, I2 {
public void bar() throws E1, E2 {
}
}
}
* Use 4 characters. This is to allow IDEs such as Eclipse to use a unified formatting convention.
* No tabs please! For example:
public interface MyInterface {
public static final int MY_INTEGER = 0;
public abstract void doSomething();
}

Do not throw java.lang.Exception or RuntimeException. Use either IllegalArgumentException or NullArgumentException (which is a subclass of IllegalArgumentException anyway). If there isn't a suitable subclass available for representing an exception, create your own.
Imports:
* Should be fully qualified, e.g. import java.util.Vector and not java.util.*
* Should be sorted alphabetically, with java, then javax packages listed first, and then other packages sorted by package name.
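For instance, an import block following these rules might look like this (the specific classes are chosen purely for illustration):

import java.util.List;
import java.util.Vector;

import javax.jbi.messaging.MessageExchange;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;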
* Eclipse users can
  * use Source -> Organise Imports to organize imports
  * use Source -> Format to format code (please use default Eclipse formatting conventions, which are as above)
* IntelliJ users can
  * use Tools -> Organise Imports to organize imports
  * use Tools -> Reformat code to format code (uses the code style setting in IDE options)
The Eclipse formatter settings are available here.
Javadoc:
* @version should be: @version $Revision$ $Date$
* @author should not be used in source code at all.
Unit tests:
* Use the naming scheme *Test.java for unit tests.
* Do not define public static Test suite() or constructor methods; the build system will automatically do the right thing without them.
Logging:
* Log as much as necessary for someone to figure out what broke :-)
* Use org.apache.commons.logging.Log rather than raw Log4j
* Do not log throwables that you throw - leave it to the caller
* Use flags to avoid string concatenation for debug and trace
* Cache flags (especially for trace) to avoid excessive isTraceEnabled() calls
* Use fatal level for things that mean this instance is compromised
private static final Log log = LogFactory.getLog(MyClass.class);
public void doSomeStuff(Stuff stuff) throws StuffException {
boolean logTrace = log.isTraceEnabled();
try {
if (logTrace) {
log.trace("About to do stuff " + stuff);
}
stuff.doSomething();
if (logTrace) {
log.trace("Did some stuff ");
}
} catch (BadException e) {
// don't log - leave it to caller
throw new StuffException("Something bad happened", e);
} catch (IgnorableException e) {
// didn't cache this as we don't expect to come here a lot
if (log.isDebugEnabled()) {
log.debug("Ignoring problem doing stuff "+stuff, e);
}
}
}
|
http://servicemix.apache.org/SM/developers/coding-standards.html
|
CC-MAIN-2016-30
|
refinedweb
| 529
| 53.61
|
The Startup CTO’s Guide to Ops (3 of 3): A Minimal Production and Deployment Setup
This is part 3 of a 3-part series on operations setups for early-stage startups. Previously, part 1 discussed guiding principles and requirements, and part 2 toured 3rd party services and open-source tools.
In this last post of our series, we’ll take a deep dive into a case study of production hosting and deployment.
Note that your goals and tools are undoubtedly different from ours, so this should not be read as a prescriptive "here's what you should do…" guide. Our approach is very much at the scrappy, cheap, and DIY end of the spectrum. That being said, we will do the following:
- Keep costs as low as possible
- Have a fast website that handles our target load
- Make deployments painless
- Ensure internal and external monitoring
- Track business metrics
Outline
Here’s what I’ll be covering in this post:
Production setup
Deployment and versioning
Conclusion: Focus on what matters
Production setup
I’ll walk through our production setup first, then I’ll discuss our deployment and versioning process.
Our application stack
- Language: Python 2.7
- Web framework: Pyramid (WSGI)
- Javascript: our site isn’t front-end heavy, but we use a fair bit of JQuery, JQuery datatables, and Chart.js
- Web server: Waitress (Pyramid default, works fine)
- OS: Ubuntu Linux 16.04
- Database: PostgreSQL 9.5 (extensions: PostGIS, foreign data wrappers for Sqlite and CSV)
- Load balancer/web server: Nginx
SSD
Unless you have so much data it would be prohibitively expensive, use SSD storage to speed up your database and any host needing significant disk IO.
Codero hosting
In the previous post I explained why it would be reasonable for an early-stage website startup to run on a single dedicated leased server.
We lease one Quad Xeon (8 core), 12Gb RAM, SSD server for about $130/mo. We picked Codero as our hosting provider because they had good reviews and prices. They also offer cloud VMs in addition to dedicated hosts, which gives us more scaling options. We’ve been happy so far!
Nginx
We run staging and production versions of our site. All traffic is served out of HTTPS. Nginx will handle SSL and proxy requests to our web server processes, which run locally. For now, we run two processes each for prod and staging; and each process has its own thread pool.
The Nginx config for prod-ssl looks like:
upstream myservice_prod {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
}

server {
    server_name mydomain.com;
    listen 443 ssl;

    ssl on;
    ssl_certificate /etc/nginx/ssl/bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/star_mydomain_com.key;

    location / {
        proxy_pass http://myservice_prod;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
The proxy headers aren’t required, but these values forward headers to our Python Pyramid server so we can log where requests come from.
Separately, an Nginx rule redirects HTTP to HTTPS:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}
Where things are located, and users
Production is deployed to
/var/myservice-prod, and staging to
/var/myservice-staging. We create a Linux user (e.g. “
myservice”) to own these directories, and also to run the website processes.
Environments maintain isolated Python package dependencies. When we do our initial setup, we create a Python
virtualenv in each environment which lives in a
/venv subdirectory. Among other things, the virtual environment will include a
/venv/bin directory which includes an environment-specific
/venv/bin/python and
/venv/bin/pserve (for running the server).
Secrets are managed outside of the deployment process or source control. I manually create a small file
/var/myservice-prod/venv/config/prod-secrets.ini, with permissions set so only the “
myservice” user can read this file (chmod 600). This secrets file looks like:
[app:main]
use = egg:myservice
sqlalchemy.url = postgresql+psycopg2://user:pass@localhost/db_name
stripe.api_key = sk_live_stuff
stripe.public_key = pk_live_stuff
...and so on...
The application’s configuration files (
staging.ini and
production.ini) are checked into source control, and are bundled into the deployment package. These configs inherit from the local
secrets.ini file as:
[app:main]
use = config:secrets-prod.ini
...
Web server
We run our web server using the
/venv/bin/pserve Pyramid command. This starts a WSGI server using a PasteDeploy
.ini configuration file. The web server options are configured as:
[server:main]
use = egg:waitress#main
host = %(
port = %(
url_scheme = https
threads = 8
The %(...) placeholder lines allow for variable substitution from command-line arguments to pserve. The url_scheme = https line tells Pyramid that URLs should be written with the https scheme (because our upstream proxy is managing SSL for us).
Putting these pieces together, we could fire up a web server bound to localhost on port 8000 by running:
sudo -u myservice /var/myservice-prod/venv/bin/pserve /var/myservice-prod/venv/config/production.ini
Systemd
Of course, we won’t actually be starting our server from the command line because we need a keep-alive mechanism, a way to manage logging (and logfile rotation), etc. We’ll use
systemd to do all those things.
We create the systemd unit
/etc/systemd/system/myserviceprod.service
[Unit]
Description=Our Amazing Service, Production

[Service]
ExecStart=/var/myservice-prod/venv/bin/pserve /var/myservice-prod/venv/configs/production.ini
WorkingDirectory=/var/myservice-prod
User=myservice
Group=myservice
Restart=always

[Install]
WantedBy=multi-user.target
Then we tell systemd to pick up this new file as:
$ sudo systemctl daemon-reload
Now we can start and stop our service as:
$ sudo systemctl start myserviceprod
$ sudo systemctl stop myserviceprod
(The astute reader will notice that this example only creates a server on one port; a fancier setup with multiple ports using a systemd template is recommended. A rough sketch follows, though tailoring it is left as an exercise for the reader.)
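For the curious, one possible shape of such a template unit is sketched below. Note the assumptions: the systemd instance name %i is used as the port, and production.ini is presumed to accept an http_port variable from pserve; adapt both to your own configuration.

# /etc/systemd/system/myserviceprod@.service  (sketch)
[Unit]
Description=Our Amazing Service, Production (port %i)

[Service]
# %i is the instance name; here we pass it to pserve as the port, assuming
# the .ini exposes a %(http_port)s placeholder
ExecStart=/var/myservice-prod/venv/bin/pserve /var/myservice-prod/venv/configs/production.ini http_port=%i
WorkingDirectory=/var/myservice-prod
User=myservice
Group=myservice
Restart=always

[Install]
WantedBy=multi-user.target

Instances would then be started as, for example, sudo systemctl start myserviceprod@8000 myserviceprod@8001.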
Logging
Under this systemd setup, our logs will be managed by journald. As we discussed earlier, the Python app ships a copy of log entries to GrayLog.
Deployment and Versioning
Deployment systems can be complicated. You don’t need to boil the ocean and get a perfect system in place on day 1, but you should invest the time to have something. We started with a simple shell script, but as we grow we’ll migrate to better tools.
Package and Version Requirements
I expect deployments to be integrated with a packaging and source control system such that:
- Packages have dependencies: a package should specify dependencies which will be automatically installed as part of deployment.
- Configurations are treated like code: config files should be managed by the deployment system, either by bundling them as part of an application package (the approach we’ll be taking) or as their own deployable unit.
- Deployments are versioned: each deployable release candidate is marked with a version like “1.0.32”. This version is visible to the application (it “knows” that it is 1.0.32), and also the version is used as a git tag so we have a clean historical record.
- Enable pre-package hooks: sometimes your build has preparation tasks like minifying and combining css and js.
Application version
Maybe I’m a Python n00b, but I couldn’t find a great way to make Python “aware” of the latest git tag. I settled on the approach of modifying a file
version.py in tandem with setting a git tag.
version.py is a single line like:
__version__ = '1.0.32'
In our main application __init__.py main() method, we pluck the version from this file as:
from version import __version__
# ...

def main(global_config, **settings):
    # ...
    config.registry['version'] = __version__
    # ...
Downstream in our application views and templates, the version is then available as
request.registry['version']. We use the application version for:
- The application reports what version it is running, which makes it easy to check what-is-running-where.
- Many cache keys include the version, so on deployment we bust the cache.
- We append "?v=<version>" to static web asset URLs. This forces clients to pick up the latest version after a deployment (a small template sketch follows below).
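As a small illustration of that cache-busting trick (my sketch, assuming a Jinja2 template; pyramid_jinja2 shows up later in our setup.py requirements), a static asset reference might look like:

<link rel="stylesheet" href="/static/app.css?v={{ request.registry['version'] }}">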
Bump version
Before building a package, we run a script locally to bump the application version.
- Get the current git tag, increment it, and set a new tag
- Update our Python
version.pyfile with the new version
- Push these changes to origin
I’ll skip the full script here (it’s not pretty and is rather specific to our setup), but here are the two useful git commands it uses:
- Get the current tag:
git describe --tags --abbrev=0
- Get the toplevel directory for your git repo:
git rev-parse --show-toplevel
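Putting those together, a stripped-down sketch of such a bump script might look like the following. This is a reconstruction, not the actual script; the 1.0.N tag format and the myservice/version.py path are assumptions:

#!/usr/bin/env bash
set -e
cd "$(git rev-parse --show-toplevel)"

# Current tag, e.g. 1.0.32; bump the last component
current=$(git describe --tags --abbrev=0)
next="${current%.*}.$(( ${current##*.} + 1 ))"

# Keep version.py in sync with the new tag
echo "__version__ = '${next}'" > myservice/version.py

git commit -am "Bump version to ${next}"
git tag "${next}"
git push origin --follow-tags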
Build package
We deploy our application as a Python package.
pip does all the heavy work of bundling and installing the application, and managing dependencies.
We build our application as a source distribution (sdist), which produces a deployable file like
dist/myservice-1.0.32.tar.gz.
# Pre-processing: minify our JS and CSS
./minify.sh

# Create source distribution
python setup.py sdist
The file
setup.py manages package configuration:
- There is a list of Python package dependencies. If we add packages or change versions, Pip will install the new packages as part of deployment.
- We will use our versioning scheme.
- Our config files are included in the build.
# Load the version (which is updated by our bump version script)
from myservice.version import __version__

requires = [
# List all required python packages here. If you
# add a new package, the deployment process will
# pick it up.
'pyramid',
'pyramid_jinja2',
#...etc...
]

# Add a few things to the setup() method
setup(
#...boilerplate stuff...
# Use our versioning scheme
version=__version__,
# Copy .ini files to venv/configs
data_files=[
('configs', ['staging.ini']),
('configs', ['production.ini']),
],
)
Deployment
We wrote a simple shell script to deploy the latest package to production or staging:
ops/deploy.sh [username] [--staging or --production]
The essential parts of this script are:
# Get the path of the most recent package
distfile=$(ls -t myservice/dist | head -n 1)

# Copy the bundle to our remote host
scp $distfile $user@$host:/tmp

# Remotely:
# 1) Pip install the new bundle
# 2) Restart the service
# 3) Print the status (sanity check)
if [[ $2 == '--production' ]]
then
ssh -t $user@$host "sudo -u myservice /var/myservice-prod/venv/bin/pip install /tmp/$distfile --no-cache-dir; sudo systemctl restart myserviceprod; sleep 1; sudo systemctl status myserviceprod | cat;"
fi
Note that a slightly more advanced script would do rolling deployments, allow us to specify a past version, etc.
In Conclusion: Focus on What Matters
At the start of this series, I set out to help startup CTOs think about “what is a minimal decent starting point?” I gave my base requirements, then stepped through a minimal production setup to meet these goals. The essential features we check-off are:
- Our setup is cheap: we spend about $140/month, plus a few annual fees.
- The site runs well: in the course of a few weeks, we’ve steadily grown revenue and traffic, and easily managed the load when we appeared on Hacker News. Uptime has been 99.99%, and the production box has plenty of capacity.
- Deployments are easy: we run a command line script which reliably works. Even though it’s rather basic, our system manages package dependencies, application versioning, configuration files, and SCM tags.
- We have monitoring: there are external status checks and log file monitoring.
- There are extensive metrics: we use Google Analytics and GrayLog dashboards to get real-time insights about our product.
Our setup has known issues:
- We are not built for scale: but we could scale as needed; I’d start by moving the web servers to two or more virtual machines. The database has years of headroom; and honestly, I’d probably be much more inclined to throw more RAM and SSD at the database than move to a distributed system because having a single relational database keeps our code and system so much simpler.
- We have single points of failure: but even in a catastrophe we could build a replacement within 2 hours. At our current size the business impact would be tolerable.
- Our deployment scripts are simplistic: we don’t have clearly defined roles or host configuration management, or a package repository. As we grow, the next steps will be to set up a local Python package repository, and to move our deployment management to Ansible.
All things considered, our system has been reliable and easy to maintain. With this simple setup we’re making decent money while spending very little. And we’re not painted into any corners because we know our limitations and how we’d grow.
I’ll leave you with this: as long as your service works, nobody on the outside cares whether the internals are magic fairy dust or a rickety assemblage of duct tape and bailing wire. Your time is limited, so plan an ops setup to match the expected size and growth of your business.
|
https://cgroom.medium.com/the-startup-ctos-guide-to-ops-3-of-3-a-minimal-production-and-deployment-setup-a10dccc04f51?source=post_internal_links---------7-------------------------------
|
CC-MAIN-2022-21
|
refinedweb
| 2,155
| 55.13
|