| text | url | dump | source | word_count | flesch_reading_ease |
|---|---|---|---|---|---|
Adding Mobile Controls
Checked with version: 5
Difficulty: Intermediate
This is part 14 of 14 of the 2D Roguelike tutorial in which we add mobile touch screen controls to our player script.
Adding Mobile Controls
Intermediate - 2D Roguelike tutorial
Transcript
- 00:02 - 00:04
In this final video for our series
- 00:04 - 00:06
we're going to set our game up to use
- 00:06 - 00:08
touch screen controls.
- 00:08 - 00:10
To do this we're going to switch our build
- 00:10 - 00:14
target from the default standalone build target
- 00:14 - 00:17
to iOS, and we're going to add some code
- 00:17 - 00:21
to our Player script to handle touch screen input.
- 00:23 - 00:24
Let's go to File
- 00:25 - 00:26
Build Settings
- 00:29 - 00:31
Make sure that your main scene
- 00:31 - 00:34
is added to the scenes in the build
- 00:34 - 00:37
and then we're going to click iOS
- 00:37 - 00:39
and choose Switch Platform.
- 00:43 - 00:45
With that done we can close our build settings.
- 00:45 - 00:49
And we're going to open our Player script in Monodevelop.
- 00:51 - 00:53
In the Player script the first thing that we're going to add
- 00:53 - 00:57
is a private vector2 called touchOrigin.
- 00:57 - 01:00
We're going to use touchOrigin to record
- 01:00 - 01:02
the place where the player's finger
- 01:02 - 01:05
started touching our touch screen.
- 01:05 - 01:09
We're going to initialise it to -Vector2.one
- 01:09 - 01:12
which is going to be a position off the screen.
- 01:12 - 01:14
This means that our conditional that we're going to use
- 01:14 - 01:18
to check and see if there has been any touch input
- 01:18 - 01:20
will initially evaluate to false
- 01:20 - 01:22
until there actually is a touch,
- 01:22 - 01:25
which changes touchOrigin.
- 01:25 - 01:27
In our update function we're going to
- 01:27 - 01:29
add the mobile input code.
- 01:29 - 01:31
Now we're actually going to add some
- 01:31 - 01:35
platform-specific code in here which is going to check
- 01:35 - 01:37
what platform the game is running on and
- 01:37 - 01:39
use appropriate input controls
- 01:39 - 01:41
so we're initially going to check if we're
- 01:41 - 01:44
running in a standalone build of our application
- 01:44 - 01:46
or we're running in a web player
- 01:46 - 01:48
in which case we're going to want to continue to use
- 01:48 - 01:51
the keyboard controls that we've been using thus far.
- 01:51 - 01:53
If we're not running in a standalone build
- 01:53 - 01:55
or a web player we're going to use
- 01:55 - 01:58
else to encompass all other build targets
- 01:58 - 02:03
including iOS, Android, Windows Phone 8, etcetera.
- 02:03 - 02:05
Next we're going to check if
- 02:05 - 02:08
Input.touchCount is greater than 0,
- 02:08 - 02:11
meaning the input system has registered
- 02:11 - 02:13
1 or more touches.
- 02:13 - 02:15
If that's true we're going to store the
- 02:15 - 02:18
first touch detected in a variable of the type
- 02:18 - 02:20
Touch called myTouch.
- 02:20 - 02:22
In this case we're just going to grab the
- 02:22 - 02:24
first touch and ignore all other touches
- 02:24 - 02:26
because the game is going to support
- 02:26 - 02:30
a single finger swiping in the cardinal directions.
- 02:30 - 02:32
Next we're going to check the phase of that touch
- 02:32 - 02:34
and make sure that it's equal to
- 02:34 - 02:37
Began by checking its touchPhase
- 02:37 - 02:39
so that we can determine that this is
- 02:39 - 02:41
the beginning of a touch on the screen.
- 02:41 - 02:43
If that's true we're going to set our
- 02:43 - 02:46
variable touchOrigin to equal
- 02:46 - 02:48
the position of myTouch.
- 02:49 - 02:51
If the TouchPhase is not Began
- 02:51 - 02:53
we're going to check if the TouchPhase
- 02:53 - 02:57
is equal to Ended and if the TouchOrigin
- 02:57 - 03:00
is greater than or equal to 0.
- 03:01 - 03:03
Since we initialised TouchOrigin
- 03:03 - 03:07
to -1 we can now check if we had a touch that
- 03:07 - 03:10
ended meaning the finger lifted off the screen,
- 03:10 - 03:13
and if the TouchOrigin.X
- 03:13 - 03:17
is greater than or equal to 0
- 03:17 - 03:20
meaning the touch is inside the bounds of the screen
- 03:20 - 03:24
and has changed from the value that we initialised
- 03:24 - 03:28
it to when we declared it, and that it has ended.
- 03:28 - 03:30
If that is the case we're going to declare
- 03:30 - 03:32
a vector2 called TouchEnd and set it
- 03:32 - 03:35
to equal myTouch.position.
- 03:35 - 03:37
Next we're going to calculate the difference
- 03:37 - 03:39
between the beginning and end of the
- 03:39 - 03:40
touch on the X axis,
- 03:40 - 03:43
We're going to declare a float called X
- 03:43 - 03:47
and set it to equal touchEnd.x - touchOrigin.x
- 03:47 - 03:49
By doing this we get the difference between
- 03:49 - 03:51
the two touches on the X axis
- 03:51 - 03:54
and therefore we can get a direction to move in.
- 03:54 - 03:56
We're going to do the same on the Y.
- 03:57 - 04:00
And then we're going to set touchOrigin.x
- 04:00 - 04:02
to -1 so that our conditional
- 04:02 - 04:04
doesn't repeatedly evaluate to true.
- 04:04 - 04:06
Because touch controls are not always
- 04:06 - 04:08
in a perfectly straight line we're going
- 04:08 - 04:10
to have to take a guess at the user's
- 04:10 - 04:12
intent with their touch, and so what we're going to do
- 04:12 - 04:14
is we're going to figure out if the touch was
- 04:14 - 04:16
generally more horizontal
- 04:16 - 04:19
or generally more vertical in a given direction.
- 04:20 - 04:23
We're going to do this using Mathf.Abs
- 04:23 - 04:26
and checking if X was greater than Y.
- 04:27 - 04:29
If X is greater than Y
- 04:29 - 04:31
we're then going to check if X
- 04:31 - 04:33
is greater than 0
- 04:33 - 04:35
to determine whether it's a positive or negative
- 04:35 - 04:37
number and thereby what direction
- 04:37 - 04:39
the user is trying to swipe in.
- 04:40 - 04:43
We are then going to set horizontal to 1
- 04:43 - 04:45
or -1 accordingly.
- 04:46 - 04:50
If the difference is not greater along the X axis
- 04:50 - 04:52
then what we're going to do is we're going to set
- 04:52 - 04:56
vertical to equal 1 or -1.
- 04:57 - 04:58
We're going to add an endif statement
- 04:58 - 05:00
at the end of our mobile code.
- 05:00 - 05:03
Now to tie this back in to the code that we wrote
- 05:03 - 05:07
earlier when we wrote the original player movement code
- 05:07 - 05:09
the values that we were previously getting
- 05:09 - 05:12
using Input.GetAxisRaw from the input manager
- 05:12 - 05:15
from the keyboard, which we stored in to
- 05:15 - 05:18
horizontal and vertical respectively,
- 05:18 - 05:24
are instead being set in our touch screen code using swipes.
- 05:24 - 05:26
If we've determined that they've changed
- 05:26 - 05:30
in this frame, meaning that they're no longer equal to 0,
- 05:30 - 05:32
the values they were initialised with
- 05:32 - 05:35
we're going to call AttemptMove
- 05:35 - 05:38
and pass in horizontal and vertical.
- 05:38 - 05:40
Now that our touch code is completed you'll be
- 05:40 - 05:43
able to test the game using Unity Remote
- 05:43 - 05:46
or build to your touch device of choice.
- 05:46 - 05:48
It's worth noting that if you want to continue
- 05:48 - 05:50
testing with the keyboard in the editor
- 05:50 - 05:52
you'll need to either change the build type
- 05:52 - 05:56
back to standalone, or you can add
- 05:56 - 05:58
an additional condition to our
- 05:58 - 06:01
first if statement: UNITY_EDITOR.
- 06:03 - 06:05
And there we have it. If you want to give this a test
- 06:05 - 06:08
you can download the Unity Remote app
- 06:08 - 06:10
for your mobile or tablet device
- 06:10 - 06:12
and hook your device up to
- 06:12 - 06:14
the computer you're running Unity on with USB cable
- 06:15 - 06:17
and you should be able to test your game in the editor.
- 06:18 - 06:19
Thanks so much for watching
- 06:19 - 06:21
and I hope this was helpful for you.
Player
Code snippet
using UnityEngine;
using System.Collections;
using UnityEngine.UI;    //Allows us to use UI.

// ...

    public Text foodText;                        //UI Text to display current player food total.
    public AudioClip moveSound1;                 //1 of 2 Audio clips to play when player moves.
    public AudioClip moveSound2;                 //2 of 2 Audio clips to play when player moves.
    public AudioClip eatSound1;                  //1 of 2 Audio clips to play when player collects a food object.
    public AudioClip eatSound2;                  //2 of 2 Audio clips to play when player collects a food object.
    public AudioClip drinkSound1;                //1 of 2 Audio clips to play when player collects a soda object.
    public AudioClip drinkSound2;                //2 of 2 Audio clips to play when player collects a soda object.
    public AudioClip gameOverSound;              //Audio clip to play when player dies.

    private Animator animator;                   //Used to store a reference to the Player's animator component.
    private int food;                            //Used to store player food points total during level.
    private Vector2 touchOrigin = -Vector2.one;  //Used to store location of screen touch origin for mobile controls.

    // ...

        //Set the foodText to reflect the current player food total.
        foodText.text = "Food: " + food;

    // ...

        //Check if we are running either in the Unity editor or in a standalone build.
#if UNITY_STANDALONE || UNITY_WEBPLAYER

        // ...

        //Check if we are running on iOS, Android, Windows Phone 8 or Unity iPhone
#elif UNITY_IOS || UNITY_ANDROID || UNITY_WP8 || UNITY_IPHONE
        //Check if Input has registered more than zero touches
        if (Input.touchCount > 0)
        {
            //Store the first touch detected.
            Touch myTouch = Input.touches[0];

            //Check if the phase of that touch equals Began
            if (myTouch.phase == TouchPhase.Began)
            {
                //If so, set touchOrigin to the position of that touch
                touchOrigin = myTouch.position;
            }
            //If the touch phase is not Began, and instead is equal to Ended and the x of touchOrigin is greater or equal to zero:
            else if (myTouch.phase == TouchPhase.Ended && touchOrigin.x >= 0)
            {
                //Set touchEnd to equal the position of this touch
                Vector2 touchEnd = myTouch.position;

                //Calculate the difference between the beginning and end of the touch on the x axis.
                float x = touchEnd.x - touchOrigin.x;

                //Calculate the difference between the beginning and end of the touch on the y axis.
                float y = touchEnd.y - touchOrigin.y;

                //Set touchOrigin.x to -1 so that our else if statement will evaluate false and not repeat immediately.
                touchOrigin.x = -1;

                //Check if the difference along the x axis is greater than the difference along the y axis.
                if (Mathf.Abs(x) > Mathf.Abs(y))
                    //If x is greater than zero, set horizontal to 1, otherwise set it to -1
                    horizontal = x > 0 ? 1 : -1;
                else
                    //If y is greater than zero, set vertical to 1, otherwise set it to -1
                    vertical = y > 0 ? 1 : -1;
            }
        }
#endif //End of mobile platform dependent compilation section started above with #elif

    // ...

        //Update food text display to reflect current score.
        foodText.text = "Food: " + food;
        SoundManager.instance.RandomizeSfx (moveSound1, moveSound2);
    }

    // ...

            //Update foodText to represent current total and notify player that they gained points
            foodText.text = "+" + pointsPerFood + " Food: " + food;

            //Call the RandomizeSfx function of SoundManager and pass in two eating sounds to choose between to play the eating sound effect.
            SoundManager.instance.RandomizeSfx (eatSound1, eatSound2);

            //Disable the food object the player collided with.
            other.gameObject.SetActive (false);
        }
        //Check if the tag of the trigger collided with is Soda.
        else if (other.tag == "Soda")
        {
            //Add pointsPerSoda to players food points total
            food += pointsPerSoda;

            //Update foodText to represent current total and notify player that they gained points
            foodText.text = "+" + pointsPerSoda + " Food: " + food;

            //Call the RandomizeSfx function of SoundManager and pass in two drinking sounds to choose between to play the drinking sound effect.
            SoundManager.instance.RandomizeSfx (drinkSound1, drinkSound2);

            //Disable the soda object the player collided with.
            other.gameObject.SetActive (false);
        }
    }

    //Restart reloads the scene when called.
    private void Restart ()
    {
        //Load the last scene loaded, in this case Main, the only scene in the game.
        Application.LoadLevel (Application.loadedLevel);
    }

    // ...

        //Update the food display with the new total.
        foodText.text = "-" + loss + " Food: " + food;

        //Call the PlaySingle function of SoundManager and pass it the gameOverSound as the audio clip to play.
        SoundManager.instance.PlaySingle (gameOverSound);

        //Stop the background music.
        SoundManager.instance.musicSource.Stop();

        //Call the GameOver function of GameManager.
        GameManager.instance.GameOver ();
        }
    }
}
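As mentioned at 05:46-06:01, you can keep testing with the keyboard in the editor by adding UNITY_EDITOR to the first platform check. A sketch of that change (only the condition differs from the snippet above):

//Check if we are running in the Unity editor, a standalone build or the web player.
#if UNITY_EDITOR || UNITY_STANDALONE || UNITY_WEBPLAYER
    //...keyboard input via Input.GetAxisRaw, as before...
//Otherwise we are running on a mobile platform.
#elif UNITY_IOS || UNITY_ANDROID || UNITY_WP8 || UNITY_IPHONE
    //...touch input, as shown above...
#endif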
Related tutorials
- Mobile Development (Lesson)
- Multi Touch Input (Lesson)
|
https://unity3d.com/cn/learn/tutorials/projects/2d-roguelike-tutorial/adding-mobile-controls?playlist=17150
|
CC-MAIN-2019-26
|
refinedweb
| 2,367
| 66.88
|
Key:website
The website=* tag can be used to provide the full URL to the official website for the related feature, be it a building, park, railway or anything else.
For small format websites (designed for smaller displays, reduced bandwidth, or mobile touchscreens) one may in addition add website:mobile=*.
Format
Websites URL usually follow this syntax:
http(s)://(www.)domain.tld/(page)(?parameters)(#anchor), where parts in parenthesis may be optional.
Example would be a good website URL. Its format is valid, it's a direct link to the wanted resource and a trustworthy website.
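As an illustration only (not part of the wiki page), the parts of that syntax can be inspected with Python's urllib.parse; the URL used here is made up:

from urllib.parse import urlparse

parts = urlparse("https://www.example.org/page?lang=en#top")  # hypothetical URL
print(parts.scheme)    # 'https'
print(parts.netloc)    # 'www.example.org'  (the www. part is optional)
print(parts.path)      # '/page'            (optional page)
print(parts.query)     # 'lang=en'          (optional ?parameters)
print(parts.fragment)  # 'top'              (optional #anchor)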
Namespace values
Best practices
Durability and machine-readability
- Use as short a URL as possible. Choose simple URLs over complex URLs if they basically point to the same content. For example, use instead of, as both will get you to the front page. Websites are frequently redesigned, so strive for the most "robust" URL that works.
- Include the scheme (http or https) explicitly., not
- Take care when copying links from a search engine result. Search results URLs hide information about the final destination of the link, and are not meant for permanent linking. A very bad example would be
https ://encrypted.google.com/url?sa=t&rct=j&q=&esrc=s...
- Prefer websites at domain names. URLs containing IP addresses tend to get outdated more quickly than URLs at domain names and are far less recognisable. This Sophox query finds websites at IPv4 addresses.
Usability
- If the official website is available in multiple languages depending on the URL, link to the version in the locally appropriate language or the language-neutral version. (Some websites automatically redirect to the user's language if you omit a language code from the URL.)
- website=* is not for every URL. Use wikipedia=* for Wikipedia articles. Social media profiles should be tagged using contact=* keys: contact:mastodon=*, contact:facebook=*, contact:twitter=*, etc. If a social media web presence is the only web presence of the POI (point-of-interest), then some taggers prefer to also list the url using website=* to indicate that no other official website exists. (Do not use bulk edits to "deduplicate" website and contact tags)
Privacy and security
- Use the HTTPS version when available, as it enhances security and privacy for users. E.g.: instead of
- Remove any tracking parameters (mc_id, utm_source, utm_medium, utm_term, utm_content, utm_campaign and other utm_*, fbclid[1], gclid, campaign_ref, gclsrc[2], dclid[3], WT.tsrc, wt.tsrc[4], zanpid, yclid[5], igshid[6], etc.). For example, something like should be replaced by. A small sketch for stripping such parameters is shown after this list.
- This Sophox query detects privacy-invasive tracking identifiers within URLs in several common keys.
- This taginfo listing shows occurrences of "fbclid" in tag values regardless of the key.
- This slower Overpass query is similar to the taginfo listing but takes much longer.
- Never use URL shorteners. URLs such as should not be used as they hide the real URL, can contain advertisements or even malware.
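As an illustration only (not part of the wiki page), a small Python sketch that strips the tracking parameters listed above from a URL; the parameter set is an assumption based on those examples and is not exhaustive:

from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical helper; extend TRACKING_PARAMS to match the list above as needed.
TRACKING_PARAMS = {"mc_id", "fbclid", "gclid", "campaign_ref", "gclsrc",
                   "dclid", "wt.tsrc", "zanpid", "yclid", "igshid"}

def strip_tracking(url):
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k.lower() not in TRACKING_PARAMS and not k.lower().startswith("utm_")]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), parts.fragment))

print(strip_tracking("https://example.org/shop?id=42&utm_source=newsletter&fbclid=abc"))
# https://example.org/shop?id=42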
- Key:contact - The contact:* namespace with contact:website=* (newest parallel existing tagging scheme, provides same tag arrangement in groups like addr:*=*) (Includes contact:facebook=*, contact:twitter=* etc.)
- url=* - A more general tag. Any URL other than the main official one used in the website=* tag (parallel existing tagging scheme)
- image=*
- wikipedia=* Links towards wikipedia articles.
- Proposed features/External links - different tagging ideas
- Key:website:stock - Stock information related to a feature.
- Keepright (keepright.ipax.at) - validates the content of website tags, ensuring they still match the OSM object.
- tag2link plugin - JOSM plugin that opens external references, e.g. this website key
References
- ↑ FB click identifier, tracking part added to links from FB. Presumably used to track source of link and its usage
- ↑
- ↑
- ↑
- ↑
- ↑
|
https://wiki.openstreetmap.org/wiki/Key%3Awebsite
|
CC-MAIN-2021-31
|
refinedweb
| 608
| 56.86
|
In C extern is a keyword that is used to tell the compiler that the variable that we are declaring was defined elsewhere. In order to fully understand this, you need to know the difference between a definition and a declaration of a variable.
By default it is up to the compiler to decide where to define and where to declare a variable.
When it comes to functions, we declare a function when we write its prototype.
We define a function when we write its body. Therefore there is no point of using the extern keyword with functions.
External variable is one that is defined out of any function. These variables are also called global. extern keyword is of matter only to external variables in a project with more than one file.
Example 1:
file1.c
int callCount = 0;      /* definition - this is where callCount lives */

void someFunction()
{
    callCount++;
}
file2.c
#include <stdio.h>

void someFunction();

int main()
{
    extern int callCount;  /* declaration only - callCount is defined in file1.c */
    printf("%d\n", callCount);
    someFunction();
    printf("%d\n", callCount);
    return 0;
}
In file2 we declare the variable callCount. extern means that this variable is defined elsewhere and we will use that variable – we will not create a new variable here.
Example 2:
In C, we use header files for all declarations. Usually we will put all extern variables there and we will not do any extern declarations in our source files. We will just include the header file and directly use the variables.
myHeader.h
extern int callCount;
void someFunction();
source1.c
int callCount = 0;

void someFunction()
{
    callCount++;
    // do some stuff here
}
main.c
#include <stdio.h>
#include "myHeader.h"

int main()
{
    int i;
    printf("The function \"someFunction\" was called %d times.\n", callCount);
    for(i = 0; i < 5; i++)
    {
        someFunction();
        printf("The function \"someFunction\" was called %d times.\n", callCount);
    }
    return 0;
}
Here's how this example should work: the first printf reports that someFunction was called 0 times; each of the five loop iterations then calls someFunction() and prints the updated count, so you will see the values 1 through 5.
You should also try it and move things around to get the idea.
The sources from the examples above are available on GitHub. I also prepared a zip file for you. It contains the source and header files of the examples. You can download it, unzip it and create projects with the source files: c-extern-examples.zip
In conclusion, the extern keyword in C declares a variable that is defined elsewhere. In practice it should be used in our header files, and then we include the necessary headers. We should avoid using the extern keyword directly in our source files.
|
https://www.c-programming-simple-steps.com/c-extern.html
|
CC-MAIN-2022-05
|
refinedweb
| 410
| 76.52
|
Build a Rotten Tomatoes Clone With GraphQL and Auth0
Build a Rotten Tomatoes Clone With GraphQL and Auth0
In this tutorial, we cover how easy it is to build a Rotten Tomatoes clone with a backend like Graphcool and add authentication to it easily using Auth0.
Introduction to GraphQL
GraphQL is a query language for APIs created by Facebook that offers declarative data fetching in the client and is already used by companies such as Coursera and GitHub. While queries are a very easy and quick way for clients to get exactly the data they need in one request, the GraphQL server has to parse and validate the query, check which fields are included, and return the underlying data from the database.
The type-safe schema unlocks new possibilities for tooling, as demonstrated by Graph with
create-react-app my-app --scripts-version auth0-react-scripts. Then open
component/DisplayMovie.js file and the following code:
import React from 'react' import '../App.css'; class DisplayMovie extends React.Component { render() { return ( < div < div style = { { backgroundImage: `url(${this.props.movie.imageUrl})`, backgroundSize: 'cover', paddingBottom: '100%', } } /> < div > < div < h3 > < span Movie Title: < /span> {this.props.movie.description} </h 3 > < h2 > < span Rating: < /span> { this.props.movie.avgRating }% </h 2 > < Urland
component, for a free one now.
Login to your Auth0 management dashboard and let Rotten Tomatoes and for the identifier I'll set it as. We'll leave the signing algorithm as RS256 and click on the Create API button.
Creating the Rotten Tomatoes API.
Create the Auth Service
We'll create an authentication service to handle everything about authentication in our app. Go ahead and create an
AuthService.js file inside the
utilsdirectory. = 'openid email profile'; const AUDIENCE = 'AUDIENCE_ATTRIBUTE'; var auth = new auth0.WebAuth({ clientID: CLIENT_ID, domain: CLIENT_DOMAIN }); export function login() { auth.authorize({ responseType: 'token id_token', redirectUri: REDIRECT, audience: AUDIENCE, scope: SCOPE }); } export function logout() { clearIdToken(); clearAccessToken(); clearProfile();); } function clearProfile() { localStorage.removeItem('profile'); } // Helper function that will allow us to extract the access_token and id_token export function getAndStoreParameters() { auth.parseHash(window.location.hash, function(err, authResult) { if (err) { return console.log(err); } setIdToken(authResult.idToken); setAccessToken(authResult.accessToken); }); } export function getEmail() { return getProfile().email; } export function getName() { return getProfile().nickname; } // Get and store access_token in local storage export function setAccessToken(accessToken) { localStorage.setItem(ACCESS_TOKEN_KEY, accessToken); } // an hosted version of Auth0.
A client was created automatically when you created the API. Now, go to the clients area and check for the test client that was created. You should see it in your list of clients. Open the client and change the Client Type to Single Page Application.
Non-interactive clients are meant to be used in machine-to-machine interactions. We are using a SPA to interact with the API so the client should be a SPA client. Check out Implicit Grant and client credentials exchange for more information.
Go ahead and login.
Hosted Lock Login Widget.
Logged-in page.
Now you are logged in. Perfect! A user can't create a new movie without being authenticated.
Now, click on Permissions. We need to restrict permission on the type of userhook! Whoop! Whoop!
Conclusion
In this tutorial, we covered how easy it is to build a product with a backend like Graphcool and add authentication to it easily using Auth0.
|
https://dzone.com/articles/build-a-rotten-tomatoes-clone-with-graphql-and-aut
|
CC-MAIN-2018-43
|
refinedweb
| 601
| 50.84
|
Hello,
I am interested in using the Cypress FX3 as a mechanism to communicate with our FPGA. I basically need a method for two communication platforms:
1. FPGA will stream data out to an ARM processor connected via USB (through the Cypress FX3)
2. ARM processor needs to be able to read some registers (BRAM) on the FPGA through the same USB link and be able to set some registers (BRAM) to dictate what the FPGA will stream out.
I am following the information in the document AN65974 - Designing with the EZ-USB® FX3™ Slave FIFO Interface (cypress.com) and I can see that there is a pathway for me to do #1. The "FX3 Slave interface" described in an example like below seems to be a way of me to transfer data out of the FPGA using the "EP1_IN" interface.
However, what I am wondering is how to implement #2 - it is certainly possible to do it via some sort of opcode based protocol using EP1_OUT but what I risk doing in that respect is now data in EP1_IN is containing different streams of input data sources that need to be multiplexed in the USB Host (the ARM Core).
Is there perhaps a more elegant solution to do what I am describing? I saw that there is capability for an I2C and SPI interface on the FX3, but I am not too sure what this refers to. Does this appear like a SPI or I2C interface to the FPGA? Something like that could work - so the I2C/SPI interface is used as my #2 requirement.
Thanks for the help!
Prateek
Hi,
I am trying to change the descriptor of usbuart example to access to data in cdc and vendor at the same time.
I mean when I am sending data through uart to /from usb , I have access to data by control center as well. So I added vendor descriptor to that example with the same DMA channels, but nothing comes up in CC. Should I change anything else?
In device manger I can see that as a com port, but nothing appears in CC.
I attached the modified descriptor file.
Thanks
I'm having a strange problem supporting more than one resolution in my project. The following code sample is typical of sample code to handle this issue. I get the correct resolution (frame) index in glCommitCtrl[3] when the wValue is PROBE, but I always get glCommitCtrl[3] = 1 when the wValue is COMMIT.
In addition, the values in glCommitCtrl[18]-[21] contain the frame size when these commands are received. The frame size is actually correct for the intended resolution (per the descriptors). For example if 640x480 is index #1 and 320x240 is index #2 and 320x240 is the intended resolution then the frame size will be correct for 320x240 but the index will be 1 (640x480). I am able to get around this problem by storing the frame index when I get the set PROBE and ignoring the value when I set the set COMMIT and using the one stored from PROBE.
Just wondering if this issue is familiar to anyone and they were able to solve it.
/* Set Probe Control */ if (wValue == CX3_APP_VS_PROBE_CONTROL) { glCurrentFrameIndex = glCommitCtrl[3]; } else { /* Set Commit Control and Start Streaming*/ if (wValue == CX3_APP_VS_COMMIT_CONTROL) { CyCx3AppImageSensorSetVideoResolution (glCommitCtrl[3]); if (glIsApplnActive) { #ifdef CX3_DEBUG_ENABLED CyU3PDebugPrint (4, "\n\rUSB Setup CB:Call AppSTOP1"); #endif CyCx3AppStop(); } CyCx3AppStart(); } } }
John A
Hello. We are working on a product that is embedding during the development phase a Cypress CYUSB3KIT-003 EZ-USB® FX3™ SuperSpeed Explorer Kit (...) and the OV5640. We got the NDA from OmniVision and signed it. Can anyone contact me to get the source code for OV5640 please?
Hi,
The AN75779 runs properly on the Cypress evaluation board, which uses a CYUSB3014 device. We want to port this application to run on a CYUSB3012 which has only the half of internal memory (256K instead of 512K).
The first try was to change the memory setting in the file: "cyfxtx.c" and also to use the appropriate linker file: "fx3_256k.ld". The result was, that the application complains about a memory allocation error.
QUESTIONS:
1) Is the described way to specify the device restriction regarding memory correct? Is anything missing or does there exist another method to specify the used device?
2) Is it possible to modify the application to work also with less memory? And if yes, what must be changed?
3) There are a lot of functions which are not available as source code. Is it possible to publish those source files?
Kind Regards
Cromagnon
Hi
I encountered a problem while using OpenOCD debugging in FX3.
When debugging the program, it stop at breakpoints as expected. but it does not stop when click suspend after click resume to run, so I can't local where the program is running that moment. I have used other MCU such as STM32, it can desired stop where the program running currently when click suspend in debugging.
I hope someone can help me solve this problem how to stop when click suspend in debugging, thanks very much!
Best Regards,
Ekko
Hi,
We use FPGA+CYUSB3014 for image capture and the register of camera are controlled by IIC port of CYUSB3014.
Recently we found that the speed of IIC is much lower in WIN 10 system than in WIN 7 system. Maybe more than ten times.
Any experience for this problem, please tell me the reason.
Thanks
Hi,
I am actually working with 13MPx CMOS Image Sensor. this sensor has 10 bits accuracy in his dynamic range. So, for this reason has the possibility to sending both 8 bits per pixel either 10 bits per pixel (using full dynamic range).
As I am currently working with the CX3, using RGB888 as data input and 24 bits size as data output. I have to use The the 8 bits per pixel profile from the sensor,knowing that I am lossing accuracy which is esential in order to quantize the analog value.
So, I have the following question: Is it possible to send 10 bits per pixel from the sensor, using a CX3's configuration based on RGB888 as data input and holding the same output size of 24 bits?. Because if I do it, we will not have to do zero padding as with the RAW10 format. (Please, see the attached picture).
What code sections will have to change in order to get this profile?.
Thanks so much.
|
https://community.cypress.com/t5/USB-Superspeed-Peripherals/bd-p/usb-superspeed-peripherals/page/323
|
CC-MAIN-2021-17
|
refinedweb
| 1,099
| 62.58
|
Monitoring Solr with Graphite and Carbon
Monitoring Solr with Graphite and Carbon
This blog post requires graphite, carbon and python to be installed on your *nix system. I'm running this on Ubuntu.
To setup monitoring RAM usage of Solr instances (shards) with graphite, you will need two things:
1. backend: carbon
2. frontend: graphite
The data can be pushed to carbon using the following simple python script.
In my local cron I have:
1,6,11,16,21,26,31,36,41,46,51,56 * * * * \ /home/dmitry/Downloads/graphite-web-0.9.10\ /examples/update_ram_usage.sh
The shell script is a wrapper for getting data from the remote server + pushing it to carbon with a python script:
scp -i /home/dmitry/keys/somekey.pem \ user@remote_server:/path/memory.csv \ /home/dmitry/Downloads/MemoryStats.csv
python \ /home/dmitry/Downloads/graphite-web-0.9.10\ /examples/solr_ram_usage.py
An example entry in the MemoryStats.csv:
2013-09-06T07:56:02.000Z,SHARD_NAME,\ 20756,33554432,10893512,32%,15.49%,SOLR/shard_name/tomcat
The command to produce a memory stat on ubuntu:
COMMAND="ssh user@remote_server pidstat -r -l -C java | grep /path/to/shard"
The python script is parsing the csv file (you may want to define your own format of the input file, I'm giving this as an example):
import sys
import time
import os
import platform
import subprocess
import datetime
from socket import socket

CARBON_SERVER = '127.0.0.1'
CARBON_PORT = 2003

delay = 60
if len(sys.argv) > 1:
    delay = int(sys.argv[1])

sock = socket()
try:
    sock.connect((CARBON_SERVER, CARBON_PORT))
except:
    print "Couldn't connect to %(server)s on port %(port)d, is carbon-agent.py running?" % {'server': CARBON_SERVER, 'port': CARBON_PORT}
    sys.exit(1)

filename = '/home/dmitry/Downloads/MemoryStats.csv'

lines = []
with open(filename, 'r') as f:
    for line in f:
        lines.append(line.strip())
print lines

lines_to_send = []
for line in lines:
    if line.startswith("Time stamp"):
        continue
    shard = line.split(',')
    lines_to_send.append("system." + shard[1] + " %s %d" % (
        shard[5].replace("%", ""),
        int(time.mktime(datetime.datetime.strptime(shard[0], "%Y-%m-%dT%H:%M:%S.%fZ").timetuple()))))

# all lines must end in a newline
message = '\n'.join(lines_to_send) + '\n'
print "sending message\n"
print '-' * 80
print message
print sock.sendall(message)
time.sleep(delay)
After the data has been pushed you can view it in graphite GWT based UI. The good thing about graphite vs jconsole or jvisualvm is that it persists data points so you can view and analyze them later.
Published at DZone with permission of Dmitry Kan , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
|
https://dzone.com/articles/monitoring-solr-graphite-and
|
CC-MAIN-2019-18
|
refinedweb
| 461
| 50.84
|
Hi, I am taking a class in order to learn Python. One of the exercises I need to do is write function definitions. I cannot figure out how to do one of them. To show you an example here is a similar problem:
If m is an integer, then isPrime(m) iff m is prime.
The code:
# Prompts the user for an integer m and then computes and prints whether
# m is prime or not.
# isPrime(m): I -> Bool
# If m is an integer, then isPrime(m) if and only if m is prime.
def isPrime(m):
    return False if m <= 1 else isPrimeItr(1, 0, m)

# isPrimeItr(i,a,m): I x I x I -> Bool
def isPrimeItr(i, a, m):
    return False if a > 2 else True if a == 2 and i == m + 1 else isPrimeItr(i+1, a+1, m) if m % i == 0 else isPrimeItr(i+1, a, m)
# print a brief description of the program followed by an empty line
print("Computing Prime Numbers")
print("Prompts the user an integer value for m and then computes and")
print("prints if m is prime or not.")
print()
# prompt the user for a value for m
m = eval(input("Enter an integer value for m: "))
# print if m is prime
print("The value that", m, "is a prime integer is", isPrime(m))
------------------------------------------------------------------------------------------------------------------------
These are the problems I am having problem with:
If m and n are integers, then anyPrimes(m,n) iff there are any prime numbers in the interval [m,n).
If m and n are integers, then countPrimes(m,n) is the number of prime numbers in the interval [m,n).
If anyone could help that would be great. Thank you.
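One possible approach, shown as an iterative sketch that reuses the isPrime definition above (your course may expect the recursive helper style instead):

def anyPrimes(m, n):
    """True if and only if there is at least one prime in the interval [m, n)."""
    return any(isPrime(k) for k in range(m, n))

def countPrimes(m, n):
    """Number of primes in the interval [m, n)."""
    return sum(1 for k in range(m, n) if isPrime(k))

print(anyPrimes(8, 12))    # True  (11 is in [8, 12))
print(countPrimes(10, 20)) # 4     (11, 13, 17, 19)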
|
http://forums.devshed.com/python-programming/952798-help-writing-function-definitions-last-post.html
|
CC-MAIN-2017-51
|
refinedweb
| 286
| 71.38
|
When developing a complex line of business system queries reuse is often required. This article provides some guidelines for LINQ expressions reuse, and a utility that enables reuse of expressions in projection.
When looking for a way to reuse LINQ expressions for projections (LINQ Select() calls) I came across this reply by Marc Gravell. I liked the use of the term "black art" for expression reuse so I reuse it here...
If you are only interested in using expressions in projections (LINQ Select() calls), go here.
This article assumes reasonable knowledge of LINQ.
To demonstrate the goals of this article, let's assume a model containing projects and subprojects, represented by corresponding classes:
public class Project
{
public int ID { get; set; }
public virtual ICollection<Subproject> Subprojects { get; set; }
}
public class Subproject
{
public int ID { get; set; }
public int Area { get; set; }
public Project Project { get; set; }
}
Now, let's also assume there is some piece of logic that determines what subproject would be considered the "main" subproject for each project. Let's assume this is not trivial logic and it is being used across the application. Obviously, we would like to keep DRY and have this logic written in one place only. For performance reasons, we would like this logic to be available in a way that enables us to use it against the DB; we would like to avoid bringing in all subprojects when we need only the main one.
With LINQ we can program that logic in a type-safe manner and in terms of our business objects.
Let's assume the logic for the main project is the project with the largest area, provided that it is not larger than 1,000. First, we will select all main subprojects, at this point we will include the selection logic in the query (kindly ignore the ignoring of non-trivial cases):
var mainsSbprojects = ctx.Subprojects
.Where(sp =>
sp.Area ==
sp.Project.Subprojects.Where(spi => spi.Area < 1000)
.Max(spii => spii.Area)).ToArray();
We would like to take the logic inside the Where clause (the lambda) and reuse it across the application. If we use IntelliSense, we can learn that the expression is of type:
Expression<Func<Subproject, bool>>
Func<Subproject, bool> means the parameter is expected to be a method that takes a Subproject and returns a boolean. Think of it as a loop that runs for each one of the subprojects and returns an indication whether it should be included in the result or not. The Expression part means this is not quite a function but rater an expression tree that may be compiled into a method. However, this tree may be translated into SQL (or any other data retrieval equivalent), depending on the data source (If this is unclear to you, have a look at this).
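As a quick illustration of the difference (not taken from the original article), the same lambda can be stored as a delegate or as an expression tree, and the tree can be compiled back into a delegate:

// A delegate: compiled code, ready to execute in memory.
Func<Subproject, bool> asDelegate = sp => sp.Area < 1000;

// An expression tree: data describing the lambda, which a provider can translate to SQL.
Expression<Func<Subproject, bool>> asTree = sp => sp.Area < 1000;

// The tree can still be turned into executable code when needed.
Func<Subproject, bool> compiled = asTree.Compile();
bool check = compiled(new Subproject { Area = 500 }); // true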
Let's take that piece of lambda expression and put it into a variable:
private static Expression<Func<Subproject, bool>> mainSubprojectSelector = sp =>
sp.Area ==
sp.Project.Subprojects.Where(spi => spi.Area < 1000)
.Max(spii => spii.Area);
And let's now rephrase our query:
var mainsSbprojectsBySelector =
ctx.Subprojects.Where(mainSubprojectSelector).ToArray();
Now, let's assume we want the main subproject only for project 1:
var proj1MainSubProj = ctx.Subprojects.Where(
mainSubprojectSelector).Single(sp => sp.Project.ID == 1);
OK, that’s nice, we have reused our logic.
Note that this expression starts the selection from the sub project. When we are dealing with a project it would make more sense to answer the question "What is this project's main subproject?". We would still want to use our original logic, so we might want to have a new expression that takes a project and returns a subproject, using the original logic. Maybe something like this:
private static Expression<Func<Project, Subproject>> projectMainSubprojSelector =
proj => proj.Subprojects.AsQueryable().Single(mainSubprojectSelector);
And now we can do this:
var proj1MainSubprojByProj = ctx.Projects.Where(p => p.ID == 1).Select(projectMainSubprojSelector).Single();
(This would run fine in LINQ to objects, however, if you try to run it with LINQ to entities you would run into an error stating that the method Single() can only be the last method in the chain. This is true for SingleOrDefault() and for First, but does not apply for FirstOrDefault().)
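Following the note above, a hypothetical LINQ to Entities-friendly variant of projectMainSubprojSelector would use FirstOrDefault() (semantics differ slightly: it returns null instead of throwing when nothing matches):

private static Expression<Func<Project, Subproject>> projectMainSubprojSelectorEF =
    proj => proj.Subprojects.AsQueryable()
                .Where(mainSubprojectSelector)
                .FirstOrDefault();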
Let's have another look on that selection expression:
var proj1MainSubprojByProj = ctx.Projects.Where(p => p.ID == 1).Select(projectMainSubprojSelector).Single();
Note that you might think of using DbSet.Find() or DbSet.Single() to get the Project with ID==1 but you would not be able to call Select() on it, therefore, it would be impossible to use the main subproject selection logic. We must keep running an IQueryable<Project> down the chain to reuse the logic.
Let's introduce a new requirement – we now have a logic that retrieves the average effective area (AEA) of a project. That would be the average of the area of all subproject, excluding projects with an area greater than 1,000. Here is an expression for you (to be reused in a DRY environment):
private static Expression<Func<Project, double>> projectAverageEffectiveAreaSelector =
proj => proj.Subprojects.Where(sp => sp.Area < 1000).Average(sp => sp.Area);
And here is how to get the AEA of a project:
var proj1Aea =
ctx.Projects.Where(p => p.ID == 1)
.Select(projectAverageEffectiveAreaSelector)
.Single();
Now, assume we want to retrieve the project ID AND it's AEA. As the AEA selector is an expression we would like to do something like this:
var proj1AndMainSubprojByProj =
ctx.Projects.Where(p => p.ID == 1)
.Select(p => new
{
ProjID = p.ID,
AEA = projectAverageEffectiveAreaSelector
})
.Single();
Well, no. This is a very interesting point with the way the compiler treats lambda expressions. The variable projectAverageEffectiveAreaSelector is of type Expression<Func<Project, double>>. When you write an assignment in an anonymous type initialization, the compiler creates a property of the type you are assigning into it. We want the property AEA to be of type double but is going to be of type Expression<Func<Project, double>>. The compiler has no idea we want to bring in the expression and merge it into the LINQ query, so that we would now have a double (using a pre-defined type instead of an anonymous type will not be any better - the assignment would just not build).
The LinqExpressionProjection library has the sole purpose of allowing you to project from expressions not part of the LINQ query.
All you need to do is call AsExpressionProjectable() on your query source to enable projection support, and then, inside the projection, call Project<TOut>() on the reusable expression. The reusable expression must be of type Expression<Func<TIn, TOut>>, where TIn is the type you are projecting from and TOut is the type the expression returns.
This is how it would be done for the above example:
var proj1AndAea =
ctx.Projects.AsExpressionProjectable()
.Where(p => p.ID == 1)
.Select(p => new
{
Project = p,
AEA = Utilities.projectAverageEffectiveAreaSelector.Project<double>()
}
).Single();
That's it.
You can download it here or you can (and probably better) use the nuget package.
This section can be safely skipped if you "just want it to work". However, note there is some more discussion of expressions reuse at the end of the article.
The call to AsExpressionProjectable() wraps the IQueryable with another IQueryable that is in charge of visiting the expression tree and replacing the calls to Project<T>().
When a call to Project<T>() is visited, the method call argument is compiled and executed. As this is an extension method, that is the part where you retrieve the reusable expression of type Expression<Func<TIn, TOut>>. The argument is compiled and executed, not analyzed or interpreted. This means any piece of code that has the right return type can fit there, including parameterized method calls.
The body of resulting lambda expression is then visited and inserted into the expression tree, replacing both the call to Project<T>() and it's parameter.
Expression tree visiting in LinqExpressionProjection is based on the popular expression tree rewrite pattern and on LinqKit – by the great Joseph Albahari (if you do not own LINQPad you really should), which in turn is based on Tomas Petricek's work. All of the above, and this article, make use of the now industry-standard ExpressionVisitor by Matt Warren (Matt has a series of blog posts called "LINQ: Building an IQueryable Provider" where he builds the "IQToolkit". Read this if you ever want to get a glimpse of genuine genius).
There are two parameter in play here. One is the parameter of the projection lambda (the Select() method call), and the other one is the parameter of the lambda form reusable expression. These parameters are expected to be of the same type (and the code validates that) but the projection parameter should replace the reusable expression parameter in order for the operation to be successful.
Parameters rebinding is achieved by visiting the relevant part of the tree and replacing the occurrences. The visitor is credited to Colin Meek, and it is also based on Matt Warren's work mentioned above.
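A minimal sketch of that rebinding idea, written against the standard ExpressionVisitor (an illustration only, not the library's actual implementation):

using System.Linq.Expressions;

class ParameterRebinder : ExpressionVisitor
{
    private readonly ParameterExpression _from;
    private readonly ParameterExpression _to;

    public ParameterRebinder(ParameterExpression from, ParameterExpression to)
    {
        _from = from;
        _to = to;
    }

    protected override Expression VisitParameter(ParameterExpression node)
    {
        // Swap occurrences of the reusable expression's parameter for the
        // projection lambda's parameter; leave every other node untouched.
        return node == _from ? _to : base.VisitParameter(node);
    }
}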
The project is open source and can be found on CodePlex
There are two flavors of LINQ – query syntax and method chaining. If you are versed in the query syntax, consider the fact that method chaining lends itself to expression reuse while query syntax doesn't. Actually, using tools like the ones described here you might get it to work in query syntax, but it would make your queries much less readable, losing most of the benefit of query syntax. You are also bound to lose the help of IntelliSense in understanding the expression tree and the selector expressions you are expected to provide. If you plan for massive LINQ reuse, aim for the method syntax or for a mixed one.
Some things can be achieved by retrieving data and processing it or by creating more complex data retrieval that also includes some of the processing. In the world of LINQ, where query itself is type safe, testable, and described in problem domain terms, it is natural to want to shift attitude towards doing more in the query. While this is generally good, and expressions reuse is a supporting tool for that practice, please consider the fact that LINQ is harder to understand when compared to procedural code. It is also generally harder to debug, and much harder when you are not querying objects but rather using an ORM to compose store queries.
Think ahead when planning your approach.
|
https://www.codeproject.com/Articles/402594/Black-Art-LINQ-expressions-reuse?fid=1730297&df=90&mpp=10&sort=Position&spc=None&select=4279593&tid=4279014
|
CC-MAIN-2017-22
|
refinedweb
| 1,733
| 50.87
|
#include <stdio.h>

int remove(const char *path);
The remove() function causes the file or empty directory whose name is the string pointed to by path to be no longer accessible by that name. A subsequent attempt to open that file using that name will fail, unless the file is created anew.
For files, remove() is identical to unlink(). For directories, remove() is identical to rmdir().
See rmdir(2) and unlink(2) for a detailed list of failure conditions.
Upon successful completion, remove() returns 0. Otherwise, it returns −1 and sets errno to indicate an error.
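A minimal usage sketch (not part of the original manual page; the path is hypothetical):

#include <stdio.h>

int main(void)
{
    const char *path = "/tmp/example.txt";   /* hypothetical path */

    if (remove(path) == 0) {
        printf("removed %s\n", path);
    } else {
        perror("remove");                    /* errno describes the failure */
    }
    return 0;
}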
See attributes(5) for descriptions of the following attributes:
rmdir(2), unlink(2), attributes(5), standards(5)
|
http://docs.oracle.com/cd/E36784_01/html/E36874/remove-3c.html
|
CC-MAIN-2016-26
|
refinedweb
| 113
| 58.28
|
Ok i finally got it. Thanks!
Ok i finally got it. Thanks!
Ok i got closer with this with the help of google.
import java.io.*;
public class ReadTextFileExample {
public static void main(String[] args) {
File file = new...
Thanks for the reply. Ive looked over the tutorials, while i see how some of them may help me put together the problem, im still confused on putting them where and how.
import java.io.*;
public...
Im trying to write a java code for a homework problem in my intro java class but im having problems, frankly cause i dont know alot of loops and my text book doesnt help.
This is the problem:
...
Ohhhhhhhh
I finally saw what i was doing wrong.... Thanks.
import javax.swing.JOptionPane;
Ok soo i figured out that for the 3rd input box that shows up for "score3", whatever number is inputted there becames the letter grade.
For instance if u put "0" for score1 and score2 and put...
Ok i think i got it. I was able to run it on jgrasp without a problem but can someone just verify how it looks?
import javax.swing.JOptionPane;
public class Exampart2
{
public static...
import javax.swing.JOptionPane;
public class Exampart2
{
public static void main(String[] args)
{
String input, LetterGrd;
double score1,score2,score3,Average;
if...
Thats the thing, im not really sure what needs to be first.
Im guessing grade A, B, and then C?
Hello, im pretty new to Java programming and im currently taking a college class involving java coding. Im doing this weeks homework and have been able to answer all of them but this one. I just cant...
|
http://www.javaprogrammingforums.com/search.php?s=f9caf85280b4c70793aa73fc6f9ef140&searchid=1273444
|
CC-MAIN-2014-52
|
refinedweb
| 278
| 75.61
|
Content count14
Joined
Last visited
About hashi
- RankMember
[WIP] Ninja.io - multiplayer shooter - Box2D
hashi replied to Buizerd's topic in Game ShowcasePhysic simulation is calculated only by host or by all clients as well? If by all clients/players, then how you deal with non-deterministic math between different browsers (firefox and chrome still can produce different results using Math.cos/sin etc. and box2d using it as well)
[ld38] ghost story - story platformer made for ludum dare 38
hashi replied to oOku's topic in Game ShowcaseThere is a bug when you quickly change character in some way you can float up and higher and higher, so then you hit ceil, but your physic leads to teleport to second floor and so on you end up at the top of a cake, you can then go out of map
Gameloop and accuracy timer, feature/fix that was working, but not anymore
hashi posted a topic in Coding and Game DesignI?
Garbage Collector vs Pixi.js
hashi posted a topic in Pixi.jsHello Have an Important question. Is Pixi garbage-collector safe? I've found, for example, kind of terrible pieces of code. I will not show them here, because they are easy to find. Just search for "splice" method in pixi.dev.js. You will find it used for deleting elements from array. But Splice always return an array with deleted element/s. In pixi it isn't grabbed to some var, so Garbage Collector will eat it and after eating enough of unused vars, GC will make a lag in runtime, because of freeing memory. Wonder if there are more of memory-unsafe fragments, is Pixi GC safe? EDIT:Some other example:PIXI.Rope.prototype.updateTransform = function(){ var points = this.points; if(points.length < 1)return; var lastPoint = points[0]; var nextPoint; var perp = {x:0, y:0}; //:O object created every call of functionInstead of perp object it may be two variables, perp_x and perp_y, as you can see in the next lines (that aren't shown) perp is used only for some in-function calculations. EDIT2: Garbage Collector based on snake example: Even better..:
position of sprite in shader
hashi replied to hashi's topic in Pixi.jsAlso is it possible to add new uniforms any time? I'm gonna send array to shader and create uniforms like this: this.uniforms = { "some_array[0]" : {type: "1i", value: 0}, "some_array[1]" : {type: "1i", value: 0} };and later: this.uniforms["some_array[2]"] = {type: "1i", value: 2};and then always update uniform that say about length of this array too. EDIT: Ha! It works!:
get pixeldata in a webgl canvas
hashi replied to whizzkid's topic in Pixi.jsTry checking color of pixel on canvas: image.onload = function() {var canvas = document.createElement("canvas");canvas.width = image.width;canvas.height = image.height;var context = canvas.getContext("2d");context.drawImage(image, 0, 0);imageData = context.getImageData(mouse_x, mouse_y, 1, 1).data;//do something with imageData}
hashi reacted to a post in a topic: position of sprite in shader
position of sprite in shader
hashi posted a topic in Pixi.jsHow to know about position of sprite in fragment shader program? I can pass data to shader as uniforms, but it happens only once, when filter is created. I need to know, always current, position of sprite in shader program. Also depending on it, pass appropriate textures.
Shader program, pass texture with size not power of 2
hashi posted a topic in Pixi.jsHi I need to pass a monochromatic texture (or just about 5-6 colors) to fragment shader and access it. The thing that stopped me is that textures size has to be power of 2. Why this is needed? I think these days gpus shouldn't need things like this. Is it possible to break this requirement?
Creating mask that cut light/sprite to pixel map terrain
hashi posted a topic in Pixi.jsI try to create mask that clip sprite(light) in this way that we will see shadows. Lets assume: 1. Draw a pixel map where black pixel means rock, white means air. 2. Draw black lightmap on it (black sprite on whole map, with MULTIPLY blending) 3. Draw an light(sprite with ADD blending) on lightmap texture by PIXI,RenderTexure these 3 steps allow us to create lightmaping on map, but lights always will go through walls. So we need to use masking. 1. use Bresenham Algorithm for rasterize Circle (with radius of our light = half of width of texture) result is array of pixels XY that create circle. 2. sort these pixels based on angle (from 0 to 360) so we have then polygon/circle data array. 3, create a mask based on these data now we have masked light but nothing is changed, now we need to raycast from center of light to each pixel on circle again with Bresenham Algorithm 1. use Bresenham Algorithm for rasterize Line from center of light/sprite to each point on circle, result is pixels on the radius/line, if one of them are Rock/black pixel - change this position/pixelXY on circle to collision position. 2. apply mask now we see occlusion to the pixels (rock/air) on map, and shadows. But.. it is too slow!! I've searched a lot of stuff to find a better way, but have nothing. Someone knows a better way for occlusion lights/sprites to pixel map(rock/air)?
- If I understood correctly you want to keep interactions with sprites in PIXI. But pass events to map if happened on empty space of stage. You should then listen for click events on stage and pass it to next/under element. Don't know how to do this, maybe there is some API in JS events for it, like "event.passToNext()" or you could listen on mouse enter/leave sprite/stage events in this way: when mouse enter a point/sprite then set canvas "pointer-events:auto;" but if mouse enter stage(so no sprite under mouse) then set "pointer-events:none"; edit: it will not work because of losing canvas interaction with mouse, so enter/leave events won't be fired.
- Try css value "pointer-events:none;" for canvas. It should ignore mouse events on it and send them to next element under canvas.
d13 reacted to a post in a topic: [Pixi] texture or setTexture?
[Pixi] texture or setTexture?
hashi replied to d13's topic in Pixi.jsDefinetly using setTexture is better method, because it does a few more things than just replacing texture variable. Check this out here: this method notices when old and new textures(just a region definitions on real texture?) has different baseTexture(real texture). I think it is necessary cause WebGL need to switch baseTexture currently in use and it takes some gpu. So if baseTexture are the same, we don't need to do this.
Flipping by Scalling Sprite problem
hashi posted a topic in Pixi.jsHi It's my first post here, glad to join you guys. Anyway I try to flip sprite(or just texture) horizontally, these are the only ways I found: sprite.scale.x = -1;or sprite.width = sprite.width * -1;but both of them are buggy ways (I don't mean that scalling has a bug). They of course flip sprite horizontally, but also move it to the left of sprite.width pixels. I don't really like the solution that we could keep state for example: sprite.flippedH = true; //user defined variable; so then we can move it to the right preventing displacement of sprite. Is there something to just flip it without changing displaying position? EDIT: Is this the only way we can do it? //extending PIXI.Sprite classPIXI.Sprite.prototype.__defineGetter__("flippedH", function() { return (this._flippedH === true);});PIXI.Sprite.prototype.__defineSetter__("flippedH", function(val) { if ( val != (this._flippedH===true) ) { if ( val ) { this.anchor.x = 0.9; //not 1, because then 1px is out of place to right side } else { this.anchor.x = 0; } this.scale.x *= -1; } this._flippedH = val;});it uses scale and anchor vars, Why it's bad way to achieve it? Because when we use anchor or scalling things to do something different that this, we need to care about negative value of scale and occupied field of anchor. Any ideas?
|
http://www.html5gamedevs.com/profile/8990-hashi/
|
CC-MAIN-2018-13
|
refinedweb
| 1,388
| 65.73
|
Related
Tutorial
How to Create a Countdown Timer with React Hooks
Introduction
In this tutorial, you will create a countdown timer. This timer will help you learn how to leverage React hooks to update state and manage side effects in a React component.
With React hooks, you can create cleaner code, reusable logic between components, and update state without classes.
Countdown timers are a common UI component and can serve many purposes. They can communicate to users how long they have been doing something or how much time until some event happens. The event you will countdown to in this tutorial is DigitalOcean’s HacktoberFest.
By the end of this tutorial, you will have a functional and reusable countdown timer using React's useState() and useEffect() hooks.
Prerequisites
Before you begin this guide, you’ll need the following:
- this tutorial, you will create apps with Create React App. You can find instructions for installing an application with Create React App at How To Set Up a React Project with Create React App
- You will also need basic knowledge of JavaScript, which you can find in How To Code in JavaScript, along with a basic understanding of HTML and CSS. A useful resource for HTML and CSS is the Mozilla Developer Network.
Step 1 — Creating an Empty Project
In this step, you’ll create a new project using Create React App. Then you will delete the sample project and related files that are installed when you bootstrap the project.
To start, make a new project. In your terminal, run the following script to install a fresh project using
create-react-app:
- npx create-react-app react-hooks-counter
After the project is finished, change into the directory:
- cd react-hooks-counter
In a new terminal tab or window, start the project using the Create React App start script. The browser will auto-refresh on changes, so leave this script running while you work:
- npm start
You will get a local running server. If the project did not open in a browser window automatically, you can open it by visiting the local address shown in the terminal output. If you are running this from a remote server, use the server's address with the same port.
Your browser will load with a simple React application included as part of Create React App.
Next, open src/App.js in a text editor and remove the boilerplate JSX so you are left with a minimal component:
- nano src/App.js
When you are finished, the file will look like this:
import React from 'react';
import './App.css';

function App() {
  return <></>;
}

export default App;
Save and exit the text editor.
Finally, delete the logo since you won’t be using it in this application. It’s a good practice to remove unused files as you work to avoid confusion.
In the terminal window type the following command:
- rm src/logo.svg
If you look at your browser, you will see a blank screen.
Now that the project is set up, you can create your first component.
Step 2 — Calculating How Much Time is Left
In this step, you will create a function that calculates the time remaining between the current date and the first day of HacktoberFest.
First, set up a function called calculateTimeLeft:
const calculateTimeLeft = () => {};
Next, inside the function, you will use the JavaScript Date object to find the current year.
Create a variable called year that is set to the JavaScript date method Date.getFullYear().
Add the following code inside the calculateTimeLeft function:
const calculateTimeLeft = () => {
  let year = new Date().getFullYear();
}
Note: You can use the JavaScript Date object to work with dates and times. The Date.getFullYear() method will grab the current year.
You can now use this variable to calculate the difference between the current date and the first day of HacktoberFest.
Inside the calculateTimeLeft function and below the year variable, add a new variable called difference. Set it equal to a new Date object with the following code:
const calculateTimeLeft = () => {
  let year = new Date().getFullYear();
  const difference = +new Date(`${year}-10-1`) - +new Date();
}
The + before the new Date object is shorthand that tells JavaScript to cast the object as a number, which gives you the object's Unix timestamp represented as milliseconds since the epoch.
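As a quick illustration of that cast (the printed value is only an example):

// Unary + converts a Date into its numeric timestamp in milliseconds.
const now = new Date();
console.log(+now);                    // e.g. 1593561600000
console.log(+now === now.getTime());  // true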
To keep the code reusable, you use a JavaScript template literal and add in the year variable along with the month and day of HacktoberFest. HacktoberFest starts on October 1st each year. When you use the year variable in place of a hard-coded year, you will always have the current year.
Now that you calculated the total number of milliseconds until the countdown timer expires, you need to convert the number of milliseconds to something more friendly and human-readable.
Step 3 — Formatting to Days, Hours, Minutes, and Seconds
In this step, you will create an empty object called timeLeft, use an if statement to check whether there is time remaining, and calculate the total number of days, hours, minutes, and seconds using math and the modulus (%) operator. Finally, you will return timeLeft.
First, create the empty object called timeLeft, which will then be filled in with days, hours, minutes, and seconds inside the if statement.
Add the following code inside the calculateTimeLeft function and below the difference variable:
const calculateTimeLeft = () => {
  let year = new Date().getFullYear();
  let difference = +new Date(`${year}-10-1`) - +new Date();

  let timeLeft = {};
}
Now create an if statement that will compare the difference variable to see if it is greater than 0.
Add this code inside the calculateTimeLeft function and below the timeLeft variable:
...
if (difference > 0) {
  timeLeft = {
    days: Math.floor(difference / (1000 * 60 * 60 * 24)),
    hours: Math.floor((difference / (1000 * 60 * 60)) % 24),
    minutes: Math.floor((difference / 1000 / 60) % 60),
    seconds: Math.floor((difference / 1000) % 60)
  };
}
...
In this code, you round the days, hours, minutes, and seconds down with Math.floor, dropping the remainder to get whole number values. The calculation only runs while difference is greater than 0, so timeLeft stays an empty object once the target date has passed.
Finally, you need to return timeLeft so that you can use the value elsewhere in the component.
Add this code inside the calculateTimeLeft function and below the if statement:
const calculateTimeLeft = () => {
  let year = new Date().getFullYear();
  let difference = +new Date(`${year}-10-1`) - +new Date();

  let timeLeft = {};

  if (difference > 0) {
    timeLeft = {
      days: Math.floor(difference / (1000 * 60 * 60 * 24)),
      hours: Math.floor((difference / (1000 * 60 * 60)) % 24),
      minutes: Math.floor((difference / 1000 / 60) % 60),
      seconds: Math.floor((difference / 1000) % 60)
    };
  }

  return timeLeft;
}
Now that you have created a function that calculates the time left until Hacktoberfest, you can add in the app state that will control and update your timer.
Step 4 — Updating Your App State with useState and useEffect
With React Hooks, you can add state management capabilities to existing functional components without converting them to a class.
In this step, you will import the useState and useEffect hooks from React to manage state in this component.
At the top of the App.js file, add useState and useEffect to your import statement:
import React, { useEffect, useState } from "react";
This code tells React that you want to use these specific hooks and the functionality they make available.
To make the countdown timer work, you need to wire the time-remaining function you previously coded into the component state.
Add this code below the calculateTimeLeft function:
...
const [timeLeft, setTimeLeft] = useState(calculateTimeLeft());
...
This JavaScript syntax is called array destructuring.
The useState method accepts a parameter to set the initial state and returns an array containing the current state and a function to set the state. Here, timeLeft holds the object of time-left intervals and setTimeLeft is the function for updating it. On component load, timeLeft is set to the current time left.
Next, you will use the useEffect hook to deal with the component's side effects.
Note: A side effect is anything that affects something outside the scope of the function being executed.
In this solution, you will use a setTimeout method inside of the useEffect hook. setTimeout and the similar setInterval method are common React patterns when used inside of the useEffect hook. Most async behaviors like the setTimeout method in React are defined with a combination of the useEffect and useState hooks.
Note: You can read more about when and how to use methods like setTimeout and setInterval in this section of the React Docs.
Add this code below the useState() function:
...
const [timeLeft, setTimeLeft] = useState(calculateTimeLeft());

useEffect(() => {
  const timer = setTimeout(() => {
    setTimeLeft(calculateTimeLeft());
  }, 1000);
});
...
The useEffect is what updates the amount of time remaining. By default, React re-invokes the effect after every render. Every time timeLeft is updated in state, the component re-renders and the useEffect fires again; each run sets a timer for 1 second (1,000 ms) that updates the time left once that second has elapsed. The cycle continues every second after that.
To help eliminate the potential of stacking timeouts and causing an error, you should add the clearTimeout method inside the useEffect hook as well.
Add a clearTimeout method and pass in the timer variable as a parameter:
useEffect(() => {
  const timer = setTimeout(() => {
    setTimeLeft(calculateTimeLeft());
  }, 1000);
  // Clear timeout if the component is unmounted
  return () => clearTimeout(timer);
});
The cleanup function returned here runs before every subsequent run of the effect (but not before the first run) and when the component unmounts, clearing the pending timeout.
Now that your state is set from calculateTimeLeft() and is updated inside your effect hook, you can use it to build your display component.
Step 5 — Using Object.keys
In this step, you will use Object.keys to iterate over the timeLeft object and build out a display component. You will use the display component to show the time left before HacktoberFest begins.
First, create a new variable under the useEffect hook called timerComponents:
...
useEffect(() => {
  const timer = setTimeout(() => {
    setTimeLeft(calculateTimeLeft());
  }, 1000);
  return () => clearTimeout(timer);
});

const timerComponents = [];
...
After iterating over the keys in timeLeft, you will use this variable to push a new JSX component with the time left.
Next, use Object.keys to iterate over the timeLeft object you returned from your calculateTimeLeft function.
Add this code below the timerComponents variable:
...
const timerComponents = [];

Object.keys(timeLeft).forEach((interval) => {
  if (!timeLeft[interval]) {
    return;
  }

  timerComponents.push(
    <span>
      {timeLeft[interval]} {interval}{" "}
    </span>
  );
});
...
Here the code loops through the properties of the timeLeft object. If the timer interval has a value above zero, it adds an element to the timerComponents array.
Note: The extra {" "} in the code is used so that the intervals displaying the time left do not run into each other on screen. The {} let you use JavaScript inside your JSX, and the "" adds the space.
Now you are ready to add the new JSX to the App component's return statement to display the time left until HacktoberFest.
Step 6 — Displaying the Time Left
In this step, you will add several JSX components to the App component's return statement. You will use a ternary operator to check whether there is time left or whether it is time for HacktoberFest.
To use the timerComponents array, you need to check its length and either render it or let the user know that the timer has already elapsed.
Add this code inside the empty return statement:
...
return (
  <div>
    {timerComponents.length ? timerComponents : <span>Time's up!</span>}
  </div>
);
...
In React JSX components, you use a ternary operator in place of a JavaScript if statement, because only expressions are allowed inside JSX.
The timerComponents.length check sees whether there is anything inside the timerComponents array and renders it if there is; otherwise it renders Time's up!.
Next, you will add two more JSX components to the return statement to let the user know what they are counting down to:
...
return (
  <div>
    <h1>HacktoberFest 2020 Countdown</h1>
    <h2>With React Hooks!</h2>
    {timerComponents.length ? timerComponents : <span>Time's up!</span>}
  </div>
);
...
To use the current year instead of hard-coding 2020, you can create a new state variable and set the initial state to new Date().getFullYear().
Below the first useState() variable, add this code:
...
const [timeLeft, setTimeLeft] = useState(calculateTimeLeft());
const [year] = useState(new Date().getFullYear());
...
This method will grab the current year, just as you did in the calculateTimeLeft function.
You can then remove the hardcoded 2020 from your h1 and replace it with year:
...
return (
  <div>
    <h1>HacktoberFest {year} Countdown</h1>
    <h2>With React Hooks!</h2>
    {timerComponents.length ? timerComponents : <span>Time's up!</span>}
  </div>
);
...
This will display your state variable, which will now always have the current year.
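For reference, here is the complete App.js assembled from the steps above (a sketch; the original tutorial may format or style the final file slightly differently):

import React, { useEffect, useState } from "react";
import "./App.css";

function App() {
  const calculateTimeLeft = () => {
    let year = new Date().getFullYear();
    let difference = +new Date(`${year}-10-1`) - +new Date();

    let timeLeft = {};

    if (difference > 0) {
      timeLeft = {
        days: Math.floor(difference / (1000 * 60 * 60 * 24)),
        hours: Math.floor((difference / (1000 * 60 * 60)) % 24),
        minutes: Math.floor((difference / 1000 / 60) % 60),
        seconds: Math.floor((difference / 1000) % 60)
      };
    }

    return timeLeft;
  };

  const [timeLeft, setTimeLeft] = useState(calculateTimeLeft());
  const [year] = useState(new Date().getFullYear());

  useEffect(() => {
    // Re-calculate the remaining time one second after every render.
    const timer = setTimeout(() => {
      setTimeLeft(calculateTimeLeft());
    }, 1000);
    // Clear the pending timeout if the component re-renders or unmounts.
    return () => clearTimeout(timer);
  });

  const timerComponents = [];

  Object.keys(timeLeft).forEach((interval) => {
    if (!timeLeft[interval]) {
      return;
    }

    timerComponents.push(
      <span>
        {timeLeft[interval]} {interval}{" "}
      </span>
    );
  });

  return (
    <div>
      <h1>HacktoberFest {year} Countdown</h1>
      <h2>With React Hooks!</h2>
      {timerComponents.length ? timerComponents : <span>Time's up!</span>}
    </div>
  );
}

export default App;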
Conclusion
In this tutorial, you built a countdown UI component using the useState and useEffect hooks to manage and update your application's state.
From here, you can learn how to style React components to create a more attractive countdown UI.
You can also follow the full How To Code in React.js series on DigitalOcean to learn even more about developing with React.
https://www.digitalocean.com/community/tutorials/react-countdown-timer-react-hooks
I need some help. Here is what I am trying to make, and here is the code.
One of your professors has asked you to write a program to grade her final exams, which consist of only 20 multiple-choice questions. Each question has one of four possible answers: A, B, C, or D. The file CorrectAnswers.txt, which is on the Student CD, contains the correct answers for all of the questions; each answer is written on a separate line. The first line contains the answer to the first question, the second line contains the answer to the second question, and so forth.
Write a program that reads the contents of the CorrectAnswers.txt file into a one-dimensional char array, and then reads the contents of another file, containing a student's answers, into a second char array. The Student CD has a file named StudentAnswer.txt that you can use for testing purposes. The program should determine the number of questions that the student missed, and then display the following:
* A list of the questions missed by the student, showing the correct answer and the incorrect answer
provided by the student for each missed question.
* The total number of questions missed
* The percentage of questions answered correctly. This can be calculated as:
Correctly Answered Questions (divided by) Total Number of Questions
*If the percentage of correctly answered questions is 70% or greater, the program should indicate that the student passed the exam. Otherwise, it should indicate that the student failed the exam.
Code:
#include <iostream>
#include <fstream>
using namespace std;

int main()
{
    const int numOfAnswers = 20;
    const int stringSize = 1;
    char correctAnswers[numOfAnswers][stringSize];
    char studentAnswers[numOfAnswers][stringSize];
    int totalMissed = 0;
    ifstream inputFile;

    //Open the file StudentAnswers.
    inputFile.open("CorrectAnswers.txt");
    //Read the 20 answers from the file CorrectAnswers into the char array correctAnswers.
    for (int count = 0; count < numOfAnswers; count++)
    {
        inputFile >> correctAnswers[count][stringSize];
    }
    //close the file.
    inputFile.close();

    //Open the file StudentAnswers.
    inputFile.open("StudentAnswers.txt");
    //Read the 20 answers from the file StudentAnswers into the char array studentAnswers.
    for (int count = 0; count < numOfAnswers; count++)
    {
        inputFile >> studentAnswers[count][stringSize];
    }
    //close the file.
    inputFile.close();

    return 0;
}

for(x = 0; x < 20; x++)
    if(correct[x] != answer[x])
        wrong[x] = false;
for(x = 0; x < 20; x++)
{
    if(!wrong[x])
    {
        cout << x << ". wrong" << endl;
        total++;
    }
    else
        cout << x << ". right" << endl;
}
cout << total << "wrong, " << (20-total) << "right" << endl;
I do not know what is wrong with it!
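For comparison, here is a minimal sketch of how the reading and grading could fit together. It assumes one single-character answer per line in each file and the 70% pass rule from the assignment; the variable names are illustrative, not taken from the original post, and error checking for missing files is omitted.

#include <iostream>
#include <fstream>
using namespace std;

int main()
{
    const int NUM_QUESTIONS = 20;
    char correct[NUM_QUESTIONS];   // plain 1-D arrays are enough: one char per answer
    char student[NUM_QUESTIONS];

    // Read one answer per line from each file.
    ifstream correctFile("CorrectAnswers.txt");
    ifstream studentFile("StudentAnswers.txt");
    for (int i = 0; i < NUM_QUESTIONS; i++)
        correctFile >> correct[i];
    for (int i = 0; i < NUM_QUESTIONS; i++)
        studentFile >> student[i];
    correctFile.close();
    studentFile.close();

    // Compare the two arrays and report each missed question.
    int missed = 0;
    for (int i = 0; i < NUM_QUESTIONS; i++)
    {
        if (student[i] != correct[i])
        {
            missed++;
            cout << "Question " << (i + 1) << ": correct answer " << correct[i]
                 << ", your answer " << student[i] << endl;
        }
    }

    double percentCorrect = 100.0 * (NUM_QUESTIONS - missed) / NUM_QUESTIONS;
    cout << "Questions missed: " << missed << endl;
    cout << "Score: " << percentCorrect << "%" << endl;
    cout << (percentCorrect >= 70.0 ? "You passed the exam." : "You failed the exam.") << endl;

    return 0;
}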
https://cboard.cprogramming.com/cplusplus-programming/136036-helllppp.html
- NAME
- DESCRIPTION
- METHODS
- SEE ALSO
- AUTHOR
- BUGS
- SUPPORT
NAME
Webservice::InterMine::Query::Roles::Runnable - Composable behaviour for runnable queries
DESCRIPTION
This module provides composable behaviour for running a query against a webservice and getting the results.
METHODS
results_iterator
Returns a results iterator for use with a query.
The following options are available:
as => $format.
json[objects|rows] - raw data structures
The two json formats allow low-level access to the data-structures returned by the webservice.
count - a single row containing the count
In preference to using the iterator, it is recommended you use Webservice::InterMine::Query::Roles::Runnable::count instead.
size => $size
The number of results to return. Leave undefined for "all" (default).
start => $start
The first result to return (starting at 0). The default is 0.
columnheaders => 0/1/friendly/path
Whether to return the column headers at the top of TSV/CSV results. The default is false. There are two styles - friendly: "Gene > pathways > name" and path: "Gene.pathways.name". The default style is friendly if a true value is entered and it is not "path".
json => $json_processor
Possible values: (inflate|instantiate|raw|perl)
What to do with JSON results. The results can be returned as inflated objects, full instantiated Moose objects, a raw json string, or as a perl data structure. (default is
perl).
summaryPath => $path
summarise($path, <%opts>) / summarize($path, <%opts>).
iterator
A synonym for results_iterator. See Webservice::InterMine::Query::Roles::Runnable::results_iterator.
results( %options )
returns the results from a query in the result format specified.
This method supports all the options of results_iterator, but returns a list (in list context) or an array-reference (in scalar context) of results instead.
all(%options)
Return all rows of results. This method takes the same options as
results, but any start and size arguments given are ignored. Note that the server code limits result-sets to 10,000,000 rows in size, no matter what.
first(%options)
Return the first result (row or object). This method takes the same options as
results, but any size arguments given are ignored. May return
undef if there are no results.
one(%options)
Return one result (row or result object), throwing an error if more than one is received.
get_count
A convenience method that returns the number of result rows a query returns.
count
Alias for get_count
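Taken together, a typical call pattern might look like the following sketch (it assumes $query is a query object that composes this role, for example one built with Webservice::InterMine; the option values are only illustrative):

# Count the rows, fetch a small page of results, then grab the first row.
my $total = $query->count;

my @rows = $query->results(
    as    => 'jsonrows',   # one of the formats listed above
    size  => 10,           # only the first ten rows
    start => 0,
);

my $first = $query->first();   # may be undef if there are no results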
url
Get the url for a webservice resource.
get_upload_url
get the url to use to upload queries to the webservice.
save
Save this query in the user's history in the connected webservice. For queries this will be saved into query history, and templates will be saved into your personal collection of private templates.
SEE ALSO
Webservice::InterMine::Cookbook for guide on how to use these modules.
Webservice::InterMine::Query
Webservice::InterMine::Service
Webservice::InterMine::Query::Template
AUTHOR
Alex Kalderimis
<dev@intermine.org>
BUGS
Please report any bugs or feature requests to
dev@intermine.org.
SUPPORT
You can find documentation for this module with the perldoc command.
perldoc Webservice::InterMine::Query::Roles::Runnable
You can also look for information at:
Webservice::InterMine
Documentation
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
https://metacpan.org/pod/Webservice::InterMine::Query::Roles::Runnable
Hi, I have a home assignment in C#: I need to make a simple one-way (singly) linked list and I'm having trouble. I can make all kinds of linked lists in C++ with no problem at all, but I can't do it in C#, because I can't access memory directly, so no pointers for me.
Now this is where the fun begins. Where the hell do I start? I don't want to use templates; I need to make it by myself. I just don't know what I should do to replace * and -> in C#.
This is what I've got so far.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ConsoleApplication3
{
    class Program
    {
        //CLASS NODE
        public class Node
        {
            int userinput;
            Node p_next;
            void Add();
            void Print();
        }

        //CLASS LIST
        public class List
        {
            Node p_start;
            void PrintAll();
            void AddToStart();
            void AddToEnd();
        }

        //MAIN
        public static void Main(string[] args)
        {
            List MyList = new List();
            MyList.p_start = null;   // <- !?!?
        }
    }
}
So how do I go about creating a new List in Main() and creating new pointers?
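As a rough sketch of the idea (names are illustrative, not a drop-in answer): in C#, a node simply holds a reference to the next node, and you follow references where C++ would dereference pointers, so there is no direct equivalent of * or -> to worry about.

using System;

public class Node
{
    public int Value;
    public Node Next;              // reference plays the role of Node* in C++

    public Node(int value)
    {
        Value = value;
        Next = null;
    }
}

public class LinkedList
{
    private Node start;            // head reference; null means the list is empty

    public void AddToStart(int value)
    {
        Node node = new Node(value);
        node.Next = start;         // the old head becomes the second element
        start = node;
    }

    public void AddToEnd(int value)
    {
        Node node = new Node(value);
        if (start == null) { start = node; return; }
        Node current = start;
        while (current.Next != null)   // walk to the last node
            current = current.Next;
        current.Next = node;
    }

    public void PrintAll()
    {
        for (Node current = start; current != null; current = current.Next)
            Console.WriteLine(current.Value);
    }
}

public class Program
{
    public static void Main()
    {
        LinkedList myList = new LinkedList();
        myList.AddToEnd(1);
        myList.AddToEnd(2);
        myList.AddToStart(0);
        myList.PrintAll();         // prints 0, 1, 2
    }
}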
https://www.daniweb.com/programming/software-development/threads/185384/simple-linked-list-in-c
In the Java programming language, an interface is used to specify behavior that classes must implement. The Java world offers us two such interfaces: Comparable and Comparator. Comparable in Java is used to sort objects by their natural ordering, while Comparator is used to define custom orderings based on the objects' attributes. Let's understand these interfaces through the medium of this article.
Now that we are clear with our agenda, let’s begin!
As the name itself suggests, Comparable is an interface which defines a way to compare an object with other objects of the same type. It helps to sort objects that have a natural ordering of their own, i.e., the objects know how to order themselves, e.g., by roll number, age, or salary. This interface is found in the java.lang package and it contains only one method, compareTo(). Comparable is not capable of sorting the objects on its own; the interface defines a single method, int compareTo(), which is responsible for defining the ordering used by the sort.
Further, you must be thinking what is the compareTo method? Well, let me explain that to you!
This method is used to compare the given object with the current object. The compareTo() method returns an int value. The value can be either positive, negative, or zero. So now we are well acquainted with the theoretical knowledge of Comparable interface in Java and compareTo method.
Let’s hop into understanding the implementation process. First, let’s see how to implement Comparable.
Below code depicts the usage of comparable in Java.
public class Student implements Comparable<Student> {
    private String name;
    private int age;

    public Student(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public int getAge() {
        return this.age;
    }

    public String getName() {
        return this.name;
    }

    @Override
    public String toString() {
        return this.name + " (" + this.age + ")";
    }

    @Override
    public int compareTo(Student per) {
        if (this.age == per.age)
            return 0;
        else
            return this.age > per.age ? 1 : -1;
    }

    public static void main(String[] args) {
        Student e1 = new Student("Adam", 45);
        Student e2 = new Student("Steve", 60);
        int retval = e1.compareTo(e2);
        switch (retval) {
            case -1: {
                System.out.println("The " + e2.getName() + " is older!");
                break;
            }
            case 1: {
                System.out.println("The " + e1.getName() + " is older!");
                break;
            }
            default:
                System.out.println("The two persons are of the same age!");
        }
    }
}
In the above example, I have created a class Student with two fields, name and age. Class Student is implementing the Comparable interface and overrides the compareTo method. This method sorts the instances of the Student class, based on their age.
Now that I have covered Comparable in Java, moving on I will talk about another interface i.e. Comparator in Java. Let’s move to understanding Comparator in Java!
A Comparator interface is used to order the objects of a specific class. This interface is found in java.util package. It contains two methods;
The first method, compare(Object obj1, Object obj2), compares its two input arguments and returns the result: a negative integer, zero, or a positive integer according to whether the first argument is less than, equal to, or greater than the second.
The second method, equals(Object element), takes an Object as a parameter and indicates whether that object is equal to this comparator; it returns true only when the supplied object is also a Comparator and imposes the same ordering.
With that brief introduction to Comparator in Java, it's time to move a step ahead. Let me show you an example depicting Comparator in Java.
Here is an example of using Comparator in Java:
import java.util.Comparator;

public class School {
    private int num_of_students;
    private String name;

    public School(String name, int num_of_students) {
        this.name = name;
        this.num_of_students = num_of_students;
    }

    public int getNumOfStudents() {
        return this.num_of_students;
    }

    public String getName() {
        return this.name;
    }
}

public class SortSchools implements Comparator<School> {
    @Override
    public int compare(School sch1, School sch2) {
        if (sch1.getNumOfStudents() == sch2.getNumOfStudents())
            return 0;
        else
            return sch1.getNumOfStudents() > sch2.getNumOfStudents() ? 1 : -1;
    }

    public static void main(String[] args) {
        School sch1 = new School("sch1", 20);
        School sch2 = new School("sch2", 15);
        SortSchools sortSch = new SortSchools();
        int retval = sortSch.compare(sch1, sch2);
        switch (retval) {
            case -1: {
                System.out.println("The " + sch2.getName() + " is bigger!");
                break;
            }
            case 1: {
                System.out.println("The " + sch1.getName() + " is bigger!");
                break;
            }
            default:
                System.out.println("The two schools are of the same size!");
        }
    }
}

Output: The sch1 is bigger!
Well, no need to panic here. The above-written code is really easy to understand. Let’s go!
First, I created a class School that holds a name and the number of students. After that, I created another class, SortSchools, to implement the Comparator interface, which accomplishes the goal of imposing an order between instances of the School class according to their number of students.
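To tie the two together, here is a small sketch showing how these interfaces are usually consumed through the standard library (it reuses the Student and School classes from the examples above; the list contents are made up):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SortingDemo {
    public static void main(String[] args) {
        // Comparable: Collections.sort uses the natural ordering defined by Student.compareTo.
        List<Student> students = new ArrayList<>();
        students.add(new Student("Adam", 45));
        students.add(new Student("Steve", 60));
        students.add(new Student("Jane", 30));
        Collections.sort(students);
        System.out.println("Youngest: " + students.get(0).getName());

        // Comparator: the ordering is supplied externally, so School itself stays untouched.
        List<School> schools = new ArrayList<>();
        schools.add(new School("sch1", 20));
        schools.add(new School("sch2", 15));
        Collections.sort(schools, new SortSchools());
        System.out.println("Smallest: " + schools.get(0).getName());
    }
}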
After understanding Comparator as well as Comparable in Java, let's wrap up with the key differences between them: Comparable lives in java.lang, defines a single natural ordering through compareTo(), and requires modifying the class being sorted, whereas Comparator lives in java.util, defines the ordering externally through compare(), and lets you create as many different orderings as you need without touching the class. I hope this comparison brought some clarity regarding the two concepts.
With this, we have reached the end of our article. I hope the content turned out to be informative and added to your knowledge of the Java world. Stay tuned! If you have any questions, mention them in the comments section of this "Comparable in Java" blog and we will get back to you as soon as possible.
https://www.edureka.co/blog/comparable-in-java/
View source code
Display the source code in std/math/remainder.d.
std.math.remainder.modf
Breaks x into an integral part and a fractional part, each of which has the same sign as x. The integral part is stored in i.
real modf (
real x,
ref real i
) nothrow @nogc @trusted;
Returns
The fractional part of x.
Example
import std.math.operations : feqrel;

real frac;
real intpart;

frac = modf(3.14159, intpart);
assert(intpart.feqrel(3.0) > 16);
assert(frac.feqrel(0.14159) > 16);
Authors
Walter Bright, Don Clugston, Conversion of CEPHES math library to D by Iain Buclaw and David Nadlinger
License
Copyright © 1999-2022 by the D Language Foundation.
https://dlang.org/library/std/math/remainder/modf.html
How Sysbee Manages Infrastructures and Provides Advanced Monitoring by Using InfluxDB and Telegraf
Session date: 2020-06-30 08:00:00 (Pacific Time)
Discover how Sysbee helps organizations bring DevOps culture to small and medium enterprises. Their team helps their customers by improving stability, security, scalability — by providing cost-effective IT infrastructure. Learn how monitoring everything can improve your processes and simplify debugging!
Join this webinar as Saša Teković and Branko Toić dive into:
- Sysbee’s introspection on monitoring tools over the years
- How TSDB’s, and specifically InfluxDB, fits into improving observability
- Their approach to using the TICK Stack to improve the web hosting industry
Transcript
Here is an unedited transcript of the webinar “How Sysbee Manages Infrastructures and Provides Advanced Monitoring by Using InfluxDB and Telegraf”. This is provided for those who prefer to read than watch the webinar. Please note that the transcript is raw. We apologize for any transcribing errors.
Speakers:
- Caitlin Croft: Customer Marketing Manager, InfluxData
- Saša Teković: Linux System Engineer, Sysbee
- Branko Toić: Linux System Engineer, Sysbee
Caitlin Croft: 00:00:04.096 Hello everyone. My name is Caitlin Croft. Welcome to today’s webinar. I’m super excited to have Sysbee here, presenting on how they are using InfluxDB. Once again, if you have any questions, please feel free to post them in the Q&A box or in the chat. We will be monitoring both. And what I’m going to do, I’m going to hand it off to Saša and Branko of Sysbee.
Saša Teković: 00:00:35.734 Hello everyone, and a big thanks to everyone who decided to tune in to our webinar. I’m Saša Teković, and I’d like to begin by going through the day’s agenda. We’ll start with a brief intro about our company and what we do and then we’ll talk about our monitoring requirements. We’ll cover some interesting details about monitoring tools that we’ve used over the last 20 years and challenges we experienced along the way. Finally, we’ll talk about how today we monitor customers’ infrastructures using Influx data products.
Saša Teković: 00:01:15.355 So I work as a senior Linux systems engineer at Sysbee. I’ve been working in the web hosting industry as a systems engineer for the past 11 years and I have experience in designing and maintaining various types of hosting platforms, such as shared, VPS, dedicated server, and private clouds. I enjoy simplifying things for our customers, which are in most cases, developers. For example, when designing a complex infrastructure which consists of many clustered services, I believe that it’s important to introduce an abstraction layer which allows developers to manage their application as if it was hosted on a single server. I’m a big fan of InfluxDB and Telegraf because it makes my work so much easier, but you’ll be able to find out more on that in the following slides.
Branko Toić: 00:02:10.388 Hi, all. My name is Branko. I’m doing the system administrations, mostly on Linux systems back from ’98. Currently, I’m specialized in engineering and maintaining medium to large server systems primarily used for web application delivery. I’m also working in the web hosting industry for the past 15 years now and my passion is collecting and analyzing data. So naturally monitoring is my primary focus. I also like to evaluate and implement new motioning tools and practices whenever and wherever possible. And besides that, I’m doing some internal tooling and system development in Python.
Saša Teković: 00:03:01.219 So just a quick overview of our company. Our roots go way back to 2001 when our founding members entered the Croatian web hosting scene as Plus Hosting. In the beginning, the company offered not only Windows shared hosting but later introduced Linux shared hosting platform as well. Back then, the shared hosting infrastructure was based on rented physical servers in the UK and US data centers. In 2006, we deployed redesigned infrastructure using our own hardware in a data center in our capital city, Zagreb. With shared hosting still being our core business, in 2008, we introduced managed VPS and dedicated server hosting to meet increasing demand for more capable hosting solutions. In 2010, we added managed services to the portfolio as the need for tailored infrastructure solutions began to appear. Then, fast-forward a couple of years. In 2015, we joined DHH, which stands for Dominion Hosting Holding. DHH is a tech group that provides the usual infrastructure to run websites, apps, e-commerces, and software-as-a-service solution to more than 100,000 customers across Southeast Europe. DHH is listed on the Italian stock exchange and under its umbrella holds a number of hosting brands from Italy, Switzerland, Croatia, Slovenia, and Serbia. As mentioned earlier, Sysbee was established in 2018 to meet increasing demand for managed services. We recognize that some customers prefer to host their projects on-premises or with public cloud providers, such as AWS, Google Cloud, and Digital Auction, so we are proud to say that we are platform agnostic, meaning that we can assess, design, and maintain infrastructure hosted with different providers and not only on our own hardware.
Saša Teković: 00:05:01.036 To give you a better idea about our managed services, I’d like to mention a few of them. We offer infrastructure assessment, a service aimed at customers who want a complete assessment of their infrastructure in order to find out where and how can they improve performance stability, security, and optimize infrastructure costs. Our managed infrastructure service covers everything from infrastructure design to continuous monitoring and maintenance with 24/7 technical support. This means that we are collaborating with our customers from the very beginning — from the very beginning of their project/idea in order to identify customer’s requirements and goals. At the same time, we’re also collaborating with developers working on the project to meet their requirements and provide them with a developer-friendly environment in which they don’t have to deal with the complexity of the infrastructure that hosts their application. Our managed AWS is basically the same as managed infrastructure service with special emphasis on services in the AWS ecosystem, managing security of AWS accounts, as well as optimizing AWS costs. Besides services, we also offer a couple of managed products. For example, Magento optimized hosting, which is ideal for hosting solutions for medium to large Magento store owners. Hosting plans come with a preinstalled toolkit for Magento developers and are preconfigured for the best possible performance. Managed GitLab is a relatively new addition to our products portfolio. It’s ideal for companies that require a dedicated GitLab instance, but don’t want to deal with maintenance, backups, and managing security of their GitLab server, or simply have special requirements that gitlab.com service can’t fulfill. For example, to choose the specific country where the GitLab server must be hosted or to ensure that the GitLab server is reachable only via a VPN connection.
Saša Teković: 00:07:10.032 A few words about our typical clients. They’re usually small to medium business whose projects vary from e-commerce sites, news portals, API servers, and software-as-a-service projects. For example, ERP solutions, event ticketing services, etc. Our client’s projects can generally be grouped into three categories. Small projects, which in most cases, have one to three standalone virtual or physical servers. For example, database application and caching server. Then, we have medium projects, which have more than three servers, of which two or more are generally placed behind a load balancer. In some cases, most critical components of the infrastructure are redundant. And finally, we have large projects, which consist of five or even a greater number of servers, with high availability and full tolerance, since all parts of the infrastructure are redundant and clustered. In most cases, such large projects also feature auto-scaling, where part of the infrastructure automatically scales, depending on the resource usage. All right. That was a short intro about our company. Now, I’d like to pass the mic to my colleague Branko, who will tell you an interesting story about our monitoring system and how they evolved over time.
Branko Toić: 00:08:36.312 Okay. Thank you, Saša, for a great introduction. I hope that this gave a clear picture of who we are and what we do, as well as what our typical clients are. So keeping in mind that we are mostly dealing with web services and managing web servers, our monitoring requirements are aligned with that. On the other hand, we aren’t doing web application development or maintenance. So we are in a bit of a pickle when it comes to monitoring application health. Nevertheless, we still like to add value in our support and maintenance plans. So this will usually mean that we will try to monitor our client’s application health indirectly. Nowadays, it goes without saying that most of the serious websites require a near 100% uptime. So we also expect our monitoring stack to have some form of alerting and trend tracking, which can help us to predict failures, or at least mitigate them, as soon as possible.
Branko Toić: 00:09:51.590 As Saša previously mentioned, we also do remote system assessments. And there are some cases where our clients will opt-in for one-week metrics collections. These help us to better understand and analyze their infrastructure workloads, so we can finally give them a better suggestion for optimizing their systems. So in those scenarios, we really need an easy to use metrics collection system that will not affect the system too much. I think that you all know where this is going, but allow me to tell you a short story about the history and the evolution of our monitoring stack, so let’s go [laughter].
Branko Toić: 00:10:39.501 Our story begins back in 2001, and in those days, as Saša mentioned, we were starting as a small company, then known as Plus Hosting. Our primary focus was only a regional shared hosting market. And at the time, we were counting our servers on fingers of one hand. What’s worth keeping in mind, that for our region at least during those years, internet access was just starting to get a foothold in people’s homes and they used the internet mostly for reading their emails, very occasionally, once or twice a day, and smartphones were a very distant future. I would also dare to say that this was mostly an offline era for the online services. We could argue that the uptime requirements back in the days were not as demanding as they are today. And where there is no demand, there is no supply. So in that regard, monitoring software was scarce and hard to find or configure properly. With the introduction of Linux servers in our portfolio, we slowly started to implement MRTG with some basic metrics collection via SNMP. And other than that, we had a handful of custom-made scripts that would be executed via a remote system and then that would trigger basic service alerts that were dispatched via email.
Branko Toić: 00:12:21.732 In the year of 2005 – 2006, several changes were happening. First of all, internet in our region suddenly became available to more people. There was a clear boom of websites and that meant more business for us. But somewhere around that period, GIGRIB was introduced as our first public website monitoring service. You may know that today as Pingdom. Their premise at the time was very interesting. You would install a B2B service on your computer that would monitor other sites and you would gain credits that you could exchange for monitoring your own web site with HTP or ICP checks. So now what that meant was that even though the smartphones were still not available, and this was still — you could call it — an offline era, enthusiasts were starting to monitor their website for uptime and then comparing web hosters against each other. For us as a web hoster, this was a good incentive to step up our game in server monitoring. Because trust me, it’s rather embarrassing to be notified by the customer that your server or service is offline.
Branko Toić: 00:13:47.951 Luckily, Nagios started gaining popularity around 2005. So naturally, we had to investigate it and put it to good use. You may call me old-fashioned, but I still see a great value in Nagios dashboards, which provide a quick glance over a large number of monitoring services. And on top of that, to this day, I still haven’t found as a good alerting manager that will handle scheduled downtimes, alerting rules, routing, and escalation procedures. Even though it was very good, Nagios didn’t cover all of our monitoring requirements. So, for example, long-term metrics for trend analysis. Yes, there are some performance data collected with some of the Nagios plugins, however, there is no real history or graph representations with default setup.
Branko Toić: 00:14:48.822 So somewhere around 2007, we started using Munin. And this tool came in bundled with cPanel, which we were using as a primary shared hosting platform at the time. We were instantly hooked and we began to extend its metrics, plug-in based, to collect even more metrics, and we started deploying it to non-cPanel servers as well. This certainly gave us some more insights into the history and trends on each of those hosts, but there was also this usability issue of decentralized monitoring information. Basically, each server collected its own information and stored them, so you would have to go from server to server to see how each of them is performing. You didn’t have any possibility to compare hosts and metrics easily, and dashboards were mostly predefined with only slight modifications as possible. At some point in time, we also deployed a central Munin server, where you could aggregate old metrics. However, as our server count grew, this failed miserably. The main issue was the disk and the CPU I/O on the central Munin server, which was used to store and render the graphs for that large amount of metrics.
Branko Toić: 00:16:18.161 So a few years have passed, and somewhere around 2012 or 2013, we replaced Munin with Ganglia. At the time, Ganglia was very well-established. It had support for rrdcached, and it had an easy to use web interface that centralized all the metrics in one place. What you have to bear in mind is that during this period, Grafana still wasn’t released publicly and the Ganglia web interface was full of features. You could search metrics. You can create your custom dashboards. You can create metrics comparison graphs, time shifts, where you can compare metrics to itself in the past, or you could organize hosts and services, and so on. So collecting metrics with Ganglia was also very easy. You could extend its gmond collector with custom-made plugins, or you could use its gmetric to push custom metrics even from the simplest [inaudible]. Suddenly, we were collecting well over 600 to 1,200 metrics per server and we were starting them in RRD.
Branko Toić: 00:17:39.917 Disk I/O was still an issue, but it was not pronounced when you would be using rrdcached compared to bare Munin. The disk I/O rights would balance out over time and the server could manage to store even more metrics. On the other hand, what I must say, Ganglia wasn’t the easiest system to configure or maintain for that matter. There were some issues with the plugins and their plugin bugs, where we would lose all or some of the metrics for some server or service. This was naturally bad and we had to develop some kind of a meta-monitoring system so that we could catch those issues early on. Besides that, we were forced to better standardize some other unrelated things and to organize our servers into monitoring groups or create custom puppet modules. And all this gave us much more insight into the operations. We were more agile in resolving issues and even preventing them a bit before they happened. And for that reason, the Ganglia still brings warmth to my heart.
Branko Toić: 00:19:05.102 2014 was, again, a year of big change. Grafana was released that year. But unfortunately, for us, Ganglia or RRD, where the data was stored, was not one of the data sources that was supported by Grafana. Luckily, there were some workarounds, like using Graphite web components just to read the metrics from RRD files and then serve as a bridge. And up until now, we have invested a lot of time and effort into organizing our monitoring system. So we were very, very dependent on it. And switching databases at this point in time was almost mission impossible. So naturally, at first, we invested some time to make that bridge happen between Ganglia and Grafana since this was the fastest and easiest way for us to get metric dashboards. However, Grafana introduced us to new types of storage engines, mainly time series databases, and naturally, this led us to explore the InfluxDB back in 2015.
Branko Toić: 00:20:14.496 So as you can imagine, that enormous monitoring stack that I was describing before, required a lot of moving parts to be configured. First of all, there was Ganglia that needed to be installed and configured on each host. Then, there was Nagios that was monitoring those hosts. We had a mindset of not collecting the same metrics twice, so we developed one middleware called gnag, and this middleware served as a bridge between metrics that were already collected by Ganglia and Nagios service checks. So by using the gnag, we could alert the Nagios alert manager on already collected metrics within Ganglia. And we can also use this for meta-monitoring of the collection system. So in case we hit some of those plugin bugs, we would know early on.
Branko Toić: 00:21:10.148 To manage this part of the configuration, we actually developed a few custom puppet modules. You have to realize that, at the time, we had a vastly diverse infrastructure, where each client had its own set of requirements, different software stacks, and different kinds of configuration. So naturally, we had to adopt puppets rather slowly and carefully. To better automate the configuration of our monitoring system, we created the specialized puppet inventory module that could detect installed services on each host. It would then produce some custom facts, based on the running services, which would then be used to configure monitoring of those services. So having puppet configured in such a fashion meant that we can even manually install software on servers and that software would be detected and monitored the next time the puppet applied a new configuration set. In the end, we were left with the manual configuration of the Graphite as a middleware between Ganglia and Grafana.
Branko Toić: 00:22:27.182 All the hard work of configuring every piece of the monitoring stack and the automation stack has paid off. So we had everything packaged up, easy to use and maintain, and we had a clear view on how to expand our metrics collection and alerting configuration. So during this period, we also had a big boom of sales in our VPS and dedicated servers, with a lot of custom solutions and clients requiring better uptimes and support. I must say that our entire team had peace of mind knowing that the software they deployed for the customer, even on an ad hoc basis, would be automatically monitored. So it’s improved the agility in deploying new servers and acquiring new clients, while still providing invaluable insights into system operations for our end customer.
Branko Toić: 00:23:28.581 So as I mentioned earlier, Influx came to our radar with the release of the Grafana. And there was this cool new data source available, Influx, yeah, and this tickled my curiosity. So our first InfluxDB installation was 0.9, back in the day when Influx stored all the data into a LevelDB Storage Engine, and I’ll get back to that later. So what was very nice at the time of InfluxDB is that it supported out of the box Graphite write protocol, and this meant that our Ganglia installation could mirror all the data that we were already collecting to the remote Graphite cluster. In this case, Influx. So it was time to put Influx through its paces. To be perfectly honest, I was expecting disaster, because we were pushing well over 400,000 metrics every 15 seconds, and we already did this test with native Graphite on a similar hardware that we used for Ganglia at the time, and it failed this test miserably.
Branko Toić: 00:24:40.395 To my surprise, Influx handled this exceptionally well, both on the disk I/O and CPU front. I would dare to say it was even better than the Ganglia with the rrdcached. We left this set up to collect metrics for a couple of days, but it would be soon evident that we will need much more storage if we wanted to store those metrics for the same period of time like we did in Ganglia. LevelDB, the storage engine that was used at the time, wasn’t really up to the task of compressing the time series data efficiently as RRD was. So it was a win some, lose some. You would get less disk I/O but you would have to play with more storage. This fact, alongside some other projects taking most of our focus, left us with little to no time to invest in a full monitoring system reconfiguration. And I must say that, in retrospect, this was a good thing. Because Influx involved, if memory serves me well, two or three storage engines before finally settling on the current storage engine, which would eliminate that huge consumption that we observed in the start with LevelDB.
Branko Toić: 00:26:07.215 During this testing period, I would also like to do a special shutout to Telegraf and its ease of set up and configuration. It was an absolute breeze to set it up and configure it compared to any previously used software. So even though at the time we didn’t commit fully for our monitoring stack reconfiguration, we had a test InfluxDB running and accepting some metrics and we were upgrading it overtime to follow up on the development process and our minds were already set on changing the existing monitoring setup, but we were just waiting for the right time to do it and overhaul everything on a larger scale. We started using InfluxDB in production as our primary driver somewhere around early 2017. We did also consider some other time series databases. However, InfluxDB provided some key values that were a better fit for our use case. It’s an open core, meaning it’s basically open-source, with some paid features and support. So this gave us a sense of security that there is paid professional support if we would ever need one. It also provides high availability support in the paid version. So if we would ever require something like that, we can use it. For our business use case, what was really, really nice was the support of multiple databases per server, and this was a great benefit in building a multi-tenant system. So we have a quite few different clients with different use cases and isolating them into separate databases helped us keeping that on the overall database usage per client.
Branko Toić: 00:28:10.616 From a technical standpoint, we really clicked with the push model of gathering metrics. And also, as mentioned before, Telegraf was very easy to deploy. It was just a single binary collector, as opposed to many other solutions out there requiring you to configure and monitor multiple processes that will collect metrics. Influx also plays very nice with others and it tries to be compatible with other monitoring tools. This opened up this opportunity of fast testing, the InfluxDB, alongside our previous monitoring setup, and I appreciate this very much. But it also opens up possibilities, for example, to take some data from Prometheus exporters and fed it to InfluxDB or vice versa. Out of all the other time series databases we evaluated at the time, Influx was the only one that offered numeric and string data values, as well as data rollups. That’s something that we weren’t used to by using the RRD files in previous monitoring configurations. Last but not least, Kapacitor is also a very powerful tool in helping stack. I only regret that we don’t use this much and hopefully, this will change soon.
Branko Toić: 00:29:44.986 So in our environment, Telegraf and InfluxDB are the powerhouses of our new monitoring system. Just as a quick overview for those that aren’t familiar with the full TICK Stack, here is a quick diagram that I borrowed from the main Influx site. On the left, we have this collector, or Telegraf, that collects the majority of metrics and pushes them to InfluxDB for long-term storage. On top of that is a Chronograf for data visualization and exploration, as well as interaction with Kapacitor. Kapacitor, on the other end, is, as I like to call it, a Swiss army knife. It can be configured for various tasks, either for data downsampling or transforming the Influx push model to a pull model so you could pull metrics from other systems. It can also do anomaly detection and alerting. For us, the visualization platform of choice was a Grafana. And for alerting, we are still actually using one of the Nagios forks with some similar bridging middleware that we used with the Ganglia. But we are considering to replace this with a Kapacitor in the future. Just haven’t still managed to configure it in a way that we have our current alerting system configured so that we can retain our current alerting rules and flexibility.
Branko Toić: 00:31:26.930 So this leads me to Telegraf. And what’s very nice with Telegraf is that it has its large number of built-in plugins for collecting metrics. But what I love about Telegraf is the ease of extending metrics collection. So, for example, on some specific projects, we were tasked to configure and monitor a PowerDNS service. So at the time when we were doing this, PowerDNS was not one of the inbuilt plugins that were available within Telegraf’s inbuilt plugins list. Today, you have a native Telegraf plugin for collecting metrics for PowerDNS, so I really wouldn’t advise you to use anything that I’m about to show you. However, for this demonstration of extending metrics, it will serve its purpose. So if you do happen to find yourself in a situation where you have to collect something but there is no native implementation you can use this exec plugin. So as the name suggests, this plugin will execute your defined command and it will accept data specified as input and insert it in InfluxDB. Now, all you have to do is create this executable. So in our example, this was a simple case of [inaudible] page from PowerDNS and transforming that data into a valid json that Influx would understand. So by using just those few lines of code and configuration, you end up with a bunch of metrics that you can organize in dashboards like this. And dashboards like this give you insights into internal operations of your service, in this case, PowerDNS, which could one day be crucial for debugging any potential issues in production. So it’s very important to set up your monitoring early on just to get the baseline performance data so that later on when you’re hit with a problem you can go back and compare the data.
Branko Toić: 00:33:47.445 Just as a quick showcase of some standard plugins and the data that you can gather with them, we have just a really simple plugin configuration, which will collect data from this local Redis service. And it’s basically two lines of code if we ignore the comments. And when you collect all the data, we can then organize all that data into dashboards like this. So bear in mind that there are many other metrics available with this plugin, so the final design and how you wish to see the data is totally up to you. A similar configuration just for network-related metrics on standard Linux hosts and the graphs representing that collected data.
Branko Toić: 00:34:35.800 I was mentioning before, one of the strengths of Influx is interoperability with other monitoring systems and their protocols, and I was also mentioning that we do a passive web application health monitoring, so let me just quickly fuse those two examples. Telegraf features a StatsD protocol and StatsD is widely used in many frameworks and programming languages for collecting in-app performance metrics. So what you would usually do is you would instrument your code functions with StatsD. So, for example, when you’re entering the function, you would do a timer, and then when testing a function you would do another timer, and you would push the total time spent within the function as a timer to StatsD. Or, for example, you could do a counter and increment it just to see how many times the function or part of the codebase was involved. It’s totally up to you as a developer. And we, on the other hand, in the ops use it a little bit differently. So in the case where we have PHP applications hosted, we can easily install the APM PHP extension and then configure it to shift metrics to the local StatsD interface, or in this case, Telegraf. A quick word on the APM extension. This will gather various PHP performance data about running PHP processes. And let me quickly show you what the end result is.
Branko Toić: 00:36:16.401 So with previous configuration sets and [inaudible] time, we are now presented with the indirect PHP health information on how the application is performing. We can see what are the execution times or how many CPU time is used by the PHP, what’s the distribution of response codes or what’s the distribution of errors. So in some cases, this can prove to be very valuable information. For example, we can start tracking return status codes and timings. Then, you can create alarms to warn you when those metrics reach certain thresholds. Also, for us in operations, having this kind of information early on, alongside with its history, when we are doing any kind of system modification or system upgrades, we can quickly turn to those graphs to see if there were any kind of bad performance implications or if the application started to return any kind of errors without even touching the application. On the other hand, if there are no system causes for those alerts or misbehaving metrics here, we can then forward this information to our end clients, or the developers can investigate this further, be it may be airing some part of the code or deploy or anything like that.
Branko Toić: 00:37:54.741 Similar to PHP, we also gather HAProxy metrics. We’ll use HAProxy a lot, even on single servers. And here we can see and correlate similar metrics but in a different part of the stack. So we can see how the requests are propagating through the system and that correlate the data or some errors or some slow timings. Also, in this example slide, we can also see this vertical annotation point that developers or system administrators can use to set markers for any major changes, be it codebase or infrastructure change. So this way, we have a clear visual clue on the exact times when something changed and what effect did it have on infrastructure or the application itself.
Branko Toić: 00:39:00.099 So let’s go a little bit modern. A quick disclaimer. Even though we don’t really do Kubernetes deployments in the form of managed Kubernetes for clients, we do use it in some capacity for our own needs. And when we are talking Kubernetes, people usually associate monitoring of those clusters with some alternate time series databases. However, it is not that hard to set up cluster monitoring with Influx and Telegraf as well. So here you can see snippets of the general configuration for the Telegraf DaemonSet and the config map for its deployment. More extended examples can be found in the link in the slide. I was told that these slides and the presentation will be shared later on, so you can refer to it later. For our own environment, we did some modifications to those deployments, but the results and the premise are the same. You’re basically deploying the DaemonSet into the Kubernetes cluster, meaning that Telegraf will be running on each host in the cluster, collecting localhost node metrics, just as with any other Linux host. And on top of that, you can also collect Kubernetes-specific metrics via its own API, or Docker-based metrics. As a result, you will end up with aggregated metrics about the containers and namespaces, as well as per-node metrics where you can then filter utilizations per namespace and/or [inaudible].
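As a rough illustration (the image tag, namespace, and ConfigMap name are placeholders, not the configuration shown in the talk), a Telegraf DaemonSet skeleton could look like this:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: telegraf
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: telegraf
  template:
    metadata:
      labels:
        app: telegraf
    spec:
      containers:
        - name: telegraf
          image: telegraf:1.14        # placeholder tag
          volumeMounts:
            - name: config
              mountPath: /etc/telegraf
      volumes:
        - name: config
          configMap:
            name: telegraf-config     # ConfigMap holding telegraf.conf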
Branko Toić: 00:40:49.371 So how do we organize our data? In total, our Influx servers serve over 8K writes per second, collecting well over 130,000 series. Each client has his own Influx database, and some of them are running their own dedicated InfluxDB servers and some are sharing one or more servers. It all depends on the resource usage and the level of isolation that the client will require. By organizing clients into these separate databases, we can then expose the databases containing only its data to the end client. So this is basically doable either by configuring different Grafana organization or Grafana installations altogether. For our own needs, it’s very hard to traverse many different Grafana installations. So both of our Grafana dashboards in our centralized place have a variable structure that is shown as in the picture. So we can then quickly switch between the data sources, or in this case, client databases, their retention policies, and then filter out by hosts or on their host properties.
Branko Toić: 00:42:14.222 So this would be on the resulting landing dashboard, that contains some general host information. And on the right, there are drill-down links to dashboards displaying more specific data, be it system-wise or services-wise. So by selecting the desired values in the variables above, we can traverse all links to dashboards, preserving them, and then quickly moving between different points of interests. For example, Redis or the network dashboard [inaudible], as I was explaining previously.
Branko Toić: 00:42:51.052 To keep the disk use at bay we are using several retention policies with the input data. For example, all the autoscaling host data is kept for seven days, as the scaling events will create and destroy hosts with unique hostnames, which would otherwise easily clutter the dashboards and the total series count. Also, some less-used metrics with high cardinality that don’t have any long-term value, at least for us (for example, system interrupts), are also split into shorter retention policies. At the moment, we don’t do data downsampling, as we can accommodate a year’s worth of data on disk without modifying its precision. But it would be great if we could lower the disk usage by downsampling data for long-term storage. We are waiting for the intelligent roll-ups feature to be released, as this would ease up the downsampling process and creating the data, but this is not the only blocker for us. There are also some other Grafana-related feature requests for the InfluxDB source that will help with this as well.
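For reference, a seven-day retention policy like the one described above could be created with InfluxQL along these lines; the database and policy names are placeholders:

CREATE RETENTION POLICY "autoscaling_7d" ON "client_db" DURATION 7d REPLICATION 1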
Branko Toić: 00:44:09.143 So the last question is, what the future brings? Well, we are quite excited about the Influx 2.0 and some new features that it brings. We still are not using it. Sure, this will mean evaluating our current setup and redesigning it a bit, but at this point, we are used to that, and we have the infrastructure in place that they can easily switch databases. What we are also very excited about is the new Flux query language, which will hopefully bring us easier to configure Kapacitor scripts and evaluations for alerting. One other thing that we would like to explore in the future is better anomaly detection and prediction, be it natively via Kapacitor or some kind of AI, like Loud ML. And as always, we will be expanding our collected metrics set even more. We are true believers that there is no such thing as too many metrics. And thankfully, we have Influx here to store all those metrics with the peace of mind of how and where our data is stored. So thank you very much for your attention, and this is the end, so we are open for questions.
Caitlin Croft: 00:45:41.813 Thank you so much. Loved your presentation. I particularly really liked the timeline and showing all the different monitoring tools that you have used over the years. And I love all bees that you guys have in your presentation. I think that’s very fun. So while people think of their questions, just want to give everyone another friendly reminder. Last week, we had the European edition of InfluxDays for 2020, and we’re super excited to be offering the North America edition of InfluxDays in November. So there will be Flux training prior to the conference and then the conference itself. And of course, everything will be held virtually again. So we’re super excited. Call for Papers is open. And since this is a virtual event, it’s really fun to see people from around the world coming. So if you’re concerned about the timing, we can definitely figure that out. So please feel free to submit your Call for Papers. It’s open. We’re super excited to get started to reviewing them. So it’s a really great opportunity to get to meet different InfluxDB users from around the world. So we have a question here. Is there any way, hard or easy, to migrate data from Nagios RRD into InfluxDB, even including some intermediary steps?
Branko Toić: 00:47:29.092 So if I understand this question correctly, the issue is about moving existing data from RRD to InfluxDB or even from Nagios performance collected data?
Caitlin Croft: 00:47:41.690 Yes. Yep.
Branko Toić: 00:47:42.457 We did explore this at the time when we were evaluating InfluxDB with our current monitoring setup. Well, basically, the RRD has its own API with different kinds of program languages. You can then extract the data from RRD and feed it to InfluxDB. But you will have to do some kind of a middleware, where you will organize those data from RRD to the InfluxDB format. Because obviously, they are not the same. We did explore this possibility for a short period of time because we were not very keen on losing all the data that we already had. However, luckily for us, we had the Ganglia in place at the time, so we could quickly abandon that thought because it seemed too complex for us. So we were just mirroring metrics from current monitoring configuration to InfluxDB using Ganglia. So we had some kind of a big [inaudible] also in InfluxDB. [crosstalk].
Caitlin Croft: 00:49:08.281 George, if —
Branko Toić: 00:49:09.361 It is possible, but it’s not that easy.
George [attendee]: 00:49:14.813 Thank you very much.
Caitlin Croft: 00:49:15.546 Okay. George, yeah, I unmuted you. So if you have any further questions for the guys feel free to expand upon your question.
George [attendee]: 00:49:24.384 No, no, that’s all. Thank you.
Caitlin Croft: 00:49:28.246 Okay. Great. Yeah. I really loved your presentation. I thought you guys did such a phenomenal job going through all the different monitoring tools that you guys have used over the years and figuring out what was the best solution for you guys, given all the data that you guys were collecting.
Saša Teković: 00:49:47.818 Thank you.
Caitlin Croft: 00:49:50.321 Let’s see if there’s any other questions. So have you guys looked more closely at Flux? When do you think you guys might consider looking at 2.0 and looking more at Flux and Kapacitor?
Branko Toić: 00:50:14.617 So what we would like to achieve is actually to try and [inaudible] Nagios or at least it’s [inaudible] right now for alerting, because we would like to lower the footprint of our monitoring stack. We did a large reduction of moving parts in our monitoring stack just by replacing Ganglia and all other bridges and intermediates and everything like that with InfluxDB. We would also like to use Kapacitor for alerting. We do have some very specific alerting rules. For example, with our clients, there are different kinds of requirements, and there are a very large number of hosts that we are monitoring. So, for example, on one host an alert will go off at certain thresholds, while on another host the alert will go off at a different threshold, and this is very hard to generalize. We are not just maintaining one single cluster; we are maintaining multiple different kinds of clusters.
Saša Teković: 00:51:35.418 Yeah. I will just add that, with the current stable version of the Kapacitor we found it very hard to templatize its configuration and distribute it with the puppet configuration management system. So we actually, a while back, started to look into Flux query language because we found that it might be much easier. You could wait for a stable release of Flux query language and a new version of InfluxDB. So then we will, again, revisit this idea of fully utilizing Kapacitor in our production environment.
Caitlin Croft: 00:52:25.063 So do you think you will — oh, okay. Never mind. It looks like you answered the person’s question. Well, this was a great presentation. Oh, it looks like someone’s raising their hand. Let me see. Oh, looks like we’re all set. So this session has been recorded. So for any of you who would like to re-watch it, it will be available for replay by tonight. And if any of you have any questions that you think of after the fact — I know that happens to me where I join a webinar, and then right afterwards, I think of a question that I wanted to ask the speakers — all of you should have my email address, so if you want to email me with any further questions, I’m happy to connect you with Saša and Branko. Thank you very much, everyone, for joining, and thank you, Saša and Branko, for presenting on how Sysbee is using InfluxDB. It was a great presentation.
Branko Toić: 00:53:31.413 Thank you for having us.
Saša Teković: 00:53:32.569 Yeah, it was a joy.
Caitlin Croft: 00:53:35.223 Glad to hear it. Well, thank you very much, and we’ll talk to you all soon. Bye.
Branko Toić: 00:53:42.103 Bye-bye.
Saša Teković
Linux System Engineer, Sysbee
Saša has been part of the Sysbee (and previously Plus Hosting's) core team for over 12 years now. Saša is well versed in planning, implementation and maintenance of private clouds, VPS and dedicated server hosting as well as shared hosting infrastructures.
Branko Toić
Linux System Engineer, Sysbee
Branko is well versed in monitoring systems, as he’s been a part of the Sysbee’s (and Plus Hosting’s) core team for over 15 years now. As our CTO, Branko helped build the foundation for our excellent infrastructure, methods and IT-related processes which we’re using in our day-to-day work to ensure a fantastic client experience for our customers.
https://w2.influxdata.com/resources/how-sysbee-manages-infrastructures-and-provides-advanced-monitoring-by-using-influxdb-and-telegraf/
Listing Six presents another example of a memory leak. Here, the action method demoMemoryLeak: creates an instance of FooLink and stores it into the property posxLeak (lines 16-18). At first glance, the method looks correct. But if you keep invoking demoMemoryLeak: without disposing of the previous FooLink instance, you will end up with a leak.
Listing Six
// -- FooProblem.h
@interface FooProblem : NSObject
{
    // -- properties
    FooLink *posxLeak;
}
- (IBAction) demoMemoryLeak:(id)aSrc;
@end

// -- FooProblem.m
@implementation FooProblem
- (IBAction) demoMemoryLeak:(id)aSrc
{
    // initialise the following private property
    posxLeak = malloc(sizeof(FooLink));
    posxLeak->fFoo = "foobar";
    posxLeak->fBar = rand();
}
@end
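One hedged way to plug this particular leak, assuming posxLeak is the only reference to the previous allocation, is to free the old FooLink before allocating a replacement (and to free it again in dealloc):

- (IBAction) demoMemoryLeak:(id)aSrc
{
    // release the previous FooLink instance, if any, before allocating a new one
    if (posxLeak != NULL)
        free(posxLeak);

    posxLeak = malloc(sizeof(FooLink));
    posxLeak->fFoo = "foobar";
    posxLeak->fBar = rand();
}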
The final common problem is the memory hog. This is where an object or structure uses up at least a quarter of the available memory. It holds on to its allocated memory until it is properly disposed of. An example of a memory hog is shown in Listing Seven. The action method demoMemoryHog: uses the instance method initWithContentsOfFile: to create an NSMutableArray object with data from the file foobar.data (lines 6-11). If the source file is small, perhaps less than half a megabyte, the array object will also be manageably small. But what if the file is more than a megabyte in size? What if the read data results in a multi-dimensional array? Then you get an array object that can take up half or more of the available memory.
Listing Seven
- (IBAction) demoMemoryHog:(id)aSrc
{
    NSString *tPth;

    // initialise the following property
    tPth = [NSString stringWithString:@"~/Documents/foobar.data"];
    tPth = [tPth stringByExpandingTildeInPath];
    objcArray = [[NSMutableArray alloc] initWithContentsOfFile:tPth];
}
Listing Eight is another example of a memory hog. This variant of demoMemoryHog: starts by creating a FooLink instance and populates its two fields (lines 7-9). Then, it proceeds to create a linked list of 1024 nodes (lines 12-24). Once done, the resulting data structure stays resident, taking up precious memory unless all of the nodes are disposed of properly.
Listing Eight
- (IBAction) demoMemoryHog:(id)aSrc
{
    FooLink *tNode, *tTest, *tLink;
    NSInteger tIdx, tMax;

    // create the first link node
    tNode = malloc(sizeof(FooLink));
    tNode->fFoo = "foobar";
    tNode->fBar = 1;

    // create the rest of the links
    tTest = tNode;
    tMax = 1024;
    for (tIdx = 0; tIdx < tMax; tIdx++)
    {
        // prepare the link node
        tLink = malloc(sizeof(FooLink));
        tLink->fFoo = "barfoo";
        tLink->fBar = tIdx + 2;

        // update the linked list
        tNode->fNext = tLink;
        tNode = tLink;
    }

    //...do other things
}
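A hedged sketch of the matching cleanup, reusing the tTest pointer that still references the first node; it assumes the last node's fNext has been set to NULL, which the listing as shown never does:

// walk the list from the first node (still referenced by tTest) and free each node
tLink = tTest;
while (tLink != NULL)
{
    FooLink *tNext = tLink->fNext;
    free(tLink);
    tLink = tNext;
}
tTest = NULL;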
There is one more memory problem that bears mentioning: heap corruption. Heap corruption happens when code in the app's heap region is altered unexpectedly. The effects are usually subtle, ranging from incorrect behavior to random crashes. There are many causes of heap corruption. It may be sourced to stack collisions or buffer overruns. Or it may be caused by any of the memory problems discussed earlier. As such, a definitive example of heap corruption is hard to present, and is omitted from the rest of this discussion.
Debugging by Static Analysis
Static analysis can identify potential memory problems in a project. You can then fix these problems while avoiding the overhead of runtime debugging. With Xcode, static analysis is done with the open-source tool Clang, which can handle both ANSI-C and ObjC source files. It works with the new LLVM compiler, available on version 3.2 or later of the Xcode IDE.
To start the analysis, choose Build and Analyze from Xcode's Project menu. Clang then highlights the suspect statement, and describes the problem. Figure 1 shows a sample result of one analysis. Function danglingObjC creates an NSString instance (tOrig) with the method initWithString:. It copies the object and assigns the copy to the local tCopy. Then it sends tOrig a release message and returns tCopy as its result.
Figure 1.
Here, Clang highlighted the last statement in the function. It warned that the tCopy result is a potential memory leak. This is because the copy message creates a separate instance of NSString. The release message affects only the tOrig object, not tCopy. One way to correct this error is to mark tCopy for autorelease as it is returned:
return ([tCopy autorelease]);
Another way is to have the caller method dispose of the returned string object:
tText = [self danglingObjC];
//...do something with the string object
[tText release];
Figure 2 shows another sample result. The action method demoDanglingPointer: starts by assigning an empty NSString instance to property objcString. Then, in a separate block, it creates another NSString instance (tBar), one with a value of "foobar". It sends a release message to tBar, but also uses tBar to create and assign a new NSString object to objcString.
Figure 2.
Here, too, Clang highlighted the last statement in demoDanglingPointer:. It warned that the NSString object assigned to objcString is potentially invalid. The memory region with its value "foobar" may be disposed of at any time, essentially making it a dangling pointer. One way to correct this problem is to move the code for creating tBar outside the code block. Another way is to create tBar with the factory method stringWithString:, and do away with the explicit release message:
tBar = [NSMutableString stringWithString:@"foobar"];
This marks the string object for autorelease. As long as objcString stays valid, its reference to the object remains strong, preventing the latter's accidental disposal.
The Instruments Tool
Not all memory problems are easily caught through static analysis. Some appear only at runtime, and sometimes only after running a method or function several times. To detect these problems, you need to use the Instruments tool.
This tool uses the open-source framework DTrace to attach itself to an iOS app process and to acquire trace data. It graphs the trace data and lists the objects that make up the data. And it shows the stack frame that led to each object.
Figure 3 shows a typical trace session. The session window divides itself into four panes; each pane can be hidden or shown. The Instruments pane consists of modules that gather and process the trace data. The Track pane displays the trace data from each module. The Details pane lists the objects and structures detected by the session. It can filter the list to a specific few, and it can drill-down on a specific object. The Extended Details pane reveals the stack frame for a given object. It also identifies the source file or library that made the object.
Figure 3.
Multiple sessions are supported. As a rule, though, only one session should be active to avoid confusion and to reduce impact on app performance. Trace data can be studied during or after acquisition, or it can be saved to the disk for later analysis.
http://www.drdobbs.com/parallel/logging-in-c/cpp/debugging-memory-in-ios-apps/240144285?pgno=2
You have null be a keyword that means NULL (a value which could never be held by any variable). Doesn't have a literal definition, just a symbolic one to the compiler.
BAF.zone | SantaHack!
(a value which could never be held by any variable)
And what would that value be, without going to 96 bit ints or whatever? Even if it's a #define, any equivalent sequence of bits could be in a regular integer variable.
It would have to be a compile time thing. Its hard to do if you never plan for this sort of feature ahead of time.
Perl has a special "undef" flag on scalars, its not 0, its not any kind of null or "" (empty string), its special and perl complains loudly if you use undefined variables (if you tell it to of course, but I always do, it finds so many bugs, and has trained me to not use undefined variables).

That would not work for C++. It works in dynamically typed languages because you can have a Null type, which can only be a single value -- null (or None, or Nothing, or whatever you like). That way, you can assign the null constant to any variable at any point. Python and PHP both work this way. C++ is not dynamically typed, so this solution is out.
As far as I know, Perl's undef construct is different; it directly manipulates the relevant symbol table, removing the identifier associated with a particular variable. PHP can do this too, I believe. Obviously, this will not work for a statically typed language like C++.
Obviously not, it was just for comparison's sake. As I mentioned, anything C++ could do would have to be magic compiler syntax that ends up being mostly useless since you wouldn't be able to check the undef/nullity state at runtime with any sort of accuracy.
It does not have anything to do with dynamic vs. static typing, nor with compiled vs. interpreted. For example, C#, being compiled (at least half-way), and with basically static typing, DOES have a null keyword, and C#'s null does not compare equal to zero. The definition of a language's syntax does not mandate a certain bit representation in memory; in fact, even the existing C++ standard, as far as I know, doesn't say anything about the way numbers are stored in bytes. Nor does it mandate that 0 in a pointer context is stored exactly like in an integer context; it does mandate, though, that using 0 in a pointer context refers to the NULL pointer, and that when converting between pointer and integer types, 0 (the NULL pointer) is equivalent to 0 (the number zero).
---Me make music: Triofobie---"We need Tobias and his awesome trombone, too." - Johan Halmén
That doesn't change the fact that C++ does not have anything like variable type flags; a long int may be 32 bit, say, and nothing more or nothing less, and each of those 32 bits are accessible. There is no "metadata" to say "this is null". In other words, null is indistinguishable from a particular value that datatype can contain, such as the number zero.
An int cannot be null; a pointer can, though, and C++ is capable of adding type information to those. The whole discussion is nonsense, though, because C++ with different pointer semantics is, well, maybe D or C# or something.
// nullcmp.cs
using System;
namespace NullCmp
{
public class Program
{
private static int Main(string[] args)
{
System.Console.WriteLine("(null==0)={0}", (null == 0));
return(0);
}
}
}
prompt>csc nullcmp.cs
Microsoft (R) Visual C# 2005 Compiler version 8.00.50727.1434
for Microsoft (R) Windows (R) 2005 Framework version 2.0.50727
test.cs(10,46): warning CS0472: The result of the expression is always 'false'
since a value of type 'int' is never equal to 'null' of type 'int?'
prompt>test
(null==0)=False
Interestingly, apparently "null" is of type "int?". I'm curious how it is stored under the surface.
prompt>csc nullcmp2.cs /unsafe
Microsoft (R) Visual C# 2005 Compiler version 8.00.50727.1434
for Microsoft (R) Windows (R) 2005 Framework version 2.0.50727
prompt>test
(0==null)=True
(0==null)=True
I may have done something wrong or perhaps the results don't represent the actual implementation, but it seems possible that C# is storing null as 0. There's a good chance I'm overlooking something or lack the understanding to see the difference. I don't actually know how pointers or unsafe code works in C# yet so this is all just a guess.
An int cannot be null
Third time I've posted this...NULL == NIL == NADA == NOTHING == 0 == ZILCH.
NULL == NIL == NADA == NOTHING == 0 == ZILCH.
Not true. Conceptually, Null is something very different from zero, and most languages (but not C / C++) treat it as such - even though in many of them, NULL (or the language equivalent) compares equal to zero due to implicit conversions. PHP even distinguishes between Null and undefined; although the difference is somewhat esoteric, it does make sense in the PHP universe.
In C#, null is of type Null (a class that can only ever carry the null value). Since you're comparing it to an int, though, the compiler applies an implicit conversion, using a type that can both be assigned a null value AND be equality-compared to an int. The best match for these constraints is a nullable int (int?), so that's what the null in your example is treated as. If you were comparing it to a bool, you'd get a bool?. For a string (which is a reference type and thus nullable by default), you'd get a string (not a string?).
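A tiny sketch (not from the thread) of what that lifted comparison does in practice; the class and variable names are illustrative only:

using System;

public class NullableDemo
{
    private static void Main()
    {
        int? maybe = null;   // the null literal, typed here as a nullable int
        int plain = 0;
        Console.WriteLine(maybe == plain);   // False: null is not equal to zero
        Console.WriteLine(maybe.HasValue);   // False
    }
}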
According to Pointers in C#, if you cast a pointer to an int you get the memory address though. In theory, null still could be a special circumstance, but to me at least, it seems more likely that 0 is the actual "address"/value stored in the pointer. Abstracting it doesn't really change the underlying implementation.
In fact, using unsafe code, it is. In unsafe mode, C# allows pointers which work pretty much like those in C++, and they can be cast to and from integers - just like in C++. And in that case, a null pointer (as opposed to a null reference!) IS stored as integer zero, and converts to 0 when cast to int. For reference types, this doesn't work, because C# doesn't allow pointers to reference-typed objects.
A pointer is essentially an integer. An integer can store any combination of bits and, therefore, there is no "special" value (0, excluded) for NULL to be without abstracting pointers and adding metadata to them. There would be absolutely no advantage to that. If anything, it would slow the process down and consume more memory than necessary.
References are essentially pointers that are managed by the language/compiler. I'm going to assume that more than likely, null is still represented as 0 for references (that can have a null reference) on an assembler (or intermediate) level and the whole interaction is abstracted from the programmer so it seems more complex than it really is. There may be more to references than I think, in which case I could be mistaken, but I can't think of any reason for them to be more complex than that.
https://www.allegro.cc/forums/thread/596785/756291
Saiful's World
Wednesday, January 23, 2013
Write a program that reverses alternate elements in a given linked list input: a->b->c->d->e, output should be b->a->d->c->e

Node reverseAlternate(Node head) {
    if (head == null || head.next == null)
        return head;

    Node r = reverseAlternate(head.next.next);
    Node t = head.next;
    t.next = head;
    head.next = r;
    return t;
}
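The method assumes a singly linked Node type, which the post does not show. A minimal, self-contained sketch (the Node class, class name, and driver below are illustrative, not from the original post; the method is repeated in static form only to make the demo compile on its own):

class Node {
    char data;
    Node next;
    Node(char data) { this.data = data; }
}

public class ReverseAlternateDemo {
    static Node reverseAlternate(Node head) {
        if (head == null || head.next == null)
            return head;
        Node r = reverseAlternate(head.next.next);
        Node t = head.next;
        t.next = head;
        head.next = r;
        return t;
    }

    public static void main(String[] args) {
        // build a->b->c->d->e
        Node head = new Node('a');
        head.next = new Node('b');
        head.next.next = new Node('c');
        head.next.next.next = new Node('d');
        head.next.next.next.next = new Node('e');

        // expect b->a->d->c->e
        for (Node n = reverseAlternate(head); n != null; n = n.next)
            System.out.print(n.data + (n.next != null ? "->" : "\n"));
    }
}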
Posted by Saiful at 2:17:00 AM 0 comments
Labels: Facebook, LinkList, Programming
Location: Dhaka, Bangladesh
Monday, January 21, 2013
Printing all possible subsets using BitMask
#include <stdio.h>
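Only the include line of the original listing survives in this copy. A minimal sketch of the bitmask approach the title describes, assuming the elements are the numbers 1..n read from input:

#include <stdio.h>

int main(void)
{
    int n;
    if (scanf("%d", &n) != 1 || n < 0 || n > 20)
        return 1;

    /* bit i of mask decides whether element i+1 belongs to the current subset */
    for (int mask = 0; mask < (1 << n); mask++) {
        printf("{ ");
        for (int i = 0; i < n; i++)
            if (mask & (1 << i))
                printf("%d ", i + 1);
        printf("}\n");
    }
    return 0;
}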
Posted by Saiful at 12:20:00 AM 0 comments
Labels: Programming, Subset
Location: Dhaka, Bangladesh
Tuesday, December 11, 2012
Write a function to print all subsets of a certain size of the numbers from 1 to n
#include <iostream>
#include <stdio.h>
using namespace std;
const int MAX = 1000;
int a[MAX];
bool b[MAX];
void f1(int n, int k, int indexn, int indexk){
int i, j;
for(i = indexn; i <= n - k + indexk; i++){
if(!b[i]) {
b[i] = true;
a[indexk] = i;
if(indexk == k) {
for(j = 1; j <= k; j++) {
printf("%d ", a[j]);
}
printf("\n");
} else {
f1(n, k, indexn + 1, indexk + 1);
}
b[i] = false;
}
}
}
void f(int n, int k) {
int i;
for(i = 1; i <= n; ++i) {
a[i] = i;
b[i] = false;
}
f1(n, k, 1, 1);
}
int main()
{
int n,k;
while(scanf("%d %d",&n,&k) == 2)
{
f(n,k);
}
return 0;
}
Friday, November 23, 2012
Unable to establish a secure connection to iTunes/iPhone
"Could not establish a secure connection to the device.".
Posted by Saiful at 10:24:00 AM 0 comments
Tuesday, November 13, 2012
Make Xcode(4.x) faster
How.
Posted by Saiful at 11:52:00 PM 0 comments
Monday, August 06, 2012.
Xcode method
Either update method will get you onto the beta. It's up to you to decide what method is most appropriate for your situation.
For more Check this.
Downgrade
-
Typically you can’t downgrade iOS versions so easily, but because Apple is still signing iOS 5.1.1 this allows downgrading to commence with minimal effort.
http://saifsust.blogspot.com/
Delete old GIT branches already merged into master
(cross-post from)
It was time to clean up some old git branches at TaskRabbit today. It turned out that we had hundreds of branches that were “old”, and could be removed. What do I mean by “old”? As with many things, coming up with the proper definition is 1/2 the battle. At the end of the day, “old” meant “I have been merged into master, and contain no un-merged code” (where master is your integration branch).
When phrased this way, there are some systematic and simple ways to due some git pruning. Here’s a simple rake task:
namespace :git do
  desc "delete remote branches which have already been merged into master"
  task :clean_merged_branches do
    # make sure we are sitting on master before deleting anything
    branch_lines = `git branch`.split("\n")
    raise "You need to be in master to start this game" unless branch_lines.any? { |line| line.strip == "* master" }
    local_branches = branch_lines.map { |line| line.gsub("*", "").strip }

    say_and_do("git fetch --prune")

    # remote branches that are already fully merged into master
    bad_branches = `git branch -r --merged`.split("\n").map { |line| line.strip }
    bad_branches.each do |bad_branch|
      parts = bad_branch.split("/")
      remote = parts.shift
      if remote == "origin"
        branch = parts.join("/")
        next if branch =~ /^HEAD.*/
        next if branch =~ /^refs\/.*/
        next if branch == "master"
        next if branch =~ /.*staging.*/
        next if branch =~ /.*production.*/
        say_and_do("git branch -D #{branch}") if local_branches.include?(branch)
        say_and_do("git push origin :#{branch}")
      else
        puts "skipping #{bad_branch} because it doesn't have a remote of 'origin'"
      end
    end
  end
end

def say_and_do(stuff, explanation = nil)
  puts explanation if explanation
  puts " > #{stuff}"
  `#{stuff}`
end
The trick here is the git branch -r --merged command, which does exactly what we want: tell me about the remote branches which have all been merged into my current branch, master. We simply collect those branches, and delete them locally (if they exist) and on origin.
The logic goes like this
- Ensure I am in the master branch
- git fetch --prune (clean up my local branch list according to remote’s list)
- git branch -r --merged (show me the branches which have been merged into the integration branch)
- loop through those branches and delete them locally and remotely
Two other points of note:
- It’s very likely that you will have some staging, test, and production branches which are either equivalent to or slightly behind your integration branch. You probably want to explicitly ignore those
- If you have more than one remote branch setup (perhaps heroku for deployment or tddium for testing), you want to be sure to ignore any branch which isn’t from “origin”
Related protips
2 Responses
Add your response
There's a nice tool from Arc90 that also does this:
(megusta)
https://coderwall.com/p/fytgoq/delete-old-git-branches-already-merged-into-master
Thursday 23 February 2017
In my last blog post, A tale of two exceptions, I laid out the long drawn-out process of trying to get a certain exception to make tests skip in my test runner. I ended on a solution I liked at the time.
But it still meant having test-specific code in the product code, even if it was only a single line to set a base class for an exception. It didn’t feel right to say “SkipTest” in the product code, even once.
In that blog post, I said,
One of the reasons I write this stuff down is because I’m hoping to get feedback that will improve my solution, or advance my understanding. ... a reader might object and say, “you should blah blah blah.”
Sure enough, Ionel said,
A better way is to handle this in coverage’s test suite. Possible solution: wrap all your tests in a decorator that reraises with a SkipException.
I liked this idea. The need was definitely a testing need, so it should be handled in the tests. First I tried doing something with pytest to get it to do the conversion of exceptions for me. But I couldn’t find a way to make it work.
So: how to decorate all my tests? The decorator itself is fairly simple. Just call the method with all the arguments, and return its value, but if it raises StopEverything, then raise SkipTest instead:
def convert_skip_exceptions(method):
"""A decorator for test methods to convert StopEverything to SkipTest."""
def wrapper(*args, **kwargs):
"""Run the test method, and convert exceptions."""
try:
result = method(*args, **kwargs)
except StopEverything:
raise unittest.SkipTest("StopEverything!")
return result
return wrapper
But decorating all the test methods would mean adding a @convert_skip_exceptions line to hundreds of test methods, which I clearly was not going to do. I could use a class decorator, which meant I would only have to add a decorator line to dozens of classes. That also felt like too much to do and remember to do in the future when I write new test classes.
It’s not often I say this, but: it was time for a metaclass. Metaclasses are one of the darkest magics Python has, and they can be mysterious. At heart, they are simple, but in a place you don’t normally think to look. Just as a class is used to make objects, a metaclass is used to make classes. Since there’s something I want to do every time I make a new class (decorate its methods), a metaclass gives me the tools to do it.
class SkipConvertingMetaclass(type):
"""Decorate all test methods to convert StopEverything to SkipTest."""
def __new__(mcs, name, bases, attrs):
for attr_name, attr_value in attrs.items():
right_name = attr_name.startswith('test_')
right_type = isinstance(attr_value, types.FunctionType)
if right_name and right_type:
attrs[attr_name] = convert_skip_exceptions(attr_value)
return super(SkipConvertingMetaclass, mcs).__new__(mcs, name, bases, attrs)
There are details here that you can skip as incantations if you like. Classes are all instances of “type”, so if we want to make a new thing that makes classes, it derives from type to get those same behaviors. The method that gets called when a new class is made is __new__. It gets passed the metaclass itself (just as classmethods get cls and instance methods get self), the name of the class, the tuple of base classes, and a dict of all the names and values defining the class (the methods, attributes, and so on).
The important part of this metaclass is what happens in the __new__ method. We look at all the attributes being defined on the class. If the name starts with “test_”, and it’s a function, then it’s a test method, and we decorate the value with our decorator. Remember that @-syntax is just a shorthand for passing the function through the decorator, which we do here the old-fashioned way.
Then we use super to let the usual class-defining mechanisms in “type” do their thing. Now all of our test methods are decorated, with no explicit @-lines in the code. There’s only one thing left to do: make sure all of our test classes use the metaclass:
CoverageTestMethodsMixin = SkipConvertingMetaclass(
'CoverageTestMethodsMixin', (), {}
)
class CoverageTest(
... some other mixins ...
CoverageTestMethodsMixin,
unittest.TestCase,
):
"""The base class for all coverage.py test classes."""
Metaclasses make classes, just the way classes make instances: you call them. Here we call our metaclass with the arguments it needs (class name, base classes, and attributes) to make a class called CoverageTestMethodsMixin.
Then we use CoverageTestMethodsMixin as one of the base classes of CoverageTest, which is the class used to derive all of the actual test classes.
Pro tip: if you are using unittest-style test classes, make a single class to be the base of all of your test classes, you will be glad.
After all of these class machinations, what have we got? Our test classes all derive from a base class which uses a metaclass to decorate all the test methods. As a result, any test which raises StopEverything will instead raise SkipTest to the test runner, and the test will be skipped. There’s now no mention of SkipTest in the product code at all. Better.
https://nedbatchelder.com/blog/201702/a_tale_of_two_exceptions_continued.html
trunc, truncf, truncl - round to integer, toward zero
#include <math.h>
double trunc(double x);
float truncf(float x);
long double truncl(long double x);
Link with -lm.
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
trunc(), truncf(), truncl(): _XOPEN_SOURCE >= 600 || _ISOC99_SOURCE || _POSIX_C_SOURCE >= 200112L; or cc -std=c99
These functions round x to the nearest integer not larger in absolute value.
These functions return the rounded integer value.
If x is integral, infinite, or NaN, x itself is returned.
No errors occur.
These functions first appeared in glibc in version 2.1.
C99, POSIX.1-2001.
The integral value returned by these functions may be too large to store in an integer type (int, long, etc.). To avoid an overflow, which will produce undefined results, an application should perform a range check on the returned value before assigning it to an integer type.
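A hedged sketch of such a range check follows; the input value is arbitrary, and the comparison against LONG_MAX is only approximate at the extreme ends of the range.

#include <limits.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 123456.789;
    double t = trunc(x);

    /* check the range before assigning to a narrower integer type */
    if (t >= (double) LONG_MIN && t <= (double) LONG_MAX)
        printf("%ld\n", (long) t);
    else
        printf("truncated value does not fit in a long\n");
    return 0;
}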
ceil(3), floor(3), lrint(3), nearbyint(3), rint(3), round(3)
This page is part of release 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
http://manpages.sgvulcan.com/truncf.3.php
import "golang.org/x/text/currency"
Package currency contains currency-related functionality.
NOTE: the formatting functionality is currently under development and may change without notice.
common.go currency.go format.go query.go tables.go
CLDRVersion is the CLDR version from which the tables in this package are derived.
Amount is an amount-currency unit pair.
Currency reports the currency unit of this amount.
Format implements fmt.Formatter. It accepts format.State for language-specific rendering.
Formatter decorates a given number, Unit or Amount with formatting options.
var (
    // Uses Narrow symbols. Overrides Symbol, if present.
    NarrowSymbol Formatter = Formatter(formNarrow)
    // Use Symbols instead of ISO codes, when available.
    Symbol Formatter = Formatter(formSymbol)
    // Use ISO code as symbol.
    ISO Formatter = Formatter(formISO)
)
Default creates a new Formatter that defaults to currency unit c if a numeric value is passed that is not associated with a currency.
Kind sets the kind of the underlying currency unit.
Kind determines the rounding and rendering properties of a currency value.
var (
    // Standard defines standard rounding and formatting for currencies.
    Standard Kind = Kind{/* contains filtered or unexported fields */}
    // Cash defines rounding and formatting standards for cash transactions.
    Cash Kind = Kind{/* contains filtered or unexported fields */}
    // Accounting defines rounding and formatting standards for accounting.
    Accounting Kind = Kind{/* contains filtered or unexported fields */}
)
Rounding reports the rounding characteristics for the given currency, where scale is the number of fractional decimals and increment is the number of units in terms of 10^(-scale) to which to round.
QueryIter represents a set of Units. The default set includes all Units that are currently in use as legal tender in any Region.
func Query(options ...QueryOption) QueryIter
Query represents a set of Units. The default set includes all Units that are currently in use as legal tender in any Region.
Code:
t1799, _ := time.Parse("2006-01-02", "1799-01-01")
for it := currency.Query(currency.Date(t1799)); it.Next(); {
    from := ""
    if t, ok := it.From(); ok {
        from = t.Format("2006-01-02")
    }
    fmt.Printf("%v is used in %v since: %v\n", it.Unit(), it.Region(), from)
}
Output:
GBP is used in GB since: 1694-07-27
GIP is used in GI since: 1713-01-01
USD is used in US since: 1792-01-01
A QueryOption can be used to change the set of unit information returned by a query.
var Historical QueryOption = historical
Historical selects the units for all dates.
var NonTender QueryOption = nonTender
NonTender returns a new query that also includes matching Units that are not legal tender.
func Date(t time.Time) QueryOption
Date queries the units that were in use at the given point in history.
func Region(r language.Region) QueryOption
Region limits the query to only return entries for the given region.
Unit is an ISO 4217 currency designator.
FromRegion reports the currency unit that is currently legal tender in the given region according to CLDR. It will return false if region currently does not have a legal tender.
FromTag reports the most likely currency for the given tag. It considers the currency defined in the -u extension and infers the region if necessary.
MustParseISO is like ParseISO, but panics if the given currency unit cannot be parsed. It simplifies safe initialization of Unit values.
ParseISO parses a 3-letter ISO 4217 currency code. It returns an error if s is not well-formed or not a recognized currency code.
Amount creates an Amount for the given currency unit and amount.
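A short usage sketch based only on the calls documented above; exact rendering of the printed amount is left to the package:

package main

import (
	"fmt"

	"golang.org/x/text/currency"
	"golang.org/x/text/language"
)

func main() {
	u, err := currency.ParseISO("EUR")
	if err != nil {
		panic(err)
	}
	// Amount implements fmt.Formatter, so it can be printed directly.
	fmt.Printf("%v\n", u.Amount(12.50))

	// Look up the currency that is legal tender in a region.
	if unit, ok := currency.FromRegion(language.MustParseRegion("US")); ok {
		fmt.Println(unit)
	}
}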
String returns the ISO code of u.
Package currency imports 9 packages (graph) and is imported by 19 packages. Updated 2019-08-02.
https://godoc.org/golang.org/x/text/currency
P1688, an outline and high-level design for the proposed Technical Report.
This document contains a summary and detailed minutes for each of the telecons.
2. Summary of Meetings
3. Meeting Minutes
3.1. 2019-03-08 Meeting Minutes
Attendance:
Ben Craig.
Bryce Adelstein Lelbach (NVIDIA).
Isabella Muerte.
Olga Arhipova (Microsoft).
Ben Boeckel (Kitware).
Boris Kolpackov (build2).
Rene Rivera (Boost.Build).
Richard Smith (Google).
Tom Honermann (Synopsys).
Mark Zeren (VMWare).
Christof Meerwald.
Bruno Lopes (Apple).
JF Bastien (Apple).
Michael Spencer (Apple).
Mathias Stearn (MongoDB).
Corentin Jabot.
Peter Bindels (TomTom).
Steve Downey (Bloomberg).
Chair Notes (Bryce Adelstein Lelbach):
For the purpose of our discussions, let’s assume that an ISO Technical Report is the right type of document. If we find that this type of document doesn’t meet what we need, we’ll use another type of document.
When should we aim to have the Technical Report completed?
When C++20 is published?
After C++20 is published?
When it’s ready!
Should the Technical Report be a living document that is evolved/revised?
Have it be versioned, not live-at-head?
Who will use modules?
Compiler vendors.
Build system vendors.
Tool vendors.
Library vendors.
Distribution vendors (ex: RPM, Debian, homebrew, vcpkg).
End users.
Other languages that want to interact with C++ modules.
What concrete questions do we need to address in the Technical Report?
Ex: Is shipping prebuilt BMIs best practice?
How do we express module mappings to compilers?
Specification of available modules from an installed package?
What do we need to do to enable adoption of modules by existing build systems and tools?
How can we best exploit them long term?
How do I express that a header is modular (e.g. that you can do import <foo>; and that #include <foo> can be treated as an import)?
How do we avoid conflicts between module names?
How do we maintain ABI compatibility with modules?
How do we deal with conflicts between the compiler options that different producers of modules use and that consumers of modules use?
What happens to the textual inclusion model in the modular world?
When and how should you migrate to modules? When should you use #include, import <foo>, or import foo? How do we communicate this to users?
Should producers ship both headers and modules? When do you get rid of your headers?
How do you make your code modular? How do you break cyclic includes? (Is this all in scope)
How do people write headers that need to work for C++03/11/14/17 and be modular for C++20?
What is a good size for a module? When is a module too big/small? How do modules scale (e.g. if I have a huge module and use just a few things, do I pay a price)? How do large modules affect build parallelism/dependency graphs?
Don’t be too prescriptive; make sure to provide alternatives and suggestions, not specific recipes.
What specific use cases should the Technical Report address?
Ex: Autocompletion in IDEs/editors.
Hello world with modules.
Ideal build setup look like for new projects in a modular world.
Dependency scanning vs explicit module dependencies build example.
Internal (modules that are part of my project) vs external (modules from outside of my project).
Providing #includes for backwards compatibility
Existing build systems consuming modules:
CMake.
Make.
Boost Build.
Internal company build systems.
autoconf.
Meson.
Ninja.
Scons.
Shell scripts.
Waf.
Bazel.
Buck.
Cargo.
Gulp.
Webpack.
Ant.
llbuild.
Evoke.
qmake.
MSBuild.
Mixed build systems
Ex: CMake + qmake in the same build.
CMake + autoconf
Make + CMake
Distributed builds (high bandwidth + high latency and low bandwidth + low latency):
icecc.
ccache.
sccache.
incredibuild.
distcc.
fastbuild.
Bazel remote build execution.
Internal company distributed build systems.
Incremental builds
Building module interfaces for tool purposes (code completion, etc).
Static analysis tools:
Coverity.
Clang static analysis.
Grammatech (Aaron Ballman).
cppcheck.
Other tools that traditionally operate on a single TU:
SWIG.
Things based on clang tooling.
CastXML.
Qt moc.
Test mocking frameworks.
Google mock.
Things that generate code
protobuf.
Test case reduction tools:
creduce.
delta.
Modularizing libraries:
Header only libraries.
Boost.
C++ language extensions:
CUDA (not mentioned by Bryce!).
Vulkan.
OpenCL.
SYCL.
SG15 mailing list issues.
Alternative mailing lists:
discourse.
Google Groups.
Issues today with mailing list:
Dmarc emails.
Action Item for Bryce:
Call Herb today about the mailing list.
Minutes (Ben Craig):
Bryce: When should we have a TR go out?
Mathias: Prefer to release it at the same time as the IS so that we have usage recommendation at the same time as we have the IS support
Tom: Agree with Mathias, but not too concerned about the time. Let’s work on it now and see when it is ready.
Rene: Just saying that the argument for delaying the TR would also apply to not having modules in 20.
Mathias: Yes, but since modules are currently in for 20, they should be useable when released. I think the TR is a major component of the usability story
Olga: Hoping the doc evolves beyond C++20. It would be ideal for us to know everything, and we will get something useful, but hopefully we can evolve it past that.
Bryce: open question: can a TR be a living document? Can it be evolved?
Mathias: I think we can do versions, but probably not a live-at-head document.
Rene: I just want to say that doing releases of it are best.
Bryce: Let’s assume that an ISO TR is the right kind of document.
Corentin: There is no way we can do everything we want by the time C++20 is released. We should aim for a first release that is minimal. Maybe aim for more complete by C++21 or so. The document needs to give a set of good practices and bad practices. If module names don’t match files, we can’t change that, so we need to discourage that practice.
Tom: Maybe today we can establish a priority list of things to address in the TR.
Bryce: Just because the standard is out at C++20 doesn’t mean that implementations will be available in C++20
Steve D: The full standard may not be out in 2020, but people will start to want to work with them now in /std:c++2a modes.
Mathias: All the big compilers have early implementations.
Bryce: Work on defining scope, priorities, and goals for the Technical Report. Between now and next meeting, we will collate that list of things and try to prioritize those things.
Ben Craig: Stakeholders that should be considered are compiler vendors, build tool vendors, library vendors, and end users.
Bryce: Should consider, and determine if package vendors are in scope (like system package managers).
Ben Boeckel: Need to determine where package managers should put module interface units. That may be the extent of their involvement.
Rene: If you make it clear what modules are not meant to cover, then we can scope better.
Tom: Don’t forget SWIG, static analyzers.
Bryce: Agreed. Splitting build system vendors and tool vendors.
Corentin: Distribution maintainers seems more accurate.
Ben: depends on what you call anaconda, nix, and ports.
Mathias: Maybe foreign function interfaces? May be out of scope?
Bryce: Unsure if it falls into one of these categories.
JF: For Clang modules, we’ve been bridging between objective C and C++.
Olga: MSVC also considers modules as a way to interact with other languages.
Mathias: Other languages may also produce modules to be consumed by C++.
Corentin: I think I’d call them distribution too to avoid confusion with package.
Steve D: Filesystem hierarchy maintainers is more the case than distro maintainers. At least for a first cut.
Ben B: Yes, that's Linux-centric, but anything which deals with install prefixes cares.
Steve D: BSD has a similar set of rules. POSIX-ish.
Ben B: They'll get ports since BSD is C for the long term.
Bryce: What list of concrete open questions should we address? Example: Is shipping prebuild BMIs best practice?
Mathias Stearn: Should these be a list of use-cases we want to address rather than questions?
Boris Kolpackov: Here are some ideas on describing modular libraries in .pc (pkg-config) files:
Ben B: My first thoughts on cmake’s iface:
Ben B: Build tools that know about modules. Old build tools that don’t. How do we tell compilers about modules (i.e. module maps). How to consume a library via modules (i.e. pkgconfig). pkgconfig tells me -I and -L flags. How do I convey that information? Specification of available modules from an install.
Tom: What do we need to do to enable adoption? How can we adhere to what people do today?
Steve D: How do I say something is eligible to be a header unit? How do I say where the interface for a module lives? When can a #include be converted to an import?
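A minimal sketch of the situation Steve describes, assuming a hypothetical, self-contained header foo.h that the build system has designated as importable:

// foo.h -- hypothetical header; self-contained and macro-free, so it is a good
// candidate for the build system to designate as an importable header (header unit).
#ifndef FOO_H
#define FOO_H
inline int foo_value() { return 42; }
#endif

// consumer.cpp -- either spelling should observe the same declarations; an
// implementation is even allowed to translate the #include into the import.
import "foo.h";        // C++20 header-unit import (needs build-system support)
// #include "foo.h"    // classic textual inclusion
int main() { return foo_value() == 42 ? 0 : 1; }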
Corentin: How do we avoid conflicts between module names? How do we maintain ABI with modules?
Bryce: Module ABI has multiple models of ownership. We’ll need to discuss it at some point.
Mathias: How close can we get to having modules be "self-descriptive" rather than relying on pkg-conf.
Ben B: Would generating the pkg-conf from the source be sufficiently self-descriptive for you?
Mathias: I meant as a distribution format. If module interface sources are distributed as archive files (still not agreed on, but the more I think about it, the more I like it), we can easily include as much metadata in any format we want. At that point, I don’t think shoehorning the metadata into pkg-conf is useful.
Ben B: You might still need to know where to look for those files and their module name (if, e.g., you have a ácceñt module name on Windows).
Michael: The distribution format for module interfaces may as well be text.
Mathias: Sure, but the same is true of the .pc files. My issue with .pc is that I don’t think actual compile flags are the best way to convey metadata.
Ben B: It wouldn’t contain compile flags, but the same info that CMake stores in its usage requirements. Making the actual flags and rules is up to the consumer of the .pc file. Boris and I have very similar ideas here based on what I see in his link.
Boris: Yeah, looks pretty similar.
Mathias: Michael, in that situation, would you have each module unit distributed in separate files, even for libs with 100’s of internal module partitions?
Michael: Yeah.
Mathias: (I know that is what we do today, so it wouldn’t be much worse, this just seems like a reasonable thing to improve upon)
Ben B: A tool to take a module file and 100 partitions and output a single miu would be useful. But can be done in the future. I’d like to see such a tool first before we try specifying it.
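For reference, a minimal sketch (hypothetical names) of the arrangement being discussed, where a primary interface unit re-exports its partitions so consumers see a single module:

// big-part1.cppm -- partition interface unit
export module big:part1;
export inline int part1_value() { return 1; }

// big.cppm -- primary module interface unit; re-exports the partition, so
// 'import big;' gives consumers everything even though the library is split
// across many interface files internally.
export module big;
export import :part1;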
Ben C: How can we best exploit modules long term?
Olga: Maybe group these into questions per stakeholder?
Tom: Multiple models for consuming modules. Separate compilation vs. textual inclusion. With separate compilation, you can get conflicting options. What kind of guidance do we give to avoid those conflicts so that the textual inclusion model can work. How do we deal with conflicts between the options that producers and consumers use and the impact to the ability to support a textual inclusion model?
Mathias: Not sure if it is technically possible to support textual inclusion
Tom: Good discussion to have at some other time.
Bruno: How do we teach people to migrate to modules? When do we suggest to use old #include, vs header units, vs modules?
Steve D: What are the techniques for fixing your code so that it can be modular? May be out of scope. Cycle breaking for example.
Tom: How do people write headers that work for both C++20 and C++17? While still being modular?
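One possible pattern for Tom's question, sketched under the assumption that the existing header stays untouched and a C++20-only wrapper module is added on top (file names are hypothetical):

// foo.h -- unchanged; still consumable via #include from C++03 through C++17.
#ifndef FOO_H
#define FOO_H
inline int foo() { return 42; }
#endif

// foo.cppm -- C++20-only named module that re-exports the header as a header
// unit, so modular consumers can write 'import foo;' while older consumers
// keep using #include. Requires the build to treat foo.h as importable.
export module foo;
export import "foo.h";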
Corentin: How big should modules be? What makes them too big, too small?
Bryce: Modules are scalable in Clang, and big modules are fine?
Richard: They are scalable, and they might be more efficient.
Bryce: Is that expected to be true of all implementations?
Richard: Unclear if it will be true for all implementations.
Rene: We should be sure to stick to tooling as much as possible and not to get too much into software design.
Mathias: Rene, perhaps observations might work as notes in the TR?
Rene: Sure. Note this is only for the software design POV. Prescribing for tooling is fine.
Michael: It wouldn’t be crazy to have a single text file represent multiple modules, it’s just more non standard extensions to do it.
Ben B: Yeah, but we can include "here’s timings for project X using {tiny,large} modules for {new,old} build systems".
Olga: From the user's perspective, libraries will create modules. From the build perspective it isn’t a question of how big one module is, but how many modules and how long of a dependency chain you have. This is what will affect the build throughput the most.
Bryce: Specific use cases to address?
Tom: Existing build systems need to be able to consume modules, with minimal updates. Only updating flags.
Existing build systems:
CMake.
Make.
Boost Build.
Internal company build systems.
autoconf.
Meson.
ninja.
shell scripts.
scons.
waf.
Bazel.
Buck.
Cargo.
Gulp.
WebPack.
Ant.
llbuild.
Evoke.
msbuild.
qbs.
qmake.
Mathias Stearn: As a user of scons, I don’t want scons to support modules so I have yet another reason to move off of it :).
Corentin: We should say in the TR that waf is not supported.
Ben B: Isn't waf just a scons fork, stripped down? My only experience is via mpv, where it's decent (but also a C-only project).
Steve D: Use case: Hello world with modules.
Olga: Just building module interfaces for tooling purposes. Code completion.
Mathias: Distributed builds (high bandwidth + high latency, and low bandwidth + low latency environments):
FastBUILD.
IceCC.
IncrediBuild.
sccache.
ccache.
distcc.
Bazel remote build execution.
My company’s internal distributed build systems.
Mathias: What does an ideal one look like?
Tom: Static analysis tools, where source code is paramount. Comments get used sometime.
Coverity.
Clang static analysis.
Grammatech (Aaron Ballman).
CppCheck.
Ben B: Static analysis: viva64.
Mathias: Incremental builds.
Bryce: Dependency scanning vs. explicit explicit module dependency example.
Side by side example where one version uses scanning, other uses explicit deps.
Tom: Other tools that traditionally operate on a single TU:
SWIG.
clang tooling.
Qt Moc.
Test mocking frameworks.
Google mocks.
Bruno Lopes: GCCXML.
Ben B: Superseded by castXML.
Ben B: SWIG reads and writes c++; protobuf writes it.
Mathias: Won’t anything using clang-tooling JustWork with modules with something like -frewrite-imports?
Tom: Mathias, maybe? So long as BMIs aren’t shared between tools/compilers built with incompatible Clang versions.
Mathias: I think -frewrite-imports is a BMI-free solution. Bruno, can you confirm?
Tom: -frewrite-imports might consume an existing BMI.
Bruno: It could, but it doesn’t need to.
Mathias: I was thinking of it as similar to nim-lang, which compiles to C++ source among other targets.
Ben B: Coccinelle? Though it's mostly pattern matching anyway, AFAIK.
Things that generate code (out of scope?):
protobuf.
Corentin: Mixed build systems:
Ex: CMake and QMake in the same build.
More likely CMake + lots of automake.
Olga: Internal distributed builds (MS and others have it).
Steve D: Internal vs. external modules (things from my project vs. out of my project).
Steve D: Bug reporting tools, like creduce, delta.
Bryce: Modularizing libraries:
Header only libraries.
Boost.
Ben B: Provide #includes for backwards compat.
Corentin: CUDA, Vulkan, OpenCL, SYCL.
3.2. 2019-03-22 Meeting Minutes
Attendance:
Anna Gringauze.
Ben Boeckel.
Ben Craig.
Bruno Cardoso Lopes.
Colby Pike.
Gor Nishanov.
JF Bastien.
Mark Zeren.
Mathew Woehlke.
Mathias Stearn.
Michael Spencer.
Nathan Sidwell.
Olga Arkhipova.
Peter Bindels.
Rene Rivera.
Steve Downey.
Stephen Kelly.
Tom Honermann.
Module Mapping: What approaches and formats are effective for communicating module name <-> module-interface/header file mappings? Module name + configuration <-> BMI mappings?
Module Naming: How should module names be structured? How do we avoid conflicts between different projects? How do we deal with versioning?
Module Granularity: What size should modules be to maximize performance and usability? Does the cost of an import scale with the size of the module?
Module ABI: How do we maintain stable ABIs in a modular world?
Codebase Transition Path: How should projects transition from headers to modules? How should projects support both pre-C++20 headers and C++20 modules?
BMI Configuration: How do we find the BMI that was compiled in the same way as the current TU? What defines the configuration of a BMI?
BMI Distribution: How effective is the distribution of BMIs alongside module-interface/header files?
Dependency Scanning: How do we do dependency scanning in a modular world? Can we make it fast?
Build Performance: How do modules impact build performance? What impact does modules have on parallelism in C++ builds?
Guidance: Concise set of guidelines for the C++ ecosystem. Addresses questions raised in Usage and draws conclusions based on results from Findings.
Do we agree to use this outline as a starting point?
What’s missing (Findings sections in particular)? How can this be improved?
Who is interested in working on a particular section of the proposed outline? Please collect a list of names. Don’t volunteer for everything - just for what you care about and can commit to working on.
Who is interested in working on a particular stakeholder group? Collect a list of names.
For each stakeholder group, we need a short description of the group (e.g. what things are in the group) and a bullet list of the issues that matter for that group. Volunteers? Collect a list of names.
Archetypes:
Hello world with modules.
Header only library.
Incremental build.
Distributed build.
Building BMIs only for tooling consumption.
Dependency scanning vs explicit module dependencies build.
We need more concrete examples. Who volunteers to go write some up? Collect list of names.
Format: The TR will likely need to be in LaTeX, using a fresh fork of the IS LaTeX that has been customized, similar to the Coroutines TS LaTeX. We have had very painful issues with non-LaTeX formal documents in the past. Ex: The Parallelism TS v2 was originally written in HTML which was converted to PDF for publication. It had to be completely rewritten in LaTeX after we voted to publish it because of typesetting issues raised by ISO that could not be resolved. A GitHub organization has been created for collaboration. Not sure how we’ll use it yet; feel free to create a repository for examples and/or brainstorming. JF Bastien can add people while Bryce is away.
Appendix.
List from Bryce’s pre-Kona notes.
Module Map Format.
Name of Module + ABI Hash -> Physical Location.
Module Mappers.
Module ABI Hashing.
Module Versioning.
Dependency Scanning.
How should tools use a dependency scanner? Command line? Programmatic API?
List from Rene/Corentin presented at the Kona evening session:
Module name <-> Module header unit name mapping.
Module name <-> BMI mapping.
Module naming.
Guidelines for BMI implementations strategies.
Guidelines (maybe format) for shipping modularized closed source libraries.
Guidelines for Linux distributions (maybe).
Guidelines/format for handling legacy header units.
Guidelines for using modules:
ABI concerns/hashing.
Not authoring modules for 3rd party code.
Chair Notes (Ben Craig):
Possibly interleave findings and guidance?
Need concrete code base / code bases, need multiple concrete build systems.
Tie things to specific examples.
For multiple stakeholders, use cases.
Kitchen Sink CopperSpice example?
Volunteers/interest in particular aspects of the Technical Report:
Peter Bindels + Gaby: Volunteering for hello world with modules section.
Corentin Jabot, Peter Bindels, Mathias Stearn: Module Mapping P1484 mapping to source files. Mapping to BMI. Where to find a header unit.
Michael Spencer and Ben Boeckel: Dependency Scanning.
What we’ve found, and what others might want to do. This is from the impl’s perspective.
Does the "can we make it fast?" belong in the TR?
Mathias Stearn + Rene Rivera: Build Performance?
Can’t get recommendations yet based off of current work.
Different models of building? Concurrent BMI and .o vs. distinct builds. Needs research.
Chicken and Egg.
Ben Craig + Tom Honermann + Steve Downey + Stephen Kelly: Codebase Transition Path.
Olga Arkhipova + Bruno Cardoso Lopes (maybe?): BMI Distribution.
Mathias + Ben + Googler TBD: Distributed build.
Microsoft (Gaby + Olga): Dependency scanning vs explicit module dependencies build.
Michael Spencer: Incremental Builds (information on performance and build theory).
Anna Gringauze, Tom Honermann: Building BMIs only for tooling consumption:
Probably going to look at source. Explain how tools are different from compilers.
Sharing BMIs between compilers and tools.
Bruno Cardoso Lopes (maybe?): BMI Configuration:
TBD:
Header only library.
Module Naming.
Module Granularity.
Stakeholders to be covered on a per item basis.
Hope that at another meeting we can establish priorities.
Minutes (Tom Honermann):
Ben C: Introduces agenda; agreement on the outline of the TR; get volunteers; minimal tech talk.
Ben C: Any objection to the findings as an outline?
JF: Need to base this on code at some point. Would like to see a code base that uses modules and multiple build systems. We can talk all day about theoretical concerns, but need to base work on reality.
Ben C: As an example, modularize Boost?
JF: Need to focus on applications.
Ben C: Need to distinguish modularizing a library and consuming a library.
Peter: Perhaps try the kitchen sink example from CopperSpice?
Tom: JF, do you want real projects or exemplary projects?
JF: Real projects.
Steve D: Don’t think we need really real projects, just exemplary ones. POSIX demonstrates how to do compiles, link; use case oriented.
Tom: Agree, and it would be nice to have examples in the TR demonstrating usage.
Ben C: Bryce has a hello world with modules that could be an example in the TR.
Peter: I volunteer to make a hello world example.
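For concreteness, the kind of minimal example being volunteered for here might look like the following (file names and the required build invocations are hypothetical and compiler-specific):

// hello.cppm -- module interface unit
module;                          // global module fragment for the legacy header
#include <iostream>
export module hello;
export void greet() { std::cout << "Hello, modules!\n"; }

// main.cpp -- importer
import hello;
int main() { greet(); }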
Ben C: Back to the outline, who is working on what? Corentin and Peter are working on module mapping?
Peter: We have P1484.
Mathias: I volunteer to help with module mapping.
Tom: Which aspect of module mapping are we discussing here?
Peter: Mapping to source.
Mathias: Also want mapping to BMIs.
Ben C: Sounds like this covers mapping to source, BMI, and indication of header units.
Ben C: Michael Spencer is working on dependency scanning. So is Ben Boeckel. Can I record them as volunteers to work on this?
Michael: Yes.
JF: Dependency scanning is part of build system implementation. What is the goal of discussing dependency scanning (and other features we’re discussing) as part of the TR?
Mathias: It is a contract between stakeholders.
Tom: Not concerned about implementation details; concerned about ensuring metadata is represented in ways usable by multiple tools, build systems, etc.
Michael: The best/fastest way to build modules are relevant for implementation.
Peter: Perhaps worth discussing trade offs between fast and accurate?
Mathias: Let’s take inaccurate off the table.
Ben C: Moving on to build performance, can I sign Rene and Mathias up for that?
Mathias: Yes, questions to address: can BMI and object files be built concurrently? What gets built and how? These are worth researching.
Rene: Happy to work on performance related issues and testing. There is a chicken/egg problem of needing working compilers. We can’t tackle distributed builds without additional work.
Mathias: Would like feedback on how reasonable it is to look at performance of current compiler incarnations.
Nathan: In three years time, performance profile will probably be quite different; focus now is correctness, not speed.
Michael: Clang has some inefficiencies around finding modules now. I think overhead of modules will go down over time. Dependencies will remain.
Mathias: Wondering about relative performance, scanning vs code gen vs BMI gen, etc... Perhaps a TR2 would be a good focus for performance.
Michael: Some performance sensitive things will change, some things won’t.
Ben C: Looking at code base transition path now. BC volunteers.
Tom: Interested in transition path.
Stephen K: Also interested.
Steve D: Also interested. Will bring a Lakos and Bloomberg informed focus.
Ben C: Olga, will you sign up for BMI distribution?
Olga: Yes. We’ve had internal discussions about sharing BMIs.
Ben CL: Volunteers to work on BMI distribution as well.
Steve D: This overlaps with sharing of object files as well. What is the range of IFNDR when sharing BMIs? If a BMI isn’t suitable, how does it get recompiled.
Tom: Is Michael interested in volunteering with regard to BMI distribution?
JF: We can volunteer to write a section that says "don’t".
Ben C: I don’t think we should have a section that raises questions for all stakeholders. Instead, each area under findings should raise questions and explore them from the standpoint of each stakeholder.
Mathias: I agree, though not particularly productive to discuss now until we have stuff to put in the doc.
Ben C: Makes sense.
Ben C: Peter volunteered for hello world, volunteers for distributed build?
Mathias: I volunteer. Would be nice to have someone from Google due to difference in approaches.
Gor: Volunteers Gaby to contribute to hello world examples.
Ben C: Would like to sign up Gaby for explicit module dependencies as the Microsoft Edge team purportedly used them.
Mathias: Do we want to encourage explicit module dependencies?
??: No.
Tom: Matches existing PCH usage in Microsoft ecosystems.
Ben C: We should discuss.
Olga: For dependency scanning, we are planning to do work to support this, but haven’t started yet. Mixed mode dependency scanning and explicit dependencies may happen.
Ben C: Moving on to header only libraries. Done today to avoid build system pain. Any volunteers?
Peter: Catch2 considering moving away from header-only for technical reasons (e.g., build speed).
Ben C: No volunteers for header-only.
Ben C: Volunteers for incremental-build? Kind of inherent to builds in general...
Mathias: I volunteer to writeup something for incremental builds.
Ben C: On to building BMIs for tooling consumption.
Olga: I work on static analysis, so interested in special tools. Interested in saving information useful for tools in BMIs.
Tom: Will contribute to discussion on sharing BMIs across compilers/tools.
Ben C: No assignments for module naming, module granularity, BMI configuration. Got the rest. Stakeholders to be covered on a per item basis.
Tom: Perhaps next meeting we can have everyone vote about their highest priority concerns to be addressed in the TR.
3.3. 2019-04-05 Meeting Minutes
Attendance:
Ben Craig.
Bryce Lelbach.
JF Bastien.
Ben Boeckel.
Bruno Lopes.
Gabriel Dos Reis [Gaby] (Microsoft).
Isabella Muerte [Izzy].
JF Bastien.
Mark Zeren.
Mathias Stearn.
Rene Rivera.
Steve Downey.
Michael Spencer.
Olga Arhipova (Microsoft).
Module Mapping (Nathan Sidwell, Corentin Jabot, Peter Bindels, Mathias Stearn): What approaches and formats are effective for communicating module name <-> module-interface/header file mappings? Module name + configuration <-> BMI mappings?
Module Naming: How should module names be structured? How do we avoid conflicts between different projects? How do we deal with versioning?
Module ABI: How do we maintain stable ABIs in a modular world?
Codebase Transition Path (Ben Craig, Tom Honermann, Steve Downey, Stephen Kelly): How should projects transition from headers to modules? How should projects support both pre-C++20 headers and C++20 modules?
BMI Configuration (Bruno Cardoso Lopes): How do we find the BMI that was compiled in the same way as the current TU? What defines the configuration of a BMI?
BMI Distribution (Olga Arkhipova, JF Bastien, Michael Spencer): How effective is the distribution of BMIs alongside module-interface/header files?
Dependency Scanning (Michael Spencer, Ben Boeckel): How do we do dependency scanning in a modular world? Can we make it fast?
Build Performance (Mathias Stearn, Rene Rivera):
Hello world with modules (Peter Bindels, Bryce Adelstein Lelbach).
Header only library (Bryce Adelstein Lelbach).
Incremental build (Michael Spencer).
Distributed build (Mathias Stearn, Ben Craig, Manuel Klimek?).
Building BMIs only for tooling consumption (Tom Honermann, Anna Gringauze, Olga Arkhipova).
Dependency scanning vs explicit module dependencies build (GDR).
Minutes (Ben Craig):
Ben C: Outcome for prioritization is unclear.
Bryce: Things without volunteers are implicitly lower priority.
Mathias: Module naming and granularity could be folded into other things.
Bryce: At one point, it was combined.
Gaby: That’s more of a coding guideline; not clear to me that it belongs here.
Mathias: It does have performance tradeoffs, particularly when switching between library-sized vs. header-sized modules.
Izzy: This should be for general guidelines, because we could probably convince (for example) the Bloomberg people that it should be one class per partition vs. one class per module.
Gaby: Hoping the TR isn’t just a set of guidelines. It can’t have the force of a standard, but should be more than just a guideline.
Izzy: Maybe put something in the core guidelines for granularity.
Mathias: Should have data to back that up.
Bryce: That’s why the section is called "findings". Should be based on experience and data.
Bryce: Move Module ABI to transition path?
Steve D: Not just part of the transition path.
Bryce: Should have clearly identified people / reps for the stakeholders.
Bryce: Volunteering to work on header only libraries.
Nathan Sidwell’s P1602R0 was presented in Kona in EWG, may still get presented here.
Ben B: Output format to describe what a source file uses and produces.
Bryce: What would this look like if import.cpp was a module partition.
Ben B: (adds some colons to file path) Depends on what gcc outputs for the actual partition BMIs. Haven’t tested that yet. This doesn’t care about is-a-partition or not.
Mathias: What does this return for pure implementation units that are not importable?
Ben B: Wouldn’t be any provides.
Mathias: Provides arrays might be useful if we come up with a multi-module format for distribution.
Gaby: Just a matter of time before we can make multiple modules from one file in C++.
Mathias: Nathan Sidwell is looking to provide extensions along these lines.
Bryce: This doesn’t mention the BMIs configuration.
Ben B: It does not.
Gaby: This is the output of the scanning phase? And it could be input to a build definition?
Ben B: It could be.
Gaby: Just trying to be clear on which phase of the build this comes up.
Mathias: Concerned that the compiler is telling the consumer where files are going, rather than the other way around.
Ben B: The directory came from an initial compiler invocation string.
Mathias: Why not just use logical names.
Ben B: Because without a module map, gcc just comes up with these locations. So the JSON just has a file path hint.
Mathias: so these names are just suggestions.
Ben C: Not fond of the path mismatches where the json gives a path that may not be used.
Ben B: things like filepath are optional.
Mathias: Why include optional things?
Ben B: This is why you need a collator.
Mathias: Then let the collator do the logical to physical mapping.
Mathias: What does it mean for a compiler to want a path?
Steve D: It’s based off of a compiler flag or the compiler defaults.
Mathias: Since you’re going to need to do the mapping, I’m not sure why you want the filepath in the json.
Ben B: Will investigate omitting filepath.
Steve D: If you don’t tell the compiler something, it will pick something.
Izzy: Letting the compiler pick file locations made it very difficult to build things in a performant way. Had to do lots of docker work to fix that in a past project.
Gaby: It’s not a design flaw in GCC that it outputs things by default.
Olga: Traditionally all build systems are explicit about the outputs.
Olga: Would like to focus on the "provides" and "requires". Don’t want "depends" to be required. Want a special mode that has less information for performance.
Ben B: flag for minimal information would be fine, but that file wouldn’t be useful for some build tools.
Mathias: Why would a build tool need anything besides provides and requires.
Ben B: Need to provide dep files as an input edge.
Bryce: Is there any existing thing for fortran?
Ben B: No. Cmake has to parse fortran today.
Bryce: Was Olga’s concern just performance?
Olga: Yes.
Bryce: Without data, I’m hesitant to raise this as an issue.
Olga: Not just planning on using this for builds, but other operations like IDEs. Speed is really critical there since it is interactive.
Gaby: Have you seen how long it takes to output the information?
Ben B: No, haven’t measured performance yet.
Olga: It’s not just producing the json, but parsing the source to get that information.
Ben B: Note that scanning doesn’t need to do full preprocessing.
Spencer: In a few days I’ll be giving a talk on dep scanning for modules. Scanning then doing an explicit build is faster than doing an implicit build.
Gaby: Would like a specification as to what is meant when we say scanning.
Bryce: Different applications may need different amounts of scanning.
Ben C: Getting a skeleton spec is blocking things.
Izzy: How does the JSON handle unicode?
Ben B: You end up with integer encoded things, and URL percent escaping.
Izzy: Considered using base 64 encoding?
Ben B: It could be done, but you need to know endianness.
Mathias: Don’t need to support non-native endianness.
Ben B: No endianness if you are outputting ascii integers.
Bryce: Getting a skeleton repo set up, LaTeX is basically going to be required.
Izzy: There are tools that can emit both.
Rene: Having graphs in LaTeX is going to be challenging.
Bryce: Amount of startup overhead is a concern. We can reuse what the standard is already using.
Bryce: I have used restructured text recently, maybe that is an option.
Bryce: Volunteering Ben C and Gaby on helping with Github. Izzy for RST things.
Gaby: Can we reuse the C++ github?
Bryce: Will check with Herb.
3.4. 2019-04-12 Meeting Minutes
Attendance:
Bryce Adelstein Lelbach.
Tom Honermann.
Nathan Sidwell.
Anna Gringauze.
Ben Boeckel.
Ben Craig.
Bruno Lopes.
Isabella Muerte [Izzy].
JF Bastien.
Michael Spencer.
Mathias Stearn.
Przemyslaw Walkowiak.
Rene Rivera.
Steve Downey.
Minutes (Ben Craig):
Bryce: New mailing list should be up and running next week, archives may not be migrated.
Nathan presents P1184.
Bryce: What is "cookie".
Nathan: Just an identifier of which compilation / client is calling. For files, also a way of saying which lines to pay attention to. Not a strong crypto cookie. Implementation and name has changed.
Nathan presents P1602.
Bryce: How would you end up with a cycle with modules?
Nathan: An erroneous program. This would be detected here, and could report on that problem. This prevents Make deadlocks.
Nathan: The module mapper is part of make. Defining the magic variable is how make knows what to talk to.
Bryce: Does make do anything for fortran modules or similar languages?
Nathan: I don’t think make has any intrinsic smarts to deal with module systems.
Nathan: Discovering dependencies requires you to build dependencies during the discovery.
Ben B: Historically, for Fortran, the answer has been "run make until it works". In 2008, Fortran support was added to cmake for the makefile generator. It’s supportable with POSIX make. Need something to do the scanning. Compilers historically write modules where the modules want, without letting the build system tell it where to put things. We’re going to push to have fortran support the same build system format description. Brad King is the maintainer of the fortran support.
Tom: GNU make supporting rules for multiple compilers seems tricky.
Nathan: Do you want to build some parts in compiler 1 and others in compiler 2?
Tom: Yes.
Nathan: How does make deal with it?
Tom: It doesn’t, you need to write your own rules.
Bryce: My understanding is that the module mapper is not responsible for dealing with BMIs of different configurations, the user is. If I have some gcc builds and some clang builds, the BMIs aren’t compatible. Would I need different mappers, different cookies, different servers?
Nathan: Similar to debug vs. release? Canonical way is to have different directories for different configurations. Put the different BMIs in different directories. You would have multiple mapper services.
Mathias: Don’t want to require people use different directories and let ccache handle it. Otherwise it gets exponentially explosive.
Bryce: Not sure how that would work.
Mathias: Put everything in one directory, let ccache switch between configs.
Izzy: ccache doesn’t just look at file contents, also looks at command lines.
Ben B: How does ccache work with split DWARF today?
Mathias: ccache just knows about DWARF. May not know about other things.
Ben B: So ccache basically knows how to parse gcc flags too?
Izzy: It does not. It hashes the command line after sorting the argv.
Ben B: Hrm, flags can’t be sorted like that.
Izzy: I might have misread the GitHub issue, I'll have to go back and check, but they do go out of their way to avoid hashing the file as much as possible.
Mathias: No, it definitely knows about the flags, as does icecc.
Bryce: We should discuss ccache and sccache in a future meeting.
Tom: Depending on the build system not just to do module mapping, but also dependencies. Doing that based on what’s encoded in the makefile. How do we fit this into tools for ides or code generation tools that need to resolve dependencies.
Nathan: Dependencies aren’t encoded in the makefile.
Tom: Build system needs to tell the compiler where to put the BMI.
Ben C: Modules may not create new classifications of problem areas, but it puts a lot more things in a bucket that used to be pretty small. Sort of a union of code generation header files and linking.
Nathan: I somewhat address that in the presentation. Easy to conflate a lot of different problems. Need to be sure we distinguish between the new problems and the old problems.
Bryce: Today, make doesn’t have any language specific knowledge. How did you end up with that design?
Nathan: needs to interact with the build graph construction, and that required new work. You need to be inside make for that.
Tom: Header units and determining when something should be included vs. imported. Did you experiment with that?
Nathan: Yes, needed to put some bits in the Makefile for that to tell everything that "legacy.h" is a header unit.
Tom: Header file has to be consistently translated.
Mathias: Standard calls out if a header is importable, you must transform it to an import.
Tom: So that information has to be provided extrinsically.
Steve: Even more subtle in the standard, because there is implementation defined behavior in there.
Mathias: But things that are in that set must be consistent.
Mathias: Would a pragma at the top of the header to say that this is importable make sense?
Nathan: That may have been discussed on the mailing list? A pragma that could say this is importable. Also a pragma to say that "foo" is importable, so that one wrapper header could annotate lots of things. An include_next kind of scheme could also be used with shim headers.
JF: #pragma once.
Tom: Not #pragma once, because the difference between include/import is observable (in some cases).
Mathias: for the curious.
Nathan: The set of importable headers is not deducible, but everything else should be deducible.
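Purely as an illustration of the in-source annotation idea floated above (no compiler implements this spelling, and it is not part of C++20), such a marking might look like:

// legacy.h -- hypothetical marking that this header is importable, along the
// lines discussed; today this information has to be supplied externally
// (build-system flags, module maps, or the GCC module mapper).
#pragma importable_header        // hypothetical pragma, illustrative only
#ifndef LEGACY_H
#define LEGACY_H
int legacy_api();
#endif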
3.5. 2019-04-26 Meeting Minutes
Attendance:
Bryce Adelstein Lelbach.
Ben Boeckel.
Ben Craig.
Bruno Lopes.
Corentin Jabot.
Isabella Muerte [Izzy].
JF Bastien.
Jayesh Badwalk.
Mark Zeren.
Mathias Stearn.
Michael Spencer.
Olga Arhipova (Microsoft).
Rene Rivera.
Richard Smith.
Steve Downey.
Tom Honermann.
Minutes (Bryce Adelstein Lelbach, First Half):
Ben starts presentation on distributed build systems.
Bryce: What type of preprocessing does distcc do? Does it just concat in macros? What about #includes, which can have macro expansions in them?
Ben, Mathias: It’s clever.
Bryce: So you’re saying that FASTBuild might be able to be made to work out of the box, using the support for distribution of extra files?
Ben: Maybe, although I wouldn’t recommend it.
Tom: Do you have any information on the prevalence of the two different distcc modes?
Ben: Not really. Pump mode is newer.
Tom: How easy is it to switch between the two?
Ben: You have to teach it about your build system. It’s non-trivial.
Ben: Pump mode is potentially faster.
Bryce: How could we ever make FASTBuild and normal distcc mode work? They expect a single file to contain a single translation unit. Modules is explicitly trying to move us away from textual inclusion.
Ben, Mathias: Not necessarily. Compilers might be able to support this (with things like -frewrite-imports).
Richard: We support -frewrite-imports currently only for header units, but there’s no reason that we couldn’t make it work for all modules.
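Roughly, the rewriting model under discussion turns a TU with header-unit imports back into a self-contained textual TU that single-file distribution tools can ship. A sketch of the idea (this is not Clang's actual output format):

// before.cpp -- as written by the user; a remote worker would need a BMI
// for <vector> to compile this.
import <vector>;
int count() {
    std::vector<int> v{1, 2, 3};
    return static_cast<int>(v.size());
}
// After a -frewrite-imports-style expansion, the import line is replaced by the
// preprocessed text of <vector>, producing one large but self-contained file
// that a distcc/FASTBuild-style worker can compile without BMIs or local headers.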
Bryce: Is this a bad model for these tools to use?
Richard: You’re probably using a distributed build system because you care about build system. With this model, you’re taking a small file and some build artifacts and turning them into larger files. You might be giving up performance. But from a semantic perspective, this should work.
Ben: This might be useful as a deployment/migration tool, though.
Tom: Is it reasonable to expect all implementations to have something like -frewrite-imports?
Richard: It requires certain properties of your BMI file. You must be able to take the info in a BMI file and reconstruct source from and then reprocess.
Richard: Another idea would be to have the compiler build you a package for a particular TU.
Tom: But then you might end up shipping a lot more text than you need to.
Richard: You could take a hybrid approach, where you preprocess for directives only, and also package up BMIs.
Ben: The packaging approach doesn’t work as well for things like static analysis and creduce. Those tools want to be able to see all the source code.
Mathias: Can you package up the textual source of the module interface units instead of the BMI?
Richard: Maybe, but then you need to have a mapping to file names.
Olga: Modules can be built by different projects/targets with completely different command lines. Just using the sources and hoping that the command lines match up is probably not a good idea.
Mathias: Don’t we have that problem either way?
Mathias: There was an assumption in what I said that the build flags were part of the module <-> file mapping (e.g. module <-> file + build flags).
Izzy: How is this going to work with module partitions? This seems like it will be a big pain.
Ben: How will that be difficult?
Richard: Module partitions for the purposes of distribution behave similarly to using different modules, so it’s not necessarily harder.
Tom: How feasible is it to only distribute the BMIs? Do you also need the corresponding sources? I’ve heard that some implementations need the source in addition to the BMI, for things like diagnostics, etc.
Richard: That is the case for Clang. However, Clang has a mode that can embed the source into the BMI.
Bryce: Is the source needed for things to work, or just for diagnostics?
Richard: It’s need for things to work; if you don’t have it, things will break.
Izzy: What about std::embed?
Ben C: FYI, I believe JeanHeyd is no longer pursuing std::embed.
Izzy: He and I were discussing it on the include cpp discord yesterday. It might get split up into two separate bits, but he is a bit swamped and is not going to pursue it for a bit. We might see it in 2023/2026.
Mathias: Shipping BMIs in this mode where you make a single file per TU seems difficult. I believe BMIs are 5-10x larger than source and are less compressible. If you’re network bound, that’ll be a problem.
Richard: In my experience, they are about 3x larger and are less compressible.
Minutes (Ben Craig, Second Half):
Richard Smith presents on clang modules and module maps.
Bryce: Module maps that clang supports are just for header units, right?
Richard: Yes.
Bryce: Will you extend that format for other modules?
Richard: Not been my intention to do so. Module maps work well for mapping from name of header file to a module. If you are going from module name to source file, that is harder because you need to read a lot of files on the file system to find that. Not sure that is scalable. Would be better if the compiler is more directly told about this.
Tom: So this is a lesson learned from objective-C.
Richard: Yes. But there are other problems too, like finding multiple conflicting maps in projects that wouldn’t be used together anyway.
Bryce: This is similar to include directories today?
Richard: This is worse, because it doesn’t even work for when everything is in /usr/include, because you need to recurse down.
Tom: Only get one module map per directory? This is a problem for packaging.
Richard: I can imagine a packager managing the module map for you.
Tom: Or having a libfoo.module.map.
Mathias: Is a "master" module map not suitable for objective C.
Richard: Objective C solves this a different way, but it may work for C++ if you can guarantee uniqueness across the system.
Bryce: What is clang planning on doing for mapping module names to source.
Richard: Most straightforward thing is to have the build system pass in the list of inputs directly on the compile command line. If you want something more implicit...
Bryce: Google has been doing this at large scale and it’s been working, you don’t need something different?
Richard: We started with implicit mode. It was convenient, but didn’t distribute well and didn’t scale well. More sophisticated build systems are going to want to have more control.
Tom: My concern is with things that aren’t using the build system.
Bruno: We pay a high price for the downward recursive search in objective C. We don’t want to encourage that if we can help it. Our mac frameworks aren’t a problem, but other, non-framework locations are an issue. We did a downward search initially, and we’re stuck with that approach.
Bryce: Richard and Nathan started with implicit systems. Our first users didn’t like it, and had us move to something more explicit. Field experience is guiding us to explicit models.
Tom: We aren’t getting a lot of data there. Coverity data says that a lot of people are using implicit modes with xcode.
Corentin: Mapping of name to bmi isn’t important, because build system knows it exists, and shouldn’t be implicit. Mapping of module name to source name is more valuable though.
Bruno: The most valuable part is that it is easy for users to do things, but it has scaling issues. If you want to scale, you will need to change models.
Tom: Do we need an implicit model to have success? I think we do.
Ben B: If you do -fmodule-map=file, it will look in there both for reading and writing BMIs.
Bryce: Any reason compilers couldn’t expose multiple strategies for module mapping?
Richard: Clang already does that. Don’t know of a technical reason other than adding more complexity to compiler.
Bryce: Does the client / server mapper have any appeal for your use cases?
Richard: If we were to do that, main reason would be compatibility with GCC. It’s nice in that it decouples concerns. Seems like a fine approach.
3.6. 2019-05-10 Meeting Minutes
Attendance:
Ben Craig.
Ben Boeckel.
Gabriel Dos Reis [Gaby] (Microsoft).
JF Bastien.
Lukasz Menakiewicz (Microsoft).
Olga Arhipova (Microsoft).
Mark Zeren.
Nathan Sidwell.
Tom Honermann.
Corentin Jabot.
David Blaikie.
Richard Smith.
Minutes (Ben Craig):
Corentin presenting his proposal on module naming:
Tom: What’s the status quo on if we have a conflict?
Nathan: Ill-formed, only when you try to import the wrong one.
Corentin: Build system can give diagnostics if you have two modules with the same name in the same dependency graph.
Gaby: You can have conflicts within an organization.
Nathan: We also have this collision problem with namespaces. That doesn’t seem to be a problem that requires vendor specific namespaces. Why do we think this will be a bigger problem with modules?
Ben B: One thing we do with our third party packages is mangle the packages. We change the names of the shared library and of the dynamic symbols.
Nathan: There must be some subtleties there.
Tom Honermann: Namespaces merge naturally.
Ben C: You need a double collision (namespace and class names).
Mark Zeren: These collisions are a problem. Google allegedly has a registry for top level namespaces. VMWare probably needs one.
Sidwell: Would like to see that kind of rationale in the paper, to illustrate that it is a problem.
Tom: When we import a module, we need the unique file that corresponds to the module.
Gaby: Difference is that with namespaces, we get collision of members, and it can be subtle.
Ben B:.
Nathan: There seem to be some mitigating strategies today, but we aren’t describing today’s mitigations. Why don’t we already hit this problem with existing software? On most C++ projects, the dependency tree is shallow, so they mostly conflict with themselves. At the namespace level, most projects use namespaces as a way to scope names. With headers you can rename the headers on disk to solve some uniquification problems. Consider two libraries (simple and duplicate) which have a module m, and two executables that import m: the first one found is it. CMake could say that it sees two "m"s and error out, but that only works when the same-named module is visible in the same place. Two libraries that make a config module way down deep, that is going to be harder to diagnose.
Corentin: Would be nice to have a way to map module names to file names at some point, and we are limited in what we can put in a file name. Limiting ourselves to the basic character set helps that. We can also avoid some issues if we stick with lower case, particularly when sharing between Windows and Linux.
Gaby: Other languages have a mix of pascal case vs. lower case. I would like to be able to visually separate module names from namespace names, which would argue for pascal case. I’m not sure if we want to force it, or if this is just a guideline.
Corentin: There is no enforcement, no language changes.
Gaby: For the TR, we are aiming for something that everyone would be using and doing. My organization is using a different scheme already, and I would like to avoid that if possible.
Tom: Are they different just because of the casing, or are they using characters outside of the basic character set?
Gaby: Just casing.
Tom: Please switch from ASCII to basic source character set.
JF: ASCII seems like an overly protective regression.
JF: I think we should try to fix some of the existing identifiers with unicode. We should fix C++ as a whole, and make modules follow that. I don’t think we should race to the bottom and support only the lowest common denominator. We don’t want to go back to the 8 character file length. Accept that on some file systems, the mapping will be difficult. What do git and SVN do.
Gaby: Agree. Misconceptions on Windows. Don’t want to base what we are doing on Windows misconceptions.
JF: We should guide people to names that don’t clash.
Tom: I would not like to limit to ASCII either. Would like to give an algorithm to let tools implement the mapping.
Corentin: Underlying filename, you can’t rename it. You need to have the same mapping on each platform. Because of that, I think we should have a subset of characters. It’s about understanding the filename and make sure that tools like cmake will be able to find and name the BMI appropriately.
Tom: Not sure why the source filename needs to be renamed. Doesn’t need to be a correlation between bmi, source, and object name.
Corentin: Would like to make it possible to have a mapping between the module name and a source file name.
David: The compiler/build system could implement a name->filesystem name mapping if the filesystem has naming limitations.
Tom: Would be useful to implement a mapping algorithm (note taker: like punicode).
Ben: Is there an expectation that if you have a foo.bar, that you also have a foo?
Corentin: No. If you have google.abseil, you don’t need abseil. If you have google.abseil.list, then google.abseil should also bring google.abseil.list.
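A sketch of the hierarchy Corentin describes, using hypothetical module names; note that the dots are only a naming convention, and the re-export has to be written explicitly because the language attaches no hierarchy semantics to them:

// google.abseil.list interface unit
export module google.abseil.list;
export template <class T> struct list { /* ... */ };

// google.abseil interface unit -- importing google.abseil also brings
// google.abseil.list because the parent explicitly re-exports it.
export module google.abseil;
export import google.abseil.list;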
Gaby: This has problems at scale. I could have a subgroup that conflicts with the bigger group. This introduces problems.
Tom: Should be with something like MISRA, and not the TR.
Corentin: We want this to make it easier to do dependency management at scale. Can’t have name conflicts, as the whole world is your dependency graph. If you have a module name, you could have a global index of every module name that exists, and find the library / project where it is declared and automatically download that library.
Gaby: If you want the library downloaded with no input, then yes. In practice, you want input from the user. It’s ok to have several repos competing to provide something and you have to choose them.
Corentin: If you have two third party libraries that both import a library named foo that you have a conflict that you have no control over. This is one of the reasons you need unique names. The number of dependencies you have can grow exponentially. It needs to not happen to begin with.
David: Even with unique module names - you still have the version problem. Your dependency A depends on X1, but dependency B depends on X2.
Gaby: You get to choose in practice, because if you don’t, you are in big trouble. There are things like licences that users have to pick.
Corentin: Renaming a module is a major breaking change.
3.7. 2019-05-24 Meeting Minutes
Attendance:
Ben Craig.
Bryce Lelbach.
Nathan Sidwell.
David Blaikie.
Rene Rivera.
Olga Arhipova (Microsoft).
Tom Honermann.
Matthew Woehlke.
Mathias Stearn.
Stephen Kelly.
Gabriel Dos Reis [Gaby] (Microsoft).
Richard Smith.
Lukasz Menakiewicz (Microsoft).
Mark Zeren.
Michael Spencer.
Minutes (Ben Craig):
Bryce: sg15@lists.isocpp.org is the new mailing list:
Bryce: Started calling BMIs CMIs for compiled module interface rather than BMI. Don’t want to imply a standardized binary interface.
Tom: Used term "Artifact".
Bryce: Would like to talk about file extensions. Maybe different extension for modular implementation units vs. other units.
Olga presenting on BMI distribution:
Tom: Some platforms provide PCHs, but they are generally as a fallback.
Mathias: You already need to have a global and a local index to provide things like find all references or go to implementation? Seems like that would require putting the modules in that format rather than putting things in the codegen format. You probably need different information in each? Seems like a codegen compiler may choose not to put any body information in the bmi if no optimizations are requested. But an IDE would need that for find all references.
Lukasz: We do have both kinds of indexes. If a user is able to copy a BMI to the system, then we would want to serve intellisense with that as well. Would be jarring not to have that information.
Mathias: This is presupposing that you wouldn’t have source available for a given BMI, and that BMIs would be more than just a cached artifact.
Lukasz: Wouldn’t go so far to say that it is presupposing.
Mathias: Didn’t quite mean that.
Lukasz: But we do want to have a close parallel between Intellisense and build.
Gaby: For the Microsoft pre-distributed BMIs, we always provide the source as well.
Richard: Clang has a language server protocol (LSP). If you ask the build compiler questions through LSP, you avoid cross compiler requirements.
Mathias: LSP as the main communication mechanism has me concerned. Even in the clang tooling world, there are three different protocols. Would be odd if those needed to use LSP and then talk to another compiler.
Lukasz: My belief is that LSP is not suitable for this purpose. It is too high level to ask about the compiler artifacts. It’s an IDE level interface. Like give me a member list at this location. Current LSP messages probably aren’t useful here.
Richard: The only use case I was suggesting LSP for was the IDE use case.
Olga: The intellisense compilation is more fault tolerant than the build compiler. The build compiler typically stops parsing as soon as it hits errors. We also have tag parsers that need to extract information from already built BMIs.
Bryce: What’s a tag parser?
Lukasz: Different optimization approaches for build compiler vs. intellisense. Throughput vs. responsiveness. Engineering cost wise, it is harder to use one compiler for everything.
Olga: We used a portion of the MSVC compiler for Intellisense until 2010, and it was painful.
Richard: This does resonate with Mathias’s concerns, that sharing a file format may not make sense between the different concerns.
Gaby: The scenario isn’t that they want to reuse the contents of the BMI in the way the codegen would, but they do want to look at the structures and idioms that are exported. Suspect they are looking for ways to extract that from the BMI. The expectation is that this information is in all the formats. They are just trying to get at this precomputed information.
Richard: That’s a useful perspective. To a certain extent, it’s something more similar to LSP style analysis, maybe that’s the wrong model for this though. Some way that not every tool needs to know about every other tools format.
Tom: For Coverity, the content and the structure is very compiler specific. We use one front end to emulate lots of compiler versions. BMI distribution being compiler version specific is tough for us because we are a floating compiler version. We’d need stability across versions. If we could extract things like the command line and source files used to make the BMI, then that would be useful for us, so that we could make a compatible invocation?
Bryce: Have you talked to EDG?
Tom: They said they will support modules, but we don’t have more specific details than that.
Tom: Also unsure about sharing information across clang and gcc. Lots of implementation dependent information than can get encoded. Not sure how that information can be made compatible across compilers. How does one compiler consume an intrinsic from another.
Gaby: Primary goal from Olga and Lukasz is to have a mechanism to extract already computed information. Not a goal to share a format that is shared. If you want to go in that direction, then we aren’t equipped for that yet. Being able to invoke the compiler or a compiler provided tool to extract that information may be enough. Just need to define what that thing is, may be a tool.
Bryce: Is there some common set of queries that are useful that everyone wants to ask of a BMI that all implementations can provide?
Ben: Agree with the goal, highly skeptical of it being achievable. _MSC_VER, static analysis macros, __Intellisense__, all of those things can make sharing BMIs harder. Dropping those macros would be a big step, but keeping them makes things very difficult.
Olga presenting again.
Ben: ABI compatible isn’t necessarily object compatible. Borland used to have OMF files vs. COFF files.
Tom: Clang attempts to make a determination if a cached module can satisfy a request. Replicating that across many tools would be very hard.
Olga: If we can ask the compiler if you can use the BMI or not, then the knowledge is in one spot and the build system can use it.
Tom: But the build system would have to ask that question for every possible set of options.
Olga: When I’m building a .cpp with a module, if I can check to see if the module is applicable, then I can either use the module, or rebuild on demand. Still theoretical.
Tom: Right, that’s my point. The options are even order dependent.
Olga: Can see if we can relax the requirements on having the exact command line.
Tom: History of a similar requirement in Coverity. Given an external compiler invocation, we translate to our invocation. Our finding is that it is too difficult to make the translation, and we end up instantiating things repeatedly.
Ben: You can cheat and just trust the user, like with static libs. For example, I provide debug/release + static/dll versions of boost.
Bryce: Bruno and Michael will present on this. Compiled modules are more sensitive than object files or static archives would be.
Michael: We have a potential solution for reducing the number of configurations due to warnings.
Richard: We have two scenarios. One where we have implicit modules. Here we use the flags that match what are being used. If we have an exact match, we can use a cached version. There’s a complication there based on warning flags. Reduced warnings are usually ok. Second scenario is where you are explicitly building module artifacts. When you use them, we are more permissive in config changes. It’s ok to have different predefined macros or include paths there. We don’t allow config flags to differ if they would result in functionally different ASTs in ways that matter to us (like language modes). So we do allow some amount of deviation, but only fairly limited. May be enough to help with some of these scenarios.
Tom: Add to your list, need to know the current working directory and env of the compiler when invoked. Those are the kinds of things that we tend to capture when analyzing a build.
Olga: Right, those should be part of the "options" and not the "switches".
Gaby: Please send that list to the mailing list.
David: you still have two tools parsing the code. Sometimes you gather everything with a compiler shim to harvest commands.
Tom: We don’t use a shim, we monitor processes being launched.
David: Why would that be different?
Tom: If the build system constructs them, not an issue. If they are distributed in advance, then we don’t have that option.
David: For the IDE situation, I assume it is in a similar situation.
Olga: yes. But similar issues happen when mixing cl and clang-cl.
David: Two ways I’ve heard how to deal with this. One is with the gcc oracle system. So if you have two different compilers, both would call into the build system, and the build system could keep distinct caches for each compiler.
Bryce: Next meeting in 2 weeks, tentatively Spencer and Bruno talking about BMI configuration.
3.8. 2019-06-07 Meeting Minutes
ISO C++ SG15 Tooling Pre-Cologne Modules Tooling Interactions Telecon 2019-06-07
Chairing: Bryce Adelstein Lelbach
Minute Taker: Ben Craig
Attendance:
Bryce Adelstein Lelbach.
Ben Boeckel.
Ben Craig.
Corentin Jabot.
David Blaikie.
Gabriel Dos Reis [Gaby] (Microsoft).
JF Bastien.
Lukasz Menakiewicz (Microsoft).
Mathias Stearn.
Michael Spencer.
Nathan Sidwell.
Olga Arhipova (Microsoft).
Rene Rivera.
Richard Smith.
Stephen Kelly.
Steve Downey.
Tom Honermann.
Chair Notes (Bryce Adelstein Lelbach):
Pre-Cologne Papers (Send Drafts to sg15@lists.isocpp.org by 2019-06-14):
Ecosystem Technical Report Outline (Bryce).
Summary and Minutes of Pre-Cologne SG15 Discussions (Bryce, Ben C).
Dependency Metadata (Ben B).
Compiled Module Reuse (Olga, Lukasz).
Module Naming (Corentin Jabot).
Modules Hello World (Gaby).
Modules and Packaging (Richard).
RFE: (Paper or info on) Implicit Modules (Michael).
Michael gave a talk at LLVM Euro about this.
Minutes (Ben Craig):
Bryce: Distribution of compiled modules. How are you dealing with different configurations?
Michael: Not looking at distributing compiled modules. No plan on giving clang a stable module format. No compatibility between versions which would be confusing for devs. You need the headers or module interface files to do anything. You have to be very careful about what module settings you are using when you are building, and it’s much easier if you have the original file.
Bryce: What about warnings?
Michael: You should always get the same warnings. Only your own warning flags should affect things, regardless of how the module was built. We plan on serializing them into the AST. We’ll have buckets for which warnings we will keep because some of them are expensive to create. The first time you import a module, we want to emit all the warnings that are applicable. For performance reasons, we don’t want to keep everything from -Weverything.
Tom: What about -wno-some-warning?
Michael: We would filter the relevant warning out.
Michael: Non header units may have a different model where you aren’t viewing it as a header. It may be ok to say in that case that the module is its own entity. This would be steering away from implicit modules.
Olga: What about clang-cl being able to read modules from MSVC?
Richard: Warning mapping for explicit module builds: when we build a module, the warnings used to build that module are used, not the warnings when consumed. So a template instantiated by a consumer would get the producing module's warnings. Explicit and implicit models are different.
Michael: Agreed.
Richard: Maybe that isn’t the criterion, but that’s the way we are doing things now.
Bryce: Would be nice if it were easy to explicitly control what model they got.
Gaby: clang-cl and MSVC compat is a discussion going on internally. We’ve been talking with Richard.
Gaby: Clang has experience with implicit module builds, but msvc doesn’t. The MSVC implementation of the TS assumes explicit builds. May be ok for usage at scale, but isn’t necessarily good for small examples. Is there any way that clang people could write an experience paper?
Michael: Gave a talk at LLVM Euro. I don’t know what you are looking for in terms of how that model looks.
Gaby: Interaction with build systems. The implicit build may conflict with the build system, especially around caching.
Michael: Should generally be hidden from build systems.
Gaby: Scalability?
Michael: There are issues, I think a CppCon talk about it.
Bryce: Some implementations want distributing, some do not want people to do that. Do we think it’s a problem that there is divergence. Is that fine?
Gaby: There’s a notion of reusing which could just be for a given compiler release. Can I reuse those. That’s separate from distribution of artifacts that will outlive the current compiler series.
Bryce: If I’m a user of MSVC using C++20, is your advice that I shouldn’t ship BMIs for my common configurations, or is that fine?
Gaby: Current advice is that for a given compiler release it’s ok, so we can avoid rebuilding everything repeatedly. But you should also ship the source code. Even when the format of the BMI is published, we will still recommend that source code be distributed.
Bryce: For clang, must have the source code for the interface unit.
JF: Three ways to distribute modules. Pure source. Make internal format stable. Or intermediate where some is stable and some is not. For example, description of struct is described in a stable way. With C++, that ends up being an awful lot of stuff though. So technically possible to have a binary module attached with a pared-down version of the source code, but that is pretty big. There's a talk by Swift people about how they are doing it. We don't want to serialize the AST. It's possible, but it's big and may have more than you want.
Gaby: I don’t know if the size is that big or not. I’m not deterred by the size at this point. I’m more interested in the ability of developers to not need to recompile the same thing repeatedly.
JF: What I’m talking about allows for that. The expensive part isn’t source -> ast, but ast -> compiled stuff. You save a bit of time using AST, but you don’t save the expensive part. Would like to see tools that pare down what is exported. The big things may not just be lots of bytes, but more things than you would expect, and you’ll see unexpected things.
Mathias: How does the size compare to headers? Will it be larger or roughly the same.
JF: It’s easier to use a template that uses a name inside of it. When a user gives some other name, it pulls in a whole lot of other stuff. Hard for a compiler to prove that similarly named things won’t be pulled in. Not sure how things like reflection gets exposed through tree shaking or whatnot.
Michael: At the worst, it’s everything that’s public in your module interface unit.
Corentin: I like to think of it as compiling as-if from by source, but I think of that as caching, possibly even at the company level. Using that model to distribute modules is probably fine. Making things stable / never able to change is a problem though, and I’m opposed to that.
Bryce: The implementers are all saying you must distribute some form of source, and that you can provide interfaces as an optimization.
Gaby: Your summary is fantastic Corentin. This is why we need to split the discussion into distribution and reuse. Guaranteeing compatibility has a big cost, but we still want to be able to enable reuse.
Bryce: What is the clang / llvm plan for distributing default module maps. C++ implementation may not be well suited to do, but the OS is in a good position to do. What are vendors going to need to do to modularize themselves.
Richard: Assume something like clang’s module map system. You need some kind of file accompanying a package that says which headers are to be treated as header units. Any side information you need to build in that mode will be there. GLIBC in particular has information that varies slightly between version, as info moves into and out of "bits" headers. We don’t think the compiler is in a good position to distribute module description files for glibc, because we would need different ones for different versions, and need to know which one you have on a given machine. glibc would need to provide that data. We may be able to provide examples so that distributions could provide that. Hoping there will be sufficient adoption and cross vendor description compat so that packages can provide that information themselves.
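As a rough illustration of the kind of file Richard describes (not taken from any shipped package, and the module and header names below are placeholders), a clang-style module map that marks a package's headers as importable might look like this:

module libfoo {
  header "foo.h"
  header "foo_config.h"
  export *
}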
Mathias: Has there been any discussion with wg14 about C modules so that we have a good compat story with their library.
JF: No. No discussion in wg14 in the last year. You could join the interop list and ask.
Bryce: Hard to execute, since we would basically just tell them here’s the design, please adopt it, without being able to take feedback.
JF: Question is whether modules is worth their time. My concern is that no one will attend, or that they will make something slightly incompatible. If we don’t participate, incompatibilities are very likely.
Gaby: I’m nervous about C taking things from C++ without modifying them. All the prior cases resulted in modifications except for // comments.
Stephen K: Packaging and libc, distributors would need to provide module map files. All the maps needed the same name... how would that work. Clang can look for them implicitly, but clang allows you to name module maps on the command line. Not sure that module maps are where we want to go, that’s something that SG15 should discuss.
JF: Apple ships implicit modules maps for some of the frameworks we ship. It’s not perfect, but it does some good work. Some of the details end up leaking. It’s hard to break up some of the dependencies. We also have a giant codebase with all the frameworks we expose. Exposing contents as modules can serve as an example of how to migrate existing code. We (Apple) own the versions as the platform vendor, but the clang project isn’t the right place to do this.
Stephen K:.
Corentin: Modules need to be authored by the people maintaining the code, and not by someone else. Maintainer of a package should provide the modules. Not a good idea for someone that doesn’t maintain a library to provide a module for it. People that aren’t the stdlib shouldn’t implement modules for the stdlib.
Ben B: Not so bad if I modularize (for example) libpng, but it is bad if I choose to distribute it.
JF: Bryce: The Apple paper was in the post-Kona mailing, P1482R0.
http://www.open-std.org/JTC1/SC22/wg21/docs/papers/2019/p1687r0.html
I'm trying to override equals method, but I'm getting error
I have two classes one abstract and the inherited class.
public abstract class MyData extends Object {
    public abstract boolean equals(MyData ohtherData);
}

public class IntData extends MyData {
    protected int num;

    public IntData (int n) {
        this.num = n;
    }

    public boolean setNum(int n) {
        if (n<0)
            return false;
        else {
            this.num = n;
            return true;
        }
    }

    @Override
    public boolean equals(MyData otherData) {
        if (otherData == null || this.getClass() != otherData.getClass())
            return false;
        else {
            MyData newData = (MyData) otherData;
            return this.num = newData.num;
        }
    }
}
I have a problem with the last line, return this.num = newData.num; — I'm getting an error that num for newData cannot be resolved or is not a field. Nevertheless, my teacher said that I CAN NOT put protected int num in the abstract class. My teacher checked my code and said that it should work but it doesn't.
Any help would be appreciated.
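For reference, a sketch of the usual fix (not part of the original thread): keep num out of the abstract class as required, cast to the concrete IntData so the field is visible, and compare with == instead of assigning with =.

@Override
public boolean equals(MyData otherData) {
    if (otherData == null || this.getClass() != otherData.getClass())
        return false;
    // Cast to IntData, the class that actually declares 'num'.
    IntData other = (IntData) otherData;
    // '==' compares the values; '=' tries to assign and does not compile here.
    return this.num == other.num;
}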
https://www.daniweb.com/programming/software-development/threads/436720/overriding-equals-method
Reinhold Publishes Open Letter to JCP Pleading That JPMS (Jigsaw) Is Approved
Mark Reinhold, chief architect of the Java Platform Group at Oracle, has published an open letter to the JCP Executive Committee. In the letter he expresses surprise that IBM have decided to vote against the JSR, and argues that Red Hat’s decision to vote no is motivated by a desire to "preserve and protect their home-grown, non-standard module system, which is little used outside of the JBoss/Wildfly ecosystem". He goes on to plead
As you consider how to cast your vote I urge you to judge the Specification on its merits and, also, to take into account the nature of the precedent that your vote will set.
A vote against this JSR due to lack of consensus in the EG is a vote against the Java Community Process itself. The Process does not mandate consensus, and for good reason. It intentionally gives Specification Leads broad decision powers, precisely to prevent EG members from obstructing progress in order to defend their own narrow interests. If you take away that authority then you doom future JSRs to the consensus of self-serving "experts".
Many failed technologies have been designed in exactly that way.
That is not the future that I want for Java.
Responding to the furore, Red Hat’s David Lloyd has provided a summary of what he believes are the outstanding issues. In brief:
- Allow cycles to exist among modules at run time.
- Re-evaluate (piecewise if necessary) the (still very small) module primitives patch.
- Provide package namespace isolation among module path modules.
Lloyd adds
We also have a concern that the changes to reflection may still be too drastic for the community to cope with, and it's possible that this might still be a deal-breaker for other EC and EG members. I think that it is a good idea to ensure that there is consensus among the rest of the EG before we move forward with this as-is.
With regard to Reinhold's assertion that Red Hat is trying to protect JBoss Modules, Lloyd told InfoQ
I find it disappointing. But as I said at the time to Mark, I found it useful to use our own experiences to describe the problems we see, with the expectation that it would not be difficult to extrapolate those examples to cases. It should be clear not only from my public communications on the experts list, but also from the general community response, that the problems I am thinking about are not limited to our software but are representative of what many other framework and container authors and users are likely to encounter.
On the subject of module path names, Reinhold has also submitted an updated proposal for #AutomaticModuleNames to address the issue raised by Robert Scholte.
#AutomaticModuleNames is a feature designed to allow developers to begin to break their own code into modules without needing to wait for all the libraries and frameworks they use to support Jigsaw.
The key part of the revised proposal is a JAR-file manifest attribute, Automatic-Module-Name, the value of which is used as the name of the automatic module defined by that JAR file when it is placed on the module path. If a JAR file on the module path does not have such a manifest attribute then its automatic-module name is computed using the existing filename-based algorithm. Reinhold suggests.
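For illustration only (the module name below is made up), a library author would add a single entry to the JAR's META-INF/MANIFEST.MF:

Automatic-Module-Name: com.example.widgets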
The Java Module proposal represents a huge amount of work by Oracle over a 12-year period starting with JSR 277. Originally planned for Java 7, then 8 and now 9, it has been dogged by controversy since it started. However at this point there is, it appears, broad consensus in the community that Jigsaw does provide a much needed solution to breaking down the JDK itself. The concern is over the amount of existing Java tooling that may break if it goes into Java 9 as is.
Please fix this error in the article
by
Anthony
On the subject of module path names, Reinhold has also submitted an updated proposal for #AutomaticModuleNames to better accommodate Maven by including the Maven group identifier, when available in a JAR file’s pom.properties file, so that module names are less likely to collide.
As can be seen here, it is not Mark, but Robert Scholte who submitted the issue about automatic module names and who proposed to use the JAR file's pom.properties file. However, the accepted resolution does *not* use the pom.properties file at all.
Re: Please fix this error in the article
by
Charles Humble
https://www.infoq.com/news/2017/05/jigsaw-open-letter
Provided by: manpages-dev_5.10-1ubuntu1_all
NAME
adjtimex, clock_adjtime, ntp_adjtime - tune kernel clock
SYNOPSIS
#include <sys/timex.h>

int adjtimex(struct timex *buf);

int clock_adjtime(clockid_t clk_id, struct timex *buf);

int ntp_adjtime(struct timex *buf);
DESCRIPTION
Linux associated with the NTP implementation. Some bits in the mask are both readable and settable, while others are read-only. STA_PLL
CONFORMING TO
None of these interfaces is described in POSIX.1. adjtimex() and clock_adjtime() are Linux-specific and should not be used in programs intended to be portable. The preferred API for the NTP daemon is ntp_adjtime().
NOTES
In struct timex, freq, ppsfreq, and stabil are ppm (parts per million) with a 16-bit fractional part, which means that a value of 1 in one of those fields actually means 2^-16 ppm, and 2^16=65536 is 1 ppm. This is the case for both input values (in the case of freq) and output values. The leap-second processing triggered by STA_INS and STA_DEL is done by the kernel in timer context. Thus, it will take one tick into the second for the leap second to be inserted or deleted.
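A minimal sketch of how the scaling described above is used in practice (assuming Linux and sufficient privileges for the adjustment step):

#include <stdio.h>
#include <sys/timex.h>

int main(void)
{
    struct timex tx = { 0 };

    /* modes == 0 means "read only": fetch the current parameters. */
    if (adjtimex(&tx) == -1) {
        perror("adjtimex");
        return 1;
    }
    printf("freq = %ld (%.3f ppm)\n", tx.freq, tx.freq / 65536.0);

    /* Nudge the frequency by +1 ppm; requires CAP_SYS_TIME. */
    tx.modes = ADJ_FREQUENCY;
    tx.freq += 1L << 16;            /* 2^16 units == 1 ppm */
    if (adjtimex(&tx) == -1) {
        perror("adjtimex");
        return 1;
    }
    return 0;
}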
SEE ALSO
clock_gettime(2), clock_settime(2), settimeofday(2), adjtime(3), ntp_gettime(3), capabilities(7), time(7), adjtimex(8), hwclock(8) NTP "Kernel Application Program Interface" ⟨ package/rtems/src/ssrlApps/ntpNanoclock/api.htm⟩
COLOPHON
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
https://manpages.ubuntu.com/manpages/impish/en/man2/adjtimex.2.html
Opened 4 years ago
Closed 4 years ago
Last modified 4 years ago
#17880 closed New feature (duplicate)
"post_save" signal should not be emitted when using manage.py "loaddata"
Description
I spent some time on migrating a database from MySQL to PostgreSQL. I used "dumpdata" to dump all data from MySQL into json ("pure" data, better than taking a raw MySQL dump and converting it to PostgreSQL, which would lose some database-specific features). Then I created a new database in PostgreSQL and ran "syncdb". I also didn't forget to wipe it out completely by running "python manage.py sqlflush | psql -U myusername mydatabase".
However when I used "loaddata", I always get error (unique key conflict or something). I checked the database and it has nothing. I was confused.
Finally I found that in a few models of mine, I actually used "post_save" signal to create instance after a new User instance was created, like this:
def create_stat(sender, instance, created, **kwargs):
    if created:
        Stat.objects.create(user=instance)

post_save.connect(create_stat, sender=User)
The Stat is like an extension to the User and it has a OneToOneField "user" to the User instance.
By commenting out the above post_save callback, the data was imported correctly into PostgreSQL. Then after resetting sequences, the migration was done.
So the reason is now clear: when "loaddata" loads data from json and creates a User instance, the "post_save" signal will always be emitted, resulting a new Stat instance is created as well. Then when loading the actually Stat data from json, error will occur since there is already a Stat instance for each User.
I'm not sure if using the "post_save" signal in my case is a good practice (I just wanted to automatically create some instances when a new user is created. I know I can also put some check code in views and create instances if there are none, but that would look ugly). But I'm thinking perhaps "loaddata" could provide an option to temporarily disable emitting signals when loading data? I mean, when someone is "loading" data it usually means the data should be completely "loaded" rather than treated as "running a bulk save", which could cause problems with signals/transactions/etc.
Change History (2)
comment:1 Changed 4 years ago by tmitchell
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Resolution set to duplicate
- Status changed from new to closed
- Type changed from Uncategorized to New feature
Thanks for the report. This appears to duplicate the feature request in ticket #8399, so please check that out. There currently exists a workaround using the "raw" argument to pre/post_save.
Here are some links that might help:
comment:2 Changed 4 years ago by danielfeng
Great, thanks for the links. The "raw" argument check works.
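A sketch of that workaround applied to the reporter's handler (parameter names follow Django's signal documentation; Stat and User are the models from the report, and the import path for Stat is hypothetical):

from django.contrib.auth.models import User
from django.db.models.signals import post_save
from django.dispatch import receiver

from myapp.models import Stat  # the reporter's model; path is illustrative

@receiver(post_save, sender=User)
def create_stat(sender, instance, created, raw=False, **kwargs):
    # 'raw' is True when the instance is being saved by loaddata;
    # skip the side effect so the fixture can supply Stat rows itself.
    if raw or not created:
        return
    Stat.objects.create(user=instance)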
https://code.djangoproject.com/ticket/17880
lp:charms/trusty/apache-hadoop-yarn-master
- Get this branch:
- bzr branch lp:charms/trusty/apache-hadoop-yarn-master
Branch merges
Related bugs
Related blueprints
Branch information
- Owner:
- Big Data Charmers
- Status:
- Mature
Recent revisions
- 91. By Cory Johns on 2015-10-07
Get Hadoop binaries to S3 and cleanup tests to favor and improve bundle tests
- 90. By Kevin W Monroe on 2015-09-15
[merge] merge bigdata-dev r103..105 into bigdata-charmers
- 89. By Kevin W Monroe on 2015-08-24
[merge] merge bigdata-dev r94..r102 into bigdata-charmers
- 88. By Kevin W Monroe on 2015-07-24
updated resources to use lp:git vs lp:bzr
- 87. By Kevin W Monroe on 2015-06-29
bundle resources into charm for ease of install; add extended status messages; use updated java-installer.sh that ensures java is on the path; update to latest jujubigdata library; send jobhistory ipc port on the RM relation (vs the web port); add actions to stop/start the cluster
- 86. By Kevin W Monroe on 2015-06-18
remove namespace refs from readmes now that we are promulgated; update DEV-README with jujubigdata info
- 85. By Kevin W Monroe on 2015-06-01
remove dev references for production
- 84. By Kevin W Monroe on 2015-05-29
reference plugin instead of client in the docs
- 83. By Kevin W Monroe on 2015-05-29
update DEV-README to reflect correct relation data; reference plugin instead of client
- 82. By Kevin W Monroe on 2015-05-28
update jujubigdata to get better /etc/hosts debug logging
Branch metadata
- Branch format:
- Branch format 7
- Repository format:
- Bazaar repository format 2a (needs bzr 1.16 or later)
https://code.launchpad.net/~bigdata-charmers/charms/trusty/apache-hadoop-yarn-master/trunk
In PyMC, if you want to sample points uniformly from the unit square, all you need to do is define your stochastic variable, make a Markov chain, and sample from it. Here is how:
from pymc import *
X = Uniform('X', zeros(2), ones(2))
mc = MCMC([X])
mc.sample(10000)
If you have matplotlib set up right, you should be able to get a nice picture of this now
plot(X.trace()[:,0], X.trace()[:,1],',')
Not a very complicated convex body, but it’s a start. And it was easy! Five lines of code? No problem.
Say you want to sample from something more complicated, though, like anything-that's-not-a-rectangle. One way to do this is to use PyMC's potential object. Below is a constraint that makes S into a ball of diameter one, centered at (0.5, 0.5). It just says that the log-probability for any point outside of this ball is negative infinity.
from pymc import *
X = Uniform('X', zeros(2), ones(2))
@potential
def s1(X=X):
    if ((X-0.5)**2).sum() >= 0.25:
        return -inf
    else:
        return 0.0
Did that give you an error? For a fraction of readers, it will; PyMC gets unhappy when it has a sample with probability zero. Try setting X to something in the ball first:
from pymc import *
X = Uniform('X', zeros(2), ones(2))
X.value = [0.5,0.5]
@potential
def s2(X=X):
    if ((X-0.5)**2).sum() >= 0.25:
        return -inf
    else:
        return 0.0
mc = MCMC([X, s2])
mc.sample(10000)
plot(X.trace()[:,0], X.trace()[:,1],',')
But, this is too easy to sample from. It is a nice, plump convex body. The hard stuff comes from the convex bodies with skinny bits. If you live in high dimensions, these skinny bits show up everywhere. For example, if you have a ball just like the one above, but in 20 dimensions instead of 2, check out what happens:
from pymc import *
X = Uniform('X', zeros(20), ones(20), value=0.5*ones(20))
@potential
def s3(X=X):
    if ((X-0.5)**2).sum() >= 0.25:
        return -inf
    else:
        return 0.0
mc = MCMC([X, s3])
mc.sample(10000)
plot(X.trace()[:,0], X.trace()[:,1],',')
That’s not looking nice anymore. The problem is this. If you don’t tell PyMC how you want the Markov transitions to go, it picks something for you. It defaults to a Metropolis walk, where each stochastic variable is drawn from its prior distribution. In this case, that means that for 10,000 samples, PyMC draws a candidate point uniformly from the 20-dimensional hypercube, and then rejects it and stays at the current point if the candidate falls outside of the ball we want. Since the volume of the ball is much smaller than the volume of the cube, roughly 75% of these samples are rejected. (In the picture above, I jittered the points to reveal the duplicate points.)
How can we make things nice again? Instead of sampling from the prior every time, we can take a small, random step away from the point that we are currently at.
from pymc import *
X = Uniform('X', zeros(20), ones(20), value=0.5*ones(20))
@potential
def s4(X=X):
    if sum((X-0.5)**2) >= 0.25:
        return -inf
    else:
        return 0.0
mc = MCMC([X, s4])
mc.use_step_method(Metropolis, X, dist='Normal', sig=0.05)
mc.sample(50*10000,thin=50)
plot(X.trace()[:,0], X.trace()[:,1],',')
The new trick is in line 10, where I asked for a particular step method. It appears that the dist parameter tells PyMC to move to a new point normally distributed around the current point, and the sig parameter controls the variance of the normal distribution. Now things look nice again. Maybe this isn't actually a great example, because it takes longer for this chain to mix, so the pictures that look nice come from keeping 1 sample out of every 50, meaning the chain takes 50 times longer to run. If I really wanted samples from the 20-dimensional ball, rejection sampling is a better way to go. (But if I really wanted samples from the 20-dimensional ball, there are even better ways to draw them…)
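As an aside (not from the original post), one of those "even better ways" for a ball is direct sampling: pick a Gaussian direction and a radius proportional to U**(1/d). A minimal numpy sketch:

import numpy as np

def sample_ball(n_samples, dim, center=0.5, radius=0.5):
    # Uniform points in a ball: normalize a Gaussian draw to get a
    # direction, then scale by radius * U**(1/dim).
    g = np.random.normal(size=(n_samples, dim))
    directions = g / np.linalg.norm(g, axis=1, keepdims=True)
    radii = radius * np.random.uniform(size=(n_samples, 1)) ** (1.0 / dim)
    return center + directions * radii

points = sample_ball(10000, 20)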
Up next, how to implement the hit-and-run walk in PyMC. (Actually, not next…)
9 responses to “MCMC in Python: PyMC to sample uniformly from a convex body”
Abie,
What particular rejection sampling algorithm do you have in mind in the second-to-last sentence?
Anand
Anand: Let me see if I am learning the language of stats MCMC. I think that you would say that the hit-and-run walk uses a Gibbs method, because it never rejects a sample.
The way it works (in the paper by Lovasz and Vempala linked above, when specialized to uniform sampling) is to: pick a uniformly random line through the current point, and move to a uniformly random point on the line and inside the set.
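For concreteness, here is a minimal sketch of that walk specialized to the ball example above (my own code, not from the post or the paper):

import numpy as np

def hit_and_run_ball(n_steps, dim, center=0.5, radius=0.5):
    x = np.full(dim, center)          # start at the center of the ball
    samples = []
    for _ in range(n_steps):
        d = np.random.normal(size=dim)
        d /= np.linalg.norm(d)        # uniformly random direction
        # Chord endpoints: solve ||(x - center) + t*d||^2 = radius^2 for t.
        b = np.dot(x - center, d)
        c = np.dot(x - center, x - center) - radius ** 2
        disc = np.sqrt(b * b - c)     # real because x is inside the ball
        t = np.random.uniform(-b - disc, -b + disc)
        x = x + t * d                 # uniform point on the chord
        samples.append(x.copy())
    return np.array(samples)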
This is the walk which currently has the best upper bound on mixing time. From a “warm-start” it requires
steps, a huge improvement on the original proof that a poly-time procedure exists, which had the
bound I mentioned above.
I think the name is taken from sports (not traffic-related injuries).
Huh, that’s really cool. I’ll have to check out the paper. Do you know if it can be mixed into a larger MCMC algorithm, ie if some variables are being updated with hit-and-run and some are being updated with Metropolis, do you still get joint samples?
Gibbs methods update variables by drawing them conditional on their Markov blanket. It doesn’t seem like hit-and-run is doing that, so you probably wouldn’t call it a Gibbs method even though it never rejects a jump.
Incidentally, the default step method being used for X is the one in the ref below. Rather than proposing from the prior, it proposes from a multivariate normal centered on the current state. The covariance is a scalar multiple of its estimate of the covariance of the ‘posterior’, which it updates on the fly.
Heikki Haario, Eero Saksman, and Johanna Tamminen. An adaptive Metropolis algorithm. Bernoulli, 7(2):223–242, 2001.
Anand: It seems like it should be possible. I think that it is straightforward to show that, for uniform sampling from a convex body, you can mix hit-and-run on some dimensions and another method on the other dimensions.
Where things get more complicated would be when you are approximating hit-and-run in a space that is not uniform (and probably not even log-concave). My guess is that the stationary distribution of the chain will be within
of the true stationary distribution.
Of course, the question remains, is this actually a good idea in practice.
I’ll look at the Haario, et al paper in more detail soon. It reminds me of a different theoretical result by Vempala. It does mean that the default behavior of PyMC is non-Markovian, right?
We hang our heads in shame.
See Levine et al. Implementing componentwise Hastings algorithms. Computational Statistics & Data Analysis (2005) vol. 48 (2) pp. 363-389
We await part 2! I want to see the code for a modern convex sampler…
Thanks for the encouragement… this hasn’t even made it on my to-do list yet, but I would love to come back to it one of these days.
I consider PyMC a modern convex sampler! Actually, I should re-write some portions of this tutorial to more accurately reflect what PyMC 2.0 (just released, congratulations!) is doing.
And the great thing about PyMC is that it’s open source and written in python, so you can see the code with one click.
To Read: arxiv paper on when the Adaptive Metropolis will still converge to the intended distribution.
Eero Saksman, Matti Vihola, On the Ergodicity of the Adaptive Metropolis Algorithm on Unbounded Domains.
Two years ago, I was just joking around about writing a hit-and-run sampler, but I finally got off my butt and did a proof of concept! Code and movie at the end of this post:
http://healthyalgorithms.com/2008/11/05/mcmc-in-python-pymc-to-sample-uniformly-from-a-convex-body/
Log message:
(devel/p5-namespace-autoclean) Updated to 0.29
0.29 2019-08-24 17:07:22Z
- switch from Test::Requires to Test::Needs
- report on the installed versions of more optional:
Remove Moose dependency.
It is only used by some tests, and Moose itself would like to use
this module.
Bump PKGREVISION.
Log message:
Update 0.26 to 0.28
-------------------
0.28 2015-10-13 01:25:26Z
- skip failing tests with old Moo or when Sub::Util is broken (RT#107643)
0.27 2015-09-09 02:29:20Z
- package with only ExtUtils::MakeMaker to ease installation on perl 5.6
https://pkgsrc.se/devel/p5-namespace-autoclean
Z-Algorithm
Note: This is not a complete tutorial where the complete concept is explained, though a link is provided to a tutorial where it is explained. I have just explained the basic overview and used it in two problems.
I find Z-algorithm to be one of the most useful and simple algorithms ever. It is very useful in competitive programming and comparatively much faster to code than KMP or sometimes even Naive.
Z-Algorithm is just a simple calculation of a function z[i] on a string, where z[i] is the maximum length such that substring(0...z[i]-1) = substring(i...i+z[i]-1).
It can be coded in just 4 lines, really.
vector<int> z(a.size(),0);
for(int i = 1, l = 0, r = 0; i < (int)a.size(); ++i){
    if(i <= r) z[i] = min(r-i+1, z[i-l]);
    while(z[i] + i < (int)a.size() && a[z[i]] == a[z[i]+i]) z[i]++;
    if(i+z[i]-1 > r) l = i, r = z[i]+i-1;
}
Well understand algorithm is important, so go and learn all you want to here: Z-Function
Let’s talk about some uses in competitive programming (Codes are provided at the end).
- Simple Use - String Matching.
Sample Question: NAJPF
The fun thing here is the meaning of z[i] itself. If we find the Z-values of String A = pattern + "$" + text, then any z[i] after index pattern.size() with value pattern.size() marks a match of the pattern. What's even more fun is that no index at or before pattern.size() can have z[i] equal to pattern.size(), so the answer is simply the indices with z[i] == pattern.size().
- A little more awesome use:
Sample Question: Prefix-Suffix Palindrome
Spend some time on the question. And you will observe the following:
Take as many characters from start and end as you can while it makes a palindrome and then lets see what we can do with the remaining string.
Example: String = abcdfdcecba
Let's take the abc prefix and the cba suffix, and we are left with dfdce.
Now it is guaranteed that if we want a palindrome, we can only take from either the prefix or the suffix. But what to take? Note that if you have a palindrome A and you delete a prefix and a suffix of the same length l, then what's left is also a palindrome. So we need the longest prefix palindrome or suffix palindrome of the remaining string, whichever is longer.
How can Z-values help here?
Notice that if z[i] is the Z-value of index i for a string a, and z[i]+i == a.length(), then the suffix starting at i matches a prefix of a. So this can help us find palindromes? Yes.
How?
Consider the string A = ABACD, and let's find its longest prefix palindrome. Consider B = rev(A) = DCABA; do you notice that the longest prefix palindrome of A is actually the longest suffix of B that matches a prefix of A? Is this magic? No, just math. Any suffix of B of length l is just rev(a[0...l-1]), so on matching it becomes a[0...l-1] == rev(a[0...l-1]).
Can the same be done to find the longest suffix palindrome? Yes.
So for the longest palindromic prefix, find the longest suffix matching a prefix in the string A = A+"$"+rev(A). For the longest palindromic suffix, find the longest suffix matching a prefix in the string A = rev(A)+"$"+A.
And then it is just simple string construction.
Code for problem A
#include<bits/stdc++.h>
using namespace std;

int main(){
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    cout.tie(0);
    int t;
    cin >> t;
    while(t--){
        string text, pattern;
        cin >> text >> pattern;
        text = pattern + "$" + text;
        vector<int> z(text.size());
        vector<int> ans;
        for(int i = 1, l = 0, r = 0; i < (int)text.size(); ++i){
            if(i <= r) z[i] = min(z[i-l], r-i+1);
            while(z[i]+i < (int)text.size() && text[z[i]] == text[z[i]+i]) z[i]++;
            if(z[i]+i-1 > r) l = i, r = z[i]+i-1;
        }
        for(int i = 0; i < (int)text.size(); ++i){
            if(z[i] == (int)pattern.size()) ans.emplace_back(i-(int)pattern.size());
        }
        if((int)ans.size() == 0) cout << "Not Found";
        else{
            cout << ans.size() << "\n";
            for(int &i: ans) cout << i << " ";
        }
        if(t > 0) cout << "\n\n";
    }
    return 0;
}
Code for problem B
#include<bits/stdc++.h>
using namespace std;

int longestPallindromicPrefix(string a){
    if(a.size() == 0) return 0;
    string res = a;
    std::reverse(res.begin(), res.end());
    string s = a + "#" + res;
    vector<int> z(s.size(), 0);
    for(int i = 1, r = 0, l = 0; i < s.size(); ++i){
        if(i <= r) z[i] = min(r-i+1, z[i-l]);
        while(z[i]+i < s.size() && s[z[i]] == s[z[i]+i]) z[i]++;
        if(z[i]+i-1 > r) l = i, r = z[i]+i-1;
    }
    int ans = 1;
    for(int i = 0; i < s.size(); ++i){
        int lrem = s.size() - i;
        if(z[i] == lrem) ans = max(ans, lrem);
    }
    return ans;
}

int solve(){
    string a = "", b = "", s;
    cin >> s;
    if(s.length() == 1){
        cout << s << "\n";
        return 0;
    }
    int i = 0, j = s.size()-1;
    while(i < j){
        if(s[i] == s[j]){
            a += s[i++];
            b += s[j--];
        }else{
            break;
        }
    }
    int save = i;
    string rem = "";
    while(i <= j){
        rem += s[i++];
    }
    int pref = longestPallindromicPrefix(rem);
    reverse(rem.begin(), rem.end());
    int suff = longestPallindromicPrefix(rem);
    i = save;
    if(suff >= pref){
        for(int k = 0; k < suff; ++k){
            b += s[j-k];
        }
    }else{
        reverse(rem.begin(), rem.end());
        for(int k = 0; k < pref; ++k){
            a += s[i+k];
        }
    }
    reverse(b.begin(), b.end());
    return cout << a << b << "\n", 0;
}

int main(){
    int t;
    cin >> t;
    while(t--) solve();
    return 0;
}
Please do leave a like if this helps you in any way. And also do tell if you think I should improve something. Not much on the writing side.
https://discuss.codechef.com/t/z-algorithm-tutorial/64274
Hello!
I am taking a C programming class and my teacher uses the "puts statement" to combine an if statement with the action if the condition is met. I saw two different examples of doing the same thing and was just wondering which method is more "standardized" and which method is not recommended? It's nothing too serious, but it got me curious. Thanks!
PS - here's an example:
#include <stdio.h>

int main(int argc, const char * argv[])
{
    int grade;

    // gets user input
    printf("What is the students grade? ");
    scanf("%d", &grade);

    // alerts user if passed or failed
    (grade >= 60) ? printf("Passed!\n") : printf("Uh-ohhh...\n");

    // puts statement version
    puts(grade >= 60 ? "Passed!\n" : "Uh-ohhh...\n");

    // displays grade
    if(grade >= 90)
        printf("Grade = A\n");
    else if(grade >= 80)
        printf("Grade = B\n");
    else if(grade >= 70)
        printf("Grade = C\n");
    else if(grade >= 60)
        printf("Grade = D\n");
    else
        printf("Student failed.\n");

    return 0;
}
http://www.gamedev.net/topic/652996-c-if-statement-question/
22 July 2010 06:21 [Source: ICIS news]
By Nurluqman Suratman
SINGAPORE (ICIS news)--South Korea's petrochemical exports are expected to decline in July after growing by 24.1% year on year to $2.87bn (€2.24bn) in June as the won stabilises after sharp falls against the US dollar last month, analysts said on Thursday.
The currency tumbled 7.4% in the three months through June, on concerns that
“Almost all exports, including petrochemicals, were affected by the weakness in the Won in June and that boosted export profits,” said Lee Sung-kwon, chief economist at brokerage house Shinhan Investment Corp in
“Despite rising concerns from the eurozone and
However, annual growth of petrochemical exports was expected to slow down from the middle of 2010 as overseas sales for the same period in 2009 had recovered from the height of the economic global crisis, analysts said.
“As compared to June we have being seeing some stability in the Won in July. In addition to the base affect operating in June, our estimation is a month on month decrease of 5.1% in July for overall exports,” Lee from Shinhan Investment Corp said.
Overall exports growth was also expected to slow down to 27% year on year in July, he added.
Petrochemical export volumes in June fell 11.9% year on year to 1.76m tonnes, but increased 8% from May, according to data from the Korean International Trade Association. (please see table below)
Volumes of toluene sent abroad jumped 63% in June as compared to the year-ago period, while exports of ethyl acetate grew 60%, the data showed.
Exports of ethylene rose 33% year on year, while orthoxylene shipments grew 49%.
Overseas shipments of oil products from refining activities, meanwhile, grew 28% year on year to $2.66bn, according to Lee.
Overall exports from
“Producers were also trying to boost exports in June, more so than in April or in May, to boost the half-year figures in their reports,” Lee said.
The country’s finance ministry said late in June that the South Korean economy would see 25% growth in exports this year, higher than the earlier forecast of a 13% gain.
($1 = €0.78 / $1 = W1,207)
http://www.icis.com/Articles/2010/07/22/9378327/south-korea-petrochemical-exports-seen-declining-in-july.html
Purchases through our eBay channel are covered by our 12 months Door to
Door Return to Seller Warranty against any manufacture defects. If a
manufacture defect is found within 12 months after the date of purchase,
simply contact us and we will provide instructions on how to get your
item(s) collected and repaired. A Return Authorisation is strictly
required before the item may be returned to us for repair. Items will be
returned to Hong Kong for repair and sent back to you once repair is
complete.
Please refer to the bottom of the page for the cost of shipping fee and shipping insurance.
Item will be shipped to you from our Hong Kong office via the most expedient method of shipping*. The following is a guideline of typical delivery times:
*We will effect shipment by the one of the following postal services at our discretion to ensure the most appropriate shipping service is provided. This will NOT affect the delivery time stated above. **Notice: Due to increased administrative costs and delays on customs clearance, we have suspended shipping to some countries. These restrictions are temporary and we hope to have it resolved in the near future. Orders/Items shipping to these countries will be refunded within 24 hours. To learn if your delivery address falls into our list of restricted countries, please click here for more information.
Note: We will not be
shipping to Post Office addresses. Exceptions may apply but to avoid
disappointments, please check before placing orders.
*If the customer decides to cancel an order, or reject a parcel after the parcel has already been dispatched, the customer is responsible for covering the cost of postage fees (both ways).
**Boxes of items may be opened for inspection purposes or by customs when shipping into the country.**
This item will be shipped
to you from HONG KONG Office. We will ship the item after the payment has been
paid in full and cleared.
The item may be subject to customs import tax/fees when it enters your country. International buyers are responsible for any import tax/fees that apply.
Images and Videos are for reference only and do not constitute part of the conditions of sales. Colour of item is stated in title and may vary from listing to listing. Items sold on eBay is for a single (one) piece only [Quantity = 1 (One)]. Please read the ENTIRE listing BEFORE purchase.
When buying from an international seller, you are very likely to be surprised by the cost of sending payment overseas, or the hidden surcharges most sellers charge for using credit cards. With us, there will never be such hidden charges. Item price + shipping are all you need to pay us.
- No surcharge for using PayPal
- No sending money overseas
- Item price + shipping are all you have to pay us
- Billing in Australian Dollar, Pound Sterling and Euro - no loss in exchange
NOTE: If you have any questions about payment methods, please contact us. We will do our best to help you.
If you have received wrong or defective item(s), please ensure that items are returned to us within 7 days in original packaging in brand new and resalable condition. You will be required to contact us for a return authorization form before sending anything back to us.
Please note that DigitalRev Limited is under NO obligation to send
replacement/issue refund, for shipments/orders that are signed and delivered
to the shipping address specified by the customer.
DigitalRev will appreciate it if you leave us positive feedback with a full 5-star rating when you are satisfied with your purchase.
Our eBay customer support team is dedicated to serving you better! Please communicate with us before leaving any Neutral or Negative feedback. We believe everything can be resolved through communication!
http://www.ebay.com.au/itm/Nikon-SpeedLight-SB-N5-Flash-Nikon-1-V1-Camera-7993-/300622364531?pt=AU_Flashes&hash=item45fe7d3f73
A cross product is a mathematical tool to find the perpendicular vector component of two vector coordinates.
Suppose in a 3D space, there are two points:
- ‘a’ with coordinates (1,2,3)
- ‘b’ with coordinates (4,5,6).
So the vector component of the two coordinates will be the cross-product of the determinant of this vector matrix.
The cross product will be a non-commutative perpendicular vector product of the two matrix points.
Numpy Cross Product
numpy.cross() is a mathematical function in the NumPy library that finds the cross product of two arrays (of dimension 2 or 3); the result can be displayed with the print function.
Syntax of Numpy Cross Product
The basic syntax for implementing cross product is:
np.cross(M, N)
where M and N are array variables storing vector coordinates, but we can specify certain parameters as per our suitability and needs.
How To Calculate Cross Product Using Numpy Python?
Let’s look at a functional code over how cross-product is found in python.
1. Cross product of 2X2 matrix
Let's suppose there are two arrays, X = [2,3] and Y = [4,3]. To find the vector product, we take the difference between the products X1*Y2 and X2*Y1. The cross product of two 2-dimensional vectors is always a single scalar value.
The final result is (3*2) – (4*3) = -6.
Note: In this case, X and Y dimensions are defined while the z component is not there, so the final output is scalar.
Example code:
import numpy as pr

#initialize arrays
X = pr.array([2, 3])
Y = pr.array([4, 3])

#calculating cross product
vector_result = pr.cross(X,Y)
print(vector_result)
2. Cross product of a 2X3 array
Let’s take two 3-Dimensional arrays and find the cross-product of it.
Let’s take X= [1,3,5] and Y= [1,2,1]
Here, the final output will be = (-7, 4, -1)
Example code:
import numpy as pr

#initialize arrays
X = pr.array([1, 3, 5])
Y = pr.array([1, 2, 1])

#calculating cross product
cross_product = pr.cross(X,Y)
print(cross_product)
Note: The numpy cross product supports vectors with 2 or 3 components; vectors with more components will raise an error.
Let’s take another example where, let’s suppose, M=[5,6,4] and N=[2,1]
Example code:
import numpy as pr

#initialize arrays
X = pr.array([5, 6, 4])
Y = pr.array([2, 1])

#calculating cross product
cross_product = pr.cross(X,Y)
print(cross_product)
Here, NumPy automatically treats the missing third component of the two-element array as zero and calculates the final output based on that.
Final result = [-4, 8, -7]
Conclusion
In this article we learned how to find the cross product of two vector arrays by using the python mathematical function ‘numpy.cross’. We also learned about different case scenarios and parameters through which numpy.cross can be implemented on different sets of array values.
https://www.askpython.com/python-modules/numpy/numpy-cross-product
from collections import deque

def add_edge(adj: list, u, v):
    adj[u].append(v)
    adj[v].append(u)

def detect_cycle(adj: list, s, V, visited: list):
    parent = [-1] * V
    q = deque()
    visited[s] = True
    q.append(s)
    while q:                   # loop until the queue is empty
        u = q.popleft()        # popleft() gives FIFO order, i.e. a true BFS
        for v in adj[u]:
            if not visited[v]:
                visited[v] = True
                q.append(v)
                parent[v] = u
            elif parent[u] != v:
                return True
    return False

def cycle_disconnected(adj: list, V):
    visited = [False] * V
    for i in range(V):
        if not visited[i] and detect_cycle(adj, i, V, visited):
            return True
    return False

if __name__ == "__main__":
    V = 5
    adj = [[] for i in range(V)]
    add_edge(adj, 0, 1)
    add_edge(adj, 1, 2)
    add_edge(adj, 2, 0)
    add_edge(adj, 2, 3)
    add_edge(adj, 2, 1)
    print("There are 5 vertices in the graph")
    print("0-->1")
    print("1-->2")
    print("2-->0")
    print("2-->3")
    print("2-->1")
    print("Is there a cycle ?")
    if cycle_disconnected(adj, V):
        print("Yes")
    else:
        print("No")
There are 5 vertices in the graph 0-->1 1-->2 2-->0 2-->3 2-->1 Is there a cycle ? Yes
The required packages are imported.
Another method named ‘add_edge’ is defined that helps add nodes to the graph.
A method named ‘detect_cycle’ is defined that helps determine if a cycle is formed when the components of the graph are connected.
Another method named ‘cycle_disconnected’ is defined that helps determine if the cycle is a connected one or not.
Elements are added to graph using the ‘add_edge’ method.
It is displayed on the console.
The ‘cycle_disconnected’ method is called and the output is displayed on the console.
https://www.tutorialspoint.com/python-program-to-find-if-undirected-graph-contains-cycle-using-bfs
Count the sound cards and list their card numbers in an array
#include <sys/asoundlib.h>

int snd_cards_list( int *cards,
                    int card_array_size,
                    int *cards_over );
The snd_cards_list() function returns the instantaneous number of sound cards that have running drivers. There's no guarantee that the sound cards have contiguous card numbers, and cards may be unmounted at any time.
You should use this function instead of snd_cards() because snd_cards_list() can fill in an array of card numbers. This overcomes the difficulties involved in hunting a (possibly) non-contiguous list of card numbers for active cards.
Returns: The number of sound cards.
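A usage sketch based on the description above (the interpretation of cards_over as a count of cards that did not fit in the array is an assumption):

#include <stdio.h>
#include <sys/asoundlib.h>

int main(void)
{
    int cards[8];
    int over = 0;
    int n = snd_cards_list(cards, 8, &over);

    printf("%d sound card(s) with running drivers\n", n);
    for (int i = 0; i < n && i < 8; i++)
        printf("  card #%d\n", cards[i]);
    if (over > 0)
        printf("  (%d card(s) did not fit in the array)\n", over);
    return 0;
}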
QNX Neutrino
http://www.qnx.com/developers/docs/6.6.0_anm11_wf10/com.qnx.doc.neutrino.audio/topic/libs/snd_cards_list.html
Hello all,
I'm getting an error in the following code, and I can't figure out why exactly. Error is as follows:
no match for 'operator>>' in 'scoresIn >> scoresArray'
this is at line 28 on the code below.
I'm trying to bring in some data from a text file into and array and just can't seem to make it happen. I've looked at examples and they all state that you can basically use the ifstream variable as a cin. Using this approach doesn't work for me. Must I use a for loop? the example code is as follows.
#include<fstream>
#include<iostream>
using namespace std;

int main ( )
{
    //char opposingTeam [25];
    const int initalize = 25;
    int count = 1, count1, scores;
    int scoresArray [initalize];
    ifstream scoresIn;

    for (int i = 0; i < initalize; i++) // sets scoresArray to 0
    {
        scoresArray[i] = 0;
    }

    for (int i = 0; i < initalize; i++) // checks that they are zero via cout.
    {
        cout << scoresArray[i] << " ";
    }

    scoresIn.open("scores.txt");
    if (!scoresIn.fail( ))
    {
        cout << "Success!\n";
        scoresIn >> scoresArray;
    }

    //scoresIn >> scoresArray[scores];
    while (!scoresIn.eof( ))
    {
        cout << "UF game #: " << count << " score: " << scoresArray [initalize] << endl;
        count++;
        scoresIn >> scoresArray [initalize];
    }

    scoresIn.close();
    cout << scores << endl;
    cout << scoresArray [1] << endl;
Almost forgot, i'm using MinGW in Eclipse indigo
If the data from the text file is needed just let me know.
thanks alot for the help.
rcowboy
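For comparison, a minimal sketch of the loop-based approach (assuming scores.txt holds whitespace-separated integers): operator>> reads one value at a time, so read into an element of the array rather than the array itself.

#include <fstream>
#include <iostream>
using namespace std;

int main()
{
    const int MAX_SCORES = 25;
    int scoresArray[MAX_SCORES] = {0};
    int count = 0;

    ifstream scoresIn("scores.txt");
    if (!scoresIn)
    {
        cout << "Could not open scores.txt\n";
        return 1;
    }

    // Read until the file runs out or the array is full.
    while (count < MAX_SCORES && scoresIn >> scoresArray[count])
    {
        cout << "UF game #: " << count + 1
             << " score: " << scoresArray[count] << endl;
        ++count;
    }
    return 0;
}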
https://www.daniweb.com/programming/software-development/threads/387360/help-with-input-from-a-text-file-to-an-array
#include <DGtal/io/boards/Board3D.h>
The class Board3D is a type of Display3D which export the figures in the format OBJ/MTL when calling the method saveOBJ.
The OBJ/MTL format is a geometry definition file format that has been adopted by many 3D graphics application vendors (pbrt, Blender, etc.); to learn more about OBJ see
When exporting, Board3D groups objects by their list name. If two lists have the same name they will eventually be merged into the same mesh. If a list doesn't have a name, the program will try to give it a unique name so it becomes a separate mesh. Each list has a material description which corresponds to the color of its first element. If two lists with the same name merge, the final mesh will have two materials (one per list).
Definition at line 81 of file Board3D.h.
Constructor.
Constructor with a khalimsky space
Definition at line 95 of file Board3D.h.
References DGtal::Board3D< Space, KSpace >::init().
Check if the material associated with the color exists. If it does, the method returns the material index. Otherwise, the new material is created.
init function (should be in Constructor).
Referenced by DGtal::Board3D< Space, KSpace >::Board3D().
Checks the validity/consistency of the object.
Set the default color for future drawing.
Draws the drawable [object] in this board. It should satisfy the concept CDrawableWithDisplay3D, which requires for instance a method setStyle( Board3D & ).
Save a OBJ image.
Writes/Displays the object on an output stream.
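A hypothetical usage sketch pieced together from the methods listed above (only saveOBJ and the Khalimsky-space constructor are documented here; the drawable object and the stream-style drawing call are assumptions):

#include <DGtal/helpers/StdDefs.h>
#include <DGtal/io/boards/Board3D.h>

using namespace DGtal;

int main()
{
    Z3i::KSpace k;                              // Khalimsky space
    Board3D<Z3i::Space, Z3i::KSpace> board(k);  // constructor with a Khalimsky space
    board << Z3i::Point(0, 0, 0);               // draw a drawable object (assumed API)
    board.saveOBJ("example.obj");               // export as OBJ/MTL
    return 0;
}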
http://dgtal.org/doc/nightly/classDGtal_1_1Board3D.html
Wireless Java applications are, by their nature, network-centric. The devices that these applications run on are, however, less predictable. Most notably, the precise nature of the network connection depends both on the device and on the services provided by the network to which it is connected. Some wireless devices may
be directly connected to the Internet, while others are only able to access it through a gateway. Whatever the nature of the underlying network, a wireless Java device that conforms to the Mobile Information Device Profile (MIDP) specification is required to provide the illusion that it is directly connected to the Internet by implementing the HTTP support that is part of the MIDP Generic Connection Framework API. A description of this framework can be found in the article "Invoking Java ServerPages from MIDlets" by Qusay Mahmoud.
The lack of a TCP/IP stack usually means that access to lower-level programming paradigms, such as sockets, is not guaranteed to be available to a MIDP application, even though the Generic Connection Framework provides an interface to such low-level services, and the next version of the MIDP specification is likely to require their inclusion in all devices. For the time being, then, wireless Java applications will have to use HTTP to communicate with the outside world. However, several features of the wireless Java environment make it slightly more difficult to write a MIDP HTTP client than would be the case with J2SE. This article highlights some of the pitfalls that are unique to the J2ME environment, using an example taken from Chapter 6 of my recently published book, J2ME in a Nutshell.
The Web client used in this article is, in principle, very simple. Given the ISBN for a book, we want to connect to Amazon's online bookstore, retrieve the book's details, and display its title, sales ranking, and the number of customer reviews. In the second part of this article, we'll get a little smarter and store these details, along with the book's title, so that we can go back to the site later and get updated details without having to remember the ISBN. For now, our problem will be how to get the information that we need and how to display it to the user.
Before proceeding with the technical details, let's take a look at the completed client in action. To build and run this example, you'll need the source code, which is available in this .zip file, and a suitable development environment. The example source code is appropriately packaged for the J2ME Wireless Toolkit, which can be downloaded for free and includes emulators for several types of mobile devices. This article assumes that you are using this toolkit. The source code can, however, be used with other development tools, such as Forte for Java.
Having downloaded the source code, unpack it below the J2ME Wireless
Toolkit's apps directory, start the toolkit's KToolbar application, press the Open Project button, and select the project J2MEHttp. If this project does not appear, make sure that you have a directory called J2MEHttp below the apps directory of the J2ME Wireless Toolkit installation. If you don't, then you have not unpacked the example code to the correct location.
With the project open, press Build to compile the source code, select an emulated device, and press Run to start the emulator. When the emulator starts, it will offer the choice of two MIDlets to run. In this case, choose RankingMidlet; the other one, PersistentRankingMidlet, is the subject of the second article in this series.
The RankingMidlet MIDlet presents a form where you can input an ISBN, as shown in the left side of Figure 1. Supply the ISBN of your favorite book and select OK. After a short while (assuming the ISBN is valid), you'll see the title, sales ranking, and the number of reader reviews for your chosen book, as shown in the right side of Figure 1.
So how does this client work? Apart from the details of the user interface, which we're not going to cover in this article, the client does three things:
This all sounds simple enough, but there are a few pitfalls waiting to trap the unwary along the way. Let's see what can go wrong by examining each of these three steps in more detail.
The first problem we encounter is how to get the correct HTML page for a book, given its ISBN. It isn't difficult to work this out -- just point your browser at Amazon's Web site, enter an ISBN in the search box on the home page and look at the URL of the page that is returned. If you do this, you'll find that the browser ends up loading a URL that looks something like "", for a book with ISBN 156592455X.
In fact, everything after the ISBN in this URL is concerned with tracking your user session with Amazon, and does not have to be supplied on initial contact. Therefore, to get the details for this book, we only have to make an HTTP GET request with the URL.
This isn't how the browser got the page, however; when you entered an ISBN on the home page, this URL was not constructed directly. Instead, a query was created and sent to the server, which allowed it to return the correct page. The fact that Amazon also recognizes a more explicit URL that gives the same result is useful for this client, but you might not be so lucky if you had the task of creating a client for a less cooperative server. To demonstrate how to handle the more general case, we'll show you how the browser actually fetched the correct page and show you the Java code that achieves the same result.
The search feature on the Amazon home page is implemented using HTML form tags. When it comes to the nitty-gritty detail, the form causes an HTTP POST request to be sent to the URL
in which the body of the message contains the query itself, in the form:
index=books&field-keywords=isbn
This query causes a search of all books on the site (as distinct from software, electronics, etc.) for the given ISBN. Recreating this in Java is quite straightforward -- we simply use the Connector class from the Generic Connection Framework to open a connection to the URL shown earlier, set the request method to POST, open an output stream, and write the query to it. (If you're not familiar with the basics of using HTTP with MIDP, or with the Generic Connection Framework, you should first read Qusay Mahmoud's article, which covers the necessary groundwork).
Using this reasoning, our first attempt at emulating the browser might look like this:
public class Fetcher {
private static final String BASE_URL = "";
private static final String QUERY_URL = BASE_URL +
"/exec/obidos/search-handle-form;
conn = (HttpConnection)Connector.open(name,
Connector.READ_WRITE);
// Send the ISBN number to perform the query
conn.setRequestMethod(requestMethod);
conn.setRequestProperty("Content-Type",
"application/x-www-form-urlencoded");
os = conn.openOutputStream();
os.write(query.getBytes());
os.close();
// Read the response from the server
is = conn.openInputStream();
int code = conn.getResponseCode();
// Process the returned data (not shown here)
} finally {
// Tidy up (code not shown)
}
}
}
This code extract shows a class called Fetcher that has a method called fetch(), which requests information about a book whose ISBN is in an object of type BookInfo (which will be shown in the second article in this series), using the algorithm that was just described. The expectation is that, once the request has been sent, the HTML page for the book will be accessible from the input stream obtained from the HttpConnection's openInputStream() method. If this were a J2SE program and we had used a URL object to get a URLConnection and then made the same request as the one shown here, we would indeed get the HTML page from the input stream. Unfortunately, in the J2ME world, things are a little different.
We deliberately made this example more difficult by using a POST request instead of GET, in order to show you how different the MIDP HTTP implementation is from its J2SE counterpart when the server does not directly provide the data that you require. Instead of replying with a response code of 200 (or, more correctly, HttpConnection.HTTP_OK), the Amazon Web server sends a response code of 302, without any useful data. Since the data is missing, the usual, simple-minded approach of reading the content of the input stream isn't going to work here. So why is the server sending this response code, and what should we do about it?
Response code 302 is one of several codes that a Web server can use to indicate a redirection. The full list of these codes, and their official meanings, can be found in Table 1. A complete specification of the HTTP client's expected follow-up action when receiving each of these codes can be found in the HTTP 1.1 specification.
These codes require the client to look for the requested resource at a different URL, which is included with the server's response in a header called Location. As well as connecting to a different location, if the server responded with either 302 or 303, and the original request was a POST, the new request should instead be a GET; in all other cases, the original request method should be used.
There is no guarantee that a second request following a redirection will result in the required HTML page being returned, because multiple redirections are permitted. In other words, we have to keep following redirections until we get to the actual location of the information that we need. In order to avoid loops caused by incorrect server configuration, however, it is normal to impose an upper limit of the number of times a redirection will be followed, or to detect a loop by keeping a history of redirections.
In terms of our Fetcher class, the need to follow server redirections means that we convert the simple fetch method shown earlier into a loop that terminates when either the data is returned, an error occurs, or we get redirected too many times. Each pass of the loop will use the Connector class's open() method to open a new connection to the URL obtained from the previous redirection. The final version of this method is shown below, with the most important changes shown in bold.
public class Fetcher {
private static final String BASE_URL = "";
private static final String QUERY_URL = BASE_URL +
"/exec/obidos/search-handle-form/0";
private static final int MAX_REDIRECTS = 5; // limit value missing in the original listing; 5 is an assumed, typical choice
while (redirects < MAX_REDIRECTS) {
conn = (HttpConnection)Connector.open(name,
Connector.READ_WRITE);
// Send the ISBN number to perform the query
conn.setRequestMethod(requestMethod);
conn.setRequestProperty("Connection", "Close");
if (requestMethod.equals(HttpConnection.POST)) {
conn.setRequestProperty("Content-Type",
"application/x-www-form-urlencoded");
os = conn.openOutputStream();
os.write(query.getBytes());
os.close();
os = null;
}
// Read the response from the server
is = conn.openInputStream();
int code = conn.getResponseCode();
// If we get a redirect, try again at the new location
if ((code >= HttpConnection.HTTP_MOVED_PERM &&
code <= HttpConnection.HTTP_SEE_OTHER) ||
code == HttpConnection.HTTP_TEMP_REDIRECT) {
// Get the URL of the new location (always absolute)
name = conn.getHeaderField("Location");
is.close();
conn.close();
is = null;
conn = null;
if (++redirects > MAX_REDIRECTS) {
// Too many redirects - give up.
break;
}
// Choose the appropriate request method
requestMethod = HttpConnection.POST;
if (code == HttpConnection.HTTP_MOVED_TEMP ||
code == HttpConnection.HTTP_SEE_OTHER) {
requestMethod = HttpConnection.GET;
}
continue;
}
String type = conn.getType();
if (code == HttpConnection.HTTP_OK &&
type.equals("text/html")) {
info.setFromInputStream(is);
return true;
}
}
} finally {
// Close all streams (not shown)
}
return false;
}
}
As you can see, the first pass of the loop makes a POST request using the original URL, writing the query to the output stream obtained from the openOutputStream(). If a redirect is returned, the new URL is obtained from the Location header of the response and the request method is converted to GET if necessary, so that on the next pass the query will not be written. Conversion to a GET request implies that the server obtains all of the necessary information to locate the resource from the URL in the Location field.
In fact, in the case of the Amazon Web server, the URL that is supplied is exactly the one that you finally see in the Web browser when the page is displayed, and which contains the book's ISBN. Eventually, the server should return a response code of 200, at which point the HTML can be read from the input stream. This, however, is not the end of our problems, as we'll see in the next section.
Before moving on, you are probably wondering why it is that a J2SE application can get away without concerning itself with server redirects, whereas a MIDlet cannot. The answer to this question is very simple -- in J2SE, the code to handle redirects is built into the core libraries, and therefore happens automatically without the application being aware of it (although it can be turned off using the setFollowRedirects() and setInstanceFollowRedirects() method of java.net.HttpURLConnection). Unfortunately, the MIDP HTTP implementation does not include this feature.
A final word of caution -- if you think you can avoid the problems shown here by working out what the "real" URL is and using that to make the initial request, think again! In some cases, that might be possible, but in other cases it won't be. If you are faced with writing a J2ME client to interface with somebody else's server, the only way to work out what you have to do is try it and see -- or ask the server's owner, if that is possible. Be warned, though, that application servers hosting the server side of J2EE applications might use the redirection techniques shown here to point your client to a different URL following authentication, which is itself a topic that requires special treatment (similar to that shown here) for a MIDP application. That, however, is beyond the scope of this article.
|
http://www.onlamp.com/pub/a/onjava/2002/04/17/j2me.html
|
CC-MAIN-2016-26
|
refinedweb
| 2,441
| 57.5
|
hi there,

I am the author of a program called Favorez. Favorez is an ActiveX IE 5+ plug-in program that creates web pages from the users' favorites.

XBEL: Favorez 1.2 has just been released and now supports the XBEL standard. I have been careful to make the program adhere to the XBEL DTD.

XSLT: Favorez 1.2 ships with several XSL transformation files that should be interesting to your audience. The XSLT's can be used for any valid XBEL data file. (example at the bottom of this email)

Favorez website: I was hoping to maybe get into your list of XBEL compatible programs? :-) Hope to hear from you and thanks for your efforts!

kind regards, Tom

Tom Kalmijn
WarpGear Software
tom.kalmijn@warpgear.com

>>> example >>>

Here's an example of how the HTML page may look when Favorez transforms the XBEL data: The link will automatically transform in IE 5+. Otherwise please follow this link (html): The xml above points to this stylesheet: When you download and install Favorez you will find a total of three new XSLT's, and I have also included two XSLT's that I found on the web (from Juergen Hermann).

>>> note on XBEL compatibility >>>

The special favicons were added to the DTD like this:

<!DOCTYPE xbel PUBLIC "+//IDN python.org//DTD XML Bookmark Exchange Language 1.0//EN//XML" "" [ <!ENTITY % local.url.att "favicon CDATA '01.png' target CDATA '_blank'"> ]>

The XSL transformation templates actively check for these attributes and substitute values if needed.

ps I had considered using XML namespaces but didn't want to end up with a well formed but "invalid" XBEL file. I have read quite a few articles on DTD validation and namespaces and I must admit: it did become a tiny bit hairy for me... If you have read all the way down to here... I say cheers! Thanks again for your time.
|
http://mail.python.org/pipermail/xml-sig/2003-May/009456.html
|
crawl-002
|
refinedweb
| 318
| 75.2
|
Created on 2007-11-11 20:15 by jeroen, last changed 2013-02-06 19:15 by serhiy.storchaka.
What software do you use to play the sample? Is it perhaps ignoring the
nchannels value from the header?
I played using winsound.PlaySound function for the wav. I used VLC and windows media player for wav,au and aiff after that
All had the same problem. It was solved for sunau by doubling the nframes number written by the close function in the sunau module. I assume something similar should be done for the wav and aiff modules.
This is what I changed in sunau I am not sure if adding *self._sampwidth breaks something else. It works for me now when I create 16bit stereo files.
def _patchheader(self):
    self._file.seek(8)
    # jjk added * sampwidth otherwise for 16 bit you get wrong nframes
    _write_u32(self._file, self._datawritten*self._sampwidth)
    self._datalength = self._datawritten
    self._file.seek(0, 2)
Hope this helps.
greetings
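For reference, a minimal way to produce the kind of file involved (the file name and parameters here are made up): write a 16-bit stereo Sun AU file and compare the frame count reported on read-back with the number of frames written.

import sunau

FRAMES = 44100                      # one second of audio at an assumed 44100 Hz
data = b"\x00\x00" * 2 * FRAMES     # 16-bit stereo silence (4 bytes per frame)

w = sunau.open("test16.au", "w")
w.setnchannels(2)
w.setsampwidth(2)                   # 2 bytes = 16 bit
w.setframerate(44100)
w.writeframes(data)
w.close()

r = sunau.open("test16.au", "r")
print("frames written:", FRAMES)
print("frames reported:", r.getnframes())
r.close()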
Crys, since you apparently have working sound on Windows, could you have
a look at this? There's also an (unrelated) issue with sunau.py on
Py3k, it doesn't work (but the unittests aren't strong enough to
discover that :-).
Is this still valid?
as far as I know. But i have not use or tested it since a long time
On Sat, Sep 18, 2010 at 4:34 PM, Mark Lawrence <report@bugs.python.org>wrote:
>
> Mark Lawrence <breamoreboy@yahoo.co.uk> added the comment:
>
> Is this still valid?
>
> ----------
> nosy: +BreamoreBoy -gvanrossum
> versions: +Python 2.7, Python 3.1, Python 3.2 -Python 2.6, Python 3.0
>
> _______________________________________
> Python tracker <report@bugs.python.org>
> <>
> _______________________________________
>
|
http://bugs.python.org/issue1423
|
CC-MAIN-2013-20
|
refinedweb
| 283
| 79.77
|
gri6507 has asked for the wisdom of the Perl Monks concerning the following question:
I don't remember why I even started doing this (probably as a passtime at work), but here are several algorithms to calculate the factorial of an integer (as in 6! = 6*5*4*3*2*1).
use Math::BigInt;
use Benchmark;
use strict;
$|++;
my $t0;
my $t1;
my $i = Math::BigInt->new($ARGV[0]);
$t0 = new Benchmark;
fact($i);
$t1 = new Benchmark;
print "Method 1: ",timestr(timediff($t1, $t0)),"\n";
$t0 = new Benchmark;
fact2($i,1);
$t1 = new Benchmark;
print "Method 2: ",timestr(timediff($t1, $t0)),"\n";
$t0 = new Benchmark;
fact3($i);
$t1 = new Benchmark;
print "Method 3: ",timestr(timediff($t1, $t0)),"\n";
sub fact{
my $n = Math::BigInt->new(shift);
return 1 unless $n->bcmp(0);
return $n->bmul(fact($n->bsub(1)));
}
sub fact2{
#Tail Recursion Ellimination
my $n = Math::BigInt->new(shift);
my $f = shift;
if (!$n->bcmp(0)) {return $f}
else {return &fact2($n->bsub(1),$n->bmul($f))}
}
sub fact3{
#no recursion at all
my $n = Math::BigInt->new($_[0]);
my $i = $_[0]-1;
while($i){
$n = Math::BigInt->new($n->bmul($i--));
}
return $n;
}
[download]
What really surprised me was that the recursive algorithm is faster than the straight up loop. Why is that? Isn't it true that in the recursive function, extra steps are taken to store the intermediate values in a stack? Is this to imply that recursion is faster than a loop?? Thanks for your wisdom.
P.S. Suggestions for more robust algorithms are welcome.
1 * 2 * 3 * 4 * 5 =
| | | |
| +-------+ |
+---------------+
5 * 8 * 3 =
| |
+-------+
15 * 8 =
120
#!/usr/bin/perl -w
use strict;
use Benchmark;
use Math::BigInt;
sub fact1 {
my $n = Math::BigInt->new(shift);
return 1 unless $n->bcmp(0);
return $n->bmul(fact1($n->bsub(1)));
}
sub fact4 {
my $n = Math::BigInt->new(1);
foreach my $i (map Math::BigInt->new($_), 1..shift) {
$n = Math::BigInt->new($i->bmul($n));
}
return $n;
}
sub fact7 {
my @factor = map Math::BigInt->new($_), (1 .. shift);
while(@factor > 1) {
my @pair = splice @factor, 1 + $#factor / 2;
$_ = $_ * pop @pair for @factor[0..$#pair];
}
return shift @factor;
}
my $fact = shift || 100;
my ($f1, $f4, $f7) = (fact1($fact), fact4($fact), fact7($fact));
die "something's broke" if grep $_ != $f1, ($f4, $f7);
timethese(shift || 10, {
f1 => sub { fact1($fact) },
f4 => sub { fact4($fact) },
f7 => sub { fact7($fact) },
});
__END__
$ fact 600 5
Deep recursion on subroutine "main::fact1" at fact line 9.
Benchmark: timing 5 iterations of f1, f4, f7....
f1: 17 wallclock secs (17.09 usr + 0.10 sys = 17.19 CPU) @ 0.29/s (n=5)
f4: 15 wallclock secs (15.09 usr + 0.06 sys = 15.15 CPU) @ 0.33/s (n=5)
f7:  4 wallclock secs ( 3.99 usr + 0.04 sys =  4.03 CPU) @ 1.24/s (n=5)
[download]
Makeshifts last the longest.
2 * 3 * 4 * 5 * 6 =
| | | |
| +-------+ |
+---------------+
4 * 12 * 15 =
| |
+---------+
12 * 60 =
720
[download]
sub fact8{
    # divide and conquer without recursion
    my @N = (2 .. shift);
    my @M;
    my $tmp;
    while ($#N){
        while(@N){
            $tmp = pop(@N);
            if (($_ = shift(@N))){push(@M,Math::BigInt->new($tmp)->bmul($_))}
            else {unshift(@M,$tmp)}
        }
        while(@M){
            $tmp = pop(@M);
            if (($_ = shift(@M))){push(@N,Math::BigInt->new($tmp)->bmul($_))}
            else {unshift(@N,$tmp)}
        }
    }
    return @N;
}
perl fact.pl 5000
Method 8 (new func): 28 wallclock secs (28.65 usr + 0.00 sys = 28.65 CPU)
Method 7 (your func): 30 wallclock secs (29.06 usr + 0.00 sys = 29.06 CPU)
[download]
sub fact7b {
my @factor = map Math::BigInt->new($_), (2 .. shift);
while(@factor > 1) {
my @pair = splice @factor, 0, @factor / 2;
$_ = $_ * pop @pair for @factor[0..$#pair];
}
return shift @factor;
}
[download]
die "fact is broken" unless fact(5) == 120;
die "fact2 is broken" unless fact2(5,1) == 120;
die "fact3 is broken" unless fact3(5) == 120;
[download]
-Blake
sub fact4{
#no recursion at all
my $n = Math::BigInt->new(1);
foreach my $i (map Math::BigInt->new($_), 1..shift) {
$n = Math::BigInt->new($i->bmul($n));
}
return $n;
}
[download]
sub fact5{
#no recursion at all
my $n = Math::BigInt->new(1);
foreach my $i (1..shift) {
$n = Math::BigInt->new($n->bmul($i));
}
return $n;
}
[download]
sub fact6{
#divide and conquer
my $m = shift;
my $n = @_ ? shift : $m;
if ($m < $n) {
my $k = int($m/2 + $n/2);
return Math::BigInt->new(fact6($m, $k))->bmul(fact6($k+1,$n));
}
else {
return Math::BigInt->new($m);
}
}
[download]
I thought about this method, but I opted to try out a different method. You are correct in saying that much time is spent multiplying numbers. So what we can do is get rid of that multiplication and turn it into addition. This can be easily done by first taking logs of all integers in the factorial calculation, summing those values and finally raising e to that power. For example, 5! = e^(ln(2)+ln(3)+ln(4)+ln(5)). There is only one problem with that. Since Math::BigFloat does not contain a log or exp function, the limited precision of the built in functions creates a rounded answer (try doing 200!)
It works by calculating fact6(1,100) as fact6(1,50)*fact6(51,100). Break those up likewise until you have a range of one number, and you know how to calculate that. So there are very few multiplications with very large numbers.
You need not pass a second parameter to that function - or so it appears. A quick test shows it being broken. I'm not sure I'm following the algorithm either, so I'll leave it at that. (I misunderstood the parameter handling.)
Also summing the logarithms will be fast, but calculating the logarithms in the first place is going to take much longer than straight multiplication would have.
YuckFoo
Benchmark: timing 16000 iterations of nocurse, recurse...
nocurse: 0 wallclock secs
( 0.20 usr + 0.00 sys = 0.20 CPU) @ 80000.00/s (n=16000)
recurse: 1 wallclock secs
( 0.51 usr + 0.00 sys = 0.51 CPU) @ 31372.55/s (n=16000)
#!/usr/bin/perl
use Benchmark;
$fact = $ARGV[0] || 10;
timethese(16000, {'recurse'=>'recurse($fact)',
'nocurse'=>'nocurse($fact)'});
sub recurse {
my ($n) = @_;
if ($n == 0) { return 1; }
return ($n * recurse($n-1));
}
sub nocurse {
my ($n) = @_;
my $f = 1;
for my $i (1..$n) { $f *= $i; }
return $f;
}
[download]
Also, isn't exponentiating the log of n! and carrying accuracy to the last integer as much or more work than calculating n! in the first place? If you just want the first k digits, the approximation is better. Most of us don't.
That said, the approximations are good for order of magnitude estimates. Understanding the basic approximations is easier than most think. Watch:
log(n!) = log(1) + log(2) + ... + log(n)
= 1/2 log(1) + (log(1) + log(2))/2
+ (log(2) + log(3))/2
+ ...
+ (log(n-1) + log(n))/2
+ 1/2 log(n)
[download]
log(n!) = 1/2 0 + Integral from 1 to n of log(x) +
1/2 log(n) + e(1) + e(2) + ... + e(n-1)
[download]
The answer is, "Quite a bit." The goal of approximation techniques is to take something, divide it into a few big terms, understand those terms, a big mess, show that the mess isn't important. This is what we have done.
That integral is a single huge term. But it is one we understand. The integral of log(x) is x*log(x)-x. So that integral is n*log(n)-n-(1*log(1)-1) = n*log(n)-n+1. log(n)/2 is the next biggest thing around. And finally we have those e()'s. We think that they are small. But how to show that? It turns out that the error in a trapezoidal approximation is w**3 * f''(t)/12 where w is the width of the interval and t is a point in it. (This is reasonable - the error in a linear approximation is tied to what the function's bend does to the integral.) The second derivative of log(x) is -1/(x*x), so e(i) is O(1/(i*i)). If you remember your infinite series, that means that the infinite sum e(1)+e(2)+e(3)+... converges to some number, call it E. And the partial sum from n out to infinity is O(1/n) from that number.
Remember the principle of approximation? Get a few big understood terms, at the cost adding lots of messy small ones? We are about to do that where "lots" is an infinite number! Taking the above we get:
log(n!) = 1/2 0 + Integral from 1 to n of log(x) +
1/2 log(n) + e(1) + e(2) + ... + e(n-1)
= n*log(n) - n + 1 + 1/2 log(n)
+ (e(1) + e(2) + e(3) + ...)
- (e(n) + e(n+1) + e(n+2) + ...)
= n*log(n) - n + log(n)/2 + E + 1
- e(n) - e(n+1) - e(n+2) - ...
[download]
n! = K * sqrt(n) * n**n * (1 + O(1/n)) / e**n
[download]
The first is straightforward but messy. Just look at the infinite sum as an approximation to an integral. Write it as that integral and associated error terms. Get another term of the approximation, with smaller and more complex error terms. Wash rinse and repeat to your heart's desire.
The second requires cleverness. I suggest sticking this approximation into the binomial expansion, and compare with the central limit theorem's approximation to show that K has to be 1/sqrt(2*pi). If you are clever enough about it you can actually derive the central limit theorem for the binomial expansion up to the constant, then show that the only value of K which makes the terms for (1+1)**n come out to 2**n for large n is 1/sqrt(2*pi).
These are left as exercises because I am tired of typing. :-)
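Putting the pieces together, with K = 1/sqrt(2*pi) this gives the familiar form of Stirling's approximation:

n! ~ sqrt(2*pi*n) * (n/e)**n * (1 + O(1/n))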
|
https://www.perlmonks.org/?node_id=206098
|
CC-MAIN-2020-45
|
refinedweb
| 1,709
| 67.76
|
USS Constellation arriving in Perth, Western Australia

Career (United States)
Name: USS Constellation
Awarded: 1 July 1956 [1]
Builder: New York Naval Shipyard
Laid down: 14 September 1957 [1]
Launched: 8 October 1960 [1]
Acquired: 1 October 1961 [1]
Commissioned: 27 October 1961 [1]
Decommissioned: 6 August 2003 [1]
Struck: 2 December 2003 [1]
Nickname: Connie
Status: currently at the Inactive Ships Maintenance Facility in Bremerton, WA

General characteristics
Class and type: Kitty Hawk-class aircraft carrier
Displacement: 61,981 tons light; 82,538 tons full load; 20,557 tons dead [1]
Length: 1088 ft (332 m) overall, 990 ft (302 m) waterline [1]
Beam: 282 ft (86 m) extreme, 130 ft (40 m) waterline [1]
Draft: 39 ft (12 m) [1]
Propulsion: eight boilers, four steam turbine engines, totaling 280,000 shp (210 MW)
Speed: 34 knots (62.96 km/h)
Complement: 3,150 – Air Wing: 2,480
Armament: Two Sea Sparrow missile launchers, three 20 mm Phalanx CIWS guns, and her Air Wing. Formerly: Terrier surface-to-air missile systems.
Aircraft carried: 72 (approx)
USS Constellation (CV-64), a Kitty Hawk-class supercarrier, was the third ship of the United States Navy to be named in honor of the "new constellation of stars" on the flag of the United States.$400 million. Constellation was the last U.S. aircraft carrier (as of 2010) to be built at a yard other than Newport News Shipbuilding & Drydock Company.
History
Fire during construction
The USS Kitty Hawk, CV-63 was heavily damaged by fire while under construction on 19 December 1960.[2] [3] The carrier was in the final stages of construction at the Brooklyn Navy Yard in Brooklyn, New York when the fire began..”[3] It took 17 hours for firefighters to extinguish the fire and some of whom had been “driven to the raw edge of exhaustion” after being called into service in the Park Slope air accident. The firefighters saved hundreds of lives without losing any of their own.[3] The extensive damage cost 75 million dollars to repair, and delayed the commissioning date by seven months, leading to a rumor that this caused the Navy to change the designation between the two sister ships being built simultaneously. CV-63 became CV-64 and vice versa. This enabled the Navy to still commission the USS Kitty Hawk first. However, both carriers had been christened before the fire. USS Kitty Hawk, was christened on May 21, 1960. The sponsor was Mrs. Neil H. McElroy, wife of the former U.S. Secretary of Defense. USS Constellation was christened 8 October 1960 by Mrs. C. A. Herter, wife of the then Secretary of State, more than a month before the fire. This disproves the name switch myth.
1960–1969 allegedly claimed by the Johnson administration to have been attacked by North Vietnamese torpedo boats. On 5 August both carriers launched air strikes on a North Vietnamese oil facility and naval vessels. CVW-14 USS Constellation during her 1964–65 lost two aircraft, an A-1 Skyraider, piloted by LTjg. Richard C. Sather, deployment. who was killed in action (KIA), and an A-4 Skyhawk flown by LTjg. Everett Alvarez Jr., who became one of America's first US POW's of the Vietnam War.[4] Operations returned to a more normal cycle for the remainder of the deployment, and Constellation returned to San Diego, California on 1 February 1965, ending a nearly nine-month cruise. Connie and CVW-14 were awarded a Navy Unit Commendation (NUC) for the early August operations...
The nine-month deployment ended in May, with CVW-14 suffering the loss of seven total aircraft, five to enemy action. One aircrewman was taken as a POW, but there were no fatalities.[2]
1970–1979 Constellation underway off Vietnam, 1971–72. abortive.[5] 23 December 1974. A 14-month major overhaul and upgrade at Puget Sound Naval Shipyard, Wash., commenced in February 1974,.
Constellation near the Aleutian Islands during PACEX '89.]
1980–1989."[6] An uneventful deployment to the western Pacific and Indian Ocean from October 1981 to May 1982 followed. In January 1983, Constellation entered the Puget Sound Naval Shipyard President Ronald Reagan (third from left) aboard for a 13-month complex overhaul, during which the ships Terrier USS Constellation, 20 August 1981]
'
6.[7]
Constellation crew members form Battle E awards on the flight deck. on the One Main Machinery Room and erupted into a full blown conflagration that tore through the uptakes and spread throughout the ship. The Oil King and Oil Lab were blamed early. One Main Top Watch (a Machinist Mate) said JP-5, jet fuel, as he exited the Constellation underway, 1988. space hangar bay. To the crew's horror, the fires reflashed and the crew went back into action. Into the next day, the crew battled the blaze that had reflashed and continued to threaten the entire ship. Connie pulled back into North Island on 3 August. Round-the-clock repairs by the crew assisted by civilian contractors got the ship ready for deployment, on schedule. The Constellation/CVW-14 team deployed on 1 December 1988 for the Indian Ocean. Four days out to sea, a Prowler and its four crew members were lost at sea.[8] This West-Pac deployment ended six months later at San Diego on 1 June 1989.[2]
1990–1999 Constellation in Seattle, 1996.. The Constellation's next deployment, from 1 April to 1 October 1997, included a return to the Persian Gulf for OSW activities, now under operational control of the Fifth Fleet. In over 10 weeks of operating in the Gulf, CVW-2 flew more than 4,400 sorties, with well over 1,000 sorties in direct support of OSW.]
2000–present
Constellation's 20th deployment began on 16 March 2001. She entered the Persian Gulf on 30 April and immediately commenced operations in support of OSW. On 13 May Capt. John W. Miller Constellation in Sydney harbor, 2001. and]
Connie departed the gulf on 17 April and steamed for San Diego for the last time. On 1 June a Sea Control Squadron 38 S-3B Viking crewed by Lt. Hartley Postlethwaite, Ltjg. Arthur Gutting and CO Capt. John W. Miller recorded Constellation's 395,710th and final arrested landing. Her 21st and final deployment ended the next day.[2] After 41 years of commissioned service, the USS Constellation was decommissioned at the Naval Air Station North Island in San Diego on 7 August 2003. The ship was towed, beginning 12 September 2003, to the Inactive Ships Maintenance Facility in Bremerton, Washington.[9] As of February 2008, Constellation is scheduled to be disposed of by dismantling in the next five years, along with USS Independence.[10] Constellation was replaced by USS Ronald Reagan (CVN-76).
In popular culture
Constellation appears in the 2001 film Pearl Harbor, portraying the USS Hornet during the takeoff sequences for the Doolittle Raid. In September of 2000, four vintage B-25s took off from the ship while she was steaming of the coast of San Diego, reenacting the feat of Doolittle's raiders, albeit on a much larger flight deck. Some members of the crew were used as extras in the movie. In the Home Improvement television episode "At Sea" (Season 6, Ep. 1), the Tool Time cast and crew visit the Connie for a "Salute to Engines" special. The story of Tiger Cruise, a Disney Channel Original Movie, takes place aboard the USS Constellation.
A B-25 sits on the flight deck of USS Constellation during filming of the movie Pearl Harbor in 2000
In computer game Jetfighter II, USS Constellation serves as the base of operations in most of player's combat missions. Player can also choose it from six available airfields (the other five being LAX, VBG, SFO, OAK, and NUQ) to takeoff or land at in "Free Flight" mode.
References
[1] "Constellation" (http:/ / www. nvr. navy. mil/ nvrships/ details/ CV64. htm). Naval Vessel Register. . Retrieved 15 March 2009. [2] United States Navy. Dictionary of American Naval Fighting Ships. Constellation III (http:/ / www. history. navy. mil/ danfs/ c13/ constellation-iii. htm). [3] Haberman, Clyde (December 21, 2010). "Recalling a Brooklyn Disaster Otherwise Forgotten" (http:/ / www. nytimes. com/ 2010/ 12/ 21/ nyregion/ 21nyc. html?ref=nyregion). The New York Times. . Retrieved December 21, 2010. [4] Moise, p. 219 [5] Ryan, Paul B., CAPT USN "USS Constellation Flare-up: Was it Mutiny?" United States Naval Institute Proceedings January 1976 pp.46–52 [6] The Public Papers of President Ronald W. Reagan. Ronald Reagan Presidential Library. August 20, 1981 – Remarks on Board the U.S.S. Constellation off the Coast of California (http:/ / www. reagan. utexas. edu/ archives/ speeches/ 1981/ 82081a. htm). [7] Moran, Richard Thomas: "Official Personnel Military Record DD-214", Department of Defense, 1989. [8] USS Constellation (CV 64) (http:/ / www. navybuddies. com/ cvn/ cv64. htm) [9] Maintenance Category D and X - Definition (http:/ / www. nvr. navy. mil/ maint_dx. htm)
[10] Peterson, Zachary M. (2008-02-26). "Navy sink list includes Forrestal, destroyers" (http:/ / www. navytimes. com/ news/ 2008/ 02/ navy_shipdisposal_080223w/ ). NavyTimes. . Retrieved 2008-09-07.
• This article includes text from the public domain Dictionary of American Naval Fighting Ships. The entry can be found here (). • Leonard F. Guttridge, Mutiny: A History of Naval Insurrection, United States Naval Institute Press, 1992, ISBN 0-87021-281-8 • Moise, Edwin E. Tonkin Gulf and the Escalation of the Vietnam War. 1996. The University of North Carolina Press. ISBN 0-8078-2300-7
External links
• An official US Navy USS Constellation page ( cv64-constellation/cv64-constellation.html) • Official current status of Constellation () – NAVSHIPSO (NAVSEA Shipbuilding Support Office) • An unofficial USS Constellation webpage () • Maritimequest USS Constellation CV-64 Photo Gallery ( us_navy_pages/aircraft_carriers/constellation_cv_64/uss_constellation_cv_64_page_1.htm) • Overhead view of the Constellation in 'mothballs' ( wa&ie=UTF8&z=17&ll=47.55239,-122.652408&spn=0.003389,0.010729&t=k&om=1) – google maps • USS Constellation Association history page () • USS Constellation history at U.S. Carriers () • America's Flagship: A History of USS Constellation (CV/CVA-64) ( backissues/2000s/2004/ma/connie.pdf) by Mike Weeks – Naval Aviation News – March–April 2004
|
https://www.scribd.com/document/58768940/Uss-Constellation
|
CC-MAIN-2016-30
|
refinedweb
| 2,094
| 56.96
|
A Basic Introduction to Boto3
Want to share your content on python-bloggers? click here.
In a previous post, we showed how to interact with S3 using AWS CLI. In this post, we will provide a brief introduction to boto3 and especially how we can interact with the S3.
Download and Configure Boto3
You can download the Boto3 packages with pip install:
$ python -m pip install boto3
or with conda:
conda install -c anaconda boto3
Then, it is better to configure it as follows:
For the credentials, which are under ~/.aws/credentials:
[default]
aws_access_key_id = YOUR_KEY
aws_secret_access_key = YOUR_SECRET
And for the region, use the file under ~/.aws/config:
[default]
region=us-east-1
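If you prefer not to rely on these files, the same settings can be passed directly when the client is created; a minimal sketch (the values below are placeholders, never hard-code real credentials):

import boto3

# Placeholder credentials and region, purely for illustration.
s3 = boto3.client(
    "s3",
    aws_access_key_id="YOUR_KEY",
    aws_secret_access_key="YOUR_SECRET",
    region_name="us-east-1",
)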
Once you are ready you can create your client:
import boto3

s3 = boto3.client('s3')
Notice that in many cases and in many examples you will see boto3.resource instead of boto3.client. There are small differences, summarized below using an answer I found on StackOverflow; a short comparison sketch follows the two lists.
Client:
- low-level AWS service access
- generated from AWS service description
- exposes botocore client to the developer
- typically maps 1:1 with the AWS service API
- all AWS service operations are supported by clients
- snake-cased method names (e.g. ListBuckets API => list_buckets method)
Resource:
- higher-level, object-oriented API
- generated from resource description
- uses identifiers and attributes
- has actions (operations on resources)
- exposes subresources and collections of AWS resources
- does not provide 100% API coverage of AWS services
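As a quick illustration of the difference (a sketch that assumes the credentials above are configured), here is the same task, listing bucket names, done with the client API and with the resource API:

import boto3

# Low-level client: calls map to the AWS API and return plain dictionaries.
client = boto3.client("s3")
for bucket in client.list_buckets()["Buckets"]:
    print(bucket["Name"])

# Higher-level resource: collections yield Bucket objects with attributes.
resource = boto3.resource("s3")
for bucket in resource.buckets.all():
    print(bucket.name)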
How to List your Buckets
Assume that you have already created some S3 buckets, you can list them as follow:
list_buckets = s3.list_buckets()
for bucket in list_buckets['Buckets']:
    print(bucket['Name'])
gpipis-cats-and-dogs gpipis-test-bucket my-petsdata
How to Create a New Bucket
Let’s say that we want to create a new bucket in S3. Let’s call it
20201920-boto3-tutorial.
s3.create_bucket(Bucket='20201920-boto3-tutorial')
Let’s see if the bucket is actually on S3
for bucket in s3.list_buckets()['Buckets']:
    print(bucket['Name'])
20201920-boto3-tutorial gpipis-cats-and-dogs gpipis-test-bucket my-petsdata
As we can see, the 20201920-boto3-tutorial bucket was added.
How to Delete an Empty Bucket
We can simply delete an empty bucket:
s3.delete_bucket(Bucket='my_bucket')
If you want to delete multiple empty buckets, you can write the following loop:
list_of_buckets_i_want_to_delete = ['my_bucket01', 'my_bucket02', 'my_bucket03']

for bucket in s3.list_buckets()['Buckets']:
    if bucket['Name'] in list_of_buckets_i_want_to_delete:
        s3.delete_bucket(Bucket=bucket['Name'])
Bucket vs Object
A bucket has a unique name in all of S3 and it may contain many objects which are like the “files”. The name of the object is the full path from the bucket root, and any object has a key which is unique in the bucket.
Upload files to S3
I have 3 txt files and I will upload them to my bucket under a key called mytxt.
# Set bucket, filename and key for each upload
s3.upload_file(Bucket='20201920-boto3-tutorial',
               Filename='file01.txt',
               Key='mytxt/file01.txt')
s3.upload_file(Bucket='20201920-boto3-tutorial',
               Filename='file02.txt',
               Key='mytxt/file02.txt')
s3.upload_file(Bucket='20201920-boto3-tutorial',
               Filename='file03.txt',
               Key='mytxt/file03.txt')
As we can see, the three txt files were uploaded to the 20201920-boto3-tutorial bucket under the mytxt key.
Notice: The files that we upload to S3 are private by default. If we want to make them public then we need to add ExtraArgs = {'ACL': 'public-read'}. For example:
s3.upload_file(Bucket='20201920-boto3-tutorial',
               Filename='file03.txt',
               Key='mytxt/file03.txt',
               ExtraArgs={'ACL': 'public-read'})
List the Objects
We can list the objects as follow:
for obj in s3.list_objects(Bucket='20201920-boto3-tutorial', Prefix='mytxt/')['Contents']:
    print(obj['Key'])
Output:
mytxt/file01.txt mytxt/file02.txt mytxt/file03.txt
Delete the Objects
Let’s assume that I want to delete all the objects in ‘20201920-boto3-tutorial’ bucket under the ‘mytxt’ Key. We can delete them as follows:
for obj in s3.list_objects(Bucket='20201920-boto3-tutorial', Prefix='mytxt/')['Contents']:
    s3.delete_object(Bucket='20201920-boto3-tutorial', Key=obj['Key'])
How to Download an Object
Let’s assume that we want to download the
dataset.csv file which is under the
mycsvfiles Key in
MyBucketName. We can download the existing object (i.e. file) as follows:
s3.download_file(Filename='my_csv_file.csv', Bucket='MyBucketName', Key='mycsvfiles/dataset.csv')
How to Get an Object
Instead of downloading an object, you can read it directly. For example, it is quite common to deal with csv files that you want to read as pandas DataFrames. Let's see how we can get the file01.txt which is under the mytxt key.
obj = s3.get_object(Bucket='20201920-boto3-tutorial', Key='mytxt/file01.txt')
obj['Body'].read().decode('utf-8')
Output:
'This is the content of the file01.txt'
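Since reading csv files into pandas DataFrames was mentioned above, here is a minimal sketch of that pattern (the bucket and key names are the hypothetical ones from the download example):

import io

import boto3
import pandas as pd

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="MyBucketName", Key="mycsvfiles/dataset.csv")

# The Body is a streaming object; read it into memory and let pandas parse it.
df = pd.read_csv(io.BytesIO(obj["Body"].read()))
print(df.head())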
Discussion
That was a brief introduction to Boto3. Actually, with the Boto3 you can have almost full control of the platform.
Want to share your content on python-bloggers? click here.
|
https://python-bloggers.com/2020/10/a-basic-introduction-to-boto3/
|
CC-MAIN-2021-10
|
refinedweb
| 838
| 56.25
|
OpenGL is very flexible in the way it renders 3D scenes. There are a number of texture parameters that can be set to create visual effects and adjust how textures are used to render polygons. This section covers some commonly used texture parameters—the full list is beyond the scope of this book, but you can read the OpenGL documentation online () for all the details.
In Listing 11-2 we set two texture parameters, GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER, which define the minimizing and magnification filters for texture scaling. Both were set to GL_LINEAR, which gives good results in most cases, but there are other values that you can use to fine-tune the scaling.
When OpenGL renders a textured polygon to the screen, it samples the texture at regular intervals to calculate the color of pixels in the polygon. If the texture is mip mapped (see our earlier discussion), OpenGL also has to decide which mip level(s) it should sample. The method it uses to sample the textures and select a mip level is defined by the minimizing or magnification filter parameter.
The only values you can set for the magnification filter (GL_TEXTURE_MAG_FILTER) are GL_NEAREST and GL_LINEAR. Generally it is best to stick with GL_LINEAR, which makes OpenGL use bilinear filtering to smooth the texture when scaled, but textures can appear blurry at high scales. The alternative is GL_NEAREST, which looks sharper but blocky. These values are also supported by the minimizing filter (GL_TEXTURE_MIN_FILTER), in addition to four other constants that tell OpenGL how to include the mip level in the color calculation (see Table 11-2). The highest-quality setting for the minimizing filter is GL_LINEAR_MIPMAP_LINEAR, which softens the texture like GL_LINEAR but also blends between the two nearest mip levels (known as trilinear filtering). The following lines set the magnification and minimizing filter methods to the highest-quality settings:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR) glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR)
Table 11-2. Potential Values for Min and Mag Filter Parameters (the sample images of the original table are not reproduced here)

GL_NEAREST: Selects the texel nearest to the sample point. This keeps the texture looking sharp, but can appear low quality.
GL_LINEAR: Selects the four texels nearest to the sample point and blends between them. This smoothes the rendered texture and makes it appear less jagged when scaled.
GL_NEAREST_MIPMAP_NEAREST: Selects the texel nearest to the sample point, and the mip level that is closest to the size of the pixel being rendered. Minimizing filter only.
GL_LINEAR_MIPMAP_NEAREST: Blends between the four texels nearest to the sample point and selects the mip level that is closest to the size of the pixel being rendered. Minimizing filter only.
GL_NEAREST_MIPMAP_LINEAR: Selects the texel nearest to the sample point and blends between the two mip levels that most closely match the size of the pixel being rendered. Minimizing filter only.
GL_LINEAR_MIPMAP_LINEAR: Blends between the four texels nearest to the sample point and blends between the two mip levels that most closely match the size of the pixel being rendered. Minimizing filter only.
A texture coordinate with components in the zero to one range will refer to a point inside the texture, but it is not an error to have a texture coordinate with components outside that range—in fact, it can be very useful. How OpenGL treats coordinates that are not in the zero to one range is defined by the GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T texture parameters. The default for these parameters is GL_REPEAT, which causes samples that go off one edge of the texture to appear on the opposite edge. This creates the effect of tiling the texture, so that multiple copies of it are placed edge to edge. You can see this in action by editing the calls to glTexCoord2f in Listing 11-2. Try replacing the lines between the calls to glBegin and glEnd with the following:
glTexCoord2f(0, 3)
glVertex3f(-300, 300, 0)
glTexCoord2f(3, 3)
glVertex3f(300, 300, 0)
glTexCoord2f(3, 0)
glVertex3f(300, -300, 0)
glTexCoord2f(0, 0)
glVertex3f(-300, -300, 0)
If you run the edited Listing 11-2, you should see something like Figure 11-4. Only a single quad is drawn, but the texture is repeated nine times because the texture components range from zero to three. Tiling textures like this is useful in games because you can texture very large polygons without having to break them up in to smaller pieces. For example, a long piece of fence could be created with a single elongated quad and a tiled texture, which is more efficient than small quads for each piece of fence.
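As an aside (this snippet is not one of the book's listings), the wrap mode can also be set explicitly with PyOpenGL before drawing such a tiled quad; GL_REPEAT is already the default, so the calls below simply make that choice explicit:

from OpenGL.GL import (GL_REPEAT, GL_TEXTURE_2D, GL_TEXTURE_WRAP_S,
                       GL_TEXTURE_WRAP_T, glBindTexture, glTexParameteri)

def use_repeat_wrapping(texture_id):
    # Bind the texture we are about to draw with, then set both wrap axes.
    glBindTexture(GL_TEXTURE_2D, texture_id)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT)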
Repeating textures has the same effect as transforming the components of every texture coordinate with the following Python function (although the wrapping is done by your graphics card and not Python). The % symbol is the modulus operator, which returns the remainder in a division, so % 1.0 would return the fractional part of a number.
def wrap_repeat(component): return component % 1.0
|
https://www.pythonstudio.us/pygame-game-development/texture-parameters.html
|
CC-MAIN-2019-51
|
refinedweb
| 838
| 50.87
|
BLE USB dongle throughput measurementMarch 22, 2022
Introduction
Here we will describe two quick ways of measuring the data throughput of the BleuIO Dongle.
For both examples we are going to need a BleuIO Dongle, another Bluetooth device (like another Bleuio Dongle) and a computer with Python (minimum version: 3.6) installed.
For the first measurement example, measuring the BLE data throughput, you will need one of the following supported development kits from Nordic Semiconductor:
- nRF52840 DK (PCA10056)
- nRF52840 Dongle (PCA10059)
- nRF52833 DK (PCA10100)
- nRF52 DK (PCA10040)
- nRF51 DK (PCA10028)
- nRF51 Dongle (PCA10031)
The first measurement example is the actual BLE data throughput. For this we will use a BleuIO Dongle and Wireshark. (For help on how to setup Wireshark and requirements go to this link: ).
We will also utilize a simple python script that sends a set amount of data. For this measurement you can ignore the throughput print at the end of the script.
The second measurement example is for measuring the actual data being transferred over the USB as a Virtual COM port (via the CDC protocol).
We will be using the same simple script that will send a set amount of data and time when the transfer starts and then stops. Then divide the amount of data with the time the transfer took to get the throughput.
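In other words, the calculation at the end of the script boils down to the following (the numbers here are made up purely to illustrate the arithmetic):

packet_length = 150          # bytes per write, as in the script below
packets_sent = 3334          # roughly 0.5 MB / 150 bytes
elapsed_seconds = 42.0       # hypothetical transfer time

throughput = (packet_length * packets_sent) / elapsed_seconds
print(f"{throughput:.1f} bytes per second")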
Notice: Interference can be caused by other wireless networks, other 2.4 GHz devices, and high-voltage devices that generate electromagnetic interference. This has an impact on the throughput measurement. To avoid interference, pick a location free of other wireless traffic or use a shield box.
Instructions for BLE data throughput
- For best result place the nRF Dev Kit between the BleuIO Dongle and your target device.
- Open Wireshark and double-click the ‘nRF Sniffer for Bluetooth LE’.
- Make sure the target Bluetooth device is advertising and find in the the scroll-down list.
- Choose ‘IO/Data’ under the ‘Analysis’ menu tab.
- Click the ‘+’ button to add new graphs. Add ‘bytes per seconds’ and/or ‘bit per seconds’.
- Modify the script by filling in the relevant information into the variables ‘your_com_port’, ‘target_mac_addr’ and ‘write_handle’.
- Run the python script.
- You can now observe the graph showing the BLE Data throughput!
Instructions for USB port data throughput
This is the second measurement example for measuring the actual point to point data transfer between the two USB ports.
- Connect the dongle to your computer. (Look up the COM port your dongle uses and paste it in the script in the variable ‘your_com_port’)
- Scan (Using AT+GAPSCAN) after the device you wish to send the data to. Copy the mac address of the device into the script in the variable ‘target_mac_addr’.
- Connect to the device and look up the handle of the characteristic you want to write to and paste into the script in the variable ‘write_handle’.
- Finally just run python script and the throughput will be displayed at the end!
The script
import datetime
import serial
import time
import string
import random

connecting_to_dongle = True
trying_to_connect = False

# Change this to the com port your dongle is connected to.
your_com_port = "COM20"
# Change this to the mac address of your target device.
target_mac_addr = "[0]40:48:FD:E5:2C:F2"
# Change this to the handle of the characteristic on your target device.
write_handle = "0011"
# You can experiment with the packet length, increasing or decreasing it and see how that effect the throughput
packet_length = 150
# 1 Megabytes = 1000000 Bytes
file_size = 0.5 * 1000000
end_when = file_size / packet_length
send_counter = 0


# Random data string generator
def random_data_generator(size=packet_length, chars=string.digits + string.digits):
    return "".join(random.choice(chars) for _ in range(size))


print("Connecting to dongle...")
while connecting_to_dongle:
    try:
        console = serial.Serial(
            port=your_com_port,
            baudrate=115200,
            parity="N",
            stopbits=1,
            bytesize=8,
            timeout=0,
        )
        if console.is_open.__bool__():
            connecting_to_dongle = False
    except:
        print("Dongle not connected. Please reconnect Dongle.")
        time.sleep(5)

print("Connected to Dongle.")
console.write(str.encode("AT+GAPDISCONNECT\r"))
start = input("Press Enter to start.\n\r>> ")
console.write(str.encode("ATE0\r"))
console.write(str.encode("AT+DUAL\r"))

connected = "0"
while connected == "0":
    time.sleep(0.5)
    if not trying_to_connect:
        # change to Mac address of the device you want to connect to
        console.write(str.encode("AT+GAPCONNECT=" + target_mac_addr + "\r"))
        trying_to_connect = True
    dongle_output2 = console.read(console.in_waiting)
    time.sleep(2)
    print("Trying to connect to Peripheral...")
    if not dongle_output2.isspace():
        if dongle_output2.decode().__contains__("\r\nCONNECTED."):
            connected = "1"
            print("Connected!")
            time.sleep(8)
        if dongle_output2.decode().__contains__("\r\nDISCONNECTED."):
            connected = "0"
            print("Disconnected!")
            trying_to_connect = False
    dongle_output2 = " "

start2 = input("Press Enter to sending.\n\r>> ")
start_time = time.mktime(datetime.datetime.today().timetuple())
console.write(
    str.encode(
        "AT+GATTCWRITEWRB=" + write_handle + " " + random_data_generator() + "\r"
    )
)
while 1:
    dongle_output = console.read(console.in_waiting)
    if send_counter > end_when:
        end_time = time.mktime(datetime.datetime.today().timetuple())
        break
    # Change to the handle of the characteristic you want to write to
    if "handle_evt_gattc_write_completed" in str(dongle_output):
        console.write(
            str.encode(
                "AT+GATTCWRITEWR=" + write_handle + " " + random_data_generator() + "\r"
            )
        )
        send_counter = send_counter + 1
    try:
        if not dongle_output.decode() == "":
            print(dongle_output.decode())
    except:
        print(dongle_output)

time_elapsed = end_time - start_time
time.sleep(0.1)
print("*" * 25)
print("Transfer Complete in: " + str(time_elapsed) + " seconds")
print(str(packet_length * send_counter) + "bytes sent.")
print("*" * 25)
print(
    "Throughput via USB (Virtual COM port): "
    + str((packet_length * send_counter) / time_elapsed)
    + " Bytes per seconds"
)
print("*" * 25)
|
https://www.bleuio.com/blog/ble-usb-dongle-throughput-measurement/
|
CC-MAIN-2022-40
|
refinedweb
| 873
| 50.02
|
A toolbox to analyse diagnostic data!
Project description
Diagnostics - a toolbox built for analyzing diagnostic data!
Installation
The diagnostics library is tested on python 3.7. However, it should run on python 3.6 and 3.5 as well.
You can install the library using
pip:
pip install git+
Alternatively, you can clone the repository and use
setup.py to install:
git clone
cd diagnostics
python setup.py install
Usage
TimeSeries
Diagnostic events are derived from real occurrences. For instance, your phone will probably generate a message (event) if your battery is running low (percentage below a threshold value).
The diagnostics library has a TimeSerie class that can capture these occurrences. For example, a TimeSerie representing your battery life, which drains 0.01% each second:
import numpy as np
import diagnostics as ds

battery_life = ds.TimeSerie(np.arange(100, 0, -0.01), fs=1)
The first argument consists of a data array (both list() and numpy.array() are supported), and additionally you can provide some keyword parameters. Here we've provided the sample frequency (fs), which is 1 Hz, because we said our battery drains 0.01% each second. In this particular case we could've left fs out, since the default value of fs is also 1.
Now that we've got our data, we can easily visualize this:
battery_life.plot(show=True)
There are other keyword parameters that we can use as well,
such as t0 (start time of
TimeSerie in posixtime or a
datetime object),
and a name (default is an empty string).
from datetime import datetime

battery_life = ds.TimeSerie(np.arange(100, 0, -0.01),
                            fs=1,
                            t0=datetime(2019, 1, 1, 8, 5),  # 2019-01-01 08:05
                            name='battery life')
Now we've got our battery life set to a specific day, and gave it a name. Both will come in handy later.
BooleanTimeSeries
Let's be honest, the battery percentage of your phone does not really matter to you,
unless it goes below a certain threshold.
Luckily for us, our
TimeSerie can easily be converted to a
BooleanTimeSerie,
which only contains boolean values of when the percentage reaches below 25%:
battery_below25 = battery_life <= 25
battery_below25.plot(show=True)
Now that's easy! We can see that our battery goes below 25% at HH:MM:SS.
StateChangeArray
You could argue that our
BooleanTimeSerie contains a lot of data points with the same value.
I'd agree with you, and therefore introduce a class that only keeps track of the changes in
data points, the
StateChangeArray:
battery_low_state = battery_below25.to_statechangearray()
Alternatively, we can create a
StateChangeArray (or
BooleanStateChangeArray,
you can probably guess the difference :smile:) from scratch:
s = ds.StateChangeArray([1, 4, 8, 13], t=[1, 2, 4, 8], name='my state')
b = ds.BooleanStateChangeArray([True, False, True, False], t=[1, 3, 6, 9], name='b')
Both the data array and the values for time (
t) can be
list() or
np.array().
The time is considered as posixtime. For now it is not possible to give a datetimearray
or list of datetimes as an input, but this will be implemented in the near future.
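Until that lands, a workaround is to convert the datetimes to posixtime yourself. Here is a minimal sketch (the timestamps and state values below are made up for illustration):
from datetime import datetime, timezone
import diagnostics as ds

# Hypothetical timestamps; convert them to posixtime before handing them to the library.
stamps = [datetime(2019, 1, 1, 8, 5), datetime(2019, 1, 1, 9, 0), datetime(2019, 1, 1, 9, 30)]
t = [s.replace(tzinfo=timezone.utc).timestamp() for s in stamps]

low_battery = ds.StateChangeArray([True, False, True], t=t, name='low battery')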
Comparing TimeSeries and StateChangeArrays
There are more classes besides TimeSeries and StateChangearrays, each with their own
advantages and disadvantages. The power of this module lies in clear transformations
from one class to another (we've already shown the
TimeSerie.to_statechangearray() method),
and the comparison of multiple classes.
To start with TimeSeries, if two (or more) have the same array_length,
t0 and
fs, we can
easily do calculations with them!
# create two TimeSerie objects that we'll combine
a = ds.TimeSerie(np.sin(np.linspace(0, 2*np.pi, 100)), t0=0, fs=1, name='a')
b = ds.TimeSerie(np.sin(2 * np.linspace(0, 2*np.pi, 100)), t0=0, fs=1, name='b')

# It's this easy!
c = a + b

# We're interested in the more extreme values, lets create TimeSeries for these:
d = c <= -1
e = c >= 1

# we'll name them to keep our bookkeeping up to date
d.name = 'c <= -1'
e.name = 'c >= 1'

# and find when one of the above conditions is True!
f = d | e

# when performing boolean operators ('~', '^', '&', '|'), the library
# does its own bookkeeping:
print(f.name)
f.plot(show=True)
Comparing StateChangeArrays would normally be a bit tricky, since the data is most likely non-linearly spaced. This means that we can't just perform vectorized boolean operations, but we'll need to combine both data values as well as their respective points in time.
Luckily for us, the
StateChangeArray has this built in:
a = StateChangeArray([True, False, True, False], t=[2, 4, 6, 8], name='a')
b = StateChangeArray([True, False, True, False], t=[3, 5, 7, 9], name='b')
c = a | b
d = a & b
e = ~a
f = a ^ a
g = a ^ e
That's pretty great right?
Reports & Events
WIP
I wrote some object to object mapping code and introduced it in some of my previous postings about performance. I got a 22x performance boost when trying out different mapping methods, and now it is time to make my code nice. In this posting I will show you how I organized my code into classes. Yes, you can use my classes in your work if you like. :)
To get a quick overview of my mapping journey so far, I suggest you read the following postings from this blog:
In this posting I will create a base class for my O/O-mappers. I will also provide three implementations of the O/O-mapper: reflection based, dynamic code based, and IL-instruction based.
NB! Although the implementations given here work pretty well, they are very general and therefore not maximally optimized. I will provide faster implementations in my next posting about the object to object mapper.
I wrote a simple base class that provides a base type and some common functionality to the mapper implementations. The common functionality comes in two methods: GetMapKey() and GetMatchingProperties(). The first one creates a unique string key for two mapped types. The second one queries the source and target types and decides which properties match. These methods are protected because there is no reason for external code to call them. If you write your own implementations you can override these methods if you like.
public class PropertyMap
{
    public PropertyInfo SourceProperty { get; set; }
    public PropertyInfo TargetProperty { get; set; }
}

public abstract class ObjectCopyBase
{
    public abstract void MapTypes(Type source, Type target);
    public abstract void Copy(object source, object target);

    protected virtual IList<PropertyMap> GetMatchingProperties(Type sourceType, Type targetType)
    {
        var sourceProperties = sourceType.GetProperties();
        var targetProperties = targetType.GetProperties();

        var properties = (from s in sourceProperties
                          from t in targetProperties
                          where s.Name == t.Name &&
                                s.PropertyType == t.PropertyType
                          select new PropertyMap
                          {
                              SourceProperty = s,
                              TargetProperty = t
                          }).ToList();
        return properties;
    }

    protected virtual string GetMapKey(Type sourceType, Type targetType)
    {
        var keyName = "Copy_";
        keyName += sourceType.FullName.Replace(".", "_");
        keyName += "_";
        keyName += targetType.FullName.Replace(".", "_");
        return keyName;
    }
}
The properties query given here is a primitive one. My own query is far more complex and has many more conditions. I don't want to draw your attention away from the main topic, and that's why I am using a simple implementation of the query here. The result of the query is a property map for the two given types: every property of the source type that fits is matched to a property of the target type.
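To give an idea of what a stricter query could look like (this is not the author's production query; it assumes the override lives in one of the concrete mappers shown below and that System.Linq is in scope), one could also require readable and writable properties and skip indexers:
protected override IList<PropertyMap> GetMatchingProperties(Type sourceType, Type targetType)
{
    return (from s in sourceType.GetProperties()
            from t in targetType.GetProperties()
            where s.Name == t.Name &&
                  s.PropertyType == t.PropertyType &&
                  s.CanRead && t.CanWrite &&
                  s.GetIndexParameters().Length == 0 &&
                  t.GetIndexParameters().Length == 0
            select new PropertyMap { SourceProperty = s, TargetProperty = t }).ToList();
}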
My first implementation was based on reflection. It made heavy use of it and was not as optimal as I wanted. Of course, it also has a bright side – it is the simplest one to read and understand. Just open Google if you are not familiar with reflection, and it will take you a couple of minutes to understand how this implementation works.
public class ObjectCopyReflection : ObjectCopyBase
{
    private readonly Dictionary<string, PropertyMap[]> _maps = new Dictionary<string, PropertyMap[]>();

    public override void MapTypes(Type source, Type target)
    {
        if (source == null || target == null)
            return;

        var key = GetMapKey(source, target);
        if (_maps.ContainsKey(key))
            return;

        var props = GetMatchingProperties(source, target);
        _maps.Add(key, props.ToArray());
    }

    public override void Copy(object source, object target)
    {
        var sourceType = source.GetType();
        var targetType = target.GetType();

        var key = GetMapKey(sourceType, targetType);
        if (!_maps.ContainsKey(key))
            MapTypes(sourceType, targetType);

        var propMap = _maps[key];

        for (var i = 0; i < propMap.Length; i++)
        {
            var prop = propMap[i];
            var sourceValue = prop.SourceProperty.GetValue(source, null);
            prop.TargetProperty.SetValue(target, sourceValue, null);
        }
    }
}
If you look at the Copy() method you can see that it is pretty safe – it checks whether the mapping for the given types is already there, and if it is not, it creates it. This method was not a perfect one, because in my test environment it took about 0.0238 ms to copy the matching properties from one object to another. Using this class I got the following result: 0,0240 ms.
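For completeness, here is how the reflection-based mapper is used; the two DTO classes below are made up purely for illustration:
public class PersonSource
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public class PersonTarget
{
    public string Name { get; set; }
    public int Age { get; set; }
}

// Copy() maps the types lazily on first use, so no explicit MapTypes() call is needed.
var mapper = new ObjectCopyReflection();
var source = new PersonSource { Name = "Jane", Age = 30 };
var target = new PersonTarget();
mapper.Copy(source, target);
Console.WriteLine(target.Name + ": " + target.Age);   // Jane: 30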
The next implementation was remarkably faster. I used dynamically generated C# code and it gave me 0,0055 ms as the result. The implementation of the dynamically compiled C# mapper is here.
public class ObjectCopyDynamicCode : ObjectCopyBase
{
    // Requires System.Text, System.CodeDom.Compiler and Microsoft.CSharp.
    private readonly Dictionary<string, Type> _comp = new Dictionary<string, Type>();

    public override void MapTypes(Type source, Type target)
    {
        var key = GetMapKey(source, target);
        if (_comp.ContainsKey(key))
            return;

        var builder = new StringBuilder();
        builder.Append("namespace Copy {\r\n");
        builder.Append("    public class ");
        builder.Append(key);
        builder.Append(" {\r\n");
        builder.Append("        public static void CopyProps(");
        builder.Append(source.FullName);
        builder.Append(" source, ");
        builder.Append(target.FullName);
        builder.Append(" target) {\r\n");

        var map = GetMatchingProperties(source, target);
        foreach (var item in map)
        {
            builder.Append("            target." + item.TargetProperty.Name +
                           " = source." + item.SourceProperty.Name + ";\r\n");
        }
        builder.Append("        }\r\n    }\r\n}");

        // Compile the generated source in memory and cache the resulting type.
        var provider = new CSharpCodeProvider();
        var parameters = new CompilerParameters { GenerateInMemory = true };
        parameters.ReferencedAssemblies.Add(source.Assembly.Location);
        parameters.ReferencedAssemblies.Add(target.Assembly.Location);
        var results = provider.CompileAssemblyFromSource(parameters, builder.ToString());
        var copierType = results.CompiledAssembly.GetType("Copy." + key);
        _comp.Add(key, copierType);
    }

    public override void Copy(object source, object target)
    {
        var sourceType = source.GetType();
        var targetType = target.GetType();

        var key = GetMapKey(sourceType, targetType);
        if (!_comp.ContainsKey(key))
            MapTypes(sourceType, targetType);

        var flags = BindingFlags.Public | BindingFlags.Static | BindingFlags.InvokeMethod;
        var args = new[] { source, target };
        _comp[key].InvokeMember("CopyProps", flags, null, null, args);
    }
}
This implementation is way better than the previous one. It takes 0,0058 ms per copying operation. The code is not as simple to read and understand, but we got better performance.
Our last implementation is based on LCG (Lightweight Code Generation), and this code has the best performance, as you saw from the chart given in the LCG posting. Here is the code of the LCG implementation of the object copy class.
public class ObjectCopyLcg : ObjectCopyBase
{
    private readonly Dictionary<string, DynamicMethod> _del = new Dictionary<string, DynamicMethod>();

    public override void MapTypes(Type source, Type target)
    {
        var key = GetMapKey(source, target);
        if (_del.ContainsKey(key))
            return;

        var args = new[] { source, target };
        var mod = typeof(Program).Module;
        var dm = new DynamicMethod(key, null, args, mod);
        var il = dm.GetILGenerator();
        var maps = GetMatchingProperties(source, target);

        // Emit IL that copies every mapped property from source (argument 0) to target (argument 1).
        foreach (var map in maps)
        {
            il.Emit(OpCodes.Ldarg_1);
            il.Emit(OpCodes.Ldarg_0);
            il.EmitCall(OpCodes.Callvirt, map.SourceProperty.GetGetMethod(), null);
            il.EmitCall(OpCodes.Callvirt, map.TargetProperty.GetSetMethod(), null);
        }
        il.Emit(OpCodes.Ret);

        _del.Add(key, dm);
    }

    public override void Copy(object source, object target)
    {
        var sourceType = source.GetType();
        var targetType = target.GetType();

        var key = GetMapKey(sourceType, targetType);
        if (!_del.ContainsKey(key))
            MapTypes(sourceType, targetType);

        var args = new[] { source, target };
        var del = _del[key];
        del.Invoke(null, args);
    }
}
Now let’s see how well this implementation performs. The result is a little bit unexpected: 0,0084 ms. Okay, there is a little gotcha but let it be your homework if you have nothing better to do.
Here is my simple test method that measures how much time each implementation takes with two given objects. After measuring, it writes the results to the console window. You can use this code to make testing easier.
public static void TestMappers(object source, object target)
{
    var mappers = new ObjectCopyBase[]
    {
        new ObjectCopyReflection(),
        new ObjectCopyDynamicCode(),
        new ObjectCopyLcg()
    };

    var sourceType = source.GetType();
    var targetType = target.GetType();

    foreach (var mapper in mappers)
    {
        mapper.MapTypes(sourceType, targetType);

        var stopper = new Stopwatch();
        stopper.Start();

        for (var i = 0; i < 100000; i++)
            mapper.Copy(source, target);

        stopper.Stop();

        var time = stopper.ElapsedMilliseconds / (double)100000;
        Console.WriteLine(mapper.GetType().Name + ": " + time);
    }
}
In this posting I showed you how I organized my previously written dirty code into classes and achieved better readability and maintainability. The abstract base class also serves very well – I am able to cast different mapper instances to the base type, which makes it easier to measure their performance and to use them with IoC containers like Unity.
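To make the IoC remark concrete, here is a minimal sketch of wiring one of the mappers into the Unity container (choosing ObjectCopyLcg as the default implementation is just an example):
using Microsoft.Practices.Unity;

public static class MapperBootstrap
{
    public static ObjectCopyBase CreateMapper()
    {
        var container = new UnityContainer();
        container.RegisterType<ObjectCopyBase, ObjectCopyLcg>();

        // Calling code only ever sees the abstract base type.
        return container.Resolve<ObjectCopyBase>();
    }
}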
In the next posting about object to object mapper I will show you how to modify these classes to gain better performance.
Are you familiar with AutoMapper?
Thanks for feedback Udi!
Of course I am familiar with AutoMapper. But it is always good study to try out something new you have not written before. :)
How can this be used for converting a DataReader to a class via PropertyInfo?
A DataReader is a totally different thing and needs different handling. When I have packaged this mapper into something that people can download, try and use, I will start with a mapper that works with ADO.NET objects.
From the Bit manipulation Tutorial, Peter has the following SBIT.H file. I have just started using CodeVisionAVR (trying to move into C from assembler coding; the project would be done by now, but I figured now is as good a time as any to put in the time to finally learn this C stuff) and I am having trouble with one line of the structure code.
struct bits {
....
} __attribute__((__packed__));
....
I get a declaration syntax error, and I am lost at this point trying to understand what this line means.
His post is below:
Code:
#include <avr/io.h>
#include "sbit.h"
#define KEY0 SBIT( PINB, 0 )
#define KEY0_PULLUP SBIT( PORTB, 0 )
#define LED0 SBIT( PORTB, 1 )
#define LED0_DDR SBIT( DDRB, 1 )
int main( void )
{
LED0_DDR = 1; // output
KEY0_PULLUP = 1; // pull up on
for(;;){
if( KEY0 == 0 ) // if key pressed (low)
LED0 = 0; // LED on (low)
else
LED0 = 1; // LED off (high)
}
}
Naturally this macro can also be used for internal flag variables, not only for IO registers.
Peter
The "__attribute__" is a GCC specific thing for passing instructions to the compiler to tell it how to generate code. In Codevision they use #pragma for similar kind of things. All the attribute is saying is that members of the struct{} must be byte aligned so if you had:
then on a CPU that uses 32bit alignment (ARM for example) the compiler might lay this out in memory so that it uses 8 bytes and the char uses the lower byte of a 32bit word with the other 24 bits unused. The ((packed)) attribute says that the compiler should lay things out on byte boundaries so the char would be in one byte and that is immediately followed by 4 bytes holding the long. On an ARM this is actually a bad idea as it has instructions to access full 32 bit words and 16bit "half words" but not for individual bytes so the compiler would have to use masks and shifts to pick out the variable value.
All of this is moot on an AVR - there's always byte alignment in structs as it uses 8bit alignment. For the purposes of porting to CV just remove the __attribute__ completely.
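To see the padding effect described above in numbers, here is a small sketch for a desktop GCC or Clang (not CodeVision; the struct and member names are made up):
#include <stdio.h>

struct padded {             /* default alignment: the compiler may insert padding */
    char c;
    long l;
};

struct packed_bytes {       /* byte aligned: no padding between members */
    char c;
    long l;
} __attribute__((__packed__));

int main(void)
{
    /* On a typical 32/64-bit machine the padded struct is larger than
       1 + sizeof(long); on an AVR both sizes match, since everything is
       byte aligned there anyway. */
    printf("padded: %zu, packed: %zu\n",
           sizeof(struct padded), sizeof(struct packed_bytes));
    return 0;
}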
I have to ask why you are bothering with any of this anyway? The sbit.h is used to give GCC a facility that CV already has (bit access to SFRs). In CV you can just:
PORTB.3 = 1;
in which the ".3" is a non standard C feature that Pavel has chosen to add to CV.
Because GCC is more generic (ARM, MIPS, Pentium, etc, etc) then ".3" notation could never be added so workarounds such as sbit.h are introduced to offer something similar for easy bit manipulation on MCUs.
Thanks Clawson. That is even easier and clearer to read than using sbi in assembler. Perhaps I will grow to love C as much as I do assembler, especially with such a fine compiler as CodeVision.
The answer to the question you pose is simply that I did not know CodeVision had this. I see it now under Accessing the I/O Registers in the help, along with the warning that it will not work on address locations above 5Fh (like PORTF on the ATmega128, for example).
Now if I could just stay home and work on this stuff I bet it would not have taken 5 years to decide to try and learn C. I think the process is going to go smoothly now.
Playing around with the debugger I see a defined variable such as:
#define wire7b PORTB.0
Not in scope. So I did this.
unsigned char *pwire7b = &wire7b;
So I can see the byte in the Watch window using pwire7b.
Is there a way to create a struct that would allow the debugger to see individual bytes? The bit and bool give error illegal symbol.
You need to read a book about C programming. The word #define creates a pre-processor macro and has nothing to do with defining a C variable. In fact the C compiler itself never sees "#define" as it's just used as part of a string substitution process done by the C Pre-Processor. (aka CPP)
If you had wanted to use the #define and then watch what happens when you:
wire7b = 1;
wire7b = 0;
then simply look at the IO view and see what's happening in the PORTB register. Bit 0 should be seen to reflect the last 1/0 that was written.
Note that those two lines are seen by the C compiler (after pre-processing) as:
PORTB.0 = 1;
PORTB.0 = 0;
the C compiler itself never "sees" anything called "wire7b" at all.
All this will be fully explained in any good book about C programming.
(but not the .n notation which is CV specific)
Reading both:
C Programming for Microcontrollers (Smiley micros)
Beginning C From Novice to Professional. Fourth edition Ivor Horton Apress.
I especially enjoyed the page on Bit-Fields in a Structure (exactly what I need in this project). It says at the end of the description: "You'll rarely, if ever, have any need for this facility..." But even if I were programming with gigabytes of free memory space, I just don't see why I would not use them. Perhaps there is a trade-off in efficiency or code size; I'll take care to learn about all that stuff as I go along.
In assembler I could use:
.MACRO SUBI16 ; Start macro definition
subi @1,low(@0) ; Subtract low byte
sbci @2,high(@0) ; Subtract high byte
.ENDMACRO ; End macro definition
In C, can I do something similar? The code I have needs to test 16 wires (I am making an Ethernet cable tester).
I see that the #define statement can be used for macros, but can I convert this function to a macro so I can invoke it while changing dwire1a, wire1a, pwire1b and the shift number 0 in the mask? Or do I have to cut and paste 15 more procedures and edit the lines?
Attachment(s):
Update to previous question on using multiline macros in C. From the codevisionavr forum the following code answers my question. I never would have figured this out on my own.
Link to codevisionavr answer....
What advantage do you see in that over:
which is simply more readable as it's not beset with '\' characters. It could even return 'result' rather than setting a global.
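For comparison, here is the same idea written both ways; the wire-test logic and the names are invented, not taken from the attachment:
#include <stdint.h>

/* Multi-line macro: every line but the last needs a trailing backslash,
   and neither the parameters nor the "result" are type checked. */
#define TEST_WIRE(pin_reg, bit, result)        \
    do {                                       \
        (result) = (((pin_reg) >> (bit)) & 1); \
    } while (0)

/* Static inline function: type-checked parameters, no backslash clutter,
   and it simply returns the value instead of writing to a global. */
static inline uint8_t test_wire(uint8_t pin_reg, uint8_t bit)
{
    return (pin_reg >> bit) & 1;
}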
Hi Clawson. I guess it would be speed, and that depends on the compiler of course, as this one handles it differently than others.
Remember, my initial question here was "is it possible" to do multi-line macros similar to the .MACRO and .ENDMACRO preprocessor commands in assembler.
As a side note, the author mentions that perhaps speed was my goal, but that was not and is not critical in this example. My goal is learning what can and can't be done, and I will learn the details of what should or should not be done after that. (hopefully :) )
From the codevisionavr thread.
See the recent thread discussing this very subject. I guess it comes down to a question of style, but I'd never use a macro with no type checking of parameters and return value when the same could be implemented with a static inline function (which will not generate any more code and will execute just as quickly).
Regards,
Steve A.
The Board helps those that help themselves.
Bart De Smet's on-line blog (0x2B | ~0x2B, that's the question)
A couple of days ago I posted the final build number of .NET Framework v2.0 and VS2005 (starts with 8. instead of 2. but for the rest it's exactly the same). Unfortunately, a pretty heavy spam cleaning script on my database deleted some of my latest posts. Luckily SQL Server's backup functionality saved my blog except for a couple of posts, including the version numbering one. So here it is:
The official version number of .NET v2.0 is 2.0.50727.42, VS2005's one is 8.0.50727.42. You might ask yourself what the hell the number stands for. The major and minor are pretty clear I think. The rest isn't that straightforward. AFAIK the story goes back to the early days of .NET v2.0 development where the format YMMDD was chosen for version numbering. So, the build points in the direction of July 27 2005, which is the point in time the final development push was initiated. From that point on, the last number increments every build with +1. We're now three months further, which indicates there were 14 builds per month, or 3-4 builds every week.
I just ran into an interesting post on MSDN Blogs about this too. The "Project 42" code name mentioned therein is new to me. Happy coincidence? Jason Zander has another post on the topic, about team t-shirts decorated with the magic number 42.
So, after v1.0.3705 and v1.1.4322 comes v2.0.50727.42. Start memorizing the number now :-).
Finally.
The last couple of days, Larry Osterman has posted three posts on easter eggs:
I thought this is a golden opportunity to share my favorite easter egg in a recent Microsoft product:
The easter egg disappeared in Visual Studio 2005. Read: the bug was fixed. This really was a bug, basically the emulator and the Virtual PC environment had some shared code which made the emulator in the guest think it was running inside another instance of the emulator. Funny however. Here's a screenshot:
Notice my quote in my yesterday post "enough stupid content posts for today". I kept my promise as I'm writing this on Sunday :p. However, next post will be more interesting again.
Download time :-)
I intended to blog about this long time ago, but the idea died while sitting in the long long blog idea queue. Now it's here: a post about System.Reflection.Emit. Stimulans: a session on writing a managed compiler in one hour on PDC 05. Let's kick off.
What is it?
System.Reflection.Emit is a namespace that allows assemblies to be created from within managed code applications. Once you've created the assembly dynamically it can be called immediately or it can be saved to disk to be executed later on. In fact, you can use System.Reflection.Emit for two big scenarios:
Noticed the word IL? Yes, there we go again :-).
Hello world +o(
Sick and tired of those hello world samples? I do, so let's create a "Yow dude" example instead (pretending to be in a funny mood).
Manual compilation
Open up a Visual Studio command prompt (e.g. VS .NET 2003, will work with version 2005 too). Write and compile a one line application as follows:
>echo class Yow { public static void Main() { System.Console.WriteLine("Yow dude"); } } > yow.cs
>csc yow.cs
Microsoft (R) Visual C# .NET Compiler version 7.10.6310.4
for Microsoft (R) .NET Framework version 1.1.4322
Copyright (C) Microsoft Corporation 2001-2002. All rights reserved.
Execute yow.exe if you doubt the results of this application (if so, I'd recommend you quit your browser now and come back next year :o). Now run ildasm.exe yow.exe and take a look at Yow::Main's definition. It should not just look like but be completely identical to the next piece of IL:
.method public hidebysig static void Main() cil managed
{
  .entrypoint
  // Code size       11 (0xb)
  .maxstack  1
  IL_0000:  ldstr      "Yow dude"
  IL_0005:  call       void [mscorlib]System.Console::WriteLine(string)
  IL_000a:  ret
} // end of method Yow::Main
Dynamic compilation
Now we're going to write an application that creates the same tremendously useful (*kuch*) functionality but dynamically, using Reflection.Emit. Open up your favorite development environment to create a console application and compile it. One thing missing in the previous phase? Yes, the code. So here it is:
using System;
using System.Reflection;
using System.Reflection.Emit;

class ReflectSample
{
    static void Main(string[] args)
    {
        AssemblyName name = new AssemblyName();
        name.Name = "YowAgain";

        AssemblyBuilder assembly = AppDomain.CurrentDomain.DefineDynamicAssembly(name, AssemblyBuilderAccess.RunAndSave);
        ModuleBuilder module = assembly.DefineDynamicModule("YowModule", "yowagain.exe");
        TypeBuilder someClass = module.DefineType("Yow", TypeAttributes.Public | TypeAttributes.Class);
        MethodBuilder main = someClass.DefineMethod("Main", MethodAttributes.Public | MethodAttributes.Static, typeof(void), new Type[] {});

        ILGenerator ilgen = main.GetILGenerator();
        ilgen.Emit(OpCodes.Ldstr, "Yow dude");
        ilgen.Emit(OpCodes.Call, typeof(Console).GetMethod("WriteLine", new Type[] { typeof(string) }));
        ilgen.Emit(OpCodes.Ret);

        assembly.SetEntryPoint(someClass.CreateType().GetMethod("Main"), PEFileKinds.ConsoleApplication);
        assembly.Save(module.Name);
        AppDomain.CurrentDomain.ExecuteAssembly(module.Name);
    }
}
Line by line overview? Here it comes (blank and curly-brace-only lines were skipped):
- An AssemblyName is created and given the name "YowAgain".
- DefineDynamicAssembly creates a dynamic assembly in the current AppDomain that can both be run and saved to disk (AssemblyBuilderAccess.RunAndSave).
- DefineDynamicModule adds a module called "YowModule" that will be persisted to an .exe file.
- DefineType defines a public class called Yow.
- DefineMethod adds a public static void Main method without parameters.
- GetILGenerator hands us an ILGenerator on which we emit the same three instructions we saw before: ldstr, a call to Console.WriteLine(string) and ret.
- SetEntryPoint marks Main as the entry point of a console application; CreateType bakes the Yow type.
- Save writes the assembly to disk and ExecuteAssembly runs it straight away.
Run. "Yow dude" should appear and a file yow_dynamic.exe should be created. Finally, the IL looks like this:
Exactly the same as the result of our manual development and compilation earlier. Certainly worth to look further at System.Reflection.Emit (tip: don't try to understand all of the OpCodes that are available, start from an existing piece of IL-code).
Have fun!
Today I discovered a pretty nice keystroke I wasn't aware of. As the matter in fact, I'm a huuuuuuuuge keystroke fan trying to eliminate any mouse interaction whatsoever to boost productivity :d. Sometimes this results in unexpected behavior however, for instance sending a nudge in MSN 7 to a buddy because you missed one ALT-TAB while navigating to Internet Explorer, therefore arriving on a MSN conversation window and pressing ALT-D which does not put you in the address bar of IE as intended but sends a nudge in MSN :o.
Now the more serious stuff: CTRL-BACKSPACE seems to remove characters in front of your cursor word by word. Pressing CTRL-BACKSPACE over here | 2 times would erase the words "over here". A big replacement (read: keystroke saver) for two CTRL-SHIFT-LEFT ARROW's + DEL.
In an analogous fashion you can use CTRL-DELETE to delete words in the other direction.
This will be another killer keystroke when doing demos if I get used to it :-). Another one I blogged about earlier is "The quick brown fox jumps over the lazy dog"-generating rand-function in Word which is the ideal tool to junk strings (see "Lorem Ipsum Dolor Sit Amet" and "The quick brown fox jumps over the lazy dog" explained (or: cool KB Articles)...).
To conclude, let's mention CTRL-SHIFT-ESCAPE to open up the task manager in Windows (without going through CTRL-ALT-DEL, ALT-T or WIN-R, taskmgr, ENTER).
Apple Announces New Programming Language Called Swift 636
Posted by Unknown Lamer
from the everyone's-got-one dept.
First Rhyme (Score:4, Funny)
AAPL's YAPL
Good bye source compatibility (Score:4, Interesting)
Good bye source compatibility. We hardly knew ye.
First Windows, and now OSX. I am still maintaining applications that are built crossplatform (Windows/Mac/Linux, with unified GUI look) but it's getting harder every year and, by the looks of it, will be impossible soon.
Which means that an individual developer (like myself) or a smaller shop would have to choose one big player/OS vendor and stick with it. That increases risk and makes small players that much less viable (while, of course, helping the big ones consolidate user base and profit).
Funny how the world works.
Re:Good bye source compatibility (Score:5, Insightful)
Since when does Qt not work X-platform anymore?
Re:Good bye source compatibility (Score:5, Interesting)
Qt does not (and cannot) support Windows "Metro" (or whatever the name is for the C#/event driven/non Win32 environment now)
By the same token it won't be able to support this new environment.
Qt, wxWidgets and others like them rely on basic C compatibility and certain common UI themes and primitives to be able to build cross-platform libraries and applications. With proprietary, non-portable and non-overlapping languages vendors make sure that any development has to target their platform specifically.
Aside from that, if new development environment does not support linking against "old" binary libraries - developers also don't get the benefit of code reuse (since they won't be able to use existing libraries for things like image handling, graphics, sound, networking, you name it).
Re: (Score:3, Insightful)
Qt does not (and cannot) support Windows "Metro"
"Windows Metro" is dead, irrelevant to this discussion. QT will continue to be available for Apple's little garden. Your comment constitues "fear mongering".
Windows Phone and RT do not require C# (Score:5, Informative)
I was under impression that all new windows "apps" had to be written in C# against a new SDK that has neither binary nor source compatibility with Win32/posix/C/C++. I'd be glad to be wrong, but that's what I've seen so far.
Only Windows Phone 7 and Xbox Live Indie Games required C#.* C++ works on Windows Phone 8 and Windows RT, though they do require use of the Windows Runtime API. For actual Windows on x86, you can continue developing desktop applications without having to deal with Windows Runtime (the "Metro" crap).
* In theory, they required verifiably type-safe CIL compatible with the
.NET Compact Framework. In practice, they required C#, as standard C++ is not verifiably type-safe, and DLR languages require functionality not in .NET CF.
Re:Good bye source compatibility (Score:5, Informative)
Are you sure about the "metro"? Name is dead, but I was under impression that all new windows "apps" had to be written in C# against a new SDK that has neither binary nor source compatibility with Win32/posix/C/C++. I'd be glad to be wrong
You're wrong! You can write Windows Store apps in C++ just fine. This C++ can use win32, posix, STL etc just fine. (however, all apps run in a sandbox, so some win32/posix functions aren't as powerful as normal, e.g. the sandbox doesn't let a Windows Store app enumerate files on the hard disk outside of the files it's allowed to touch).
You can also write Windows Phone apps in the same C++. So a mobile app developer could write their core in posix platform-neutral C++, and have it portable across Android+iOS+Windows. I know a few who do.
Of course posix doesn't have anything to do with windowing systems or touch, and win32 APIs (e.g. gdi32.dll, user32.dll) don't apply to windows store or phone apps since they don't have GDI or the traditional windowing system. So your C++ code will invoke new WinRT APIs to access the new functionalities. WinRT is perfectly compatible with posix/win32 APIs. Indeed, for some functionality (e.g. audio), you're REQUIRED to use win32 APIs because they're not exposed in WinRT.
Here's some example code that shows how you'd mix the new WinRT APIs to get hold of sandbox-specific stuff, and then interop with traditional posix APIs. It also shows how the asynchronous nature of WinRT APIs combine with the synchronous traditional posix APIs:
auto f1 = Windows::ApplicationModel::Package::Current->InstalledLocation;
create_task(f1->GetFolderAsync("Assets")).then([this](StorageFolder ^f2)
{
create_task(f2->GetFileAsync("Logo.scale-100.png")).then([this](StorageFile ^f3)
{
auto path = f3->Path;
FILE *f = _wfopen(path->Data, L"r");
byte buf[100];
fread(buf, 1, 100, f);
fclose(f);
});
});
If you care about Windows Phone or Windows RT (Score:5, Informative)
It doesn't use Metro's libraries.
Anything that doesn't use the Windows Runtime API (what you call "Metro's libraries") will not be approved for the Windows Store and will not run on Windows RT tablets or Windows Phone 8 smartphones.
Re:Since when does Qt "work" with OS X? (Score:5, Informative)
There is VLC
There is CMake
There is my project --... [sourceforge.net]
There is Sorenson Squeeze --... [sorensonmedia.com]
I am sure there are others
Re:Since when does Qt "work" with OS X? (Score:5, Informative)
No this is NOT a troll, please read.
A claim of cross-platform is one thing. But in practice I know of no significant apps using Qt that exist in the wild that work on OS X.
Please provide a link to any mainstream working application for Mac OS X that uses Qt.
I don't know of a single one because Qt's support for XCode is incredibly poor.
Do you have to use Xcode, the IDE, to develop OS X apps? Or by "Xcode" do you mean "Xcode the IDE, plus the command-line tools"?
Re: (Score:3)
The scientific method is providing proof --- rhetoric and hot air is great, but science is providing evidence
Huh? So which one is it, proof, or evidence? Surely the two of them are not the same. Also, scientific evidence is quantitative and statistical in nature, and you're asking for an anecdote. Please stop pulling the term "scientific" into places it doesn't belong; we're not formulating theories of software here.
Re:Since when does Qt "work" with OS X? (Score:4, Insightful)
Then compile your code with Eclipse or from the command line.
What has XCode to do with developing Qt?
Re: (Score:3)
you haven't heard of Google Earth or VLC ?
Re:Since when does Qt "work" with OS X? (Score:5, Insightful)
There are plenty of apps that use QT--probably the most mainstream one is Google Earth.
Now, look at me with a straight face and say, "And Google Earth has a great UI!"
To me, this is the problem with cross-platform UI. It starts from a mistaken premise: Windows and Mac or iOS and Android have the same basic UI. There's even a grain of truth to it. But it doesn't really work.
The example I love to use is French and English. They are, basically, the same language, right? They both have words, sentences, and paragraphs. They both have nouns, verbs, and adjectives. So if you just translate the words and move around the adjectives, you've got a French/English translator! It's that simple!
No, not really. If it's 100 degrees outside and you've just come from the outside and remark to a pretty girl "Je suis chaud" (literally, I am hot), she might very well slap your face. Because you've just said that you are hot as in, "Oh, baby, you make me so hot."
And those are the silly mistakes that cross-platform UIs make.
Take a simple one from Mac versus Windows: On the Mac, in a dialog box, the default button is always the right-most button. So you have a dialog box that says, "Are you sure you want to do this?" and the right-most button would say, "OK" and the button to the left of it would say, "Cancel." On Windows, the default "OK" button would be on the left with the "Cancel" button the right of it.
Alignment, again, is a question. I'm not sure there's a standard on Windows--I've seen things centered [spaanjaars.com] and I've seen them aligned right. [samba.org] On Mac OS X, there's a standard. Which means when Windows aligns them on the right like on the Mac, I'm always pressing the Cancel button.
So, yeah, you can use QT to have a cross platform application and it will work fine. And it's great, if you have an application like Google Earth, which has lots of great GIS capabilities so that the result is worth the pain. But, frankly, if Microsoft did an equivalent to Google Earth but made a Mac application that was "correct," I'd use it in a heartbeat. Because, all else being equal, I'd rather have an application that "speaks my language" to one that only sort of does.
Have you ever spoken to a tech support person from another country with a thick accent? That's the equivalent of using Google Earth on a Mac.
Re:Since when does Qt "work" with OS X? (Score:5, Informative)
Oh, stop trolling. You have obviously never used Qt, it will automatically fix the order of the dialog buttons for you. You can even launch the same application under GNOME and get one order, and under KDE and get another. It is controlled by the widget-style it uses. And it does more than that, it also matches the reading direction of the language you are using so that it reverses for Hebrew, Arabic or other right-to-left languages.
There are things that you need to handle yourself in a crossplatform application, but that is not one of them.
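As a small illustration of that point (a generic Qt 5 sketch, not code from the thread), QDialogButtonBox picks the platform's native OK/Cancel order for you:
// QDialogButtonBox lays the OK/Cancel buttons out in the platform's native
// order, so the same source is correct on Windows, OS X and the Linux desktops.
#include <QApplication>
#include <QDialog>
#include <QDialogButtonBox>
#include <QLabel>
#include <QVBoxLayout>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QDialog dialog;
    QVBoxLayout *layout = new QVBoxLayout(&dialog);
    layout->addWidget(new QLabel("Are you sure you want to do this?"));

    QDialogButtonBox *buttons =
        new QDialogButtonBox(QDialogButtonBox::Ok | QDialogButtonBox::Cancel);
    QObject::connect(buttons, &QDialogButtonBox::accepted, &dialog, &QDialog::accept);
    QObject::connect(buttons, &QDialogButtonBox::rejected, &dialog, &QDialog::reject);
    layout->addWidget(buttons);

    return dialog.exec() == QDialog::Accepted ? 0 : 1;
}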
Re:Good bye source compatibility (Score:5, Insightful)
Why is this dumb post modded insightful? You can still use all the same languages you did before.
Re: (Score:3, Funny)
Why is this dumb post modded insightful? You can still use all the same languages you did before.
Because Slashdot Sheep have a childish hate for apple, as they post comments from their iPhones?
Re: (Score:3)
You can't post from the iPhone anymore. Mobile Slashdot on safari is horribly broken. At least for me I can't log in or post once I do login. I switch to classic.Slashdot and everything works as normal. Switch back to mobile and it breaks. I don't know why they can't test it. It is not like you can extend mobile safari.
That's not Apple's issue, that's a Slashdot issue. Just like unicode...
Re: (Score:3)
That's why you use the desktop site and not the mobile one.
...
I never use the mobile version of any website on my iPhone or my iPad
Re:Good bye source compatibility (Score:5, Insightful)
Hey, I'm not a developer/coder/programmer, so I honor and respect the work you've put in to things in the past. But if you've been tying yourself to a "unified GUI look" across platforms, you've long been dooming your products and yourself.
As a UX person, I can throw data at you all day long that shows device/OS specificity and familiarity are key elements in making something work for the user. I'm sure you don't literally mean you chose either a menu bar in all app windows on every platform or a top-of-the-screen menu bar on every platform, but the obvious reason why that would be wrong also holds for controls, colors, placement, text sizes, and so on to the last turtle at the bottom.
Re:Good bye source compatibility (Score:4, Insightful)
That's not what cross-platform compatibility implies. Placement of specific elements and their view is a subject of "themes" and is readily customizable.
As a developer I care about underlying primitives - things like "windows", "buttons", "menus" or more generically "events", "inputs" etc. Once those underlying things can no longer be shared - you have to write a new product from scratch for every platform.
Think of something like Adobe Photoshop (I assume as a UX person you are using it?). It is possible to have a version for Windows, and one for Mac precisely because you have those common underlying primitives and APIs, even though they don't necessarily look the same in all respects.
If commonality of platforms is gone - even a company like Adobe will have really hard time building products for both platforms. That will eventually affect users too, since they will likely have to select different (and no longer compatible) products for each platform as well. For now that's not the case - but given where things go, it probably will be.
Re:Good bye source compatibility (Score:4, Insightful)
It's a good point..
So, putting things that people access somewhat frequently into a menu item on the menu bar isn't a horrible thing on the Mac. But on Windows--because the menu bar is harder to access--it will frustrate your users. You probably want to set up some kind of contextual menu on Windows.
Do it the Mac way, you've annoyed your Windows users. Do it the Windows way and you confuse your Mac users (who are used to searching the menu bar to find things). Or devote the time and effort to doing it both ways.
Re: (Score:3).
And if we consider the real world, with 24"+ screens being very common, putting the menu bar on top is ridiculous because it's so far away (and even if you just "swipe" up, said swipe still takes time to reach the top).
By the way, for the same reason, you don't want to skimp on the context menu on a Mac.
Re: (Score:3)
If the applications don't follow the OS norms, they are not fine applications.
Re:Good bye source compatibility (Score:4, Informative)
Whatever source compatibility existed before Swift (and the degree to which that exists is surely debatable), it was not removed by Swift. Objective-C, C/C++, and Swift can coexist in the same project. I believe they can even coexist inline, which makes me shudder to think, but there it is. Still, you could ostensibly have a UI in Swift and your core business logic in C, if your architecture is solid. (Obviously YMMV, and there are bugs to be discovered, to be sure.)
Re:Good bye source compatibility (Score:4, Funny)
I believe they can even coexist inline, which makes me shudder to think, but there it is.
If that makes you shudder, then this will terrify you [ideology.com.au]. It compiles in eight different languages.
Compatibility is no problem, before or after swift (Score:5, Informative)
Good bye source compatibility. We hardly knew ye.
I have absolutely no compatibility problems. I strictly use objective-c for only user interface code. The core functional application code is written in c/c++. I have multiple iOS/Android apps whose core code is shared and can even be compiled with a console app under Mac OS X or Linux, I use this for regression testing and fuzzing. A headless Linux box in the closet exercises this core code. Similar story for Mac OS X and Windows.
Swift code can replace objective-c code and it matters little to me. Has zero impact on other platforms I target.
Admittedly I've ported other people's code between Windows, Mac and Linux for years and written my own code for Windows, Mac and Linux for years and as a result I am extremely aggressive about separating UI code from functional code.
For those people using some sort of cross-platform wrapper for their project, if it supports Mac OS X objective-c it will probably support Swift. Even if it takes time for the wrapper developers so what, the use of Swift is entirely optional.
Re:Compatibility is no problem, before or after sw (Score:5, Informative)
// main.m
#import <Cocoa/Cocoa.h>
int main(int argc, const char * argv[]){
return NSApplicationMain(argc, argv);
}
// AppDelegate.h
#import <Cocoa/Cocoa.h>
@interface AppDelegate : NSObject <NSApplicationDelegate>
@property (assign) IBOutlet NSWindow *window;
@end
// AppDelegate.m
#import "AppDelegate.h"
#include "Work.h"
@implementation AppDelegate
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
}
- (IBAction)buttonPressed:(id)sender {
some_work();
}
@end
// Work.h
void some_work(void);
// Work.c
#include <stdio.h>
#include "Work.h"
void some_work(void) {
FILE *fp = fopen("/tmp/work.txt", "w");
if (fp != NULL) {
fprintf(fp, "Hello World\n");
fclose(fp);
}
}
Re: (Score:3)
Re:Good bye source compatibility (Score:4, Informative)
What possible application could ever require GDB as a dependency?
LLDB is a far superior debugger anyways.
Clearly his team needs it. Don't question why. (Score:3, Insightful)
My god, it's like I'm at Stack Overflow, reading the typical stupid and cocky "WHY ARE YOU DOING IT THAT WAY?!?#?!?@?!@?!?" responses that are so prevalent over there.
His team probably has its reasons for using or requiring GDB. And you know what? They're probably pretty damn legitimate reasons, too.
I'm sure they know about LLDB. But it's probably not what they need, and thus they do not use it.
If they need GDB, then that's what they need. It's that simple.
What we don't need is somebody like you questioning
Re: (Score:3, Informative)
So then install GDB. There is no reason to stop supporting Mavericks because it doesn't come with GDB preinstalled.
Re:Good bye source compatibility (Score:5, Funny)
Creating GUI's in OSX is currently problematic because of font issues.
Obviously this must be the case as no-one else is creating GUIs in OS X either. That, and the fact that OS X is hated the world over by designers for it's awful handling of fonts.
Re:Good bye source compatibility (Score:5, Funny)
That, and the fact that OS X is hated the world over by designers for it's awful handling of fonts.
As a programmer, I can tell you that designers are hated the world over for their awful handling of fonts.
Re:Good bye source compatibility (Score:4, Funny)
Could you tell me who your team is? I have a tumblr for really shitty software and I'd love to feature them!
Re:Good bye source compatibility (Score:4, Informative)
Apple has always been hostile to unified look on their platform.
You do realize, of course, that you are talking about the company that literally wrote the book [scripting.com] on good, consistent UI design, right?
The above, linked pdf copy dates from 1995 (the earliest actual copy I could find in a 2 minute search), but Apple first published their most-excellent HIG manual on or around 1985 [goodreads.com], before most slashdotters were even born.
Now, get off my lawn!
Whoa 1.3x (Score:5, Funny)
That's what about 1/10,000th of what hiring a good programmer would get you, at the price of not being to find any programmers.
Re:Whoa 1.3x (Score:5, Funny)
Now hiring Swift programmers. 10 years experience required.
/sarcasm.
New bells and whistles (Score:3, Interesting)
I was particularly surprised to see closures appear. So far I've only been using them in Javascript and Perl, but my experience has been that they are about 15% added flexibility for about -40% readability. That is, they make it harder to tell what's going on, more than they reduce development time.
Re:New bells and whistles (Score:4, Interesting)
Depends on the language.
In groovy closures are perfectly readable, as they are in Smalltalk.
Problem is that closures are often considered second class citizens, hence they get plugged in later nad then they look wiered.
Re: (Score:3)
Re: (Score:3)
If you use them right, they increase readability. Unfortunately they are very, very easy to use in a way that decreases readability.
Bret Victor influence? (Score:3)
The live REPL reminds me of Bret Victor, who used to work for apple. [worrydream.com]
I hope they take advantage of some of his ideas??... [youtube.com]
Who designed this, and what drugs were they on? (Score:5, Interesting)
’s Array type to provide optimal performance for array operations when the size of an array is fixed."
i.e. Swift arrays that are "immutable" actually aren't. Way to rewrite the dictionary. But wait, it gets worse. Here's for some schizophrenia.
"Structures and Enumerations Are Value Types. A value type is a type that is copied when it is assigned to a variable or constant, or when it is passed to a function. Swift’s Array and Dictionary types are implemented as structures."
So far so good. I always liked collections that don't pretend to be any more than an aggregate of values, and copy semantics is a good thing in that context (so long as you still provide a way to share a single instance). But wait, it's all lies:
"If you assign an Array instance to a constant or variable, or pass an Array instance as an argument to a function or method call, the contents of the array are not copied at the point that the assignment or call takes place. Instead, both arrays share the same sequence of element values. When you modify an element value through one array, the result is observable through the other. For arrays, copying only takes place when you perform an action that has the potential to modify the length of the array. This includes appending, inserting, or removing items, or using a ranged subscript to replace a range of items in the array"
Swift, a language that is naturally designed to let you shoot your foot in the most elegant way possible, courtesy of Apple.
Re:Who designed this, and what drugs were they on? (Score:4, Interesting)
That is bizarre. So if you see a function signature which takes an array as a parameter, you either do know that elements will be changed, or will not be changed---but only depending on potentially hidden implementation of that function?
And which things have the 'potential to modify' the length of an array? Implementation defined?
Fortran 90+ had it right. You just say for each argument whether the intent is data to go 'in' (can't change it), 'out' (set by implementation), or 'inout', values go in, and may be modified.
Re:Who designed this, and what drugs were they on? (Score:4, Informative)
And which things have the 'potential to modify' the length of an array? Implementation defined?
It's defined by the operations on the array. Basically, appending, inserting or removing an element would do that, but subscript-assigning to an element or a range will not.
Fortran 90+ had it right. You just say for each argument whether the intent is data to go 'in' (can't change it), 'out' (set by implementation), or 'inout', values go in, and may be modified.
Funnily enough, they do actually have in/out/inout parameters in the language.
Note however that the story for arrays here does not apply only to parameters. It's also the behavior if you alias the array by e.g. assigning it to a different variable. So it's not fully covered by parameter passing qualifiers.
Re: (Score:3)
I don't agree with the decisions either. However, it is consistent with Java. Like it or don't like it, Java is popular and its semantics are well-known.
Re: (Score:3)
No, it is not consistent with Java. In Java, you can't change the size of the array, period. If you want a dynamically resizable collection, you use an ArrayList. Here, they have conflated the two together, and then added strange copy-on-write semantics that is triggered by operations that are unique to ArrayList, but not by those that are shared between it and array. There's nothing even remotely similar to that in Java - arrays are just passed by reference, and so are ArrayLists, and if you can mutate it,
Re: (Score:3)
Java array is a reference type, so you're not declaring the array as final - you're declaring the reference to that array as final. Here, they're claiming that their arrays are value types.
And there's still nothing equivalent to copy-on-write behavior that they do for arrays when they're copied. Again, in Java, if you copy a value of array type, you just copied a reference - any changes to the actual array object will be seen through either reference. Ditto with ArrayList. In Swift, though, if you copy an a
Re: (Score:3)
I completely fail to see what your problem is.
Immutable arrays are defined exactly the same way in several other languages. If you want an array of constants, you need to define its contents as constants, not just the array itself. It's good behaviour to give you this choice.
Same for collections passed by reference. Again, several other programming languages do it exactly this way, implicitly passing collections by reference because collections can become large and implicitly copying them every time you touch
Re:Who designed this, and what drugs were they on? (Score:4, Informative)
You completely miss the point.
Regarding immutability, it's not about an array of constants. It's about an immutable array - as in, an array which has its content defined once, and not changed afterwards. They actually do use the term "immutable" for this, and this is what it means in any other language. It's also what it means in Swift - for example, an immutable dictionary cannot be mutated at all, neither by adding or removing elements to it, nor by changing a value associated with some existing key. The only special type here is array, for which immutability is effectively redefined to mean "immutable length, mutable contents" - which is a very weird and counter-intuitive definition when the word "immutable" is involved (e.g. in Java you also can change elements of an array but cannot add new ones - but this is the rule for all arrays in Java, and it doesn't call that "immutable"). The fact that there's no way to have a truly immutable array is just icing on the cake.
And they don't pass collections by reference. They say that value types are passed by value (duh), and that both dictionaries and arrays are value types (unusual, but ok). But then they completely redefine what copying an array means, with a very strange copy-on-write semantics whereby they do implicitly copy them if you touch them "in the wrong way" (e.g. by appending an element), but not if you touch them in the "right way" (e.g. by mutating an existing element). Again, this magic is utterly specific to arrays - for dictionaries, they behave like true value types at all type, and use full-fledged copy-on-write under the hood to avoid unnecessary copies - so if you "touch" a dictionary in a way that mutates it, it makes a copy, and it doesn't matter how you do so - by inserting a new key-value pair or by changing a value for an existing key. Not only this is very much non-orthogonal (why make copying of arrays and dictionaries so different?), the behavior that's defined for arrays just doesn't make any sense in distinguishing between various ways to mutate them.
Re: (Score:3)
By the way, it might be the case where showing how it behaves explains the strangeness better than trying to describe it. Have a look [imgur.com], and note how changes either are or aren't reflected across "copies".
Re: (Score:3)
I don't see what the big deal is. If you modify the size of an array, regardless of context, you get a different array. Not exactly a brain buster.
Re: (Score:3)
VS has allowed for the worst programmers to get away with egregious stupidity for a long time because the preprocessor would "fix" garbage code
I don't know what your definition of "preprocessor" is, but it clearly isn't the common one because no matter how I try to parse the above, it makes zero sense.
Of course, the fact that VS is not a language, whereas Swift is, is also kinda telling.
Re: (Score:3)
That part of it kinda sorta makes sense for someone coming from ML background, I suppose. I've seen other languages do similar.
Though I think that Scala really did arrive at the clearest concise syntax for this distinction: "var" is a (mutable) variable; "val" is an (immutable) value. They're also similar enough that it's clear that the concepts are closely related in practice.
Swift Programmers Wanted (Score:5, Funny)
Must have 5 years experience.
Re:Swift Programmers Wanted (Score:4, Funny)
Good news; I've got over 20 years experience. (bullshitting my way into positions with languages I don't know. Then learning fast.)
Guaranteed results...'I can guarantee anything you want.' Bender
Re: (Score:3)
Re: (Score:3)
I'm sure in a month, some teams of 60 people will claim to have 5 years combined experience in swift....
Re: (Score:3)
The parallel scripting language described at swift-lang.org is NOT the swift language referred to by this article.
So no, you cannot have 5 years experience in this.
Re: (Score:3)
Apple was aware of swift and gave the project leaders a heads up before WWDC.
SWIFT programmers (Score:5, Interesting)
They could have chosen a name other than that of the international banking protocols. Asking for SWIFT programmers is going to get them a bevy of COBOL coders who know the protocol.
You think that is the problem? (Score:5, Informative)
Re: (Score:3)
I just look forward to the organization behind the SWIFT protocols suing Apple into the ground for trying to appropriate a name that already has a well-established meaning in computing. They've certainly got the budget to do it...
:)
iPhone announcement (Score:3)
Apple has enough $$$ to pay for virtually any name they set their mind to, just like what they did with the iPhone.
Re: (Score:3)
Actually Cisco was actively using the brand at the time that Apple released theirs, and Cisco sued but settled out of court without releasing any of the details other than that both companies would use the brand.
Re: (Score:3)
Google released the Go programming language five years after the Go! programming language was first documented.
It seems as if large companies only care about intellectual property when it is their own. Imagine that.
Re:You think that is the problem? (Score:5, Funny)
Designed for safety & performance (Score:3, Interesting)
I find these two aspects interesting and wonder what the trade off is. Longer compiler times?
"Designed for Safety
Swift eliminates entire classes of unsafe code. Variables are always initialized before use, arrays and integers are checked for overflow, and memory is managed automatically. Syntax is tuned to make it easy to define your intent — for example, simple three-character keywords define a variable (var) or constant (let)."
" Swift code is transformed into optimized native code, "
Somebody post a SWIFT example PLEASE! (Score:3)
I wanted to write apps and tried to learn Objective-C, but as a coder that started with C and then moved on to C++ and PERL (the swiss army chainsaw), the language syntax hurt my ability to read it. In case you don't know what I am talking about, here are some of my learning notes
// old school // send a message or call a method
myObject.someMethod();
[myObject someMethod];
result = myObject.someMethod();
// old school // method returns a result
result = [myObject someMethod];
result = myObject.someMethod(arg);
// old school // pass an argument
result = [myObject someMethod:arg];
You can see the Old School syntax above (which works in Objective-C) and the Objective-C standard syntax below. The square brackets [ ] and colons : just hurt my mental debugger... [ ] and yes I know Objective-C is a Superset of C, so they had to steer clear of the C-Syntax but it just looks wrong. Further, I know that I could write my own style of Objective-C but I wouldn't be able to read the code of others. Apple had to start somewhere and Steve had the NeXT languages ready to go but to me the syntax is ugly and offensive. However, I am ready for a better Apple language.
I can't wait to see a SWIFT code example. If it gets rid of the NeXT Objective-C Superset Syntax, I might be coding for iPad and Mac sooner than I thought. If anyone has a code example, please share it, I would like to see what a function, method, or message call looks like. Hoping for parentheses and a Stanford iTunes U class. Guardedly excited!
Re:Somebody post a SWIFT example PLEASE! (Score:5, Informative).
class Square: NamedShape {
    var sideLength: Double

    init(sideLength: Double, name: String) {
        self.sideLength = sideLength
        super.init(name: name)
        numberOfSides = 4
    }

    func area() -> Double {
        return sideLength * sideLength
    }

    override func simpleDescription() -> String {
        return "A square with sides of length \(sideLength)."
    }
}

let test = Square(sideLength: 5.2, name: "my test square")
test.area()
test.simpleDescription()
Excerpt From: Apple Inc. “The Swift Programming Language.” iBooks. [itun.es]
Re:Somebody post a SWIFT example PLEASE! (Score:4, Insightful)
It's not horrible, but I'm not sure this sample is more readable than Obj-C. As others have noted, Swift has the habit of taking the important parts of a function (like what it's named and what it returns, or what name a class is and what it subclasses) and shoving them off to entirely different sides of the function declaration.
Re: (Score:3)
"override" is a _massive_ improvement. It means you cannot override a superclass method by accident. And you can't try to override a non-existing superclass method by accident.
Re: (Score:3, Insightful)
I haven't checked, but it's a great idea to have override as a mandatory descriptor (if it is). Java now has @Override, but code quality suffers from it not being compulsory, leading later to subtle bugs. As for func and let, I imagine it makes it easier to make a scripting language to have less ambiguity about what you are trying to declare up front. I mean, without func, would the line "area()" be the start of a declaration, or a call to a function? Sure, you could wait for a semi-colon to finalise the declaration...
Re: (Score:3)
It may not measure up to whatever fly-by-night languages to which you might compare it, but it's a MAJOR and LONG OVERDUE replacement for Objective-C, which is the only language any serious Mac developer has had to put up with. I for one welcome our new Swift overlords.
Re: (Score:3)
Oh come on. It is pretty obvious that you can add named parameters to a C-like syntax without having this weird square bracket stuff:
someObject->setColor(red:0.4, green:0.3, blue:1.0, alpha:0.5);
The square brackets are there because the original Objective-C compiler was very primitive. It basically looked for the square brackets, did some manipulation of the text, and passed everything to the C compiler. Pretty much it turned this:
[someObject method:x]
into this:
Re: (Score:3)
Colour me skeptical... (Score:4, Insightful)
Apple had a fine language 20 years ago. It was said to influence the design of Ruby and Python. They butchered it into an Algol-like syntax because 'real programmers' can't grok s-expressions. Then they abandoned Dylan.
Next, they created a language for mobile devices. Its programming model was said to influence the design of JavaScript. Then they abandoned NewtonScript.
Swift (Score:3)
Viva Eco (Score:5, Insightful)
Ok, so now you'll be developing software using Apple's frameworks and Apple's language to run on Apple's runtime, after passing Apple's compiler (i.e. LLVM) for download using Apple's store (after finding your product with Apple's iAd) directly onto Apple's products built with Apple's custom processors, after you register as an Apple Developer. If your app needs to do something outside this environment, you can use said APIs now to reach out to Apple's Cloud and Apple's database servers. And if your app is really successful as measured by Apple Crash Reporting and Apple Usage statistics or Apple's iTunes Connect, then they'll just straight out fucking copy you.
Something about the new "language" is what makes that summary start sounding ridiculous.
Bjarne Stroustrup (Score:5, Interesting)
* What problem would the new language solve?
* Who would it solve problems for?
* What dramatically new could be provided (compared to every existing language)?
* Could the new language be effectively deployed (in a world with many well-supported languages)?
* Would designing a new language simply be a pleasant distraction from the hard work of helping people build better real-world tools and systems?
Apple can definitely deploy the new language effectively, but I'm not sure it solves any problems.
Re: (Score:3)
It gives Apple complete control over their own destiny, which is something Apple likes to have (not exactly news). They now have a language they can tinker with to their hearts' content and no external group or standards body can restrict what they do with it. They've made it very clear they intend to listen to developer feedback and tinker with it, at least in the near future. Certainly even if they do eventually open it up, they'll still be able to extend it however they like and whenever they like in the future.
In related news... (Score:5, Funny)
... HR departments began advertising for programmers with 3+ years of Swift programming experience.
Re:It's about time (Score:5, Funny)
I can't wait to add this to my résumé.... I already have 2 years of experience with Swift!
Re: (Score:3, Funny)
Oooh, sorry. We're only looking for candidates with at least 5 years of experience with Swift.
Re:It's about time (Score:4, Insightful)
Wow, I happen to meet that requirement. I've been using SWIFT [swift-lang.org] for quite a few years and have done image processing and molecular docking workflows in it.
Re:and it needs an new OS the mess up other apps (Score:5, Funny)
and the comment grammar no sense slashdot article read.
captcha: verbally. Seriously?
Re:and it needs an new OS the mess up other apps (Score:4, Funny)
*that poorly.
Re: (Score:3, Informative)
They've already got LLVM and Clang, no? Or did you mean better than those?
Re: (Score:3)
Just what we don't need, an easier way to write buggy code.
Exactly! Who, besides the Amish, needs buggy code? What we need is self-driving automobile code.
Re: (Score:3, Informative)
Yes, Swift itself does not have the baggage of C just like Python does not have the baggage of C. The fact that both languages can interoperate with C does not change that.
Re: (Score:3)
The statement is talking about if you only write pure Swift. What you describe is really no different than using C code with Java through JNI. But that does not mean Java itself has any C baggage.
Re: (Score:3)
From what I can tell (I just got out of WWDC and am reading through the docs) it can be bridged to, but not directly called. You can directly call Obj-C methods through the bridge, but not C methods. You'd have to bridge to the Obj-C methods which then call C methods.
I don't know what happens when that Obj-C method calls malloc and returns some memory for leak-tastic behavior. I still haven't read if or how Swift handles raw memory buffers.
Re: (Score:3)
Really, it is not the fault of MS, Google, or Apple but of academia. In the CS curriculum they still teach the "compiler" class and as long as you keep teaching kids how to write compilers, they will keep writing languages. SWIFT is definitely a variation on a C theme, but much better than the Objective-C (superset of C) syntax, at least at first glance.
Re:Just what we need, another C++ clone (Score:5, Funny)
You mean like C++?
Linux
The Crypto++ mailing list occasionally receives questions regarding library use on various Linux platforms. This page will attempt to help folks working with Linux.
On Linux, Crypto++ is named libcryptopp. The makefile (discussed below) will create and install libcryptopp.a and libcryptopp.so. libcryptopp.a is a traditional static library, while libcryptopp.so is a shared object. Note: libcryptopp should not be confused with libcrypto (libcrypto is OpenSSL).
Linux shared objects written in C++ can be tricky since exceptions must cross the executable/shared object boundary for proper program execution. Those using a Crypto++ shared object should read Note for Shared Object Callers.
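To see what a caught exception looks like in practice, here is a minimal sketch (not one of the wiki's own examples) that links against the library, triggers Integer::DivideByZero, and catches it as CryptoPP::Exception. When libcryptopp.so is loaded dynamically, catching the exception in the caller additionally depends on the dlopen guidance in that note.

#include <iostream>
#include "cryptopp/cryptlib.h"   // CryptoPP::Exception
#include "cryptopp/integer.h"
using CryptoPP::Integer;

int main( int, char** )
{
    try
    {
        Integer n("10"), zero("0");
        Integer q = n / zero;    // throws Integer::DivideByZero
        std::cout << q << std::endl;
    }
    catch(const CryptoPP::Exception& e)
    {
        std::cerr << "Caught Crypto++ exception: " << e.what() << std::endl;
    }
    return 0;
}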
Crypto++ FIPS validation does not apply to Linux modules. This is because a Crypto++ binary was validated on Windows as a compiled DLL, and not in source form.
A separate page is available for using Crypto++ under Apple's Xcode. See iOS for building Crypto++ as a static library.
Finally, many thanks to the folks who helped with this page, including (but definitely not limited to) Zooko Wilcox O'Hearn, Nathan Phillip Brink, Alexey Kurov, Jonathan Wakely, Ian Lance Taylor, and Jens Peter Secher.
Fetch the Library
There are two ways to get Crypto++ on a Linux machine. The first is to download Crypto++ from the website (or SourceForge/SVN). The second is to install Crypto++ from a package provided by a distribution such as Debian, Fedora, Mandrivia, OpenSuse, or Ubuntu (see Linux Distributions Offering Crypto++ below).
When Crypto++ releases a version, a ZIP file is created and placed on the website for download. For example, the Crypto++ 5.6.1 ZIP was placed in the download area in August 2010 after revision 521 (the version change and tag). The 5.6.1 ZIP file is fixed in time and does not include the latest bug fixes - it will always be cumulative up to revision 521. Inevitably, this means the Crypto++ ZIP files will become stale over time.
Crypto++ Website
Fetch the latest version of the library. Below, the download was saved in cryptopp. Issue the following to extract the archive. The -a option handles the CR/LF conversions.
> cd cryptopp/
> unzip -a cryptopp561.zip
Move on to Build and Install the Library.
SourceForge Repository
Issue the following to fetch the latest Crypto++ code from the SourceForge SVN repository.
$svn checkout cryptopp
If you only want Crypto++ up to a certain revision, perform the following. Below, a checkout of revision 496 is performed.
$svn checkout -r 496 cryptopp-496
Distribution Package
Most distributions package the library as cryptopp (libcryptopp.a or libcryptopp.so), while Debian (and derivatives) use libcryptopp and libcrypto++. Other distributions, such as Red Hat and Mint, do not provide a Crypto++ package. If using a distribution's version of the library, be aware that you will most likely receive a shared object (libcryptopp.so) rather than a traditional archive (libcryptopp.a).
To develop with Crypto++ using a distribution's package, you usually need to install three packages: the library's base package, the development package, and the debug symbol package. For example, on Debian, the packages of interest are libcrypto++8, libcrypto++8-dbg, and libcrypto++-dev.
If using a distribution's shared object, please take time to read Note for Shared Object Callers.
apt-get
For apt-get distributions (Debian and derivatives such as Ubuntu), issue the following to locate the package name and install the package. Note that the dev or development version of the Crypto++ library was selected. Note that grep crypto++ might need to be changed to grep cryptopp.
root@bruno:/# apt-cache pkgnames | grep -i crypto++ libcrypto++-utils libcrypto++8 libcrypto++8-dbg libcrypto++-dev libcrypto++-doc root@bruno:/# apt-get install libcrypto++8 libcrypto++8-dbg libcrypto++-dev Reading package lists... Done Building dependency tree Reading state information... Done The following NEW packages will be installed: libcrypto++-dev libcrypto++8 libcrypto++8-dbg 0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded. Need to get 10.7MB of archives. ...
yum
On distributions which use yum (such as Fedora), search for Crypto++ using yum:
# yum search crypto++ ============================== Matched: crypto++ =============================== pycryptopp.x86_64 : Python wrappers for the Crypto++ library cryptopp.i686 : Public domain C++ class library of cryptographic schemes cryptopp.x86_64 : Public domain C++ class library of cryptographic schemes cryptopp-devel.i686 : Header files and development documentation for cryptopp cryptopp-devel.x86_64 : Header files and development documentation for cryptopp cryptopp-doc.noarch : Documentation for cryptopp cryptopp-progs.x86_64 : Programs for manipulating cryptopp routines
Issue the following commands to install the library and header files:
> su - password: # yum install cryptopp cryptopp-devel
The cryptest.exe program is not installed with the library on Fedora. If you want the cryptest program, issue:
# yum install cryptopp cryptopp-progs
Build and Install the Library
Crypto++ comes with a makefile named GNUmakefile. The library does not use autoconf and friends, so there is no ./configure. A detailed treatment of the makefile can be found at GNUmakefile.
Precompiled headers require more than simply defining USE_PRECOMPILED_HEADERS. If you plan on using them, see Precompiled Headers below.
Makefile
The default makefile is configured to build a static library (archive), release build (-DNDEBUG), and includes options for moderate optimizations (-O2) and debug symbols (-g2). Performance penalties (or lack of penalties) when using -g are discussed in How does the gcc -g option affect performance?.
The Crypto++ makefile uses a CXXFLAG of -march=native, so Crypto++ builds correctly for the platform. However, GCC prior to version 4.6 has bugs related to -march=native. See, for example, LTO performance: -march=native isn't saved in COLLECT_GCC_OPTIONS.
If you are using pthreads, Crypto++ will add -pthread to LDFLAGS for Linux. The library does not add the _REENTRANT flag to CFLAGS. For GNU/Linux, both -pthread and _REENTRANT should be specified. See -pthread option and gcc -pthread should define _REENTRANT?.
If you plan to profile your program under GCC, then add the -p or -pg switch to the makefile for use by prof or gprof. See Options for Debugging Your Program or GCC and Profiling.
Make and Install
Issue make to build the static library and cryptest program. To build the static archive, shared object, and test program, enter the following. There is no need for root at this point.
make static dynamic cryptest.exe
After the library successfully builds, run cryptest.exe v and cryptest.exe tv all to validate the library. See Self Tests on the Release Process page for more details.
If you plan on building and installing both the static and dynamic libraries, tell make to do so. Otherwise, libcryptopp.a and cryptest.exe will be built under a standard user account, and libcryptopp.so will be built as root during install.
$ make libcryptopp.a libcryptopp.so cryptest.exe ... $ ls *.so *.a *.exe cryptest.exe libcryptopp.a libcryptopp.so
Finally, install the library. By default, Crypto++ installs to /usr/local, so you will need root to install its components.
$ sudo make install PREFIX=/usr/local mkdir -p /usr/local/include/cryptopp cp *.h /usr/local/include/cryptopp chmod 755 /usr/local/include/cryptopp chmod 644 /usr/local/include/cryptopp/*.h mkdir -p /usr/local/lib cp libcryptopp.a /usr/local/lib chmod 644 /usr/local/lib/libcryptopp.a mkdir -p /usr/local/bin cp cryptest.exe /usr/local/bin chmod 755 /usr/local/bin/cryptest.exe mkdir -p /usr/local/share/cryptopp cp -r TestData /usr/local/share/cryptopp cp -r TestVectors /usr/local/share/cryptopp chmod 755 /usr/local/share/cryptopp chmod 755 /usr/local/share/cryptopp/TestData chmod 755 /usr/local/share/cryptopp/TestVectors chmod 644 /usr/local/share/cryptopp/TestData/*.dat chmod 644 /usr/local/share/cryptopp/TestVectors/*.txt
Also see Installing the Library for more details on the options available to you during install.
OpenBSD
OpenBSD is not Linux (but an OpenBSD user might find themselves here). If compiling and installing for OpenBSD, the library must be installed into /usr/local. If you install to /usr, be prepared for a typical output of:
g++ -g3 -ggdb -O0 -pipe -fsigned-char -fmessage-length=0 -Woverloaded-virtual -Wreorder -Wformat=2 -Wformat-security -Wno-unused -fvisibility=hidden -fstack-protector -I. -I./esapi -I./deps -I/usr/local/include -fpic -c src/codecs/HTMLEntityCodec.cpp -o src/codecs/HTMLEntityCodec.o In file included from /usr//include/cryptopp/misc.h:4, from ./esapi/crypto/Crypto++Common.h:24, from src/codecs/HTMLEntityCodec.cpp:12: /usr//include/cryptopp/cryptlib.h:99: error: template with C linkage /usr//include/cryptopp/cryptlib.h:247: error: template with C linkage /usr//include/cryptopp/cryptlib.h:254: error: template with C linkage /usr//include/cryptopp/cryptlib.h:261: error: template with C linkage /usr//include/cryptopp/cryptlib.h:268: error: template with C linkage /usr//include/cryptopp/cryptlib.h:293: error: template with C linkage /usr//include/cryptopp/cryptlib.h:698: error: template with C linkage
Precompiled Headers
Use of precompiled headers is framed out in Crypto++, but the feature requires a few tweaks for Linux. Using precompiled headers on a Core 2 Duo (circa late-2008) reduced compile times by 38% (3 min, 51 sec versus 2 min, 22 sec). On a Core i7 machine (circa mid-2010) the time was reduced by 24% to just under 2 minutes. Note that GCC must be version 3.4 or higher to support the feature.
To use precompiled headers, modify pch.h by adding <string>, <algorithm>, and <memory> to the list of includes. This makes sense since string and auto_ptr are used frequently in Crypto++ and therefore benefit from precompilation.
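A sketch of that change (only the added lines are shown; the rest of pch.h is left as shipped):

// pch.h - additional standard headers to precompile (sketch)
#include <string>
#include <algorithm>
#include <memory>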
Add USE_PRECOMPILED_HEADERS to CXXFLAGS. Note: this is a Crypto++ define, and is not required by GCC's implementation of precompiled headers.
CXXFLAGS = -DNDEBUG -g -O2 -DUSE_PRECOMPILED_HEADERS=1
Add a rule for precompile. The file name must have the extension .gch. The command must also use the same CXXFLAGS as the files using the precompiled header. Both are GCC requirements.
precompile: pch.h pch.cpp
	$(CXX) $(CXXFLAGS) pch.h -o pch.h.gch
Finally, edit GNUmakefile and add precompile as a dependency for targets all, libcryptopp.a, libcryptopp.so, and cryptest.exe. For example:
all: precompile cryptest.exe
...
cryptest.exe: precompile libcryptopp.a $(TESTOBJS)
	$(CXX) -o $@ $(CXXFLAGS) $(TESTOBJS) -L. -lcryptopp $(LDFLAGS) $(LDLIBS)
To verify that precompiled headers are being used, add the following to pch.h. If one message is displayed, precompiled headers are in play. If 130 or so messages are displayed, Crypto++ is not using precompiled headers. Note: Stallman no longer objects to #pragma since it can be used in a macro.
#pragma message("Including Crypto++ precompiled header file.")
To benchmark the savings issue the following:
make clean && date && make && date
Also see Jason Smethers' comments on using ccache to reduce compilation times at Linux/GCC and Precompiled Headers.
A Crypto++ Program
The sample program below uses Crypto++'s Integer class. It was chosen because Integer overloads operator<<, which makes it convenient to exercise the library and display results with minimal effort.
Source Code
#include <iostream>
using std::cout;
using std::endl;

#include "cryptopp/integer.h"
using CryptoPP::Integer;

int main( int, char** )
{
    Integer i;
    cout << "i: " << i << endl;
    return 0;
}
The integer class has arbitrary precision, so a variable will easily handle large integers and other arithmetic operations such as the following.
int main( int, char** )
{
    Integer j("100000000000000000000000000000000");
    j %= 1999;
    cout << "j: " << j << endl;
    return 0;
}
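As a further sketch (not part of the original page), the number-theory helper a_exp_b_mod_c - visible in the nm output later on this page - performs modular exponentiation on Integer values. The header name cryptopp/nbtheory.h is an assumption here; adjust it if your installation declares the function elsewhere.

#include <iostream>
using std::cout;
using std::endl;

#include "cryptopp/integer.h"
#include "cryptopp/nbtheory.h"   // assumed home of a_exp_b_mod_c
using CryptoPP::Integer;
using CryptoPP::a_exp_b_mod_c;

int main( int, char** )
{
    Integer a("2"), e("64"), m("1999");
    // computes a^e mod m without ever forming the full power
    cout << "2^64 mod 1999: " << a_exp_b_mod_c(a, e, m) << endl;
    return 0;
}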
Compiling and Linking
To compile and link the program issue the following. There are two points to observe when compiling and linking. First, use g++ rather than gcc (see GCC's Compiling C++ Programs). Second the library directive (-l) is the last argument to g++. Note: on some systems (for example, Fedora), the library option might be -lcrypto++.
g++ -DNDEBUG -g3 -O2 -Wall -Wextra -o test test.cpp -l:libcryptopp.a
-Wall and -Wextra turn on most warnings since it's always a good idea to let static analysis tools catch mistakes early. (For reasons only known to the free software world, -Wall only turns on some warnings, and not all warnings as the name implies.) -Wno-unused increases the signal-to-noise ratio because Crypto++ has a lot of unused parameters, which is par for the course for an object-oriented program with lots of interfaces.
If -Wall and -Wextra are producing too much noise (even after -Wno-unused), compile with -fdiagnostics-show-option to locate which additional warnings to suppress. See Options to Request or Suppress Warnings.
The -g3 switch adds maximum debug information to the executable test and is not required (note that -g is equivalent to -g2). The -ggdb switch adds gdb extensions which might prove useful when debugging non-trivial programs under versions of gdb that support the extensions. See Options for Debugging Your Program or GCC. If you are adding -ggdb, you might consider adding -fno-omit-frame-pointer so that stack traces from the field are easier to walk.
-O0 disables all optimizations, so even #define's are available to the debugger. Usually, values such as these are folded away by the optimizer, and the debugger will display something similar to 'value optimized out'. Note that -O0 is the default optimization level.
-o specifies the output file name. -l specifies the library. When using -l, we only need to specify the base library name. ld will prepend lib and append .a (or .so) when searching for the library (reference the ld man pages around line 6200). Finally, the -l option location is significant (libraries must be specified last).
The -l option above uses the -l:filename form, which tells the GNU linker to search for that exact file name (here libcryptopp.a) instead of prepending lib and appending .a or .so, forcing a link against the static archive.
Running the Program
To run the program, specify the current working directory in case test is also a program in /usr/bin.
./test
Output should be as below.
i: 0.
If we were to run the variant 100000000000000000000000000000000 % 1999, we would receive the expected result of 820.
Stripping Debug Symbols
If the program was compiled and linked with debug symbols, strip can be used to remove the symbols. Once stripped, debuggers and stack traces are generally useless.
> ls -l myprogram ... 6866924 2009-04-28 16:12 myprogram > strip --strip-debug myprogram > ls -l myprogram ... 1738761 2009-04-28 16:13 myprogram
Also see Debug Symbols for how to create two part executables on Linux. A two part executable is a stripped executable and its debug information file (symbol file, similar to a Microsoft PDB).
Crypto++ Libraries
In general, try to use the static version of the Crypto++ library (libcryptopp.a or libcrypto++.a) rather than the shared object.
When building from the makefile, the library name will be either libcryptopp.a or libcryptopp.so (depending on the make target). For distributions which provide a Crypto++ library package, there seems to be two predominant shared object names: libcryptopp.{a|so} and libcrypto++.{a|so}. For example, systems running Fedora or OpenSuse will use libcryptopp.so, while Debian and Ubuntu will offer libcrypto++.so.
C++ based shared objects under GNU Linux usually leave something to be desired. If you have problems with two separate libraries each loading Crypto++ as a shared object, see:
- C++ One Definition Rule (cause of many double free's)
- dynamic_cast, throw, typeid don't work with shared libraries (the GCC FAQ entry for the behavioral change in the 3.0 ABI)
- Minimal GCC/Linux shared lib + EH bug example (consequence of documented behavior)
- dlopen and placing exception body in .cpp file (consequence of documented behavior)
- Errors with multiple loading cryptopp as shared lib on Linux (ODR hardening at version 5.6.1 with commits 488, et al)
- Global variable in static library - double free or corruption error (consequence of documented behavior)
- Comparing types across DSOs (sic) (consequence of documented behavior)
- RTLD_GLOBAL and libcryptopp.so crash (ODR hardening at version 5.6.1 with commits 488, et al)
- pycryptopp Ticket 44 (Python module loading of Crypto++ dynamic library) (using RTLD_GLOBAL breaks glibc, includes a python script and test cases)
libcryptopp.a
libcryptopp.a is the static version of the Crypto++ library. The Crypto++ makefile will build and install the static library in response to make install. Since the library is typically on a known path for the linker (see below), there is usually no need for the -L option to ld.
> whereis libcryptopp.a libcryptopp: /usr/lib/libcryptopp.a
The GNU linker has a nice option of the form -l:filename. It instructs ld to search for that literal file name rather than prepending lib and appending .a or .so, which makes it easy to force a link against libcryptopp.a even when libcryptopp.so is installed.
libcryptopp.so
libcryptopp.so is the shared object version of the Crypto++ library (dynamic version in Windows parlance). The makefile added the libcryptopp.so target at commit 496. Revision 496 is included in the 5.6.1 ZIP archive on the website.
Though Crypto++ now supports building a shared object, Crypto++ versions 5.6.0 and prior did not support shared objects and the makefile did not include the target. Zooko Wilcox-O'Hearn, author of the Tahoe File System (a secure, decentralized, fault-tolerant file system), has posted a patch for building Crypto++ as a shared object. See Patch: build libcryptopp.so. In addition, Zooko offers information on using Debian and Fedora's prebuilt libcryptopp.so.
Users of the shared object should familiarize themselves with Note for Shared Object Callers, and package maintainers should read Note for Distribution Packagers below.
Note for Shared Object Callers and Note for Distribution Packagers discuss some circumstances required to crash a shared object (C++ shared object, RTLD_GLOBAL, global objects with destructors, and violation of C++'s One Definition Rule). Use of global variables is inevitable, and Crypto++ attempts to use them in a library-safe manner.
Auditing the library for use of global variables began when Crypto++ added official support for shared objects. Unfortunately, it does not appear that gcc offers a switch to audit for use of globals. A feature request for the switch was submitted in October, 2010. See Switch to warn of global variables in a C++ shared object.
In the absence of an official auditing method, Crypto++ tries to use consistent naming, with global variables specified with a g_ prefix. Ian Taylor suggests using objdump and objdump -t -j .data | grep ' g '. nm -D /usr/lib/cryptopp.so | grep g_ can also be used to audit use of global variables in the library. See Audit Use of Global Variables in C++ Shared Object (GCC Warning?).
Callers which dynamically load libcryptopp.so or libcrypto++.so should use the RTLD_GLOBAL flag in dlopen. If the call to dlopen fails, OR in RTLD_LAZY and try again. According to the Open Group Base Specifications of dlopen, a third option of interest exists, but we've never had to use it: RTLD_NOW. If the first two tries fail, RTLD_GLOBAL | RTLD_NOW probably will not hurt.
RTLD_GLOBAL is required so that a module can correctly catch a Crypto++ exception. The reason for the flags is the new C++ ABI and the GCC 3.0 [and above] series of tools use address comparisons, rather than string comparisons, to determine type equality. See dynamic_cast, throw, typeid don't work with shared libraries.
The flags passed to dlopen (RTLD_GLOBAL, RTLD_LAZY, and RTLD_NOW) have been determined through trial and error, and are being passed on in folklore fashion (there's some wiggle room in the Open Group Base Specifications of dlopen). For example, Fedora and OpenSuse appear to accept just RTLD_GLOBAL, while Debian and Ubuntu seem to require RTLD_GLOBAL | RTLD_LAZY.
Since your program might run on systems which use either libcryptopp.so or libcrypto++.so, the following might be a helpful strategy for dynamic loading. In fact, Cryptopp-SO-Test.zip uses a similar strategy to ensure the Crypto++ shared object is located regardless of Linux distribution or dlopen behavior.
void* cryptlib = NULL;

cryptlib = dlopen("libcryptopp.so", RTLD_GLOBAL);
if(!cryptlib)
    cryptlib = dlopen("libcryptopp.so", RTLD_GLOBAL | RTLD_LAZY);
if(!cryptlib)
    cryptlib = dlopen("libcrypto++.so", RTLD_GLOBAL);
if(!cryptlib)
    cryptlib = dlopen("libcrypto++.so", RTLD_GLOBAL | RTLD_LAZY);
if(!cryptlib)
    throw runtime_error("Failed to load crypto++ shared object");
...
dlclose(cryptlib);
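If every attempt fails, it can help to log why before giving up. The program below is a sketch (not from the wiki) using dlerror from <dlfcn.h>, which returns a human-readable description of the most recent dynamic-loading error; link with -ldl on glibc systems.

#include <dlfcn.h>
#include <iostream>

int main()
{
    // Sketch: report why a load attempt failed before trying the next
    // library name or flag combination.
    void* handle = dlopen("libcryptopp.so", RTLD_GLOBAL | RTLD_LAZY);
    if (!handle)
    {
        const char* msg = dlerror();   // human-readable reason for the failure
        std::cerr << "dlopen failed: " << (msg ? msg : "unknown error") << std::endl;
        return 1;
    }
    dlclose(handle);
    return 0;
}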
Android 6.0 changed shared object loading behavior, so you may also need to try with RTLD_LOCAL. See Android 6.0 Changes | Runtime for details.
Finally, Zooko Wilcox O'Hearn reports that Python test cases occasionally fail when using RTLD_GLOBAL. Zooko's workaround was to drop the flag. Open questions include 'Does this imply RTLD_LOCAL?' and 'Is this behavior limited to Ubuntu?'. See [Tahoe LAFS] Changeset 711. So much for planning based on documentation.
Note for Distribution Packagers
The following should probably be in a separate FAQ for packagers and patchers.
Bug Fixes and Releases
If you are a package maintainer and would like to know of bug fixes, releases, and other interesting events which might warrant a package update, please add your email address to the list on the Linux Talk page.
Master is Current and Stable
Crypto++ keeps Master stable. You can just about always checkout master and use anything from it.
Unfortunately, Crypto++ does not back port because it does not maintain release branches.
Crypto++ X.Y.Z ZIP is Fixed
When Crypto++ releases a version, a ZIP file is created and placed on the website for download. For example, the Crypto++ 5.6.1 ZIP was placed in the download area in August 2010 after a commit of revision 521. The 5.6.1 ZIP file is fixed in time and does not include the latest bug fixes - it will always be cumulative up to revision 521. Inevitably, all ZIP files will become stale over time.
Library Validation and Benchmarks
After building the Crypto++ library, please issue cryptest.exe v to verify the library. More comprehensive tests are available with cryptest.exe tv all. Run cryptest.exe b to run the benchmarks.
cryptest.exe v
To validate the Crypto++ library, one runs cryptest.exe v. However, cryptest.exe makes assumptions about the location of its test data. After cryptest.exe has been installed in /usr/bin or /usr/local/bin, the assumptions no longer hold true and the validation no longer works. A patch is now available, and it can be found at DataDir patch.
Linux Distributions
The following offers helpful information when using packages provided by the distribution.
Distributions Offering Crypto++
Crypto++ does not offer RPMs or DEBs. Each distribution performs its own packaging.
The list above is not complete, is listed in alphabetical order, and not meant to insult unlisted distributions. The table includes Red Hat because it is popular in the enterprise. The table was compiled using the top 5 or so distributions from Linux Distributions - Facts and Figures at distrowatch.com.
Distribution Package Version
The
dpkg command can be used to determine the version of Crypto++ installed on a system using apt-get and friends.
$ apt-cache pkgnames | grep -i crypto++ libcrypto++-utils libcrypto++8 libcrypto++8-dbg libcrypto++-dev libcrypto++-doc $ dpkg -s libcrypto++8 Package: libcrypto++8 Status: install ok installed Priority: optional Section: libs Installed-Size: 4900 Maintainer: Ubuntu Developers Architecture: amd64 Source: libcrypto++ Version: 5.6.0-5 Depends: libc6 (>= 2.4), libgcc1 (>= 1:4.1.1), libstdc++6 (>= 4.2.1) Description: General purpose cryptographic library - shared library General purpose cryptographic library for C++. This package contains the shared libraries and should only be installed if other packages depend on it. Original-Maintainer: Jens Peter Secher <jps@debian.org> Homepage:
GNU Linux versioning usually leaves a lot to be desired. Some distributions use the library name and Crypto++ library version to form a package name and file names (for example, libcryptopp-5.6.0.so). Other distributions use the library name and base system version (for example, libcryptopp-fc13.so) for package and file names. And the situation only gets worse as fixes are back-ported while leaving package names, versions, and file names unchanged.
To show package-related information, use apt-cache.
$ apt-cache showpkg libcrypto++8 Package: libcrypto++8 Versions: 5.6.0-5 (/var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_lucid_universe_binary-amd64_Packages) (/var/lib/dpkg/status) Description Language: File: /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_lucid_universe_binary-amd64_Packages MD5: 81fdf53fa3eb3f0f338e2c29ca70b7aa Reverse Depends: kvirc,libcrypto++8 python-pycryptopp-dbg,libcrypto++8 python-pycryptopp,libcrypto++8 libcrypto++8-dbg,libcrypto++8 5.6.0-5 libcrypto++-utils,libcrypto++8 libcrypto++-dev,libcrypto++8 5.6.0-5 kvirc,libcrypto++8 amule-utils-gui,libcrypto++8 amule-daemon,libcrypto++8 amule-adunanza-utils-gui,libcrypto++8 amule-adunanza-daemon,libcrypto++8 amule-adunanza,libcrypto++8 amule,libcrypto++8 Dependencies: 5.6.0-5 - libc6 (2 2.4) libgcc1 (2 1:4.1.1) libstdc++6 (2 4.2.1) Provides: 5.6.0-5 -
For systems using yum, use info to display Crypto++ package information:
# yum info cryptopp updates/metalink | 17 kB 00:00 updates-debuginfo/metalink | 14 kB 00:00 updates-source/metalink | 15 kB 00:00 Installed Packages Name : cryptopp Arch : x86_64 Version : 5.6.1 Release : 3.fc14 Size : 5.4 M Repo : installed From repo : updates Summary : Public domain C++ class library of cryptographic schemes URL : License : Public Domain Description : Crypto++ Library is a free C++ class library of cryptographic : schemes. See for a list of supported : algorithms. : : One purpose of Crypto++ is to act as a repository of public domain : (not copyrighted) source code. Although the library is copyrighted : as a compilation, the individual files in it are in the public : domain. ...
Examining the Library
This section covers usage of objdump and nm to examine symbols and code generation. See Ten Commands Every Linux Developer Should Know. If you are working on Mac OS X, use otool -afhv <obj file>.
Examining Code
To examine the generated code, use objdump or dissy (a visual front end to objdump). Below, objdump displays the file header.
user@studio:~$ objdump -f rsa_kgen.exe rsa_kgen.exe: file format elf64-x86-64 architecture: i386:x86-64, flags 0x00000112: EXEC_P, HAS_SYMS, D_PAGED start address 0x0000000000404dd0
To disassemble a function, use the -d switch.
user@studio:~$ objdump -d rsa_kgen.exe 0x4049a8 rsa_kgen.exe: file format elf64-x86-64 Disassembly of section .init: 00000000004049a8 <_init>: 4049a8: 48 83 ec 08 sub $0x8,%rsp 4049ac: e8 4b 04 00 00 callq 404dfc <call_gmon_start> 4049b1: e8 da 04 00 00 callq 404e90 <frame_dummy> 4049b6: e8 25 3d 00 00 callq 4086e0 <__do_global_ctors_aux> 4049bb: 48 83 c4 08 add $0x8,%rsp 4049bf: c3 retq
Similar to nm (discussed below), we can also dump decorated symbols.
user@studio:~$ objdump -t rsa_kgen.exe | grep Integer 000000000060c260 w O .bss 0000000000000038 _ZTVN8CryptoPP7IntegerE 0000000000406e5a w F .text 0000000000000048 _ZNK8CryptoPP23TrapdoorFunctionInverse26CalculateRandomizedInverseERNS_21RandomNumberGeneratorERKNS_7IntegerE 0000000000000000 F *UND* 0000000000000000 _ZN8CryptoPP7IntegerC1ERKS0_ 0000000000406cba w F .text 0000000000000047 _ZNK8CryptoPP16TrapdoorFunction23ApplyRandomizedFunctionERNS_21RandomNumberGeneratorERKNS_7IntegerE 0000000000000000 F *UND* 0000000000000000 _ZN8CryptoPP7IntegermmEv 0000000000000000 F *UND* 0000000000000000 _ZN8CryptoPP7IntegerC1Ev 0000000000406a8c w F .text 0000000000000075 _ZN8CryptoPP7IntegerD1Ev
Finally, we can clean up the output a bit with the following command.
user@studio:~$ objdump -t rsa_kgen.exe | grep Integer | cut --bytes=26-30,60- .bss _ZTVN8CryptoPP7IntegerE .text _ZNK8CryptoPP23TrapdoorFunctionInverse26CalculateRandomizedInverseERNS_21RandomNumberGeneratorERKNS_7IntegerE *UND* _ZN8CryptoPP7IntegerC1ERKS0_ .text _ZNK8CryptoPP16TrapdoorFunction23ApplyRandomizedFunctionERNS_21RandomNumberGeneratorERKNS_7IntegerE *UND* _ZN8CryptoPP7IntegermmEv *UND* _ZN8CryptoPP7IntegerC1Ev .text _ZN8CryptoPP7IntegerD1Ev
Examining Symbols
To dump the contents of the library, use the nm command. Note: if examining symbols from a shared object, you must include the -D or --dynamic option:
nm /usr/lib/cryptopp.a nm --dynamic /usr/lib/cryptopp.so
The command below dumps the library's exported symbols then filters based on "Integer". Output is not shown because of the number of lines in the output.
> nm /usr/lib/libcryptopp.a | grep 'Integer'
The following dumps the first eight demangled Integer entries from the library. Undefined entries - those of type U - can be removed with the --defined-only option.
> nm --demangle /usr/lib/libcryptopp.a | grep 'Integer' | head -8 00000000 B CryptoPP::g_pAssignIntToInteger U CryptoPP::g_pAssignIntToInteger U CryptoPP::g_pAssignIntToInteger U CryptoPP::a_exp_b_mod_c(CryptoPP::Integer const&, CryptoPP::Integer const&, CryptoPP::Integer const&) 00000850 T CryptoPP::PublicBlumBlumShub::PublicBlumBlumShub(CryptoPP::Integer const&, CryptoPP::Integer const&) 00000b60 T CryptoPP::PublicBlumBlumShub::PublicBlumBlumShub(CryptoPP::Integer const&, CryptoPP::Integer const&) U CryptoPP::Integer::Integer(CryptoPP::Integer::Sign, unsigned long long) U CryptoPP::Integer::Integer(CryptoPP::Integer const&)
Below is a dump of unique Integer constructors offered by the library. The command should be entered on a single line (the command spans two lines for readability). The --bytes=12- option instructs cut to remove bytes 0-11 by specifying 12 to End of Line ( 12- ). Note that the trailing dash of 12- is not optional. sort performs an ascending sort, and uniq removes consecutive duplicates.
> nm --demangle --defined-only /usr/lib/libcryptopp.a | grep 'CryptoPP::Integer::Integer' | cut --bytes=12- | sort | uniq CryptoPP::Integer::Integer() CryptoPP::Integer::Integer(char const*) CryptoPP::Integer::Integer(CryptoPP::BufferedTransformation&) CryptoPP::Integer::Integer(CryptoPP::BufferedTransformation&, unsigned int, CryptoPP::Integer::Signedness) CryptoPP::Integer::Integer(CryptoPP::Integer const&) CryptoPP::Integer::Integer(CryptoPP::Integer::Sign, unsigned int, unsigned int) CryptoPP::Integer::Integer(CryptoPP::Integer::Sign, unsigned long long) CryptoPP::Integer::Integer(CryptoPP::RandomNumberGenerator&, unsigned int) CryptoPP::Integer::Integer(long) CryptoPP::Integer::Integer(unsigned char const*, unsigned int, CryptoPP::Integer::Signedness) CryptoPP::Integer::Integer(unsigned int, unsigned int) CryptoPP::Integer::Integer(wchar_t const*) [One contructor removed for screen formatting]
Finally, we can further clean up the output by specifying sed "s/CryptoPP:://g". The sed command will replace instances of "CryptoPP::" with an empty string.
> nm --demangle --defined-only /usr/lib/libcryptopp.a | grep 'CryptoPP::Integer::Integer' | cut --bytes=12- | sed "s/CryptoPP:://g" | sort | uniq Integer::Integer() Integer::Integer(BufferedTransformation&) Integer::Integer(BufferedTransformation&, unsigned int, Integer::Signedness) Integer::Integer(char const*) Integer::Integer(Integer const&) Integer::Integer(Integer::Sign, unsigned int, unsigned int) Integer::Integer(Integer::Sign, unsigned long long) Integer::Integer(long) Integer::Integer(RandomNumberGenerator&, unsigned int) Integer::Integer(unsigned char const*, unsigned int, Integer::Signedness) Integer::Integer(unsigned int, unsigned int) Integer::Integer(wchar_t const*) [One contructor removed for screen formatting]
x86 vs x64
The Crypto++ makefile uses a CXXFLAG of -march=native, so Crypto++ builds correctly for the platform. There are no special options required.
On x86 Linux, the traditional /usr/lib directory is used for the library. Some 64-bit Linux's, such as Debian and Ubuntu, use two directories: /usr/lib and /usr/lib64. Both /usr/lib and /usr/lib64 contain 64 bit libraries. Other distributions, such as Fedora, use only /usr/lib64 (see Alexey Yurov, Maintainer of Crypto++ on Fedora, response to Linux: makefile, install and x86_64). The Crypto++ makefile only installs to /usr/lib on x64 machines.
Below, we use objdump to examine a Ubuntu 10.04 installation. The -f option to objdump indicates we only want the header. When dumping the archive (libcryptopp.a), we use head to display the first 12 lines of output. Otherwise, we would get a complete dump since an archive is a collection of objects to be used by the linker.
user@studio:~$ whereis libcryptopp libcryptopp: /usr/lib/libcryptopp.so /usr/lib/libcryptopp.a /usr/lib64/libcrypto++.so /usr/lib64/libcrypto++.a user@studio:~$ objdump -f /usr/lib/libcryptopp.so /usr/lib/libcryptopp.so: file format elf64-x86-64 architecture: i386:x86-64, flags 0x00000150: HAS_SYMS, DYNAMIC, D_PAGED start address 0x000000000027f630 user@studio:~$ objdump -f /usr/lib/libcryptopp.a | head -12 In archive /usr/lib/libcryptopp user@studio:~$ objdump -f /usr/lib64/libcrypto++.so /usr/lib64/libcrypto++.so: file format elf64-x86-64 architecture: i386:x86-64, flags 0x00000150: HAS_SYMS, DYNAMIC, D_PAGED start address 0x000000000027f630 user@studio:~$ objdump -f /usr/lib64/libcrypto++.a | head -12 In archive /usr/lib64/libcrypto++
Common Errors
This section covers common errors encountered when working with Crypto++.
gcc/g++/ld Versions
Verify you are using a supported version of the g++ and ld. Compare the results to the platform matrix on the Crypto++ homepage. In general, GCC 3.3 and above is supported for the latest versions of Crypto++ (5.4, 5.5, and 5.6).
> g++ --version g++ (SUSE Linux) 4.3.2 [gcc-4_3-branch revision 141291]
> ld --version GNU ld (GNU Binutils; openSUSE 11.1) 2.19
-l option
According to the g++ man pages (around line 25), the -l option location is significant:
For the most part, the order you use doesn't matter. Order does matter when you use several options of the same kind; for example, if you specify -L more than once, the directories are searched in the order specified. Also, the placement of the -l option is significant.
Also see the note regarding object link order in GCC Options for Linking.
If you receive an error similar to undefined reference to CryptoPP::Integer::Integer() or undefined reference to vtable for CryptoPP::Integer, you most likely specified the library as an early option to g++:
> g++ -lcryptopp -o test test.cpp # wrong!!!
We can verify the vtables are present using nm:
> nm --demangle --defined-only /usr/lib/libcryptopp.a | grep 'vtable for CryptoPP::Integer' | sort | uniq 00000000 V vtable for CryptoPP::Integer 00000000 V vtable for CryptoPP::Integer::DivideByZero 00000000 V vtable for CryptoPP::Integer::OpenPGPDecodeErr 00000000 V vtable for CryptoPP::Integer::RandomNumberNotFound
SHA-2 Crash and Binutils
If you experience a crash during validation (cryptest.exe v) and you are using the latest SVN code, verify your version of binutils. See "crytest v" hangs up and Conflict between Crypto++ 5.6.0 and GNU binutils 2.19.91.20091014-0ubuntu1.
GCC 4.5.2 Internal Compiler Error
If you experience an internal compiler error when using GCC 4.5.2 on Ubuntu 11.04:
$ make g++ -DNDEBUG -g -O2 -march=native -pipe -c 3way.cpp g++ -DNDEBUG -g -O2 -march=native -pipe -c adler32.cpp g++ -DNDEBUG -g -O2 -march=native -pipe -c algebra.cpp g++ -DNDEBUG -g -O2 -march=native -pipe -c algparam.cpp g++ -DNDEBUG -g -O2 -march=native -pipe -c arc4.cpp g++ -DNDEBUG -g -O2 -march=native -pipe -c asn.cpp asn.cpp: In member function ‘void CryptoPP::OID::DEREncode(CryptoPP::BufferedTransformation&) const’: asn.cpp:254:1: error: unrecognizable insn: (insn 194 178 195 2 asn.cpp:248 (set (reg:SI 2 cx) (mem:QI (plus:SI (reg/f:SI 1 dx [orig:61 D.44160 ] [61]) (const_int 4 [0x4])) [16 S1 A32])) -1 (nil)) asn.cpp:254:1: internal compiler error: in extract_insn, at recog.c:2104 Please submit a full bug report, with preprocessed source if appropriate. See <> for instructions. make: *** [asn.o] Error 1
Then comment out the -march=native line in GNUmakefile:
cat GNUmakefile
...
ifneq ($(GCC42_OR_LATER),0)
ifeq ($(UNAME),Darwin)
CXXFLAGS += -arch x86_64 -arch i386
else
# CXXFLAGS += -march=native
endif
endif
stdlibc
If you start with an error regarding __static_initialization_and_destruction, with hundreds following, you most likely used gcc rather than g++:
> gcc -o test test.cpp -lcryptopp # wrong!!!
Undefined Crypto++ Type
If gcc/g++ does not recognize a Crypto++ type (such as CryptoPP::Integer), verify the correct header is being included and verify the CryptoPP namespace is being used. Finally, verify gcc/g++ header paths. To display the default search directories of GCC, try:
> echo "" | gcc -o /tmp/tmp.o -v -x c - > echo "" | g++ -o /tmp/tmp.o -v -x c++ -
Cannot find -lcryptopp (or -lcrypto++)
If you receive an error from collect2/ld stating cannot find -lcryptopp, the library is not installed or on path.
> g++ -o test test.cpp -lcryptopp .../i586-suse-linux/bin/ld: cannot find -lcryptopp collect2: ld returned 1 exit status
Verify the library is installed:
> whereis libcryptopp libcryptopp: /usr/lib/libcryptopp.a /usr/lib64/libcryptopp.a
If installed, try specifying the path with the -L option (below, change /usr/lib to the library location). (Also see GCC Options for Linking)
> g++ -o test test.cpp -L/usr/lib -lcryptopp
If the library is not installed and it cannot be found in the distribution, it might be named libcrypto++.a or libcrypto++.so (or vice-versa: libcryptopp.a or libcryptopp.so):
> g++ -o test test.cpp -lcrypto++
Undefined Reference to WSA Sockets
If you encounter link errors relating to sockets such as the following, you might need to add -lws2_32 to the link command. See trungantran's comments on Compile Error in socketft.cpp.
undefined reference to '_WSAEnumNetworkEvents@12' undefined reference to '_WSAEventSelect@12' undefined reference to '_WSAGetOverlappedResult@20' undefined reference to '_WSAGetOverlappedResult@20' undefined reference to '_WSARecv@28' undefined reference to '_WSASend@28'
Undefined Reference to pthread
If you encounter link errors relating to pthreads such as "undefined reference to pthread_key_create" when creating libcryptopp.so or cryptest.exe, add -pthread to the LDFLAGS (encountered on OpenBSD 4.8/4.9).
./libcryptopp.so: undefined reference to `pthread_getspecific' ./libcryptopp.so: undefined reference to `pthread_key_delete' ./libcryptopp.so: undefined reference to `pthread_key_create' ./libcryptopp.so: undefined reference to `pthread_setspecific'
If you encounter link errors relating to pthreads such as "undefined reference to pthread_key_create" when linking against a shared object, you most likely did not link against the pthread library. See Eugene Zolenko's comments on the Crypto++ user group.
/usr/lib/gcc/i486-linux-gnu/4.3.2/../../../../lib/libcryptopp.so: undefined reference to `pthread_key_create' /usr/lib/gcc/i486-linux-gnu/4.3.2/../../../../lib/libcryptopp.so: undefined reference to `pthread_getspecific' /usr/lib/gcc/i486-linux-gnu/4.3.2/../../../../lib/libcryptopp.so: undefined reference to `pthread_key_delete' /usr/lib/gcc/i486-linux-gnu/4.3.2/../../../../lib/libcryptopp.so: undefined reference to `pthread_setspecific'
Try including the pthread library:
> g++ -o test test.cpp -lcryptopp -lpthread
File number X already allocated
If using precompiled headers and you get a message similar to "file number 2 already allocated", see Including a precompiled header from another header causes invalid assembly to be generated. The take away: upgrade to GCC 4.5.
No such instruction: pclmulqdq
pclmulqdq is Intel's carry-less multiplication instruction (CLMUL), introduced alongside AES-NI. According to Intel Software Development Emulator, "GCC 4.4 includes support for these [AESNI and PCLMULQDQ] instructions". H.J. Lu's patch was submitted in 2008 and can be found at PATCH: Enable Intel AES/CLMUL.
cpu.h:53:no such instruction: `pclmulqdq $16, -64(%rbp),%xmm0' cpu.h:53:no such instruction: `pclmulqdq $0, -192(%rbp),%xmm0' cpu.h:53:no such instruction: `pclmulqdq $0, -80(%rbp),%xmm0' cpu.h:53:no such instruction: `pclmulqdq $16, -112(%rbp),%xmm0' cpu.h:53:no such instruction: `pclmulqdq $1, -144(%rbp),%xmm1' ...
According to X86 Built-in Functions, GCC requires the following to generate the instruction:
The following built-in function is available when -mpclmul is used. v2di __builtin_ia32_pclmulqdq128 (v2di, v2di, const int) Generates the pclmulqdq machine instruction.
If your processor does not support the instruction, or you don't want to generate AESNI and PCLMULQDQ instructions, define CRYPTOPP_DISABLE_AESNI.
Debugging
If you attempt to debug a program linked to libcryptopp.a (or libcryptopp.so) and you cannot list your source file:
> (gdb) list main No line number known for main.
Verify that you have compiled with debugging information (the -g option):
> g++ -g -ggdb test.cpp -o test -lcryptopp
If you are using a distribution's library, make sure you have installed the development library and the debug symbols:
> # Debian, Ubuntu, and other Debian derivatives > apt-get install libcryptopp-dev libcryptopp-dbg
$ # Fedora and other RPM based systems $ yum install cryptopp cryptopp-devel cryptopp-develdebug
With debug symbols present, listing and stepping should be available:
(gdb) list 5 #include "cryptopp/osrng.h" 6 #include "cryptopp/secblock.h" 7 using CryptoPP::AutoSeededRandomPool; 8 using CryptoPP::SecByteBlock; 9 10 int main( int, char** ) { 11 12 AutoSeededRandomPool prng; 13 SecByteBlock block(16); 14 15 prng.GenerateBlock(block, block.size()); 16 17 return 0; 18 }
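For reference, the complete program behind that listing, reconstructed as a sketch from the lines gdb shows, is:

#include "cryptopp/osrng.h"
#include "cryptopp/secblock.h"
using CryptoPP::AutoSeededRandomPool;
using CryptoPP::SecByteBlock;

int main( int, char** )
{
    AutoSeededRandomPool prng;     // OS-seeded random number generator
    SecByteBlock block(16);        // 16-byte secure buffer

    prng.GenerateBlock(block, block.size());

    return 0;
}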
Downloads
Cryptopp-SO-Test.zip - test harness with multiple threads that calls one of four [test] shared objects (each is distinct). The four [test] shared objects each loads the libcryptopp.so shared object. See Note for Distribution Packagers above.
On 2/13/06, Allen Bierbaum <abierbaum at gmail.com> wrote:
> Roman et al:
>
> I have been wracking my brain but I still can not figure out why my
> simple example is not exporting bindings for a very simple class. I
> have attached the relevant files to this e-mail.
>
> Base.h - Defines a class with a constructor, destructor, and one method.
> gen_python_module.py - Script to parse Base.h and generate bindings.
> decl_printer.py - Contains helper class I have found useful for
> writing out the current decl tree/list
>
> As far as I can tell the filter is working correctly and the Base
> class is being allowed through the filter. One thing I don't
> understand is why all the free functions in namespace '::' are making
> it through the filter as well, but that shouldn't affect the exporting
> of this class.
>
> Can anyone see why I keep getting this output:

Because you use old version. Try CVS one and it will work.

> This seems like about the simplest example I can make for exporting a class.
>
> Roman: I think it would be a *very* good idea to add exporting of a
> class to the tutorial. This is most likely the first thing anyone
> will try with pyplusplus.

I will do it.

> Thanks,
> Allen

Thanks

--
Roman Yakovenko
C++ Python language binding
Setting up start-up scripts
What is a start-up script?
A start-up script is a script which is executed when RoboFont is initialized.
Start-up scripts are typically used to add custom observers, menu items or toolbar buttons to RoboFont. They can be included in extensions, or added manually in the Preferences.
This article explains how to add a start-up script manually using the Extensions Preferences.
An example script
As an example, we’ll use the script below to make your computer read a welcome message every time RoboFont is launched.
import AppKit

speaker = AppKit.NSSpeechSynthesizer.availableVoices()[7]
voice = AppKit.NSSpeechSynthesizer.alloc().initWithVoice_(speaker)
voice.startSpeakingString_("Welcome to RoboFont!")
Save this script somewhere on your computer.
Editing the Start-Up Scripts Preferences
Now go to the Extensions Preferences, select the Start Up Scripts tab, and use the + button to add the script to the list.
Apply the changes, and restart RoboFont.
You should now hear a welcome message every time RoboFont is launched!
Manually import module python
I think the following should work:
foo.py:
from bar import bar_var
foo_var = 1
bar.py:
from foo import foo_var
bar_var = 2
This Tutorial Explains the Doubly Linked List in Java along with Double Linked List Implementation, Circular Doubly Linked List Java Code & Examples:
The linked list is a sequential representation of elements. Each element of the linked list is called a ‘Node’. One type of linked list is called “Singly linked list”.
In this, each node contains a data part that stores actual data and a second part that stores pointer to the next node in the list. We have already learned the details of the singly linked list in our previous tutorial.
Doubly Linked List In Java
A linked list has another variation called “doubly linked list”. A doubly linked list has an additional pointer known as the previous pointer in its node apart from the data part and the next pointer as in the singly linked list.
A node in the doubly linked list looks as follows:
Here, “Prev” and “Next” are pointers to the previous and next elements of the node respectively. The ‘Data’ is the actual element that is stored in the node.
The following figure shows a doubly linked list.
The above diagram shows the doubly linked list. There are four nodes in this list. As you can see, the previous pointer of the first node, and the next pointer of the last node is set to null. The previous pointer set to null indicates that this is the first node in the doubly linked list while the next pointer set to null indicates the node is the last node.
Advantages
- As each node has pointers pointing to the previous and next nodes, the doubly linked list can be traversed easily in forward as well as backward direction
- You can quickly add the new node just by changing the pointers.
- Similarly, for delete operation since we have previous as well as next pointers, the deletion is easier and we need not traverse the whole list to find the previous node as in case of the singly linked list.
Disadvantages
- Since there is an extra pointer in the doubly linked list i.e. the previous pointer, additional memory space is required to store this pointer along with the next pointer and data item.
- All the operations like addition, deletion, etc. require that both previous and next pointers are manipulated thus imposing operational overhead.
Implementation In Java
The implementation of a doubly linked list in Java comprises creating a doubly linked list class, the node class, and adding nodes to the doubly linked list.
The addition of new nodes is usually done at the end of the list. The below diagram shows the addition of the new node at the end of the doubly linked list.
As shown in the above diagram, to add a new node at the end of the list, the next pointer of the last node now points to the new node instead of null. The previous pointer of the new node points to the last node. Also, the next pointer of the new node points to null, thereby making it a new last node.
The program below shows Java implementation of a doubly-linked list with the addition of new nodes at the end of the list.
class DoublyLinkedList {

    //A node class for doubly linked list
    class Node {
        int item;
        Node previous;
        Node next;

        public Node(int item) {
            this.item = item;
        }
    }

    //Initially, head and tail are set to null
    Node head, tail = null;

    //add a node to the list
    public void addNode(int item) {
        //Create a new node
        Node newNode = new Node(item);

        //if list is empty, head and tail point to newNode
        if(head == null) {
            head = tail = newNode;
            //head's previous will be null
            head.previous = null;
            //tail's next will be null
            tail.next = null;
        } else {
            //add newNode to the end of list. tail->next set to newNode
            tail.next = newNode;
            //newNode->previous set to tail
            newNode.previous = tail;
            //newNode becomes new tail
            tail = newNode;
            //tail's next points to null
            tail.next = null;
        }
    }

    //print all the nodes of doubly linked list
    public void printNodes() {
        //Node current will point to head
        Node current = head;
        if(head == null) {
            System.out.println("Doubly linked list is empty");
            return;
        }
        System.out.println("Nodes of doubly linked list: ");
        while(current != null) {
            //Print each node and then go to next.
            System.out.print(current.item + " ");
            current = current.next;
        }
    }
}

class Main {
    public static void main(String[] args) {
        //create a DoublyLinkedList object
        DoublyLinkedList dl_List = new DoublyLinkedList();
        //Add nodes to the list
        dl_List.addNode(10);
        dl_List.addNode(20);
        dl_List.addNode(30);
        dl_List.addNode(40);
        dl_List.addNode(50);
        //print the nodes of DoublyLinkedList
        dl_List.printNodes();
    }
}
Output:
Nodes of doubly linked list:
10 20 30 40 50
Apart from adding a new node at the end of the list, you can also add a new node at the beginning of the list or in between the list. We leave this implementation to the reader so that the readers can understand the operations in a better way.
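For reference, one possible way to add a node at the beginning of the list (reusing the Node class and the head/tail fields of the DoublyLinkedList class above) is sketched below; treat it as a starting point, not the only solution.

//Sketch: add a new node at the beginning of the doubly linked list
public void addNodeAtStart(int item) {
    //Create a new node
    Node newNode = new Node(item);
    if(head == null) {
        //empty list: the new node is both head and tail
        head = tail = newNode;
    } else {
        //new node points forward to the old head
        newNode.next = head;
        //old head points back to the new node
        head.previous = newNode;
        //new node becomes the head and has no previous node
        head = newNode;
        head.previous = null;
    }
}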
Circular Doubly Linked List In Java
The following diagram shows the circular doubly linked list.
As shown in the above diagram, the next pointer of the last node points to the first node. The previous pointer of the first node points to the last node.
Circular doubly linked lists have wide applications in the software industry. One such application is a music app with a playlist: when you finish playing all the songs, the playlist automatically goes back to the first song at the end of the last one. This is done using circular lists.
Advantages of a Circular Doubly Linked List:
- The circular doubly linked list can be traversed from head to tail or from tail to head.
- Going from head to tail or from tail to head is efficient and takes only constant time, O(1).
- It can be used for implementing advanced data structures, including the Fibonacci heap.
Disadvantages:
- As each node needs to make space for the previous pointer, extra memory is required.
- We need to deal with lots of pointers while performing operations on a circular doubly linked list. If pointers are not handled properly, then the implementation may break.
The below Java program shows the implementation of the Circular doubly linked list.
import java.util.*;

class Main{
    static Node head;

    // Doubly linked list node definition
    static class Node{
        int data;
        Node next;
        Node prev;
    };

    // Function to insert node in the list
    static void addNode(int value) {
        // List is empty so create a single node first
        if (head == null) {
            Node new_node = new Node();
            new_node.data = value;
            new_node.next = new_node.prev = new_node;
            head = new_node;
            return;
        }

        // find last node in the list if list is not empty
        Node last = (head).prev;    //previous of head is the last node

        // create a new node
        Node new_node = new Node();
        new_node.data = value;

        // next of new_node will point to head since list is circular
        new_node.next = head;

        // similarly previous of head will be new_node
        (head).prev = new_node;

        // change new_node=>prev to last
        new_node.prev = last;

        // Make new node next of old last
        last.next = new_node;
    }

    static void printNodes() {
        Node temp = head;
        //traverse in forward direction starting from head to print the list
        while (temp.next != head) {
            System.out.printf("%d ", temp.data);
            temp = temp.next;
        }
        System.out.printf("%d ", temp.data);

        //traverse in backward direction starting from last node
        System.out.printf("\nCircular doubly linked list traversed backward: \n");
        Node last = head.prev;
        temp = last;
        while (temp.prev != last) {
            System.out.printf("%d ", temp.data);
            temp = temp.prev;
        }
        System.out.printf("%d ", temp.data);
    }

    public static void main(String[] args) {
        //the empty list
        Node l_list = null;

        // add nodes to the list
        addNode(40);
        addNode(50);
        addNode(60);
        addNode(70);
        addNode(80);

        //print the list
        System.out.printf("Circular doubly linked list: ");
        printNodes();
    }
}
Output:
Circular doubly linked list: 40 50 60 70 80
Circular doubly linked list traversed backward:
80 70 60 50 40
In the above program, we have added the node at the end of the list. As the list is circular, when the new node is added, the next pointer of the new node will point to the first node and the previous pointer of the first node will point to the new node.
Similarly, the previous pointer of the new node will point to the current last node which will now become the second last node. We leave the implementation of adding a new node at the beginning of the list and in between the nodes to the readers.
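As a hint for that exercise, a minimal sketch of inserting a node at the beginning of the circular doubly linked list (using the same static Node class and head variable as the program above) could look like this:

// Sketch: insert a new node at the beginning of the circular doubly linked list
static void addNodeAtStart(int value) {
    Node new_node = new Node();
    new_node.data = value;
    // empty list: the single node points to itself
    if (head == null) {
        new_node.next = new_node.prev = new_node;
        head = new_node;
        return;
    }
    Node last = head.prev;   // previous of head is the last node
    new_node.next = head;    // new node goes in front of the old head
    new_node.prev = last;    // new node points back to the last node
    head.prev = new_node;    // old head now points back to the new node
    last.next = new_node;    // last node wraps around to the new node
    head = new_node;         // new node becomes the new head
}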
Frequently Asked Questions
Q #1) Can the Doubly Linked List be circular?
Answer: Yes. It is a more complex data structure. In a circular doubly linked list, the previous pointer of the first node contains the address of the last node and the next pointer of the last node contains the address of the first node.
Q #2) How do you create a Doubly Circular Linked List?
Answer: You can create a class for a doubly circular linked list. Inside this class, there will be a static class to represent the node. Each node will contain two pointers – previous and next and a data item. Then you can have operations to add nodes to the list and to traverse the list.
Q #3) Is Doubly Linked List linear or circular?
Answer: A doubly linked list is a linear structure, whereas a circular doubly linked list has its tail pointing to the head and its head pointing to the tail. Hence it is a circular list.
Q #4) What is the difference between the Doubly linked list and the Circular linked list?
Answer: A doubly linked list has nodes that keep information about their previous as well as next nodes using the previous and next pointers respectively. Also, the previous pointer of the first node and the next pointer of the last node are set to null in the doubly linked list.
In the circular linked list, there is no start or end nodes and the nodes form a cycle. Also, none of the pointers are set to null in the circular linked list.
Q #5) What are the Advantages of a Doubly Linked List?
Answer: The Advantages of the Doubly Linked List are:
- It can be traversed in forward as well as backward direction.
- Insertion operation is easier as we need not traverse the whole list to find the previous element.
- Deletion is efficient, as we already know the previous and next nodes and manipulating the pointers is easier.
Conclusion
In this tutorial, we discussed the Doubly linked list in Java in detail. A doubly linked list is a complex structure wherein each node contains pointers to its previous as well as the next nodes. The management of these links is sometimes difficult and can lead to the breakdown of code if not handled properly.
Overall the operations of a doubly-linked list are more efficient as we can save the time for traversing the list as we have got both the previous and next pointers.
The circular doubly linked list is more complex: it forms a circular pattern with the previous pointer of the first node pointing to the last node and the next pointer of the last node pointing to the first node. In this case too, the operations are efficient.
With this, we are done with the linked list in Java. Stay tuned for many more tutorials on searching and sorting techniques in Java.
=> Visit Here For The Exclusive Java Training Tutorial Series.
Hey, it's deprecated, check here:
The alternative is to use Episerver Projects, which will have workflows (different, but more modern) pretty soon.
BR,
Marija
Thanks for the reply... Booo! That's what I thought. The new workflows are only coming out in beta in 10.1, so it will be a little while until they're finally live. In the end I figured out that the person who originally made the website attempted a failed EPi upgrade and checked it into master. Episerver was set to use the 9+ version of search, which was causing any NuGet package to try and upgrade Episerver. After I rolled back the search plug-in to the correct version, I could install Redis without being prompted to upgrade :) Note: don't trust people!
Hi, I've been doing a proof of concept for a client's EPi 8.1 website, using the MS Redis provider to deal with session management. It seems the NuGet package has a shared dependency that epi.core also needs, which forces EPi to upgrade. It looks like 9 is the lowest version the package will accept. The site has some custom workflow code, and I'm not 100% sure what it does.
When I upgrade I'm getting errors around:
using EPiServer.WorkflowFoundation;
using EPiServer.WorkflowFoundation.UI;
Are now not recognised and IWorkflowTaskControl, IWorkflowStartParameterHandler are unknown interfaces.
Can anyone confirm that in 9 the workflow assemblies are being deprecated? As the new workflow is coming out in EPi 10.1 as a beta, I'm not sure it's time well spent doing the upgrade and refactoring the code now when I'll probably only need to update it again in the next few weeks/months. I do have an alternative approach of using SQL session state management instead, as that wouldn't involve me upgrading. My questions:
Where has workflow moved in 9?
Will it require a lot of work to upgrade now?
Ta,
J
Like I said in one post below, a few days ago I wrote my own source code security scanner.
Yesterday it found a new 'possibly insecure' parameter in the new phpMyAdmin (3.4.5).
Here is a quick note about it:
1. goto
2. Go to 'New server'
3. The vulnerable parameter is (or could be) $value, because:
when you click 'Save', PMA goes to: (here is the $value param).
The content of this param should be:
(... this is 'name of the server')
btw: doing research using Tamper Data, I checked that the vulnerable parameter is $Servers-0-verbose in PMA/setup/index.php.
Anyway, I didn't see any of this in PMA/setup/index.php (or in any of the *.php files located in the PMA directory).
So next I decided to search via grep:
so I think the vuln is right here ;)
Testing is in progress, so this post will be updated soon...
2.10.2011 * So update here *
It looks like PMA does not validate some "Server-*" parameters.
The vulnerability exists in:
Server-0-verbose <- here will be XSS (described earlier in this post)
Server-0-host <- this will be vulnerable too
The file ./setup/validate.php contains improper validation of $value.
The XSS code placed here is forwarded to ./setup/index.php as parameters.
And what's next:
AFTER you click 'Forward' for validate.php, try to put the same value (script, etc...) into the parameters of ./setup/index.php :)
17.10.2011
Update: the patch for this vuln has been released. Check it out, and try the new version of this amazing web application :)
Details here.
*** Important thing ***
I really recommend cooperating with the PMA Team. These people know what they're doing, and they do it fast! Good job! :)
Thursday, 29 September 2011
Wednesday, 28 September 2011
phpMiniAdmin 1.7 vulnerable to SQL Injection and more...
Today I finished 'version 4' of one of my Python projects: a PHP source code scanner. ;)
Tuesday, 27 September 2011
Enticore CMS (0.8) vs 24.09.2011
Once upon a day I decided to sit back and enjoy a few moments of 'free time' doing vulnerability research on some popular CMS applications.
First I found Enticore CMS 0.8 (available at ).
I decided to download it and install it in my lab-box :)
To make this news more precise:
the "lab" configuration was: Ubuntu 10.10 with Apache 2.2.16 and PHP 5.3.3. One more important thing is that I set display_errors = On
in the php.ini file.
So the first thing I did was install it in my test lab.
Nothing special in this process, but here I found the first Cross Site Scripting vulnerability.
1) Cross Site Scripting in ‘install.php’:
Here we go: when we set "include/external/" to 777, we can click to the "next step" of the installation process. Our link will be:
First I tried to make some ../ requests for the $page parameter, but there was 'only' an error:
‘Notice: Undefined index: ../ in ./enticore-0.8/install.php on line 66 Fatal error: Call to
a member function getPage() on a non-object in ./enticore-0.8/install.php on line 67 ‘
Ok, so lets see, what is behind line 66-67:
—cut—
63 function getPage($page) {
64 global $installationSteps;
65
66 $installationStep = $installationSteps[$page]['item'];
67 $retval = $installationStep->getPage();
—cut—
We definitely must choose one of the 'items'. So I tried to 'choose' an item that does not exist.
For example: testujto:
—cut—
Notice: Undefined index: testujto in (…) line 66.
—cut—
Ok. Once again, the error output informed me that the 'item' I decided to use is placed 'exactly as I wrote it' in the $page parameter.
So the next test will be simple
And here we have our 'Cross Site Scripting vulnerability'. Checked in the source (view source), it looks like this:
—cut—
—cut—
So the vulnerability exists, and there is a possibility to send a malformed URL with a JS/HTML payload to the victim.
PoC could look like this:
2) XSS in ./include/plugin/EnticorePluginUsers.php
—cut—
241 return Helper::getMessage(‘warn.png’, _(‘Login incorrect, please try again.’)).$this->getLoginForm($_POST['username']);
—cut—
The vulnerability found here is the same. Enticore does not properly validate user input.
So we have another Cross Site Scripting:
—cut—
$username=
you<— try to put here some POST values, for example y0
—cut—
Remember, this vulnerability was checked with "display_errors = On"! (Some more info about errors in webapps, maybe soon... ;))
3) Vulnerabilities for ‘Logged only’.
Now it's time to search for some vulnerabilities for logged-in users. The first one has been found for the $site parameter.
(The default password for admin is 'enticore'.)
PoC :
Vulnerable code:
—cut—
/enticore-0.8$ cat -n ./include/plugin/EnticorePluginUsers.php | grep site
32 * @see include/plugin/EnticorePlugin#getMenuEntries($site)
34 public function getMenuEntries($site) {
38 array_push($retval, array(‘uri’ => $this->generatePluginPart(‘admin’),
‘site’ => ‘admin’, ‘name’ => _(‘Administrate users’), ‘css’ => $this->getSelectedCss(‘admin’)));
40 array_push($retval, array(‘uri’ => $this->generatePluginPart(‘logout’),
‘site’ => ‘logout’, ‘name’ => _(‘Logout’), ‘css’ => $this->getSelectedCss(null, $site != ‘admin’)));
43 array_push($retval, array(‘uri’ => $this->generatePluginPart(‘login’),
‘site’ => ‘login’, ‘name’ => _(‘Login’), ‘css’ => $this->getSelectedCss(null, $site != ‘admin’)));
49 public function getContent($site) {
50 if ($site == ‘login’) {
52 } else if ($site == ‘logout’) {
54 } else if ($site == ‘admin’ || $site == ‘show’) {
56 } else if ($site == ‘add’) {
58 } else if ($site == ‘edit’) {
60 } else if ($site == ‘delete’) {
63 return get_class($this).’: Unkown site ‘.$site;
—cut—
I understand this vuln as follows: the $site value echoed via get_class is not properly validated, so the PoC can be placed in line 63 of EnticorePluginUsers.php.
The payload for the test could be: '>
testtest2 or whatever you want in HTML/JavaScript.
You should already know that if there were something like:
—cut—
(…) switch { if 1 then… if 2, then…, else if anyDifferentSiteValue, and here else Default } (…)
—cut—
the vulnerability should not exist. So think about how you write the if/else part of your code ;)
4) DB password stored in clear-text.
—cut—
mysql> use enticore;
Database changed
mysql> select * from ec_users limit 1;
+—–+———-+———-+————+———————-+————————+——–+
| idx | username | password | encryption | email | name | status |
+—–+———-+———-+————+———————-+————————+——–+
| 1 | admin | enticore | plain | admin@yourdomain.com | Enticore Administrator | 1 |
+—–+———-+———-+————+———————-+————————+——–+
1 row in set (0.00 sec)
mysql>
—cut—
Very, very nice! :)
5) Shell upload is possible.
Enticore CMS has a nice page for uploading files via the web browser when you're logged in.
The first thing "bad hackers" do is try to upload some kind of PHP shell, to get better
remote access. When we upload a shell in PHP (shell.php), Enticore puts it in $webroot/content/shell.php.
The file is accessible via the web browser, so at this point it's game over.
Good practice here is to think about what kinds of files could and should be allowed for upload.
6) Directory traversal attack (for logged only):
I looked at the source of this CMS and there is an opendir() function. Like I thought, we can do a directory traversal attack:
PoC:
Because of the vulnerability in the source code:
— cut include/helper/Helper.php —
$ cat -n include/helper/Helper.php | grep opendir
109 if ($handle = opendir($dir)) {
139 if ($handle = opendir($dir)) {
301 $handle = opendir($path.’/');
— cut include/helper/Helper.php —
7) stored XSS in
Try $title too. It is vulnerable to a stored XSS attack… ;)
Testing in progress...
WordPress 3.2.1 user enumeration vulnerability
Like we all know, banks are not the only ones with user enumeration vulnerabilities in their web applications :)
Almost all the time, "user enumeration" is possible because of how 'wrong credentials' are reported in the login process.
So, let's see how it looks in the new WordPress (3.2.1).
(In pseudo code):
if user_ok --> echo 'user ok'
else if user_bad --> echo 'username invalid'
...
So that's the simple way to enumerate users (bruteforce is welcome too) ;)
Here I wrote a simple tool to check if there is an admin account:
Like you see, this simple tool can enumerate only 'admin'. So the idea is simple: check if there is a name (wordlist? ;)), and if there is – analyse/log the answer.
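For illustration only, a minimal Python 2 script in the same spirit might look like the sketch below. This is a hypothetical reimplementation, not the original tool; the target URL, form field names, and error strings are assumptions based on a default WordPress 3.x wp-login.php.

# hypothetical sketch - check whether the 'admin' account exists on a WordPress 3.x site
import urllib, urllib2

target = "http://victim.example.com/wp-login.php"   # assumed target URL
user   = "admin"

data = urllib.urlencode({"log": user, "pwd": "wrongpassword", "wp-submit": "Log In"})
try:
    page = urllib2.urlopen(target, data).read()
except urllib2.HTTPError, e:
    page = e.read()

# WordPress leaks whether the username exists in its login error message
if "Invalid username" in page:
    print "[-] user '%s' does not exist" % user
elif "incorrect" in page:
    print "[+] user '%s' exists (wrong password)" % user
else:
    print "[?] could not tell - check the response manually"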
Regards!
*Update 12.03.2012*
If you want more information about vulnerabilities in the latest WordPress,
try here ;)
searchr.py - inspired by catonman
I had been searching for 'how to automate searching the net' and then I found a really interesting thing. On this site, the author wrote a nice "hack" implementing a kind of 'google-search code' in Python. I thought about the possibilities it gave me. And that's how I wrote...
# :
# : s3archer.py @ 13.o6.2o11
# :
# : 20.o6 : added UA (user agent)
# : 15.o6 : added sleeping between Google requests
# : 14.o5 : added logging + edit
# : 13.o5 : added searching v1
# :
# :
from xgoogle.search import GoogleSearch, SearchError  # searching
import sys                                            # args
import socket                                         # sockets
from time import sleep                                # sleeping between Google requests
from urllib import FancyURLopener                     # user agent

if len(sys.argv) < 2:
    print '\n-=[ searcher ]=-\n'
    print 'usage: python searcher.py what2find...\n'
    sys.exit(1)

try:
    gs = GoogleSearch(sys.argv[1])
    gs.results_per_page = 100
    results = []
    while True:
        tmp = gs.get_results()
        sleep(9)                 # sleep between Google requests
        if not tmp:              # no more results
            break
        results.extend(tmp)
    for res in results:
        print res.url.encode("utf-8")   # print only the URL
# exceptions/errors
except SearchError, e:
    print "Search failed: %s" % e
It's of course just a simple example and was made for fun only. Anyway, I wish you luck in learning and programming Python.
*todo:
- something's wrong with searching (Google drops some requests; sleep()?)
- make searching more 'unix style' – implement uniq
- add google hacks (filetype, etc)
Second post
Let's see :)
ncl_tdstri man page
TDSTRI — Add triangles defining a simple surface to the triangles in a triangle list.
Synopsis
CALL TDSTRI (U, NU, V, NV, W, LW1D, RTRI, MTRI, NTRI, IRST)
C-Binding Synopsis
#include <ncarg/ncargC.h>
void c_tdstri(float *u, int nu, float *v, int nv, float *w, int lw1d, float *rtri, int mtri, int *ntri, int irst)
Description
The arguments of TDSTRI are as follows.
- W
(an input array, dimensioned NU x NV and having FORTRAN first dimension LW1D) - values of a dependent variable "w(u,v)". The points (((U(I),V(J),W(I,J)),I=1,NU),J=1,NV) define a surface that one wishes to draw.
- LW1D
(an input expression of type INTEGER) - the FORTRAN first dimension of the array W. It must be the case that LW1D is greater than or equal to NU.
See Also
tdstri or c_tdstri, tdotri, tdpack, tdpack_params, tdpara, tdplch, tdprpa, tdprpi, tdprpt, tdseti, tdsetr, tdsort, tdstrs
University Corporation for Atmospheric Research
The use of this Software is governed by a License Agreement.
Problem when stacking DropAreas
Hi,
I have an Item with a DropArea, which contains a child item that also has a DropArea. When I drag something onto the child's DropArea, it is the parent's DropArea that receives the drop events. Is there a way to work around this? I am using Qt 5.5 alpha.
Regards
Hi @dzimiwine
Can you post an example which shows the problem ?
@p3c0 Thanks for your reply.
This is a small code that shows the problem:
main.qml:
import QtQuick 2.4
import QtQuick.Controls 1.3
import QtQuick.Window 2.2
import QtQuick.Dialogs 1.2

ApplicationWindow {
    title: qsTr("Hello World")
    width: 640
    height: 480
    visible: true

    Rectangle {
        x: 40
        y: 40
        width: 200
        height: 200
        color: parentDropArea.containsDrag ? "purple" : "red"

        DropArea {
            id: parentDropArea
            anchors.fill: parent
        }

        Rectangle {
            x: 10
            y: 20
            color: childDropArea.containsDrag ? "purple" : "blue"
            width: 100
            height: 100

            DropArea {
                id: childDropArea
                anchors.fill: parent
            }
        }
    }

    Rectangle {
        id: draggableItem
        color: "green"
        x: 10
        y: 10
        width: 20
        height: 20
        Drag.active: mouseArea.drag.active

        MouseArea {
            id: mouseArea
            anchors.fill: parent
            drag.target: parent
        }
    }
}
@dzimiwine To make the events generate properly and propagate, try putting the Rectangle inside the DropArea as follows:
DropArea {
    x: 40
    y: 40
    width: 200
    height: 200

    Rectangle {
        anchors.fill: parent
        color: parent.containsDrag ? "purple" : "red"
    }

    DropArea {
        x: 10
        y: 20
        width: 100
        height: 100

        Rectangle {
            anchors.fill: parent
            color: parent.containsDrag ? "purple" : "blue"
        }
    }
}
In this way you do not need to reject any events that you would need in your earlier approach.
Thanks a lot. It works!! Is the problem because the child item is not a child of the parent's drop area?
@dzimiwine Yes I think. Or else you have to reject the events to allow them to propagate to other areas.
@dzimiwine No. Mouse events won't either. For a test, try replacing DropArea with MouseArea and containsDrag with containsMouse in your original example.
How to Use Graphics in a Java Applet
Applets involving graphics and animations usually look more exciting than applets that don't. Here is a basic overview of how to implement graphics in an applet.
Steps
- In your paint method, take a parameter of the Graphics class. Using the Graphics object, you should be able to draw shapes and images inside the paint method. The method signature should look like this:
public void paint(Graphics g)
Drawing Lines
- Use:
g.drawLine(10, 20, 50, 60);
- The line will be drawn from (10,20) to (50,60).
Drawing Rectangles
- Invoke the drawRect method of the Graphics class. The arguments to the method should be, in order, the top left corner's x coordinate, the top left corner's y coordinate, the width, and the height. Here is a sample code snippet:
g.drawRect(10, 15, 50, 30);
- So, the rectangle's top left corner's coordinates will be (10,15), and its width will be 50 pixels, and its height will be 30 pixels.
Drawing Images
- To draw images, import the class Image. Type this at the top of your code (not in your class):
import java.awt.Image;
- Now, create an Image object. Here is the code for making an Image object. Instead of typing "getCodeBase()", you can also use a URL. If your picture file is inside a folder, include that folder in the name:
Image myImage = getImage(getCodeBase(), "FolderName\\Pic5.jpg");
- To draw the image, use the drawImage method of Graphics. The arguments to the method should be, in order, the image object name, the x coordinate, the y coordinate, the width, the height, and "this". Here is the code snippet:
g.drawImage(myImage, 300, 200, 60, 120, this);
- The image's top left corner will be at (300,200). Its width will be 60 pixels, and its height will be 120 pixels.
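Putting the steps above together, a minimal applet might look like the following (the image file name is just an example):

import java.applet.Applet;
import java.awt.Graphics;
import java.awt.Image;

public class GraphicsDemo extends Applet {
    private Image myImage;

    public void init() {
        // Load the image once, when the applet starts.
        myImage = getImage(getCodeBase(), "Pic5.jpg");
    }

    public void paint(Graphics g) {
        g.drawLine(10, 20, 50, 60);                    // line from (10,20) to (50,60)
        g.drawRect(10, 15, 50, 30);                    // 50x30 rectangle at (10,15)
        g.drawImage(myImage, 300, 200, 60, 120, this); // image scaled to 60x120 at (300,200)
    }
}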
Tips
- There are many more methods in the Graphics class in the Java API, such as drawing ovals and drawing polygons. Take a look for yourself: The Java API - Graphics.
- To draw individual pixels, one common method is:
g.fillRect(x, y, 1, 1);
Sources and Citations
- The Java API - Contains documentation on all pre-defined classes in Java.
- The 2D Graphics Tutorial - The Sun Java website has a wide range of very helpful tutorials to learn from.
Published: 21 Sep 2008
By: Dino Esposito
Dino Esposito talks about extension methods in .NET
Which came first? Was it the egg or was it rather the chicken? You know, this is the canonical example of a sort of futile dilemma around the first case of a circular reference. Want another nice example of such a causality dilemma? Which came first, C# extension methods or LINQ?
Introduced with the .NET Framework 3.5, extension methods let you add new methods to an existing class. How is this different from using classic inheritance? There are two fundamental differences—one conceptual and one practical. Conceptually, extension methods differ from inheritance because you don’t need to create a new type and implement new methods. The existing class is just “extended” with a new set of methods. From a more practical perspective, extension methods allow you to add methods also to sealed and not further derivable classes.
It works more or less as with hair extensions. You take an existing type and just attach new methods to it, without defining a new class or recompiling.
Syntax-wise, an extension method is a static method defined in a static class. However, from the point of view of client code (be it C# or Visual Basic) an extension method is invoked as if it is an instance method actually defined in the type.
To define an extension method, you create a class with only static methods. The class can be placed in any namespace and be named at your leisure. Here’s the typical skeleton of an extension methods container.
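For illustration, such a skeleton might look like this (the namespace and class names are arbitrary):

// Illustrative skeleton of an extension-methods container:
// a static class, in any namespace, holding only static methods.
namespace MyApp.Extensions
{
    public static class StringExtensions
    {
        // Extension methods will be added here as public static methods.
    }
}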
How is the relationship between the list of methods and the original type established? Nothing in the previous code snippet suggests that I'm going to add extension methods to, say, the System.String class. Here's how to add an extension method to the System.String class that attempts to return an integer if a string-to-integer conversion can be successfully performed.
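A sketch of such a method (not necessarily the article's exact listing) could be:

public static class StringExtensions
{
    // Extension method on System.String: note the 'this' keyword on the first parameter.
    public static int ToInt(this string value)
    {
        int result;
        if (Int32.TryParse(value, out result))
            return result;
        return Int32.MinValue;   // conversion failed: return the minimum integer
    }
}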
As you can see, the syntax of an extension method is a slight variation of the syntax of a regular static method. The signature of the method features an extra keyword: this. Just the type name trailing the this keyword indicates the type being extended with the method. The parameter name, instead, specifies the formal name through which the developer can reference the current instance of the “base” type it extends.
In the preceding code, the method ToInt extends the type System.String. The new method returns any integer that can be converted from the current content of the string. The TryParse method is invoked on the Int32 type to ascertain this possibility. If the current string cannot be converted to an integer, the minimum integer is returned. Other options entail throwing an invalid cast exception or a custom type of exception. Figure 1 shows how IntelliSense in Visual Studio 2008 processes extension methods.
There’s no limitation to the number of extensions you can make to a type. Likewise, there are no constraints on the types you can extend with these extra methods—system types as well as custom types.
Both methods considered in the previous example do not need any extra argument to work. What if the method requires one or more parameters? Here’s another example:
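A possible implementation, again only a sketch, is:

public static class StringExtensions
{
    // Returns the rightmost substring of the specified length.
    public static string RightSubstring(this string value, int length)
    {
        return value.Substring(value.Length - length, length);
    }
}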
The RightSubstring method works on System.String objects and returns the rightmost substring of the specified length. As you can see, the method is merely a shortcut to calling the regular Substring method with a given set of arguments.
The first argument you specify in the signature of the static method operating as an extension method indicates the instance being extended. Any other parameters that follow on the signature indicate the signature of the extender method. Based on the preceding definition, the type System.String is extended with a RightSubstring method with the following prototype:
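From the caller's point of view, that prototype is presumably:

public string RightSubstring(int length);

// usage on any string instance
string tail = "Hello, world".RightSubstring(5);   // "world"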
As a developer, all you have to do is bring in the namespace (and the assembly) where extension methods are defined. For a caller, extension and regular instance methods are the same. As you’ll see in a moment, this is not entirely true for the CLR.
What’s the purpose of extension methods? When are they useful and when should you consider using them in your own applications? Extension methods are a nice feature to have in C# but probably not one that developers requested in a loud voice. If you need to extend a type with additional capabilities, the best thing you can do is still deriving a new type.
By deriving a new type from an existing one, you inherit all the public members and gain access to all protected members of the base type. You may take advantage of features in the base type specifically designed for inheritance, such as virtual methods. If you have reusability in mind, then inheritance is by all means the way to go. Inheritance is a more powerful (and to some extent delicate to manage) mechanism than extension methods.
Extension methods are a shortcut for all those situations when you need an extra functionality that is not available on the base class, and you have no source code available or the original is sealed. In these cases, extension methods come to the rescue. You sneakily add desired methods to a class without modifying the original source code or assembly.
Extension methods differ from classic inheritance for a number of reasons. First off, extension methods are invoked like instance methods but are not instance methods. This means, for example, that they cannot invoke any non-static methods, except those available on parameters. In other words, any piece of code that an extension method shares with the outside world has to be static. In addition, an extension method has no access to protected members of the type it extends and cannot override or shadow an existing method. By design, in fact, an extension method is never invoked if there’s a member in the extended type with the same name and signature.
Personally, I use extension methods quite sparingly and mostly when I need shortcuts and utilities such as a WordCount or RightSubstring method on a type such as the String type, sealed at the .NET Framework level. Put this way, reasonably, extension methods are nice to have but not patently useful. What’s the real reason of their introduction? Enter LINQ.
Extension methods exist because of the role they play in LINQ. Extension methods are used to add new capabilities to the IEnumerable and IEnumerable<T> types. Why did Microsoft opt for extensions instead of just adding new query methods to enumerable types? I can’t answer for Microsoft’s architects, but I like to think that the principle of Single Responsibility had some influence on the final decision. The IEnumerable type represents a collection of data; querying a collection is another matter. An API was needed to query collections in a unified manner. At the same time, this API had to be pluggable by developers using collections. Let’s consider the following fragment.
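The fragment in question is, in essence, an enumerable collection declared without any LINQ namespace in scope; something along these lines (an illustrative guess, not the article's exact code):

// No 'using System.Linq;' yet, so coll exposes only its own instance methods.
int[] numbers = { 1, 2, 3, 4, 5 };
IEnumerable<int> coll = numbers;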
If you use IntelliSense on the variable coll, you see what’s in Figure 2.
As you can see, IntelliSense shows only the instance methods defined on the type. Now add an extra line of code to link LINQ to the project.
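That extra line is presumably just the namespace import (backed by a reference to the System.Core assembly):

using System.Linq;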
Using IntelliSense on the same variable produces radically different results, as you can see in Figure 3.
The same type now shows a number of extension methods with the connection established at the namespace level. The underlying type is the same—IEnumerable—and it didn’t undergo any change or modification. A bunch of extra features—logically related—can be added by the developer simply bringing in a given namespace (and related assemblies).
Extension methods are useful in scenarios where you need to opt-in a given feature but want it to stay away by default. And also you don’t want to change anything in the types being used—regardless of the fact they are built-in or custom types.
A trick at the level of the CLR was the only way to go, and that’s why extension methods were introduced. Extension methods find in LINQ their rationale, but they’re also a feature that can be helpful in other scenarios. That’s why you find them fully supported by C# and Visual Basic in the .NET Framework 3.5.
Compilers resolve calls to extension methods by binding to a static method. However, in case of ambiguity (for instance, another instance method is found with the same name and signature) extension methods are given the lowest priority.
Extension methods allow you to enrich the programming interface of a given class. The most interesting aspect of extension methods is that you don’t use a new type, but stick to the old type just “enriched” with new methods. Extension methods are not like inheritance and do not compete with inheritance. They are an “internal” framework feature promoted to the rank of a public feature. Extension methods have been introduced to enrich the IEnumerable interface for LINQ purposes; that is the scenario that suggests the creation of new extension methods. Most of the time, you’ll be using extension methods rather than creating new ones.
Anyone who is a reasonably competent Windows/Python programmer can help me out here. (At least) two functions in the distutils.util module need help: 'mkpath()' and 'move_file()'. Actually, 'mkpath()' might be working fine now, but I'd like some of you Windows folks to take a look at it, beat on it, make sure it'll really work. Here's the code:

------------------------------------------------------------------------
# cache for by mkpath() -- in addition to cheapening redundant calls,
# eliminates redundant "creating /foo/bar/baz" messages in dry-run mode
PATH_CREATED = {}

# I don't use os.makedirs because a) it's new to Python 1.5.2, and
# b) it blows up if the directory already exists (I want to silently
# succeed in that case).
def mkpath (name, mode=0777, verbose=0, dry_run=0):
    """Create a directory and any missing ancestor directories.  If the
    directory already exists, return silently.  Raise DistutilsFileError
    if unable to create some directory along the way (eg. some sub-path
    exists, but is a file rather than a directory).  If 'verbose' is
    true, print a one-line summary of each mkdir to stdout."""

    global PATH_CREATED

    # XXX what's the better way to handle verbosity? print as we create
    # each directory in the path (the current behaviour), or only announce
    # the creation of the whole path? (quite easy to do the latter since
    # we're not using a recursive algorithm)

    name = os.path.normpath (name)

    if os.path.isdir (name):
        return
    if PATH_CREATED.get (name):
        return

    (head, tail) = os.path.split (name)
    tails = [tail]              # stack of lone dirs to create

    while head and tail and not os.path.isdir (head):
        #print "splitting '%s': " % head,
        (head, tail) = os.path.split (head)
        #print "to ('%s','%s')" % (head, tail)
        tails.insert (0, tail)  # push next higher dir onto stack
        #print "stack of tails:", tails

    # now 'head' contains the deepest directory that already exists
    # (that is, the child of 'head' in 'name' is the highest directory
    # that does *not* exist)
    for d in tails:
        #print "head = %s, d = %s: " % (head, d),
        head = os.path.join (head, d)
        if PATH_CREATED.get (head):
            continue
        if verbose:
            print "creating", head
        if not dry_run:
            try:
                os.mkdir (head)
            except os.error, (errno, errstr):
                raise DistutilsFileError, "%s: %s" % (head, errstr)
        PATH_CREATED[head] = 1

# mkpath ()
------------------------------------------------------------------------

The other one, 'move_file()', almost certainly has portability problems. Please read it, bash on it, and help me fix it!
Here's the code for it:

------------------------------------------------------------------------
def move_file (src, dst, verbose=0, dry_run=0):
    """???"""
    from os.path import exists, isfile, isdir, basename, dirname

    if verbose:
        print "moving %s -> %s" % (src, dst)

    if dry_run:
        return dst

    if not isfile (src):
        raise DistutilsFileError, \
              "can't move '%s': not a regular file" % src

    if isdir (dst):
        dst = os.path.join (dst, basename (src))
    elif exists (dst):
        raise DistutilsFileError, \
              "can't move '%s': destination '%s' already exists" % \
              (src, dst)

    if not isdir (dirname (dst)):
        raise DistutilsFileError, \
              "can't move '%s': destination '%s' not a valid path" % \
              (src, dst)

    copy_it = 0
    try:
        os.rename (src, dst)
    except os.error, (num, msg):
        if num == errno.EXDEV:
            copy_it = 1
        else:
            raise DistutilsFileError, \
                  "couldn't move '%s' to '%s': %s" % (src, dst, msg)

    if copy_it:
        copy_file (src, dst)
        try:
            os.unlink (src)
        except os.error, (num, msg):
            try:
                os.unlink (dst)
            except os.error:
                pass
            raise DistutilsFileError, \
                  ("couldn't move '%s' to '%s' by copy/delete: " +
                   "delete '%s' failed: %s") % \
                  (src, dst, src, msg)

    return dst

# move_file ()
------------------------------------------------------------------------

Thanks!
Drivers, Relays, and Solid State Relays
Driver circuits
A typical digital logic output pin can only supply tens of mA of current. Even though they might require the same voltage levels, small external devices such as high-power LEDs, motors, speakers, light bulbs, buzzers, solenoids, and relays can require hundreds of mA. Larger devices might even need several amps. A discrete BJT is sometimes used instead of a newer MOSFET transistor, especially on older or low-voltage circuits, as shown below. On mbed, any GPIO pin could be used for the logic control input to the circuit with DigitalOut.
Basic driver circuit using a BJT transistor
The transistor primarily provides current gain. PNP, NPN, or MOS transistors can also be used. The resistor used on the base of the transistor is typically around 1K ohm. On inductive loads (i.e., motors, relays, solenoids), a diode is often connected backwards across the load to suppress the voltage spikes (back EMF) generated when turning devices off. (Recall that on an inductor V=L*di/dt, so a negative voltage spike is produced when turning the device off.) Sometimes the diode is also connected across the transistor instead of the load (this protects the transistor). The 2N3904 shown below is a small discrete BJT transistor that can be used for a driver circuit needing less than 200 mA. In this circuit with BJTs, Vcc can also be a higher voltage supply than the logic power supply. 6 or 12V DC is often needed for motors or relays. In battery operated devices, the load may be directly connected to the battery power and not pass through the voltage regulator. Many devices such as motors have a momentary large inrush current spike when they are first turned on and have a larger stall current, so be a bit conservative on the maximum current ratings.
2N3904 Transistor in a TO-92 package
Depending on the current gain of the transistor used, some adjustments may be needed in the value of the base resistor. A high gain TO-92 transistor such as the ZTX689B can drive up to 2A at up to 12V in this circuit. A Darlington transistor pair contains two BJT transistors connected together for higher current gain. If a Darlington transistor in a TO-92 package such as a ZTX605 is used in the driver circuit, outputs of 1A at up to 100V are possible. At high current levels, the transistor might get a bit hot. Transistors can even get too hot and burn out, if the circuit is not designed correctly. The transistor has to dissipate the power (V*I) across its C-E junction (i.e., the switch point) as heat. This means that the transistor should either be completely “on” (saturation) or “off” (cutoff) to minimize heat dissipation and maximize efficiency. Larger power transistors have metal tabs on the case that can be connected to a heat sink for cooling. The pins on larger power transistors are often too large for standard breadboards and the spacing is not always compatible.
PWM Control
The logic signal (control) turns the transistor on and off to drive high current loads. For motor speed control or dimming lights, a digital PWM output signal is typically used for control instead of an analog output. Digital PWM is more energy efficient than analog as it significantly reduces the heat dissipated by the transistor (i.e., it is always completely "on" or "off"). For motors, the PWM clock rate is typically in the tens of KHz range. For lighting, it needs to be greater than 50Hz or perhaps 100Hz. Early studies for electric power systems showed that many people have headaches caused by lighting systems that use less than 50Hz AC even if they do not directly perceive a flicker. A Class-D amplifier uses PWM to drive audio speakers and the PWM clock rate is typically around ten times the highest frequency in the audio signal. A low pass filter is sometimes added on the output. The mbed PWMout Handbook page shows an example using PWM to dim an LED. Even when using PWM, some large transistors may require a heat sink for proper cooling. If the transistor gets too hot to touch, it needs a larger heatsink.
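As a rough sketch (not the Handbook listing itself), dimming an LED or feeding a transistor driver input with PWM on an LPC1768 mbed might look like this; the pin choice is arbitrary:

#include "mbed.h"

PwmOut drive(p21);   // p21-p26 are PwmOut-capable pins on the LPC1768 mbed

int main() {
    drive.period(1.0/20000.0);   // 20 kHz PWM clock, suitable for motor drivers
    while(1) {
        for (float duty = 0.0f; duty <= 1.0f; duty += 0.1f) {
            drive = duty;        // duty cycle from 0.0 (off) to 1.0 (fully on)
            wait(0.5);
        }
    }
}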
Noise Issues from High Current Loads
Switching high-current inductive loads and motor arcing can put noise spikes or voltage surges on power supply lines. These spikes can become large enough, or the supply voltage can momentarily drop low enough when turning on a large inductive load, to cause a microprocessor to crash or even reset when it shares the same power supply as the load. Additional power supply decoupling capacitors may need to be added near the high-current load, or a separate power supply can be used for it.
If high voltage spikes, surges, or electrostatic discharge (static electricity) are a potential issue, transient-voltage-suppression (TVS) diodes (also known as transorbs) or varistors (also known as MOVs) are sometimes connected across a high current load or the high voltage supply lines for even more protection. MOVs are typically found in AC surge protector outlet strips. A wide variety of these devices are available in different voltage and current ranges. These devices typically clip off voltage spikes above a fixed threshold voltage.It is common to pick a transorb or MOV with a clip-off threshold voltage a bit higher than the normal operating range found in the circuit (>20%?). Activated transorbs or MOVs have to dissipate the energy in the clipped off voltage spike and they are typically rated by the amount of energy they can absorb before overheating and burning out, so the duration of the overvoltage spike needs to be relatively short.
Driver ICs
As an alternative to using discrete transistors, special purpose driver ICs are also available that can drive multiple devices. These ICs contain several internal transistor driver circuits similar to the one just described above. A small number are still available in a DIP package that can plug into a breadboard, such as the ULN2803 8-channel 500 mA 50V driver seen below, but most new ones are surface mount ICs that will require a breakout board for use on a breadboard.
ULN2803 8-Channel Darlington Driver DIP IC
TLC5940 16 channel PWM Driver
The TI TLC5940 is a 16 channel driver IC with 12 bit duty cycle PWM control (0 - 4095), 6 bit current limit control (0 - 63), and a daisy chainable serial interface (SPI). Maximum drive current is 120 mA per output. It is handy for expanding the number of high current drive PWM outputs available. This IC was originally targeted for driving LED arrays. 16 PWM outputs might sound like a lot, but a humanoid robot might need over twenty to control all of the servo motors used in the joints. In addition to the DIP package seen above, it is also available in surface mount. Sparkfun makes the breakout board seen below using the surface mount package. A TLC5940 code library is available for mbed. There is even a special version of an mbed code library just for servos that sets up a 16 servo array. Driver ICs may also require heat sinks or other cooling considerations when used at high current levels.
Sparkfun TLC5940 Breakout board
Devices that require several amps of current will need a more complex driver circuit with larger power transistors on heat sinks, and more than one transistor current amplification stage may be required. It is not advisable or reliable in the long term to connect several small BJT transistors in parallel to increase the current output provided by the driver circuit; a larger power transistor must be used. Driver circuits can be built using small discrete transistors such as the TO-92 size 2N3904 on a standard breadboard. If even higher current drive is needed, the larger power transistors used will not fit directly on a breadboard and the wires are not large enough. Having these devices already assembled on a small PCB will save prototyping time with mbed, so those options will be the primary focus here.
For speakers, an audio amplifier IC is often used to drive the speaker. New class D audio amplifiers actually use PWM.
MOSFETs
At higher voltage and high current levels, newer MOSFET transistors are more efficient than the older BJTs. In BJTs, the base current controls the switch, but in MOSFETs it is the gate voltage. A common N-channel RFP30N06LE MOSFET transistor symbol and pinout is shown below.
N-Channel MOSFET transistor symbol and TO-220 package pinout
Sparkfun MOSFET driver breakout board
The board seen above uses the RFP30N06LE MOSFET transistor rated at 60V and 30A for higher current loads. The trace size on the PCB and the wire size for the screw terminals limits loads to around 4A. The screw terminals are used for high current connections since the wires need to be larger than the standard breadboard jumper wires. The schematic is seen in the image below. This special MOSFET has a very low gate input voltage that works with 3.3V logic signals like those on mbed.
A typical MOSFET runs just a bit more efficiently if the gate input is a bit higher than the supply voltage. Special MOSFET driver ICs such as the LTC1155 use a charge pump circuit to drive the gate voltage higher on higher voltage MOSFET driver circuits using a normal digital logic level control signal (i.e., useful when load voltage (RAW in schematic) is larger then the logic supply voltage). The LTC1155 is used with a MOSFET in many laptop PCs and cellphones to turn power on and off for power management and is available in an 8-pin DIP package or surface mount. Overvoltage and short circuit protection can also be added using the LTC1155. Some large MOSFETs including the one on the Sparkfun board already contain an internal snubber diode for driving inductive loads. If this is not the case, it would be a good idea to add an external diode when driving inductive loads.
MOSFET driver circuit for high current DC loads
Floating Inputs
Note the 10K pull-down resistor on the control input line. This prevents the gate input from floating high and turning on the device when nothing is driving the input. If it did float, it is also possible that the MOSFET might oscillate and overheat. In most cases, the device should be off if something is wrong. This can happen if a wire was not connected or perhaps briefly when the microcontroller is reset and GPIO pins reset to input mode. It also might happen if the power supply for the microcontroller is not on, but another power supply for the device is on. A similar design issue of leaving control inputs floating in a computer control system in a hydroelectric power plant once caused a major power blackout in California when power was lost on the computer.
Wiring
Any digital out pin can be used for control (connects to the control input of the driver circuit). If you plan on using PWM, select one of the mbed PWMOut pins.
Keep in mind that mbed can only supply about 200 mA of current for external devices via USB power, so an external DC power supply may also be needed for large loads. When using external DC power supplies for additional power (to RAW in the schematic above), only connect the supply grounds together (i.e., not mbed Vcc and RAW, even if the voltage is the same).
Here is an example program that turns the transistor switch on and off every 0.2 seconds.
#include "mbed.h" DigitalOut myled(LED1); DigitalOut Ctrl(p8); int main() { while(1) { Ctrl = 1; myled = 1; wait(.2); Ctrl = 0; myled = 0; wait(.2); } }
IGBTs
The insulated gate bipolar transistor (IGBT) is a new type of semiconductor device used as an electronic switch in newer designs that combines high efficiency and fast switching. It is used in medium and high power applications such as appliances, electric cars, trains, variable speed refrigerators, air-conditioners, stereo systems that use switching amplifiers, and even welding machines. They can be connected in parallel to make devices with ratings up to 6000V and hundreds of amps.
It has an isolated gate FET as the control input of a BJT.
IGBT schematic symbol
Infineon Hybrid Car IGBT switch module
The IGBT module seen above is used to switch the drive motor in hybrid cars. It contains six IGBTs (3 Phase AC motor with variable frequency drive (VFD) using PWM) and is rated at 800A/650V. Many of the newer energy efficient home furnaces also have a variable frequency drive blower motor. A wide array of IGBT modules are available ranging from several amps to several thousand amps and in this range they can be more energy efficient than power MOSFETs.
A Popular Toshiba VFD system used for large industrial motors. Note the six large IGBT driver circuits in the racks.
Special Purpose Driver ICs
As an alternative to using several discrete power transistors mounted on a PCB, multiple driver circuits are often packaged in ICs targeted for particular applications to save space and reduce cost. Two of the most common examples are motor and LED driver ICs.
H-Bridge Driver ICs
To control and reverse a DC motor, an H-bridge circuit is used with two control signals and four driver transistors. This allows the current direction through the load to be reversed similar to swapping the wires on a DC motor.
Basic H-bridge driver for DC motor control
The basic H-Bridge circuit with four power transistors that provide drive current for the motor is seen above. In this circuit, you can think of the power transistors functioning as on/off switches. Two digital logic level inputs, forward and reverse, turn diagonal pairs of transistors on and off to reverse the current flow through the DC motor (M). In some very simple H bridge circuits, forward and reverse must not both be turned on at the same time or it will short the power supply. More advanced H-bridge circuits prevent this issue with a bit more decoding on the transistor inputs and add a dynamic brake feature. Dynamic braking can generate a bit more heat in the H bridge IC. MOSFETs are often used in newer H-bridge ICs instead of BJTs.
See for example code using an H-bridge driver to control the direction of a DC motor. It also uses digital PWM for motor speed control. Small H-bridge modules are available on breakout boards.
Pololu or Sparkfun 1.2A MOSFET Dual H-bridge breakout board
The small breakout board seen above uses Toshiba’s TB6612FNG dual H-bridge motor driver IC, which can independently control two bidirectional DC motors or one bipolar stepper motor. A recommended motor voltage of 4.5 – 13.5 V and peak current output of 3 A per channel (1 A continuous) make this a handy motor driver for low-power motors. Many small robots have two DC drive motors controlled by a dual H-bridge driver with PWM speed control. The Sparkfun magician robot cookbook page has code examples for mbed. Higher current H-bridge modules are also available.
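As a rough illustration of driving one channel of such a dual H-bridge from mbed (the pin assignments here are arbitrary, so check your own wiring against the TB6612FNG datasheet):

#include "mbed.h"

// One channel of a TB6612FNG: AIN1/AIN2 select the direction, PWMA sets the
// speed, and STBY must be high to enable the driver.
DigitalOut ain1(p11);
DigitalOut ain2(p12);
DigitalOut stby(p13);
PwmOut     pwma(p21);

void forward(float speed) { ain1 = 1; ain2 = 0; pwma = speed; }
void reverse(float speed) { ain1 = 0; ain2 = 1; pwma = speed; }
void brake()              { ain1 = 1; ain2 = 1; pwma = 0.0; }

int main() {
    pwma.period(1.0/20000.0);   // 20 kHz PWM for quiet motor drive
    stby = 1;                   // take the driver out of standby
    while(1) {
        forward(0.75);  wait(2.0);
        brake();        wait(0.5);
        reverse(0.5);   wait(2.0);
        brake();        wait(0.5);
    }
}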
The speed of DC motors varies with the load, and will also vary a bit from motor to motor even on identical motors. For accurate speed control under varying loads, feedback is typically required. Three cookbook projects, QEI, PID, and mbed Rover provide some additional background on how to use feedback for more accurate speed control of DC motors.
Small Class D Amp Breakout from Adafruit
Many Class D audio amplifier ICs also use an H-bridge driver circuit. Examples can be found in the mbed component pages.
Some types of reversible solenoids also use a H-bridge driver, but a driver circuit for one direction is more common with a spring return.
Haptic Feedback Driver ICs
Vibration motors can provide user feedback by shaking a device. They are basically a small DC motor with an out-of-balance weight attached to the drive shaft. Newer devices do more than a simple buzz; they use advanced vibration patterns and waveforms to convey eyes-free information to a user or operator. It is becoming an important part of modern user interface design. Haptic (i.e., touch) feedback is becoming so widespread in embedded devices that there are even new haptic driver chips such as the TI DRV2605 available with over 100 built-in effects for vibration motors. This chip also includes the H-bridge driver circuit and can function as an audio sub-sonic woofer using an audio stream. An mbed component page with a hello world example is available for the DRV2605. The effects in the chip are licensed by Immersion, a haptics technology company. Immersion also has a haptics Android SDK.
A DRV2605 Haptic Motor Driver
Stepper motor driver ICs
Stepper motors have multiple coil windings that need to be activated in the current sequence to rotate the motor shaft by a fixed angle without the need for position feedback hardware. Stepper motors are used in devices to provide low-cost accurate position control (i.e., moving the print head on an inkjet printer). Stepper motor driver ICs contain an H-bridge driver for each winding and often also include a small state machine or counter to sequence through the correct states to drive the motor. The typical control inputs are step and direction. The newest stepper motor driver ICs can respond to a variety of complex commands that even include the capability to move fractions of a step using PWM or perhaps move multiple steps automatically. The Pololu stepper motor driver breakout board seen below will drive a bipolar stepper motor up to 2A per coil using the Allegro A4988 DMOS Microstepping Driver IC. An example library to control stepper motors is available in the cookbook. A wide variety of stepper motor driver ICs is available.
Pololu stepper motor driver breakout board. An mbed code example is available for an RC ESC using a Brushless DC motor for a racing drone.
A brushless DC motor for drones with an electronic speed control module.
LED Driver ICs
High-power bright LEDs require more current than the typical digital logic output pin can provide and they need a driver circuit.
The small module seen below contains the Allegro A6281 IC with three 150 mA driver circuits for a high-power red, green and blue (RGB) LED. It also includes PWM dimming hardware for each of the three driver circuits, and it can generate 2^30 (over a billion) different colors and brightness levels. Code examples for mbed can be found on the Shiftbrite cookbook page. The modules can be chained together to build large LED arrays. With the rapid growth of LED lighting, quite a few LED driver ICs are available.
Shiftbrite RGB LED driver breakout board
NeoPixel RGB LEDs contain three PWM driver circuits in a WS2811 driver IC in the same package as the RGB LED. They are available in many form factors (i.e., discrete, surface mount, panels, strips, rings, and breakout boards) and can be connected in long chains. Several Mbed code examples are available.
Relays
Relays can also be used to switch high current and/or high voltage AC and DC devices using logic signals for control.
Electromechanical Relay
An electromechanical relay contains an electromagnetic coil (right side of image above) that moves a metal arm to make and break an electrical connection. Electromechanical relays can be used to switch high current and also AC devices. They provide electrical isolation between the control signal and the load and are relatively low cost. No common ground connection between the control signal and load is needed. A standard digital logic GPIO output pin does not supply enough current to drive a relay coil directly. When using logic signals to control a relay, a driver circuit must be used to boost the current needed to energize the relay’s electromagnetic coil. The load is switched on and off using the relay’s metal contacts that move when the coil is energized. Since the metal contacts actually touch, relays will have less of a voltage drop across the switch point than transistor circuits. They are sometimes used to switch regulated power supplies on and off. Relays tend to more resistant to failure caused by high voltage surges than semiconductor devices.
Electromechanical relays do have some limitations for designers to consider:
- The number of lifetime ON/OFF cycles is limited by mechanical wear (typically 10^6 to 10^7 cycles)
- They have slow ON/OFF times – around 20 per minute. Too slow for motor speed control or dimming lights.
- Relay contacts can wear out due to arcing on inductive loads (perhaps as few as 10^5 cycles) even on rated loads.
- Oxidation on relay contacts can be a problem on low voltage analog signals. (around 2 volts is needed to initially punch through the oxidation layer that occurs between any two metal contacts)
- Worn out relays will sometimes stick due to mechanical wear and an audible click is typically heard during switching.
If a relay is being used to switch high current AC loads, the relay's contact life can be greatly extended by turning it off near an AC zero crossing. This requires a zero crossing detection circuit for synchronization to the AC supply or a driver circuit that always turns off near zero such as a triac or SSR.
Many relays and solenoids are rated only for “intermittent duty”. This means that they should only be turned “on” for short periods of time and “off” for the vast majority of the time. If left “on” for long periods of time, the coil wire will overheat and it can melt through the thin insulation on the tiny coil wires and destroy the device. In an application that needs to leave a relay or solenoid turned “on” for long or undetermined periods of time a device is needed that is rated for “continuous duty”.
Breadboard friendly small relay from Sparkfun
Most relays will not fit directly on a breadboard, but there are a few small ones (.5A to 5A) that will fit on a breadboard such as the one seen above. Breadboard jumper wires cannot handle very large current levels in any case. Even these small relays will still require a driver circuit and diode, so a relay breakout board might be just as easy to use.
Sparkfun relay board with driver
Sparkfun makes a low-cost relay board shown above that contains both the relay and the required driver circuit built using a discrete transistor. The relay coil on this board requires around 200 mA at 5 VDC. It is easier to drive relays like this that use a lower coil voltage. The relay board's driver circuit is built using a BJT as seen in the schematic below. The relay can switch up to 220VAC at 20A using a logic signal for control, but the small PCB layout and screw terminals likely limit it in practice to lower voltage and current levels, perhaps no more than half of these ratings. A similar board is available from Sparky's Widgets. It is probably wise to be very conservative about the maximum voltage and current ratings quoted for relays in datasheets.
Sparkfun Relay Board Schematic
The snubber diode backwards across the relay coil absorbs the reverse voltage inductive spike that occurs when turning off the coil (i.e., V=Ldi/dt).
Any digital out pin can be used to control the relay (connects to the input of the relay driver circuit).
Wiring
Safety Note on High Voltages. Make sure that the bottom side of the PCB does not short out on any metallic surfaces. Breadboard contacts and small jumper wires only handle about one amp. The relay boards typically use screw terminals to attach the larger wires needed for the external device. Just driving the coil of a large relay requires most of the additional current that can be supplied to mbed via the USB cable, so an external DC power supply will likely be needed to power the relay coils and the load of the external device. For electrical isolation, when using a relay to control external AC devices or high voltage DC devices, do not connect the grounds on the power supplies from the control side to the load side.
Here is an example program that turns the relay on and off every 2 seconds.
#include "mbed.h" DigitalOut myled(LED1); DigitalOut Ctrl(p8); int main() { while(1) { Ctrl = 1; myled = 1; wait(2); Ctrl = 0; myled = 0; wait(2); } }
A demo using mbed with this code example for the Sparky's Widget relay breakout board is available at.
For safety and especially if you do not have prior experience working with high voltage AC, one of the sealed devices such as the Power Switch Tail II seen below would be a safer alternative to switch small household AC devices. It has an internal switch module and the high voltage connections are all enclosed in the case. Standard AC plugs are already attached and international versions of the Power Switch Tail are also available. The US version is also available from Sparkfun. A code example for mbed is available at
The Power Switch Tail II has an internal driver and relay circuit with the standard AC plugs for the US. In addition to the standard relay version, a Power Switch Tail with a solid state relay (SSR) is also available. SSRs and TRIACs will be explained in a later section.
The small Phidgets dual relay board seen below works in a similar manner to the Sparkfun board, but it has two relays.
Phidgets Dual relay board
Relays need to be selected based on both the input and output current and voltage rating. Since contacts can wear out on the output side be conservative on current ratings.
Reed Relays
A small reed relay module in a DIP package
A reed relay has a set of contacts sealed inside a small glass tube; the contacts are closed by the magnetic field of a coil wound around the tube, and they are usually plated with silver. As the moving parts are small and lightweight, reed relays can switch much faster than relays with armatures. They typically switch lower current values than a large relay. They are mechanically simple, making for a bit more reliability and longer life. The coil current needed is lower (perhaps 10 mA at 5 V) and in some cases a driver circuit may not be needed if the digital logic output has high drive current. Some reed relays already contain a snubber diode. If not, an external back EMF snubber diode would still be a good idea. Many are small enough to come in a DIP package that can plug into a breadboard.
Solid State Relays
In many applications, solid state relays can be used instead of electromechanical relays. Solid State Relays (SSRs) offer several advantages over electromechanical relays:
- Most have optical isolation between input and output load
- No moving parts - built using semiconductors
- Some are fast enough for motor speed control and dimming lights
- Resistant to shock and vibration
- Around 100X more reliable than mechanical relays
- Silent operation
Sparkfun Solid State Relay Board
The schematic is shown below for the Sparkfun Solid State Relay board seen above. It uses a Sharp SSR module and can switch 125VAC at 8A (AC only - not DC). Note that it also requires a driver circuit and the external wire connections are the same as the relay board. A demo for mbed is available at
Sparkfun Solid State Relay Board Schematic
Optical Isolation
Optical or mechanical isolation (relays) between higher voltage supplies and computers is always a good idea. Not having to connect grounds between supplies helps in noisy industrial environments. Long wires can also pick up large transient voltages. One of the first projects to control and instrument an airport ended abruptly when a lightning bolt from a thunderstorm hit near the end of the runway and the ground wires carried the ground voltage transient spike all the way back to the control tower and blew up the computer. Most SSRs have an opto-isolator on the input. As seen below, Sparkfun also makes a small opto-isolator breakout board with small driver transistors on the output to isolate the computer output signals and convert them to higher voltage signals. It has limited current drive, but it could be added to the input side of a driver circuit to provide optical isolation. Opto-isolators are sometimes also used on sensor inputs to a computer. In this case, the sensor drives the input side and the output side connects to the computer.
Sparkfun Opto-isolator breakout board
As seen below, the ILD213T optocoupler IC used on the Sparkfun board contains two optically coupled pairs, each consisting of a Gallium Arsenide infrared LED and a silicon NPN phototransistor. Signal information, including a DC level, can be transmitted by the LED via IR (infrared light) to the phototransistor while maintaining 4000V of electrical isolation between the input and output. Before reaching 4000V, other parts such as wires and connectors would likely short out first. Opto-isolators tend to be a bit slow to switch when compared to the speed of digital logic circuits; this device is in the five microsecond range. A few SSRs use reed relays on the input signal or feed the input through a DC to AC converter with a transformer for electrical isolation.
ILD213T Dual Optocoupler IC
Other SSR modules
Unfortunately, solid state relay modules typically cost a bit more than mechanical relays. Many SSRs include a zero crossing circuit that turns on or off the device only when the AC voltage level is zero(a zero crossing). This also helps to reduce RF noise emissions generated by switching high current loads. The Phidgets SSR board seen below can switch both AC and DC voltages. It uses a small NEC/CEL SSR IC.
Switching AC
Some solid state relays can also drive AC loads.
This is possible using a TRIAC as in the Sparkfun Solid State relay board schematic above with the Sharp SSR module. The TRIAC symbol looks like two diodes connected in opposite directions as seen below. The gate control input (G) only requires a few milliamps and the AC load connects to A1 and A2. In the Sharp SSR module, the IR light from the LED drives the TRIAC gate input (provides optical isolation). A TRIAC's gate input can be carefully time controlled in phase with the AC signal to dim lights, control motor speed, or adjust the power output to the load. Household light dimmer switches often use TRIACs.
A TRIAC can switch AC loads
Two MOSFETs with their source pins connected together as in the Phidgets SSR module with the NEC/CEL SSR IC will also drive AC loads as seen below. The gate input is optically isolated and the MOSFETs drain pins connect to the AC load. This works since all MOSFETs have a substrate diode that always conducts current in the reverse direction (only the normal current direction can be switched). Be sure to check the SSR's datasheet, SSRs can be AC only (TRIAC) or AC/DC(have two MOSFETs) or DC only(one MOSFET).
SSR using two MOSFETs to switch AC loads with optical isolation
The power SSR tail module seen below can be used to dim incandescent or dimmable LED lights.
The PowerSSR Tail includes AC plugs and an SSR in an enclosed case
As seen in the schematic below, it contains a solid state relay built using an opto-isolator with digital logic inputs, a TRIAC for AC switching, and a MOV for surge suppression. A zero crossing and state detection tail module with isolation are also available. A code example for mbed is available at
Power SSR Tail Schematic
The new PowerSSR and ZeroCross Tails are now available combined in one case with a single power cord, the PSSR ZC. It is now the best choice if the dimmer function is needed.
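A rough sketch of how the PowerSSR Tail and ZeroCross Tail could be combined for phase-control dimming on mbed; the pin assignments and the 60 Hz half-cycle timing below are assumptions, not taken from the linked code example.

#include "mbed.h"

// Phase-control dimming sketch: pin numbers and the 60 Hz assumption are placeholders.
InterruptIn zeroCross(p5);    // ZeroCross Tail output
DigitalOut ssr(p6);           // PowerSSR Tail control input
Timeout trigger;

volatile float level = 0.5;   // 0.0 = off, 1.0 = full brightness

void fire() { ssr = 1; }      // turning the SSR on late in the half cycle dims the load

void zc()
{
    ssr = 0;                                       // release the gate drive at every zero crossing
    int delay_us = (int)((1.0 - level) * 8000.0);  // roughly 8.3 ms per half cycle at 60 Hz
    trigger.attach_us(&fire, delay_us);
}

int main()
{
    zeroCross.rise(&zc);
    while (1) wait(1.0);
}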
Higher Power SSRs
Opto 22 developed the first SSRs and makes a wide variety of DC and AC SSR modules including the large module below that will switch 480V AC at 45A with a 3V DC logic control input. They are frequently used in industrial automation systems. Heatsinks may also be needed on SSRs.
Opto 22 480D45 SSR module
IGBTs are also starting to appear in new designs of medium to high current SSRs. A wide assortment of SSRs is available.
Industrial Automation Systems & PACs
Programmable automation controllers (PACs) for industrial automation systems with a large number of inputs and outputs often mount a customized mixture of relays, AC/DC SSR modules, and AC/DC isolated input modules on special breakout boards or rail mount systems with screw terminals. Some SSRs mount directly to the rail. Several examples are shown below. They are handy whenever there are a large number of external I/Os to hookup with larger wires. A ribbon cable connector ties the digital I/O signals to the microcontroller. Such packaging works out well and helps to keep the wiring under control and organized.
An industrial automation relay room showing racks. Courtesy of Signalhead via Wikimedia Commons
If a lot of external devices need to be connected in a prototype, many of these systems can be adapted for use with small microcontrollers such as mbed.
DIN rail mounting system for SSRs and relays.
Two types of direct DIN rail mount SSR modules from Crydom and Power IO
DIN rail mount optically isolated AC input module
Someone even has a DIN rail mount expansion board for mbed
Home Automation Systems
If you only need to control household AC devices, several low-cost home automation systems are available with small plug-in modules for controlling household AC devices and dimming lights. X10 is one of the first low-cost systems, and it uses signaling over the power lines to control each module. Switches on each X10 control module are configured by the user to select a unique address (0..255) for each AC device as seen below. X10 power line control signals are sensitive to distance and noise and will not cross over from one power phase to the other, just like home networking plug-in devices. In addition to a power line interface to decode the control signals, a module contains a relay for appliances, or a triac to dim lights. A small interface device is available from X10 to send the control signals over the power line using a microprocessor. An easier to use RS232 serial interface for X10 can still be found, but it is no longer in production. There are two mbed X10 projects already available in the mbed cookbook, links to ways to interface to the X10 power line signals, and code for a wireless interface to the X10 wireless module. The X10 wireless receiver module then sends out the signals on the power line to control modules.
A plug in X10 appliance module can switch household AC devices
Another home automation system is Z-wave. It uses RF signals to control the plug in AC modules. Insteon uses both power line and RF signals in a mesh network. One of the controllers from these systems could be interfaced with mbed to use these modules. WiFi controlled AC outlet strips are also starting to appear such as the one seen below. These systems work well in homes, but probably would not be appropriate for use in noisy safety critical industrial environments.
This AC outlet strip can be controlled using WiFi and a smartphone or Amazon Echo
The WeMo switch contains an IoT WiFi controlled relay. A WeMo teardown shows the internal parts.
A Wijit Smart AC Plug
The Wijit is one of the newer low-cost home automation examples. It has a Wi-Fi hub that communicates with 10A relay-controlled AC plug modules using low-cost 434 MHz RF receiver and transmitter modules similar to those used in car key fobs. Information on the Wijit is a bit hard to find, but some details and internal photos from the FCC approval are at. The modules can be controlled from a free smartphone app.
index(3) BSD Library Functions Manual index(3)
NAME
index, rindex -- locate character in string
LIBRARY
Standard C Library (libc, -lc)
SYNOPSIS
#include <strings.h>

char *
index(const char *s, int c);

char *
rindex(const char *s, int c);
DESCRIPTION
The index() function locates the first occurrence of c (converted to a char) in the string pointed to by s. The terminating null character is considered to be part of the string; therefore, if c is `\0', the functions locate the terminating `\0'. The rindex() function is identical to index(), except that it locates the last occurrence of c.
RETURN VALUES
The functions index() and rindex() return a pointer to the located character, or NULL if the character does not appear in the string.
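EXAMPLES
The following program uses index() to find the offset of the first slash in a path and rindex() to extract the final path component:

#include <stdio.h>
#include <strings.h>

int
main(void)
{
        const char *path = "/usr/local/bin/perl";
        char *first = index(path, '/');    /* first '/' in the string */
        char *last = rindex(path, '/');    /* last '/' in the string */

        if (first != NULL)
                printf("first '/' at offset %ld\n", (long)(first - path));
        if (last != NULL)
                printf("basename: %s\n", last + 1);
        return 0;
}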
SEE ALSO
memchr(3), strchr(3), strcspn(3), strpbrk(3), strrchr(3), strsep(3), strspn(3), strstr(3), strtok(3)
HISTORY
The index() and rindex() functions appeared in Version 6 AT&T UNIX. Their prototypes existed previously in <string.h> before they were moved to <strings.h> for IEEE Std 1003.1-2001 (``POSIX.1'') compliance.
BSD June 4, 1993 BSD
From the RSpec docs:
require 'rspec/expectations'
RSpec::Matchers.define :be_a_multiple_of do |expected|
match do |actual|
actual % expected == 0
end
end
RSpec.describe 9 do
it { is_expected.to be_a_multiple_of(3) }
end
The parts of this syntax in question are: the .define call on RSpec::Matchers, the symbol :be_a_multiple_of, the block (a Proc) with its |expected| parameter, and the nested match do |actual| ... end block.
You can look at the rspec source docs for starters -
if you wanted to make a method callable like this:
some_method :symbol do
end

# one-liner version (parens are needed for arg)
some_method(:symbol) { }
you could define this:
def some_method(arg, &block)
  # the block gets called in here
end
See Blocks and yields in Ruby
As far as the symbol being turned into a method - this is actually pretty common, and is one of the main things that distinguishes symbols from strings.
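For instance, Ruby's built-in define_method takes a symbol plus a block and turns them into a real method, which is essentially what the matcher DSL is doing under the hood (a minimal sketch, unrelated to RSpec itself):

class Greeter
  # define_method takes a symbol and a block: the symbol becomes the
  # name of a real method and the block becomes its body
  define_method(:greet) do |name|
    "Hello, #{name}!"
  end
end

puts Greeter.new.greet("world")   # => Hello, world!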
So on that RSpec doc page, it says:
The block passed to RSpec::Matchers.define will be evaluated in the context of the singleton class of an instance, and will have the Macros methods available.
If you want to see exactly what's going on here you can go through the RSpec source code. However the internals aren't necessarily as well documented.
See What exactly is the singleton class in ruby?
Description: In this article I will demonstrate the use of serialization and how to perform XML serialization. Serialization means saving the state of your object to secondary storage, such as a file. Suppose you have a business layer with many classes that process your business data, and you want to test whether those classes produce the correct output without verifying the result from the UI or from a database, because that would take time. What will you do, my friend? Here comes serialization: you serialize the necessary business objects and save them into a text or XML file on your hard disk, so you can easily test your desired result by comparing the serialized, saved data with your expected output data. You could call it a small piece of automated unit testing performed by the developer. There are three types of serialization:
namespace CSharpBasicCoading
{
    using System;
    using System.IO;
    using System.Xml.Serialization;

    public class A
    {
        public void Add()
        {
            Console.WriteLine("A");
        }
        public virtual void same()
        {
            Console.WriteLine("In A");
        }
        public virtual void foo()
        {
            Console.WriteLine("Foo A");
        }
    }

    public class B : A
    {
        public void z()
        {
            Console.WriteLine("z");
        }
        public override void same()
        {
            Console.WriteLine("In B");
        }
        public new virtual void foo()
        {
            Console.WriteLine("Foo B");
        }
    }

    public class C : B
    {
        public override void same()
        {
            Console.WriteLine("In C");
        }
        public override void foo()
        {
            Console.WriteLine("Foo C");
        }
    }

    public class Test : C, IDisposable
    {
        public string FirstName;
        public string MI;
        public string LastName;

        public void Dispose()
        {
            GC.SuppressFinalize(this);
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            Test oc = new Test();
            oc.FirstName = "shirendu";
            oc.MI = "bikash";
            oc.LastName = "Nandi";

            XmlSerializer x = new XmlSerializer(oc.GetType());
            Stream s = File.OpenWrite("c:\\test1.xml");
            x.Serialize(s, oc);
            s.Close();   // close the stream so the file is fully written

            oc.Dispose();
            System.GC.Collect();
            System.GC.WaitForPendingFinalizers();
            Console.ReadLine();
        }
    }
}

Here in this program I have four classes. The Test class inherits from the A, B and C classes, and it also has three fields. When you serialize the Test class object into the "test1.xml" file, it saves the state of the object to the XML file. Please run and test the program. I have also used the garbage collector here; it is good practice in a professional program to release your objects after using them, so this article also gives you an idea of how to invoke the garbage collector. In my next article I will describe deserialization, which is simply the reverse of serialization.
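As a small preview of that reverse step, reading the object back could look like the following sketch, which assumes the same Test class and the c:\test1.xml file written above:

// Sketch: deserializing the object written by the program above.
XmlSerializer xs = new XmlSerializer(typeof(Test));
using (Stream input = File.OpenRead("c:\\test1.xml"))
{
    Test restored = (Test)xs.Deserialize(input);
    Console.WriteLine(restored.FirstName);   // prints "shirendu"
}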
ATL/Howtos
This page contains a list of problems related to ATL usage. It gives hints towards solutions and links to examples.
Contents
- 1 How do I launch transformations programmatically?
- 2 How do I generate text from models?
- 3 How do I declare/use external parameters in ATL transformations?
- 4 How can I tune the XML output of ATL?
- 5 How can I handle arbitrary XML documents?
- 6 How can I implement my own injector or extractor?
- 7 How can I transform UML models?
- 8 How can I retrieve tagged values from stereotyped UML model elements?
- 9 How can I specify a Virtual Machine operation in Java?
How can I tune the XML output of ATL?
The ATL Virtual Machine can write target models to XML files in two different ways:
- XMI writing is delegated to then used programmatically or via ATL-specific ANT tasks atl.loadModel and atl.saveModel.
To be usable in practice, an injector/extractor must implement the IInjector/IExtractor interface (cf. ATL Core API documentation), as shown by the XML injector/extractor example for instance. It must also declare explicitly the use of the specific org.eclipse.m2m.atl.core.injector/extractor extension points, as shown by the same example as before.
How can I transform UML models?
There are several implementations of UML. The one we consider here is the de-facto standard implementation from Eclipse: the UML2 project.
ATL allows to transform such UML models as any other EMF models.
The UML2 metamodel must be loaded from the EMF registry (i.e., by namespace URI). This can be done using the "EMF Registry..." button of the ATL launch configuration, or using the nsURI attribute of the atl.loadModel ANT task.
preface
Good luck AK (╯`□ ') ╯ (┻━┻)
I didn't really understand problem E; I just found the pattern (╯`□ ') ╯ (┻━┻)
A
Meaning:
In the range 1 to 10000, a number whose digits are all the same digit is called a boring number. The boring numbers are listed in the order 1, 11, 111, 1111, 2, 22, ..., and for a given boring number x we are asked for the total number of digits of x and of all boring numbers before it. For example, for 11 only 1 comes before it, so the total number of digits is 2 + 1 = 3.
Brute-force simulation.
#include<cstdio>
#include<cstring>
using namespace std;
inline void solve(int x)
{
    int type=x%10;
    int ans=(type-1)*10;
    int y=0;
    while(x)
    {
        x/=10;
        y++;
        ans+=y;
    }
    printf("%d\n",ans);
}
int main()
{
    int T;scanf("%d",&T);
    for(int i=1;i<=T;i++)
    {
        int x;scanf("%d",&x);
        solve(x);
    }
    return 0;
}
B
Meaning:
Position i holds a book if a[i] is 1 and is empty if a[i] is 0.
For a contiguous block of books [l, r] on the shelf, if position r+1 is empty you can shift the whole block one step to the right, to [l+1, r+1]; shifting to the left works the same way.
We are asked for the minimum number of moves needed to gather all the books into one contiguous block.
Practice:
Each move can close at most one empty position between the books, and it is not hard to see that the answer is the sum of the gap sizes between adjacent segments of books.
#include<cstdio>
#include<cstring>
#define N 60
using namespace std;
int a[N],n;
void solve()
{
    scanf("%d",&n);
    for(int i=1;i<=n;i++)scanf("%d",&a[i]);
    bool bk=0;int ans=0,cnt=0;
    for(int i=1;i<=n;i++)
    {
        if(!a[i])
        {
            if(bk)cnt++;
        }
        else
        {
            bk=1;
            ans+=cnt;
            cnt=0;
        }
    }
    printf("%d\n",ans);
}
int main()
{
    int T;scanf("%d",&T);
    while(T--)solve();
    return 0;
}
C
Meaning:
In other words, a fish i can eat its neighbour at position i-1 (or i+1) if a[i-1] < a[i] (or a[i+1] < a[i]), and a[i] then increases by 1. A fish is called dominant if, assuming it is the only fish in the whole world allowed to eat others, it can eventually eat all the fish. We must decide whether a dominant fish exists and, if so, output the index of one of them.
First consider a fish of maximum weight. Once such a fish eats anything, it can go on to eat all the remaining fish at will. Therefore any fish of maximum weight with at least one strictly smaller neighbour is a dominant fish.
Can we then conclude that there is no dominant fish when no such candidate exists? It is not hard to see that a dominant fish found by this method must be correct, and that no candidate exists if and only if all fish have the same weight, in which case there really is no dominant fish, so the method is correct.
#include<cstdio>
#include<cstring>
#define N 310000
using namespace std;
int a[N],n;
inline int mymax(int x,int y){return x>y?x:y;}
void solve()
{
    scanf("%d",&n);
    int mmax=1;
    for(int i=1;i<=n;i++)
    {
        scanf("%d",&a[i]);
        mmax=mymax(a[i],mmax);
    }
    a[0]=a[1];a[n+1]=a[n];
    for(int i=1;i<=n;i++)
    {
        if(a[i]==mmax && (a[i-1]!=a[i] || a[i+1]!=a[i]))
        {
            printf("%d\n",i);
            return ;
        }
    }
    printf("-1\n");
    return ;
}
int main()
{
    int T;scanf("%d",&T);
    while(T--)solve();
    return 0;
}
A classmate in the same computer room came up with a similar stack-based approach, so I won't repeat it. Although his idea was wrong at first, it turned out that a small change could avoid the problem.
D
Meaning:
Each vertex has a weight, and we must connect the n vertices into a tree using n-1 edges such that the two endpoints of every edge have different weights.
Record the weight of vertex 1 and find any vertex whose weight differs from it; call it id. Every vertex with a weight different from vertex 1 is connected to vertex 1, and every vertex with the same weight as vertex 1 is connected to id.
There is no solution exactly when no such id can be found (all weights are equal).
#include<cstdio>
#include<cstring>
#define N 5100
using namespace std;
int co[N],n;
void solve()
{
    scanf("%d",&n);
    for(int i=1;i<=n;i++)
    {
        scanf("%d",&co[i]);
    }
    int shit=co[1];
    int id=0;
    for(int i=2;i<=n;i++)
    {
        if(co[i]!=shit)
        {
            id=i;
            break;
        }
    }
    if(!id)
    {
        printf("NO\n");
        return ;
    }
    printf("YES\n");
    for(int i=2;i<=n;i++)
    {
        if(co[i]!=shit)printf("1 %d\n",i);
        else printf("%d %d\n",id,i);
    }
}
int main()
{
    int T;scanf("%d",&T);
    while(T--)solve();
    return 0;
}
E
Meaning:
I only understood the statement after the contest.
First of all, what is a round dance?
Think of a bonfire party: people dancing in a circle holding hands form a round dance. We must split n people (n is even; person i has number i) into two round dances.
Such a split is written as a scheme {A, B}.
Two circles X and Y are the same if and only if, choosing a suitable starting point, they give the same left-to-right sequence (which means [1,2,3] and [3,2,1] are different).
Two schemes {A1, B1} and {A2, B2} are the same if and only if A1 = A2 and B1 = B2, or A1 = B2 and B1 = A2.
We are asked how many different schemes there are; the answer is guaranteed to fit in a long long.
Practice:
Let n = 2m. First choose the m people who form the first circle: C(n, m) ways. How many distinct circles can those m people form? As suggested by the statement, fix one person in position one and arrange the others arbitrarily, giving (m-1)! arrangements; it is not hard to see that this counts every circle exactly once (obviously correct, since everyone must be in a circle). That gives C(n, m) * (m-1)! * (m-1)!, but each scheme has been counted twice because the two circles can swap roles, so we divide by 2. (This double counting can also be avoided by insisting that person 1 is in the first circle, i.e. choosing C(n-1, m-1).)
Simplifying the formula:
\(\frac{C_{n}^{m}\,((m-1)!)^2}{2}=\frac{n!\,((m-1)!)^2}{2\,(m!)^2}=\frac{2\,n!}{n^2}=\frac{2\,(n-1)!}{n}\)
This final form is much simpler; it is the pattern I spotted in the contest.
A quick check with a calculator shows that 19! does not overflow a long long, so we can just compute the product directly.
#include<cstdio>
#include<cstring>
using namespace std;
int main()
{
    int n;scanf("%d",&n);
    if(n==2)printf("1\n");
    else
    {
        long long ans=1;
        for(int i=1;i<n;i++)ans*=i;
        ans/=n/2;
        printf("%lld\n",ans);
    }
    return 0;
}
F
Given an n×m matrix, you may pick at most ⌊m/2⌋ elements from each row, and the sum of all picked elements must be a multiple of k. Maximize that sum.
Let f[i][j][r] be the maximum sum when the first i elements of the current row have been considered, j of them have been picked, and the running sum has remainder r modulo k; the state transition equation then follows naturally.
Time complexity: \(O(n^4)\) (treating n, m and k as the same order of magnitude)
#include<cstdio>
#include<cstring>
#define N 90
using namespace std;
inline int mymax(int x,int y){return x>y?x:y;}
int a[N][N];
int f[2][N][N];
int n,m,K;
void dp()
{
    memset(f[0],-20,sizeof(f[0]));
    f[0][0][0]=0;int now=0,pre=1;
    int limit=m/2;
    for(int i=1;i<=n;i++)
    {
        now^=1;pre^=1;
        memset(f[now],-20,sizeof(f[now]));
        for(int j=0;j<=limit;j++)
        {
            for(int k=0;k<K;k++)f[now][0][k]=mymax(f[now][0][k],f[pre][j][k]);
        }
        for(int j=1;j<=m;j++)
        {
            now^=1;pre^=1;
            memset(f[now],-20,sizeof(f[now]));
            for(int k=1;k<=limit;k++)
            {
                for(int t=0;t<K;t++)
                {
                    int shit=(t+a[i][j])%K;
                    f[now][k][shit]=mymax(f[now][k][shit],f[pre][k-1][t]+a[i][j]);
                }
            }
            for(int k=0;k<=limit;k++)
            {
                for(int t=0;t<K;t++)f[now][k][t]=mymax(f[now][k][t],f[pre][k][t]);
            }
        }
    }
    now^=1;pre^=1;
    memset(f[now],-20,sizeof(f[now]));
    for(int j=0;j<=limit;j++)
    {
        for(int k=0;k<K;k++)f[now][0][k]=mymax(f[now][0][k],f[pre][j][k]);
    }
    printf("%d\n",f[now][0][0]);
}
int main()
{
    scanf("%d%d%d",&n,&m,&K);
    for(int i=1;i<=n;i++)
    {
        for(int j=1;j<=m;j++)scanf("%d",&a[i][j]);
    }
    dp();
    return 0;
}
G
Meaning:
We are given an undirected graph with n vertices and m edges, plus k orders. Each order goes from x to y and its cost is the shortest-path distance from x to y; the total cost is the sum of the costs of all orders.
You may change the weight of at most one edge to 0, and you are asked for the minimum possible total cost.
Practice:
Run a shortest-path computation from every vertex; since the graph is sparse, SPFA is very fast. To avoid being hacked, I later also typed a Dijkstra version.
Using the free edge can never cost more than not using it (at worst the cost is unchanged).
Enumerate which edge becomes 0. For an order x -> y there are two choices: follow the original shortest path, or force the route through the zero-cost edge. Forcing the route through an edge (u, v) simply costs dist(x, u) + dist(v, y), taking the cheaper of the two orientations of the edge.
Dijkstra time complexity: \(O(nm\log{m}+mk)\)
SPFA:
#include<cstdio>
#include<cstring>
#define N 1100
#define M 2100
using namespace std;
typedef long long LL;
inline LL mymin(LL x,LL y){return x<y?x:y;}
LL d[N][N];
int list[N],head,tail,n,m,k;
bool v[N];
struct node
{
    int y,next;
    LL c;
}a[M];int len,last[N];
inline void ins(int x,int y,LL c){len++;a[len].y=y;a[len].c=c;a[len].next=last[x];last[x]=len;}
void SPFA(int st)
{
    memset(d[st],20,sizeof(d[st]));d[st][st]=0;
    list[head=1]=st;tail=2;v[st]=1;
    while(head!=tail)
    {
        int x=list[head++];if(head==n+1)head=1;
        v[x]=0;
        for(int k=last[x];k;k=a[k].next)
        {
            int y=a[k].y;
            if(d[st][x]+a[k].c<d[st][y])
            {
                d[st][y]=d[st][x]+a[k].c;
                if(!v[y])
                {
                    v[y]=1;
                    list[tail++]=y;if(tail==n+1)tail;
                }
Dij:
#include<cstdio>
#include<cstring>
#include<queue>
#define N 1100
#define M 2100
using namespace std;
typedef long long LL;
inline LL mymin(LL x,LL y){return x<y?x:y;}
LL d[N][N];
int n,m,k;
bool v[N];
struct node
{
    int y,next;
    LL c;
}a[M];int len,last[N];
inline void ins(int x,int y,LL c){len++;a[len].y=y;a[len].c=c;a[len].next=last[x];last[x]=len;}
priority_queue<pair<LL,int>,vector<pair<LL,int> >,greater<pair<LL,int> > > fuck;
void SPFA(int st)
{
    memset(d[st],20,sizeof(d[st]));d[st][st]=0;
    memset(v,0,sizeof(v));
    while(!fuck.empty())fuck.pop();
    fuck.push(make_pair(0,st));
    for(int i=1;i<=n;i++)
    {
        pair<LL,int> zwq=fuck.top();fuck.pop();
        int x=zwq.second;
        while(v[x])
        {
            zwq=fuck.top();fuck.pop();
            x=zwq.second;
        }
        v[x]=1;
        for(int k=last[x];k;k=a[k].next)
        {
            int y=a[k].y;
            if(!v[y] && zwq.first+a[k].c<d[st][y])
            {
                d[st][y]=zwq.first+a[k].c;
                fuck.push(make_pair(d[st][y];
            }
WebService::Google::Reader - Perl interface to the Google Reader API
use WebService::Google::Reader;

my $reader = WebService::Google::Reader->new(
    host     => '',
    appid    => $appid,
    appkey   => $appkey,
    username => $username,
    password => $password,
);

my $feed = $reader->unread(count => 100);
my @entries = $feed->entries;

# Fetch past entries.
while ($reader->more($feed)) {
    my @entries = $feed->entries;
}
The WebService::Google::Reader module provides an interface to webservices using the Google Reader API. The only tested webservice at this time is.
Creates a new WebService::Google::Reader object. The following named parameters are accepted:
The hostname of the service.
Required for accessing any personalized or account-related functionality (reading-list, editing, etc.).
Inoreader requires users to also obtain an appid and appkey.
See
Use https scheme for all requests, even when not required.
An optional useragent object.
Enable debugging. Default: 0. This will dump the headers and content for both requests and responses.
Disable compression. Default: 1. This is useful when debugging is enabled and you want to read the response content.
Returns the error string, if any.
Returns an HTTP::Response object for the last submitted request. Can be used to determine the details of an error.
- exclude(feed => $feed|[@feeds], tag => $tag|[@tags])
-
Accepts a hash reference to one or more of feed / tag / state. Each of which is a scalar or array reference.
Accepts a single feed url.
Accepts a single tag name. See "TAGS"
Accepts a single state name. See "STATES".
Shortcut for state('broadcast').
Shortcut for state('starred').
Shortcut for state('reading-list', exclude => { state => 'read' })
Shortcut for state('like').
Accepts a query string and the following named parameters:
One or more (as a array reference) feed / state / tag to search. The default is to search all feed subscriptions.
The total number of search results: defaults to 1000.
The number of entries per fetch: defaults to 40.
The sort order of the entries: desc (default) or asc in time.
A feed generator only returns $count entries. If more are available, calling this method will return a feed with the next $count entries.
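Putting these pieces together, a typical fetch-and-page loop might look like the following sketch (the title() call assumes the usual XML::Atom::Entry interface for the returned entries):

# fetch unread entries and keep paging while more are available
my $feed = $reader->unread(count => 40)
    or die 'request failed: ', $reader->error;

do {
    print $_->title, "\n" for $feed->entries;
} while ($reader->more($feed));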
The following methods return an object of type WebService::Google::Reader::ListElement.
Returns a list of subscriptions and a count of unread entries. Also listed are any tags or states which have positive unread counts. The following accessors are provided: id, count. The maximum count reported is 1000.
Returns the list of user subscriptions. The following accessors are provided: id, title, categories, firstitemmsec. categories is a reference to a list of ListElements providing accessors: id, label.
Returns the list of preference settings. The following accessors are provided: id, value.
Returns the list of user-created tags. The following accessors are provided: id, shared.
Returns the list of user information. The following accessors are provided: isBloggerUser, userId, userEmail.
The following methods are used to edit feed subscriptions.
Requires a feed url or Feed object, or a reference to a list of them. The following named parameters are accepted:
Flag indicating whether the target feeds should be added or removed from the user's subscriptions.
Accepts a title to associate with the feed. This probaby wouldn't make sense to use when there are multiple feeds. (Maybe later will consider allowing a list here and zipping the feed and title lists).
Accepts a tag / state or a reference to a list of tags / states for which to associate / unassociate the target feeds.
Associate / unassociate a list of tags / states from a feed / feeds.
Subscribe or unsubscribe from a list of feeds.
Renames a feed to the given title.
Marks the feeds as read.
The following methods are used to edit tags and states.
Accepts the following parameters.
Make the given tags / states public.
Make the given tags / states private.
Only tags (and not states) can be disabled.
Associate / unassociate the 'broadcast' state with the given tags / states.
Delete the given tags.
Renames the tags associated with any feeds.
Renames the tags associated with any individual entries.
Calls rename_feed_tag and rename_entry_tag, and finally delete_tag.
Marks all entries as read for the given tags / states.
The following methods are used to edit individual entries.
Associate / unassociate the entries with the given tags / states.
Associate / unassociate the entries with the given tags / states.
Marks all the given entries as "broadcast".
Marks / unmarks all the given entries as "starred".
Marks all the given entries as "read".
Marks / unmarks all the given entries as "liked".
These are a list of other useful methods.
Sets the given preference name to the given value.
Exports feed subscriptions as OPML.
Returns true / false on success / failure. Unsure of when this needs to be used..
Given an HTTP::Request, this will perform the request and if the response indicates a bad (expired) token, it will request another token before performing the request again. Returns an HTTP::Response on success, false on failure (check error).
This is automatically called from within methods that require a user token. If successful, the token is available via the token accessor.
Returns a list of all the known states. See "STATES".
The following characters are not allowed: "<>?&/\^
These are tags in a Google-specific namespace. The following are all the known used states.
Entries which have been read.
Entries which have been read, but marked unread.
New entries from reading-list.
Entries which have been starred.
Entries which have been shared and made publicly available.
Entries from all subscriptions.
Entries for which a link in the body has been clicked.
Entries which have been mailed.
Entries for which the title link has been clicked.
Entries which have been kept unread. (Not sure how this differs from "kept-unread").
Please report any bugs or feature requests to. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes.
You can find documentation for this module with the perldoc command.
perldoc WebService::Google::Reader
You can also look for information at:
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
gray, <gray at cpan.org>
Cross-language scripting¶
Godot allows you to mix and match scripting languages to suit your needs. This means a single project can define nodes in both C# and GDScript. This page will go through the possible interactions between two nodes written in different languages.
The following two scripts will be used as references throughout this page.
extends Node

var str1 : String = "foo"
var str2 : String setget ,get_str2

func get_str2() -> String:
    return "foofoo"

func print_node_name(node : Node) -> void:
    print(node.get_name())

func print_array(arr : Array) -> void:
    for element in arr:
        print(element)

func print_n_times(msg : String, n : int) -> void:
    for i in range(n):
        print(msg)
public class MyCSharpNode : Node
{
    public String str1 = "bar";
    public String str2 { get { return "barbar"; } }

    public void PrintNodeName(Node node)
    {
        GD.Print(node.GetName());
    }

    public void PrintArray(String[] arr)
    {
        foreach (String element in arr)
        {
            GD.Print(element);
        }
    }

    public void PrintNTimes(String msg, int n)
    {
        for (int i = 0; i < n; ++i)
        {
            GD.Print(msg);
        }
    }
}
Instantiating nodes¶
If you're not using nodes from the scene tree, you'll probably want to instantiate nodes directly from the code.
Instantiating C# nodes from GDScript¶
Using C# from GDScript doesn't need much work. Once loaded (see Classes as resources), the script can be instantiated with new().
var my_csharp_script = load("res://path_to_cs_file.cs")
var my_csharp_node = my_csharp_script.new()
print(my_csharp_node.str2) # barbar
Warning
When creating .cs scripts, you should always keep in mind that the class Godot will use is the one named like the .cs file itself. If that class does not exist in the file, you'll see the following error:
Invalid call. Nonexistent function `new` in base.
For example, MyCoolNode.cs should contain a class named MyCoolNode.
You also need to check your .cs file is referenced in the project's .csproj file. Otherwise, the same error will occur.
Instantiating GDScript nodes from C#¶
From the C# side, everything work the same way. Once loaded, the GDScript can be instantiated with GDScript.New().
GDScript MyGDScript = (GDScript) GD.Load("res://path_to_gd_file.gd");
Object myGDScriptNode = (Godot.Object) MyGDScript.New(); // This is a Godot.Object
Here we are using an Object, but you can use type conversion as explained in Type conversion and casting.
Accessing fields¶
Accessing C# fields from GDScript¶
Accessing C# fields from GDScript is straightforward, you shouldn't have anything to worry about.
print(my_csharp_node.str1) # bar
my_csharp_node.str1 = "BAR"
print(my_csharp_node.str1) # BAR
print(my_csharp_node.str2) # barbar
# my_csharp_node.str2 = "BARBAR" # This line will hang and crash
Note that it doesn't matter if the field is defined as a property or an attribute. However, trying to set a value on a property that does not define a setter will result in a crash.
Accessing GDScript fields from C#¶
As C# is statically typed, accessing GDScript from C# is a bit more convoluted, you will have to use Object.Get() and Object.Set(). The first argument is the name of the field you want to access.
GD.Print(myGDScriptNode.Get("str1")); // foo
myGDScriptNode.Set("str1", "FOO");
GD.Print(myGDScriptNode.Get("str1")); // FOO
GD.Print(myGDScriptNode.Get("str2")); // foofoo
// myGDScriptNode.Set("str2", "FOOFOO"); // This line won't do anything
Keep in mind that when setting a field value you should only use types the GDScript side knows about. Essentially, you want to work with built-in types as described in GDScript basics or classes extending Object.
Calling methods¶
Calling C# methods from GDScript¶
Again, calling C# methods from GDScript should be straightforward. The marshalling process will do its best to cast the arguments to match function signatures. If that's impossible, you'll see the following error:
Invalid call. Nonexistent function `FunctionName`.
my_csharp_node.PrintNodeName(self) # myGDScriptNode
# my_csharp_node.PrintNodeName() # This line will fail.
my_csharp_node.PrintNTimes("Hello there!", 2) # Hello there! Hello there!
my_csharp_node.PrintArray(["a", "b", "c"]) # a, b, c
my_csharp_node.PrintArray([1, 2, 3]) # 1, 2, 3
Calling GDScript methods from C#¶
To call GDScript methods from C# you'll need to use Object.Call(). The first argument is the name of the method you want to call. The following arguments will be passed to said method.
myGDScriptNode.Call("print_node_name", this); // my_csharp_node // myGDScriptNode.Call("print_node_name"); // This line will fail silently and won't error out. myGDScriptNode.Call("print_n_times", "Hello there!", 2); // Hello there! Hello there! // When dealing with functions taking a single array as arguments, we need to be careful. // If we don't cast it into an object, the engine will treat each element of the array as a separate argument and the call will fail. String[] arr = new String[] { "a", "b", "c" }; // myGDScriptNode.Call("print_array", arr); // This line will fail silently and won't error out. myGDScriptNode.Call("print_array", (object)arr); // a, b, c myGDScriptNode.Call("print_array", (object)new int[] { 1, 2, 3 }); // 1, 2, 3 // Note how the type of each array entry does not matter as long as it can be handled by the marshaller
Warning
As you can see, if the first argument of the called method is an array, you'll need to cast it as object. Otherwise, each element of your array will be treated as a single argument and the function signature won't match.
Inheritance¶
A GDScript file may not inherit from a C# script. Likewise, a C# script may not inherit from a GDScript file. Due to how complex this would be to implement, this limitation is unlikely to be lifted in the future. See this GitHub issue for more information.
JS vs pure patching
hi folks,
I’m currently designing a probabilistic sequencer for my own use, firstly.
I’m hesitating between a JS core which would retain all arrays of data (or almost all) VS pure patching objects (coll, etc)
Of course, time accuracy is totally required.
So I’m hesitating before diving into one or the other solution and I’d want to have your opinion.
of course, I tested the Max6 beta and the javascript seems to be improved (tested with a couple of existing projects I had).
let me know here or on
best,
j
I don’t know this for sure, but if you’re using arrays, wouldn’t jitter matrices be faster than either coll or js?
I decided to go to JS.
Trusting the new engine in Max6…
Translating it into coll or even jitter matrices wouldn’t be that hard
Considering the HUGE Max 6 improvements, I guess JS is worth using..
I’d like to know more about performance comparison between:
- 1 JS parsing A LOT of data coming from 8 sources
VS
- 8 JS parsing data coming from 1 source each one
I really wanted to know that.
Best,
j
posting that in JS forum… more appropriate..
I would love to hear about this as well. In Max4 I used javascript quite a bit but eventually disliked it mainly for performance reasons. I moved to Lua and C from there but recently I got back into Javascript for other domains.
For a new project I’d like to implement a midi processor/mapper in javascript so I’m also worried about timing and performance.
Julien, what are your findings so far?
Hi,
JS in Max6 is REALLY faster, first good point.
I’d suggest you to prototype first using Max6 Objects, if possible (sometimes, making multiple included loops or more tricky stuff can be a pain following that way)
Then, if it works, keep it. If it doesn’t work well because of complexity of patching work, move to JS.
This would be my way.
About Externals.
Indeed, if you can code C++ routines using the SDK, I'm quite sure it could be the fastest way. I wrote "could be" because I have been very surprised by some parts of my patches using JS in Max6.
I know where the weakness of visual programming lies, and when it pays off to move to procedural. The question is more whether I should give JS another try or stick with what I already know is good. I think from your experiences I should give JS second chance.
I forgot one major JS annoyance. The missing module system. I had a look and I still can find no docs on any sort of require() function in Max6. It is possible to break JS code into modules now or is the jsextensions folder still the only way to reuse code?
Hi Thijs,
Luke Hall published a script (hallluke.wordpress.com/2010/10/31/including-extra-javascript-files) using the eval() function to include js files into a project (it's not really elegant but it works up to Max5). In Max6 though, the implementation of eval seems to have changed, so the old script doesn't work anymore. In this thread () there is a discussion about some modifications to the script to make it work (probably even a bit less elegant, as functions are passed around a lot).
I would be very curious to hear if some else found a better solution.
So far I go for Java if a scripting project gets too large.
Jan
That’s too bad. Thanks for the tip about eval but I’d rather stick with jit.gl.lua for scripting then. NodeJS has a module system using require() which works nice for server side code. The browser on the other hand loads all scripts in one global space, and then you can quite easily create modules with some namespaces and immediate function wrappers.
A missed opportunity in Max6 imho. Now we have a fast engine and no real way to reuse code or write bigger projects. It was like this in Max4 and years after things are still the same. Bummer.
On Tue, 08 Jul 2003 15:28:27 +0000 bzzz@tmi.comex.ru wrote:
> dynlocks implements 'lock namespace', so you can lock A for namespace N1 and
> lock B for namespace N1 and so on. we need this because we want to take lock
> on _part_ of directory.

Ok, a mini database lock manager. Wouldn't it be better to use a small hash table and lock escalation on overflow for this? Otherwise you could have quite a lot of entries queued up in the list if the server is slow.

-Andi
IPC::LDT - implements a length based IPC protocol
This manual describes version 2.03.
Interprocess communication often uses line (or record) oriented protocols. FTP, for example, usually is such a protocol: a client sends a command (e.g. "LS") which is completed by a carriage return. This carriage return is included in the command which is sent to the server process (FTP deamon) which could implement its reading in a way like this:
while ($cmd=<CLIENT>) {
    chomp($cmd);
    performCommand($cmd);
}
Well, such record oriented, blocked protocols are very useful and simply to implement, but sometimes there is a need to transfer more complex data which has no trailing carriage return, or data which may include more carriage returns inside the message which should not cause the reciepient to think the message is already complete while it is really not. Even if you choose to replace carriage returns by some obscure delimiters, the same could happen again until you switch to a protocol which does not flag the end of a message by special strings.
On the other hand, if there is no final carriage return (or whatever flag string) within a message, the end of the message has to be marked another way to avoid blocking by endless waiting for more message parts. A simple way to provide this is to precede a message by a prefix which includes the length of the remaining (real) message. A reciepient reads this prefix, decodes the length information and continues reading until the announced number of bytes came in.
IPC::LDT provides a class to build objects which transparently perform such "length driven transfer". A user sends and receives messages by simple method calls, while the LDT objects perform the complete translation into and from LDT messages (with prefix) and all the necessary low level IO handling to transfer stream messages on non blocked handles.
IPC::LDT objects can be configured to transfer simle string messages as well as complex data structures. Additionally, they allow to delay the transfer of certain messages in a user defined way.
Load the module as usual:
use IPC::LDT;
Make an LDT object for every handle that should be used in an LDT communication:
my $asciiClient=new IPC::LDT(handle=>HANDLE); my $objectClient=new IPC::LDT(handle=>HANDLE, objectMode=>1);
Now you can send and receive data:
$data=$asciiClient->receive;
@objects=$objectClient->receive;

$asciiClient->send("This is", " a message.");
$objectClient->send("These are data:", [qw(a b c)]);
No symbol is exported by default.
You can explicitly import LDT_CLOSED, LDT_READ_INCOMPLETE, LDT_WRITE_INCOMPLETE, LDT_OK and LDT_INFO_LENGTH which are described in section CONSTANTS.
You may set the module variable $IPC::LDT::Trace before the module is loaded (that means in a BEGIN block before the "use" statement) to activate the built in trace code. If not prepared this way, all runtime trace settings (e.g. via the constructor parameter traceMode) will take no effect because the trace code will have been filtered out at compile time for reasons of performance. (This means that no trace message will appear.)
If $IPC::LDT::Trace is set before the module is loaded, te builtin trace code is active and can be deactivated or reactivated at runtime globally (for all objects of this class) by unsetting or resetting of this module variable. Alternatively, you may choose to control traces for certain objects by using the constructor parameter traceMode.
So, if you want to trace every object, set $IPC::LDT::Trace initially and load the module. If you want to trace only certain objects, additionally unset $IPC::LDT::Trace after the module is loaded and construct these certain objects with constructor flag traceMode.
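For example (the socket handle name below is just a placeholder):

BEGIN {$IPC::LDT::Trace=1;}   # keep the trace code at compile time
use IPC::LDT;

$IPC::LDT::Trace=0;           # switch global traces off again ...
my $ldt=new IPC::LDT(handle=>SOCKET, traceMode=>1);   # ... and trace this object only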
It is a good tradition to build self checks into a code. This makes code execution more secure and simplifies bug searching after a failure. On the other hand, self checks decrease code performance. That's why you can filter out the self checking code (which is built in and activated by default) by setting the module variable $IPC::LDT::noAssert before the module is loaded. The checks will be removed from the code before they reach the compiler.
Setting or unsetting this variable after the module was loaded takes no effect.
LDT_CLOSED: a handle related to an LDT object was closed when reading or writing should be performed on it;
LDT_READ_INCOMPLETE: a message could not be (completely) read within the set number of trials;
LDT_WRITE_INCOMPLETE: a message could not be (completely) written within the set number of trials;
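A small sketch of reacting to these codes after a failed call, assuming the error code and message can be read back through the object's "rc" and "msg" variables as described for send() below:

use IPC::LDT qw(LDT_CLOSED);

unless ($ldt->send("ping"))
  {
   # the object stores an error code and message after a failure
   warn "Connection was closed by the peer: ", $ldt->{msg}, "\n"
     if $ldt->{rc}==LDT_CLOSED;
  }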
The constructor builds a new object for data transfers. All parameters except of the class name are passed named (this means, by a hash).
Parameters:
the first parameter as usual - passed implicitly by Perl:
my $asciiClient=new IPC::LDT(...);
The method form of constructor calls is not supported.
The handle to be used to perform the communication. It has to be opened already and will not be closed if the object will be destroyed.
Example: handle => SERVER
A closed handle is not accepted.
You can use whatever type of handle meets your needs. Usually it is a socket or anything derived from a socket. For example, if you want to perform secure IPC, the handle could be made by Net::SSL. There is only one precondition: the handle has to provide a fileno() method. (You can enforce this even for Perl's default handles by simply using FileHandle.)
Pass a true value if you want to transfer data structures. If this setting is missed or a "false" value is passed, the object will transfer strings.
Data structures will be serialized via Storable for transfer. Because of this, such a communication is usually restricted to partners which could use Storable methods as well to reconstruct the data structures (which means that they are written in Perl).
String transfer objects, on the other hand, can be used to communicate with any partner who speaks the LDT protocol. We use Java and C clients as well as Perl ones, for example.
Example: objectMode => 1
The transfer mode may be changed while the object is alive by using the methods setObjectMode() and setAsciiMode().
sets the length of the initial info block which preceds every LDT message coding the length of the remaining message. This setting is done in bytes.
If no value is provided, the builtin default value LDT_INFO_LENGTH is used. (This value can be imported in your own code, see section "Exports" for details.) LDT_INFO_LENGTH is designed to meet usual needs.
Example: startblockLength => 4
Set this flag to a true value if you want to trace to actions of the module. If set, messages will be displayed on STDERR reporting what is going on.
Traces for <all> objects of this class can be activated (regardless of this constructor parameter) via $IPC::LDT::Trace. This is described more detailed in section "CONSTANTS".
Example: traceMode => 1
Return value:
A successfull constructor call replies the new object. A failed call replies an undefined value.
Examples:
my $asciiClient=new IPC::LDT(handle=>HANDLE); my $objectClient=new IPC::LDT(handle=>HANDLE, objectMode=>1);
Switches the LDT object to "object trasnfer mode" which means that is can send and receive Perl data structures now.
Runtime changes of the transfer mode have to be exactly synchronized with the partner the object is talking with. See the constructor (new()) description for details.
Parameters:
An LDT object made by new().
Example:
$asciiClient->setObjectMode;
Switches the LDT object to "ASCII trasnfer mode" which means that is sends and receives strings now.
Runtime changes of the transfer mode have to be exactly synchronized with the partner the object is talking with. See the constructor (new()) description for details.
Parameters:
An LDT object made by new().
Example:
$objectClient->setAsciiMode;
Sometimes you do not want to send messages immediately but buffer them for later delivery, e.g. to set up a certain send order. You can use delay() to install a filter which makes the LDT object delay the delivery of all matching messages until the next call of undelay().
The filter is implemented as a callback of send(). As long as it is set, send() calls it to check a message for sending or buffering it.
You can overwrite a set filter by a subsequent call of delay(). Messages already collected will remain collected.
To send delayed messages you have to call undelay().
If the object is detroyed while messages are still buffered, they will not be delivered but lost.
Parameters:
An LDT object made by new().
A code reference. It should expect a reference to an array containing the message (possibly in parts), and should return a true or false value to indicate whether the passed message has to be delayed.
It is recommended to provide a fast function, because it will be called every time send() is invoked.
Example:
$ldt->delay(\&filter);
with filter() defined as
sub filter
  {
   # get and check parameters
   my ($msg)=@_;
   confess "Missed message parameter" unless $msg;
   confess "Message parameter is no array reference" unless ref($msg) and ref($msg) eq 'ARRAY';

   # check something
   $msg->[0] eq 'delay me';
  }
See undelay() for a complete example.
Sends all messages collected by a filter which was set by delay(). The filter is removed, so that afterwards every message will again be sent by send() immediately.
If no messages are buffered and no filter is set, a call of this method has no effect.
Parameters:
An LDT object made by new().
Example:
$ldt->undelay;
Here comes a complete example to illustrate how delays can be used.
filter definition:
sub filter
  {
   # get parameters
   my ($msg)=@_;

   # check something
   $msg->[0] eq 'delay me';
  }
usage:
# send messages
$ldt->send('send me', 1);   # sent
$ldt->send('delay me', 2);  # sent

# activate filter
$ldt->delay(\&filter);

# send messages
$ldt->send('send me', 3);   # sent
$ldt->send('delay me', 4);  # delayed
$ldt->send('send me', 5);   # sent
$ldt->send('delay me', 6);  # delayed

# send collected messages, uninstall filter
$ldt->undelay;              # sends messages 4 and 6

# send messages
$ldt->send('send me', 7);   # sent
$ldt->send('delay me', 8);  # sent
Sends the passed message via the related handle (which was passed to new()). The message, which could be passed as a list of parts, is sent as a (concatenated) string or as serialized Perl data depending on the settings made by the constructor flag objectMode and calls of setObjectMode() or setAsciiMode, respectively.
In case of an error, the method returns an undefined value and stores both an error code and an error message inside the object, where they can be accessed via the object variables "rc" and "msg" (see CONSTANTS for a list of error codes). The receiver may only be able to accept the message part by part. That is why this send() method retries writing to the associated handle until the complete message could be sent.
All list elements will be combined to the resulting message as done by print() or warn() (that means, without separating parts by additional whitespaces).
Examples:
$asciiClient->send('Silence?', 'Maybe.') or die $asciiClient->{'msg'};
$objectClient->send({oops=>1, beep=>[qw(7)]}, $scalar, \@array);
Note: If the connection is closed while the message is sent, the signal SIGPIPE might arrive and terminate the complete program. To avoid this, SIGPIPE is ignored while this method is running.
The handle associated with the LDT object is made non blocking during data transmission. The original mode is restored before the method returns.
If an error occurs while data are transmitted, further usage of the associated handle is usually critical. That is why send() and receive() stop operation after a transmission error, even if you repeat their calls. This should protect your program and make it more stable (e.g. writing to a closed handle might cause a fatal error and even terminate your program).
Nevertheless, if you really want to retry after an error, there is the reset() method, which resets the internal error flags - unless the associated handle was already closed.
Parameters:
An LDT object made by new().
Example:
$ldtObject->reset;
Reads a message from the associated handle and returns it.
In case of an error, the method returns an undefined value and provides both a return code (see CONSTANTS) and a complete message in the object variables "rc" and "msg", respectively, where you can read them. A message may arrive only part by part. That is why this receive() method retries reading from the associated handle until the complete message could be read.
The received message is returned as a string in ASCII mode, and as a list in object mode.
Example:
$msg=$asciiClient->receive or die $asciiClient->{'msg'};
@objects=$objectClient->receive or die $objectClient->{'msg'};
Note: If the connection is closed while the message is read, the signal SIGPIPE might arrive and terminate the complete program. To avoid this, SIGPIPE is ignored while this method is running.
The handle associated with the LDT object is made non blocking during data transmission. The original mode is restored before the method returns.
Returns the module's version. It simply returns $IPC::LDT::VERSION and is implemented only to provide compatibility with other object modules.
Example:
# get version
warn "[Info] IPC is performed by IPC::LDT ", IPC::LDT::version, ".\n";
To share data between processes, you could embed a socket into an LDT object.
my $ipc=new IO::Socket(...);
my $ldt=new IPC::LDT(handle=>$ipc, objectMode=>1);
Now you are able to send data:
my $dataRef=[{o=>1, lal=>2, a=>3}, [[qw(4 5 6)], [{oo=>'ps'}, 7, 8, 9]]];
$ldt->send($dataRef) or die $ldt->{'msg'};
or receive them:
@data=$ldt->receive or die $ldt->{'msg'};
Jochen Stenzel (perl@jochen-stenzel.de).
implements vs extends: when to use? What's the difference?
Extends is for extending a class.
Implements is for implementing an interface.
The difference between an interface and a regular class is that in an interface you cannot implement any of the declared methods. Only the class that "implements" the interface can implement the methods. The C++ equivalent of an interface would be an abstract class (not EXACTLY the same, but pretty much).
Also, Java doesn't support multiple inheritance for classes. This is solved by using multiple interfaces.
public interface ExampleInterface {
    public void doAction();
    public String doThis(int number);
}

public class sub implements ExampleInterface {
    public void doAction() {
        //specify what must happen
    }
    public String doThis(int number) {
        //specify what must happen
        return "";  // must return a String, since the interface declares one
    }
}
now extending a class
public class SuperClass {
    public int getNb() {
        //specify what must happen
        return 1;
    }
    public int getNb2() {
        //specify what must happen
        return 2;
    }
}
public class SubClass extends SuperClass {
    //you can override the implementation
    @Override
    public int getNb2() {
        return 3;
    }
}
in this case
SubClass s = new SubClass();
s.getNb();   //returns 1
s.getNb2();  //returns 3

SuperClass sup = new SuperClass();
sup.getNb();  //returns 1
sup.getNb2(); //returns 2
Also, note that an @Override tag is not required when implementing an interface, as there is nothing in the original interface methods to be overridden.
I suggest you do some more research on dynamic binding, polymorphism and, in general, inheritance in object-oriented programming.
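To tie the two together, here is a small illustrative sketch (not from the original answer; Movable and Drawable are made-up interfaces): a class can extend one class while implementing several interfaces, which is how Java works around the lack of multiple inheritance:

public interface Movable {
    void move();
}

public interface Drawable {
    void draw();
}

//extends one class, implements two interfaces
public class Sprite extends SuperClass implements Movable, Drawable {
    public void move() {
        //specify what must happen
    }
    public void draw() {
        //specify what must happen
    }
}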
User talk:Emj
Contents
Proj
In trying to Print OpenStreetMap with Gnuplot, what arguments should I use with proj in order to get a conic Euro-centric projection with center longitude 20 degrees east of Greenwich and covering Tunisia to Nordkap? Something like this is what I want to accomplish. As a comparison, this map is centered around 15 E and this one doesn't span as far north and south as I want. You can see that all three are conic projections, because the longitudes are straight lines pointing towards the north pole. I guess 50 degrees north should be the very center of the map. --LA2 19:57, 10 Jul 2006 (UTC)
Category:Way Direction Dependant
I've seen that you recently created this new category. I just wanted to point you to my suggestion I wrote in the Talk:Map_Features bottom. Like you, I think it is important to make very visible that some tags are "way sensitive" and reversing the order of nodes could break some rules (coastline, oneway, roundabout) and editors should treat those tags in a particular way. My idea was more to show it in the most viewed page Map_Features. Any feedback would be appreciated. Pieren 14:50, 8 May 2008 (UTC)
- Well sure, but I don't see why you can't have both. Categories are a lot easier to handle than manually updating data. Erik Johansson
natural=cliff
What shall we do about that paragraph in natural=cliff? I still think it's incorrect but I don't want to start a revert war :) --Jttt 08:14, 4 June 2008 (UTC)
Hey there...
Hi Emj, even if "the others do all the work" I wanted to drop in and say hi. :-) Firefishy just set the admin flag for User:Uboot and me. So it should be a bit easier to get some things tidied up in the German namespace. But of course I'm also available to help with other stuff in other languages. --Avatar 07:15, 30 January 2009 (UTC)
lighthouse
Thanks for your help, I've done the modification. The French for lighthouse is "phare".
But one thing doesn't work: the French page FR:Phare doesn't show the available languages correctly.
If you know how to repair that, can you do it?
Thanks again! Selenium134 19:42, 6 April 2010 (UTC)
Forming Wiki Team
Hi Emj,
what do you think about this idea?
regards Matthias
Potlatch 2 bug reporting
I noticed in Recent Changes that you'd added to the Potlatch 2 bugs page. I thought I should warn you that Potlatch_2/Bugs#Bugs suggests that new bugs should be filed in trac rather than added to the wiki page. It is possible someone will notice it though and add it to trac for you (not me I'm afraid as I'm about to dash out and will have forgotten before my return). Best wishes. --EdLoach 09:09, 23 March 2011 (UTC)
Doc templates on redirect pages
this edit "Addings docs to see if tagwatch is effected.". Did this experiment work? Does it help TagWatch as intended?
It's quite unconventional to have stuff on redirect pages. I'm noticing it creates some confusing effects on "what links here" listings e.g. Special:WhatLinksHere/Way. Maybe not the end of the world, but just wondering if this template is needed on there.
-- Harry Wood 23:25, 21 December 2011 (UTC)
- I should have said "taginfo" and yes it seems like it works. Erik Johansson 23:04, 27 December 2011 (UTC)
- compare:
- which doesn't have a description
- which has a description.
- Btw thanks for reminding me about this Erik Johansson 23:06, 27 December 2011 (UTC)
Cleaning up the wiki
Please take part in this discussion :) --★ → Airon 90 12:58, 12 November 2012 (UTC)
Geography lessons
You might like to check your geography as you marked a map of Dunfermline as being in England. Oops! England is about 2-3 hours drive to the south of Dunfermline. :-) --Colin Angus Mackay 14:58, 23 Apr 2006 (UTC)
- Well you changed it, that's great. I've actually never heard of Dunfermline, I knew someone would catch any mistakes I did on the geography of your islands.. ;-) Cheers and thanks for correcting me.. About the England/Scotland/UK it's a mess for me, I know I can call Sweden, Norway and Denmark Scandinavia but not what to call all of your Islands.. Sad isn't it.. Erik Johansson 16:58, 23 Apr 2006 (UTC)
- You might find this wikipedia article useful British Isles Terminology --Colin Angus Mackay 21:03, 23 Apr 2006 (UTC)
On Wed, Dec 26, 2018 at 9:13 PM Dan Williams <dan.j.williams@intel.com> wrote:
>
> On Wed, Dec 26, 2018 at 8:11 PM Fengguang Wu <fengguang.wu@intel.com> wrote:
> >
> > On Wed, Dec 26, 2018 at 07:41:41PM -0800, Matthew Wilcox wrote:
> > > On Wed, Dec 26, 2018 at 09:14:47PM +0800, Fengguang Wu wrote:
> > >> From: Fan Du <fan.du@intel.com>
> > >>
> > >> This is a hack to enumerate PMEM as NUMA nodes.
> > >> It's necessary for current BIOS that don't yet fill ACPI HMAT table.
> > >>
> > >> WARNING: take care to backup. It is mutual exclusive with libnvdimm
> > >> subsystem and can destroy ndctl managed namespaces.
> > >
> > > Why depend on firmware to present this "correctly"? It seems to me like
> > > less effort all around to have ndctl label some namespaces as being for
> > > this kind of use.
> >
> > Dave Hansen may be more suitable to answer your question. He posted
> > patches to make PMEM NUMA node coexist with libnvdimm and ndctl:
> >
> > [PATCH 0/9] Allow persistent memory to be used like normal RAM
> >
> > That depends on future BIOS. So we did this quick hack to test out
> > PMEM NUMA node for the existing BIOS.
>
> No, it does not depend on a future BIOS.

It is correct. We already have Dave's patches + Dan's patch (added
target_node field) work on our machine which has SRAT.

Thanks,
Yang

> Willy, have a look here [1], here [2], and here [3] for the
> work-in-progress ndctl takeover approach (actually 'daxctl' in this
> case).
>
> [1]:
> [2]:
> [3]:
I have two arrays as output from a simulation script, where one contains IDs and the other times, i.e. something like:
ids = np.array([2, 0, 1, 0, 1, 1, 2])
times = np.array([.1, .3, .3, .5, .6, 1.2, 1.3])
These arrays are always of the same size. Now I need to calculate the differences of times, but only for those times with the same ids. Of course, I can simply loop over the different ids and do
for id in np.unique(ids):
    diffs = np.diff(times[ids==id])
    print diffs
    # do stuff with diffs
However, this is quite inefficient and the two arrays can be very large. Does anyone have a good idea on how to do that more efficiently?
You can use array.argsort() and ignore the values corresponding to a change in ids:
>>> id_ind = ids.argsort(kind='mergesort')
>>> times_diffs = np.diff(times[id_ind])
>>> times_diffs
array([ 0.2, -0.2,  0.3,  0.6, -1.1,  1.2])
To see which values you need to discard, you could use a Counter to count the number of times per id (from collections import Counter), or just sort ids and see where its diff is nonzero: these are the indices where the id changes, and where your time diffs are irrelevant:
times_diffs[np.diff(ids[id_ind]) == 0]  # ids[id_ind] being the sorted ids sequence
and finally you can split this array with np.split and np.where:
np.split(times_diffs, np.where(np.diff(ids[id_ind]) != 0)[0])
As you mentioned in your comment, argsort()'s default algorithm (quicksort) might not preserve order between equal times, so the argsort(kind='mergesort') option must be used.
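Putting the pieces of this answer together, a minimal self-contained sketch (using the example arrays from the question; np.unique with return_counts is one way to get the per-id split points) could look like this:

import numpy as np

ids = np.array([2, 0, 1, 0, 1, 1, 2])
times = np.array([.1, .3, .3, .5, .6, 1.2, 1.3])

# stable sort by id, then diff the times in that order
order = ids.argsort(kind='mergesort')
sorted_ids = ids[order]
diffs = np.diff(times[order])

# a diff is only meaningful when both samples belong to the same id
valid = diffs[np.diff(sorted_ids) == 0]

# split the valid diffs into one array per id: each id contributes (count - 1) diffs
uniq, counts = np.unique(sorted_ids, return_counts=True)
per_id = np.split(valid, np.cumsum(counts - 1)[:-1])

for uid, d in zip(uniq, per_id):
    print(uid, d)   # 0 [0.2], 1 [0.3 0.6], 2 [1.2]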
Say you np.argsort by ids:
>>> inds = np.argsort(ids, kind='mergesort')
>>> inds
array([1, 3, 2, 4, 5, 0, 6])
Now sort times by this, np.diff, and prepend a nan:
>>> diffs = np.concatenate(([np.nan], np.diff(times[inds])))
>>> diffs
array([ nan,  0.2, -0.2,  0.3,  0.6, -1.1,  1.2])
These differences are correct except for the boundaries. Let's calculate those:
>>> boundaries = np.concatenate(([False], ids[inds][1:] == ids[inds][:-1]))
>>> boundaries
array([False,  True, False,  True,  True, False,  True], dtype=bool)
Now we can just do
diffs[~boundaries] = np.nan
Let's see what we got:
>>> ids[inds]
array([0, 0, 1, 1, 1, 2, 2])
>>> times[inds]
array([ 0.3,  0.5,  0.3,  0.6,  1.2,  0.1,  1.3])
>>> diffs
array([ nan,  0.2,  nan,  0.3,  0.6,  nan,  1.2])
The numpy_indexed package (disclaimer: I am its author) contains efficient and flexible functionality for these kind of grouping operations:
import numpy_indexed as npi
unique_ids, diffed_time_groups = npi.group_by(keys=ids, values=times, reduction=np.diff)
Unlike pandas, it does not require a specialized datastructure just to perform this kind of rather elementary operation.
I'm adding another answer, since, even though these things are possible in numpy, I think that the higher-level pandas is much more natural for them.
In pandas, you could do this in one step, after creating a DataFrame:
import pandas as pd
df = pd.DataFrame({'ids': ids, 'times': times})
df['diffs'] = df.groupby(df.ids).transform(pd.Series.diff)
This gives:
>>> df
   ids  times  diffs
0    2    0.1    NaN
1    0    0.3    NaN
2    1    0.3    NaN
3    0    0.5    0.2
4    1    0.6    0.3
5    1    1.2    0.6
6    2    1.3    1.2
Name
DMA Support — Description
Synopsis
#include <cyg/hal/bcm283x_dma.h>
ok = hal_dma_channel_init(hal_dma_channel *chan, cyg_uint8 permap, cyg_bool fast, hal_dma_callback *callback, CYG_ADDRWORD data);
hal_dma_channel_delete(hal_dma_channel *chan);
hal_dma_channel_set_polled(hal_dma_channel *chan, cyg_bool polled);
hal_dma_poll(void);
hal_dma_cb_init(hal_dma_cb *cb, cyg_uint32 ti, void *source, void *dest, cyg_uint32 size);
hal_dma_add_cb(hal_dma_channel *chan, hal_dma_cb *cb);
hal_dma_channel_start(hal_dma_channel *chan);
hal_dma_channel_stop(hal_dma_channel *chan);
Description
The HAL provides support for access to the DMA channels. This support is not intended to expose the full functionality of these devices and is mainly limited to supporting peripheral DMA. The API is therefore mainly oriented for use by device drivers rather than applications. The user is referred to the BCM2835 documentation for full details of the DMA channels, and to the SDHOST driver for an example of this API in use.
The DMA hardware consist of sixteen independent channels. DMA is initiated by attaching a chain of control blocks to a channel and starting it running. Each control block contains source and destination addresses, size, transfer direction and a number of other parameters. Of the sixteen channels available, some are reserved for use by the GPU. Also, the channels are divided into full function channels and lite channels that lack some functionality and have lower bandwidth. Full details are available in the BCM2835 documentation.
A DMA channel is represented by a hal_dma_channel
object that the client must allocate. Control blocks are
similarly represented by a hal_dma_cb object, which
again must be allocated by the client. DMA control blocks must be
aligned on a 32 byte boundary. The type definition in
bcm283x_dma.h has an alignment
attribute so that static allocations should be correctly aligned
by default; however care should be taken to align
dynamic allocations.
A DMA channel is initialized by calling
hal_dma_channel_init, the parameters are as
follows:
chan
- A pointer to the channel object to be initialized.
permap
- Peripheral map value. This is one of the CYGHWR_HAL_BCM283X_DMA_DREQ_XXXX values defined in bcm283x.h. It specifies the peripheral to or from which the transfer will be made.
fast
- This specifies whether the DMA channel should be a full featured fast channel or a reduced bandwidth lite channel. If a lite channel is specified and none are available a fast channel will be allocated. However, if a fast channel is specified and none are available then this routine will return 0 to indicate an error.
callback
- A pointer to a function that will be called when the DMA transfer has been completed. This will be called with a pointer to the channel, an event code, and a copy of the data parameter. The event code will be either CYGHWR_REG_BCM283X_DMA_CS_END to indicate a successful completion of the transfer, or CYGHWR_REG_BCM283X_DMA_CS_ERROR to indicate an error.
data
- An uninterpreted data value that will be passed to the callback. This would typically be a pointer to a client data structure.
If the initialization is successful the routine will return 1. The current implementation of the DMA API permanently allocates a physical channel when this routine is called. In the future the allocation of physical channels may be more dynamic, so the client should not assume that the channel in use is constant.
The
hal_dma_channel_delete function
deletes the given channel, releasing any resources and making
them available for reuse.
The
hal_dma_channel_set_polled function
marks a channel for polled operation only. Otherwise the
channel will enable interrupts and wait for an interrupt to
complete. If a channel is marked polled then it will only be
completed and its callback called during calls to
hal_dma_poll. Note that channels
not marked polled may also be completed during this call if
their interrupt has not yet fired.
A transfer control block is initialized by calling
hal_dma_cb_init. The parameters are as
follows:
cb
- A pointer to the control block to be initialized.
ti
- This is an initial value for the TI register field of the control block. This may contain any of the bits and fields specified for this register except the PERMAP field, which will be set from the value set in the channel. For simplicity the standard settings for common operations are defined by the DMA API; HAL_DMA_INFO_DEV2MEM initializes the control block for a single buffer transfer from a device to memory, and HAL_DMA_INFO_MEM2DEV for a transfer in the reverse direction. If a client needs to perform scatter/gather transfers, then it needs to set this argument more explicitly. In particular, the INTEN bit should normally only be set on the last control block of a chain.
source
- The source address for the transfer, either the start of a memory buffer or the data register of the appropriate device.
dest
- The destination address for the transfer, either the start of a memory buffer or the data register of the appropriate device.
size
- Transfer size in bytes.
Once initialized a control block may be added to a channel by
calling
hal_dma_add_cb. Control blocks
will be chained together on the channel in the order in which
they are added. The DMA engines operate on addresses in the
GPU address space, not the physical address space visible to
the ARM CPUs or the virtual address space set up by the
MMU. During initialization the source and destination
addresses will be translated into GPU addresses, and after it
is added, the
dma_next field of the control
block will be translated to a GPU address. So, care should be
taken when inspecting an active control block and it should
not be changed.
Once a channel has been initialized and any control blocks have
been added the transfers may be started by calling
hal_dma_channel_start. For channels not
marked polled, interrupts will fire and the callback will be
called from a DSR when the control block chain has been
completed. For polled channels, it will be necessary to call
hal_dma_poll until all channels have
completed.
An ongoing transfer may be halted by calling
hal_dma_channel_stop. This function
should also be called as a matter of course when a transfer
has completed normally.
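The following sketch shows how these calls fit together for a simple polled, device-to-memory transfer. It is illustrative only: the DREQ value, buffer, device register address and the exact hal_dma_callback signature are assumptions, not taken from this document.

#include <cyg/hal/bcm283x_dma.h>

static volatile int dma_done = 0;

/* Completion callback. The event is CYGHWR_REG_BCM283X_DMA_CS_END on success or
   CYGHWR_REG_BCM283X_DMA_CS_ERROR on error; the callback signature is assumed
   from the description above. */
static void my_dma_callback(hal_dma_channel *chan, cyg_uint32 event, CYG_ADDRWORD data)
{
    dma_done = (event == CYGHWR_REG_BCM283X_DMA_CS_END) ? 1 : -1;
}

static hal_dma_channel chan;
static hal_dma_cb      cb;        /* the type's alignment attribute keeps this on a 32 byte boundary */
static cyg_uint8       buf[512];  /* destination buffer (example only) */

void example_transfer(void *device_data_reg, cyg_uint8 dreq)
{
    /* Ask for a full-featured (fast) channel bound to the given peripheral DREQ value. */
    if (!hal_dma_channel_init(&chan, dreq, 1, my_dma_callback, (CYG_ADDRWORD)0))
        return;                              /* no suitable channel available */

    hal_dma_channel_set_polled(&chan, 1);    /* complete via hal_dma_poll() rather than interrupts */

    /* One control block describing a single device-to-memory transfer of 512 bytes. */
    hal_dma_cb_init(&cb, HAL_DMA_INFO_DEV2MEM, device_data_reg, buf, sizeof(buf));
    hal_dma_add_cb(&chan, &cb);

    hal_dma_channel_start(&chan);
    while (dma_done == 0)
        hal_dma_poll();                      /* runs callbacks for polled channels */

    hal_dma_channel_stop(&chan);             /* also recommended after normal completion */
    hal_dma_channel_delete(&chan);
}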
Subversion allows users to invent arbitrarily-named versioned properties on files and directories, as well as unversioned properties on revisions. The only restriction is on properties prefixed with “svn:”. Properties in that namespace are reserved for Subversion's own use. While these properties may be set by users to control Subversion's behavior, users may not invent new “svn:” properties.
svn:executable
If present on a file, the client will make the file executable in Unix-hosted working copies. See the section called “File Executability”.
svn:mime-type
If present on a file, the value indicates the file's mime-type. This allows the client to decide whether line-based contextual merging is safe to perform during updates, and can also affect how the file behaves when fetched via web browser. See the section called “File Content Type”.
svn:ignore
If present on a directory, the value is a list of unversioned file patterns to be ignored by svn status and other subcommands. See the section called "Ignoring Unversioned Items".
svn:keywords
If present on a file, the value tells the client how to expand particular keywords within the file. See the section called “Keyword Substitution”.
svn:eol-style
If present on a file, the value tells the client how to manipulate the file's line-endings in the working copy. See the section called “End-of-Line Character Sequences”.
svn:externals
If present on a directory, the value is a multi-line list of other paths and URLs the client should check out. See the section called “Externals Definitions”.
svn:special
If present on a file, indicates that the file is not an ordinary file, but a symbolic link or other special object.
svn:needs-lock
If present on a file, tells the client to make the file read-only in the working copy, as a reminder that the file should be locked before editing begins. See the section called “Lock Communication”.
svn:author
If present, contains the authenticated username of the person who created the revision. (If not present, then the revision was committed anonymously.)
svn:date
Contains the UTC time the revision was created, in ISO format. The value comes from the server machine's clock.
svn:log
Contains the log message describing the revision.
svn:autoversioned
If present, the revision was created via the autoversioning feature. See the section called “Autoversioning”.
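As a quick, illustrative example (the paths, values and revision number below are placeholders): versioned properties are set on working-copy items with svn propset and inspected with svn propget or svn proplist, while the unversioned revision properties are addressed with --revprop and a revision:

$ svn propset svn:ignore "*.o" .
$ svn propset svn:mime-type image/png logo.png
$ svn propget svn:mime-type logo.png
$ svn propget svn:log --revprop -r 42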
Alexandra Ciorra (Python Web Development Techdegree Student, 796 Points)
I'm not sure how to finish the second part? Is my first correct?
I am trying to figure this out but most of the time I need a boost to understand. Is my first part correct? If not, where did I go wrong? How do I finish the else?
Thanks!
def squared (num):
    try:
        num=(int(input(num*num)
    else:
        return len(num)

# EXAMPLES
# squared(5) would return 25
# squared("2") would return 4
# squared("tim") would return "timtimtim"
1 Answer
behar (10,780 Points)
Hey Alexandra! You're slightly off track here. I would suggest watching the previous video again because it doesn't seem you have picked up on the try and except idea. However, in the comments there are examples of what the function is expected to return for different kinds of inputs. While your code is not finished, it still has a couple of problems. First is that you're trying to int(input(num)). There should be no input whatsoever in this function. What the challenge says is: it will provide you with an unknown string or number, and you have chosen to store this number or string in the variable "num". Now it wants you to try to turn this value into an integer; if you can do that, return the square of that integer, if not, return the length of the string multiplied by the string. Here is the code:
def squared(num):  #Here you're saying that the string or number should be stored in the variable num
    try:  #Here you're asking the computer to try to do the following
        num = int(num)  #Here we're trying to turn the num variable into an integer, and store it in the variable "num"
        return num ** 2  #Here we're returning the square of that number
    except ValueError:  #If the variable cannot be turned into an integer it will raise a ValueError, and will do what comes after this code
        return len(num) * num  #Here we're telling it to return the length of the string, multiplied by the string
If the last part is confusing: basically, if we have the string "test" and say "test" * 3, we will get "testtesttest". And the len() function allows us to find the length of a string and return that length as an integer. Hope this helps, but please feel free to write back if you need more help!
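For reference, a quick sketch of what calling the corrected function should print, matching the examples from the question:

print(squared(5))      # 25
print(squared("2"))    # 4
print(squared("tim"))  # timtimtim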
behar (10,780 Points)
Hey again Alexandra! Great attitude, and yeah, the videos can be a little quick sometimes. You can mark this question as "solved" by selecting a "best answer".
Alexandra Ciorra (Python Web Development Techdegree Student, 796 Points)
Thank you so much for taking the time to explain. I pick up things along the way that I may have missed. For example any time I've seen int() used it's with input. I didn't know int would be used with just an argument. It's nice to have this avenue for learning as well because the videos don't quite explain everything. I understand now but I am going to take your advice and watch the video again.
usocket – socket module
This module provides access to the BSD socket interface.
See the corresponding CPython module for comparison.
Difference to CPython
CPython used to have a socket.error exception which is now deprecated and is an alias of OSError. In MicroPython, use OSError directly.
socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP)
Create a new socket using the given address family, socket type and protocol number.
socket.getaddrinfo(host, port)
Difference to CPython
CPython raises a socket.gaierror exception (OSError subclass) in case of error in this function. MicroPython doesn't have socket.gaierror and raises OSError directly. Note that error numbers of getaddrinfo() form a separate namespace and may not match error numbers.
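In practice, the usual MicroPython idiom is to take the socket address out of the first tuple returned by getaddrinfo() (the host and port here are just placeholders):

import usocket as socket

# getaddrinfo() returns a list of 5-tuples; the last element is the address to connect to
addr = socket.getaddrinfo('example.com', 80)[0][-1]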
Constants
socket.SOL_*
Socket option levels (an argument to
setsockopt()). The exact inventory depends on a board.
socket.SO_*
Socket options (an argument to
setsockopt()). The exact inventory depends on a board.
Constants specific to WiPy:
Methods
socket.
Mark the socket..
Difference to CPython
CPython raises a socket.timeout exception in case of timeout, which is an OSError subclass. MicroPython raises an OSError directly instead. If you use except OSError: to catch the exception, your code will work both in MicroPython and CPython.
(...) and newline are not supported.
Difference to CPython
As MicroPython doesn't support buffered streams, the value of the buffering parameter is ignored and treated as if it was 0 (unbuffered).
Difference to CPython
Closing the file object returned by makefile() WILL close the original socket as well.
socket.read([size])
Read up to size bytes from the socket. Return a bytes object. If size is specified then read at most that many bytes. Otherwise, read at most len(buf) bytes. Just as read(), this method follows the "no short reads" policy.
Return value: number of bytes read and stored into buf.
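Putting the pieces together, a minimal TCP client might look like the sketch below. The host, port and request are placeholders, and send() is assumed from the usual socket API even though it is not shown in the excerpt above:

import usocket as socket

addr = socket.getaddrinfo('example.com', 80)[0][-1]
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(addr)
s.send(b'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n')
print(s.read(128))   # read up to 128 bytes of the response
s.close()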
Contains the basic usage parameters of a data source.
Configuration parameters for the basic usage of a data source are specified using a data source parameters bean.
This section describes the following attributes:
The algorithm determines the connection request processing for the multi data source.
You can specify one of the following algorithm types:
Connection requests are sent to the first data source in the list; if the request fails, the request is sent to the next data source in the list, and so forth. The process is repeated until a valid connection is obtained, or until the end of the list is reached, in which case an exception is thrown.
The multi data source distributes connection requests evenly to its member data sources. With this algorithm, the multi data source also provides failover processing. That is, if a request fails, the multi data source sends the request to the next data source in the list until a valid connection is obtained, or until the end of the list is reached, in which case an exception is thrown.
The name of the application class to handle the callback sent when a multi data source is ready to failover or fail back connection requests to another data source within the multi data source.
The name must be the absolute name of an application class that
implements the
weblogic.jdbc.extensions.ConnectionPoolFailoverCallback
interface.
Determines the transaction protocol (global transaction processing behavior) for the data source. Options include:
TwoPhaseCommit - Standard XA transaction processing. Requires an XA driver.
LoggingLastResource - A performance enhancement for one non-XA resource.
EmulateTwoPhaseCommit - Enables one non-XA resource to participate in a global transaction, but has some risk to data.
OnePhaseCommit - One-phase XA transaction processing using a non-XA driver. This is the default setting.
None - Support for local transactions only.
The optimal prefetch size.
Specifies the scoping of the data source.
You can specify one of the following scopes:
Specifies that the data source is bound in the cluster-wide JNDI tree with the JNDIName specified so that the data source is available for use to any JDBC client across the cluster.
This is the default setting.
Specifies that the data source is bound in the application's local namespace with the JNDIName specified so that the data source is available for use only by JDBC clients within the application.
I know it took me a while to get used to playing with branches, and I
still get nervous when doing something creative. So I've been trying
to get more comfortable, and wrote the following to document what I've
learned.
It's a first draft - I just finished writing it, so there are probably
some glaring errors - but I thought it might be of interest anyway.
* Branching and merging in git
In CVS, branches are difficult and awkward to use, and generally
considered an advanced technique. Many people use CVS for a long time
without departing from the trunk.
Git is very different. Branching and merging are central to effective use
of git, and if you aren't comfortable with them, you won't be comfortable
with git. In particular, they are required to share work with other
people.
The only things that are a bit confusing are some of the names.
In particular, at least when beginning:
- You create new branches with "git checkout -b".
"git branch" should only be used to list and delete branches.
- You share work with "git fetch" and "git push". These are opposites.
- You merge with "git pull", not "git merge". "git pull" can
also do a "git fetch", but that's optional. What's not optional
is the merge.
* A brief digression on command names.
Originally, all git commands were named "git-foo". When there got to
be over a hundred, people started complaining about the clutter in
/usr/bin. After some discussion, the following solution was reached:
- It's now possible to place all of the git-foo commands into a separate
directory. (Despite the complaints, not too many people are doing it
yet.)
- One option for git users is to add that directory to their $PATH.
- Another is provided by a wrapper called just "git". It's intended to
live in a public directory like /usr/bin, and knows the location of
the separate directory. When you type "git foo", it finds and executes
"git-foo".
- Some simple commands are built into the git wrapper. When you type
"git add", it just does it internally. (On the git mailing list,
you will see patches like "make git diff a builtin"; this is what
they're talking about.)
- For compatibility, for each builtin, there is a "git-add" file,
which is just a link to the "git" wrapper. It looks at the name it
was invoked as to figure out what it should do.
The one confusing thing is that, although people usually type "git foo"
in examples, they're interchangeable in practice. I go back and forth
for no good reason. The main caveat is that to get the man page, you
still need to type "man git-foo". Fortunately, there are two other ways
to get the man page:
1) "git help foo"
2) "git foo --help"
Git doesn't have a specialized built-in help system; it just shows you
the man pages.
One outstanding problem with git's man pages is that often the most detail
is in the command page that was written first, not the user-friendly
one that you should use. For example, there are a number of special
cases of the "git diff" command that were written first, and the man
pages for these commands (git-diff-index, git-diff-files, git-diff-tree,
and git-diff-stages) are considerably more informative than the page for
plain git-diff, even though that's the command that you should use 99%
of the time.
* Git's representation of history
As you recall from Git 101, there are exactly four kinds of objects in
Git's object database. All of them have globally unique 40-character hex
names made by hashing their type and contents. Blob objects record file
contents; they contain bytes. Tree objects record directory contents;
they contain file names, permissions, and the associated tree or blob
object names. Tag objects are shareable pointers to other objects;
they're generally used to store a digital signature.
And then, we come to commit objects. Every commit points to (contains
the name of) an associated tree object which records the state of the
source code at the time of the commit, and some descriptive data (time,
author, committer, commit comment) about the commit.
And most importantly, it contains a list of "parent commits", older
commits from which this one is derived. These pointers are what produce
the history graph.
Typically only one commit (the initial commit) has zero parents. It's
possible to have more than one such commit (if you merge two projects
with different history), but that's unusual.
Many commits have exactly one parent. These are made by a normal commit
after editing. From a branching and merging point of view, they're not
too exciting.
And then there are commits which have multiple parents. Two is most
common, but git allows many more. (There's a limit of sixteen in the
source code, and the most anyone's ever used in real life is 12, and
that was generally regarded as overdoing it. Google on "doedecapus"
for discussion of it.)
Finally, there are references, stored in the .git/refs directory.
These are the human-readable names associated with commits, and the
"root set" from which all other commits should be reachable.
These references are generally divided into two types, although
there is no fundamental difference:
- Tags are references that are intended to be immutable.
The "v1.2" tag is a historical record. A tag may point to
a tag object (which will hold a signature), or just to a commit
directly. The latter isn't cryptographically authenticated, but
works just fine for everyday use.
- Heads are references that are intended to be updated. "Head"
is actually synonymous with "branch", although one emphasizes the
tip more, while the other directs your attention to the entire
path that got there.
Either way, they're just a 41-byte file that contains a 40-byte hex
object ID, plus a newline. Tags are stored in .git/refs/tags, and heads
are stored in .git/refs/heads. Creating a new branch is literally just
picking a file name and writing the ID of an existing commit into it.
The git programs enforce the immutability of tags, but that's a safety
feature, not something fundamental. You can rename a tag to the heads
directory and go wild.
The only limit on branches is clutter. A number of git commands have
ways to operate on "all heads", and if you have too many, it can get
annoying. If you're not using a branch, either delete it, or move it
somewhere (like the tags directory) where it won't clutter up the list of
"currently active heads".
(Note that CVS doesn't have this all-heads default, so people tend to
use longer branch names and keep them around after they've been merged
into the trunk. Old CVS repositories converted to git generally need
an old-branch cleanup.)
Another thing that's worth mentioning is that head and tag names can
contain slashes; i.e. you're allowed to make subdirectories in the
.git/refs/heads and .git/refs/tags directories. See the man page
for "git-check-ref-format" for full details of legal names.
* Naming revisions
CVS encourages you to tag like crazy, because the only other way to
find a given revision is by date. Git makes it a lot easier, so most
revisions don't need names.
You can find a full description in the git-rev-parse man page, but here's
a summary.
First of all, every commit has a globally unique name, its 40-digit hex
object ID. It's a bit long and awkward, but always works. This is useful
for talking about a specific commit on a mailing list. You can abbreviate
it to a unique prefix; most people find about 8 digits sufficient.
(Subversion is easier yet, because it assigns a sequential number to each
commit. However, that isn't possible in a distributed system like git.)
Second, you can refer to a head or tag name. Git looks in the
following places, in order, for a head:
1) .git
2) .git/refs
3) .git/refs/heads
4) .git/refs/tags
You should avoid having e.g. a head and a tag with the same name, but
if you do, you can specify one or the other with heads/foo and tags/foo.
Third, you can specify a commit relative to another. The simplest
one is "the parent", specified by appending ^ to a name. E.g. HEAD^
or deadbeef^. If there are multiple parents, then ^ is the same as ^1,
and the others are ^2, ^3, etc.
So the last few commits you've made are HEAD, HEAD^, HEAD^^, HEAD^^^, etc.
After a while, counting carets becomes annoying, so you can abbreviate
^^^^ as ~4. Note that this only lets you specify the first parent.
If you want to follow a side branch, you have to specify something like
"master~305^2~22".
* Converting between names
Git has two helpers (programs designed mainly for use in shell scripts)
to convert between global object IDs and human-readable names.
The first is git-rev-parse. This is a general git shell script helper,
which validates the command line and converts object names to absolute
object IDs. Its man page has a detailed description of the object
name syntax.
The second is git-name-rev, which converts the other way around. It's
particularly useful for seeing which tags a given commit falls between.
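A quick illustration (the tag name and the abbreviated object ID below are only
placeholders):

    git rev-parse v1.2              # prints the full 40-digit object ID that v1.2 points to
    git name-rev --tags 8f3a2b1c    # prints a human-readable name for that commit, relative to nearby tags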
* Working with branches, the trivial cases.
By convention, the local "trunk" of git development is called "master".
This is just the name of the branch it creates when you start an empty
repository. You can delete it if you don't like the name.
If you create your repository by cloning someone else's repository, the
remote "master" branch is copied to a local branch named "origin". You
get your own "master" branch which is not tied to the remote repository.
There is always a current head, known as HEAD. (This is actually a
symbolic link, .git/HEAD, to a file like refs/heads/master.) Git requires
that this always point to the refs/heads directory.
Minor technical details:
1) HEAD used to be a Unix symlink, and can still be thought of that
way, but for Microsoft support, this is now what's called a
"symbolic reference" or symref, and is a plain file containing
"ref: refs/heads/master". Git treats it just like a symlink.
There's a git-update-ref helper which writes these.
2) While HEAD must point to refs/heads, it's legal for it to
point to a file that doesn't exist. This is what happens
before the first commit in a brand new repository.
When you do "git commit", a new commit object is created with the old
HEAD as a parent, and the new commit is written to the current head
(pointed to by HEAD).
* The three uses of "git checkout"
Git checkout can do three separate things:
1) Change to a new head
git checkout [-f|-m] <branch>
This makes <branch> the new HEAD, and copies its state to the index
and the working directory.
If a file has unsaved changes in the working directory, this tries
to preserve them. This is a simple attempt, and requires that the
modified files(s) are not altered between the old and new HEADs.
In that case, the version in the working directory is left untouched.
A more aggressive option is -m, which will try to do a three-way
(intra-file) merge. This can fail, leaving unmerged files in the
index.
An alternative is to use -f, which will overwrite any unsaved changes
in the working directory. This option can be used with no <branch>
specified (defaults to HEAD) to undo local edits.
2) Revert changes to a small number of files.
git checkout [<revision>] [--] <paths>
will copy the version of the <paths> from the index to the working
directory. If a <revision> is given, the index for those paths will
be updated from the given revision before copying from the index to
the working tree.
Unlike the version with no <paths> specified, this does NOT update
HEAD, even if <paths> is ".".
3) Create a branch.
git checkout [-f|-m] -b <branch> [revision]
will create, and switch to, a new branch with the given name.
This is equivalent to
git branch <branch> [<revision>]
git checkout [-f|-m] <branch>
If <revision> is omitted, it defaults to the current HEAD, in which
case no working directory files are altered.
This is the usual way that one checks out a revision that does not
have an existing head pointing to it.
* Deleting branches
"git branch -d <head>" is safe. It deletes the given <head>, but first
it checks that the commit is reachable some other way. That is, you
merged the branch in somewhere, or you never did any edits on that branch.
It's a good idea to create a "topic branch" when you're working on
anything bigger than a one-liner, but it's also a good idea to delete
them when you're done. It's still there in the history.
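A typical topic-branch lifecycle looks something like the following sketch (the
branch name is made up, and "git pull ." merges from the local repository itself):

    git checkout -b fix-typo master     # create and switch to the topic branch
    ... edit, git commit -a ...
    git checkout master
    git pull . fix-typo                 # merge it back; a fast-forward if master didn't move
    git branch -d fix-typo              # safe delete: refuses if the work isn't merged in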
* Doing rude things to heads: git reset
If you need to overwrite the current HEAD for some reason, the tool to
do it with is "git reset". There are three levels of reset:
git reset --soft <head>
This overwrites the current HEAD with the contents of <head>.
If you omit <head>, it defaults to HEAD, so this does nothing.
git reset [<head>]
git reset --mixed [<head>]
These overwrite the current HEAD, and copy it to the index,
undoing any git-update-index commands you may have executed.
If you omit <head>, it default to HEAD, so there is no change
to the current branch, but all index changes are undone.
git reset --hard [<head>]
This does everything mentioned above, and updates the
working directory. This throws away all of your in-progress
edits and gets you a clean copy. This is also commonly
used without an explicit <head>, in which case the current
HEAD is used.
* Using git-reset to fix mistakes
"Oh, no! I didn't mean to commit *that*! How do I undo it?"
If you just want to undo a commit, then you can use "git reset HEAD^"
to return the current HEAD to the previous version. If you want to leave
the commit in the index (this only applies to you if you are familiar with
using the index; see below), then you can use "git reset --soft HEAD^".
And if you want to blow away every record of the changes you made,
you can use "git reset --hard HEAD^"
If you just want a stupid trivial mistake and want to replace the most
recent commit with a corrected one, "git commit --amend" is your friend.
It makes a new commit with HEAD^ rather than HEAD as its ancestor.
* Fixing mistakes without git-reset
git-reset has the problem that it doesn't preserve hacking in progress
in the working directory. It can leave the working directory alone
(making everything a "hack in progress"), but it can't merge changes
like git checkout.
So, suppose you've been trying something that should have been simple, and
made three commits before realizing that the problem is harder than you
thought and you want your work so far to be on a new branch of its own;
committing them on the current HEAD (I'll call it "old") was a mistake.
You don't want to erase anything, just rename it. Make "new" a copy of
the current "old" and move old back to HEAD^^^ (three commits ago).
While there are ways to do that using git-reset, far better is
to use "git branch -f":
git checkout -b new
Create (and switch to) the "new" branch.
git branch -f old HEAD^^^
Forcibly move "old" back three versions.
(You could also use old~3 or new^^^ or any synonymous name.)
You can use a similar trick to rename a branch. If it's the current
HEAD, then:
git checkout -b newname
git branch -d oldname
and if it's not, then
git branch newname oldname
git branch -d oldname
An alternative in the latter case is to just use mv on the raw
.git/refs/heads/oldname file.
* How do I check out an old version?
A very common beginning question is how to check out an old version.
Say you need to compile an old release for test purposes. "git checkout
v1.2" gives a funny error message. What's going on?
Well, "git checkout" makes the current HEAD point to the head that
you specify. And, as previously mentioned, git requires that it point
to something in the .git/refs/heads directory. So you can't do that.
If you're busy doing things in your working directory, and don't want to
overwrite your work with an old version, then you can get a snapshot with
the (old) git-tar-tree or (new) git-archive commands. These produce a
tar file (git-archive can also produce a zip file) which is a snapshot
of any version you like. You can then unpack this file in a different
directory and build it.
However, if you haven't got any edits in progress, and want to check out
the old version into your working directory, just create a temp branch!
git checkout -b temp v1.2
Will do what you want. This will also do what you want if you have a
local edit (like the "#define DEBUG 1" mentioned above) that you want
to preserve while working on the old version.
You'll see this in use if you ever use the (highly recommended) git-bisect
tool. It creates a branch called "bisect" for the duration of the bisect.
(Yes, I have to confess, I sometimes wish that git would enforce the
"HEAD must point to .git/refs/heads" rule when committing (checking in)
rather than when checking out, but that's the way git has grown up.)
Note that if you want *exactly* an old version, with no local hacks,
make sure there are none (with "git status") when doing this. It's more
convenient if you do it before the checkout, but you'll get the same
answer if you ask afterwards.
Now, what about the complex case: you have local hacks that you
want to keep, but don't want polluting the old version?
Well, one way of the other, you'll have to commit it. If you don't mind
committing your changes to the current branch ("git commit -a"), do that.
If they're not ready to commit, you can commit them anyway, and back
them out when you're done:
git commit -a -m "Temp commit"
git checkout -b temp v1.2
make ; make test ; whatever
git checkout master
git branch -d temp
git reset HEAD^
This leaves both the working directory and the master head in the states
they were in at the beginning.
If you don't like committing to the master branch, you can make a new one.
In this example, it's "work in progress", a.k.a. "wip":
git checkout -b wip
git commit -a -m "Temp commit"
git checkout -b temp v1.2
make ; make test ; whatever
git checkout wip
git branch -d temp
git reset master
git checkout master # Won't change working directory
git branch -d wip
* Examining history: git-log and git-rev-list
In another example of docs being better on the first command written,
the all-purpose utility for examining history is "git log", but all of
the examples of clever ways to use it are in the git-rev-list man page.
And git-log also has most of git-diff's options.
Other utilities, notably the gitk and qgit GUIs, also use the git-rev-list
command-line options, so it's well worth learning them.
git-rev-list gives you a filtered subset of the repository history.
There are two basic ways that you can do the filtering:
1) By ancestry. You specify a set of commits to include all the
ancestors of, and another set to exclude all the ancestors of.
(For this purpose, a commit is considered an ancestor of itself.)
So if you want to see all commits between v1.1 and v1.2, you
can specify
git log ^v1.1 v1.2
or, with a more convenient syntax
git log v1.1..v1.2
However, there are times when you want to specify something more
complex. For example, if a big branch that had been in progress since
v1.0.7 was merged between v1.1 and v1.2, but you don't want to see it,
you could specify any of:
git log v1.2 ^v1.1 ^bigbranch
git log ^bigbranch v1.1..v1.2
git log ^v1.1 bigbranch..v1.2
They're all equivalent. Another special syntax that's sometimes
handy is
git log branch1...branch2
Note the three dots. This generates the symmetric difference between
the two; basically it's a diff between the commits that went into
each of them.
"git log" by default pipes its output through less(1), and generates
its output from newest to oldest on the fly, so there's no great
speed penalty to not specifying a starting place. It'll generate a
few screen fulls more than you look at, but not waste any more effort
than that.
2) By path name. This is a feature which appears to be unique to git.
If you give git-rev-list (or git-log, or gitk, or qgit) a list of
pathname prefixes, it will list only commits which touch those
paths. So "git log drivers/scsi include/scsi" will list only
commits which alters a file whose name begins with drivers/scsi
or include/scsi.
(If there's any possible ambiguity between a path name and a commit
name, git-rev-list will refuse to proceed. You can resolve it by
including "--" on the command line. Everything before that is a
commit name; everything after is a path.)
This filter is in addition to the ancestry filter. It's also rather
clever about omitting unnecessary detail. In particular, if there's
a side branch which doesn't touch drivers/scsi, then the entire branch,
and the merge at the end, will be removed from the log.
You can additionally limit the commits to a certain number, or by date,
author, committer, and so on.
By default, "git log" only shows the commit messages, so it's important to
write good ones. Other tools compress commit messages down to
the first line, so try to make that as informative as possible.
* History diagrams
When talking about various situations involving multiple branches,
people often find it handy to draw pictures. Gitk draws nice pictures
vertically, but for e-mail, ASCII art drawn horizontally is often easier.
Commits are shown as "o", and the links between them with lines drawn with
- / and \. Time goes left to right, and heads may be labelled with names.
For example:
        o--o--o <-- Branch A
       /
o--o--o <-- master
       \
        o--o--o <-- Branch B
If someone needs to talk about a particular commit, the character "o"
may be replaced with another letter or number.
* Trivial merges: fast-forward and already up-to-date.
There are two kinds of merge that are particularly simple, and you will
encounter them in git a great deal. They are mirror images.
Suppose that you are working on branch A and merge in branch B, but no
work has been done to branch B since the last time you merged, or since
you spawned branch A from it. That is, the history looks like
o--o--o--o <-- B
          \
           o--o--o <-- A

or

o--o--o--o--o--o <-- B
    \           \
     o--o--o--o--o <-- A
If you then merge B into A, A is described as "already up to date".
It is already a strict superset of B, and the merge does nothing.
In particular, git will not create a dummy commit to record the fact that
a merge was done. It turns out that are a number of bad things that would
happen if you did this, but for now, I'll just say that git doesn't do it.
Now, the opposite scenario is the "fast-forward" merge. Suppose you
merge A into B. Again, A is a strict superset of B.
In this case, git will simply change the head B to point to the same
commit as A and say that it did a "fast-forward" merge. Again, no commit
object is created to reflect this fact.
The effect is to unclutter the git history. If I create a topic branch to
work on a feature, do some hacking, and then merge the result back into
the (untouched!) master, the history will look just like I did all the
work on the master directly. If I then delete the topic branch (because
I'm done using it), the repository state is truly indistinguishable.
While the topic branch existed, you could have done something to the
master branch, in which case the final merge would have been non-trivial,
but if that didn't happen, git produces a simple, easy-to-follow linear
history.
Some people used to heavyweight branches find this confusing; they
think a merge is a big deal and it should be memorialized, but there
are actually excellent reasons for doing this.
The most important one is that a fit of merging back and forth will
eventually end. Suppose that branches A and B are maintained by separate
developers who like to track each other's work closely.
If the fast-forward case did create a commit, then merging A into B
would produce
o--o--o--o---------o <-- B
\ /
o--o--o <-- A
then merging B into A would produce:
o--o--o--o---------o <-- B
\ / \
o--o--o---o <-- A
and further merges would produce more and more dummy commits, all without
ever reaching a steady state, and without making it obvious that the
two heads are actually identical.
Since history lasts forever, cluttering it up with unimportant stuff is a
burden to all future users, and not a good idea. Allowing the merge of a
branch to be seamless in the simple case encourages lightweight branches.
If you _might_ need a separate branch, create it. If it turned out that
you didn't, it won't make a difference.
* Exchanging work with other repositories
The basic tools for exchanging work with other repositories are "git
fetch" and "git push". The fact that "git pull" is not the opposite of
"git push" is often confusing to beginners (it's a superset of git fetch),
but that's the terminology that has grown up.
The unit of sharing in git is the branch. If you've used branches in
CVS, you'll be familiar with using "CVS update" to pull changes from your
"current branch" in the repository into your working directory.
In Git, you don't pull into the working directory, but rather into a
tracking branch. You set up a branch in your repository which will be
a copy of the branch in the remote repository. For example, if you use
"git clone", then the remote "master" branch is tracked by the local
"origin" branch.
Then, when you do a "git fetch", git fetches all of the new commits
and sets the origin head to point to the newly fetched head of the
remote branch.
By default, git checks that this is a trivial fast-forward merge, that
is, that it is not throwing away history. If it finds something like:
o--o--o--o--o--o <-- remote master
\
o <-- Local origin
It will complain and abort the fetch. This is usually a warning that
something has gone wrong - in particular, you forgot that this was
supposed to be a tracking branch and committed some work to it - and it
aborts before throwing your work away.
However, sometimes the remote git user will have a branch name that they
delete and re-create frequently. There are plenty of reasons to do this.
The most common is doing a "test merge" between various branches in
progress. They're all unfinished, so the developer of branch A doesn't
want to merge in all the new bugs in branch B, but a tester might want
to create a merged version with both sets of bugs for testing.
The merged version is not intended to be a permanent part of history -
it'll get deleted after the test - but it can still be useful to have
a draft copy.
In this case, you can mark the source branch with a leading "+", to
disable this sanity check. (See the git-fetch man page for details.)
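As a sketch, with a made-up branch name, the command-line form looks like:
git fetch origin +refs/heads/test-merge:refs/heads/test-merge
The leading "+" on the refspec means "update my copy of test-merge even if
it is not a fast-forward."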
Note that in this case, you should specifically avoid merging from such
a branch into any non-test branches of your own. It is, as mentioned,
not intended to be a permanent part of history, so don't make it part
of your permanent history. (You still might want to test-merge it with
your work in progress, of course.)
The fact that you should know to treat such branches specially is why
git doesn't try to automatically cope with them.
* Alternate branch naming
The original git scheme mixes tracking branches with all the other heads.
This requires that you remember which branches are tracking branches and
which aren't. Hopefully, you remember what all your branches are for,
but if you track a lot of remote repositories, you might not remember
what every remote branch is for and what you called it locally.
* Remotes files
You can specify what to fetch on the git-fetch command line. However,
if you intend to monitor another repository on an ongoing basis,
it's generally easier to set up a short-cut by placing the options in
.git/remotes/<name>.
The syntax is explained in the git-fetch man page. When this is set
up, "git fetch <name>" will retrieve all the branches listed in the
.git/remotes/<name> file. The ability to fetch multiple branches at
once (such as release, beta, and development) is an advantage of using
a remotes file.
You can also create the remotes file "origin" (not necessarily any
relation to the branch named "origin"), which is the default for
git-fetch. If you have a single primary "upstream" repository that
you sync to, place it in the origin remotes file, and you can just type
"git fetch" to get all the latest changes.
Note that branches to fetch are identified by "Pull: " lines in the
remotes file. This is another example of the fetch/pull confusion.
git-pull will be explained eventually.
* Remote tags
TODO: Figure out how remote tags work, under what circumstances
they are fetched, and what git does if there are conflicts.
* Exchanging work with other repositories, part II: git-push
It's simpler to set up git sharing on a pull basis. If your source
code isn't secret, you can set up a public read-only server very easily
(see the git-daemon man page for details), and have others fetch from that.
However, N developers all pulling from each other is an N^2 mess.
Some centralization helps.
One way is to have a central coordinator (like Linus) who pulls from
all of the developers, and who they in turn pull from.
The other is to have a central repository that people can push to.
This generally requires an ssh login on the server. You can use git-shell
as the login shell if all you want to allow the account to do is git
fetch and push. (You can use the hook scripts to enforce rules about
who's allowed to do what to which branch.)
Git-push to the remote machine works exactly like git-fetch from the
remote machine. The objects are moved over, and the branches pushed to
are fast-forwarded. If fast-forward is impossible, you get an error.
So if you have multiple people committing to a branch on the server,
you will not be allowed to push if someone has pushed more to that branch
since last time you fetched it.
You have to merge the changes locally, and re-try the push when you've
got a new head that includes the most recently pushed work as an ancestor.
This is exactly like "cvs commit" not working if your recent checkout
wasn't the (current) tip of the branch, but git can upload more than
one commit.
The simplest way to resolve the conflict is to merge the remote head with
your local head. This is easiest if you have different local branches
for fetching the remote repository and for pushing to it.
That is, you have one head that just tracks the master repository's
main branch, and another that you add your work to, and push from.
This makes merging simpler when there are conflicts.
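A sketch of that workflow, with made-up branch and repository names:
git fetch origin # update the tracking branch
git checkout mywork
git pull . origin # merge the newly fetched head into your work
git push example.org:/path/to/repo.git mywork:master
The "mywork:master" refspec on the push says "take my local mywork head and
fast-forward the remote master to it."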
Another use for git-push, even for a solo developer, is sharing your work
with the world. You can set up a public git server on a high-bandwidth
machine (possibly rented from a hosting service) and then push to it to
publish something.
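For example (hypothetical URL):
git push git+ssh://my.server.example/pub/project.git master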
* Merging (finally!)
I went through everything else first because the most common merge case
is local changes with remote changes. Not that you can't merge two
branches of your own, but you don't need to do that nearly as often.
The primitive that does the merging is called (guess what?) git-merge.
And you can use that if you want. If you want to create a so-called
octopus merge, with more than two parents, you have to.
However, it's usually easier to use the git-pull wrapper. This merges
the changes from some other branch into the current HEAD and generates
a commit message automatically.
git-merge lets you specify the commit message (rather than generating it
automatically) and use a non-HEAD destination branch, but those options
are usually more annoying than useful.
The basic git-pull syntax is
git-pull <repository> <branch>
The repository can be any URL that git supports. Including, particularly,
a local file. So to do a simple local merge, you just type
git-pull . <branch>
So after doing some hacking on branch "foo", you would
git checkout master
git pull . foo
and ba-boom, all is done.
Now, you can also specify a remote repository to merge from, using a
git://, http:// or git+ssh:// URL. This is what Linus does all day long,
and why the git-pull tool is optimized to allow that. It uses git-fetch
to fetch the remote branch without assigning it a branch name (it gets
the special name FETCH_HEAD temporarily), and then merges it into the
current HEAD directly.
There is absolutely nothing wrong with doing that, but beginners often
find it confusing to have a single short command do quite so much.
And if you are working closely with someone, it's often more convenient
and less confusing to keep local tracking branches. Then you can
git fetch upstream # Fetches 'origin'
git pull . origin
It's also possible to give just a single remotes file name to git-pull:
git pull upstream
That does a git fetch, updating all of the listed branches as usual,
then merges the _first_ listed branch into HEAD.
By the way: don't blink, you might miss it! As I mentioned, pulling is
a very big part of Linus's daily routine, and he's made sure it's fast.
(Actually, it produces a fair bit of output, so you'll see.)
Just to clarify, because people often get confused:
git-pull is a MERGING tool. It always does a merge, as well as an optional
fetch. If you just want to LOOK at a remote branch, use git-fetch.
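That is, assuming an "origin" tracking branch as set up by git-clone,
something like
git fetch origin
git log -p master..origin
lets you inspect the incoming changes without merging anything.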
* Undoing a merge
If you discover that a merge was a mistake, it can be undone just like
any other commit. The HEAD you merged to is the first parent, so just do
git reset --hard HEAD^
This is why Linus likes a git-pull command that does so much in one shot -
if he doesn't like what he pulls, it's easy to undo.
* How merging operates
Git uses the basic three-way merge. First, it applies it to whole files,
and then to lines within files.
To do a three-way merge, you need three versions of a file. The versions
A and B you want to merge, and a common ancestor, commonly called O.
That is, history proceeds something like:
o--o--A
/
o--o--O
\
o--B
The basic idea is "I want the file O, plus all the changes made from O
to A, plus all the changes made from O to B." Since the cases where one
of A or B is a direct ancestor of the other have already been disposed
of, the three commits must be different.
For each file, there are a few cases that are trivial, and git gets
these out of the way immediately:
- If A and B are identical, the merged result is obvious.
- If O and A are the same, then the result should be B.
- If O and B are the same, then the result should be A.
In the completely trivial case when O, A and B are the same,
all three rules apply, and they all produce the same obvious result.
The "merge base" version O is generally the most recent common ancestor
of A and B. The only problem is, that's not necessarily unique!
The classic confusing case is called a "criss-cross merge", and looks
like this:
o--b-o-o--B
/ \ /
o--o--o X
\ / \
o--a-o-o--A
There are two common ancestors of A and B, marked a and b in the graph
above. And they're not the same. You could use either one and get
reasonable results, but how to choose?
The details are too advanced for this discussion, but the default
"recursive" merge strategy that git uses solves the answer by merging
a and b into a temporary commit and using *that* as the merge base.
Of course, a and b could have the same problem, so merging them could
require another merge of still-older commits. This is why the algorithm
is called "recursive." It's been tested with pathological conditions,
but multiply nested criss-cross merges are very rare, so the recursion
isn't a performance limit in practice.
If all three versions of a given file in O, A, and B are different, then the three
versions are pulled into the index file, called "stage 1", "stage 2",
and "stage 3", and a merge strategy driver is called to resolve the mess.
Git then uses the classic line-based three-way merge, looking for isolated
changes and applying the same rules as for files when two of the source
files are the same in some range.
* Alternate merge strategies
In every version control system prior to git, the merging algorithm was
buried deep in the bowels of the software, and very difficult to change.
One of the particularly nice things that git did was allow for easily
replaceable "merge strategies". Indeed, you can try multiple merge
strategies, and the fallback - print an error message and let the user
sort it out - can be thought of as just another merge strategy.
Enabling this is why the index is so important to git. It provides a
place to store an unfinished merge, so you can try various strategies
(including hand-editing) to finish it.
Generally, git's default merge strategies are just fine. There is,
however, one special case that is occasionally useful, specified with the
"-s ours" strategy.
That strategy instructs git that the merged result should be the same
as the current HEAD. Any other branches are recorded as parents, but
their contents are ignored.
What the heck is the use of that? Well, it lets you record the fact
that some work has been done in the history, and that it shouldn't be
merged again. For example, say you write and share a popular patch set.
People are always merging it in to their local source trees. But then
you discover a much better way to achieve the goal of that patch set, and
you want to publish the fact that the new patch supersedes the old one.
If you developed the new set starting from the old one, that would happen
automatically. But another way to achieve the same goal is to merge the
old branch in using the "ours" strategy. Everyone else's git will
notice that the patch is already included, and stop trying to merge it in.
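A sketch of that, with a made-up branch name (the -s option is passed down
to the merge machinery; check the git-pull and git-merge man pages for your
version):
git pull -s ours . old-patch-series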
* When merging goes wrong
This is the fun part. Git's default recursive-merge strategy is pretty
clever, but sometimes changes truly do conflict and need manual fix-up.
When git is unable to complete a merge, it leaves the three different
versions in the index and places a file with CVS-style conflict markers
in the working directory.
As long as there is a "staged" file in the index, you will not be able
to commit. You must resolve the conflict, and update the index with the
resolved versions. You can do this one at a time with git-update-index,
or at the end by giving the files as arguments to git-commit.
Doing them one at a time is probably safest; checking in a file which still
has conflict markers makes a bit of a mess. Note that git will still
use the automatically generated commit message when you finally commit.
(It's in .git/MERGE_MSG, if you care.)
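So a typical resolution is simply: edit the file to remove the conflict
markers and keep the right lines, then
git update-index hello.c
git commit
(hello.c here is just an example file name.)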
Note that "git diff" knows how to be useful with a staged file.
By default, it displays a multi-way diff. For example, suppose I take a
(slightly buggy) hello.c:
--- hello.c ---
#include <stdio.h>
int main(void)
{
printf("Hello, world!");
}
--- end ---
Now, suppose that in branch A, I fix some bugs - add the missing newline
and "return 0;". In branch B, I display my angst and change it to
"Goodbye, cruel world!". When I try to merge A into B, obviously I'll
get a conflict. The resultant file, with conflict markers, looks like:
--- hello.c ---
#include <stdio.h>
int
main(void)
{
<<<<<<< HEAD/hello.c
printf("Goodbye, cruel world!");
=======
printf("Hello, world!\n");
return 0;
>>>>>>> edadc53fc7a8aef2a672a4fa9d09aa16f4e14706/hello.c
}
--- end ---
and the result of "git diff" is
diff --cc hello.c
index 4b7f550,948a5f8..0000000
--- a/hello.c
+++ b/hello.c
@@@ -3,5 -3,6 +3,10 @@@
int
main(void)
{
++<<<<<<< HEAD/hello.c
+ printf("Goodbye, cruel world!");
++=======
+ printf("Hello, world!\n");
+ return 0;
++>>>>>>> edadc53fc7a8aef2a672a4fa9d09aa16f4e14706/hello.c
}
Notice how this is not a standard diff! It has two columns of diff
symbols, and shows the difference from each of the ancestors to the
current hello.c contents. I can also use "git diff -1" to compare
against the common ancestor, or "-2" or "-3" to compare against each of
the merged copies individually.
* Alternatives to merging
The bigger and more active your source tree, the more important it is to
keep the history reasonably clean. Just because git can do a merge in
under a second doesn't mean that you should do one daily. When you look
back at a feature's development history, you'd like to see meaningful
changes recorded and not a lot of meaningless ones.
Now, once you have shared a commit with others, and they have incorporated
it into their development, it becomes impossible to undo. But git
provides tools that are useful for "rewriting history" before public
release. These can be used to edit a commit for publication.
* Test merging
One way to keep the history clean is to simply not merge other branches
into your development branch. If you want to use your new features and
other people's code changes, make a test merge and use that, but don't
make that merge part of your branch.
This is slightly more work (you have to change to a test branch and do
your merging there), but not very much.
Sometimes, when doing this, a conflict appears between your changes and
someone else's development. If you get tired of fixing the same conflict
every time you do a test merge, have a look at the git-rerere tool.
This remembers resolved conflicts and tries to apply the same resolution
patch the next time.
It's written specifically to help you not do an extra merge unnecessarily.
Although its man page is well worth reading, you never invoke git-rerere
explicitly; it's invoked automatically by the merge and patch tools if
you create a .git/rr-cache directory.
* Cherry picking
If you have a series of patches on a branch, but you want a subset
of them, or in a different order, there's a handy utility called
"git-cherry-pick" which will find the diff and apply it as a patch to
the current HEAD. It automatically recycles the commit message from
the original commit.
If the patch can't be applied, it leaves the versions in the index and
conflict markers in the working directory just like a failed merge.
And just like a merge, it remembers the commit message and provides it
as a default when I finally commit.
Note that this can only work on a chain of single-parent commits.
If a commit has multiple parents, there's no single patch to apply.
You can get the list of commits on a branch with git-log or git-rev-list,
but for more complex cases, the git-cherry tool is designed to generate
the list of commits to merge. It has a rather neat approximate-match
function built in which identifies patches that appear to already be
present in the target branch.
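A sketch of how the two fit together, with hypothetical branch names:
git cherry master topic # "+" = not yet in master, "-" = already there
git checkout master
git cherry-pick <commit-id> # apply one of the "+" commits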
* Rebasing
A special case of cherry-picking is if you want to move a whole branch
to a newer "base" commit. This is done by git-rebase. You specify
the branch to move (default HEAD) and where to move it to (no default),
and git cherry-picks every patch out of that branch, applies it on top
of the target, and moves the refs/heads/<branch> pointer to the newly
created commits.
By default, "the branch" is every commit back to the last common
ancestor of the branch head and the target, but you can override that
with command-line arguments.
If you want to avoid merge conflicts due to the master code changing out
from under your edits, but not have "cleanup" merges in your history,
git-rebase is the tool to use.
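In its simplest form (hypothetical branch names):
git checkout mywork
git rebase master
takes every commit on mywork that isn't already in master, replays it on
top of master, and points mywork at the result.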
Git-rebase will also use git-rerere if enabled ("mkdir .git/rr-cache").
If rebasing encounters a conflict it can't resolve, it will stop halfway
and ask you to resolve the problem by hand. However, it still knows it
has a job to finish! The unapplied patches are remembered until you do
one of
git-rebase --continue
This will check in the current index. You should
do git-update-index <files> in the conflicts that
you resolve, but NOT do an actual git-commit.
git-rebase --continue will do the commit.
git-rebase --skip
This will skip the conflicting patch. You
don't have to resolve the conflicts; git will
just back up and try the next patch in the series.
git-rebase --abort
This will abandon the whole rebase operation (including
any half-done work) and return you to where you began.
Git-rebase can also help you divide up work. Suppose you've mixed up
development of two features in the current HEAD, a branch called "dev".
You want to divide them up into "dev1" and "dev2". Assuming that HEAD
is a branch off master, then you can either look through
git log master..HEAD
or just get a raw list of the commits with
git rev-list master..HEAD
Either way, suppose you figure out a list of commits that you want in
dev1 and create that branch:
git checkout -b dev1 master
for i in `cat commit_list`; do
git-cherry-pick $i
done
You can use the other half of the list you edited to generate the dev2
branch, but if you're not sure if you forgot something, or just don't
feel like doing that manual work, then you can use git-rebase to do it
for you...
git checkout -b dev2 dev # Create dev2 branch
git-rebase --onto master dev1 # Subtract dev1 and rebase
This will find all patches that are in dev and not in dev1,
apply them on top of master, and call the result dev2.
* Experimenting with merging
To play with non-trivial merging, get an existing git repository of
a non-trivial project (git itself and the Linux kernel are readily
available). Fire up gitk to look at history, find some interesting-looking
merges, and redo them yourself on a test branch.
As long as you do everything on test branches, you aren't going to screw
anything up. So play!
You can use gitk to search for "Conflicts:" in the commit comments to
find merges that didn't go smoothly and see what happens. (Or you can
search in "git log" output. gitk just draws prettier pictures.)
You can also set up two repositories on the same machine and try pulling
and pushing between them.
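For instance (paths made up), a local clone gives you a second repository
whose "origin" remotes file already points back at the first:
git clone /tmp/repo-a /tmp/repo-b
cd /tmp/repo-b
git fetch # update the origin tracking branch from repo-a
git pull . origin # and merge it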
To identify arbitrary commits, the 40-byte raw hex ID is probably easiest;
you can cut-and-paste them from the gitk window.
For example, in the git repository,
3f69d405d749742945afd462bff6541604ecd420
looks like an interesting merge. Its parents are
Parent: 7d55561986ffe94ca7ca22dc0a6846f698893226
Parent: 097dc3d8c32f4b85bf9701d5e1de98999ac25c1c
Let's try doing that manually:
$ git checkout -b test 7d55561986ffe94ca7ca22dc0a6846f698893226
$ git pull . 097dc3d8c32f4b85bf9701d5e1de98999ac25c1c
error: no such remote ref refs/heads/097dc3d8c32f4b85bf9701d5e1de98999ac25c1c
Fetch failure: .
Cool! I didn't know that wasn't allowed. (I'll have to ask why it's
not; perhaps it's because it uses the branch name in the automatic
commit message.) I could do it by hand with git-merge, but I'll just
give it a branch name:
$ git branch test2 097dc3d8c32f4b85bf9701d5e1de98999ac25c1c
$ git pull . test2
Merging HEAD with 097dc3d8c32f4b85bf9701d5e1de98999ac25c1c
Merging:
7d55561986ffe94ca7ca22dc0a6846f698893226 Merge branch 'jc/dirwalk-n-cache-tree' into jc/cache-tree
097dc3d8c32f4b85bf9701d5e1de98999ac25c1c Remove "tree->entries" tree-entry list from tree parser
found 2 common ancestor(s):
Merging:
found 1 common ancestor(s):
63dffdf03da65ddf1a02c3215ad15ba109189d42 Remove old "git-grep.sh" remnants
Auto-merging Makefile
merge: warning: conflicts during merge
CONFLICT (content): Merge conflict in Makefile
Auto-merging builtin.h
merge: warning: conflicts during merge
CONFLICT (content): Merge conflict in builtin.h
Auto-merging cache.h
Removing check-ref-format.c
Auto-merging git.c
merge: warning: conflicts during merge
CONFLICT (content): Merge conflict in git.c
Auto-merging read-cache.c
Auto-merging update-index.c
merge: warning: conflicts during merge
CONFLICT (content): Merge conflict in update-index.c
Renaming apply.c => builtin-apply.c
Auto-merging builtin-apply.c
Renaming read-tree.c => builtin-read-tree.c
Auto-merging builtin-read-tree.c
Auto-merging .gitignore
Auto-merging Makefile
merge: warning: conflicts during merge
CONFLICT (content): Merge conflict in Makefile
Auto-merging builtin.h
merge: warning: conflicts during merge
CONFLICT (content): Merge conflict in builtin.h
Auto-merging cache.h
Auto-merging fsck-objects.c
Removing git-format-patch.sh
Auto-merging git.c
merge: warning: conflicts during merge
CONFLICT (content): Merge conflict in git.c
Auto-merging update-index.c
Automatic merge failed; fix conflicts and then commit the result.
$ git status
Hey, look, lots of interesting stuff. Particularly, see
# Changed but not updated:
# (use git-update-index to mark for commit)
#
# unmerged: Makefile
# modified: Makefile
# unmerged: builtin.h
# modified: builtin.h
# unmerged: git.c
# modified: git.c
The "unmerged" (a.k.a. "staged") files are ones that need manual resolution.
(I notice that update-index.c isn't listed, despite being mentioned
as a conflict in the message. Can someone explain that?)
Fixing those is easy, but as you can see from the original commit comment
and diffs, there were some additional changes that were necessary to
make that compile.
You can test before committing the change, or do it the git way - commit
anyway, then test and "git commit --amend" with the fixes, if any.
Unlike a centralized VCS, committing is not the same as pushing upstream.
You can use test branches in the repository to save as much work as
you like. While it's still nice to keep the public repository clean,
you don't have to worry about "breaking the tree" every time you commit.
You can do all kinds of stuff in test branches, and clean it up later.
This is why all the git merge tools do the commit without waiting for
you to test it. The merge is usually okay, and it saves time. If not,
you can easily amend or undo the commit.
Linux is a registered trademark of Linus Torvalds
|
http://lwn.net/Articles/210045/
|
crawl-002
|
refinedweb
| 8,912
| 72.05
|
Question:
I made a Python function to convert dictionaries to formatted strings. My goal was to have a function take a dictionary for input and turn it into a string that looked good. For example, something like
{'text':'Hello', 'blah':{'hi':'hello','hello':'hi'}} would be turned into this:
text:
 Hello
blah:
 hi:
  hello
 hello:
  hi
This is the code I wrote:
indent = 0

def format_dict(d):
    global indent
    res = ""
    for key in d:
        res += (" " * indent) + key + ":\n"
        if not type(d[key]) == type({}):
            res += (" " * (indent + 1)) + d[key] + "\n"
        else:
            indent += 1
            res += format_dict(d[key])
            indent -= 1
    return res

#test
print format_dict({'key with text content':'some text', 'key with dict content': {'cheese': 'text', 'item':{'Blah': 'Hello'}}})
It works like a charm. It checks if the dictionary's item is another dictionary, in which case it processes that; otherwise it uses the item itself as the value. The problem is: I can't have a dictionary and a string together in a dictionary item. For example, if I wanted:
blah:
 hi
 hello:
  hello again
there'd be no way to do it. Is there some way I could have something like a list item in a dictionary? Something like this
{'blah':{'hi', 'hello':'hello again'}}? And if you provide a solution could you tell me how I would need to change my code (if it did require changes).
Note: I am using python 2.5
Solution:1
You can simply store a list in the dictionary. Also, it's better not to use a global to store the indentation. Something along the lines of:
def format_value(v, indent):
    if isinstance(v, list):
        return ''.join([format_value(item, indent) for item in v])
    elif isinstance(v, dict):
        return format_dict(v, indent)
    elif isinstance(v, str):
        return (" " * indent) + v + "\n"

def format_dict(d, indent=0):
    res = ""
    for key in d:
        res += (" " * indent) + key + ":\n"
        res += format_value(d[key], indent + 1)
    return res
Solution:2
You can express dictionaries as having lists of children:
{'blah': [ 'hi', {'hello':[ 'hello again' ]}, {'goodbye':[ 'hasta la vista, baby' ]} ]}
A consequence of this is that each dictionary will have just a single key-value pair. On the plus side, it means you can have repeating keys and deterministic ordering, just like XML.
EDIT: On second thought, you could simply fold
'hello' and
'goodbye' into a single dictionary, though I would personally find that to be quite confusing, since you could now have a mish-mash of ordered and unordered stuff. So I guess the one-key-per-dictionary rule is more of a recommendation than a requirement.
Solution:3
import yaml
import StringIO

d = {'key with text content': 'some text',
     'key with dict content': {'cheese': 'text', 'item': {'Blah': 'Hello'}}}
s = StringIO.StringIO()
yaml.dump(d, s)
print s.getvalue()
this prints out:
key with dict content:
  cheese: text
  item: {Blah: Hello}
key with text content: some text
and you can load it back in to a dict
s.seek(0)
d = yaml.load(s)
Solution:4
A dictionary is a mapping, so you can't have a key without a value. However, the closest to that would be for a key to have the value of
None. Then add a check for
None before the
if not type(d[key]) == type({}): line and
continue to avoid printing the value. BTW, that line would be better as
if not isinstance(d[key], dict):.
Solution:5
As mentioned, you'll need to use lists as values whenever you want to have a text and dictionary at the same level. here's some code that prints what you need.
# -*- coding: utf-8 -*-
#!/usr/bin/env python2.5
#
def pretty_dict(d, indent=0, spacer='.'):
    """ takes a dict {'text':'Hello', 'blah':{'hi':'hello','hello':'hi'}}
    And prints:
    text: Hello
    blah:
     hi: hello
     hello: hi
    """
    kindent = spacer * indent
    if isinstance(d, basestring):
        return kindent + d
    if isinstance(d, list):
        return '\n'.join([(pretty_dict(v, indent, spacer)) for v in d])
    return '\n'.join(['%s%s:\n%s' % (kindent, k, pretty_dict(v, indent + 1, spacer))
                      for k, v in d.items()])

test_a = {'text':'Hello', 'blah':{'hi':'hello','hello':'hi'}}
test_b = {'key with text content':'some text', 'key with dict content': {'cheese': 'text', 'item':{'Blah': 'Hello'}}}
test_c = {'blah':['hi', {'hello':'hello again'}]}
test_d = {'blah': [ 'hi', {'hello':[ 'hello again' ]}, {'goodbye':[ 'hasta la vista, baby' ]} ]}

if __name__ == '__main__':
    print pretty_dict(test_a)
    print pretty_dict(test_b)
    print pretty_dict(test_c)
    print pretty_dict(test_d)
Solution:6
Why not using pretty print, which already does all this?
|
http://www.toontricks.com/2019/04/tutorial-python-dictionary-formatting.html
|
CC-MAIN-2019-18
|
refinedweb
| 767
| 59.33
|
BACKGROUND Your company is writing software for a new networking device and
you have the task of managing the DNS storage and retrieval.
DNS must translate between an Internet IP address such as
74.125.19.99 and the URI (in this case).
The networking device is not available so the code has been
split into segments and each bit is being developed on a PC.
You have been allocated a code segment that must use structures
and pointers to form a link list record of these pairs of strings
( IP naddress and URI).
It must allow insertion, deletion, and querying.
SPECIFIC REQUIREMENTS
* The command line parameters will consist of a DNS command and
then a number of IP-URI string pairs.
The DNS command is always a single letter followed by a search string.
A string pairs is always IP address then URI which must be placed
in the DNS store.
The example below has the DNS command U, search string 74.1.2.3,
and two IP-URI pairs.
lab3_s1235567 U 74.1.2.3 74.125.19.1 74.1.2.3
* The program must insert all string pairs into a data structure defined
below in the proforma. It must then executed the command and print
the result (with an endl) to standard out.
In the example above the command is asking to find the URI of the
IP address 74.1.2.3. The output should be.
* Commands include-
U ip_address : find the IP address ip_address in the link list and
print out the matching URI.
I uri_name : search the link list for the URI given and print out
the matching IP address.
this is my code I am lost!!!
/*===================== linked_list_search.cpp ====================================
 PURPOSE - Show how to search a list link created at run time.
 USAGE   - Read code carefully to understand it, then compile as
           "g++ -g -o x linked_list_search.cpp", run as "./x"
 NOTE    - You must first understand the programs struct_and_ptr.cpp and
           linked_list-struct.cpp
         - The code creates a link list with amount values from zero upward and
           the name string filled in with the hexadecimal value of the amount.
           This is then serached to find a particular target in the name string,
           for example 0x100.
*/
#include <iostream> // IO library code to include.
#include <string>   // string functions.
using namespace std ; // Use standard C++ names for functions.

//====== structure type declaration, sets the form of structure
//       but doesn't create one in memory.
typedef char ip50 [50] ;  // 49 characters available for text,
typedef char uri50 [50] ; // need last string position for a zero.
typedef char boolean ;
const char TRUE = 1 ;
const char FALSE = 0 ;

struct str_pair // all data associated with an individual string pair is in here.
{
    int amount ;
    ip50 ip ;
    uri50 uri;
    str_pair* next ; // poin to next item, or set to NULL if end item.
} ;

str_pair *list_head ; // start of linked list, set to 1st item.
str_pair *work_ptr ;  // work pointer to work along list.
const int LIST_SIZE = 4096; //the combined arrays!!
const ip50 abc_ip =" 74.1.2.3";
const uri50 abc_uri="";
const ip50 google_ip=" 74.125.19.1";
const uri50 google_uri="";
char target[100] ;

//====== Start program here =================================================
int main(int argc, char *argv[]) // argc = number strings on command line.
                                 // argv is meter.
{
    //--- initialize link list with a loop.
    list_head = NULL ;
    int i ;
    char I, U, ch;
    for ( i=0 ; i < LIST_SIZE ; i++ )
    {
        work_ptr = list_head ;       // save old head item.
        list_head = new str_pair ;   // create new item.
        list_head->next = work_ptr ; // get next item to point to previous head.
        list_head->amount = i ;
        //cout<<"amouhnt"<<list_head->amount<<endl;
        // fill in payload data, i=0 upward.
        //sprintf(list_head->ip, "0x%x", i) ; // print value of i as hexadecimal.
    }
    //--- Now do a search for an item, simple linear search from start to end.
    strncpy( target, abc_uri, sizeof(target)-1); // not strictly necessary
    strcat( target,abc_ip);
    cout<<target<<endl;
    work_ptr = list_head ;
    while ( work_ptr != NULL) // while still pointing to a valid item.
    {
        if ( strcmp( target, argv[2]) == 0) // get a match.
        {
            cout << endl << " Found it : given target name " << target
                 << ", value of amount was " << work_ptr->amount << endl << endl ;
            return(0) ; // break = exit from loop, return(0) = quit program.
        }
        work_ptr = work_ptr->next ; // move onto next item in list.
    }
    //--- If got here then no match was found.
    cout << endl << " No match found for target " << target << endl << endl ;
    return(0) ;
}
|
https://www.daniweb.com/programming/software-development/threads/122366/linked-list-omg
|
CC-MAIN-2017-26
|
refinedweb
| 710
| 76.11
|
My guess is that you are trying to replace a standard class which ships with Java 5 with one in a library you have.
This is not allowed under the terms of the license agreement, however AFAIK it wasn't enforced until Java 5.
I have seen this with QName before and I "fixed" it by removing the class from the jar I had.
EDIT notes for the option "-Xbootclasspath:"
"Applications that use this option for the purpose of overriding a class in rt.jar should not be deployed as doing so would contravene the Java 2 Runtime Environment binary code license."
|
https://www.edureka.co/community/7310/access-restriction-on-class-error?show=7314
|
CC-MAIN-2020-50
|
refinedweb
| 231
| 60.51
|
DataColumn.Namespace Property
.NET Framework (current version)
Namespace: System.Data
Gets or sets the namespace of the DataColumn.
Assembly: System.Data (in System.Data.dll)
Property Value
Type: System.String
The namespace of the DataColumn.
The Namespace property is used when reading and writing an XML document into a DataTable in the DataSet using the ReadXml, WriteXml, ReadXmlSchema, or WriteXmlSchema methods.
The namespace of an XML document is used to scope XML attributes and elements when read into a DataSet. For example, a DataSet contains a schema read from a document that has the namespace "myCompany," and an attempt is made to read data (with the ReadXml method) from a document that has the namespace "theirCompany." Any data that does not correspond to the existing schema will be ignored.
.NET Framework
Available since 1.1
|
https://msdn.microsoft.com/en-us/library/system.data.datacolumn.namespace.aspx
|
CC-MAIN-2016-44
|
refinedweb
| 142
| 60.11
|
fclose(3) BSD Library Functions Manual fclose(3)
NAME
fclose, fcloseall -- close a stream
LIBRARY
Standard C Library (libc, -lc)
SYNOPSIS
#include <stdio.h>

int fclose(FILE *stream);

void fcloseall(void);
DESCRIPTION
The fclose() function dissociates the named stream from its underlying file or set of functions. If the stream was being used for output, any buffered data is written first, using fflush(3). The fcloseall() function calls fclose() on all open streams.
RETURN VALUES
Upon successful completion 0 is returned. Otherwise, EOF is returned and the global variable errno is set to indicate the error. In either case no further access to the stream is possible.
ERRORS
The fclose() function may also fail and set errno for any of the errors specified for the routines close(2) or fflush(3).
NOTES
The fclose() function does not handle NULL arguments; they will result in a segmentation violation. This is intentional - it makes it easier to make sure programs written under FreeBSD are bug free. This behaviour is an implementation detail, and programs should not rely upon it.
SEE ALSO
close(2), fflush(3), fopen(3), setbuf(3)
STANDARDS
The fclose() function conforms to ISO/IEC 9899:1990 (``ISO C90''). The fcloseall() function first appeared in FreeBSD 7.0. BSD April 22, 2006 BSD
|
http://manpagez.com/man/3/fclose/
|
CC-MAIN-2018-34
|
refinedweb
| 227
| 65.83
|
- NAME
- DESCRIPTION
- METHODS
METHODS
The xslate engine supports auto-boxing, so you can call methods on primitive (non-object) values. The following are the builtin methods.
For nil
nil has its specific namespace as
nil, although no builtin methods are provided.
For SCALARs
The namespace of SCALARs is
scalar, although no builtin methods are provided.
For ARRAY references.
For HASH references
FILTERS/FUNCTIONS
The xslate engine supports filter syntax as well as function calls. The following are the builtin functions, which can be invoked with filter syntax.
mark_raw($str)
Mark $str as a raw string to avoid auto HTML escaping.
raw is an alias to
mark_raw.
unmark_raw($str)
Remove the raw mark from str. If str is not a raw string, this function returns str as is.
html($str)
Escapes html meta characters in str. If str is a raw string, this function returns str as is.
The html meta characters are
/[<>&"']/.
uri($str)
Escapes unsafe URI characters in $str.
The unsafe URI characters are characters not included in the
unreserved character class defined by RFC 3986, i.e.
/[^A-Za-z0-9\-\._~]/.
See also RFC 3986.
dump($value)
Inspects $value with
Data::Dumper.
This function is provided for testing and debugging.
|
https://metacpan.org/pod/release/GFUJI/Text-Xslate-0.2003/lib/Text/Xslate/Manual/Builtin.pod
|
CC-MAIN-2016-22
|
refinedweb
| 203
| 70.29
|
- asynchronous read from a device
#include <sys/uio.h> #include <sys/aio_req.h> #include <sys/cred.h> #include <sys/ddi.h> #include <sys/sunddi.h> intprefix aread(dev_t dev, struct aio_req *aio_reqp, cred_t *cred_p);
Solaris DDI specific (Solaris DDI). This entry point is optional. Drivers that do not support an aread() entry point should use nodev(9F)
Device number.
Pointer to the aio_req(9S) structure that describes where the data is to be stored.
Pointer to the credential structure.
The driver's aread() routine is called to perform an asynchronous read. getminor(9F) can be used to access the minor number component of the dev argument. aread() may use the credential structure pointed to by cred_p to check for superuser access by calling drv_priv(9F). The aread() routine may also examine the uio(9S) structure through the aio_req structure pointer, aio_reqp. The aread() routine should return 0 for success, or the appropriate error number.
This function is called from user context only.
Example 1 The following is an example of an aread() routine:
static int xxaread_READ, xxminphys, aio)); }
read(2), aioread(3AIO), awrite(9E), read(9E).
|
https://docs.oracle.com/cd/E26505_01/html/816-5179/aread-9e.html#REFMAN9Earead-9e
|
CC-MAIN-2021-49
|
refinedweb
| 184
| 61.63
|
Episode #85: Visually debugging your Jupyter notebook
Published Tues, Jul 3, 2018, recorded Wed, Jul 4, 2018.
Sponsored by DigitalOcean: pythonbytes.fm/digitalocean
Brian #1: the state of type hints in Python
- “Therefore, type hints should be used whenever unit test are worth writing.”
- Type hints, especially for function arguments and return values, help make your code easier to read, and therefore, easier to maintain.
- This includes refactoring, allowing IDEs to help with code completion, and allow linters to find problems.
- For CPython
- No runtime type inference happens.
- No performance tuning allowed.
- Of course, third party packages are not forbidden to do so.
- Non-comment type annotations are available for functions in 3.0+
- Variable annotations for 3.6+
- In 3.7, you can postpone evaluation of annotations with: from __future__ import annotations
- Interface stub files
.pyi files, are allowed now, but this is extra work and code to maintain.
- typeshed has types for standard library plus many popular libraries.
- How do deal with multiple types, duck typing, and more discussed.
- A discussion of type generation and checking tools available now, including mypy
- See also: Stanford Seminar - Optional Static Typing for Python - Talk by Guido van Rossum
- Interesting discussion that starts with a bit of history of where mypy came from.
Michael #2: Django MongoDB connector
- Via Robin on Twitter
- Use MongoDB as the backend for your Django project, without changing the Django ORM.
- Use Django Admin to access MongoDB
- Use Django with MongoDB data fields: Use MongoDB embedded documents and embedded arrays in Django Models.
- Connect 3rd party apps with MongoDB: Apps like Django Rest Framework and Viewflow app that use Django Models integrate easily with MongoDB.
- Requirements:
- Python 3.6 or higher.
- MongoDB 3.4 or higher.
- Example
inner_qs = Blog.objects.filter(name__contains='Ch').values('name')
entries = Entry.objects.filter(blog__name__in=inner_qs)
Brian #3: Python Idioms: Multiline Strings
- or “How I use dedent”
- Example:
def create_snippet():
    code_snippet = textwrap.dedent("""\
        int main(int argc, char* argv[]) {
            return 0;
        }
        """)
    do_something(code_snippet)
Michael #4: Flaskerizer
- A program that automatically creates Flask apps from Bootstrap templates
- Bootstrap templates from websites like and are a fast way to get very dynamic website up and running
- Bootstap templates typically don't work "out of the box" with the python web framework Flask and require some tedious directory building and broken link fixing before being functional with Flask.
- The Flaskerizer automates the necessary directory building and link creation needed to make Bootstrap templates work "out of the box" with Flask.
- Queue black turtleneck!
Brian #5: Learn Python the Methodical Way
- From the article:
- Make your way through a tutorial/chapter that teaches you some discrete, four-to-six-step skill.
- Write down those steps as succinctly and generically as possible.
- Put the tutorial/chapter and its solutions away.
- Build your project from scratch, peeking only when you’re stuck.
- Erase what you built.
- Do the project again.
- Drink some water.
- Erase what you built and do it again.
- A day or two later, delete your work and do it again – this time without peeking even once.
- Erase your work and do it again.
- The notion of treating code like you treat creative writing with rough drafts and sometimes complete do-overs is super liberating.
- You’ll be surprised how fast you can do something the second time, the third time, the fourth time. And it’s very gratifying.
Michael #6: PixieDebugger
- The Visual Python Debugger for Jupyter Notebooks You’ve Always Wanted
- Jupyter already supports pdb for simple debugging, where you can manually and sequentially enter commands to do things like inspect variables, set breakpoints, etc.
- Check out the video to get a good idea of its usage:
|
https://pythonbytes.fm/episodes/show/85/visually-debugging-your-jupyter-notebook
|
CC-MAIN-2018-30
|
refinedweb
| 611
| 57.77
|
Moore's Law states that the number of transistors on a chip roughly doubles every two years. But how does that stack up against reality?
I was inspired by this data visualization of Moore's law from @datagrapha going viral on Twitter and decided to replicate it in React and D3.
Some data bugs break it down in the end and there's something funky with Voodoo Rush, but those transitions came out wonderful 👌
You can watch me build it from scratch, here 👇
First 30min eaten by a technical glitch 🤷♀️
Try it live in your browser, here 👉
And here's the full source code on GitHub.
You can read this online.
How it works
At its core Moore's Law in React & D3 is a bar chart flipped on its side.
We started with fake data and a React component that renders a bar chart. Then we made the data go through time and looped through. The bar chart jumped around.
So our next step was to add transitions. Made the bar chart look smooth.
Then we made our data gain an entry each year and created an enter transition to each bar. Makes it smoother to see how new entries fly in.
At this point we had the building blocks and it was time to use real data. We used wikitable2csv to download data from Wikipedia's Transistor Count page and fed it into our dataviz.
Pretty much everything worked right away 💪
Start with fake data
Data visualization projects are best started with fake date. This approach lets you focus on the visualization itself. Build the components, the transitions, make it all fit together ... all without worrying about the exact shape of your data.
Of course it's best if your fake data looks like your final dataset will. Array, object, grouped by year, that sort of thing.
Plus you save time when you aren't waiting for large datasets to parse :)
Here's the fake data generator we used:
// src/App.js
const useData = () => {
  const [data, setData] = useState(null);

  // Replace this with actual data loading
  useEffect(() => {
    // Create 5 imaginary processors
    const processors = d3.range(10).map(i => `CPU ${i}`),
      random = d3.randomUniform(1000, 50000);
    let N = 1;

    // create random transistor counts for each year
    const data = d3.range(1970, 2026).map(year => {
      if (year % 5 === 0 && N < 10) {
        N += 1;
      }

      return d3.range(N).map(i => ({
        year: year,
        name: processors[i],
        transistors: Math.round(random()),
      }));
    });

    setData(data);
  }, []);

  return data;
};
Create 5 imaginary processors, iterate over the years, and give them random
transistor counts. Every 5 years we increase the total
N of processors in our
visualization.
We create data inside a
useEffect to simulate that data loads asynchronously.
Driving animation through the years
A large part of visualizing Moore's Law is showing its progression over the years. Transistor counts increased as new CPUs and GPUs entered the market.
Best way to drive that progress animation is with a
useEffect and a D3 timer.
We do that in our
App component.
// src/App.js
function App() {
  const data = useData();
  const [currentYear, setCurrentYear] = useState(1970);
  const yearIndex = d3
    .scaleOrdinal()
    .domain(d3.range(1970, 2025))
    .range(d3.range(0, 2025 - 1970));

  // Drives the main animation progressing through the years
  // It's actually a simple counter :P
  useEffect(() => {
    const interval = d3.interval(() => {
      setCurrentYear(year => {
        if (year + 1 > 2025) {
          interval.stop();
        }
        return year + 1;
      });
    }, 2000);

    return () => interval.stop();
  }, []);
useData() runs our data generation custom hook. We
useState for the current
year. A linear scale helps us translate from meaningful
1970 to
2026
numbers to indexes in our data array.
The
useEffect starts a
d3.interval, which is like a
setInterval but more
reliable. We update current year state in the interval callback.
Remember that state setters accept a function that gets current state as an argument. Useful trick in this case where we don't want to restart the effect on every year change.
We return
interval.stop() as our cleanup function so React stops the loop
when our component unmounts.
The basic render
Our main component renders a
<Barchart> inside an
<Svg>. Using styled
components for size and some layout.
// src/App.js
return (
  <Svg>
    <Title x={"50%"} y={30}>
      Moore's law vs. actual transistor count in React & D3
    </Title>
    {data ? (
      <Barchart
        data={data[yearIndex(currentYear)]}
        x={100}
        y={50}
        barThickness={20}
        width={500}
      />
    ) : null}
    <Year x={"95%"} y={"95%"}>
      {currentYear}
    </Year>
  </Svg>
Our
Svg is styled to take up the entire viewport and the
Year component is
a big text.
The
<Barchart> is where our dataviz work happens. From the outside it's a
component that takes "current data" and handles the rest. Positioning and
sizing props make it more reusable.
A smoothly transitioning Barchart
Our goal with the Barchart component was to:
- always render current state
- have smooth transitions on changes
- follow React-y principles
- easy to use from the outside
You can watch the video to see how it evolved. Here I explain the final state 😇
The
<Barchart> component
The Barchart component takes in data, sets up vertical and horizontal D3 scales, and loops through data to render individual bars.
// src/Barchart.js
// Draws the barchart for a single year
const Barchart = ({ data, x, y, barThickness, width }) => {
  const yScale = useMemo(
    () =>
      d3
        .scaleBand()
        .domain(d3.range(0, data.length))
        .paddingInner(0.2)
        .range([data.length * barThickness, 0]),
    [data.length, barThickness]
  );

  // not worth memoizing because data changes every time
  const xScale = d3
    .scaleLinear()
    .domain([0, d3.max(data, d => d.transistors)])
    .range([0, width]);

  const formatter = xScale.tickFormat();
D3 scales help us translate between datapoints and pixels on a screen. I like to memoize them when it makes sense.
Memoizing is particularly important with large datasets. You don't want to waste time looking for the max in 100,000 elements on every render.
We were able to memoize
yScale because
data.length and
barThickness don't
change every time.
xScale on the other hand made no sense to memoize since we know
<Barchart>
gets a new data object for every render. At least in theory.
We borrow xScale's tick formatter to help us render
10000 as
10,000. Built
into D3 ✌️
Rendering our Barchart component looks like this:
// src/Barchart.js
return (
  <g transform={`translate(${x}, ${y})`}>
    {data
      .sort((a, b) => a.transistors - b.transistors)
      .map((d, index) => (
        <Bar
          data={d}
          key={d.name}
          y={yScale(index)}
          width={xScale(d.transistors)}
          endLabel={formatter(d.transistors)}
          thickness={yScale.bandwidth()}
        />
      ))}
  </g>
);
A grouping element holds our bars together and moves them into place. Using a group element changes the internal coordinate system so individual bars don't have to know about overall positioning.
Just like in HTML when you position a div and its children don't need to know :)
We sort data by transistor count and render a
<Bar> element for each.
Individual bars get all needed info via props.
The
<Bar> component
Individual
<Bar> components render a rectangle flanked on each side by a
label.
return (
  <g transform={`translate(${renderX}, ${renderY})`}>
    <rect x={10} y={0} width={renderWidth} height={thickness} fill={color} />
    <Label y={thickness / 2}>{data.name}</Label>
    <EndLabel y={thickness / 2} x={renderWidth + 15}>
      {data.designer === 'Moore'
        ? formatter(Math.round(transistors))
        : formatter(data.transistors)}
    </EndLabel>
  </g>
);
A grouping element groups the 3 elements, styled components style the labels,
and a
rect SVG element creates the rectangle. Simple React markup stuff ✌️
Where the
<Bar> component gets interesting is the positioning. We use
renderX and
renderY even though the vertical position comes from props as
y and
x is static.
That's got to do with transitions.
Transitions
The
<Bar> component uses the hybrid animation approach from my
React For DataViz course.
A key insight is that we use independent transitions on each axis to create a coordinated transition. Both for entering into the chart and for moving around later.
Special case for the
Moore's Law bar itself where we also transition the
label so it looks like it's counting.
We created a
useTransition custom hook to make our code easier to understand
and cleaner to read.
useTransition
The
useTransition custom hook helps us move values from props to state. State
becomes the staging area and props are the target we want to reach.
To run a transition we create an effect and set up a D3 transition. On each tick of the animation we update state proportionately to time spent animating.
const useTransition = ({ targetValue, name, startValue, easing }) => {
  const [renderValue, setRenderValue] = useState(startValue || targetValue);

  useEffect(() => {
    d3.selection()
      .transition(name)
      .duration(2000)
      .ease(easing || d3.easeLinear)
      .tween(name, () => {
        const interpolate = d3.interpolate(renderValue, targetValue);
        return t => setRenderValue(interpolate(t));
      });
  }, [targetValue]);

  return renderValue;
};
State update happens inside that custom
.tween method. We interpolate between
the current value and the target value.
D3 handles the rest.
Using useTransition
We can reuse that same transition approach for each independent axis we want to animate. D3 makes sure all transitions start at the same time and run at the same pace. Any dropped frames or browser slow downs are handled for us.
// src/Bar.js
const Bar = ({ data, y, width, thickness, formatter, color }) => {
  const renderWidth = useTransition({
    targetValue: width,
    name: `width-${data.name}`,
    easing: data.designer === "Moore" ? d3.easeLinear : d3.easeCubicInOut
  });
  const renderY = useTransition({
    targetValue: y,
    name: `y-${data.name}`,
    startValue: -500 + Math.random() * 200,
    easing: d3.easeCubicInOut
  });
  const renderX = useTransition({
    targetValue: 0,
    name: `x-${data.name}`,
    startValue: 1000 + Math.random() * 200,
    easing: d3.easeCubicInOut
  });
  const transistors = useTransition({
    targetValue: data.transistors,
    name: `trans-${data.name}`,
    easing: d3.easeLinear
  });
Each transition returns the current value for the transitioned axis.
renderWidth,
renderX,
renderY, and even
transistors.
When a transition updates, its internal
useState setter runs. That triggers a
re-render and updates the value in our
<Bar> component, which then
re-renders.
Because D3 transitions run at 60fps, we get a smooth animation ✌️
Yes that's a lot of state updates for each frame of animation. At least 4 per frame per datapoint. About 4*60*298 = 71,520 per second at max.
And React can handle it all. At least on my machine, I haven't tested elsewhere yet :)
Conclusion
And that's how you can combine React & D3 to get a smoothly transitioning barchart visualizing Moore's Law through the years.
Key takeaways:
- React for rendering
- D3 for data loading
- D3 runs and coordinates transitions
- state updates drive re-rendering animation
- build custom hooks for common setup
Cheers,
~Swizec
|
https://reactfordataviz.com/cookbook/17/
|
CC-MAIN-2022-40
|
refinedweb
| 1,715
| 50.63
|
class Klass:
    aa = 0
    bb = 0
    cc = 0

    def incr(self, val):
        if val > self.aa:
            val = self.aa
        if val < self.bb:
            val = self.bb
        self.aa += val
        self.cc += val
issues:
E: 14:Klass.incr: Instance of 'Klass' has no 'aa' member
E: 15:Klass.incr: Instance of 'Klass' has no 'aa' member
E: 18:Klass.incr: Instance of 'Klass' has no 'aa' member
E: 20:Klass.incr: Instance of 'Klass' has no 'cc' member
really?
Ticket #13944 - latest update on 2009/11/25, created on 2009/09/08 by Sylvain Thenault
-
2009/11/23 18:54, written by sthenault
really?
|
https://www.logilab.org/ticket/13944
|
CC-MAIN-2019-26
|
refinedweb
| 135
| 78.96
|
August 28 2002
Public Comment Invited For Second Level Domain Proposal
InternetNZ (The Internet Society of New Zealand Inc) has begun public consultation on a second proposal to create a new second level domain in New Zealand. If successful, the proposal from the New Zealand Bankers Association for the creation of the ".bank.nz" would only be the second new Second Level Domain created in New Zealand since 1996. The Banker's Association proposal was received as the proposal from the New Zealand Maori Internet Society to create ".maori.nz" reached a successful conclusion.
The New Zealand Banker's Association proposes that the Second Level Domain be created as a 'moderated' or restricted Second Level Domain which is open only to banks registered under the Reserve Bank of New Zealand Act 1989. If successful the new domain, ".bank.nz", would join four other moderated domains in the New Zealand namespace: ".govt.nz", ".mil.nz", ".cri.nz", and ".iwi.nz".
InternetNZ issued a formal "Request for Discussion" on Monday which invites all interested people to take part in the first round of discussion and debate. "The process involves extensive public consultation", said the Society's Executive Director Sue Leader, "and the discussion list is already running hot."
"The Banker's Association originally proposed the creation of ".bank.nz" in late 2000, and the proposal failed to meet the threshold of support in the straw poll by a narrow margin", Leader said. "The rules state that another application cannot be made for a full year, and we understand that the Banker's Association consulted their membership before putting forward the new proposal".
The public can join the list by going to". The next stage is a straw poll of the Internet community in New Zealand which will take place in 60-90 days. All New Zealanders are eligible to vote in the non-binding straw poll. The application process can be found on the Society's website at:.
The Council of InternetNZ is expected to make an announcement for the start date for the new second level domain ".maori.nz" after its August 30 meeting.
Ends
|
http://www.scoop.co.nz/stories/SC0208/S00043.htm
|
CC-MAIN-2016-40
|
refinedweb
| 356
| 53.41
|