question_id | answer_id | title | question | answer
|---|---|---|---|---|
1,053,099
| 1,053,134
|
How can I get the content of a web page
|
I'm trying to get a web page's data into a string so that I can then parse it. I didn't find any suitable methods in QWebView, QUrl and the others. Could you help me? Linux, C++, Qt.
EDIT:
Thanks for the help. The code is working, but some pages have a broken charset after downloading.
I tried something like this to repair it:
QNetworkRequest *request = new QNetworkRequest(QUrl("http://ru.wiktionary.org/wiki/bovo"));
request->setRawHeader( "User-Agent", "Mozilla/5.0 (X11; U; Linux i686 (x86_64); "
                                     "en-US; rv:1.9.0.1) Gecko/2008070206 Firefox/3.0.1" );
request->setRawHeader( "Accept-Charset", "win1251,utf-8;q=0.7,*;q=0.7" );
request->setRawHeader( "charset", "utf-8" );
request->setRawHeader( "Connection", "keep-alive" );
manager->get(*request);
Still no results =(.
|
Have you looked at QNetworkAccessManager? Here's a rough and ready sample illustrating usage:
class MyClass : public QObject
{
    Q_OBJECT
public:
    MyClass();
    void fetch();

public slots:
    void replyFinished(QNetworkReply*);

private:
    QNetworkAccessManager* m_manager;
};

MyClass::MyClass()
{
    m_manager = new QNetworkAccessManager(this);
    connect(m_manager, SIGNAL(finished(QNetworkReply*)),
            this, SLOT(replyFinished(QNetworkReply*)));
}

void MyClass::fetch()
{
    m_manager->get(QNetworkRequest(QUrl("http://stackoverflow.com")));
}

void MyClass::replyFinished(QNetworkReply* pReply)
{
    QByteArray data = pReply->readAll();
    QString str(data);
    // process str any way you like!
}
In your handler for the finished signal you will be passed a QNetworkReply object, which you can read the response from since it inherits from QIODevice. A simple way to do this is to just call readAll() to get a QByteArray. You can construct a QString from that QByteArray and do whatever you want with it.
|
1,053,231
| 1,053,311
|
Sprites in directx10 and texture filtering
|
Is it possible to set different texture filtering when working with sprites?
|
Can you be more specific about how you're drawing the sprites?
Texture filtering is determined by the ID3D10SamplerState objects bound to the device. If you're using the ID3DX10Sprite interface, it won't change the shaders or the samplers for each set of sprites, only the textures. So whatever shaders and samplers you've set before drawing the sprites will be used -- just set a sampler with the filtering mode you want before drawing the sprites.
|
1,053,242
| 1,054,538
|
Array of pairs of 3 bit elements
|
Because of memory constraints, I have to store some pairs of values in an array with 6 bits/pair (3 bits/value). The problem comes when I want to access this array as a normal one, based on the index of the pair.
The array looks like this
|--byte 0 | --byte 1 | --byte 2
|00000011 | 11112222 | 22333333 ... and so on, the pattern repeats.
|------|-------|--------|------|
pair 0 pair 1 pair 2 pair 3
=> 4 pairs / 3 bytes
You can see that sometimes (for pair indices equal to 1 or 2 mod 4) two bytes are required to extract the values.
I made a function that given an index, returns the first value from the pair (3 bits) and the other one (also 3 bits).
void GetPair(char *array, int index, int &value1, int &value2) {
    int groupIndex = index >> 2; // divide by 4 to get the group of 3 bytes (holding 4 pairs)
    // Use the 16 bits starting at the group's first byte when index % 4 is 0 or 1,
    // and starting at the second byte when index % 4 is 2 or 3.
    // (Each group is 3 bytes wide, hence the * 3; the 16-bit read assumes a
    // big-endian byte order, which the masks below rely on.)
    short int value = *(short int *)(array + groupIndex * 3 + ((index & 0x02) >> 1));
    switch (index & 0x03) { // index % 4
        case 0:
            // extract the first 3 bits
            value1 = (value & 0xE000) >> 13;
            // extract the next 3 bits
            value2 = (value & 0x1C00) >> 10;
            break;
        case 1:
            value1 = (value & 0x380) >> 7;
            value2 = (value & 0x70) >> 4;
            break;
        case 2:
            value1 = (value & 0xE00) >> 9;
            value2 = (value & 0x1C0) >> 6;
            break;
        case 3:
            value1 = (value & 0x38) >> 3;
            value2 = value & 0x7;
            break;
    }
}
Now my question is: Is there any faster method to extract these values?
I made a test and when using 2 bytes/pair (1 byte/value) it takes about 6 seconds to access all pairs (53 in total) 100 million times. When using the compact array, it takes about 22 seconds :( (probably because it needs to compute all those masks and bit shifts).
I tried to explain as clearly as I could... forgive me if it isn't.
|
How about this? It eliminates memory accesses for the masks and shift values. (Of course, the non-portable assumptions are that char is 8 bits, short is 16 bits, and that the 16-bit load is big-endian; on a little-endian machine the two bytes would need to be swapped first. It is also assumed that index * 6 does not overflow int.)
void GetPair(char *array, int index, int &value1, int &value2)
{
    unsigned shift = 10 - index * 6 % 8;
    unsigned short data = (*(unsigned short *)(array + index * 6 / 8) >> shift) & 0x3f;
    value2 = data & 7;
    value1 = data >> 3;
}
There might be a penalty for reading a short crossing a 16 bit boundary, though. There used to be such issues way back when I was still keeping track of such things. If that is the case, it would probably be better to read a 32 bit value starting at a 16 bit boundary and adjust the shifts and masks accordingly.
|
1,053,244
| 1,058,231
|
Qt QSystemTrayIcon not sending activated signal
|
I'm trying to copy the Qt systray example here:
http://doc.qt.io/archives/4.6/desktop-systray.html
Things seem to be working except that the QSystemTrayIcon object is not emitting the activated signal.
Here's my mainwindow.cpp code:
#include <QtGui>
#include "mainwindow.h"
#include "ui_mainwindow.h"

MainWindow::MainWindow(QWidget *parent)
    : QMainWindow(parent), ui(new Ui::MainWindow)
{
    ui->setupUi(this);
    QMessageBox::information(0, tr("Systray"), tr("Loaded."));
    createTrayIcon();
    connect(trayIcon, SIGNAL(activated(QSystemTrayIcon::ActivationReason)),
            this, SLOT(iconActivated(QSystemTrayIcon::ActivationReason)));
    trayIcon->show();
}

void MainWindow::createTrayIcon()
{
    trayIcon = new QSystemTrayIcon(this);
}

void MainWindow::iconActivated(QSystemTrayIcon::ActivationReason reason)
{
    QMessageBox::information(0, tr("Systray"), tr("Testing"));
}

void MainWindow::messageClicked()
{
    QMessageBox::information(0, tr("Systray"),
                             tr("Sorry, I already gave what help I could.\n"
                                "Maybe you should try asking a human?"));
}

MainWindow::~MainWindow()
{
    delete ui;
}
I'm using Qt 4.5.2 on Windows XP SP2. Could this be an issue with Windows XP? Or am I doing something wrong? I don't have a QIcon set for the trayIcon. Is that a problem?
Any help would be appreciated.
Thanks!
Jieren
|
Well if anyone's interested, I found the issue. The problem was actually in the header file.
Here's the one that works:
#ifndef MAINWINDOW_H
#define MAINWINDOW_H

#include <QWidget>
#include <QSystemTrayIcon>

class MainWindow : public QWidget
{
    Q_OBJECT
public:
    MainWindow();

private slots:
    void iconActivated(QSystemTrayIcon::ActivationReason reason);

private:
    QAction *minimizeAction;
    QAction *maximizeAction;
    QAction *restoreAction;
    QAction *quitAction;
    QSystemTrayIcon *trayIcon;

    void createActions();
    void createTrayIcon();
    void messageClicked();
};

#endif // MAINWINDOW_H
iconActivated needs to be declared as a private slot. I had it declared as a private function.
|
1,053,456
| 1,053,462
|
Taking type of a template class
|
Is there a way to take the type of a template class instance? For example:
//i have template function
template<typename T>
IData* createData();
//a template class instance
std::vector<int> a;
//using type of this instance in another template
//part in quotation mark is imaginary of course :D
IData* newData = createData<"typeOf(a)">();
Is it possible in C++? Or is there a shortcut alternative?
|
Yes, use Boost.Typeof:
IData* newData = createData<BOOST_TYPEOF(a)>();
(On GCC you can also use the typeof extension directly.) The new standard (C++0x) will provide a builtin way to do this: decltype.
Note that you could give createData a dummy-argument which the compiler could use to infer the type.
template<typename T>
IData* createData(const T& dummy);
IData* newData = createData(a);
|
1,053,678
| 1,053,766
|
Implementing a generic fixed size array with iterator support
|
I need an array whose size is known at compile time. I know I can use std::vector or boost::array, but that doesn't teach me how it works internally. Also, I couldn't find out how to add items to boost::array other than using the initializer. I have written the following code for a generic array. My intention is to get familiar with iterators, template specializations etc. Following is the code:
template<typename T>
struct iterator_traits
{
    typedef T value_type;
    typedef T& reference_type;
    typedef T* iterator;
    typedef const T* const_iterator;
    typedef std::reverse_iterator<iterator> reverse_iterator;
};

template<typename T>
struct iterator_traits<T*>
{
    typedef T* value_type;
    typedef T*& reference_type;
    typedef T** iterator;
    typedef const T const_iterator;
    typedef std::reverse_iterator<iterator> reverse_iterator;
};

template<typename T, size_t size = 10>
class Array
{
public:
    typedef typename iterator_traits<T>::value_type value_type;
    typedef typename iterator_traits<T>::reference_type reference_type;
    typedef typename iterator_traits<T>::iterator iterator;
    typedef typename iterator_traits<T>::const_iterator const_iterator;
    typedef typename iterator_traits<T>::reverse_iterator reverse_iterator;

    Array() : lastIndex(0) {
    }

    void add(value_type element) {
        if(lastIndex >= size)
            throw std::out_of_range("Array is full");
        array_[lastIndex++] = element;
    }

    reference_type at(unsigned int index){
        if(index < size)
            return array_[index];
        else
            throw std::out_of_range("Invalid index");
    }

    size_t capacity(){
        return size;
    }

    iterator begin(){
        return array_;
    }

    iterator end(){
        return array_ + size;
    }

    const_iterator begin() const{
        return array_;
    }

    const_iterator end() const{
        return array_ + size;
    }

    reverse_iterator rbegin() {
        return reverse_iterator(end());
    }

    reverse_iterator rend() {
        return reverse_iterator(begin());
    }

private:
    value_type array_[size];
    unsigned int lastIndex;
};
The above code works well. Following are my questions
1 - How can I create my array like boost::array does? Something like
Array<int> ints = { 10, 12 };
2 - Are there any pitfalls in the code?
3 - I had to use a specialization for pointer types in traits. Is that the best practice?
4 - Is the iterator pattern implemented correctly?
Any thoughts would be great!
|
1 - How can I create my array like boost::array does? Something like
Array<int> ints = { 10, 12 };
In C++, you can (currently) only use a brace-enclosed initializer list if your struct, union or C-style array meets the criteria of being an aggregate. According to the standard:
8.5.1.1 An aggregate is an array or a class (Clause 9) with no user-provided constructors (12.1), no private or protected non-static data members (Clause 11), no base classes (Clause 10), and no virtual functions (10.3).
You would have to make your class have those features if you wanted to use them in the current standard. The next standard (see here) includes a provision to allow other types to do so.
2 - Are there any pitfalls in the code?
Here is one: the reason you can't add things to a boost::array is that it always has the same number of elements (the size it was allocated with). In your array you can add elements, but you've still default-constructed 10 elements under the hood during construction. This could lead to some surprising results if the user isn't expecting the default constructor to be called 10 times.
|
1,053,986
| 1,054,825
|
can a GC be implemented with C++ raw pointers?
|
I was wondering how a garbage collector could be implemented given C++'s full power of pointer arithmetic. Also, in languages like Java I cannot assign literal addresses to references, while in C++ it is very flexible.
I believe that C# has both, but again, the unsafe pointer in C# is the responsibility of the programmer.
EDIT :: Guys, I am asking whether C++ pointers, as they currently are, can be GCed in theory or not.
|
Pointer arithmetic isn't the fundamental problem. GCs have to deal with pointers being reassigned all the time, and pointer arithmetic is just another example of that. (Of course, if pointer arithmetic between pointers pointing to different buffers were allowed, it would cause problems, but it isn't. The only arithmetic you're allowed to perform on a pointer pointing into an array A is arithmetic that repositions it within that array.)
The real problem is the lack of metadata. A GC has to know what is a pointer and what isn't.
If it encounters the value 0x27a2c230, it has to be able to determine whether it is:
a pointer (in which case it has to follow the pointer and recursively mark the destination as "in use"),
an integer (the same value is a perfectly valid integer; perhaps it's not a pointer at all),
or something else, say, a bit of a string.
It also has to be able to determine the extent of a struct. Assuming that value is a pointer, and it points into another struct, the GC has to be able to determine the size and extent of that struct, so it knows which range of addresses should be scanned for more pointers.
GC'ed languages have a lot of infrastructure to deal with this. C++ doesn't.
Boehm's GC is the closest you can generally get, and it is conservative: if something might be a pointer, the GC assumes it is one, which means some data that should be GC'ed is needlessly kept alive.
Alternatively, of course all this infrastructure could in principle be added to a C++ compiler. There's no rule in the standard that it's not allowed to exist. The problem is that it would be a major performance hit and eliminate a lot of optimization opportunities.
|
1,054,009
| 1,054,060
|
How can I pass MemoryStream data to unmanaged C++ DLL using P/Invoke
|
I need your help with the following scenario:
I am reading some data from hardware into a MemoryStream (C#) and I need to pass this data in memory to a DLL implemented in unmanaged C++ (using a pointer?).
The data read (into the stream) is very large (megabytes). I understand that I can P/Invoke this DLL, but what I am not sure about is how to pass the pointer / reference of the stream data to the C++ API.
I must admit I am confused, as I am new to C#. Do I need to use unsafe / fixed since the data is large, or is that irrelevant since the MemoryStream object is managed by the GC? Some example code / detailed description would be very helpful. Thanks
Signature of unmanaged API:
BOOL doSomething(void * rawData, int dataLength)
|
If it's just expecting bytes you can read the MemoryStream into a byte array and then pass a pointer to that to the method.
You have to declare the external method:
[DllImport("mylibrary.dll", CharSet = CharSet.Auto)]
public static extern bool doSomething(IntPtr rawData, int dataLength);
Then, read the bytes from the MemoryStream into a byte array. Allocate a GCHandle which:
Once allocated, you can use a GCHandle
to prevent the managed object from
being collected by the garbage
collector when an unmanaged client
holds the only reference. Without such
a handle, the object can be collected
by the garbage collector before
completing its work on behalf of the
unmanaged client.
And finally, use the AddrOfPinnedObject method to get an IntPtr to pass to the C++ dll.
private void CallTheMethod(MemoryStream memStream)
{
    byte[] rawData = new byte[memStream.Length];
    memStream.Position = 0; // rewind before reading
    memStream.Read(rawData, 0, (int)memStream.Length); // Read takes an int count
    GCHandle rawDataHandle = GCHandle.Alloc(rawData, GCHandleType.Pinned);
    try
    {
        IntPtr address = rawDataHandle.AddrOfPinnedObject();
        doSomething(address, rawData.Length);
    }
    finally
    {
        if (rawDataHandle.IsAllocated)
            rawDataHandle.Free();
    }
}
|
1,054,171
| 1,054,198
|
What does it take to become a Java expert?
|
I was just reading this thread and wondered if it's easier to become a Java expert than a C++ one. Is it because it's very easy to write wrong code in C++, while in Java you have less flexibility (memory management, for example), which prevents you from writing code horrors? Or is it because C++ is just inherently harder to learn and master? Have you come across a study that distinguishes and characterizes C++ vs Java vs C# coders?
|
What does it take? Like anything, years of practice and countless mistakes.
And even then, your expertise will be in a handful of areas where you solved particular problem domains again and again; i.e., islands of expertise. Middleware and/or frameworks? Threading & concurrency? Maybe Swing/GUI Java applications? Or web applications...
Java has grown to such an extent that nobody can be an expert in ALL that is Java--look at how many packages were added between 1.4 and 1.6 and this should be obvious. Though you have guys that come awfully close (Joshua Bloch comes to mind...).
Edit: If this is in regards to recruiters and HR...shrug off all of the buzzwords, minimum years, and "expert in" and just get a phone interview with the team looking to hire. You need to find out from the technical guy what exactly he's looking for because nearly all recruiters and recruitment agencies are COMPLETELY CLUELESS.
|
1,054,447
| 1,054,451
|
Printing a char* in C++
|
I'm writing a simple program. There is only one class in it. There is a private member 'char * number' and two functions (there will be more, but first these should work correctly :) ).
The first one should copy 'source' into the 'number' variable (and I suppose somewhere here is the problem):
LongNumber::LongNumber(const char * source ){
    int digits = strlen(source);
    char* number = new char[digits+1];
    strcpy( number, source );
    // cout<<number<<endl; - if this line is uncommented,
    // the output is correct and there isn't a problem
}
And a print function:
void LongNumber::print(){
    cout<<number<<endl;
    // when I try to print with the same line of code here... it crashes
}
Sure, I'm missing something... but what?
(As this is my first post... do you think the tags are correct? How would you have tagged the post?)
Thank you in advance :)
|
In the LongNumber constructor you declare a new local variable named number and initialize it with a new char array:
char* number = new char[digits+1];
Instead you should leave out the char*, so that it doesn't look like a new variable declaration and uses the object member variable:
number = new char[digits+1];
With the current code, the member variable number never gets initialized and using it later in print leads to an error.
|
1,054,496
| 1,054,501
|
How to signify to the compiler that a function always throws?
|
When calling functions that always throw from a function returning a value, the compiler often warns that not all control paths return a value. Legitimately so.
void AlwaysThrows() { throw "something"; }

bool foo()
{
    if (cond)
        AlwaysThrows();
    else
        return true; // Warning C4715 here
}
Is there a way to tell the compiler that AlwaysThrows does what it says?
I'm aware that I can add another throw after the function call:
{ AlwaysThrows(); throw "dummy"; }
And I'm aware that I can disable the warning explicitly. But I was wondering if there is a more elegant solution.
|
With Visual C++, you can use __declspec(noreturn).
|
1,054,530
| 1,054,691
|
Use file-only APIs in memory
|
Some APIs only support output to files. e.g. a library that converts a BMP to PNG and only has a Save(file) option - no in memory function. Disk IO is slow, though, and sometimes you just want in-memory operations.
Is there a generic solution to such a problem? Maybe a fake in-memory file of sorts that would allow one to use the library, yet not pay the performance penalty of disk IO?
|
Use named pipes.
Similar constructs exist for both Windows and Unix.
But I don't believe it's worth the effort setting up all those constructs. Choose an alternative library, or just write to disk if you can.
|
1,054,697
| 1,056,738
|
Why isn't my new operator called
|
I wanted to see that a dynamically loaded library (loaded with dlopen etc.) really uses its own new and delete operators and not the ones defined in the calling program. So I wrote the following library.cpp:
#include <exception>
#include <new>
#include <cstdlib>
#include <cstdio>
#include "base.hpp"

void* operator new(size_t size) {
    std::printf("New of library called\n");
    void *p = std::malloc(size);
    if (p == 0)                  // did malloc succeed?
        throw std::bad_alloc();  // ANSI/ISO compliant behavior
    return p;
}

void operator delete(void* p) {
    std::printf("Delete of library called\n");
    std::free(p);
}

class Derived : public Base {
public:
    Derived() : Base(10) { }
};

extern "C" {
    Base* create() {
        return new Derived;
    }

    void destroy(Base* p) {
        delete p;
    }
}
and compiled it with
g++ -g -Wall -fPIC -shared library.cpp -o library.so
or as Employed Russian suggested to try (but in the end nothing changed)
g++ -g -Wall -fPIC -shared -Wl,-Bsymbolic library.cpp -o library.so
The class Base is only holding an int value and a function get_value() to get this value. After that I wrote client.cpp like this
#include <exception>
#include <new>
#include <iostream>
#include <cstdlib>
#include <cstdio>
#include <dlfcn.h>
#include "base.hpp"

void* operator new(size_t size) {
    std::printf("New of client called\n");
    void *p = std::malloc(size);
    if (p == 0)                  // did malloc succeed?
        throw std::bad_alloc();  // ANSI/ISO compliant behavior
    return p;
}

void operator delete(void* p) {
    std::printf("Delete of client called\n");
    std::free(p);
}

typedef Base* create_module_t();
typedef void destroy_module_t(Base *);

int main() {
    void* handle = dlopen("./library.so", RTLD_LAZY);
    if (handle == NULL) {
        std::cout << dlerror() << std::endl;
        return 1;
    }

    create_module_t* create_module = NULL;
    void* func = dlsym(handle, "create");
    if (func == NULL) {
        std::cout << dlerror() << std::endl;
        return 1;
    } else create_module = (create_module_t *)func;

    destroy_module_t* destroy_module = NULL;
    func = dlsym(handle, "destroy");
    if (func == NULL) {
        std::cout << dlerror() << std::endl;
        return 1;
    } else destroy_module = (destroy_module_t *)func;

    Base* a = create_module();
    std::cout << "Value: " << a->get_value() << std::endl;
    destroy_module(a);
    return 0;
}
and compiled it with
g++ -Wall -g -o client -ldl client.cpp
Executing client, I only get "New of client called" and "Delete of client called", even if I use the linker switch -Bsymbolic for the library like Employed Russian suggested.
Now: what went wrong? I thought shared libraries use their own new/delete, and that therefore you have to provide, next to the factory create, a destructor destroy in the library code.
Supplementary question: Why do I need the destroy(Base* p) function? If this function only calls the client's delete operator, I could also do it myself, i.e. "delete a" instead of destroy_module(a) in the next-to-last line.
Answer I found: The library can also provide a new/delete operator pair. So if I first use the library's new and later the client's delete, I could step into a pitfall. Sadly, until now I never saw my library using its own new or delete... so the original question still isn't answered.
Supplement: I'm only referring to the Linux platform.
Edit: The important parts are in the comments to Employed Russian's answer, so here is the main clue in a nutshell: if one calls gcc this way
g++ -Wall -g -fPIC -shared library.cpp -o library.so -Wl,-Bsymbolic
the library will use its own new/delete operators. Otherwise,
g++ -Wall -g -fPIC -shared library.cpp -o library.so
results in a library that uses the new/delete operators of the calling program. Thanks to Employed Russian!
|
The problem is that on most UNIX platforms (unlike on Win32 and AIX) all symbol references by default bind to the first definition of the symbol visible to the runtime loader.
If you define 'operator new' in the main a.out, everything will bind to that definition (as Neil Butterworth's example shows), because a.out is the very first image runtime loader searches.
If you define it in a library which is loaded after libC.so (or libstdc++.so in case you are using GCC), then your definition will never be used. Since you are dlopen()ing your library after the program has started, libC is already loaded by that point, and your library is the very last one the runtime loader will search; so you lose.
On ELF platforms, you may be able to change the default behavior by using -Bsymbolic. From man ld on Linux:
-Bsymbolic
When creating a shared library, bind references to global symbols
to the definition within the shared library, if any. Normally, it
is possible for a program linked against a shared library to override
the definition within the shared library. This option is only meaningful
on ELF platforms which support shared libraries.
Note that -Bsymbolic is a linker flag, not a compiler flag. If using g++, you must pass the flag to the linker like this:
g++ -fPIC -shared library.cpp -o library.so -Wl,-Bsymbolic
|
1,055,195
| 1,055,201
|
Does a FileStream object (.NETCF, C#) created using handle returned from Win32 API CreateFile (C++, P/Invoke) prone to .NET Garbage Collection
|
UPDATED QUESTION
Since this ctor is not supported by .NETCF (public FileStream(IntPtr handle, FileAccess access)), could you please suggest other ways of sharing a large file in memory between managed and unmanaged code on a resource-limited (RAM) platform? Basically I want to map the file into the upper region of the 2GB user space (Win CE 5.0), outside of the process space / heap. How can I do that in C#?
Also, do MemoryStream objects allocate space on the heap or in the memory-mapped region on Win CE 5.0?
thanks...
ORIGINAL QUESTION
I am instantiating a FileStream Object (.NETCF , C#) using a file handle returned by native CreateFile() as below:
//P/Invoke
[DllImport("coredll.dll", SetLastError = true)]
public static extern IntPtr CreateFile(string lpFileName,
uint dwDesiredAccess,
uint dwShareMode,
IntPtr lpSecurityAttributes,
uint dwCreationDisposition,
uint dwFlagsAndAttributes,
IntPtr hTemplateFile);
// File handle received from native Win32 API
IntPtr ptr= CreateFile("myfile.txt",
0,
0,
0,
FileMode.Create,
0,
IntPtr.Zero);
//Instantiate a FileStream object using handle (returned above) as parameter.
FileStream fs = new FileStream(ptr,FileAccess.ReadWrite);
The file will grow to large size > 500 KB or more. So, my questions are:
1) Is there anything wrong with this way of doing things, given that the SafeFileHandle / Handle properties are not supported in the .NETCF version? Is there a better way of doing it (I am planning to use a native memory-mapped file handle with FileStream / MemoryStream)?
2) Does the memory allocated by the FileStream object fall under the .NETCF garbage collector? Or, given that the handle is of a memory-mapped file created using the native API, is it (the managed FileStream object and its resources) outside the purview of the garbage collector?
Thanks in advance.
|
Overall there is nothing wrong with this approach of using a native CreateFile and wrapping the handle in a FileStream object. This is a supported feature of FileStream.
In terms of garbage collection, though, there are really two things at play here:
1. The memory associated with the FileStream object. Yes, this will be garbage collected.
2. The handle, which is a resource created with CreateFile. The FileStream object will take ownership of this handle and will free it when it is disposed (passively or actively).
|
1,055,208
| 1,055,217
|
C++ data structure with lookuptime O(1), like java's hashmap in stl?
|
Is there such a structure in the C++ standard library? I don't have access to anything else, so unordered_map in TR1 can't be used (and Boost etc.).
What I have is a large number of custom class elements (100,000+) which I need to store and access very fast, O(1) on average. I can't use arrays/vectors as the elements will be stored randomly and I don't know the position of an element.
Is my only alternative to implement my own hashmap with only the C++ standard library available?
|
The problem is that the O(1) lookup is not standard. I am unsure about what boost has, but some STL implementations (like sgi) have hash_map. That's what you need.
Here is the documentation.
Just try out:
#include <hash_map>
Keep in mind that if this works, it is not portable... but maybe for now that's OK, and later you can find workarounds.
|
1,055,299
| 1,055,309
|
Is there any way to get some information at least for catch(...)?
|
Is there any way to get at least some information inside of here?
...
catch(...)
{
    std::cerr << "Unhandled exception" << std::endl;
}
I have this as a last resort around all my code. Would it be better to let it crash, because then I at least could get a crash report?
|
No, there isn't any way. Try making all your exception classes derive from one single class, like std::exception, and then catch that one.
You could rethrow in a nested try, though, in an attempt to figure out the type. But then you could as well use a previous catch clause (and ... only as a fall-back).
|
1,055,340
| 1,055,363
|
C++ Meta Templates: A Good or Bad Design Choice?
|
I'm curious to find out if and when C++ meta templates are a good design choice for systems small to large. I understand that they increase your build time in order to speed up your execution time. However, I've heard that the meta template code is inherently hard to understand by many developers, which could be a problem for a large group of people working on a system with a very large code base (millions of lines of code). Where do you think C++ meta templates are useful (or not)?
|
Metaprogramming is just another tool in a (C++) programmers' toolbox - it has many great applications, but like anything can be mis- or over- used. I think it's got a bad reputation in terms of 'hard to use', and I think this mainly comes from the fact that it's a significant addition to the language and so takes a while to learn.
As an example of real-world use; I've used template metaprogramming to implement Compile-time asserts and shim libraries in the past; implementing these without templates would either have been impossible or required significantly more code than I had to write.
In the case of the shim library, it could have been implemented in a classic object-oriented fashion designed to have a similarly low level of code duplication as the templated implementation; however, its runtime performance would have been significantly worse.
If you want to see some good examples of how it can be used, I suggest you read Modern C++ Design by Andrei Alexandrescu (there is a sample chapter on the publisher's website) - IMO this is one of the best books on the subject.
|
1,055,387
| 1,055,407
|
Throw keyword in function's signature
|
What is the technical reason why it is considered bad practice to use the C++ throw keyword in a function signature?
bool some_func() throw(myExc)
{
    ...
    if (problem_occurred)
    {
        throw myExc("problem occurred");
    }
    ...
}
|
No, it is not considered good practice. On the contrary, it is generally considered a bad idea.
http://www.gotw.ca/publications/mill22.htm goes into a lot more detail about why, but the problem is partly that the compiler is unable to enforce this, so it has to be checked at runtime, which is usually undesirable. And it is not well supported in any case. (MSVC ignores exception specifications, except throw(), which it interprets as a guarantee that no exception will be thrown.)
|
1,055,398
| 1,055,445
|
Differences between Conditional variables, Mutexes and Locks
|
For example, the c++0x interfaces.
I am having a hard time figuring out when to use which of these things (condition variable, mutex and lock).
Can anyone please explain or point to a resource?
Thanks in advance.
|
On the page you refer to, "mutex" is the actual low-level synchronizing primitive. You can take a mutex and then release it, and only one thread can take it at any single time (hence it is a synchronizing primitive). A recursive mutex is one which can be taken by the same thread multiple times, and then it needs to be released as many times by the same thread before others can take it.
A "lock" here is just a C++ wrapper class that takes a mutex in its constructor and releases it at the destructor. It is useful for establishing synchronizing for C++ scopes.
A condition variable is a more advanced / high-level form of synchronizing primitive which combines a lock with a "signaling" mechanism. It is used when threads need to wait for a resource to become available. A thread can "wait" on a CV and then the resource producer can "signal" the variable, in which case the threads who wait for the CV get notified and can continue execution. A mutex is combined with CV to avoid the race condition where a thread starts to wait on a CV at the same time another thread wants to signal it; then it is not controllable whether the signal is delivered or gets lost.
|
1,055,413
| 1,055,668
|
Sending a Java UUID to C++ as bytes and back over TCP
|
I'm trying to send a Java UUID to C++, where it will be used as a GUID, then send it back and see it as a UUID, and I'm hoping to send it across as just 16 bytes.
Any suggestions on an easy way to do this?
I've got a complicated way of doing it, sending from Java to C++, where I ask the UUID for its least and most significant bits, write this into a ByteBuffer, and then read it out as bytes.
Here is my silly-complicated way of getting 2 longs out of a UUID, sending them to C++:
Java
public static byte[] asByteArray(UUID uuid)
{
long msb = uuid.getMostSignificantBits();
long lsb = uuid.getLeastSignificantBits();
byte[] buffer = new byte[16];
for (int i = 0; i < 8; i++) {
buffer[i] = (byte) (msb >>> 8 * (7 - i));
}
for (int i = 8; i < 16; i++) {
buffer[i] = (byte) (lsb >>> 8 * (7 - i));
}
return buffer;
}
byte[] bytesOriginal = asByteArray(uuid);
byte[] bytes = new byte[16];
// Reverse the first 4 bytes
bytes[0] = bytesOriginal[3];
bytes[1] = bytesOriginal[2];
bytes[2] = bytesOriginal[1];
bytes[3] = bytesOriginal[0];
// Reverse 6th and 7th
bytes[4] = bytesOriginal[5];
bytes[5] = bytesOriginal[4];
// Reverse 8th and 9th
bytes[6] = bytesOriginal[7];
bytes[7] = bytesOriginal[6];
// Copy the rest straight up
for ( int i = 8; i < 16; i++ )
{
bytes[i] = bytesOriginal[i];
}
// Use a ByteBuffer to switch our ENDIAN-ness
java.nio.ByteBuffer buffer = java.nio.ByteBuffer.allocate(16);
buffer.order(java.nio.ByteOrder.BIG_ENDIAN);
buffer.put(bytes);
buffer.order(java.nio.ByteOrder.LITTLE_ENDIAN);
buffer.position(0);
UUIDComponents x = new UUIDComponents();
x.id1 = buffer.getLong();
x.id2 = buffer.getLong();
C++
google::protobuf::int64 id1 = id.id1();
google::protobuf::int64 id2 = id.id2();
char* pGuid = (char*) &guid;
char* pGuidLast8Bytes = pGuid + 8;
memcpy(pGuid, &id1, 8);
memcpy(pGuidLast8Bytes, &id2, 8);
This works, but seems way too complex, and I can't yet get it working in the other direction.
(I'm using google protocol buffers to send the two longs back and forth)
Alex
|
I got something working.
Instead of sending it across as two longs, I send it across as bytes, here is the Java code:
public static UUID fromBytes( ByteString byteString)
{
byte[] bytesOriginal = byteString.toByteArray();
byte[] bytes = new byte[16];
// Reverse the first 4 bytes
bytes[0] = bytesOriginal[3];
bytes[1] = bytesOriginal[2];
bytes[2] = bytesOriginal[1];
bytes[3] = bytesOriginal[0];
// Reverse 6th and 7th
bytes[4] = bytesOriginal[5];
bytes[5] = bytesOriginal[4];
// Reverse 8th and 9th
bytes[6] = bytesOriginal[7];
bytes[7] = bytesOriginal[6];
// Copy the rest straight up
for ( int i = 8; i < 16; i++ )
{
bytes[i] = bytesOriginal[i];
}
return toUUID(bytes);
}
public static ByteString toBytes( UUID uuid )
{
byte[] bytesOriginal = asByteArray(uuid);
byte[] bytes = new byte[16];
// Reverse the first 4 bytes
bytes[0] = bytesOriginal[3];
bytes[1] = bytesOriginal[2];
bytes[2] = bytesOriginal[1];
bytes[3] = bytesOriginal[0];
// Reverse 6th and 7th
bytes[4] = bytesOriginal[5];
bytes[5] = bytesOriginal[4];
// Reverse 8th and 9th
bytes[6] = bytesOriginal[7];
bytes[7] = bytesOriginal[6];
// Copy the rest straight up
for ( int i = 8; i < 16; i++ )
{
bytes[i] = bytesOriginal[i];
}
return ByteString.copyFrom(bytes);
}
private static byte[] asByteArray(UUID uuid)
{
long msb = uuid.getMostSignificantBits();
long lsb = uuid.getLeastSignificantBits();
byte[] buffer = new byte[16];
for (int i = 0; i < 8; i++) {
buffer[i] = (byte) (msb >>> 8 * (7 - i));
}
for (int i = 8; i < 16; i++) {
buffer[i] = (byte) (lsb >>> 8 * (7 - i));
}
return buffer;
}
private static UUID toUUID(byte[] byteArray) {
long msb = 0;
long lsb = 0;
for (int i = 0; i < 8; i++)
msb = (msb << 8) | (byteArray[i] & 0xff);
for (int i = 8; i < 16; i++)
lsb = (lsb << 8) | (byteArray[i] & 0xff);
UUID result = new UUID(msb, lsb);
return result;
}
Doing it this way, the bytes can be used straight up on the C++ side. I suppose the switching around of the order of the bytes could be done on either end.
C++
memcpy(&guid, data, 16);
|
1,055,452
| 1,055,563
|
C++ Get name of type in template
|
I'm writing some template classes for parsing some text data files, and as such it is likely the great majority of parse errors will be due to errors in the data file, which are for the most part not written by programmers, and so need a nice message about why the app failed to load, e.g. something like:
Error parsing example.txt. Value ("notaninteger") of [MySectiom]Key is not a valid int
I can work out the file, section and key names from the arguments passed to the template function and member vars in the class; however, I'm not sure how to get the name of the type the template function is trying to convert to.
My current code looks like, with specialisations for just plain strings and such:
template<typename T> T GetValue(const std::wstring §ion, const std::wstring &key)
{
std::map<std::wstring, std::wstring>::iterator it = map[section].find(key);
if(it == map[section].end())
throw ItemDoesNotExist(file, section, key)
else
{
try{return boost::lexical_cast<T>(it->second);}
//needs to get the name from T somehow
catch(...)throw ParseError(file, section, key, it->second, TypeName(T));
}
}
I'd rather not have to make specific overloads for every type that the data files might use, since there are loads of them...
Also I need a solution that does not incur any runtime overhead unless an exception occurs, i.e. a completely compile time solution is what I want since this code is called tons of times and load times are already getting somewhat long.
EDIT: Ok this is the solution I came up with:
I have a types.h containing the following
#pragma once
template<typename T> const wchar_t *GetTypeName();
#define DEFINE_TYPE_NAME(type, name) \
template<>const wchar_t *GetTypeName<type>(){return name;}
Then I can use the DEFINE_TYPE_NAME macro in cpp files for each type I need to deal with (e.g. in the cpp file that defined the type to start with).
The linker is then able to find the appropriate template specialisation as long as it was defined somewhere, or throw a linker error otherwise so that I can add the type.
|
Jesse Beder's solution is likely the best, but if you don't like the names typeid gives you (I think gcc gives you mangled names for instance), you can do something like:
template<typename T>
struct TypeParseTraits;
#define REGISTER_PARSE_TYPE(X) template <> struct TypeParseTraits<X> \
{ static const char* name; } ; const char* TypeParseTraits<X>::name = #X
REGISTER_PARSE_TYPE(int);
REGISTER_PARSE_TYPE(double);
REGISTER_PARSE_TYPE(FooClass);
// etc...
And then use it like
throw ParseError(TypeParseTraits<T>::name);
EDIT:
You could also combine the two, change name to be a function that by default calls typeid(T).name() and then only specialize for those cases where that's not acceptable.
|
1,055,471
| 1,132,960
|
Numerical regression testing
|
I'm working on a scientific computing code (written in C++), and in addition to performing unit tests for the smaller components, I'd like to do regression testing on some of the numerical output by comparing to a "known-good" answer from previous revisions. There are a few features I'd like:
Allow comparing numbers to a specified tolerance (for both roundoff error and looser expectations)
Ability to distinguish between ints, doubles, etc, and to ignore text if necessary
Well-formatted output to tell what went wrong and where: in a multi-column table of data, only show the column entry that differs
Return EXIT_SUCCESS or EXIT_FAILURE depending on whether the files match
Are there any good scripts or applications out there that do this, or will I have to roll my own in Python to read and compare output files? Surely I'm not the first person with this kind of requirement.
[The following is not strictly relevant, but it may factor into the decision of what to do. I use CMake and its embedded CTest functionality to drive unit tests that use the Google Test framework. I imagine that it shouldn't be hard to add a few add_custom_command statements in my CMakeLists.txt to call whatever regression software I need.]
|
I ended up writing a Python script to do more or less what I wanted.
#!/usr/bin/env python
import sys
import re
from optparse import OptionParser
from math import fabs
splitPattern = re.compile(r',|\s+|;')
class FailObject(object):
def __init__(self, options):
self.options = options
self.failure = False
def fail(self, brief, full = ""):
print ">>>> ", brief
if options.verbose and full != "":
print " ", full
self.failure = True
def exit(self):
if (self.failure):
print "FAILURE"
sys.exit(1)
else:
print "SUCCESS"
sys.exit(0)
def numSplit(line):
list = splitPattern.split(line)
if list[-1] == "":
del list[-1]
numList = [float(a) for a in list]
return numList
def softEquiv(ref, target, tolerance):
if (fabs(target - ref) <= fabs(ref) * tolerance):
return True
#if the reference number is zero, allow tolerance
if (ref == 0.0):
return (fabs(target) <= tolerance)
#if reference is non-zero and it failed the first test
return False
def compareStrings(f, options, expLine, actLine, lineNum):
### check that they're a bunch of numbers
try:
exp = numSplit(expLine)
act = numSplit(actLine)
except ValueError, e:
# print "It looks like line %d is made of strings (exp=%s, act=%s)." \
# % (lineNum, expLine, actLine)
if (expLine != actLine and options.checkText):
f.fail( "Text did not match in line %d" % lineNum )
return
### check the ranges
if len(exp) != len(act):
f.fail( "Wrong number of columns in line %d" % lineNum )
return
### soft equiv on each value
for col in range(0, len(exp)):
expVal = exp[col]
actVal = act[col]
if not softEquiv(expVal, actVal, options.tol):
f.fail( "Non-equivalence in line %d, column %d"
% (lineNum, col) )
return
def run(expectedFileName, actualFileName, options):
# message reporter
f = FailObject(options)
expected = open(expectedFileName)
actual = open(actualFileName)
lineNum = 0
while True:
lineNum += 1
expLine = expected.readline().rstrip()
actLine = actual.readline().rstrip()
## check that the files haven't ended,
# or that they ended at the same time
if expLine == "":
if actLine != "":
f.fail("Tested file ended too late.")
break
if actLine == "":
f.fail("Tested file ended too early.")
break
compareStrings(f, options, expLine, actLine, lineNum)
#print "%3d: %s|%s" % (lineNum, expLine[0:10], actLine[0:10])
f.exit()
################################################################################
if __name__ == '__main__':
parser = OptionParser(usage = "%prog [options] ExpectedFile NewFile")
parser.add_option("-q", "--quiet",
action="store_false", dest="verbose", default=True,
help="Don't print status messages to stdout")
parser.add_option("--check-text",
action="store_true", dest="checkText", default=False,
help="Verify that lines of text match exactly")
parser.add_option("-t", "--tolerance",
action="store", type="float", dest="tol", default=1.e-15,
help="Relative error when comparing doubles")
(options, args) = parser.parse_args()
if len(args) != 2:
print "Usage: numdiff.py EXPECTED ACTUAL"
sys.exit(1)
run(args[0], args[1], options)
|
1,055,576
| 1,055,889
|
Non-member non-friend functions vs private functions
|
Herb Sutter has said that the most object oriented way to write methods in C++ is using non-member non-friend functions. Should that mean that I should take private methods and turn them into non-member non-friend functions? Any member variables that these methods may need can be passed in as parameters.
Example (before):
class Number {
public:
Number( int nNumber ) : m_nNumber( nNumber ) {}
int CalculateDifference( int nNumber ) { return minus( nNumber ); }
private:
int minus( int nNumber ) { return m_nNumber - nNumber; }
int m_nNumber;
};
Example (after):
int minus( int nLhsNumber, int nRhsNumber ) { return nLhsNumber - nRhsNumber; }
class Number {
public:
Number( int nNumber ) : m_nNumber( nNumber ) {}
int CalculateDifference( int nNumber ) { return minus( m_nNumber, nNumber ); }
private:
int m_nNumber;
};
Am I on the right track? Should all private methods be moved to non-member non-friend functions? What should be rules that would tell you otherwise?
|
I believe in free functions and agree with Sutter, but my understanding is in the opposite direction. It is not that you should have your public methods depend on free functions instead of private methods, but rather that you can build a richer interface outside of the class with free functions by using the provided public interface.
That is, you don't push your privates outside of the class, but rather reduce the public interface to the minimum that allows you to build the rest of the functionality with the least possible coupling: only using the public interface.
In your example, what I would move outside of the class is the CalculateDifference method if it can be represented effectively in terms of other operations.
class Number { // small simple interface: accessor to constant data, constructor
public:
explicit Number( int nNumber ) : m_nNumber( nNumber ) {}
int value() const { return m_nNumber; }
private:
int m_nNumber;
};
Number operator+( Number const & lhs, Number const & rhs ) // Add addition to the interface
{
return Number( lhs.value() + rhs.value() );
}
Number operator-( Number const & lhs, Number const & rhs ) // Add subtraction to the interface
{
return Number( lhs.value() - rhs.value() );
}
The advantage is that if you decide to redefine your Number internals (there is not that much that you can do with such a simple class), as long as you keep your public interface constant then all other functions will work out of the box. Internal implementation details will not force you to redefine all the other methods.
The hard part (not in the simplistic example above) is determining what is the least interface that you must provide. The article (GotW#84), referenced from a previous question here is a great example. If you read it in detail you will find that you can greatly reduce the number of methods in std::basic_string while maintaining the same functionality and performance. The count would come down from 103 member functions to only 32 members. That means that implementation changes in the class will affect only 32 instead of 103 members, and as the interface is kept the 71 free functions that can implement the rest of the functionality in terms of the 32 members will not have to be changed.
That is the important point: it is more encapsulated as you are limiting the impact of implementation changes on the code.
Moving out of the original question, here is a simple example of how using free functions improve the locality of changes to the class. Assume a complex class with really complex addition operation. You could go for it and implement all operator overrides as member functions, or you can just as easily and effectively implement only some of them internally and provide the rest as free functions:
class ReallyComplex
{
public:
ReallyComplex& operator+=( ReallyComplex const & rhs );
};
ReallyComplex operator+( ReallyComplex const & lhs, ReallyComplex const & rhs )
{
ReallyComplex tmp( lhs );
tmp += rhs;
return tmp;
}
It can be easily seen that no matter how the original operator+= performs its task, the free operator+ performs its duty correctly. Now, with any and all changes to the class, operator+= will have to be updated, but the external operator+ will be untouched for the rest of its life.
The code above is a common pattern, while usually instead of receiving the lhs operand by constant reference and creating a temporary object inside, it can be changed so that the parameter is itself a value copy, helping the compiler with some optimizations:
ReallyComplex operator+( ReallyComplex lhs, ReallyComplex const & rhs )
{
lhs += rhs;
return lhs;
}
|
1,055,627
| 1,055,634
|
Strange stdout behavior in C++
|
I want my program to display the unix windmill while processing. There's a for loop, and in every iteration there's a printf function:
printf("Fetching articles (%c)\r",q);
q is one of the characters in the windmill (-\|/) depending on the iteration number.
The problem is that it seems like in 100 iterations there are only two changes in the displayed line, and every iteration takes about one second to complete.
What could be the cause of this?
Here's the whole loop with only two possible chars for the windmill:
for (int i=0;i<numb_articles;i++) {
memset(file_path,0x0,BUFF_SIZE);
url=article_urls[i];
if (rules->print!=NO_PRINT) {
url=modify_url(url,rules->printout,rules->print);
if (url=="NULL")
continue;
}
get_page(url,file_content);
if (strcmp(rules->save.data(),"NULL")!=0)
if (!check_save(rules->save,file_content,url))
continue;
at_least_one_saved=true;
numb_articles_accepted++;
encoding_list[i]=get_encoding(file_content);
title=get_title(file_content,err_msg);
if (title=="")
continue;
title_list[i]=strdup(title.data());
filename=get_filename(title);
int count=numb_fn_found(filename_list,i,filename.data());
char *tmp = new char[10];
if (count>0) {
sprintf(tmp,"(%d)",count);
filename.insert((size_t)filename.length(),tmp);
}
filename_list[i]=strdup(filename.data());
char q;
if (i%2==0)
q='|';
else q='-';
printf("Fetching articles (%c)\r",q);
ofstream output_file;
sprintf(file_path,TMP_FILE,filename.data());
strncat(file_path,".html",5);
output_file.open(file_path);
output_file << file_content;
output_file.close();
}
|
Flush the output after writing each line:
printf("Fetching articles (%c)\r",q);
fflush(stdout);
Without doing this, normally stdout is buffered and only dumps its output when a newline is seen, or its internal buffer fills up.
|
1,055,643
| 1,080,822
|
Crash with boost::thread
|
I am using wxwidgets together with boost::thread. The Thread is a worker thread which sends some Events to the GUI:
Thread creation:
thrd = boost::thread(boost::bind(workerFunction,this));
Send Message to the GUI:
wxPostEvent(loWindow, event);
wxSafeYield();
Under Windows I don't see any problems, but when starting the application under Linux (Ubuntu 8.10), it stops with the following error message:
_XCBUnlockDisplay: Assertion `xcb_get_request_sent(dpy->xcb->connection) == dpy->request' failed.
Aborted
What am I missing? When the workerFunction is not started in a thread, it works without problems.
Regards,
/mspoerr
|
The problem was with the data I sent - for complex data you need to use custom events. I now implemented a custom event and it works.
For more information please see http://forums.wxwidgets.org/viewtopic.php?t=24663
Thank you for your help!
/mspoerr
EDIT: Updated the link. The old one was broken
|
1,055,661
| 1,055,686
|
Bigint (bigbit) library
|
I'm looking for a c++ class/library that provides 1024 bit and bigger integers and bit operations like:
- bit shifting,
- bitwise OR/AND,
- position first zero bit
speed is crucial, so it would have to be implemented with some SIMD assembly.
|
There are several, including GMP, but for speed, the best is likely TTmath. TTmath's design decision to use templated fixed lengths at compile time lets it be quite fast.
|
1,055,744
| 1,056,028
|
Using WTL with Codeblocks
|
I want to try WTL, but the problem is I can't use Visual Studio for this, so I have Codeblocks on my side. Is there any way I can use WTL with Codeblocks? I mean, what configuration/settings do I need for this?
Is it possible to use WTL with Codeblocks?
Just to be clear first: I tried Google for this, with no satisfactory success, so I'm asking here.
How can I configure Codeblocks for WTL?
Any suggestions, pointers are welcome.
|
I don't think it is possible unless you use codeblocks with the MS compiler AND get the version of Windows SDK that contains ATL (new ones don't, AFAIK). WTL is built on top of ATL.
|
1,055,756
| 1,055,762
|
C++ dynamic memory detail
|
I'm a C and Java programmer, so memory allocation and OOP aren't anything new to me. But, I'm not sure about how exactly to avoid memory leaks with C++ implementation of objects. Namely:
string s1("0123456789");
string s2 = s1.substr(0,3);
s2 now has a new string object, so it must be freed via:
delete &s2;
Right?
Moreover, am I correct to assume that I'll have to delete the address for any (new) object returned by a function, regardless of the return type not being a pointer or reference? It just seems weird that an object living on the heap wouldn't be returned as a pointer when it must be freed.
|
No,
Both s1 and s2 will get destructed when out of scope.
s1.substr() will create a temporary object that you don't have to think of.
|
1,056,244
| 1,060,324
|
std::map and performance, intersecting sets
|
I'm intersecting some sets of numbers, and doing this by storing a count of each time I see a number in a map.
I'm finding the performance be very slow.
Details:
- One of the sets has 150,000 numbers in it
- The intersection of that set and another set takes about 300ms the first time, and about 5000ms the second time
- I haven't done any profiling yet, but every time I break into the debugger while doing the intersection it's in malloc.c!
So, how can I improve this performance? Switch to a different data structure? Some how improve the memory allocation performance of map?
Update:
Is there any way to ask std::map or
boost::unordered_map to pre-allocate
some space?
Or, are there any tips for using these efficiently?
Update2:
See Fast C++ container like the C# HashSet<T> and Dictionary<K,V>?
Update3:
I benchmarked set_intersection and got horrible results:
(set_intersection) Found 313 values in the intersection, in 11345ms
(set_intersection) Found 309 values in the intersection, in 12332ms
Code:
int runIntersectionTestAlgo()
{
set<int> set1;
set<int> set2;
set<int> intersection;
// Create 100,000 values for set1
for ( int i = 0; i < 100000; i++ )
{
int value = 1000000000 + i;
set1.insert(value);
}
// Create 1,000 values for set2
for ( int i = 0; i < 1000; i++ )
{
int random = rand() % 200000 + 1;
random *= 10;
int value = 1000000000 + random;
set2.insert(value);
}
set_intersection(set1.begin(),set1.end(), set2.begin(), set2.end(), inserter(intersection, intersection.end()));
return intersection.size();
}
|
I figured something out: if I attach the debugger to either RELEASE or DEBUG builds (e.g. hit F5 in the IDE), then I get horrible times.
|
1,056,254
| 1,057,722
|
How to maintain a weak pointer to a parent in C++?
|
Is there a standard way of maintaining a weak pointer to a parent (which is created using a shared pointer) in a child object in C++?
Essentially, I need to implement something on the lines of the following:
Class B;
Class A
{
...
private:
B m_b;
};
Class B
{
....
public:
void SetParentPtr(const boost::shared_ptr<A>& a)
{
m_parentPtr = a;
}
private:
boost::weak_ptr<A> m_parentPtr;
};
In the above all instances of class B need to hold a weak pointer to their parent (i.e object of class A). Class A objects are instantiated using a shared_ptr. I can think of a solution that uses a null deleter. But is that a standard way of doing something like this?
|
There is an implicit conversion to weak_ptr, so you can use
void SetParentPtr(boost::weak_ptr<A> a) { }
directly.
check also boost::shared_from_this so the parent can give a pointer to himself without storing a weak_ptr explicitly.
Otherwise, this seems like a normal way to have a back-pointer. Just check whether there is a real added value in using back-pointers.
|
1,056,364
| 1,056,370
|
How to generate a OS independent path in c++
|
I have a destination path and a file name as strings and I want to concatenate them with c++.
Is there a way to do this and let the program/compiler choose between / and \ for windows or unix systems?
|
If you wanted to do it at compile time you could certainly do something like
#ifdef WIN32
#define OS_SEP '\\'
#else
#define OS_SEP '/'
#endif
Or you could just use '/' and things will work just fine on windows (except for older programs that parse the string and only work with '\'). It only looks funny if displayed to the user that way.
|
1,056,411
| 1,056,442
|
How to pass variable number of arguments to printf/sprintf
|
I have a class that holds an "error" function that will format some text. I want to accept a variable number of arguments and then format them using printf.
Example:
class MyClass
{
public:
void Error(const char* format, ...);
};
The Error method should take in the parameters, call printf/sprintf to format it and then do something with it. I don't want to write all the formatting myself so it makes sense to try and figure out how to use the existing formatting.
|
void Error(const char* format, ...)
{
va_list argptr;
va_start(argptr, format);
vfprintf(stderr, format, argptr);
va_end(argptr);
}
If you want to manipulate the string before you display it and really do need it stored in a buffer first, use vsnprintf instead of vsprintf. vsnprintf will prevent an accidental buffer overflow error.
|
1,056,691
| 1,056,712
|
How to start modification with big projects
|
I have to do enhancements to an existing C++ project with above 100k lines of code.
My question is: how and where do I start with such projects?
The problem increases further if the code is not well documented.
Are there any automated tools for studying code flow with large projects?
Thanx,
|
There's a book for you: Working Effectively with Legacy Code
It's not about tools, but about various approaches, processes and techniques you can use to better understand and make changes to the code. It is even written from a mostly C++ perspective.
|
1,056,879
| 1,056,891
|
How to represent 18bit color depth to 16bit color depth?
|
I'm porting software that was built for 16-bit color depth to 18-bit color depth. How can I convert the 16-bit colors to 18-bit colors? Thanks.
|
Without knowing the device, I can only speculate. Devices are typically Red, Green, Blue so each color would get 6 bits of variation. That means 64 variations of each color and a total of 262,144 colors.
Any bitmap can be scaled to this display. If you take each component (say, red), normalize it, then multiply by 64, you'll have the scaled version.
If you are asking something else or want more detail, please ask.
Update:
There are two 16-bit bitmap formats. One is 5-5-5 (5 bits per color component) and the other is 5-6-5 (green gets an extra bit). I'll assume 5-5-5 for the sake of this conversion.
For each color within each pixel you need something like this:
NewColor = (oldColor/32.0)*64
This will turn the old color into a number between 0.0 and 1.0 and then scale it up to the new value range.
|
1,056,911
| 1,058,917
|
C++ classes as instance variables of an Objective-C class
|
I need to mix Objective-C and C++. I would like to hide all the C++ stuff inside one class and keep all the others plain Objective-C. The problem is that I want to have some C++ classes as instance variables. This means they have to be mentioned in the header file, which gets included by other classes and C++ starts spreading to the whole application. The best solution I was able to come with so far looks like this:
#ifdef __cplusplus
#import "cppheader.h"
#endif
@interface Foo : NSObject
{
id regularObjectiveCProperty;
#ifdef __cplusplus
CPPClass cppStuff;
#endif
}
@end
This works. The implementation file has an mm extension, so that it gets compiled as Objective-C mixed with C++, the #ifdef unlocks the C++ stuff and there we go. When some other, purely Objective-C class imports the header, the C++ stuff is hidden and the class does not see anything special. This looks like a hack, is there a better solution?
|
This sounds like a classic use for an interface/@protocol. Define an objective-c protocol for the API and then provide an implementation of that protocol using your Objective-C++ class. This way clients need only know about the protocol and not the header of the implementation. So given the original implementation
@interface Foo : NSObject
{
id regularObjectiveCProperty;
CPPClass cppStuff;
}
@end
I would define a protocol
//Extending the NSObject protocol gives the NSObject
// protocol methods. If not all implementations are
// descended from NSObject, skip this.
@protocol IFoo <NSObject>
// Foo methods here
@end
and modify the original Foo declaration to
@interface Foo : NSObject <IFoo>
{
id regularObjectiveCProperty;
CPPClass cppStuff;
}
@end
Client code can then work with type id<IFoo> and does not need to be compiled as Objective-C++. Obviously you can pass an instance of Foo to these clients.
|
1,056,921
| 1,057,046
|
How to represent from 16bit color depth to 18bit color depth?
|
Possible Duplicate:
How to represent 18bit color depth to 16bit color depth?
I'm porting software built for a 16-bit color depth device to an 18-bit color depth device. How can I represent the 18-bit color depth? Thanks.
|
NOTE: This is just an edit of my answer to your previous question, since the two are so similar - you just have to reverse the process.
You will first of all have to access each of the colour components (i.e. extract the R value, the G value, and the B value). The way to do this will depend totally on the way that the colour is stored in memory. If it is stored as RGB, with 5-6-5 bits for the components, you could use something like this:
blueComponent = (colour & 0x1F);
greenComponent = (colour >> 5) & 0x3F;
redComponent = (colour >> 11) & 0x1F;
This will extract the colour components for you, then you can use the method outlined in my other answer to scale each of the components (I presume 18-bit will use 6 bits per component, so green already has the right width):
blueComponent = (blueComponent / 32.0) * 64;
//greenComponent = (greenComponent / 64.0) * 64; //not needed
redComponent = (redComponent / 32.0) * 64;
Note that with the above code, it is important that you use 32.0, or some other method that will ensure that your program represents the number as a floating point number. Then to combine the components into your 18-bit colour, use:
colour = (redComponent << 12) | (greenComponent << 6) | blueComponent;
|
1,056,964
| 1,056,998
|
Crash on assigning NULL to a pointer in C++(Qtopia-Core-4.3.3) on Linux
|
When,i try to assign null value to my pointer,it sometimes crashes on that line. The code is
something like this :
if(s_counter != NULL)
{
delete s_counter;
s_counter = NULL; // it sometimes crashes here.
}
This is not reproducable,100%,but it occurs frequently. Can anybody help in this?
|
Note that although it is OK to delete a NULL pointer, it is not necessarily OK to delete a non-NULL pointer. The pointer must have been allocated with new and must not already have been deleted. Note also that assigning NULL to deleted pointers can add to a false sense of security - simply checking for NULL is not enough; your program's memory allocation semantics need to be correct too.
|
1,057,158
| 1,057,173
|
A question about printf arguments. C/C++
|
We have the following code fragment:
char tab[2][3] = {'1', '2', '\0', '3', '4', '\0'};
printf("%s\n", tab);
And I don't understand why we don't get an error / warning in the call to printf. I DO get a warning but not an error, and the program runs fine. It prints '12'.
printf is expecting an argument of type char *, i.e. a pointer to char. So if I declared char arr[3], then arr is an address of a memory unit which contains a char, so if I called printf with it, it would decay to pointer to char, i.e. char *.
Analogously, tab is an address of a memory unit that contains the type array of 3 char's which is in turn, an address of memory unit contains char, so tab will decay to char **, and it should be a problem, since printf is expecting a char *.
Can someone explain this issue?
Addendum:
The warning I get is:
a.c:6: warning: char format, different type arg (arg 2)
|
Example Source
#include <stdio.h>
int main( void ) {
char tab[2][3] = {'1', '2', '\0', '3', '4', '\0'};
printf("%s\n", tab);
return 0;
}
Compile Warning
$ gcc test.c
test.c: In function ‘main’:
test.c:5: warning: format ‘%s’ expects type ‘char *’, but argument 2 has type ‘char (*)[3]’
Pointers are Pointers
The %s argument to printf indicates to the function that it will be receiving a pointer (to a string). A string, in C, is merely a series of bytes terminated by an ASCII-Z. The tab[2][3] variable is a pointer. Some compilers will issue a warning about the pointer mismatch. However, the code should still print out 12 because printf's code traverses memory starting at the pointer it was given (printing characters as it goes) until it finds a zero byte. The 1, 2, and \0 are contiguously set in memory, starting at the address represented by the tab variable.
Experiment
As an experiment, what happens when you compile and run the following code:
#include <stdio.h>
int main( void ) {
char tab[2][3] = {'1', '2', '\0', '3', '4', '\0'};
printf("%s\n", tab[1]);
return 0;
}
Don't be afraid of experimenting. See if you can come up with the answer based on what you now know. How would you reference tab now (in light of the experiment) to get rid of the warning and still display 12?
|
1,057,221
| 1,057,245
|
What are practical uses of a protected constructor?
|
Why would anyone declare a constructor protected? I know that constructors are declared private for the purpose of not allowing their creation on stack.
|
When a class is (intended as) an abstract class, a protected constructor is exactly right. In that situation you don't want objects to be instantiated from the class but only use it to inherit from.
There are other use cases too, such as when a certain set of construction parameters should be limited to derived classes.
|
1,057,425
| 1,057,451
|
Excessive use of `this` in C++
|
I'm dealing with a large code base that uses the following construct throughout
class MyClass
{
public:
void f(int x);
private:
int x;
};
void MyClass::f(int x)
{
'
'
this->x = x;
'
'
}
Personally, I'd always used and hence prefer the form
class MyClass
{
public:
void f(int x);
private:
int _x;
};
void MyClass::f(int x)
{
'
'
_x = x;
'
'
}
The reasons I prefer the latter are that it is more succinct (less code = fewer potential bugs), and that I don't like having multiple variables of the same name in scope at the same time where I can avoid it. That said, I am seeing the former usage more and more often these days. Is there any upside to the this-> approach that I am unaware of (e.g. effect on compile time, use with templated code, etc.)? Are the advantages of either approach significant enough to merit a refactor? The reason I ask is that while I don't like the this-> approach being present in the code, the amount of effort and the associated risk of introducing further bugs don't quite merit a refactor.
|
Your version is a bit cleaner, but while you're at it, I would:
Avoid leading underscore: _x is ok until somebody chooses _MyField which is a reserved name. An initial underscore followed by a capital letter is not allowed as a variable name. See: What are the rules about using an underscore in a C++ identifier?
Make the attribute private or protected: the change is safe if it compiles, and you'll ensure your setter will be used.
The this-> form has its uses, for example in templated code, to make the field name dependent on your type (this can solve some lookup issues).
A small example of name resolutions which are fixed by using an explicit this-> (tested with g++ 3.4.3):
#include <cstdlib>
#include <iostream>
#include <ostream>
class A
{
public:
int g_;
A() : g_(1) {}
const char* f() { return __FUNCTION__; }
};
const char* f() { return __FUNCTION__; }
int g_ = -1;
template < typename Base >
struct Derived : public Base
{
void print_conflicts()
{
std::cout << f() << std::endl; // Calls ::f()
std::cout << this->f() << std::endl; // Calls A::f()
std::cout << g_ << std::endl; // Prints global g_
std::cout << this->g_ << std::endl; // Prints A::g_
}
};
int main(int argc, char* argv[])
{
Derived< A >().print_conflicts();
return EXIT_SUCCESS;
}
|
1,057,637
| 1,057,673
|
C++ distributed programming
|
Is there any library for distributed in-memory cache, distributed tasks, publish/subscribe messaging? I have used Hazelcast in Java, I would like something similar.
I know that Memcached is an in-memory cache and even distributed, but it is missing the messaging and remote task.
I just need something to coordinate a cluster of servers without resorting to traditional RPC and socket programming.
|
MPI might be what you want:
http://en.wikipedia.org/wiki/Message_Passing_Interface
There are C++ hooks available in boost:
http://www.boost.org/doc/libs/1_39_0/doc/html/mpi.html
Here is an informative podcast about Open-MPI, which is an implementation of MPI:
http://twit.tv/floss50
|
1,057,724
| 1,057,788
|
What happens if you increment an iterator that is equal to the end iterator of an STL container
|
What if I increment an iterator by 2 when it points to the last element of a vector? In this question, asking how to advance an iterator into an STL container by 2 elements, two different approaches are offered:
either use a form of arithmetic operator - +=2 or ++ twice
or use std::advance()
I've tested both of them with VC++ 7 for the edge case when the iterator points to the last element of the STL container or beyond:
vector<int> vec;
vec.push_back( 1 );
vec.push_back( 2 );
vector<int>::iterator it = vec.begin();
advance( it, 2 );
bool isAtEnd = it == vec.end(); // true
it++; // or advance( it, 1 ); - doesn't matter
isAtEnd = it == vec.end(); //false
it = vec.begin();
advance( it, 3 );
isAtEnd = it == vec.end(); // false
I've often seen the advice to compare against vector::end() when traversing a vector or other containers:
for( vector<int>::iterator it = vec.begin(); it != vec.end(); it++ ) {
//manipulate the element through the iterator here
}
Obviously, if the iterator is advanced past the last element inside the loop, the comparison in the for-loop condition will never evaluate to false and the loop will happily continue into undefined behaviour.
Do I get it right that if I ever use advance() or any kind of increment operation on an iterator and make it point past the container's end I will be unable to detect this situation? If so, what is the best practice - not to use such advancements?
|
Following is the quote from Nicolai Josuttis book:
Note that advance() does not check
whether it crosses the end() of a
sequence (it can't check because
iterators in general do not know the
containers on which they operate).
Thus, calling this function might
result in undefined behavior because
calling operator ++ for the end of a
sequence is not defined
In other words, the responsibility of maintaining the iterator within the range lies totally with the caller.
|
1,058,051
| 1,058,358
|
Boost serialization performance: text vs. binary format
|
Should I prefer binary serialization over ascii / text serialization if performance is an issue?
Has anybody tested it on a large amount of data?
|
I used boost.serialization to store matrices and vectors representing lookup tables, plus some metadata (strings), with an in-memory size of about 200 MB. IIRC, loading from disk into memory took 3 minutes with the text archive vs. 4 seconds with the binary archive on WinXP.
|
1,058,117
| 1,060,191
|
Extending Visual Studio 2003 C++ debugger using autoexp.dat and DLL
|
I know a solution would be to use VS 2005 or 2008, but that's not an option at the moment. I am supposed to write an extension to the VS 2003 C++ debugger to improve the way it displays data in the watch window. The main reason I am using a DLL rather than just the basic autoexp.dat functionality is that I want to be able to display things conditionally. I.e. I want to be able to say "If the name member is not an empty string, display name, otherwise display [some other member]"
I can't find much documentation online, either from MS or other people who have used (or tried to use) this part of VS 2003. The MSDN EEaddin sample was an ok start, but very basic and didn't really help me get very far.
So far I am just learning my way around it, learning how to display various types without knowing exactly what kind of types I will be working on in the end.
I have managed (through much trial and error) to get the DLL to display pointer-to-basic-type members, string members, pointer-to-user-defined-type members and auto_ptr<int> members. At the moment I am having trouble displaying vector members and auto_ptr<string> members.
(I found a page that says auto_ptrs are deprecated, but I need to be able to display them properly as they are used in the code my extension is meant for.)
My question is: has anyone done this kind of thing and do you have or know of some helpful documentation on the subject?
Thanks!
[update:]
I've worked out why I'm having trouble with the auto_ptr<string> class members. It's because of the way the string class is implemented: it has a char[16] buffer where it stores the data for short strings, and a char* pointer if the string is longer than 15 characters plus the termination character. I'm not sure whether it's possible, or worth trying, to hack into that to display longer strings (I can display strings that are short enough to be saved in the char[16] buffer).
|
This article might help you:
http://msdn.microsoft.com/en-us/library/aa730838(VS.80).aspx
|
1,058,295
| 1,058,368
|
Learning more about distributed computing
|
I'm interested in learning more about distributed computing and how to do it - mostly in C++ but I'd be interested in C# as well.
Can someone please recommend some resources? I know very little to nothing about the topic so where should I start?
Thanks.
|
Distributed computing encompasses quite a lot of areas. Is there a specific class of problem you are looking to solve?
If you are just starting off you might want to do some background reading before getting into language specifics. You could start from Wikipedia. The paper on the Fallacies of Distributed Computing is quite well known and would give an interesting read.
|
1,058,444
| 1,058,465
|
How to Connect SVN server in C++?
|
I want to connect to an SVN server and download one file to my computer using C++. How can I do this?
|
You have a few options:
Set up WebDAV and use HTTP.
Use the SVN client library and integrate using it.
I've done both approaches in the past. The SVN client library is actually quite easy to use.
Edit
The subversion client library is described in Version Control with Subversion. Pay particular attention to Chapter 3 and Chapter 8. I started by grabbing a source snapshot from subversion.apache.org and reading the source code for the command line client. It's in <root>/subversion/clients/cmdline. I found that the source is easy enough to follow.
|
1,058,897
| 1,058,921
|
Can C++/CLI be used to call .NET code from native C++ applications?
|
I've done the other way around (calling pure C++ code from .NET) with C++/CLI, and it worked (for the most part).
How is the native-to-C++/CLI direction done?
I really don't want to use COM interop...
|
You can always host the CLR in your native app.
|
1,059,024
| 1,059,139
|
Is there a pattern which names managing evil static_casts
|
The following code is a simplified version of what I use for event dispatching. The essential point is
that there is a static_cast<T*> on the argument of a template functor and another class makes
sure that the argument passed to the functor is what the static_cast casts to.
struct AbstractArg {
virtual ~AbstractArg() { }
};
struct AbstractFunctor {
virtual void operator()(AbstractArg*) const = 0;
virtual ~AbstractFunctor() { }
};
namespace evil {
template<typename ArgT>
struct CastingFunctor : AbstractFunctor {
void operator()(AbstractArg* aarg) const
{
ArgT* arg = static_cast<ArgT*>(aarg); // Danger!
// ... do stuff with it
}
};
} // namespace evil
class TypeidManager
{
public:
typedef std::map<std::string, AbstractFunctor*> map_type;
void dispatch(AbstractArg* arg) const
{
map_type::const_iterator it = typeid_map.find(std::string(typeid(*arg).name()));
if (it != typeid_map.end())
{
AbstractFunctor* pFunctor = it->second;
(*pFunctor)(arg);
}
}
template<typename ArgT>
void register_func( void (*fun_ptr)(ArgT*))
{
typeid_map.insert(std::make_pair(std::string(typeid(ArgT).name()),
new evil::CastingFunctor<ArgT>));
}
private:
map_type typeid_map;
};
Is there a name for this pattern?
Thanks!
|
This isn't an anti-pattern, it's a really useful technique often used with type erasure.
|
1,059,062
| 1,059,076
|
In C++, how to get the address in which the value of a class instance's field is stored?
|
class A {
public: int i;
};
A *a = new A();
How to get the address of a->i? I tried &a->i and also &(a->i) but those generate compile time errors:
"left of '.i' must have class/struct/union type"
|
You have not provided the same code you tried to compile. Always copy and paste. The tells in your code are that you don't have a syntactically correct class declaration or variable declaration, and that your error message talks about ".i" when you've claimed you've only used a->i. Here's working code:
#include <stdio.h>
class A {
public:
int i;
};
int main() {
A* a = new A();
int* i = &a->i;
printf("a: %p\na->i: %p\n", a, i);
return 0;
}
Ultimately, the syntax you say you tried for getting the address of the member was correct. The syntax the error message says you tried was a.i. That doesn't work, and for the reason the error message gave. The variable a is not a class, struct, or union type. Rather, it's a pointer to one of those types. You need to dereference the pointer to get at the member.
When I run it, I get this:
$ ./a.out
a: 40748
a->i: 40748
The addresses are the same because A is a simple class, so this output is to be expected. The first member is frequently placed at the very start of a class's memory. Add a second member variable to the class and get its address; you should see different values then.
|
1,059,167
| 1,192,831
|
MFC Control in a Qt Tab Widget
|
I'm working on a project that is using the Qt/MFC Migration Framework and I'm trying to reuse some existing MFC controls inside of a Qt dialog.
Does anyone know if it is possible to insert an MFC control (CDialog or CWnd) inside of a QTabWidget. Right now we're doing the opposite, we have an MFC dialog with a tab control which is populated with a mix of MFC tabs (CDialog) and Qt tabs (QWinWidget). However, this approach is giving me a headache because the QWinWidget controls are not properly being drawn nor are they receiving focus or keyboard input correctly. I am hoping that using a Qt dialog with a QTabWidget will work better than this approach.
|
Seeing as you use QWinWidget, you must have come across QWinHost? Simply use QWinHost for the pages of a QTabWidget:
HWND w = ...;
QTabWidget * tw = new QTabWidget;
QWinHost * wh = new QWinHost;
wh->setWindow( w );
tw->addTab( wh, tr("Page with Windows Control") );
|
1,059,200
| 1,059,219
|
true isometric projection with opengl
|
I am a newbie in OpenGL programming with C++ and not very good at mathematics. Is there a simple way to have isometric projection?
I mean the true isometric projection, not the general orthogonal projection.
(Isometric projection happens only when projections of unit X, Y and Z vectors are equally long and angles between them are exactly 120 degrees.)
Code snippets are highly appreciated.
|
Try using gluLookAt
glClearColor(0.0, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
/* use this length so that camera is 1 unit away from origin */
double dist = sqrt(1 / 3.0);
gluLookAt(dist, dist, dist, /* position of camera */
0.0, 0.0, 0.0, /* where camera is pointing at */
0.0, 1.0, 0.0); /* which direction is up */
glMatrixMode(GL_MODELVIEW);
glBegin(GL_LINES);
glColor3d(1.0, 0.0, 0.0);
glVertex3d(0.0, 0.0, 0.0);
glVertex3d(1.0, 0.0, 0.0);
glColor3d(0.0, 1.0, 0.0);
glVertex3d(0.0, 0.0, 0.0);
glVertex3d(0.0, 1.0, 0.0);
glColor3d(0.0, 0.0, 1.0);
glVertex3d(0.0, 0.0, 0.0);
glVertex3d(0.0, 0.0, 1.0);
glEnd();
glFlush();
The result is an isometric view of the three coordinate axes (screenshot omitted).
We can draw a cube to check that parallel lines are indeed parallel
glPushMatrix();
glTranslated(0.5, 0.5, 0.5);
glColor3d(0.5, 0.5, 0.5);
glutWireCube(1);
glPopMatrix();
|
1,059,330
| 1,059,704
|
why is set_intersection in STL so slow?
|
I'm intersecting a set of 100,000 numbers and a set of 1,000 numbers using set_intersection in STL, and it's taking 21s, where it takes 11ms in C#.
C++ Code:
int runIntersectionTestAlgo()
{
set<int> set1;
set<int> set2;
set<int> intersection;
// Create 100,000 values for set1
for ( int i = 0; i < 100000; i++ )
{
int value = 1000000000 + i;
set1.insert(value);
}
// Create 1,000 values for set2
for ( int i = 0; i < 1000; i++ )
{
int random = rand() % 200000 + 1;
random *= 10;
int value = 1000000000 + random;
set2.insert(value);
}
set_intersection(set1.begin(),set1.end(), set2.begin(), set2.end(), inserter(intersection, intersection.end()));
return intersection.size();
}
C# Code:
static int runIntersectionTest()
{
Random random = new Random(DateTime.Now.Millisecond);
Dictionary<int,int> theMap = new Dictionary<int,int>();
List<int> set1 = new List<int>();
List<int> set2 = new List<int>();
// Create 100,000 values for set1
for ( int i = 0; i < 100000; i++ )
{
int value = 1000000000 + i;
set1.Add(value);
}
// Create 1,000 values for set2
for ( int i = 0; i < 1000; i++ )
{
int value = 1000000000 + (random.Next() % 200000 + 1);
set2.Add(value);
}
// Now intersect the two sets by populating the map
foreach( int value in set1 )
{
theMap[value] = 1;
}
int intersectionSize = 0;
foreach ( int value in set2 )
{
int count;
if ( theMap.TryGetValue(value, out count ) )
{
intersectionSize++;
theMap[value] = 2;
}
}
return intersectionSize;
}
}
|
On this ancient 3GHz Pentium 4, I get 2734 milliseconds for the entire runIntersectionTestAlgo function, in a debug build with optimizations disabled. I compiled with VS2008 SP1.
If I enable optimizations, I get 93 milliseconds.
Here's my code:
#include <set>
#include <algorithm>
using namespace std;
int runIntersectionTestAlgo()
{
set<int> set1;
set<int> set2;
set<int> intersection;
// Create 100,000 values for set1
for ( int i = 0; i < 100000; i++ )
{
int value = 1000000000 + i;
set1.insert(value);
}
// Create 1,000 values for set2
for ( int i = 0; i < 1000; i++ )
{
int random = rand() % 200000 + 1;
random *= 10;
int value = 1000000000 + random;
set2.insert(value);
}
set_intersection(set1.begin(),set1.end(), set2.begin(), set2.end(), inserter(intersection, intersection.end()));
return intersection.size();
}
#include <windows.h>
#include <iostream>
int main(){
DWORD start = GetTickCount();
runIntersectionTestAlgo();
DWORD span = GetTickCount() - start;
std::cout << span << " milliseconds\n";
}
Disabling _SECURE_SCL made no difference for the release build, which still hovered around the 100 ms.
GetTickCount isn't ideal, of course, but it should be good enough to distinguish 21 seconds from less than 100 milliseconds.
So I conclude that there's something wrong with your benchmarks.
|
1,059,372
| 1,059,393
|
C++ code and objects from C?
|
Is there a simple way to work with C++ objects directly from C?
I want to expose some classes from C++ to C or to an FFI (foreign function interface).
Sure, I can write stuff like this:
class Foo{
....
};
void *make_foo(...){
Foo *ptr = new Foo(..)
return static_cast<void *>(ptr);
}
..
int *foo_method1(void *fooptr, ...){
Foo *ptr = static_cast<Foo*>(fooptr);
}
But is there a simpler method?
|
That, in general, is the simplest method.
Remember that you'll need to use extern "C" on all of your C "wrapper" functions as well.
|
1,059,432
| 1,059,533
|
Call different classes in different time in same function
|
I just cannot imagine a way to call a function with genericity. I have code in which I have to call a function in two different classes at different moments.
I have classes A and B, which I access at one time or another. Either I access A or I access B - never both at the same time.
I have coded this program but I just cannot figure out how to do this, or whether it is good for performance or coding style. I just want to eliminate the C class but I don't know how.
Any idea?
class MyClass
{
public:
MyClass() {} // default constructor; should not be used on its own
virtual int a() = 0;
virtual int b() = 0;
int c()
{
return b();
}
};
class A : public MyClass
{
public:
int a() { return 1; }
int b() { return 1; }
int d() { return 1; }
};
class B : public MyClass
{
public:
int a() { return 1; }
int b() { return 1; }
int e() { return 1; }
};
class C
{
public:
A ca;
B cb;
enum TIPO { A, B };
TIPO Tipo;
C(TIPO tipo) { Tipo = tipo; }
int a()
{
switch(Tipo)
{
case A:
return ca.a();
break;
case B:
return cb.b();
break;
default:
break;
}
}
};
void main()
{
C c(C::B);
c.a();
return;
}
|
As you've written 'A' and 'B', you don't actually need the C class. By declaring your member functions "virtual" you are using run time polymorphism and this will result in the "correct" functions being called:
void foo (MyClass & mc) {
mc.a ();
}
int main () {
A a;
B b;
foo (a); // 'mc.a()' will call 'A::a'
foo (b); // 'mc.a()' will call 'B::a'
}
Is there some other reason that you need the C class?
|
1,059,544
| 1,059,758
|
SetThreadLocale and UTF8
|
So I want to use SetThreadLocale to set a threads codepage to UTF8. Up to now, I've been using the second parameter of atl string conversion macros like "CT2A(szBUF, CP_UTF8)" to do this. But I want to be able to set the thread codepage once in the beginning with SetThreadLocale() and never have to use the second parameter of the conversion macro again.
How do I do this? SetThreadLocale won't take a code page parameter like CP_UTF8, just an LCID. What parameters should I be feeding SetThreadLocale to achieve this??
Keep in mind, I have no particular language in mind. The strings I get could be Japanese, Korean, English, etc. So far, I'm having no problems with this mix of strings when specifying CP_UTF8 as the second parameter of a conversion macro. You may ask, "well then why not just keep using the second parameter?" Answer: because it can easily be forgotten by team members working on the code. It would be nice if it would just work correctly using the default one-parameter version of the conversion macro.
|
SetThreadLocale expects a language identifier, but UTF-8 is not a language identifier - it's a Unicode encoding. One of the purposes of the language ID is to tell the system how to treat ANSI text in the range 128-255. Given a real language, its code page will be used when dealing with such characters. UTF-8, OTOH, is a compressed representation of Unicode text. In order to create UTF-8 text, your input has to be Unicode. Given ANSI text, you just won't know how to convert the upper range of characters. This is why, when done "manually", in order to convert ANSI to UTF-8 you have to first use MultiByteToWideChar with a specified codepage, and only then can you convert the resulting Unicode string to UTF-8.
Now, back to your question - I would go another way. If the additional codepage param bugs you that much, make a macro that will hide it or so (or inherit the CT2A class and have the second param fixed).
|
1,059,545
| 2,412,962
|
How to use stdext::hash_map where the key is a custom object?
|
Using the STL C++ hash_map...
class MyKeyObject
{
std::string str1;
std::string str2;
bool operator==(...) { this.str1 == that.str1 ... }
};
class MyData
{
std::string data1;
int data2;
std::string etcetc;
};
like this...
MyKeyObject a = MyKeyObject(...);
MyData b = MyData(...);
stdext::hash_map <MyKeyObject, MyData> _myDataHashMap;
_myDataHashMap[ a ] = b;
I get a whole load of errors. Here are the first three...
Error 1 error C2784: 'bool
std::operator <(const
std::_Tree<_Traits> &,const
std::_Tree<_Traits> &)' : could not
deduce template argument for 'const
std::_Tree<_Traits> &' from 'const
MyKeyObject' c:\program files\microsoft
visual studio
8\vc\include\functional 143
Error 2 error C2784: 'bool
std::operator <(const
std::basic_string<_Elem,_Traits,_Alloc>
&,const _Elem *)' : could not deduce
template argument for 'const
std::basic_string<_Elem,_Traits,_Alloc>
&' from 'const
Tasking::MyKeyObject' c:\program
files\microsoft visual studio
8\vc\include\functional 143
Error 3 error C2784: 'bool
std::operator <(const _Elem *,const
std::basic_string<_Elem,_Traits,_Alloc>
&)' : could not deduce template
argument for 'const _Elem *' from
'const MyDataObject' c:\program
files\microsoft visual studio
8\vc\include\functional 143
...
If I set the key to something simple like an int all is well.
What am I doing wrong?! Maybe I need to do something with templates?
Is there a better (quicker?) way of accessing data using a custom key object like this?
|
Try the following, worked for me in VS 2005. This is a solution for both VS2005 built-in hash_map type in stdext namespace as well as the boost unordered_map (preferred). Delete whichever you don't use.
#include <boost/unordered_map.hpp>
#include <hash_map>
class HashKey
{
public:
HashKey(const std::string& key)
{
_key=key;
}
HashKey(const char* key)
{
_key=key;
}
// for boost and stdext
size_t hash() const
{
// your own hash function here
size_t h = 0;
std::string::const_iterator p, p_end;
for(p = _key.begin(), p_end = _key.end(); p != p_end; ++p)
{
h = 31 * h + (*p);
}
return h;
}
// for boost
bool operator==(const HashKey& other) const
{
return _key == other._key;
}
std::string _key;
};
// for boost
namespace boost
{
template<>
class hash<HashKey>
{
public :
std::size_t operator()(const HashKey &mc) const
{
return mc.hash();
}
};
}
// for stdext
namespace stdext
{
template<>
class hash_compare<HashKey>
{
public :
static const size_t bucket_size = 4;
static const size_t min_buckets = 8;
size_t operator()(const HashKey &mc) const
{
return mc.hash();
}
bool operator()(const HashKey &mc1, const HashKey &mc2) const
{
return (mc1._key < mc2._key);
}
};
}
int _tmain(int argc, _TCHAR* argv[])
{
{
stdext::hash_map<HashKey, int> test;
test["one"] = 1;
test["two"] = 2;
}
{
boost::unordered_map<HashKey, int> test(8); // optional default initial bucket count 8
test["one"] = 1;
test["two"] = 2;
}
return 0;
}
|
1,059,727
| 1,060,185
|
C++ code in iPhone app
|
I'm trying to use a C++ library (CLucene) from my Cocoa Touch iPhone application using Xcode 3.1.3. Everything works fine when I run in the iPhone simulator, but things get strange when I run on device. It seems like pointers aren't being passed correctly from the Objective-C++ code (my app) to the C++ library (CLucene).
While debugging the app on device, I can watch a const char* variable passed as a parameter to a C++ function change from 0x12546c0 in Objective-C++ to 0x4e in C++. Since 0x4e doesn't point to a valid const char*, the C++ code fails. This doesn't happen when debugging in the simulator.
I'm compiling the C++ library directly into the app, not linking to a static or dynamic lib.
Any help would be much appreciated.
|
Disabling "Compile for Thumb" in the project's build settings fixes the problem.
|
1,059,824
| 1,060,029
|
Interface/Superclass for Collections/Containers in c++
|
I'm coming from the Java world and am building a small C++ program at the moment.
I have an object that does some work and then returns the result of the work as a list.
Now, a day later, I changed the behavior of the object to save the results in a set to avoid duplicates in the container. But I can't simply return the set, because I used a list for the interface the first time.
Is there a common container interface that I can use to specify the interface of my object and forget about the container type I use internally?
At the moment I'm creating a set adding all the values and then creating a list from the set:
return std::list<foo>(this->mySet.begin(), this->mySet.end())
Seems a little strange.
|
The concept of a container is embodied by iterators.
As you have seen, hard-coding a specific container type is probably not what you want. So make your class return iterators; you can then reuse the container's iterators.
class MyClass
{
private:
typedef std::list<int> Container;
public:
typedef Container::iterator iterator;
typedef Container::const_iterator const_iterator;
iterator begin() {return myData.begin();}
const_iterator begin() const {return myData.begin();}
iterator end() {return myData.end();}
const_iterator end() const {return myData.end();}
private:
Container myData;
};
Now when you change the Container type from std::list to std::set nobody needs to know.
Also by using the standard names that other containers use your class starts to look like any other container from the STL.
Note: A method that returns a const_iterator should be a const method.
|
1,059,838
| 1,059,938
|
P/Invoke Struct with Pointers, C++ from C#
|
I'm attempt to call a C++ dll with a struct and function like
struct some_data{
int size,degree,df,order;
double *x,*y,lambda;
};
extern "C"{
__declspec(dllexport) double *some_func(some_data*);
}
from C#:
[System.Runtime.InteropServices.StructLayoutAttribute(System.Runtime.InteropServices.LayoutKind.Sequential)]
public struct SOME_DATA
{
public int size;
public int degree;
public int df;
public int order;
public System.IntPtr x;
public System.IntPtr y;
public double lambda;
}
[System.Runtime.InteropServices.DllImportAttribute("mydll.dll",EntryPoint="some_func")]
public static extern System.IntPtr some_func(ref SOME_DATA someData);
public IntPtr some_funcCall(){
double[] x = new double[] { 4, 4, 7, 7 };
double[] y = new double[] { 2, 10, 4, 22 };
SOME_DATA someData = new SOME_DATA();
someData.x = Marshal.AllocHGlobal(x.Length * Marshal.SizeOf(typeof(double)));
Marshal.Copy(x, 0, someData.x, x.Length);
someData.y = Marshal.AllocHGlobal(y.Length * Marshal.SizeOf(typeof(double)));
Marshal.Copy(y, 0, someData.y, y.Length);
someData.size = 50;
someData.degree = 3;
someData.df = 50;
someData.order = 4;
someData.lambda = 1;
return some_func(ref someData);
}
I thought I was pretty dang close, but when I run this, the program just quits at the return statement.
Any ideas where I've gone wrong?
Thanks,
Mark
|
Have you debugged into some_func? Try attaching windbg to the program, or use VS - but make sure to catch all exceptions and also enable mixed-mode debugging.
|
1,060,337
| 1,060,929
|
Why does my STL code run so slowly when I have the debugger/IDE attached?
|
I'm running the following code, using Visual Studio 2008 SP1, on Windows Vista Business x64, quad core machine, 8gb ram.
If I build a release build, and run it from the command line, it reports 31ms. If I then start it from the IDE, using F5, it reports 23353ms.
Here are the times: (all Win32 builds)
DEBUG, command line: 421ms
DEBUG, from the IDE: 24,570ms
RELEASE, command line: 31ms
RELEASE, from IDE: 23,353ms
code:
#include <windows.h>
#include <iostream>
#include <set>
#include <algorithm>
using namespace std;
int runIntersectionTestAlgo()
{
set<int> set1;
set<int> set2;
set<int> intersection;
// Create 100,000 values for set1
for ( int i = 0; i < 100000; i++ )
{
int value = 1000000000 + i;
set1.insert(value);
}
// Create 1,000 values for set2
for ( int i = 0; i < 1000; i++ )
{
int random = rand() % 200000 + 1;
random *= 10;
int value = 1000000000 + random;
set2.insert(value);
}
set_intersection(set1.begin(),set1.end(), set2.begin(), set2.end(), inserter(intersection, intersection.end()));
return intersection.size();
}
int main(){
DWORD start = GetTickCount();
runIntersectionTestAlgo();
DWORD span = GetTickCount() - start;
std::cout << span << " milliseconds\n";
}
|
Running under a Microsoft debugger (windbg, kd, cdb, Visual Studio Debugger) by default forces Windows to use the debug heap instead of the default heap. On Windows 2000 and above, the default heap is the Low Fragmentation Heap, which is insanely good compared to the debug heap. You can query the kind of heap you are using with HeapQueryInformation.
To solve your particular problem, you can use one of the many options recommended in this KB article: Why the low fragmentation heap (LFH) mechanism may be disabled on some computers that are running Windows Server 2003, Windows XP, or Windows 2000
For Visual Studio, I prefer adding _NO_DEBUG_HEAP=1 to Project Properties->Configuration Properties->Debugging->Environment. That always does the trick for me.
|
1,060,382
| 1,060,449
|
Login and use rails from C++
|
I need to use a rails app from C++. I say login in the title because that's one of my options.
As far as I see it, I either need to do the standard login, and keep track of a session or something in the C++ code, or use an API token of sorts, and just pass that on every URL and never actually create a session on the rails side (which uses restful_authentication).
Are those my only two options? Are there any nice C++ libs to deal with RESTful services or specifically rails?
The C++ side of things is on Windows btw.
|
It may be lower-level than you're looking for, but I believe you should be able to accomplish this sort of thing with libcurl (and, potentially, libxml if you need an HTML or XML parser to handle return values).
|
1,060,483
| 1,060,681
|
Dependency Injection in C++
|
How do I implement dependency injection in C++ explicitly, without using frameworks or reflection?
I could use a factory to return an auto_ptr or a shared_ptr. Is this a good way to do it?
|
Just use a shared_ptr to the service you need, and make a setter to it. E.g.:
class Engine;
class Car {
public:
void setEngine(shared_ptr<Engine> p_engine) {
this->m_engine = p_engine;
}
int onAcceleratorPedalStep(int p_gas_pedal_pressure) {
this->m_engine->setFuelValveIntake(p_gas_pedal_pressure);
int torque = this->m_engine->getTorque();
int speed = ... //math to get the car speed from the engine torque
return speed;
}
protected:
shared_ptr<Engine> m_engine;
}
// (now must create an engine and use setEngine when constructing a Car on a factory)
Avoid using auto_ptr, because you can't share it through more than one object (it transfers ownership when assigning).
|
1,060,679
| 1,061,362
|
Cross Platform Flash Player Embedding
|
I need to embed the Flash player in a native application (C++) in a cross platform way (at least Windows and Mac OSX). I need to allow the Flash gui to make calls back to the native application to do things that Flash normally can’t do (e.g. write to the file system, talk to devices, loading native image processing libraries, etc). The Adobe AIR runtime is too restrictive so it is unfortunately not an option. I’ve used ActiveX hosting in Windows previously, but is there a cross platform gui toolkit that solves this problem for both Windows and OSX? If not what are my options for embedding Flash on OSX?
EDIT: Must support Actionscript 3.0
|
Another option is MDM Zinc. Win and OSX aren't 100% equal, and you should make sure it will do everything you need, but it may work for you.
|
1,060,991
| 1,067,029
|
Compiling C++ program on Fedora
|
I'm having problems compiling an open source C++ project on Fedora. When I download and run the ./configure I eventually end up with....
.
.
.
checking dynamic linker characteristics... GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
./configure: line 15513: AX_CFLAGS_WARN_ALL: command not found
./configure: line 15514: AX_CXXFLAGS_WARN_ALL: command not found
checking for flex... flex
checking lex output file root... lex.yy
checking lex library... -lfl
checking whether yytext is a pointer... yes
checking for bison... bison
./configure: line 15784: AX_PROG_GPERF: command not found
checking trace option enabled... no
checking for getrusage... yes
checking time profiling enabled... no
checking poll.h usability... yes
checking poll.h presence... yes
checking for poll.h... yes
checking forcing use of select... no
checking use pipes to communication between scheduler and dispatcher... no
./configure: line 16280: syntax error near unexpected token `1.39.0'
./configure: line 16280: `AX_BOOST_BASE(1.39.0)'
When I compiled the equivalent project in Windows I did need to install and update project references to Boost. I can see that this is related to boost but not sure why I would get a syntax error.
A few other things to note, the original source code in configure and configure.ac had references to boost 1.3.5 with the same compile errors (obviously with 1.3.5 in the error msg).
I have recently installed boost 1.3.9 and updated the source. Also note that when I tried to
yum install boost
it reports I have 1.3.7 installed and that is the latest version. I did also try 1.3.7 inside the source code but I get the same problem. I just don't get why I would get a syntax error!
this is the code inside configure and configre.ac that throws the error
#BOOST
AX_BOOST_BASE(1.39.0)
AX_BOOST_THREAD
Any thoughts on where to go next would be great.
TIA
G
|
I think your 'open source project' requires a later version of autoconf/aclocal
than the version installed.
'AX_CFLAGS_WARN_ALL', ..., 'AX_BOOST_BASE' are all autoconf macros (from the autoconf macro archive) which
would be correctly expanded if you had a newer version of autoconf with those macros installed.
|
1,061,060
| 1,062,290
|
Embedding multiple, identically named resource (RC) files in a native DLL
|
For my application (an MMC snap-in) I need to create a single native DLL containing strings that are localized into different languages. In other words, if you were to inspect this DLL with Visual Studio, you would see multiple string tables, each associated with a different locale but containing the same string IDs.
The approach I would like to take is to have various subdirectories under my project directory such as "de", "en", "es", etc (i.e. one for each language). Inside each subdirectory would be a file called "Resources.rc" which would be the RC file containing the strings for that language. Having my resources in this structure would be ideal for the localisation team.
I have managed to create my various RC files and have added them to my Visual C++ project. They all appear correctly in Solution Explorer in Visual Studio (you basically see five instances of an entry called "Resource.rc", but each entry points to a different file).
The problem comes with building my project. It seems that only a single one of the RC files (the one that is specified first in the vcproj file) is compiled into a RES file and included into my DLL. Presumably this is because Visual Studio does not like the fact that the RC files all have the same name.
Is there any way to achieve what I want?
Thanks!
|
Yes. And No.
If you want multiple RC files you are going to have to leverage the operating system's support for multiple resource locales in one file. In the resource editor, for each resource, you can set its locale, AND the resource editor will allow you to have multiple resources with the same ID, as long as their locales are different.
So, your first step would be to edit each of the RC files to ensure that the resources in one are English/US, another contains French etc.
Next, get the main RC file to #include the others.
Lastly, and this is the problem, you need to rely on the operating system's logic to load the correct resources. If you are happy to let the locale of the PC determine which UI language is used, you have done enough.
If you want to provide a menu option allowing users to change languages: SetThreadLocale used to be an easy way to switch the locale of loaded resources on the current thread. Since Windows 2000, some unfortunate overloaded usage of the API has caused MS to deprecate its usage in favor of requiring app developers to always use FindResourceEx - which really doesn't help you if you want, for example, to load a string from a string table.
If you really want an easy way to have a user-selectable UI language, then you need to place each of your .rc files into a separate DLL, and then call LoadLibrary on the appropriate language resource DLL.
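As a sketch, the combined resource script might look like this (the string ID and the texts are hypothetical; each table carries the same IDs under a different LANGUAGE statement):

```rc
// hypothetical combined script: same string IDs, different locales
LANGUAGE LANG_ENGLISH, SUBLANG_ENGLISH_US
STRINGTABLE
BEGIN
    IDS_GREETING "Hello"
END

LANGUAGE LANG_GERMAN, SUBLANG_GERMAN
STRINGTABLE
BEGIN
    IDS_GREETING "Hallo"
END
```

At load time the resource loader picks the table whose locale best matches the calling thread's locale.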
|
1,061,152
| 1,061,183
|
C++ std::set comparator
|
This is the code:
struct comp
{
bool operator()(Reputation *one, Reputation *two)
{
if (one->Amount < 0 && two->Amount >= 0)
return false;
if (one->Amount >= 0 && two->Amount < 0)
return true;
if (one->Amount >= 0)
return one->Amount <= two->Amount;
else
return one->Amount >= two->Amount;
}
};
And this is the problem:
Debug Assertion Failed!
File: ..\VC\include\xtree
Line: 638
Expression: invalid operator<
After that, I can choose "Abort", "Retry" or "Ignore". If I choose ignore many more (identical ones) come up but it ends up working perfectly.
The problem seems to occur when I insert a Reputation with ->Amount == to one of the Reputation *'s previously inserted, but I'm not sure about this last one.
Any help would be greatly appreciated
EDIT: The order I want them ordered in is first the positive ones in asc order, then the negative ones in desc order. Example: 1 5 10 11 11 20 50 -1 -5 -50
|
You must define a relation that's irreflexive, just like < -- therefore, change the <= to < and the '>=' to '>' in the last couple of comparisons in your method. This is what VC++ is diagnosing.
Moreover, given a correctly coded, <-like operator, if two items a and b are such that a < b and b < a are both false, those items are considered equivalent and thus only one will be inserted in the set (it's not material whether the items could be distinguished by some other comparison: only the equivalence relationship implied by the comparator matters).
|
1,061,169
| 1,063,308
|
boost serialization vs google protocol buffers?
|
Does anyone with experience with these libraries have any comment on which one they preferred? Were there any performance differences or difficulties in using?
|
I've played around a little with both systems, nothing serious, just some simple hackish stuff, but I felt that there's a real difference in how you're supposed to use the libraries.
With boost::serialization, you write your own structs/classes first, and then add the archiving methods, but you're still left with some pretty "slim" classes, that can be used as data members, inherited, whatever.
With protocol buffers, the amount of code generated for even a simple structure is pretty substantial. The generated structs and code are meant more as a transport format: you use protocol buffers' functionality to move data to and from your own internal structures.
|
1,061,246
| 1,061,252
|
How can I check if a window has WS_VISIBLE to set? (or if is visible)
|
How can I do it? It's an external window, not from my program. Thanks
|
Do you have an HWND to the window? If not, then you will need to obtain the window handle somehow, for example through FindWindow() (or FindWindowEx()).
Once you have the HWND to the window, call IsWindowVisible().
|
1,061,291
| 1,061,714
|
does cdb/windbg have an equivalent to autoexp.dat?
|
I'd like to change the way some types are displayed using either 'dt' or '??' in a manner similar to how you can do that with autoexp.dat. Is there a way to do this?
For example, I have a structure something like this:
struct Foo
{
union Bar
{
int a;
void *p;
} b;
};
And I've got an array of a few hundred of these, all of which I know point to a structure Bar. Is there any way to tell cdb that, in this expression anyway, that 'p' is a pointer to Bar? This is the kind of thing you could do with autoexp. (The concrete example here is that I've got a stashtable that can have keys of any type, but I know they keys are strings. the implementation stores them as void pointers).
Thanks in advance!
|
I don't think there's anything as simple as autoexp.dat.
You have a couple potential options - you can write a simple script file with the debugger commands to dump the data structure in the way you want and use the "$<filename" command (or one of its variants). Combined with user aliases you can get this to be pretty easy and natural to use.
The second option is quite a bit more involved, but with it comes much more power - write an extension DLL that dumps your data structure. For something like what you're talking about this is probably overkill. But you have immense power with debugger extensions (in fact, much of the power that comes in the Debugging tools package is implemented this way). The SDK is packaged with the debugger, so it's easy to determine if this is what you might need.
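For the first option, the script can be very short. A hypothetical sketch (the file name is made up, and it assumes the union is Foo's first member, so poi() of a Foo address yields the stored void* -- adjust the type name and offsets to your real layout):

```text
$$ dumpkey.txt -- hypothetical cdb/windbg script
$$ usage:  $$>a< dumpkey.txt <address-of-Foo>
$$ reinterpret the void* stored in Foo.b.p as a Bar
dt -r Bar poi(${$arg1})
```

Paired with an alias (the "as" command), this becomes nearly as convenient as a built-in display rule.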
|
1,061,387
| 1,061,418
|
Why do we use "type * var" instead of "type & var" when defining a pointer?
|
I'm relatively new to C++ (about one year of experience, on and off). I'm curious about what led to the decision of type * name as the syntax for defining pointers. It seems to me that the syntax should be type & name as the & symbol is used everywhere else in code to refer to the variable's memory address. So, to use the traditional example of int pointers:
int a = 1;
int * b = &a;
would become
int a = 1;
int & b = &a
I'm sure there's some reason for this that I'm just not seeing, and I'd love to hear some input from C++ veterans.
Thanks,
-S
|
C++ adopts the C syntax. As revealed in "The Development of the C Language" (by Dennis Ritchie) C uses * for pointers in type declarations because it was decided that type syntax should follow use.
For each object of [a compound type], there was already a way to mention the underlying object: index the array, call the function, use the indirection operator [*] on the pointer. Analogical reasoning led to a declaration syntax for names mirroring that of the expression syntax in which the names typically appear. Thus,
int i, *pi, **ppi;
declare an integer, a pointer to an integer, a pointer to a pointer to an integer. The syntax of these declarations reflects the observation that i, *pi, and **ppi all yield an int type when used in an expression.
Here's a more complex example:
int *(*foo)[4][];
This declaration means an expression *(*foo)[4][0] has type int, and from that (and that [] has higher precedence than unary *) you can decode the type: foo is a pointer to an array of size 4 of array of pointers to ints.
This syntax was adopted in C++ for compatibility with C. Also, don't forget that C++ has a use for & in declarations.
int & b = a;
The above line declares a reference variable referring to another variable of type int. The difference between a reference and a pointer, roughly, is that a reference must be initialized when declared, cannot be reseated to refer to something else, and is always dereferenced automatically.
int x = 5, y = 10;
int& r = x;
int sum = r + y; // you do not need to say '*r' automatically dereferenced.
r = y; // careful: this assigns the value of 'y' to 'x'; 'r' cannot be reseated to refer to 'y'
|
1,061,448
| 1,064,153
|
Boost date add one day, non standard GMT string
|
In C++ what is the simplest way to add one day to a date in this format:
"20090629-05:57:43"
Probably using Boost 1.36 - Boost::date, Boost::posix_date or any other boost or std library functionality, I'm not interested in other libraries.
So far I came up with:
format the string (split date and time parts as string op) to be able to initialize boost::gregorian::date, date expects format like:
"2009-06-29 05:57:43"
I have
"20090629-05:57:43"
add one day (boost date_duration stuff)
convert back to_simple_string and append the time part (string operation)
Is there any easier/niftier way to do this?
I am looking at run time efficiency.
Example code for the above steps:
using namespace boost::gregorian;
string orig("20090629-05:57:43");
string dday(orig.substr(0,8));
string dtime(orig.substr(8));
date d(from_undelimited_string(dday));
date_duration dd(1);
d += dd;
string result(to_iso_string(d) + dtime);
result:
20090630-05:57:43
|
That's pretty close to the simplest method I know of. About the only way to simplify it further would be using facets for the I/O stuff, to eliminate the need for string manipulation:
#include <iostream>
#include <sstream>
#include <locale>
#include <boost/date_time.hpp>
using namespace boost::local_time;
int main() {
std::stringstream ss;
local_time_facet* output_facet = new local_time_facet();
local_time_input_facet* input_facet = new local_time_input_facet();
ss.imbue(std::locale(std::locale::classic(), output_facet));
ss.imbue(std::locale(ss.getloc(), input_facet));
local_date_time ldt(not_a_date_time);
input_facet->format("%Y%m%d-%H:%M:%S");
ss.str("20090629-05:57:43");
ss >> ldt;
output_facet->format("%Y%m%d-%H:%M:%S");
ss.str(std::string());
ss << ldt;
std::cout << ss.str() << std::endl;
}
That's longer, and arguably harder to understand, though. I haven't tried to prove it, but I suspect it would be about equal in runtime efficiency.
|
1,061,543
| 1,061,555
|
Whats the syntax to use boost::pool_allocator with boost::unordered_map?
|
I'm just experimenting with boost::pool to see if its a faster allocator for stuff I am working with, but I can't figure out how to use it with boost::unordered_map:
Here is a code snippet:
unordered_map<int,int,boost::hash<int>, fast_pool_allocator<int>> theMap;
theMap[1] = 2;
Here is the compile error I get:
Error 3 error C2064: term does not evaluate to a function taking 2 arguments C:\Program Files (x86)\boost\boost_1_38\boost\unordered\detail\hash_table_impl.hpp 2048
If I comment out the use of the map, e.g. "theMap[1] = 2" then the compile error goes away.
|
It looks like you are missing a template parameter.
template<typename Key, typename Mapped, typename Hash = boost::hash<Key>,
typename Pred = std::equal_to<Key>,
typename Alloc = std::allocator<std::pair<Key const, Mapped> > >
The fourth parameter is the predicate for comparison, the fifth is the allocator.
unordered_map<int, int, boost::hash<int>,
std::equal_to<int>, fast_pool_allocator<int> > theMap;
Also, but probably not the cause of your issue, you need to separate the two '>' at the end of the template instantiation.
|
1,061,578
| 1,061,591
|
Method to return to beginning of function
|
Does C++ have any type of utility to return to the beginning of a function after a function call? For example, the call to help() in the calculate function.
void help()
{
cout << "Welcome to this annoying calculator program.\n";
cout << "You can add(+), subtract(-), multiply(*), divide(/),\n";
cout << "find the remainder(%), square root(sqrt()), use exponents(pow(x,x)),\n";
cout << "use parentheses, assign variables (ex: let x = 3), and assign\n";
cout << " constants (ex: const pi = 3.14). Happy Calculating!\n";
return;
}
void clean_up_mess() // purge error tokens
{
ts.ignore(print);
}
const string prompt = "> ";
const string result = "= ";
void calculate()
{
while(true) try {
cout << prompt;
Token t = ts.get();
if (t.kind == help_user) help();
else if (t.kind == quit) return;
while (t.kind == print) t=ts.get();
ts.unget(t);
cout << result << statement() << endl;
}
catch(runtime_error& e) {
cerr << e.what() << endl;
clean_up_mess();
}
}
While technically my implementation of a help function works fine, it's not perfect. After help is called, and returns, it proceeds with trying to cout << result << statement() << endl; which isn't possible because no values have been entered. Thus it gives a little error message (elsewhere in the program) and then proceeds on with the program. No problem with functionality, but it's ugly and I don't like it (:P).
So is there any way for when the help function returns, to return to the beginning of calculate and start over? (I played around with inserting a function call in if(t.kind == help_user) block to call calculate, but as I figured that just delays the problem rather than solving it.)
|
You can use goto, but the moment you do that, consider yourself in trouble. It's considered bad practice, and good uses of it are few and far between.
I think what you're looking for is continue:
void do_calculate(void)
{
while (true)
{
cout << prompt;
Token t = ts.get();
if (t.kind == help_user)
{
help();
continue; // <- here
}
else if (t.kind == quit)
{
return;
}
while (t.kind == print)
{
t = ts.get();
}
ts.unget(t);
cout << result << statement() << endl;
}
}
void calculate()
{
try
{
do_calculate();
}
catch (const std::exception& e)
{
cerr << e.what() << endl;
clean_up_mess();
}
}
I have also reformatted your code. I think this is more readable, to each his own but just wanted to let you compare.
The try/catch clause is now no longer interfering with the calculation function.
The 'if' statements use brackets for consistency. Also, it's much easier to read, because I know whatever the if is controlling is within these brackets.
The catch will catch a std::exception rather than a runtime_error. All standard exceptions inherit from std::exception, so by catching that you can catch any of them.
|
1,061,611
| 1,061,617
|
C++ - Calling a function inside a class with the same name as the class
|
I was trying to write up a class in c++, and I came across a rather odd problem: calling outside functions inside of a class that have the same name as the class. It's kinda confusing, so here's an example:
void A(char* D) {
printf(D);
}
class A
{
public:
A(int B);
void C();
};
A::A(int B) {
// something here
}
void A::C() {
A("Hello, World.");
}
The compiler complains at the second to last line that it can't find a function A(char*), because it is inside the class, and the constructor has the same name as the function. I could write another function outside, like:
void outsideA(char* D) {
A(D);
}
And then call outsideA inside of A::C, but this seems like a silly solution to the problem. Anyone know of a more proper way to solve this?
|
::A("Hello, world.");
should work fine. Basically it is saying "use the A found in the global namespace"
|
1,061,634
| 1,061,660
|
How do I know who holds the shared_ptr<>?
|
I use boost::shared_ptr in my application in C++. The memory problem is really serious; the application takes a large amount of memory.
However, because I put every newed object into a shared_ptr, no memory leak can be detected when the application exits.
There must be something like a std::vector<shared_ptr<> > pool holding the resources. How can I find out who holds the shared_ptr when debugging?
It is hard to review the code line by line. Too much code...
|
You can't know, by only looking at a shared_ptr, where the "sibling pointers" are. You can test if one is unique() or get the use_count(), among other methods.
|
1,061,726
| 1,062,393
|
Public virtual function derived private in C++
|
I was trying to figure out what happens when a derived class declares a virtual function as private. The following is the program that I wrote
#include <iostream>
using namespace std;
class A
{
public:
virtual void func() {
cout<<"A::func called"<<endl;
}
private:
};
class B:public A
{
public:
B()
{
cout<<"B constructor called"<<endl;
}
private:
void func() {
cout<<"B::func called"<<endl;
}
};
int main()
{
A *a = new B();
a->func();
return 0;
}
Surprisingly (for me) the output was:
B constructor called
B::func called
Isn't this violating the private access set for that function. Is this the expected behavior? Is this is a standard workaround or loophole? Are access levels bypassed when resolving function calls through the VTABLE?
Any insight in to this behavior would be greatly helpful.
Further it was mentioned that a privately overriding a virtual member would prevent further classes from inheriting it. Even this is having problems. Modifying the above program to include:
class C: public B
{
public:
void func() {
cout<<"C::func called"<<endl;
}
};
and the main test program to:
int main()
{
A *a = new C();
a->func();
return 0;
}
output is:
C::func called
|
The behavior is correct. Whenever you declare your function as "virtual", you instruct the compiler to generate a virtual call, instead of the direct call to this function. Whenever you override the virtual function in the descendant class, you specify the behavior of this function (you do not change the access mode for those clients, who rely on the "parent's" interface).
Changing the access mode for the virtual function in the descendant class means that you want to hide it from those clients, who use the descendant class directly (who rely on the "child's" interface).
Consider the example:
void process(const A* object) {
object->func();
}
The "process" function relies on the parent's interface. It is expected to work for any class publicly derived from A. You cannot publicly derive B from A (saying "every B is an A") and yet hide a part of its interface. Those who expect an "A" must receive a fully functional "A".
|
1,061,889
| 1,062,378
|
lock-free memory reclamation with 64bit pointers
|
Herlihy and Shavit's book (The Art of Multiprocessor Programming) solution to memory reclamation uses Java's AtomicStampedReference<T>.
To write one in C++ for the x86_64 I imagine requires at least a 12 byte swap operation - 8 for a 64bit pointer and 4 for the int.
Is there x86 hardware support for this and if not, any pointers on how to do wait-free memory reclamation without it?
|
Yes, there is hardware support, though I don't know if it is exposed by C++ libraries. Anyway, if you don't mind doing some low-level unportable assembly language trickery - look up the CMPXCHG16B instruction in Intel manuals.
|
1,061,903
| 1,061,936
|
C++ : How to write a const_iterator?
|
I've written my own container template with an iterator. How do I implement const_iterator?
template <class T>
class my_container {
private:
...
public:
my_container() : ... { }
~my_container() { }
class iterator : public std::iterator<std::bidirectional_iterator_tag, T> {
public: ...
|
The only difference should be that when you de-reference a const iterator you get a const reference rather than a reference to the object in the container.
|
1,061,966
| 1,062,034
|
Expression: String iterator not dereferencable
|
I'm having a hard time using std::string::iterators in C++. This code compiles fine (still not getting correct output, but that's my fault: TODO, fix algorithm) in Dev-C++, and I don't get runtime errors. The error is with Visual Studio Express 2008 C++, where I'm getting an error pointing to <xstring>: "Expression: string iterator not dereferencable," and points to line 112 of the <xstring> file.
My debugging tells me I might be trying to dereference past the end of the sentence input, but I can't see where. Can anyone shed some light?
std::string wordWrap(std::string sentence, int width)
{
std::string::iterator it = sentence.begin();
//remember how long next word is
int nextWordLength = 0;
int distanceFromWidth = width;
while (it < sentence.end())
{
while (*it != ' ' && it != sentence.end())
{
nextWordLength++;
distanceFromWidth--;
it++;
}
if (nextWordLength > distanceFromWidth)
{
*it = '\n';
distanceFromWidth = width;
nextWordLength = 0;
}
//skip the space
it++;
}
return sentence;
}
|
Firstly, use operator!=() on iterators, not operator<():
while (it != sentence.end())
Secondly, this is backwards: while (*it != ' ' && it != sentence.end())
You do something with the iterator, than check if the iterator is valid. Rather, you should check if it's valid first:
while (it != sentence.end() && *it != ' ')
Thirdly, you should use ++iterator over iterator++, though this isn't related to your crashing.
Fourth, a main issue is here:
*it = '\n';
Because of the preceding check, while (it != sentence.end()), it's possible to reach that iterator dereference while at the end. A fix would be to do this:
if (it != sentence.end() && nextWordLength > distanceFromWidth)
So now if you have reached the end, you stop.
After fixing the previous issue, now the only problem is this:
//skip the space
++it;
This assumes that the character you are skipping is in fact a space. But what about the end of the string? Run this function with this string:
"a test string " // <- space at end
And it will succeed; it skips the space, putting the iterator at end(), the loop exits and success.
However, without the space it will crash, because you have reached the end, and are skipping past the end. To fix, add a check:
//skip the space
if (it != sentence.end())
{
++it;
}
Resulting in this final code:
std::string wordWrap(std::string sentence, int width)
{
std::string::iterator it = sentence.begin();
//remember how long next word is
int nextWordLength = 0;
int distanceFromWidth = width;
while (it != sentence.end())
{
while (it != sentence.end() && *it != ' ')
{
nextWordLength++;
distanceFromWidth--;
++it;
}
if (it != sentence.end() && nextWordLength > distanceFromWidth)
{
*it = '\n';
distanceFromWidth = width;
nextWordLength = 0;
}
//skip the space
if (it != sentence.end())
{
++it;
}
}
return sentence;
}
You might notice this seems like it has a lot of redundant checks. This can be fixed:
std::string wordWrap(std::string sentence, int width)
{
std::string::iterator it = sentence.begin();
//remember how long next word is
int nextWordLength = 0;
int distanceFromWidth = width;
while (it != sentence.end())
{
while (*it != ' ')
{
nextWordLength++;
distanceFromWidth--;
++it;
// check if done
if (it == sentence.end())
{
return sentence;
}
}
if (nextWordLength > distanceFromWidth)
{
*it = '\n';
distanceFromWidth = width;
nextWordLength = 0;
}
//skip the space
++it;
}
return sentence;
}
Hopefully that helps!
|
1,062,156
| 1,067,627
|
Error while opening shared object: SunGrid Engine
|
My application uses the Sun N1 grid engine through the API DRMAA present as shared object libdrmaa.so
.
I am using dlopen and dlsym to access functions of the library. That works fine. Now if I try to link
it from the command line, the executable is built but executing it gives the error "Cannot open shared object file".
Can anyone suggest what may be the reason.
I am using g++ 2.95.3 for compilation and the machine is linux x86_64.
Thanks
|
Your question and answer are both very confused: if you can link your executable directly against libdrmaa.so, then there is absolutely no good reason to also dlopen that same library (and presumably call dlsym() on its handle as well).
|
1,062,198
| 1,062,265
|
C++ library with a Java-like API
|
Hoping that anybody here knows about a good one: I'm looking for a (free to use) C++ library with a class hierarchy and methods resembling the Java API, with at least the I/O & networking part if it, specifically HTTP handling.
I work mainly with C & Java, but for this particular project C++ is recommended, so I thought of adopting a good set of C++ libraries without facing a steep learning curve.
Thanks in advance for any recommendation.
|
Qt is IMHO very Java-like, i.e. they prefer Java-style iterators over the STL ones. Qt includes networking (examples) and much other stuff (like scripting via JavaScript).
|
1,062,306
| 1,062,346
|
SetCurrentDirectory in multi-threaded application
|
I understand SetCurrentDirectory shouldn't be used in a multithreaded application since the current directory is shared between all threads in the process.
What is the best approach to setting the directory with this in mind.
It can mostly be avoided setting the directory by including the full pathname when opening files instead of first navigating to them with SetCurrentDirectory, but is this the only solution?
|
I've encountered this problem before.
Any object that needs the concept of a current directory to support relative paths or searching (e.g. a build tool) has a member property holding its "current" path, and builds the full path from it before any open/create/search call.
The initial value for CurrentPath can be retrieved once during the application's load phase, e.g. main(), WinMain(), DllInit(), etc. via GetCurrentDirectory and stored in a global. After that the Win32 version is ignored.
The OPENFILENAME structure has an initial directory member, so file open/save dialogs don't have to use the Win32 current directory.
|
1,062,478
| 1,074,661
|
QTreeView - Sort and Filter a model
|
I am trying to create a QTreeView which displays some sorted information. To do this I use a QSortFilterProxyModel between the view and my model.
The problem is that I want to limit the number of rows to the first n rows (after sorting). The filter function from the model receives the original sourceRow so I cannot use it.
I've tried chaining two QSortFilterProxyModel: the first for the sorting and the second for the filtering. But it seems that the second proxymodel(filtering) doesn't receive the rows sorted....
Is there another way to do it?
Has anyone use this technique(chaining of 2 proxy models) and it works?
thank you
EDIT:
I've tried with the rowCount and it doesn't work.
I've also tried to chain 2 proxy models but the problem is that the view calls the sort function for the model it receives. So if the first proxy sorts and the second filters the sort will be called on the filter model and the data won't be sorted.
EDIT2: I've looked into the qt source code and the filtering is done before sorting, so in the filterAcceptsRow() I don't know any sorting order.
|
After trying a number of overcomplicated ways to solve this I've done a small hack for my problem: after I insert/remove a row I call setRowHidden to hide the first n rows.
This is not the most elegant solution and is particular to my needs, but I was unable to find a better alternative.
I like to mention that on gtk, because the filter and the sort proxy models are separated, this can be done fairly easy.
I'm still hoping someone can provide a better solution to this.
|
1,062,601
| 1,062,647
|
How does the C++ STL vector template store its objects in the Visual Studio compiler implementation?
|
I am extending the Visual Studio 2003 debugger using autoexp.dat and a DLL to improve the way it displays data in the watch window. The main reason I am using a DLL rather than just the basic autoexp.dat functionality is that I want to be able to display things conditionally. e.g. I want to be able to say "If the name member is not an empty string, display name, otherwise display [some other member]"
I'm quite new to OOP and haven't got any experience with the STL. So it might be that I'm missing the obvious.
I'm having trouble displaying vector members because I don't know how to get the pointer to the memory the actual values are stored in.
Am I right in thinking the values are stored in a contiguous block of memory? And is there any way to get access to the pointer to that memory?
Thanks!
[edit:] To clarify my problem (I hope):
In my DLL, which is called by the debugger, I use a function called ReadDebuggeeMemory which makes a copy of the memory used by an object. It doesn't copy the memory the object points to. So I need to know the actual address value of the internal pointer in order to be able to call ReadDebuggeeMemory on that as well. At the moment, the usual methods of getting the vector contents are returning garbage because that memory hasn't been copied yet.
[update:]
I was getting garbage, even when I was looking at the correct pointer _Myfirst because I was creating an extra copy of the vector, when I should have been using a pointer to a vector. So the question then becomes: how do you get access to the pointer to the vector's memory via a pointer to the vector? Does that make sense?
|
The elements in a standard vector are allocated as one contiguous memory chunk.
You can get a pointer to the memory by taking the address of the first element, which can be done in a few ways:
std::vector<int> vec;
/* populate vec, e.g.: vec.resize(100); */
int* arr = vec.data(); // Method 1, C++11 and beyond.
int* arr = &vec[0]; // Method 2, the common way pre-C++11.
int* arr = &vec.front(); // Method 3, alternative to method 2.
However unless you need to pass the underlying array around to some old interfaces, generally you can just use the operators on vector directly.
Note that you can only access up to vec.size() elements of the returned value. Accessing beyond that is undefined behavior (even if you think there is capacity reserved for it).
If you had a pointer to a vector, you can do the same thing above just by dereferencing:
std::vector<int>* vecptr;
int* arr = vecptr->data(); // Method 1, C++11 and beyond.
int* arr = &(*vecptr)[0]; // Method 2, the common way pre-C++11.
int* arr = &vecptr->front(); // Method 3, alternative to method 2.
Better yet though, try to get a reference to it.
About your solution
You came up with the solution:
int* vMem = vec->_Myfirst;
The only time this will work is on that specific implementation of that specific compiler version. This is not standard, so this isn't guaranteed to work between compilers, or even different versions of your compiler.
It might seem okay if you're only developing on that single platform & compiler, but it's better to do it the standard way given the choice.
|
1,062,690
| 1,067,764
|
QtCreator performance on Windows
|
I've finally managed to run the QtCreator debugger on Windows after struggling with the Comodo Firewall incompatibilities.
I was hoping to switch from an older version of Qt and Visual C++ to the newest version of Qt and QtCreator, but the debugger performance is atrocious.
I have created a simple GUI with one window that does nothing else but display the window. After starting up QtCreator takes ~60MB RAM (Private bytes in Sysinternals process explorer).
When I start debugging, GDB is using 180MB. I start examining the main window pointer and it jumps to 313MB. Every time I try to inspect something, one of the cores jumps to 100% use and I have to wait for a few seconds for the information to show. This is just a toy program and I'm afraid that the real program that I want to switch will be much worse.
Is this kind of performance normal for MinGW? Would changing to the latest MinGW release improve things?
Visual C++ IDE + debugger + real-world program takes just close to 100MB of RAM and examining local variables is instantaneous.
|
Yesterday I built a copy of the Qt 4.5.2 libraries using MSVC 2008 and am using the QtCreator 1.2 MS CDB (Microsoft Console Debugger) support. It seems much faster than gdb. Building Qt for MSVC takes a few hours, but it might be worth trying.
Also, that means smaller Qt DLLs and EXEs as the MS compiler/linker is much better at removing unused code. Some of the Qt DLLs are less than half the size of their MinGW equivalents. Rumour has it that the C++ code the MS compiler generates is faster too.
|
1,062,748
| 1,062,809
|
How do I handle combinations of behaviours?
|
I am considering the problem of validating real numbers of various formats, because this is very similar to a problem I am facing in design.
Real numbers may come in different combinations of formats, for example:
1. with/without sign at the front
2. with/without a decimal point (if no decimal point, then perhaps number of decimals can be agreed beforehand)
3. base 10 or base 16
We need to allow for each combination, so there are 2x2x2=8 combinations. You can see that the complexity increases exponentially with each new condition imposed.
In OO design, you would normally allocate a class for each number format (e.g. in this case, we have 8 classes), and each class would have a separate validation function. However, with each new condition, you have to double the number of classes required and it soon becomes a nightmare.
In procedural programming, you use 3 flags (i.e. has_sign, has_decimal_point and number_base) to identify the property of the real number you are validating. You have a single function for validation. In there, you would use the flags to control its behaviour.
// This is part of the validation function
if (has_sign)
check_sign();
for (int i = 0; i < len; i++)
{
if (has_decimal_point)
// Check if number[i] is '.' and do something if it is. If not, continue
if (number_base == BASE10)
// number[i] must be between 0-9
else if (number_base == BASE16)
// number[i] must be between 0-9, A-F
}
Again, the complexity soon gets out of hand as the function becomes cluttered with if statements and flags.
I am sure that you have come across design problems of this nature before - a number of independent differences which result in difference in behaviour. I would be very interested to hear how have you been able to implement a solution without making the code completely unmaintainable.
Would something like the bridge pattern have helped?
|
In OO design, you would normally
allocate a class for each number
format (e.g. in this case, we have 8
classes), and each class would have a
separate validation function.
No no no no no. At most, you'd have a type for representing Numeric Input (in case String doesn't make it); another one for Real Number (in most languages you'd pick a built-in type, but anyway); and a Parser class, which has the knowledge to take a Numeric Input and transform it into a Real Number.
To be more general, one difference of behaviour in and of itself doesn't automatically map to one class. It can just be a property inside a class. Most importantly, behaviours should be treated orthogonally.
If (imagining that you write your own parser) you may have a sign or not, a decimal point or not, and hex or not, you have three independent sources of complexity and it would be ok to find three pieces of code, somewhere, that treat one of these issues each; but it would not be ok to find, anywhere, 2^3 = 8 different pieces of code that treat the different combinations in an explicit way.
Imagine that you add a new choice: suddenly, you remember that numbers might have an "e" (such as 2.34e10) and want to be able to support that. With the orthogonal strategy, you'll have one more independent source of complexity, the fourth one. With your strategy, the 8 cases would suddenly become 16! Clearly a no-no.
|
1,062,861
| 1,062,878
|
ios::nocreate error while compiling a C++ code
|
While, compiling a package, written in C++ on RHEL 5.0. I am getting the following error.
> error: nocreate is not a member of std::ios
The source-code corresponds to:
ifstream tempStr(argv[4],ios::in|ios::nocreate);
I have tried
#g++ -O -Wno-deprecated <file.cpp> -o <file>
as well as:
#g++ -O -o <file>
Please suggest a solution.
|
ios::nocreate is not part of standard C++ - what are you expecting it to do?
Edit: From a Google, it seems like it was intended to prevent the file being created if it doesn't already exist. This is the default for ifstreams anyway, so you can just say:
ifstream f( filename );
if ( ! f.is_open() ) {
// open failed - probably because input file does not exist
}
|
1,062,965
| 1,064,831
|
VirtualAlloc failing on certain configurations of hdd
|
For some reason on my application when running on some configuration of hard disk (RAID, random partitions, through a separate IDE controller, not on the operating systems partition), VirtualAlloc returns null with the GetLastError code of ERROR_INVALID_PARAMETER.
The only parameter apart from the flags (MEM_COMMIT | MEM_TOP_DOWN, PAGE_READWRITE) passed in is the size calculated as a multiple of the sector size obtained from the disk geometry (unfortunately I cant post code due to NDA etc).
Just wondering if anyone has any experience of this? or if anyone knows any reason why windows might treat page sizes differently on disks that are part of a raid or partitioned?, any help is greatly appreciated thanks!
|
The issue was with DeviceIoControl: IOCTL_DISK_GET_DRIVE_GEOMETRY seemed to return complete garbage when used with anything other than the primary hdd. I solved this by using GetDiskFreeSpace instead to get the sector information, and it is all working fine.
Obviously finding out why IOCTL_DISK_GET_DRIVE_GEOMETRY returns garbage would be ideal, but I'm happy that it works!
|
1,063,133
| 1,065,150
|
Usage of volatile specifier in C/C++/Java
|
While going through many resources on multithreaded programming, reference to volatile specifier usually comes up.
It is clear that usage of this keyword is not a reliable way to achieve synchronization between multiple threads, at least in C/C++ and Java (versions 1.4 and earlier).
Here is what Wikipedia lists (without explaining how) as typical usages of this specifier:
allow access to memory mapped devices
allow uses of variables between setjmp and longjmp
allow uses of variables in signal handlers
busy waiting
I can begin to see the role of this specifier in the above listed usages but since I don't yet have a complete understanding of each of these areas, I can't figure out how exactly this specifier behaves in each of these usages.
Could someone explain?
|
Since you're interested in those usage cases, I'll explain the first one. Note that this applies from a c/c++ perspective, not sure how it plays into java, although I suspect in general volatile in c/c++ and java are used for completely different cases.
Memory mapped devices are peripherals which the processor communicates with in the same manner as the memory rather than through a special bus.
Suppose you have a little light with a timer that is memory mapped. You turn on the light by writing 1 to its memory address & its internal timer counts down for 5 seconds & turns the light off and resets the memory location to 0. Now you are developing a c program that needs to turn that light on after certain events, and sometimes turn it off before the counter expires. If you use a regular variable (tends to be a pointer or a reference for this type of application) to write to its memory location, there are a number of things that might go wrong due to compiler optimizations.
If you aren't working with that many variables and you are turning the light on and shortly thereafter turning it off without any other variables using that value - sometimes the compiler will altogether get rid of the first assignment, or in other cases it will simply maintain the value in the processor registers & never write to memory. In both these cases, the light would never turn on since its memory was never changed.
Now think of another situation where you check the state of the light & it is on. Here, the value is extracted from the device's memory & kept in a processor register. Now, after a few seconds, the light turns off by itself. Shortly thereafter you try to turn the light on again, however since you read that memory address & haven't changed it since, the compiler assumes the value is still one & therefore never changes it, although it is actually 0 now.
By using the volatile key word, you prevent the compiler from making any of those assumptions when converting your code into machine code & ensures all those specific operations are performed strictly as written by the programmer. This is essential for memory mapped devices mostly because the memory location is not changed strictly by the processor. For these same reasons, multiprocessor systems with shared memory often require similar practices when operating on a common memory space.
|
1,063,147
| 1,063,191
|
Select() not Working in thread
|
I have to monitor a serial port and process its data. As a test program I was using select for just one port. The run function is as follows:
void <ProtocolClass>::run()
{
int fd = mPort->GetFileDescriptor();
fd_set readfs;
int maxfd=1;
int res;
FD_ZERO(&readfs);
FD_SET(fd,&readfs);
struct timeval Timeout;
Timeout.tv_usec=0;
Timeout.tv_sec=3;
//BYTE ack_message_frame[ACKNOWLEDGE_FRAME_SIZE];
while(true)
{
usleep(10);
res=select(maxfd,&readfs,NULL,NULL,NULL);
if(res<0)
perror("\nselect failed");
else if( res==0)
puts("TIMEOUT");
else if(FD_ISSET(fd,&readfs))
{//IF INPUT RECEIVED
qDebug("************RECEIVED DATA****************");
FlushBuf();
qDebug("\nReading data into a read buffer");
int bytes_read=mPort->ReadPort(mBuf,1000);
mFrameReceived=false;
for(int i=0;i<bytes_read;i++)
{
qDebug("%x",mBuf[i]);
}
//if complete frame has been received, write the acknowledge message frame to the port.
if(bytes_read>0)
{
qDebug("\nAbout to Process Received bytes");
ProcessReceivedBytes(mBuf,bytes_read);
qDebug("\n Processed Received bytes");
if(mFrameReceived)
{
int no_bytes=mPort->WritePort(mAcknowledgeMessage,ACKNOWLEDGE_FRAME_SIZE);
}//if frame received
}//if bytes read > 0
} //if input received
}//end while
}
But the problem is it doesn't seem to work, as nothing happens. Can somebody suggest the correct way to do it? I want to use select per thread. Is this feasible? Can you give me a sample code to do it? I have searched the net but the examples are very basic, involving just the main function. There are no C++ specific examples. I am using Qt threads, by the way.
Thanks
|
I think I know what the problem is.
FD_ZERO(&readfs);
FD_SET(fd,&readfs);
The above lines should be inside the while loop.
Because the select call will reset the 'fd' bit position in the readfs structure. So the next time select is called, it does not know which file descriptors to poll, as all are reset here.
The correction suggested by stefaanv should also be included
|
1,063,156
| 1,065,821
|
Is it possible to add CMFCToolBar to a dialog?
|
I just tried the standard way of adding a CToolBar to a dialog on the new CMFCToolBar, but it doesn't work. Before I dive into the new implementation, I want to know if it is actually possible?
|
I'm not sure what you mean by "the standard way", but you can certainly do it programatically:
// In MyDlg.h
class CMyDlg : public CDialog
{
...
CMFCToolBar m_ToolBar;
...
};
// In MyDlg.cpp
BOOL CMyDlg::OnInitDialog()
{
...
if( m_ToolBar.Create( this, AFX_DEFAULT_TOOLBAR_STYLE, 100 ) )
{
m_ToolBar.SetPaneStyle( m_ToolBar.GetPaneStyle()
& ~(CBRS_GRIPPER | CBRS_SIZE_DYNAMIC | CBRS_BORDER_ANY) );
m_ToolBar.InsertButton( CMFCToolBarButton( ID_APP_ABOUT, -1, _T("About") ) );
m_ToolBar.InsertButton( CMFCToolBarButton( ID_APP_EXIT, -1, _T("Exit") ) );
CSize sizeToolBar = m_ToolBar.CalcFixedLayout( FALSE, TRUE );
m_ToolBar.SetWindowPos( NULL, 0, 0, sizeToolBar.cx, sizeToolBar.cy,
SWP_NOACTIVATE | SWP_NOZORDER );
}
...
}
|
1,063,453
| 1,063,492
|
How can I display the content of a map on the console?
|
I have a map declared as follows:
map<string, list<string> > mapex; list<string> li;
How can I display the items stored in the above map on the console?
|
Well it depends on how you want to display them, but you can always iterate them easily:
typedef map<string, list<string>>::const_iterator MapIterator;
for (MapIterator iter = mapex.begin(); iter != mapex.end(); iter++)
{
cout << "Key: " << iter->first << endl << "Values:" << endl;
typedef list<string>::const_iterator ListIterator;
for (ListIterator list_iter = iter->second.begin(); list_iter != iter->second.end(); list_iter++)
cout << " " << *list_iter << endl;
}
|
1,063,733
| 1,063,985
|
QtCreator project with multiple libs and one exe
|
Like I said in a previous question, I'm planning on porting a Qt project from VC++ to QtCreator.
The project consists of 4 subprojects: 3 of them are libs and the last one makes the exe.
The subprojects are layered, meaning that each layer has compile and link time dependencies to the layers beneath it.
The files are organized as follows:
root
lib1
lib2
lib3
main
I have created each of the subprojects in QtCreator from scratch. Each folder contains a pro file with the same name (e.g: lib1 -> lib1.pro). Using this approach I have built all the projects except the last one which must link the executable. I was getting linker errors which I fixed by manually editing the pro file and adding the code below:
LIBS += ../path/lib1.a \
../path/lib2.a \
../path/lib3.a
All is fine, except now lib3 complains that it can't find the symbols from lib2 and lib1. Adding LIBS to the lib2.pro file doesn't work.
I have a feeling that I'm not getting the way QtCreator works with libs and executables. How would you organize such a project so that it compiles and links correctly?
|
After working on the problem some more, I remembered that for the GCC tool chain the order of the linker parameters is important.
Reversing the LIBS entries did the trick.
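For reference, with the layering described in the question (main depends on lib3, which depends on lib2, which depends on lib1), GNU ld wants each library listed before the libraries it depends on, i.e. highest layer first. The paths below are placeholders:

```
# main.pro -- hypothetical paths; order matters for GNU ld
LIBS += ../lib3/lib3.a \
        ../lib2/lib2.a \
        ../lib1/lib1.a
```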
|
1,063,809
| 1,063,893
|
Aligned and unaligned memory accesses?
|
What is the difference between aligned and unaligned memory access?
I work on an TMS320C64x DSP, and I want to use the intrinsic functions (C functions for assembly instructions) and it has
ushort & _amem2(void *ptr);
ushort & _mem2(void *ptr);
where _amem2 does an aligned access of 2 bytes and _mem2 does unaligned access.
When should I use which?
|
An aligned memory access means that the pointer (as an integer) is a multiple of a type-specific value called the alignment. The alignment is the natural address multiple where the type must be, or should be stored (e.g. for performance reasons) on a CPU. For example, a CPU might require that all two-byte loads or stores are done through addresses that are multiples of two. For small primitive types (under 4 bytes), the alignment is almost always the size of the type. For structs, the alignment is usually the maximum alignment of any member.
The C compiler always puts variables that you declare at addresses which satisfy the "correct" alignment. So if ptr points to e.g. a uint16_t variable, it will be aligned and you can use _amem2. You need to use _mem2 only if you are accessing e.g. a packed byte array received via I/O, or bytes in the middle of a string.
|
1,064,248
| 1,064,277
|
Correct way to initialize an object with exception throwing constructor
|
This seems to be a trivial question but I got hung on it for a few hours now (maybe too much Java killed my C++ braincells).
I have created a class that has the following constructor (i.e. no default constructor)
VACaptureSource::VACaptureSource( std::string inputType, std::string inputLocation ) {
if( inputType == "" || inputLocation == "" ) {
throw std::invalid_argument("Empty type or location in VACaptureSource()");
}
type = inputType;
location = inputLocation;
// Open the given media source using the appropriate OpenCV function.
if( type.compare("image") == 0 ) {
frame = cvLoadImage( location.c_str() );
if( !frame ) {
throw std::runtime_error("error opening file");
}
}
else {
throw std::invalid_argument("Unknown input type in VACaptureSource()");
}
}
When I want to create an instance, I use
// Create input data object
try {
VACaptureSource input = VACaptureSource("image", "/home/cuneyt/workspace/testmedia/face_images/jhumpa_1.jpg");
}
catch( invalid_argument& ia ) {
cerr << "FD Error: " << ia.what() << endl;
usage(argv[0]);
}
catch( runtime_error& re ) {
cerr << "FD Error: " << re.what() << endl;
usage(argv[0]);
}
However, in this case the instance is local to this block and I can't refer to it anywhere else. On the other hand, I can't say
VACaptureSource input;
at the beginning of the program since there's no default constructor.
What is the correct way to do this?
Thanks!
|
What about using a pointer (or some RAII version thereof)?
VACaptureSource* input = NULL;
try {
input = new VACaptureSource(...);
} catch(...) {
//error handling
}
//And, of course, at the end of the program
delete input;
|
1,064,325
| 1,064,388
|
Why not use pointers for everything in C++?
|
Suppose that I define some class:
class Pixel {
public:
Pixel(){ x=0; y=0;};
int x;
int y;
};
Then write some code using it. Why would I do the following?
Pixel p;
p.x = 2;
p.y = 5;
Coming from a Java world I always write:
Pixel* p = new Pixel();
p->x = 2;
p->y = 5;
They basically do the same thing, right?
One is on the stack while the other is on the heap, so I'll have to delete it later on. Is there any fundamental difference between the two? Why should I prefer one over the other?
|
Yes, one is on the stack, the other on the heap. There are two important differences:
First, the obvious, and less important one: Heap allocations are slow. Stack allocations are fast.
Second, and much more important is RAII. Because the stack-allocated version is automatically cleaned up, it is useful. Its destructor is automatically called, which allows you to guarantee that any resources allocated by the class get cleaned up. This is essentially how you avoid memory leaks in C++. You avoid them by never calling delete yourself, instead wrapping it in stack-allocated objects which call delete internally, typically in their destructor. If you attempt to manually keep track of all allocations, and call delete at the right times, I guarantee you that you'll have at least a memory leak per 100 lines of code.
As a small example, consider this code:
class Pixel {
public:
Pixel(){ x=0; y=0;};
int x;
int y;
};
void foo() {
Pixel* p = new Pixel();
p->x = 2;
p->y = 5;
bar();
delete p;
}
Pretty innocent code, right? We create a pixel, then we call some unrelated function, and then we delete the pixel. Is there a memory leak?
And the answer is "possibly". What happens if bar throws an exception? delete never gets called, the pixel is never deleted, and we leak memory. Now consider this:
void foo() {
Pixel p;
p.x = 2;
p.y = 5;
bar();
}
This won't leak memory. Of course in this simple case, everything is on the stack, so it gets cleaned up automatically, but even if the Pixel class had made a dynamic allocation internally, that wouldn't leak either. The Pixel class would simply be given a destructor that deletes it, and this destructor would be called no matter how we leave the foo function. Even if we leave it because bar threw an exception. The following, slightly contrived example shows this:
class Pixel {
public:
Pixel(){ x=new int(0); y=new int(0);};
int* x;
int* y;
~Pixel() {
delete x;
delete y;
}
};
void foo() {
Pixel p;
*p.x = 2;
*p.y = 5;
bar();
}
The Pixel class now internally allocates some heap memory, but its destructor takes care of cleaning it up, so when using the class, we don't have to worry about it. (I should probably mention that the last example here is simplified a lot, in order to show the general principle. If we were to actually use this class, it contains several possible errors too. If the allocation of y fails, x never gets freed, and if the Pixel gets copied, we end up with both instances trying to delete the same data. So take the final example here with a grain of salt. Real-world code is a bit trickier, but it shows the general idea)
Of course the same technique can be extended to other resources than memory allocations. For example it can be used to guarantee that files or database connections are closed after use, or that synchronization locks for your threading code are released.
|
1,065,011
| 1,065,191
|
Why is this boost header file not included
|
I'm building my c++ program with cmake on a Mac. The compiler gives me following Error:
error: boost/filesystem.hpp: No such file or directory
The line that triggers the error is the following:
#include "boost/filesystem.hpp"
or
#include <boost/filesystem.hpp>
Which of the above I use doesn't changed the Error
But in my CMakeLists.txt I include the boost headers in the following way:
FIND_PACKAGE(Boost)
MESSAGE("Boost information:")
MESSAGE(" Boost_INCLUDE_DIRS: ${Boost_INCLUDE_DIRS}")
MESSAGE(" Boost_LIBRARIES: ${Boost_LIBRARIES}")
MESSAGE(" Boost_LIBRARY_DIRS: ${Boost_LIBRARY_DIRS}")
INCLUDE_DIRECTORIES(${Boost_INCLUDE_DIRS})
LINK_DIRECTORIES(${Boost_LIBRARY_DIRS})
Boost include dirs is filled with "/opt/local/include/" during the cmake process and this folder contains a folder boost which contains the filesystem.hpp
Boost gives the following messages while generating the Makefile, I only copied the boost part:
-- Boost version: 1.38.0
-- Found the following Boost libraries:
Boost information:
Boost_INCLUDE_DIRS: /opt/local/include
Boost_LIBRARIES:
Boost_LIBRARY_DIRS: /opt/local/lib
-- Configuring done
While running make VERBOSE=1 This line contains the error:
cd /Users/janusz/Documents/workspace/ImageMarker/Debug/src &&
/usr/bin/c++ -O3 -Wall -Wno-deprecated -g -verbose -I/Users/janusz/Documents/workspace/ImageMarker/src/. -o CMakeFiles/ImageMarker.dir/FaceRecognizer.cpp.o -c /Users/janusz/Documents/workspace/ImageMarker/src/FaceRecognizer.cpp
/Users/janusz/Documents/workspace/ImageMarker/src/FaceRecognizer.cpp:8:32: error: boost/filesystem.hpp: No such file or directory
make[2]: *** [src/CMakeFiles/ImageMarker.dir/FaceRecognizer.cpp.o] Error 1
Do you understand why the compiler isn't picking the /opt/local/include directory?
If you need more information I'm happy to provide it
|
First of all use
FIND_PACKAGE(Boost REQUIRED)
rather than
FIND_PACKAGE(Boost)
This way cmake will give you a nice error message if it doesn't find it, long before any compilations are started. If it fails set the environment variable BOOST_ROOT to /opt/local (which is the install prefix).
Additionally you will have to link in the filesystem library, so you want
FIND_PACKAGE(Boost COMPONENTS filesystem REQUIRED)
for later use of
target_link_libraries(mytarget ${Boost_FILESYSTEM_LIBRARY})
Enter
cmake --help-module FindBoost
at the shell to get the docs for the Boost find module in your cmake installation.
PS: An example
The CMakeLists.txt
cmake_minimum_required(VERSION 2.6)
project(Foo)
find_package(Boost COMPONENTS filesystem REQUIRED)
include_directories(${Boost_INCLUDE_DIRS})
add_executable(foo main.cpp)
target_link_libraries(foo
${Boost_FILESYSTEM_LIBRARY}
)
main.cpp
#include <boost/filesystem.hpp>
#include <vector>
#include <string>
#include <cstdio>
#include <cstddef>
namespace fs = boost::filesystem;
using namespace std;
int main(int argc, char** argv)
{
vector<string> args(argv+1, argv+argc);
if(args.empty())
{
printf("usage: ./foo SOME_PATH\n");
return EXIT_FAILURE;
}
fs::path path(args.front());
if(fs::exists(path))
printf("%s exists\n", path.string().c_str());
else
printf("%s doesn't exist\n", path.string().c_str());
return EXIT_SUCCESS;
}
|
1,065,054
| 1,065,171
|
C++ Process Management
|
Is there a well-known, portable, good library for C++ process management?
I found a promising library called Boost.Process, but it's only a candidate for inclusion in the Boost library. Has anyone used this? Does anyone know why it isn't a part of Boost?
|
How much management do you need? Just fork/exec? IPC? Resource management? Security contexts and process isolation?
I haven't used the Boost.Process library. However, I do know that getting included in Boost is a rather difficult affair. Boost recently accepted a futures library that had already been approved as part of the standard. However, getting into Boost wasn't a foregone conclusion. Another library recently did not make the cut. And although I think the criticisms are valid, I personally would be willing to use that library.
|
1,065,211
| 1,066,423
|
How can I use a class from a header file in a source file using extern but not #include?
|
If I have a class in outside.h like:
class Outside
{
public:
Outside(int count);
int GetCount();
};
How can I use it in framework.cpp using the extern keyword, where I need to instantiate the class and call GetCount?
Edit:
#include is not allowed.
|
Just to clarify. It is impossible to extern the class:
class Outside
{
public:
Outside(int count);
int GetCount();
};
But, once you have the class available in framework.cpp, you CAN extern an object of type Outside. You'll need a .cpp file declaring that variable:
#include "outside.h"
Outside outside(5);
And then you can refer to that object in another file via extern (as long as you link in the correct object file when you compile your project):
#include "outside.h"
extern Outside outside;
int current_count = outside.GetCount();
extern is used to say "I KNOW a variable of this type with this name will exist when this program runs, and I want to use it." It works with variables/objects, not classes, structs, unions, typedefs, etc. It's not much different from static objects.
You may be thinking about forward declaring classes to cut down on compile times, but there are restrictions on that (you only get to use the objects as opaque pointers and are not able to call methods on them).
You may also mean to hide the implementation of Outside from users. In order to do that, you're going to want to read up on the PIMPL pattern.
Update
One possibility would be to add a free function to Outside.h (I've also added a namespace):
namespace X {
class Outside {
int count_;
public:
Outside(int count) : count_(count) { }
int GetCount()
{
return count_;
}
};
int GetOutsideCount(Outside* o);
}
Implement that function in a .cpp file. While you're at it, you might as well make the global variable that you intend to extern (note, the variable itself does not need to be a pointer):
#include "outside.h"
namespace X {
int GetOutsideCount(Outside* o)
{
return o->GetCount();
}
}
X::Outside outside(5);
And then do this in your program (note that you cannot call any methods on outside because you did not include outside.h and you don't want to violate the one definition rule by adding a new definition of the class or those methods; but since the definitions are unavailable you'll need to pass pointers to outside around and not outside itself):
namespace X {
    class Outside;
    int GetOutsideCount(Outside* o);
}

extern X::Outside outside;

int main()
{
    int current_count = GetOutsideCount(&outside);
}
I consider this an abomination, to put it mildly. Your program will find the GetOutsideCount function, call it by passing it an Outside*. Outside::GetCount is actually compiled to a normal function that takes a secret Outside object (inside Outside::GetCount that object is referred to via the this pointer), so GetOutsideCount will find that function, and tell it to dereference the Outside* that was passed to GetOutsideCount. I think that's called "going the long way 'round."
But it is what it is.
If you aren't married to using the extern keyword, you can instead go full "let's use C++ like it's C" mode by adding the following two functions in the same way (i.e., via forward declarations, implemented right next to int GetOutsideCount()):
Outside* CreateOutsidePointer(int count)
{
    return new Outside(count);
}

void DestroyOutsidePointer(Outside* o)
{
    delete o;
}
I'm more willing to swallow that. It's a lot like the strategy used by the APR.
|
1,065,623
| 1,069,085
|
Compile Cygwin project in Eclipse
|
I have a C++ project that was compiled with the Cygwin toolchain; now I want to use Eclipse to compile and test it.
If I create a project (the Cygwin toolchain is set in the options), I get the error:
make: *** No rule to make target `all'. 7wWin line 0 C/C++ Problem
In Cygwin I use:
cd $BUILDDIR
make
make install
Can Eclipse create its own makefile? And how do I set that up?
Even better would be a good tutorial on how to compile a Cygwin C++ project with Eclipse.
|
Check the following pages:
http://homepage.cs.uri.edu/courses/fall2007/csc406/Handouts/eclipseTutorial.pdf
http://wikimix.blogspot.com/2006/11/using-eclipse-as-c-development_05.html
http://www.benjaminarai.com/benjamin_arai/index.php?display=/eclipsecygwingcc.php
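As for the error itself: "No rule to make target `all'" means make was invoked in a directory with no makefile (or no `all` target). If you use a "Makefile project" in Eclipse rather than a managed build, you need a makefile providing the `all` and `clean` targets Eclipse invokes by default. A minimal sketch (file names, target name, and flags below are placeholders, not from the question):

```make
# Minimal Makefile sketch for Eclipse's default "make all" / "make clean".
CXX      = g++
CXXFLAGS = -Wall -g
TARGET   = myapp
SRCS     = main.cpp
OBJS     = $(SRCS:.cpp=.o)

all: $(TARGET)

$(TARGET): $(OBJS)
	$(CXX) $(CXXFLAGS) -o $@ $^

clean:
	rm -f $(OBJS) $(TARGET)
```

Alternatively, a "Managed Make" CDT project lets Eclipse generate the makefile for you from the project settings.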
|